
Crucial MX100 512GB Two-Drive SSD RAID Report

By: Jon Coulter | RAID in Storage | Posted: Sep 3, 2014 1:13 pm

Futuremark PCMark 8 Extended - Consistency Test


Heavy Usage Model


We consider PCMark 8's Consistency test to be our heavy usage model test. This is the usage model most enthusiasts, heavy-duty gamers, and professionals fall into. If you do a lot of gaming, audio/video processing, rendering, or have workloads of this nature, then this test will be the most relevant to you.


PCMark 8 has built-in, command-line-executed storage testing. The PCMark 8 Consistency test measures the performance consistency and degradation tendency of a storage system.


The storage test workloads are repeated. Between each repetition, the storage system is bombarded with writes that degrade drive performance. In the first part of the test, the cycle continues until a steady degraded level of performance is reached (steady state).


In the second part, the recovery of the system is tested by allowing it to idle and measuring performance at long intervals (TRIM).


The test reports the performance level at the start, at the degraded steady state, and at the recovered state, as well as the number of iterations required to reach the degraded and recovered states.


We feel Futuremark's Consistency Test is the best test ever devised to show the true performance of solid state storage in a heavy usage scenario. This test takes an average of 13 to 17 hours to complete and writes somewhere between 450GB and 13,600GB of test data, depending on the drive(s) being tested. If you want to know what an SSD's performance will look like after a few months or years of heavy usage, this test will show you.


Here's a breakdown of Futuremark's Consistency Test:


Precondition phase:


1. Write the drive sequentially through its reported capacity with random data.

2. Write through the drive a second time (to take care of overprovisioning).


Degradation phase:


1. Run writes of random size between 8*512 and 2048*512 bytes (4KB to 1MB) on random offsets for ten minutes.

2. Run performance test (one pass only).

3. Repeat steps one and two eight times, increasing the duration of the random writes by five minutes on each pass.


Steady state phase:


1. Run writes of random size between 8*512 and 2048*512 bytes (4KB to 1MB) on random offsets for 50 minutes.

2. Run performance test (one pass only).

3. Repeat steps one and two five times.


Recovery phase:


1. Idle for five minutes.

2. Run performance test (one pass only).

3. Repeat steps one and two five times.
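The degradation and steady state write pattern described above can be sketched as follows. This is an illustrative simulation in Python, not PCMark 8's actual implementation; the 512GB capacity is just an example, and the sector-alignment assumption is ours:

```python
import random

SECTOR = 512
MIN_WRITE = 8 * SECTOR      # 4KB, the smallest write in the pattern
MAX_WRITE = 2048 * SECTOR   # 1MB, the largest write in the pattern

def random_write(capacity_bytes, rng=random):
    """Pick one write for the degradation workload: a random,
    sector-granular size between 8*512 and 2048*512 bytes at a
    random sector-aligned offset within the drive."""
    size = rng.randrange(MIN_WRITE, MAX_WRITE + SECTOR, SECTOR)
    offset = rng.randrange(0, capacity_bytes - size + 1, SECTOR)
    return offset, size

# One example write against a hypothetical 512GB drive
offset, size = random_write(512 * 10**9)
print(offset, size)
```

Because sizes and offsets never line up with what the flash translation layer has already mapped, this stream of writes forces constant garbage collection, which is exactly what drags the drive down to its degraded state.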



Storage Bandwidth


PCMark 8's Consistency test provides a ton of data output that we can use to judge a drive/array's performance.




We consider steady state bandwidth (the blue bar) to be the test that carries the most weight in ranking a drive/array's performance. The reason we weight steady state performance above TRIM is that when you are running a heavy-duty workload, TRIM does not occur while that workload is executing. TRIM performance (the orange and red bars) is what we consider the second most important aspect when ranking a drive/array's performance. Trace-based consistency testing is where true high-performing SSDs are separated from the rest of the pack.


Mirroring what we saw from Crucial's more expensive M550, the MX100 512GB shifts from mediocre in this test as a single drive to beast mode in RAID. This is what we love about IMFT flash; it scales incredibly well. This is our first look at 16nm IMFT flash, and as this test, the most brutal of them all, demonstrates, Micron's 16nm flash has not lost any performance in the die shrink from 20nm.


Although the Toshiba flash-based arrays on our chart all edge out our MX100 array, I anticipate that a three-drive array of MX100 512GB SSDs would produce quite a different outcome. The reason I anticipate this is that I believe the MX100 will keep scaling better than the Toshiba-based drives, with the possible exception of Toshiba's Q Series Pro. If I can talk Chris Ramseyer out of his MX100 512GB, we will have a look for ourselves.


Notice how all of our IMFT flash-based arrays have far better TRIM performance than the Toshiba flash-based arrays? This has turned out to be a very good indicator of how well our arrays will scale as we add drives.





We chart our test subjects' storage bandwidth as reported at each of the test's 18 trace iterations. This gives us a good visual perspective of how our test subjects perform as testing progresses.



Total Access Time (Latency)


Access time is the delay, or latency, between a request to an electronic system and the moment the action is completed or the requested data is returned. Essentially, access time is how long it takes to get data back from the disk. We chart the total time the disk is accessed as reported at each of the test's 18 trace iterations.
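As a rough illustration (this is not how PCMark 8 instruments its traces), the access time of a single request can be measured by timing the round trip of one read; the scratch file here is a stand-in for the device under test, so the number it reports reflects the OS cache rather than real disk latency:

```python
import os
import tempfile
import time

def timed_read(path, offset, length):
    """Time one read request: the delay between issuing the request
    and getting the data back is the access time (latency)."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        f.seek(offset)
        data = f.read(length)
        elapsed = time.perf_counter() - start
    return elapsed, data

# Demo against a 1MB scratch file (served from cache, so the figure is tiny).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1024 * 1024))
    path = tmp.name
latency, data = timed_read(path, 4096, 4096)
os.unlink(path)
print(f"access time: {latency * 1e6:.1f} microseconds for {len(data)} bytes")
```

The benchmark effectively accumulates this per-request delay across every I/O in a trace, which is what the "total access time" charts below report.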




This is a great visual representation of what RAID brings to the table. A quick look at Steady State One shows that when you RAID two MX100s, latency under load drops by as much as 5.5x compared to a single MX100. Surprisingly, we find our MX100 array outperforms our M550 array, delivering lower latency for almost the entire test.



Disk Busy Time


Disk Busy Time is how long the disk is busy working. We chart the total time the disk is working as reported at each of the test's 18 trace iterations.




When latency is low, disk busy time is low as well. In steady state, an MX100 array spends up to 3.6x less time working than a single MX100 on the exact same workload.



Data Written


We measure the total amount of random data the drives/arrays can write during the degradation phases of the consistency test. The total combined time that degradation data is written to the drive/array is 470 minutes. This can be very telling: the better the drive/array can process a continuous stream of random data, the more data is written.
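The 470-minute figure follows from the phase durations listed earlier, reading the degradation phase as eight passes in total (growing from 10 to 45 minutes) plus five 50-minute steady state passes:

```python
# Degradation phase: eight passes, starting at 10 minutes and
# growing by 5 minutes each pass (10, 15, ..., 45).
degradation = [10 + 5 * i for i in range(8)]

# Steady state phase: five passes of 50 minutes each.
steady_state = [50] * 5

total_minutes = sum(degradation) + sum(steady_state)
print(total_minutes)  # 470
```

Since every drive is hammered for the same 470 minutes, the only variable is how much data each one can absorb in that window, which is why the data-written chart separates drives so cleanly.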




Our MX100 array has lower latency than our M550 array, and as such, can write more random data in the same span of time. Drives/arrays like Seagate's awesome 600 Pro, with its enterprise pedigree, can write many times more data than consumer-based drives/arrays in this test. This is directly attributable to lower latency.


Gratuitous Benchmarking


This is where we show you what our array's performance looks like when powered by the fastest operating system for SATA-based storage ever made: Windows Server 2008. The hardware is exactly the same; only the OS has changed.








4K QD1 write. 168,000 IOPS is just smoking fast!


You can't get performance like this from Windows 8 or 8.1. You can get very close with Windows 7, but nothing performs quite as well as Server 2008 when it comes to SATA-based storage. 4K write performance is vastly superior on Server 2008 and Windows 7 compared to Windows 8 or 8.1.
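For a sense of scale, converting that 168,000 IOPS result to bandwidth (assuming 4KB here means 4,096 bytes per I/O, the usual convention for 4K random write tests) works out to roughly 688 MB/s of small-block random writes:

```python
iops = 168_000          # the 4K QD1 write result above
block_bytes = 4 * 1024  # 4KB per I/O, assumed to mean 4,096 bytes

bandwidth_mb_s = iops * block_bytes / 1e6
print(f"{bandwidth_mb_s:.0f} MB/s")  # 688 MB/s
```

That is random-write throughput in the same neighborhood as many drives' sequential numbers, which is what makes the figure so striking for a queue depth of one.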

