RAID 5 4K Random Read/Write
We precondition both 64K-stripe RAID 5 arrays, one with eight and one with four 200GB Intel DC S3700 SSDs, for 18,000 seconds (five hours), logging several workload performance parameters every second. We plot this data to illustrate the arrays' descent into steady state.
This chart consists of 18,000 data points, one for each second of the test. The dots signify IOPS performance each second, and the line through the scatter represents the average performance during the test. This type of testing presents standard deviation and maximum/minimum I/O in a visual manner.
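As a rough illustration, the per-second scatter reduces to the same statistics the charts convey visually. A minimal Python sketch (the sample values below are invented, not measured results):

```python
# Hypothetical sketch: summarizing a second-by-second IOPS log the way the
# scatter charts do (average line, plus standard deviation and min/max).
import statistics

def summarize_iops(samples):
    """Return the statistics the charts visualize for one test run."""
    return {
        "avg": statistics.mean(samples),
        "stdev": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }

# Invented one-second IOPS samples for illustration only:
one_second_iops = [478_900, 480_200, 479_500, 481_000, 479_400]
print(summarize_iops(one_second_iops))
```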
High-granularity testing gives our readers a good feel for the latency distribution by presenting IOPS at one-second intervals; keep this in mind when viewing the test results below. We also provide latency charts for further granularity.
This downward slope in performance occurs only a few times in the lifetime of the device, typically during the first few hours of use; we present the preconditioning results only to confirm convergence to steady state.
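Convergence to steady state can also be checked programmatically. The sketch below is loosely modeled on the SNIA Performance Test Specification idea that a window of results is "steady" once every sample stays close to the window average; the 10% tolerance here is our assumption for illustration, not the criterion used in this testing:

```python
# Hedged sketch: a simple steady-state check over a window of per-second
# IOPS samples. A window is "steady" when no sample deviates from the
# window average by more than the tolerance (10% assumed here).

def is_steady(window, tolerance=0.10):
    avg = sum(window) / len(window)
    return all(abs(x - avg) <= tolerance * avg for x in window)

# A declining preconditioning window vs. a settled one (invented values):
print(is_steady([900_000, 700_000, 500_000, 300_000]))  # not steady
print(is_steady([480_000, 479_000, 481_000, 480_500]))  # steady
```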
Each QD for every parameter tested includes 300 data points (five minutes of one-second reports) to illustrate the degree of performance variability. The line for each QD represents the average speed over the five-minute interval.
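The per-QD averaging described above amounts to collapsing each queue depth's 300-sample window into a single point. A minimal sketch (queue depths and samples are invented):

```python
# Sketch: each QD's line value is the mean of its five-minute
# (300-sample) window of one-second IOPS reports.

def qd_averages(results):
    """results maps queue depth -> list of one-second IOPS samples."""
    return {qd: sum(samples) / len(samples) for qd, samples in results.items()}

# Shortened invented windows for illustration (real windows hold 300 samples):
example = {1: [24_000, 24_200], 2: [46_000, 46_400]}
print(qd_averages(example))  # {1: 24100.0, 2: 46200.0}
```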
4K random speed measurements are an important metric when comparing drive performance; small-file random access is the hardest type of workload for any storage solution to master. 4K random performance is one of the most sought-after specifications and a heavily marketed figure.
The 8-drive array peaks at an average of 479,810 IOPS at QD256, while the 4-drive array reaches 249,151 IOPS.
The sweet spot for the 4-drive array occurs at QD128, while the larger array performs best with QD256.
Garbage collection routines are more pronounced in heavy write workloads, which leads to more variability in performance.
The 8-drive Intel DC S3700 array mirrors the smaller array's behavior: performance peaks at QD64 and falls at higher queue depths, a limitation likely imposed by the RAID 5 algorithms. The large array peaks at 167,255 IOPS at QD64, and the 4-drive array at 24,109 IOPS.
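The write ceiling both arrays run into is consistent with RAID 5's classic small-write penalty: each random host write triggers a read-modify-write (read old data and old parity, write new data and new parity), or four back-end I/Os per front-end write. A back-of-the-envelope sketch (the per-drive IOPS figure is invented, not a measured S3700 value):

```python
# Sketch of the RAID 5 small-write penalty: four back-end I/Os per
# front-end random write caps the array's effective write IOPS at
# roughly (drives * per-drive write IOPS) / 4.

def raid5_write_iops(drives, per_drive_write_iops, penalty=4):
    return drives * per_drive_write_iops / penalty

# Invented per-drive figure, for illustration only:
print(raid5_write_iops(8, 30_000))  # 60000.0
print(raid5_write_iops(4, 30_000))  # 30000.0
```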
Write latency worsens considerably for both arrays once we cross the QD64 threshold; there is no doubt that QD64 provides the optimum results in these configurations.