4K Random Read/Write
We preconditioned the WarpDrive with a heavy 4K random write workload for 18,000 seconds (five hours). Every second we receive reports on several parameters of workload performance, and we then plot this data to illustrate the drive's descent into steady state.
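To make the idea of "reaching steady state" concrete, here is a hypothetical sketch of a simplified convergence check on the per-second IOPS reports, loosely modeled on the common SNIA PTS-style convention that all samples in the measurement window stay within a fixed tolerance of the window average. The window size and tolerance are assumptions, not the values used in our testing.

```python
def is_steady_state(iops_samples, window=600, tolerance=0.10):
    """Return True if the last `window` one-second samples stay
    within +/- `tolerance` of their own average."""
    if len(iops_samples) < window:
        return False
    tail = iops_samples[-window:]
    avg = sum(tail) / len(tail)
    return all(abs(s - avg) <= tolerance * avg for s in tail)

# Invented data: a descending ramp has not converged, a flat tail has.
ramp = [40000 - 20 * t for t in range(1200)]  # still descending
flat = [20000 + (t % 3) for t in range(1200)]  # settled
print(is_steady_state(ramp))  # False
print(is_steady_state(flat))  # True
```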
This chart consists of 36,000 data points. The red dots signify the IOPS during the test, and the blue dots are the latency encountered during the test period. We place the latency data on a logarithmic scale to bring it into comparison range. This is a dual-axis chart, with IOPS on the left and latency on the right. The lines through the data scatter are a moving average taken during the test. This type of testing presents standard deviation and maximum I/O in a visual manner.
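The trend line through the scatter can be sketched as a simple trailing moving average over the per-second reports. This is an illustrative example, not the review's actual charting tooling; the 60-second default window is an assumption.

```python
def moving_average(samples, window=60):
    """Trailing moving average over per-second samples; early points
    average whatever history is available so far."""
    out = []
    total = 0.0
    for i, s in enumerate(samples):
        total += s
        if i >= window:
            total -= samples[i - window]  # drop the sample leaving the window
        out.append(total / min(i + 1, window))
    return out

# Invented per-second IOPS reports for illustration:
iops = [18000, 17500, 17900, 18200, 17600]
print(moving_average(iops, window=3))
```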
Note that the IOPS and latency figures are nearly mirror images of each other. This illustrates that the scatter testing can give our readers a good feel for the latency distribution by viewing the IOPS at one-second intervals. This should be kept in mind when viewing our read and write results below.
We provide histograms below for further latency granularity. This descent in performance happens very few times in the lifetime of the device, and we present these test results only to confirm that the device has reached steady state convergence.
Each QD for each parameter tested includes 300 data points (five minutes of one second reports) to illustrate the degree of performance variability. Dark blue data points are incompressible data, red data points are 80% compressible data, and the light blue data points signify 100% compressible data. This key is to the left of the axis. The line for each QD represents the average speed reported during the five-minute interval.
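The average line described above can be derived by simply averaging the 300 one-second reports captured at each queue depth. A minimal sketch, with invented sample values (and shortened sample lists) purely for illustration:

```python
def qd_averages(reports):
    """reports: dict mapping queue depth -> list of one-second IOPS
    samples (300 per QD in the actual test). Returns the mean per QD,
    which is what the average line on the chart represents."""
    return {qd: sum(vals) / len(vals) for qd, vals in reports.items()}

# Hypothetical three-sample excerpts of the five-minute windows:
reports = {256: [17500, 17600, 17650], 128: [16900, 17000, 17100]}
print(qd_averages(reports))
```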
4K random read speed is an important metric when comparing drive performance, as small-file random access is the hardest type of workload for any storage solution to master. One of the most sought-after performance specifications, 4K random performance is also a heavily marketed figure.
With incompressible data the WarpDrive delivers an average of 17,583 IOPS, 80% compressible data comes in at 20,251 IOPS, and 100% compressible data averages 37,031 IOPS during the QD256 measurement window.
The tests are conducted in reverse order (QD 256, 128, 64, 32, 16, 8, 4) to keep the device under load. This holds steady state conditions for as long as possible; at lower queue depths in particular, the device tends to begin recovering from steady state.
With SandForce devices, however, once they begin receiving fully compressible data they begin to recover regardless of the QD. This is why we observe apparently lower performance at higher QD in some scenarios. In actual deployment this will not always be the case when utilizing 100% compressible data, and results at higher QD may be in line with the QD128 results.
Random read is largely unaffected by the compressibility of the data, and at QD256 the WarpDrive comes in with an impressive average of 145,779 IOPS.
The histogram represents the latency of every single I/O during the QD256 test period, expressed as percentages.
During incompressible testing the largest share of requests, 19%, or 4,091,467 I/Os, fell into the 10-20ms range.
For 80% compressible data, 27% of the requests, or 6,377,634 I/Os, fell into the 6-8ms range.
For 100% compressible data, 52% of accesses, or 27,405,087 I/Os, fell into the 4-6ms range.
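The bucketing behind these histograms can be sketched as follows: count each I/O's completion latency into millisecond ranges and report each bucket as a percentage of total I/Os. The bucket edges here are assumptions chosen to match the ranges quoted above, and the sample latencies are invented.

```python
def latency_histogram(latencies_ms, edges=(0, 2, 4, 6, 8, 10, 20)):
    """Bucket latencies (milliseconds) into [edges[i], edges[i+1])
    ranges and return each bucket's share as a percentage."""
    counts = [0] * (len(edges) - 1)
    for lat in latencies_ms:
        for i in range(len(edges) - 1):
            if edges[i] <= lat < edges[i + 1]:
                counts[i] += 1
                break
    total = len(latencies_ms)
    return {f"{edges[i]}-{edges[i + 1]}ms": 100.0 * c / total
            for i, c in enumerate(counts)}

# Hypothetical per-I/O latencies in milliseconds:
sample = [5.1, 5.7, 4.2, 6.3, 7.9, 12.4, 1.1, 5.5]
print(latency_histogram(sample))
```

In a real capture the input would be tens of millions of per-I/O latencies, as the counts quoted above reflect.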