Test System and Methodology
We are introducing a new approach to HDD and SSD storage testing here at TweakTown for our Enterprise Test Bench. The LSI Nytro WarpDrive is the first SSD evaluated under the new methodology, and comparisons to other SSDs will be forthcoming as we test new models from various manufacturers.
Designed specifically to target the long-term performance of solid state storage with a high level of granularity, our new SSD testing regimen is applicable to a wide variety of flash devices. From typical form-factor SSDs to the hottest PCIe application accelerators available, we are utilizing this new test regimen to provide accurate performance measurements over a variety of parameters.
Many forms of testing rely on peak and average measurements over a given time period. While these average values can give a basic understanding of the performance of the storage solution, they fall short of providing the clearest possible view of the QoS (Quality of Service) of the I/O.
The problem with average results is that they do little to indicate the variability experienced during actual deployment of the device. The degree of variability is especially pertinent, as many applications can hang or lag while waiting for a single I/O to complete. This type of testing illustrates the performance variability expected in these scenarios, alongside a host of other relevant data, including the average measurements during the measurement window.
In reality, while under load all storage solutions deliver variable levels of performance that are subject to constant change. While this fluctuation is normal, the degree of fluctuation is what separates enterprise storage solutions from typical client-side hardware. By recording ongoing measurements from our workloads at one-second reporting intervals, we can illustrate the differences between products in QoS consistency while the device is under load. Scatter charts give readers a basic understanding of the latency distribution of the I/O stream without requiring numerous separate graphs.
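To illustrate what one-second reporting looks like in practice, the sketch below (hypothetical helper and data, not our actual test harness) groups raw I/O completion records into one-second windows and reports the I/O count and average latency for each window, producing the kind of time series a scatter chart plots:

```python
from collections import defaultdict

def one_second_series(completions):
    """Group (timestamp_s, latency_ms) completion records into
    one-second windows and return (window_start, iops, avg_latency_ms)
    tuples -- the raw points behind a latency/IOPS scatter chart."""
    windows = defaultdict(list)
    for ts, lat in completions:
        windows[int(ts)].append(lat)
    series = []
    for start in sorted(windows):
        lats = windows[start]
        series.append((start, len(lats), sum(lats) / len(lats)))
    return series

# Hypothetical records: (completion time in seconds, latency in ms)
records = [(0.1, 1.0), (0.6, 3.0), (1.2, 2.0), (1.9, 2.0), (2.5, 10.0)]
print(one_second_series(records))
# -> [(0, 2, 2.0), (1, 2, 2.0), (2, 1, 10.0)]
```

Note how the averages for the first two windows look identical while the third window exposes a latency spike that a single whole-run average would hide.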
Consistent latency is the goal of every storage solution, and measurements such as Maximum Latency capture only the single longest I/O observed during testing. This can be misleading, as one outlying I/O can skew the view of an otherwise superb solution. Standard Deviation measurements take the average distribution of the I/O into consideration, but do not always illustrate the entire I/O distribution with enough granularity to provide a clear picture of system performance. We use histograms to illuminate the latency of every single I/O issued during our test runs, providing a clear picture of the actual percentage of I/O requests that fall within each latency range.
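The contrast between a maximum-latency figure and a full distribution can be sketched as follows (synthetic latencies and hypothetical bucket bounds for illustration): one 50 ms outlier dominates the maximum, while the histogram shows what fraction of I/Os actually landed in each latency range:

```python
import math

def latency_histogram(latencies_ms, upper_bounds):
    """Percentage of I/Os whose latency falls below each successive
    upper bound; a final open-ended bucket catches everything above
    the last bound."""
    bounds = list(upper_bounds) + [math.inf]
    counts = [0] * len(bounds)
    for lat in latencies_ms:
        for i, bound in enumerate(bounds):
            if lat < bound:
                counts[i] += 1
                break
    total = len(latencies_ms)
    return [round(100.0 * c / total, 1) for c in counts]

# 99 fast I/Os plus one 50 ms outlier: the maximum reads 50 ms,
# but the histogram shows 99% of requests completed under 2 ms.
lats = [1.5] * 99 + [50.0]
print(max(lats))                            # -> 50.0
print(latency_histogram(lats, [2, 5, 10]))  # -> [99.0, 0.0, 0.0, 1.0]
```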
Our testing regimen follows SNIA principles to ensure consistent, repeatable testing. Due to the very nature of NAND devices, it is important that we test under steady state conditions. We attain steady state convergence by preconditioning the device until its performance varies no more than 20% from the average speed measured during the measurement window.
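A simplified version of that convergence check (a hypothetical helper inspired by the 20% criterion above, not the full SNIA procedure, which also examines the slope of the best-fit line) takes per-second throughput samples from the measurement window and declares steady state only when every sample stays within 20% of the window average:

```python
def is_steady_state(samples, tolerance=0.20):
    """Return True when every throughput sample in the measurement
    window lies within +/- tolerance of the window average -- a
    simplified take on a 20% excursion criterion."""
    if not samples:
        return False
    avg = sum(samples) / len(samples)
    return all(abs(s - avg) <= tolerance * avg for s in samples)

# A fresh drive still settling vs. one that has converged (MB/s samples)
print(is_steady_state([900, 600, 400, 350, 330]))  # -> False
print(is_steady_state([410, 400, 395, 405, 398]))  # -> True
```

In practice such a check runs over a sliding window during preconditioning, and measurement begins only after the window passes.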
We test below QD32 only to illustrate the scaling of the device. Low QD testing with enterprise-class storage solutions is a frivolous activity if not presented alongside higher QD results as well. Administrators who have optimized their infrastructure correctly sustain high QD levels, capitalizing on the performance of the premium tier of storage that SSDs provide. Especially with the explosion of virtualization into the datacenter, the high QD performance of the storage solution is the most important metric.
The first page of results will provide the 'key' to understanding and interpreting our new test methodology.