Test System and Methodology
We utilize a new approach to HDD and SSD storage testing for our Enterprise Test Bench, designed specifically to target long-term solid state storage performance with a high level of granularity.
Many forms of testing rely on peak and average measurements over a given time period. While these averages can give a basic understanding of a storage solution's performance, they fall short of providing a clear view of the Quality of Service (QoS) of the I/O stream.
The problem with averages is that they do little to indicate the performance variability experienced during actual deployment. That variability matters: many applications can hang or lag while waiting on a single I/O to complete. A workload that averages one millisecond per I/O, for instance, can still contain individual I/Os that take hundreds of milliseconds, and it is those outliers the application stalls on. Our testing illustrates the performance variability expected in these scenarios, alongside the average measurements, during the measurement window.
Under load, every storage solution delivers performance that fluctuates constantly. The fluctuation itself is normal; the degree of fluctuation is what separates enterprise storage solutions from typical client-side hardware. By recording our workloads at one-second reporting intervals, we can illustrate how products differ in the consistency of their QoS. Scatter charts then give readers a view of the latency distribution of the I/O stream without having to pore over numerous separate graphs.
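As a rough illustration of how those one-second samples could be produced, the sketch below reduces a per-I/O latency log to per-second points suitable for a scatter chart. The log format and field layout are assumptions made for the example, not the format used by our actual test harness.

    # Minimal sketch: reduce a per-I/O latency log to one-second samples for a
    # scatter chart. The input format (one "timestamp_ms latency_us" pair per
    # line) is an assumption, not any particular harness's log format.
    from collections import defaultdict

    def one_second_samples(log_path):
        buckets = defaultdict(list)          # second -> list of latencies (us)
        with open(log_path) as log:
            for line in log:
                ts_ms, lat_us = line.split()
                buckets[int(ts_ms) // 1000].append(float(lat_us))
        # One (second, average latency, I/O count) point per reporting interval.
        return [(sec, sum(lats) / len(lats), len(lats))
                for sec, lats in sorted(buckets.items())]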
Consistent latency is the goal of every storage solution, but a metric such as maximum latency only captures the single longest I/O recorded during testing. That can be misleading, as one outlying I/O can skew the view of an otherwise superb solution. Standard deviation factors in how the I/O is distributed around the average, but it does not always illustrate the full distribution with enough granularity to give a clear picture of system performance. We therefore use histograms that account for the latency of every single I/O issued during our test runs.
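The sketch below illustrates the distinction: from the same set of per-I/O latencies it reports the maximum, the standard deviation, and a binned histogram that accounts for every I/O. The bin edges are assumptions chosen for the example, not the bins used in our charts.

    # Minimal sketch: why a histogram tells more than maximum latency or
    # standard deviation alone. Latencies are in microseconds; bin edges are
    # illustrative assumptions.
    import statistics

    def summarize(latencies_us):
        bins = [0, 100, 250, 500, 1000, 5000, 10000, 50000]   # bin edges (us)
        counts = [0] * len(bins)
        for lat in latencies_us:
            # Place each I/O in the last bin whose lower edge it meets.
            for i in reversed(range(len(bins))):
                if lat >= bins[i]:
                    counts[i] += 1
                    break
        return {
            "max_us": max(latencies_us),                  # single worst I/O
            "stdev_us": statistics.pstdev(latencies_us),  # spread around the mean
            "histogram": dict(zip(bins, counts)),         # every I/O accounted for
        }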
Our testing regimen follows SNIA principles to ensure consistent, repeatable testing. We bring the device to steady state convergence through preconditioning, continuing until performance no longer deviates more than 20% from the average speed measured during the measurement window. Forcing the device to perform read-modify-write operations for new I/O triggers all garbage collection and housekeeping algorithms, exposing the real performance of the solution.
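The sketch below captures that convergence rule under simple assumptions: steady state is declared once every per-second speed sample in the measurement window sits within 20% of the window's average. The 300-second window is an illustrative parameter, not our exact configuration.

    # Minimal sketch of the steady state check described above. The window
    # length and tolerance are parameters; 300 seconds is an assumption.
    def is_steady_state(samples, window=300, tolerance=0.20):
        """samples: per-second throughput (e.g. IOPS) in test order."""
        if len(samples) < window:
            return False
        recent = samples[-window:]
        average = sum(recent) / len(recent)
        return all(abs(s - average) <= tolerance * average for s in recent)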
We test below QD32 only to illustrate how the device scales. Low-QD testing of enterprise-class storage is a frivolous exercise unless it is presented alongside higher-QD results. The explosion of virtualization in the datacenter makes high-QD performance the most important metric for an enterprise storage solution.
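One way such a queue depth sweep could be scripted is sketched below, assuming fio as the workload generator; fio itself, the device path, and the job parameters are illustrative assumptions rather than a description of our actual test runs.

    # Minimal sketch of a queue depth sweep using fio (assumed tool; device
    # path and job parameters are illustrative, not our configuration).
    import json, subprocess

    def qd_sweep(device="/dev/nvme0n1",
                 depths=(1, 2, 4, 8, 16, 32, 64, 128, 256)):
        results = {}
        for qd in depths:
            out = subprocess.run(
                ["fio", "--name=qd_sweep", f"--filename={device}",
                 "--ioengine=libaio", "--direct=1", "--rw=randread", "--bs=4k",
                 f"--iodepth={qd}", "--runtime=60", "--time_based",
                 "--output-format=json"],
                capture_output=True, text=True, check=True)
            results[qd] = json.loads(out.stdout)["jobs"][0]["read"]["iops"]
        return results   # queue depth -> random read IOPS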
We have also expanded our power testing to record power consumption during each preconditioning run. These per-second, time-based measurements illuminate how power consumption behaves under steady state conditions. Over the life of a storage device, power consumption can cost more than the drive's up-front price, which significantly affects the TCO of the storage solution.
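A back-of-the-envelope sketch of that power cost follows; all inputs are illustrative assumptions rather than measured values.

    # Minimal sketch of the power-cost point above: energy drawn over the
    # drive's service life. All inputs are illustrative assumptions.
    def lifetime_power_cost(avg_watts, years, dollars_per_kwh):
        hours = years * 365 * 24
        return avg_watts / 1000.0 * hours * dollars_per_kwh

    # Example: a drive averaging 9 W over 5 years at $0.15/kWh draws roughly
    # $59 of electricity, before cooling and datacenter overhead are counted.
    print(round(lifetime_power_cost(9, 5, 0.15), 2))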
The first page of results will provide the 'key' to understanding and interpreting our new test methodology.