Test System and Methodology
We designed our storage testing approach to target long-term performance with a high level of granularity. Many testing methods record only peak and average measurements over the test period. These averages give a basic understanding of performance, but fall short of providing the clearest possible view of I/O QoS (Quality of Service).
While under load, all storage solutions deliver variable levels of performance, and 'average' results do little to indicate the variability experienced during actual deployment. That variability is especially pertinent, as many applications can hang or lag while they wait for I/O requests to complete. Some fluctuation is normal; its magnitude is what separates enterprise storage solutions from typical client-side hardware.
Providing ongoing measurements from our workloads at one-second reporting intervals illustrates product differentiation in relation to I/O QoS. Scatter charts give readers a basic understanding of I/O latency distribution without requiring them to pore over numerous graphs. This testing methodology illustrates performance variability and also includes average measurements over the measurement window.
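The one-second reporting described above amounts to bucketing I/O completions by the second in which they finish, then summarizing each bucket. A minimal sketch of that aggregation, using made-up completion data (the timestamps and latencies are illustrative only, not measurements from this review):

```python
from collections import defaultdict

# Hypothetical completion log: (timestamp_s, latency_ms) pairs, similar in
# shape to what a benchmarking tool can emit. Values are invented.
completions = [(0.2, 0.09), (0.7, 0.11), (1.1, 0.10), (1.6, 4.80), (2.3, 0.10)]

# Group completions into one-second reporting windows.
buckets = defaultdict(list)
for ts, lat in completions:
    buckets[int(ts)].append(lat)

# Summarize each window: completions per second (IOPS) plus latency stats.
for second in sorted(buckets):
    lats = buckets[second]
    iops = len(lats)
    avg = sum(lats) / iops
    print(f"t={second}s  IOPS={iops}  avg={avg:.2f} ms  max={max(lats):.2f} ms")
```

Plotting those per-second points, rather than a single run-wide average, is what exposes latency spikes like the 4.80 ms outlier in window t=1s.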
IOPS data that ignores latency is useless. Consistent latency is the goal of every storage solution, and measurements such as Maximum Latency only illuminate the single longest I/O received during testing. This can be misleading, as a single 'outlying I/O' can skew the view of an otherwise superb solution. Standard Deviation measurements consider latency distribution, but do not always effectively illustrate I/O distribution with enough granularity to provide a clear picture of system performance. We utilize high-granularity I/O latency charts to illuminate performance during our test runs.
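The point about Maximum Latency being misleading is easy to demonstrate numerically. In this sketch (with invented sample values, not data from this review), a drive delivers 999 I/Os at 0.1 ms and a single 50 ms outlier: the maximum is dominated entirely by that one I/O, while a percentile view shows the drive's typical behavior:

```python
# Hypothetical latency samples (ms): a very consistent drive with one outlier.
samples = [0.1] * 999 + [50.0]

mean = sum(samples) / len(samples)
variance = sum((s - mean) ** 2 for s in samples) / len(samples)
std_dev = variance ** 0.5

def percentile(data, pct):
    """Latency at or below which roughly pct percent of samples fall."""
    ordered = sorted(data)
    index = min(round(pct / 100 * len(ordered)), len(ordered) - 1)
    return ordered[index]

print(f"Max latency:   {max(samples):.2f} ms")          # set by the one outlier
print(f"Mean latency:  {mean:.4f} ms")
print(f"Std deviation: {std_dev:.4f} ms")               # also inflated by it
print(f"99th pctile:   {percentile(samples, 99):.2f} ms")  # typical behavior
```

A single outlying I/O pushes the maximum to 50 ms and inflates the standard deviation, while the 99th percentile still reflects the 0.1 ms the drive delivers almost all of the time; high-granularity latency charts make that distribution visible directly.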
Our testing regimen follows SNIA principles to ensure consistent, repeatable testing, and utilizes multi-threaded workloads found in typical production environments. We tested two SanDisk ULLtraDIMMs, but it is important to note that typical deployments will consist of larger arrays, typically between four and eight devices. The MCS Management Console provides incredible overprovisioning granularity: users can select between 10% and 90% additional overprovisioning. For this evaluation, we test single and striped RAID configurations at 100% (full-span) utilization and at 70% capacity utilization (30% additional overprovisioning).
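The utilization figures translate to capacity straightforwardly: testing at 70% capacity utilization leaves the remaining 30% as additional overprovisioned spare area. A quick sketch of that arithmetic (the 400 GB per-device figure is an assumed capacity for illustration, not taken from this review):

```python
RAW_CAPACITY_GB = 400  # assumed per-device capacity, for illustration only

def usable_capacity(raw_gb, utilization_pct):
    """Capacity exposed to the workload at a given utilization level;
    the remainder acts as additional overprovisioned spare area."""
    return raw_gb * utilization_pct / 100

full_span = usable_capacity(RAW_CAPACITY_GB, 100)  # full-span test
reduced = usable_capacity(RAW_CAPACITY_GB, 70)     # 30% additional OP
print(f"Full-span: {full_span:.0f} GB, 70% utilization: {reduced:.0f} GB")
```

The extra spare area gives the controller more room for garbage collection and wear leveling, which is why reduced-utilization results typically show higher steady-state performance.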
We didn't pull any punches with the competing devices. The 6Gb/s SATA Samsung 845DC PRO and the 12Gb/s SAS HGST SSD800MH both deliver leading performance in their respective categories. Both competing drives were attached via a 12Gb/s LSI 9300-8i HBA during testing. We will circle back with PCIe competitors when we receive additional ULLtraDIMM samples.
The first page of results provides the 'key' to understanding and interpreting our test methodology.