The advent of the Intel DC S3700 series of enterprise SSDs brought about a new focus on performance consistency. For many unaccustomed to the inner workings of enterprise storage solutions, this triggered the revelation that SSDs do not deliver a steady level of performance, even when under a steady workload.
Intel has worked hard to illustrate the need for performance consistency in everyday applications, and rightly so. There is a plethora of cheap consumer SSDs on the market, and in an attempt to lower overall TCO, many administrators have been experimenting with these SSDs in actual production environments. Many of these same administrators come away from the experience unsatisfied with the inconsistent performance trends and poor endurance of these consumer solutions.
The problem lies in the variable performance delivered by these solutions, one that Chris Ramseyer and I highlighted in our article and video Consumer vs. Enterprise SSD Performance Analysis. In our testing, we found that some client SSDs suffer from I/O 'outliers' in the same latency range as a 15,000 RPM HDD. This highlights the fact that storage performance is not always defined by the averages posted in typical test results, and choosing the wrong solution can have disastrous consequences.
By monitoring the performance of the storage with greater granularity, we can expose the flaws that plague client-side and less robust enterprise storage solutions. The Intel DC S3700 strives to provide a level of consistency unmatched by many other SSDs, and the challenge for Intel became how to spread that message. One of the best methods for highlighting performance variability is measuring performance in one-second intervals. At TweakTown we had already begun testing enterprise storage products in this manner, and the Intel DC S3700 spawned very similar testing by other hardware review sites.
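To illustrate the idea behind one-second-interval measurement, here is a minimal sketch that buckets I/O completions by the second in which they finished and reports per-interval IOPS and latency. The I/O log below is entirely hypothetical sample data, not measurements from the DC S3700; in practice the samples would come from a benchmarking tool's latency log.

```python
from statistics import mean

# Hypothetical I/O completion log: (timestamp in seconds, latency in ms).
io_log = [
    (0.1, 0.2), (0.3, 0.2), (0.7, 9.5),   # second 0 contains one slow outlier
    (1.2, 0.2), (1.5, 0.2), (1.9, 0.2),   # second 1 is perfectly consistent
]

# Bucket completions into one-second intervals.
buckets = {}
for ts, lat in io_log:
    buckets.setdefault(int(ts), []).append(lat)

# Per-interval figures expose variability that a single
# whole-run average would hide.
for second in sorted(buckets):
    lats = buckets[second]
    print(f"t={second}s  IOPS={len(lats)}  avg_latency={mean(lats):.2f} ms")
```

Averaged over the whole run, the two seconds look similar; broken out per second, the outlier in the first interval stands out immediately, which is exactly why one-second reporting became the standard for consistency testing.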
Intel's focus on performance consistency has brought one of the crucial aspects of storage performance into the limelight: predictable performance. One of the major motivators behind creating an SSD with predictable performance is the need to feed I/O to applications in a consistent manner. Individual 'hangs' and lags from outlying I/O can significantly degrade application performance, simply because applications are forced to wait for the next I/O to complete.
Outliers can rob applications of performance, especially when variability is significant. In enterprise scenarios, poorly performing applications can require additional resources to make up the difference, and in massive datacenters the extra servers needed to compensate for poor performance can create tremendous cost.
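The gap between average and tail latency is easy to demonstrate. The sketch below uses fabricated latency samples (not measured data): 98 fast completions plus two HDD-class outliers. The average still looks healthy, while the 99th percentile reveals the stalls an application actually waits on.

```python
from statistics import mean, quantiles

# Hypothetical latency samples in ms: mostly fast, two HDD-class outliers.
latencies = [0.2] * 98 + [12.0, 15.0]

avg = mean(latencies)                  # the headline number looks fine
p99 = quantiles(latencies, n=100)[98]  # the tail tells the real story

print(f"average = {avg:.3f} ms, p99 = {p99:.2f} ms")
```

Two outliers in a hundred I/Os barely move the mean, yet any application that issues dependent I/Os serially will stall on them, which is why consistency-focused testing reports percentiles and worst-case latency rather than averages alone.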
Another great aspect of the Intel DC S3700 is that its performance consistency should equate to large performance gains in RAID applications. A RAID array can only read and write as fast as the slowest device in the array; if any single SSD experiences significant variability, it slows down the entire array. Since the array operates only as fast as the slowest I/O, several SSDs with poor performance consistency magnify the problem: in effect, the number of poorly performing SSDs in the RAID array multiplies the amount of outlying I/O.
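The multiplication effect described above can be sketched with simple probability. Assuming (purely for illustration) that each member SSD independently serves a given I/O slowly with probability p, a full-stripe operation that touches all n drives is delayed whenever any one member is slow, so the chance of a slow stripe is 1 - (1 - p)^n. The figures below are illustrative, not measurements.

```python
# Probability that a full-stripe I/O is slowed, assuming each of the
# n member SSDs independently produces an outlier with probability p.
def slow_stripe_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a modest 1% per-drive outlier rate compounds quickly:
for n in (1, 4, 8):
    print(f"{n} drive(s): P(slow stripe) = {slow_stripe_probability(0.01, n):.3f}")
```

With a 1% per-drive outlier rate, roughly 4% of stripes are slowed on a four-drive array and nearly 8% on an eight-drive array, which is why consistent per-drive latency pays off disproportionately in RAID.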
We are accustomed to reading over-the-top RAID reviews with insane numbers of SSDs, and have been guilty of writing them in the past. Expect us to write more of them in the future as well; we have some very exciting new RAID testing coming up.
However, in typical deployments, SSDs serve as high-performance tiers for hot data or as caching layers. Not every application requires millions of IOPS, so today we are focusing on the performance of a realistic deployment of four and eight Intel DC S3700 SSDs in an enterprise environment.