The Changing Datacenter
The datacenter is evolving away from traditional SAN architectures toward converged infrastructures and server-side SAN configurations. This creates a unique opportunity for flash in the datacenter by keeping it as close to the processor as possible. Software-defined storage initiatives aim to provision resources on the fly and share them among multiple servers.
Intel plays a big role in the future of the datacenter due to its dominant share of the server processor market. Intel's control of chipsets is important as PCIe begins to shoulder more of the load for storage traffic. More PCIe lanes will allow for more complex storage fabrics with features such as multipath and failover. PCIe extenders and switches, or RDMA over Ethernet, can share the excess capacity and performance of in-chassis PCIe SSDs with other servers.
Existing networking infrastructure will likely favor RDMA over PCIe for rack-level sharing, and Intel has a line of networking equipment that fits this role as well. Intel also offers its Cache Acceleration Software, which merges the capacity of HDDs with the performance of flash.
As SSDs became mainstream, manufacturers focused on the basics, such as managing write amplification and refining garbage collection. As SSDs evolved, the focus turned to maximum performance. Intel came to the realization that performance consistency is the key to delivering better application performance, and built that consistency into its DC S3700 and DC S3500 SSDs.
Now Intel has begun to focus on the next frontier of performance tuning. The company has spent considerable time collecting feedback from a wide range of customers, and has measured many common workloads itself. This analysis and feedback led to the conclusion that the majority of real-world applications rarely touch the upper limit of performance and tend to hover at queue depths below 64.
One unfortunate aspect of SSD performance measurement and marketing is that peak performance under heavy load is often the yardstick used to rate enterprise SSDs. Latency, meanwhile, is measured at a queue depth of one, the point at which a storage device delivers its lowest possible throughput.
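The link between queue depth, latency, and throughput follows from Little's Law: sustained IOPS equals queue depth divided by average latency, which is why QD1 represents a device's throughput floor. A minimal sketch (the latency figures here are hypothetical, not measured values for any Intel drive):

```python
def iops(queue_depth: int, avg_latency_s: float) -> float:
    """Little's Law: steady-state IOPS for a device held at a constant
    queue depth with the given average per-I/O latency."""
    return queue_depth / avg_latency_s

# Hypothetical 100-microsecond average read latency at QD1:
qd1 = iops(1, 100e-6)    # 10,000 IOPS -- the device's floor
# At QD32, throughput climbs even if latency doubles under load:
qd32 = iops(32, 200e-6)  # 160,000 IOPS
print(f"QD1: {qd1:,.0f} IOPS, QD32: {qd32:,.0f} IOPS")
```

This is also why a drive can post enormous peak IOPS numbers while offering no benefit to an application that never queues more than a few outstanding I/Os.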
With the P series, Intel has begun tuning its enterprise SSDs to reflect real-world workloads, providing superior scaling and performance starting at lower queue depths. The graph above shows that while some SSDs plateau at very high performance under heavy load, the most desirable SSD scales well and provides more performance at low queue depths. This is a challenge because SSDs rely upon parallelism to deliver peak performance.
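The tradeoff can be illustrated with two entirely hypothetical IOPS-versus-queue-depth curves (invented for illustration, not measurements of any shipping drive): one drive peaks higher, while the other delivers more throughput in the low-queue-depth region where most applications actually run.

```python
# Hypothetical IOPS-vs-queue-depth curves (illustrative numbers only).
# drive_a peaks higher; drive_b is tuned for the low-QD region (QD < 64)
# where most real workloads operate.
curves = {
    "drive_a": {1: 12_000, 4: 45_000, 16: 150_000, 64: 400_000, 256: 450_000},
    "drive_b": {1: 20_000, 4: 75_000, 16: 220_000, 64: 360_000, 256: 360_000},
}

def better_at(qd: int) -> str:
    """Return the hypothetical drive delivering more IOPS at a queue depth."""
    return max(curves, key=lambda d: curves[d][qd])

print(better_at(4))    # drive_b wins in the common low-QD region
print(better_at(256))  # drive_a wins only at an unrealistically heavy load
```

Judged by peak numbers alone, drive_a looks faster; judged over the queue depths real applications generate, drive_b delivers more useful work.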
The new approach trades roughly 20% of peak operating speed for enhanced performance in the common operating regions. Very few workloads are 100% read or 100% write, and Intel tunes the drives for this reality as well: they deliver a balanced mix of read and write I/O during mixed workloads. This enables drives with lower write performance, such as the DC P3500 and DC P3600, to deliver robust performance in mixed workloads.
On a side note, the low queue depths most SSDs experience during their lives are, in most cases, indicative of wasted performance. In an ideal architecture, SSDs are shared among multiple servers; provisioned correctly, they could deliver more value by operating at full utilization. This is one of many reasons we are observing a move toward converged infrastructures and Software Defined Data Centers (SDDC), but change is a slow process in the datacenter, largely due to legacy infrastructure and applications. For now, Intel is optimizing its drives for the current operating environment.