Power consumption is the highest ongoing expense in the datacenter, and for giants like Facebook it can easily add up to billions of dollars per year. One of the largest contributors is cooling: power consumption generates heat, and Facebook has already grabbed the low-hanging fruit by moving to open-air datacenter designs that radically reduce cooling requirements. Now Facebook has turned its attention to UPS systems for the next layer of power savings. Reducing overall power consumption is key because every watt consumed also incurs the expense of backing it up. During a power loss event the systems automatically fall back to massive UPS systems that provide enough power, typically 90 seconds' worth, to cover the gap until backup generators come online. Facebook has already altered UPS design by migrating from large central UPS systems to seven-foot-tall server cabinets interspersed throughout the datacenter.
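As a rough illustration of why every watt matters twice, the energy a UPS must hold to bridge the 90-second generator gap scales directly with the facility's load. A minimal sketch of the arithmetic, using hypothetical figures rather than Facebook's actual numbers:

```python
# Back-of-the-envelope UPS bridge sizing (hypothetical figures, not
# Facebook's actual numbers): energy needed to carry a load for the
# ~90 seconds it takes backup generators to come online.

def ups_bridge_energy_kwh(load_kw: float, bridge_seconds: float = 90.0) -> float:
    """Energy (kWh) a UPS must supply to bridge a power loss."""
    return load_kw * (bridge_seconds / 3600.0)

# Example: a hypothetical 1 MW (1,000 kW) facility bridged for 90 seconds.
energy = ups_bridge_energy_kwh(1000.0)
print(f"{energy:.1f} kWh")  # 1000 kW * 90 s / 3600 s/h = 25.0 kWh
```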
Today these massive power backup systems rely upon lead-acid batteries, but now Facebook is experimenting with the same type of lithium-ion batteries found in today's latest electric vehicles. The long-term cost of maintenance is lower for lithium-ion batteries, and they also deliver more power in a smaller footprint. Facebook is experimenting with designs that embed lithium-ion batteries at the rack level. Two batteries will slide into each rack and provide UPS protection. This design also reduces the impact of UPS failure: if a standard centralized UPS fails, the entire datacenter can go down, but with rack-level battery backups, only small groups of servers would be affected by individual failures.
Recent advances in lithium-ion battery technology have been fueled by electric car development. Vehicles like the Volt, Tesla, and Leaf have ushered in advanced battery technology and also lowered the overall cost. Now that the cost of lithium-ion batteries has fallen, they have become a sensible alternative for UPS applications in massive datacenters. Facebook is integrating its new designs into its Open Compute initiative, which might serve to expand the widespread use of lithium-ion in the datacenter. There is no word on how increased demand would affect overall pricing.
Open Server Summit 2014 focuses on next-generation server designs that leverage industry-standard hardware and open-source software. The show is a great place to view future server technology, which makes it the perfect venue for displaying the Diablo MCS (Memory Channel Storage) architecture at work on the SanDisk ULLtraDIMM. The SanDisk ULLtraDIMM DDR3 SSD brings latency as low as five microseconds by sidestepping the traditional storage stack, and communicating via the DDR3 bus. This reduces cabling, complexity, and components required for typical storage deployments.
The slim form factor, which takes advantage of the existing memory subsystem, will enable radical new server designs, particularly in the blade and microserver segment. The hardware consists of a JEDEC-compliant ULLtraDIMM that presents itself as a block storage device with 200 or 400GB of capacity. The ULLtraDIMM utilizes two Marvell 88SS9187 controllers running the Guardian Technology Platform to increase endurance and reliability. This tandem delivers random read/write performance of 140,000/40,000 IOPS, and sequential read/write speeds up to 880/600 MB/s. SanDisk's 19nm eMLC NAND provides 10 DWPD (Drive Writes Per Day) of endurance over a five-year warranty (or the rated total bytes written, whichever comes first).
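An endurance rating like this translates directly into total bytes written over the warranty period. A quick sketch of the arithmetic, using the 400GB model's figures from above:

```python
# Converting a DWPD endurance rating into total terabytes written (TBW)
# over the warranty period. Figures from the article: 10 DWPD, 400 GB
# capacity, five-year warranty.

def dwpd_to_tbw(capacity_gb: float, dwpd: float, years: float) -> float:
    """Total terabytes written implied by a DWPD rating."""
    return capacity_gb * dwpd * 365 * years / 1000.0

print(dwpd_to_tbw(400, 10, 5))  # 400 GB * 10 * 365 days * 5 yr / 1000 = 7300.0 TBW
```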
The real genius of the ULLtraDIMM design is its enhanced parallelism. Stacking several devices in parallel unlocks key performance advantages that will challenge even the fastest datacenter-class PCIe SSDs. We recently had a chance to take an in-depth look at the ULLtraDIMM and post our independent third-party testing results in the SanDisk ULLtraDIMM DDR3 400GB SSD Enterprise Review. Head over to the PCIe category in our IT/Datacenter section for a look at competing PCIe devices.
Lately HDDs haven't been gaining capacity as quickly due to the limitations of PMR (Perpendicular Magnetic Recording). PMR stores magnetic bits of data vertically, allowing manufacturers to cram more data onto the HDD's platters and providing more density than the previous longitudinal (horizontal) method. Every new technology has its limits, and PMR has nearly reached the end of its evolutionary cycle. Now manufacturers are turning to HAMR (Heat-Assisted Magnetic Recording) to increase density. HAMR uses a small laser to heat the surface of the platter to 800 degrees Fahrenheit before data is written. The laser is incredibly small and embedded into the drive's write head, and the small heated surface area cools back down in under a nanosecond.
Heat alters the magnetic properties of the disk for this nanosecond in time, removing or reducing the superparamagnetic effect while data is written. This process allows for exponential gains in density, and HAMR drives with up to 20TB of storage are on the horizon. While this technology sounds a bit far-fetched, working development drives have already been displayed. With any new technology, one of the immediate concerns is a lack of development tools. A team from A*STAR and the National University of Singapore, led by Hongzhi Yang, has designed a pump-probe laser to test HAMR devices. This allows accurate testing of temperature-dependent recording in localized regions without destroying the media. This is one more step on the path to creating affordable HAMR HDDs, and the first Seagate HAMR HDDs are projected to release in the 2016 timeframe.
NVM Express has announced the new NVMe 1.2 specification, and many of the features are aligned to increase adoption in mobile designs, such as laptops and ultrathins. NVMe is a new storage protocol that provides amazing performance and low latency in comparison to legacy approaches, but while we have seen some amazingly fast enterprise SSDs hit our labs, NVMe hasn't quite made it to the consumer space. New power management features will allow NVMe SSDs to kick into lower power states, which will increase battery life for mobile applications.
Another new feature can also help make SSDs more affordable. The NVMe specification now supports a host-based memory buffer. With the notable exception of SandForce devices, current SSDs use DRAM for caching. This extra DRAM component adds cost, draws more power, and takes up space on the SSD. NVMe 1.2 allows the SSD to use the computer's RAM for SSD management, which means simpler, and cheaper, SSD designs. The smaller form factors will also lend themselves well to ultrathins, 2-in-1s, and tablets. One neat aspect is that the SSD can request varying amounts of DRAM from the host system. This DRAM is typically utilized for the FTL's translation tables, but it isn't hard to imagine it being used to cache actual data in the future. Enhanced temperature management will keep the SSD from overheating, which is also a key feature in cramped laptops and ultrathins. If the SSD reaches a high temperature, it can simply throttle performance to cool down. These new features are welcome additions, and new NVMe SSDs will speed their way into your home computer or mobile device soon.
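To get a feel for how much host DRAM an SSD might request for its translation tables, here is a minimal sketch based on a common rule of thumb (one 4-byte entry per 4 KiB flash page); the figures are an industry approximation, not values taken from the NVMe specification:

```python
# Rough estimate of the FTL mapping-table memory an SSD (or, with NVMe
# 1.2's host memory buffer, the host) must provide. Assumes a flat
# page-level mapping: one 4-byte entry per 4 KiB flash page. This is a
# rule of thumb, not a figure from the NVMe specification.

def ftl_table_bytes(capacity_bytes: int, page_size: int = 4096,
                    entry_size: int = 4) -> int:
    """Bytes needed for a flat logical-to-physical mapping table."""
    return (capacity_bytes // page_size) * entry_size

one_tb = 10**12
print(ftl_table_bytes(one_tb) / 10**9)  # roughly 1 GB of table per 1 TB of flash
```

The takeaway is that a DRAM-less design offloads on the order of a gigabyte per terabyte of flash to the host, which is exactly the component cost NVMe 1.2's host memory buffer lets vendors remove.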
The Intel DC S3500 series competes in the price-sensitive segment and is geared for read-intensive and mixed workloads. The DC S3500 doesn't sport quite the performance of its older brother, the DC S3700, but provides plenty of performance and endurance for many workloads. Today Intel is announcing the release of 1.2TB and 1.6TB variants, along with a new M.2 design. Expanded capacity is coupled with low power consumption that delivers reduced TCO; the DC S3500 has an active read power below 1.3 Watts. A sprinkling of other datacenter-specific technologies provides resiliency and a 0.3 DWPD (Drive Writes Per Day) endurance rating. End-to-end data protection, data redundancy technology, AES encryption, and power loss protection ensure data safety.
Intel 20nm MLC NAND and a new 8-channel controller drive the DC S3500 models. Details are scant on the new Intel-proprietary controllers, but we will update readers as more information becomes available. We can expect to see the same consistent performance from the new drives, with a 0.5 ms latency maximum for 99.9% of 4K random read IOPS. There are 10 capacity points available for the 2.5" drives, allowing users to tailor capacity to their specific needs. The high-capacity 2.5" variants feature up to 500/460 MB/s of sequential read/write speed and up to 65,000/18,500 random read/write IOPS. The larger pool of flash provides a bit more performance for the high-capacity variants, but the entire DC S3500 range features varying speeds based upon capacity.
The M.2 design relies upon the SATA interface and comes in 80, 120, and 340GB capacities. The performance of the M.2 variant seems tuned for slightly more random write speed than the similar capacity 2.5" variants, but slightly lower read speed. Intel is expecting the compact M.2 design to make a big splash in embedded applications, such as digital signage and slot machines. The M.2 design will also work well for server boot volumes. The ultra-dense design is particularly well-suited for blade and microserver designs, and some OEMs are in the process of developing systems with multiple M.2 connectors.
SMART Modular Technologies has announced the new M.2 SATA XR+ with SafeDATA power loss protection technology. SMART Modular Technologies, part of the larger SMART consortium of companies, is a privately-held company that has been in the electronics industry for over 25 years. Their products are usually confined to the OEM market, where they create custom designs for varying applications. The double-sided M.2 design is available in capacities from 32 to 512GB. These SSDs are designed to meet the needs of Tier 1 OEM customers and sport sequential read/write speeds of 540/320 MB/s.
SafeDATA consists of power loss detection and hold-up circuitry, in addition to advanced controller firmware, to flush all data to the underlying NAND in the event of host power loss. Power loss protection is a critical requirement in enterprise and embedded applications, and fusing that functionality onto a slim M.2 design opens up new applications for the dense form factor. The product is sampling now, and volume production begins in Q1 of 2015.
HGST led the way with 6TB drives by developing its HelioSeal technology, which fills the HDD with helium and seals the drive. This delivers a number of benefits: lower internal air resistance reduces flutter and allows the use of thinner and lighter materials. With less air resistance the drive also doesn't have to work as hard to spin the platters, even while increasing the platter count to seven, thus producing radical reductions in power consumption. HGST is leveraging the benefits of HelioSeal technology to move forward with the new He8, an 8TB version of the previous-generation drive.
In a sign that 8TB drives will experience a rapid uptake, Aberdeen announced today that it is integrating the new He8 into its AberNAS and storage server products. This will provide increased density for its customers and also tremendous reductions in power consumption. The He8 drive will deliver instant benefits, boosting the capacity of a single 4U rackmount to 192TB. We took a deep dive with the first commercially-available helium drive in our HGST Ultrastar He6 6TB Helium Enterprise HDD Review, and found it to deliver on its promises.
As Supercomputing 2014 begins we expect a rush of storage news, and today Innodisk kicked that off with the announcement of their first All-Flash array. The FlexiArray SE110 is a 1U rackmount with 10 SSDs in a 3TB configuration. The new array utilizes Innodisk's proprietary FlexiRemap Technology for global wear-leveling, which should boost endurance of the underlying media.
The new array utilizes consumer-grade MLC flash to deliver 320,000 sustained IOPS. The SE110 sports a quad-port 10GbE SFP+ connection and InfiniBand FDR QSFP connectivity. Innodisk will have a live demo running at the show, and will also have a wide variety of its other flash products on display, including mSATA, M.2 S42 and S80, and USB EDC.
Seagate has announced their Kinetic HDD, which connects via dual Ethernet ports and leverages the Seagate Kinetic Open Storage platform. Seagate has developed an entire ecosystem to support the new approach, which removes the need for a dedicated storage tier. The goal is to reduce the price of infrastructure to realize a TCO reduction of 50%. The open-source Kinetic API utilizes object storage, which circumvents the hindrances of normal file system architectures. This removes the software stack and allows applications to communicate directly with the Kinetic HDD.
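The object model described above swaps filesystem semantics for simple key/value operations. The toy in-memory sketch below illustrates that model; the class and method names are hypothetical illustrations, not the actual Kinetic Open Storage API:

```python
# A minimal in-memory sketch of the key/value object model a Kinetic-style
# drive exposes instead of a filesystem. Hypothetical names, used only to
# illustrate the put/get/delete object interface -- not the Kinetic API.

class KeyValueDrive:
    def __init__(self):
        self._store = {}

    def put(self, key: bytes, value: bytes) -> None:
        # The application writes an object directly against a key;
        # no filesystem or block-layer translation sits in between.
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        del self._store[key]

drive = KeyValueDrive()
drive.put(b"photos/cat.jpg", b"...jpeg bytes...")
print(drive.get(b"photos/cat.jpg"))
```

Because the interface is just keys and values over Ethernet, the usual POSIX metadata and directory machinery never enters the picture, which is where the software-stack savings come from.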
Kinetic HDDs reside in backplanes that have two embedded Ethernet connections for each drive, providing a dual-port active/active connection. The typical deployment then utilizes two 10GbE Ethernet connections to communicate with the server. HDDs can also speak directly to each other, without going through the operating system, streamlining operations such as disk-to-disk replication and minimizing overall network traffic to the server. Ethernet is widely deployed and presents the ability to use existing infrastructure for IP-based management.
The Kinetic platform also provides performance benefits. Seagate has observed a 4X increase in random write speed, due to the lack of metadata and queuing processes from legacy filesystems and operating system interaction. The new 4TB Kinetic drive is available for customer qualification now, and general availability begins at the end of November.
Enmotus has announced the general availability of its FuzeDrive server software, which provides software-defined storage acceleration for server-side SSD and NVDIMM deployments, an approach that is becoming more popular in clustered servers and hyper-converged architectures. FuzeDrive's MicroTiering storage algorithms load-balance data across devices and allow the use of standard SSDs to provide seamless acceleration for server-side flash deployments.
Andy Mills from Enmotus demonstrated the FuzeDrive software for us at the 2014 Flash Memory Summit. FuzeDrive provides easy management capability integrated into the operating system's native file browsing tools. FuzeDrive also allows for file-pinning, which keeps desired data constantly on the SSD to deliver maximum performance acceleration for critical files. Users can also monitor performance with a real-time, at-a-glance visual mapping tool. FuzeDrive differentiates itself from caching solutions by providing low-impact acceleration that doesn't eat CPU cycles. In some configurations, caching software can chew up as much as 50% of the host's CPU cycles running cache management tables and algorithms, and it can also have limits on the amount of addressable flash capacity. Enmotus is currently working with a select number of solution and channel partners to make the technology available.
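The promotion-plus-pinning idea can be sketched in a few lines. This toy model is in the spirit of tiering with file-pinning, not an implementation of Enmotus's actual MicroTiering algorithms; all names here are hypothetical:

```python
# Toy tiering model with file pinning (hypothetical, illustrative only):
# pinned files always live on the fast tier, hot files are promoted once
# their access count crosses a threshold, and everything else stays on HDD.

class TinyTier:
    def __init__(self, promote_threshold: int = 3):
        self.access_counts = {}
        self.pinned = set()
        self.promote_threshold = promote_threshold

    def pin(self, name: str) -> None:
        """Pin a file so it always resides on the fast (SSD) tier."""
        self.pinned.add(name)

    def record_access(self, name: str) -> None:
        self.access_counts[name] = self.access_counts.get(name, 0) + 1

    def tier_of(self, name: str) -> str:
        if name in self.pinned:
            return "ssd"           # pinned data never leaves the fast tier
        if self.access_counts.get(name, 0) >= self.promote_threshold:
            return "ssd"           # hot data gets promoted
        return "hdd"               # cold data stays on capacity storage

t = TinyTier()
t.pin("db.sqlite")
for _ in range(3):
    t.record_access("log.txt")
print(t.tier_of("db.sqlite"), t.tier_of("log.txt"), t.tier_of("cold.bin"))
# ssd ssd hdd
```

Unlike a cache, a tier like this moves data rather than copying it, which is one reason tiering can avoid the table-management overhead the article attributes to some caching software.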
Marrying the capacity of HDDs with the performance of flash is one of the most common use-cases for server-side flash deployments, specifically because it can reduce network traffic, or even take the SAN out of the picture entirely. Samsung recently purchased Proximal Data to expand its base of technology, and other players in this space have already made significant investments in various caching/tiering software companies. It wouldn't be entirely surprising to see Enmotus acquired in the near future.