IT/Datacenter & Super Computing News - Page 9

The latest and most important IT/Datacenter & Super Computing news - Page 9.


PMC Flashtec NVMe controllers to power 8TB Memblaze PBlaze4 PCIe SSDs

Paul Alcorn | Dec 2, 2014 1:02 PM CST

PMC Flashtec controllers are powering the next generation of Memblaze PCIe SSDs. The Memblaze PBlaze4 is designed for hyperscale and Open Compute Project architectures. The Flashtec controllers on the PBlaze4 provide up to 850,000 IOPS for random read workloads and 265,000 IOPS for random writes. Sequential performance is equally impressive, with up to 3.2/2.5 GB/s read/write available. NVMe minimizes CPU overhead and includes a number of architectural improvements for high-performance storage products. We recently took a deep dive into the new NVMe specification in our Defining NVMe article.


The Flashtec controller can address up to 8TB of flash and features 16 and 32 channel variants. Dual-port functionality provides enterprise-class high-availability features. Memblaze differentiates their products with multiple capacity points and solutions tailored for specific workloads. Memblaze utilizes NAND from several vendors, and Flashtec NVMe controllers provide a flexible architecture that supports a wide variety of NAND vendors.

PMC enjoys a market-leading position in NVMe controllers, and several hyperscale customers are already building their own NVMe SSDs with PMC controllers. Memblaze is currently the #1 PCIe SSD vendor in China and is expanding to the US and European markets; the company expects to deliver over 6PB of flash storage this year. We recently took an in-depth look at Memblaze's latest product in the Memblaze PBlaze3L 1.2TB Enterprise PCIe SSD Review. Head to our IT/Datacenter section for the latest in competitive performance analysis of Enterprise PCIe SSDs.

Continue reading: PMC Flashtec NVMe controllers to power 8TB Memblaze PBlaze4 PCIe SSDs (full post)

Seagate releases new 6TB NAS HDD for 4-16 bay NAS units

Paul Alcorn | Dec 2, 2014 12:16 PM CST

Seagate has announced the release of a new HDD aimed at 4 to 16 bay enterprise NAS deployments. The stratification of the NAS market has led to varying HDD products to address the different workloads and performance requirements of each segment. The new Seagate enterprise NAS HDD bumps speed up a notch. Typical consumer NAS models spin at 5,400 RPM, but the new Seagate NAS HDD moves up to 7,200 RPM. WD has already released the WD Red Pro, a 7,200 RPM product, to address larger NAS arrays, as outlined in our WD Red Pro 4TB Enterprise NAS HDD Review.


The WD Red Pro tops out at 4TB, but the Seagate Enterprise NAS HDD comes in 2, 3, 4, 5, and 6TB flavors and features Seagate's NASWorks firmware, which specifically tailors the drive for NAS usage. The drive also features RAID rebuild technology that supports surgical rebuilds to significantly reduce RAID rebuild time. It sports a larger 128MB cache in comparison to the WD Red Pro's 64MB, and a faster transfer speed of 216 MB/s. An optional data recovery service also offers users easy data recovery in the event of a drive failure.

The burgeoning NAS market is fueling the rapid expansion of NAS HDD offerings. The SMB and SME tower and rackmount segments are among the fastest growing, and manufacturers are providing solutions refined for each environment.

Continue reading: Seagate releases new 6TB NAS HDD for 4-16 bay NAS units (full post)

PMC scores win with Lenovo ThinkServer SAS partnership

Paul Alcorn | Nov 25, 2014 10:53 AM CST

PMC has announced that Lenovo has selected PMC storage solutions for external connectivity in its ThinkServer product line. Lenovo is offering the Lenovo 8885E by PMC for 12Gb/s SAS applications. The low-profile MD2 form factor 8885E is an HBA that provides eight SAS/SATA ports for connectivity. HBAs are becoming more popular in the datacenter as new architectures arise to leverage scale-out storage and advanced erasure coding. PMC Sierra has been very aggressive on the SAS front and recently achieved the distinction of providing the most SAS ports on a single card. This has led to a leading position in the market, and PMC has shipped more SAS ports than its competitors.


The increased density has the side effect of lower power consumption per port, which resonates well in power-constrained datacenters. PMC has measured 40% lower power consumption than its competitors with the same number of devices connected, which results in a tangible TCO reduction for customers. As a rough guideline, most datacenters spec each watt of power as an incremental cost of $2 per year. When deploying thousands of SAS adapters, a difference of a few watts per port can lead to a staggering increase in cost.
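The rule-of-thumb math above is easy to sketch. In this example, the $2-per-watt-per-year figure comes from the text, while the fleet size and per-adapter wattage difference are hypothetical illustration values:

```python
# Back-of-envelope TCO sketch using the article's rule of thumb of roughly
# $2 of incremental datacenter cost per watt per year. The fleet size and
# per-adapter wattage delta below are hypothetical illustration values.
COST_PER_WATT_YEAR = 2.00  # USD per watt per year (rule of thumb from the text)

def annual_power_cost_delta(adapters: int, watts_saved_each: float) -> float:
    """Yearly cost difference from a lower-power adapter across a fleet."""
    return adapters * watts_saved_each * COST_PER_WATT_YEAR

# Example: 5,000 adapters, each drawing 3 W less than a competing card.
print(f"${annual_power_cost_delta(5000, 3.0):,.0f} per year")  # $30,000 per year
```

Even a few watts per card compounds quickly at hyperscale deployment counts.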

12Gb/s SAS is gaining in popularity due to the bandwidth limitation of SATA SSDs. SATA is still stuck at 6Gb/s, and there are no plans to increase this in the future. SAS is cooking along at 12Gb/s and provides more bandwidth for powerful solid state drives. High-availability features also provide a more robust architecture, and until NVMe competitors can offer the same type of features, SAS will continue to provide administrators tangible benefits. The Adaptec Series 8 adapters feature the PM8063 ROC and offer great performance in a variety of workloads. We recently took the Series 8 for a test drive with 24 SATA SSDs and 8 12Gb/s SAS SSDs. Head over to our Adaptec by PMC ASR-8885 12Gb/s RAID Controller Review in the IT/Datacenter section for more in-depth coverage.

Continue reading: PMC scores win with Lenovo ThinkServer SAS partnership (full post)

ASTC lays out path to 100 TB HDDs by 2025

Paul Alcorn | Nov 25, 2014 10:18 AM CST

The quest for more storage has led to revolutionary breakthroughs in HDD technology. SSDs get the most attention in the storage world, but the incredible technology that goes into HDDs has created some of the most refined precision instruments in history. HDD density has increased 500 million fold since the initial designs were released in 1956. During the recent MMM (Magnetism and Magnetic Materials) Conference the ASTC (Advanced Storage Technology Consortium) laid out the continuing path of progress on the HDD front. Acronyms aside, the demand for more storage has resulted in billions of dollars in investments in new technology, and these new techniques are pushing us forward on the path to 100TB HDDs by 2025.


There are already 10TB HDDs on the menu for 2015, but they utilize SMR (Shingled Magnetic Recording) technology, which has some performance pitfalls. Helium drives have also come to the forefront in the quest for more density, and as demonstrated in our HGST Ultrastar He6 6TB Helium Enterprise HDD Review they deliver increased density, lower power consumption, and don't skimp on performance.

These radical new advancements are required because the pace of density increases has slowed as we reach the limits of the current HDD recording technology, PMR (Perpendicular Magnetic Recording). According to the ASTC, and several industry sources, HAMR (Heat-Assisted Magnetic Recording) should arrive in 2017. This will speed the annual density growth rate to 30%, a considerable increase from the current 15% annual increase. BPMR (Bit-Patterned Magnetic Recording) is the next step to realize incredible increases in density, and it is slated for release in the 2021 timeframe. Combining HAMR and BPMR seems to be a very promising approach that will deliver 10X the density of current HDDs, or 100TB drives, by 2025.
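The roadmap numbers hang together on simple compound growth. A minimal sketch, assuming 15% annual areal-density growth until HAMR arrives in 2017 and 30% per year thereafter (both figures from the ASTC projection above):

```python
# Compound-growth sketch of the ASTC roadmap: 15% annual areal-density
# growth until HAMR arrives (2017 per the article), then 30% per year.
def projected_density_multiple(start_year: int, end_year: int,
                               hamr_year: int = 2017) -> float:
    """Cumulative density multiple between two years under the roadmap rates."""
    multiple = 1.0
    for year in range(start_year, end_year):
        multiple *= 1.30 if year >= hamr_year else 1.15
    return multiple

# Compounding from 2014 to 2025 lands at roughly 12x -- the same ballpark
# as the consortium's "10X the density of current HDDs" projection.
print(f"{projected_density_multiple(2014, 2025):.1f}x")  # 12.4x
```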

Continue reading: ASTC lays out path to 100 TB HDDs by 2025 (full post)

Qualcomm to challenge Intel with low-cost ARM server chips

Paul Alcorn | Nov 24, 2014 12:53 PM CST

Intel enjoys a 97.8% share of the server CPU market, and with AMD continuing to slide, it hasn't looked like anyone can break Intel's stranglehold. Popular new architectures in the datacenter have brought about customized low-power designs that can handle lightweight workloads. Right-sizing servers to the task at hand lowers cost and eases cooling requirements, and ARM processors have attractive low-power features that have always made them an interesting alternative in the datacenter. Some Xeons operate within a TDP envelope of 90 watts, but many 64-bit ARM designs operate between 10 and 45 watts. Low cost is another incentive to use ARM CPUs, but a lack of specialized chips and systems has hampered expansion.
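The TDP gap translates directly into socket counts under a fixed power budget. A rough sketch using the figures quoted above (90 W for some Xeons, 10-45 W for many 64-bit ARM parts); the per-rack CPU power budget is a hypothetical illustration value:

```python
# Rough socket count per power budget, using the TDP figures from the text.
# The rack budget below is a hypothetical illustration value, not a spec.
RACK_CPU_BUDGET_W = 9_000  # hypothetical watts available for CPUs per rack

xeon_sockets = RACK_CPU_BUDGET_W // 90  # 90 W Xeon parts
arm_worst    = RACK_CPU_BUDGET_W // 45  # ARM at the top of its TDP range
arm_best     = RACK_CPU_BUDGET_W // 10  # ARM at the bottom of its range

print(xeon_sockets, arm_worst, arm_best)  # 100 200 900
```

The appeal for light-workload datacenters is clear: two to nine times the sockets in the same power envelope, provided the workload fits the smaller cores.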


This radical reduction in power consumption has led many enterprise powerhouses, such as Red Hat, to institute development projects to boost software development for 64-bit ARM platforms. Microsoft has even gotten in on the ARM-compatibility act by developing Windows RT. RT has been a failure of sorts, but many consider it to be the gateway to ARM-compatible Windows Server flavors. The expanding ecosystem development to further 64-bit ARM processors in the datacenter has placed the onus on suppliers to step up with competitive ARM offerings. One supplier with considerable heft in the ARM category has remained conspicuously silent on server CPU models, until now.

Qualcomm CEO Steve Mollenkopf has reportedly announced intentions to bring ARM server CPUs into the company's lineup. Qualcomm's entrance into the server CPU market isn't likely to budge Intel from the top spot anytime soon, but there are other advantages to increased competition. Intel's dominating market share allows it to charge a premium for its server CPUs. A low-cost alternative, backed by a bastion like Qualcomm, could open up more competitive pricing from Intel in the future. There is no announcement on release dates, but considering the slowing growth rate in other segments, we can expect Qualcomm to move quickly.

Continue reading: Qualcomm to challenge Intel with low-cost ARM server chips (full post)

Intel announces 3D NAND design

Paul Alcorn | Nov 21, 2014 2:55 PM CST

Rob Crooke, the Vice President and General Manager of the NVM (Non-Volatile Memory) Solutions Group at Intel, announced the impending release of 3D NAND at Intel's Investor Meeting. Incidentally, the presentation was running on an Intel 3D NAND SSD to demonstrate the progress Intel has already made in integrating their new 3D NAND into a workable device. The launch was a bit light on technical details of the new 3D NAND, but now that images from the presentation are available we are posting more information.


The first Intel SSD was developed in 1992 and featured a whopping 12MB capacity, and continued die shrinks have led to 128Gb dies. The transition to mainstream Intel SSDs began in 2008, and the initial revisions utilized 2D planar NAND. The continued path of NAND development has led to denser designs that sped adoption by lowering the cost per bit. Samsung released the first 3D NAND product in 2014 with 128Gb of density, and Intel's 3D NAND is slated for release in 2015.

Intel helped pioneer the SSD market, and their continued innovation has led to a huge chunk of SSD data center market share. These statistics reflect the current market share of major industry SSD manufacturers. The chart is incomplete and only lists two competitors with NAND fabrication capability. Intel includes the market share of the WD subsidiary HGST in their overall market share numbers due to the HGST and Intel JDA (Joint Development Agreement). The JDA provides Intel NAND to HGST, and in turn HGST collaborates on engineering and manufactures the SAS SSD products.

Continue reading: Intel announces 3D NAND design (full post)

SanDisk Fusion ioMemory SSDs used in CERN supercomputing projects

Paul Alcorn | Nov 20, 2014 10:33 AM CST

Supercomputing 2014: The quest to understand the building blocks of the universe requires intense computing power, which in turn requires some of the fastest storage solutions available. CERN's Large Hadron Collider, which discovered the Higgs boson in 2012, will begin colliding particles with the most energy ever achieved in a particle accelerator in 2015. This requires transmitting 170-petabyte datasets to far-flung research centers around the world. The University of Michigan and the University of Victoria are utilizing SanDisk's Fusion ioMemory solutions to handle the influx of data at their multi-site supercomputing project.


The universities need to create a data transfer architecture with the capability to move data across 100 computing centers at 100Gb/s speeds. This isn't typically a huge problem with a distributed architecture, but this particular deployment needs to provide that capability from a single server. SanDisk Fusion ioMemory products are stepping in to fulfill the extreme performance requirements, and the universities are demonstrating a data transfer from the University of Victoria campus to the WAN in the University of Michigan booth (#3569) at the Supercomputing 2014 conference.

Continue reading: SanDisk Fusion ioMemory SSDs used in CERN supercomputing projects (full post)

OCZ Storage Solutions introduces Saber 1000 SSD

Paul Alcorn | Nov 20, 2014 9:57 AM CST

OCZ Storage Solutions is leveraging its homegrown Barefoot 3 controller and firmware in tandem with Toshiba A19nm NAND for the new Saber 1000 SSD Series. OCZ's move to its own proprietary SSD controller is a big step that provides tremendous flexibility to tailor products for different segments. The OCZ Saber 1000 is geared for read-intensive workloads in high-volume hyperscale deployments.


The Saber 1000 comes in capacities of 240, 480, and 960GB, and provides an economical alternative for administrators with light and mixed workloads. The SSD features PFM+ (Power Failure Management Plus), which protects data in the event of host power loss. Another key feature is the value-added StoragePeak 1000 SSD management system, a friendly and easy-to-use GUI that allows central monitoring and management of the SSD.

Performance varies based upon capacity, with top random read/write speeds weighing in at 98,000/23,000 IOPS. Sequential speed is also impressive at 550/515 MB/s read/write. The Saber 1000 is geared for read-intensive applications such as read caching and indexing, VOD (Video On Demand), VDI, media streaming, and cloud infrastructures. We recently put the latest OCZ enterprise SSD through its paces in our OCZ Intrepid 3600 Enterprise SSD Review. Head to our IT/Datacenter section to view our library of competitive performance analysis of other leading enterprise SSDs.

Continue reading: OCZ Storage Solutions introduces Saber 1000 SSD (full post)

Micron displays Hybrid Memory Cube at SC14 as HMCC spec is finalized

Paul Alcorn | Nov 20, 2014 9:25 AM CST

Supercomputing 2014: In the world of HPC (High-Performance Computing), the bleeding edge is always the preferred route to realize insane computational power. HMC (Hybrid Memory Cube) technology is the next big thing, and it offers plenty of performance advantages over existing DRAM. The current generation of HMC technology sips power and provides more density and performance than existing memory technology. With 15 times the performance, 90 percent less space, and 70 percent less power consumption, it is easy to see why industry leaders are touting the advantages of HMC. The key to HMC adoption, as with any new technology, lies in the committees that establish industry-standard interface specifications.


The HMCC (Hybrid Memory Cube Consortium) was founded by Micron, Altera, Open-Silicon, Samsung and Xilinx in 2011 and has grown to more than 150 members. At Supercomputing 2014 the HMCC has announced the finalization and public availability of the HMCC 2.0 specification.

The new specification increases speed from 15Gb/s to 30Gb/s and migrates the associated channel model from short reach (SR) to very short reach (VSR). VSR is key to the eventual fusion of HMC into the CPU. The path to faster data processing, as with storage, involves getting closer to the CPU. The future integration into the CPU will expand upon the tremendous performance advantages of HMC. Imagine an L2 cache with 100 times the capacity.

Continue reading: Micron displays Hybrid Memory Cube at SC14 as HMCC spec is finalized (full post)

SGI demonstrates 30 million IOPS beast with Intel P3700s at SC14

Paul Alcorn | Nov 20, 2014 8:40 AM CST

Supercomputing 2014: Intel and SGI combined their talents to create an HPC monster that touts 30 million IOPS of 4K random speed with 180GB/s of sequential throughput. Scaling storage performance and capacity in tandem is an ongoing challenge in the enterprise storage world, and old interfaces have been the primary culprit hampering these objectives. A point of diminishing returns is reached as more storage devices are added to the server, and performance begins to decline as latency increases. This is a particular pain point when utilizing RAID and HBA architectures in tandem with 2.5" SSDs.


Enter the PCIe SSD. Moving flash to the PCIe bus provides better performance scaling, but many initial revisions of PCIe SSDs leveraged existing standards, such as AHCI, for host communication. This leads to performance degradation and excessive CPU overhead as performance scales. As explained in our Defining NVMe article, NVMe is a new storage protocol designed specifically for non-volatile memory. A slew of architectural refinements combine to provide the best performance possible over the PCIe interface. Intel's DC P3700 (covered in-depth in our Intel DC P3700 1.6TB NVMe Enterprise Review) is one of the fastest PCIe SSDs available, and the combination of NVMe and consistent performance provides enhanced scalability when deploying multiple units.

Intel and SGI decided to push the performance envelope by integrating a whopping 64 of the DC P3700s into a 32-socket server to test the limits of NVMe scaling. The results are quite impressive, and the modified SGI UV 300H SAP HANA server surpassed 30 million IOPS in 4K random testing. Perhaps the most impressive aspect is the linear performance scaling in the graphic above. The solid blue line denotes the IOPS performance curve as more P3700s were added, and the dotted blue line is a trend line illustrating a theoretical linear progression. From the results, we can see that the SGI beast barely deviates from the perfect scaling trend line. The throughput, tested with 128K sequential blocks, topped out just shy of 180 GB/s.
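The per-drive arithmetic behind the near-linear scaling claim is worth spelling out: if aggregate performance divided by drive count stays flat as drives are added, scaling is effectively linear. A quick sketch using the figures reported above:

```python
# Per-drive math behind the near-linear scaling claim, using the
# aggregate figures reported from the SC14 demo.
TOTAL_IOPS = 30_000_000  # aggregate 4K random IOPS
TOTAL_GBS = 180          # aggregate 128K sequential throughput, GB/s
DRIVES = 64              # DC P3700 units in the SGI UV 300H demo

iops_per_drive = TOTAL_IOPS / DRIVES
gbs_per_drive = TOTAL_GBS / DRIVES

print(f"{iops_per_drive:,.0f} IOPS per drive")  # 468,750 IOPS per drive
print(f"{gbs_per_drive:.2f} GB/s per drive")    # 2.81 GB/s per drive
```

Each drive is sustaining essentially its standalone throughput even with 64 units in one chassis, which is exactly what the flat trend line in the scaling chart shows.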

Continue reading: SGI demonstrates 30 million IOPS beast with Intel P3700s at SC14 (full post)
