There has been plenty of news over the past few months around AMD's new 7nm parts and Intel's fight to retain market share. Today this article is all AMD, as they worked with Cray to create something truly insane.
UK Research and Innovation (UKRI) has once again contracted the team at Cray to build the follow-up to the Archer supercomputer. Archer 2 is reported to offer up to 11x the throughput of the original Archer, which was put into service back in late 2013.
Archer 2 is going to be powered by nearly 12,000 EPYC Rome 64-core CPUs across 5,848 compute nodes, each node housing two of the 64-core behemoths. The total core count is 748,544 (1,497,088 threads), with 1.57PB of memory for the entire system. The CPU speed is listed as 2.2GHz, which we must assume is the base clock, so these would be EPYC 7742 CPUs with a 225W TDP. Specs like these are insane but will also generate significant heat: Archer 2 will be cooled by 23 Shasta Mountain direct liquid-cooled cabinets and their associated liquid cooling cabinets. The back end for connectivity is Cray's next-gen Slingshot 100Gbps network linking the compute groups.
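The headline figures are easy to sanity-check with a little arithmetic. The sketch below just multiplies out the node and core counts quoted above; the two-threads-per-core figure assumes SMT is enabled, as it is on Rome parts:

```python
# Back-of-the-envelope check of Archer 2's headline numbers
# (all figures taken from the article above).
nodes = 5848
cpus_per_node = 2
cores_per_cpu = 64       # EPYC Rome 64-core parts
threads_per_core = 2     # SMT gives two threads per core

total_cpus = nodes * cpus_per_node              # 11,696 -- "nearly 12,000"
total_cores = total_cpus * cores_per_cpu        # 748,544
total_threads = total_cores * threads_per_core  # 1,497,088

print(total_cpus, total_cores, total_threads)
```

The multiplication lands exactly on the 748,544 cores and 1,497,088 threads quoted, and shows where the "nearly 12,000 CPUs" figure comes from.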
AMD's EPYC lineup of processors is here and ready to compete with Intel's next generation of server-ready products.
The AMD EPYC 7000 series processors will offer up to 32 "Zen" cores, designed specifically for the server space. Each processor will have eight DDR4 channels, support for a whopping 2TB of memory, 128 PCI-E lanes, and a dedicated security subsystem, with no chipset required outside of the CPU. The motherboard socket will also be compatible with the next generation of EPYC processors, in the hope that more vendors will jump on board.
There are going to be nine dual-socket SKUs, ranging from 8 cores and 16 threads all the way up to 32 cores and 64 threads at 3.2GHz. TDPs will range up to 180W (which covers the chipset functionality too), and all processors will feature eight memory channels and 128 PCI-E lanes, so there is no compromise down the stack. The way AMD scales EPYC is through its Infinity Fabric, which links multiple dies together to push core counts beyond what a single monolithic chip could offer.
We might be running out of room on Earth for server racks and compute power. Or maybe not, but Microsoft still wants to start putting server farms and small clusters of data centers at the bottom of the ocean. It might even be greener and more cost-effective.
Project Natick is precisely the venture Microsoft is concocting to put our data under the sea, and the logic is actually quite sound. The idea is that containerized data centers can, if properly equipped, be cooled naturally and even use the energy from currents and waves to power them. It's a novel approach to making data, and the cloud, more environmentally friendly. Provided they don't leak and pollute the ocean, of course.
The researchers plan for their submersibles to have a five-year life cycle, after which they can be retrieved, refitted, and upgraded with new hardware. And what if there's a malfunction or problem? Hardware failures happen; it's just a fact of life. So what if an HDD suddenly can't write and needs to be replaced and its data restored? Presumably the whole unit would have to be retrieved by boat and attended to, which could cost more in manpower and equipment than a data center easily accessible by humans.
From the outside, it must seem like China is close to finishing construction of Skynet. China's Tianhe-2 has retained its title as the fastest computer system in the world for the sixth consecutive time, thanks to the latest Top 500 supercomputer rankings that were released yesterday.
China has almost tripled its number of supercomputers on the Top 500 list to 109, up from just 37 only six months ago. The US still has the most supercomputers of any single country, with 201, but that's its lowest number since the Top 500 list was created in 1993.
The Tianhe-2, created by China's National University of Defense Technology, has an insane 3.1 million cores and is capable of a swift 33.86 quadrillion floating-point operations (FLOPS) per second. That makes it nearly twice as fast as the second most powerful supercomputer in the world, the Titan Cray XK7 owned by the US Department of Energy.
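To put that scale in perspective, a quick bit of arithmetic using only the figures quoted above shows what each core contributes on average to the Linpack result:

```python
# Average per-core throughput of Tianhe-2, from the figures above.
total_flops = 33.86e15   # 33.86 quadrillion FLOPS per second
cores = 3.1e6            # roughly 3.1 million cores

gflops_per_core = total_flops / cores / 1e9  # roughly 11 GFLOPS per core
print(f"{gflops_per_core:.1f} GFLOPS per core on average")
```

Raw core count clearly isn't the whole story; it's the sustained throughput per core, multiplied by millions of cores, that puts Tianhe-2 on top.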
GTC 2015 - At NVIDIA GTC 2015, Lenovo showed its NeXtScale system blades and enclosure. The blades can be configured with NVIDIA Tesla K80 cards, as compute nodes, or as storage.
Here we see three blade examples, with the NVIDIA Tesla K80 blade in the middle. This blade can support two K80s or two Intel Xeon Phi coprocessor cards in a single double-wide blade. Other blades can be configured as compute nodes supporting two Intel Xeon E5-2600 v3 processors, or as storage blades housing up to 7x hard drives.
Also on display was Lenovo's N1200 server enclosure outfitted to support NVIDIA GRID servers.
SEMI-THERM 31 - We had a chance to visit QuantaCool at SEMI-THERM 31 to see their new cooling systems, which use QuantaCool's MHP technology to provide passive cooling of high-intensity heat sources such as CPUs. There are no moving parts in the loop and no water; the cooling fluids are safe, environmentally benign, and electrically nonconductive. These systems do not require a pump: coolant circulation is driven by the heat being removed, with gravity returning the fluid to complete the loop.
This was QuantaCool's first trade show, and they made a strong impression at SEMI-THERM 31. The systems they demonstrated are still at the prototype stage, but they had several up and running to show their cooling potential in different configurations.
The first system was a workstation running an Intel Core i7-4770K at 4.6GHz. This system had been running heavy stress loads for several days and maintained operation without a glitch.
GTC 2015 - At NVIDIA GTC 2015, Tyan displayed two of its heavy-duty HPC platforms. While most companies showed off GPU platforms, Tyan was there with its powerful high-performance computing platforms.
The first system is the FT77C-B7079, a 4U platform designed for up to 8x Intel Xeon Phi coprocessors. It is a dual-socket system using Intel E5-2600 v3 processors and fast DDR4 memory.
Next, we found a real powerhouse and the only quad-CPU system we saw at NVIDIA GTC 2015: the FT76-B7922, a 4U four-socket server platform for both enterprise and HPC applications. It takes 4x Xeon E7-4800 v3 processors, and recent leaks suggest the E7-4800 v3 line goes as high as 14 cores each, which could give this system 56 cores / 112 threads. For memory, it offers a massive capability of up to 6.144TB of DDR4 across 8x memory risers with 96 memory slots.
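Those capacity figures multiply out neatly. Note that the 64GB DIMM size below is our inference from dividing 6.144TB by 96 slots, not a spec Tyan quoted:

```python
# FT76-B7922 capacity math, using the figures quoted above.
cpus = 4
cores_per_cpu = 14       # top E7-4800 v3 SKU per the leaks
memory_slots = 96
dimm_size_gb = 64        # inferred: 6.144TB / 96 slots = 64GB per DIMM

cores = cpus * cores_per_cpu             # 56 cores
threads = cores * 2                      # 112 threads with Hyper-Threading
memory_gb = memory_slots * dimm_size_gb  # 6,144GB, i.e. 6.144TB

print(cores, threads, memory_gb)
```
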
We also spotted Tyan's TN71-BP012 2U OpenPOWER platform for CSP deployment. This system uses IBM POWER8 Turismo SCM processors and can handle up to 1TB of DDR3.
GTC 2015 - Today at NVIDIA GTC 2015, we stopped by Supermicro's booth to look at the latest Tesla GPU Superservers. As always, Supermicro has a wide variety of servers and workstations to meet the needs of its customers.
Sumit Gupta, general manager of Accelerated Computing at NVIDIA, explained to us: "Supermicro's new high-density servers provide a range of computing solutions for enterprise and HPC customers." He continued: "Designed to take full advantage of ultra-high performance Tesla GPU accelerators while minimizing power consumption, the servers bring new levels of energy-efficient performance for compute-intensive data analytics, deep learning and scientific applications."
The first offering is the SYS-7048GR-TR, 4U Dual Processor GPU SuperWorkstation with 4-Way GeForce SLI Support.
GTC 2015 - Walking around at NVIDIA GTC 2015, you could not help but notice the 16x GPU Compute Accelerator from One Stop Systems. These external GPU accelerators are used in applications like seismic modeling, trading algorithms, and research.
The showstopper is One Stop Systems' High Density Compute Accelerator (HDCA), which can accommodate up to 16 NVIDIA Tesla K80s. This beast of a machine includes dual redundant 6,000-watt PSUs to power all those GPUs. The interface is One Stop Systems' PCIe expansion card, which connects the HDCA to a host machine to expand its GPU capabilities.
One Stop Systems also offers smaller GPU boxes called "The Cube". These boxes expand a system's GPU capability using OSS's PCIe expansion card and come in several sizes, from the pCUBE with 1x GPU all the way up to the CUBE3 supporting 8x GPUs.
GTC 2015 - At NVIDIA GTC 2015, ASRock Rack had their 3U8G-C612 High Density GPU Server on display. The 3U8G-C612 is designed for VCA, High-End AOI, Multi-Display Systems and Face Recognition systems.
The 3U8G features support for GPGPU cards including NVIDIA Tesla, AMD FirePro, and Intel Xeon Phi. The form factor is a 3U chassis with dual processors and up to 8x GPUs. This is a very robust server that functions as a standalone machine, with its own dual E5-2600/4600 v3 processors and fast DDR4 memory for processing power.