
Super Computing Posts - Page 1

HPE's new supercomputer will help NASA run Crysis on the Moon

By: Anthony Garreffa | Super Computing | Posted: Aug 25, 2019 @ 22:19 CDT

HPE has teamed with NASA on its next supercomputer, with HPE and NASA Ames Research Center signing a four-year, multi-phase partnership that will see HPE's new Aitken supercomputer supporting future missions to the Moon.


NASA will use HPE's new Aitken supercomputer for its Artemis program, which will see humans returning to the Moon in 2024. Aitken will handle the calculations, modeling, and simulations of entry, descent, and landing (EDL) for those missions.

Inside, Aitken is based on HPE's SGI 8600 HPC platform, a tray-based, scalable supercomputer cluster.

Aitken packs 1,150 nodes, each of which has 2 x 20-core second-gen Intel Xeon Scalable processors and Mellanox InfiniBand interconnects. This means Aitken has a huge 46,080 cores and an even crazier 221TB of memory across those nodes, providing an impressive 3.69 petaflops of performance.
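Taking the quoted figures at face value, a quick back-of-the-envelope check is easy to run yourself (a rough sketch: the derived per-node and per-core numbers are our estimates, not specs published by HPE or NASA):

```python
# Back-of-the-envelope check of the quoted Aitken figures. The derived
# per-node memory and per-core throughput are estimates, not vendor specs.

nodes = 1150             # quoted node count
cores_per_node = 2 * 20  # two 20-core Xeon Scalable CPUs per node
total_cores = 46_080     # quoted aggregate core count
memory_tb = 221          # quoted aggregate memory
peak_pflops = 3.69       # quoted peak performance

print(f"cores implied by node count: {nodes * cores_per_node:,}")          # ~46,000
print(f"memory per node: {memory_tb * 1000 / nodes:.0f} GB")               # ~192 GB
print(f"peak per core: {peak_pflops * 1e15 / total_cores / 1e9:.0f} GFLOPS")  # ~80 GFLOPS
```

Roughly 192GB of memory and two CPUs per node, with each core contributing around 80 GFLOPS of peak throughput, which lines up with what you'd expect from a second-gen Xeon Scalable cluster of this size.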


Supercomputer built 8 million simulated universes in 3 weeks

By: Jak Connor | Super Computing | Posted: Aug 12, 2019 @ 7:17 CDT

Scientists are always looking for newer and better ways to understand the universe we live in, and one of the best ways they can do that is through simulations.


According to a new announcement from researchers at the University of Arizona, the Ocelote supercomputer has generated 8 million simulated universes for scientists to study. These simulated universes will be compared directly against our actual cosmos, and through that comparison scientists hope to draw better conclusions about the cosmic events that occurred, while also filling in missing data points that currently puzzle theorists.

While 8 million simulated universes in just three weeks is certainly an achievement in itself, Ocelote didn't have the power to render these universes in full detail, as that would require an astronomical amount of computing power. Instead, the scientists set up the system so that the computer produced results covering a "sizeable chunk" of the observable universe. It should also be noted that each universe was devised under a completely different set of rules, meaning scientists now have a lot of comparison work ahead of them.
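The basic pattern is a huge parameter sweep: every universe gets its own set of "rules", the cluster churns through millions of them, and the outputs are compared against real observations. Here's a minimal illustrative sketch of that pattern; the simulate_universe function and parameter names are hypothetical stand-ins, not the Arizona team's actual pipeline:

```python
# Illustrative parameter-sweep sketch: each simulated universe gets its own
# randomly drawn "rules". simulate_universe() and the parameter names are
# hypothetical placeholders, not the University of Arizona code.
import random
from multiprocessing import Pool

def draw_parameters(seed):
    rng = random.Random(seed)
    return {
        "matter_density": rng.uniform(0.25, 0.40),       # hypothetical ranges
        "star_formation_rate": rng.uniform(0.5, 2.0),
        "supernova_feedback": rng.uniform(0.1, 1.0),
    }

def simulate_universe(seed):
    params = draw_parameters(seed)
    # A real run would evolve galaxies under these rules and return
    # observables to compare against telescope surveys.
    return seed, params

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(simulate_universe, range(1000))  # 8 million on Ocelote
    print(f"simulated {len(results)} universes")
```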

AMD powers world's largest, most expensive supercomputer

By: Anthony Garreffa | Super Computing | Posted: May 8, 2019 @ 20:24 CDT

AMD has just scored a gigantic deal: the company will power the next-gen fastest and most expensive supercomputer in the world, with the US Department of Energy buying a new AMD-powered custom supercomputer built by Cray, called Frontier.


Frontier will be powered by super-fast EPYC processors and Radeon Instinct accelerators that will pump out an astonishing, record-breaking 1.5 exaflops of processing power. The system will be used for tasks including advanced calculations in nuclear and climate research, and simulating quantum computers, nuclear reactors, and more.

The new system will be delivered in late 2021 and turned on and cranking along in 2022 at the Oak Ridge National Laboratory in Tennessee. AMD has some huge bragging rights here, as Frontier has as much processing power as the 160 fastest supercomputers combined -- yeah, combined.
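For a sense of scale, here's a quick conversion (a rough sketch using only the figures quoted on this page, including the 200-petaflop Summit number from the post further down):

```python
# Rough scale comparison using figures quoted on this page, not official
# DOE/Cray benchmarks: Frontier's 1.5 exaflops vs Summit's 200 petaflops.
frontier_exaflops = 1.5
summit_petaflops = 200

frontier_petaflops = frontier_exaflops * 1000
print(f"Frontier: {frontier_petaflops:,.0f} petaflops")
print(f"That's {frontier_petaflops / summit_petaflops:.1f}x the 200-petaflop Summit")  # ~7.5x
```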


Intel develops new tools to speed up quantum computer tech

By: Anthony Garreffa | Super Computing | Posted: Feb 28, 2019 @ 23:34 CST

Intel wants to boost the development of quantum computing technology, with the chipmaker unveiling its new Cryogenic Wafer Prober, which allows researchers to test qubits on 300mm silicon wafers at super-low temperatures. Intel says this is the first testing tool of its kind for quantum computing, making it a very big deal.


Intel partnered with Bluefors and Afore on the new cryoprober. The testing tool was born out of the development of Intel's own quantum computer, where the company worked out it needed a cryoprober to make it easier to test qubits in silicon before they're finalized, put into quantum chips, and sent off to customers. The company added that the cryoprober will allow it to scale up manufacturing of silicon quantum computers with fewer issues.

Quantum chips are normally tested for months at a time in a super-low temperature dilution refrigerator to work out what works and what doesn't -- normal transistors, by comparison, can be tested within an hour. The feedback from testing can then be used to make tweaks that are fed back to manufacturing before the chips are made.


NVIDIA powers world's fastest supercomputer: 200 petaflops

By: Anthony Garreffa | Super Computing | Posted: Jun 11, 2018 @ 20:29 CDT

The US is home again to the world's fastest supercomputer, with Summit making its debut at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. Summit is powered by NVIDIA technology, which is how it has become the best in the supercomputer business.


Inside of Summit you'll find an insane 27,648 of NVIDIA's super-fast Volta Tensor Core GPUs, which are capable of 200 petaflops of computing power. Considering the previous supercomputer champion, China's Sunway TaihuLight, pushes 93 petaflops, Summit has truly climbed to new supercomputing heights.

On top of the 27,648 Volta Tensor Core GPUs there are also 9,216 CPUs, all crammed into 5,600 square feet of cabinet space -- about the size of two tennis courts. The combined system weighs roughly as much as a commercial jet, and considering the 200 petaflops of power, this is an amazing technical achievement. Summit is capable of 3 exaops of AI: if every single human being on Earth did one calculation per second, it would take around 15 years to match what Summit can do in a single second.
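That claim is easy to sanity-check (a rough order-of-magnitude sketch; the world population figure is our assumption, roughly the 2018 value, not part of NVIDIA's statement):

```python
# Order-of-magnitude check of the "every human, one calculation per second"
# claim. The world population here is an assumption (~7.6 billion in 2018).
summit_ai_ops_per_sec = 3e18   # 3 exaops of AI
world_population = 7.6e9       # assumed, each person doing 1 calc/sec

seconds_needed = summit_ai_ops_per_sec / world_population
years_needed = seconds_needed / (365 * 24 * 3600)
print(f"{years_needed:.1f} years of all-of-humanity arithmetic per Summit-second")
# ~12.5 years -- the same order of magnitude as the quoted 15-year figure
```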


Google's new TPU 3.0 revealed, REQUIRES liquid cooling

By: Anthony Garreffa | Super Computing | Posted: May 10, 2018 @ 0:11 CDT

Google has just blown the industry away with its new TPU 3.0, its next-gen custom-designed processor that is ridiculously over-powered for training machine learning systems.


TPU 3.0 is 8x faster than its predecessor; considering the first TPU was only released in 2015, the company has made leaps and bounds. A pod of TPU 2.0s packed ASICs featuring 64GB of HBM that pumped out 2.4TB/sec of memory bandwidth, which is pretty insane. In comparison, the Radeon RX Vega 64 with 8GB of HBM2 is capable of 512GB/sec.
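Putting those two quoted bandwidth figures side by side (a quick sketch, numbers exactly as stated above):

```python
# Memory-bandwidth comparison using only the figures quoted in this post.
tpu2_pod_bandwidth_tb_s = 2.4   # TPU 2.0 pod ASICs, as quoted
vega64_bandwidth_gb_s = 512     # Radeon RX Vega 64 (8GB HBM2)

ratio = (tpu2_pod_bandwidth_tb_s * 1000) / vega64_bandwidth_gb_s
print(f"TPU 2.0 pod bandwidth is ~{ratio:.1f}x a Radeon RX Vega 64")  # ~4.7x
```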

Google should be the new AI chip champion with TPU 3.0 ready for TensorFlow use, as well as a refined push into the cloud. The new TPU 3.0 chips are so next-level that they require liquid cooling to keep temperatures in check, but a pod of them provides a huge 100 PFLOPs of machine learning power... crazy stuff.

Google didn't provide full hardware specifications for TPU 3.0 apart from it being 8x faster than TPU 2.0, so we'll have to wait a little while longer to see just what delivers that jump over its predecessor. I'm sure Google is using a new process node, HBM2, and much more to reach these lofty heights.

Intel Nervana Neural Network Processor: 32GB HBM2 at 1TB/sec

By: Anthony Garreffa | Super Computing | Posted: Dec 7, 2017 @ 21:53 CST

Intel is hard at work on the research and development side of its upcoming Nervana Neural Network Processor, a new chip that will blow away any general-purpose processor for machine learning and AI applications.


Carey Kloss, Vice President of Hardware for Intel's Artificial Intelligence Products Group, has provided an update on the progress Intel has made on the NNP.

What does a neural network processor (NNP) have to do? Training a machine using neural networks requires a gigantic amount of memory and arithmetic operations to generate useful output. Beyond that, scaling capability, power consumption, and maximum utilization are the cornerstones of Intel's Nervana design.
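To get a feel for why training chews through memory and arithmetic so quickly, here's a toy estimate for a single dense layer (a minimal sketch; the layer sizes and batch size are arbitrary examples, nothing to do with Nervana's internals):

```python
# Toy estimate of memory and arithmetic for training one dense layer,
# to illustrate why an NNP needs huge memory bandwidth. The layer sizes
# and batch size are arbitrary examples, not Nervana specifications.
batch = 1024
in_features, out_features = 4096, 4096
bytes_per_value = 4  # fp32

weights = in_features * out_features
activations = batch * (in_features + out_features)
weight_grads = weights  # gradients stored alongside weights during training

memory_gb = (weights + activations + weight_grads) * bytes_per_value / 1e9
flops_fwd_bwd = 3 * 2 * batch * in_features * out_features  # forward + two backward matmuls

print(f"~{memory_gb:.2f} GB for one layer's weights, grads and activations")
print(f"~{flops_fwd_bwd / 1e9:.0f} GFLOPs per training step for that single layer")
```

Stack a few hundred layers like that, keep the chip fed every cycle, and the appeal of 32GB of HBM2 at 1TB/sec becomes obvious.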


Amazon goes 1984, uses cloud AI to translate/track people

By: Anthony Garreffa | Super Computing | Posted: Nov 30, 2017 @ 0:55 CST

During the recent Amazon Web Services re:Invent conference in Las Vegas on Wednesday, AWS boss Andy Jassy announced that the company will be enabling a suite of AI-powered tools. Jassy told the audience of over 40,000 people: "We have to solve the problem of making [AI] accessible for everyday developers and scientists".


These new cloud-based AI tools will be capable of measuring sentiment, tracking people in live video feeds, translating languages, and much more. The list of new AI-enabled services AWS announced is scary good -- check them out (a minimal usage sketch for the sentiment service follows the list):

  • The new Amazon Rekognition Video tool is able to recognize and track people in real-time video feeds, giving it certain advantages over video recognition tools from cloud rivals Google and Microsoft.
  • The Amazon Transcribe system can transcribe audio recordings of people speaking into clean text files.
  • The Amazon Comprehend service can pick up on positive or negative sentiment and certain people, places and phrases in text.
  • AWS also unveiled Amazon Translate, a service for translating text from one language into another, which Google has provided to developers for years. CNBC first reported that AWS was working on a translation tool in June.
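As promised above, here's a minimal sketch of calling the Comprehend sentiment service through boto3; it assumes you already have AWS credentials and a region configured, and the sample text is just an example:

```python
# Minimal sketch: sentiment analysis with Amazon Comprehend via boto3.
# Assumes AWS credentials/region are already configured on the machine.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new cloud AI services look genuinely useful.",
    LanguageCode="en",
)
print(response["Sentiment"])       # e.g. POSITIVE
print(response["SentimentScore"])  # per-class confidence scores
```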

Google is building its new AI research lab in Canada

By: Anthony Garreffa | Super Computing | Posted: Jul 6, 2017 @ 1:15 CDT

I thought that Skynet would open up in the US, but it looks like the Terminators will want some bacon and maple syrup instead, with Google announcing its new DeepMind AI research lab is open for... well... business I guess, in Canada.


DeepMind has announced that its new AI research lab is opening up in Edmonton, Alberta later this month, with three University of Alberta computer science professors (Richard Sutton, Michael Bowling, and Patrick Pilarski) leading the group. They will be joined by seven other AI leaders, too. The big question is: why aren't Google's new AI digs opening up on US soil? Recode reports that there are familiarity and political considerations at play: over a dozen University of Alberta grads already work at DeepMind, and Sutton was one of the first to join the AI lab as an advisor.

The Canadian government is also more willing to invest in AI research, cozying up to AI scientists to the tune of $125 million in funding - on top of existing funding. On US soil, the Trump administration is shying away from scientific research, proposing major funding cuts.

Big Pharma is tapping AI for drug delivery process

By: Anthony Garreffa | Super Computing | Posted: Jul 3, 2017 @ 23:35 CDT

It appears that Skynet wants us all on Big Pharma drugs, with British pharmaceutical giant GlaxoSmithKline (GSK) looking to AI to design better, more efficient - and, I'm sure, more profitable - drugs.


GSK announced a new partnership with Exscientia, a British company that specializes in drug design. The two will work together, using Exscientia's AI-enabled platform to discover new, high-quality drug candidate molecules. GSK has tasked Exscientia with working on 10 specific disease-related targets, and if it hits those targets, GSK will write a cheque for $43 million in research payments.

The partnership will see the companies tapping into the power of supercomputers and machine learning to predict how new compounds will behave, and speeding this process up with crazy amounts of AI-aided computing power will save the company both time and money. Human researchers are nowhere near as efficient as AI and supercomputers working every second of every day on a billion things at once, which could mark a very big change for medicine.
