Google has reached a monumental achievement for the future of quantum processors and quantum computing, with its experimental quantum processor completing a 10,000-year task in just 3 minutes and 20 seconds.
In a blog post on Google's website, CEO Sundar Pichai explained that Nature's 150th anniversary issue carried the big news that Google's team of researchers had achieved a "big breakthrough in quantum computing known as quantum supremacy".
Google flexed its new quantum processing muscles to achieve that supremacy, with Pichai explaining: "As we scale up the computational possibilities, we unlock new computations. To demonstrate supremacy, our quantum machine successfully performed a test computation in just 200 seconds that would have taken the best known algorithms in the most powerful supercomputers thousands of years to accomplish".
Every day we move one step closer to unlocking the mysteries of quantum computing and the benefits it can provide the human race. Today is yet another one of those days.
According to researchers at The Johns Hopkins University, a newly discovered superconducting material has been found to have "properties that could be the building blocks for technology of the future." Quantum computing is among the most complicated computing humans are currently working on, and if you have a general grasp of how normal computers work, you should be able to appreciate the complexity of any quantum progression.
The bits in all traditional computers use a 0 or 1, represented by an electrical voltage pulse, to store information. Quantum computers, which are based on the laws of quantum mechanics, use quantum bits, better known as qubits. A qubit can exist in the 0 state, in the 1 state, or in both states at the same time. This is called a superposition; perhaps you have heard of Schrodinger's cat, the famous thought experiment that illustrates the same idea?
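To make the idea of superposition concrete, here's a minimal sketch (our own illustration, not real quantum hardware): a qubit's state can be written as two amplitudes whose squared magnitudes give the odds of measuring 0 or 1.

```python
import math

# Toy model of a single qubit: two amplitudes (a, b) with
# |a|^2 + |b|^2 = 1. (1, 0) is the 0 state, (0, 1) is the 1 state.
a = 1 / math.sqrt(2)  # amplitude for the 0 state
b = 1 / math.sqrt(2)  # amplitude for the 1 state

# In this equal superposition, a measurement reads 0 or 1 with
# probability given by each squared amplitude.
p0 = abs(a) ** 2
p1 = abs(b) ** 2
print(round(p0, 10), round(p1, 10))  # 0.5 0.5
```

Until it's measured, the qubit genuinely carries both possibilities at once, which is where the extra computational power comes from.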
D-Wave announced its next-gen quantum computer, dubbed 'Advantage', and already had its first customer lined up for the next wave in quantum computing at the time of the announcement.
D-Wave's new quantum computer already has its first customer, with nuclear weapons research site Los Alamos National Laboratory (LANL) securing the next-gen machine. This isn't LANL's first business with D-Wave either; it'll actually be the third upgrade to the lab's in-house D-Wave quantum computer.
Los Alamos National Laboratory associate director for simulation and computation, Irene Qualters, said in a statement: "This is the third time we will have upgraded our D-Wave system. Each upgrade has enabled new research into developing quantum algorithms and new tools in support of Los Alamos' national security mission. Quantum computing is a critical area of research for Los Alamos".
- 53 qubits - IBM's new Q quantum computer
- 53 qubits - Google's new Sycamore quantum computer
- 72 qubits - Google's Bristlecone quantum computer
- 2000 qubits - D-Wave's current quantum computer
- 5000 qubits - D-Wave's new Advantage quantum computer
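Those counts aren't all apples-to-apples: D-Wave's annealing qubits work differently from the gate-model qubits in IBM's and Google's machines and aren't directly comparable. For gate-model machines, though, the reason every extra qubit matters can be sketched quickly: each one doubles the number of basis states a register can hold in superposition.

```python
# n gate-model qubits span 2**n basis states in superposition,
# so the state space grows exponentially with the qubit count.
for name, qubits in [("IBM Q", 53), ("Google Sycamore", 53),
                     ("Google Bristlecone", 72)]:
    print(f"{name}: 2**{qubits} = {2 ** qubits:,} basis states")
```

At 53 qubits that's already about 9 quadrillion basis states, which is why simulating these chips classically gets hard so quickly.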
The fight for quantum supremacy might have just been tipped into Google's favor, with the search giant saying it has reached a major milestone towards the development of quantum computing.
A recent paper was published and then quickly pulled from NASA's website, stating that "this experiment marks the first computation that can only be performed on a quantum processor". The research paper was titled "Quantum supremacy using a programmable superconducting processor".
Google's in-house quantum computer smashed through a random number sampling calculation in just 3 minutes and 20 seconds, a task the world's fastest supercomputer, Summit, would take around 10,000 years to complete. The authors of the paper wrote: "To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor". Impressive stuff.
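For a sense of scale, here's the back-of-envelope speedup implied by the quoted figures, 200 seconds on the quantum processor versus roughly 10,000 years on Summit:

```python
# Speedup implied by the quoted figures: 200 seconds on the quantum
# processor versus roughly 10,000 years on a classical supercomputer.
seconds_per_year = 365 * 24 * 3600
classical_seconds = 10_000 * seconds_per_year
quantum_seconds = 200

speedup = classical_seconds / quantum_seconds
print(f"~{speedup:.2e}x faster")  # roughly 1.58e+09x
```

That's a speedup on the order of a billion and a half, for this one carefully chosen benchmark task.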
HPE has teamed up with NASA on a supercomputer collaboration, providing its new Aitken supercomputer for future missions to the moon. HPE and NASA Ames Research Center have signed a four-year, multi-phase partnership around Aitken.
NASA will use HPE's new Aitken supercomputer for its Artemis program, which will see humans returning to the moon in 2024. Aitken will handle the calculations, modeling, and simulations of entry, descent, and landing (EDL) on the moon.
Inside, HPE's new Aitken supercomputer is based on HPE's SGI 8600 HPC platform, a tray-based, scalable supercomputer cluster.
Aitken packs 1,150 nodes, each of which has two 20-core second-gen Intel Xeon Scalable processors and Mellanox InfiniBand interconnects. That gives Aitken a huge 46,080 cores and an even crazier 221TB of memory across its nodes, providing an impressive 3.69 petaflops of performance.
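Dividing out the headline numbers gives a feel for what each node and core is doing; the totals are HPE's quoted specs, the per-unit figures below are our own arithmetic:

```python
# Rough per-node and per-core figures derived from the quoted specs.
nodes = 1150
cores = 46_080
memory_tb = 221
pflops = 3.69

memory_per_node_gb = memory_tb * 1000 / nodes
gflops_per_core = pflops * 1e15 / cores / 1e9
print(f"~{memory_per_node_gb:.0f} GB per node, "
      f"~{gflops_per_core:.0f} GFLOPS per core")  # ~192 GB, ~80 GFLOPS
```

Roughly 192GB of memory per node and 80 GFLOPS per core, which is about what you'd expect from second-gen Xeon Scalable silicon.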
Scientists are always looking for newer and better ways to understand the universe we live in, and one of the best tools they have for doing that is simulation.
According to a new announcement by researchers at the University of Arizona, the Ocelote supercomputer has managed to generate 8 million simulated universes for scientists to study. These simulated universes will be directly compared to our actual cosmos, and through that comparison scientists hope to better understand the cosmic events that occurred while also filling in missing data points that currently puzzle theorists.
While 8 million simulated universes in just three weeks is certainly an achievement in itself, Ocelote didn't have the power to render these universes in every detail, as that would require an astronomical amount of computing power. Instead, the scientists set up Ocelote to produce results covering a "sizeable chunk" of the observable universe. It should also be noted that each of the universes was devised under a completely different set of rules, meaning scientists now have a lot of comparison work ahead of them.
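One way a sweep like this can produce millions of distinct "rule sets" is by simulating every combination of a grid of input parameters. The sketch below is purely illustrative; the parameter names and values are our assumptions, not the Arizona team's actual grid:

```python
from itertools import product

# Hypothetical parameter sweep: every combination of cosmological
# inputs defines one simulated universe. Names and values here are
# illustrative only.
dark_matter_fractions = [0.20, 0.25, 0.30]
star_formation_rates = [0.5, 1.0, 2.0]

universes = [
    {"dark_matter": dm, "star_formation": sfr}
    for dm, sfr in product(dark_matter_fractions, star_formation_rates)
]
print(len(universes))  # 9 rule sets from a 3 x 3 grid
```

Scale the grid up to a handful of parameters with dozens of values each and the combinations quickly run into the millions.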
AMD has just scored a gigantic deal: the company will power the next-gen fastest and most expensive supercomputer in the world, with the US Department of Energy buying a new AMD-powered custom supercomputer built by Cray, called Frontier.
Frontier will be powered by super-fast EPYC processors and Radeon Instinct accelerators that will pump out an astonishing, record-breaking 1.5 exaflops of processing power. The system will be used for various tasks, including advanced calculations in nuclear and climate research, simulating quantum computers, nuclear reactors, and more.
The new system will be delivered in late 2021 and up and running in 2022 at the Oak Ridge National Laboratory in Tennessee. AMD has some huge bragging rights here, as Frontier will have as much processing power as the 160 fastest supercomputers today combined. Yeah, combined.
Intel wants to boost the development of quantum computing technology, with the chipmaker unveiling its new Cryogenic Wafer Prober, which allows researchers to test qubits on 300mm silicon wafers at super-low temperatures. Intel says this is the first quantum computing testing tool ever made, making it a very big deal.
Intel partnered with Bluefors and Afore on the new cryoprober. During development of Intel's own quantum computer, the company worked out that it needed a cryoprober to make it easier to test qubits in silicon before they're finalized, put into quantum chips, and sent off to customers. The company added that the cryoprober will allow it to scale up manufacturing of silicon quantum computers with fewer issues.
Quantum computers and their respective chips are normally tested for months on end in a super-low temperature dilution refrigerator to work out what works and what doesn't. Normal transistors can be tested within an hour, versus months for quantum chips. The feedback from testing can then be used to make tweaks before the chips go to manufacturing.
The US is once again home to the world's fastest supercomputer, with Summit making its debut at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. Summit is powered by NVIDIA technology, which has propelled it to the top of the supercomputer business.
Inside Summit you'll find an insane 27,648 of NVIDIA's super-fast Volta Tensor Core GPUs, capable of 200 petaflops of computing power. Considering that the current supercomputer champion, China's Sunway TaihuLight, pushes only 93 petaflops, Summit has truly climbed new supercomputing heights.
On top of the 27,648 Volta Tensor Core GPUs there are also 9,216 CPUs, all crammed into 5,600 square feet of cabinet space, roughly the size of two tennis courts. The combined system weighs about as much as a commercial jet, and considering the 200 petaflops of power, this is an amazing technical achievement. Summit is capable of 3 exaops of AI performance: if every single human being on Earth did one calculation per second, it would take about 15 years to match what Summit does in a single second.
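The humanity-versus-Summit claim checks out on the back of an envelope. The population figure below is our assumption; the quoted "15 years" implies a population in the 6-7 billion range:

```python
# Sanity check on the comparison: 3 exaops of AI performance versus
# everyone on Earth doing one calculation per second.
summit_ai_ops_per_second = 3e18
world_population = 6.3e9  # assumed population behind the quoted claim

seconds_of_human_work = summit_ai_ops_per_second / world_population
years = seconds_of_human_work / (365 * 24 * 3600)
print(f"~{years:.0f} years of humanity's work per Summit-second")
```

The arithmetic lands right around the 15-year figure, so the comparison is fair, if mind-bending.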
Google has just blown the industry away with its new TPU 3.0, a next-gen custom-designed processor that is ridiculously over-powered for training machine learning systems.
TPU 3.0 is 8x faster than its predecessor; with the first TPU released in 2015, the company has come leaps and bounds. A pod of TPU 2.0s packed ASICs featuring 64GB of HBM that pumped out 2.4TB/sec of bandwidth, which is pretty insane. In comparison, the Radeon RX Vega 64 with 8GB of HBM2 is capable of 512GB/sec of memory bandwidth.
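Putting those quoted bandwidth figures side by side makes the gap clear:

```python
# Memory-bandwidth ratio from the quoted figures: a TPU 2.0 pod's HBM
# at 2.4 TB/s versus the Radeon RX Vega 64's HBM2 at 512 GB/s.
tpu2_pod_gb_per_sec = 2400
vega64_gb_per_sec = 512

ratio = tpu2_pod_gb_per_sec / vega64_gb_per_sec
print(f"{ratio:.2f}x the Vega 64's bandwidth")  # 4.69x
```

Nearly 5x a high-end consumer GPU, and that's the previous-generation TPU pod.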
Google should be the new AI chip champion with TPU 3.0 ready for TensorFlow use, as well as a refined push into the cloud. The new TPU 3.0 chips are so next-level that they require liquid cooling to keep temperatures in check, while providing a huge 100 PFLOPs of machine learning power... crazy stuff.
Google didn't provide full hardware specifications for TPU 3.0 apart from it being 8x faster than TPU 2.0, so we'll have to wait a little longer to see just what makes it 800% faster than its predecessor. I'm sure Google is using a new process node, HBM2, and much more to reach these lofty heights.