Google has just blown the industry away with TPU 3.0, its next-gen custom-designed processor that is ridiculously powerful at training machine learning systems.
TPU 3.0 is 8x faster than its predecessor, and considering the first TPU was only deployed in 2015, the company has made leaps and bounds. A TPU 2.0 pod packed ASICs featuring 64GB of HBM pumping out 2.4TB/sec of memory bandwidth, which is pretty insane. For comparison, the Radeon RX Vega 64 with 8GB of HBM2 manages around 484GB/sec.
Google should be the new AI chip champion with TPU 3.0 ready for TensorFlow workloads, backed by a refined push into the cloud from Google. The new TPU 3.0 chips run so hot that they require liquid cooling, but a full pod delivers over 100 PFLOPs of machine learning power... crazy stuff.
Google didn't provide full hardware specifications for TPU 3.0 beyond it being 8x faster than TPU 2.0, so we'll have to wait a little while longer to see just what drives that 8x jump over its predecessor. I'm sure Google is using a new process node, HBM2, and much more to reach these lofty heights.