It looks like The Matrix and the Terminator movies weren't enough to make us stop trying to create an AI takeover: Facebook has just announced plans to open source its Open Rack-compatible hardware design for AI computing, codenamed Big Sur.
Facebook's Kevin Lee and Serkan Piantino explained that Big Sur was built to house eight high-performance GPUs, each consuming up to 300W, based on NVIDIA's Tesla Accelerated Computing Platform. They claimed that Big Sur is twice as fast as the previous generation of hardware, which relied on off-the-shelf components and designs.
The increased speed allows Facebook to train neural networks twice as fast, as well as to explore networks twice as large as before. On top of that, distributing training across the eight GPUs lets the size and speed of those networks be scaled by another factor of two.
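Facebook hasn't released the training code that runs on Big Sur, but as a rough illustration of what data-parallel training across eight GPUs looks like, here is a minimal sketch in PyTorch (a framework assumption; the model, layer sizes, and batch size are all invented for the example):

```python
import torch
import torch.nn as nn

# Hypothetical toy model; Big Sur's actual workloads are not public.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)

# On a Big Sur-style machine with eight visible GPUs, DataParallel
# replicates the model on each device and splits every input batch
# across them. On a single-GPU or CPU box, this wrapping is skipped.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a synthetic batch of 256 samples: with eight
# GPUs, each one processes 32 samples, and gradients are combined.
inputs = torch.randn(256, 1024)
targets = torch.randint(0, 10, (256,))
if torch.cuda.is_available():
    inputs, targets = inputs.cuda(), targets.cuda()

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
```

Each GPU holds a replica of the model and works on a slice of every batch, which is what lets training throughput scale with the GPU count, roughly the kind of scaling Facebook is describing.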