Engineers are searching for new ways to leverage AI for breakthroughs across scientific fields, and one may have just arrived that harnesses the power and speed of light.

Researchers have published a new study in Nature Photonics that builds on a concept first unveiled in 2018, when researchers demonstrated the power of diffractive neural networks. For those unfamiliar, traditional computing relies on electronic circuitry, and while extremely powerful in its own right, it comes with trade-offs. Electronic circuits have inherent latency because it takes time to move data around, and the process can be extremely energy-demanding.
The new study describes an AI chip that manipulates light directly: calculations happen as the light passes through the chip, rather than the light first having to be converted into electronic signals for a conventional computer to process. As light travels through the chip, it is diffracted and steered by the chip's layers, dramatically speeding up data processing and reducing the power needed to complete a calculation. While the design is groundbreaking and has implications for numerous fields of computing, the light-manipulating AI chip does face a significant problem: scaling it into a product.
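To give a rough intuition for how "computation happens as the light propagates," here is a minimal simulation sketch of a single diffractive layer, modeled as a fixed phase mask followed by free-space propagation via the angular-spectrum method. This is an illustration of the general diffractive-network idea, not the authors' implementation; the grid size, wavelength, pixel pitch, and layer spacing are assumed values.

```python
# Minimal sketch (not the study's implementation): one diffractive "layer" as a
# fabricated phase mask that the light passes through, followed by free-space
# diffraction to the next layer. All physical parameters are assumptions.
import numpy as np

N = 256               # grid size in pixels per side -- assumed
pitch = 8e-6          # pixel pitch in metres -- assumed
wavelength = 1.55e-6  # telecom-band wavelength in metres -- assumed
z = 3e-3              # distance to the next layer in metres -- assumed

def propagate(field, z, wavelength, pitch):
    """Angular-spectrum propagation of a complex optical field over distance z."""
    fx = np.fft.fftfreq(field.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Input: a complex field standing in for the data carried by the light.
rng = np.random.default_rng(0)
field = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))

# The phase mask plays the role of the layer's trained weights; here it is
# random, purely as a placeholder.
phase_mask = rng.uniform(0, 2 * np.pi, (N, N))
field = propagate(field * np.exp(1j * phase_mask), z, wavelength, pitch)

# A detector would read out intensity at designated output regions.
intensity = np.abs(field) ** 2
print(intensity.shape, intensity.sum())
```

In a full diffractive network, several such masks are stacked, and once they are fabricated the "inference" is simply the light passing through them, which is also why a fabricated chip cannot be retrained for a different task.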

What the light-based AI chip can currently do
According to the researchers, the chip has a fixed design, meaning it must be customized for an individual task. Unfortunately, engineers won't be able to simply produce 100,000 of these chips and roll them out globally; each one will need to be made individually and designed specifically for the process it will perform.
"As the tasks change or the fibre-optic system changes, one would need to have a new design to be fabricated and integrated with the fibre," says Aydogan Ozcan at the University of California, Los Angeles
At the moment, the new AI chip can process data flowing through fiber-optic cables as fast as the light travels through its layers, which means computations can be completed within trillionths of a second.
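As a back-of-envelope check on that "trillionths of a second" figure (the chip thickness and refractive index below are assumptions for illustration, not values from the paper), light crossing a millimetre-scale stack of layers takes only a few picoseconds:

```python
# Rough transit time for light crossing a thin diffractive stack.
# Thickness and effective refractive index are assumed, illustrative values.
c = 299_792_458.0   # speed of light in vacuum, m/s
thickness = 1e-3    # assumed total stack thickness: 1 mm
n_eff = 1.5         # assumed effective refractive index of the material

transit_time = thickness * n_eff / c
print(f"{transit_time * 1e12:.1f} ps")  # ~5.0 ps, i.e. trillionths of a second
```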