Google has been playing with artificial intelligence (AI) for quite a while now, with earlier reports and stories of its software being able to detect cats in YouTube videos. But now the team is months further into the project, experimenting with different methods and giving the system more computing power.
Google's learning software is based on simulating groups of connected brain cells that all communicate with and influence each other, an arrangement normally referred to as a 'neural network'. When this network is exposed to data, the strengths of the connections between neurons change. As this happens, the network develops new abilities, reacting in different ways to the incoming data; in effect, it has learnt something.
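To make the idea concrete, here is a minimal sketch of a single artificial "neuron" whose connection weights shift as it sees data, until it reacts correctly to its inputs. This is a classic perceptron written for illustration, not Google's actual system; the function names and data are hypothetical.

```python
# A single "neuron": its connection strengths (weights) change as it sees
# examples, which is the learning the article describes. Illustrative only.

def train_neuron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge each connection in proportion to the error:
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the neuron the logical OR function from examples alone:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

After training, the weights have settled into values that reproduce the pattern in the data, which is all "learning" means here; real networks just do this across millions of connections.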
Learning is the exciting part of it all for a neural network, and various companies have been playing with this technology for quite a while, with the Terminator and Matrix films famous for making such networks more widely known. Where Google is changing things up is that the Mountain View-based company's engineers have found ways to put more number-crunching power behind them, creating neural networks that can learn without human assistance.
These networks are now powerful enough to be used commercially, which is another huge step in the right direction, one toward more collaboration, funding and experimentation. Google's neural networks decide for themselves which features of the data are worth paying attention to and which patterns are worth concentrating on, instead of requiring humans to decide which colors and particular shapes are of interest to software trying to identify objects.
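The feature-picking idea can be sketched with a toy example: train a single neuron on data where only the first input actually matters and the second is irrelevant, and watch it put all its weight on the informative input with no human telling it which one to use. Again, this is a hypothetical illustration, not Google's method.

```python
# A toy neuron trained on examples where only the first input carries signal.
# Nobody specifies which feature matters; the weights sort that out alone.

def train(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, label) pairs."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - out
            for i in range(n):
                w[i] += lr * err * x[i]  # strengthen or weaken each connection
            b += lr * err
    return w, b

# The label equals the first input; the second input is pure distraction.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print(w)  # the first weight ends up positive; the irrelevant one stays at 0
```

Scale that up to raw pixels instead of two inputs and you get the article's point: the network works out for itself which parts of an image are worth looking at.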
Where Google is taking the technology is the most interesting part: the company is pushing it toward better speech recognition, something quite important to its mobile OS, Android. Another area of Google's business that is sure to benefit is its upcoming Project Glass.
The glasses have sensors built into them, but because they aren't a physical-touch, physical-reaction type of product, voice control is going to be a huge part of the experience. Recognizing different voices, dialects, accents and more is going to be quite demanding, which is where neural networks and artificial intelligence come into play.
Another beneficiary of Google's neural networks is its self-driving cars. The more brain power the on-board computer has, or has access to through cloud computing in an Internet-connected vehicle, the better. Once these neural networks are powerful enough, a car could, for example, detect everything in front of it, evaluating risks such as weather changes or a child walking next to the curb, and working out the thousands of possibilities should the child step out in front of the car.
It truly is exciting, and hopefully we'll see this tech roll out more and more over the coming years.