Google has reportedly reached a milestone in its artificial-intelligence research, showing off an algorithm that can beat human players at Atari video games. Not only does it play them, it learns from the experience, just as we would, according to a paper published in Nature last week.
Demis Hassabis, one of the paper's authors, said: "We can go all the way from pixels to actions as we call it and actually it can work on a challenging task that even humans find difficult. We know now we're on the first rung of the ladder and it's a baby step, but I think it's an important one". The team began their work at DeepMind, the London-based start-up that Google acquired back in January 2014. Since joining Google, they have been looking at ways of baking their intelligence into Google products.
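To give a flavour of what "learning from experience" means here, the sketch below shows tabular Q-learning, the classic reinforcement-learning technique that DeepMind's system builds on, solving a toy one-dimensional "game". This is a minimal illustration only: the paper's actual method (a deep Q-network) replaces the lookup table with a convolutional neural network reading raw screen pixels, and every name and parameter below is a made-up example, not taken from the paper.

```python
# Illustrative sketch: tabular Q-learning on a toy 1-D "game".
# The real DQN swaps this table for a neural network over raw pixels;
# the environment and hyperparameters here are hypothetical.
import random

N_STATES = 5          # positions 0..4; reaching state 4 wins
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: reward 1 for reaching the rightmost state, else 0."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should steer right, toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

The agent is told nothing about the game's rules; it discovers that moving right pays off purely through trial, error, and reward, which is the same principle, scaled up enormously, behind playing Atari from pixels.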
The researchers have since begun working with more complex games featuring 3D environments, which Hassabis says the algorithm should be able to beat within the next five years. He added: "Ultimately the idea is that if this algorithm can race a car in a racing game then also essentially with a few extra tweaks it should be able to drive a real car. But that's again, even further away than that".
This is where we have to consider the huge undertaking it would be for Google to map every inch of the roads and environments for a self-driving car to be safe enough on our streets. But if the AI inside it could work out any and all decisions on the fly, it wouldn't need those maps. It would use its various sensors and cameras to scan the roads around it, constantly teaching itself how to drive. Hassabis adds: "In the future I think what we're most psyched about is using this type of AI to help do science and help with things like climate science, disease, all these areas which have huge complexity in terms of the data that the human scientists are having to deal with".
The world of autonomous cars is forming around us, but are you in for the long haul?