Google has reportedly reached a milestone in its artificial intelligence research, showing off an algorithm that can beat human players at Atari video games. Not only does it play the games, it learns from experience, just as we would, according to a paper published in Nature last week.
Demis Hassabis, one of the paper's authors, said: "We can go all the way from pixels to actions, as we call it, and actually it can work on a challenging task that even humans find difficult. We know now we're on the first rung of the ladder and it's a baby step, but I think it's an important one." The team started their work at DeepMind, the London-based start-up that Google acquired in January 2014. Since joining Google, they have been looking at ways of building that intelligence into Google products.
The researchers have since moved on to games with more complicated 3D environments, which Hassabis says the algorithm should be able to beat within the next five years. Hassabis added: "Ultimately the idea is that if this algorithm can race a car in a racing game, then essentially, with a few extra tweaks, it should be able to drive a real car. But that's again, even further away than that."