Deep Learning: The Key to Advanced AI

Google Goes Go

While chess, Scrabble, Jeopardy! and Othello have all been mastered by computers using artificial intelligence, one game has eluded developers: Go, a Chinese game more than 2,500 years old. Until very recently, top AI experts were saying that mastery of Go by a computer was still ten years away.

Wrong. Developers at Google’s DeepMind project have created a system that beat the European Go champion, Fan Hui, five games in a row. This is considered an important milestone in AI development: Go is an incredibly complex and flexible game, and mastering it is one more step towards an independently thinking machine.

Google bought DeepMind in early 2014, anticipating a future in which AI will be central to technology, science, industry and human/machine interaction.

The scientific journal Nature published a paper describing how the system works, and central to it is an increasingly important AI technique: deep learning. DeepMind’s system, called AlphaGo, first studied a large collection of recorded Go games and then learned to play even better by playing against itself.
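According to the Nature paper, AlphaGo couples deep neural networks with Monte Carlo tree search: a policy network is first trained on positions from human expert games and then sharpened by reinforcement learning as the system plays against itself. The little Python sketch below illustrates only the self-play idea, using a deliberately trivial game (single-heap Nim) and a lookup table in place of a neural network; the game, numbers and names are purely illustrative and have nothing to do with DeepMind’s actual code.

import random
from collections import defaultdict

# Toy illustration of learning by self-play. The "policy" is a lookup table
# of estimated move values, not a neural network. The game is single-heap Nim:
# players alternately take 1-3 stones, and whoever takes the last stone wins.

HEAP = 10
ACTIONS = (1, 2, 3)

value = defaultdict(float)   # (stones_remaining, stones_taken) -> average outcome
counts = defaultdict(int)

def choose(n, explore=0.1):
    """Pick a legal move: usually the best known one, occasionally a random one."""
    legal = [a for a in ACTIONS if a <= n]
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda a: value[(n, a)])

def self_play_episode():
    """Play one game against itself; return each player's moves and the winner."""
    n, player = HEAP, 0
    history = {0: [], 1: []}
    while n > 0:
        a = choose(n)
        history[player].append((n, a))
        n -= a
        if n == 0:
            winner = player          # taking the last stone wins
        player = 1 - player
    return history, winner

def train(episodes=20000):
    for _ in range(episodes):
        history, winner = self_play_episode()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_action in moves:
                counts[state_action] += 1
                # Incremental average of the outcomes seen after this move.
                value[state_action] += (reward - value[state_action]) / counts[state_action]

train()
# With 10 stones the winning opening move is to take 2, leaving a multiple of 4.
print({a: round(value[(HEAP, a)], 2) for a in ACTIONS})

The point is the feedback loop: the program generates its own training data by playing itself, so it keeps getting better without needing any new human examples.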

AlphaGo still has to take on Lee Sedol, considered by some to be the ‘Roger Federer of the Go world’; that match is scheduled for March 2016.

Deep learning opens a path to AI that can learn from and adapt to its environment, modifying its behaviour to suit changing conditions and making appropriate decisions without human intervention.
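As a very rough illustration of learning from examples rather than explicit programming, here is a minimal two-layer neural network trained by gradient descent until it reproduces the XOR function from four input/output pairs; the network size, learning rate and iteration count are arbitrary choices for this sketch, not anything taken from AlphaGo.

import numpy as np

# Minimal sketch: a tiny two-layer network learns XOR purely from examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should be close to [[0], [1], [1], [0]]

Scaled up enormously, and fed board positions and game outcomes instead of four toy examples, this kind of gradient-based training of a network is the basic mechanism behind AlphaGo’s deep learning.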

It’s proof once again that it’s hard to predict when major leaps in science and technology will happen. One day it’s “ten years away”; the next, it has already happened. Exciting times!

Read more at Wired.com, or listen to the Nature podcast. There is also a YouTube video, if you prefer.
