Go is a fascinating game. It's easy to learn, damn-near impossible to master, and has been entertaining people for thousands of years. But while computers have long been able to beat chess grandmasters, nobody had beaten a Go champion with an artificial intelligence system. Until now.
Google announced this week that its DeepMind team had managed to beat the European champion of Go in a matchup that recalled previous computer-versus-human competitions, such as Deep Blue versus Garry Kasparov.
Wired has a good look at the effort that went into creating the system that managed to win a game that had frustrated AI researchers for years. (Facebook's Mark Zuckerberg, in a humorous or maybe premeditated move, posted about Facebook's work chasing this goal just hours before Google made its announcement.)
And after reading so many profiles of Marvin Minsky over the course of the week (Steven Levy of Backchannel has a good one), one thing that struck me was the distance between the development of his theories and DeepMind's accomplishment: nearly 60 years and untold advances in technology that Minsky could not have foreseen when he first got started.
AI researchers know to be wary of the "AI winters" that often follow periods of growing public excitement about artificial intelligence, such as the one we're in now. And Minsky's life and legacy underscore that artificial intelligence research has produced amazing breakthroughs, but it has taken a very long time, and will continue to take a very long time by the standards of most other things in technology.
The real test for DeepMind will come later this year, sometime around Structure Data in March, when it faces off against South Korea's Lee Sedol, widely considered one of the best to ever play the game. But this week's win is only the latest milestone in the long history of AI research that paved the way for Google's accomplishment.
(Image courtesy Flickr user Reilly Butler/Creative Commons)