

Explainer: Why is Google's AI beating a board game world champion so significant?

Google’s AI is currently winning 2-0 in a best-of-five challenge, but why is it such a significant event?

This piece was originally published yesterday (9 March 2016) and has been updated as AlphaGo won its second match this morning.

THIS WEEK, A Google-owned artificial intelligence (AI) is playing Go against one of the world’s best players. This morning, it won its second match out of a best-of-five challenge.

DeepMind’s AlphaGo program beat the world champion, South Korea’s Lee Sedol, in the first of a series of matches being played in Seoul over the coming days.

At first glance, such a victory doesn’t seem that remarkable if you don’t know what Go is, but its significance shouldn’t be underestimated, with some describing it as a historic win.

So what happened?

This match is part of Google’s DeepMind Challenge Match, a best-of-five series with $1 million in prize money for the winner (if AlphaGo wins, Google has said it will donate the money to UNICEF, STEM education and Go charities).

AlphaGo had already defeated the European champion 5-0 in a similar match back in October 2015.

Lee is the current world champion, with 18 international titles to his name. Both players get two hours per game, and each match was expected to take between four and five hours to complete.

Yet in the first match, Lee ended up resigning after three and a half hours of play, with tens of thousands watching live on YouTube. The remaining games are scheduled to take place on Thursday, Saturday, Sunday and Tuesday.


What exactly is Go?

Go is an ancient Chinese board game said to be more than 2,500 years old. While it shares some similarities with chess – it’s turn-based and requires only two players – the objective is to surround a larger total area of the board with one’s stones than the opponent does by the end of the game.

What makes it different from chess is the sheer number of possible outcomes. On a 19 x 19 grid, a game between experts lasts about 150 moves on average, with around 250 possible choices per move.

That means the number of possible games is astronomically large – roughly 250 multiplied by itself 150 times – making it impossible for anyone to calculate all the possibilities in advance.
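To put a rough number on that, here is a back-of-the-envelope calculation using the figures above. The chess averages used for comparison (about 35 choices per move over roughly 80 moves) are commonly quoted estimates, not figures from the article, and the whole thing is only an order-of-magnitude sketch.

```python
# Rough game-tree size estimates (illustrative only; averages, not exact values).
import math

def game_tree_exponent(choices_per_move: int, moves_per_game: int) -> float:
    """Return log10 of (choices_per_move ** moves_per_game)."""
    return moves_per_game * math.log10(choices_per_move)

go_exp = game_tree_exponent(250, 150)    # Go: ~250 options per move, ~150 moves
chess_exp = game_tree_exponent(35, 80)   # Chess: ~35 options per move, ~80 moves (assumed figures)

print(f"Go:    ~10^{go_exp:.0f} possible game sequences")
print(f"Chess: ~10^{chess_exp:.0f} possible game sequences")
```

Run as written, this prints roughly 10^360 for Go against roughly 10^124 for chess, which is why brute-force search alone was never going to crack Go.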

That means games can rely on intuition more than logic, something that up until now wasn’t possible with AI.


Why is this win important?

Before now, AI was able to beat regular players but had a tough time against the best players in the game.

So DeepMind has been working on an AI that can play and beat those players.

The entire purpose of this challenge is to see how far AI has come and to develop systems that go beyond traditional human intelligence. Go was seen as the great stumbling block for AI: the sheer number of combinations meant many felt it would be another decade before a program could beat a top human player.

Apart from being an impressive achievement for AI in itself, it’s the build-up to today’s win that really stands out.

Usually, an AI has to be programmed to perform specific tasks and to look out for particular patterns in order to improve. AlphaGo doesn’t play Go like a human, but it is a machine with one or two characteristics you’d associate with human intuition.

AlphaGo’s training in Go was relatively similar to how anyone else would practice: it learned the common patterns that recur from game to game by studying more than 100,000 Go games available online.

DeepMind’s founder and CEO Demis Hassabis told the BBC it went a step further: it played against different versions of itself.

After it learnt that, it got to reasonable standards by looking at professional games. It then played itself, different versions of itself, millions and millions of times, each time getting incrementally slightly better – it learns from its mistakes.
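The article contains no code, but the two-stage recipe Hassabis describes – learn from recorded human games first, then keep playing frozen copies of yourself and learn from the results – can be sketched very roughly in Python. Everything below (the PolicyNetwork class, its single "skill" number, the toy game simulator) is an illustrative stand-in, not DeepMind’s actual system.

```python
# Minimal sketch of the two-stage training loop described above:
#   1) supervised learning from recorded human games,
#   2) improvement by self-play against earlier versions of itself.
# All names and numbers here are illustrative stand-ins, not DeepMind's implementation.
import copy
import random

class PolicyNetwork:
    """Toy stand-in for a move-prediction model."""
    def __init__(self):
        self.skill = 0.0  # a single number standing in for millions of learned weights

    def train_on_human_games(self, games):
        # Stage 1: imitate the moves seen in expert games.
        self.skill += 0.001 * len(games)

    def improve_from_result(self, won: bool):
        # Stage 2: nudge the model towards moves that led to wins.
        self.skill += 0.01 if won else -0.002

def play_game(current: PolicyNetwork, opponent: PolicyNetwork) -> bool:
    """Pretend to play a game; a stronger 'skill' value wins more often."""
    edge = current.skill - opponent.skill
    return random.random() < 0.5 + max(min(edge, 0.45), -0.45)

policy = PolicyNetwork()
policy.train_on_human_games(games=range(100_000))  # stand-in for the online game records

opponent_pool = [copy.deepcopy(policy)]   # earlier versions of itself
for generation in range(1_000):           # the real loop ran millions of games
    opponent = random.choice(opponent_pool)
    won = play_game(policy, opponent)
    policy.improve_from_result(won)
    if generation % 100 == 0:
        opponent_pool.append(copy.deepcopy(policy))  # freeze a new "past self" to play against

print(f"final toy skill rating: {policy.skill:.2f}")
```

In the real system, the single "skill" number is replaced by deep neural networks that evaluate positions and suggest moves, but the shape of the loop – play a past version of yourself, learn from the outcome, occasionally freeze a new past self – is the part the quote is describing.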

While general AI is still a long way off (the fact that it had to play millions of games against itself should be an obvious hint), the process that led to its victory could be applied to other areas. We already see basic versions of AI cropping up in places like Siri, Google Now and Facebook’s experimental M, but one that learns by itself could make existing tools more useful over time.

Lee Sedol’s game against DeepMind’s AlphaGo appears on TV screens in South Korea. (AP Photo/Ahn Young-joon)

But what about the drawbacks?

An AI that can improve over time, even slightly as in this case, brings up thoughts of doom and destruction, but the reality is we’re still a long way from an AI that can act like a human. Even then, it’s something that would take a huge amount of deliberate work, since a machine would have to be programmed to behave that way.

Back in August, Dr. Ken Ford, who has worked in AI for most of his career and is now with the Institute for Human and Machine Cognition, argued that the real fear around AI isn’t that it becomes too intelligent, but that it becomes too human.

Most of these fears are grounded in science fiction. That’s not to say such fears are entirely unjustified – such ethical questions will have to be answered eventually – but a machine can’t suddenly decide to be good or evil unless it’s programmed to be that way.

“There’s no doubt that any really advanced technology, or even really simple technology … can be used for good or bad purposes,” Dr. Ford said at the time. “Most people’s ideas about AI [and robots] are informed largely by sci-fi and there’re no other ideas about it.”

If you think about HAL or all the others, the hazard associated with them wasn’t their great intelligence, or the artificialness, whatever that means, of their intelligence; it was due to their humanity. HAL had paranoia and numerous other unfortunate human traits that we would have to go to great lengths to build.

Another important thing to remember is that AlphaGo is working within set parameters. Its sole focus is learning how to play Go; it won’t automatically jump to a new game unless its programmers begin teaching it one.

Also, if you feel this is all advancing very quickly, remember that the last comparable milestone, the IBM supercomputer Deep Blue beating world chess champion Garry Kasparov, happened back in 1997.

Progress may be speeding up, but AI still has a long way to go.

Read: Here’s how you can identify (and stop) annoying background apps >

Read: This bionic fingertip allows amputees to feel rough and smooth textures >
