Google Brain chief: Deep learning takes at least 100,000 examples

Jeff Dean, a senior fellow at Google and head of the Google Brain project, speaks at VB Summit 2017 in Berkeley, California, on October 23, 2017

While the current class of deep learning techniques is helping fuel the AI wave, one of their most frequently cited drawbacks is that they require a lot of data to work. But how much data is enough?

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

Dean knows a thing or two about deep learning: he's head of the Google Brain team, a group of researchers focused on a wide-ranging set of problems in computer science and artificial intelligence. He's been working with neural networks since the 1990s, when he wrote his undergraduate thesis on artificial neural networks. In his view, machine learning techniques have an opportunity to impact virtually every industry, though the rate at which that happens will depend on the specific industry.

There are still plenty of hurdles that humans need to tackle before they can take the data they have and turn it into machine intelligence. In order to be useful for machine learning, data needs to be processed, which can take time and require (at least at first) significant human intervention. “There’s a lot of work in machine learning systems that are not actually machine learning,” Dean said. “And so you still have to do a lot of that. You have to get the data together, maybe you have to have humans label examples, and then you have to write some data processing pipeline to produce the dataset that you will then do machine learning on.”
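
To make that concrete, here is a minimal sketch of the kind of non-learning plumbing Dean describes: gathering raw records, attaching human-provided labels, and splitting the result into a training-ready dataset. The file names, column names, and CSV format are illustrative assumptions, not details from the article.

```python
import csv
import random

def load_labels(path):
    """Read human-provided labels, keyed by record ID."""
    with open(path) as f:
        return {row["id"]: row["label"] for row in csv.DictReader(f)}

def build_dataset(records_path, labels_path, train_fraction=0.8):
    """Join raw records with their labels and split into train/test sets."""
    labels = load_labels(labels_path)
    examples = []
    with open(records_path) as f:
        for row in csv.DictReader(f):
            if row["id"] in labels:          # drop records no human has labeled yet
                examples.append((row["text"], labels[row["id"]]))
    random.shuffle(examples)
    split = int(len(examples) * train_fraction)
    return examples[:split], examples[split:]

train_set, test_set = build_dataset("records.csv", "labels.csv")
```

Only after all of that plumbing runs does the actual model training begin, which is Dean's point: much of the engineering effort in a machine learning system happens before any learning does.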

In order to simplify the process of creating machine learning systems, Google is turning to machine learning itself to determine the right system for solving a particular problem. It's a tough task that isn't anywhere near complete, but Dean said the team's early work is promising. One encouraging example of how this might work comes from a self-trained network that posted state-of-the-art results identifying images from the ImageNet dataset earlier this year. And Google-owned DeepMind just published a paper about a version of AlphaGo that appeared to have mastered the game solely by playing against itself.
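
As a toy illustration of that idea, often called architecture or hyperparameter search, the sketch below randomly samples candidate network configurations and keeps whichever scores best. The search space and the evaluate() stub are hypothetical placeholders; Google's actual systems use far more sophisticated, learned controllers.

```python
import random

# Hypothetical search space; real systems search over layer types,
# connectivity patterns, and much more.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "units": [64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_config():
    return {name: random.choice(options)
            for name, options in SEARCH_SPACE.items()}

def evaluate(config):
    # Placeholder: in practice you would train a network with this
    # configuration and return its validation accuracy.
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                  # try 20 random candidates
    config = sample_config()
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best candidate:", best_config)
```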

DeepMind, the Google-owned lab focused on advancing artificial intelligence research, unveiled that new version of its AlphaGo program, called AlphaGo Zero, which learned the game solely by playing itself. The system learns from the outcomes of its self-play games using a machine learning technique called reinforcement learning. As Zero continued to train, it began learning advanced concepts in the game of Go on its own and picking out advantageous positions and sequences.
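
The core loop is easier to see on a much smaller game. The sketch below applies the same idea of learning only from self-play outcomes to the game of Nim, nudging tabular value estimates toward each game's final result. It is a deliberately simplified stand-in for AlphaGo Zero's deep-network-based reinforcement learning, not a description of it.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # stones a player may remove per turn
Q = defaultdict(float)       # Q[(pile_size, action)] -> estimated value
EPSILON, ALPHA = 0.2, 0.1    # exploration rate, learning rate

def choose(pile):
    """Epsilon-greedy move selection over the legal actions."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

for _ in range(50_000):          # self-play games
    pile, history = 15, []
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    # Whoever took the last stone wins. Walk the game backward,
    # alternating the reward sign between the two players, and move
    # each state-action value toward the final outcome
    # (a Monte Carlo-style update).
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

# Greedy play should now take 3 from a pile of 15, leaving the
# opponent on 12, a multiple of 4 and hence a losing position.
print(max(ACTIONS, key=lambda a: Q[(15, a)]))
```

The only supervision here is who won each game, which is the same property that let AlphaGo Zero dispense with human game records entirely.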

After three days of training, the system was able to beat AlphaGo Lee, the DeepMind software that defeated top Korean player Lee Sedol last year, 100 games to zero. After roughly 40 days of training, which translates to 29 million self-play games, AlphaGo Zero was able to defeat AlphaGo Master (which defeated world champion Ke Jie earlier this year) 89 games to 11. The results show there's still plenty to be learned about the relative effectiveness of different techniques in artificial intelligence. AlphaGo Master was built using many of the same approaches as AlphaGo Zero, but it began by training on human data before moving on to self-play games.

One interesting note is that while AlphaGo Zero picked up several key concepts during its weeks of training, it learned the game differently than many human players do. Sequences of "laddered" stones, played in a staircase-like pattern across the board, are one of the first things humans learn when practicing the game. Zero only grasped that concept later in its training, according to the paper DeepMind published in the journal Nature.

In addition, AlphaGo Zero is far more power-efficient than many of its predecessors. AlphaGo Fan, an early version of the system, required 176 GPUs, and AlphaGo Lee required several machines and 48 of Google's Tensor Processing Units (TPUs), the company's machine learning accelerator chips. AlphaGo Zero and AlphaGo Master each require only a single machine with four TPUs. What remains to be seen is how well these techniques and concepts generalize to problems outside the realm of Go. While AlphaGo's effectiveness against humans and against itself has shown there's room for AI to surpass our capacity at tasks we consider far too difficult, the robot overlords aren't here yet.
