
Robots helped inspire deep learning and might become its killer app

The next time you’re watching a robot hand someone a cup of coffee, answer a simple question or even drive a car, do yourself a favor and don’t be such a critic.

Yes, a lot of what so-called intelligent or learning robots are doing is still fairly simple — some of it borders on mundane — but they’re not exactly working with a human brain. The fact of the matter is that machine learning is really hard; most artificial intelligence is, in fact, very engineered. Finding the right method of interaction between humans and robots might be even harder.

However, deep learning — the approach du jour among artificial intelligence researchers — might be just what the forthcoming robot doctor ordered to cure what ails our robots’ brains. Earlier this month, I spent a couple days at the Robotics: Science and Systems conference and was impressed by the amount of robotics research that seemingly could be addressed using the deep learning techniques made famous over the past couple years by Google, Facebook and Microsoft.

There were talks about nearly every aspect of robotic intelligence, from using a tool called “Tell Me Dave” (which we covered here) in order to crowdsource the process of training robot assistants to perform household tasks, to teaching robots to choose the best path from Point A to Point B. Researchers discussed applications for self-driving vehicles, from analyzing soil types to increase traction in off-road vehicles to learning latent features of geographical locations in order to recognize them in sunlight, darkness, rain or snow.

Stefanie Tellex of Brown University (left) and Ross Knepper of MIT (right) present their research on the Ikeabot.

One of my favorite talks was about a robot dubbed the “Ikeabot” for its focus on helping to assemble furniture. The researchers working on it are trying to figure out the optimal process for communication between the robot and its human co-workers. As it turns out, that requires a lot more than just teaching the robot to understand what certain objects look like or how they fit into the assembly process. How the robot poses requests for help, for example, can affect the efficiency and workflow of human co-workers, and can even make them feel like they’re working along with the robot rather than just next to it.

The connective tissue throughout all these applications and all these attempts to make robots smarter in some way is data. Whatever the input — speech, vision or some sort of environmental sensor — robots rely on data in order to make the right decisions. The more and better data researchers have in order to train their artificial intelligence models and create algorithms, the smarter their robots get.

The good news: There’s a lot of good data available. The bad news: Training those models is hard.

Essentially, machine learning researchers often need to spend years’ worth of human-hours determining the attributes, or features, a model should focus on and writing code to turn those features into something a computer can understand. Training a computer vision system on thousands of images, just to create a robot (or an algorithm, really) that can recognize a chair, is a lot of work.
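To make that manual effort concrete, here is a minimal, entirely hypothetical sketch of what hand-engineered features for an object detector might look like. Nothing here comes from the research discussed in this article; every attribute below is simply the kind of thing a researcher would have had to think up, justify and code by hand for each new object class:

```python
import numpy as np

def chair_features(image):
    """Hand-engineered features for a hypothetical chair detector.

    Each feature below had to be chosen and coded by a person --
    the tedious manual design work the article describes.
    `image` is an (H, W, 3) RGB array with values in [0, 255].
    """
    gray = image.mean(axis=2)  # crude grayscale conversion
    # Feature 1: average brightness, scaled to [0, 1]
    brightness = gray.mean() / 255.0
    # Feature 2: fraction of strong vertical edges (chair legs, maybe?)
    vert_edges = np.abs(np.diff(gray, axis=1))
    edge_frac = (vert_edges > 30).mean()
    # Features 3-6: coarse 4-bin intensity histogram, normalized
    hist, _ = np.histogram(gray, bins=4, range=(0, 255))
    hist = hist / hist.sum()
    # Stitch everything into one fixed-length vector a classifier can consume
    return np.concatenate([[brightness, edge_frac], hist])

# A random array stands in for a real photo in this sketch.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))
print(chair_features(img))
```

A deep learning system aims to remove exactly this step: instead of a person inventing brightness and edge statistics, the network learns useful features directly from the pixels.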


This is where new approaches to artificial intelligence, including deep learning, come into play. As we have explained on numerous occasions, there’s currently a lot of effort being put into systems that can teach themselves the features that matter in the data they’re ingesting. Writing these algorithms and tuning these systems is not easy (which is why experts in fields like deep learning are paid top dollar), but when they work they can help eliminate a lot of that tedious and time-consuming manual labor.

In fact, Andrew Ng said in a keynote at the robotics conference that deep learning (a field he says includes, but is not limited to, deep neural networks) is the best method he has found for soaking up and analyzing large amounts of data. Ng is best known for co-founding Coursera, heading up the Google Brain project in 2011 and teaching machine learning at Stanford. Most recently, he joined Chinese search engine giant Baidu as its chief scientist.

The output of an object classifier for robots that Ng worked on with Adam Coates. The research was published in 2010.

But Ng also knows a thing or two about robots. In fact, much of his research since joining the Stanford faculty in 2002 has been focused on applying machine learning to robots in order to make them walk, fly and see better. It was this work — or, rather, the limitations of it — that inspired him to devote so much of his time to researching deep learning.

“I came to the view that if I wanted to make progress in robotics, [I had] to spend all my time in deep learning,” he said.

What Ng has found is that deep learning is remarkably good at learning features from labeled datasets (e.g., pictures of objects properly labeled as what they are) but is also getting good at unsupervised learning, where systems learn concepts as they process large amounts of unlabeled data. This is what Ng and his Google Brain peers demonstrated with a famous 2012 paper about recognizing cats and human faces, and also what has powered a lot of advances in language understanding.
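The distinction between the two modes of learning can be sketched in a few lines. This toy example (not from Ng’s talk) uses 2-D points in place of real images: the supervised model averages points per given label, while the unsupervised k-means loop has to discover the two groups on its own:

```python
import numpy as np

rng = np.random.default_rng(42)
# Two well-separated clusters of 2-D points stand in for, say,
# "mug" and "not mug" images.
a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
points = np.vstack([a, b])

# Supervised: labels are given, so "training" a nearest-centroid
# classifier is just averaging the points of each class.
labels = np.array([0] * 50 + [1] * 50)
centroids_sup = np.array([points[labels == k].mean(axis=0) for k in (0, 1)])

# Unsupervised: no labels at all -- k-means must find the groups itself.
# Initialized from the first and last points for simplicity.
centers = points[[0, -1]].copy()
for _ in range(10):
    # Assign each point to its nearest center, then recompute centers.
    d = np.linalg.norm(points[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([points[assign == k].mean(axis=0) for k in (0, 1)])
```

Both approaches end up with essentially the same cluster centers, but only the supervised one knows which cluster is actually the “mug” — which mirrors Ng’s point that an unsupervised system can discover a concept by itself and be told afterward what it found.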

Naturally, he explained, these capabilities could help out a lot as we try to build robots that can better hear us, understand us and generally perceive the world around them. Ng showed an example of current Stanford research into AI systems in cars that distinguish between cars and trucks in real time, and highlighted the promise of GPUs to help move some heavy computational work into relatively small footprints.

Andrew Ng highlights some current examples of deep learning applications at Baidu.

And as the deep learning center of gravity shifts toward unsupervised learning, it might become even more helpful for roboticists. He spoke about a project he once worked on that aimed to teach a robot to recognize objects it might spot in the Stanford offices. That project included tracking down 50,000 images of coffee mugs on which to train the robot’s computer vision algorithms. It was good research and taught the researchers a lot, but the robot wasn’t always very accurate.

“For a lot of applications,” Ng explained, “we’re starting to run out of labeled data.” As researchers try scaling training datasets from 50,000 examples to millions in order to improve accuracy, Ng noted, “there really aren’t that many coffee mugs in the world.” And even if there are that many images, most of them won’t be labeled. Computers will need to learn the concept of coffee mugs by themselves and then be told what they’ve discovered, because no one can afford the time it would take to label them all.

Besides, Ng added, most experts believe that human brains — still the world’s most impressive computers, which “very loosely” inspired deep learning techniques — learn largely in an unsupervised manner. No matter how good a parent you were, he joked, “you didn’t point out 50,000 coffee mugs to your children.”

6 Responses to “Robots helped inspire deep learning and might become its killer app”

  1. I wish we had discovered how brain cells deal with ions and chemicals when making a decision, especially a complex one.

    We are close!

    Deep learning rocks. We are one step closer to making it work with unlabeled data.
    We really need to come up with a fast-tracking method, like open sourcing it and letting multiple people work on the same algorithm.

    Push Bhatkoti
    PhD student

  2. Wayne Caswell

    Fiber-connected Robots that Learn from Each Other — Extending Brad’s comment, a main difference between human and machine learning is, or should be, the fiber-connected Internet, which can function like the network of synapses connecting individual neurons.

    In practice, each robot or computing device could be thought of as a very powerful neuron, and the Internet could then represent the synaptic connections. But instead of each of the 200 million neurons having up to 2,000 connections, we’re talking about billions of computing devices, each with unlimited connections to other devices, that sense, measure, collect, interpret, analyze, and then act autonomously, individually or as a coordinated group, while tapping into a global body of learned intelligence.

    Don’t think about replicating the brain and its learning ability in a single computing device; think about all devices learning together collectively and sharing knowledge and insight between themselves. So the question becomes where to store the collective intelligence, and I see the answer as similar to distributed computing: everywhere that makes sense, yet close enough for fast access. With fiber, that can be about anywhere.

    Then think of the cyber security and homeland security issues of one nation getting this capability first, or another nation being able to tap into that nation’s collective learning. This will all happen much faster than most people realize, very much faster I think.

    • arnaudboeglin

      200 million? The last estimates I have seen were around 100 billion neurons, and some neurons have more than 10,000 synaptic links. That is for the human brain; your number corresponds to the brain of a mouse. And if you look at the work of Ng, that is exactly what he has been doing these past years: using networks to improve parallelism. But we are still far from a brain, physically and logically.

      Indeed, a neuron works at the millisecond scale and computers at the nanosecond scale, yet brains are much faster at what they do. Our algorithms need many more clock cycles than the brain does to process a sequence.

      And I fully disagree: I believe computer networks aren’t a solution to deep learning or AI in general. They are only a short-term “fix.” If we really want to evolve and make a leap, we will have to develop hardware technology that better supports the software. There is a lot of good research on that at the moment, with varying technologies such as photonics.

      In the end, I think we need to rethink the way we consider computers and what we want to do with them. We have stuck to transistors and binary for well over half a century now, which has led us to advanced calculators that can display colors, play sounds and communicate with each other. What we do now is like trying to build planes with steam technology.

  3. Brad Arnold

    The primary difference between a human brain and AI (aside from the human brain’s current hardware advantage) is that AI has perfect memory and can be copied. In other words: teach once, run anywhere. The Singularity is coming, and it is greatly misunderstood. The Singularity is the emergence of, first, AGI (roughly human-like AI intelligence) and then ASI (artificial superintelligence, compared to a human). When “a computer” is a smarter mind than the smartest human, we simply can’t guess what will happen; it will see things so much better than humans that who knows what it will come up with.

  4. Oneasasum

    You might find this insightful:

    Deep Learning could indeed shake things up in robotics… and at least some are worried it could have negative consequences for Japan’s mighty robotics industry.

    I think the applications to natural language processing will be the most impactful in the near future, however. (I’m looking forward to reading those EMNLP 2014 papers on deep learning, as some look quite intriguing, and will give me a better sense of where things are headed.)