Why deep learning is at least inspired by biology, if not the brain


As deep learning continues gathering steam among researchers, entrepreneurs and the press, there’s a loud-and-getting-louder debate about whether its algorithms actually operate like the human brain does.

The comparison might not matter much to developers who just want to build applications that can identify objects or predict the next word you’ll text, but it does matter to the field. Researchers leery of another “AI winter,” or trying to tamp down fears of a forthcoming artificial superintelligence, worry that the brain analogy is setting people up for disappointment, if not undue stress. When people hear “brain,” they think about machines that can think like us.

On this week’s Structure Show podcast, we dove into the issue with Ahna Girshick, an accomplished neuroscientist, visual artist and senior data scientist at deep learning startup Enlitic. Girshick’s colleague, Enlitic founder and CEO (and former Kaggle chief scientist) Jeremy Howard, also joined us for what turned out to be a rather insightful discussion.


Below are some of the highlights, focused on how Girshick and Howard view the brain analogy. (They take a different tack than Google researcher Greg Corrado, who recently called the analogy “officially overhyped.”) But we also talk at length about deep learning in general, and how Enlitic is using it to analyze medical images and hopefully help overcome a global shortage of doctors.

If you’re interested in hearing more from Girshick about Enlitic and deep learning, come to our Structure Data conference next month, where she’ll be accepting a startup award and joining me on stage for an in-depth talk about how artificial intelligence can improve the health care system. If you want two full days of all AI, all the time, start making plans for our Structure Intelligence conference in September.

Ahna Girshick, Enlitic's senior data scientist.


Natural patterns at work in deep learning systems

“It’s true, deep learning was inspired by how the human brain works,” Girshick said on the Structure Show, “but it’s definitely very different.”

Much like our own visual system, deep learning systems for computer vision process information in layers. They start with edges and then get more abstract with each layer, focusing on faces or perhaps whole objects, she explained. “That said, our brain has many different types of neurons,” she added. “Everywhere we look in the brain we see diversity. In these artificial networks, every node is trying to basically do the same thing.”

This is why our brains are able to navigate a dynamic world and do many things, while deep learning systems are usually focused on one task with a clear objective. Still, Girshick said, “From a computer vision standpoint, you can learn so much by looking at the brain that why not.”
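To make that layer-by-layer picture concrete, here is a minimal, hypothetical sketch in PyTorch (not Enlitic’s model or any system discussed on the show): a stack of identical convolutional stages, each building slightly more abstract features than the last, feeding a single task-specific readout. The names `layered_vision_net` and the layer sizes are illustrative assumptions.

```python
# A toy sketch of the layered idea Girshick describes: early convolutional
# layers respond to simple patterns like edges, and each successive layer
# combines them into more abstract features. Note that every unit applies the
# same kind of operation, unlike the diverse neuron types found in the brain.
import torch
import torch.nn as nn

layered_vision_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # layer 1: edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # layer 2: combinations of edges (textures, parts)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 3: more abstract shapes and object parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # task-specific readout: one narrow objective
)

# A fake 28x28 grayscale image, just to show the forward pass.
scores = layered_vision_net(torch.randn(1, 1, 28, 28))
print(scores.shape)  # torch.Size([1, 10])
```

Every stage here does essentially the same thing, which is exactly the uniformity Girshick contrasts with the brain’s diversity of neuron types.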

She explained some of these connections by discussing a research project she worked on at NYU:

“We were interested in, kind of, the statistics of the world around us, the visual world around us. And what that means is basically the patterns in the visual world around us. If you were to take a bunch of photos of the world and run some statistics on them, you’ll find some patterns — things like more horizontals than verticals. . . . And then we look inside the brain and we see, ‘Gee, wow, there’s all these neurons that are sensitive to edges and there’s more of them that are sensitive to horizontals than verticals!’ And then we measured . . . the behavioral response in a type of psychology experiment and we see, ‘Gee, people are biased to perceive things as more horizontal or more vertical than they actually are!'”
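As a toy illustration of the kind of image statistic she describes (a made-up example, not the NYU study’s actual methodology), one can compare how much horizontal versus vertical edge structure a photograph contains using simple finite-difference filters. The function name `orientation_energy` and the synthetic image are assumptions for the sake of the sketch.

```python
# Compare horizontal vs. vertical edge "energy" in a grayscale image using
# simple finite differences. Real natural-image statistics studies use far
# more careful measurements; this only conveys the flavor.
import numpy as np

def orientation_energy(image: np.ndarray) -> tuple[float, float]:
    """Return (horizontal_edge_energy, vertical_edge_energy) for a grayscale image."""
    d_vertical = np.diff(image, axis=0)    # intensity changes down the image: horizontal edges
    d_horizontal = np.diff(image, axis=1)  # intensity changes across the image: vertical edges
    horizontal_energy = float(np.sum(d_vertical ** 2))
    vertical_energy = float(np.sum(d_horizontal ** 2))
    return horizontal_energy, vertical_energy

# A synthetic "landscape" stand-in: a smooth top-to-bottom gradient (sky meeting
# ground) yields far more horizontal-edge energy than vertical-edge energy.
fake_photo = np.tile(np.linspace(0.0, 1.0, 64)[:, None], (1, 64))
print(orientation_energy(fake_photo))
```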

Asked whether computer vision has been such a big focus of deep learning research so far because of those biological parallels, or because it’s what companies such as Google and Facebook have the most need for, Girshick suggested it’s a bit of both. “It’s the same in the neuroscience department at a university,” she said. “The reason that people focus on vision is because a third of our cortex is devoted to vision — it’s a major chunk of our brain. . . . It’s also something that’s easier for us to think about, because we see it.”


Jeremy Howard (left) at Structure: Data 2012.

Howard noted that the team at Enlitic keeps finding more connections between Girshick’s research and the cutting edge of deep learning, and suggested that attempts to distance the two fields are sometimes insincere. “I think it’s kind of fashionable for people to say how deep learning is just math and these people who are saying ‘brain-like’ are crazy, but the truth is … it absolutely is inspired by the brain,” he said. “It’s a massive simplification, but we keep on finding more and more inspirations.”

The issue probably won’t be resolved any time soon — in part because it’s so easy for journalists and others to take the easy way out when explaining deep learning — but Girshick offered a solution.

“Maybe they should say ‘inspired by biology’ instead of ‘inspired by the brain,'” she said. “. . . Yes, deep learning is kind of amazing and very flexible compared to other generations of algorithms, but it’s not like the intelligent system I was studying when I studied the brain — at all.”

4 Comments

bandhav

And the subconscious mind works on our experiences and the things we thought about but didn’t pay attention to . . . the things we have seen but didn’t want to remember.

Louis Savain

Ahna Girshick is absolutely correct. In my opinion, in spite of their current success, deep learning models are a red herring, an unfortunate distraction on the road to true AI. Consider that the cortex responds only to changes (represented by discrete signals or pulses) and has no need of supervision during learning. The most effective deep learning models, by contrast, are fully supervised and require pre-labeled samples. The cortex far surpasses current systems in its ability to understand what it sees. A deep learning net can be trained to recognize many different instances of chairs, which is impressive. But it really has no idea what a chair is.

Micky Badgero

Deep layers of artificial neural nets: yes, they are limited by supervised learning, and when limited to one task (vision) they have no other way to learn. The brain has a differentiated structure based on what it does with what it figures out; that is, how it decides to move to achieve survival goals.

Unsupervised learning methods (Hebbian learning, lateral inhibition, principal component analysis) are also based on biological models, and they produce results without supervised training, but also without values or goals.

Deep learning is not a red herring, just incomplete. Based on unsupervised neural models and robotics, it could be a real path to AI.
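For readers curious what the label-free, biologically inspired learning mentioned in this comment looks like in practice, here is a tiny illustrative sketch (an assumed toy example, not anything the commenter proposed): Oja’s Hebbian rule, a single linear “neuron” that converges to the first principal component of its inputs with no supervision, labels, or task-specific goal.

```python
# Oja's rule: a Hebbian update with a decay term, which drives a single linear
# unit's weights toward the leading principal component of its input data.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data with an elongated (correlated) covariance structure.
data = rng.normal(size=(5000, 2)) @ np.array([[1.5, 1.0], [0.0, 0.5]])

w = rng.normal(size=2)      # synaptic weights of one linear "neuron"
learning_rate = 0.01
for x in data:
    y = w @ x                               # neuron's response to this input
    w += learning_rate * y * (x - y * w)    # Hebbian term (y * x) plus decay (-y^2 * w)

w /= np.linalg.norm(w)
print("Learned direction:", w)  # aligns (up to sign) with the leading principal component
```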

John Rauscher

Great comment, Louis.
AI has a brighter future in automating repetitive tasks for the service sector, where productivity has barely improved in the last 30 years compared to the manufacturing sector. Deep learning is only a small part of AI and may lead to some level of disappointment within this decade.
