The Gigaom interview: Jeff Hawkins on why his approach to AI will become the approach to AI

Jeff Hawkins is best known for bringing us the Palm Pilot, but he’s working on something that could be much, much bigger.

For the past several years, Hawkins has been studying how the human brain functions with the hope of replicating it in software. In 2004, he published a book about his findings. In 2012, Numenta, the company he founded to commercialize his work, finally showed itself to the world after roughly seven years operating in stealth mode.

I recently spoke with Hawkins to get his take on why his approach to artificial intelligence will ultimately overtake other approaches, including the white-hot field of deep learning. We also discussed how Numenta has survived some early business hiccups and how he plans to keep the lights on and the money flowing in.

An edited version of the interview follows. Hawkins kicks it off with a description of Numenta’s technology — which it calls hierarchical temporal memory — and how it came to be.

Jeff Hawkins. Credit: Numenta

Derrick Harris: Please explain Numenta’s approach to brain-inspired artificial intelligence technology.

Jeff Hawkins: We’re going through a transition right now in the world of machine intelligence that’s similar to the transition from analog to digital computing back in the 1940s. Today, if you look in the world of machine learning and machine intelligence, you see varied types of things going on. There are different types of algorithms that people are using — specific and universal — and people debate which approach is better.

We’re very confident that by the end of the 2020s, we’re going to be settled on a dominant paradigm. It’s going to be quite different than the one we’re currently in today, where specific algorithms that excel at one task dominate. We believe it’s going to be based instead on the universal algorithms that work on many problems. They’re going to be memory-based, not mathematically based. They’re going to be based primarily on time-based patterns, and they’re going to be online learning paradigms.

We’ve invented a term called hierarchical temporal memory, which describes the basic theory about what’s going on here. Very importantly, nothing in HTM is task-specific, just like in your brain. The way you see and the way you hear and the way you feel, the neural tissue that does that is exactly the same. You can actually swap them around, and you’ll still work.

That’s the key part of the whole bet here, that there are going to be, as we know in the brain, universal learning algorithms that may not be the best at everything, but they’re really good at everything.
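To make the "memory-based, time-based, online" distinction concrete, here is a toy sketch in Python. It is emphatically not Numenta's HTM, just a minimal illustration of a learner that stores temporal transitions in memory and updates online, one observation at a time, with nothing task-specific in it:

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy memory-based online learner. It stores transition counts
    between successive symbols and predicts the most frequent successor.
    This is NOT Numenta's HTM -- just a minimal illustration of the
    memory-based, time-based, online style of learning described above."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # symbol -> counts of what followed it
        self.prev = None

    def observe(self, symbol):
        """Online learning: update memory one observation at a time."""
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        """Predict the next symbol from what usually followed the current one."""
        if self.prev is None or not self.transitions[self.prev]:
            return None
        return self.transitions[self.prev].most_common(1)[0][0]

# The same code learns any stream of symbols -- letters, discretized sensor
# readings, server metrics -- nothing in it is specific to one task.
m = SequenceMemory()
for s in "abcabcabc":
    m.observe(s)
print(m.predict())  # prints 'a': in this stream, 'a' has always followed 'c'
```

The contrast with a "mathematically based" batch model is the point: there is no training phase, no loss function, just memory accumulating temporal structure as data arrives.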

What about the business side of things? I first wrote about Numenta in 2013 around some work you were doing with some energy industry customers. Then you pulled back, and then the new Grok app for Amazon Web Services came out. Is the technology different in any way since then?

Between what we were doing in the energy space and what we did with Grok, it’s pretty much the same technology. What we found in the energy space was that although we had all these companies that wanted to work with us, none of them were ready to deploy.

We were basically doing energy predictions for something called demand response. The idea was that if you could predict future energy requirements and future energy prices, you could adjust your consumption and save a lot of money. We did a really great job with this. All these companies were working with us, but they actually weren’t able to deploy it anywhere because they just didn’t have the infrastructure in place to actually act on our predictions.

It took us a while to realize that this was going to take a long time. We said, “Fine, we can’t wait around for that.” Then we said, “Let’s take a market space where we can build something today, and companies can deploy it today, and we don’t have anybody waiting around.”

The portfolio of Numenta applications for Grok. Source: Numenta

You’ve compared your approach to deep learning and other approaches to AI that are really good at specific tasks, and talked about how the neocortex is using the same framework for vision and motor skills. Can Numenta’s technology actually handle computer vision tasks, or is it just a pattern recognition layer?

We can. There are two basic types of inference, or pattern recognition, that occur in the brain. One is what we call sensory motor inference, which is how you recognize the world through movement. Vision actually is mostly a sensory motor inference problem. That is, your eyes are moving all the time. You’re not aware of it, but three to five times a second, you have completely different patterns coming into your eyes even though the world seems stable to you.

The brain uses those movements to build a model of the world. It’s not something that the brain is trying to get around. It’s not an inconvenience. It’s a part of how the brain sees. You use the same thing when you walk into a building. As you walk down a hallway, go through doors, and reach out and touch things, most of the way you interact with the world is through your own behavior. It’s called sensory motor inference.

Grok uses high order inference. There’s no behavior in Grok; it just listens to the streams of patterns coming off the servers.

Now, to do vision correctly, you need to do sensory motor inference. We understand that and we’re in the process of building it out now. That’s been a major research effort for us, starting in January. We think we can get to a vision system that is cortical-like. It will work the same way the brain does. I have faith that it will be better than other approaches, but I can’t prove that yet. I do have a path to get there.

We’re currently working on that, but I can’t sit here today and say how our vision system performs compared to Google’s system or something like that.

While object recognition is useful and novel and very valuable in some ways, I can see a future where you definitely want to be able to recognize some of the higher-order stuff. Can you get to a point with Numenta where a system can actually make inferences beyond recognizing that an object is a thing?

Our basic approach is adhering to neuroscience principles so that we will get the properties that brains have. This idea that something is not just a cat or a dog, but what does it mean? What is it doing? What are the implications of this image? These come about because of the nature of what are called sparse distributed representations, and the way the brain is organized in a hierarchy.

It’s a big bet that if you understand these cortical principles and you build the thing to work on those cortical principles, you will get the properties that human brains exhibit. I can’t prove that, but it seems pretty obvious to me that’s going to happen. We understand why it will happen now. It’s not just a guess.
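A quick way to see why sparse distributed representations carry meaning is that each concept activates a small set of bits out of a large space, and semantic relatedness shows up as overlap between the active sets. The encodings below are invented for illustration (real SDR encoders are far more principled), but the overlap idea is the same:

```python
# Toy sparse distributed representations (SDRs): each concept is a small
# set of active bits out of a large bit space. The specific bit ranges
# here are made up for illustration only.

N = 2048                             # total bits in the representation
cat   = set(range(0, 40))            # 40 active bits (~2% sparsity)
dog   = set(range(20, 60))           # shares 20 bits with "cat"
truck = set(range(1000, 1040))       # shares nothing with either

def overlap(a, b):
    """Similarity of two SDRs = number of shared active bits."""
    return len(a & b)

print(overlap(cat, dog))    # 20 -> semantically related representations
print(overlap(cat, truck))  # 0  -> unrelated representations
```

Because meaning is distributed across many bits, partial matches degrade gracefully: noisy or incomplete input still overlaps substantially with the stored representation.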

The approach we’re taking is a very long-term approach. We’re trying to lay the foundation for the next 50 to 100 years of machine intelligence and machine learning. That takes a bit more patience, and it means that you spend more time building these foundational theories than going out and implementing stuff and reporting some results in some benchmark.

A diagram of a “brain” region in the Numenta architecture. Source: Numenta

How much time do you think you have to make it work? How much runway do you have as a company?

We’re working on five different applications that are variations of the high order inference and prediction that I talked about. We did Grok, the real product. We’ve done a bunch of other things in detecting human behavior — changes in human behavior like if I’m trying to identify someone who becomes a rogue trader, or someone who starts stealing inside company information. We’re able to make predictions about IT anomalies and stock volumes.

I should mention that when we announced Grok back in the beginning of the year, immediately we were approached by large companies that want to license it. We said, “Let’s make that our business strategy.” We’re revealing a series of applications, and we think ultimately people will want to license this stuff. There will be a series of more applications when we put in the sensory motor stuff, which is going to take a bit longer.

The problems we solve today are very different than what most people solve in deep learning. They basically can solve spatial pattern-recognition problems. They’re not really looking for time-based patterns. They don’t use prediction or anomaly detection. There’s not a lot of overlap between what they do and what we do.
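The problem setting Hawkins describes for Grok is online anomaly detection over a stream of metrics. The sketch below is a plain statistical stand-in (a rolling z-score), not Numenta's HTM-based method, but it shows the shape of the task: observe a time-based stream, learn continuously, and flag departures from the learned pattern:

```python
from collections import deque
import math

class StreamingAnomalyDetector:
    """Minimal streaming anomaly detector over a metric stream (e.g. server
    CPU readings). Flags values that deviate sharply from a rolling window.
    A plain statistical stand-in, NOT Numenta's algorithm -- just a sketch
    of the online, time-based problem setting."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)   # recent history, learned online
        self.threshold = threshold           # z-score cutoff for "anomalous"

    def observe(self, value):
        """Return True if `value` looks anomalous, then learn it online."""
        anomalous = False
        if len(self.window) >= 5:            # wait for a little history first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)            # learn even from anomalies
        return anomalous

det = StreamingAnomalyDetector()
stream = [10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 50, 11, 10]
flags = [det.observe(v) for v in stream]
print(flags.index(True))  # prints 10: the spike to 50 is the first flag
```

Note there is no separate training set: the model is whatever the stream has taught it so far, which is what distinguishes this setting from batch spatial pattern recognition.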

It’s not like this is a zero-sum game, correct? Isn’t there room for your approach, and a deep learning approach and other approaches, to coexist?

I believe that they have to come together in the end. It’s like analog computing and digital computing coexisted for a few years, but in the end they didn’t coexist. The need for network effects is pushing us in a direction where in the end they will ultimately come together, or they will die.

I think deep learning is a great example. If you speak to Andrew Ng, he’ll say we have to add time to deep learning networks. Deep learning networks are basically hierarchical networks, but they don’t have time. They don’t have a sense of prediction. They don’t have a sense of motor behavior.

If these fields don’t merge together, if people don’t make that effort, then there’s going to be orphans. I don’t believe it’s going to be like there will be lots of different techniques. I really believe it’s going to settle on a dominant paradigm.

We’ve been working with a large semiconductor manufacturer now for over a year that is very forward-thinking. It has a team of people developing hardware for our algorithms. It has chosen the HTM algorithms over anything else because it’s thinking 5, 10, 15, 20 years out and wants to pick the winning machine learning platform. Even if today it seems a little bit embryonic, it’s got legs to it.

I hear quantum computing referenced as this holy grail because it will let us do new things in machine learning. What does it mean for all the machine learning work being done on today’s systems if quantum computing becomes commercially viable?

I’m a brain guy. I’m a neuroscientist and a computer scientist, but if you look at the brain, it’s not magic. There’s no quantum weirdness going on there. It’s a very complex neural system. You have to understand why it looks the way it does, and then you can implement it in silicon.

At the moment, it looks like the most promising methods for implementing these things in hardware do not require any new device physics. They require new connectivity schemes. The big problem in building brains in silicon is connectivity: wiring up a huge memory system with a very large connectivity matrix. That’s the challenge in building artificial brains.

Breaking down how sparse distributed representations activate parts of a Numenta region. Source: Numenta

What do you see as the killer application for all this? Some of the stuff you’re doing right now with anomaly detection or pattern recognition, or computer vision? Maybe it’s superhuman AI agents.

History tells us something here. When a new technology paradigm or a new science paradigm or a new type of machine intelligence comes around, history tells you that the obvious applications are not the killer apps. The applications people expected, extensions of what they were already doing, turned out not to be the killer apps. The killer apps tend to be surprising. No one anticipates them.

One of the things that we’ve done in Numenta is we don’t want to place a bet too strongly on any one of these things because we just don’t know. We’ll see how it goes, how many people like it, what’s going to happen with it, but we won’t bet the company on it. Something can generate good license revenue and be a good business without necessarily being the killer app.

I think what will happen is within the next few years, the killer apps will emerge, and I don’t think you or I or anybody else would have anticipated them.

Is robotics the ultimate medium for these algorithms?

The principles of robotics have nothing to do with a physical embodiment. They have to do with an agent moving through a world. As you move, your sensory input changes. What you perceive changes because you’re moving through the world. Why does it have to be the physical world? You can build a robot that works in a virtual world.

Web crawlers are very simple robots. They just follow links on the World Wide Web, that’s what they do. They’re like cockroaches that follow along, and just bump the wall, and they scour the web that way. You could build a robot system that’s a virtual system that basically moves through cyberspace. It learns to navigate in cyberspace, and learns to build a model of the world in the same way you and I build a model by walking through our towns and our houses.
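The "virtual robot" idea reduces to an agent traversing a link graph and building a model of what it has seen. The sketch below uses an invented in-memory graph as a stand-in for the web (a real crawler would fetch URLs over HTTP); the page names are purely hypothetical:

```python
from collections import deque

# A toy "virtual robot": instead of moving through physical space, the
# agent moves through a link graph (a stand-in for the web) and builds a
# model of the world it has explored. Pages and links are invented.
WEB = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["post1", "post2"],
    "post1": ["blog"],
    "post2": ["blog", "home"],
}

def explore(start):
    """Breadth-first 'movement' through the link graph, recording a model
    of the world: which pages exist and how they connect."""
    model = {}                      # page -> links observed on that page
    frontier = deque([start])
    while frontier:
        page = frontier.popleft()
        if page in model:
            continue                # already visited this part of the world
        model[page] = WEB.get(page, [])
        frontier.extend(model[page])
    return model

model = explore("home")
print(sorted(model))  # every page reachable from "home" has been mapped
```

The analogy to the interview's point: the agent's "sensory input" (the links it sees) changes as a result of its own "movement" (which page it visits next), and the model of the world is built from that interaction.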

This is what I find very interesting. I have another idea for what potentially could be a killer app in the same vein. Data itself has structure. If I look at a spreadsheet or some big database, it has structure to it. We might be able to apply our algorithms, our robotic algorithms, to essentially dig around in data to discover the structure in static data in the same way a human machine learning expert does.

I do think — just to hover on another topic that you mentioned — the whole idea of superhuman intelligence is something that people get wrong completely. We’re going to build machines that are far more intelligent than humans, but they’re not going to be human-like at all. They’re not going to be conversing with you, or threatening you, or anything like that.

Superhuman intelligence to me is basically just building a brain that’s really big and really fast, and has some very interesting sensory organs, but it doesn’t have to be human-like. I don’t think we’re going to build human-like things. There’s no point in that.

Your example of a robot that would move through cyberspace, that’s actually a very viable sort of superintelligent thing.

It’s not a threat to humanity.

Well, it could be. A very smart bot or virus roaming through cyberspace could wreak a lot of havoc.

You said the key word there. The word is virus. The thing that we have to be concerned about from a safety point of view is anything that replicates. Intelligence is not dangerous. Self-replication is dangerous. A stupid self-replicating thing is sometimes more dangerous than a smart self-replicating thing, but either way self-replication is the thing we have to worry about.

The Numenta timeline thus far. Source: Numenta

There have been some notable personnel and business changes at Numenta. At this point, is the company in a good place to go forward and really capitalize on the technology?

A year from now would we change something? I don’t know. Possibly. But I haven’t felt this good about Numenta for a long time. One, from the science point of view, the technology point of view, we’re making great progress. We’ve made big progress just since the beginning of this year on the sensory motor stuff. We’ve proven the other technology works well.

We’ve also had independent validation that the technology is valuable. We have people who have said, “Yup, this is really cool. We want to buy this. It’s worth a lot to us.”

Our open source community is doing very well. That’s been around for about a year. I’m feeling that we’ve turned a corner here, so I’m a little more optimistic.

I think also our business strategy and our system strategy is great. We may create a hit product along the way, but we’re not counting on that. If that happens, great. We’re going to keep building products until we get better at it, and eventually something will click, or someone will buy the company or something like that.

Given the technology we’re talking about, how big can a company like Numenta be if it actually hits a home run? Or will Google acquire it first and that will be that?

It’s hard to say. Again, I like history. If you go back when people first started building computers in the late 1940s, there were a whole bunch of companies formed. They were fighting each other. There were patent wars going on at this time. Almost all those companies killed each other. IBM snuck up on the outside and made a business out of it.

What I’ll say is the technology we’re developing is going to be huge. It’s a foundation for the next 60, 70, 100 years of computing. It’s not replacing computing, but it’s as big as computing. We’re pioneers, and we’re laying a foundation which will exist and survive.

As a business, it’s very tricky in the beginning of something like this to pick your bets and stay viable. How big Numenta will be, I have no idea. Our approach is to stay small as long as possible because that gives us flexibility. That gives us the ability to change course. As soon as you start becoming big, then you’ve chosen a path. Then you become obsolete in a few years.

All I can say is that I believe these machine learning concepts that are coming from the brain are going to be the central ones that are used in machine learning and machine intelligence. That’s going to be a monstrous industry, multiple industries going forward for decades. We’re going to see how to keep ourselves relevant and valuable as long as we can.

13 Comments

Suresh Kumar

Great article. And Jeff seems a lot more optimistic than a year ago – I was aware that he did a pivot on the business last year.

On another point, Jeff should read Stuart Hameroff’s research on the microtubules inside neurons, which display quantum vibration effects.
http://youtu.be/erSd5xep30w

Zygmunt

> The killer apps in computers initially turned out to be data processing in businesses. Nobody anticipated that in the 1940s. IBM stumbled upon that.

Well, it’s not a big secret that IBM sold its machines to Nazis to help them keep track of the Jews on the industrial scale. That was before 1940.

eder

“Deep learning networks are basically hierarchical networks, but they don’t have time.” Deep RECURRENT Neural Networks said they miss you at the library…

The Brain Whisperer

“In 2012, Numenta…finally showed itself to the world after roughly seven years operating in stealth mode. ”

Now Hawkins says: “…the technology we’re developing is going to be huge. It’s a foundation for the next 60, 70, 100 years of computing.” This has been in development for a decade…that’s less “start up” and more “failed idea”.

“We’re very confident that by the end of the 2020s…” …the end of 2020s (?!?!)…is that for real?

Since reading On Intelligence, I have followed Jeff’s progress and Bay Area speaking engagements. And it has been like watching a huge overblown balloon deflate.

It may be time for Hawkins to admit that this dog [Numenta] don’t hunt…

Bageleater

Right. He couldn’t crack the nut of machine learning in less than 10 years… Surely he’s wasting his time!

Action Jackson

A retirement hobby for him and his co-founder Dubinsky. He’s been saying the same for the last 10 years, but look at the apps that they developed…Is this the future of computing? Really?

rebelscience

Nice interview. While Numenta’s model is certainly closer to true intelligence than anything else in the business, Jeff Hawkins should not be so confident in his current approach as to believe that he will remain ahead of the race forever. The CLA is near the ballpark but it’s not there yet. Something is wrong with it for sure, otherwise Hawkins would be on the evening news.

There are a lot of smart people out there thinking about this problem. Don’t be surprised if someone else, maybe even a lone wolf working in a basement, comes out of nowhere and runs off with the pot of gold, leaving the deep learning folks and Numenta sitting in the dust. Things have gotten a lot hotter in this field than most people suspect, IMO.

Jack Decker

Trying to get computer hardware to operate like biological hardware to create a sentient AI hasn’t worked so far and AI researchers have been working on this for decades. Oddly, no one has ever tried to have a computer learn the way humans learn. As for why no one has, I believe it is because the AI field is thought to be the domain of computer programmers and not psychologists. In other words, those with the least social skills and understanding of their fellow man are trying to create a sentient being while those with the best understanding are not even invited to the table. When I regularly point this out to AI researchers, they either scoff at needing to know psychology or say the AI field isn’t developed enough yet to bring in psychologists.

Eight years back, I went through the entire archive of AAAI’s AI Magazine and read every book that dealt with the development of AI. Not a single article talked about trying to get computers to learn like how humans learn. Not a single paper had even a co-writer who had a degree in psychology, not even a BA in psychology. Less than a year ago, I checked to see if this situation had changed. It hasn’t.

Eight years ago, I assembled a group of programmers and told them my idea. They liked it but they quickly pointed out a problem. Key to its implementation was access to a large body of digital knowledge that it could go through. At that time, there just wasn’t enough on the World Wide Web, no respected scientific journals were online, Wiki was more of a joke than it is today, and memory was expensive. Now those things have changed so maybe I will look into testing my AI hypothesis again.

bevier

@Jack Decker: “As for why no one has, I believe it is because the AI field is thought to be the domain of computer programmers and not psychologists.”
– people thought the earth to be flat and computer scientists and mathematicians think themselves to be “above” physics with their rules and algorithms.

These words “We were basically doing energy predictions” in the interview show the right direction, because everything changing anything has to be caused by physics, and in a universe of conservation of energy, efficiency is the most basic requirement for survival.

So Mr. Jeff Hawkins is right, it doesn’t matter if a human brain or a computer “thinks”, it is just about information and information is nothing else than a mathematical group of true physical change elements of a given entity (©1999), so “memory-based, not mathematically based” is not a contradiction, because memory is always mathematically based. Memory is just about counting and weighting.

Information retrieval from chaotic input is nothing else than to count and weight input to detect the (or better a) mathematical group – the typical triangle of the “fly” of information processing (©2001): the sensory triangle: open to input leading to one decision and the motoric triangle: realizing the one decision to as much output as possible.

Put that together with how information is retrieved: Using the typical characteristics of a physical change element you create a hypothesis about the input by comparison of the input in first: kind and second: time. So you fabricate a world of objects and their behaviour and to adjust it with reality you have to verify it – preferably by memory, so by counting the events to measure their information probability as weight of the event or more accurate by contradiction (©2002).

In this self-produced “world” you can simulate processes and so predict behaviour of your environment. Using the efficiency, so using the principle of least action, you can select the most probable progress: That is problem solving (ML method).

The human brain is a gigantic information processing system, but it does just this: information processing according to the laws of physics.

Nothing special human.

@Jack Decker: “Now those things have changed so maybe I will look into testing my AI hypothesis again”

Do that! And don’t forget to add physics to your considerations ;-)

alexandertolley

“I’m a neuroscientist”. That seems a little inflated to me. Hawkins has associated himself with neuroscience, but to say he is a neuroscientist is going too far, IMO.

The late 2020s is too long-range for success. The HTM technology likely will have been bypassed by then, even if the dominant technologies use something similar and call it by another name.
