Voices in AI – Episode 56: A Conversation with Babak Hodjat

About this Episode

Episode 56 of Voices in AI features host Byron Reese and Babak Hodjat talking about genetic algorithms, cyber agriculture, and sentience. Babak Hodjat is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Babak Hodjat. He is the founder and CEO of Sentient Technologies, and he holds a PhD in the study of machine intelligence. Welcome to the show, Babak.

Babak Hodjat: Great to be here, thank you.

Let’s start off with my normal intro question, which is, what is artificial intelligence?

Yes, what a question. Well, we know what artificial is; I think the crux of this question is really, “What is intelligence?”

Well, actually, no, there are two different senses in which it’s artificial. One is that it’s not really intelligence, the way artificial turf isn’t really grass: it just looks like intelligence, but it’s not really. And the other is, oh no, it’s really intelligent, it just happens to be something we made.

Yeah, I think the latter definition is the consensus. I’m saying this partly because there was a movement to call it machine intelligence, and there were other names for it as well, but with artificial intelligence the emphasis is certainly on the fact that, as humans, we’ve been able to construct something that gives us a sense of intelligence. The main question then is, “What is this thing called intelligence?” And depending on how you answer that question, actual manifestations of AI have differed through the years.

There was a period in which AI was considered: if it tricks you into believing that it is intelligent, then it’s intelligent. So, if that’s the definition, then everything is fair game. You can cram a system with a whole bunch of rules, and back then we called them expert systems, and when you interact with these rather rigid rule sets, it might give you a sense of intelligence.
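
As a rough editorial illustration of the rule-based style described above, here is a minimal sketch of an expert-system-style rule engine in Python. The facts, rules, and conclusions are invented for demonstration and are not from the episode.

```python
# Minimal forward-chaining rule engine, in the spirit of classic expert systems.
# Facts and rules are invented purely for illustration.

facts = {"has_fever": True, "has_cough": True, "has_rash": False}

# Each rule pairs a set of required facts with a conclusion to assert.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"has_rash"}, "possible_allergy"),
]

def infer(facts, rules):
    """Apply the rules repeatedly until no new conclusions appear."""
    conclusions = set()
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in conclusions and all(facts.get(c) for c in conditions):
                conclusions.add(conclusion)
                changed = True
    return conclusions

print(infer(facts, rules))  # {'possible_flu'}
```

The rigidity mentioned above is visible here: the system only ever “knows” what its hand-written rules encode.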

Then there was a movement around actually building intelligent systems through machine learning, and mimicking how nature creates intelligence. Neural networks, genetic algorithms, and reinforcement learning in its early form were some of the approaches, amongst many others that were proposed and suggested, but they would not scale. So the problem there was that they did actually show some very interesting properties of intelligence, namely learning, but they didn’t quite scale, for a number of different reasons: partly because we didn’t quite have the algorithms down yet, partly because the algorithms could not make use of scalable compute, and compute and memory storage were expensive.
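
As a toy-scale sketch of the nature-inspired approaches mentioned here, the Python snippet below evolves a bit string toward an arbitrary target with a simple genetic algorithm. The target, population size, and mutation rate are made-up parameters for illustration, not anything discussed in the episode.

```python
import random

# Toy genetic algorithm: evolve a bit string to match an arbitrary target.
TARGET = [1] * 20

def fitness(genome):
    # Count how many positions already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest individuals
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(len(population) - len(parents))
    ]
    population = parents + children

print(generation, fitness(population[0]))
```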

Then we switched to a redefinition in which we said, “Well, intelligence is about these smaller problem areas,” and that was the mid-to-late ’90s, where there was more interest in agenthood and agent-based systems, and agent-oriented systems where the agent was tasked with a simplified environment to solve. And intelligence was abstracted into: if we were tasked with a reduced set of tools to interact with the world, and our world was much simpler than it is right now, how would we operate? That would be the definition of intelligence, and those were agent-based systems.
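
To illustrate the agent-based framing, a reduced “world” and an agent with a very limited set of tools might look like the hypothetical sketch below; the corridor environment and the mostly-move-right policy are invented for demonstration.

```python
import random

# A deliberately simplified world: a 1-D corridor where the agent must
# reach position 9 using only two actions, step left or step right.
class Corridor:
    def __init__(self):
        self.position = 0

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.position = max(0, min(9, self.position + action))
        done = self.position == 9
        reward = 1.0 if done else 0.0
        return self.position, reward, done

def policy(position):
    # A crude hand-written policy: mostly move right.
    return 1 if random.random() < 0.8 else -1

env = Corridor()
position, done, steps = 0, False, 0
while not done and steps < 100:
    position, reward, done = env.step(policy(position))
    steps += 1

print(f"reached position {position} in {steps} steps")
```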

We’ve kind of swung back to machine-learning-based systems, partly because there have been some breakthroughs in the past, I would say, 10–15 years in neural networks, in learning how to scale this technology, and with an awesome rebranding of neural networks—calling them deep learning—the field has flourished on the back of that. Of course it doesn’t hurt that we have cheap compute and storage and lots and lots of data to feed these systems.

You know, one of the earlier things you said is that we try to mimic how nature creates intelligence, and you listed three examples: neural nets, then genetic algorithms, how we evolve things, and reinforcement learning. I would probably agree with evolutionary algorithms, but do you really think… I’ve always thought neural nets, like you said, don’t really act like neurons. It’s a convenient metaphor, I guess, but do you really consider neural nets to be derived from biology, or are they just an analogy from biology?

Well, it was very much inspired by biology, very much so. I mean, the models that we had of how we thought neurons, the synapses between neurons, and the chemistry of the brain operate fuel this field, absolutely. But these are very simplified versions of what the brain actually does, and every day there’s more learning about how brain cells operate. I was just reading an article yesterday about how RNA can capture memory, and how the basal ganglia also have a learning type of function—it’s not just the pre-frontal cortex. There’s a lot of complexity and depth in how the brain operates that is completely lost when you simplify it. So absolutely, we’re definitely inspired, but this is not a model of the brain by any stretch of the imagination.
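
For a sense of how much simplification is involved, an artificial “neuron” in a typical neural network reduces to a weighted sum passed through a nonlinearity, as in this small sketch (the weights and inputs are arbitrary):

```python
import math

# A single artificial neuron: weighted sum of inputs plus a bias,
# squashed by a sigmoid. Real neurons and synapses are far richer than this.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```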

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
