SRI International & AGI

Voices in AI – Episode 36: A Conversation with Bill Mark

In this episode Byron and Bill talk about SRI International, aging, human productivity and more.

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Bill Mark. He heads up SRI International’s Information and Computing Sciences Division, which consists of two hundred and fifty researchers in four laboratories, who create new technology in virtual personal assistants, information security, machine learning, speech, natural language, and computer vision: all the things we talk about on the show. He holds a Ph.D. in computer science from MIT. Welcome to the show, Bill.

Bill Mark:  Good to be here.

So, let’s start off with a little semantics. Why is artificial intelligence, artificial? Is it artificial because it’s not really intelligence, or what?

No, it’s artificial, because it’s created by human beings as opposed to nature. So, in that sense, it’s an artifact, just like any other kind of physical artifact. In this case, it’s usually a software artifact.

But, at its core, it truly is intelligent and its intelligence doesn’t differ in substance, only in degree, from human intelligence?

I don’t think I’d make that statement. The definition of artificial intelligence to me is always a bit of a challenge. The artificial part, I think, is easy, we just covered that. The intelligence part, I’ve looked at different definitions of artificial intelligence, and most of them use the word “intelligence” in the definition. That doesn’t seem to get us much further. I could say something like, “it’s artifacts that can acquire and/or apply knowledge,” but then we’re going to have a conversation about what knowledge is. So, what I get out of it is it’s not very satisfying to talk about intelligence at this level of generality because, yes, in answer to your question, artificial intelligence systems do things which human beings do, in different ways and, as you indicated, not with the same fullness or level that human beings do. That doesn’t mean that they’re not intelligent, they have certain capabilities that we regard as intelligent.

You know it’s really interesting because at its core you’re right, there’s no consensus definition on intelligence. There’s no consensus definition on life or death. And I think that’s really interesting that these big ideas aren’t all that simple. I’ll just ask you one more question along these lines then. Alan Turing posed the question in 1950, Can a machine think? What would you say to that?

I would say yes, but now we have to wonder what “think” might mean, because “think” is one aspect of intelligent behavior, it indicates some kind of reasoning or reflection. I think that there are software systems that do reason and reflect, so I will say yes, they think.

All right, so now let’s get to SRI International. For the listeners who may not be familiar with the company can you give us the whole background and some of the things you’ve done to date, and why you exist, and when it started and all of that?

Great, just a few words about SRI International. SRI International is a non-profit research and development company, and that’s a pretty rare category. A lot of companies do research and development—fewer than used to, but still quite a few—and very few have research and development as their business, but that is our business. We’re also non-profit, which really means that we don’t have shareholders. We still have to make money, but all the money we make has to go into the mission of the organization, which is to do R&D for the benefit of mankind. That’s the general thing. It started out as part of Stanford, it was formerly the Stanford Research Institute. It’s been independent since 1970 and it’s one of the largest of these R&D companies in the world, about two thousand people.

Now, the information and computing sciences part, as you said, that’s about two hundred and fifty people, and probably the thing that we’re most famous for nowadays is that we created Siri. Siri was a spinoff of one of my labs, the AI Center. It was a spinoff company of SRI, that’s one of the things we do, and it was acquired by Apple, and has now become world famous. But we’ve been in the field of artificial intelligence for decades. Another famous SRI accomplishment would be Shakey the Robot, which was really the first robot that could move around and reason and interact. That was many years ago. We’ve also, in more recent history, been involved in very large government-sponsored AI projects which we’ve led, and we just have lots of things that we’ve done in AI.

Is it just a coincidence that Siri and SRI are just one letter different, or is that deliberate?

It’s a coincidence. When SRI starts companies we bring in entrepreneurs from the outside almost always, because it would be pretty unusual for an SRI employee to be the right person to be the CEO of the startup company. It does happen, but it’s unusual. Anyway, in this case, we brought in a guy named Dag Kittlaus, and he’s of Norwegian extraction, and he chose the name. Siri is a Norwegian woman’s name and that became the name of the company. Actually, somewhat to our surprise, Apple retained that name when they launched Siri.

Let’s go through some of the things that your group works on. Could we start with those sorts of technologies? Are there other things in that family of conversational AI that you work on and are you working on the next generation of that?

Yes, indeed, in fact, we’ve been working on the next generation for a while now. I like to think about conversational systems in different categories. Human beings have conversations for all kinds of reasons. We have social conversations, where there’s not particularly any objective but being friendly and socializing. We have task-oriented kinds of conversations—those are the ones that we are focusing on mostly in the next generation—where you’re conversing with someone in order to perform a task or solve some problem, and what’s really going on is it’s a collaboration. You and the other person, or people, are working together to solve a problem.

I’ll use an example from the world of online banking because we have another spinoff called Kasisto that is using the next-generation kind of conversational interaction technology. So, let’s say that you walk into a bank, and you say to the person behind the counter, “I want to deposit $1,000 in checking.” And the person on the other side, the teller says, “From which account?” And you say, “How much do I have in savings?” And the teller says, “You have $1,500, but if you take $1,000 out you’ll stop earning interest.” So, take that little interaction. That’s a conversational interaction. People do this all the time, but it’s actually very sophisticated and requires knowledge.

If you now think of, not a teller, but a software system, a software agent that you’re conversing with—we’ll go through the same little interaction. The person says, “I want to deposit $1,000 in checking.” And the teller says, “From which account?” The software system has to know something about banking. It has to know that a deposit is a money transfer kind of interaction and it requires a from-account and a to-account. And in this case, the to-account has been specified but the from-account hasn’t been specified. In many cases that person would simply ask for that missing information, so that’s the first part of the interaction. So, again, the teller says, “From which account?” And the person says, “How much do I have in savings?” Well, that’s not an answer to the question. In fact, it’s another question being introduced by the person, and it’s actually a balance inquiry question. They want to know how much they have in savings. The reason I go through this twice is that, the first time through, almost nobody even notices that that wasn’t an answer to the question, but if you try out a lot of the personal assistant systems that are out there, they tend to crater on that kind of interaction, because they don’t have enough conversational knowledge to be able to handle that kind of thing. And then the interaction goes on where the teller is providing information, beyond what the person asked, about potentially losing interest, or it might be that they would get a fee or something like that.

That illustrates the point that we expect our conversational partners to be proactive, not just to simply answer our questions, but to actually help us solve the problem. That’s the kind of interaction that we’re building systems to support. It’s very different than the personal assistants that are out there like Siri, and Cortana, and Google which are meant to be very general. Siri doesn’t really know anything about banking, which isn’t a criticism it’s not supposed to know anything about banking, but if you want to get your banking done over your mobile phone then you’re going to need a system that knows about banking. That’s one example of sort of next-generation conversational interaction.
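To make the teller exchange concrete, here is a minimal sketch of how a frame-based dialogue manager could track that interaction: the pending deposit frame survives the user’s detour into a balance inquiry, and the system returns to the missing from-account slot afterwards. The intent names, slot names, balances, and interest threshold are illustrative assumptions for this example, not the actual Kasisto or SRI design.

```python
# Minimal sketch of a frame-based dialogue manager for the teller example.
# Intent names, slots, balances, and thresholds are illustrative assumptions.

BALANCES = {"checking": 200.00, "savings": 1500.00}
INTEREST_MINIMUM = 1000.00  # hypothetical minimum balance that still earns interest


class DepositFrame:
    """A money-transfer frame: a deposit needs a to-account and a from-account."""

    def __init__(self, amount, to_account=None, from_account=None):
        self.amount = amount
        self.to_account = to_account
        self.from_account = from_account

    def missing_slot(self):
        if self.to_account is None:
            return "to_account"
        if self.from_account is None:
            return "from_account"
        return None


def respond(frame, intent, slots):
    """Handle one user turn while a deposit frame is pending."""
    if intent == "balance_inquiry":
        # The user ignored our pending question and asked something else.
        account = slots["account"]
        reply = f"You have ${BALANCES[account]:,.2f} in {account}."
        # Proactive help: warn about a consequence of the pending transfer.
        if BALANCES[account] - frame.amount < INTEREST_MINIMUM:
            reply += " But if you take that much out, you'll stop earning interest."
        # Then return to the question the frame still needs answered.
        if frame.missing_slot() == "from_account":
            reply += " From which account would you like to transfer?"
        return reply

    if intent == "deposit":
        frame.to_account = slots.get("to_account", frame.to_account)
        frame.from_account = slots.get("from_account", frame.from_account)
        if frame.missing_slot() == "from_account":
            return "From which account?"
        return f"Okay, moving ${frame.amount:,.2f} from {frame.from_account} to {frame.to_account}."

    return "Sorry, I didn't follow that."


frame = DepositFrame(amount=1000.00, to_account="checking")
print(respond(frame, "deposit", {"to_account": "checking"}))      # "From which account?"
print(respond(frame, "balance_inquiry", {"account": "savings"}))  # balance, warning, re-ask
print(respond(frame, "deposit", {"from_account": "savings"}))     # confirms the transfer
```

The point of the sketch is the persistent frame: because the pending deposit survives the balance-inquiry detour, the system can come back to the unanswered question, which is exactly where simpler assistants tend to crater.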

How much are we going to be able to use transfer learning to generalize from that? You built that bot, that highly verticalized bot that knows everything about banking, does anything it learned make it easier now for it to do real estate, and then for it to do retail, and then all the other things? Or is it the case that like every single vertical, all ten thousand of them are going to need to start over from scratch?

It’s a really good question, and I would say, with some confidence, that it’s not about starting over from scratch because some amount of the knowledge will transfer to different domains. Real estate has transactions, if there’s knowledge about transactions some of that knowledge will carry over, some of it won’t.

You said, “the knowledge that it has learned,” and we need to get pretty specific about that. We do build systems that learn, but not all of their knowledge is picked up by learning. Some of it is built in, to begin with. So, there’s the knowledge that has been explicitly represented, some of which will transfer over. And then there’s knowledge that has been learned in other ways, some of that will transfer over as well, but it’s less clear-cut how that will work. But it’s not starting from scratch every time.
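As a rough illustration of how explicitly represented knowledge can carry over between domains, here is a hypothetical sketch in which a generic transaction frame is defined once and then specialized for banking and for real estate. All of the names are invented for the example and are not SRI’s actual representation.

```python
# Hypothetical sketch: a generic transaction frame defined once, then
# specialized per domain. The shared slots and clarifying-question machinery
# transfer; the domain-specific knowledge has to be added each time.

GENERIC_TRANSACTION = {
    "slots": ["amount", "from_party", "to_party"],
    "clarifying_questions": {
        "from_party": "Who is this coming from?",
        "to_party": "Who is this going to?",
    },
}


def specialize(generic, extra_slots, extra_questions):
    """Build a domain frame by extending the generic transaction frame."""
    return {
        "slots": generic["slots"] + extra_slots,
        "clarifying_questions": {**generic["clarifying_questions"], **extra_questions},
    }


# Banking reuses the generic structure and adds account-specific knowledge.
bank_deposit = specialize(
    GENERIC_TRANSACTION,
    extra_slots=["from_account", "to_account"],
    extra_questions={"from_account": "From which account?"},
)

# Real estate reuses the same structure with different domain-specific slots.
home_purchase = specialize(
    GENERIC_TRANSACTION,
    extra_slots=["property", "closing_date", "escrow_agent"],
    extra_questions={"closing_date": "When would you like to close?"},
)
```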

So, eventually though you get to something that could pass the Turing test. You could ask it, “So, if I went into the bank and wanted to move $1,000, what would be the first question you would ask me?” And it would say, “Oh, from what account?” 

My experience with every kind of candidate Turing test system, and nobody purports that we’re there by a long shot, but my first question is always, “What’s bigger, a nickel or the sun?” And I haven’t found a single one that can answer the question. How far away is that?

Well, first just for clarity, we are not building these systems in order to pass the Turing test, and in fact, something that you’ll find in most of these systems is that outside of their domain of expertise, say banking, in this case, they don’t know very much of anything. So, again, the systems that we build wouldn’t know things like what’s bigger, the nickel or the sun.

The whole idea of the Turing test is that it’s meant to be some form of evaluation, or contest for seeing whether you have created something that’s truly intelligent. Because, again, this was one of Turing’s approaches to answering this question of what is intelligence. He didn’t really answer that question but he said if you could develop an artifact that could pass this kind of test, then you would have to say that it was intelligent, or had human-like behavior at the very least. So, in answer to your question, I think we’re very far from that because we aren’t so good at getting the knowledge that, I would say, most people have into a computer system yet.

Let’s talk about that for a minute. Why is it so hard and why is it so, I’ll go out on a limb and say, easy for people? Like, a toddler can tell me what’s bigger the nickel or the sun, so why is it so hard? And what makes humans so able to do it?

Well, I don’t know that anyone knows the answer to that question. I certainly don’t. I will say that human beings spend time experiencing the world, and are also taught. Human beings are not born knowing that the sun is bigger than a nickel, however, over time they experience what the sun is and, at some point, they will experience what a nickel is, and they’ll be able to make that comparison. By the way, they also have to learn how to make comparisons. It would be interesting to ask toddlers that question, because the sun doesn’t look very big when you look up in the sky, so that brings in a whole other class of human knowledge which I’ll just broad-brush call book learning. I certainly would not know that the sun is really huge, unless I had learned that in school. Human beings have different ways of learning, only a very small sample of which have been implemented in artificial intelligence learning systems.

There’s a Calvin and Hobbes strip where Calvin’s dad tells him that it’s a myth that the sun is big, that it’s really only the size of a quarter. And he says, “Look, hold it up in the sky. They’re the same.” So, point taken.

But, let me ask it this way, human DNA is, I don’t know, I’m going to get this a little off, but it’s like 670MB of data. And if you look at how much that’s different than, say, a banana, it’s a small amount that is different. And then you say, well, how much of it is different than, say, a chimp, and it’s a minuscule amount. So, whatever that minuscule difference in code is, just a few MBs, is that, kind of, the secret to intelligence? Is that a proof point that there may be some very basic, simple ways to acquire generalized knowledge that we just haven’t stumbled across yet, but that there may be something that gives us this generalized learner, which we can just plug into the Internet and the next day it knows everything?

I don’t make that jump. I think the fact that a relatively small amount of genetic material differentiates us from other species doesn’t indicate that there’s something simple out there, because the way those genes or the genetic material impacts the world is very complex, and leads to all kinds of things that could be very hard for us to understand and try to emulate. I also don’t know that there is a generalist learner anyway. I think, as I said, human beings seem to have different ways of learning things, and that doesn’t say to me that there is one general approach.
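For scale, a very rough back-of-the-envelope on the figures in the question; the numbers below are approximate, commonly cited values (about 3.1 billion base pairs, roughly 1.2% single-nucleotide divergence from chimpanzees), not anything from the conversation itself.

```python
# Very rough back-of-the-envelope for the "670MB" figure and the chimp comparison.
base_pairs = 3.1e9          # approximate human genome length
bits_per_base = 2           # A, C, G, T

genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(f"Whole genome: roughly {genome_mb:.0f} MB")          # ~775 MB

chimp_divergence = 0.012    # approximate single-nucleotide divergence
print(f"Human-chimp difference: roughly {genome_mb * chimp_divergence:.0f} MB")  # on the order of 10 MB
```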

Back in the Dartmouth days, when they thought they could knock out a lot of AI problems in a summer, it was in the hope that intelligence followed a few simple laws, like how the laws of physics explain so much. The consensus has kind of moved toward thinking that we’re a hack of a thousand specialized things that we do, that all come together and make generalized intelligence. And it sounds like you’re more in that camp, that it’s just a bunch of hard work and we have to tackle these domains one at a time. Is that fair?

I’m actually kind of in between. I think that there are general methods, there are general representations, but there’s also a lot of specific knowledge that’s required to be competent in some activity. I’m into sort of a hybrid.

But you do think that building an AGI, a generalized intelligence that is as versatile as a human, is theoretically possible, I assume?

Yes.

You mentioned something when we were chatting earlier that a child explores the world. Do you think embodiment is a pathway to that, that until we give machines a way, in essence, to “experience” the world, that will always limit what we’re able to do? Is that embodiment, that you identified as being important for humans, also important for computers?

Well, I would just differentiate the idea of exploration from embodiment. I think that exploration is a fundamental part of learning. I would say that we, yes indeed, will be missing something unless we design systems that can explore their world. From my point of view, they may or may not be embodied in the usual sense of that word, which means that they can move around and actuate within their environment. If you generalize that to software and say, “Are software agents embodied because they can do things in the world?” then, yeah, I guess I would say embodiment, but it doesn’t have to be physical embodiment.

Earlier when you were talking about digital assistants you said Siri, Cortana and then you said, “Oh, and Google.” And that highlights a really interesting thing that Amazon named theirs, you named yours, Microsoft named theirs, but Google’s is just the Google Assistant. And you’re undoubtedly familiar with the worries that Weizenbaum had with ELIZA. He thought that this was potentially problematic that we name these devices, and we identify with them as if they are human. He said, “When a computer says, ‘I understand,’ it’s just a lie. There’s no ‘I,’ and there’s nothing that understands anything.” How would you respond to Weizenbaum? Do you think that’s an area of concern or you think he was just off?

I think it’s definitely an area of concern, and it’s really important in designing these systems. I’ll go back to conversational systems, systems like that, which human beings interact with: it’s important that you do as much as possible to help the human being create a correct mental model of what it is that they’re conversing with. So, should it be named? I think it’s kind of convenient to name it, as you were just saying, it kind of makes it easier to talk about, but it immediately raises this danger of people over-reading into it: what it is, what it knows, etcetera. I think it’s very much something to be concerned about.

There’s that case in Japan, where there’s a robot that they were teaching how to navigate a mall, and they very quickly learned that it got bullied by children who would hit it, curse at it, and all these things. And later when they asked the children, did you think it was upset, was it acting upset? Was it acting human-like or mechanical? They overwhelmingly said it was human-like.

And I still have a bit of an aversion to interrupting the Amazon device (I can’t say its name because it’s on my desk right next to me) and telling it, “Stop!” And so I just wonder where it goes because, you’re right, it’s like the Tom Hanks movie Cast Away, when his only friend was a soccer ball named “Wilson” that he personified.

I remember there was a case in the ‘40s where they would show students a film of circles and lines moving around, and ask them to construct stories, and they would attribute to these lines and circles personalities, and interactions, and all of that. It is such a tempting thing we do, and you can see it in people’s relationships to their pets that one wonders how that’s all going to sort itself out, or will we look back in forty years and think, “Well, that was just crazy.”

No, I think you’re absolutely right. I think that human beings are extremely good at giving characteristics to objects, systems, etcetera, and I think that will continue. And, as I said, that’s very much a danger in artificial intelligence systems, the danger being that people assume too much knowledge, capability, understanding, given what the system actually is. Part of the job of designing the system is, as I said before, to go as far as we can to give the person the right idea about what it is that they’re dealing with.

Another area that you seem to be focused on, as I was reading about you and your work, is AI and the aging population. Can you talk about what the goal is there and what you are doing, and maybe some successes or failures you’ve had along the way?

Yes, indeed, we are, SRI-wide actually, looking at what we can do to address the problem, the worldwide problem, of a higher percentage of aging population and a lower percentage of caregivers. We read about this in the headlines all the time. In particular, what we can do to have people experience an optimal life, the best that is possible for them as they age. And there are lots of things that we’re looking at there. We were just talking about conversational systems. We are looking at the problem of conversational systems that are aimed at the aging population, because interaction tends to be a good thing and sometimes there aren’t caregivers around, or there aren’t enough of them, or they don’t pay attention, so it might actually be interesting to have a conversational system that elderly people can talk to and interact with. We’re also looking at ways to preserve privacy and unobtrusively monitor the health of people, using artificial intelligence techniques. This is indeed a big area for us.

Also, your laboratories work on information security and you mentioned privacy earlier, talk to me, if you would, about the state of the art there. Across all of human history, there’s been this constant battle between the cryptographers and the people who break the codes, and it’s unclear who has the upper hand in that. It’s the same thing with information security. Where are we in that world? And is it easier to use AI to defend against breaches, or to use that technology to do the breach?

Well, I think, the situation is very much as you describe—it’s a constant battle between attackers and defenders. I don’t think it’s any easier to use AI to attack, or defend. It can be used for both. I’m sure it is being used for both. It’s just one of the many sets of techniques that can be used in cybersecurity.

There’s a lot of concern wrapped up in artificial intelligence and its ability to automate a lot of work, and then the effect of that automation on employment. What’s your perspective on how that is going to unfold?

Well, my first perspective is it’s a very complex issue. I think it’s very hard to predict the effect of any technology on jobs in the long-term. As I reflect, I live in the Bay Area, a huge percentage of the jobs that people have in the Bay Area didn’t exist at all a hundred years ago, and I would say a pretty good percentage didn’t exist twenty years ago. I’m certainly not capable of projecting in the long run what the effect of AI and automation will be. You can certainly guess that it will be disruptive, all new technologies are disruptive, and that’s something as a society we need to take aboard and deal with, but how it’s going to work out in the long-term, I really don’t know.

Do you take any comfort that we’ve had transformative technologies aplenty? Right, we had the assembly line, which is a kind of artificial intelligence, we had the electrification of industry, we had the replacement of animal power with steam power. I mean each of those was incredibly disruptive. And when you look back across history each one of them happened incredibly fast and yet unemployment never surged from them. Unemployment in the US has always been between four and ten percent, other than the Depression. And you can’t point to one and say, “Oh, when this technology came out unemployment went briefly to fourteen percent,” or anything like that. Do you take comfort in that or do you say, “Well, this technology is materially different”?

I take comfort in it in the sense that I have a lot of faith in the creativity and agility of people. I think what that historical data is reflecting is the ability of individuals and communities to adapt to change and I expect that to continue. Now, artificial intelligence technology is different, but I think that we will learn to adapt and thrive with artificial intelligence in the world.

How is it different though, really? Because technology increases human productivity, that’s kind of what it does. That’s what steam did. That’s what electricity did. That’s what the Industrial Revolution did. And that’s what artificial intelligence does. How is it different?

I think in the sense that you’re talking about, it’s not different. It is meant to augment human capability. It’s augmenting now, to some extent, different kinds of human activity, although arguably that’s been going on for a long time, too. Calculators, printing presses, things like that have taken over human activities that were once thought to be core human things. It’s sort of a difference in degree, not a difference in kind.

One interesting thing about technology, and how the wealth that it produces is disseminated through culture, is that in one sense technology helps everybody (you get a better TV, or better brakes in your car, better deodorant, or whatever) but in two other ways, it doesn’t. If you’re somebody who sells your labor by the hour, and your company can produce a labor-saving device, that benefit doesn’t accrue to you; it generally would accrue to the shareholders of the company in terms of higher earnings. But if you’re self-employed, or you own your own time as it were, you get to pocket all of the advances that technology gets you, because it makes your productivity higher and you get all of that. So, do you think that the technology does inherently make worse the income-inequality situation, or am I missing something in that analysis?

Well, I don’t think that is inherent and I’m not sure that the fault lines will cut that way. We were just talking about the fact that there is disruption and what that tends to mean is that some people will benefit in the short-term, and some of the people will suffer in the short-term. I started by saying this is a complex issue. I think one of the complexities is actually determining what that is. For example, let’s take stuff around us now like Uber and other ride-hailing services. Clearly that has disrupted the world of taxi drivers, but on the other hand has created opportunities for many, many, many other drivers, including taxi drivers. What’s the ultimate cost-benefit there? I don’t know. Who wins and loses? Is it the cab companies, is it the cab drivers? I think it’s hard to say.

I think it was Niels Bohr that said, “Making predictions is hard, especially if they’re about the future.” And he was a Nobel Laureate.

Exactly.

The military, of course, is a multitrillion-dollar industry and it’s always an adopter of technology, and there seems to be a debate about making weapon systems that make autonomous kill decisions. How do you think that’s going to unfold?

Well, again, I think that this is a very difficult problem and is a touchpoint issue. It’s one manifestation of an overall problem of how we trust complex systems of any kind. To me, anyway, this goes way beyond artificial intelligence. Any kind of complex system, we don’t really know how it works, what its limitations are, etcetera. How do we put boundaries on its behavior and how do we develop trust in what it’s done? I think that’s one of the critical research problems of the next few decades.

You are somebody who believes we’re going to build a general intelligence, and it seems that when you read the popular media there’s a certain number of people that are afraid of that technology. You know all the names: Elon Musk says it’s like summoning the demon, Professor Hawking says it could be the last thing we do, Bill Gates says he’s in the camp of people who are worried about it and doesn’t understand why other people aren’t, as is Wozniak, the list goes on and on. Then you have another list of people who just almost roll their eyes at those sorts of things, like Andrew Ng who says it’s like worrying about overpopulation on Mars, the roboticist Rodney Brooks who says that it’s not helpful, Zuckerberg and so forth. So, two questions: why, among a roomful of incredibly smart people is there such a disagreement over it, and, two, where do you fall in that kind of debate?

Well, I think the reason for disagreement is that it’s a complex issue, and it involves something that you were just talking about with the Niels Bohr quote. You’re making predictions about the future. You’re making predictions about the pace of change, and when certain things will occur, what will happen when they occur, really based on very little information. I’m not at all surprised that there’s a dramatic difference of opinion.

But to be clear, it’s not a roomful of people saying, “These are really complex issues,” it’s a roomful of people where half of them are saying, “I know it is a problem,” and half of them saying, “I know it is not a problem.”

I guess that might be a way of strongly stating a belief. They can’t possibly know.

Right, like, everything you’re saying, you’re taking measured tones: “Well, we don’t know. It could happen this way or that way. It’s very complicated.” They are not taking that same tone.

Well, let me get to your second question, we can come back to the first one. So, my personal view, and here comes the measured response that you just accused me of: yes, I’m worried about it, but, honestly, I’m worried about other things more. I think that this is something to be concerned about. It’s not an irrational concern, but there are other concerns that I think are more pressing. For example, I’m much more worried about people using technology for untoward purposes than I am about superintelligence taking over the world.

That is an inherent problem with technology’s ability to multiply human effort, if human effort is malicious. Is that an insoluble problem? If you can make an AGI you can, almost by definition, make an evil AGI, correct?

Yes. Just to go back a little bit, you asked me whether I thought AGI was theoretically possible, whether there are any theoretical barriers. I don’t think there are theoretical barriers. We can extrapolate and say, yes, someday that kind of thing will be created. When it is, you’re right, I think any technology, any aspect of human behavior can be done for good or evil, from the point of view of some people.

I have to say, another thing I think about when we talk about super intelligence, I was relating it to complex systems in general. I think of big systems that exist today that we live with, like high-speed automated trading of securities, or weather forecasting, these are complex systems that definitely influence our behavior. I’m going to go out on a limb and say nobody knows what’s really going on with them. And we’ve learned to adapt to them.

It’s interesting, I think part of the difference of opinion boils down to a few technical questions that are very specific that we don’t know the answer to. One of them is, it seems like some people are kind of, I don’t want to say down on humans, but they don’t think human abilities, like creativity and all of that, are all that difficult, and machines are going to be able to master that. There’s a group of people who would say the amount of time between one of these systems being able to self-improve is short, not long. I think that some would say intelligence isn’t really that hard, but there are probably a few breakthroughs needed. You stack enough of those together and you say, “Okay, it’s really soon.” But if you take the opposite side on those (creativity is very hard, intelligence is very hard) then you’re, kind of, in the other camp. I don’t doubt the sincerity of any of the parties involved.

On your comment about the theoretical possibility of a general intelligence, just to explore that for a moment, without any regard for when it will happen—we understand how a computer could, for instance, measure temperature, but we don’t really understand how a computer, or I don’t, could feel pain. For a machine to go from measuring the world to experiencing the world, we don’t really know that, and so is that required to make a general intelligence, to be able to, in essence, experience qualia, to be conscious, or not?

Well, I think that if we’re truly talking about general intelligence in the sense that I think most people mean it, which is human-like intelligence, then one thing that people do is experience the world and react to it, and it becomes part of the way that we think and reason about the world. So, yes, I think, if we want computers to have that kind of capability, then we have to figure out a way for them to experience it.

The question then becomes—I think this is in the realm of the very difficult—when, to use your example, a human being or any animal experiences pain, there is some physical and then electrochemical reaction going on that is somehow interpreted in the brain. I don’t know how all of that works, but I believe that it’s theoretically possible to figure out how that works and to create artifacts that exhibit that behavior.

Because we can’t really confine it to how humans feel pain, right? But, I guess I’m still struggling over that. What would that even look like, or is your point, “I don’t know what it looks like, but that would be what’s required to do it.” 

I definitely don’t know what it looks like on the inside, but you can also look at the question of, “What is the value of pain, or how does pain influence behavior?” For a lot of things, pain is a warning that we should avoid something, touching a hot object, moving an injured limb, etcetera. There’s a question of whether we can get computer systems to be able to have that kind of warning sensation which, again, isn’t exactly the same thing as creating a system that feels pain in any way like an animal does, but it could get the same value out of the experience.
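A minimal sketch of that functional view of pain: a hypothetical controller that treats a sensor reading crossing a damage threshold as a warning signal, triggers a withdrawal reflex, and records the condition to avoid. The sensor, threshold, and action names are invented for illustration and do not describe any particular robot’s design.

```python
# Minimal sketch of pain as a warning signal rather than a felt experience.
# Sensor names, thresholds, and actions are invented for illustration.

DAMAGE_THRESHOLD_C = 60.0  # hypothetical gripper temperature limit
avoid_list = []            # conditions the system has "learned" to avoid


def control_step(gripper_temp_c, planned_action):
    """Override the planned action when a pain-like warning fires."""
    if gripper_temp_c > DAMAGE_THRESHOLD_C:
        avoid_list.append("hot_object")  # remember what to avoid next time
        return "withdraw_hand"           # protective reflex
    return planned_action


print(control_step(25.0, "grasp_object"))  # grasp_object
print(control_step(80.0, "grasp_object"))  # withdraw_hand
print(avoid_list)                          # ['hot_object']
```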

Your lab does work in robotics as well as artificial intelligence, is that correct?

Right.

Talk a little bit about that work and how those two things come together, artificial intelligence and robots.

Well, I think that, traditionally, artificial intelligence and robotics have been the same area of exploration. One of the features of any maturing discipline, which I think AI is, is that various specializations and specialty groups start forming naturally as the field expands and there’s more and more to know.

The fact that you’re even asking the question shows that there has become a specialization in robotics that is seen as separate from artificial intelligence; some people may say it’s part of it, some people may say it’s completely different from it. As a matter of fact, although my labs work on aspects of robotics, other labs within SRI that are not part of the Information and Computing Sciences Division also work on robotics.

The thing about robotics is that you’re looking at things like motion, manipulation, actuation, doing things in the world, and that is a very interesting set of problems that has created a discipline around it. Then on top of that, or surrounding it, is the kind of AI reasoning, perception, etcetera, that enables those things to actually work. To me, they are different aspects of the same problem of having, to go back to something you said before, some embodiment of intelligence that can interact with the real world.

The roboticist Rodney Brooks, who I mentioned earlier, says something to the effect that he thinks there’s something about biology, something very profoundly basic that we don’t understand, which he calls “the juice.” And to be clear, he’s 100% convinced that “the juice” is biology, that there’s nothing mystical about it, that it’s just something we don’t understand. And he says it’s the difference between, you put a robot in a box and it tries to get out, it just kind of runs through a protocol and tries to climb. But you put an animal in a box and it frantically wants out of that box (it’s scratching, it’s getting agitated and worked up) and that difference between those two systems he calls “the juice.” Do you think there is something like that that we don’t yet know about biology that would be beneficial to have to put in robots?

I think that there’s a whole lot that we don’t know about biology, and I can assure you there’s a huge amount that I don’t know about biology. Calling it “the juice,” I don’t know what we learn from that. Certainly, the fact that animals have motivations and built-in desires that make them desperately want to get out of the box, is part of this whole issue of what we were talking about before of how and whether to introduce that into artifacts, into artificial systems. Is it a good thing to have in robots? I would say, yes. This gets back to the discussion about pain, because presumably the animal is acting that way out of a desire for self-preservation, that something that it has inherited or learned tells it that being trapped in a box is not good for its long-term survival prospects. Yes, it would be good for robots to be able to protect themselves.

I’ll ask you another either/or question you may not want to answer. The human body uses one hundred watts and we use twenty of that to power our brain, and we use eighty of it to power our body. The biggest supercomputers in the world use twenty million watts and they’re not able to do what the brain does. Which of those is a harder thing to replicate? If you had to build a computer that operated with the capabilities of the human brain using twenty watts, or a robot that could mimic the mobility of a human using only eighty watts, which of those is a harder problem?

Well, as you suggested when you brought this up, I can’t take that either/or. I think that they’re both really hard. The way you phrased that makes me think of somebody who came to give a talk at SRI a number of years ago, and was somebody who was interested in robotics. He said that, as a student, he had learned about the famous AI programs that had become successful in playing chess. And as he learned more and more about it, he realized that what was really hard was a human being picking up the chess piece and moving it around, not the thinking that was involved in chess. I think he was absolutely right about that because chess is a game that is abstract and has certain rules, so even though it’s very complex, it’s not the same thing as the complexities of actual manipulation of objects. But if you ask the question you did, which is comparing it not to chess, but to the full range of human activity then I would just have to say they’re both hard.

There isn’t a kind of Moore’s law of robotics, is there—the physical motors and materials and power, and all of that? Is that improving at a rate commensurate with our advances in AI, or is that coming along more slowly?

Well, I think that you have to look at that in more detail. There has been tremendous progress in the ability to build systems that can manipulate objects, using all kinds of interesting techniques. Cost is going down. The accuracy and flexibility are going up. In fact, that’s one of the specialty areas of the robotics part of SRI. That’s absolutely happening. There’s also been tremendous progress on aspects of artificial intelligence. But other parts of artificial intelligence are coming along much more slowly, and other parts of robotics are coming along much more slowly.

You’re about the sixtieth guest on the show, and I think that all of them, certainly all of them that I have asked, consume science fiction, sometimes quite a bit of it. Are you a science fiction buff? 

I’m certainly not a science fiction buff. I have read science fiction. I think I used to read a lot more science fiction than I do now. I think science fiction is great. I think it can be very inspiring.

Is there any vision of the future in a movie, TV, or book, or anything that you look at and say, “Yes, that could happen, that’s how the world might unfold”? You can say Her, or Westworld, or Ex Machina, or Star Trek, or any of those.

Nope. When I see things like that I think they’re very entertaining, they’re very creative, but they’re works of fiction that follow certain rules or best practices about how to write fiction. There’s always some conflict, there’s resolution, there are things like that which are completely different from what happens in the real world.

All right, well, it has been a fantastically interesting hour. I think we’ve covered a whole lot of ground and I want to thank you for being on the show, Bill. 

It’s been a real pleasure.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.