Voices in AI – Episode 32: A Conversation with Alan Winfield

In this episode, Byron and Alan talk about robot ethics, military robots, emergence, consciousness, and self-awareness.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Alan Winfield. Alan Winfield is a professor of robot ethics at the University of the West of England. He has so many credentials, I don’t even know where to start. He’s a member of the World Economic Forum Council on the Future of Technology, Values and Policy. He’s a member of the Ethics Advisory Board for the Human Brain Project, and a number of others. He sits on multiple editorial boards, such as the Journal of Experimental and Theoretical Artificial Intelligence, and he’s the associate editor of Frontiers in Evolutionary Robotics. Welcome to the show, Alan.

Alan Winfield: Hello, Byron, great to be here.

So, I bet you get the same first question every interview you do: What is a robot ethicist?

Well, these days, I do, yes. I think the easiest, simplest way to sum it up is someone who worries about the ethical and societal implications or consequences of robotics and AI. So, I’ve become a kind of professional worrier.

I guess that could go one of three ways. Is it ethics of how we use robots, is it the ethics of how the robots behave, or is it the ethics of… Well, I’ll just go with those two. What do you think more about?

Well, it’s both of those.

Okay.

But, certainly, the biggest proportion of my work is the former. In other words, how humans—that’s human engineers, manufacturers and maintainers, repairers and so on, in other words, everyone concerned with AI and robotics—should behave responsibly and ethically to minimize the, as it were, unwanted ethical consequences, harms if you like, to society, to individual humans and to the planet, from AI and robotics.

The second one of those, how AI and robotics can itself behave ethically, that’s very much more a research problem. It doesn’t have the urgency of the first, and it really is a deeply interesting question. And part of my research is certainly working on how we can build ethical robots.

I mean, an ethical robot, is that the same as a robot that’s a moral agent itself?

Yes, kind of. But bearing in mind that, right now, the only full moral agents that exist are adult humans like you and me. So, not all humans of course, but adult humans of sound mind, as it were. And, of course, we simply cannot build a comparable artificial moral agent. So, the best we can do so far is to build minimally ethical robots that can, in a very limited sense, choose their actions based on ethical rules. But, unlike you and me, they cannot decide whether or not to behave ethically, and certainly cannot, as it were, justify their actions afterwards.

When you think about the future and about ethical agents, or even how we use them ethically, how do you wrap your head around the fact that there aren’t any two people that agree on all ethics? And if you look around the world, the range of beliefs on what is ethical behavior and what isn’t, varies widely. So, is it not the case you’re shooting for a target that’s ill-defined to begin with?

Sure. Of course, we certainly have that problem. As you say, there is no single, universal set of ethical norms, and even within a particular tradition, say, the Western ethical tradition, there are multiple sets of ethics, as it were, whether they’re consequentialist ethics or deontic or virtue ethics, so it’s certainly complicated. But I would say that you can abstract out of all of that, if you like, some very simple principles that pretty much most people would agree on, which is that, for instance, a robot should not harm people, should not cause people to come to harm.

That happens to be Asimov’s first rule of robotics, and I think it’s a pretty wise, as it were, starting point. I’m not saying Asimov’s first rule of robotics is universal, but what I’m saying is that we probably can extract a very small number of ethics which, if not universal, will attract broad agreement, broad consensus.
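
To make the idea of a minimally ethical robot concrete, one that chooses its actions against a simple do-no-harm rule, here is a toy sketch in Python. It is an illustration only, not code from Winfield’s research; the candidate actions, the predicted-harm scores, and the tie-breaking by task progress are all invented for the example.

```python
# Toy illustration only: a controller that rejects any candidate action whose
# predicted outcome harms a human. Actions and harm scores are made up for the
# example; a real system would need a genuine predictive model of consequences.

CANDIDATE_ACTIONS = {
    "move_forward": {"predicted_harm_to_human": 0.0, "task_progress": 0.8},
    "push_obstacle": {"predicted_harm_to_human": 0.6, "task_progress": 1.0},
    "stop":          {"predicted_harm_to_human": 0.0, "task_progress": 0.0},
}

def choose_action(candidates):
    """Pick the most useful action that is predicted to harm no one."""
    safe = {name: outcome for name, outcome in candidates.items()
            if outcome["predicted_harm_to_human"] == 0.0}
    if not safe:
        return "stop"  # default to inaction if every option looks harmful
    return max(safe, key=lambda name: safe[name]["task_progress"])

print(choose_action(CANDIDATE_ACTIONS))  # -> "move_forward"
```

The interesting work, of course, is hidden inside the harm predictions, which is exactly where the research problem lives.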

And yet, to highlight just that one, there’s an enormous amount of money that goes into artificial intelligence for robots used in the military, for instance, specifically including robots that actually do, or are designed to, kill and do harm. So, we can’t even start at something that, at first glance, seems pretty obvious.

Well, indeed, and the weaponization of AI, and any technology, is something that we all should be concerned about. I mean, you’re right that the real world has weapons. That doesn’t mean that we shouldn’t strive for a better world in which technology is not weaponized. So, yes, this is an idealistic viewpoint, but what do you expect a robot ethicist to be except an idealist?

Point taken. One more question along these lines. Isn’t the landmine a robot with artificial intelligence that is designed to kill? I mean, the AI says, if the object weighs more than forty-five pounds, I run this program which blows it up. Is that a robot that makes the kill decision itself?

Well, in a minimal sense, I suppose you might say it’s certainly an automaton. It has a sensor, which is the device that senses a weight upon it, and an actuator, which is the thing that triggers the explosion. But the fact is, of course, that landmines are hideous weapons that should’ve been banned a long time ago, and mostly are banned, and of course the world is still clearing up landmines.

I would like to switch gears a little bit and talk about emergence. You study swarm behavior.

Yes, I spent many years studying swarm behavior. That’s right, yes.

You’ve no doubt seen the video of—and, again, you’re going to have to help me with the example here. It’s a wasp that, when threatened, makes a spinning pinwheel, where they all kind of open and close their wings in such tight unison that it gives the illusion of one giant spinning thing. And it’s like the wave in a stadium, which happens so quickly. They’re not, like, saying, “Oh, Bob just waved his wings, now it’s my turn.” Are you familiar with that phenomenon?

I’m not. That’s a new one on me, Byron.

Then let’s just talk about any other… How is it that anthills and beehives act in unison? Is that to achieve larger goals, like, cool the hive, or what not? Is that swarm?

Yeah. I mean, the thing I think that we need to try and do is to dismiss any notion of goals. It’s certainly true that a termite mound, for instance, is an emergent consequence. It’s an emergent property of hundreds of thousands of termites doing their thing. And all of the extraordinary sophistication we see—the air conditioning, the fungus farms and such—in termite mounds, are all also emergent properties of, as it were, the myriad microscopic interactions between the individuals, between each other and their environment, which is, if you like, the materials and structure of the termite nest.

But if people say to me, “How do they know what they’re doing and when they’ve finished?” the answer is, well, firstly, no individual knows what it’s doing in the termite mound, and secondly, there is no notion of finished. The work of building and maintaining the termite mound just carries on forever. And the reason the world isn’t full of termite mounds, hasn’t been, as it were, completely colonized by termite mounds, is for all sorts of reasons: climate, environmental conditions, the fact that if termite mounds get too big they’ll collapse under their own weight, larger animals that will either deliberately break into the mounds to feed on termites or just blunder into them and knock them over, and flooding and weather and all kinds of stuff.

So, the fact that when we see termite mounds, we imagine that this is some kind of goal-oriented activity, is unfortunately, simply applying a very human metaphor to a non-human process. There is simply no notion that any individual termite knows what it’s doing, or of the collective, as it were, finishing a task. There are no tasks in fact. There are simply interactions, microscopic actions and interactions.
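
The goal-free character of this kind of emergence is easy to see in a toy stigmergy simulation, in the spirit of the classic termite and ant clustering models rather than of Winfield’s own swarm experiments. Each agent follows one purely local pick-up or drop rule; large piles emerge even though nothing in the rule mentions piles, goals, or completion.

```python
import random

# Toy stigmergy model: agents wander a ring of cells, picking up and dropping
# "wood chips" by a purely local rule. Piles emerge; no agent has a goal or
# any notion of the structure being "finished".

SIZE, CHIPS, AGENTS, STEPS = 100, 200, 10, 200_000

cells = [0] * SIZE                       # number of chips in each cell
for _ in range(CHIPS):
    cells[random.randrange(SIZE)] += 1   # scatter chips at random

agents = [{"pos": random.randrange(SIZE), "carrying": False} for _ in range(AGENTS)]

for _ in range(STEPS):
    for a in agents:
        a["pos"] = (a["pos"] + random.choice((-1, 1))) % SIZE   # random walk
        here = cells[a["pos"]]
        if not a["carrying"] and here > 0 and random.random() < 1 / (1 + here):
            cells[a["pos"]] -= 1         # lone chips are the most likely to be picked up
            a["carrying"] = True
        elif a["carrying"] and here > 0 and random.random() < here / (1 + here):
            cells[a["pos"]] += 1         # the bigger the pile, the more likely a drop
            a["carrying"] = False

piles = sorted((n for n in cells if n), reverse=True)
print("largest piles:", piles[:5])       # a few large clusters, built by no one in particular
```

Run it a few times and the piles form in different places each time; no agent knows what it is building, and there is no notion of the job being finished.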

Let’s talk about emergence for a minute. I’ll set my question up with a little background for any listener. Emergence is the phenomenon where we observe attributes of a system that are not present in any of the individual components. Is that a fair definition?

Yes. I mean, there are many definitions of emergence, but essentially, you’re looking for macroscopic structures or phenomena or properties that are not evident in the behavior of individuals.

We divide it into two halves, and one of those halves a good number of people don’t believe exists. So, the first one is weak emergence, as I understand it, where you could study hydrogen for a year and you could study oxygen for a year, and never in your wildest imagination would you have guessed that when you put them together they make water and it’s wet, it’s got this new wetness. And yet, in weak emergence, when you study it enough and you figure out what’s going on, you go, “Oh, yeah, I see how that worked,” and then you see it.

And then there’s strong emergence, which posits that there are characteristics that emerge for which you cannot take a reductionist view, you cannot in any way study the individual components and ever figure out how they produced that result. And this isn’t an appeal to mysticism; rather, it’s the notion that maybe strong emergence is a fundamental force of the universe or something like that. Did I capture that distinction?

Yeah, I think you’ve got it. I mean, I’m definitely not a strong emergentist. It’s certainly true that, and I’ve seen this a number of times in my own work, emergent properties can be surprising. They can be puzzling. It can sometimes take you quite a long time to figure out what on Earth is going on. In other words, to unpick the mechanisms of emergence. But there’s nothing mysterious.

There’s nothing in my view that is inexplicable about emergence. I mean, there are plenty of emergent properties in nature that we simply cannot explain mechanically, but that doesn’t mean that they are inexplicable. It just means that we’re not smart enough. We haven’t, as it were, figured out what’s going on.

So, when you were talking about the termite nest, you said the termite nest doesn’t know what it’s doing, it doesn’t have goals, it doesn’t have tasks that have a beginning and an end. If all of that is true, then the human mind must not be an emergent phenomenon, because we do have goals, we know exactly what we’re doing.

Well, I’m not entirely sure I agree with that. I mean, we think we know what we’re doing, that may well be an illusion, but carry on anyway.

No, that’s a great place to start. So, you’re alluding to the studies that suggest you do something instinctually, and then your brain kind of races to figure out why did I do that, and then it reverses the order of those two things and says, “I decided to do it. That’s why I did it.”

Well, I mean, yeah, that’s one aspect which may or may not be true. But what I really mean, Byron, is that when you’re talking about human behaviors, goals, motivations and so on, what you’re really looking at is the top, the very top layer of an extraordinary multi-layered process, which we barely understand, well, we really don’t understand at all. I mean, there’s an enormous gap, as it were, between the low-level processes—which also we barely understand—in other words, the interactions between individual neurons and, as it were, the emergence of mind, let alone, subjective experience, consciousness and so on.

There are so many layers there, and then the top layer, which is human behavior, is also mediated through language and culture, and we mustn’t forget that. You and I wouldn’t have been having this conversation half a million years ago. The point is that the things that we can think about and have a discourse over, we wouldn’t be able to have a discourse about if it were not for this extraordinary edifice of culture, which kind of sits on top of a large number of human minds.

We are social animals, and that’s another emergent property. You’ve got the emergent property of mind, and then consciousness, then you have the emergent property of society, and on top of that, another emergent property, which is culture. And somewhere in the middle of that, all mixed up, is language. So, I think it’s so difficult to unpick all of this, when you start to ask questions like, “Yes, but how can a system of emergence have goals, have tasks?” Well, it just so happens that modern humans within this particular culture do have what we, perhaps rather pretentiously, think of as goals and motivations, but who knows what they really are? And I suspect we probably don’t have to go back many tens, certainly hundreds of generations, to find that our goals and motivations were no different to most of the animals, which is to eat and survive, to live another day.

And so, let’s work up that ladder from the brain to the mind to consciousness. Perhaps half a million years ago, you’re right, but there are those who would maintain that when we became conscious, that’s the moment we, in essence, took control and we had goals and intentions and all of that subtext going on. So, I’ll ask you the unanswerable question, how do you think consciousness comes about?

Gosh, I wish I knew.

Is it a quantum phenomenon? Is it just pure emergence?

I certainly think it’s an emergent property, but I think it’s such a good adaptation, that I doubt that it’s just an accident. In other words, I suspect that consciousness is not like a spandrel of San Marco, you know, that wonderful metaphor. I think that it’s a valuable adaptation, and therefore, when—at some point in our evolutionary history, probably quite recent evolutionary history—some humans started to enjoy this remarkable phenomenon of being a subject and the subjective experience of recognizing themselves and their own agency in the world, I suspect that they had such a big adaptive advantage over their fellow humans, hominids, who didn’t have that experience, that, rather quickly, I think it would have become a strongly self-selecting adaptation.

I think that the emergence of consciousness is deeply tied up with being sociable. I think that in order to be social animals, we have to have theory of mind. To be a successful social animal, you need to be able to navigate relationships and the complexity of social hierarchies, pecking orders and such like.

We know that chimpanzees are really quite sophisticated with what we call Machiavellian intelligence. In other words, the kind of social intelligence where you will, quite deliberately, manipulate your behaviors in order to achieve some social advantage. In other words, I’ll pretend to want to get to know you, not because I really want to get to know you, but because I know that you are friends with somebody else, and I really want to be friends with her. So that’s Machiavellian intelligence. And it seems that chimpanzees are really rather good at it, and probably just as good at it as we homo sapiens.

And in order to be able to have that kind of Machiavellian intelligence, you need to have theory of mind. Now, theory of mind means having a really quite sophisticated model of your conspecifics. Now, that, I think, in turn, arose out of the fact that we have complicated bodies, bodies that are difficult to control, and therefore, we, at some earlier point in our evolutionary history, started to have quite sophisticated body self-image. In other words, an internal simulation, or whatever you call it, an internal model of our own physical bodies.

But, of course, the beauty of having a model of yourself is that you then automatically have a model of your conspecifics. So, I think having a self-model bootstraps into having theory of mind. And then, I think, once you have theory of mind, and you can—and I don’t know at what point this might have come in, whether it would come after we have theory of mind, probably, I think—start to imitate each other; in other words, do social learning.

I think social learning was, again, another huge step forward in the evolution of the modern mind. I mean, social learning is unbelievably more powerful than individual learning. Suddenly you have the ability to pass on knowledge to your children and receive it from your ancestors, especially once you have symbols and language, and eventually writing, which of course came much later. I think all of these things were necessary, though perhaps not sufficient in themselves, prerequisites for consciousness. I mean, it’s very interesting, I don’t know if you know the work of Julian Jaynes.

Of course, Bicameral Mind. That we weren’t even conscious until 500 BC, and that the Greek gods and the rise of oracles were just us realizing we had lost the voice that we used to hear in our heads.

I mean, it’s a radical hypothesis. Not many people buy that argument. But I think it’s extremely interesting, the idea that modern consciousness may be a very recent adaptation, as you say, within, as it were, recorded history, back to Homeric times. So, I think the story of how consciousness evolved, may never be known of course. It’s like a lot of natural history. We can only ever have Just So Stories. We can only have more or less plausible hypotheses.

I’m absolutely convinced that key prerequisites are internal models. Dan Dennett has this wonderful structure, this conceptual framework that he calls the “Tower of Generate-and-Test,” this set of conceptual creatures that each has a more sophisticated way of generating and testing hypotheses about what action to take next. And without going through the whole thing in detail, his Popperian creatures have this amazing innovation of being able to imagine the outcomes of actions before trying them out. And therefore, they can imagine a bad action, and decide not to try it out for real, which may well be extremely dangerous.

And then he suggests that a subset of Popperian creatures are what he calls Gregorian creatures, who’ve invented mind tools, like language, and therefore have this additional remarkable ability to learn socially from each other. And I think that social learning and theory of mind are profoundly, in my view, implicated in the emergence of consciousness. Certainly, I would stick my neck out and say that I think solitary animals cannot enjoy the kind of consciousness that you and I do.
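
The Popperian move of imagining an action’s outcome before trying it for real is essentially the internal-model idea Winfield has been describing. A minimal sketch, with an invented one-dimensional world, action set, and scoring, looks like this: simulate each candidate action in the internal model, discard any whose imagined outcome is bad, and only then act.

```python
# Toy "Popperian creature": generate candidate actions, test each one in an
# internal model of the world, and only execute actions whose imagined outcome
# is acceptable. The world, actions, and scoring are illustrative only.

WORLD = {"robot": 2, "cliff_edge": 5, "goal": 4}   # positions on a 1-D track

def internal_model(world, action):
    """Imagine the next state without acting in the real world."""
    imagined = dict(world)
    imagined["robot"] += {"step_left": -1, "step_right": +1, "stay": 0}[action]
    return imagined

def outcome_is_bad(imagined):
    return imagined["robot"] >= imagined["cliff_edge"]   # an imagined fall

def choose(world, candidates=("step_left", "step_right", "stay")):
    viable = [a for a in candidates if not outcome_is_bad(internal_model(world, a))]
    # Among the safe options, prefer the one that gets closest to the goal.
    return min(viable, key=lambda a: abs(internal_model(world, a)["robot"] - world["goal"]))

print(choose(WORLD))   # -> "step_right" (moves toward the goal, stops short of the cliff)
```

All of the difficulty, naturally, is in how faithful the internal model is; the sketch only shows the generate-and-test loop.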

So, all of that to say, we don’t know how it came about, and you said we may never know. But it’s really far more intractable than that, because, if you agree with this, it’s not just that we don’t know how it came about; we don’t have any science that suggests how a cloud of hydrogen could come to name itself. We don’t have any science to say how it is that I can feel something, how it is that I can experience something as opposed to just sensing it. As I listen to you through this conversation, I can just replace everything with the zombie, you know, the analogy of a human without consciousness.

In any case, what would you say to that? I’ve heard consciousness described as the most difficult problem, maybe the only problem left where we neither know how to ask the question nor what the answer would look like. So, what do you think the answer to the question of how it is that we have subjective experience looks like?

Well, again, I have no idea. I mean, I completely agree with you, Byron. It is an extraordinarily difficult problem. What I was suggesting earlier were just a very small number of prerequisites, not in any sense was I suggesting that those are the answer to what is consciousness. There are interesting theories of consciousness. I mean, I like the work very much of Thomas Metzinger, who I think has a very, well to me at least, a very attractive theory of consciousness because it’s based upon the idea of the self-model, which I’ve indicated I’m interested in models, and his notion of the phenomenal self-model.

Now, as you quite rightly say, there are vast gulfs in our understanding, and we certainly don’t even know properly what questions to ask, let alone answer, but I think we’re slowly getting there. I think progress is being made in the study of consciousness. I mean, the work of Anil Seth I think is deeply interesting in this regard. So, I’m basically agreeing with you.

We don’t have a science to understand how something can experience. So, I hook a temperature sensor up to my computer, I write a program so that it screams if it gets over five hundred degrees, and then I hold a match to it and it screams. We don’t think the computer is feeling pain; even though the computer’s able to sense all that’s going on, we don’t think that there’s an agent that can feel anything.
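
Byron’s thought experiment fits in a few lines of code, which is rather the point: nothing in a program like this feels anything. The five-hundred-degree threshold is his; the sensor read below is faked for illustration.

```python
import random

def read_temperature_sensor():
    # Stand-in for a real sensor read; returns a value in degrees.
    return random.uniform(60, 600)

temperature = read_temperature_sensor()
if temperature > 500:
    print("AAAAH!")  # the "scream" is just a string sent to an output device
```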

In fact, we don’t even really have science to understand how something could feel. And, I’m the first to admit it just kicks the can down the street, but you said at the get-go that you’re definitely not a strong emergentist. Couldn’t you say, “Well, clearly our basic physical laws don’t account for how matter can experience things, and therefore there might be another law at play, coming from complexity or any number of other things, that isn’t reductionist and we just don’t understand it”? Why is it that you reject strong emergence so unequivocally, but still kind of struggle with the fact that we don’t really know any scientific way, with physics, to answer the question of how something can experience?

Well, no, I think they’re completely compatible positions. I’m not saying that consciousness, subjective experience—what it is to subjectively experience something—is unknowable, in other words, the process. I don’t believe the process by which subjective experience happens in some complex collections of matter is unknowable. I think it’s just very hard to figure out and will take us a long time, but I think we will figure it out.

A lot of times when people look at the human brain, they say, “Well, the reason we don’t understand it is because it’s got one hundred billion neurons.” And yet there’s been an effort underway for two decades to take the nematode worm’s 302 neurons and try to make it—

—Two of which, interestingly, are not connected to anything.

—And try to make a digital life, you know, model it. So, we can’t even understand how the brain works to the degree that we can reproduce a three-hundred-neuron brain. And even more so, there are those who suggest that a single neuron may be as complicated as a supercomputer. So, what do you think of that? Why can’t we understand how the nematode brain works?

Well, understanding, of course, is a many-layered thing. And at some level of abstraction, we can understand how the nervous system of C. elegans works. I mean, we can, that’s true. But, as with all of science, an understanding, or a scientific model, is an abstraction, a model at some degree of abstraction. And if you want to go deeper down, increase the level of granularity of that understanding, that’s, I think, when you start to have difficulties.

Because as you say, when we build, as it were, a computer simulation of C. elegans, we simply cannot model each individual neuron with complete fidelity. Why not? Well, not just because it’s extraordinarily complex, but we simply don’t fully understand all the internal processes of a biological neuron. But that doesn’t mean that we can’t, at some useful, meaningful level of abstraction, figure out that a particular stimulus to a particular sensor in the worm will cause a certain chain reaction of activations and so on, which will eventually cause a muscle to twitch. So, we can certainly do that.

You wrote a paper, “Robots with Internal Models: A Route to Self-Aware and Hence Safer Robots,” and you alluded to that a few moments ago, when you talked about an internal model. Let’s take three terms that are used frequently. So, one of them is self-awareness. You have Gallup’s red dot test, which says, “I am a ‘self.’ I can see something in the mirror that has a red dot, and I know that’s me, and I try to wipe it off my forehead.” That would be a notion of self-awareness. Then you have sentience, and of course it’s often misused; sentience just means being able to sense something, usually to feel pain. And then you have consciousness, which is this “I experience it.” Does self-awareness imply sentience, and does sentience imply consciousness? Or can something be self-aware and neither sentient nor conscious?

I don’t think it’s all binary. In other words, I think there are degrees of all of those things. I mean, even simple animals have to have some limited self-awareness. And the simplest kind of self-awareness that I think pretty much all animals need to have is to be able to tell the difference between me and not me. If you can’t tell the difference between me and not me, you’re going to have difficulty getting by in the world.

Now, that I think is a very limited form, if you like, of self-awareness, even though I wouldn’t suggest for a moment that simple animals that can indeed tell the difference between me and not me, have sentience or consciousness. So, I think that these things exist on a spectrum.

Do you think humans are the only example of consciousness on the planet or would you suspect—?

No, no, no. I think, again, that there are degrees of consciousness. I think that there are undoubtedly some unique attributes of humans. We’re almost certainly the only animal on the planet that can think about thinking. So, this kind of reflective—or is it reflexive, is that the right word here—ability to kind of ask ourselves questions, as it were.

But even though, for instance, a chimpanzee probably doesn’t think about thinking, I think it is conscious. I mean, it certainly has plenty of other attributes of consciousness. And not only chimpanzees: other animals are clearly capable of feeling pain, and also of feeling grief and sadness when a member of the clan is killed or dies. These are, in my view, evidence of consciousness in other animals. And there are plenty of animals that we almost instinctively feel are conscious to a reasonably high degree. Dolphins are another such animal. One of the most puzzling ones, of course, is the octopus.

Right, because you said a moment ago, a non-sociable animal shouldn’t be able to be conscious.

Exactly. And that’s the kind of black swan of that particular argument, and I was well aware of that when I said it. I mean, clearly, there’s something else going on in the octopus, but we can nevertheless be sure that octopuses, collectively, don’t have traditions in the way that many other animals do. In other words, they don’t have localized, socially-agreed behaviors like birdsong, or, in chimpanzees, cracking nuts open a different way on one side of the mountain than on the other. So, there’s clearly something very puzzling going on in the octopus, which seems to buck what I otherwise think is a pretty sound proposition, which is, in my view, the role of sociability in the emergence of consciousness.

And, I think, octopus only live about three years, so just imagine if they had a one hundred-year lifespan or something.

What about plants? Is it possible that plants are self-aware, sapient, sentient or conscious?

Good question. I mean, certainly, plants are intelligent. I’m more comfortable with the word intelligence there. But as for, well, maybe even a limited form of self-awareness, a very limited form of sentience, in the sense that plants clearly do sense their environments. Plants, trees, clearly do sense and respond to attacks from neighboring plants or pests, and appear even to be able to respond in a way that protects themselves and their neighboring, as it were, conspecifics.

So, there is extraordinary sophistication in plant behavior, plant intelligence, that’s really only beginning to be understood. I have a friend, a biologist at Tel Aviv University, Danny Chamovitz, and Danny’s written a terrific book on plant intelligence that really is well worth reading.

What about Gaia? What about the Earth? Is it possible the Earth has its own emergent awareness, its own consciousness, in the same sense that all the neurons in our brain come together in our mind to give us consciousness?

And I don’t think these are purely academic questions, because at some point we’re going to have to address, “Is this computer conscious, is this computer able to feel, is this robot able to feel?” If we can’t figure out whether a tree can feel, how in the world would we figure it out for something that doesn’t share ninety percent of its DNA with us? So, what would you think about the Earth having its own will and consciousness and awareness, as an emergent behavior of all the lifeforms that live on it?

Yeah, gosh. I think you’ve probably really stumped me there. I mean, I think this is, you’re right, it’s an interesting question. I’ve absolutely no idea. I mean, I’m a materialist. I kind of find it difficult to understand how that might be the case when the planet isn’t a homogeneous system, it isn’t a fully connected system in the sense that nervous systems are.

I mean, the processes going on in and on the planet are extraordinarily complex. There’s tons of emergence going on. There are all kinds of feedback loops. Those are all undoubtedly facts. But whether that is enough, in and of itself, to give rise to any kind of analogue of self-awareness, I have to say, I’m doubtful. I mean, it would be wonderful if it were so, but I’m doubtful.

You wouldn’t be able to look at a human brain under a microscope and say, “These things are conscious.” And so, I guess, Lovelock—and I don’t know what his position on that question would be—would look at the fact that the Earth self-regulates so many of its attributes within narrow ranges. I’ll ask you one more, then. What about the Internet? Is it possible that the Internet has achieved some kind of consciousness or self-awareness? I mean, it’s certainly got enough processors on it.

I mean, I think perhaps the answer to that question, and I’ve only just thought of this or it’s only just come to my mind, is that I think the answer is no. I don’t think the Internet is self-aware. And I think the reason, perhaps, is the same reason that I don’t think the Earth is self-aware, or the planet is self-aware, even though it is, as you quite rightly say, a fabulously self-regulating system. But I think self-awareness and sentience and in turn consciousness, need not just highly-connected networks, they also need the right architecture.

The point I’m making here, it’s a simple observation, is that our brains, our nervous systems, are not randomly connected networks. They have architecture, and, that is an evolved architecture. And it’s not only evolved, of course, but it’s also socially conditioned. I mean, the point is that, as I keep going on about, the only reason you and I can have this conversation is because we were both, we share a culture, a cultural environment, which is itself highly evolved. So, I think that the emergence of consciousness, as I’ve hinted, comes as part and parcel of that emergence of communication, language, and ultimately culture.

I think the reason that the Internet, as it were, is unlikely to be self-aware, it’s because it just doesn’t have the right architecture, not because it doesn’t have lots of processing and lots of connectivity. It clearly has those, but it’s not connected with the architecture that I think is necessary—in the sense that the architectures of animal nervous systems are not random. That’s clearly true, isn’t it? If you just take one hundred billion neurons and connect them randomly, you will not have a human brain.

Right. I mean, I guess you could say there is an organic structure to the Internet in terms of the backbone and the nodes, but I take your point. So, I guess where I’m going with all of this is if we make a machine, and let’s not even talk about conscious for a minute, if we make a machine that is self-aware and is sentient in the sense that it can feel the world, how would we know?

Well, I think that’s a problem. I think it’s very hard to know. And one of the ethical, if you like, risks of AI and especially brain emulation, which is in a sense, a particular kind of AI, is that we might unknowingly build a machine that is actually experiencing, as it were, phenomenal subjectivity, and even more worrying, pain. In other words, a thing that is experiencing suffering. And the worst part about it, as you rightly say, is we may not even know that it is experiencing that suffering.

And then, of course, if it ever becomes self-aware, like if my Roomba all of a sudden is aware of itself, we also run the risk that we end up making an entire digital race of slaves, right? Of beings that feel and perceive the world, which we just build to do our bidding at our will.

Well, yeah. I mean, the ethical question of robots as slaves is a different question. But let’s not confuse it or conflate it with the problem of artificial suffering. I’m much less ethically troubled by a whole bunch of zombie robots, in a sense, that are not sentient and conscious, because they have, I won’t say zero, but a rather low claim on moral patiency. If they were at all sentient, or if we believed they were sentient, then we would have to treat them with a level of moral patiency that we absolutely do not treat robots and AIs with right now.

When robot ethics comes up, or ethics and AI, and people want a real, immediate example that we have to think about—aside from the use of these devices in war—the one that everybody knows is the self-driving car: do I drive off the cliff or run over the person? One automaker has come out and specifically said, “We protect the driver. That’s what we do.” As a robot ethicist, how do you approach that problem, just that single, isolated, real-world problem?

Well, I think the problem with ethical dilemmas, particularly the trolley problem, is that they’re very, very rare. I mean, you have to ask yourself, how often have you and I, I guess you drive a car, and you may well have been driving a car for many years, how often have you faced a trolley problem? The answer is never.

Three times this week. [Laughs] No, you’re entirely right. Yes. But you do know that people get run over by cars.

Sure.

We have to wrestle with the question because it’s going to come up in everything else, like medical diagnoses and which drugs you give to which people for which ailments, where a drug may or may not cause a rare, lethal reaction. It really permeates everything, this assessment of risk and who bears it. Is it fundamentally the programmer? Because that’s one way to say it: robots don’t actually make any decisions, it’s all humans, and so you just follow the coding trail back to the person who decided to do it that way.

Well, what you’ve just said is true. It’s not necessarily the programmer, it’s certainly humans. My view—and I take a very hard line on this—is that humans, not robots, are responsible agents, and I mean including AI. So, however a driverless car is programmed, it cannot be held responsible. I think that is an absolute fundamental, I mean, right now. In several hundred years maybe we might be having a slightly different conversation, but, right now, I take a very simple view—robots and AIs cannot be responsible, only humans.

Now, as for what ethics do we program into a driverless car? I think that has to be a societal question. It’s certainly not down to the design of the programmer or even the manufacturer to decide. I think it has to be a societal question. So, you’re right that when we have driverless cars, there will still be accidents, and, hopefully, there will be very few accidents, but still occasionally, very rarely we hope, people will still be killed in car accidents, where the driverless car, as it were, did the wrong thing.

Now, what we need is several things. I think we need to be able to find out why the driverless car went wrong, and that really means that driverless cars need to be fitted with the equivalent of the flight data recorder in aircraft, what I call an ethical black box. We have a paper on that, which we’re giving in the next couple of weeks, called “The Case for an Ethical Black Box.” And we need to have regulatory structures that mean that manufacturers are obliged to fit these black boxes to driverless cars, and that accident investigators have the authority and the power to be able to look at the data in those ethical black boxes and find out what went wrong.

But, then, even when you have all of that structure in place, which I think we must have, there will still be occasional accidents, and the only way to resolve that is by having ethics in driverless cars, if indeed we do decide to have ethics in them at all, which I think is itself not a given. I think that’s a difficult question in and of itself. But if we did fit driverless cars with ethics, then those ethics need to be decided by the whole of society, so that we collectively take responsibility for the small number of cases where there is an accident and people are harmed.
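
Winfield’s paper sets out what such a recorder should actually capture. Purely as an illustrative sketch, and not the paper’s specification, an ethical black box might be an append-only, tamper-evident log of sensor summaries, internal decisions, and actuator commands, along these lines, with every field name invented here:

```python
# Illustrative sketch only: an append-only, hash-chained log in the spirit of a
# flight data recorder. The fields are invented; Winfield's paper specifies
# what a real "ethical black box" should record.
import hashlib, json, time

class EthicalBlackBox:
    def __init__(self):
        self._records = []
        self._prev_hash = "0" * 64

    def log(self, sensors, internal_state, actuator_commands):
        record = {
            "timestamp": time.time(),
            "sensors": sensors,                  # e.g. summarized perception outputs
            "internal_state": internal_state,    # e.g. chosen plan, confidence
            "actuators": actuator_commands,      # e.g. steering, braking commands
            "prev_hash": self._prev_hash,        # chaining makes tampering evident
        }
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._records.append((digest, record))
        self._prev_hash = digest

    def export(self):
        """What an accident investigator would read back."""
        return list(self._records)

ebb = EthicalBlackBox()
ebb.log({"pedestrian_detected": True}, {"plan": "emergency_brake"}, {"brake": 1.0})
print(len(ebb.export()), "record(s) logged")
```

The hash chaining is just one simple way to make after-the-fact tampering evident to an accident investigator.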

Fair enough. I have three final questions for you. The first is about Weizenbaum, who famously made ELIZA, which, for the benefit of the listener, was a computer program in the ’60s that was a simple chatbot. You would type, “I have a problem,” it would say, “What kind of problem do you have?” “I’m having trouble with my parents,” “What kind of trouble are you having with your parents?” and it goes on and on like that.

Weizenbaum wrote it, or had it written, and then noticed that people were developing emotional attachments to it, even though they knew it was just a simple program. And he kind of did a one-eighty and turned against it all. He distinguished between deciding and choosing. And he said, “Robots should only decide. It’s a computational action. They should never choose. Choosing is for people to do.”
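
For reference, the trick behind ELIZA was simple keyword-and-template pattern matching. The sketch below is in that spirit rather than being Weizenbaum’s actual DOCTOR script; the rules are invented to mirror the exchange Byron quotes.

```python
# Minimal ELIZA-style pattern matching: find a keyword pattern in the user's
# input and echo part of it back inside a canned template.
import re

RULES = [
    (re.compile(r"i have a problem", re.I), "What kind of problem do you have?"),
    (re.compile(r"trouble with my (\w+)", re.I),
     "What kind of trouble are you having with your {0}?"),
    (re.compile(r"i feel (\w+)", re.I), "Why do you feel {0}?"),
]

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."   # default when no keyword matches

print(respond("I have a problem"))
print(respond("I'm having trouble with my parents"))
```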

What do you think he got right and wrong, and what are your thoughts on that distinction? He thought it was fundamentally wrong for people to use robots in positions that require empathy, because it doesn’t elevate the machine, it debases the human.

Yeah, I mean, I certainly have a strong view that if we do use robots at all as personal assistants or chatbots or advisors or companions, whatever, I think it’s absolutely vital that that should be done within a very strict ethical framework. So, for instance, to ensure that nobody’s deceived and nobody is exploited. The deception I’m particularly thinking of is the deception of believing that you’re actually talking to a person, or, even if you realize you’re not talking to a person, believing that the system, the machine is caring for you, that the machine has feelings for you.

I certainly don’t take a hard line that we should never have companion systems, because I think there are situations where they’re undoubtedly valuable. I’m thinking, for instance here, of surrogate pets. There’s no doubt that when an elderly person, perhaps with dementia, goes into a care home, one of the biggest traumas they experience is leaving their pet behind. People I’ve spoken to who work in care homes for the elderly, elderly people with dementia, say that they would love for their residents to have surrogate pets.

Now, it’s likely that those elderly persons may recognize that the robot pet is not a real animal, but, nevertheless, they still may come to feel that the robot, in some sense, cares for them. I think that’s okay, because I think the balance between the therapeutic benefit of the robot pet and, as it were, the psychological harm of being deceived in that way weighs in favor of the benefit.

But really the point I’m making is, I think we need strong ethical frameworks, guidelines and regulations that would mean that vulnerable people, particularly children, disabled people, elderly people, perhaps with dementia, are not exploited perhaps by unscrupulous manufacturers or designers, for instance, with systems that appear to have feelings, appear to have empathy.

As Weizenbaum said, “When the machine says, ‘I understand,’ it’s a lie, there’s no I there.”

Indeed, yes, exactly right. And I think that rather like Toto in the Wizard of Oz, we should always be able to pull the curtain aside. The machine nature of the system should always be transparent. So, for instance, I think it’s very wrong for people to find themselves on the telephone and believe that they’re talking to a person, a human being, when in fact they’re talking to a machine.

I agree.

Second question, what about science fiction? Do you consume any in written or movie or TV form that you think, “Ah, that could happen. I could see that future unfolding”?

Oh, lots. Well, I mean, certainly I consume a lot of science fiction, though not all of it, by any means, would I expect or like to see happen. Often the best sci-fi is dystopian, but that is okay, because good science fiction is like a thought experiment, but I like the utopian kind, too. And I rather like the kind of AI utopia of The Culture, from the Iain M. Banks Culture novels—a universe in which there are hugely intelligent, and rather inscrutable, but, nevertheless, rather kindly and benevolent AIs, essentially, looking after us poor humans. I kind of like that idea.

And, finally, you’re writing a lot. How can people keep up with you and follow you and get all of your latest thinking? Can you just go through the litany of resources?

Sure. Well, I don’t blog very often, because I’m generally very busy with other stuff, but I’d be delighted if people go to my blog, which is just: alanwinfield.blogspot.com, and also follow me on Twitter. And, again, I’m easy to find. I think it’s just @alan_winfield. And, similarly, there are quite a few videos of talks that I’ve given to be found on YouTube and online generally. And if people want to get in touch directly, again, it’s easy to find my contact details online.

Alright, well thank you. It has been an incredibly fascinating hour and I appreciate your time.

Thank you, Byron, likewise, very much enjoyed it.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.
