Interview with Stephen Wolfram on AI and the future

Why do you use the word “artificial”? Because in my understanding of artificial intelligence, it really is artificial: something that looks intelligent, but isn’t really, the way artificial fruit is truly artificial. We build it to look that way, but it isn’t. Why do you even keep that word in? Why aren’t you saying, “I’m building intelligence”?

Maybe I should.

Because what is artificial about it at all, in your view?

I think that’s just a historical word—one of the things one realizes in dealing with language is that language is this weird historical artifact that has to do with a collective agreement about how to describe things. A word can just become a word. There is clearly a meaningful distinction to be made at a practical level between intelligence that comes from brains that grow in humans and intelligence that comes from software downloads or something. So in that sense, the distinction is worth having; whether “artificial” is the right word for it, and how one should take that word, I’m not sure.

I think this question about goals … I’ve been thinking about this for a while, and I haven’t really resolved my thinking about it, and as I say, I’m still very conflicted because the things that we seem to be on course to do are things that don’t agree with my personal prejudices about what I hope happens, so to speak. But one of the issues is, right now, many goals that one has have to do with scarce resources, finite human life spans, things like this. If you imagine—and maybe we can’t successfully imagine it—removing those constraints, what then happens to human goals? Is the pressure to do things then largely removed? We [would then] have no basis for knowing what our goals should be. The historical forces that have shaped our goals aren’t there anymore. And so, one of the things that I have certainly thought about is that, just as for a thousand years people would look to the wisdom of the ancients to try to understand how to live one’s life and what goals were right to pursue, people in the future may look back at us in the same way. One of the funny possibilities is that people will say, “Well, when humans were really humans, what were their goals? Those were the right goals. Those are the goals it honors our civilization to pursue,” or however else it would be characterized. And our generation is the first one where there is really detailed recording of what we’ve done, and to some extent, why we’ve chosen to do what we do: the email, the social media, the personal analytics, all these kinds of things. There’s a lot of information about what we’ve done and why we’ve done it, and so, I could imagine, in one scenario of the future, some number of years hence when a lot of the constraints that we have today have been removed, it’s like, “Let’s go back and look at those guys who were living at a time when the constraints hadn’t been removed, but where we have enough information to tell why they did what they did. Let’s microscopically reconstruct the choices that those seven billion people made, and let’s codify those into what we think is the right way to behave as a genuine non-artificial human, so to speak.” Perhaps this won’t be what actually happens, but I think this is one of the possible [outcomes].

Talk a little bit about some of the stuff that you are doing at Wolfram|Alpha that people can actually go and kick around with and use.

There are things like imageidentify.com, which is really just using one out of 5,000 functions in our Wolfram Language. What we’ve been trying to do is build a knowledge-based language where we can take the knowledge of our civilization and encode it in a form that can, in a sense, be concretely built on top of, where we have a language that allows you to start from the frontier of what our civilization has already achieved and then build whatever it is you want to build. There’s a lot of practicality behind that, of clouds, and mobile apps, and being able to deploy things in different environments, and dealing with the current, very complicated, practical world of software engineering, and so on.

But the objective is to be able to have something where, if you have an idea, a goal, then the language that we’ve provided and the system we’ve built will let you, with the minimal possible effort, actually realize—actualize—the idea, the goal, whatever, and turn it into a web app that’s running somewhere, or an API that gets called by lots of other things, or some consumer product of some kind. That’s really the goal: to encapsulate the knowledge about algorithms, computation, and data that has been accumulated in our civilization, and put it in a form where one can immediately build from it. I’m still searching for exactly the right way to understand from a historical point of view what it is we have been doing, and I’m sort of understanding—I thought it was a fairly big thing, which is why I spent many years of my life doing it. But I think it’s actually a bigger thing than I thought it was, in the sense that this idea is still rather abstract, but it’s this point at which one starts being able to take computationally encoded knowledge, take the whole corpus of what already exists in the world, and then start building on top of it in a systematic way. I think that’s an important thing, how that all works. There’s just a lot to say about this.
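As a rough sketch of this “start from the frontier and build” idea, assuming the documented Wolfram Language functions ImageIdentify, APIFunction, and CloudDeploy, something like the following identifies an image and deploys that capability as a web API in a few lines; the details are illustrative, not drawn from this conversation.

```wolfram
(* Identify the main object in a sample image with a single built-in function *)
img = ExampleData[{"TestImage", "Mandrill"}];
ImageIdentify[img]

(* Deploy the same capability as a public web API that accepts an uploaded
   image; the parameter name "image" is an arbitrary choice for this sketch *)
CloudDeploy[
 APIFunction[{"image" -> "Image"},
  CommonName[ImageIdentify[#image]] &],
 Permissions -> "Public"]
```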

One of the things that’s confusing about this is, say, “Where is the frontier of what you’ve reached?” For me, I’m a gradual-understanding kind of guy, and it takes me sometimes a decade to actually understand the significance of something. Even if I might have an intuition that this is a good thing to do, to really understand more globally what the significance of it is takes a while. I’m at the stage where we’ve just got an incredible number of things that are happening, that we can now do, and there’s a slowly dawning global understanding of what it means that I don’t have completely nailed down.

Well, I’m really curious about one question that’s going to sound frivolous, but I mean it in all sincerity. Does weather, in your opinion—I know you don’t know, but does weather have a mind of its own?

Well, what do you mean, I don’t know? It depends what you mean by the question.

Well, I assumed at some level you’re guessing.

No, I’m not guessing.

Okay. I will ask my question. Does weather have a mind of its own?

So, what do we mean by “mind”? I have tried to define that term in an abstract way, and based on what I’ve managed to figure out, the answer is yes. We can be more explicit about this. We can say, if we are trying to discover extraterrestrials, for example, and we see certain kinds of signals, then are they mind-like signals? Are they signs of a sort of computational process that is mind-like? Or are they mere physics, so to speak? We say, “Well, gosh, if it’s just the physics of a pulsar magnetosphere, it’s mere physics.” Well, I hate to point it out, but our brains are also mere physics. And the question is, is your mere physics better than my mere physics? And the thing that came out of lots of this basic science that I did was this idea of the Principle of Computational Equivalence, which basically says that you don’t get to make that distinction; there really isn’t a distinction. All these different things have an equivalent level of computational sophistication. You can’t say, “That’s a mind, and that’s not a mind.” Now, you can say, “That’s a human-like mind, and that’s not.” That’s a little different, but when you say “human-like,” you’re saying it’s a mind that deals with certain senses. Could you imagine a situation where there are humans who don’t have the same senses that they have today? Well, obviously, there are plenty of humans who have a smaller set of senses, with some of them enhanced relative to others, and so on. And I don’t think anybody would imagine that there’s anything fundamentally different about those minds compared with other human minds.

What I’m saying is that abstractly, should we consider it [weather] to have a mind? I think the answer is yes. It clearly doesn’t have a mind that has shared history with the minds that we humans have, and it doesn’t have lots of the details that human minds have, but I don’t think that those are essential details when it comes to defining a mind-like thing. And, for example, as we build computers that do mind-like things, almost any one of those specifics will be absent from at least some versions of those mind-like computer systems.

Well, the interesting thing about these questions is that while people have been asking them, and trying to answer them since the beginning of civilization, this is the first time the answers have ever really been more than theoretical, because at some point, you are going to build a machine that claims it’s conscious and is entitled to rights.

Absolutely! No, that’s correct. I think that this is the time. You know, I’ve been interested a lot in the ways of encoding everyday discourse and knowledge and so on, and I say, “Well, gosh, I should look at what people have done on this.” And I realized, well, Aristotle did stuff about this, so did Leibniz. Not a lot’s been done since. And these basic questions of how to encode the world have been sitting out there for a couple of thousand years. And part of the reason not much progress has been made is because nobody really cares. You can have a big debate about whether John Wilkins’ philosophical language is better than the kinds of things that Aristotle came up with, but it really hasn’t made a lot of difference.

What we end up doing, it’s like, “Okay, we’ve got to turn this into a practical system, and actually build out a precisely defined language that captures features of the world.” And now, it really makes a difference. It’s kind of fun for me because I’ve been somewhat aware of lots of things about philosophy for a long time, and I’ve always thought that philosophy is, in a sense, very floppy. You can argue back and forth for 2,000 years, but it’s been amusing to me that some of these arguments, about how ontology works or how epistemology works, have to turn into a piece of code for us, and then there’s no more argument. You have to actually come up with a conclusion and turn it into code, and then, I suppose, the argument starts again. Given the code and how it behaves, you can now argue, “Well, what does this mean? How can we understand it in terms of this or that sort of framework for thinking about these kinds of things?” But yes, it’s really an interesting time, because it’s a time when philosophy gets implemented as software, basically.
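A small, hedged sketch of what “philosophy implemented as software” can look like in practice, using the Wolfram Language’s documented Interpreter and EntityValue constructs (the example is illustrative, not something specified in this conversation): a “city” stops being a debatable abstraction and becomes a symbolic object with computable properties.

```wolfram
(* Turn free text into a precisely defined symbolic entity *)
city = Interpreter["City"]["paris"]

(* Once the world is encoded this way, an ontological question
   becomes an evaluation *)
EntityValue[city, "Population"]
```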

Now, you’re quick to point to very valid examples of humans wanting to maintain their distinctiveness, wanting to be, not just different from, but better than, the machine. I think it boils down, though, to the fact that humans feel self-aware, and they don’t know why that is, or what that is, and they just have a sense they can’t articulate that machines don’t have this capability.

Really? I don’t know. Think of how many ways, when you describe what your computer is doing, you anthropomorphize what’s happening. And as we see these things that have more, oh, I don’t know … like playing around with image identification. It really is very human-like. And for me, the fact that I could dissect it and know what every bit does, it doesn’t really—so what? Let me give you an example. I was recently, to my great chagrin, involved in debugging a bunch of issues to do with our Wolfram Cloud system. I haven’t dived that deep in probably a decade or more. But anyway, so I dive in, and yeah, sure, every bit is accounted for. Every bit is deterministic, and so on. But what’s it doing? It’s really complicated, and one has to become kind of a computer psychologist to figure out what’s going on, and one ends up with descriptions of what’s going on inside, and the kinds of descriptions that people end up giving sound very psychological. As for the statement that “my computer has no soul,” so to speak, I don’t think that’s the impression anymore. Sometimes the computer is kept on a very short leash, because it has been made to do things that are very specifically predetermined by engineers who incrementally built up the sequence of steps to get it to do what it does. But I think when we let the computer have its head, so to speak, and just do its own exploration of the computational universe, albeit quite automatically, the things it does come up with are things which seem to us to show no particular evidence of that underlying determinism—which, by the way, I think we share anyway; the underlying determinism, that is.

As I say, the disembodied intelligence, the raw intelligence of “Okay, I’ve got this computer, and it’s intelligent, and isn’t that nice”: without some addition of goals and a more detailed history, I don’t think that ends up being [much]. It’s almost a null kind of thing to have produced. It’s almost generic. It’s almost like saying, “Well, I’ve made a piece of the universe again.” It’s too unspecific to be useful, so to speak.

But if we are fundamentally, at our core, deterministic (and, to your point, we don’t look it because the math is beyond us, but we are), what do you think emotions are? Are they real in the sense that we feel them? Will the computer love, and will it hate?

Here’s the terrible thing. We’re building stuff that tries to do emotion-space analysis of things, looking at facial expressions or text or whatever else, and in effect saying, “Okay, that means the dopamine level was up at this moment,” because the collective effect of the thinking that was happening results in the secretion of dopamine, which did this and that and the other, and had this or that global effect.
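For text, at least, a crude form of this kind of emotion-space analysis is already a one-liner; the following is only an illustrative sketch using the Wolfram Language’s built-in sentiment classifier, with an arbitrary example sentence, not the specific system being described here.

```wolfram
(* Overall sentiment class for an arbitrary example sentence *)
Classify["Sentiment", "I was thrilled that the deployment finally worked."]

(* Probabilities over the sentiment classes: a very low-dimensional
   "emotion space" *)
Classify["Sentiment",
 "I was thrilled that the deployment finally worked.", "Probabilities"]
```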

It’s a funny thing for me because I’m a very people-oriented person in many ways, and yet, what I end up doing from this science and technology point of view is, in a sense, a deep deconstruction of the human condition. It’s a little weird for me because I think of emotions as real, but on the other hand, at a rational level, I’m like, “Okay, we’re going to turn this into numbers, into points in an emotion space,” and all this kind of thing. I think what I have perhaps gotten to the point of doing, for myself, is, to some extent, dissociating the “here’s how I feel,” knowing that if I really were to dissect how I feel and really were to make all kinds of brain measurements and so on, on myself, I would probably be able to logically understand every piece of how I feel, and it would be like, “Well, it’s all just turned into bits, and there’s nothing feeling-like about it.” I guess I personally just live life the way I feel, so to speak, and separately understand that underneath, it’s sort of all just bits that are operating quite deterministically, and I can study those, and make use of those, and build technology based on those, and so on.

I think the challenge with that, though, is that we have clawed our way to a point where we have defined these things called “human rights.” And they are all predicated on the assumption that we are somehow different. A chair doesn’t have chair rights. And the danger with this viewpoint, perhaps, is that in making such machines, you do not actually ennoble the machine. You do not bring the machine up to the level of the human; instead, you remove the whole basis for saying we have human rights, because then we would have no more rights than an iPhone or anything else.

Rights like that are a different question. Should bots be able to get social media accounts? That’s a very concrete current issue. My bot has built up huge amounts of state, let’s say. Should you be allowed to just delete my bot? I think that’s complicated. To me, if I’ve got a thing, and a quadrillion computer operations have contributed to the state it’s in now, I feel, in a sense, almost morally wrong deleting that thing. And to me, that’s the beginning of feeling, gosh, okay, even though it was just a quadrillion operations from a computer, something feels wrong about just saying, “Okay, press the delete button. It’s gone.” Just as for a long time, it felt wrong to me to kill a mosquito, although I eventually decided that it was me versus them and they were going to lose.

Your question about what’s our justification for giving ourselves rights, and not giving rights to things that have similar intelligence: in human history, sadly, I don’t think there’s an abstract way of deciding. Ethics is not something that can abstractly just be decided, any more than goals can be abstractly decided. Ethics is about, in a sense, the way we feel things should be. There are different theories of it, and I suppose there are sort of evolutionary theories where if you have the wrong ethics, you just won’t exist anymore, so therefore, by natural selection, so to speak, only certain ethics will survive. It’s like how every religion digs down to have some kind of creation story, a foundational justification of things, and it feels like what you’re looking for here is a foundational justification for, “Why should we humans hold ourselves out as special?” And I think the answer is, well, if that’s the way we choose to do it, it’s really a choice, not a—we’re not going to be able to find the divine right of humans-type thing. We’re not going to be able to find a scientific justification for why humans shouldn’t let their computers, let their bots, have rights, so to speak. I don’t know.

How many years away, in your mind, are we from that becoming a mainstream topic? Is this a decade, or 25 years, or …?

I think more like a decade. Look, there are going to be skirmish issues that come up more immediately. The issues that come up more immediately are, “Are AIs responsible?” That is, if your self-driving car’s AI glitches in some way, who is really responsible for that? That’s going to be a pretty near-term issue. There are going to be issues about, “Is the bot a slave, basically? Is the bot an owned slave, or not? And at what point is the bot responsible for its actions, as opposed to the owner of the bot? Can there be un-owned bots?” That last one, just for my amusement, I’ve thought about: how the world would be set up in the very amusing scenario of un-owned bots, where it’s just not clear who has the right to do anything with the thing, because it doesn’t have an owner. A lot of the legal system is set up to depend on—okay, companies were one example. There was a time when it was just, “there’s a person, and they’re responsible,” and then there started to be this idea of a company.

So I think the answer is that there will be skirmish issues that come up in the fairly near term. On the big “bots as another form of intelligence on our planet”-type question, I’m not sure. … Right now, the bots don’t particularly have anybody advocating for them, but I think there will be scenarios in which that ends up happening. As I say, there are these questions about the extent to which a bot is merely a slave of its owner. I don’t even know what happened historically with that, to what extent there was responsibility on the part of the owner. The emancipation of the bots, it’s a curious thing. Here’s another scenario. When humans die, which they will continue to do for a while, and many aspects of them are captured in bots, it will seem a little less like, “Oh, it’s just a bot. It’s just a bag of bits that’s a bot.” It will be more like, “Well, this is a bot that sort of captures some of the soul of this person, but it’s just a bot.” And maybe it’s a bot that continues to evolve on its own, independent of that particular person. And then what happens to that sort of person-seeded but independently evolved bot? At what point do you start feeling that it isn’t really right to just say, “Oh, this bot is just a thing that can be switched off, that doesn’t have any expectation of protection and continued existence, and so on”?

I think that that’s a transitional phenomenon. I think it’s going to be a long time before there is serious discussion of generic cellular automata having rights, if that ever happens. In other words, something disconnected from the history of our civilization, something that is not connected to and informed by what we have built, is a long way off; things created in this kind of knowledge-based, language way of creating things are going to be a much more near-term issue. At some level, then, we’re back to, “Does the weather have a mind of its own?” and “Do we have to give rights to everything, animistically?” It’s animism turned into a legal system, and, of course, there are places where that’s effectively happening. But the justification is not a “don’t mess with the climate, it has a soul” type of thing. That’s not really the rhetoric about that, although I suppose with the Gaia hypothesis, one might not be so far away from that.