
Interview with Stephen Wolfram on AI and the future

So you don’t think there’s a distinction between doing intelligent things and being intelligent? That those are identical, you would say?

Yes, I think I would say that. That’s a different way of putting what I have thought deeply about, but I think it’s equivalent to what you’re saying.

And what do you think this thing we call “self-awareness and consciousness” is?

I don’t know. I think that it’s a way of describing certain kinds of behaviors, and so on. It’s a way of labeling the world, it’s a way of—okay, let me give a different example, which I think is somewhat related, which is free will. It’s not quite the same thing as consciousness and self-awareness, but it’s a thing in that same soup of ideas. And so you say, “Okay, what does it take to have free will?”

Well, if something outside you can readily predict what you will do, you don’t seem to have free will. If you are the moth that’s just repeatedly bashing itself against the glass around the light, it doesn’t seem like it has free will. The impression of free will comes typically when you can’t predict what the system is going to do. Or another way to say it is, when the will appears to be free of any deterministic—well, not any deterministic—rules, but it’s not purely determined by rules that you can readily see. Now, people get very confused because they say, “Oh, gosh! Absent things about quantum mechanics, and so on, the laws of physics seem to be fundamentally deterministic.” How can it be the case, then, that we have free will if everything about the universe—and people like me actually believe this is how the whole of physics works—is ultimately deterministic, is ultimately determined by a fixed set of rules that just get run where there’s a unique possible next step, given the rule, given what happened before? How can that be consistent with free will? Well, I think the point is that it’s a consequence of something I call “computational irreducibility.” It’s a consequence of the fact that even though you may know the rules by which something operates, when the thing actually runs and applies those rules many times, the result can be a sophisticated computation—in fact, a computation sufficiently sophisticated that you can’t predict its outcome much faster than just having the computation run itself and seeing what happens.
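A minimal illustration of that point (my own sketch in Python, not Wolfram's code): Rule 30, an elementary cellular automaton whose update rule fits in one line, produces a pattern for which no general shortcut formula is known; to learn the state after n steps you essentially have to run all n steps.

```python
# A minimal, hedged illustration (not Wolfram's code): Rule 30, an
# elementary cellular automaton whose update rule fits in one line,
# yet whose long-run pattern has no known shortcut formula. In general
# the only way to learn the state after n steps is to run all n steps.

def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (zero boundary)."""
    padded = (0,) + cells + (0,)
    return tuple(
        1 if (padded[i - 1], padded[i], padded[i + 1]) in
             {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)} else 0
        for i in range(1, len(padded) - 1)
    )

# Start from a single black cell and just run it: each row has to be
# computed from the previous one; there is no jumping ahead.
state = tuple(1 if i == 40 else 0 for i in range(81))
for _ in range(40):
    print("".join("#" if c else "." for c in state))
    state = rule30_step(state)
```

Printing the rows makes the point visually: each row can only be obtained by applying the rule to the row before it.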

But it is still deterministic at that point?

It’s absolutely deterministic.

You’re saying we’re just not smart enough to do it, but it’s still there—that there’s a line of cause and effect connecting the Big Bang to this conversation, one that would have been foreseeable given a big enough computer.

That’s correct, but here’s the thing. The point about computational irreducibility is—and it’s related to this thing I call the Principle of Computational Equivalence—you just don’t get to be smart enough to predict these things. In other words, all the different things around in the universe, whether they’re fluids flowing in the atmosphere, or whether they are brains in humans, they’re all equivalent, ultimately, in their computational capabilities. And so that means that you can’t expect a shortcut. Even if you have a tiny little cellular automaton with rules that you can write down in half a line of code or something, it gets to do computations as sophisticated as those that happen in our brains. It’s not just that, in practice, you can’t predict what will happen; in principle, there’s no way to predict what will happen. In other words, the way that any way of formalizing things works—in a sense the way that math works, though it isn’t really what people traditionally think of as math—there are many systems where the computations they do are equivalently sophisticated, which means that you can’t jump ahead and see, “Oh! That thing isn’t free of its deterministic rules. I can tell what it’s going to do.” It seems to be robotic, in some sense.

But aren’t you saying, therefore, that there is no such thing as free will? That the math is so complicated it sure looks like there is, and for all intents and purposes the universe functions as if there were, but there really isn’t any?

What do we mean by “free will”? The question is—for example, what are the consequences of having free will, or not having free will? There are things like, “Are you responsible for your actions?” If you have free will, you might be responsible for your actions. If you don’t, then, well, it’s a not-up-to-you type thing. So here’s, I think, how that works. The typical dichotomy: Is the behavior that I’m showing something that’s a consequence of the environment that I’m in, where I’m merely sort of a transducer of the randomness of my environment, or is it something which is intrinsic to me that is producing the behavior I’m producing? I am generating the random sequence of the digits of pi, and that’s what determines my behavior, and it’s coming from inside me, versus, I’m being buffeted by random noise from the outside, and that’s what’s determining what I do.

Well, what I’m saying is that these behaviors are generated by processes that are intrinsic to us. And while those intrinsic processes are determined, they are still intrinsic to us. It’s not like we just get to say, “Oh, I did that because of some terrible experience I had years ago as a result of randomness from the world.” I think, when you say it slightly disparagingly, you’re saying there isn’t any free will, and everything’s over.

Let me make another point, which is, what is the alternative to what I’m talking about? Let’s imagine that you could work out the outcome of some long process of computational evolution, so to speak; that there was a smarter being who could always work out what was going to happen. That would be very disappointing for us because, in a sense, it would mean that our whole progression of history didn’t really add up to anything, because it’s like, “Oh, forget these thousands of years of cultural evolution.” With a better computer or a smarter organism, whatever else, you would just be able to say, “Okay, forget it. We know what’s going to happen. We know what the outcome is.” The point about computational irreducibility is that that can’t be the case. There are detailed questions about, “Could you run it at twice the speed?” and things like this. But fundamentally, the outcome is something that is not—there’s an irreducible amount of computation. The computation that’s going on is, in a sense, irreducible.

Well, I think a possible alternative is this: the ant colony is smarter collectively than all of the individual ants put together. Our bodies are made up of cells that don’t know that we exist, and we’re something emergent. We’re more than the sum of them. There’s a notion called the Gaia hypothesis, that humans are essentially cells in a larger organism, and that at each of these levels there’s an emergent property governed by a set of laws of physics that we don’t know and which don’t look anything like the ones we are familiar with. It isn’t magic; it’s that there’s something that happens that we don’t understand, and it is that which separates the brain from the mind.

Consciousness is the thing in all of the universe we are most aware of. It’s “I think, therefore, I am.” Now, you might say people are just self-delusional, and they aren’t really conscious. But most people, even those who feel very deeply about the power of technology, believe they are something fundamentally different than an iPhone. They aren’t a progression along a path, of which an iPhone is just a less developed version of them. They believe that they are something fundamentally different than that. The iPhone doesn’t have any emergent properties; it is simply the sum of its parts.

I’ve been thinking about stuff for probably 35 years now, so I’ve got definite comments on this. In terms of, “you start from some set of rules, and then you see what emerges from that,” absolutely. There is an irreducible difference between what emerges and what goes in, so to speak, and that’s what we see with endless cellular automata, other kinds of simple computational systems, and so on. That’s the sort of thing that’s in my New Kind of Science efforts. That’s the paradigm-breaking point that, even though in the computational universe of possible programs lots of programs are very simple, their behavior is often not, which is something that we as humans are just not particularly used to. Now, I’ve been living with that for 35 years, and I’m more used to it at this point, but when I first encountered it, I was like, “This can’t possibly be right. This just isn’t the way things work. The only way to make something complicated like that is to go through lots of human engineering effort and put the work in to make the complexity, so to speak.”

Now, when you talk about an iPhone, and it doesn’t seem to be emergent—yeah, that’s, well, in large measure, correct. It is a product of a particular tradition of engineering in which one is moving step by step, and saying, “We’ve got to understand every step of what we’re doing.” What I’ve been doing for the last decade or more, is often engineering—and by the way, this is also what’s happened with the modern AI stuff. It’s a form of engineering in which you define what you want to come out, and then, you say … okay, like in my case, often we’ve done these big searches for trillions of different possible programs that can achieve a particular objective; in the current neural network case, one’s doing sort of a more incremental evolutionary-type search to find the form of neural network that has successfully learnt certain knowledge.

The point in those things is that what one’s doing there is—most engineering, as it’s been done over the last couple of thousand years that we’ve had sort of reasonable engineering—it’s always been, “Let’s put this piece in because we understand what its consequences are, and it’s going to achieve the objective we want.” What I’m saying is there’s a different form of engineering, in which you say, “Let’s define the goal, and then let’s essentially automate getting to how that goal is achieved.” And the result of that is programs, neural networks, whatever else, where we don’t understand. They’re not constructed to be understandable to us. They are simply constructed to achieve some goal, and you can say that those things have a rather different character from the robotic engineering—and “robotic,” by that, I mean, in the 1950s sense of you only get out what you put in. It’s very rigid—you define what’s going in to get the particular behavior you want. I think our intuition about how systems work—well, it’s already changed to some extent; for people like me, it changed a while ago because one understands from the science point of view.
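To make the contrast concrete, here is a hedged sketch (my construction, not Wolfram's actual search pipeline) of “define the goal, then automate finding the program”: exhaustively try all 256 elementary cellular-automaton rules and keep the one whose evolution best matches a stated objective, with no requirement that we understand why the winning rule works.

```python
# A hedged sketch (my construction, not Wolfram's actual search pipeline)
# of "define the goal, then automate finding the program that achieves it":
# exhaustively try all 256 elementary cellular-automaton rules and keep the
# one whose evolution best matches a stated objective, with no requirement
# that we understand why the winning rule works.

def step(cells, rule):
    """One update of an elementary CA given its Wolfram rule number."""
    padded = (0,) + cells + (0,)
    return tuple(
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    )

def score(rule, width=101, steps=50):
    """Objective (chosen arbitrarily here): after 50 steps from a single
    seed, roughly half the cells should be black. Higher is better."""
    state = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        state = step(state, rule)
    return -abs(sum(state) / width - 0.5)

best = max(range(256), key=score)
print("best rule:", best, "score:", round(score(best), 3))
```

The objective function here is arbitrary; the design point is that the human only states the goal, and the search, not incremental human reasoning, produces the artifact.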

My observation is that when you’re doing these explorations in this computational universe of possible systems, it’s very routine to discover the unexpected, and for these little systems that are made from tiny rules to be, in a sense, smarter than you are, and to be able to do things that you can’t foresee. I think that the time when we can readily understand how our engineering systems work is coming to an end. We’ve already encountered that a lot with bugs in software, and so on: “Oh, how can we understand what’s going on?” You know, you dive into a big modern software system, and it will take lots of science-like activity to debug what’s going on. It has behaviors that are really very elaborately different from the level of description that you came into it with. I think that’s perhaps one of the things people find disorienting about the current round of AI development: the notion that you can expect to understand how the engineering works is definitely coming to an end.

In a sense, we’ve been there before. Look at nature. Things happen in nature. We make use of things in nature. We entrain processes and systems that we find in nature for our own human technological purposes, and we can narrowly use them, even though we don’t necessarily understand how they work inside. In a sense, this is nothing new, but our engineering has mostly in the past been based on, “Let’s incrementally build things up, so that at every step we understand what’s going on.” And there’s nothing that is emergent about it. It’s all just built step by step, with understanding at every step. I think, in the future—and we can see this already in a lot of systems I’ve been building, too—that the humans are the ones who define the goal. The role of the technology is to automate getting to that goal, and how that goal is got to is really just the technology’s business. It’s not something where—yes, it can be academically intellectually interesting to understand, “Oh, how did the technology work to get me here?” But the objective is just to get there as automatically and efficiently as possible, without the constraint of necessarily having to understand how the thing worked inside, or having to be able to build it up incrementally, with understanding of each step.

What are your goals in this field? What are you trying to do other than just broadly push the puck forward?

[Laughing] That’s a good question. And I set myself up for that question by talking about goals. For me, there’s a certain set of ideas that I think I’ve gradually been understanding over the course of maybe 40 years now, and I think those ideas have certain logical conclusions and implications, and I’m trying to understand what those are. And really, the only way I’d know to understand how things like that work is to build real systems that do those things. And by the way, those real systems can be very useful, and that’s great. But in my own life, I’ve sort of alternated between doing basic science and doing technology development. And what’s typically happened is, the basic science tells me something about what kind of technology is worth developing, or can be developed, and then the technology development lets me actually be able to do more on the basic science side. And like right now, with all the things we’ve developed about computational knowledge, I think we’re again on the cusp of a real understanding of how symbolic language relates to traditional long-thought-about neural-net, more traditionally brain-like, AI activities—and now, what will the true end point of all of this be? I don’t really know. I’ve thought quite a bit about that. I’m not thrilled with some of the things that I’m understanding, in terms of where this goes and where it ends, because in a sense, my personal emotional goal structure is not necessarily aligned with what I can see scientifically and technologically as the path that we’re on.

Do you mean the robot overlord scenario?

Well, I think the question is, what’s the future of what humans end up doing? And I think the key point is—okay, so imagine you got to the point where there’s a box on your desk that you say, “Okay, we succeeded. This is an AI.” Well, it may be able to do processing of information in ways that are completely as good, or vastly better than human brains and all that kind of thing. But the question is, what will that box do? And in the abstract, there’s no kind of goal defined for the box, because the only way we know to define goals—it doesn’t really make any sense to say the box has a goal, because goals are things that are really very much connected with a cultural history. Goals for us are a very human-defined thing, where individuals have goals, those goals depend on their history, the history of our civilization, things like that.

But the box can be said to have a purpose. Wouldn’t that be synonymous?

Where did it get the purpose from?

We imputed it to it. We built the box to do something.

Absolutely, absolutely. But then, it’s the same story as for technology all the time, which is: humans define the goals, technology implements the goals. The question is, where do goals abstractly come from? And the answer, I think, is that whereas there may be an abstract notion of intelligence and computational ability and so on, there isn’t an abstract notion of goals. That is, a goal is an arbitrary thing.

People throughout history have wanted to think that people are special, in one way or another. And so, for example, people used to say, “Oh, we’re at the center of the universe,” okay? Then the whole Copernican thing came along, and that didn’t seem right, and people—there’s a whole series of this where we are completely special. Like, for example, in this conversation, you’re making the assertion that self-awareness and consciousness is a place in which, “Okay, that’s a piece of specialness that we have.” That’s our defensible “we’re really different from everything else” kind of thing. I don’t think that particular one is the case. I think the way in which we are special, and it’s almost tautological, is that we are the only things that have the particular history that we have, so to speak. For example, things like the goals that we have, which are in no way, in my view, abstractly defined. Even within human society, people will often say, “Well, of course, everybody should want to learn more and be a better person,” or that kind of thing, but we know perfectly well that there’s no such uniformity. You can go and talk to somebody who will say, “Well, of course, everybody wants to be rich,” but you can talk to plenty of people who say, “I just don’t care. It doesn’t make any difference.” Goals are not, I think, an absolute thing. They’re arbitrarily defined, and defined by history, and individuals, and things like this.

And I think when it comes to making an artificial intelligence, there is no intrinsic sense of goals. You can give it whatever goals you want. As a human, you can say, “Okay, I want this artificial intelligence to go and trade stocks for me and make the maximum possible amount of money.” Or, “I want this thing to go and be a great companion to me and help me to be a better me,” whatever else you want to do. But intrinsically, merely as a consequence of being intelligent, it doesn’t have some kind of goal. And I think the issue is, as you look to a future where much more automation has been achieved than in today’s world (we’ve already got plenty of automation, but vastly more will be achieved), many professions which right now require endless human effort will be basically completely automated, and at some point, whatever humans choose to do, the machines will successfully do for them. And then the question is, so then what happens? What do people intrinsically want to do? What will be the evolution of human purposes, human goals?

One of the things I was realizing recently—one of the bad scenarios, for me, looked at from my current parochial point of view—is maybe the future of humanity is people playing video games all the time and living in virtual worlds. One of the things that I then realized, as a sobering thought: looked at from 200 years ago, much of what we do today would look like playing video games, as in, it’s a thing whose end, whose goal, is something almost intrinsic to the thing itself, and it doesn’t seem related to—it’s like, why would somebody care about that? It seems like a thing which is just taking time and putting in effort; proving mathematical theorems, why would people care about that? Why would people care to use endless social media apps, and so on, and why would people care to play Angry Birds?

Well, don’t we know the answer to that question already, though? The wealthy are freed from having to do anything, and thus they can do anything they want or nothing at all. And there are two very distinct paths, the Star Trek path, where the goal is to better yourself, and the Wall-E path, where you just get progressively more sedentary. Don’t we see already that some people choose one, and other people choose the other?

Yeah. There’s great diversity in what people choose. So in some sense, your point could be that, just as we can define the space of all possible programs, we can define a space of all possible goals, and those things might even be representable by some symbolic language—in fact, we’re going to need that, because if we’re going to tell our AIs what to do, we’re going to need some language in which to describe it. And if we can have a symbolic language of goals, that’s the right way to do it. Given that, we can, in principle, just enumerate all possible goals; then we could ask, given the actual humans who exist, how do those humans spread out across the space of all possible goals?
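As a purely hypothetical sketch of what a tiny “symbolic language of goals” might look like, and how one could enumerate it, the toy vocabulary and grammar below are invented for illustration; nothing in the interview specifies them.

```python
# A purely hypothetical sketch of a tiny "symbolic language of goals" and
# an enumeration of its expressions. The vocabulary and grammar below are
# invented for illustration; the interview does not specify any of them.

from itertools import product

ACTIONS = ["maximize", "minimize", "maintain"]        # hypothetical verbs
QUANTITIES = ["wealth", "knowledge", "leisure"]       # hypothetical objects
SCOPES = ["for_self", "for_family", "for_everyone"]   # hypothetical scopes

def enumerate_goals():
    """Yield every goal expressible in this toy grammar as an
    (action, quantity, scope) triple."""
    yield from product(ACTIONS, QUANTITIES, SCOPES)

# One could then ask how actual humans spread out across this space,
# e.g., by tallying which symbolic goal best describes each person.
for goal in enumerate_goals():
    print(goal)
```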

13 Responses to “Interview with Stephen Wolfram on AI and the future”

  1. Eric O. LEBIGOT (EOL)

    “In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language” should be qualified: “Wolfram language” is only the new *name* of a language that was first released in 1988. Despite what the Wolfram company tries to make people believe, it should not sound like some long, underground work has led to a recent breakthrough. A more accurate description is that the Wolfram language has been progressively gaining more and more libraries over the years, and has essentially always tried to offer powerful, modern algorithms.

  2. Naufil

    Stephen Wolfram is no doubt one of the most brilliant minds of his generation. No doubt. But … the guy needs help getting his thoughts in line. This was one of the most difficult and rambling interviews I’ve ever read.

    Perhaps some of the responsibility lies with the author. It would have been nice if you had put in a bit of work editing his responses so that they read better. Maybe his responses are easier to understand, as phrased, in audio.

    In any case, happy to see GigaOm back.

  3. Daniel Bigham

    Thanks so much for this interview… reading it was very enjoyable. But…

    The elephant in the room, for me, and perhaps also for Byron, is that Stephen’s responses imply that qualia doesn’t exist. And it makes me curious: Does Stephen really think that? Or does he acknowledge qualia internally but avoid acknowledging it externally? Or has he simply spent so much time thinking about the behavioral side of the mind that his inner models of qualia aren’t rich enough to come into play for a conversation like this? (I somehow doubt the latter, but who knows.)

    My experience as a human being, and as a thinker, is that human beings are more than behaving entities. We feel. Not only do we “sense” our environment so as to provide data to a decision-making engine, we also “feel”/“experience” our environment. Most simply put, what is red? The physicist says, oh, it’s a range of wavelengths of EMR. Great… but we’ve only described light; we haven’t described the way a person sees the correlate of it in their mind. Stephen’s responses seem to imply that he views “red” as something which doesn’t actually exist. Maybe he didn’t intend to imply that, I don’t know.

    At the core of why people have thought themselves to be special is that, internally, many people are aware of qualia. Some people are more “aware” of it in an explicit sense than others, but we are all aware of it, I would presume, at an instinctual level. As you say, we look at a rock, and the intuition is that the rock isn’t experiencing anything at all. The rock isn’t sad. The rock isn’t experiencing the rock concert playing in the stadium next door. The rock isn’t aware that it exists.

    Typically, people refer to “self awareness” as a bridge for referring to qualia. I think the reason is that the experience of self awareness is one of the most heightened experiences of qualia we can have. I sometimes call it a “resonant” experience of consciousness. Almost like looking into a mirror when there’s also a mirror behind you, and there’s an infinite cascade… the awareness of self, and the awareness of awareness, and the awareness of awareness of awareness… it seems to result in a jolt of qualia, a special flavor of qualia, that is iconic of the phenomenon.

    What makes consciousness challenging to think about is that it is tightly correlated with behavior and all of the machinery required to make a behaving entity. If you take the intersection of a computer and a human being, the overlap is quite astonishing. Both things take inputs and act on those inputs. Both have memory. The similarities are so striking that one can be led down a path that suggests “Oh, we’re just fancy wet computers”. But once we’ve arrived there, we’ve been fooled. Almost like a magician uses distraction to keep someone’s attention away from the critical piece. (behavior being the distracting similarity)

    What’s maddening to me is that, intuitively, I would like to think that any conscious human being, especially smart, introspective ones, should find it plainly obvious that behaving entities need not experience qualia in any way, whereas humans experience qualia in a mind-blowingly profound way. And so as soon as you start talking about computational equivalence, you’ve missed the boat. You’ve gotten stuck on “humans are just behaving entities” island.

    Why do smart introspective people get stuck on that island? There are a variety of reasons I can guess at, but I obviously can’t know the true answers…

    1. Does qualia make scientific types deeply anxious? We like to have our feet on solid ground, and science can give us that feeling. But if science, in 2015, has yet to provide any compelling theories to explain qualia, perhaps the existence of qualia is threatening. Just as hope often fuels faith in a religious sense, so perhaps things that are threatening encourage us to think of something not existing. (so long as our thinking/pretending it doesn’t exist doesn’t threaten us in some way)

    2. Because qualia is such an obviously critical facet in a conversation like this interview, but left as an elephant in the room, it makes me wonder whether humans vary in how they experience qualia. Just as a song can be played faintly or loudly, perhaps human brains vary in how loudly they present qualia. This theory is tempting at times, because it could explain why some people respond to these questions as if they were unconscious AIs. (called “zombies” in the literature — the idea that there could exist human beings that are behaviorally identical but which don’t experience qualia)

    3. Related to #2, perhaps people’s “consciousness of their consciousness” varies a lot in the human population. I think it’s actually clear that this is in fact true, but the scale of its truth is not obvious. For example, there have been times in my life where the strength of my self awareness and the bizarre and unexplained nature of qualia hits me in the head so hard it feels like my mind is about to rupture. The closest feeling I can relate it to is the notion of “epiphany”. But what if for large swaths of the population, they’ve never had that epiphany / eye opening moment, and so they are “unconscious of their consciousness”?

    Anyway, I’m probably droning on at this point. I just wanted to call out the elephant in the room. I’ve often wanted to ask Stephen about consciousness/qualia, so I’m glad you got the chance. Disappointed that he either dodged the question or some other such thing.

    • Charlie

      I have similar questions. Does consciousness/qualia belong to the class of computable problems? Maybe Dr. Wolfram hasn’t yet reached a satisfactory conclusion on this problem.
      I am just curious about AI advances. AI approaches such as deep learning try to simulate the mechanisms of the human brain with algorithms implemented on computers, e.g., in the process of image recognition. The question is: does consciousness belong to the class of computable problems? Since many scientists endeavor to find effective AI algorithms, they implicitly premise that consciousness can be simulated by a computational model, that everything in this world is computable, and that some computer algorithms can evolve into computer consciousness some day.
      It’s possible to create a sovereign AI with autonomous learning and self-evolving abilities. For now, experts could be moving closer to true AI, or they might be running in directions that are wrong from the very beginning, but how could people know the right way to develop AI if they don’t use trial and error? I remember reading an interesting comment on reddit that went something like, “AI is but a bunch of algorithms; experts just throw them at the wall like clay to see if some stick.”

    • Jeremie Papon

      I wouldn’t presume to speak for Stephen, but I would assume he would respond that qualia are a state of computation. For example, “red” wavelengths reflect off an object and enter your eye, fire certain cones, which produce certain chemical/electrical signals, which affect the state of computation of your brain in a certain way. The way they affect the computational state of your mind is the qualia of “red”.

      Different minds can experience “red” in different ways (e.g. emotionally) exactly because of this. Observing “red” wavelengths results in an enormously complex non-linear computation, a computation which is greatly affected by initial conditions – the state of the mind before the observation.

      I don’t really see why qualia are such a problem for this computational view of the universe. In my mind, computation is actually a fantastic explanation of qualia, and it goes a long way towards explaining why they are so difficult to nail down – they are actually just states in a vast chain of computations.

    • jandrewlinnell

      Daniel, you gave us a lot here, thank you! The question of self-consciousness is key here. AI has intelligence, frozen intelligence. It has no enthusiasm. It is detached. AI will have sub-categories such as Artificial Soul but real people will not be fooled by their artificial feelings. Will we get to Artificial Free Will?

  4. asipols

    First of all, many thanks to Byron Reese for resurrecting GIGAOM! And what a restart – an interview with a figure no less than Wolfram the Him on a subject no less than AI. Byron – bravo and good luck with the project!

    Wolfram, on the other hand, disappointed. The “me/my science” flavour was predictable (justified by personal interview format and even toned down from NKS level) but one would expect a bit more clarity/precision/brevity from an experienced and active thinker and practitioner who has spent decades on these matters. Yes, the subject is nebulous, and the questions are loaded, but still. Stephen – you can do better!

  5. Joshua McClure

    Saw this via the GigaOm twitter feed. Wolfram is a hero of mine. He was thinking about AI far before I was truly understanding that the sci-fi novels I was reading could actually be reality in my lifetime. Excellent interview, Byron. I really enjoyed it.

  6. CJ London

    Understanding of natural human language is certainly not there yet. Enabling such systems to learn through direct conversation would be incredible. A Wolfram personal assistant, on your phone and PC, interfacing with his systems.

  7. Dick Bird

    nice to see some dialog between a reporter and an actual expert, as opposed to yet another regurgitation of Hawking and Musk’s hysterical nonsense

    goals come from the gonads. make sure you don’t give the robots any gonads and you’ll be ok