Interview with Stephen Wolfram on AI and the future


Why do you use the word “artificial”? Because in my understanding of artificial intelligence, it really is artificial: it’s something that looks intelligent, but it isn’t really—the way artificial fruit is truly artificial. We build it to look that way, but it isn’t. Why do you even keep that word in? Why aren’t you saying, “I’m building intelligence”?

Maybe I should.

Because what is artificial about it at all, in your view?

I think that’s just a historical word—one of the things one realizes in dealing with language is that language is this weird historical artifact that has to do with a collective agreement about how to describe things. A word can just become a word. There is clearly a meaningful distinction to be made at a practical level between intelligence that comes from brains that grow in humans versus intelligence that comes from software downloads or something. So in that sense, it’s worth having the distinction; whether the word “artificial” is the right one to carry it—how one should take that word—I’m not sure.

I think this question about goals … I’ve been thinking about this for a while, and I haven’t really resolved my thinking about it, and as I say, I’m still very conflicted, because the things that we seem to be on course to do are things that don’t agree with my personal prejudices about what I hope happens, so to speak. But one of the issues is that, right now, many goals that one has have to do with scarce resources, finite human life spans, things like this. If you imagine—and maybe we can’t successfully imagine it—removing those constraints, what then happens to human goals? Is the pressure to do things then greatly removed? We [would then] have no basis for knowing what our goals should be. The historical forces that have shaped our goals aren’t there anymore. And so, one of the things that I have certainly thought about is that, just as for a thousand years people would look to the wisdom of the ancients to try to understand how to live their lives and what goals were right to pursue, one of the funny possibilities is that, in the future, people will look back and say, “Well, when humans were really humans, what were their goals? Those were the right goals. Those are the goals it honors our civilization to pursue,” or however else it would be characterized. And our generation is the first one where there is really detailed recording of what we’ve done, and to some extent, why we’ve chosen to do what we do: the email, the social media, the personal analytics, all these kinds of things. There’s a lot of information about what we’ve done and why we’ve done it, and so, I could imagine, in one scenario of the future, some number of years hence, when a lot of the constraints that we have today have been removed, it’s like, “Let’s go back and look at those guys who were living at a time when the constraints hadn’t been removed, but where we have enough information to tell why they did what they did; let’s microscopically reconstruct the choices that those seven billion people made, and let’s codify those into what we think is the right way to behave as a genuine non-artificial human, so to speak.” Perhaps this won’t be what actually happens, but I think this is one of the possible [outcomes].

Talk a little bit about some of the stuff that you are doing at Wolfram|Alpha that people can actually go and kick around with and use.

There are things like imageidentify.com, which is really just using one out of 5,000 functions in our Wolfram Language. What we’ve been trying to do is build a knowledge-based language where we can take the knowledge of our civilization, encode it in a form where it can be, in a sense, concretely built on top of—where we have a language that allows you to start from the frontier of what our civilization has already achieved and then build whatever it is you want to build. There’s a lot of practicality behind that, of clouds, and mobile apps, and being able to deploy things in different environments, and dealing with the current, very complicated, practical world of software engineering, and so on.
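
For a concrete sense of what that looks like in practice, here is a minimal sketch of the kind of call that sits behind imageidentify.com. ImageIdentify is a real Wolfram Language function; the image URL below is just a hypothetical placeholder, and the exact form of the result may vary by version.

(* Minimal sketch: identify what is in an image.               *)
(* The URL is a hypothetical placeholder for any photograph.   *)
img = Import["https://example.com/photo.jpg"];
ImageIdentify[img]
(* returns an entity naming the most likely identification, e.g. a species *)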

But the objective is to be able to have something where, if you have an idea, a goal, then the language that we’ve provided and the system we’ve built will let you, with the minimal possible effort, actually realize—actualize—the idea, the goal, whatever, and turn it into a web app that’s running somewhere, or an API that gets called by lots of other things, or some consumer product of some kind. That’s really the goal, to encapsulate what has been achieved through the knowledge about algorithms and computation, and about data, that’s been accumulated in the civilization, and put it in a form where one can immediately build from it. I’m still searching for exactly the right way to understand from a historical point of view what it is we have been doing, and I’m sort of understanding—I thought it was a fairly big thing, which is why I spent many years of my life doing it. But I think it’s actually a bigger thing than I thought it was, in the sense that this idea is still rather abstract, but it’s this point at which one starts being able to take computationally encoded knowledge, take the whole corpus of what already exists in the world, and then start building on top of it in a systematic way. I think that’s an important thing, how that all works. There’s just a lot to say about this.
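
As a rough sketch of that “idea to running API” step—an illustration using the documented CloudDeploy and APIFunction functions, not a description of any particular production system—the image identification above can be deployed as a web-callable API in a few lines:

(* Sketch: deploy image identification as a public web API.         *)
(* CloudDeploy and APIFunction are standard Wolfram Language        *)
(* functions; the parameter name "photo" is an arbitrary choice.    *)
CloudDeploy[
  APIFunction[{"photo" -> "Image"}, ImageIdentify[#photo] &],
  Permissions -> "Public"
]
(* the result is a cloud object with a URL that other programs can call *)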

One of the things that’s confusing about this is, say, “Where is the frontier of what you’ve reached?” For me, I’m a gradual-understanding kind of guy, and it takes me sometimes a decade to actually understand the significance of something. Even if I might have an intuition that this is a good thing to do, to really understand more globally what the significance of it is takes a while. I’m at the stage where we’ve just got an incredible number of things that are happening, that we can now do, and there’s a slowly dawning global understanding of what it means that I don’t have completely nailed down.

Well, I’m really curious about one question that’s going to sound frivolous, but I mean it in all sincerity. Does weather, in your opinion—I know you don’t know, but does weather have a mind of its own?

Well, what do you mean, I don’t know? It depends what you mean by the question.

Well, I assumed at some level you’re guessing.

No, I’m not guessing.

Okay. I will ask my question. Does weather have a mind of its own?

So, what do we mean by “mind”? I have tried to define that term in an abstract way, and based on what I’ve managed to figure out, in terms of an abstract definition of that term, the answer is yes. We can be more explicit about this. We can say, if we are trying to discover extraterrestrials, for example, and we see certain kinds of signals, then are they mind-like signals? Are they signs of a sort of computational process that is mind-like? Or are they mere physics, so to speak? We say, “Well, gosh, if it’s just some pulsar in the magnetosphere, it’s mere physics.” Well, I hate to point it out, but our brains are also mere physics. And the question is, is your mere physics better than my mere physics? And the thing that came out of lots of this basic science that I did was this idea of the Principle of Computational Equivalence, which basically says that you don’t get to make that distinction; there really isn’t a distinction. All these different things have kind of an equivalent level of computational sophistication. You can’t say, “That’s a mind, and that’s not a mind.” Now, you can say, “That’s a human-like mind, and that’s not.” That’s a little different, but when you say “human-like,” you’re saying it’s a mind that deals with certain senses. Could you imagine a situation where there are humans who don’t have the same senses that they have today? Well, obviously, there are plenty of humans who have a smaller set of senses, with some of them enhanced relative to others, and so on, and so on. And I don’t think anybody would imagine that there’s anything fundamentally different about those minds than other human minds.

What I’m saying is that, abstractly, should we consider it [weather] to have a mind? I think the answer is yes. It clearly doesn’t have a mind that has shared history with the minds that we humans have, and it doesn’t have lots of the details that human minds have, but I don’t think that those are essential details when it comes to defining a mind-like thing. And for example, when we get to building computers that do mind-like things, almost any one of those specifics will be absent in at least some particular versions of that kind of mind-like computer system.

Well, the interesting thing about these questions is that while people have been asking them, and trying to answer them since the beginning of civilization, this is the first time the answers have ever really been more than theoretical, because at some point, you are going to build a machine that claims it’s conscious and is entitled to rights.

Absolutely! No, that’s correct. I think that this is the time. You know, I’ve been interested a lot in ways of encoding everyday discourse and knowledge and so on, and I say, “Well, gosh, I should look at what people have done on this.” And I realized, well, Aristotle did stuff about this; so did Leibniz. Not a lot has been done since. And these basic questions of how to encode the world have been sitting out there for a couple of thousand years. And part of the reason not much progress has been made is that nobody really cares. You can have a big debate about whether John Wilkins’ philosophical language is better than the kinds of things that Aristotle came up with, but it really hasn’t made a lot of difference.

What we end up doing, it’s like, “Okay, we’ve got to turn this into a practical system, and actually build out a precisely defined language that captures features of the world.” And now, it really makes a difference. It’s kind of fun for me because I’ve been somewhat aware of lots of things about philosophy for a long time, and I’ve always thought that philosophy is, in a sense, very floppy. You can argue back and forth for 2,000 years. But it’s been kind of amusing to me that with some of these arguments—when it comes to some issue of how ontology works or how epistemology works—it has to turn into a piece of code for us, and then there’s no more argument. You have to actually come up with a conclusion and turn it into code, and then, I suppose, the argument starts again. Given the code and how it behaves, you can now argue, “Well, what does this mean? How can we understand it in terms of this or that sort of framework for thinking about these kinds of things?” But yes, it’s really an interesting time, because it’s a time when philosophy gets implemented as software, basically.

Now, you’re quick to point to very valid examples of humans wanting to maintain their distinctiveness, wanting to be, not just different from, but better than, the machine. I think it boils down, though, to the fact that humans feel self-aware, and they don’t know why that is, or what that is, and they just have a sense they can’t articulate that machines just don’t have this capability.

Really? I don’t know. Think of how many ways, when you describe what your computer is doing, you anthropomorphize what’s happening. And as we see these things that have more, oh, I don’t know … like playing around with image identification. It really is very human-like. And for me, the fact that I could dissect it and know what every bit does, it doesn’t really—so what? Let me give you an example. I was recently, to my great chagrin, involved in debugging a bunch of issues to do with our Wolfram Cloud system. I haven’t dived that deep in probably a decade or more. But anyway, so I dive in, and yeah, sure, every bit is accounted for. Every bit is deterministic, and so on. But what’s it doing? It’s really complicated, and one has to become kind of a computer psychologist to figure out what’s going on. The descriptions one has of sort of what’s going on inside—the kinds of descriptions that people end up giving—sound very psychological, and the statement that “my computer has no soul,” so to speak, I don’t think that’s the impression anymore. Sometimes, the computer is kept on a leash that’s very short because it’s been gotten to do things that are very specifically predetermined by engineers who built up, incrementally, the sequence of things to get it to do what it does. But I think when we let the computer have its head, so to speak, and just do its own exploration of the computational universe, albeit quite automatic, the things it does come up with are things which seem to us to show no particular evidence of that underlying determinism—which, by the way, I think we share anyway; the underlying determinism, that is.

As I say, I think the disembodied intelligence, the raw intelligence, of “Okay, I’ve got this computer, and it’s intelligent, and isn’t that nice,” I think that without some addition of goals and a more detailed history, I don’t think that ends up being [much]. It’s almost a null kind of thing to have produced. It’s almost generic. It’s almost like saying, “Well, I’ve made a piece of the universe again.” It’s too unspecific to be useful, so to speak.

But if we are fundamentally, at our core, deterministic, and to your point, we don’t look it because the math is beyond us, but we are, what do you think emotions are? Are they real in the sense that we’re feeling? Will the computer love, and will it hate?

Here’s the terrible thing. We’re building stuff that tries to do emotion-space analysis of things, and so on, looking at whether it’s facial expressions or text or whatever else, and in effect say, “Okay, that means the dopamine level was up at this moment,” because the collective effect of what was going on with that thinking that was happening results in secretion of dopamine, which did this and that and the other, and had this or that global effect.
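
To make the text side of that concrete, here is a small sketch of what such an analysis might look like using the Wolfram Language’s built-in sentiment classifier—an assumption about available tooling, not a description of Wolfram’s internal emotion-space systems.

(* Sketch: classify the emotional tone of a piece of text.            *)
(* Classify with the built-in "Sentiment" classifier is a standard    *)
(* Wolfram Language facility; the sample sentences are made up.       *)
Classify["Sentiment", "I absolutely loved this conversation."]
(* typically returns a class such as "Positive" *)
Classify["Sentiment", "This whole situation makes me furious.", "Probabilities"]
(* returns a distribution over sentiment classes *)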

It’s a funny thing for me because I’m a very people-oriented person in many ways, and yet, what I end up doing from this science and technology point of view is, in a sense, a deep deconstruction of the human condition. It’s a little weird for me because I think of emotions as real, but on the other hand, at a rational level, I’m like, “Okay, we’re going to turn this into numbers, and emotion points in an emotion space,” and all this kind of thing. I think what I have perhaps for myself gotten to the point of doing is, to some extent, dissociating the “here’s how I feel,” knowing that if I really were to dissect how I feel and really were to make all kinds of brain measurements and so on, on myself, I would probably be able to logically understand every piece of how I feel, and it would be like, “Well, it’s all just turned into bits, and there’s nothing kind of feeling about it.” I guess I personally just live life the way I feel, so to speak, and separately understand that underneath, it’s sort of all just bits that are operating quite deterministically, and I can study those, and make use of those, and build technology based on those, and so on.

I think the challenge with that, though, is that we have clawed our way to a point where we have defined these things called “human rights.” And they are all predicated on the assumption that we are somehow different. A chair doesn’t have chair rights. And the danger with this viewpoint perhaps is that, in making such machines, you do not actually ennoble the machine. You do not bring the machine up to the level of the human, but you remove the whole basis for why we would even say we have human rights because we should have no more rights than an iPhone or anything.

That’s a different question—rights like that. Should bots be able to get social media accounts? That’s a very concrete current issue. My bot has built up huge amounts of state, let’s say. Should you be allowed to just delete my bot? I think that’s complicated. To me, if I say, “I’ve got a thing; it’s had a quadrillion operations—computer operations—that have contributed to the state it’s in now,” I feel kind of, in a sense, almost morally wrong deleting that thing. And to me, that’s the beginning of feeling, gosh, okay, even though it was just a quadrillion operations from a computer, something feels wrong about just saying, “Okay, press the delete button. It’s gone.” Just as for a long time, it felt wrong to me to kill a mosquito, although I eventually decided that it was me versus them and they were going to lose.

Your question about what’s our justification for giving ourselves rights, and not giving rights to things that have similar intelligence: in human history, sadly, I don’t think there’s an abstract way of deciding. Ethics is not something that can abstractly just be decided, any more than goals can be abstractly decided. Ethics is about, in a sense, the way we feel things should be. There are different theories of it, and I suppose there are sort of evolutionary theories where, if you have the wrong ethics, you just won’t exist anymore, so therefore, by natural selection, so to speak, only certain ethics will survive. It’s like, every religion digs down to have some kind of creation story. It has a foundational justification of things, and it feels like what you’re looking for here is a foundational justification for, “Why should we humans put ourselves out as special?” And I think the answer is, well, if that’s the way we choose to do it, it’s really a choice, not a—we’re not going to be able to find the divine right of humans-type thing. We’re not going to be able to find a science justification for why humans shouldn’t let their computers, let their bots, have rights, so to speak. I don’t know.

How many years away, in your mind, are we from that becoming a mainstream topic? Is this a decade, or 25 years, or …?

I think more like a decade. Look, there are going to be skirmish issues that come up more immediately. The issues that come up more immediately are, “Are AIs responsible?” That is, if your self-driving car’s AI glitches in some way, who is really responsible for that? That’s going to be a pretty near-term issue. There are going to be issues about, “Is the bot a slave, basically? Is the bot an owned slave, or not? And at what point is the bot responsible for its actions, as opposed to the owner of the bot? Can there be un-owned bots?” That one, just for my amusement, I’ve thought about how it would work—how the world would be set up in the very amusing scenario of un-owned bots, where it’s just not clear who has the right to do anything with this thing, because it doesn’t have an owner. A lot of the legal system is set up to depend on—okay, companies were one example. There was a time when it was just, “there’s a person, and they’re responsible,” and then there started to be this idea of a company.

So I think the answer is that there will be skirmish issues that come up in the fairly near term. On the big “bots as another form of intelligence on our planet”-type thing, I’m not sure. … Right now, the bots don’t have advocacy, particularly, for them, but I think there will be scenarios in which that will end up happening. As I say, there are these things about the extent to which a bot is merely a slave of its owner. I don’t even know what happened historically with that, to what extent there was responsibility on the part of the owner. The emancipation of the bots—it’s a curious thing. Here’s another scenario. When humans die, which they will continue to do for a while, but many aspects of them are captured in bots, it will seem a little less like, “Oh, it’s just a bot. It’s just a bag of bits that’s a bot.” It will be more like, “Well, this is a bot that sort of captures some of the soul of this person, but it’s just a bot.” And maybe it’s a bot that continues to evolve on its own, independent of that particular person. And then what happens to that sort of person-seeded but evolved thing? And at what point do you start feeling, “Well, I don’t think—it isn’t really right to just say, ‘Oh, this bot is just a thing that can be switched off and that doesn’t have any expectation of protection and continued existence, and so on.’”

I think that that’s a transitional phenomenon. I think it’s going to be a long time before there is serious discussion of generic cellular automata having rights, if that ever happens—in other words, something disconnected from the history of our civilization, something that is not connected to and informed by the kinds of things—whereas in this kind of knowledge-based, language way of creating things, that’s going to be a much more near-term issue. At some level, then, we’re back to, “Does the weather have a mind of its own?” and “Do we have to give rights to every animistic thing?” It’s animism turned into a legal system, and, of course, there are places where that’s effectively happening. But the justification is not, “Don’t mess with the climate; it has a soul-type thing.” That’s not really the rhetoric about that, although I suppose with the Gaia hypothesis, one might not be so far away from that.

13 Comments

Otto

I would like to know what Mr. Wolfram thinks about the Hameroff/Penrose ‘Orch OR’ theory of consciousness.

Eric O. LEBIGOT (EOL)

“In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language” should be qualified: “Wolfram language” is only the new *name* of a language that was first released in 1988. Despite what the Wolfram company tries to make people believe, it should not sound like some long, underground work has led to a recent breakthrough. A more accurate description is that the Wolfram language has been progressively gaining more and more libraries over the years, and has essentially always tried to offer powerful, modern algorithms.

Naufil

Stephen Wolfram is no doubt one of the most brilliant minds of his generation. No doubt. But … the guy needs help getting his thoughts in line. This was one of the most difficult and rambling interviews I’ve ever read.

Perhaps some of the responsibility lies with the author. It would have been nice if you had put in a bit of work editing his responses so that they read better. Maybe his responses are easier to understand, as phrased, in audio.

In any case, happy to see GigaOm back.

Daniel Bigham

Thanks so much for this interview… reading it was very enjoyable. But…

The elephant in the room, for me, and perhaps also for Byron, is that Stephen’s responses imply that qualia doesn’t exist. And it makes me curious: Does Stephen really think that? Or does he acknowledge qualia internally but avoid acknowledging it externally? Or has he simply spent so much time thinking about the behavioral side of the mind that his inner models of qualia aren’t rich enough to come into play for a conversation like this? (I somehow doubt the latter, but who knows.)

My experience as a human being, and as a thinker, is that human beings are more than behaving entities. We feel. Not only do we “sense” our environment so as to provide data to a decision-making engine, we also “feel”/“experience” our environment. Most simply put, what is red? The physicist says, oh, it’s a range of wavelengths of EMR. Great… but we’ve only described light; we haven’t described the way a person sees the correlate of it in their mind. Stephen’s responses seem to imply that he views “red” as something which doesn’t actually exist. Maybe he didn’t intend to imply that, I don’t know.

At the core of why people have thought themselves to be special is that, internally, many people are aware of qualia. Some people are more “aware” of it in an explicit sense than others, but we are all aware of it, I would presume, at an instinctual level. As you say, we look at a rock, and the intuition is that the rock isn’t experiencing anything at all. The rock isn’t sad. The rock isn’t experiencing the rock concert playing in the stadium next door. The rock isn’t aware that it exists.

Typically, people refer to “self awareness” as a bridge for referring to qualia. I think the reason is that the experience of self awareness is one of the most heightened experiences of qualia we can have. I sometimes call it a “resonant” experience of consciousness. Almost like looking into a mirror when there’s also a mirror behind you, and there’s an infinite cascade… the awareness of self, and the awareness of awareness, and the awareness of awareness of awareness… it seems to result in a jolt of qualia, a special flavor of qualia, that is iconic of the phenomenon.

What makes consciousness challenging to think about is that it is tightly correlated with behavior and all of the machinery required to make a behaving entity. If you take the intersection of a computer and a human being, the overlap is quite astonishing. Both things take inputs and act on those inputs. Both have memory. The similarities are so striking that one can be led down a path that suggests “Oh, we’re just fancy wet computers”. But once we’ve arrived there, we’ve been fooled. Almost like a magician uses distraction to keep someone’s attention away from the critical piece. (behavior being the distracting similarity)

What’s maddening to me is that, intuitively, I would like to think that any conscious human being, especially smart, introspective ones, should find it plainly obvious that behaving entities need not experience qualia in any way, whereas humans experience qualia in a mind-blowingly profound way. And so as soon as you start talking about computational equivalence, you’ve missed the boat. You’ve gotten stuck on “humans are just behaving entities” island.

Why do smart introspective people get stuck on that island? There are a variety of reasons I can guess at, but I obviously can’t know the true answers…

1. Does qualia make scientific types deeply anxious? We like to have our feet on solid ground, and science can give us that feeling. But if science, in 2015, has yet to provide any compelling theories to explain qualia, perhaps the existence of qualia is threatening. Just as hope often fuels faith in a religious sense, so perhaps things that are threatening encourage us to think of them as not existing (so long as our thinking/pretending they don’t exist doesn’t threaten us in some way).

2. Because qualia is such an obviously critical facet in a conversation like this interview, but left as an elephant in the room, it makes me wonder whether humans vary in how they experience qualia. Just as a song can be played faintly or loudly, perhaps human brains vary in how loudly they present qualia. This theory is tempting at times, because it could explain why some people respond to these questions as if they were unconscious AIs (called “zombies” in the literature — the idea that there could exist human beings that are behaviorally identical but which don’t experience qualia).

3. Related to #2, perhaps people’s “consciousness of their consciousness” varies a lot in the human population. I think it’s actually clear that this is in fact true, but the scale of its truth is not obvious. For example, there have been times in my life where the strength of my self awareness and the bizarre and unexplained nature of qualia hits me in the head so hard it feels like my mind is about to rupture. The closest feeling I can relate it to is the notion of “epiphany”. But what if for large swaths of the population, they’ve never had that epiphany / eye opening moment, and so they are “unconscious of their consciousness”?

Anyway, I’m probably droning on at this point. I just wanted to call out the elephant in the room. I’ve often wanted to ask Stephen about consciousness/qualia, so I’m glad you got the chance. Disappointed that he either dodged the question or some other such thing.

Charlie

I have the same kind of questions. Does consciousness/qualia belong to the class of computable problems? Maybe Dr. Stephen hasn’t yet reached a satisfactory conclusion on this problem.
I am just curious about AI advances. AI such as deep learning tries to simulate the mechanisms of the human brain with algorithms implemented on computers, e.g., the process of image recognition. The question is: is consciousness a computable problem? Since many scientists endeavor to find effective AI algorithms, they implicitly assume that consciousness can be simulated by a computational model, that everything in this world is computable, and that some computer algorithms can evolve into computer consciousness some day.
It may be possible to create a sovereign AI with autonomous learning and self-evolving ability. For now, experts may be moving closer to true AI, or they might be running in directions that are wrong from the very beginning, but how could people know the right way to develop AI if they don’t use trial and error? I remember reading an interesting comment on Reddit; it said something like, “AI is but a bunch of algorithms; experts just throw them at the wall like clay to see if some stick.”

Jeremie Papon

I wouldn’t presume to speak for Stephen, but I would assume he would respond that qualia are a state of computation. For example, “red” wavelengths reflect off an object and enter your eye, fire certain cones, which produce certain chemical/electrical signals, which affect the state of computation of your brain in a certain way. The way they affect the computational state of your mind is the qualia of “red”.

Different minds can experience “red” in different ways (e.g. emotionally) exactly because of this. Observing “red” wavelengths results in an enormously complex non-linear computation, a computation which is greatly affected by initial conditions – the state of the mind before the observation.

I don’t really see why qualia are such a problem for this computational view of the universe. In my mind, computation is actually a fantastic explanation of qualia, and it goes a long way towards explaining why they are so difficult to nail down – they are actually just a state in a vast chain of computations.

jandrewlinnell

Daniel, you gave us a lot here, thank you! The question of self-consciousness is key here. AI has intelligence, frozen intelligence. It has no enthusiasm. It is detached. AI will have sub-categories such as Artificial Soul but real people will not be fooled by their artificial feelings. Will we get to Artificial Free Will?

asipols

First of all, many thanks to Byron Reese for resurrecting GIGAOM! And what a restart – an interview with a figure no less than Wolfram the Him on a subject no less than AI. Byron – bravo and good luck with the project!

Wolfram, on the other hand, disappointed. The “me/my science” flavour was predictable (justified by personal interview format and even toned down from NKS level) but one would expect a bit more clarity/precision/brevity from an experienced and active thinker and practitioner who has spent decades on these matters. Yes, the subject is nebulous, and the questions are loaded, but still. Stephen – you can do better!

Joshua McClure

Saw this via the GigaOm Twitter feed. Wolfram is a hero of mine. He was thinking about AI long before I truly understood that the sci-fi novels I was reading could actually become reality in my lifetime. Excellent interview, Byron. I really enjoyed it.

CJ London

Understanding natural human language is certainly not there yet. Enabling such systems to learn through direct conversation would be incredible. A Wolfram personal assistant, on your phone and PC, interfacing with his systems.

Dick Bird

Nice to see some dialog between a reporter and an actual expert, as opposed to yet another regurgitation of Hawking and Musk’s hysterical nonsense.

Goals come from the gonads. Make sure you don’t give the robots any gonads and you’ll be OK.
