Blog

Voices in AI – Episode 112: A Conversation with David Weinberger


About this Episode

On Episode 112 of Voices in AI, Byron speaks with fellow author and technologist David Weinberger about the nature of intelligence, artificial and otherwise.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today my guest is David Weinberger. He likes to explore the effects of technology on ideas. He’s a senior researcher at Harvard’s Berkman Klein Center for Internet and Society, and was co-director of the Harvard Library Innovation Lab and a Journalism Fellow at Harvard’s Shorenstein Center. Dr. Weinberger has been a marketing VP and adviser to high-tech companies, an adviser to presidential campaigns and a Franklin Fellow at the US State Department. Welcome to the show, Dr. Weinberger.

David Weinberger: Hi Byron.

So, when did you first hear about AI?

Well about AI…

Well was it 1956?

That’s what I’m thinking.

But you were only six then. So I’m guessing it wasn’t then.

Well as soon as the first science fiction robot movies came out, that’s probably when I heard about it. Robby the Robot, I think.

There you go. And so, I don’t know if we called it that colloquially then, but in any case, how do you define it? In fact, let me narrow that question a bit: how do you define intelligence?

Oh jeez, I seriously try not to define it.

But don’t you think that’s interesting that there’s no consensus definition for it? Could you argue therefore that it doesn’t really exist? Like if nobody can even agree on what it is, how can we say it’s something that’s a useful concept at all?

Well, I don’t want to measure whether things exist by whether our concepts for them are clear, since most of our concepts, when you look at them long enough, aren’t clear. Words have uses. We seem to have a use for the word ‘intelligence’ as opposed to something else, and it’s usually really useful to think about the context in which you use that word or another one. Take ‘life,’ right? It’s a pretty useful term, especially when you’re talking about whether something is alive or dead. You don’t have to be able to define life precisely for that term to be useful. Same thing with intelligence, and I think it can often be a mistake to try to define things too precisely.

Well, let me ask a slightly different question then. Do you think artificial intelligence is artificial because we made it? Or is it artificial because it’s not really intelligence, like artificial turf isn’t really grass, just something that can mimic intelligence? Or is there a difference?

So it’s a good question. I would say I think it’s artificial in both ways and there is a difference.

Well tell me what the difference is. How is it only mimicking intelligence, but it isn’t actually intelligent itself?

Well, you’re gonna be really angry at me: it depends how you define intelligence. To me that’s not the burning question at this point, and I’m not sure if or when it ever would be. When we ask in everyday conversation whether machines are intelligent, it’s because we’re concerned about whether we need to treat these things the way we treat other human beings, that is, as creatures we ultimately care about what happens to. Or we want to know: are they doing stuff that we do cognitively that’s sufficiently advanced that we’re curious about whether a machine is doing it? We don’t call an abacus intelligent even though we use it for counting. We’re a little more tempted to worry about whether machines are intelligent when we can’t see how they work.

And I think you hit the nail on the head with your comment about [how] in the end we want to know whether we have to treat them as if they’re sentient in the true sense of the word: able to sense things, able to feel things, able to feel pain. How do you think we would know that? If they had a ‘self’ that could experience the world, as it were?

So we may get very confused about it. I’m already pretty confused about it. I mean, I’ll tell you why my suspicion is that they cannot be intelligent in the sense of having an inner life, or of caring about what happens to them, even though I just brought that up. I find that idea philosophically objectionable. This is not my argument, though; I don’t remember whose it is.


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
