About this Episode
Episode 77 of Voices in AI features host Byron Reese and Nicholas Thompson discussing AI, humanity, social credit, and information bubbles. Nicholas Thompson is the editor in chief of WIRED magazine, a contributing editor at CBS, and a co-founder of The Atavist; he has also worked at The New Yorker and authored a Cold War era biography.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.
Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Nicholas Thompson. He is the editor in chief of WIRED magazine. He’s also a contributing editor at CBS which means you’ve probably seen him on the air talking about tech stories and trends. He also co-founded The Atavist, a digital magazine publishing platform. Prior to being at WIRED he was a senior editor at The New Yorker and editor of NewYorker.com. He also published a book called The Hawk and the Dove, which is about the history of the Cold War. Welcome to the show Nicholas.
Nicholas Thompson: Thanks Byron. How you doing?
I’m doing great. So… artificial intelligence, what’s that all about?
(Laughs) It’s one of the most important things happening in technology right now.
So do you think it really is intelligent, or is it just faking it? What is it like from your viewpoint? Is it actually smart or not?
Oh, I think it’s definitely smart. I think that artificial intelligence, if you define it as machines making independent decisions, is very smart right now and soon to get even smarter.
Well, it always sounds like I’m just playing what they call semantic gymnastics or something. But does the machine actually make a decision, or does it decide no more than your clock decides to advance the minute hand one minute? The computer is as deterministic as that clock. It doesn’t really decide anything; it’s just a giant clockwork, isn’t it?
Right. I mean that gets you into about 19 layers of a really complicated discussion. I would say ‘yes,’ in a way it is like a clock. But in other ways, machines are making decisions that are totally independent from the instructions or the data that was initially fed to them; they are finding patterns that humans won’t see and that couldn’t be coded in. So in that way it becomes quite different from a clock.
I’m intrigued by that. I mean the compass points to the north. It doesn’t know which way north is; that would be giving it too much credit. But it does something that we can’t do: find magnetic north. So is the compass intelligent, by the way you see the world?
Is the compass intelligent by the way I see the world? Well, the compass is… I mean one of the issues here is that artificial intelligence uses two words that have very complicated meanings, and their definitions evolve as we learn more about artificial intelligence. And not only that, but the definition of artificial intelligence and the way it’s used changes constantly, both as our technology evolves and learns to do new things, and as it develops its brand value. So back to your initial question, “Is a compass that points to the north intelligent?” It is intelligent in the sense that it’s adding information to our world, but it’s not doing anything independent of the person who created it, who built the tools and who imagined what it would do. You build a compass, you know that it’s going to point north; you put the pieces inside of it, and you know it will do that. It’s not breaking outside of the box of the initial rules that were given to it, and the premise of artificial intelligence is that it is breaking out of that box.
So I’d like to really understand that a little more. Like if I buy a Nest learning thermostat and over time I’m like, ‘oh I’m too hot, I’m too cold, I’m too cold,’ and it “figures it out,” how is it breaking out of what it knows?
Well, what would be interesting about a Nest thermostat (I don’t know the details of how a Nest thermostat works, but) is that a Nest thermostat is looking at all the patterns of when you turn on your heat and when you don’t…. If you program a Nest thermostat and you say, please make the house hotter between 6:00 in the morning and 10:00 at night, that’s relatively simple. If you just install a Nest thermostat and then it watches you and follows your patterns and then reaches the same conclusion, it’s ended up at the same output, but it’s done it in a different way, which is more intelligent, right?
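The schedule-learning behavior described here can be sketched in a few lines of code. This is a hypothetical toy model for illustration only (the class, method names, and averaging rule are all invented; the real Nest algorithm is not public): the device records the temperatures a user manually chooses at each hour, then infers a target schedule from those observations instead of being explicitly programmed.

```python
from collections import defaultdict


class LearningThermostat:
    """Toy thermostat that infers a schedule from manual adjustments.

    Hypothetical illustration only; not the actual Nest algorithm.
    """

    def __init__(self, default_temp=18.0):
        self.default_temp = default_temp
        # hour of day (0-23) -> list of temperatures the user chose then
        self.observations = defaultdict(list)

    def record_adjustment(self, hour, temp):
        """The user manually sets `temp` degrees at `hour`."""
        self.observations[hour].append(temp)

    def target_for(self, hour):
        """Predicted target: the average of what the user chose at this
        hour, falling back to the default when nothing has been observed."""
        temps = self.observations.get(hour)
        return sum(temps) / len(temps) if temps else self.default_temp


# After a few days of watching the user, the device has "programmed itself":
t = LearningThermostat()
for day in range(3):
    t.record_adjustment(6, 21.0)   # the user warms the house each morning
    t.record_adjustment(22, 16.0)  # and cools it each night

print(t.target_for(6))   # learned morning target
print(t.target_for(3))   # unobserved hour, so the default applies
```

The point of the sketch is the distinction in the conversation: the output (a schedule) is the same as if it had been typed in by hand, but it was reached by observing patterns rather than by following explicit instructions.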
Well, that’s really the question, isn’t it? The reason I dwell on these things is not to count angels dancing on the head of a pin, but because to me this speaks to the ultimate limit of what this technology can do. Like if it is just a giant clockwork, then you have to come to the question, ‘Is that what we are? Are we just a giant clockwork?’ If we’re not and it is, then there are limits to what it can do. If we are and it is, or we’re not and it’s not, then maybe someday it can do everything we can do. Do you think that someday it can do everything we can do?
Yes. I thought this might be where you were going, and this is where it gets so interesting. That was where, in my initial answer, I was starting to head in this direction. My instinct is that we are like a giant clock: an extremely complex clock, a clock that’s built on rules that we don’t understand and won’t understand for a long time, rules that defy the way we normally program rules into clocks and calculators, but that essentially we are reducible to some form of math, and with infinite wisdom we could see that there isn’t a special, spiritual, unknowable element in the box…
Let me pause right there. Let’s put a pin in that word ‘spiritual’ for a minute, but I want to draw attention to this: when I asked you if AI is just a clockwork, you said, “No, it’s more than that,” and when I ask you if a human’s a clockwork, you say, “Yeah, I think so.”
Well, that’s because I was taking your definition of clock, right? So I think what you said a minute ago is really where it’s at, which is: either we are clocks and the machines are clocks, or we are clocks and they’re not, or we’re not and they are, or neither of us is; there are four possibilities there. And my instinct is that if we’re going to define it that way, I’m going to define clocks in an incredibly broad sense, meaning mathematical reasoning, including mathematics we don’t understand today, and I’ll make the argument that both humans and the machines you’re creating are clocks.
If we’re thinking of clocks in a much narrower sense, which is just a set of simple instructions, input/output, then machines can go beyond that and humans can go beyond that too. But no matter how we define the clocks, I’m putting the humans and the machines in the same category. So depending on what your base definitions are, either humans and machines are both category A or they’re both not category A; there isn’t any fundamental difference between the humans and the machines.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.