Voices in AI – Episode 7: A Conversation with Jared Ficklin

In this episode, Byron and Jared talk about rights for machines, empathy, ethics, singularity, designing AI experiences, transparency, and a return to the Victorian era.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Jared Ficklin. He is a partner and Lead Creative Technologist at argodesign.

In addition, he has a wide range of other interests. He gave a well-received mainstage talk at TED about how to visualize music with fire. He co-created a mass transit system called The Wire. He co-designed and created a skatepark. For a long while, he designed the famous, highly interactive South by Southwest (SXSW) opening parties, which hosted thousands and thousands of people each year.

Welcome to the show, Jared.

Jared Ficklin: Thank you for having me.

I’ve got to start off with my basic, my first and favorite question: What is artificial intelligence?

Well, I think of it in the very mechanical way: it is a machine intelligence that has reached a point of sentience. But I think it is also a broad umbrella that we apply to any case where computerization is attempting to solve problems with human-like thoughts or strategies.

Well, let’s split that into two halves, because there was an aspirational half of sentience, and then there was a practical half. Let’s start with the practical half. When it tries to solve problems that a person can solve, would you include a sprinkler that comes on when your lawn is dry as being an artificial intelligence? Because I don’t have to keep track of when my lawn is dry; the sprinkler system does.

First of all, this is my favorite half. I like this half of the procedural side more than the sentience side, although it’s fun to think about.

But, when you think of this sprinkler that you just talked about, there are a couple of ways to arrive at this. One, it can be very procedural and not intelligent at all. I can have a sensor. The sensor can throw off voltage when it sees soil of a certain dryness. That can connect to an electrical circuit which trips a solenoid, and water begins spraying everywhere.

Now, you have the magic, and a person who doesn’t know that’s going on might look at that and say, “Holy cow! It’s intelligent! It has watered the lawn.” But it’s not. That is not machine intelligence and that is not AI. It’s just a simple procedural game.

There would be another way of doing that, and that's to use a whole bunch of computation to study it: bring in a lot of factors, like the weather that's coming in and the same sensor telling you what the soil dryness is, run it through a whole lot of algorithms, and make a decision based on probability and a threshold about whether to turn on that sprinkler or not. That would be a form of machine learning.

Now, if you look at the two, they seem the same on the face but they're very different—not just in how they happen, but in the outcome. One of them is going to turn on the sprinkler even though there are seven inches of rain coming tomorrow, and the other is not going to turn on the sprinkler because it's aware that seven inches of rain are coming tomorrow. That little added extra judgment, or intelligence as we call it, is the key difference. That's what makes all the difference in this, multiplied a million times over, to me.
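A minimal sketch of the two sprinkler strategies contrasted above, written in Python; the dryness threshold, the forecast value, and the optional trained model are hypothetical stand-ins, not anything described in the conversation:

```python
# Hypothetical illustration of the two sprinkler strategies discussed above.

def procedural_sprinkler(soil_dryness: float) -> bool:
    """Purely procedural: dry soil trips the solenoid, nothing else is considered."""
    return soil_dryness > 0.7  # threshold chosen arbitrarily for the example

def judged_sprinkler(soil_dryness: float, forecast_rain_inches: float, model=None) -> bool:
    """Judgment-style: weigh the same sensor reading against the incoming weather.

    `model` stands in for a trained estimator; without one, we fall back to a
    hand-written probability so the sketch runs on its own.
    """
    if model is not None:
        p_needs_water = model.predict_proba([[soil_dryness, forecast_rain_inches]])[0][1]
    else:
        # Crude stand-in: dryness raises the probability, forecast rain lowers it.
        p_needs_water = max(0.0, soil_dryness - 0.1 * forecast_rain_inches)
    return p_needs_water > 0.5

# Same dry lawn, seven inches of rain coming tomorrow:
print(procedural_sprinkler(0.9))        # True  -- waters anyway
print(judged_sprinkler(0.9, 7.0))       # False -- waits for the rain
```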

Just to be clear, you specifically invoked machine learning. Are you saying there is no AI without machine learning?

No, I’m not saying that. That was just the strategy that applied in this situation.

Is the difference between those two extremes, in your mind, evolutionary? It’s not a black-and-white difference?

Yeah, there are going to be scales and gradients. There are also different strategies and algorithms that breed this outcome. One had a certain presumption of foresight, and a certain algorithmic processing. In some ways, it's much smarter than a person.

There’s a great analogy. Matthew Santone, who is a co-worker here, is the first one who introduced me to the analogy. And I don’t know who came up with it, but it’s the ten thousand squirrels analogy around artificial intelligence in its state today.

On the face of it, you would think humans are much smarter than squirrels, and in many ways we are, but a squirrel has this particular capability of hiding ten thousand nuts in a field and being able to find them the next spring. When it comes to hiding nuts, a squirrel is much more intelligent than we are.

That’s another one of the key attributes of this procedural side of artificial intelligence, I think. It’s that these algorithms and intelligence become so focused on one specific task that they actually become much more capable and greater at it than humans.

Where do you think we are? Needless to say, the enthusiasm around AI is at a fever pitch. What do you think brought that about, and do you think it's warranted?

Well, it's science fiction, I think, that has brought it about. Everything from The Matrix in film, to books by John Varley or even Isaac Asimov, has given us a fascination with machines and artificial intelligence and what they can produce.

Then, right now, the business world is just talking all about it, because, I think, we're at the level of the ten thousand squirrels. They can see a lot of value in putting those squirrels together to monitor something—you know, find those nuts way better than a human can. When you combine the two, it's just on everyone's lips and everywhere.

It doesn’t hurt that some of the bigwigs of thinkers of our time are out there talking about how dangerous it could possibly be, and that captures everyone’s attention as well.

What do you think of that? Why do you think that there are people who think we're going to have an artificial general intelligence in a few years—five years at the earliest—and that it's something we should be concerned about? And then, there are people who say it's not going to come for hundreds of years, and it's not something we should be worried about. What is different in how they're viewing the world?

It might be a reflection of the world that they live in, as well. For me, I really see two scales of danger. One is that we, as humans, put a lot of faith in machines—particularly our generation, Generation X. When I go to drive across town—and I’ve lived in my hometown of Austin, Texas, for seventeen years—I know a really good short route right through downtown. Every time I try to take it, my significant other will tell me that Google says there is a better route. We trust technology more than other humans.

The problem comes in, it’s like, if you have these ten thousand squirrels and they’re a toddler-level AI, you could turn over control far too early and end up in a very bad place. A mistake could happen, it could shut down the grid, a lot of people could die. That’s a form of danger I think some people are talking about, and they’re talking about it on the five-year scale because that’s where it’s at. You could get into that situation not because it’s more intelligent than us, but just because you put more reliance on something that isn’t actually very intelligent. That’s one possible danger that we’re facing.

The hundred-year danger is that I think people are afraid of the Hollywood scenario, the Skynet scenario, which I'm less afraid of—although I have one particular view on that that does give me some concern. I do get up every morning and tell Alexa, "Alexa, tell the robots I am on your side," because I know how they're programming the AI. If I write that line of code ten thousand times, maybe I can get into the algorithm.

There are more than a few efforts underway; by one count, twenty-two different governments are trying to figure out how to weaponize artificial intelligence. Does that concern you, or is that just how things are?

Well, I’m always concerned about weaponization, but I’m not completely concerned. I think militaries think in a different way than creative technologists. They can do great damage, but they think in terms of failsafe, and they always have. They’re going to start from the position of failsafe. I’m more worried about marketing and a lot of areas where they work quick and dirty, and they don’t think about failsafe.

If you're going to build a little bit of a neural net or a machine learning system, it's open-sourced, it's up on the cloud, a lot of people are using it, and you're using it to give recommendations. And then, at the end, you're not satisfied with the recommendations, and you say, "I know that you have recommended this mortgage from Bank A, but the client is Bank B, so how can we get you to recommend Bank B?"

Essentially, that is teaching the machines that it's okay to lie to humans. That is not operating from a position of failsafe. So it might just be marketing—clever terms like 'programmatic' and whatnot—that generates Skynet, and not necessarily the military-industrial complex, which really believes in kill switches.

Let's talk about more real-world, day-to-day worries about the technology—and we're going to get to all the opportunities and all the benefits and all of that in just a moment.

Start with the fear.

Well, I think the fear tells us more, in a way, about the technology because it’s fun to think about. As far back as storytelling, we’ve talked about technologies that have run amok. And it seems to be this thing, that whenever we build something, we worry about it. Like, they put electricity in the White House, but then the president would never touch it and wouldn’t let his family touch it. When they put radios in cars, they said, “Oh, distracted driving, people are going to crash all the time.”

Airbags are going to kill you.

Right. Frankenstein, right? The word ‘robot’ comes to us from a Czech play.

You just hit a part of the psyche that I think people are letting in, too, when you said Frankenstein. It’s personification that often is the dangerous thing.

Think of people who dance with poisonous snakes. Sometimes it’s done as a dare, but sometimes it’s done because there’s a personification put on the animal that gives it greater importance than what it actually is, and that can be quite dangerous. I think we risk that here, too, just putting too much personification, human tendencies, on the technology.

For instance, there is actually a group of people who are advocating rights for industrial robots today, as if they are human, when they are not. They are very much just industrial machines. That kind of psyche is what I think some people are trying to inoculate against now, because it walks us down this path where you're thinking you can't turn that thing off, because it's been given this personification of sentience before it has actually achieved it.

It’s been given this notion of rights before it actually has them. And the judgment of, even if it’s dangerous and we should hit the kill switch, there are going to be people reacting against that, saying, “You can’t kill this thing off”—even though it is quite dangerous to the species. That, to me, is a very interesting thing because a lot of people are looking at it as if, if it becomes intelligent, it will be a human intelligence.

I think that’s what a lot of the big thinkers think about, too. They think this thing is not going to be human intelligence, at which point you have to make a species-level judgment on its rights, and its ability to be sentient and put out there.

Let’s go back to the beginning of that conversation with ELIZA and Weizenbaum.

This man in the ‘60s, Weizenbaum, made this program called ELIZA, and it was a really simple chatbot. You would say, “I am having a bad day.” And it says, “Why are you having a bad day?” And then, you would say, “I’m having a bad day because of my mom.” “What did your mom do to make you have a bad day?” That’s it, very simple.

But Weizenbaum saw that people were pouring their heart out to it, even knowing that it was a machine. And he turned on it. He was like, “This is terrible.” He said, “When a machine says, ‘I understand,’ the machine is telling a lie. There is no ‘I’ there. There is nothing that understands anything.”

Is your comment about personification a neutral one? To say, “I am observing this,” or are you saying personification is a bad thing or a good thing? If you notice, Alexa got a name, Siri got a name, Cortana got a name, but Google Assistant didn’t get a name.

Start there—what are your thoughts on personification in terms of good, bad, or we don’t know yet?

In the way I was just talking about it, personification, I do think is a bad thing, and I do see it happening. In the way you just talked about it, it becomes a design tool. And as a design tool, it’s very useful. I name all my cars, but that’s the end of the personification.

You were using it to say they actually impute human characteristics on these beyond just the name?

Yes, when someone is fighting for the human rights or the labor rights of an industrial machine, they have put a deep personification on that machine. They’re feeling empathy for it, and they’re feeling it should be defended. They’re seeing it as another human or as an animal; they’re not seeing it as an industrial machine. That’s weird, and dangerous.

But you, as a designer, think, "Oh, no, it's good to name Alexa, but I don't want people to start thinking of Alexa as a person."

Yeah.

But you’re a part of that then, right?

Yeah, we are.

You’re naming it and putting a face on it.

You’ve circled right back to what I said—Skynet is going to come from product design and marketing.

From you.

Well, I did not name Alexa.

And just for the record, we’re not impugning Alexa here.

Yeah, we are not. I love Alexa. I have it, and like I said, I tell her every morning.

But, personification is this design tool, and how far is it fair for us to lean into it to make it convenient? In the same way that people name their favorite outfit, or their cars, or give their house a name—just as a convenience in their own mind—versus actually believing this thing is human and feeling empathy for it.

When I call out to Alexa in the morning, I don’t feel empathy for Alexa. I do wonder if my six-year-old son feels empathy for Alexa, and if by having that stuff in the homes—

—Do you know the story about the Japanese kids in the mall and the robot?

No.

There was this robot that was put in this Japanese mall. They were basically just trying to figure out how to make sure that the robot can get around people. The robot was programmed to ask politely for you to step aside, and if you didn’t, it would go around you.

And some kids started stepping in front of it when it tried to go around them. And then, they started bullying it, calling it names, hitting it with things. The programmers had to regroup and say, "We need to rewrite the program so that, if there are small people, kids, and there's more than a few, and there are no big people around, the robot runs away towards an adult." And so, they did this.

Now, you might say, "Well, that's just kids being kids." But here's the interesting thing: When they later took those kids and asked them, "Did you feel that the robot was human-like or machine-like?" Eighty percent said it was human-like. And then, they said, "Do you feel like you caused it distress?" Seventy-five percent of them said yes. And so, these kids were willing to do that even though they regarded it as human-like and capable of feeling emotion.

They treated it like another kid.

Right. So, what do you read in the tea leaves of that story?

Well, more of the same, I'm afraid, in that we're raising a generation—funny enough, Japan really did start this—where there needs to be familiarity with robotics. And it's hard to separate robotics and AI, by the way. Robotics seems like the corpus of AI, and so much of what the public's imagination places on AI is really robotics, and has nothing to do with AI.

That is a fascinating thing to break apart, and they are starting to converge now. But back when they were doing that research, there was also the research Wendy Ju does with the trash can going around the public square. It's just a trash can on wheels, but it actually evokes very emotional responses from people. People personify it almost immediately, even though it's a trash can. One of the things the kids do in this case is they try to attract it with trash and say, "Come over here, come over here," because they view it as this dog that eats trash, and they think that they can play with it. Empathy arrives as well. Altruism arrives. There's a great scene where this trash can falls over and a whole bunch of people go, "Aww…" and they run over and pick it up.

We've got to find a way to reset our natural tendencies. Technology has been our servant for all this time, and a dumb servant. And although we're aware of it having positive and negative consequences, we've always thought of it as improving our experience, and we may need to adjust our thinking. Social media might be doing that with the younger generations, because they are now seeing the great social harm that can come, and it's like, do they put that on each other or do they put it on the platform?

But, I think some people who are very smart are painting with these broad brushes, and they’re talking about the one-hundred-year danger or the danger five years out, just because they’re struggling with how we change the way we think about technology as a companion. Because it’s getting cheaper, it’s getting more capable, and it’s invading the area of intelligence.

I remember reading about a film—I think this was in the ‘40s or ‘50s—and they just showed these college kids circles that would bounce or roll around together, or a line would come in. And they said, “What’s going on in these?” And they would personify those, they’d say, “Oh, that circle and that circle like each other.”

It’s like, if we have a tendency to do that to a circle in a film, you can only imagine that, when these robots can read your face, read your emotions—and I’m not even talking about a general intelligence—I mean something that, you know, is robotic and can read your face and it can laugh at your jokes and what not. It’s hard to see how people will be able to keep their emotions from being wrapped up in it.

Yeah, and not be tempted to explore those areas and put them into the body of capability and intelligence.

I was just reading two days ago—and I'm so bad at attribution—but a clever researcher, I think at MIT, created this program for scanning people's social profiles and looking at their profile photos. After enough learning, building their little neural net, it would just look at a photograph and guess whether the person was gay or not, their sexual preference, and it nails it pretty well.

I’m like, “Great, we’re teaching AI to be as shallow and presumptive as other humans, who would just make a snap judgment based on what you look like, and maybe it’s even better than us at doing it.”

I really think we need to develop machine ethics as distinct from human ethics, and not be teaching the machine human ethics, even if that seems like a feature on the other side. And that's more important than privacy.

Slow that down a second. When you do develop a difference between human ethics and machine ethics, I understand that; and then, don’t teach the machine human ethics. What does that mean?

We don't need more capable, faster human ethics out there. It could be quite damaging.

How did you see that coming about?

Like I said, it comes about through, “I’m going to create a recommendation engine.”

No, I’m sorry—the solution coming about.

Yeah.

Separating machine and human ethics.

We have this jokey thought experiment called “Death by 4.7 Stars”, where you would assume that there is a Skynet that has come to intelligence, and it has invaded recommendation engines. And when you ask it, “What should I have for lunch?”, it suggests that you have this big fatty hamburger, a pack of Lucky Strikes, and a big can of caffeinated soda.

At that point, you die of a heart attack young. Just by handing out this horrible advice, with you trusting it implicitly and it not caring that it's lying to you, it just extinguishes all of humanity. And then Skynet is sitting there going, "That was easy. I thought we were going to have a war between humans and machines and have to build the Matrix. Well, we didn't have to do that." Then, one of the AIs will be like, "Well, we did have to tell that lady to turn left on her GPS into a quarry." And then, the AI is like, "Well, technically, that wasn't that hard. This was a very easy war."

So, that's why we need to figure out this way to put a machine ethic in there. I know it seems old-fashioned. I'm a big fan of Isaac Asimov. I think he did some really good work here, and there are other groups that are now advancing that and asking, "How can we put a structure in place so we don't just deploy these robots without a code of ethics?"

And then, the way you actually build these systems is important, too. AI should always come to the right conclusion. You should not then tell it, “No, come to this conclusion.” You should just screen out conclusions. You should just put a control layer in that filters out the conclusions you don’t want for your business purposes, but don’t build a feedback loop back into the machine that says, “Hey, I need you to think like my business,” because your business might need a certain amount of misdirection and non-truths to it.
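One way to read that advice as code, sketched in Python; the function and rule names are invented for illustration, and the point is only that business constraints live in a control layer after the model rather than in a feedback loop into it:

```python
# Hypothetical sketch: keep business rules in a post-hoc control layer
# instead of feeding them back into the model's training.

def rank_offers(applicant, offers, model):
    """Let the model rank mortgage offers purely on predicted fit for the applicant."""
    scored = [(model.score(applicant, offer), offer) for offer in offers]
    return [offer for _, offer in sorted(scored, key=lambda pair: pair[0], reverse=True)]

def control_layer(ranked_offers, business_rules):
    """Screen out conclusions the business cannot act on, without teaching the model to lie."""
    return [offer for offer in ranked_offers if business_rules.allows(offer)]

# The anti-pattern the passage warns against would look something like:
#   model.fit(features, labels_rewritten_to_favor("Bank B"))   # teaching it to lie -- don't
```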

And you don’t, maybe, understand the consequences because there’s a certain human filter between that stuff—what we call ‘white lies’ and such—that allows us to work. Whereas, if you amplify it times the million circuits and the probabilities that go down to the hundreds of thousands of links, you don’t really know what the race condition is going to produce with that small amount of mistruth.

And then, good governance and controls that say that little adjusted algorithm, which is very hard to ferret out—almost like the scene from Tron where they’re picking out the little golden strands—doesn’t move into other things.

And so, this is the kind of carefulness that we need to put into it as we deploy it, if we’re going to be careful as these magic features come along. And we want the features. There’s a whole digital lifestyle predicated on the ability for AI to establish context, that’s going to be really luxurious and awesome; and that’s one reason why I even approach things like the singularity, or “only you can prevent Skynet,” or even get preachy about it at all—because I want this stuff.

I just got back from Burning Man, and you know, Kathryn Myronuk says it's a dress rehearsal for a post-scarcity society. What's going to give us post-scarcity is artificial intelligence: in large part, the ability to stand up enough machines to supply our needs, wants, and desires, and to sweep away the lower levels of Maslow's hierarchy of needs.

And then we can live in just a much more awesome society. Even before that, there’s just a whole bunch of cool features coming down the pipeline. So, I think that’s why it’s important to have this discussion now, so we can set it up in a way that it continues to be productive, trustful, and it doesn’t put the entire species in danger somehow, if we’re to believe Stephen Hawking or Elon Musk.

Another area that people are concerned about, obviously, are jobs—automation of jobs. There are three narratives, just to set them up for the listener:

The first is that AI is going to take a certain class of jobs that are 'low-skill' jobs, and that the people who have those jobs will be unemployed, and there'll be ever more of them competing for ever fewer low-skill jobs, and we'll have a permanent Great Depression.

There's a second narrative that says, "Oh, no, you don't understand: everybody's job is going—your job, my job, the President's job, the speechwriter's job, the artist's job, everybody's—because once the machines can learn something new faster than we can, it's game over."

And then, there’s a third narrative that says both of these are wrong. Every time we have a new technology, no matter how disruptive it is to human activity—like electricity or engines or anything like that—people just take that technology and they use it to magnify their own productivity. And they raise their wages and everybody uses the technology to become more productive, and that’s the story of the last two hundred and fifty years.

Which of those three scenarios, or a fourth one, do you identify with?

A fourth one, where the burden of productivity as the guide of work is released, or lessened, or slackened. The people whose jobs are in the most danger are the people who hate their jobs. Those are the ones that AI is going to take over first and fastest.

Why is that not my first setup, which is there are some jobs that it’s going to take over, putting those people out of work?

Because there will be one guy who really loves driving people around in his car and is very passionate about it, and he'll still drive his car and we'll still get into it. We'll call it the human car. He won't be forced out of his job because he likes it. But the other hundred guys who hated driving a car for a living, their jobs will be gone, because they weren't passionate enough to protect them, or find a new way to do them, or enjoy doing them anymore. That's the slight difference, I think, between what I said and what you said.

You say those hundred people won’t use the technology to find new employment?

I think an entire economy of a different kind of employment that works around passion will ultimately evolve. I'm not going to put a timescale on this, but let's take the example of "ecopoesis," which I'm a big fan of, which comes out of Kim Stanley Robinson's Mars trilogy. It probably existed before that, but the trilogy was one of the first times I encountered it.

Ecopoesis is a combination of "ecology" and "poesis" – ecopoesis. If you practice it, you're an ecopoet. This is how it would work in the real world, right? We would take Bill Gates's proposal, and we would tax robots. Then we would take that money, and we would place an ad on Craigslist, and say, "We need approximately sixty thousand people whom we can pay $60,000 a year to go into the Lincoln National Forest, and we want you to garden the thing. We want you to remove the right amount of deadfall. We want you to remove invasive species. We want you to create glades. We want the elk to reproduce. We want you to do this on the millions of hectares that is the Lincoln National Forest. In the end, we want it to look like Muir Woods. We want it to be just the most gorgeous piece of garden property possible."

How many people who are driving cars today or working as landscapers wouldn’t just look at that Craigslist ad and immediately apply for the opportunity to spend the next twenty years of their life gardening this one piece of forest, or this one piece of land, because they’re following their passion into it and all of society benefits from it, right? That’s just one example of what I mean.

I think you can begin a thought experiment where you can see whole new categories of jobs crop up, but also people who are so passionate in what they’re doing now that they simply don’t let the AI do it.

I was on a cooking show once. I live a weird life. While we were on it, we were talking about robots taking jobs, just like you and I are. We were talking about which jobs robots will take. Someone said robots could take the job of a chef. The sous chef walks out of the back and says, "No, it won't." We're like, "Oh, you're with nerds discussing this. What do you mean, 'No, it won't'?" He's like, "Because I'll put a knife in its head, and I will keep cooking."

That’s a guy who’s passionate about his job. He’s going to defend it against the robots and AI. People will follow that passion and see value in it and pursue it.

I think there’s a fourth one that’s somewhere between one and three, that is what comes out of this. Not that there won’t be short-term disruption or pain but, ultimately, I think what will happen is humanity will self-actualize here, and people will find jobs they want to do.

Just to break it down a bit more, that sounds like the WPA during the Depression.

Yeah.

It says, “Let’s have people paint murals, build bridges, plant saplings.”

There was a lot of that that went on, yeah.

And so, you advocate for that?

I think that is a great bridge for that in-between point before post-singularity—or an abundance society, post-scarcity. Even before that, in the very near term, a lot of jobs are going to be created by the deployment of AI. It actually takes a whole lot of work to deploy, and it doesn't necessarily reverberate into removing a bunch of jobs. Often, it's a very minute amount of productivity it adds to a job, and it has an amplifying effect.

The industry of QA is going to explode. Radiologists, their jobs are not going to be stolen; they’re going to be shifted to the activity of QA to make sure that this stuff is identifying correctly in the short term. Over the next twenty to fifty years, there’s going to be a whole lot of that going on. And then, there’s going to be just a whole lot of robotics fleet maintenance and such, that’s going to be going on. And some people are going to enjoy doing this work and they’ll gravitate to it.

And then, we’re going to go through this transition where, ultimately, when the robots start taking care of something really lower-level, people are going to follow their passions into higher-level, more interesting work.

You would pay for this by taxing the robots?

Well, that was Bill Gates’s idea, and I think there’s a point in history where that will function. But ultimately, the optimistic concept is that this revolution will bring about so much abundance that the way an economy works itself will change quite a bit. Thus, you pay for it out of just doing it.

If we get to the point where I can stick out my hand, and a drone drops a hammer when I need a hammer to build something, how do you pay for that transaction? If that's backed with a tokamak reactor—we've created fusion and energy is essentially free—how do you pay for that? It's such a miniscule thing that there just might not be a way to pay for it; paying for things will just completely change altogether.

You are a designer.

I’m a product designer, yes. That’s what I do by trade.

So, how do you take all of that? And how does that affect your job today, or tomorrow, or what you’re doing now? What are the kinds of projects you’re doing now that you have to apply all of this to?

This is how young it actually is. I am currently just involved in what the tooling looks like to actually deploy this at any kind of scale. And when I say "deploy," I don't mean sentience or anything close to it, but just something that can identify typos better than the current spellcheck system, or identify typos in a very narrow sphere of jargon that other people know. Those are the problems being worked on right now. We're scraping pennies outside of dollars, and it just needs a whole lot of tooling right now.

And so, the way I get to apply this, quite fundamentally, is to help influence what are the controls, governance, and transparency going to look like, at least in the narrow sphere where I’m working with people. After that, it’s all futurism, my friend.

But, on a day-to-day basis at argo, where do you see designing for this AI world? Is it all just down to the tooling area?

No, that’s just one that’s very tactical. We are actually doing that, and so it’s absorbing a lot of my day.

We have had a few clients come in and be like, “How do I integrate AI?” And you can find out it’s a very ticklish problem of like, “Is your business model ready for it? Is your data stream ready for it? Do you have the costing ability to put it all together?” It’s very easy to sit back and imagine the possibilities. But, when you get down to the brass tacks of integration and implementation, you start realizing it needs more people here to work on it.

Other than putting out visions that might influence the future, and perhaps enter into the zeitgeist our opinion on how this could transpire, we’re really down in the weeds on it, to be honest.

In terms of far out: you've referred to the singularity a number of times. Do you believe in Kurzweil's vision of the singularity?

I actually have something that I call "the other singularity." It's not as antagonistic as it sounds. It's meant like the other cousin, right? While the singularity is happening—his grand vision, which is very lofty—there's this other singularity going on. This one is made of the cast-offs of the exponential technology curve. As computational power gets less expensive, yesterday's computer—the quad-core computer that I first had for $3,000—is now like a $40 gum stick, and pretty soon it's going to be a forty-cent MCU, a computer on a chip.

At that point, you can apply computational power to really mundane and ordinary things. We’re seeing that happen at a huge pace.

There's something I like to call the "single-function computer," and the new sub-$1,000 threshold. Computers were out there for, really, forty or fifty years before mass adoption hit in the '90s. From a marketing perspective, it was said that, until the price of a multifunction computer came below $1,000, they wouldn't reach adoption. As soon as it did, they spread widely.

We still buy these sub-$1,000 computers. Some of us pay slightly more in order to get an Apple logo on the front of them, but the next sub-$1,000 question is how to get a hundred computers in the home for under $1,000, and that's being worked on now.

What they're going to do is take these single-function computers, which have a massive amount of computational power, and dedicate them to one thing. The Nest would be my first example, the one people are most familiar with. It has the same processing power as the original PowerBook G4 laptop, and all that processing power is put to algorithmically keeping your home comfortable in a very exquisite out-of-the-box experience.
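A toy sketch of what "all that processing power dedicated to one thing" can look like in code; the control loop, thresholds, and occupancy nudge below are hypothetical and are not the Nest's actual algorithm:

```python
# Hypothetical sketch of a "single-function computer": all of its compute is
# dedicated to one job, here keeping a room comfortable.

def thermostat_step(current_temp, target_temp, recent_occupancy, heating_on):
    """One control tick: a little prediction, a little hysteresis, nothing more."""
    # Nudge the target down when nobody has been around lately (a crude stand-in
    # for the kind of learned schedule a device like the Nest builds up).
    effective_target = target_temp - (2.0 if not recent_occupancy else 0.0)
    if current_temp < effective_target - 0.5:
        return True     # turn heating on
    if current_temp > effective_target + 0.5:
        return False    # turn heating off
    return heating_on   # inside the deadband: keep doing what we're doing

print(thermostat_step(current_temp=19.0, target_temp=21.0,
                      recent_occupancy=True, heating_on=False))  # True
```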

We're seeing more and more of these experiences erupt. But they're not happening along this elegant, singularity, intelligence-fed path. They just do what they do procedurally, or with a small amount of intelligence, and they do it extremely well. It's one big mess, and it's entirely possible that we reach a form of the singularity without sentient artificial intelligence guiding it.

An author that I really love who works in this space a lot is Cory Doctorow. He has a lot of books that propose this vision where machines are somehow taking care of this lower level of Maslow's hierarchy of needs, and creating a post-scarcity society, but they are not artificial intelligence. They have no sentience. They're just very, very capable at what they do, and there's a profusion of them doing a lot of things.

That’s the other singularity, and that’s quite possibly how it may happen, especially if we decide that sentience is so dangerous [that] we don’t need it. But I find it really encouraging and optimistic, that there is this path to the future that does not quite require it, but could still give us a lot of what we see in these singularity-type visions of the future—the kind of abundance, and ability to not be toiling each day for survival. I love that.

I think Kurzweil thinks that the singularity comes about because of emergence.

Yeah.

Because, at some point, you just bolt enough of this stuff together and it starts glowing with some emergent behavior, rather than it being a conscious decision where we say, "Let's build it."

Yeah, the exponential technology curve predicts the point at which a computer can have the same number of computations as we have neurons, right? At which point, I agree with you, it kind of implies that sentience will just burst forth.

Well, that’s what he says.

Yeah.

That’s the question, isn’t it?

I don’t think it happens that way.

What do you think happens?

I don’t think sentience just bursts forth at that moment.

First of all, taking a step back, in what sense are you using the word ‘sentience’? Strictly speaking, it means ‘able to sense something, able to feel’—that’s it. Then, there’s ‘sapience’, which is intelligent. That’s what we are, homo sapiens. Then, there’s ‘consciousness’, which is the ability to have subjective experience—that tea you just drank tasted like something and you tasted it.

In what sense are you thinking of computers—not necessarily having to be that?

Closer to the latter. It’s something that is aware of itself and begins guiding its own priorities.

You think we are that. We have that, humans.

Yeah.

Where do you think it comes from? Do you think it’s an emergent property of our brains? Is it something we don’t know? Do you have an opinion on that?

I mean, I’m a spiritualist, so I think it derives from the resonance of the universe that was placed there for a reason.

In that view of the world, you can't manufacture that, in other words. It can't come out of a factory someplace.

To be metaphysical, yes. Like Orson Scott Card, will the philotics plug into the machine, and suddenly it wakes up and it has the same cognitive powers as a human? Yeah, I don’t know.

What you do, which is very interesting, is you say, “What if that assumption—that one assumption—that someday the machine kind of opens its eyes; what if that one assumption isn’t true?” Then what does the world look like, of ever-better computers that just do their thing, and don’t have an ulterior motive?

Yeah, and the truth is they could also happen in parallel. Both could be happening at the same time, as they are today, and still progress. But I think it's really fascinating. I think some people guard themselves. They say, "If this doesn't happen, there's nothing smart enough to make all the decisions to improve humanity, and we're still going to have to toil away and make them." And I say, "No, it might be entirely possible that there's this path where just these little machines, in profusion, do it for us, and sentience is not necessary."

It also opens up the possibility that, if sentience does just pop into existence right now, it makes very fair the debate that you could just turn it off, that you could commit the genocide of the machine and say, “We don’t want you or need you. We’re going to take this other path.”

We Skynet them.

We Skynet them, and we keep our autonomy and we don’t worry about the perils. I think part of the fear about this kind of awareness—we’ve been calling it sentience—kind of theory on AI, is this fear that we just become dependent on them, and subservient to them, and that’s the only path. But I don’t think it is.

I think there’s another path where technology takes us to a place of great capability so profound that it even could remove the base layer of Maslow’s hierarchy of needs. I think of books like Makers by Cory Doctorow and others that are forty years in the future, and you start thinking of micro-manufacturing.

We just put up this vision on Amazon and Whole Foods, which was another nod towards this way of thinking. Ignoring the energy source a little bit—because we think it's going to sort itself out; everyone has solar on their homes, or a tokamak—if you can get these hydroponic gardens into everyone's garage, produce is just going to be so universally available. It goes back to being the cheapest of staples. Robots could reduce spoilage by matching demand, and this would be a great place for AI to live.

AI is really good at examining this notion of like, “I think you’re going to use those Brussels sprouts, or I think your neighbor is going to use them first.” We envision this fridge that has a door on the outside, which really solves a lot of delivery problems. You don’t need those goofy cardboard boxes with foil and ice in them anymore. You just put it in the fridge. It also can move the point of purchase all the way into the home.

When you combine that with the notion of this dumber AI that's just sitting there, deciding whether you or the neighbor needs Brussels sprouts, it can put the Brussels sprouts there opportunistically, thinking, "Maybe he'll get healthy this week." When I don't take them before they spoil, it can move them over to the neighbor's fridge where they'll use them. You root so much spoilage out of the system that nutrition just rises and it becomes more ubiquitous.

Now, if people wanted to harvest those goods or tend those gardens, they could. But, if people didn’t, robots could make up the gap. Next thing you know, you have a food system that’s decoupled from the modern manufacturing system, and is scalable and can grow with humanity in a very fascinating way.
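A minimal sketch of that fridge-to-fridge reallocation idea in Python; the prediction function and the probability numbers are hypothetical stand-ins:

```python
# Hypothetical sketch of the fridge-to-fridge reallocation idea: move a
# perishable to whichever household is most likely to eat it before it spoils.

def reallocate(item, households, predict_use_probability):
    """`predict_use_probability(household, item)` stands in for the learned model."""
    best = max(households, key=lambda h: predict_use_probability(h, item))
    if predict_use_probability(best, item) > 0.5:
        return best                 # route the Brussels sprouts where they'll be eaten
    return None                     # nobody is likely to use them; flag for compost or donation

households = ["me", "neighbor"]
estimates = {("me", "brussels sprouts"): 0.2, ("neighbor", "brussels sprouts"): 0.8}
print(reallocate("brussels sprouts", households,
                 lambda h, i: estimates[(h, i)]))   # -> "neighbor"
```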

Do you think we’re already dependent on the machine? Like, if an EMP wave just fried all of our electronics, a sizeable part of the population dies?

I think that’s very likely. Ignoring all the disaster and such right then, it would take a whole lot of… I don’t necessarily think that’s purely a technological judgment. It’s just the slowness of humanity to change their priorities. In other words, we would realize too late that we all needed to rededicate our resources to a certain kind of agriculture, for instance, before the echo moved through the machine. That would be my fear on it—that we all engrain our habits and we’re too slow to change them.

Way to kill off humanity three times in this podcast!

That’s right.

Does that happen in most of these that you are doing?

No.

Oh, great! It’s just my dark view.

It’s really hard to kill us off, isn’t it?

Yeah.

Because, if it was going to happen, it seems like it would have happened before, when we had no technology. You know, there were just three million of us five thousand years ago. By some counts, thousands of us at one time, with woolly mammoths running around.

But back then, ninety-nine percent of our technology was dedicated to survival, and it’s a way lower percentage now. In fact, we invented a percentage of technology that is dedicated to our destruction. And so, I don’t know how much the odds have changed. I think it’s a really fascinating discussion—probably something that AI can determine for us.

Well, I don’t know the percentage. It would be the gross amount, right?

Yeah.

Because you could say the percentage of money we’re spending on food is way down, but that doesn’t mean we’re eating less. The percentage of money we’re spending on survival may be way down, but that doesn’t mean we’re spending less.

Yeah.

In a really real-world kind of way, there’s a European initiative that says: When an AI makes a decision that affects you, you have a right to know why it made that decision. What do you think of that? I won’t impute anything. What do you think of that?

Yeah, I think Europe is ahead of us here. The funny thing is a lot of that decision was reported as rights for AI, or rights for robots. But when you really dig into it, it’s rights for humans. And they’re good rights.

If I were to show you designs out of my presentations right now, I have this big design that’s… You’re just searching for a car and it says, “Can I use your data to recommend a car?” and you click on that button and say yes. That’s the way it should be designed. We have taken so many liberties with people’s data and privacy up until now, and we need to start including them in on the decision.

And then, at the bottom of it, it has a slider that says, “The car you want, the car your wife wants.” You should also have transparency and control of the process, right? Because machine learning and artificial intelligence produces results with this kind of context, and you should be allowed to change the context.

First of all, it's going to make for a better experience because, if it's looking at all my data historically, and it's recommending to me the kind of sleeping bag I should buy, it might need to be aware—and I might have to make it aware—that I'm moving to Alaska next week, because it would make a different recommendation. This kind of transparency in governance actually… And I also think they put in another curious thing—and we'll see how it plays out through the courts—but I believe they also said that, if you get hurt by it—this was the robotic side—the person who made the robot is responsible for it.
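A sketch of what that consent-and-context pattern could look like in code; the class, field names, and scoring below are invented for illustration rather than an existing API:

```python
# Hypothetical sketch: a recommendation request that carries explicit consent
# and user-adjustable context, in the spirit of the car-search design described above.

from dataclasses import dataclass, field

@dataclass
class RecommendationRequest:
    may_use_personal_data: bool                           # the "Can I use your data?" button
    stated_context: dict = field(default_factory=dict)    # e.g. {"moving_to": "Alaska"}

def recommend(request, history, catalog):
    """Only touch history if the user opted in, and let stated context override it."""
    signals = dict(history) if request.may_use_personal_data else {}
    signals.update(request.stated_context)    # explicit context outranks inferred history
    climate = signals.get("moving_to", signals.get("home_region", "temperate"))
    # Placeholder scoring: prefer gear rated for the climate the user told us about.
    return [item for item in catalog if climate in item["rated_for"]]

req = RecommendationRequest(may_use_personal_data=True, stated_context={"moving_to": "Alaska"})
catalog = [{"name": "summer bag", "rated_for": ["temperate"]},
           {"name": "expedition bag", "rated_for": ["Alaska", "arctic"]}]
print(recommend(req, history={"home_region": "Texas"}, catalog=catalog))   # -> expedition bag
```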

Some human along the way made a decision that hurt you is the thesis.

Yes, or the business corpus that put this robot out there is responsible for it. It’s the closest thing to the three laws of robotics or something put into law that we’ve seen yet. It’s very advanced thinking, and I like it; and it’s already in our design practice.

We're already trying to convince clients that this is the way to begin designing experiences. More than that, we're trying to convince our fellow designers, because we have a certain role in this that we can utilize to design experiences so that they are open and transparent to the person using them. That little green LED light says, "AI is involved in this decision," so you might judge it differently.

But where does that end? Or does that inherently limit the advancement of the technology? Because you could say, “I rank number two in Google for some search—some business-related search—and somebody else ranks number one.” I could go to Google and say, “Why do I rank number two and they rank number one?” Google could, in all fairness, say, “We don’t know.”

Yeah, that’s a problem.

And so, do you say, “No, you have to know. You’ve got to limit the technology until you can answer that question,” or do you just say, “We don’t know how people make decisions.” You can’t ask the girl why she didn’t go out with you. “Why aren’t you going out with me?” That affects me. It’s like, “I’m just not going to.”

You’ve framed the consumer’s dilemma in everything from organic apples to search results, and it’s going to be a push-and-pull.

But I would say, yeah, if you’re using artificial intelligence, you should know a little bit about how it’s being produced, and I think there’ll be a market for it. There’s going to be a value judgment on the other side. I really think that some of the ways we’re looking at designing experiences, it’s much more valuable to the user to see a lot of these things and know it—to be able to adjust the rankings based on the context that they’re in, and they’re going to prefer that experience.

I think, eventually, it’ll all catch up in the end.

One last story: I used to sell snowboards. So much of this is used for commerce, and retail is an easy example for us to understand. I used to sell snowboards, and I got really good at it. My intelligence on it got really focused, and I was at a pretty good hit rate. Someone could walk in the door, and if I wrote down which snowboard they were going to buy, I was probably right eighty-five to ninety percent of the time. I got really good at it. By the end of the season, you just know.

But, if I walked up to any of those people and said, “Here’s your snowboard,” I would never make a sale. I would never make a sale. It creeps them out, they walk away, the deal is not closed. There’s a certain amount of window dressing, song and dance, gathering of information to make someone comfortable before they will make that decision to accept the value.

Up until now, technology has been very prescriptive. You write the code, it does what the code says. But that's going to change: with probabilities and context-gathering, that prescriptiveness goes away. To be successful, there is still going to have to be that path, and it's the perfect place to put in what we were just talking about—the transparency, the governance, and the guidance to the consumer to let them know that they're [in] on that type of experience. Why? You're going to sell more snowboards if you do.

In your view of a world where we don’t have this kind of conscious AGI, we’re one notch below that, will those machines still pass the Turing test? Will you still be able to converse with them and not know that it’s a computer you’re talking to?

I think it’ll get darn close, if not all the way there. I don’t think you could converse with them as much as people imagine though.

Fair enough. I'm going to ask you a privacy question. Right now, privacy is largely protected by just the sheer amount of data. Nothing can listen to every phone conversation. Nothing can do that. But, once a machine can listen to them all, then it can.

Then, we can hear them all right now, but we can’t listen to them all.

Correct. And I read that you can now get human-level lip-reading from cameras, and you get facial recognition.

Yeah.

And so you could understand that, eventually, that’s just a giant data mining problem. And it isn’t even a nefarious one, because it’s the same technology that recommends what you should buy someplace.

Yeah.

Tell me what you think about privacy in a world where all of that information is recorded and, I’m going to use ‘understood’ loosely, but able to be queried.

Yeah, this is the, “I don’t want a machine knowing what I had for lunch,” question. The machine doesn’t care; people care. What we have to do is work to develop a society where privacy is a virtue, not a right. When privacy is a right, you have to maintain it through security. The security is just too fallible, especially given the modern era.

Now, there'll always be that certain kind of thing, but privacy-as-a-virtue is different. If you could structure society where privacy is a virtue, well, then it's okay that I know what you had for lunch. It's virtuous for me to pretend like I don't know what you had for lunch, to not act on what I know you had for lunch, and not allow it to influence my behavior.

It sounds almost Victorian, and I think there is a reason that, in the cyberpunk movement in science fiction, you see this steampunk kind of Victorian return. In the Victorian era, we had a lot of etiquette based on just the size of society. The new movement of information meant that you knew a lot about people's business that you hadn't known before. And the way we dealt with it was this kind of really pent-up morality where it was virtuous to pretend like you didn't know—almost to make it a game and not allow it to influence your decision-making. Only priests do this anymore.

But we’re all going to have to pick up the skill and train our children, and I think they’re training themselves to do it, frankly, right now, because of the impacts of social media on their lives. We might return to this second Victorian era, where I know everything about you but it’s virtuous.

Now, that needs to bleed into the software and the hardware architectures as well. Hard drives need to forget. Code algorithms need to forget, or they need to decide what information they treat as virtuous. This way, we can have our cake and eat it, too. Otherwise, we’re just going to be in this weird security battle forever, and it’s not going to function. The only people who are going to win in that one are the government. We’re just going to have to take it back in this manner.
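A minimal sketch of "hard drives need to forget" as a retention policy in Python; the store, the key naming, and the one-month window are all hypothetical:

```python
# Hypothetical sketch: a store that forgets by design, so old observations
# age out instead of accumulating forever.

import time

class ForgetfulStore:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items = {}                      # key -> (value, timestamp)

    def put(self, key, value):
        self._items[key] = (value, time.time())

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, stamp = entry
        if time.time() - stamp > self.ttl:    # past its retention window: forget it
            del self._items[key]
            return None
        return value

store = ForgetfulStore(ttl_seconds=30 * 24 * 3600)   # keep observations a month, then forget
store.put("lunch:2017-09-20", "big fatty hamburger")
print(store.get("lunch:2017-09-20"))
```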

Now, you can just see how much optimism bleeds through me when I say it this way, and I'm not totally incognizant of my optimism here, but I really think that's the key to this. Any time we're faced with a feature, we just give up our privacy for it. And so, we may as well start designing the world that can operate with less privacy-as-a-right.

It’s funny, because I always hear this canard that young people don’t care about privacy, but that’s not my experience. I have four kids. My oldest son always comes in and says, “How can you use that? It’s listening to everything you’re doing.” Or, “How do you have these settings on your computer the way you do?” I’m like, “Yeah, yeah, well…” But you say, not only do they value it more, but they’re learning etiquette around it as well.

Yeah, they’re redefining it.

They see what their friends did last night on social media, but they’re not going to mention it when they see them.

That’s right, and they’re going to monitor their own behavior. They just have to in order to function socially. We as creatures need this. I think we grew up in a more unique place. It’s goofy, but I lived in 1867. You had very little privacy in 1867.

That’s right. You did that PBS thing.

Yeah, I did that PBS thing, that living history experiment. Even though it's fourteen people, the impact of a secret or something slipping out could be just massive, and everyone feels that impact. There was an anonymity that came from the Industrial Revolution that we, as Gen Xers, probably enjoyed the zenith of, and we've watched social media pull it back apart.

But I don’t think it’s a new thing to humanity, and I think ancestral memory will come back, and I think we will survive it just fine.

In forty-something guests, you’ve referred to science fiction way more than even the science fiction writers I have on the show.

I’m a fanboy.

Tell me what you think is really thoughtful. I think Frank Herbert said, “Sometimes, the purpose of science fiction is to keep the future from happening.”

Yes.

Tell me some examples. I’m going to put you on the spot here.

I just heard that from Cory Doctorow two weeks ago, that same thing.

Really? I heard it because I used to really be annoyed by dystopian movies, because I don’t believe in them, and yet I’m required to see them because everybody asks me about them. “Oh, my gosh, did you see Elysium?” and I’m like, “Yes, I saw Elysium.” And so, I have to go see these and they used to really annoy me.

And then, I saw that quote a couple of years ago and it really changed me, because now I can go to them and say, “Ah, that’s not going to happen.”

Anyway, two questions: Are there any futures that you have seen in science fiction that you think will happen? Like, when you look at it, you say, “That looks likely to me,” because it sounds like you’re a Gene Roddenberry futurist.

I’m more of a Cory Doctorow futurist.

And then, are there ones you have seen that you think could happen, but you don’t think it’s going to happen, but it could?

I'm still on the first question. In my recent readings, the whole bodies of work by Kim Stanley Robinson and Cory Doctorow are very good.

Now, let's talk about Iain M. Banks and the whole Culture series, which is so far-future, and so grand in scale, and so driven by AI that knows it's superior to humans—but is fascinated with them. Therefore, it doesn't want to destroy them but rather to attach itself to their society. I don't think that is going to happen, but it could happen. It's really fascinating.

It's one of those bigger-than-the-galaxy type universes where you have megaships that are mega-AIs and can do the calculations of a trillion humans in one second, and they keep humans around for two reasons. This is how they think about it: One, they like them; they're fascinating and curious. And two, there are thirteen of them who, by sheer random chance, are always right. Therefore, they need a certain density of humanity just so they can consult them when they can't come up with an answer of enough certainty.

So, there are thirteen humans that are always right.

Yeah, because there are so many trillions and trillions of them. And the frustrating thing to these AI ships is that they can't figure out why those thirteen are always right, and no one has decided which theory is correct. The predominant theory is that they're just making random decisions, and because there are so many humans, these thirteen people's random decisions happen to always be correct. As for the humans themselves, we get a little profile of one of them, and she's rather depressed, because we can't be fatalists as a species.

Jared, that is a wonderful place to leave this. I want to thank you for a fascinating hour. We have covered, I think, more ground than any other talk I’ve had, and I thank you for your time!

Thank you! It was fun!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
