Voices in AI – Episode 28: A Conversation with Mark Stevenson


In this episode, Byron and Mark discuss the future of jobs, energy and more.


Byron Reese: This is “Voices in AI,” brought to you by Gigaom. I’m Byron Reese. Today I’m excited we have Mark Stevenson. Mark is a London-based British author, businessman, public speaker, futurologist and occasionally musician and comedian. He is also a fellow of The Royal Society for the Encouragement of Arts, Manufactures and Commerce. His first book, An Optimist’s Tour of the Future, was released in 2011, and his second, We Do Things Differently, came out in 2017. He also co-founded and helps run the London-based League of Pragmatic Optimists. Welcome to the show, Mark!

Mark Stevenson: Thank you for having me on, Byron! It’s a pleasure.

So, the subtitle of your Optimist’s Tour of the Future is, “One curious man sets out to answer what’s next.” Assuming you’re the curious man, what is next?

You can take “curious” in two ways, can’t you? Somebody is interested in new stuff, or somebody’s just a little bit odd, and I am probably a bit of both. Actually, I don’t conclude what’s next. I actually said the question is its own answer. My work is about getting people to be literate about the questions the future is asking them. What’s next will depend on how we collectively answer those questions.

What’s next could be a climate-changed, dystopian, highly unequal world; or what’s next could be a green-powered, prosperous, abundant, distributed economy for everybody. Either is plausible. What’s next is what we decide to do about it, and that’s why I do the work I do, which is trying to educate people about the questions we’re being asked, and allowing them to imagine the answers for themselves.

You said that’s why you do the work that you do. What do you do?

Well, I guess I am a professional irritant. I work with governments, corporations, and universities, helping them become literate about the questions the future is asking them. You’ll find that most organizations have a very narrow view of the world, because they are governed by their particular marketplace or whatever, and the same goes for governments and government departments.

So, I’ll give you an example, I was working with an insurance company recently who wanted me to come in and help them, and I just put up a picture of two cars having an accident and I said, “What happens if one or both of these is a driverless car?” and the head of insurance went, “I don’t know.” And I’m like, “Well, you should really be asking yourself that question because that question is coming.” And he said, “Mark, we insure drivers. If there aren’t any, it’s a real fucker on the balance sheet.”

It’s funny, but I used to work on old cars, and they were always junkers when I got them, and one time, I had one parked at the top of the hill and in the middle of the night, the brakes failed evidently and it rolled down the hill and hit another car. That scenario actually happened.

The other thing I said was, “What’s your biggest cost?” and he said, “Of course, it’s claims.” Something like ninety-seven percent of claims are due to human error, and it turns out driverless cars are way safer than cars with drivers in them; so maybe that’s good for him, because maybe it will reduce claims. My point was that I don’t know what he should do. He’s the expert in insurance, but my point is, you should be asking yourselves these questions.

Another example from insurance—I was working with the reinsurance industry, the insurers that insure the insurers. On the one hand, you’re being asked to underpin businesses that are insuring a coal-fired power plant. On the other hand, you’re being asked to insure businesses that are going to be absolutely decimated by climate risk.

And you can’t do both. It’s that systems thinking, I suppose, that I bring to my clients: how the food system, the energy system, the government system, the education system, what’s happening in physics, what’s happening in the arts and culture, what’s happening in technology, what’s happening in economics, what’s happening in politics—how they all interrelate, and what questions they ask you.

And then what are you going to do about it, with the levers you have and the position you’re in, to make our world more sustainable, equitable, humane and just? And if you’re not doing that, why are you getting up in the morning and what is the point of you? That’s kind of my business.

When you deal with people, are they, generally speaking, optimistic, are they pessimistic, or are they agnostic on that, because they’re basically just looking at the future from a business standpoint?

That’s a really good question. They’re often quite optimistic about their own chances and often pessimistic about everybody else’s. [Laughter] If you ask people, “Are you optimistic about the future?” they’re going to go, “Yeah, I’m optimistic about the future.” Then, you go, “Are you optimistic about the future generally, like, for the human race?” And you hear, “Oh, no, it’s terrible.”

Of course, those two things are incompatible. People are convinced of their ability to prevail against the odds, but not for everybody else. And so, I often get hired by companies who are saying to me, “We want you to help us be more successful in the future,” and then, I’ll point out to them that actually there’s some existential threats to their business model that may mean they’ll be irrelevant in five years, which they haven’t even thought about.

A really good example of this from the past, which is quite famous, is what happened to Blockbuster. So Netflix went to Blockbuster—I think in 2006—and said, “You should invest in us. You should buy us. We’ll be your online distribution arm.” And the management at Blockbuster went, “I don’t know. I think people will always want to take a cassette home.” But also, Blockbuster made a large amount of their profits from late returns.

So they weren’t likely to embrace downloads, because that would kind of cannibalize one of their revenue streams. Of course, that was very short-sighted of them. And one of the things I say to a lot of my clients is, “Taking the future seriously is going to cost some people their jobs, and I am sorry about that, but not taking the future seriously is going to cost everybody their jobs. So it’s kind of your choice.”

Are your clients continental, British, American… primarily? 

All over. I’m under non-disclosure agreements with most of them.

Fair enough. My follow-up question is going to be, there’s of course a stereotype that Europeans overall are more pessimistic about the future and Americans are less so. Is that true or is it that there’s a grain of truth somewhere, but it’s not really material?

I think there is something in it, and I think it’s because people from the United States are very confident about the wonderfulness of the United States and how it will prevail. There’s that “American Dream” kind of culture, whereas Europe is a lot of smaller nations that, until quite recently, were beating the crap out of each other. Perhaps we are a little bit more circumspect, but yeah, it’s a very slight skewing in one direction or the other.

You subtitle your book “What’s Next?” and then, you say, “The question is the answer,” kind of in this Zen fashion, but at some level you must have an opinion, like, it could go either way, but it will likely do what? What do you personally think?

 I don’t know. I feel it’s really up for grabs. If we carry on the way we’re going, it’s going to be terrible; there’s no doubt about that. I think it’s an ancient Chinese proverb that says, “If we don’t change the direction we’re going, we’re going to end up where we’re headed.” And where we’re heading to at the moment is a four-degree world, mass inequality, mass unemployment from the subject we’re going to get into a bit later, which is AI replacing a lot of middle-class jobs, etc. That’s certainly possible.

Then, on the other hand, because of the other work I do with Atlas of the Future, I’m constantly at the cutting edge, finding people doing amazing stuff. There are all sorts of people out there putting different futures on the table that make it eminently possible for us to have a humane and just and sustainable world. You realize, for instance, that we’re installing half a million solar panels a day at the moment. Solar is doubling in capacity every two or three years, and it’s from a low starting point, but if it carries on like that, we’ll be completely on renewables within a generation.
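
[Editor’s note: a minimal back-of-the-envelope sketch of the compounding claim above. The two percent starting share and the exact doubling times are illustrative assumptions, not figures from the conversation; the point is only that a small base doubling every two to three years reaches one hundred percent within roughly one to two decades.]

```python
import math

# Rough sketch of the compounding argument, not a forecast.
# start_share (~2% of supply) and the doubling times are illustrative
# assumptions, not figures quoted in the conversation.

def years_to_full_share(start_share: float, doubling_years: float) -> float:
    """Years for an exponentially doubling share to reach 100%."""
    doublings_needed = math.log2(1.0 / start_share)
    return doublings_needed * doubling_years

for doubling_years in (2.0, 3.0):
    years = years_to_full_share(start_share=0.02, doubling_years=doubling_years)
    print(f"Doubling every {doubling_years:.0f} years: ~{years:.0f} years to reach 100%")

# Output:
# Doubling every 2 years: ~11 years to reach 100%
# Doubling every 3 years: ~17 years to reach 100%
```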

And that’s not just good for the environment. Even if you don’t care about the environment, it’s really good for the economy, because the marginal cost of renewable energy is zero and the energy price is very, very stable, which is great when you want to invest long-term. Because one of the problems with the world’s economy is that the oil price keeps going up and down, and nobody knows what’s going to happen to their economy as a result.

You’ll remember—I don’t know how old you are, but certainly some of your listeners will remember—what happened after the Yom Kippur War, where the Arab nations, in protest of American support for Israel, just upped the oil price by about fivefold and suddenly, you had a fifty-five mile-per-hour speed limit, there were states that banned Christmas lights because it was a frivolous use of energy, there was gas rationing, etc. That’s a very extreme example of what’s wrong with relying on fossil fuels, just from an economic perspective, not even an environmental one.

So there are all sorts of great opportunities out there, and I think we really are on the dividing line at the moment. And I suppose I have just decided to put my shoulder against fighting for the side of sustainability and humanity and justice, rather than business as usual, and I don’t have a view. People call me an optimist because I fight, I suppose, for the optimistic side, but we could lose, and we could lose very badly.

Of course, you’re right that if we don’t change direction, you can see what’s going to happen. But there are other things that no force in heaven or on earth could stop, like the trend toward automation, the trend toward computerization, the development of artificial intelligence, and those sorts of things.

Those are known things that will happen. Let’s dive into that topic. Putting aside climate and energy and those topics for the moment, what do you think are just things that will certainly happen in the future?

This is really interesting. The problem with futurology as a profession—and I use that word “profession” very loosely—is that it’s associated with prediction, and predictions are usually wrong. As you said, there are some things you can definitely see happening, and it’s therefore very easy to predict what I would call the “first-order effects” of that.

A good example: When the internet arrived, it wasn’t hard to predict the rise of email, as you’ve got a network of computers with people sat behind them, typing on keyboards. Email is not a massive leap. So predicting the rise of email is not a problem, but did anybody predict the invention of social media? Did anybody predict the role of social media in spreading fake news or whatever? You can’t. These are second-, third-, fourth-order effects. So each technology is really not an answer, it’s just a question.

If you look at AI, we are looking very much at the automation of lots of jobs that previously we would’ve thought “un-automatable.” As already mentioned, driverless cars are one example of artificial intelligence. A great report came out last year from the Oxford Martin School listing literally hundreds of middle-class jobs that are on the brink of being replaced by automation—

Let me put a pin in that, because that’s not actually what they say; they go to great pains to say just the opposite. What they say is that forty-seven percent of things people do in their jobs are potentially automatable. That’s why things on their list are things like pharmacist assistants or whatnot. So all they really say is, “We make no predictions whatsoever about what is going to happen to jobs.”

So if a futurologist does anything, the futurologist looks at the past and says, “We know human nature is a constant, and we know things that have happened in the past, again and again and again. And we can look at that and say, ‘Okay, that will probably happen again.’” So we know that for the two hundred and fifty, three hundred years since the Industrial Revolution in the West, unemployment has remained within this fairly narrow band of five to ten percent.

Aside from the Depression, all over the West, even though you’ve had, arguably, more disruptive technologies—you’ve had the electrification of industry, the mechanization of industry, the end of animal power as a force of locomotion, coal growing from generating five percent of energy to eighty percent in just twenty years—all these enormous disruptions that, to use your exact words, “automated jobs that we would’ve thought were not automatable,” and yet we never had a hiccup or a surge in unemployment from that. So wouldn’t it be incumbent on somebody who says something different is going to happen to really go into a lot of detail about what’s different this time?

I absolutely agree with you there, and I am not worried about employment in the long run, because if you look at what’s happened in employment, it’s the “non-routine” things, the things that humans are good at, that have been hard to automate. A really good example: at the beginning of the Industrial Revolution, lots of farm laborers; by the end of it, not nearly as many—I think five percent of the number—because we introduced automation to the farming industry, tractors, etcetera, and now far fewer people can farm the same amount of land.

And by the same token, at the beginning of the Industrial Revolution, not so many accountants; by the end of it, stacks of accountants—thirty times more accountants. We usually end up creating these higher-value, more complex jobs. The problem is the transition. In my experience, not many farm laborers want to become accountants, and even if they did, there’s no transition route for them. So whole families, whole swathes of the populace can get blindsided by this change, because they’re not literate about it, or their education system isn’t thinking about it in a sensible way.

Let’s look at driverless technology again. There are 3.5 million truck drivers in the United States, and it’s very likely that a large chunk of them will not have that job available to them in ten or fifteen years, and it’s not just them. Actually, if you go to the American Trucking Association, they will say that one in fifteen American workers is somehow related to the trucking industry.

A lot of those jobs will be under threat. Other jobs may replace them, but my concern is what happens to the people who are currently truck drivers? What happens to an education system that doesn’t tell people that truck drivers won’t exist in such numbers in ten or fifteen years’ time? What does the American Trucking Association do? What do the logistics firms that employ those truckers do?

They’ve all got a responsibility to think about this problem in a systemic way, and they often don’t, which is where my work comes in, saying, “Look, Government, you have to think about an education that is very different, because AI is going to be creating a job market that’s entirely different from the one you’re currently educating your children into.”

Fair enough. I don’t think that anybody would argue that an industrial-economy education system is going to make workers successful in this world of tomorrow, but the setup you just gave strikes me as a bit disingenuous. Let’s take truck driving, for example. The facts on the ground are that it will be gradual; you’ve likely got ten years to replace all the truckers. So fewer people are going to enter the field, and people who might retire earlier are going to retire out of it. Technology seldom moves that quickly.

But the thing that I think might be different is that, usually, what people say is, “We’re going to lose these lower-skill jobs and we’re going to make jobs for geneticists,” and those people who had these lower-skill jobs are going to become geneticists, and nobody actually ever says that that’s what happens.

The question is, “Can everybody already do a job a little harder than the one they presently have?” Each person just goes up one layer, one notch in the food chain. That doesn’t actually require that you take truck drivers and send them to graduate school for twelve years.

Indeed, and this is why having conversations like this is so important, because, as I said, my thing is about making people literate about the questions the future is asking them. And so, now, we’re having quite a literate conversation about that, and that’s really important. It’s why podcasts like this are important, it’s why the research you do is important. But in my experience, a lot of people, particularly in government, they would not even be having this conversation or asking this question. And the same for lots of people in business as well, because they’re very focused on a very narrow way of looking at things. So, I think I’m in violent agreement with you.

And I with you. I am just trying to dissect it and think it through, because one could also say that about the electrification of industry and all those things I just listed. Nobody said, “Electrification is coming.” We’ve always been reactive, and, luckily, change has come at a pace that our reactive skills have been able to keep up with. Do you think this time is different? Are you saying there’s a better way to do it?

I just think it’s going to be faster this time. I think it’s an arguable truism of futurism that technology waves speed up. For instance, there are some figures I’ve got from the United States National Intelligence Council, and it’s really interesting to look at how long it took the United States population to adopt certain technologies. It took forty-six years, from its introduction to the market, for twenty-five percent of the United States population to bring electricity into their homes.

It took just seven for the World Wide Web, and there were two and a half times as many citizens by then. And that makes sense, because each technology provides the platform and the tools to build the next one. You can’t have the World Wide Web until you have electricity. So you see this speeding up, because now you have more powerful tools than you had last time to help you build the next one, and they distribute much more quickly as well.

So what we have—and this is what my third book is going to be about—is this gap between the speed of change of technology, and of thought and philosophy and new ideas about how we might organize ourselves, and the speed of our bureaucracies and our governments and our administration, which is still painfully slow. And it’s that mismatch of those gears that I think causes the most problems, the education system being a really good example. If your education system isn’t keeping up with those changes, isn’t in lockstep with them, then inevitably you’re going to do a disservice to many of the students going through it.

Where do you think that goes? Because if it took forty-six years for electricity and seven for the web, eventually it’s like that movie Spaceballs, where there’s a scene in which the video hits the video store before they finish shooting it. At some point, there’s an actual physical limit to that, right? You don’t have a technology that comes out on Thursday and by Friday, half the world is using it. So what does that world look like?

Exactly, and all of these things move at slightly different speeds. If you look at what’s happening with energy at the moment, which is one of my favorite topics because I think it kind of underpins everything else, the speed at which the efficiency of solar panels is rising, the speed at which the price of solar is going down, the invention of energy Internet technology, based on ideas from Bob Metcalfe, is extraordinary.

I was at the EU Commission a few weeks ago, talking to them about their energy policy, looking at it and saying, “Look guys, you have a fantastic energy policy for 1994. What’s going on here? How come I am having to tell you about this stuff? We should be moving to a decentralized, decarbonized, much more efficient, much cheaper energy system, because that’s good for everybody, but you’re still writing energy policy as if it were the mid-’90s.” And that really worries me. Energy is not going to move as fast as a new social networking application, because you do have to actually build stuff, stick it in the ground and connect it all together, but it is still moving way faster than the administration, and that is my major concern.

The focus of my work for the next two or three years is how we get those things working at the same speed, or nearly enough the same speed that they can usefully talk to each other, because governments, at the moment, don’t talk to technology in any useful way. Take data protection law: I was just talking to a lawyer yesterday, and he said, “I’m in the middle of this data protection case. I am dealing with data protection law that was written in 1985.”

Let’s spend one more minute on energy, because it obviously makes the world go around, literally. My question is, the promise of nuclear way back was that it would be too cheap to meter, or in theory it could’ve been, and it didn’t work out. There were all kinds of things that weren’t foreseen and whatnot. Energy is arguably the most abundant thing in the universe, so do you think we’ll get to a point where it’s too cheap to meter, it’s like radio waves, it’s like the water fountain at the department store that nobody makes you put a quarter in?

Yeah, I think we will, but I think that comes from a distributed system rather than a centralized one. One of my pet tropes, which I trot out quite regularly, is this idea that we’re moving from economies of scale to economies of distribution. It used to be that the most efficient way to do things was to get everything into a centralized place and do it all there, because it was cheaper that way, given the technology we had at the time. Whether it was schools, where we get all the children into a room and teach at them, or power stations, where we dig up a bunch of coal, take it to a big power station, burn it and then send it out through the wires. Even though in your average coal-fired power plant you would lose sixty-seven percent of the energy as waste heat, it was still the most efficient way to do things.

Now, we have these technologies that are distributed. Even though they might be slightly less efficient or not quite as cost-effective, in and of themselves, when you connect them all together and distribute them, you start to see the ability to do things that the centralized system can’t. Energy, I think, is a really good example of that.

All our energy is derived from the sun, and the sun’s energy doesn’t hit just power plants. It hits the entire planet, and there’s that very famous statistic, that there’s more energy that hits the Earth’s surface in an hour than the human race uses in a year, I think. The sun has been waving this massive energy paycheck in our face every second since it started burning, and we haven’t been able to bank it very well.

So we’ve been dipping into the savings account, which is fossil fuels. That’s sunshine that has been laid down for us very dutifully by Mother Nature for billions of years, and we can dig it up, thank you very much. Thank you for the savings account, but now we don’t need it so much, because we can actually bank the stuff as it’s coming towards us with the improving renewable technologies that are out there. Couple that with an energy Internet, and you start to make your energy and your fuel where you are. I’m also an advisor to Richard Branson’s “Virgin Earth Challenge,” which is a twenty-five-million-dollar prize for taking carbon out of the atmosphere.

You have to be able to do that in an environmentally sustainable way, and make a profit while you’re doing it. And I have to be very careful and say this is not the official view of the Virgin Earth Challenge, but I am fairly confident that we will award that prize in the next three to four years, because we’ve got finalists that are taking carbon directly out of the air and turning it into fuel, and they’re doing it at a price point that’s competitive with fossil fuels.

So if you distribute the production of liquid fuels and electricity and anybody can do it, that means you as a school can do it, you as a local business can do it. And what you find is when people do take control of the energy system, because they’re not so motivated by making a profit, the energy is cheaper, they maintain it better, and everybody’s happier.

There’s a town in the middle of Texas right now called Georgetown—65,000 Trump voters who I imagine are not that interested in the climate change threat, as conservatives generally don’t seem to think it is a problem—and they’re all moving over to renewables, because it’s just cheaper than using oil, and they are in the middle of central Texas. I think we’re definitely going in that direction.

You’re entirely right. I am going to pull these numbers from my head, so they could be off, but something like four million exajoules of sunlight falls on the planet every year, and humanity needs five hundred. That’s what it is right now. It’s like four million raining down, and we have to figure out how to harvest five hundred of them economically. Maybe, if the Virgin Earth Challenge works, there’s going to be a crisis in the future—there’s not enough carbon in the air! They’ve pulled it all out at a profit.

That would be a nice problem to have, because we’ve already proven to ourselves that we can put carbon in the air. That’s not going to be a problem if it’s getting too low.
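
[Editor’s note: a quick sanity check of the round figures Byron quotes above, using his from-memory numbers of roughly four million exajoules of incoming sunlight per year against roughly five hundred exajoules of annual human demand. Treat the output as an order-of-magnitude illustration, not a precise estimate.]

```python
# Back-of-the-envelope check of the solar-energy figures quoted above.
# The constants restate Byron's from-memory numbers and are approximate.

SUNLIGHT_EJ_PER_YEAR = 4.0e6   # ~4 million exajoules of sunlight reaching the planet per year
HUMAN_USE_EJ_PER_YEAR = 500.0  # ~500 exajoules of energy used by humanity per year
HOURS_PER_YEAR = 8760

# How many times over does incoming sunlight cover annual human demand?
ratio = SUNLIGHT_EJ_PER_YEAR / HUMAN_USE_EJ_PER_YEAR
print(f"Incoming sunlight is ~{ratio:,.0f}x annual human energy use")

# The "an hour of sunlight vs a year of human use" comparison Mark cites earlier.
sunlight_per_hour = SUNLIGHT_EJ_PER_YEAR / HOURS_PER_YEAR
print(f"Sunlight per hour: ~{sunlight_per_hour:.0f} EJ, vs ~{HUMAN_USE_EJ_PER_YEAR:.0f} EJ used per year")

# Output:
# Incoming sunlight is ~8,000x annual human energy use
# Sunlight per hour: ~457 EJ, vs ~500 EJ used per year
```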

So let’s return to artificial intelligence for a moment. I want to throw a few things at you. Two different views of the world—I’d love to talk about each one by itself. One of them is that the time it takes for a computer to learn to do a task gets shorter and shorter as we learn how to do it better, and that there’s some point at which it is possible for the computer to learn to do everything a human can do, faster than a human can do it. And it would be at that point that there are literally no jobs, or could be literally no jobs, if we chose that view. I’m curious whether you think that or not, but assuming that it is true, what do you think happens?

I think we find new kinds of jobs. I really do. The thing is that the clue is in the name, “artificial intelligence.” We have planes; that’s artificial flying. We don’t fly the same way that birds fly. We’ve created an entire artificial way of doing it. And the intelligences that will come out of computers will not be the same as human intelligence.

They might be as intelligent, arguably, although I am not convinced of that yet, but they will be very different intelligences—in the same way that a dog’s intelligence is not the same as an ant’s intelligence, which is not the same as my Apple MacBook’s intelligence, if it has any, which is not the same as human intelligence. These intelligences will do different things.

They’ll be artificial intelligences and they’ll be very, very good at some things and very bad at other things. And the human intelligence will have certain abilities that I don’t think a machine will ever be able to replicate, in the same way that I don’t believe a wasp is ever going to be as good as me at playing the bass guitar and I am never going to be as good as it at flying.

So what would be one of those things that you would be dubious that artificial intelligence would be able to do?

I think it is the moral questions. It’s the actual philosophy of life—what are we here for, where are we going, why are we doing it, what’s the right thing to do, what do we value, and also the curiosity. I interviewed Hod Lipson at Columbia and he was very occupied with the idea of creating a computer that was curious, because I think curiosity is one of those things that sort of defines a human intelligence, that machines, to my knowledge, don’t have in any measurable sense.

So I think it would be those kind of very uniquely human things—the ability to abstract across ideas and ask moral, ethical questions and be curious about the world. Those are things that I don’t see machines doing very well at the moment, at all, and I am not convinced they’ll do them in the future. But it’s such a rapidly evolving field and I’m not a deep expert in AI, and I’m willing to be proved wrong.

So, you don’t think there will ever be a book One Curious Computer Sets Out To Answer What’s Next? 

Do you know what? I don’t, but I really wish there was because I’d love to go on stage and have that panel discussion with that computer.

Then, let’s push the scenario one step further. I would have to say it’s an overwhelming majority of people who work in the AI field who believe that we will someday—and interestingly, the estimates range from five to five hundred years—make a general intelligence. And it begins with the assumption that we, our brains and our minds, are machines and therefore, we can eventually build a mechanical one. It sounds like you do not hold that view.

It’s a nuanced view. Again, it’s interesting to discuss these things. What we’re really talking about here is consciousness, because if you want to build an “artificial general intelligence,” as they call it, what you’re talking about is building a conscious machine that can have the same kind of thoughts and reflections that we associate with our general intelligence. Now, there are two things I’d say.

The first is, to build a conscious machine, you’d have to know what consciousness is, and we don’t. We’ve been arguing about it for two thousand years. I would also say that some of the most interesting work in that field is happening in AI, particularly in robotics, because in nature there is no consciousness without a body. It may be that consciousness isn’t actually one thing; maybe it’s eight separate questions we have to answer, and once we’ve worked out what those eight are, we can answer them with technology. I think that might be a plausible route.

And clearly, as you point out, consciousness must be computable, because we are computing it right now. You and I are “just” DNA computer code being read, and that computer code generates proteins and lipids and all kinds of things to make us work, and we’re having this conversation as a result of these computer programs that are running inside us. So clearly, consciousness is computable, but I remain very much to be convinced that we have any idea of what consciousness really is, or whether we’re even asking the right questions about it.

To your point, we’re way ahead of ourselves in one sense, but do you think that in the end, if you really did have a conscious computer, a conscious machine, does that in some way undermine human rights? In the sense that we think people have these rights by virtue of being conscious and by virtue of being sentient, being able to feel pain? Do you think that if all of a sudden, the refrigerator and everything in your house also made that claim, that we are somehow lessened by it, not that the machines are somehow ennobled by it?

I would hope not. George Church, of Harvard Medical School, said to me, “If you could show me a conscious machine, I wouldn’t be frightened by it. I’d be emboldened by it, I’d be curious about how that thing works, because then I’d be able to understand myself better.”

I was asked just recently by the people who are making “The Handmaid’s Tale,” the TV series based on the Margaret Atwood book, “What do you think AI is going to do for humanity?” Hopefully, one scenario is that it helps us understand ourselves better, because if we are able to create a machine that is conscious, we will have to answer the question, “What is consciousness?” as I said earlier, and when we’ve done that, we will also have unlocked some of the great secrets about ourselves, about our own motivations, about our emotions, why we fight, what’s good for us, what’s bad for us, how to handle depression. We might open a whole new toolbox for actually understanding ourselves better.

One interpretation of it is that actually creating artificial general intelligence is one of the best things that could happen to humanity, because it will help us understand ourselves better, which might help us achieve more and be better human beings.

At the beginning of our chat, you listed a litany of what you saw as the big challenges facing our planet. You mentioned income inequality. So, absent wide-scale redistribution, technology, in a sense, promotes that, doesn’t it?

Microsoft, Google and Facebook between them have generated twelve billionaires, so it’s evidently easier to make a billion dollars now—not for me, but for some people—than it would’ve been twenty years ago, or five hundred years ago for that matter. Do you think that technology in itself, by multiplying the abilities of people and magnifying them ever more, is a root cause of income inequality? Or do you think that comes from somewhere else?

I think income inequality comes from the way our capital markets and our property law work. If you look at democracy, for instance, there are several pillars to it. If you talk to a political philosopher, they’ll say a functioning democracy has several things that need to be working: you need universal suffrage, so everybody gets to vote; you need free and fair elections; you need a free press; you need a judiciary that isn’t influenced by the government, etcetera.

The other thing that’s mentioned but less talked about is working property rights. Working property rights say that you, as a citizen, have the right to own something, whether that’s some property or machinery or an idea, and you are allowed to generate an income from it and profit from it. Now that’s a great idea, and it’s part of entrepreneurship, of going and creating something, but the problem is that once you have a certain amount of property you’ve profited from, you then have more ability to go and buy property from other people.

What’s happening is that property rights, whether they’re intellectual or physical, have concentrated themselves in fewer and fewer hands, because as you get rich, it’s easier to buy other stuff. And I know this from my own experience. I used to be a poor student musician. Now I’m doing pretty well, and I found myself buying some shares in a company that I thought was going to do really well… and it did. And I found myself just thinking, “Wow, that was easy.” It’s easy for me now, because I have more property rights with which to acquire more property rights, and that’s what we’re seeing. There’s a fundamental problem there somewhere, and I am not quite sure how we deal with it.

After World War II, England toyed with incredibly high marginal taxes on unearned income, sometimes over one hundred percent, and I think The Beatles figured they needed to leave. What is your take on that? Did that work? Is that an experiment you would advocate repeating, or what did we learn from it?

I think we’ve learnt that’s a very bad way of doing it. Again, it comes back to how much things cost. If things are expensive and you’re running a state, you need to collect more taxes. We’re having this huge debate in the UK at the moment about the cost of the National Health Service and how you fund it. To go back to our earlier conversation, if you suddenly reduce the cost of energy to very little, actually everything gets cheaper—healthcare, education, building roads.

If you have a whole bunch of machines that can do stuff for you cheaper than humans could do it, in one way that’s really good, because now you can provide healthcare, education, road building, whatever… cheaper. The question is, “How does the job market change then? Where do human beings find value? Do we create these higher-value jobs?” One radical idea that’s come out at the moment is this idea of universal basic income.

The state now has enough money coming in, because the cost of energy has gone down and it can build stuff much more cheaply, so we just get a salary from the state anyway, to follow our dreams. That’s one plausible scenario.

Moving on, I would love to hear more about the book that’s just come out. I’ve read what I could find online, I don’t have a copy of it yet. What made you write We Do Things Differently, and what are you hoping it accomplishes?

So with my first book, which is really an attempt to talk about the cutting-edge of technology and what’s happening with the environment in an entertaining way for the layman, I got to the end of that book and it became very clear to me that we have all the technology that we need to solve the world’s grand challenges, whether that’s the energy price, or climate change, or problems with manufacturing.

We’re not short of technology. If we didn’t invent another thing from tomorrow, we could deal with all the world’s grand challenges, we could distribute wealth better, we could do all the things. But it’s not technology that’s the problem. It’s the administration, it’s the way we organize ourselves, it’s the way our systems have been built, and how they’ve become kind of fossilized in the way they work.

What I wanted to do with this book is look at systems and look at five key human systems—energy, healthcare, food, education and governance—and say, “Is there a way to do these better?” It wasn’t about me saying, “Here’s my idea.” It was about me going around the world and finding people who’ve already done it better and prevailed and say, “What do these people tell us about the future?”

Do they give us a roadmap to and a window on a future that is better run, more sustainable, kinder to everybody, etcetera? And that’s what it is. It’s a collection of stories of people who’ve gone and looked at existing systems, challenged those systems, built something better, and they’ve succeeded and they’ve been there for a while—so you can’t say it was just like a six-month thing. They’re actually prevailing, and it’s those stories in education, healthcare, food, energy and governance.

I think the saddest fact I know, in all the litany of the things you run across: any time food comes up, it jumps to the front of my mind. There are a billion people, more or less—960-something million—who are hungry. You can go to the UN’s website, you can download a spreadsheet, and it lists them out by country.

The sad truth is that seventy-nine percent of hungry people in the world live in nations that are net food exporters. So, the food that’s made inside of the country can be sold on the world market for more than the local people can pay for it. The truth in the modern age is not that you starve to death if you have no food; it is that you starve to death if you have no money. What did you find?

 There’s an even worse fact that I can tell you, which is, the human race wastes between thirty and fifty percent of the food it makes, depending on where you are in the world, before it even reaches the market. It spoils or it rots or it gets wasted or damaged between the field and the supermarket shelf, and this is particularly prevalent in the global south, the hotter countries. And the reason is we simply don’t have enough refrigeration, we don’t have enough cold chains, as they’re called.

So one of the great pillars of civilization, which we kind of take for granted and don’t really think about, is refrigeration and cooling. In the UK, where I am, sixteen percent of our electricity is spent on cooling stuff, and it’s not just food, either; it’s medical tissues and medicines and all that kind of stuff. And if you look at sub-Saharan Africa, it’s disastrous, because the food they are growing, they are not even eating, because it spoils too quickly, because we don’t have a sustainable refrigeration system for them to use. And one of the things I look at in the book is a new sustainable refrigeration system that looks like it could solve that problem.

You also talk about education. What do you advocate there? What are your thoughts and findings?

I try not to advocate anything, because I think that’s generally vainglorious and I’m all about debate and getting people to ask the right questions. What I will do is sort of say, look, this person over here seems to have done something pretty extraordinary. What lessons can we draw from them?

So, I went to see a school in a very, very rough housing estate in Northern England. This is not an urban paradise; this is a tough neighborhood, with lots of violence, drug dealing, etcetera, and low levels of social cohesion, and in the middle of this housing estate there was a school that the government called, I think, the fifth-worst school in the entire UK, and they were about to close it. A guy called Carl turns up as the new headmaster, and two years later it’s considered one of the best schools in the world, and he’s done all that without changing any staff. He took the same staff everybody thought was rubbish, and two years later they’re regarded as some of the best educators in the world.

And the way he did that is not rocket science. It was really about creating a collaborative learning environment. One of the things he said was, “Teachers don’t work in teams anymore. They don’t watch each other teach. They don’t learn about the latest of what’s happening in education; they don’t do that. They kind of become automatized and do their lessons, so I’m going to get them working as a team.”

He also said they had lost any culture of aspiration about what they should be doing, so they were just trying to get to the end of the week, rather than saying, “Let’s create the greatest school in the world.” So he took some very simple management practices, which amounted to: “We’re going to aspire to be the best, we’re going to start working together, and we’re going to start working with our kids.”

And he did the same with the kids, by working with them in the same way and saying, “Look, what’s your aspiration? How are we going to design this together, collectively, as a school—you the students, us the teachers?” Even though they were turning up at this school at four years old, most of them still in nappies, most of them without language even at four, by the time they were leaving they were outperforming the national average, from this very rough working-class estate.

This is actually just good management practice, introduced into a school environment, and it worked very well. I am vastly trivializing the amount of sweat and emotional effort he had to put into that. But, again, talking about teamwork: rather than splitting the world up into subjects, which is what we tend to do in schools, he said, “Let’s pick things that the kids are really interested in, and we’ll teach the subjects along the way, because they’ll all be interrelated with each other.”

I walked into a classroom there and it’s decked out like NASA headquarters, because they picked the theme of space for this term for this particular class. But of course, as they talk about space and astronauts, they learn about the physics, the maths, they learn about communications, they learn about history…

And I said to Carl, “Once they’re given this free environment, how do they feel when exams come along, which is a very constraining environment?” He said, “Oh, they love it.” I’m like, “You’re kidding me!” He said, “No, they can’t wait to prove how much they’ve learnt.”

None of this is rocket science, but it’s really interesting that education is one of those places where, when you try and do anything new, someone is going to try to kill you, because education is autobiography. Everybody’s been through it, and everybody has a very prejudiced view of what it should be like. So for any change, it’s always going to upset somebody.

You made the statement that even if we didn’t invent any new technology, we would know how to solve all of life’s greatest challenges. I would like to challenge that and say, we actually don’t know how to solve the single biggest challenge.

This sounds good.

Death.

Death! That’s an interesting question, whether you view it as a challenge or not.

I think most people, even if they don’t want to live indefinitely, would aspire to the power to choose the moment of their own demise—to live a full life and then choose the terms of their own ending. Do you think death is solvable? Or at least aging?

 I think aging is probably solvable. Again, I am not a high-ranking scientist in this area, but I know a number of them. I was working with the chief scientist at one of our big aging charities recently, and if you look at the research that’s coming out from places like Stanford and Harvard, there’s an incredible roadmap to humans living healthy lives in healthy bodies till one hundred and ten, one hundred and thirty. Stanford have been reversing human aging in certain human cell lines since 2014.

The problem is, of course, it turns out that what’s good for helping humans live longer is also often quite good for promoting cancer. And so that’s the big conundrum we have at the moment. Certainly, we are living longer and healthier anyway. Average life expectancy has been rising a quarter-year for every year, for the last hundred years. Technology is clearly doing something in that direction.

Well, what it seems to be doing is ending premature death. But the number of people who live to be supercentenarians, one hundred and ten and above, is forty, and it doesn’t seem to be going up particularly.

Yeah, I think that’s true. But it depends what you call “premature death,” because the age at which we die is definitely creeping up. And if we can keep ourselves a bit younger, if we can, for instance, find a way to lengthen the telomeres in our cells without encouraging cancer, that’s a really good thing, because most of the diseases we end up dying from are the diseases of aging—cardiovascular disease, stroke, etcetera.

We haven’t solved it yet. You asked me if I think it’s solvable. Like you, I am fairly optimistic about the human race’s ability to finally ask the right questions and then find answers to them. But we still don’t really understand aging well enough to solve it, though I think we’re getting there much faster, I would say, than we are with an artificial general intelligence.

Talk about the “Atlas of the Future” project.

Ah, I love the Atlas. The Atlas is kind of the first instantiation of something from the Democratizing the Future society. What we’re trying to do is say, “Look, if we want the world to progress in a way that’s good for everybody, it needs to involve everybody.” And therefore, you need to be literate about the questions the future is asking you, and not just literate about the threats, which is what we get from the media. The general media will just walk in and go, “It’s all going to be terrible, everyone’s trying to kill you.” They’ll drop that bomb and then just walk away, because that gets your attention.

We are trying to say, “Yeah, all those stories are worth paying attention to, and there are a whole other bunch of stories worth paying attention to, about what we can do with renewables, what we can do to improve healthcare, what we can do to improve social cohesion, what we can do to improve happiness, what we can do to improve nations understanding each other, what we can do to reduce partisan political divides, etcetera.” And we collect all that stuff. So it’s a huge media project.

If you go to “The Atlas of the Future,” you’ll find all these projects of people doing amazing stuff—some of it very big-picture, some of it small-picture. What we’re doing with that content is farming it out via TV series, the books I write, and a podcast—The Futurenauts, which is me and my friend Ed Gillespie—where we talk about the stuff on the Atlas and interview people.

So it’s about a way of creating a culture of the future that’s aspirational, because we kind of feel that, at the moment, we’re being asked to be fearful of the future and run away in the opposite direction. And we’d like to put on the table the idea that the future could be great, and we’d like to run towards that, and get involved in making it.

And then, what’s this third book you are working on?

The third book is just an idea at the moment, but it is about how we get our administration, our government, our bureaucracy to move at something like a similar pace to the pace of ideas and technology, because it seems to me that it’s that friction that causes so many of the problems—that we don’t move forward fast enough. The time it takes to approve a drug is stratospheric, and there are some good reasons for that; I am not against the work the FDA does, but when you’re looking at, sometimes, twelve or thirteen years for a drug to reach the market, that has to be too slow.

And so, if we can get those parts of the human experience—the technology, the philosophy and the bureaucracy—working at roughly the same clock speed, then I think things will be better for everybody. That’s the idea I want to explore in the next book—how we go about doing that. Some of it, I think, will be blockchain technology, some of it might be the use of virtual reality, and there’s a whole bunch of stuff I probably haven’t found out about yet. I’m really just asking that question. If any of your listeners have any ideas about the technologies or approaches or philosophies that will help us solve that, I’d love to hear from them.

You mentioned a TV program earlier. In views of the future, science fiction movies, TV, books, all of that, what do you read or watch that you think, “Huh, that could happen. That is a possible outcome”? What do you think is done really well?

It’s interesting, because I have a sixteen-month old child, and I am trying to write a book and save the world, so I hardly watch anything. I think it’s very difficult to cite fiction as a good source. It’s an inspiration, it’s a question, but it never turns out how we imagine. So I take all those things with a pinch of salt, and just enjoy them for what they are.

I have no idea what the future is going to be like, but I have an idea that it could be great, and I’d like it to be so. And actually, there is no fiction really like that, because if you look at science fiction, generally, it’s dystopian, or it’s about conflict, and there’s a very good reason for that—which is that it’s entertaining. Nobody wants to watch a James Cameron movie where the robots do your gardening. That’s not entertaining to watch. Terminator 3: Gardening Day is nothing that anybody is going to the cinema to see.

I’m in full agreement with that. I authored a book called Infinite Progress, and, unlike you, I have a clearer idea of what I think the future is going to be. And I used to really be bothered by dystopian movies, mainly because I am required to go see them. Because everybody’s like, “Did you see Elysium?” So I have to go see and read everything, because I’m in that space. And it used to bother me, until I read a quote, I think by Frank Herbert—I apologize if it isn’t him—who said, “Sometimes, the job of science fiction is to warn you of something that could happen so that you have your guard up about it,” so you’re like, “A-ha! I’m not going to let that happen.” It kind of lets the cat out of the bag. And so I was able to switch my view by keeping in mind that these are cautionary tales.

I think we also have to adopt that view with the media. The media leads on the stuff that is terrifying, because that will get our attention, and we are programmed as human beings to be cautious first and optimistic second. That makes perfect sense on the African savanna. If one of your tribe goes over the hill without checking for big cats, and gets eaten by a big cat, you’re pretty cynical about hills from that moment on. You’re nervous of them, you approach them carefully. That’s the way we’re kind of programmed to look at the world.

But of course, that kind of pessimism doesn’t move us forward very much. It keeps us where we are, and even worse than that is the cynicism. And of course, cynicism is just obedience to the status quo, so I think you can enjoy the entertainment, and enjoy the dystopia, enjoy us fighting the robots, all that kind of stuff. One thing you do see about all those movies is that eventually, we win, even if we are being attacked by aliens or whatever; we usually prevail. So whilst they are dystopian, there is this yearning amongst us, saying, “Actually, we will prevail, we will get somewhere.” And maybe it will be a rocky ride, but hopefully, we’ll end up in the sunshine.

An Optimist’s Tour of the Future is still available all over the world—I saw it was in, like, nine languages—and you can order that from your local book proprietor. And We Do Things Differently, is that out in the US? When will that be out in the US?

It’s out in the US early next year. We don’t have a publication date yet, but I am told by my lovely publishers that it will be sort of January or February next year. But you can buy the UK edition on Amazon.com and various other online stores, I’m sure.

If people want to follow you and follow what you do and whatnot, what’s the best way to do that? 

My Twitter handle is @Optimistontour. You can learn about me at my website, which is markstevenson.org, and check out “The Futurenauts” podcast at thefuturenauts.com, where we do something similar to this, although with more swearing and nakedness than your podcast. Also, get yourself down to “Atlas of the Future.” I think that would be the central place to go. It’s a great resource for everybody, and it’s not just about me—there’s a whole bunch of forward-thinking people on there. Future heroes. We should probably get you on there at some point, Byron.

I would be delighted. This was an amazing hour! There could be a Mark Stevenson show. It’s every topic under the sun. You’ve got wonderful insights, and thank you so much for taking the time to share them with us. Bye!

 Cheers! Bye!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
