Blog

Falling in the middle of the debate on AI is still an uncomfortable place

As the inexorable rise of AI and robots proceeds, the division is growing between those who embrace cognitive computing and Intelligence as a Service as an inevitable good and those who believe it may prove an inescapable end point for humanity.

Thomas Davenport recently wrote a narrowly focused piece about algorithmic financial advice and the lower fees that companies offering it will charge users. The likelihood that the ranks of human advisors will be drastically thinned came up:

I interviewed an investment advisor for one of these three self-directed firms, and he said he hears the robotic footsteps:

“Our advice to clients isn’t fully automated yet, but it’s feeling more and more robotic. My comments to clients are increasingly supposed to follow a script, and we are strongly encouraged to move clients into the use of these online tools. I am thinking that over time they will phase us out altogether,” he worries.

If you’re an advisor for one of these firms, the future is now.

And the near future might see a large proportion of human advisors seeking careers elsewhere, if there is an elsewhere left.

Kevin Kelly is unabashedly positive about the advent of Intelligence as a Service, writing in The Three Breakthroughs That Have Finally Unleashed AI on the World that ‘AI has attracted more than $17 billion in investments since 2009’, and that ‘more than $2 billion was invested in 322 companies with AI-like technology’ in 2013 alone. Investment by Facebook, Google, Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter, whether through hiring researchers or buying companies, is growing steeply. Kelly points out that ‘private investment in the AI sector has been expanding 62 percent a year on average for the past four years, a rate that is expected to continue’. Of course, the mere fact that world-beating companies are investing billions does not prove that nothing but sunshine and flowers lies ahead.

He counters the fear at the other end of the spectrum, that of the killer AI à la 2001 or Terminator:

Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.

I don’t doubt that a great deal of money can be made by providing new products and services based on ‘take X and add AI’, and I agree with Kelly’s reasoning as to why that’s possible now: cheap parallel computation, big data, and better algorithms. But Kelly never gets out ahead of the social issues, such as the potential for huge job losses, or explains why exactly brilliant AI might not gain consciousness and start acting out in unforeseen ways.

Nick Bilton was recently scared to the core by reading Nick Bostrom’s Superintelligence, and he poses the obvious counter to the idea that we will build in safeguards against rogue AIs deciding that the solution to traffic jams is fewer people:

Let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

Bilton is in good company. As he pointed out, Elon Musk recently said that AI is ‘potentially more dangerous than nukes’, and Stephen Hawking says that superintelligent AI ‘would be the biggest event in human history. Unfortunately, it might also be the last.’

Yes, nanobots will soon be designed to clean plaque from our teeth or repair fraying neurons in our brains, but Bostrom points out that

A person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.

Or just a second-rate scientist with access to a first-rate toolset might program the grey goo to do something unplanned and unpleasant.

This is clearly a major debate in society, and even if the result isn’t the shiny, shiny utopia envisioned by Kelly, or the burnt-earth dystopia that Bilton and Bostrom worry about, we are still likely to be stuck in the middle: living in a world where near- or past-human-grade intelligence can be rented for pennies on the dollar. And that leads to a world in which businesses still make things, and financial institutions still give advice, but there may be far fewer people with money to buy them, because far fewer people will be working in those companies.

As I wrote in the August Pew Research Center report, AI, Robotics, and the Future of Jobs:

The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?

Let’s focus on answering that, rather than scaring ourselves silly with AI boogeymen or surrendering to the techno-triumphalism of those who see no downside over the horizon.