What are the key revolutionary developments that are about to happen or that are happening in artificial intelligence?
Portions of the intelligentsia – typified by Google’s Ray Kurzweil – foresee AI, or Artificial General Intelligence (AGI), bringing good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.
Others, such as Stephen Hawking and Francis Fukuyama, warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species – or at least its relevance on the cosmic stage – a potentiality evoked in many a lurid Hollywood film.
Taking the middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research – and its products – accountable by maximizing transparency.
In fact, the panoply of dangers and opportunities may depend on which of half a dozen paths to AI winds up bearing fruit first. Can AI be designed from scratch, via logic, like IBM’s Watson? In that case we might use “laws,” as Asimov predicted, to try to keep control. But there are five other general approaches, and the lesson when you study them is that “control” just may not be in the cards.
Why are you a proponent of radical transparency, and do you believe that our world is moving in that direction?
A great many modern citizens are rightfully worried about Big Brother. Some fear tyranny coming from snooty academics and faceless government bureaucrats. Others see Orwellian despots arising from conniving aristocrats and faceless corporations. They are all right to worry! Because across 6000 years, only rarely was something other than feudalism or dictatorship tried. Our experiment has been by far the most successful of those exceptions and we should study why.
We did not achieve this by hiding. Those who fret that governments and corporations and the rich will know too much about us — these folks are right to worry, but they reach the wrong conclusion. We will never successfully hide from elites. It never happened and never will. “Encryption” and other romantic fantasies never work for very long. But there is another approach. Not to hide, but to aggressively strip all elites naked enough to supervise them and hold them accountable. We may not be able to stop them from knowing about us. But we can still deter them from doing bad things to us.
That is how we got our current freedom, by answering surveillance with sousveillance, or supervising authority. Proof of this is the spread of video and cell phone cameras on our streets, which are cornering abuses by authorities and year by year making it harder for them to do bad things. It’s not perfect. It never will be. But the Moore’s Law of Cameras – (sometimes called “Brin’s Corollary to Moore’s Law” I’m told) – seems to be providing citizens with a Great Equalizer, even better than the old Colt 45. This runs diametrically opposite to the Hollywood lesson that technology never works in favor of citizenship. It can and it does.
In any event, the spread of cameras – faster, better, cheaper, more mobile, and vastly more numerous – cannot be stopped. If the elites monopolize this light, we will have Big Brother. But if citizens grab the light, then BB hasn’t a chance.
What are your biggest concerns surrounding developments in artificial intelligence, if any?
Anything done in secret is more likely to result in terrible errors. Secrecy is the underlying mistake that makes every innovation go wrong, in Michael Crichton novels and films! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!
Hence, the branch of AI research I fear most is High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all of the top universities combined. Notably, HFT systems are designed in utter secrecy, evading the normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best) and insatiable.
Not only are they a potential disaster waiting to happen… they can only possibly lead to disaster. No other outcome is even remotely plausible.
Why do some people fear AI? Is some amount of caution called for?
We fear that advanced, super-intelligent and powerful entities will do to us what human high achievers always did in the past. They took over our tribes, nations and so on, making themselves kings and lords and priests and tyrants, bossing over us and limiting the potential of those below. Adam Smith wrote that such inherited oligarchies were always far deadlier enemies of creative and competitive, flat-open-fair enterprise than government civil servants can ever be.
Especially because they suppressed criticism, those feudal kings and lords became very, very bad rulers, performing horrifically stupid statecraft while deeming themselves to be so smart. It’s no accident that human civilization only started taking off when we discovered tricks for preventing that failure mode… for keeping things flat-open-fair-competitive.
If new AI minds truly are super intelligent, then they will see what a mistake it would be to emulate that hoary old pattern. What a blunder, if they copy the approach used by puerile-dopey human lords. In my novel EXISTENCE, I explore how AI might choose to take a very different approach from any we have seen in real life or portrayed in films.
Now, if only the new AI overlords will read what I wrote, before deciding…
David Brin is an astrophysicist whose international best-selling novels include The Postman, Earth, and recently Existence. His nonfiction book about the information age – The Transparent Society – won the Freedom of Speech Award of the American Library Association.