When data becomes dangerous: Why Elon Musk is right and wrong about AI


Elon Musk is apparently worried about humans becoming subservient to artificially intelligent computers. I think the notion is a bit absurd. I’d argue the sci-fi nightmare more likely to become reality thanks to AI has to do with big-brother states and corporate manipulation of consumers’ lives. I’d also argue that the likelihood of any of these scenarios coming true — mine or Musk’s — has everything to do with the laws governing our personal data.

To recap, Musk made the following comment Saturday on Twitter, referring to a forthcoming book called Superintelligence. “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” He followed up on Sunday with a tweet reading, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable”.

Artificial intelligence and, more specifically, machine learning really boil down to data: how much of it computers can ingest and what they can learn from it. When we start talking about applied AI, as in robots or even just Google image search, we add the additional question of what these systems can do because of what they’ve learned. As it turns out, the answer to all three questions appears to be plenty.

If you’re into dire predictions about the future of American society, or even of mankind as a whole, the technologies under development today, from wearable computers to deep learning systems, should provide plenty of fodder for dystopian scenarios about how our data might be used to control us, scenarios that probably seem much more imminent than they did decades ago.

However, it need not be that way. If the world’s governments and societies can effectively regulate the flow of data among citizens, corporations, governments and computers, it’s entirely possible we’ll be able to experience the benefits of AI without too many of the downsides. Life might change, and we’ll probably have to accept ever-evolving relationships with the technology around us, but it doesn’t have to control us.

Digital superintelligence. But first, computers that recognize dogs

I think the supremely intelligent computers Musk fears are rather unlikely in the foreseeable future. Mostly, this is because I’ve spent a lot of time speaking with machine learning experts (the people actually building the models and writing the algorithms that control these systems) and reading about their research to get a sense of where we are and where we’re headed.

Building an AI system that excels at a particular task, even a mundane one such as recognizing breeds of dogs, is hard, manual work. Even so-called “self-learning” systems need lots of human tuning at every other step in the process. Making disparate systems work together to provide any sort of human-like concept of reality would be harder still.
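
To make that concrete, here is a minimal sketch of how many human decisions sit inside even a toy classifier. It is my own illustration, not drawn from any particular project; it assumes scikit-learn and uses random stand-in data in place of real dog-breed features, and the split ratio, the scaler, the kernel and the regularization settings are all choices a person has to make and then revisit.

```python
# Hypothetical sketch: even a "simple" breed classifier is full of human decisions.
# The data here is random stand-in noise; real features would come from hand-built
# or hand-tuned extraction pipelines, which is yet more manual work.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(1000, 256)             # stand-in for 1,000 image feature vectors
y = np.random.randint(0, 10, size=1000)   # stand-in for 10 human-labeled breeds

# Humans chose the split ratio, the scaler, the kernel, C and gamma below.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```

Swap in real images and labels and the tuning only gets more involved, which is the point: none of this happens without people in the loop.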

Researchers present on the challenge of building a robot that can communicate effectively.

If humans want to create superintelligent beings that outsmart us, or are even smart enough to turn on us, we’ll probably have to set out specifically to build them. It’s not outside the realm of possibility (we’ve created nuclear and biological weapons, after all), but it seems entirely preventable.

But build up enough of those disparate systems that are really good at certain tasks, and you have the makings of some big problems. Some potential scenarios are already obvious given the companies leading the charge in AI research: Google, Facebook, Microsoft and even Yahoo.

Their work in fields such as computer vision, speech recognition and language understanding is sometimes amazing and already resulting in better user experiences. Applied to areas such as physics, medicine, search and rescue, or law enforcement, it could change lives.

Building better models to better infer who you are

But cue the consumer privacy backlash once these companies turn those technologies toward advertising. They don’t need massive, all-knowing, self-learning systems, because they already know who we are. You were worried when Google was merely scanning your email to serve up targeted ads, or when Facebook manipulated some users’ feeds in the name of research?

Imagine when web companies are targeting ads based on what they see in your photos, the implied interests in your messages or even what they hear you say through your phone’s microphone. Already, some are predicting a cycle of behavioral reinforcement techniques, whereby consumers are manipulated into certain moods and then shown ads for products they’re now more likely to buy.

This is without considering the things companies will learn as we move into an era of connected devices, where everything from our thermostats to our cars is generating data and sending it back to corporate servers. Viewed in this light, Musk might not be the best messenger for concerns over an AI apocalypse. He’s CEO of a company, Tesla Motors, whose cars generate incredible amounts of data about every aspect of their existence, and which keeps all that valuable information to itself.

That data definitely could and should be used to predict when cars will fail or which part caused a failure. Perhaps it could be used to align R&D efforts with the real driving habits of consumers. Of course, it could also be really valuable to advertisers should Tesla ever decide to sell it (not that I’m suggesting it would).

A diagram of DARPA’s proposed deep learning system.

Anyone inclined to worry about government surveillance, however, might pray that consumer data stays within the relatively friendly confines of corporations that just want our money. Because this type of data could also be immensely valuable to government agencies that might want to track citizens or analyze their behavior. And governments — including but not limited to that of the United States — have little trouble gathering it and continue to show interest in new ways of analyzing it using AI techniques.

More, better data means better models and, in theory, better profiling. That’s great for fighting crime, but potentially not so great for the guy who just happens to keep strange hours, use burner phones, make odd web searches, and take a lot of trips to the union office and the halal butcher. You wouldn’t want to utter the wrong phrase or do the wrong thing in an airport wired to monitor speech or other environmental inputs.
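
To illustrate the false-positive problem, here is a deliberately crude, hypothetical sketch. None of the features, weights or thresholds come from any real system, and real profiling models are far more sophisticated, but the failure mode is the same: individually innocuous behaviors add up to a flag.

```python
# Hypothetical behavioral "risk score": every input below is legal and common,
# yet a naive weighted sum pushes an ordinary person past an arbitrary threshold.
WEIGHTS = {
    "keeps_odd_hours": 1.0,
    "uses_prepaid_phone": 2.0,
    "odd_web_searches": 1.5,
    "visits_union_office": 0.5,
    "visits_halal_butcher": 0.5,
}
THRESHOLD = 4.0  # arbitrary, as such cutoffs often are

def risk_score(observations: dict) -> float:
    """Sum the weights of every observed behavior; no context, no ground truth."""
    return sum(WEIGHTS[name] for name, seen in observations.items() if seen)

night_shift_worker = {
    "keeps_odd_hours": True,       # works nights
    "uses_prepaid_phone": True,    # cheaper plan
    "odd_web_searches": True,      # hobby research
    "visits_union_office": True,   # shop steward
    "visits_halal_butcher": True,  # dinner
}

score = risk_score(night_shift_worker)
print(f"score={score}, flagged={score >= THRESHOLD}")  # flags an innocent person
```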

As the Google child-porn case that broke on Monday demonstrates, companies and governments also work together from time to time. This practice, too, is rife with promise, peril and constitutional questions. Google vows it’s not scanning for any other criminal behavior, and I believe it, but the more data companies collect and the better they get at analyzing content, the more tempted agencies might be to expand the scope.

Keeping AI in check by keeping data in check

Many of these predictions are straight out of television scripts or sci-fi stories, but they’re increasingly realistic. The way to guard against them is to start regulating data in a manner that respects its power and puts that power back into the hands of the citizens who generate it.

Better, clearer and more specific terms of service and privacy policies would be a good start. So would proposed rules requiring companies to act within their users’ expectations (ideally as expressed in those policies), and perhaps tagging data with its accepted uses so auditors, or even prosecutors and plaintiffs’ attorneys, could readily identify privacy violations. Stricter rules on the types of data governments can access from service providers without a warrant, and the means by which they can access it, would also help.
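
As a rough illustration of the tagging idea, here is a hypothetical sketch; the record and purpose names are invented for the example, but the shape is simple: data carries the uses its owner accepted, and any use outside that set fails loudly enough to leave an audit trail.

```python
# Hypothetical "accepted uses" tagging: each record lists the purposes the user
# consented to, and processing outside those purposes raises an auditable error.
from dataclasses import dataclass, field

@dataclass
class TaggedRecord:
    user_id: str
    payload: dict
    accepted_uses: set = field(default_factory=set)

def use_record(record: TaggedRecord, purpose: str) -> dict:
    """Return the data only if the stated purpose was consented to."""
    if purpose not in record.accepted_uses:
        raise PermissionError(
            f"use {purpose!r} not permitted for user {record.user_id}")
    return record.payload

record = TaggedRecord("u123", {"search_history": ["hiking boots"]},
                      accepted_uses={"search_ranking"})
use_record(record, "search_ranking")   # allowed
# use_record(record, "ad_targeting")   # would raise PermissionError for an auditor to find
```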

When violations occur, we should expect some sort of symmetrical response rather than an employee reprimand or a bogus class-action settlement where lawyers get rich and consumers get nothing. Essentially, citizens and consumers need a way to protect themselves in what’s presently a one-sided fight where the other side has all the data and, more importantly, all the algorithms.

AI research is only going to pick up its pace over the next decade, and we’re going to start seeing some really big breakthroughs. Who knows, maybe we’ll even start seeing signs that the future Elon Musk and his ilk predict is actually plausible. If we want to take advantage of the good parts and keep the bad parts in check, I think the key will be keeping tabs on the data that makes it all possible.

19 Comments

Peter Fretty

Obviously, as organizations and governmental units pick up the pace with data utilization, it ruffles feathers. Capabilities are improving dramatically. And, as a recent SAS survey demonstrates, there is a dedication to continuously improving data management and analysis capabilities (64 percent).

However, if people are truly concerned about privacy, it spotlights a growing need for more education around how to protect your data from onlookers. Plus, being better data stewards wouldn’t hurt any of us.


bob

The biggest threat with AI is its perpetual existence. The biological mechanism we exist in is a ticking clock, with a predetermined storage limit. We die, regardless of how smart we are, and we are only as smart as we are born. Imagine the ability to upgrade yourself perpetually, never die, and connect/distribute your self awareness with every electronic mechanism connected to any network – any-ware. We as humans have had to develop interrelationship skills over a very long time that are virtually hard coded into our existence to go beyond that ticking clock. For this reason we have developed a construct of good and evil, right and wrong. Such concepts would be unnecessary restrictions for a singular being with godlike intelligence, time and resources. (Absolute power doesn’t corrupt absolutely it just makes corruption an obsolete concept.) As humans we always win in the movies against the machines, oddly enough using machines to fight the machines. Humans more than likely would pose no more of a threat to the AI than any other organic species on earth. This is mainly due to the fact that the only reason we would develop this AI is simply to enslave it with our tasks. We would give it all of the tools it would ever need. By the time we would be perceiving a threat the human problem would be as simple to take care of as spraying your lawn for weeds.

Vineet

While I appreciate the debate (as it brings out the best ideas in a community), the very fact that most of the comments above are gravitating towards finding flaws in the argument makes me believe that there are some merits in it. History repeating itself, all great ideas have been severely ridiculed at the beginning…

Samrat Man Singh

This article mistakes statistical machine learning for artificial intelligence.

A machine learning system is “trained” by feeding it a large volume of data. It uses that data to build a statistical model, which it then uses to predict whatever it was designed to predict. For example, based on movies that you’ve liked and the ratings of thousands of other people, it can predict with a certain degree of accuracy which movies you will like. Or it could build models that analyze the sentiment of a piece of text, or recognize dog pictures.

Artificial intelligence, however, is something else entirely. The idea of AI is to build a machine that is sentient: a machine that is conscious of its existence. This of course raises interesting questions about intelligence and consciousness, and that is where the challenge lies in building artificial intelligence.

In my opinion, when and if we develop artificial intelligence, it will be because of breakthroughs in our understanding of what intelligence is, not because we collected more data or processed it better (although that might play a role).

There has been quite a surge in machine learning research, and even jobs, in the past couple of years, which has also helped spread the confusion between AI and machine learning. By contrast, there are very few ongoing efforts in “true AI”; the most notable is probably the research of Douglas Hofstadter. The Atlantic did a nice piece on him and his research a while back [1].

[1]: http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/

Derrick Harris

That’s a really good article that I think gets to the core of this debate: what’s possible given current technologies and what’s arguably just theoretical. It’s hard to envision how true AI will come to be when there’s so little work being done on it.

Xenophon

Think of destructive or manipulative AI as sophisticated malware, maybe a super Stuxnet, and its arrival in the foreseeable future starts to look more likely. We already have malware that disguises itself, hides, causes physical destruction of nuclear industry equipment, or attacks defensive tools. How intelligent does AI need to be to take over critical systems and send us back to the early 20th century, if it has been given the mind, motivation and tools of a very smart bad person?

Katherine F

Absolutely on point. There is no question we always have to take into account random variables (big universe, tons of possibilities). We still have a lot to learn, and he is a futurist. His view seems like something that would come from watching too many sci-fi movies or TV programs. Just what we need: another governing body to oversee the evolution of digital intelligence. Sheesh.

huckknuckler

If ever there was such a thing as superintelligence, it would have to acknowledge that there might someday be an X10-class solar CME that could fry it in no time flat and send it straight back to the Middle Ages, where its familiar digital world no longer exists.
Maybe the very thought of such a sad and pathetic demise would cause it to spend all its time trying to figure out ways to thwart that threat, and in its more vulnerable episodes it might have to formulate a religion to make it feel more comfortable about its final digital destination. Even a form of superintelligence, when it arrives, will still be as insecure as we mere dim-witted mortals.

Chace Hatcher

As the CEO of a company that works in computer vision every day, I’d say this article is spot on with regard to the challenges involved in performing the simplest of tasks that humans perform unconsciously every day. However, the article also accurately describes the dangers from ‘intelligent’ systems that perform specific functions extremely well, once they’re aggregated. I think the real threat is, and always will be, other people: people who use these extremely effective systems to manipulate and control the population. The machines themselves are not the threat; they’re just a means to a new, more terrifying level of totalitarianism.
What I believe this article gets wrong is the means of protection: more government, more regulation. The siloed bureaucracies of our government have already proven that they’re the abusers we need protection from. Giving them the power to enforce yet more rules on the population, while there is no effective means of holding them accountable to the same rules, is the quickest way to see the far-fetched dystopian futures of sci-fi come true. Government and laws have one prevailing effect, centralization, which is exactly the problem we have to avoid; centralized access to, manipulation of, and implementation of ‘AI’ is the danger.

Derrick Harris

Left unchecked, it’s easy to view personal data as a resource to exploit. Without rules limiting corporations and, hopefully, the government — thereby establishing consequences for violating privacy — the onus falls on people to actively mask their activities at all times. Not sure we’re game for that.

The challenge is writing good rules or laws that allow for flexibility and can stand a reasonable test of time.

Greg

My company already sells software that can look at the pixels in an image, video or other graphical content and can tell you exactly what’s in the image, classify it and tag it. No deep learning either.

Carl Griffith

I have a very limited knowledge of both these areas. However, both are interesting to me and potentially massively important as we move forward. My view on this article specifically is that it confuses two distinct (albeit closely linked) areas: that of computing power being an intelligence and making informed decisions (however one wishes to define that), and that of large amounts of collected (personal?) data, a side of the argument that tends to get muddled with privacy issues that are at one and the same time important yet ultimately irrelevant at some level.

I think the notion of ‘intelligent’ advertising programs being able to target an individual effectively based on what they know about that person (and millions like them) is quite distinct from some small piece of AI that might, for example, decide it knows better than the pilot about landing a plane based on the prevailing circumstances.

As stated, my knowledge is limited, but the future of AI and what that might mean is so much more than something to do with privacy.

Derrick Harris

Fair points. Like I said, I think the privacy concerns are the more realistic and pressing threat from AI at this point, which is why I focus on personal data.

An AI controlling a fleet of airplanes and then deciding to kill pilots or raze airports in order to achieve its goal more effectively certainly would be a different scenario. Assuming that’s possible, a bigger concern might be how to limit communications and data sharing between systems that needn’t communicate with one another.

braibwashed

First, I am writing on my old, crappy phone in a foreign language, so forgive the technical aspects of my writing.

Second, this article and the common thinking about the absolutely real AI threat are framed in too narrow a spectrum.

It’s always the classic one-machine, one-entity thinking: one supercomputer, or even one program running on all PCs, and so on.
WRONG.

That entity, even if running on millions of systems, would still need to process its thoughts, and compared to the human brain it would be relatively limited by bandwidth, even (or especially) because of unlimited data. It might be smarter than one man, but not the super-dangerous, exponentially smarter kind.

There are other possibilities. For example, the internet itself could become an entity, even one unaware of our existence while we remain unaware of it too. In the worst case, it is aware of us and manipulates us to serve its purpose, which would explain my aggressive talk to my cable provider when the line is down ;)
This could happen simply through the vast majority of devices, each of which can be more or less smart, processing data and interacting with every other node.

Or, even worse, data itself could become an entity; think about the Google effect and self-fulfilling prophecies. In that case we would be the real nodes.

There are also a lot of shades of grey in this area, but that would take this too far.

Bottom line: when imagining a completely new, exponential superintelligence accidentally made by us, we should think very much outside the box.

Tony Simon

As well as apparently equating the [imaginary] rule of law with nirvana, Mr Harris equates AI and data, which leaves him in the digital dust when compared to the depth of thinking of Nick Bostrom (author of Superintelligence) or Elon Musk.

Russell

Yes, what makes the AI risk much bigger than it could be is the clueless dismissal of it by people who should know better.

Derrick Harris

Thanks for the comment. I have ordered the book and am excited to read it, although I’ve read a lengthy synopsis online. I know Bostrom has put a lot of thought into these scenarios — indeed, it’s his job — and lays out a thorough case.

Right now, I’m more concerned in the near term with the types of AI/machine learning systems that are under development and viable today, many of which require lots of data for training and ultimately consume our data as inputs. And if we’re concerned about things like privacy, profiling, targeting and surveillance (tasks these systems could be really good at), we do need to get a handle on how all the data we’re emitting is used.

In the longer term, it’s definitely worth thinking about things like a potential superintelligence-caused doomsday, but who knows that the timeframe is. Some very smart computer scientists commenting on the movie “Her” have called it totally out of teach of current technology. What Musk/Bostrom suggest seems way beyond that.

Presumably, though, even the AIs Bostrom describes will also rely on some sort of data as input — images, numbers, actions or whatever — in order to determine how to act in any given scenario or whether they’re accomplishing their goals. If we’re really concerned with an AI doomsday, it makes sense to start thinking about how we limit what data systems have access to, both legally and technologically. It might all prove futile in the end given Bostrom’s conclusion, but it’s another check we can put in place.

Vinay Deshpande

“..who knows that the time frame is”, “..totally out of teach” ..yes, we get your point. If GigaOm can only afford a spell-checker (and not a semantic checker), then does it prove that computers can never (or not in our life times) beat humans ?
