The potential benefits of “big data” have been well described, both by us and others: the ability to spot flu trends earlier and potentially save lives, for example, or to make it easier for companies to provide services in a more personalized way. But these same tools could also be used for more disturbing purposes that smack of Orwell’s Big Brother, and two prominent digital skeptics — Nicholas Carr and Evgeny Morozov — recently raised warning flags about that prospect. Which kind of future will we get?
Carr looked at a recent speech from PayPal (s ebay) co-founder Max Levchin at the DLD conference in Germany (one Om also attended, where he conducted an in-depth interview with Levchin) and clearly didn’t like what he saw. Levchin’s view of people, according to Carr, is that they are underutilized resources, and that sensors and real-time data can be used to improve that utilization in much the same way that programmers optimize the clock cycles of a microprocessor. To take one example, Levchin said:
“How about dynamic pricing for brain cycles? We have been maximizing utilization of very high-value, very low-frequency specialists — today you can already rent the brain of a data-mining genius via Kaggle by the hour, tomorrow by brain-hour. Just like the SETI@home screensaver ‘steals’ CPU cycles to sift through cosmic radio noise for alien voices, your brain plug firmware will earn you a little extra cash while you sleep, by being remotely programmed to solve hard problems.”
More efficient for users, or just creepy?
If you are a geek, this might sound like something with a lot of potential, but Carr describes it as “Clay Shirky’s ‘cognitive surplus’ idea taken to its logical, fascistic extreme.” Levchin goes on to paint a picture of a future in which his insurance company learns — via sensors in his car — that he is taking his children to work, and boosts his insurance premium by a few dollars for the extra risk (Note: we’ll be talking more about the potential of big data at our Structure:Data conference in New York).
Levchin no doubt sees this as efficient, but Carr sees the looming shadow of Big Brother: What if those same sensors detected that you were overweight, or had eaten too much pizza, he asks — would they report that to your insurance company? Maybe the company would boost your rates a little, or maybe you would be “scheduled for a brief re-education session down at the local office of the Bureau for Internal Resource Optimization.” As he puts it:
“This is the nightmare world of Big Data, where the moment-by-moment behavior of human beings — analog resources — is tracked by sensors and engineered by central authorities to create optimal statistical outcomes. We might dismiss it as a warped science fiction fantasy if it weren’t also the utopian dream of the Max Levchins of the world. They have lots of money and they smell even more.”
In a recent piece for Slate, Carr’s fellow digital skeptic Evgeny Morozov looked at the potential implications of banks and other credit-issuing agencies using big data to determine who deserves a loan. Although he says the idea of big data is “mostly big hype,” Morozov talks about several companies that are trying to use data from all kinds of sources — including social networks such as Facebook (s fb) and Twitter — to figure out who is credit-worthy.
Hong Kong-based Lenddo and U.S.-based LendUp look at an applicant’s connections on Facebook and Twitter, Morozov says, and “the key to getting a successful loan is having a handful of highly trusted individuals in your social networks.” A British payday-loan company called Wonga even considers the time of day and how a user clicks around a website in order to determine whether they deserve a loan (although Morozov doesn’t mention it, PayPal uses similar methods to gauge credit-worthiness).
The key is who controls the use of the information
Morozov also mentions ZestFinance, founded by former Google chief information officer Doug Merrill (whom we hosted at our Structure:Data conference in New York last year), which looks at more than 70,000 signals and 10 different models to assess credit risk. And he draws a direct link between this and Big Brother, saying: “If only East Germany’s Stasi — the true pioneers of ‘big data’ — had the same model for assessing potential dissidents!”
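To make the mechanics concrete, here is a minimal sketch of how many weak behavioral signals can be combined into a single credit-risk estimate with a logistic model. This is not ZestFinance’s, Wonga’s or Lenddo’s actual method — the feature names, weights, and approval threshold below are invented purely for illustration:

```python
import math

def risk_score(signals, weights, bias=0.0):
    """Combine weighted signals into a 0-1 default-risk estimate
    via the logistic (sigmoid) function."""
    z = bias + sum(weights[name] * value for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented example weights: a positive weight raises estimated risk.
# Real systems reportedly weigh tens of thousands of such features,
# each only weakly predictive on its own.
weights = {
    "applied_after_midnight": 0.8,    # time-of-day signal (Wonga-style)
    "trusted_social_contacts": -0.3,  # social-graph signal (Lenddo-style)
    "rapid_form_clicks": 0.5,         # click-behavior signal
}

applicant = {
    "applied_after_midnight": 1,
    "trusted_social_contacts": 4,
    "rapid_form_clicks": 0,
}

score = risk_score(applicant, weights)
decision = "decline" if score > 0.5 else "approve"
```

The unsettling part, as both skeptics note, is not the arithmetic — it is that the inputs (when you clicked, whom you know) are gathered and weighted without the applicant ever seeing the model.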
Despite that comment, Morozov ultimately (and somewhat surprisingly) seems less worried that these companies will turn people in to the government or their insurance company than that they will use all this information to market things to people who don’t need them:
“What happens once these firms, having figured out that all data are credit data, realize that all data are also marketing data? Given how much they know about their clients, it would be very hard for such lending companies not to use this information to sell their existing customers on yet another loan or, perhaps, encourage them to use the loan to take advantage of some unique online sales offer.”
The common thread in both of these dystopian visions is a world in which our data is transmitted without our knowledge, and/or used against us in some way. Where Levchin seems to see an efficient exchange of data between user and service, one with benefits for both — and presumably a level (and secure) playing field in terms of who has access to it — Carr and Morozov see companies and governments misusing this data for their own nefarious purposes, while we remain powerless.
What makes it difficult to argue with either one is that we’ve already seen the building blocks of this potential future emerge, whether it’s Facebook playing fast and loose with the privacy settings of a billion people, or companies aggregating information and creating profiles of us and our activities and desires. What happens when the sensor-filled future that Levchin imagines becomes a reality? Who will be in control of all that information?