The promise of the Internet age is unparalleled access to information of all kinds, but the age has also seen the rise of some powerful gatekeepers that control our access to that information: gatekeepers like Google, Facebook, Apple and even Twitter. These new information overlords have been in the news recently because of their control (or perceived control) over certain information, and the reaction from users has highlighted the tension between the freedom these companies provide and the hoops we have to jump through to get it. How does that alter the way we see the world around us?
Google, for example, has been accused of censorship for removing certain terms from its “auto-complete” and Google Instant search features, including terms related to potential copyright-infringing services such as the file-sharing network The Pirate Bay and torrent search engines like Isohunt — both of which have been the subject of lawsuits and other actions because they point people to infringing files. Google has said in the past that it does this to help media companies combat piracy, and so it excludes terms it believes are “closely associated” with piracy.
Is it really censorship when a search engine removes a reference to such sites from its auto-complete feature? After all, users can still search for those terms and find them in Google’s index quite easily. It’s not as though links to The Pirate Bay have been removed from Google’s index altogether (although that could happen, if Congress passes laws like the Stop Online Piracy Act — which would let rights holders compel search engines to delist sites and have them blocked at the domain-name level — or if legal judgments like the one handed down in Texas this week hold up).
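Mechanically, this kind of filtering is simple: the suggestion service can drop any candidate completion that matches a blocklist before suggestions are shown, while leaving the underlying index untouched. Here is a minimal sketch of the idea — the blocklist, matching rule and function names are hypothetical, not Google’s actual implementation:

```python
# Hypothetical set of terms "closely associated" with piracy.
BLOCKED = {"torrent", "pirate bay"}

def filter_suggestions(candidates):
    """Drop any completion containing a blocked term. The pages
    themselves remain searchable -- only the suggestion disappears."""
    return [c for c in candidates
            if not any(term in c.lower() for term in BLOCKED)]

print(filter_suggestions(["ubuntu torrent", "ubuntu download", "ubuntu themes"]))
# → ['ubuntu download', 'ubuntu themes']
```

The point of the sketch is that nothing is deleted from the index; a user who types the full query still gets results, which is exactly why people disagree over whether suppressing the suggestion counts as censorship.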
Is it censorship to exclude certain terms?
I raised this question on Twitter after a report in TorrentFreak about Google’s actions, and several people — including sociologist Zeynep Tufekci — said it’s a form of censorship, or at the very least a kind of “algorithmic gate-keeping.” While many people may not use auto-complete, others do, and the argument is that their experience will be degraded, even if only slightly, by this filtering. Tufekci said these small changes can affect the way people process information, in subtle but important ways.
@mathewi It is important to discuss the emerging power of algorithms as new gatekeepers. Not as bad as old media, sure, but still powerful.
— Zeynep Tufekci (@zeynep) November 24, 2011
Google is an old hand at this kind of thing: it has dominated the search market for at least the past half-decade, to the point where it is being investigated by the federal government over potential antitrust violations. Critics claim it deliberately removes terms from its search results, or highlights others that promote its own products, and argue the company should be forced to abide by some kind of legislated “search neutrality,” similar to the telecom-industry principle of net neutrality. But does Google really have a duty to provide unfiltered results? And is there a societal downside when it doesn’t?
Twitter has come under fire for something similar, or at least the perception of something similar: Advocates of the “Occupy” protest movement have complained bitterly over the past few weeks that the network is excluding terms related to the movement from its trending topics list. Some users may never look at this list, but appearing on it has come to be seen by many as a badge of honor. During the recent removal of Occupy camps in Los Angeles and New York City, there were repeated accusations that Twitter was censoring those terms from its trending list.
Twitter says its algorithm is responsible
Twitter has said a number of times that it doesn’t filter trending topics to remove specific terms (although it does remove offensive words and phrases). Instead, the trending algorithm looks for short-term spikes in activity, and that tends to exclude terms that are being used a lot over a longer period of time. In an email message to me, Twitter spokeswoman Carolyn Penner said:
Trending topics are based on an algorithm that looks at spikes. Trends surface the fastest rising popular topics, or the hottest hot topics. They are not curated. Bottom line — we aren’t censoring #occupy terms.
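Twitter hasn’t published its trending algorithm, but its explanation — spikes, not sustained popularity — can be illustrated with a rough sketch. A spike detector might compare a term’s usage in the most recent window to its longer-term baseline; the function name, window sizes and threshold below are all hypothetical:

```python
def is_trending(hourly_counts, spike_ratio=3.0):
    """Flag a term whose usage in the latest hour is a large multiple
    of its recent average -- a sudden burst, not sustained volume."""
    *history, latest = hourly_counts
    baseline = sum(history) / len(history)
    # A term used heavily all week has a high baseline, so even heavy
    # current use fails the ratio test; a sudden burst passes it.
    return latest > spike_ratio * max(baseline, 1)

print(is_trending([2, 1, 3, 90]))     # sudden burst → True
print(is_trending([80, 85, 90, 95]))  # sustained heavy use → False
```

This is why, under Twitter’s stated logic, a hashtag like #occupy that is used heavily for weeks on end can stop trending even while its raw volume stays high: its baseline rises with it.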
Even Apple has been criticized for gatekeeping of a sort, with accusations this week that Siri — the voice-activated assistant built into Apple’s latest iPhone — is deliberately refusing to provide users with information about abortion clinics. The company has said this is a glitch in the software, not a deliberate choice to exclude certain information, but the uproar over the incident speaks to a larger concern about Apple’s control over what its users do.
If you control the platform, you control the information flow
As Harvard law professor Jonathan Zittrain argues in a recent piece for MIT’s Technology Review, the company has an almost unprecedented level of control over what users do with its devices — and potentially even over what information they can access and how — because it controls the platform from end to end. What if Apple decided, or was forced by law, to prevent users from going to certain sites in Safari? Or from asking Siri to search for certain terms, such as The Pirate Bay? That may seem far-fetched, but it isn’t. Google could be forced to do the same kinds of things in Chrome.
Of course, people don’t have to use Google, or Twitter, or Apple, or Facebook. They are free to use other search engines and information networks, and many do — but the vast majority of people do not. As research has repeatedly shown, most people use defaults because they are easier (which is why search deals like the ones Firefox signs are so valuable). And that has the potential to erect barriers to free information flow, even if most users don’t realize they exist.
What are the potential ramifications of that for society, as more and more people access the Internet through proprietary platforms and devices, or become “locked in” psychologically to certain services? How are algorithms changing the way that we perceive the world around us? We are only just beginning to find out.