One of the fears about the explosion of information online is that users might become more narrow in their interests — either because they are overwhelmed by the sheer number of content choices facing them, or because personalization filters will wind up catering to their existing preconceptions. In a recent post on the topic, Cornell University communications professor Tarleton Gillespie makes exactly that kind of criticism of Twitter’s algorithms, and specifically its “trending topic” filters. Handing over more of our information consumption to companies like Twitter may make our lives easier, but does it also make them narrower — and if so, what do we do about it?
Gillespie’s argument starts with a discussion of how various “Occupy Wall Street” topics failed to trend on Twitter, despite the frenzy of activity around those subjects during the demonstrations in New York and elsewhere. Repeated accusations and conspiracy theories held that Twitter was somehow censoring the topic of Occupy Wall Street and any related hashtags — even after the company explained its trending-topic algorithm, which looks not at the raw volume of tweets about a term, but at how sharply activity around that term changes over time. In other words, a sustained level of high activity won’t necessarily trip the filters, while a sudden spike will.
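Twitter has never published the algorithm itself, but the velocity idea the company described can be sketched roughly as follows. This is an illustrative toy, not Twitter’s actual code; the function name, window size, and thresholds are all hypothetical:

```python
def is_trending(hourly_counts, window=3, threshold=3.0, min_count=100):
    """Hypothetical velocity-based trend check (illustrative only).

    A term "trends" when its recent activity sharply exceeds its own
    historical baseline -- so a term that is always busy, like a
    long-running protest hashtag, never trips the filter.
    """
    if len(hourly_counts) <= window:
        return False
    recent = hourly_counts[-window:]           # most recent hours
    baseline = hourly_counts[:-window]         # the term's own history
    recent_rate = sum(recent) / window
    baseline_rate = sum(baseline) / len(baseline)
    if recent_rate < min_count:                # ignore tiny absolute volumes
        return False
    if baseline_rate == 0:                     # brand-new term: any spike trends
        return True
    return recent_rate / baseline_rate >= threshold

# A sudden spike trends...
spike = [10, 12, 11, 9, 10, 400, 500, 450]
# ...while sustained high volume does not, even though its totals are larger.
steady = [500, 480, 510, 495, 505, 500, 490, 510]
```

Under this toy model, `is_trending(spike)` returns True while `is_trending(steady)` returns False — which is exactly the behavior that led Occupy supporters to suspect censorship.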
Like Google’s search algorithm, Twitter is a black box
Despite this defence of its practices, however, Gillespie says it’s not surprising that some might be suspicious of Twitter’s approach, since so little is known about how it actually works. Just like Google and its search algorithm — which has also come under fire from critics for favoring certain kinds of results — Twitter can’t say too much about its system, both for competitive reasons and because the details could help those who want to game it for their own purposes:
Everyone from spammers to marketers to activists to 4chan tricksters to narcissists might want to optimize their tweets and hashtags so as to Trend. This opacity makes the Trends results, and their criteria, deeply and fundamentally open to interpretation and suspicion.
Gillespie also notes that Twitter’s trending topics are more than just an attempt to show users relevant information — they are one of the keys to the company’s commercial success as well, as it tries to promote its value to advertisers. I raised a similar point when Twitter first launched its “promoted trends” feature, which adds a commercial message to the trending topics. The company says that once added, these branded topics are governed by the same criteria as any other trending topic, but Gillespie’s point is that users have no way of actually knowing that.
The idea of a “filter bubble” comes from a book of the same name by author Eli Pariser, who argues that the rise of personalization algorithms developed by providers such as Google and Amazon is a double-edged sword. While these tools — including new variations such as Google’s “Search Plus Your World” — can save time and make information consumption more manageable, Pariser says they can also wind up trapping users in a bubble where their prejudices and preconceptions are reinforced instead of being challenged. That is not only bad for individuals, the author argues, but for society as well.
Is it Twitter’s duty to broaden our world view?
Pariser says that while users and information consumers have a duty to try to break out of these bubbles, companies like Google and Facebook also have a duty to help expose us to different viewpoints. He says they should explicitly code into their algorithms “a sense of the public life [and] a sense of civic responsibility,” and should give users the ability to control what gets through the filter and what doesn’t. Gillespie, meanwhile, says that Twitter’s choice of what makes a trending topic essentially reinforces the public’s obsession with what is new rather than what is important:
Perhaps we could again make the case that this choice fosters a public more attuned to the “new” than to the discussion of persistent problems, to viral memes more than to slow-building political movements.
Gillespie’s argument, like Pariser’s, is similar to criticisms of Google’s approach to Google News, which some believe should make more of an effort to suggest or display “important” stories instead of just whatever is popular. When this concern was raised at the Zeitgeist forum last year, Google founder Larry Page said that he sympathized with this view, and had thought about tweaking Google News in order to make the news “better.” But is this what we want? Do we really want companies like Google or Facebook to give us the information they think we need to see?
Newspapers are sometimes seen as “serendipity engines” because they expose readers to information they might not otherwise come across — but they too have suffered from an echo-chamber mentality, with editors choosing the stories that suit their prejudices. In extreme cases, relying on a single outlet like the New York Times has arguably had devastating effects on society as a whole, as with Judith Miller’s unbalanced reporting in the lead-up to the Iraq war. Is the filter bubble really so much worse now than it was when people read a single paper or watched one TV news channel?
In the end, the only cure for the ailment that Gillespie and Pariser have described is for users to take more control over their own information consumption habits, and to consciously subject themselves to contrary opinions or viewpoints. One interesting attempt to make this happen is a browser plugin called Rbutr (short for “rebutter”), which is a finalist in the Knight News Challenge for journalism applications, and allows users to click and see webpages that disagree with the one they are currently reading. But will enough people want to do this, or will they be content to live inside their bubbles?