The Boston bombings, like so many public emergencies, saw heroes who risked their lives. But the tragedy also produced jerks like the person who created @_bostonmarathon, a fake Twitter account claiming to raise money for victims. Here’s a screenshot (via CBS):
The episode is the latest example of how Twitter has become a critical source of information in a crisis — but also shows how people are able to abuse the service’s role as an emergency channel. Recall that a similar situation arose during Hurricane Sandy, when a hedge fund trader named Shashank Tripathi spread panic by tweeting fake news that was retweeted thousands of times.
What should be done with these people? In debating Tripathi’s actions, most of our readers at the time agreed that his actions were immoral but not illegal. Now, though, it’s clear that a Tripathi may emerge on Twitter any time there’s an emergency — raising the question of whether the company should adopt a more hands-on policy to quell potential panic.
Fortunately, that may not be necessary. In the case of the fake Boston Marathon handle, a flood of warnings caused Twitter to suspend the account within hours. A Twitter spokesperson later explained that the company can rely on its own users to police bad behavior:
“We don’t mediate users’ content. Users have the ability to flag accounts as spam or block accounts, and those actions are signals that feed into our automated systems,” said the spokesperson by email.
This automated system promises both efficiency and autonomy — it permits a high degree of free speech while also weeding out bad or dangerous actors. And it’s proven durable. Twitter has relied on it even as the company has become the central news outlet for everything from the Osama Bin Laden killing to the Arab Spring.
But can the company keep up this hands-off approach as it becomes ever more important as a news source? In a recent post, my colleague Mathew Ingram argues that it can. He writes, persuasively, that asking Twitter to step in is “a little like asking AT&T to eavesdrop on phone calls in order to figure out who is a terrorist.”
For now, the system works on auto-pilot. It will be interesting to see if it holds up during future emergencies.