Summary:

Political commentator Ronan Farrow says that social networks like Twitter and Facebook should do more to police violent content from terrorist groups — but who gets to draw the line between free speech and hate speech, or choose which content should disappear forever?

Social networks and platforms like Facebook, Twitter and YouTube have given everyone a megaphone they can use to share their views with the world, but what happens — or what should happen — when their views are violent, racist and/or offensive? This is a dilemma that is only growing more intense, especially as militant and terrorist groups in places like Iraq use these platforms to spread messages of hate, including graphic imagery and calls to violence against specific groups of people. How much free speech is too much?

That debate flared up again following an opinion piece in the Washington Post by Ronan Farrow, an MSNBC host and former State Department staffer. In it, Farrow called on social networks like Twitter and Facebook to “do more to stop terrorists from inciting violence,” and argued that if these platforms screen for material like child pornography, they should do the same for material that “drives ethnic conflict,” such as calls for violence from Abu Bakr al-Baghdadi, the leader of the jihadist group known as ISIS.

“Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?”
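
The systems Farrow is pointing to generally work by fingerprinting known files and checking every new upload against a blocklist of those fingerprints. The sketch below illustrates the idea in a deliberately simplified form, using exact SHA-256 hashes and an invented `knownBadHashes` list; real systems such as Microsoft's PhotoDNA rely on perceptual hashes that can survive re-encoding, cropping and other edits.

```typescript
import { createHash } from "crypto";

// Hypothetical blocklist of fingerprints for known prohibited files.
// In production these lists come from outside clearinghouses, and use
// perceptual rather than cryptographic hashes.
const knownBadHashes = new Set<string>([
  "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", // placeholder entry
]);

// Fingerprint an upload's raw bytes with SHA-256.
function fingerprint(fileBytes: Buffer): string {
  return createHash("sha256").update(fileBytes).digest("hex");
}

// Screen an upload: block it if its fingerprint is on the blocklist.
function screenUpload(fileBytes: Buffer): "allowed" | "blocked" {
  return knownBadHashes.has(fingerprint(fileBytes)) ? "blocked" : "allowed";
}
```

The limits of the approach are the point: a lookup like this only catches copies of files someone has already flagged, which is why extending it from known videos to novel calls for violence is far less mechanical than the comparison implies.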

Free speech vs. hate speech — who wins?

In his piece, Farrow acknowledges that there are free-speech issues involved in what he’s suggesting, but argues that “those grey areas don’t excuse a lack of enforcement against direct calls for murder.” And he draws a direct comparison — as others have — between what ISIS and other groups are doing and what happened in Rwanda in the mid-1990s, when the massacre of hundreds of thousands of Tutsis was driven in part by radio broadcasts calling for violence.

In fact, both Twitter and Facebook already do some of what Farrow wants: Twitter’s terms of use specifically forbid threats of violence, for example, and the company has removed recent tweets from ISIS and blocked accounts in what appeared to be a response to the posting of beheading videos and other content (Twitter has a policy of not commenting on actions it takes against specific accounts, so we don’t know for sure why).

The hard part, however, is drawing a line between egregious threats of violence and political rhetoric, and/or picking sides in a specific conflict. As an unnamed executive at one of the social networks told Farrow: “One person’s terrorist is another person’s freedom fighter.”

In a response to Farrow’s piece, Jillian York — the director for international freedom of expression at the Electronic Frontier Foundation — argues that making an impassioned call for some kind of action by social networks is a lot easier than trying to sort out what specific content to remove. Maybe we could agree on beheading videos, but what about other types of rhetoric? And what about the journalistic value of having these groups post information, which has become a crucial tool for fact-checking journalists like British blogger Brown Moses?

“It seemed pretty simple for Twitter to take down Al-Shabaab’s account following the Westgate Mall massacre, because there was consistent glorification of violence… but they’ve clearly had a harder time determining whether to take down some of ISIS’ accounts, because many of them simply don’t incite violence. Like them or not… their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.”

Twitter and the free-speech party

As the debate over Farrow’s piece expanded on Twitter, sociologist Zeynep Tufekci — an expert in the impact of social media on conflicts such as the Arab Spring revolutions in Egypt and the more recent demonstrations in Turkey — argued that even free-speech considerations have to be tempered by the potential for inciting actual violence against identifiable groups.

It’s easy to sympathize with this viewpoint, especially after seeing some of the terrible images coming out of Iraq. But at what point does protecting a specific group from theoretical acts of violence win out over the right to free speech? It’s not clear where to draw that line. When the militant Palestinian group Hamas made threats against Israel during an attack on the Gaza Strip in 2012, should Twitter have blocked the account or removed the tweets? What about the tweets from the official account of the Israeli military that triggered those threats?

What makes this difficult for Twitter in particular is that the company has talked a lot about how it wants to be the “free-speech wing of the free-speech party,” and it has fought for the rights of its users on a number of occasions: it tried to resist demands that it hand over information about French users who posted homophobic and anti-Semitic comments, and it tried to resist handing over information about supporters of WikiLeaks to the U.S. Justice Department.

Despite this, even Twitter has been caught between a rock and a hard place, with countries like Russia and Pakistan pressuring the company to remove accounts and use its “country withheld content” tool to block access to tweets that are deemed to be illegal — in some cases merely because they involve opinions that the authorities don’t want distributed. In other words, the company already engages in censorship, although it tries hard not to do so.
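
Mechanically, a withholding tool of this kind is closer to geofencing than to deletion: the tweet stays up everywhere except for viewers in the jurisdictions that demanded its suppression. Here is a minimal sketch of that routing decision; the tweet IDs, country codes and lookup function are all invented for illustration, not Twitter's actual implementation.

```typescript
// Hypothetical map from tweet ID to the ISO country codes where a
// legal demand requires the tweet to be suppressed.
const withheldIn = new Map<string, Set<string>>([
  ["tweet-123", new Set(["RU", "PK"])],
]);

// Stand-in for a real datastore lookup.
function loadTweetBody(tweetId: string): string {
  return `(body of ${tweetId})`;
}

// Show the tweet, or a withholding notice, depending on where the viewer is.
function renderTweet(tweetId: string, viewerCountry: string): string {
  if (withheldIn.get(tweetId)?.has(viewerCountry)) {
    return "This tweet has been withheld in your country in response to a legal demand.";
  }
  return loadTweetBody(tweetId);
}
```

The per-country design is what lets the company satisfy a local legal demand without deleting the content for everyone else, which is also exactly why it amounts to censorship on a jurisdiction-by-jurisdiction basis.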

Who decides what content should disappear?

Facebook, meanwhile, routinely removes content and accounts for a variety of reasons, and has been criticized by many free-speech advocates and journalists — including Brown Moses — for making crucial evidence of chemical-weapon attacks in Syria vanish by deleting accounts, and for doing so without explanation. Google also removes content, such as the infamous “Innocence of Muslims” video, which sparked a similar debate about the risks of trying to hide inflammatory content.

What Farrow and others don’t address is the question of who should decide which content to delete in order to banish violent imagery, as he proposes. Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t? Handing over such an important principle to the private sector — with virtually no transparency about its decision-making, nor any court of appeal — seems unwise, to put it mildly.

What if there were tools we could use as individuals to remove or block certain types of content ourselves, the way Chrome extensions like HerpDerp do for YouTube comments (sketched below)? Would that make things better or worse? To be honest, I have no idea. What happens if we use these and other similar tools to forget a genocide? What does seem clear is that handing over even more of that kind of decision-making to faceless executives at Twitter and Facebook is not the right way to go, no matter how troubling the content might be.
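
For a sense of how small such a tool can be, here is roughly what a HerpDerp-style filter amounts to: a script running in the reader's own browser that hides matching posts locally, leaving the page unchanged for everyone else. The CSS selector and keyword list below are hypothetical stand-ins, not taken from the actual extension.

```typescript
// A HerpDerp-style content script: hides comments the reader has chosen
// to mute, in the reader's browser only. Nothing is deleted at the source.
const mutedKeywords = ["beheading", "kill"]; // hypothetical user-chosen list

function hideMatchingComments(): void {
  // ".comment-text" is a placeholder selector for the host page's comments.
  document.querySelectorAll<HTMLElement>(".comment-text").forEach((el) => {
    const text = el.textContent?.toLowerCase() ?? "";
    if (mutedKeywords.some((word) => text.includes(word))) {
      el.style.display = "none"; // hidden locally, not removed globally
    }
  });
}

// Re-run the filter as new comments are loaded into the page.
new MutationObserver(hideMatchingComments).observe(document.body, {
  childList: true,
  subtree: true,
});
hideMatchingComments();
```

Whatever its limits, this model keeps the removal decision with the individual reader rather than with a platform executive, which is the distinction the paragraph above turns on.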

Post and thumbnail images courtesy of Shutterstock / Aaron Amat

Comments

  1. That is not good!
    Leslie

  2. Tying free speech to Facebook and Twitter is way off base. Those companies let you say whatever you want to say because it’s their business model, not because they are committed to democracy. Just refer to what Zuckerberg thought of people uploading their personal information and sharing it. Meanwhile, John Warnock of Adobe used some of his wealth to purchase an original hand copy of the Bill of Rights.

  3. Who cares what Ronan Farrow thinks about who should be silenced? And whose rhetoric should be censored? I guess only jihadists at first, so we can become acclimated to the censorship. And if we didn’t allow “Innocence of Muslims” on YouTube, what will Washington do when they need to blame their scandals on others? Without ISIS propaganda online, how will we get whipped into a frenzy for more state-sponsored violence done in our name? And which Twitter intern should define what is offensive again? Or should we just add @ronanfarrow to all tweets so he can let us know?

  4. Ralph Haygood Tuesday, July 15, 2014

    “Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t?”: On their own platforms, yes, we probably should. We’re not talking about public utilities here. If we were, public regulation would be clearly appropriate. Given we’re not, the case for public regulation is much weaker. I’m not a libertarian, nor do I agree with most Americans of liberal inclinations that freedom of political speech should be practically absolute, but I do think restrictions on it with the force of law should be imposed very cautiously and under intense public scrutiny. At a minimum, if you want to argue Facebook or Twitter warrants such restrictions, I think you’ll have to agree Fox “News” and hate radio do too. Are you sure you want to open that can of worms?

    In practice – and this topic is quite practical for me, as I’m in the process of creating a service something like Facebook – I think services should spell out in their terms of use what kinds of content are unacceptable, encourage users to flag content they consider unacceptable, and consistently remove flagged content that well-trained staff members agree is unacceptable. That, and no more, is a reasonable expectation of a service. I’d suggest users make a habit of unfriending / unfollowing / unwhatevering posters of content they find offensive and blocking obnoxious commenters. It would be nice if more commenting systems supported the latter, rather than forcing readers to wait on administrators to ban trolls; for me, this feature is the only important virtue of Facebook commenting on non-Facebook sites.

  5. I think I am paraphrasing B. Franklyn: There is no compromise in the pursuit of freedom.

    I think it is a bad idea to trade freedom for security, but that is what this discussion is really about. I see corporations as another form of government – with different charters and limitations, of course – so if I do not want the government to regulate my freedom, why would I want corporations to?

    Regulating the voices in any media – traditional or social – is trading freedom for security. Some of those voices are dangerous, and I promise you, I am very much against anything that is anti-USA, anti-Israel, anti-Semitic, ethnocentric, racist, bullying, and so on. Still, I am not willing to trade my freedom for security.

    1. Franklyn? with a “Y” ? really ? now there’s a reason for censorshyp…

      1. Ooopppsss … sorry … I can not spell in any of the 5 languages I speak.

  6. Social media acts as a base for free expression of views, and the repercussions thereof are a matter of conjecture only. When something untoward happens based on the postings, it will be classified as bad and a cybercrime. A very unpredictable situation.
