Summary:

Facebook has said it will be changing how it handles offensive content and hate speech on its site, but it remains a sticky topic for tech companies that try to build platforms to facilitate speech.


It’s clear to many people, and obviously to Facebook itself, that the company mishandled the process for removing gender-based hate speech on the social network. The company wrote in a blog post Tuesday that it would be re-evaluating its policies as a result, and has resolved to “do better” in the future.

But even as activists call for Facebook to take action on removing hateful speech from the platform, others will surely criticize the company for making judgment calls when it comes to deciding what is hate speech and what is simply offensive content. Making this call is a challenge, as Facebook has demonstrated. Even in its post on Tuesday, the company explained that there’s plenty of distasteful or offensive content on the site, but that content only comes down if it specifically encourages real-world violence or qualifies as hate speech, which can be a challenging thing to define on a platform with 1 billion users.

“There’s a bunch of stuff that stays up because it’s not hate and it’s not inciting violence, but it’s just distasteful,” Facebook’s COO Sheryl Sandberg said at the D11 conference on Wednesday morning. She said the company hopes that users will speak up about content they find offensive, creating the kind of “self-cleaning oven” atmosphere my colleague Mathew Ingram has written about. Which is convenient, because that means less work on Facebook’s part.

But the company certainly acknowledged that when it comes to hate speech, as opposed to just distasteful content, its current approach was not adequate, as it wrote in the statement it gave Tuesday:

“In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria. We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will.”

We’ve written about this issue before, and how companies like Google or Twitter have struggled to establish platforms that allow for free speech and conversations around things like political dissent without going so far as to encourage violence or hate, or skirting any laws. Facebook presents a slightly different challenge than Twitter because it focuses on having users present their real identities, whereas Twitter has fought in court to protect the identity of users who remain anonymous.

Facebook might have apologized and revamped its policies this time, but don’t think for a minute that we’ve reached some resolution on how tech companies negotiate offensive content and free speech.
