Summary:

The removal of film critic Roger Ebert’s Facebook page and the blocking of content from a protest group in Britain have raised questions about the site’s censorship of content. We treat Facebook as a semi-public space, but it is controlled by a private company.

Updated: The benefits of being on Facebook are fairly obvious by now: you can connect to friends and family and share things with them no matter where they are — and it’s all free! This quasi-public space is also owned and controlled by a corporate entity, however, and it has its own views about what kinds of behavior should be allowed. That inevitably raises questions about whether the site is engaging in what amounts to censorship — questions that resurfaced this week after a page belonging to film critic Roger Ebert disappeared, and a group of protesters in Britain found their content blocked. Who is watching the watchmen?

Ebert triggered a storm of criticism on Monday with his response to the death of Jackass co-star Ryan Dunn, who was killed in a single-vehicle accident early Monday morning. Police said that speed was likely a factor in the crash, and there have also been suggestions that the TV actor may have been drinking before the incident. Ebert — who took to Twitter after a cancer operation led to the loss of his lower jaw, and now has 475,000 followers — posted that “friends don’t let jackasses drive drunk,” a comment that drew attacks from Dunn’s co-stars and from celebrity blogger Perez Hilton.

The film critic later tweeted that his page had been removed (even though his comments on Twitter about Dunn never actually appeared there), to be replaced by an error message stating that the page had been removed due to violations of Facebook’s terms of use, which ban any content that is hateful, threatening or obscene or that attacks an individual or group. In response, Ebert said his page was harmless and asked: “Why did you remove it in response to anonymous jerks? Makes you look bad.”

Facebook later said that the page was taken down “in error” and it was reinstated. But as Jillian York of the Electronic Frontier Foundation and Global Voices Online noted in a blog post, it’s not clear what kind of error led to the page being removed. Was it taken down automatically after being flagged as abusive? York — who has written in the past about Facebook removing pages set up by political dissidents in the Middle East and elsewhere — says the company has denied removing pages automatically. So was there human error involved? And if so, what steps is Facebook taking to prevent that in the future?

If critics of his Twitter comments attacked Ebert’s page by repeatedly flagging it, they effectively took the same approach some governments have taken in trying to shut down dissent: Foreign Policy magazine columnist Evgeny Morozov said recently that he knows of at least one government that flags dissident group pages as pornography in order to get them removed. Facebook has also removed pages in the past that were seen as anti-Islam or anti-Israel — in some cases reinstating them later — and has taken down more innocuous content as well, such as pages about the benefits of breastfeeding.

And it’s not just taking down pages that Facebook users are concerned about: According to a blog post from one of the organizers of a recent public anti-government protest in Britain, a number of users reported that Facebook not only blocked them from linking to a website set up by the group, but from linking to a blog post about it as well. A spokesman for the social network said this too was an error that was later corrected — but, again, what kind of error it was isn’t clear. Nor is it clear what criteria Facebook uses to make these decisions.

As the British blogger notes in his post on the incident, Facebook is “increasingly the space within which people receive their information, including civic information.” We are living more and more of our public lives and getting more of our information through networks such as Facebook, and while that can be a very powerful thing — as we’ve seen with events such as the Arab Spring uprisings in Tunisia and Egypt — it also means that more of our information is being filtered by a corporate entity, with its own desires and rules, not all of which are obvious. The implications of that are profound.

Update: Facebook spokesman Barry Schnitt responded to this post via email, and his comments about the specific incidents mentioned (both in the post and in the comments here) are below. They have been edited for length.

  • Ebert — “Difficult to say. Perhaps the reviewer misinterpreted [the term] “jackass” or the other fan comments as a concerted personal attack on a private individual. Perhaps they just meant to push “ignore” and hit the wrong button. In the end, we responded to the mistake pretty quickly.”
  • Britain/j30strike — “The domain jstrike30.com, when originally registered, redirected to an affiliate site that’s associated with spam and other malicious content. We block many of these sites to protect users. In this case, we blocked the affiliate site and, when we found that this domain redirected to it, we blocked it as well. Soon after being registered, however, the domain was changed to host the non-spammy content that’s there now. Our system didn’t pick up on this, and it remained blocked. As Dave explained, we’re already working on improvements to prevent this type of mistake in the future.” (A rough sketch of how a cached redirect verdict like this can go stale follows this list.)
  • Third Intifada — “As we explained publicly around this controversy, the page was reported many many times but we kept it up because it was an explicitly peaceful protest, even though it used a word — “intifada” — associated with violence in the past. However, the page eventually became overrun with calls for violence and was removed.”
  • Breastfeeding — “No policy against this at all. I’d encourage you to search the site. There are dozens of groups and thousands, if not millions, of breastfeeding photos to be found. We have removed some pictures of naked women who happen to be holding a baby. And, of course, I’m sure we’ve made some mistakes.”
  • Trademark comment — “Abuse of DMCA and other intellectual property notice procedures is a challenge for every major Internet service and we take it seriously. We have invested significant resources into creating a dedicated team that uses specialized tools, systems and technology to review and properly handle intellectual property notices. This system evaluates a number of factors when deciding how to respond and, in many cases, we require the reporter to provide additional information before we take action. As a result of these efforts, the vast majority of intellectual property notices that we receive are handled without incident. However, we are always striving to improve our practices. If your reader sends his info, I’ll have someone look into it.”
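
The j30strike explanation above describes a common pattern in link-safety systems: a domain is judged by the site it ultimately redirects to, and that verdict is cached, so a block can outlive the redirect it was based on. What follows is a minimal, purely illustrative sketch of that failure mode; it is not Facebook’s implementation, and the blocklist, function names, use of the requests library and re-check interval are all assumptions made for the example.

```python
# Purely illustrative sketch of redirect-based link blocking and how a cached
# verdict can go stale. This is NOT Facebook's system; the blocklist, cache,
# and re-check interval below are assumptions made for the example.
import time
from urllib.parse import urlparse

import requests  # third-party HTTP library, used here for convenience

KNOWN_SPAM_HOSTS = {"spammy-affiliate.example"}  # hypothetical blocklist
RECHECK_AFTER_SECONDS = 7 * 24 * 3600            # assumed re-check window

# domain -> (blocked?, timestamp of last check)
_verdict_cache = {}

def final_host(domain):
    """Follow redirects and return the host the domain ultimately lands on."""
    resp = requests.get("http://" + domain, allow_redirects=True, timeout=10)
    return urlparse(resp.url).hostname or domain

def is_blocked(domain):
    """Block a domain if it currently redirects to a known spam host.

    The cached verdict is only refreshed after RECHECK_AFTER_SECONDS, so a
    domain that stops redirecting to spam can stay blocked until the next
    re-check -- the stuck-block scenario described in the quote above.
    """
    now = time.time()
    if domain in _verdict_cache:
        blocked, checked_at = _verdict_cache[domain]
        if now - checked_at < RECHECK_AFTER_SECONDS:
            return blocked  # a possibly stale verdict is returned here
    blocked = final_host(domain) in KNOWN_SPAM_HOSTS
    _verdict_cache[domain] = (blocked, now)
    return blocked
```

Nothing here is specific to Facebook; it simply shows why “we blocked the redirect target, and the redirect later changed” produces the kind of lingering block Schnitt describes, and why a periodic or on-appeal re-check is the natural fix.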

Post and thumbnail photos courtesy of Flickr user David Reece

  1. The first sentence of this article is pretty much what email is.

    1. Not really, but I can see your point — and just imagine if the corporation that sends your email decided not to send certain messages because of their content.

  2. Sadly, we now live in a society where 5% of the population dictates to the rest of us what we can see, say, and hear. Special interest groups representing every minority “flavor” (I’m talking beyond even ethnicity) are out there extorting our civil rights with their loud mouths, access to media, and propaganda machine. I got permanently banned from Huff Po for daring to submit a comment calling Al Sharpton a racist ambulance chaser. They all claim they hate “intolerance”, yet they won’t tolerate an opposing stance. Sad times.

  3. Lucian Armasu Tuesday, June 21, 2011

    This is why we need decentralized Facebook-like and Twitter-like services. These types of services are becoming increasingly important, and yet they could take down your account without you being able to do much about it.

    1. I agree, Lucian — the problem is that those services exist (or close to it with Diaspora and Status.net) but no one wants to use them because all their friends are on Facebook and Twitter :-)

      1. I think it’s more like people don’t want to use these because they need to be set up and that isn’t so easy for non-techy individuals.

        I still stand by the centralized social network concept. It just needs to be run fairly with a bias toward free speech. It’s something my partner and I are working on.

    2. FYI, Dave Winer is working on (and he and others are already using) a tool called Blork: http://scripting.com/stories/2011/04/05/gettingStartedWithBlork.html

      To Mathew’s point that these tools aren’t being widely used because everyone is on Twitter and Facebook, that’s probably always going to be true. So our alternatives shouldn’t be separate social networks where one basically has to start all over. The alternatives need to be such that one can still feed content to Twitter and Facebook, where all of one’s friends are, while still maintaining control over that content on one’s own server or some other such method. So the end result would be being on Twitter and Facebook without ACTUALLY being there. Know what I mean?

  4. The control Facebook has over communications is of increasing concern to me. Never mind the obvious, like shutting down pages; I don’t think I am seeing people with opposite political views from my own, or people FB doesn’t think I “should” see. I really notice this because sometimes I see ONLY my Arizona friends, and not those from SV. I don’t want my feed curated; I have friends in both environments for a reason.

    1. Control over your feed is definitely an important thing to have.

  5. I think that “WE” need to have competitors to all of the Big Co silos like FB and Twitter. Until there is viable competition, members feel they have no choice if they disagree with the Big Co. In my view these competitors should be fully or partially owned by the “Public”. In a way these applications would be “Public Utilities” whose primary goal would be to provide services in the “interest” of the public.
    We have been working on bringing this concept into reality and will have some announcements shortly. We also have been working on several applications that have the possibility of becoming “Public Utilities”. The social network is in alpha here: http://social.maia68.com. There are some bugs; we are trying to fix them as fast as we can.
    We are also working on some things here http://www.maia68.com and here http://www.kleemi.com

    1. Frankly, I think that is a horrible idea. The government has a huge incentive to censor and would if it were treated like a public utility. Maybe you mean something else by that, I don’t know. Centralized control is the problem, and while I have concerns about Facebook that keep me off the site, it’s way better than letting the government control it. Facebook can’t do much to hinder its competition, while the government could outlaw competition like it did with the Post Office, which was used to censor.

  6. A parallel internet will be needed soon. Facebook and Twitter are both needed, and both unreliable.

  7. It’s good that things like this happen because they warn all of us to not be lulled into the sense that the proprietors of Facebook would ever allow the service to be turned into a public commons. Some irony in the fact that the Facebook service illustrates many of the things a digital commons might be about, but, whatever that future is, Facebook won’t be it. Too centralized; too lacking in imagination; too bound to the service of a tired, corrupt industry (advertising) that the commercial side of a truly social media would free us from.

    But it’s a private business, so the only fair remedy to the problem is for individuals to choose with their eyes open.

    The far more difficult problem, I think, is what to do about data centers and cloud services that really do function as de facto public utilities, when (a) they don’t like content they are hosting, and (b) government wants to search or seize a server and doesn’t know how to distinguish between the suspect and its virtual neighbors? That’s also infrastructure for . . . well, everything: commerce, education, the arts, politics, everything.

  8. Hi Mathew. You ask several times about the criteria that are used, and our terms of service are public and available: https://www.facebook.com/terms.php?ref=pf

    We seek to create a safe environment for all users and investigate reports from users about violations of the terms. In those cases, the Pages are sometimes frozen while the claim is investigated to determine whether malicious behavior is taking place. Calling it “censorship” is not only misleading, but grossly inaccurate.

    1. Where in your terms of service does it list the criteria used?

      There pretty clearly was no violation of copyrights or privacy in Ebert’s post. Apparently, you pulled it based on what you thought might be offensive content. That IS censorship.

    2. Vadim, don’t be so quick to run from the term! Censorship is in fact what one would naturally do to enforce terms of service that pertain to content, which yours do. Suggestion: just be very clear about what the criteria are at Facebook for the censorship that obviously can occur (your terms state that you can deny service to someone who violates the standards).

    3. Thanks for the comment, Vadim — I’m aware of the terms of service, but it’s still not clear to me which of those were infringed by either Roger Ebert or the British group in question, or for that matter which are infringed by pages or groups devoted to breastfeeding and the like.

      And while censorship may not be the term you might choose for what happens when you remove that kind of content, I think in purely functional terms that is what is occurring — even if there is justification for it, or if it breaches Facebook’s terms of service etc.

    4. Grossly inaccurate? Ahem. When Facebook takes down the Pages of Roger Ebert and Sarah Palin (in her case, last year, it was just a note), it’s still censorship. Because they’re famous, and their stories are publicized, it gets fixed. But when it’s an activist or ordinary user, the accounts often remain down, and Facebook’s persistent notice that “this decision cannot be appealed” means that users are unlikely to even attempt to get their accounts back.

  9. Thanks for the post. I work on Facebook’s User Operations team focusing on our content policies and how we enforce them. I know that there are lots of questions and some confusion about our policies and practices around content. We’ve spent a lot of time trying to be transparent on our site (http://www.facebook.com/communitystandards), on our blog (https://www.facebook.com/blog.php?post=403200567130), and through the press (http://www.nytimes.com/2010/12/13/technology/13facebook.html). However, we can always do more, and we’d really appreciate your help in that effort.

    Generally, we’re very proud of the environment on Facebook. We want Facebook to be a place where people can openly express their views and opinions, even if others don’t agree with them. That said, there are some types of content we don’t allow. For example, we remove hate, threats, pornography, attacks on private individuals, and other kinds of abuse when they’re reported to us. We also work to prevent spam. You’ll find a more complete list of what we don’t allow, as well as an explanation of the community we’re trying to build, on the Community Standards page mentioned above. We’ve developed these policies with guidance from our Safety Advisory Board and other recognized experts from around the world.

    Unfortunately, there aren’t many examples for Facebook to follow in this area. We simply face a challenge that few companies have ever faced – protecting a service that’s under constant attack and through which over 500 million people share more than 30 billion pieces of content every month. Our international team of reviewers works across offices in California, Texas, Ireland, and India to review hundreds of thousands of pieces of reported content every day. Our engineers build systems that classify over ten billion actions (suspicious logins, friend requests, etc.) and pieces of content (messages, Wall posts, etc.) every day. The hundreds of people and sophisticated systems that enforce our policies stand in stark contrast to the vast majority of the rest of the Internet.

    Of course, no system is perfect, and we do sometimes make mistakes like the two you mention above. Usually, it’s because a system didn’t work as intended or someone simply made a wrong call. In the case of the Page, one of our reviewers mistakenly identified it as an attack on a private individual, which we don’t allow. In the case of the link, our system mistakenly identified it as redirecting to a known spam website.

    As I mention above, there are some common misconceptions about our policies and practices. For example, we found long ago that the number of reports on a piece of content is a very unreliable signal for determining abuse. In fact, in many cases, the number of reports a piece of content receives is inversely related to the accuracy of those reports (imagine how many reports a Page criticizing Justin Bieber might get, for example). Because of this, number of reports has no influence on our decision about content. None. Instead, we evaluate it based on whether it violates our policies.

    We strongly believe that what is more important than any mistakes is our response to them. When we do make a mistake, we act quickly to fix it (both of the above issues were resolved within hours), apologize, and improve our processes so that it doesn’t happen again. In fact, our team has already started discussing improvements to prevent cases like the above in the future. Still, though, we believe our error rates rival those of any company in any industry.

    We take all of these responsibilities very seriously. We don’t pretend to have all the answers, though, and we welcome constructive feedback on how to improve our approach. Please don’t hesitate to contact my colleagues in PR if you have any suggestions. Thanks.

    1. Thanks for the comment, Dave — I know that the job of moderating billions of pages and comments can’t be easy. And I’m encouraged to hear you say that the raw number of complaints or flagging of content has no bearing on whether content is removed. But I’m still curious about why Ebert’s page would have been blocked even for a short time, and why the protesters in Britain would have found their links even to an innocuous blog post blocked.

      I’m also curious about how Facebook interprets community standards, since that kind of thing differs so much from country to country and community to community. I’d be interested in hearing more about that — you can reach me at mathew@gigaom.com.

    2. One of our Facebook Pages was removed without warning because somebody complained to Facebook that it violated their trademark. We couldn’t find a single lawyer who thought the complaint was valid, but Facebook presumes we are guilty until proven innocent. The user operations team basically told us to come back once we had a legal judgement. Censorship and arbitrary standards regarding trademark and copyright enforcement are real risks when doing business on someone else’s platform. Dave I appreciate your comments and would be very grateful if you’d review our case. Thanks.

      1. Thanks for pointing that out, Tom — that’s a real minefield for companies, I would think, and for Facebook as well.

    3. Here’s a question: Why, when someone’s account is taken down, do they receive a message stating that the decision “cannot be appealed”? Surely, in the case of a Sarah Palin, Justin Bieber, or Roger Ebert, the problem is easily solved, but the vast majority of individuals who lose their accounts for a TOS violation–real or in error–don’t know how to appeal the decision, and are informed by your User Operations team that they cannot do so. How about fixing that?

      1. That’s a great point, Jillian.

  10. Why does Facebook’s fine print claim it owns all data its users put into it? Why does Facebook summarily close accounts and destroy social networks after acting as judge, jury, and executioner, with zero due process of law, then keep all the data on the accounts for its own use? Why is there no due process of law in the first place? Because Facebook is above the law, and is evil. Why do they think they can close accounts in the first place? If some material somebody posted is objectionable, why don’t they just delete that and leave the account going? Why don’t they restrict posting privileges after multiple violations? Where’s the appeal process when somebody’s account is completely closed and they can’t even log on to send an appeal?

    Save them the trouble and close your account now and boycott Facebook forever!

