
Summary:

Observers might be forgiven for thinking that EU privacy law allows links to serious journalism to be removed from Google’s results if the subject complains. That’s really not the case, as Google knows very well.

On Wednesday, journalists from both The Guardian and the BBC complained that stories from their outlets had disappeared from Google search results within Europe. There is no good reason for this to be happening.

Ah, you might ask, what about that ruling from the Court of Justice of the European Union (CJEU) that Google must remove results if the information is out-of-date and people complain about it? That’s clearly central to this whole affair, but it still doesn’t explain why Google is removing these specific results. The ruling, you see, doesn’t order it to do that.

Here’s the crucial part of that ruling which, remember, only established that Google had to abide by existing EU law (emphasis is mine):

As the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name. However, that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question.

Good call, bad call — but whose call?

Let’s have a quick look at those vanished stories (which are still online, of course, but not findable through Google’s European services, and Google has over 95 percent of the EU search market). Danny Sullivan over at MarketingLand has a pretty good run-down, and here’s a recap of that with my non-lawyerly take on whether Google was justified in delisting them:

  • 3 stories about a Scottish soccer referee who had to quit because he lied about a decision he made — Definitely a public role, and a memorable scandal. Probably shouldn’t come down.
  • A story about a lawyer, standing for election in the Law Society, who was accused of fraud. The lawyer was subsequently cleared. — A matter of public interest at the time, but the chap was cleared. Tough call, so this is one the courts should decide on, not Google.
  • An article about French office workers decorating their office windows with Post-It notes — Utterly random, and a prime example of why Google should be making its criteria clear.
  • An article about a Merrill Lynch chairman at the height of the sub-prime mortgage debacle — Public interest; shouldn’t come down.
  • A piece about a supermarket employee insulting his bosses on social media — Easy to see why this was taken down, as it’s probably making the guy unemployable. No public interest.
  • A story about a couple having sex on a train — Again no public interest and probably highly embarrassing to them. The article isn’t linked to in the MarketingLand piece, though, so hard to judge.

If you’re yelling at your screen that I’m in no position to be judging this stuff, then you’re right, but the same applies to Google. And the CJEU was entirely wise to that fact, saying explicitly in its ruling that, if the search operator denies the data subject’s request for de-linking, the subject can then go complain to the courts.

In other words, when faced with a tricky call, Google should be erring on the side of caution and shoving casework in the courts’ direction. Instead, it appears to be granting applications that should in many cases be denied. What methods does it use to judge what should stay up and what should come down? Heaven knows – I’ve repeatedly asked Google to explain this since it began censoring results a week ago, but have hit a brick wall.

Straw man?

Let’s quickly go over the timeline of this entire “right to remove” fiasco, as it relates to Google. As soon as the ruling came through, the BBC revealed that the company had received de-linking requests from pedophiles and disgraced politicians. Great PR fodder for Google, which strongly opposed the CJEU ruling, but irrelevant because the public interest in keeping those stories in Google’s search results was clear. These were obviously cases that fell under the CJEU’s exception.

At the end of May, Google started formally taking de-linking requests. It began actually de-linking stuff a few weeks later, despite the fact that lawmakers in the various EU member states are still trying to figure out the processes for making the CJEU’s ruling work in practice. And voila: just days later, two of the highest-profile British publications find that stories have come down which pretty clearly shouldn’t have.

If Google is trying to prove that the system is unworkable, then it’s succeeding – only the system it’s apparently operating in isn’t the system the CJEU described. It’s a straw man, a nastified version of the actual legal framework that makes that framework easier to attack. And it’s not just me seeing this possibility; it’s worth reading what Leo Mirani and Paul Bernal have written on the matter.

I’m not a big fan of the CJEU’s ruling, mainly because I’m worried about how national courts will interpret it, and also because I don’t like the idea of censoring the gatekeeper when the information on the other side is legal and allowed to stay online. But I do actually sympathize with the decision to some extent, certainly in terms of the court upholding European law. I also think it’s worth giving the framework described in the judgement a fair chance, before deciding it doesn’t work.

The European perspective

In Europe, as the CJEU pointed out, freedom of speech does not automatically trump the right to privacy and dignity. Article 6 of the Data Protection Directive of 1995 clearly states that personal data must be:

… accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified.

All the CJEU said was that Google has to abide by that 19-year-old law like everyone else operating in Europe. The same law – which also provides an exception for journalism and artistic expression – states that personal data shouldn’t be held in a way that identifies the subject for longer than necessary. This is one reason why the CJEU struck down Europe’s spy-friendly Data Retention Directive earlier this year, too.

As it happens, those laws are about to receive a major update that will enhance the “right to be forgotten”. Is that such a bad thing? I’m not so sure.

We are now in a very different world from the one that existed before the internet. In that age, there was no real need for a strict right to be forgotten – people naturally forgot what was in last week’s newspaper. If you wanted to research someone, it took real effort: going to libraries and poring through microfiche, calling people and asking around.

Now, thanks to search engines, you can thoroughly research someone’s history almost by accident. You only wanted to find their email address but, darn it, you just learned that they were up on some charges a decade ago (and you don’t see the subsequent article saying they were innocent and cleared). One slip, or one malicious move by someone else, can ruin a person’s life once it’s online.

So do we just shrug our shoulders and accept that, based on the idea that any kind of censorship is always wrong, or do we try and find a way to make very selective censorship work while making sure that genuine scoundrels can’t hide their tracks?

The middle path would clearly be in the interests of, well, everyone. But we will never find it, nor be in any position to judge its viability, if it’s being obscured by misrepresentation. I really hope that’s not what Google is trying to do, because: a) de-linking serious journalism is neither cool nor called for; b) it would be dishonest; and c) it would probably come back to bite Google, as regulators and lawmakers across Europe are already pretty fed up with the company’s track record of trying to override or skirt EU law.

If Google really is acting in good faith, as it claims to be, it should only be de-linking content that is very clearly not in the public interest. It should also only be putting its “results may be missing” notes on pages from which results have been excised, not every page that gives results for a name – a tactic that is neither helpful for those trying to spot and evaluate censorship, nor called for in the CJEU ruling.

In short, it should follow the ruling to the letter, so we can all see if that’s a good idea or not.

Comments

  1. They might have only wanted to remove one link… and then removed random other ones to ‘cover it up with randomness’. Who knows.

  2. FYI, I read in another story that Google only wanted to put the “results may be missing” notes on pages from which results have been excised, but the EU said that would be against the spirit of the law.

    The thought was that putting the note only on pages from which results have been excised would indicate that the person searched for was hiding something or had something removed. Thus, Google shows it for all names except obviously public ones.

  3. Please, you really think that Google has the desire or resources to make those kinds of decisions for every one of the tons of requests that they’re receiving? How long did it take you to review each one of those cases? Just how many thousands of people do you want them to employ to review those requests? If I were in their shoes, I would do the same thing to comply with the ruling: any request would get a nearly automatic approval, maybe with a quick algorithmic review of the data first.

    1. felrefordit Sunday, July 13, 2014

      “Please, you really think that Google has the desire or resources to make those kinds of decisions for every one of the tons of requests that they’re receiving?”
      Well, too bad for them.

      Google has been reaping the benefits of the web by taking everyone’s data and content without explicit consent (and without the consent of the people mentioned on those pages), and has made a lucrative business out of it by presenting said data (alongside ads) to web users.

      Now Google is about to face the downside of its one-sided behaviour, and wants to claim that it has no responsibility whatsoever for the data it displays because it does not own that data. Well, too bad. You can’t have your cake and eat it too.

      It’s time to force Google to take on the same responsibilities everyone else is obliged to take on – especially since Google is still not doing the hard, real work here, which everyone else who actually produces content for the internet is doing.

  4. Google doesn’t follow the ruling to the letter because it can’t. It’s not financially feasible (1,000 requests per day so far!), nor can it be Google’s job or indeed its jurisdiction to effectively rule on questions of privacy and censorship. I’ll bet a fiver that on Google’s side there are maybe one or two interns handling this whole process – if they haven’t completely automated it with a heuristic algorithm already. Spending more than even the bare minimum on this is a waste of resources, as nothing can ever follow the already obscure spirit of the court’s ruling.

    Whether or not Google goes overboard to sulk and protest, that isn’t really necessary to demonstrate the absurdity of the ruling. Of course there are the clear-cut cases – the public’s right to know definitely prevails for an official’s conviction over a scandal, but probably doesn’t for a private individual’s repossession some decades ago – but it can’t be the job of a private company to decide the millions and millions of grey-area cases in between. Either the courts bear the consequences of their ruling and take upon themselves the hundreds of cases a day they have generated, or they admit that they don’t understand the first thing about the Internet and approach the issue from a more educated position – maybe this time including one or two people in the process who know how to use the web? (Forgive me for at times wondering what profession those EU judges actually trained in.)

    If they really want to stick to the method of takedown requests, the least that would be needed is some kind of liability for the party issuing the request, as we know it from U.S. DMCA takedown practice. There, filing an unjustified takedown request is punishable by fine or imprisonment. This allows providers to automate the process of honouring takedowns while at the same time deterring people from issuing wrongful and frivolous requests. An author or publisher who is notified of a result being removed would be able to appeal the takedown if they feel strongly that it was unjustified. Only then would an actual ruling be made, with the requesting party standing to be convicted if their request turns out to have been unlawful.

  5. Wow, journalists didn’t give a damn whether Google lost the case. Now that their stories are nowhere to be found and they can’t make any money on them, they’re saying Google just can’t randomly de-link stories without judging them first. This is a tech company, not a law firm or a court system. If there’s a chance of litigation, Google has the right to de-link it. Maybe the newspaper companies should sue the courts over a story’s relevance to stay linked? The easy solution is to not store any data and make people do research or inform themselves the old-fashioned way.

  6. “In short, it should follow the ruling to the letter, so we can all see if that’s a good idea or not.”

    Part of the problem is that the decision about what to de-list and what not to de-list lies with Google. So what you are describing already shows pretty clearly, to me, that the ruling was not a good idea.

  7. ajaybharadwaj Friday, July 4, 2014

    I don’t think the courts really understood the nature of the Internet and what Google search provides. Google is not, and should not be, selectively linking or de-linking content. Why should Google be responsible for removing the link to content when the content itself can be modified or removed by the original author/owner if it is inappropriate? Do we really want Google to be the judge in this case?

    I think the court ruling is bad for the public and bad for the principle of the “right to information”.
