A British House of Lords committee has strongly criticized a recent ruling by Europe’s top court, which held that search engines like Google must, on request, remove links to people’s personal information where there is no public interest involved. The peers said the ruling was proving unworkable.
In a Wednesday report, the European Union committee also urged the U.K. government to reject upcoming European data protection revisions that will entrench and expand this so-called “right to be forgotten.”
“It is clear to us that neither the 1995 [Data Protection] Directive, nor the Court’s interpretation of the Directive, reflects the current state of communications service provision, where global access to detailed personal information has become part of the way of life,” the Lords said. “It is no longer reasonable or even possible for the right to privacy to allow data subjects a right to remove links to data which are accurate and lawfully available.”
The report followed a very brief enquiry, with witnesses including Google, Open Rights Group chief Jim Killock, tech ethicist Luciano Floridi — who is also on the Google committee that’s helping the search engine figure out how to apply the May ruling by the Court of Justice of the European Union — justice and civil liberties minister Simon Hughes, and Steve Wood from the Office of the Information Commissioner.
A key problem identified by the committee and its witnesses lies in the EU court’s definition of “data controller.” This is a tricky one: the 1995 Directive that the Court was interpreting is certainly out of date, which is why it’s about to be replaced, but many witnesses (and I, for that matter) think it fair to classify Google as a data controller, as the Court did (against the advice of the EU Advocate General). Although it does so at scale and in a largely automated fashion, Google has some control over people’s personal data.
What the committee was particularly concerned about, though, was the possibility of the new Data Protection Regulation classifying search engine users as data controllers. The new Regulation (as set out by former EU justice chief Viviane Reding) aims to let people legally compel other people to take down personal information about them. This, incidentally, would be much closer to a real “right to be forgotten” — what’s going on now is more a “right to be de-linked,” as the source material can stay up.
I must say, even though I sympathize with the aims of the “right to be forgotten” — as I have repeatedly written, I think technological changes have made things unforgettable in a way that is frequently unjust in practice — I also agree with some of the committee’s conclusions.
It would clearly be absurd to classify search engine users as data controllers. The committee also noted that the current Italian presidency of the Council of the European Union seems to think that the Court’s ruling back in May should inform the new Regulation. This too is absurd — that ruling was (correctly) based on old definitions in a Directive that’s demonstrably out of date (yet still in force), and the new Regulation should reflect new realities, with an eye on the future.
The internet paradox
However, I also think the committee is wrong in some respects. Regarding the implications of the new Data Protection Regulation, were it to pass as Reding intended, the committee said that the mandate for startups to “incorporate ‘privacy by design’ and to bear in mind what impact the technology and business methods they employ will have on the privacy of individuals… might result in many SMEs not getting beyond the start-up phase.”
Privacy by design is, in my opinion, not only an essential consideration but also something that could act as a positive differentiator for European startups. Adhering to these principles should not be a costly exercise, particularly when you consider that we’re talking about maintaining fundamental rights. Also, although this element sits in the upcoming Data Protection Regulation, which would apply directly rather than being transposed into national law, the draft obligation is qualified by regard to the state of the art and the cost of implementation. There’s flexibility in this approach.
Ultimately, though, everyone involved in this process is stuck with a fundamental paradox, one that I tried to address last week, and one that no one seems able to solve — the internet’s very nature pushes a certain transmit-it-all, retain-it-all ethos that may be impossible to stop, but that also flies in the face of privacy principles held very deeply by many countries and citizens.
I want to see people’s fundamental rights strengthened, not weakened through the wholesale removal of the “right to be forgotten” or the “right to erasure,” as proposed by the Lords committee. At the same time, like everyone else, I don’t want to see the creation of unworkable, and therefore bad, law.
Here’s hoping that someone figures out how to make that law workable without letting down those people whose lives are destroyed by random comments they made online as kids, or who have no legal grounds to get web services to delete their data when they ask. Emotive stuff, yes, but that’s ultimately what’s at stake here.
Somebody, please, find a realistic way to fix this.