EU search results censorship is about our irresponsibility with data, not Google’s
Last week’s European Court of Justice ruling that a Spanish citizen (and, by extension, all European Union citizens) could require Google to remove search results pointing to content about that person was not the “clear victory” for online privacy protection that some have called it. Quite the opposite.
To recap, a 59-year-old lawyer – Mario Costeja – took Google to court in an attempt to expunge search results from the organisation’s databases. Specifically, search results which brought up a news article that stated he had been forced to sell his house in 1998 to pay debts. At first glance, this seems entirely reasonable – what happened 16 years ago under unfortunate circumstances should have no bearing on whether or not Costeja can acquire new clients today.
However, as many have pointed out – including an excellent take on the subject from GigaOm – the “right to be forgotten” is hard to reconcile with the principles of free speech and an open society.
The ruling targets search results and not the content they point to – and that is a pretty clever approach. Search results have become almost analogous to domain names. Deregistering a domain name practically takes a website down because the human-friendly link to the associated server’s IP address makes visiting the website difficult. Similarly, because search engines drive so much traffic to content, removing the search results that point to that content removes a common route to it.
So under the EU ruling, the original article remains online, but Google can’t point to it. Effectively, that will make it invisible online.
To be clear, the decision doesn’t give any European citizen the right to force Google to remove any search result pointing to information about him or her. According to Google, among the hundreds of requests it received over the weekend after the ruling there were more than a few insalubrious attempts to hide skeletons from the public: a politician with a criminal record, a man found guilty of storing child porn on his computer, and so on. The Court’s ruling means that whenever an application is made to a search engine, a decision must be taken balancing an individual’s right to privacy against the public’s right to know. As GigaOm puts it:
… the ruling suggests a request for removal must concern a link that is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purpose for which they were processed and in the light of the time that has elapsed.” Courts and regulators will also have to balance the person’s desire for privacy against the “interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life.”
The first challenge with this ruling is that it will almost certainly be impossible to implement effectively, if at all. Google has reportedly started receiving a stream of requests to remove search results pointing to bits of personal information online, and it won’t be long before the floodgates open if the ruling is implemented across the EU. The process can’t be automated: each request has to be assessed manually, and the game of “Privacy or Right to Know” must be played every time.
And who gave Google (or Yahoo! or Microsoft et al) the power to decide what should and shouldn’t be in the public domain?
After the ruling, Google circulated a media release which I happen to agree with:
Whilst we at Google agree there are important issues to address around personal information (e.g. issue of spent/expunged convictions) (a) Google is not best placed to make these decisions and (b) any regime should not open the door to the widespread removal of information. The right to know important information — from something as simple as a bad review for a plumber to something more fundamental like a would be politician’s background — is important in a free society.
Although the process may be streamlined to a point, I don’t see how this is a feasible solution to the underlying problem, namely that personal information about people is available online and doesn’t necessarily present a positive perspective on those people. Rather than create more efficient methods for having search results removed, we should ask why problematic content is published to the web in the first place.
The real problem isn’t that Google, Microsoft, Yahoo or some other company has control over our personal information. It is that we have given our personal information to these companies and then abdicated our responsibility for how it is used. We ostensibly have a say over how our personal information is processed, through the privacy policies we agree to and the privacy controls available to us – but how conscious are we when we tick that box or adjust those settings?
For the most part we rush to submit as much information about ourselves as we can because we want to use these services, without much thought about the implications for us down the line. Then, when we feel our trust has been betrayed, we are outraged, the object of our frustration changes something, and we fall back asleep. We find ourselves in a position where vast multinational companies hold tremendous amounts of information about us because we gave it to them with our blessing – thanks, even. Perhaps we should instead consider whether handing over so much really serves us. Do we really need to disclose so much? Should we not insist on other ways to share that don’t leave us exposed?
On the other hand, perhaps we should give up this vestigial notion of privacy and adapt to living in a far more transparent society, where we not only take responsibility for our activities but also acclimatise to greater scrutiny. At the moment we are straddling both worlds, and both sides are plagued by the same disease: a chronic inability to take meaningful responsibility. Instead, we look to Google and the other companies we hand our personal information to, and then punish them for taking it from us.