What’s wrong with automatically scanning Twitter for suicidal tweets?

As I noted when reporting yesterday’s launch of Samaritans Radar — a Twitter app that alerts people to potentially suicidal tweets made by those they follow — a lot of people really don’t like this idea, particularly those with experience of mental health issues.

The Samaritans are the leading suicide prevention charity in the U.K. and Ireland, providing valuable and much-used counseling services. The new service that has caused so much outrage scans Twitter to help identify vulnerable people, based on keywords and phrases suggesting depression. When it spots a tweet saying, for example, “I want to end it all”, it emails the subscriber who follows that account, leaving it up to them to take further action.
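The Samaritans haven’t said exactly how the matching works beyond “keywords and phrases”, but conceptually it is simple phrase-spotting over followers’ timelines. Here is a minimal sketch of that idea in Python – the phrase list, addresses and function names are all hypothetical illustrations, not the charity’s actual code:

```python
# Illustrative sketch of keyword-based tweet flagging; NOT the Samaritans'
# actual implementation, which has not been published.
import smtplib
from email.message import EmailMessage

# Hypothetical phrase list -- the real app's lexicon is unknown.
DISTRESS_PHRASES = [
    "want to end it all",
    "hate myself",
    "no reason to go on",
]

def is_flagged(tweet_text: str) -> bool:
    """Return True if the tweet contains any distress phrase (case-insensitive)."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

def alert_subscriber(subscriber_email: str, author: str, tweet_text: str) -> None:
    """Email the subscriber who follows the tweet's author. Illustrative only."""
    msg = EmailMessage()
    msg["Subject"] = f"Radar alert: a tweet from @{author} may be a call for help"
    msg["From"] = "radar@example.org"        # placeholder sender address
    msg["To"] = subscriber_email
    msg.set_content(f"Flagged tweet from @{author}:\n\n{tweet_text}")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)
```

Even this toy version makes the critics’ false-positive worry concrete: naive substring matching cannot tell a genuine cry for help from sarcasm, song lyrics or a discussion of someone else’s crisis.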

I find the episode an excellent example of how modern web services come up against existing data protection law in the U.K. and the EU – from which British data protection law is derived – and a worthwhile case study to bear in mind when thinking about how these laws will evolve. As you can see from yesterday’s piece, I initially viewed the Samaritans Radar project positively. However, several of the excellent points people have raised over the last day have changed my mind.

Let’s begin with some of the most salient observations.

Backlash observations

Jon Baines noted that the Samaritans are processing sensitive data, which is “afforded a higher level of protection under the Data Protection Act 1998 (DPA).” He suggested that this processing doesn’t meet any of the conditions that allow such data to be legally processed, except perhaps the one that states: “The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject.” Even there, though, the data comprises “not just the public tweet, but the ‘package’ of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.”

Susan Hall, meanwhile, reckoned the app may fall foul of a clause in the DPA that lets people object to having their data processed by purely automatic means, if it leads to the controller (the Samaritans) making a decision “which significantly affects that individual.” She also reckons, as do I, that the right to “object to processing that is likely to cause or is causing damage or distress” could apply as well, depending on the circumstances.

“Latent Existence” criticized the Samaritans’ response to the outrage, which literally read: “If you use Twitter settings to mark your tweets private #SamaritansRadar will not see them.” The blogger wrote: “The idea that people should lock their account to avoid something is one that is also frequently used to defend harassment and to defend doing nothing about harassment.” The big issue here is consent – the Samaritans only get the consent of those who opt into having their followees monitored, and there’s no way for people to opt out of having their “depressed” tweets flagged up.

Adrian Short characterized this as a “surveillance system… that will let almost anyone be alerted when almost anyone on Twitter might be particularly vulnerable.” He provided an interesting run-through of the various scenarios that could play out – false positives, false negatives and so on, with the tool being used by both friendly and antagonistic players – and concluded that “it’s very hard to see how Samaritans Radar will have an overall positive effect.”

As with other posts, that of “Delusions of Candour” discussed the public/private nature of tweets and the distinction between responding to a distressed tweet and setting up an automated system for doing so: “The latter feels invasive and intrusive. It’s not dissimilar to the contrast between bumping into a friend in the high street, and following that friend down the high street so you can engineer an encounter. There is also a risk that this app could be used to target vulnerable individuals; I have at least one friend who is outspoken about her mental illness and receives all kinds of abuse as a result, even (sometimes especially) when she is in crisis.”

Sarah Myles shared an email she sent to the Samaritans, bringing up the crucial matters of autonomy and, again, consent: “Vulnerable people need to feel that they are calling the shots with regard to their own wellbeing. Samaritans Radar sweeps that away, simply by having no ‘opt-in’ facility…. By launching software that specifically purports to be activated ‘discreetly’, which a user’s followers will have no knowledge of until they receive a communication checking up on them, you are sending the message that vulnerable people cannot be trusted to make their own choices about who to communicate with.”

“Bear Faced Lady” had similar concerns: “I never expected when I first joined Twitter in 2009 that it would become a trusted and safe space for me to talk about the depths and despair of my mental health but it has. It has for many. It’s a hard, sad fact that people kill themselves. That they genuinely see no place for themselves in the world, or the world they inhabit is just too damn hard. What is harder is to accept that it is their right to seek help or not. You can show them support. You can be a friend. You can tell them you care. You really don’t need an app for that.”

Opting in, opting out

I don’t believe that informed consent is possible in the case of Samaritans Radar. It would entail people opting into having their tweets monitored for suicide risk, and I suspect that people feeling that low would sooner approach an organization such as the Samaritans directly than drag people who might be near-strangers into the equation.

Consent is not a blanket requirement under current British and European data protection law, though it is one of the aforementioned conditions that can establish the legality of processing sensitive personal data. That will change under the incoming revamp of EU data protection law, which will explicitly require controllers (such as the Samaritans in this case) to gain the data subject’s consent for any processing of their personal data – for specified purposes, no less.

It’s difficult to overstate the impact of this shift, if indeed the new data protection rules clear the final hurdle of member states’ approval. The consent requirement for the processing of personal data will affect every app that sucks in the user’s contact list (the people on that list will technically need to give their permission), every service that solicits data for one purpose then uses it for another, and many analytics apps that plug into services like Twitter, such as Samaritans Radar.

I’m torn on this one. On the one hand, this is not how online services currently work – they constantly send data between themselves to achieve various goals and to try out new models, and the new rules may slow the pace of innovation. On the other hand, it’s right that people should retain control over their data and only let others process it with permission, for agreed purposes. That’s not just a matter of theoretical ethics; the Samaritans Radar debacle is an excellent example of how the principle makes a real difference.

But that’s future legislation, and the current legal landscape should be enough to make the Samaritans reconsider what they’re doing. At the end of launch day, the charity said it had just over 1,500 sign-ups, yet it was already monitoring around 900,000 Twitter feeds – each subscriber exposes everyone they follow, roughly 600 accounts apiece on average. It said those feeds had already generated 258 alerts, of which 10 saw “subscribers” confirm that they were worried about the tweeter. The same network effect that scales the tool so rapidly also scales its potential for damage.

As everyone who has addressed this topic in the last day has stressed, the Samaritans do terrific work, and no one doubts that Radar was born of the very best intentions. But its practical application is at least partly unethical, and quite possibly illegal too. Unless the charity can find a way to fix these issues – allowing people to opt out of being monitored would be the bare minimum – it should end this project immediately.

UPDATE (9.50am PT): The Samaritans now say they are letting people join an opt-out “whitelist” by direct-messaging them on Twitter — they claim to have disabled the feature that blocks DMs from strangers. They also say they themselves do not receive alerts about worrisome tweets. As they told me yesterday, though, they do briefly keep a record of tweets that were flagged up but not acted on, to better train their algorithms.