Dealing with the Twitter mob: Would crowdsourcing block lists make things better or worse?


Like any pseudonymous social platform, Twitter is particularly susceptible to troll-like behavior, from the merely annoying to the truly disgusting, as Zelda Williams, the daughter of actor and comedian Robin Williams, found out following the death of her father on Tuesday. Twitter’s blocking and banning features, meanwhile, have been widely criticized as ineffective, if not completely useless. Glenn Fleishman, publisher of The Magazine, argues that “collaborative blocking” services could help by crowdsourcing a solution to this problem.

In an essay at Boing Boing, Fleishman says Twitter’s rules on harassment “give individuals and groups of people asymmetrical power as long as they are persistent and awful,” and argues that the company’s definition of abuse is too restrictive, and its response to such complaints usually inadequate. He quotes Samantha Allen, a fellow at Emory University, who was subjected to repeated abuse earlier this year after she wrote about video-game journalism:

“As it’s currently built, Twitter wins during harassment campaigns and we lose. We have to accept working and socializing in an unsafe environment because Twitter doesn’t want to permanently ban users or implement more drastic penalties for abuse.”

Crowdsourcing a troll list

Twitter has come under fire from a number of other users who have been on the receiving end of similar abuse by trolls, such as British journalist Caroline Criado-Perez, who received thousands of abusive messages, including death threats, which are specifically banned by Twitter’s terms of use. She complained that the company’s response was unhelpful and, in any case, took too long to arrive. Twitter later changed its “block” feature, but the changes were criticized by many of the users they were meant to help, so the company reversed them.


Fleishman’s suggestion is that groups of Twitter users collaborate on deciding whom to block or mute, via third-party apps and services: The Block Bot, an open-source project set up by a group of atheists who found themselves harassed for expressing their views; Block Together; and Flaminga, a third project still in alpha, which would allow friends to share block and mute lists. Fleishman says Samantha Allen used Block Bot after her experience and liked what she saw (or didn’t see):

“It’s definitely made Twitter more livable for me, at least in the short term. I know that it might end up blocking a handful of people that I wouldn’t otherwise want to block, but when you get the kind of unwanted attention that I regularly receive, you just have to accept that you have to make little sacrifices like that for your peace of mind.”
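Mechanically, these services are thin layers over Twitter’s public API: someone curates a list of user IDs, and the service applies it to each subscriber’s account through the standard block endpoint. Here is a minimal sketch of that idea in Python, assuming placeholder credentials and a made-up function name; it is an illustration of the approach, not the code any of these services actually runs.

import requests
from requests_oauthlib import OAuth1

# Twitter REST API v1.1 endpoint for blocking a user on behalf of
# the authenticated account.
BLOCK_ENDPOINT = "https://api.twitter.com/1.1/blocks/create.json"

# Placeholder credentials: a real service would hold OAuth tokens
# for each subscriber who opted in.
auth = OAuth1(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "SUBSCRIBER_ACCESS_TOKEN", "SUBSCRIBER_ACCESS_SECRET",
)

def apply_shared_blocklist(user_ids):
    """Block every user ID on the shared list for the subscriber's account."""
    for user_id in user_ids:
        resp = requests.post(
            BLOCK_ENDPOINT, params={"user_id": user_id}, auth=auth
        )
        resp.raise_for_status()

# The shared list itself is just a set of user IDs someone else curated.
apply_shared_blocklist([12345, 67890])

What the sketch makes clear is how little machinery is involved: the contentious part isn’t the API call, it’s who curates the list.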

As Fleishman points out in his post, it’s difficult — if not impossible — for a middle-class white male who isn’t gay or a member of some other marginalized group to appreciate what life online can be like for women, homosexuals, the transgendered and others who are routinely subject to abuse. They are the ones who have to deal with the downside of Twitter’s “free-speech wing of the free-speech party” approach to social-network management, and the company’s desire to keep the service as open and public as possible.

Who decides what a troll is?

But I have defended Twitter’s approach a number of times, even when the speech in question was offensive and abusive to certain groups, as it was in France, when a number of users posted homophobic and anti-Semitic messages that triggered a lawsuit against Twitter. So to me, the idea of a collaborative block list sounds like a good idea that could potentially go awry (although any crowdsourced feature in social media would probably fit that description).

My fear, and that of some others I have spoken to about it, is that these collaborative lists could create a slippery-slope problem and generate a kind of reverse mob effect, in which some users wind up being blocked by large numbers of people for reasons that aren’t quite clear, or that don’t really meet the standard of abuse and harassment. That’s exactly what Martin Robbins describes in a recent piece in VICE: not only has he been put on the block list, but so have users like New Statesman editor Helen Lewis and, somewhat shockingly, Caroline Criado-Perez, who was herself the target of sexist abuse and harassment.

Fleishman noted during a discussion of the idea on Twitter, as did blogging pioneer and ThinkUp co-founder Anil Dash, that block lists don’t actually ban anyone from Twitter; they merely block or hide those accounts for users who opt in to the feature or app. So where’s the harm? What difference does it make whether someone like Robbins is blocked by a small number of users for “trivializing a serious discussion,” or some other perceived slight against a particular group?

There’s no obvious damage being done to freedom of speech in this scenario, since it’s just a group of people deciding they don’t want to pay attention to someone. Unless Twitter decides to build those lists into its own blocking functions, there wouldn’t be a concern about them snowballing to the point where some users were denied access to a Twitter audience, and some have argued that this is better than having Twitter decide who gets banned. But I confess that I still find the idea troubling, even though I can’t quite put my finger on why.

Post and thumbnail images courtesy of Flickr user Tony Margiocchi and Shutterstock / aceshot1

6 Comments

taiganaut

The same arguments were waged over Realtime Blackhole Lists (RBLs) for email spam years ago.

Basically, it boils down to this: pick one or more RBLs you trust, and either use them outright or use them as input to a heuristic that weights multiple RBLs and decides whether or not to block.
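To make that concrete, here’s a rough sketch of the weighted version transplanted to shared Twitter block lists; the list names, weights and threshold are all made up for illustration.

# Hypothetical trust weights for the shared block lists you subscribe to.
BLOCKLIST_WEIGHTS = {
    "the_block_bot": 0.6,
    "friends_shared_list": 0.9,
    "some_other_list": 0.3,
}

def block_score(user_id, listings):
    """Sum the weights of every subscribed list that flags this user."""
    return sum(
        weight
        for name, weight in BLOCKLIST_WEIGHTS.items()
        if user_id in listings.get(name, set())
    )

def should_block(user_id, listings, threshold=1.0):
    """Block only when enough trusted lists agree."""
    return block_score(user_id, listings) >= threshold

# A user flagged by two trusted lists crosses the threshold (0.6 + 0.9 = 1.5);
# one flagged only by a low-weight list does not (0.3 < 1.0).
listings = {
    "the_block_bot": {12345},
    "friends_shared_list": {12345},
    "some_other_list": {67890},
}
assert should_block(12345, listings)
assert not should_block(67890, listings)

The point is that no single list gets to decide: a user is only blocked when lists you’ve chosen to trust agree.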

H.E.A.T.

The problem with having a group, or groups, of social networking citizens decide who should be blocked is that they themselves will soon become trolls. They will use their new form of online justice to harass other users into conforming to their universal ideas of behavior.

The simple fact that otherwise innocent bystanders may be blocked is enough to see this as a bad idea.

Have you ever read some of the reasons a person will block another user? A very large number of them have nothing to do with improper behavior: not commenting on photos, favoriting content without commenting, not responding to a private message, or simple disagreement on an issue.

I agree there are things to be done to improve the online experience, but relying on a bunch of self-appointed deputies of proper social behavior is definitely NOT the answer.

Dave Winer

Such a facility would also be used by trolls to victimize innocent people. I’ve seen stuff like that happen, in the early days of the blogosphere.


Lyndal Cairns

It’s clear to me that the current tools are not effective because they’re not being enforced, either because Twitter doesn’t have the manpower or it doesn’t have the interest. But you can’t crowdsource a blocklist without the risk – nay, certainty – that it will be misused by interest groups.

The answer might be a community of superusers, a la Wikipedia, who are even-handed, level-headed and experienced community managers, able to defuse a dangerous situation and make a quick call where community standards are clearly being violated. But for that, Twitter needs a more stringent set of community standards, and I’m not sure that fits with its modus operandi.

Ralph Haygood

Spot on. Twitter needs to get serious about enforcing its own terms of service, and in view of all the money it’s rolling in, there’s no excuse for not hiring enough people to do the job.
