Responses to the Report Abuse Button and other Technical Suggestions

By Tim Hardy

Freedom of speech for abusers means curtailed speech for victims.

(@newsmary, Twitter’s freedom of speech)

Many point out that misogyny is a social problem not a technical one. However, it is not enough to do nothing until we have full equality not least because we will never achieve equality when women are silenced by threats. Technical solutions can help create safer spaces in which women don’t feel intimidated when they speak out.

There are clearly potential problems with a report abuse button, as many thoughtful people have pointed out. [Update] Everyday Victim Blaming is currently inviting people to submit their Concerns About the Report Abuse Petition; please read and submit your own. In case I didn’t make it clear in my post, I do not support a button that automatically suspends accounts; I support a button that makes it easier to fill in the already existing complaint forms, which are then moderated by a human being. Nonetheless, there are still problems with this, as many have highlighted.

One major worry is that making it easier to report abuse will lead to more false reports overall, which will then make it harder to respond to immediate threats of violence, rape or sexual abuse.

Another – a fear that cannot be overstated – is that an easier reporting mechanism will be misused by white, cis, non-disabled women and men to silence minority voices, or more generally to shut down political debate.

As well as the excellent Twitter’s freedom of speech by @newsmary quoted above, two excellent posts responding to the idea of the “report abuse” button (and related campaigns) are Against a Twitter “report abuse” button by @stavvers and Feminists boycotting Twitter is not the way to end trolling by @bmagnanti.

One technical solution, a “panic mode”, has been proposed by @flayman and highlighted by @newsmary and @higgleDpee.

There’s a nice summary of this proposal in Twitter and Report Abuse Buttons – Pitfalls and Solutions by @MattBluefoot.

[Update] @flayman has written a summary of the proposal too: Panic mode – my proposal to curb Twitter abuse.

As @Dymaxion points out, Twitter and other internet services already tackle social problems with technology –

Vicious speech acts used to oppress women can be mitigated just as spam is. Technical solutions are obviously not the whole answer, but without them social and political change will be far harder to achieve.

If reporting abuse is not a scalable solution then what alternatives can we think up? This is a difficult problem but the most interesting and rewarding problems always are. The question is, do we have the political will to do so?

[Update] A further related proposal, by @PennyRed and @Dymaxion, is for shared, curated block lists – so people can coordinate against harassers by subscribing to an app or service that automatically blocks people identified by their community of peers. This process is already used by @The_Block_Bot, whose code has kindly been made available so others can run a similar service. It would not ban abusers, but it would make life easier for those who are repeatedly harassed.
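The mechanics of subscribing to a shared block list can be sketched in a few lines. This is a minimal illustration, not The Block Bot’s actual code: the in-memory lists and the `block` callback are assumptions standing in for a real HTTP fetch of the curated list and a call to Twitter’s blocking API.

```python
# Sketch of applying a shared, curated block list to one subscriber's account.
# The list format and the block() callback are illustrative assumptions;
# a real service would fetch the list over HTTP and call the Twitter API.

def accounts_to_block(shared_list, already_blocked):
    """Return the subscribed list's accounts not yet blocked locally."""
    return sorted(set(shared_list) - set(already_blocked))

def apply_block_list(shared_list, already_blocked, block):
    """Block every listed account the subscriber hasn't blocked yet."""
    newly_blocked = []
    for user_id in accounts_to_block(shared_list, already_blocked):
        block(user_id)  # stand-in for a real API call to block user_id
        newly_blocked.append(user_id)
    return newly_blocked
```

Keeping the diff against `already_blocked` means the service only issues requests for new entries, so re-running it periodically is cheap.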

[Update] Other links worth reading:

Two Must Read Articles on Proposals to Tackle Twitter Abuse

Twitter abuse press response / also at Should Twitter be ‘policed’?

Twitter abuse: let’s debate what the police are doing

Misogyny and Twitter – confusing cause with medium

4 thoughts on “Responses to the Report Abuse Button and other Technical Suggestions”

  1. No one seems to be mentioning a simpler option: A premium, paid-for service. I’m sure this sort of abuse is the result of free accounts. I want to be a customer, not fodder for advertisers. It doesn’t have to be a lot. £5-£10 a year would be enough.

    Furthermore, Twitter is a private American company, subject to the state of California. Is there a UK equivalent? Would it be hamstrung by patents and copyright?

    • Thanks Andrew. I have seen this mooted but I haven’t included this proposal in my post because I don’t agree with it.

      Twitter has been used to galvanise and coordinate political activism around the world, including in repressive regimes.

      Introducing payments, however small, would tie accounts to individuals, making them easy to identify.

      Anonymity is part of free speech. Even in ostensible democracies, whistleblowers need initial protection even if they may choose to go public later.

      Payments would also make it prohibitively expensive to create new accounts for campaigns or actions making it easier to silence dissident voices.

      The jury is still out on how much social media influences campaigning, but I cannot support this measure.

  2. I wrote @the_block_bot … The main issue for me is that many people are either put off joining Twitter completely, or they turn on protected tweets, which makes it almost impossible for them to network – the whole point of Twitter! Meanwhile, those being abusive whine about freedom of speech and about cutting *them* out of discourse… The irony leaves an unpleasant metallic taste in your mouth after arguing this for the hundredth time.

    One thing to note about the block bot – which is impossible to describe within Twitter’s limited character count – is its origin. There is a rift in the atheist-sceptic community caused by the massive over-reaction to Rebecca Watson saying “guys, don’t do that”, otherwise known as elevator-gate.

    So there are many who obsessively hate her blog network (it’s mainly feminist) and even have their own forum where they discuss everything each blogger and commenter on those networks says and does – basically a forum for cyber-stalking! Hence a lot of the blocks are these obsessives and their various cheerleaders, so some of those blocked would never be on the list if it weren’t for this bizarre stalking and tribalism.

    This is part of the reason for splitting the list into three levels. Level 3 has many people who are just tiresome and will trot out a load of harassment-minimisation arguments; Level 2 is more the out-and-out MRAs and abusive people; Level 1 is stalkers, fakers, and the most abusive and bigoted users.

    The focus on a community is not surprising, as each group – atheists, sceptics, gamers, trans* communities, etc. – has its own coterie of abusive people, with some intersection. We do expose our list via JSON/CSV, and if more lists are created it would be nice either to have a central registry so users can sign up to multiple lists at once, or to code each implementation so users can subscribe to multiple block lists. The more people reporting users, the more likely abusers are to be blocked before they can abuse anyone else.
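Combining several exported lists is mostly a matter of taking the union and resolving level conflicts. A rough sketch, assuming a simplified JSON shape (`{"id": ..., "level": ...}`) that is illustrative only and not The Block Bot’s actual export schema:

```python
import json

# Sketch: merge several block-list JSON exports, keeping each user's
# strictest classification (Level 1 = worst), then filter to the levels
# the subscriber has opted into.

def merge_block_lists(exports, max_level=2):
    """Union the exports; on conflict keep the lowest (strictest) level,
    then drop entries above max_level."""
    strictest = {}
    for raw in exports:
        for entry in json.loads(raw):
            uid, level = entry["id"], entry["level"]
            if uid not in strictest or level < strictest[uid]:
                strictest[uid] = level
    return {uid: lvl for uid, lvl in strictest.items() if lvl <= max_level}
```

A central registry could serve exactly this merged view, letting a user subscribe once and receive the union of every list they trust.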

    I have an EC2 micro-instance, and I run the block bot plus two blogs off it. Performance-wise it could easily handle 2K users and 10K blocks at least (hopefully not that many abusive users! After 6 months we have 600). With a bit of parallelisation it could likely handle many more users, as the current bottleneck is waiting for HTTP REST requests to finish serially.
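Because the bottleneck is I/O (waiting on REST responses) rather than CPU, a thread pool is the natural fix. A minimal sketch using Python’s standard library, with `block_user` as a stand-in for the real API call:

```python
from concurrent.futures import ThreadPoolExecutor

# Issue block requests concurrently instead of one at a time.
# block_user is a stand-in for a real (I/O-bound) REST call, so threads
# overlap the waiting rather than competing for CPU.

def block_all(user_ids, block_user, max_workers=8):
    """Run block_user over user_ids on a thread pool; results keep input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(block_user, user_ids))
```

With 8 workers, a batch of serial requests that each spend most of their time waiting on the network finishes roughly 8× sooner, subject to whatever rate limits the API imposes.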
