Why do we not innovate and treat abuse as though it were spam?
While Twitter's page of suggestions on how to help someone dealing with online abuse is nice, it shows how little Twitter has done to innovate in this area. Here are a couple of brief suggestions.
Take @PennyRed as an example of someone dealing with abuse.
- Allow @PennyRed to nominate moderators for tweets that mention her. She can choose to see only tweets that pass moderation, or to see all tweets except those the moderators determined abusive.
- Allow dynamic moderation and community watch – Let anyone subscribe to a special feed of tweets mentioning @PennyRed. Subscribers then see these tweets under a “community watch” tab, where they may also see tweets of other users they wish to help with Twitter abuse, such as @femfreq or @TheQuinnspiracy. In this feed an “abuse” button with no confirmation dialog is exposed to quickly report abuse. If the person reporting the abuse is someone @PennyRed follows, the weight of that report should be higher than one from outside her direct network. @PennyRed can then choose to filter her feed based on this information.
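The weighting idea above can be sketched in a few lines. This is a minimal illustration, not anything Twitter offers; all names (`abuse_score`, `filter_feed`, the weights and threshold) are hypothetical.

```python
# Sketch of weighted community-watch reports: reports from people the
# target follows count more, and high-scoring tweets land in an
# "abuse folder" rather than the main feed. All values are illustrative.

FOLLOWED_WEIGHT = 3.0   # report from someone the target follows
DEFAULT_WEIGHT = 1.0    # report from outside her direct network
ABUSE_THRESHOLD = 3.0   # score at which a tweet is filtered out

def abuse_score(reporters, followed_by_target):
    """Sum report weights; in-network reports carry more weight."""
    return sum(
        FOLLOWED_WEIGHT if r in followed_by_target else DEFAULT_WEIGHT
        for r in reporters
    )

def filter_feed(tweets, reports, followed_by_target):
    """Split mentions into the visible feed and a spam-like abuse folder."""
    visible, abuse_folder = [], []
    for tweet_id, text in tweets:
        score = abuse_score(reports.get(tweet_id, []), followed_by_target)
        (abuse_folder if score >= ABUSE_THRESHOLD else visible).append(text)
    return visible, abuse_folder
```

The key design choice is that the target never has to look at the reported tweets herself: the community does the flagging, and her own follow graph decides whose flags she trusts most.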
Basically, abusive content should land in something like a spam folder that we choose to look at when bored but rarely otherwise. That Twitter has not tried to innovate in this manner is surprising. The suggestions mentioned here are not a solution; online abuse is certainly something very different from spam. But attempting to find ways to help people deal with it would show a concerted effort at tackling the problem in some, in any, manner.