Monday, October 2, 2017

"This researcher programmed bots to fight racism on Twitter. It worked."



"there may be limits to the effectiveness of top-down efforts by companies that run social-media platforms. In the short run, heavy-handed sanctions like account bans can actually embolden users who are censored. There is excellent evidence that this happens in China when the regime employs censorship.

A better option might be to empower users to improve their online communities through peer-to-peer sanctioning. To test this hypothesis, I used Twitter accounts I controlled (“bots,” although they aren’t acting autonomously) to send messages designed to remind harassers of the humanity of their victims and to reconsider the norms of online behavior... I sent every harasser the same message:

@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language

I used a racial slur as the search term because I thought of it as the strongest evidence that a tweet might contain racist harassment. I restricted the sample to users who had a history of using offensive language, and I only included subjects who appeared to be a white man or who were anonymous... 
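The sampling-and-reply procedure described above can be sketched in plain Python. Everything here is a hypothetical stand-in for illustration — the function names, the `offensive_tweet_count` threshold, and the `apparent_identity` field are all invented; the actual study used the live Twitter API plus manual coding of profiles:

```python
# Hypothetical sketch of the inclusion criteria and canned sanctioning
# message from the study. All names and thresholds are assumptions.

SANCTION_TEMPLATE = (
    "@{handle} Hey man, just remember that there are real people who are "
    "hurt when you harass them with that kind of language"
)

def eligible(user):
    """Apply the stated inclusion criteria: a history of offensive
    language, and a profile that appears white-male or anonymous."""
    return (
        user["offensive_tweet_count"] >= 3  # exact threshold is assumed
        and user["apparent_identity"] in ("white_male", "anonymous")
    )

def build_replies(harassers):
    """Return the single fixed message for each eligible subject."""
    return [SANCTION_TEMPLATE.format(handle=u["handle"])
            for u in harassers if eligible(u)]

# Mock accounts standing in for users surfaced by the slur search:
sample = [
    {"handle": "subject_a", "offensive_tweet_count": 5,
     "apparent_identity": "white_male"},
    {"handle": "subject_b", "offensive_tweet_count": 1,
     "apparent_identity": "white_male"},   # excluded: no history
    {"handle": "subject_c", "offensive_tweet_count": 7,
     "apparent_identity": "other"},        # excluded: identity criterion
]

print(build_replies(sample))
```

The point of the filter is internal validity: holding the message constant and restricting who receives it lets the bot's own apparent race and follower count be the only things that vary across conditions.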

Only one of the four types of bots caused a significant reduction in the subjects’ rate of tweeting slurs: the white bots with 500 followers... tweets from black bots with few followers (the type of bots that I thought would have a minimal effect) actually caused an increase in the use of racist slurs."



FB: i.e., the best thing allies can do is call people out: "If people call out bad behavior in a way that emphasizes the social distance between themselves and the person they’re calling out, my research suggests that the sanctioning is less likely to be effective." 
