In a candid blog post today, Twitter gave an update on its ongoing effort to tamp down on hateful speech, threats, and other abusive behaviors on its platform.
It says that in the first three months of this year, it suspended 100,000 accounts for creating new accounts following an initial suspension (a 45% year-over-year increase) and that it flagged three times more abusive accounts within 24 hours of a report. It also revealed that, thanks to machine learning and other automated techniques, 38% of abusive content on Twitter is now “surfaced proactively” for its team of human reviewers, and that it has observed 16% fewer abuse reports after an interaction from an account the reporter doesn’t follow.
“The same technology we use to track spam, platform manipulation and other rule violations is helping us flag abusive Tweets to our team for review,” wrote Twitter vice president Donald Hicks and director
This article was originally published on VentureBeat.