On Tuesday evening we spotted a tweet from the Twitter Support account.
The tweet alerted us to an experiment Twitter will be running for a limited time in which iOS users may receive a prompt when replying to a tweet.
We should note that this will only happen if the reply contains "language that could be harmful".
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Of course, there is no telling whether this feature will be rolled out to the broader Twitter community. We also don't know how Twitter is monitoring the language or what constitutes harmful language.
For instance, is using a swear word harmful if it's just an expletive, or is Twitter contextualising the language? The firm does have policies regarding hateful content, as well as its own rules around how the platform can be used, but it all comes down to how those rules are applied.
That said, Twitter has been fighting hatred on its platform for a long time. Last year it introduced hidden replies, which allow a user to hide replies they find annoying or off-topic.
As The Verge highlights, this feature seems less about banning users or removing misinformation from the platform and more about encouraging users to be nicer to each other.
Whether users will take Twitter's advice and edit their responses before posting remains to be seen.
We're keen to see the results of this test and whether it proves successful. Perhaps if folks think Twitter is watching what they say, they'll be less inclined to be racist, misogynistic or otherwise bigoted toward others.