Google can use machine learning to tell you if your comments are toxic

Take a look at the comment sections that still exist around the web and chances are high that you’ll run across toxicity in some form or another.
In an attempt to reduce the number of abusive comments in discussions, Google and Jigsaw have partnered to create Perspective.
This piece of software uses machine learning to identify comments which may be considered off-colour.
The plan for Perspective is to make the API available to publishers so that it is easier to sort toxic comments from those which contribute to a discussion.
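The article doesn't show what that sorting looks like in practice. As a rough sketch, based on Google's published Comment Analyzer interface, a publisher might score each comment and route the high-scoring ones to a moderation queue. The endpoint URL, request shape, and the 0.8 threshold below are assumptions for illustration, not details from the article:

```python
# Hypothetical sketch of publisher-side comment triage using Perspective.
# A real integration would POST build_request(...) as JSON to ANALYZE_URL;
# here we only build the payload and process responses.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")  # assumed endpoint

def build_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0-to-1 summary score out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def triage(comments_with_responses, threshold=0.8):
    """Split (comment, response) pairs into (likely fine, needs review)."""
    fine, review = [], []
    for text, resp in comments_with_responses:
        (review if toxicity_score(resp) >= threshold else fine).append(text)
    return fine, review
```

A moderator dashboard could then surface only the `review` list, which is essentially the workflow the article describes at The New York Times.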
“To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers,” Jigsaw president Jared Cohen said in a blog post.
“Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.”
Perspective is currently being tested with The New York Times, where moderators are able to sort through comments faster.
The technology could also be implemented in the comments section of a website to show commenters how their words might be seen as toxic, should that be the case.
We hope to see this little bit of software gain some traction around the web. Who knows, perhaps telling the toxic trolls that infest comment sections that what they are doing is wrong, as they’re doing it, will make them delete that nasty comment about your hairstyle.
Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.