Google can use machine learning to tell you if your comments are toxic
Take a look at the comment sections around the web that still exist and the chances are high that you’ll run across toxicity in some form or another.
In an attempt to reduce the number of abusive comments in discussions, Google and Jigsaw have partnered to create Perspective.
This piece of software uses machine learning to identify comments which may be considered off-colour.
The plan for Perspective is to make the API available to publishers so that it is easier to sort toxic comments from those that contribute to a discussion.
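The article doesn't go into the API itself, but Perspective is exposed as a REST endpoint (the Comment Analyzer API). The sketch below shows how a publisher might score a single comment; the endpoint URL and request shape follow Google's public documentation for the API, and `YOUR_API_KEY` is a placeholder you would replace with a real key.

```python
import json
from urllib import request

# Comment Analyzer endpoint from Google's Perspective API documentation.
# YOUR_API_KEY is a placeholder for a real API key.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_payload(text):
    """Build the JSON body Perspective expects: the comment text plus
    the attributes we want scored (here, TOXICITY)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(text):
    """POST a comment to Perspective and return its toxicity score,
    a value between 0 (unlikely toxic) and 1 (very likely toxic)."""
    body = json.dumps(build_payload(text)).encode("utf-8")
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A moderation queue could then flag any comment whose score exceeds a threshold the publisher chooses (say 0.8) for human review, which is the kind of sorting the API is meant to make easier.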
“To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers,” Jigsaw president Jared Cohen said in a blog post.
“Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.”
Perspective is currently being tested with The New York Times, where it is helping moderators sort through comments faster.
The technology could also be implemented in a website’s comments section to show a commenter how their words could come across as toxic, should that be the case.
We hope to see this little bit of software proliferate around the web. Who knows, perhaps telling the toxic trolls that infest comment sections that what they are doing is wrong, as they’re doing it, will make them delete that nasty comment about your hairstyle.
About Author
Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.