Google can use machine learning to tell you if your comments are toxic
Take a look at the comment sections that still exist around the web and chances are high that you’ll run across toxicity in some form or another.
In an attempt to lessen the amount of abusive comments in online discussions, Google and Jigsaw have partnered to create Perspective.
This piece of software uses machine learning to identify comments which may be considered off-colour.
The plan for Perspective is to make the API available to publishers, making it easier to sort toxic comments from comments which contribute to a discussion.
“To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers,” Jigsaw president Jared Cohen said in a blog post.
“Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.”
Perspective is currently being tested by The New York Times, where it is helping moderators sort through comments faster.
The technology could also be implemented in a website’s comment section to show commenters, as they type, whether their words might be seen as toxic.
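As a rough sketch of how a publisher might triage comments with such an API: the snippet below builds a scoring request and flags high-scoring comments for human review. The endpoint, field names, and the 0.8 threshold are assumptions for illustration, and the response is stubbed so the example runs without an API key.

```python
# Hypothetical sketch of a publisher-side Perspective integration.
# Endpoint, request shape, and threshold are illustrative assumptions.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build a JSON body asking the service to score a comment for toxicity."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0-1 summary toxicity score out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def needs_review(response, threshold=0.8):
    """Flag a comment for a human moderator when its score crosses the threshold."""
    return toxicity_score(response) >= threshold

# Stubbed response in the assumed shape, so the sketch runs offline.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

print(toxicity_score(sample_response))  # 0.92
print(needs_review(sample_response))    # True
```

In practice a moderation queue would POST `build_request(...)` to the scoring endpoint and only surface comments where `needs_review` returns True, which is how scoring can help moderators sort comments faster.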
We hope to see this little bit of software gain some proliferation around the web. Who knows, perhaps telling the toxic trolls that infest comment sections that what they are doing is wrong, as they’re doing it, will make them delete that nasty comment about your hairstyle.
About Author
Brendyn Lotz