The latest move from Twitter in its seemingly endless quest to stop harmful content from being shared on the platform involves blocking links to such content hosted elsewhere.

In a policy update published this week, Twitter outlined how it deals with harmful links.

While most of the update looks at dealing with phishing and other cybercrime, Twitter has also taken a hard stance against sharing links to content that would violate Twitter’s policies if shared on its site.

This includes links to websites that feature:

  • Terrorism and violent extremism
  • Child sexual exploitation
  • Illegal or certain regulated goods and services
  • Hateful conduct
  • Violence
  • Private information
  • Non-consensual nudity
  • Content that interferes with civic and election integrity
  • Hacked material

What constitutes each of the above is made clear over on Twitter’s Help Center.

Many of you may be asking the same question we did – how will Twitter know that a link to a website is harmful?

The social network says that it receives information about links from multiple sources including:

  • Third-party vendors which specialise in countering spam and malware
  • Collaborative information sharing with peers and NGO partners
  • Internal technology and tools
  • Reported tweets

Twitter says it reviews flagged links to determine whether they are harmful and, if so, how harmful they may be, before applying a warning label.

Based on several considerations, some links may be shown behind a simple warning, while more egregious content may not be allowed to be shared on Twitter at all.
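The tiered outcome described above, a warning for some links and an outright block for the worst content, could be sketched roughly as follows. Note that the category names, tiers, and function here are purely illustrative assumptions for the sake of explanation, not Twitter's actual implementation.

```python
# Illustrative sketch only: category names and tiers are hypothetical,
# not Twitter's real moderation code.

# Most egregious categories: links cannot be shared at all.
BLOCK_CATEGORIES = {"child_sexual_exploitation", "terrorism", "malware", "phishing"}

# Less severe categories: link is shown behind an interstitial warning.
WARN_CATEGORIES = {"hacked_material", "graphic_violence"}

def moderate_link(labels: set) -> str:
    """Decide an action for a link given labels gathered from
    vendors, partner reports, internal tools, and user reports."""
    if labels & BLOCK_CATEGORIES:
        return "block"  # sharing is disallowed entirely
    if labels & WARN_CATEGORIES:
        return "warn"   # shared, but behind a warning label
    return "allow"

print(moderate_link({"phishing"}))         # block
print(moderate_link({"hacked_material"}))  # warn
print(moderate_link(set()))                # allow
```

The key design point, reflected in the policy itself, is that severity decides the outcome: the worst categories are blocked outright rather than merely labelled.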

What’s more, accounts that continue to share content Twitter blocks may be subject to suspension. In some instances – such as sharing child sexual exploitation content – accounts will be suspended under a zero-tolerance policy.

We urge readers who are on Twitter to read through the updated policy to make sure they don’t unknowingly lock themselves out of their accounts by sharing something they shouldn’t.