The spread of misinformation on Facebook remains a serious problem, one the platform's past attempts have failed to address.
Whether that changes with the social media organisation’s latest initiative remains to be seen, but Facebook founder and CEO Mark Zuckerberg recently wrote a lengthy blog post explaining the company’s position on engagement and how it plans to limit the spread of sensationalist and provocative content on the platform.
This “borderline content”, as it is termed, will not be removed from Facebook entirely, with Zuckerberg once again citing his previous points about censorship and freedom of speech.
He also adds that controversial content naturally attracts more engagement from Facebook users than other types of content, which suggests Zuckerberg is abundantly aware that, although provocative posts could pose a problem for users, they still ensure a good number of clicks for Facebook.
“An important question we face is how to balance the ideal of giving everyone a voice with the realities of keeping people safe and bringing people together,” writes the Facebook founder.
“What should be the limits to what people can express? What content should be distributed and what should be blocked? Who should decide these policies and make enforcement decisions? Who should hold those people accountable?” asks Zuckerberg.
Facebook’s new solution to tackling borderline content is simply to adjust what its algorithms currently look for, with its AI identifying content that is needlessly controversial and then limiting how far it can spread on the platform.
Examples of this include “photos close to the line of nudity, like with revealing clothing or sexually suggestive positions”, or “posts that don’t come within our definition of hate speech but are still offensive,” explains Zuckerberg.
In both examples given, there is certainly some room for interpretation by the algorithm, and we suspect there will be a few incorrect post removals as the AI begins to learn what to look for with greater accuracy.
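Facebook has not published any implementation details, but the demotion approach Zuckerberg describes can be sketched roughly as follows. Everything here is an assumption for illustration: the function name, the idea of a single 0-to-1 "borderline score" from a classifier, and the quadratic penalty curve are all hypothetical, not Facebook's actual system.

```python
# Hypothetical sketch of the demotion curve described in the post:
# the closer a post's classifier score gets to the policy line,
# the more its distribution is reduced, rather than removing it outright.

def distribution_weight(borderline_score: float, policy_line: float = 1.0) -> float:
    """Return a multiplier applied to a post's reach in the news feed.

    borderline_score: 0.0 (clearly benign) to 1.0 (at the policy line,
    where content would be removed). The scale and curve are assumptions;
    Facebook's real model and thresholds are not public.
    """
    if borderline_score >= policy_line:
        return 0.0  # over the line: removed, not merely demoted
    # Penalty grows sharply as the score approaches the line, inverting
    # the natural engagement curve Zuckerberg describes.
    return (1.0 - borderline_score) ** 2

# A benign post keeps most of its reach; a near-the-line post is
# heavily demoted even though it was not removed.
print(round(distribution_weight(0.1), 2))   # high reach
print(round(distribution_weight(0.9), 2))   # heavily demoted
```

The point of a curve like this is the one Zuckerberg makes himself: without intervention, engagement rises as content approaches the line, so the demotion has to be strongest exactly where the temptation to click is highest.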
We’re sure Facebook’s algorithms can indeed decrease how often a story appears in news feeds, but will they be able to stop users from naturally spreading viral or fake content they encounter on the platform?
[Image – Facebook]