Deepfakes are getting better and better as the technology is refined and, while that is undeniably cool, fake video is also a real cause for concern.
Facebook appears to be of the same mind, as the social network has announced it will ban deepfakes from its platform.
Facebook’s vice president of Global Policy Management, Monika Bickert, explains that deepfake video presents a challenge for Facebook and other social network operators.
The VP says that Facebook has been investigating deepfakes and the technology behind them together with more than 50 experts. This research has informed the direction of Facebook's policy regarding deepfakes.
The social network says that “manipulated media” will be removed if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Facebook goes on to say that videos or images created as a parody or satire will not have this policy applied to them.
Interestingly, deepfake content that falls outside these criteria won’t necessarily be removed, though it can still be flagged by fact-checkers and will be rejected if used in advertising.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” says Bickert.
“Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech,” added Bickert.
Of course, identifying deepfakes isn’t all that easy, which is why Facebook has been investing in ways to identify manipulated media.
The social network launched the Deepfake Detection Challenge in 2019, which invites developers to create tools and technologies that can identify manipulated media. The fruits of that labour will be seen in March.
Facebook is also working with Reuters to train journalists to identify deepfakes. The course is free and well worth going through even if you aren’t a journalist.
“As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact,” concluded Bickert.