Facebook has banned AI-manipulated videos known as deepfakes ahead of the 2020 US presidential election, though other misleading content will still be permitted.
The technology giant announced the new policy in a blog post written by the firm’s head of global policy management, Monika Bickert. She explained that damaging content designed to spread misinformation, such as doctored videos of Labour MP Keir Starmer or US House speaker Nancy Pelosi, will not be removed under the new policy.
Instead, only videos that qualify as deepfakes will be removed, even though the technology has so far been used to mislead viewers in very few incidents.
“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” she wrote.
“We are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes... This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
Ms Bickert said videos would be removed from Facebook and Instagram if they meet two criteria: they are edited in a way that makes it appear that someone “said words that they did not actually say”; and they are created using artificial intelligence that replaces content in a video, “making it appear to be authentic”.
Only content that meets these narrow criteria will actually be removed; videos verified to contain false or deliberately misleading information will be allowed to remain on the platforms.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad,” the blog post states. “People who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
The company claims this approach is “critical” to its strategy, arguing that leaving such videos up and allowing them to be shared among users means “providing people with important information and context”.
Facebook has previously said that content violating its policies will be allowed to remain if it is deemed newsworthy.
Nick Clegg, Facebook’s head of global communications, said in September: “If someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm.”