Meta announces new AI content policy

From May onwards, Meta will start labeling a wider variety of AI-generated content

Amanda Greenwood
April 10, 2024

Back in February, Meta’s independent Oversight Board — the body that reviews Meta’s decisions to remove or retain content across its platforms — criticized Meta’s existing content policy, calling it too “narrow,” “incoherent,” and “confusing to users,” and warning that it risked letting other fake content slip through the AI safety net.

What are Meta’s new AI content rules?

In response, Meta has announced new rules that take effect in May. AI-generated videos, audio, and images will be labeled with a “Made with AI” badge, and high-risk material (such as political content) will carry additional context, to flag content that may deceive people, whether intentionally or not.

Previously, Meta applied “Imagined with AI” labels only to photorealistic images created with its own Meta AI feature. The new policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” giving people more information and context with which to assess what they see.

To avoid infringing on freedom of speech, Meta has also said that, from July onwards, it will not remove AI-generated or manipulated content unless it violates other policies, such as those covering voter interference, bullying, harassment, violence and incitement, or other Community Standards. Instead, it will rely on its labeling-and-context approach, with Meta’s VP of content policy, Monika Bickert, stating that “providing transparency and additional context is now the better way to address this content.”

What does Meta’s new content policy mean for future AI-generated content?

With global elections looming, and jurisdictions like the EU imposing new rules to mitigate systemic risk (while protecting free speech), Meta’s new content policy is timely. It also aligns with Meta’s recent decision to work with others in the industry, including Google and OpenAI, to develop a set of common standards for identifying AI-generated content.

However, Meta will only label content that carries “industry standard AI image indicators” or has been disclosed as AI-generated by its creator, and it will not remove AI-generated content unless it violates other policies. Any AI-generated content that falls outside these criteria will therefore circulate unlabeled. As a result, AI-generated content and manipulated media are likely to increase across Meta’s platforms, like Facebook and Instagram, meaning more labels and fewer takedowns.