Ethics

Google, Meta, and OpenAI commit to child safety

Big AI companies commit to implementing child safety measures when developing, deploying, and maintaining AI systems.

Martin Crowley
April 24, 2024

Some of the biggest names in AI and tech, including Amazon, Google, Meta, Microsoft, and OpenAI, have signed a set of safety principles committing them to prioritizing child safety at every stage of the development, deployment, and maintenance of their AI systems.

Other signatories include Anthropic, Civitai, Metaphysic, Mistral AI, and Stability AI.

Who created the principles and why were they created?

The principles, named “Safety by Design”, were developed by Thorn, an online child safety nonprofit, and All Tech Is Human, a nonprofit dedicated to tackling unethical technology practices in society.

They’ve been designed to prevent the misuse of technology to create and spread AI-generated child exploitation material, as the use of AI to generate harmful and abusive content rises:  

- At least five of the companies committing to these principles have responded to reports that their products and services were used to facilitate the creation and spread of illicit deepfakes featuring children.

- In 2023, more than 104 million cases of suspected child abuse were filed in the US, with only 5% to 8% leading to an arrest or conviction.

- The National Center for Missing & Exploited Children reported 4,700 images and videos of child exploitation made with generative AI in 2023.

What are the “Safety by Design” principles?  

By signing the “Safety by Design” principles, organizations are committing to:

- Ensuring that training datasets don’t contain any illicit child abuse-related content, avoiding datasets with a high risk of including such content, reviewing their existing AI training data, and removing this type of content (images and links) from all of their current and future data sources.

- Stress-testing their AI models to ensure they don’t generate any dangerous child abuse imagery.

- Only releasing AI models that have been evaluated for child safety.

- Regularly sharing documentation of their progress on these principles.

By integrating the “Safety by Design” principles into their AI systems and processes, these companies are not only protecting children but also leading the charge in ethical AI innovation. Dr. Rebecca Portnoff, VP of Data Science at Thorn, hopes their example will inspire more companies to commit to child safety:

“The more companies that join these commitments, the better that we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”