Global regulations

The UK and US Partner Up to Test AI Safety

The US and the UK have agreed to work together to develop a common approach for testing advanced AI models for safety risks

Amanda Greenwood
March 24, 2024

The UK and US AI Safety Institutes, both established at the inaugural AI Safety Summit in November 2023, have signed a Memorandum of Understanding (MoU), fulfilling a commitment the two countries made at that same summit. The memorandum details how the two countries will work together to develop a common method for testing AI models and systems for safety risks. It also follows the “Bletchley Declaration,” under which several major tech companies (including Amazon, Google, Meta, Microsoft, and OpenAI) agreed to complete voluntary safety testing for AI systems, an agreement backed by the UK and US, the EU, and 10 other countries including China, Germany, and Japan.

What’s in the Memorandum of Understanding?

The agreement, which was signed by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo, states that both countries will align their scientific approaches and work together to develop a common method for safety testing AI models, systems, and agents, using the same techniques and underlying infrastructure.

To ensure safety risks can be removed, minimized, or dealt with effectively, the agreement also establishes that both countries will share personnel and information “in accordance with national laws and regulations, and contracts,” and will perform at least one joint testing exercise on a publicly accessible AI model.

When will the agreement take effect? 

The agreement will take effect immediately, with both countries recognizing the urgent need to “act now to ensure a shared approach to AI safety which can keep pace with the technology’s emerging risks.” It aligns with the US executive order requiring AI companies to complete and report on safety tests, and with the UK’s $125M financial commitment to the UK AI Safety Institute, under which companies like Google, Meta, and OpenAI must allow the institute to test their tools.

Although the UK has not yet announced any short-term plans to regulate AI development, preferring instead to foster innovation and growth in the sector, Technology Secretary Donelan firmly believes that “only by working together can we address the technology's risks head-on and harness its enormous potential to help us all live easier and healthier lives."

What’s next? 

The memorandum also commits the two countries to forming similar partnerships with other countries that are setting up their own AI Safety Institutes or AI evaluation methods. For example, Japan announced the establishment of its own AI Safety Institute in February, and the EU has just passed its AI Act, which, when it comes into effect over the next several years, will require AI companies to comply with strict safety standards.