Microsoft to stop deepfakes with Content Credentials

Microsoft is expanding access to its ‘Content Integrity’ tools to support global elections and stop AI deepfakes.

Martin Crowley
April 23, 2024

As global elections loom, and more people than ever before will have an opportunity to vote, Microsoft is expanding access to its suite of ‘Content Integrity’ tools to EU political parties, campaigners, and news organizations around the world.

Microsoft originally built its Content Integrity tools in November 2023 to support US political campaigns. It now plans to open a private preview of the toolset more widely, giving organizations control over their content, helping prevent the spread of AI-generated deepfakes and misinformation, and allowing the public to trust that the content they see is authentic and genuine.

What will Microsoft’s Content Integrity tools do?

The tools allow organizations to add “Content Credentials” to their online content, such as images, videos, or audio. Like nutrition labels or digital receipts, these credentials reveal a piece of content's origin, creation details, AI involvement, and editing history. The public will be able to see who created the content, when and where it was created, whether it was generated using AI, and whether it has been edited or altered in any way since its creation.
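To make the “nutrition label” idea concrete, here is a minimal sketch of what reading such a credential might look like. The field names and structure below are purely illustrative assumptions, not Microsoft's actual schema (Content Credentials are based on the C2PA standard, whose real manifests are cryptographically signed and far richer than this):

```python
import json

# Hypothetical credential manifest for illustration only; these field
# names are invented for this sketch and do not match the real
# C2PA/Content Credentials schema.
manifest_json = """
{
  "creator": "Contoso News",
  "created": "2024-04-23T10:00:00Z",
  "ai_generated": false,
  "edit_history": ["cropped", "color-corrected"]
}
"""

def summarize_credentials(raw: str) -> str:
    """Turn a (hypothetical) credential manifest into a reader-facing summary."""
    m = json.loads(raw)
    ai_note = "AI-generated" if m["ai_generated"] else "no AI generation declared"
    edits = ", ".join(m["edit_history"]) or "none"
    return f"Created by {m['creator']} on {m['created']} ({ai_note}); edits: {edits}"

print(summarize_credentials(manifest_json))
```

The key design point is that the history travels with the content itself, so any viewer (or Microsoft's public checking website, described below) can surface the same provenance summary without trusting the publisher's word alone.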

The suite of tools has three key parts:

1. A private web app, available to political campaigns, news organizations, and election officials, so they can add Content Credentials to their content.

2. A private mobile app, available to political campaigns, news organizations, and election officials, so they can capture authenticated media (such as photos, videos, and audio tracks) and add Content Credentials in real time from any smartphone.

3. A public website, available to any member of the public, for checking content's Content Credentials.

Why is Microsoft expanding access to its Content Integrity tools?

Microsoft is fighting the use of AI to generate deepfakes and misinformation. The expansion of its Content Integrity tools follows its announcement earlier this year that it, along with 20 other companies, has signed the Tech Accord to ‘Combat Deceptive Use of AI’ in video, audio, and images that alter the appearance, voice, or actions of political candidates and election officials.

As Microsoft puts it:

“Healthy democracies depend on healthy information ecosystems. Through this expansion, we are delivering tools these organizations can use to help voters understand the information they encounter online.”