Financial Times signs its content over to OpenAI

The Financial Times and OpenAI have struck a mutually beneficial content deal

Martin Crowley
April 30, 2024

The Financial Times (FT) and OpenAI have struck a deal (for an undisclosed amount) that will allow OpenAI to use FT content to train its GPT models. ChatGPT users will now be able to see attributed summaries, quotes, and rich links to FT articles in response to their queries.

In return, OpenAI will work with FT to develop new AI models for its readers, similar to the ‘Ask FT’ model (released in beta last month and powered by Anthropic’s Claude large language model (LLM)) which lets FT subscribers find information in its published articles.

Why has the FT struck this deal?

Earlier this year, the FT gave all its employees Enterprise access to ChatGPT so they could benefit from its creativity and productivity gains, signaling its commitment to embracing AI in the newsroom. Even so, FT Group CEO John Ridding was quick to stress that the company remains committed to "human journalism":

“The FT is committed to human journalism, as produced by our unrivaled newsroom, and this agreement will broaden the reach of that work while deepening our understanding of reader demands and interests,” – FT Group CEO, John Ridding

Plus, the rise of chatbots like ChatGPT threatens to pull readers away from search engines, which direct traffic to news publishers' sites. So there's a clear strategic advantage for the FT in developing a close relationship with OpenAI, which has agreed, as part of the deal, to attribute the FT's "human" content.

Why has OpenAI struck this deal?  

OpenAI’s latest deal with the FT follows “around a dozen” similar agreements with other news outlets, such as Axel Springer (publisher of Bild and Welt) and The Associated Press, to license their content to train its AI models. Although the financial details of these deals (including this one) remain undisclosed, OpenAI is believed to offer between $1M and $5M to license news content.

But why?

The LLMs that power chatbots, such as GPT, Claude, and Gemini, are notorious for ‘hallucinations’: confidently presenting false or misrepresented information. This runs counter to journalistic principles, under which reporters work hard to verify the information they publish, thereby earning readers’ trust.

While OpenAI has acknowledged that these issues are present in ChatGPT, it hasn’t yet been able to fix them. So its partnerships with news outlets, which allow it to train its models on credible news content, could be a step toward curbing misinformation and hallucinations.

Another possible motivation for partnering with news organizations is the number of copyright-infringement lawsuits OpenAI is facing: The New York Times, The Intercept, Raw Story, and AlterNet have all claimed that OpenAI used their copyrighted content to train its models. Forming partnerships and paying news companies for their content could be a way to head off further expensive lawsuits.