Wednesday’s top story: At the international AI Safety Summit in Seoul, 16 major AI companies agreed to a set of AI safety measures designed to create transparency and accountability in the development of AI systems.
📢 Major AI safety breakthrough
🛒 Google brings ads back
🔒 Safe AI in the EU
📍 Humane seeks $1B buyer
Read Time: 5 minutes
Our Report: At the international AI Safety Summit in Seoul (co-hosted by the UK and South Korea), 16 AI companies—including Amazon, Google, Microsoft, Meta, and OpenAI—have agreed to a set of AI safety measures: the “Frontier AI Safety Commitments.”
🔑 Key Points:
By signing the Frontier AI Safety Commitments, each company has agreed to safely develop and deploy its frontier AI models, and to publish safety frameworks explaining how it will measure risks.
These frameworks will outline thresholds establishing when risks are “deemed intolerable,” and what the companies will do to ensure those thresholds aren’t exceeded.
Each company has also agreed to take accountability and not to develop or deploy an AI model or system if it can’t keep the associated risks below its outlined thresholds.
🤔 Why you should care: Getting so many leading global AI companies to agree to the same AI safety commitments is a world first, and it should bring consistent accountability and transparency to the future development of AI models and systems.
Together with EleventhAI
Industry leaders are enhancing operations and cutting budgets with AI automation.
But how do you do it? Where do you start?
EleventhAI takes the hassle and complexity out of integrating AI into your business by doing it all.
Our partners are saving 70,000+ hours and $100k+ each month with our 100% done-for-you service.
GetGenerativeAI creates Salesforce estimates and proposals
Podmob crafts personalized podcast lineups and newsletters
Mosaik is an ML-driven real-time news aggregator
Chatsimple is for sales and support chatbots
DrippiAI automates Twitter outreach
Our Report: Following last week’s announcement of its new AI-powered search results feature (AI Overview, which uses AI to summarize information at the top of a search results page), Google has announced plans to test Search and Shopping ads within the feature for US users.
🔑 Key Points:
Google will only display ads within AI Overviews if they’re relevant to the user's search query and the information AI Overview has generated, and they’ll be labeled as ‘sponsored.’
Early tests show that users are finding these ads helpful, and clicks generated from ads within AI Overviews are high quality, leading to users spending more time on the linked websites.
US advertisers that already run Google ad campaigns will be eligible to appear in AI Overviews, and Google has asked for their feedback to “test and learn new formats.”
🤔 Why you should care: The introduction of ads to the new AI search feature could clutter the quick, clean search summaries (defeating their original purpose) and perhaps even contribute to biased search results.
Our Report: Following approval from the European Parliament in March, EU lawmakers have finally approved a set of risk-based regulations designed to promote the use of “safe and trustworthy” AI systems across the EU.
🔑 Key Points:
The new law—which will come into force 20 days after publication—categorizes AI systems according to how risky they are and applies different requirements and obligations accordingly.
It bans AI that is deemed unacceptably risky, including social scoring, cognitive behavioral manipulation, and the use of biometric data to categorize people.
Companies that break these laws will be fined based on a percentage of their global annual turnover in the previous year or a predetermined amount, whichever is higher.
🤔 Why you should care: This first-of-its-kind regulation emphasizes the importance of transparency and accountability when developing new technologies while still leaving room for innovation to flourish, and it could set a standard for the global regulation of AI.
Our Report: AI start-up Humane—creator of the ‘Humane AI Pin,’ which launched in November last year to a wave of bad press—is reportedly looking to sell the business.
🔑 Key Points:
It’s unknown who Humane has approached to buy the business, but the company is working with an adviser to help secure between $750M and $1B from prospective buyers.
The news comes after it recently integrated OpenAI’s GPT-4o into the Pin, claiming the upgrade improved the device’s ability to understand requests and provide more accurate responses.
The device had previously been slammed by influencers for its high price, slow responses, and poor user experience, sparking negative reviews that damaged its reputation.
🤔 Why you should care: Humane previously raised $230M from investors, including OpenAI CEO Sam Altman, who is rumored to be developing something similar with Apple designer Jony Ive…watch this space.
The UK has ended its year-long probe into Snapchat’s AI chatbot, stating its privacy measures comply with UK data protection laws.
Data-labeling start-up Scale AI has raised $1B in a round led by Accel, valuing the company at $13.8B.
At its Build 2024 conference, Microsoft announced a new small AI model—Phi-3-vision—that can interpret images.
OpenAI CEO Sam Altman has dropped hints that GPT-5 might work like a virtual brain with deeper thinking capabilities.
Hit reply and tell us what you want more of!
Until next time, Martin & Liam.
P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.