TOGETHER WITH BRILLIANT
Monday’s top story: Apple has finally signed a voluntary commitment (issued by the White House) to develop AI safely and responsibly, a full year after 15 other tech companies, including OpenAI, Microsoft, Meta, and Google, signed it.
🍎 Apple finally commits to safe AI
⬆️ How to level up your AI skills with Brilliant
🔧 US government re-launches AI safety tool
📈 How to keep up with industry trends with The FutureParty
🚢 How to handle a crisis on social media using ChatGPT
🗞️ Musk’s privacy blunder revealed
🕊️ 8 mind-blowing ways AI is being used at the Olympics
Read Time: 5 minutes
❗AI stocks and the broader market suffered a tough week. With earnings season underway, this week will be pivotal in deciding whether this is just a correction within a larger uptrend or whether sellers have taken control and more downside is likely. Learn more.
Our Report: As it prepares to integrate its recently unveiled AI platform (Apple Intelligence) into its core products, Apple has finally signed the White House’s voluntary commitment to developing trustworthy AI: a set of safeguards designed to promote the safe and responsible development of AI. Apple joins 15 other tech companies (including Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI) that began signing the commitment in July 2023.
🔑 Key Points:
The voluntary commitment—which the White House is calling “the first step towards developing safe, secure, and trustworthy AI”—asks companies to test their AI systems for security flaws and share the results with the public.
It also asks them to develop labeling systems (like watermarking; see the sketch below) so users can identify what content is and isn’t AI-generated, and to work on unreleased AI models in secure environments with limited employee access.
Unlike the EU’s AI Act (regulation designed to protect citizens from high-risk AI, effective from August 2nd), these voluntary safeguards aren’t legally binding, meaning companies won’t be penalized for non-compliance.
🤔 Why you should care: Some believe Apple has only signed this commitment to head off future intervention by regulators, not because it has a genuine interest in developing “safe, secure, and trustworthy AI.”
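The commitment doesn’t prescribe a specific watermarking technique, but to make the idea concrete: one published approach (in the spirit of Kirchenbauer et al., 2023) biases a model toward a pseudo-random “green list” of words, which a detector can then test for statistically. Here’s a minimal, hypothetical sketch of the detection side, not any signatory’s actual scheme:

```python
# Toy sketch of "green-list" text watermark detection -- a hypothetical
# illustration, not any company's production scheme.
import hashlib
import math

FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically classify a word as green/red based on the word before it."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 1000 < FRACTION * 1000

def green_z_score(text: str) -> float:
    """How far the text's green-word count deviates from chance (higher = more suspicious)."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    greens = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    expected = FRACTION * n
    stddev = math.sqrt(n * FRACTION * (1 - FRACTION))
    return (greens - expected) / stddev

# A watermarking generator would bias its sampling toward green words,
# so watermarked text scores high while ordinary text hovers near zero.
print(green_z_score("the quick brown fox jumps over the lazy dog " * 10))
```

Real schemes key the green list to a secret and to the model’s tokenizer, but the takeaway is the same: identifying AI-generated content becomes a statistical test rather than guesswork.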
Together with Brilliant
The time to level up your AI skills is now.
Download Brilliant and, in just 15 minutes a day, learn how to leverage the concepts behind AI, data science, and other cutting-edge topics.
Access a library filled with quick, fun, hands-on lessons in CS, data science, math, and more.
Learn with bite-sized, interactive sessions that help you apply what you’ve learned to everyday situations.
(Plus, as an AI Tool Report reader, you’ll get 20% off an annual premium membership)
Meco is a distraction-free space for reading and discovering newsletters, separate from the inbox.
WriteHuman transforms AI-generated text into human-like writing.
Replicate generates text prompts that match images.
Nack generates images from text on mobile using AI.
Codeamigo uses AI to help users learn how to code.
Our Report: The National Institute of Standards and Technology (NIST), a key agency under the US Commerce Department that develops and tests new technology for the US government, companies, and the public, has re-launched a tool (called Dioptra) designed to assess, analyze, and monitor AI risks, particularly attacks that “poison” AI model training data.
🔑 Key Points:
Dioptra is an open-source, web-based tool that will provide a benchmark for companies training AI models and help them test those models against simulated threats in a “red-teaming” environment.
It will help “government agencies and small to mid-sized businesses assess AI developers’ declarations about their systems’ performance,” and has been launched alongside documents on how to mitigate AI dangers.
The re-release comes after the UK AI Safety Institute launched a similar tool (Inspect), and after both countries agreed to jointly develop advanced AI model tests as a step toward better mitigating AI risks.
🤔 Why you should care: Although this is a positive step toward reducing AI risk, creating these benchmark tests is challenging because the infrastructure, training data, and other key aspects of sophisticated AI models are often kept secret by the companies developing them (in a bid to stay competitive). So while NIST insists Dioptra will highlight which kinds of attacks can lower an AI system’s performance, it has caveated this by stating that the tool won’t “completely de-risk” AI models. The sketch below shows what one such attack looks like in practice.
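Dioptra itself is a web application, so the snippet below isn’t its API. It’s just a minimal, self-contained sketch (using scikit-learn on synthetic data) of the kind of data-poisoning test such tools benchmark against: flip a fraction of the training labels, retrain, and measure how far test accuracy drops.

```python
# Minimal sketch of a label-flipping data-poisoning test.
# NOT Dioptra's API -- just an illustration of the attack class
# such tools benchmark models against.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip labels on a fraction of the training set, retrain, and score on clean test data."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

In a real red-teaming setup, the same loop would run against the actual model and attack classes under test, with the results logged as a benchmark.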
Keeping up with industry trends is a process.
TheFutureParty makes it easier with its daily newsletter, serving up the latest stories and insights to help you understand the future of tech, business, and culture, all in a quick and witty package.
Now, you don’t need to search the web to stay ahead. Get smart about trends in just 5 minutes a day.
Type this prompt into ChatGPT:
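For example, a prompt along these lines (swap your own details in for the bracketed parts) does the job:

“Act as a crisis-communications manager for [your brand]. A negative story about us is spreading on social media. Create a step-by-step response strategy covering an initial holding statement, which channels to respond on and in what order, tone and messaging guidelines, and how to monitor sentiment over the next 48 hours.”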
Results: After typing this prompt, you will get a strategy to help you leverage social media and other digital channels during a crisis.
P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.
On Friday, X (formerly Twitter) users discovered that Elon Musk had, without telling them, implemented a change that collects their data by default and uses it to train the platform’s AI model, Grok.
As a result, the EU privacy watchdog for X, Ireland’s Data Protection Commission (DPC), is “seeking clarity” from Musk, as this appears to violate GDPR rules, which require companies to ask for consent before using personal data.
The DPC is “surprised” by this move from Musk as they’ve been “engaging with X on this matter for a number of months,” and although they haven’t received a response from Musk yet, they do expect one early this week.
Hit reply and tell us what you want more of!
Until next time, Martin & Liam.
P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.