Welcome to Transformer, your weekly briefing of what matters in AI. I’m back after a few weeks of respite from the AI firehose. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
As OpenAI fundraises at a reported $100b valuation, there’s been a flurry of news about the company’s progress.
That funding round will reportedly be led by Thrive Capital, and might include investment from Apple and Nvidia. The NYT says the company’s considering changing its non-profit structure as part of the funding talks.
It’s also moving forward with plans for a huge AI infrastructure project, on which it could spend “tens of billions of dollars”, according to Bloomberg. It’s in talks with investors for that, too.
On the product side, the company has reportedly demoed Strawberry, a model with significantly improved reasoning abilities, to national security officials. It seems like it might be paired with GPT-4o in ChatGPT, possibly in the next few months.
The Information also reported that OpenAI is using Strawberry to generate synthetic training data for Orion, a forthcoming flagship LLM.
And usage is way up: OpenAI said ChatGPT has over 200m weekly active users, double what it had in November 2023. Meanwhile, it has 1m paid users across ChatGPT Team and Enterprise.
It’s also apparently considering raising subscription prices for its new Strawberry and Orion models, with $2,000 per month supposedly discussed (though unlikely to materialise).
In personnel moves, Chris Lehane is the company’s new VP of global policy. Anna Makanju is now “vice president of global impact” — seemingly a demotion.
Ilya Sutskever’s Safe Superintelligence raised $1b at a reported $5b valuation. Investors include Sequoia Capital, DST Global, SV Angel … and Andreessen Horowitz.
SSI CEO Daniel Gross said “it’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence”.
In an interview accompanying the announcement, Sutskever suggested he’s got a new approach to scaling: “Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?”
A big group of academics urged Gavin Newsom to sign SB 1047 into law. Signatories include Gary Marcus, Stuart Russell, Bin Yu, Geoffrey Hinton, Yoshua Bengio, Hany Farid and Lawrence Lessig, along with many others.
Other notable endorsements recently include Jan Leike, Flo Crivello, Scott Aaronson, and Elon Musk.
After the California legislature overwhelmingly passed the bill last week, Newsom has until September 30 to sign or veto it.
One data point Newsom might consider: polling suggests that if he vetoes the bill, voters will personally blame him for any future AI catastrophes.
The discourse
Bill Gates is a bit worried about AI, but he’s not a doomer:
“As it gets more powerful, and you know, as bad guys are using it, there’ll be issues. But overall, I believe that it’s a beneficial thing, and we need to just shape it in the right way.”
Matt Southey has a great explanation of accelerationism and its roots in Nick Land’s ultra-nihilistic philosophy:
“Land should be considered an ultra-doomer who is happy about the prospect of human extinction.”
Anthropic’s Sam Bowman published what he calls the “checklist”: a fairly comprehensive look at what succeeding at AI safety will need to involve. One highlight:
“The most urgent safety-related issue that Anthropic can’t directly address is the need for one or, ideally, several widely respected third-party organizations that can play this adjudication role [of AI safety cases] competently.”
The Atlantic reviewed Yuval Noah Harari’s new book on AI:
“As a book, Nexus doesn’t reach the high-water mark of Sapiens, but it offers an arresting vision of how AI could turn catastrophic. The question is whether Harari’s wide-angle lens helps us see how to avoid that.”
Nick Whitaker thinks Congress should pass the ENFORCE Act:
“The ENFORCE Act is necessary to close glaring loopholes in BIS authorities, such as their ambiguous control over the exportation of AI systems and the use of cloud computing.”
Policy
The US, EU and UK signed the Council of Europe’s convention on AI, the first legally binding treaty on AI (though there aren’t any real sanctions for violating it).
The convention itself requires signatories to ensure AI doesn’t undermine human rights, human dignity, the integrity of elections, or the rule of law. They also have to carry out AI risk and impact assessments.
Yi Zeng launched the Beijing Institute of AI Safety and Governance, a new R&D organisation “established with guidance and support from various Beijing municipal departments” and in partnership with the Chinese Academy of Sciences and the China Academy of Information and Communications Technology.
It “aims to build a systematic safety and governance framework” and provide guidance to the Chinese government on AI regulation.
Relatedly, The Economist recently asked if Xi Jinping is an “AI doomer”, with some evidence suggesting he is deeply worried about catastrophic risks from AI.
OpenAI and Anthropic will give the US AI Safety Institute early access to their models for testing.
The DOJ reportedly subpoenaed Nvidia over antitrust concerns, though Nvidia says it hasn’t received a subpoena.
The Dutch government has retaken control of export licensing decisions for some ASML products, as part of a tweak to US export rules.
China has reportedly threatened to retaliate against Japan if it strengthens export controls on chipmaking equipment.
The UK Competition and Markets Authority closed its antitrust investigation into the Microsoft-Inflection deal, though it did call the deal “a merger under UK law”.
Australia published proposals for “mandatory guardrails for AI in high-risk settings”, which include things like mandatory model testing and ensuring meaningful human oversight of AI.
Influence
TIME published its AI 100 list, featuring all sorts of excellent people (Elizabeth Kelly, Geoffrey Irving, Jade Leung, and Beth Barnes, to name but a few).
Oprah Winfrey is hosting an ABC special on AI next week, featuring Sam Altman, Bill Gates, Marques Brownlee, Tristan Harris, and Aza Raskin (the latter two will talk about the risks of superintelligent AI). People are already criticising it for not including their preferred guests.
The Dataset Providers Alliance advocated for an opt-in system for AI training data.
Convicted criminals Jacob Wohl and Jack Burkman have been pseudonymously running LobbyMatic, an AI lobbying startup.
Industry
Meta’s Llama models have seen a “10x jump in monthly usage since the start of the year”, Mark Zuckerberg said. Meta AI, meanwhile, has over 400m monthly active users.
Anthropic launched a Claude Enterprise plan.
The FT found that it’s cheaper to rent Nvidia A100s in China than in the US, suggesting export controls might not be working so well.
Runway removed its diffusion models from Hugging Face and GitHub, including Stable Diffusion 1.5, which is known to be used to generate child sexual abuse material. Hugging Face CEO Clem Delangue called the removal “sad”.
Other people seem to have re-uploaded SD 1.5 to Hugging Face, though, just in case you thought the company might finally not be complicit in CSAM generation.
Aleph Alpha’s given up on competing at the frontier, instead pivoting to helping companies use AI tools.
Sakana AI raised $100m from investors including Nvidia.
Applied Digital also raised $160m from investors including Nvidia. It’s building out an AI cloud-computing business.
You.com raised $50m from investors including, you guessed it, Nvidia!
Reid Hoffman said xAI’s 100k H100 cluster is “table stakes”. (Elon describes it as “the most powerful AI training system in the world”.) He also said SSI should have been a benefit corporation.
Moves
Lily Jamali is the BBC’s new North America Tech Correspondent, based in the Bay.
The Cosmos Institute, a new effort “dedicated to promoting human flourishing in the age of AI”, has launched. It’s run by Brendan McCord, with Jack Clark among its founding fellows.
It’s launching three things to start: the Human-Centered AI Lab at Oxford University, led by Prof. Philipp Koralus; a Fellowship program; and a grants program.
The UK is hiring a head of frontier AI regulatory framework.
Best of the rest
Yoshua Bengio finally joined Twitter. He also published a paper and accompanying blog post looking at whether we can “[bound] the probability of harm” from an AI.
For those interested, here’s some notable criticism of such “formal verification” approaches.
METR published a helpful overview of the emerging consensus around frontier AI safety policies.
SemiAnalysis has a very long, very good overview of Google’s AI training infrastructure and OpenAI’s plans to beat it with multi-datacentre training.
Yale is spending $150m on building its AI infrastructure, including buying lots of GPUs.
Salesforce is doubling down on AI agents.
The OpenAI GPT Store is full of apps that violate OpenAI’s terms of service, Gizmodo found.
AI Digest has a fun new interactive quiz to test your knowledge on what LLMs can and can’t do.
An investigation uncovered deepfake porn rings in South Korea targeting underage girls en masse.
Video game performers signed a deal with some studios on AI usage. Most major studios are yet to sign, though.
Call centre workers in the Philippines are terrified about AI replacing them.
Thanks for reading; have a great weekend.