Welcome to Transformer, your weekly briefing of what matters in AI. I’ve been sick, so have got a slightly abridged version for you this week. If you’ve been forwarded this email, click here to subscribe and receive future editions.
The discourse
Marc Andreessen made some pretty wild claims about the government’s approach to AI:
“They want to control it. They want to put it in a headlock … We had meetings this spring that were the most alarming meetings I've ever been in, where they were taking us through their plans and it was just full government control, this sort of thing. There will be a small number of large companies that will be completely regulated and controlled by the government. They told us, they just said, don't even start startups, don't even bother. There's just no way. There's no way that they can succeed. There's no way that we're going to permit that to happen.”
Citation, very obviously, needed.
Gina Raimondo is bullish on international AI safety cooperation:
“This isn’t unlike other technologies, you know, nuclear technology or other technologies. There have been moments in the world’s history where new technology comes forward that is so powerful that we have to get the world together to agree on guardrails and restrictions and standards so that everybody is kept safe. Our interests are aligned even with some of our fiercest competitors like China.”
Yann LeCun’s timelines keep getting shorter:
“I don’t think [AGI] is that far away … It’s quite possible within a decade. But it’s not going to happen next year.”
Policy
New AI-focused export controls are reportedly coming next week, with a particular focus on restricting China’s access to high-bandwidth memory.
Trump is reportedly considering naming an AI czar to coordinate federal AI policy. It won’t be Elon, apparently. (I’ve heard Michael Kratsios’s name being thrown about, but who knows.)
Sen. Peter Welch introduced the TRAIN Act, which would let copyright holders subpoena AI training data to determine if their work was used without consent.
The OECD released a report listing top AI risks and policy priorities, which include “establishing clearer liability rules, drawing AI ‘red lines’, investing in AI safety and ensuring adequate risk management procedures”.
The UK announced a new Laboratory for AI Security Research, which will “assess the impact of AI on [UK] national security”.
Patrick Vallance confirmed that the UK government will soon launch a public consultation on AI and copyright proposals.
Influence
Mark Zuckerberg reportedly met with Donald Trump at Mar-a-Lago this week.
Reid Hoffman said he’s worried about Elon Musk’s AI-related conflicts of interest in the next Trump administration.
Marc Andreessen is “acting as a key networker for talent recruitment” for the new Department of Government Efficiency, according to the Washington Post.
Google UK boss Debbie Weinstein said the company wants to build more AI infrastructure in Britain, but needs “the right conditions” — namely energy infrastructure.
Miles Brundage and Dean Ball put out a joint analysis of the draft EU GPAI Code of Practice.
CSIS released a report on the impact of US chip export controls on Chinese and American industries. TL;DR: it’s complicated.
NTI | bio convened two international expert working groups to address AI biosecurity risks.
Industry
Amazon announced an additional $4b investment in Anthropic, bringing its total to $8b. Anthropic will use Amazon Trainium chips to train its future models.
Bloomberg has a profile of Annapurna, Amazon’s chip design team.
SoftBank is reportedly spending $1.5b to buy OpenAI employees’ shares.
OpenAI is reportedly considering developing a web browser.
Alibaba released QwQ-32B-Preview, an open-weights AI reasoning model that it says outperforms o1 on some benchmarks.
Speaking of o1: Luca Righetti points out that OpenAI’s test results do not clearly support its claim that o1-preview cannot meaningfully help novices create CBRN weapons.
Amazon has reportedly developed a new AI model called Olympus that can process video and images in addition to text. It might be released next week.
A group of artists leaked access to OpenAI’s Sora video generation model; OpenAI quickly suspended access.
Nvidia demoed Fugatto, an audio generation model — but has no immediate plans to release it publicly.
Apple is reportedly developing a more conversational “LLM Siri”.
It’s also reportedly struggling to roll out its AI features in China, and might need a local partner to be able to do so.
Google released yet another incremental update to Gemini.
Anthropic proposed an open-source standard called Model Context Protocol to connect AI models to external data sources.
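For the technically curious: MCP is built on JSON-RPC 2.0, so a minimal sketch of the request an MCP client sends to ask a server what data sources it exposes looks like the snippet below. (Method name per Anthropic’s published spec; the transport layer and initialization handshake are omitted here.)

```python
import json

# Illustrative sketch only: MCP messages are JSON-RPC 2.0, and a client
# asks a server which data sources ("resources") it exposes like so.
# Transport (stdio or HTTP) and the initialization handshake are omitted.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
    "params": {},
}
print(json.dumps(request, indent=2))
```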
Twitter investors reportedly received a 25% stake in xAI.
Black Forest Labs is reportedly in talks to raise up to $200m, led by Andreessen Horowitz, at a $1b+ valuation.
Pony AI raised $413m in its IPO.
Former Google and Stripe execs raised $56m for /dev/agents at a reported $500m valuation. It’s building an “operating system for AI agents”.
Moves
Yi Tay is returning to Google DeepMind.
Peter Cihon joined the US AI Safety Institute as a Senior Advisor.
Sriram Krishnan is leaving Andreessen Horowitz, and has reportedly discussed joining DOGE.
Justin Uberti joined OpenAI to lead real-time AI efforts.
Sean Strong is joining Anthropic as head of strategic partnerships for startups.
David Rein joined METR.
Samsung reshuffled its chip leadership, naming Jun Young-hyun as co-CEO and putting him in charge of its struggling memory business.
Mistral is hiring lots of new people in Silicon Valley.
The WSJ has a piece on how Chinese companies are trying to poach Western semiconductor talent.
Best of the rest
OpenAI, Meta, and Orange partnered to train AI models on African languages.
OpenAI accidentally erased critical evidence in The New York Times’ lawsuit over its training data.
Scale AI marketed its Meta-powered “Defense Llama” chatbot with an example that experts called “irresponsible” and “worthless”.
METR announced a new ML research engineering benchmark. Claude and o1 do pretty well!
Relatedly, Tarbell Fellow Scott Mulligan has an excellent piece on how many popular AI benchmarks are “outdated or poorly designed”.
Lawrence Livermore National Lab is using o1 to work on nuclear fusion problems.
The WSJ has a good profile of xAI and how it’s managed to build data centres so quickly.
OpenAI released two papers on red-teaming frontier AI models.
Evolv Technologies settled with the FTC over allegations of false marketing claims about its AI-powered security screening system.
Wired has a very meaty profile of how Satya Nadella turned Microsoft into an AI leader.
OpenAI is funding researchers to develop algorithms that can “predict human moral judgments”.
Rest of World has a piece on how AI is changing call centre work in the Philippines.
A Hugging Face employee created a dataset of 1m scraped Bluesky posts, enraging users.
The Nikkei’s best-performing stock this year is Fujikura, a 139-year-old cable maker which has become very important to AI data centres.
Thanks for reading; have a great weekend.