Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The US-China Economic and Security Review Commission’s annual report called for a Manhattan Project for AGI.
More specifically, it called on Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability”, suggesting that Congress:
“Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership”
“Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System ‘DX Rating’ to items in the artificial intelligence ecosystem to ensure this project receives national priority.”
Jacob Helberg, who sits on the Commission, said that “China is racing towards AGI ... It's critical that we take them extremely seriously.”
The report obviously got a lot of attention. But there are a bunch of important caveats.
As Dean Ball points out, the Manhattan Project idea feels more like a throwaway line designed to grab attention than a serious policy proposal:
“The terms ‘Artificial General Intelligence,’ ‘AGI,’ and ‘Manhattan Project’ are, near as I can tell, mentioned exactly nowhere else in the 800-page report other than in this recommendation.”
“[The report does not] provide a justification for the radical Manhattan Project policy it recommends first among all other policies.”
And as Garrison Lovely points out, it’s based on a false premise: China does not seem to be racing towards AGI and, if anything, is taking AI risks extremely seriously.
Also worth remembering:
Helberg is very close friends with Sam Altman, and this “Manhattan Project” language reflects some of OpenAI’s own narratives.
Helberg is angling for relevance in Trump’s inner circle.
The discourse
Jensen Huang, reporting expectation-beating earnings, said scaling laws aren’t dead yet:
“Foundation model pretraining scaling is intact and it’s continuing.”
Satya Nadella, meanwhile, is excited about test-time compute:
“We are seeing the emergence of a new scaling law.”
Chris Lehane is doubling down on America First rhetoric:
“We view AI as transcending partisan politics. It’s not Republican AI, it’s not Democratic AI, it’s American AI … It’s absolutely critical that US-led AI prevail over PRC-led AI particularly when you think about values like democracy, freedom and opportunity.”
Dario Amodei called for mandatory AI safety testing:
“I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it … There’s nothing to really verify or ensure the companies are really following those plans in letter or spirit.”
In the FT, Yoshua Bengio explains why o1 is a big deal:
“Advances in reasoning abilities make it all the more urgent to regulate AI models in order to protect the public.”
Policy
Joe Biden and Xi Jinping agreed that humans, not AI, should control nuclear weapons decisions.
Xi identified AI as one of two “global challenges” requiring more US-China cooperation.
Donald Trump nominated Howard Lutnick for Commerce Secretary.
He seems to have been very quiet about his views on AI and AI policy. If you know what he thinks, I’d love to know too.
Brendan Carr, meanwhile, was tapped to chair the FCC. He’s previously spoken about AI regulation:
“I don’t believe in ‘no regulation’ of AI but at the same time believe there is a risk of overdoing it early on.”
“I don’t think we should be regulating AI based purely on speculative harms that aren’t showing up in the real world.”
The International Network of AI Safety Institutes held its first meeting this week, and issued a joint mission statement for the AISIs.
The US AISI also established a new government taskforce, made up of experts from Commerce, Defense, Energy, Homeland Security, NSA, and NIH, to collaborate on “testing risks of AI for national security”.
The US and UK AISIs published details of their joint evaluation of Anthropic’s new Claude 3.5 Sonnet model. The full report is thorough and worth a read.
Rep. Jay Obernolte said that the House AI Task Force’s report on AI legislation has been drafted.
He also suggested keeping the FDA in charge of regulating AI in healthcare.
The Defense Department’s Inspector General said that the DOD needs to better implement its AI adoption strategy.
Wired reported that the US Patent and Trademark Office has banned almost all internal use of generative AI tools.
The UK Competition and Markets Authority cleared Google’s investment in Anthropic.
China is reportedly accelerating efforts to boost domestic semiconductor production ahead of Trump’s return.
Wired has a good piece about the US restrictions on Chinese AI investment. I hadn’t realised they’re based on a compute threshold.
Influence
The Existential Risk Observatory proposed an international “Conditional AI Safety Treaty”, which would halt unsafe AI development if capabilities get too close to loss of control.
Mozilla and Columbia University held their second “Convening on AI Openness and Safety”, designed to “demonstrate the power of centering pluralism in AI safety”.
The Information Technology Industry Council once again urged Congress to codify the US AISI.
Marc Andreessen was reportedly at a Rockbridge Network gathering of tech execs and Republican donors in Vegas recently, along with Susie Wiles and RFK Jr.
Industry
DeepSeek released DeepSeek-R1-Lite-Preview, a “reasoning” model which it says rivals OpenAI’s o1.
And Alibaba announced Qwen2.5-Turbo, with a 1M context length, faster inference speed, and lower costs compared to GPT-4.
Microsoft said its custom AI chip Maia is in use on Azure OpenAI. It plans to launch more custom AI chips too.
Nvidia is reportedly facing overheating issues with its new 72-chip Blackwell GPU server racks.
Huawei reportedly plans to mass-produce its new Ascend 910C chip in early 2025, despite low yields. It’s reportedly stuck at 7nm until at least 2026, thanks to US sanctions.
Foxconn projected AI servers will make up over 50% of its 2025 server revenue.
Anthropic is talking with federal agencies about potential FedRAMP authorisation sponsorship.
Meta held a hackathon to develop ways to use Llama in UK public services.
Palantir and the UK Ministry of Justice discussed using Palantir’s technology to calculate prisoners’ reoffending risks.
Female staff at OpenAI reportedly circulated an internal memo criticising the company’s culture for failing to retain and promote women leaders.
H, the Paris-based AI startup, launched Runner H, an “agentic” AI product built on a custom 2B-parameter LLM. It plans to raise more money in the next few months.
Sam Altman is reportedly promoting Rain AI’s upcoming $150m funding round, which will value the chip company at $600m.
Perplexity competitor ProRata was valued at $130m, and signed licensing deals with DMG Media and The Guardian.
Moves
Sam Altman is co-chairing the transition team of San Francisco mayor-elect Daniel Lurie.
Meta hired Clara Shih, formerly CEO of Salesforce AI, to lead a new product group for enterprise AI tools.
Wen-Yee Lee is joining Reuters as a Taiwan technology correspondent, focusing on TSMC and the chip industry.
Akin Gump is no longer lobbying for the Center for AI Safety Action Fund.
Alibaba, ByteDance and Meituan are reportedly expanding their AI teams in Silicon Valley.
xAI is hiring AI safety engineers.
Best of the rest
Researchers from the Johns Hopkins Center for Health Security and RAND wrote an excellent article in Nature arguing that AI systems could pose significant biorisks — and proposing detailed ways to identify and evaluate those risks.
Menlo Ventures said that business spending on generative AI surged to $13.8b in 2024, with OpenAI losing market share to Anthropic (which Menlo is invested in).
Lots of meaty profiles this week.
TechTonic Justice warned that flawed AI systems are denying benefits to low-income Americans.
SafeRent, an AI landlord screening tool, agreed to stop scoring low-income tenants and pay $2.3m in a discrimination settlement.
Hannah Ritchie has a good piece on AI’s (overstated) energy use.
The WSJ reported that the AI boom will require a big upgrade to data centre networking equipment and infrastructure.
Wired reports that AI-generated influencers are flooding Instagram with stolen content from real models.
Microsoft signed a deal with HarperCollins to use the publisher’s nonfiction books for AI training. OpenAI signed a $16m annual licensing deal with Dotdash Meredith.
Google reportedly developed an AI music generator that could mimic artists, but shelved it due to copyright concerns.
Oxford Martin School researchers published a report on the role of insurance and liability in tackling AI risks.
The AI Safety Fund announced grant opportunities for research on AI bio and cyber risks.
Steve Newman has a great piece on the key questions underlying AI policy debates.
Google announced AlphaQubit, “an AI-based decoder that identifies quantum computing errors with state-of-the-art accuracy”. It’s working with Nvidia on quantum processor design simulations, too.
Researchers found you can replicate people’s behaviour and personalities by simulating them with GPT-4.
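For a concrete sense of how that kind of simulation works, here’s a minimal sketch of prompting an LLM with an interview transcript and asking it to answer survey questions in that person’s voice. It’s an illustration under stated assumptions, not the researchers’ actual pipeline; the model name, prompt wording, and function names are mine.

```python
# Minimal sketch of LLM-based persona simulation (illustrative only, not the
# study's actual pipeline). Assumes the `openai` Python package and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()


def simulate_answer(interview_transcript: str, survey_question: str) -> str:
    """Ask the model to answer a survey question as the interviewed person."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap in whichever GPT-4-class model you use
        messages=[
            {
                "role": "system",
                "content": (
                    "You will role-play a specific real person. Below is a "
                    "transcript of an interview with them. Answer later "
                    "questions exactly as that person would, in the first person.\n\n"
                    "Interview transcript:\n" + interview_transcript
                ),
            },
            {"role": "user", "content": survey_question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


# Hypothetical usage:
# transcript = open("participant_042_interview.txt").read()
# print(simulate_answer(transcript, "Should the government spend more on welfare?"))
```

A real evaluation would then compare the simulated answers against the same person’s actual survey responses, which this sketch omits.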
Thanks for reading, have a great weekend.