Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The UN High Level Advisory Board on AI released its final report on what the UN should do on AI. There are seven big recommendations:
An international scientific panel on AI “surveying AI-related capabilities, opportunities, risks and uncertainties”.
A twice-yearly policy dialogue on AI governance.
An AI standards exchange to develop and maintain “a register of definitions and applicable standards for measuring and evaluating AI systems”; to debate and evaluate the standards; and to identify gaps for new standards.
A capacity development network to help build AI capacity globally.
A global fund for AI to “put a floor under the AI divide”.
A global AI data framework, to deal with issues of training data provenance and use.
An AI office reporting to the Secretary-General, which will provide support for all these things.
This is all pretty light-touch for now, but the UN’s acknowledged that it might need to do more:
“Eventually, some kind of mechanism at the global level might become essential to formalise red lines if regulation of AI needs to be enforceable … waiting for a threat to emerge may mean that any response will come too late … possible thresholds for such a move could include the prospect of uncontrollable or uncontainable AI systems being developed … [or indications of] the emergence of ‘superintelligence’”.
A bunch of updates from California, as the September 30 deadline for signing or vetoing SB 1047 ticks closer.
Gavin Newsom said he’s concerned about the “chilling effect, particularly in the open source community” of SB 1047 on AI development, the strongest signal yet that he’s planning to veto the bill.
That’s despite an outpouring of support for the bill this week, most notably from A-list actors like Mark Ruffalo.
Pluribus News reported this week that more than $5 million has been spent lobbying against the bill, “making it one of, if not the most expensive influence campaign of the year”.
On Transformer this week, Garrison Lovely and I exposed how much of the campaign against the bill has relied on outright lies.
Newsom did sign some AI bills, though, which combat election deepfakes and protect actors' likenesses from nonconsensual AI replication.
The Senate Judiciary subcommittee on technology held a hearing on “insiders’ perspectives” on AI, featuring Helen Toner, William Saunders, Margaret Mitchell, and David Evan Harris.
You can find links to all the witness testimonies here (Toner’s is particularly good); here are some highlights from those and the session itself:
Toner: “In public and policy conversations, talk of AGI is often treated as either a science-fiction pipe dream or a marketing ploy. Among the scientists and engineers of these companies, it is an entirely serious goal.”
Saunders: “[OpenAI] have repeatedly prioritised deployment over rigour … When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems.”
Harris: “Voluntary self-regulation does not work … [and] the solutions for AI safety and fairness exist in the framework and bills proposed by the members of the committee”.
Toner: “If [OpenAI’s] most aggressive predictions of how quickly their systems will get more advanced are correct, then I have serious concerns.”
Mitchell: “I think that it needs to be very clear to people working internally when and how to whistle blow.”
Notably, Sen. Dick Durbin indicated his support for the Hawley-Blumenthal regulatory framework — and Sen. Blumenthal said the text will be available “very shortly”.
The discourse
Rep. Jay Obernolte said AI regulation is going to take a while:
“I think we have to accept that the job of regulating AI is not going to be one 3,000-page bill … It’s going to be a few bills a year for the next 10 years as we get our arms around this issue.”
Sen. Mike Rounds concurred:
"I think you're going to find artificial intelligence legislation embedded in almost every single piece of legislation that passes the House and the Senate in the coming years — not just one or two pieces, but literally dozens of pieces.”
Catherine Thorbecke shot down the “we can’t regulate because of China” argument:
“Approaching AI safety as a zero-sum game between the US and China leaves no winners. Mutual suspicion and mounting geopolitical tensions mean we won’t likely see the two working together to mitigate the risks anytime soon. But it doesn’t have to be this way.”
Holden Karnofsky said we need “if-then commitments” to tackle AI risk:
“Such adoption does not require agreement on whether major AI risks are imminent—a polarised topic—only that certain situations would require certain risk mitigations if they came to pass.”
Paul Scharre said AI regulation doesn’t have to be difficult:
“By controlling the physical inputs to AI, nations can securely govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.”
Aidan Gomez said regulation could help spur AI innovation:
“Right now, there's a lot of risk aversion from the private sector and highly regulated industries. They don't know how to adopt the technology, what's compliant.”
Policy
The inaugural convening of the International AISI Network will take place on November 20-21 in San Francisco, jointly hosted by the Commerce and State Departments.
Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, and the UK will attend.
The UK AISI and the Centre for Governance of AI are hosting a side-conference on AI safety frameworks for companies and researchers.
The House Committee on Science, Space, and Technology said it’s going to avoid mandatory AI rules, with committee comms director Heather Vaughan saying that “we’ve heard concerns about stifling innovation, and that’s not the approach that we want to take”.
A bipartisan group of representatives introduced legislation to prohibit political campaigns from using AI to misrepresent opponents' views.
Sens. Schumer and Markey called for federal agencies using AI for "consequential decisions" to establish civil rights offices.
HHS’s AI chief said the department is setting up new assurance labs to help test health AI products.
The House Committee on Administration announced a new policy for AI use in Congress.
The EU will not scrutinise Microsoft’s Inflection deal.
The European Parliament Research Service recommended expanding AI liability regulations to include general-purpose AI products.
Influence
Leading AI scientists from the US, China, and other countries called for an “international governance regime to prevent the development of models that could pose global catastrophic risks”.
Meta and other companies warned that EU tech regulations could hinder AI innovation and economic growth, and called for more consistent regulatory decision-making.
Microsoft called for more “clarity and consistency” on US export controls delaying AI chip shipments to the Middle East.
Google said the UK risks being “left behind” in the AI race unless the country builds more data centres.
The FT has a piece on the opposition to one such data centre in an English village.
General Catalyst launched a policy institute (read: lobbying effort).
Microsoft and G42 are setting up a new centre in Abu Dhabi for “the responsible use of AI in the Middle East and the Global South”.
OpenAI is hosting a day of events at the UN General Assembly next week.
Industry
BlackRock, Global Infrastructure Partners, Microsoft, and MGX launched a $30 billion investment partnership for AI infrastructure and power, focusing on data centres in the US.
It hopes to raise up to an additional $70b in debt financing.
OpenAI news:
OpenAI’s $6.5 billion funding round — which will value the company at $150b — is reportedly oversubscribed, with the decision on who to include to be made today. Investors have to chip in at least $250m to be considered, The Information reported.
Sam Altman reportedly told staff the company's non-profit structure will change next year, likely moving to a more traditional for-profit model.
Altman is also stepping down from the company's internal safety committee, which will supposedly become independent and be chaired by Zico Kolter.
Former NSA chief Paul Nakasone, who is on that committee, said he was "heavily involved" in safety oversight of the new o1 models.
OpenAI has been sending warning emails and ban threats to users attempting to figure out how o1 works.
Epoch AI found that the new o1 models outperform Claude 3.5 Sonnet on the GPQA benchmark for complex scientific questions.
Altman said o1 is at “level two” of OpenAI’s AGI hierarchy, and will enable the company to reach “level three” (independent agents) “relatively quickly”.
Meta’s reportedly planning on launching a proper Meta AI app. Most people currently use the product through WhatsApp.
It also plans to add image recognition to Llama 3 next week, according to The Information.
Constellation Energy plans to restart its Three Mile Island nuclear plant, selling the energy to Microsoft for its data centres.
Groq and Aramco said they’re building “the world’s largest AI inference centre” in Saudi Arabia. It will have 19,000 “language processing units” to start with, which could later expand to 200k.
Intel is making a custom AI chip for Amazon.
ByteDance is reportedly aiming for mass production of its custom-designed AI chips by 2026. TSMC would manufacture them.
The Economist has a good piece on how Chinese AI companies have come up with ways to cope with export controls.
SiFive announced new RISC-V chip designs customised for AI applications.
Scale AI and the Center for AI Safety launched a new AI benchmark effort called "Humanity's Last Exam". It aims to develop more challenging tests for expert-level AI models.
Alibaba released a bunch of new open-source models in its Qwen 2.5 family, and a new text-to-video model.
Amazon launched an AI-powered video generator for advertisers.
YouTube Shorts is integrating a video generation model for creators.
AI video generation company Runway signed a deal with Lionsgate and launched its API.
Google Search and ads will soon integrate C2PA watermarking technology, informing users when images are AI-generated.
LinkedIn said it’s going to train AI on user data; cue outrage.
Nvidia is reportedly considering buying OctoAI for $165m.
GitHub Copilot competitor Poolside is reportedly raising $500m at a $3b valuation.
Fei-Fei Li’s World Labs has raised $230m.
Fal.ai, a platform for generative media models, raised $23m.
Lenovo has started building AI servers in India.
Moves
Thierry Breton resigned as the EU’s digital boss. His tech portfolio will now be handled by Henna Virkkunen.
Nando de Freitas joined Microsoft AI.
Andreas Kirsch joined DeepMind.
OpenAI hired Leah Belsky, formerly of Coursera, to lead its education product initiatives.
Kip Wainscott, formerly of Snapchat, is JPMorgan Chase’s new executive director of AI policy.
April Mellody is TechNet’s first senior VP of communications.
Zeyi Yang is now a senior China reporter at Wired.
Miranda Nazzaro is now a tech reporter for The Hill.
Best of the rest
Nvidia launched AI-RAN, a platform to help telecoms companies manage AI-driven network strain.
A new GovAI paper proposed a “grading rubric for AI safety frameworks”.
Tarbell Fellow and law professor Kevin Frazier called for an AI safety hotline.
Mining company BHP said AI demand will worsen the world’s copper shortage.
The Washington Post published a big piece on the environmental impacts of AI. But as many experts have noted, its numbers don’t make sense.
There’s a bunch of opposition in Memphis to xAI’s new data centre, with people worried about pollution and water access.
Nvidia and G42 are building a climate tech lab.
T-Mobile’s working with OpenAI on an AI-powered customer service system.
Researchers found AI's impact on elections has been overstated.
The Library of Congress has become a popular source of AI training data.
Wordfreq, which tracks language usage across the internet, has shut down due to AI-generated text polluting its data.
20% of UK GPs are using AI tools for daily tasks.
Vodafone bought 68,000 licences for Microsoft’s Copilot products.
In a lengthy lecture, Stephen Fry shared his worries about AI (including, but not limited to, catastrophic risks and extinction).
Someone’s made a social media app where you’re the only human.
Thanks for reading; have a great weekend.