Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
On Transformer: Helen Toner has finally gone public with why the OpenAI board fired Sam Altman.
On the TED AI Show, Toner accused Altman of “outright lying to the board”, to the extent that the board “couldn’t believe things that Sam was telling us”.
Notably, Toner said that when Altman tried to push Toner off the board, he did so by “lying to other board members”. That matches previous reporting from the New York Times and Wall Street Journal.
She gave several other examples of Altman’s deception. Toner said Altman didn't inform the board about the launch of ChatGPT, didn't tell the board that he owned the OpenAI Startup Fund (despite “claiming to be an independent board member with no financial interest in the company”), and “gave [the board] inaccurate information” about OpenAI's safety processes.
Toner also said that executives at OpenAI came to the board accusing Altman of “psychological abuse” and saying they couldn't trust him.
And in an op-ed for The Economist, Toner and fellow ex-board member Tasha McCauley said their experience at OpenAI led them to “believe that self-governance cannot reliably withstand the pressure of profit incentives”.
In a statement given to TED, OpenAI’s new board chair Bret Taylor tried to push back on Toner’s concerns, though he didn’t actually rebut anything she said.
“An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November,” Taylor said. That review “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers or business partners”.
Taylor and new board member Larry Summers said similarly vague things in a response article for The Economist, too.
OpenAI’s governance is changing, though, and might change further.
OpenAI’s board set up a safety and security committee, led by Bret Taylor, Adam D’Angelo, Nicole Seligman, and Sam Altman.
Aleksander Madry, Lilian Weng, John Schulman, Matt Knight and Jakub Pachocki are also on the committee, which will “evaluate and further develop OpenAI’s processes and safeguards over the next 90 days”.
The Information, meanwhile, reports that Altman is considering turning OpenAI into a “for-profit corporation”, while some investors want him to take equity in the company.
Altman dodged questions about governance at the AI for Good conference this week.
“We continue to talk about how to implement governance. I probably shouldn’t say too much more right now.”
Also: current and former OpenAI employees told the FT that the company is “struggling to contain internal rows about its leadership and safety”.
On Transformer: California Sen. Scott Wiener defended his SB 1047 bill against critics at an event on Thursday, dismissing accusations that the bill was drafted by tech giants and would hurt open source developers.
“The big tech companies have not been cheerleading for this bill, to put it mildly,” Wiener said.
Emphasising that the bill was only focused on the “absolute largest models”, Wiener announced that the bill would soon change to stress this further: instead of covered models being defined as those trained with 10^26 FLOPs, they’ll now be defined as those trained with 10^26 FLOPs that also cost at least $100 million to train (a rough sketch of this compound test is included below).
Wiener also argued that the bill would not harm the open-source ecosystem as some have suggested, clarifying that its shutdown provision only applies to models in the developer’s possession, and that developers would not be liable for the effects of their model if someone else had significantly fine-tuned it. He said both points would be clarified in forthcoming amendments to the bill.
Closing the session, Wiener responded to some of the more outlandish criticism of the bill. “I don't think this is thought policing: it's about doing a safety evaluation.”
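For the mechanically minded, here is a minimal sketch of how the amended “covered model” definition would work as a compound test, assuming the thresholds Wiener described; the constant and function names are illustrative, not drawn from the bill text.

```python
# Hypothetical illustration only: the actual statutory language may differ.
# Thresholds below are assumptions taken from Wiener's description above.

FLOP_THRESHOLD = 1e26            # training compute, in floating-point operations
COST_THRESHOLD = 100_000_000     # training cost, in US dollars

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Under the amended definition, BOTH thresholds must be met."""
    return training_flops >= FLOP_THRESHOLD and training_cost_usd >= COST_THRESHOLD

# A 10^26-FLOP run that cost only $60m would no longer be covered:
print(is_covered_model(1e26, 60_000_000))    # False
print(is_covered_model(2e26, 150_000_000))   # True
```

The point of requiring both conditions is to narrow the bill’s scope: a model would have to clear the compute bar and the spending bar before any of the bill’s obligations attach.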
Meanwhile, Governor Gavin Newsom gave mixed messages on whether he wants AI regulation in California:
“If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
But also: “When you have the inventors of this technology, the godmothers and fathers, saying: ‘Help, you need to regulate us,’ that’s a different environment,” Newsom said.
On SB 1047, Newsom said it’s a “work in progress” and didn’t elaborate.
The discourse
Eric Schmidt is worried about agents:
“Some believe that these agents will develop their own language to communicate with each other. And that’s the point when we won’t understand what the models are doing. What should we do? Pull the plug? Literally unplug the computer? It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand. That’s the limit, in my view … A reasonable expectation is that we will be in this new world within five years, not 10.”
He also said the US and China should agree on something: “if you’re going to do training for something that’s completely new on the AI frontier, you have to tell the other side that you’re doing it”.
And said he’s “much more concerned about the proliferation of open source” than he is about China.
Mary Robinson said governments need to move faster on AI:
“I remain deeply concerned at the lack of progress on the global governance of artificial intelligence … Ungoverned AI poses an existential risk to humanity and has the potential to exacerbate other global challenges.”
In a Time profile, Dario Amodei said people misunderstand Anthropic’s mission:
“We’re not trying to say we’re the good guys and the others are the bad guys. We’re trying to pull the ecosystem in a direction where everyone can be the good guy.”
Tristan Harris said AI companies face bad incentives:
“These companies must be subjected to a liability framework that exposes them to meaningful financial losses, should they be found responsible for harms. Only then will they take safety more seriously.”
Tharman Shanmugaratnam, president of Singapore, said we need to regulate AI:
“We cannot leave AI to the law of the jungle … We can’t wait till we wake up and find out if the singularity has arrived.”
Sara Hooker said it’s good that OpenAI dissolved its superalignment team, actually:
“Companies that succeed with safety build it into the development process.”
Rep. Alexandria Ocasio-Cortez said we need legislation to ban nonconsensual deepfake porn.
Patrick Collison said Mark Zuckerberg’s AI strategy is winning him friends:
“I told Mark, I think that open sourcing LLaMA is the most popular thing that Facebook has done in the tech community — ever.”
Yann LeCun got in quite an entertaining fight with Elon Musk:
“I very much dislike his vengeful politics, his conspiracy theories, and his hype.”
Policy
The US is reportedly delaying approvals for Nvidia and AMD to export AI chips to the Middle East, including the UAE. Officials are reportedly conducting a review to figure out how to stop those chips from ultimately being used by Chinese companies.
Last week, Sen. Marco Rubio and other China hawks expressed national security concerns about Microsoft’s deal with G42, which was brokered by the Commerce Department.
NIST launched ARIA, “a new testing, evaluation, validation and verification (TEVV) program intended to help improve understanding of artificial intelligence’s capabilities and impacts”. It will support the US AI Safety Institute (AISI).
The EU unveiled the AI Office, run by Lucilla Sioli and composed of five units (Regulation and Compliance, Excellence in AI and Robotics, AI for Societal Good, AI Innovation and Policy Coordination, AI Safety).
China set up its third chip fund, its biggest yet, with $47.5b to spend.
The UN AI Advisory Board had a meeting in Singapore this week.
Politico has an interesting piece on how Secure Enclave butts up against the CHIPS Act.
Influence
A new report from Public Citizen found that over 560 clients lobbied on AI last year, up from 270 a year earlier. The number of individual lobbyists rose to 3,410, 85% of whom “represented corporate interests”.
Interestingly: at the start of 2023, 323 people lobbied the White House on AI; by the end of the year it was 931.
Elon Musk is increasingly talking to Donald Trump, Bloomberg reported, about crypto and other topics. Musk might be invited to speak at the Republican convention, and is reportedly being considered for an advisory role if Trump wins.
Andreessen Horowitz poured another $25m into crypto super PACs.
I missed this last week: Americans for Responsible Innovation held an event with Sens. Young and Ernst and Reps. Lieu, Himes, and Beyer.
Industry
OpenAI confirmed that it’s “recently begun training its next frontier model”, which it anticipates will bring it “to the next level of capabilities on our path to AGI”.
Apple reportedly signed a deal with OpenAI (which Microsoft isn’t thrilled about), and the new version of Siri will actually be able to control apps.
xAI raised $6b from Andreessen Horowitz, Sequoia, and others.
Meta is reportedly considering launching a paid version of its chatbot.
Anthropic added tool use to Claude’s API.
Google said that its AI Overviews “generally don’t ‘hallucinate’”, and said the viral examples of its mistakes last week were because it struggles to get satire.
Palantir signed a $480m computer vision deal with the US Army.
Mistral released Codestral. It can’t be used for commercial purposes.
OpenAI launched new plans for nonprofits and universities.
PwC is now OpenAI’s biggest customer, and will become OpenAI’s “first reseller for ChatGPT Enterprise”.
A Saudi Aramco fund participated in Zhipu AI’s $400m funding round, according to the FT, making it the only foreign investor in one of China’s leading AI startups.
CoreWeave is reportedly prepping a 2025 IPO.
Nvidia stock continues to soar; the company is close to overtaking Apple’s market cap.
South Korean chip inventories dropped 33.7% YoY, the biggest decrease since 2014. Combined with growing exports, it signals that chip demand is massively outstripping supply.
Perplexity is reportedly raising $250m at a $3b valuation, led by Bessemer.
Maven AGI launched with a $28m Series A.
SoftBank said it aims to invest $9b a year in AI.
Google’s spending $2b on its first data centre in Malaysia.
Scale AI published its proprietary SEAL Leaderboards.
The Atlantic and Vox Media signed licensing agreements with OpenAI.
Moves
Jan Leike joined Anthropic, where he’ll lead a new team focused on “scalable oversight, weak-to-strong generalisation, and automated alignment research”. Sam Bowman will lead a separate safety team, focused on “safety cases” and other things.
Anthropic’s Long-Term Benefit Trust appointed Jay Kreps to Anthropic’s board. Luke Muehlhauser is leaving the board. Time has a piece diving into Anthropic’s unusual corporate structure.
I missed this last week: Anthropic hired Krishna Rao, formerly of Airbnb, as its CFO.
France is setting up a “Centre d’évaluation de l’IA”, and hiring a director.
OpenAI has brought back its robotics team. It reportedly “intends to coexist rather than compete” with companies like Figure.
Best of the rest
OpenAI found that organisations in Russia, China, Iran and Israel were using its tools to run covert influence operations. It said GPT didn’t seem to have “meaningfully increased their audience engagement or reach”, but OpenAI disrupted their use of its services regardless. Meta also said it had removed AI-generated influence campaigns originating in China, Israel, Iran and Russia from Facebook.
An AI-generated image of Gaza seems to have become the first viral AI activism image.
A new Epoch AI report found that “the amount of compute used in training is consistently being scaled up at 4-5x/year”.
In a new paper, Daron Acemoglu argues that the GDP boost from AI will be “modest, in the range of 0.93%–1.16% over 10 years”.
Paul Romer thinks AI’s a bubble.
Jeremie & Edouard Harris went on Joe Rogan to talk about AI risk.
Wired launched an AI elections project to track “every instance of AI’s use” in this year’s global elections.
A Center for Countering Digital Hate report found that it’s still very easy to deepfake politicians’ voices.
RAND published a new report on securing AI model weights.
In Lawfare, a group of researchers said tort law can do more for frontier AI governance than some realise, though it’s not a silver bullet.
Google DeepMind’s Conor Griffin published an “AI Policy Atlas”, designed to give an overview of the many topics in AI policy.
In Time, Fei-Fei Li and John Etchemendy said AI isn’t sentient. The piece was sharply criticised by many.
Shannon Vallor has written a new book which offers the usual “x-risk is a distraction” arguments. She says LLMs are “mirrors” and therefore can never become AGI.
Max Tegmark said tech companies are winning the AI discourse.
The IMF’s Gita Gopinath said AI could make the next recession worse.
Former FCC Chair Tom Wheeler discussed AI regulation on the Lawfare Podcast.
Sam Altman signed the Giving Pledge.
Geoffrey Hinton features in the opening of Atlas, the new Netflix J-Lo movie about AI. It’s apparently not very good.
Coming up
Tuesday: The House Energy and Commerce committee will hold a hearing on “Powering AI: Examining America’s Energy and Technology Future”.
Tuesday: The Joint Economic Committee will hold “hearings to examine artificial intelligence and its potential to fuel economic growth and improve governance”.
If you’ve been forwarded this email, click here to subscribe and receive future editions. Thanks for reading, have a great weekend.