Who will ‘control’ OpenAI?
Transformer Weekly: OpenAI’s for-profit plans, Fidji Simo’s new role, and the end of the diffusion rule
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
After public backlash and pressure from the California and Delaware attorneys general, OpenAI tweaked its for-profit conversion plans, Lynette Bye writes.
The company still plans to become a public benefit corporation, and will remove its capped-profit structure. But the non-profit will now “control” the new PBC, according to OpenAI, and “also be a large shareholder” of it.
“Control” isn’t precisely defined, but presumably it means the non-profit will own a majority of voting rights.
Wired reports that the non-profit will “have the right to appoint and remove board members from the public-benefit corporation”.
The question is whether this is enough. Some activists aren’t sure.
“Based upon what I've seen publicly, this does not seem at all like a victory,” former OpenAI employee Page Hedley told the Politico Tech podcast.
Other experts Transformer spoke to felt similarly, worrying that this change might just be a clever way to get what OpenAI wants without the scrutiny.
OpenAI was carefully structured to ensure that all commercial activity legally had to serve the non-profit’s mission: ensuring AGI benefits all of humanity. In fact, that’s why OpenAI’s founders originally chose their current structure over a PBC, according to OpenAI president Greg Brockman.
If OpenAI does anything inconsistent with that mission, the non-profit board is supposed to step in. If they don’t, attorneys general can take the matter to court.
Converting to a normal PBC would change that, even if the non-profit is given a majority of voting rights in the new company, because a PBC is only legally obligated to balance profit against the non-profit’s mission rather than to serve that mission above all else.
“If it's the case that the way the non-profit has control is by appointing directors of the PBC and the PBC runs things, that is not really non-profit control in the meaningful sense,” Nathan Calvin of Encode, an advocacy group that opposes the OpenAI changes, told Transformer.
Rose Chan Loui, the director of UCLA Law's non-profit program, thinks the non-profit should get additional rights beyond a majority of voting rights.
She told Transformer that these could include contractual rights to veto decisions or to govern future AGI technologies, as OpenAI has previously said.
Chan Loui also thinks contracts should explicitly limit certain investor rights, such as the right to sue if their profit isn’t maximized.
The devil, in other words, is in the details. The decision might ultimately not be up to OpenAI: Delaware Attorney General Kathy Jennings said she’ll be reviewing the new plan to ensure “the non-profit entity retains appropriate control over the for-profit entity.”
There’s one other player, too: Microsoft, which reportedly has the right to block the restructuring — and is still “actively negotiating” the deal, according to Bloomberg.
Sam Altman testified at a Senate Commerce hearing, saying that requiring government approval before deploying AI systems would be “disastrous”.
When asked if self-regulation was sufficient, he said “some policy is good … [but] it is easy for it to go too far”.
“Standards can help increase the rate of innovation, but it’s important that the industry figure out what they should be first,” he argued.
He did not, from what I can tell, properly discuss any of the potential catastrophic risks from AI that he’s previously warned about.
Sen. Ted Cruz used the hearing to tout a forthcoming bill, which he said would “remove barriers to AI adoption, prevent needless state over-regulation and allow the AI supply chain to rapidly grow here in the US”.
Elsewhere in the hearing, Microsoft’s Brad Smith, CoreWeave’s Michael Intrator, and AMD’s Lisa Su emphasized the importance of building data centers and energy capacity to maintain America’s AI lead.
The discourse
Alasdair Phillips-Robins and Sam Winter-Levy warned against exporting advanced AI chips to the Gulf:
“If—lured by the promise of cheap energy and Gulf capital—the Trump administration greenlights unlimited chip exports, then it will be placing the most important technology of the 21st century at the whims of autocratic regimes with sophisticated surveillance systems, expanding ties to China, and interests very far from those of the United States.”
Mary Clare McMahon published a detailed look into Huawei’s attempts to beat Nvidia:
“The core question is not whether Nvidia’s dominance is being contested, but whether Huawei’s software strategy can mature enough for a full-stack transition away from US hardware.”
At a House Judiciary hearing, Helen Toner said the government needs more insight into AI development:
“The US government has a critical national security interest in understanding and monitoring the technology being built inside leading US AI companies.”
David Duvenaud published an accessible explanation of “gradual disempowerment” in The Guardian:
“Eventually, with no one having planned or chosen it, we might all find ourselves struggling to hold on to money, influence, even relevance.”
Investor Paul Tudor Jones talked about AI catastrophic risks:
“All these folks in AI are telling us we’re creating something that’s really dangerous. It’s going to be really great, too, but we’re helpless to do anything about it. That’s, to their credit, what they’re telling us, and yet we’re doing nothing right now, and it’s really disturbing.”
Jack Clark went on Tyler Cowen’s podcast:
“I think there is a chance of something that looks like a nonproliferation agreement between states, including the US and China in the limit.”
Pope Leo XIV’s name has something to do with AI, according to the Holy See press office (translated with Google Translate):
“It is a clear reference [to Leo XIII] … Clearly it is not a casual reference to men and women, to their work, even in times of Artificial Intelligence.”
Policy
The US government said it would repeal the AI Diffusion Rule.
On Twitter, David Sacks explained why, arguing it created “a bottleneck that chilled legitimate, non-sensitive commerce” and “strained relationships with key allies”.
“Yes, we must take aggressive steps to prevent advanced semiconductors from being illegally diverted into China. But that goal should not preclude legitimate sales to the rest of the world as long as partners comply with reasonable security conditions,” he added.
The administration said it’ll be “replacing it with a much simpler rule”. Bloomberg reported that this will include “curbs on countries that have diverted chips to China, including Malaysia and Thailand”.
Trump's budget proposal preserved AI funding at the NSF and significantly increased funding for BIS.
Sen. Tom Cotton introduced a bill requiring location verification mechanisms on advanced AI chips to prevent smuggling to China.
Rep. Bill Foster plans to introduce similar legislation, which would also include on-chip shutdown mechanisms.
A group of senators re-introduced the TEST AI Act, which calls on NIST and the DOE to develop AI measurement standards.
The FDA has reportedly held meetings with OpenAI to discuss using AI for drug evaluation.
The EU missed its May 2 deadline to deliver the GPAI Code of Practice. The final version is now due “by August”.
The UK AI Security Institute published a detailed research agenda.
It said it “is committed to delivering rigorous, scientific research into the most serious emerging risks from AI — including cyber and chemical-biological risks, criminal misuse, and risks from autonomous systems — and testing and developing mitigations to those risks.”
The UK government's Data (Use and Access) Bill cleared the House of Commons despite efforts from opponents to make changes related to AI and copyright.
Influence
In a private meeting with the House Foreign Affairs Committee, Jensen Huang reportedly emphasized the importance of skilled immigration and reformed export controls.
Over 100 AI experts published the "Singapore Consensus on Global AI Safety Research Priorities”.
Contributors include OpenAI board member Zico Kolter and senior folks from the US, UK, Japan, France and Singapore AISIs.
It identifies a bunch of research priorities, including loss-of-control risk assessment and “AGI and ASI control problems”.
The AI Policy Network registered a PAC.
Anthropic hired Continental Strategy as a lobbyist.
Americans for Responsible Innovation hired Mindset Advocacy as a lobbyist.
The Innovative Future Collective, described by Politico as “a nonprofit aimed at improving education and understanding of AI technologies and policy impact” run by two FK & Company execs, held a well-attended launch event in DC.
A group of think tanks published the “Techno-Industrial Policy Playbook”, which includes a few pieces on AI.
Industry
OpenAI announced that Fidji Simo is joining the company as “CEO of applications”, reporting to Sam Altman.
She’ll take on a lot of Altman’s responsibilities, with COO Brad Lightcap, CFO Sarah Friar and CPO Kevin Weil all reporting to her, according to Bloomberg.
Altman, meanwhile, plans to increase his focus on “research, compute, and safety”.
Nvidia is reportedly about to release a “downgraded H20” for Chinese customers to get around new export controls.
OpenAI published a more thorough postmortem of the GPT-4o sycophancy debacle.
It says that testers noticed something “off” with the model, but the company rolled it out anyway. It admits this was a mistake, and commits to doing better on this in future.
OpenAI launched a new initiative to help countries build AI infrastructure through Stargate-esque partnerships.
Anthropic is reportedly offering to buy back employees’ shares at a $61.5b valuation.
Google finally released a model card for Gemini 2.5 Flash, which shows that the model “performs worse on text-to-text and image-to-text safety” than its predecessor.
Google partnered with Elementl Power to provide 1.8GW of nuclear power across three data center sites.
Huawei is reportedly building three advanced chip manufacturing facilities in Shenzhen.
Abu Dhabi's state-backed G42 established a US entity as part of its expansion plans.
The group’s seemingly given up on competing at the frontier, telling Bloomberg that “it’s just not reasonable for us to do as a nation of this size”.
Apple is reportedly working with Anthropic to integrate Claude into Xcode.
OpenAI has reportedly agreed to buy AI coding tool Windsurf for $3b.
Cursor developer Anysphere reportedly raised $900m at a $9b valuation.
Andrew Ng’s AI fund raised $190m.
Anthropic launched an AI for Science program offering up to $20k in API credits to researchers working on scientific projects.
Moves
Robert Fergus is the new leader of Meta’s FAIR. According to Yann LeCun, the organization is “refocusing on Advanced Machine Intelligence: what others would call human-level AI or AGI.”
Former US AISI director Elizabeth Kelly joined Anthropic to lead its new Beneficial Deployments team.
Steve Newman’s launching the Golden Gate Institute for AI, along with Rachel Weinberg and Taren Stinebrickner-Kauffman. It’s focused on sense-making and analysis of AI developments.
Brad Carson is now president and CEO of Americans for Responsible Innovation and the Center for Responsible Innovation.
Hasan Sukkar stepped down as CEO of 11x.
Best of the rest
UK AISI researchers published a sketch of an alignment safety case based on debate.
METR discussed how it worked with Amazon to “pilot a new type of external review in which Amazon shared evidence beyond what can be collected via API, including information about training and internal evaluation results with transcripts, to inform our assessment of its AI R&D capabilities”.
A new paper discusses “third-party compliance reviews for frontier AI safety frameworks”.
Asterisk published an interesting dialogue between Ajeya Cotra and Arvind Narayanan about the pace of AI progress.
A great piece for MIT Tech Review looks at the various problems with AI benchmarks, and some of the efforts underway to improve validity.
This piece was funded by Tarbell Grants — if you want funding to produce similarly excellent journalism, applications for the current round of grants are open now.
People in the data center industry warned that Trump's crackdown on renewable energy could undermine the buildout of AI infrastructure.
A Bloomberg analysis found that “about two-thirds of new data centers built or in development since 2022 are in places already gripped by high levels of water stress.”
The largest deepfake porn site permanently shut down after a "critical service provider" terminated access.
Rolling Stone has a big piece on how AI is fueling spiritual delusions, with some users believing they've "awakened" chatbots.
This New York Magazine piece on AI cheating in colleges was all anyone seemed to talk about this week.
Greg Brockman was at the Met Gala.
Thanks for reading; have a great weekend.