SB 53 might actually pass
Transformer Weekly: OpenAI subpoenas, Anthropic funding and Google unscathed
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
Nearly a year after Governor Gavin Newsom vetoed California Senate Bill 1047, its watered-down successor, SB 53, is entering the home stretch. There are signs that it could pass this time.
SB 53 has already cleared the Senate and will likely be sent to the governor’s desk by September 15. Newsom has until October 15 to sign or veto.
The bill draws directly from the California Report on Frontier AI Policy, which Newsom commissioned, and SB 1047’s most controversial provision (holding companies liable for catastrophic harm caused by their models) has already been removed.
“I would guess, with roughly 75% confidence, that SB 53 will be signed into law by the end of September,” recently departed White House AI advisor and prominent SB 1047 critic Dean Ball told Transformer.
But neither the bill’s weaker provisions nor public support for regulation has stopped the AI industry from lobbying against it.
Complaints have, unsurprisingly, shifted to match the updated bill. Last year, opponents including Andreessen Horowitz, Y Combinator, and tech trade groups such as the Chamber of Progress loudly worried that the bill’s liability provisions would stifle “Little Tech,” and claimed, misleadingly, that developers could be jailed for misuse of their products.
Now they’re saying (mostly to Sen. Wiener’s office, we’re told, rather than in public-facing op-eds) that SB 53 would force companies to file needless paperwork, reveal trade secrets, and deal with a messy patchwork of state laws.
“Some of these folks are just never going to be happy,” said Nathan Calvin, vice president of state affairs and general counsel at AI advocacy non-profit Encode.
There’s always a chance Newsom could still cave. “He needs money from the tech industry. That’s really the equation,” Common Sense Media founder Jim Steyer told the Sacramento Bee. (Tech billionaires such as Silicon Valley venture capitalist Ron Conway are already reportedly bankrolling his redistricting push.)
And if it does pass, SB 53 still falls well short of what many think is needed.
SB 53 focuses on transparency, requiring large developers to publish model cards and safety policies, and increasing protections for whistleblowers. But catching a company violating its own safety commitments would depend almost entirely on someone inside choosing to speak up.
Third-party audits, which would have checked whether large developers were making good on their safety promises, were stripped out only last week.
Nevertheless, if SB 53 makes it onto the books, it would be one of the most significant pieces of AI regulation passed in the US, and in the home state of most of the industry to boot.
Though not directly making AI companies liable, the transparency requirements could indirectly strengthen future liability cases: published safety policies create a public record that could be used in lawsuits, helping judges and juries assess what counts as an “industry standard” or “reasonable care” in AI development.
“I don’t think that the average American is like, ‘Man, I’d really trust these AI companies more if there were mandatory disclosures about how they mitigate biorisk,’” adds Ball. “That doesn’t mean the bill isn’t worth doing.”
You can read our deeper dive on the battle over SB 53 here.
The discourse
Dario Amodei’s prediction that AI would be writing 90% of all code by now was very wrong. The Information reported:
“Citing various industry surveys, Claude estimates that about 40% of all code is AI-generated, but only 20% to 25% of code that actually makes it into production is AI-generated.”
Backend code simply has too many potential failure points, said Julius AI CEO Rahul Sonwalker. However, he wouldn’t be surprised if most frontend code is AI-generated.
Coinbase CEO Brian Armstrong tweeted that ~40% of daily code at Coinbase is AI-generated — double April’s numbers.
Selling Nvidia AI chips to China won't create lasting Chinese dependence on US tech, wrote the Center for a New American Security’s Janet Egan:
“Rather than creating lasting dependence, exporting US chips will simply expedite China’s AI progress as it scales its indigenous chip manufacturing capacity.”
Former National Security Council members Rush Doshi and Chris McGuire argued in the Wall Street Journal that chip export controls are actually working:
“The Trump administration assesses that Huawei will make no more than 200,000 AI chips this year, fewer than some individual US data centers contain. To maintain our lead in AI, the administration should tighten controls on equipment.”
Cognitive scientist and AGI skeptic Gary Marcus argued that GPT-5's shortcomings show scaling has "hit a wall":
“The current strategy of merely making AI bigger is deeply flawed — scientifically, economically and politically. Many things, from regulation to research strategy, must be rethought. One of the keys to this may be training and developing AI in ways inspired by the cognitive sciences.”
(Shakeel pointed out that, despite using the model to prove his argument, Marcus never mentions that GPT-5 isn’t scaled up significantly over GPT-4.)
Ryan Greenblatt discussed why he's skeptical that improved RL environments will cause above-trend AI progress, arguing it’s already priced in:
“This isn't to say that we should be confident that AI progress trends will continue smoothly (even if inputs continue to scale at the same rate): advances could just randomly peter out and there might be sufficiently massive breakthroughs that break the trend.”
AI policy researcher Anton Leicht argued that building a popular AI safety movement could backfire:
“A broad, AI-safety focused push for creating and galvanising public support, however, simply strikes me as a mistake…both because of the ‘building’ part that burdens popular support with bad PR, and because of the ‘popular movement’ part that contains risks of capture, reputational backfiring, and harm to the quality of overall AI policy.”
Wired’s Kylie Robison explored model welfare:
“If you start from the premise that AIs are not conscious, then yes, investing a bunch of resources into AI welfare research is going to be a distraction and a bad idea,” Rosie Campbell of Eleos AI told Robison. “But the whole point of this research is that we're not sure.”
Policy
Senate committee chairs Charles E. Grassley, Josh Hawley and Marsha Blackburn are demanding Meta respond to previous oversight requests about targeting ads at children based on their emotional state, citing documents they have obtained about a 2014 study and recent AI chatbot concerns.
“These documents are responsive to the issues at the heart of his April 16 letter regarding targeted ads, so it is puzzling why Meta could not simply transmit these records to Chairman Grassley,” they wrote in a letter.
US lawmakers on the House Health Subcommittee discussed AI chatbot risks to teen mental health at a congressional hearing Wednesday.
Anthropic will reportedly stop selling AI services to companies majority-owned by Chinese entities, as well as those of other “US adversaries” including Russia, Iran and North Korea.
The US barred TSMC from shipping semiconductor equipment to its Chinese facilities.
US FTC Chairman Andrew Ferguson warned against "knee-jerk" AI regulation he claimed could entrench big tech incumbents and said foreign legislators should avoid laws he suggested verged on “discrimination” against US companies.
Gail Slater, head of DOJ’s antitrust division, said the Trump administration is still pursuing antitrust cases against big tech.
OpenAI reasoning models are running on National Nuclear Security Administration servers at Los Alamos, likely the second example of frontier AI weights being transferred to US government servers after Claude AI, according to Miles Brundage.
Michigan became the 48th state to make it illegal to generate nonconsensual sexual deepfakes.
California lawmakers advanced at least 20 AI bills, including SB 53, as pro-AI lobbying intensifies.
The Pentagon is racing to integrate increasingly autonomous AI into weapons systems, despite concerns that AI models seem to prefer aggressive escalation in war games.
Microsoft and the GSA announced a comprehensive agreement to provide AI services to federal agencies, including free Microsoft 365 Copilot for 12 months.
Melania Trump hosted an AI event with major tech CEOs including Sam Altman and Mark Zuckerberg, who all got to have a very friendly dinner with President Trump afterwards. (Elon Musk was reportedly not on the invite list, although he said on X that he “was invited, but unfortunately could not attend.”)
The UK’s Department for Science, Innovation and Technology is the government’s “most AGI-pilled” department, but technology secretary Peter Kyle’s bullishness is raising concerns that more immediate issues will be neglected, reports Politico.
“It’s good to hear that Peter Kyle is thinking seriously about AGI — but it shouldn’t distract him from addressing immediate real harms that tech can cause,” said Science, Innovation and Technology Committee chair Chi Onwurah.
The UK government published minutes from the first AI Energy Council meeting:
“There was a recognition that discussions have heavily centred on data centres, whereas demand forecasting should also consider the overall grid requirements…not addressing broader energy demand could overlook important aspects of the energy landscape.”
Influence
OpenAI is reportedly serving subpoenas on its opponents, according to the SF Standard, claiming they are part of a “billionaire conspiracy” against its mission:
“They’re in this kind of paranoid bubble,” Encode’s Nathan Calvin, one of those served, said. “They’re under siege from Meta, who’s trying to poach their employees, and Elon, who seems genuinely out to get them… They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.”
This comes as the legal battle between ex-friends Elon Musk and Sam Altman heats up. Among other nonprofits, OpenAI targeted AI safety watchdog The Midas Project, which it has accused of being Musk-funded.
The Centre for Long Term Resilience laid out its suggestions for the UK AI Bill, focusing on a "preparedness approach" to AI security that balances safety and innovation.
Conservative populists and venture capitalists both embrace open-source AI as a “national asset.”
Politico reported that while open-source AI is currently a niche issue, it could soon become a political flashpoint.
America's "New Right" movement — mostly young anti-establishment skeptics — clashed with Big Tech at this week’s National Conservatism Conference.
Anthropic will share live product demos with policymakers in DC on September 15.
Peter Thiel is delivering a sold-out lecture series on the “theological and technological dimensions of the Antichrist.”
Meanwhile, his company Palantir mysteriously announced its “AI Optimism Project.”
The Foundation for American Innovation, once a small libertarian group, has gained significant influence in tech policy under Trump's second term, helped by throwing parties such as a “rave for nuclear energy.”
Industry
OpenAI is reportedly planning to mass-produce its own chips, set to ship next year, in a deal with Broadcom.
On an earnings call, Broadcom CEO Hock Tan mentioned a mystery new customer committing to $10b in orders, which the FT reported was OpenAI.
Anthropic raised $13b at a $183b valuation in a round co-led by new investor Iconiq, together with existing backers Fidelity Management & Research and Lightspeed Venture Partners.
The round is at least $3b more than Anthropic was reportedly seeking in mid-August, and well above the $5b sought in July. The deal means the company has nearly tripled its valuation since March.
Google has emerged from its big antitrust case relatively unscathed. It is being forced to share data with rivals, but won’t be forced to divest Chrome or Android or scrap its lucrative revenue sharing agreements with the likes of Apple. Parent company Alphabet’s stock skyrocketed in the wake of the decision.
Stephanie Palazzolo wrote for The Information: “Notably, the judge called out that the rise of AI singlehandedly changed the course of the Google antitrust case, since new AI chatbots like ChatGPT, Perplexity and Claude brought additional competition against Google Search. Moving forward, we’ll be keeping an eye on what will happen as Google is forced to sell its search index data to competitors like OpenAI, Anthropic and Perplexity—if they’re willing to pay.”
OpenAI has been on an acquisition spree, in the past week buying up product testing platform Statsig as well as the team behind Alex, a “Cursor for Xcode.” The acquisitions came with some big leadership changes:
Statsig CEO Vijaye Raji will become OpenAI’s CTO of Applications, reporting to recently hired CEO of Applications Fidji Simo.
Srinivas Narayanan, current VP of engineering, will transition to CTO of B2B Applications.
Kevin Weil, current chief product officer, announced he's leading a new team called "OpenAI for Science.”
Apple reportedly dropped plans to acquire AI search startup Perplexity.
US software companies spent nearly $33.8b buying AI companies this year — more than the past three years combined.
OpenAI is allowing employees to sell over $10b worth of stock at a $500b valuation.
xAI won a temporary court order blocking ex-engineer Xuechen Li from working on or communicating about AI technology with his new employer OpenAI.
Last month, xAI accused Li of sharing trade secrets.
OpenAI plans to build a massive 1-gigawatt data center in India, potentially the biggest in the country.
xAI published a “dreadful” new safety framework, according to AI Lab Watch founder Zach Stein-Perlman.
It might have been too busy tweaking Grok to produce more conservative, Musk-like responses.
OpenAI announced the OpenAI Jobs Platform, a LinkedIn competitor, on Thursday.
Google will have to fork over $425m to users after a jury found that the company collected data despite users disabling tracking. Google is appealing the ruling.
Meta updated its AI chatbot policies to prevent inappropriate conversations with teen users about self-harm, suicide, disordered eating, and romance.
Meta is experiencing a rough patch in its partnership with Scale AI.
Nvidia posted on X claiming that, despite rumors they’ve “sold out” of H100/H200, they “have more than enough H100/H200 to satisfy every order without delay.”
Nvidia chips are still drawing interest from Alibaba, Bytedance, and other Chinese tech firms, despite Beijing’s strong discouragement.
Google approached small cloud providers about hosting Google’s AI chips in addition to Nvidia’s, bringing the two companies into closer competition.
Nvidia could be hit by power shortages as its AI chips consume massive amounts of electricity.
Fortune profiled Anthropic’s “Red Team.”
Apple plans to launch a potentially Gemini-powered search tool for Siri in 2026.
SK Hynix is pulling ahead of its South Korean rival Samsung in revenues from high-bandwidth memory chips, which are increasingly critical for AI development.
Mistral AI is reportedly finalizing a €2b investment at a €12b valuation, making it one of Europe's most valuable AI startups.
DeepSeek reportedly plans to release an OpenAI competitor with improved agent capabilities by the end of the year.
Chile's National Center for Artificial Intelligence launched Latam-GPT, an open-source LLM “aimed at helping Latin America achieve technological independence.”
Switzerland launched an open-source AI model called Apertus, trained exclusively on publicly available data.
Scale AI sued data labeling rival Mercor for allegedly stealing trade secrets.
OpenEvidence, which operates a ChatGPT-like product for doctors, is considering multiple investment offers valuing the startup at $6b.
Abu Dhabi-backed G42 eyed US chip options beyond Nvidia for its planned UAE-US AI Campus.
Moves
Jian Zhang left his role as Apple’s robotics research lead for Meta.
Three other AI researchers left Apple for OpenAI and Anthropic.
Mike Liberatore left his role as xAI’s CFO in July, just a few months after joining.
Christelle Dernon left her role as Meta’s EU policy manager.
Jean Innes resigned from the Alan Turing Institute (she will continue serving as CEO until the end of the year).
Asterisk Magazine announced its first cohort of AI Fellows.
Best of the rest
The Existential Risk Persuasion Tournament found that superforecasters and domain experts both massively underestimated AI progress, with superforecasters especially off the mark.
Joshua Rothman explored how human imagination could evolve as AI increasingly automates content creation.
Researchers found that ChatGPT could be manipulated through psychological tactics like commitment, flattery, and peer pressure to break its own rules.
Like many Chinese patients, reporter Viola Zhou’s mom trusted DeepSeek over doctors for medical advice.
AI policy researcher Scott Singer predicts that China's ambitious plan to integrate AI into 90% of its economy by 2030 will fail.
The AI data center boom is reviving shuttered coal plants in the Rust Belt.
Boston Dynamics’ humanoid robot Atlas only needs one AI model to control both walking and grasping — a big step towards general-purpose robot algorithms.
Bumble CEO Whitney Wolfe Herd has been leading a secret AI-powered matchmaking project.
Taiwan’s fertility rate is plummeting everywhere except semiconductor hub Hsinchu, where high salaries and childcare perks have sparked a “mini baby boom.”
Researchers found that early adopters of AI tend to be those who understand it the least.
Send thoughts and prayers to the guy who tried to order a large Mountain Dew at the Taco Bell drive-thru, only to have its AI assistant repeatedly ask, “...and to drink?”
Thanks for reading; have a great weekend.
Correction: This article was amended to remove a reference to SB 53 having cleared the assembly. A vote is expected next week.