Alex Bores wants to fix Dems’ AI problem
Transformer Weekly: Anthropic’s political donations, energy policy, and an xAI exodus
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
And a big announcement: please welcome Veronica Irwin, our new senior policy reporter. She’s based in New York and DC — you can get in touch with her here.
NEED TO KNOW
Anthropic donated $20m to Public First Action, part of the flagship AI safety political fundraising operation.
The Trump admin has reportedly drafted a voluntary compact under which AI companies would commit to paying 100% of their data center power generation costs.
A flurry of xAI employees left the company, including two co-founders.
But first…
THE BIG STORY
Alex Bores thinks he’s come up with a winning AI policy strategy.
On Thursday, the New York Assembly member and RAISE Act author — who is running for a seat in the House — released an eight-point federal AI policy framework. “I think this is the national message, period,” Bores says in an interview with Transformer. “What this plan is doing is giving policy detail to the already existing desires of the vast majority of Americans that want a say in AI.”
The framework is a meaty release four months from the primary — a period in which most candidates are circulating little more than policy stubs. It would “nationalize” the RAISE Act, instituting the mandatory reporting and independent safety testing requirements from the bill’s original text, along with federal-level provisions such as a push to “engage in diplomacy” — which Bores describes as “active conversation with China and anyone else doing [AI] development.”
The framework also calls for dedicating significant resources to contingency plans for AI advancement, such as creating a kill switch should development turn catastrophic. “I think this ‘let it rip’ approach that is coming from the federal government right now is one that won’t serve us well, and we need to be very intentional about the potential futures and how we would respond in those futures,” Bores says.
Bores clearly hopes that making AI regulation a priority will help set him apart in a crowded, nine-candidate primary race. But his ambitions extend beyond his district. Democrats have struggled to find a coherent AI message, caught between those favoring lax regulations and a public increasingly uneasy about the technology. Bores argues that’s because nobody has articulated a clear vision — and is hoping his populist, progressive and tech-savvy message will change that.
Pointedly, Bores says that his “willingness to stand up to the political influence that is pushing for nothing to happen” could give Democratic leaders a helpful nudge.
By “political influence,” he clearly means donors to super PAC network Leading the Future, which include OpenAI’s Greg Brockman and venture capitalists Marc Andreessen, Ben Horowitz, Ron Conway, and Joe Lonsdale. Since November the PAC has spent more than $1m targeting Bores, buying attack ads that he says air so frequently they pop up on the TV when he stops by his neighborhood deli.
Not everyone in the party has such an adversarial relationship with the PAC — Democrats in Illinois seem happy to take its money, following a long tradition of tech-friendly Dems. But Leading the Future appears to see Bores as emblematic of a type of Democratic messaging it’s keen to stamp out before it spreads.
Bores’ supporters also seem to see him as a potential influence on national Democratic messaging. Three-quarters of his campaign donations come from out of state, in large part from wealthy AI safety advocates. And last week, a new PAC launched ads supporting Bores. (As of December 31, it was entirely funded by Anthropic employee Daniel Ziegler.)
Even if he makes it to Congress, Bores might struggle to translate his message into durable legislation. As Transformer reported, the text of Bores’ RAISE Act was severely weakened just before New York Gov. Kathy Hochul signed it into law, apparently at the behest of those same Silicon Valley titans.
Bores says he could build the Congressional coalition required to resist tech industry influence, naming Republican Sen. Josh Hawley and Democratic Rep. Ted Lieu as examples of lawmakers he’d approach. But persuading 218 members — let alone congressional leadership — to support a bill that still has teeth in its final form remains a big ask.
Bores insists, though, that getting RAISE passed at all shows he can put up a fight. “I am far from the first or only state legislator to propose regulating AI, but I think I am the only one, or one of the only ones, that got the governor to sign an AI safety bill after the Trump executive order saying they would punish states for doing that,” he says.
Democratic voters in his district will give their verdict at the ballot box in June’s primary. If Bores loses, expect other Democrats to become more fearful of AI regulation. But if he wins, they may have found their blueprint.
— Veronica Irwin
THIS WEEK ON TRANSFORMER
Why the AI industry can’t resist dirty on-site gas turbines — James Ball on the unfortunate logic driving the industry’s demand for off-grid gas power.
India’s AI summit is trying to do too much — Shakeel Hashim breaks down why you shouldn’t get your hopes up for the AI Impact Summit in New Delhi next week.
THE DISCOURSE
Mrinank Sharma, who worked on AI safety at Anthropic, left on an ominous note:
“I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.”
When asked if he left because “Anthropic’s practices were or became worse than you expected,” he replied cryptically:
“I’m limited in what I say by confidentiality agreements … I worked on ASL-3 deployment mitigations in line with the RSP for over a year. I led many of the efforts there. My priority is (and always has been at Anthropic) living up to my values and being in my integrity.”
And researcher Zoë Hitzig left OpenAI over its ads launch, warning in an NYT op-ed that the company is repeating Facebook’s mistakes:
“This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer … OpenAI says it will adhere to principles for running ads on ChatGPT … I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”
OpenAI’s Noam Brown expressed concern about Anthropic’s Opus 4.6 evals:
“I appreciate Anthropic’s honesty in their latest system card, but the content of it does not give me confidence that the company will act responsibly with the deployment of advanced AI models.”
Miles Deutscher is worried:
“The alarms aren’t just getting louder. The people ringing them are now leaving the building.”
Ejaaz echoed: “my god this week felt like that Red Wedding episode of game of thrones but for AI alignment…good week for the doomers”
Saikat Chakrabarti, who’s running to replace Nancy Pelosi in Congress, tweeted:
“I know Anthropic has been much more concerned about alignment than other AI companies, so can someone explain why Anthropic released Opus 4.6 anyway?”
Dean Ball flagged:
“This might be the first example of someone trying to take genuine ‘frontier AI governance’ issues and make them political…This person is running against Scott Wiener, attempting to carve out space ‘to the left’ of Wiener on AI safety.”
A viral essay from AI entrepreneur Matt Shumer argued that AI has reached a “Covid-like” inflection point:
“I think we’re in the ‘this seems overblown’ phase of something much, much bigger than Covid.”
As a reminder: Shumer was the man behind Reflection, the “world’s top open-source model,” which actually appeared to be a Claude wrapper.
POLICY
The Trump administration has reportedly drafted a voluntary compact under which AI companies would commit to paying 100% of their data center power generation costs.
Anthropic pledged to do so this week, following similar pledges from OpenAI and Microsoft last month.
Sens. Hawley and Blumenthal introduced the GRID Act yesterday, which would do something similar — and ban new grid-connected data centers.
The Pentagon wants OpenAI, Anthropic, Google, and xAI to deploy AI tools on both classified and unclassified networks without standard usage restrictions.
OpenAI agreed to let the Pentagon use ChatGPT for “all lawful uses” after months of internal deliberation, while Anthropic reportedly continues to resist the same terms due to concerns about reliability and safety.
The Trump administration reportedly plans to exempt AI hyperscalers from upcoming chip tariffs.
The exemptions are tied to TSMC’s commitment to invest $165b in building capacity in the US.
Sen. Elizabeth Warren and Sen. Jim Banks plan to introduce a Senate version of the AI Overwatch Act.
It comes after Anthropic CEO Dario Amodei met with some Senate Banking Committee members earlier this week to advocate for export controls.
The House China and Foreign Affairs Committee Chairs also urged the Trump administration to close export control loopholes allowing China to access advanced chip-making equipment, requesting a briefing on strengthening controls by next month.
Meanwhile, the Trump administration is reportedly pausing bans on Chinese data center equipment ahead of an April summit with Xi Jinping.
In a letter to the House China Committee, OpenAI accused DeepSeek of trying to distill OpenAI’s models to train its own:
“Our review indicates that DeepSeek has continued to pursue activities consistent with adversarial distillation targeting OpenAI and other US frontier labs.”
Labor advocates and trade industry officials testified to the House Education and Workforce Committee that human oversight is necessary when deploying AI in the workplace.
Chair Rep. Tim Walberg said AI regulation is “our opportunity to make sure that jobs are supported” as House Republicans aim to develop an AI regulatory framework by the end of the year.
INFLUENCE
Anthropic donated $20m to the flagship AI safety political fundraising operation.
The donation was specifically made to Public First Action, a dark money group which itself funds two allied super PACs. Federal government contractors are prohibited from donating directly to super PACs.
The Anthropic-backed operation’s first donations are supporting Republicans Marsha Blackburn — who torpedoed preemption efforts last year — and export control enthusiast Pete Ricketts.
Leading the Future, the super PAC backed by OpenAI’s Greg Brockman and the founders of Andreessen Horowitz, pledged $5m to support Rep. Byron Donalds’ Florida gubernatorial campaign. This is its first state-level race.
It also announced support for Laurie Buckhout, a Republican in North Carolina, and two Democrats in Illinois.
Scoop: Build American AI, Leading the Future’s affiliated 501(c)(4), hosted a reception for Senate Republicans’ Chiefs of Staff at the National Republican Senatorial Committee Winter Retreat last Saturday.
Dream NYC, a new super PAC supporting Alex Bores, came under scrutiny over whether it’s coordinating with the Bores campaign — which it is not allowed to do.
Bloomberg has a good profile of how Andreessen Horowitz has become a “lobbying juggernaut.”
a16z “helped guide the administration on what to put in the [Trump AI preemption EO], according to one person close to the White House.”
Tech executives donated $3.3m to a PAC supporting San Jose Mayor Matt Mahan’s California governor bid.
Mahan also received donations of $78,400 from Google co-founder Sergey Brin and Palantir co-founder Joe Lonsdale.
A new Abundance think tank called Next American Era is led by former Illinois Democratic Representative Cheri Bustos, who was a registered lobbyist for OpenAI and Oracle.
The Midas Project alleged that OpenAI broke California’s new AI safety law and could owe millions in fines.
INDUSTRY
Anthropic
Anthropic raised $30b at a $380b post-money valuation.
The round was led by GIC and Coatue, and co-led by Peter Thiel’s Founders Fund and the UAE’s MGX, among others.
Traditional institutional investors were well-represented: Fidelity, BlackRock, Blackstone, Goldman Sachs, JPMorgan and Morgan Stanley were all included in the round.
The “round also includes a portion of the previously announced investments from Microsoft and NVIDIA.”
The company’s revenue run-rate is now $14b, over 10x what it was this time last year.
Claude Code’s revenue run-rate has doubled to $2.5b since the start of the year.
It reportedly plans to build up to 10 GW of data center capacity by renting from cloud providers and leasing its own space — with help from a team of ex-Google execs.
It launched a 2.5x faster version of Claude Opus 4.6 for 6x the price.
It launched Cowork on Windows, directly competing with Microsoft’s Copilot.
Goldman Sachs has reportedly been working with Anthropic to automate back-office jobs.
The Wall Street Journal profiled Claude-whisperer Amanda Askell.
Meanwhile, the New Yorker covered Anthropic’s history, office vibe, and interpretability research.
One quote from researcher Emmanuel Ameisen: “It’s like we understand aviation at the level of the Wright brothers, but we went straight to building a 747 and making it a part of normal life.”
OpenAI
Ads officially came to ChatGPT users on Free and Go tiers.
OpenAI launched Spark, a new, faster Codex model that runs on Cerebras chips.
It retired GPT-4o today, to the dismay of loyal users and the relief of those concerned about its sycophancy and potential for harm.
Sam Altman reportedly told employees that ChatGPT is “back to exceeding 10% monthly growth.”
The New York Times reported that OpenAI hopes to triple its revenue to almost $40b this year.
Some executives at the company were apparently surprised by reports that it’s hoping to IPO by December, believing that “the company wasn’t ready.”
The Information reported that OpenAI uses a special version of ChatGPT to catch leakers by crosschecking news stories against internal documents and messages.
xAI
Co-founders Jimmy Ba and Tony Wu left xAI, along with several other staff members.
Half of xAI’s founding team is now gone.
As Alex Heath points out, SpaceX’s recent acquisition of the company let employees cash out — so why stick around?
Meanwhile, Elon Musk reorganized xAI into four core areas: Grok’s chatbot and voice product, Coding, the Imagine video product, and Macrohard, an agent-run software company.
He reportedly told employees that the company needs to build a factory and “self-sustaining city” on the moon to facilitate building data centers in space.
Will Marshall, chief executive of satellite company Planet, told the Financial Times that Musk’s plan is ahead of its time, “but none of this is insurmountable…the race is on.”
Apollo is reportedly nearing a $3.4b lending deal to fund an investment vehicle leasing Nvidia chips to xAI.
X hit $1b in annualized subscription revenue.
It also announced the winner of its $1m “Articles” contest: a user with a history of “racist and fringe” posts on X.
Google
Investors clamored to get in on Alphabet’s $20b bond sale to fund its AI infrastructure buildout.
It reportedly attracted “more than $100b of orders at its peak — among the strongest ever for a corporate bond offering.”
Gemini 3 Deep Think got an upgrade designed for science, math, and engineering.
It beat previous records on Humanity’s Last Exam and ARC-AGI-2, outperforming Claude Opus 4.6 and GPT-5.2.
Google appears not to have released an updated system card for the new model.
Google reported “distillation attacks” targeting Gemini — repeated prompts designed to probe the model’s inner workings.
Waymo introduced the Waymo World Model, built on DeepMind’s Genie 3, which generates photorealistic simulations of rare driving scenarios.
Meta
Meta plans to spend over $10b on a huge data center campus in Lebanon, Indiana.
EY, Meta’s auditor, flagged a $27b data center project as a “critical audit matter.”
Others
Google, Amazon, and Meta are spending so much on AI infrastructure that they’re projected to nearly eliminate free cash flow this year.
Google, Amazon, and Microsoft each reported hundreds of billions in unfilled cloud computing contracts, totaling $1.1t in revenue backlog.
Nvidia shares rose 8% as Jensen Huang defended hyperscalers’ AI infrastructure spending as “justified, appropriate, and sustainable.”
Microsoft’s Mustafa Suleyman told the FT that the company is building its own frontier models to fulfil its “true self-sufficiency mission.”
Amazon engineers are reportedly pushed toward Kiro, its in-house AI coding assistant, over Claude Code, which they can’t use without approval — despite Amazon’s large stake in Anthropic.
Apple’s still having trouble launching its upgraded Siri.
T-glass, an ultrathin glass sheet used in AI chips, is in short supply, potentially affecting prices for companies including Apple and Nvidia.
The semiconductor industry is facing 80-90% price increases from a memory chip shortage.
SemiAnalysis expects more price increases to come.
SemiAnalysis also reported that reinforcement learning and agentic AI are driving unexpected demand for CPUs from companies including Intel and AMD.
Samsung began commercial shipments of HBM4 memory chips.
ByteDance is reportedly developing an AI chip, to be manufactured by Samsung.
Its new Seedance 2.0 video-generation model went viral this week.
Mistral said its annualized revenue run rate topped $400m.
Legal AI startup Harvey raised $200m at an $11b valuation.
Runway raised $315m at a $5.3b valuation.
UK AI chip startup Olix raised $220m at a $1b valuation.
OpenAI, Anthropic, Google, Meta, Microsoft, and Mistral partnered on F/ai, a new Paris-based accelerator for European AI startups.
Non-tech firms involved in AI buildouts — such as contractors, cooling companies, and construction equipment manufacturer Caterpillar — are seeing their shares surge.
MOVES
OpenAI disbanded its mission alignment team, Platformer reported.
Former lead Joshua Achiam stepped into a new role as Chief Futurist, where he’ll be “studying AI impacts and engaging the world to discuss them.”
OpenAI fired Ryan Beiermeister, VP of product policy.
She reportedly opposed the upcoming ChatGPT “Adult Mode” launch, and was accused of sex discrimination against a male employee.
Harsh Mehta, an AI R&D researcher, left Anthropic to “start something new.”
Kinsey Fabrizio will be the Consumer Technology Association’s new President and CEO.
Larissa Schiavo joined the Golden Gate Institute as its Director of Strategic Initiatives.
Kevin Frazier joined the Cato Institute’s Technology and Privacy team.
Senior reporter Madison Mills moved to the AI beat at Axios.
RESEARCH
A Nature Medicine study found that LLMs — specifically, GPT-4o, Llama 3, and Command R+ — frequently provide incorrect medical advice.
Kevin Roose tweeted: “i am begging academics to study AI capabilities using frontier models. the models used in this study (which is going to be cited for years as proof that “AI is bad at health advice”) are…two obsolete models and one i’ve never heard of.”
A separate Lancet Digital Health study reported that AI models are more likely to repeat bad health advice when it comes from realistic hospital notes, versus social media posts.
MIT researchers studied 809 models released over the last three years, and observed that 80-90% of frontier LLM performance differences can be explained by scaling training compute, rather than some “secret sauce” hidden in proprietary algorithms.
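To illustrate the kind of analysis behind that claim (this is a hypothetical sketch with synthetic data, not the MIT team’s actual code or dataset), one can regress benchmark scores on log training compute and read the R² of the fit as the share of performance differences explained by scaling:

```python
import numpy as np

# Hypothetical sketch: regress benchmark score on log10(training FLOP)
# for a synthetic population of models. The R² of this fit plays the
# role of "share of performance differences explained by compute."
rng = np.random.default_rng(0)
log_compute = rng.uniform(22, 26, size=200)              # log10 FLOP, made up
score = 10 * log_compute - 180 + rng.normal(0, 4, 200)   # synthetic scores

# Ordinary least squares: score ≈ a + b * log10(compute)
A = np.column_stack([np.ones_like(log_compute), log_compute])
coef, *_ = np.linalg.lstsq(A, score, rcond=None)

resid = score - A @ coef
r2 = 1 - resid.var() / score.var()
print(f"score ≈ {coef[0]:.1f} + {coef[1]:.1f}·log10(C), R² ≈ {r2:.2f}")
```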
A team of researchers trained a “meta-model” on a billion LLM activations to map out how models organize concepts, outperforming the sparse autoencoder approach.
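For context on the baseline being outperformed, here is a minimal, hypothetical sketch of a sparse autoencoder over cached activations (an overcomplete, L1-penalized reconstruction); the dimensions and hyperparameters are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the sparse-autoencoder baseline: learn an
# overcomplete dictionary of "features" that reconstructs cached LLM
# activations, with an L1 penalty pushing most features toward zero.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=768, d_hidden=8192):
        super().__init__()
        self.enc = nn.Linear(d_act, d_hidden)
        self.dec = nn.Linear(d_hidden, d_act)

    def forward(self, x):
        feats = torch.relu(self.enc(x))  # sparse feature activations
        return self.dec(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(1024, 768)  # stand-in for real cached activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
opt.step()
```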
Isomorphic Labs demonstrated that its drug design engine has more than doubled AlphaFold 3’s accuracy on challenging biomolecular structure prediction benchmarks.
BEST OF THE REST
Even elite human forecasters think that AI will soon take their jobs.
Harassers put Grok-generated nudes on an OnlyFans account impersonating feminist creator Kylie Brewer, 404 Media reported.
“Did AI write that?” is becoming this year’s trendiest insult.
Jasmine Sun published a dispatch from the DC AI scene, where data center NIMBYism and child safety concerns are fueling bipartisan populist backlash.
Steve Yegge described the “Anthropic Hive Mind,” its unique and mysterious work culture.
Kai Williams explained why LLMs struggle to maintain consistent personas, including what happened with MechaHitler.
Dean Ball argued that AI policy should establish private verification organizations — similar to financial auditors — to independently verify frontier labs’ safety claims.
The AI 2027 team graded their 2025 predictions, finding that “progress on quantitative metrics is at roughly 65% of the pace that AI 2027 predicted.”
Vox illustrated the Great AI Vibe Shift of 2026 — investors are starting to treat agentic AI as an existential threat to SaaS companies.
The NYT covered AI-assisted romance novels. Writer Coral Hart wrote over 200 novels with Claude’s help last year, but said the model was “terrible at sexy banter.”
MEME OF THE WEEK
Thanks for reading. Have a great weekend.


