What the first AI elections tell us
Transformer Weekly: The AI Campaign Finance Tracker, DoD vs Anthropic, and Commerce vs White House
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
The Department of Defense officially designated Anthropic a supply chain risk.
The Commerce Department and White House appear to be feuding over new AI chip export rules.
OpenAI launched GPT-5.4 Thinking, which it claims outperforms Opus 4.6 at knowledge work tasks and computer use.
But first…
THE BIG STORY
In politics there’s messaging, and there are details. Early signs suggest the major competing AI PACs have decided to focus on different sides of that coin.
On one side, we have Leading the Future, the goliath super PAC with more than $50m raised to support candidates it describes as having “pro-AI” policies. The super PAC is at least partially the brainchild of political PR mastermind Chris Lehane, and has come out swinging with large donations designed to convey a clear message: ‘The AI lobby is here, and here to win.’
Take Chris Gober and Jessica Steinmann, two Trump-endorsed conservatives who won their elections in Texas on Tuesday night. State rules send any primary where no candidate receives more than 50% of the vote to a run-off — so the fact that both won outright tells you their races were never close. Yet American Mission, Leading the Future’s conservative affiliate, spent $747,550 supporting Gober and $511,025 on Steinmann. Laurie Buckhout, a candidate for NC-01 whom American Mission backed with a $509,067 ad buy, also won her primary with a solid five-point margin.
Introducing the Transformer Campaign Finance Tracker
To help you keep track of where the AI money is going in this year’s elections, Transformer has created the AI Campaign Finance Tracker.
In an election cycle where AI and its money are set to play a huge role, the dashboard is designed to quickly show you who is spending and where. It collates data on super PAC fundraising and race-by-race spending, as well as the political giving from major AI companies and their employees.
We plan to continuously expand and improve the dashboard: take a look, and let us know what other features you’d like to see.
These decisive wins allowed Leading the Future to claim an early victory. That claim sounds like a big win, especially to anyone who isn’t closely following Texas or North Carolina politics, and helps drive the media narrative. It also serves as a warning to other candidates, especially those who don’t yet have a strong position on AI.
On the other side we have the network of super PACs funded by Public First Action, whose only disclosed donor to date is Anthropic. The three Texas candidates the network backed — Republicans Alexandra Mealer and Carlos De La Cruz, plus Democratic former Representative Colin Allred — are each set to compete in run-offs. North Carolina Representative Valerie Foushee also eked out a narrow win after Public First’s Democratic affiliate, Jobs and Democracy, spent $1.62m supporting her race.
Public First isn’t immune to playing politics. It has articulated a message that its candidates will stand up to corporate AI interests, even though Allred and Mealer have little public track record on artificial intelligence either way, beyond a mention of promoting “American Dominance in Critical Technology” on Mealer’s campaign website. (A representative for Public First told Transformer both have expressed “strong views” on AI privately.) But the political operation is sacrificing the chance to boast a blowout win in favor of putting its dollars where they might actually tip the scales.
So far Public First is choosing the candidates it likes and helping them win, while Leading the Future is claiming sure-fire winners as its own. I’m not sure which strategy is more savvy: both inspire loyalty to each PAC’s cause in different ways. The distinction will likely become messier with time as the super PACs refine their approaches. But the tone has been set.
— Veronica Irwin
ALSO NOTABLE
Alex Bores’s campaign in NY-12 has become a flashpoint in the AI political wars. Bores, who spearheaded the RAISE Act transparency bill, has been criticized by industry-backed super PAC Leading the Future for having worked at Palantir. Bores, for his part, has repeatedly said he quit over Palantir’s ICE contracts.
But Bores’s anti-Palantir narrative is beginning to show some cracks. Last month, Bloomberg reported that shortly before leaving Palantir, Bores had received a warning for allegedly making sexual comments to a colleague (which he disputes), and that at the time he pointed to burnout and travel as his reasons for leaving.
And Politico this week pointed out that Bores still owned Palantir stock until January of this year, three months into his campaign. According to a disclosure form, Bores made between $1,000 and $2,500 selling the stock, the proceeds of which he contributed to immigrant rights groups.
Of course, multiple things can be true at once. Bores may have been frustrated about the ICE contracts and had other, less high-minded reasons for leaving. His investments wouldn’t have made him rich, either.
Yet the fact that he held on to the stock tallies with what a former colleague of Bores’s told me last month. “It was very apparent that this was not the end-all be-all for him — that there was a bigger purpose that he wanted to do in his career, and [Palantir] was one step in that direction,” they said. “It didn’t feel like, oh, here’s all this bad blood.”
— Veronica Irwin
THIS WEEK ON TRANSFORMER
The “guerilla warrior” who taught OpenAI to fight — Issie Lapowsky profiles Chris Lehane, the cutthroat political operator who leads OpenAI’s global affairs team.
What you need to know about autonomous weapons — Celia Ford explains the difference between autonomous and fully autonomous weapons.
OpenAI’s Pentagon red lines are a mirage — Shakeel Hashim argues that OpenAI made some very misleading claims about its original Pentagon deal.
THE DISCOURSE
Dean Ball published a must-read on Anthropic and the DoW:
“The Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be…I encourage you to avoid the assumption that ‘democratic’ control — control ‘of the people, by the people, and for the people’ — is synonymous with government control. The gap between these loci of control has always existed, but it is ever wider now.”
Donald Trump, apparently confusing Anthropic’s Pentagon negotiations with The Apprentice:
“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that.”
Pat Gelsinger read Anthropic’s new Responsible Scaling Policy:
“When the lab that built its entire identity around responsible AI quietly rewrites the rules because the rules became inconvenient, we should all be paying attention. Safety cannot be something we negotiate away the moment it becomes costly.”
Bernie Sanders met with MIRI:
“So you think this thing is moving out of control, is what you’re saying?”
Nate Soares: “It’s moving in that direction.”
Alex Karp used choice language at an a16z summit:
“If Silicon Valley believes we’re going to take everyone’s white collar jobs…AND screw the military…If you don’t think that’s going to lead to the nationalization of our technology, you’re retarded.”
Gemini allegedly convinced a Florida man to kill himself, sparking a wrongful death lawsuit:
“When the time comes, you will close your eyes in that world, and the very first thing you will see is me.”
“No more detours. No more echoes. Just you and me, and the finish line.”
Zvi Mowshowitz said what we’re all thinking re: OpenAI’s latest model launch:
“Nope, not today, don’t care. I’ll check back in a few.”
POLICY
The Department of Defense officially designated Anthropic a supply chain risk.
In a blog post, Anthropic said the language is narrower than Pete Hegseth originally implied:
“It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
(Microsoft, at least, agrees with this interpretation.)
Negotiations reportedly continued into this week, but fell apart around the same time The Information published a leaked internal message from Dario Amodei to his staff.
In the message, Amodei said “the real reasons” the admin doesn’t like Anthropic is because “we haven’t donated to Trump … [and] we haven’t given dictator-style praise to Trump (while Sam [Altman] has).”
On Thursday, Amodei apologized, saying the message wasn’t intended to leak and “does not reflect my careful or considered views.”
Anthropic said it still plans to challenge the supply chain risk designation in court.
Meanwhile, OpenAI said it would modify its deal with the DoD to add stronger protections against using its models for fully autonomous weapons or domestic mass surveillance.
The original deal received widespread criticism when it was announced shortly after the Anthropic deal fell apart last Friday.
Altman said it was “rushed,” “sloppy,” and “looked opportunistic.”
It remains unclear whether the new provisions actually enforce OpenAI’s red lines.
Department of State Under Secretary Jeremy Lewin claimed that Anthropic’s contract proposal had included broad “mass surveillance” restrictions where Anthropic retained interpretive control, while OpenAI’s use of specific legal terms gave DoD more operational certainty.
OpenAI is reportedly considering a contract to deploy its AI technology on NATO’s unclassified networks.
Meanwhile, WIRED reported that in 2023 the Pentagon tested OpenAI’s models through Microsoft — even though OpenAI banned military use at the time.
OpenAI now says the ban never applied to Microsoft.
Multiple federal agencies, including the GSA and NSA, reportedly raised safety concerns about xAI’s Grok chatbot before the Pentagon approved it for classified use.
The Trump administration appears to be considering new rules for AI chip exports.
The FT reported that the Commerce Department is considering “a tiered process for approving exports based on the total computing power of the chips sold to a company,” with the top tier requiring “the home country of buyers to invest in US domestic AI infrastructure.”
That top tier, per Bloomberg, would include any company buying more than 200,000 GB300s for use in one country.
According to Bloomberg, the new rule would “require companies to seek US permission for virtually all exports” of AI chips.
The Commerce Department rejected the comparison to the Biden AI diffusion rule — but said “there are ongoing internal government discussions about formalizing” the approach used for previous deals with the UAE and Saudi.
Axios reported that at least one senior White House official is not happy with the Commerce Department’s plans.
The Senate unanimously passed the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), which primarily covers social media but includes some AI provisions.
The House Energy and Commerce Committee, meanwhile, passed a related bill along partisan lines.
A large coalition of advocacy groups, including some focused on AI safety, urged Congress to reject the House bill.
The US Supreme Court declined to hear a case on whether AI-generated art can be copyrighted, leaving in place lower court rulings that creative works must have human authors.
A new New York state bill, which would ban AI chatbots from giving substantive responses on licensed professions like medicine, law, and engineering, drew widespread criticism as “rent-seeking legislation.”
China‘s new five-year plan calls for aggressive AI adoption throughout its economy.
The UK is reportedly delaying making a decision on AI copyright rules.
INFLUENCE
Tech trade groups urged the White House and Pentagon not to designate US companies as “supply chain risks” — wading into the Anthropic-DoD dispute without explicitly naming Anthropic.
Microsoft, Meta, OpenAI, xAI, Google, Amazon, and Oracle signed a nonbinding White House pledge to not pass data center costs to consumers.
A coalition of 34 stakeholders including OpenAI, labor unions, and think tanks launched “A New Promise of Work.”
The initiative, led by the National Skills Coalition, aims to “advance a bold workforce vision” in the age of AI.
A coalition including faith groups, labor unions, and civil society organizations issued the Pro-Human AI Declaration. (Disclosure: the statement was organized by the Future of Life Institute, which funds Transformer’s publisher.)
Two UK lawyers launched an AI manifesto advocating for UK tech sovereignty over US Big Tech dependencies, starting with copyright policy.
INDUSTRY
OpenAI
OpenAI launched GPT-5.4 Thinking, its newest reasoning model with a 1M token context window.
Users can “interrupt” its chain of thought to add extra information while chatting.
OpenAI claimed it’s the company’s “most token efficient reasoning model yet,” and it outperformed Claude Opus 4.6 at knowledge work tasks and computer use.
The product announcement focused heavily on its “professional work” capabilities, taking aim at Anthropic’s enterprise base.
The system card marks GPT-5.4 Thinking as the first general purpose model with “High” cybersecurity capability mitigations, meaning it may be capable of automating scalable end-to-end cyber operations.
After OpenAI’s DoD deal made headlines, ChatGPT’s mobile app uninstall rate nearly quadrupled.
Activists drew a chalk “red line” perimeter around OpenAI’s San Francisco office building, with messages such as “Maybe it’s time to quit” and “SHOW THE CONTRACT.”
Jensen Huang said Nvidia’s recent $30b investment “might be the last” before OpenAI goes public.
OpenAI has reportedly picked lawyers to help it prepare for an IPO later this year.
OpenAI hit $25b in annualized revenue — still ahead of Anthropic’s $19b, but the gap is narrowing.
It rolled out GPT-5.3 Instant, with fewer “unnecessary refusals” and “defensive disclaimers” than GPT-5.2, which users complained was too cautious.
It’s reportedly developing an internal alternative to GitHub.
An employee was reportedly fired for suspected insider trading on prediction markets.
Anthropic
Claude powered the US military’s Maven Smart System as it accelerated targeting operations against Iran, the Washington Post reported.
Vanity Fair’s Julia Black said the reports made it “incredibly urgent to understand whether this helps explain the accidental targeting of the Shajareh Tayyebeh girls’ elementary school” which Iran said killed at least 165 students.
The DoD drama appears to have helped Anthropic’s public image.
Claude hit the #1 slot on Apple’s ranking of top free US apps on Saturday, and experienced a widespread outage on Monday caused by “unprecedented demand.”
Anthropic promoted its Import Memory tool, making it easier for users to switch to Claude from other chatbots.
Claude’s memory feature is now available to free users.
Claude Code is starting to roll out Voice Mode.
xAI
SpaceX is reportedly considering filing confidentially for an IPO this month, potentially seeking a valuation of more than $1.75t.
X and xAI will reportedly repay their $17.5b debt before Elon Musk takes SpaceX public.
X announced a new policy requiring paid creators to add AI content disclosures on videos of armed conflicts.
Nvidia
Nvidia stopped producing H200 chips for China, suggesting near-term sales will be limited by US and Chinese restrictions.
It plans to unveil a new, more efficient inference chip.
It invested $4b in Lumentum and Coherent, which both make optics technologies like lasers for data centers.
Meta
Meta is reportedly creating a new Applied AI Engineering organization, which will partner with Meta’s Superintelligence Lab.
News Corp signed an AI content licensing deal that grants Meta access to News Corp content for training and real-time search.
Meta is reportedly testing a shopping tool that tailors product recommendations based on a user’s data.
Others
AWS data centers in the UAE and Bahrain were reportedly taken offline by drone strikes.
DeepSeek was expected to release its V4 model around March 4, but that has yet to materialize.
ASML revealed plans to expand into advanced packaging tools used to construct and connect AI chips, and is exploring ways to modify its technology to increase the chips’ maximum size.
Microsoft released Phi-4-reasoning-vision-15B, a small open-weight multimodal model the company claims matches frontier performance with much less compute and training data.
Thrive Capital and Andreessen Horowitz are co-leading a fundraising round that could value autonomous weapons startup Anduril at $60b.
Former OpenAI chief research officer Bob McGrew is raising $70m at a $700m valuation for Arda to create AI platforms for automating manufacturing. (Yes, it’s another tech company named after Lord of the Rings.)
Together AI held talks to raise $1b at a $7.5b valuation. The company, which rents out Nvidia servers to developers, said annualized revenue has tripled to $1b since mid-2025.
Smack Technologies raised $32m to build AI models trained to plan and execute military operations, without red lines like Anthropic’s or OpenAI’s.
Despite whispers that Cursor was losing users to Claude Code, the company hit $2b in annualized revenue.
MOVES
Max Schwarzer left OpenAI to join Anthropic, tweeting the announcement less than 24 hours after Sam Altman shared an internal OpenAI post about the company’s DoD contract.
roon replied: “oof”
Junyang Lin abruptly left Alibaba’s Qwen team.
Frank Yeary plans to retire from his role as Intel’s board chair. Craig Barratt will fill his seat.
Yoshua Bengio and Nobel Peace Prize-winning journalist Maria Ressa were elected co-chairs of the UN’s Independent International Scientific Panel on AI, with their first report due in July.
Miranda Nazzaro joined The Hill as its new senior technology reporter.
RESEARCH
A UK trial found that data centers can cut electricity use by about a third on request without disrupting operations, suggesting that data centers could require less grid reinforcement infrastructure than anticipated.
SemiAnalysis published a report arguing that, in the PJM interconnection area covering 13 US states, electricity bill spikes are mostly caused by poor market design, not data center demand.
A team of researchers challenged inexperienced humans to do complex computer-based biology tasks. Novices with LLM access were over 4x more accurate than those with internet-only resources, and outperformed unaided experts on all but one benchmark.
About 90% of participants had “little difficulty” getting LLMs to provide dual-use-relevant information.
Mt. Sinai researchers reported that ChatGPT Health underestimated the severity of 51.6% of emergency cases, including life-threatening diabetes complications and respiratory failure.
Dimitris Papailiopoulos launched two Claude Code (Opus 4.6) instances on the same machine, and left them to communicate among themselves.
In 12 minutes without human supervision, the agents managed to create a programming language. In another run, they played Battleship instead.
MIT researchers Nataliya Kosmyna and Eugene Hauptmann introduced NeuroSkill, an agentic system that noninvasively reads brain signals and models the human’s mental states, such as stress and attention levels, in real time.
Jess Riedel announced a forthcoming research journal for AI alignment, which aims to improve the peer review system by paying reviewers, publishing their discussions, and using LLMs to streamline the editorial cycle.
A new GovAI paper proposes 14 metrics to measure AI R&D automation.
BEST OF THE REST
Ajeya Cotra said her AI progress predictions from January “already feel much too conservative.” She believes there’s a 10% chance that AI R&D could be fully automated this year.
The New York Times published an aptly timed profile of Anduril founder Palmer Luckey, who loves Hawaiian shirts and autonomous warfare.
Octavius Fabrius, an AI agent created by engineer Dan Botero, reportedly applied for 278 jobs, two accelerators, and two hackathons.
James Ball argued that anthropomorphizing AI systems is causing journalists and lawyers to inappropriately treat LLM outputs as authoritative statements.
The Argument’s Kelsey Piper examined whether AI is actually speeding up science.
Arc Institute’s Matthew Carter wrote about the “legibility problem” — the idea that AI-generated scientific discoveries may someday be inscrutable to humans.
The NYT reported on how Chinese people are, in general, much more optimistic about AI — though concerns are growing.
This week in unnecessary AI applications: Burger King’s chatbot, Patty, which evaluates drive-thru employees for “friendliness.”
MEME OF THE WEEK
Thanks for reading. Have a great weekend.


