AI safety PACs should be more transparent about who’s funding them
While advocating for accountability and transparency, the Public First Action network of super PACs is obscuring where its money comes from
One of the central purposes of campaign finance law is to provide voters with transparency over who is trying to sway their votes. One of the central policy priorities of AI safety group Public First Action is to make AI companies more transparent.
It is ironic, then, that Public First Action is operating as a “dark money” vehicle funneling at least $5.5m from donors to super PACs — a setup that keeps those donors completely anonymous.
Public First Action is a 501(c)(4) nonprofit organization, a structure often referred to as a “dark money” group because such organizations are not required to disclose their individual donors.
It is far from the only AI-related 501(c)(4). Build American AI advocates for the industry-friendly federal frameworks preferred by OpenAI and a16z. The newly established Innovation Council Action exists to support the Trump administration’s specific brand of AI regulation. Both obscure their funding as 501(c)(4)s. But so far, Public First Action is the only one of these channeling money into super PACs.
In theory, 501(c)(4)s are issue advocacy groups, carrying out activities such as hosting educational events or conducting polling. Such work arguably has no vital need for funding transparency. But these groups become more problematic when used to channel money to super PACs, which can spend unlimited amounts on campaign ads and are otherwise required by law to disclose the identities of those who fund them. Routing donations through a 501(c)(4) effectively defeats that disclosure requirement.
Public First Action’s stated mission is to “educate Americans on key AI issues and advance an AI Policy agenda supporting safeguards,” according to its website. But it also sends money to three PACs: a nonpartisan one named Public First, a Democratic affiliate called Jobs and Democracy, and a Republican affiliate called Defending Our Values. As we reported last week, the 501(c)(4)’s only publicly disclosed donor is Anthropic, whose $20m donation is specifically earmarked as money which cannot be used to “influence federal elections.” According to quarterly disclosures published last week, six other individuals have given directly to the PACs. Their identities are disclosed: they include Anthropic alignment lead Jan Leike, Anthropic researcher Peter Lofgren, and others linked to the effective altruism and Bay Area AI safety community.
Doing the math, that means at least $5.5m raised by the “dark money” group — the amount it has funneled directly to super PACs, and thus designated for the purpose of influencing elections — has come from donors other than Anthropic who have declined to reveal their identities, sidestepping the disclosure rules designed to hold them accountable to the voters their money is meant to persuade. Public First Action declined to comment on its donors as a matter of general policy.
The lack of transparency conflicts with the apparent views of Public First Action and its donors. The group lists “Accountability and Transparency” as its top AI policy issue on its website. And the effective altruism and AI safety communities from which the PACs’ disclosed donors are drawn generally place great emphasis on transparency, radical candor, and intensive discourse.
Given their backgrounds, you’d assume those donors would prefer that the organization they are giving to be more transparent about where the rest of its money is coming from. But at least one does not. The group’s largest disclosed individual donor, Michael Cohen — an AI policy researcher at UC Berkeley who gave $500,000 — told Transformer that Public First Action’s use of the 501(c)(4) structure to obscure certain donor contributions “seems pretty standard.” When asked why he contributed specifically to the PAC, rather than the dark money group, he said: “I’m really concerned about the AI industry’s political spending in Washington, so I want to counteract as much of that as I can. The OpenAI/a16z PAC motivated me in particular.”
Of course, the potential downside of opaque AI development is very different from that of undisclosed campaign contributions. Transparency for frontier AI models is about giving the public a way to accurately assess AI tools for potential risks — possibly even existential ones, according to AI safety advocates. Transparency in campaign finance is about preventing corruption and helping the public understand who is influencing electoral politics. Public First Action is concerned with preventing AI risk, not cleaning up American politics. But in both contexts, required transparency gives the public a way to audit operations, and the ammunition to critique them if they see fit.
Public First Action’s use of a 501(c)(4) to obscure its funding sources is even more striking in comparison with its chief opponent, Leading the Future — the “OpenAI/a16z PAC” Cohen referred to. The accelerationist super PAC is indeed backed by a16z and by OpenAI co-founder and president Greg Brockman and his wife Anna, with Republican- and Democratic-affiliated super PACs funded by Palantir co-founder Joe Lonsdale and investor Ron Conway. We can say this with confidence because the super PAC fully discloses all of its donors, shielding none of its funding behind a dark money group. Leading the Future does fund a dark money group of its own, Build American AI, which almost certainly has additional, undisclosed donors. But Build American AI is not the primary tool the group is using to influence elections.
Funding a PAC with a 501(c)(4) is not uncommon in Washington, and Public First Action is likely to be joined by other AI campaigning groups in using one. Innovation Council Action, led by longtime Trump advisor Taylor Budowich, is expected to create a super PAC, for example. Nor are “dark money” groups the only sleight of hand in the world of AI-related campaign influence: New York super PAC DREAM NYC has close enough ties to NY-12 candidate Alex Bores’ campaign that a government accountability group has said it raises concerns about potentially illegal coordination.
Leading the Future’s transparency also only goes so far. OpenAI’s Brockman and his wife give in a “personal capacity,” which creates distance between their donations and OpenAI — a company that claims it’s not getting involved in the midterms, despite Brockman being one of the biggest spenders on pro-industry campaigning. Perplexity, a government contractor technically prohibited from spending on elections, has also given $100,000 to Leading the Future via a technically separate entity, Perplex AI.
Public First Action’s consistent critique of Leading the Future is that the group isn’t forthright about its aims, privately fighting regulation outright despite publicly claiming to want a “federal standard.” In a statement to Transformer, for example, Public First Action spokesperson Anthony Rivera-Rodriguez said “Public First Action is working to elevate the American public’s call for AI safeguards in an election that anti-regulatory voices are trying to buy.” But that jab loses a lot of its punch while Public First Action keeps its own operations opaque.