Anthropic employees say they’ll give away billions. Where will it go?
A coming wave of Anthropic wealth could flood EA-aligned nonprofits with cash. Whether that’s good depends on who you ask
Normally, “a bunch of people working at an ascendant tech company are very rich and about to get much richer” isn’t much of a story.
Unless, that is, that tech company is Anthropic.
The company finalized a funding round at a $380b valuation in mid-February. Less than two weeks later, it opened a tender offer allowing current and former employees to sell up to $6b worth of shares for the first time. An IPO, which would unlock yet more wealth, is expected in the coming year.
All seven of its co-founders, including siblings and top executives Dario and Daniela Amodei, have pledged to donate 80% of their wealth. Forbes recently estimated that each co-founder holds roughly 1.8% of the company. If that’s accurate and Anthropic goes public at its current valuation, each co-founder’s pledge would be worth roughly $5.4b, or $37.8b combined. For scale: that’s nearly ten times what Coefficient Giving, one of the world’s largest research-driven grantmakers, has given away in its entire history, and nearly four times the estimated wealth of Facebook co-founder Dustin Moskovitz, Coefficient’s biggest funder.
Although the Amodeis have distanced themselves from effective altruism, their very public entanglement with it matters. Effective altruism, or EA, is a philanthropic movement that urges people to donate to causes that seem to mathematically “maximize good” per dollar. It is also a social and ideological movement that values empiricism, impartiality, and in some cases, technocracy.
Anthropic’s full web of EA entanglements is challenging to fully map, but one doesn’t need to look hard to spot some big ones. Daniela Amodei’s husband Holden Karnofsky, who also works at Anthropic, co-founded flagship EA non-profits GiveWell and Open Philanthropy, the previous incarnation of Coefficient Giving (Disclosure: Coefficient Giving is the primary funder of Transformer). Another co-founder, Ben Mann, published a 2019 blog post titled “Why I now identify as an Effective Altruist.” Last month, nearly 30 Anthropic employees registered for an EA conference in San Francisco, including three recruiters — over twice the representation of OpenAI, Google DeepMind, xAI, and Meta Superintelligence Labs combined.
Dario Amodei, Amanda Askell, and Jack Clark have all publicly signed the Giving What We Can Pledge, a public commitment to donate at least 10% of their income to “effective charities.” (Askell herself was formerly married to Will MacAskill, one of EA’s originators and co-founder of Giving What We Can.) Dozens, if not hundreds, of other Anthropic employees have almost certainly also done so. EA, and its approach to giving, will inevitably have an outsized influence on where the philanthropic capital from Anthropic’s newly wealthy employees flows.
This is something that EAs have been discussing amongst themselves for months. In early December, SecureBio Detection director Jeff Kaufman speculated about Anthropic’s potential IPO, and the massive swell of EA-aligned donations that could follow, sparking conversation within the EA community. Essentially, he reasoned that if you know a bunch of money will be pouring into the charities you care about next year, your individual dollars are worth more now, and said he’d likely make his own donation decisions following that logic. (Disclosure: Kaufman has donated to Transformer’s publisher, Tarbell.)
In the weeks following Kaufman’s post, several other forum members chimed in. Sam Anschell, a senior program associate at Coefficient Giving, published his own post in early February. Among other suggestions, he urged organizations to anticipate increased media attention if EA causes receive loads of Anthropic money, and to start investing in “high-integrity conduct” now.
This hypothetical surge of Anthropic money is not yet guaranteed. The AI bubble could burst, tanking Anthropic’s valuation. Its employees could buy superyachts instead of anti-malaria bed nets. But presuming the philanthropic windfall does materialize, we can make some educated guesses about where it may — or may not — go.
How the money flows
Silicon Valley founders have a mixed track record with philanthropy. Several early titans, such as Bill Hewlett and Bill Gates, established massive foundations to fund charities around the world. Others, such as Steve Jobs, reportedly never engaged with public philanthropy at all. Across the board, billionaires rarely turned their attention to charity until long after their companies went public, often post-retirement.
Dustin Moskovitz and Mark Zuckerberg marked a generational shift. Alongside their wives, the Facebook co-founders launched Good Ventures and the Chan Zuckerberg Initiative around the company’s IPO, while at the height of their careers. This “signaled a shift in norms around tech wealth,” said David Callahan, founder and editor of Inside Philanthropy, establishing that tech titans can start giving their wealth away while continuing to build their companies.
The EA movement’s most infamous brush with big tech philanthropy, however, went down in flames. Sam Bankman-Fried’s flagrant mismanagement and fraud burned EA-aligned non-profits that banked on promised grants from the Future Fund, the FTX Foundation’s giving vehicle.
However, there’s reason to believe that philanthropic donations from Anthropic are likely to differ from those tied to FTX. Anthropic’s value, while certainly not guaranteed to hold, is tied to a real product with real revenue. Kaufman said that even if the AI bubble pops and the entire industry collapses, donations from Anthropic employees’ legitimately-earned equity wouldn’t be clawed back like Future Fund grants were following the 2022 FTX collapse. And many of Anthropic’s AGI-pilled staff seem to believe that its current valuation underestimates the company’s potential: would-be buyers are reportedly struggling to find employees who are willing to sell.
Yet the organizations that could receive significant funds from Anthropic remain all too aware of the spectre of FTX, including Coefficient Giving, the current biggest player.
Coefficient’s previous incarnation, Open Philanthropy, was founded as a giving vehicle for Good Ventures, the philanthropic foundation launched in 2011 by Moskovitz and his wife Cari Tuna to give away Moskovitz’s Facebook fortune. Among other cause areas, the organization has been a funder of AI safety and security since before OpenAI or Anthropic existed.
In November 2025, Open Philanthropy rebranded as Coefficient Giving. After launching a handful of multi-donor initiatives in global health and economic development, the organization’s announcement post explained, it wanted to distinguish itself from Good Ventures. Its reorganization created “funds” for distinct focus areas such as global health, AI safety, and animal welfare, which individual donors can join to support their favorite pre-researched and EA-approved causes. The Coefficient Giving website explicitly solicits donations from funders looking to give over $250,000 a year — a tempting opportunity for a newly-wealthy effective altruist with tech money, earnestness, and limited free time.
“As we’ve evolved into being not just a philanthropic funder but also an advisor, we’re increasingly working with donors beyond Good Ventures,” Coefficient said in a statement to Transformer. “We continue to be excited about moving more philanthropic money off the sidelines and towards high-impact giving opportunities. For example, we’ve raised more than $300 million from other donors in the last few years for global health and wellbeing causes.”
While the biggest and most high-profile, Coefficient isn’t the only donor advisory organization well-positioned to absorb donations from AI safety-minded philanthropists. Longview Philanthropy, the Astralis Foundation, and the Effective Institutions Project all provide similar services and funds for those hoping to donate large sums to AI safety projects. All would likely be beneficiaries of Anthropic employee giving. (Disclosure: Longview Philanthropy has donated to Transformer’s publisher, Tarbell.)
The risk of groupthink
Reading between the lines of Anthropic’s updated Responsible Scaling Policy, announced February 24, reveals hints about where all this money might end up.
The document recommends that major industry players commission “independent bodies (standards-setting organizations, auditors, etc.)” to review their safety claims, calling to mind non-profits such as METR and the AI Verification & Evaluation Research Institute (AVERI). It also explicitly requires that these independent bodies have no financial interest in Anthropic. But Karnofsky — who led the new RSP’s development — recently noted on LessWrong that “a lot of our employees are socially integrated into the AI safety community,” and that these employees “could be a major source of donations for the kinds of non-profits that could be potential external reviewers.”
In the same post, Karnofsky made a pragmatic argument for approaches that help “companies do more and more risk-reducing things that don’t slow them down.” To critics, this is an example of Anthropic employees pursuing motivated reasoning. In one recent EA Forum comment, a user predicted that donors from within the AI industry will be “hesitant to bite the hand that feeds them,” and disproportionately fund charities that align with their interests. It is reasonable to suspect that organizations like PauseAI, Evitable, and MIRI are unlikely to receive much of the money from sales of Anthropic’s stock: Holly Elmore, executive director of PauseAI US, has publicly accused Coefficient Giving of not funding her organization “because they serve Anthropic’s interests.” (Overtly industry-friendly groups pushing for less regulation and/or unabated AI development, such as the Abundance Institute, probably won’t be beneficiaries of Anthropic-linked giving either.)
Organizations doing the kind of AI safety work that makes Anthropic look responsible, on the other hand, may see a large influx of cash if the company’s valuation holds. Non-profits running evaluations that feed into Anthropic’s RSP, conducting alignment research, and creating pragmatic governance frameworks fall in the sweet spot — save for potential conflicts of interest, that is. Paying people to hold you accountable always carries inherent risks, said Leif Wenar, a philosophy professor at Stanford. “It will be hard for them to resist the competitive pressures of their industries unless there’s accountability mechanisms for the whole industry.” There is also a potential public image problem: the same EA Forum commenter cautioned that organizations accepting funds from Anthropic employees risk “losing public credibility … It would not be hard to cast orgs that [take] AI-involved source funds as something like a lobbying arm of Anthropic equity holders.”
Few of the AI-related organizations likely to receive Anthropic employee donations have clear policies for handling potential conflicts of interest. While METR, an AI safety evaluation non-profit, does not accept money from AI companies, it doesn’t explicitly ban donations from any given individual. Epoch AI, which tracks AI capabilities research, does accept money from large AI labs. Other non-profits, including the Center for AI Safety, FAR AI, and MIRI, don’t have published conflict-of-interest policies about donations from frontier lab employees at all.1
There is also the risk of groupthink. AI safety likely would not exist in its current form had it not been deemed important, tractable, and neglected — the key criteria EA uses to decide where to direct funding. It’s unsurprising, then, that many of the independent organizations tasked with monitoring companies like Anthropic are staffed by people who share a worldview with many Anthropic employees. If Anthropic employees donate billions of dollars to these non-profits over the next several years — which seems more than plausible, since Anthropic paints itself as a safety-focused company — a very particular worldview will gain even more prominence.
To its credit, Coefficient Giving has transparently acknowledged this problem and the outsized influence it’s had on the AI safety research agenda. In October, it published a post titled “AI safety and security need more funders,” conceding that it currently holds “a concentrated share of AI safety philanthropic funding,” urging others to “correct our blind spots and make new bets.” That only works, however, if Anthropic’s employees don’t outsource their decisions to Coefficient.
Politics and bed nets
Of course, donations likely won’t exclusively target AI safety non-profits, at least not directly. Political campaigns will likely be another major beneficiary.
Anthropic itself has already donated $20m to Public First Action, which backs political candidates who support AI safety regulation, including Rep. Valeria Foushee and New York assemblymember Alex Bores. According to FEC data compiled on Transformer’s Campaign Finance Tracker, meanwhile, Anthropic employees have collectively donated $401,250 to campaigns supporting Bores, Foushee, California state senator Scott Wiener, and Colorado state representative Manny Rutinel.
With active efforts to whip up more support for “AI safety champions” like Bores and Wiener, Anthropic employees’ donations could become a powerful force leading up to the 2028 election cycle. In a thread where EA Forum members shared their 2025 donations, one AI safety researcher reported giving $129,000 — more than his income that year — to “Bores, Wiener, and other AI safety in US politics stuff.”
That said, if a substantial number of donors lean on giving vehicles like GiveWell and Coefficient Giving, which distribute donations across charities based on internal research, much of their money may go to EA-associated causes that have nothing to do with AI or Anthropic. GiveWell’s current top charities, for instance, fund anti-malaria medicines and bed nets, along with other supplements and vaccines for children in regions such as sub-Saharan Africa.
Misstep potential
There’s also a world in which no money materializes at all. The Giving What We Can pledge isn’t legally binding, and no one is forced to follow through. “Value drift is something that EA has been worried about for many years,” said philosopher and EA critic Émile Torres. “And it actually has happened, including with leading figures.” The last time the EA movement was suddenly flush with cash, Sam Bankman-Fried spent a bunch of it on private jets and a luxury penthouse in the Bahamas.
Among the mega-rich in general, said Callahan, “there’s a tendency toward modest giving or inaction. A lot of these people don’t actually give at a sizable level, early on.”
“They’re busy,” he said, and figuring out what to do with all that cash takes time — and the plethora of options available could prove paralyzing.
But despite his scathing critiques of the broader EA movement, Wenar trusts Anthropic employees to mostly honor their pledges. “They seem to be very smart and morally motivated,” he said. “I think that they actually have an opportunity to have a real impact on human well-being.” Callahan put it more bluntly: “I think that they’ll eventually follow through on those commitments. Because what else are you going to do with all the money? Leave your kid $14b?”
Whether that money goes towards AI safety non-profits that share Anthropic’s worldview, or towards global health charities that tech employees have hardly engaged with directly, the potential for well-meaning missteps is real. Just because you suddenly come into a lot of money doesn’t mean you automatically know how to spend it wisely. “If you’ve been working really hard at your job, and your job is in a particular sub-sector in tech, you’ve likely not had much time and energy to really understand the issues that are facing the community,” said Kat Rosqueta, the founding executive director of the University of Pennsylvania’s Center for High Impact Philanthropy. “There’s this ignorance gap that needs to be bridged.” (In theory, Coefficient Giving and other EA donor advisory organizations exist to do just that — but Wenar and others argue that they are not as well-informed as they might think.)
That said, Wenar would be “very enthusiastic” about newly-wealthy Anthropic employees using that money “to make their own industry better — that is, to do something in a realm that they know something about, that they’re engaged in every day, and that they’re the world’s experts in.”
Update, March 12: Added a disclosure that Longview Philanthropy has donated to Tarbell, Transformer’s publisher.
Footnote: (A policy from the Tarbell Center for AI Journalism, Transformer’s publisher, says that Tarbell “does not accept corporate contributions from organizations directly involved in the development and deployment of frontier AI … Tarbell also does not currently accept personal donations from employees of such companies.”)






