The tone of this is odd. The part about whether it's healthy for AI safety organizations to receive a massive influx of donations from the very people they're supposed to be analyzing is good and important.
The rest seems like it’s trying to find other reasons to worry about a ton of charitable giving and doesn’t really make sense to me.
Are we really worried that people might give billions to thoroughly vetted charities that could save millions of lives, just because the donors haven't engaged with those charities directly and are instead trusting intermediaries who, critics argue, may not know as much as they think?
Well, I lost interest about halfway in. This has to be the most ‘first world problem’ in the history of first world problems.
I don't think this is trying to describe a problem? (First world or otherwise.) It is fairly "inside baseball" though. I wouldn't expect this to directly affect many people outside some specific nonprofits.
It might affect you indirectly though -- "billions of dollars are likely to be deployed by a small class of people with highly correlated worldviews" is newsworthy!
Especially when quite a lot of it looks like it’s going to flow into politics…
This piece does something important: it holds the tension between Anthropic's commercial imperatives and its EA ethos without collapsing into either cynicism or hagiography. That is hard, so kudos to Celia.

I am new to this community, and what strikes me is how much of this philanthropy infrastructure is being built in real time, before the capital arrives. The coordination question, not just how much gets donated but to whom, through what vehicles, and with what theory of change, feels like the real story underneath the headline numbers.

Additionally, expecting any for-profit company, even a mission-driven PBC, to be perfectly consistent under competitive pressure isn't a reasonable standard. Not in this economy or marketplace! What matters is whether the structural commitments, such as the 80% pledges, the Coefficient Giving infrastructure, and the Long-Term Benefit Trust, hold when the money actually flows. That's the accountability moment worth watching, and what I will be paying attention to.

Looking forward to engaging with others thinking seriously about these questions.
Too many words and paragraphs. The piece, clearly aiming for a nuanced, noncommittal stance, could grow some cojones and take a stand. A word like "cartel" is implied throughout, but never said.
It's a classic "revolving door + concentrated donor class + aligned nonprofits + shared ideology" dynamic: strong evidence of a nascent philanthropic-intellectual cartel forming around EA-aligned AI safety, one that Anthropic's windfall could cement.
Maybe Claude-loving dipshits should stop making them wealthy
Hoping billionaires will fund projects is a really fucked up way to run a society. I hope their stock crashes and burns.
…out of what profits? Anthropic is lighting money on fire.