4 Comments
Alexei Gannon

Maybe AI populism is not a safety problem but an opportunity. For the majority of its participants, AI populism is based on a demand for cheap energy bills and a refusal to lose your job to a computer. Given that these demands are not incompatible with X-risk, how do you build a coalition that can combine the teeth of AI populism with the policy of X-risk? The answer is trust.

Sanders stands a chance at bridging this gap because he is the foremost left-populist in the United States. People understand that Bernie's not trying to hype-up big tech when he talks about existential risk. His inclusion of environmental concerns that AI Safety people don't find important is pivotal for his ability to come off as sincere and in collaboration with grassroots anti-AI sentiment. It's from a position of collaboration that people change their minds. It is my view that AI Safety advocates should be willing to make trades with the populist left that would build trust & help AI Safety gain a mass political base.

For the same reason, the big money being spent in congressional races from both OpenAI and Anthropic is absolutely corrosive to this coalition. In-the-know progressives are going to be skeptical of Anthropic forever after they helped defeat Nida Allam. I know plenty of progressives who liked Alex Bores when OpenAI was spending money against him but now feel neutral after Anthropic started funding him. I'm of the mind that Anthropic's best play was to just spend 1-for-1 against OpenAI and cancel out the PACs, but once you start picking and choosing winners from above you're going to make enemies on the ground.

In short, if you care about AI Safety and have a platform, you should start building trust with the politicians, organizers, and influencers of the populist left. Sanders gives you an incredible foot-in-the-door, but you need to walk in.

I've published a lot about the political coalitions that decide whether data centers get built (https://onethousandmeans.substack.com/p/noise-complaints) and what Sanders' recent actions mean for AI Safety (https://onethousandmeans.substack.com/p/sanders-sounds-the-alarm-on-ai). Subscribe to my journal One Thousand Means if you want to read more about how to create this coalition.

Goomphus
2d · Edited

AI doomers sound way too much like marketers dunking on the dummy leftists for not appreciating Claude. The leftists are in fact dummies, but it’s rhetorically ineffective to say so.

Their concerns have no moral force behind them, so leftists ignore them. It’s like “Oh my god I love Claude Code, it’s so productive, we’re all gonna die btw.” Yud brings moral clarity, but he has his own problems.

AI risk is literally the greatest argument for righteous, populist outrage of all time. Get angry. Stop talking about how much more ethical Anthropic (40% everyone dies) is than OpenAI (50% everyone dies). Stop thinking about the hypothetical infinite happiness computer. Communicate like a normal person who thinks they’re about to die.

Alexei Gannon

I actually completely agree with your point on moral force. The problem is that most AI Safety people have a media diet of libertarian bloggers who convince people by sounding the most technically correct, whereas leftist media is concerned with who is the most morally just. I think both of these things actually matter a lot, but you need to know your audience.

Jennifer Keith

'Parents are trying not to freak out about AI’s impact on their kids, WSJ reported: “the only way to AI-proof your kid is to teach them, in the wise words of Chumbawamba, that they’ll get knocked down, but they’ll get up again.”'

There's no mechanism to guarantee that. No one in AI even cares about that. What a joke.