8 Comments
Connor Williams

Lots of good stuff here, but I think it falls into a different trap than the common AI-safety trap of refusing public and political engagement. Many of the strategies discussed in the article, if scaled up and intertwined with AI safety more deeply, risk turning the fight to stop the AI race into an explicitly politically polarized issue. We need to engage not just the left but also the right. I'll be writing more about this soon, hopefully in the next week or two, but this is far from an impossible task. Both mass disemployment/perma-UBI with no means to better your situation through your own hard work, and x-risk itself, are powerful horrors, especially to the populist/MAGA right, if framed in their own language and around their own priorities rather than presented as just one more piece of a broader lefty-activist omnicause, and if certain very avoidable traps are avoided. Much more on this and other topics here: https://connorsscratchpad.substack.com/p/strategic-considerations-for-pausing

This is especially true given the current administration. A purely lefty movement is not going to move a right-wing administration. In my linked essay above, I talked about how mass MAGA participation in the pushback against Mike Lee's land-sale efforts was crucial.

Geoffrey Miller has also written on this, on X and elsewhere, much more eloquently than I have.

The Long Game

"Twenty-year old Daniel Alejandro Moreno-Gama was arrested for the attack outside OpenAI’s headquarters, where he was allegedly trying to break in. In his backpack, officers reportedly found a manifesto listing the names and home addresses of other AI executives. Earlier this year, he wrote Substack posts about death, destiny, and existential risks, or “x-risk,” posed by artificial intelligence."

DOESN'T SOUND LIKE PSYOP AT ALL.

Every article like this is death by laughter.

The "x" is a reference to the flag on the database profiles of the protected ruling bloodlines. It's something the cops see when they run a bloodline person's license and such so they know to let them go instead of making arrest.

Normies just need to wake up.

Tom Bibby

Really enjoyed this! Perhaps this speaks somewhat to the tension covered here, but PauseAI rejects being called an "anti-AI activist group." I think this was corrected in a previous Transformer article.

Matthew Milone

This op-ed has several problems that stem from either a conflation of AI safety with AI governance (the latter of which is much broader), or a failure to take existential AI risks seriously.

The section "Who's Missing" contains both of these mistakes. For example, the second paragraph implies that unemployed people deserve more representation within AI safety advocacy. Why? I understand that gen-AI screwed up the job market, but that's not a *safety* problem; it's a governance problem.

The section also recommends "centering the most impacted", but that doesn't make sense as a response to an extinction threat, which maximally impacts everyone. This universality also applies to lesser concerns, including unemployment. The ILO report claims that >70% of jobs have no exposure to automation, but that's ridiculous to anyone who takes ASI seriously. AI is coming for everyone's job, regardless of one's race or sex. (Tangential observation: the ILO report's so-called "global" data excludes the U.S.A., Canada, Australia, and New Zealand.)

Contrary to a different part of the op-ed, loss-of-control and gradual disempowerment are not extensions of power concentration, and loss-of-control certainly isn't an extension of job displacement. A loss-of-control scenario involves an AI acting against its overseers. This is another conflation of AI safety with AI governance.

https://securityandtechnology.org/virtual-library/report/ai-loss-of-control-risk-indications-warning/

Regarding the critique that AI safety advocacy relies on overly theoretical arguments: that may have been true two years ago, when the survey was taken, but it's not true anymore. Both Anthropic and independent organizations, such as Redwood Research, have concretely demonstrated the problems that early AI safety researchers warned about.

https://www.lesswrong.com/posts/b8eeCGe3FWzHKbePF/agentic-misalignment-how-llms-could-be-insider-threats-1

Alexei Gannon

For these reasons, AI safety advocates need to be mindful of how throwing PAC money into contested primaries can turn off potential allies. People don't like dark money in politics! You need to find ways to build trust with the electorate and electeds that don't look like burning money to buy your candidate. Read more here: https://onethousandmeans.substack.com/p/public-first-actions-strategy-doesnt?utm_source=share&utm_medium=android&r=5yex5f

Kenneth Sun

I’m so happy that more people are concerned about the intellectual and cultural homogeneity in AIS.

How to change that, though? Concretely, I’m looking for project ideas.

Aicha

It’s not that easy. As a normie, once you get into these organizations, if you don’t subscribe to the doom thinking, you’re made to feel crazy or ostracized. It’s really unpleasant.

Legible by Dalia Ezzat

This. Right. Here.

Understated, underexamined, and underreported. Normies will go through all the hoops: take courses, embed in communities, write, volunteer. And when they're finally hired, the culture is so insular and inherently anti-outsider that it loses out on incredible skills and experience that might actually bridge the gap between the community and public understanding.

Talk to normies who have tried and continue to try, but are somehow still treated as intellectually inferior. @Celia Ford