The left is missing out on AI
As a movement, the left has largely refused to engage seriously with AI, ceding the debate over a generational threat and opportunity to the right

Abdication
“Somehow all of the interesting energy for discussions about the long-range future of humanity is concentrated on the right,” wrote Joshua Achiam, head of mission alignment at OpenAI, on X last year. “The left has completely abdicated their role in this discussion. A decade from now this will be understood on the left to have been a generational mistake.”
It’s a provocative claim: that while many sectors of the world, from politics to business to labor, have begun engaging with what artificial intelligence might soon mean for humanity, the left has not. And it seems to be right.
As a movement, the left appears unwilling to engage seriously with AI — despite its potential to affect the lives and livelihoods of billions of people in ways that would normally make it exactly the kind of threat, and opportunity, left politics concerns itself with.
Instead, the left has, for a mix of reasons good and bad, convinced itself that AI is at the same time something to hate, to mock, and to ignore. “The GenAI sector’s foremost feat of marketing has been the term intelligence itself,” N+1, one of America’s foremost left publications, recently wrote. “A much more important question: What if China develops time travel or warp speed before we do?” asked Will Menaker, a host of the popular left podcast Chapo Trap House, when responding on X in December to a discussion of the possibilities of advanced AI. “Large language models do not, cannot, and will not ‘understand’ anything at all,” argued Tyler Austin Harper, the self-described “leftist, sort of Marxist-skewing” former professor and now staff writer at The Atlantic, last summer.
Whether you hate AI or not — that’s up to you. There are many things to dislike about how it’s currently being developed, and valid reasons to dislike its very existence. But disliking something and ignoring it are different activities, and only one positions you to do anything about it.
The new consensus
There are, of course, high-profile voices on the left who talk about AI; perhaps the most famous American leftist, Bernie Sanders, is now warning about its dangers. But just as he has often been a lonely voice in Congress, on AI he stands apart from those within his own part of the political spectrum.
Take another high-profile voice associated with the left, at least when it comes to tech: Cory Doctorow, one of the world’s most esteemed sci-fi and technology writers. In December, Doctorow published the text of a speech given at the University of Washington called “The Reverse-Centaur’s Guide to Criticizing AI.” His purpose was to “explain what I think is going on here with this AI bubble, and sort out the bullshit from the material reality.” At its heart is the claim that “AI is just a word guessing program, because all it does is calculate the most probable word to go next.” In case you missed the point, Doctorow repeated it elsewhere in plainer words: AI is merely a “spicy autocomplete machine.”
This idea, that large language models merely produce statistically plausible word sequences based on training data, without having any idea about what the words refer to, has become the baseline across much of the left-intellectual landscape. Thanks to it, fundamental questions about AI’s capabilities, now and in the future, are considered settled.
The publications that play a key (if diminished) role in the left-wing argumentative ecosystem have converged on this line. Here are four.
The Nation: “AI only ‘knows’ anything in the same way that a calculator knows that 2 plus 3 is 5, which is why it cannot be counted on to learn and develop in the same way that a human would.”
The New Republic: “Generative AI chatbots simply ‘predict’ the next word in a sequence using methods that require vast computational resources, data, and labor. . . they cannot ‘think’ or ‘understand’ language. . .”
The New York Review of Books: “Chatbots regurgitate and rearrange fragments mined from all the text previously written. As plagiarists, they obscure and randomize their sources but do not transcend them.”
N+1: “Large language models, which promise so much today, do not offer judgment, let alone intelligence, but unrivaled pattern-processing power, based on a vast corpus of precedents.”
Social media reinforces this consensus, so that anyone who turns from the NYRB to Reddit or Bluesky, or the remaining left corners of X, will see the same thing. “Ppl don’t know how ChatGPT works,” one recent post said. “It doesn’t ‘know’ things. It autocompletes sentences. It makes things up.” The post has more than 70,000 likes.
As with many left ideas these days, the autocomplete view of AI is a popular adaptation of the views held by critical academics. People who follow AI closely will know this, though they may not know how deeply embedded in left discourse in particular these views have become.
“If you take the phrase ‘artificial intelligence,’ in a sentence like ‘does AI understand?’ or ‘can AI help us make better decisions?’, and you replace it with ‘mathy maths’ or ‘SALAMI’ [an acronym for Systematic Approaches to Learning Algorithms and Machine Inferences], it’s immediately obvious how ridiculous it is. You know, does the SALAMI understand?”
The above is from Emily Bender, a University of Washington computational linguist and the person probably most responsible for the autocomplete view and its adoption in left circles. Except she gives it another name, the “stochastic parrots” hypothesis, which explains the impression of intelligence that LLMs offer with the immediately graspable image of a bird that talks but doesn’t know anything. This was a stroke of memetic genius: the 2021 paper it was coined for, written by Bender with Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell, has been cited around 8,000 times. From there, it’s echoed through The Nation and N+1 and Bluesky, sometimes without attribution.
In 2023, when chatbots were more toy than tool, AI-as-autocomplete was maybe a defensible position. But now?
That view takes next-token prediction, the technical process at the heart of large language models, and makes it sound like a simple thing — so simple it’s deflating. And taken in isolation, next-token prediction is a relatively simple process: do some math to predict, and then output, the word most likely to come next, given everything that’s come before it, based on the huge amounts of human writing the system has trained on. But when that operation is done millions, and billions, and trillions of times, as it is when these models are trained? Suddenly the simple next token isn’t so simple anymore. Instead, a web of associations grows so complex and so clearly productive it reminds one of Stalin’s apocryphal comment that quantity has a quality all its own.
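For the curious, the kernel the “autocomplete” framing points at can be sketched in a few lines. What follows is a toy illustration only: a word-pair counter over a one-sentence corpus, with invented names, nothing like the neural networks real labs train. It shows what “predict the likely next word” means at its crudest, and how little that description settles on its own.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny corpus,
# then output the most probable continuation. Real LLMs replace these counts
# with a neural network trained over trillions of tokens, but the kernel --
# "given everything so far, pick a likely next word" -- is the same.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation by repeatedly predicting the next token.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints "the cat sat on the cat" -- plausible-looking, knows nothing
```

Scale that counting up by a trillionfold, swap the counts for a neural network, and the question of what the resulting web of associations has learned stops being obvious.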
Yet the properties of scale do not often enter the left conversation. Nor do several other factors. Factors such as the likelihood that training a system to predict across millions of different cases forces it to build representations of the world that then, even if you want to reserve the word “understanding” for beings that walk around talking out of mouths, produce outputs that look a lot like understanding. Or that reserving words like “understanding” for humans depends on eliding the fact that nobody agrees on what it or “intelligence” or “meaning” actually mean. And that, if you’re arguing for human uniqueness, you need to show that the trillions of neuron-connections in the brain aren’t also doing next-token prediction, or something like it.
As if that weren’t enough, it’s now debated whether “predicts the next token” remains an accurate and comprehensive description of what current systems are up to. Reinforcement learning has shifted the training objective from “what word would appear next on the internet” to “what response would a human prefer” — and today’s reasoning models are trained to work through problems step by step rather than answer in a single pass.
Given all this, the grain of truth in the autocomplete view of current AI functions alarmingly like the scattered, not-always-incorrect observations about temperature cycles that conservatives used to throw around in debates about climate change. In both cases, a debatable description of mechanism is mistaken for proof of (in)significance. CO2 makes up only 0.04% of the atmosphere, which sounds much too little for it to drive global warming — until you learn that CO2’s molecular structure lets it absorb infrared radiation in ways nitrogen and oxygen can’t. Similarly, “AI just predicts the next token” sounds deflating — until you consider what predicting the next token involves and start to ask whether there’s really such a difference between predicting and learning.
Indeed, it’s a little disturbing how closely this discourse follows climate-debate patterns set down 20 years ago by the right. Either a man-made phenomenon isn’t happening or, if it is, it’s not important. The common words in those articles, “just,” “simply,” “only,” are there because the argument doesn’t stand up without them.
The con
As it has for conservatives and climate change, dismissing a phenomenon that is already showing evidence of significant impact on the world puts a fair amount of epistemic stress on the people who do it. If AI is just “spicy autocomplete,” then what’s responsible for the current frenzy of attention? Autocomplete could explain away pre-ChatGPT interest levels without too much trouble. But it doesn’t come close to accounting for the trillions now invested, the data centers appearing around every corner, or the daily reports of AI automating task after task.
Another piece of framing is therefore needed to shore the argument up. What’s responsible for the AI frenzy? False consciousness and trickery. “Artificial intelligence, if we’re being frank, is a con: a bill of goods you are being sold to line someone’s pockets,” write Bender and Alex Hanna in their book The AI Con, published in 2025 to grateful reviews in literary and intellectual quarters. In this view, the money and attention flowing into AI aren’t reflections of anything real, they’re simply the con in action.
This belief is echoed in Doctorow’s essay. To him, tech CEOs are hucksters trying to Ponzi their way into more investment. “The primary goal is to keep the market convinced that your company will continue to grow, and to remain convinced until the next bubble comes along,” he writes. Of course, the 5-10x annual increases in AI lab revenues, the fact that ChatGPT was the most rapidly adopted consumer technology in history, the fact that consumer is another word for ordinary person and not tycoon — nowhere do these facts enter the picture. What’s left is a view of capitalism not as a system that can unfairly externalize harm, or as a negative system altogether, but as essentially a fake one.
This impression is enhanced by the bizarre way the issue of AI taking human jobs comes up in these discussions. The left hates tech CEOs and knows they’re out to get the ordinary worker, but the left also thinks the CEOs are idiots and can’t actually pull it off. Thus, Doctorow claims, “Bosses are mass-firing productive workers and replacing them with janky AI, and when the janky AI is gone, no one will be able to find and re-hire most of those workers, we’re going to go from dysfunctional AI systems to nothing.” Or, in Bender and Hanna’s words, “AI is not going to replace your job. But it will make your job a lot shittier.” The picture is practically Cubist: management is trying to fill your role — with something that’s not real and can’t do it.
Reasons to be skeptical
Right at this mystifying point is where some understandable reasons for skepticism enter. It’s not as if the tech world hasn’t spent billions of dollars on iffy technologies before. Matt Bruenig, the left writer, founder of the People’s Policy Project, and someone who doesn’t share the autocomplete view of AI, explained those reasons sympathetically in an email. “The tech sector has a credibility problem as well because, in the decade or so prior to LLMs,” he wrote, “it seemed to be primarily fixated on blockchain and cryptocurrencies which do appear to be completely useless, at least as far as production goes.”
It is hard to argue with that. Likewise, there are clear contradictions in how tech talks about AI. Seán Ó hÉigeartaigh, director of the AI: Futures and Responsibility Programme at the University of Cambridge and someone who has decades of experience trying to discuss advanced AI in left-leaning intellectual circles, described the skepticism that results from these contradictions. “CEOs say, ‘We think our technology might destroy the world,’ and then they go and build it,” he said. “To people coming to this topic fresh, those actions don’t match up with the belief. If they think what they’re doing is destroying the world, why are they doing it? Either they’re complete psychopaths or they don’t really believe that.”
There are plenty of reasons to be suspicious of the motivations and claims of the people in charge of AI companies. The question the left seems determined to avoid, however, is why that necessarily means you should dismiss the underlying technology, especially given the evidence so far. The gap between what AI systems can do now and what previously hyped technologies ever delivered is already vast. Crypto, for all its flaws, or rather because of them, never got a fraction of the energy and attention from non-boosters that AI now gets; to take another example, the metaverse remains a joke to everyone except Mark Zuckerberg.
But try to point this out in these circles and it might not go well. In May, Ethan Mollick, a Wharton professor and a measured voice on AI, announced he was limiting his posts on Bluesky because “talking about AI here is just really fraught.” In reply, a reasonably well-known left journalist said, “Maybe we can chase him off the goddamn earth too.” Ó hÉigeartaigh, for his part, said he regularly gets called a useful idiot running interference for Big Tech.
Academia
No one person designed the system of buttressed beliefs that’s built up across left-intellectual discourse; no doubt the beliefs grew together because each has trouble carrying weight on its own. But the system does come from somewhere. As the Bender and Mollick and Ó hÉigeartaigh examples suggest, the closer one gets to academia, the more surrounded by this thinking one seems to be — which is a bit strange, since, as the many alarmed reports of students handing over their studies to ChatGPT indicate, the university is one of the places AI has already affected most.
On the other hand, it’s a bit less strange if you consider it as an example of an intellectual war that’s escaping into the world from academia.
Here, again, Bender is the key to understanding.
Her view of AI is based on a firm belief about the nature of knowledge that comes from her work in linguistics. “The language modeling task, because it only uses form as training data, cannot in principle lead to learning of meaning,” she writes in one paper, meaning, basically, that because LLMs are disembodied, they cannot connect words to the things in the world they describe — which is a problem, since connecting words to things is the essence of meaning. The key term in that claim is “in principle.” It means that no amount of improvement in LLM ability could ever change the claim, and indeed, as LLMs have improved, Bender has shown little sign of altering her view.
This description of how AI works is, in other words, more a philosophical definition than an empirical description. That’s why the main energy of her work lately goes into reframing — into dragging things from process and output back to philosophy. That’s why “understanding” becomes “parroting,” “neural networks” become “mathy maths,” “LLMs” become “synthetic text extruding machines.” The approach is that whoever best changes the terms wins the debate, and Bender has in many ways done just that. (Of course, the fact that “Can mathy maths help us make better decisions?” is a perfectly cogent question, to which the answer is almost certainly yes, shows the limits of this approach.)
Bender is entitled to her philosophy. She knows what she’s committing to and what risks she’s running. And, to be fair, she doesn’t think that AI is always useless. “There are applications of machine learning that are well scoped,” she’s written. “These include such everyday things as spell-checkers.” But, for the most part, the people who parrot the parrots hypothesis thirdhand don’t know this. They don’t know they’ve signed up for a long-running philosophical war. They think they are talking about capabilities, about scientific measurement. And that mismatch is leading them into worrying places.
In part, they’re not aware of this because an opaque sorting has happened in academic AI research in recent years. “The people who are most optimistic about rapid progress,” said Ó hÉigeartaigh, have “disproportionately seen industry as a place to do their work, in part because you need a lot of compute and resources to do it.” The bullish ones have left academia, which means those who remain are by definition more bearish.
Academic practices play into this process too. Publishing in journals requires peer review, and peer review is slow. As Zvi Mowshowitz, who writes perhaps the world’s most exhaustive newsletter on AI, said, “Nobody in real academia can adhere to their norms and actually be in the conversation, because by the time you’re publishing, everything you were trying to say is irrelevant,” a generation or two behind the cutting edge. Another incentive for researchers to leave for industry, then.
This splitting of a field that once would have been forced to coexist has probably made industry too optimistic about the pace of progress and made academia too skeptical. That then skews what’s heard by people who listen to academia but not industry — and nearly everyone with that tendency, today, is on the left. They hear only the skeptics, unaware that real science is taking place in the AI labs too (or especially), done by PhD’d researchers they might trust if only they sat in a faculty office.
Exceptions and the right
How long can this situation hold? The example of climate change shows such attitudes can linger for quite a long time in a rump group dedicated to them. So perhaps it’s better to ask how long these attitudes will continue to spread outside that group.
Here, things look brighter. Epistemic distress is not the whole story of the left-of-center world. Though sometimes you can hear parrots squawking in the background, the left-leaning, general-interest outlets that tend to have New York in the name — The New Yorker and New York and The New York Times, for instance — are much more willing to consider a wide range of views about what’s happening with AI.
And AI is entering left electoral discussion in a meaningful way. The Biden administration took AI seriously in its last years. Bernie Sanders is suddenly frantic. “Despite the extraordinary importance of this issue and the speed at which it is progressing, AI is getting far too little discussion in Congress,” he wrote in The Guardian recently. “Right now, there is an amazing lack of political discourse for something that will be a very high priority later,” a Sanders adviser and founder of More Perfect Union told Axios this fall. A strategist for Zohran Mamdani said that “every candidate should be embracing an aggressive vision” on AI regulation.
On the whole, then, and refreshingly, given the low view of politicians these days, the politicians left of center are in better shape on “take AI seriously, please” than the intellectuals. Alex Bores, a New York state assemblymember running for Congress on a platform heavy on AI regulation, ascribes that to daily contact with the public. When people come up to him now, he hears worry about AI’s capabilities, not dismissal of them. “We’re hearing it from our constituents. This is a concern that is brought up to me,” he said. “When you see things happening quickly, when you see your neighbors being impacted, our job is to take action. This has moved very, very quickly from the theoretical to the real.”
Still, despite the relative alertness from political quarters, it’s hard to avoid the impression that the right is more alert, both to AI’s opportunity and to its danger. That doesn’t mean they are masters of wise AI policy; both the accelerationists and the industrialists influential in the current administration show that it is alarmingly often the opposite. It simply means that, between them and the Steve Bannon anti-tech wing, more or less the entirety of the movement agrees AI is not a fake technology.
One key sign: conservative intellectual magazines are in better shape than their left counterparts, generally blending a reasonably accurate grasp of the technology with concerns about social costs, along with — and this is something missing from nearly any portion of the left — some hope for what AI might mean for humanity. Take this, from Commentary: “As we learn to live with AI, I believe we’ll become more comfortable with the notion that these models ‘think.’ After all, the LLMs are getting better all the time.” Or American Affairs: “AI may serve as a powerful force multiplier for a well-honed native intelligence, or as a substitute for developing it in the first place.” And there’s really nothing on the left compared to the philosophical depth with which The New Atlantis has approached AI over the last few years.
Costs and missed opportunities
There are many costs of the left-intellectual world not taking AI seriously, and they will be paid by many quarters — with the left first in line. As Achiam put it, “when there’s a Big Problem that is going to be top of mind for everyone in a decade, whoever is first to the Big Problem gets to set all the rules for discussion and debate about it. In politics it’s a miss if you sit that out.”
More concretely, not taking AI seriously might blind the left to its political uses. “One possible concern might be the left-wing abstaining from using the tools when the right-wing does not, in politics, campaigning, policy,” Bruenig worried. There is already some data to this effect: 44% of Republican political consultants use AI for work daily, compared to 28% of Democratic ones, according to the American Association of Political Consultants.
Then there are the costs beyond the left — costs to the public and policy. The left’s current stance leads to a focus not on dealing with AI by regulating it wisely or preparing for it but on popping the economic bubble, which here is a baked-in fact of history and not a possibility of the future. After all, if AI is fake, nothing needs to be done except dispel the myth that it is real. And sometimes even that isn’t required: the bubble will pop itself; AI development is always already stopping. “The AI bubble . . . will burst,” N+1 writes. “The technology’s dizzying pace of improvement, already slowing with the release of GPT-5, will stall.” This stirring call to non-action was published in fall 2025 — in other words, weeks before the release of the three models, Gemini 3, GPT 5.1, and Opus 4.5, that pulled AI capable of changing daily life from the future into the present. (Since it must be said: it is entirely possible a bubble-popping crash happens — but even that likely won’t stop AI development.)
So it’s probably not ideal that just before what might — or might not — be the moment of greatest job dispossession in history, or of democratic dispossession, or worse, or better, part of the group historically most concerned with such things is plugging its ears. What should it be doing instead? There’s a huge amount of open room for left contributions to shaping the near and far futures. These are more the subject for another essay, but it’s worth gesturing to a few, in order from most concrete to most exotic.
On the near future, Dean W. Ball, until recently one of the White House’s key AI policy writers, is adamant that by not taking AI abilities seriously, the left is going to miss important ways of improving government. “The left persuasion requires a state that’s good at doing things,” he said. “If I were the left, the first thing I would be doing” would be to ask, “How can we use this to massively advance state capacity and massively expand the ability of the government to deliver public services to people?”
Bores thinks AI offers an opportunity to speed the US to cleaner energy. “We desperately need to upgrade our electric grid,” he said. “Now we have a system where you have basically unlimited private capital willing to invest in our electric grid, but the incentives right now are to turn on or buy power privately from old coal or oil places, because it’s just quicker to get approval for that than it is to hook up a renewable source.”
As for the far or more exotic futures: what’s the best shape for universal basic income if it’s needed? What if it’s wanted? Can treaties be designed to slow a race to superintelligence and reduce the risk of a catastrophe? What is the ethical view of post-humanism? Hardly anyone on the left is considering these questions in ways worth agreeing or disagreeing with. Aaron Bastani, the hard-left British journalist, is one exception. His 2019 book Fully Automated Luxury Communism envisions the ways technological development could eventually abolish material scarcity and free humanity from toil. “The demand would be a 10- or 12-hour working week, a guaranteed social wage, universally guaranteed housing, education, healthcare and so on,” he said in 2015. Far from revealing a thrall to capitalism, these attitudes reflect a belief in industrial power that goes all the way back to Karl Marx.
But who’s listening? Instead, you sometimes get the discomfiting sense you’re watching ghosts — people who were so unprepared for the future, because they were so certain they knew it, that they were already out of it.
Tempering that feeling is one thing the AI observers spoken to here emphasized: it isn’t yet too late to change direction. How much time, who knows. Ball, who doesn’t believe AI poses a strong risk to human civilization, thinks “there’ll always be time” to catch up. Mowshowitz, who does believe that, said, “I don’t think it’s too late. The world yearns for more and better thoughts.” Ó hÉigeartaigh was more urgent. “There’s potentially a narrowing window to really engage on this,” he said. “It would be really nice to get perspectives across the political spectrum just in case this giant transformation in human society does come along.”







