Nine AI predictions for 2026
Will the bubble pop? Will AI dominate the midterms? Will Sam Altman snap?
2025 was a wild year in AI. The technology leapt forward; valuations soared higher still; and the political backlash began to crystallize into something real. 2026 promises to be even stranger. To kick off the new year, the Transformer team has put together some predictions for what to expect from the industry — and the political response.
The bubble will deflate — or actually pop…
Look, I hate to agree with Ed Zitron (and on most things AI, I don’t).
But there are enough warning signs in AI’s current financial trajectory to think a crash is, if not guaranteed, then highly likely.
The most obvious concern is that, as with the dotcom boom, valuations have become massively detached from both profits and revenues. OpenAI is far from profitable, and even its annualized revenue, which recently surpassed $20b, is roughly one-fortieth of the $830b valuation it is reportedly raising at.
Of course, such valuations are based on the prospect of a transformative technology that will deliver huge returns. They are, in effect, a bet that we’ll see an end to the economic paradigms that make boom and bust inevitable.
But the reason to think a crash is likely is that, whether or not that kind of shift eventually arrives, it does not look like it will arrive quickly enough to stop markets worrying about their returns over the next few years. Bain & Co. estimates that AI companies will need $2 trillion in combined annual revenue by 2030 to fund projected compute demand — and expects them to fall $800b short of that mark, implying projected revenues of only around $1.2 trillion.
There are already signs of jitters. The price of credit default swaps on Oracle’s debt — effectively bets that the company will default — is higher than at any time since 2009. CDS volumes across major tech names have climbed 90% since September. Circular financing deals mean companies like Oracle and Nvidia are increasingly reliant on a very small number of customers — some of which do not have great balance sheets.
None of this is to say that AI itself is a bubble. The rate of progress in AI development remains both impressive and terrifying. But that doesn’t mean the current financial trajectory is an uninterrupted path to the moon.
— Jasper Jackson, managing editor
… and that will be bad for much more than AI
When talk of a bubble really took off, the potential fallout looked like it might be confined to tech companies and their direct investors. That’s no longer true.
Tech companies have moved more than $120b of data center spending off their balance sheets using special purpose vehicles funded by traditional, blue-chip Wall Street firms. There are very sensible strategic and accounting reasons for doing so. But such practices spread risk around the economy in a way that increases the likelihood of contagion.
There are also early signs of securitization — lenders pooling debt and selling it to outside investors, including pension funds. The deals are still small, but the mechanism is the exact same one that spread mortgage risk through the financial system in the mid-2000s.
What we’re seeing today is not 2008. The underlying assets are real, the risk is visible, and nobody is pretending AI debt is AAA-rated government bonds. But the structural parallels are real. Risk is being moved off corporate balance sheets and onto private credit funds with limited transparency. And the investors holding this debt include institutions whose losses would ripple through the broader economy.
If a crash does come, then, the consequences may be felt well outside of Silicon Valley — with potentially horrible repercussions for the rest of us.
— Jasper Jackson, managing editor
The transformation will begin to show
For a while now, people have been saying that “even if AI development stopped in its tracks today, it would still be transformative.” I didn’t buy it — the technology we had wasn’t enough to be transformative in an “everything is changed forever” way.
As of last month, though, that’s changed. Opus 4.5 with Claude Code is a genuine marvel: a tool that can operate autonomously for extended periods, doing economically valuable work in a fraction of the time it would take a human. And while it’s designed for coding tasks, it’s capable of a lot more. It’s the first truly impressive autonomous agent I’ve used.
Over Christmas, many people — myself included — used Claude Code with Opus 4.5 and discovered its power. In the coming year, I expect UX improvements will help these capabilities spread throughout the economy, with the impacts showing up in tangible ways. Software companies will ship much faster. Bankers’ productivity will tick up. Anthropic’s and OpenAI’s valuations will continue to soar. 2025 was dominated by talk of an AI slowdown. By the end of 2026, I think any such talk will have been firmly put to bed.
— Shakeel Hashim, editor
Progressives will run on America’s hatred of AI
In December, Senator Bernie Sanders warned constituents of the threats posed by transformative AI and endorsed a moratorium on data center construction. And where Bernie goes, centrist Democrats tend to begrudgingly follow.
Economic populism is hot, and the American left desperately needs to re-energize its base to flip seats in the 2026 midterms. As campaign strategist Morris Katz recently told Politico, “We’re really heading towards a point in which it feels like we will all be struggling, except for 12 billionaires hiding out in a wine cave somewhere.”
Indeed, tech billionaires spent 2025 courting the Trump administration. Thirteen of them went out of their way to pander to Trump himself at a White House dinner in September. The battle lines are forming, pitting tech-friendly Republican elites against … everyone else? For Democrats looking to mobilize their base, the contrast writes itself.
Running against Big Tech, and AI in particular, increasingly looks like a winning strategy for the left. I expect many Democrats — especially those in purple regions directly affected by the AI infrastructure buildout — to take a page out of Bernie’s book in 2026.
— Celia Ford, reporter
The AI super PAC won’t be as successful as it hopes
Andreessen Horowitz, OpenAI president Greg Brockman, and a range of other industry backers clearly hope that their new super PAC network, Leading the Future, will do for the AI industry what Fairshake did for crypto — bend Congress to their will through sheer spending power.
But they face an uphill battle. Fairshake didn’t have to fight a counter PAC — Leading the Future faces one financed by deep-pocketed AI safety advocates. Fairshake didn’t have to convince politicians to adopt deeply unpopular and high-salience policy stances. As Celia notes above, Leading the Future will. And because crypto never became a particularly mainstream issue, Fairshake mostly dodged media scrutiny. Leading the Future won’t be so lucky.
I suspect that while Leading the Future will notch some wins, the midterms won’t grant it the industry-friendly Congress it so badly wants. We may well see at least one “astroturfing” scandal, with the ensuing backlash helping the very candidate the PAC is trying to crush. The counter PAC should be able to neutralize the industry’s cash in several races. Combined with the public’s ever-increasing disillusionment with AI, all this points to midterms in which the industry might find itself sorely disappointed.
— Shakeel Hashim, editor
AI anxiety will top the pop charts
Spotify’s AI slop problem made headlines, angered musicians, and ruined playlists this fall. NYT tech columnist Kevin Roose predicts that primarily AI-generated music could be nominated for a major award in 2026 anyway.
But musicians, especially those with Grammy-nominating power, largely loathe and fear AI-generated art. Unlike Kevin, I’d bet my bass guitar that AI won’t win Best New Artist. But I do think that artificial intelligence has broken into the zeitgeist enough to appear in a viral, mainstream, award-nominated pop girlie’s songwriting.
I don’t mean a dark alt-pop darling like SOFIA ISELLA, who already co-wrote these chilling lyrics with Grimes in April 2025:
The phones are fat / they’re covered in flesh
They’re falling in love with you
They’re writing and reading / erect and eating
They’re getting turned on for you
The machines are turning to meat / they’re mocking us how we bleed
It’s what we asked for, what we wanted and need
She’s ahead of the curve, but I suspect the A-listers aren’t far behind. This will be the year that Billie Eilish, Charli XCX, or Sabrina Carpenter drops a human-made banger about AI reshaping our minds and relationships — for the worse.
— Celia Ford, reporter
World models will be a thing
Though reports of scaling’s death look greatly exaggerated, large language models’ jaggedness doesn’t appear to be going away. It remains plausible that there are areas of activity, economic and otherwise, where LLMs will remain inadequate or even ineffective.
That may mean that continued progress in AI will require another architecture altogether. At the moment, the most plausible alternative appears to be some sort of “world model,” which would, at least in theory, tackle some of the problems with generalization and flexibility that still dog LLMs.
There are good reasons to think that AI equipped with an internal model of how the world it inhabits behaves may be better suited to a whole range of tasks. The most obvious is robotics. But even in other domains, such as those requiring creativity, purely statistical models may face greater limitations.
The recent flurry of activity in world models — with proponents ranging from Yann LeCun and Fei-Fei Li to DeepMind and OpenAI — suggests that more attention on these alternative approaches is coming. I’d expect LLMs to be sharing at least some of the spotlight in 2026.
— Jasper Jackson, managing editor
A reckoning for Anthropic
Anthropic executives have staked their reputation on a bold claim: by “late 2026 or early 2027”, we’ll have “powerful AI systems” with “intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines” and “the ability to autonomously reason through complex tasks over extended periods … much like a highly capable employee would.” In other words: AGI-lite.
While I believe such systems are certainly coming, I don’t expect them on such bullish timelines — there are simply too many bottlenecks and unresolved technical problems. As Dwarkesh Patel recently wrote, “models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate the long timelines people predict.”
If by year’s end we don’t have a country of geniuses in a datacenter, Anthropic (along with a whole bunch of other short-timelines folks) is going to look a bit silly. That, in turn, could have big implications: no one listens to the boy who cried wolf.
— Shakeel Hashim, editor
Sam Altman will snap
As OpenAI drifts ever further from its stated mission and the stakes get ever higher, it seems inevitable that Sam Altman will crash out.
The best public meltdowns are unpredictable, but I’ll make a few wild guesses:
Sam and Elon Musk have a history of squabbling on X. One of them lifts weights; the other already challenged a tech CEO to a cage match once. It could happen again. I’m really rooting for this one.
A series of internal emails between Sam and OpenAI’s newly hired Head of Preparedness is leaked, revealing something dreadful. When asked about this in a video interview, Sam begins to respond defensively. The host pushes back, and Sam yells, “WE DON’T KNOW WHAT TO DO, OKAY,” tears streaming down his face. This goes viral.
As grassroots opposition to data center construction builds, Sam posts a series of late-night tweets railing against middle-class Americans. He deletes them in the morning, but the screenshots have already made the front page of every major news outlet. Much to the horror of his PR team, a visibly furious Sam goes live on X to bemoan the “deeply unserious people opposing progress.”
Here’s to an unhinged 2026!
— Celia Ford, reporter