What happens when the AI bubble bursts?
The world is prepping for an AI crash. History points to what that might look like
In the last month or so, talk about an AI crash has shifted subtly but significantly: people no longer talk about “if” there’ll be a crash, but instead about “when.” There is less speculation as to whether there’s a bubble in AI investment, and much more about what it’s going to be like when it pops — or explodes.
OpenAI’s Sam Altman has reportedly referred to an AI bubble, likening it to the dotcom boom, and agreeing investors are “overexcited about AI.” The Bank of England has warned of the potential for a “sudden correction” in the market — the nearest the august institution will come to using a word like ‘crash’ — “particularly for technology companies focused on artificial intelligence.” Others are spooked by what looks like circular investment and spending between Nvidia and some of its biggest customers. What started as a whisper of concern is rapidly growing louder.
There is some substantive support for those concerns, too. In August, the highly respected investor Harris Kupperman estimated the AI industry would need to generate annual revenues of $160b (more than ten times its current level) for at least a decade just to break even on 2025’s capex investment. Earlier this month, he revisited that estimate: because of the rapid obsolescence of AI data centers, he said, revenues would need to be $320b or more.
Under any kind of conventional modelling, AI is attracting far more investment than it could ever reasonably return. Some AI backers aren’t looking at conventional modelling, of course — for them, true artificial general intelligence is the prize — but for listed companies and institutional investors alike, returns and modelling matter.
If a crash is coming, the question of “when” is the burning one for an investor, but in many ways it’s the least interesting. If a crash were fully predictable, economic theory tells us it would already be happening. As it isn’t, it could happen tomorrow, or not for a year or more.
The more interesting questions, then, are what happens as and when a crash comes — how big will it be, who will be worst hit, and what might be the knock-on effects? What will happen to the broader economy — and perhaps more significantly for those of us interested in AI, to the development of the technology? Here, at least, history and expertise can offer us some interesting steers.
The most memorable stories of the dotcom boom and bust are the huge consumer sites that failed — pets.com, would-be fashion retailer boo.com, or Webvan, which imagined that one day we might order our groceries online and have them delivered to our home.
These might have been the dotcom failures that made the front-page headlines (even if the ideas behind all three were vindicated in the long run), but they weren’t where most of the real investment was. Instead, the real money — and the real losses — were mainly confined to the business pages: it was the infrastructure.
“The dotcom bubble did not involve very much real investment,” explains Andrew Odlyzko, Professor Emeritus of the School of Mathematics at the University of Minnesota. “You know, they didn’t buy millions of trucks. They built some warehouses, but they didn’t build a number of giant warehouses similar to the data centers being built today. So, the stock market valuations for dotcom bubble were giant, but the actual investments were fairly minor. Telecom was a bit different.”
Making a website during the dotcom bubble required some cash, but not large sums relative to the size of the economy, or even relative to institutional investors. But behind the consumer-facing bubble was a similar dash to install the infrastructure everyone believed would be needed to facilitate the huge boom in online activity people thought was coming.
It was a near-universal belief at the time that internet traffic was doubling every 100 days, a rate that compounds to more than a tenfold increase year-on-year. That led to a huge race to lay the necessary fiber-optic cable. There was just one problem: the growth projections were wrong, as Professor Odlyzko spotted at the time.
“These investments were based on the assumption that internet traffic was growing tenfold per year, and it wasn’t,” he says bluntly — it was instead only doubling each year. “There was evidence that was not the case. And so these things were bound to blow up in a fairly quick order, as they did, roughly on schedule.”
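The gap between the two growth claims is easy to check. A minimal sketch of the compounding arithmetic (assuming a 365-day year; the figures are illustrative, not from Odlyzko's papers):

```python
# Compound annual growth implied by two different doubling periods.
# "Doubling every 100 days" (the bubble-era belief) compounds to far
# more growth per year than "doubling every year" (what Odlyzko measured).

DAYS_PER_YEAR = 365

assumed_growth = 2 ** (DAYS_PER_YEAR / 100)  # doubling every 100 days
actual_growth = 2 ** (DAYS_PER_YEAR / 365)   # doubling every year

print(f"Assumed annual growth: {assumed_growth:.1f}x")  # ~12.6x
print(f"Actual annual growth:  {actual_growth:.1f}x")   # 2.0x
```

On the bubble-era assumption, traffic multiplies roughly twelvefold each year; at the rate actually measured, it merely doubles. That sixfold-plus overestimate compounds further for every year the projection runs, which is why the capacity build-out was, as Odlyzko says, bound to blow up.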
Several cable companies went bankrupt, while the ones that survived lost 90% or more of their value from their peaks. And because infrastructure is expensive, serious amounts of investor and lender cash were lost.
To think about how much of a disaster or otherwise that was, Odlyzko jumps even further back in time. “The great British railway mania of the 1840s was the largest technology bubble in history, if you look at the amount of real capital investment compared to the size of the economy,” he says.
As companies scrambled to connect cities with railway lines, they underestimated their costs and the time it would take. At the same time, they were grossly optimistic as to how quickly revenues would roll in. The investments turned out to be terrible.
The first railway crash, Odlyzko suggests, didn’t harm the economy too badly, as unprotected investors were left to simply lose everything, and on this occasion major banks and systemically important institutions were untouched.
“Hardly any of the lines that were built were left unused. There was enough revenue to pay for the running expenses. So as a result, Britain got this fantastic network, far ahead of anybody else in the world,” he recalls. “And many people credited the great Victorian boom that started right after the collapse of the railway, partially on this new infrastructure — built on the bodies of the investors.”
There is a rosy view of the dotcom boom and bust that looks very similar to this story. Investors had the right idea, but the wrong timing, and many lost a fortune — but they left useful infrastructure (in the form of cable and bandwidth) behind that laid the foundations for the productive, transformative internet boom that followed.
Beyond that, lots of skilled developers were left looking for work or for startups to found (at more affordable salaries than during the boom), and ‘smart’ investment money grew choosier about what it backed. No one today can credibly suggest that the dotcom crash in any way invalidated or significantly undermined the internet as a technology. Perhaps the same could be true of any coming AI crash?
This would certainly be a reassuring thought, but it might be overly optimistic. When it comes to infrastructure, the active lifespan of an investment matters a great deal. Railways were useful for decades, and in some ways centuries — even if tracks and sleepers need replacing, having an owned and undeveloped connecting strip of land between cities is immensely valuable even a century later.
Fiber-optic doesn’t last nearly so long, but much of the real expense of cabling comes in the cost of digging out and installing ducting through which to feed cable — once this is in place, it’s relatively cheap to upgrade the fiber within. That ducting lasts for decades, too.
If anything being built for the AI boom qualifies as infrastructure — and this is a matter that divides experts — it is the specialist data centers being built at an astonishing rate everywhere. But as Shane Greenstein, the Martin Marshall Professor of Business Administration at Harvard Business School, sets out, data centers used to train AI have several specialist characteristics that suggest they may not endure as useful infrastructure.
For a start, they are being built with the expectation of an even shorter shelf life than usual data centers, perhaps just 3-5 years. They are based on GPUs rather than CPUs, which sharply limits the range of tasks for which they’re useful.
That specialization, Greenstein explains, comes at a cost — the setups are inefficient at general processing, and thus are poorly suited for most other uses — though another expert suggested that “for cents on a dollar” an abandoned AI data center might be useful for cryptocurrency mining, even in a suboptimal setup.
And, while most data centers need to be located near to their key users, to reduce latency, AI data centers are often located where energy and land are cheap. This is a rational choice for their purpose, but further limits their usefulness to be deployed in any other field. For virtually anything except AI training, crypto mining and some forms of research, being close to the end user is more important than being near a powerplant.
Many AI data centers sensibly optimized for training, in both setup and location, will likely be far less efficient and effective at serving requests to existing models: not just the inference involved in running the model, but everything else that goes into delivering a response. “A GPU-based data center is not good for much else,” says Greenstein. “It’s not good for storage. It’s not what you use for backup. Yeah, it’s great at math. But it’s not colocated, so it won’t be quick, it won’t be faster communication… That’s why we get colocation.”
That means that if the huge sums put into training more powerful models suddenly dried up, many AI data centers might end up as little more than stranded assets.
Markets are good at finding new uses for those: the Donnelley plant in Chicago, which used to print telephone directories, now houses a data center, Greenstein notes. “But God, think about what these data centers in the middle of Louisiana could be used for,” he wonders. “That’s a lot harder to think about.”
The questions so far have centered on the technological impact of a potential crash, and we’ll return to this, but the related question is how bad it could be for the general economy. Here, the central factor is typically how large the investment boom is relative to the economy as a whole — and that points in the direction of a crash being a particularly painful one.
“The money is just so much greater than the dotcom boom, with the so-called Magnificent Seven accounting for more than a third of the value of the S&P 500 and their AI spending being responsible for much of the growth the US economy has seen in the past few quarters,” says Margaret O’Mara, Scott and Dorothy Bullitt Chair of American History at the University of Washington.
“Not to alarm, but the interdependence of investment recalls the railroad boom of the 1860s-70s — that preceded the panic of 1873 and the subsequent global depression — as well as the mortgage/housing bubble preceding the 2008 crisis.”
O’Mara might not have wanted to alarm, but her comments are relatively alarming, especially considering the broader context of sluggish economic growth across the world and a broader stock market bubble that could rapidly deflate if AI knocked confidence. The first railway crash was relatively contained; the second did far-reaching damage.
An AI-led economic crash could be a bumpy landing indeed, but O’Mara does believe good things can come out of even the worst of downturns. She points to “regulatory guardrails” created by FDR in the wake of the Great Depression, which many believe contributed to America’s sustained growth through the 20th century. She is also more optimistic about AI infrastructure than some. It may or may not pay off as its investors intend, she says, but “it certainly will be worth something to someone in the longer term.”
If there is a silver lining on the economic front, it is that most of the companies investing money in AI have an awful lot of money to lose. Professor Greenstein notes that before the current AI goldrush, several big tech companies had huge cash mountains that they simply would not touch.
“The major tech firms had all this cash, and they would not spend it,” he recalls. “At the time, enough of us were saying ‘you’re telling me you have no other investment opportunity that you could use this for that is going to be sufficiently high, and you’re just going to sit on the cash and not give it back to your stockholders? Wait a minute, dude, this is not the way it’s supposed to work’.”
This doesn’t reflect the position of startups like OpenAI or Anthropic, but even here their investor base is relatively concentrated. At this stage, this doesn’t look at all like the 2008 subprime crisis, where it turned out retail banking institutions had huge exposure to losses on what looked like safe assets. This time, it looks like many of the biggest potential losers have extremely deep pockets — which for those who are truly aiming towards AGI rather than conventional investment logic, could also prove significant.
In the case of Meta, its share structure means the formal decision will remain with Mark Zuckerberg and his confidence in the project, though even he must be guided to an extent by the share price. For newer entrants like Anthropic and OpenAI, the willingness of lenders and investors to hold their nerve will be key.
One complicating factor is that much of the latest wave of AI investment is fuelled by debt. Oracle’s recent cloud compute deal with OpenAI will reportedly involve an additional $100 billion of borrowing over the next four years, almost doubling its existing debt pile. Various other deals create an interlinked web of debt and receivables between AI companies, bolstering balance sheets while that debt looks good but creating the potential for cascading failure if one player collapses. The contagion risk might not be as broad as in 2008, but within the sector it is real.
As is so often the case, the question as to whether the investment spend on AI ultimately pays off is likely to come down to whether or not it delivers transformative AGI in the medium term. It is simply too difficult to make a meaningful case for AI data center investment as true infrastructure, says UCL’s Professor Carlota Perez, an expert in technology and socio-economic development.
“The reason why infrastructures are so crucial is because they broaden market access enormously, compared with the previous revolution,” she says. “And the bubbles are about getting enough investment to make the complete network possible even before any of it can be used profitably.”
Canals, the postal system, railways, steamships, the telegraph, highways, electricity and more qualify in this category for Perez. AI data centers do not.
“What is being called infrastructure in the case of AI is not about the market but about production capacity. It’s not the equivalent of railways, but the equivalent of very expensive steel plants or hydroelectric plants,” she concludes. “The sale of the product is going to use the internet infrastructure that gave access to world markets in the first place.”
The argument that the AI boom will leave behind useful infrastructure for the next boom does not really hold in any conventional sense — the investment going into fundamental AI research does not create infrastructure comparable to rail, road, or fiber. There is unlikely to be a dotcom-cable crash silver lining of this sort.
However, for those who believe a version of AGI could emerge in the next 5-10 years and genuinely transform the economy, and with it economic productivity, a crash is something that could still be exploited. Investors or AI companies with deep enough pockets and convictions to weather a crash could pick up compute at cents on the dollar, perhaps even accelerating their own research efforts. If an AI system then generated productivity-boosting new technologies and infrastructure, the payoff could be enormous.
Unless, that is, a crash deters spending and investment from the larger pool of investors reliant on more conventional logic. That would likely push back existing forecasts and timetables for AGI, almost none of which allow for a slowdown, let alone a halt, in investment.
As the saying goes, history doesn’t repeat itself, but it rhymes. This is especially true for economic history, where every boom and every bust has traits in common, but with a new technological and political context every time. As ever, there will be silver linings and small mercies — but there is little point sugarcoating it. A crash, whenever it may come, is likely to be brutal.
> And, while most data centers need to be located near to their key users, to reduce latency, AI data centers are often located where energy and land are cheap
I think it's often overstated how important it is for data centers to be near users, including in this piece. Just look at the actual latency numbers, e.g., https://wondernetwork.com/pings/San%20Francisco. From SF to the East Coast is sub 100ms (round-trip), from SF to New Orleans is ~51ms. This is just not enough to matter for *most* LLM uses we see today; real-time things like translation or voice are the exception. Maybe uses will change, but I expect the vast majority of LLM use won't be latency-sensitive on the order of 1/20th of a second.
For reference on how much 50ms is, ChatGPT as measured by Artificial Analysis (https://artificialanalysis.ai/models/gpt-5-chatgpt/providers) has ~530ms time to first token (latency), many small models are also in the 0.3-1s range.