Want bad charts? We got bad charts.

The first, from the team that brought you “AI in 2025: 25 Themes in 25 Memes”, is . . . 

It’s the key evidence in a note from Adrian Cox and colleagues at Deutsche Bank Research Institute (and, apologies, we’re a day late in getting to it because Gmail labelled it as spam). Here’s their argument:

One AI bubble has already burst – the bubble in saying there’s a bubble.

The number of web searches for “AI bubble” has plummeted in the past month, according to Google Trends.

Peak “AI bubble” was on Aug 21, shortly after a little-understood report from MIT appeared to suggest that hardly any organisations were getting a return from their investment in AI, and OpenAI CEO Sam Altman said investors might be getting “over excited”, prompting a 3.8 percent pullback in the Magnificent Seven tech stocks over five days.

Since then, the number of web searches worldwide for “AI bubble” has fallen to 15 percent of that level. “AI boom” reached its own high a week earlier, at 40 percent of the “AI bubble” peak. Meanwhile, the bubble in “crypto bubble” references topped out in late January at a mere quarter of the AI version.

Let’s be fair. Google Trends has its uses. Its main use is in social sciences, where it gives academics a free data-dredging resource to p-hack whatever result they want to show.

What sometimes gets overlooked is that Trends data is a bit janky. All it offers is a relative measure of a search term’s popularity, drawn from a secret-sauce sample of search volume that constantly changes, collected by a method that also changes from time to time.

Our attempts to recreate the above chart failed, though at pixel time the 90-day view still shows “AI bubble” searches peaking on August 21. That’s around when Meta had reportedly frozen AI hiring, Altman was quoted as saying the AI market is a bubble, and “AI bubble”-type commentary was appearing in media including CNN, The Guardian, NYT, LA Times, Forbes, Harvard, Fortune, The Telegraph, The Financial Times and FT Alphaville. How many of those “AI bubble” searches were journalists looking for filler material is not something that can be determined.
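For anyone who fancies trying at home, here’s roughly what a recreation attempt looks like. This is a minimal sketch using the unofficial pytrends library (our choice of tool; nothing in the note says how the chart was actually built):

```python
# A rough recreation attempt via the third-party pytrends library
# (pip install pytrends); an assumption on our part, the note doesn't
# say how the chart was built. The index is relative to the peak in a
# rotating sample, which is why two runs rarely agree.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["AI bubble", "AI boom"], timeframe="today 3-m")
df = pytrends.interest_over_time()   # dates x terms, each scaled 0-100

print(df["AI bubble"].idxmax())      # date of peak interest
```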

Another tricky question is whether search volumes have any predictive power. This theme’s been studied a lot, in areas such as stock trading, oil trading, oil consumption, FX, FX options, crypto trading, private consumption, fast-fashion trends, violence against women, internal migration, long Covid, and “global interest in pain”. We couldn’t find any meta-analysis summing up the findings, so we had to read the conclusions, where the most common answer to the question is: “it depends”.

To add to the literature, here’s a chart.


It’s silly, sure. But is it any sillier than the one at the top of the page? Your call.

Next!

The above is from the Dallas Fed’s research department. We checked the website several times to confirm it’s real. To save you the click, here’s the explainer:

Under one view of the likely impact of AI, the future will look similar to the past, and AI is just the latest technology to come along that will keep living standards improving at their historical rate. With this expectation, living standards over the next quarter century will follow something close to the orange line in Chart 1, extending past 2024.

However, discussions about AI sometimes include more extreme scenarios associated with the concept of the technological singularity. Technological singularity refers to a scenario in which AI eventually surpasses human intelligence, leading to rapid and unpredictable changes to the economy and society. Under a benign version of this scenario, machines get smarter at a rapidly increasing rate, eventually gaining the ability to produce everything, leading to a world in which the fundamental economic problem, scarcity, is solved. Under this scenario, the future could look something like the (hypothetical) red line in Chart 1.

Under a less benign version of this scenario, machine intelligence overtakes human intelligence at some finite point in the near future, the machines become malevolent, and this eventually leads to human extinction. This is a recurring theme in science fiction, but scientists working in the field take it seriously enough to call for guidelines for AI development. Under this scenario, the future could look something like the (hypothetical) purple line in Chart 1.

Today there is little empirical evidence that would prompt us to put much weight on either of these extreme scenarios (although economists have explored the implications of each). A more reasonable scenario might be one in which AI boosts annual productivity growth by 0.3 percentage points for the next decade. This is at the low end of a range of estimates produced by economists at Goldman Sachs. Under this scenario, we are looking at a difference in GDP per capita in 2050 of only a few thousand dollars, which is not trivial but not earth shattering either. This scenario is illustrated with the green line in Chart 1.
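The green-line sums are easy enough to sanity-check. Here’s a back-of-envelope sketch with placeholder inputs; the starting level and trend growth rate are our round numbers, not the Fed’s:

```python
# Back-of-envelope check of the Dallas Fed's "green line" scenario.
# Inputs are our placeholders, not the Fed's: rough 2024 US GDP per
# capita and an assumed historical trend growth rate.
BASE_2024 = 80_000   # dollars per head, roughly
TREND = 0.018        # assumed trend growth, 1.8% a year
AI_BOOST = 0.003     # +0.3pp to growth for one decade, per the scenario

baseline = BASE_2024 * (1 + TREND) ** 26                       # 2024 to 2050
with_ai = BASE_2024 * (1 + TREND + AI_BOOST) ** 10 * (1 + TREND) ** 16

print(f"2050 baseline:  ${baseline:,.0f}")
print(f"2050 with AI:   ${with_ai:,.0f}")
print(f"difference:     ${with_ai - baseline:,.0f}")  # a few thousand dollars
```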

Examination of what humanity’s erasure might mean for centralised Treasury coupon processing will have to wait for another time. Next!

Lots of AI spending carousel charts have been doing the rounds recently, but it’s still interesting to see one in actual published investment bank research.

The above image is from Citigroup, which had previously been forecasting AI capex across the hyperscalers at $2.3tn by 2029. Yesterday it moved to $2.8tn, citing “a flurry of announcements from OpenAI and Nvidia, Alibaba and Nvidia, CoreWeave and Oracle, Microsoft, Stargate and its partners”. What it’s showing, in case you were mistaking it for pass-the-parcel finance, is the one true path to payback. Per Citi:

While there are similarities between the AI partnerships like NVIDIA/OpenAI/Oracle and AWS/Anthropic and those networking deals involving Nortel/Lucent/Cisco and early Internet startups – particularly the circular nature of vendor-based leasing arrangements – the critical distinction, in our view, is the “off-ramp” created by growing external demand for AI services driven by enterprise adoption.

While true that AI companies have spent (and will likely continue to spend) considerable amounts of cash to fund their growth, we do not see evidence that this spending is associated with excessive advertising spend to build market or mind share to service a customer that ultimately doesn’t exist. Rather, we believe AI companies have line of sight into a reliable and discernible level of demand, as indicated by (1) enterprise commentary on gains across knowledge retrieval, customer service, and healthcare and (2) continued technological progress that widens AI’s scope of application. Said differently, we believe enterprises have provided a clear external validation of value to near-term interdependencies taking shape.

Great stuff. Next!

The above is from page 33 of Bain & Company’s Technology Report 2025 and . . . honestly, it shouldn’t be included in this post because, fair enough.

The argument presented is fairly simple. Compute demand has been growing at 4.5x every year over the past decade, whereas Moore’s Law predicts chip efficiency doubling only every two years (and may also be dead). Because of this mismatch, the power needed for compute might hit 200GW by 2030.
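Annualised, the mismatch looks like this. A quick sketch (the annualisation is ours; Bain’s 200GW figure comes from its own fuller model):

```python
# Annualising the mismatch Bain describes (the arithmetic is ours;
# Bain's 200GW projection comes out of its own fuller model).
DEMAND_GROWTH = 4.5             # compute demand: 4.5x per year
EFFICIENCY_GROWTH = 2 ** 0.5    # 2x every two years is ~1.41x per year

# Power requirements scale with demand divided by efficiency gains.
net_power_growth = DEMAND_GROWTH / EFFICIENCY_GROWTH
print(f"net power demand growth: {net_power_growth:.1f}x per year")  # ~3.2x
```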

Building all that power infrastructure makes the economics of AI’s current trajectory unaffordable, and the assumptions for making it otherwise are implausible:

Bain’s research suggests that building the data centers with the computing power needed to meet that anticipated demand would require about $500 billion of capital investment each year, a staggering sum that far exceeds any anticipated or imagined government subsidies. This suggests that the private sector would need to generate enough new revenue to fund the power upgrade. How much is that? Bain’s analysis of sustainable ratios of capex to revenue for cloud service providers suggests that $500 billion of annual capex corresponds to $2 trillion in annual revenue.

What could fund this $2 trillion every year? If companies shifted all of their on-premise IT budgets to cloud and also reinvested the savings anticipated from applying AI in sales, marketing, customer support, and R&D (estimated at about 20% of those budgets) into capital spending on new data centers, the amount would still fall $800 billion short of the revenue needed to fund the full investment (see Figure 2).
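For anyone checking the homework, the quoted figures imply the following. A sketch, with the $1.2tn of identified funding backed out from the numbers rather than stated directly:

```python
# Bain's funding-gap arithmetic, backed out from the quoted figures
# (all in $bn per year).
ANNUAL_CAPEX = 500        # data centre build-out
CAPEX_TO_REVENUE = 0.25   # sustainable capex/revenue ratio implied by the note

revenue_needed = ANNUAL_CAPEX / CAPEX_TO_REVENUE    # $2tn a year
SHORTFALL = 800                                     # per Bain, after reallocating
identified_funding = revenue_needed - SHORTFALL     # implied: on-prem IT shift
                                                    # plus reinvested AI savings

print(f"revenue needed:     ${revenue_needed:,.0f}bn a year")
print(f"identified funding: ${identified_funding:,.0f}bn")
print(f"shortfall:          ${revenue_needed - identified_funding:,.0f}bn")
```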

The caveats Bain offers involve people inventing more efficient ways to mash data and quantum computing arriving much earlier than expected. As it stands, stable quantum computers are still 10 to 15 years away, and power is a highly regulated industry where things can’t happen quickly, so without some great unanticipated breakthrough out of left field, “the field could be left to only those players in markets with adequate public funding.”

We’re living through very odd times when a management consulting firm is the most sensible voice in the room.