As we all know, OpenAI has been running around trying to join the club, claiming a few months ago to have $1.4tr and 30 gigawatts of compute commitments for the future (with no timeline), while reporting 1.9 gigawatts actually in use at the end of 2025. Since it doesn’t have the scale of cashflows from existing businesses that the hyperscalers can use, it has so far managed to do this, or at least announce this, with a combination of capital-raising (not all of which has necessarily closed) and other people’s balance sheets (some of which is also the famous ‘circular revenue’).

You can watch plenty of three-hour podcasts discussing all of this, and plenty of people have opinions about TPUs, Nvidia’s product lead, and Oracle’s strategy of borrowing against a declining but cash-generative legacy business to burn its way into the new thing, but how much should the rest of us care? Is this a path to a competitive advantage, or just a seat at the table?

We don’t really know what AI infrastructure costs will look like in the long term, but it’s quite possible that this turns out like the manufacture of airliners or semiconductors: there are no network effects, but with each generation the process gets more difficult and more expensive, and so those industries have gone from dozens of companies at the cutting edge to just Boeing and Airbus on one hand and TSMC on the other. Semiconductor manufacturing had both Moore’s Law, which everyone has heard of, and Rock’s Law, which most people haven’t: Moore’s Law said that the number of transistors on a chip was doubling every two years, but Rock’s Law said that the cost of a state-of-the-art semiconductor fab was doubling every four years. Maybe generative AI will work the same, with unit costs falling but fixed costs rising to the point that only a handful of companies are able to sustain the investment needed to build competitive models and everyone else is squeezed out.* This oligopoly would presumably have a price equilibrium, though it might be at high or low margins – this might all just be commodity infrastructure sold at marginal cost, especially given some of those at the table will be using their models to power other, much more differentiated businesses. Ask your favourite economist. **

So, when Sam Altman says he’s raised $100bn or $200bn, and when he says he’d like OpenAI to be building a gigawatt of compute every week (implying something in the order of a trillion dollars of annual capex), it would be easy to laugh at this as ‘braggawatts’, and apparently people at TSMC once dismissed him as a ‘podcast bro’, but he’s trying to create a self-fulfilling prophecy. He’s trying to get OpenAI, a company with no revenue three years ago, a seat at a table where you’ll probably need to spend a couple of hundred billion dollars a year on infrastructure, through force of will. His force of will has turned out to be pretty powerful so far.
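
(For rough scale, treating the $1.4tr above as the price of the 30 gigawatts – an assumption, since those two figures may not map onto each other exactly – that’s something like $45–50bn per gigawatt. A gigawatt a week is about 52 gigawatts a year, and 52 × ~$47bn ≈ $2.4tr, so ‘in the order of a trillion dollars’ is, if anything, conservative.)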

But, again, does that get you anything more than a seat at that table? TSMC isn’t just an oligopolist – it has a de facto monopoly on cutting-edge chips – but that gives it little to no leverage or value-capture further up the stack. People built Windows apps, web services and iPhone apps – they don’t build TSMC apps or Intel apps.

Developers had to build for Windows because it had almost all the users, and users had to buy Windows PCs because it had almost all the developers (a network effect!). But if you invent a brilliant new app or product or service using generative AI, or add it as a feature to an existing product, you use the APIs to call a foundation model running in the cloud and the users don’t know or care what model you used. No-one using Snap cares if it runs on AWS or GCP. When you buy an enterprise SaaS product you don’t care if it uses AWS or Azure. And if I do a Google Search and the first match is a product that’s running on Google Cloud, I would never know.

That doesn’t mean these APIs are interchangeable – there are good reasons why AWS, GCP and Azure have very different market shares, and why developers choose each. But the customer doesn’t know or care. Running a cloud doesn’t give you leverage over third-party products and services that are further up the stack.

The difference now, perhaps, is that all of those services were separate silos: there was a common search and discovery layer at the top in Google and Facebook, and common infrastructure at the bottom in the cloud, but the apps themselves were never connected to each other. Now we have an emerging alphabet soup of standards and protocols for models and websites to talk to each other across ads, e-commerce and some kind of intent and automation layer (the brief enthusiasm around OpenClaw captured some of this). A website can surface its capabilities so that a subset of them can just show up in ChatGPT, be it a real estate search or a shopping cart. You’ll tell your agent to look at a recipe on Instagram and order the ingredients on Instacart. Everything can get piped into everything else, and everything can talk to everything else!

Meanwhile (saying the quiet part out loud), if you can set and control those APIs and manage the flows, that gives you power. Standards have been a basic competitive weapon in every generation of technology – remember Microsoft’s slogan ‘embrace and extend’. In particular, OpenAI now suggests you’ll use your ChatGPT account as the glue linking all of these together. That’s a network effect!

I’m not sure about this: I’m not sure that this vision will really work, and if it does, I’m not sure it gives one company dominance.

First, there’s a recurring fallacy in tech that you can abstract many different complex products into a simple standard interface – you could call this the ‘widget fallacy’. A decade ago people said ‘APIs are the new BD’, which was really the same concept, and it mostly failed. This is partly because there’s a huge gap between what looks cool in a demo and all of the work and thought that goes into the interaction models and workflows of the actual product: very quickly you run into an exception case and you need the actual product UI and a human decision. It’s also because the incentives are misaligned: no-one wants to be someone else’s dumb API call, so there’s an inherent tension between the distribution that an abstraction layer might give you (Google Shopping, Facebook Shopping, and now ChatGPT shopping) and your desire to control the experience and the customer relationship. Remember, after all, that all of Instacart’s profits come from showing ads.

Of course, this is just speculation – maybe it will all work this time! But the second problem is that if these are all separate systems plugged together by abstracted and automated APIs, is the user or developer locked into any one of them? If apps in the chatbot feed work, and OpenAI uses one standard and Gemini uses another, what stops a developer from doing both? This is much less code than making both an iOS and an Android app, and anyway, can’t you get the AI to write the code for you? What does that do to developer lock-in? Meanwhile, yes, maybe I’ll log into all of these services with my OpenAI or Gemini account, but does it necessarily make sense for me to log into Tinder, Zillow and Workday with the same account? And, again, do they want that?

Hmm.

As I’ve written this essay, I’ve returned again and again to terms like platform, ecosystem, leverage and network effect. These terms get used a lot in tech, but they have pretty vague meanings. Google Cloud, Apple’s App Store, Amazon Marketplace, and even TikTok are all ‘platforms’ but they’re all very different.

Maybe the word I’m really looking for is power. When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don’t want to do, and that’s really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does? Microsoft, Apple and Facebook had that. So does Amazon – this is a real flywheel.