{"id":623871,"date":"2026-04-23T18:51:11","date_gmt":"2026-04-23T18:51:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/623871\/"},"modified":"2026-04-23T18:51:11","modified_gmt":"2026-04-23T18:51:11","slug":"youre-about-to-feel-the-ai-money-squeeze","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/623871\/","title":{"rendered":"You\u2019re about to feel the AI money squeeze"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a profit. So if the users wanted its Claude AI to power their popular agents, they\u2019d have to start paying handsomely for the privilege.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cOur subscriptions weren\u2019t built for the usage patterns of these third-party tools,\u201d wrote Boris Cherny, head of Claude Code, on <a href=\"https:\/\/x.com\/bcherny\/status\/2040206440556826908?s=20\" rel=\"nofollow\">X<\/a>. \u201cWe want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The announcement was a sign of the times. 
Investors have poured hundreds of billions of dollars into companies like OpenAI and Anthropic to help them scale and build out their compute. Now, they\u2019re expecting returns. After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due \u2014 and downstream, users are beginning to feel the pinch.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Over the past few years, most top AI labs have introduced new subscription tiers to court power users. OpenAI and Anthropic shifted their pricing plans for enterprise. OpenAI introduced in-platform advertisements. Anthropic, of course, restricted third-party tools.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In some ways, this is a tale as old as time, and particularly, a clear echo of the tech boom of the \u201910s. Venture capitalists helped startups subsidize fast growth in all kinds of areas: ride-hailing apps, e-commerce, takeout and grocery delivery. Once companies cemented their power, they raised prices, added new revenue streams, and delivered a return to investors. Or they didn\u2019t \u2014 and they crashed and burned.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But AI companies have gone through more investor money at a faster pace than any other sector in recent history. AI companies have broken ground on data centers around the world, dedicating billions of dollars with promises of better models, lower costs, and AI for everyone. Even stemming the flow of losses will be difficult \u2014 let alone making the kind of money investors are hoping for. 
\u201cWhen you sink trillions of dollars into data centers, you\u2019re going to expect a return,\u201d said Will Sommer, a senior director analyst at Gartner, who specializes in economic forecasting and quantitative modeling.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cWhen you sink trillions of dollars into data centers, you\u2019re going to expect a return.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cIs the era of basically free or close-to-free AI kind of coming to an end here?\u201d said Mark Riedl, a professor in the Georgia Tech School of Interactive Computing. \u201cIt\u2019s too soon to say for certain, but there are some signs.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Gartner\u2019s Sommer studies long-term economic market trends related to generative AI, including calculating just how much money is at stake. Between 2024 and 2029, he said, Gartner estimates that capital investment in AI data centers will reach about $6.3 trillion \u2014 a \u201cmassive amount of money.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">To avoid a write-down of these assets, major AI model providers would ideally generate a return on invested capital (ROIC) of about 25 percent, Sommer said. (That\u2019s about what Amazon, Microsoft, and Google tend to earn on their overall capital investments.) On the other hand, if the returns fall below 12 percent, institutional capital loses interest \u2014 there\u2019s better money elsewhere, Sommer said. 
Below 7 percent, you\u2019re in write-down territory, which is \u201can unmitigated disaster for all of the investors in this technology,\u201d Sommer said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">To reach that bare minimum of 7 percent, Gartner forecasts that large AI companies would need to earn cumulatively close to $7 trillion in AI-driven revenue through 2029, which is close to $2 trillion per year by the end of the period. In order to achieve \u201chistoric returns,\u201d the providers would need to earn nearly $8.2 trillion in the same period.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI has already made $600 billion in spending commitments through 2030, the company said <a href=\"https:\/\/www.cnbc.com\/2026\/02\/20\/openai-resets-spend-expectations-targets-around-600-billion-by-2030.html\" rel=\"nofollow noopener\" target=\"_blank\">in February<\/a>, which Sommer says is already a \u201cmassive step down\u201d from the $1.4 trillion it had planned before. Based on OpenAI\u2019s revenue forecasts and potential compound annual growth, Sommer predicts that even in the best-case scenario, the lab would generate only a fraction of the revenue required for that 7 percent ROIC.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">How do model providers like OpenAI make this money? By selling access to what are known as tokens. A token is essentially a unit of data input that an AI model can understand and process \u2014 it could be text, images, audio, or something else. One token is generally worth about four characters in the English language \u2014 the word \u201cbathroom,\u201d for instance, would likely be processed as two tokens. 
One paragraph in English is generally about 100 tokens, and a 1,500-word essay may be about 2,050 tokens, per an OpenAI <a href=\"https:\/\/help.openai.com\/en\/articles\/4936856-what-are-tokens-and-how-to-count-them\" rel=\"nofollow noopener\" target=\"_blank\">estimate<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">To hit investors\u2019 revenue expectations, providers would need to process a \u201cmind-bending\u201d number of tokens, Sommer said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">By most measures, companies\u2019 numbers are already pretty big. Google announced it was processing 1.3 quadrillion tokens in October, for instance. If you add all the providers\u2019 estimates up, Sommer said, you get 100 to 200 quadrillion tokens a year. But to achieve the $2 trillion in annual revenue Gartner calculated, providers would need to be generating, by conservative estimates, a cumulative 10 sextillion tokens per year. (To make that slightly less abstract, a quadrillion has 15 zeros, and a sextillion has 21.) Even assuming a very generous profit margin of 10 percent per token, that would mean that token consumption between now and 2030 would need to grow by 50,000\u2013100,000x.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">To hit investors\u2019 revenue expectations, providers would need to process a \u201cmind-bending\u201d number of tokens<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Right now, constantly seeking more data centers and strapped for compute, companies aren\u2019t capable of processing this many tokens. Even if they could, they\u2019d face a problem: they\u2019re likely taking a loss on them. 
Sommer estimates that if you only account for the direct cost of infrastructure and electricity, \u201cevery company is making very reasonable margins on every token.\u201d But that margin is probably tighter or nonexistent with newer, more token-hungry models. And it\u2019s eaten up completely by indirect operation costs, like building out more compute and the \u201cungodly\u201d expense of constantly training the next big model.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cAs soon as you then add all of the infrastructure that needs to be built for the next generation of model, and you look at how these models are going to scale, it becomes increasingly untenable,\u201d Sommer said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Sommer predicts that many companies \u201cwon\u2019t be able to sustain their burn rate,\u201d and says market consolidation is virtually inevitable \u2014 in his eyes, no more than two large language model providers in any regional market will survive. And the era where nearly every service has a fairly generous unpaid tier probably isn\u2019t going to last.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cFor the [labs] that have a lot of users that were free, I think the question was never really if you\u2019d monetize the free tier but it was when, and how badly do you do it,\u201d Jay Madheswaran, cofounder of legal AI startup Eve, which is a client of both OpenAI and Anthropic, told The Verge.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Even if you do find a way to square the math, building customer loyalty can be just as complicated. 
Top labs are constantly leapfrogging each other on model debuts, feature releases, strategy shifts, hiring announcements, and more. It can be tough to stay on top long enough to corner any part of the market \u2014 engineers and developers are famous for switching which model they\u2019re using on any given day, and it\u2019s easy to do so.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">So labs are increasingly emphasizing the importance of locking users into their platform and tools. Anthropic, which primarily builds for enterprise clients, has been going <a href=\"https:\/\/www.theverge.com\/report\/874308\/anthropic-claude-code-opus-hype-moment\" rel=\"nofollow noopener\" target=\"_blank\">all in on its coding efforts<\/a>, and OpenAI has recently pledged to mirror Anthropic\u2019s focus on coding and enterprise, ahead of both companies reportedly racing each other to IPO by the end of 2026.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For now, that competition is benefiting end users. \u201cIt\u2019s an arms race where you cannot let up at all because the switching cost is zero,\u201d said Soham Mazumdar, cofounder and CEO of Wisdom AI, adding, \u201cAs a common man, I\u2019m going to be the winner longer-term.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In the early days of AI, the bulk of compute costs went to training initial models, while inference (or performing tasks) was cheaper. As models have advanced and systems have added features, however, inference has gotten far more resource-intensive. 
AI agents, or tools that ideally can complete complex, multistep tasks on your behalf without constant hand-holding, now use vastly more tokens than the basic chatbot models did a few years back.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Reasoning models, which increasingly power AI agents, are notoriously expensive on the inference side as well, said Georgia Tech\u2019s Riedl. These agents \u2014 such as popular open-source platform OpenClaw \u2014 are typically more efficient and effective than ones without reasoning, but they also expend far more tokens doing behind-the-scenes work the end user may not see. That may look like \u201cthinking through\u201d a lot of different potential paths, launching sub-agents to do portions of a task, or verifying the accuracy of different steps of the process.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cYou put in your one-sentence prompt\u2026 and it\u2019ll talk out loud to itself for thousands and thousands of tokens, thousands and thousands of words, maybe even tens of thousands when you get into coding,\u201d Riedl said, adding, \u201cIf you have thousands or millions of people using these things every single day, the inference costs of just the users generating tons and tons of tokens all the time really outweighs the training side of things.\u201d If model providers were making a straightforward profit on all these tokens and had the compute to handle them easily, that wouldn\u2019t be a problem for them \u2014 but as things stand, it\u2019s a strain.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cThe use cases have exploded, and we\u2019re out of capacity.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 
_1xwtict1\">\u201cAnybody who was building agents in the past couple of years sort of saw this coming,\u201d said Aaron Levie, CEO of Box, adding, \u201cThe use cases have exploded, and we\u2019re out of capacity.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Top AI labs have recently changed their policies on API usage and third-party tools \u2014 like Anthropic <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/907074\/anthropic-openclaw-claude-subscription-ban\" rel=\"nofollow noopener\" target=\"_blank\">essentially banning<\/a> the use of OpenClaw unless subscribers pay extra \u2014 due to the extra strain. \u201cYou\u2019ve got these tools that are basically just sitting as background processors on everyone\u2019s laptops and desktops, just continuously waking themselves up, generating some tokens, doing some stuff, and putting themselves back to sleep,\u201d says Riedl.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">And no matter what you\u2019re doing with a reasoning-model-powered AI agent, there are likely going to be wasted tokens \u2014 meaning times that an AI model goes down a non-useful path and then backtracks, or checks on how something is going but doesn\u2019t change anything, or even pauses to write itself a poem. In an era where labs are likely losing money on some tokens and companies are strapped for compute, the industry is trying to reduce wasted tokens and build more focused and targeted models.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Although it may be good for both paying customers and AI labs alike to make models use fewer tokens, it ironically works against the mission of massively increasing token usage. 
As Gartner\u2019s Sommer puts it, pricing models may change significantly down the line, but right now, there\u2019s a \u201cnarrow space on the treadmill\u201d between short- and long-term goals.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Add this all up, and big AI companies are at a transition point: they\u2019ve attracted huge numbers of users by offering free access, and now they need to keep those users while charging a lot more. \u201cOn one hand, they want to see more tokens being generated but they have to either suck up the costs, which they can sort of do as long as venture capital is flowing, or pass the costs back on to [customers],\u201d Riedl said. \u201cMaybe the economics are a little upside down right now.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">These days, OpenAI and Anthropic <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/chatgpt-the-super-assistant-era-bg2-guest-interview\/id1727278168?i=1000755428126\" rel=\"nofollow noopener\" target=\"_blank\">are<\/a> <a href=\"https:\/\/www.theregister.com\/2026\/04\/16\/anthropic_ejects_bundled_tokens_enterprise\/\" rel=\"nofollow noopener\" target=\"_blank\">often<\/a> <a href=\"https:\/\/help.openai.com\/en\/articles\/8265053-what-is-chatgpt-enterprise\" rel=\"nofollow noopener\" target=\"_blank\">weighing<\/a> the advantages of older flat-rate subscription plans and ones with metered fees. 
Both companies\u2019 enterprise plans are now token-based, since usership is \u201cuneven,\u201d as Andrew Filev, founder of Zencoder, called it \u2014 one person may use it once or twice a week for a few minutes, while another is running five agents in the background around the clock.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">For consumer chatbots, some monetization is taking the form of advertising<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In consumer chatbots, some model makers are trying to mitigate this with advertising. OpenAI recently introduced ads within ChatGPT, which show up as a separate sidebar, and it\u2019s <a href=\"https:\/\/digiday.com\/marketing\/openai-builds-tool-to-track-whether-chatgpt-ads-convert\/\" rel=\"nofollow noopener\" target=\"_blank\">reportedly<\/a> working on a tool to track how well those ads work. (Anthropic famously <a href=\"https:\/\/www.theverge.com\/news\/874084\/ai-chatgpt-claude-super-bowl-ads-openai-anthropic\" rel=\"nofollow noopener\" target=\"_blank\">decried the move<\/a> in its 2026 Super Bowl ads.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But for companies that build tools on top of models like GPT-5 or Claude Opus, the price of tokens is going up, and the extra cost is largely trickling down to their customers. Multiple tech companies The Verge spoke with said they, or their customers, are changing strategies to offset the new pricing. 
Some are considering moving fully or partially to open-source models, and some are devoting considerable time and resources to evaluating how well expensive high-end models perform on certain tasks compared to cheaper alternatives.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">David DeSanto, CEO of software company Anaconda, recently returned from a five-week trip around the world speaking to customers. He said that many were moving to self-host AI models \u2014 deploying their own within Amazon Bedrock or Google\u2019s Vertex AI to have more control over the supply chain \u2014 or changing to open-source or open-weight models for a lot of their needs, since many such models have significantly improved on benchmarks as of late. Some companies also worry about the security of sending IP to a commercial frontier lab, so they only use ChatGPT or Claude models for \u201cmission-critical applications,\u201d he said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cEveryone I spoke to had some version of this problem \u2014 their token usage has gone up, so their usage-based billing cost has gone up, or the tier they were on no longer has the same cap, and now they\u2019re having to go to a more expensive tier to try to keep the same amount of usage per month as part of their flat rate,\u201d DeSanto said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Eve, a company that sells software to plaintiff lawyers, is constantly balancing quality and token costs, Madheswaran said \u2014 especially since Eve\u2019s token usage has gone up 100x year-over-year to date. 
So it\u2019s always switching between open-source models and varying ones from Anthropic and OpenAI.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But even a 1 percent regression in quality of output negatively impacts Eve\u2019s customers \u201cquite significantly,\u201d Madheswaran said, which is why Eve spends a lot of internal resources tracking model quality. The company typically finds itself using the newer, more expensive reasoning models about 25\u201330% of the time, splitting the rest of its usage between Eve\u2019s own open-source variants and smaller, cheaper models from leading labs. Madheswaran said the company has found that some cheap models are just as accurate as expensive ones, depending on the query.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWhat open source is really doing is it\u2019s putting pressure on these companies to make their cheaper models cheaper because their profit margins there are much, much better,\u201d Madheswaran said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cWhat open source is really doing is it\u2019s putting pressure on these companies to make their cheaper models cheaper.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Wisdom AI, which provides AI-powered data analysis, hasn\u2019t had to pass on cost increases yet. The team is testing out how different models perform on different types of tasks, and then budgeting accordingly. Mazumdar said it\u2019s been testing out Cerebras, which is popular for open-weight models, lately, \u201cin anticipation of how expensive things will get\u201d from the premier labs like OpenAI and Anthropic. 
\u201c[Big AI companies] have been giving this away for free,\u201d Mazumdar said. \u201cWhat they\u2019re trying to do is, the moment they sense there\u2019s an enterprise at play, or there\u2019s propensity to pay, they absolutely jack up the prices drastically.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But he said there\u2019s always a cost, especially on the coding front. \u201cThe reality is this: If you\u2019re doing coding of any kind, then the open-source models simply don\u2019t come close, and that\u2019s the unfortunate reality of where we are today,\u201d he said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Box\u2019s Levie believes the changes will play out over the next 24 months. He said the VC-subsidized era of AI was likely necessary for growth \u2014 after all, if two companies with largely equal products are competing for the same customers, and one is offering a (subsidized) product at a lower price, the cheaper one will obviously win out, at least in the short term. But now it\u2019s time to build more efficiency into the system, and not everyone is going to survive it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cThe size of the market is so large that I think it actually will sort of all work out,\u201d Levie said. 
\u201cAt an individual company level, you have to decide: Can you keep up with this flywheel, or are you going to be priced out based on an inability to raise capital or an inability to make the model more efficient for your tasks?\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Eve\u2019s Madheswaran thinks the industry will soon move from focusing on the so-called \u201cbest\u201d model to what works best for a business\u2019s personalized, niche use cases. \u201cThat\u2019s my guess, and obviously I\u2019m betting our entire company on it.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Gartner\u2019s Sommer likens the whole scenario to what he called the \u201cstegosaurus paradox.\u201d When scientists first discovered the stegosaurus fossil, he said, they didn\u2019t understand how a large body could be supported by such a small head with a tiny mouth \u2014 and the theory they developed was that the stegosaurus would need to constantly be eating, and eating a highly nutritious diet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWe see AI as kind of being the same deal,\u201d Sommer said \u2014 for the stegosaurus (AI labs) to survive, providers need to find more food for it (the entire global economy, not just the tech market) and it has to be highly nutritious, too (i.e., providers need to be able to earn a margin from it and stop subsidizing). If the stegosaurus paradox isn\u2019t resolved, and the mouth is \u201ctoo small for the body,\u201d he said, it will lead to write-downs, falling valuations, dried-up financing, and a broad resetting of expectations for AI worldwide. 
Therefore, Sommer said, a sustainable business model \u201cwould require that genAI be infused in everything from billboards to checkout kiosks,\u201d with providers taking a cut of all of those transactions.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdya _1xwtict1\">\u201cThe free era was really a land grab \u2014 it\u2019s a common strategy used by startups,\u201d said Eve\u2019s Madheswaran. \u201cThat\u2019s just not a business model. You can\u2019t do that for too long.\u201d<\/p>\n<p>Hayden Field<\/p>\n","protected":false},"excerpt":{"rendered":"Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent 
tool,&hellip;\n","protected":false},"author":2,"featured_media":623872,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,45,49,48,1660,61],"class_list":{"0":"post-623871","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-business","12":"tag-ca","13":"tag-canada","14":"tag-report","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/623871","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=623871"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/623871\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/623872"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=623871"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=623871"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=623871"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}