{"id":197622,"date":"2025-12-22T06:34:15","date_gmt":"2025-12-22T06:34:15","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/197622\/"},"modified":"2025-12-22T06:34:15","modified_gmt":"2025-12-22T06:34:15","slug":"why-nvidia-maintains-its-moat-and-gemini-wont-kill-openai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/197622\/","title":{"rendered":"Why Nvidia maintains its moat and Gemini won\u2019t kill OpenAI"},"content":{"rendered":"<p>Two prevailing narratives have driven markets recently. The first is that Nvidia Corp.\u2019s moat is eroding primarily thanks to graphics processing unit alternatives such as tensor processing units and other application-specific integrated circuits.<\/p>\n<p>The second is that Google LLC generally and its Gemini artificial intelligence model specifically is gaining share, will dominate AI search and ultimately beat OpenAI Group PBC. We believe both of these propositions are overstated and unlikely to materialize as currently envisioned by many. Specifically, our research indicates that Nvidia\u2019s GB300 and the follow on Vera Rubin will completely reset the economics of AI, conferring continued advantage to Nvidia. Furthermore, Nvidia\u2019s volume lead will make it the low cost producer and, by far, the most economical platform to run AI at scale \u2013 for both training and inference.<\/p>\n<p>As it pertains to Google, it faces, in our view, the ultimate innovator\u2019s dilemma because its search is tightly linked to advertising revenue. If Google moves its advertising model to a chatbot-like experience, its cost to serve search queries goes up by 100 times. The alternative is to shift its business model toward a more integrated shopping experience. But this requires more than 10 blue links thrown at users. 
Rather, it mandates a new trust compact with its users and advertisers, which Google does not currently possess, even with Gemini\u2019s recent success.<\/p>\n<p>Despite the criticisms of ChatGPT, OpenAI in our estimation is well on the way to disrupting today\u2019s online experience by emphasizing trusted information over pushing ads. The bottom line is that the two early catalysts of the AI era, Nvidia and OpenAI, remain in a strong position in our view. Though lots can change, the current narrative around these two firms is likely to shift when GB300 adoption ramps.<\/p>\n<p>In this, our\u00a0<a href=\"https:\/\/thecuberesearch.com\/podcasts\/breaking-analysis-with-dave-vellante\/\" rel=\"nofollow noopener\" target=\"_blank\">300th Breaking Analysis<\/a>, we set forth our thinking around what we believe is a misplaced narrative in the market. We\u2019ll explain what we think the market is missing and why Nvidia\u2019s forthcoming product lineup will reset the narrative. We\u2019ll also look at the economics of search, large language models and chatbots and share why OpenAI, while facing many challenges (competition, commitments, uncertainty), is in a stronger position than many have posited; and why Google, though clearly a leader in AI, remains challenged to preserve what may be the greatest business in the history of the technology industry.<\/p>\n<p>Why TPUs don\u2019t break Nvidia\u2019s AI factory moat<\/p>\n<p>We believe the core issue with TPUs isn\u2019t whether they\u2019re \u201cgood\u201d chips \u2014 they are. The issue in our view is broad-based architectural fit for the next phase of AI, where frontier-scale workloads are increasingly communication-heavy and bandwidth-hungry and require systems that can scale to very large clusters without collapsing under their own coordination overhead. 
In our assessment, TPUs were designed in an era when bandwidth was expensive and hard to deliver, and that design center shows up as models scale and workloads diversify.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319777\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><br \/>\nWhat TPUs were built for \u2014 and where the ceiling shows up<\/p>\n<p>TPUs are well-suited for lower-bandwidth AI and have proven effective in production contexts such as search. They can do certain training well, and they\u2019ve been associated with important early milestones. But as models get larger and the work becomes more distributed, our research indicates the TPU design runs into practical constraints on expansion and on the amount of bandwidth available within the architecture. In our view, that\u2019s a key reason the TPU approach hasn\u2019t become the broadly replicated blueprint across the industry.<\/p>\n<p>Why frontier training looks different from \u2018TPU-friendly\u2019 workloads<\/p>\n<p>We believe leading frontier efforts increasingly demand an architecture optimized for high bandwidth and scale \u2014 the kind of system design that enables \u201cGPU factories,\u201d where very large numbers of accelerators can be connected and kept productively utilized.<\/p>\n<p>When we talk about the requirements for AI factories, we highlight below three key factors:<\/p>\n<p>Near-linear bisection bandwidth growth: Bisection bandwidth is essentially the throughput across the \u201cmiddle\u201d of the network \u2014 how much data can move between two halves of the system. 
As workloads become more complex and more distributed, the system needs that cross-fabric bandwidth to grow smoothly as you add more devices.<br \/>\nMinimal collective degradation: As you scale out, collective communication patterns can become the bottleneck. The system has to avoid performance falling off a cliff as more nodes participate.<br \/>\nSustained real-world utilization (~50%): The goal isn\u2019t theoretical peak. It\u2019s keeping the system doing useful work at scale, consistently, in production-like conditions.<\/p>\n<p>In our view, architectures designed around higher-bandwidth, scalable interconnects are better aligned with those requirements.<\/p>\n<p>The \u2018single-vendor pod\u2019 constraint<\/p>\n<p>Our premise is that TPUs remain a single-vendor architecture with a topology that effectively forms pods \u2014 a tightly coupled unit presented as a single system. That approach was elegant for its time and echoed other historic designs (for example IBM Blue Gene, Cray) intended to solve the \u201chow do we connect everything\u201d problem. But we believe the limitations show up in two places:<\/p>\n<p>It doesn\u2019t scale the way frontier workloads increasingly require;<br \/>\nIt doesn\u2019t deliver the net bandwidth needed for the most communication-heavy, frontier-class model development.<\/p>\n<p>None of this makes TPUs irrelevant. In our opinion, TPUs remain extremely useful and attractive \u2014 particularly for more bounded workloads \u2014 but usefulness is not the same thing as being the dominant foundation for next-generation AI factories. And in our framework, it\u2019s definitely not the platform that will erode Nvidia\u2019s moat.<\/p>\n<p>The market narrative is oversimplified<\/p>\n<p>We believe the popular storyline \u2014 \u201cModel X is trained on TPUs, therefore TPUs are the future\u201d \u2014 misses the reality. 
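To make the three AI-factory factors above concrete, here is a minimal back-of-envelope model (our own illustrative numbers, not measured system data) showing how sustained utilization erodes when bisection bandwidth grows sub-linearly with device count:

```python
# Illustrative model of the three AI-factory factors above; all numbers are
# our own assumptions for the sketch, not measured platform data.

def per_device_bandwidth(n_devices, base_gbps=100.0, scaling_exp=1.0):
    """Cross-fabric bandwidth each device sees.

    scaling_exp=1.0 models near-linear bisection bandwidth growth (the
    per-device share stays flat); lower exponents model fabrics whose
    cross-sectional bandwidth fails to keep pace with device count.
    """
    bisection_gbps = base_gbps * (n_devices ** scaling_exp) / 2  # across the "middle"
    return bisection_gbps / (n_devices / 2)                      # per-device share

def sustained_utilization(n_devices, compute_ms=100.0, sync_gbits=8.0,
                          scaling_exp=1.0):
    """Fraction of wall-clock time spent on useful compute per training step."""
    bw = per_device_bandwidth(n_devices, scaling_exp=scaling_exp)
    comm_ms = sync_gbits / bw * 1000.0  # time to exchange each device's gradients
    return compute_ms / (compute_ms + comm_ms)

for exp, label in [(1.0, "near-linear bisection scaling"),
                   (0.7, "sub-linear (N^0.7) scaling")]:
    util = sustained_utilization(65_536, scaling_exp=exp)
    print(f"{label}: {util:.0%} sustained utilization at 65,536 devices")
```

With near-linear scaling, each device's cross-fabric share stays flat and utilization holds near the ~50% target; with sub-linear scaling, communication time swamps compute as the cluster grows.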
Our view is that some major model training has indeed leveraged TPUs, but the data points toward a mixed approach where GPU-class architectures become increasingly necessary for frontier-scale, communication-heavy work.<\/p>\n<p>Our research indicates there\u2019s also a pragmatic factor. Specifically, when accelerators are constrained, it\u2019s rational to push existing assets to their limits. In that context, using available TPUs aggressively is not a statement that TPUs are the end state \u2014 it\u2019s an optimization under supply constraints.<\/p>\n<p>Key takeaway<\/p>\n<p>We\u2019re not bearish on TPUs. They are mature, and the engineering behind them is impressive. But we believe Nvidia\u2019s moat is reinforced by an end-to-end architecture designed for bandwidth, scale, and sustained utilization \u2014 the attributes that matter most as AI factories move from impressive single-system demos to large-scale production infrastructure.<\/p>\n<p>Why all the fuss about TPUs? Supply constraints, CoWoS and the market reality<\/p>\n<p>We believe the recent TPU enthusiasm is less about a structural shift away from Nvidia and more about the fact that volumes are constrained, demand is outstripping supply, and every hyperscaler is operating under scarcity. In this environment, buyers and builders will use whatever credible compute they can get \u2014 and that dynamic is amplifying attention on the availability and capabilities of TPUs, as well as other alternatives.<\/p>\n<p>CoWoS is the gating factor<\/p>\n<p>The biggest constraint our research points to right now is packaging capacity.\u00a0<a href=\"https:\/\/3dfabric.tsmc.com\/english\/dedicatedFoundry\/technology\/cowos.htm\" rel=\"nofollow noopener\" target=\"_blank\">CoWoS<\/a>\u00a0(chip-on-wafer-on-substrate) is a Taiwan Semiconductor Manufacturing Co. packaging technology that takes chiplets from the wafer and integrates them onto a substrate so they can be connected with very high-speed communication. 
In our view, this is foundational to modern AI systems that depend on fast movement between chips and across complex multichip architectures.<\/p>\n<p>The key point is that when CoWoS capacity is tight, it constrains the output of advanced AI accelerators regardless of how strong demand is.<\/p>\n<p>What the CoWoS consumption chart tells us<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319796\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-1-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>The chart above shows projected consumption across Nvidia, Broadcom Inc., Advanced Micro Devices Inc. and others \u2014 and importantly, it reflects\u00a0all chips, not only AI chips. That matters because portions of the non-Nvidia consumption are serving other product categories and process needs. But the bottom line remains that AI chips are being governed by the same packaging bottleneck.<\/p>\n<p>In the data referenced, total CoWoS capacity expands materially over time with Nvidia locking up more than 60% of the volume:<\/p>\n<p>2025:\u00a0652<br \/>\n2026:\u00a01,150<br \/>\n2027:\u00a01,550<\/p>\n<p>At the same time, the compute and system efficiency of the leading platforms improves as Nvidia transitions through architectural steps \u2014 from\u00a0GB200 to GB300\u00a0and then\u00a0Rubin\u00a0\u2014 alongside improvements in switching and overall system design. In our opinion, the market should view these trends in tandem \u2013 that is, capacity increases, but so does the performance per system, which reinforces the economics for the vendors that can both secure volume\u00a0and\u00a0move fastest down the experience curve. 
The leading vendor, by far, in this scenario is Nvidia.<\/p>\n<p>Nvidia\u2019s pre-buys translate into share and cost advantage<\/p>\n<p>We believe one of the most underappreciated points in the current narrative is that Nvidia has effectively pre-bought and secured meaningful CoWoS capacity. As a result, even as the pie expands, Nvidia is positioned to maintain significant share over the planning horizon discussed \u2014 with the projection that by 2027 it still holds roughly\u00a061%\u00a0of the referenced market.\u00a0For\u00a0AI chips specifically,\u00a0our estimate is that Nvidia will maintain closer to ~80% of the market.<\/p>\n<p>Our view is that share will be determined by unit economics. Our premise is that Nvidia has volume leadership and has locked up a critical bottleneck input, which will confer structural cost advantage and flywheel effects to the company.<\/p>\n<p>Why hyperscalers are mixing architectures<\/p>\n<p>In a supply-constrained environment, hyperscalers will pursue a blended strategy. In the case of Google, it will use TPUs where they fit, and GPUs where they\u2019re required, to generally maximize access to any capable compute. We believe that\u2019s what\u2019s driving much of the current TPU buzz \u2014 not a belief that TPUs will broadly displace GPUs for frontier-scale, communication-heavy workloads.<\/p>\n<p>We also believe it\u2019s unlikely that a major hyperscaler (Google in this case) will broadly sell its proprietary accelerators to direct competitors in a way that creates a real external market. Claims and rumors may circulate, but in our opinion the more plausible driver behind \u201cTPUs in the wild\u201d narratives is ecosystem pressure from partners (for example Broadcom) and Meta Platforms Inc. (looking for any advantage right now). 
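A quick back-of-envelope using the capacity projections cited above (652, 1,150 and 1,550 units in 2025 through 2027, with Nvidia locking up more than 60% and roughly 61% by 2027) shows how Nvidia's absolute allocation grows even as the pie expands. The 2025 and 2026 share values below are our assumption from the "more than 60%" figure:

```python
# Back-of-envelope on the CoWoS projections cited above. Units follow the
# referenced data; the 2025-2026 share values are assumptions from ">60%".
cowos_capacity = {2025: 652, 2026: 1150, 2027: 1550}  # total projected capacity
nvidia_share = {2025: 0.60, 2026: 0.60, 2027: 0.61}   # assumed share of volume

for year, total in cowos_capacity.items():
    nvidia = total * nvidia_share[year]
    print(f"{year}: total {total:>5,} | Nvidia ~{nvidia:,.0f} | others ~{total - nvidia:,.0f}")
```

Even as total capacity grows roughly 2.4x from 2025 to 2027, Nvidia's implied allocation grows at about the same rate, which is the "pie expands but share holds" dynamic described above.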
In short, we don\u2019t see this as a strategic decision by Google to become a true merchant silicon provider.<\/p>\n<p>Volume matters, and it compounds into cost leadership<\/p>\n<p>We believe the most important takeaway from this segment is the connection between the following three factors:<\/p>\n<p>Volume leadership\u00a0(which has always mattered in semiconductors and other scaled markets);<br \/>\nExperience curve benefits\u00a0(learning, yield, supply-chain leverage, system tuning); and<br \/>\nControl of constrained inputs\u00a0(CoWoS capacity being the prime example).<\/p>\n<p>In our view, this combination positions Nvidia, with near-term platforms like\u00a0GB300\u00a0and especially\u00a0Rubin, to become the low-cost producer of tokens versus alternatives \u2014 not simply because of peak performance claims, but because scale and secured capacity will translate into superior economics.<\/p>\n<p>The shortage won\u2019t last forever \u2014 but it\u2019s not ending tomorrow<\/p>\n<p>We believe the market is in a phase where every credible AI vendor can sell everything they can build because supply is scarce. But our research indicates this will change as capacity catches up over the next couple of years. Historically, semiconductors tend to swing from undersupply to oversupply \u2014 timing this is hard, but our research suggests there is still meaningful runway, and that the near term (including 2026) remains supply-constrained rather than surplus-driven.<\/p>\n<p>Net-net:\u00a0TPUs are getting attention because the market supply is short. CoWoS is a core bottleneck. And Nvidia\u2019s ability to lock up capacity and ride the experience curve reinforces both share and cost position as the cycle matures.<\/p>\n<p>In a recent investor podcast, Gavin Baker went deep into the\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=cmUo4841KQw&amp;t=1s\" rel=\"nofollow noopener\" target=\"_blank\">economics of GPUs and TPUs<\/a>. 
It\u2019s worth listening to the entire conversation. We\u00a0<a href=\"https:\/\/video.cube365.net\/c\/988523?\" rel=\"nofollow noopener\" target=\"_blank\">pulled a clip from that discussion<\/a>, which succinctly articulates the economic shift coming in the near term.<\/p>\n<p>What follows is our assessment of that conversation blended with our premise:<\/p>\n<p>Low-cost production, learning curves and why the advantage is shifting back to Nvidia<\/p>\n<p>We believe the \u201clow-cost producer\u201d framing is critical, but it\u2019s often misunderstood and not applied rigorously. In our view, being the low-cost producer has always mattered in scaled markets \u2014 the nuance is whether people mean\u00a0unit cost,\u00a0delivered price\u00a0or\u00a0economic margin structure. When viewed through that perspective, Google\u2019s position as the current low-cost producer for AI chips is increasingly fragile as the stack shifts and as the economics of TPU\/ASIC supply chains become clearer.<\/p>\n<p>Google\u2019s current cost position is real \u2014 but not sustainable<\/p>\n<p>Our research aligns with Gavin Baker\u2019s narrative, which indicates Google has enjoyed meaningful cost advantages in parts of its AI stack and has leaned into that position by aggressively approaching the market. Google can \u201cbomb\u201d AI with low-cost capacity because when you can drive lower unit costs, you can push more supply into the market and press competitors on price and availability.<\/p>\n<p>But we believe that advantage is highly dependent on the underlying performance curve and the economics of the hardware supply chain \u2014 and both are moving.<\/p>\n<p>Blackwell as an industry learning platform<\/p>\n<p>We believe one under-appreciated structural advantage for Nvidia is the learning loop created when a major deployment runs infrastructure at extreme scale. 
In our view, large-scale Blackwell deployments \u2014 particularly the kind of \u201cpush it to the limits\u201d rollout that X.ai is driving \u2014 act as a forcing function that exposes bugs, tightens system performance and accelerates resilience. Nvidia learns from this and then propagates those learnings across its customer base, which translates into a time-to-market advantage that is difficult for alternatives to match unless Nvidia stumbles operationally.<\/p>\n<p>In our opinion, this dynamic will compound as scale finds issues faster, fixes get distributed broadly and the platform improves as more customers run it in production-like conditions.<\/p>\n<p>Scaling laws are intact \u2014 and that raises the premium on throughput and efficiency<\/p>\n<p>Gemini 3, as Baker points out, has demonstrated that the scaling laws are still working. Our research indicates that, if scaling continues to pay off, then the market\u2019s center of gravity shifts toward whoever can deliver the most training and inference work per dollar, per watt and per unit time \u2014 at scale.<\/p>\n<p>That\u2019s where Nvidia\u2019s roadmap is positioned to reassert its dominance, as GB300s are drop-in compatible with GB200. The delta from Hopper to Blackwell was nontrivial. We\u2019ve\u00a0<a href=\"https:\/\/thecuberesearch.com\/298-breaking-analysis-worker-bee-agi-why-aws-is-betting-on-practical-agents-not-messiah-agi\/\" rel=\"nofollow noopener\" target=\"_blank\">previously reported\u00a0<\/a>the lower reliability of GB200-based racks due to new cooling requirements, rack densities and the overall complexity of the transition. But early reports suggest GB300-based configurations at neoclouds are performing exceptionally well with low friction for upgrades from GB200-based infrastructure. 
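The "work per dollar, per watt and per unit time" framing above can be reduced to a simple figure of merit. The sketch below is purely illustrative, and the platform names, throughput, cost and power inputs are invented placeholders rather than measured GB300 or TPU figures:

```python
# Figure-of-merit sketch for "work per dollar, per watt, per unit time".
# All platform names and numbers below are invented placeholders.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    tokens_per_sec: float  # sustained throughput per system
    cost_per_hour: float   # amortized $ per system-hour (capex + opex)
    watts: float           # sustained power draw per system

    def tokens_per_dollar(self) -> float:
        return self.tokens_per_sec * 3600 / self.cost_per_hour

    def tokens_per_joule(self) -> float:
        return self.tokens_per_sec / self.watts

platform_a = Platform("A", tokens_per_sec=50_000, cost_per_hour=90.0, watts=120_000)
platform_b = Platform("B", tokens_per_sec=30_000, cost_per_hour=70.0, watts=100_000)

for p in (platform_a, platform_b):
    print(f"{p.name}: {p.tokens_per_dollar():,.0f} tokens/$ | {p.tokens_per_joule():.3f} tokens/J")
```

On these invented inputs, the higher-cost system still wins on both metrics because its throughput scales faster than its cost and power, which is the low-cost token producer argument in miniature.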
This compatibility accelerates deployment velocity and improves the probability that customers stay on the Nvidia upgrade path rather than detouring into architectural forks.<\/p>\n<p>And the economics favor Nvidia.<\/p>\n<p>TPU economics: The Broadcom dependency inherent in margins<\/p>\n<p>We believe the economics of the TPU\/ASIC stack are often overlooked. A critical constraint is that if a large portion of value accrues to a supplier (for example Broadcom as Google\u2019s ASIC partner), then the \u201clow-cost producer\u201d claim becomes more complicated. Gavin Baker estimates that, at scale, roughly\u00a0$15 billion of Google\u2019s $30 billion TPU revenue goes to Broadcom, which eats a major share of the margin pool. Baker used a simple analogy: Google is like the architect and Broadcom is the builder \u2013 it manages the TSMC relationship. He correctly points out that Apple has taken control of both the front-end design and all the back-end work, including managing TSMC, because at its scale, that level of vertical integration makes sense.<\/p>\n<p>This dynamic puts pressure on Google\u2019s behavior over time. Even if TPU unit economics look attractive in isolation, the supplier margin structure can erode the sustained ability to undercut the market \u2014 especially as Nvidia\u2019s performance-per-system improves. Baker stated that the OpEx for Broadcom\u2019s entire semi division is $5 billion, so at scale, Google paying Broadcom $15 billion may become less attractive.<\/p>\n<p>Rubin widens the gap<\/p>\n<p>We believe the roadmap implication is that as GB300 resets the cost curve and Rubin extends it again, the space between Nvidia and TPU\/ASIC alternatives will increase dramatically. In our opinion, that doesn\u2019t make TPUs irrelevant; it makes them situational. As Nvidia\u2019s platform becomes a lower-cost token producer at scale, alternatives are forced into narrower lanes or into \u201cuse what you have\u201d strategies.<\/p>\n<p>Nvidia vs. 
Google\/Broadcom and Seagate vs. Quantum\/MKE<\/p>\n<p>This topic is reminiscent of a battle in the disk drive business in the 1980s. Seagate was a leading manufacturer of hard drives at the time and pursued a vertically integrated strategy, manufacturing its own heads, media and the drives themselves. Quantum, meanwhile, was struggling with manufacturing quality but revived its business by outsourcing production to MKE, a world-class Japanese manufacturer. Though this required tight engineering coordination between designers and manufacturing, it addressed Quantum\u2019s \u201cback-end\u201d challenges. Quantum began to gain share rapidly and its stock price rose.<\/p>\n<p>This author, in a conversation with industry legend Al Shugart, CEO of Seagate, asked if this was a new business model that had merit. Shugart said simply, \u201cWhen you have to pay someone to make your product, you make less.\u201d He further intimated that long-term, when the industry consolidated, he would be the last man standing. At the time there were around 80 hard drive manufacturers. Today there are three, with Seagate the most valuable.<\/p>\n<p>Battle of the LLMs<\/p>\n<p>There\u2019s a tight linkage between silicon and models. In this next section we move further up the stack and examine the recent narrative around Google, Gemini and OpenAI.<\/p>\n<p>Looking forward: Models converge and services differentiate<\/p>\n<p>Our research indicates the competitive battle is moving up the stack. Though there\u2019s excitement about the rapid improvement in model capability \u2014 and we believe larger, more complete models will continue to emerge as AI factories ramp \u2014 a key strategic premise of ours is that model quality alone won\u2019t be the enduring differentiator. 
In our view, the center of gravity shifts to:<\/p>\n<p>The software ecosystem;<br \/>\nThe services wrapped around models; and<br \/>\nThe ability to operationalize those models reliably and economically.<\/p>\n<p>As laid out above, even if Google can claim periods of cost advantage today, we believe Nvidia\u2019s platform learnings, drop-in upgrade path and performance roadmap \u2014 combined with the margin realities embedded in TPU supply chains \u2014 shift the \u201clow-cost producer\u201d advantage back toward Nvidia over the next two cycles and likely beyond.<\/p>\n<p>The Gemini user-growth narrative misses the real setup: Google\u2019s innovator\u2019s dilemma<\/p>\n<p>We believe it\u2019s fair to say Gemini 3 has had a meaningful impact on the AI conversation, particularly in reinforcing that scaling laws remain intact. But we also believe some of the widely circulated charts \u2014 especially those implying ChatGPT is \u201cleveling off\u201d while Gemini is \u201cexploding\u201d \u2014 are potentially misleading if they\u2019re used as a proxy for durable competitive advantage or for what matters most economically.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319797\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-1-1-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>Why the MAU charts can distort what\u2019s happening<\/p>\n<p>Our view is that user growth charts are easy to over-interpret because they compress very different distribution mechanics into a single line. A product can appear to \u201cexplode\u201d in monthly active users from bundling, defaults, integration points or placement \u2014 while another can appear to \u201cflatten\u201d even as usage quality, monetization and ecosystem affinity remain strong. 
The data suggests there\u2019s more nuance here than the headline narrative implies.<\/p>\n<p>The real context: Google is fundamentally an advertising profit engine<\/p>\n<p>The more important factor, in our opinion, is that Alphabet\u2019s economic center of gravity is still advertising \u2014 particularly Search and related advertising properties. As shown below, Google generates an enormous amount of operating profit from advertising, with very large operating margins.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319798\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-2-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>Cloud, though meaningful in absolute dollars and improving in profitability, is still minor relative to the scale of Search-driven operating profit. Even at rising operating margin levels, the cloud business is small compared with the hundreds of billions in operating profit and cash generation tied to the advertising machine. And the \u201cOther Bets\u201d category is economically immaterial in the context of the overall profit engine.<\/p>\n<p>This creates a classic innovator\u2019s dilemma<\/p>\n<p>Google has arguably the world\u2019s best technology marketplace in the form of Search \u2014 extraordinary query volume, an unparalleled advertising monetization model and a highly efficient compute foundation that reinforces profitability. The system works, and it works at massive scale.<\/p>\n<p>The dilemma, in our view, is how Google migrates from that dominant model to something \u201cmore complete\u201d without undermining the profit engine that made it dominant in the first place. The data suggests the challenge is not whether Google can build strong AI models \u2014 it clearly can. 
The challenge is whether it can evolve the product and business model of Search in a way that preserves economics while moving toward a new interaction paradigm.<\/p>\n<p>What to watch<\/p>\n<p>Our strategic questions for Google are:<\/p>\n<p>Can Google transition from its current search-and-ads model to a more complete AI-native experience\u00a0without\u00a0cannibalizing the very margins and monetization dynamics that fund its advantage?<br \/>\nCan it do that while maintaining the operating discipline and low-cost compute foundation that make the existing machine so powerful?<\/p>\n<p>Gemini\u2019s momentum is impressive, but we believe the larger story is economic and structural. Google\u2019s strength in search creates the very dilemma it has to solve to lead the next phase.<\/p>\n<p>Engagement, not just MAUs: Why \u2018user minutes\u2019 changes the economics of AI + ads<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319785\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/Screenshot-2025-12-21-at-11.58.57\u202fAM.png\"   alt=\"\" width=\"890\" height=\"936\"\/><\/p>\n<p>We believe the earlier monthly active user charts tell an incomplete story that can be misleading if used to infer leadership or monetization power. The more instructive view is engagement as shown above. Specifically, the SimilarWeb data shows\u00a0user minutes on the web\u00a0\u2014 because time spent is a better proxy for intensity of use, dependency, and ultimately monetizable opportunity.<\/p>\n<p>ChatGPT\u2019s lead shows up more clearly in user minutes<\/p>\n<p>ChatGPT maintains a meaningful lead in user minutes, even as Gemini is growing quickly. 
The chart also shows fast growth from other entrants (for example DeepSeek and Grok), but in our view the competitive reality remains concentrated and it\u2019s primarily a\u00a0two-horse race\u00a0for the bulk of user attention between ChatGPT and Gemini.<\/p>\n<p>The key metric is not only \u201cwho is growing,\u201d but\u00a0who is capturing time.<\/p>\n<p>Why minutes matter more than \u2018users\u2019 in the advertising context<\/p>\n<p>We believe the implications become more acute when you overlay Google\u2019s economic model. Google\u2019s profit engine is built on advertising monetization tied to search behavior \u2014 high volume, low marginal cost and well-optimized conversion funnels. That machine depends on serving an enormous number of ad opportunities efficiently.<\/p>\n<p>But if the interaction model shifts toward ChatGPT-style experiences \u2014 richer answers, longer sessions and more compute-heavy responses \u2014 the cost structure changes dramatically.<\/p>\n<p>The compute cost of richer answers is the problem<\/p>\n<p>Our research indicates each unit of engagement in an assistant-style model is materially more compute-intensive than the classic search model. 
The point made above is that for a comparable \u201cuser minute,\u201d the assistant model can consume on the order of\u00a010 times more compute resources\u00a0to generate substantially richer output for the end user.<\/p>\n<p>In our view, this is the crux of why merging an AI assistant interaction model with an ad-funded business model is not trivial:<\/p>\n<p>In classic search,\u00a0ads are served into a low-cost query\/response\u00a0sequence;<br \/>\nIn assistant-led experiences,\u00a0the same user attention requires far higher compute, which raises the cost of each monetizable interaction.<\/p>\n<p>So although an AI-native interface may create a more compelling product, it also risks turning the economics of ad delivery from a high-margin machine into a heavier-cost model \u2014 unless monetization mechanics evolve to compensate.<\/p>\n<p>Even \u2018user minutes\u2019 still understate what\u2019s happening<\/p>\n<p>We believe user minutes is a better metric than MAUs, but it still doesn\u2019t fully capture the true shift. Time spent doesn\u2019t directly measure the\u00a0quantity and richness of knowledge being produced and delivered\u00a0into the market \u2014 and that richness is precisely what drives the compute intensity.<\/p>\n<p>The bottom line is engagement is the right metric, but the deeper point is economic. If the market moves from low-cost search interactions to compute-heavy assistant interactions, the cost to serve \u2014 and therefore the cost to monetize \u2014 rises sharply. 
That\u2019s the pressure point for Google\u2019s ad-dominated business model in our view.<\/p>\n<p>Why \u2018Google will just disrupt itself\u2019 is not a casual decision: The unit economics of search are changing<\/p>\n<p>We believe a common refrain \u2014 \u201cGoogle will just disrupt itself\u201d \u2014 ignores the single most important constraint: the economics of search are uniquely favorable to Google, and moving from classic search to an assistant-style interaction model changes the unit economics in a way that can break the profit engine.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319800\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-3-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><br \/>\nSearch is an extreme-scale, ultra-optimized compute business<\/p>\n<p>Our research indicates the\u00a0cost of a search is a fraction of a cent. That outcome is the product of:<\/p>\n<p>Decades of ranking system refinement;<br \/>\nExtreme scale; and<br \/>\nA highly optimized infrastructure stack.<\/p>\n<p>In our view, this is arguably the cheapest large-scale compute service in the world, and among the best-operated at global scale.<\/p>\n<p>The margin structure is the moat \u2014 and it\u2019s hard to walk away from<\/p>\n<p>The point is that revenue per search is\u00a0roughly five to 10 times the cost of serving the search itself. 
That\u2019s the heart of the extraordinary business model Google has built \u2013 ultra-low unit cost paired with a monetization engine that extracts high value per interaction.<\/p>\n<p>In our opinion, you don\u2019t casually disrupt a model with those economics \u2014 not because you lack vision, but because the alternative must clear a very high economic bar.<\/p>\n<p>The workload characteristics are simple \u2014 and that\u2019s the advantage<\/p>\n<p>Our research indicates the\u00a0nature\u00a0of search queries supports these economics:<\/p>\n<p>Roughly\u00a08 billion to 9 billion searches per day;<br \/>\nBillions of active users;<br \/>\nQueries are typically\u00a0very short\u00a0(often\u00a0two to three words);<br \/>\nTwo out of three\u00a0searches result in a click; and<br \/>\nAbout\u00a01 to 1.5 queries per visit.<\/p>\n<p>This is a high-volume, low-complexity workload. It is optimized for speed, efficiency and monetization \u2014 not for generating richly reasoned outputs.<\/p>\n<p>Turning search into an assistant collapses the model unless monetization changes<\/p>\n<p>Our central claim is that if Google converts this ultra-cheap interaction into an OpenAI-style experience \u2014 richer responses, longer sessions, higher compute per interaction \u2014 the cost structure rises sharply. If cost per \u201csearch\u201d increases by an order of magnitude while monetization mechanics remain in the old ad model, the economics compress and the business model can break.<\/p>\n<p>The bottom line is that Google can absolutely innovate, but the data indicates self-disruption is economic surgery. The existing search machine is optimized around simplicity and margin. 
Shifting to compute-intensive assistant behavior without a new monetization model risks collapsing the very engine that funds the transition.<\/p>\n<p>Cost per session is the economic tripwire for Google\u2019s self-disruption<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319801\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-4-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>The debate about whether Google can disrupt itself becomes much clearer when you model\u00a0cost per session\u00a0rather than looking at top-line user counts. As indicated, the economics of classic Google search are engineered around extremely low cost per interaction and very high-margin monetization. Assistant-style sessions invert that equation.<\/p>\n<p>Google search: Pennies per session, high-margin ads per query<\/p>\n<p>Our research indicates the cost per interaction for Google Search is in the range of roughly 0.2 to 0.5 cents per query (as presented on the slide above), and because queries per session are low, the resulting cost per session is still \u201cless than pennies.\u201d The monetization model is also tightly coupled to this structure: Google monetizes with ads per query, and the combination of low cost and strong ad yield produces very high margins.<\/p>\n<p>In our view, this is why Google Search has been such a durable business model. It is a highly optimized, low-cost service that is monetized efficiently at massive scale.<\/p>\n<p>ChatGPT-style sessions have a fundamentally different cost structure<\/p>\n<p>ChatGPT\u2019s cost per interaction is much higher, and the session pattern is different, with\u00a0five to 10 queries per session\u00a0rather than the short, lightweight search interactions. 
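The session-level arithmetic can be sketched directly from the figures above. This is a back-of-envelope illustration only; the assistant's per-query cost multiplier (20x search) is our assumption for illustration, not a sourced number.

```python
# Back-of-envelope session economics, using the per-query cost and
# queries-per-session figures cited in this note. The assistant's
# per-query cost multiplier (20x search) is an illustrative assumption.

def session_cost(cost_per_query_cents: float, queries_per_session: float) -> float:
    """Cost to serve one user session, in cents."""
    return cost_per_query_cents * queries_per_session

# Search: ~0.2 to 0.5 cents per query, ~1 to 1.5 queries per session (midpoints used).
search_session = session_cost(0.35, 1.25)

# Assistant: 5 to 10 queries per session (midpoint), at an assumed ~20x per-query cost.
assistant_session = session_cost(0.35 * 20, 7.5)

ratio = assistant_session / search_session
print(f"search:    {search_session:.2f} cents per session")
print(f"assistant: {assistant_session:.2f} cents per session")
print(f"ratio:     ~{ratio:.0f}x")  # lands in the neighborhood of the ~100x figure
```

Under these assumptions the assistant session costs on the order of 100 times the search session, which is why even a modest mix shift toward assistant-style interactions pressures the ad model.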
When you multiply those factors, the\u00a0cost per session\u00a0becomes dramatically higher \u2014 we think\u00a0~100 times higher\u00a0than Google Search.<\/p>\n<p>Importantly, the point is not that ChatGPT is inefficient. The claim is that ChatGPT is\u00a0already\u00a0among the leaders in efficiency \u2014 and yet the underlying interaction paradigm is still much more compute-intensive per session.<\/p>\n<p>We believe this is the core reason Google can\u2019t simply flip Search into a ChatGPT-like model without destabilizing margins.<\/p>\n<p>The \u201810%-20%\/60%-70%\u2019 problem: The revenue concentration makes this existential<\/p>\n<p>Our research indicates the most economically valuable portion of search is\u00a0product and commercial search\u00a0\u2014 only\u00a0~10% to 20%\u00a0of queries, but responsible for\u00a0~60% to 70%\u00a0of search revenue. In our view, that concentration is exactly what makes the transition so difficult. Specifically:<\/p>\n<p>This slice is the pie Google most needs to protect;<br \/>\nIt is also the slice most likely to be contested as AI assistants move upstream into higher-intent workflows.<\/p>\n<p>A profound shift in how monetization works: From \u2018pay to be seen\u2019 to \u2018pay to be accurately represented\u2019<\/p>\n<p>We believe this is one of the most important ideas in the entire research note. 
The model moves from low-cost ads and blue links thrown at users toward a higher-value, higher-cost information economy where\u00a0trust and accurate representation\u00a0are the product.<\/p>\n<p>In our view, this has two direct implications:<\/p>\n<p>Brands become far more sensitive to\u00a0information quality\u00a0than to link placement.<br \/>\nMonetization migrates from \u201cpay for visibility\u201d toward \u201cpay for verified, trusted, high-fidelity representation\u201d \u2014 a different commercial construct with different economic models.<\/p>\n<p>What happens to Google\u2019s \u2018hybrid\u2019 mode?<\/p>\n<p>Google\u2019s current approach \u2014 offering a hybrid path where users can go deeper into AI mode \u2014 is both clever and useful. In an ideal world, Google would prefer to introduce assistant-style experiences slowly, ring-fence them as a separate business, and price them at a premium.<\/p>\n<p>But the point is that reality constrains the strategy because Google has to protect the commercial-search numbers. And though Google would like to layer higher-value services on top and charge advertisers materially more, it may be easier for challengers to enter with\u00a0higher-cost, higher-trust\u00a0services without needing to defend a legacy margin structure. This, we believe, is OpenAI\u2019s advantage.<\/p>\n<p>That tension makes the hybrid model look less like a stable end state and more like a transitional strategy \u2014 a fence-sitter that ultimately has to resolve toward one economic model or the other.<\/p>\n<p>Framing the competitive battlefield<\/p>\n<p>In our view, the market is dividing into two distinct conflicts:<\/p>\n<p>Nvidia vs. TPU\/ASIC alternatives: Relatively clear to us, barring execution missteps;<br \/>\nGoogle vs. 
OpenAI (and others): Far less clear, because it is a battle over interface, economics and trust \u2014 not just model quality.<\/p>\n<p>The bottom line is that cost per session is the economic forcing function in this new game. It explains why self-disruption is difficult, why the high-value commercial slice is vulnerable and why the market may shift from cheap ad inventory toward higher-cost, trust-based representation. This is the battle line for the next decade.<\/p>\n<p>The future of search is a revenue-model mismatch \u2014 and the highest-value slice is at risk<\/p>\n<p>We believe the core issue in \u201cthe future of search\u201d is not model quality. The issue is a\u00a0revenue-model mismatch. Traditional search is an ad-funded machine optimized to deliver low-cost discovery \u2014 effectively \u201c10 blue links\u201d thrown at the user \u2014 where monetization is tied to placement and clicks, not to the intrinsic quality and trustworthiness of the information delivered.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319803\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-5-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>As search becomes answer-centric and trust-centric, our research indicates a large portion of Google\u2019s profit pool becomes exposed.<\/p>\n<p>A small slice of queries drives most of the money \u2014 and it\u2019s easy to lose on trust<\/p>\n<p>In this research we make a point about revenue concentration that we believe is the linchpin:<\/p>\n<p>~10% to 15%\u00a0of queries are commercial\/product-intent;<br \/>\nThat slice contributes\u00a0~65% to 75%\u00a0of search revenues.<\/p>\n<p>In our view, this is the part of the business that is most vulnerable to\u00a0trust erosion. 
If users begin to believe the answer is optimized for the advertiser rather than for the buyer, the value proposition degrades quickly \u2014 and that small slice is exactly where buyers are most sensitive to quality, ranking integrity and confidence.<\/p>\n<p>The risk is that Google can retain the bulk of total search volume and still lose the economically decisive segment:\u00a0\u201c90% of search,\u201d but not the part that pays the bills.<\/p>\n<p>Gen AI answers are orders of magnitude more expensive \u2014 and scale makes it unforgiving<\/p>\n<p>Our research indicates generating an answer with generative AI is\u00a0at least an order of magnitude\u00a0more expensive than a classic, highly optimized search query \u2014 and potentially\u00a0two or three orders\u00a0depending on the experience design. At Google\u2019s scale, even modest mix shifts toward AI-heavy sessions can have outsized margin impact.<\/p>\n<p>We believe this becomes both a timing and a strategy issue. In other words, the transition begins to bite into margins as usage patterns migrate, and the company has to manage a delicate balance between two countervailing forces:<\/p>\n<p>Protecting current margins; and<br \/>\nPreventing leakage of high-trust, high-value queries to other platforms.<\/p>\n<p>From the consumer standpoint, there is an upside in that users can increasingly choose between low-cost traditional search and higher-quality, higher-trust answer engines. But that optionality increases competitive pressure.<\/p>\n<p>Trust and authority become the new switching costs<\/p>\n<p>Because Google\u2019s search share is so elevated, it has nowhere to go but down, and the competitive axis is shifting. In classic search, if the result isn\u2019t good, the user simply refines the query and continues. In assistant-driven search, users build affinity for an engine that consistently returns higher-quality outcomes and has memory, and\u00a0trust becomes the moat. 
The platform that earns that trust can pull disproportionate share of the highest-value sessions.<\/p>\n<p>The key point is that even if Google\u2019s model quality is strong, the business model incentives are different.<\/p>\n<p>Why this is a different business model, not just a better UI<\/p>\n<p>In our view, the future state is not \u201csmarter ads.\u201d It\u2019s a different value chain where brands need to be represented accurately, compared appropriately, and surfaced based on fit \u2014 not on who bought the top slot.<\/p>\n<p>The premise of this research captures the delta \u2013 a complex, high-intent request can be satisfied in a single minute with ranked options and an action plan, versus a longer, more iterative search process. Our research indicates that this \u201chigh-quality commercial search\u201d experience is precisely where share can migrate \u2014 and it\u2019s the most economically meaningful slice that moves.<\/p>\n<p>OpenAI\u2019s structural advantage: Aligned incentives via subscriptions and APIs<\/p>\n<p>We believe OpenAI has a structural incentive advantage rooted in its revenue model:<\/p>\n<p>Users pay (often via subscription) because they value the experience quality;<br \/>\nDevelopers and businesses pay through APIs that directly monetize usage.<\/p>\n<p>That incentive system is fundamentally different from ad-funded search, where the buyer is not the user and where brands pay for placement. In our opinion, this creates a more direct \u201cquality to revenue\u201d linkage for the assistant platform.<\/p>\n<p>On the brand side, our research indicates the emerging play is\u00a0buyer-facing APIs backed by trusted information, designed to score well in answer engines. 
That is a different marketing and distribution model than classic search engine optimization and paid links.<\/p>\n<p>SEO isn\u2019t dead \u2014 but it\u2019s on a glide path down<\/p>\n<p>We believe the right way to frame it is this: SEO is not dead, but it will matter less over time. As answer engines mediate discovery and ranking through trust and structured vendor information, the importance of classic SEO mechanics declines.<\/p>\n<p>Likely outcome: Positive bifurcation<\/p>\n<p>Our view is that the market bifurcates in a way that can still look \u201cgood\u201d in absolute terms for Google while being strategically disruptive:<\/p>\n<p>Google keeps a large share of general search volume.<br \/>\nOpenAI (and others) gain share in\u00a0high-value, high-trust commercial intent.<\/p>\n<p>Google will fight hard for that high-value slice, but our research indicates it requires substantial work above the model layer: APIs, software capabilities, interface design and the surrounding service stack that enables vendors and users to extract real utility from the platform.<\/p>\n<p>The bottom line is that the future of search is a restructuring of incentives and economics. The platform that aligns trust, representation quality and monetization has the advantage in the most valuable part of the market.<\/p>\n<p>Revisiting the\u00a0<a href=\"https:\/\/thecuberesearch.com\/breaking-analysis-chatgpt-wont-give-openai-sustainable-first-mover-advantage\/\" rel=\"nofollow noopener\" target=\"_blank\">first-mover debate<\/a>: Why OpenAI\u2019s lead looks durable \u2014 and why the enterprise is the real prize<\/p>\n<p>We believe it\u2019s useful to end this segment in the context of an earlier Breaking Analysis, where we put forth a plausible scenario in which Google could have disrupted OpenAI\u2019s first-mover advantage. 
That scenario was not irrational \u2014 it was rooted in Google\u2019s deep technical bench, its distribution and the assumption that it could translate model strength into product and platform leadership.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-319805\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/300-_-Breaking-Analysis-_-Why-NVIDIA-Maintains-its-Moat-and-Gemini-Wont-Kill-OpenAI-6-1024x576.jpg\"   alt=\"\" width=\"1024\" height=\"576\"\/><\/p>\n<p>But the picture on the slide above \u2014 \u201cOpenAI likely to retain its leadership\u201d \u2014 reflects what our research indicates today: The conditions that sustain OpenAI\u2019s lead are strengthening, not weakening.<\/p>\n<p>The models will converge at the top \u2014 so don\u2019t over-index on \u2018best model\u2019<\/p>\n<p>Our view is that the market is over-fixated on model-to-model comparisons. The reality is that the leading labs will all produce very high-quality LLMs. Google\u2019s models are strong. Anthropic\u2019s Claude has positioned around coding. Gemini has demonstrated competence in many tasks, and Grok is moving fast. The point is not that any one of these is \u201cbad.\u201d<\/p>\n<p>We believe the durable differentiation is shifting away from raw model quality and toward:<\/p>\n<p>The surrounding software stack;<br \/>\nApplication programming interfaces and developer surface area;<br \/>\nApplications and workflows; and<br \/>\nThe ability to become a default platform for enterprise usage.<\/p>\n<p>OpenAI\u2019s structural advantages: Platform, APIs and (likely) compute allocation priority<\/p>\n<p>Our research indicates OpenAI maintains leadership in several areas:<\/p>\n<p>Best APIs;<br \/>\nBest applications; and<br \/>\nAn overall lead in users, with emerging enterprise momentum.<\/p>\n<p>We also believe OpenAI\u2019s close relationship with Nvidia matters. 
Our contention is that if Nvidia remains the critical supplier for frontier compute, and if OpenAI is tightly coupled to that ecosystem, then OpenAI is positioned to receive preferential allocation versus competitors \u2014 particularly versus those whose narrative depends on Nvidia being displaced. In our view, that allocation dynamic reinforces time-to-capability and time-to-market.<\/p>\n<p>The enterprise mix is shifting<\/p>\n<p>The most important datapoint in this segment, in our opinion, is the mix shift between consumer usage and enterprise adoption. We note movement from roughly\u00a070\/30 (users\/enterprise)\u00a0last year to about\u00a060\/40\u00a0by the end of this year.<\/p>\n<p>We believe this is a meaningful signal because enterprise growth tends to be stickier and more platform-defining than consumer novelty. Our research indicates enterprise adoption will increase as organizations learn how to:<\/p>\n<p>Curate and raise the quality of their data;<br \/>\nMake that data discoverable and usable by AI systems; and<br \/>\nOperationalize workflows where trusted information is surfaced reliably.<\/p>\n<p>In our view, that enterprise \u201cdata readiness\u201d journey is what turns an AI model into an enterprise system \u2014 and that is where platform advantage compounds.<\/p>\n<p>Google has strengths \u2014 but software and enterprise positioning remain the question<\/p>\n<p>We believe Google has advantages and will remain a formidable competitor. 
But our premise is that OpenAI is more likely to emerge as a high-quality software and platform player in enterprise AI than Google, which \u2014 despite being strong technically \u2014 is not broadly perceived as the enterprise AI software leader.<\/p>\n<p>Our view is that this matters because the next phase of competition is not \u201cwho has the best model demo,\u201d but \u201cwho owns the workflow and the integration fabric.\u201d<\/p>\n<p>Leadership is not guaranteed \u2014 but it\u2019s real right now<\/p>\n<p>Our research indicates OpenAI is \u201ca country mile ahead\u201d at the moment in platform momentum. That does not mean the lead is unassailable. OpenAI could blow up, or competitors could find superior approaches. But as of now, we believe the most likely outcome is continued leadership because the factors that matter most \u2014 platform, developer adoption, enterprise mix shift and access to scarce compute \u2014 are currently aligned in OpenAI\u2019s favor.<\/p>\n<p>Closing thoughts<\/p>\n<p>The two big conclusions of this Breaking Analysis are:<\/p>\n<p>Nvidia\u2019s moat looks reinforced by volume, experience curve effects, and years of end-to-end systems work;<br \/>\nOpenAI\u2019s lead looks reinforced by platform execution and enterprise pull \u2014 with a competitive landscape where model quality is table stakes, and the real battle is the software and services wrapped around the model.<\/p>\n<p>The bottom line is the early \u201cGoogle disrupts OpenAI\u201d scenario was plausible. But the data and the platform dynamics suggest OpenAI\u2019s first-mover advantage is evolving into something more durable \u2014 especially as the enterprise becomes the center of gravity. OpenAI\u2019s relationship with Nvidia is meaningful. Though Nvidia, like Intel Corp. 
before it, will try to keep the playing field level, for now it will bolster emerging platforms that compete with it less directly, such as the neocloud players and model builders like OpenAI.<\/p>\n<p>Disclaimer:\u00a0All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.<br \/>\nDisclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and\/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what\u2019s published in Breaking Analysis.<br \/>\nImage: theCUBE Research\/Reve<\/p>\n<p>About SiliconANGLE Media<\/p>\n<p>SiliconANGLE Media is a recognized leader in digital media innovation, uniting breakthrough technology, strategic insights and real-time audience engagement. 
As the parent company of <a href=\"https:\/\/siliconangle.com\/\" rel=\"nofollow noopener\" target=\"_blank\">SiliconANGLE<\/a>, <a href=\"https:\/\/www.thecube.net\/\" rel=\"nofollow noopener\" target=\"_blank\">theCUBE Network<\/a>, <a href=\"https:\/\/thecuberesearch.com\/\" rel=\"nofollow noopener\" target=\"_blank\">theCUBE Research<\/a>, <a href=\"https:\/\/www.cube365.net\/\" rel=\"nofollow noopener\" target=\"_blank\">CUBE365<\/a>, <a href=\"https:\/\/www.thecubeai.com\/\" rel=\"nofollow noopener\" target=\"_blank\">theCUBE AI<\/a> and theCUBE SuperStudios \u2014 with flagship locations in Silicon Valley and the New York Stock Exchange \u2014 SiliconANGLE Media operates at the intersection of media, technology and AI.<\/p>\n<p>Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. 
Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.<\/p>\n","protected":false},"excerpt":{"rendered":"Two prevailing narratives have driven markets recently. The first is that Nvidia Corp.\u2019s moat is eroding primarily thanks&hellip;\n","protected":false},"author":2,"featured_media":197623,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,112038,85,46,9662,125,112039],"class_list":{"0":"post-197622","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-guest-author","12":"tag-il","13":"tag-israel","14":"tag-siliconangle","15":"tag-technology","16":"tag-why-nvidia-maintains-its-moat-and-gemini-wont-kill-openai"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/197622","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=197622"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/197622\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/197623"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=197622"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/w
p\/v2\/categories?post=197622"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=197622"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}