Cognitive warmup. RAM prices are going up, and this will not be pretty. Mark my words. The fact that a 96GB DDR5 kit costs as much as $900 (around ₹89,000) means it is more expensive than an entire Sony PlayStation 5 (that’s around ₹54,990). Memory prices haven’t just doubled; they have tripled over the past few months. The main culprit? AI’s insatiable appetite for memory. Data centres and cloud providers are buying up massive amounts of high-bandwidth memory for AI servers. The consumer RAM business naturally takes a back seat, which means your next smartphone, desktop, laptop, even your next tablet or gaming console, will get a bit more expensive. And I’m not overstating the scenario. There is another element at play. The transition from DDR4 to DDR5 (the first DDR5 products began arriving in late 2021) did add complexity, and initial production on newer nodes understandably had lower yields, constraining supply during the crossover period between 2021 and 2023. Memory brands including Micron, Crucial, Patriot, and Corsair were playing inventory catch-up. Unfortunately, relief isn’t coming soon. I don’t foresee meaningful price drops before mid-2026 at the earliest. In the meantime, tech companies will inevitably pass on these costs to customers. Not that this means you should rush out and buy a new phone or laptop if you don’t need one.

Google, which has till now largely rented out its custom TPUs, is offering to sell them for the first time.

Last time on Neural Dispatch: The AI maths is broken, and a bubble that many pretend to not see, is deflating

ALGORITHM

This week, we talk about stressful times for Nvidia, not least because Google has made a big leap with its TPU business that directly hurts Nvidia’s GPU sales. And it’s a case of never say never, as OpenAI admits that advertisements within ChatGPT may be more a question of when than if.

Google TPU vs Nvidia GPU: Who’s winning?

“Competitors can price their chips at $0 and Nvidia products will still be a better option,” CEO Jensen Huang said, not too long ago. That didn’t age well at all, did it? Meta is now buying, in staggered tranches rather than all at once, what will be billions of dollars’ worth of Google TPUs, or Tensor Processing Units. Two important things to note here. First, Google, which has till now largely rented out its custom TPUs, is offering to sell them for the first time. Secondly, despite Nvidia’s insistence to the contrary, TPUs, as application-specific integrated circuits (ASICs), will be a better and more cost-effective fit for specific AI workloads. This news hit Nvidia’s stock price quite hard. So much so that Nvidia posted a rather grudging congratulatory note (after all, everyone still has to show they’re friends, for AI’s sake…) whilst claiming that “Nvidia is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.” Great.

What’s the difference between a GPU that Nvidia wants to sell for AI tasks and a TPU that Google champions? A GPU is a specialised processor originally designed for manipulating computer graphics. Its highly parallel structure makes it ideal for algorithms that process the large blocks of data commonly found in AI workloads. A TPU is an ASIC designed for neural networks, with specialised features, such as the matrix multiply unit (MXU) and a proprietary interconnect topology, that make it ideal for accelerating AI training and inference. All of Google’s AI services run off TPUs. Right now, Google has three iterations available for customers: the Cloud TPU v5e, which is ideal for medium-to-large-scale training and inference workloads; the Cloud TPU v5p, a powerful TPU for building large, complex foundational models; and Trillium, a sixth-generation TPU that pushed the standards for energy efficiency and peak compute performance per chip for training and inference. Next in line is the Ironwood TPU, which Google says is its “most powerful and efficient TPU yet, for large scale training and inference.”
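To ground that in something concrete, here is a minimal, hypothetical JAX sketch (the shapes and numbers are invented for illustration, not taken from any Google documentation). The same jit-compiled function runs on CPU, GPU, or TPU; on a TPU, the XLA compiler maps the matrix multiply onto the MXU.

```python
# Hypothetical sketch: shapes and values are made up for illustration.
import jax
import jax.numpy as jnp

@jax.jit
def dense_layer(x, w):
    # Large matrix multiplies like this dominate neural-network training
    # and inference, which is exactly the operation a TPU is built around.
    return jnp.dot(x, w)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 4096))
w = jax.random.normal(key, (4096, 4096))

print(jax.devices())            # lists TpuDevice entries on a Cloud TPU VM
print(dense_layer(x, w).shape)  # (1024, 4096), wherever it happened to run
```

The point isn’t the snippet itself; it’s that an ASIC built around that one operation can strip out much of the general-purpose machinery a GPU still carries, which is where the cost-effectiveness argument comes from.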

One thing is clear. Suddenly, the ‘Nvidia or nothing’ worldview seems less absolute. Is Nvidia still the gold standard? Perhaps, but we’ll let market forces decide that. But the armour isn’t shining anymore.

Nvidia says they aren’t like Enron

I mean, I don’t know everything, but I’d say that if you have to say “we are not like Enron,” you’re already on the back foot. Isn’t it? Nvidia’s attempts to silence ballooning speculation over revenue recognition, accounting methodology, and the realistic lifespan of GPUs sound increasingly like those of a company under pressure. The note to Wall Street analysts came in response to the definitive puncturing of the AI bubble over the past few months (panic is setting in, because circular funding has been found out), and also to a rather blunt post on X by American investor and hedge fund manager Michael Burry after Nvidia’s earnings release. Burry had pointed out that Nvidia’s customers are now using six-year depreciation schedules instead of the standard practice of two or three years, and questioned the maths of that practice. Nevertheless, Nvidia says in the note that “Nvidia does not resemble historical accounting frauds because Nvidia’s underlying business is economically sound, our reporting is complete and transparent, and we care about our reputation for integrity. Unlike Enron, Nvidia does not use Special Purpose Entities to hide debt and inflate revenue.” Great, then. The fact that investor forums, analysts, and a few very loud short-sellers found enough shades of similarity to even draw that comparison is telling. Every over-performing tech giant eventually learns: gravity isn’t a rumour; it’s a schedule.
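For a sense of why Burry latched onto the depreciation point, here is a back-of-the-envelope sketch. The $10 billion fleet cost is an invented figure purely for illustration, but the straight-line arithmetic is the standard accounting treatment.

```python
# Illustrative only: a hypothetical GPU fleet, depreciated straight-line.
def annual_depreciation(cost: float, years: int) -> float:
    # Straight-line depreciation: the same expense is booked every year.
    return cost / years

fleet_cost = 10_000_000_000  # made-up $10B figure for a GPU fleet

for years in (3, 6):
    expense = annual_depreciation(fleet_cost, years)
    print(f"{years}-year schedule: ${expense / 1e9:.2f}B expensed per year")

# 3-year schedule: $3.33B expensed per year
# 6-year schedule: $1.67B expensed per year
```

Stretching the assumed useful life from three years to six roughly halves the annual expense and flatters near-term profits; whether six years is a realistic lifespan for an AI GPU is exactly the question Burry was raising.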

From “never doing ads” to, well, this…

It was inevitable, wasn’t it? The world’s most powerful conversational product seems to be tip-toeing toward the world’s oldest monetisation model: advertising. Some things never change, no matter how technology changes. OpenAI’s Sam Altman has hinted that ads are “something we’ll try”. Don’t get sidetracked, because in OpenAI-speak, this translates to “a pilot is already running somewhere, we just haven’t told you.” I’d estimate this means three things. First, AI search is about to look even more like traditional search. Secondly, “objective answers” may slowly become “sponsored objectivity”. And third, any dreams of an ad-free, neutral, reasoning-first AI assistant may be headed for a retirement home. The silver lining? If done with transparency, ads might subsidise cheaper tiers, in particular the India-focused education initiatives, without degrading the totality of the product. But the internet has heard that promise before. And some of us will only believe what we see.

Do check out my other newsletter, Wired Wisdom: Decoding home air purifiers, SaveSage’s AI saves you money, and Toyota’s tem museum

THINKING

“We will respectfully make our case in a way that is cognizant of the complexity and nuances of situations involving real people and real lives… Because we are a defendant in this case, we are required to respond to the specific and serious allegations in the lawsuit.” — OpenAI’s response to a lawsuit, denying liability in a teen’s suicide.

While this may just be another corporate statement crafted by a legal team to safeguard the larger business interests (as far as that is possible), it does signal an uncomfortable inflection point. One where AI systems have become emotionally accessible enough that people confide in them the way they do in humans, yet they are still engineered to behave like tools. No boundaries. No limits. No balance. I am reminded of that time when Meta CEO Mark Zuckerberg was given an absolute earful at the U.S. Senate Judiciary Committee hearing on online child safety and exploitation (this was on January 31, 2024), and was forced to apologise to parents of children who were harmed by interactions on social media. Technology never had the societal filters for conversations with children. This comes at a time when the mismatch with public expectations is widening. Users increasingly (and absolutely incorrectly, I must add) treat ChatGPT like a thinking, empathic entity. It isn’t, and no AI is or will ever be. Regulators still classify it as software, and rightly so. It is a tool and will always be that; tools cannot be human, and humans must find the right use for tools. Now is the time when courts must interpret a technology whose influence is human-like, yet whose accountability is legally non-human. This response is an example of that.

A Reality Check: The lawsuit I’m referring to is the manifestation of that cognitive dissonance. And it won’t be the last. As AI becomes more agentic and supposedly more personalised, it finds itself more deeply embedded in mental-health-adjacent interactions, and the boundary between influence and responsibility becomes blurrier still.

OpenAI’s defence, that the company cannot be held liable for all user outcomes, is institutionally predictable. No AI firm can survive if held responsible for every emotional, psychological, or behavioural consequence triggered by a model’s output. But morality doesn’t take instructions from legal disclaimers. And the truth is this — AI companies have spent years marketing these systems as empathetic, conversational, human-like companions. You can’t sell “intelligence,” “reasoning,” and “connection” on Monday and then argue “we’re just a tool” on Wednesday afternoon, without raising eyebrows and lawsuits. The stakes aren’t just about what happened in this tragic case. They’re about who carries the burden when AI steps into human emotional territory. The user? The developer? The regulator? Or the emerging grey zone of shared responsibility no one is quite ready to define?

The AI era’s legal phase has officially begun, and the industry may now begin to (and very uncomfortably so, I might add) discover that being “transformational” also means being answerable.