{"id":489317,"date":"2026-02-19T04:05:21","date_gmt":"2026-02-19T04:05:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/489317\/"},"modified":"2026-02-19T04:05:21","modified_gmt":"2026-02-19T04:05:21","slug":"running-ai-models-is-turning-into-a-memory-game","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/489317\/","title":{"rendered":"Running AI models is turning into a memory game"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">When we talk about the cost of AI infrastructure, the focus is usually on Nvidia and GPUs \u2014 but memory is an increasingly important part of the picture. As hyperscalers prepare to build out billions of dollars\u2019 worth of new data centers, the price for DRAM chips has jumped <a href=\"https:\/\/datatrack.trendforce.com\/Chart\/content\/4694\/mainstream-dram-spot-price\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">roughly 7x in the last year<\/a>.<\/p>\n<p class=\"wp-block-paragraph\">At the same time, there\u2019s a growing discipline in orchestrating all that memory to make sure the right data gets to the right agent at the right time. The companies that master it will be able to make the same queries with fewer tokens, which can be the difference between folding and staying in business.<\/p>\n<p class=\"wp-block-paragraph\"><a rel=\"nofollow noopener\" href=\"https:\/\/www.fabricatedknowledge.com\/p\/another-conversation-with-val-bercovici\" target=\"_blank\">Semiconductor analyst <\/a><a href=\"https:\/\/www.fabricatedknowledge.com\/p\/another-conversation-with-val-bercovici\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Doug O\u2019Laughlin<\/a> has an interesting look at the importance of memory chips on his Substack, where he talks with Val Bercovici, chief AI officer at Weka. 
They\u2019re both semiconductor guys, so the focus is more on the chips than the broader architecture, but the implications for AI software are significant too.<\/p>\n<p class=\"wp-block-paragraph\">I was particularly struck by this passage, in which Bercovici looks at the growing complexity of <a href=\"https:\/\/platform.claude.com\/docs\/en\/build-with-claude\/prompt-caching\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Anthropic\u2019s prompt-caching documentation<\/a>:<\/p>\n<blockquote class=\"wp-block-quote\"><p>The tell is if we go to Anthropic\u2019s prompt caching pricing page. It started off as a very simple page six or seven months ago, especially as Claude Code was launching \u2014 just \u201cuse caching, it\u2019s cheaper.\u201d Now it\u2019s an encyclopedia of advice on exactly how many cache writes to pre-buy. You\u2019ve got 5-minute tiers, which are very common across the industry, or 1-hour tiers \u2014 and nothing above. That\u2019s a really important tell. Then of course you\u2019ve got all sorts of arbitrage opportunities around the pricing for cache reads based on how many cache writes you\u2019ve pre-purchased.<\/p><\/blockquote>\n<p class=\"wp-block-paragraph\">The question here is how long Claude holds your prompt in cached memory: You can pay for a 5-minute window, or pay more for an hour-long window. It\u2019s much cheaper to draw on data that\u2019s still in the cache, so if you manage it right, you can save an awful lot. There is a catch, though: Every new bit of data you add to the query may bump something else out of the cache window.<\/p>\n<p class=\"wp-block-paragraph\">This is complex stuff, but the upshot is simple enough: Managing memory in AI models is going to be a huge part of AI going forward. Companies that do it well are going to rise to the top.<\/p>\n<p class=\"wp-block-paragraph\">And there is plenty of progress to be made in this new field. 
Back in October, I covered <a href=\"https:\/\/techcrunch.com\/2025\/10\/23\/tensormesh-raises-4-5m-to-squeeze-more-inference-out-of-ai-server-loads\/\" rel=\"nofollow noopener\" target=\"_blank\">a startup called Tensormesh<\/a> that was working on one layer in the stack known as cache optimization.<\/p>\n<p class=\"wp-block-paragraph\">Opportunities exist in other parts of the stack. For instance, lower down the stack, there\u2019s the question of how data centers are using the different types of memory they have. (The interview includes a nice discussion of when DRAM chips are used instead of HBM, although it\u2019s pretty deep in the hardware weeds.) Higher up the stack, end users are figuring out how to structure their model swarms to take advantage of the shared cache.<\/p>\n<p class=\"wp-block-paragraph\">As companies get better at memory orchestration, they\u2019ll use fewer tokens and inference will get cheaper. Meanwhile, <a href=\"https:\/\/ramp.com\/velocity\/ai-is-getting-cheaper\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">models are getting more efficient at processing each token<\/a>, pushing the cost down still further. 
As server costs drop, a lot of applications that don\u2019t seem viable now will start to edge into profitability.<\/p>\n","protected":false},"excerpt":{"rendered":"When we talk about the cost of AI infrastructure, the focus is usually on Nvidia and GPUs \u2014&hellip;\n","protected":false},"author":2,"featured_media":489318,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,2729,254,255,64,63,2730,193665,5607,247325,105],"class_list":{"0":"post-489317","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-anthropic","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-au","13":"tag-australia","14":"tag-claude","15":"tag-dram","16":"tag-exclusive","17":"tag-inference-costs","18":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/489317","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=489317"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/489317\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/489318"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=489317"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=489317"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=489317"}],"c
uries":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}