{"id":494743,"date":"2026-03-25T15:57:18","date_gmt":"2026-03-25T15:57:18","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/494743\/"},"modified":"2026-03-25T15:57:18","modified_gmt":"2026-03-25T15:57:18","slug":"we-have-reached-the-gpu-ceiling-and-ai-tricks-like-dlss-are-how-companies-pretend-we-havent","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/494743\/","title":{"rendered":"We have reached the GPU ceiling, and AI tricks like DLSS are how companies pretend we haven&#8217;t"},"content":{"rendered":"<p>I&#8217;m glad to have grown up in a time when generational updates between graphics cards meant something. Each new generation brought double the power, double the cores, and so many new features that we could hardly believe how fast things were progressing. This was a time when horsepower was&#8230; enough. Buying a faster GPU meant getting more frames, and that was that. Sadly, that relationship feels almost quaint now.<\/p>\n<p>We&#8217;ve hit a point where <a href=\"https:\/\/www.xda-developers.com\/waterfall-effect-chip-landscape-silicon-cost\/\" target=\"_blank\" rel=\"nofollow noopener\">throwing more silicon at the problem<\/a> doesn&#8217;t solve it. It barely contains it. 
And in that gap between what games demand and what hardware can realistically deliver, AI, which was meant to help, has quietly taken over.<\/p>\n<p>                        The &#8220;demand&#8221; has spiraled out of control<\/p>\n<p>            Native graphics have stopped being a realistic target now<\/p>\n<p>Today, PC gaming&#8217;s demands keep stacking up. For starters, 1080p is simply <a href=\"https:\/\/www.xda-developers.com\/5-reasons-1440p-is-the-new-1080p\/\" target=\"_blank\" rel=\"nofollow noopener\">no longer the gold standard<\/a>; 1440p and 4K are. 4K alone means four times the pixels of 1080p, and all the while, gamers expect ray tracing, dense foliage, massive open worlds, and triple-digit frame rates on their 144Hz or higher panels. At some point, you have to step back and realize that any reasonably sized chip would simply bend under its own weight trying to compute all of that in real time, natively.<\/p>\n<p>And yet, that&#8217;s exactly what modern games are built around. The expectation is no longer just visual fidelity, either. 
Now, <a href=\"https:\/\/www.xda-developers.com\/why-games-sizes-are-so-big\/\" target=\"_blank\" rel=\"nofollow noopener\">we need scale<\/a>, stability, and spectacle, all at once. Games have to <a href=\"https:\/\/www.polygon.com\/game-pricing-70-80-50-aa-is-back\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">justify their $70 price tags<\/a> with sprawling environments and impeccable visuals, while still ensuring they run stably across a wide range of hardware configurations.<\/p>\n<p>That&#8217;s exactly where things begin to shift. When it first arrived, upscaling was meant to democratize high-end visuals, helping lower-end cards punch above their weight by rendering internally at a fraction of the output resolution. Half a decade later, it has stopped being a support system: developers now actively build games with DLSS and similar technologies in mind, effectively treating upscaling as the new native baseline.<\/p>\n<p>As a result, improvements in upscalers&#8217; temporal stability and image quality are the new <a href=\"https:\/\/www.xda-developers.com\/i-tried-running-games-at-native-resolution-but-dlss-4-changed-my-mind\/\" target=\"_blank\" rel=\"nofollow noopener\">technological progress milestones<\/a> we look forward to, rather than GPUs that boast more cores and raw horsepower. Native 4K gaming, the promise Nvidia made when this decade opened with the RTX 30-series, is, in many cases, no longer the goal. Instead, it seems like an illusion we&#8217;ve all already moved past.<\/p>\n<p>                        Brute forcing silicon is no longer the answer<\/p>\n<p>            Raw horsepower gains have stopped scaling<\/p>\n<p>The truth is something we&#8217;ve known for a while now: silicon alone can only do so much. Moore&#8217;s Law is pretty much dead, and we&#8217;re seeing that reality play out in real time. 
GPUs simply aren&#8217;t scaling the way they used to with each generation, and that&#8217;s not because companies don&#8217;t want them to, but because physics is starting to push back. We&#8217;re hitting core limits, clock limits, thermal ceilings, and increasingly uncomfortable power requirements. Nobody wants a 600W GPU with connectors prone to melting, or a graphics card that practically needs a PSU of its own just to function. The idea of brute forcing performance gains generation after generation is running into diminishing returns.<\/p>\n<p>This is not a problem unique to one company, by the way. AMD, Nvidia, and Intel are all converging on the same realization, because all GPU makers rely on the same underlying silicon processes. There&#8217;s only so much you can extract from them before efficiency collapses. For years, the answer was simple enough: shrink to a smaller node, add more cores, push higher clocks, and increase memory bandwidth. Now, however, every incremental gain comes at a disproportionately higher cost. The industry is simply running out of room to do things the old way.<\/p>\n<p>                        There&#8217;s a limit to scaling memory and economics<\/p>\n<p>            The point of diminishing returns is now in the rearview mirror<\/p>\n<p>        <img width=\"1650\" height=\"928\" loading=\"lazy\" decoding=\"async\" alt=\"nvidia-rtx-4070-ti-super-propped\" data-img-url=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2026\/03\/nvidia-rtx-4070-ti-super-propped.png\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2026\/03\/nvidia-rtx-4070-ti-super-propped.png\" class=\"img-brightness-opt-out\"\/><br \/>\n        Credit:\u00a0Wikimedia Commons<\/p>\n<p> Another constraint that&#8217;s quietly tightening in the background is memory. VRAM is no longer a scalable solution, and throwing more of it at the problem was never going to fix things in the first place. 
Nvidia has already slashed almost all of its GPUs down to a single option per VRAM category in the 50-series, and even those cards haven&#8217;t brought any meaningful increases over the previous generation. The expectation that more <a href=\"https:\/\/www.xda-developers.com\/gamers-were-wrong-8-gb-of-vram-is-still-good-enough-in-2026\/\" target=\"_blank\" rel=\"nofollow noopener\">VRAM would future-proof performance<\/a> is, quite frankly, starting to fall apart.<\/p>\n<p>At the same time, VRAM, DRAM, and memory in general are reaching sky-high prices because a significant portion of the global supply is being diverted toward AI data centers the size of football stadiums. Gaming hardware is now competing head-on with an entirely different industry, and judging by the availability and prices of PC parts today, it&#8217;s losing.<\/p>\n<p>This creates a strange feedback loop. The same <a href=\"https:\/\/www.xda-developers.com\/nvidia-has-revealed-dlss-5-and-it-does-much-more-than-just-generate-frames\/\" target=\"_blank\" rel=\"nofollow noopener\">AI models that are helping games<\/a> run better are also driving up the cost and scarcity of the very resources that GPUs rely on. Scaling memory isn\u2019t the answer. It\u2019s just another ceiling we\u2019re beginning to hit. 
That&#8217;s why it&#8217;s hardly surprising that software is now doing the heavy lifting, with upscalers, frame generation, and dynamic resolution scaling the major talking points at every new GPU launch.<\/p>\n<p>                        Upscaling and frame generation are no longer optional<\/p>\n<p>            DLSS and FSR have forever changed rendering itself<\/p>\n<p>When it came out, upscaling technology like DLSS was meant to be nothing more than assistance. Two years later, it became genuinely impressive, and soon enough, ray tracing and DLSS <a href=\"https:\/\/www.xda-developers.com\/ray-tracing-promised-realism-but-ended-up-normalizing-upscaling\/\" target=\"_blank\" rel=\"nofollow noopener\">began going hand-in-hand<\/a>. Then came frame generation, inserting new frames between the ones the GPU actually rendered. 
And now, with technologies pushing toward multi-frame generation at <a href=\"https:\/\/www.xda-developers.com\/nvidias-6x-frame-generation-proves-hardware-ceiling-for-gpus\/\" target=\"_blank\" rel=\"nofollow noopener\">extreme levels like 6x<\/a>, we&#8217;re looking at a pipeline where most of what you see isn&#8217;t &#8220;traditionally&#8221; rendered at all.<\/p>\n<p>At the same time, the industry has been quietly pushing another frontier: refresh rates. Ever-higher refresh rates are becoming part of the conversation, and feeding those displays with native rendering alone would be absurdly expensive, if not outright impossible, at least at modern visual standards. The industry seems to have collectively realized that rendering every frame, pixel, and lighting calculation natively is simply no longer scalable. This is a decisive shift, because if you look at recent leaps in upscaling tech and at how new GPU generations are presented with AI features leading the brochures, you&#8217;ll realize that the fundamental question has changed. 
We&#8217;re no longer asking, &#8220;How do we render this faster?&#8221; but rather, &#8220;How little do we actually need to render?&#8221;<\/p>\n<p>                        AI has become the performance benchmark now<\/p>\n<p>            Fake frames will be the only frames in the near future<\/p>\n<p>        <img width=\"1650\" height=\"928\" loading=\"lazy\" decoding=\"async\" alt=\"black-myth-wukong-dlss-4.5-preset-m-gameplay\" data-img-url=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2026\/03\/black-myth-wukong-dlss-4-5-preset-m-gameplay.png\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2026\/03\/black-myth-wukong-dlss-4-5-preset-m-gameplay.png\" class=\"img-brightness-opt-out\"\/><\/p>\n<p> At this point, we have to come to terms with the fact that AI has become performance itself, rather than something that merely enhances it. Every time a new GPU drops, benchmarks that don&#8217;t include DLSS or FSR feel little more than theoretical, because nobody actually plays that way. Nvidia themselves have reported that over 80% of GeForce RTX players are using DLSS actively. 
Even GPU marketing has changed, with AI and tensor cores leading the conversation, promising more AI-generated and AI-upscaled visuals instead of purely raster gains.<\/p>\n<p><a href=\"https:\/\/www.xda-developers.com\/amd-fsr-redstone-is-forgotten-and-unfinished-before-fsr-diamond\/\" target=\"_blank\" rel=\"nofollow noopener\">With FSR 4.1 and DLSS 4.5<\/a>, especially, we&#8217;re now seeing rather aggressive presets with internal resolutions as low as 50%, still managing to deliver impressive and remarkably stable visuals. Of course, there&#8217;s no denying that temporal reconstruction and machine learning models have advanced enormously, but they are also redefining what counts as acceptable rendering for the player base.<\/p>\n<p>AI may be replacing rendering, sure, but it still needs real rendering to stand on. Frame generation is about perceived smoothness, so it works best when there&#8217;s already a solid base to build on, and it often needs tools like NVIDIA Reflex to manage latency. So, we can <a href=\"https:\/\/www.xda-developers.com\/nvidia-fake-frames-new-normal\/\" target=\"_blank\" rel=\"nofollow noopener\">talk about &#8220;fake frames&#8221;<\/a> all the livelong day, but the truth is that what we&#8217;re optimizing isn&#8217;t raw output anymore, but rather what the player feels.<\/p>\n<p>            So, where does performance go from here?<\/p>\n<p>Once the line between rendered and generated graphics disappears, we&#8217;ll have a new normal.<\/p>\n<p>The path forward doesn\u2019t look like the past. It&#8217;s not unreasonable to believe that progress will no longer come from doubling cores or chasing higher clocks. Instead, it\u2019ll come from redefining what we consider real in the first place.<\/p>\n<p>The line between rendered and generated, between computed and reconstructed, will only continue to blur. 
And once it disappears completely, we&#8217;ll have the new normal, left with only memories of what pure raster power, native-resolution graphics, and fully rendered frames used to be.<\/p>\n","protected":false},"excerpt":{"rendered":"It makes me glad to say that I&#8217;ve grown up during a time when generational updates between graphics&hellip;\n","protected":false},"author":2,"featured_media":494744,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-494743","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/494743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=494743"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/494743\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/494744"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=494743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=494743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=494743"}],"curies":[{"name":"wp","href":"htt
ps:\/\/api.w.org\/{rel}","templated":true}]}}