{"id":559255,"date":"2026-03-25T07:29:11","date_gmt":"2026-03-25T07:29:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/559255\/"},"modified":"2026-03-25T07:29:11","modified_gmt":"2026-03-25T07:29:11","slug":"the-ai-industry-is-lying-to-you","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/559255\/","title":{"rendered":"The AI Industry Is Lying To You"},"content":{"rendered":"<p>Hi! If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It\u2019s $70 a year, or $7 a month, and in return you get a weekly newsletter that\u2019s usually anywhere from 5000 to 18,000 words, including vast, detailed analyses of <a href=\"https:\/\/www.wheresyoured.at\/the-haters-guide-to-nvidia\/\" rel=\"nofollow noopener\" target=\"_blank\">NVIDIA<\/a>, <a href=\"https:\/\/www.wheresyoured.at\/howmuchmoney\/\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic and OpenAI\u2019s finances<\/a>, and <a href=\"https:\/\/www.wheresyoured.at\/premium-the-haters-guide-to-the-ai-bubble-vol-2\/\" rel=\"nofollow noopener\" target=\"_blank\">the AI bubble writ large<\/a>. I just put out <a href=\"https:\/\/www.wheresyoured.at\/hatersguide-saas\/\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">a massive Hater\u2019s Guide To The SaaSpocalypse<\/a>, as well as <a href=\"https:\/\/www.wheresyoured.at\/hatersguide-adobe\/\" rel=\"nofollow noopener\" target=\"_blank\">the Hater\u2019s Guide to Adobe<\/a>.\u00a0It helps support free newsletters like these! 
<\/p>\n<p>The entire AI bubble is built on a vague sense of inevitability \u2014 that if everybody just believes hard enough that none of this can ever, ever go wrong, at some point all of the very obvious problems will just go away.<\/p>\n<p>Sadly, one cannot beat physics.<\/p>\n<p>Last week, economist Paul Kedrosky put out <a href=\"https:\/\/paulkedrosky.com\/chart-of-the-day-data-center-buildout-slowed-sharply\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">an excellent piece<\/a> centered around a chart that showed new data center capacity additions (as in additions to the pipeline, not brought online) halved in the fourth quarter of 2025 (per data from <a href=\"https:\/\/www.woodmac.com\/news\/opinion\/reality-bytes-the-us-data-centre-pipeline-additions-halved-in-q4-2025-compared-to-the-previous-quarter\/?ite=40049&amp;ito=847&amp;itq=33bf76c7-8815-436d-8df8-88e8a27f3aba&amp;itx%5Bidio%5D=7140913&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Wood Mackenzie<\/a>):<\/p>\n<p><em>[Chart: Wood Mackenzie data showing US data center pipeline additions halving from Q3 to Q4 2025]<\/em><\/p>\n<p>Wood Mackenzie\u2019s report framed it in harsh terms:<\/p>\n<p><a href=\"https:\/\/www.woodmac.com\/link\/e38fa16a6af24610b148a0fbb2dbf95c.aspx?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">US data-centre capacity additions<\/a> halved from Q3 to Q4 2025 as load-queue challenges persisted. The decline underscores the difficulties of the current development environment and signals a resulting focus on existing pipeline projects. While <a href=\"https:\/\/www.woodmac.com\/link\/307ef40dfa6a4da4913db68fff8b4647.aspx?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Texas extended its pipeline capacity lead in Q4 2025<\/a>, New Mexico, Indiana and Wyoming saw greater relative growth. 
Planned capacity continues to be weighted by new developers with a small number of massive, speculative projects, targeting in particular the South and Southwest. New Mexico owes its growth to a single, massive, speculative project by New Era Energy &amp; Digital in Lea County.<\/p>\n<p>As I said above, this refers only to capacity that\u2019s been announced rather than stuff that\u2019s actually been brought online, and Kedrosky missed arguably the craziest chart \u2014 that of the 241GW of disclosed data center capacity, only 33% is actually under active development:<\/p>\n<p><em>[Chart: of the 241GW of disclosed US data center capacity, only 33% is under active development]<\/em><\/p>\n<p>The report also adds that the majority of committed power (58%) is for \u201cwires-only utilities,\u201d which means the utility provider is only responsible for getting power to the facility, not generating the power itself, which is a big problem when you\u2019re building entire campuses made up of power-hungry AI servers.<\/p>\n<p>WoodMac also adds that PJM, one of the largest grid operators in America, \u201c&#8230;remains in trouble, with utility large load commitments three times as large as the accredited capacity in PJM\u2019s risked generation queue,\u201d which is a complex way of saying \u201cit doesn\u2019t have enough power.\u201d<\/p>\n<p>This means that fifty-eight god damn percent of data centers need to work out their own power somehow. WoodMac also notes there is around $948 billion in capex being spent in total on US-based data centers, but that capex growth decelerated for the first time since 2023. Kedrosky adds:<\/p>\n<p>The total announced pipeline looks huge at 241 GW \u2014 about twice US peak electricity demand \u2014 but most of it is not real. Only a third is under construction, with the rest a mix of hopeful permits, speculative land deals, and projects that assume power sources nobody has actually built yet. 
In particular, much of it assumes on-site gas plants, a fraught assumption given current geopolitics.<\/p>\n<p>The most serious problem is in the mid-Atlantic. Regional grid operator PJM has made power commitments to data centers at roughly three times the rate that new generation is actually coming online. Someone is going to be waiting a very long time, or paying a lot more than they expected, or both.<\/p>\n<p>Let\u2019s simplify:<\/p>\n<p>Only 33% of announced US data centers are actually being built, with the rest in vague levels of \u201cplanning.\u201d That\u2019s about 79.53GW of power, or 61GW of IT load. \u201cActive development\u201d also refers to anything that is (and I quote) \u201c&#8230;under development or construction,\u201d meaning \u201cwe\u2019ve got the land and we\u2019re still working out what to do with it.\u201d<\/p>\n<p>This is pretty obvious when you do the maths. 61GW of IT load would be hundreds of thousands of NVIDIA GB200 NVL72 racks \u2014 over a trillion dollars of GPUs at $3 million per 72-GPU rack \u2014 and based on the fact there <a href=\"https:\/\/www.bloomberg.com\/graphics\/2025-ai-data-center-ownership\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">were only $178.5 billion in data center debt deals last year<\/a>, I don\u2019t think many of these are actually being built right now. Even if they were, there\u2019s not enough power for them to turn on.<\/p>\n<p><a href=\"https:\/\/finance.yahoo.com\/news\/nvidia-ceo-huang-says-company-sees-more-than-1-trillion-in-sales-through-2027-221700475.html?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">NVIDIA claims it will sell $1 trillion of GPUs between 2025 and 2027<\/a>, and <a 
href=\"https:\/\/www.wheresyoured.at\/premium-how-the-ai-bubble-bursts-in-2026\/#nvidias-customer-base-is-quietly-collapsing-papered-over-by-large-sales-with-four-customers-accounting-for-61-of-revenue-in-its-latest-q3-fy2026-quarter-up-from-30-in-q1-fy2026:~:text=So%2C%20NVIDIA%20sold%20around%20%2443%20billion%20of%20data%20center%20GPUs%20in%20its%20last%20quarter.%20I%27m%20going%20to%20guess%20on%20the%20breakdown%20of%20that%20revenue%2C%20and%20the%20subsequent%20power%20it%20requires.%20And%20yes%2C%20it%E2%80%99s%20a%20guess%2C%20but%20the%20pricing%20is%20based%20on%20actual%20figures%20paid%20by%20end%20users.\" rel=\"nofollow noopener\" target=\"_blank\">as I calculated previously<\/a>, it sells about 1.6GW (in IT load terms, as in how much power just the GPUs draw) of GPUs every quarter, which would require at least 1.95GW of power just to run, when you include all the associated gear and the challenges of physically getting power.<\/p>\n<p>None of this data talks about data centers actually coming online.<\/p>\n<p>The term you\u2019re looking for there is data center absorption, which is (to quote Data Center Dynamics) \u201c&#8230;the net growth in occupied, revenue-producing IT load,\u201d <a href=\"http:\/\/datacenterdynamics.com\/en\/news\/cbre-new-data-center-capacity-under-construction-in-primary-us-markets-declines-year-on-year\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">which grew in America\u2019s primary markets from 1.8GW of new capacity in 2024 to 2.5GW in 2025<\/a> according to CBRE.<\/p>\n<p>Definition sidenote! \u201cColocation\u201d space refers to data center space that is built and then rented out to somebody else, versus data centers explicitly built for a company (such as Microsoft\u2019s Fairwater data centers). 
What\u2019s interesting is that it appears some \u2014 such as Avison Young \u2014 count Crusoe\u2019s developments (such as Stargate Abilene) as colocation construction, which makes the colocation numbers I\u2019ll get to shortly much more indicative of the bigger picture.<\/p>\n<p>The problem is, this number doesn\u2019t actually express newly-turned-on data centers. Somebody expanding a project to take on another 50MW still counts as \u201cnew absorption.\u201d<\/p>\n<p>Things get more confusing when you add in other reports. Avison Young\u2019s reports about data center absorption found 700MW of new capacity in <a href=\"https:\/\/www.avisonyoung.us\/documents\/d\/us\/q1-2025-us-data-center-update?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Q1 2025<\/a>, <a href=\"https:\/\/www.avisonyoung.us\/documents\/d\/us\/q2-2025-us-data-center-update?_gl=1*9ccz2f*_up*MQ..*_ga*MTk3MjcwODYzOC4xNzU1MDIwNjAx*_ga_NB1T86YXFD*czE3NTUwMjA2MDEkbzEkZzAkdDE3NTUwMjA2MDEkajYwJGwwJGgw&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">1.173GW in Q2<\/a>, and <a href=\"http:\/\/avisonyoung.us\/documents\/d\/us\/q4-2025-us-data-center-update?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">a little over 1.5GW in Q3 and 2.033GW in Q4<\/a> (I cannot find its standalone Q3 report anywhere), for a total of 5.44GW, entirely in \u201ccolocation,\u201d meaning buildings built to be leased to others.<\/p>\n<p>Yet there\u2019s another problem with that methodology: these are facilities that have been \u201cdelivered\u201d or have a \u201ccommitted tenant.\u201d \u201cDelivered\u201d could mean \u201cthe facility has been turned over to the client, but it\u2019s literally a powered shell (a warehouse) waiting for installation,\u201d or it could mean \u201cthe client is up and running.\u201d A \u201ccommitted tenant\u201d could mean as little as \u201cwe\u2019ve signed a contract and we\u2019re raising funds\u201d (as is the case with 
<a href=\"https:\/\/www.wheresyoured.at\/why-are-we-still-doing-this\/#:~:text=And%2C%20of%20course%2C%20Nebius%20just%20raised%20%243.75bn%20in%20debt%20on%20the%20back%20of%20that%20compute%20deal.%C2%A0\" rel=\"nofollow noopener\" target=\"_blank\">Nebius raising money off of a Meta contract to build data centers at some point in the future<\/a>).<\/p>\n<p>We can get a little closer by using the definitions from DataCenterHawk (from which Avison Young gets its data), <a href=\"https:\/\/datacenterhawk.com\/resources\/hawkpodcast\/how-to-measure-the-data-center-market-data-center-fundamentals?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">which defines absorption as follows<\/a>:<\/p>\n<p>To measure demand, we want to know how much capacity was leased up by customers over a specific period of time. At datacenterHawk we calculate this quarterly. The resulting number is what\u2019s called absorption.<\/p>\n<p>Let\u2019s say DC#1 has 10 MW commissioned. 9 MW are currently leased and 1 MW is available. Over the course of a quarter, DC#1 leases up that last MW to a few tenants. Their absorption for the quarter would be 1 MW. It can get a little more complicated but that\u2019s the basic concept.<\/p>\n<p>That\u2019s great! Except Avison Young has chosen to define absorption in an entirely different way \u2014 that a data center (in whatever state of construction it\u2019s in) has been leased, or \u201cdelivered,\u201d which means either \u201ca fully ready-to-go data center\u201d or \u201can empty warehouse with power in it.\u201d<\/p>\n<p>CBRE, on the other hand, defines absorption as \u201cnet growth in occupied, revenue-producing IT load,\u201d and is inclusive of hyperscaler data centers. 
<a href=\"https:\/\/www.cbre.com\/insights\/books\/north-america-data-center-trends-h2-2025?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Its report<\/a> also includes smaller markets like Charlotte, Seattle and Minneapolis, adding a further 216MW of absorption: actual, existing, revenue-generating capacity.<\/p>\n<p>So that\u2019s about 2.716GW of actual, new data centers brought online. It doesn\u2019t include areas like Southern Virginia or Columbus, Ohio \u2014 two massive hotspots from Avison Young\u2019s report \u2014 and I cannot find a single bit of actual evidence of significant revenue-generating, turned-on, real data center capacity being stood up at scale. <a href=\"https:\/\/www.datacentermap.com\/usa\/ohio\/columbus\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">DataCenterMap shows 134 data centers in Columbus<\/a>, but as of August 2025, <a href=\"https:\/\/www.dispatch.com\/story\/business\/2025\/08\/18\/report-wave-of-data-centers-on-horizon-in-central-ohio\/85673685007\/?gnt-cfr=1&amp;gca-cat=p&amp;gca-uir=true&amp;gca-epti=z116251p119950n00----c00----e1179xxv116251d--48--b--48--&amp;gca-ft=154&amp;gca-ds=sophi&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">the Columbus area had around 506MW in total<\/a> according to the Columbus Dispatch, though Cushman and Wakefield <a href=\"https:\/\/www.cushmanwakefield.com\/en\/insights\/global-data-center-market-comparison?j=531192&amp;sfmc_sub=50973005&amp;l=24664_HTML&amp;u=11085413&amp;mid=514015162&amp;jb=2009&amp;utm_source=sfmc_se1&amp;utm_medium=email&amp;utm_campaign=2024-global-data-center-market-comparison&amp;utm_term=Access+the+Report&amp;utm_id=531192&amp;sfmc_id=50973005&amp;utm_campaign=2024-global-data-center-market-comparison\" rel=\"nofollow noopener\" target=\"_blank\">claimed in February 2026 that it had 1.8GW<\/a>.<\/p>\n<p>Things get even more confusing when you read that <a 
href=\"https:\/\/www.cushmanwakefield.com\/en\/insights\/americas-data-center-update?ref=wheresyoured.at#:~:text=At%20the%20same%20time%2C%20demand,the%20end%20of%20the%20decade.\" rel=\"nofollow noopener\" target=\"_blank\">Cushman and Wakefield estimates that around 4GW of new colocation supply<\/a> was \u201cdelivered\u201d in 2025, a term it does not define in its actual report, which for whatever reason lacks absorption numbers. <a href=\"https:\/\/digital.cushmanwakefield.com\/Americas-Data-Center-H1-2025-Update\/12\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Its H1 2025 report<\/a>, however, includes absorption numbers that add up to around 1.95GW of capacity\u2026without defining absorption, leaving us with exactly the same problem we have with Avison Young.<\/p>\n<p>Nevertheless, based on these data points, I\u2019m comfortable estimating that North American data center absorption \u2014 as in the IT load of data centers actually turned on and in operation \u2014 was around 3GW for 2025, which would work out to about 3.9GW of total power.<\/p>\n<p>And that number is a fucking disaster.<\/p>\n<p>Earlier in the year, TD Cowen\u2019s Jerome Darling <a href=\"https:\/\/www.wheresyoured.at\/data-center-crisis\/#:~:text=As%20I%20discussed,end%20of%202026.\" rel=\"nofollow noopener\" target=\"_blank\">told me<\/a> that GPUs and their associated hardware cost about $30 million a megawatt. 
3GW of IT load (as in the GPUs and their associated gear\u2019s power draw) works out to around $90 billion of NVIDIA GPUs and the associated hardware, which would be covered under NVIDIA\u2019s \u201cdata center\u201d revenue segment:<\/p>\n<p><em>[Chart: NVIDIA data center segment revenue]<\/em><\/p>\n<p>America makes up about 69.2% of NVIDIA\u2019s revenue, <a href=\"https:\/\/d18rn0p25nwr6d.cloudfront.net\/CIK-0001045810\/e361e58a-7483-44f5-bc62-a9080ae6ec72.pdf?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">or around $149.6 billion in FY2026<\/a> (which runs, annoyingly, from February 2025 to January 2026). NVIDIA\u2019s overall data center segment revenue was $195.7 billion, which puts America\u2019s data center purchases at around $135 billion; subtract the roughly $90 billion that actually got installed and you\u2019re left with around $44 billion of GPUs and associated technology sitting uninstalled.<\/p>\n<p>With the acceleration of NVIDIA\u2019s GPU sales, it now takes about 6 months to install and operationalize a single quarter\u2019s worth of sales. 
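<\/p>\n<p>As a sanity check on the arithmetic above, here\u2019s a quick back-of-envelope sketch in Python. Every input is a figure quoted in this piece (the ~$30 million per megawatt cost, the ~3GW of 2025 US absorption, the $195.7 billion data center segment, the 69.2% US revenue share); nothing here is new data, just the same multiplication laid out:<\/p>

```python
# Back-of-envelope: GPUs installed in the US in 2025 vs. GPUs sold into America.
# All inputs are figures quoted in the piece above; this is a rough sketch.

cost_per_mw = 30e6            # TD Cowen: ~$30M per MW of IT load (GPUs + gear)
absorbed_it_load_mw = 3_000   # ~3GW of US IT load actually brought online in 2025

installed_value = absorbed_it_load_mw * cost_per_mw   # dollars of gear installed

dc_segment_revenue = 195.7e9  # NVIDIA data center segment revenue, FY2026
us_share = 0.692              # America's share of NVIDIA revenue
us_dc_purchases = dc_segment_revenue * us_share       # dollars of gear bought

uninstalled = us_dc_purchases - installed_value       # dollars sitting idle

print(f"Installed:    ${installed_value / 1e9:.0f}B")
print(f"US purchases: ${us_dc_purchases / 1e9:.0f}B")
print(f"Uninstalled:  ${uninstalled / 1e9:.0f}B")
```

<p>That works out to roughly $90 billion installed against roughly $135 billion purchased, leaving about $45 billion idle, within rounding distance of the $44 billion figure above.<\/p>\n<p>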
Because these are Blackwell (and I imagine some of the next-generation Vera Rubin) GPUs, they are more than likely going to new builds thanks to their greater power and cooling requirements. While some could in theory be going to old builds retrofitted to fit them, <a href=\"https:\/\/www.wheresyoured.at\/the-enshittifinancial-crisis\/#the-devil%E2%80%99s-deal-of-investing-in-ai-startups:~:text=to%20walk%20away.-,The%20AI%20Bubble%20Is%20A%20Debt%20and%20Venture%20Bubble%2C%20And%20Will%20Burst%20When%20Both%20Run%20Out,-Before%20then%2C%20NVIDIA%E2%80%99s\" rel=\"nofollow noopener\" target=\"_blank\">NVIDIA\u2019s increasingly-centralized<\/a> (as in focused on a few very large customers) revenue heavily suggests the presence of large resellers like Dell or Supermicro (which I\u2019ll get to in a bit) or the Taiwanese ODMs <a href=\"https:\/\/www.wheresyoured.at\/dot-com-bubble\/#the-taiwan-problem-%E2%80%94-and-the-location-of-the-warehouses-of-gpus\" rel=\"nofollow noopener\" target=\"_blank\">like Foxconn and Quanta<\/a> that manufacture massive amounts of servers for hyperscaler buildouts.<\/p>\n<p>I should also add that it\u2019s commonplace for hyperscalers to buy the GPUs for their colocation partners to install, which is why Nebius and Nscale and other partners never raise more than a few billion dollars to cover construction costs.<\/p>\n<p>It\u2019s becoming very obvious that data center construction is dramatically slower than NVIDIA\u2019s GPU sales, which continue to accelerate every single quarter.<\/p>\n<p>Even if you think AI is the biggest, most hugest and most special boy: what\u2019s the fucking point of buying these things two to four years in advance? Jensen Huang is announcing a new GPU every year!<\/p>\n<p>By the time they actually get all the Blackwells in, Vera Rubin will be two years old! 
And by the time we install those Vera Rubins, some other new GPU will be beating them!<\/p>\n<p>Before we go any further, I want to be clear how difficult it is to answer the question \u201chow long does a data center take to build?\u201d You can\u2019t really say \u201c[time] per megawatt\u201d because things become ever more complicated with every 100MW or so. As I\u2019ll get into, it\u2019s taken Stargate Abilene two years to hit 200MW of power.<\/p>\n<p>Not IT load. Power.<\/p>\n<p>Anyway, the question of \u201chow much data center capacity came online?\u201d is pretty annoying too.<\/p>\n<p><a href=\"https:\/\/www.sightlineclimate.com\/research\/data-center-outlook?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Sightline<\/a>\u2019s research \u2014 which estimated that \u201calmost 6GW of [global data center power] capacity came online last year\u201d \u2014 found that while 16GW of capacity was slated to come online in 2026 across 140 projects, only 5GW is currently under construction, and somehow doesn\u2019t conclude that \u201cmaybe everybody is lying about timelines.\u201d<\/p>\n<p>Sightline believes that half of 2026\u2019s supposed data center pipeline may never materialize, with 11GW of capacity in the \u201cannounced\u201d stage with \u201c&#8230;no visible construction progress despite typical build timelines of 12-18 months.\u201d \u201cUnder construction\u201d can also mean anything from \u201c<a href=\"https:\/\/x.com\/sk7037\/status\/2029911649298362524?s=20&amp;ref=wheresyoured.at\" rel=\"nofollow\">a single steel beam<\/a>\u201d to \u201cnearly finished.\u201d<\/p>\n<p>These numbers are also based on 5GW of capacity, meaning about 3.84GW of IT load, or about $111.5 billion in GPUs and associated gear (roughly 57.5% of NVIDIA\u2019s FY2026 data center revenue) actually getting built.<\/p>\n<p>Sightline (and basically everyone else) argues that there\u2019s a power bottleneck holding back data center development, 
and <a href=\"https:\/\/www.camus.energy\/blog\/why-does-it-take-so-long-to-connect-a-data-center-to-the-grid?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Camus explains<\/a> that the biggest problem is a lack of transmission capacity (the amount of power that can be moved) and power generation (creating the power itself):<\/p>\n<p>The biggest driver of delay is simple: our power system doesn\u2019t have enough extra transmission capacity and generation to serve dozens of gigawatts of new, high-utilization demand 100% of the time. Data centers require round-the-clock power at levels that rival or exceed the needs of small cities, and building new transmission infrastructure and generation requires years of permitting, land acquisition, supply chain management, and construction.<\/p>\n<p>Camus adds that America also isn\u2019t really prepared to add this much power at once:<\/p>\n<p>Inside utilities, planners and engineers are working diligently to connect new loads. But the tools available to planners were built for extending power lines to new neighborhoods or upgrading equipment as communities grow. They weren\u2019t designed to analyze 50 new service requests of 100 MW each, all while new generation applications pile up.<\/p>\n<p>As a result, planners and engineers are overwhelmed; they\u2019re stuck working to review new applications while simultaneously configuring new tools that are better equipped for the scale of this challenge. And unlike generation interconnection, which has well-defined steps across most ISOs and utilities, the process for evaluating large loads is often much more ad hoc. This makes adopting the right tools much more difficult too. 
In fact, the majority of utilities and ISO\/RTOs are still developing formal study procedures.<\/p>\n<p>Nevertheless, I also think there\u2019s another, more obvious reason: it takes way longer to build a data center than anybody is letting on, as evidenced by the fact that we only added 3GW or so of actual capacity in America in 2025. NVIDIA is selling GPUs years into the future, and its ability to grow, or even just maintain its current revenues, depends wholly on its ability to convince people that this is somehow rational.<\/p>\n<p>Let me give you an example. OpenAI and Oracle\u2019s Stargate Abilene data center project <a href=\"https:\/\/www.datacenterdynamics.com\/en\/news\/crusoe-confirms-plans-for-200mw-ai-data-center-in-texas\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">was first announced in July 2024 as a 200MW data center<\/a>. In October 2024, the joint venture between Crusoe, Blue Owl and Primary Digital Infrastructure <a href=\"https:\/\/www.crusoe.ai\/resources\/newsroom\/crusoe-blue-owl-capital-primary-digital-joint-venture?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">raised $3.4 billion<\/a>, with the 200MW of capacity due to be delivered \u201cin 2025.\u201d <a href=\"https:\/\/www.esig.energy\/wp-content\/uploads\/2025\/05\/ESIG_LLTF_PresentationLancium.pdf?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">A mid-2025 presentation from land developer Lancium<\/a> said it would have \u201c1.2GW online by YE2025.\u201d In a May 2025 <a href=\"https:\/\/www.crusoe.ai\/resources\/newsroom\/crusoe-blue-owl-capital-and-primary-digital-infrastructure-enter-joint-venture?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">announcement<\/a>, Crusoe, Blue Owl, and Primary Digital Infrastructure announced the creation of a $15 billion joint vehicle, and said that Abilene would now be 8 buildings, with the first two buildings being energized by the \u201cfirst half of 2025,\u201d and 
that the rest would be \u201cenergized by mid-2026.\u201d Each building would have 50,000 GPUs, and the total IT load is meant to be 880MW or so, with a total power draw of 1.2GW.<\/p>\n<p>I\u2019m not interested in discussing OpenAI walking away from the <a href=\"https:\/\/www.datacenterdynamics.com\/en\/news\/oracleopenai-drop-plans-to-expand-flagship-abilene-stargate-site-meta-in-talks-to-pick-up-crusoe-capacity-with-nvidias-help\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">supposedly-planned extensions to Abilene, because the expansion never existed and was never going to happen<\/a>.<\/p>\n<p>In December 2025, <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2025-12-12\/some-oracle-data-centers-for-openai-delayed-to-2028-from-2027?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Oracle stated that it had \u201cdelivered\u201d 96,000 GPUs<\/a>, and in February 2026, Oracle was <a href=\"https:\/\/x.com\/Oracle\/status\/2018094827213136154?ref=wheresyoured.at\" rel=\"nofollow\">still only referring to two buildings<\/a>, likely because that\u2019s all that\u2019s been finished. My sources in Abilene tell me that Building Three is nearly done, but\u2026this thing is meant to be fully turned on in mid-2026. <a href=\"https:\/\/web.archive.org\/web\/20260122203159\/https:\/\/www.mortenson.com\/projects\/abilene-data-center-development\" rel=\"nofollow noopener\" target=\"_blank\">Developer Mortenson claims the entire project will be completed by October 2026<\/a>, which it obviously, blatantly won\u2019t.<\/p>\n<p>I hate to speak in conspiratorial terms, but this feels like a blatant coverup with the active participation of the press. 
CNBC reported in September 2025 that \u201c<a href=\"https:\/\/www.cnbc.com\/2025\/09\/23\/openai-first-data-center-in-500-billion-stargate-project-up-in-texas.html?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">the first data center in $500 billion Stargate project is open in Texas<\/a>,\u201d referring to a data center with an eighth of its IT load operational as \u201conline\u201d and \u201cup and running,\u201d with <a href=\"https:\/\/www.crusoe.ai\/resources\/newsroom\/crusoe-announces-flagship-abilene-data-center-is-live?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Crusoe adding two weeks later<\/a> that it was \u201clive,\u201d \u201cup and running\u201d and \u201ccontinuing to progress rapidly,\u201d all so that readers and viewers would think \u201cwow, Stargate Abilene is up and running,\u201d despite it being months if not years behind schedule.<\/p>\n<p>At its current rate of construction, Stargate Abilene will be fully built sometime in late 2027. 
Oracle\u2019s Port Washington Data Center, as of March 6, 2026, <a href=\"https:\/\/x.com\/sk7037\/status\/2029911649298362524?ref=wheresyoured.at\" rel=\"nofollow\">consisted of a single steel beam<\/a>. <a href=\"https:\/\/www.datacenterdynamics.com\/en\/news\/vantage-breaks-ground-on-texas-gigawatt-data-center-campus-for-openai\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Stargate Shackelford, Texas broke ground on December 15, 2025<\/a>, and <a href=\"https:\/\/truthout.org\/articles\/secretive-new-mexico-data-center-plan-races-forward-despite-community-pushback\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">as of December 2025, construction barely appears to have begun on Stargate New Mexico<\/a>. <a href=\"https:\/\/www.mortenson.com\/news-insights\/mortenson-begins-indiana-data-center-construction?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Meta\u2019s 1GW data center campus in Indiana only started construction in February 2026<\/a>.<\/p>\n<p>And, despite Microsoft <a href=\"https:\/\/blogs.microsoft.com\/blog\/2025\/09\/18\/inside-the-worlds-most-powerful-ai-datacenter\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">trying to mislead everybody<\/a> into believing that its Wisconsin data center had \u201carrived\u201d and \u201cbeen built,\u201d looking even an inch deeper suggests very little has actually come online \u2014 and, considering the first data center was $3.3 billion (<a href=\"https:\/\/www.wheresyoured.at\/data-center-crisis\/#:~:text=As%20I%20discussed,end%20of%202026.\" rel=\"nofollow noopener\" target=\"_blank\">remember: $14 million a megawatt<\/a> just for construction), I imagine Microsoft has successfully brought online about 235MW of power for Fairwater.<\/p>\n<p>What Microsoft wants you to think is that it has brought online gigawatts of power (its claims are always phrased in the future tense), because Microsoft, like everybody else, is building data centers at a glacial 
pace, because construction takes forever, even if you have the power, which nobody does!<\/p>\n<p>The concept of a hundred-megawatt data center is barely a few years old, and I cannot actually find a built, in-service gigawatt data center of any kind, just vague promises about theoretical Stargate campuses built for OpenAI, a company that cannot afford to pay its bills.<\/p>\n<p>Everybody keeps yammering on about \u201cwhat if data centers don\u2019t have power\u201d when they should be thinking about whether data centers are actually getting built. Microsoft <a href=\"https:\/\/blogs.microsoft.com\/blog\/2025\/09\/18\/inside-the-worlds-most-powerful-ai-datacenter\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">proudly boasted in September 2025<\/a> about its intent to build \u201cthe UK\u2019s largest supercomputer\u201d in Loughton, England with Nscale, and as of March 2026, it\u2019s literally <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/09\/from-press-release-to-scrap-metal-site-the-essex-supercomputer-thats-still-a-scaffolding-yard?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">a scaffolding yard full of pylons and scrap metal<\/a>. Stargate Abilene has been stuck at two buildings for upwards of six months.<\/p>\n<p>Here\u2019s what\u2019s actually happening: data center deals are being funded by eager private credit gargoyles that don\u2019t know shit about fuck. These deals get written up, usually by overly-eager reporters who don\u2019t bother to check whether the previous data centers ever got built, as massive \u201cmulti-gigawatt deals,\u201d and then nobody follows up to check whether anything actually happened.<\/p>\n<p>All that anybody needs to fund one of these projects is an eager-enough financier and a connection to NVIDIA. 
All Nebius had to do to <a href=\"https:\/\/nebius.com\/newsroom\/nebius-group-announces-proposed-private-offering-of-3-75-billion-of-convertible-senior-notes?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">raise $3.75 billion in debt<\/a> was <a href=\"https:\/\/archive.ph\/submit\/?url=https%3A%2F%2Fwww.bloomberg.com%2Fnews%2Farticles%2F2026-03-16%2Fmeta-to-spend-up-to-27-billion-on-ai-infrastructure-from-nebius&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">sign a deal with Meta<\/a> for data center capacity that doesn\u2019t exist and will likely take three to four years to build (it\u2019s never happening). Nebius has yet to finish <a href=\"https:\/\/www.wheresyoured.at\/why-are-we-still-doing-this\/#:~:text=This%20is%20on%20top%20of%20Microsoft%E2%80%99s%20%2417.4%20billion%20deal%2C%20and%2C%20of%20course%2C%20Meta%E2%80%99s%20%243%20billion%20deal%20from%20last%20year.%C2%A0\" rel=\"nofollow noopener\" target=\"_blank\">its Vineland, New Jersey data center for Microsoft<\/a>, which was meant to be \u201c<a href=\"https:\/\/nebius.com\/blog\/posts\/300-mw-new-jersey-and-iceland-regions?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">at 100MW<\/a>\u201d by the end of 2025, but <a href=\"https:\/\/northwiseproject.com\/nbis-stock-vineland-nj-data-center\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">appears to have had only 50MW (the first phase) available as of February 2026<\/a>.<\/p>\n<p>I\u2019m just gonna come out and say it: I think a lot of these data center deals are trash, will never get built, and thus will never get paid. 
The tech industry has taken advantage of an understandable lack of knowledge about construction or power timelines in the media to pump out endless stories about \u201cdata center capacity in progress\u201d as a means of obfuscating an ever-growing scandal: that hundreds of billions of dollars\u2019 worth of NVIDIA GPUs got sold to go in projects that may never be built.<\/p>\n<p>These things aren\u2019t getting built, or if they\u2019re getting built, it\u2019s taking way, way longer than expected, which means that interest on that debt is piling up. The longer it takes, the less rational it becomes to buy further NVIDIA GPUs \u2014 after all, if data centers are taking anywhere from 18 months to three years to build, why would you be buying more of them? Where are you going to put them, Jensen?<\/p>\n<p>This also seriously brings into question the appetite that private credit and other financiers have for funding these projects, because much of the economic potential comes from the idea that these projects get built and have stable tenants. 
Furthermore, if the supply of AI compute is a bottleneck, this suggests that when (or if) that bottleneck is ever cleared, there will suddenly be a massive supply glut, lowering the overall value of the data centers in progress\u2026which are, by the way, all filled with Blackwell GPUs, which will be two or three years old by the time the data centers are finally turned on.<\/p>\n<p>That\u2019s before you get to <a href=\"https:\/\/www.wheresyoured.at\/data-center-crisis\/\" rel=\"nofollow noopener\" target=\"_blank\">the fact that the ruinous debt behind AI data centers makes them all remarkably unprofitable<\/a>, or that<a href=\"https:\/\/www.wheresyoured.at\/hatersguide-saas\/#:~:text=Pretty%20much%20every%20AI%20startup%20is%20in%20SaaS%20and%20raises%20hundreds%20of%20millions%20of%20dollars%20so%20that%20it%20can%20make%20single%20or%20double%2Ddigit%20millions%20of%20dollars%20a%20month.%20This%20sounds%20sarcastic%20%E2%80%94%20petty%20even!%20%E2%80%94%20but%20it%E2%80%99s%20the%20truth%2C%20and%20nobody%E2%80%99s%20margins%20appear%20to%20be%20improving.%C2%A0\" rel=\"nofollow noopener\" target=\"_blank\"> their customers are AI startups that lose hundreds of millions or billions of dollars a year<\/a>, or that NVIDIA is the largest company on the stock market, and said valuation is a result of a data center construction boom that appears to be decelerating, and that, even if it weren\u2019t, is moving at a glacial pace compared to NVIDIA\u2019s sales.<\/p>\n<p>Not to sound unprofessional or nothing, but what the fuck is going on? We have 241GW of \u201cplanned\u201d capacity in America, of which only 79.5GW is \u201cunder active development,\u201d but when you dig deeper, only 5GW of capacity is actually under construction?\u00a0<\/p>\n<p>The entire AI bubble is a god damn mirage. 
Every single \u201cmulti-gigawatt\u201d data center you hear about is a pipedream, little more than a few contracts and some guys with their hands on their hips saying \u201cbrother we\u2019re gonna be so fuckin\u2019 rich!\u201d as they siphon money from private credit \u2014 and, by extension, you, because where does private credit get its capital from? That\u2019s right. A lot comes from pension funds and insurance companies.<\/p>\n<p>Here\u2019s the reality: data centers take forever. Every hyperscaler and neocloud talking about \u201ccontracted compute\u201d or \u201cplanned capacity\u201d may as well be telling you about their planned dinners with The Grinch and Godot. The insanity of the AI buildout will be seen as one of the largest wastes of capital of all time (<a href=\"https:\/\/justdario.com\/2025\/10\/the-data-centers-frenzy-will-be-remembered-as-the-largest-waste-of-capital-in-history\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">to paraphrase JustDario<\/a>), and I anticipate that the majority of the data center deals you\u2019re reading about simply never get built.<\/p>\n<p>The fact that there\u2019s so much data about data center construction and so little data about completed construction suggests that those preparing the reports are in on the con. 
I give credit to CBRE, Sightline and Wood Mackenzie for having the courage to even lightly push back on the narrative, even if they do so while obfuscating terms like \u201ccapacity\u201d or \u201cpower\u201d in ways that reporters and other analysts are sure to misinterpret.<\/p>\n<p>Hundreds of billions of dollars have been sunk into buying GPUs, in some cases years in advance, to put into data centers that are being built at a rate that means the GPUs behind NVIDIA\u2019s 2025 and 2026 revenues won\u2019t actually be put into operation until 2028 or 2029, and that\u2019s making the big assumption that any of it actually gets built.<\/p>\n<p>I think it\u2019s also fair to ask where the money is actually going. 2025\u2019s $178.5 billion in US-based data center deals doesn\u2019t appear to be resulting in any immediate (or even future) benefit to anybody involved.<\/p>\n<p>I also wonder whether the demand actually exists to make any of this worthwhile, or what people are actually paying for this compute.\u00a0<\/p>\n<p>If we assume 3GW of IT load capacity was brought online in America, that should (theoretically) mean tens of billions of dollars of revenue thanks to the \u201cinsatiable demand for AI\u201d \u2014 except nobody appears to be showing massive amounts of revenue from these data centers.\u00a0<\/p>\n<p><a href=\"https:\/\/ir.applieddigital.com\/sec-filings\/all-sec-filings\/content\/0001144879-25-000021\/0001144879-25-000021.pdf?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Applied Digital only had $144 million in revenue in FY2025<\/a> (and lost $231 million making it). 
CoreWeave, which claimed to have \u201c<a href=\"https:\/\/www.cnbc.com\/2026\/02\/26\/coreweave-crwv-q4-earnings-report-2025.html?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">850MW of active power<\/a> (or around 653MW of IT load)\u201d at the end of 2025 (up from 420MW in <a href=\"https:\/\/investors.coreweave.com\/news\/news-details\/2025\/CoreWeave-Reports-Strong-First-Quarter-2025-Results\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Q1 FY2025<\/a>, or 323MW of IT load), <a href=\"https:\/\/d18rn0p25nwr6d.cloudfront.net\/CIK-0001769628\/72af7c2b-1904-4f3d-b4f6-a06f58fcdf63.pdf?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">made $5.13 billion of revenue (and lost $1.2 billion before tax) in FY2025<\/a>.\u00a0<\/p>\n<p>Nebius? <a href=\"https:\/\/assets.nebius.com\/assets\/2829268f-924f-4846-87b5-2365b7e58b41\/Financial%20results_Q4%202025_11022026.pdf?cache-buster=2026-02-12T09:48:03.404Z&amp;_gl=1*vhhmn*_gcl_au*NjcwNDM0NTQ4LjE3NzA2NTUwNTQ.&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">$228 million, for a loss of $122.9 million<\/a> on 170MW of active power (or around 130MW of IT load). <a href=\"https:\/\/iren.gcs-web.com\/static-files\/07e0b197-cf31-4158-a124-8ff0b70203c3?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Iren lost $155.4 million on $184.7 million in revenue last quarter<\/a>, and that\u2019s even with a release of deferred tax liabilities of $182.5 million. 
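<\/p>\n<p>A quick aside on those \u201cIT load\u201d figures: they\u2019re consistent with dividing gross \u201cactive power\u201d by a Power Usage Effectiveness (PUE) of roughly 1.3. That 1.3 is my inference from the numbers above, not a figure any of these companies has published. A minimal sketch of the conversion:<\/p>

```python
# Sketch: converting gross facility power to IT load.
# ASSUMPTION (mine, not these companies'): the quoted "IT load"
# figures imply a PUE of roughly 1.3.
ASSUMED_PUE = 1.3

def it_load_mw(gross_mw: float, pue: float = ASSUMED_PUE) -> float:
    """IT load is the power actually available to servers:
    gross facility power divided by PUE."""
    return gross_mw / pue

# The (gross MW, quoted IT-load MW) pairs from the figures above.
for gross, quoted in [(850, 653), (420, 323), (170, 130)]:
    print(f"{gross}MW gross -> {it_load_mw(gross):.0f}MW IT load (quoted: {quoted}MW)")
```

<p>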
<a href=\"https:\/\/investor.equinix.com\/sec-filings\/annual-reports\/content\/0001101239-26-000032\/0001101239-26-000032.pdf?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Equinix made about $9.2 billion in revenue in its last fiscal year<\/a>, and <a href=\"https:\/\/www.macrotrends.net\/stocks\/charts\/EQIX\/equinix\/net-income?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">while it made a profit<\/a>, <a href=\"https:\/\/www.macrotrends.net\/stocks\/charts\/EQIX\/equinix\/revenue?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">it\u2019s unclear how much of that came from its large and already-existing data center portfolio<\/a>, though it\u2019s likely a lot considering Equinix is boasting about its \u201cmulti-megawatt\u201d data center plans <a href=\"https:\/\/www.equinix.com\/data-centers\/hyperscale-data-centers?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">with no discussion of its actual capacity<\/a>.<\/p>\n<p>And, of course, Google, Amazon, and Microsoft refuse to break out their AI revenues. <a href=\"https:\/\/www.wheresyoured.at\/oai_docs\/\" rel=\"nofollow noopener\" target=\"_blank\">Based on my reporting from last year<\/a>, OpenAI spent about $8.67 billion on Azure through September 2025, and <a href=\"https:\/\/www.wheresyoured.at\/costs\/\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic around $2.66 billion in the same period on Amazon Web Services<\/a>. Given that these are the two largest consumers of AI compute, this heavily suggests that the actual demand for AI services is pretty weak, and mostly taken up by a few companies (or hyperscalers running their own services).\u00a0<\/p>\n<p>At some point reality will set in and spending on NVIDIA GPUs will have to decline. 
It\u2019s truly insane how much has been invested so many years in advance, and it\u2019s remarkable that nobody else seems this concerned.<\/p>\n<p>Simple questions like \u201cwhere are the GPUs going?\u201d and \u201chow many actual GPUs have been installed?\u201d are left unanswered as article after article gets written about massive, multi-billion dollar compute deals for data centers that won\u2019t be built before, at this rate, 2030.\u00a0<\/p>\n<p>And I\u2019d argue it\u2019s convenient to blame this solely on power issues, when the reality is clearly based on construction timelines that never made any sense to begin with. If it were just a power issue, more data centers would be near or at the finish line, waiting for power to be turned on. Instead, well-known projects like Stargate Abilene are being built at a glacial pace as eager reporters claim that a quarter of the buildings being functional nearly a year after they were meant to be turned on is some sort of achievement.<\/p>\n<p>Then there\u2019s the very, very obvious scandal that NVIDIA, the largest company on the stock market, is making hundreds of billions of dollars of revenue on chips that aren\u2019t being installed. It\u2019s fucking strange, and I simply do not understand how it keeps beating and raising expectations every quarter given the fact that the majority of its customers likely won\u2019t be able to put their current purchases to use until the next decade.<\/p>\n<p>Assuming that Vera Rubin actually ships in 2026, it\u2019s reasonable to believe that people will be installing these things well into 2028, if not further, and that\u2019s assuming everything doesn\u2019t collapse by then. Why would you bother? 
What\u2019s the point, especially if you\u2019re sitting on a pile of Blackwell GPUs?\u00a0<\/p>\n<p>Why are we doing any of this?\u00a0<\/p>\n<p>Last week also featured a truly bonkers story about Supermicro, a reseller of GPUs used by CoreWeave and Crusoe, where co-founder Wally Liaw and <a href=\"https:\/\/www.justice.gov\/opa\/pr\/three-charged-conspiring-unlawfully-divert-cutting-edge-us-artificial-intelligence?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">several other co-conspirators were arrested for selling hundreds of millions of dollars of NVIDIA GPUs to China<\/a>, with the intent to sell billions more.\u00a0<\/p>\n<p>Liaw, one of Supermicro\u2019s co-founders, <a href=\"https:\/\/hindenburgresearch.com\/smci\/?ref=wheresyoured.at#:~:text=Key%20Rehire%20%231%3A%20Wally%20Liaw%2C%20Super%20Micro%E2%80%99s%20Co%2DFounder%20And%20Former%20Senior%20Vice%20President%20Of%20International%20Sales%20During%20The%20Accounting%20Scandal%2C%20Resigned%20In%20January%202018\" rel=\"nofollow noopener\" target=\"_blank\">previously resigned in a 2018 accounting scandal<\/a> where Supermicro couldn\u2019t file its annual reports, only to be (per <a href=\"https:\/\/hindenburgresearch.com\/smci\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Hindenburg Research\u2019s excellent report<\/a>) rehired <a href=\"https:\/\/www.bamsec.com\/filing\/137536523000044\/1?cik=1375365&amp;hl=4259:4387&amp;hl_id=4kpvuafile&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">in 2021 as a consultant<\/a>, and restored to the board in 2023, <a href=\"https:\/\/www.bamsec.com\/filing\/137536523000044\/1?cik=1375365&amp;hl=2283:2334&amp;hl_id=vyqnpb08lx&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">per a filed 8K<\/a>.\u00a0<\/p>\n<p>Mere days before his arrest, <a href=\"https:\/\/x.com\/edzitron\/status\/2034999589674103097?s=20&amp;ref=wheresyoured.at\" rel=\"nofollow\">Liaw was parading around 
NVIDIA\u2019s GTC conference<\/a>, pouring unnamed liquids in ice luges and standing two people away from NVIDIA CEO Jensen Huang. Liaw <a href=\"https:\/\/x.com\/edzitron\/status\/2035001000747024458?s=20&amp;ref=wheresyoured.at\" rel=\"nofollow\">was also seen congratulating the CEO of Lambda on its new CFO appointment on LinkedIn<\/a>, as well as shaking hands (along with Supermicro CEO Charles Liang, who has not been arrested or indicted) <a href=\"https:\/\/x.com\/edzitron\/status\/2035003840877973539?s=20&amp;ref=wheresyoured.at\" rel=\"nofollow\">with Crusoe (the company building OpenAI\u2019s Abilene data center) CEO Chase Lochmiller<\/a>.\u00a0<\/p>\n<p>Supermicro isn\u2019t named in the indictment for reasons I imagine are perfectly normal and not related to keeping the AI party going. Nevertheless, Liaw and his co-conspirators are accused of shipping hundreds of millions of dollars\u2019 worth of NVIDIA GPUs to China through a web of counterparties and brokers, with over $510 million of them shipped between April and mid-May 2025. While the indictment isn\u2019t specific as to the breakdown, it confirms that some Blackwell GPUs made it to China, and I\u2019d wager quite a few.<\/p>\n<p>The mainstream media has already stopped thinking about this story, despite Supermicro being a huge reseller of NVIDIA gear, contributing billions of dollars of revenue, with at least $500 million of that apparently going to China. The fact that Supermicro wasn\u2019t specifically named in the case is enough to erase the entire tale from their minds, along with any wonder about how NVIDIA, and specifically Jensen Huang, didn\u2019t know.<\/p>\n<p>This also isn\u2019t even close to the only time this has happened. 
Late last year, <a href=\"https:\/\/archive.ph\/f3Id1?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Bloomberg reported<\/a> on Singapore-based Megaspeed \u2014 a (to quote Bloomberg) \u201conce-obscure spinoff of a Chinese gaming enterprise [that] evolved into the single largest Southeast Asian buyer of NVIDIA chips\u201d \u2014 and highlighted odd signs that suggest it might be operating as a front for China.\u00a0<\/p>\n<p>As a neocloud, Megaspeed rents out AI compute capacity like CoreWeave, and while NVIDIA (and Megaspeed) both deny any of their GPUs are going to China, Megaspeed, to quote Bloomberg, has \u201csomething of a Chinese corporate twin\u201d:<\/p>\n<p>This firm used similar presentation materials to Megaspeed\u2019s, had a nearly identical website to a Megaspeed sub-brand and claimed Megaspeed\u2019s Southeast Asia employees as its own. It\u2019s also posted job ads at and near the Shanghai data center whose rendering was used in Megaspeed\u2019s investor deck \u2014 including for engineering work on restricted Nvidia GPUs.<\/p>\n<p>Bloomberg reported that Megaspeed imported goods \u201cworth more than a thousand times its cash balance in 2023,\u201d with two-thirds of its imports being NVIDIA products. The investigation got weirder when Bloomberg tried to track down specific circuit boards that NVIDIA had told the US government were in specific sites:<\/p>\n<p>Data centers aren\u2019t the only Megaspeed facilities Nvidia visited. The vast majority of Megaspeed\u2019s $2.4 billion worth of Bianca boards, the circuit boards that house Nvidia\u2019s top-end GB200 and GB300 semiconductors, were unaccounted for at the sites Nvidia described to Washington. 
After Bloomberg asked about those products, the chipmaker went to separate Megaspeed warehouses, an Nvidia official said, and confirmed the Bianca boards are there.<\/p>\n<p>This person declined to specify the number observed in storage, nor where and when the chips \u2014 imported more than half a year ago \u2014 would be put to use. \u201cBuilding data centers is a complex process that takes many months and involves many suppliers, contractors and approvals,\u201d an Nvidia spokesperson said.<\/p>\n<p>Things get weirder throughout the article, with a Chinese company called \u201cShanghai Shuoyao\u201d having a near-identical website and investor deck (as mentioned) to Megaspeed, with several of the \u201ccomputing clusters under construction\u201d actually being in China.\u00a0<\/p>\n<p><img class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"735\" height=\"1157\"  \/><\/p>\n<p>Things get a lot weirder as Bloomberg digs in, including a woman called \u201cHuang\u201d who may or may not be the CEO of both Megaspeed and an associated company called \u201cShanghai Hexi,\u201d which is also owned by the Yangtze River Delta project\u2026 and who was also photographed sitting next to Jensen Huang at an event in Taipei in 2024.<\/p>\n<p><img class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"804\" height=\"899\"  \/><\/p>\n<p>While all of this is extremely weird and suspicious, I must be clear that there is no definitive answer as to what\u2019s going on, other than that NVIDIA GPUs are absolutely making it to China, somehow. 
I also think that it would be really tough for Jensen Huang to not know about it, or for billions of dollars of GPUs to be somewhere without NVIDIA\u2019s knowledge.\u00a0<\/p>\n<p>Anyway, Supermicro CEO Charles Liang has yet to comment on Wally Liaw or his alleged co-conspirators, other than a statement from the company that says that their acts were \u201c<a href=\"https:\/\/www.supermicro.com\/en\/pressreleases\/super-micro-computer-issues-statement-action-us-attorneys-office?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">a contravention of the Company\u2019s policies and compliance controls<\/a>.\u201d\u00a0<\/p>\n<p>Jensen Huang does not appear to have been asked if he knew anything about this \u2014 not Megaspeed, not Supermicro, or really any challenging question of any kind for the last few years of his life.\u00a0<\/p>\n<p>Huang did, however, <a href=\"https:\/\/archive.ph\/3EyYD?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">say back in May 2025<\/a> that there was \u201cno evidence of any AI chip diversion,\u201d and that the countries in question \u201cmonitor themselves very carefully.\u201d\u00a0<\/p>\n<p>For legal reasons I am going to speak very carefully: I cannot say that Jensen is wrong, or lying, but I think it\u2019s incredible, remarkable even, that he had no idea that any of this was going on. Really? Hundreds of millions if not billions of dollars of GPUs are making it to China \u2014 <a href=\"https:\/\/www.theinformation.com\/articles\/deepseek-using-banned-nvidia-chips-race-build-next-model?rc=kz8jh3&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">as reported by The Information in December 2025<\/a> \u2014 and Jensen Huang had no idea? 
I find that highly unlikely, though I obviously can\u2019t say for sure.<\/p>\n<p>In the event that NVIDIA had knowledge \u2014 which I am not saying it did, of course \u2014 this is a huge scandal that, for the most part, nobody has bothered to keep an eye on outside of a few brave souls at The Information and Bloomberg who give a shit about the truth. Has anybody bothered to ask Jensen about this? People talk to him on camera all the time.\u00a0<\/p>\n<p>Sidenote: Earlier today, <a href=\"https:\/\/t.co\/mcW7TBPgG0?ref=wheresyoured.at\" rel=\"nofollow\">US Senators Jim Banks and Elizabeth Warren issued a letter to Howard Lutnick, Trump&#8217;s Commerce Secretary<\/a>, demanding the Department of Commerce take \u201call necessary and appropriate actions\u201d to stop the flow of NVIDIA chips to China, including potentially blocking exports to countries believed to be intermediaries, like Malaysia, Thailand, Vietnam, and Singapore. <\/p>\n<p>The arrest of Liaw has, it seems, ruffled some feathers in Washington, and I would not be shocked to see Huang sat before a congressional inquiry at some point. <\/p>\n<p>I\u2019ll also add that I am shocked that so many people are just shrugging and moving on from Supermicro, which is a major supplier of two of the major neoclouds (Crusoe and CoreWeave) and one of the minors (Lambda, which it also rents cloud capacity to). The idea that a company had no idea that several percentage points of its revenue were flowing directly to China via one of its co-founders is an utter joke.<\/p>\n<p>I hope we eventually find out the truth. 
Nevertheless, this kind of underhanded bullshit is a sign of desperation on the part of just about everybody involved.<\/p>\n<p>The End of Software Engineering \u2014 Hyperscalers Are Forcing AI On Their Workers, Destroying The Quality Of Their Products, and Crashing Their Services<\/p>\n<p>So, I want to explain something very clearly for you, because it\u2019s important you understand how fucked up shit has become: hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance reviews to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production.\u00a0<\/p>\n<p>In practice, this means that everybody is being expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever having written a line of code.\u00a0<\/p>\n<p>How do I know the last part? Because a trusted source told me \u2014 and I\u2019ll leave it at that.<\/p>\n<p>One might be forgiven for thinking this means that AI has taken a leap in efficacy, but the actual outcomes are a labyrinth of half-functional internal dashboards that measure random user data or convert files, hours spent to save minutes of time at some theoretical point. While non-technical workers aren\u2019t necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be \u201cfixed\u201d by actual software engineers who are also expected to do their jobs.<\/p>\n<p>These tools also allow near-incompetent <a href=\"https:\/\/www.wheresyoured.at\/the-era-of-the-business-idiot\/\" rel=\"nofollow noopener\" target=\"_blank\">Business Idiot<\/a> software engineers to do far more damage than they might have in the past. 
LLM use is relatively unrestrained (and actively incentivized) in at least one hyperscaler, with just about anybody allowed to spin up their own OpenClaw \u201cAI agent\u201d (read: series of LLMs that allegedly can do stuff with your inbox or Slack for no clear benefit, <a href=\"https:\/\/www.fastcompany.com\/91497841\/meta-superintelligence-lab-ai-safety-alignment-director-lost-control-of-agent-deleted-her-emails?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">other than their ability to delete all of your emails<\/a>). <a href=\"https:\/\/www.theinformation.com\/articles\/inside-meta-rogue-ai-agent-triggers-security-alert?rc=kz8jh3&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">In Meta\u2019s case<\/a>, this ended up causing a severe security breach:<\/p>\n<p>According to internal Meta communications and an incident report seen by The Information, a major security alert occurred last week after a Meta software engineer used an in-house agent tool, similar to OpenClaw, to analyze a technical question that another Meta employee had posted on an internal discussion forum. After doing the analysis, the AI agent posted a response in the discussion forum to the original question, offering advice on the technical issue, according to internal communications. The agent did so without approval from the employee.<\/p>\n<p>According to The Information, Meta systems storing large amounts of company and user-related data were accessible to engineers who didn\u2019t have permission to see them, and the event was marked as a sec-1 incident, the second-highest level of severity on an internal scale that Meta uses to rank security incidents.\u00a0<\/p>\n<p>The incident follows multiple problems caused at Amazon by its Kiro and Q LLMs. 
<a href=\"https:\/\/www.businessinsider.com\/amazon-tightens-code-controls-after-outages-including-one-ai-2026-3?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">I quote Business Insider<\/a>\u2019s Eugene Kim:\u00a0<\/p>\n<p>On March 2, customers across Amazon marketplaces saw incorrect delivery times when adding items to their carts. The incident led to nearly 120,000 lost orders and roughly 1.6 million website errors. Amazon&#8217;s AI tool Q was one of the primary contributors that triggered the event, according to an internal review.<\/p>\n<p>On March 5, another outage caused a 99% drop in orders across Amazon&#8217;s North American marketplaces, resulting in 6.3 million lost orders, one of the internal documents stated. One key factor was a production change that was deployed without using a formal documentation and approval process called Modeled Change Management.<\/p>\n<p>Despite the furious (and exhausting) marketing campaign around \u201cthe power of AI code,\u201d I believe that these events are just the beginning of the true consequences of AI coding tools: the slow destruction of the tech industry\u2019s software stack.\u00a0<\/p>\n<p>LLMs allow even the most incompetent dullard to do an impression of a software engineer, by which I mean you can tell it \u201cmake me software that does this\u201d or \u201clook at this code and fix it\u201d and said LLM will spend the entire time saying \u201cyou got this\u201d and \u201cthat\u2019s a great solution.\u201d\u00a0<\/p>\n<p>The problem is that while LLMs can write \u201call\u201d code, that doesn\u2019t mean the code is good, or that somebody can read the code and understand its intention (as these models do not think), or that having a lot of code is a good thing, now or in the future, for any company built using generated code.\u00a0<\/p>\n<p>LLM-based code is often verbose, and rarely aligns with in-house coding guidelines and standards, guaranteeing that it\u2019ll take 
far longer to chew through, which naturally means that those burdened with reviewing it will either skim-read it or feed it into another LLM to work out what the hell to do.<\/p>\n<p>Worse still, LLM use is also entirely directionless. Why is anybody at Meta using an OpenClaw? What is the actual thing that OpenClaw does, other than burn an absolute fuck-ton of tokens?\u00a0<\/p>\n<p>Think about this very, very simply for a second: you have given every engineer in the company the explicit remit to write all their code using LLMs, and incentivized them to do so by making sure their LLM use is tracked. You have now massively increased both the operating costs of the company (through token burn costs) and the volume of code being created.\u00a0<\/p>\n<p>To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result. This means that, across almost every major tech company, software engineers are being incentivized to stop learning how to write software or solve software architecture issues.\u00a0<\/p>\n<p>If you are just a person looking at code, you are only as good as the code the model makes, and as <a href=\"https:\/\/www.youtube.com\/watch?v=Q6nem-F8AG8&amp;ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Mo Bitar<\/a> recently discussed, these models are built to galvanize you, glaze you, and tell you that you\u2019re remarkable as you barely glance at globs of overwritten code that, even if it functions, eventually grows to a whole built with no intention or purpose other than what the model generated from your prompt.\u00a0<\/p>\n<p>Things only get worse when you add in the fact that hyperscalers like <a href=\"https:\/\/www.reuters.com\/business\/world-at-work\/meta-planning-sweeping-layoffs-ai-costs-mount-2026-03-14\/?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a> and 
<a href=\"https:\/\/www.crn.com\/news\/ai\/2026\/amazon-layoffs-hits-software-developers-engineers-directors-and-managers-in-washington?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">Amazon<\/a> love to lay off thousands of people at a time, which makes it even harder to work out why something was built the way it was, a task that becomes harder still when an LLM that lacks any thoughts or intentions built it. Entire chunks of multi-trillion dollar market cap companies are being written with these things, prompted by engineers (and non-engineers!) who may or may not be at the company in a month or a year to explain what prompts they used.\u00a0<\/p>\n<p>We\u2019re already seeing the consequences! Amazon lost hundreds of thousands of orders! Meta had a major security breach! The foundations of these companies are being rotted away through millions of lines of slop-code that, at best, occasionally gets the nod from somebody who has \u201csoftware engineer\u201d on their resume, and these people keep being fired too, lowering the likelihood that somebody who knows what\u2019s going on, or why something is built a certain way, will be able to stop something bad from happening.\u00a0<\/p>\n<p>Remember: Google, Amazon, Microsoft, and Meta all hold vast troves of personal information, intimate conversations, serious legal documents, financial information, in some cases even social security numbers, and all four of them along with a worrying chunk of the tech industry are actively encouraging their software engineers to stop giving a fuck about software.\u00a0<\/p>\n<p>Oh, you\u2019re so much faster with AI code? What does that actually mean? What have you built? Do you understand how it works? 
Did you look at the code before it shipped, or did you assume that it was fine because it didn\u2019t break?\u00a0<\/p>\n<p>This is creating a kind of biblical plague within software engineering \u2014 an entire tech industry built on reams of unmanageable and unintentional code pushed by executives and managers that don\u2019t do any real work. LLMs allow the incompetent to feign competence and the unproductive to produce work-adjacent materials borne of a loathing for labor and craftsmanship, leaning into the worst habits of the dullards that rule Silicon Valley.<\/p>\n<p>All the Valley knows is growth, and \u201cmore\u201d is regularly conflated with \u201cvaluable.\u201d The New York Times\u2019 Kevin Roose \u2014 in a shocking attempt at journalism \u2014 <a href=\"https:\/\/archive.ph\/6GxaK?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">recently wrote a piece celebrating the competition within Silicon Valley to burn more and more tokens using AI models<\/a>:<\/p>\n<p>An engineer at OpenAI processed 210 billion \u201ctokens\u201d \u2014 enough text to fill Wikipedia 33 times \u2014 through the company\u2019s artificial intelligence models over the last week, the most of any employee. At Anthropic, a single user of the company\u2019s A.I. coding system, Claude Code, racked up a bill of more than $150,000 in a month. And at tech companies like Meta and Shopify, managers have started to factor A.I. use into performance reviews, rewarding workers who make heavy use of A.I. tools and chastening those who don\u2019t.<\/p>\n<p>This is the new reality for coders, some of the first white-collar workers to feel the effects of A.I. as it sweeps through the economy. A.I. was supposed to help tech companies boost productivity and cut costs. 
But it has also created an expensive new status game, known as \u201ctokenmaxxing,\u201d among A.I.-obsessed workers who are desperate to prove how productive they are.<\/p>\n<p>Roose explains that both Meta and OpenAI have internal leaderboards that show how many tokens you\u2019ve used, with one software engineer in Stockholm spending \u201cmore than his salary in tokens,\u201d though Roose adds that his company pays for them.<\/p>\n<p>Roose describes a truly sick culture, one where <a href=\"https:\/\/community.openai.com\/t\/tokens-of-appreciation-milestone-awards-for-openai-api-token-usage\/1361639?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI gives awards to those who spend a lot of money on their tokens<\/a>, adding that he spoke with several tech workers who were spending thousands of dollars a day on tokens \u201cfor what amount to bragging rights.\u201d Roose also added one more insane detail: that one person found a loophole in Claude\u2019s $20-a-month plan using a piece of software made by Figma that allowed them to burn $70,000 in tokens.<\/p>\n<p>Despite all of this burn, Roose struggled to find anybody who was able to explain what they were doing beyond \u201cmaintaining large, complex pieces of software using coding agents running in parallel,\u201d but managed to actually find one particularly useful bit of information \u2014 that all of this might be performative:<\/p>\n<p>They said, by and large, that A.I. coding tools were making them more productive. But some also framed their use of A.I.
as a strategic move \u2014 a way to signal, to their colleagues and bosses, that they\u2019re keeping up with the times, as the era of human coding appears to be coming to an end.<\/p>\n<p>I do give Roose one point for wondering if \u201c&#8230;any of these tokenmaxxers [were] producing anything good, or whether they [were] merely spinning their wheels churning out useless code in an attempt to look busy.\u201d Good job Kevin.\u00a0<\/p>\n<p>That being said, I find this story horrifying, one that veers dangerously close to the behavior of drug addicts and cult followers. Throughout this story in one of the world\u2019s largest newspapers, Roose fails to find a single \u201ctokenmaxxer\u201d making something that they can actually describe, which has largely been my experience of evaluating anyone who talks nonstop about the power of \u201cagentic coding.\u201d\u00a0<\/p>\n<p>These people are sick, and are participating in a vile, poisonous culture based on needless expenses and endless consumption.\u00a0<\/p>\n<p>Companies incentivizing the amount of tokens you burn are actively creating a culture that mistakes excess for productivity, and incentivizing destructive tendencies built around constantly having to find stuff to do rather than doing things with intention. They are guaranteeing that their software will be poorly written and poorly maintained, all in the pursuit of \u201cdoing more AI\u201d for no reason other than that everybody else appears to be doing so.<\/p>\n<p>Anybody who actually works knows that the most productive-seeming people are often also the most useless, as they\u2019re doing things to seem productive rather than producing anything of note.
A great example of this is a <a href=\"https:\/\/archive.ph\/KFmrl?ref=wheresyoured.at\" rel=\"nofollow noopener\" target=\"_blank\">recent Business Insider interview<\/a> with a person who got laid off from Amazon after learning \u201cAI\u201d and \u201cvibe coding,\u201d and how surprised she was that these supposed skills didn\u2019t make her safer from layoffs:<\/p>\n<p>At the time of the October layoffs, there was debate around whether AI was the reason.<\/p>\n<p>The company was encouraging us to use AI at the time, but I don&#8217;t think it took my job. I wrote descriptions for internal products at Amazon, and when I used AI to help, I&#8217;d need to ask it to rewrite its output without fluff words. It didn&#8217;t sound like how people talk. Despite my ethical qualms, I used AI, but, in my opinion, it was nowhere close to replacing my role. Before I was laid off, I helped build an internal site for Amazon using AI. I hadn&#8217;t really coded before, but with a colleague&#8217;s help, I learned how to vibe code with a lot of trial and error.<\/p>\n<p>I thought using AI for this project and showcasing different skills would make me more valuable to the company, but in the end, it didn&#8217;t keep me from being laid off.<\/p>\n<p>To be clear, this person is a victim. She was pressured by Amazon to take up useless skills and build useless things in an expensive and inefficient way, and ended up losing her job despite adopting tools she didn\u2019t like under duress.\u00a0<\/p>\n<p>Sidenote: If you read that sentence and suggest that she should\u2019ve used AI better, you are a mark. You are being conned into an unpaid marketing job for AI companies that actively hate you.\u00a0<\/p>\n<p>This person was, at one point, actively part of building an internal Amazon site using AI, and had to \u201clearn to vibe code with a lot of trial and error\u201d and the help of a colleague. Was this a good use of her time?
Was this a good use of her colleague\u2019s time?<\/p>\n<p>No! In fact, across all of these goddamn AI coding hype-beast Twitter accounts and endless proclamations about the incredible power of AI agents, I can find very few accounts of something happening other than someone saying \u201cyeah I\u2019m more productive I guess.\u201d\u00a0<\/p>\n<p>I am certain that at some point in the near future a major big tech service is going to break in a way that isn\u2019t immediately fixable as a result of thousands of people building software with AI coding tools, a problem compounded by the dual brain-drain forces of layoffs and a culture that actively empowers people to look busy rather than actually produce useful things.<\/p>\n<p>What else would you expect? You\u2019re giving people a number that they can increase to seem better at their job. What do you think they\u2019re going to do? Try to be efficient? Or use these things as much as humanly possible, even if there really isn\u2019t a reason to?<\/p>\n<p>I haven\u2019t even gotten to how expensive all of this must be, in part because it\u2019s hard to fully comprehend.\u00a0<\/p>\n<p>But what I do know is that big tech is setting itself up for crisis after crisis, especially when Anthropic and OpenAI stop <a href=\"https:\/\/www.wheresyoured.at\/why-are-we-still-doing-this\/#:~:text=the%20actual%20costs.-,Most%20AI%20Users%20%E2%80%94%20Especially%20Coders%20%E2%80%94%20Are%20Unprepared%20For%20The%20Cost%20Of%20Paying%20For%20Their%20Actual%20Token%20Burn,-So%2C%20let%E2%80%99s%20do\" rel=\"nofollow noopener\" target=\"_blank\">subsidizing their models to the tune of allowing people to spend $2500 or more on a $200-a-month subscription<\/a>.\u00a0<\/p>\n<p>What happens to the people who are dependent on these models? What happens to the people who forgot how to do their jobs because they decided to let AI write all of their code?
Will they even be able to do their jobs anymore?\u00a0\u00a0<\/p>\n<p>Large Language Models are creating Silicon Valley Habsburgs \u2014 workers who are intellectually trapped at whatever point they started leaning on these models, which were subsidized to the point that their bosses encouraged them to use them as much as humanly possible. While they might be able to claw their way back into the workforce, a software engineer who\u2019s only really used LLMs for anything longer than a few months will have to relearn the basic habits of their job, and find that their skills extend no further than the last training run of whatever model they last used.\u00a0<\/p>\n<p>I\u2019m sure there are software engineers using these models ethically, who read all the code, who have complete ownership of it and use it as a means of handling very specific units of work.<\/p>\n<p>I\u2019m also sure that there are some who are just asking it to do stuff, glancing at the code and shipping it. It\u2019s impossible to measure how many of each camp there are, but hearing Spotify\u2019s CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn\u2019t replacing software engineering at all \u2014 it\u2019s mindlessly removing friction and putting the burden of \u201cgood\u201d or \u201cright\u201d on a user that it\u2019s intentionally gassing up.<\/p>\n<p>Ultimately, this entire era is a test of a person\u2019s ability to understand and appreciate friction.\u00a0<\/p>\n<p>Friction can be a very good thing. When I don\u2019t understand something, I make an effort to do so, and the moment it clicks is magical.
In the last three years I\u2019ve had to teach myself a great deal about finance, accountancy, and the greater technology industry, and there have been so many moments where I\u2019ve walked away from the page frustrated, stewing in self-doubt, convinced I\u2019d never understand something.<\/p>\n<p>I also have the luxury of time, and sadly, many software engineers face increasingly deranged deadlines set by bosses who don\u2019t understand a single fucking thing, let alone what LLMs are capable of or what responsible software engineering is. The push from above to use these models because they can \u201cwrite code faster than a human\u201d is a disastrous conflation of \u201cfast\u201d and \u201cgood,\u201d all because of flimsy myths peddled by venture capitalists and the media about \u201cLLMs being able to write all code.\u201d<\/p>\n<p>Generative code is a digital ecological disaster, one that will take years to repair thanks to company remits to write as much code as fast as possible.\u00a0<\/p>\n<p>Every single person responsible must be held accountable, especially for the calamities to come as lazily-managed software companies see the consequences of building their software on sand.\u00a0<\/p>\n<p>In the end, everything about AI is built on lies.\u00a0<\/p>\n<p>Hundreds of gigawatts of data centers in development equate to 5GW of actual data centers in construction.\u00a0<\/p>\n<p>Hundreds of billions of dollars of GPU sales are mostly sitting waiting for somewhere to go.<\/p>\n<p>Anthropic\u2019s constant flow of \u201cannualized\u201d revenues <a href=\"https:\/\/www.wheresyoured.at\/the-beginning-of-history\/#:~:text=inference%20and%20training.-,Anthropic%E2%80%99s%20CFO%20Said%20It%20Made%20%245%20Billion%20in%20Lifetime%20Revenues%20%E2%80%94%20But%20When%20You%20Add%20Up%20The%20Annualized%20Revenues%20Reported%2C%20They%E2%80%99re%20In%20Excess%20of%20%246%20Billion%2C%20Suggesting%20Reporters%20Are%20Being%20Misled,-To%20be%20abundantly\" rel=\"nofollow
noopener\" target=\"_blank\">ended up equating to literally $5 billion in revenue in four years<\/a>, on $25 billion or more in salaries and compute.<\/p>\n<p>Despite all of those data centers supposedly being built, nobody appears to be making a profit on renting out AI compute.<\/p>\n<p>AI\u2019s supposed ability to \u201cwrite all code\u201d really means that every major software company is filling its codebase with slop while massively increasing its operating expenses. Software engineers aren\u2019t being replaced \u2014 they\u2019re being laid off because the software that\u2019s meant to replace them is too expensive, while in practice not replacing anybody at all.<\/p>\n<p>Looking even an inch beneath the surface of this industry makes it blatantly obvious that we\u2019re witnessing one of the greatest corporate failures in history. The smug, condescending army of AI boosters exists to make you look away from the harsh truth \u2014 AI makes very little revenue, lacks tangible productivity benefits, and seems to, at scale, actively harm the productivity and efficacy of the workers who are being forced to use it.<\/p>\n<p>Every executive forcing their workers to use AI is a ghoul and a dullard, one that doesn\u2019t understand what actual work looks like, likely because they\u2019re a lazy, self-involved prick.\u00a0<\/p>\n<p>Every person I talk to at a big tech firm is depressed, nagged endlessly to \u201cget on board with AI,\u201d to ship more, to do more, all without any real definition of what \u201cmore\u201d means or what it contributes to the greater whole, all while constantly worrying about being laid off thanks to the truly noxious cultures that are growing around these services.<\/p>\n<p>AI is actively poisonous to the future of the tech industry.
It\u2019s expensive, unproductive, actively damaging to the learning and efficacy of its users, depriving them of the opportunities to learn and grow, stunting them to the point that they know less and do less because all they do is prompt. Those who celebrate it are ignorant or craven, captured or crooked, or desperate to be the person to herald the next era, even if that era sucks, even if that era is inherently illogical, even if that era is fucking impossible when you think about it for more than two seconds.<\/p>\n<p>And in the end, AI is a test of your introspection. Can you tell when you truly understand something? Can you tell why you believe in something, other than that somebody told you you should, or made you feel bad for believing otherwise? Do you actually want to know stuff, or just have the ability to call up information when necessary?\u00a0<\/p>\n<p>How much joy do you get out of becoming a better person? If you can\u2019t answer that question with certainty, maybe you should just use an LLM, as you don\u2019t really give a shit about anything.<\/p>\n<p>And in the end, you\u2019re exactly the mark built for an AI industry that can\u2019t sell itself without spinning lies about what it can (or theoretically could) do.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"Hi!
If you like this piece and want to support my independent reporting and analysis, why not subscribe&hellip;\n","protected":false},"author":2,"featured_media":190266,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-559255","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/559255","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=559255"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/559255\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/190266"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=559255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=559255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=559255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}