{"id":386485,"date":"2026-01-04T01:31:29","date_gmt":"2026-01-04T01:31:29","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/386485\/"},"modified":"2026-01-04T01:31:29","modified_gmt":"2026-01-04T01:31:29","slug":"ais-imperial-agenda-with-karen-hao","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/386485\/","title":{"rendered":"AI\u2019s Imperial Agenda with Karen Hao"},"content":{"rendered":"<\/p>\n<p>After OpenAI CEO Sam Altman launched ChatGPT in 2022, the race for dominance in the field of artificial intelligence hit warp speed. Silicon Valley has poured billions of dollars into developing AI, building data centers, and promising a future free from the chains of unfulfilling work across the globe.<\/p>\n<p>But in \u201cEmpire of AI: Dreams and Nightmares in Sam Altman\u2019s OpenAI,\u201d tech reporter Karen Hao pulls back the curtain, unveiling the human and environmental cost of artificial intelligence and the colonial ambitions undergirding Silicon Valley\u2019s efforts to fuel the rise of AI.<\/p>\n<p>This week on The Intercept Briefing, host Jessica Washington speaks to Hao about her book and the dawn of the AI empire. \u201cEmpires similarly consolidate a lot of economic might by exploiting extraordinary amounts of labor and not actually paying that labor sufficiently or at all,\u201d says Hao. \u201cSo that\u2019s how they are able to amass wealth \u2014 because they\u2019re not actually distributing it.\u201d<\/p>\n<p>\u201cThe speed at which they\u2019re constructing the infrastructure for training and deploying their AI models\u201d is what shocks Hao the most, as \u201cthis infrastructure is actually not technically necessary, and \u2026 somehow the companies have effectively convinced the public and governments that it is. 
And therefore there\u2019s been a lot of complicity in allowing these companies to continue building these projects.\u201d<\/p>\n<p>\u201cThey have effectively been able to use this narrative of [artificial general intelligence] to accrue more capital, land, energy, water, data. They\u2019ve been able to accrue more resources \u2014 and critical resources \u2014 than pretty much anyone in history,\u201d Hao says, warning of \u201cthe complete aggressive and reckless\u201d growth of AI infrastructure, but stresses that none of this is inevitable. \u201cThere is a very clear path for how to unlock the benefits of AI without accepting the colossal cost of it.\u201d<\/p>\n<p>Listen to the full conversation of The Intercept Briefing on <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/the-intercept-briefing\/id1195206601\" rel=\"nofollow noopener\" target=\"_blank\">Apple Podcasts<\/a>, <a href=\"https:\/\/open.spotify.com\/show\/2js8lwDRiK1TB4rUgiYb24?si=e3ce772344ee4170\" rel=\"nofollow noopener\" target=\"_blank\">Spotify<\/a>, or wherever you listen.<\/p>\n<p>Transcript <\/p>\n<p>Jessica Washington: Welcome to The Intercept Briefing, I\u2019m Jessica Washington.<\/p>\n<p>In 2022, Sam Altman\u2019s company OpenAI launched ChatGPT, an AI chatbot that unleashed a wave of excitement over artificial intelligence. And it kickstarted a race for dominance in the field. 
<\/p>\n<p>Tech CEOs from Altman at OpenAI, to Mark Zuckerberg at Meta, and Alex Karp at Palantir have lauded artificial intelligence as the \u201cfuture\u201d of humanity.<\/p>\n<p>During a New York Times New Work Summit in 2019, years ahead of OpenAI\u2019s launch of ChatGPT, <a href=\"https:\/\/www.youtube.com\/watch?v=AHQR1OBum5Q\" rel=\"nofollow noopener\" target=\"_blank\">Altman<\/a> predicted that artificial intelligence could \u201celiminate poverty.\u201d <\/p>\n<p>Sam Altman: It can be great, we have the potential to eliminate poverty, solve climate change, cure a huge amount of human disease, like educate everyone in the world phenomenally well. <\/p>\n<p>JW: In a more recent CNBC interview, <a href=\"https:\/\/www.youtube.com\/shorts\/NuGVWgwD6CA\" rel=\"nofollow noopener\" target=\"_blank\">Palantir CEO Alex Karp<\/a> claimed that AI made the United States the \u201cdominant country in the world\u201d:<\/p>\n<p>Alex Karp: AI makes America the dominant country in the world. So just start there. Every other country in the world \u2014 like, I spent half my life in Europe \u2014 they\u2019re whining and crying. We have the right chips. We have the right software. We have the right engineers. We have the right culture. We have the right people.<\/p>\n<p>JW: And in a video posted to Facebook, unveiling Meta\u2019s new AI research lab in July, Meta CEO Mark Zuckerberg <a href=\"https:\/\/www.facebook.com\/watch\/?v=1263305541425221\" rel=\"nofollow noopener\" target=\"_blank\">promised<\/a> to develop personal \u201csuperintelligence\u201d that would free its users to focus on what truly matters.<\/p>\n<p>Mark Zuckerberg: Advances in technology have freed much of humanity to focus less on subsistence and more on the pursuits that we choose. And at each step along the way, most people have decided to use their newfound productivity to spend more time on creativity, culture, relationships, and just enjoying life. 
And I expect superintelligence to accelerate this trend even more. <\/p>\n<p>JW: Only \u2014 what if these utopic visions mask a far darker reality?<\/p>\n<p>In \u201cEmpire of AI: Dreams and Nightmares in Sam Altman\u2019s OpenAI,\u201d Karen Hao exposes the underlying reality of the lofty promises made by Sam Altman and the tech industry. Hao reveals the human toll of artificial intelligence from its extreme water usage, to its exploitation of data laborers, to AI companies\u2019 disturbing resemblance to the colonial empires that ravaged the planet for centuries.<\/p>\n<p>Joining me now to discuss \u201cEmpire of AI\u201d and Silicon Valley\u2019s grip on our world is Karen Hao. <\/p>\n<p>Karen, welcome to The Intercept Briefing.<\/p>\n<p>Karen Hao: Thank you so much for having me, Jessica.<\/p>\n<p>JW: Before we begin, we should start off by mentioning that The Intercept is a party in a <a href=\"https:\/\/theintercept.com\/2024\/11\/22\/openai-intercept-lawsuit\/\" rel=\"nofollow noopener\" target=\"_blank\">lawsuit against OpenAI<\/a> for allegedly using copyrighted materials to train ChatGPT.<\/p>\n<p>So, Karen, of all of the tech CEOs in the artificial intelligence rat race to profile, why Sam Altman, and why OpenAI?<\/p>\n<p>KH: So I actually didn\u2019t set out to write an OpenAI book. I was trying to write a book about these parallels that I had been documenting for several years between the AI industry and colonialism. And I realized as I was putting together that idea, that in order to really illustrate how every single thing that we know about AI today in the public consciousness, like I had to trace the history of OpenAI, because those decisions were made within that company. <\/p>\n<p>So the fact that we associate AI in the public with large language models, with ChatGPT, with these colossally consumptive technologies that need massive amounts of data, massive amounts of data centers \u2014 those were all because OpenAI made certain choices. 
And Sam Altman was at the helm of the company when it made many of those choices. So yeah, it really is, I would say the book is not just a history of OpenAI, it\u2019s really a history of the modern-day AI boom.<\/p>\n<p>JW: As you\u2019ve alluded to in the book, you masterfully, in my opinion, weave the promises of Silicon Valley against the backdrop of its impact on the communities that host its data centers and feed other parts of the AI machine. What made you want to tell these two stories alongside each other, instead of just a tech book, or instead of just a book about the impact?<\/p>\n<p>KH: I\u2019ve always felt that the most important questions on people\u2019s minds about technology or about AI is just: How is it going to affect their lives? And the only way to really tell that story is to ground it in the experiences of people that have already been affected by the development of the technology, because they are the canaries in the coal mines, so to speak, of how the rest of the world is going to experience it. <\/p>\n<p>And if you only tell the story from the perspective of San Francisco and from the tech companies themselves and the elites that run the companies at the top, you\u2019re largely going to get a story about the technology working because it\u2019s designed by these people for these people.<\/p>\n<p>But that\u2019s not actually the real, full scope of the story. And so philosophically, in a lot of my reporting even before the book, I\u2019ve always believed that you really start to see where things fall apart when you go furthest away from Silicon Valley to the places that work fundamentally differently from SF, from the U.S., with people speaking fundamentally different languages who look different, who have a different history and culture.<\/p>\n<p>And that is actually more indicative of how the average person is going to ultimately be impacted by this technology because San Francisco\u2019s a really weird place. It\u2019s an extreme bubble. 
There\u2019s an extraordinary amount of wealth that is pretty much not replicated anywhere else in the world. There\u2019s an incredible amount of homogeneity.<\/p>\n<p>And so that\u2019s why I wanted to interweave both the inside story and the ideology of these people and the decisions and the context in which they make these decisions, but then quickly expand to the far reaches of the empire, as I call it, to document really how it\u2019s going to affect the vast majority of the world.<\/p>\n<p>JW: Yeah, I want to dive into the empire of it all. So the obvious through line of your book is colonialism and the ways in which these AI companies and tech companies have resembled these colonial empires of old. And I\u2019m curious, how do you see the comparisons and where do they differ?<\/p>\n<p>KH: Yeah, I mean, there\u2019s honestly so many comparisons. But I really focus on four in the book. The first one is that empires, they consolidate an extraordinary amount of wealth and power in part by just taking a lot of resources that are not their own. That refers to the intellectual property \u2014 as The Intercept <a href=\"https:\/\/theintercept.com\/2024\/11\/22\/openai-intercept-lawsuit\/\" rel=\"nofollow noopener\" target=\"_blank\">knows well<\/a> \u2014 that they take to just train their models without any credit or compensation. That\u2019s also taking the private data that people might leave in places like a Flickr photo album that they never realized could get hoovered up into these image generation tools. <\/p>\n<p>Also, second parallel: Empires similarly consolidate a lot of economic might by exploiting extraordinary amounts of labor and not actually paying that labor sufficiently or at all. So that\u2019s how they are able to amass wealth \u2014 because they\u2019re not actually distributing it. 
And I talk in my book extensively about the ways that the industry does exactly the same thing with workers in Kenya or [who are] <a href=\"https:\/\/www.technologyreview.com\/2022\/04\/20\/1050392\/ai-industry-appen-scale-data-labels\/\" rel=\"nofollow noopener\" target=\"_blank\">in crisis in Venezuela<\/a>, who are doing some of the lifeblood data annotation tasks that the AI industry needs to thrive but who see only a couple dollars a day, or nothing at all, for that kind of work.<\/p>\n<p>The third parallel is that empires always engage in this kind of control of information flows in order to perpetuate their ability to continue expanding unfettered. And we see this in the industry as well, where most AI researchers today are either employed by the companies or bankrolled by the companies in some way. And so the entire research agenda and AI development agenda has been completely distorted by the empire\u2019s agenda, and any research that reveals inconvenient truths is actively censored. So we don\u2019t have a true scientific picture of the limitations and capabilities of these technologies.<\/p>\n<p>And then the final parallel is: Empires engage in this narrative that they have to exist because of a moral or existential imperative. So they are the \u201cgood\u201d empire that\u2019s on a civilizing mission to bring progress and modernity to all of humanity. And they\u2019re competing with an evil empire that\u2019s trying to bring the demise of humanity.<\/p>\n<p>And so in OpenAI\u2019s history, there have been many examples of it framing \u201cGoogle was the evil empire.\u201d Now, Silicon Valley largely says, \u201cChina is the evil empire.\u201d And the idea is that if the evil empire crosses the finish line, then we\u2019re going to end up in an AI hell. 
And they say, AI could kill us all, or AI is going to lead to complete total authoritarianism in the wrong hands.<\/p>\n<p>Whereas when the good empire crosses the threshold first, we end up in this utopia \u2014 eliminating poverty, curing cancer, all of the things that you mentioned in the beginning are their common talking points.<\/p>\n<p>JW: Yeah. One thing that strikes me about tracking these empires as opposed to older ones, like the British Empire, is the pace at which they\u2019re moving and the pace at which things are changing.<\/p>\n<p>We\u2019re in a vastly different landscape when it comes to AI than we were a year ago, or arguably even a month ago. Did you predict the pace at which this technology would proliferate and the kind of full-throated embrace of it from people in power really in both parties, or is there something that\u2019s surprising you about where we\u2019re at now?<\/p>\n<p>KH: I\u2019m definitely really shocked at the pace. And you\u2019re 100% right that one of the key differences between the classical empires of old and the empires of AI is just the sheer speed. The British Empire moved at the pace of ships. And with the empires of AI, they\u2019re moving at the pace of bits. They can make like 60 decisions in an hour that affect billions of people around the world.<\/p>\n<p>But the thing that has shocked me the most is the speed at which they\u2019re constructing the infrastructure for training and deploying their AI models. Part of the shock is that this infrastructure is actually not technically necessary, and so I\u2019ve been shocked that somehow the companies have effectively convinced the public and governments that it is and therefore there\u2019s been a lot of complicity in allowing these companies to continue building these projects. 
<\/p>\n<p>\u201cSometimes I feel like that\u2019s a strategy to get people so shocked or confused by these large numbers that they can\u2019t even wrap their minds around that it allows the companies to continue doing what they\u2019re doing.\u201d<\/p>\n<p>But the other shock is just what they\u2019re trying to do is insane. It is hard to explain just how baffling the scale is. Sam Altman has recently said that he aims to build 250 gigawatts of data centers by 2033, which he estimates would cost $10 trillion. And when you just think about that figure of just $10 trillion, that\u2019s already insane. Like most people in the world have never encountered 10 trillion of anything, let alone dollars. And sometimes I feel like that\u2019s a strategy to get people so shocked or confused by these large numbers that they can\u2019t even wrap their minds around that it allows the companies to continue doing what they\u2019re doing. <\/p>\n<p>But 250 gigawatts is also an insanely baffling number because New York City on average is 5.5 gigawatts of power. So what he\u2019s talking about is constructing almost four dozen New York cities of data centers in the world to power and train his AI technologies.<\/p>\n<p>And Meta has talked about building supercomputers where the facilities are almost the size of Manhattan. And so like this is the largest infrastructure build-out that humanity has ever seen, and it\u2019s being controlled by a tiny group of people that are aggressively trying to build this out in communities around the world, many of whom actually do not want this infrastructure. There are huge protests that have started breaking out all around the world and all across the U.S. 
and so that\u2019s the thing that has shocked me is just the complete aggressive and reckless nature of the growth.<\/p>\n<p>\u201cThis is the largest infrastructure build-out that humanity has ever seen, and it\u2019s being controlled by a tiny group of people.\u201d<\/p>\n<p>JW: When you talk about the growth, the first thing that comes to mind for me is the impact of that growth and what that could mean. Your book gets into some of these direct environmental harms. When we\u2019re talking about building out the kinds of infrastructure that Sam Altman is talking about, what are those harms?<\/p>\n<p>KH: So when talking about these data center facilities, one of the harms is that the energy is coming from fossil fuels. Even Sam Altman, when he was testifying in Congress, admitted that in the short term it would likely come from natural gas. From reporting we\u2019ve also seen that it comes from coal. There are coal plants that were meant to be retired that are now having their lives extended because the utilities need to meet energy demands that they cannot meet with any other energy source.<\/p>\n<p>And essentially we are starting to see the AI industry provide a lifeline for the fossil fuel industry. So it\u2019s bringing extraordinary amounts of emissions into the air. <\/p>\n<p>\u201cWe are starting to see the AI industry provide a lifeline for the fossil fuel industry.\u201d <\/p>\n<p>Those emissions are also pollutants. So it\u2019s most often polluting working-class and rural communities. 
There has been phenomenal reporting on Memphis, Tennessee, hosting <a href=\"https:\/\/insideclimatenews.org\/news\/17072025\/elon-musk-xai-data-center-gas-turbines-memphis\/\" rel=\"nofollow noopener\" target=\"_blank\">Colossus<\/a>, the supercomputer that Elon Musk built to train Grok, and it\u2019s being powered by 35 methane gas turbines that are pumping toxins into the air of a community that actually has a long history of environmental racism and an inability to access the fundamental right to clean air.<\/p>\n<p>Then you have to talk about the fact that these data centers also require fresh water to cool the facilities. If they\u2019re going to use water, it needs to be fresh water and even drinking water \u2014 because any other type of water would lead to corrosion of the equipment or to bacterial growth. And so you often see in proposals for data centers the request from the company to the local government for potable water \u2014 to connect directly to the city drinking water supply.<\/p>\n<p>And many of these facilities are being put in places that don\u2019t have that drinking water to spare. There was a <a href=\"https:\/\/www.bloomberg.com\/graphics\/2025-ai-impacts-data-centers-water-data\/\" rel=\"nofollow noopener\" target=\"_blank\">Bloomberg investigation<\/a> that found that two-thirds of these data centers are going into already water-scarce areas. So there are communities that are actively competing with this computer infrastructure for life-sustaining resources. So it\u2019s basically layer upon layer of environmental and public health crises that are already underway, that are being massively accelerated by this push.<\/p>\n<p>JW: With the Trump administration moving to massively deregulate a lot of environmental protections, do you expect these costs to grow?<\/p>\n<p>KH: I do, and it\u2019s not just the deregulatory stance. 
The Trump administration and actually the Biden administration also had enabled data centers to be built on federal lands. So the federal government has been aggressively using all of the different mechanisms that they can to try to facilitate the recklessness of the tech industry.<\/p>\n<p>And of course, Trump also signed an executive order that is trying to <a href=\"https:\/\/theintercept.com\/2025\/05\/29\/trump-big-beautiful-bill-budget-ai-regulation\/\" rel=\"nofollow noopener\" target=\"_blank\">neuter state AI regulation<\/a> as well. So not only deregulating federal laws, but also trying to prevent any states from stepping into the vacuum. And so all of the trends that we see, if the public did nothing about it \u2014 if there was no contestation, if there were no protests, and everyone was just laid back and allowed this trajectory to barrel forward \u2014 I absolutely think that it could get worse. But I also think that there is an incredible amount that people can in fact do in the absence of leadership at the top to show leadership from the bottom.<\/p>\n<p>Break<\/p>\n<p>JW: There\u2019s been some public pushback to your water usage calculations, primarily from supporters of artificial intelligence. <a href=\"https:\/\/andymasley.substack.com\/p\/empire-of-ai-is-wildly-misleading\" rel=\"nofollow noopener\" target=\"_blank\">Andy Masley<\/a>, executive director of Effective Altruism DC, published a Substack in November questioning some of your data around water usage, and you <a href=\"https:\/\/karendhao.com\/20251217\/empire-water-changes\" rel=\"nofollow noopener\" target=\"_blank\">issued two changes<\/a> to your book regarding the water footprint data recently. I wanted to just give you a moment to respond to that critique.<\/p>\n<p>KH: Yeah, for sure. 
So yeah, Andy brought up some very valid criticisms. One was on a particular data point that, after he brought up the criticisms, we investigated and realized was wrong. This was a data point that appears in Chapter 12 of my book, where we are describing a proposed Google data center in Cerrillos, Chile, on the outskirts of Santiago. And I was trying, in that particular case study, to explain the water impact that this facility would have within the community by comparing it to the water use of that community. <\/p>\n<p>And basically what happened was the government document that stated the water usage of the community had a unit error. And so instead of quoting the numbers in meters cubed as they should have, they quoted it in liters. One meter cubed is 1,000 liters, so they underestimated the water use of the community by a factor of 1,000, which meant that when I then divided the data center\u2019s proposed water usage by what the document said was the water usage, my comparison was off by a factor of 1,000.<\/p>\n<p>And so the corrected statement is that this proposed Google data center could use more water than the population of the town \u2014 which is already substantially bad. But of course, in the error of the calculation, I had said that it was going to be more than 1,000 times what the town uses, which is just incorrect. And basically I worked with my Chilean collaborator to figure it out, contacted the Chilean government agency that had issued the document to get to the bottom of it, and confirmed that it was in fact a unit error. We issued the correction.<\/p>\n<p>The second change that I made, which is also based on Andy\u2019s feedback, was that there was a part of my explanation or citation of a study about the overall water impact of AI that also used the wrong terminology. 
So I had used this term that AI was going to lead to this amount of \u201cwater consumption.\u201d But there\u2019s actually a technicality: \u201cWater consumption\u201d is not the same as \u201cwater use.\u201d And I should have actually used the term \u201cwater use\u201d because with data centers, \u201cconsumption\u201d means that the water\u2019s evaporated and it just disappears. Whereas \u201cwater use\u201d means that it\u2019s running through the system, but then it exits the system. That\u2019s not to say it\u2019s completely unchanged. It can have a lot more pollutants in that water, and it can have a higher temperature, and it might not actually be able to return safely to the environment, but it\u2019s different from pure evaporation.<\/p>\n<p>So I made that change as well and added some more language to explain that the study was referring to the water impact of data centers \u2014 both in terms of the water used to cool the facilities, but also the water used to generate the electricity to power the facilities, because that is also a hugely important part of the water footprint of data centers.<\/p>\n<p>So those changes will be made in the next reprint of the physical edition and will also be made in the digital and audiobook edition.<\/p>\n<p>JW: Thank you for explaining that. I want to switch gears to one of my favorite chapters of your book where you talk about the concept of intelligence and this kind of mythical idea of superintelligence. What is superintelligence, and is it just something that tech CEOs are saying to sound futuristic?<\/p>\n<p>KH: [Laughs] So superintelligence, colloquially, I guess refers to a theoretical point at which AI exceeds human intelligence. That\u2019s why it\u2019s called superintelligence. And the problem with this term is that there is no scientific consensus around what human intelligence is.<\/p>\n<p>There\u2019s a long history of trying to define and quantify human intelligence. 
Much of it is a very dark history motivated by the desire to show through \u201cscientific means\u201d that certain races are superior to others. And we\u2019ve never landed on one test that definitively proves that this is like the marker of intelligence.<\/p>\n<p>\u201cArtificial general intelligence \u2014 which also, what does that mean?\u201d<\/p>\n<p>And so superintelligence is just like a totally unmoored concept. And indeed, this is very useful for executives of companies: when they want to market themselves, because there is no definition around this term, they can just define it however they want. They do the same thing with the term artificial general intelligence \u2014 which also, what does that mean? It\u2019s supposed to be the point right before superintelligence when the AI system theoretically matches human intelligence.<\/p>\n<p>And you see OpenAI define and redefine AGI constantly, based on what it wants to do at the next steps. So when Sam Altman is talking with consumers, he says AGI is going to be this amazing digital assistant that\u2019s going to solve all your problems \u2014 because he wants those people to buy it. When he is talking with Microsoft, The Information reported at one point that, in the agreement between OpenAI and Microsoft, they define AGI as <a href=\"https:\/\/www.theinformation.com\/articles\/microsoft-and-openais-secret-agi-definition\" rel=\"nofollow noopener\" target=\"_blank\">a system that can generate $100 billion of revenue<\/a>. 
When Altman is talking to Congress, he says AGI is going to cure cancer and eradicate poverty and so on and so forth to try and ward off the regulation.<\/p>\n<p>And so you can see that it just shape-shifts based on the audience that needs to be convinced in that moment for the company to just continue its agenda.<\/p>\n<p>JW: Speaking of promises made by the tech industry about AI, one of the biggest promises is that it\u2019s going to give people their time back to use on more fulfilling activities and that AI will essentially eliminate the need to work, since the expectation is that it\u2019s going to take our jobs.<\/p>\n<p>How exactly is that going to help people who then lose their income? Is the government supposed to step in and sufficiently take care of people, or are the titans of this industry going to pay more taxes to take care of people? I guess, what is the promise and what are they saying we\u2019re going to have in the future that\u2019s supposed to be so great?<\/p>\n<p>KH: [Laughs] Right. The answer is, they promise whatever they need to promise to convince whoever they need to convince. So the promises keep shape-shifting, but generally, they fall along the lines of, \u201cThere\u2019s going to be so much abundance that we\u2019re not going to have a competition for resources anymore. Everyone\u2019s going to live wild and free and it\u2019s going to be amazing, and, like, all science will be solved.\u201d But the fine-grained details of this vision are not there.<\/p>\n<p>It\u2019s interesting, in OpenAI\u2019s early years they explored the idea of instituting some kind of tax structure whereby if an AI company had windfall profits, there would be a ceiling to how much they could keep, and the rest of it would be redistributed as universal basic income to everyone. That\u2019s as far as I\u2019ve ever seen anyone in the industry go towards actually articulating a mechanism by which everyone gets a piece of the pie. 
But of course, this was like very early days in OpenAI, and we\u2019ve never heard about this proposal since.<\/p>\n<p>And what we\u2019re actually seeing instead is the complete opposite, right? We are currently seeing these companies get more and more and more and more wealthy, while the average American is struggling more and more with an affordability crisis, with inflation, with job loss \u2014 sometimes driven by AI.<\/p>\n<p>And we are in a moment right now where the <a href=\"https:\/\/www.nytimes.com\/2025\/12\/19\/business\/k-shaped-economy.html#:~:text=%E2%80%9CWhen%20people%20talk%20about%20the,at%20the%20University%20of%20Michigan.\" rel=\"nofollow noopener\" target=\"_blank\">economy<\/a> is <a href=\"https:\/\/paulkrugman.substack.com\/p\/a-new-k-in-america\" rel=\"nofollow noopener\" target=\"_blank\">K-shaped<\/a>. All of the <a href=\"https:\/\/www.marketplace.org\/story\/2025\/10\/31\/big-tech-dominance-is-another-example-of-the-kshaped-economy\" rel=\"nofollow noopener\" target=\"_blank\">AI-related<\/a> <a href=\"https:\/\/www.wsj.com\/finance\/stocks\/mag7-stocks-sp500-ai-7d40d5a1\" rel=\"nofollow noopener\" target=\"_blank\">stocks<\/a> are flying, while everything else is <a href=\"https:\/\/www.washingtonpost.com\/business\/2025\/11\/24\/sp500-stock-market-tech-nvidia\/\" rel=\"nofollow noopener\" target=\"_blank\">going south<\/a>. And so this, I think, is the clearest signal that we have of the true tally of what AI \u2014 in Silicon Valley\u2019s conception of it \u2014 is actually delivering us and will continue to deliver us if we allow the empires to continue on.<\/p>\n<p>JW: In that vein, there\u2019s been this growing concern that we\u2019re in an AI bubble, that companies are overvalued and overspending on data centers, on microchips. 
What do you make of that concern and the way that tech leaders are responding to it?<\/p>\n<p>KH: I think we\u2019re in a huge bubble, and I\u2019m deeply worried about what might happen if that bubble pops, especially for the ripple effects that it\u2019s going to have on average people, because the people at the top are going to be fine. Like, they are not going to be the ones that are suffering from the fallout that could happen with a market correction. <\/p>\n<p>But of course, the industry leaders are trying to project the idea that we\u2019re not in a bubble. They\u2019re trying to project continued confidence in the fact that their technology is going to lead to continued crazy GDP growth that will somehow get redistributed to the average person. But I think average Americans are starting to realize that this is totally not true.<\/p>\n<p>\u201cThey\u2019re trying to project continued confidence in the fact that their technology is going to lead to continued crazy GDP growth that will somehow get redistributed to the average person.\u201d<\/p>\n<p>And that\u2019s why we\u2019ve seen in the past few months that the attitude towards the AI industry, and towards the way that these companies are developing AI in particular, has really soured, because people are actually experiencing their kids being harmed or having worries that their kids will be harmed. They\u2019re seeing data centers pop up in their communities that could hike up their utility bills or potentially contaminate their water, and they didn\u2019t have any say in that project.<\/p>\n<p>They\u2019re seeing a shrinking job market where they might themselves have been laid off in part because an executive is saying that they\u2019re engaging in an AI strategy. 
And so I think, as much as the executives are really trying to create this veneer that everything is fine, most people know that it\u2019s not fine.<\/p>\n<p>JW: As you\u2019ve mentioned throughout this conversation, we\u2019ve been focusing on the effects of AI outside of Silicon Valley, but there are red flags, as you\u2019ve mentioned, in San Francisco and the larger Bay Area in California, where wealth inequality has grown exponentially as the tech industry has grown in the last 15 years. How do you view that, what we\u2019ve seen as a microcosm in that region, against the backdrop of this kind of larger exploitation?<\/p>\n<p>KH: This is something that I think about all the time because I used to live in San Francisco. And part of the reason why I left the tech industry and ended up becoming a journalist was that I felt like what I was seeing in San Francisco was really a manifestation of the real ideology that undergirded the industry. And there is this extraordinary amount of wealth. <a href=\"https:\/\/www.bloomberg.com\/news\/newsletters\/2024-02-14\/bloomberg-evening-briefing-artificial-intelligence-is-minting-billionaires\" rel=\"nofollow noopener\" target=\"_blank\">Bloomberg<\/a> reported at one point that the AI industry is minting billionaires faster than any other industry in history. It\u2019s an extraordinary amount of wealth. And there\u2019s been reporting about how this year, 2026, is going to see some massive IPOs that are going to create even more extraordinary wealth generation than we\u2019ve ever seen in this town. <\/p>\n<p>\u201cIt\u2019s just so crazy to me that they can talk all these utopic lofty goals about solving science and eradicating poverty \u2014 when they haven\u2019t eradicated poverty in their own town.\u201d<\/p>\n<p>And yet at the same time, there\u2019s rampant homelessness there. 
There\u2019s a huge housing crisis in general, and there is almost an obliviousness among the people within the industry to the things that happen at their very doorstep. And it\u2019s just so crazy to me that they can talk all these utopic lofty goals about solving science and eradicating poverty \u2014 when they haven\u2019t eradicated poverty in their own town. They haven\u2019t done anything to solve the social ills within their own town, and in fact, they\u2019ve only done things to make it worse.<\/p>\n<p>JW: On that point, what is their larger goal? What do these tech billionaires, some of them maybe soon to be trillionaires, actually want? They have all this money, as you\u2019ve said, that they could spend on social welfare in the communities that they\u2019re already in. What are they actually after?<\/p>\n<p>KH: The reason why I use the metaphor of empire is because \u2026 the revealed agenda is an imperial agenda. They have effectively been able to use this narrative of AGI to accrue more capital, land, energy, water, data. Like, they\u2019ve been able to accrue more resources \u2014 and critical resources \u2014 than pretty much anyone in history. So that to me is what they\u2019re after.<\/p>\n<p>But also, it\u2019s complicated in the sense that there are also these, what I can only describe as, quasi-religious movements that undergird the push for AGI as well. So there are some people that are more political actors who are seeing the opportunity to leverage these narratives about AGI to amass more and more power. 
But there are also genuine cohorts of people who believe in the myth of AGI or the religion of AGI, who think that when the moment comes that AI actually matches or begins to surpass human intelligence, it is somehow going to truly lead us, as I mentioned, to an AI heaven, to an otherworldly civilization 2.0, so to speak, where we finally unlock the next era of human evolution.<\/p>\n<p>\u201cWe actually have no idea how to define AGI, because we have no idea how to define human intelligence.\u201d <\/p>\n<p>The reason why I call it quasi-religious is because it\u2019s not actually backed by scientific reality. In 2025, there was a survey of AI researchers that found 75 percent of them do not think that we\u2019re on the path to AGI, and this is still actually an open question of \u201cCan we even reach AGI?\u201d Because once again, we actually have no idea how to define AGI, because we have no idea how to define human intelligence. So people call themselves believers when they say that they\u2019re AGI believers. They use this religious rhetoric of saying AGI is akin to an AI god, or the bad version of AGI might be akin to summoning the demon, as Elon Musk once said.<\/p>\n<p>And that is why, in order to really understand what is truly motivating this industry, you can\u2019t actually just view it through a capitalistic lens. You have to also view it through an ideological one. And once again, that returns us back to why it\u2019s colonialism. Colonialism is the fusion of capital and ideology.<\/p>\n<p>JW: This has been fascinating, and I want to give you a chance to just share any final thoughts if you have anything you want to say.<\/p>\n<p>KH: I cannot stress enough that none of this is inevitable. I alluded to the fact that this scale is totally technically unnecessary. 
AI is actually a word that refers to such a wide array of different types of technologies.<\/p>\n<p>I think it\u2019s very akin to the word \u201ctransportation.\u201d Transportation can literally refer to anything from a bicycle to a rocket. Those are systems that all get you from point A to B, but have fundamentally different designs. They have fundamentally different cost-benefit trade-offs. And generally when we speak about transportation, we have a much more nuanced discussion of saying we need more public transit, rather than just saying we need more transportation in general.<\/p>\n<p>\u201cThe tech industry is able to manipulate public understanding by constantly selling the benefits of the bicycle version of AI, when they\u2019re actually building the rocket version of AI.\u201d<\/p>\n<p>And we are currently stuck in a moment where there isn\u2019t that nuance with AI, and the tech industry is able to manipulate public understanding by constantly selling the benefits of the bicycle version of AI, when they\u2019re actually building the rocket version of AI. <\/p>\n<p>And the reason I feel so strongly that none of this is inevitable is that there is a very clear path for how to unlock the benefits of AI without accepting the colossal cost of it. And that is just by simply shifting from building rockets to building bicycles.<\/p>\n<p>And even though there is no government willingness to hold the industry accountable, there are plenty of ways that individuals and communities can engage in collective action to hold the industry accountable themselves, and we are seeing remarkable movements of this already happening and already working.<\/p>\n<p>There have been, I believe, at this point, <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/dec\/08\/us-data-centers\" rel=\"nofollow noopener\" target=\"_blank\">$60 billion-plus<\/a> of data center projects that have been blocked because of protests. 
There have been lawsuits from families of victims who have suffered egregious <a href=\"https:\/\/www.nytimes.com\/2025\/11\/06\/technology\/chatgpt-lawsuit-suicides-delusions.html\" rel=\"nofollow noopener\" target=\"_blank\">mental health harms<\/a>, including dying by suicide, after extended use of ChatGPT, which has led to massive momentum around shoring up the safety of these models. There has been litigation around <a href=\"https:\/\/www.wired.com\/story\/ai-copyright-case-tracker\/\" rel=\"nofollow noopener\" target=\"_blank\">copyright and intellectual property<\/a>. There have been huge discussions sparked in schools about whether or not these tools should actually be actively adopted within schools. <\/p>\n<p>And I think all of this pushback is forcing the companies \u2014 even without regulation \u2014 to shift their practices, and hopefully will force them to downsize away from empires to just being businesses that actually provide valuable products and services that are not built on extraordinary exploitation and extraction.<\/p>\n<p>I think that\u2019s the final message that I want to leave with people: Any single person that\u2019s listening to this has an active role to play in shaping the future of AI development. And we absolutely can get to a point where we have the benefits of AI without any of the costs by just changing what types of AI systems we design.<\/p>\n<p>JW: Well, thank you so much. I really learned a lot reading your book and even more in this conversation. So I appreciate you taking the time, and thank you for joining me on The Intercept Briefing.<\/p>\n<p>KH: Thank you so much, Jessica.<\/p>\n<p>JW: That does it for this episode. <\/p>\n<p>This episode was produced by Andrew Stelzer. Laura Flynn is our supervising producer. Sumi Aggarwal is our executive producer. Ben Muessig is our editor-in-chief. Maia Hibbett is our managing editor. Chelsey B. Coombs is our social and video producer. Desiree Adib is our booking producer. 
Fei Liu is our product and design manager. Nara Shin is our copy editor. Will Stanton mixed our show. Legal review by David Bralow.<\/p>\n<p>Slip Stream provided our theme music.<\/p>\n<p>If you want to support our work, you can go to <a href=\"https:\/\/join.theintercept.com\/donate\/Donate_Podcast?source=interceptedshoutout&amp;recurring_period=one-time\" rel=\"nofollow noopener\" target=\"_blank\">theintercept.com\/join<\/a>. Your donation, no matter the amount, makes a real difference. If you haven\u2019t already, please subscribe to The Intercept Briefing wherever you listen to podcasts. And leave us a rating or a review; it helps other listeners find us.<\/p>\n<p>If you want to send us a message, email us at <a href=\"mailto:podcasts@theintercept.com\" rel=\"nofollow noopener\" target=\"_blank\">podcasts@theintercept.com<\/a>.<\/p>\n<p>Until next time, I\u2019m Jessica Washington.<\/p>\n","protected":false},"excerpt":{"rendered":"After OpenAI CEO Sam Altman launched ChatGPT in 2022, the race for dominance in the field of 
artificial&hellip;\n","protected":false},"author":2,"featured_media":386486,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,36904,181,507,136294,36911,63208,36903,36908,36909,36910,178919,188053,74,188051,188052],"class_list":{"0":"post-386485","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-article-type-article-post","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-day-friday","13":"tag-language-english","14":"tag-longform","15":"tag-page-type-article","16":"tag-partner-factiva","17":"tag-partner-smart-news","18":"tag-partner-social-flow","19":"tag-subject-technology","20":"tag-subject-the-intercept-briefing","21":"tag-technology","22":"tag-time-11-00","23":"tag-wc-6000-6999"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/386485","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=386485"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/386485\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/386486"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=386485"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=386485"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=386485"}],"curies"
:[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}