{"id":521112,"date":"2026-04-09T08:51:08","date_gmt":"2026-04-09T08:51:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/521112\/"},"modified":"2026-04-09T08:51:08","modified_gmt":"2026-04-09T08:51:08","slug":"ais-sinister-takeover-of-british-politics","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/521112\/","title":{"rendered":"AI&#8217;s sinister takeover of British politics"},"content":{"rendered":"<p class=\"has-drop-cap\">On 20 April 2025, an official in the British government emailed their colleagues a story from that day\u2019s Financial Times. The headline read: \u201cUAE set to use AI to write laws in world first\u201d. The officials, all of whom are involved in implementing AI in the running of the British state, read the article with amusement. \u201cWe were tempted to say: \u2018We got there first,\u2019\u201d one of them told me. But they felt that the UK was \u201cnot fighting for the crown of the first AI-written line of legislation\u201d, so they decided not to make public a fact very few people know: text composed by a large language model has made its way into an act of parliament. British laws are already being written by AI.<\/p>\n<p>This is a matter of sovereignty. The software products we refer to as \u201cAI\u201d are all built on advanced \u201cfoundational models\u201d from the US and China. This is a technology we do not control, but which plays an increasingly active role at every level of the British power structure. It is part of every conversation, drafting emails between officials, summarising ministers\u2019 briefings and composing speeches delivered in the House of Commons. The Bank of England is using machine learning to inform its decisions on interest rates. The BBC uses AI to redraft articles. Every student at Oxford \u2013 where 31 of our previous prime ministers were educated \u2013 is now being educated with the help of OpenAI. 
There is little public understanding of how quickly this technology is moving through the institutions of power, or how enthusiastically it\u2019s being pursued by a government that believes AI software could solve all its problems.<\/p>\n<p>In dozens of interviews with current and former government officials and advisers, technologists and MPs \u2013 most of whom asked not to be named, in order to speak freely \u2013 I have been told about a quiet handing over of control in the frameworks of advice, intelligence and decision-making that underlie every government decision. This is not just a simple software upgrade. Large language models (LLMs), the software behind AI programs such as ChatGPT, are built to produce answers that will be accepted by users \u2013 not to calculate, but to convince. This highly persuasive software, built primarily overseas, is being handed an unknown amount of political power.<\/p>\n<p>In almost every interview conducted for this piece, I asked whether it was paranoid to suggest that the wholesale adoption of AI by our government, public services and wider economy is handing power to models built in the US and China. Even the most optimistic AI advocates agreed it was a reasonable argument. At a technology conference last year, I spoke to a person who had been involved at the highest level in the government\u2019s use of AI. I asked if it worried them that foundational models could reflect the politics of the people who control them \u2013 people who have very different political ideas to our elected leaders. My concerns were not brushed off. This person told me about a power struggle between the engineers building AI models, the plutocrats who own them and the politicians who seek to control them. Far from the noise of the public debate, a battle is being fought that could have lasting implications for our politics. \u201cMake no mistake,\u201d this person told me. 
\u201cThis is a war.\u201d<\/p>\n<p>This is not a story about how AI works. It\u2019s not about whether it is going to become sentient, make us rich, or redundant. It is a story about power. It is about how politicians became distracted by a shiny new thing, and failed to understand \u2013 or chose not to ask \u2013 what it might cost. It is not about whether AI will help itself to your job. It is about whether the people who make AI are helping themselves to your country.<\/p>\n<p class=\"has-drop-cap\">The government\u2019s commitment to AI was agreed, long before it was announced to the public, at a cabinet meeting in November 2024. Keir Starmer\u2019s ministers had spent two months discovering just how difficult their jobs were going to be. The ambitions of opposition were dissolving in the acid reality of spending reviews and budget cuts. Into a pensive silence the science minister, Patrick Vallance, introduced his guest.<\/p>\n<p>Demis Hassabis has been talked about as a genius since he was a child. He began beating adults at chess aged four, wrote his first AI program at 12, and was offered a place at Cambridge at 16. He sold his AI company, DeepMind, to Google in 2014 for \u00a3400m. The month after this cabinet meeting, he would accept the Nobel Prize in Chemistry. Hassabis had advised No 10 before, when he briefed Rishi Sunak in 2023 about whether AI posed an existential threat. But to Starmer\u2019s cabinet he described it not as a problem for government, but as a solution.<\/p>\n<p>Ministers listened to Hassabis describe a vision: as AI transformed the world the state would be transformed with it. 
LLMs would provide both economic growth and a faster, more productive government, by taking on the administrative duties of civil servants. The fiscal implications sounded incredible. In 2024, the Tony Blair Institute said AI could save the government \u00a337bn a year. The following year, Peter Kyle (then the secretary of state for science, innovation and technology) said AI represented \u201ca \u00a345bn jackpot\u201d for the public sector. Clearly, \u00a345bn is a lot of money. It is enough to run the UK\u2019s entire public transport and justice systems. It would cut government borrowing by a third.<\/p>\n<p>Starmer had already been speaking with Hassabis in private, as well as with the British entrepreneur Matt Clifford (who also advised Sunak on AI), and Tony Blair, whose institute is funded primarily by the foundation of US tech entrepreneur Larry Ellison. In the cabinet meeting, Starmer told his ministers that he believed they should be optimistic about AI\u2019s potential. Simon Case, who was then the cabinet secretary, remembers the shift that took place in the room: \u201cThey\u2019d all been in for a couple of months and realised quite how difficult being a minister of the Crown is. So it was real enthusiasm,\u201d he told me. \u201cIt was those presentations that made them think, \u2018Actually, this is the thing that could deliver. This is the way out of this problem.\u2019\u201d<\/p>\n<p>The meeting did not discuss the details of what these savings implied. But Clifford had already stated, months earlier, that there were no \u201choles\u201d in government that could conveniently be plugged by AI. The only way to find out if these savings existed would be by \u201cripping up\u201d and \u201crebuilding\u201d parts of the state to accommodate it, in a \u201chard and painful\u201d manner.<\/p>\n<p>In her Mais Lecture this March the Chancellor, Rachel Reeves, outlined the three \u201cbig choices\u201d that defined her plan for economic growth. 
The first two were obvious: a better trading relationship with the EU and a more balanced economy. The third was experimental: the UK should adopt AI \u2013 which she called \u201cthe defining technology of our era\u201d \u2013 faster than any other major economy. This is a project that is already proceeding very quickly. Sitting in the room where Reeves spoke were representatives from AI companies that already have contracts with the government; one told me how his firm\u2019s products are used to analyse new policies for education and defence.<\/p>\n<p>Reeves herself had already presented policy formed with the help of AI to parliament in her June 2025 Spending Review. In it, the first full spending review since 2021 and the foundation of Labour\u2019s plan for public finances, AI was used to analyse departments\u2019 bids for money. A spending review is a contentious political process, a battle between cabinet ministers. Now, software made in other countries helps decide how much our government spends on housing, schools, hospitals and border control.<\/p>\n<p>The Spending Review mentioned AI in 38 different places. It told government departments to make significant spending cuts \u2013 \u201cat least\u201d 16 per cent in real terms by 2029-30 \u2013 but it also allocated \u00a32bn in new funding to AI, through the AI Opportunities Action Plan. A government adviser who had advocated strongly for AI told me: \u201cI felt reasonably confident that we would get a good outcome.\u201d That \u201cwe\u201d refers not only to the department in which that person was employed, but a broader group of AI advocates in the government, backed by lobbyists and think tanks that argue for the state to be rewired by LLMs. 
One adviser told me the average civil servant was not as \u201cdeeply motivated\u201d as AI advocates, whom they consider \u201ca higher calibre of official\u201d, propelled not just by competence but by a belief in the tech revolution.<\/p>\n<p>This is also a story of a struggle for power in Whitehall. As US and Chinese technology becomes more influential in our political system, those who support it have the opportunity to become more influential, too. From the beginning, the AI revolution has been about more than upgrading systems. It has been about redistributing power.<\/p>\n<p class=\"has-drop-cap\">For Dominic Cummings, redistributing power was a career, a life\u2019s work, a guiding principle. The 2016 Brexit vote had been one step in a project he had begun at the turn of the century, when he started to campaign for civil service reform, and which had continued in his battle against what he called \u201cthe Blob\u201d at the Department for Education. He wanted to overturn the obstinate lump of government itself, to make it faster, more capable, and AI would become part of this. In July 2019, on his first day in No 10, Cummings wore a grey T-shirt bearing the logo of OpenAI, then a largely unknown company. It would become a global news story three years later when it launched ChatGPT.<\/p>\n<p>On the first working day of 2020, Cummings advertised for \u201cweirdos and misfits\u201d (scientists and technologists) to join him in Downing Street, and to change the way government was run. One of the first to come was the neuroscientist James Phillips. \u201cI always had this image that the government must have all the answers,\u201d Phillips told me. He believed that behind the circus of politics lay a \u201cdeep competence\u201d. That impression soon shattered. 
The experts were \u201coutnumbered\u201d, and \u201cdeep technical expertise, or even some sort of familiarity with science and technology, was very often completely absent\u201d.<\/p>\n<p>Cummings hoped to change this. A job advert was posted for a \u201chead of No 10 analytical unit\u201d to build a new data science team called 10DS within Downing Street. The job went to Laura Gilbert, a technologist who had already built systems to understand particle physics, to detect heart disease through a patient\u2019s thumbs, and to predict the behaviour of soldiers. Gilbert\u2019s team began to use data science to inform policymaking. This sounds prosaic, but within Whitehall it was a revolutionary act.<\/p>\n<p>A former senior civil servant explained why it was so controversial. When they entered the civil service in the 1990s, ministers were \u201cprisoners of their officials\u201d. Every piece of paperwork, every phone call, was mediated by the civil service, which had a \u201cmonopoly on advice\u201d. Cummings wanted 10DS \u2013 who, being nerds, sometimes referred to themselves as the \u201c10DS ninjas\u201d \u2013 to inform ministers directly. This altered the balance of power. In front of the Prime Minister, a permanent secretary\u2019s claims might be disproved by data.<\/p>\n<p>The drive for a smarter, faster state did not end with Cummings\u2019s departure in November 2020. In 2022 Henry de Zoete, who had worked with Cummings at the Department for Education and on the Vote Leave campaign, arrived in Downing Street. De Zoete had also worked in Silicon Valley with Sam Altman, the CEO of OpenAI. He was optimistic about the technology, thrilled by its potential, but also concerned by its power. On arrival in No 10 he went to Case to tell him executives from OpenAI and Anthropic had recently visited the White House with a warning: they did not fully understand what their models were doing, or what risk they might represent. 
De Zoete arranged for the same briefing to be given to the British government. In early 2023 Altman sat down with Sunak and played the prime minister a convincing deepfake of his own voice.<\/p>\n<p>Sunak had spent time in Silicon Valley. He understood this new tech and its uncanny imitation of human language better than most in Westminster. He understood how people felt about it: it was weird and frightening. Sunak oversaw a new AI Security Institute \u2013 still regarded as a global leader in the field \u2013 and convened a summit at Bletchley Park. Not everyone was on board. One former official remembers Nick Clegg, then the chief lobbyist for Meta (which owns Facebook, Instagram and WhatsApp, and has invested heavily in AI), complaining about the focus on safety. \u201cYou\u2019re putting people off AI,\u201d they recall Clegg protesting. \u201cThis is really, really bad!\u201d<\/p>\n<p>What was truly bad was that neither Clegg nor the then business secretary, Vince Cable, had done anything about the appropriation of the UK\u2019s one globally significant AI company, DeepMind, when they were in the coalition; Google acquired the company in 2014. As a result, Sunak had to manage a government that had been almost totally unprepared for the arrival of generative AI, in a country that had no underlying \u201cfoundational models\u201d of its own. He had no choice but to seek a role for Britain as a convener of discussions, rather than a place that had any say in how the technology was developed. The disruption that Cummings and others had long planned might be achieved, but it would be done with technology that did not belong entirely to us.<\/p>\n<p>A pattern was being repeated. Britain was the first country to begin developing an atomic bomb (an idea conceived at the University of Birmingham) but allowed its programme to be taken over by the US. Today, we don\u2019t own our Trident missiles but lease them from the US Navy. 
France, on the other hand, developed an independent nuclear deterrent. Today, France has built its own foundational model, Mistral. The UK produced the leading intellect in the AI field \u2013 Hassabis \u2013 and then stood by as his company was sold to Americans, on whose models we now depend.<\/p>\n<p class=\"has-drop-cap\">The pace of change in Britain was nothing compared to the shift in the US, however. Donald Trump, whose 2024 victory was sponsored by the oligarchs of Silicon Valley, abandoned all pretence of caution around AI development. On his first day in the White House, Trump revoked Biden\u2019s executive order on \u201cAI safety and security\u201d. The new policy was \u201cwinning the race\u201d (against China). Sam Altman became \u201cone of Trump\u2019s favoured tycoons\u201d, according to a recent investigation by the New Yorker. A new executive order, \u201cPreventing Woke AI in the Federal Government\u201d, made it clear that the Trump administration intended to intervene in the technology itself, to imbue it with their principles, which would sit behind every one of the answers given to the hundreds of millions of people who use chatbots every day.<\/p>\n<p>This is politically important because AI products are built and tested to be convincing. A recent study tested how persuasive LLMs were in conversations with nearly 80,000 British people. Kobi Hackenburg, who led the study, told me that in persuasion science, conversations are known to be much more effective than static messages. This is why canvassers appear at your door during elections, and why charities pay people to get you chatting in the street. Chatbots offer a huge political opportunity: to have persuasive conversations with millions of people, all at the same time.<\/p>\n<p>Hackenburg and his colleagues established that chatbots are indeed very persuasive, and becoming more so. They use techniques familiar to barristers and debating experts. They also make things up. 
There are two possible explanations for this: either the model \u201clearns\u201d that \u201cuntrue facts are more persuasive\u201d and then uses \u201cfacts which are less true\u201d (it starts deliberately lying), or it runs out of accurate information and compensates with less accurate information (it starts bullshitting). To be clear, the chatbot does not have any thoughts or opinions about this; it is designed to find the response most likely to be accepted by a human user. It is a persuasion machine. It just so happens that persuasion is the root of political power.<\/p>\n<p>This issue was brought to Downing Street by the British-Canadian technologist Geoffrey Hinton, shortly after the launch of ChatGPT in November 2022. Hinton, who won the 2024 Nobel Prize in Physics for his work with AI, told the then Cabinet Office minister, Alex Burghart, that \u201csuperintelligence\u201d carried huge risks. The government had initially been worried about what one former official described to me as a \u201cTerminator 2 scenario, where chatbots run amok and launch nuclear weapons\u201d. Hinton also saw this as a risk, but he had a more pressing concern. One person present at the meeting said Hinton warned Burghart that Donald Trump, \u201cjust from tweeting\u201d, had sent a mob of thousands of people to the Capitol on 6 January 2021. \u201cAnd no one thinks he\u2019s superintelligent,\u201d Hinton is said to have added. \u201cWhat if you have a superintelligence on Twitter? What can it get people to do?\u201d<\/p>\n<p>A former UK government official agreed in blunt terms: \u201cI genuinely worry,\u201d they told me, that \u201cwe could end up in a world where AI can persuade people to do anything. And then we\u2019re all fucked.\u201d<\/p>\n<p class=\"has-drop-cap\">For the companies selling AI products to the government, this persuasive power is part of what makes them so valuable. Their ability to imbue models with political tendencies is already being exercised. 
See for yourself: google \u201cBiden dementia\u201d, and you\u2019ll get an AI summary of the former president\u2019s cognitive issues. Now try googling \u201cTrump dementia\u201d; the AI has nothing to say. (Results of AI queries can vary.)<\/p>\n<p>On 3 January, ChatGPT offered its own example of how chatbots could shape politics. After the US sent forces into Venezuela, ChatGPT denied the attack had happened. \u201cThe United States has not invaded Venezuela,\u201d the chatbot told a reporter from Wired. \u201cNicol\u00e1s Maduro has not been captured.\u201d It blamed reports of an attack on \u201csensational headlines\u201d and \u201csocial media misinformation\u201d.<\/p>\n<p>Such tendencies are also visible in the \u201csystem prompts\u201d that are given to LLMs to determine how they respond. These prompts can contain instructions such as \u201cavoid giving any answers that are woke\u201d (a real system prompt given to Grok, the model owned by Elon Musk\u2019s xAI). Sometimes the rules being given to AI are visible in national laws. China\u2019s DeepSeek \u2013 which is used by large numbers of British businesses, including high-street banks \u2013 is governed by China\u2019s law on Interim Measures for the Management of Generative Artificial Intelligence Services, which requires products to \u201crespect social mores, ethics and morality\u201d as defined by the Chinese government, and to uphold \u201ccore socialist values\u201d.<\/p>\n<p>Last year, I sat in on a demonstration of an AI learning tool used by thousands of people around the world. We were shown the version of the software that is sold to schools in China. An AI-generated avatar discussed topics with a student, and the conversation revolved around how well the city government was doing, and the wisdom of its environmental policies. 
Here was a teacher with whom no Red Guard would find fault, whose politics would never deviate from those of the state.<\/p>\n<p>But it is not only the people who control the model who can influence its output. In October 2025, researchers at the Institute for Strategic Dialogue tested four of the most widely used chatbots by asking them hundreds of questions, in five languages, that related to Russia\u2019s invasion of Ukraine. Nearly a fifth of all responses cited Russian state media or sources attributed to Russian intelligence. The Russians would not have needed to hack ChatGPT to achieve this. Chatbots are \u201ctrained\u201d on vast amounts of text gathered from the open internet \u2013 far more words and images than any human could possibly check. This data can be \u201cpoisoned\u201d by creating thousands of websites that hold the opinions you want the chatbot to express \u2013 websites no human will ever see, but which will influence the chatbot\u2019s responses. It is cheaply done, and because AI companies don\u2019t disclose their training data, it is effectively impossible to determine if it has happened.<\/p>\n<p>The British government\u2019s ability to address these questions of influence has been hampered by a lack of understanding of the technology, and an internal fight over whose responsibility it is. When James Phillips met with a group of MPs in 2021 to warn them that AI models could be imbued with someone else\u2019s politics, he was asked: \u201cWhat\u2019s AI?\u201d<\/p>\n<p>After ChatGPT launched the following year, the British government preoccupied itself with a new question: who gets to be in charge of this new policy area? One person who was in No 10 at the time told me that a \u201ctense\u201d competition began for funding, staff and power. 
A farce developed as different groups struggled to appear capable of developing policy for something that was obviously far beyond their control.<\/p>\n<p>Among those competing for influence were the new Department for Science, Innovation and Technology (Dsit), and the new Office for AI within it, and the new minister for AI (a post that was given, in a classic piece of Tory chummery, to a friendly viscount); and the Government Digital Service (GDS); and the Central Digital and Data Office; and the AI Council; and the AI Safety Institute; and the Incubator for Artificial Intelligence. None of these bodies had clear authority over AI, and none of them was able to tell another government department what to do about it. The GDS staff failed to see why the government needed a minister for AI. It was like having a minister for email, they thought. Dsit officials thought GDS had been \u201cdrifting\u201d and \u201casleep at the wheel\u201d. Another source told me of their ambivalence towards the Office for AI: \u201cThey published nothing in 2023. Nothing.\u201d<\/p>\n<p>For months, many of Dsit\u2019s civil servants had \u201cno office, no email addresses, no kit\u201d, one source said. Rather than understand technology themselves, a source said, senior officials \u201coutsourced technical understanding to universities and institutions\u201d. The advisory system was stocked with professors and vice-chancellors, another told me, rather than people who actually worked in tech companies.<\/p>\n<p>\u201cWe wanted the department to feel like a start-up,\u201d one former Dsit adviser complained, but they were told they couldn\u2019t even use the new AI tools that they were supposed to be creating policy for. 
The reason for this was mutual suspicion: Dsit\u2019s senior officials knew that they shared a server with the Cabinet Office, and they worried that any embarrassing questions asked to the government\u2019s internal chatbot might somehow show up on a screen in Downing Street. (A Cabinet Office source said they \u201ccouldn\u2019t see\u201d Dsit\u2019s data, and \u201cwouldn\u2019t have been interested anyway\u201d.)<\/p>\n<p>The farce went unnoticed by parliamentarians, few of whom come from technical backgrounds. Many developed enthusiastic opinions about a technology they hadn\u2019t taken the time to understand. Some began using AI to write emails to constituents, and speeches in the House of Commons. MPs began playing \u201cChatGPT bingo\u201d, listening for the familiar words and cadence of chatbot text. The former security minister Tom Tugendhat told me he spent three hours listening to \u201cspeech after speech\u201d that contained the same telltale phrases until, infuriated, he rose and accused his fellow MPs of reading \u201cChatGPT-generated press releases\u201d that began, revealingly, with \u201cI rise to speak\u201d \u2013 a phrase the chatbot includes if you ask it to write a political speech, because it is used in Congress. Unwittingly these MPs were demonstrating where the power in the AI revolution lies.<\/p>\n<p class=\"has-drop-cap\">The confusion that has developed in Westminster over AI is a lobbyist\u2019s dream. A government that does not understand a technology and is too busy fighting itself for control of it will never effectively regulate it. Last summer, in the courtyard of a Westminster caf\u00e9, a lobbyist for one of the world\u2019s biggest tech companies told me they thought any meaningful AI regulation is a distant prospect. 
It took Westminster two decades to even begin regulating social media, and only a small minority of parliamentarians \u2013 the lobbyist guessed about 30 people \u2013 have any real understanding of AI.<\/p>\n<p>Meanwhile, there are plenty of vested interests willing to help out: my research found more than \u00a3476m in government contracts awarded for consultancy services relating to AI, mostly since 2022, and 60 members of the House of Lords who have a declarable interest in an AI company. The revolving door between the AI industry and the institutions of public power is well oiled. The Competition and Markets Authority, which regulates tech companies, is run by Doug Gurr, the former head of Amazon UK. The BBC, which informs the public about technology, is now run by Matt Brittin, who spent 18 years as a Google executive. Sunak has just taken up advisory roles at Microsoft and Anthropic; George Osborne has taken a job at OpenAI.<\/p>\n<p>What these interests tell our government is that the UK is a small market in which the giants of Silicon Valley would like to invest, but which they can also afford to avoid \u2013 and avoidance would mean the UK losing the data centres that US companies have planned. AI promises the two things Starmer\u2019s government wants most of all: economic growth and the Cummings dream \u2013 shared by his successor, Morgan McSweeney \u2013 of a disruptive, fast-moving state that can get things done. Last year, a person who was then one of Downing Street\u2019s most senior political appointees told me that realising these promises would inevitably involve trade-offs against \u201cother priorities\u201d such as \u201cenergy usage\u2026 planning permission\u2026 skills development, and obviously some of the more controversial areas like copyright\u201d.<\/p>\n<p>The Downing Street spokesperson conceded that while \u201cyou\u2019ve got to develop as much of your own capability as possible\u201d, resistance was futile. 
\u201cYou can\u2019t seal your borders off,\u201d they said. \u201cIt\u2019s not without risk but it\u2019s not really in our hands whether it\u2019s developed or not, is it?\u201d Britain is set on a specific path: \u201cYou have to have stronger partnership [on AI] with the US.\u201d If not, we will be left behind in a great race for transformation.<\/p>\n<p>When I asked what form this change would take, one senior AI adviser to the British government used the term situational awareness \u2013 a phrase they said I would find not in the mainstream media, but in the writing of Leopold Aschenbrenner.<\/p>\n<p>Aschenbrenner was fired by OpenAI in April 2024. He had joined the company shortly after leaving university, and worked there for about a year. His time there convinced him that by the end of this decade, \u201cbillions of vastly superhuman AI agents\u201d would entirely remake the global economy and geopolitics. In an essay that has been read by almost everyone in Silicon Valley, Aschenbrenner wrote that an LLM is a \u201cprimordial force\u201d, a \u201cdemon\u201d, a deus ex machina in the most literal sense. By the 2030s, he wrote, the successors to ChatGPT would make the US military \u201cobsolete\u201d. The world would be run by superintelligence, and the only humans with any control over it would be \u201ca few hundred researchers\u201d working on what he calls \u201cthe Project\u201d in a \u201csecure location\u201d in the US. These philosopher-kings would be the unassailable architects of a new world.<\/p>\n<p>A lot of very influential people really believe this is going to happen, and soon. 
The venture capitalist Marc Andreessen published a similarly divinatory \u201cmanifesto\u201d in October 2023: \u201cI am here to bring the good news,\u201d Andreessen wrote: \u201cOur descendents [sic] will live in the stars.\u201d Andreessen casts AI scepticism as evil: questioning whether an AI company should help itself to NHS data, for example, is committing \u201ca form of murder\u201d, because it might delay AI-powered medicine. Saving energy, too, is morally wrong \u2013 \u201cenergy should be in an upward spiral\u201d, he writes \u2013 because producing the new god is energy intensive.<\/p>\n<p>Techno-libertarians have long been impatient for the future to arrive. \u201cWe wanted flying cars, instead we got 140 characters,\u201d as Peter Thiel has put it. Now, for the first time, the establishment is broadly on their side. Ryan Wain, senior director of policy and politics at the Tony Blair Institute, told me the AI revolution is no \u201ctech fantasy\u201d but \u201ca transformation\u201d, to be embraced without hesitation.<\/p>\n<p>But the truly bold decisions are being taken elsewhere. The US is aggressively pursuing political control of AI: Anthropic, the AI company that resisted the Trump administration\u2019s demands, has been declared a \u201csupply chain risk\u201d, a designation normally applied to Chinese companies. The US is also investing far more aggressively. The head of one British AI company told me that against the trillions being gambled on the technology by Wall Street, the investments the UK government is making appear \u201claughable\u201d and \u201cpointless\u201d.<\/p>\n<p class=\"has-drop-cap\">When I asked Simon Case if he saw a problem with AI writing UK law, he asked me \u201cwhy on Earth would you resist a technology\u201d that could \u201csave vast amounts of time\u201d by writing legislation for us? \u201cWhy wouldn\u2019t you do that?\u201d The answer is to consider what an LLM is for. 
Its purpose, inarguably, is to take away at least some of the work of reading and writing. This can be very useful, but it comes at a price.<\/p>\n<p>Emily Bender, professor of computational linguistics at the University of Washington, explained it to me like this: \u201cWriting is thinking. Reading is thinking.\u201d Politics is the business of reading and writing, speaking and thinking. In the making of laws, it is important that those in power take the opportunity to think. Legislation, Bender said, is written \u201cto have an impact on the world now, and into the future\u201d; a court case decades hence might hinge on the meaning of a single word. \u201cYou want those words to be chosen with utmost care.\u201d<\/p>\n<p>Writing a law is not something for which there is a technological solution. It is not a perfectible process, it is a moral act that requires belief and responsibility. It is a process of debate. As MPs, advisers and lobbyists know, the real business of our constitution happens in the background \u2013 in emails, notes, agendas. If everyone involved is asking the same software to condense emails and write replies, if they are reading research and updates and memos composed by the same software, that software increasingly assumes the power of the people who previously did the thinking. Reading is thinking, and writing is thinking, and thinking is power. 
And when the inefficiencies of human thought, deliberation and opinion are cleared aside, we are left asking: who is in charge?<\/p>\n<p>[Further reading: <a href=\"https:\/\/www.newstatesman.com\/politics\/uk-politics\/2026\/03\/a-certain-idea-of-ed-miliband\" target=\"_blank\" rel=\"noopener nofollow\">A certain idea of Ed Miliband<\/a>]<\/p>\n","protected":false},"excerpt":{"rendered":"On 20 April 2025, an official in the British government emailed their colleagues a story from that day\u2019s&hellip;\n","protected":false},"author":2,"featured_media":521113,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-521112","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/521112","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=521112"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/521112\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/521113"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=521112"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.
newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=521112"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=521112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}