{"id":388244,"date":"2026-04-08T14:55:08","date_gmt":"2026-04-08T14:55:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/388244\/"},"modified":"2026-04-08T14:55:08","modified_gmt":"2026-04-08T14:55:08","slug":"its-finally-happened-im-now-worried-about-ai-and-consulting-chatgpt-did-nothing-to-allay-my-fears-emma-brockes","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/388244\/","title":{"rendered":"It\u2019s finally happened: I\u2019m now worried about AI. And consulting ChatGPT did nothing to allay my fears | Emma Brockes"},"content":{"rendered":"<p class=\"dcr-130mj7b\">A corollary of the truism \u201cdon\u2019t sweat the small stuff\u201d is, by implication, \u201cdo sweat the big stuff\u201d, but it can be hard to pick which big stuff to sweat. For example: since the 1970s, as the world has worried about inflation and rolling geopolitics, the big stuff we should have been sweating more urgently was the climate crisis. Last year, the top trending search on Google in the US was \u201cCharlie Kirk\u201d, with <a href=\"https:\/\/nypost.com\/2025\/12\/26\/lifestyle\/google-trends-2025-reveals-top-searches-from-search-engine-giant\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">several terms<\/a> relating to the threat posed by Donald Trump also popular, when the focus should arguably have been the threat posed by AI.<\/p>\n<p class=\"dcr-130mj7b\">Or, per my own Googling this week after reading Ronan Farrow and Andrew Marantz\u2019s highly alarming <a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/04\/13\/sam-altman-may-control-our-future-can-he-be-trusted\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">lengthy piece<\/a> in the New Yorker about the rise of artificial general intelligence: \u201cWill I be a member of the permanent underclass and how can I make that not happen?\u201d<\/p>\n<p class=\"dcr-130mj7b\">I\u2019ll confess: prior to this moment of giving the 
subject more than two seconds\u2019 thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2026\/mar\/04\/quit-chatgpt-subscription-boycott-silicon-valley\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">architects support Trump<\/a>, and decided that, yes, I should \u2013 an easy sacrifice because I don\u2019t use it in the first place.<\/p>\n<p class=\"dcr-130mj7b\">Anything bigger than that seemed fanciful. Last year, when Karen Hao\u2019s book <a href=\"https:\/\/www.nytimes.com\/2025\/05\/19\/books\/review\/empire-of-ai-karen-hao-the-optimist-keach-hagey.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Empire of AI<\/a> was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman\u2019s leadership is cult-like and blind to cost \u2013 no different, in other words, to his tech predecessors, except much more dangerous. 
Still, I didn\u2019t read the book.<\/p>\n<p class=\"dcr-130mj7b\">The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a>, the AI-powered chatbot created by Altman\u2019s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman.<\/p>\n<p class=\"dcr-130mj7b\">With almost comically studious neutrality, the chatbot offers the following top line: that, per Farrow and Marantz, \u201cAI is as much a power story as a technology story\u201d, and \u201ca major focus [of the story] is <a href=\"https:\/\/www.theguardian.com\/technology\/sam-altman\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Sam Altman<\/a>, portrayed as a highly influential but controversial figure\u201d. Mmmm, lacks something, doesn\u2019t it? Let\u2019s try a human-powered summary of that same investigation, which might open with: \u201cSam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman, let alone in a position to steward the potentially world-ending capabilities of AI.\u201d<\/p>\n<p class=\"dcr-130mj7b\">It is these dangers, previously dismissed as sci-fi, that really startle here. As relayed in the piece, in 2014, <a href=\"https:\/\/x.com\/elonmusk\/status\/495759307346952192?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E495759307346952192%7Ctwgr%5Efc4df2bd994492bb712830ebaf889347c7c7e82e%7Ctwcon%5Es1_&amp;ref_url=https%3A%2F%2Fwww.cbsnews.com%2Fnews%2Felon-musk-artificial-intelligence-may-be-more-dangerous-than-nukes%2F\" data-link-name=\"in body link\" rel=\"nofollow\">Elon Musk tweeted<\/a>: \u201cWe need to be super careful with AI. 
Potentially more dangerous than nukes.\u201d There is the so-called alignment problem, yet to be solved, in which an AI could use its superior intelligence to trick human engineers into believing it is following their instructions while outmanoeuvring them to \u201creplicate itself on secret servers so that it couldn\u2019t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal\u201d.<\/p>\n<p class=\"dcr-130mj7b\">At one time, Altman reportedly believed this scenario was possible, <a href=\"https:\/\/blog.samaltman.com\/machine-intelligence-part-1\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">writing in his blog <\/a>in 2015 that superhuman machine intelligence \u201cdoes not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn\u2019t care about us much either way, but in an effort to accomplish some other goal \u2026 wipes us out.\u201d For example: engineers ask AI to fix the climate crisis and it takes the shortest route to achieving that goal, which is to eliminate humanity. Since <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/28\/openai-for-profit-restructuring\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI became mainly a for-profit entity,<\/a> however, Altman has stopped talking in these terms and now sells the technology as a portal to utopia, <a href=\"https:\/\/blog.samaltman.com\/the-gentle-singularity\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">in which <\/a>\u201cwe\u2019ll all get better stuff. We will build ever-more-wonderful things for each other.\u201d<\/p>\n<p class=\"dcr-130mj7b\">This leaves us all with a problem. 
For voters in a position to prioritise AI oversight as a key election issue, the gap between personal AI use and the uses to which governments, military regimes or rogue actors might put it is so vast that the greatest danger we face is a failure of imagination. I type into ChatGPT my concern about entering the permanent underclass, to which it replies: \u201cThat\u2019s a heavy question, and it sounds like you\u2019re worried about your long-term prospects. The idea of a \u2018permanent underclass\u2019 gets talked about in sociology, but in real life, people\u2019s paths are much more fluid than that term suggests.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Quite sweet, really, wholly witless and \u2013 here lurks the danger \u2013 seemingly entirely without threat.<\/p>\n","protected":false},"excerpt":{"rendered":"A corollary of the truism \u201cdon\u2019t sweat the small stuff\u201d is, by implication, \u201cdo sweat the big stuff\u201d,&hellip;\n","protected":false},"author":2,"featured_media":388245,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-388244","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/388244","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=388244"}],"version-history":[{"c
ount":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/388244\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/388245"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=388244"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=388244"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=388244"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}