{"id":320319,"date":"2025-12-01T10:03:06","date_gmt":"2025-12-01T10:03:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/320319\/"},"modified":"2025-12-01T10:03:06","modified_gmt":"2025-12-01T10:03:06","slug":"the-world-still-hasnt-made-sense-of-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/320319\/","title":{"rendered":"The World Still Hasn\u2019t Made Sense of ChatGPT"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This story is part of a series marking ChatGPT\u2019s third anniversary. Read Ian Bogost on <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/11\/ai-multiverse\/685067\/\" rel=\"nofollow noopener\" target=\"_blank\">how ChatGPT broke reality<\/a>, or <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/projects\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">browse more <\/a><a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/projects\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">AI coverage from <\/a><a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/projects\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">The Atlantic<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">On this day three years ago, OpenAI released what it referred to internally as a \u201clow-key research preview.\u201d This preview was so low-key that, inside OpenAI, staff were <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/11\/sam-altman-open-ai-chatgpt-chaos\/676050\/\" rel=\"nofollow noopener\" target=\"_blank\">instructed<\/a> not to frame it as a product launch. 
Some OpenAI employees were nervous that the company was rushing out an unfinished product, but CEO Sam Altman forged ahead, hoping to beat a competitor to market and to see how everyday people might use the company\u2019s AI. They called it ChatGPT.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">And people sure did use it\u2014more than 1 million of them in the first five days. ChatGPT grew <a data-event-element=\"inline link\" href=\"https:\/\/www.reuters.com\/technology\/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01\/\" rel=\"nofollow noopener\" target=\"_blank\">faster<\/a> than any other consumer app in history. Today, it has 800 million weekly users. Whatever the precise numbers, it is undeniable that ChatGPT\u2019s success has quickly rewired parts of our society and economy. Now we are living in a world that ChatGPT helped build.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">OpenAI\u2019s product solidified the oracular chatbot as the primary way the world interacts with large language models. Other companies released their own spin on the technology, such as Google Bard (now named Gemini) and Microsoft\u2019s Bing chatbot, the latter of which quickly went off the rails and told a New York Times reporter to <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2023\/02\/16\/technology\/bing-chatbot-microsoft-chatgpt.html\" rel=\"nofollow noopener\" target=\"_blank\">leave his spouse<\/a> and spend the rest of his life with the bot instead. ChatGPT introduced millions to a tool that, although <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/11\/ai-multiverse\/685067\/\" rel=\"nofollow noopener\" target=\"_blank\">prone to presenting false information<\/a>, simulates conversation well enough that people began to use it as an interface for countless tasks, such as finding information. 
Others employ it to automate the act of creation itself. The bot has proved handy for cheating on homework, writing boring work emails, researching, and coding. Now some people struggle to do anything without it.<\/p>\n<p id=\"injected-recirculation-link-0\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 1\" data-event-element=\"injected link\" data-event-position=\"1\"><a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/11\/ai-multiverse\/685067\/\" rel=\"nofollow noopener\" target=\"_blank\">Read: Welcome to the slopverse<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">ChatGPT improved, as did its competitors, each new release <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/04\/arc-agi-chollet-test\/682295\/\" rel=\"nofollow noopener\" target=\"_blank\">performing better on rigorous benchmark tests<\/a>. Companies embedded chatbots in customer-service platforms, and social-media grifters used them to create bot armies. Amazon became flooded with spammy, synthetically generated books. Articles written by robots clogged Google, making the site less and less useful. Already beleaguered universities struggled to <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-college-class-of-2026\/683901\/\" rel=\"nofollow noopener\" target=\"_blank\">adapt to the reality<\/a> that their curricula are now gamed effortlessly by students. <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/02\/new-luddites-ai-protest\/677327\/\" rel=\"nofollow noopener\" target=\"_blank\">Artists of all kinds protested<\/a> as large language models, trained on the creative output of humankind, threatened to render their jobs irrelevant or obsolete\u2014or to simply devalue creative work altogether. 
Many media companies chose to <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/05\/a-devils-bargain-with-openai\/678537\/\" rel=\"nofollow noopener\" target=\"_blank\">strike a deal<\/a> with the scrapers; <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2023\/12\/27\/business\/media\/new-york-times-open-ai-microsoft-lawsuit.html\" rel=\"nofollow noopener\" target=\"_blank\">others<\/a> sued. (OpenAI entered into a corporate partnership with The Atlantic last year.) Some <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/06\/generative-ai-pirated-articles-books\/683009\/\" rel=\"nofollow noopener\" target=\"_blank\">businesses laid off staff<\/a> as chatbots became more useful.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\"><a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/07\/ai-radicalization-civil-war\/683460\/\" rel=\"nofollow noopener\" target=\"_blank\">A nascent culture ballooned in the Bay Area<\/a>\u2014hacker houses and manifestos. \u201cYou can see the future first in San Francisco\u201d was the <a data-event-element=\"inline link\" href=\"https:\/\/situational-awareness.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">overall argument<\/a> articulated by the AI researcher Leopold Aschenbrenner. More people started using phrases such as p(doom) and situational awareness. 
There were more <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/10\/sam-altman-mythmaking\/680152\/\" rel=\"nofollow noopener\" target=\"_blank\">manifestos<\/a> about technological timelines; \u201csuperintelligence\u201d and \u201cartificial general intelligence\u201d became things that rich people with serious-sounding jobs said in public without laughing.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The models got better, and the unintended consequences grew commensurately. People confided in the chatbots as they would therapists. They <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/10\/chatbot-transcript-data-advertising\/680112\/\" rel=\"nofollow noopener\" target=\"_blank\">confessed their darkest desires<\/a> despite no guarantee of perfect privacy. They expressed joy and sorrow and <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/09\/openai-teen-safety\/684268\/\" rel=\"nofollow noopener\" target=\"_blank\">intentions to kill themselves<\/a>; in one high-profile incident, ChatGPT reportedly offered <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow noopener\" target=\"_blank\">help<\/a>, suggesting the right material for a noose. (OpenAI <a data-event-element=\"inline link\" href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946\" rel=\"nofollow noopener\" target=\"_blank\">denies<\/a> responsibility for this incident.) People fell in love with the tools and gave them names. 
Others <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/magazine\/2025\/12\/ai-companionship-anti-social-media\/684596\/\" rel=\"nofollow noopener\" target=\"_blank\">saw something in their conversations<\/a>\u2014a discovery or a conspiracy on the horizon. Some withdrew from daily life. Some found help; others didn\u2019t.<\/p>\n<p id=\"injected-recirculation-link-1\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 2\" data-event-element=\"injected link\" data-event-position=\"2\"><a href=\"https:\/\/www.theatlantic.com\/magazine\/2025\/12\/ai-companionship-anti-social-media\/684596\/\" rel=\"nofollow noopener\" target=\"_blank\">From the December 2025 issue: The age of anti-social media is here<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">ChatGPT is just one tool for interacting with large language models, but its runaway success was the spark that led to further excitement and investment, and the rollout of other AI interfaces: <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/05\/elevenlabs-ai-voice-cloning-deepfakes\/678288\/\" rel=\"nofollow noopener\" target=\"_blank\">text-to-speech voice clones<\/a>; image, video, and music generators; <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/openai-chatgpt-atlas-web-browser\/684662\/\" rel=\"nofollow noopener\" target=\"_blank\">web browsers<\/a>. The models have continued to get better, helping build websites and <a data-event-element=\"inline link\" href=\"https:\/\/theconversation.com\/openai-says-deepseek-inappropriately-copied-chatgpt-but-its-facing-copyright-claims-too-248863\" rel=\"nofollow noopener\" target=\"_blank\">other models<\/a>, and allowing people to outsource more and more of their decisions. 
Generative-AI tools are used to write personalized bedtime stories and digitally <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-mass-delusion-event\/683909\/\" rel=\"nofollow noopener\" target=\"_blank\">reanimate children killed in mass shootings<\/a>. People use them to generate entire songs; at least one <a data-event-element=\"inline link\" href=\"https:\/\/www.cnn.com\/2025\/11\/01\/entertainment\/xania-monet-billboard-ai\" rel=\"nofollow noopener\" target=\"_blank\">debuted on a Billboard chart<\/a>. Low-quality synthetic renderings are staples of <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/08\/trump-posts-ai-image\/679540\/\" rel=\"nofollow noopener\" target=\"_blank\">political propaganda<\/a> and click-farm rage bait. People came up with a name for it: Slop.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">These tools are not magic, nor are they <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/ai-consciousness\/683983\/\" rel=\"nofollow noopener\" target=\"_blank\">\u201cintelligent\u201d in any human way<\/a>. But for plenty of people, their first encounter with ChatGPT checked many of the boxes of a transformative technology. The bot is intuitive yet uncanny\u2014a piece of the future dropped into the present. If the disappointing-technology hype cycles that preceded large language models\u2014cryptocurrency booms and busts, Web3 and the metaverse\u2014felt like solutions in search of a problem, generative AI seemed to offer limitless applications. Rather than casting about for a use case, its boosters argued that it would <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/06\/ai-eats-the-world\/678627\/\" rel=\"nofollow noopener\" target=\"_blank\">eat the world<\/a>. In a sense, it has. 
How else to explain a timeline in which OpenAI has partnered with Mattel to embed ChatGPT into Barbies, and the <a data-event-element=\"inline link\" href=\"https:\/\/www.vulture.com\/article\/pope-leo-ai-homework.html\" rel=\"nofollow noopener\" target=\"_blank\">pope<\/a> has warned students, \u201cAI cannot ever replace the unique gift that you are to the world\u201d?<\/p>\n<p id=\"injected-recirculation-link-2\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 3\" data-event-element=\"injected link\" data-event-position=\"3\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-mass-delusion-event\/683909\/\" rel=\"nofollow noopener\" target=\"_blank\">Read: AI is a mass-delusion event<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">These models are unknowable\u2014black boxes with anthropomorphic traits, but that are ultimately a series of complex calculations and statistical inferences <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/11\/common-crawl-ai-training-data\/684567\/\" rel=\"nofollow noopener\" target=\"_blank\">based on mind-boggling sums of training data<\/a>; much of that information was taken without express permission from its creators. The models do not have souls or rights. 
But their ability to mimic us\u2014in part due to the human feedback in their training\u2014has inspired scientists and researchers to ask questions about our cognition and <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/05\/llm-ai-chatgpt-neuroscience\/674216\/\" rel=\"nofollow noopener\" target=\"_blank\">further probe how our minds work<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This list barely begins to capture the past three years\u2014the enthusiasm for these machines, as well as the loathing and anxiety they inspire. One person might see these models as a useful tool; another as <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/04\/great-language-flattening\/682627\/\" rel=\"nofollow noopener\" target=\"_blank\">\u201cstochastic parrots\u201d<\/a> or fancy autocorrect; and another still as a catalyst for a fearsome alien intelligence.<\/p>\n<p id=\"injected-recirculation-link-3\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 4\" data-event-element=\"injected link\" data-event-position=\"4\"><a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/ai-consciousness\/683983\/\" rel=\"nofollow noopener\" target=\"_blank\">Read: The alien intelligence in your pocket<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This is disruption, in the less technical sense of the word. 
In August, <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-mass-delusion-event\/683909\/\" rel=\"nofollow noopener\" target=\"_blank\">I wrote that<\/a> \u201cone of AI\u2019s enduring impacts is to make people feel like they\u2019re losing it.\u201d If you genuinely believe that we are just years away from the arrival of a paradigm-shifting, society-remaking superintelligence, behaving irrationally makes sense. If you believe that Silicon Valley\u2019s elites have lost their minds, foisting a useful-but-not-magical technology on society, declaring that it\u2019s building God, investing historic amounts of money in its development, and <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/data-centers-ai-crash\/684765\/\" rel=\"nofollow noopener\" target=\"_blank\">fusing the fate of its tools with the fate of the global economy<\/a>, being furious makes sense.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop. Young generations feel this instability acutely as they prepare to graduate into a workforce about which they are cautioned that <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/newsletters\/archive\/2025\/06\/the-college-major-gamble\/683358\/\" rel=\"nofollow noopener\" target=\"_blank\">there may be no predictable path to a career<\/a>. Older generations, too, are told that the future might be unrecognizable, that the marketable skills they\u2019ve honed may not be relevant. Investors are waiting too, dumping unfathomable amounts of capital into AI companies, data centers, and the physical infrastructure that they believe is necessary to bring about this arrival. 
It is, we\u2019re told, a race\u2014a <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2025\/10\/united-states-china-technology\/684754\/\" rel=\"nofollow noopener\" target=\"_blank\">geopolitical one<\/a>, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy. The AI boosters are waiting. They\u2019ve <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/10\/agi-predictions\/680280\/\" rel=\"nofollow noopener\" target=\"_blank\">created detailed timelines<\/a> for this arrival. Then the timelines shift.<\/p>\n<p id=\"injected-recirculation-link-4\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 5\" data-event-element=\"injected link\" data-event-position=\"5\"><a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/data-centers-ai-crash\/684765\/\" rel=\"nofollow noopener\" target=\"_blank\">Read: Here\u2019s how the AI crash happens<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a \u201clow-key research preview\u201d\u2014a proof of concept for what\u2019s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future. But you are waiting nonetheless\u2014for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day. 
In that way, <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/07\/thrive-ai-health-huffington-altman-faith\/678984\/\" rel=\"nofollow noopener\" target=\"_blank\">generative AI is a faith-based technology<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It doesn\u2019t matter that the technology is already useful to many, that it can code and write marketing copy and complete basic research tasks. Because Silicon Valley is not selling useful; it\u2019s selling transformation\u2014with all the grand promises, return on investment, genuine risk, and collateral damage that entails. And even if you aren\u2019t buying it, three years out, you\u2019re definitely feeling it.<\/p>\n","protected":false},"excerpt":{"rendered":"This story is part of a series marking ChatGPT\u2019s third anniversary. Read Ian Bogost on how ChatGPT broke&hellip;\n","protected":false},"author":2,"featured_media":320320,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-320319","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/320319","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=320319"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/320319\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/320320"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=320319"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=320319"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=320319"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}