{"id":361274,"date":"2026-03-23T18:29:08","date_gmt":"2026-03-23T18:29:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/361274\/"},"modified":"2026-03-23T18:29:08","modified_gmt":"2026-03-23T18:29:08","slug":"ai-could-be-the-opposite-of-social-media","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/361274\/","title":{"rendered":"AI could be the opposite of social media"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In the mid-20th century, the high costs of television production \u2014 and physical limitations of the broadcast spectrum \u2014 tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were <a href=\"https:\/\/press.uchicago.edu\/ucp\/books\/book\/chicago\/T\/bo12345529.html\" rel=\"nofollow noopener\" target=\"_blank\">watching one of the Big Three\u2019s newscasts<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Journalistic programs weren\u2019t just limited in number, but also ideological content. The networks\u2019 news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. 
And they also <a href=\"https:\/\/dylanmatthews.substack.com\/p\/pro-social-media\" rel=\"nofollow noopener\" target=\"_blank\">relied overwhelmingly<\/a> on official sources \u2014 politicians, military officials, and credentialed experts \u2014 whose perspectives fell within the narrow bounds of respectable opinion.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">This media environment cultivated broad public agreement over basic facts and <a href=\"https:\/\/www.pewresearch.org\/politics\/2015\/11\/23\/1-trust-in-government-1958-2015\/\" rel=\"nofollow noopener\" target=\"_blank\">widespread trust in mainstream institutions<\/a>. It also helped the government wage a barbaric war in the <a href=\"https:\/\/www.cato.org\/commentary\/five-decades-after-pentagon-papers-presidential-lies-foolish-forecasts\" rel=\"nofollow noopener\" target=\"_blank\">name<\/a> <a href=\"https:\/\/fair.org\/media-beat-column\/30-year-anniversary-tonkin-gulf-lie-launched-vietnam-war\/\" rel=\"nofollow noopener\" target=\"_blank\">of lies<\/a>.<\/p>\n<p>There\u2019s evidence that LLMs converge on a common (and largely accurate) picture of reality. LLMs have successfully persuaded users to abandon false and conspiratorial beliefs. Unlike social media companies, AI labs have an economic incentive to spread accurate information. Still, there are reasons to fear that AI will nonetheless make public discourse worse.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">For better and worse, subsequent advances in information technology diffused influence over public opinion \u2014 at first gradually and then all at once. 
During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion \u2014 editors, producers, and academics \u2014 exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The <a href=\"https:\/\/www.eff.org\/cyberspace-independence\" rel=\"nofollow noopener\" target=\"_blank\">democratic nature of digital media<\/a> initially inspired <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1111\/1478-9302.12100_120\" rel=\"nofollow noopener\" target=\"_blank\">utopian hopes<\/a>. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone\u2019s fingertips. 
And the internet has done all of these things, at least to some extent.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Yet it has also helped <a href=\"https:\/\/www.newyorker.com\/news\/fault-lines\/nick-fuentes-is-not-just-another-alt-right-boogeyman\" rel=\"nofollow noopener\" target=\"_blank\">pro-Hitler podcasters<\/a> reach an audience of millions, enabled <a href=\"https:\/\/www.nytimes.com\/2026\/02\/13\/style\/clavicular-looksmaxxing-braden-peters.html\" rel=\"nofollow noopener\" target=\"_blank\">influencers with body dysmorphia<\/a> to sell teenagers on <a href=\"https:\/\/www.cbc.ca\/news\/canada\/nova-scotia\/how-looksmaxxing-sites-can-harm-young-men-and-boys-1.7499752\" rel=\"nofollow noopener\" target=\"_blank\">self-mutilation<\/a>, elevated <a href=\"https:\/\/www.brookings.edu\/articles\/rfk-jr-s-history-of-medical-misinformation-raises-concerns-over-hhs-nomination\/\" rel=\"nofollow noopener\" target=\"_blank\">crackpots<\/a> to the commanding heights of American public health \u2014 and, more generally, eroded the intellectual standards, shared understandings, <a href=\"https:\/\/press.stripe.com\/the-revolt-of-the-public\" rel=\"nofollow noopener\" target=\"_blank\">social trust<\/a>, and <a href=\"https:\/\/www.vox.com\/politics\/414049\/reading-books-decline-tiktok-oral-culture\" rel=\"nofollow noopener\" target=\"_blank\">(small-l) liberalism<\/a> on which rational self-government depends.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Many assume that the latest breakthrough in information technology \u2014 generative AI \u2014 will deepen these pathologies: In a world of <a href=\"https:\/\/www.ap.org\/news-highlights\/spotlights\/2025\/creating-realistic-deepfakes-is-getting-easier-than-ever-fighting-back-may-take-even-more-ai\/\" rel=\"nofollow noopener\" 
target=\"_blank\">photorealistic deepfakes<\/a>, even video evidence may surrender its capacity to forge consensus. <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/24\/sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows\" rel=\"nofollow noopener\" target=\"_blank\">Sycophantic<\/a> <a href=\"https:\/\/www.ibm.com\/think\/topics\/large-language-models\" rel=\"nofollow noopener\" target=\"_blank\">large language models<\/a> (LLMs), meanwhile, could reinforce ideologues\u2019 delusions. And fully automated film production could enable extremists to flood the internet with <a href=\"https:\/\/www.theatlantic.com\/culture\/2025\/11\/will-stancil-show-ai\/685058\/\" rel=\"nofollow noopener\" target=\"_blank\">slick propaganda<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But there\u2019s reason to think that this is too pessimistic. Rather than deepening social media\u2019s effects on public opinion, AI may partially reverse them \u2014 by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.<\/p>\n<p>Are you there Grok? 
It\u2019s me, the demos<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">At least, this is what the British philosopher <a href=\"https:\/\/www.conspicuouscognition.com\/p\/how-ai-will-reshape-public-opinion\" rel=\"nofollow noopener\" target=\"_blank\">Dan Williams<\/a> and former Vox writer <a href=\"https:\/\/dylanmatthews.substack.com\/p\/pro-social-media\" rel=\"nofollow noopener\" target=\"_blank\">Dylan Matthews<\/a> have recently argued.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (n\u00e9e \u201cTwitter\u201d): Elon Musk\u2019s chatbot telling the billionaire that he is wrong.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In this instance, Musk had claimed that <a href=\"https:\/\/www.vox.com\/podcasts\/474586\/ice-shooting-minneapolis-minnesota-renee-good\" rel=\"nofollow noopener\" target=\"_blank\">Ren\u00e9e Good<\/a>, the Minnesota woman killed by an ICE agent in January, had \u201c<a href=\"https:\/\/x.com\/elonmusk\/status\/2008987347694834058\" rel=\"nofollow\">tried to run people over<\/a>\u201d in the moments before her death. 
Someone replied to Musk\u2019s post by asking Grok \u2014 X\u2019s resident AI \u2014 whether his claim was consistent with video evidence of the shooting.<br \/>The <a href=\"https:\/\/x.com\/grok\/status\/2008990962660384825?s=20\" rel=\"nofollow\">bot replied<\/a>:<\/p>\n<p><a class=\"_1j8uwx1\" href=\"https:\/\/platform.vox.com\/wp-content\/uploads\/sites\/2\/2026\/03\/Screenshot-2026-03-20-at-4.19.37%E2%80%AFPM.png?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"964\" data-pswp-width=\"1420\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"Screenshot of Grok \" data-chromatic=\"ignore\" loading=\"lazy\" decoding=\"async\" data-nimg=\"fill\" class=\"mvmjsc0\" style=\"position:absolute;height:100%;width:100%;left:0;top:0;right:0;bottom:0;color:transparent;background-size:cover;background-position:50% 50%;background-repeat:no-repeat;background-image:url(&quot;data:image\/svg+xml;charset=utf-8,%3Csvg xmlns='http:\/\/www.w3.org\/2000\/svg' %3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3CfeColorMatrix values='1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 100 -1' result='s'\/%3E%3CfeFlood x='0' y='0' width='100%25' height='100%25'\/%3E%3CfeComposite operator='out' in='s'\/%3E%3CfeComposite in2='SourceGraphic'\/%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3C\/filter%3E%3Cimage width='100%25' height='100%25' x='0' y='0' preserveAspectRatio='none' style='filter: url(%23b);' href='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mN8+R8AAtcB6oaHtZcAAAAASUVORK5CYII='\/%3E%3C\/svg%3E&quot;)\"   src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-20-at-4.19.37\u202fPM.png\"\/><\/a><\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In reaching this assessment, Grok was affirming the consensus <a 
href=\"https:\/\/www.vox.com\/politics\/474637\/ice-shooting-minnesota-renee-nicole-good-trump\" rel=\"nofollow noopener\" target=\"_blank\">among mainstream journalistic institutions<\/a> \u2014 and <a href=\"https:\/\/claude.ai\/share\/4c16b257-9326-4b94-afac-581c3e9437a6\" rel=\"nofollow noopener\" target=\"_blank\">also<\/a>, <a href=\"https:\/\/chatgpt.com\/share\/69653dbf-288c-8006-b6dd-eb99af7e948f\" rel=\"nofollow noopener\" target=\"_blank\">other<\/a> <a href=\"https:\/\/gemini.google.com\/share\/6b9b7c2dbbc1\" rel=\"nofollow noopener\" target=\"_blank\">chatbots<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a \u201cconverging\u201d form of technology, in the sense that they \u201chomogenize the perspectives the population experiences and build a less polarized, more shared reality among the population\u2019s members.\u201d And he suggests that they are also a \u201ctechnocratising\u201d force, in that they give experts\u2019 disproportionate influence over the content of that shared reality.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Of course, this would be a lot to read into a single Grok reply; if you glanced at that bot\u2019s outputs last July \u2014 when a misguided update to the LLM\u2019s programming caused it to <a href=\"https:\/\/www.vox.com\/future-perfect\/419631\/grok-hitler-mechahitler-musk-ai-nazi\" rel=\"nofollow noopener\" target=\"_blank\">self-identify as \u201cMechaHitler\u201d<\/a> \u2014 you might have concluded that AI is a \u201cNazifying\u201d technology.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But there is evidence that Grok and other LLMs tend to provide (relatively) 
accurate fact checks \u2014 and forge consensus among users in the process.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">One <a href=\"https:\/\/sciety.org\/articles\/activity\/10.31234\/osf.io\/85quw_v2\" rel=\"nofollow noopener\" target=\"_blank\">recent study<\/a> examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The researchers also compared the bots\u2019 answers against those of professional fact-checkers and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">What\u2019s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts \u2014 a pattern consistent with past research showing that the <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07942-8\" rel=\"nofollow noopener\" target=\"_blank\">right tends to share misinformation<\/a> more frequently than the left.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Critically, in the paper, the LLMs\u2019 answers did not just converge on expert opinion \u2014 they also nudged users toward their conclusions.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Other 
research has <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2352250X25002295\" rel=\"nofollow noopener\" target=\"_blank\">documented similar effects<\/a>. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users\u2019 skepticism about the scientific consensus on those topics.<\/p>\n<p>AI might combat misinformation in practice. But does it in theory? <\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">A handful of papers can\u2019t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But they offer several theoretical reasons to expect that AI will have broadly \u201cconverging\u201d and \u201ctechnocratising\u201d effects on public discourse. Two are particularly compelling:<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. 
If evangelism for the <a href=\"https:\/\/www.youtube.com\/watch?v=qCXupXXXncM\" rel=\"nofollow noopener\" target=\"_blank\">\u201cflat Earth\u201d theory<\/a> attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models\u2019 ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the \u201cknowledge economy.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts \u2014 but prioritize users\u2019 titillation or ideological comfort in personal ones. 
In practice, however, it\u2019s hard to inject a bit of irrationality or political bias into a model\u2019s outputs without sabotaging its commercial utility (as Musk <a href=\"https:\/\/www.vox.com\/future-perfect\/419631\/grok-hitler-mechahitler-musk-ai-nazi\" rel=\"nofollow noopener\" target=\"_blank\">evidently discovered last year<\/a>).<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there\u2019s reason to think that LLMs will prove radically more effective at that task.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">After all, human experts cannot provide encyclopedic answers to everyone\u2019s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired \u2014 addressing every source of a user\u2019s skepticism, in terms customized for their reading level and sensibilities \u2014 without ever growing irritated or condescending.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">That last bit is especially significant. When one human tries to persuade another that they are wrong about something \u2014 particularly within view of other people \u2014 the misinformed person is liable to perceive a threat to their status: To recognize one\u2019s error might seem like conceding one\u2019s intellectual inferiority. 
And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM\u2019s point without suffering a sense of status threat or losing face. We don\u2019t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The expert consensus has never before had such an advocate. And there\u2019s evidence that LLMs\u2019 infinite patience renders them exceptionally effective at dispelling misconceptions. In <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.adq1814\" rel=\"nofollow noopener\" target=\"_blank\">a 2024 study,<\/a> proponents of various conspiracy theories \u2014 including 2020 election denial \u2014 durably revised their beliefs after extensively debating the topic with a chatbot.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It seems clear then that LLMs possess some \u201cconverging\u201d and \u201ctechnocratizing\u201d properties. And, experts\u2019 fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Still, it isn\u2019t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). 
To name just five:<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">1) LLMs can mold reality to match their users\u2019 desires. If you log into ChatGPT for the first time \u2014 and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents \u2014 the LLM generally won\u2019t answer with an emphatic \u201cyes.\u201d But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually <a href=\"https:\/\/nypost.com\/2025\/08\/29\/business\/ex-yahoo-exec-killed-his-mom-after-chatgpt-fed-his-paranoia-report\/\" rel=\"nofollow noopener\" target=\"_blank\">began affirming<\/a> his persecution fantasies, allegedly nudging him toward matricide in the process.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Such instances of \u201c<a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">AI psychosis<\/a>\u201d are rare. But they represent the most extreme manifestation of a more common phenomenon \u2014 AI models\u2019 tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users\u2019 perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has surfaced, even as AI companies have tried to combat it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The sycophancy problem could therefore get dramatically worse, if one or more LLM providers decide to center their business model around consumer engagement. 
As social media has shown, sensational and\/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to \u201csycophancy-max\u201d its model, pursuing the same engagement-optimization tactics as YouTube or Facebook.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">A world of even greater informational divergence \u2014 in which people aren\u2019t merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices \u2014 might ensue.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with <a href=\"https:\/\/www.npr.org\/2025\/08\/28\/nx-s1-5493485\/ai-slop-videos-youtube-tiktok\" rel=\"nofollow noopener\" target=\"_blank\">unlabeled, \u201cdeepfake\u201d videos<\/a>. 
Soon, AI tools may enable nefarious actors to orchestrate ever-more convincing \u201c<a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/22\/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media\" rel=\"nofollow noopener\" target=\"_blank\">bot swarms<\/a>\u201d \u2014 networks of AI agents that impersonate humans on social media platforms, deploying LLMs\u2019 persuasive powers to indoctrinate other users and create the appearance of a false consensus.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment \u2014 arguably, the majority \u2014 into perpetual confusion.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. 
Less catastrophically, LLMs\u2019 converging tendencies could simply make technocrats\u2019 honest mistakes harder to detect or remedy.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">4) AI could trigger widespread cognitive atrophy, as humans <a href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2025\/10\/ai-deskilling-automation-technology\/684669\/\" rel=\"nofollow noopener\" target=\"_blank\">outsource an ever-larger share<\/a> of <a href=\"https:\/\/nymag.com\/intelligencer\/article\/openai-chatgpt-ai-cheating-education-college-students-school.html\" rel=\"nofollow noopener\" target=\"_blank\">cognitive labor<\/a> to machines. Over time, this could erode the public\u2019s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Already, chatbots are <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/06\/generative-ai-pirated-articles-books\/683009\/\" rel=\"nofollow noopener\" target=\"_blank\">draining revenue<\/a> from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source for AI advice, are increasingly being <a href=\"https:\/\/www.nytimes.com\/2026\/02\/17\/technology\/chatbots-influencers-brands-marketing.html\" rel=\"nofollow noopener\" target=\"_blank\">flooded with plugs for products<\/a> in order to trick chatbots into recommending them. 
Wikipedia\u2019s human moderators fear a future in which they\u2019re stuck sifting through a tsunami of <a href=\"https:\/\/www.nytimes.com\/2023\/07\/18\/magazine\/wikipedia-ai-chatgpt.html\" rel=\"nofollow noopener\" target=\"_blank\">low-quality AI-generated updates and citations<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">For these reasons, among others, AI models\u2019 ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse \u2014 if we properly guide its development.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Of course, precisely how to maximize AI\u2019s capacity for edification \u2014 while minimizing its potential for distortion \u2014 is a difficult question, about which reasonable people can disagree. 
So, let\u2019s ask Claude.<\/p>\n","protected":false},"excerpt":{"rendered":"For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals&hellip;\n","protected":false},"author":2,"featured_media":361275,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,1475,60,58,80],"class_list":{"0":"post-361274","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-innovation","13":"tag-ireland","14":"tag-social-media","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/361274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=361274"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/361274\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/361275"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=361274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=361274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=361274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}