<h1>We Need to Talk About How We Talk About &#8216;AI&#8217;</h1>

<p>&#8220;AI&#8221; is not your friend. Nor is it an intelligent tutor, an empathetic ear, or a helpful assistant. It cannot &#8220;make up&#8221; facts, and it does not make &#8220;mistakes&#8221;. It does not actually answer your questions. Yet such anthropomorphizing language permeates the public discussion of so-called artificial intelligence technologies. The problem with anthropomorphic descriptions is that they risk masking important limitations of <a href="https://dl.acm.org/doi/abs/10.1145/3630106.3659040" target="_blank" rel="noopener nofollow">probabilistic automation systems</a>, limitations that make these systems fundamentally different from human cognition.</p>

<p>The people and companies selling &#8220;AI&#8221; technologies routinely use language that portrays their systems as human-like: &#8220;reasoning capabilities&#8221;, &#8220;hallucinating&#8221;, and artificial &#8220;intelligence&#8221; itself. The media has largely let them set the terms of the debate, right down to the terminology used in any discussion of these systems. But even the most flawless execution of a task typically associated with intelligence does not make a system &#8220;intelligent&#8221;, and the framing of systems as human or human-like is misleading at best, <a href="https://www.bbc.com/news/articles/cgerwp7rdlvo" target="_blank" rel="noopener nofollow">deadly at worst</a>.</p>

<p>Anthropomorphizing language influences how people perceive a system on multiple levels.
It oversells a system that is likely to underdeliver, and it portrays a worldview in which the people responsible for developing the systems are not held accountable for the systems&#8217; inaccurate, inappropriate, and sometimes deadly output. It promotes misplaced trust, over-reliance, and dehumanization.</p>

<p>The problematic nature of anthropomorphization, of &#8220;wishful mnemonics&#8221;, is by no means a novel critique in the field of computing. It was raised half a century ago by the computer scientist Drew McDermott, who <a href="https://dl.acm.org/doi/pdf/10.1145/1045339.1045340" target="_blank" rel="noopener nofollow">wrote in 1976</a>, when &#8220;artificial intelligence&#8221; was still a relatively new field:</p>

<blockquote><p>If a researcher [&#8230;] calls the main loop of his program &#8220;UNDERSTAND,&#8221; he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself. [&#8230;] What he should do instead is refer to this main loop as &#8220;G0034,&#8221; and see if he can convince himself or anyone else that G0034 implements some part of understanding. [&#8230;] Many instructive examples of wishful mnemonics by AI researchers come to mind once you see the point.</p></blockquote>

<p>To make more informed decisions about so-called AI, it helps to be able to recognize the different ways in which the language used to describe it is anthropomorphizing and thus misleading. The most prominent category of anthropomorphization comprises terms that describe systems in terms of cognition or even emotion. These words can be verbs describing what the system supposedly does (&#8220;think&#8221;, &#8220;recognize&#8221;, &#8220;understand&#8221;) or nouns describing those actions or their results (&#8220;chain of thought&#8221;, &#8220;reasoning&#8221;, &#8220;skills&#8221;).
Words that describe cognitive failures, like &#8220;ignore&#8221;, also belong here, since they cast the &#8220;ignoring&#8221; entity as something that could conversely pay attention. The term &#8220;artificial intelligence&#8221; itself may be particularly problematic, given that <a href="https://dl.acm.org/doi/abs/10.1145/3491102.3517527" target="_blank" rel="noopener nofollow">some research</a> has shown that people associate higher machine competence with this term than with, e.g., &#8220;decision support systems&#8221;, &#8220;sophisticated statistical models&#8221;, or even &#8220;machine learning&#8221;.</p>

<p>Metaphors are helpful shortcuts, but they are also seductive, because they create an impression of understanding. Communicating accurate mental models of &#8220;AI&#8221; systems is challenging when technical descriptions are not meaningful to the average user. This difficulty does not let journalists and researchers off the hook, however. It remains our job to find clear and non-misleading ways to talk about the technology.</p>

<p>Another way anthropomorphizing language misleads is by putting the automated system in the driver&#8217;s seat, treating it as an agent in its own right. This framing is pervasive, and it serves to obscure the actions and accountability of the people building and using the systems. Examples include phrasings like &#8220;ChatGPT helps the students&#8230;&#8221;, &#8220;the model creates realistic video&#8221;, or &#8220;AI systems need more and more power every year&#8221;.
A variant of this positions a model as a collaborator of the person using it, rather than as a tool they are using, with words like &#8220;co-write&#8221;, &#8220;co-create&#8221;, and so on.</p>

<p>We also anthropomorphize automated systems when we describe them as participating in acts of communication. If you say you &#8220;asked&#8221; the system a question, or that it &#8220;told&#8221; you something or &#8220;lied&#8221;, you are overselling what actually happened. These words entail communicative ability and intent: the ability to understand communicative acts, a desire to reciprocate communication, and the choice to do so in a certain way. Rephrasing the language we use to describe these interactions is truly swimming upstream, because the companies selling these systems not only describe them as communicators but also make many design choices to support this illusion. From the chat interface itself to the use of I/me pronouns, these systems are <a href="https://www.techpolicy.press/ai-chatbots-are-emotionally-deceptive-by-design/" target="_blank" rel="noopener nofollow">designed</a> to provide the illusion of a conversation partner. But they are outputting text that no one is actually accountable for, and playing on our very human tendency to make sense of any linguistic activity in languages we are familiar with.</p>

<p>No matter how strongly a person may feel comfort, relief, even connectedness towards a chatbot, this does not make the chatbot a friend, a therapist, or a romantic partner. People may form friendly feelings towards inanimate objects or technology, but those feelings are entirely unidirectional; surely, we would not call a child&#8217;s plush toy a friend of theirs without at least the prefix &#8220;imaginary&#8221;. Framing is an exceptionally powerful cognitive device that can make the difference between what we consider real and unreal.
Take, for example, the many recent cases of <a href="https://www.psypost.org/chatgpt-psychosis-this-scientist-predicted-ai-induced-delusions-two-years-later-it-appears-he-was-right/" target="_blank" rel="noopener nofollow">&#8220;AI&#8221; psychosis</a> and disastrous &#8220;therapeutic&#8221; interactions between people and chatbots: for people prone to delusions, the tendency to anthropomorphize chatbots is particularly <a href="https://www.wsj.com/tech/ai/chatgpt-murder-suicide-greenwich-openai-fd14fac2" target="_blank" rel="noopener nofollow">perilous</a>. Frequent use of these technologies in &#8220;conversations&#8221; that mimic romantic exchanges is directly associated with <a href="https://journals.sagepub.com/doi/10.1177/02654075251371394" target="_blank" rel="noopener nofollow">higher levels of depression and lower life satisfaction</a>.</p>

<h2>From wishful mnemonics to accurate nomenclature</h2>

<p>We argue that we should aim for higher linguistic accuracy in our descriptions of &#8220;AI&#8221; systems: in scientific and journalistic writing, in public debate, and in everyday use. This requires deliberate rephrasing and might feel awkward at first. But the thing about patterns of language use is that we learn them from each other, and yesterday&#8217;s oddities, if used persistently enough, become part of our linguistic landscape.</p>

<p>The inaccuracies encouraged by anthropomorphic descriptions are likely to have a disproportionate impact on vulnerable populations.
<a href="https://journals.sagepub.com/doi/pdf/10.1177/00222429251314491?casa_token=_q962uI1YEUAAAAA:KCZUBfV1ZahO4Mcf5axHMNytSG0qCAjsoDxJaStDh3MuqvXwuql-dH449BnNd1AL1aL1VLkhPP5l5GI" target="_blank" rel="noopener nofollow">A 2025 article</a> shows a negative correlation between people&#8217;s &#8220;AI&#8221; literacy and their &#8220;AI&#8221; receptivity; the more people know about how &#8220;AI&#8221; works, the less likely they are to want to use it: &#8220;people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI&#8217;s execution of tasks that seem to require uniquely human attributes.&#8221; Somewhat absurdly, the authors use this finding as an argument against educating the public about &#8220;AI&#8221;, since doing so would reduce receptivity. We believe, instead, that language is a device for increasing people&#8217;s &#8220;AI&#8221; literacy, helping them make informed choices about technology acceptance.</p>

<p>A more deliberate and thoughtful way forward is to talk about &#8220;AI&#8221; systems in terms of what we use them to do, often specifying the input and/or output. That is, talk about functionalities that serve our purposes, rather than &#8220;capabilities&#8221; of the system. Rather than saying a model is &#8220;good at&#8221; something (suggesting the model has skills), we can talk about what it is &#8220;good for&#8221;. Who is using the model to do something, and what are they using it to do?</p>

<p>It takes effort to swim upstream against the anthropomorphizing language embedded in commonly used technical terms and popular discourse, both in recognizing the language at all and in finding suitable alternatives.
Whether we are participating in local discussions that shape decisions for our workplaces, schools, or communities, or writing for broad audiences, we share a responsibility to create and use empowering metaphors rather than misleading language that embeds the tech companies&#8217; marketing pitches.</p>