{"id":252400,"date":"2025-10-31T04:18:10","date_gmt":"2025-10-31T04:18:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/252400\/"},"modified":"2025-10-31T04:18:10","modified_gmt":"2025-10-31T04:18:10","slug":"my-workers-are-smart-so-why-are-they-using-large-language-models-llms-so-much","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/252400\/","title":{"rendered":"My workers are smart, so why are they using large language models (LLMs) so much?"},"content":{"rendered":"<p>This may sound blatantly obvious to people like you \u2013 those who have remained vaguely impressed but fundamentally sceptical of this new technology \u2013 or those who understand the technology inside out.<\/p>\n<p>But to many, including, as you\u2019ve said, \u201cbright and capable\u201d professionals, it\u2019s not obvious at all. And so people have started using tools such as ChatGPT to replicate highly complex human processes or, yes, even as a source of objective truth.<\/p>\n<p>Suddenly, we hear people unironically telling us \u201cChatGPT told me \u2026\u201d or earnestly discussing what gender they assign to the chatbot, as if it\u2019s basically a digital person.<\/p>\n<p>There\u2019s a whole philosophical and ethical argument to be made about why this is dangerous. I don\u2019t have the space or the erudition to go down that path. So I\u2019ll just concentrate on two practical problems with this kind of unrestrained faith, namely LLMs\u2019 current propensity for sycophancy and hallucination.<\/p>\n<p>It\u2019s simply a fact that LLMs still produce wildly inaccurate, sometimes farcically silly, responses to questions or task requests. 
<a href=\"https:\/\/www.smh.com.au\/business\/workplace\/why-is-chatgpt-trying-to-gaslight-me-20240321-p5fe9x.html\" rel=\"noopener nofollow\" target=\"_blank\">Experts refer to this as \u201challucination\u201d<\/a>.<\/p>\n<p>To go back to my experiment asking chatbots to write a Work Therapy-esque column: when I asked them to help me understand where they had got the data informing their choices of style and tone, their answers were confused. Then, after more specific questioning, they began presenting blatantly incorrect information as fact.<\/p>\n<p>AI sycophancy is a phenomenon distinct, but not entirely separate, from hallucination. It, too, can lead to incorrect assertions or dubious advice.<\/p>\n<p>In <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-03390-0\" rel=\"noopener nofollow\" target=\"_blank\">a recent study on LLMs described in Nature<\/a>, researchers \u201ctested how 11 widely used large language models responded to more than 11,500 queries seeking advice, including many describing wrongdoing or harm\u201d.<\/p>\n<p>The article said that \u201cAI Chatbots \u2013 including ChatGPT and Gemini \u2013 often cheer users on, give them overly flattering feedback and adjust responses to echo their views, sometimes at the expense of accuracy\u201d.<\/p>\n<p>In the same article, a data science PhD student is quoted as saying: \u201cSycophancy essentially means that the model trusts the user to say correct things\u201d. What you\u2019ve observed is that this mistaken trust is being reciprocated by some users. And I don\u2019t think your anecdotal evidence is any kind of exception or anomaly.<\/p>\n<p>People are no longer using LLMs as the handy but limited tools they are. 
They\u2019re going way beyond time-saving requests such as summarising a long, boring email or tidying up a pre-written speech.<\/p>\n<p>(I have reservations about handing either of those tasks over to AI, but I also recognise that the horse has well and truly bolted on these practices and that I\u2019m now part of a rapidly diminishing minority holding on to such a sentiment.)<\/p>\n<p>They are, instead, treating this technology as if it has advanced to such a degree that there is nothing it cannot do \u2013 that there is, as you put it, no \u201creasoning\u201d exercise or \u201ccreative\u201d endeavour humans can\u2019t \u201coutsource\u201d to it.<\/p>\n<p>This may be true in the future \u2013 and almost certainly will be in the distant future \u2013 but it is absolutely not true now. You may slightly underestimate what AI is capable of today \u2013 or at least be indifferent to its utility. I think that\u2019s preferable to massively overestimating what it can do.<\/p>\n
fundamentally&hellip;\n","protected":false},"author":2,"featured_media":252401,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-252400","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/252400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=252400"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/252400\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/252401"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=252400"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=252400"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=252400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}