{"id":121123,"date":"2025-11-04T11:13:13","date_gmt":"2025-11-04T11:13:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/121123\/"},"modified":"2025-11-04T11:13:13","modified_gmt":"2025-11-04T11:13:13","slug":"why-do-some-of-us-love-ai-while-others-hate-it","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/121123\/","title":{"rendered":"Why do some of us love AI, while others hate it?"},"content":{"rendered":"<p>Analysis: The answer isn&#8217;t just about how AI works, but how our brains perceive risk and trust<\/p>\n<p>By <a href=\"https:\/\/theconversation.com\/profiles\/paul-jones-992605\" rel=\"nofollow noopener\" target=\"_blank\">Paul Jones<\/a>, <a href=\"https:\/\/theconversation.com\/institutions\/aston-university-1107\" rel=\"nofollow noopener\" target=\"_blank\">Aston University<\/a><\/p>\n<p>From <a href=\"https:\/\/chatgpt.com\/\" target=\"_blank\" rel=\"nofollow noopener\">ChatGPT<\/a> crafting emails, to AI systems recommending TV shows and even helping diagnose disease, the presence of machine intelligence in everyday life is no longer science fiction. Yet, for all the promises of speed, accuracy and optimisation, there&#8217;s a lingering discomfort. <a href=\"https:\/\/www.turing.ac.uk\/sites\/default\/files\/2023-06\/how_do_people_feel_about_ai_-_ada_turing.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Some people love<\/a> using AI tools. Others feel anxious, suspicious, even betrayed by them. Why?<\/p>\n<p>The answer isn&#8217;t just about <a href=\"https:\/\/theconversation.com\/topics\/artificial-intelligence-ai-90\" rel=\"nofollow noopener\" target=\"_blank\">how AI works<\/a>, but it&#8217;s about how we work. We don&#8217;t understand it, so we don&#8217;t trust it. Human beings are more likely to trust systems they understand. Traditional tools feel familiar: you turn a key, and a car starts. You press a button, and a lift arrives.<\/p>\n<p alt=\"Is the AI Bubble about to burst?\" class=\"tpe\" data-description=\"After record gains in AI-linked stocks, there is now concern that the industry may be on the verge of a crash. Edel McAllister reports.\" data-embed=\"rte-player\" data-id=\"22555308\" data-ot-category=\"C0004\" data-title=\"After record gains in AI-linked stocks, there is now concern that the industry may be on the verge of a crash. Edel McAllister reports.\">We need your consent to load this rte-player contentWe use rte-player to manage extra content that can set cookies on your device and collect data about your activity. Please review their details and accept them to load the content.<a class=\"blocked-button\" href=\"https:\/\/www.rte.ie\/brainstorm\/2025\/1104\/1542024-ai-artificial-intelligence-psychology-chatbots-trust-risk\/javascript:void(0);\" onclick=\"OneTrust.ToggleInfoDisplay()\" rel=\"nofollow noopener\" target=\"_blank\">Manage Preferences<\/a><\/p>\n<p>From RT\u00c9 Radio 1&#8217;s News At One, is the AI bubble about to burst?<\/p>\n<p>But many AI systems operate as black boxes: you type something in, and a decision appears. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to interrogate decisions. When we can&#8217;t, we feel disempowered.<\/p>\n<p>This is one reason for what&#8217;s called <a href=\"http:\/\/pubmed.ncbi.nlm.nih.gov\/25401381\/\" rel=\"nofollow noopener\" target=\"_blank\">algorithm aversion<\/a>. 
The term was popularised by the marketing researcher <a href=\"https:\/\/www.chicagobooth.edu\/faculty\/directory\/d\/berkeley-j-dietvorst\" target=\"_blank\" rel=\"nofollow noopener\">Berkeley Dietvorst<\/a> and colleagues, whose research showed that people often prefer flawed human judgement over algorithmic decision-making, particularly after witnessing even a single algorithmic error.<\/p>\n<p>We know, rationally, that AI systems don&#8217;t have emotions or agendas, but that doesn&#8217;t stop us from projecting them onto those systems. When ChatGPT responds &#8220;too politely&#8221;, some users find it eerie. When a recommendation engine gets a little too accurate, it feels intrusive. We begin to suspect manipulation, even though the system has no self.<\/p>\n<p>From RT\u00c9 Radio 1&#8217;s Drivetime, why China&#8217;s new AI app is a game-changer globally<\/p>\n<p>This is a form of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Anthropomorphism\" target=\"_blank\" rel=\"nofollow noopener\">anthropomorphism<\/a>, attributing humanlike intentions to nonhuman systems. Professors of communication Clifford Nass and Byron Reeves, along with others, have <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/191666.191703\" rel=\"nofollow noopener\" target=\"_blank\">demonstrated<\/a> that we respond socially to machines, even knowing they&#8217;re not human.<\/p>\n<p><strong>We hate when AI gets it wrong<\/strong><\/p>\n<p>One curious finding from behavioural science is that we are often more forgiving of human error than machine error. When a human makes a mistake, we understand it. We might even empathise. But when an algorithm makes a mistake, especially if it was pitched as objective or data-driven, we feel betrayed.<\/p>\n<p>This links to research on <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC5446980\/\" rel=\"nofollow noopener\" target=\"_blank\">expectation violation<\/a>, which occurs when our assumptions about how something &#8220;should&#8221; behave are disrupted. It causes discomfort and loss of trust. We trust machines to be logical and impartial. So when they fail, such as misclassifying an image, delivering biased outputs or recommending something wildly inappropriate, our reaction is sharper. We expected more. 
The irony? Humans make flawed decisions all the time. But at least we can ask them &#8220;why?&#8221;<\/p>\n<p>From RT\u00c9 Radio 1&#8217;s The Business, what are the jobs that AI will not replace?<\/p>\n<p>For some, AI isn&#8217;t just unfamiliar; it&#8217;s existentially unsettling. Teachers, writers, lawyers and designers are suddenly confronting tools that replicate parts of their work. This isn&#8217;t just about automation; it&#8217;s about what makes our skills valuable, and what it means to be human.<\/p>\n<p>This can activate a form of <a href=\"http:\/\/psycnet.apa.org\/doiLanding?doi=10.1037%2F0003-066X.52.6.613\" rel=\"nofollow noopener\" target=\"_blank\">identity threat<\/a>, a concept explored by social psychologist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Claude_Steele\" target=\"_blank\" rel=\"nofollow noopener\">Claude Steele<\/a> and others. It describes the fear that one&#8217;s expertise or uniqueness is being diminished. The result? Resistance, defensiveness or outright dismissal of the technology. Distrust, in this case, is not a bug \u2013 it&#8217;s a psychological defence mechanism.<\/p>\n<p><strong>Craving emotional cues<\/strong><\/p>\n<p>Human trust is built on more than logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It might be fluent, even charming. But it doesn&#8217;t reassure us the way another person can.<\/p>\n<p>This is similar to the discomfort of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Uncanny_valley\" target=\"_blank\" rel=\"nofollow noopener\">uncanny valley<\/a>, a term coined by Japanese roboticist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Masahiro_Mori_(roboticist)\" target=\"_blank\" rel=\"nofollow noopener\">Masahiro Mori<\/a> to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, but something feels off. That emotional absence can be interpreted as coldness, or even deceit.<\/p>\n
<p>From RT\u00c9 News&#8217; Behind the Story, can you trust everything you see online?<\/p>\n<p>In a world full of deepfakes and algorithmic decisions, that missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don&#8217;t know how to feel about it.<\/p>\n<p>It&#8217;s important to say: not all suspicion of AI is irrational. Algorithms have been shown to <a href=\"https:\/\/theconversation.com\/when-ai-plays-favourites-how-algorithmic-bias-shapes-the-hiring-process-239471\" rel=\"nofollow noopener\" target=\"_blank\">reflect and reinforce bias<\/a>, especially in areas like recruitment, policing and credit scoring. If you&#8217;ve been harmed or disadvantaged by data systems before, you&#8217;re not being paranoid, you&#8217;re being cautious.<\/p>\n<p>This links to a broader psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, scepticism becomes not only reasonable, but protective.<\/p>\n<p>Telling people to &#8220;trust the system&#8221; rarely works. Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.<\/p>\n<p>If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we&#8217;re invited to join.<\/p>\n<p>Follow RT\u00c9 Brainstorm on <a href=\"https:\/\/www.whatsapp.com\/channel\/0029VaJ6ugQ1HsptikZkfS1f\" target=\"_blank\" rel=\"nofollow noopener\">WhatsApp<\/a> and <a href=\"https:\/\/www.instagram.com\/rte_brainstorm\" target=\"_blank\" rel=\"nofollow noopener\">Instagram<\/a> for more stories and updates<\/p>\n<p><a href=\"https:\/\/theconversation.com\/profiles\/paul-jones-992605\" rel=\"nofollow noopener\" target=\"_blank\">Paul Jones<\/a> is Associate Dean for Education and Student Experience at Aston Business School, <a href=\"https:\/\/theconversation.com\/institutions\/aston-university-1107\" rel=\"nofollow noopener\" target=\"_blank\">Aston University<\/a>. This article was originally published by <a href=\"https:\/\/theconversation.com\" rel=\"nofollow noopener\" target=\"_blank\">The Conversation<\/a>. 
<\/p>\n<p>The views expressed here are those of the author and do not represent or reflect the views of RT\u00c9<\/p>\n","protected":false},"excerpt":{"rendered":"Analysis: The answer isn&#8217;t just about how AI works, but how our brains perceive risk and trust By&hellip;\n","protected":false},"author":2,"featured_media":121124,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-121123","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/121123","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=121123"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/121123\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/121124"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=121123"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=121123"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=121123"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}