{"id":150228,"date":"2025-11-20T14:07:10","date_gmt":"2025-11-20T14:07:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/150228\/"},"modified":"2025-11-20T14:07:10","modified_gmt":"2025-11-20T14:07:10","slug":"do-we-prefer-talking-to-machines-rather-than-each-other","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/150228\/","title":{"rendered":"Do we prefer talking to machines rather than each other?"},"content":{"rendered":"<p>This article is an on-site version of our The AI Shift newsletter. Premium subscribers can sign up <a href=\"https:\/\/ep.ft.com\/newsletters\/subscribe?newsletterIds=68da4b4af493110b11187d9f\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a> to get the newsletter delivered every Thursday. Standard subscribers can upgrade to Premium <a href=\"https:\/\/www.ft.com\/manage\/subscription\/change\/713f1e28-0bc5-8261-f1e6-eebab6f7600e?\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>, or <a href=\"https:\/\/www.ft.com\/newsletters\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">explore<\/a> all FT newsletters<\/p>\n<p>Welcome back to The AI Shift, our weekly newsletter about AI and the labour market. This week, we\u2019re interested in whether \u2014 and if so, in what circumstances \u2014 people prefer speaking to AI rather than humans. 
The answer has obvious implications for which jobs may be disrupted by generative AI, but it has some deeper ramifications too.<\/p>\n<p>Sarah writes<\/p>\n<p>In a <a href=\"https:\/\/www.federalreserve.gov\/mediacenter\/files\/capital-framework-conference-fireside-chat-transcript.pdf\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">\u201cfireside chat\u201d<\/a> at a conference hosted by the US Federal Reserve Board this summer (as always with these things, no actual fireside in sight), Sam Altman of OpenAI singled out customer service agents as one occupation he thought would be \u201ctotally, totally gone\u201d because of AI. With \u201cAI customer support bots,\u201d he said, \u201cyou call once; the thing just happens; it\u2019s done.\u201d As a result, he said, \u201cit doesn\u2019t bother me, at all, that that\u2019s an AI and not a real person\u201d. But he saw interactions with doctors differently. \u201cMaybe I\u2019m a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop.\u201d<\/p>\n<p>My intuition would be the same: that most people would be happy to speak to a machine for a utilitarian purpose like customer support, but not when it comes to something high-stakes and personal like their health. But is that right?<\/p>\n<p>When it comes to call centres, there are indeed some signs that workers are being displaced by AI. 
The <a href=\"https:\/\/digitaleconomy.stanford.edu\/wp-content\/uploads\/2025\/08\/Canaries_BrynjolfssonChandarChen.pdf#page=10\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">\u201cCanaries in the Coalmine\u201d paper<\/a> by Erik Brynjolfsson\u2019s team at Stanford\u2019s Digital Economy Lab, which we discussed in the first edition of this newsletter, found that early-career employment in customer service roles declined by about 10 per cent between late 2022 and July 2025.<\/p>\n<p>That said, some companies are dialling back their plans to fully automate customer service. Jonathan Schmidt, an analyst at research company Gartner, told me that some have \u201ctried to swing that pendulum all the way to full replacement, but the reality is [they] just can\u2019t. The processes, the structures \u2014 not to mention customer expectations \u2014 don\u2019t support full AI automation across all interactions.\u201d <\/p>\n<p>Gartner <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2025-09-10-gartner-predicts-none-of-the-fortune-500-companies-will-have-fully-eliminated-human-customer-service-by-2028\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">does not believe<\/a> that any Fortune 500 companies will have fully automated customer service by 2028, and it reckons half of the organisations that expected to \u201csignificantly reduce their service workforce due to AI\u201d will have dropped those plans by 2027.<\/p>\n<p>Why is it hard to fully dispense with humans? AI is being used to automate straightforward customer queries, but when it comes to more complex issues, there are technical and organisational constraints. Resolving knotty problems often requires tacit knowledge of the organisation and its foibles as well as access to its data. 
It can also require a certain amount of back-and-forth to help some customers articulate what the problem actually is.<\/p>\n<p>Then there are customer preferences. One <a href=\"https:\/\/academic.oup.com\/jcr\/article\/50\/4\/848\/7100346?login=false\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">research study<\/a> found that people evaluated bots more negatively than humans even when the service provided was identical. The researchers attributed this response to consumers\u2019 belief that the company was using automation to cut costs rather than improve quality. <\/p>\n<p>If you have called a customer service line, the chances are that you\u2019re already annoyed, because presumably the website Q&amp;A and text chatbot haven\u2019t been able to help you. As well as a resolution to your problem, you might also want to vent at someone and to feel that you\u2019ve been heard. Indeed, call centre workers now say they often have to <a href=\"https:\/\/www.reddit.com\/r\/callcentres\/comments\/1n5mj82\/more_and_more_people_accusing_me_of_being_ai\/\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">persuade<\/a> irate customers that they are, in fact, real humans and not AI bots.<\/p>\n<p>So John, even though (or perhaps because?) nobody enjoys their interactions with call centres, they\u2019re not likely to be fully automated any time soon. 
But what does your research tell us about other examples of human-machine interactions?<\/p>\n<p>John writes<\/p>\n<p>One recent example, Sarah, which also serves as a more optimistic counterpoint to our fairly gloomy take on LLMs and recruitment <a href=\"https:\/\/www.ft.com\/content\/e5b7c3bd-925e-4907-a8fd-96b8e33353a5\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">last week<\/a>, comes in the form of <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5395709\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">a paper<\/a> by Chicago Booth School economist Brian Jabarian and his co-author Luca Henkel, who found that having AI voice agents carry out job interviews can yield promising results in certain settings.<\/p>\n<p>Their study focused on the hiring process for a customer service firm in the Philippines, finding that not only were AI-led interviews more likely to result in job offers and job starts than recruiter-led conversations (offer decisions were always made by humans), they also led to better long-term outcomes in terms of staff retention, suggesting they really were producing good candidate-job-employer fits. Interestingly, most applicants also chose to be interviewed by an AI over a human when given the option.<\/p>\n<p>The main reason for these results was simple: AI interviewers are consistent; humans are not. Where the former generally stuck to the interview guidelines and covered all of the key topics, human interlocutors would often take a more meandering route and were less likely to get through all the questions. 
As a result, AI interviews tended to gather more relevant information from applicants, with observing recruiters rating AI-led interviews better than the ones they conducted themselves.<\/p>\n<p>There are a lot of caveats with this study \u2014 not least whether it can be generalised to other domains \u2014 but I think I\u2019m sold on the finding that in certain lower-stakes settings (such as recruitment for some lower-skilled jobs) generative AI\u2019s ability to hold a pleasant conversation while consistently following guidelines has the potential to free up significant amounts of human workers\u2019 time that could be spent on more valuable tasks. That this seems possible without negative side effects is especially promising.<\/p>\n<p>A very different \u2014 and certainly higher-stakes \u2014 domain where we\u2019re seeing some interesting results on AI conversations is healthcare. We might imagine that sensitive medical topics and the imperative for expert advice make this the last place to find benefits from AI, but <a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2025.06.09.25329258v1\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">recent research<\/a> finds that patients prefer discussing health issues with AI chatbots over text chats with healthcare practitioners. <\/p>\n<p>The most promising results are in the mental health domain, where <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2949882124000410\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">studies consistently find<\/a> that not only do people report high levels of satisfaction with chatbots, they also report reduced symptoms of depression relative to control groups.<\/p>\n<p>There seem to be two main mechanisms here. The first may surprise you: users consistently report that AIs are very empathetic \u2014 more so in fact than healthcare practitioners engaged in similar text-based chats. 
To my mind this meshes with the customer service example: humans can be inconsistent, perhaps tired or stressed, whereas AI retains a calm and sunny disposition. The second mechanism is that <a href=\"https:\/\/news.ku.edu\/news\/article\/study-finds-people-prefer-ai-chatbots-when-discussing-embarrassing-health-info-but-humans-when-they-are-angry\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">embarrassment<\/a> and stigma are often barriers to people discussing sensitive health topics, especially in <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11748427\/\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">certain cultures<\/a>, but they feel more comfortable opening up to an AI.<\/p>\n<p>So what have we learned?<\/p>\n<p>Sarah: I find it really interesting that we live in a world in which people will irately press zero repeatedly on a customer service call because they want to speak to a human about their faulty broadband, but might actively prefer to talk to a machine about their mental health. Of course, these might well not be the same set of people. I suspect another factor which matters is whether you have had the chance to choose an AI or a human, or whether you expect a human and then feel \u201cfobbed off\u201d by a machine.<\/p>\n<p>John: I find myself oscillating between optimism and alarm on the mental health use case. 
We\u2019ve got consistent evidence that AI chatbots have the potential to alleviate mental health problems for many people who might otherwise not be able to access help, but at the same time there have been a small number of <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c5yd90g0q43o\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">very concerning cases<\/a> of people engaging in disturbing behaviour following conversations with ChatGPT, including the emergence of \u201c<a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c24zdel5j18o\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">AI psychosis<\/a>\u201d and one incident where a teenager took his own life. A huge amount of effort will doubtless be put into adding and strengthening safeguards against these extreme outcomes, but it may be that for some people, talking to an AI is always going to pose risks.<\/p>\n<p>Recommended reading<\/p>\n<p>Wharton professor and AI specialist Ethan Mollick has been testing Google\u2019s new Gemini 3 model which marries generative with agentic AI, and <a href=\"https:\/\/www.oneusefulthing.org\/p\/three-years-from-gpt-3-to-gemini\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">he is impressed<\/a> (John)<\/p>\n<p>An <a href=\"https:\/\/www.ft.com\/content\/064bbca0-1cb2-45ab-85f4-25fdfc318d89\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">eye-opening missive<\/a> on Oracle and OpenAI by Bryce Elder over on FT Alphaville (Sarah). <a href=\"https:\/\/ftav.substack.com\/subscribe\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">Sign up to his new Substack launching tomorrow here.<\/a> <\/p>\n<p>Recommended newsletters for you<\/p>\n<p>The Lex Newsletter \u2014 Lex, our investment column, breaks down the week\u2019s key themes, with analysis by award-winning writers. 
Sign up <a href=\"https:\/\/ep.ft.com\/newsletters\/subscribe?newsletterIds=56657d10e4b04e04251004fd\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a><\/p>\n<p>Working It \u2014 Everything you need to get ahead at work, in your inbox every Wednesday. Sign up <a href=\"https:\/\/ep.ft.com\/newsletters\/subscribe?newsletterIds=62039b7ea31d6577a31f70df\" title=\"\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"This article is an on-site version of our The AI Shift newsletter. Premium subscribers can sign up here&hellip;\n","protected":false},"author":2,"featured_media":150229,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-150228","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/150228","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=150228"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/150228\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/150229"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=150228"}],"wp:term":[{"
taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=150228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=150228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}