
Welcome back to The AI Shift, our weekly newsletter about AI and the labour market. This week, we’re interested in whether — and if so, in what circumstances — people prefer speaking to AI rather than humans. The answer has obvious implications for which jobs may be disrupted by generative AI, but it has some deeper ramifications too.

Sarah writes

In a “fireside chat” at a conference hosted by the US Federal Reserve Board this summer (as always with these things, no actual fireside in sight), Sam Altman of OpenAI singled out customer service agents as one occupation he thought would be “totally, totally gone” because of AI. With “AI customer support bots,” he said, “you call once; the thing just happens; it’s done.” As a result, he said, “it doesn’t bother me, at all, that that’s an AI and not a real person”. But he saw interactions with doctors differently. “Maybe I’m a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop.”

My intuition would be the same: that most people would be happy to speak to a machine for a utilitarian purpose like customer support, but not when it comes to something high-stakes and personal like their health. But is that right?

When it comes to call centres, there are indeed some signs that workers are being displaced by AI. The “Canaries in the Coalmine” paper by Erik Brynjolfsson’s team at Stanford’s Digital Economy Lab, which we discussed in the first edition of this newsletter, found early-career employment in customer service roles declined by about 10 per cent between late 2022 and July 2025.

That said, some companies are dialling back their plans to fully automate customer service. Jonathan Schmidt, an analyst at research company Gartner, told me that some have “tried to swing that pendulum all the way to full replacement, but the reality is [they] just can’t. The processes, the structures — not to mention customer expectations — don’t support full AI automation across all interactions.”

Gartner does not believe that any Fortune 500 companies will have fully automated customer service by 2028, and it reckons half of the organisations that expected to “significantly reduce their service workforce due to AI” will have dropped those plans by 2027.

Why is it hard to fully dispense with humans? AI is being used to automate straightforward customer queries, but when it comes to more complex issues, there are technical and organisational constraints. Resolving knotty problems often requires tacit knowledge of the organisation and its foibles, as well as access to its data. It can also require a certain amount of back-and-forth to help some customers articulate what the problem actually is.

Then there are customer preferences. One research study found that people evaluated bots more negatively than humans even when the service provided was identical. The researchers attributed this response to consumers’ belief that the company was using automation to cut costs rather than improve quality.

By the time you call a customer service line, the chances are you’re already annoyed, because presumably the website Q&A and the text chatbot haven’t been able to help you. As well as a resolution to your problem, you might also want to vent at someone and to feel that you’ve been heard. Indeed, call centre workers now say they often have to persuade irate customers that they are, in fact, real humans and not AI bots.

So John, even though (or perhaps because?) nobody enjoys their interactions with call centres, they’re not likely to be fully automated any time soon. But what does your research tell us about other examples of human-machine interactions?

John writes

One recent example, Sarah, which also serves as a more optimistic counterpoint to our fairly gloomy take on LLMs and recruitment last week, comes in the form of a paper by Chicago Booth economist Brian Jabarian and his co-author Luca Henkel, who found that having AI voice agents carry out job interviews can yield promising results in certain settings.

Their study focused on the hiring process for a customer service firm in the Philippines, finding that not only were AI-led interviews more likely to result in job offers and job starts than recruiter-led conversations (offer decisions were always made by humans), they also led to better long-term outcomes in terms of staff retention, suggesting they really were producing good candidate-job-employer fits. Interestingly, most applicants also chose to be interviewed by an AI over a human when given the option.

The main reason for these results was simple: AI interviewers are consistent; humans are not. Where the former generally stuck to the interview guidelines and covered all of the key topics, human interlocutors would often take a more meandering route and were less likely to get through all the questions. As a result, AI interviews tended to gather more relevant information from applicants, with observing recruiters rating AI-led interviews better than the ones they conducted themselves.

There are a lot of caveats with this study — not least whether it can be generalised to other domains — but I think I’m sold on the finding that in certain lower-stakes settings (such as recruitment for some lower-skilled jobs) generative AI’s ability to hold a pleasant conversation while consistently following guidelines has the potential to free up significant amounts of human workers’ time that could be spent on more valuable tasks. That this seems possible without negative side effects is especially promising.

A very different — and certainly higher-stakes — domain where we’re seeing some interesting results on AI conversations is healthcare. We might imagine that sensitive medical topics and the need for expert advice would make this the last place to find benefits from AI, but recent research finds that patients prefer discussing health issues with AI chatbots over text chats with healthcare practitioners.

The most promising results are in the mental health domain, where studies consistently find that not only do people report high levels of satisfaction with chatbots, they also report reduced symptoms of depression relative to control groups.

There seem to be two main mechanisms here. The first may surprise you: users consistently report that AIs are very empathetic — more so in fact than healthcare practitioners engaged in similar text-based chats. To my mind this meshes with the customer service example: humans can be inconsistent, perhaps tired or stressed, where AI retains a calm and sunny disposition. The second mechanism is that embarrassment and stigma are often barriers to people discussing sensitive health topics, especially in certain cultures, but they feel more comfortable opening up to an AI.

So what have we learned?

Sarah: I find it really interesting that we live in a world in which people will irately press zero repeatedly on a customer service call because they want to speak to a human about their faulty broadband, but might actively prefer to talk to a machine about their mental health. Of course, these might well not be the same set of people. I suspect another factor which matters is whether you have had the chance to choose an AI or a human, or whether you expect a human and then feel “fobbed off” by a machine.

John: I find myself oscillating between optimism and alarm on the mental health use case. We’ve got consistent evidence that AI chatbots have the potential to alleviate mental health problems for many people who might otherwise not be able to access help, but at the same time there have been a small number of very concerning cases of people engaging in disturbing behaviour following conversations with ChatGPT, including the emergence of “AI psychosis” and one incident in which a teenager took his own life. A huge amount of effort will doubtless be put into adding and strengthening safeguards against these extreme outcomes, but it may be that for some people, talking to an AI is always going to pose risks.

Recommended reading

Wharton professor and AI specialist Ethan Mollick has been testing Google’s new Gemini 3 model which marries generative with agentic AI, and he is impressed (John)

An eye-opening missive on Oracle and OpenAI by Bryce Elder over on FT Alphaville (Sarah). Sign up to his new Substack launching tomorrow here.
