The writer is a contributing columnist, based in Chicago

The label “neurotic” isn’t normally viewed as a compliment. But when University of Chicago researchers tested earlier this year how people reacted to robots pretending to be restaurant greeters, they found that folks liked a dash of neuroticism in their artificial intelligence — saying it made the robot more “human-like”. 

But these days there is increasing controversy over just how “human” our AI helpers should pretend to be — and what personalities they should be given, if any. Critics argue that humanlike emotional attributes can trick people into treating them less like tools and more like friends or therapists, with sometimes tragic consequences.

OpenAI had to give ChatGPT a personality overhaul in April after backlash against an earlier version, which was criticised as too sycophantic. The company acknowledged that interacting with an overly obsequious chatbot “can be uncomfortable, unsettling, and cause distress” — and threaten trust.

This month ChatGPT invited me to choose how I would like to be spoken to: it offered “friendly”, “candid”, “professional”, “efficient”, “nerdy”, “cynical” or “quirky”, and also allowed me to adjust characteristics such as “warmth”, “enthusiasm” and “emoji use”. We are getting along much better now that I’ve chosen minimum warmth, enthusiasm and emojis, plus a “quirky” tone — and instructed it to “stop praising me for everything I say”.

Notably, OpenAI didn’t offer me a “neurotic” option — which makes the University of Chicago study all the more interesting. Here, researchers used a humanoid robot pretending to be a restaurant greeter, and gave it three personalities: extroverted, neurotic, or emotionless (ie, robotic). It was asked what three things it was grateful for. The extrovert enthused about how it was “super grateful” for all the “amazing” people it got to meet. The neurotic one peppered its speech with hmm’s and ha’s, and seemed much more humanly hesitant. 

Overall, participants enjoyed the outgoing robot more — but expressed surprise at how well the neurotic one could understand deep emotions: “the robot seemed like a person who was trying to get by in the world”, one participant told the researchers. “People are not expecting robots to be anxious and thinking about what other people think of them,” Sarah Sebo, director of the University of Chicago’s Human-Robot Interaction lab, told me. “Neuroticism seemed to humanise and make the robot more relatable”. 

Some memorable fictional robots — like The Hitchhiker’s Guide to the Galaxy’s famously depressed robot Marvin the Paranoid Android — have been troubled. But Lionel Robert, a University of Michigan robotics expert, tells me “if you have a robot surgeon with a neurotic personality, that might not instil confidence”, nor would he want his autonomous car fretting about “not being very good at driving in snow”. 

Robert is not against giving social robots and AI chatbots a personality. “That works incredibly well,” he says, because “humans are used to interacting with other humans, and you’ve never interacted with a human without a personality, so it disarms people and makes them feel comfortable.” 

But the risk, Gideon Futerman of the Center for AI Safety tells me, is that “certain model personality traits, especially sycophancy, seem to make AI psychosis — where users develop paranoia or delusion in connection with conversations with chatbots — more likely”. 

Unhealthy interactions can take many forms. I, for example, always say please and thank you to ChatGPT, and never rebuke it directly, no matter how many times it makes the same mistake. I want to say “don’t be so stupid”, but instead I craft a tactful reprimand. Asked why, I’ve had to admit “I’m afraid it will be mean to me one day if I’m rude”.

“That means you think it’s human,” warns Yvonne Rogers, an expert on human-computer interaction at University College London. “That proves it actually works: it acts like a human and you respond like a human to it,” says Robert. Another AI expert suggests I train the bot to be more robust by insulting it from time to time. 

But we can get into trouble by focusing too much on “crafting the perfect personality” for AI, Sebo cautions. “I can’t fine-tune my husband’s personality and that is part of the beauty of being human,” she says. A world where we prefer custom-designed AI personalities to engaging with real people would be a loss. Three cheers for neuroticism — it’s just so very human.