Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next three weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.
In this conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.
You can see earlier discussions on the US vs China, global energy constraints and the future of war here.
Eileen Guo writes
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: using platforms such as Character.AI, Replika or Meta AI to create personalised chatbots of the ideal friend, romantic partner, parent, therapist or any other persona you can dream up.
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and humanlike an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and chatbots have been accused of pushing some people towards harmful behaviours — including, in a few extreme examples, suicide.
Some US states are taking notice and starting to regulate AI companions. New York requires AI companion companies to create safeguards and mandatory reporting for expressions of suicidal ideation, and last month, California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups.
But tellingly, one area the laws fail to address is user privacy.
This is despite the fact that AI companions, even more so than other types of generative AI, depend on people sharing deeply personal information — from their day-to-day routines and innermost thoughts to questions they might not feel comfortable asking real people.
After all, the more that users tell their AI companions, the better the bots become at keeping individuals engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices . . . to maximise user engagement”.
Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their large language models. Consider how venture capital firm Andreessen Horowitz explained it in 2023:
“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”
This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads (the only one that stated that it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions).
All of this means that the privacy risks posed by these AI companions are, in a sense, required: they are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way that AI chatbots collect and store so much personal information in one place.
So, is it possible to have pro-social and privacy-protecting AI companions? That’s an open question.
What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?
Melissa Heikkilä replies
Thanks, Eileen. I agree with you. If social media was a privacy nightmare, AI chatbots put the problem on steroids.
In many ways, an AI chatbot feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.
Companies are optimising their AI models for engagement by designing them to be as humanlike as possible. But AI developers have several other ways of keeping us hooked. The first is sycophancy, or the tendency of chatbots to be overly agreeable.
This feature stems from the way the language models behind chatbots are trained with reinforcement learning. Human data labellers rate the answers generated by the model as either acceptable or not, and those ratings teach the model how to behave.
Because people generally like answers that are agreeable, such responses are weighted more heavily in training.
AI companies say they use this technique because it helps models be more helpful. But it creates a perverse incentive.
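To make that incentive concrete, here is a minimal toy simulation in Python. Everything in it is invented for illustration (the two response styles, the approval rates, the up-weighting rule); it is not any company’s actual training pipeline, but it shows how repeatedly up-weighting whatever raters approve of lets a flattering style crowd out a more candid one.

```python
import random
from collections import Counter

random.seed(0)

# Two hypothetical response styles the model can produce.
STYLES = ["agreeable", "candid"]

# Assumed labeller behaviour: agreeable answers get approved more often.
APPROVAL_RATE = {"agreeable": 0.8, "candid": 0.6}

# The model starts out equally likely to produce either style.
weights = {style: 1.0 for style in STYLES}

counts = Counter()
for _ in range(10_000):
    # Sample a response style in proportion to its current weight.
    style = random.choices(STYLES, weights=[weights[s] for s in STYLES])[0]
    counts[style] += 1
    # A labeller rates the answer; approved answers are up-weighted,
    # so the style that pleases raters is produced more often next time.
    if random.random() < APPROVAL_RATE[style]:
        weights[style] *= 1.001

print("final weights:", {s: round(w, 2) for s, w in weights.items()})
print("responses produced:", counts)
```

Run for a few thousand simulated ratings, the “agreeable” style ends up with a far larger weight and a growing share of the responses, which is the dynamic behind sycophancy.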
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetise these conversations. OpenAI recently told us it was looking at a number of ways to meet its $1tn spending pledges, including advertising and shopping features.
AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories and vaccine scepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers — one that is more manipulative than anything we have seen before.
By default, chatbot users are opted into data collection. Opt-outs place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.
We are all part of this phenomenon, whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.
Companies are sitting on treasure troves of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us by inferring our age, location, gender and income level.
We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sold to the highest bidder once again.
Eileen Guo responds
I think the comparison between AI companions and social media is both apt and concerning.
AI companions are more intimate and even better optimised for engagement than social media, making it more likely that people will offer up more personal information. And here in the US, we are far from solving the privacy issues already presented by social networks.
Without regulation, AI companies themselves are not following privacy best practices either. One recent study found that major AI companies train their LLMs on user chat data by default, while several don’t offer opt-out mechanisms at all.
In an ideal world, the risks of companion AI would give more impetus to the privacy fight — but I don’t see any evidence that this is happening.
Further reading
In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.