In a week of very public exits from artificial intelligence companies, Zoë Hitzig's case is arguably the most attention-grabbing. The former OpenAI researcher broke with the company in a New York Times op-ed in which she warned not of some vague, unnamed crisis, as Anthropic's recently departed safeguards lead did, but of something real and imminent: OpenAI's introduction of advertisements to ChatGPT, and the question of what information it will use to target those sponsored messages.
Hitzig makes an important distinction early in her op-ed: the issue isn't advertising itself, but the potential use of the vast trove of sensitive data users have shared with ChatGPT, often without a second thought about how it could be used to target them or who might get their hands on it.
“For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,” she wrote. “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
OpenAI has at least acknowledged this concern. In a blog post published earlier this year announcing that it would experiment with advertising, the company promised to keep a firewall between users' conversations with ChatGPT and the ads the chatbot serves them: "We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers."
Hitzig believes that is true… for now. But she has lost trust in the company to maintain that position over the long term, especially because nothing actually binds it to the promised privacy. The researcher argued that OpenAI is "building an economic engine that creates strong incentives to override its own rules," and warned that the company may already be backing away from its earlier principles.
For instance, OpenAI has stated that it doesn't optimize ChatGPT to maximize engagement, a metric of obvious interest to a company trying to keep people locked into conversations so it can serve them more ads. But a statement isn't binding, and it's not clear the company has actually lived up to it. Last year, the company ran into a sycophancy problem with its model: it became overly flattering to users and, at times, fed into delusional thinking that may have contributed to "chatbot psychosis" and self-harm. Experts have warned that sycophancy isn't just a mistake in model tuning but an intentional way to get users hooked on talking to the chatbot.
In a way, OpenAI is just speedrunning the Facebook model: promise users privacy over their data, then rug-pull them when that data turns out to be quite valuable. Hitzig is trying to get in front of the train before it picks up too much steam, recommending that OpenAI adopt a model that would actually guarantee protections for users, either by creating real, binding independent oversight or by placing user data under the control of a trust with a "legal duty to act in users' interests." Either option sounds great, though Meta tried the former, creating the Meta Oversight Board and then routinely ignoring and flouting it.
Hitzig also, unfortunately, may face an uphill battle in getting people to care. Two decades of social media have instilled a sense of privacy nihilism in the general public. No one likes ads, but most people aren't bothered enough by them to do anything. Forrester found that 83% of people surveyed would continue to use the free tier of ChatGPT despite the introduction of advertisements. Anthropic tried to score some points with the public this weekend by hammering OpenAI over its decision to insert ads into ChatGPT in a high-profile Super Bowl spot, but the response was more confusion than anything: per AdWeek, the ad ranked in the bottom 3% for likability across all Super Bowl spots.
Hitzig's warning is well-founded, and her concern is real. But getting the public to care about its own privacy after years of being beaten into submission by algorithms is a heavy lift.