{"id":470343,"date":"2026-02-12T14:32:14","date_gmt":"2026-02-12T14:32:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/470343\/"},"modified":"2026-02-12T14:32:14","modified_gmt":"2026-02-12T14:32:14","slug":"openai-researcher-quits-warns-its-unprecedented-archive-of-human-candor-is-dangerous","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/470343\/","title":{"rendered":"OpenAI Researcher Quits, Warns Its Unprecedented &#8216;Archive of Human Candor&#8217; Is Dangerous"},"content":{"rendered":"<p>In a week of pretty public exits from artificial intelligence companies, Zo\u00eb Hitzig\u2019s case is, arguably, the most attention-grabbing. The former researcher at OpenAI divorced the company in an <a href=\"https:\/\/www.nytimes.com\/2026\/02\/11\/opinion\/openai-ads-chatgpt.html\" rel=\"nofollow noopener\" target=\"_blank\">op-ed in the New York Times<\/a> in which she warned not of some vague, unnamed crisis like Anthropic\u2019s <a href=\"https:\/\/gizmodo.com\/anthropic-ai-safety-researcher-mrinank-sharm-resigns-2000719865\" rel=\"nofollow noopener\" target=\"_blank\">recently departed safeguard lead<\/a>, but of something real and imminent: OpenAI\u2019s introduction of advertisements to ChatGPT and what information it will use to target those sponsored messages.<\/p>\n<p>There\u2019s an important distinction that Hitzig makes early in her op-ed: it\u2019s not advertising itself that is the issue, but rather the potential use of a vast amount of sensitive data that users have shared with ChatGPT without giving a second thought as to how it could be used to target them or who could potentially get their hands on it.<\/p>\n<p>\u201cFor several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,\u201d she wrote. 
\u201cPeople tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don\u2019t have the tools to understand, let alone prevent.\u201d<\/p>\n<p>OpenAI has at least acknowledged this concern. In a <a href=\"https:\/\/openai.com\/index\/our-approach-to-advertising-and-expanding-access\/\" rel=\"nofollow noopener\" target=\"_blank\">blog post<\/a> published earlier this year announcing its experiments with advertising, the company promised to keep a firewall between the conversations users have with ChatGPT and the ads the chatbot serves them: \u201cWe keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.\u201d<\/p>\n<p>Hitzig believes that is true\u2026 for now. But she has lost trust in the company to maintain that position over the long term, especially because nothing actually binds it to follow through on the promised privacy. The researcher argued that OpenAI is \u201cbuilding an economic engine that creates strong incentives to override its own rules,\u201d and warned the company may already be backing away from previous principles.<\/p>\n<p>For instance, OpenAI has <a href=\"https:\/\/archive.is\/LHdpX\" rel=\"nofollow noopener\" target=\"_blank\">stated<\/a> that it doesn\u2019t optimize ChatGPT to maximize engagement\u2014a metric of obvious interest to a company trying to keep people locked into conversations so it can serve them more ads. But a statement isn\u2019t binding, and it\u2019s not clear the company has actually lived up to this one. 
Last year, the company ran into a sycophancy problem with its model\u2014it became overly flattering to users and, at times, fed into delusional thinking that may have contributed to \u201c<a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/urban-survival\/202507\/the-emerging-problem-of-ai-psychosis\" rel=\"nofollow noopener\" target=\"_blank\">chatbot psychosis<\/a>\u201d and <a href=\"https:\/\/www.pbs.org\/newshour\/nation\/study-says-chatgpt-giving-teens-dangerous-advice-on-drugs-alcohol-and-suicide\" rel=\"nofollow noopener\" target=\"_blank\">self-harm<\/a>. Experts have <a href=\"https:\/\/techcrunch.com\/2025\/08\/25\/ai-sycophancy-isnt-just-a-quirk-experts-consider-it-a-dark-pattern-to-turn-users-into-profit\/\" rel=\"nofollow noopener\" target=\"_blank\">warned<\/a> that sycophancy isn\u2019t just a mistake in model tuning but an intentional way to get users hooked on talking to the chatbot.<\/p>\n<p>In a way, OpenAI is just speedrunning the Facebook playbook: promising users privacy for their data and then <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2011\/11\/facebook-settles-ftc-charges-it-deceived-consumers-failing-keep-privacy-promises?\" rel=\"nofollow noopener\" target=\"_blank\">rug-pulling them<\/a> when that data turns out to be quite valuable. 
Hitzig is trying to get out in front of the train before it picks up too much steam, recommending that OpenAI adopt a model that would actually guarantee protections for users\u2014either creating some form of real, binding independent oversight or placing user data under the control of a trust with a \u201clegal duty to act in users\u2019 interests.\u201d Either option sounds great, though Meta did the former by creating its Oversight Board and then <a href=\"https:\/\/www.platformer.news\/meta-oversight-board-5-years\/\" rel=\"nofollow noopener\" target=\"_blank\">routinely ignoring and flouting it<\/a>.<\/p>\n<p>Hitzig also, unfortunately, may face an uphill battle in getting people to care. Two decades of social media have created a sense of <a href=\"https:\/\/pro.morningconsult.com\/trend-setters\/social-media-users-privacy-concerns\" rel=\"nofollow noopener\" target=\"_blank\">privacy nihilism in the general public<\/a>. No one likes ads, but most people aren\u2019t bothered enough to do anything about them. <a href=\"https:\/\/www.forrester.com\/blogs\/what-consumers-actually-think-about-ads-in-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">Forrester found that 83% of people<\/a> surveyed would continue to use the free tier of ChatGPT despite the introduction of advertisements. Anthropic tried to score some points with the public this weekend, hammering OpenAI over its decision to insert ads into ChatGPT in a high-profile Super Bowl spot, but the response was more confusion than anything, <a href=\"https:\/\/www.adweek.com\/brand-marketing\/super-bowl-revealed-ai-messaging-crisis\/\" rel=\"nofollow noopener\" target=\"_blank\">per AdWeek<\/a>, which found the ad ranked in the bottom 3% of likability across all Super Bowl spots.<\/p>\n<p>Hitzig\u2019s warning is well-founded, and the concern behind it is real. 
But getting the public to care about their own privacy after years of being beaten into submission by algorithms is a real lift.<\/p>\n","protected":false},"excerpt":{"rendered":"In a week of pretty public exits from artificial intelligence companies, Zo\u00eb Hitzig\u2019s case is, arguably, the most&hellip;\n","protected":false},"author":2,"featured_media":467722,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,16324,278,61],"class_list":{"0":"post-470343","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-large-language-model","14":"tag-openai","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/470343","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=470343"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/470343\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/467722"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=470343"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=470343"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=470343"}],"curies":[{"name":"wp","href":"https:\
/\/api.w.org\/{rel}","templated":true}]}}