{"id":156966,"date":"2025-11-24T12:07:07","date_gmt":"2025-11-24T12:07:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/156966\/"},"modified":"2025-11-24T12:07:07","modified_gmt":"2025-11-24T12:07:07","slug":"a-research-leader-behind-chatgpts-mental-health-work-is-leaving-openai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/156966\/","title":{"rendered":"A Research Leader Behind ChatGPT\u2019s Mental Health Work Is Leaving OpenAI"},"content":{"rendered":"<p>An OpenAI safety research leader who helped shape <a href=\"https:\/\/www.wired.com\/tag\/chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT\u2019s<\/a> responses to users experiencing <a href=\"https:\/\/www.wired.com\/story\/ftc-complaints-chatgpt-ai-psychosis\/\" rel=\"nofollow noopener\" target=\"_blank\">mental health crises<\/a> announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.<\/p>\n<p class=\"paywall\">OpenAI spokesperson Kayla Wood confirmed Vallone\u2019s departure. Wood said OpenAI is actively looking for a replacement and that, in the interim, Vallone\u2019s team will report directly to Johannes Heidecke, the company\u2019s head of safety systems.<\/p>\n<p class=\"paywall\">Vallone\u2019s departure comes as OpenAI faces growing scrutiny over how its flagship product responds to <a href=\"https:\/\/www.wired.com\/story\/chatgpt-psychosis-and-self-harm-update\/\" rel=\"nofollow noopener\" target=\"_blank\">users in distress<\/a>. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. 
Some of the lawsuits claim ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.<\/p>\n<p class=\"paywall\">Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and improve the chatbot\u2019s responses. Model policy is one of the teams leading that work, spearheading an <a data-offer-url=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/&quot;}\" href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" rel=\"nofollow noopener\" target=\"_blank\">October report<\/a> detailing the company\u2019s progress and consultations with more than 170 mental health experts.<\/p>\n<p class=\"paywall\">In the report, OpenAI said hundreds of thousands of ChatGPT users may show <a href=\"https:\/\/www.wired.com\/story\/chatgpt-psychosis-and-self-harm-update\/\" rel=\"nofollow noopener\" target=\"_blank\">signs of experiencing a manic or psychotic crisis<\/a> every week, and that more than a million people \u201chave conversations that include explicit indicators of potential suicidal planning or intent.\u201d OpenAI said in the report that an update to GPT-5 reduced undesirable responses in these conversations by 65 to 80 percent.<\/p>\n<p class=\"paywall\">\u201cOver the past year, I led OpenAI\u2019s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?\u201d wrote Vallone in a <a data-offer-url=\"https:\/\/www.linkedin.com\/posts\/activity-7388702372509913088-WYKW?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACoP3XgBeulmY2c4Z8VFNF9GxGTf_WUCkJI\" 
class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.linkedin.com\/posts\/activity-7388702372509913088-WYKW?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACoP3XgBeulmY2c4Z8VFNF9GxGTf_WUCkJI&quot;}\" href=\"https:\/\/www.linkedin.com\/posts\/activity-7388702372509913088-WYKW?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACoP3XgBeulmY2c4Z8VFNF9GxGTf_WUCkJI\" rel=\"nofollow noopener\" target=\"_blank\">post<\/a> on LinkedIn.<\/p>\n<p class=\"paywall\">Vallone did not respond to WIRED\u2019s request for comment.<\/p>\n<p class=\"paywall\">Making ChatGPT enjoyable to chat with, but not overly flattering, is a core tension at OpenAI. The company is aggressively trying to expand ChatGPT\u2019s user base, which now includes more than 800 million people a week, to compete with AI chatbots from Google, Anthropic, and Meta.<\/p>\n<p class=\"paywall\">After OpenAI released GPT-5 in August, users pushed back, arguing that the new model was <a href=\"https:\/\/www.wired.com\/story\/openai-gpt-5-backlash-sam-altman\/\" rel=\"nofollow noopener\" target=\"_blank\">surprisingly cold<\/a>. 
In the latest update to ChatGPT, the company said it had significantly reduced sycophancy while maintaining the chatbot\u2019s \u201cwarmth.\u201d<\/p>\n<p class=\"paywall\">Vallone\u2019s exit follows an <a data-offer-url=\"https:\/\/techcrunch.com\/2025\/09\/05\/openai-reorganizes-research-team-behind-chatgpts-personality\/\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/techcrunch.com\/2025\/09\/05\/openai-reorganizes-research-team-behind-chatgpts-personality\/&quot;}\" href=\"https:\/\/techcrunch.com\/2025\/09\/05\/openai-reorganizes-research-team-behind-chatgpts-personality\/\" rel=\"nofollow noopener\" target=\"_blank\">August reorganization of another group<\/a>, model behavior, which also focused on ChatGPT\u2019s responses to distressed users. Its former leader, Joanne Jang, left that role to start a new team exploring novel human\u2013AI interaction methods. The remaining model behavior staff were moved under post-training lead Max Schwarzer.<\/p>\n","protected":false},"excerpt":{"rendered":"An OpenAI safety research leader who helped shape ChatGPT\u2019s responses to users experiencing mental health crises announced 
her&hellip;\n","protected":false},"author":2,"featured_media":156967,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,2670,61,60,1682,89,2212,80],"class_list":{"0":"post-156966","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-chatgpt","12":"tag-ie","13":"tag-ireland","14":"tag-openai","15":"tag-research","16":"tag-safety","17":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/156966","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=156966"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/156966\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/156967"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=156966"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=156966"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=156966"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}