{"id":129526,"date":"2025-11-11T12:33:10","date_gmt":"2025-11-11T12:33:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/129526\/"},"modified":"2025-11-11T12:33:10","modified_gmt":"2025-11-11T12:33:10","slug":"the-former-staffer-calling-out-openais-erotica-claims","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/129526\/","title":{"rendered":"The Former Staffer Calling Out OpenAI\u2019s Erotica Claims"},"content":{"rendered":"<p>When the history of <a href=\"https:\/\/www.wired.com\/tag\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">AI<\/a> is written, Steven Adler may just end up being its Paul Revere\u2014or at least, one of them\u2014when it comes to safety.<\/p>\n<p class=\"paywall\">Last month Adler, who spent four years in various safety roles at <a href=\"https:\/\/www.wired.com\/tag\/openai\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a>, wrote <a data-offer-url=\"https:\/\/www.nytimes.com\/2025\/10\/28\/opinion\/openai-chatgpt-safety.html\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.nytimes.com\/2025\/10\/28\/opinion\/openai-chatgpt-safety.html&quot;}\" href=\"https:\/\/www.nytimes.com\/2025\/10\/28\/opinion\/openai-chatgpt-safety.html\" rel=\"nofollow noopener\" target=\"_blank\">a piece<\/a> for The New York Times with a rather alarming title: \u201cI Led Product Safety at OpenAI. Don\u2019t Trust Its Claims About \u2018Erotica.\u2019\u201d In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health. \u201cNobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,\u201d he wrote. \u201cWe decided AI-powered erotica would have to wait.\u201d<\/p>\n<p class=\"paywall\">Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow \u201c<a href=\"https:\/\/www.wired.com\/story\/chatgpt-horny-era\/\" rel=\"nofollow noopener\" target=\"_blank\">erotica for verified adults<\/a>.\u201d In response, Adler wrote that he had \u201cmajor questions\u201d about whether OpenAI had done enough to, in Altman\u2019s words, \u201cmitigate\u201d the mental health concerns around how users interact with the company\u2019s chatbots.<\/p>\n<p class=\"paywall\">After reading Adler\u2019s piece, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and on this episode of <a href=\"https:\/\/www.wired.com\/podcast\/uncanny-valley\/\" rel=\"nofollow noopener\" target=\"_blank\">The Big Interview<\/a>, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he\u2019s set out for the companies providing chatbots to the world.<\/p>\n<p class=\"paywall\">This interview has been edited for length and clarity.<\/p>\n<p class=\"paywall\">KATIE DRUMMOND: Before we get going, I want to clarify two things. One, you are, unfortunately, not the same Steven Adler who played drums in Guns N\u2019 Roses, correct?<\/p>\n<p class=\"paywall\">STEVEN ADLER: Absolutely correct.<\/p>\n<p class=\"paywall\">OK, that is not you. And two, you have had a very long career working in technology, and more specifically in artificial intelligence. So, before we get into all of the things, tell us a little bit about your career and your background and what you&#8217;ve worked on.<\/p>\n<p class=\"paywall\">I&#8217;ve worked all across the AI industry, particularly focused on safety angles. Most recently, I worked for four years at OpenAI. I worked across, essentially, every dimension of the safety issues you can imagine: How do we make the products better for customers and rule out the risks that are already happening? And looking a bit further down the road, how will we know if AI systems are getting truly extremely dangerous?<\/p>\n","protected":false},"excerpt":{"rendered":"When the history of AI is written, Steven Adler may just end up being its Paul Revere\u2014or at&hellip;\n","protected":false},"author":2,"featured_media":129527,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,25963,111,139,69,620,49,1657,145,25964],"class_list":{"0":"post-129526","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-big-interview","12":"tag-new-zealand","13":"tag-newzealand","14":"tag-nz","15":"tag-openai","16":"tag-podcasts","17":"tag-sam-altman","18":"tag-technology","19":"tag-uncanny-valley-podcast"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/129526","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=129526"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/129526\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/129527"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=129526"}],"wp:term":[{"taxonomy":"category",
"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=129526"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=129526"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}