<h1>OpenAI Is Hiring for a Position That Sounds Horrifying</h1>
<p>“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” OpenAI CEO Sam Altman <a href="https://x.com/sama/status/2004939524216910323" rel="nofollow">wrote on X</a> in his announcement of the “head of preparedness” job at OpenAI on Saturday.</p>
<p>In exchange for <a href="https://openai.com/careers/head-of-preparedness-san-francisco/" rel="nofollow noopener" target="_blank">$555,000 per year</a>, according to OpenAI’s job ad, the head of preparedness is supposed to “expand, strengthen, and guide” the existing preparedness program within OpenAI’s safety systems department. This side of OpenAI builds the safeguards that, in theory, make OpenAI’s models “behave as intended in real-world settings.”</p>
<p>But hey, wait a minute: are they saying OpenAI’s models behave as intended in real-world settings now?
In 2025, ChatGPT <a href="https://www.thomsonreuters.com/en-us/posts/technology/genai-hallucinations/" rel="nofollow noopener" target="_blank">continued to hallucinate in legal filings</a> and attracted <a href="https://www.wired.com/story/ftc-complaints-chatgpt-ai-psychosis/" rel="nofollow noopener" target="_blank">hundreds of FTC complaints</a>, including complaints that it was triggering mental health crises in users, and it evidently <a href="https://www.wired.com/story/google-and-openais-chatbots-can-strip-women-in-photos-down-to-bikinis/" rel="nofollow noopener" target="_blank">turned pictures of clothed women into bikini deepfakes</a>. Sora’s ability to make videos of figures like Martin Luther King, Jr. had to be <a href="https://www.npr.org/2025/10/17/nx-s1-5577869/sora-block-videos-mlk" rel="nofollow noopener" target="_blank">revoked</a> because users were abusing it to make revered historical figures <a href="https://www.reddit.com/r/OpenAI/comments/1nyyyez/mlk_asks_you_if_youve_ever_had_a_dream/" rel="nofollow noopener" target="_blank">say basically anything</a>.</p>
<p>When cases related to problems with OpenAI products reach the courts, as with the wrongful death suit filed by the family of Adam Raine (who, it is alleged, received advice and encouragement from ChatGPT that led to his death), there’s a legal argument to be made that users were abusing OpenAI’s products. In November, a filing from OpenAI’s lawyers cited <a href="https://gizmodo.com/openai-court-filing-cites-adam-raines-chatgpt-rule-violations-as-potential-cause-of-his-suicide-2000691765" rel="nofollow noopener" target="_blank">rule violations as a potential cause of Raine’s death</a>.</p>
<p>Whether you buy the abuse argument or not, it’s clearly a big part of the way OpenAI makes sense of what its products are doing in society.
Altman acknowledges in his X post about the head of preparedness job that the company’s models can impact people’s mental health and can find security vulnerabilities. We are, he says, “entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”</p>
<p>After all, if the goal were purely to never cause any harm, the quickest way to ensure that would be to remove ChatGPT and Sora from the market altogether.</p>
<p>The head of preparedness at OpenAI, then, is someone who will thread this needle and “[o]wn OpenAI’s preparedness strategy end-to-end,” figuring out how to evaluate the models for unwanted abilities and designing ways to mitigate them. The ad says this person will have to “evolve the preparedness framework as new risks, capabilities, or external expectations emerge.” This can only mean figuring out new potential ways OpenAI products might harm people or society, and coming up with the rubric for allowing the products to exist while demonstrating, presumably, that the risks have been dulled enough that OpenAI isn’t legally liable for the seemingly inevitable future “downsides.”</p>
<p>It would be bad enough having to do all this for a company that’s treading water, but OpenAI has to take drastic steps to bring in revenue and release cutting-edge products in a hurry. In an interview last month, Altman <a href="https://techcrunch.com/2025/11/02/sam-altman-says-enough-to-questions-about-openais-revenue/" rel="nofollow noopener" target="_blank">strongly implied</a> that he would take the company’s revenue from where it is now (apparently somewhere north of $13 billion per year) to $100 billion in less than two years.
Altman said his company’s “consumer device business will be a significant and important thing,” and that “AI that can automate science will create huge value.”</p>
<p>So if you would like to oversee “mitigation design” across new versions of OpenAI’s existing products, along with new physical gadgets and platforms that don’t exist yet but are supposed to do things like “automate science,” all while the CEO is breathing down your neck about needing to make approximately <a href="https://companiesmarketcap.com/walt-disney/revenue/" rel="nofollow noopener" target="_blank">the same annual revenue as Walt Disney</a> the year after next, enjoy being the head of preparedness at OpenAI. Try not to fuck up the entire world at your new job.</p>