{"id":82145,"date":"2025-10-18T04:42:06","date_gmt":"2025-10-18T04:42:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/82145\/"},"modified":"2025-10-18T04:42:06","modified_gmt":"2025-10-18T04:42:06","slug":"silicon-valley-spooks-the-ai-safety-advocates","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/82145\/","title":{"rendered":"Silicon Valley spooks the AI safety advocates"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Silicon Valley leaders including White House AI &amp; Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are either acting in their own interest or in that of billionaire puppet masters behind the scenes.<\/p>\n<p class=\"wp-block-paragraph\">AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley\u2019s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms <a href=\"https:\/\/techcrunch.com\/2024\/10\/02\/the-lawmaker-behind-californias-vetoed-ai-bill-sb-1047-has-harsh-words-for-silicon-valley\/\" rel=\"nofollow noopener\" target=\"_blank\">spread rumors<\/a> that a California AI safety bill,<a href=\"https:\/\/techcrunch.com\/2024\/09\/29\/gov-newsom-vetoes-californias-controversial-ai-bill-sb-1047\/\" rel=\"nofollow noopener\" target=\"_blank\"> SB 1047<\/a>, would send startup founders to jail. 
The Brookings Institution labeled the rumor as one of many \u201c<a rel=\"nofollow noopener\" href=\"https:\/\/www.brookings.edu\/articles\/misrepresentations-of-californias-ai-safety-bill\/\" target=\"_blank\">misrepresentations<\/a>\u201d about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.<\/p>\n<p class=\"wp-block-paragraph\">Whether or not Sacks and OpenAI intended to intimidate critics, their actions have succeeded in scaring several AI safety advocates. Many nonprofit leaders whom TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.<\/p>\n<p class=\"wp-block-paragraph\">The controversy underscores Silicon Valley\u2019s growing tension between building AI responsibly and building it to be a massive consumer product \u2014 a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week\u2019s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI\u2019s approach to erotica in ChatGPT.<\/p>\n<p class=\"wp-block-paragraph\">On Tuesday, Sacks wrote a <a rel=\"nofollow\" href=\"https:\/\/x.com\/DavidSacks\/status\/1978145266269077891\">post on X<\/a> alleging that Anthropic \u2014 which has <a rel=\"nofollow noopener\" href=\"https:\/\/www.axios.com\/2025\/05\/28\/ai-jobs-white-collar-unemployment-anthropic\" target=\"_blank\">raised concerns<\/a> over AI\u2019s ability to contribute to unemployment, cyberattacks, and catastrophic harms to society \u2014 is simply fearmongering to get laws passed that will benefit itself and drown out smaller startups in paperwork. 
Anthropic was the only major AI lab to <a href=\"https:\/\/techcrunch.com\/2025\/09\/08\/anthropic-endorses-californias-ai-safety-bill-sb-53\/\" rel=\"nofollow noopener\" target=\"_blank\">endorse California\u2019s Senate Bill 53 (SB 53)<\/a>, a bill, signed into law last month, that sets safety reporting requirements for large AI companies.<\/p>\n<p class=\"wp-block-paragraph\">Sacks was responding to a <a rel=\"nofollow noopener\" href=\"https:\/\/importai.substack.com\/p\/import-ai-431-technological-optimism\" target=\"_blank\">viral essay<\/a> from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I certainly took it as a genuine account of a technologist\u2019s reservations about his products, but Sacks didn\u2019t see it that way.<\/p>\n<p lang=\"en\" dir=\"ltr\">Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem. <a rel=\"nofollow\" href=\"https:\/\/t.co\/C5RuJbVi4P\">https:\/\/t.co\/C5RuJbVi4P<\/a><\/p>\n<p>\u2014 David Sacks (@DavidSacks) <a rel=\"nofollow noopener\" href=\"https:\/\/twitter.com\/DavidSacks\/status\/1978145266269077891?ref_src=twsrc%5Etfw\" target=\"_blank\">October 14, 2025<\/a><\/p>\n<p class=\"wp-block-paragraph\">Sacks said Anthropic is running a \u201csophisticated regulatory capture strategy,\u201d though it\u2019s worth noting that a truly sophisticated strategy probably wouldn\u2019t involve making an enemy out of the federal government. 
In a <a rel=\"nofollow\" href=\"https:\/\/x.com\/DavidSacks\/status\/1978965238155239559\">follow-up post on X<\/a>, Sacks noted that Anthropic has positioned \u201citself consistently as a foe of the Trump administration.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Also this week, OpenAI\u2019s chief strategy officer, Jason Kwon, wrote a <a rel=\"nofollow\" href=\"https:\/\/x.com\/jasonkwon\/status\/1976762546041634878\">post on X<\/a> explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI \u2014 over concerns that the ChatGPT-maker has veered away from its nonprofit mission \u2014 OpenAI found it suspicious that several organizations also raised opposition to its restructuring. 
Encode filed an amicus brief in support of Musk\u2019s lawsuit, and other nonprofits spoke out publicly against OpenAI\u2019s restructuring.<\/p>\n<p lang=\"en\" dir=\"ltr\">There\u2019s quite a lot more to the story than this.<\/p>\n<p>As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit.<\/p>\n<p>Encode, the organization for which <a rel=\"nofollow noopener\" href=\"https:\/\/twitter.com\/_NathanCalvin?ref_src=twsrc%5Etfw\" target=\"_blank\">@_NathanCalvin<\/a>  serves as the General Counsel, was one\u2026 <a rel=\"nofollow\" href=\"https:\/\/t.co\/DiBJmEwtE4\">https:\/\/t.co\/DiBJmEwtE4<\/a><\/p>\n<p>\u2014 Jason Kwon (@jasonkwon) <a rel=\"nofollow noopener\" href=\"https:\/\/twitter.com\/jasonkwon\/status\/1976762546041634878?ref_src=twsrc%5Etfw\" target=\"_blank\">October 10, 2025<\/a><\/p>\n<p class=\"wp-block-paragraph\">\u201cThis raised transparency questions about who was funding them and whether there was any coordination,\u201d said Kwon.<\/p>\n<p class=\"wp-block-paragraph\">NBC News reported this week that OpenAI sent broad subpoenas to Encode and <a rel=\"nofollow noopener\" href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348\" target=\"_blank\">six other nonprofits<\/a> that criticized the company, asking for their communications related to two of OpenAI\u2019s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.<\/p>\n<p class=\"wp-block-paragraph\">One prominent AI safety leader told TechCrunch that there\u2019s a growing split between OpenAI\u2019s government affairs team and its research organization. 
While OpenAI\u2019s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI\u2019s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.<\/p>\n<p class=\"wp-block-paragraph\">OpenAI\u2019s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a <a rel=\"nofollow\" href=\"https:\/\/x.com\/jachiam0\/status\/1976690339546112098\">post on X<\/a> this week.<\/p>\n<p class=\"wp-block-paragraph\"> \u201cAt what is possibly a risk to my whole career I will say: this doesn\u2019t seem great,\u201d said Achiam.<\/p>\n<p class=\"wp-block-paragraph\">Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI\u2019s safety practices, <a href=\"https:\/\/techcrunch.com\/2025\/07\/16\/openai-and-anthropic-researchers-decry-reckless-safety-culture-at-elon-musks-xai\/\" rel=\"nofollow noopener\" target=\"_blank\">or lack thereof<\/a>.<\/p>\n<p class=\"wp-block-paragraph\">\u201cOn OpenAI\u2019s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,\u201d said Steinhauser. \u201cFor Sacks, I think he\u2019s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Sriram Krishnan, the White House\u2019s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a <a rel=\"nofollow\" href=\"https:\/\/x.com\/sriramk\/status\/1978470229056364797\">social media post <\/a>of his own, calling AI safety advocates out of touch. 
He urged AI safety organizations to talk to \u201cpeople in the real world using, selling, adopting AI in their homes and organizations.\u201d<\/p>\n<p class=\"wp-block-paragraph\">A recent Pew study found that roughly half of Americans are <a rel=\"nofollow noopener\" href=\"https:\/\/www.pewresearch.org\/global\/2025\/10\/15\/how-people-around-the-world-view-ai\/\" target=\"_blank\">more concerned than excited<\/a> about AI, but it\u2019s unclear what worries them exactly. Another recent study went into more detail and found that American voters care more about <a rel=\"nofollow noopener\" href=\"https:\/\/neurosciencenews.com\/ai-harm-fear-psychology-28708\/\" target=\"_blank\">job losses and deepfakes<\/a> than catastrophic risks caused by AI, which the AI safety movement is largely focused on.<\/p>\n<p class=\"wp-block-paragraph\">Addressing these safety concerns could come at the expense of the AI industry\u2019s rapid growth \u2014 a trade-off that worries many in Silicon Valley. With AI investment propping up much of America\u2019s economy, the fear of over-regulation is understandable. <\/p>\n<p class=\"wp-block-paragraph\">But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. 
Silicon Valley\u2019s attempts to fight back against safety-focused groups may be a sign that they\u2019re working.<\/p>\n","protected":false},"excerpt":{"rendered":"Silicon Valley leaders including White House AI &amp; Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason&hellip;\n","protected":false},"author":2,"featured_media":82146,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,43595,343,344,4670,4193,85,46,55332,1748,57055,125],"class_list":{"0":"post-82145","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-safety","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-california","13":"tag-chatgpt","14":"tag-il","15":"tag-israel","16":"tag-nonprofits","17":"tag-openai","18":"tag-sb-53","19":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/82145","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=82145"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/82145\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/82146"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=82145"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories
?post=82145"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=82145"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}