{"id":151381,"date":"2025-09-12T11:49:09","date_gmt":"2025-09-12T11:49:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/151381\/"},"modified":"2025-09-12T11:49:09","modified_gmt":"2025-09-12T11:49:09","slug":"the-debate-behind-sb-53-the-california-bill-trying-to-prevent-ai-from-building-nukes","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/151381\/","title":{"rendered":"The debate behind SB 53, the California bill trying to prevent AI from building nukes"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">When it comes to AI, as California goes, so <a href=\"https:\/\/www.politico.com\/story\/2016\/09\/as-maine-goes-so-goes-the-nation-sept-8-1958-227727\" rel=\"nofollow noopener\" target=\"_blank\">goes the nation<\/a>. The biggest state in the US by population is also the central hub of AI innovation for the entire globe, <a href=\"https:\/\/www.gov.ca.gov\/2025\/03\/12\/icymi-california-is-home-to-32-of-the-top-50-ai-companies\/\" rel=\"nofollow noopener\" target=\"_blank\">home<\/a> to 32 of the world\u2019s <a href=\"https:\/\/www.forbes.com\/lists\/ai50\/\" rel=\"nofollow noopener\" target=\"_blank\">top 50 AI companies<\/a>. 
That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on <a href=\"https:\/\/escholarship.org\/uc\/item\/94g761c6#:~:text=California%20has%20led%20the%20country,the%20focus%20of%20this%20essay.\" rel=\"nofollow noopener\" target=\"_blank\">environmental<\/a>, <a href=\"https:\/\/www.gov.ca.gov\/2025\/09\/01\/happy-labor-day-california-is-1-for-workers-1-economy-in-the-nation\/\" rel=\"nofollow noopener\" target=\"_blank\">labor<\/a>, and <a href=\"https:\/\/oag.ca.gov\/privacy\/ccpa#:~:text=The%20CCPA%20requires%20business%20privacy,the%20Right%20to%20Non%2DDiscrimination.\" rel=\"nofollow noopener\" target=\"_blank\">consumer protection<\/a> regulations \u2014 and more recently, AI as well. Now, following the dramatic <a href=\"https:\/\/www.washingtonpost.com\/politics\/2025\/07\/01\/ai-moratorium-big-beautiful-bill\/\" rel=\"nofollow noopener\" target=\"_blank\">defeat<\/a> of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to <a href=\"https:\/\/carnegieendowment.org\/research\/2025\/07\/state-ai-law-whats-coming-now-that-the-federal-moratorium-is-dead?lang=en\" rel=\"nofollow noopener\" target=\"_blank\">set<\/a> the stage for the rest of the country\u2019s AI laws.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">This week, the California State Assembly is set to vote on <a href=\"https:\/\/calmatters.digitaldemocracy.org\/bills\/ca_202520260sb53\" rel=\"nofollow noopener\" target=\"_blank\">SB 53<\/a>, a bill that would require transparency reports from the developers of highly powerful, \u201c<a 
href=\"https:\/\/www.gov.uk\/government\/publications\/frontier-ai-capabilities-and-risks-discussion-paper\/frontier-ai-capabilities-and-risks-discussion-paper\" rel=\"nofollow noopener\" target=\"_blank\">frontier<\/a>\u201d AI models. The models targeted represent the cutting edge of AI \u2014 extremely adept generative systems that require massive amounts of data and computing power, like OpenAI\u2019s ChatGPT, Google\u2019s Gemini, xAI\u2019s Grok, and Anthropic\u2019s Claude. The bill, which has already passed the state Senate, must pass the California State Assembly before it goes to the governor to either be vetoed or signed into law.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">AI can <a href=\"https:\/\/www.vox.com\/future-perfect\/402418\/artificial-intelligence-good-robot-podcast-openai-chatgpt-ethics-discrimination\" rel=\"nofollow noopener\" target=\"_blank\">offer<\/a> tremendous benefits, but as the bill is meant to address, it\u2019s not without risks. And while there is no shortage of existing risks from issues like <a href=\"https:\/\/www.vox.com\/today-explained-podcast\/459234\/ai-jobs-market-unemployment-artificial-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">job displacement<\/a> and <a href=\"https:\/\/www.vox.com\/technology\/23738987\/racism-ai-automated-bias-discrimination-algorithm\" rel=\"nofollow noopener\" target=\"_blank\">bias<\/a>, SB 53 focuses on possible \u201ccatastrophic risks\u201d from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks represent widespread disasters that could plausibly threaten human civilization at local, national, and global levels. 
These are the kinds of AI-driven disasters that have not yet occurred, as opposed to already-realized, more personal harms like AI deepfakes.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Exactly what constitutes a catastrophic risk is up for debate, but SB 53 <a href=\"https:\/\/calmatters.digitaldemocracy.org\/bills\/ca_202520260sb53\" rel=\"nofollow noopener\" target=\"_blank\">defines<\/a> it as a \u201cforeseeable and material risk\u201d of an event that causes more than 50 casualties or over $1 billion in damages, with a frontier model playing a meaningful role in contributing to it. How fault is determined in practice would be up to the courts to interpret. It\u2019s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">By itself, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and the use of AI-enabled chemical, biological, radiological, and nuclear weapons. 
But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">SB 53 is the third state-level bill to specifically focus on regulating AI\u2019s catastrophic risks, after California\u2019s SB 1047, which passed the legislature only to be <a href=\"https:\/\/www.vox.com\/future-perfect\/369628\/ai-safety-bill-sb-1047-gavin-newsom-california\" rel=\"nofollow noopener\" target=\"_blank\">vetoed<\/a> by the governor, and <a href=\"https:\/\/assembly.state.ny.us\/mem\/Alex-Bores\/story\/114363\" rel=\"nofollow noopener\" target=\"_blank\">New York\u2019s Responsible AI Safety and Education (RAISE) Act<\/a>, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul\u2019s approval.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report \u201ccritical safety incidents\u201d to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. 
SB 53 aims to hold companies publicly accountable for their AI safety commitments, with a financial penalty of up to $1 million per violation.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In many ways, SB 53 is the spiritual successor to <a href=\"https:\/\/www.vox.com\/future-perfect\/361562\/california-ai-bill-scott-wiener-sb-1047\" rel=\"nofollow noopener\" target=\"_blank\">SB 1047<\/a>, also introduced by Wiener.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Both cover large models that are trained at 10^26 FLOPS, a measure of computing power <a href=\"https:\/\/thebulletin.org\/2024\/06\/california-ai-bill-becomes-a-lightning-rod-for-safety-advocates-and-developers-alike\/\" rel=\"nofollow noopener\" target=\"_blank\">used<\/a> in a variety of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">While SB 1047 aimed to <a href=\"https:\/\/www.vox.com\/future-perfect\/355212\/ai-artificial-intelligence-1047-bill-safety-liability\" rel=\"nofollow noopener\" target=\"_blank\">hold<\/a> companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. 
It focuses squarely on the heavy-hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThe science of how to make AI safe is rapidly evolving, and it\u2019s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,\u201d said Thomas Woodside, the co-founder of <a href=\"https:\/\/secureaiproject.org\" rel=\"nofollow noopener\" target=\"_blank\">Secure AI Project,<\/a> an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. \u201cThis light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he <a href=\"https:\/\/www.gov.ca.gov\/2024\/09\/29\/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians\/\" rel=\"nofollow noopener\" target=\"_blank\">commissioned<\/a> a working group focusing solely on frontier models. The resulting report by the group <a href=\"https:\/\/www.cafrontieraigov.org\" rel=\"nofollow noopener\" target=\"_blank\">provided<\/a> the foundation for SB 53. 
\u201cI would guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,\u201d said Dean Ball \u2014 former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter \u2014 to <a href=\"https:\/\/www.transformernews.ai\/p\/sb-53-california-ai-might-actually-pass-newsom\" rel=\"nofollow noopener\" target=\"_blank\">Transformer<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But several industry organizations have rallied in opposition, arguing that additional compliance regulation would be costly and redundant, since AI companies are already incentivized to avoid catastrophic harms. OpenAI has <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/anthropic-backs-californias-sb-53-ai-bill-rcna229908\" rel=\"nofollow noopener\" target=\"_blank\">lobbied<\/a> against it, and the technology trade group Chamber of Progress <a href=\"https:\/\/progresschamber.org\/insights\/why-californias-sb-53-still-gets-ai-regulation-wrong\/\" rel=\"nofollow noopener\" target=\"_blank\">argues<\/a> that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThose compliance costs are merely the beginning,\u201d Neil Chilson, head of AI policy at the <a href=\"https:\/\/abundance.institute\" rel=\"nofollow noopener\" target=\"_blank\">Abundance Institute,<\/a> told me over email. 
\u201cThe bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">By contrast, Anthropic enthusiastically <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-is-endorsing-sb-53\" rel=\"nofollow noopener\" target=\"_blank\">endorsed<\/a> the bill in its current state on Monday. \u201cThe question isn\u2019t whether we need AI governance \u2013 it\u2019s whether we develop it thoughtfully today or reactively tomorrow,\u201d the company explained in a blog post. \u201cSB 53 offers a solid path toward the former.\u201d (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the vast majority of these companies are based in California, and nearly all do business there, the state\u2019s legislation matters for the entire country.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cA federally led transparency approach is far, far, far preferable to the multi-state alternative,\u201d where a patchwork of state regulations can conflict with each other, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. 
But \u201cI love that the bill has a provision that would allow companies to defer to a future alternative federal standard.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThe natural question is whether a federal approach can even happen,\u201d Mittelsteadt continued. \u201cIn my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It\u2019s been less than 3 years since ChatGPT was released. That is hardly a lifetime in public policy.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But in a time of federal gridlock, frontier AI advancements won\u2019t wait for Washington.<\/p>\n<p>The catastrophic risk divide<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The bill\u2019s focus on, and framing of, catastrophic risks is not without controversy.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of <a href=\"https:\/\/www.tobyord.com\/writing\/the-precipice-revisited\" rel=\"nofollow noopener\" target=\"_blank\">existential risks<\/a>, which threaten humanity\u2019s actual survival or else permanently reduce our potential as a species. 
The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But if existential risks are clear \u2014 the end of the world, or at least as we know it \u2014 what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are <a href=\"https:\/\/www.vox.com\/future-perfect\/23298870\/effective-altruism-longtermism-will-macaskill-future\" rel=\"nofollow noopener\" target=\"_blank\">longtermists<\/a>, people focused primarily on humanity\u2019s far future, who place a premium on things like <a href=\"https:\/\/www.vox.com\/future-perfect\/459050\/space-medicine-astronauts-health-longevity-mars-science\" rel=\"nofollow noopener\" target=\"_blank\">multiplanetary expansion<\/a> for human survival. They\u2019re often chiefly concerned by risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with existing risks, like climate change, mosquito vector-borne disease, or algorithmic bias. 
These camps can blend into one another \u2014 neartermists would also like to avoid getting <a href=\"https:\/\/www.newsweek.com\/killer-asteroid-impact-odds-earth-causes-death-2112762\" rel=\"nofollow noopener\" target=\"_blank\">hit<\/a> by asteroids that could wipe out a city, and longtermists don\u2019t dismiss risks like climate change \u2014 and the best way to think of them is like two ends of a spectrum rather than a strict binary.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, <a href=\"https:\/\/murat-durmus.medium.com\/the-difference-between-ai-safety-ai-ethics-and-responsible-ai-8296306af427\" rel=\"nofollow noopener\" target=\"_blank\">respectively<\/a>. AI ethics is about the moral implications of the ways the technology is deployed, including things like algorithmic bias and human rights, in the present. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox\u2019s Julia Longoria reported in the <a href=\"https:\/\/www.vox.com\/future-perfect\/402418\/artificial-intelligence-good-robot-podcast-openai-chatgpt-ethics-discrimination\" rel=\"nofollow noopener\" target=\"_blank\">Good Robot series<\/a> for Unexplainable, there are inter-personal conflicts leading these two factions to work against each other, much of which has to do with emphasis. 
(AI ethics people argue that catastrophic risk concerns over-hype AI capabilities and ignore the technology\u2019s impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won\u2019t have ways to mitigate larger-scale problems down the line.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But behind the question of near- versus long-term risks lies another one: what, exactly, constitutes a catastrophic risk?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties \u2014 similar to New York\u2019s RAISE Act \u2014 before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after <a href=\"https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html\" rel=\"nofollow noopener\" target=\"_blank\">interacting<\/a> with AI chatbots to be catastrophic, such a risk is outside of the bill\u2019s scope. 
(The California State Assembly just <a href=\"https:\/\/techcrunch.com\/2025\/09\/10\/a-california-bill-that-would-regulate-ai-companion-chatbots-is-close-to-becoming-law\/\" rel=\"nofollow noopener\" target=\"_blank\">passed<\/a> a separate bill to regulate AI companion chatbots by preventing them from participating in discussions about suicidal ideation or sexually explicit material.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">SB 53 focuses squarely on harms from \u201cexpert-level\u201d frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and \u201closs of control\u201d scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThe 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,\u201d said Scott Singer, lead author of the <a href=\"https:\/\/www.gov.ca.gov\/wp-content\/uploads\/2025\/06\/June-17-2025-%E2%80%93-The-California-Report-on-Frontier-AI-Policy.pdf\" rel=\"nofollow noopener\" target=\"_blank\">California Report on Frontier AI Policy<\/a>, which helped form the basis of the bill. \u201cWe do look at like AI-enabled or AI potentially [caused] or correlated suicide. 
I think that\u2019s like a very serious set of issues that demands policymaker attention, but I don\u2019t think it\u2019s the core of what this bill is trying to address.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Transparency helps prevent such catastrophes by raising the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is accountable for a specific outcome, Irene Solaiman, the chief policy officer at <a href=\"https:\/\/huggingface.co\" rel=\"nofollow noopener\" target=\"_blank\">Hugging Face<\/a>, a collaboration platform for AI developers, told me over email.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cThese risks are coming and we should be ready for them and have transparency into what the companies are doing,\u201d said Adam Billen, the vice president of public policy at <a href=\"https:\/\/encodeai.org\" rel=\"nofollow noopener\" target=\"_blank\">Encode,<\/a> an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) \u201cBut we don\u2019t know exactly what we\u2019re going to need to do once the risks themselves appear. But right now, when those things aren\u2019t happening at a large scale, it makes sense to be sort of focused on transparency.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. 
When we already know something is a problem, the focus should be on mitigating it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cMaybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we might have had some warning signs and been able to intervene before the widespread harms to kids started happening,\u201d Billen said. \u201cWe\u2019re trying to kind of correct that mistake on these problems and get some sort of forward-facing information about what\u2019s happening before things get crazy, basically.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don\u2019t know what we don\u2019t know.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It\u2019s also certainly possible that models trained below 10^26 FLOPS, which aren\u2019t covered by SB 53, have the potential to cause catastrophic harm under the bill\u2019s definition. 
The EU AI Act sets the <a href=\"https:\/\/jack-clark.net\/2024\/03\/28\/what-does-1025-versus-1026-mean\/\" rel=\"nofollow noopener\" target=\"_blank\">threshold<\/a> for \u201csystemic risk\u201d at the smaller 10^25 FLOPS, and there\u2019s disagreement about the <a href=\"https:\/\/medium.com\/@ingridwickstevens\/regulating-ai-the-limits-of-flops-as-a-metric-41e3b12d5d0c\" rel=\"nofollow noopener\" target=\"_blank\">utility<\/a> of computational power as a regulatory standard at all, especially as models become more efficient.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors\u2019 desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute\u2019s head of AI policy, is part of a camp that sees SB 53\u2019s focus on catastrophic harm as a \u201cdistraction\u201d from the real near-term benefits and concerns, like AI\u2019s potential to accelerate the pace of scientific research or create nonconsensual deepfake imagery, respectively.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. 
\u201cI do think some of the lines between these things in practice can be a bit blurry, and I think in some ways\u2026that is not necessarily a bad thing,\u201d he told me.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It could be that the ideological debate around what qualifies as catastrophic risks, and whether that\u2019s worthy of our legislative attention, is just noise. The bill is intended to regulate AI before the proverbial horse is out of the barn. The average person isn\u2019t going to worry about the likelihood of AI sparking nuclear warfare or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the \u201csmaller,\u201d nearer harms. If they\u2019re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">If SB 53 passes the legislature and gets signed by Gov. Newsom into law, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">How we think about risk matters because it determines where we focus our efforts on prevention. I\u2019m a firm believer in the value of defining your terms, in law and debate. 
If we\u2019re not on the same page about what we mean when we talk about risk, we can\u2019t have a real conversation.<\/p>\n","protected":false},"excerpt":{"rendered":"When it comes to AI, as California goes, so goes the nation. 
The biggest state in the US&hellip;\n","protected":false},"author":2,"featured_media":151382,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,29584,4530,1343,28419,74],"class_list":{"0":"post-151381","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-future-perfect","12":"tag-innovation","13":"tag-policy","14":"tag-tech-policy","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/151381","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=151381"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/151381\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/151382"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=151381"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=151381"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=151381"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}