{"id":344710,"date":"2025-12-13T05:26:08","date_gmt":"2025-12-13T05:26:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/344710\/"},"modified":"2025-12-13T05:26:08","modified_gmt":"2025-12-13T05:26:08","slug":"whats-at-stake-in-trumps-executive-order-aiming-to-curb-state-level-ai-regulation","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/344710\/","title":{"rendered":"What\u2019s at stake in Trump\u2019s executive order aiming to curb state-level AI regulation"},"content":{"rendered":"<p>President Donald Trump signed an executive order on Dec. 11, 2025, that aims to <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/12\/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy\/\" rel=\"nofollow noopener\" target=\"_blank\">supersede state-level artificial intelligence laws<\/a> that the administration views as a hindrance to innovation in AI.<\/p>\n<p>State laws regulating AI are increasing in number, particularly in response to the rise of generative AI systems such as ChatGPT that produce text and images. Thirty-eight states <a href=\"https:\/\/www.ncsl.org\/technology-and-communication\/artificial-intelligence-2025-legislation\" rel=\"nofollow noopener\" target=\"_blank\">enacted laws in 2025 regulating AI<\/a> in one way or another. They range from <a href=\"https:\/\/www.billtrack50.com\/billdetail\/1779816\" rel=\"nofollow noopener\" target=\"_blank\">prohibiting stalking<\/a> via AI-powered robots to <a href=\"https:\/\/www.lw.com\/en\/insights\/texas-signs-responsible-ai-governance-act-into-law\" rel=\"nofollow noopener\" target=\"_blank\">barring AI systems<\/a> that can manipulate people\u2019s behavior. <\/p>\n<p>The executive order declares that it is the policy of the United States to produce a \u201cminimally burdensome\u201d national framework for AI. The order calls on the U.S. 
attorney general to create an AI litigation task force to challenge state AI laws that are inconsistent with the policy. It also orders the secretary of commerce to identify \u201conerous\u201d state AI laws that conflict with the policy and to withhold funding under the <a href=\"https:\/\/www.ntia.gov\/funding-programs\/high-speed-internet-programs\/broadband-equity-access-and-deployment-bead-program\" rel=\"nofollow noopener\" target=\"_blank\">Broadband Equity, Access, and Deployment Program<\/a> from states with those laws. The executive order exempts state AI laws related to child safety.<\/p>\n<p>Executive orders are <a href=\"https:\/\/theconversation.com\/trumps-executive-orders-can-make-change-but-are-limited-and-can-be-undone-by-the-courts-247857\" rel=\"nofollow noopener\" target=\"_blank\">directives to federal agencies<\/a> on how to implement existing laws. The AI executive order directs federal departments and agencies to take actions that the administration claims fall under their legal authorities.<\/p>\n<p>Big tech companies have <a href=\"https:\/\/www.wsj.com\/tech\/ai\/the-silicon-valley-campaign-to-win-trump-over-on-ai-regulation-214bd6bd\" rel=\"nofollow noopener\" target=\"_blank\">lobbied for the federal government<\/a> to override state AI regulations. The companies have argued that the <a href=\"https:\/\/www.politico.com\/news\/2025\/05\/12\/how-big-tech-is-pitting-washington-against-california-00336484\" rel=\"nofollow noopener\" target=\"_blank\">burden of following multiple state regulations<\/a> hinders innovation.<\/p>\n<p>Proponents of the state laws tend to frame them as attempts to <a href=\"https:\/\/www.healthcareitnews.com\/news\/states-take-lead-laboratories-ai-regulation\" rel=\"nofollow noopener\" target=\"_blank\">balance public safety with economic benefit<\/a>. 
Prominent examples are <a href=\"https:\/\/iapp.org\/resources\/article\/us-state-ai-governance-legislation-tracker\/\" rel=\"nofollow noopener\" target=\"_blank\">laws in California, Colorado, Texas and Utah<\/a>. Here are some of the major state laws regulating AI that could be targeted under the executive order:<\/p>\n<p>Algorithmic discrimination<\/p>\n<p>Colorado\u2019s <a href=\"https:\/\/leg.colorado.gov\/bills\/sb24-205\" rel=\"nofollow noopener\" target=\"_blank\">Consumer Protections for Artificial Intelligence<\/a> is the first comprehensive state law in the U.S. that aims to regulate AI systems used in employment, housing, credit, education and health care decisions. However, enforcement of the law <a href=\"https:\/\/theconversation.com\/colorado-is-pumping-the-brakes-on-first-of-its-kind-ai-regulation-to-find-a-practical-path-forward-269065\" rel=\"nofollow noopener\" target=\"_blank\">has been delayed<\/a> while the state legislature considers its ramifications. <\/p>\n<p>The focus of the Colorado AI act is <a href=\"https:\/\/www.naag.org\/attorney-general-journal\/a-deep-dive-into-colorados-artificial-intelligence-act\/\" rel=\"nofollow noopener\" target=\"_blank\">predictive artificial intelligence systems<\/a>, which make decisions, not newer generative artificial intelligence like ChatGPT, which creates content. <\/p>\n<p>The Colorado law aims to protect people from algorithmic discrimination. The law requires organizations using these \u201chigh-risk systems\u201d to make impact assessments of the technology, notify consumers whether predictive AI will be used in consequential decisions about them, and make public the types of systems they use and how they plan to manage the risks of algorithmic discrimination.<\/p>\n<p>A similar Illinois law scheduled to take effect on Jan. 
1, 2026, amends the Illinois Human Rights Act to make it <a href=\"https:\/\/legiscan.com\/IL\/drafts\/HB3773\/2023\" rel=\"nofollow noopener\" target=\"_blank\">a civil rights violation<\/a> for employers to use AI tools that result in discrimination.<\/p>\n<p>On the \u2018frontier\u2019<\/p>\n<p>California\u2019s <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB53\" rel=\"nofollow noopener\" target=\"_blank\">Transparency in Frontier Artificial Intelligence Act<\/a> specifies guardrails on the development of the most powerful AI models. These models, called foundation or frontier models, are trained on extremely large and varied datasets and <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2108.07258\" rel=\"nofollow noopener\" target=\"_blank\">can be adapted to a wide range of tasks<\/a> without additional training. They include the models underpinning OpenAI\u2019s ChatGPT and Google\u2019s Gemini AI chatbots.<\/p>\n<p>The California law applies only to the <a href=\"https:\/\/crfm.stanford.edu\/2023\/11\/18\/tiers.html\" rel=\"nofollow noopener\" target=\"_blank\">world\u2019s largest AI models<\/a> \u2013 ones that cost at least US$100 million and require at least 10<sup>26<\/sup> \u2013 or 100,000,000,000,000,000,000,000,000 \u2013 floating point operations of computing power to train. 
Floating point operations are the basic arithmetic calculations computers perform; their total count is a common <a href=\"https:\/\/www.techtarget.com\/whatis\/definition\/FLOPS-floating-point-operations-per-second\" rel=\"nofollow noopener\" target=\"_blank\">measure of computing power<\/a>.<\/p>\n<p>            <a href=\"https:\/\/images.theconversation.com\/files\/708451\/original\/file-20251212-56-gizujd.png?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" alt=\"a scatter plot with colored dots\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/file-20251212-56-gizujd.png\" class=\"native-lazy\" loading=\"lazy\"  \/><\/a><\/p>\n<p>              Today\u2019s most powerful AI models required far more computing power to train than previous models. The vertical axis is floating point operations, a measure of computing power.<br \/>\n              <a class=\"source\" href=\"https:\/\/epoch.ai\/blog\/tracking-large-scale-ai-models\" rel=\"nofollow noopener\" target=\"_blank\">Robi Rahman, David Owen and Josh You (2024), &#8216;Tracking large-scale AI models.&#8217; Published online at epoch.ai.<\/a>, <a class=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by\/4.0\/\" rel=\"nofollow noopener\" target=\"_blank\">CC BY<\/a><\/p>\n<p>Machine learning models can <a href=\"https:\/\/research.ibm.com\/publications\/trustworthy-ai-in-the-era-of-foundation-models\" rel=\"nofollow noopener\" target=\"_blank\">produce unreliable, unpredictable and unexplainable outcomes<\/a>. 
This poses <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2307.03718\" rel=\"nofollow noopener\" target=\"_blank\">challenges to regulating the technology<\/a>.<\/p>\n<p>Their internal workings are invisible to users and sometimes even their creators, leading them to be called <a href=\"https:\/\/theconversation.com\/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888\" rel=\"nofollow noopener\" target=\"_blank\">black boxes<\/a>. The <a href=\"https:\/\/crfm.stanford.edu\/fmti\/December-2025\/index.html\" rel=\"nofollow noopener\" target=\"_blank\">Foundation Model Transparency Index<\/a> shows that these large models can be quite opaque. <\/p>\n<p>The <a href=\"https:\/\/internationalaisafetyreport.org\/\" rel=\"nofollow noopener\" target=\"_blank\">risks from such large AI models<\/a> include malicious use, malfunctions and systemic risks. These models could potentially pose catastrophic risks to society. For example, someone could <a href=\"https:\/\/uk.news.yahoo.com\/controversial-california-ai-bill-amended-222458144.html\" rel=\"nofollow noopener\" target=\"_blank\">use an AI model to create a weapon<\/a> that results in mass casualties, or instruct one to orchestrate a cyberattack causing billions of dollars in damages. <\/p>\n<p>The California law requires developers of frontier AI models to describe how they incorporate national and international standards and industry-consensus best practices. It also requires them to provide a summary of any assessment of catastrophic risk. 
The law also directs the state\u2019s Office of Emergency Services to set up a mechanism for anyone to report a critical safety incident and to confidentially submit summaries of any assessments of the potential for catastrophic risk.<\/p>\n<p>Disclosures and liability<\/p>\n<p>Texas enacted the Texas Responsible AI Governance Act, which imposes restrictions on the development and deployment of AI systems <a href=\"https:\/\/www.lw.com\/en\/insights\/texas-signs-responsible-ai-governance-act-into-law\" rel=\"nofollow noopener\" target=\"_blank\">for purposes such as behavioral manipulation<\/a>. The <a href=\"https:\/\/www.law.cornell.edu\/wex\/safe_harbor\" rel=\"nofollow noopener\" target=\"_blank\">safe harbor<\/a> provisions \u2013 protections against liability \u2013 in the Texas AI act are meant to provide incentives for businesses to document compliance with responsible AI governance frameworks such as the <a href=\"https:\/\/www.americanbar.org\/groups\/business_law\/resources\/business-law-today\/2025-july\/texas-enters-ai-sandbox-with-traiga-implications-business-trials\/\" rel=\"nofollow noopener\" target=\"_blank\">NIST AI Risk Management Framework<\/a>. <\/p>\n<p>What is novel about the Texas law is that it stipulates the creation of a \u201c<a href=\"https:\/\/csrc.nist.gov\/glossary\/term\/Sandbox\" rel=\"nofollow noopener\" target=\"_blank\">sandbox<\/a>\u201d \u2013 an isolated environment where software can be safely tested \u2013 for developers to test the behavior of an AI system.<\/p>\n<p>The Utah Artificial Intelligence Policy Act <a href=\"https:\/\/le.utah.gov\/%7E2024\/bills\/static\/SB0149.html\" rel=\"nofollow noopener\" target=\"_blank\">imposes disclosure requirements<\/a> on organizations using generative AI tools with their customers. Such laws ensure that a company using generative AI tools bears the ultimate responsibility for resulting consumer liabilities and harms and cannot shift the blame to the AI. 
This law is the first in the nation to stipulate consumer protections and require companies to prominently disclose when a consumer is interacting with a generative AI system. <\/p>\n<p>Other moves<\/p>\n<p>States are also taking other legal and political steps to protect their citizens from the potential harms of AI.<\/p>\n<p>Florida Republican Gov. Ron DeSantis said he opposes federal efforts to override state AI regulations. He has also <a href=\"https:\/\/www.wusf.org\/politics-issues\/2025-12-04\/gov-ron-desantis-proposes-florida-ai-bill-of-rights\" rel=\"nofollow noopener\" target=\"_blank\">proposed a Florida AI bill of rights<\/a> to address \u201cobvious dangers\u201d of the technology.<\/p>\n<p>Meanwhile, the attorneys general of 38 states, the District of Columbia, Puerto Rico, American Samoa and the U.S. Virgin Islands <a href=\"https:\/\/www.iowaattorneygeneral.gov\/media\/cms\/12_68B5C629180F6.pdf\" rel=\"nofollow noopener\" target=\"_blank\">called on AI companies<\/a>, including Anthropic, Apple, Google, Meta, Microsoft, OpenAI, Perplexity AI and xAI, to fix sycophantic and delusional outputs from generative AI systems. 
These are outputs that can lead users to become <a href=\"https:\/\/techcrunch.com\/2025\/08\/25\/ai-sycophancy-isnt-just-a-quirk-experts-consider-it-a-dark-pattern-to-turn-users-into-profit\/\" rel=\"nofollow noopener\" target=\"_blank\">overly trusting<\/a> of the AI systems or <a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/12\/ai-psychosis-is-a-medical-mystery\/685133\/\" rel=\"nofollow noopener\" target=\"_blank\">even delusional<\/a>.<\/p>\n<p>It\u2019s not clear what effect the executive order will have, and observers have said <a href=\"https:\/\/www.npr.org\/2025\/12\/11\/nx-s1-5638562\/trump-ai-david-sacks-executive-order\" rel=\"nofollow noopener\" target=\"_blank\">it is illegal<\/a> because <a href=\"https:\/\/www.techpolicy.press\/why-trumps-ai-eo-will-be-doa-in-court\/\" rel=\"nofollow noopener\" target=\"_blank\">only Congress can supersede state laws<\/a>. The order\u2019s final provision directs federal officials to propose legislation to do so.<\/p>\n","protected":false},"excerpt":{"rendered":"President Donald Trump signed an executive order on Dec. 
11, 2025, that aims to supersede state-level artificial intelligence&hellip;\n","protected":false},"author":2,"featured_media":344711,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-344710","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/344710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=344710"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/344710\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/344711"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=344710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=344710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=344710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}