{"id":253688,"date":"2026-04-17T03:31:13","date_gmt":"2026-04-17T03:31:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/us-tx\/253688\/"},"modified":"2026-04-17T03:31:13","modified_gmt":"2026-04-17T03:31:13","slug":"dean-ball-helped-write-americas-ai-action-plan-in-dallas-he-told-business-leaders-what-to-watch-for-and-what-keeps-him-up-at-night-dallas-innovates","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us-tx\/253688\/","title":{"rendered":"Dean Ball Helped Write America&#8217;s AI Action Plan. In Dallas, He Told Business Leaders What to Watch For \u2014 and What Keeps Him Up\u00a0at\u00a0Night \u00bb Dallas Innovates"},"content":{"rendered":"<p>Danny Tobey told a roomful of North Texas business leaders at Convergence AI Dallas that the policy signals shaping artificial intelligence \u201care not just headlines.\u201d They will influence \u201cwhere the government will push, where it will invest, and how companies can design their own compliance systems to move as quickly as possible,\u201d said Tobey, an attorney, medical doctor, and exited entrepreneur who has advised at least half the Fortune 10 on AI.<\/p>\n<p>The lawyer chairs the AI and Data Analytics practice at DLA Piper, the global law firm with 90 offices in roughly 45 countries. He and his team built one of the first focused AI practices in the country about eight years ago\u2014and he\u2019s quick to note they wear many hats. \u201cMost of us are also computer scientists, data scientists, and former software founders,\u201d Tobey told the March 31 audience. \u201cWe very much love the technology. 
We are not anti-progress.\u201d<\/p>\n<p>But major companies are investing heavily in AI and not yet seeing the returns they want, Tobey said, while simultaneously \u201cfinding themselves opened up to risks of inaccuracy, lack of transparency, bias in data and other things that we\u2019re starting to see ripple out into a highly active litigation environment.\u201d He pointed to the <a href=\"https:\/\/www.reuters.com\/legal\/litigation\/jury-reaches-verdict-meta-google-trial-social-media-addiction-2026-03-25\/\" rel=\"nofollow noopener\" target=\"_blank\">first jury verdict on social media addiction<\/a>, handed down just days before the event, as \u201cthe tip of the iceberg.\u201d<\/p>\n<p>Tobey gave the audience three things to listen for in the fireside chat to follow. First, what winning looks like in concrete terms: \u201cinnovation, capacity, infrastructure, readiness, and global versus local standard setting.\u201d Second, how governance fits into that vision. \u201cNot as bureaucracy, not as checklists and fig leafs, but as a real operating system\u201d for scaling AI without scaling unknown risk. And third, what industry can do now, especially in Texas.<\/p>\n<p>\u201cResponsible AI is return on investment,\u201d he said. \u201cIf I can leave you with a thought, our AI is ROI.\u201d <\/p>\n<p>With that, he handed the stage to his DLA Piper colleague Sean Fulton and Dean Ball, one of the key architects of America\u2019s AI Action Plan.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-286629\" class=\"img-responsive\" src=\"https:\/\/www.newsbeep.com\/us-tx\/wp-content\/uploads\/2026\/04\/Convergence-AI-Day-2-2026-IMG_8786.jpg\" alt=\"The main stage at the Dallas Regional Chamber's Convergence AI Dallas on March 31, the second day of the two-day conference and the day of Dean Ball's fireside chat on America's AI Action Plan. 
[Photo: Sandra Louz\/DRC]\" width=\"970\" height=\"464\"\/><\/p>\n<p id=\"caption-attachment-286629\" class=\"wp-caption-text\">The main stage at the Dallas Regional Chamber\u2019s Convergence AI Dallas on March 31, the second day of the two-day conference and the day of Dean Ball\u2019s fireside chat on America\u2019s AI Action Plan. [Photo: Sandra Louz\/DRC]<\/p>\n<p>America\u2019s AI Action Plan is a to-do list<\/p>\n<p>Sean Fulton opened the chat by introducing Ball as the \u201cprimary designer\u201d of the federal strategy. Released in July 2025, the <a href=\"https:\/\/www.cbsnews.com\/news\/trump-uai-plan-data-centers-us-infrastructure\/\" rel=\"nofollow noopener\" target=\"_blank\">28-page document<\/a> outlines the Trump administration\u2019s plan for maintaining U.S. leadership in artificial intelligence. Fulton didn\u2019t waste time. \u201cWe only have 30 minutes for a topic that we could probably talk for days about,\u201d he said. \u201cSo we\u2019ll just jump right in.\u201d<\/p>\n<p>Fulton asked where the action plan stands 12 months after release and where it\u2019s headed in the next 12.<\/p>\n<p>Ball was the plan\u2019s primary staff drafter during his time as senior policy advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy. Now a senior fellow at the Foundation for American Innovation, he said he\u2019s \u201cactually quite on the upside, happy with the way that the implementation of the action plan is going.\u201d He pointed to the Export Promotion Program and the adoption of AI in government as bright spots.<\/p>\n<p>Fulton asked what was left on the cutting room floor. 
Ball said that if his team had had another month or two, they would have gone deeper into AI adoption in heavily regulated industries like financial services and healthcare, which he believes could be transformed by it.<\/p>\n<p>The plan was designed to be different from other government AI strategies, which tend to be, in his words, \u201cvery fluffy and airy and high level.\u201d He described it as a credible \u201cto-do list for the federal government to carry out tasks in the near term on behalf of the American people\u201d\u2014within existing statutory authority and existing budgets.<\/p>\n<p>Solving a patchwork problem<\/p>\n<p>In his opening, Tobey \u201clevel set\u201d on the AI \u201cstate of play,\u201d defining that as operating \u201clike it or not, in a world of global regulation.\u201d<\/p>\n<p>\u201cMost of our companies are now multinational,\u201d he told the audience. \u201cData does not stop at borders, yet each sovereign nation oftentimes has its own data protection regime and increasingly, its own artificial intelligence regulatory regime.\u201d<\/p>\n<p>Tobey pointed to the EU AI Act as the prime example. \u201cMuch like for those of you who are familiar with GDPR, the prior European privacy law, the AI Act in Europe is extraterritorial,\u201d he said. \u201cIt purports to cover any company anywhere in the world, no matter where their models and their data are hosted, if the outputs of those models impact citizens in the European Union.\u201d He called the regulatory scope \u201cmassive\u201d and noted that penalties at the highest level can reach 7% of a company\u2019s global annual revenue. \u201cQuite extraordinary,\u201d he said.<\/p>\n<p>The U.S. 
is seeking \u201ca centralized approach that avoids a patchwork of state laws and allows an open and innovation-focused framework at the top,\u201d Tobey said, with rules around frontier risks like national security and child safety but without burdening companies with heavy regulation.<\/p>\n<p>Fulton asked Ball how the newly released national policy framework fits into the overall action plan.<\/p>\n<p>Ball said one of the leading items of the action plan addresses the issue of state preemption and onerous state AI regulations. \u201cThe president wants a national framework for AI. He doesn\u2019t want a state-by-state patchwork.\u201d When President Trump announced the plan, Ball said, that issue \u201cwas front and center for him.\u201d<\/p>\n<p>The issue of states \u201cracing ahead to regulate things is a real one,\u201d Ball said, and he pointed to Texas as an example.<\/p>\n<p>What emerged in Texas was the <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2026\/01\/25\/texas-ai-law-gets-underway-with-stern-provisions-to-stop-the-manipulation-of-human-behavior-by-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Responsible Artificial Intelligence Governance Act<\/a>, or TRAIGA, which took effect January 1, 2026. The law is designed to both foster innovation and encourage private industry investment in the state while protecting individual rights. It establishes regulatory sandboxes\u2014environments where businesses can test AI systems with limited legal liability\u2014along with safe harbors designed to promote innovation rather than just restrict it.<\/p>\n<p>Texas \u201creally did chart a path with regulatory sandboxes, safe harbors, benefits to promote innovation and really a narrower view of how the government can help regulate this,\u201d Tobey said in his opening. 
TRAIGA stands apart from other state laws that tend to mirror the EU approach.<\/p>\n<p>\u201cMany people think the Texas law could provide one potential framework for national legislation,\u201d he noted.<\/p>\n<p>But it was a process. Ball said he was \u201cdismayed a year ago or so\u201d when the state was considering an earlier version\u2014one he described as \u201ca very, very aggressive, algorithmic discrimination bill that created a centralized regulator.\u201d That bill got walked back. \u201cAnd so we got a much, I think, lighter touch version in the end,\u201d he said.<\/p>\n<p>Ball\u2019s eye is on the big picture beyond any one state. The U.S. action plan addresses the patchwork itself, he said, and the <a href=\"https:\/\/www.whitehouse.gov\/releases\/2026\/03\/president-donald-j-trump-unveils-national-ai-legislative-framework\/\" rel=\"nofollow noopener\" target=\"_blank\">National Policy Framework for Artificial Intelligence<\/a>, released by the White House on March 20, was \u201ca really important part of sort of fulfilling that aspect of the action plan.\u201d The framework is a set of legislative recommendations to Congress and a direct follow-up to President Trump\u2019s December 2025 executive order, \u201c<a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/12\/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy\/\" rel=\"nofollow noopener\" target=\"_blank\">Ensuring a National Policy Framework for Artificial Intelligence<\/a>.\u201d<\/p>\n<p>Still, Ball said, \u201ca compliance patchwork that creates a real maze for deploying firms and for AI developers is not good for anybody, and so that worries me quite a bit.\u201d<\/p>\n<p>The North Star<\/p>\n<p>Fulton followed with a question he hears frequently in his practice: \u201cWhat am I supposed to do with this patchwork?\u201d referring to clients contending with the EU AI Act and an array of states trying to do their own thing in a fragmented 
landscape. \u201cCertain states can\u2019t even pass a comprehensive bill, but have these kind of little side bills, kind of addressing various things.\u201d He asked Ball for a \u201cNorth Star\u201d\u2014something \u201cthat\u2019s not going to go away, regardless of what the regulation is.\u201d<\/p>\n<p>Ball admitted the question is a tough one. But \u201cnumber one \u2026 I think you want to make an effort to quantify, if you\u2019re using AI to automate an existing process, you want to make an effort to quantify the sort of current pre-AI level of reliability, level of safety, whatever metric it is you care.\u201d<\/p>\n<p>If you\u2019re going end-to-end, \u201cwe should really want AI to be much better.\u201d Exactly how much better it needs to be depends on the thing, he said.<\/p>\n<p>But the bar should be high. While not the only one, self-driving cars are a great example, he said. \u201cSelf-driving cars should be like an order of magnitude better than human beings. That\u2019s technological progress. We shouldn\u2019t settle for self-driving cars that are as safe as human drivers. We should settle for them only when they are much safer than human drivers.\u201d The same standard, he said, \u201cis probably true for a lot of automated business processes.\u201d<\/p>\n<p>Liability, common law, and the question AI forces us to ask<\/p>\n<p>Fulton then turned to the allocation of liability, asking where it falls as AI integrates into organizations and product stacks.<\/p>\n<p>Ball said he\u2019s not a lawyer\u2014\u201cI\u2019m one of those people who\u2019s, you know, I know just enough about the law to be dangerous\u201d\u2014but he loves common law. \u201cIt\u2019s one of the most powerful incentives we have in our society,\u201d he said. \u201cThis notion that a person who causes harm to another must be compelled to internalize that negative externality is an extremely important one. 
And it\u2019s amazing how this body of law has accumulated.\u201d<\/p>\n<p>He described the evolution of his thinking. \u201cI used to be much more of like, I want liability shields as much as I can,\u201d he said. \u201cAnd then I kind of realized, as I got into AI policy,\u201d that one area where common law is \u201creally messy right now is the intersection of the First Amendment and common law liability.\u201d Part of the problem, he said, is Section 230. \u201cOne of the downsides to this kind of like binary liability shield is that you can\u2019t litigate the interesting things, which is how we accumulate knowledge.\u201d<\/p>\n<p>Ball said he believes Section 230 protections likely do not apply to frontier AI companies\u2014\u201cand I think that\u2019s probably, on the whole, a good thing.\u201d He paused and turned to Fulton. \u201cI\u2019d be actually curious for your thoughts about that,\u201d he said. \u201cMaybe I\u2019m wrong.\u201d<\/p>\n<p>But the flip side of that, he said, is the social media addiction case against Meta and YouTube that Tobey had referenced\u2014the one that produced <a href=\"https:\/\/www.reuters.com\/legal\/litigation\/jury-reaches-verdict-meta-google-trial-social-media-addiction-2026-03-25\/\" rel=\"nofollow noopener\" target=\"_blank\">the first jury verdict on social media addiction<\/a> just days before the event. 
\u201cYou can see that going in a very bad direction for AI where juries are second-guessing the design of the transformer or something, the algorithmic design of the transformer, or of neural networks or multi-layer perceptrons or some such,\u201d Ball said.\u00a0\u201cThat seems like it could be quite deeply problematic.\u201d<\/p>\n<p>Deployer-side liability and a backstage scenario<\/p>\n<p>With respect to frontier AI and governance, Ball\u2019s sense is that \u201cthere are going to be some obligations that end up falling on the developer.\u201d Legal scholars are drawn to those developer-side questions, he added, because software liability is an interesting intellectual problem.<\/p>\n<p>What Ball thinks is \u201cactually profoundly more interesting and much less talked about \u2026 for lack of a better word \u2026 is deployer side liability,\u201d he said. \u201cIn other words, what are the characteristics of the responsible use of AI?\u201d<\/p>\n<p>It\u2019s worth considering \u201cat the firm level, sort of a business that\u2019s adopting AI,\u201d but also \u201cat the level of the individual, just me using AI agents for some purpose,\u201d he added. \u201cWe have a pretty clear sense of what responsible driving is. If you\u2019re on your phone distracted, we kind of all have a sense that that\u2019s not responsible driving. Will we have similar kind of socially constructed notions of what is responsible AI use with time, and will that plug into the common law system in interesting ways? I kind of hope so.\u201d<\/p>\n<p>Fulton put \u201ca finer point on the liability question\u201d with a scenario they\u2019d discussed backstage. Consider a doctor interpreting an MRI who comes to one conclusion and an AI that comes to another. The doctor, who\u2019s been practicing for 20 or 30 years, overrides the AI\u2014and ends up being wrong. 
\u201cHow do you deal with that kind of difference in information expertise versus technology?\u201d Fulton asked.<\/p>\n<p>Ball called it \u201canother thing about common law that doesn\u2019t get talked about that much.\u201d His view: \u201cIf a system is quantifiably superhuman in its reliability and safety characteristics, if an AI system is just reliably better than humans, then it may well be de facto negligent not to use it.\u201d Overriding that system, he acknowledged, \u201cgets complicated for various reasons.\u201d<\/p>\n<p>Where the market can\u2019t self-correct<\/p>\n<p>Fulton noted that courts are ill-equipped to handle AI cases due to information asymmetry and asked Ball what actual transparency looks like when it comes to policymakers and frontier model makers.<\/p>\n<p>\u201cThe transparency that I have focused the most on is actually probably one that is not directly relevant to many people in this audience,\u201d Ball said\u2014transparency around how frontier AI developers measure and mitigate catastrophic risk.<\/p>\n<p>\u201cWe will see models this year that have staggering cyber capabilities,\u201d he said, \u201cthat are better than many human cyber security experts at finding vulnerabilities in critical software. And they might well be better than all human experts at that at some point in the relatively near future.\u201d He noted that people also talk about bio risk\u2014\u201cthe ability to design novel pathogens using AI systems and all kinds of other threats that will manifest themselves.\u201d<\/p>\n<p>It\u2019s the area he focuses on most, Ball said, because it connects directly to the liability conversation. 
\u201cCatastrophic tail risk is the thing you shouldn\u2019t expect a common law liability regime or a market-based incentive to solve,\u201d he said.<\/p>\n<p>He pointed to <a href=\"https:\/\/fpf.org\/blog\/californias-sb-53-the-first-frontier-ai-law-explained\/\" rel=\"nofollow noopener\" target=\"_blank\">California\u2019s SB 53<\/a>, the Transparency in Frontier Artificial Intelligence Act, as an example of a legislative approach. The law, which took effect January 1, 2026, requires developers of the most powerful AI models to publicly disclose how they test for and mitigate catastrophic risk. Ball broke with some of his usual allies in supporting it. \u201cUnlike a lot of people on my side of the aisle, I\u2019m in fact supportive of\u201d the bill, he said.<\/p>\n<p>The diffusion challenge<\/p>\n<p>But SB 53 is aimed at the builders. For most of the companies in the room\u2014the ones Tobey described as \u201cnot yet seeing the returns that they want\u201d\u2014a different challenge is playing out: How do you adopt AI in a way that actually delivers?<\/p>\n<p>Fulton asked Ball what practical guidance he\u2019d give CEOs, general counsel, and CTOs trying to draw conclusions from the action plan. And does that advice differ for organizations deploying AI versus developing it?<\/p>\n<p>The action plan is called \u201cWinning the Race.\u201d Ball said the problem is how people are defining that race. \u201cI totally take responsibility for this, where I just think we confuse people,\u201d he said.<\/p>\n<p>A key theme throughout the action plan is what policymakers call \u201cdiffusion\u201d\u2014getting AI out of the lab and into the economy. But Ball said that idea has narrowed in practice. 
In governance conversations, diffusion \u201chas actually kind of turned out\u201d to mean getting older, open-source models into familiar use cases\u2014\u201clike getting DeepSeek or some open source model into medical diagnostics or something.\u201d<\/p>\n<p>\u201cThere are a lot of people who think that\u2019s what the race is about,\u201d he said.<\/p>\n<p>Ball doesn\u2019t. \u201cThe actual really interesting diffusion challenge that we face is, how do you integrate AI\u2014and like, advanced AI, not like sub-frontier, but really frontier AI\u2014how is it going to restructure organizations?\u201d he said. \u201cHow will institutions be fundamentally upended by this technology?\u201d<\/p>\n<p>Tobey earlier described an aspect of that restructuring. \u201cWe\u2019re moving from an era of chatbot AI, where there is always a human at the receiving end of recommendations, to agentic and physical AI,\u201d he told the audience\u2014\u201ca world where AI is increasingly automated, increasingly impacts the real world and takes actions and makes decisions on behalf of humans, and does so in ways that don\u2019t always stop for human intervention, permission and understanding.\u201d<\/p>\n<p>Companies that built AI governance programs several years ago are now calling him back, he said, because \u201cthe old human-in-the-loop rules really don\u2019t work as well when you\u2019re looking at ubiquitous, automated AI agents that are acting 24\/7 as digital workers.\u201d<\/p>\n<p>Infrastructure bottlenecks<\/p>\n<p>Fulton noted time was running short and asked Ball what he sees as the real-world bottlenecks to AI implementation in the next 18 months, the biggest risks coming forward, and how the government should address them.<\/p>\n<p>\u201cIt\u2019s not going to be surprising to anyone here\u2014probably, we are in Texas after all\u2014but it\u2019s energy,\u201d Ball said. 
\u201cThe grid and the infrastructure side of this.\u201d He added that one area \u201cthat\u2019s somewhat under-appreciated is the skilled labor that we\u2019re going to need to build all of that infrastructure, because that, in and of itself, is a big problem.\u201d<\/p>\n<p>He pointed to an example close to home. \u201cThe <a href=\"https:\/\/www.texasstandard.org\/stories\/stargate-data-center-abilene-texas-construction-ai-artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">Stargate Project<\/a> in Abilene, Texas\u2014I think they\u2019re importing skilled laborers from like 48 states or something,\u201d Ball said. \u201cEnormous operations with real shortages of the labor that we need. I think that\u2019s a really, really big challenge.\u201d<\/p>\n<p>But he was clear that these are not dead ends. \u201cWe\u2019re going to get through the energy bottleneck,\u201d he said. \u201cWe\u2019re going to get through the compute bottleneck, which will also be a real thing.\u201d Even the state-by-state patchwork, Ball said, won\u2019t stop AI\u2019s diffusion\u2014\u201cAI is a really important macro invention.\u201d\u00a0<\/p>\n<p>What keeps Dean Ball up at night<\/p>\n<p>Ball is confident the practical problems will get solved: energy, labor, compute, the patchwork.<\/p>\n<p>\u201cThe thing that keeps me up at night is not any of those things,\u201d he said. \u201cIt\u2019s this question of\u2014America feels, you know, we\u2019re in our 250th year now, we feel like, to me, we feel like an old country. Like maybe middle aged, right? We\u2019re not as young and energetic and hungry as we once were.\u201d<\/p>\n<p>He wondered aloud whether we can absorb what\u2019s coming. \u201cCan our civilization handle the dynamism, or do we just kind of want to go more in a European direction, and just kind of want to, you know, hang out and have a picnic for the next 150 years or something,\u201d he said. \u201cI hope not. 
I hope we remain a hungry country.\u201d<\/p>\n<p>He spoke to the room: \u201cI love coming to places like Texas, because I think Texas still has that much more than, for example, the East Coast where I live.\u201d<\/p>\n<p>Fulton, with a couple of minutes left, asked Ball for his \u201csage advice\u201d to the Convergence audience\u2014people excited about AI and its future.<\/p>\n<p>\u201cBe prepared for quite a lot of change,\u201d Ball said. \u201cDynamism means saying hello to new things, and it also means saying goodbye to some things that we maybe don\u2019t want to say goodbye to.\u201d<\/p>\n<p>He said he\u2019s \u201cquite convinced\u201d the diffusion of AI is going to be for the better. But he was candid about what comes with it. \u201cThe diffusion of artificial intelligence is going to mean humanity takes its hand off the wheel a little bit, of processes and mechanisms that we\u2019re not used to not having our hands on the wheel for,\u201d he said. \u201cI also would be lying if I said that I didn\u2019t feel some degree of melancholy about that.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Danny Tobey told a roomful of North Texas business leaders at Convergence AI Dallas that the policy signals&hellip;\n","protected":false},"author":2,"featured_media":253689,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[102,104,103,95792,95793,95794,95795,95796,95797,95798],"class_list":{"0":"post-253688","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-dallas","8":"tag-dallas","9":"tag-dallas-headlines","10":"tag-dallas-news","11":"tag-danny-tobey","12":"tag-dean-ball","13":"tag-dla-piper","14":"tag-foundation-for-american-innovation","15":"tag-sean-fulton","16":"tag-texas-responsible-artificial-intelligence-governance-act","17":"tag-white-house-office-of-science-and-technology-policy"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/posts\/253688","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/comments?post=253688"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/posts\/253688\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/media\/253689"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/media?parent=253688"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/categories?post=253688"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-tx\/wp-json\/wp\/v2\/tags?post=253688"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}