In late 2022, the release of ChatGPT set off a global scramble to govern artificial intelligence. The Biden administration set up the U.S. AI Safety Institute, an office tasked with developing protocols to identify risks posed by AI. Canada, Japan, Singapore, South Korea, the United Kingdom, and the European Union created similar oversight bodies. Global leaders gathered to discuss AI at Bletchley Park, the English country estate where Allied code breakers worked in secret during World War II. The next 15 months brought follow-on conferences in Seoul and Paris. In the face of a technology promising to disrupt work, warfare, and what it means to be human, the stage was set for a political response. Even the leaders of the commercial AI labs called for coordinated AI governance.

Today, such calls are rare. Despite the accelerating power of generative AI models, and despite polling indicating that the public is alarmed by the technology’s potential for job displacement and other harms, neither government nor private-sector leaders believe that regulation is likely at the national or supranational level. The Trump administration’s instinctive dislike of regulation, especially global regulation, explains part of this change, but it is not the only factor. There are powerful incentives to let the technology rip. The AI boom is generating much of the growth in the U.S. economy; throwing sand in the gears could be costly. The release of powerful Chinese AI models such as DeepSeek has discouraged the U.S. government from impeding domestic labs lest China race ahead.

Even if the AI bubble bursts, perhaps bankrupting some of the top firms, deep-pocketed tech giants in the United States and China will continue to accelerate deployment. Because of this race dynamic, the prospects for AI governance will remain challenging. But there is too much at stake to abandon the regulatory cause. Sooner or later—perhaps following an AI disaster, such as a cyberattack on critical infrastructure by a rogue agent—this truth will become self-evident. Artificial intelligence portends social and psychological upheaval on a scale at least equivalent to the Industrial Revolution, which set the stage for a century of political revolutions and world wars. At some point, governments will realize that refusing to shape how the AI revolution unfolds is an abdication of responsibility. The patchwork of regulatory efforts in California and other U.S. states will not produce a coherent governance framework. But it does underscore that many people are uneasy with the do-nothing option.

To prepare for AI’s return to the national agenda, however, proponents of regulation need to examine their position’s shortcomings. The safety push of 2023–24 fizzled in part because of its breadth. Advanced by a broad array of advocates, the agenda covered a long and bewildering list of problems that AI models could cause: job displacement, decreased critical thinking in schools, dangers to national security, environmental costs, copyright violations, hallucinations, deepfakes, and much else. But for AI regulation to succeed, its proponents must pick priorities and forge a clearer plan. To that end, they must understand the tradeoffs among competing objectives and contend with widespread misunderstandings about how AI is likely to develop in the future. Would-be regulators must also distinguish between workable policies and impractical ones. Workable policies take private labs’ incentives into account and empower the government appropriately. For example, a “risk tax” on private AI labs would encourage them to invest in safety research. And a revenue-generating national data repository would give the government’s AI safety overseers the resources they need to monitor frontier models.

Overextended

The goals of AI policy advocates in the United States can be grouped into three categories. Proponents want national security: the country’s military and intelligence services should fortify themselves with AI. They also want economic security: American businesses should develop and incorporate AI in ways that will make them more competitive in international markets. And they want societal security, a category that includes the mitigation of toxic AI outputs such as malware, as well as protections against joblessness and increased inequality, the risk that bad actors might use AI for nefarious purposes, and even a science-fiction outcome in which machines annihilate humans. The trouble is that working toward all three objectives at once amounts to a confusingly broad agenda that is exceedingly difficult to translate into action.

Consider three scenarios, each of which prioritizes two of the three objectives. In the first scenario, a country could pursue both national and economic security by maximizing its investment in AI research, data centers, and energy infrastructure. This is the stance of the Trump administration. But a country cannot pursue those goals and simultaneously maximize societal security, which would involve slowing the rollout of AI to buy time to identify safety risks in models and build in remedies before releasing them. Conversely, if safety advocates succeeded in slowing down model releases, they would compromise national security and economic security. Both sides need to recognize the tradeoff.

In a second scenario, a country could prioritize national security and societal security, treating AI like nuclear technology by siloing it within the military and energy sectors and restricting its use in other areas. This approach would secure the state and insulate the public from disruption, preventing widespread job displacement and minimizing opportunities for malicious misuse. But doing so would compromise economic security by stifling commercial AI applications, preventing industries from leveraging AI efficiencies, and dooming domestic businesses to fall behind international competitors.

In a third scenario, a country could prioritize economic security and societal security, encouraging full-throttle AI development while also requiring compliance with rigorous safety regulations before models are released to the public. Big tech firms have sometimes described this mix as “responsible innovation.” The idea is that racing to develop the technology while rolling it out cautiously creates a virtuous circle—the innovators earn public trust, avoid a societal backlash, and achieve faster adoption in the long run. But a country that combines fast AI development with cautious AI rollout may struggle to maximize national security. Responsible countries may carefully validate their models, but incautious rivals will race to deploy autonomous weaponry and cyber-capabilities, gaining the military advantage.

The Singularity Delusion

The already difficult task of navigating these tradeoffs has been made even harder by a widespread misconception about how AI is likely to develop: the vision of a “singularity,” a concept proposed some 30 years ago by the science-fiction writer Vernor Vinge and later embraced by futurists such as Ray Kurzweil. The singularity, as Vinge explained it, is the moment when AI models become strong enough to upgrade their own code, unleashing a feedback loop of recursive self-improvement that triggers an intelligence explosion.

If one agrees with this mental model, the resolution to the AI trilemma becomes rather simple. If a singularity is approaching, short-term efforts to deploy AI for national security are futile; once superintelligence arrives, it will vastly supersede current systems. Similarly, the short-term pursuit of economic security through today’s corporate AI adoption efforts is pointless: superintelligence will invent revolutionary ways of boosting productivity, perhaps even rendering obsolete contemporary understandings of how economies function.

If AI is heading toward a sudden intelligence explosion, the only two policy objectives that matter are to reach the singularity before rivals (with the assumption that military and economic security will follow) and to prioritize societal security by minimizing the risk that surging machine intelligence will subjugate or annihilate the weaker, biological variety. This singularity-driven perspective is a version of the “responsible innovation” approach. The idea is to race to develop superintelligence, thereby achieving military and economic security—but to roll it out cautiously, thereby avoiding the risk that an AI system might attack humans.

Proponents of Washington’s export controls, which limit China’s access to AI chips and chip-making technologies, sometimes justify them by resorting to singularity assumptions. They concede that starving China of high-end semiconductors will drive the country to develop its own, rendering it more formidable in the long run. But they argue that this tradeoff is worthwhile: the singularity is imminent, so the priority is to delay China’s progress for a few years, by which point the United States will have already won the AI race. All that matters is to push the responsible-innovation formula to the max and be the first to reach a “safe singularity.”

The trouble is that the singularity is unlikely to happen. To be sure, a limited version of recursive self-improvement is already a reality; AI coding assistants such as Anthropic’s Claude are helping write the code for the next generation of AI models. But technology does not acquire agency on its own. To be powerful, a superintelligence must be placed in an environment in which it can act, and such environments will be designed and controlled by humans for the foreseeable future.

For example, superintelligent systems may soon be capable of replacing most lawyers. But to supplant humans, the systems must first be given access to the right data sets, such as client information. For that to happen, AI developers must clarify questions of liability for malpractice and solicit regulatory approval; they must fortify AI systems against attacks and overcome lobbying by incumbent workers. Once these obstacles have been navigated, the superintelligent systems must be given the authority to act. They must be empowered to draft legal contracts and write code that augments a company’s existing software; they must run that code, validate it, and share it with other systems. Each of these steps toward agency is likely to involve fresh legal and institutional hurdles. For all these reasons, superintelligent systems are not going to replace humans quickly.

The singularity scenario also glosses over the physical obstacles to AI scalability. To achieve superintelligence and apply it to millions of tasks, AI systems need racks of semiconductors, sophisticated cooling systems, and vast quantities of electricity. To make this possible, chip manufacturers must build fabrication facilities, source state-of-the-art machines to print circuits onto silicon, negotiate access to rare-earth elements, and ensure that national electric grids acquire new substations and transmission lines.

The vision of an intelligence explosion implicitly assumes an overly simple feedback loop: AI writes code, code improves AI, AI writes better code, and so on. But the path to superintelligence also involves humans haggling with governments over the location of new fabrication facilities, humans mining materials, humans negotiating energy contracts, humans raising capital, humans considering the risk of adversarial hackers, and countless other frictions that only humans can address.

In reality, superintelligence is likely to emerge gradually, not in a singular, revolutionary moment. For that reason, Washington’s race against Beijing will be prolonged—which weakens the case for semiconductor export controls. By the same token, the logic of sprinting toward a “safe singularity” becomes dubious since the project of balancing AI policy objectives will play out over an extended period.

The Practicality Test

Having grasped the AI trilemma and sidestepped the singularity delusion, advocates of AI governance must complete one final exercise. They must be blunt about the fact that some popular misgivings about AI should not be policy priorities. The clearest examples involve concerns that are exaggerated and problems that the private sector is motivated to solve, rendering regulatory pressure unnecessary.

The concern that AI models spew misleading hallucinations falls into both categories. Until 2023, large language models were indeed toxic and unreliable, which was precisely why the major labs were nervous about releasing them. Since 2023, however, hallucinations have diminished. At least in some contexts, performance tests find that AI systems are much more likely than expert humans to answer questions correctly. The labs have every incentive to keep improving the models’ accuracy.

Other popular misgivings may be warranted but should nonetheless be excluded from the regulatory agenda because they are impractical. For instance, AI systems threaten the jobs of human knowledge workers. But policymakers should not make job preservation a priority, because they can’t do much about it. Some lawyers, radiographers, and Hollywood scriptwriters will hang on to their jobs if they can learn how to use AI. Those who are displaced deserve a public safety net—for example, a universal basic income. But freezing technological advancement is not a realistic policy.

The practicality test should also be applied to curbs on semiconductor exports. When the Biden administration imposed sweeping controls in late 2022, it hoped to further all three policy objectives contained in the trilemma. Cutting off China’s access to the best chips would slow the country’s military progress, thereby strengthening U.S. national security. It would likewise inhibit China’s development of AI business applications, thereby boosting U.S. economic security. And it would prevent China from building unsafe AI models that might decide to attack humans, allowing the United States to demand responsible restraint from its own labs without fear of yielding ground to a reckless adversary. But despite this trifecta of desirable benefits, the semiconductor controls have so far failed to prevent China from building impressive models. Chip smuggling is almost impossible to stop. Chinese model makers can overcome shortages of cutting-edge chips by deploying large numbers of inferior ones. Chinese developers can also compensate for hardware limitations by improving their software. The upshot is that, despite U.S. sanctions, China’s AI sector is advancing rapidly. Meanwhile, the chip controls have driven China to focus more intensely on domestic semiconductor development.

Like job preservation and semiconductor curbs, the policy of restricting open-weight models—that is, models that can be downloaded and modified, including in ways that make them unsafe—is attractive in principle but difficult in practice. In principle, a clampdown on open-weight models involves sacrificing the economic upside of slightly faster AI diffusion to optimize the two other objectives: national security and societal security. National security stands to benefit because foreign adversaries, both state and nonstate, would no longer get easy access to strong open-weight models. Societal security would benefit for similar reasons. The safeguards that responsible AI labs train their proprietary systems to respect can be stripped out of open-weight ones, facilitating the creation of psychologically manipulative AI companions, for example. To be sure, a world in which all systems were proprietary would not be a world with zero harms: consider, for example, the deepfake nude images generated on demand by xAI’s Grok model. But when proprietary models go rogue, governments at least know which lab to hold accountable.

Because banning all open-weight systems would be impossible, believers in an open-weight clampdown often propose a targeted approach, arguing that the most powerful open-weight models could be restricted. After all, the strongest open-weight models are produced by well-known companies, which are susceptible to regulatory pressure. Some developers, such as Meta, are based in the United States. Others, such as Canada’s Cohere and France’s Mistral, have raised capital in the United States and are hoping for more of it. Nearly all serious builders of open-weight systems aspire to serve businesses and households either in the United States or in allied countries.

If government regulators could threaten to deprive developers of access to their financial backers, customers, and data-center partners, then the case for such an open-weight clampdown would be more plausible. By forging a coalition of like-minded allies, a future U.S. administration could make it hard for developers of open-weight systems to raise capital and collect revenue in North America and Europe. It could decree that owners of large data and computing centers, such as Amazon, Google, and Microsoft, cannot run high-risk, unvetted open models. Such steps would create incentives for leading AI companies to move away from open-weight systems.

But the catch is that developers of open-weight systems in China would probably refuse to join this coalition. And owing to the breakdown in U.S.-Chinese relations, Chinese labs have almost no connections to the U.S. market. If Chinese labs continue to release powerful open-weight systems, there is little point in telling Meta and other American firms not to. Curbing U.S. open-weight models would merely ensure that developing countries, which tend to favor open-weight models because of lower costs, would deepen their already troubling dependence on Chinese technology. In the absence of a thaw in U.S.-Chinese relations, the policy of restricting open-weight models fails the test of practicality.

Embrace the Tradeoffs

The best route forward for AI regulation is to embrace two compromises, each of which involves a modest economic cost and a larger gain for societal safety. The premise of the first compromise is that model safety is both a private and a public good. AI labs have incentives to produce models that are safe for their users, creating a private good. But a model that is harmless to its users can still harm public safety. For example, if a user asks a model to generate and distribute thousands of subtly distinct articles alleging an imaginary election fraud, the AI lab’s private incentive is to please that customer by complying—but that very compliance would harm the public by spreading disinformation. Because private labs lack incentives to avoid such externalities, they will not adequately invest in safety research.

A special “risk tax” could correct this shortfall. The goal would not be to raise revenue but to influence how companies allocate their resources, shifting more of them to safety research. For example, an AI developer that spent $1 billion to build a model would face a five percent tax, creating a $50 million liability. To offset this, the government would offer a tax credit worth 25 percent of each dollar the firm allocated to safety research. If the lab spent an additional $200 million on safety research, the resulting $50 million credit would offset its tax bill.
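To make the incentive arithmetic concrete, here is a minimal sketch of how such a levy and credit might net out for a hypothetical lab, using the illustrative figures above. The cap on the credit (so that it can offset the levy but not exceed it) is an assumption, as are the function name and structure.

```python
def net_risk_tax(build_cost, safety_spend, tax_rate=0.05, credit_rate=0.25):
    """Illustrative risk-tax calculation (assumed design): a levy on model
    build cost, offset by a credit on safety-research spending, with the
    credit capped at the levy."""
    levy = tax_rate * build_cost                    # 5% of $1 billion = $50 million
    credit = min(credit_rate * safety_spend, levy)  # 25% of $200 million = $50 million
    return levy - credit

# The worked example from the text: a $1 billion model and $200 million of safety research.
print(net_risk_tax(1_000_000_000, 200_000_000))  # 0.0: the credit fully offsets the levy
```

Under this structure, every four dollars of safety research below the break-even point erase one dollar of tax liability, which is the behavioral lever the policy relies on.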

Industry leaders will object that coerced safety spending would set them back financially, harming their prospects in the race against unrestricted Chinese developers. But given the hundreds of billions of dollars that American labs are collectively investing in AI research, the intervention proposed here is modest. Moreover, investments in safety would be a drag on AI deployment only in the short term. In the longer term, better safety research would boost public trust in AI, smoothing the path to widespread deployment and the accompanying economic benefits.

Increased safety spending would also enhance societal security and bring other benefits, such as halting the hollowing out of academic AI labs. As private labs monopolize both hardware and talent, publicly funded university research labs—the historical source of U.S. technological advantage—are ailing. By permitting private labs to earn tax credits for funding National Science Foundation safety grants or for giving university groups access to computing resources, the risk tax would breathe new life into the academic sector. The NSF funnels about $1 billion to university computer science programs per year. With industry training costs running at multiples of that, the boost from the risk tax to academic computer science would be substantial.

In addition to incentivizing safety spending, the government should parlay its proprietary data into closer collaboration with AI companies. The curation of data needed to train AI systems has thus far been dominated by the private sector, which has scraped the open web and more. But the supply of high-quality public text is nearly exhausted, so the frontier of innovation is shifting toward nonpublic data sets, including some held by the U.S. government. Last year, the Trump administration announced a plan for the Department of Energy to release scientific data sets collected by 17 national laboratories so that private AI labs could use them to train AI models. That welcome initiative could be expanded into other areas. In health care, for example, anonymized clinical data held by the government could enable models to diagnose disease and prescribe medicine. In the economic arena, AI models could use anonymized tax data, which is currently not available to the public, to better predict shifts in saving and consumption.

The Trump administration could establish a national data repository under the umbrella of the government’s Center for AI Standards and Innovation (CAISI), which the Biden administration had originally established as the U.S. AI Safety Institute. The repository would clean and anonymize a wide variety of currently unpublished government data to meet the highest privacy standards, then share it with private AI developers. The developers would pay for the data, just as they currently pay private data-curation firms. The revenues would then be used to improve the center’s capabilities to monitor the safety of AI models developed by the private sector.

Once it is adequately resourced, CAISI will also need additional authority. The Biden administration required developers of frontier models to share safety information with the Department of Commerce’s Bureau of Industry and Security. The Trump administration has unwisely suspended this reporting requirement. But beyond the right to be informed about powerful models, the center needs the power to act. Just as the Food and Drug Administration can block the sale of unsafe pharmaceuticals, so the government’s AI regulators must be empowered to veto the release of dangerous AI models.

As with an AI risk tax, the goal in creating a stronger AI center would be to produce a large increase in societal safety in exchange for a small loss of economic security and no loss of national security. To be sure, stress testing AI models before releasing them would impose modest costs and delays on AI labs. But these would be offset by the availability of well-curated public data sets for AI training, and government vetting of the models would hasten the public’s embrace of them. Meanwhile, the introduction of stringent pre-release testing could avoid compromising national security if private labs are permitted to share models with defense and intelligence officials before CAISI reviews them. That way, national security leaders could plan how to use new models before the rest of the world even knows about them.

Toward a New Nonproliferation Victory

Critics of these proposals may object that it is not worth taxing U.S. developers of proprietary systems, pushing them to spend more on safety, and building an agency to identify and stop bad models, if there is a risk that China and developers of open-weight systems may produce unsafe models. But there is a case for federal regulation of proprietary models even if Chinese or open-weight models circulate broadly. The reason is that U.S. proprietary models remain stronger, especially for the most complex reasoning and multimodal tasks. If the goal of U.S. regulators is to keep tabs on the most innovative, most powerful, and therefore most threatening models, a focus on the U.S. proprietary labs is appropriate.

Moreover, taxing and regulating proprietary AI would have positive spillovers for the rest of the industry. By forcing the proprietary labs to pay for safety research, the government would catalyze the development of features that improve safety in all models. When encryption and multifactor authentication first appeared, they were limited to high-end software; over time, they became universal. Similarly, AI safety methodologies will begin as expensive niche ideas, then become the industry benchmark. Once a proprietary leader demonstrates the value of a safety add-on, any open-weight model lacking it will be perceived as defective or malicious. This market pressure, backed by the AI center’s ability to blacklist substandard models, ensures that the safety innovations funded by taxing proprietary systems would discipline the whole industry.

Furthermore, building a regulatory apparatus for proprietary AI may turn out to be a steppingstone to a broader governance project that extends, eventually, to an international agreement on restricting open-weight systems. Today, one reason open-weight systems cannot be controlled is the poor state of U.S.-Chinese relations, but at some point those relations may change. During the Cold War, periods of belligerence gave way to phases of détente. In 1968, just six years after the Cuban missile crisis and at the height of the Vietnam War, the United States, the Soviet Union, and more than 50 other nations signed the Nuclear Nonproliferation Treaty, which successfully delayed the spread of nuclear weapons. The United States could now aspire to do the same for open-weight AI. By putting in place the institutional infrastructure to regulate proprietary models, the United States would be positioning itself for another nonproliferation victory.

Given the challenges of governing a fast-changing technology, and of taming a race for supremacy that is global in scope, proponents of AI regulation face difficult choices, with some forms of economic, societal, or national security inevitably being sacrificed. But it is better to embrace these tradeoffs than to pretend they don’t exist. If governments fail to make deliberate choices about AI, the technology will advance and make choices for them.
