Australia’s National AI Plan signals a deliberate reframing of how the nation intends to navigate the accelerating global competition in artificial intelligence. For more than a year, the expectation was that Australia would follow Europe’s lead by introducing a dedicated AI Act with hard guardrails, mandatory risk-classification, and explicit regulatory duties for AI developers and deployers. Instead, the government has opted for a more cautious, incremental strategy, one that relies on existing legislation, targeted oversight, and capability-building rather than comprehensive reform. It is a choice shaped by political, economic and international pressures.

Industry groups, including the major global platforms, argued strongly against premature regulation, warning that Australia risked imposing constraints faster and more rigidly than competitor nations. Treasury’s interest in securing a projected $100 billion uplift from AI adoption reinforced the argument for a slower regulatory tempo. The Productivity Commission then provided the framing: pause new laws, audit existing frameworks, and build the evidence base before committing to any decisive intervention.

The resulting plan reflects this consensus. Rather than create a new regulatory regime, the government has chosen to apply Australia's existing "technology-neutral" legal frameworks as the primary mechanism for managing AI risks. The plan's commitment to "ongoing refinement" reinforces an adaptive posture rather than a prescriptive one.

At the centre of this strategy is the new AI Safety Institute. Its mandate is analytical rather than enforcement-driven: identify emerging systemic risks, test assumptions, advise ministers, and map the gaps between current law and practical reality. In effect, it becomes the hinge between Australia’s light-touch phase and any eventual movement toward stronger regulation. Its value will be determined by its analytical independence, its ability to inform national decision-making, and whether it can elevate AI risk from a technical issue to a whole-of-economy governance challenge.

The plan’s deeper priorities lie in capability. It outlines a significant expansion of enabling infrastructure – from multi-billion-dollar data centres to renewable-linked computing capacity – alongside national workforce development and structural support for business adoption. The goal is to ensure Australia has sufficient domestic capability to participate meaningfully in global AI ecosystems rather than rely exclusively on foreign models, platforms, and compute capacity. The emphasis on capacity-building also positions Australia to attract long-term investment from global technology companies seeking stable, predictable regulatory environments.

However, the reliance on existing legislation introduces structural risks. Most of Australia’s legal frameworks were designed around human decision-makers, transparent processes, and clear lines of accountability. AI systems challenge these assumptions through opacity, scale, and the potential for distributed harm. And here lies the uncomfortable truth: trying to contain a new class of technology within old legal containers is a little like patching a modern submarine with timber from the Endeavour – noble, historic, and wildly unfit for pressure.

Former minister Ed Husic’s concern about a “whack-a-mole” model remains relevant: responding to harms reactively rather than reshaping the regulatory architecture proactively. International experience, including in the United Kingdom and Singapore, suggests that hybrid, adaptive regulatory models eventually emerge as legacy frameworks encounter their limits.

Minister for Industry and Innovation and Minister for Science Tim Ayres at the Lowy Institute on Tuesday (Sahlan Hayes/Lowy Institute)

One of the most significant but understated elements of the plan concerns AI in the workplace. The government highlights the need to review how algorithmic decision-making intersects with labour rights, workplace surveillance, rostering, and automated management systems. These areas will likely generate the earliest and most visible impacts on citizens. Globally, AI deployment in workplace management has proven to be one of the fastest triggers for public concern, regulatory scrutiny, and legal challenge. And if history is any guide, Australians will tolerate a lot, except being managed by a machine that doesn’t explain itself. If Australia aims to maintain social licence for AI adoption, early clarity in this domain will be essential. Delay here risks eroding trust faster than any deepfake scandal or abstract “frontier AI” debate. Prime ministers risk losing their jobs over it.

Industry Minister Tim Ayres’s launch speech at the Lowy Institute on Tuesday was polished, principled and politically coherent, but it revealed, by omission, the core tension at the heart of Australia’s approach. He framed AI primarily as an industrial, economic and nation-building opportunity. That framing is true, but it is incomplete. What was left unsaid is equally important.

Ayres presented the decision to avoid a standalone AI Act as pragmatic realism. But the deeper reality is that relying on legacy legislation is a structural gamble. He did not address how Australia intends to manage systemic risks that cut across privacy, competition, employment law, national security, and democratic integrity simultaneously. Nor did he confront the uncomfortable truth that a light-touch approach favours capability-building now at the cost of potential regulatory upheaval later.

Ayres spoke convincingly about resilience, fairness and the “fair go”. Yet Australia still risks becoming a policy-taker in the global AI order. Nothing was said about the challenge of foreign dependency in compute, models or safety standards. Nor was there recognition that deferring regulation while the United States, European Union, United Kingdom and Singapore accelerate theirs risks locking Australia into frameworks written elsewhere.

Ayres’s emphasis on AI’s economic uplift did not engage with the systemic risks of opaque algorithmic decision-making, fragmented oversight, and the growing governance burden placed on institutions not designed for AI-era complexity. These are not technical quibbles; they shape sovereignty itself.

In short, Ayres made the political case for the plan. What remains unsaid is the strategic case against complacency: that flexibility without a defined destination is indistinguishable from drift, and drift is a luxury that nations do not have in a fast-closing technological gap. Australia needs to stop vacillating and start developing its own sovereign AI, or be left behind – again.