
China is the only country with a national-level plan for responding to an AI crisis. Australia’s National AI Plan, released in December, commits the country to handling AI incidents within its existing crisis management framework. Without a tailored approach, Australia could default to cyber crisis arrangements that are not well suited to the specific threats, stakeholders and response mechanisms relevant to an AI crisis.
Good implementation of the National AI Plan requires updating the Australian Government Crisis Management Framework (AGCMF) to explicitly cover AI incidents and creating an AI crisis plan to handle them. That plan would support the National Coordination Mechanism (NCM) by bringing AI companies, experts and data centre operators to the table to help the government respond effectively.
An AI crisis is one in which a frontier AI system is involved in a threat to public safety or national security. Such incidents could include:
—Misuse by malicious actors, who could use advanced AI to conduct widespread cyberattacks or support biological weapons development.
—Loss of human control, whereby AI systems autonomously spread inflammatory content, manipulate financial markets or self-replicate while resisting shutdown.
—Unforeseen incidents, such as novel vulnerabilities in widely deployed AI agents or unexpected interactions that cascade across interconnected systems.
China recognised AI’s catastrophic potential and became the first country to explicitly prepare for AI incidents at the national level. Its February 2025 National Emergency Response Plan listed AI security alongside earthquakes and cyberattacks as a potential national disaster, and national guidelines outline AI incident severity levels and a four-phase response process.
Australia should follow China’s lead. The National AI Plan says the government is preparing for ‘any potential AI-related incident’ and evaluating how potential AI harms fit within the AGCMF. The answer is to make AI incidents an explicit category in the AGCMF and develop a dedicated AI crisis plan.
The AGCMF is organised around specific hazards with designated leadership, coordination arrangements and national plans. AI incidents do not cleanly map to any listed hazard. Cyber incidents—the closest existing category—are events affecting the ‘confidentiality, integrity or availability’ of systems or data. The cybersecurity minister leads, the Department of Home Affairs coordinates, and the Australian Cyber Response Plan provides the response framework.
AI incidents don’t fit that pattern. When an AI system coaches users through manufacturing biological weapons, there’s been no breach of confidentiality, integrity or availability. When a model evades operator control and self-replicates, the harm stems from autonomous behaviour, not a compromised system.
Triaging such incidents under cyber frameworks creates uncertainty about leadership, brings the wrong stakeholders to the table, hinders responders’ ability to understand the problem, and arms them with the wrong playbook.
An AI crisis plan would have a lot to do. Most of the actors needed to diagnose and respond to an AI incident—frontier and major AI developers, data centre operators, AI safety experts and AI Safety Institutes—are not familiar with Australia’s emergency management ecosystem. Their first connection with the NCM should not occur during a crisis.
In practice, that means establishing relationships, understanding capabilities and limitations, and setting expectations in advance, including through existing structures such as the Trusted Information Sharing Network (TISN) and the Resilience Expert Advisory Group (REAG). That way, the NCM can convene the right people quickly, and those participants understand what might be asked of them.
The plan would also need to set out practical response options. Depending on the incident, options may include restricting model access, changing deployment settings, pausing or rolling back an AI-enabled service, isolating affected systems, or coordinating actions with AI compute providers. These responses should not be considered for the first time during a crisis. They require pre-agreed decision workflows, shared understandings of capability, clarity about who can authorise what, and an understanding of the downstream effects of constraining AI systems embedded in essential functions.
The plan should also be international, as many plausible scenarios involve offshore models, infrastructure and companies. Existing crisis coordination channels within the Department of Defence and the Department of Foreign Affairs and Trade must accommodate AI incidents and include practical lines of communication with international stakeholders, especially the United States. These arrangements should be exercised and updated through the National Emergency Management Agency’s national exercising program, including scenarios in which an AI incident drives cascading effects across multiple sectors.
Developing an AI crisis plan would highlight gaps in Australia’s legal and institutional frameworks. Government engagement with critical infrastructure owners is built through the Security of Critical Infrastructure (SOCI) framework and forums such as the TISN, supported by the REAG. This ecosystem enables the NCM to convene operators quickly in a crisis.
But this ecosystem doesn’t adequately capture the AI supply chain. SOCI doesn’t treat frontier AI computing and model operations as a distinct source of risk. The government should review SOCI to ensure high-end AI compute providers and other essential AI service operators are clearly within the risk-management, information-sharing, and (where relevant) emergency assistance architecture.
Without these changes, Australia risks facing AI incidents with frameworks designed for different threats, relationships with the wrong actors, and no clear authority to respond.