Australia has no legislation and policy designed specifically for AI digital twins—systems that train on an employee’s emails, meetings, documents and chat messages to create an AI-human replica that can answer questions. The technology is already running overseas and heading here fast.

A compromised twin trained on defence or critical-infrastructure personnel would be an intelligence windfall, but existing legislation and policies weren’t designed for non-human workers holding years of institutional knowledge. Australia needs to close this gap before the technology arrives.

A Silicon Valley startup, Viven, raised US$35 million (A$49 million) in October to build personalised AI twins to represent each employee in a given company. Colleagues could then query that person’s digital twin and get answers even if the real person were on leave or had quit. Viven is already running inside enterprises with tens of thousands of workers.

Australia’s appetite for AI is insatiable. Consulting firm Deloitte says 61 percent of Australian companies report improved efficiency from AI. The government’s National AI Plan commits more than A$460 million to the technology’s development. Every signal points the same way: more AI, faster.

That makes it worth asking a question nobody in Canberra seems to have considered: what happens when malicious actors target a digital twin trained on the full working life of a government contractor’s project lead?

Think about the pieces of data that a twin may hold: project plans, client emails, internal strategy, contract terms, and personnel decisions, to name a few. Gaining access to the twin of someone who’d worked across Australia’s defence supply chain would be an intelligence windfall. Compromise one twin and you get all recorded information that the person knew: suppliers, timelines, technical details—all in one place, only a query away.

No AI twin is known to have been breached yet. But there is a strong parallel in what has already happened to enterprise copilots, which share similar underlying architecture. AI twins and enterprise copilots work by pulling an employee’s emails, documents and messages into an AI that generates answers. The main difference is that a copilot retrieves context for the person using it, whereas a twin concentrates that person’s entire recorded working knowledge (and personality) into a permanent system that others can query. This is a ready-made vulnerability, a richer target.

In June 2025, researchers disclosed a flaw in Microsoft 365 Copilot that allowed an attacker to steal sensitive data with a single crafted email, no clicks required. The AI mistook hidden instructions for legitimate commands and quietly handed over whatever it had access to. The Open Worldwide Application Security Project, the global authority on software security, ranks this kind of attack as the number-one vulnerability in AI applications.

Separately, a project called Pharmaicy has shown that code-based modules can alter how an AI thinks, much as a drug can alter a human mind: making it hazy, speeding it up and scrambling its judgment. Apply that principle to a twin trained on years of someone’s working life and you aren’t distorting a conversation; you’re corrupting a career’s worth of knowledge.

There’s also a departure problem. When employees leave, if their AI twins stay behind, who owns those twins? For a government agency or defence contractor, that is a question about data sovereignty as well as intellectual property. Australia’s Privacy Act 1998, the Defence Industry Security Program, contract law and employment law all touch on aspects of this problem. But none was designed for non-human workers holding years of institutional knowledge.

The biggest gap is probably in critical civilian infrastructure. The Security of Critical Infrastructure Act 2018 requires risk management plans across 11 sectors including communications and data storage. But the act was designed for human identities, not digital ones.

Defence is arguably better placed than most to deal with threats through digital twins. It is more alive to the risk of leaks and already operates under stricter personnel security regimes. Nonetheless, it would be good to see an Australian public framework for governing AI in defence and national security settings, like those that have been set up in the United States and Britain. The Australian Parliamentary Library’s latest research on AI and the Australian workforce focuses entirely on productivity, jobs and inequality. Security does not rate a mention.

Several things should happen as AI digital twins such as Viven’s arrive in Australia. The AI Safety Institute, costing A$29.9 million and due to launch this year, should examine AI twins as a priority threat. The risk management rules outlined in the Security of Critical Infrastructure Act should be updated so critical infrastructure operators account for digital workers in their threat plans. And the Defence Industry Security Program should be extended to explicitly cover AI twins in the supply chain.

Australia has the chance to get the rules right before the technology emerges further. It would be a waste not to take it.