Electronic Frontiers Australia (EFA) has criticised the federal government’s approach to regulating artificial intelligence, saying the forthcoming National AI Plan places business opportunities ahead of public safety and digital rights.

Legislative direction

The advocacy group has voiced concerns about the government’s decision to forgo a dedicated, proactive AI law. Instead, Australia will manage AI through a combination of existing frameworks such as privacy, consumer protection, workplace, and national security laws. These laws, according to the group, provide mainly post-facto, or after-the-event, enforcement.

EFA chair John Pane compared the AI regulatory approach to earlier privacy regulation efforts. “Many people are unaware that this Big Tech and Big Business friendly light touch approach to regulation was also used in the implementation of the National Privacy Principles made under the Federal Privacy Act back in 2000. Those ‘light touch’ privacy principles were an abject failure due to poor design, regulatory capture and fear mongering from both Big Tech and Big Business interests, putting profit and productivity before people and digital rights. And now it looks like history is repeating itself with this National AI plan,” said Pane.

Regulatory fragmentation

The government’s plan will see AI-related obligations governed by a cross-section of existing legal frameworks, each of which will need updating to address new risks. EFA says this could produce legal fragmentation, as frameworks that were never designed to work together are stretched to cover AI. Additionally, the burden of enforcement will likely fall to regulatory bodies such as the Office of the Australian Information Commissioner, which already faces resource constraints.

The group highlighted the challenge of relying on post-facto enforcement models at a time when AI-powered harms can emerge swiftly and at scale, sometimes without immediate detection. Pane said, “The new AI Safety Institute is starting to look like a lame duck, particularly if it can’t prevent and mitigate against high risk and prohibited AI use cases which other countries have identified as important for human rights and subsequently enshrined in law.”

International models

Pane argued that Australia should draw on frameworks established overseas, particularly the European Union’s AI Act, which sets out strict ex ante, or proactive, requirements. “We need strong EU style ex ante AI laws for Australia, not a repeat of Australia’s disastrous ‘light touch’ private sector privacy regime introduced in 2000. We need to also resist the significant geopolitical pressure being brought to bear on Australia by the Trump administration, forcing sovereign nations to adopt US technology ‘or else’,” said Pane.

Proposed protections

EFA called for the introduction of mandatory risk assessments for high-stakes applications, clear definitions of prohibited AI use cases, and requirements focused on fairness, transparency and explainability. Pane said that these should be backed by privacy, copyright, and anti-disinformation protections:

“We need to stand strong and pass an AI Act that:

- Introduces mandatory risk assessments for AI applications in high-stakes domains like healthcare, law enforcement, and finance.
- Defines high risk and prohibited use cases for exploiting vulnerabilities, inferring emotions, biometric categorisation, subliminal manipulation etc.
- Articulates clear accountability mechanisms for developers and deployers of AI technologies including fairness, transparency and explainability.
- Is supported by strong privacy and other legal protections to safeguard individuals from intrusive surveillance based data extraction practices, protect copyright, prevent algorithmic manipulation and stem the flow of misinformation/disinformation.”

Public trust concerns

EFA has linked the absence of a proactive AI legal regime to lower public trust in such technologies. “The absence of a citizen and society-centric ex ante legal framework for AI development and deployment not only jeopardises individual rights but also further undermines the existing very low levels of public trust in AI technologies,” said Pane.

“EFA again calls on the Australian Government to build a human rights based framework for AI regulation modelled on the European Union AI Act and to prioritise the privacy, safety and rights of its citizens over rubbery short-term economic gains, a large proportion of which will flow out of our country,” said Pane.