The federal government has accepted business demands to pause “mandatory guardrails” for AI, in the first national plan released this morning.
The long-awaited plan, which the government began consulting on in 2023, was originally intended to include hard rules governing artificial intelligence, amid deep community distrust of the rapidly spreading technology.
In September last year, former industry minister Ed Husic flagged 10 “mandatory guardrails” under development, which would include requirements for high-risk AI developers to create risk-management plans, test systems before and after deployment, establish complaints mechanisms, share data after adverse incidents and open records to assessment by third parties.
Those guardrails were intended to operate under a standalone AI Act that could be used to categorise technologies based on risk, with strict rules over high-risk AI and less regulation to encourage lower risk tools.
But the government has stepped back from that path, committing in the National AI Plan to instead use “strong existing, largely technology-neutral legal frameworks” and “regulators’ existing expertise” to manage artificial intelligence in the short term.
From next year, a $30 million AI Safety Institute will monitor the development of AI and advise industry, agencies and ministers where stronger responses may be needed, while government continues “ongoing refinement” of its AI plan.
Businesses warn burdensome laws could stifle AI
Earlier this year the Productivity Commission urged that AI guardrails be put on hold until an audit of gaps in the law could be completed, so that a potential $116 billion boost to the economy was not stifled.
Generative AI technologies, such as OpenAI’s ChatGPT, have exploded in popularity in recent years, giving everyday users and small businesses access to capabilities once available only to developers and massive corporations.
The rise of OpenAI and other artificial intelligence companies has given more people access to the technologies. (Reuters: SOPA Images/SI/Rafael Henrique)
But the spread of AI tools has also heightened concern they could be misused to exploit or defraud people, or to mislead them with undisclosed AI content.
In June, industry group DIGI, which represents Apple, Google, Meta, Microsoft and other major tech players, warned any proposal to expand or improve regulation “should start from a thorough assessment of the operation and effectiveness of existing regulatory schemes”, and look for opportunities to reduce regulatory complexity.
“DIGI recommends that policy responses first build on existing regulation, rather than introducing new legislation aimed at regulating AI as a technology,” it wrote.
Treasurer Jim Chalmers flagged at the productivity roundtable that the government would seek to regulate “as much as necessary but as little as possible”.
Data centre, AI skills demands booming
The National AI Plan lays out a roadmap to accelerate the development and adoption of AI, including through stepped-up investment in AI data centres and training initiatives.
Australia was already the second-largest destination for investment in data centres last year, attracting $10 billion — with forecasts that the massive computing farms could make up 6 per cent of all electricity demand by the end of this decade.
A new data-centre plan will set expectations for future investment, including bringing new renewable power online, which advocates say could be paired with data centres to give generators reliable demand and the centres a steady energy supply.
Demand for AI-skilled workers has also tripled over the past decade, the plan says.
Industry Minister Tim Ayres said the plan would make sure technology served Australians, “not the other way around”.
“This plan is focused on capturing the economic opportunities of AI, sharing the benefits broadly, and keeping Australians safe as technology evolves,” Senator Ayres said.
“Guided by the plan, the government will ensure that AI delivers real and tangible benefits for all Australians.
“As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe.”
Tim Ayres said the government’s plan would ensure AI served the public. (ABC News: Matt Roberts)
Assistant Technology Minister Andrew Charlton, who before entering politics co-founded technology consultancy AlphaBeta, said the government’s plan would ensure Australia would benefit from the global AI transformation.
“The government is setting out an agenda that will attract positive investment, support Australian businesses to adopt and create new AI tools, and address the real risks faced by everyday Australians,” Mr Charlton said.
AI Plan to leverage existing laws
The government says that instead of a standalone AI Act or other sweeping reforms, it will work with states and territories on “minor opportunities to clarify existing rules” around consumer protections, review how copyright law applies to AI, and review AI regulation in healthcare.
The AI Safety Institute, meanwhile, would have an ongoing role in identifying gaps in managing AI.
“By applying fit-for-purpose legislation, strengthening oversight and addressing national security, privacy and copyright concerns, we will work to keep the operation of AI systems responsible, accountable and fair,” the plan states.
“This gives businesses confidence to adopt AI responsibly while safeguarding people’s rights and protecting them from harm.”
Door left open to stronger rules in future
However, the National AI Plan also flags future action to protect employees in the workplace from AI surveillance, bias and discrimination in rostering.
It commits to progressing an analysis of “workplace relations regulations” to ensure AI does not impinge on “fair, safe and cooperative” workplaces.
While the government has opted for a lighter-touch response in its AI plan, it has also left the door open to a stronger response in the future.
“Where necessary, we will take decisive action to ensure safety and accountability as new technologies and frontier AI systems emerge,” the plan says.
“If more regulation is needed to address bad actors or broader harms, the government will not hesitate to intervene.”
Former industry minister Ed Husic, who began the work on a National AI Plan, has called for a standalone act that could adapt to the technology as it evolves.
Mr Husic has previously warned that relying on a patchwork of existing laws could lead to a “whack-a-mole” regulatory approach that disadvantages business and discourages investment.