The Fair Work Commission has warned AI-generated material can be ‘inaccurate, incomplete, out of date, or just made up’. (Source: AAP)

The Fair Work Commission is cracking down on disgruntled Aussie workers using artificial intelligence to lodge unfair dismissal and other claims against their bosses. The workplace watchdog has seen an “unprecedented” spike in its workload and has warned that materials created with AI can be incorrect or simply “made up”.

Fair Work Commission president Adam Hatcher has now moved to formally regulate the use of generative AI in cases before it. He released a draft note outlining new requirements that would apply when someone uses AI to prepare an application or other documents.

While AI tools can help workers decide whether to make an application, Justice Hatcher warned they can also give them “unrealistically optimistic predictions of their prospects of success and likely compensation” and lead to “unmeritorious” claims.


“GenAI tools can assist litigants to produce applications, responses, submissions, witness statements and other documents to be lodged in Commission cases, but the material generated by GenAI tools can be inaccurate, incomplete, out of date, or just made up,” he said.

“This can include GenAI tools generating spurious legal arguments and references to legislation, case law, reference materials and facts that do not exist or are not relevant to a case.

“This may impose a significant time and cost burden on the other parties to a case and on the commission.”

The Commission’s workload has ballooned by more than 70 per cent in the space of three years, with Hatcher warning that its capacity to deal with major cases, including those examining work-from-home rules and rights for gig workers, was now being compromised.


Under the draft rules, workers would first be required to state in the document that GenAI was used.

Secondly, they would be required to check the document to ensure all details are correct and relevant, and to state that this has been done. This includes checking that all facts are correct, and that all cases and legislation cited exist and are correctly referred to.

Thirdly, if the document is a witness statement or declaration, the witness or declarant would need to check the document and ensure it is true to the best of their knowledge.

If workers don’t comply with the requirements, the draft note warns it could impact their case. Their documents could be disregarded by the Commission, they may need to pay costs incurred by the other party, or their case could be dismissed altogether.

Hatcher said he didn’t realise how easy AI had made it to create Fair Work applications until he tried it himself late last year.

He provided ChatGPT with a few basic facts, including telling it he believed he had been sacked because he made a complaint a few years earlier, and an application was prepared in less than 10 minutes.

It included a “substantially invented story” about his dismissal and claimed that in a “realistic scenario”, he would be compensated in the range of $15,000 to $40,000.

HR expert Jonathon Woolfrey said ChatGPT could create an application for a disgruntled worker in a matter of minutes, but the tools were prone to inventing fake dismissal stories and many claims had no chance of being successful.

Woolfrey has urged workers to think twice before using AI, but admitted there were currently few deterrents in place to stop workers from using it.

“The only downfall for workers in putting these in is they embarrass themselves,” he told Yahoo Finance.

“They clog up the time, their expectations get raised, and on a case that has no merit that they don’t realise, they’re distracted from actually getting on with their life and actually finding a new role or moving on with their life as well.”
