The Labour Court has followed the Workplace Relations Commission in issuing guidance on the use of artificial intelligence (AI) when preparing cases.
The move follows a number of instances in which evidence presented to it, particularly case law, proved to be partially or wholly inaccurate.
In its guidance, the court suggests that the use of AI is acceptable, but that parties should be aware of its limits and the regularity with which it produces incorrect data, and should understand that they are responsible for the reliability of anything they present as evidence.
Labour Court sources said the use of AI was “not wholesale but is definitely creeping in”.
They said there had been recent cases in which a hired representative cited a High Court decision claimed to contain a relevant point, only for the other side to check it and inform the court that the claim was entirely untrue.
“The people didn’t admit to using AI but it’s sometimes obvious and we are seeing a growing number of lay litigants, representing themselves and submitting huge amounts of paperwork.”
“Complainants in the WRC repeatedly make the same mistake and AI is making it more pronounced,” says Galway-based Ruairi Guckian of GHR Consulting.
“They think that by submitting a lengthy complaint it will aid their case. We had one woman who submitted a 1,000 page complaint against a client. The trick is to keep it short and sweet – factually and documented evidence. There is no need for AI assistance if you stick to that principle.”
A number of recent WRC cases, however, highlight how badly things can go wrong when people do not check the results they get.
In Fernando Oliveira v Ryanair, a member of cabin crew who claimed he had been discriminated against on the basis of race and family status cited nine employment law cases he said provided precedents for a total claim of €170,000.
In evidence, the company’s legal representatives said a case cited as having resulted in the payment of compensation because false accusations had created a hostile environment had in fact been dismissed.
One that was supposed to have found a company guilty of negligence with regard to workplace stress had in fact concerned harassment based on religion, and no award was made.
Another, concerning career advancement having been blocked because of unproven allegations, had not happened at all.
The hearing was told that none of the nine reported precedents was accurate.
In her decision, the adjudication officer, Patricia Owens, said Oliveira had “wasted a considerable amount of time” because the submissions were “rife with citations that were not relevant, misquoted and, in many instances, non-existent”.
Oliveira had initially described the suggestion that he had used AI as a “baseless … attack designed to distract from the merits of the case” before later acknowledging that he had done so.
In another case where some of the same issues arose, Jadene Maclour v Virtuoso Learning, the adjudicating officer asked Maclour if she had also been using AI. “Of course, I’m just one person,” she replied.
Solicitor Anne O’Connell, who specialises in employment law, said the issue is “rampant in the WRC and in grievances”, describing the length of many submissions as “bonkers, so long even the complainants don’t know what’s in them. Although it’s also being used incorrectly by small employers replying to employees.”
She said she had a case this week in which a client sought information from a firm and received a 12-page acknowledgment letter “that was clearly written by AI. It said the company was going to comply with all of these obligations. I was reading it and thinking: ‘This is fantastic, they’re actually digging their own grave.’”