Perhaps we should forgive Covington attorney John R. Walker or, at least, be willing to afford him a little grace.
He was admitted to the bar in 1983 and, as he notes in one filing, is a 43-year veteran of the legal profession.
So it is perhaps easy to believe his explanation for mistakes he made in a lawsuit involving the city of Mandeville. Walker says he didn’t understand the “limitations and potential pitfalls” of the cutting-edge tools he was using to help him write a legal brief. Plenty of folks struggle to adapt to new technologies.
But Walker’s struggles with technology might get him fined or otherwise punished when he has to go before a judge next week. So what exactly did he do?
Per his own admission, Walker filed a motion in the case that, as is normal, included case law citations and quotations. But in writing the motion, he used two generative AI programs: Westlaw Precision AI and ChatGPT.
The programs, Walker said, “hallucinated” cases. Made them up out of whole cloth. And Walker didn’t notice before filing.
U.S. District Judge Brandon Long was not amused.
“The Court has chosen to ignore most of Plaintiffs’ arguments brought in its response motion because many, if not all, of Plaintiffs’ case citations are to cases that do not exist, or, if they do exist, incorrectly quote from or inaccurately describe its facts and holding,” he wrote in a ruling that went against Walker’s client.
“Presumably,” Long continued, “this is the result of an astonishingly careless use of generative AI … A failure by licensed attorneys to perform even a cursory check to ensure their cited caselaw actually exists is wholly unacceptable.”
To Walker’s credit, he took full responsibility and threw himself on the mercy of the court.
“I was new to using these tools and did not appreciate the limitations of and potential pitfalls in using such tools, including the risk that ChatGPT would ‘hallucinate,’” he wrote.
He vowed it would never happen again.
It would be easy, at this point, to dismiss the tale as just a one-off example of a lazy lawyer and the seductiveness of AI. But Walker is far from unique.
The last three years have seen hundreds of documented cases worldwide of attorneys failing to check their AI-written briefs for fictional content, according to a database maintained by French researcher Damien Charlotin.
In some cases, attorneys have been fined or otherwise punished. Walker probably should be, too.
None of this should be surprising. These AI models are seductive precisely because they seem so authoritative. Real or not, what they produce looks good. And if they can fool longtime practitioners like Walker with their bunkum, what chance do we laypeople have?
Perhaps that’s the real power of AI: It’s not actually intelligent, but it’s very good at making us believe it is.