It started as a legal dispute over dog custody and visitation after a San Diego County couple broke up.

The judge’s order cited two similar cases and ultimately sided with the woman who had kept Kyra, the dog, for the previous couple of years, giving her full rights and denying her ex’s requests.

It wasn’t until later, well into the appeal, that the opposing side discovered that the cases on which the ruling was based either didn’t exist or were wholly irrelevant to the case at hand. They appeared to be hallucinations generated by artificial intelligence.

That discovery would turn the case into yet another sharp warning to attorneys and, notably this time, the judge, about the dangers of AI.

“As this case illustrates, it is equally important that judicial officers and court staff who are not themselves using generative AI verify the citations contained in proposed orders submitted to them by counsel,” the 4th District Court of Appeal, Division 1, wrote in a footnote added a week later to the unanimous March 5 opinion.

There have been several documented cases of AI hallucinations making their way into court filings, so many that watchdogs now track them. Earlier this year, the California State Bar announced its first discipline of an attorney in a case involving the use of generative AI.

Judges are also under scrutiny. Last fall, Senate Judiciary Committee Chairman Sen. Chuck Grassley asked two federal court judges how AI hallucinations made it into their orders. Grassley’s office said both judges acknowledged that their staffs had used generative AI in drafting the orders.

Still, AI is here. Last month, judicial AI company Learned Hand announced it would supply its technology to a select group of Los Angeles Superior Court judges to use for summarization, research, analysis and similar tasks. And last week, Northwestern University released a study finding that 60% of the federal judges who responded to its survey said they used at least one AI tool in their work.

In making its ruling in the fight over Kyra, the San Diego appeals court said it had “no difficulty concluding that it is an abuse of discretion for a court to rely in material part on fictional case authorities in rendering a decision or making an order.”

“Although we appreciate that trial courts must often rely on the parties to prepare written orders, it is imperative for both the court and the parties to verify that the citations in all orders are genuine and truly stand for the propositions cited,” Appellate Justice Martin Buchanan wrote. “This is especially vital with the increasing incidence of hallucinated case citations generated by AI tools.”

In a twist, the appeals court declined to overturn the original judge’s ruling, even while acknowledging that it was based on AI hallucinations. The lawyers should have caught it earlier, the panel ruled.

The battle over Kyra

During the initial litigation, when Roxanne Chung Bonar, the attorney for Kyra’s owner, cited two precedents she said were relevant to this particular San Diego case, it didn’t raise eyebrows. The San Diego Superior Court commissioner presiding over the matter as a temporary judge even noted those two cases in siding with Bonar.

The judge then ordered the attorney on the losing side to draw up an order reflecting the ruling. The losing attorney did as she was told, incorporating Bonar’s citations as noted by the judge.

With the loss, a different attorney, David Beavans, handled the case on appeal for the ex. Beavans said there is little in the way of case law regarding custody of pets, so he took the case in hopes the appeals court might use it to set precedent.

He came to realize that the two cases Bonar cited, the cases the court relied on in its order, were not real. “We are in a new time,” Beavans told the Union-Tribune about the new age of artificial intelligence. “Never before have we seen cases invented out of whole cloth.”

“AI will attempt to introduce a quote or a finding that is so seductive, so perfect, so inside the 10-ring that it begs to be used as ‘This is the quote that wins my case,’” Beavans said. “It’s so good even attorneys have been lulled into letting their guard down.”

One of the cases relied upon, as cited, “is not a real case,” according to the appellate opinion, which said the citations provided are from “a criminal case having nothing to do with pets or custody determinations.”

The other, as cited, “is also not a real case,” Buchanan wrote. There is a real case with the same name, he wrote, but with a different official citation, and it related to spousal support, not pets. That real case also did not focus on emotional well-being and the stability of the parties involved, which was at issue in the case at hand.

It’s still unclear exactly what happened at the initial court hearing during which the judge issued her order; there was no transcript provided on appeal.

During the appeals process, Bonar said the claim that the cases were fabricated was “baseless.” She blamed opposing counsel for not being able to find the cases and called his competence into question.

During oral arguments, Bonar said she didn’t remember where those fictitious citations came from, the opinion states. She also said she did not have a paid subscription to a legal research service but had been using online resources, including AI, for her research.

According to the opinion, Bonar said someone had sent her client a Reddit article referencing one of the cases, and the client mentioned it in her declaration.

Bonar was hit with a $5,000 sanction, with the appeals court noting that she had “doubled down” when asked about the citations’ questionable provenance.

Bonar did not respond to requests for comment.

University of San Diego law professor Judith Lihosit, who teaches AI and legal research, said the appeals court was “really trying to strike a balance” with the ruling.

“You have the appeals court saying that the trial court really messed up,” Lihosit said. “This is really bad because this is a court order that has hallucinations in it. … It’s the court’s responsibility to verify and fact-check all this stuff.”

And by coming back and adding that footnote, the professor said, the appeals court “wanted to make it clear that judges are responsible to not only verify their own use of AI, but to verify anything that’s submitted to them.”

While the appellate court did not overturn the ruling (which favored the woman who already had possession of the dog), the panel noted it was possible the trial court judge could have come to the same conclusion without the hallucinated cases.

‘No system is infallible’

The San Diego Superior Court told the Union-Tribune that attorneys and even litigants representing themselves have “a clear and affirmative obligation” to ensure their filings are accurate.

But court officials acknowledged that the courts themselves have a “real and ongoing” concurrent responsibility to review materials presented by litigants and to take reasonable steps to ensure their accuracy — a responsibility that “exists alongside the reality that courts process a significant volume of cases and filings each day in service to the public.”

“While the court’s review is an important layer of protection and we do have manual cite checking tools available, by necessity, absent a red flag or objection, the court is often reliant in routine matters on the accuracy of the materials submitted by counsel and litigants,” the court said in a statement.

The San Diego court and others across the state are looking into technologies that could help prevent such issues, although cost is a consideration.

“Ultimately,” the San Diego Superior Court said, “no system — human or technological — is infallible.”