The case that Commonwealth Court was slated to hear Wednesday morning, South Side Area School District et al. v. Pennsylvania Human Relations Commission, was already bound to be contentious: It involves a challenge to the commission’s application of anti-discrimination protections to transgender students in public schools.
But moments after attorney Thomas W. King rose to make his case against the protections, Judge Matthew Wolf introduced a new question: whether King’s firm had made improper use of artificial intelligence in preparing its brief.
“When I read the brief that was signed by three attorneys from your firm, I had a problem,” he told King and fellow lawyer Thomas E. Breth. “I read what I believe to be artificial intelligence hallucinations.”
The plaintiffs in the case include two Republican state House members, South Side Area School District in Beaver County, Knoch School District in Butler County, and parents. They argue that the state commission overstepped its authority by expanding sexual-discrimination protections to gay and transgender people in 2023. Part of that argument included a 50-page October filing that cited a number of prior cases it said backed up its arguments on issues that include limitations on public agencies.
But Wolf said many of those citations included quotes that don’t exist or were taken out of context, or pointed to cases that have since been overruled.
“You quote the Pennsylvania Supreme Court from the Bayada case for a quote that does not exist in that case,” Wolf said at one point. “You cite the Popowsky case for a proposition and a quote that does not exist, and that case is not even on point to this case.”
“I’m unhappy,” Wolf concluded. “I feel as though you’ve put the court at a disadvantage.”
“I would apologize for any artificial intelligence that might be contained in this brief,” said King. “It would never be our intention to do so, and … I would like to say that we pride ourselves in being aware of this issue and not including such things in our briefs. And so I apologize to the court to the extent that any of that’s included.”
On Thursday, King told WESA that after the hearing, he conferred with everyone in his office and “no one used AI to compile this brief.
“We don’t draft briefs using AI,” he said, and then added: “I’m not a lawyer that would even know how to use [AI]. Neither is Tom Breth.”
The two lawyers are “lucky we can even use a phone,” he joked.
“To the extent that any human error was made, I’m happy to correct it,” he said. But “I stand by the brief.”
“I spent a lot of time on this, and I had a lot of people double-check what I thought so that I would not come in here and say things that were not true,” Wolf said during the hearing. “I feel very confident about what I’ve said on the record.”
After the judges took a 20-minute recess, King said he would correct and refile the brief, though the judges did not indicate whether it would be accepted.
The dispute has political resonance. The plaintiffs include Republican state House members Aaron Bernstine and Barbara Gleim. A third House Republican, Stephenie Scialabba, is one of four lawyers originally listed as attorneys on the case, though she did not sign the brief that Wolf flagged Wednesday. According to her LinkedIn page and an online directory, she now works for another Pittsburgh-area firm.
Scialabba also heads a House task force dedicated to artificial intelligence. When she was named to the post in March, House GOP leader Jesse Topper hailed her “extensive history in the practice of law concerning emerging cybersecurity and artificial intelligence issues.”
Scialabba’s legislative office directed questions about the case to her law office; Scialabba did not return a call or respond to a text message on Wednesday and Thursday. (Gleim and Bernstine also did not respond to calls to their offices.)
AI has already begun to transform the practice of law, but AI-generated “hallucinations,” as the technology’s fabrications have come to be called, have bedeviled lawyers and judges elsewhere.
Professional standards hold that lawyers who use AI in their research must ensure its citations are valid and accurately reflect the rulings involved. In a formal guidance opinion, the American Bar Association says AI can produce “nonexistent opinions, inaccurate analysis of authority, and use of misleading arguments.”
The opinion says lawyers are obliged “before submitting materials to a court, to review these outputs [and] correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”
King said his firm runs briefs through software that verifies the accuracy of citations. And while he said his firm didn’t use AI, he noted that its use is permitted.
The judges themselves spent much of the hour-long hearing discussing the merits of the case, though the lawyers were told they might hear more about questions related to artificial intelligence at a later date.
Some jurists have been bracing for concerns over AI. In an interview with WESA earlier this fall, state Supreme Court Justice David Wecht said the courts would likely face cases involving the technology. Among the concerns with it, he said, is the fact that “there are cases where judges in other jurisdictions have issued opinions that have [relied] on citations from lawyers that are non-existent — I mean the cases themselves don’t exist.”
“This hasn’t happened in Pennsylvania, thank goodness,” Wecht added.