In July last year, Louis Flores typed a question into the AI chatbot Grok: “What kind of case law can help us in a lawsuit to stop the privatization of NYCHA?”

Flores, a long-time community organizer, lives a block from the Fulton & Elliott-Chelsea Houses, an expansive public housing development in Manhattan that’s run by the New York City Housing Authority. For the last six years, he and a group of other activists have been fighting a legal battle, largely without an attorney, to stop a plan to demolish the existing buildings and build new ones.

Within seconds, Grok gave him a detailed list of cases that he could cite as precedent. Flores packaged them into a 42-page legal brief that he submitted to the court, arguing why the city should postpone the demolition of the buildings until the project goes through a more rigorous public review process.

“I felt confident in the case overall, but we had no help,” he told Gothamist. “I had no idea how it would go.”

It did not go well.

About a week later, NYCHA’s attorney sent a letter to Judge James D’Auguste, pointing out that four of the cases that Flores had cited either don’t exist or don’t say what Flores claimed they did.

Grok had hallucinated them — a fancy, AI-speak way of saying that the chatbot had made them up. Once the judge learned of what had happened, he didn’t just reprimand Flores and the other plaintiffs or ask them to refile the brief. He threw out the entire lawsuit.

“We worked so hard for six years to get to the point where a judge would look at our case,” Flores said, adding that he and his fellow organizers are in grief over the judge’s decision. “I felt like we were denied justice.”

It’s the most extreme example so far among dozens of legal cases in New York that have already tested the limits of how artificial intelligence is being used in the courts, and the willingness of judges to accept legal filings that haven’t been produced by humans.

“We’re getting blamed for being poor and not having attorneys,” Flores said.

The state’s court system acknowledges that AI’s use in legal proceedings has become “increasingly common” and that “judges are seeking guidance” on how to deal with it in their courtrooms, according to an October memorandum issued by an advisory committee established in 2024 to ensure the technology is being used responsibly. Committee members have concluded that AI chatbots don’t require a “novel rule” because their hallucinations are just an extension of an old problem: Litigants and lawyers have always made mistakes in their filings.

Al Baker, a court spokesperson, said the Administrative Board of the Courts is currently reviewing a proposed policy that updates the current rule requiring all signed filings, including those written with the help of AI, to contain only accurate information. The proposed policy does not go so far as banning the technology or forcing litigants and lawyers to disclose their use of AI when they submit court documents. Errors created by AI would simply be subject to the same fines and penalties as any other inaccurate information in case filings, according to Baker.

“The purpose of this policy statement is to promote uniformity and consistency and avoid a hodgepodge of conflicting part rules,” the advisory committee wrote in its memo last year.

But the guidelines don’t say anything about what penalties judges should hand down for false or fabricated information generated by AI. And at the federal level, which is not subject to the state courts’ oversight, at least one judge has created his own rules around the use of chatbots.

Bruce Green, a law professor at Fordham University and an expert on legal ethics, said he expects that both sides will eventually adapt: the courts will create more sophisticated rules around the technology, and new AI tools will be better at producing more accurate citations. But in the current, largely unregulated environment, he said it’s worth requiring transparency from litigants.

“It’s fair to ask them whether they’ve used AI,” Green said. “But for the moment, I don’t think judges should forbid anybody, unrepresented people or lawyers, from using AI tools.”

A free tool

The rapid advancement of AI has provided a valuable tool for those who typically have less access to the legal system.

“Much of the population cannot afford a lawyer at all,” said Stephen Gillers, a professor emeritus and expert on legal ethics at NYU Law School. “It’s getting worse: Legal fees are going up. People’s ability to pay them is going down.”

Millions of people across the United States file lawsuits in state courts without an attorney, according to a 2015 study on low-income litigants. In U.S. federal courts, national data shows that the lawyer-less make up about a quarter of cases each year.

Green said AI “holds a lot of promise for people who can’t afford a lawyer.” But he said its unfettered use could bog down the legal system if the courts can’t adapt quickly and develop a clear set of rules around the technology.

Judges now have to be more vigilant about identifying fictitious case law, and faulty legal documents can lead to delays.

“There’s a risk that people will be using it in ways that are burdening the court,” Green said.

Green said it’s unreasonable to expect already overwhelmed courts and judges to verify every citation in every document. But he said he sympathizes with people who use AI to advance their cases because they can’t afford attorneys, and noted that litigants without lawyers have always been more likely to submit incorrect information for that very reason.

The broad availability of AI chatbots may now be amplifying that tendency.

“They’re trying to do the best they can,” Green said. “It’s a lot to expect that they’re going to file legal briefs at the same quality as what lawyers would file.”

Flores said he used Grok exclusively for research, wrote the filings himself and was embarrassed by the errors he submitted to the court. But he never expected them to be fatal to his case. The judge, he said, had overreacted in his decision.

“He wasn’t looking at the case on its merits,” Flores said.

His lawsuit aside, those without attorneys typically fare better than lawyers when AI makes mistakes.

Because there are no uniform rules governing AI use in court, there are no official tallies of how often AI tools have been used in legal filings or how often judges have disciplined litigants and lawyers for AI-produced errors. But Damien Charlotin, a researcher at the French business school HEC Paris, has assembled a database of more than 900 legal decisions across 31 countries involving erroneous, AI-generated legal documents; in some of those cases, judges sanctioned plaintiffs with financial and legal penalties.

In roughly 40% of those cases, lawyers, paralegals or other legal professionals were responsible for including the erroneous AI-generated information in the filings. Lawyers included the hallucinated errors in 24 out of the 54 New York City cases. Some received fines of up to $10,000 or were referred to the bar association for possible disciplinary action, while others got off with just a warning.

Green said lawyers have “no excuse” for filing faulty information to the courts, but that unrepresented litigants deserve more leniency.

AI v. Nicki Minaj

Tameer Peak turned to ChatGPT and Gemini when he sued the rapper Nicki Minaj for defamation in 2024. Peak, a once-devout fan of Minaj, accused the rapper of making comments during a social media livestream that implied he was mentally unstable.

Peak said he thought his case was straightforward. But after lawyers quoted him five-figure fees to bring a lawsuit on his behalf, he decided to file it himself — with a little help.

“These tools provide access to people who would not have access because of a retainer fee or just simply a lawyer not believing in their case,” Peak said. “AI has allowed me to understand the legal system a bit more, like certain jargon or procedural steps.”

Peak said he used ChatGPT and Gemini to format his complaint and a variety of other motions. He also used the chatbots to find previous cases that could support his own.

Unlike in Flores’ petition, those cases did exist. But one letter Peak wrote asking the court to move the case forward included fabricated quotes from past lawsuits. In response, Judge Vernon Broderick said he was “concerned” about Peak’s use of artificial intelligence “during the course of his litigation.”

Broderick, however, did not throw out Peak’s lawsuit. He instead issued a warning and wrote in his order that he was “sympathetic” to Peak’s status as an unrepresented plaintiff.

“I think that the judge was as fair as he could be,” Peak said, adding that the judge saw it as a relatively minor issue. “They want to decide a case on merit.”

At the moment, Broderick might be a rarity among judges. He’s developed his own set of rules governing the use of AI in the civil cases he hears. They require litigants to disclose when they use chatbots to create legal filings and to independently verify all information those chatbots provide. A spokesperson for the Southern District of New York was not able to make Broderick available for an interview.

Peak said he’s gotten better at making sure the material he cites does in fact exist.

Pushing back

Flores and his group of organizers were finally able to hire two attorneys, John Low-Beer and Thomas Hilgardner, to help with their attempt to halt demolition at NYCHA’s Fulton & Elliott-Chelsea Houses in Manhattan. By then, Flores had already submitted his legal brief with the AI-generated errors.

In an interview with Gothamist, Hilgardner criticized Judge D’Auguste’s order as “outrageous.” While the order said Flores’ papers were “infused” with AI hallucinations, only four bad cases appeared in the filing, which Hilgardner described as a minor defect in the overall lawsuit. The judge, in his view, dismissed the case on a technicality and ignored the central arguments in the original petition.

“Lots of people have done this in the past, and no one’s ever gotten their case dismissed on a matter like this,” Hilgardner said. “The petition stands on its own two feet.”

Hilgardner and Low-Beer said that NYCHA’s lawyers, who first identified the fictitious references, had not asked the judge to punish the plaintiffs. Even if they had, Hilgardner said the plaintiffs should have been given an opportunity to defend themselves before the case was dismissed.

Judge D’Auguste declined Gothamist’s request for comment. His final order stated that the plaintiffs’ use of AI was “far more pervasive than petitioners suggested.”

Gillers said judges should be more flexible on this issue, especially when the errors don’t affect the central arguments of a lawsuit. While standardized rules are helpful, and likely to come, Gillers maintained that the fundamental problem is that lawyers are too expensive for many low-income litigants, who have for decades relied on the internet and other free resources. With AI tools, however, those litigants’ capacity to generate errors has grown exponentially.

“They could go on Google and look up cases, but they can’t check the cases because they don’t have access to Lexis or Westlaw,” Gillers said, referring to legal databases used by attorneys.

Hilgardner filed a notice of appeal on Jan. 31. No date has been set yet for the demolition of the Fulton & Elliott-Chelsea Houses.