People entering Brooklyn Housing Court. (Adi Talwar/City Limits)
Every year, more than 2 million New Yorkers walk into civil courtrooms without a lawyer. They face evictions, custody disputes, benefit terminations, and immigration proceedings—adversarial processes with life-altering stakes—alone. The other side almost always has counsel. The result is not justice. It is a rout.
I spent 20 years in New York City Housing Court as an advocate. I watched tenants lose their homes not because the law was against them, but because they could not understand the law well enough to use it. The justice gap in this country is not, at its core, a funding problem. It is a knowledge problem. Legal knowledge is rationed by a licensing system that has, for generations, ensured that access to it tracks wealth.
That is why I was hopeful when artificial intelligence began to offer a different possibility. I have spent the past several years working at the intersection of AI and access to justice, and I recently helped build Roxanne AI, a chatbot designed specifically to give tenants plain-language legal guidance. Not a lawyer. Not a replacement for a lawyer. A knowledgeable friend—the kind that people with money have always had, and that everyone else has always lacked.
Now the New York State Senate is poised to make tools like Roxanne AI illegal. Senate bill S7263, sponsored by Sen. Kristen Gonzalez and currently on the Senate floor calendar, would impose civil liability on any “proprietor” of a chatbot that provides substantive responses, information, or advice that, if given by a human, would constitute the unauthorized practice of a licensed profession. The bill covers every field regulated by the Education Law or the Judiciary Law—medicine, law, social work, nursing, architecture, and more.
Sen. Gonzalez’s instincts are right. There are genuinely dangerous chatbots out there, deployed by companies with no professional accountability, that confidently give people wrong medical diagnoses or botched legal strategies. That is a real harm worth preventing. The impulse to protect New Yorkers from being misled by machines masquerading as professionals is a legitimate one.
But the bill as written does not distinguish between a predatory chatbot impersonating a doctor and a nonprofit tool helping a Bronx tenant understand whether her landlord’s lockout is illegal. It does not carve out organizations working in good faith for underserved communities. It does not distinguish between a chatbot that claims to be a lawyer and one that is transparently an AI, trained on legal information, offering general guidance with appropriate caveats. Under S7263, they are the same. And both could expose their developers to liability.
The chilling effect would be immediate and severe. Legal aid organizations, academic institutions, and legal technology nonprofits would face impossible choices: shut down their AI tools, water them down to the point of uselessness, or expose themselves to open-ended litigation. The well-resourced companies that sell AI to law firms would weather this fine. The scrappy nonprofits building tools for the people who need them most would not.
We have been here before. For decades, the legal profession has used the unauthorized practice of law rules to suppress competition and maintain its monopoly over legal information—not always out of malice, but always with the effect of keeping ordinary people dependent on professionals they cannot afford. S7263, without intending to, threatens to extend that monopoly into the AI era precisely when technology had begun to break it open.
The irony is exquisite and painful. Just as AI tools are beginning to give a grandmother in Queens the ability to draft a response to a housing court petition, help a farmworker in the Hudson Valley understand his wage-theft rights, or walk a domestic violence survivor through a protective order application, New York may decide that offering them that information is grounds for a lawsuit.
None of this requires abandoning consumer protection. A better bill would target deception specifically: chatbots that claim to be licensed professionals, that create attorney-client relationships without disclosure, that fail to identify themselves as AI. It would impose transparency requirements rather than blanket substantive prohibitions. It would create safe harbors for nonprofit tools with proper disclaimers. California’s disclosure-first approach, which requires chatbots to identify themselves when a user might reasonably believe they are human, offers a model worth adapting.
The justice gap is not a feature of our legal system. It is a failure of it. For the first time in the history of American law, technology exists that could meaningfully close that gap—not by replacing lawyers, but by giving people the knowledge they need to navigate a system that was never designed with them in mind.
New York should not be the state that stops that from happening. Sen. Gonzalez should amend this bill before it goes to a full vote. The millions of New Yorkers who will never have a lawyer are counting on someone in Albany to understand what is actually at stake.
Sateesh Nori is a legal technology advocate, a former housing court attorney, and the author of “Sheltered: Twenty Years in Housing Court.” He is a co-creator of Roxanne AI and an affiliate of NYU Law School.
