Applying for a job already sucks on its own. But increasingly, job seekers are left wondering whether an AI system is screening them out before a human ever sees their application.

A new lawsuit hopes to change that by forcing more transparency into how AI hiring tools work. The case argues that automated applicant “scores” should be legally treated like credit checks, subject to the same consumer protection laws.

The proposed class action was filed on Wednesday in California state court by two women working in STEM who say AI hiring screeners have filtered them out of jobs they were qualified for.

“I’ve applied to hundreds of jobs, but it feels like an unseen force is stopping me from being fairly considered,” said Erin Kistler, one of the plaintiffs, in a press release. “It’s disheartening, and I know I’m not alone in feeling this way.”

And she’s right that she’s not the only person feeling this way, at a time when more companies are relying on AI for hiring. Roughly 88% of companies now use some form of AI for initial candidate screening, according to the World Economic Forum.

The lawsuit specifically targets Eightfold, an AI human resources company that sells tools designed to help employers manage recruiting and hiring. Among its offerings is a tool that generates a numerical score predicting the likelihood that a candidate is a good match for a given role.

That scoring system sits at the center of the case. Eightfold’s “match score” is generated using information pulled from a variety of sources, including job postings, an employer’s desired skills, applications, and, in some cases, LinkedIn. The model then produces a score ranging from zero to five that “helps predict the degree of match between a candidate and a job position.”

The lawsuit argues that this process effectively produces a “consumer report” under the Fair Credit Reporting Act (FCRA), a federal law passed in 1970 to regulate credit bureaus and background check companies. Because the score aggregates personal information and translates it into a ranking used to determine eligibility for “employment purposes,” the lawsuit claims Eightfold should be required to follow the same rules that apply to credit reporting agencies.

Those rules include notifying applicants when such a report is being created, obtaining their consent, and giving them the chance to dispute any inaccurate information.

“Eightfold believes the allegations are without merit. Eightfold’s platform operates on data intentionally shared by candidates or provided by our customers,” an Eightfold spokesperson told Gizmodo in an emailed statement. “We do not scrape social media and the like. We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws.”

Still, the lawsuit is seeking financial damages as well as a court order requiring Eightfold to comply with state and federal consumer reporting laws.

“Qualified workers across the country are being denied job opportunities based on automated assessments they have never seen and cannot correct,” said Jenny R. Yang, a lawyer for the case and former chair of the U.S. Equal Employment Opportunity Commission. “These are the very real harms Congress sought to prevent when it enacted the FCRA. As hiring tools evolve, AI companies like Eightfold must comply with these common-sense legal safeguards meant to protect everyday Americans.”