Attorneys working with ClassAction.org are investigating whether class action lawsuits can be filed on behalf of job applicants who may have been harmed by the use of artificial intelligence (AI) during the screening and interview process.

Specifically, they believe certain companies that offer AI pre-employment screening services for employers could be illegally providing consumer reports about job applicants without adhering to the strict requirements of the Fair Credit Reporting Act (FCRA). For instance, companies that provide consumer reports must follow procedures to ensure the reports are accurate, give consumers copies of their reports upon request and investigate disputes.

As part of their investigation, the attorneys want to speak with individuals who, in the past two years, applied for a job with a company that used AI as part of the screening or interview process, including (but not limited to) AI services provided by the following companies:

Hirevue
Workday
Greenhouse
Lever (by Employ)
Ashby

To get in touch, fill out the form on this page. You may be able to start a class action lawsuit to help yourself and other applicants get back money for any harm caused by potential FCRA violations.

AI Hiring and Screening: How Is It Used?

The use of AI in recruiting and hiring is becoming increasingly common; for instance, the World Economic Forum reported in March 2025 that roughly 88% of companies use AI for initial candidate screening.

According to Resume.org, AI tools are used by companies for a range of tasks in the hiring process, including reviewing resumes, researching and assessing candidates, communicating and scheduling, and even conducting interviews.

In fact, job applicants have reported on Reddit that AI job interviews with companies like Hirevue are especially common among high-volume employers such as Target, Johnson & Johnson and JP Morgan. Attorneys suspect that employers filling high-turnover roles, such as sales, customer service and call center positions, may also be using AI in the hiring process.

Is There Bias in AI Hiring?

An October 2024 survey of hundreds of business leaders indicates that roughly seven in 10 companies allow AI tools to reject candidates without any human oversight—and concerns are being raised that the lack of human involvement could leave room for discrimination and AI hiring bias.

In fact, a 2024 study from researchers at the University of Washington found that massive text embedding models, AI models that rank documents by similarity, were biased in a resume screening scenario, favoring white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases. Further, the study found that resumes bearing names associated with Black men were disadvantaged in up to 100% of cases.
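To illustrate the kind of name-swap audit the researchers describe, the sketch below scores one resume under different candidate names and tallies how often one group outscores the other. It is a minimal, hypothetical sketch: score_resume is a placeholder for whatever model an employer actually uses, and the names and tallying are simplified assumptions, not the study’s protocol.

```python
# Hypothetical name-swap audit for an AI resume screener.
# score_resume() is a stand-in for the model under audit; the seeded
# random placeholder below only makes the sketch runnable end to end.
import random
from itertools import product

RESUME = "10 years of B2B sales experience; exceeded quota 8 years running."

# Illustrative names only; real audits draw on validated name lists.
WHITE_ASSOC = ["Todd Becker", "Katie Schmidt"]
BLACK_ASSOC = ["Darnell Washington", "Keisha Robinson"]

def score_resume(text: str) -> float:
    """Placeholder scorer. Replace with the actual screening model,
    e.g., an embedding similarity score against the job description."""
    return random.Random(text).random()  # deterministic per input text

def favor_rate(group_a: list[str], group_b: list[str]) -> float:
    """Fraction of name pairs in which the group_a resume outscores the
    group_b resume. The resume body is identical; only the name changes."""
    pairs = list(product(group_a, group_b))
    wins = sum(
        score_resume(f"{a}\n{RESUME}") > score_resume(f"{b}\n{RESUME}")
        for a, b in pairs
    )
    return wins / len(pairs)

print(f"White-associated names favored: {favor_rate(WHITE_ASSOC, BLACK_ASSOC):.0%}")
```

With an unbiased scorer, the favor rate should hover near 50%; rates as lopsided as the study’s reported 85.1% are the kind of disparity such an audit is designed to surface.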

A study published in May 2025 by researchers at the University of Hong Kong and the Chinese Academy of Sciences found that five leading large language models (LLMs) systematically scored resumes of female candidates higher than those of male candidates, regardless of race, and most awarded lower scores to Black male candidates than to white male candidates with identical qualifications. The researchers noted that the pro-female and anti-Black male biases were consistent across all five LLMs, suggesting that they are “deeply embedded in how current AI systems evaluate candidates.”

The researchers hypothesized that the biases could be the result of the overrepresentation of certain social views in the AI training data (mostly internet content). It’s also possible that the debiasing procedures used by AI developers may have overcompensated for certain biases while introducing others, the researchers noted.

According to the study, the findings indicate that human oversight of AI in the hiring process “remains essential” and that although the tools may reduce human biases in some areas, they “introduce new patterns of discrimination that require monitoring.”

Lawsuits Filed Over AI Employee Screening

In September 2023, iTutorGroup, which provides English-language tutoring services to students in China, paid $365,000 to settle an AI screening discrimination lawsuit brought by the Equal Employment Opportunity Commission (EEOC). The lawsuit claimed iTutorGroup programmed its AI recruitment software to automatically reject applications from female candidates who were 55 or older and male candidates who were 60 or older. According to the case, the AI software rejected over 200 qualified applicants based on their age, in violation of the Age Discrimination in Employment Act.

Another AI hiring bias lawsuit filed in 2024 claims Workday’s job applicant screening technology discriminates against people over age 40. The plaintiff says he was rejected from over 100 jobs on the human resources software company’s platform due to his age, race and disabilities, and four other plaintiffs have since added their own age discrimination claims. The plaintiffs argue that some of their applications were rejected within hours or even minutes of submission, and during non-business hours, indicating that no human reviewed them.

In July 2024, CVS privately settled a proposed class action lawsuit filed by a job applicant who claimed the company broke Massachusetts law by having prospective employees take what legally amounted to a lie detector test. Specifically, the lawsuit alleged that applicants were required to undergo Hirevue video interviews, which used Affectiva’s AI technology to track facial expressions (e.g., smiles, smirks) and assign each candidate an “employability score.” According to the AI hiring lawsuit, the score included measuring a candidate’s “conscientiousness and responsibility” and “innate sense of integrity and honor.”

In another example, an AI hiring bias complaint filed by the ACLU of Colorado in March 2025 alleged that Hirevue’s hiring assessment platform discriminated against deaf and non-white individuals. The Indigenous and deaf woman at the center of the complaint says she worked for Intuit for several years, during which she received positive feedback about her performance, yet was still subjected to an AI video interview when she applied for a promotion. The woman asked for but was denied a reasonable accommodation (human-generated captioning during the AI interview so she could access instructions and questions) and was later turned down for the job, with the feedback recommending she “practice active listening,” the AI screening complaint alleged.

In addition to the risk of discrimination in AI hiring, concerns have been raised about data security and privacy, as AI-driven hiring tools can collect a significant amount of sensitive data, such as biometric identifiers, potentially without proper consent.

How Might AI Recruiters Violate the Fair Credit Reporting Act?

The Federal Trade Commission (FTC) notes that companies that provide screening services for employers may be considered consumer reporting agencies under the Fair Credit Reporting Act if they provide information that indicates a person’s “credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living.”

Under the FCRA, consumer reporting agencies are required to follow reasonable procedures to ensure the “maximum possible accuracy” of the reports and obtain certifications from their clients that they are complying with the FCRA. The companies are also required to give consumers access to their files when requested, investigate disputes and correct or delete any inaccurate, incomplete or unverifiable information.

Employers are also subject to FCRA rules when obtaining consumer reports for employment purposes. Before obtaining the report, the employer must inform the job applicant (in a standalone format separate from an application) that they may use information from the report for employment decisions, and they must also get the applicant’s written permission to obtain the report.

The employer must also certify to the provider of the consumer report that they will not discriminate against the applicant or otherwise misuse the information in the report.

If the employer takes an adverse action against an applicant (such as rejecting their application) based on information from their consumer report, the employer must give the applicant a notice that contains a copy of the report and a summary of their rights under the FCRA. The adverse action notice must also include the name, address and phone number of the company that provided the report and inform the applicant that they have a right to dispute the accuracy and completeness of the information.

The attorneys believe that the pre-employment screenings provided by AI companies to employers may constitute consumer reports under the FCRA—and that both the companies that provided the reports and the employers who requested them may have failed to adhere to the FCRA’s requirements.

How Could an AI Employment Screening Lawsuit Help?

A class action lawsuit, if filed and successful, could help consumers recover money for any FCRA violations. It could also force the AI companies and their clients to change their hiring practices and ensure that their use of AI tools complies with federal law.

What You Can Do

Did you apply for a job in the past two years with a company that used AI as part of the screening or interview process? Help the investigation by filling out the form on this page.

After you fill out the form, an attorney or legal representative may reach out to you directly to ask you some questions about your experience and explain how you may be able to help get a lawsuit started. It costs nothing to get in touch or talk with someone about your rights, and you’re not obligated to take legal action if you don’t want to.