Recent Cases Highlight the Legal Uncertainty and Challenges Associated with the Increased Use of AI in Hiring Employees

By Ethan Krasnoo

As recounted below, recent lawsuits over companies' use of AI in employment hiring decisions highlight serious legal challenges and concerns for employers. Last month, a class action lawsuit was filed in California state court against Eightfold AI Inc. The suit alleges that the company uses hidden AI technology to collect data about job applicants – including social media profile information – and evaluates them for, among other things, behavior, attitudes, and intelligence. It then sells potential employers reports that score applicants on their likelihood of success on the job. The Complaint argues that Eightfold AI is a consumer reporting agency and that, in collecting information on job applicants, it violates the federal Fair Credit Reporting Act and California's Investigative Consumer Reporting Agencies Act by failing to give required notice and disclosures to applicants whose data is being mined and utilized, and by failing to provide them the required means of correcting any purported errors in the reports. While Eightfold AI is expected to argue that it is not a consumer reporting agency and that these legal requirements therefore do not apply, the case should prompt companies to evaluate their use of AI through a notice-and-correction lens. And the lawsuit against Eightfold AI reflects just one set of concerns raised by the use of AI technology in the employment hiring space.

Another significant lawsuit challenging AI use in employment hiring is Mobley v. Workday, Inc., filed in federal court in California in 2023 and still ongoing. The plaintiff alleges that Workday's AI hiring tools embedded intentional and inherent bias based on protected characteristics, leading to discrimination in violation of numerous federal laws that protect employees and job applicants from discrimination based on age, race, and disability. Although the court has since dismissed the intentional discrimination claims, it has allowed the lawsuit to proceed on the disparate impact claims – that is, claims that inherent bias in the AI tools has the effect of causing discrimination even absent discriminatory intent. For example, this could occur if an AI system rating zip codes by distance to the office unintentionally assigned lower ratings to some farther-away areas made up of predominantly Black residents. Another hypothetical example: an AI tool assigning greater weight to height in evaluating candidates – perhaps for a warehouse role that involves reaching high shelves – would score taller applicants more favorably, and such a criterion could disparately impact female candidates over male candidates, since females on average tend to be shorter than males.

The judge in Mobley v. Workday, Inc. conditionally approved the case to proceed as a nationwide collective action. The case should prompt companies to evaluate any AI-based employment tools they use or are considering, and to perform due diligence on those products and the vendors providing them – including understanding how the tools work and auditing whether their use is in fact causing unintentional bias and disparate impact in hiring practices. In light of Mobley v. Workday, Inc., companies may also want to seek indemnification (meaning coverage, should the employer be sued by third parties in connection with use of the AI products) or warranties from vendors providing such AI services, given the risk of lawsuits stemming from such use. And because AI products may be unable to weed out inherent bias, companies employing such products, as well as AI vendors, should maintain significant human oversight over the tools and their use.

States and municipalities have also passed laws in recent years to curb misuse of AI. For example, New York City's law, which took effect in 2023, requires independent auditing of automated employment decision tools to evaluate them for bias before they can be used, and, where applicable, residents of New York City must also receive notice that an employer or employment agency is using such tools in its hiring or employment evaluations. Similarly, last year California passed a law regulating automated decision systems that assist in or make employment decisions; the law prohibits employers from using such systems or selection criteria to discriminate against applicants or employees based on protected categories (such as race, gender, age, or disability), and it requires that data associated with such tools be preserved for four years. Illinois has likewise imposed requirements on AI use in connection with video interviews.

The cases and laws described above are likely just the tip of the iceberg of AI-based legal claims and legislation to come.

For more information, please contact RPJ Partner Ethan Krasnoo, who counsels clients in the areas of complex commercial litigation, arbitration, mediation and dispute resolution, employment, intellectual property, and entertainment and media. Mr. Krasnoo is admitted to practice law in New York, the United States District Courts for the Southern and Eastern Districts of New York, the United States Court of Appeals for the Second Circuit, and the United States Tax Court.