“Algorithmic Accountability: Addressing Discrimination in AI-Based Hiring Tools,” by Anna Beckelman with Ethan Krasnoo
In November 2021, the New York City Council passed a law, set to go into effect in January 2023, regulating employers’ use of artificial intelligence (AI) hiring tools in the city; similar legislation has been proposed elsewhere in the United States, including at the federal level. These laws aim to address the potential for bias and discrimination in such tools, a concern that has grown as more employers automate and streamline their hiring procedures.
Artificial intelligence can appear in a number of different forms throughout the recruiting process. One of the best-known examples is the use of resume screening tools, which have evolved over the years from searching for keywords in candidates’ resumes to comparing resumes and determining which candidates have the best work history for an open position.[1] Many employers also utilize face and voice recognition software to analyze candidates’ facial expressions and voices in recorded video interviews, along with various games and tests that evaluate candidates’ aptitudes and cognitive and personality traits.[2] AI may even determine whether someone sees a job listing in the first place, as some companies have used it to search social media profiles and decide whether to advertise an open job to a particular individual on a given social media platform.[3] Whatever their exact purpose, these AI tools typically utilize algorithms – processes or series of steps designed to answer questions, make decisions or complete tasks[4] – to evaluate candidates and, in many cases, decide who moves forward in the hiring process.
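To make the idea of an algorithmic screening step concrete, the simplest form described above, a keyword matcher, can be sketched in a few lines. This is a purely illustrative toy, assuming invented resumes and keywords; real commercial tools are far more complex and proprietary.

```python
# Hypothetical sketch of the simplest resume screening tool described
# above: a keyword matcher. All names and data here are invented for
# illustration; real tools are far more sophisticated.

def keyword_score(resume_text: str, keywords: list[str]) -> int:
    """Count how many required keywords appear in a resume."""
    text = resume_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def screen(resumes: dict[str, str], keywords: list[str], threshold: int) -> list[str]:
    """Return the candidates whose resumes match enough keywords."""
    return [name for name, text in resumes.items()
            if keyword_score(text, keywords) >= threshold]

resumes = {
    "Candidate A": "Python developer with SQL and cloud experience.",
    "Candidate B": "Retail manager; strong customer service record.",
}
print(screen(resumes, ["python", "sql"], threshold=2))  # → ['Candidate A']
```

Even this trivial version shows why such tools raise concerns: the screening outcome depends entirely on which keywords (and which threshold) the designer chooses.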
The use of artificial intelligence in the hiring process has reduced operational costs and increased efficiency for many businesses.[5] “It allowed us to broaden the base of applicants…we doubled the number of applications we received, while also condensing the hiring process from several months to about four weeks,” a Unilever spokesperson said in 2019 regarding the company’s use of AI-based tools.[6] Proponents of AI also claim that the algorithms used are more fair and objective in evaluating candidates than human recruiters, as they “avoid the bias that is inherent in human decision-making.”[7] Theoretically, then, the use of AI hiring tools could reduce or even remove the disadvantages many candidates have faced in their job searches due to gender, race, age or disability.
However, with time and use, AI’s power to decrease bias in employment decisions is being called into question; some critics claim that AI actually perpetuates and exacerbates discrimination. Those wary of using AI in hiring argue that these tools are only as good as the algorithms they rely on, which are designed by humans and typically trained on the resumes of previous hires, and therefore reflect the biases of the individuals who made those hiring decisions.[8] Amazon, for example, reportedly scrapped an AI hiring tool that repeatedly downgraded resumes that included the word “women’s” or indicated that a candidate had attended an all-women’s college.[9] The tool’s bias against women resulted from its “training” to vet applicants based on patterns in resumes submitted to Amazon over a 10-year period, most of which came from men.[10] While Amazon claims that the tool in question was never used to evaluate candidates in an actual hiring process,[11] had it been, it would have perpetuated women’s long-standing difficulties obtaining employment in the tech industry. Critics also point out that such resume mining tools may screen out disabled candidates who could not participate in certain extracurricular activities or obtain certain work experience that algorithms have been trained to look for, without recognizing that a candidate may have other experience that makes them a good fit.[12]
In resume mining software and other automated hiring tools, bias may also surface in less obvious ways when the variables an algorithm considers act as proxies for protected characteristics. For example, an algorithm may learn to disfavor candidates from particular ZIP codes, which often serve as a proxy for race.[13] Similarly, PricewaterhouseCoopers has been accused of screening out older candidates after its recruiting tools discarded applications without email addresses ending in .edu; because such university accounts are typically used by students and recent graduates, the practice suggested a preference for younger applicants.[14]
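The proxy problem described above can be illustrated with a small sketch, using entirely invented data: a screening rule that never sees a protected attribute, only ZIP code, can still produce sharply skewed outcomes whenever ZIP code is correlated with group membership.

```python
# Illustrative sketch (all data invented) of a facially neutral
# variable acting as a proxy: the rule below only sees ZIP codes,
# never group membership, yet its outcomes differ by group because
# ZIP code and group are correlated in the applicant pool.

from collections import defaultdict

# Invented applicant pool: (group, zip_code). Group is never an
# input to the screening rule.
applicants = [
    ("group_1", "10001"), ("group_1", "10001"), ("group_1", "10002"),
    ("group_2", "10003"), ("group_2", "10003"), ("group_2", "10001"),
]

# A "neutral" rule learned from past hires: favor certain ZIP codes.
favored_zips = {"10001", "10002"}
def passes_screen(zip_code: str) -> bool:
    return zip_code in favored_zips

# Tally selection rates per group, even though group was never used.
rates = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, z in applicants:
    rates[group][1] += 1
    rates[group][0] += passes_screen(z)

for group, (selected, total) in rates.items():
    print(group, selected / total)
# group_1: 3 of 3 selected; group_2: only 1 of 3 selected
```

Removing the protected attribute from the inputs, in other words, does not remove the bias when a correlated proxy remains.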
Facial recognition and voice analysis programs have also generated controversy. Critics allege not only that such software is unscientific,[15] but also that it perpetuates discrimination in the way it assesses job candidates. Studies have found, for example, that two popular facial recognition programs used to evaluate candidates’ emotions regularly assigned more negative emotions to black males than to white males,[16] often registering them as angrier and more contemptuous even when they were smiling.[17] Such technology can also disadvantage applicants with craniofacial abnormalities or atypical speech patterns, as well as those on the autism spectrum, unfairly blocking them from moving further in the interview process.[18]
For candidates with disabilities, the games and tests many companies use in the hiring process may not be accessible. While some companies creating these programs provide versions modified for individuals with certain conditions, such as dyslexia, colorblindness or ADHD, applicants may be reluctant to take advantage of these accommodations, fearful that choosing a modified game or test will disclose their disability to a potential employer and disqualify them.[19]
Using AI software that reinforces prior hiring biases can have serious legal repercussions for employers. Just like the hiring practices of human recruiters, AI tools are subject to Title VII of the Civil Rights Act of 1964 (“Title VII”), the Age Discrimination in Employment Act (“ADEA”) and the Americans with Disabilities Act (“ADA”). These laws prohibit disparate treatment (i.e., treating job candidates or employees differently based on their membership in a protected class), which AI has not eliminated from the hiring process and, in some cases, has reinforced. As noted above, a programmer’s biases can find their way into the algorithms used by hiring software, meaning that the software will have, at a minimum, unconscious biases that may lead to discrimination against a protected class.[20] These laws also prohibit disparate impact, meaning that they do not allow hiring or employment practices that disproportionately and negatively affect a protected class, as some automated hiring tools have been found to do. Employment litigators Gary Friedman and Thomas McCarthy suggest that courts may treat disparate impact claims involving AI similarly to how the Supreme Court ruled on standardized tests for employment in Griggs v. Duke Power Company and Albemarle Paper Co. v. Moody; in both cases, the Supreme Court ruled that where any test for employment is shown to have a disparate impact on a protected class, the employer must demonstrate that the tests are “job-related and represent a reasonable measure of job performance.”[21]
There can be further legal implications under the ADA. Any algorithm used in the hiring phase that discerns an applicant’s physical or mental disability is unlawful under the ADA, which disallows pre-employment inquiries into medical conditions.[22] Companies must also provide reasonable accommodations for disabled candidates; for example, they must provide accessible versions of any tests or games assigned to candidates, or provide an alternative screening process if an accessible test is not available.[23]
A few states have already enacted laws regarding certain automated hiring tools. In 2019, the Artificial Intelligence Video Interview Act (AIVIA) was signed into law in Illinois, setting notice and consent requirements for the use of artificial intelligence in video job interviews and obligating employers to disclose to candidates how the AI works and what characteristics it will use to evaluate them.[24] In 2020, Maryland enacted legislation prohibiting the use of facial recognition technology in job interviews without the applicant’s consent.[25]
New York City’s new law goes further by targeting all AI hiring software, not just tools that involve facial recognition. All automated employment decision tools are banned unless they have undergone a bias audit no more than one year prior to use, with a bias audit defined as “an impartial evaluation by an independent auditor” that would “assess the tool’s disparate impact.”[26] Employers must also notify job candidates residing in New York City of the use of automated hiring tools no less than ten days before such use; the notice must state what qualifications and characteristics the automated tool will assess and permit candidates to request an alternative selection process or other accommodation.[27] Employers must also post on their websites what type of data the automated tools will collect, or provide such information upon written request.[28] Failure to comply will result in a $500 fine for a first violation and up to $1,500 for each subsequent violation.[29]
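The law itself does not spell out how a bias audit should measure “disparate impact,” but one widely used benchmark in employment law is the EEOC’s “four-fifths rule”: a selection rate for any group below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of that calculation, using invented audit numbers, might look like this:

```python
# Sketch of the kind of calculation a bias audit might perform.
# The four-fifths rule is the EEOC's benchmark from the Uniform
# Guidelines on Employee Selection Procedures; the selection rates
# below are invented for illustration.

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Invented audit data: fraction of applicants advanced, per group.
selection_rates = {"group_a": 0.40, "group_b": 0.28}

ratios = impact_ratios(selection_rates)
flagged = [g for g, r in ratios.items() if r < 0.8]

print(ratios)   # group_b's ratio is 0.28 / 0.40 ≈ 0.70, below 0.8
print(flagged)  # → ['group_b']
```

A ratio below 0.8 does not itself establish illegal discrimination, but it is the sort of statistical signal an independent auditor would be expected to surface.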
Although it breaks new ground in the oversight of artificial intelligence, the New York City law has been criticized as vague and as not going far enough. The Center for Democracy & Technology (“CDT”) has noted that the law is imprecise as to the standards for bias audits, and that the notice requirements are insufficient to allow candidates, particularly those with disabilities, to determine what “job qualifications and characteristics” these automated tools are assessing and what accommodations they may need to request during the hiring process.[30] Furthermore, according to the CDT, the City’s new regulations merely reaffirm preexisting federal requirements to evaluate selection procedures for their disparate impact based on ethnicity, race or sex;[31] they do not sufficiently specify any need to audit for biases against older, disabled or LGBTQ+ applicants.[32] The CDT also criticized the law’s limited scope and application, observing that earlier versions of the bill targeted AI use in a variety of employment-related decisions and applied to all employees of New York City-based employers, while the bill that ultimately passed addresses only hiring and promotion decisions, and its notice requirements do not apply to candidates who are not New York City residents.[33]
Whatever the New York City law’s possible shortcomings, it is indicative of a growing trend toward broader regulation of artificial intelligence in employment. In December 2021, District of Columbia Attorney General Karl Racine introduced the Stop Discrimination by Algorithms Act of 2021, which prohibits covered entities’ use of discriminatory algorithms in employment and other areas. The bill requires notice to affected individuals regarding the type of personal information collected by AI software and how it is used, sets forth auditing requirements for covered entities relating to the use of AI software (including an annual report of audit findings to the Office of the D.C. Attorney General), and creates an individual right of action.[34] The bill’s text notes that its purpose is to “protect individuals and classes of individuals from the harm that results when algorithmic decision-making processes operate without transparency, rely on protected traits and other personal data that are correlated with those traits, or disproportionately limit access to and information about important life opportunities.”[35]
In March 2022, the California Fair Employment & Housing Council proposed updates to the state’s employment non-discrimination laws to regulate “automated-decision systems,” including resume screening tools, facial recognition software and gamified testing.[36] The proposed revisions would not only make employers liable for biased AI tools, but would also specifically extend liability to third-party vendors who use discriminatory algorithms while providing recruitment and other employment-related services to those employers. The updates would also create “aiding and abetting” liability for those involved in the “advertisement, sale, provision, or use” of automated-decision systems whose use results in illegal discrimination.[37]
AI hiring tools are also garnering attention at the federal level. In October 2021, Charlotte Burrows, Chair of the U.S. Equal Employment Opportunity Commission (“EEOC”), announced that the EEOC would launch an initiative to “ensure that artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces.”[38] The EEOC further noted that it had been examining the question of artificial intelligence tools in employment since at least 2016, and that its systemic investigators had received training on the use of artificial intelligence in employment practices in 2021.[39]
Meanwhile, in Congress, the Algorithmic Accountability Act of 2022 has been introduced in both the House of Representatives and the Senate. The bill would direct the Federal Trade Commission (“FTC”) to require “covered entities” to conduct impact assessments of “automated decision systems and augmented critical decision processes,” including those used in employment-related decisions, with regard to discrimination, privacy and other concerns.[40] Among other things, the bill sets forth extensive requirements for such impact assessments, mandates annual reports to the FTC on these assessments, and requires covered entities, “to the extent possible, to meaningfully consult with…relevant internal stakeholders (such as employees, ethics teams, and responsible technology teams) and independent external stakeholders (such as representatives of and advocates for any impacted groups, civil society and advocates, and technology experts)” in conducting such assessments.[41]
AI-based tools have become an integral part of the hiring process and will only become more so, especially as remote work becomes more common and hiring managers are less likely to meet candidates in person. Although the use of AI has not had the desired effect of reducing or eliminating discrimination in the hiring process, comprehensive legislation regulating these tools could help make them fairer and less damaging to protected classes. However, regardless of any action at the federal, state or local level, human oversight is essential to ensuring that artificial intelligence does not preserve the biases and discrimination that many workers have already faced when searching for a new job. “I believe that human-in-the-loop should not end at the recommendation that the algorithms suggest,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University. “Human-in-the-loop means in the full process of the loop from design to hire, all the way until the experience inside of the organization.”[42] In other words, recruiting and hiring professionals must be consistently mindful of how AI-based tools are designed and programmed, who may be excluded from their recommendations, and how AI may affect the candidates who do move forward in the hiring process. Such oversight can help create a more level playing field for diverse candidates while still saving employers time and money and ensuring they do not miss out on meeting and hiring great candidates.
[1] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[2] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”, at 6. Center for Democracy and Technology, December 2020. Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (cdt.org)
[3] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[4] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”, at 5. Center for Democracy and Technology, December 2020. Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (cdt.org)
[5] Nicol Turner Lee and Samantha Lai, “Why New York City is cracking down on AI in hiring.” Brookings, December 20, 2021. Why New York City is cracking down on AI in hiring (brookings.edu)
[6] Jaclyn Diaz, “As AI Enters Hiring, Some Ask if It’s Injecting New Biases (1).” Bloomberg Law, April 17, 2019. As AI Enters Hiring, Some Ask if It’s Injecting New Biases (1) (bloomberglaw.com)
[7] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”, at 5. Center for Democracy and Technology, December 2020. Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (cdt.org)
[8] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[9] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, October 10, 2018. Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
[10] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, October 10, 2018. Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
[11] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, October 10, 2018. Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
[12] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”, at 7. Center for Democracy and Technology, December 2020. Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (cdt.org)
[13] Jenny R. Yang, “Three Ways AI Can Discriminate in Hiring and Three Ways Forward.” Urban Institute, February 12, 2020. Three Ways AI Can Discriminate in Hiring and Three Ways Forward | Urban Institute
[14] Kenneth Terrell, “Can Artificial Intelligence Outsmart Age Bias?” AARP, January 16, 2019. Can AI Help With Age Discrimination in Hiring? (aarp.org)
[15] Kate Crawford, “Artificial Intelligence is Misreading Human Emotion.” The Atlantic, April 27, 2021. Artificial Intelligence Is Misreading Human Emotion – The Atlantic
[16] Nicol Turner Lee and Samantha Lai, “Why New York City is cracking down on AI in hiring.” Brookings, December 20, 2021. Why New York City is cracking down on AI in hiring (brookings.edu)
[17] Kate Crawford, “Artificial Intelligence is Misreading Human Emotion.” The Atlantic, April 27, 2021. Artificial Intelligence Is Misreading Human Emotion – The Atlantic
[18] Alex Engler, “For some employment algorithms, disability discrimination by default.” Brookings, October 31, 2019. For some employment algorithms, disability discrimination by default (brookings.edu)
[19] Sheridan Wall and Hilke Schellmann, “Disability rights advocates are worried about discrimination in AI hiring tools.” MIT Technology Review, July 21, 2021. Disability rights advocates are worried about discrimination in AI hiring tools | MIT Technology Review
[20] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[21] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[22] Gary D. Friedman and Thomas McCarthy, “Employment Law Red Flags in the Use of Artificial Intelligence in Hiring.” American Bar Association, October 1, 2020. Employment Law Red Flags in the Use of Artificial Intelligence in Hiring (americanbar.org)
[23] “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”, at 3. Center for Democracy and Technology, December 2020. Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? (cdt.org)
[24] “(820 ILCS 42/) Artificial Intelligence Video Interview Act.” Illinois General Assembly. 820 ILCS 42/ Artificial Intelligence Video Interview Act. (ilga.gov)
[25] Bill Text: MD HB1202, 2020, Regular Session, Engrossed. LegiScan. Bill Text: MD HB1202 | 2020 | Regular Session | Engrossed | LegiScan
[26] “Int. No. 1894-A.” The New York City Council, November 15, 2021. Legislation Text – Int 1894-2020 (aboutblaw.com)
[27] “Int. No. 1894-A.” The New York City Council, November 15, 2021. Legislation Text – Int 1894-2020 (aboutblaw.com)
[28] “Int. No. 1894-A.” The New York City Council, November 15, 2021. Legislation Text – Int 1894-2020 (aboutblaw.com)
[29] “Int. No. 1894-A.” The New York City Council, November 15, 2021. Legislation Text – Int 1894-2020 (aboutblaw.com)
[30] Matt Scherer and Ridhi Shetty, “NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools.” Center for Democracy & Technology, November 12, 2021. NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools – Center for Democracy and Technology (cdt.org)
[31] Matt Scherer and Ridhi Shetty, “NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools.” Center for Democracy & Technology, November 12, 2021. NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools – Center for Democracy and Technology (cdt.org)
[32] Matt Scherer and Ridhi Shetty, “NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools.” Center for Democracy & Technology, November 12, 2021. NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools – Center for Democracy and Technology (cdt.org)
[33] Matt Scherer and Ridhi Shetty, “NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools.” Center for Democracy & Technology, November 12, 2021. NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools – Center for Democracy and Technology (cdt.org)
[34] “Stop Discrimination by Algorithms Act of 2021.” Office of the Attorney General of the District of Columbia. DC-Bill-SDAA-FINAL-to-file-.pdf
[35] “Stop Discrimination by Algorithms Act of 2021.” Office of the Attorney General of the District of Columbia. DC-Bill-SDAA-FINAL-to-file-.pdf
[36] “Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems.” Department of Fair Employment & Housing, March 15, 2022. AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf (ca.gov)
[37] “Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems.” Department of Fair Employment & Housing, March 15, 2022. AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf (ca.gov)
[38] “EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness.” U.S. Equal Employment Opportunity Commission, October 28, 2021. EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness | U.S. Equal Employment Opportunity Commission
[39] “EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness.” U.S. Equal Employment Opportunity Commission, October 28, 2021. EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness | U.S. Equal Employment Opportunity Commission
[40] “Algorithmic Accountability Act of 2022.” MUR22045 (senate.gov)
[41] “Algorithmic Accountability Act of 2022.” MUR22045 (senate.gov)
[42] Jeremy Hsu, “AI Recruiting Tools Aim to Reduce Bias in the Hiring Process.” IEEE Spectrum, July 29, 2020. AI Recruiting Tools Aim to Reduce Bias in the Hiring Process – IEEE Spectrum
This article is intended as a general discussion of these issues only and is not to be considered legal advice or relied upon. For more information, please contact RPJ Attorney Ethan Krasnoo who counsels both companies and individuals on complex commercial litigation, employment, intellectual property, and entertainment and media. Mr. Krasnoo is admitted to practice law in New York. Attorney Advertising.