AI & HR: Algorithmic Discrimination in the Workplace
November 1, 2024 | Feature Article
The Emergence of AI in HR Practices
Artificial intelligence (AI) is quickly reshaping the way human resources (HR) departments make decisions in the workplace. In particular, AI is currently redefining key HR practices, including recruitment, selection, onboarding, performance management, and training and development. On the surface, the use of AI in HR offers myriad benefits: increased efficiency through streamlined HR processes, better decision-making through precise prediction and analysis, and improved employee productivity through AI personalization. Importantly, AI technologies used in HR, such as machine learning and natural language processing, also aim to mitigate bias. However, plaintiffs, scholars, and others argue the opposite: AI may actually perpetuate and amplify biases in HR practices.
According to the Society for Human Resource Management, around 1 in 4 employers use AI in their HR practices, and among the organizations using AI for HR purposes, talent acquisition is the leading area for its use at 64%. Further, according to Gartner, 76% of HR leaders believe that their organization will be trailing behind in organizational success if they fail to implement AI in the next 1 to 2 years. Thus, the adoption of AI in HR is already significant, and its future prevalence is almost undeniable. What does this new reality mean for employees, employers, and the courts?
AI’s Potential for Algorithmic Bias and Discrimination
Between 2014 and 2018, Amazon developed a resume-scanning tool that utilized AI for recruitment. Amazon trained this tool on previously recruited candidates’ credentials to better identify and rank qualified applicants. However, Amazon engineers discovered that the tool systematically downgraded resumes submitted by female candidates. Although the gender of these applicants was never explicitly provided, the AI tool used “indirect markers, such as ‘captain of the women’s chess club’ as proxies” to identify which applicants were female and effectively screen them out.
Research shows that AI suffers from algorithmic bias by reproducing and amplifying human biases. Amazon’s AI recruitment tool discriminated against applicants on the basis of gender because of the data it trained on. The resume-scanning tool defined the “ideal employee” based on historically biased data, which consisted of resumes predominantly submitted by men in the past. Thus, the training data organizations use in their AI-powered HR tools risks reflecting historical or present biases instead of focusing solely on an applicant’s skills and qualifications. In other words, if the data input is biased, the output will likely be biased. Given that human bias has historically plagued recruitment and selection processes, policymakers must understand the risk of algorithmic bias to ensure a fair and equitable workplace.
Algorithmic Discrimination in the Context of Law and Policy
The issue of algorithmic discrimination is already appearing in courts across the United States. For example, in Saas v. Major, Lindsey & Africa, LLC, a plaintiff alleged that Major, Lindsey & Africa, a recruiting firm, used “algorithmic, machine learning, and other technical tools in the conduct of their business, and their use of such tools caused [her] to be unlawfully discriminated against on the basis” of her sex and age. The plaintiff asserted claims of “failure to refer and ‘algorithmic bias’ in violation of [Title VII] and the [ADEA]; retaliation in violation of Title VII and the ADEA; and fraudulent inducement in violation of Maryland law.” However, a district court in the District of Maryland dismissed the “algorithmic bias” claim because the plaintiff’s allegation that the recruiting firm used AI was too speculative.
Further, in Mobley v. Workday, Inc., a plaintiff alleged that Workday, a human resource management service, used algorithmic decision-making tools “to screen applicants in [the] hiring process [that] discriminated against him and similarly situated job applicants on the basis of race, age, and disability.” The plaintiff’s applications allegedly listed his degree from a historically Black college and, in some applications, included personality tests that could indicate his mental health disorders. The plaintiff asserted that Workday’s AI tools relied on biased training data, prompting him to bring “disparate impact and disparate treatment claims under Title VII, the [ADEA] and the [ADA].” Although a district court in the Northern District of California dismissed the disparate treatment claim, that court allowed the disparate impact claim to proceed because the plaintiff’s complaint “support[ed] a plausible inference that Workday’s screening algorithms were automatically rejecting Mobley’s applications based on a factor other than his qualifications, such as a protected trait.”
The Mobley case teaches us that organizations cannot escape liability by using AI systems for their HR decisions. Notably, the Mobley court asserted that “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.” Accordingly, the Mobley case also teaches us that both developers and users of AI tools may be held liable for discrimination under existing law. However, the Saas case shows us that it may be difficult for plaintiffs to succeed on claims of algorithmic discrimination in the workplace under law. Thus, ambiguity exists among employees, employers, and the courts on issues similar to those introduced in Saas and Mobley.
This ambiguity exists in large part because the United States does not have any comprehensive legislation regulating the use of AI. Instead, the federal response consists of executive actions such as Executive Order 14110, issued by President Biden in 2023, and the White House’s Blueprint for an AI Bill of Rights. Both of these actions acknowledge the impact of algorithmic discrimination and provide hope for a fairer and more equitable workplace under the emergence of AI. For example, Executive Order 14110 states that “[i]t is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse,” and the White House’s Blueprint for an AI Bill of Rights devotes an entire section to “Algorithmic Discrimination Protections.” This section states that individuals should not face algorithmic discrimination and that users and developers of automated systems should use and design these systems in an equitable way. While these statements are promising, vulnerable populations will continue to face algorithmic discrimination in the workplace without any federal legislation on the matter.
In the absence of federal legislation, some states and localities—including Illinois, New York City, Colorado, and California—are attempting to regulate how employers use AI in HR decisions to prevent discrimination. Additionally, federal agencies, including the EEOC and the DOL, have issued initiatives, guidance, and other materials in an attempt to clarify that the use and design of AI-powered HR tools may result in discrimination in violation of the law.
The Law’s Role in Mitigating Algorithmic Discrimination
Some may argue that the issue of algorithmic discrimination should be left to the states, and others may argue that federal legislation on the matter will stifle innovation in the workplace. However, algorithmic discrimination is an issue worthy of comprehensive federal recognition. Prohibiting discrimination in employment decisions is a cornerstone of U.S. employment law, and the federal government should do all it can to curb the detrimental effects that employment discrimination has on individuals, organizations, and society. Therefore, Congress must pass comprehensive federal legislation on the issue of AI use in the workplace, with particular attention to algorithmic discrimination. While the use of AI in HR offers incredible benefits to employees and employers, the power of AI can also propagate employment discrimination in a way never seen before.
AI’s capability to drive discrimination in both a systematic and unbridled fashion differs substantially from anything achievable by a human decision-maker. Thus, the unique nature of algorithmic discrimination renders traditional Title VII frameworks inadequate for regulation. A new legal framework, influenced by existing employment law yet specifically tailored to address the complex forms of discrimination that AI poses, can help mitigate this burgeoning issue. Further, if the law directs developers to build features into their AI systems that enable better regulation, the government can help make AI-powered HR tools more regulable.
In the meantime, employers seeking to use AI in their HR practices should be proactive in assessing how AI-powered tools were developed and trained. Further, developers of automated HR systems should harness the power of AI to foster algorithmic inclusion and take the steps necessary to avoid perpetuating systemic discrimination. If Congress were to pass comprehensive federal legislation on this matter, employers and developers alike might find themselves taking steps like these to ensure a workplace free from discrimination.
Suggested Citation: Kadin Mesriani, AI & HR: Algorithmic Discrimination in the Workplace, Cornell J.L. & Pub. Pol’y, The Issue Spotter, (Oct. 31, 2024), https://jlpp.org/ai-hr-algorithmic-discrimination-in-the-workplace.