HUMAN-IN-THE-LOOP APPROACH TO INVESTIGATE ALGORITHMIC JUSTICE IN AI AND MACHINE LEARNING ENABLED TALENT ACQUISITION SYSTEMS

Open Access
- Author:
- Neupane, Bikalpa
- Graduate Program:
- Informatics
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- June 16, 2021
- Committee Members:
- Daniel Susser, Major Field Member
Timothy Brick, Outside Unit & Field Member
Lynette Yarger, Chair & Dissertation Advisor
Mary Beth Rosson, Program Head/Chair
Fred Fonseca, Major Field Member
Jordan Beck, Special Member
- Keywords:
- algorithmic justice
algorithmic equity
underprivileged
marginalized
AI bias
AI fairness
video-based hiring systems
HRTech
hiring
interviewing
perceptions of AI
human-in-the-loop
diversity in hiring
equity in hiring
DEI
- Abstract:
- As AI technology is increasingly embedded in corporate human resources environments, human and social actors interact to form algorithmic practices for completing talent acquisition activities such as job advertising, resume preparation and submission, resume filtering and evaluation, candidate selection and tracking, and interviewing and hiring. However, as the software has become more ubiquitous, a rising discourse in the popular press and research community voices concern about the risks of (un)intended bias in AI tools and the drastic impacts that such bias can have on people's lives, particularly those of historically underserved populations. To direct equitable policies for the design and use of emerging AI technologies, it is critical to understand the potential types and sources of injustice that may be embedded in the predictions made by these machine learning algorithms, and their impacts on historically underserved groups.

This study examines the perceptions and actual experiences of women and racially minoritized job seekers affected by algorithmic hiring systems deployed by organizations during candidate screening and interviewing for entry-level technology positions. Drawing on participants' real-world experiences and Gilliland's procedural justice framework, I used workshops, focus groups, and interviews to explore how job seekers perceive justice and equity in the context of AI-based hiring systems. Several theories from human-computer interaction, such as design rationale theory and the MAIN model, were used to offer insights for developing software design recommendations that may make the hiring process more equitable and inclusive of diverse job candidates.

This dissertation presents job seekers' experiences as examples of how AI systems may perpetuate or inhibit bias and discrimination when algorithmic methods for detecting and classifying human beings are built without considering the broader historical and social context. The results suggest that job seekers' perceptions of the justice of the procedures applied during selection and of the equity of the selection outcomes are inextricably intertwined. Job seekers also offer complex viewpoints that highlight both the positive and negative sides of AI-based recruiting systems. These findings suggest that, if machine learning hiring systems worked hand-in-hand with humans, the overall perceptions of women and racially minoritized job seekers toward such systems could be improved. The perspectives of these historically disadvantaged groups are conceived as a productive means of talking back, in the form of recommendations to computer scientists on algorithm design features and guidance to organizations on deploying AI systems to support more just and equitable hiring practices. The results also contribute to future research seeking to audit systems for injustice and inequity affecting underserved populations in other domains, such as education and banking.
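The abstract's key design suggestion, that machine learning hiring systems "work hand-in-hand with humans," can be made concrete with a minimal sketch. The code below is a hypothetical illustration, not a system described in the dissertation: it shows one common human-in-the-loop pattern in which the model may automate only confident, favorable screening outcomes, while uncertain or adverse outcomes are always routed to a human reviewer. All names here (CandidateScore, route_candidate, the two thresholds) are assumptions introduced for illustration.

```python
# A minimal human-in-the-loop screening sketch (hypothetical, not from the
# dissertation). The model never rejects a candidate on its own; ambiguous
# or adverse outcomes are deferred to a human reviewer.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ADVANCE = "advance"            # confident, favorable outcome: may be automated
    HUMAN_REVIEW = "human_review"  # uncertain or adverse outcome: a person decides


@dataclass
class CandidateScore:
    candidate_id: str
    score: float       # model's estimate that the candidate meets the criteria (0..1)
    confidence: float  # model's self-reported confidence in that estimate (0..1)


CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff: below this, always defer to a human
PASS_THRESHOLD = 0.7        # assumed cutoff: scores above this may auto-advance


def route_candidate(result: CandidateScore) -> Decision:
    """Route a screening result: only confident, positive outcomes are automated.

    Because negative outcomes are never automated, a biased model can delay a
    candidate's progress but cannot silently screen them out.
    """
    if result.confidence >= CONFIDENCE_THRESHOLD and result.score >= PASS_THRESHOLD:
        return Decision.ADVANCE
    return Decision.HUMAN_REVIEW


if __name__ == "__main__":
    # Example: one confident pass is automated; everything else is deferred.
    for r in [
        CandidateScore("A-101", score=0.85, confidence=0.95),  # auto-advance
        CandidateScore("A-102", score=0.40, confidence=0.97),  # human review (adverse)
        CandidateScore("A-103", score=0.80, confidence=0.60),  # human review (uncertain)
    ]:
        print(r.candidate_id, route_candidate(r).value)
```

The design choice in this sketch, never automating a rejection, reflects the accountability concern the abstract raises: keeping a human decision-maker in the loop for adverse outcomes is one way an organization could address the justice and equity perceptions the study documents.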