Researchers are raising questions about the growing array of new digital tools employers are using to streamline the hiring process.
They are studying new sourcing and recruiting platforms powered by artificial intelligence (AI) and machine learning, as well as algorithm-heavy screening and interview software that analyzes and ranks job applicants. Policymakers are concerned, too.
“Proponents of new technologies assert that digital tools eliminate bias and discrimination by attempting to remove humans from the process, but technology is not developed or used in a vacuum,” said Rep. Suzanne Bonamici, D-Ore. “A growing body of evidence suggests that left unchecked, digital tools can absorb and replicate systemic biases that are ingrained in the environment in which they are designed.”
Selection assessments increasingly rely on algorithmic decision-making, which “raises important questions for our antidiscrimination laws,” said Jenny Yang, a senior fellow at the Urban Institute in Washington, D.C., and former chair of the U.S. Equal Employment Opportunity Commission (EEOC).
“The complexity and opacity of many algorithmic systems often make it difficult if not impossible to understand the reason a selection decision was made,” she said. “Often thousands of data points have been analyzed to evaluate candidates from social media sites, words in resumes, and other available data. Many systems operate as a black box, meaning vendors of algorithmic systems do not disclose how inputs lead to a decision.”
Researchers Find Problems
Manish Raghavan, a doctoral student in computer science at Cornell University in Ithaca, N.Y., and author of “Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices,” said that technology companies are shielded by intellectual property laws and are not required to disclose information about their algorithmic models, though some chose to cooperate with his team of researchers.
Raghavan and his colleagues studied 19 vendors who specialize in algorithmic pre-employment screening via assessments, video interview analysis and games. Very few of the companies offered specifics on how they mitigate algorithmic bias or disclosed how they validate their assessments.
The research, supported by the National Science Foundation and Microsoft, found that technology vendors tended to favor obscurity over transparency about how their algorithms were designed and how algorithmic bias is defined and addressed on their platforms.
“Plenty of vendors make no mention of efforts to combat bias [in public materials], which is particularly worrying since either they’re not thinking about it at all, or they’re not being transparent about their practices,” Raghavan said. Claims of fairness are also unsatisfactory when a vendor does not reveal how it defines fairness, he said.
Additional research from professors Peter Cappelli and Prasanna Tambe at the Wharton School of the University of Pennsylvania in Philadelphia highlights the gap between the promise and the reality of AI in HR, a gap they attribute to the complexity of human resources management, the limitations of small data sets and accountability questions associated with fairness.
Ifeoma Ajunwa, assistant professor of employment and labor law at Cornell University’s Industrial and Labor Relations School, outlined the top issues her research identified in automated recruitment technology. She found that the tools:
- May enable employers to discreetly eliminate applicants from protected categories without retaining a record.
- Allow for neutral variables that act as proxies for protected categories to be used to justify biased employment results as objective.
- Allow discriminatory practices to go undetected because intellectual property law protects automated hiring systems from scrutiny.
- Can lead to applicants being “algorithmically blackballed,” increasing the chance of repeated employment discrimination.
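The proxy problem in the second item above can be illustrated with a minimal, entirely hypothetical sketch: a screening rule that never reads the protected attribute, but keys on a correlated “neutral” feature such as a zip code, still reproduces the group disparity. The data and variable names here are invented for illustration only.

```python
# Hypothetical illustration: a screen that never reads the protected
# attribute can still produce disparate outcomes via a correlated proxy.
applicants = [
    # (protected_group, zip_code, qualified)
    ("A", "10001", True), ("A", "10001", True), ("A", "10001", False),
    ("B", "20002", True), ("B", "20002", True), ("B", "20002", False),
]

def passes_screen(applicant):
    """Facially neutral rule: screens on zip code, never on group."""
    _, zip_code, _ = applicant
    return zip_code != "20002"

def selection_rate(group):
    """Fraction of a group's applicants who pass the screen."""
    members = [a for a in applicants if a[0] == group]
    return sum(passes_screen(a) for a in members) / len(members)

print(selection_rate("A"))  # 1.0 -> every group A applicant passes
print(selection_rate("B"))  # 0.0 -> every group B applicant is screened out
```

Because the rule itself mentions only a zip code, its output can be defended as “objective” even though, in this toy data, group membership and zip code coincide perfectly.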
Ajunwa is especially concerned about the growing use of automated video interview software that captures candidates’ responses to pre-recorded interview questions and assesses them based on their word choices, speech patterns, and facial expressions to determine their fit for the position and the company’s culture.
“There are no federal regulations as to the collection, storage, or use of data from automated hiring platforms, including video interviewing,” she said.
Regulators Are Taking Notice
Yang said that the government has begun to investigate concerns regarding these systems.
The Electronic Privacy Information Center, a public interest research organization based in Washington, D.C., filed a petition Feb. 3 asking the Federal Trade Commission to investigate and regulate the use of AI, facial recognition technology, biometric data and algorithms in pre-employment screening and hiring decisions. In addition, the EEOC has at least two open investigations into charges that algorithms unlawfully discriminate during the recruitment process, she said.
Ajunwa would like to see a new burden-shifting cause of action under Title VII of the Civil Rights Act, in which plaintiffs alleging employment discrimination could still sue even if they have difficulty showing statistical proof of disparate impact. She also believes that employers who use the hiring tools should undergo mandated internal and external audits of the systems, and that technology vendors should be required to build data retention and recordkeeping design features in their products.
“At present, the data trail of job applicants who do not make it past the hiring algorithm is typically lost,” she said. “Data-retention mechanisms will ensure that data from failed job applicants are preserved to be later compared against the successful job applicants, with the aim of discovering whether the data [shows] disparate impact.”
Yang suggested that an update of the Uniform Guidelines on Employee Selection Procedures—which assist employers in determining if their tests and selection procedures are lawful—is overdue. “A revision could align the federal guidance with the latest scientific knowledge regarding industrial and organizational psychology and computer science and provide greater clarity on the validation standards for algorithmic screens,” she said.
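The best-known test in the existing Uniform Guidelines is the “four-fifths rule”: a selection rate for any group that is less than 80 percent of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that check, using hypothetical pass/fail counts, shows the kind of analysis the retained applicant data would support:

```python
# Four-fifths rule from the Uniform Guidelines: flag adverse impact when
# any group's selection rate falls below 80% of the highest group's rate.
def adverse_impact(counts):
    """counts: {group: (selected, total)} -> list of flagged groups."""
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    top = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * top]

# Hypothetical screening outcomes for two applicant groups.
flagged = adverse_impact({"group_x": (48, 80), "group_y": (12, 40)})
print(flagged)  # group_y: rate 0.30 vs 0.60 -> ratio 0.5, below 0.8
```

The rule is a rough statistical screen, not proof of discrimination; an updated guidance document would presumably clarify how such thresholds apply to algorithmic scores as well as to simple pass/fail screens.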
Yang also proposed a workers’ bill of rights to ensure that applicants and employees understand how algorithmic decisions are made and ensure a process to challenge biased or inaccurate decisions. “These rights could build on the GDPR [General Data Protection Regulation], which creates a more robust individual rights-based approach to data protection. Under GDPR, individuals have a right not to be subject to a selection decision based solely on automated processing.”
Forward, Not Backward
Despite their concerns, the experts agree the tools have value and should be improved, not eliminated.
“We know from years of empirical evidence that humans suffer from a variety of biases when it comes to evaluating employment candidates,” Raghavan said. “Despite their many flaws, algorithms do have the potential to contribute to a more equitable society, and further work is needed to ensure that we can understand and mitigate the biases they bring.”
He noted that some of the technology companies his team engaged with acknowledged that they’re taking steps to address bias and discrimination, but there is a lack of consensus on exactly how that should be done.
Yang agreed, saying that “algorithmic systems could help identify and remove systemic barriers in hiring and employment practices, but to realize this promise we must ensure they are carefully designed to prevent bias and to document and explain decisions necessary to evaluate their reliability and validity.”