AI’s role in hiring practices in the spotlight amid bias concerns


Full story

Artificial intelligence (AI) likely played a role in securing some applicants’ current jobs or excluding them from consideration for the position they missed out on. Human resources departments have been using automated tools to assist in recruitment for some time now. However, in February of 2023, one of the first allegations of discrimination based on an employer’s use of AI in the hiring process emerged.

Derek Mobley has become the face of a class action lawsuit against Workday, Inc. Mobley claimed that since 2018, he has applied for positions at as many as 100 companies through Workday’s application platform but has not been able to secure a job.

“All things being equal, you know, he’s qualified,” Mobley’s attorney Rod Cooks said.

According to Cooks, Mobley alleges that he has been discriminated against because he is African American, over 40 years old and has a disability.

“We believe this lawsuit is without merit,” a Workday spokesperson said in an email. “At Workday, we’re committed to responsible AI. Our decisions are guided by our AI ethics principles, which include amplifying human potential, positively impacting society, and championing transparency and fairness.”

This is not the first time AI-based recruitment tools have faced bias concerns.

In 2015, Amazon discontinued its own recruiting engine after it was discovered that the system had developed a preference for male applicants over women.

John Rood is the founder of Proceptual, a company that works with HR departments to ensure compliance with emerging legislation. Rood emphasized the significant harm caused when members of protected classes, such as racial, gender, or sexual-orientation minorities, systematically fail to secure jobs.

Rood explains that when using an automated employment decision tool, the technology picks which applicants should advance to the next phase based on predetermined criteria. Hiring managers often rely on these system recommendations, especially in large-scale hiring processes.

“The computer has not like decided who to hire, but at a part of that funnel, it’s made a very specific decision about who should be advancing to the next phase,” Rood said. “And most hiring managers are not going to, like look at that and be like, ‘oh, well, I need to carefully go through every single resume,’ especially when it’s, you know, an enterprise company, and they’re hiring for hundreds of roles.”

To address bias concerns, Rood’s company conducts “bias audits” for organizations using automated tools. These audits evaluate whether an algorithm selected applicants at a lower rate based on race, gender, national origin, or other factors.
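The core of such an audit is a selection-rate comparison: compute each group’s rate of advancing past the screening step and compare it to the most-selected group’s rate. Below is a minimal sketch of that calculation, with hypothetical counts for illustration; the 0.8 threshold reflects the EEOC’s informal “four-fifths” rule of thumb for flagging possible adverse impact, not the specific methodology of any one audit firm.

```python
# Sketch of a selection-rate "bias audit" calculation (hypothetical data).
# An impact ratio compares each group's selection rate to the rate of the
# most-selected group; ratios well below 1.0 can signal adverse impact.

def impact_ratios(applicants, selected):
    """applicants, selected: dicts mapping group name -> counts."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: rates[g] / top_rate for g in rates}

# Hypothetical applicant pools of equal size.
applicants = {"Group A": 200, "Group B": 200}
selected = {"Group A": 60, "Group B": 30}  # 30% vs. 15% selection rate

ratios = impact_ratios(applicants, selected)
for group, ratio in ratios.items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here Group B’s impact ratio is 0.5, well under the four-fifths threshold, which is the kind of disparity an audit is designed to surface.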

New York City passed Local Law 144 in 2021, which requires employers to conduct bias audits and disclose the results publicly. Enforcement of the law will commence on July 5, 2023.

However, Rood says the law is far from perfect. While it sets the standard for what an audit should entail, he highlights that no one is entirely satisfied with the criteria outlined.

Critics include Matt Scherer, senior policy counsel for Workers’ Rights and Technology at the Center for Democracy and Technology. Scherer said that the audit requirement is redundant, pointing out that the law references federal regulations that already mandate companies to collect information on race, sex and national origin.

“The law is not doing anything for this bias audit that companies should not already be doing anyway,” Scherer said.

New York City’s law also requires employers to disclose their use of AI in the hiring process and how it is employed. However, the version that passed is narrow: companies must follow these rules only if their automated employment decision tools completely replace human decision-making.

Because many companies use such technology to assist human decision-making rather than replace it entirely, they may not be affected by Local Law 144.

“I do not think that New York Law 144 is a very good law, I hesitate to call it regulation, or certainly new regulation,” Scherer said. “And I am not at all keen on the idea of that being kind of a model for the rest of the country for how these tools are regulated.”

As New York’s regulation takes effect, it shines a spotlight on a sector that has been utilizing AI products for years, potentially impacting the demographic composition of the workforce.
