Several companies recently announced new measures to vet vendors' artificial intelligence tools in order to mitigate data and algorithmic bias in human resources and workforce decisions.
In December 2021, more than a dozen large companies, including American Express, CVS Health, Nike, Mastercard, Meta, Walmart and General Motors, committed to using the Algorithmic Bias Safeguards for Workforce, developed by the Data and Trust Alliance, a consortium of businesses and institutions.
The safeguards comprise 55 questions in 13 categories designed to detect, mitigate and monitor algorithmic bias in vendors' AI tools.
Vendors and their AI applications will be evaluated on criteria such as training data and model design, bias testing methods, and bias remediation, as well as AI ethics and diversity commitments.
Beyond the product evaluation criteria, the safeguards offer guidance for HR teams assessing vendor responses, along with a scorecard for grading and comparing vendor products. The safeguards supplement member companies' existing vendor selection procedures, the Alliance said.
The safeguards were developed by a cross-industry working group of member company experts in areas such as human resources; artificial intelligence; legal; and diversity, equity and inclusion.
The Data and Trust Alliance is focused on member companies implementing the safeguards, executives said. The organization's chief operating officer, Robyn Bennett, declined to elaborate on the initiative, saying the organization prefers to comment once member CHROs and HR leaders have more experience deploying the safeguards.
Also in December, the World Economic Forum published Human-Centered Artificial Intelligence for Human Resources: A Toolkit for Human Resource Professionals. The toolkit includes a guide highlighting important topics, guidance on the responsible use of AI-based HR tools and two checklists—one focused on strategic planning and the other on the implementation of a specific AI tool.
An accompanying white paper states that one of the main goals of the toolkit is to help organizations improve their use of AI-based HR tools.
"Many organizations find their investments in AI fall short of their expectations because the tools are adopted for the wrong reasons, they do not anticipate the work necessary to integrate the tool, or because they did not gain sufficient buy-in from the people who were supposed to use it or are affected by it," the report states.
Other organizations, such as The Partnership on AI, The Institute of Electrical and Electronics Engineers, and The Algorithmic Justice League, have also introduced initiatives and published research addressing AI bias and the ethical use of AI in the workplace.
Increasing Scrutiny of AI Use
According to Daniel Chasen, director of research at the HR Policy Association, initiatives like the Data and Trust Alliance's Algorithmic Bias Safeguards reflect large employers' commitment to aligning their business goals with values such as diversity, equity and inclusion.
"This is a clear signal that employers are looking closely at the implications of using AI in the workplace, regardless of the vendor providing the AI tools," Chasen said.
He added that tools that help companies assess AI technology are invaluable for employers looking at ways to strengthen or establish their AI governance processes.
"AI is not a one-size-fits-all concept," he said. "It comes with a wide variety of both risk types and risk levels, so the adaptability inherent in tools like the Alliance's safeguards or the World Economic Forum's AI for Human Resources toolkit is critical in promoting the responsible and ethical use of AI in the workplace."
Scrutiny of AI bias intensified in late 2021. In October, the U.S. Equal Employment Opportunity Commission (EEOC) launched an initiative to ensure that AI used in hiring and other employment decisions complies with federal civil rights laws. In December, the New York City Council passed a law barring employers in the city from using AI tools for recruiting, hiring or promotion unless those tools have been audited for bias. The law takes effect in January 2023.
As the legal ramifications surrounding the use of AI grow, companies will need to think hard about what they are using AI for, said Samuel Estreicher, a professor of law at New York University School of Law in New York City and director of its Center for Labor and Employment Law.
"Employers haven't thought hard enough about why they are using AI, how they are using it and what the legal issues are. The legal requirement is if there is a disparate impact, they have to use requirements that are job-related," Estreicher said.
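One conventional screen for the disparate impact Estreicher mentions is the EEOC's "four-fifths rule" of thumb: if one group's selection rate falls below 80 percent of the highest group's rate, that is a red flag warranting closer review. A minimal sketch of that calculation, using hypothetical group names and counts (not data from any vendor tool discussed here):

```python
# Illustrative sketch of the EEOC's "four-fifths rule" of thumb,
# a common first screen for disparate impact in selection decisions.
# All group names and counts below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants that a screening tool selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate, highest_rate):
    """Ratio of a group's selection rate to the highest group's rate.
    A ratio below 0.8 is the conventional red flag for disparate impact."""
    return group_rate / highest_rate

# Hypothetical outcomes from an AI resume-screening tool
groups = {
    "group_a": {"applicants": 200, "selected": 60},  # rate 0.30
    "group_b": {"applicants": 150, "selected": 30},  # rate 0.20
}

rates = {g: selection_rate(d["selected"], d["applicants"])
         for g, d in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, highest)
    flag = "potential disparate impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

Here group_b's ratio is 0.20 / 0.30 ≈ 0.67, below the 0.8 threshold, so its results would merit further validation that the selection criteria are job-related. The four-fifths rule is only a rough screen, not a legal conclusion; statistical significance and job-relatedness analyses follow from it.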
He added that AI is a good tool for screening thousands of unsolicited resumes but isn't as good as human judgment when evaluating an employee's performance or making a decision as to whether an employee should be promoted.
"AI cannot replace human judgment," he said. "For example, I would not use AI for making employee compensation decisions. You can use AI to help you learn a compensation problem, but at the end of the day, you cannot replace what you've learned from someone you've been working with for 10, 15 or 20 years."
In a statement endorsing the Data and Trust Alliance's initiative, Laurie Havanec, chief people officer at CVS Health, said her company's acceleration of its digital transformation projects has heightened the need to use data and AI in a way that is consistent with company values.
"Our work to mitigate bias in human resources and recruiting will help us build the more diverse and inclusive teams that we know will produce better results for our companies and our communities," Havanec said in a statement.
Monique Matheson, chief human resources officer at Nike Inc., explained the Alliance's efforts this way: "None of us want to lose out on the right opportunity or the right person due to incomplete, inaccurate or biased data sets. The work of this cross-industry, multidisciplinary team of leaders and deep subject matter experts is designed to create an equitable playing field for all."
Nicole Lewis is a freelance journalist based in Miami.