Employers must be aware of the ethical considerations of using artificial intelligence (AI) technology—even in its current nascent stage—in the workplace, said Kerry Wang, speaking at the HR Technology Conference & Exposition in Las Vegas.
HR and business leaders are being drawn into the conflict between the competitive advantage the technology can provide and concerns about negative implications like unintended bias, said Wang, the CEO and co-founder of Searchlight, a San Francisco-based technology company that uses AI to help employers recruit talent and measure quality of hire.
"Imagine you have implemented an AI tool to proactively screen job applicants," she said. "Recruiters are happy because they can now spend less time screening resumes. Candidates are happy because recruiters are responding to them faster. But one day you notice that the technology is recommending more men than women for interviews. Do you continue to use the tech or decide to shelve it?"
Something similar happened at Amazon when the company built an experimental AI recruiting tool in 2015: trained largely on a decade of resumes that came mostly from men, the system learned to downgrade female candidates. Amazon scrapped that particular system, but since then there has been an explosion of vendors touting AI for HR functions, from sourcing and screening to predicting turnover and powering workforce analytics.
"Whether we like it or not, AI is everywhere," Wang said. "But AI is only as good as the rules that program it, and machine learning is only as good as the data it relies on."
She explained that AI includes any computer system that mimics human intelligence to complete a task. For example, a simple chatbot powered by an algorithm—a set of rules or lines of code—is using AI. More complex AI uses machine learning.
"That's where modeling comes in," she said. "Modeling is when you find patterns in a huge dataset and then code those patterns as rules. In the chatbot example, if you gave the AI 10 million transcripts of human chats beginning when someone said 'hello,' the AI would learn a multitude of ways to respond to that greeting."
Wang said that AI is not meant to be "a silver bullet" but is instead supposed to assist human decision-making. "AI can make us smarter and more efficient—research shows that AI taking over more technical tasks frees people up to do more strategic things."
Whether to introduce AI to HR comes down to abundance and scarcity, said Ann Watson, senior vice president of people and culture at Verana Health in San Francisco, also speaking at the conference.
"How can we do more?" she asked. "How can we increase productivity? How can we best grow talent pipelines? How can we bring more people in and be more inclusive? The benefits of AI technology means having more time to do the things I want to do."
Maisha Gray-Diggs, vice president of global talent acquisition at Eventbrite, said that her team uses AI for recruiting and onboarding.
"The benefit of AI to me is getting an edge, saving time and resources," she said, speaking at the conference. "I'm very mindful that we don't want AI to replace people, but AI can be used to augment people. HR can't just keep doing more and more, it must do things smarter."
The Ethical Use of AI
Wang said that there are two prime areas of concern when it comes to the use of AI in employment: privacy and bias.
"I'm very uncomfortable with the idea of employee surveillance," Watson said. "I think of AI as finding ways to do more, not finding ways to catch people doing less."
She gave the example of a technology that can predict whether an employee is about to quit based on workplace behavior, but noted that research found it works only if employees don't know it's being used.
"For it to work, it would have to be kept secret from the workforce," she said. "And that is not something I'm willing to do, even if it can accurately predict turnover."
Wang said that when Searchlight partners with a client, the company first sends employees a communication detailing what is happening, why it's being done and what to expect.
"When we do that, 70 to 80 percent of employees opt in to the data collection," she said. "When you give people the choice and explain to them the benefits of using AI, the majority will agree to opt in."
The other major ethical issue raised by workplace AI is the potential for discrimination: bias can be built into the technology intentionally or unintentionally.
"Biases already exist in human judgment," Wang said. "The potential for biased tech is there. But the more we're aware of it and ensure that the data we plug into the models we use to make our decisions is as holistic as it can be—then we're in a better place."
Wang mentioned the first-of-its-kind law (Local Law 144) taking effect in New York City on Jan. 1, 2023, which will prohibit employers from using AI and algorithm-based technologies for recruiting, hiring or promotion unless those tools have first been audited for bias.
"All of us who will use the tool will need to make the commitment to ask questions, to make sure that we are not discriminating," Gray-Diggs said. "We move so fast in the tech space that I feel we need to spend more time and do more research to understand the technology. And before you bring in a new tool, there must be acknowledgment of bias at the organization first. Think about women and other underrepresented folks not selling themselves as well as they can in the hiring process, and then you have this AI tool that may eliminate them even before they are given a chance."
Choosing an AI Vendor
Wang said that before you approach an AI vendor, you should pick a problem to solve that the business really cares about. "It's hard enough to advocate for any new technology and even harder to convince leadership about a problem they don't have an interest in," she said.
When engaging with vendors, ask them how they think about bias in their system, she explained. "Can they talk about how they use their data, how they train their model, how they validate not having adverse impact? I love it when an employer asks me about bias because it shows we are philosophically aligned."
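One well-established check a vendor might describe, though the panel did not walk through the math, is the "four-fifths rule" from the EEOC's Uniform Guidelines: a group whose selection rate falls below 80 percent of the highest group's rate is a common red flag for adverse impact. Here is a minimal Python sketch of that computation; the pipeline counts and group labels are hypothetical.

```python
# Four-fifths (80 percent) rule check on a screening tool's output.
# All counts below are hypothetical.
pipeline = {
    # group: (candidates recommended by the tool, total applicants)
    "men":   (60, 200),
    "women": (35, 200),
}

rates = {g: selected / total for g, (selected, total) in pipeline.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # "impact ratio" relative to the top group
    flag = "OK" if ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({flag})")
```

Asking a vendor to walk through this kind of analysis on its own validation data is one way to test the claims Wang describes.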
You must ask hard questions, Watson said. "Push harder than you feel comfortable pushing. If you need to find someone else in the organization who has more understanding of the technology, bring them into that conversation."
Gray-Diggs agreed, saying that "if HR is uncomfortable or out of its depth evaluating a new product, bring in the data science folks or IT. Bring in business leaders to make sure you don't miss things."
The composition of the vendor's team itself can be illuminating, Gray-Diggs said. "I look at the team presenting the product to me. If the team is a diverse team, that lets me know that they are already thinking about potential bias and discrimination."
Pilot programs are your friend, Watson said, "especially if you're thinking of implementing a tool that might be disruptive, even as it creates efficiency. Find a favorable department of supporters to pilot the product. Work with them to roll it out in a test scenario, learn from it and build buy-in in a measured way before you impact the organization all at once."
Wang added that pilot programs of AI technology are helpful, but only with a large enough sample size. "Look to create a pilot of at least 100 people, or the data patterns in your models will not be as accurate," she warned.
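Wang didn't elaborate on the 100-person figure, but standard sampling arithmetic illustrates the intuition: estimates drawn from small pilots carry wide margins of error. A quick sketch:

```python
import math

# 95% margin of error for an estimated proportion p from n people:
# roughly 1.96 * sqrt(p * (1 - p) / n). p = 0.5 is the worst case.
def margin_of_error(p: float, n: int) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (20, 100, 500):
    print(f"n = {n:>3}: estimate off by up to about ±{margin_of_error(0.5, n):.0%}")
```

With 20 participants, a measured rate can be off by more than 20 percentage points; at 100 it tightens to about 10.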
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR.