Stop Fearing AI and 'Big Data' in Recruiting
Advanced technologies can address bias, strengthen hiring decisions
Complex emerging technologies such as artificial intelligence, machine learning and "big data" analysis will be used to create the leading HR organizations of the future, and employers must be willing to invest the time and effort to use these powerful tools responsibly.
But that means first getting over the fear of what could go wrong and instead resolving to harness technology's power to better inform decision-making and revolutionize talent management.
SHRM Online discussed this critical future-of-work topic with Eric Sydell, an industrial-organizational psychologist and expert in AI and machine learning who serves as executive vice president of innovation at recruiting technology firm Modern Hire and is co-author of the new book Decoding Talent (Fast Company, 2022).
SHRM Online: People often react to leading-edge technology with trepidation. In the case of using AI in the workplace, government regulators are placing well-intentioned limits on data usage because they fear employers may abuse employee privacy and workers could suffer harm from bias. How can people move past these initial reactions to harness the benefits of this advanced technology more fully while also addressing its threats?
Sydell: It's been noted that we are creating advanced technology faster than we can civilize it. That has often been the case throughout history: regulations and guidance are created after the fact to rein in new technology.
AI is possibly the most powerful and consequential technology humans have ever developed. And as with any powerful tool, AI can be used for benevolent or malevolent purposes. In many cases, well-intentioned AI produces harmful results due to unforeseen consequences. And yet, as we all know, AI can also dramatically improve our world in many ways.
Privacy and bias are two of the largest problems with unconstrained applications of AI. As a society, we must find ways to limit those problems so that we can reap the technology's benefits. Of course, many business interests want private personal data so they can better target ads and other products, and bias is often buried deep in algorithms that otherwise produce beneficial results. So striking the right balance, limiting privacy violations and bias while still enabling AI to be effective and helpful, is a delicate dance between business and human interests.
In my opinion, we do not yet have AI and algorithmic development constraints adequate to the task of harnessing AI for the benefit of humanity. The key part of that last sentence is "of humanity," not corporate interests. AI has to be beneficial not only for corporations but also for individual humans. It has to make our lives better. Ensuring that private data is not used or that algorithmic bias is mitigated is not enough. And often these issues are interrelated. For example, we often need to know which demographic groups people belong to so that we can verify algorithms are not biased against any one group, and yet some regulations limit access to demographic information because it can be considered private or could be used by humans to discriminate. We still have a lot of work to do if we are to harness AI and algorithms for the benefit of individuals.
SHRM Online: The most high-profile news stories about using AI in employment decisions typically portray the negative consequences of the technology, including ethical, legal and privacy abuses. How can AI and big data be used to remove bias from hiring?
Sydell: Early on, AI developers were exuberant about the technology and rolled out features that were not sufficiently vetted. This led to high-profile incidents such as Microsoft's release of its Tay chatbot, which was trained on Twitter data. Almost immediately, Twitter users began feeding Tay racist statements, which it learned from and began spewing out on its own. Microsoft quickly took Tay down and has since learned that you can't allow an AI to learn from user responses in such an unfettered manner.
However, AI is fundamentally just statistical analysis capability, and that capability can be designed to find bias and root it out. While poorly developed AI can scale bias, the same techniques can be used to identify bias and make hiring decisions fair to all classes of individuals. Remember, AI is just a tool. It is up to governments to control how it is used, and up to developers to be aware of the negative potential of poorly developed code.
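As a minimal illustration of how simple statistical analysis can surface bias, U.S. enforcement guidance commonly applies a "four-fifths rule": a selection process may show adverse impact if one group's selection rate falls below 80 percent of the highest group's rate. The sketch below (hypothetical data, group names and function names; not any vendor's actual method) flags groups that fail that rule of thumb.

```python
# Minimal sketch: flagging adverse impact with the four-fifths rule.
# Group names, counts and function names are hypothetical illustrations.

def selection_rates(outcomes):
    """outcomes maps group name -> (hired, applicants)."""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

outcomes = {
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected
}
print(adverse_impact_flags(outcomes))
# group_b's rate (0.30) is 60% of group_a's (0.50), so it is flagged
```

A check like this is a screening heuristic, not proof of discrimination; flagged results typically prompt deeper statistical review of the algorithm and its training data.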
SHRM Online: If the key to effective AI use is capturing the right data to analyze, then how does an organization begin to identify this data and act on it?
Sydell: We all intuitively understand that some types of data are more useful than others. But the reality is that it is very hard to know which data points will ultimately prove more predictive and fair. As humans, we often think we know. We are very good at building narratives to explain the world around us. But one of the promises of big data and AI is that they can help make sense of complex, messy, unstructured data in ways that were not previously possible.
Some data types are likely worth more than others. I break candidate data down into the following four categories:
- Incidental. This refers to non-job-related data like social media profiles, the sound of a person's voice, or interview video. This type of data has not been found to be very predictive of job success, and it certainly includes a lot of potentially biasing information. It also tends to be viewed as invasive by candidates.
- Trace. This is online behavioral data such as mouse movements and replay counts. This type of information is also not very predictive of job success.
- Narrative. This refers to more job-related, but unstructured information such as LinkedIn profiles, cover letters and resumes. This type of data is useful in hiring, but it also contains a lot of biasing factors, so it must be used with caution.
- Intentional Response. This is the gold standard of data-oriented hiring. It refers to questions that candidates intentionally respond to, such as interview questions that can be quantified with AI and job-related test responses. This data is not invasive, and because it is quantifiable, it can be validated and bias can be measured.
SHRM Online: Talent acquisition professionals want to be able to predict candidates' job success, but sometimes they struggle. How can emerging AI technology better assess talent?
Sydell: Decisions about who to hire are inherently human decisions. And we humans are just not very good at making reasoned, high-quality, fair decisions about other humans. Our brains are wired to take in volumes of data and make very fast, intuitive decisions. And we do that with candidates. We get a sense of who they are in literally seconds, and it is often difficult to overcome those first impressions even as more data comes in.
While there is a ton of hiring tech available today, much of it does not help us get around our inherently human decision-making inadequacies. For that, we must turn to structured, scientifically driven tools that measure very specific candidate characteristics that are proven to be predictive of job performance. A typical example is a job-relevant, validated assessment, which is often the most valid and predictive part of a hiring process. Our own two decades of assessment research at Modern Hire has produced many examples of how validated assessments lead to vastly higher ROI [return on investment] and far greater levels of new-hire diversity.
AI allows us to study and score more than just tests, though; it vastly expands the array of candidate information that can be quantified and thus studied. At Modern Hire, for example, we use deep learning and natural language processing to score the transcribed words candidates utter in response to interview questions. The resulting competency scores are job-relevant and highly correlated with scores from human subject matter experts, yet they show almost four times smaller group differences than those human scores. Essentially, AI allows us to quantify a tremendous amount of data that recruiters and hiring managers previously had to eyeball. Ultimately, this helps dramatically shrink the hiring process from weeks to days or even hours, increases the effectiveness of hiring decisions, and does so with a level of fairness that humans simply cannot match.
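Claims like "four times smaller group differences" are typically grounded in a standardized mean difference such as Cohen's d, computed on scores for candidates from different demographic groups. The sketch below shows that calculation with hypothetical scores; it is a generic statistics illustration, not Modern Hire's actual methodology.

```python
# Minimal sketch: measuring group differences in competency scores with
# Cohen's d (standardized mean difference, pooled standard deviation).
# Scores are hypothetical; in practice these would be AI-assigned or
# human-assigned interview scores for two demographic groups.
from statistics import mean, stdev

def cohens_d(scores_a, scores_b):
    """Difference in group means divided by the pooled standard deviation."""
    na, nb = len(scores_a), len(scores_b)
    sa, sb = stdev(scores_a), stdev(scores_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(scores_a) - mean(scores_b)) / pooled

group_a = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2]
group_b = [3.6, 3.9, 3.4, 4.2, 3.7, 4.0]
print(f"d = {cohens_d(group_a, group_b):.2f}")  # smaller |d| = smaller group difference
```

Comparing |d| for AI-generated scores against |d| for human ratings on the same candidates is one straightforward way to substantiate a fairness comparison like the one above.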