As AI makes deeper inroads into organizations, employee concerns about the technology continue to grow. According to research from Gartner, those fears are primarily centered on job loss, as well as potential bias stemming from AI.
Experts say most organizations aren’t addressing those concerns well, which can undermine their growing investments in AI, damage employee morale, and spawn even more distrust of a promising technology. SHRM Online spoke with Duncan Harris, a director with Gartner specializing in HR technology strategy, about how HR leaders can address the fear of AI among their own staff, as well as in the wider organization.
AI and HR Jobs
Conventional wisdom says employees’ biggest fear about AI is that it will take their jobs, but Harris said his research shows more workers believe their jobs will be redesigned, rather than replaced, by AI. Employees also worry that AI will make their jobs more complicated or less interesting.
Organizations should increase their investment in AI-related education to prepare workers for how their jobs may change, Harris said. HR can aid in that effort by collaborating with IT to provide learning and development on a range of topics, including how AI works, how to create effective prompts for generative AI (GenAI) tools, and how to evaluate AI outputs for biases and inaccuracies.
AI adoption already has led to the redesign of many HR jobs, Harris noted. One example is in HR shared services, where HR staff who once spent much of their day answering employees’ questions are now working alongside or overseeing chatbots that handle the bulk of those queries.
“Use of chatbots requires different levels of input from HR and administrators, including supporting initial setup of the bots, scripting, and being a subject matter expert on different HR processes,” Harris said. “It also includes monitoring, updating, and working alongside chatbots and virtual assistants where there are handover points.”
New roles also are emerging, and other HR job descriptions are changing to accommodate the arrival of AI, Harris said. One such role is product manager for internal talent marketplaces—platforms that use AI to match employees to internal opportunities such as short-term projects, full-time roles, mentoring, and job-shadowing arrangements. This role sits in the HR function and seeks to integrate sophisticated technology solutions into other job roles across the organization.
Another example of how HR roles are adapting to new technologies is the growing use of employee experience teams that report to HR and IT, Harris said.
“These fusion teams use process mapping and end-user persona creation, combined with talent expertise and ownership of technology governance, to coordinate increased adoption of emerging technologies, enhanced employee experience, and a link of those things to overall business goals,” he said.
Greater Transparency and Communication
Harris said one key to alleviating all employees’ concerns about AI is to better explain how the technology will be used. Few organizations are being fully transparent about how AI will impact their workforce and where and how they plan to use AI in their operations, he said.
Companies that show employees how AI works, let them weigh in on where it can be helpful or harmful, and test solutions for accuracy can head off fear of the unknown.
HR, for example, shouldn’t just provide general information about AI in its communications with employees. It needs to provide context and details on what risks and opportunities are influencing the organization’s AI policies.
“For example, one big concern is that GenAI can lead to organizations making inadvertent mistakes,” Harris said, whether that’s producing inaccurate outputs or leaking sensitive company data. “From an executive perspective, the biggest concern for the future in using GenAI is around data privacy—which is also one of the most common concerns for employees.”
Harris said some organizations adopt an employee digital bill of rights to help mitigate such concerns. The documents often include specific principles such as the right to purpose, or having a legitimate business purpose for any data collected; the right to limitation, or not collecting more data than is needed to fulfill a defined purpose; and the right to fairness, or using collected data in ways that reinforce equal opportunity, access, and treatment in the workplace.
Some organizations also are using new governance structures to formalize accountability for AI ethics to minimize reputational risks of the technology, Harris said. One way they’re doing that is by deputizing AI ethics representatives at the business unit level to oversee implementation of AI policies and practices in their departments.
Dave Zielinski is principal of Skiwood Communications, a business writing and editing company in Minneapolis.