How HR Can Build Trust in AI at Work
Steps include addressing bias, developing literacy, and ensuring fairness
Communication, education, and transparency will help build trust in using artificial intelligence at work. HR can explain the technology, provide training on how to use it effectively, and develop ethical guidelines for its implementation.
“There are many risks that AI poses for HR,” said Charlene Li, founder and CEO of Quantum Networks Group, at SHRM’s The AI+HI Project 2025 in San Francisco. “You have to constantly think, ‘Are we doing the right thing?’ You have to ask questions about ethics and values. Just because you can do something doesn’t mean you should.”
Nichol Bradford, executive in residence for AI + HI at SHRM, said that one of the biggest concerns about AI is the level of bias that can be baked into the training data. “But at the same time, humans come with a high level of bias too,” she said.
Bradford moderated the event, held April 9-10, including Li’s session.
“AI is based on data, and any sort of bias we have in the data will be amplified,” Li said. In addition, “[e]very piece of data has bias, because it was created by humans. So, the solution is not unbiased data but understanding the bias that it has. Have a definition inside your organization for what fairness looks like and then test the outcomes to see if it abides by your definition.”
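Putting that advice into practice can be as concrete as writing the fairness definition down in code and checking outcomes against it. The following sketch is a hypothetical illustration, not something presented at the session: it tests an AI screener’s selection rates against the four-fifths rule, one common adverse-impact yardstick in U.S. hiring. The groups, data, and threshold are all made up.

```python
# A minimal sketch of testing outcomes against a fairness definition.
# Here the definition is the "four-fifths rule": no group's selection
# rate should fall below 80% of the highest group's rate. The groups,
# data, and threshold are all hypothetical.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, where selected is a bool."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes, ratio=0.8):
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    # Flag any group whose rate is under 80% of the best-treated group's.
    flagged = {g: round(r, 2) for g, r in rates.items() if r < ratio * highest}
    return not flagged, flagged

# Made-up outcomes from an AI resume screener: (group, was_selected).
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
fair, flagged = passes_four_fifths(outcomes)
print("Meets our fairness definition:", fair, "| flagged:", flagged)
```

The point is not this particular metric: whatever definition an organization adopts, the test should be written down, automated, and rerun as models and data change.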
Li advised constant auditing and testing. “Having a human in the loop is essential,” she said.
“You want to ensure you not only have diversity of teams and people, but also diversity of data and tools — being able to swap models back and forth to test them, using different datasets to mitigate bias.”
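The swapping Li describes can be operationalized by running one audit across interchangeable models and datasets and comparing the reports. Here is a hypothetical sketch, with stub models standing in for whatever screening system an organization actually uses:

```python
# Minimal sketch: run the same per-group audit across swappable models
# and datasets, so differences in outcomes surface for human review.
# StubModel, the field names, and all data are illustrative assumptions.

class StubModel:
    """Stand-in for a real screening model."""
    def __init__(self, cutoff):
        self.cutoff = cutoff
    def predict(self, record):
        return record["score"] >= self.cutoff

def group_rates(model, dataset):
    """Per-group selection rates for one model on one dataset."""
    totals, picked = {}, {}
    for record in dataset:
        g = record["group"]
        totals[g] = totals.get(g, 0) + 1
        picked[g] = picked.get(g, 0) + int(model.predict(record))
    return {g: round(picked[g] / totals[g], 2) for g in totals}

def compare(models, datasets):
    # The printed report is what the human in the loop reviews.
    for m_name, model in models.items():
        for d_name, data in datasets.items():
            print(f"{m_name} on {d_name}: {group_rates(model, data)}")

models = {"model_a": StubModel(0.5), "model_b": StubModel(0.7)}
datasets = {"applicants_2024": [
    {"group": "A", "score": 0.8}, {"group": "A", "score": 0.6},
    {"group": "B", "score": 0.65}, {"group": "B", "score": 0.4},
]}
compare(models, datasets)
```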
Li brought up additional concerns about AI use at work. “AI can be a black box, yet we are using it to make important decisions,” she said. “How comfortable are we handing over that decision-making? It can also start feeling dehumanizing. When you realize that the tool can do so much of your work, what does that mean for me as a human?”
Li also noted that these are new kinds of conversations: “We didn’t have to have these types of discussions before. It’s important to talk about what you believe in as an organization.”
Building Trust
Li said that to lean into the potential of AI, employers need to first build trust in its use. “As we deploy AI into our organizations, we must be systematic and intentional about how we build trust,” she said. “Trust is built every single day, in every single interaction we have, by people who use these tools.”
Trust starts with attaining literacy, which includes “actually using the AI,” Li said. “Hands-on training is important. Have discussions. Seek out resources. Once you write out guidelines and policies, review them on a continual basis.”
When it comes to creating a governance framework, organizations should aim for the Goldilocks approach of not too much, not too little. “We want to stay safe and create trust, but you don’t want to stop everything in its tracks,” Li warned. “Most organizations get stuck, seeking zero risk.”
Li described a framework she calls the Pyramid of Trust, based on psychologist Abraham Maslow’s Hierarchy of Needs, which proposes that human beings need to have certain basic needs met before they can pursue their more complex needs.
Li’s pyramid rests on a strong foundation of safety, security, and privacy. Fairness sits atop that base. “If you don’t have fairness in place, you can’t understand quality or accuracy or build towards accountability and transparency,” the levels that complete the structure, she explained.
Li emphasized that safety, security, and privacy are nonnegotiable for responsible AI use and include developing secure systems through access controls, practicing data encryption, ensuring safe usage through training, and protecting user privacy.
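On the privacy point, one small, concrete control is to scrub identifying details before any text reaches an external AI tool. A minimal sketch follows; the patterns are purely illustrative, and real systems need far more robust detection:

```python
import re

# Minimal sketch: redact obvious PII from a prompt before it is sent to
# an external AI tool. The patterns below are illustrative only and will
# miss many real-world cases.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: Jane (jane.doe@example.com, 555-867-5309) asked about leave."
print(redact(prompt))  # identifiers replaced before any AI call is made
```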
At the next level, you want to minimize bias and maximize fairness. “Diversity of datasets best supports you in this mission,” Li said.
Making sure you define quality and accuracy — and what you expect from your teams — comes next and helps build trust, she said.
Accountability is next. “Accountability is about responsibility,” Li said. “It involves defining roles, making sure there’s a decision-making process, and getting clear on how your organization will and won’t use AI. An important part of this level is creating a culture of accountability, which means your organization has a system for flagging and reporting problems — and there are no repercussions for speaking up.”
Finally, you are able to reach the top level of transparency. “Transparency involves a commitment to continuous monitoring, ethical practices, and clear communication about AI’s capabilities and limitations,” Li said. “It includes disclosing how and when AI is being used, especially for decisions that impact people.”
Bradford agreed. “In the end, talent wants to know they will be treated fairly,” she said. “HR will have a duty of transparency, especially if AI is being used to calculate compensation or evaluate performance.”
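Disclosure is easier when AI use is recorded at the point of decision. As a closing illustration, here is a hypothetical sketch that logs every AI-assisted decision so an organization can later report how and when the tool was used; the tool and function names are invented:

```python
import datetime
import functools

AI_DECISION_LOG = []  # in practice, a durable audit store, not a list

def ai_assisted(tool_name):
    """Decorator that records each AI-assisted decision for disclosure."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AI_DECISION_LOG.append({
                "tool": tool_name,
                "decision": fn.__name__,
                "result": result,
                "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return inner
    return wrap

@ai_assisted("resume_screener_v2")  # hypothetical tool name
def screen_candidate(candidate_id):
    return "advance"  # placeholder for the tool's real output

screen_candidate("c-123")
print(AI_DECISION_LOG)  # the raw material for disclosing how and when AI is used
```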
An organization run by AI is not a futuristic concept. Such technology is already part of many workplaces and will continue to shape the labor market and HR.