How can HR teams successfully embrace generative AI (GenAI) to tackle complex compliance challenges? It all depends on how an organization keeps humans in the loop, according to Mitratech’s Aimee Pedretti, SHRM-SCP, and Susan Anderson, SHRM-SCP, at SHRM25 in San Diego.
As GenAI becomes more widely available, it presents a unique opportunity to streamline processes, boost productivity, and solve intricate challenges. Yet when it comes to HR compliance, the demand for accuracy, precision, and trust means HR teams looking to integrate AI will need to keep human expertise and judgment front and center.
“For all its technical promise, AI will always need an emotionally intelligent human to power it,” said Anderson, who heads the HR compliance expert team at the New York-based information technology company. “That’s why HR is perfectly positioned to use AI to transform the 21st-century workplace.”
Anderson and Pedretti created a “Humans in the Loop” (HITL) approach, a structured framework for ensuring quality and collaboration along the AI adoption pathway. Here are some takeaways from that approach on turning AI adoption into a successful partnership between humans and technology.
Human Curiosity, Creativity Are Key
AI tools are rapidly evolving: They’re constantly fed new information and updated on a regular basis. That makes them inherently flawed, but it’s also where the human element comes in to counterbalance those flaws.
“Call it governance, collaboration ... the role of humans in developing and testing these tools is absolutely creative,” Anderson said. “Human judgment is a superpower. This is what we do best.”
That’s also why subject matter experts are — and will remain — crucial players in the successful adoption of AI tools for HR compliance. Whether an organization develops a chatbot to help employees understand benefits or creates a user manual to train managers, subject matter experts will need to scrutinize the work AI produces. Without human judgment, flaws will remain, setting back the progress and development of AI tools.
However, not everyone on a project team for a new AI tool will be an HR subject matter expert — and that’s a good thing. Some will likely be technical experts tasked with understanding the architecture itself.
“It’s critical [for subject matter experts] to get curious and work closely with technical folks so you can understand their perspective,” said Pedretti, Mitratech’s principal AI transformation expert. “That way, when you’re advocating for the HR needs, you can do so in a language that is relevant to them. That’s going to build credibility within your team.”
Credential: SHRM AI + HI Specialty Credential
Prepare People, Not Just Platforms
AI is revolutionizing the workplace, but that doesn’t mean everyone will be immediately on board. As HR teams integrate AI into their processes, it’s important to stress that AI is a tool. And as a tool, it will shift roles rather than erase them.
By leading with transparency about how AI is going to be adopted, HR teams can build more trust — not just in the platforms and systems themselves, but in a transformational change that can feel daunting.
Preparing every person at every level of expertise can go a long way toward ensuring that AI is adopted with humans in mind.
“Tone at the top is really important. We have been extremely transparent with our team and intentional about what we’re telling our teams through this process about the role our HR functional experts are going to continue to play,” Anderson said.
Toolkit: Managing Organizational Change
Build Guardrails to Build Trust
Designing guardrails into a newly developed AI product is another step toward building trust.
“In addition to bias and inappropriate content, you want to make sure that the product you’re deploying is in line with relevant legislation, privacy concerns, and best practices,” Pedretti said. Some questions to consider:
- What types of unintended output could occur in this use case, and what guardrails can help prevent this?
- What types of bias might we encounter in our use case, and how can guardrails mitigate this?
- Which laws and regulations apply to this use case? Who will monitor the evolving legal landscape?
Guardrails can be built into the HITL infrastructure from the beginning, leading to more trust in the process and a more responsible deployment of AI.
Learn More: How HR Can Build Trust in AI at Work
Critical Thinking Is the New Baseline Skill
Anyone can go to an AI chatbot for an answer, but do they have the critical thinking skills to confirm that the information they’re receiving is accurate, nuanced, or emotionally intelligent?
With AI making so many things easier, experts are already warning about the risk of “metacognitive laziness.” In other words, overuse or misuse of AI can sap our critical thinking while leading us to produce repetitive, unoriginal, or mediocre work.
“We want to make sure that we’re stressing the importance of ongoing learning and domain knowledge,” Pedretti said. “And we also want to make sure that we’re using AI where it’s appropriate while still making space for deep thinking, because the enhanced productivity is not worth losing it.
“Responsible AI can’t just be about productivity and removing humans. It’s about embracing humans to do what they do best.”
Toolkit: Employee Development