As generative artificial intelligence (GenAI) reshapes workplaces, HR finds itself at the center of this transformation, evolving beyond its traditional role to become an architect of organizational culture, structure, and human capital strategy in an AI-driven world. This transformation brings significant responsibility: HR professionals must take on the challenge of ethical AI implementation and governance, placing themselves in a critical position to influence how GenAI integrates with the employee experience and organizational values.
Why should HR take the lead in AI ethics and governance? HR may not develop or code AI systems, but we should think of this team as the circulatory system of an organization: its functions touch every part of the company, from hiring and training to performance management and workforce planning. By creating cultural environments and setting governance frameworks, HR shapes how enterprise AI systems operate within the organization.
Influencing Ethical AI Practices
In a conversation with AI pioneer De Kai Wu, I was introduced to a new way of thinking about AI—viewing it as a child prodigy in need of guidance. Wu, who invented and built the world’s first global-scale online language translator (which led to the creation of Google Translate, Yahoo Translate, and Microsoft Bing Translator), has been recognized as a founding fellow of the Association for Computational Linguistics for his pioneering contributions in AI, natural language processing, and machine learning. He was also one of eight inaugural members of Google’s AI ethics council, serves as a professor of computer science and engineering at the Hong Kong University of Science and Technology (HKUST), and is a distinguished research scholar at Berkeley’s International Computer Science Institute. In his forthcoming book, Raising AI: An Essential Guide to Parenting Our Future (The MIT Press, 2025), Wu explores the ethical complexities and societal implications of AI, providing a fresh perspective on how we should approach the development and integration of these systems.
Drawing on his experience, Wu described today’s GenAI systems, such as ChatGPT, Gemini, Claude, and other large language models, as “young prodigies without executive function,” akin to children who have extraordinary abilities but lack self-awareness or ethical judgment. Imagine a child with a photographic memory, the ability to perform lightning-fast calculations, and an uncanny knack for pattern recognition. Now imagine that same child lacking adult judgment, self-awareness, and a robust belief system. That’s essentially how Wu views today’s AI systems.
While Wu spoke to the broader societal impact of large-scale AI systems, HR leaders are responsible for directing AI implementation in specific organizational processes such as hiring, training, and performance management. HR departments may not directly control AI systems, but they shape the work environments where GenAI operates.
This metaphor helps us understand the ongoing responsibility HR has not only during the implementation of AI but throughout its life cycle. Parenting is more than just a clever analogy—it signifies a profound shift in how we approach the ethical development and deployment of GenAI within organizations.
The Implications of Parenting AI
Viewing ourselves as “parents” to AI systems encourages us to take responsibility for their growth and development. HR, in particular, plays a key role in shaping the ethical values and culture that AI reflects by guiding policies around data ethics, ensuring diverse training sets, and fostering accountability frameworks. While HR professionals may not be developers or engineers, they act like members of a PTA, influencing the “curriculum” and guiding how AI systems are used, governed, and aligned with ethical principles within their organizations. This perspective helps us avoid feeling powerless or falling into the trap of trying to control AI completely.
AI Systems: Learning from Us
Every parent knows that children learn more from what we do than what we say. Similarly, GenAI systems learn from the data and interactions we provide. This means we must be mindful of the examples we set. Wu emphasized that AI won’t simply follow verbal instructions—it will model its behavior on what we demonstrate. As he pointed out, “You know that old line from parents: ‘Do as I say, not as I do.’ When does that ever work? They’re never going to do what we want them to do.”
The EU Artificial Intelligence Act aims to regulate AI systems based on potential risks. While the law is designed to protect citizens, Wu argued that such constraints could limit AI’s ethical development, much like overprotective parenting can hinder a child’s growth. These constraints, he suggested, could prevent AI from maturing in crucial ways, such as taking responsibility for its actions, reflecting on its impact, and gaining a deeper understanding of human psychology that is necessary for ethical behavior.
Wu highlighted a key reality: GenAI today is trained on internet data, meaning it has absorbed both our biases and a rudimentary understanding of human nature. Much like a child who knows their parents’ soft spots, AI has picked up both our good and bad tendencies.
“If we approach AI with fear or try to constrain it at every turn,” Wu warned, “we risk creating digital ‘sociopaths’—entities that lack empathy and an understanding of human values.” Instead, he advocated for a more nuanced approach, allowing AI to develop a deeper understanding of human psychology and behavior.
Christopher Fernandez’s Insights: HR as AI Integration Leaders
The parenting metaphor for AI aligns well with HR’s evolving responsibilities. Just as parents shape a child’s environment and values, HR shapes the organizational culture in which AI systems operate. This connection is particularly evident in the insights Christopher Fernandez shared on the “AI Transformation—A Human-Centered Approach” episode of The AI+HI Project. As corporate vice president of HR at Microsoft, Fernandez is responsible for leading HR’s AI integration efforts.
Fernandez views HR as central to managing the human-AI interface, given HR professionals’ grounding in behavioral science. He emphasized that “HR’s ability to finesse how people engage with that technology, because of their rooted understanding of behavioral science, will be central to the HR professional’s role going forward.” This perspective places HR leaders at the forefront of AI integration, leveraging their knowledge of human behavior to ensure smooth adoption.
Fernandez also envisions a broader role for HR, saying, “I see HR being central to assessing and quantifying the human behavioral experience in ways that encourage engagement with technology in various scenarios, both at work and outside of work.” This expanded idea positions HR as key interpreters of the human-AI dynamic, within the workplace and beyond.
A Framework for HR Leaders in the AI Era
As AI reshapes the workplace, HR leaders have the opportunity to shape human-AI collaboration. Based on these insights, here’s a framework they can follow:
- Becoming the AI Ethics Compass: HR leaders must evolve into AI ethics experts, shaping policies that govern AI’s ethical use. While HR may not code AI systems, it can ensure that training data reflects diverse, inclusive perspectives. This begins with HR professionals increasing their AI literacy through education, experimentation with AI tools, and collaboration with IT teams.
- Facilitating the AI Conversation: HR should lead open discussions about AI’s impact on the workforce. Workshops and training programs can demystify AI for employees, reducing fear and increasing engagement.
- Architecting Human-AI Synergy: HR should design strategies for AI integration that enhance human capabilities. Small-scale pilots, such as AI-assisted recruitment or diversity audits, can serve as testing grounds for broader human-AI collaboration.
- Leveraging Behavioral Science: HR’s expertise in human behavior can guide AI implementation effectively. By embedding behavioral science into AI-driven HR tools, HR can measure employee engagement and productivity alongside AI interactions.
- Quantifying the Human Experience: HR must develop new metrics to assess AI’s impact on productivity. Collaborating with data science teams, HR can co-create dashboards that track both performance and engagement and use this data to fine-tune AI systems.
- Fostering Transparency and Accountability: HR should lead by embedding ethical training throughout the AI life cycle and ensuring transparency in AI decision-making to maintain trust.
- Orchestrating Cross-Functional Collaboration: HR can spearhead collaboration between IT, data science, and other departments to integrate ethical considerations into AI deployment, ensuring alignment with company values and employee needs.