Over the past few months, clear evidence has emerged on two key fronts regarding AI at work. First, the use of artificial intelligence tools among individuals is growing significantly. In a recent study conducted by The University of Chicago and Statistics Denmark, 65% of marketers, 64% of journalists, and 30% of legal professionals reported using AI in their work. Similarly, another recent study from Harvard University’s Project on Workforce revealed that one-third of workers had used generative AI (GenAI) at work in the week before they were interviewed, with ChatGPT being the most popular tool.
Additionally, individual productivity gains are undeniable. Various studies highlight how AI can enhance work performance. For instance, consultants performing 18 different tasks experienced a 25% productivity boost using GPT-4, while GitHub Copilot users reported a 26% productivity increase. Across industries, users reported AI cutting their working time in half for 41% of the tasks they perform.
Despite these encouraging signs, organizations have yet to fully capitalize on these productivity improvements. While individuals are thriving, organizations are still struggling to turn these personal wins into broader enterprise success. This raises the key question: Why is this success not translating into widespread organizational gains?
Enterprise Success with AI: Falling Short
The reality is stark: Organizations are struggling to implement AI successfully at scale. According to research from Gartner, the market is rife with confusion, leading to costly mistakes and misapplied AI technologies. GenAI, which dominates the conversation, accounts for only 5% of actual AI use cases in production. A RAND report highlights that 80% of AI projects fail due to misaligned leadership, data quality issues, and a lack of collaboration between teams.
While individual productivity gains are apparent, organizations often find that AI usage doesn’t lead to meaningful enterprise-level transformation. One of the primary reasons for this gap is rooted in human behavior. New research from LinkedIn reveals that 64% of professionals feel overwhelmed by the rapid pace of change, with 68% expressing a need for more support and 49% worried about being left behind. The failure to bridge this human-technology gap leaves enterprises in a state of inertia, unable to harness AI’s true potential. Moreover, during these periods of rapid change, a shocking number of employees feel they have been left to navigate the shift on their own. According to the LinkedIn study, just 37% of employees feel that they can rely on their manager to help them through changes, and only 51% think their company’s leadership is helping them keep pace.
The Trust Issue: Barriers at Work
At the core of the problem is trust. In many organizations, employees hide their AI use for fear of punishment or of being met with skepticism. Some worry they won’t receive recognition for their achievements if they attribute them to AI, while others fear that productivity gains will lead to job cuts. This creates a culture of secrecy and distrust, in which employees do not feel empowered to experiment with AI.
Organizations often compound this issue by sending mixed signals—warning employees about the risks of improper AI use without offering clear guidance on what constitutes responsible usage. This confusion erodes trust even further, stifling the potential for AI-driven innovation and productivity at the enterprise level. Without a transparent and safe space for employees to disclose their AI use and share their successes, companies will miss out on the innovations their employees may already be developing without leadership’s knowledge.
Societal Trust and AI: A Growing Concern
The trust problem extends beyond the workplace into society at large. During an intimate talk I recently attended, hosted by the Commonwealth Club World Affairs of California and the Center for Humane Technology, Yuval Noah Harari highlighted the collapse of human trust as a critical challenge in the AI revolution. He shared that democracy, built on trust, is now threatened by rising tribalism and a lack of transparency in AI systems. Harari argued that while we are focusing on solving the AI problem, the deeper issue lies in rebuilding trust between humans and institutions.
In his view, the societal implications of adopting AI are significant. As AI mimics human intimacy and emotional connections, it poses a considerable threat to meaningful human relationships. Without transparent regulations and institutions to govern AI, trust will continue to erode, undermining the foundations of our social systems. “Today’s AI,” Harari warned, “is just an amoeba compared to what’s coming.”
Culture and Civilization: A Broader Perspective
The trust and culture problem is not limited to modern organizations. A report from RethinkX suggests that the historical civilizations of Sumer, Rome, and Babylonia collapsed not simply because of political and economic pressures, but because of their inability to adapt to new technological and organizational challenges. RethinkX was co-founded by Jamie Arbib and economist Tony Seba, who is famous for predicting the price of shale to within a penny 20 years in advance. Seba and Arbib believe that our modern civilization is at risk of a similar fate if we continue clinging to outdated organizational systems.
The report emphasizes that technological advancements demand an accompanying shift in mindset. Societies that fail to evolve mentally in response to technological disruption risk falling into decline. This prompts a critical question: Are our current organizational structures—such as hierarchical enterprises—equipped to manage the complexities of an AI-driven world? And what cultural and mindset shifts do we need in order to encourage the adaptation and transformation required to avoid societal collapse?
Why Previous Culture Change Efforts Have Failed
Before diving into solutions for mindset shifts at scale, it’s important to understand why culture change initiatives often fail, as Jon Harding illustrates by recounting his experiences in HR leadership. One major reason is the tendency to treat culture like a machine that can be manipulated through data and behavioral science—a concept Harding refers to as “scientism.” This reductionist approach views people as mere objects of management rather than complex individuals with emotional and social needs.
Furthermore, many culture change programs assume a linear progression toward a utopian future. This teleological approach ignores the unpredictable and dynamic nature of organizations, leading to disillusionment when things don’t go as planned. Harding also highlights the “heroic leadership fallacy,” which places too much emphasis on individual leaders and neglects the systemic and environmental factors that shape organizational outcomes. Lastly, cultural efforts often become performative, focusing solely on financial outcomes rather than on creating meaningful workplace change.
If you’ve been following The AI+HI Project podcast and article series, then you’ve heard and read about the theme we’re beginning to see: The return on investment matters, but you can’t get there without meaningful organizational change. Read this article featuring Christopher Fernandez, corporate vice president of HR at Microsoft, to learn the details. These failures point to the need for a more human-centered, holistic approach to culture change, especially in the context of AI adoption.
Solutions: Building Cultures That Support AI Adoption
If culture is indeed the key to successful AI adoption, then the question becomes: How do we create cultures that support this transformation? The answer may lie in fostering dynamic, growth-oriented organizations.
Growth Culture
The work of social psychologist Mary C. Murphy, Ph.D., on growth culture provides a road map for transforming organizational behavior. Microsoft CEO Satya Nadella attributes his company’s transformation into a market leader to its shift to a growth mindset and culture—and its AI success is hard to argue with. Expanding on the research of her former mentor, Carol Dweck, Ph.D., Murphy argues that a growth mindset can—and should—extend beyond individuals to entire teams and organizations.
A growth culture promotes collaboration, innovation, and inclusion, helping organizations achieve top results. This cultural shift requires leaders to understand their teams’ mindset triggers and to adapt their leadership styles accordingly.
It seems to me that by creating environments that encourage learning, risk-taking, and critical thinking, organizations can overcome the barriers to successful AI adoption.
Dynamic Organizations
To build on this, Josh Bersin’s concept of dynamic organizations emphasizes continuous transformation and adaptability. Dynamic organizations prioritize talent mobility, cross-functional collaboration, and ongoing learning, allowing them to thrive in rapidly changing environments. According to Bersin’s research, these organizations significantly outperform their peers in terms of financial success, innovation, and inclusion.
Bersin says that building a dynamic organization requires a shift in management mindset from static to dynamic thinking. This includes investing in the right technologies, fostering a culture of trust and inclusion, and aligning talent strategies with organizational goals. Again, this feels to me like an organizational culture that aligns with successful AI implementation.
Or Something Else?
It’s too early to declare the “perfect” organizational structure for an AI-native company. But what’s clear is that the organizations that succeed with AI will prioritize culture, trust, and adaptability over rigid, outdated models.
So, what can an HR leader do today to prepare for the future? Right now, the No. 1 thing you can do is to foster trust within your organization—trust will be required regardless of the ultimate organizational structure or culture.
But there do seem to be some unexpected rewards for successful AI implementation. A study from MIT Sloan and BCG shed light on the cultural benefits that success in AI can bring. A survey of 2,197 managers and interviews with 18 executives revealed that AI has the potential to strengthen teams, foster innovation, and enhance collective learning within organizations. The study, set to be updated for GenAI in November, emphasizes that trust in AI is crucial for successful adoption, leading to improved morale, collaboration, and role clarity. This research underscores a critical point: AI’s impact extends far beyond mere efficiency gains, profoundly influencing organizational culture and effectiveness. Companies that successfully navigate the AI integration process can create a positive feedback loop in which AI enhances culture, which in turn drives better organizational performance and competitiveness.
This perspective adds nuance to our understanding of AI’s role in the workplace, highlighting that when implemented thoughtfully, AI can be a powerful catalyst for positive cultural transformation. As Harari reminds us, the AI revolution is not just about technology but about reshaping society. For CHROs, this means not only managing the risks of AI use but also actively shaping the future of work. By adopting a balanced, human-centered approach to AI governance, CHROs can ensure that AI advances the organization. Safeguarding human agency in an AI-dominated workplace will require proactive leadership, transparency, and collaboration across industries and governments.
Practical Steps for CHROs: Supporting AI Adoption
For organizations to fully capitalize on AI’s potential, CHROs must play a proactive role in shaping the future of work. Here are four strategies CHROs can implement to foster a culture that supports successful AI adoption:
- Advocate for transparency and disclosure: Ensure that AI systems are clearly identified as nonhuman entities, particularly in HR, to maintain transparency and trust.
- Promote human-centric AI governance: Collaborate with leadership to develop ethical AI policies that balance human agency with technological capabilities.
- Empower employees: Invest in reskilling programs that complement AI and focus on fostering critical thinking, creativity, and emotional intelligence.
- Collaborate with policymakers: Engage with regulatory bodies to ensure that AI is governed by transparent, accountable frameworks.
In the end, AI’s success in any organization will depend less on the technology itself and more on the culture that surrounds it. For CHROs and leaders, fostering trust, transparency, and adaptability will be the key to unlocking AI’s full potential.