The intersection of trust and artificial intelligence presents both unprecedented opportunities and complex challenges for organizations. Two major 2025 studies—Deloitte’s State of Generative AI report and the Edelman Trust Barometer—provide complementary insights into the critical role of trust in AI adoption.
Understanding the Current Landscape
Deloitte’s research reveals a stark reality: Only 11% of organizations report successfully embedding AI tools into daily workflows (defined as more than 60% of employees using AI daily). Among organizations with lower usage rates (fewer than 20% of employees reporting daily use), approximately half report returns below expectations for their most advanced AI initiatives.
Meanwhile, the Edelman report highlights a concerning trend: As employee grievances increase, trust in AI and technological innovation significantly decreases. Specifically, Edelman found a 21-point gap in comfort with AI adoption between employees with low grievance levels (50%) and those with high grievance levels (29%).
Key Trust Barriers (Deloitte Findings):
- Concerns about AI reliability and “hallucinations.”
- Data privacy and security worries.
- Fear of job displacement.
- Lack of understanding about AI capabilities.
- Unclear governance frameworks.
Societal Context (Edelman Findings):
- Growing economic anxiety.
- Increased distrust in institutional leadership.
- Rising concerns about workplace fairness.
- Heightened sensitivity to technological displacement.
- Demand for greater transparency.
Implications for HR Professionals
Drawing on Deloitte’s successful pilot of a generative AI (GenAI) assistant and Edelman’s trust research, HR leaders must architect a comprehensive trust-building framework:
1. Skills Development Architecture
- Create transparent learning pathways.
- Implement fair evaluation frameworks that recognize both technical and human skills.
- Design inclusive reskilling programs that address the 58% of employees concerned about automation.
2. Trust-Building Leadership Framework
- Foster psychological safety through open dialogue about AI concerns.
- Develop clear AI governance policies aligned with Deloitte’s four factors of trust:
  - Reliability.
  - Capability.
  - Transparency.
  - Humanity.
- Create collaborative spaces for experimentation and learning.
3. Change Management Excellence
- Implement staged rollouts following Deloitte’s pilot program model.
- Build networks of AI champions (which showed a 65% increase in tool usage in Deloitte’s study).
- Recognize achievements using metrics that matter to employees.
Employee Communications Strategy
Recent research illuminates the path forward. Deloitte’s findings demonstrate that organizations implementing strategic communication frameworks see a 16% improvement in trust metrics. Success emerges from multidimensional approaches: showcasing authentic employee experiences, creating interactive learning spaces through AI team sessions, and fostering collaborative knowledge-sharing environments.
Deloitte Transparency Pillars:
- Savvy user profiles: Highlighting real employee success stories.
- “Ask the GenAI team” sessions: Holding regular Q&A forums.
- Prompt-a-thons: Hosting interactive learning events.
- Community forums: Promoting ongoing knowledge sharing.
Edelman’s analysis reinforces the need for a deeper engagement model. Its framework emphasizes addressing core workforce concerns while creating dedicated spaces for dialogue about AI’s organizational impact. The key insight: Companies that demonstrate clear commitment to human-centered AI adoption through concrete programs and transparent communication build stronger foundations for transformation.
Edelman’s Engagement Model:
- Design an engagement framework.
- Address economic security concerns directly.
- Create forums for addressing broader institutional trust issues.
- Develop programs that demonstrate commitment to fair AI implementation.
Between the Lines:
I believe that the rapid evolution of AI agents and digital workers represents a defining challenge for organizational communications. Success hinges on developing thoughtful messaging frameworks and precise terminology. Getting the language right around concepts such as “digital workforce” and “agentic AI” becomes crucial for building genuine understanding and sustainable trust across the organization.
Building Sustainable Trust
Organizations must recognize that trust in AI is inextricably linked to broader institutional trust. Deloitte’s research shows that organizations successfully building trust saw:
- 65% increase in average user engagement.
- 52% increase in understanding of privacy protection.
- 49% improvement in perceived output quality.
- 14% increase in new users.
- 13% increase in repeat users.
Edelman’s findings reinforce this with several key insights:
- High-trust companies are 2.6 times more likely to see successful AI adoption.
- Companies with strong trust scores see up to 4 times higher market value.
- Employee comfort with AI tools correlates strongly with overall institutional trust.
Implementation Framework: Creating Sustainable AI Trust
The journey toward AI adoption requires more than just technological deployment: It demands a carefully orchestrated approach to building and maintaining trust. Deloitte’s successful pilot program offers valuable insights into how organizations can create a sustainable framework for AI implementation that prioritizes human needs alongside technological advancement.
Starting with Trust Measurement
Before diving into implementation, organizations must establish a clear understanding of their current trust landscape. This goes beyond simple surveys—it requires deep engagement with employees at all levels to understand their hopes, fears, and expectations about AI. Through Deloitte’s research, we’ve learned that successful organizations begin by evaluating trust across four key dimensions: reliability, capability, transparency, and humanity. This baseline assessment becomes the foundation for all future trust-building initiatives.
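The four-dimension baseline described above can be sketched as a simple scoring routine. The dimension names come from Deloitte’s framework, but the 1–5 survey scale, the sample responses, and the 3.5 “gap” threshold below are hypothetical illustrations, not details from the study:

```python
from statistics import mean

# Deloitte's four trust dimensions; the 1-5 survey scale and the
# 3.5 gap threshold are hypothetical illustrations.
DIMENSIONS = ("reliability", "capability", "transparency", "humanity")

def baseline_trust(responses: list[dict[str, int]], threshold: float = 3.5):
    """Average each dimension across survey responses and flag trust gaps."""
    scores = {
        dim: round(mean(r[dim] for r in responses), 2)
        for dim in DIMENSIONS
    }
    gaps = [dim for dim, score in scores.items() if score < threshold]
    return scores, gaps

# Example: three employee responses on a 1-5 scale.
survey = [
    {"reliability": 4, "capability": 4, "transparency": 2, "humanity": 3},
    {"reliability": 3, "capability": 5, "transparency": 3, "humanity": 4},
    {"reliability": 4, "capability": 4, "transparency": 2, "humanity": 3},
]
scores, gaps = baseline_trust(survey)
# Here, transparency and humanity fall below the threshold, so they
# become the targets for the first round of interventions.
```

The point of the sketch is that the baseline yields something actionable: a ranked list of the dimensions where trust is weakest, which then shapes the interventions discussed below.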
The baseline measurement process should engage employees in meaningful dialogue about their experiences and concerns. For example, when Deloitte conducted its initial assessment, it discovered that employees weren’t just concerned about AI reliability—they were also unclear about how AI would impact their daily work and career trajectories. This insight proved crucial in shaping Deloitte’s subsequent interventions.
Designing Human-Centered Interventions
With a clear understanding of trust gaps, organizations can design targeted interventions that address specific concerns while building broader institutional trust. These interventions should be viewed not as one-time solutions but as elements of an ongoing trust-building journey. The most successful organizations create interconnected programs that reinforce each other and build momentum over time.
Consider how Deloitte approached its intervention design: Rather than focusing solely on technical training, it created a multifaceted program that included “savvy user profiles” showcasing real employee experiences, interactive “prompt-a-thons” that made AI accessible and engaging, and regular community forums that fostered ongoing dialogue. Each element was designed to address specific trust barriers while contributing to a broader culture of transparency and collaboration.
Monitoring Progress and Adapting Approaches
The final piece of the implementation framework focuses on creating feedback loops that enable continuous learning and adaptation. Successful organizations establish clear metrics for measuring both trust levels and AI adoption, but they also remain attuned to qualitative feedback and emerging concerns.
Deloitte’s experience shows that regular trust assessments should be complemented by ongoing dialogue with employees. Its most successful interventions evolved based on employee feedback, leading to significant improvements in trust metrics—including a 49% rise in perceived output quality and a 52% increase in understanding of privacy protection measures.
Creating Sustainable Momentum
Organizations must recognize that trust-building is not a linear process. As the Edelman Trust Barometer reveals, broader societal concerns about technology and institutional trust can impact internal AI initiatives. Successful organizations create implementation frameworks that are robust enough to weather these challenges while remaining flexible enough to adapt to changing circumstances.
This might mean adjusting the pace of AI rollout in response to employee feedback, creating new channels for addressing emerging concerns, or modifying training programs to better align with employee needs. The key is maintaining a balance between strategic progress and human consideration.
Future-Proofing Trust
Organizations must start considering how their implementation frameworks can evolve to address future challenges. This might include:
- Creating governance structures that can adapt to new AI capabilities.
- Developing learning programs that anticipate future skill needs.
- Building feedback mechanisms that capture emerging trust concerns.
- Establishing communication channels that support ongoing dialogue.
By viewing implementation through this longer-term lens, organizations can create frameworks that not only support current AI initiatives but also build the foundation for future technological advancement. The goal is not just to implement AI successfully today: It’s to create an environment where trust in technology can flourish over time.
Through this more comprehensive approach to implementation, organizations can create the conditions for sustainable AI adoption while nurturing the trust that makes such adoption possible. As both the Deloitte and Edelman research demonstrates, success in AI implementation isn’t just about the technology—it’s about creating an environment where both humans and AI can thrive together.
Looking Ahead
Both studies emphasize that trust-building must be proactive and continuous. Deloitte’s research shows that organizations prioritizing trust from the start see significantly better ROI on their AI investments. Edelman’s findings suggest that addressing broader societal concerns alongside technological implementation creates more sustainable adoption patterns.
Success Metrics to Track (Combined Recommendations):
- Trust scores across the four factors (reliability, capability, transparency, and humanity).
- AI tool adoption rates.
- Employee satisfaction metrics.
- Productivity improvements.
- Skills development progress.
- Return on AI investments.
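One hypothetical way to track the combined metrics above is to capture a snapshot per review period and compute period-over-period deltas; the field names paraphrase the recommendations, and all values are illustrative rather than figures from either study:

```python
from dataclasses import dataclass, asdict

# Illustrative metric snapshot; field names paraphrase the combined
# Deloitte/Edelman recommendations, and all values are hypothetical.
@dataclass
class TrustSnapshot:
    period: str
    trust_score: float    # average across the four trust factors, 1-5
    adoption_rate: float  # share of employees using AI tools daily
    satisfaction: float   # employee satisfaction index, 0-100
    ai_roi: float         # return on AI investment (1.0 = break-even)

def deltas(prev: TrustSnapshot, curr: TrustSnapshot) -> dict[str, float]:
    """Period-over-period change for every numeric metric."""
    p, c = asdict(prev), asdict(curr)
    return {k: round(c[k] - p[k], 2) for k in p if k != "period"}

q1 = TrustSnapshot("Q1", trust_score=3.2, adoption_rate=0.18,
                   satisfaction=61.0, ai_roi=0.9)
q2 = TrustSnapshot("Q2", trust_score=3.6, adoption_rate=0.27,
                   satisfaction=66.0, ai_roi=1.1)
changes = deltas(q1, q2)  # positive deltas across the board in this example
```

Tracking deltas rather than raw scores keeps attention on whether trust-building efforts are actually moving the needle each period.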
The path forward requires patience, persistence, and an unwavering focus on the human element of technological change. By putting trust at the center of AI adoption strategies, organizations can create a future where technology enhances rather than diminishes human potential.
As Deloitte’s research concludes, “Building trust is not just about technology acceptance—it’s about creating the type of organizations we want to belong to, and the type of world we want to live in.”