Trust in the Age of AI: A Multifaceted Approach to Organizational Success and Workplace Civility
On The AI+HI Project podcast, I’ve had the privilege of engaging in two thought-provoking conversations that have deepened my understanding of trust in the artificial intelligence era. First, I spoke with Christopher Fernandez, corporate vice president of HR at Microsoft, about organizational trust and AI implementation. Following that enlightening discussion, I chatted with machine learning expert, business leader, and entrepreneur Tim Hwang, with whom I explored the critical issues of misinformation and trust in the workplace. These conversations have led me to reflect deeply on the multifaceted nature of trust in our rapidly evolving technological landscape.
As we stand on the brink of an AI-driven transformation, the concept of trust is becoming more critical than ever. With deepfakes, misinformation, and automated systems influencing nearly every aspect of our lives, organizations must rethink how they foster trust within their teams and with the world at large.
Even before generative AI (GenAI), enterprises faced a trust gap, according to PwC's 2024 Trust Survey. In that research, 93% of executives said they believe their organization's "ability to build and maintain trust improves the bottom line." Yet while 86% of executives said they think employee trust is high at their business, only 67% of employees said they highly trust their employer. Several studies have shown that GenAI layers in additional opportunities for mistrust. Among them is a report by Accenture, which found that while 95% of employees see value in working with GenAI, "they don't trust organizations to ensure positive outcomes for everyone."
The Three Levels of Trust in Organizations
At the core of any organization’s success is trust—trust in leadership, trust among peers, and trust in oneself. Each type of trust plays a pivotal role in how AI and other advanced technologies are adopted and integrated. For organizations to thrive in this new era, they must recognize and cultivate trust across all three of these areas.
1. Trust in Leadership.
Trust in leadership is foundational. Leaders set the tone for the organization, and their actions must reflect their words. Employees need to believe that their leaders have not only the best interests of the organization in mind but also their personal and professional growth. Leadership trust is built when employees see transparency, integrity, and a commitment to creating an environment in which people can thrive, especially as AI introduces new uncertainties and opportunities.
However, leadership trust is not enough on its own. Leaders cannot simply stand at the top and say, “Trust me.” Trust in leadership needs to be reinforced by what employees witness happening around them, particularly among their peers.
2. Trust Among Peers.
Peer-to-peer trust plays an equally powerful role when it comes to organizational success. One of Microsoft’s most inspiring takeaways from its AI transformation journey, as Christopher Fernandez highlighted, was the emergence of peer-to-peer trust. At Microsoft, employees started automating their tasks within communities of practice. This allowed their peers to witness firsthand how automation led to greater freedom. More importantly, they could see how the company reacted to that newfound freedom. Were people able to fill their time with more interesting work, or were they simply loaded up with more drudgery?
This peer-level observation creates a natural bridge between leadership trust and personal trust: If I see my colleague use technology to gain more freedom and fulfillment in their work, I’m much more likely to believe that the same technology will benefit me, too. When employees can witness a colleague's success with automation—whether through enhanced creativity or more control over their work—they are more likely to trust the process and the technology. The result is not abstract; rather, it’s something they can see working right next to them. Peer validation, in this context, becomes one of the most powerful trust-building mechanisms within an organization.
3. Trust in Oneself.
Perhaps the most transformative form of trust in an organization is each employee’s trust in themselves. What fascinates me most about successful AI implementations, like the Microsoft case, is how they can reaffirm and expand an employee’s understanding of their own value. When an organization implements AI in a way that helps its people recognize and enhance their unique contributions, this creates a profound level of trust—not just in the technology but in employees’ intrinsic sense of their own talents and purpose.
As employees interact with AI systems, they often realize that their unique knowledge and experience become even more crucial, enabling them to craft more targeted prompts and supply the context that AI needs to deliver truly meaningful results. This self-trust not only drives personal growth but also catalyzes innovation, because employees feel empowered to experiment, collaborate, and contribute to the organization's evolving needs.
Misinformation in the Workplace: The Role of Trust and Civility
Building on that discussion, my podcast conversation with Tim Hwang shed light on dealing with misinformation in the workplace. As the U.S. heads into another election season, we're bracing for an unprecedented wave of misinformation. Because new technologies make it easy for anyone to generate convincing images with no meaningful controls or provenance, we can no longer fully trust what we see or hear on the internet.
But here's what I believe: While we may not be able to trust content outright, we can still trust individual people and organizations, as well as transparent processes. If an organization has deeply integrated, responsible AI practices, then its employees are more likely to trust a framework that they've witnessed and can understand.
There’s a lot of effort out there focused on developing technology to detect deepfakes, but Tim and I discussed how this approach might be a constant game of catch-up. Instead, we explored the importance of creating organizational structures that proactively address the root causes of why someone might create or spread misinformation to help prevent these actions before they even begin.
For instance, does an organization have the structures in place to help a frustrated employee work through their issues constructively, rather than resorting to creating a deepfake of their manager? Do employees feel comfortable going to HR with serious concerns? Are there support systems, coaching opportunities, and conflict resolution processes in place to address difficult situations before they escalate?
These questions highlight how trust isn’t just about believing in the authenticity of information but also about having faith in the systems and people around us to handle challenges ethically and effectively. Moreover, fostering a culture of civility in the workplace can reinforce this trust, making it easier for employees to communicate openly and resolve conflicts constructively. Trust makes civility easier, and civility, in turn, strengthens trust—creating a powerful defense against misinformation that technology alone cannot provide.
The Rise of Curation and Taste
In this era of uncertainty, I predict we’ll see a shift toward trusting content curators. For example, while I might not be able to trust much of what I see online, if I know that a respected colleague has chosen to include an image, video, or article on their social media feed or website—and I’m familiar with their voice and am a part of their community—I’m more likely to trust their curated selection, even if they’ve altered the original content in some way.
During a recent speaking engagement at BrainMind, a philanthropic neuroscience organization, I told the organization's interns that the most important thing they can do in the age of AI is to develop, and be able to defend, their own personal sense of taste. That is, they should build the skill of judging the quality of something without needing input or support from other people or from technology. With AI generators readily available, many people will reach for them rather than doing their own deep thinking, but that's a mistake: Those who rely too heavily on AI never do the work of developing their own taste and point of view.
In a world where nothing online can be fully trusted, people will follow creators, leaders, and organizations with defined taste in addition to a credible and trusted brand, reputation, and product or service. And remember, taste isn’t just about creativity—it’s about conviction born from experience, whether in product development, strategic thinking, or content curation.
The Season of Trust
Trust is at the heart of successful AI adoption. I predict that GenAI implementations that fail to establish employee trust—whether in their leadership, their peers, themselves, or the technology itself—will struggle to achieve meaningful results. The success of AI will depend not only on the capabilities of the technology but also on the trust it fosters within the organization.
Most CEOs expect GenAI to transform their businesses, but many employees doubt their management’s ability to handle the transition effectively. Closing this trust gap is critical because the most effective AI use cases will emerge from within the organization. Employees closest to the work will identify the greatest opportunities for automation and innovation. However, if employees distrust leadership, they’ll hesitate to share insights and engage in building and refining the business architecture, potentially stifling the innovation that AI promises. Therefore, building trust at all levels is essential for unlocking AI’s full potential.
As we navigate the complexities of AI adoption and the growing influence of misinformation, trust becomes the most valuable currency. Trust in leadership provides stability, trust among peers fosters collaboration, and self-trust empowers individuals. In an era of abundant yet unreliable information, curated taste will serve as a beacon of trust in both personal and professional contexts.
The future of AI isn’t just about technology—it’s about people. It’s about creating environments in which trust thrives, innovation is driven by human creativity, and technology enhances rather than replaces human experience. Organizations that succeed in the AI era will understand the multidimensional nature of trust and actively cultivate it across all levels. Getting this right will determine the success of AI in business and society.