As Americans await the results of this year’s election, a unique alignment has emerged, not around the candidates but around generative artificial intelligence (GenAI). Between Hesitation and Hope, a new report produced by the nonprofit More in Common in partnership with the Omidyar Network, highlights that Democrats and Republicans share strikingly similar attitudes about GenAI’s potential risks and about their trust in tech companies and government to manage it responsibly. For a society as polarized as ours, this alignment stands out, creating a rare space where Americans are more united than divided.
Most Americans view GenAI with caution. Among Democrats and Republicans, 70% and 72%, respectively, believe that “big tech companies do not help the American public,” while 67% of Democrats and 73% of Republicans agree that government policies about GenAI fail to prioritize citizens’ best interests. These perspectives underscore a shared distrust in how AI is being managed, specifically by Big Tech and policymakers, and skepticism that AI’s potential can be realized without proactive safeguards. It’s a rare bipartisan consensus, and an opening to approach AI with more united, ethical foresight.
AI’s Double-Edged Sword: Opportunity and Caution
GenAI has made headlines for its transformative potential, yet Between Hesitation and Hope reveals that nearly half of Americans (49%) feel uncertain about GenAI’s impact, while 36% express interest and 29% are worried. This spectrum of emotions illustrates that while some Americans see AI as a powerful tool for progress, others fear it will deepen social problems. A large segment (particularly rural Americans, women, and those with a low sense of belonging) is more concerned about GenAI’s negative impacts, and many respondents foresee greater distrust, division, and dependency.
The statistics are stark. More than 4 in 5 Americans (83%) fear that GenAI will erode trust in news, while 65% worry it will further strain interpersonal trust. Additionally, 76% believe that GenAI could make us lazier, with a particular concern about its effects on future generations’ critical thinking skills. These findings highlight the societal tension GenAI has introduced: It could either strengthen or further fracture our social fabric, depending on how we manage its influence.
Why These Views Matter in the Workplace
For HR professionals, these insights into public sentiment on AI are invaluable. The challenges facing GenAI in society at large—trust, misinformation, and fears of dependence—are also seen in the workplace. As AI becomes more integrated into organizational processes, HR leaders must navigate not only the technical aspects of GenAI but also its impact on workforce trust and engagement. We connected with the report’s lead researcher, Stephen Hawkins, to learn more.
My Q&A with Stephen Hawkins, Global Director of Research for More in Common
From your research, what would you say is a good first step for HR leaders when introducing AI to their teams?
SH: Rather than pitching a product to staff as bulletproof, acknowledge the ambivalence most Americans feel towards AI: both that it is occasionally astoundingly impressive and that it often flops and flounders with even simple tasks. Workplace tools will likely generate some of each type of experience, and employees should expect that. (See pages 10 and 17 of the report.)
That’s an insightful approach, setting realistic expectations. How can HR leaders ensure employees feel supported rather than threatened by these new AI tools?
SH: It’s essential to help employees feel empowered, rather than threatened, by AI. For instance, we found that perception of AI work assistants as “helpful” was meaningfully higher than for [AI] work supervisors, who were more likely to be deemed “harmful.” Before implementing a solution, HR leaders should ask how it would bolster employees’ sense of agency and empowerment (page 37).
That’s a helpful way to look at it. So it sounds like keeping AI in the right role is key. Are there particular roles or situations where AI might be less appropriate?
SH: HR is so often the place where employees look for empathetic support in sensitive matters involving conflict, discomfort, and ethical questions. We found Americans were more concerned about engaging with AI in roles often associated with higher levels of empathy, such as friends, judges, and romantic partners. HR leaders should be slow and hesitant to place AI tools in a role where employees need a listening ear or diplomatic guidance on navigating sensitive, emotional workplace issues (page 35).
What HR Managers Can Do Today
Studies show the success rate of AI projects remains low, with more than 80% failing, primarily due to human factors rather than technical limitations. Many of these failures stem from a lack of transparent communication, insufficient employee buy-in, and poor alignment with organizational values. For HR managers, the societal consensus on GenAI presents an opportunity to proactively address these challenges by fostering an AI-literate, trust-oriented workplace culture.
Here’s how HR can use these findings to guide GenAI’s adoption in ways that prioritize employee well-being and engagement:
1. Develop Transparent AI Policies to Foster Trust
Just as Americans express a deep distrust of Big Tech’s handling of GenAI, employees within organizations need clarity on how AI will impact their roles. HR can lead by establishing transparent policies that outline how AI will be used and ensure employees understand these changes. Open discussions, town halls, and AI literacy programs can demystify AI and address employee concerns, building a foundation of trust that’s essential for successful AI integration.
2. Prioritize Training on Critical Thinking and Other Human Skills
With 62% of Americans worried that GenAI may reduce societal intelligence by weakening critical thinking, HR has a vital role in counterbalancing these risks. As AI increasingly handles repetitive or analytical tasks, organizations have the opportunity to focus human roles on creative, interpersonal, and critical thinking abilities. Developing training programs that nurture these uniquely human capabilities will allow organizations to harness AI’s potential while also building a resilient, adaptive workforce.
3. Encourage a Culture of Collaboration and Purpose
With Americans fearful of AI’s divisive potential, HR can actively promote a culture of collaboration and shared purpose. AI should enhance rather than replace human interactions in the workplace. HR teams can create spaces for employees to share their AI experiences, collaborate on AI-driven projects, and engage in communities of practice. This approach not only maximizes AI’s effectiveness but also keeps employees connected and motivated by a sense of purpose.
A Path to Unity: Building on Shared Concerns
The More in Common report suggests GenAI could be a unifying factor in the U.S., given the bipartisan agreement on its risks. This convergence could help bridge social divides by fostering conversations rooted in shared values and collective interests. At a time when distrust in institutions runs high, GenAI has emerged as a powerful catalyst for dialogue—a tool whose responsible development is a priority that transcends partisan divides.
For organizations, this represents a call to action: HR and leadership must recognize the broader societal unease around AI and work to make its workplace adoption an inclusive, transparent process. Drawing from the shared concerns Americans have about GenAI’s role in society, companies can position themselves as leaders in ethical AI by establishing clear governance, fostering human-centric innovation, and actively involving employees in shaping AI policies. This approach doesn’t just prepare employees for an AI-enhanced future; it also affirms their role as stakeholders in this transformation.
In 2025, corporate leaders can prepare employees for these sweeping changes by supporting skills-first initiatives that help workers access a broader range of opportunities, hosting town halls with employees, and proactively redesigning work.
The Opportunity Ahead
Ultimately, GenAI could be more than a technology; it could be a catalyst for unity. By addressing public concerns and aligning AI’s development with shared human values, HR leaders can help organizations create workplaces that are not only more efficient but also more trusted and inclusive. The More in Common report serves as a reminder that even in a divided world, we can come together to see AI not as a tool for division but as an invitation to collectively shape a more connected, resilient future.
As we consider these findings amid the election, we’re reminded that unity on complex issues such as AI is possible. Americans’ shared views on GenAI present a chance for the private and public sectors to create an AI future that serves us all—a future rooted in trust, inclusion, and purpose.