As artificial intelligence becomes more integral to recruitment and workplace processes, HR leaders face a growing challenge: how to ensure AI enhances, rather than undermines, organizational integrity.
While 70% of companies are expected to use AI in hiring by 2025, an estimated 60% of organizations lack clear AI usage policies, leaving them vulnerable to legal, ethical, and operational risks. Furthermore, over a third (36%) of organizations are not confident their AI deployments comply with current regulations.
The rise of AI-driven “skills mirages”—where candidates artificially enhance their qualifications—along with increasing misuse of AI by both candidates and employees calls for immediate and strategic action by HR leaders.
Here’s how AI misuse shows up throughout the employee life cycle, from candidate screening to on-the-job training.
AI Misuse by Candidates
The use of AI by job candidates is disrupting traditional metrics of qualifications and skills. Candidates now leverage AI to exaggerate their resumes, answer interview questions, obtain certifications, and manipulate technical assessments. This behavior can lead to operational setbacks when unqualified hires fail to meet job demands, costing companies time and resources.
Although direct studies on this issue are scarce, anecdotal evidence suggests HR professionals frequently encounter applicants whose skills appear to be inflated by AI tools. This trend is more pronounced among younger candidates, with employers reporting hundreds of identical cover letters from different Gen Z applicants. Additionally, 61% of Gen Z say they can't imagine doing work without using generative AI.
AI Misuse in Hiring
While AI misuse can begin with inflated candidate qualifications, it extends into hiring processes. The widespread use of AI in hiring is fraught with transparency and fairness challenges. Nearly half of employed U.S. job seekers (49%) perceive AI-driven recruitment tools as more biased than human recruiters.
This rapid AI adoption in recruitment carries significant legal and ethical risks. In a recent class-action lawsuit against Workday, Inc., plaintiffs alleged that the company's AI-powered hiring tools discriminated against specific demographics, including Black and older applicants. Past problems at companies including Amazon and HireVue further highlight the risks.
AI Misuse on the Job
Once hired, employees often navigate the workplace with limited guidance, given that 60% of organizations lack clear AI usage policies. Employees may use AI to generate reports or presentations that appear to be their own work without fully understanding or validating the content. AI tools can also be misused to bypass standard protocols for performance reviews or promotions, increasing the risk of inaccuracies, ethical violations, and poor decision-making.
In addition, AI misuse on the job could lead to data leak risks. Research by CybSafe reveals that 93% of U.S. workers may be unknowingly sharing confidential information via AI tools, and 38% admit to sharing data they wouldn’t casually disclose to a friend at a bar.
These issues highlight the urgent need for HR leaders to proactively manage AI governance. Yet, according to a DLA Piper study, 99% of AI decision-makers cite establishing robust AI governance as one of their biggest challenges, and 39% remain unclear about the evolving landscape of AI regulation.
4 Ways for HR Leaders to Address AI Misuse
1. Foster AI+HI Collaboration
Promote a culture where AI complements human intelligence (HI) in decision-making, rather than replacing it. HR leaders should begin by organizing workshops that demonstrate practical applications of AI in automating repetitive tasks, such as data entry or scheduling, while emphasizing the value of human oversight in decision-making. SHRM's AI+HI initiative highlights how integrating human intelligence with AI can ensure more effective, ethical, and transparent use of AI tools.
2. Develop Ethical AI Frameworks
HR leaders can start by creating an AI ethics committee within their organization that is tasked with developing clear guidelines for AI use in recruitment, performance evaluations, and decision-making processes. A key first step is conducting a thorough audit of existing AI tools to assess them for bias and ensure they are compliant with anti-discrimination regulations. Establishing regular AI ethics reviews will help ensure that tools are used ethically and responsibly.
3. Implement Enhanced Verification Methods
To combat AI-inflated qualifications, HR leaders should implement scenario-based assessments during the recruitment process. This can include job simulations or “real-world” challenges that mirror the tasks candidates will face in their roles. By adopting such methods, HR leaders can ensure skills are verified through practical applications rather than static certifications, minimizing the risk of AI-inflated qualifications.
4. Launch AI Literacy and Training Programs
To ensure employees understand both the potential and the limitations of AI, HR leaders should develop a comprehensive AI literacy program. This could include online courses, hands-on workshops, and real-life case studies, with an emphasis on ethical decision-making and how AI fits into existing workflows. A key actionable step would be to incorporate this training into the onboarding process for all new hires to build a strong foundation from day one.
Stepping Up to Prevent AI Misuse
HR leaders must step up now to establish clear policies, ensure ethical AI use, and educate their teams—or risk facing significant consequences from misuse and mismanagement.
They must go beyond just preventing misuse to shape the future of AI in their organizations. This means integrating AI into the strategic workforce planning process, aligning it with broader organizational goals, and ensuring that human judgment remains at the core of every decision.