Overview
Artificial intelligence (AI) is the use of machines and software to perform tasks that have typically required human intelligence. About 1 in 4 organizations use AI to support HR-related activities. Nearly half of the HR professionals surveyed by SHRM in January 2024 said using AI to support HR has become somewhat or much more of a priority than in the previous year. The survey was conducted among 2,366 HR respondents representing organizations of all sizes in a wide variety of industries across the U.S.
A significant portion of HR leaders continues to lack a deep understanding of this groundbreaking technology. According to SHRM Research conducted in August 2024, 38% acknowledged having limited or no theoretical knowledge of AI, while 58% admitted to only a basic understanding of its core principles.
How Is AI Used by HR?
AI is being used in the workplace to manage the full employee life cycle, from sourcing and recruitment to performance management and employee development. Recruitment and hiring are by far the most popular areas where AI is used for employment-related purposes, according to research from SHRM in 2022. However, AI can be used in almost any HR discipline.
In many cases, AI enables companies to automate repetitive tasks, streamline operations, and improve efficiency. It also provides data-driven insights, empowering businesses to make smarter decisions and enhance workforce productivity. Generative AI, such as OpenAI’s ChatGPT, allows users to ask questions in a conversational manner to find answers or to create or edit written content. A few examples:
- Performance management: Employers are now leveraging AI systems to assess and reward employees based on their skills rather than just their job titles. This shift reflects the demand for adaptive, data-driven pay and performance decisions in a competitive, skill-focused market.
- Salary decisions: AI-driven tools provide salary adjustment recommendations by analyzing real-time labor market trends and skill demands. Managers using such tools have reported lower attrition rates, as employees better understand how their skills and compensation align. AI also helps predict skill valuations, offering insights into future market demands and costs. Adopting AI for pay decisions isn't without challenges, however: if unchecked, algorithms may inherit biases from historical compensation data. To mitigate this, employers must implement rigorous data governance and actively monitor recommendations, ensuring fair outcomes for all employees.
- Writing tasks: A manager might ask a generative AI chatbot to write an employee recognition letter, or a recruiter might prompt it to draft a job description (see the sketch after this list). While the output from generative AI programs can be impressive, human review and final editing are almost always necessary. Employers are increasingly implementing acceptable AI use policies, particularly for generative AI.
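As a minimal sketch of how a recruiter's prompt might look in code, the snippet below uses the OpenAI Python SDK to draft a job description. The model name, prompt wording, and helper function are illustrative assumptions rather than a recommended setup, and the output should always go through human review as noted above.

```python
# Minimal sketch: drafting a job description with a generative AI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def draft_job_description(title: str, must_haves: list[str]) -> str:
    prompt = (
        f"Draft a job description for a {title}. "
        f"Required qualifications: {', '.join(must_haves)}. "
        "Use inclusive, bias-free language."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your organization approves
        messages=[{"role": "user", "content": prompt}],
    )
    # Treat the result as a first draft only; route it through human review.
    return response.choices[0].message.content

print(draft_job_description("HR analyst", ["SQL", "workforce analytics"]))
```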
Legal Issues
AI-powered recruitment tools are transforming hiring processes, but they have faced scrutiny for perpetuating biases present in historical data.
In 2025, AI policies took on a new dimension with the issuance of an executive order aimed at redefining the U.S. approach to AI innovation and regulation.
The Trump administration’s executive order shifted the regulatory focus from the worker and consumer protections introduced by the Biden administration to prioritizing AI innovation. The order directed executive branch agencies to review existing policies and eliminate perceived barriers to American AI advancement.
With reduced federal oversight, states such as Colorado have implemented stricter AI compliance laws. These disparate regulations create a fragmented regulatory landscape, complicating compliance for businesses operating across multiple jurisdictions.
Pro Tip: Be aware of AI hiring laws in every state where you recruit or employ workers, not just the state where your headquarters is located.
State Laws
Currently, only a few states and localities have legal requirements specific to AI. Colorado, Illinois, Maryland, and New York City are examples of jurisdictions with such laws, and more states and municipalities are likely to follow.
State AI laws related to employment typically address:
- Notifying applicants and employees of AI use.
- Gathering consent before use.
- Being transparent about the technology and disclosing how it works.
- Taking steps to avoid algorithmic discrimination.
- Completing impact assessments and audits of AI systems and outcomes.
- Implementing AI risk management policies.
Title VII Discrimination
AI systems, while efficient, can inadvertently result in Title VII discrimination if not diligently monitored and reviewed. Title VII of the Civil Rights Act prohibits employment discrimination based on race, color, religion, sex, or national origin. However, AI tools may unintentionally perpetuate biases, as demonstrated in the infamous case of Amazon's AI recruiting tool, which penalized résumés associated with women because of biased training data.
To prevent such issues, here are some actions you should take:
- Regularly audit AI algorithms, assess their data inputs, and ensure ongoing compliance with the Uniform Guidelines on Employee Selection Procedures.
- Conduct bias audits and monitor for adverse impact when using AI for hiring, promotions, and other employment decisions (a minimal sketch of such a check follows this list).
- Recognize that even facially neutral algorithms can yield discriminatory outcomes if the underlying data reflects historical biases. Comprehensive understanding and validation of AI tools are essential to comply with legal standards and avoid unintended violations.
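As an illustration of what a basic adverse-impact check can look like, the sketch below computes selection rates by group and applies the four-fifths (80%) rule associated with the Uniform Guidelines. The counts and group labels are hypothetical, and a real audit would involve legal counsel and more rigorous statistical testing.

```python
# Minimal sketch: four-fifths (80%) rule check for adverse impact.
# Counts and group labels are hypothetical; a real audit needs legal
# review and more rigorous statistical testing.

# Hypothetical hiring outcomes by applicant group.
outcomes = {
    "group_a": {"applicants": 200, "selected": 60},
    "group_b": {"applicants": 150, "selected": 30},
}

# Selection rate = share of each group's applicants who were selected.
rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this hypothetical, group_b's impact ratio of roughly 0.67 falls below the 0.8 threshold, which is a signal to investigate further, not automatic proof of discrimination.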
Beyond Title VII, employers must consider provisions of the Americans with Disabilities Act (ADA) and the Age Discrimination in Employment Act (ADEA). AI screening tools that inadvertently disadvantage people with disabilities, such as video interview software that assesses speech patterns, risk ADA violations.
Americans with Disabilities Act
There is significant concern that the use of AI may disadvantage job applicants and employees with disabilities. The federal Equal Employment Opportunity Commission (EEOC) has issued guidance on avoiding adverse impact under the ADA when using AI for employment purposes.
Reasonable accommodations must be provided to individuals with disabilities when a medical condition makes the technology difficult to use or causes it to produce less favorable results.
The EEOC provides the following examples of AI technology that may negatively impact an individual with a disability:
- A chatbot programmed to reject all applicants with significant gaps in their employment history could screen out an applicant with a disability whose gap was caused by that disability (for example, an individual who needed to stop working to undergo treatment).
- A gamified assessment of memory that has been shown to be an accurate measure of memory for most people in the general population could screen out individuals who have good memories but are blind, and who therefore cannot see the computer screen to play the games.
Employers should clearly communicate that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities and provide clear instructions for requesting reasonable accommodations.
Background Checks
Many background check vendors use AI to gather information on an individual's criminal history and other personal data. The same bias and discrimination issues found in other selection procedures can arise and run afoul of the Fair Credit Reporting Act (FCRA).
- Avoid bias and ensure the technology being used has been carefully vetted for compliance with the FCRA.
- Consider the requirement under Title VII for employers to make an “individualized assessment” of a candidate's criminal history when determining if the information is job-related and consistent with business necessity. This assessment requires employers to consider factors that may not be identified by an algorithm and often requires a human conversation with the candidate.
Global Considerations
While the benefits and concerns of using technology in employment decisions are often the same from one country to another, the regulatory requirements are likely to vary greatly. Employers should ensure they understand and comply with the laws in every location where they have employees.
Transparency
Transparency is critical when companies use AI in their operations, especially in decisions impacting employees and candidates. To begin, your company should disclose exactly where and how AI is being applied.
Example: If an AI tool is evaluating job applications, the traits and characteristics it measures (e.g., communication skills, problem-solving abilities) and the methods used to assess them should be clearly communicated.
Additionally, it’s important to maintain open communication by providing job applicants and employees with detailed information about the AI processes in use. Outline the measures you’ve implemented to minimize risks, such as regular audits or diverse datasets. Transparency also includes obtaining informed consent before using AI tools in employment decisions. This ensures employees and candidates remain engaged and aware of the impact AI tools may have on their employment-related processes.
Finally, emphasize the role of human oversight in AI-powered decision-making, reinforcing that these tools are designed to assist, not replace, human judgment. Share your company’s commitment to ethical AI use, highlighting policies designed to ensure fairness, reduce bias, and protect individual rights.
Mitigating Bias in AI
Take a comprehensive and collaborative approach to ensure ethical outcomes in workplace decision-making. Here are some examples of how to do that:
- Form multidisciplinary innovation teams that include legal and HR representatives. These teams can evaluate AI tools from diverse perspectives, ensuring compliance with laws, alignment with company values, and consideration of employee well-being.
- Regularly audit data access controls.
- Evaluate what is being measured before implementation and conduct ongoing reviews to address unintended consequences.
- Opt for external vendors that prioritize inclusivity during development. Employers can be liable for discrimination claims even when a third party developed the tool.
- Implement disclosure and informed consent strategies when appropriate. Inform employees about how AI tools collect and use their data and how decisions are made (a minimal consent-record sketch follows this list).
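As one way to picture what a disclosure and consent log might look like, the sketch below defines a simple record structure for documenting informed consent. The field names and example values are hypothetical assumptions, not a prescribed standard.

```python
# Minimal sketch: recording informed consent for AI tool use.
# Field names and structure are hypothetical, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    employee_id: str
    tool_name: str   # the AI tool that was disclosed to the individual
    purpose: str     # what the tool does with the person's data
    consented: bool  # whether the individual agreed after disclosure
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example usage: log consent after disclosing how the tool works.
record = ConsentRecord(
    employee_id="E-1042",
    tool_name="resume-screener",
    purpose="Rank applications by stated qualifications",
    consented=True,
)
print(record)
```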