HR Implications of Biden’s AI Executive Order
Human resource professionals searching for guidance on managing artificial intelligence should pay attention to the Biden administration's Oct. 30 executive order that seeks to manage the risks of AI and reap its benefits.
As HR executives ponder how they can use AI to create efficiencies, simplify work, and find cost savings, the executive order establishes new standards for AI safety and security in federal agencies and requires measures that will impact companies as they apply AI capabilities to their business operations.
The executive order sets an example for the private sector by, among other things, establishing standards and best practices for detecting AI-generated content and authenticating official government communications. It also requires vendors that develop AI software to share their safety test results, which will help government agencies and private companies that use AI tools.
According to the White House, the federal government is taking action to develop principles and best practices to reduce the harm AI can cause workers by providing guidance to "prevent employers from under-compensating workers, evaluating job applications unfairly, or impinging on workers' ability to organize."
To help the U.S. lead the world in AI-driven innovation and competition, the executive order also directs agencies to make it easier for highly skilled immigrants with expertise in critical areas to study and work in the U.S.
"The administration is addressing key issues to mitigate the fear around AI," said Tommy Jenkins, vice president of recruiting at San Francisco-based RocketPower. "They see there is going to be a significant impact around labor, and they understand that the executive order has to establish some guidelines around AI because it's more than just a U.S. based effort, it has to be a global initiative."
As employers focus on hiring and retaining the talent needed to implement, manage and operate large AI models responsibly, Jackie Watrous, senior director analyst at Gartner's HR practice, said there are three areas of the executive order that will affect HR:
- Considering how AI use within an organization may impact jobs and workforce responsibilities. The executive order directs the federal government to assess the impact of AI on the workforce, develop strategies to mitigate any negative impacts, and support programs that help workers develop the skills and knowledge they need to succeed in the AI economy.
- Ensuring that any use of AI tools has undergone the appropriate rigor to prevent discrimination, an area that many HR executives consider to be a priority. Watrous said the executive order "calls for the development of standards and guidelines for the responsible development and use of AI. These standards and guidelines should address the issue of bias and discrimination."
- Promoting innovation, which may include upskilling existing talent and bringing in AI-skilled talent from outside the U.S.
"The executive order calls for the federal agencies to promote innovation in AI, including by supporting research and development in the field of AI. The order also emphasizes the importance of attracting AI talent from outside the U.S. and enabling accelerated hiring pathways," Watrous said.
Improving U.S. companies' ability to scout for AI-related talent overseas aligns with RocketPower's multinational talent programs.
Many of the initiatives outlined in the wide-ranging executive order will help HR professionals who use AI tools for tasks such as recruiting and hiring talent, as well as analyzing employee data.
The executive order also applies to generative AI tools, which exploded onto the corporate landscape when OpenAI launched ChatGPT in November 2022. Since then, several companies have built their own generative AI tools that create content such as text, images, sound, animation and 3D models.
According to Zachary Chertok, research manager for employee experience at research firm IDC, AI builds on many years of technological advancements that organizations have already used to establish internal use cases. On the security front, many technology firms and their clients have already tested technical safeguards, such as encryption, firewalls, data masking, and data erasure, to comply with global regulations.
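To make one of those safeguards concrete, below is a minimal, hypothetical sketch of data masking applied to an HR record before it is shared with an AI tool. The field names, masking rules, and the mask_record helper are illustrative assumptions, not drawn from the executive order or from any specific vendor's product.

```python
import re

# Hypothetical HR record; field names are illustrative only.
employee = {
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "ssn": "123-45-6789",
    "salary": 95000,
    "performance_notes": "Exceeded Q3 targets.",
}

SENSITIVE_FIELDS = {"ssn", "salary"}  # fields to mask entirely


def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked
    and email addresses partially redacted."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = "***REDACTED***"
        elif key == "email" and isinstance(value, str):
            # Keep the domain, hide the local part of the address.
            masked[key] = re.sub(r"^[^@]+", "****", value)
        else:
            masked[key] = value
    return masked


print(mask_record(employee))
# {'name': 'Jane Doe', 'email': '****@example.com', 'ssn': '***REDACTED***', ...}
```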
Chertok added, however, that what has changed recently is the arrival of generative AI tools that were released to the public without first being vetted in the controlled environment of a technology company's research and development department.
"The rapid advancement of generative AI as an unknown element elevated public consciousness of AI tools as they now had to be retroactively trained for accuracy, trust, and reliability in their use cases, output, and outcomes," he said. "The Biden administration's executive order calls for the development of standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy including their insights and output."
Of particular concern is governance for use cases in materials, engineering, finance, health care, and the public sector, Chertok said.
"In the wake of GDPR, the HR world already takes individual data anonymity seriously and goes above and beyond to protect sensitive employee information even if the information is technically noncompromising," Chertok said. "HR data impacts employee behavior, sentiment, and subsequent retention, leading HR professionals to be well-versed in raising concerns around sensitivity."
HR professionals, Chertok predicted, will have more of a role to play in protecting against AI-enabled fraud and deception by establishing standards and best practices for detecting, authenticating, and certifying AI-generated content.
"HR is going to be an internal steward of this for employment verifications, employment fraud, and misused credentialing," he said. "AI deception is a risk for persona detection and makes current badging and photo-ID verifications systems vulnerable."
Chertok added that while many systems already use multi-factor approaches, HR will need to stay a step ahead of fraud involving impersonated files and records by leaning into technologies that treat the threat as seriously as HR knows it must.
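To ground the idea of certifying records so that impersonated or altered files can be detected, here is a minimal sketch that signs an employment-verification record with an HMAC and rejects any tampered copy. The record format, field names, and key handling are assumptions for illustration only; a real deployment would manage keys in a vault and define its own record schema.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; in practice this would live in a managed key vault.
SECRET_KEY = b"replace-with-a-managed-secret"


def sign_record(record: dict) -> str:
    """Return an HMAC-SHA256 signature over a canonical form of the record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def verify_record(record: dict, signature: str) -> bool:
    """Check that the record has not been altered since it was signed."""
    return hmac.compare_digest(sign_record(record), signature)


record = {"employee_id": "E-1042", "employer": "Example Corp", "verified": True}
sig = sign_record(record)

print(verify_record(record, sig))         # True: record matches its signature
record["employer"] = "Impersonated LLC"   # simulate a tampered copy
print(verify_record(record, sig))         # False: alteration is detected
```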
The good news, according to HR analyst Josh Bersin, is that many of the security issues the executive order points to involve human capital management, and HR executives will have to engage other company leaders to help develop security solutions.
"HR people are going to have to work with the IT department, the legal department, and security officers to make sure what they do is acceptable to the rest of the company. These are big legal issues," Bersin said.
"There's much more upside than downside. You have to respect the fact that the federal government is trying to establish some ground rules for AI so that we all behave well," he said.
Nicole Lewis is a freelance journalist based in Miami.