The Biden administration is weighing restrictions on AI tools such as ChatGPT amid growing concerns that the technology could discriminate against job applicants and spread misinformation.
The National Telecommunications and Information Administration (NTIA) is soliciting public comment on potential artificial intelligence audits, risk assessments and other measures to ensure the systems work as intended and "without harm," the agency said. The deadline for submitting feedback is June 10.
"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms," Assistant Commerce Secretary Alan Davidson, an NTIA administrator, said in a statement. "For these systems to reach their full potential, companies and consumers need to be able to trust them."
Domestic and foreign governments have already moved to limit ChatGPT:
- New York City recently issued its "final rule" implementing Local Law 144, which requires a bias audit when employers use AI software like ChatGPT for employment decisions (a sketch of the impact-ratio math behind such audits follows this list).
- China recently introduced stricter rules for ChatGPT-like AI tools.
- Italy outright banned ChatGPT over data-privacy concerns.
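Local Law 144's audits center on "impact ratios": the rate at which a tool selects candidates in one demographic category compared with the most-selected category. The Python sketch below illustrates that arithmetic with hypothetical counts; the category labels, the data and the 0.8 flag (borrowed from the EEOC's four-fifths rule of thumb) are illustrative assumptions, not the rule's exact requirements.

```python
# Minimal sketch of an impact-ratio calculation of the kind used in
# bias audits under NYC Local Law 144. All data below is hypothetical.

from collections import Counter

def impact_ratios(applicants, selected):
    """Selection rate per category, divided by the highest category's rate.

    applicants: one category label per applicant (e.g., sex categories).
    selected: labels for the applicants the tool advanced or recommended.
    """
    totals = Counter(applicants)
    hits = Counter(selected)
    rates = {cat: hits[cat] / totals[cat] for cat in totals}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: 400 applicants, and the subset the tool advanced.
applicants = ["male"] * 200 + ["female"] * 200
selected = ["male"] * 60 + ["female"] * 38

for category, ratio in impact_ratios(applicants, selected).items():
    # Ratios well below 1.0 (many auditors flag anything under 0.8,
    # per the EEOC's four-fifths rule of thumb) suggest adverse impact.
    print(f"{category}: impact ratio {ratio:.2f}")
```

In this made-up example, women are selected at 63 percent of the male selection rate, which an auditor would likely flag for further review.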
"Just as food and cars are not released into the market without proper assurance of safety," the NTIA said in a press release, "so too AI systems should provide assurance to the public, government and businesses that they are fit for purpose."
[SHRM Online: ChatGPT and HR: A Primer for HR Professionals]
How ChatGPT Can Discriminate
ChatGPT has been marketed as a scalable, cost-effective tool that can help employers analyze written text, respond to customer queries and perform various tasks that would otherwise be handled by a human.
A recent survey of nearly 1,800 professionals found that about 43 percent have used AI tools such as ChatGPT for work-related tasks, and about 70 percent of those users did so without their manager's knowledge.
If used incorrectly, the technology could undermine diversity, equity and inclusion (DE&I) efforts, according to Jeffrey L. Bowman, founder and CEO of tech platform Reframe in New York City.
"ChatGPT is only as good as the data it can pull from," Bowman said. "With the case of talent acquisition, there is already a DE&I issue for most companies, and if the ChatGPT data has gaps, it will likely have gaps across race, gender and age."
ChatGPT was trained on a dataset of about 300 billion words (some 570GB of text) drawn from books, Wikipedia entries, articles and other writing on the Internet. Datasets that large can encode bias and reinforce social stereotypes, leading to discrimination.
In 2022, Meta's ChatGPT-like system, trained on 48 million text samples, produced false and biased information and was shut down by the company three days after its launch. And in 2018, Amazon abandoned its AI recruitment technology after it discriminated against female candidates.
EEOC Cracking Down on AI Bias
The number of employers using AI is skyrocketing: Nearly 1 in 4 organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by SHRM.
However, the NTIA noted that a "growing number of incidents" have occurred where AI and algorithmic systems have led to harmful outcomes.
For example, the U.S. Equal Employment Opportunity Commission (EEOC) sued an English-language tutoring service for age discrimination. The agency alleged that the employer's AI algorithm automatically rejected older applicants.
The EEOC has focused heavily on preventing AI-driven discrimination at work. The agency has held public hearings and released guidance on preventing bias against applicants and employees, particularly those with disabilities.
In 2021, the agency launched the Artificial Intelligence and Algorithmic Fairness Initiative to help ensure that software, including AI, used in hiring and other employment decisions complies with the federal civil rights laws the EEOC enforces.
How to Avoid a Discrimination Lawsuit
Alex Meier, an attorney with Seyfarth in Atlanta, recommended that employers avoid becoming fully reliant on automation in the workplace and continue to involve a human in the decision-making process.
"You will want to have a decision for why someone was hired, promoted or fired," he said. "ChatGPT is a dynamic tool that will not necessarily generate the same results or interpret ability as a human decision-maker."
Companies must understand the tools they're using, he warned. A black box that takes resumes and spits out a recommended hire without any documentation of the process will expose a company to legal and regulatory risk.
Meier noted that employers leveraging automation should be able to explain what criteria the tool uses to search and screen candidates. With ChatGPT, that means querying the tool for its reasons for prioritizing or rejecting certain candidates and having a person within the company validate them.
"Without these measures, a company could find itself trying to defend an employment decision without any ability to explain the basis for its decision," Meier said. "You can't go to a judge, jury or arbitrator with, 'We did what the machine told us to do.' "