The U.S. Department of Commerce announced April 11 a request for public comment to determine how regulators can help make artificial intelligence systems trustworthy. Comments are due to the Commerce Department's National Telecommunications and Information Administration (NTIA) by June 10.
We've gathered articles on the news from SHRM Online and other outlets.
Appropriate Guardrails
President Joe Biden has made clear that when it comes to AI, there must be responsible innovation and appropriate guardrails to protect individuals' rights and safety. Different approaches might be needed in different industry sectors, like employment and health care, the NTIA noted.
The Goal of Trustworthy AI
An AI system should be legal, ethical, safe and otherwise trustworthy—in other words, providing "AI assurance," the request for comment noted. The NTIA explained that trustworthy AI includes such attributes as "safety, efficacy, fairness, privacy, notice and explanation, and availability of human alternatives," along with protections against algorithmic discrimination.
"AI systems are being used in human resources and employment, finance, health care, education, housing, transportation, law enforcement and security, and many other contexts that significantly impact people's lives," the request for comment stated. "Public and private bodies are working to develop metrics or benchmarks for trustworthy AI where needed," it added.
Questions for commenters included:
- What kinds of topics should AI accountability mechanisms cover? How should they be scoped?
- Should AI audits or assessments be folded into other accountability mechanisms that focus on such goals as human rights; privacy protection; security; and diversity, equity, inclusion and access? Are there benchmarks for these other accountability mechanisms that should inform AI accountability measures?
- Can AI accountability practices have meaningful impact in the absence of legal standards and enforceable risk thresholds? What is the role for courts, legislatures and rulemaking bodies?
- Is the value of certifications, audits and assessments mostly to promote trust for external stakeholders or is it to change internal processes? How might the answer influence policy design?
- How should the accountability process address data quality and data voids of different kinds? For example, in the context of automated employment decision tools, there may be no historical data available for assessing the performance of a newly deployed, custom-built tool.
- Is the lack of a federal law focused on AI systems a barrier to effective AI accountability?
- What role should government policy have, if any, in the AI accountability ecosystem?
Self-Regulation
In addition to regulations, the NTIA's notice focuses on self-regulatory measures that might be adopted, an effort the companies that build AI technology would be likely to lead. That's a contrast to the European Union, where lawmakers this month are negotiating strict limits on AI tools that would vary according to the level of risk they pose.
(AP)
Risks and Benefits of AI
Rep. Nancy Mace, R-S.C., chair of a House Oversight Committee panel on technology, last month opened a hearing on AI with a three-minute statement discussing AI's risks and benefits. Then she added, "Everything I just said in my opening statement was, you guessed it, written by ChatGPT."
ChatGPT has wowed some users with quick responses to questions but dismayed others with inaccuracies. George Washington University law professor Jonathan Turley recently called attention to some of AI's risks after he was falsely accused of sexual harassment by ChatGPT, which cited a fabricated article on the allegation.
In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress."
(The Wall Street Journal, Fox News and The Washington Post)
EEO Issues
Automation tools are becoming increasingly common but are often programmed and trained based on past hiring practices that can replicate patterns of illegal bias, said Judy Conti, director for government affairs at the National Employment Law Project in Washington, D.C., at an Equal Employment Opportunity Commission (EEOC) event last year.
Nearly 80 percent of organizations polled in a survey used or planned to use AI for HR purposes within the next five years, said Emily M. Dickens, chief of staff, head of public affairs and corporate secretary for the Society for Human Resource Management (SHRM) in Alexandria, Va. She added that leveraging AI-based tools in HR isn't all bad. For example, algorithmic systems have transformed how businesses operate by reducing the time it takes to fill open positions. And nearly 3 in 5 organizations report that the quality of recruits is higher due to their use of AI.
Commerce Action Applauded by Some, Criticized by Others
"The use of AI is growing—without any required safeguards to protect our kids, prevent false information or preserve privacy," tweeted Sen. Michael Bennet, D-Colo. "The development of AI audits and assessments can't come soon enough."
But some venture capitalists in Silicon Valley are opposed to more government regulation. "This is very concerning language coming from Washington," tweeted David Ulevitch, a general partner at the venture capital firm Andreessen Horowitz. "AI innovation is an American imperative—our adversaries are not slowing. Officials would be wise to clear a path, not create roadblocks."