European Union (EU) policymakers reached a deal on the proposed AI Act, which includes steep penalties for violations, on Dec. 8.
In the U.S., Congress has not yet drafted bipartisan legislation on artificial intelligence but is in the early stages of doing so. President Joe Biden signed a first-of-its-kind executive order Oct. 30 on the development of AI.
We've gathered articles on the news from SHRM Online and other media outlets.
EU Deal
The deal appeared to ensure the European Parliament could pass the legislation, perhaps before the end of this year, ahead of its recess in May 2024 for legislative elections. Once passed, the law would take two years to go into effect.
The EU deal on AI came together after lengthy talks between representatives of the European Commission, which proposes laws, and the European Council and European Parliament, which adopt them. Companies violating the AI Act could face fines of up to 7 percent of their global revenue, depending on the violation and size of the company.
In the U.S., senators indicated that they would take a far lighter approach than the EU and focus instead on incentivizing developers to build AI in the U.S.
Risk-Based Approach
EU policymakers agreed to a risk-based approach to regulating AI, in which a defined set of applications faces the most restrictions. Companies that make AI tools posing the greatest potential harm to individuals, such as tools used in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems, and assurances that the software doesn't cause harm, such as perpetuating racial biases. Human oversight would be required in creating and deploying the systems.
Emotion Recognition Systems Banned
The EU law would ban several uses of AI, including bulk scraping of facial images and most emotion recognition systems in workplace and educational settings. There are some safety exceptions, such as using AI to detect a driver falling asleep. Citizens would have the right to submit complaints about AI systems and receive explanations about decisions involving high-risk systems that affect their rights.
Model for Regulatory Authorities
The AI Act, introduced in April 2021, is expected to play a major part in shaping AI in the EU. It will also serve as a model for regulators in other countries and affect global companies with operations in Europe.
Executive Order in U.S.
In the U.S., Biden's executive order will shape how AI technology evolves in a way that maximizes its potential while limiting its risks. The order requires the tech industry to develop safety and security standards, introduces new consumer and worker protections, and assigns federal agencies a to-do list for overseeing the rapidly progressing technology.
The executive order sets an example for the private sector by, among other things, establishing standards and best practices for detecting AI-generated content and authenticating official government communications. It also requires vendors that develop AI software to share their safety test results, which will help government agencies and private companies that use AI tools.
(SHRM Online)
DOL's Response to the AI Executive Order
The U.S. Department of Labor (DOL) has scheduled three public listening sessions this week on AI, focusing on the technology's risks and impacts for workers, as well as the implications of employer surveillance using AI.
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR. Here's how employers and employees can successfully manage generative AI and other AI-powered systems.