
AI in the Workplace: Data Protection Issues


London, United Kingdom

As the use of AI systems in recruitment and employment continues to grow, it is essential for both the creators of those systems and the employers who use them to consider carefully what types of data they will be collecting and handling, and the key data protection requirements that apply.

This article examines the data protection framework that surrounds the use of AI platforms. It summarizes key legal considerations that employers should be aware of when using AI technologies in recruitment and employment.

Controller or Processor?

When an organization decides to process personal data for any activity, the first thing it should consider is whether it is a controller or processor of that data. If the organization decides the purpose and means of processing data (that is, what personal data is processed, and why—for example an employer obtaining employee or candidate data), then it is likely to be a controller.

If the organization provides a service to a third party and the third party decides what data is to be processed and why, or the organization is processing the personal data on the third party’s instructions—for example a company that handles payroll administration for another employer—the organization is likely to be a processor.

This role-based classification is important as the obligations for an employer depend on whether it is a controller or a processor.

As a controller, the employer is required under the UK General Data Protection Regulation (UK GDPR) to provide specific information (usually in the form of what is termed a “privacy notice”) to employees and job candidates. This privacy notice should set out key information about the processing activity, such as how the personal data is going to be used, the lawful basis relied on by the employer and the rights of employees in this regard. For example, if an employer uses an AI system to select candidates in a recruitment process, the employer should set this out in its privacy notice together with the associated lawful basis.

Another key requirement for controllers considering the use of AI systems is to assess the associated risks before engaging in the activity. This assessment is likely to take the form of a data protection impact assessment (DPIA), as outlined below.

Data Protection Impact Assessment (DPIA)

When an organization decides to carry out a “high-risk” processing activity using personal data, it is required to assess the risks associated with the activity by carrying out a DPIA. In an employment context, potential high-risk activities using personal data include using AI platforms in recruitment; making employment decisions on task allocation, promotion and termination; and monitoring or evaluating employees.

Using AI platforms for activities such as candidate selection or reviewing employee performance is likely to be high risk because these activities involve what the Information Commissioner has classed as “innovative technology” and because these processes can have a significant impact on candidates and employees.

Some common risks associated with the use of AI-based technologies include:

  • Inherent inbuilt bias in the AI platform.
  • Lack of transparency.
  • Unfair decision-making.
  • Collecting personal data without the knowledge or consent of individuals, for example through data scraping.

Consequently, when using AI-based technologies, employers should be aware of their data protection obligations. For instance, in addition to providing the usual information necessary to comply with the UK GDPR, transparency requires employers to inform employees when they are using AI systems to handle their personal data.

When use of AI involves automated decision-making about individuals, they also have the right under the UK GDPR to receive meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing. As a responsible operator of an AI system, an employer must be able to explain to its staff how its system works and how it reaches the decisions it does, in a way that a typical member of the public can understand.
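
As a rough illustration of what “meaningful information about the logic involved” could look like in practice, the Python sketch below derives a plain-language summary from a simple scoring model. The model, features, training data and wording are hypothetical assumptions for the example, not a prescribed or real system.

```python
# Hypothetical sketch: turning a screening model's output into a
# plain-language explanation. The model, features and wording are
# illustrative assumptions, not a real employer's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "relevant_skills_matched", "assessment_score"]

# Toy training data standing in for historical screening outcomes.
X = np.array([[1, 2, 55], [7, 6, 80], [3, 3, 60], [9, 8, 90], [2, 1, 50], [6, 5, 75]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain_decision(candidate: np.ndarray) -> str:
    """Return a candidate-facing summary of what drove the score."""
    proba = model.predict_proba(candidate.reshape(1, -1))[0, 1]
    # For a linear model, each feature's contribution to the log-odds
    # is its coefficient multiplied by the feature value.
    contributions = model.coef_[0] * candidate
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: abs(t[1]), reverse=True)
    top = ", ".join(name.replace("_", " ") for name, _ in ranked[:2])
    return (
        f"The system scored this application {proba:.0%} likely to progress. "
        f"The factors that most influenced the score were: {top}. "
        "A human recruiter reviews every automated recommendation."
    )

print(explain_decision(np.array([4, 4, 70])))
```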

These overarching principles are also echoed in the EU’s incoming AI Act, which may apply to employers operating in the EU or those otherwise caught by its extra-territorial provisions.

Data Subject Access Requests (DSARs)

Explainability requirements are particularly important because employees, candidates and other individuals have the right under the UK GDPR to make a DSAR. This is a formal request made by an individual to an organization, seeking information about and access to the personal data that the organization holds about them. This helps individuals be aware of and verify the lawfulness of the processing of their personal data.

It is therefore important for creators of AI systems to consider how to develop the AI system to comply with the DSAR right, and for employers as users of an AI system to consider how well the system can respond to these requests.
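
By way of illustration only, the sketch below shows one way a system might collate the personal data it holds on an individual, including a log of automated decisions, into a single machine-readable export. The data stores, field names and email key are hypothetical assumptions, not a prescribed format.

```python
# Minimal DSAR-export sketch, assuming hypothetical internal stores
# keyed by candidate email. Field names are illustrative only.
import json
from datetime import date

# Stand-in data stores an AI recruitment system might maintain.
PROFILE_STORE = {
    "jo@example.com": {"name": "Jo Smith", "cv_received": "2024-03-01"},
}
DECISION_LOG = [
    {"subject": "jo@example.com", "date": "2024-03-02",
     "decision": "progressed to interview", "automated": True,
     "logic_summary": "score above shortlisting threshold"},
]

def build_dsar_response(email: str) -> str:
    """Collate all personal data held on one individual into one export."""
    export = {
        "generated_on": date.today().isoformat(),
        "profile": PROFILE_STORE.get(email),
        # Include automated decisions so the logic involved can be explained.
        "automated_decisions": [d for d in DECISION_LOG if d["subject"] == email],
    }
    return json.dumps(export, indent=2)

print(build_dsar_response("jo@example.com"))
```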

Practical Measures

More broadly, creators and employers using AI systems should implement the following practical measures where appropriate. This will help ensure compliance with the data protection framework applicable to the use of AI platforms and help manage potential risks.

  • Be clear and up front with employees in your privacy notices and relevant policies about how and why you are using data.
  • If the employer is scraping data to train its AI model (such as extracting information from a website), it will need to complete a DPIA and there may be legal implications beyond just data protection law.
  • Be prepared to explain how your AI model works. You should consider this a mandatory requirement if you use an AI system in a recruitment or employment context.
  • Build the AI model so that a human is involved in the decision-making process (see the sketch after this list).
  • If relevant, expect questions from investors, and others, around where you acquired your data, and be able to confirm that data was collected lawfully.
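
As a rough sketch of the human-in-the-loop measure above, the model's output can be treated as a recommendation that takes no effect until a named reviewer confirms or overrides it. The data structures and field names here are illustrative assumptions, not a specific product's design.

```python
# Illustrative human-in-the-loop gate: the model only recommends;
# a named reviewer records the final decision. Field names are
# assumptions for the example.
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    model_score: float          # e.g., estimated probability of suitability
    suggested_action: str       # "shortlist" or "reject"

@dataclass
class FinalDecision:
    candidate_id: str
    action: str
    reviewer: str               # the human accountable for the outcome
    overrode_model: bool

def human_review(rec: Recommendation, reviewer: str, action: str) -> FinalDecision:
    """No outcome is applied until a human confirms or overrides it."""
    return FinalDecision(
        candidate_id=rec.candidate_id,
        action=action,
        reviewer=reviewer,
        overrode_model=(action != rec.suggested_action),
    )

rec = Recommendation("cand-042", model_score=0.81, suggested_action="shortlist")
decision = human_review(rec, reviewer="hr.manager@example.com", action="shortlist")
print(decision)
```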

You can achieve this by using factsheets (a collection of information about how an AI model was developed and deployed), DPIAs (describing the capabilities and limitations of the system) and conformity assessments—that is, a demonstration that the AI system meets legal and regulatory requirements.
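
For illustration, a factsheet might be maintained as a simple structured record alongside the model. The fields below are assumptions about the kind of information such a factsheet could capture, not a standardized schema.

```python
# Hypothetical structure for a model factsheet; the fields are
# illustrative of the kind of information discussed above, not a
# standardized schema.
MODEL_FACTSHEET = {
    "model_name": "cv-screening-v2",      # assumed internal name
    "intended_use": "Shortlisting support in recruitment; human review required.",
    "training_data": {
        "sources": ["historical applications, 2019-2023 (consented)"],
        "lawfully_collected": True,
    },
    "known_limitations": [
        "Lower accuracy for candidates with career breaks longer than two years.",
    ],
    "bias_testing": "Outcomes compared across protected characteristics.",
    "dpia_reference": "DPIA-2024-07",     # link to the associated risk assessment
    "last_reviewed": "2024-05-01",
}
```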

AI Versus Data Protection Compliance

The use of AI is increasingly coming under the regulatory spotlight, and in the UK the Information Commissioner's Office has launched the first of a series of consultations on generative AI, “examining how aspects of data protection law should apply to the development and use of the technology.” It will be essential for employers to keep up to date not just with technological and legal developments in this area, but also with developments in regulatory approach and risk.

The effective use of AI in the employment context requires a comprehensive understanding of data protection laws. As AI continues to evolve, staying on top of the legal obligations relating to AI is crucial for both the creators of AI systems and the employers using them. This not only supports regulatory compliance but also fosters trust and transparency in AI technologies.

Daniel Gray and Razia Begum are attorneys with Mishcon de Reya in London. © 2024 Mishcon de Reya. All rights reserved. Reposted with permission of Lexology.
