Employers in Canada are increasingly using artificial intelligence to help with hiring decisions. Legal experts recommend that Canadian employers develop strategies to evaluate both the benefits AI offers and the risks it poses.
Automated hiring tools assist in hiring decisions with varying degrees of human intervention, explained Robbie Grant, a lawyer at McMillan in Toronto.
These tools can include:
- Targeted job advertisements that use algorithms to determine the best place to advertise job opportunities, which may influence the pool of applicants.
- Resume screening tools that sort through applicants based on specific search terms.
- Intelligent applicant tracking systems that estimate how a candidate might perform on the job based on keywords and past employee data.
- AI-powered video interviewing tools that assess candidates based on facial expression analysis.
"Most companies are already having conversations about AI tools and how to use them effectively," Grant said. "There are several potential benefits to automated hiring tools—from improving efficiency for hiring teams, to enhancing the applicant experience, to removing or mitigating the inherent bias of human decision-makers."
However, there are downsides as well. The main challenge in using AI tools for recruitment is the risk of bias built into automated hiring decisions, including the possibility that AI tools are trained in ways that discriminate against certain candidates, Grant said. Bias can lead to worse hiring decisions, as well as potential liability under Canadian human rights laws, which prohibit discrimination in employment on certain protected grounds, such as race, ethnic origin, gender identity and age.
"When using AI tools to make decisions with a meaningful impact on individuals, it is wise to keep a human being in the decision-making process as a backstop," said Ioana Pantis, a lawyer at McMillan in Toronto.
Compliance with Canadian Privacy Laws
In Canada, employers must ensure that their use of automated hiring tools complies with privacy laws.
"When it comes to the use of AI tools in the hiring process, one of the main privacy risks is related to consent," Grant said. "Depending on the applicable privacy laws, employers may be required to obtain consent before using automated tools to analyze an applicant's materials or using an employee's data to train an AI system."
Privacy laws in Canada also require that companies have an appropriate purpose to process an individual's personal information, Pantis noted.
A privacy regulator in Canada could therefore scrutinize some uses of AI tools in the workplace, she explained. For example, regulators might take a closer look at AI tools used to make automated hiring decisions, tools that monitor employee productivity or performance, and tools that process biometric data, such as facial or voice recognition systems.
"It is important for companies to obtain expert advice to ensure their use of AI tools is appropriate and compliant with data privacy laws," Pantis said.
Developing a Responsible AI Strategy
Approximately 20 percent of 5,140 Canadians surveyed in 2023 currently tap into generative AI tools—such as ChatGPT and Google's Bard—as part of their professional or school routines, according to KPMG in Canada. Out of the 1,052 respondents who said they use generative AI, about 70 percent said they will continue to do so regardless of the risks.
"Generative AI tools are potentially transformative for employee productivity, but the reality is employees don't always use them responsibly," said Ven Adamov, partner and data analytics leader in KPMG's Risk Consulting practice in Oakville, Ontario. "Implementing a responsible AI framework—which includes both policies and tools that identify and mitigate risks with AI output—can help protect against misuse of this powerful technology."
Organizations should come up with a responsible AI strategy in which they:
- Assess and implement the right technology.
- Ensure data is relevant, recent and accurate.
- Train employees on how the tools use workplace data, such as employee profiles, performance reviews, feedback surveys and learning histories.
"Employers should look at ways to ensure proper use of AI and to encourage and reward employees who use it in an ethical manner," said Matt Chapman, a lawyer at SpringLaw in Toronto. "Some people will be hesitant about the technology, but employers can show how AI improves their performance and the product."
HR Has Vital Role in Workplace AI Policies
Human resources professionals in Canada should lead the development of a company's AI policy, as it is now their responsibility to ensure AI aligns with the organization's values, Chapman said.
"As AI continues to advance rapidly, it is crucial for organizations to stay ahead of the curve," he said. "An AI policy provides a road map for ethical AI use, ensuring fairness, transparency, privacy and accountability."
When drafting an AI policy, HR should spell out the risks and benefits of AI tools, along with clear expectations for employees, Grant said. The policy should cover risks related to data privacy, intellectual property and biased decision-making.
But the work—and responsibility—don't end there, Pantis said.
"HR plays an important role in identifying a need for an AI policy and developing it, but they cannot do it alone," she said. "IT security, the legal department and privacy professionals should also be involved to assess risks to the company's privacy and data security program. HR is then integral to rolling out the policy, providing training and monitoring compliance."
Catherine Skrzypinski is a freelance writer based in Vancouver, British Columbia.