If you had any doubts about the disruptive nature of generative artificial intelligence, the arrival of ChatGPT should have put them to bed. The language processing tool has been a runaway hit since its launch by OpenAI last November, and has sparked conversations about how businesses and institutions could be impacted by its increased use.
The premise of ChatGPT is as simple as it is extraordinary. Trained on a massive amount of text data, it's capable of understanding and generating human-like prose. It answers questions and can assist with tasks such as composing essays, job applications or letters.
The tool raises numerous issues from a data protection, copyright and infringement, confidentiality, accuracy and bias perspective. This article briefly highlights some of the key considerations for employers.
These considerations include:
- Are your employees using ChatGPT for their work? If so, have you set out guidelines or a policy about its use?
- If you have already introduced guidelines or intend to do so, will you also roll out training? There is little point in policies sitting on the shelf gathering dust; employees need to be actively trained on them and to understand how they work in practice.
- Have you considered banning the use of ChatGPT for certain roles or types of work product? Putting aside the legal implications of using the tool to generate, for example, a speech or article that an employee intends to pass off as their own, it is worth bearing in mind that the same or similar content could be generated for another user. At the very least, that could be embarrassing and cause reputational damage.
- Will you establish a process for employees to report any concerns or issues related to the use of ChatGPT?
- Do you need to consider adapting your performance processes or targets for those employees who use ChatGPT in their roles?
- Do any of your suppliers use ChatGPT or similar technologies? If so, is your data being fed into their systems?
- Have your recruitment policies or practices—or those of your suppliers—been adjusted to take into account the use of ChatGPT, and the risk of bias, in recruitment or training exercises?
The sooner organizations grapple with these questions, the better, enabling them to take a proactive rather than reactive stance on issues that may arise. Organizations should implement some form of probationary checks and balances on ChatGPT's performance before embedding it in their business.
Anne Pritam and Leanne Raven are attorneys with Stephenson Harwood LLP in London. © 2023 Stephenson Harwood LLP. All rights reserved. Reposted with permission of Lexology.