The recent breakthroughs in AI are also ushering in an era of deepfakes—incredibly realistic (but phony) images, video, and audio that are wreaking havoc online. And as the cost and access barriers to deepfake technology fall, HR leaders will increasingly be at the forefront of dealing with this issue in workplace settings, according to Tim Hwang, former director of the Harvard-MIT Ethics and Governance of AI Initiative.
“We’ve started to see this in the national election. But in the next 12 to 24 months, the costs are coming so far down that we’re going to start to see deepfakes everywhere,” Hwang told attendees at the SHRM24 Executive Network Experience in Chicago on June 22.
The dangers deepfakes pose to the workplace are many, from candidates using deepfake technology to impersonate other people when applying for jobs to co-workers posting fake videos to retaliate against one another.
One key deepfake problem for HR will come in sorting out discipline for alleged behavior and conduct violations in the workplace.
“This is a really interesting problem that will emerge,” said Hwang. “Typically, if an HR professional is presented with [video or photo] evidence of clear wrongdoing, you’ve been able to take that media at face value. But what if that photo of someone or video of someone at a workplace party isn’t real?”
Hwang shared four lessons that can help HR leaders prepare for this new future of deepfakes.
Lesson 1: Lean into technical collaboration to spot the fakes
Hwang said that in the early days of deepfakes related to national elections, journalists tried to analyze every potential deepfake image and video on their own. That proved overwhelming and unsuccessful. The most successful sleuths were those who teamed up with researchers and media forensics experts to parse out what was real and what wasn’t. And lately, new software tools are helping individuals and companies identify deepfake images.
“Not every HR department is going to have a full-on media forensics department,” said Hwang. “But I do think HR teams are going to have to bolster their ability to do this kind of forensics work. … And we are increasingly seeing tools and resources that help lower the cost of being able to tell what’s fake from what’s real.”
Lesson 2: Ensure a speedy response
Hwang also urged HR departments not to sit back and wait for deepfake problems to become major issues (and official complaints) in the workplace.
“The problem is, when employees are communicating with each other in real time, can your team move fast enough to deal with disinformation in the workplace?” he asked. “If you wait too long, the effect of this disinformation will seep in and create a really corrosive effect on the workplace.”
Hwang noted that when deepfakes first popped up, news outlets tried to respond by working with other newsrooms to spot the fakes. But they found their quickest reactions actually came from democratizing the response and setting up WhatsApp tip lines where readers could report suspected fakes.
“By accelerating their response, they were able to have a much stronger effect on fighting disinformation because they were able to move at the pace of the disinformation,” he explained. The message: Encourage your employees to alert HR whenever they see potential fake images or videos related to co-workers or the company itself.
Lesson 3: Provide context instead of clampdown
In the early years of disinformation, social media companies responded by trying to quickly remove any questionable content. But that only drew more interest to the image, video, or post that was taken down. The better path, said Hwang, “is to provide context over clampdown.” That means that rather than taking the questionable post down, the platforms provide facts and other information and point to related media.
“This is a useful lesson as these situations start to emerge in the workplace,” he said. “The approach can’t be to clamp down because, in effect, you’ll actually produce much more interest.”
Lesson 4: Don’t expect social media to fix your problem
From Instagram to Facebook, fake videos and photos are starting to proliferate. The companies that run these platforms are working on solutions to identify and sort out the deepfakes. But they’re not prioritizing your employee’s fake photo of his co-worker supposedly committing a safety violation. They’re focusing mostly on the rash of deepfakes affecting the national scene, particularly the presidential election.
“The problem is that [social media companies] are going to prioritize the content that impacts the platform the most, not the users on that platform,” said Hwang. “That includes national elections and national security issues. They’re paying a lot less attention to what’s happening in the workplace. And that’s because they’re getting a lot less pressure about those deepfakes and the cost is lower for letting that stuff slip through.”
That means companies are essentially “on their own” in fighting this new wave of deepfakes, said Hwang. But he suggested that HR leaders and the HR profession can draw more attention to this problem—and possibly spark quicker fixes from social media companies—by becoming more vocal on the topic.
“This is something where HR teams and the profession as a whole might be able to speak in quite a loud voice,” he said. “Because a lot of this comes down to who reaches the ear of [social media companies] and when does it get loud enough for them to care.”