Deepfake technology has been around for a few years. But it entered the mainstream when a Tom Cruise deepfake video went viral on TikTok last year. Millions of views later, 61 percent of users couldn't distinguish the real actor from the fake version.
No doubt users on TikTok and other entertainment forums will employ this technology for comic or advertising effect. But it poses a real danger when used to defraud people or to penetrate security defenses.
Kelley M. Sayler, an analyst in advanced technology and global security at the Congressional Research Service in Washington, D.C., said deepfakes could present a variety of national security challenges in the years to come. As these technologies mature, they could hold significant implications for businesses.
This isn't just speculative scaremongering: Bank heists and corporate fraud cases in Dubai and the U.K. used cloned voice technology to steal tens of millions of dollars. When finance personnel thought they heard the voice of a trusted client or associate, they were duped into transferring funds.
How Deepfakes Work
Deepfake technology uses artificial intelligence (AI) software to create convincing impersonations of voices, images and videos. Two AI-based neural networks work in opposition: one generates a counterfeit photo, audio recording or video, while the other tries to identify it as fake. As they go back and forth for what might be millions of iterations, the counterfeits become increasingly realistic.
"The use of AI to generate deepfakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing," Sayler said. "Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content."
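The adversarial back-and-forth described above is the core idea behind a generative adversarial network (GAN), the technique most commonly associated with deepfakes. The sketch below is a deliberately tiny illustration, not a real deepfake pipeline: a one-parameter-pair "generator" learns to imitate a target Gaussian distribution while a logistic-regression "discriminator" tries to tell real samples from fakes. All variable names, learning rates and distributions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate (stand-in for real faces/voices).
REAL_MEAN, REAL_STD = 4.0, 1.25

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: a tiny affine map g(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    xr = sample_real(32)
    z = rng.normal(0, 1, 32)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # --- Generator step: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0, 1, 32)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    grad_x = (1 - df) * w          # d/dx of log D(x) for this discriminator
    a += lr * np.mean(grad_x * z)  # ascend log D(G(z)) via dx/da = z
    b += lr * np.mean(grad_x)      # ascend log D(G(z)) via dx/db = 1

fake = a * rng.normal(0, 1, 1000) + b
print(f"fake mean ~ {fake.mean():.2f}, real mean = {REAL_MEAN}")
```

Real deepfake systems replace these one-number players with deep convolutional networks and images or audio spectrograms, but the training dynamic is the same: each improvement in the detector forces the forger to improve, which is why the output grows steadily more convincing.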
As the technology matures, it will no doubt be deployed in a variety of devious ways. Employees and executives need to be ready to question the authenticity of video, image, audio and news content, particularly if it calls for emergency action, an unusual transfer of money, or a change in security procedures or policy.
Protection Against Deepfakes
Hank Schless, senior manager of security solutions at cybersecurity vendor Lookout in San Francisco, offered these tips:
- Remember that not everything you see online is real. From deepfake technology to phishing attacks, scams are growing increasingly difficult to discern with the naked eye. Always exercise caution if you are contacted by a company or individual and you can't validate their identity with 100 percent confidence.
- Exercise caution when sharing information digitally. Often, scams will use urgency to trick people into giving away information quickly. If you see a post online, receive a text message or get a phone call from a company expressing extreme urgency, stop and go directly to the source to validate whether it is legitimate.
- Upgrade security. Lookout, for example, offers malware and safe-browsing protection that scans links in social media, text messages and webpages, blocking threats before they do harm. It constantly scans the Internet for new URLs and flags malicious sites even as they are being built. As deepfake scams proliferate, expect other vendors to add features that question the authenticity of posts, videos and audio clips.
"Just like any other form of social engineering, deepfakes can be used to make you believe something that isn't real, because it seems to come from a credible source," Schless said. "Social engineering is constantly evolving as the ways people interact with each other change. Nowadays, with most information consumption coming through video, it makes sense that deepfakes are being used more broadly."
He predicted that the deepfake market might move in a service-based direction, as phishing did. Initially, lone-wolf hackers generated phishing traffic; criminal gangs then added a higher level of organization; finally, gangs began offering phishing campaigns as a service in prebuilt kits that could be purchased online, much like any other software-as-a-service. The same evolution occurred with ransomware. Could deepfakes be next? If so, Schless warns, service-based malware could begin to target human capital management solutions.
Educating Users
As always with security, technology alone is not enough. Employee education must also play a part. Sayler noted that current deepfake attempts can usually be spotted without specialized detection tools. However, the sophistication level is rising steeply. At this time, users should be given at least a rudimentary introduction to deepfake technology and its potential misuse by cybercriminals. They should be taught to look twice and never act on any emergency request without verifying it directly with the sender, preferably in person.
But at least for now, a careful look will reveal some red flags. Just as computer-generated fight scenes are not hard to spot (one reason Tom Cruise does his own stunts rather than relying on a double or CGI), deepfake content nearly always contains something that looks inauthentic.
"Since these videos are computer-generated, there will often be strange or unnatural movements, facial expressions, fake teeth and unnatural eye movements that make things feel a little off when viewing the video," Schless said.
Drew Robb is a freelance writer in Clearwater, Fla., specializing in IT and business.