Cybersecurity threats in 2025 will become increasingly difficult to detect as criminals exploit artificial intelligence to create sophisticated, personalized cyberscams.
According to experts, the top cybersecurity threats for 2025 are likely to include familiar concerns: credential compromise, phishing attacks, ransomware, social engineering, cloud environment intrusion, and malware. The difference is that more cybercrimes will be powered by AI, supercharging the speed, scale, and automation of attacks.
“As AI continues to mature and become increasingly accessible, cybercriminals are using it to create scams that are more convincing, personalized, and harder to detect,” said Abhishek Karnik, head of threat research at McAfee. “From deepfakes that blur the line between real and fake to AI-driven text message, email, social, and live video scams, the risks to trust and safety online have never been greater,” he said. “That’s why it’s more important than ever for businesses to stay informed about these emerging threats.”
Here are five key areas for employers to focus on in 2025.
Social Engineering
Experts anticipate that generative AI (GenAI) will fuel more sophisticated social engineering and phishing attempts, which trick users into providing sensitive information.
“The top threat to businesses has to be social engineering,” said Linn Freedman, chair of Robinson+Cole’s data privacy and cybersecurity practice and an attorney in the firm’s Providence, R.I., office.
“Social engineering has been a priority for a while, but with the ubiquity of ChatGPT and GenAI tools, social engineering is easier to pull off, and campaigns are getting better,” she said. “The telltale signs of fraud are not as easy to identify anymore. Rank-and-file employees don’t think that they are being targeted, but they are. Social engineering campaigns lead to business email compromise or ransomware or malware being downloaded.”
Karnik said that while the scams are not new, the added power of AI behind them makes them more destructive, and the chances that people will be duped are higher.
“AI is giving cybercriminals the ability to easily create more convincing emails and messages that look like they’re from trusted sources such as employers,” he said. “They can craft these scams quickly and with precision, making them more difficult to detect and increasing their success rate. As AI tools become more accessible, these types of attacks are expected to grow in sophistication and frequency.”
Karnik added that AI-powered tools are also being used to create smarter, more adaptive malware that can increase its effectiveness. “The impact can be significant — if AI-driven malware slips past outdated antivirus programs, your personal and financial data can be exposed.”
Using stolen credentials to gain access to systems and data is the most prevalent attack vector, Freedman said. “Stolen credentials are a huge problem and will continue to be a problem,” she said. “People are still using passwords across different platforms. Employees still have access to files and systems that they don’t need access to.”
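One reason reused passwords are so damaging is that a breach at one service hands attackers working credentials for every other service where the same password was used. On the defensive side, systems can at least limit the blast radius of a database leak by never storing passwords directly. A minimal sketch of the standard approach, salted key-derivation hashing with Python's standard library (the password strings here are illustrative placeholders, not anything from the article):

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash for storage.

    A unique random salt per user means that even if two employees
    reuse the same password, the stored hashes differ, and attackers
    cannot use precomputed lookup tables against a leaked database.
    """
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest


def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)
```

Note that the salt is stored alongside the hash; it is not a secret, only a uniqueness guarantee. The high iteration count (600,000 here) deliberately slows down offline guessing attacks.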
Deepfakes
AI-driven deepfake video and voice cloning will continue to proliferate, aiming to steal employee credentials and commit fraud.
“Deepfakes have moved beyond futuristic concepts to a concrete threat that could undermine digital trust across industries,” said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance in Washington, D.C. “Imagine a scenario where a seemingly authentic video of a company executive authorizes a massive financial transfer or spreads damaging misinformation.”
Not only can cybercriminals use AI-powered tools to create fake videos and audio recordings, but they’re also now able to impersonate people during live video calls.
“You could receive a video call from someone who appears to be your boss urgently requesting money or sensitive information,” Karnik said. That happened last year in Hong Kong when an employee paid out $25 million to fraudsters. In addition, voice scams have been on the rise, evolving from the common text scam where someone posing as an executive asks an employee to make a payment of some kind.
“Scammers are using artificial intelligence to create highly realistic fake videos or audio recordings that pretend to be authentic content from real people,” Karnik said. “As deepfake technology becomes more accessible and affordable, even people with no prior experience can produce convincing content.”
Supply Chain Attacks
Large organizations often cite supply chain challenges as the biggest barrier to cyber resilience, driven by complexity and lack of visibility into suppliers’ security.
“Modern business ecosystems are increasingly interconnected, making supply chain cybersecurity more complex than ever,” Steinhauer said. “Cybercriminals are targeting third-party vendors as entry points into larger, more secure networks. Current approaches, like lengthy questionnaires and siloed visibility, fail to address the frustrations of managing third-party risks as cybercriminals exploit these vulnerabilities to infiltrate larger networks.”
Karnik noted another type of third-party vulnerability — apps and software. “Scammers are increasingly embedding malicious code into popular software or app updates, a tactic that allows them to infect millions of devices in one fell swoop. Updating a favorite app might unknowingly install malware that compromises your personal data or device security. The increased reliance on third-party code and AI-assisted development tools is making these types of attacks more frequent and harder to detect, which poses significant risks to businesses.”
Geopolitical Threats
CEOs must also be wary of cyberthreats from nation-state-backed groups aiming to commit cyber espionage, IP theft, ransomware attacks, and disruption of operations.
Freedman noted that the Google Threat Intelligence Group recently published a new report that shares findings on how government-backed threat actors use and misuse Google’s GenAI tool Gemini.
“Google found government adversaries, including the People’s Republic of China, Russia, Iran, and North Korea, are attempting to misuse Gemini through ‘jailbreak attempts, coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities and enabling post-compromise activities, such as defense evasion in a target environment,’ ” she said.
According to the report, Iranian threat actors used Gemini the most, specifically for “crafting phishing campaigns, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes.”
Another threat originating overseas is the emergence of the Chinese GenAI company DeepSeek. The concern over DeepSeek follows warnings from federal law enforcement and intelligence agencies about Chinese hacking operations that have been ongoing for years.
“Chinese AI companies operate under distinct requirements that give their government broad access to user data and intellectual property,” Steinhauer said. “This creates unique challenges when considering the use of these AI systems by international users, particularly for processing sensitive or proprietary information. The technology sector needs frameworks that ensure all AI systems protect user privacy and intellectual property rights according to international standards while recognizing the different data access and governance requirements that exist across jurisdictions.”
Unchecked GenAI Adoption
One of the biggest data security risks is self-inflicted — the unauthorized use of GenAI tools in the workplace.
“It’s a big risk that especially small and midsize companies are not understanding or addressing,” Freedman said.
“Trying to understand how your employees are using your data is deeply important,” Karnik said. “Employers may think it’s OK to upload company data into ChatGPT, for example, but that could create more exposure for your organization.”
Steinhauer agreed, saying that large language models can be compromised through manipulated training data or sophisticated backdoor attacks. “To build trust with businesses and consumers, forward-thinking organizations must implement robust security frameworks with dataset integrity checks, continuous monitoring, and advanced encryption — ensuring transparent, reliable, and safe AI systems.”
Freedman added that companies will need to think hard about internal AI governance and that HR could play a role in developing an AI acceptable-use policy.
Mitigating Cyberthreats
Employers can protect their organizations from cyber risk by implementing strong security practices, creating security policies, and educating employees.
“People are naturally very trusting,” Freedman said. “Employees are still the No. 1 way that companies get hit with cyberattacks. Employee training on cybersecurity threats must become a very high priority. Education should be in person and made specific to their situation. Don’t be complacent and just do computer training.”
Karnik recommended employers create data protection policies, including chain-of-command protocols, employee access controls, and zero-trust architecture.
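The access-control and zero-trust ideas Karnik mentions share one principle: nothing is granted implicitly, and every request is checked against an explicit policy. As a toy sketch of that deny-by-default posture (the roles, resources, and actions here are hypothetical examples, not from the article):

```python
# Hypothetical permission table: only explicitly granted
# (role, resource) -> actions combinations are allowed.
PERMISSIONS = {
    ("finance", "payment-system"): {"read", "approve"},
    ("engineering", "source-repo"): {"read", "write"},
}


def is_allowed(role, resource, action):
    """Zero-trust-style check: every request is evaluated against
    policy, and anything not explicitly granted is denied."""
    return action in PERMISSIONS.get((role, resource), set())
```

In a real deployment this lookup would be backed by an identity provider and per-request authentication rather than a static table, but the deny-by-default shape is the same.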
“Stay safe by enabling two-factor authentication, double-checking unexpected messages through official channels, and using advanced security tools that detect and flag suspicious communications before they reach you,” he said. “Ensure your operating system and security software are always up-to-date and only download apps or updates from reputable app stores and verified developers.”
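For readers curious how the rotating six-digit codes behind two-factor authentication actually work: most authenticator apps implement the TOTP standard (RFC 6238), which hashes a shared secret together with the current 30-second time window. A minimal sketch using only Python's standard library (the secret below is the RFC's published test value, not a real credential):

```python
import hmac
import struct
import time


def totp(secret, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    The code changes every `step` seconds, so a stolen code is only
    useful for a brief window -- which is why 2FA blunts credential theft.
    """
    now = time.time() if timestamp is None else timestamp
    counter = int(now // step)                       # current time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the user's phone derive the code from the same secret and clock, no code ever travels over the network ahead of time for an attacker to intercept.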
Experts also recommend strong password management, regular vulnerability scanning and patching, data encryption — at rest and in transit — and cloud security reviews.
“In 2025, it’s imperative that companies make the shift from reactive to proactive security,” Steinhauer said. “The cybersecurity landscape is moving at unprecedented speeds, with vulnerabilities being exploited nearly as fast as they are discovered. Companies must adopt a proactive approach that combines AI-powered scanning, continuous testing, and collaborative threat intelligence to quickly identify and mitigate threats.”