Experts report that AI-enhanced deepfakes are rapidly emerging as a favored tool in cyberattacks on both corporate and government targets. Despite widespread awareness of the risk, most organizations have yet to put substantial defenses in place, creating a dangerous gap between perception and preparedness. 

According to research released by OpenAI on October 7, threat actors, both cybercriminals and nation-states, are turning to large language models (LLMs) to refine phishing schemes and develop more sophisticated malware. A separate study from email security provider Ironscales, published October 9, found that 85% of midsized organizations have faced deepfake or AI-voice scams, and over half (55%) have incurred financial damages from these attacks. Eyal Benishti, CEO of Ironscales, notes that although most companies acknowledge the seriousness of the threat, they continue to struggle to keep pace with it.

"The deepfake threat landscape looks, above all else, dynamic," he says. "While email threats and static imagery are still the most commonly encountered vectors, there is a wide diversity of other forms of deepfakes that are quickly growing in prevalence. In fact, we're seeing more and more of every kind of deepfake in the wild." 

Attackers are now applying multiple AI-driven techniques to strengthen their cyberattack chains. One approach involves building “human digital twins” from publicly accessible data, enabling more believable phishing campaigns. When these profiles are paired with AI-generated voice samples, the result can be a realistic audio deepfake. Fears of exactly this kind of abuse prompted Microsoft to largely shelve its voice-cloning feature, which might otherwise have been built into platforms like Teams and misused for fraud.

Surge in Cyberattacks Driven by Artificial Intelligence  

According to cybersecurity experts, AI-driven attack methods are already in active use. CrowdStrike’s 2025 Threat Hunting Report predicts that incidents involving audio deepfakes will double by next year. At present, static deepfake images and AI-enhanced business email compromise (BEC) scams dominate, affecting 59% of organizations surveyed by Ironscales. That study, which polled 500 U.S. IT and cybersecurity professionals at midsized companies (1,000–10,000 employees), highlights how phishing has evolved. “It’s no longer about mass targeting,” explains April Lenhard of Qualys. “Attackers now craft personalized lures for each victim.”

Companies are stepping up their fight against AI-based deception, with 88% conducting deepfake-awareness training in the last year, up from 68% in 2024. However, this growing awareness hasn’t translated into better defense outcomes. Although nearly all cybersecurity professionals express confidence in their organizations’ ability to detect and stop deepfake attacks, including about 75% who feel “very confident,” most companies have still fallen victim. Affected firms lost an average of $167,000 once extreme cases are discounted; Ironscales’ raw mean of $280,000 was inflated by the 5% of respondents who reported losses above $1 million.
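
The gap between those two figures is a simple arithmetic effect: a small tail of very large losses drags the mean upward. The sketch below illustrates this with hypothetical loss figures (the survey's raw responses are not public), using a trimmed mean as one common way to discount outliers; it is not necessarily the adjustment Ironscales actually applied.

```python
import random
import statistics

# Illustrative only: hypothetical loss figures, not Ironscales' survey data.
# 95% of affected firms report "typical" losses; 5% report outliers above $1M.
random.seed(42)
typical = [random.uniform(20_000, 350_000) for _ in range(95)]       # bulk of respondents
outliers = [random.uniform(1_000_000, 5_000_000) for _ in range(5)]  # the 5% tail
losses = typical + outliers

raw_mean = statistics.mean(losses)

# Trimmed mean: drop the top 5% of responses before averaging, one common
# way to discount extreme cases.
trimmed = sorted(losses)[: int(len(losses) * 0.95)]
adjusted_mean = statistics.mean(trimmed)

print(f"raw mean:      ${raw_mean:,.0f}")       # inflated by the $1M+ tail
print(f"adjusted mean: ${adjusted_mean:,.0f}")  # closer to a typical firm's loss
```

With these made-up numbers, the raw mean comes out well above the trimmed one, a pattern similar to the survey's $280,000-versus-$167,000 spread.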