New research shows state-backed hackers are now embedding large language models in malware, allowing the malicious code to write and rewrite attack scripts in real time while staying under the radar.
Cybersecurity experts have long noted hackers leveraging AI to scale attacks, but Google researchers say they’ve now detected malware capable of using AI during execution to modify its behavior in real time.
The report describes the development as a significant step toward more autonomous, adaptive malicious software.
In June, researchers identified PROMPTFLUX, an experimental dropper that leverages an LLM to rewrite its own code on the fly, helping it slip past security systems.
The report notes that PROMPTFLUX, which Google has worked to disrupt, appears to be in early testing and is not yet capable of breaching or infecting target networks.
Researchers also identified PROMPTSTEAL, new malware linked to Russia’s APT28 group (also tracked as BlueDelta, Fancy Bear, and FROZENLAKE) and deployed in June against Ukrainian systems. The malware stood out for using an LLM to dynamically generate commands, the first time Google has observed this kind of AI integration in an active cyber operation.
Though these techniques are still experimental, researchers say they highlight the evolving nature of cyber threats and the growing potential for AI-driven attacks.
The report warns that a growing black market now offers AI tools built to support criminal activity, enabling less experienced or poorly funded hackers to launch more complex and far-reaching attacks.
“Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.