Google security researchers have uncovered the first instances of AI-powered malware being used in real-world cyberattacks, a sign that malicious actors are increasingly using generative AI to enhance their attack capabilities. Two new malware strains, PromptFlux and PromptSteal, have been identified, both of which employ large language models to alter their behavior mid-attack: they can dynamically generate malicious scripts, obfuscate their own code to evade detection, and create harmful functions on demand.

PromptFlux was discovered by scanning VirusTotal uploads for code that interacts with Gemini; it actively rewrites its own source code and disguises its activities. PromptSteal, identified by Ukraine, lets hackers interact with it through natural-language prompts and is designed to move through systems and steal data. Although both strains are still in early development, they represent a significant advancement in cybercrime, and Google is concerned that hackers could disable the safety guardrails on open-source AI models used in such malware.

The underground cybercrime market for AI tools has grown significantly, enabling less skilled criminals to launch sophisticated attacks. Even so, most attackers still rely on conventional methods like phishing and stolen credentials. This marks an evolution in cyber threats, with AI now being adopted by defenders and attackers alike.
axios.com
