Google Threat Intelligence has reported a significant rise in the adoption of AI tools by cybercriminals to improve every stage of their attacks. These advancements cover ransomware deployment, credential theft, and the creation of new malware variants.
Threat actors are now employing AI in diverse and sophisticated ways, including impersonating capture-the-flag (CTF) competitors to manipulate chatbots. This marks a new operational phase where AI is integrated deeply across the entire attack process.
The update to the “Adversarial Misuse of Generative AI” report, released earlier in 2025, highlights this troubling trend. According to the Google Threat Intelligence Group (GTIG), cybercriminals range from amateur coders to nation-state actors, all experimenting and exploiting AI tools on a broader scale.
“Attackers over the past twelve months have been observed moving into a new operational phase of AI abuse — one that integrates AI throughout the entire attack lifecycle,” the researchers stated.
The GTIG blog also outlines essential steps for cybersecurity teams to strengthen defenses against emerging threats, particularly those involving the misuse of large language models (LLMs) and Google’s Gemini AI Assistant.
Author's summary: Cybercriminals are increasingly leveraging AI to refine their attack strategies at all stages, posing escalating challenges for cybersecurity defenses worldwide.