The Google Threat Intelligence Group (GTIG) has released a new report examining how cybercriminals are exploiting artificial intelligence platforms and integrating AI into malware. The findings reveal a significant shift: attackers are moving beyond using AI for simple productivity gains and are now embedding it directly in active malicious operations, a new phase in the evolution of cyber threats.
This report builds on GTIG’s earlier publication, Adversarial Misuse of Generative AI, and highlights how both state-sponsored and criminal groups are adopting AI throughout their attack lifecycles. Actors from China, North Korea, Russia, and Iran are reportedly experimenting with AI for malware generation, social engineering, tool creation, and broader operational improvements.
Google continues to emphasise responsible AI development and has implemented strong countermeasures, including disabling accounts linked to malicious activity and continuously refining its models to resist abuse. These actions are part of a broader strategy to enhance ecosystem-wide protection and promote industry best practices for AI governance.
Key Findings from the Report
GTIG’s latest analysis identified new malware families, such as PROMPTFLUX and PROMPTSTEAL, that query large language models (LLMs) during execution. Rather than carrying fixed logic, these threats generate scripts on the fly, obfuscate their own code, and create functions on demand, making them harder to detect and more adaptable.
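The report stops short of publishing detection signatures, but the behaviour it describes suggests a simple triage heuristic defenders can experiment with: malware that queries an LLM at runtime typically has to embed an API endpoint or prompt text somewhere in the sample. The minimal Python sketch below scans files for such strings; the endpoint hosts and prompt patterns it checks are illustrative assumptions, not indicators drawn from GTIG's findings.

```python
#!/usr/bin/env python3
"""Triage sketch: flag files that embed LLM API endpoints or prompt scaffolding."""
import re
import sys
from pathlib import Path

# Hypothetical indicators -- illustrative only, not published by GTIG.
LLM_ENDPOINTS = [
    b"generativelanguage.googleapis.com",  # Gemini API host
    b"api-inference.huggingface.co",       # Hugging Face hosted inference
    b"api.openai.com",                     # OpenAI API host
]

# Prompt scaffolding rarely found in benign binaries; these patterns are guesses.
PROMPT_PATTERNS = [
    re.compile(rb"you are an? (expert|assistant)", re.IGNORECASE),
    re.compile(rb"respond only with (code|vbscript|powershell)", re.IGNORECASE),
]

def scan(path: Path) -> list[str]:
    """Return human-readable names of the indicators found in one file."""
    data = path.read_bytes()
    hits = [e.decode() for e in LLM_ENDPOINTS if e in data]
    hits += [p.pattern.decode() for p in PROMPT_PATTERNS if p.search(data)]
    return hits

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        found = scan(Path(arg))
        if found:
            print(f"{arg}: possible LLM-in-the-loop indicators: {found}")
```

In practice, such string matching is easily defeated by the very obfuscation these families employ, so it is best treated as a first-pass triage signal rather than a detection rule.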
To circumvent safeguards, some attackers are also applying social engineering techniques in their AI prompts, pretending to be students or cybersecurity researchers in order to trick models like Gemini into revealing restricted information or generating blocked outputs.
Meanwhile, the black market for illicit AI tools has matured significantly, with threat actors now selling multifunctional platforms for phishing, malware creation, and vulnerability scanning. These AI tools lower the barrier to entry for less-skilled attackers and expand access to advanced capabilities.
State-backed adversaries from North Korea, Iran, and China are also misusing Gemini and other AI systems to enhance reconnaissance, phishing, and command-and-control development, illustrating how artificial intelligence is being weaponised at every stage of cyber operations.
“Although adversaries attempt to use conventional AI platforms, many are shifting toward unrestricted models sold on the black market, which offer significant advantages to less advanced criminals,” said Billy Leonard, Tech Lead, Google Threat Intelligence Group.
Google Cloud Security’s Forecast for 2026
In parallel with the GTIG findings, Google Cloud Security has published its Cybersecurity Forecast 2026, outlining the challenges it expects to emerge over the coming year. The report highlights three primary focus areas: AI-driven threat evolution, cybercrime as the most disruptive global risk, and continued activity from nation-state actors.
- AI will transition from being an exceptional tool for attackers to a standard capability, accelerating social engineering, disinformation, and malware development.
- Ransomware and multi-faceted extortion will remain the most damaging cybercrime category globally, fuelled by zero-day exploits and third-party vendor compromises.
- State actors, including Russia, China, Iran, and North Korea, will intensify campaigns focusing on espionage, disruption, and financial theft.
By 2026, AI-enhanced phishing and deepfake-driven Business Email Compromise (BEC) are expected to become mainstream, blending voice, video, and text for realistic impersonations. The ability to mimic trusted executives and partners will make these attacks more convincing and harder to detect.
Regional Trends and Defensive Priorities
Across Europe, the Middle East, and Africa (EMEA), Google anticipates a rise in hybrid warfare combining cyber operations with information manipulation and physical disruption of critical infrastructure. The increasing enforcement of AI and cybersecurity regulations will also require organisations to maintain stronger compliance and governance frameworks.
To remain resilient, organisations are urged to adopt proactive, multi-layered defence models, invest in AI governance, and continuously evolve security controls. According to GTIG, this adaptive approach is essential to mitigate AI-driven threats and preserve operational continuity in a rapidly shifting digital landscape.
A companion publication, Advancing Gemini’s Security Safeguards, provides additional technical insight into Google’s protection strategies and model-hardening initiatives for 2026.