Google's Threat Analysis Group (TAG) has identified a new family of malware, dubbed PromptFlux, that uses generative AI services, including ChatGPT, Gemini, and other LLM tools, to facilitate phishing, misinformation, and code generation. The campaign, first observed in mid-2025, illustrates how threat actors can hide their cyberattacks behind legitimate AI infrastructure by leveraging large language model (LLM) APIs.
Google’s Technical Findings
According to Google’s Threat Analysis Group, PromptFlux is a fully featured, modular malware framework that automates malicious tasks through LLM APIs, using compromised API keys and developer credentials. Its infrastructure relies heavily on rotating proxy networks and cloud automation scripts, sending high-frequency API requests that simulate legitimate traffic.
Each API request is simply a prompt asking the model to generate the desired content: phishing emails, deepfakes, or obfuscated code snippets. PromptFlux then distributes the generated material automatically, through the compromised environment, through botnets, or via newly created fake social media accounts.
Google's telemetry indicates that PromptFlux employs a persistent prompt-chaining loop, allowing the attacker to carry out multi-step operations such as:
1. Generating phishing templates via the LLM APIs.
2. Encoding payloads into the resulting emails or web templates.
3. Deploying them through automated SMTP clients or social-network bots.
PromptFlux thus achieves scale and persistence with minimal human interaction.
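Prompt chaining itself is a neutral automation pattern: each step's prompt template is filled with the previous step's model output, so a single seed can drive an entire pipeline unattended. The sketch below is a minimal, benign illustration of the loop structure TAG describes; the endpoint, key, response shape, and `call_llm` helper are hypothetical stand-ins, not PromptFlux code.

```python
import requests  # third-party HTTP library, assumed installed

# Hypothetical endpoint and key: stand-ins for any LLM HTTP API.
API_URL = "https://api.example-llm.invalid/v1/generate"
API_KEY = "sk-placeholder"

def call_llm(prompt: str) -> str:
    """Send one prompt to an LLM API and return the generated text.
    The request/response shape here is an assumption."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

def chain_prompts(steps: list[str], seed: str) -> str:
    """Run a multi-step chain: each template is filled with the
    previous step's output, so one seed drives the whole pipeline."""
    output = seed
    for template in steps:
        output = call_llm(template.format(previous=output))
    return output

# Benign example chain: draft, tighten, then summarize a note.
result = chain_prompts(
    ["Draft a short product update about: {previous}",
     "Rewrite this more concisely: {previous}",
     "Summarize it in one sentence: {previous}"],
    seed="our new release",
)
```

Pointed at malicious templates and wired to a delivery bot, the same loop is what lets an operator run content generation unattended at scale.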
Sophisticated Techniques for Evasion
TAG’s analysis reveals that PromptFlux also uses several techniques to blend in with ordinary AI activity. Its operators route requests through legitimate API gateways so that firewalls are not tripped, and they continuously rotate request headers so that each call resembles normal developer traffic.
Moreover, the framework switches between different AI APIs (Google's Gemini, OpenAI's ChatGPT, and Anthropic's Claude) to split the load and stay below detection thresholds at any single provider. This "multi-LLM redundancy" makes PromptFlux traffic harder to fingerprint or block, because no individual provider sees enough of the malicious volume to shut it down outright; even if one API service cuts off access, the others keep running.
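Mechanically, this redundancy is ordinary provider-failover logic. Below is a minimal sketch of the pattern, with placeholder functions standing in for real API calls; it shows why revoking a key at one vendor merely shifts the traffic to the next.

```python
import random

# Hypothetical wrappers; each would hold its own (in this campaign,
# compromised) credentials for a different LLM provider.
def via_gemini(prompt: str) -> str:
    return f"[gemini] {prompt}"      # placeholder for a real Gemini call

def via_openai(prompt: str) -> str:
    return f"[openai] {prompt}"      # placeholder for a real OpenAI call

def via_anthropic(prompt: str) -> str:
    return f"[anthropic] {prompt}"   # placeholder for a real Claude call

PROVIDERS = [via_gemini, via_openai, via_anthropic]

def generate_with_failover(prompt: str) -> str:
    """Spread requests across providers in random order; if one key is
    revoked or rate-limited, traffic silently moves to the next."""
    for provider in random.sample(PROVIDERS, k=len(PROVIDERS)):
        try:
            return provider(prompt)
        except Exception:
            continue  # blocked at this provider: fall through
    raise RuntimeError("all providers unavailable")
```

Because the provider order is reshuffled on every call, no single vendor sees the full request volume.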
Researchers at TAG also found that PromptFlux uses tooling that spins up fresh proxy servers and reroutes API traffic on the fly, driven by a domain generation algorithm (DGA). This makes blocking PromptFlux by IP address almost useless.
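DGAs are a long-documented malware staple: client and operator derive the same rendezvous domains from a shared seed and the current date, so there is no fixed list for defenders to block. The sketch below is a generic textbook construction, not PromptFlux's actual algorithm; the seed and domain suffix are placeholders.

```python
import hashlib
from datetime import date, timedelta

def dga_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive the day's rendezvous domains deterministically from a
    shared seed. Generic illustrative construction, placeholders only."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".example.net")
    return domains

# Both the malware and an analyst holding the seed compute the same
# list, so tomorrow's domains can be enumerated before they go live.
print(dga_domains("campaign-seed", date.today() + timedelta(days=1)))
```

The flip side for defenders: once the algorithm and seed are recovered from a sample, upcoming domains can be enumerated and sinkholed in advance, which is more durable than chasing IP addresses.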
Google's Mitigation Efforts
When asked about the campaign, Google said it has already revoked hundreds of compromised API keys and deployed statistical detection models to identify API abuse. The models examine signals such as request frequency, prompt anomalies, and patterns of API token reuse that do not match normal developer behavior.
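Google has not published its detection models, but one of the signals it names, request frequency, is easy to illustrate. The sketch below flags API keys whose hourly volume is an extreme outlier using the robust modified z-score; the telemetry, threshold, and key names are assumptions for illustration only.

```python
from statistics import median

def rate_outliers(requests_per_hour: dict[str, int],
                  threshold: float = 3.5) -> list[str]:
    """Flag API keys whose hourly request volume is an extreme outlier
    versus the population, using the modified z-score (based on the
    median absolute deviation, which resists skew from the outlier
    itself). Illustrative only; real systems use richer features."""
    counts = list(requests_per_hour.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []
    return [key for key, n in requests_per_hour.items()
            if 0.6745 * (n - med) / mad > threshold]

# Hypothetical telemetry: one key is hammering the API.
telemetry = {"key-a": 40, "key-b": 55, "key-c": 38, "key-d": 4200}
print(rate_outliers(telemetry))  # -> ['key-d']
```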
Google added that it is working with OpenAI, Anthropic, and the major cloud providers to share indicators and develop standard methods for detecting API misuse.
A Google TAG spokesperson said the company is coordinating closely with the rest of the AI industry on stronger authentication, stricter rate limiting, and better visibility into how these APIs are used, with the goal of spotting abuse and shutting it down before it causes harm.
Consequences for the AI and Security Ecosystem
PromptFlux represents a new threat vector: exploitation of AI models at the API level. Unlike legacy malware built from compiled code or binaries, PromptFlux uses legitimate AI model endpoints to produce content on demand, bypassing traditional antivirus and intrusion detection systems.
Cybersecurity research shows that AI exploitation is moving beyond prompt injection attacks, where users trick AI models into misbehaving, toward automated, purpose-built API exploitation that can generate threats at scale.
This episode highlights the need for:
• API-level behavioral monitoring for generative AI platforms.
• Granular access control policies and anomaly detection for developer accounts (a minimal sketch follows below).
• Cross-provider intelligence sharing on exploitable LLM usage patterns.
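As a concrete illustration of the second point, abuse of a stolen developer credential often surfaces as one API token appearing from far more distinct client IPs than a single developer plausibly uses, for example when requests are bounced through a rotating proxy pool. A minimal sketch, with an assumed log shape and threshold:

```python
from collections import defaultdict

def tokens_with_ip_spread(events: list[tuple[str, str]],
                          max_ips: int = 20) -> list[str]:
    """Given (api_token, client_ip) log events, flag tokens seen from
    more distinct IPs than one developer plausibly uses. The threshold
    and log shape are assumptions for illustration."""
    ips_per_token = defaultdict(set)
    for token, ip in events:
        ips_per_token[token].add(ip)
    return [t for t, ips in ips_per_token.items() if len(ips) > max_ips]

# Hypothetical access log: "tok-x" bounces through a proxy pool.
log = ([("tok-dev", "10.0.0.5")] * 50
       + [("tok-x", f"203.0.113.{i}") for i in range(1, 64)])
print(tokens_with_ip_spread(log))  # -> ['tok-x']
```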
The Bigger Security Picture
PromptFlux marks a shift in cyber operations in which AI models serve as both the means and the target. By embedding AI in their automation pipelines, threat actors can now run high-volume phishing, misinformation, and payload-development operations with minimal effort.
Google’s reporting serves as an early warning: as generative AI becomes the norm, both defensive tooling and ethical AI governance will need to develop rapidly to mitigate its misuse in the cybersecurity space.