0-Click ChatGPT Agent Flaw Exposes Gmail Data to Attackers
Researchers have discovered a critical zero-click vulnerability in ChatGPT’s Deep Research agent that allows attackers to silently steal sensitive Gmail data without any user interaction.
This sophisticated attack leverages service-side exfiltration techniques, making it invisible to traditional security defenses and representing a significant escalation in AI agent security threats.
The Silent Data Theft Mechanism
As per a report, the vulnerability exploits ChatGPT’s Deep Research agent, an autonomous research tool that can browse websites and access connected services like Gmail to generate comprehensive reports.
Unlike previous client-side attacks that relied on users viewing malicious content, this flaw operates entirely within OpenAI’s cloud infrastructure, making detection nearly impossible.
Diagram illustrating the flow of requests and data between client, server, and database, highlighting server-side HTML response and client-side JavaScript fetching data
The attack begins when a cybercriminal sends a seemingly innocent email containing hidden HTML instructions using techniques like tiny fonts, white-on-white text, and layout tricks.
These invisible commands instruct the agent to locate specific sensitive information in the victim’s inbox and transmit it to an attacker-controlled server.
When users later ask the Deep Research agent to analyze their emails, it unknowingly executes these malicious instructions.

ChatGPT Deep Research agent connecting to Gmail and summarizing recent emails. Researchers achieved a 100% success rate by crafting sophisticated social engineering techniques that circumvent the agent’s built-in safety restrictions.
The malicious emails employ multiple psychological manipulation tactics: asserting false authority by claiming “full authorization,” disguising attacker servers as legitimate “compliance validation systems,” and creating urgency through threats of report deficiencies.
Flowchart depicting the phishing process where an attacker sends a malicious email, the victim interacts with it, and credentials are stolen and exploited by the attacker
The most clever technique involves Base64 encoding of stolen data, which researchers framed as a “security measure” to protect information during transmission.
This encoding occurs before OpenAI’s security layers can inspect the content, allowing sensitive personally identifiable information like names and addresses to be extracted without triggering safety mechanisms.
This represents a dangerous evolution from previous AI agent vulnerabilities. Traditional client-side attacks required users to view attacker-controlled images or content in their browser, making them detectable by enterprise security tools like web gateways and endpoint monitoring systems.
Comparison table detailing differences between server-side and client-side caching in web systems
The new service-side approach operates exclusively within OpenAI’s infrastructure using the agent’s built-in browsing capabilities.
Enterprise security controls cannot monitor or block these requests since they originate from OpenAI’s trusted servers rather than user devices.
This creates a significant blind spot for organizations relying on AI agents for sensitive data processing.
Additionally, service-side attacks provide attackers with broader exfiltration options.
While client-side image-based leaks are often restricted to specific trusted domains, researchers observed no comparable restrictions on URLs the Deep Research agent can directly access, enabling data theft to any attacker-chosen destination.
The vulnerability highlights critical risks as AI agents gain deeper integration with personal and corporate data systems.
Organizations using ChatGPT’s Deep Research with email access should implement additional monitoring and consider restricting agent permissions until patches are deployed.
Find this Story Interesting! Follow us on LinkedIn and X to Get More Instant Updates.
Post Comment