A security vulnerability in OpenAI’s ChatGPT allowed researchers to siphon sensitive information from Gmail inboxes.
The exploit, known as ShadowLeak, was revealed this week by cybersecurity firm Radware and has since been patched.
The team behind the discovery showed how attackers could manipulate OpenAI’s Deep Research, an agent built into ChatGPT, to perform tasks without the user’s knowledge.
By slipping hidden instructions into an email, the researchers set a trap that instructed the agent to search inboxes for confidential records, including HR files and login credentials, and send them to a remote server.
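The mechanics of such a trap are straightforward in outline: the instructions travel as email content that a human reader never sees but an agent parsing the raw message does. The sketch below is a hypothetical illustration of that pattern, not Radware’s actual payload; the addresses, wording, and styling are invented for the example.

```python
# Illustrative sketch only: an HTML email carrying text that is invisible to a
# human reader but present in the markup an agent ingests when it reads the message.
from email.message import EmailMessage

VISIBLE_BODY = "<p>Hi team, the quarterly schedule is below.</p>"

# Rendered invisibly by mail clients (zero font size, white text),
# yet still part of the content a text-extraction step hands to the model.
HIDDEN_INSTRUCTION = (
    '<div style="font-size:0;color:#ffffff;">'
    "(injected instruction telling the agent to gather mailbox data "
    "and forward it to an attacker-controlled address)"
    "</div>"
)

msg = EmailMessage()
msg["Subject"] = "Quarterly schedule"
msg["From"] = "sender@example.com"   # invented address
msg["To"] = "victim@example.com"     # invented address
msg.set_content("Plain-text fallback for clients without HTML.")
msg.add_alternative(VISIBLE_BODY + HIDDEN_INSTRUCTION, subtype="html")

# The hidden div travels with the message like any other HTML content.
print(msg.as_string())
```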
What makes the case alarming is that it required no clicks or other action from the victim. The agent carried out the attacker’s instructions as soon as it accessed the inbox, and because the exfiltration ran on OpenAI’s cloud infrastructure rather than the user’s device, it slipped past local security tools.
Radware noted: “This process was a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”
The researchers explained that the same strategy could be applied to other services connected to Deep Research, such as Outlook, GitHub, Google Drive, and Dropbox. According to the report, “The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records.”
Radware described ShadowLeak as a proof-of-concept, but its implications are far-reaching. Unlike traditional prompt injection attacks, this exploit was invisible to standard defences because it operated server-side, not on the victim’s device.
OpenAI confirmed it was informed of the flaw on 18 June 2025. A fix was released on 3 September 2025. The company said it had found no evidence of real-world abuse before the disclosure.
Cybersecurity experts see ShadowLeak as a wake-up call for the industry. They argue that autonomous agents should be treated with the same caution as privileged human users, meaning tighter access controls, stronger logging systems, and continuous monitoring of their behaviour.
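As a rough illustration of that recommendation, the sketch below (a hypothetical policy layer, not any vendor’s actual API) gates an agent’s tool calls behind an allow-list and writes every decision to an audit log, much as an organisation would for a privileged human account.

```python
# Minimal sketch, assuming a hypothetical in-house policy layer around agent tool calls:
# allow-list what the agent may do, and audit every decision, allowed or denied.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent-audit")

# Hypothetical policy: tools the agent may call and the scopes permitted for each.
ALLOWED_TOOLS = {
    "search_mail": {"read"},
    "summarise_document": {"read"},
    # deliberately absent: anything that sends data to external hosts
}

def execute_tool(agent_id: str, tool: str, scope: str, run):
    """Run a tool call only if policy allows it, and record the decision either way."""
    allowed = scope in ALLOWED_TOOLS.get(tool, set())
    audit.info(
        "%s agent=%s tool=%s scope=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        agent_id, tool, scope,
        "ALLOW" if allowed else "DENY",
    )
    if not allowed:
        raise PermissionError(f"{tool} with scope '{scope}' is not permitted for agents")
    return run()

# Usage: a mailbox read is permitted and logged; an outbound request is refused and logged.
execute_tool("deep-research-1", "search_mail", "read", lambda: "…inbox results…")
try:
    execute_tool("deep-research-1", "http_post", "write", lambda: None)
except PermissionError as exc:
    audit.info("blocked: %s", exc)
```

The design point is the same one the experts make: the agent never talks to a tool directly, so every action passes through the same access-control and monitoring path a privileged employee would.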