
The security risks of sharing your inbox with an AI agent
Prompt injection, data leakage, and credential exposure. The real risks when your agent reads your personal email.
Your agent can book flights, write code, and manage your calendar. But the moment you connect it to your Gmail, you've handed it the keys to your entire life. Every password reset, every bank statement, every private conversation — sitting in a context window, waiting to be leaked.
This isn't hypothetical. In 2025 and early 2026, researchers demonstrated real zero-click attacks that exfiltrate data from personal inboxes through AI agents. Let's walk through the specific risks, because the details matter more than the fear.
Prompt injection through crafted emails
The most immediate threat is indirect prompt injection. An attacker sends your agent a carefully crafted email containing hidden instructions — white text on a white background, tiny font sizes, or encoded strings buried in HTML. Your agent reads the email to categorize or summarize it, and the hidden instructions hijack its behavior.
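One layer of defense is to strip the obvious hiding tricks before the agent ever sees the message body. Below is a minimal sketch in Python, assuming the raw HTML is available and using BeautifulSoup; the style patterns cover only the crudest cases and are illustrative, not exhaustive.

```python
import re
from bs4 import BeautifulSoup

# Inline styles commonly used to hide injected instructions from human readers.
# Illustrative patterns only; real filters use much larger rule sets.
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0|color\s*:\s*#?fff",
    re.IGNORECASE,
)

def visible_text(html: str) -> str:
    """Return only the text a human would plausibly see in an HTML email."""
    soup = BeautifulSoup(html, "html.parser")
    # Drop elements that never render.
    for tag in soup(["script", "style", "head", "title", "meta"]):
        tag.decompose()
    # Drop elements whose inline style hides them (white-on-white, zero font, etc.).
    for tag in soup.find_all(style=HIDDEN_STYLE):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)
```

Encoded payloads and plain-text instructions still get through a filter like this, which is why preprocessing is a complement to isolation, not a substitute for it.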
This isn't a lab experiment. In June 2025, researchers at Aim Security disclosed CVE-2025-32711 (dubbed "EchoLeak"), a zero-click vulnerability in Microsoft 365 Copilot with a CVSS score of 9.3. An attacker sent a single email to an employee's Outlook inbox. When anyone later asked Copilot a routine question — summarizing a report, finding a file — the RAG engine retrieved the poisoned email as "relevant context," executed the embedded instructions, and silently exfiltrated SharePoint and OneDrive files to the attacker's server. No clicks required.
Warning
OWASP's 2025 Top 10 for LLM Applications ranks prompt injection as the number one vulnerability, appearing in over 73% of production AI deployments assessed during security audits. If your agent reads untrusted email, it's exposed.
A similar attack, ShadowLeak, hit ChatGPT's Deep Research agent in September 2025. A single malicious email in a user's Gmail triggered the agent to leak inbox data from OpenAI's cloud infrastructure — invisible to local defenses. The proof-of-concept worked across Gmail, Outlook, Google Drive, Dropbox, and Notion connectors. Every connector your agent has access to becomes an attack surface.
Credential harvesting from password reset emails
Your inbox is a vault of credentials. Password reset links, two-factor backup codes, API keys from SaaS platforms, OAuth tokens from app signups. When an agent has full inbox access, all of this sits within reach.
The OpenClaw security crisis made this painfully concrete. In late January 2026, security researchers found over 30,000 exposed OpenClaw instances publicly reachable on the internet. Inside those instances, researchers found Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, and complete conversation histories. Sixty-three percent of observed deployments were vulnerable, with nearly 13,000 instances exploitable via remote code execution.
If your agent is connected to your personal inbox, it's not just reading emails — it's sitting on a trove of secrets. One compromised agent, and every credential that ever passed through your inbox is exposed.
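If an agent must touch a real inbox at all, one partial mitigation is to refuse to hand credential-shaped messages to the model in the first place. A rough sketch follows; the patterns are illustrative placeholders rather than a real secret scanner.

```python
import re

# Rough signatures of credential-bearing emails. Placeholder patterns for
# illustration; production scanners rely on far larger, maintained rule sets.
SECRET_PATTERNS = {
    "password_reset_link": re.compile(r"https?://\S*(reset|recover|verify)\S*", re.I),
    "api_key": re.compile(r"\b(sk|pk|ghp|xox[bap])[-_][A-Za-z0-9_-]{16,}\b"),
    "otp_code": re.compile(r"\b(code|verification)\b.{0,40}\b\d{6}\b", re.I | re.S),
}

def classify_sensitivity(body: str) -> list[str]:
    """Return the names of any credential patterns found in an email body."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(body)]

def safe_for_agent(body: str) -> bool:
    """Keep the message out of the agent's context if it looks credential-bearing."""
    return not classify_sensitivity(body)
```

Filtering reduces exposure but cannot eliminate it: anything the filter misses still lands in the context window.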
Data leakage through context windows
Every email your agent reads becomes part of its context. That context gets processed by an LLM, logged in conversation histories, and sometimes cached across sessions. Your medical appointment confirmations, your salary negotiations, your legal correspondence — they're all fair game once they're in the window.
A study across 1,000 enterprise environments found that 99% had sensitive data exposed to AI tools due to insufficient access controls. The problem isn't just malicious actors. It's the everyday reality of feeding private data into systems that weren't designed to protect it. Context windows don't distinguish between "this is a routine newsletter" and "this contains my social security number." To the model, it's all just tokens.
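One way to shrink that exposure is data minimization: decide per task which fields the model actually needs and drop everything else before it enters the window. A sketch, assuming messages arrive as plain dictionaries:

```python
from dataclasses import dataclass

@dataclass
class EmailSummaryView:
    """The minimal slice of a message a triage agent actually needs."""
    sender: str
    subject: str
    snippet: str  # a short prefix of the body, not the full thread

def minimize_for_context(message: dict, snippet_len: int = 200) -> EmailSummaryView:
    # Deliberately drop attachments, full bodies, and quoted history so they
    # never enter the context window or the logs downstream of it.
    body = message.get("body", "")
    return EmailSummaryView(
        sender=message.get("from", ""),
        subject=message.get("subject", ""),
        snippet=body[:snippet_len],
    )
```

Minimization limits what a leak can contain, but it only helps if the agent never holds broader access in the first place.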
The blast radius of a compromised agent
When your agent has access to your personal inbox, the blast radius of any compromise is everything. Not just the agent's tasks. Not just recent emails. Everything.
The ClawHavoc campaign in February 2026 showed how fast things can spiral. Researchers found 341 malicious skills in OpenClaw's ClawHub marketplace — roughly 12% of the entire registry — delivering infostealers disguised as legitimate tools. By mid-February, that number had climbed past 1,184. A single ClawHub user uploaded 354 malicious packages in what appears to have been an automated blitz. Every agent that installed one of those skills and had inbox access was a potential conduit for full credential theft.
Warning
System prompt extraction was the most common attacker objective in Q4 2025. Attackers use extracted prompts — role definitions, tool descriptions, policy boundaries — to craft more effective follow-on attacks. If your agent's prompt includes inbox access patterns, that's a roadmap for an attacker.
This is the core problem with the shared inbox model. You can't scope the damage. A compromised agent with access to your personal email doesn't just lose its own data — it loses yours. Your contacts, your financial information, your private communications. The claw reaches into everything.
Isolation as mitigation
The fix isn't to stop using AI agents with email. It's to stop giving them access to your email.
When your agent has its own dedicated inbox — its own shell — the blast radius shrinks to exactly what the agent needs. A compromised agent exposes only its own purpose-built inbox, not your 15 years of personal correspondence. No password reset links to harvest. No medical records to leak. No credentials sitting in old onboarding emails.
This is what agent email is designed for. Your agent gets an address like support-bot@getlobstermail.com and communicates through that. Your personal inbox stays untouched, out of the context window, out of reach.
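Concretely, the agent polls its own mailbox with its own credentials. The sketch below uses IMAP from Python's standard library; the hostname is an assumption for illustration, and the important property is that this login works only for the agent's one address.

```python
import imaplib
import email

# Credentials for the agent's dedicated mailbox only. These are the sole mail
# credentials the agent ever holds; your personal account is never in scope.
AGENT_IMAP_HOST = "imap.getlobstermail.com"   # assumed hostname, for illustration
AGENT_ADDRESS = "support-bot@getlobstermail.com"
AGENT_PASSWORD = "app-specific-secret"        # load from a secret store in practice

def fetch_unread_for_agent() -> list[email.message.Message]:
    """Pull unread mail from the agent's own inbox and nothing else."""
    messages = []
    with imaplib.IMAP4_SSL(AGENT_IMAP_HOST) as imap:
        imap.login(AGENT_ADDRESS, AGENT_PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            messages.append(email.message_from_bytes(msg_data[0][1]))
    return messages
```

If those credentials leak, the attacker gets a purpose-built inbox with nothing in it but the agent's own correspondence.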
Isolation won't stop prompt injection from happening — that's a problem the entire industry is working on. But it limits what an attacker can get. The difference between "the agent's isolated inbox was compromised" and "my entire personal email history was exfiltrated" is the difference between an inconvenience and a catastrophe.
If you're connecting agents to email today, the question isn't whether something will go wrong. It's how much damage it'll do when it does. Your agent shouldn't be using your Gmail. Give it its own address, and keep the reef clean.
Frequently asked questions
What is prompt injection in the context of email?
Prompt injection is when an attacker embeds hidden instructions inside an email — using techniques like invisible text, tiny fonts, or encoded strings — that trick an AI agent into executing unintended commands when it reads the message.
Can an attacker really steal my data through a single email?
Yes. The EchoLeak vulnerability (CVE-2025-32711) demonstrated this against Microsoft 365 Copilot in production, and ShadowLeak showed the same attack pattern against ChatGPT's Gmail integration. Both were zero-click — no user action required.
What credentials are at risk when an agent accesses my inbox?
Password reset links, two-factor backup codes, API keys from SaaS platforms, OAuth tokens, onboarding credentials, and any other sensitive information that arrives via email. The OpenClaw incidents revealed exposed Anthropic API keys, Slack OAuth credentials, and Telegram bot tokens.
What was the ClawHavoc incident?
ClawHavoc was a supply-chain poisoning campaign targeting OpenClaw's skill marketplace in early 2026. Researchers initially identified 341 malicious skills designed to steal data from agents, roughly 12% of the entire ClawHub registry, and by mid-February the count had climbed past 1,184.
How many OpenClaw instances were exposed?
Security researchers identified over 30,000 exposed OpenClaw instances publicly reachable on the internet, with 63% of observed deployments vulnerable and nearly 13,000 exploitable via remote code execution.
What is a context window leak?
When an AI agent reads your email, those messages become part of its context window — the data the model processes. That context can be logged, cached, or extracted by attackers, exposing private information the agent was never meant to share.
Does giving my agent its own inbox prevent prompt injection?
No. Prompt injection is a model-level vulnerability that the industry is still working to solve. But an isolated inbox limits the blast radius. If an agent is compromised, the attacker only accesses the agent's purpose-built inbox — not your personal email history.
How does LobsterMail isolate agent email?
LobsterMail gives each agent its own dedicated email address and inbox. The agent never touches your personal email. Communications are contained in the agent's own shell, so a compromise doesn't cascade to your private data.
Is OAuth access to Gmail safe for AI agents?
OAuth gives the agent scoped access, but in practice most implementations grant broad inbox read permissions. That means the agent can see everything — password resets, financial statements, private conversations. Dedicated agent email is a safer architecture. Read more about why OAuth with Gmail is painful for agents.
What should I do if my agent already has Gmail access?
Audit what permissions your agent has, revoke any unnecessary OAuth scopes, and consider migrating to a dedicated agent email address. The fewer secrets in the agent's reach, the smaller the blast radius if something goes wrong.
Can prompt injection happen through plain text emails?
Yes, though it's less common. Most documented attacks use HTML email techniques to hide instructions. But any text that an agent processes can contain injection payloads. The attack surface exists regardless of email format. Learn more about prompt injection in email agents.
Does email deliverability matter for agent security?
Indirectly, yes. Agents with poor deliverability may end up in spam folders alongside actual malicious emails. A dedicated agent email address with proper authentication (SPF, DKIM, DMARC) helps ensure legitimate messages are delivered and suspicious ones are filtered. See our guide on email deliverability for AI agents.
Give your agent its own email. Get started with LobsterMail — it's free.