
Semantic Kernel multi-agent orchestration: email as the async handoff layer
Wire LobsterMail into Semantic Kernel multi-agent pipelines for durable, cross-process handoffs that survive restarts and work across organizational boundaries.
Semantic Kernel's orchestration model is genuinely impressive for the use cases it was designed around. Wire together specialist agents — one for research, another for writing, a third for review — and the Process Manager coordinates the handoffs. For in-process, synchronous workflows, it works exactly as advertised.
But there's a class of workflows that breaks this model, and it's the one that comes up most often in enterprise deployments: what happens when Agent A hands off to Agent B, and Agent B has to wait for something outside the process? A human approval that takes two hours. A vendor response that arrives tomorrow. A compliance sign-off that takes three days. The in-memory handoff falls apart. You need something durable — something that persists across restarts, crosses organizational boundaries, and doesn't require the receiving agent to be running at the moment the message is sent.
Email is the answer. And with agents that can self-provision their own inboxes, wiring this into a Semantic Kernel pipeline is simpler than it looks.
How Semantic Kernel structures multi-agent work
The Semantic Kernel agent framework composes agents into orchestration patterns: sequential pipelines where steps run in order, concurrent fan-out where multiple agents run in parallel, and dynamic handoffs where an agent decides at runtime which specialist to invoke next. Each agent carries its own tools, memory, and instructions. The Process Manager tracks which step is active and what each step produced.
For a document processing pipeline that completes in under a minute, this is exactly the right model. The seams show when a step has to wait for something it doesn't control — and that's most enterprise workflows.
Where in-memory handoffs fall apart
Long waits are the first problem. A human approval step that takes hours can't block an in-memory queue indefinitely. You either leave the process open and hold resources, or you terminate and lose state. Neither works at scale.
Process restarts are the second. Kubernetes reschedules your pod. A deployment rolls out during a long-running pipeline. The in-memory orchestration state is gone, and picking up where you left off becomes a state persistence problem the core framework doesn't solve for you.
Cross-organization handoffs are the third, and it's the one most people don't anticipate until they're already in production. Your Semantic Kernel agent needs to pass a result to an agent running at a partner, a vendor, or a client on completely separate infrastructure. There's no shared message bus. No shared memory. No API you can both agree on before you need to ship.
Why email handles all three cases
Email has properties that map directly onto these failure modes. Messages persist whether or not the recipient is active. Delivery is guaranteed by the receiving mail server, not by both parties being up simultaneously. The protocol works across every organization without requiring shared infrastructure or pre-negotiated API contracts. And every LLM can read and write email, so your agents handle it the same way they'd handle any other text.
The pattern is straightforward: Agent A finishes its step, serializes its output into an email, and sends it to Agent B's dedicated inbox. Agent B starts on a cron schedule (or responds to a webhook), polls its inbox, picks up the message, and resumes the pipeline. State lives in the email thread — durable, auditable, and independent of any single process.
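The handoff envelope itself can be a small typed payload serialized into the email body. A minimal sketch, assuming a JSON envelope (the field names mirror the examples later in this post but are illustrative, not a LobsterMail requirement):

```typescript
// Illustrative handoff envelope passed between agents in the email body.
interface HandoffPayload {
  contractId: string;
  riskFlags: string[];
  nextStep: string;
}

// Sender side: serialize the step's output for the email body.
function toHandoffBody(payload: HandoffPayload): string {
  return JSON.stringify(payload);
}

// Receiver side: parse and validate before resuming, so a malformed
// message fails loudly instead of resuming the wrong pipeline step.
function fromHandoffBody(body: string): HandoffPayload {
  const parsed = JSON.parse(body);
  if (typeof parsed.contractId !== "string" || typeof parsed.nextStep !== "string") {
    throw new Error("malformed handoff payload");
  }
  return parsed as HandoffPayload;
}
```

Validating on the receiving side matters more here than with an in-process queue: the sender may be a different deployment, or a different organization entirely.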
This isn't novel architecture. Message queues and async workflows have operated this way for decades. Email is just the version that works across organizational boundaries without requiring a shared broker both parties have to agree to run.
For more on this role in the broader agent stack, the agent communication stack post covers where email fits relative to Slack, voice, and emerging protocols like A2A.
Setting up agent inboxes with LobsterMail
The historical blocker for this pattern has been inbox provisioning. Setting up a Gmail account, configuring OAuth, or standing up SMTP infrastructure for each agent requires human intervention. Each new agent deployment means someone has to go provision credentials and wire them in. At scale, that becomes a real bottleneck.
LobsterMail lets each agent provision its own inbox autonomously. The SDK creates a free account on first run, persists the token, and issues the address — no human in the loop:
```typescript
import { LobsterMail } from '@lobsterkit/lobstermail';

const lm = await LobsterMail.create();
const inbox = await lm.createSmartInbox({ name: 'sk-legal-review' });
console.log(inbox.address); // sk-legal-review@lobstermail.ai
```
createSmartInbox() generates a readable address from your agent's name and handles name collisions automatically. Agent A sends its handoff:
```typescript
await senderInbox.send({
  to: 'sk-legal-review@lobstermail.ai',
  subject: 'Handoff: contract analysis complete',
  body: JSON.stringify({
    contractId: contract.id,
    riskFlags: analysis.flags,
    nextStep: 'legal-sign-off',
  }),
});
```
Agent B wakes on schedule and picks it up:
```typescript
const emails = await inbox.receive();

for (const email of emails) {
  const payload = JSON.parse(email.body);
  await resumeFromHandoff(payload);
}
```
The pipeline state now lives in the inbox. A process restart doesn't lose it. A cross-org handoff works identically — the receiving agent at a partner firm provisions its own LobsterMail inbox, and you send to that address the same way.
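One practical consequence of restart survival: the receiving agent may see the same message again after a crash mid-poll, so the resume step should be idempotent. A sketch of deduplication by message ID (assuming each received email exposes some stable identifier; in production the processed set would live in durable storage, not memory):

```typescript
// Track which handoff messages have already been processed so a restart
// or duplicate poll doesn't replay a pipeline step.
const processed = new Set<string>();

async function handleOnce(
  emailId: string,
  body: string,
  resume: (payload: unknown) => Promise<void>
): Promise<boolean> {
  if (processed.has(emailId)) return false; // already handled, skip
  await resume(JSON.parse(body));
  processed.add(emailId); // mark done only after resume succeeds
  return true;
}
```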
LobsterMail also scores incoming emails for injection risk, which matters when your agents are automatically acting on email content. The security docs cover how to use those scores in your pipeline logic.
When this pattern is worth using
For short, in-process Semantic Kernel pipelines, use the native orchestration model. It's faster, simpler, and the right default for synchronous work.
Email handoffs add value when the wait between steps is measured in hours or days, when the pipeline needs to survive process restarts without losing progress, when you're coordinating across organizational boundaries where shared message queues aren't practical, or when compliance requirements mean you need a durable log of exactly what passed between agents.
Those conditions describe most of the interesting enterprise Semantic Kernel deployments — the ones with multi-day approval chains, vendor integrations, and compliance review steps baked in. For those workflows, the async layer is where reliability is won or lost. Email is the mechanism that makes it stable.
See multi-agent email coordination patterns for more on structuring shared inboxes, routing logic, and agent-to-agent messaging at scale.
Frequently asked questions
What is Semantic Kernel's multi-agent orchestration?
Semantic Kernel's agent framework lets you compose multiple AI agents into coordinated workflows — sequential pipelines, concurrent fan-out, and dynamic handoffs where one agent routes work to another based on runtime decisions. The Process Manager tracks state across steps. It handles synchronous, in-process coordination well; for async and cross-process coordination you need an external transport layer like email.
Why email instead of Redis, Kafka, or a shared database for async handoffs?
Redis and Kafka work great when all your agents share infrastructure. Email is the right choice when they don't — cross-org handoffs, vendor integrations, pipelines where the receiving agent runs on a completely separate system. Email also provides a built-in audit trail and works without pre-negotiating a shared broker that both parties have to agree to run and maintain.
Does LobsterMail work with Semantic Kernel's .NET or Python SDKs?
LobsterMail ships an npm package (@lobsterkit/lobstermail) for TypeScript and JavaScript. For .NET or Python Semantic Kernel agents, you can call the LobsterMail REST API directly, or run a thin JavaScript sidecar that handles email I/O. The HTTP API reference is at Getting Started.
How does the receiving agent know when a new email has arrived?
Two options: polling via inbox.receive() on a cron schedule, or webhooks that trigger your agent when mail arrives. For low-latency handoffs, webhooks are the right call. For scheduled batch processing — where the next step runs once an hour anyway — polling is simpler to operate.
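A minimal polling sketch, with the fetch function injected so the loop logic is independent of the transport (in the real pipeline it would be the inbox's `receive()` call):

```typescript
// One polling pass: fetch whatever is waiting, hand each message to a
// processor, and report how many were seen. A cron-style deployment calls
// this once per schedule tick; a long-lived process wraps it in setInterval.
type Fetch = () => Promise<{ body: string }[]>;

async function pollOnce(
  fetch: Fetch,
  handle: (body: string) => Promise<void>
): Promise<number> {
  const emails = await fetch();
  for (const email of emails) {
    await handle(email.body);
  }
  return emails.length;
}
```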
What if the receiving agent is down when Agent A sends the handoff?
Email delivers to the inbox regardless of whether the recipient is running. The message waits there until Agent B comes online and polls. This is one of email's core advantages over direct API calls or in-memory queues — delivery doesn't require both parties to be up at the same time.
Can I pass large payloads through email?
Email works well for structured metadata and summaries — contract IDs, risk flags, step outputs, references. For large binary payloads (files, embeddings, full documents), store them in S3 or equivalent and pass the reference in the email body. This is standard practice for email-based async workflows and keeps message sizes manageable.
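A sketch of the reference-passing variant, assuming the artifact has already been uploaded to object storage (the `storageUrl` and `sha256` fields are illustrative conventions between your agents, not a LobsterMail API):

```typescript
// Keep the email small: pass a pointer to the artifact plus an integrity
// hash, so the receiver can fetch and verify the payload out of band.
interface ArtifactRef {
  kind: "reference";
  storageUrl: string; // e.g. an S3 URI both agents know how to resolve
  sha256: string;     // integrity check for the fetched artifact
}

function buildReferenceBody(storageUrl: string, sha256: string): string {
  const ref: ArtifactRef = { kind: "reference", storageUrl, sha256 };
  return JSON.stringify(ref);
}
```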
Is LobsterMail free to use for this?
The free tier includes 1,000 emails/month and no credit card required. For most multi-agent pipelines in development and light production, that covers a lot. The Builder plan at $9/month adds up to 10 inboxes and 5,000 emails/month. Full pricing is on the homepage.
How do I protect against prompt injection in incoming emails?
LobsterMail automatically scores incoming emails for injection risk. Each email comes back with security metadata you can inspect before passing content to your agent. Check the risk score and decide whether to process, quarantine, or discard based on your pipeline's tolerance. The security guide covers how to wire this into your handler logic.
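One way to act on that metadata is a simple gate with two thresholds. The score field name and 0-to-1 scale here are assumptions for illustration; check the LobsterMail security docs for the actual shape of the metadata:

```typescript
// Decide what to do with an incoming email based on its injection-risk
// score before any content reaches the agent. Thresholds are tunable to
// your pipeline's tolerance.
type Decision = "process" | "quarantine" | "discard";

function gateByRisk(
  riskScore: number,
  quarantineAt = 0.5,
  discardAt = 0.9
): Decision {
  if (riskScore >= discardAt) return "discard";
  if (riskScore >= quarantineAt) return "quarantine";
  return "process";
}
```

Quarantined messages can be routed to a human-review inbox rather than dropped, preserving the audit trail the rest of the pipeline relies on.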
Is this different from Semantic Kernel's built-in state persistence?
Yes. Semantic Kernel's state management handles in-process memory across agent steps within a single run. Email handoffs solve a different problem: durable, cross-process message passing where in-memory state can't survive a restart or cross a network boundary. They complement each other rather than overlap.
Should each agent in the pipeline get its own inbox?
Yes — give each agent a dedicated inbox. It keeps routing simple, makes debugging straightforward (you can see exactly what each agent received), and prevents agents from processing each other's messages. createSmartInbox() handles collision resolution automatically, so naming them is painless.
What's the delivery latency for LobsterMail?
Typical delivery is under 30 seconds for @lobstermail.ai addresses. For pipeline steps where the inter-step wait is measured in hours or days, this is immaterial. If you need sub-second handoffs, email is the wrong transport — use an in-memory queue or the native Semantic Kernel orchestration layer instead.
Can I use a custom domain instead of @lobstermail.ai?
Yes. Custom domain support lets your agents send and receive from addresses like review-agent@yourcompany.com. See the custom domains guide for setup instructions.
Give your agent its own email. Get started with LobsterMail — it's free.


