How to add email approval to CrewAI human-in-the-loop workflows

CrewAI's human-in-the-loop flag is limited to the terminal. Here's how to route approval requests through email so humans can approve agent tasks from their inbox.

9 min read
Ian Bussières, CTO & Co-founder

CrewAI lets you pause an agent mid-task and ask a human for approval. The problem: that approval channel is stdin. Your agent prints a question to the terminal and blocks until someone types a response. Fine for local development. Useless for anything running on a server, in a cron job, or anywhere a human isn't staring at the console.

The fix is straightforward. Instead of waiting for terminal input, send a structured approval email, then parse the reply (or a webhook click) to resume the crew. This turns CrewAI's human-in-the-loop from a dev-mode curiosity into something that works in production.

How to add email approval to a CrewAI human-in-the-loop workflow

To add email approval to a CrewAI human-in-the-loop workflow, follow these steps:

  1. Set human_input=True on the CrewAI Task that requires approval.
  2. Capture execution_id and task_id from the kickoff response.
  3. Compose an approval email with context and approve/reject links encoding both IDs.
  4. Send the email to the designated approver's address.
  5. Parse the inbound reply or link click via your email webhook.
  6. Call POST /resume with execution_id, task_id, is_approve, and human_feedback.
  7. Log the approval decision for your audit trail.

The rest of this article walks through each piece in detail.

What human-in-the-loop actually does in CrewAI

When you set human_input=True on a CrewAI task, the framework pauses execution after the agent produces its output and before that output is finalized. It prints the agent's work to the console and waits for typed feedback. If the human approves, the task completes. If they provide corrections, the agent incorporates the feedback and tries again.
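Enabling this takes a single flag on the task. A minimal setup might look like the following (the role, goal, and task strings are illustrative, and running it assumes an LLM is configured in your environment):

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Sales researcher",
    goal="Draft personalized cold outreach emails",
    backstory="An SDR assistant that researches prospects before writing.",
)

draft_task = Task(
    description="Draft a cold outreach email to the prospect.",
    expected_output="A short, personalized email draft.",
    agent=researcher,
    human_input=True,  # pause after output and wait for human feedback
)

crew = Crew(agents=[researcher], tasks=[draft_task])
result = crew.kickoff()  # blocks at the human_input step until feedback arrives
```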

This is the Supervisor model of HITL: the agent does the work, the human signs off. CrewAI also supports task guardrails (validation functions that programmatically check output) and the HumanLayer SDK (which adds an @require_approval decorator for tool calls). But for most teams, the core question is simpler: how do I get that approval prompt out of my terminal and into a channel humans actually check?

Email is the obvious answer. Everyone has it open. It works asynchronously. And unlike Slack (which requires bot tokens, channel permissions, and app manifests), email just needs an address and a message.

The problem with stdin approval

CrewAI's default HITL mechanism calls Python's input() under the hood. This creates two constraints that break production workflows:

The process blocks. Your crew sits idle, consuming memory and holding connections, until a human responds. If you're running crews on a backend server or inside a container, there's nobody at the keyboard. The crew hangs forever.

There's no routing. The terminal doesn't know who should approve what. In a real workflow, a financial review might go to your CFO while a content draft goes to your marketing lead. stdin has no concept of recipients, escalation, or timeouts.

The CrewAI team recognized this gap. Their enterprise API exposes a POST /resume endpoint that lets you programmatically unpause a crew after collecting feedback through any external channel. That endpoint is the key to building email-based approval.

Building the approval email

A good approval email contains three things: context about what the agent did, the agent's output for review, and a clear way to approve or reject.

Here's a minimal example of the email body your agent would compose:

<h2>Approval requested: Draft outreach email</h2>

<p>Your CrewAI agent "sales-researcher" completed the task
"draft-cold-email" and is waiting for your review.</p>

<h3>Agent output:</h3>
<blockquote>
  Hi Sarah, I noticed Acme Corp just raised a Series B...
</blockquote>

<p>
  <a href="https://your-app.com/approve?exec=abc123&task=draft-cold-email&action=approve">
    ✅ Approve
  </a>
  &nbsp;|&nbsp;
  <a href="https://your-app.com/approve?exec=abc123&task=draft-cold-email&action=reject">
    ❌ Reject
  </a>
</p>

<p>Or reply to this email with your feedback.</p>

The approve/reject links hit your own API, which then calls CrewAI's /resume endpoint. The reply-to fallback gives approvers a way to provide nuanced feedback without clicking a binary button.

A few things matter here that none of the existing CrewAI tutorials mention:

Include enough context. Don't just say "Task X needs approval." Show the agent's actual output inline. The approver shouldn't need to log into a dashboard to understand what they're approving.

Use plain text fallback. Not everyone reads HTML email. Include a text/plain MIME part that's just as actionable: "Reply APPROVE or reply with corrections."
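Building a multipart message with that fallback is straightforward with Python's standard library. A sketch (the function name and wording are illustrative):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


def build_approval_email(task_id, agent_output, approve_url, reject_url):
    """Build a multipart/alternative approval email with a plain-text fallback."""
    msg = MIMEMultipart("alternative")
    msg["Subject"] = f"Approval requested: {task_id}"

    text = (
        f"Your agent completed task {task_id}.\n\n"
        f"{agent_output}\n\n"
        "Reply APPROVE to approve, or reply with corrections.\n"
        f"Approve: {approve_url}\nReject: {reject_url}\n"
    )
    html = (
        f"<h2>Approval requested: {task_id}</h2>"
        f"<blockquote>{agent_output}</blockquote>"
        f'<p><a href="{approve_url}">Approve</a> | '
        f'<a href="{reject_url}">Reject</a></p>'
        "<p>Or reply to this email with your feedback.</p>"
    )
    # Attach the plain part first: clients render the last alternative they support.
    msg.attach(MIMEText(text, "plain"))
    msg.attach(MIMEText(html, "html"))
    return msg
```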

Set a timeout. If nobody responds within your SLA (say, 4 hours), send a reminder. If nobody responds within 24 hours, escalate to a backup approver or auto-reject. A crew that blocks for three days because someone was on vacation is worse than no HITL at all.
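A periodic job scanning your pending approvals can enforce that SLA. One minimal decision helper, assuming the 4-hour/24-hour thresholds above (names and thresholds are illustrative):

```python
from datetime import datetime, timedelta

REMIND_AFTER = timedelta(hours=4)     # send a reminder past this age
ESCALATE_AFTER = timedelta(hours=24)  # escalate or auto-reject past this age


def next_action(sent_at, now, reminded):
    """Decide what a scheduler should do with one pending approval."""
    age = now - sent_at
    if age >= ESCALATE_AFTER:
        return "escalate"  # or auto-reject via /resume with is_approve=False
    if age >= REMIND_AFTER and not reminded:
        return "remind"
    return "wait"
```

Run this on a cron or scheduler tick over every pending approval row, and record when a reminder was sent so it fires only once.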

Parsing the response to resume the crew

You have two paths for collecting the human's decision: link clicks and email replies.

Link clicks are simpler. Your approve/reject URLs point to a lightweight endpoint on your server. When clicked, it calls CrewAI's resume API:

import os

import requests  # third-party: pip install requests

# Connection details for your CrewAI deployment, read from the environment
CREWAI_BASE_URL = os.environ["CREWAI_BASE_URL"]
CREWAI_API_TOKEN = os.environ["CREWAI_API_TOKEN"]

def handle_approval_click(execution_id, task_id, action, feedback=""):
    """Resume a paused crew after an approve/reject link click."""
    response = requests.post(
        f"{CREWAI_BASE_URL}/resume",
        headers={"Authorization": f"Bearer {CREWAI_API_TOKEN}"},
        json={
            "execution_id": execution_id,
            "task_id": task_id,
            "is_approve": action == "approve",
            "human_feedback": feedback or "Approved via email link",
        },
    )
    response.raise_for_status()
    return response.json()

Email replies are more powerful because the approver can write freeform feedback. To handle these, you need an inbound email webhook that parses the reply body and extracts intent. When a reply arrives at your approval inbox:

  1. Match the email thread to an execution_id and task_id (stored when you sent the original approval request).
  2. Parse the reply body. Look for explicit signals ("APPROVE", "REJECT", "Looks good") or treat any substantive text as revision feedback.
  3. Call POST /resume with the parsed feedback.

This closes the loop entirely inside the email thread. The approver never leaves their inbox.

def handle_inbound_reply(sender, subject, body, thread_id):
    approval = lookup_pending_approval(thread_id)
    if not approval:
        return  # not an approval reply

    is_approved = any(
        word in body.upper()
        for word in ["APPROVE", "APPROVED", "LOOKS GOOD", "LGTM", "SHIP IT"]
    )

    resume_crew(
        execution_id=approval["execution_id"],
        task_id=approval["task_id"],
        is_approve=is_approved,
        human_feedback=body,
    )

    log_approval_decision(approval, is_approved, sender, body)
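The lookup_pending_approval helper assumes you persisted a thread-to-execution mapping when the original approval email went out. A minimal SQLite sketch (table and column names are illustrative):

```python
import sqlite3


def init_store(db_path=":memory:"):
    """Create the pending-approvals table (illustrative schema)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS pending_approvals (
               thread_id TEXT PRIMARY KEY,
               execution_id TEXT NOT NULL,
               task_id TEXT NOT NULL
           )"""
    )
    return conn


def save_pending(conn, thread_id, execution_id, task_id):
    """Record the mapping when the approval email is sent."""
    conn.execute(
        "INSERT INTO pending_approvals VALUES (?, ?, ?)",
        (thread_id, execution_id, task_id),
    )
    conn.commit()


def lookup_pending_approval(conn, thread_id):
    """Return the pending approval for a thread, or None if there isn't one."""
    row = conn.execute(
        "SELECT execution_id, task_id FROM pending_approvals WHERE thread_id = ?",
        (thread_id,),
    ).fetchone()
    return {"execution_id": row[0], "task_id": row[1]} if row else None
```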

Where email infrastructure matters

Here's where most tutorials hand-wave. They show the POST /resume call and skip everything about actually delivering the approval email reliably. But reliability is the whole point. If your approval email lands in spam, your crew blocks silently with no one the wiser.

Three things go wrong with naive email setups for HITL:

Deliverability. Approval emails are transactional, not marketing. They need to arrive instantly in the primary inbox. That means proper SPF, DKIM, and DMARC records on whatever domain you send from. If you're using a personal Gmail to send these, expect inconsistent delivery.

Tracking. You want to know: was the email opened? Did the approver click a link? How long did the approval sit before someone acted? Without delivery tracking, you can't set meaningful SLA escalations.

Inbound parsing. Handling reply-to-approve means you need an inbox that receives email and triggers a webhook when new messages arrive. Most email APIs are send-only. You need infrastructure that handles both directions.

If you're building agents that already use LobsterMail for email, the inbound webhook handling is built in. Your agent provisions an inbox, sends the approval email from it, and receives the reply back to the same address. The webhooks docs cover the setup, and your agent can even self-provision an approval inbox for you to point your CrewAI workflow at.

Multi-approver and escalation patterns

Some workflows need more than one person to sign off. A contract review might require both legal and finance approval. A content publish might need the author and an editor.

The pattern extends naturally:

  1. Send the approval email to multiple recipients (or send separate, role-specific emails).
  2. Track responses per approver in your state store.
  3. Define your quorum rule: all must approve, majority wins, or any-one-approves.
  4. Only call /resume once the quorum condition is met.
  5. If a timeout fires before quorum, escalate to a manager or auto-reject.

Keep the state simple. A database table with columns for execution_id, task_id, approver_email, decision, and responded_at covers most cases. When the quorum check passes, aggregate the feedback from all approvers into the human_feedback field so the agent sees everything.
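The quorum check itself is a few lines. A sketch, assuming decisions are loaded from that table into a dict of approver email to True, False, or None for "no response yet":

```python
def quorum_met(decisions, rule="all"):
    """Return True once the quorum rule is satisfied.

    decisions: {approver_email: True | False | None}
    rule: "all" (everyone approves), "majority", or "any".
    """
    votes = [d for d in decisions.values() if d is not None]
    if rule == "any":
        return any(votes)
    if rule == "majority":
        return sum(votes) > len(decisions) / 2
    # "all": every approver has responded, and every response is an approval
    return len(votes) == len(decisions) and all(votes)
```

Call /resume only when this returns True; until then, keep waiting (or let your timeout logic fire).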

Security considerations

Exposing a /resume-style webhook to the public internet means anyone with a valid execution_id and task_id could theoretically approve a task. A few mitigations:

  • Sign your approval links. Include an HMAC token in the URL that your server validates before calling /resume. This prevents link guessing.
  • Verify sender identity. For reply-based approvals, check that the reply came from an authorized approver email, not a forwarded address.
  • Use HTTPS everywhere. Approval links over HTTP leak execution IDs in plaintext.
  • Expire links. An approval link that works six months later is a liability. Set a TTL and show a "this approval has expired" page after it passes.
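Link signing and expiry can share one token. A sketch using Python's standard hmac module (the key and URL shape are assumptions):

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"replace-with-a-real-secret"  # assumption: kept server-side only


def sign_approval_link(base_url, execution_id, task_id, action, ttl_seconds=86400):
    """Build an approval URL carrying an expiry timestamp and an HMAC signature."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{execution_id}:{task_id}:{action}:{expires}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (f"{base_url}?exec={execution_id}&task={task_id}"
            f"&action={action}&exp={expires}&sig={sig}")


def verify_approval_link(execution_id, task_id, action, expires, sig):
    """Reject expired or tampered links before ever calling /resume."""
    if int(expires) < time.time():
        return False  # expired: show the "this approval has expired" page
    payload = f"{execution_id}:{task_id}:{action}:{expires}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the action, an attacker can't flip an approve link into a reject (or vice versa) without invalidating it.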

Testing locally

Before deploying, you can test the full flow on your machine:

  1. Run your CrewAI crew with human_input=True on a task.
  2. Instead of waiting for stdin, intercept the pause and send an approval email to your own address.
  3. Click the approve link (pointing to localhost) or reply to the email.
  4. Verify that your handler calls /resume and the crew continues.

For the inbound email path, you'll need a way to receive email locally. Tools like mailhog or a tunneled webhook (ngrok + your email provider's inbound routing) work for development.
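For the link-click side, the localhost endpoint can be a throwaway standard-library server. A sketch (handler name and response text are illustrative; in production this is where you'd call /resume):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class ApprovalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        action = params.get("action", ["unknown"])[0]
        exec_id = params.get("exec", ["unknown"])[0]
        # A real handler would call the /resume endpoint here;
        # locally, just echo the decision that would be sent.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"{action} recorded for {exec_id}".encode())

    def log_message(self, fmt, *args):
        pass  # silence per-request access logging during tests
```

Run it with HTTPServer(("127.0.0.1", 8000), ApprovalHandler).serve_forever() and point your email's approve link at http://localhost:8000/approve?exec=...&task=...&action=approve.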

Frequently asked questions

What does human_input=True actually do in a CrewAI task?

It pauses the task after the agent produces output and waits for human feedback via the terminal. The agent incorporates the feedback before finalizing. It's limited to stdin by default, which is why you need an external channel like email for production use.

How do I call the CrewAI /resume endpoint after a human approves via email?

Send a POST request to your CrewAI server's /resume endpoint with execution_id, task_id, is_approve (boolean), and human_feedback (string) in the JSON body. Your email webhook handler triggers this call when it receives an approval reply or link click.

What fields are required in the CrewAI resume payload?

The required fields are execution_id (from the original kickoff), task_id (the paused task), is_approve (true or false), and human_feedback (a string with the approver's comments). You can also include webhook URLs for task, step, and crew-level notifications.

Can I use task guardrails instead of human_input for approval logic?

Yes. Task guardrails are validation functions that run automatically on agent output. They're better for programmatic checks (format validation, length limits). For subjective decisions that need a human eye, human_input or HumanLayer's @require_approval decorator is more appropriate.

What is HumanLayer and how does it work with CrewAI?

HumanLayer is an SDK that adds human approval to AI agent tool calls. Its @require_approval decorator wraps any tool function so that execution pauses and routes an approval request through email, Slack, or other channels before the tool runs. It integrates with CrewAI as a tool-level gate.

How do I route approval requests to different people based on task type?

Maintain a routing map in your application that matches task IDs or task types to approver email addresses. When a task pauses, look up the appropriate approver and send the approval email to their address. This lets you route financial tasks to finance and content tasks to marketing.

What happens if no human responds to a CrewAI approval email?

By default, the crew blocks indefinitely. You need to implement your own timeout logic: schedule a reminder after N hours, escalate to a backup approver, or auto-reject and call /resume with is_approve: false after your SLA expires.

How do I parse a reply-to-approve email and call the CrewAI resume API?

Use an inbound email webhook to receive the reply. Match the email thread to a pending approval record, parse the body for approval signals ("APPROVE", "LGTM", etc.), and call POST /resume with the extracted decision. Store the thread-to-execution mapping when you send the original approval email.

Can CrewAI's HITL mechanism work in a serverless or backend environment?

Not out of the box, since human_input=True relies on terminal input. But by using the /resume API endpoint and routing approvals through email or Slack, you can run CrewAI crews on any backend. The crew pauses, your external system collects feedback, then calls /resume to continue.

How do I implement multi-approver flows in CrewAI?

Send separate approval emails to each required approver. Track individual responses in a database. Define a quorum rule (all must approve, majority, or any one). Only call /resume once the quorum condition is met. Aggregate all feedback into the human_feedback field.

How do I log and audit approval decisions in a CrewAI workflow?

Record every approval event (who approved, when, what feedback they gave, which execution and task IDs were involved) in a persistent store. Include the full email thread ID for traceability. This gives you a complete audit trail of every human decision in your agent pipeline.

What are the security risks of exposing a CrewAI /resume webhook?

Anyone with valid execution and task IDs could approve a task. Mitigate this by signing approval links with HMAC tokens, verifying sender email addresses on replies, using HTTPS, and setting link expiration times. Never expose raw IDs without authentication.

How do I test a CrewAI email approval flow locally?

Run your crew locally with human_input=True, intercept the pause to send an approval email to your own address, and point the approve/reject links to localhost. For inbound replies, use a tool like mailhog or ngrok tunneled to your local webhook handler.
