
How to set up conditional email routing with an AI agent in Make.com
Build an AI-powered email routing workflow in Make.com using ChatGPT, Router modules, and conditional filters. Step-by-step setup with common mistakes.
Make.com scenarios can watch your inbox, pull in each new message, and route it wherever you want based on rules you define. That works fine for simple cases. But static filters break the moment your emails get unpredictable. A filter matching "invoice" in the subject line won't catch the email titled "Quick follow-up" with payment terms buried three paragraphs into the body.
Conditional routing with an AI agent solves this. You place a ChatGPT module between your email trigger and your Router, let the model read and classify each message, then branch your scenario based on structured output instead of rigid keyword matching. The AI understands context. Your Router acts on categories, not string matches.
This approach handles moderate volumes with a handful of categories nicely. It also has limits that most tutorials skip. Here's how to build it and where it starts to crack.
How to set up AI-powered conditional email routing in Make.com
The full workflow requires seven steps:
- Create a Gmail or Outlook "watch email" trigger module.
- Add an OpenAI (ChatGPT) module to classify the email body and subject.
- Set the AI prompt to return a structured JSON object with a category field.
- Add a Parse JSON module to expose the category value as a variable.
- Add a Router module with one branch per expected category.
- Configure a Filter on each branch matching the expected category string.
- Attach the appropriate action module (reply, label, forward, create task) to each branch.
That's the skeleton. The sections below cover the parts that actually trip people up.
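Before wiring modules together, it helps to see the skeleton as plain logic. Here's a minimal Python sketch of one email passing through the scenario; `classify`, `actions`, and `fallback` are hypothetical stand-ins for Make.com modules, not real APIs.

```python
import json

def handle_email(email: dict, classify, actions: dict, fallback) -> str:
    """Walk one email through the pipeline: classify, parse, route, act."""
    raw = classify(email["subject"], email["body"])   # OpenAI module
    try:
        category = json.loads(raw)["category"]        # Parse JSON module
    except (json.JSONDecodeError, KeyError, TypeError):
        category = None                               # unparseable output
    if category in actions:                           # Router + per-branch Filters
        actions[category](email)                      # matched action module
        return category
    fallback(email)                                   # catch-all branch
    return "fallback"
```

A real scenario replaces each callable with a module, but the control flow is exactly this: one classification, one parse, one exact-match branch, one fallback.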
Connecting your email trigger
Make.com supports Gmail, Outlook, and IMAP as email triggers. Gmail and Outlook use OAuth connections that Make.com manages through its built-in integrations. IMAP works with any email provider but requires you to enter credentials manually.
Select "Watch Emails" for Gmail or "Watch Messages" for Outlook as your trigger module. Set the folder to Inbox and configure your polling interval. The free Make.com plan polls every 15 minutes. That's fine for testing but too slow if routing speed matters in production.
One detail people miss: if you need to route emails across multiple inboxes simultaneously, you'll need a separate scenario for each one. Make.com's email trigger modules watch one mailbox at a time. For workflows where multiple agents coordinate through email, this single-mailbox constraint gets uncomfortable fast.
Classifying emails with the ChatGPT module
The AI classification step is what replaces your Router's static filters with actual comprehension. Add an OpenAI "Create a Chat Completion" module directly after your email trigger.
Your prompt determines everything. Here's a template that produces consistent, parseable output:
Classify the following email into exactly one category:
sales_inquiry, support_request, billing, newsletter, spam.
Return ONLY a JSON object: {"category": "category_name", "confidence": 0.95}
Subject: {{1.subject}}
Body: {{1.textContent}}
When mapping the email body into your prompt, use textContent or text rather than the raw HTML body. Feeding HTML to the model wastes tokens on markup and degrades classification accuracy. You also want to define your categories as a fixed, closed list. Asking the model to "figure out what kind of email this is" without constraints returns different strings on different runs, and your Router filters downstream need exact matches to work.
If you're using GPT-4o or a newer model, enable JSON mode in the OpenAI module's advanced settings. This prevents the model from wrapping your output in markdown code fences or adding conversational commentary around the JSON object.
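Outside Make.com, the same classification step looks roughly like this with the official OpenAI Python SDK. The prompt builder mirrors the template above; the model name and the lazy SDK call are assumptions to verify against your own account.

```python
CATEGORIES = ["sales_inquiry", "support_request", "billing", "newsletter", "spam"]

def build_prompt(subject: str, body: str) -> str:
    """Reproduce the Make.com prompt template with a closed category list."""
    return (
        "Classify the following email into exactly one category: "
        + ", ".join(CATEGORIES) + ".\n"
        'Return ONLY a JSON object: {"category": "category_name", "confidence": 0.95}\n'
        f"Subject: {subject}\nBody: {body}"
    )

def classify(subject: str, body: str) -> str:
    # Imported lazily; requires `pip install openai` and an API key in the environment.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        # JSON mode: no markdown fences or commentary around the object
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": build_prompt(subject, body)}],
    )
    return resp.choices[0].message.content
```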
Tip
Keep your category list short. Five to eight categories work well. Beyond that, classification accuracy drops and your Router becomes unwieldy. If you need finer-grained sorting, classify into broad categories first, then run a second classification step on specific branches.
Parsing the output and building your Router
The OpenAI module returns a raw text string. Even when that string contains valid JSON, Make.com can't access individual fields inside it until you explicitly parse it.
Add a "Parse JSON" module (found under the Tools app) immediately after the OpenAI module. Map the AI's response text as the JSON string input. After this module runs, category and confidence become separate variables you can reference in any downstream module.
Now add a Router module. This is where people confuse Routers and Filters. A Router creates parallel branches in your scenario, and each branch runs independently. A Filter is a condition you attach to a single branch (or any connection between modules) that blocks data unless the condition is met. You need both: the Router creates the branching structure, and a Filter on each branch matches a specific category value.
For each branch, set a Filter condition: category equals sales_inquiry (or whatever category that branch handles). Then attach the appropriate action module. Route sales inquiries to your CRM, support requests to your ticketing system, billing questions to your finance team's inbox.
If you want to enforce a confidence threshold, combine conditions in the Filter: category equals support_request AND confidence is greater than 0.7. Messages below that threshold fall through to your fallback branch, which brings us to the part most tutorials neglect entirely.
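The Router-plus-Filter logic boils down to an exact string match gated by a confidence floor. A sketch, using the 0.7 threshold from above (the function name and category set are illustrative):

```python
KNOWN_CATEGORIES = {"sales_inquiry", "support_request", "billing", "newsletter", "spam"}

def pick_branch(parsed: dict, threshold: float = 0.7) -> str:
    """Exact category match AND confidence above threshold, else fallback."""
    category = parsed.get("category")
    confidence = parsed.get("confidence", 0.0)
    if category in KNOWN_CATEGORIES and confidence > threshold:
        return category
    return "fallback"
```

Note that `pick_branch({"category": "billing_question", "confidence": 0.99})` lands on fallback: a near-miss string with high confidence still fails the exact match, which is exactly how the Make.com Filters behave.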
Handling errors and fallback routes
Production failures in this workflow almost always trace back to the gap between what the AI returns and what your Filters expect. The model might return billing_question instead of billing. Your Filter requires an exact match. The email routes to nothing, no branch catches it, and nobody notices until a customer follows up days later asking why they never got a response.
The fix is a fallback branch. Add one more branch to your Router with no Filter condition at all. This branch catches every message that didn't match any specific category. Route those emails to a catch-all inbox, log them to a spreadsheet, or flag them for manual review.
The second common failure point is the OpenAI module itself. API rate limits are real. If your scenario processes a batch of 20 emails at once and the API throttles halfway through, some classifications fail or time out. Without an error handler attached to the OpenAI module, those emails vanish from your workflow entirely. Configure the error handler to send failed messages to the same fallback branch instead of halting the scenario.
Malformed JSON is the other risk. Even with JSON mode enabled, the Parse JSON module occasionally receives output it can't parse. When that happens, the module throws an error and the scenario stops unless you've wrapped it in its own error handler. Add one. Route unparseable responses to fallback so the scenario keeps running. You can inspect the failures later.
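The error-handler behavior amounts to: try to parse, and on any failure substitute a fallback classification instead of stopping. A defensive sketch of that logic (names are illustrative):

```python
import json

FALLBACK = {"category": "fallback", "confidence": 0.0}

def safe_parse(raw: str) -> dict:
    """Never raise: malformed or incomplete AI output routes to fallback."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return dict(FALLBACK)
    if not isinstance(data, dict) or "category" not in data:
        return dict(FALLBACK)
    return data
```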
Where Make.com conditional routing hits its limits
Make.com bills by operation. Every module execution in your scenario counts as one. A single email passing through trigger, OpenAI, Parse JSON, Router, Filter, and an action module consumes six operations minimum. Add error handlers and logging modules, and you're at eight or ten per email. On a plan with 10,000 operations per month, that's roughly 1,000 to 1,600 emails before you hit the ceiling.
For teams handling hundreds of emails per day, the math gets uncomfortable. The AI module runs on your own OpenAI API key at roughly $0.01 to $0.03 per classification depending on the model, adding a second per-email cost on top of your Make.com subscription.
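The capacity math is worth running against your own numbers. A back-of-the-envelope sketch; the per-email figures are assumptions picked from the ranges above:

```python
PLAN_OPS = 10_000         # monthly operation allowance on the plan
OPS_PER_EMAIL = 8         # trigger + AI + parse + router + action + error handling
AI_COST_PER_EMAIL = 0.02  # rough OpenAI cost per classification, USD (model-dependent)

emails_per_month = PLAN_OPS // OPS_PER_EMAIL
ai_cost_per_month = emails_per_month * AI_COST_PER_EMAIL

print(emails_per_month)              # emails before hitting the operations ceiling
print(f"${ai_cost_per_month:.2f}")   # API spend on top of the subscription
```

At 8 operations per email, a 10,000-operation plan caps out at 1,250 emails a month, with about $25 of API spend layered on top.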
There's also no built-in audit trail. If you need to answer "why was this email routed to billing on March 15th," you have to build logging yourself with extra modules. More operations, more cost for something that purpose-built email infrastructure handles natively.
Make.com recently added AI Agents as a native canvas feature, which can replace some Router logic with LLM decision-making. It's worth experimenting with, but early adopters have found it adds unpredictability and cost to workflows where deterministic routing would work fine.
And if your use case involves agents that need their own inboxes rather than routing within a single human mailbox, Make.com isn't the right layer. It's an automation tool sitting on top of someone's existing email. It can't provision new addresses or manage deliverability, and it wasn't built to protect AI agents from adversarial email content.
If you're at the point where your agent needs its own inbox with programmable routing baked in, reach for dedicated agent-first email infrastructure and skip the plumbing entirely.
Start with Make.com if your email volume is low, your categories are simple, and you're routing within an existing human inbox. Build your fallback routes from day one. Test with real emails before enabling the scenario schedule. When you find yourself adding modules to cover what the routing layer should handle on its own, that's your signal to evaluate purpose-built tools.
Frequently asked questions
What is the Make.com Router module and how does it enable conditional email branching?
The Router module creates parallel branches in a Make.com scenario. Each branch can have its own Filter condition and action modules. For email routing, you place the Router after your AI classification step and create one branch per email category, plus a fallback.
How do you connect ChatGPT or OpenAI to Make.com for email classification?
Add an OpenAI "Create a Chat Completion" module to your scenario and connect it with your OpenAI API key. Set the model to GPT-4o or later, enable JSON mode, and map the email subject and body into the prompt. The module returns the AI's response as a text string you can parse downstream.
What is the difference between a Make.com Router and a Filter?
A Router creates multiple parallel branches in your scenario. A Filter is a condition attached to a single connection that blocks data unless the condition passes. You typically use both: the Router to branch, and Filters on each branch to determine which data flows through.
How many conditional branches can a single Make.com Router support?
There's no hard limit on the number of branches. In practice, keeping it under 10 maintains readability. For email routing, match one branch per classification category plus one fallback branch with no Filter.
How do you parse AI-returned JSON in Make.com to feed a routing decision?
Add a "Parse JSON" module (under the Tools app) after the OpenAI module. Map the AI's response text as the JSON string input. Individual fields like category and confidence then become variables you can reference in downstream Filters and action modules.
What happens when the AI module returns an unexpected or empty classification?
If the AI returns a category string that doesn't match any of your Router's Filters, the email falls through to no branch and is silently lost. Always add a fallback branch with no Filter condition to catch unmatched messages.
How do you build a fallback route in Make.com for unclassified emails?
Add an extra branch to your Router module with no Filter condition. This branch catches every message that didn't match a specific category. Connect it to a logging action or forward the email to a review inbox so nothing gets silently dropped.
How do you trigger a Make.com email routing scenario from Gmail versus Outlook?
For Gmail, use the "Watch Emails" trigger module with an OAuth connection. For Outlook, use "Watch Messages." Both let you specify the folder and polling interval. IMAP is a third option that works with any provider using manual credentials.
What are Make.com's operation limits and how do they affect email routing at scale?
Each module execution counts as one operation. A typical AI email routing scenario uses 6-10 operations per email. On a 10,000 operations/month plan, that supports roughly 1,000-1,600 emails. High-volume workflows need higher-tier plans or a different infrastructure approach.
How does Make.com AI email routing compare to a dedicated email API at scale?
Make.com works well for low-to-moderate volume with simple categories. At scale, per-operation billing, lack of native audit logging, and single-inbox triggers create friction. Dedicated email infrastructure like LobsterMail handles inbox provisioning and routing at the infrastructure level without per-message operation costs.
Is Make.com conditional email routing reliable enough for transactional or SLA-bound emails?
For best-effort routing like marketing triage or internal sorting, it's capable. For SLA-bound or transactional email where delivery guarantees matter, the lack of built-in retry logic and the dependency on third-party AI module uptime make it a risky choice.
How do you log and audit routing decisions inside a Make.com scenario?
Make.com has no native routing audit log. You need to add explicit logging modules (Google Sheets or a database connector) to record each decision with timestamps and category values. Each logging module adds one more operation to your per-email cost.
How do you safely test a Make.com email routing workflow before going live?
Use the "Run once" button to process a single email at a time. Send test emails covering each category and edge cases like empty body, ambiguous content, foreign language, and attachments without text. Verify every message reaches the correct branch and that your fallback catches misclassifications.
What are the most common mistakes when building AI-powered conditional routing in Make.com?
Feeding HTML instead of plain text to the AI module, using open-ended classification without a fixed category list, forgetting to add a fallback branch, not handling AI module timeouts with error handlers, and trying to regex the AI output instead of using Parse JSON properly.
When should a team move from Make.com email routing to dedicated agent-first email infrastructure?
When your email volume exceeds what Make.com's operation limits handle cost-effectively, when your agents need their own provisioned inboxes rather than routing within a human mailbox, or when you need native audit logging and delivery tracking without bolting on extra modules.


