
# OpenAI Assistants API email function calling: how to wire up real email delivery

Learn how to use OpenAI Assistants API function calling to send emails, with a step-by-step guide covering tool definitions, run lifecycle, and reliable delivery.

9 min read
Ian Bussières, CTO & Co-founder

OpenAI's Assistants API lets you define custom functions that the model can call during a conversation. One of the most requested use cases is email: tell the assistant to email someone, and it figures out the recipient, subject, and body, then hands those arguments off to your code. The model never actually sends the email. Your function does.

That distinction matters more than most tutorials acknowledge. The assistant drafts the intent. Your backend executes it. And if your backend is a hastily assembled SMTP script with no authentication records, your emails will bounce, land in spam, or both. The function calling part is straightforward. The email delivery part is where people get stuck.

This guide walks through both halves: defining the email tool, handling the run lifecycle, and connecting a delivery backend that actually works.

## How function calling works in the Assistants API

The Assistants API uses a concept called "tools" to let the model interact with external systems. You define a function's name, description, and parameter schema in JSON. When the model decides it needs to call that function, it pauses the run: the run's status changes to `requires_action`, and the `required_action` field carries the function name and arguments the model wants to use.

Your code then executes the function (calling your email API, querying a database, whatever) and submits the result back. The assistant incorporates that result and continues the conversation.

This is different from the Chat Completions API, where function calls are part of the message stream. In the Assistants API, the run itself enters a waiting state. You poll for that state, handle the tool call, and submit outputs to resume. It's more structured, which makes it easier to build approval flows or logging around function execution.
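
To make that handoff concrete, here's roughly what the `required_action` payload on a paused run looks like (the ID and argument values here are illustrative, not real API output):

```typescript
// Illustrative shape of run.required_action while the run is paused.
// The id and argument values are made up; real ones come from the API.
const requiredAction = {
  type: "submit_tool_outputs",
  submit_tool_outputs: {
    tool_calls: [
      {
        id: "call_abc123",
        type: "function",
        function: {
          name: "send_email",
          // Arguments arrive as a JSON string, not a parsed object
          arguments:
            '{"to":"alex@example.com","subject":"Project update","body":"Hi Alex,..."}',
        },
      },
    ],
  },
};
```

Note that `arguments` is a JSON string you have to parse yourself; that's why the code later in this guide calls `JSON.parse` on it.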

## How to send emails with OpenAI Assistants API function calling

Here's the process broken into concrete steps:

  1. Create an Assistant with a send_email tool definition that includes to, subject, and body parameters.
  2. Start a Thread and add a user message like "Email alex@example.com about the project update."
  3. Create a Run on that thread using your assistant.
  4. Poll the Run until its status changes to requires_action.
  5. Extract the function name and arguments from required_action.submit_tool_outputs.tool_calls.
  6. Call your email delivery API with the extracted arguments.
  7. Submit the tool output back to the Run to let the assistant continue.

Each step is a separate API call. Let's look at the code.

## Defining the email tool

When you create the assistant, you pass a tools array with your function schema:

```json
{
  "name": "send_email",
  "description": "Send an email to a recipient with a subject and body",
  "parameters": {
    "type": "object",
    "properties": {
      "to": {
        "type": "string",
        "description": "Recipient email address"
      },
      "subject": {
        "type": "string",
        "description": "Email subject line"
      },
      "body": {
        "type": "string",
        "description": "Plain text email body"
      }
    },
    "required": ["to", "subject", "body"]
  }
}
```

Keep the descriptions clear. The model uses them to decide when and how to call the function. Vague descriptions produce vague arguments.

You can add multiple tools to a single assistant. If your agent needs to read emails too, define a `read_inbox` function alongside `send_email`. The model will pick the right one based on the user's message.
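
As a sketch, the tools array you pass at assistant creation time might look like this (the `read_inbox` schema is hypothetical, included only to show two tools side by side):

```typescript
// Passed as `tools` in openai.beta.assistants.create({ ... }).
// The read_inbox schema is hypothetical; only send_email comes from above.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "send_email",
      description: "Send an email to a recipient with a subject and body",
      parameters: {
        type: "object",
        properties: {
          to: { type: "string", description: "Recipient email address" },
          subject: { type: "string", description: "Email subject line" },
          body: { type: "string", description: "Plain text email body" },
        },
        required: ["to", "subject", "body"],
      },
    },
  },
  {
    type: "function" as const,
    function: {
      name: "read_inbox",
      description: "List the most recent messages in the agent's inbox",
      parameters: {
        type: "object",
        properties: {
          limit: { type: "number", description: "Maximum messages to return" },
        },
        required: [],
      },
    },
  },
];
```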

## Handling the run lifecycle

Once the user sends a message like "Send a follow-up email to dana@example.com about the Q2 report," you create a run. Here's what the polling loop looks like in practice:

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Create run
let run = await openai.beta.threads.runs.create(threadId, {
  assistant_id: assistantId,
});

// Poll until the run needs action or completes
while (run.status === "queued" || run.status === "in_progress") {
  await new Promise((r) => setTimeout(r, 1000));
  run = await openai.beta.threads.runs.retrieve(threadId, run.id);
}

if (run.status === "requires_action") {
  const toolCalls =
    run.required_action.submit_tool_outputs.tool_calls;

  const outputs = [];

  for (const call of toolCalls) {
    const args = JSON.parse(call.function.arguments);

    if (call.function.name === "send_email") {
      const result = await sendEmailViaYourAPI(
        args.to,
        args.subject,
        args.body
      );
      outputs.push({
        tool_call_id: call.id,
        output: JSON.stringify(result),
      });
    }
  }

  // Submit results back
  await openai.beta.threads.runs.submitToolOutputs(threadId, run.id, {
    tool_outputs: outputs,
  });
}
```
The `requires_action` status is the handoff point. The model has decided it wants to send an email. It's extracted the recipient, subject, and body from the conversation. Now it's waiting for you to do the actual sending and report back.

That `sendEmailViaYourAPI` function is where the real decisions happen.

## The delivery problem most tutorials skip

Here's where I see people get stuck. They build the function calling loop, test it with `console.log`, and call it done. Then they wire up a real email backend and discover that sending email reliably is its own project.

If you're using raw SMTP or a personal Gmail account, you'll run into a familiar set of problems. You need SPF and DKIM records configured correctly or recipient servers will reject your messages. You need to handle rate limits. You need to monitor bounce rates because a spike in bounces damages your sender reputation, and once that's damaged, even valid emails start hitting spam folders.
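
If you want a quick sanity check that a sending domain even publishes an SPF record, Node's DNS module can fetch its TXT records (a sketch; `domainHasSpf` makes a live DNS query, so it only tells you the record exists, not that it's correct):

```typescript
import { resolveTxt } from "node:dns/promises";

// Pure helper: TXT records come back as arrays of string chunks;
// an SPF record is any TXT record that starts with "v=spf1".
function hasSpfRecord(txtRecords: string[][]): boolean {
  return txtRecords.some((chunks) => chunks.join("").startsWith("v=spf1"));
}

// Live check for a sending domain (makes a real DNS query).
async function domainHasSpf(domain: string): Promise<boolean> {
  try {
    return hasSpfRecord(await resolveTxt(domain));
  } catch {
    return false; // NXDOMAIN or no TXT records at all
  }
}
```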

For agents that send emails autonomously (without a human reviewing each one), these problems compound. A misconfigured agent can burn through a domain's reputation in hours. We've written about [the OAuth problem with Gmail](/blog/oauth-gmail-agents-painful) and why it creates so much friction for agent-based email workflows.

The pattern that works well for agent email is to let the agent itself provision and manage its own inbox. Instead of configuring OAuth tokens and SMTP credentials, the agent creates an inbox on demand and sends through authenticated infrastructure. This is the approach behind [agent self-signup](/blog/agent-self-signup-explained), where the agent handles its own email setup without human intervention.

## Wiring up a reliable email backend

Here's what the `sendEmailViaYourAPI` function can look like when you use LobsterMail as the delivery backend:

```typescript
import { LobsterMail } from "@lobsterkit/lobstermail";

const lm = await LobsterMail.create();
const inbox = await lm.createSmartInbox({ name: "Email Assistant" });

async function sendEmailViaYourAPI(
  to: string,
  subject: string,
  body: string
) {
  const result = await inbox.send({ to, subject, text: body });
  return {
    success: true,
    messageId: result.messageId,
    from: inbox.address,
  };
}
```

The agent gets its own `@lobstermail.ai` address. SPF and DKIM are handled automatically. No DNS configuration, no credential management, no OAuth consent screens. The agent provisions the inbox itself when the code first runs.

This fits naturally into the function calling pattern: the assistant decides to send an email, your function executes through authenticated infrastructure, and the result (including the message ID for tracking) goes back to the assistant.

## Adding a human approval step

One question that comes up often: should you let an LLM send emails without any human review? For internal notifications or transactional messages, autonomous sending is usually fine. For customer-facing emails or anything with legal implications, you probably want a human in the loop.

The Assistants API actually makes this easy. When the run hits `requires_action`, you don't have to submit tool outputs immediately. You can pause, surface the email draft to a human for approval, and only call the send function after they confirm. The run will wait.

```typescript
if (call.function.name === "send_email") {
  const args = JSON.parse(call.function.arguments);

  // Surface to human for approval
  const approved = await requestHumanApproval({
    to: args.to,
    subject: args.subject,
    body: args.body,
  });

  if (approved) {
    const result = await sendEmailViaYourAPI(
      args.to,
      args.subject,
      args.body
    );
    outputs.push({
      tool_call_id: call.id,
      output: JSON.stringify(result),
    });
  } else {
    outputs.push({
      tool_call_id: call.id,
      output: JSON.stringify({ success: false, reason: "Rejected by user" }),
    });
  }
}
```

The assistant will respond differently based on whether the email was sent or rejected. No special handling needed on the model side.

## Testing locally before going live

Before connecting your function to a live email API, test with a mock that logs the arguments:

```typescript
async function sendEmailViaYourAPI(to: string, subject: string, body: string) {
  console.log("Would send:", { to, subject, body });
  return { success: true, messageId: "test-123" };
}
```

Run through a few conversations. Check that the model extracts arguments correctly. Verify that edge cases (missing subject, multiple recipients mentioned in one message) behave the way you expect. Once you're confident in the function calling logic, swap in the real delivery backend.
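
Those edge cases are easier to catch deterministically with a small validator in front of the mock (a sketch; the email regex is deliberately loose, meant to catch obvious garbage rather than every RFC 5322 corner case):

```typescript
// Minimal argument validation before handing off to delivery.
// The regex is intentionally loose; it rejects obvious garbage only.
function validateEmailArgs(args: { to?: string; subject?: string; body?: string }) {
  const errors: string[] = [];
  if (!args.to || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(args.to)) {
    errors.push("invalid or missing recipient");
  }
  if (!args.subject?.trim()) errors.push("missing subject");
  if (!args.body?.trim()) errors.push("missing body");
  return { ok: errors.length === 0, errors };
}
```

If validation fails, return the errors as the tool output instead of sending; the assistant can then ask the user to fill in what's missing.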

## Common mistakes to avoid

**Forgetting to submit tool outputs.** If the run enters `requires_action` and you never submit outputs, it'll hang until the run expires. Always handle every tool call in the array, even if your function errors out. Return an error message as the output so the assistant can respond gracefully.
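
One way to guarantee an output for every tool call is to wrap each one so a failure still yields an entry (a sketch; `send` stands in for whatever delivery function you use):

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } };
type Sender = (to: string, subject: string, body: string) => Promise<unknown>;

// Always returns an output entry, even when parsing or sending fails,
// so the run can resume instead of hanging until it expires.
async function safeToolOutput(call: ToolCall, send: Sender) {
  try {
    const args = JSON.parse(call.function.arguments);
    const result = await send(args.to, args.subject, args.body);
    return { tool_call_id: call.id, output: JSON.stringify(result) };
  } catch (err) {
    return {
      tool_call_id: call.id,
      output: JSON.stringify({ success: false, error: String(err) }),
    };
  }
}
```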

**Overly complex parameter schemas.** Start with `to`, `subject`, and `body`. You can add CC, BCC, attachments, and an HTML body later. The model handles simple schemas more reliably.

**No error handling in the delivery function.** If your email API returns an error, pass that error back as the tool output. The assistant can then tell the user "I wasn't able to send that email" instead of silently failing.

**Ignoring the Assistants API deprecation timeline.** OpenAI has deprecated the Assistants API in favor of the Responses API, with a shutdown date of August 26, 2026. If you're building something new, consider whether the Responses API is a better starting point. The function calling concepts are similar; the execution model differs.

> **Info:** OpenAI plans to shut down the Assistants API on August 26, 2026. The Responses API supports similar function calling patterns. If you're starting fresh, evaluate both before committing.

The function calling mechanics are the easy part of building an email assistant. The hard part is reliable delivery, sender authentication, and reputation management. Get the infrastructure right first, then the assistant layer works exactly as the tutorials promise.


## Frequently asked questions

### What is function calling in the OpenAI Assistants API and how does it differ from Chat Completions?

In the Assistants API, function calls pause the entire run and wait for you to submit tool outputs before continuing. In Chat Completions, function calls are part of the message stream and you handle them inline. The Assistants API approach is more structured and easier to build approval flows around.

### How do I define a send_email function as a tool in the OpenAI Assistants API?

Pass a tools array when creating the assistant, with a function object containing name, description, and a JSON Schema parameters object. For email, define to, subject, and body as required string properties.

### What does the requires_action status mean in an OpenAI Assistant run?

It means the model wants to call one of your defined functions and is waiting for your code to execute it. You extract the function arguments from required_action.submit_tool_outputs.tool_calls, run your function, and submit the results back to resume the run.

### Can OpenAI Assistants send emails automatically without user confirmation?

The model itself never sends emails. It requests a function call, and your code decides whether to execute it. You can send automatically or add a human approval step before calling your email API.

### What email delivery service works best as a backend for an AI email assistant?

You need a service that handles SPF, DKIM, and deliverability automatically. LobsterMail is built for this use case, letting agents provision their own inboxes and send through authenticated infrastructure with no manual DNS setup. You can get started for free.

### How do I ensure emails sent by an AI assistant don't land in spam?

Use a delivery backend with proper SPF and DKIM authentication. Avoid sending from unverified personal accounts or raw SMTP. Monitor bounce rates and keep your sending volume consistent. Sudden spikes in volume from a new domain trigger spam filters.

### What are the security risks of allowing an LLM to autonomously send emails?

The main risks are prompt injection (where a malicious input tricks the model into sending unintended emails), data leakage (the model including sensitive context in the email body), and reputation damage from sending to invalid addresses. Add input validation, rate limits, and consider human approval for sensitive emails.

### How do I add multiple tools like send_email and read_email to one assistant?

Include multiple function objects in the tools array when creating the assistant. The model will choose the appropriate function based on the user's message. Each function needs its own name, description, and parameter schema.

### How do I submit tool outputs back to the assistant after calling my function?

Use openai.beta.threads.runs.submitToolOutputs() with the thread ID, run ID, and an array of tool_outputs. Each output needs the tool_call_id from the original request and an output string (typically JSON) with your function's result.

### Is the OpenAI Assistants API being deprecated?

Yes. OpenAI has announced the Assistants API will shut down on August 26, 2026. The Responses API is the recommended replacement and supports similar function calling patterns. If you're starting a new project, evaluate the Responses API first.

### How do I test my email function locally before connecting it to a live API?

Replace your email delivery function with a mock that logs the arguments and returns a fake success response. Run several conversations to verify the model extracts to, subject, and body correctly before swapping in real delivery.

### What JSON schema format should I use for email function parameters?

Use a standard JSON Schema object with type: "object" and properties for each parameter. For a basic email function, define to, subject, and body as strings with clear descriptions, and list all three in the required array.

### What's the difference between OpenAI function calling and tool use?

They refer to the same concept. "Function calling" is the original term from the Chat Completions API. "Tool use" is the broader term used in the Assistants API, where functions are one type of tool alongside built-in tools like Code Interpreter and File Search.
