How to Send Queued Email (Patterns That Actually Scale)
Email queuing patterns for developers: database-backed queues, Redis/BullMQ, pg-boss, and managed APIs. With code examples for Node.js, TypeScript, and Python.
Your user signs up, your server sends a welcome email inline, and your API response takes 3 seconds because the email provider was slow. Or worse — it times out and the user gets a 500 error, even though the signup was fine.
Queuing fixes this. Push the email into a queue, respond to the user immediately, and let a background worker handle the send. If it fails, the queue retries. If the provider is down, emails stack up and drain when it's back.
There are three approaches; pick based on your stack:
- Database queue (pg-boss) — If you already have Postgres. Zero extra infrastructure.
- Redis queue (BullMQ) — If you already have Redis. Fast, battle-tested.
- Managed API — If you don't want to run queues. SendPigeon handles queuing, retries, and rate limiting for you.
Why queue emails?
Sending email inline (inside your API handler) seems simple, but it breaks in predictable ways:
| Problem | What happens |
|---|---|
| Provider is slow | Your API response time spikes |
| Provider is down | User gets an error for an unrelated action |
| You need to send 1,000 emails | Your process blocks for minutes |
| A send fails | No automatic retry — the email is lost |
| Your server crashes mid-batch | Half the emails sent, no way to resume |
A queue decouples the send from the request. Your API stays fast. Emails go out reliably in the background. Failures retry automatically.
Pattern 1: Database-backed queue with pg-boss
If you already have Postgres, pg-boss gives you a job queue with no extra infrastructure. It uses Postgres SKIP LOCKED for reliable at-least-once job processing.
Setup
```bash
npm install pg-boss sendpigeon
```
Producer — enqueue an email job
```typescript
import PgBoss from "pg-boss";

const boss = new PgBoss(process.env.DATABASE_URL!);
await boss.start();

// In your API handler:
async function signupUser(email: string, name: string) {
  // Save user to database...

  // Queue the welcome email — returns immediately
  await boss.send("send-email", {
    to: email,
    subject: `Welcome, ${name}!`,
    html: `<h1>Welcome</h1><p>Thanks for signing up.</p>`,
  });

  // Respond to user without waiting for the email
  return { success: true };
}
```
Worker — process the queue
```typescript
import PgBoss from "pg-boss";
import { SendPigeon } from "sendpigeon";

const boss = new PgBoss(process.env.DATABASE_URL!);
const pigeon = new SendPigeon(process.env.SENDPIGEON_API_KEY!);
await boss.start();

await boss.work("send-email", async (job) => {
  const { to, subject, html } = job.data;

  const { error } = await pigeon.send({
    from: "hello@yourdomain.com",
    to,
    subject,
    html,
  });

  if (error) {
    throw new Error(error.message); // pg-boss retries automatically
  }
});
```
Configure retries
```typescript
await boss.send(
  "send-email",
  { to, subject, html },
  {
    retryLimit: 5,
    retryDelay: 30,        // 30 seconds between retries
    retryBackoff: true,    // Exponential backoff
    expireInSeconds: 3600, // Give up after 1 hour
  }
);
```
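With retryBackoff enabled, the wait between attempts roughly doubles each time. A quick sketch of the schedule this produces (illustrative math only; pg-boss's exact backoff formula and any jitter may differ):

```typescript
// Approximate delay before retry attempt n, for a base delay in seconds.
// Illustrative only: pg-boss's internal backoff calculation may differ.
function retryDelaySeconds(attempt: number, baseDelay: number, backoff: boolean): number {
  return backoff ? baseDelay * 2 ** (attempt - 1) : baseDelay;
}

// With retryDelay: 30 and retryBackoff: true, the five retries wait:
const delays = [1, 2, 3, 4, 5].map((n) => retryDelaySeconds(n, 30, true));
// → [30, 60, 120, 240, 480] seconds, so the last retry fires about 15 minutes in
```

Flat 30-second retries would burn all five attempts in under three minutes; backoff spreads them across a window long enough to ride out a short provider outage.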
Pros: No extra infrastructure. Transactional — enqueue the email in the same database transaction as the user signup. Jobs survive server restarts.
Cons: Postgres isn't optimized for high-throughput queuing. Fine for most apps, but at 10K+ jobs/minute, consider Redis.
Pattern 2: Redis queue with BullMQ
If you already have Redis, BullMQ is the standard choice. It's fast, well-documented, and handles complex workflows (rate limiting, priorities, delayed jobs).
Setup
```bash
npm install bullmq sendpigeon
```
Producer
```typescript
import { Queue } from "bullmq";

const emailQueue = new Queue("email", {
  connection: { host: "localhost", port: 6379 },
});

async function signupUser(email: string, name: string) {
  await emailQueue.add(
    "welcome",
    {
      to: email,
      subject: `Welcome, ${name}!`,
      html: `<h1>Welcome</h1><p>Thanks for signing up.</p>`,
    },
    {
      attempts: 5,
      backoff: { type: "exponential", delay: 10_000 },
    }
  );
  return { success: true };
}
```
Worker
```typescript
import { Worker } from "bullmq";
import { SendPigeon } from "sendpigeon";

const pigeon = new SendPigeon(process.env.SENDPIGEON_API_KEY!);

const worker = new Worker(
  "email",
  async (job) => {
    const { to, subject, html } = job.data;

    const { error } = await pigeon.send({
      from: "hello@yourdomain.com",
      to,
      subject,
      html,
    });

    if (error) {
      throw new Error(error.message);
    }
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 10, // Process 10 emails in parallel
  }
);
```
Rate limiting
BullMQ has built-in rate limiting — useful if your email provider has send limits:
```typescript
const emailQueue = new Queue("email", {
  connection: { host: "localhost", port: 6379 },
  defaultJobOptions: {
    attempts: 5,
    backoff: { type: "exponential", delay: 10_000 },
  },
});

// processEmail is the same handler shown in the worker example above
const worker = new Worker("email", processEmail, {
  connection: { host: "localhost", port: 6379 },
  limiter: {
    max: 100,       // Max 100 emails
    duration: 1000, // Per second
  },
});
```
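A limiter like this makes throughput predictable, so you can estimate how long a burst will take to drain. A back-of-the-envelope helper (my own sketch; it assumes the limiter is the only bottleneck and that sends are effectively instant):

```typescript
// Rough seconds to drain `jobs` queued emails under a { max, duration } limiter,
// assuming the limiter is the only constraint on throughput.
function drainSeconds(jobs: number, max: number, durationMs: number): number {
  return Math.ceil(jobs / max) * (durationMs / 1000);
}

drainSeconds(10_000, 100, 1000); // → 100: a 10K-email burst drains in ~100 seconds
```

If that number is too slow for your campaigns, raise the limiter toward your provider's real rate limit rather than removing it.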
Pros: Very fast. Built-in rate limiting, priorities, and delayed jobs. Great dashboard options (Bull Board, Arena).
Cons: Requires Redis. Jobs aren't in the same transaction as your database writes.
Pattern 3: Managed API with built-in queuing
If you don't want to manage queue infrastructure, use an email API that handles it for you. SendPigeon queues, retries, and rate-limits sends on its end — you just call the API.
```typescript
import { SendPigeon } from "sendpigeon";

const pigeon = new SendPigeon(process.env.SENDPIGEON_API_KEY!);

async function signupUser(email: string, name: string) {
  // SendPigeon handles queuing and retries
  const { error } = await pigeon.send({
    from: "hello@yourdomain.com",
    to: email,
    subject: `Welcome, ${name}!`,
    html: `<h1>Welcome</h1><p>Thanks for signing up.</p>`,
  });

  if (error) {
    console.error("Email failed:", error.message);
  }

  return { success: true };
}
```
This still makes an HTTP call in your handler, so it's not fully decoupled. For true background processing, combine with a simple queue:
```typescript
// Minimal approach: setTimeout for non-critical emails
async function signupUser(email: string, name: string) {
  // Respond immediately; the send runs after the handler returns.
  // Catch rejections — an unhandled one can crash a modern Node process.
  setTimeout(() => {
    pigeon
      .send({
        from: "hello@yourdomain.com",
        to: email,
        subject: `Welcome, ${name}!`,
        html: `<h1>Welcome</h1><p>Thanks for signing up.</p>`,
      })
      .catch((err) => console.error("Welcome email failed:", err));
  }, 0);

  return { success: true };
}
```
setTimeout works for fire-and-forget, but the job is lost if your server restarts. For emails that must be delivered, use a real queue (pg-boss or BullMQ).
Pros: No queue infrastructure to manage. Retries and rate limiting handled by the provider. Simplest setup.
Cons: Still an HTTP call in your handler (unless you add a lightweight queue). You're trusting the provider's queue, not your own.
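To make that tradeoff concrete, here is the retry mechanic of a "real queue" as a toy in-memory class (my own sketch). It behaves like a queue right up until the process restarts, at which point every pending job vanishes; durable storage is exactly what pg-boss and BullMQ add on top:

```typescript
type Job = { payload: unknown; attempts: number };

// A toy in-memory queue: demonstrates retry mechanics, not durability.
class TinyQueue {
  private jobs: Job[] = [];

  constructor(
    private handler: (payload: unknown) => Promise<void>,
    private maxAttempts = 3,
  ) {}

  add(payload: unknown) {
    this.jobs.push({ payload, attempts: 0 });
  }

  async drain() {
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      try {
        await this.handler(job.payload);
      } catch {
        job.attempts++;
        // Re-enqueue failed jobs; a real queue would also park exhausted
        // jobs in a dead-letter state for inspection instead of dropping them.
        if (job.attempts < this.maxAttempts) this.jobs.push(job);
      }
    }
  }
}
```

Persistence, backoff between attempts, and visibility into failed jobs are the parts worth not building yourself.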
Which pattern should you use?
| Situation | Recommendation |
|---|---|
| Already have Postgres, < 10K emails/day | pg-boss |
| Already have Redis, need rate limiting | BullMQ |
| Don't want to manage queues | Managed API (SendPigeon) |
| Low volume, non-critical emails | Inline send or setTimeout |
| Need transactional guarantees (email + DB write) | pg-boss (same transaction) |
For most apps, pg-boss is the sweet spot — no extra infrastructure, survives crashes, and handles the volumes most teams actually deal with.
Idempotency: don't send the same email twice
Queues retry failed jobs. But what if the email was sent successfully and your worker crashed before acknowledging the job? The queue retries, and the user gets a duplicate email.
Fix this by tracking which jobs have already been sent:
```typescript
import PgBoss from "pg-boss";
import { SendPigeon } from "sendpigeon";

const boss = new PgBoss(process.env.DATABASE_URL!);
const pigeon = new SendPigeon(process.env.SENDPIGEON_API_KEY!);
// `db` is your database client (a Prisma-style client is assumed here)

await boss.work("send-email", async (job) => {
  const { to, subject, html } = job.data;

  // Check if this job was already sent (e.g. after a crash-and-retry)
  const alreadySent = await db.sentEmails.findUnique({
    where: { jobId: job.id },
  });
  if (alreadySent) {
    return; // Skip duplicate
  }

  const { data, error } = await pigeon.send({
    from: "hello@yourdomain.com",
    to,
    subject,
    html,
  });

  if (error) {
    throw new Error(error.message);
  }

  // Record that this job was sent
  await db.sentEmails.create({
    data: { jobId: job.id, emailId: data.id },
  });
});
```
The key idea: record the job ID after a successful send, and check the record before sending again on retry. A crash in the instant between sending and recording can still produce one duplicate, but the window shrinks from the whole job to a single step — in practice, your user gets one email.
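The same check-then-record logic, reduced to a storage-agnostic sketch (my own illustration; an in-memory Map stands in for the `sentEmails` table, which in production must be the database so the record survives worker restarts):

```typescript
// Runs `send` at most once per jobId. The Map is a stand-in for a durable
// store — in-memory state defeats the purpose across restarts.
const sent = new Map<string, string>();

async function sendOnce(jobId: string, send: () => Promise<string>): Promise<string> {
  const existing = sent.get(jobId);
  if (existing) return existing; // duplicate delivery of the job — skip

  const emailId = await send();
  sent.set(jobId, emailId); // record only after a successful send
  return emailId;
}
```

Recording only after success is deliberate: recording first would mean a crash mid-send loses the email entirely, which is worse than a rare duplicate.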
Monitoring queued emails
A queue you can't see is a queue you can't trust. Set up monitoring for:
- Queue depth — How many emails are waiting? A growing backlog means your worker can't keep up.
- Failed jobs — Jobs in the dead-letter queue need attention.
- Processing time — How long does each send take? Spikes indicate provider issues.
- Retry rate — High retries mean something is wrong (bad addresses, auth issues, rate limits).
For BullMQ, Bull Board gives you a web dashboard. For pg-boss, query the `pgboss.job` table directly.
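As a starting point, the counts BullMQ exposes through `queue.getJobCounts()` can feed a simple health check. The field names below mirror BullMQ's job states; the thresholds are hypothetical defaults, not recommendations — tune them to your volume:

```typescript
// Subset of the shape queue.getJobCounts() resolves to.
type JobCounts = { waiting: number; active: number; failed: number; delayed: number };

// Returns human-readable alerts for the signals listed above.
// Thresholds are illustrative placeholders.
function queueAlerts(counts: JobCounts, maxBacklog = 1_000, maxFailed = 50): string[] {
  const alerts: string[] = [];
  const backlog = counts.waiting + counts.delayed;
  if (backlog > maxBacklog) {
    alerts.push(`backlog growing: ${backlog} jobs waiting`);
  }
  if (counts.failed > maxFailed) {
    alerts.push(`${counts.failed} failed jobs need attention`);
  }
  return alerts;
}
```

Run a check like this on a timer and page on its output; a backlog alert usually means you need more worker concurrency, while a failed-job alert points at bad addresses, auth, or rate limits.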
Next Steps
- Set up your sending domain: Email Authentication Setup Guide
- Avoid the spam folder: Email Deliverability Checklist
- Warm up first: How to Warm Up an Email Domain
- Pick your framework: Framework Guides for Next.js, Hono, Express, and more