Trigger.dev Workflows and Data Flows
Key user journeys in Trigger.dev traced through the SDK, API, run engine, workers, and dashboard
Trigger.dev moves task definitions and run state through the SDK, web application, run engine, worker infrastructure, and dashboard so developers can trigger, observe, retry, and resume long-running jobs. This page traces the core execution flows so you can see how durable execution behaves from the initial trigger call to completion.
Reading Guide
Each workflow is presented as a numbered sequence showing what happens at each layer. The SDK → API → Run Engine → Worker → Dashboard pattern repeats across almost every feature.
How a Task Flows Through Trigger.dev
Before diving into specific workflows, here's the general pattern every task execution follows:
```
Your Application
      │
      ▼
SDK: trigger(myTask, { payload })   ← SDK call in your code
      │ HTTPS POST
      ▼
Webapp API                          ← validates, authenticates
      │
      ▼
Run Engine (EnqueueSystem)          ← creates TaskRun in DB
      │ adds to Redis queue
      ▼
Run Engine (DequeueSystem)          ← fair multi-tenant pick
      │
      ▼
Supervisor                          ← provisions container
      │
      ▼
Task Container                      ← runs your code
      │ heartbeats + logs
      ▼
Webapp (RunAttemptSystem)           ← tracks progress
      │
      ▼
Completion → DB update              ← status + output persisted
      │
      ▼
ElectricSQL → Dashboard             ← real-time UI update
```

Workflow 1: Triggering a Task
The most common action — calling trigger() from your application code.
Developer defines a task
In your project, you define a task using the SDK:
```typescript
// tasks/my-task.ts
import { task } from "@trigger.dev/sdk/v3";

export const myTask = task({
  id: "my-task",
  retry: { maxAttempts: 3 },
  run: async (payload) => {
    // Your business logic here
    return { result: "done" };
  },
});
```

Application triggers the task
From your API route, background job, or anywhere in your code:
```typescript
import { myTask } from "./tasks/my-task";

const handle = await myTask.trigger({ userId: "abc123" });
// Returns immediately with a run handle (non-blocking)
```

SDK sends HTTP request to the API
The SDK serializes the payload and sends:
```
POST /api/v1/tasks/my-task/trigger
Headers: Authorization: Bearer tr_dev_xxx   (environment API key)
Body: { "payload": { "userId": "abc123" } }
```

Webapp processes the trigger
- API key authentication — validates the key, resolves the environment
- TriggerTaskService — finds the task definition for this environment
- Run Engine (EnqueueSystem) — creates a TaskRun record in PostgreSQL
- Idempotency check — if an idempotency key was provided, deduplicates
- Queue insertion — adds the run to the Redis queue (MarQS)
- Response — returns the run ID immediately (the task hasn't started yet)
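The idempotency check above can be modeled as a keyed lookup before insert. The sketch below is a toy in-memory version of that idea, not the actual Postgres-backed implementation — the `EnqueueSystem` class name echoes the source, but its fields and methods here are illustrative:

```typescript
// Toy model of idempotent enqueue: a repeated idempotency key returns the
// existing run instead of creating a duplicate TaskRun.
type TaskRun = { id: string; taskId: string; payload: unknown };

class EnqueueSystem {
  private runs = new Map<string, TaskRun>();            // runId -> run
  private byIdempotencyKey = new Map<string, string>(); // key -> runId
  private nextId = 1;

  enqueue(taskId: string, payload: unknown, idempotencyKey?: string): TaskRun {
    if (idempotencyKey) {
      const existingId = this.byIdempotencyKey.get(idempotencyKey);
      if (existingId) return this.runs.get(existingId)!; // dedupe: same run
    }
    const run: TaskRun = { id: `run_${this.nextId++}`, taskId, payload };
    this.runs.set(run.id, run);
    if (idempotencyKey) this.byIdempotencyKey.set(idempotencyKey, run.id);
    return run;
  }
}

const engine = new EnqueueSystem();
const first = engine.enqueue("my-task", { userId: "abc123" }, "welcome-abc123");
const second = engine.enqueue("my-task", { userId: "abc123" }, "welcome-abc123");
// first.id === second.id — the duplicate trigger was deduplicated
```

The real system persists the key alongside the run so the guarantee survives restarts; the in-memory maps here only illustrate the lookup-before-insert shape.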
Run Engine dequeues the run
The DequeueSystem uses Deficit Round-Robin to fairly pick the next run across all organizations. It checks:
- Queue concurrency limits (e.g. max 10 concurrent runs)
- Per-key concurrency (e.g. max 1 run per user)
- Rate limits (e.g. max 100 runs/minute)
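The fairness-plus-limits behavior can be sketched as rotating through tenant queues and skipping any queue at its concurrency limit. This is a simplification — the actual DequeueSystem uses Deficit Round-Robin with weighted quanta — but it captures why one tenant's backlog can't starve the others:

```typescript
// Simplified fair dequeue: round-robin across tenant queues, skipping any
// queue that is empty or already at its concurrency limit.
type Queue = { tenant: string; pending: string[]; running: number; limit: number };

function dequeueNext(
  queues: Queue[],
  cursor: number,
): { runId: string; nextCursor: number } | null {
  for (let i = 0; i < queues.length; i++) {
    const idx = (cursor + i) % queues.length;
    const q = queues[idx]!;
    if (q.pending.length > 0 && q.running < q.limit) {
      const runId = q.pending.shift()!;
      q.running++;
      return { runId, nextCursor: (idx + 1) % queues.length }; // resume after this tenant
    }
  }
  return null; // nothing eligible: every queue is empty or at its limit
}

const queues: Queue[] = [
  { tenant: "org-a", pending: ["a1", "a2"], running: 0, limit: 1 },
  { tenant: "org-b", pending: ["b1"], running: 0, limit: 10 },
];
const pick1 = dequeueNext(queues, 0)!;               // picks org-a's "a1"
const pick2 = dequeueNext(queues, pick1.nextCursor)!; // picks org-b's "b1"
// A third call returns null: "a2" is blocked by org-a's concurrency limit of 1.
```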
Supervisor provisions a container
The Supervisor receives the dequeued run and:
- Selects the correct worker image (matching the deployed version)
- Starts a Docker container or Kubernetes pod
- Injects environment variables and the run payload
- Monitors the container for health via heartbeats
Task code executes
Inside the container, the SDK runtime:
- Receives the run payload
- Executes the run function
- Sends periodic heartbeats to the Supervisor
- Streams logs via OpenTelemetry to ClickHouse
- On completion, sends the result back to the Webapp
Dashboard updates in real-time
ElectricSQL replicates the TaskRun status change from PostgreSQL to the browser. The dashboard shows the run transitioning through: PENDING → EXECUTING → COMPLETED_SUCCESSFULLY.
Why Return Immediately?
The trigger() call returns a run handle before the task starts executing. This is critical for web applications — your API endpoint responds to the user instantly, and the heavy work happens asynchronously in a container. The handle includes a runId you can use to check status later.
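One common pattern with the returned handle is to poll for the result later. The sketch below stubs the status lookup — `fetchStatus` stands in for whatever lookup you use (the SDK's run-retrieval API or the REST endpoint); only the polling loop itself is the point:

```typescript
// Poll a run handle until it reaches a terminal state. `fetchStatus` is a
// stand-in for a real status-lookup call; here it is stubbed for illustration.
type RunStatus = "PENDING" | "EXECUTING" | "COMPLETED_SUCCESSFULLY" | "FAILED";

async function pollRun(
  runId: string,
  fetchStatus: (id: string) => Promise<RunStatus>,
  intervalMs = 1000,
): Promise<RunStatus> {
  for (;;) {
    const status = await fetchStatus(runId);
    if (status === "COMPLETED_SUCCESSFULLY" || status === "FAILED") return status;
    await new Promise((r) => setTimeout(r, intervalMs)); // back off between checks
  }
}

// Stubbed status source: PENDING → EXECUTING → COMPLETED_SUCCESSFULLY
const sequence: RunStatus[] = ["PENDING", "EXECUTING", "COMPLETED_SUCCESSFULLY"];
let call = 0;
const finalStatus = await pollRun("run_123", async () => sequence[Math.min(call++, 2)]!, 1);
```

In practice you would often subscribe to run updates instead of polling, but the poll loop shows how the non-blocking handle decouples the HTTP response from task completion.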
Workflow 2: Local Development with trigger dev
How developers test tasks locally without deploying.
Developer starts the dev CLI
```
npx trigger dev
```

The CLI:
- Reads the trigger.config.ts configuration
- Bundles task files using esbuild
- Opens a WebSocket connection to the Webapp at /ws
- Authenticates with the development API key
- Registers all discovered tasks with the platform
Webapp creates a dev worker
The Webapp receives the WebSocket connection and:
- Creates a BackgroundWorker record (type: development)
- Registers each task as a BackgroundWorkerTask
- Creates a DevQueueConsumer that listens for runs in this environment
File change triggers hot reload
The CLI uses chokidar to watch task files. When a file changes:
- esbuild re-bundles the affected tasks
- The CLI sends updated task definitions over the WebSocket
- The Webapp updates the BackgroundWorkerTask records
User triggers a task (via dashboard or API)
A run is created in the queue. The DevQueueConsumer:
- Dequeues the run from Redis
- Sends the run payload to the CLI over WebSocket
- The CLI executes the task function in the developer's local Node.js process
Execution completes locally
The CLI:
- Captures the return value
- Sends logs and traces to the platform
- Reports completion status back via WebSocket
- The dashboard updates in real-time
For PMs: Why Dev Mode Matters
Developers hate slow feedback loops. With trigger dev, they write a task, save the file, trigger it, and see results instantly — all without deploying to a server or waiting for Docker containers. The task runs in their local Node.js process with full debugger support. This is a major developer experience advantage over competing platforms.
Workflow 3: Deploying Tasks to Production
How task code gets from a developer's machine to production.
Developer runs deploy
```
npx trigger deploy
```

CLI builds the project
- Reads trigger.config.ts for build configuration
- Resolves build extensions (Prisma, Playwright, etc.)
- Bundles task code using esbuild
- Creates a Docker image with the bundled code + dependencies
- Pushes the image to the container registry (Depot for cloud, custom for self-hosted)
API receives deployment metadata
```
POST /api/v3/deployments/{deploymentId}/finalize
Body: { "imageDigest": "sha256:abc...", "contentHash": "..." }
```

FinalizeDeploymentService processes the deployment
- Creates a WorkerDeployment record with the image reference
- Creates a BackgroundWorker with the new version
- Indexes all task definitions from the deployment manifest
- Promotes the deployment to "current" for the environment
- Future runs will use this new worker version
Old workers drain gracefully
Existing running tasks continue on the old worker version (they're locked to their version). New triggers use the latest deployment. Old containers are cleaned up after their runs complete.
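The version-pinning rule above can be sketched as a small resolution function — field names here (`lockedToVersion`, `current`) are illustrative, not the actual schema:

```typescript
// Sketch of worker-version resolution: in-flight runs keep the version they
// started on; new runs take the environment's "current" deployment.
type Deployment = { version: string; current: boolean };

function resolveWorkerVersion(
  run: { lockedToVersion?: string },
  deployments: Deployment[],
): string {
  if (run.lockedToVersion) return run.lockedToVersion; // drain on the old version
  const current = deployments.find((d) => d.current);
  if (!current) throw new Error("no current deployment for this environment");
  return current.version;
}

const deployments: Deployment[] = [
  { version: "20240101.1", current: false }, // old, still draining
  { version: "20240102.1", current: true },  // just promoted
];
const oldRun = resolveWorkerVersion({ lockedToVersion: "20240101.1" }, deployments);
const newRun = resolveWorkerVersion({}, deployments);
// oldRun stays on 20240101.1; newRun gets 20240102.1
```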
Workflow 4: Durable Execution with Checkpoints
How a task survives a crash or a long pause using checkpoints.
Task reaches a wait point
During execution, the task calls:
```typescript
await wait.for({ seconds: 3600 }); // Wait 1 hour
```

SDK signals the checkpoint
The SDK runtime:
- Serializes the current execution state (variables, call stack position)
- Creates a Checkpoint record in the database
- Creates a Waitpoint (type: DATETIME, resolves in 1 hour)
- The container is stopped to free resources
Time passes — no container running
The task is "suspended." No compute resources are consumed during the wait. The TaskRun status is WAITING_ON_WAITPOINT.
Waitpoint resolves
After 1 hour, the Run Engine:
- Detects the DATETIME waitpoint has passed
- Moves the run back to QUEUED
- The DequeueSystem picks it up
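The resolution sweep can be sketched as: find DATETIME waitpoints whose deadline has passed and move their blocked runs back to QUEUED. This is an illustrative in-memory model — the real Run Engine works against persisted records, and the field names here are assumptions:

```typescript
// Sketch of waitpoint resolution: due DATETIME waitpoints requeue their runs.
type Waitpoint = { runId: string; type: "DATETIME"; resolvesAt: number };
type Run = { id: string; status: "WAITING_ON_WAITPOINT" | "QUEUED" };

function resolveDueWaitpoints(
  waitpoints: Waitpoint[],
  runs: Map<string, Run>,
  now: number,
): string[] {
  const resumed: string[] = [];
  for (const wp of waitpoints) {
    if (wp.type === "DATETIME" && wp.resolvesAt <= now) {
      const run = runs.get(wp.runId);
      if (run && run.status === "WAITING_ON_WAITPOINT") {
        run.status = "QUEUED"; // the DequeueSystem will pick it up again
        resumed.push(run.id);
      }
    }
  }
  return resumed;
}

const runsById = new Map<string, Run>([
  ["run_1", { id: "run_1", status: "WAITING_ON_WAITPOINT" }],
  ["run_2", { id: "run_2", status: "WAITING_ON_WAITPOINT" }],
]);
const waitpoints: Waitpoint[] = [
  { runId: "run_1", type: "DATETIME", resolvesAt: 100 }, // due
  { runId: "run_2", type: "DATETIME", resolvesAt: 999 }, // not yet
];
const resumed = resolveDueWaitpoints(waitpoints, runsById, 200);
// Only run_1 is requeued; run_2 keeps waiting.
```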
Container restarts from checkpoint
The Supervisor starts a new container with:
- The same worker image
- The saved checkpoint state
- Execution resumes exactly where it left off — after the wait.for() call
Task continues and completes
The remaining code runs normally. From the task's perspective, it's as if the wait was a simple await that took an hour.
Why Checkpointing Matters
Without checkpoints, a 1-hour delay means keeping a container running and paying for it the entire time. With checkpoints, the container is stopped during the wait and restarted only when needed. For AI workflows that wait for human approval, this can save hours of compute. It's also what makes Trigger.dev "durable" — if the server crashes, the checkpoint restores state.
Workflow 5: Cron Schedule Execution
How tasks run on a recurring schedule.
Developer defines a schedule
```typescript
export const dailyReport = schedules.task({
  id: "daily-report",
  cron: "0 9 * * *", // 9 AM daily
  timezone: "America/New_York",
  run: async (payload) => {
    // payload.timestamp contains the scheduled time
    await generateAndEmailReport();
  },
});
```

Schedule Engine registers the cron
The @internal/schedule-engine creates:
- A TaskSchedule record with the cron expression
- A TaskScheduleInstance per environment
- Calculates the next execution timestamp
At the scheduled time
The Schedule Engine:
- Detects that the next execution time has passed
- Triggers the task with a payload containing the scheduled timestamp
- Calculates and stores the next execution time
- Deduplicates to prevent double-firing
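The fire-and-deduplicate step can be sketched as: fire only when the stored next-execution time has passed and this instance hasn't already fired for that timestamp. This is an illustrative model — the real Schedule Engine persists these fields in Postgres, and the names here are assumptions:

```typescript
// Sketch of the double-fire guard on a schedule instance.
type ScheduleInstance = { nextRunAt: number; lastFiredAt?: number };

function shouldFire(instance: ScheduleInstance, now: number): boolean {
  if (instance.nextRunAt > now) return false;                    // not due yet
  if (instance.lastFiredAt === instance.nextRunAt) return false; // already fired
  return true;
}

function fire(instance: ScheduleInstance, computeNext: (after: number) => number): number {
  const scheduledTime = instance.nextRunAt;
  instance.lastFiredAt = scheduledTime;            // dedupe marker
  instance.nextRunAt = computeNext(scheduledTime); // e.g. next 9 AM in the schedule's timezone
  return scheduledTime; // passed to the task as payload.timestamp
}

const DAY_MS = 24 * 60 * 60 * 1000;
const instance: ScheduleInstance = { nextRunAt: 1_000 };
const firstCheck = shouldFire(instance, 1_500);  // true: due and not yet fired
const firedAt = fire(instance, (t) => t + DAY_MS);
const secondCheck = shouldFire(instance, 1_500); // false: next run is tomorrow
```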
Task executes normally
From this point, the flow is identical to Workflow 1 — the run goes through the queue, gets picked up by a worker, executes, and reports completion.
Workflow 6: Batch Triggering
How to trigger thousands of tasks efficiently in a single API call.
Application sends a batch trigger
```typescript
const runs = await myTask.batchTrigger([
  { payload: { id: 1 } },
  { payload: { id: 2 } },
  // ... thousands more
]);
```

API creates a BatchTaskRun
The Webapp:
- Creates a BatchTaskRun parent record
- Validates all payloads
- Sends items to the BatchQueue for processing
BatchQueue processes items in chunks
Instead of creating thousands of TaskRun records one by one:
- The BatchSystem processes items in configurable chunks
- Bulk-inserts TaskRun records into PostgreSQL
- Bulk-enqueues runs into Redis
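The chunking step amounts to splitting the item list into fixed-size slices so each slice becomes one bulk INSERT and one bulk enqueue instead of N round trips. A minimal sketch (the chunk size of 500 is illustrative; the real BatchSystem's is configurable):

```typescript
// Split a batch into fixed-size chunks for bulk database operations.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// e.g. 2,500 batch items with a chunk size of 500 → 5 bulk operations
const items = Array.from({ length: 2500 }, (_, i) => ({ payload: { id: i } }));
const chunks = chunk(items, 500);
```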
Runs execute in parallel (with concurrency limits)
Runs are dequeued respecting the queue's concurrency limit. If the queue allows 10 concurrent runs and you batch-triggered 1,000, at most 10 will execute simultaneously.
Batch completion tracking
The BatchTaskRun tracks overall progress. When all child runs complete, the batch is marked as completed.
Workflow 7: Real-Time Run Monitoring
How the dashboard shows live task execution.
```
Task Container (executing)
      │
      ├── Heartbeats ──────────────► Supervisor ──► DB (last heartbeat time)
      │
      ├── OTLP Spans + Logs ──────► OTLP Collector ──► ClickHouse
      │
      ├── Status changes ─────────► Webapp ──► PostgreSQL
      │                                │
      │                     ElectricSQL replication
      │                                │
      │                                ▼
      │                        Browser (Dashboard)
      │                        ┌──────────────────┐
      │                        │ Run: my-task     │
      │                        │ Status: ● Running│
      │                        │ Logs: streaming  │
      │                        │ Duration: 4.2s   │
      │                        └──────────────────┘
      │
      └── Completion ─────────────► Webapp ──► DB ──► Dashboard
```

Information Flow Summary
| From | To | Mechanism | Example |
|---|---|---|---|
| Your App | Webapp | REST API (HTTPS) | Triggering tasks, checking run status |
| SDK | Webapp | REST API | Task trigger, batch trigger |
| CLI | Webapp | WebSocket (/ws) | Dev mode: register tasks, receive runs |
| Webapp | Redis | MarQS queue | Enqueuing runs for execution |
| Redis | Supervisor | Dequeue polling | Supervisor picks up next run |
| Supervisor | Container | Docker/K8s API | Spawning task containers |
| Container | Supervisor | HTTP heartbeats | Health monitoring |
| Container | ClickHouse | OTLP (OpenTelemetry) | Logs, traces, metrics |
| PostgreSQL | Browser | ElectricSQL | Real-time dashboard sync |
| Webapp | Browser | Socket.io | Worker coordination, notifications |
| Container | Webapp | HTTP/WebSocket | Run completion, status updates |