
Trigger.dev Workflows and Data Flows

Key user journeys in Trigger.dev traced through the SDK, API, run engine, workers, and dashboard


Trigger.dev moves work from the SDK through the web application, run engine, worker infrastructure, and dashboard so developers can trigger, observe, retry, and resume long-running jobs. This page traces the core execution flows so you can see how durable execution behaves from the initial trigger call to completion.

Reading Guide

Each workflow is presented as a numbered sequence showing what happens at each layer. The SDK → API → Run Engine → Worker → Dashboard pattern repeats across almost every feature.

How a Task Flows Through Trigger.dev

Before diving into specific workflows, here's the general pattern every task execution follows:

Your Application
    │
    ▼
SDK: myTask.trigger({ payload })     ← SDK call in your code
    │ HTTPS POST
    ▼
Webapp API                           ← validates, authenticates
    │
    ▼
Run Engine (EnqueueSystem)           ← creates TaskRun in DB,
    │                                   adds to Redis queue
    ▼
Run Engine (DequeueSystem)           ← fair multi-tenant pick
    │
    ▼
Supervisor                           ← provisions container
    │
    ▼
Task Container                       ← runs your code
    │ heartbeats + logs
    ▼
Webapp (RunAttemptSystem)            ← tracks progress
    │
    ▼
Completion → DB update               ← status + output persisted
    │
    ▼
ElectricSQL → Dashboard              ← real-time UI update

Workflow 1: Triggering a Task

The most common action — calling trigger() from your application code.

Developer defines a task

In your project, you define a task using the SDK:

// tasks/my-task.ts
import { task } from "@trigger.dev/sdk/v3";

export const myTask = task({
  id: "my-task",
  retry: { maxAttempts: 3 },
  run: async (payload) => {
    // Your business logic here
    return { result: "done" };
  },
});

Application triggers the task

From your API route, background job, or anywhere in your code:

import { myTask } from "./tasks/my-task";

const handle = await myTask.trigger({ userId: "abc123" });
// Returns immediately with a run handle (non-blocking)

SDK sends HTTP request to the API

The SDK serializes the payload and sends:

POST /api/v1/tasks/my-task/trigger
Headers: Authorization: Bearer tr_dev_xxx (environment API key)
Body: { "payload": { "userId": "abc123" } }
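
The request above can be modeled as a plain object the SDK assembles before sending. This is an illustrative sketch (the function name and shape are hypothetical, not the SDK's internals):

```typescript
// Sketch of the HTTP request a trigger call amounts to.
// buildTriggerRequest is a hypothetical name for illustration only.
interface TriggerRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildTriggerRequest(
  baseUrl: string,
  taskId: string,
  apiKey: string,
  payload: unknown
): TriggerRequest {
  return {
    url: `${baseUrl}/api/v1/tasks/${taskId}/trigger`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // environment API key
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ payload }),
  };
}
```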

Webapp processes the trigger

  1. API key authentication — validates the key, resolves the environment
  2. TriggerTaskService — finds the task definition for this environment
  3. Run Engine (EnqueueSystem) — creates a TaskRun record in PostgreSQL
  4. Idempotency check — if an idempotency key was provided, deduplicates
  5. Queue insertion — adds the run to the Redis queue (MarQS)
  6. Response — returns the run ID immediately (the task hasn't started yet)
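
Step 4's deduplication can be sketched with an in-memory map standing in for the database constraint (names here are illustrative, not the real service's):

```typescript
// Idempotency-key deduplication: the same key always resolves to the
// same existing run instead of creating a duplicate.
const seenKeys = new Map<string, string>(); // idempotencyKey -> runId
let nextRunId = 0;

function createRun(idempotencyKey?: string): string {
  if (idempotencyKey && seenKeys.has(idempotencyKey)) {
    // Duplicate trigger: return the run created the first time.
    return seenKeys.get(idempotencyKey)!;
  }
  const runId = `run_${++nextRunId}`;
  if (idempotencyKey) seenKeys.set(idempotencyKey, runId);
  return runId;
}
```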

Run Engine dequeues the run

The DequeueSystem uses Deficit Round-Robin to fairly pick the next run across all organizations. It checks:

  • Queue concurrency limits (e.g. max 10 concurrent runs)
  • Per-key concurrency (e.g. max 1 run per user)
  • Rate limits (e.g. max 100 runs/minute)
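
The fairness guarantee can be illustrated with a toy Deficit Round-Robin loop over per-tenant queues (a simplified sketch, not the actual MarQS implementation):

```typescript
// Toy Deficit Round-Robin: each pass grants every non-empty queue a
// quantum of credit, so a tenant with a huge backlog cannot starve others.
interface TenantQueue {
  runs: string[];
  deficit: number;
}

function drrPick(queues: TenantQueue[], quantum: number, max: number): string[] {
  const picked: string[] = [];
  while (picked.length < max && queues.some((q) => q.runs.length > 0)) {
    for (const q of queues) {
      if (q.runs.length === 0) {
        q.deficit = 0; // empty queues don't bank credit
        continue;
      }
      q.deficit += quantum;
      while (q.deficit >= 1 && q.runs.length > 0 && picked.length < max) {
        picked.push(q.runs.shift()!); // spend one credit per dequeued run
        q.deficit -= 1;
      }
    }
  }
  return picked;
}
```

With a backlog of 100 runs for tenant A and 2 for tenant B, the picks interleave rather than draining A first.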

Supervisor provisions a container

The Supervisor receives the dequeued run and:

  1. Selects the correct worker image (matching the deployed version)
  2. Starts a Docker container or Kubernetes pod
  3. Injects environment variables and the run payload
  4. Monitors the container for health via heartbeats
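
The health check in step 4 boils down to comparing each run's last heartbeat against a timeout. A minimal sketch, with illustrative names:

```typescript
// A run whose last heartbeat is older than the timeout is considered lost
// and becomes a candidate for retry or failure handling.
interface RunHealth {
  runId: string;
  lastHeartbeatMs: number;
}

function findStalledRuns(
  runs: RunHealth[],
  nowMs: number,
  timeoutMs: number
): string[] {
  return runs
    .filter((r) => nowMs - r.lastHeartbeatMs > timeoutMs)
    .map((r) => r.runId);
}
```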

Task code executes

Inside the container, the SDK runtime:

  1. Receives the run payload
  2. Executes the run function
  3. Sends periodic heartbeats to the Supervisor
  4. Streams logs via OpenTelemetry to ClickHouse
  5. On completion, sends the result back to the Webapp

Dashboard updates in real-time

ElectricSQL replicates the TaskRun status change from PostgreSQL to the browser. The dashboard shows the run transitioning through: PENDING → EXECUTING → COMPLETED_SUCCESSFULLY.

Why Return Immediately?

The trigger() call returns a run handle before the task starts executing. This is critical for web applications — your API endpoint responds to the user instantly, and the heavy work happens asynchronously in a container. The handle includes a runId you can use to check status later.
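
Checking the handle later amounts to polling until the run reaches a terminal status. A simplified synchronous sketch (the real SDK exposes async retrieval APIs; the sleep between polls is omitted here):

```typescript
// Poll a status source until the run is terminal or we give up.
type RunStatus = "PENDING" | "EXECUTING" | "COMPLETED_SUCCESSFULLY" | "FAILED";

function pollUntilTerminal(poll: () => RunStatus, maxPolls = 10): RunStatus {
  for (let i = 0; i < maxPolls; i++) {
    const status = poll();
    if (status === "COMPLETED_SUCCESSFULLY" || status === "FAILED") {
      return status;
    }
  }
  throw new Error(`run not terminal after ${maxPolls} polls`);
}
```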

Workflow 2: Local Development with trigger dev

How developers test tasks locally without deploying.

Developer starts the dev CLI

npx trigger dev

The CLI:

  1. Reads the trigger.config.ts configuration
  2. Bundles task files using esbuild
  3. Opens a WebSocket connection to the Webapp at /ws
  4. Authenticates with the development API key
  5. Registers all discovered tasks with the platform

Webapp creates a dev worker

The Webapp receives the WebSocket connection and:

  1. Creates a BackgroundWorker record (type: development)
  2. Registers each task as a BackgroundWorkerTask
  3. Creates a DevQueueConsumer that listens for runs in this environment

File change triggers hot reload

The CLI uses chokidar to watch task files. When a file changes:

  1. esbuild re-bundles the affected tasks
  2. The CLI sends updated task definitions over the WebSocket
  3. The Webapp updates the BackgroundWorkerTask records
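
A burst of saves produces many change events; coalescing them into a single rebuild batch can be sketched like this (illustrative, not the CLI's actual implementation):

```typescript
// Coalesce file-change events so repeated saves of the same file
// produce one entry in the next rebuild batch.
class ChangeBatcher {
  private pending = new Set<string>();

  record(path: string): void {
    this.pending.add(path); // duplicates collapse automatically
  }

  flush(): string[] {
    const files = [...this.pending];
    this.pending.clear();
    return files;
  }
}
```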

User triggers a task (via dashboard or API)

A run is created in the queue. The DevQueueConsumer:

  1. Dequeues the run from Redis
  2. Sends the run payload to the CLI over WebSocket
  3. The CLI executes the task function in the developer's local Node.js process

Execution completes locally

The CLI:

  1. Captures the return value
  2. Sends logs and traces to the platform
  3. Reports completion status back via WebSocket
  4. The dashboard updates in real-time

For PMs: Why Dev Mode Matters

Developers hate slow feedback loops. With trigger dev, they write a task, save the file, trigger it, and see results instantly — all without deploying to a server or waiting for Docker containers. The task runs in their local Node.js process with full debugger support. This is a major developer experience advantage over competing platforms.

Workflow 3: Deploying Tasks to Production

How task code gets from a developer's machine to production.

Developer runs deploy

npx trigger deploy

CLI builds the project

  1. Reads trigger.config.ts for build configuration
  2. Resolves build extensions (Prisma, Playwright, etc.)
  3. Bundles task code using esbuild
  4. Creates a Docker image with the bundled code + dependencies
  5. Pushes the image to the container registry (Depot for cloud, custom for self-hosted)
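
The contentHash sent at finalization can be modeled as a digest over the bundled sources, here using Node's built-in crypto module (the real build pipeline's hashing scheme may differ):

```typescript
// Deterministic content hash over bundle sources: identical inputs
// always yield the identical digest, so unchanged code is detectable.
import { createHash } from "node:crypto";

function contentHash(bundles: string[]): string {
  const hash = createHash("sha256");
  for (const source of bundles) {
    hash.update(source);
  }
  return hash.digest("hex");
}
```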

API receives deployment metadata

POST /api/v3/deployments/{deploymentId}/finalize
Body: { "imageDigest": "sha256:abc...", "contentHash": "..." }

FinalizeDeploymentService processes the deployment

  1. Creates a WorkerDeployment record with the image reference
  2. Creates a BackgroundWorker with the new version
  3. Indexes all task definitions from the deployment manifest
  4. Promotes the deployment to "current" for the environment
  5. Future runs will use this new worker version

Old workers drain gracefully

Existing running tasks continue on the old worker version (they're locked to their version). New triggers use the latest deployment. Old containers are cleaned up after their runs complete.
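
Version pinning reduces to a simple resolution rule: a run that is already locked keeps its version, everything else gets the current deployment. A sketch with illustrative names:

```typescript
// In-flight runs stay on their locked worker version; new triggers
// resolve to whatever deployment is currently promoted.
interface Run {
  id: string;
  lockedVersion?: string;
}

function resolveWorkerVersion(run: Run, currentVersion: string): string {
  return run.lockedVersion ?? currentVersion;
}
```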

Workflow 4: Durable Execution with Checkpoints

How a task survives a crash or a long pause using checkpoints.

Task reaches a wait point

During execution, the task calls:

await wait.for({ seconds: 3600 }); // Wait 1 hour

SDK signals the checkpoint

The SDK runtime:

  1. Serializes the current execution state (variables, call stack position)
  2. Creates a Checkpoint record in the database
  3. Creates a Waitpoint (type: DATETIME, resolves in 1 hour)
  4. The container is stopped to free resources

Time passes — no container running

The task is "suspended." No compute resources are consumed during the wait. The TaskRun status is WAITING_ON_WAITPOINT.

Waitpoint resolves

After 1 hour, the Run Engine:

  1. Detects the DATETIME waitpoint has passed
  2. Moves the run back to QUEUED
  3. The DequeueSystem picks it up
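
Steps 1–2 can be sketched as a sweep over due DATETIME waitpoints (illustrative names, not the Run Engine's actual code):

```typescript
// Any DATETIME waitpoint whose resolve time has passed moves its run
// back to the queue; the rest keep waiting.
interface Waitpoint {
  runId: string;
  type: "DATETIME";
  resolveAt: number; // epoch millis
}

function resolveDueWaitpoints(
  waitpoints: Waitpoint[],
  nowMs: number
): { requeued: string[]; remaining: Waitpoint[] } {
  const requeued = waitpoints
    .filter((w) => w.resolveAt <= nowMs)
    .map((w) => w.runId);
  const remaining = waitpoints.filter((w) => w.resolveAt > nowMs);
  return { requeued, remaining };
}
```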

Container restarts from checkpoint

The Supervisor starts a new container with:

  1. The same worker image
  2. The saved checkpoint state
  3. Execution resumes exactly where it left off — after the wait.for() call

Task continues and completes

The remaining code runs normally. From the task's perspective, it's as if the wait was a simple await that took an hour.

Why Checkpointing Matters

Without checkpoints, a 1-hour delay means keeping a container running and paying for it the entire time. With checkpoints, the container is stopped during the wait and restarted only when needed. For AI workflows that wait for human approval, this can save hours of compute. It's also what makes Trigger.dev "durable" — if the server crashes, the checkpoint restores state.

Workflow 5: Cron Schedule Execution

How tasks run on a recurring schedule.

Developer defines a schedule

import { schedules } from "@trigger.dev/sdk/v3";

export const dailyReport = schedules.task({
  id: "daily-report",
  cron: "0 9 * * *",       // 9 AM daily
  timezone: "America/New_York",
  run: async (payload) => {
    // payload.timestamp contains the scheduled time
    await generateAndEmailReport();
  },
});

Schedule Engine registers the cron

The @internal/schedule-engine creates:

  1. A TaskSchedule record with the cron expression
  2. A TaskScheduleInstance per environment
  3. Calculates the next execution timestamp
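
For a daily cron like "0 9 * * *", the next-execution computation in step 3 reduces to finding the next occurrence of a fixed hour. A simplified UTC-only sketch (real cron parsing and timezone handling are much richer):

```typescript
// Next occurrence of a daily fixed-hour schedule, in UTC.
// If today's slot has already passed (or is exactly now), roll to tomorrow.
function nextDailyRun(after: Date, hourUtc: number): Date {
  const next = new Date(
    Date.UTC(
      after.getUTCFullYear(),
      after.getUTCMonth(),
      after.getUTCDate(),
      hourUtc,
      0,
      0,
      0
    )
  );
  if (next.getTime() <= after.getTime()) {
    next.setUTCDate(next.getUTCDate() + 1);
  }
  return next;
}
```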

At the scheduled time

The Schedule Engine:

  1. Detects that the next execution time has passed
  2. Triggers the task with a payload containing the scheduled timestamp
  3. Calculates and stores the next execution time
  4. Deduplicates to prevent double-firing

Task executes normally

From this point, the flow is identical to Workflow 1 — the run goes through the queue, gets picked up by a worker, executes, and reports completion.

Workflow 6: Batch Triggering

How to trigger thousands of tasks efficiently in a single API call.

Application sends a batch trigger

const runs = await myTask.batchTrigger([
  { payload: { id: 1 } },
  { payload: { id: 2 } },
  // ... thousands more
]);

API creates a BatchTaskRun

The Webapp:

  1. Creates a BatchTaskRun parent record
  2. Validates all payloads
  3. Sends items to the BatchQueue for processing

BatchQueue processes items in chunks

Instead of creating thousands of TaskRun records one by one:

  1. The BatchSystem processes items in configurable chunks
  2. Bulk-inserts TaskRun records into PostgreSQL
  3. Bulk-enqueues runs into Redis
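
The chunking in steps 1–3 is the classic split-into-fixed-size-slices pattern, sketched here:

```typescript
// Split a batch into fixed-size chunks so each database round trip
// bulk-inserts many rows instead of one.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```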

Runs execute in parallel (with concurrency limits)

Runs are dequeued respecting the queue's concurrency limit. If the queue allows 10 concurrent runs and you batch-triggered 1,000, at most 10 will execute simultaneously.
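
The dequeue decision above can be sketched as simple arithmetic over the queue's limit (illustrative, not the actual DequeueSystem):

```typescript
// How many queued runs may start right now without exceeding the
// queue's concurrency limit.
function dequeueUpToLimit(
  queuedCount: number,
  runningCount: number,
  limit: number
): number {
  return Math.max(0, Math.min(queuedCount, limit - runningCount));
}
```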

Batch completion tracking

The BatchTaskRun tracks overall progress. When all child runs complete, the batch is marked as completed.

Workflow 7: Real-Time Run Monitoring

How the dashboard shows live task execution.

Task Container (executing)
    │
    ├── Heartbeats ─────────────► Supervisor ──► DB (last heartbeat time)
    │
    ├── OTLP Spans + Logs ──────► OTLP Collector ──► ClickHouse
    │
    ├── Status changes ─────────► Webapp ──► PostgreSQL
    │                                          │
    │                               ElectricSQL replication
    │                                          │
    │                                          ▼
    │                               Browser (Dashboard)
    │                               ┌──────────────────┐
    │                               │ Run: my-task     │
    │                               │ Status: ● Running│
    │                               │ Logs: streaming  │
    │                               │ Duration: 4.2s   │
    │                               └──────────────────┘
    │
    └── Completion ─────────────► Webapp ──► DB ──► Dashboard

Information Flow Summary

From → To                 Mechanism               Example
Your App → Webapp         REST API (HTTPS)        Triggering tasks, checking run status
SDK → Webapp              REST API                Task trigger, batch trigger
CLI → Webapp              WebSocket (/ws)         Dev mode: register tasks, receive runs
Webapp → Redis            MarQS queue             Enqueuing runs for execution
Redis → Supervisor        Dequeue polling         Supervisor picks up next run
Supervisor → Container    Docker/K8s API          Spawning task containers
Container → Supervisor    HTTP heartbeats         Health monitoring
Container → ClickHouse    OTLP (OpenTelemetry)    Logs, traces, metrics
PostgreSQL → Browser      ElectricSQL             Real-time dashboard sync
Webapp → Browser          Socket.io               Worker coordination, notifications
Container → Webapp        HTTP/WebSocket          Run completion, status updates

What's Next