Server
The Durable Workflow server is a standalone, language-neutral workflow orchestration service. It exposes the same durable execution engine as the PHP package over HTTP, letting you write workflows in Python, PHP, or any language that speaks HTTP.
Use the standalone server when you need:
- Polyglot workflows — Python workers executing PHP-authored workflows, or vice versa
- Microservice orchestration — orchestrate services written in different languages
- Centralized workflow runtime — multiple applications sharing one workflow engine
- Non-Laravel environments — use Durable Workflow outside Laravel
If you already run v2 embedded in a Laravel app, use the embedded-to-server migration guide to prepare type keys, deploy the server beside embedded execution, connect workers, and route only new workflow starts to the server.
Quick Start
Docker Compose
The fastest way to run the server:
# Clone the repository
git clone https://github.com/durable-workflow/server.git
cd server
# Copy environment config
cp .env.example .env
# Start the server with all dependencies
docker compose up -d
# Verify
curl http://localhost:8080/api/health
This starts:
- server — the API and worker services
- mysql — the workflow state database
- redis — cache and queue backend
- bootstrap — one-shot service that runs migrations and seeds the default namespace
Ports
| Service | Port | Purpose |
|---|---|---|
| Server API | 8080 | Control-plane and worker-protocol endpoints |
| MySQL | 3306 | Database (exposed for development convenience) |
| Redis | 6379 | Cache and queue (exposed for development convenience) |
Configuration
The server uses environment variables for configuration. Key settings:
Database
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=workflow
DB_USERNAME=workflow
DB_PASSWORD=secret
Supported: MySQL 8.0+, PostgreSQL 13+, SQLite 3.35+.
Cache and Queue
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=null
REDIS_DB=0
Cache must support atomic locks. Queue drivers: Redis, Amazon SQS, Beanstalkd, database.
Authentication
The server supports three auth modes:
Token-based (default):
WORKFLOW_SERVER_AUTH_DRIVER=token
WORKFLOW_SERVER_AUTH_TOKEN=your-secret-token-here
All requests must send `Authorization: Bearer your-secret-token-here`.
For least-privilege deployments, configure role-scoped tokens instead of one shared token:
WORKFLOW_SERVER_AUTH_DRIVER=token
WORKFLOW_SERVER_WORKER_TOKEN=worker-secret
WORKFLOW_SERVER_OPERATOR_TOKEN=operator-secret
WORKFLOW_SERVER_ADMIN_TOKEN=admin-secret
Worker tokens can register workers, poll tasks, heartbeat, and complete work. Operator tokens can start, list, signal, query, update, cancel, terminate, and observe workflows. Admin tokens can use administrative endpoints such as namespace and retention management.
HMAC signature:
WORKFLOW_SERVER_AUTH_DRIVER=signature
WORKFLOW_SERVER_SIGNATURE_KEY=your-signature-secret
Requests must include an `X-Signature` header computed as `hash_hmac('sha256', request_body, WORKFLOW_SERVER_SIGNATURE_KEY)`. The server also accepts role-scoped signature keys:
WORKFLOW_SERVER_AUTH_DRIVER=signature
WORKFLOW_SERVER_WORKER_SIGNATURE_KEY=worker-signature-secret
WORKFLOW_SERVER_OPERATOR_SIGNATURE_KEY=operator-signature-secret
WORKFLOW_SERVER_ADMIN_SIGNATURE_KEY=admin-signature-secret
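Non-PHP clients can produce the same digest with any HMAC-SHA256 implementation. A minimal Python sketch (the key and request body values here are illustrative):

```python
import hashlib
import hmac

def sign_request(body: bytes, key: str) -> str:
    """Compute the X-Signature value: hex-encoded HMAC-SHA256 of the raw
    request body, matching PHP's hash_hmac('sha256', body, key)."""
    return hmac.new(key.encode(), body, hashlib.sha256).hexdigest()

# Sign a request body with an operator-scoped key (illustrative value).
body = b'{"type":"my-workflow","input":{"key":"value"}}'
headers = {
    "Content-Type": "application/json",
    "X-Signature": sign_request(body, "operator-signature-secret"),
}
```

Sign the exact bytes you send: any re-serialization of the JSON between signing and sending will change the digest and fail verification.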
No auth (development only):
WORKFLOW_SERVER_AUTH_DRIVER=none
⚠️ Do not use none in production. All endpoints become publicly accessible.
Workflow Package
The Docker image installs the durable-workflow/workflow package. Control which version:
# Build-time arg (set in docker-compose.yml or pass to docker build)
WORKFLOW_PACKAGE_REF=v2 # branch, tag, or commit
WORKFLOW_PACKAGE_SOURCE= # custom Git remote (optional)
Retention
Configure how long completed workflows remain queryable:
WORKFLOW_DEFAULT_RETENTION_DAYS=30
After retention expires, workflows are pruned. Configure per-namespace retention via the API.
Namespaces
The bootstrap seeds a default namespace. To disable:
WORKFLOW_BOOTSTRAP_DEFAULT_NAMESPACE=false
Create namespaces via the API:
curl -X POST http://localhost:8080/api/namespaces \
-H "Authorization: Bearer $TOKEN" \
-H "X-Durable-Workflow-Control-Plane-Version: 2" \
-H "Content-Type: application/json" \
-d '{
"name": "production",
"description": "Production workflows",
"retention_days": 90
}'
Health Checks
API Health
curl http://localhost:8080/api/health
Returns 200 OK with:
{
"status": "serving",
"timestamp": "2026-04-15T12:00:00Z"
}
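A deploy script or external probe can key off that body. A small Python check using only the standard library, assuming just the documented `status` field:

```python
import json
import urllib.request

def is_serving(base_url: str) -> bool:
    """Return True when GET /api/health answers 200 with status "serving"."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/health", timeout=5) as resp:
            return resp.status == 200 and json.load(resp).get("status") == "serving"
    except (OSError, ValueError):
        return False
```

Connection failures and non-JSON bodies count as not serving, which is the safe default for a probe.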
Server Capabilities
curl http://localhost:8080/api/cluster/info \
-H "Authorization: Bearer $TOKEN"
Returns the server build version, supported SDK versions, engine capabilities, the client compatibility policy, and the independently-versioned control-plane and worker-protocol manifests:
{
"server_id": "server-1",
"version": "2.0.0",
"default_namespace": "default",
"supported_sdk_versions": {
"php": ">=1.0",
"python": ">=0.2,<1.0",
"cli": ">=0.1,<1.0"
},
"client_compatibility": {
"schema": "durable-workflow.v2.client-compatibility",
"version": 1,
"authority": "protocol_manifests",
"top_level_version_role": "informational",
"fail_closed": true
},
"capabilities": {
"workflow_tasks": true,
"activity_tasks": true,
"signals": true,
"queries": true,
"updates": true,
"schedules": true,
"child_workflow_retry_policy": true,
"child_workflow_timeouts": true,
"payload_codecs": ["avro"],
"response_compression": ["gzip", "deflate"]
},
"control_plane": {
"version": "2",
"header": "X-Durable-Workflow-Control-Plane-Version",
"request_contract": { "schema": "durable-workflow.v2.control-plane-request.contract", "version": 1, "...": "..." },
"response_contract": { "schema": "durable-workflow.v2.control-plane-response.contract", "version": 1, "...": "..." }
},
"worker_protocol": {
"version": "1.0",
"server_capabilities": {
"long_poll_timeout": 30,
"supported_workflow_task_commands": [
"complete_workflow",
"fail_workflow",
"continue_as_new",
"schedule_activity",
"start_timer",
"start_child_workflow"
],
"workflow_task_poll_request_idempotency": true,
"history_page_size_default": 500,
"history_page_size_max": 1000,
"activity_retry_policy": true,
"activity_timeouts": true,
"child_workflow_retry_policy": true,
"child_workflow_timeouts": true,
"parent_close_policy": true,
"non_retryable_failures": true,
"response_compression": ["gzip", "deflate"],
"history_compression": {
"supported_encodings": ["gzip"],
"compression_threshold": 8192
}
}
}
}
Treat `client_compatibility.authority: "protocol_manifests"` as the rule for client checks. The top-level `version` is build identity; CLI and SDK clients should fail closed when `control_plane.version`, `control_plane.request_contract`, or `worker_protocol.version` is missing or unsupported.
Key field notes for client code:
- The app version is `version`, not `server_version`.
- Workflow-task command capabilities live under `worker_protocol.server_capabilities.supported_workflow_task_commands`, not at the top of `worker_protocol`. The same nested object is echoed on every worker-plane response via the `server_capabilities` field.
- Worker command-option capabilities, including retry policies, timeout fields, parent-close policy, and non-retryable failures, are also echoed in `server_capabilities` so workers can negotiate behavior without a separate cluster-info request.
- Universal payload codecs live under `capabilities.payload_codecs`; final v2 advertises `avro` there. When the server advertises engine-specific codecs that only a PHP worker can honor, those appear under `capabilities.payload_codecs_engine_specific.<engine>` — language-neutral SDKs should ignore that object unless they opt into that engine.
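Putting the fail-closed rules together, a client check against the `/api/cluster/info` payload might look like the following sketch. The accepted version strings mirror the sample manifest above; a real SDK would pin whatever versions it actually implements:

```python
def check_server_compatibility(info: dict) -> list[str]:
    """Inspect a /api/cluster/info payload and return the reasons a client
    should refuse to proceed. Protocol manifests are authoritative; the
    top-level "version" is build identity and is deliberately not checked."""
    problems = []
    control_plane = info.get("control_plane") or {}
    if control_plane.get("version") != "2":
        problems.append("missing or unsupported control_plane.version")
    if "request_contract" not in control_plane:
        problems.append("missing control_plane.request_contract")
    worker_protocol = info.get("worker_protocol") or {}
    if worker_protocol.get("version") != "1.0":
        problems.append("missing or unsupported worker_protocol.version")
    return problems
```

An empty list means the client may proceed; any entry should abort startup rather than degrade silently.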
Connecting Workers
Workers poll the server for tasks and execute workflow code or activities. See the Worker Protocol reference for the full API contract.
PHP Workers
PHP workers use the durable-workflow/workflow package in standalone server mode:
composer require durable-workflow/workflow:^2.0@alpha
The @alpha flag is required while 2.0 is a pre-release on Packagist; drop it once 2.0.0 is tagged stable.
Configure the worker to connect to the server:
// config/workflow.php
return [
'mode' => 'server',
'server' => [
'url' => env('DURABLE_WORKFLOW_SERVER_URL', 'http://localhost:8080'),
'token' => env('DURABLE_WORKFLOW_AUTH_TOKEN'),
'namespace' => env('DURABLE_WORKFLOW_NAMESPACE', 'default'),
],
];
Run the worker:
php artisan workflow:work
Python Workers
Python workers use the durable-workflow SDK:
pip install durable-workflow
See the Python SDK guide for worker setup.
Custom Language Workers
Any language can implement a worker by:
- Registering with `POST /api/worker/register`
- Long-polling for tasks with `POST /api/worker/workflow-tasks/poll`, `POST /api/worker/activity-tasks/poll`, or `POST /api/worker/query-tasks/poll`
- Completing tasks with `POST /api/worker/workflow-tasks/{id}/complete`, `POST /api/worker/activity-tasks/{id}/complete`, or `POST /api/worker/query-tasks/{id}/complete`

All requests require:
- `Authorization: Bearer $TOKEN`
- `X-Namespace: your-namespace`
- `X-Durable-Workflow-Protocol-Version: 1.0`
The server validates that the namespace exists. Register it via `POST /api/namespaces` before directing workers or clients at it; otherwise the server returns 404 with `reason: "namespace_not_found"`.
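Sketched in Python with only the standard library, a minimal activity worker follows that shape. The request and response field names here (`worker_id`, `task`, `result`, the `task_queue` body field) are illustrative assumptions — the Worker Protocol reference defines the actual contract:

```python
import json
import urllib.request

SERVER = "http://localhost:8080"

def worker_headers(token: str, namespace: str) -> dict:
    """Headers required on every worker-protocol request."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Namespace": namespace,
        "X-Durable-Workflow-Protocol-Version": "1.0",
        "Content-Type": "application/json",
    }

def post(path: str, payload: dict, token: str, namespace: str) -> dict:
    request = urllib.request.Request(
        SERVER + path,
        data=json.dumps(payload).encode(),
        headers=worker_headers(token, namespace),
        method="POST",
    )
    # Keep the client timeout above the server's long-poll window.
    with urllib.request.urlopen(request, timeout=90) as response:
        return json.load(response)

def run_activity_worker(token: str, namespace: str, handler) -> None:
    """Register, then long-poll for activity tasks and complete them."""
    worker = post("/api/worker/register", {"task_queue": "default"}, token, namespace)
    while True:
        reply = post("/api/worker/activity-tasks/poll",
                     {"worker_id": worker.get("worker_id")}, token, namespace)
        task = reply.get("task")
        if task is None:
            continue  # long poll elapsed with no work; poll again
        result = handler(task.get("input"))
        post(f"/api/worker/activity-tasks/{task['id']}/complete",
             {"result": result}, token, namespace)
```

A production worker would also send heartbeats, back off on errors, and report failures instead of crashing out of the loop.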
See the server README for a curl-based walkthrough.
CLI
The Durable Workflow CLI provides a shell interface to the server:
# Install — Linux and macOS
curl -fsSL https://durable-workflow.com/install.sh | sh
# Install — macOS (Homebrew alternative)
brew install durable-workflow/tap/dw
# Install — Windows (PowerShell)
# irm https://durable-workflow.com/install.ps1 | iex
# Configure
export DURABLE_WORKFLOW_SERVER_URL=http://localhost:8080
export DURABLE_WORKFLOW_AUTH_TOKEN=your-token
export DURABLE_WORKFLOW_NAMESPACE=default
# Use
dw server:health
dw workflow:list
dw workflow:start --type=my-workflow --input='{"key":"value"}'
See the CLI install page for a platform-detecting installer and direct binary downloads.
Deployment
Docker
Build and run a production image:
docker build -t my-workflow-server .
docker run -d \
-p 8080:8080 \
-e DB_CONNECTION=mysql \
-e DB_HOST=your-db-host \
-e WORKFLOW_SERVER_AUTH_TOKEN=your-secret \
my-workflow-server
Run migrations before starting the API:
docker run --rm \
-e DB_CONNECTION=mysql \
-e DB_HOST=your-db-host \
my-workflow-server \
php artisan migrate --force
Kubernetes
The server is stateless and horizontally scalable. Key considerations:
- Shared cache — Use Redis or another networked cache for multi-node deployments. Long-poll wake-ups use cache-backed signals, so a shared cache ensures prompt task delivery.
- Shared queue — Use Redis, SQS, or another networked queue backend. Do not use the `sync` driver.
- Database — MySQL 8.0+, PostgreSQL 13+, or compatible. Run migrations as a Kubernetes Job before starting the API.
- Liveness probe — `GET /api/health`
- Readiness probe — `GET /api/health`
Example deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: workflow-server
spec:
replicas: 3
selector:
matchLabels:
app: workflow-server
template:
metadata:
labels:
app: workflow-server
spec:
containers:
- name: server
image: my-workflow-server:latest
ports:
- containerPort: 8080
env:
- name: DB_CONNECTION
value: mysql
- name: DB_HOST
value: mysql-service
- name: CACHE_DRIVER
value: redis
- name: REDIS_HOST
value: redis-service
- name: WORKFLOW_SERVER_AUTH_TOKEN
valueFrom:
secretKeyRef:
name: workflow-secrets
key: auth-token
livenessProbe:
httpGet:
path: /api/health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
API Reference
The server exposes three API surfaces:
Control Plane
Start, describe, signal, query, update, cancel, and terminate workflows; manage namespaces, task queues, schedules, search attributes, and workers. Every control-plane request requires X-Durable-Workflow-Control-Plane-Version: 2. Requests without it are rejected with missing_control_plane_version.
Key endpoints:
- `POST /api/workflows` — Start a workflow
- `GET /api/workflows/{id}` — Describe a workflow
- `POST /api/workflows/{id}/signal/{name}` — Send a signal
- `POST /api/workflows/{id}/query/{name}` — Execute a query
- `POST /api/workflows/{id}/update/{name}` — Execute an update
- `POST /api/workflows/{id}/cancel` — Request cancellation
- `POST /api/workflows/{id}/terminate` — Terminate immediately
- `GET /api/workflows/{id}/runs/{runId}/history` — List run history events
- `GET /api/workflows/{id}/runs/{runId}/history/export` — Export a replay bundle
- `GET /api/namespaces`, `POST /api/namespaces`, `GET|PUT /api/namespaces/{namespace}` — Namespace management
- `GET /api/workers`, `GET|DELETE /api/workers/{id}` — Worker fleet management
- `GET|POST /api/schedules`, `GET|PUT|DELETE /api/schedules/{id}`, `POST /api/schedules/{id}/{pause|resume|trigger|backfill}` — Schedule management
- `GET|POST|DELETE /api/search-attributes` — Search attribute management
- `POST /api/system/repair/pass`, `POST /api/system/activity-timeouts/pass`, `POST /api/system/retention/pass` — Operator passes
Workflow control-plane responses, including run-history listing responses, include the nested `control_plane` contract metadata that identifies the operation and response contract version. History export is intentionally not wrapped in that envelope; it returns the replay bundle unchanged, so the bundle integrity checksum and optional signature cover the exact artifact received by the client.

Validation failures return HTTP 422 with `reason: validation_failed` plus `errors` and `validation_errors`. Workflow operation routes also project that reason and validation detail into `control_plane.reason` and `control_plane.validation_errors`. Current run-targeted command routes project the URL `run_id` into the response and `control_plane.run_id`, so clients can distinguish instance-level commands from explicit selected-run commands.
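As a client-side sketch of those rules, a start-workflow call that surfaces validation detail might look like this in Python. The request body fields `type` and `input` are assumptions for illustration; the error fields follow the description above:

```python
import json
import urllib.error
import urllib.request

def control_plane_headers(token: str, namespace: str) -> dict:
    """Headers required on every control-plane request."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Namespace": namespace,
        "X-Durable-Workflow-Control-Plane-Version": "2",
        "Content-Type": "application/json",
    }

def start_workflow(base_url, token, namespace, workflow_type, workflow_input):
    request = urllib.request.Request(
        f"{base_url}/api/workflows",
        data=json.dumps({"type": workflow_type, "input": workflow_input}).encode(),
        headers=control_plane_headers(token, namespace),
        method="POST",
    )
    try:
        with urllib.request.urlopen(request, timeout=30) as response:
            return json.load(response)
    except urllib.error.HTTPError as err:
        detail = json.load(err)
        if err.code == 422 and detail.get("reason") == "validation_failed":
            # Field-level detail is also projected into
            # control_plane.validation_errors on workflow operation routes.
            raise ValueError(detail.get("validation_errors")) from err
        raise
```

Omitting the `X-Durable-Workflow-Control-Plane-Version` header makes the server reject the request with `missing_control_plane_version`, so centralizing header construction as above is worthwhile.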
Worker Protocol
Workers register, poll for tasks, heartbeat, and complete tasks. Requires X-Durable-Workflow-Protocol-Version: 1.0.
Key endpoints:
- `POST /api/worker/register` — Register a worker
- `POST /api/worker/workflow-tasks/poll` — Long-poll for workflow tasks
- `POST /api/worker/workflow-tasks/{id}/complete` — Complete workflow task
- `POST /api/worker/query-tasks/poll` — Long-poll for server-routed workflow query tasks
- `POST /api/worker/query-tasks/{id}/complete` — Complete workflow query task
- `POST /api/worker/query-tasks/{id}/fail` — Fail or reject workflow query task
- `POST /api/worker/activity-tasks/poll` — Long-poll for activity tasks
- `POST /api/worker/activity-tasks/{id}/complete` — Complete activity task
See the Worker Protocol reference for details.
Discovery (unversioned)
The only endpoints that do not require X-Durable-Workflow-Control-Plane-Version are discovery and health probes:
- `GET /api/health` — Liveness/readiness probe (no auth required)
- `GET /api/cluster/info` — Server capabilities, protocol versions, payload codecs. Clients should hit this first to discover which control-plane and worker-protocol versions the server supports.
Troubleshooting
Workers not receiving tasks
Check:
- Workers registered? `curl http://localhost:8080/api/workers -H "Authorization: Bearer $TOKEN" -H "X-Durable-Workflow-Control-Plane-Version: 2" -H "X-Namespace: default"`
- Workers polling the correct task queue?
- Workflow started with matching task queue?
- Cache backend shared across server instances?
Long-poll connections timing out immediately
Check:
- Cache driver supports atomic locks? Test with `php artisan workflow:v2:doctor --strict`
- Redis reachable from the server?
- Load balancer timeout set higher than long-poll timeout (default: 60s)?
Database connection errors
Check:
- Database host and port correct?
- Credentials valid?
- Database exists?
- Migrations run? Check with `php artisan migrate:status`
Auth failures
Check:
- `WORKFLOW_SERVER_AUTH_DRIVER` matches the client auth method?
- Token/HMAC secret matches between server and client?
- Auth headers present? `Authorization: Bearer $TOKEN` or HMAC signature headers?
Learn More
- Worker Protocol Reference — Full API contract for workers
- Embedded to Server Migration — Adopt the server from a Laravel embedded v2 app
- Python SDK — Build Python workers
- CLI — Command-line interface
- Server Repository — Source code, issues, releases