Self-hosting
The engine and dashboard are self-hosted by design. deploy/docker-compose.yml
brings up the full stack — ClickHouse, Postgres, MinIO, the engine, and the
Next.js dashboard — behind loopback by default. Every password, port, and
URL is driven by a single .env file.
One-liner setup
git clone https://github.com/charlses/clearvoiance
cd clearvoiance
cp deploy/.env.example deploy/.env
docker compose --env-file deploy/.env -f deploy/docker-compose.yml up -d --build
Default endpoints (all 127.0.0.1-bound):
- Dashboard — http://127.0.0.1:3000
- Engine REST + WS — http://127.0.0.1:9101 (/healthz, /api/v1/*, /ws, /docs)
- Engine gRPC — 127.0.0.1:9100 (the SDK endpoint)
- MinIO console — http://127.0.0.1:9001
- Postgres — 127.0.0.1:5499
- ClickHouse HTTP — 127.0.0.1:8123
First-run setup
The dashboard has a first-visit setup wizard. Open the dashboard URL
and you'll land on /setup — fill in an email and password, submit,
you're logged in. That's it. No CLI, no env-var dance.
The wizard only appears when the users table is empty. Once the
admin is created, any subsequent visit lands on /login instead.
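Not sure whether the admin has already been claimed? You can check the users table directly, using the same psql pattern as the recovery command further down:
docker compose -f deploy/docker-compose.yml exec postgres \
  psql -U clearvoiance -d clearvoiance -c "SELECT count(*) FROM users;"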
Security note on the setup wizard
Whoever fills in the form first becomes the admin. In a standard self-host that's you, but don't publish the URL before you've claimed the admin account. If the deploy goes live and the page sits there for days, anyone who finds it could take it over. The usual practice:
- Deploy behind a private hostname or a loopback tunnel (e.g. the SSH forward sketched below)
- Hit /setup yourself
- Only then expose the URL publicly
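A loopback tunnel can be as simple as an SSH port-forward from your machine to the host running the stack; the hostname here is a placeholder:
ssh -L 3000:127.0.0.1:3000 you@your-server   # forwards the dashboard to your laptop
# then open http://127.0.0.1:3000 locally and complete /setup before exposing anything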
Recovering if something goes wrong
Forgot the password, or someone else claimed admin?
# blow away users + sessions; next visit is the setup wizard again
docker compose -f deploy/docker-compose.yml exec postgres \
psql -U clearvoiance -d clearvoiance \
-c "DELETE FROM user_sessions; DELETE FROM users;"
Captured sessions, replays, and API keys are untouched.
Environment variables
Every variable has a working dev default, so cp + up -d just works
out of the box. Change the passwords before exposing anything beyond
localhost.
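One quick way to mint replacements (any random-string generator works):
openssl rand -hex 24   # run once per secret, paste the output into deploy/.env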
Core
CLEARVOIANCE_VERSION=0.0.0-dev # image tag for engine + ui
CLEARVOIANCE_API_URL=http://localhost:9101 # baked into the dashboard
CLEARVOIANCE_DOCS_URL=https://clearvoiance.vercel.app # the "Docs" link target
Both URL vars are passed to the dashboard as build-args, not runtime
env — Next.js inlines NEXT_PUBLIC_* values at build time, so changing
either requires docker compose build ui. CLEARVOIANCE_DOCS_URL is only
worth touching if you run a fork with its own docs site; otherwise
leave it on the default.
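Concretely, after editing either value in deploy/.env:
docker compose --env-file deploy/.env -f deploy/docker-compose.yml build ui
docker compose --env-file deploy/.env -f deploy/docker-compose.yml up -d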
Dashboard session cookie + CORS
Only needed when the dashboard and engine are on different subdomains (e.g. behind Traefik). On loopback everything is same-origin — leave these empty.
CLEARVOIANCE_COOKIE_DOMAIN=.example.com # parent of both subdomains
CLEARVOIANCE_COOKIE_SECURE=true # force Secure behind TLS terminators
CLEARVOIANCE_DASHBOARD_ORIGIN=https://app.example.com # CORS allow-list
The cookie is HttpOnly + SameSite=Lax + Secure (auto when the request is HTTPS). CORS is set up to allow credentials only from the listed origins — leaving the allow-list empty disables CORS entirely, which is the correct default for same-origin / SDK-only deploys.
Datastores
POSTGRES_USER=clearvoiance
POSTGRES_PASSWORD=dev
POSTGRES_DB=clearvoiance
POSTGRES_HOST_PORT=5499
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=dev
CLICKHOUSE_DB=clearvoiance
CLICKHOUSE_HTTP_PORT=8123
CLICKHOUSE_NATIVE_PORT=9000
MINIO_ROOT_USER=dev
MINIO_ROOT_PASSWORD=devdevdev
MINIO_BUCKET=clearvoiance-blobs
MINIO_REGION=us-east-1
MINIO_PATH_STYLE=true
MINIO_API_PORT=9002
MINIO_CONSOLE_PORT=9001
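The defaults above are enough to poke each store from the host and confirm it's up — a quick sanity check, assuming you haven't changed the ports or credentials:
curl http://127.0.0.1:8123/ping                                          # ClickHouse HTTP — prints "Ok."
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9002/minio/health/live   # MinIO liveness — expect 200
docker compose -f deploy/docker-compose.yml exec postgres \
  pg_isready -U clearvoiance                                             # Postgres readiness inside the container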
Services
ENGINE_GRPC_PORT=9100
ENGINE_HTTP_PORT=9101
DASHBOARD_PORT=3000
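Once the stack is up, a two-line smoke test against the defaults:
curl http://127.0.0.1:9101/healthz   # engine REST — liveness probe
curl -I http://127.0.0.1:3000        # dashboard — any 2xx/3xx means Next.js is serving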
Going beyond localhost with Traefik
Two ways to turn Traefik on. Pick whichever fits your workflow — both wire up the same routers + TLS certs.
Option A — use the override file (recommended)
deploy/docker-compose.traefik.yml is a committed override that adds
the Traefik labels and joins the external network. No edits to
docker-compose.yml required:
docker compose \
--env-file deploy/.env \
-f deploy/docker-compose.yml \
-f deploy/docker-compose.traefik.yml \
up -d --build
Option B — uncomment inline
deploy/docker-compose.yml ships with the Traefik labels commented out
on both engine: and ui:. Uncomment the networks: + labels:
blocks on each service plus the top-level networks: block at the
bottom of the file, then docker compose up -d --build.
Either way, set these in .env
TRAEFIK_NETWORK=traefik_proxy
TRAEFIK_ENTRYPOINT=websecure
TRAEFIK_CERT_RESOLVER=letsencrypt
TRAEFIK_API_HOST=api.clearvoiance.example.com
TRAEFIK_UI_HOST=app.clearvoiance.example.com
TRAEFIK_CERT_RESOLVER must match the resolver name in your Traefik
config — letsencrypt is the common default but some setups call it
le or myresolver.
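For orientation, the routers wired from these variables follow standard Traefik v2 label syntax — roughly the shape below. The router and label names here are illustrative; the committed deploy/docker-compose.traefik.yml is the source of truth.
services:
  engine:
    labels:
      # illustrative only — check the committed override file for the real labels
      - traefik.enable=true
      - traefik.docker.network=${TRAEFIK_NETWORK}
      - traefik.http.routers.cv-api.rule=Host(`${TRAEFIK_API_HOST}`)
      - traefik.http.routers.cv-api.entrypoints=${TRAEFIK_ENTRYPOINT}
      - traefik.http.routers.cv-api.tls.certresolver=${TRAEFIK_CERT_RESOLVER}
      - traefik.http.services.cv-api.loadbalancer.server.port=9101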
Make sure the external network exists
docker network create traefik_proxy # or whatever TRAEFIK_NETWORK is
If you already run Traefik on this host, reuse its network name — both engine + dashboard will attach to it automatically.
Rebuild the dashboard with the public URL
CLEARVOIANCE_API_URL bakes into the JS bundle, so changing it
requires rebuilding ui:
# in deploy/.env:
CLEARVOIANCE_API_URL=https://api.clearvoiance.example.com
docker compose \
--env-file deploy/.env \
-f deploy/docker-compose.yml \
-f deploy/docker-compose.traefik.yml \
build ui
docker compose \
--env-file deploy/.env \
-f deploy/docker-compose.yml \
-f deploy/docker-compose.traefik.yml \
up -d
(Omit the second -f if you took Option B and uncommented inline.)
Optional: DB observer
The DB observer is shipped in the engine image but not started by
default. It polls pg_stat_activity on your application's Postgres
and correlates slow queries to replay events via the application_name
tag that instrumentPg sets.
1. Uncomment the db-observer service
In deploy/docker-compose.yml, uncomment the db-observer: block near
the bottom.
2. Set the SUT Postgres DSN in .env
CLEARVOIANCE_OBSERVER_POSTGRES_DSN=postgres://readonly:pw@sut-postgres:5432/app?sslmode=disable
CLEARVOIANCE_OBSERVER_CLICKHOUSE_DSN=clickhouse://default:dev@clickhouse:9000/clearvoiance
A read-only role with SELECT on pg_stat_activity is enough. The SUT
Postgres has to be reachable from the compose network — either it's
another container, or you use host.docker.internal / an external IP.
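A minimal sketch of such a role on the SUT Postgres — the role name and password are placeholders, and pg_monitor (Postgres 10+) is what lets it read other sessions' query text in pg_stat_activity:
-- run against the SUT database, not the stack's own postgres container
CREATE ROLE readonly LOGIN PASSWORD 'pw';
GRANT pg_monitor TO readonly;  -- unmasks query text in pg_stat_activity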
3. Bring the stack up
docker compose --env-file deploy/.env -f deploy/docker-compose.yml up -d db-observer
Slow queries from the SUT now show up in the dashboard's
/replays/<id>/db tab, grouped by query fingerprint and correlated to
the replay event that triggered them.
Operational notes
Data durability
- ClickHouse, Postgres, and MinIO all mount named volumes (clickhouse_data, postgres_data, minio_data). docker compose down preserves them; down -v wipes them.
- The minio-bootstrap one-shot auto-creates the blob bucket on first boot. It's safe to rerun — mc mb -p is idempotent.
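For a portable snapshot of the session metadata beyond the volume itself, a plain pg_dump of the stack's Postgres works (-T keeps the output clean when redirecting):
docker compose -f deploy/docker-compose.yml exec -T postgres \
  pg_dump -U clearvoiance clearvoiance > clearvoiance-meta.sql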
Engine behavior without stores
If you clear CLEARVOIANCE_POSTGRES_DSN, the engine runs in ephemeral
mode: sessions are in-memory only and SDK WALs can't drain across engine
restarts. Likewise, without CLEARVOIANCE_CLICKHOUSE_DSN events are
acked but dropped on the floor. Both are dev-only modes — production
deployments must set both.
IPv6
The compose healthchecks use 127.0.0.1 rather than localhost on
purpose — localhost can resolve to ::1 first, and ClickHouse/MinIO's
v4-only binds then show up as "connection refused" despite being
healthy on v4. Sticking to 127.0.0.1 also keeps the checks working on
kernels where IPv6 is disabled entirely (common on cloud VMs).
Upgrading
Point CLEARVOIANCE_VERSION at the release you want and rebuild:
# in deploy/.env:
CLEARVOIANCE_VERSION=0.2.0
docker compose --env-file deploy/.env -f deploy/docker-compose.yml build
docker compose --env-file deploy/.env -f deploy/docker-compose.yml up -d
Schema migrations on ClickHouse / Postgres run on first boot of the new engine. Roll back by pointing at an older tag; data volumes are forward-compatible within a minor version.
What's next
- Quickstart — instrument your first service.
- Core concepts — how capture / replay / hermetic / observer fit together.