Record real production traffic. Replay it safely at N× speed.
Catch breaking points before they hit production — without triggering real APIs, payments, or side effects. clearvoiance captures every input to your backend and replays it against a hermetic clone, with per-event DB correlation so every slow query traces back to the exact request that caused it.
Why?
Staging environments lie.
Mocks drift. Synthetic tests miss real edge cases.
clearvoiance records actual traffic and replays it in a hermetic environment so you can test against reality — not guesses.
From zero to broken-on-purpose.
Five steps. No staging environment, no synthetic scripts, no hand-written load profiles. Just real traffic and a safe place to replay it.
1. Install the SDK
   npm install @clearvoiance/node — one dep, zero framework peers pulled in.
2. Hit your endpoint
   Real request, real headers, real body. The adapter wraps it as a captured event.
3. See the traffic appear
   Events stream into the dashboard live — route, status, DB time, outbound calls.
4. Replay at 10×
   Click Replay, pick a target. Every captured event fires again at compressed time.
5. Break something safely
   Outbound calls are served from the mock pack. No real emails, no real charges, no real damage.
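To make step 2 concrete: the adapter turns each real request into a correlated event. A rough sketch of what such an event might carry — the field names and shape here are illustrative assumptions, not the real @clearvoiance/node schema:

```typescript
// Hypothetical shape of a captured event — illustrative only, not the
// actual SDK schema. Fields mirror what the dashboard shows: route,
// status, DB time, outbound calls.
interface CapturedEvent {
  id: string;              // correlation id, e.g. "evt_123"
  route: string;           // matched route, e.g. "POST /api/leads"
  status: number;          // response status code
  dbTimeMs: number;        // DB time attributed to this request
  outboundCalls: string[]; // outbound HTTP calls made while handling it
  capturedAt: number;      // epoch ms when the request was recorded
}

// Build an event from raw request data (sketch only).
function toEvent(raw: {
  id: string;
  method: string;
  path: string;
  status: number;
  dbTimeMs?: number;
  outbound?: string[];
}): CapturedEvent {
  return {
    id: raw.id,
    route: `${raw.method} ${raw.path}`,
    status: raw.status,
    dbTimeMs: raw.dbTimeMs ?? 0,
    outboundCalls: raw.outbound ?? [],
    capturedAt: Date.now(),
  };
}
```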
Built for the load tests that actually matter.
Synthetic load scripts are always wrong. Real traffic replayed at compressed time is weird in exactly the ways production will be.
Capture everything
HTTP, Socket.io, node-cron, BullMQ, outbound HTTP + fetch, Postgres queries — one SDK, one stream.
Replay at N× speed
A 1-hour capture runs in 5 minutes at 12×, with virtual users, JWT re-signing, and Starlark body mutators.
Hermetic by default
Outbound calls get served from a mock pack during replay. Zero real emails, zero real Stripe charges.
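A mock pack can be pictured as a lookup of recorded responses keyed by method and URL. A minimal sketch — the key format and class API here are assumptions for illustration, not clearvoiance internals:

```typescript
// Hypothetical mock pack: during replay, outbound calls are answered from
// recorded responses instead of hitting the real service.
type Recorded = { status: number; body: string };

class MockPack {
  private responses = new Map<string, Recorded>();

  record(method: string, url: string, res: Recorded): void {
    this.responses.set(`${method} ${url}`, res);
  }

  // Serve the recorded response for an outbound call. Falling back to a
  // 501 makes un-recorded calls visible instead of letting them go live.
  serve(method: string, url: string): Recorded {
    return (
      this.responses.get(`${method} ${url}`) ??
      { status: 501, body: "no recording for this call" }
    );
  }
}
```

Under this model, a replayed POST to a payment provider returns the captured response; nothing leaves the box.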
DB correlation
Every slow query + lock wait + deadlock ties back to the exact replay event that caused it. The killer feature.
Remote-controlled
SDKs register as monitors and sit idle. Click Start on the dashboard to record a window, Stop to finalize — zero overhead when not capturing.
Self-hostable
One Go engine + ClickHouse + MinIO + Postgres via docker-compose. Your data stays yours. Apache-2.0.
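The N×-speed replay above boils down to compressed-time scheduling: each captured event's offset from the start of the recording is divided by the speedup. A rough sketch of that arithmetic, not the engine's actual scheduler:

```typescript
// Map a captured timestamp to its dispatch time in a compressed replay.
// captureStartMs: when recording began; replayStartMs: when replay began.
function dispatchAtMs(
  capturedAtMs: number,
  captureStartMs: number,
  replayStartMs: number,
  speedup: number,
): number {
  const offset = capturedAtMs - captureStartMs;
  return replayStartMs + offset / speedup;
}

// A 1-hour capture at 12×: the last event fires 5 minutes in.
const lastEvent = dispatchAtMs(3_600_000, 0, 0, 12);
// lastEvent === 300_000 (5 minutes in ms)
```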
Drop the SDK into your backend.
A capture session opens on client.start(). The adapters wrap your framework so every inbound request + every outbound call + every DB query flows to the engine as a correlated event.
- Works with Express, Koa, Fastify, Strapi, Socket.io, node-cron, BullMQ.
- WAL drains captured events across engine restarts — zero loss.
- Header denylist redacts Authorization/Cookie by default.
import express from "express";
import { createClient } from "@clearvoiance/node";
import { captureHttp } from "@clearvoiance/node/http/express";
import { patchOutbound } from "@clearvoiance/node/outbound";
const client = createClient({
engine: { url: process.env.CLEARVOIANCE_ENGINE_URL!, apiKey: process.env.CLEARVOIANCE_API_KEY! },
session: { name: "checkout-api" },
});
await client.start();
patchOutbound(client); // record every http.request + fetch
const app = express();
app.use(captureHttp(client)); // inbound HTTP, routed + headers + body
app.listen(3000);
$ clearvoiance replay start \
--source sess_abc \
--target http://staging:3000 \
--speedup 12
→ replay started: rep_xyz
→ dispatching 42 310 events at 12× over 5m
$ clearvoiance replay results rep_xyz --db
┌───────────────────────┬──────────┬──────────┬───────────┐
│ endpoint              │ p95 (ms) │ db time  │ deadlocks │
├───────────────────────┼──────────┼──────────┼───────────┤
│ POST /api/leads       │      810 │ 46 412ms │         4 │ ← N+1 + lock wait
│ GET /api/stats        │      210 │    812ms │         0 │
│ POST /webhooks/stripe │       48 │    103ms │         0 │
└───────────────────────┴──────────┴──────────┴───────────┘
Run it at 12×. See what your DB saw.
The replay engine schedules every captured event at compressed time. The DB observer polls pg_stat_activity for queries carrying application_name = 'clv:<event_id>' and joins each slow query back to the originating request.
Output: “Under 12× load, POST /api/leads caused 4 deadlocks and 46s of DB time, dominated by an N+1 on leads_email_key. Here's the plan. Here's the captured event. Here's the reproducer.”
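The join itself amounts to string work over pg_stat_activity rows: parse the event id out of application_name, then group slow queries by it. A sketch under that convention — the helper names and row shape are hypothetical:

```typescript
// Recover the originating replay event id from a pg_stat_activity row's
// application_name, per the 'clv:<event_id>' convention.
function eventIdFromAppName(applicationName: string): string | null {
  const prefix = "clv:";
  return applicationName.startsWith(prefix)
    ? applicationName.slice(prefix.length)
    : null;
}

type SlowQuery = { applicationName: string; query: string; durationMs: number };

// Group observed slow queries by the replay event that issued them.
function correlate(rows: SlowQuery[]): Map<string, SlowQuery[]> {
  const byEvent = new Map<string, SlowQuery[]>();
  for (const row of rows) {
    const id = eventIdFromAppName(row.applicationName);
    if (id === null) continue; // not a replay connection — ignore
    const bucket = byEvent.get(id) ?? [];
    bucket.push(row);
    byEvent.set(id, bucket);
  }
  return byEvent;
}
```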
Adapters, not monkey-patches.
Each integration is a first-class subpath import. Install only what you use — framework peer deps are all optional. Non-Node SDKs (Python, Go, Ruby) coming after the OSS launch.
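Optional peers mean the SDK has to probe for what's actually installed before wiring adapters. One way autoInstrument() could plausibly work, sketched with an injected resolver so it runs without any frameworks present — the real detection logic is not shown here:

```typescript
// Hypothetical sketch of adapter detection: enable only the adapters
// whose optional framework peers resolve. The resolver is injected so
// the logic is testable without installing anything.
type IsInstalled = (pkg: string) => boolean;

function detectAdapters(isInstalled: IsInstalled): string[] {
  const candidates = [
    "express", "koa", "fastify", "strapi",
    "socket.io", "bullmq", "node-cron",
  ];
  return candidates.filter(isInstalled);
}

// In Node, the resolver might wrap require.resolve in a try/catch:
// const installed = detectAdapters((pkg) => {
//   try { require.resolve(pkg); return true; } catch { return false; }
// });
```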
HTTP
- Express
- Koa
- Fastify
- Strapi
Sockets
- Socket.io
Queues & cron
- BullMQ
- node-cron
Outbound
- http / https
- fetch (undici)
Databases
- node-postgres
- Knex
- Prisma
- Mongoose
Detection
- autoInstrument()
Ready to stop load-testing your fantasies?
Self-host in under five minutes. Or just drop the SDK in, stream to a dev engine, and watch real production behavior replay against staging.