Documentation

InariWatch Docs

Everything you need to set up InariWatch — web dashboard, local CLI, integrations, and AI.

Web dashboard

The web dashboard is the fastest way to get started — no install, no card required. Sign up, connect your first integration, and InariWatch starts monitoring in minutes.

  1. Create an account
     Go to inariwatch.com/register and sign up with GitHub or email.
  2. Create a project
     A project groups your integrations and alerts. Give it the name of your app or service.
  3. Connect an integration
     Go to Integrations and connect GitHub, Vercel, or Sentry. See the integration guides below for exactly which token to use.
  4. (Optional) Add your own AI key for auto-fix
     AI analysis works out of the box. To unlock code remediation, chat, and post-mortems, go to Settings → AI analysis and add your key. See AI setup for supported providers.
Note: The InariWatch Cloud Dashboard polls connected services every 5 minutes (uptime checks every 1 minute), so most issues surface within minutes of appearing.

Local CLI

The CLI runs entirely on your machine — no account needed, data stays local. It's the best option if you prefer a terminal workflow or want zero cloud dependency.

Install
curl -fsSL https://get.inariwatch.com | sh
  1. Create a project
     inariwatch init — walks you through creating a local project interactively.
  2. Add an integration
     inariwatch add github — prompts for your token and owner. Repeat for vercel, sentry, etc.
  3. (Optional) Set an AI key
     inariwatch config --ai-key sk-ant-... — enables AI correlation and auto-remediation in the watch loop.
  4. Start watching
     inariwatch watch — polls every 60s, correlates events, and sends Telegram alerts if configured.

CLI — Installation

The CLI is a single Rust binary with no runtime dependencies.

Linux / macOS
curl -fsSL https://get.inariwatch.com | sh
Windows (PowerShell)
irm https://get.inariwatch.com/install.ps1 | iex
Build from source
git clone https://github.com/orbita-pos/inariwatch
cd inariwatch/cli
cargo build --release
# binary at: ./target/release/inariwatch

After installing, run inariwatch --help to confirm it works. On Linux/macOS the binary is placed in ~/.local/bin/inariwatch. On Windows it installs to %USERPROFILE%\.inariwatch\bin and is added to your user PATH automatically.

CLI — Commands

Command | Description
inariwatch init | Create a new local project (interactive)
inariwatch add github | Add GitHub integration — prompts for token + owner + repos
inariwatch add vercel | Add Vercel integration — prompts for token + team ID
inariwatch add sentry | Add Sentry integration — prompts for auth token + org slug
inariwatch add git | Add local git integration (no token needed)
inariwatch add uptime | Add uptime monitoring — prompts for URL + optional threshold
inariwatch add cron | Add cron scheduler — prompts for base URL + secret
inariwatch connect telegram | Link a Telegram bot for notifications
inariwatch watch | Main loop — polls every 60s, sends alerts, runs AI correlation
inariwatch status | Show integration health and last poll times
inariwatch logs | Show recent alerts from the local SQLite database
inariwatch config --ai-key <key> | Set AI key (Claude, OpenAI, Groq, Grok, DeepSeek, or Gemini)
inariwatch config --model <model> | Set the AI model
inariwatch config --auto-fix true | Enable autonomous AI fix pipeline on critical alerts
inariwatch config --auto-merge true | Auto-merge generated PRs when all safety gates pass
inariwatch config --show | Print current config (keys masked)
inariwatch daemon install | Register InariWatch as a background service (systemd / launchd / Task Scheduler)
inariwatch daemon start|stop|status | Control the background daemon
inariwatch daemon uninstall | Remove the background service
inariwatch agent-stats | Show AI agent track record, trust level, and auto-merge gates
inariwatch rollback vercel | Interactive rollback — pick a previous deployment to restore
inariwatch dev | Local dev mode — catch errors, diagnose with AI, apply fixes to local files

CLI — Configuration

The CLI stores all config in two files:

File | Purpose
~/.config/inariwatch/config.toml | AI key, model, and per-project integration tokens
~/.local/share/inariwatch/inariwatch.db | SQLite — events and alerts (local history)
~/.config/inariwatch/config.toml (example)
[global]
ai_key    = "sk-ant-..."
ai_model  = "claude-haiku-4-5-20251001"
auto_fix  = false   # enable autonomous fix pipeline on critical alerts
auto_merge = false  # auto-merge PRs when all safety gates pass

[[projects]]
name = "my-app"
slug = "my-app"
path = "/home/you/projects/my-app"

[projects.integrations.github]
token         = "ghp_..."
repo          = "my-org/my-app"
stale_pr_days = 2

[projects.integrations.vercel]
token      = "..."
project_id = "prj_..."
team_id    = "team_..."   # optional

[projects.integrations.sentry]
token   = "..."
org     = "my-org"
project = "my-project"

[projects.integrations.uptime]
url       = "https://my-app.com"
threshold = 5000   # ms — optional, alerts if response > threshold

[projects.integrations.cron]
url    = "https://app.inariwatch.com"
secret = "your-cron-secret"

[projects.notifications.telegram]
bot_token = "123456:ABC-..."
chat_id   = "987654321"
Note: You can edit this file directly, but using inariwatch add and inariwatch config is safer — they validate tokens before saving.

CLI — Daemon

Run InariWatch as a background service so it monitors your project 24/7 — even when your terminal is closed. It registers as a systemd user service on Linux, a launchd agent on macOS, and a Task Scheduler task on Windows.

Terminal
inariwatch daemon install   # register and enable the service
inariwatch daemon start     # start immediately
inariwatch daemon stop      # stop the service
inariwatch daemon status    # check if running + tail recent logs
inariwatch daemon uninstall # remove the service

Logs are written to ~/.inariwatch/daemon.log on all platforms. The daemon runs inariwatch watch in the background — any config you set with inariwatch config applies to it automatically.

CLI — Auto-fix & Auto-merge

When auto_fix is enabled, every critical alert automatically triggers the full AI remediation pipeline: diagnose → read code → generate fix → self-review → push branch → wait for CI → open PR. No human is needed until the PR appears.

Terminal
inariwatch config --auto-fix true    # enable autonomous fix pipeline
inariwatch config --auto-merge true  # also merge PRs when all safety gates pass

auto_merge requires auto_fix to be enabled. Even then, a PR is only merged when all 11 safety gates pass: auto-merge enabled, CI green, confidence ≥ threshold, self-review score ≥ 70, lines changed ≤ max, Substrate risk ≤ 40, EAP chain verified, prediction safe, security scan clean, Substrate replay pass, and staging E2E pass.

Note: Use inariwatch agent-stats to see the AI's track record, current trust level, and which gates apply at your trust level. The agent earns relaxed gates as it accumulates successful fixes.
Trust level | Requires | Auto-merge gates
Rookie | 0 fixes | Never auto-merges
Apprentice | 3 fixes, ≥ 50% success | Conf ≥ 90, lines ≤ 50
Trusted | 5 fixes, ≥ 70% success | Conf ≥ 80, lines ≤ 100
Expert | 10 fixes, ≥ 85% success | Conf ≥ 70, lines ≤ 200
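The ladder above can be sketched as a small function. The thresholds come straight from the table; the function itself is illustrative, not the CLI's actual implementation.

```typescript
// Trust levels as documented: each tier requires a minimum number of
// applied fixes and a minimum success rate.
type Trust = "rookie" | "apprentice" | "trusted" | "expert"

function trustLevel(fixes: number, successRate: number): Trust {
  if (fixes >= 10 && successRate >= 0.85) return "expert"
  if (fixes >= 5 && successRate >= 0.7) return "trusted"
  if (fixes >= 3 && successRate >= 0.5) return "apprentice"
  return "rookie"
}
```

Note that the tiers are checked strictest-first, so an agent with many fixes but a poor success rate stays at a lower level.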

CLI — MCP Server

Warning: The local inariwatch serve-mcp command has been deprecated. Use the hosted MCP server at mcp.inariwatch.com instead — it has the full 25-tool surface, OAuth, and works with any MCP-compatible client without running a local process.

See the MCP Server section below for setup instructions and the full tool catalog. The fastest path is npx @inariwatch/mcp init — it auto-detects Claude Code, Cursor, Windsurf, VS Code Copilot, Codex CLI, Gemini CLI, and OpenClaw, and wires them up in one command.

CLI — Rollback

When a bad deploy reaches production, inariwatch rollback vercel gives you an interactive list of your last 10 successful deployments so you can pick one and restore it in seconds.

Terminal
inariwatch rollback vercel

Fetching recent successful deployments for my-app…
? Roll back to which deployment?
> dpl_abc123 a1b2c3d (main) — fix: remove debug log — 2h ago
  dpl_def456 e4f5g6h (main) — feat: add dark mode  — 5h ago
  dpl_ghi789 i7j8k9l (main) — chore: bump deps     — 1d ago

  Deploy:  dpl_abc123
  Branch:  main
  Commit:  a1b2c3d
  URL:     https://my-app.vercel.app

? Confirm rollback to production? (y/N) y
Rolling back…
✓ Rollback triggered!
  Live at: https://my-app.vercel.app
Note: The confirmation prompt defaults to No — you have to explicitly type y to proceed. This prevents accidental rollbacks.

CLI — Dev Mode

inariwatch dev is a local development companion. It catches errors from your dev server via the capture SDK, diagnoses them with AI, and applies fixes directly to your local files — no GitHub, no PR, no branch.

Terminal
inariwatch dev

◉ INARIWATCH DEV

◉ Dev mode — my-app | Capture :9111 | Ctrl+C to stop
→ Errors from your dev server will be diagnosed and fixed locally.

  🔴 TypeError: Cannot read 'user' of undefined
     auth/session.ts:84
     💡 Known pattern (confidence: 92%) — add null check
     → Scanning project files... 142 files
     → Diagnosing... 92% confidence
     → Read 1 file(s): auth/session.ts
     → Generating fix... done
     → Self-reviewing... 88/100 (approve)

     Fix: session.user?.id ?? null

     Apply fix? yes
     ✓ Saved auth/session.ts
     ✓ Fix applied. Memory saved.

How it works: the capture server listens on localhost:9111 for errors from @inariwatch/capture. When an error arrives, InariWatch reads your local source files, generates a fix with AI, runs a self-review, and shows you the diff. You confirm with y and the fix is applied directly to disk.

Dev trains prod: every fix you apply locally is saved to the incident memory. When the same error appears in production, InariWatch already knows the pattern — resulting in higher confidence and faster auto-fix.

Flag | Description
--project <name> | Select which project to use (auto-detected if only one)
--port <port> | Override capture server port (default: 9111)
Note: Dev mode requires an AI key (inariwatch config --ai-key). It does NOT require GitHub — everything runs locally. Your code never leaves your machine.
Pro tip: Run inariwatch dev alongside npm run dev or any local dev server that uses @inariwatch/capture. Errors are caught the instant they happen.

CLI — Cron Scheduler

The CLI includes a built-in cron scheduler that replaces external services like GitHub Actions for triggering InariWatch cloud endpoints. It runs inside the inariwatch watch loop and fires HTTP requests to your configured cron tasks at their defined intervals.

Terminal
inariwatch add cron
# Prompts for:
#   Base URL:    https://app.inariwatch.com
#   Cron secret: your-cron-secret

Once configured, the watch loop automatically fires 4 default tasks:

Task | Path | Interval | Purpose
poll | /api/cron/poll | 5 min | Poll integrations for new alerts
uptime | /api/cron/uptime | 60 sec | Check uptime endpoints
escalate | /api/cron/escalate | 5 min | Escalate unacknowledged alerts
digest | /api/cron/digest | 24 hr | Send daily alert digest emails

Each request includes an Authorization: Bearer <secret> header. All cron endpoints verify this secret using constant-time comparison.

Note: You can customize tasks in config.toml — add new paths, change intervals, or disable specific tasks. SSRF protection is built in: the scheduler blocks requests to localhost, private IPs, and non-HTTP protocols.
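The constant-time secret check described above can be sketched as follows. The function name is hypothetical; the real endpoint handlers are not published, but Node's crypto.timingSafeEqual is the standard way to compare secrets without leaking timing information.

```typescript
import { timingSafeEqual } from "node:crypto"

// Verify an Authorization: Bearer <secret> header in constant time, so an
// attacker cannot learn the secret byte-by-byte from response timing.
function verifyCronSecret(authHeader: string | undefined, secret: string): boolean {
  if (!authHeader?.startsWith("Bearer ")) return false
  const presented = Buffer.from(authHeader.slice("Bearer ".length))
  const expected = Buffer.from(secret)
  // timingSafeEqual throws on length mismatch, so check length first.
  // (This reveals the secret's length, which is generally acceptable.)
  if (presented.length !== expected.length) return false
  return timingSafeEqual(presented, expected)
}
```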

Integration — GitHub

InariWatch uses a GitHub Personal Access Token (classic or fine-grained) to monitor CI runs, PRs, and commits.

Getting a token

  1. Go to GitHub → Settings → Developer settings → Personal access tokens
  2. Create a new token (classic)
     Click Generate new token → Classic.
  3. Select scopes

     Scope | Why
     repo | Read CI runs, PRs, and commits on private repos
     read:org | Read org membership (if monitoring an org)
     read:user | Identify the token owner for auto-detection
  4. Copy the token
     The token starts with ghp_. Paste it into InariWatch.

What InariWatch monitors

Alert | Severity | Default
Failed CI check on main/master | Critical | On
Failed CI on any branch | Warning | Off
Stale PR (configurable days) | Warning | On — 3 days
Unreviewed PR (configurable hrs) | Warning | On — 24 hrs
Pre-deploy risk score on PR | Info | On (requires AI key)
Pro tip: The owner field should be your GitHub username or org name — InariWatch uses it to scope which repos to watch.
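The stale-PR rule above (default: 3 days) reduces to a simple age check. This sketch is illustrative; the CLI's actual check is not published.

```typescript
// A PR is considered stale when it has gone longer than staleDays
// without an update (default 3, matching the table above).
function isStalePr(updatedAt: Date, now: Date, staleDays = 3): boolean {
  const ageMs = now.getTime() - updatedAt.getTime()
  return ageMs > staleDays * 24 * 60 * 60 * 1000
}
```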

Integration — Vercel

InariWatch monitors your Vercel deployments and can trigger instant rollbacks on production failures.

Note: Vercel is one of four supported hosting providers. Every feature below — webhook receiver, auto-rollback, auto-heal, deploy notifications, 15-min health check, dashboard rollback button, Slack /rollback, MCP rollback_deploy, and AI diagnosis with build logs — works identically on Netlify, Cloudflare Pages, and Render.

Getting a token

  1. Open Vercel → Account Settings → Tokens
  2. Create a token
     Give it a name like inariwatch. No expiry is easiest for long-term monitoring.
  3. (Optional) Find your Team ID
     Go to your Vercel team → Settings. The team ID is shown as team_.... Leave blank if you're on a personal account.

What InariWatch monitors

Alert | Severity | Default
Failed production deployment | Critical | On
Failed preview deployment | Warning | Off
Instant rollback | - | On demand

Integration — Netlify

InariWatch receives webhooks from Netlify for failed deploys, alerts you, and can roll back to the last successful deploy — same UX as Vercel, just a different host.

Getting a token

  1. Open Netlify → User settings → Applications
  2. Create a Personal Access Token
     Give it a name like inariwatch. The token needs Deploys: Read/Write and Sites: Read.
  3. Find your Site ID
     Go to your site → Site settings → General → Site information. The Site ID looks like 12345678-abcd-efgh-ijkl-mnopqrstuvwx.
  4. Connect in InariWatch
     Integrations → Connect Netlify → paste token + Site ID. InariWatch validates the token and registers a webhook automatically.

What InariWatch monitors

Alert | Severity | Default
Failed production deploy | Critical | On
Failed deploy-preview | Warning | Off
Build logs in AI diagnosis | - | On (via Netlify log API)
Instant rollback (API + UI) | - | On demand
Auto-rollback on webhook | - | On (when autoRollback enabled)
Auto-heal on uptime down | - | On (when autoHeal enabled)

Integration — Cloudflare Pages

Full parity with Vercel and Netlify: deploy alerts, one-click rollback, auto-heal, and AI diagnosis enriched with Cloudflare build logs.

Getting a token

  1. Open Cloudflare → My Profile → API Tokens
  2. Create a Custom Token

     Permission | Access
     Account → Cloudflare Pages | Edit
     Account → Account Settings | Read
  3. Find your Account ID
     It's shown in the right sidebar of your Cloudflare dashboard under Account Details.
  4. Connect in InariWatch
     Integrations → Connect Cloudflare Pages → paste token, Account ID, and Project Name (must match the Pages project slug exactly).

What InariWatch monitors

Alert | Severity | Default
Failed production deployment | Critical | On
Failed preview deployment | Warning | Off
Build logs in AI diagnosis | - | On (via history/logs endpoint)
Instant rollback (API + UI) | - | On demand
Auto-rollback on webhook | - | On (when autoRollback enabled)
Auto-heal on uptime down | - | On (when autoHeal enabled)

Integration — Render

Render is fully supported for deploy alerts, rollback, and auto-heal. The only caveat is that Render does not expose build logs via its public REST API, so AI diagnosis runs without the build output — everything else is identical.

Getting a token

  1. Open Render → Account Settings → API Keys
  2. Create an API Key
     Render API keys have full account access. Store it in a password manager — it's only shown once.
  3. Find your Service ID
     Open the service in the Render dashboard. The Service ID is in the URL: dashboard.render.com/web/srv-abc123...
  4. Connect in InariWatch
     Integrations → Connect Render → paste API key, Service ID, and a display name.

What InariWatch monitors

Alert | Severity | Default
Failed deploy (build_failed) | Critical | On
Instant rollback (API + UI) | - | On demand
Auto-rollback on webhook | - | On (when autoRollback enabled)
Auto-heal on uptime down | - | On (when autoHeal enabled)
Build logs in AI diagnosis | - | Not available (Render has no public log API)
Note: Render logs live in their dashboard and are not exposed via the REST API. AI diagnosis still runs — it just uses Sentry, GitHub CI, and Substrate context instead of build output.

Integration — Sentry

InariWatch polls Sentry every 5 minutes for new issues and regressions in your projects.

Getting a token

  1. Open Sentry → Settings → Auth Tokens
  2. Create an internal integration token

     Permission | Access
     Issues & Events | Read
     Project | Read
     Organization | Read
  3. Find your org slug
     It's in the URL of your Sentry dashboard: sentry.io/organizations/my-org/

What InariWatch monitors

Alert | Severity | Window
New issue first seen | Warning | Last 10 min
Regression (re-opened) | Critical | Last 10 min

Integration — Expo

Monitor EAS Build failures and OTA Update rollbacks. InariWatch polls the Expo API and receives webhooks for real-time alerts with AI diagnosis.

Connect Expo

  1. Create an access token
     Go to expo.dev/settings/access-tokens and generate a Personal Access Token with project read access.
  2. Connect in InariWatch
     Integrations → Connect Expo → Paste your token. InariWatch validates it and detects your username.
  3. Configure alerts
     Choose which alerts to enable: build failures, update rollbacks. Both are enabled by default.

Webhook setup (optional)

For real-time alerts (instead of polling every 5 minutes), set up a webhook in Expo:

  1. Copy the webhook URL
     After connecting, InariWatch shows a webhook URL and a signing secret.
  2. Add in Expo
     Go to your Expo project → Settings → Webhooks → Add webhook. Paste the URL and secret. Select Build and Update events.

What InariWatch monitors

Alert | Severity | Trigger
Build failure | Critical | EAS Build status = errored or canceled
Update rollback | Warning | EAS OTA Update rolled back to embedded

Each alert includes the app name, platform, build ID, and error message. The AI diagnosis analyzes the build log to suggest fixes — including monorepo issues, dependency conflicts, and configuration errors.

Integration — Uptime

Uptime monitoring checks your HTTP endpoints at every poll interval and alerts if they return a non-2xx status or respond slower than your threshold.

In the web dashboard, go to Integrations → Uptime → Configure and add your endpoints. Each endpoint has a URL and an optional response time threshold in milliseconds.

Alert | Severity
Endpoint returned non-2xx | Critical
Response time exceeded threshold | Warning
Endpoint recovered | Info
Note: No token required. InariWatch makes the HTTP requests from its own infrastructure.
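The alert rules above reduce to a small classification function. This is a hypothetical sketch, not InariWatch's internals; the names are illustrative.

```typescript
// Uptime verdict per the table: non-2xx is critical, a successful but
// slow response (over the optional threshold) is a warning.
type UptimeVerdict = "critical" | "warning" | "ok"

function classify(status: number, responseMs: number, thresholdMs?: number): UptimeVerdict {
  if (status < 200 || status >= 300) return "critical" // endpoint returned non-2xx
  if (thresholdMs !== undefined && responseMs > thresholdMs) return "warning" // too slow
  return "ok"
}

// A check would time an HTTP request and feed the result to classify:
async function checkEndpoint(url: string, thresholdMs?: number): Promise<UptimeVerdict> {
  const start = Date.now()
  const res = await fetch(url)
  return classify(res.status, Date.now() - start, thresholdMs)
}
```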

Integration — PostgreSQL

InariWatch connects to your PostgreSQL database and monitors for health issues without storing your data.

You only need a read-only connection string. InariWatch runs read-only diagnostic queries — it never writes to your database.

Connection string format
postgresql://user:password@host:5432/dbname?sslmode=require
Alert | Severity | Threshold
Connection failure | Critical | Any failure
Too many active connections | Warning | > 80% of max_connections
Long-running query | Warning | > 60 seconds
Replication lag | Warning | > 30 seconds
Warning: Create a dedicated read-only user for InariWatch. Never use a superuser connection string in a third-party service.
Create a read-only user (run in psql)
CREATE USER inariwatch WITH PASSWORD 'your-password';
GRANT CONNECT ON DATABASE your_db TO inariwatch;
GRANT USAGE ON SCHEMA public TO inariwatch;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO inariwatch;
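For a sense of what read-only diagnostics look like, here is a hedged sketch. The exact queries InariWatch runs are not published; these use standard PostgreSQL system views, and the threshold check mirrors the table above.

```typescript
// Illustrative read-only diagnostic queries against standard pg system views.
const DIAGNOSTICS = {
  activeConnections: "SELECT count(*) FROM pg_stat_activity",
  maxConnections: "SHOW max_connections",
  longRunning:
    "SELECT pid, now() - query_start AS runtime FROM pg_stat_activity " +
    "WHERE state = 'active' AND now() - query_start > interval '60 seconds'",
}

// Threshold from the table above: warn when active connections exceed
// 80% of max_connections.
function connectionPressure(active: number, max: number): "warning" | "ok" {
  return active > 0.8 * max ? "warning" : "ok"
}
```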

Integration — npm / Cargo

InariWatch audits your package.json or Cargo.toml for known vulnerabilities using OSV.dev (17+ vulnerability databases including NVD, GitHub Advisory, PyPA, RustSec, Go) as the primary source, with GitHub Advisory as automatic fallback. Lockfiles (package-lock.json, yarn.lock, Cargo.lock) are auto-detected for transitive dependency scanning with exact version matching.

Provide a public URL to your manifest file. For private repos, use a raw GitHub URL with a Personal Access Token in the request (paste the full URL including auth).

Example public URLs
# npm
https://raw.githubusercontent.com/my-org/my-app/main/package.json

# Cargo
https://raw.githubusercontent.com/my-org/my-app/main/Cargo.toml
Alert | Severity
Critical CVE found | Critical
High-severity CVE found | Warning
Moderate CVE found | Info
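A lookup against OSV.dev like the one the audit performs can be sketched as follows. The endpoint and payload shape follow OSV's public query API; the function names and error handling are illustrative assumptions.

```typescript
// OSV.dev query payload: version + package name + ecosystem
// ("npm" for package.json, "crates.io" for Cargo.toml).
type Ecosystem = "npm" | "crates.io"

function osvQuery(name: string, version: string, ecosystem: Ecosystem) {
  return { version, package: { name, ecosystem } }
}

async function audit(name: string, version: string, ecosystem: Ecosystem) {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(osvQuery(name, version, ecosystem)),
  })
  const data = await res.json()
  return data.vulns ?? [] // matching advisories; empty when clean
}
```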

Integration — Datadog

InariWatch receives alerts from Datadog monitors via webhooks. When your Datadog monitor triggers (log anomaly, infrastructure spike, APM error), InariWatch creates an alert and optionally runs AI remediation — bridging the gap between detection and resolution.

Getting your keys

  1. Open Datadog → Organization Settings → API Keys
  2. Copy your API Key
     This is your organization's API key (a hex string).
  3. Create an Application Key
     Go to the Application Keys tab and create a new key. Give it a name like inariwatch. Copy the key — it's only shown once.
  4. Connect in InariWatch
     Go to Integrations → Datadog → Connect. Paste both keys. InariWatch validates your API key automatically.

Setting up the webhook

After connecting, InariWatch generates a unique Webhook URL for your project. You need to configure this URL in Datadog so monitors can send alerts to InariWatch.

  1. Copy the Webhook URL from InariWatch
     It's shown under the Datadog integration card after connecting. Looks like: https://app.inariwatch.com/api/webhooks/datadog/your-integration-id
  2. Open Datadog → Integrations → Webhooks
  3. Create a new webhook
     Name it inariwatch, paste the Webhook URL, and leave the payload as the default JSON. Click Save.
  4. Add the webhook to your monitors
     Edit any Datadog monitor → Notify your team section → type @webhook-inariwatch. Now that monitor will alert InariWatch when it fires.

What InariWatch receives

Datadog Event | InariWatch Severity
Monitor status: Alert / Error | Critical
Monitor status: Warn | Warning
Monitor status: Recovered / OK | Skipped (auto-resolved)
Pro tip: Datadog sends a "Recovered" event when a monitor goes back to OK. InariWatch automatically ignores these so you don't get noise from self-healing issues.
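The mapping in the table above can be sketched as a pure function. The status strings follow the table; treating them case-insensitively is an assumption about the webhook payload, not Datadog's documented schema.

```typescript
// Map a Datadog monitor status to an InariWatch severity.
// null means the event is skipped (auto-resolved recoveries).
function mapDatadogStatus(status: string): "critical" | "warning" | null {
  const s = status.toLowerCase()
  if (s === "alert" || s === "error") return "critical"
  if (s === "warn") return "warning"
  return null // Recovered / OK — skipped
}
```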

Integration — @inariwatch/capture

The @inariwatch/capture SDK captures errors, logs, and deploy markers from your app and sends them to InariWatch. Zero dependencies, zero config. Works as a standalone Sentry replacement or alongside your existing integrations.

Quick start (zero config)

One command
npx @inariwatch/capture

Auto-detects your framework (Next.js, Nuxt, Remix, SvelteKit, Astro, Vite, Express, Fastify, Node), installs the SDK, and sets up instrumentation for whichever stack it finds. If you have an InariWatch account, the CLI opens a browser to authorize and automatically writes INARIWATCH_DSN to your .env — no manual copy-paste.

  1. Framework setup
     Detects 9 frameworks (Next, Nuxt, Remix, SvelteKit, Astro, Vite, Express, Fastify, Node). Installs the package and wires the right plugin automatically.
  2. Browser authorization
     Opens app.inariwatch.com/cli/verify in your browser. Click Authorize — takes 5 seconds.
  3. DSN written automatically
     INARIWATCH_DSN is written to .env.local (or .env). No signup or dashboard visit required.
Note: No account? No problem. Skip the browser step and errors print to your terminal in local mode. You can connect to your dashboard later by running npx @inariwatch/capture again.

Framework setup

Pick the section that matches your stack. All plugins inject git context (commit, branch, message) at build time and mark capture as external on server bundles so its node: builtin imports never leak into client or edge chunks.

Next.js

next.config.ts
import { withInariWatch } from "@inariwatch/capture/next"
export default withInariWatch(nextConfig)

And create instrumentation.ts:

instrumentation.ts
import "@inariwatch/capture/auto"
import { captureRequestError } from "@inariwatch/capture"

export const onRequestError = captureRequestError

Vite (Remix, SvelteKit, SolidStart, Qwik)

Remix, SvelteKit, SolidStart, and Qwik all build with Vite under the hood, so the same plugin works for all of them.

vite.config.ts
import { defineConfig } from "vite"
import { inariwatchVite } from "@inariwatch/capture/vite"

export default defineConfig({
  plugins: [inariwatchVite()],
})

Nuxt 3

nuxt.config.ts
export default defineNuxtConfig({
  modules: ["@inariwatch/capture/nuxt"],
})

The Nuxt module injects git context into runtimeConfig.inariwatch and marks capture as a Nitro external so it stays out of edge bundles.

Astro

astro.config.mjs
import { defineConfig } from "astro/config"
import { inariwatchVite } from "@inariwatch/capture/vite"

export default defineConfig({
  vite: { plugins: [inariwatchVite()] },
})

webpack (CRA, Vue CLI, Angular, raw webpack)

webpack.config.js
const { withInariWatchWebpack } = require("@inariwatch/capture/webpack")

module.exports = withInariWatchWebpack({
  // your existing webpack config
})

Express, Fastify, Koa, Hono, or any Node.js app

CLI flag
node --import @inariwatch/capture/auto app.js

Or in your package.json:

package.json
{ "scripts": { "start": "node --import @inariwatch/capture/auto src/index.js" } }

The /auto entrypoint reads INARIWATCH_DSN from the environment, starts the SDK before your app boots, and registers unhandled-rejection / uncaught-exception listeners. Works with Bun and Deno in Node-compat mode too.

Python, Go, Rust, or anything non-Node

For non-Node projects, use InariWatch's HTTP webhook ingest directly — no SDK required. Send JSON events to your project's capture endpoint and they show up in the dashboard alongside Node-captured errors.

Python (requests)
import requests, traceback, os

def capture(err: Exception):
    requests.post(os.environ["INARIWATCH_DSN"], json={
        "type": "exception",
        "message": str(err),
        "stack": traceback.format_exc(),
        "environment": os.environ.get("ENVIRONMENT", "production"),
    })

try:
    risky_operation()
except Exception as e:
    capture(e)
    raise
Go (net/http)
import (
    "bytes"
    "encoding/json"
    "net/http"
    "os"
)

func capture(err error, stack string) {
    body, _ := json.Marshal(map[string]interface{}{
        "type":    "exception",
        "message": err.Error(),
        "stack":   stack,
    })
    http.Post(os.Getenv("INARIWATCH_DSN"), "application/json", bytes.NewReader(body))
}

Alternatively, run your app with the InariWatch Agent installed — it captures errors at the kernel level, language-agnostic, zero code changes. See the InariWatch Agent section below.

Environment variables

Config is driven by environment variables — no DSN in source code. Omit INARIWATCH_DSN for local mode (terminal output).

Variable | Description
INARIWATCH_DSN | Capture endpoint. Omit for local mode.
INARIWATCH_ENVIRONMENT | Environment tag (fallback: NODE_ENV)
INARIWATCH_RELEASE | Release version — triggers deploy marker
INARIWATCH_SUBSTRATE | Set to "true" to enable I/O recording

Substrate I/O recording

Capture every HTTP call, DB query, and file operation alongside your errors. When captureException() fires, the last 60 seconds of I/O are uploaded automatically.

Install
npm install @inariwatch/substrate-agent
Enable via env var
INARIWATCH_SUBSTRATE=true

Or programmatically:

init()
init({ substrate: true })
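The "last 60 seconds of I/O" behavior can be pictured as a sliding window buffer. This is a hypothetical model of the pattern, not the substrate agent's actual implementation.

```typescript
// Sliding 60-second window of I/O events: recording drops anything older
// than the window, and an exception would upload the current snapshot.
type IoEvent = { at: number; kind: "http" | "db" | "fs"; detail: string }

class IoWindow {
  private events: IoEvent[] = []
  constructor(private windowMs = 60_000) {}

  record(e: IoEvent): void {
    this.events.push(e)
    const cutoff = e.at - this.windowMs
    this.events = this.events.filter(ev => ev.at >= cutoff)
  }

  snapshot(now: number): IoEvent[] {
    return this.events.filter(ev => ev.at >= now - this.windowMs)
  }
}
```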

API

Function | Purpose | Example
init(config?) | Initialize SDK (reads from env vars) | init() or init({ substrate: true })
captureException(error) | Capture exception with full stack trace | captureException(err)
captureLog(message, level?, meta?) | Send structured log event | captureLog("DB timeout", "error", { query })
captureMessage(message, level?) | Send plain text event | captureMessage("Deploy started", "info")
flush() | Wait for pending events (call before exit) | await flush()
addBreadcrumb({ message, category?, level? }) | Add custom breadcrumb | addBreadcrumb({ message: "checkout started" })
setUser({ id?, role? }) | Set user context (email stripped for privacy) | setUser({ id: "u123", role: "admin" })
setTag(key, value) | Set custom tag for filtering | setTag("feature", "checkout")
setRequestContext({ method, url, headers?, body? }) | Set HTTP request context | setRequestContext({ method: "POST", url: "/api/users" })

Automatic context

Every error automatically includes rich context — no code changes needed. Your AI gets the full picture without guessing.

Context | How it works | What the AI sees
Git | Injected at build time by withInariWatch — commit, branch, message | "commit f5eface on main — refactor session handling (23 min ago)"
Breadcrumbs | Auto-intercepts console.log + fetch — last 30 actions | GET /auth/session → 200 → console.log("Processing") → POST /api/users → 500
Environment | Node version, OS, memory, CPU at crash time | Node v20, linux, heap 890/1130MB, uptime 24h
Request | Full HTTP request (headers redacted, body scrubbed) | POST /api/users { role: "admin" }
User | Set via setUser() — id + role only (email stripped) | user_456 (admin)
Tags | Set via setTag() — custom key-value pairs | feature=checkout, plan=pro

Sensitive data is scrubbed automatically: Bearer tokens, JWTs, passwords, API keys, credit card numbers, connection strings, and auth headers are all redacted before leaving your app.
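For intuition, scrubbing of this kind is typically a pass of redaction patterns over outgoing text. The patterns below are rough illustrations — the SDK's actual rules are not published and certainly cover more cases.

```typescript
// Illustrative redaction pass: each pattern is replaced before data
// leaves the process. Real scrubbers cover far more token formats.
const SECRET_PATTERNS: [RegExp, string][] = [
  [/Bearer\s+[A-Za-z0-9._\-]+/g, "Bearer [REDACTED]"],          // bearer tokens / JWTs
  [/\b(?:\d[ -]?){13,16}\b/g, "[REDACTED_CARD]"],               // credit card numbers
  [/postgres(?:ql)?:\/\/[^\s"']+/g, "[REDACTED_DSN]"],          // connection strings
]

function scrub(text: string): string {
  return SECRET_PATTERNS.reduce((t, [re, sub]) => t.replace(re, sub), text)
}
```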

Import paths

Import | Description
@inariwatch/capture | SDK — init, captureException, captureLog, flush
@inariwatch/capture/auto | Auto-init on import — config from env vars
@inariwatch/capture/browser | Browser entry — error + unhandled rejection listeners
@inariwatch/capture/next | Next.js plugin — withInariWatch()
@inariwatch/capture/vite | Vite plugin — inariwatchVite() (Vite + Remix + SvelteKit + Astro + SolidStart + Qwik)
@inariwatch/capture/webpack | webpack wrapper — withInariWatchWebpack() (CRA, Vue CLI, Angular)
@inariwatch/capture/nuxt | Nuxt 3 module — add to modules: []
@inariwatch/capture/shield | Runtime security — source-to-sink attack detection
Pro tip: In serverless environments, call await flush() before the function returns to ensure events are sent.

Shield — Runtime Security

Shield detects security vulnerabilities at runtime by tracking user input from the request to dangerous operations (database queries, shell commands, file reads). Unlike a regex WAF, Shield has near-zero false positives because it detects the vulnerability, not the attack attempt.

Setup

Add one import to your instrumentation file:

instrumentation.ts
import "@inariwatch/capture/auto"
import "@inariwatch/capture/shield"
import { captureRequestError } from "@inariwatch/capture"
export const onRequestError = captureRequestError

For Express/Fastify, use the middleware:

app.ts
import { shield } from "@inariwatch/capture/shield"

app.use(shield()) // report-only (default)
// or
app.use(shield({ mode: "block" })) // block threats

What it detects

Vulnerability | Sink hooked | Example
SQL Injection | pg.query, mysql2.query | User input in string-concatenated query
Command Injection | child_process.exec | User input in shell command
Path Traversal | fs.readFile, fs.writeFile | ../../etc/passwd in file path
SSRF | fetch, http.request | Internal IP in user-controlled URL
NoSQL Injection | mongodb.find | $ne operator in user input
Prototype Pollution | JSON.parse | __proto__ key in request body

How it works

1. User sends '; DROP TABLE users-- as a search query.
2. Shield marks it as tainted (came from user request).
3. Your app passes it to pg.query("SELECT * FROM users WHERE name = '" + input + "'").
4. Shield detects tainted input inside the SQL string.
5. Reports to InariWatch: file, line, sink, source, input.
6. InariWatch AI reads the code and creates a PR with a parameterized query fix.
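The flow above can be sketched as a tiny taint tracker. This is illustrative only: `fromRequest` and `checkSink` are hypothetical names, not the Shield API, which hooks the real driver functions at runtime.

```typescript
// Hypothetical sketch of source-to-sink taint tracking (not the Shield API).
const tainted = new Set<string>();

// Steps 1-2: values arriving from a user request are marked as tainted.
function fromRequest(value: string): string {
  tainted.add(value);
  return value;
}

// Step 4: at the sink, flag queries that contain a tainted value verbatim,
// i.e. user input concatenated into the SQL string.
function checkSink(query: string): boolean {
  for (const t of tainted) {
    if (query.includes(t)) return true;
  }
  return false;
}

const input = fromRequest("'; DROP TABLE users--");
const unsafe = "SELECT * FROM users WHERE name = '" + input + "'";
const safe = "SELECT * FROM users WHERE name = $1"; // parameterized: no taint in SQL
```

A parameterized query never embeds the tainted value in the SQL text, which is why the safe version passes the sink check.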

Modes

| Mode | Behavior | Use case |
|---|---|---|
| report (default) | Detect and report to dashboard. Request continues. | Production monitoring |
| block | Return 403. Request rejected before sink executes. | Active protection |
Pro tip: Start with report mode. Review alerts in the dashboard. Enable block mode when confident in the detection accuracy for your app.

Security alerts

Shield events appear as security alerts in the dashboard with full context: vulnerability type, sink function, source input, file, and line number. The AI auto-analyze prompt is tailored for security — it assesses if the vulnerability is real, what the impact is, and how to fix it. Click Fix with AI to auto-generate a parameterized query, input sanitization, or safe API call.

InariWatch Agent — Kernel-level observability

The InariWatch Agent captures everything that happens on your server at the kernel level — process execution, network connections, file access, DNS queries, TLS plaintext, and security events (LSM hooks) — without requiring any SDK in your code. It uses eBPF under the hood and is language-agnostic: it works with Node.js, Python, Go, Java, Rust, or any production process.

While @inariwatch/capture catches application errors from within your code, the InariWatch Agent watches your entire server from the kernel. It detects threats that code-level instrumentation cannot see — SSRF to cloud metadata endpoints, reverse shells, web shell uploads, container escapes, sensitive file reads, and more.

Quick install

Create an integration at Dashboard → Integrations → InariWatch Agent to get your credentials, then run this one-liner on your Linux server (as root):

One-line installer
curl -sf https://install.inariwatch.com | sudo sh -s -- \
  --integration-id <your-uuid> \
  --secret <your-secret>

Or via environment variables:

Env var install
IW_INTEGRATION_ID=<uuid> IW_SECRET=<secret> \
  bash -c "$(curl -sf https://install.inariwatch.com)"

Requirements

  • Linux kernel >= 5.8 with BTF support (check: ls /sys/kernel/btf/vmlinux)
  • Architecture: x86_64 or aarch64
  • Distros: Ubuntu 22.04+, Debian 12+, RHEL 9+, Fedora 38+, Amazon Linux 2023+
  • Root access (or CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_RESOURCE)

What it captures

The agent loads 7 eBPF programs into your kernel:

| Probe | Captures | Hook type |
|---|---|---|
| Process | exec, exit, fork | tracepoint/sched |
| Network | TCP connect/accept/close, retransmits | tracepoint/sock + kprobe/tcp_* |
| Filesystem | file open, write, delete | kprobe/vfs_open, vfs_write, vfs_unlink |
| DNS | all DNS queries (parsed in userspace) | kprobe/udp_sendmsg |
| TLS | plaintext from OpenSSL + Go crypto/tls | uprobe on SSL_read/SSL_write |
| Syscall | any syscall via raw tracepoint | raw_tracepoint/sys_enter |
| Security (LSM) | exec, socket, capability, namespace | LSM hooks (needs BPF LSM) |

Events are batched (1000 events / 256KB / 5s window), compressed with LZ4 (~88% ratio), and sent over HTTPS to the InariWatch cloud. Threat detection runs in the cloud pipeline.
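The batching policy reduces to a simple predicate: flush whenever any of the three limits is hit. This is a sketch of the thresholds stated above; `shouldFlush` is a hypothetical name, not agent code.

```typescript
// Sketch of the flush policy: 1000 events, 256 KB, or a 5 s window,
// whichever comes first.
interface Batch {
  count: number;  // events accumulated
  bytes: number;  // uncompressed payload size
  ageMs: number;  // time since first event in the batch
}

function shouldFlush(b: Batch): boolean {
  return b.count >= 1000 || b.bytes >= 256 * 1024 || b.ageMs >= 5000;
}
```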

Threat detection

The cloud pipeline analyzes events and creates alerts for:

  • SQL injection, XSS, command injection (via TLS plaintext interception)
  • SSRF to cloud metadata (169.254.169.254, metadata.google.internal, etc.)
  • Reverse shell attempts (/dev/tcp, mkfifo, bash -i)
  • Web shell uploads (.php, .jsp, .asp in web directories)
  • Sensitive file access (/etc/shadow, SSH keys, cloud credentials)
  • Malicious DNS queries (known C2 / exfiltration domains)
  • Container escape attempts (namespace manipulation)
  • Suspicious process execution (nc, nmap, wget from web processes)

Configuration

The installer creates /etc/inariwatch/agent.toml. Edit it to enable optional probes (TLS interception, BPF LSM security hooks) or tune batching:

/etc/inariwatch/agent.toml
[cloud]
endpoint = "https://app.inariwatch.com/api/agent/events"
integration_id = "your-uuid"
webhook_secret = "your-secret"

[agent]
log_level = "info"

[probes]
enable_process = true
enable_network = true
enable_filesystem = true
enable_dns = true
enable_tls = true          # captures plaintext from OpenSSL/Go crypto/tls
enable_syscall = true
enable_security = false    # needs CONFIG_BPF_LSM in kernel

Restart after editing:

Shell
sudo systemctl restart inariwatch-agent

Release verification

The install script pins release binaries by SHA-256 and will refuse to install a tampered file. Supply-chain signing via cosign and SLSA Level 3 provenance attestation are planned for the 1.0 release.

Service management

Shell
sudo systemctl status inariwatch-agent       # check running state
sudo journalctl -u inariwatch-agent -f       # live logs
sudo systemctl restart inariwatch-agent      # after config changes
sudo systemctl stop inariwatch-agent         # pause monitoring

Uninstall

Shell
curl -sf https://install.inariwatch.com | sudo sh -s -- --uninstall

Performance

  • ~250 events/second throughput (measured)
  • ~88% LZ4 compression ratio
  • < 1% CPU overhead
  • ~48 MB RAM
  • Zero kernel event drops under normal load
Note: The source code of the agent is private. Binary releases are distributed via orbita-pos/inariwatch-agent-releases (public). Contact info@jesusbr.com for commercial licensing or security audits.

AI setup — Overview

Alert analysis and correlation work out of the box — no AI key required. InariWatch provides built-in AI for basic alert analysis so you get value from day one.

Adding your own AI key (Bring Your Own Key) unlocks advanced features:

  • AI code remediation — writes the fix, pushes a branch, waits for CI, opens a PR
  • Pre-deploy PR risk scoring (GitHub integration required)
  • Auto post-mortems when an incident is resolved
  • Ask Inari — chat with your live monitoring data
Pro tip: You can add multiple providers. InariWatch uses the key you mark as primary; if no primary is set, Claude is preferred by default.

AI — Claude (Anthropic)

Claude is the recommended provider — InariWatch's AI features are tuned for Claude's output style.

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with sk-ant-api03-...
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select Claude.
| Model | Context | Best for |
|---|---|---|
| claude-sonnet-4-6 (recommended) | 200k | Remediation, correlation, chat |
| claude-haiku-4-5-20251001 | 200k | Fast analysis, lower cost |
| claude-opus-4-6 | 200k | Complex repos, maximum quality |
CLI
inariwatch config --ai-key sk-ant-api03-... --model claude-sonnet-4-6

AI — OpenAI

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with sk-proj-... (new format) or sk-... (legacy).
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select OpenAI.
| Model | Best for |
|---|---|
| gpt-5.4 (recommended) | Flagship — code fixes, remediation |
| gpt-5-mini | Reasoning + long-form writing (postmortems) |
| gpt-4.1-mini | 1M context, balanced analysis |
| gpt-4o-mini | Fast & cheap — alert analysis, chat |
CLI
inariwatch config --ai-key sk-proj-... --model gpt-4o-mini

AI — Grok (xAI)

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with xai-...
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select Grok.
| Model | Best for |
|---|---|
| grok-3-beta (recommended) | Most capable — remediation & postmortems |
| grok-2-1212 | Balanced chat and analysis |
| grok-2-mini-1212 | Fast & cheap — alert analysis |
CLI
inariwatch config --ai-key xai-... --model grok-3-beta

AI — Groq (Llama)

Groq runs Llama 3.1 at very high throughput — several times faster than other providers. Best for ultra-fast alert analysis and chat where latency matters more than absolute quality.

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with gsk_...
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select Groq (Llama).
| Model | Best for |
|---|---|
| llama-3.1-70b-versatile | Fast analysis & chat (recommended) |
| llama-3.1-8b-instant | Ultra-fast, lowest cost |
| mixtral-8x7b-32768 | Mixture-of-experts balanced |
CLI
inariwatch config --ai-key gsk_... --model llama-3.1-70b-versatile

AI — DeepSeek

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with sk-...
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select DeepSeek.
| Model | Best for |
|---|---|
| deepseek-chat | V3 — fast analysis, chat, postmortems |
| deepseek-reasoner | R1 — deep reasoning, remediation |
CLI
inariwatch config --ai-key sk-... --model deepseek-chat

AI — Gemini (Google)

  1. 1

    Create an API key

  2. 2

    Copy the key

    Starts with AIza...
  3. 3

    Paste into InariWatch

    Settings → AI analysis → Add key → Select Gemini.
| Model | Best for |
|---|---|
| gemini-1.5-pro | Remediation & postmortems (recommended) |
| gemini-1.5-flash | Fast analysis & chat |
| gemini-2.0-flash | Latest — fast, experimental |
CLI
inariwatch config --ai-key AIza... --model gemini-1.5-pro

Autonomous Mode — Auto-Remediate

When enabled, InariWatch automatically triggers the full AI remediation pipeline on critical alerts — no human click needed. The developer wakes up to: "We had an incident at 3 AM. It's already fixed."

  1. 1

    Enable

    Project Settings → Auto-Merge → toggle Autonomous mode (amber).
  2. 2

    Critical alert arrives

    AI diagnosis runs automatically, then the full pipeline: read code → generate fix → self-review → push → CI → PR.
  3. 3

    Safety gates apply

    All 11 gates must pass for auto-merge. If any gate fails, a draft PR is created for manual review instead.
Warning: Autonomous mode requires auto-merge to be enabled. All existing safety gates (confidence, self-review, CI, lines changed, Substrate risk, EAP verification) still apply.

Autonomous mode suggestion

You don't need to enable autonomous mode manually. InariWatch watches your approval history and suggests it automatically when the data justifies the trust.

After each approved fix, InariWatch checks the last 30 days of remediation sessions for that project. If 5 or more sessions exist and the approval rate is ≥ 90%, a banner appears at the top of the project page:

Note: "Your last N fixes were approved X% of the time. Enable autonomous mode?" — Click Enable or dismiss permanently.

Clicking Enable sets autoRemediate: true and clears the banner. Dismissing hides it permanently for that project. The suggestion never reappears once dismissed.
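The suggestion rule reduces to a small predicate: at least 5 sessions in the last 30 days, an approval rate of at least 90%, and no prior dismissal. This is a sketch of the thresholds stated above; the function name is hypothetical.

```typescript
// Sketch of the autonomous-mode suggestion rule (hypothetical name).
function shouldSuggestAutonomous(
  sessions: number,   // remediation sessions in the last 30 days
  approved: number,   // how many of those were approved
  dismissed: boolean  // the banner never reappears once dismissed
): boolean {
  if (dismissed) return false;
  if (sessions < 5) return false;        // not enough history yet
  return approved / sessions >= 0.9;     // approval-rate gate
}
```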

Auto-tune confidence threshold

The minimum confidence threshold (minConfidence) controls which AI fixes are eligible for auto-merge. InariWatch adjusts this threshold automatically based on your project's actual approval history — no manual tuning needed.

| Condition | Action |
|---|---|
| Approval rate ≥ 80% (last 30 days, ≥ 8 sessions) | Lowers threshold to min_approved_confidence − 3 (floor: 55) |
| Approval rate < 50% (last 30 days, ≥ 3 cancellations) | Raises threshold to median_cancelled_confidence + 5 (cap: 95) |
| Change < 5 points | No adjustment — avoids noise from small fluctuations |

When auto-tune adjusts the threshold, the new value is shown in Project Settings → Auto-Merge next to the confidence input: Auto-tuned 70 → 65 · 3 days ago with a trend arrow.
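As a sketch, the table's rules look roughly like this. Field and function names are hypothetical; the real logic runs server-side.

```typescript
// Sketch of the auto-tune rules above (hypothetical names and shape).
interface History {
  sessions: number;                  // remediation sessions, last 30 days
  approvalRate: number;              // 0..1
  cancellations: number;
  minApprovedConfidence: number;     // lowest confidence among approved fixes
  medianCancelledConfidence: number; // median confidence among cancelled fixes
}

function autoTune(current: number, h: History): number {
  let next = current;
  if (h.sessions >= 8 && h.approvalRate >= 0.8) {
    next = Math.max(h.minApprovedConfidence - 3, 55);       // lower the bar
  } else if (h.cancellations >= 3 && h.approvalRate < 0.5) {
    next = Math.min(h.medianCancelledConfidence + 5, 95);   // raise the bar
  }
  // Ignore small fluctuations: changes under 5 points are dropped.
  return Math.abs(next - current) < 5 ? current : next;
}
```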

Autonomous Mode — Auto-Heal

When your site goes down, InariWatch automatically rolls back to the last successful deploy and starts an AI fix in the background. Total downtime: ~90 seconds.

  1. 1

    Enable

    Project Settings → Auto-Merge → toggle Auto-heal (red). Requires a hosting integration (Vercel, Netlify, Cloudflare Pages, or Render).
  2. 2

    Uptime detects failure

    3 consecutive ping failures (not just 1) confirm the site is down. Prevents false positives.
  3. 3

    Rollback

    Automatically rolls back to the last successful deploy on whichever host the project uses. Site is back online in ~30 seconds.
  4. 4

    AI fix

    Remediation starts in background. When the fix is ready, a new deploy replaces the rollback with everything + the fix.
  5. 5

    Cooldown

    10-minute cooldown between auto-heal triggers prevents loops if the issue is not code-related (DB down, DNS, etc.).
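The trigger logic in steps 2 and 5 can be sketched as follows. Names are hypothetical; the real checks run in the InariWatch uptime pipeline.

```typescript
// Sketch of the auto-heal trigger: 3 consecutive ping failures confirm
// downtime, and a 10-minute cooldown prevents rollback loops.
const CONSECUTIVE_FAILURES = 3;
const COOLDOWN_MS = 10 * 60 * 1000;

function shouldAutoHeal(
  recentPings: boolean[],      // true = ping succeeded, ordered oldest first
  lastHealAt: number | null,   // epoch ms of the last auto-heal, or null
  now: number                  // epoch ms
): boolean {
  const lastThree = recentPings.slice(-CONSECUTIVE_FAILURES);
  const down =
    lastThree.length === CONSECUTIVE_FAILURES && lastThree.every(ok => !ok);
  const cooled = lastHealAt === null || now - lastHealAt >= COOLDOWN_MS;
  return down && cooled;
}
```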

Staging Verification

Before any fix reaches production, InariWatch deploys it to an ephemeral staging environment. A Playwright bot replays the exact HTTP requests from the Substrate recording — the same actions that caused the original crash — against the fixed code. If the error persists, the AI retries with a different approach. If it passes, the fix proceeds to the auto-merge safety gates.

  1. 1

    Fix generated

    AI generates a code fix and pushes it to a branch.
  2. 2

    Staging deploy

    The fix branch is deployed to an isolated Docker container with its own URL (e.g. fix-abc.staging.inariwatch.com).
  3. 3

    Bot verification

    A headless Chromium browser replays the recorded user session against the staging URL. Checks for 500 errors, console exceptions, and response correctness.
  4. 4

    Result

    If the bot confirms the fix works, it proceeds to the 11 safety gates. If it fails, AI retries with a different approach (up to 2 retries).
  5. 5

    Cleanup

    The staging container is automatically destroyed after verification (5 min TTL). No manual cleanup needed.

Staging Environment Variables

When AI remediation verifies a fix, it deploys the code to an ephemeral staging container. If your app needs environment variables to start (database URL, auth secrets, API keys), configure them in Project Settings → Staging Environment Variables.

Setup

Go to your project settings page (/projects/your-project-slug) and find the Staging Environment Variables section. Add key-value pairs:

| Variable | Example | Notes |
|---|---|---|
| DATABASE_URL | postgresql://user:pass@host/db | Required if your app uses a database |
| NEXTAUTH_SECRET | random-string-here | Required for Next.js auth |
| NEXTAUTH_URL | http://localhost:3000 | Any valid URL — staging overrides it |
Pro tip: Values are encrypted at rest (AES-256-GCM) and never shown after saving — only the key names are visible. The full values are decrypted server-side only when deploying a staging container.
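As an illustration of what "encrypted at rest (AES-256-GCM)" involves, here is a minimal roundtrip with Node's built-in crypto module. This is a sketch, not InariWatch's actual storage code.

```typescript
// Illustrative AES-256-GCM encrypt/decrypt roundtrip (not InariWatch code).
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = randomBytes(32); // 256-bit key, held server-side

function encrypt(plain: string): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}

const box = encrypt("postgresql://user:pass@host/db");
```

The auth tag is why tampering with a stored value makes decryption fail outright instead of returning garbage.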

How it works

  1. 1

    AI generates a fix

    The remediation pipeline creates a fix branch and pushes it to GitHub.
  2. 2

    Staging deploys the fix

    An ephemeral Docker container is created with your fix branch. Your staging env vars are injected into the container at startup.
  3. 3

    Bot verifies

    A headless browser checks that the app starts and responds correctly.
  4. 4

    Container destroyed

    After verification (pass or fail), the container and all env vars are destroyed. TTL is 5 minutes.

Without staging env vars

If no staging env vars are configured, the staging gate is skipped — not failed. The PR is still created, CI still runs, and all other safety gates still apply. Staging verification is an optional extra layer of confidence.

Preview Fix

When an autonomous remediation completes, Preview Fix renders the merged fix two ways: an AI prediction you can look at in 2–3 seconds, and a live sandbox running the fix branch in an ephemeral Docker container for 24 hours. Every preview gets a shareable public URL with a cryptographic receipt from the EAP chain.

Note: Preview Fix reuses the same staging_env you configured above. Same vars, same encryption, same model — the fix branch runs with those values. Use preview-specific credentials (throwaway DB branch, test Stripe keys), not production.

Getting access

Preview Fix is rolling out to alpha workspaces first. If your org is on the allowlist, the panel appears automatically on any alert whose remediation reached the completed + merged state — no toggles, no config on your end.

Not seeing the panel on a merged fix? Email hello@inariwatch.com with your workspace slug and we'll add you.

How it works

  1. 1

    Alert page renders

    The <PreviewPanel> component POSTs to /api/alerts/:id/preview. Idempotent — the same remediation always returns the same preview row, even across refreshes.
  2. 2

    Tier 3 — AI prediction

    GPT-5.4 reads the last DOM snapshot from your Substrate recording, applies the fix diff, and returns predicted HTML. Cached per (alert, merged commit sha) so refreshes cost 0¢. Typical budget: ~$0.04 per cache miss.
  3. 3

    Tier 1 — Live sandbox

    A Go staging server on Hetzner clones the fix branch, auto-generates a Dockerfile for the detected framework (Next.js, Express, generic), and runs docker run behind a dynamic preview-<id>.staging.inariwatch.com Caddy route. TTL 24h.
  4. 4

    Screenshot captured

    The moment Tier 1 reaches running, a Playwright worker captures a 1280×800 PNG of the home page and uploads it to Cloudflare R2. The panel polls the preview row every 2s and swaps the skeleton for the hero image when it lands.
  5. 5

    Share

    The panel footer exposes a 12-character capability URL (app.inariwatch.com/preview/<slug>). Paste it in Slack or Twitter — the OG unfurl shows the real screenshot.

What you see in the panel

| State | Panel shows |
|---|---|
| creating | 3-line Anthropic-style shimmer (≤200ms) |
| building | “Provisioning preview container…” |
| starting | Container booting — waiting for health check |
| running, no screenshot yet | Big centered “Open live preview” CTA + “Capturing screenshot…” spinner |
| running + screenshot ready | Hero card with the real image, overlay CTAs, optional “Try embedded view” |
| failed | Amber error card with the last ~40 lines of the build log + “Use AI prediction instead” CTA |
| expired | Muted card — the 24h window closed; the fix is still merged in production |
| revoked | Footer shows “Revoked” badge, public URL returns 410 Gone |

Revoking a share URL

The project owner or any org member can revoke a share URL by clicking Revoke in the panel footer. The public /preview/<slug> page then returns 410 Gone with a friendly notice. The live container and screenshot stay — revoke is a visibility signal, not a destruction op.

Warning: Social unfurls cache aggressively. Revoking does not remove an already-fetched Twitter / Slack / LinkedIn OG image. Visitors who click through land on the revocation notice, but the image itself may persist in third-party caches for days.

Troubleshooting

| Symptom | Cause | What to do |
|---|---|---|
| Panel never appears on a merged fix | Your workspace is not yet on the alpha allowlist | Email hello@inariwatch.com with your workspace slug. |
| Live build failed — “No GitHub integration for project” | The project has no active GitHub integration (disconnected or never connected) | Connect GitHub from /integrations on the project. |
| Live build failed — “GitHub token was rejected” | The PAT expired or was revoked on GitHub | Rotate the PAT on GitHub, then reconnect from /integrations. The health banner on /integrations flags this automatically. |
| Live build failed — app crashes at boot | Your app needs env vars for SSR that aren't in Project Settings → Staging environment variables | Add the missing env vars. Use a preview DATABASE_URL (throwaway Neon branch) and test-mode keys for Stripe / auth / etc. |
| Tier 3 (AI prediction) shows “n/a” | The alert has no Substrate recording, so there's no DOM snapshot to predict from | Server errors, background jobs, and alerts ingested from external sources without UI events don't have rrweb data. Tier 1 (live sandbox) still runs. |
| Public /preview/<slug> returns 410 Gone | The preview was revoked by the workspace owner | The live preview is no longer public. The fix itself is still merged in production. |
| OG unfurl on Twitter/Slack shows the gradient card, not the screenshot | The screenshot capture hadn't completed when the social crawler first hit the page | Force a re-scrape (Twitter Card Validator / Slack: repost the URL after the screenshot arrives). Crawlers cache OG aggressively. |

Autonomous Mode — Community Fixes

Every fix that gets approved is automatically and anonymously contributed to the community network. When a new error matches a known pattern, the fix appears instantly on the alert with its success rate — no AI generation needed.

Example: "12 teams hit this error. Community fix available — one click to apply."

Click Apply Community Fix to use the proven fix instead of generating a new one. The more teams use InariWatch, the faster everyone's errors get fixed. This is the network effect.

How auto-contribute works

  1. 1

    Fix approved

    When you approve an AI-generated fix, it is automatically contributed to the network. No action required.
  2. 2

    Anonymization

    All PII, secrets, API keys, IPs, URLs, and file contents are stripped before contribution. Only file paths, the fix approach, and confidence score are shared.
  3. 3

    Deduplication

    If another team already contributed a fix for the same error fingerprint with the same approach, the success count is incremented instead of creating a duplicate.
  4. 4

    Network effect

    As more teams use InariWatch, common framework errors accumulate high-confidence fixes. New errors skip AI generation entirely and resolve in seconds.
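The deduplication step can be sketched as a keyed counter: the same (error fingerprint, fix approach) pair increments a success count instead of creating a new entry. The shape here is hypothetical; the real network stores anonymized metadata alongside the count.

```typescript
// Sketch of community-fix deduplication (hypothetical shape).
const network = new Map<string, { successCount: number }>();

function contribute(errorFingerprint: string, approach: string): number {
  const key = `${errorFingerprint}:${approach}`;
  const existing = network.get(key);
  if (existing) {
    existing.successCount += 1; // duplicate: bump the proven-fix counter
  } else {
    network.set(key, { successCount: 1 }); // first contribution for this pair
  }
  return network.get(key)!.successCount;
}
```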

Slack Bot — Setup

The InariWatch Slack bot brings error monitoring, AI diagnosis, and auto-remediation directly into Slack. No more switching tabs — see errors, read the diagnosis, trigger fixes, and merge PRs without leaving your chat.

  1. 1

    Install to Slack

    Go to Settings → Slack → click Install Slack Bot. Authorize InariWatch in your Slack workspace.
  2. 2

    Map channels

    After installing, map each project to a Slack channel (e.g. api-service → #alerts-api). Alerts for that project will appear in the mapped channel.
  3. 3

    Link your account

    In Slack, run /inariwatch link your@email.com to connect your Slack user to your InariWatch account. This enables interactive actions.
Note: The bot requires 3 environment variables on Vercel: SLACK_CLIENT_ID, SLACK_CLIENT_SECRET, and SLACK_SIGNING_SECRET.
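SLACK_SIGNING_SECRET exists because Slack signs every request with its documented v0 scheme: the handler recomputes an HMAC-SHA256 over `v0:<timestamp>:<body>` and compares it to the X-Slack-Signature header. A sketch with Node's crypto module (not InariWatch's actual handler; production code should also reject stale timestamps):

```typescript
// Sketch of Slack's documented v0 request-signature verification.
import { createHmac, timingSafeEqual } from "node:crypto";

function verifySlackSignature(
  secret: string,     // SLACK_SIGNING_SECRET
  timestamp: string,  // X-Slack-Request-Timestamp header
  body: string,       // raw request body
  signature: string   // X-Slack-Signature header, "v0=<hex>"
): boolean {
  const base = `v0:${timestamp}:${body}`;
  const expected = "v0=" + createHmac("sha256", secret).update(base).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time comparison avoids leaking signature bytes via timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```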

Slack Bot — Commands (14)

| Command | Description |
|---|---|
| /inariwatch status | Overview: open alert count, critical alerts, who is on-call |
| /inariwatch alerts [severity] [--resolved] | List alerts with optional severity filter (critical, warning, info) |
| /inariwatch fix <id> | Trigger AI remediation for an alert (diagnose, fix, PR) |
| /inariwatch oncall | Show current on-call rotation for all your projects |
| /inariwatch oncall swap @user | Create a 24-hour on-call override for another user |
| /inariwatch trends [days] | Error trends: top recurring errors, period comparison (default: 7 days) |
| /inariwatch ask <question> | Ask Inari AI about your infrastructure in natural language |
| /inariwatch uptime | Check all uptime monitors with status codes and response times |
| /inariwatch rollback <project> | Rollback to previous production deploy on any supported host (Vercel, Netlify, CF Pages, Render) |
| /inariwatch maintenance <project> <mins> | Create a maintenance window (suppresses alerts) |
| /inariwatch maintenance list | Show active maintenance windows |
| /inariwatch search <error text> | Search community fix network for known solutions |
| /inariwatch integrations | Health check: status of all connected services |
| /inariwatch link <email> | Link your Slack account to your InariWatch account |

Button actions (10)

These buttons appear on alert messages and remediation threads:

| Button | Where | What it does |
|---|---|---|
| Acknowledge | Alert message | Mark alert as read |
| Resolve | Alert message | Mark alert as resolved |
| Reopen | Resolved alerts | Reopen a resolved alert |
| Fix It | Alert message | Trigger full AI remediation pipeline |
| Apply Community Fix | Community fix suggestion | Apply a known fix from the network |
| Rate: Worked / Didn't Work | After community fix applied | Rate fix quality (improves network) |
| Approve & Merge | Draft PR message | Approve and merge the AI-generated fix |
| Cancel | Draft PR message | Cancel in-progress remediation |
| Retry | Failed remediation | Retry remediation with a fresh attempt |
| Generate Postmortem | Incident storm | Generate AI postmortem for the incident |

Slack Bot — Fix from Slack

When an alert appears in Slack, click Fix It to trigger the full AI remediation pipeline. Progress updates appear as thread replies in real-time:

  1. 1

    Analyzing repository

    The AI connects to your GitHub repo and reads the codebase.
  2. 2

    Diagnosing root cause

    AI analyzes the error with context from Sentry, Vercel, Substrate recordings, and past fixes.
  3. 3

    Generating fix

    Code changes are generated and pushed to a new branch.
  4. 4

    Security scan

    3-layer security scan: 17 ESLint rules (eslint-plugin-security), 19 pattern detectors (SSRF, prototype pollution, hardcoded secrets, SQL injection, XSS, open redirect, etc.), and AI security review. HIGH findings block auto-merge.
  5. 5

    Self-review

    A second AI call reviews the fix like a senior engineer — score, concerns, recommendation.
  6. 6

    Waiting for CI

    The bot waits for GitHub Actions to pass (retries up to 3 times on failure).
  7. 7

    PR created

    A PR appears in the thread with confidence score and EAP verification. Click Approve & Merge to merge from Slack.

If Substrate is enabled, the recording (HTTP calls, DB queries, file operations) is automatically attached to the thread.

Slack Bot — Ask Inari

Mention @InariWatch in any channel or send a DM to ask questions about your errors:

Examples
@InariWatch what broke today?
@InariWatch why does the payment endpoint fail on Fridays?
@InariWatch summarize this week's incidents

If you ask in an alert thread, the AI automatically includes that alert's full context (stack trace, AI diagnosis, remediation history). Responses use your BYOK AI key from Settings.

Slack Bot — On-Call in Slack

When a critical alert arrives, the bot automatically tags the on-call engineer in the thread. Use /inariwatch oncall to see rotations and /inariwatch oncall swap @user to hand off.

Slack Bot — Deploy Monitoring

When a Vercel deploy succeeds, the bot posts a notification and monitors error rates for 15 minutes. After the monitoring window, it posts a follow-up: healthy or unhealthy with error count.
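The follow-up verdict can be sketched as below. The error threshold is a hypothetical parameter, and the function name is illustrative, not bot code.

```typescript
// Sketch of the 15-minute post-deploy health verdict described above.
const WINDOW_MS = 15 * 60 * 1000;

function deployVerdict(
  deployAt: number,            // epoch ms of the successful deploy
  errors: { at: number }[],    // error events observed since the deploy
  now: number,                 // epoch ms
  threshold = 0                // hypothetical: errors tolerated in the window
): "monitoring" | "healthy" | "unhealthy" {
  if (now - deployAt < WINDOW_MS) return "monitoring"; // window still open
  const inWindow = errors.filter(
    e => e.at >= deployAt && e.at < deployAt + WINDOW_MS
  ).length;
  return inWindow > threshold ? "unhealthy" : "healthy";
}
```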

Telegram Bot — Setup

The InariWatch Telegram bot has full parity with Slack — 15 commands, 13 inline button callbacks, auto-delivery of alerts with AI diagnosis, and all remediation workflows.

  1. 1

    Create a Telegram bot

Open Telegram → search for @BotFather → send /newbot. Copy the token.
  2. 2

    Connect in Settings

    Go to Settings → Notification channels → Telegram. Paste the bot token.
  3. 3

    Link your account

    Send /link your@email.com to the bot to connect your Telegram to InariWatch.
  4. 4

    Set webhook

    The webhook URL is https://app.inariwatch.com/api/telegram/webhook. Set TELEGRAM_WEBHOOK_SECRET in your env.

Commands (15)

| Command | Description |
|---|---|
| /status | Open alert count, critical alerts, who is on-call |
| /alerts [severity] | List alerts with optional filter (critical, warning, info) |
| /fix_ALERTID | Trigger AI remediation for an alert |
| /oncall | Show current on-call rotation |
| /oncall swap EMAIL | Create a 24-hour on-call override |
| /trends [days] | Error trends: top errors, period comparison |
| /ask QUESTION | Ask Inari AI about your infrastructure |
| /uptime | Check all uptime monitors |
| /rollback PROJECT | Rollback to previous deploy on any supported host (Vercel, Netlify, CF Pages, Render) |
| /maintenance PROJECT MINS | Create a maintenance window |
| /maintenance list | Show active maintenance windows |
| /search ERROR | Search community fix network |
| /integrations | Integration health check |
| /link EMAIL | Link your Telegram to InariWatch |
| /help | Show all commands |

Button Actions (10)

Inline buttons appear on alert messages and remediation updates:

| Button | What it does |
|---|---|
| Ack | Acknowledge alert |
| Resolve | Resolve alert |
| Reopen | Reopen resolved alert |
| Fix | Trigger AI remediation |
| Apply Fix | Apply community fix |
| Worked / Didn't Work | Rate community fix quality |
| Approve & Merge | Approve AI-generated PR |
| Cancel | Cancel in-progress remediation |
| Retry | Retry failed remediation |
| Generate Postmortem | AI postmortem for incidents |

Auto-Delivery

These messages are sent automatically — no command needed:

| Feature | What it sends |
|---|---|
| Alert push | New alerts with AI diagnosis + Ack/Resolve/Fix buttons |
| Substrate recording | I/O recording attached 5s after alert (HTTP calls, DB queries) |
| Community fix suggest | Known fix with success rate + Apply/Rate buttons |
| On-call tagging | DM to on-call engineer on critical alerts |
| Incident storms | Grouped notification + Generate Postmortem button |
| Deploy notifications | Success/failure + 15-min health follow-up |
| Shadow replay | Execution replay risk score |
| PR predictions | Pre-deploy risk warning with View PR link |
| EAP verification | Cryptographic verification chain display |
| Weekly digest | Stats, top alerts, AI summary via cron |
| Remediation progress | Step-by-step updates as replies |
Full parity with Slack. Every feature available in the Slack bot is also available in Telegram — same commands, same buttons, same auto-delivery. Choose whichever your team prefers.

InariWatch Bot — Overview

InariWatch Bot is the native mobile app — your 4th notification channel alongside Slack, Telegram, and the web dashboard. Unlike Slack/Telegram, it has zero third-party limitations: full alert bodies, colored diffs, substrate I/O recordings, and native push notifications 24/7.

| Feature | Slack/Telegram | InariWatch Bot |
|---|---|---|
| Alert body | Truncated (3000 chars) | Full — no limit |
| AI diagnosis | Truncated | Full — no limit |
| Code diffs | Monospace plain text | Colored (green/red) |
| Substrate I/O | Summary only | Full event browser |
| Push notifications | Via third-party app | Native iOS/Android |
| Quick actions | Inline buttons | Swipe gestures + buttons |

Install

Download InariWatch Bot →

| Platform | How to install |
|---|---|
| Android | Download the APK from app.inariwatch.com/download — enable "Install from unknown sources" |
| iOS | Join TestFlight from app.inariwatch.com/download — tap "Join" |

On first launch, tap Sign in with InariWatch. The browser opens for authentication — approve and the app is ready. Push notifications register automatically.

Screens (5)

| Screen | What it does |
|---|---|
| Feed | Real-time alert list (10s polling). Filter by severity. Swipe right to ack, left to resolve. Tap for detail. |
| Alert Detail | Full body + AI diagnosis + Substrate I/O recording + Community fix (with success rate) + Remediation history. Actions: Ack, Resolve, Fix It. |
| Fix Progress | Live remediation timeline (3s polling). Steps: analyzing → reading code → generating → CI → PR. Approve, cancel, or retry. |
| Ask Inari | Chat with AI about your infrastructure. Full context: alerts, remediations, integrations, uptime. Example questions to tap. |
| Status | Uptime monitors (green/red), on-call rotation, alert count, error trends (7 days). |

Push Notifications

Push notifications use Expo Push Service → FCM (Android) / APNs (iOS). They work 24/7, even when the app is closed.

| Severity | Behavior |
|---|---|
| Critical | High priority push with urgent sound + vibration |
| Warning | Normal push with default sound |
| Info | Silent push (badge update only) |

Tapping a push notification opens the alert detail directly (deep link). Configure which severities trigger push in Settings → Notification channels on the web dashboard.

Same service layer. InariWatch Bot calls the same 17 MCP tools and service layer as Slack, Telegram, and the dashboard. A fix triggered from the mobile app appears in Slack. An alert resolved in Telegram disappears from the mobile feed.

VS Code Extension — Setup

The InariWatch VS Code extension shows errors inline in your editor with AI diagnosis on hover. No need to open a dashboard — errors appear as squiggly lines right where the code is.

  1. 1

    Install the extension

    Search for InariWatch in the VS Code marketplace, or install from the command line: code --install-extension inariwatch.inariwatch
  2. 2

    Sign in

    Open the command palette and run InariWatch: Sign In. Paste your API token from Settings → API Keys.
  3. 3

    Alerts appear

    Unresolved alerts from your projects appear as inline diagnostics, in the sidebar, and in the status bar.

VS Code Extension — Features

| Feature | Description |
|---|---|
| Inline diagnostics | Error locations from stack traces appear as squiggly lines in your editor |
| Sidebar panel | TreeView showing all alerts grouped by file with severity icons |
| Hover diagnosis | Hover over an error line to see the AI diagnosis in a tooltip |
| Status bar | Unread alert count in the bottom status bar, click to open sidebar |
| Mark read / Resolve | Right-click an alert in the sidebar to mark as read or resolve |
| Open in dashboard | Jump to the full alert detail in your browser |

The extension polls the InariWatch API every 30 seconds (configurable) and supports real-time updates via SSE.

VS Code Extension — Local Mode

The extension can work without a cloud account. Set inariwatch.mode to local in VS Code settings. It runs a local server on port 9222 that receives errors directly from the capture SDK.

Capture SDK → VS Code (local)
# Set your app's DSN to the local extension server
INARIWATCH_DSN=http://localhost:9222/ingest

Errors appear instantly in your editor. No account, no cloud, no signup.
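In local mode the capture SDK simply POSTs errors to the extension's ingest endpoint. As a rough sketch (the payload field names here are illustrative assumptions, not the SDK's actual schema), forwarding a caught error might look like:

```typescript
// Minimal sketch of posting an error to the local extension server.
// Payload fields (message, stack, timestamp) are assumptions; check the
// @inariwatch/capture SDK for the real ingest schema.
const LOCAL_DSN = "http://localhost:9222/ingest";

interface ErrorPayload {
  message: string;
  stack?: string;
  timestamp: string;
}

function buildErrorPayload(err: Error): ErrorPayload {
  return {
    message: err.message,
    stack: err.stack,
    timestamp: new Date().toISOString(),
  };
}

// Usage (network call shown, not executed here):
// await fetch(LOCAL_DSN, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildErrorPayload(err)),
// });
```

In practice the SDK does this for you once INARIWATCH_DSN points at the local server.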

Notifications — Telegram

The Telegram bot has full parity with Slack — 15 commands, 13 inline button callbacks, auto-delivery with AI diagnosis, and all remediation workflows. See the Telegram Bot section above for the complete setup and feature guide.

Notifications — Email

Email delivery is handled by InariWatch — you just provide your address. Critical alerts are sent immediately; warning and info alerts are batched into digests.

  1. 1

    Go to Settings → Notification channels → Email

    Enter your email address and click Send verification.
  2. 2

    Verify your address

    Click the link in the verification email. Alerts won't send until verified.
  3. 3

    Set minimum severity (optional)

    You can filter to Critical only to reduce noise.
Note: To keep InariWatch free and respect email limits, non-critical alerts are batched into daily/weekly digests. Only Critical alerts are sent immediately.

Notifications — Slack

  1. 1

    Create an Incoming Webhook in Slack

    api.slack.com/apps → Create App → From scratch → Incoming Webhooks → Add new webhook to workspace.
  2. 2

    Select a channel

    Choose the channel where alerts should appear (e.g. #incidents). Copy the webhook URL.
  3. 3

    Paste into InariWatch

    Settings → Notification channels → Slack → paste the webhook URL.

Notifications — Push (browser)

Browser push sends OS-level notifications to your desktop or mobile browser — no app needed.

  1. 1

    Go to Settings → Notification channels → Push

    Click Enable push notifications.
  2. 2

    Allow browser permissions

    Your browser will prompt to allow notifications. Click Allow.
  3. 3

    Done

    InariWatch will send a test notification immediately to confirm it works.
Warning: Push notifications are only delivered while your browser is running. For 24/7 coverage, use Telegram or email.

Notifications — On-Call Schedules

InariWatch allows you to configure timezone-aware daily on-call rotations for your team. Instead of paging the entire team with critical alerts, Escalation Rules can dynamically route the notification to the specific developer currently on-call.

  1. 1

    Go to your Project → On-Call Schedule

    Click Add schedule and set your project's timezone.
  2. 2

    Add members to slots

    Select a user and choose their day and hour ranges (e.g. Mon-Fri, 09:00-17:00).
  3. 3

    Enable in Escalation Rules

    Escalation rules will automatically use the on-call schedule before falling back to fixed channels.
Note: A green badge will appear in the dashboard indicating exactly who is currently on-call based on the active slots.
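Conceptually, resolving who is on-call is a lookup over the configured slots. This hypothetical sketch works in UTC for simplicity (the real feature is timezone-aware per project), and the Slot shape is an assumption:

```typescript
// Hypothetical slot matching; real InariWatch schedules are timezone-aware.
interface Slot {
  userId: string;
  days: number[];      // 0 = Sunday ... 6 = Saturday
  startHour: number;   // inclusive
  endHour: number;     // exclusive
}

function currentOnCall(slots: Slot[], now: Date): string | null {
  const day = now.getUTCDay();
  const hour = now.getUTCHours();
  const match = slots.find(
    (s) => s.days.includes(day) && hour >= s.startHour && hour < s.endHour
  );
  return match ? match.userId : null;
}
```

Outside every slot the function returns null, which is when escalation rules fall back to fixed channels.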

Notifications — Schedule Overrides

Schedule Overrides let you temporarily replace the on-call person without modifying the base rotation. Perfect for sick days, vacations, or emergencies.

  1. 1

    Go to your Project → On-Call Schedule

    Find the schedule you want to override.
  2. 2

    Click 'Add Override'

    Select the substitute user and choose a start and end date/time.
  3. 3

    Done

    During the override window, the substitute receives all escalation notifications instead of the original on-call person.
Pro tip: Overrides take priority over regular slots. Once the override window expires, the schedule automatically falls back to the base rotation — no cleanup needed.

Notifications — Incident Storm Control

When a major infrastructure failure occurs (e.g. database crash), dozens of monitors can trigger simultaneously. Without grouping, the on-call engineer gets 50 notifications in seconds — causing alert fatigue and panic.

Incident Storm Control detects when more than 5 alerts arrive for the same project within a 5-minute window. Instead of sending individual notifications, InariWatch groups them into a single "Incident Storm" message:

Example Storm Notification
🚨 [INCIDENT STORM] 14 alerts detected in 5 min
Project: my-production-app

Likely a cascading failure.
Resolve the root cause — all grouped alerts will clear together.
Note: Storm detection is fully automatic — no configuration needed. All alerts within a storm are linked to the same incident ID for post-mortem analysis.
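The storm rule above (more than 5 alerts for the same project inside a 5-minute window) can be sketched as a pure check over recent alerts; the data shape is illustrative:

```typescript
// Storm rule: more than 5 alerts for one project within 5 minutes.
interface IncomingAlert {
  projectId: string;
  createdAt: number; // epoch ms
}

const STORM_THRESHOLD = 5;
const WINDOW_MS = 5 * 60 * 1000;

function isStorm(alerts: IncomingAlert[], projectId: string, now: number): boolean {
  const recent = alerts.filter(
    (a) => a.projectId === projectId && now - a.createdAt <= WINDOW_MS
  );
  return recent.length > STORM_THRESHOLD;
}
```

When the check trips, individual notifications are suppressed in favor of the single grouped message shown above.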

Notifications — Interactive ACK

When InariWatch sends a critical alert to Telegram, the message includes interactive inline buttons that let you take action directly from your phone:

  • 👁️ Acknowledge — Stops the escalation timer. Your team knows you're looking at it.
  • ✅ Resolve — Marks the alert as resolved. No more follow-up notifications.
Pro tip: No need to open your laptop at 3 AM. Tap the button in Telegram from your bed and the escalation engine respects your acknowledgment instantly.

Desktop app — Setup & token

The InariWatch desktop app is a lightweight tray app that polls your account in the background and shows OS notifications — even when you're not in the browser.

  1. 1

    Download the desktop app

    Download the installer for your OS from the releases page. Supports macOS, Windows, and Linux.
  2. 2

    Generate a desktop token

    Go to Settings → Desktop app → Generate token. This creates a token starting with rdr_....
  3. 3

    Add the token to the config file

    Create or edit ~/.config/inari/desktop.toml with the values below.
  4. 4

    Start the app

    The tray icon appears (◉). Alerts will show as OS notifications. Click the icon to open the dashboard.
Note: The desktop app is completely free; just generate a token to connect it.

Desktop app — desktop.toml

~/.config/inari/desktop.toml
api_url   = "https://inariwatch.com"
api_token = "rdr_your_token_here"

The app polls /api/desktop/alerts every 60 seconds using this token. Alerts are shown as OS notifications and marked as read in the dashboard.

Analytics

The Analytics page (/analytics) gives you a 14-day view of alert trends and a 30-day view of AI remediation performance. All metrics update in real time as alerts arrive and fixes are approved.

SectionWindowWhat it shows
Alert trends14 daysAlerts per day (stacked by severity), by source, by severity distribution
AI Remediation30 daysApproval rate, avg confidence, avg decide time, auto-merge count, post-deploy success
Response time comparison30 daysHuman MTTR vs AI MTTR side by side
Cost savings30 daysEstimated engineering cost recovered based on time saved

MTTR comparison

InariWatch tracks two resolution times separately:

MetricDefinition
Human MTTRAverage time from alert created to resolved for alerts fixed manually (no AI remediation session)
AI MTTRAverage time from alert created to resolved for alerts fixed via an approved AI remediation session

When AI MTTR is at least 2× faster than human MTTR, a speedup banner appears: "AI resolves incidents 15× faster than manual review."

Note: MTTR data starts populating as soon as alerts are resolved. Alerts resolved before upgrading to this version do not have a resolved_at timestamp and are not included in the calculation.
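The banner condition described above can be sketched as a small check (MTTR values in seconds; the rounding of the displayed multiplier is an assumption):

```typescript
// Speedup banner: shown only when AI MTTR is at least 2x faster.
function speedupBanner(humanMttrSec: number, aiMttrSec: number): string | null {
  if (aiMttrSec <= 0) return null;
  const speedup = humanMttrSec / aiMttrSec;
  if (speedup < 2) return null;
  return `AI resolves incidents ${Math.round(speedup)}× faster than manual review.`;
}
```

For example, a human MTTR of 2 hours against an AI MTTR of 8 minutes yields the 15× banner.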

Cost savings

InariWatch estimates the engineering cost recovered by AI remediation using a simple formula:

Formula
hours_saved  = ai_resolved_alerts × (human_mttr − ai_mttr) / 3600
cost_saved   = hours_saved × $150 / hr

The $150/hr rate is a conservative industry average for senior engineering time. The card appears in the Response time comparison section and only renders when there is enough data to show a meaningful difference.
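A worked instance of the formula (MTTR values in seconds, matching the division by 3600):

```typescript
// Cost savings per the formula above: MTTR inputs are in seconds.
const RATE_PER_HOUR = 150;

function costSaved(aiResolved: number, humanMttrSec: number, aiMttrSec: number): number {
  const hoursSaved = (aiResolved * (humanMttrSec - aiMttrSec)) / 3600;
  return hoursSaved * RATE_PER_HOUR;
}

// 20 AI-resolved alerts, human MTTR 2h (7200s), AI MTTR 12min (720s):
// hours_saved = 20 * 6480 / 3600 = 36; cost_saved = 36 * $150 = $5,400
```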

AI Remediation stats

MetricDefinition
RemediationsTotal AI fix sessions started in the last 30 days
Approval rate% of sessions approved (status = completed)
Avg confidenceMean AI confidence score (0–100) across all sessions
Avg decide timeMean time from fix proposed to human approval
Auto-mergedSessions merged without a human click (autonomous mode)
Post-deploy% of merged fixes that passed the 10-min monitoring window
RevertedFixes auto-reverted after a regression was detected post-merge
CancelledSessions rejected by the developer

Weekly Digest

Every Monday InariWatch sends a weekly summary to all active notification channels — email and Slack. The digest covers the last 7 days and includes an optional AI-generated commentary if you have an AI key configured.

What's included

FieldDescription
Total alertsCount of all alerts received in the last 7 days
CriticalAlerts with severity = critical
ResolvedAlerts marked as resolved
OpenAlerts still unresolved at send time
Top 5 alertsMost critical recent alerts, sorted by severity then time
AI summary2–3 sentence narrative generated by your configured AI key (optional)

Delivery channels

The digest is sent to every verified email channel and every active Slack channel mapping on your account. A user with both email and Slack configured receives both. Users with no alerts in the past 7 days are skipped.

Pro tip: The digest is sent by an external cron via cron-job.org every Monday at 08:00 UTC. You can trigger it manually with GET /api/cron/digest using your CRON_SECRET header.

Code Intelligence

Code Intelligence is InariWatch's built-in Code RAG system. It indexes your entire codebase so the AI understands your conventions, patterns, and architecture — generating fixes that look like they were written by someone on your team, not a stranger.

The system combines tree-sitter AST parsing, Voyage Code 3 embeddings, hybrid search (vector + BM25), and a dependency graph to give the AI deep understanding of your code.

CapabilityWhat it doesStatus
Codebase IndexingParse every function, class, and type in your repo via tree-sitter ASTAutomatic
Hybrid SearchVector similarity + full-text search with AI re-rankingAutomatic
Dependency GraphKnows who calls what — if you change function A, it knows B and C are affectedAutomatic
Fix ReplayEmbeds past successful fixes — "this was fixed 3 times, here's what worked"Automatic
Regression TestsAI generates tests that reproduce the bug and verify the fixAutomatic
Substrate ReplayVerifies fix against recorded I/O that caused the crashWhen Substrate active
E2E StagingSpins up staging via GitHub Actions and runs Playwright tests against the fixWhen E2E detected
Note: Code Intelligence activates automatically when you connect a GitHub integration. No configuration needed — it detects your language, framework, and test setup.

Codebase Indexing

When you connect GitHub, InariWatch automatically indexes your repository. The pipeline:

  1. 1

    Fetch repo files via GitHub API

    Respects blocklists — skips .env, node_modules, lock files, binaries, build output.
  2. 2

    Parse AST with tree-sitter WASM

    Extracts every function, class, method, type, and interface with precise line numbers. Supports TypeScript, JavaScript, Python, Go, Rust, and Java.
  3. 3

    Generate docstrings with AI

    GPT-4o-mini generates natural language descriptions for each code chunk in batches of 15. These descriptions power the semantic search.
  4. 4

    Generate embeddings

    Voyage Code 3 (primary) or OpenAI text-embedding-3-small (fallback) creates 1024-dimensional vectors for each chunk. Stored in pgvector with HNSW index.
  5. 5

    Build dependency graph

    From the AST, extracts which functions call which, what imports each file has. Stored as edges in code_dependencies.

Incremental indexing: After the first full index, subsequent pushes only re-index changed files (via git diff). Rate limited to 1 re-index per repo per 5 minutes.
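The blocklist filtering in step 1 amounts to a path check before fetching. This is an illustrative sketch only; the indexer's actual pattern list may differ from these examples:

```typescript
// Illustrative blocklist mirroring the skip rules above (.env, node_modules,
// lock files, binaries, build output). Not the indexer's exact list.
const BLOCKED: RegExp[] = [
  /(^|\/)\.env(\..*)?$/,          // .env, .env.local
  /(^|\/)node_modules\//,
  /lock\.(json|yaml)$|\.lock$/,   // package-lock.json, yarn.lock
  /\.(png|jpg|gif|wasm|zip)$/,    // binaries
  /(^|\/)(dist|build|\.next)\//,  // build output
];

function shouldIndex(path: string): boolean {
  return !BLOCKED.some((re) => re.test(path));
}
```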

When the AI needs to find relevant code (during remediation or via the search_codebase MCP tool), it uses a five-stage retrieval pipeline:

StageMethodWhat it does
1. Vector searchpgvector cosine similarityFinds semantically similar code using Voyage Code 3 embeddings of docstrings
2. Keyword searchPostgreSQL tsvector (BM25)Finds exact matches on function names, variable names, error messages via full-text search
3. RRF FusionReciprocal Rank Fusion (k=60)Combines both result sets — chunks that appear in both get boosted
4. AI Re-rankingGPT-4o-miniFrom top 50 fused results, AI selects the 5-10 most relevant for the specific error
5. Graph enrichmentDependency graphFor each result, attaches callers (who calls it) and callees (what it calls)

The final context (up to 8,000 tokens) is injected into both the diagnosis and fix generation prompts with the instruction: "your fix MUST follow these patterns."
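Stage 3, Reciprocal Rank Fusion with k=60, can be sketched as follows: each result contributes 1/(k + rank), and chunks that appear in both lists accumulate both scores, which is exactly the boost described in the table:

```typescript
// Reciprocal Rank Fusion (k = 60): score(id) = sum of 1 / (k + rank)
// over every list the id appears in; ranks are 1-based.
function rrfFuse(vectorIds: string[], keywordIds: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [vectorIds, keywordIds]) {
    list.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A chunk ranked second in vector search and first in keyword search outranks a chunk ranked first in only one list, because the two small contributions add up.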

Embeddings — Voyage Code 3

InariWatch uses Voyage Code 3 as the primary embedding model for code — it achieves 12-15% better similarity on code retrieval benchmarks compared to OpenAI's text-embedding-3-small.

FeatureVoyage Code 3OpenAI (fallback)
Dimensions10241024 (truncated from 1536)
Input typesdocument / query (asymmetric)Single type
Code optimizedYes — trained on code retrievalGeneral purpose
DetectionAPI key starts with pa-API key starts with sk-
Cost~$0.06 / 1M tokens~$0.02 / 1M tokens
Pro tip: Set VOYAGE_API_KEY in your environment to use Voyage Code 3. If not set, InariWatch falls back to your OpenAI key automatically.

AST Parsing — Tree-sitter WASM

InariWatch uses web-tree-sitter (WebAssembly) for precise AST parsing. Unlike regex-based parsing, tree-sitter understands the actual grammar of each language — zero missed functions, correct handling of nested classes, decorators, generics, and macros.

LanguageWhat it extracts
TypeScript / JavaScriptFunctions, arrow functions, classes, methods, types, interfaces, imports
PythonFunctions, classes, methods, imports (from/import)
GoFunctions, methods (with receiver), types (struct/interface), imports
RustFunctions, impl blocks, structs, enums, traits, use declarations
JavaClasses, methods, constructors, interfaces, enums, imports

If tree-sitter WASM fails to load (rare edge case), a regex-based fallback parser activates automatically.

Fix Replay

When a fix completes successfully, InariWatch embeds the entire fix context (diagnosis + files changed + result) as a vector. When a similar error arrives later, it searches past fixes by embedding similarity.

This means the AI gets context like: "This function was fixed 3 times before. The first time, the fix was X (confidence 85%). The second time, it was Y (confidence 92%)." — turning past experience into future accuracy.

Note: Fix Replay builds up automatically over time. The more fixes InariWatch completes for your project, the better it gets at fixing similar errors.

Regression Test Generation

After generating a fix, the AI also generates 1-3 regression tests that:

  1. 1

    Reproduce the bug

    Creates a test case with the exact input that causes the crash.
  2. 2

    Verify the fix

    Asserts that with the fix applied, the same input no longer crashes.
  3. 3

    Follow your conventions

    Reads up to 3 existing test files from your repo to learn your framework (vitest, jest, pytest, go test), assertion style, and file structure.

The test files are pushed alongside the fix. Your CI runs them automatically. If the regression test fails, the fix is bad — InariWatch retries with a different approach.

Substrate Replay Verification

If your app uses @inariwatch/capture with substrate: true, InariWatch records all I/O (HTTP requests, DB queries, file operations) before a crash. Substrate Replay takes that recording and verifies the fix against it:

ModeHow it worksWhen it runs
AI AnalysisAI reads the I/O recording + fix and predicts if the fix prevents the crashAlways (when recording exists)
GitHub Action ReplayGenerates a workflow that replays the recorded HTTP requests against the fixed appOptional (generates workflow file)

The AI Analysis produces a risk score (0-100). If the score is <= 40, the substrate_replay gate passes. This is an optional gate — if no Substrate recording exists, it's skipped.

E2E Staging Verification

InariWatch auto-detects your E2E test framework (Playwright, Cypress) and generates a GitHub Actions workflow that builds the app with the fix, starts it, and runs your E2E tests against it.

  1. 1

    Detect framework

    Reads package.json to find Playwright, Cypress, or other E2E frameworks. Detects Next.js, Express, etc.
  2. 2

    Generate workflow

    Creates .github/workflows/inariwatch-e2e-staging.yml tailored to your stack.
  3. 3

    Push and wait

    Pushes the workflow to the fix branch. GitHub Actions runs it automatically. InariWatch polls every 20 seconds for results (max 10 min).
  4. 4

    Gate evaluation

    If E2E tests pass, the e2e_staging gate passes. If they fail, the fix likely introduces regressions.
Pro tip: E2E staging uses your existing GitHub Actions minutes — zero additional cost. If no E2E framework is detected in your project, this step is skipped automatically.

Safety Gates (11)

Every fix must pass through 11 safety gates before auto-merge. All gates are data-driven — no guessing.

#GatePass conditionRequired
0auto_merge_enabledEnabled in project settingsYes
1ci_passedAll CI checks pass (including regression tests)Yes
2confidenceAI diagnosis confidence >= configured thresholdYes
3lines_changedTotal lines changed <= configured maxYes
4self_reviewAI self-review score >= 70, not rejectedIf enabled
5substrate_simulateSubstrate simulate risk score <= 40If recording exists
6eap_chain_verifiedEAP cryptographic proof chain verifiedIf receipt exists
7prediction_safePrediction engine risk score <= 40If prediction ran
8security_scan0 HIGH severity findings (17 ESLint rules + 19 patterns + AI review)If scan ran
9substrate_replaySubstrate I/O replay confirms fix prevents crashIf recording exists
10e2e_stagingE2E staging tests pass in GitHub ActionsIf E2E framework detected

If all gates pass → auto-merge. If any gate fails → draft PR for human review. Optional gates (5-10) only activate when their data is available — they never block if there's nothing to check.
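The pass/skip logic above reduces to a simple rule: required gates must pass outright, and optional gates pass automatically when their input data does not exist. A sketch, with an assumed gate shape:

```typescript
// Gate evaluation sketch: optional gates with no data never block.
interface Gate {
  name: string;
  required: boolean;
  hasData: boolean; // e.g. a Substrate recording or E2E framework exists
  passed: boolean;
}

function canAutoMerge(gates: Gate[]): boolean {
  return gates.every((g) => {
    if (!g.required && !g.hasData) return true; // nothing to check: skip
    return g.passed;
  });
}
```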

Session Replay

Watch any user session — DOM playback, console, network, navigation, Web Vitals, and frustration signals on a single timeline — then click Generate Fix to open a PR. Replay is workspace-scoped: enabled per organization, invisible to personal workspaces.

Lives at /sessions (list) and /sessions/[sessionId] (player). Storage on Cloudflare R2. Replays with errors auto-correlate to alerts (or create one) so the Generate Fix button is wired the moment a session lands.

Install the SDK

Replay ships as a separate package from @inariwatch/capture. It only loads in the browser — no server bundle impact.

Install
npm install @inariwatch/capture-replay
Initialize (browser entry / _app.tsx / root layout)
import { initReplay } from "@inariwatch/capture-replay";

initReplay({
  projectId: "your-project-id",
  // optional — defaults are sensible
  sampleRate: 1.0,           // record every session
  maskAllInputs: true,       // off only if you know what you're doing
});
Note: You don't need to invoke a recorder manually. initReplay() attaches automatically on page load and starts a new session per page-view.

Identify the user

Replay never scrapes the DOM for emails — you set the user contract explicitly via a global. First write wins: once the session has identified the user, later writes are ignored to prevent collisions during SPA navigations.

After your auth resolves
window.__INARIWATCH_USER__ = {
  id:    user.id,           // optional but recommended
  email: user.email,        // optional — masked per project setting
};

Both fields are optional. If the project has hashEndUserEmails: true, the SDK still sends the raw email and the server hashes it before persistence — your filters search via the hash, never the plaintext.

Privacy & PII masking

Inputs (<input>, <textarea>), passwords, and elements with the data-inari-block attribute are masked before they ever leave the browser. The PII classifier also redacts emails, tokens, and credit-card-shaped strings from console logs.

SurfaceDefaultHow to override
InputsmaskedmaskAllInputs: false in initReplay()
Specific elementsvisibleAdd data-inari-block attribute to mask
Console payloadsPII-classifiedServer-side redaction always on
Network bodiesnot capturedBody capture deferred — PII risk
End-user emailstored plaintextPer-project hashEndUserEmails setting

Web Vitals

LCP, CLS, INP, FCP, and TTFB are captured via PerformanceObserver (no web-vitals dependency). Vitals flush on visibilitychange and pagehide so they survive tab close. The player shows the worst-rated metric as a header chip; the list card shows a single “worst vital” badge.

Vitals also feed the AI summary prompt — “LCP slow” in the explanation comes straight from the snapshot.

Rage + dead-click detection

Detected server-side from the captured event stream — no extra SDK wiring.

SignalDetectionScore weight
Rage clicks≥ 3 clicks within 1000ms on the same target× 3
Dead clicksClick followed by < 5 DOM mutations within 3000ms and no nav× 1

frustrationScore = rage × 3 + dead × 1. Sessions with non-zero scores get an amber badge in the list. Filter by ?hasRageClicks=1 or ?hasDeadClicks=1 on the URL.
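The rage-click rule and the score formula can be sketched from the table above. This is a simplified version (it counts at most one burst per target; the server-side detector may count differently):

```typescript
// Rage clicks: 3+ clicks on the same target within 1000 ms.
interface Click {
  target: string;
  t: number; // epoch ms
}

function countRageBursts(clicks: Click[]): number {
  const byTarget = new Map<string, number[]>();
  for (const c of clicks) {
    const ts = byTarget.get(c.target) ?? [];
    ts.push(c.t);
    byTarget.set(c.target, ts);
  }
  let bursts = 0;
  for (const ts of byTarget.values()) {
    ts.sort((a, b) => a - b);
    for (let i = 0; i + 2 < ts.length; i++) {
      // Any 3 consecutive clicks within a 1000 ms span counts as a burst.
      if (ts[i + 2] - ts[i] <= 1000) { bursts++; break; } // one burst per target (simplification)
    }
  }
  return bursts;
}

function frustrationScore(rage: number, dead: number): number {
  return rage * 3 + dead * 1;
}
```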

Generate Fix from a replay

When a replay contains an error, ingest auto-correlates it with an existing alert (matched by error fingerprint) — or synthesizes a new alert at severity warning. The Generate Fix button in the player triggers the full remediation pipeline: diagnose → read code → write fix → 11 safety gates → PR → auto-merge if every gate passes.

Warning: Replays ingested before Phase H (2026-04-15) have alertId = null and won't back-fill automatically. Record a new session with an error to test the flow, or run a backfill script over replay_sessions.

Per-project settings

Configure per project at /projects/[slug] → Replay tab. The settings UI merges patches on save — partial updates won't wipe other fields.

SettingDefaultNotes
enabledfalsePer-project kill switch (also gated by REPLAY_V2_ORGS env)
sampleRate1.00.0–1.0 — fraction of sessions recorded
maskAllInputstrueSet to false at your own risk
hashEndUserEmailsfalseWhen true, email filters use SHA-256 exact match
retentionDays301–366 — daily cron sweeps expired sessions from R2 + DB
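When hashEndUserEmails is on, filters compare SHA-256 digests exactly, so the client and server must hash identically. A sketch of the digest side; the normalization (trim and lowercase) is an assumption about how the server canonicalizes emails:

```typescript
import { createHash } from "node:crypto";

// Sketch of hashEndUserEmails: store/compare a SHA-256 hex digest instead
// of the plaintext email. Trim + lowercase normalization is an assumption.
function hashEmail(email: string): string {
  return createHash("sha256").update(email.trim().toLowerCase()).digest("hex");
}
```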

CORS

The ingest endpoint is per-project, with an allowlist of origins configured in the project settings. Default is empty — add your production and staging origins before going live, or replays will be CORS-rejected.

Retention

A daily cron /api/cron/replay-retention sweeps sessions older than each project's retentionDays. 200 sessions per run, 1-day grace floor, 366-day absolute max. R2 objects and DB rows are deleted together — replays don't leave orphaned blobs.

MCP Server

Connect any AI coding tool to InariWatch via the Model Context Protocol (MCP). Your AI gets real-time access to production alerts, root cause analysis, community fixes, and remediation — all from inside your editor.

Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Codex CLI, Gemini CLI, and any tool that supports MCP over HTTP.

Setup

Option A — One command (recommended)

Auto-detect & configure
npx @inariwatch/mcp init

Detects installed AI tools, opens the browser to authenticate, and writes config files automatically. Pass --token inari_xxxxx to skip browser auth.

Option B — Manual setup

1. Go to Settings → MCP and create an access token (choose scope: read, write, or full access).
2. Add to your AI tool:

ToolConfig
Claude Codeclaude mcp add inariwatch https://mcp.inariwatch.com --transport http -H "Authorization: Bearer <token>"
Cursor / Windsurf.cursor/mcp.json → { mcpServers: { inariwatch: { url, headers } } }
VS Code Copilot.vscode/mcp.json → { servers: { inariwatch: { url, headers } } }
Codex CLIcodex mcp add inariwatch https://mcp.inariwatch.com --header "Authorization: Bearer <token>"
Gemini CLIgemini mcp add inariwatch --url https://mcp.inariwatch.com --header "Authorization: Bearer <token>"
OpenClawopenclaw mcp set inariwatch '{"url":"https://mcp.inariwatch.com","transport":"streamable-http","headers":{"Authorization":"Bearer <token>"}}'

Option C — OAuth (zero-token setup)

Tools that support OAuth 2.1 can discover InariWatch automatically via https://mcp.inariwatch.com/api/mcp/.well-known/oauth-authorization-server. Click "Connect" in your tool, approve in the browser, done. PKCE (S256) enforced.

Tools (25)

Once connected, your AI can call these tools:

ToolDescriptionScopeRate
query_alertsList recent alerts by project/severityread200/min
get_statusProjects, integrations, alert countsread200/min
get_uptimeCurrent uptime status for all monitorsread200/min
get_build_logsBuild logs for any host (Vercel, Netlify, Cloudflare Pages)read200/min
get_substrate_contextI/O recording context for an alertread200/min
get_root_causeAI-powered root cause analysisread30/min
assess_riskPre-deploy risk assessment for a PRread30/min
get_postmortemGenerate or retrieve a post-mortemread200/min
search_community_fixesSearch community fix networkread30/min
trigger_fixStart AI remediation pipeline (SSE streaming)execute5/min
rollback_deployHost-agnostic rollback (Vercel, Netlify, CF Pages, Render) ⚠️execute5/min
silence_alertMark alert as read/resolvedwrite200/min
acknowledge_alertMark alert as read (acknowledged)write200/min
reopen_alertReopen a resolved alertwrite200/min
submit_feedbackReport if an AI fix workedwrite200/min
run_checkTrigger an immediate monitoring checkexecute30/min
ask_inariAsk natural language questions about your infrastructureread30/min
get_error_trendsError trends: alerts/day, top recurring errors, period comparisonread200/min
create_uptime_monitorCreate a new uptime monitor for a URLexecute200/min
run_health_checkFull installation health check (capture, integrations, AI key, DB, substrate)read30/min
reproduce_bugReplay I/O timeline before a crash (HTTP, DB, file ops) via Substrate recordingread30/min
simulate_fixAI simulates whether a proposed fix would resolve the bug based on I/O recordingread5/min
verify_remediationFull verification chain: fix → CI → merge → monitoring → recurrence checkread30/min
search_codebaseHybrid search (vector + BM25) across indexed codebase with dependency graphread30/min
reindex_codebaseTrigger incremental re-indexation of a project's codebaseexecute30/min

Tools include MCP annotations (readOnlyHint, destructiveHint) so AI clients know when to ask for confirmation. rollback_deploy and its legacy alias rollback_vercel are both marked destructive.

Resources (4)

Resources are live data feeds your AI can read without calling a tool. Subscribe for real-time notifications when data changes.

URIDescription
inariwatch://alerts/criticalCurrently open critical alerts across all projects
inariwatch://alerts/recentLast 20 alerts from the past 24 hours
inariwatch://status/overviewAll projects: uptime, alert counts, monitor status
inariwatch://remediations/activeAI remediation sessions currently in progress

Subscribe via resources/subscribe. Receive notifications/resources/updated via the SSE endpoint at GET /api/mcp/events when subscribed resources change (polled every 10s).

Prompts (7)

Predefined workflows that appear as commands in your AI tool. Each prompt orchestrates multiple tool calls automatically.

PromptWhat it does
diagnoseFind top critical alert → root cause → substrate context → community fixes → summary
status-reportFull status: uptime, open alerts, active issues
fix-thisFind critical alert → search known fixes → preview AI fix (dry run) → ask before applying
post-deploy-checkAfter deploy: check uptime, new errors, build logs → health report
weekly-summaryPast 7 days: alert trends, top patterns, system health (5-10 bullet points)
production-health-checkScheduled: automated hourly health check with action items
daily-reportScheduled: daily ops report — new alerts, resolved, trends, integration health

Auth & scopes

Tokens are SHA-256 hashed (never stored in plaintext). Choose a scope when creating:

ScopeAccessUse case
readQuery tools, resources, promptsDashboards, monitoring, read-only integrations
writeRead + silence alerts, submit feedbackTeam members who triage alerts
executeFull access: remediation, rollback, checksLead devs, CI/CD pipelines

Tokens can have an expiration date (30d / 90d / 1y / never). Usage stats are visible in Settings → MCP usage (calls/day, top tools, latency, error rate). All MCP calls are logged in the audit trail.

Protocol: Streamable HTTP (JSON-RPC 2.0 over POST), spec version 2024-11-05. Capabilities: tools, resources (subscribe), prompts, sampling. Endpoint: https://mcp.inariwatch.com.
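For tools without built-in MCP support, you can speak the protocol directly. A sketch of a raw tools/call request per the JSON-RPC 2.0 framing above (the query_alerts argument names are an assumption based on the tool table):

```typescript
// Building a raw MCP tools/call request (JSON-RPC 2.0 over POST).
function buildToolCall(id: number, tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Usage (argument names for query_alerts are assumptions):
// await fetch("https://mcp.inariwatch.com", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${token}`,
//   },
//   body: JSON.stringify(buildToolCall(1, "query_alerts", { severity: "critical" })),
// });
```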

Public API — Fix Marketplace

The Fix Marketplace API exposes the community fix database as a public, CORS-open REST API. No authentication required. Any tool can query it to look up fixes for known error patterns.

List fixes

GET /api/community/fixes
GET /api/community/fixes
  ?category=runtime_error   # runtime_error | build_error | ci_error | infrastructure | unknown
  &framework=nextjs          # any framework string
  &language=typescript       # any language string
  &min_success_rate=70       # integer 0–100
  &sort=success_rate         # success_rate | occurrences | recent
  &limit=50                  # max 100
  &offset=0                  # pagination
Response
{
  "fixes": [
    {
      "id": "uuid",
      "successRate": 94,
      "totalApplications": 47,
      "successCount": 44,
      "failureCount": 3,
      "fixApproach": "Wrap the async call in a try/catch and...",
      "fixDescription": "Unhandled promise rejection in middleware",
      "avgConfidence": 87,
      "createdAt": "2025-03-10T12:00:00Z",
      "pattern": {
        "id": "uuid",
        "fingerprint": "abc123",
        "patternText": "UnhandledPromiseRejection...",
        "category": "runtime_error",
        "framework": "nextjs",
        "language": "typescript",
        "occurrenceCount": 312
      }
    }
  ],
  "total": 150,
  "limit": 50,
  "offset": 0
}
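Since the endpoint is public and CORS-open, querying it is a plain fetch. A sketch that assembles the documented query parameters (the app.inariwatch.com host is an assumption; use whatever host serves your API):

```typescript
// Build a Fix Marketplace query URL from the documented parameters.
// Host is an assumption; the docs only specify the path.
function fixesUrl(params: Record<string, string | number>): string {
  const qs = new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)])
  );
  return `https://app.inariwatch.com/api/community/fixes?${qs}`;
}

// Usage (no auth header needed):
// const res = await fetch(fixesUrl({ framework: "nextjs", min_success_rate: 70 }));
// const { fixes, total } = await res.json();
```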

Get a single fix

GET /api/community/fixes/:id
GET /api/community/fixes/uuid-of-fix

Returns the same shape as a list item, plus updatedAt and full pattern details.

Report outcome

After applying a fix, report whether it worked. This updates the community success rate so future teams benefit from your experience.

POST /api/community/fixes/:id/report
POST /api/community/fixes/uuid-of-fix/report
Content-Type: application/json

{ "worked": true }
Response
{ "ok": true }
Pro tip: The successRate field is recalculated live from successCount / totalApplications. Reporting helps every team that encounters the same pattern.
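As a worked check against the example response above (whether the API rounds or truncates the percentage is an assumption; rounding matches the sample data):

```typescript
// successRate as a whole-number percentage. Rounding is an assumption;
// the API may truncate instead.
function successRate(successCount: number, totalApplications: number): number {
  if (totalApplications === 0) return 0;
  return Math.round((successCount / totalApplications) * 100);
}
```

With the sample fix (44 successes of 47 applications) this yields 94, matching the successRate in the example response.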

Public API — Status Widget

Embed a live status badge on any website — your landing page, README, or documentation. The badge shows current system status, optional uptime percentage, and links to your public status page.

Embed code

Place a div with a data-inariwatch-slug attribute wherever you want the badge to appear, then load the script once.

HTML
<div data-inariwatch-slug="your-project-slug"></div>
<script src="https://app.inariwatch.com/embed.js" async></script>

Alternatively, use the manual config for more control over the target element:

HTML (manual config)
<div id="my-status"></div>
<script>
  window.__INARIWATCH__ = {
    slug: "your-project-slug",
    container: "#my-status"
  };
</script>
<script src="https://app.inariwatch.com/embed.js" async></script>

Appearance

StatusColorBehavior
All Systems OperationalGreenStatic badge
Degraded PerformanceAmberPulsing dot
Major OutageRedPulsing dot

The badge shows uptime percentage when uptime monitors are configured for the project (e.g. All Systems Operational · 99.8% uptime). Clicking the badge opens the full public status page. The widget polls for updates every 60 seconds automatically.

Data endpoint

The widget fetches from a lightweight CORS-open endpoint you can also use directly:

GET /api/status/:slug/widget
GET /api/status/your-project-slug/widget
Response
{
  "status":          "operational",
  "label":           "All Systems Operational",
  "uptimePct":       99.8,
  "activeIncidents": 0,
  "slug":            "your-project-slug",
  "pageUrl":         "https://app.inariwatch.com/status/your-project-slug"
}
Note: The endpoint is cached for 60 seconds at the CDN layer (s-maxage=60, stale-while-revalidate=30). Status pages must be set to Public in your project settings to appear.
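A minimal client-side sketch of rendering this response. The interface mirrors the JSON above; treating uptimePct as nullable when no uptime monitors are configured is an assumption:

```typescript
// Shape of the widget response shown above (uptimePct nullability assumed).
interface WidgetStatus {
  status: string;
  label: string;
  uptimePct: number | null;
  activeIncidents: number;
  slug: string;
  pageUrl: string;
}

// Build the badge text, appending uptime only when one is reported,
// matching the "All Systems Operational · 99.8% uptime" format above.
function badgeText(w: WidgetStatus): string {
  return w.uptimePct == null ? w.label : `${w.label} · ${w.uptimePct}% uptime`;
}
```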

Reference — Alert types & severity

| Severity | Color | Meaning |
|---|---|---|
| Critical | Red | Immediate action required — production is affected |
| Warning | Amber | Degraded state — action recommended soon |
| Info | Blue | Informational — no immediate action needed |

Deduplication

Before creating a new alert, InariWatch checks whether an open, unresolved alert with the same title already exists for the same project within the last 24 hours. If one does, the new alert is silently dropped — you won't get spammed by the same event.

To force a new alert (e.g. after resolving), mark the existing alert as Resolved first.
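The dedup rule can be sketched as a predicate over existing alerts. This is an illustration of the documented behavior, not the server implementation; the field names are borrowed from the desktop alerts response below, and projectId is a hypothetical field:

```typescript
interface AlertRecord {
  title: string;
  projectId: string; // hypothetical field for illustration
  isResolved: boolean;
  createdAt: string; // ISO timestamp
}

const DEDUP_WINDOW_MS = 24 * 60 * 60 * 1000; // 24 hours

// True when a new alert with this title/project should be dropped:
// an open, unresolved alert with the same title exists within the window.
function isDuplicate(
  existing: AlertRecord[],
  title: string,
  projectId: string,
  now: Date
): boolean {
  return existing.some(a =>
    a.title === title &&
    a.projectId === projectId &&
    !a.isResolved &&
    now.getTime() - new Date(a.createdAt).getTime() < DEDUP_WINDOW_MS
  );
}
```

Marking the existing alert as Resolved makes the predicate false, which is why resolving first forces a fresh alert.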

Reference — REST API

InariWatch exposes one public REST endpoint — used by the desktop app and any custom tooling.

GET /api/desktop/alerts

Returns the most recent unread alerts for the authenticated user.

Request
GET /api/desktop/alerts
Authorization: Bearer rdr_your_token_here
Response (200)
{
  "alerts": [
    {
      "id":          "uuid",
      "title":       "CI failing on main",
      "severity":    "critical",
      "isResolved":  false,
      "createdAt":   "2025-03-17T03:12:00Z",
      "sourceIntegrations": ["github"]
    }
  ]
}
| Status | Meaning |
|---|---|
| 200 | Success |
| 401 | Missing or invalid token |
| 403 | Token exists but account is not Pro |
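Calling the endpoint from custom tooling might look like the sketch below. The rdr_ token prefix comes from the example above; the app.inariwatch.com base URL and the empty-array fallback on auth failure are assumptions:

```typescript
// Build the Authorization header shown in the request example above.
function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// Fetch unread alerts; returns [] on 401/403. Sketch only — real tooling
// should surface auth errors rather than swallow them.
async function fetchDesktopAlerts(token: string): Promise<unknown[]> {
  const res = await fetch("https://app.inariwatch.com/api/desktop/alerts", {
    headers: authHeaders(token),
  });
  if (!res.ok) return [];
  const body = (await res.json()) as { alerts: unknown[] };
  return body.alerts;
}
```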

Reference — Stress Testing

InariWatch infrastructure is validated with a 14-scenario k6 suite (10 load + 4 chaos) that runs against the production stack. All scenarios pass.

| # | Scenario | What it validates |
|---|---|---|
| 1 | Webhook Storm | Capture webhook ingestion under burst load, rate limiting |
| 2 | MCP Rate Limits | 3 rate limit tiers (cheap 200/min, moderate 30/min, expensive 5/min) |
| 3 | SSE Streaming | 50 concurrent Server-Sent Event connections, reconnection |
| 4 | Alert Dedup | Fingerprinting, deduplication under concurrent writes, storm detection |
| 5 | Auth Brute Force | Login rate limiting, device flow poll protection |
| 6 | Cron Fan-out | 7 sub-pollers in parallel, overlap handling |
| 7 | Neon Saturation | DB concurrency: webhooks + MCP + cron simultaneously |
| 8 | Push Serialization | Push notification pipeline under critical alert burst |
| 9 | Auto-Heal | 3 consecutive failures trigger single heal, cooldown, race safety |
| 10 | Full Incident | End-to-end: deploy fail → error burst → uptime down → auto-heal → verify |
| 11 | Chaos · Incident | Full incident with mixed valid/malformed payloads + concurrent cron + storm |
| 12 | Chaos · MCP Storm | 200 concurrent MCP calls mixing all 3 rate limit tiers |
| 13 | Chaos · Tenant Isolation | Flood one workspace, verify another's latency stays normal |
| 14 | Chaos · SSE | 50+ SSE connections with random abrupt disconnects |

The stress test suite lives in k6/ and can be re-run with bash k6/run-all.sh. To run a single scenario, pass its name, e.g. bash k6/run-all.sh webhook-storm.