Observability for AI agents
Every agent decision, visible in one place.
Drop in our SDK. See every tool call, every LLM response, and every step your agents take—then set guardrails and ship with confidence.
Free to start. Node.js SDK is open source. [email protected]
Get started in minutes
import { Guardonic } from "@guardonic/sdk"
const g = new Guardonic({ apiKey, baseUrl })
await g.ingest({ agent, events })
The Node.js SDK is open source (MIT). The hosted API and dashboard are proprietary; integrate from Python or other languages via our HTTP API. Node SDK documentation.
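To make the quick start concrete, here is a sketch of instrumenting a single tool call. Only `new Guardonic({ apiKey, baseUrl })` and `g.ingest({ agent, events })` come from the snippet above; the event fields (`type`, `name`, `input`, `output`, `durationMs`) and the `traced` helper are illustrative assumptions, not the documented schema.

```typescript
// Illustrative event shape -- an assumption, not the SDK's documented schema.
type AgentEvent = {
  type: "tool_call" | "llm_response";
  name: string;
  input: unknown;
  output: unknown;
  durationMs: number;
};

const events: AgentEvent[] = [];

// Wrap any async tool so each call is captured as an event.
async function traced<I, O>(
  name: string,
  tool: (input: I) => Promise<O>,
  input: I
): Promise<O> {
  const start = Date.now();
  const output = await tool(input);
  events.push({
    type: "tool_call",
    name,
    input,
    output,
    durationMs: Date.now() - start,
  });
  return output;
}

// Example tool: a stubbed weather lookup.
const getWeather = async (city: string) => `sunny in ${city}`;

async function runAgent(): Promise<string> {
  const result = await traced("getWeather", getWeather, "Oslo");
  // With the client from the snippet above, you would then ship the batch:
  // await g.ingest({ agent: "demo-agent", events });
  return result;
}
```

The pattern is batch-friendly: collect events during the run, then hand the whole array to `ingest` in one call.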
How it works
Instrument, observe, and control AI agents in production
Instrument
Drop our SDK into your agent code. Every tool call, every LLM response, every decision, captured with minimal performance overhead.
Observe
See exactly what happened in every run. Inputs, reasoning, actions, outputs. Searchable, filterable, replayable.
Control
Cost caps, action allowlists, content policies. Enforced at runtime. Kill a run mid-flight if you need to.
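Runtime enforcement can be pictured as a policy check that runs before each action. The sketch below is an assumption for illustration: the `Policy` shape, `Verdict` type, and `checkAction` function are hypothetical names, not the guardrails engine's actual configuration format.

```typescript
// Hypothetical policy shape; the real guardrails configuration may differ.
type Policy = {
  maxCostUsd: number;       // cost cap for the whole run
  allowedActions: string[]; // action allowlist
};

type Verdict = { allowed: true } | { allowed: false; reason: string };

// Check one proposed action against the policy before executing it.
function checkAction(
  policy: Policy,
  action: string,
  costSoFarUsd: number
): Verdict {
  if (!policy.allowedActions.includes(action)) {
    return { allowed: false, reason: `action "${action}" not on allowlist` };
  }
  if (costSoFarUsd >= policy.maxCostUsd) {
    return { allowed: false, reason: "cost cap reached" };
  }
  return { allowed: true };
}
```

A run loop would call a check like this before every tool call and abort on the first denial, which is what "kill a run mid-flight" amounts to in practice.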
Every AI agent in production is a black box. Most teams find out something went wrong from their customers.
An agent calls an API it was never supposed to touch. You hear about it from a customer ticket.
A workflow burns through your monthly budget in six hours. No one set a limit.
Compliance asks what your agent told a user last Tuesday. There are no logs.
Platform
Everything you need to trust your agents in production.
Agent timeline
Every run broken down step by step. What the agent saw, decided, and did.
Simulation sandbox
Coming soon. Replay past runs with different inputs. Test before you ship.
Guardrails engine
Cost caps, allowlists, content policies. Enforced at runtime, not after.
Alerts
Coming soon. Slack, email, webhook. Know in seconds when something goes wrong.
Start building with Guardonic.
Get early access to the SDK and dashboard. Free for up to 1,000 agent runs per month.
Or email us at [email protected]. For investing or partnerships, reach out to [email protected], or see Contact.