**Highlights:** fully open source under Apache 2.0 with no feature gating; built-in and custom evaluators; annotation queues; Comet tracking integration.
**Tracing** (3 full, 1 partial of 4)

| Feature | Description | Rating |
| --- | --- | --- |
| Prompt/Completion Tracing | Record the complete lifecycle of every LLM request (prompts, completions, tool calls, retrieval steps) with structured parent-child span relationships; see the sketch after this table. | Full |
| Latency Monitoring | Track response times at each pipeline step with p50/p95/p99 breakdowns and historical trends. | Full |
| Multi-model Support | Trace across multiple LLM providers and frameworks (LangChain, LlamaIndex, Vercel AI SDK) with auto-instrumentation. | Full |
| Agentic Observability | Dedicated tracing for multi-step agent workflows: tool call visualization, decision tree inspection, agent-specific metrics, and multi-turn threading. | Partial |
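To make the parent-child span model concrete, here is a minimal, self-contained sketch of decorator-based instrumentation. It is not the platform's actual SDK; `traced` and the in-memory `SPANS` list are hypothetical stand-ins showing how nested steps (a retrieval call inside a pipeline, followed by an LLM call) attach to a parent span and record per-step latency.

```python
import contextvars
import functools
import time
import uuid

# Hypothetical stand-in for an observability SDK: a context variable
# carries the current span id so nested calls record their parent,
# producing the parent-child span tree described in the table above.
_current_span = contextvars.ContextVar("current_span", default=None)
SPANS = []  # in-memory sink; a real SDK would ship spans to a backend

def traced(name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"id": uuid.uuid4().hex[:8], "name": name,
                    "parent_id": _current_span.get()}
            token = _current_span.set(span["id"])
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                _current_span.reset(token)
                span["duration_ms"] = (time.perf_counter() - start) * 1000
                SPANS.append(span)
        return wrapper
    return decorator

@traced("retrieve")
def retrieve(query):
    return ["doc-1", "doc-2"]  # placeholder retrieval step

@traced("llm_call")
def llm_call(prompt):
    return f"completion for: {prompt}"  # placeholder model call

@traced("rag_pipeline")
def rag_pipeline(query):
    docs = retrieve(query)                # child span of rag_pipeline
    return llm_call(f"{query} | {docs}")  # sibling child span

rag_pipeline("what changed in release 1.2?")
for s in SPANS:
    print(f'{s["name"]:<12} parent={s["parent_id"]} {s["duration_ms"]:.2f} ms')
```

The p50/p95/p99 breakdowns in the Latency Monitoring row come from aggregating many such `duration_ms` values per span name over time.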
**Cost & Performance** (2 full, 1 partial of 3)

| Feature | Description | Rating |
| --- | --- | --- |
| Cost Tracking | Calculate per-request and aggregate costs. Attribute spend to teams, features, users, or projects; see the sketch after this table. | Full |
| Token Analytics | Monitor input/output token counts, context window utilization, and token efficiency. | Full |
| Alerting & SLOs | Configure alerts for latency spikes, error thresholds, cost overruns, and quality degradation. | Partial |
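As a worked example of the cost attribution in the Cost Tracking row, the sketch below computes per-request cost from token counts and rolls it up by a team tag. The model names and per-1K-token prices are placeholder assumptions, not real provider rates.

```python
from collections import defaultdict

# Placeholder price table (USD per 1K tokens); real rates vary by provider.
PRICES_PER_1K = {
    "model-a": {"input": 0.0005, "output": 0.0015},
    "model-b": {"input": 0.0030, "output": 0.0150},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Per-request cost: (tokens / 1000) * price, summed over both directions."""
    p = PRICES_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Aggregate spend attributed to a team tag; the same rollup works for
# feature, user, or project tags, as in the Cost Tracking row above.
requests = [
    {"model": "model-a", "in": 1200, "out": 300, "team": "search"},
    {"model": "model-b", "in": 800, "out": 500, "team": "support"},
]
totals = defaultdict(float)
for r in requests:
    totals[r["team"]] += request_cost(r["model"], r["in"], r["out"])

print(dict(totals))  # {'search': 0.00105, 'support': 0.0099}
```

The Token Analytics row reduces to the same inputs: context window utilization, for instance, is just input tokens divided by the model's context limit.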
**Evaluation** (3 full, 2 partial of 5)

| Feature | Description | Rating |
| --- | --- | --- |
| Built-in Evals | Pre-built evaluators for hallucination, relevance, toxicity, faithfulness, and coherence. |