200+ models via a single API; marketplace discovery; automatic routing by cost, speed, or quality; OAuth integration
Routing: 3 full, 1 partial of 4

| Feature | Description | Support |
| --- | --- | --- |
| Multi-Provider Support | Breadth of LLM providers supported through a single unified API: OpenAI, Anthropic, Google, AWS, Azure, Cohere, Mistral, open-source models, and more. | Full |
| Smart Routing | Intelligent request routing based on cost, latency, model capability, or custom logic. Includes latency-based, cost-optimized, conditional, and semantic routing. | Full |
| Fallback & Retry | Automatic failover to backup providers/models on failure. Retry logic with exponential backoff, circuit breaking, and health-aware routing. | Full |
| Load Balancing | Distributes requests across endpoints. Round-robin, weighted, least-connections, and performance-aware strategies. | Partial |
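The fallback-and-retry pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any gateway's actual implementation: providers are modeled as plain callables tried in priority order, with jittered exponential backoff between retries before failing over to the next provider.

```python
import random
import time

# Hypothetical sketch: try providers in priority order, retrying each
# with exponential backoff plus jitter before failing over to the next.
# Real gateways also track provider health and retry only on specific
# error classes (timeouts, 429s, 5xx), not on every exception.
def call_with_fallback(providers, prompt, max_retries=3, base_delay=0.5):
    last_error = None
    for call_provider in providers:              # ordered by preference
        delay = base_delay
        for _attempt in range(max_retries):
            try:
                return call_provider(prompt)
            except Exception as exc:
                last_error = exc
                time.sleep(delay + random.uniform(0, delay))  # jittered backoff
                delay *= 2                       # exponential growth
    raise RuntimeError("all providers failed") from last_error
```

Circuit breaking would extend this by skipping a provider entirely once its recent failure rate crosses a threshold, rather than paying the retry cost on every request.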
Cost & Performance: 1 full, 2 partial of 4

| Feature | Description | Support |
| --- | --- | --- |
| Semantic Caching | Cache LLM responses and serve semantically similar requests from cache. Reduces cost and latency for repeated queries. | None |
| Cost Tracking & Budgets | Real-time token cost tracking per user, team, project, or API key. Budget limits, spend alerts, and cost attribution. | Full |
| Rate Limiting & Quotas | Token-aware and request-based rate limiting. Per-user, per-team, per-key quotas. | Partial |
| Latency Performance | Gateway overhead added to each request. Sub-millisecond overhead is ideal; Rust/Go implementations outperform Python-based gateways. | |
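Semantic caching, as described above, serves a cached response when a new query is similar enough to a previously seen one. A minimal toy sketch follows; a real gateway would use a proper embedding model and a vector index, whereas here a bag-of-words vector and a linear scan stand in for both.

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: a bag-of-words term-count vector.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Serve a cached response when a query's similarity to a stored
    query meets the threshold; otherwise signal a cache miss."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []                    # list of (embedding, response)

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]                   # hit: close enough to a past query
        return None                          # miss: caller invokes the LLM

    def put(self, query, response):
        self.entries.append((embed(query), response))
```

The threshold is the key tuning knob: too low and users get stale answers to genuinely different questions; too high and the cache rarely hits.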
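Token-aware rate limiting, noted in the table above, differs from request counting in that each request drains the limit by the number of LLM tokens it consumes. A common way to implement it is a token bucket per API key; the sketch below is an illustrative assumption, with an injectable clock for testability, not any particular gateway's code.

```python
import time

class TokenBucket:
    """Hypothetical token-aware rate limiter: a bucket refilled at a
    fixed rate and drained by the LLM tokens a request consumes."""

    def __init__(self, capacity, refill_per_sec, clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.clock = clock
        self.tokens = float(capacity)        # start full
        self.last = clock()

    def allow(self, cost):
        """Admit a request costing `cost` LLM tokens, or reject it."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Per-user, per-team, and per-key quotas fall out of keeping one bucket per scope and checking each applicable bucket before admitting a request.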