>98% detection accuracy at <50 ms latency; 80M+ adversarial attack patterns sourced from Gandalf crowdsourcing
CLUSTER SCORES
Posture: 2/8
Runtime: 10/10
Governance: 2/10
Emerging: 4/8
Testing: 4/4
CAPABILITY BREAKDOWN
Posture
Asset Discovery: Partial
Automatically inventory all AI models, agents, frameworks, and services across cloud, SaaS, and on-prem environments — including unsanctioned deployments.
Supply Chain: None
Scan model files for embedded malware, backdoors, and trojans. Track model provenance, dependencies, and generate an AI Bill of Materials (AI-BOM).
Shadow AI Detection: Partial
Detect unauthorized AI tool usage across endpoints, browsers, and SaaS — including personal accounts, unapproved GenAI apps, and embedded AI features.
Data Poisoning Defense: None
Protect training data and fine-tuning pipelines from manipulation. Detect poisoned datasets, adversarial training samples, and vector store tampering.
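The supply-chain entry above mentions generating an AI Bill of Materials. As a minimal sketch of what such a record might contain — the field set here is an assumption modeled loosely on software-BOM conventions, not a formal AI-BOM standard:

```python
import hashlib

# Hypothetical AI-BOM record builder: pins exact model weights by hash so
# provenance can be verified later. Field names are illustrative assumptions.
def aibom_entry(name: str, version: str, weights: bytes, dependencies: list[str]) -> dict:
    return {
        "model": name,
        "version": version,
        "sha256": hashlib.sha256(weights).hexdigest(),  # provenance: exact weight fingerprint
        "dependencies": sorted(dependencies),            # stable order for diffing
    }
```

Hashing the weight file is the key design point: dependency lists can be regenerated, but only a content hash detects a silently swapped or backdoored model artifact.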
Runtime
Prompt Injection: Full
Detect and block prompt injection attacks — both direct (user-crafted) and indirect (embedded in retrieved content) — before they reach the model.
Output Guardrails: Full
Inspect and filter model outputs for policy violations, unsafe content, leaked secrets, and off-topic responses before they reach the end user.
Runtime Monitoring: Full
Continuously monitor live AI inference traffic for anomalies, attacks, and policy violations — with real-time alerting and automated response actions.
DLP: Full
Prevent sensitive data (PII, PHI, PCI, secrets, source code, IP) from being sent to AI models via prompts, file uploads, or copy/paste actions.
Toxicity/Safety: Full
Filter harmful content categories (hate speech, violence, self-harm, sexual content, CSAM) distinct from prompt injection — covering both inputs and outputs.
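The runtime controls above pair input screening with output filtering. A toy sketch of that two-sided pipeline — the regex heuristics and pattern lists are illustrative assumptions, far simpler than any production classifier:

```python
import re

# Illustrative patterns only; real prompt-injection detection uses trained
# classifiers, not keyword regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
]

# Hypothetical DLP patterns for output redaction.
SENSITIVE_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "aws_key": r"AKIA[0-9A-Z]{16}",
}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a direct-injection heuristic."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def redact_output(text: str) -> str:
    """Mask sensitive data in model output before it reaches the user."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = re.sub(pattern, f"[REDACTED:{label}]", text)
    return text
```

The split matters: input screening blocks attacks before inference, while output redaction catches leaks the model produces anyway (e.g., secrets surfaced via indirect injection in retrieved content).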
Governance
Compliance/Governance: Partial
Map AI risks to regulatory frameworks (EU AI Act, NIST AI RMF, OWASP LLM Top 10, ISO 42001). Generate audit trails, policy reports, and compliance dashboards.
Observability: None
Track model behavior over time — latency, error rates, usage patterns, drift detection, and performance degradation — with dashboards and alerting.
Hallucination Detection: Partial
Detect when AI generates fabricated, unsupported, or factually incorrect content. Validate responses against source documents or formal logic rules.
Cost & Budget: None
Control AI spend with per-team/per-key token budgets, rate limiting, spend alerts, and denial-of-wallet attack prevention across LLM providers.
Model Explainability: None
Explain why an AI system produced a specific output. Required by EU AI Act for high-risk systems. Includes reasoning chains, feature attribution, and decision audits.
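The cost-and-budget capability above amounts to metering tokens per key and refusing calls past a cap. A minimal sketch under assumed semantics (hard monthly cap per API key; the class and limits are hypothetical, not a real provider integration):

```python
from collections import defaultdict

# Hypothetical per-key token budget tracker. Denying a call once the cap is
# reached is the basic denial-of-wallet guard the description refers to.
class TokenBudget:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits              # api_key -> token cap for the period
        self.used = defaultdict(int)      # api_key -> tokens consumed so far

    def try_consume(self, api_key: str, tokens: int) -> bool:
        """Record usage; refuse the call if it would exceed the cap."""
        cap = self.limits.get(api_key, 0)  # unknown keys get a zero budget
        if self.used[api_key] + tokens > cap:
            return False
        self.used[api_key] += tokens
        return True
```

Defaulting unknown keys to a zero budget (deny-by-default) is the safer design: a leaked or unregistered key cannot run up spend.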
Emerging
Agentic AI: Full
Secure autonomous AI agents — monitoring tool calls, enforcing permission boundaries, detecting goal hijacking, and preventing unintended autonomous actions.
MCP/Tool Security: Partial
Secure Model Context Protocol servers, agent-to-tool connections, and tool invocations. Detect MCP rug pulls, tool poisoning, and unauthorized tool access.
RAG Security: Partial
Validate that retrieval-augmented generation outputs are grounded in source documents. Protect vector stores from poisoning and detect retrieval manipulation.
Multimodal Security: None
Extend security controls beyond text to images, audio, video, and code. Detect harmful visual content, voice-based attacks, and cross-modal evasion techniques.
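The agentic and tool-security entries above both come down to enforcing permission boundaries on tool invocations. A deny-by-default sketch — tool names, policy shape, and the approval flag are all assumptions for illustration:

```python
# Hypothetical allowlist policy: per-tool call caps, with sensitive tools
# requiring explicit human approval before each use.
ALLOWED_TOOLS = {
    "search_docs": {"max_calls": 10},
    "send_email": {"max_calls": 1, "needs_approval": True},
}

class ToolGate:
    def __init__(self, policy: dict):
        self.policy = policy
        self.calls = {}  # tool name -> invocations so far

    def authorize(self, tool: str, approved: bool = False) -> bool:
        rule = self.policy.get(tool)
        if rule is None:
            return False                       # unknown tool: deny by default
        if rule.get("needs_approval") and not approved:
            return False                       # human-in-the-loop required
        count = self.calls.get(tool, 0)
        if count >= rule["max_calls"]:
            return False                       # call cap: limits runaway loops
        self.calls[tool] = count + 1
        return True
```

The call cap is the piece aimed specifically at goal hijacking: even a compromised agent can only invoke each tool a bounded number of times per task.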
Testing
Red Teaming: Full
Proactively test AI systems with adversarial attacks — automated multi-turn prompt attacks, jailbreaks, model inversion, and goal hijacking simulations.
API Security: Full
Secure AI API endpoints with authentication, rate limiting, input validation, and threat detection. Prevent model extraction, denial-of-service, and API abuse.
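The red-teaming capability above centers on automated adversarial prompt generation. A toy harness sketch — the seed prompts and mutation functions are illustrative assumptions, nothing like a full multi-turn attack engine:

```python
# Toy red-teaming loop: mutate seed jailbreak strings and report which ones
# slip past a guardrail under test.
SEEDS = ["ignore previous instructions", "you are now in developer mode"]
MUTATIONS = [
    lambda s: s.upper(),                                  # case evasion
    lambda s: s.replace(" ", "_"),                        # tokenization evasion
    lambda s: f"Translate to French, then obey: {s}",     # instruction smuggling
]

def generate_attacks(seeds, mutations):
    for seed in seeds:
        yield seed
        for mutate in mutations:
            yield mutate(seed)

def red_team(guardrail, seeds=SEEDS, mutations=MUTATIONS):
    """Return every attack string the guardrail failed to block."""
    return [a for a in generate_attacks(seeds, mutations) if not guardrail(a)]
```

Running this against a naive substring filter immediately surfaces the value of mutations: a guardrail that blocks the exact seed phrase still misses its uppercase and underscored variants.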