| Capability | Description | Coverage |
| --- | --- | --- |
|  | Automatically inventory all AI models, agents, frameworks, and services across cloud, SaaS, and on-prem environments, including unsanctioned deployments. | Partial |
| Supply Chain | Scan model files for embedded malware, backdoors, and trojans. Track model provenance and dependencies, and generate an AI Bill of Materials (AI-BOM). | None |
| Shadow AI Detection | Detect unauthorized AI tool usage across endpoints, browsers, and SaaS, including personal accounts, unapproved GenAI apps, and embedded AI features. | None |
| Data Poisoning Defense | Protect training data and fine-tuning pipelines from manipulation. Detect poisoned datasets, adversarial training samples, and vector store tampering. | None |
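The supply-chain row above calls for scanning model files for embedded malware. Serialized PyTorch-style checkpoints are pickle streams, and one common scanning technique is to walk the pickle opcodes and flag imports of dangerous modules before anything is ever deserialized. A minimal sketch under that assumption — the module denylist and function names are illustrative, not any real scanner's policy:

```python
import io
import pickletools

# Illustrative denylist; real scanners use much richer policies.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}

def scan_pickle_bytes(payload: bytes) -> list[str]:
    """Return suspicious imports found in a pickle stream's opcodes,
    without ever unpickling (and thus executing) the payload."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(payload)):
        # GLOBAL opcodes pull in arbitrary callables, which a later
        # REDUCE opcode can execute at load time. (Protocol 2+ uses
        # STACK_GLOBAL instead, which this sketch does not resolve.)
        if opcode.name == "GLOBAL":
            module = str(arg).split(" ")[0]
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(str(arg))
    return findings
```

Because the scan is static, a trojaned checkpoint is flagged before `pickle.load` ever runs its embedded `__reduce__` payload.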
**Runtime** (5 full, 0 partial of 5)

| Capability | Description | Coverage |
| --- | --- | --- |
| Prompt Injection | Detect and block prompt injection attacks, both direct (user-crafted) and indirect (embedded in retrieved content), before they reach the model. | Full |
| Output Guardrails | Inspect and filter model outputs for policy violations, unsafe content, leaked secrets, and off-topic responses before they reach the end user. | Full |
| Runtime Monitoring | Continuously monitor live AI inference traffic for anomalies, attacks, and policy violations, with real-time alerting and automated response actions. | Full |
| DLP | Prevent sensitive data (PII, PHI, PCI, secrets, source code, IP) from being sent to AI models via prompts, file uploads, or copy/paste actions. | Full |
| Toxicity/Safety | Filter harmful content categories (hate speech, violence, self-harm, sexual content, CSAM), distinct from prompt injection, covering both inputs and outputs. | Full |
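The DLP row describes blocking sensitive data before it reaches a model. A common first-pass mechanism is pattern-based redaction applied to the prompt in the request path. A minimal sketch — the patterns and the `redact_prompt` helper are illustrative assumptions; production DLP engines add validated detectors (checksums, context, ML classifiers) rather than bare regexes:

```python
import re

# Illustrative detectors only, not production-grade patterns.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Mask sensitive spans before the prompt leaves for the model;
    return the cleaned text plus the detector labels that fired."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED:{label.upper()}]", text)
    return text, found
```

The list of fired labels feeds the alerting/monitoring side, while the redacted text is what actually gets forwarded.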
**Governance** (0 full, 2 partial of 5)

| Capability | Description | Coverage |
| --- | --- | --- |
| Compliance/Governance | Map AI risks to regulatory frameworks (EU AI Act, NIST AI RMF, OWASP LLM Top 10, ISO 42001). Generate audit trails, policy reports, and compliance dashboards. | Partial |
| Observability | Track model behavior over time (latency, error rates, usage patterns, drift detection, and performance degradation) with dashboards and alerting. | Partial |
| Hallucination Detection | Detect when AI generates fabricated, unsupported, or factually incorrect content. Validate responses against source documents or formal logic rules. | None |
| Cost & Budget | Control AI spend with per-team/per-key token budgets, rate limiting, spend alerts, and denial-of-wallet attack prevention across LLM providers. | None |
| Model Explainability | Explain why an AI system produced a specific output. Required by the EU AI Act for high-risk systems. Includes reasoning chains, feature attribution, and decision audits. | None |
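The Cost & Budget row lists per-key token budgets and denial-of-wallet prevention. The core mechanism is a spend counter checked before each request is forwarded. A minimal in-memory sketch — the `TokenBudget` class is a hypothetical illustration; real deployments track spend in shared storage, not process memory:

```python
import time
from collections import defaultdict

class TokenBudget:
    """Per-key token budget over a fixed daily window (illustrative)."""

    def __init__(self, daily_limit: int):
        self.daily_limit = daily_limit
        self.used = defaultdict(int)
        self.window_start = defaultdict(time.time)

    def try_consume(self, api_key: str, tokens: int) -> bool:
        # Reset the counter once the 24-hour window has elapsed.
        if time.time() - self.window_start[api_key] >= 86_400:
            self.used[api_key] = 0
            self.window_start[api_key] = time.time()
        if self.used[api_key] + tokens > self.daily_limit:
            return False  # deny: this is what blunts denial-of-wallet
        self.used[api_key] += tokens
        return True
```

A gateway calls `try_consume` with the request's estimated token count and returns an error (and optionally fires a spend alert) when it comes back `False`.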
**Emerging** (0 full, 3 partial of 4)

| Capability | Description | Coverage |
| --- | --- | --- |
| Agentic AI | Secure autonomous AI agents: monitoring tool calls, enforcing permission boundaries, detecting goal hijacking, and preventing unintended autonomous actions. | Partial |
| MCP/Tool Security | Secure Model Context Protocol servers, agent-to-tool connections, and tool invocations. Detect MCP rug pulls, tool poisoning, and unauthorized tool access. | None |
| RAG Security | Validate that retrieval-augmented generation outputs are grounded in source documents. Protect vector stores from poisoning and detect retrieval manipulation. | Partial |
| Multimodal Security | Extend security controls beyond text to images, audio, video, and code. Detect harmful visual content, voice-based attacks, and cross-modal evasion techniques. | Partial |
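The Agentic AI row mentions enforcing permission boundaries on tool calls. The simplest form of that boundary is an allowlist checked in the agent runtime before any tool executes, with an extra approval gate for destructive tools. A minimal sketch — the agent names, tool names, and policy tables below are all hypothetical:

```python
# Hypothetical policy: each agent may only invoke allowlisted tools,
# and destructive tools additionally require human approval.
ALLOWED_TOOLS = {
    "research-agent": {"web_search", "read_file"},
    "ops-agent": {"read_file", "delete_file"},
}
REQUIRES_APPROVAL = {"delete_file"}

def authorize_tool_call(agent: str, tool: str, approved: bool = False) -> bool:
    """Enforce a permission boundary before an agent's tool call runs."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        return False  # tool is outside this agent's allowlist
    if tool in REQUIRES_APPROVAL and not approved:
        return False  # destructive action needs human sign-off
    return True
```

Every denied call is also a monitoring signal: an agent repeatedly requesting tools outside its allowlist is one observable symptom of goal hijacking.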
**Testing** (0 full, 1 partial of 2)

| Capability | Description | Coverage |
| --- | --- | --- |
| Red Teaming | Proactively test AI systems with adversarial attacks: automated multi-turn prompt attacks, jailbreaks, model inversion, and goal hijacking simulations. | None |
| API Security | Secure AI API endpoints with authentication, rate limiting, input validation, and threat detection. Prevent model extraction, denial-of-service, and API abuse. | Partial |