
AWS Bedrock Guardrails

Cloud Native · #18 of 41 in AI Security & Defense
Coverage: 53%
Highlights: Automated Reasoning (formal math verification); 6 safeguard policies; model-agnostic API; code domain support
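The six safeguard policies noted above correspond to configuration blocks in Bedrock's CreateGuardrail API. Below is a minimal sketch of that request shape using the boto3 `bedrock` control-plane client; the guardrail name, filter choices, and thresholds are illustrative, and Automated Reasoning checks are attached separately, so they are only noted in a comment.

```python
# Sketch of a CreateGuardrail request covering the safeguard policy types.
# Field names follow the boto3 bedrock client's create_guardrail call;
# the concrete values here are illustrative, not recommendations.
def build_guardrail_config(name: str) -> dict:
    return {
        "name": name,
        "blockedInputMessaging": "Request blocked by guardrail.",
        "blockedOutputsMessaging": "Response blocked by guardrail.",
        # 1. Content filters (harmful-content categories + prompt attacks)
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                # PROMPT_ATTACK applies to inputs only, so outputStrength is NONE
                {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            ]
        },
        # 2. Denied topics
        "topicPolicyConfig": {
            "topicsConfig": [
                {"name": "LegalAdvice",
                 "definition": "Providing legal advice or opinions.",
                 "type": "DENY"}
            ]
        },
        # 3. Word filters (custom terms plus a managed profanity list)
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "internal-codename"}],
            "managedWordListsConfig": [{"type": "PROFANITY"}],
        },
        # 4. Sensitive-information filters (DLP-style masking/blocking)
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        # 5. Contextual grounding checks (hallucination detection)
        "contextualGroundingPolicyConfig": {
            "filtersConfig": [
                {"type": "GROUNDING", "threshold": 0.75},
                {"type": "RELEVANCE", "threshold": 0.75},
            ]
        },
        # 6. Automated Reasoning checks are configured via a separate
        #    Automated Reasoning policy, referenced by ARN (see API docs).
    }

# Usage (requires AWS credentials and the bedrock control-plane client):
#   import boto3
#   bedrock = boto3.client("bedrock")
#   resp = bedrock.create_guardrail(**build_guardrail_config("demo-guardrail"))
```

The builder is kept as a pure function so the request payload can be inspected or version-controlled before any call is made against the account.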
Posture (0 full, 0 partial of 4)

- Asset Discovery (None): Automatically inventory all AI models, agents, frameworks, and services across cloud, SaaS, and on-prem environments, including unsanctioned deployments.
- Supply Chain (None): Scan model files for embedded malware, backdoors, and trojans. Track model provenance and dependencies, and generate an AI Bill of Materials (AI-BOM).
- Shadow AI Detection (None): Detect unauthorized AI tool usage across endpoints, browsers, and SaaS, including personal accounts, unapproved GenAI apps, and embedded AI features.
- Data Poisoning Defense (None): Protect training data and fine-tuning pipelines from manipulation. Detect poisoned datasets, adversarial training samples, and vector store tampering.
Runtime (3 full, 2 partial of 5)

- Prompt Injection (Full): Detect and block prompt injection attacks, both direct (user-crafted) and indirect (embedded in retrieved content), before they reach the model.
- Output Guardrails (Full): Inspect and filter model outputs for policy violations, unsafe content, leaked secrets, and off-topic responses before they reach the end user.
- Runtime Monitoring (Partial): Continuously monitor live AI inference traffic for anomalies, attacks, and policy violations, with real-time alerting and automated response actions.
- DLP (Partial): Prevent sensitive data (PII, PHI, PCI, secrets, source code, IP) from being sent to AI models via prompts, file uploads, or copy/paste actions.
- Toxicity/Safety (Full): Filter harmful content categories (hate speech, violence, self-harm, sexual content, CSAM), distinct from prompt injection, covering both inputs and outputs.
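The input- and output-side checks above are exposed through Bedrock's ApplyGuardrail API, which evaluates arbitrary text against a configured guardrail without invoking a model. A minimal sketch of handling its response follows; the response shape matches the ApplyGuardrail API, but the sample responses in the test are fabricated for illustration.

```python
# Interpret an ApplyGuardrail response from the boto3 "bedrock-runtime"
# client. When the guardrail intervenes, it returns replacement text
# (a blocked-message or masked output) in the "outputs" list.
def resolve_guarded_text(original: str, response: dict) -> tuple:
    """Return (intervened, text_to_show)."""
    intervened = response.get("action") == "GUARDRAIL_INTERVENED"
    if intervened and response.get("outputs"):
        # Show the guardrail's replacement text instead of the raw content.
        return True, response["outputs"][0]["text"]
    return intervened, original

# Usage against a live guardrail (requires AWS credentials):
#   import boto3
#   rt = boto3.client("bedrock-runtime")
#   resp = rt.apply_guardrail(
#       guardrailIdentifier="gr-id", guardrailVersion="1",
#       source="OUTPUT",  # or "INPUT" for prompt-side checks
#       content=[{"text": {"text": model_reply}}],
#   )
#   blocked, shown = resolve_guarded_text(model_reply, resp)
```

Because the same guardrail can be applied with `source="INPUT"` or `source="OUTPUT"`, one configuration covers both the prompt-injection and output-filtering rows above.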
Governance (2 full, 3 partial of 5)

- Compliance/Governance (Partial): Map AI risks to regulatory frameworks (EU AI Act, NIST AI RMF, OWASP LLM Top 10, ISO 42001). Generate audit trails, policy reports, and compliance dashboards.
- Observability (Partial): Track model behavior over time (latency, error rates, usage patterns, drift detection, and performance degradation) with dashboards and alerting.
- Hallucination Detection (Full): Detect when AI generates fabricated, unsupported, or factually incorrect content. Validate responses against source documents or formal logic rules.
- Cost & Budget (Partial): Control AI spend with per-team/per-key token budgets, rate limiting, spend alerts, and denial-of-wallet attack prevention across LLM providers.
- Model Explainability (Full): Explain why an AI system produced a specific output, as required by the EU AI Act for high-risk systems. Includes reasoning chains, feature attribution, and decision audits.
Emerging (2 full, 1 partial of 4)

- Agentic AI (Partial): Secure autonomous AI agents by monitoring tool calls, enforcing permission boundaries, detecting goal hijacking, and preventing unintended autonomous actions.
- MCP/Tool Security (None): Secure Model Context Protocol servers, agent-to-tool connections, and tool invocations. Detect MCP rug pulls, tool poisoning, and unauthorized tool access.
- RAG Security (Full): Validate that retrieval-augmented generation outputs are grounded in source documents. Protect vector stores from poisoning and detect retrieval manipulation.
- Multimodal Security (Full): Extend security controls beyond text to images, audio, video, and code. Detect harmful visual content, voice-based attacks, and cross-modal evasion techniques.
Testing (0 full, 1 partial of 2)

- Red Teaming (None): Proactively test AI systems with adversarial attacks: automated multi-turn prompt attacks, jailbreaks, model inversion, and goal hijacking simulations.
- API Security (Partial): Secure AI API endpoints with authentication, rate limiting, input validation, and threat detection. Prevent model extraction, denial-of-service, and API abuse.
Top Peers in AI Security & Defense

1. Palo Alto Prisma AIRS (88%)
2. Pillar Security (80%)
3. Cisco AI Defense (78%)