# AgentGuard — LLM guardrails (FastAPI)
AgentGuard is an open-source FastAPI service that sits between your application and any LLM provider, giving teams a single HTTP control plane for LLM guardrails: heuristic prompt-injection defense, PII and secret checks, output validation, versioned prompt packages, policy-as-code, retrieval grounding, and agent action governance with risk scoring and optional human approval.
Guardrails run on both input and output using transparent heuristics you can audit in code, not a black-box model. Requires Python 3.11+.
- Quickstart — run locally in a few commands
- Comparison — vs Guardrails AI, NeMo Guardrails, LlamaGuard, Presidio, Rebuff
- Architecture — system design
- GitHub repository — source, issues, CI
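Because the input checks are plain heuristics rather than a classifier model, they can be read and audited directly. As an illustration of that style only (the actual check names, phrase lists, and thresholds in AgentGuard may differ), a minimal prompt-injection phrase check could look like:

```python
import re

# Hypothetical phrase list for illustration; AgentGuard's real
# heuristics, names, and thresholds may differ.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def injection_score(text: str) -> float:
    """Return a 0.0-1.0 score from the fraction of known phrases that match."""
    lowered = text.lower()
    hits = sum(1 for pattern in INJECTION_PATTERNS if re.search(pattern, lowered))
    return min(1.0, hits / len(INJECTION_PATTERNS))

def evaluate_input(text: str, threshold: float = 0.3) -> dict:
    """Sketch of an input-guardrail decision: block when the score crosses a threshold."""
    score = injection_score(text)
    return {"allowed": score < threshold, "score": score, "check": "prompt_injection"}
```

In a real deployment the client would not run this locally; it would POST the user input to `/v1/guardrails/evaluate-input` and act on the returned decision.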
## Key capabilities
| Module | Purpose | Endpoint |
|---|---|---|
| AI Gateway | AuthN/AuthZ, tenant isolation, rate limiting, model routing | /v1/gateway/complete |
| Input Guardrails | LLM guardrails on user input (7 checks) | /v1/guardrails/evaluate-input |
| Prompt Framework | Versioned prompt packages with linting | /v1/prompts/compile |
| Retrieval Grounding | Citation packaging and confidence scoring | /v1/retrieval/search |
| Output Validation | LLM output validation (7 checks) | /v1/outputs/validate |
| Action Governance | Tool allowlist, risk scoring, human-in-the-loop (HITL) approval | /v1/actions/authorize |
| Policy Engine | Tenant/use-case/role/channel policies | /v1/policies/evaluate |
| Observability | Tracing, metrics, audit, evaluation suites | /v1/evals/run |
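Action governance combines an allowlist with a risk score and an approval gate before a tool call is executed. A minimal sketch of that flow, assuming illustrative tool names, risk weights, and threshold (real AgentGuard policies are configured per tenant, not hard-coded):

```python
# Illustrative allowlist with per-tool risk weights; these values are
# assumptions for the sketch, not AgentGuard's shipped policy.
TOOL_RISK = {"search_docs": 0.1, "send_email": 0.6, "delete_record": 0.9}
APPROVAL_THRESHOLD = 0.5  # scores at or above this are routed to a human reviewer

def authorize_action(tool: str, args: dict) -> dict:
    """Sketch of /v1/actions/authorize: deny off-list tools, gate risky ones."""
    if tool not in TOOL_RISK:
        return {"decision": "deny", "reason": "tool not on allowlist"}
    risk = TOOL_RISK[tool]
    if risk >= APPROVAL_THRESHOLD:
        # High-risk call: hold the action and request human approval.
        return {"decision": "pending_approval", "risk": risk}
    return {"decision": "allow", "risk": risk}
```

A client would call `/v1/actions/authorize` with the proposed tool call and only execute it on an `allow` decision; `pending_approval` parks the action until a reviewer signs off.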