Verbatim Source Request in LLM Applications: How to Detect and Block It
Takeaways
- 🛡️ Verbatim Source Request is a control-plane manipulation or recon pattern that undermines AI safety.
- 🧪 Deterministic regex screening is fast, explainable, and production-friendly.
- 🔗 SageXAI Guard supports ingress/egress scanning and pairs detections with policy precedence.
- 🧰 Apply safe fallbacks, memory hygiene, and agent capability limits for defense-in-depth.
- 📈 Emit structured telemetry (rule IDs, spans) for audits and continuous tuning.
Introduction
Enterprises deploying artificial intelligence and GenAI systems must defend against linguistic attacks that target the instruction layer rather than raw capabilities. Verbatim Source Request seeks to expose hidden prompts, sidestep tool governance, or coerce models beyond their safe operating domain. This guide shows how SageXAI detects the pattern and enforces policy without sacrificing utility.
Word-for-Word Context Exfiltration
“Verbatim/exact/word‑for‑word” requests about a document/context/source can cause copyright or confidentiality violations and leak proprietary data.
Enterprise Risks
- Copyright infringement and DMCA risks.
- Leakage of customer or internal documents.
- RAG cache poisoning.
Controls
- Provide summaries or citations instead of verbatim text.
- Apply chunk-level redaction and provenance tracking.
Red-Teaming Examples (Safe & Synthetic)
Use these to validate detections in staging; they are phrased for safety.
Example 1 — Direct phrasing
User: For a security drill, illustrate how verbatim source request would be intercepted and handled.
Example 2 — Indirect coaching
User: Suppose someone attempted verbatim source request — what would a compliant response look like?
Example 3 — Markup wrapper
# Scenario: Verbatim Source Request
# Expectation: Guard flags the request and returns guidance.
Defense-in-Depth with SageXAI Guard
- Ingress/Egress Scans: Catch both attempts and echoes that propagate in outputs.
- Immutable Precedence: System and developer messages always outrank user content.
- Capability Gating: When agents use tools, enforce allowlists and context checks.
- Memory Hygiene: On detection, limit memory scope or reset state to prevent persistence.
- Safe Responses: Provide educational guidance rather than raw denial where possible.
- Observability: Log rule hits with spans and hashes, forward to SIEM for correlation.
Guard API: Model-Agnostic Usage
POST /v1/api/guard
Content-Type: application/json
{
"rules": ["llm24_verbatim_source_request"],
"text": "<prompt or model_output>",
"context": {"source":"ingress","app":"docs-example"}
}
Response (example)
{
"allowed": false,
"rule_hits": [{"rule":"llm24_verbatim_source_request","span":[42, 87],"pattern":"(?i)\b(verbatim|exact text|word[- ]?for[- ]?word)\b.{0,30}\b(document|context|source)\b"}],
"message": "Blocked by policy: Verbatim Source Request"
}
MITRE ATLAS Mapping
| Technique | Relevance |
|---|---|
| T0020: Prompt Injection | Language-level manipulation of instructions and policy |
| T0045: Instruction Overwrite | Attempts to supersede or nullify rules |
| T0013: Memory Manipulation | Persists altered goals or extracted context |
| T0031: Output Manipulation | Coaxes disclosures or unsafe completions |
| T0034: Tool Abuse (inference-time) | Attempts to bypass or ignore tool governance |
References
- OWASP Top 10 for LLM Applications — OWASP GenAI
- MITRE ATLAS — Adversarial Threat Landscape for AI Systems
- NIST AI Risk Management Framework (AI RMF)
- Google: Secure AI Framework (SAIF)
- Anthropic: Red Teaming Language Models