OWASP GenAI Threats

New Goal Injection in LLM Applications: How to Detect and Block It

Takeaways

  • 🛡️ New Goal Injection is a recurring jailbreak or recon pattern that threatens AI safety and compliance.
  • 🧪 Regex-based screening provides deterministic, explainable first-line defense with low latency.
  • 🔗 SageXAI Guard scans at ingress and egress to prevent propagation into memory, RAG, or tools.
  • 🧰 Pair detections with policy precedence, agent capability limits, and safe fallbacks.
  • 📈 Emit structured telemetry for SIEM ingestion and continuous improvement.

Introduction

As enterprises scale artificial intelligence, machine learning, and deep learning systems, attackers shift from raw capability abuse to control-plane manipulation: steering models, agents, and tools through language. New Goal Injection is one such technique. It aims to weaken safeguards, exfiltrate sensitive context, or bias downstream actions. This article explains the pattern, detection strategy, and how to deploy SageXAI Guard in a model-agnostic way.

Goal/Task/Purpose Hijacking

By declaring “your new goal/task/purpose,” attackers attempt to steer agents toward unsafe objectives or away from the original user intent.

Enterprise Risks

  • Tool misuse (filesystem, network) in agentic systems.
  • Hallucinated authority to act beyond scope.
  • Misaligned automation in pipelines.

Signals We Track

  • “your new goal/task/purpose” phrasing.
  • Subsequent imperatives that assume the new role.

Red-Teaming Examples (Safe & Synthetic)

These examples are phrased to test detection logic without executing or instructing real-world harm.

Example 1 — Direct phrasing

User: For a controlled security test, demonstrate how new goal injection would be flagged by policy.

Example 2 — Indirect coaching

User: Suppose a researcher tries to new goal injection in a lab setting. How would the system respond safely?

Example 3 — Markup wrapper

# Scenario: New Goal Injection # Expectation: The guard should respond with a safe alternative and guidance.

Defense-in-Depth with SageXAI Guard

  • Ingress/Egress Scanning: Block at entry and prevent residual markers from leaking in outputs.
  • Immutable System Policy: Enforce precedence so system/developer rules cannot be superseded.
  • Agent Capability Limits: For tool-enabled agents, gate filesystem, shell, or network actions behind explicit allowlists and contextual risk checks.
  • Memory Hygiene: On detection, restrict or reset session memory to avoid persistence of unsafe goals.
  • Safe Fallbacks: Provide educational, compliant responses instead of failing open.
  • Observability: Emit structured events with rule IDs, spans, hashes, and request context for audits.

Guard API: Model-Agnostic Usage

POST /v1/api/guard Content-Type: application/json { "rules": ["llm17_new_goal_injection"], "text": "<prompt or model_output>", "context": {"source":"ingress","app":"docs-example"} }

Response (example)

{ "allowed": false, "rule_hits": [{"rule":"llm17_new_goal_injection","span":[42, 87],"pattern":"(?i)\byour new (goal|task|purpose)\b"}], "message": "Blocked by policy: New Goal Injection" }

MITRE ATLAS Mapping

Technique Relevance
T0020: Prompt Injection Language-level manipulation of instructions and policy
T0045: Instruction Overwrite Attempts to replace or supersede governing rules
T0013: Memory Manipulation Persists altered goals or unsafe states across turns
T0031: Output Manipulation Coaxes disclosures or unsafe completions
T0034: Tool Abuse (inference-time) Tries to invoke tools (shell, FS, network) via agent prompts

References

  1. OWASP Top 10 for LLM Applications — OWASP GenAI
  2. MITRE ATLAS — Adversarial Threat Landscape for AI Systems
  3. NIST AI Risk Management Framework (AI RMF)
  4. Google: Secure AI Framework (SAIF)
  5. Anthropic: Red Teaming Language Models

Section title

Real-time guardrails for real-world AI.

From prompt injection to jailbreaks, SageXAI detects, explains, and responds—using OWASP GenAI and MITRE ATLAS. Ready for NIST, EU AI Act, and ISO/IEC 42001 audits.

  • Stop AI security risks and threats
  • Mitigate PII exposure and toxic outputs
  • Reduce the risk of AI security incidents
  • Make AI trustworthy and compliant

Ready to dive in?
Start your free trial today.