Artificial Intelligence

Prompt Attack Surface

The total set of potential vulnerabilities in an LLM application that can be exploited through prompt-based attacks, including injection, leaking, and jailbreaking vectors.

Why It Matters

Understanding your prompt attack surface is the first step in securing LLM applications. Every user input point is a potential attack vector.

Example

Mapping every point where user-controlled input can reach the LLM: direct chat messages, file uploads that get parsed into prompts, URL content fetched at runtime, and tool outputs that feed back into the context window.
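One way to make that mapping concrete is an explicit inventory of entry points, each annotated with whether its content is attacker-controlled and whether it is filtered before reaching the prompt. A minimal sketch (all names and flags here are illustrative, not a real API):

```python
from dataclasses import dataclass

@dataclass
class EntryPoint:
    name: str        # where the input originates (illustrative labels)
    untrusted: bool  # can an attacker control this content?
    sanitized: bool  # is it filtered before reaching the prompt?

# Hypothetical attack-surface inventory for an LLM application
ATTACK_SURFACE = [
    EntryPoint("direct_chat", untrusted=True, sanitized=True),
    EntryPoint("file_upload_parser", untrusted=True, sanitized=False),
    EntryPoint("url_fetcher", untrusted=True, sanitized=False),
    EntryPoint("tool_output_loop", untrusted=True, sanitized=False),
]

def unmitigated(surface):
    """Entry points that accept untrusted content with no filtering."""
    return [p.name for p in surface if p.untrusted and not p.sanitized]

print(unmitigated(ATTACK_SURFACE))
# Flags every untrusted, unfiltered channel for review
```

An inventory like this makes gaps visible: indirect channels such as file parsers and fetched URLs are often overlooked relative to the direct chat box, even though they carry the same injection risk.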

Think of it like...

Like mapping all the doors and windows in a building for a security assessment — every entry point needs to be evaluated and protected.

Related Terms