Prompt caching is an LLM serving optimization where the system reuses previously computed representations for an identical (or partially identical) prompt prefix, reducing repeated computation and improving time-to-first-token (TTFT) and throughput. Instead of reprocessing the same system prompt, tool instructions, or long documents for every request, the server caches intermediate results and applies them when the prefix repeats.
What is Prompt Caching?
When an LLM receives a prompt, it performs a “prefill” (prompt processing) pass over all input tokens, producing internal states (including attention key/value tensors) that will be used during decoding. In many applications, a large part of the prompt is static across users and sessions: system policies, tool schemas, formatting rules, or a fixed context document. Prompt caching stores the model’s computed states for that static prefix.
There are two common forms:
- Exact prefix caching: cache is reused only when the token sequence matches exactly.
- Segmented/prefix-tree caching: cache is reused for shared segments among many prompts, improving hit rate in multi-tenant systems.
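The two forms above can be sketched in a few lines. This is a minimal illustration, not any particular server's implementation: an exact-match cache is simply a dictionary keyed by the full token tuple, while a trie lets many prompts reuse the longest shared cached segment. The class and method names (`PrefixTrieCache`, `longest_cached_prefix`) are hypothetical, and the stored "state" stands in for real prefill tensors.

```python
class PrefixTrieCache:
    """Maps token sequences to cached prefill state, reusing shared segments."""

    def __init__(self):
        self.root = {}  # token -> child node; "_state" marks a cached prefix end

    def insert(self, tokens, state):
        node = self.root
        for tok in tokens:
            node = node.setdefault(tok, {})
        node["_state"] = state  # placeholder for the real KV tensors

    def longest_cached_prefix(self, tokens):
        """Return (match_length, state) for the longest cached prefix of tokens."""
        node, best_len, best_state = self.root, 0, None
        for i, tok in enumerate(tokens):
            if tok not in node:
                break
            node = node[tok]
            if "_state" in node:
                best_len, best_state = i + 1, node["_state"]
        return best_len, best_state
```

With this structure, a request whose prompt starts with an already-cached system prefix only needs prefill for the remaining suffix tokens; exact prefix caching is the special case where only a full-length match counts as a hit.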
Prompt caching is related to (but distinct from) the per-request KV cache. The KV cache holds the key/value tensors produced while decoding a single request, whereas prompt caching reuses prefill results (often the prefix's KV entries) across many requests.
Where it’s used and why it matters
Prompt caching matters in high-traffic chatbots, agent runtimes, and RAG systems where prompts can be large and repetitive. It reduces GPU compute spent on prompt processing, leading to lower latency and cost. It is especially valuable when tool definitions are long, when the system prompt contains extensive policies, or when many requests share a common “prefix” (for example, an enterprise assistant with a fixed instruction block).
Operational considerations include cache eviction, memory usage, privacy boundaries (do not share user-specific prefixes across tenants), and versioning (invalidate caches when prompts or tool schemas change).
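These operational concerns can be combined in one small sketch, assuming a hypothetical key scheme: a bounded LRU cache whose keys include a tenant identifier and a prompt version, so entries never hit across tenants and a changed prompt misses naturally. `ScopedLRUCache` and its fields are illustrative names, not a real library API.

```python
from collections import OrderedDict

class ScopedLRUCache:
    """LRU-bounded prompt cache with tenant isolation and version scoping."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._entries = OrderedDict()  # (tenant, version, prefix_hash) -> state

    def get(self, tenant_id, prompt_version, prefix_hash):
        key = (tenant_id, prompt_version, prefix_hash)
        if key not in self._entries:
            return None  # miss: a different tenant or version never hits
        self._entries.move_to_end(key)  # mark as most recently used
        return self._entries[key]

    def put(self, tenant_id, prompt_version, prefix_hash, state):
        key = (tenant_id, prompt_version, prefix_hash)
        self._entries[key] = state
        self._entries.move_to_end(key)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict the least recently used
```

Bounding entry count (or, in practice, total KV memory) handles eviction; putting the tenant and version inside the key handles privacy boundaries and invalidation without any explicit cache-clearing step.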
Examples
- Tool-heavy agent: Cache the prefix that includes all tool schemas and safety rules so each task starts faster.
- Customer support bot: Cache the system prompt + brand style guide shared across all sessions.
- RAG application: Cache the static instruction prefix even if retrieved documents differ per request.
FAQs
Does prompt caching improve tokens-per-second? It mostly improves TTFT by accelerating prefill; decoding speed is more influenced by KV cache and model compute.
Is prompt caching safe across users? Only if the cached prefix contains no user-private data and the system enforces tenant isolation.
How do caches get invalidated? Typically by hashing the prefix tokens plus model/version identifiers; any change creates a new cache key.
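The invalidation scheme described above can be sketched as follows, with hypothetical inputs: hash the prefix token IDs together with model and prompt-template identifiers, so any change to the model, the template, or the tokens yields a different key and old entries simply stop matching.

```python
import hashlib

def prompt_cache_key(prefix_token_ids, model_id, template_version):
    """Derive a cache key from the prefix tokens plus version identifiers."""
    h = hashlib.sha256()
    h.update(model_id.encode())          # new model -> new key space
    h.update(template_version.encode())  # edited prompt template -> new key space
    for tok in prefix_token_ids:
        h.update(tok.to_bytes(4, "little"))  # 4 bytes covers typical vocab sizes
    return h.hexdigest()
```

Identical inputs always map to the same key, so repeated requests hit the cache, while bumping the template version after a prompt edit retires every stale entry at once.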