Caching
Storing the results of expensive operations (HTTP requests, API calls, computations) so they can be reused without repeating the operation. In the context of tools like LlmsTxtKit, caching means that once a site's llms.txt file has been fetched and parsed, subsequent requests for the same file can be served from cache instead of triggering another HTTP request — reducing latency, avoiding rate limits, and sidestepping WAF blocks.
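The fetch-once pattern can be sketched in a few lines. This is a generic illustration, not LlmsTxtKit's actual API; the names `make_cached_fetcher` and `fetch` are hypothetical, with `fetch` standing in for the real HTTP request.

```python
from typing import Callable

def make_cached_fetcher(fetch: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a fetch function so each URL is only fetched once (hypothetical sketch)."""
    cache: dict[str, str] = {}

    def cached_fetch(url: str) -> str:
        if url not in cache:          # cache miss: perform the expensive fetch
            cache[url] = fetch(url)
        return cache[url]             # cache hit: reuse the stored result

    return cached_fetch
```

Wrapping any fetch function this way means the second and later calls for the same URL never touch the network — which is exactly the behavior that needs documenting, since users may be surprised to receive a stale copy.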
It's the engineering equivalent of writing the answer on your hand so you don't have to look it up again. Crude but effective.
Why it matters for writers: Caching is a system design concept that creates documentation work wherever it appears. If a tool uses caching, users need to understand three things: how long cached results remain valid (the TTL, or time to live); how to force a fresh fetch when needed; and what behavior to expect when the cached version differs from the live one. These questions come up in every tool that interacts with external data, and they generate a surprising number of support tickets when the docs don't cover them.
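The TTL and force-refresh behaviors described above can be made concrete with a small sketch. This is an illustrative cache under assumed semantics, not LlmsTxtKit's implementation; the class name `TtlCache` and the `force_refresh` parameter are hypothetical.

```python
import time
from typing import Callable

class TtlCache:
    """Hypothetical TTL cache: entries expire after `ttl` seconds, and
    force_refresh=True bypasses the cache to fetch a fresh copy."""

    def __init__(self, fetch: Callable[[str], str], ttl: float = 3600.0):
        self._fetch = fetch
        self._ttl = ttl
        self._store: dict[str, tuple[float, str]] = {}  # url -> (fetched_at, body)

    def get(self, url: str, force_refresh: bool = False) -> str:
        now = time.time()
        entry = self._store.get(url)
        if entry is not None and not force_refresh and now - entry[0] < self._ttl:
            return entry[1]            # fresh cache hit: no network call
        body = self._fetch(url)        # miss, expired, or forced: refetch
        self._store[url] = (now, body)
        return body
```

Each branch here corresponds to a question the docs must answer: the `ttl` check is "how long are results valid," and `force_refresh` is "how do I get a fresh copy."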
Related terms: Web Application Firewall · llms.txt · Context Generation