Grounding

The degree to which an LLM's response is based on retrieved evidence rather than its training data or fabrication. A "well-grounded" response cites specific retrieved documents and doesn't introduce claims unsupported by the provided context. Grounding is the primary goal of RAG, which tries to reduce hallucination by giving the model actual source material to work from.

Why it matters for writers: If you're evaluating RAG system quality, grounding is one of the key metrics. Can the model's response be traced back to specific retrieved passages? Does it introduce unsupported claims? These questions are central to the Context Collapse Mitigation Benchmark research, which measures whether curated llms.txt content produces better-grounded responses than raw HTML. (Spoiler: the results are nuanced. They're always nuanced.)

Related terms: Hallucination · Retrieval-Augmented Generation · Reranking