Precision
Of everything your retrieval system returned, how much of it was actually useful? That's precision. If you search for "account recovery" and get ten results, but only six are relevant, your precision is 0.6. The other four are noise. High precision means the system isn't wasting your time with junk; low precision means you're sifting through garbage to find the answer.
Precision is usually reported at a specific cutoff: Precision@5 means "of the top 5 results, how many were relevant?" Precision@10 means the same for the top 10. The cutoff matters because users rarely scroll past the first few results. A system with great Precision@5 but terrible Precision@20 might be perfectly fine if nobody looks past page one.
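The Precision@k computation described above fits in a few lines. This is an illustrative sketch, not code from any particular library; the function and variable names are made up for the example:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant.

    retrieved: ranked list of document IDs returned by the system
    relevant:  set of document IDs that are actually relevant
    """
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant) / len(top_k)

# The "account recovery" example: ten results, six relevant.
results = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
relevant_docs = {"d1", "d2", "d4", "d6", "d7", "d9"}
print(precision_at_k(results, relevant_docs, 10))  # 0.6
print(precision_at_k(results, relevant_docs, 5))   # 0.6 (3 of the top 5)
```

Note that the denominator is always k (or however many results came back, if fewer), never the total number of relevant documents; that distinction is what separates precision from recall.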
The trap: precision alone doesn't tell you if the system found everything. A system that returns one result and that result is correct has perfect precision (1.0) but might have missed twenty other relevant documents. That's where recall comes in. The tension between the two is the fundamental tradeoff in retrieval.
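The one-result scenario makes the tradeoff concrete. A hypothetical sketch (helper names are illustrative), assuming 21 relevant documents exist and the system returns exactly one of them:

```python
def precision(retrieved, relevant):
    """Relevant fraction of what was returned."""
    return len(set(retrieved) & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were returned."""
    return len(set(retrieved) & relevant) / len(relevant) if relevant else 0.0

relevant_docs = {f"doc{i}" for i in range(21)}  # 21 relevant docs exist
retrieved = ["doc0"]                            # system returns one, and it's relevant

print(precision(retrieved, relevant_docs))  # 1.0 -- looks perfect
print(recall(retrieved, relevant_docs))     # ~0.048 -- missed 20 of 21
```

Reporting either number alone hides half the story, which is why the two are almost always quoted together.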
Why it matters for writers: If you're writing content that feeds a RAG pipeline, low precision means your content is getting retrieved for the wrong queries: structurally similar but semantically different topics are confusing the system. Better metadata, clearer section headings, and more distinct document scoping all improve precision by helping the retrieval system distinguish your document from near-misses.
Related terms: Recall · Similarity Search · Reranking