MRR

Mean Reciprocal Rank. How far down the list does the user scroll before hitting something useful?

If the first result is relevant, that query's reciprocal rank is 1/1 = 1.0. If the first relevant result is in position 3, it's 1/3 ≈ 0.33. If it's in position 10, it's 1/10 = 0.1. If no relevant result shows up at all, the query scores 0. Average those scores across all your test queries and you get MRR. A perfect MRR of 1.0 means every query's best answer was the very first result.
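The calculation above is short enough to sketch directly. This is a minimal illustration, not a reference implementation; the function names and the toy queries are invented for the example, and each query is assumed to be a pair of (ranked result IDs, set of relevant IDs):

```python
def reciprocal_rank(ranked_ids, relevant_ids):
    """Return 1/position of the first relevant result, or 0.0 if none appears."""
    for position, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / position
    return 0.0

def mean_reciprocal_rank(queries):
    """Average the reciprocal ranks over (ranked_ids, relevant_ids) pairs."""
    return sum(reciprocal_rank(ranked, rel) for ranked, rel in queries) / len(queries)

# Three hypothetical test queries: first relevant hit at positions 1, 3, and 10.
queries = [
    (["a", "b", "c"], {"a"}),      # reciprocal rank = 1/1
    (["x", "y", "a"], {"a"}),      # reciprocal rank = 1/3
    (list("bcdefghija"), {"a"}),   # reciprocal rank = 1/10
]
print(round(mean_reciprocal_rank(queries), 3))  # (1 + 1/3 + 0.1) / 3 ≈ 0.478
```

Note the 0.0 for queries with no relevant hit: leaving those queries out would silently inflate the average.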

MRR only cares about the first relevant result. It doesn't reward finding multiple relevant documents, and it doesn't care about order beyond "where's the first good hit?" This makes it useful for scenarios where the user just needs one answer--a FAQ bot, a search bar, a quick lookup--but less useful when completeness matters. For that, you need recall or NDCG.

Why it matters for writers: MRR is the metric closest to the user's experience. When someone types a question and the first result is wrong, they notice. When the first result is right, everything else is invisible. If your content consistently ranks second or third instead of first for its target queries, MRR will catch it even if other metrics look healthy. The fix is usually sharpening the document's focus: stronger titles, clearer opening paragraphs, and metadata that leaves no ambiguity about what the document covers.

Related terms: NDCG · Precision · Similarity Search