Hallucination
When an LLM generates text that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data. The model isn't "lying"; it doesn't have intentions. It's producing the most statistically probable next tokens, and sometimes those tokens form confident-sounding nonsense. Classic examples include invented citations with realistic-looking DOIs, fabricated statistics delivered with decimal-point precision, and fictional historical events described with the authority of a tenured professor.
Think of it as an extremely well-read friend with no ability to distinguish between something they actually know and something they feel like they know. We all have that friend. Now imagine giving that friend a professional microphone.
Why it matters for writers: Hallucinations are the single biggest trust problem with AI-generated content. Any workflow that uses LLMs for drafting, summarizing, or research must include a verification step. Not "should include"; must include. This is one of the problems RAG systems (see Retrieval-Augmented Generation) try to address by grounding responses in retrieved source material, though they reduce the problem rather than eliminating it.
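What might a verification step look like in practice? Here's a minimal sketch of one naive approach: flag any generated claim that shares too little vocabulary with the retrieved sources it's supposedly grounded in. The function name, threshold, and word-overlap heuristic are all illustrative assumptions, not a standard API; real grounding checks use far more sophisticated methods.

```python
# Hypothetical grounding check (illustrative sketch, not a library API):
# flag generated claims whose words barely overlap with retrieved sources.
def is_grounded(claim: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Return True if most of the claim's words appear in some source."""
    claim_words = set(claim.lower().split())
    if not claim_words or not sources:
        return False
    best = max(
        len(claim_words & set(src.lower().split())) / len(claim_words)
        for src in sources
    )
    return best >= threshold

sources = ["The report was published in 2021 by the research team."]
print(is_grounded("The report was published in 2021.", sources))  # True
print(is_grounded("The report won a Pulitzer Prize.", sources))   # False
```

Even a crude filter like this catches the fabricated-specifics failure mode described above; anything it flags goes to a human for fact-checking rather than straight into a draft.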
Related terms: Grounding · Retrieval-Augmented Generation · Large Language Model