Hallucination
Definition
Output of a language model that is plausibly worded but factually wrong or not supported by the provided sources. Hallucinations are not bugs in any narrow sense; they are a structural property of probabilistic language models, which predict word sequences, not truth.
Noise vs. Signal
"Hallucination" suggests a pathological special case that can be fixed. In fact, hallucination is the default mode of a language model; what appears as a correct answer is a hallucination that, by chance or via suitable context, happens to coincide with reality. Consequence: "we reduce hallucinations" is the right phrasing; "we prevent hallucinations" is not. Techniques like RAG, constrained decoding or chain-of-thought lower the rate, they do not eliminate it.
The right question
Not: "How do we prevent hallucinations?" But: "At which points in our application path is a wrong statement tolerable, at which not, and which verification layer — source attribution, human sign-off, external fact-checking — kicks in before the result reaches the end customer?"