When an AI model generates information that sounds confident and plausible but is factually wrong or entirely fabricated. The model isn't "lying" — it's pattern-matching its way to fluent text without a concept of truth. Fake citations, invented statistics, and non-existent API methods are common examples.
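For instance, a model might confidently suggest calling json.parse, which exists in JavaScript but not in Python's json module. A minimal sketch of a sanity check, assuming the model's suggestion arrives as a plain method name (the names here are illustrative):

```python
# Sanity-checking a model-suggested API call before trusting it.
# "json.parse" is a classic hallucination: JavaScript has JSON.parse,
# but the real Python function is json.loads.
import json

suggested = "parse"  # hypothetical name the model produced

if hasattr(json, suggested):
    print(f"json.{suggested} exists")
else:
    print(f"json.{suggested} does not exist; did the model mean json.loads?")
```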
Why it matters
Hallucination is one of the biggest trust problems in AI today. It's why you should always verify critical facts in AI outputs against independent sources, and why techniques like retrieval-augmented generation (RAG) and grounding exist: they anchor the model's answers in retrieved source material rather than its own memory.
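To make the grounding idea concrete, here's a toy sketch of the RAG pattern: retrieve relevant text, then constrain the prompt to it. The documents, the word-overlap retriever, and the prompt wording are all illustrative assumptions, not any particular library's API.

```python
# Toy illustration of retrieval-augmented generation (RAG): instead of
# letting the model answer from memory, retrieve source text and tell
# the model to answer only from it. Documents and scoring are made up.

documents = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "The Eiffel Tower is about 330 metres tall including antennas.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Naive retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, documents))
    # The "answer only from the context" instruction is the grounding step.
    return (
        "Answer using ONLY the context below. If the answer is not in "
        "the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("When was the Eiffel Tower completed?"))
```

In a real system the toy retriever would be replaced by a vector search over embedded documents, but the principle is the same: the model is asked to restate what the sources say, not to recall facts on its own.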