
What is Hallucination?

In AI, hallucination refers to a model generating information that sounds plausible but is actually incorrect, fabricated, or nonsensical. The model presents these false claims with the same confidence as true ones.

Why AI Hallucinates

AI language models don't actually "know" facts—they predict likely text based on patterns. When the patterns suggest something plausible, the AI may generate it even if it's false.
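The pattern-matching behavior described above can be sketched with a toy bigram model. The corpus, prompt, and greedy decoding here are illustrative assumptions, far simpler than a real language model, but they show how pure pattern-following can produce a fluent, confident, and false statement:

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees patterns, never verified facts.
corpus = (
    "the eiffel tower is in paris . "
    "the empire state building was built in 1931 . "
    "the chrysler building was built in 1931 ."
).split()

# Count bigram frequencies: approximate P(next | current) by counts.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def predict(token):
    """Return the most frequent next token under the bigram counts."""
    return bigrams[token].most_common(1)[0][0]

# Greedy completion of a prompt the corpus never answers truthfully:
tokens = ["the", "eiffel", "tower", "was", "built", "in"]
tokens.append(predict(tokens[-1]))  # the model fills the gap from patterns
print(" ".join(tokens))  # → the eiffel tower was built in 1931
```

Because "built in 1931" is the most common pattern after "in", the model confidently completes the sentence with a wrong date (the Eiffel Tower opened in 1889), which is hallucination in miniature.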

Common Types of Hallucinations

**Fabricated facts**: Inventing statistics, dates, or events

  • "The Eiffel Tower was built in 1852" (actually 1889)

**Made-up sources**: Creating citations for non-existent papers

  • "According to Johnson et al. (2019)..." (paper doesn't exist)

**Confident errors**: Providing wrong answers with certainty

  • Giving incorrect code that looks correct

**Invented details**: Adding false specifics to general information

  • Including fictional features in product descriptions

Why This Happens

1. **Training limitations**: Models learn statistical patterns, not verified facts

2. **No fact database**: There is no internal database of truths being consulted

3. **Optimization for fluency**: Models are trained to generate coherent text, not accurate text

4. **Missing data**: When information is lacking, models fill the gaps with plausible-sounding guesses

How to Protect Yourself

Verification

Always check important facts against reliable sources.

Ask for sources

Request citations, then verify they exist.
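One way to make that verification step systematic is to pull the citations out of an answer before checking them by hand. This is a minimal sketch: the regex, the sample answer, and the "Johnson et al." reference are all illustrative, and the snippet only extracts candidates, it does not confirm that any paper exists:

```python
import re

# Match "Author (Year)" and "Author et al. (Year)" style citations so
# each one can be looked up manually. The pattern is a rough heuristic.
CITATION = re.compile(r"\b([A-Z][a-z]+(?: et al\.)?)\s*\((\d{4})\)")

def extract_citations(text):
    """Return (author, year) pairs worth verifying by hand."""
    return CITATION.findall(text)

answer = "According to Johnson et al. (2019), accuracy rose 40%."
print(extract_citations(answer))  # → [('Johnson et al.', '2019')]
```

Each extracted pair still needs to be searched for in a real index; an AI can fabricate a citation that matches this pattern perfectly.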

Watch for warning signs

Very specific statistics, obscure references, or answers that perfectly fit your question can indicate hallucinations.
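The "very specific statistics" warning sign can even be screened for mechanically. This is a hedged heuristic, not a detector: the regex and example text are assumptions, and a flagged number may well be accurate, it simply deserves a source check:

```python
import re

# Flag suspiciously precise percentages (e.g. "73.4%") in an unsourced
# answer; precision without a citation is a classic hallucination tell.
PRECISE_STAT = re.compile(r"\b\d+\.\d+%")

def precise_stats(text):
    """Return overly precise percentages that deserve verification."""
    return PRECISE_STAT.findall(text)

print(precise_stats("Adoption grew 73.4% in 2021."))  # → ['73.4%']
```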

Use appropriate tools

For current information, use AI with internet access or dedicated search tools.

Reducing Hallucinations

Some strategies that help:

  • Ask the AI to express uncertainty
  • Request step-by-step reasoning
  • Use retrieval-augmented generation (RAG) systems
  • Cross-check with multiple sources
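The RAG strategy above can be sketched in a few lines: retrieve a relevant passage, then prepend it to the prompt so the model answers from supplied text rather than pattern recall. The documents, word-overlap retriever, and prompt format here are invented for illustration; real RAG systems use embedding-based search over large corpora:

```python
# Toy document store standing in for a real retrieval corpus.
documents = {
    "eiffel": "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "empire": "The Empire State Building opened in 1931 in New York City.",
}

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(question):
    """Prepend the retrieved passage so the model can ground its answer."""
    context = retrieve(question)
    return (f"Context: {context}\n"
            f"Question: {question}\n"
            f"Answer using only the context.")

print(build_prompt("When was the Eiffel Tower built?"))
```

Grounding the prompt this way gives the model a correct fact (1889) to quote, which is why RAG reduces, though does not eliminate, hallucinated answers.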
The Bottom Line

Hallucination is a fundamental limitation of current AI. Models are getting better, but no model is hallucination-free. Critical thinking remains essential when using AI-generated information.

Examples

  • Fabricated citations
  • Incorrect statistics
  • Made-up historical facts
  • Non-existent sources
