
Hallucination

When an AI model generates plausible-sounding but factually incorrect or fabricated information. A natural consequence of how LLMs predict probable text rather than verify truth.

Hallucination occurs when a large language model (LLM) generates content that sounds convincing but is factually wrong, made up, or nonsensical.

Why It Happens

LLMs are designed to generate plausible-sounding text based on statistical patterns, not to verify factual accuracy. When a model encounters a gap in its training data or an ambiguous prompt, it fills the gap with what sounds statistically likely—even if that content is fabricated.

This is not a bug; it is a fundamental characteristic of how these models work.
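
To make this concrete, here is a toy Python sketch (not how a real decoder is implemented): it samples a continuation purely from invented probabilities, so a confident but fabricated claim wins most of the time. The paper title, continuations, and weights are all made up for illustration.

```python
import random

# Invented probabilities for illustration: a decoder ranks continuations
# by how likely the text sounds, never by whether the claim is true.
continuations = {
    "was published in 1998.": 0.45,   # plausible-sounding, fabricated
    "was published in 2003.": 0.35,   # also plausible, also fabricated
    "does not seem to exist.": 0.20,  # true here, but statistically "unlikely"
}

def sample_next(dist):
    """Pick a continuation weighted by probability, as temperature-based
    sampling does. Factual accuracy plays no role in the choice."""
    options, weights = zip(*dist.items())
    return random.choices(options, weights=weights, k=1)[0]

prompt = "The paper 'Neural Methods for UX Research' "
print(prompt + sample_next(continuations))
```

Run repeatedly, the fabricated publication years appear about 80% of the time: exactly the statistical gap-filling described above.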

Types of Hallucinations

  • Factual errors: Stating incorrect dates, names, or statistics
  • Citation fabrication: Inventing academic papers or sources that do not exist
  • Confident uncertainty: Providing detailed but entirely made-up explanations
  • Logical inconsistencies: Contradicting earlier statements within the same response

Implications for Research

Hallucination is why AI output must always be validated by a human expert. In research contexts:

  • Never cite sources an LLM provides without verifying they exist (see the verification sketch after this list)
  • Cross-check any factual claims against primary sources
  • Use LLMs for transformation tasks (summarizing, categorizing) rather than fact generation
  • Techniques like retrieval-augmented generation (RAG) can reduce but not eliminate hallucination
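
As one possible spot check for the citation point above, the sketch below queries Crossref's public REST API, which returns HTTP 404 for DOIs it has no record of. The DOI, User-Agent contact, and script name are placeholders, and Crossref only covers DOIs registered with it, so a miss is a prompt to verify manually, not proof of fabrication.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref's public REST API has a record for this DOI.
    Crossref answers HTTP 404 for DOIs it does not know."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref asks API users to identify themselves; placeholder contact.
        headers={"User-Agent": "citation-check/0.1 (mailto:research@example.com)"},
        timeout=10,
    )
    return resp.status_code == 200

# Hypothetical DOI copied from an LLM answer:
llm_supplied_doi = "10.1234/fabricated.2021.001"
if not doi_exists(llm_supplied_doi):
    print(f"No Crossref record for {llm_supplied_doi}; verify the source manually.")
```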

The risk of hallucination reinforces the researcher's role as critical validator of AI output.
