Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information. A natural consequence of how LLMs predict probable text rather than verify truth.
Hallucination occurs when a Large Language Model generates content that sounds convincing but is factually wrong, made up, or nonsensical.
Why It Happens
LLMs are designed to generate plausible-sounding text based on statistical patterns, not to verify factual accuracy. When a model encounters a gap in its training data or an ambiguous prompt, it fills the gap with what sounds statistically likely—even if that content is fabricated.
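This gap-filling behavior can be illustrated with a toy next-word model. The sketch below is a deliberately simplified bigram sampler, not how production LLMs are built: it shows that a purely statistical generator emits whatever continuation is probable in its training data, with no step that checks truth.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" the model has seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length, seed=0):
    """Emit a statistically likely continuation; no truth check anywhere."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        counts = following[words[-1]]
        if not counts:
            break  # gap in training data: generation simply stops or drifts
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 6))
```

Every word the sampler produces is drawn from observed patterns, so the output is always fluent; whether the resulting sentence is true never enters the computation. Real LLMs are vastly more sophisticated, but the absence of a verification step is the same.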
This is not a bug; it is a fundamental characteristic of how these models work.
Types of Hallucinations
- Factual errors: Stating incorrect dates, names, or statistics
- Citation fabrication: Inventing academic papers or sources that do not exist
- Confident uncertainty: Providing detailed but entirely made-up explanations
- Logical inconsistencies: Contradicting earlier statements within the same response
Implications for Research
Hallucination is why AI output must always be validated by a human expert. In research contexts:
- Never cite sources an LLM provides without verifying they exist
- Cross-check any factual claims against primary sources
- Use LLMs for transformation tasks (summarizing, categorizing) rather than fact generation
- Techniques like RAG can reduce but not eliminate hallucination
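The first two checks above can be made routine with a small triage helper. The sketch below is illustrative: `check_citation` and the `verified_dois` set are hypothetical names, and in practice you would query a registry such as Crossref or your library catalogue rather than a hard-coded set.

```python
# Hypothetical triage helper: flag LLM-supplied citations that are not
# in a locally maintained index of human-verified sources.
verified_dois = {
    "10.1000/example",  # placeholder entry; assumed verified by a human
}

def check_citation(doi: str) -> str:
    """Return a triage status for an LLM-provided DOI."""
    if doi in verified_dois:
        return "verified"
    # Absence is not proof of fabrication, only a signal that a
    # human must confirm the source exists before citing it.
    return "needs human verification"
```

The point of the design is the default: anything the model supplies is treated as unverified until a human (or a trusted registry lookup) says otherwise.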
The risk of hallucination reinforces the researcher's role as critical validator of AI output.
Related Terms
Large Language Model (LLM)
An AI system trained on vast amounts of text to predict and generate human-like language. Best understood as a concept-transformation engine rather than a knowledge database.
Retrieval-Augmented Generation (RAG)
A technique that enhances LLM responses by first retrieving relevant information from a specific knowledge base, then using that information to ground the model's output.
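The retrieve-then-ground pattern can be sketched in a few lines. This is a minimal illustration with keyword-overlap retrieval standing in for a real vector search; the knowledge-base entries and all function names are invented for the example.

```python
import re

# Illustrative knowledge base; real systems index many documents.
knowledge_base = [
    "Our usability study in March covered 12 participants.",
    "The checkout redesign shipped in the Q2 release.",
    "Support tickets dropped after the onboarding changes.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def grounded_prompt(query: str) -> str:
    """Build a prompt that grounds the model in retrieved context."""
    context = retrieve(query, knowledge_base)
    return (
        f"Answer using only this context: {context}\n"
        f"If the context is insufficient, say so.\n"
        f"Question: {query}"
    )

print(grounded_prompt("How many participants were in the usability study?"))
```

Because the model is instructed to answer from retrieved text rather than from its statistical memory, fabrication becomes less likely, though as noted above, grounding reduces hallucination without eliminating it.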
Mentions in the Knowledge Hub
This term is referenced in the following articles:
Building a Research Career in the Age of AI
AI is transforming what researchers do daily, but it amplifies rather than replaces the core value researchers provide. Understanding which skills remain essential and how to grow them is critical for career development in this changing landscape.
What AI Can and Cannot Do for UX Research
AI is not going to take your job, but it is absolutely going to change it. Understanding what LLMs actually are, and are not, is the foundation for using them effectively.
Advanced AI Techniques for Research
Beyond basic prompting, there are techniques that dramatically improve AI reliability: structured communication, using notes over transcripts, treating models as a committee of raters, and understanding when RAG or fine-tuning makes sense.