UPCOMING EVENTS: UX, Product & Market Research Afterwork (23 Apr, Packhaus Wien) | Insights & Research Breakfast (16 May, Packhaus Wien) | Vibecoding & Agentic Coding for App Development (22 May, Packhaus Wien)

UX Research Glossary

A comprehensive guide to user experience research terminology. Understanding these concepts is essential for effective UX work.

A

A/B Testing
A controlled experiment comparing two variants by randomly splitting users between them. The only reliable way to measure the causal impact of a specific change on user behavior.
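One common way to judge an A/B split is a two-proportion z-test, which asks whether the observed difference in conversion rates is larger than random chance would produce. A minimal stdlib sketch, with made-up conversion counts (120 of 2,400 vs. 156 of 2,400) and the z-test as one of several valid choices of test:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical counts: variant A converts 120/2400, variant B 156/2400
z, p = two_proportion_ztest(120, 2400, 156, 2400)
```

A p-value below 0.05 here would conventionally be read as statistically significant, but as the Effect Size entry notes, significance alone does not tell you whether the difference matters in practice.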
Accessibility
The practice of designing products usable by as many people as possible, including those with permanent, temporary, or situational disabilities. Often abbreviated as a11y.
Active Data Collection
Research proactively designed to investigate a specific question, with researcher-controlled participant engagement through interviews, tests, or surveys. Also called directed research.
Analytics
The systematic collection and analysis of user behavior data from digital products. Tells you what is happening at scale but never why it is happening.
Attitude-Behavior Gap
The phenomenon where people's stated beliefs and attitudes do not match their actual behavior. Critical for understanding why observational data often trumps self-reported data for predicting actions.

B

Between-Subjects Design
A study structure where different groups of participants test different conditions. Group 1 tests only Version A; Group 2 tests only Version B. Eliminates order effects but requires more participants.
Bias
Systematic deviation from the true value in research findings. Cannot be eliminated, only managed through standardization and awareness. Where bias is unavoidable, aim for systematic bias (which can be accounted for in interpretation) over unsystematic bias (which is pure noise).
Building Blocks
The three foundational research activities—Asking, Observing, and Testing—that combine to form all UX research methods. A framework for understanding that complex methods are built from simple components.

C

Card Sorting
A research method where participants organize topics into groups that make sense to them, revealing their mental models and informing information architecture decisions.
Components of Experience
A hierarchical model of what shapes UX: Foundational qualities (QA, Accessibility), Pragmatic qualities (Usefulness, Usability), and Experiential qualities (Cognition, Affect, Values).
Confirmation Bias
The tendency to search for, interpret, and recall information in a way that confirms one's existing beliefs or hypotheses, while giving less attention to information that contradicts them.
Conjoint Analysis
A survey method that reveals how users make trade-offs between product attributes by presenting realistic product concepts with different feature and price combinations.
Contextual Inquiry
A semi-structured interview technique conducted in the user's natural environment, combining deep observation with in-the-moment questioning. Best for uncovering real-world context that shapes behavior.
Conversion Rate
The percentage of users who complete a desired action (e.g., purchase, sign-up) out of the total number of visitors.
Core Methods
The three primary UX research methods built from Building Blocks: the UX Test, the User Interview, and the Survey. Each represents a different combination of asking, observing, and testing activities.
Counterbalancing
A technique for controlling order effects in within-subjects designs by varying the sequence of conditions across participants. Half test A→B; half test B→A.
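The assignment logic described above can be sketched in a few lines: enumerate every ordering of the conditions, then cycle through those orderings as participants are scheduled, so each sequence is used equally often. The function name and two-condition example are illustrative:

```python
from itertools import cycle, permutations

def counterbalanced_orders(conditions, n_participants):
    """Assign condition orders so every sequence is used equally often."""
    orders = list(permutations(conditions))     # for [A, B]: (A,B) and (B,A)
    return [order for order, _ in zip(cycle(orders), range(n_participants))]

# six participants, two conditions: half see A then B, half see B then A
schedule = counterbalanced_orders(["A", "B"], 6)
```

With more than two conditions the number of full permutations grows quickly, which is why practitioners often fall back on Latin square designs rather than complete counterbalancing.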
Customer Experience (CX)
The outermost layer of experience, encompassing every touchpoint a customer has with a company—from marketing and sales to product use and support. Broader than UX, which focuses on product interaction.
CX Research
Research that examines the entire customer journey across all touchpoints—not just the product interface. Covers every interaction from first awareness through support and renewal.

D

Diary Study
A longitudinal research method where participants log their experiences over an extended period. Captures in-the-moment feedback that overcomes memory limitations and reveals patterns over time.

E

Effect Size
A measure of the magnitude of a finding—how big the difference is between conditions, not just whether it exists. Essential for determining practical significance beyond statistical significance.
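For two groups measured on a continuous outcome (e.g. task time), a widely used effect size is Cohen's d, the mean difference scaled by the pooled standard deviation. A sketch with hypothetical task-time data:

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two samples."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_b - mean_a) / pooled_sd

old_design = [48, 52, 50, 55, 45]   # task times in seconds (made-up)
new_design = [40, 42, 44, 38, 41]
d = cohens_d(old_design, new_design)
```

By the common rule of thumb (0.2 small, 0.5 medium, 0.8 large), an |d| well above 0.8, as here, signals a practically meaningful difference regardless of the p-value.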
Ethnographic Research
Studying people in their natural environment over extended periods to understand behaviors, motivations, and cultural context that surveys and lab tests cannot reveal.
Evaluative Research
Research that assesses whether a specific solution works, either during development (formative) or after completion (summative). Answers 'Does this work?' rather than 'What should we build?'

F

Fine-Tuning
The process of further training a pre-trained LLM on a specialized dataset to alter its behavior or improve performance on specific tasks. A high-effort approach for large-scale, specialized needs.
Focus Group
A group interview format, common in market research, where multiple participants discuss a topic together. Useful for observing social dynamics but introduces challenges for individual UX insights.
Formative Evaluation
Research conducted during development to find problems and improve a design-in-progress. The goal is to shape and refine, not to measure final quality.
Funnel Analysis
Tracking how users move through a sequence of steps and where they drop off. Shows you exactly where your process loses people—and how many.
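The drop-off arithmetic is simple: divide each step's user count by the previous step's count. A minimal sketch over an invented e-commerce funnel:

```python
def funnel_dropoff(steps):
    """Per-step conversion from an ordered list of (step_name, user_count)."""
    report = []
    for (_prev_name, n_prev), (name, n) in zip(steps, steps[1:]):
        kept = n / n_prev
        report.append((name, kept, 1 - kept))   # (step, kept rate, drop rate)
    return report

# hypothetical counts for a purchase funnel
funnel = [("visited", 10000), ("added to cart", 2500),
          ("checkout", 1200), ("purchased", 900)]
report = funnel_dropoff(funnel)
```

Reading the output step by step (75% lost at add-to-cart, 52% at checkout, 25% at purchase) points you to where qualitative research should dig into the why.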

G

Generative Research
Research aimed at uncovering user pain points, unmet needs, and generating ideas for new products or features. Answers 'What should we build?' rather than 'Does this work?'

H

Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information. A natural consequence of how LLMs predict probable text rather than verify truth.
Heuristic Evaluation
An expert-based method where specialists review an interface against established usability principles (heuristics) to identify obvious problems without testing with actual users.

I

Incidence Rate
The percentage of recruitment respondents who actually qualify for your study based on screening criteria. A low incidence rate means most respondents will be screened out.
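Incidence rate feeds directly into recruitment planning: to fill a study you divide the target sample by the incidence rate, and, in this sketch, also by an assumed response rate (a separate quantity introduced here for illustration, not part of the definition above):

```python
import math

def invitations_needed(target_n, incidence_rate, response_rate):
    """Rough estimate of invitations required to fill a study."""
    return math.ceil(target_n / (incidence_rate * response_rate))

# hypothetical: 10 participants needed, 20% qualify, 30% respond
n_invites = invitations_needed(10, 0.2, 0.3)   # -> 167
```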
Information Architecture
The structural design of information environments—how content is organized, labeled, and connected to help users find what they need and understand where they are.
Insight
The interpretation of analysis and synthesis, connected directly to business goals and user needs. The answer to 'So what?'—what the patterns mean and why they matter.
Insights Repository
A centralized, searchable system for storing and connecting research findings across studies, enabling teams to build on previous work and prevent duplicate research.

K

Key Driver Analysis
A statistical method to identify which factors have the greatest impact on a critical outcome, like customer satisfaction or retention. Often the starting point for targeted qualitative research.
Key Performance Indicator (KPI)
A metric explicitly chosen to track progress toward a specific business or product goal. Not every metric is a KPI—only the ones tied to decisions you will actually make.

L

Large Language Model (LLM)
An AI system trained on vast amounts of text to predict and generate human-like language. Best understood as a concept-transformation engine rather than a knowledge database.
Layers of Experience
The scope hierarchy from macro to micro: Customer Experience (CX) → User Experience (UX) → Micro-UX (scenarios/goals, tasks/steps). Defines where to focus research inquiry.

M

Market Research
Research focused on understanding markets, competitors, and customer segments to inform business strategy. Broader in scope than UX research, with significant overlap in methods.
MaxDiff Analysis
A survey method for reliably prioritizing a long list of items by showing respondents small subsets and asking them to choose only the most and least important from each set.
Micro-UX
The experience quality of individual interface moments—a button click, an error message, a loading state. Small interactions that collectively shape the overall user experience.
Mixed Methods
A research approach that deliberately combines qualitative and quantitative methods to build a more complete picture. Qualitative explains the 'why'; quantitative measures the 'how much.'

N

Net Promoter Score (NPS)
A single-question metric measuring customer loyalty: 'How likely are you to recommend this product to a friend or colleague?' rated on a 0-10 scale. Calculated as the percentage of promoters (9-10) minus the percentage of detractors (0-6). Widely used in business but not a direct measure of user experience.
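The standard NPS calculation, percentage of promoters (ratings 9-10) minus percentage of detractors (ratings 0-6), can be sketched directly; the ratings below are made up:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

score = nps([10, 9, 9, 8, 7, 7, 6, 5, 3, 10])   # -> 10.0
```

Note that passives (7-8) drop out of the score entirely, one reason NPS alone is a blunt instrument for diagnosing experience problems.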

O

Objectivity
The degree to which research findings are independent of who conducts the study. If two researchers follow the same protocol and get different results, you have an objectivity problem.
Observer Effect
The phenomenon where people change their behavior because they know they are being watched. A fundamental challenge in any research involving direct observation of participants.
Ontology
A formal representation of the relationships between concepts in a domain. Goes beyond taxonomy to define how categories relate to each other.
Order Effects
Changes in participant performance or preference caused by the sequence in which they encounter conditions, not by the conditions themselves. Controlled through counterbalancing.

P

P-Value
The probability of observing your data (or something more extreme) if there were truly no effect. Widely used, widely misunderstood, and never sufficient on its own to make a decision.
Passive Data Collection
Data generated by users without direct prompting from a researcher—analytics, A/B tests, support tickets, social listening. Ideal for uncovering unexpected patterns and generating new hypotheses.
Personas
Fictional characters created to represent the goals, behaviors, and characteristics of a real group of users. A tool for keeping specific user types in mind throughout product development.
Psychological Safety
A shared belief that the team is a safe place for interpersonal risk-taking—where members can question, disagree, and admit failure without fear of punishment or humiliation.
Psychometrics
The science of measuring psychological constructs—attitudes, abilities, personality traits—through standardized instruments. The discipline behind every validated questionnaire in UX research.

Q

Qualitative Research
Research focused on understanding the 'what' and 'why' through rich stories, observations, and context. Seeks depth of understanding rather than statistical measurement.
Quantitative Research
Research focused on numerical measurement with the goal of generalizing findings from a sample to a broader population. Answers 'how much,' 'how many,' and 'how often.'

R

Recruiting
Finding and enrolling qualified participants for a research study. The single biggest bottleneck in applied research—and the one most teams underestimate.
Reliability
The consistency of a research method—whether it produces similar results when repeated under the same conditions. About precision, not accuracy. A method can be reliable without being valid.
Research Operations
The orchestration and optimization of people, processes, and craft to amplify the value and impact of research at scale. Often abbreviated as ResearchOps.
Research Plan
The blueprint document that forces clarity on research goals, aligns stakeholders, and ensures every step is designed to answer core questions. The single most important tool for avoiding unfocused research.
Retrieval-Augmented Generation (RAG)
A technique that enhances LLM responses by first retrieving relevant information from a specific knowledge base, then using that information to ground the model's output.
ROI (Return on Investment)
A financial metric that measures the profitability of an investment relative to its cost, expressed as a percentage.
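The standard ROI formula, (gain minus cost) divided by cost, expressed as a percentage, with an invented example of a usability fix:

```python
def roi_percent(gain, cost):
    """ROI = (gain - cost) / cost, as a percentage."""
    return 100 * (gain - cost) / cost

# hypothetical: a 20k usability fix recovers 50k in lost conversions
uplift = roi_percent(50000, 20000)   # -> 150.0
```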

S

Sample Size
The number of participants in a research study. Appropriate sample size depends on research goals, method type (qualitative vs. quantitative), the precision required, and the number of distinct user segments being studied.
Sampling Bias
A systematic error introduced when your research sample does not represent the population you are trying to study. The most common and most overlooked threat to research validity.
Saturation
The point in qualitative research where you are no longer hearing new information or discovering new insights from participants. The signal that you have likely uncovered the most important themes.
Screening
The process of evaluating potential research participants against eligibility criteria before they enter a study. Good screening protects data quality; bad screening wastes everyone's time.
Sean Ellis Score
A single-question metric for assessing product-market fit by measuring how disappointed users would be if they could no longer use a product.
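The score is simply the share of respondents choosing 'very disappointed', commonly compared against the 40% benchmark popularized by Sean Ellis. A sketch with a made-up response distribution:

```python
def sean_ellis_score(answers):
    """Share of respondents answering 'very disappointed', as a percentage."""
    return 100 * answers.count("very disappointed") / len(answers)

# hypothetical survey: 45 / 35 / 20 split across the three answer options
responses = (["very disappointed"] * 45
             + ["somewhat disappointed"] * 35
             + ["not disappointed"] * 20)
score = sean_ellis_score(responses)   # -> 45.0, above the 40% benchmark
```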
Segmentation
Dividing your user base into distinct groups based on shared characteristics, behaviors, or needs. The foundation for targeted research, personalized experiences, and meaningful sample design.
Single Ease Question (SEQ)
A single-item, 7-point rating scale administered after each task in a usability test, asking 'How easy or difficult was this task?' Quick, reliable, and highly sensitive to task difficulty.
Social Desirability Bias
The tendency of research participants to answer questions in ways they believe will be viewed favorably, rather than answering truthfully. Strongest with sensitive or self-image topics.
Stakeholders
Anyone who influences, is affected by, or makes decisions based on your research. Managing stakeholders is not overhead—it determines whether your findings actually change anything.
Statistical Significance
A determination that an observed result is unlikely to have occurred by random chance alone. Conventionally indicated by a p-value below 0.05, meaning that if there were truly no effect, a result at least this extreme would occur less than 5% of the time.
Summative Evaluation
Research conducted at the end of a development cycle to measure the finished product's success against defined criteria. The goal is assessment, not iteration.
Survey
A Core Method of asking at scale using standardized questions. Enables data collection from larger samples but sacrifices the depth of interviews for breadth and standardization.
Sycophancy
The tendency of AI models to agree with users, tell them what they want to hear, or avoid challenging their assumptions—even when doing so would be more helpful or accurate.
Synthesis
The process of combining findings from multiple data sources into coherent patterns and themes. Where raw observations become actionable insights.
System Usability Scale (SUS)
A 10-item standardized questionnaire that produces a score from 0-100 measuring perceived usability. The industry's most widely used instrument for benchmarking usability.
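The standard SUS scoring procedure (Brooke's original scheme) can be sketched in a few lines: odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is scaled by 2.5 onto 0-100. The ratings below are hypothetical:

```python
def sus_score(responses):
    """Score one completed SUS questionnaire (10 items, each rated 1-5)."""
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        # odd items are positively worded, even items negatively worded
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

score = sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 1])   # -> 82.5
```

Individual scores are then averaged across participants; a frequently cited benchmark puts the average SUS score across studies at around 68.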
Systematic Error
Consistent, predictable bias that skews results in a known direction. Manageable because you can account for it in interpretation—far better than random, unsystematic error.

T

Task Analysis
Breaking down what users need to accomplish into discrete steps, decisions, and information requirements. The foundation for designing interfaces that match how people actually work.
Taxonomy
A classification system that organizes concepts into categories. In research, a predefined set of tags or codes used to systematically categorize qualitative data.
Thematic Analysis
A systematic method of structuring qualitative data by tagging it against a taxonomy of categories, then analyzing the frequency and patterns of those tags to move beyond summary to genuine insight.
Think-Aloud Protocol
A research technique where participants verbalize their thoughts, feelings, and assumptions while completing tasks, providing real-time insight into their mental processes and decision-making.
Tidy Data
A data organization principle where every column is a variable, every row is an observation, and every cell is a single value. The foundation for efficient analysis and automation.
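The principle is easiest to see as a wide-to-long reshape: one column per measure becomes one row per observation. A stdlib sketch (in practice a library such as pandas does this with `melt`); the column names are invented:

```python
def to_tidy(wide_rows, id_col, value_cols):
    """Melt wide rows (one column per measure) into tidy long format."""
    tidy = []
    for row in wide_rows:
        for col in value_cols:
            # one observation per row: who, what was measured, the value
            tidy.append({id_col: row[id_col], "measure": col, "value": row[col]})
    return tidy

wide = [{"participant": "P1", "task1_time": 34, "task2_time": 51}]
long_rows = to_tidy(wide, "participant", ["task1_time", "task2_time"])
```

Once in this shape, grouping, filtering, and plotting by any variable becomes a uniform operation rather than a per-column special case.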
Tree Testing
An evaluative method for validating information architecture by presenting users with a text-only version of a site structure and measuring whether they can navigate to the correct location for given tasks.
Triangulation
The practice of combining multiple data sources, methods, or perspectives to build more robust research findings. Reduces reliance on any single source and increases confidence in conclusions.

U

Unsystematic Error
Random variation in research data caused by unpredictable factors—participant mood, ambient noise, time of day. Unlike systematic error, it averages out with sufficient sample size.
Usability
Per ISO 9241-11: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
Usability Testing
A UX research method where representative users attempt to complete specific tasks with a product while observers watch, listen, and take notes.
User Experience (UX)
Per ISO 9241-210: a person's perceptions and responses resulting from the use and/or anticipated use of a product, system, or service—including emotions, beliefs, preferences, and behaviors before, during, and after use.
User Interview
A Core Method of structured asking designed for deep exploration of user needs, behaviors, and motivations. Distinguished from casual conversation by its defined goals, protocol, and systematic approach.
UX Benchmarking
A quantitative research method that measures user experience metrics (task success, time, satisfaction) at regular intervals or against competitors to track progress and prove ROI.
UX Maturity
A diagnostic framework for assessing how deeply UX research and design are integrated into an organization, ranging from absent to user-driven across multiple levels.
UX Measurement
The practice of quantifying user experience through standardized instruments, behavioral metrics, and self-reported measures. Enables benchmarking, tracking progress, and making evidence-based comparisons.
UX Research
The systematic study of users and their interactions with products or services to inform design decisions. Distinct from market research in its focus on the specific interaction, not the broader market landscape.
UX Research Theater
The performance of research-like activities that lack substance and rigor—workshops and exercises that make teams feel productive but produce outputs with no real connection to actual user data.
UX Test
A Core Method combining all three Building Blocks: testing task completion (effectiveness and efficiency), observing behavior and non-verbal cues, and asking questions about the experience. The most comprehensive single research method.

V

Validity
Whether a research method measures what it claims to measure. About accuracy, not precision. A method can be reliable (consistent) but not valid (accurate) if it consistently measures the wrong thing.
Van Westendorp Price Sensitivity Meter
A pricing research technique that uses four standard questions about perceived value to identify an acceptable price range and optimal price point.
Voice of the Customer (VoC)
A systematic program for capturing, analyzing, and acting on customer feedback across all channels. Turns scattered complaints and praise into structured organizational intelligence.

W

WCAG
Web Content Accessibility Guidelines—the international technical standard defining how to make web content accessible to people with disabilities. Provides testable success criteria organized by level (A, AA, AAA).
Within-Subjects Design
A study structure where the same participants test all conditions. Every participant interacts with both Version A and Version B. Statistically powerful but requires counterbalancing to control order effects.

NEED HELP APPLYING THESE CONCEPTS?

Our team can help you implement effective UX research practices for your product.

UX Research Glossary | Busch Labs