Validity
Whether a research method measures what it claims to measure. About accuracy, not precision. A method can be reliable (consistent) but not valid (accurate) if it consistently measures the wrong thing.
Validity refers to whether a research method measures what it claims to measure. It is about accuracy: are your results a true reflection of the underlying phenomenon you are trying to understand?
Types of Validity
Internal validity: Do your conclusions about cause and effect hold within the study? If you claim X caused Y, are you sure it was not something else?
External validity: Do your findings generalize beyond the study? Can you apply conclusions from your sample to the broader population?
Construct validity: Are you measuring the concept you think you are measuring? If you claim to measure "user satisfaction," does your scale actually capture satisfaction?
The Reliability-Validity Relationship
A method can be reliable without being valid. Your measurements might be perfectly consistent (high reliability) but consistently measuring the wrong thing (low validity).
However, a method cannot be valid without being reliable. If your measurements are random and inconsistent, any agreement with the true value is luck, not accuracy. Reliability is necessary but not sufficient for validity.
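The reliable-but-not-valid case is easy to see in a small simulation. This is a hypothetical sketch (the true score, bias, and noise levels are invented for illustration): one instrument is precise but carries a constant bias, the other is unbiased but noisy.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 70.0  # the underlying quantity we are trying to measure

# Reliable but not valid: tiny random noise (precise), but a constant
# +12 systematic bias, so every measurement misses the true value.
biased = [TRUE_VALUE + 12 + random.gauss(0, 0.5) for _ in range(100)]

# Unreliable: unbiased on average, but with large random scatter.
noisy = [TRUE_VALUE + random.gauss(0, 15) for _ in range(100)]

def report(name, xs):
    print(f"{name}: mean={statistics.mean(xs):.1f}, "
          f"stdev={statistics.stdev(xs):.1f}")

report("biased", biased)  # tight spread (reliable), wrong mean (not valid)
report("noisy", noisy)    # mean near 70, huge spread (not reliable)
```

The biased instrument reproduces almost the same number every time yet is never close to the truth; the noisy one averages out near the true value, but no individual measurement can be trusted.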
Threats to Validity
Common threats include:
- Confounding variables (internal validity)
- Non-representative samples (external validity)
- Poorly designed measurement instruments (construct validity)
- Leading questions that bias responses (construct validity)
Validity requires careful study design, not just consistent execution.
Related Terms
Reliability
The consistency of a research method—whether it produces similar results when repeated under the same conditions. About precision, not accuracy. A method can be reliable without being valid.
Objectivity
The degree to which research findings are independent of who conducts the study. If two researchers follow the same protocol and get different results, you have an objectivity problem.
Systematic Error
Consistent, predictable bias that skews results in a known direction. Manageable because you can account for it in interpretation—far better than random, unsystematic error.
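As a minimal sketch of what "accounting for it in interpretation" can mean in practice, suppose a timing protocol adds a roughly constant overhead to every task-completion measurement. The overhead value and the raw times below are hypothetical; because the error is systematic, it can simply be subtracted out.

```python
# Known systematic error: the protocol adds ~0.3 s to every measurement
# (value assumed here, e.g. established by a calibration run).
KNOWN_OVERHEAD_S = 0.3

raw_times = [4.1, 5.6, 3.9, 6.2, 4.8]  # hypothetical raw task times (s)

# Correct each measurement by removing the constant, predictable bias.
corrected = [round(t - KNOWN_OVERHEAD_S, 2) for t in raw_times]

print(corrected)  # → [3.8, 5.3, 3.6, 5.9, 4.5]
```

No such correction exists for unsystematic error: random scatter has no known direction or size to subtract, which is why a predictable bias is the preferable failure mode.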
Bias
Systematic deviation from the true value in research findings. Bias cannot be eliminated, only managed through standardization and awareness; the goal is systematic bias (manageable) rather than unsystematic bias (chaos).
Mentions in the Knowledge Hub
This term is referenced in the following articles:
UX Measurement Instruments: Scales, Scores, and What They Actually Measure
Standardized measurement instruments provide benchmarks and comparability. But using them effectively requires understanding what each one actually measures, and what it does not.
Research Quality and Managing Bias
You will always introduce bias into your research; that is unavoidable. The goal is not elimination but management. Understanding the difference between systematic and unsystematic error is what makes findings trustworthy.