Summary
Most teams ask "How many participants do I need?" before asking the more important question: "Is this study worth doing at all?" This calculator uses Value of Information theory to answer that question. Input your decision stakes, current confidence, and study costs — get a data-driven recommendation on whether to proceed.
Before asking "How many participants?" or "Which method?", ask: "Is this study worth doing at all?" This calculator answers that question using Value of Information theory — a framework from decision science that has been used in health economics for decades but has never been translated into a practical tool for product and UX teams.
The rest of this article explains the theory behind each calculation stage.
The Wrong First Question
Most research planning starts with "How many participants?" or "What method should we use?" Both are premature. The first question should be: "What is this decision worth, and how much would it cost to get it wrong?" The answer determines whether research makes economic sense at all — and if so, how much you should invest.
A €50M product launch decision? Even a €100K study is a rounding error. A €20K feature tweak? Five usability tests might already be overengineered. The decision stakes set the investment ceiling, not the statistical formula.
This is the core insight behind Value of Information (VOI) theory. It does not tell you how to run a study. It tells you whether running one makes economic sense — and what it can be worth.
How the Calculator Works
The calculator uses a simplified version of VOI analysis, progressing through three stages. Each stage adds precision to the estimate.
Stage 1: The Ceiling (EVPI)
The Expected Value of Perfect Information (EVPI) is the maximum amount any information could ever be worth for a given decision. The formula is simple:

EVPI = P(wrong) × Cost of wrong decision
If you are 70% confident in your current best option, there is a 30% chance you are wrong. If a wrong decision costs €200,000, the maximum value of perfect information is €60,000. No study — no matter how rigorous — is worth more than this.
This is the ceiling. It tells you the upper bound of what research can be worth, before you even think about methods or sample sizes.
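As a minimal sketch of the Stage 1 arithmetic (the function name is illustrative, not the calculator's actual code):

```python
def evpi(confidence: float, cost_of_wrong_decision: float) -> float:
    """Expected Value of Perfect Information: the ceiling on what any
    study can be worth. EVPI = P(wrong) x cost of a wrong decision."""
    p_wrong = 1.0 - confidence
    return p_wrong * cost_of_wrong_decision

# 70% confident in the current best option, €200,000 downside if wrong:
ceiling = evpi(0.70, 200_000)  # ≈ €60,000
```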
Stage 2: What Your Study Is Actually Worth (EVSI)
Perfect information does not exist. Real studies reduce uncertainty partially, not completely. The Expected Value of Sample Information (EVSI) estimates how much uncertainty reduction a specific type of study provides.
Different research approaches reduce uncertainty by different amounts:
- Discovery research (interviews, contextual inquiry) shifts your understanding of the problem space but rarely resolves a specific decision. Uncertainty reduction factor: 0.15–0.30.
- Evaluative research (usability testing, heuristic evaluation) identifies problems in a concrete solution. Factor: 0.30–0.50.
- Decision research (MaxDiff, conjoint, concept testing) directly targets a specific choice. Factor: 0.50–0.70.
- Measurement research depends on what you are measuring — task-level metrics (SEQ, task success) reduce specific uncertainty more than broad satisfaction scores (NPS).
These factors are heuristics, not precise measurements. The ranges reflect the inherent variability in how much any given study actually resolves. The calculator uses the midpoint as a reasonable default and shows you the range so you can judge for yourself.
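Under the same illustrative assumptions, Stage 2 scales the EVPI by the factor range for the chosen method category. The table below simply encodes the heuristic ranges above:

```python
# Heuristic uncertainty-reduction ranges from the article (not empirical measurements)
UNCERTAINTY_REDUCTION = {
    "discovery":  (0.15, 0.30),   # interviews, contextual inquiry
    "evaluative": (0.30, 0.50),   # usability testing, heuristic evaluation
    "decision":   (0.50, 0.70),   # MaxDiff, conjoint, concept testing
}

def evsi_range(evpi_value: float, method: str) -> tuple[float, float, float]:
    """EVSI = EVPI x uncertainty-reduction factor; returns (low, midpoint, high)."""
    lo, hi = UNCERTAINTY_REDUCTION[method]
    mid = (lo + hi) / 2
    return evpi_value * lo, evpi_value * mid, evpi_value * hi

# €60,000 ceiling, decision research (MaxDiff/conjoint):
low, mid, high = evsi_range(60_000, "decision")  # ≈ €30,000 / €36,000 / €42,000
```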
Stage 3: The True Cost (Net Value)
A study's value is not just its information value minus its price tag. Time matters.
Research takes time. If delaying a decision costs money — lost revenue, idle engineering time, competitive risk — that delay cost reduces the net value of the study. This is the most underappreciated factor in research planning. A four-week study that delays a product generating €10,000/week in revenue has a €40,000 delay cost that must be subtracted from the study's information value.
When Net Value is positive, the study is a sound economic investment. When it is negative, you are better off deciding with what you have — or finding a faster, cheaper research approach.
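A sketch of the Stage 3 arithmetic, using the €10,000/week delay example above (the €36,000 EVSI and €15,000 study price are hypothetical inputs for illustration):

```python
def net_value(evsi_value: float, study_cost: float,
              duration_weeks: float, weekly_delay_cost: float) -> float:
    """Net value = information value minus the study price and the cost of delay."""
    delay_cost = duration_weeks * weekly_delay_cost
    return evsi_value - study_cost - delay_cost

# A €36,000-EVSI study priced at €15,000 that takes four weeks while the
# delayed decision forgoes €10,000/week:
net_value(36_000, 15_000, 4, 10_000)  # → -19,000: the delay cost sinks the study
```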
The Four Recommendation Buckets
The calculator does not give you a single number and leave you to interpret it. It places your result into one of four buckets:
"Research is clearly worth it" — Net value exceeds 3× the study cost. Even with significant estimation error, the study pays for itself. Proceed with confidence.
"Research is likely worth it" — Net value is positive and exceeds the study cost. Your assumptions matter — double-check your estimates of decision stakes and wrong-decision costs. If they hold, proceed.
"Borderline" — Net value is positive but does not clearly exceed the study cost. Consider a cheaper or faster study design. Focus on the single highest-uncertainty question rather than a comprehensive study.
"Skip the study" — The information value does not justify the cost. This is not a failure — it is a good decision. Either the stakes are too low, you are already confident enough, or the study is too expensive for what it can deliver.
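One plausible reading of these thresholds as code — the cutoffs follow the bucket descriptions above, but the exact boundary handling is an assumption:

```python
def recommend(net: float, study_cost: float) -> str:
    """Map a net-value result to one of the four recommendation buckets."""
    if net > 3 * study_cost:
        return "Research is clearly worth it"
    if net > study_cost:
        return "Research is likely worth it"
    if net > 0:
        return "Borderline"
    return "Skip the study"

recommend(50_000, 10_000)   # "Research is clearly worth it"
recommend(-19_000, 15_000)  # "Skip the study"
```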
Why These Numbers Are Useful Even When They Are Wrong
Every input to this calculator is an estimate. Your confidence level might be poorly calibrated. Your cost-of-wrong-decision figure might be off by 50%. The uncertainty reduction factors are heuristics, not empirical measurements.
So why bother? Three reasons.
The structure matters more than the precision. Forcing yourself to articulate "What is at stake?", "How confident am I?", and "What happens if I am wrong?" produces better decisions than not asking these questions — regardless of whether your specific numbers are accurate.
VOI calculations are most sensitive to decision stakes and least sensitive to probability estimates. Getting the order of magnitude right on stakes (€50K vs €500K vs €5M) matters far more than whether your confidence is 65% or 75%. And people are generally better at estimating costs than probabilities.
The recommendation thresholds are deliberately conservative. "Clearly worth it" requires 3× cost coverage — a substantial buffer against estimation error. If the calculator says "clearly worth it" and your estimates are even roughly in the right ballpark, the study almost certainly pays for itself.
Douglas Hubbard, whose Applied Information Economics framework underpins this calculator, puts it simply: even rough quantification consistently outperforms unaided expert judgment. The alternative to an imperfect calculation is not a perfect one — it is no calculation at all.
For the broader business case context that value calculations support, see The Business Case for UX Research.
Connection to Our Other Tools
This calculator is part of a three-tool ecosystem for research planning:
- Research Value Calculator (this tool): "Is this study worth doing?" — determines whether research makes economic sense.
- Research Method Explorer: "Which method fits?" — guides you to the right approach based on your research intent and constraints. The method categories in the Value Calculator map directly to the Method Explorer's four intents.
- Sample Size Calculator: "How many participants?" — calculates the right sample size once you have decided on a method.
The natural flow is: Value Calculator → Method Explorer → Sample Size Calculator. But in practice, you might enter at any point and loop between them.
Theoretical Background
The mathematics behind this calculator draw on three traditions:
Decision Theory (Raiffa & Schlaifer, 1961). The formal foundation. Expected Value of Perfect Information (EVPI) and Expected Value of Sample Information (EVSI) were defined as part of Bayesian decision analysis — the framework for making optimal choices under uncertainty.
Applied Information Economics (Hubbard, 2007/2014). Douglas Hubbard translated academic VOI into business practice. His key contribution is the concept of Expected Opportunity Loss (EOL = P(wrong) × Cost of wrong), which makes VOI accessible without requiring full Bayesian modeling. His finding that organizations routinely measure the wrong variables — investing in low-information-value metrics while ignoring high-value ones — is the motivation for this tool.
Health Economics (Claxton, 1999; Heath et al., 2024). Health economists operationalized VOI more thoroughly than any other field, developing computational methods to determine whether clinical trials are worth funding. The Gaussian approximation (Jalal et al., 2018) and the ISPOR VOI Task Force reports (Fenwick et al., 2020) provide the methodological foundation for simplified EVSI calculation.
Cost of Delay (Eckermann & Willan, 2008; Reinertsen, 2009). The insight that information has a time cost — not just a financial cost — comes from both health technology assessment and lean product development. Eckermann showed that delay costs frequently dominate the research value equation, making fast-but-imperfect studies more valuable than slow-but-rigorous ones.
For the full ROI calculation methodology, see Calculating the ROI of UX Research.
For the broader research technology landscape that includes value assessment, see Research Tools and the ResTech Landscape.
For a framework to evaluate whether AI tools deliver genuine research value, see Evaluating AI Research Tools.
References
- Claxton, K. (1999). The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. Journal of Health Economics, 18(3), 341–364.
- Eckermann, S. & Willan, A.R. (2008). Time and expected value of sample information wait for no patient. Value in Health, 11(3), 522–526.
- Fenwick, E., Steuten, L., Knies, S., et al. (2020). Value of information analysis for research decisions — an introduction. Value in Health, 23(2), 139–150.
- Heath, A., Kunst, N. & Jackson, C. (2024). Value of Information for Healthcare Decision-Making. CRC Press.
- Hubbard, D.W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business (3rd ed.). Wiley.
- Jalal, H., Goldhaber-Fiebert, J.D., & Kuntz, K.M. (2018). A Gaussian approximation approach for value of information analysis. Medical Decision Making, 38(2), 174–188.
- Raiffa, H. & Schlaifer, R. (1961). Applied Statistical Decision Theory. Harvard University Press.
- Reinertsen, D.G. (2009). The Principles of Product Development Flow (2nd ed.). Celeritas Publishing.
- Runge, M.C., et al. (2023). A simplified method for value of information using constructed scales. Decision Analysis.
- Wilson, E.C.F. (2015). A practical guide to value of information analysis. PharmacoEconomics, 33, 105–121.