Summary
Not sure which research method to use? This interactive decision tree walks you through a series of questions about what you want to learn, what you have to work with, and what decisions you need to make — then recommends a method, instruments, and study configuration tailored to your situation.
Below you'll find the framework and concepts behind each recommendation.
The first question: do you have a solution?
Every research project starts with a fork in the road. If you don't yet have a product, prototype, or concrete solution, you're in generative research territory: the goal is to understand the problem space, discover unmet needs, and generate ideas for what to build. If you do have something — whether it's a wireframe, a working product, or a competitor's interface — you shift into evaluative research, where the goal is to assess how well that solution works. This single distinction determines the entire direction of your study. Generative research asks "What should we build?" while evaluative research asks "Does this work?" Conflating the two — running a usability test when you should be doing discovery, or conducting interviews when you should be measuring — is the most expensive mistake in research planning.
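To make the fork concrete, here is a minimal TypeScript sketch of that first branch; the type and function names are illustrative, not the tool's actual code.

```ts
// Illustrative sketch of the tool's first fork (not its real implementation).
type ResearchBranch = "generative" | "evaluative";

function firstFork(hasConcreteSolution: boolean): ResearchBranch {
  // Nothing to show users yet: explore the problem space.
  // A wireframe, working product, or competitor interface: assess how well it works.
  return hasConcreteSolution ? "evaluative" : "generative";
}
```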
Four research intents
Within the generative and evaluative branches, the tool distinguishes four main intents that map to different method families.
Discover is pure generative work. You're exploring a problem space, trying to understand what users need, how they think about a domain, and where the gaps are. The methods here are qualitative and open-ended: contextual inquiries, ethnographic observation, focus groups, and exploratory interviews. The output is insight, not metrics — themes, mental models, unmet needs, opportunity spaces. Discovery research is how you avoid building the wrong thing.
Evaluate is the core of evaluative research. You have a solution and want to know how people experience it. This is where usability testing, heuristic evaluations, card sorting, and tree testing live. The critical sub-question is what aspect of the experience you're evaluating — whether it's findability, task efficiency, emotional response, or overall satisfaction — because different components of experience call for different techniques and instruments.
Decide uses structured research methods to inform business and design decisions. Feature prioritization through MaxDiff or conjoint analysis, pricing research, concept testing, product-market fit assessment — these methods go beyond "does it work?" to "which option should we choose?" and "is there demand for this?" The output is decision-ready data, often quantitative, designed to reduce uncertainty around a specific strategic choice.
Measure is about quantification with standardized instruments. Where Evaluate focuses on understanding the experience, Measure focuses on scoring it. The System Usability Scale, Net Promoter Score, Single Ease Question, task success rates, UX benchmarks — these produce numbers you can track over time, compare across products, and report to stakeholders who need evidence in a format they can act on.
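Conceptually, each intent maps to a family of methods. The sketch below represents that mapping in TypeScript, using only the methods named above; it is an illustration, not the tool's full catalogue.

```ts
// Illustrative mapping from intent to example methods, drawn from the
// descriptions above; not an exhaustive or official list.
type Intent = "discover" | "evaluate" | "decide" | "measure";

const methodFamilies: Record<Intent, string[]> = {
  discover: ["contextual inquiry", "ethnographic observation", "focus groups", "exploratory interviews"],
  evaluate: ["usability testing", "heuristic evaluation", "card sorting", "tree testing"],
  decide: ["MaxDiff", "conjoint analysis", "pricing research", "concept testing", "product-market fit assessment"],
  measure: ["System Usability Scale", "Net Promoter Score", "Single Ease Question", "task success rates", "UX benchmarks"],
};
```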
Components of Experience
When you're evaluating a product, not everything is "usability." The Components of Experience model provides a hierarchical structure for what makes up the user experience. At the foundational level, you have accessibility and basic functionality — prerequisites that must work before anything else matters. The pragmatic level covers classical usability concerns: can people find things, complete tasks efficiently, and recover from errors? The experiential level addresses aesthetics, emotional response, trust, and the overall quality of the interaction.
This hierarchy matters for method selection because different components require different measurement approaches. Findability problems show up in tree tests. Task efficiency surfaces in moderated usability testing. Emotional response requires different instruments entirely — reaction cards, AttrakDiff, or the UEQ. When the tool asks you to select which components of experience you want to evaluate, it uses your answer to recommend the right combination of methods and instruments for what you actually care about.
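One way to picture the model is as a small data structure pairing each level with the components and instruments mentioned above. The TypeScript sketch below is illustrative; the tool's internal representation may differ.

```ts
// Illustrative representation of the Components of Experience hierarchy,
// pairing each level with the components and instruments named above.
interface ExperienceLevel {
  level: "foundational" | "pragmatic" | "experiential";
  components: string[];
  exampleInstruments: string[];
}

const componentsOfExperience: ExperienceLevel[] = [
  {
    level: "foundational",
    components: ["accessibility", "basic functionality"],
    exampleInstruments: [], // prerequisites; the text names no dedicated instruments here
  },
  {
    level: "pragmatic",
    components: ["findability", "task efficiency", "error recovery"],
    exampleInstruments: ["tree testing", "moderated usability testing"],
  },
  {
    level: "experiential",
    components: ["aesthetics", "emotional response", "trust"],
    exampleInstruments: ["reaction cards", "AttrakDiff", "UEQ"],
  },
];
```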
Layers of experience and measurement
The Measure path in the tool maps to a layered model of what can be measured. At the task level, you measure whether people can complete specific actions — task success rates, time on task, error rates, and the Single Ease Question. At the product level, you assess the overall experience with instruments like the System Usability Scale or the User Experience Questionnaire. At the customer experience level, you look at relationship metrics — Net Promoter Score, Customer Satisfaction Score, Customer Effort Score — that capture how people feel about the brand and service beyond any single interaction. At the market fit level, you assess whether the product meets a real market need through product-market fit surveys and willingness-to-pay analysis.
Each layer requires different instruments and sample sizes, and produces different kinds of evidence. The tool connects your measurement intent to the appropriate layer and recommends instruments accordingly. Task-level measurement can work with small samples and produces actionable micro-insights. Market-fit measurement needs larger samples and produces strategic evidence. Mixing up the layers — using a task-level instrument to make a market-level decision — leads to conclusions that can't bear the weight you put on them.
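To see what a product-level instrument actually produces, consider the standard SUS scoring rule: ten items rated 1 to 5, where odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the total is multiplied by 2.5 to yield a 0 to 100 score. The sketch below implements that rule; the function name is illustrative.

```ts
// Standard SUS scoring: ten items rated 1–5; odd-numbered items contribute
// (rating − 1), even-numbered items contribute (5 − rating); the sum is
// multiplied by 2.5 to give a 0–100 score.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) {
    throw new Error("SUS expects exactly 10 item ratings");
  }
  const sum = ratings.reduce(
    (acc, rating, i) => acc + (i % 2 === 0 ? rating - 1 : 5 - rating),
    0,
  );
  return sum * 2.5;
}

// Example: susScore([4, 2, 4, 2, 4, 3, 4, 2, 4, 2]) === 72.5
```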
The research profile
Every recommended study configuration produces a research profile — a five-dimensional characterization of where the study falls on the spectrum of research approaches. The five dimensions are drawn from established research methodology literature.
The qualitative to quantitative dimension captures whether you're working with words or numbers, themes or statistics. Most studies aren't purely one or the other, but lean in a direction. The generative to evaluative dimension reflects whether you're exploring a problem space or assessing a solution. The active to passive dimension distinguishes between methods where you directly engage participants — interviews, tests, surveys — and those where you observe behavior without intervention, like analytics or passive data collection. The depth to scale dimension captures the fundamental trade-off between understanding a few people deeply and measuring many people broadly. The attitudinal to behavioral dimension reflects whether you're capturing what people say and believe or what they actually do.
No profile is "better" than another. A deep, qualitative, generative study is exactly right for early-stage discovery. A broad, quantitative, evaluative study is exactly right for benchmarking a mature product. The profile makes these trade-offs visible so you can confirm they match your research intent — or adjust course if they don't.
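If it helps to see the profile as data, the sketch below shows one possible shape for it in TypeScript; the field names and the 0-to-1 scale are assumptions made for illustration, not the tool's data model.

```ts
// Illustrative shape for the five-dimensional research profile; each field is
// a position between two poles on an assumed 0–1 scale.
interface ResearchProfile {
  qualitativeToQuantitative: number; // 0 = qualitative, 1 = quantitative
  generativeToEvaluative: number;    // 0 = generative,  1 = evaluative
  activeToPassive: number;           // 0 = active,      1 = passive
  depthToScale: number;              // 0 = depth,       1 = scale
  attitudinalToBehavioral: number;   // 0 = attitudinal, 1 = behavioral
}

// Example: an early-stage discovery study leans qualitative, generative,
// active, deep, and attitudinal.
const discoveryProfile: ResearchProfile = {
  qualitativeToQuantitative: 0.1,
  generativeToEvaluative: 0.0,
  activeToPassive: 0.2,
  depthToScale: 0.1,
  attitudinalToBehavioral: 0.3,
};
```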
What comes next
This tool gives you a starting point: a method, instruments, and a study configuration matched to your research intent. The next step is to write a research plan that operationalizes these recommendations — defining your research questions, sampling strategy, data collection protocol, and analysis approach. For a detailed calculation of how many participants you'll need, use our Sample Size Calculator.