
The UX Research Styles: What to Use, When, and Why

Updated: Oct 25

By Philip Burgess, UX Research Leader


Why this matters: Teams often use these terms interchangeably, which leads to fuzzy goals, mismatched methods, and weak decisions. This guide clarifies the major UX Research Styles, when to use them, and how to explain each to stakeholders.

A simple mental model

Think in two dimensions across the product lifecycle:

  • Problem space → understand people, contexts, needs, opportunities

  • Solution space → design, test, and measure solutions


    Lifecycle: Explore → Define → Design → Build → Launch → Grow


UX Research Styles

1) Discovery Research (Foundational)


Goal: Build a deep understanding of users, contexts, unmet needs, and the market landscape.

Use when: Entering a new domain, shaping strategy, or prioritizing where to invest.

Methods: Field studies/ethnography, contextual inquiry, JTBD interviews, diary studies, segmentation surveys, competitive scans.

Outputs: Personas/JTBD, journey maps, problem statements, opportunity areas, design principles.

Stakeholder line: “We’re doing discovery to decide which problems matter most and where to focus.”

Quick AI prompt: “Synthesize these 15 interview notes into 5 opportunity areas with evidence quotes.”


2) Exploratory Research (Stance)

Goal: Surface themes, language, mental models, and unknowns, without heavy hypotheses.

Use when: You don’t know what you don’t know, or early signals conflict.

Methods: Open-ended interviews, exploratory surveys, open card sorts, concept mapping, social listening.

Outputs: Emerging themes, taxonomy candidates, risks/assumptions list.

Stakeholder line: “We’re exploring to name patterns and risks before we commit.”

Quick AI prompt: “Cluster these raw comments into themes; label clusters and add 3 representative quotes each.”
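
If the raw comments already live in a spreadsheet, you can rough out that clustering in code before (or alongside) an AI pass. A minimal sketch with scikit-learn; the comments and the cluster count of 3 are illustrative assumptions:

```python
# Rough thematic clustering of open-ended comments: TF-IDF + k-means.
# The comments and n_clusters=3 are placeholder assumptions; tune per study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "I can never find the export button",
    "Exporting my data takes forever",
    "Love the dashboard layout",
    "The dashboard is clean and easy to scan",
    "Search results feel irrelevant",
    "Search never finds what I type",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)  # sparse term matrix

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)

for cluster in range(3):
    print(f"Theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```

Treat the clusters as candidates for human labeling, not a finished taxonomy.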


3) Generative Research (Concept-Shaping)

Goal: Create and refine ideas that solve validated problems; co-create with users.

Use when: You’ve identified opportunities and need solution directions.

Methods: Co-design workshops, storyboarding, early concept tests, desirability studies, Kano surveys, opportunity-solution trees.

Outputs: Concept options, value propositions, prioritized feature hypotheses.

Stakeholder line: “We’re generating solution options to reduce concept risk before we design in detail.”

Quick AI prompt: “Turn these opportunity statements into 5 concept one-pagers (problem, concept, value, risks).”
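
The Kano surveys listed above have a mechanical scoring step: each pair of answers (feature present vs. feature absent) maps to a category through the standard Kano evaluation table. A minimal sketch, assuming one common wording of the five answer options:

```python
# Classify Kano survey answer pairs using the standard evaluation table.
# Answer labels below are one common 5-point wording; adapt to your survey.
# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent,
# R=Reverse, Q=Questionable (contradictory response).
KANO_TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def kano_category(functional: str, dysfunctional: str) -> str:
    # Row: answer when the feature is present; column: when it is absent.
    return KANO_TABLE[functional][dysfunctional]

# One respondent on one feature: loves having it, dislikes lacking it.
print(kano_category("like", "dislike"))  # -> "O" (one-dimensional)
```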


4) Evaluative Research (Does it work?)

Two flavors:


4a) Formative Evaluation (Iterative improvement)

When: During design/prototyping.

Methods: Moderated usability tests, heuristic reviews, accessibility audits, tree tests, prototype analytics.

Metrics: Task success, errors, time-on-task, SEQ; qualitative issues + severity.

Outcome: Prioritized fix list to improve the design now.

Stakeholder line: “We’re doing formative testing to find and fix issues before build.”

Quick AI prompt: “Prioritize these 27 usability issues by severity, frequency, and effort; output a fix plan.”
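
One transparent way to do that prioritization, with or without AI, is a simple scoring pass. A sketch assuming each issue carries a 1–4 severity, the share of participants who hit it, and a rough effort estimate; the issues and weighting are illustrative:

```python
# Rank usability issues: higher severity and frequency raise priority,
# higher fix effort lowers it. The scoring formula is illustrative.
issues = [
    {"id": "U-01", "desc": "Checkout button hidden on mobile", "severity": 4, "frequency": 0.8, "effort": 2},
    {"id": "U-02", "desc": "Ambiguous error on bad password",  "severity": 2, "frequency": 0.5, "effort": 1},
    {"id": "U-03", "desc": "Filter panel resets on back nav",  "severity": 3, "frequency": 0.3, "effort": 5},
]

def priority(issue):
    # severity (1-4) x frequency (share of participants affected),
    # discounted by effort in rough ideal days.
    return issue["severity"] * issue["frequency"] / issue["effort"]

for issue in sorted(issues, key=priority, reverse=True):
    print(f'{issue["id"]}  score={priority(issue):.2f}  {issue["desc"]}')
```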


4b) Summative Evaluation (Benchmark/decision)

When: End of a phase or pre/post-launch.

Methods: Benchmark tests, SUS/SUPR-Q/CSAT, performance targets, competitive benchmarks, large-sample unmoderated tests.

Metrics: SUS, success rate, time deltas, error rate, conversion, NPS.

Outcome: A defensible score for quality and go/no-go decisions.

Stakeholder line: “We’re running a summative benchmark to decide readiness against target KPIs.”

Quick AI prompt: “Create a one-slide exec summary of our benchmark (SUS, success, time) with risks and a go/no-go call.”
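
The SUS arithmetic itself is worth automating rather than hand-tallying: odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch:

```python
# Compute a SUS score from one respondent's ten answers (each 1-5).
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd vs. even item rule
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example respondent; a study's score is the mean across respondents.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # -> 82.5
```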


5) Causal & Experimental (A/B, DoE)

Goal: Determine whether change X causes outcome Y.

Use when: You have a live experience or high-fidelity prototype with measurable outcomes.

Methods: A/B & multivariate tests, holdouts, quasi-experiments, difference-in-differences.

Outputs: Causal evidence to roll out or roll back.

Stakeholder line: “We’re testing to prove impact before full rollout.”

Quick AI prompt: “Explain these A/B results for execs: impact, CI, effect size, and decision.”
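
Before writing that exec summary, the raw numbers for a simple A/B conversion test come from a standard two-proportion z-test. A self-contained sketch; the visitor and conversion counts are invented:

```python
# Two-proportion z-test for an A/B conversion experiment.
# Counts below are illustrative, not real data.
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    # 95% CI on the lift, using the unpooled standard error
    se_diff = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (p_b - p_a - 1.96 * se_diff, p_b - p_a + 1.96 * se_diff)
    return p_b - p_a, z, p_value, ci

lift, z, p, ci = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:.2%}  z={z:.2f}  p={p:.3f}  95% CI=({ci[0]:.2%}, {ci[1]:.2%})")
```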


6) Descriptive & Behavioral Analytics

Goal: Describe what users do at scale; find drop-offs, friction, and cohorts.

Methods: Funnels, pathing, retention curves, cohort analysis, instrumentation reviews.

Outputs: Quant maps that guide where to dig qualitatively.

Stakeholder line: “We’re mapping behavior to spot the biggest friction points and size the opportunities.”

Quick AI prompt: “Summarize this funnel CSV into 3 insights, 3 hypotheses, and 2 tests.”
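
That funnel summary can be roughed out in a few lines before any AI pass. A sketch assuming a hypothetical funnel.csv with step and users columns, ordered from the top of the funnel down:

```python
# Step-to-step and overall conversion from a simple funnel export.
# "funnel.csv" and its "step"/"users" columns are assumptions about
# your export format; adjust to match your analytics tool.
import csv

with open("funnel.csv", newline="") as f:
    rows = [(row["step"], int(row["users"])) for row in csv.DictReader(f)]

top = rows[0][1]
prev = top
for step, users in rows:
    print(f"{step:<20} {users:>8}  step: {users / prev:6.1%}  overall: {users / top:6.1%}")
    prev = users
```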


7) Information Architecture & Findability

Goal: Validate how content is organized and found.

Methods: Open/closed card sorts, tree testing, search-log analysis.

Outputs: Taxonomy, labels, nav structure aligned to mental models.

Stakeholder line: “We’re validating IA to improve findability and reduce search friction.”

Quick AI prompt: “Turn these card-sort exports into a proposed IA with top-level categories, labels, and rationale.”
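
A concrete first step with those card-sort exports is a co-occurrence count: for each pair of cards, how many participants grouped them together. A minimal sketch; the sample sorts are invented:

```python
# Pairwise co-occurrence from open card-sort results.
# Each participant's sort maps group label -> cards; data is illustrative.
from itertools import combinations
from collections import Counter

sorts = [
    {"money":   ["billing", "invoices", "plans"], "people": ["profile", "team"]},
    {"account": ["billing", "profile"],           "docs":   ["invoices", "plans", "team"]},
    {"admin":   ["billing", "invoices"],          "me":     ["profile", "team", "plans"]},
]

together = Counter()
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            together[(a, b)] += 1

for (a, b), n in together.most_common(5):
    print(f"{a} + {b}: grouped together by {n}/{len(sorts)} participants")
```

High-co-occurrence pairs are candidates for the same top-level category; low agreement flags labels worth tree testing.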


8) Accessibility & Inclusive Research

Goal: Ensure experiences are perceivable, operable, understandable, robust—for all.

Methods: Screen-reader and keyboard-only testing, voice control, WCAG audits, usability sessions with participants of diverse abilities.

Outputs: Inclusive patterns, defect logs, conformance path.

Stakeholder line: “We’re testing for inclusive access and legal/regulatory compliance.”

Quick AI prompt: “Convert this WCAG audit into a developer-friendly backlog with acceptance criteria.”
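
That audit-to-backlog conversion is mostly data reshaping. A minimal sketch that turns findings into ticket-ready items; the finding fields and acceptance wording are illustrative assumptions:

```python
# Turn WCAG audit findings into backlog items with acceptance criteria.
# Field names and the acceptance template are illustrative.
findings = [
    {"criterion": "1.4.3 Contrast (Minimum)", "level": "AA",
     "location": "Pricing page CTA", "issue": "3.2:1 contrast on button text"},
    {"criterion": "2.1.1 Keyboard", "level": "A",
     "location": "Date picker", "issue": "Calendar not reachable by Tab"},
]

for f in findings:
    print(f"[A11y][{f['level']}] {f['criterion']}: {f['location']}")
    print(f"  Problem: {f['issue']}")
    print(f"  Acceptance: {f['location']} passes {f['criterion']} "
          f"under keyboard-only and assistive-tech checks.\n")
```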


Putting it together (fast tracks)

  • Explore & Define: Discovery + Exploratory

  • Design: Generative → Formative Evaluative (iterate)

  • Pre/Post-Launch: Summative Evaluative → A/B + Analytics → Continuous learning

 

If you have 2–3 weeks:

  1. Rapid Discovery (5–8 interviews + light analytics) → frame opportunities

  2. Generative concepts (co-create 3 options) → pick a direction

  3. Formative usability on a clickable prototype → fix big rocks

  4. If live: A/B the most contentious assumption → measure impact


Common pitfalls (and fixes)

  • Vague labels → vague outcomes. Always pair the type with a decision.

  • Jumping to solutions. Do Discovery before Generative.

  • Treating formative like summative. Early tests are for finding issues, not scoring.

  • Benchmarks without baselines. Define targets up front (e.g., SUS ≥ 80; success ≥ 90%).

  • No handoff to action. Every study should end with a prioritized decision + owner + date.


Cheat sheet (copy/paste)

  • Don’t know the problem → Discovery / Exploratory

  • Know the problem, need ideas → Generative

  • Have a design, need to improve → Formative evaluative

  • Need a score for readiness → Summative evaluative

  • Need proof of impact → A/B / Experimental

  • Need scale behavior → Analytics (descriptive)

  • Need better findability → IA (card sort/tree test)

  • Need inclusion → Accessibility research


 

[Interactive diagram: Continuous UX Research Feedback Loop. Nodes: Real-time Analytics → User Feedback → AI Synthesis → Rapid Insights.]

Discover how modern UX research creates a seamless feedback loop that delivers insights in real time, enabling product teams to make data-driven decisions faster than ever before.