Designing UX Research AI Agents as Research Participants: Stress-Testing Journeys Before Launch
- Philip Burgess
- Sep 30
- 2 min read
Updated: Oct 25
By Philip Burgess - UX Research Leader
Rethinking Research Participants as UX Research AI Agents
Traditionally, UX research relies on recruiting human participants to test products, uncover pain points, and validate assumptions. But with the rise of AI, a new possibility is emerging: AI agents acting as simulated users. While not a replacement for real human feedback, AI-driven participants can stress-test digital journeys, highlight obvious flaws, and accelerate pre-launch validation.
Why AI Agents?
Scalability: Run hundreds of simulated interactions in minutes.
Consistency: AI agents can repeat tests with precise control over variables.
Early Detection: Spot glaring usability issues before recruiting humans.
Cost Savings: Reduce wasted spend on testing flows that are fundamentally broken.
How It Works
Define User Profiles: Create AI agents with distinct goals, behaviors, and constraints (e.g., first-time shopper vs. returning power user).
Simulate Journeys: Run agents through key flows (signup, checkout, navigation).
Capture Data: Track clicks, completion rates, errors, and drop-off points.
Iterate Quickly: Use AI feedback to refine the prototype before investing in human studies.
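To make the loop concrete, here is a minimal sketch of steps 1-3, assuming a hypothetical five-step checkout flow and toy behavioral parameters (the `patience` and `error_rate` values are illustrative, not calibrated to real users):

```python
import random
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """A simulated participant with a goal and behavioral constraints."""
    name: str
    patience: int      # max steps before abandoning the flow
    error_rate: float  # chance of a mis-click at each step

# Hypothetical checkout flow used for illustration.
CHECKOUT_STEPS = ["landing", "cart", "shipping", "payment", "confirm"]

def run_journey(agent: AgentProfile, rng: random.Random) -> dict:
    """Walk one agent through the flow, recording where it drops off."""
    steps_taken = 0
    for step in CHECKOUT_STEPS:
        steps_taken += 1
        if rng.random() < agent.error_rate:
            steps_taken += 1  # a mis-click costs one retry
        if steps_taken > agent.patience:
            return {"agent": agent.name, "completed": False, "dropped_at": step}
    return {"agent": agent.name, "completed": True, "dropped_at": None}

def simulate(agents, runs=100, seed=42):
    """Aggregate completion rates across many repeated runs."""
    rng = random.Random(seed)  # fixed seed -> repeatable, controlled tests
    results = [run_journey(a, rng) for a in agents for _ in range(runs)]
    completed = sum(r["completed"] for r in results)
    return {"runs": len(results), "completion_rate": completed / len(results)}

personas = [
    AgentProfile("first_time_shopper", patience=6, error_rate=0.3),
    AgentProfile("returning_power_user", patience=8, error_rate=0.05),
]
print(simulate(personas))
```

The fixed random seed is what buys the "Consistency" benefit above: the same agents can be replayed against a revised prototype with every variable held constant.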
Benefits and Best Uses
Stress-Testing Edge Cases: AI agents can be programmed to click through unusual paths or attempt invalid inputs.
Benchmarking: Compare AI-agent performance across different versions of a design.
Scenario Simulation: Model how different personas might behave under varied conditions (slow internet, accessibility settings).
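The edge-case idea can be sketched in a few lines. The validator below is a deliberately naive stand-in for a real signup form's email check (not any production code), and the inputs are the kind of unusual values an agent could be scripted to attempt:

```python
# Naive stand-in for a real form's email validation logic.
def validate_email(value: str) -> bool:
    value = value.strip()
    return "@" in value and "." in value.split("@")[-1] and " " not in value

# Edge-case inputs an AI agent might be scripted to attempt.
EDGE_CASES = ["", "   ", "no-at-sign.com", "a@b", "a@b.", "user@@site.com",
              "üser@sïte.com", "a" * 500 + "@x.com"]

def stress_test(validator, inputs):
    """Run every unusual input and report which ones the validator accepts."""
    return {s: validator(s) for s in inputs}

for value, accepted in stress_test(validate_email, EDGE_CASES).items():
    print(f"{'ACCEPT' if accepted else 'reject'}: {value[:40]!r}")
```

Even this toy run surfaces the kind of finding AI agents are good at: the naive validator happily accepts `"a@b."` and `"user@@site.com"`, a flaw worth fixing before any human ever sees the form.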

Limitations to Keep in Mind
Not Human: AI lacks emotional nuance, cultural context, and subjective experience.
Bias Risk: Agents are only as diverse as the data and rules you program.
Supplement, Don’t Replace: AI participants should augment human-centered research, not substitute for it.
Ethical Considerations
Transparency: Be clear with stakeholders about what AI agents can and cannot reveal.
Integrity: Avoid presenting AI results as a proxy for authentic human needs.
Equity: Use AI testing to broaden, not narrow, perspectives by combining it with diverse human participants.
Example in Action
Scenario: E-commerce site preparing for holiday traffic.
AI Simulation: 1,000 AI shoppers attempt checkout simultaneously.
Insight: Identified bottlenecks in payment API integration.
Result: Engineering fixes applied before human usability studies, saving time and improving reliability.
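A scaled-down sketch of that simulation, using Python's asyncio: the payment endpoint, its concurrency limit, and the timings are all assumptions for illustration (a real study would drive a staging environment, and 200 concurrent shoppers stands in for the 1,000 above to keep the example fast):

```python
import asyncio
import time

PAYMENT_CONCURRENCY = 10  # assumed capacity of the payment API integration

async def payment_api(sem: asyncio.Semaphore):
    """Fake payment endpoint: the concurrency cap makes queueing visible."""
    async with sem:
        await asyncio.sleep(0.01)  # simulated processing time

async def shopper(sem: asyncio.Semaphore) -> float:
    """One simulated shopper; returns how long checkout took end to end."""
    start = time.perf_counter()
    await payment_api(sem)
    return time.perf_counter() - start

async def main(n_shoppers=200):
    sem = asyncio.Semaphore(PAYMENT_CONCURRENCY)
    latencies = await asyncio.gather(*(shopper(sem) for _ in range(n_shoppers)))
    latencies.sort()
    return {"p50": latencies[len(latencies) // 2],
            "p95": latencies[int(len(latencies) * 0.95)]}

stats = asyncio.run(main())
print(f"p50={stats['p50']:.3f}s  p95={stats['p95']:.3f}s")
```

The widening gap between median and tail latency is the "bottleneck" signal: shoppers queue behind the capped payment integration, which is exactly the kind of issue worth handing to engineering before recruiting human participants.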
Closing Thought
AI agents as research participants open a new frontier in UX — one where machines help us prepare for real human interactions. By combining speed, scale, and scenario testing with authentic user feedback, researchers can build products that are both technically resilient and truly human-centered.
Philip Burgess | philipburgess.net | phil@philipburgess.net