Case Study: How We Used AI to Accelerate Competitive UX Audits
- Philip Burgess, UX Research Leader
- Aug 21
- 2 min read
Competitive audits are essential for identifying UX opportunities, understanding market standards, and guiding product differentiation. But let’s be honest—they’re time-intensive, messy, and often hard to scale across multiple competitors and platforms.
This case study explores how our team leveraged AI tools to cut audit time in half, enhance pattern recognition, and deliver richer, more actionable insights to stakeholders.
The Challenge
We were tasked with conducting a competitive UX audit of five leading platforms in our industry. The goal? Uncover UX best practices, pain points, and feature gaps across:
- Web and mobile experiences
- Onboarding flows
- Navigation and IA
- Support interactions
- Checkout processes
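To keep a five-competitor, five-area audit from sprawling, it helps to track coverage explicitly. Here is a minimal sketch of such a coverage matrix; the competitor names are placeholders, not the platforms from our actual audit:

```python
# Hypothetical sketch: track audit coverage per competitor and UX area.
# Competitor names below are illustrative placeholders.

AREAS = [
    "Web and mobile experiences",
    "Onboarding flows",
    "Navigation and IA",
    "Support interactions",
    "Checkout processes",
]

def build_coverage_matrix(competitors, areas=AREAS):
    """Map each competitor to an {area: audited?} checklist."""
    return {c: {a: False for a in areas} for c in competitors}

def remaining(matrix):
    """List (competitor, area) pairs still to be audited."""
    return [(c, a) for c, checks in matrix.items()
            for a, done in checks.items() if not done]

matrix = build_coverage_matrix(["Competitor A", "Competitor B"])
matrix["Competitor A"]["Onboarding flows"] = True
print(len(remaining(matrix)))  # prints 9: one of ten cells is done
```

A shared sheet does the same job; the point is making "fully mapped" a checkable claim rather than a feeling.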
Traditionally, this would take 3–4 weeks with manual walkthroughs, screenshots, notes, and synthesis. With a tight deadline and high stakeholder visibility, we needed a better way.
The AI-Powered Audit Stack
Here’s the toolkit we used:
| Purpose | Tool |
| --- | --- |
| Session Recording & Notes | Loom + tl;dv (AI-generated highlights) |
| Screenshot Annotation | Scribe, CleanShot, or Tango |
| Pattern Extraction & Themes | ChatGPT + Claude (using structured prompts) |
| Survey Review Analysis | Perplexity + Notably AI |
| Report Drafting | ChatGPT (GPT-4o), based on our annotated inputs |
Our Process: AI in Action
1. Structured Capture Across Competitors
We performed guided walkthroughs of each competitor, recording our sessions in Loom and using tl;dv to generate instant summaries and highlight recurring usability patterns.
2. Rapid Annotation with AI Help
Instead of manually organizing screenshots, we used Tango to create step-by-step documentation with AI-generated captions and callouts.
3. Pattern Extraction with LLMs
We exported our notes and had ChatGPT group feedback into common UX patterns like:
- Onboarding friction points
- Confusing navigation labels
- Checkout trust indicators
- Support touchpoint accessibility
By feeding structured prompts (e.g., “Cluster these pain points by usability heuristic and competitor”), we quickly surfaced cross-platform patterns.
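A minimal sketch of how such a structured prompt can be assembled from tagged notes; the pain points below are illustrative, not findings from the real audit:

```python
# Hypothetical sketch: build the clustering prompt from tagged audit notes.
# The pain-point data is made up for illustration.

def build_clustering_prompt(pain_points):
    """pain_points: list of (competitor, note) tuples."""
    lines = [f"- [{competitor}] {note}" for competitor, note in pain_points]
    return (
        "Cluster these pain points by usability heuristic and competitor.\n"
        "Return one group per heuristic, listing the competitors affected.\n\n"
        + "\n".join(lines)
    )

prompt = build_clustering_prompt([
    ("Competitor A", "Checkout hides total cost until the final step"),
    ("Competitor B", "Navigation labels use internal jargon"),
])
print(prompt)
```

Tagging each note with its competitor in the prompt itself is what lets the model return cross-platform clusters instead of a flat list.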
4. Drafting Competitive Insights
Using the patterns and annotated visuals, we prompted ChatGPT to draft a slide-level competitive audit summary, including:
- Strengths and weaknesses by platform
- Best-in-class UX examples
- Feature parity tables
- UX opportunity zones
We then layered in human analysis to ensure nuance, business relevance, and design implications.
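One of those artifacts, the feature parity table, is easy to generate mechanically once findings are structured. A minimal sketch with made-up features and placeholder competitors:

```python
# Hypothetical sketch: render a feature parity table as Markdown from
# structured findings. Features and competitors are illustrative.

def parity_table(features, support):
    """support: dict mapping competitor name -> set of supported features."""
    competitors = sorted(support)
    header = "| Feature | " + " | ".join(competitors) + " |"
    divider = "| --- " * (len(competitors) + 1) + "|"
    rows = [
        "| " + feature + " | "
        + " | ".join("Yes" if feature in support[c] else "No"
                     for c in competitors)
        + " |"
        for feature in features
    ]
    return "\n".join([header, divider] + rows)

table = parity_table(
    ["Guest checkout", "Live chat support"],
    {"Competitor A": {"Guest checkout"},
     "Competitor B": {"Live chat support"}},
)
print(table)
```

Generating the table from data keeps it consistent with the clustered findings, so the human pass can focus on interpretation rather than transcription.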
The Impact
| Metric | Traditional Audit | AI-Accelerated Audit |
| --- | --- | --- |
| Time to completion | 3–4 weeks | 1.5 weeks |
| Competitors audited | 3–4 max | 5 fully mapped |
| Patterns identified | ~15 | ~35 (richer clustering) |
| Stakeholder satisfaction | High | Very high ("Best audit we've received") |
Key Learnings
- Prompt precision matters. The quality of insight from LLMs improves significantly with context-aware prompts.
- AI saves time, not judgment. Human review was still necessary to avoid hallucinations and tie findings to business goals.
- Visual storytelling matters. AI helped generate content, but we still invested time in visual polish for stakeholder credibility.
Final Thought
AI won’t replace UX research—but it amplifies our ability to conduct smarter, faster, more scalable audits. By automating the repetitive and accelerating synthesis, we freed up more time for strategic storytelling.
Have you tried using AI for your competitive audits? Share what worked—or didn’t—for your team.