From Click to Care
How Users Navigate Provider Selection
May–June 2025 | UC San Diego Health
The Challenge
Healthcare provider directories often fail patients at critical decision-making moments. After the User Journey Mapping Workshop revealed significant gaps in how patients select specialty care providers, a question emerged: How do our competitors address these same user needs? Are we falling behind in supporting the personalized, transparent, and informed decision-making users seek when choosing a healthcare provider?
Our digital experience needed benchmarking against competitors to understand where opportunities existed to better serve patients during one of their most important healthcare decisions.
Business Objectives
Benchmark against competitors to understand industry standards for provider directory features
Validate user priorities identified in the Journey Mapping Workshop through competitive lens
Identify gaps in UCSD Health's provider selection experience
Provide evidence-based recommendations for prioritizing feature development
How I Worked with Others
Sparked by Collaboration:
This project originated from a web developer's observation during a cross-functional meeting—proof that the best research questions often come from listening to colleagues closest to implementation challenges. Rather than dismissing it as a side comment, I recognized an opportunity to provide evidence for a decision the team was debating.
Cross-Functional Partnership:
I worked with the Digital Experience Associate Director to ensure the analysis would inform upcoming strategic decisions. Throughout the project, I maintained ongoing dialogue with web development and content teams to understand technical feasibility and content constraints—ensuring recommendations would be actionable rather than aspirational.
Inclusive Presentation:
I presented findings to Marketing and Communications stakeholders, web developers, and content strategists together. This created shared understanding across teams that typically work in sequence. The discussion that followed the presentation was as valuable as the research itself—teams problem-solved together about which features to prioritize and how to implement them within our constraints.
Continued Collaboration:
The analysis didn't end with the presentation. I continue to work with content and web development teams to evaluate provider page enhancements. This ongoing partnership ensures research insights translate into actual improvements rather than sitting in a report.
My Role & Responsibilities
I designed and conducted the entire competitive analysis:
Research design: Developed evaluation framework based on user priorities from Journey Mapping Workshop
Competitive benchmarking: Evaluated 12 features across 4 major San Diego health systems (Sharp, Scripps, Kaiser, UCSD Health)
AI-assisted analysis: Used Microsoft Copilot to analyze how competitors communicate provider values and care philosophy, applying ethical decision-making and critical thinking to validate AI outputs
Secondary research: Analyzed NRC Health study data (2021) on information sources patients prioritize when selecting providers for cancer care
Synthesis & presentation: Created comparison matrix and presented findings to stakeholders
Research Approach
Competitive Analysis:
Evaluated 4 health systems: Sharp HealthCare, Scripps Health, Kaiser Permanente, UC San Diego Health
Assessed 12 features spanning three user priority categories (representative examples below; a comparison-matrix sketch follows the list):
Transparency: Availability, insurance, years of experience
Personalization: Population-specific care, care philosophy, introduction videos
Decision support: Advanced filtering, reviews/ratings, comparison tools
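To make the matrix concrete, here is a minimal Python sketch of how it can be represented and queried. The twelve feature names are taken from the findings later in this case study; the assignment of a few of them to categories is my own inference, and all scores are placeholders rather than the actual evaluation results.

```python
# A minimal sketch of the feature comparison matrix (placeholder values,
# not the actual evaluation results). Features are grouped by the three
# user priority categories from the Journey Mapping Workshop; the grouping
# of features not explicitly categorized in the study is my inference.
import pandas as pd

FEATURES = {
    "Transparency": [
        "Next available appointment",
        "Insurance plan information",
        "Accepting new patients indicator",
        "Years of experience",
    ],
    "Personalization": [
        "Population-specific care highlights",
        "Care philosophy / values",
        "Provider introduction videos",
    ],
    "Decision support": [
        "Advanced filtering",
        "Patient reviews and ratings",
        "Side-by-side comparison tool",
        "Location/hospital affiliation filter",
        "English language filter",
    ],
}

SYSTEMS = ["Sharp", "Scripps", "Kaiser", "UCSD Health"]

# Flatten the categories into one row per feature.
rows = [(cat, feat) for cat, feats in FEATURES.items() for feat in feats]
index = pd.MultiIndex.from_tuples(rows, names=["Category", "Feature"])

# Placeholder scores: True = feature present. Real values would come from
# the manual evaluation of each system's provider directory.
matrix = pd.DataFrame(False, index=index, columns=SYSTEMS)
matrix.loc[("Decision support", "Advanced filtering"), "UCSD Health"] = True

# Industry-wide gaps: features no evaluated system offers.
gaps = matrix.index[~matrix.any(axis=1)]
print(matrix)
print("\nFeatures offered by no system:")
for cat, feat in gaps:
    print(f"- {feat} ({cat})")
```

Structuring the evaluation this way has a side benefit: industry-wide gaps, like the next-available-appointment display and the comparison tool noted in the findings, fall out of a one-line query rather than a manual re-read of the matrix.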
User Interview Validation:
5 patients from User Journey Mapping Workshop (ages 25-61)
Validated priorities identified in workshop findings
Confirmed importance of features being evaluated
AI-Assisted Research:
Used Microsoft Copilot to analyze competitor provider profiles for values and care philosophy presentation
Applied critical thinking to validate AI findings against manual website review
Used ethical decision-making framework to ensure AI insights were accurate and unbiased
Secondary Research:
NRC Health study (2021): Patient information priorities when selecting cancer care providers
Key Findings
Where UCSD Health Excels:
Advanced filtering: Credentials, languages, subspecialties
Patient reviews and ratings: Displayed and aggregated
Provider introduction videos: Available for many providers
Critical Gaps Identified:
No next available appointment display (0 of 4 systems offer this)
No insurance plan information on provider pages or filters
No "accepting new patients" indicator for specialty care providers
No years of experience or graduation year displayed
No population-specific care highlights (LGBTQ+, cultural, age groups)
No provider care philosophy or values in profiles
No side-by-side provider comparison tool (0 of 4 systems offer this)
No location/hospital affiliation filter
No English language filter (UCSD Health was the only system missing this)
Competitor Strengths:
Sharp HealthCare: Insurance display, availability indicators, experience years, care philosophy, videos
Scripps Health: Availability indicators, experience years, care philosophy, videos
AI-Assisted Insights
Copilot analysis revealed that Scripps Health and Sharp HealthCare provider profiles included more personal statements about care approach and values, making profiles feel less institutional and more patient-centered compared to UCSD Health's credential-focused approach.
Impact
Immediate Action:
English language filter added to provider search following presentation
Ongoing collaboration with content and web development teams to evaluate additional provider page enhancements
Strategic Direction:
Informed User Journey Workshop: Findings directly shaped workshop discussions about provider selection pain points
Feature prioritization framework: Created evidence-based approach for evaluating which features to implement
Stakeholder alignment: Presentation created shared understanding of competitive landscape gaps
Broader Influence:
Connected to review credibility work: Analysis validated need to address provider review trust issues (separate research initiative)
Established benchmarking practice: Created repeatable framework for future competitive evaluations
What I Learned
The Value of Multi-Method Validation:
Using AI as a research assistant taught me the importance of critical validation. While Microsoft Copilot quickly analyzed provider profiles across four health systems for care philosophy and values, I cross-checked every finding manually. This hybrid approach accelerated analysis while maintaining rigor—the AI identified patterns I might have missed, but human judgment remained essential for context and accuracy.
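For illustration, that cross-check can be as simple as a script that diffs the AI's per-profile findings against the manual review and flags disagreements for a second human pass. Everything in the sketch below, including the profile URLs, the labels, and the flag_disagreements helper, is hypothetical rather than drawn from the actual study records.

```python
# Hypothetical sketch of the AI-vs-human cross-check. The profile URLs,
# labels, and helper below are invented for illustration only.

# AI-extracted finding per profile: did the AI detect a care-philosophy
# statement on this provider page?
AI_FINDINGS = {
    "sharp.example/doctors/a": True,
    "scripps.example/doctors/b": True,
    "ucsd.example/providers/c": False,
}

# The same question answered by a manual review of each page.
MANUAL_REVIEW = {
    "sharp.example/doctors/a": True,
    "scripps.example/doctors/b": False,  # the AI over-detected here
    "ucsd.example/providers/c": False,
}

def flag_disagreements(ai: dict[str, bool], manual: dict[str, bool]) -> list[str]:
    """Return profiles where the AI and the human reviewer disagree."""
    return [url for url in ai if ai[url] != manual.get(url)]

# A disagreement doesn't decide who is right; it only routes the
# profile back for a second human pass.
for url in flag_disagreements(AI_FINDINGS, MANUAL_REVIEW):
    print(f"Re-review needed: {url}")
```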
Competitive Analysis as a Conversation Starter:
This project began with a casual observation from a web developer in a meeting. I learned that some of the best research questions emerge from cross-functional dialogue rather than formal planning. Being receptive to these moments and knowing how to quickly scope and execute focused research creates opportunities to drive change.
The Limitation of Feature Checklists:
While the comparison matrix provided clear visual evidence of gaps, I discovered that simply having features doesn't guarantee a good user experience. Sharp and Scripps offered more features, but their implementation varied in quality. This taught me to look beyond "what exists" to "how well it serves users", a nuance that informed my recommendations.
Secondary Data as Validation:
Incorporating the 2021 NRC Health study strengthened stakeholder buy-in. Having external research validate our user interview findings gave the work more credibility. I learned that whenever possible, grounding competitive analysis in broader industry data creates more compelling business cases.
Timing Matters for Impact:
Presenting this analysis between the external research phase and the User Journey Workshop was strategically important. It gave workshop participants concrete data to reference during discussions and helped teams move from "should we address this?" to "how should we prioritize solutions?" faster.
Why This Project Matters
Healthcare provider selection is one of the highest-stakes decisions patients make, yet digital directories often treat it as a simple search problem. This competitive analysis revealed that even leading health systems struggle to support the personalized, transparent decision-making patients need.
By combining traditional competitive benchmarking with AI-assisted analysis and user interview validation, this project created a comprehensive picture of the landscape—not just what features exist, but how well they serve real patient needs. The findings directly shaped the User Journey Workshop and continue to inform feature prioritization decisions.
Most importantly, this work demonstrated that strategic research doesn't always require months of planning. Sometimes the most impactful studies emerge from listening to a colleague's observation and moving quickly to provide evidence when teams need it most.
Research Methods: Competitive analysis • Heuristic evaluation • User interview validation • AI-assisted analysis • Secondary research analysis
Skills Applied: Competitive benchmarking • AI research tools (Microsoft Copilot) • Feature prioritization • Stakeholder presentation • Cross-functional collaboration • Healthcare UX • Ethical AI use