Epic Case Study
Scaling an AI Design System Across 80+ Clinical Apps
Company
Epic
Role
Lead UX Designer
Work
UX strategy & design, design system, visioning
Years
2022 - 2025



Disclaimer: Due to IP restrictions, this case study only includes conceptual visuals. Actual Epic designs and assets cannot be shared, unless they've been made public.
Background
Epic is the largest EHR provider in the U.S., serving over 325 million patients across 2,600+ hospitals, with 80+ clinical applications. As generative AI entered the healthcare space, Epic launched dozens of new features - summarization, automation, message drafting - at unprecedented speed. Learn more on Epic’s website.
To keep up, they needed a system: consistent UX, reusable design patterns, and a clear visual language to make AI trustworthy and intuitive across high-stakes tools.
The Challenge
Each of Epic’s 80+ apps was building AI independently, with different scopes, UIs, and timelines. The result: fragmented experiences, unclear interaction rules, and no scalable way to guide users through AI-driven workflows.
Clinicians didn’t know what to trust. Designers had no shared foundation. Developers lacked reusable components. Adoption was stalling. Without alignment, speed was becoming a liability.
My Role
I co-led the design and scale-up of Epic’s AI UX system - from pattern audits to system creation to cross-team rollout.
I helped define the core interaction types (e.g. summarization, task automation), built out reusable UX and UI patterns, created a visual system to flag AI-generated content, and equipped teams with guidance, documentation, and guardrails.
This became the UX foundation for all AI-powered experiences at Epic, trusted by 80K+ clinicians and deployed across 330+ organizations.
📢 Full Case Study Under Construction!
A deeper dive into this case study is in the works, but if you want to hear about it now, let’s chat!
When Speed Outpaces Structure
Epic shipped 100+ AI features in the months after ChatGPT dropped. It had the energy of a startup - but at enterprise scale, that speed became a liability.

Epic “Health Grid” (some of the 80+ apps)
Source: showroom.epic.com
Eighty teams were building AI features in parallel with no shared patterns. Visual inconsistencies were everywhere. Some features looked polished, others felt broken. Clinicians couldn't tell what was AI-generated. Trust eroded and adoption stalled.
The problem was that Epic prioritized shipping fast but had no AI design foundation. Without structure, that speed was backfiring.
We needed a system, and we needed it fast.



Building Epic’s AI Design System
This wasn't a top-down initiative. A few designers, myself included, noticed the same chaos repeating across every AI project.
We knew this wouldn't scale. So we took ownership and built the structure Epic needed.
Research Strategy
Speed was critical, so we stayed lean and focused:
Audited 30+ in-progress AI features
Ran workshops to surface gaps
Interviewed customers and internal teams
Analyzed how top AI tools solved similar problems
Pulled insights from past UX research
Our goal was to build a foundation that scaled proactively, not reactively.
The 4 Core AI UX Patterns
We simplified the chaos into four reusable patterns:


Summarization
Chart digests, medication updates, discharge insights

Drafted Text
Messages, documentation starters, denial letters

Transformed Content
Language simplification, conversational queries, data reshaping

Task Automation
Follow-ups, billing suggestions, prior auths

For each pattern, we designed reusable components:
Conversational UI (textboxes, threads, message flows)
Summary cards for structured insights
Field styling to distinguish AI vs. human inputs
In-context editors for easy modifications
Trust-focused elements (feedback mechanisms, validation steps)
Hover states and layered interactions for explainability
Note: Due to IP restrictions, I can’t show this work here, though some designs have been made public.


Visual Identity
Epic's existing design system wasn't built for generative AI. We needed something that felt distinct but cohesive.
We created the "Bloom" icon - the universal signal for AI across Epic's entire platform.
We also built:

The gen AI “Bloom”
Source: This LinkedIn post
AI-specific icon system
A distinct color and gradient system
Visual motifs (swoops, spirals)
Naming conventions for consistency
Our goal was to make AI feel powerful, safe, and integrated - not bolted on.
Scaling the System
Once the system felt solid, we ran continuous testing with clinicians and got positive responses.
And when we brought it to Epic's C-suite, they immediately supported the system, its strategic value, and in their words, “how beautiful it is.”
That greenlight pushed us into execution:
Partnering with dev leadership to build reusable code
Building a Figma component library
Updating internal UX standards with practical guidance
Educating 4,000+ devs, QMs, and designers
We rolled out before Epic's largest conference with 40,000+ healthcare professionals watching. The launch made a strong first impression.
I then became the system's sole owner and scaled it across the entire platform.
Owning the System
My role expanded from execution to strategy. Daily work included:
Maintaining UX patterns across 80+ apps
Building reusable components with dev teams
Reviewing nearly every AI feature for consistency
Educating teams on implementation
Beyond daily ownership, I partnered with the C-suite to shape Epic's AI UX strategy:
Defining AI narratives for conferences and partnerships
Crafting pitches (including Microsoft partnership)
Working directly with orgs like Mayo Clinic
Building vision frameworks for AI workflows
I was at the center of both tactical execution and long-term strategy.
Process That Scaled
Epic broke its quarterly release model for AI. I designed a UX process that kept pace without compromising quality. Most patterns followed this rhythm:
1
Research
Audits, SME input, past research, user interviews when needed
2
Define
What are we solving? What pattern does it align with? What’s the risk?
3
Co-Design
Pulled in designers from other apps to make sure it scaled
4
Validation
Built and tested prototypes, refined with clinician feedback
5
Enablement
Shipped reusable components, updated guidance, trained teams
Simple, fast, and effective.
Solving Hard Problems
Here are three complex challenges where design decisions directly impacted adoption, trust, and safety:
Preventing Automation Bias
Problem: When users over-trust AI even when it's wrong, that's automation bias. In healthcare, it's extremely dangerous.
Although we weren't seeing it yet, we knew we had to get ahead of it. If doctors started accepting AI-generated chart summaries without verifying the source data, or using AI-drafted messages without checking accuracy, patients could be severely harmed or even die.
Solution: I partnered with Epic's ML and Ethical AI leaders on a multi-month initiative to tackle this systematically.
We created the Risk × Complexity Matrix, a framework that helped teams answer one question: "How cautious do we need to be here?" It laid out four UX zones.
The Research Process
Analyzed cognitive bias literature and healthcare error patterns
Interviewed clinicians about their AI usage and trust patterns
Mapped real scenarios where automation bias had caused problems
Studied how aviation, finance, and other high-risk industries handle AI safety
The Framework: We mapped every AI feature across two axes:
Risk: What's at stake if it's wrong? (patient safety, regulatory compliance, workflow disruption)
Complexity: How difficult is the task? (routine vs. nuanced clinical judgment)
This created four zones with specific UX approaches, as seen in this loose diagram recreation.
Low Risk, Low Complexity
Keep it efficient and low-friction - users just need to know AI was used, not be slowed down by it.
Low Risk, High Complexity
Help users stay oriented; guide them through complexity without overwhelming them or breaking flow.
High Risk, Low Complexity
Make it crystal clear where the AI output came from, so users can trust it without needing to hunt for answers.
High Risk, High Complexity
Slow things down just enough to help users focus - this is where safety matters most, and the UX should reflect that.



Each section of the matrix had a full system of UI/UX mitigators that project teams would implement. They followed these patterns:
Positive friction
Confirmation steps, review prompts, "Are you sure?" moments
Explainable AI
Confidence scores, data sources, reasoning chains
Cognitive support
Simplified layouts, highlighted contradictions, next steps
Reflective UX
Nudges to pause, guided review processes, clear boundaries
We built this into system guidance and pattern libraries so teams could self-assess their features, choose the right mitigators, and know when to bring in deeper UX support.
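As a loose illustration of how the self-assessment worked, it can be sketched as a simple lookup from a feature's risk and complexity to a starting set of mitigators. The zone keys and mitigator labels below are hypothetical examples, not Epic's actual guidance:

```python
# Illustrative sketch of a Risk x Complexity self-assessment.
# Zone keys and mitigator labels are hypothetical examples,
# not Epic's actual system guidance.

MITIGATORS = {
    ("low", "low"):   ["lightweight 'AI was used' indicator"],
    ("low", "high"):  ["cognitive support", "simplified layout"],
    ("high", "low"):  ["explainable AI", "visible data sources"],
    ("high", "high"): ["positive friction", "reflective UX", "explainable AI"],
}

def assess(risk: str, complexity: str) -> list[str]:
    """Return the UX mitigators a project team should start from."""
    key = (risk.lower(), complexity.lower())
    if key not in MITIGATORS:
        raise ValueError("risk and complexity must each be 'low' or 'high'")
    return MITIGATORS[key]

# Example: an AI-drafted patient message - high stakes, routine task.
print(assess("High", "Low"))  # ['explainable AI', 'visible data sources']
```

The value of the real framework was the same as this sketch's: a team could self-locate in one of four zones and immediately know which class of mitigators to reach for.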
Impact
Teams could self-assess their features and choose the right safety mitigators.
This scaled responsible design without slowing velocity.
More importantly, it gave us a shared language for discussing AI safety - something that didn't exist before.
AI Citations System
Problem: Clinicians were asking "Where did this come from?" and "How do I know it's right?" Users didn't have a way to review and validate AI outputs without leaving the workflow, which hurt trust.
The research: We dug in with user interviews and mapped clinician behavior with and without citations. Without them, users:
Read through AI output carefully, often redoing the work manually
Left the screen to double-check context or data in other chart sections
Avoided using the AI features altogether in high-risk scenarios
From the research, it became clear that users prioritized 4 main principles:
Non-intrusive
Stay out of the way unless summoned
Transparent
Make sources and authorship obvious
Efficient
Reduce clicks and screen switches
Familiar
Mimic citation patterns clinicians already trust
Solution: I co-led the design of a comprehensive citation system that became core to Epic's AI UX. We iterated on and tested a range of solutions:
Inline references vs. expandable footnotes
Icon-only indicators vs. numbered citations
Hover previews, persistent sidebars, and pop-outs
Variations with confidence scores, author info, timestamps
The final design: Inline citation bubbles → compact hover cards → expandable reference list
This three-tier system worked because:
Tier 1 (Bubbles): Showed that information was cited without cluttering the interface
Tier 2 (Hover): Provided quick context (data type, author, timestamp, source snippet) without leaving the page
Tier 3 (Reference List): Offered complete source information when needed, but stayed out of the way
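One way to picture the three tiers is as a single citation record rendered at increasing levels of detail. The field names below are hypothetical stand-ins for illustration, not Epic's actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-tier citation record; field names
# are illustrative, not Epic's actual implementation.
@dataclass
class Citation:
    number: int        # Tier 1: inline bubble label
    data_type: str     # Tier 2: hover-card context
    author: str
    timestamp: str
    snippet: str
    source_ref: str    # Tier 3: full reference entry

    def bubble(self) -> str:
        """Tier 1: a compact inline marker."""
        return f"[{self.number}]"

    def hover_card(self) -> str:
        """Tier 2: quick context without leaving the page."""
        return f"{self.data_type} | {self.author} | {self.timestamp}: {self.snippet}"

    def reference_entry(self) -> str:
        """Tier 3: the complete source, kept out of the way."""
        return f"{self.number}. {self.source_ref}"

c = Citation(1, "Lab result", "Dr. Rivera", "2024-03-02 09:14",
             "Hemoglobin A1c 7.2%", "Chart > Results > HbA1c (03/02/2024)")
print(c.bubble())           # [1]
print(c.reference_entry())  # 1. Chart > Results > HbA1c (03/02/2024)
```

The point of the tiering is that all three renderings draw from one record, so the interface can escalate detail on demand without ever duplicating or losing the source.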
We built and tested multiple iterations with pilot users, tightening both behavior and trust. Then we shipped it as a core design pattern with reusable code components, and pushed standards org-wide.
Impact
Adoption soared once we rolled this out. Teams quickly integrated it across AI-driven features.
Clinician feedback consistently called it "beautiful," "seamless," and "a game-changer for trusting AI."
Dev teams loved having a reusable pattern with shared components and clarity around how and when to apply it.
The pattern became Epic's standard for all information transparency, extending beyond AI to other clinical tools.
This citation system solved the fundamental trust problem in AI-generated content and became the foundation for responsible AI deployment at Epic scale.
Building Trust: Human-in-the-Loop
As AI took on more tasks, giving users control mattered more than ever. I worked across teams to design patterns that kept users in the loop, without breaking their workflow.
We focused on feedback, explainability, and clarity around who’s responsible for what.
(More examples and details to come.)
Case study in progress