Perplexity Health Advisory Board: Safer Health AI (2026)
The Perplexity Health Advisory Board is a small group of practising clinicians, researchers, and health-tech leaders who guide Perplexity’s health products. Its role is to pressure-test product decisions, content quality, and safety safeguards against evidence-based medicine—so health answers support patients and clinicians rather than introducing avoidable risk.
Health information is not “just another content category”. When people use AI to interpret symptoms, medicines, lab results, or chronic conditions, small mistakes can create real-world harm.
That’s the context behind Perplexity’s decision to launch a Health Advisory Board alongside its broader health push in March 2026. The signal is clear: if you want users (and clinicians) to trust AI in healthcare, you need domain expertise built into the product lifecycle, not bolted on after an incident.
What is the Perplexity Health Advisory Board?
The Perplexity Health Advisory Board is a select panel of experts designed to steer Perplexity’s health-related product decisions toward evidence-based medicine, with explicit focus on:
Patient safety (what should never be suggested, and what must trigger “seek care” behaviours)
Content quality (accuracy, clarity, bias reduction, and citations)
Clinical workflow fit (how outputs support real consultations and documentation)
Perplexity framed the board as a governance layer that helps define responsible health AI experiences—particularly important as consumer tools begin connecting to medical records, lab results, and wearable data.
Who are the members?
As announced in March 2026, the first four named members are:
Dr Eric Topol (Scripps Research)
Dr Devin Mann (NYU Grossman School of Medicine / NYU Langone Health)
Dr Wendy Chung (Harvard Medical School / Boston Children’s Hospital)
Tim Dybvig (health technology founder and operator)
Perplexity has indicated additional members will follow.
Why this matters now
1) Health AI is moving from “generic advice” to “personal context”
Many AI tools are shifting from general health guidance to answers informed by user-authorised personal data. That can improve relevance—but it raises the stakes:
Personal health context can make an answer feel more authoritative, even when it’s wrong.
Misinterpretation of labs or medications can create safety issues.
Workflow misfit can overload clinicians with extra admin, alerts, or parallel care loops.
A properly empowered advisory board helps keep product decisions grounded in clinical reality.
2) The hard problem isn’t generation—it’s governance
Modern models can draft plausible explanations. The challenge is ensuring the system:
knows its limits (uncertainty, missing data, “I can’t assess this safely”)
uses reliable sources and guidelines
handles edge cases (pregnancy, paediatrics, comorbidities, polypharmacy)
escalates appropriately to urgent care pathways
In other words: the value is not “AI that answers”, but AI that behaves safely.
How an advisory board improves product decisions
An advisory board is only useful if it influences real decisions. In practice, it can strengthen four parts of the product lifecycle.
1) Guardrails and escalation rules
Health AI needs clear behavioural rules, especially around emergencies and high-risk scenarios. Advisory input is vital for the areas below (a minimal rules sketch follows the list):
red-flag symptom patterns (e.g., chest pain + shortness of breath)
safeguarding for children and vulnerable users
messaging around medication interactions and contraindications
language that avoids false reassurance
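To make this concrete, here is a minimal sketch of how red-flag escalation rules might be encoded ahead of answer generation. The rule set, flag names, and messages are illustrative assumptions for the sketch, not Perplexity's actual safety logic.

```python
# Minimal sketch of red-flag escalation rules for a health answer pipeline.
# The rules, flag names, and messages are illustrative, not a real product's logic.

from dataclasses import dataclass

@dataclass
class EscalationRule:
    name: str
    required_flags: frozenset[str]   # all must be present in the user's input
    message: str                     # safety message shown instead of a normal answer

RULES = [
    EscalationRule(
        name="possible_cardiac_event",
        required_flags=frozenset({"chest_pain", "shortness_of_breath"}),
        message="These symptoms can indicate a medical emergency. Call emergency services now.",
    ),
    EscalationRule(
        name="paediatric_high_fever",
        required_flags=frozenset({"infant_under_3_months", "fever"}),
        message="A fever in a baby under three months needs urgent medical assessment.",
    ),
]

def check_escalation(detected_flags: set[str]) -> EscalationRule | None:
    """Return the first rule whose flags are all present, or None if no red flag fires."""
    for rule in RULES:
        if rule.required_flags <= detected_flags:
            return rule
    return None

# Example: symptom flags extracted upstream from the user's question.
rule = check_escalation({"chest_pain", "shortness_of_breath", "nausea"})
if rule:
    print(rule.message)  # escalate instead of generating a normal answer
```

The design point is that escalation behaviour is explicit and reviewable: a clinical advisory board can audit a rule set directly, rather than hoping the model behaves.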
2) Source strategy and evidence hierarchy
In healthcare, not all sources are equal. An advisory board can help define an evidence hierarchy (for example):
National clinical guidelines and professional bodies
Peer-reviewed systematic reviews and meta-analyses
High-quality primary research
Patient information from trusted providers
This also supports better citation practices and a clearer distinction between "what we know" and "what we don't".
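As an illustration, an evidence hierarchy can be expressed as a small ranking config that citation logic respects. The tier names and ordering below are assumptions for the sketch, not a published Perplexity policy.

```python
# Illustrative evidence-hierarchy config: lower rank = stronger evidence.
# Tier names mirror the list above; the scheme itself is an assumption.

EVIDENCE_TIERS = {
    "national_guideline": 1,    # national clinical guidelines, professional bodies
    "systematic_review": 2,     # peer-reviewed systematic reviews and meta-analyses
    "primary_research": 3,      # high-quality primary studies
    "patient_information": 4,   # patient information from trusted providers
}

def rank_sources(sources: list[dict]) -> list[dict]:
    """Order candidate citations so stronger evidence is cited first."""
    return sorted(sources, key=lambda s: EVIDENCE_TIERS.get(s["tier"], 99))

# Placeholder URLs for illustration only.
candidates = [
    {"url": "https://example.org/patient-leaflet", "tier": "patient_information"},
    {"url": "https://example.org/clinical-guideline", "tier": "national_guideline"},
]
for source in rank_sources(candidates):
    print(source["tier"], source["url"])
```

Keeping the hierarchy in explicit config also makes the update cadence auditable when guidelines change.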
3) Output design that supports clinician conversations
The best consumer health AI doesn’t try to replace care. It helps people prepare for it.
That means producing outputs like:
a short “what to ask your clinician” list
a structured symptom timeline
a plain-English interpretation of a lab result, with its limits made clear
guidance on what would change the recommendation (new symptoms, dose changes)
These are product design choices, not model choices.
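As a hedged sketch, "structured output" might look like a schema the product fills in rather than free-form prose. The field names here are illustrative assumptions, not a real Perplexity schema.

```python
# Sketch of a structured "prepare for your appointment" output.
# Field names are illustrative; the point is that output shape is a product decision.

from dataclasses import dataclass

@dataclass
class SymptomEvent:
    date: str          # e.g. "2026-03-01"
    description: str   # e.g. "intermittent chest tightness after exercise"

@dataclass
class AppointmentPrep:
    questions_for_clinician: list[str]
    symptom_timeline: list[SymptomEvent]
    lab_result_summary: str           # plain-English interpretation, with limits stated
    what_would_change_this: list[str] # e.g. new symptoms, dose changes

prep = AppointmentPrep(
    questions_for_clinician=["Should my medication dose change given this result?"],
    symptom_timeline=[SymptomEvent("2026-03-01", "mild dizziness in the mornings")],
    lab_result_summary="LDL is slightly above the target range; a single reading is not conclusive.",
    what_would_change_this=["new chest pain", "a medication change", "a repeat result"],
)
print(len(prep.questions_for_clinician))  # structured fields can feed a shareable summary
```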
4) Workflow integration (and avoiding clinician overload)
Healthcare teams are drowning in alerts and admin. A board can help pressure-test whether workflows:
reduce friction (fewer clicks, less duplicate documentation)
respect clinical roles and accountability
avoid introducing unmanaged tasks into the care pathway
If Perplexity’s health features connect to records and wearables, advisory oversight becomes even more important.
Practical steps: what leaders can copy from this approach
If you’re a healthcare provider, insurer, life sciences team, or a digital product organisation building health experiences, here’s a pragmatic playbook.
Step 1: Define what the board is accountable for
Avoid vague mandates like “improve safety”. Instead, set a clear remit across:
safety guardrails and escalation pathways
evidence policy (sources, guidelines, update cadence)
UX standards for uncertainty and disclaimers
monitoring and incident response
Step 2: Put advisory review into your release process
The board should have formal touchpoints:
pre-launch safety review for new health capabilities
review of major prompt/agent changes
quarterly audit of high-risk topics and failure modes
Step 3: Track measurable safety and quality signals
Examples of useful signals include:
citation coverage rate for health answers
“urgent care” escalation accuracy (false negatives matter)
user comprehension checks (did they understand limits?)
clinician feedback on usefulness and workload impact
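As a sketch of how two of these signals could be computed from answer logs, assuming a simple hypothetical log format:

```python
# Two illustrative safety/quality metrics over hypothetical answer and review logs.

def citation_coverage(answers: list[dict]) -> float:
    """Share of health answers that include at least one citation."""
    cited = sum(1 for a in answers if a.get("citations"))
    return cited / len(answers) if answers else 0.0

def escalation_false_negative_rate(cases: list[dict]) -> float:
    """Of cases a clinician labelled 'should escalate', how many did the system miss?"""
    should_escalate = [c for c in cases if c["clinician_label"] == "escalate"]
    missed = sum(1 for c in should_escalate if not c["system_escalated"])
    return missed / len(should_escalate) if should_escalate else 0.0

answers = [{"citations": ["guideline-1"]}, {"citations": []}]
cases = [
    {"clinician_label": "escalate", "system_escalated": True},
    {"clinician_label": "escalate", "system_escalated": False},
]
print(citation_coverage(answers))              # 0.5
print(escalation_false_negative_rate(cases))   # 0.5
```

The false-negative rate deserves the most scrutiny, because a missed escalation is the costliest failure mode.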
Step 4: Treat privacy and consent as product features
If you’re connecting to personal health data, users must be able to:
see what data is used and why
revoke access easily
understand what is and isn’t stored
The safest UX is transparent and reversible.
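A minimal sketch of what "transparent and reversible" can mean in data terms: every connection to personal health data is a consent record the user can see and revoke. The schema below is an assumption for illustration, not a real product's data model.

```python
# Illustrative consent record: visible purpose, revocable at any time.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DataConsent:
    source: str                       # e.g. "lab_results" or "wearable:heart_rate"
    purpose: str                      # shown to the user in plain language
    granted_at: datetime
    revoked_at: datetime | None = None

    def revoke(self) -> None:
        """Revoking access is a first-class action, not a support ticket."""
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None

consent = DataConsent(
    source="lab_results",
    purpose="Explain cholesterol results in plain English",
    granted_at=datetime.now(timezone.utc),
)
consent.revoke()
print(consent.active)  # False: downstream features should stop using this data
```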
Step 5: Build a clear “support layer, not replacement” position
Trust grows when the product consistently reinforces that it supports the patient–clinician relationship. Make this visible in:
language (avoid diagnosing)
UI patterns (encourage appointments where appropriate)
handoff features (shareable summaries for care teams)
What this could mean for Perplexity users
If Perplexity follows through, a well-run advisory board should lead to:
fewer unsafe or overconfident health outputs
clearer explanations and stronger citations
better “next best action” guidance (including when to seek care)
outputs that help users have better conversations with their clinicians
It’s not a guarantee—governance quality depends on how much authority the board actually has—but it’s a meaningful step in the right direction.
FAQs
Who are the members of the Perplexity Health Advisory Board?
The first named members (March 2026) are Dr Eric Topol, Dr Devin Mann, Dr Wendy Chung, and Tim Dybvig. Perplexity has said additional members will be announced.
How does the advisory board influence product decisions?
Its purpose is to pressure-test Perplexity’s health features—covering safety safeguards, content quality, and clinical workflow fit—against evidence-based medicine standards.
What is the primary role of a health advisory board for AI?
To reduce avoidable harm by ensuring product choices reflect clinical standards, robust evidence, and real-world workflows, rather than purely technical optimisation.
Does a health advisory board make an AI product clinically approved?
Not by itself. It can improve governance and safety, but organisations still need appropriate compliance, testing, monitoring, and (where relevant) regulatory alignment.
What should organisations look for when evaluating vendor health AI?
Ask how the vendor handles evidence updates, safety escalation, uncertainty, auditing, privacy/consent, and clinician workflow impact—and whether external experts have real decision-making influence.
Next steps
If you’re evaluating health AI (or planning to launch it), don’t start with the model. Start with governance, safety, and workflow.
Generation Digital can help you:
assess health AI vendors and risk posture
design safe workflows and escalation paths
build adoption plans that clinicians will actually use
Contact us to map your use cases and governance approach.