Perplexity Comet Security: Built In From Day One

Perplexity Comet Security: Built In From Day One

Perplejidad

20 feb 2026

In a modern office setting, a man types on a laptop displaying a digital shield icon, signifying strong data protection, while a secure safe represents "Perplexity Comet Security: Built In From Day One."
In a modern office setting, a man types on a laptop displaying a digital shield icon, signifying strong data protection, while a secure safe represents "Perplexity Comet Security: Built In From Day One."

¿No sabes por dónde empezar con la IA?
Evalúa preparación, riesgos y prioridades en menos de una hora.

¿No sabes por dónde empezar con la IA?
Evalúa preparación, riesgos y prioridades en menos de una hora.

➔ Descarga nuestro paquete gratuito de preparación para IA

Perplexity’s Comet is an AI-native browser designed with security and privacy controls built in from the start. Official documentation emphasises local-first data by default, consent-based “personal searches” that use browsing context only when requested, and OS-level credential vault storage. Independent audits also highlight prompt-injection risks and the need for strong trust boundaries and validation.

AI browsers are a step change in capability — and in risk.

Traditional browsers display web pages. Agentic AI browsers can also act: summarise content, navigate sites, and complete tasks using your open tabs, history, and authenticated sessions. That shift introduces a new attack surface: if the agent can “do things for you”, attackers will try to trick it into doing the wrong things.

Perplexity’s Comet positions security as a foundational design concern. But for buyers and security teams, the important question isn’t the tagline — it’s the controls: what data stays local, when context is shared, and how the assistant treats untrusted web content.

This guide pulls together Comet’s published privacy/security details and independent security research to explain what “security from day one” means in practice.

Why AI browser security is different

The web is an adversarial environment. Pages can contain malicious scripts, misleading UI, and text designed to manipulate both humans and models.

With an AI browser assistant, the risk categories expand:

  • Indirect prompt injection: a page includes instructions that the model mistakes for user intent.

  • Data exfiltration: the assistant is tricked into pulling sensitive data from authenticated sessions (email, docs) and sending it elsewhere.

  • Phishing and fraud: an agent can be steered into interacting with fake sites that look legitimate.

  • Overreach by automation: an agent acts with too much autonomy, bypassing normal user caution.

Independent audits of agentic browsing experiences have repeatedly shown these failure modes, which is why “secure-by-default” needs to be more than a slogan.

What Comet says it does (privacy and data handling)

Perplexity’s published Comet privacy/security FAQ sets out several core design principles.

Local-first by default

Comet states that Comet data is stored on your device by default and that personal data is not sent to Perplexity until you initiate a “personal search” that requires context.

Personal searches are explicit

Comet distinguishes between general searching and “personal searches” (tasks like organising email or acting on your behalf). It states that personal context is only used when required by the query.

Clear permissioning for integrations

For access to third-party services (for example Gmail or scheduling), Comet describes opt-in “connector” permissions that can be revoked.

Credential storage in OS vaults

The FAQ states that credentials such as passwords and payment details are stored locally in an OS-level secure vault and not stored on Perplexity’s servers.

These controls matter because they constrain how much of the user’s world is accessible to the assistant by default — and they establish a consent boundary for more sensitive actions.

Independent audit signals: threat modelling and prompt injection

A strong “day one” security programme usually includes external testing.

Trail of Bits published a detailed account of an audit commissioned by Perplexity ahead of Comet’s launch. Their write-up describes:

  • an ML-centred threat model splitting the system into trust zones (local machine vs Perplexity servers)

  • prompt injection techniques designed to exploit the assistant’s tools and data access

  • proof-of-concepts demonstrating sensitive data exposure (e.g., extracting information from authenticated Gmail sessions)

Their conclusion is broader than any single product: AI agents that don’t treat external content as untrusted input are vulnerable. For Comet and other AI browsers, the defensive programme needs to explicitly address that reality.

Practical guidance: what “secure by design” should look like for an AI browser

Whether you’re evaluating Comet or any agentic browser, these are the security primitives you should expect.

1) Trust boundaries (and minimal privileges)

  • External web content should be treated as hostile input.

  • The assistant should only have access to the minimum context needed to complete the task.

2) Explicit user intent and confirmations

  • High-impact actions (logins, purchases, message sending) should require clear confirmation.

  • “Auto-run” behaviours should be tightly constrained.

3) Validation layers against prompt injection

  • Separate user instructions from page instructions.

  • Use a policy layer or validator to block unsafe tool calls.

  • Log decisions so risky patterns can be investigated.

4) Data handling that’s defensible

  • Local-first storage where possible.

  • Opt-in cloud sync if introduced later.

  • Straightforward deletion controls and retention transparency.

5) Security testing that mirrors reality

  • Red teaming with realistic browsing tasks.

  • Phishing and scam scenario testing.

  • Independent audits and responsible disclosure.

What organisations should do before rolling out an AI browser

If you’re considering Comet for teams (or any agentic browser), treat it as a new class of endpoint.

A lightweight enterprise rollout checklist

  • Start with a sandbox pilot: non-sensitive workflows first.

  • Define “no-go” actions: payments, privileged admin consoles, regulated data.

  • Set policy defaults: disable risky capabilities until validated.

  • Use least-privilege connectors: only the integrations required for the pilot.

  • Add monitoring: logs, anomaly detection, and incident pathways.

  • Train users: “what to ask” and “when not to delegate”.

Where Generation Digital helps

AI tooling doesn’t become safe by accident. It becomes safe when governance, security engineering, and user enablement work together.

Generation Digital helps teams:

  • define security and governance guardrails for AI agents

  • assess risk across workflows and data classes

  • build adoption plans that keep autonomy aligned with policy

Summary

Comet’s published security posture emphasises local-first data by default, explicit consent for “personal searches”, opt-in connectors, and OS-level credential vault storage. Independent security research also shows why prompt injection is a defining risk for AI browsers — and why threat modelling, validation layers, and realistic red teaming are essential from day one.

Next steps

  1. Decide which workflows are safe to delegate to an AI browser in your organisation.

  2. Pilot with tight guardrails and least-privilege integrations.

  3. Validate prompt-injection and phishing scenarios before enabling broader autonomy.

  4. If you want support designing a governed rollout, contact Generation Digital.

FAQs

Q1: How does Perplexity ensure security in Comet?
A: Comet’s published documentation emphasises privacy and user control: local-first storage by default, explicit “personal searches” when personal context is needed, opt-in integration permissions, and OS-level secure credential storage.

Q2: What makes Comet’s security approach unique?
A: The key differentiator is treating the AI browser assistant as a security boundary: requiring explicit intent for personal-context actions, keeping credentials in secure local vaults, and designing controls for agentic browsing risks such as prompt injection.

Q3: Can users trust Comet for secure interactions?
A: Comet includes privacy controls and security design features, but like all agentic AI browsers it needs strong safeguards against phishing and prompt injection. Users should be cautious with sensitive tasks until controls and organisational guardrails are validated.

Q4: What is prompt injection and why does it matter for AI browsers?
A: Prompt injection is when content on a web page includes instructions that an AI assistant mistakenly follows. In an AI browser, that can lead to unsafe tool use, data exposure, or actions taken on your behalf.

Q5: What should enterprises do before deploying Comet widely?
A: Start with a controlled pilot, restrict high-risk actions, use least-privilege connectors, test phishing/prompt-injection scenarios, and ensure logging and incident pathways are in place.

Perplexity’s Comet is an AI-native browser designed with security and privacy controls built in from the start. Official documentation emphasises local-first data by default, consent-based “personal searches” that use browsing context only when requested, and OS-level credential vault storage. Independent audits also highlight prompt-injection risks and the need for strong trust boundaries and validation.

AI browsers are a step change in capability — and in risk.

Traditional browsers display web pages. Agentic AI browsers can also act: summarise content, navigate sites, and complete tasks using your open tabs, history, and authenticated sessions. That shift introduces a new attack surface: if the agent can “do things for you”, attackers will try to trick it into doing the wrong things.

Perplexity’s Comet positions security as a foundational design concern. But for buyers and security teams, the important question isn’t the tagline — it’s the controls: what data stays local, when context is shared, and how the assistant treats untrusted web content.

This guide pulls together Comet’s published privacy/security details and independent security research to explain what “security from day one” means in practice.

Why AI browser security is different

The web is an adversarial environment. Pages can contain malicious scripts, misleading UI, and text designed to manipulate both humans and models.

With an AI browser assistant, the risk categories expand:

  • Indirect prompt injection: a page includes instructions that the model mistakes for user intent.

  • Data exfiltration: the assistant is tricked into pulling sensitive data from authenticated sessions (email, docs) and sending it elsewhere.

  • Phishing and fraud: an agent can be steered into interacting with fake sites that look legitimate.

  • Overreach by automation: an agent acts with too much autonomy, bypassing normal user caution.

Independent audits of agentic browsing experiences have repeatedly shown these failure modes, which is why “secure-by-default” needs to be more than a slogan.

What Comet says it does (privacy and data handling)

Perplexity’s published Comet privacy/security FAQ sets out several core design principles.

Local-first by default

Comet states that Comet data is stored on your device by default and that personal data is not sent to Perplexity until you initiate a “personal search” that requires context.

Personal searches are explicit

Comet distinguishes between general searching and “personal searches” (tasks like organising email or acting on your behalf). It states that personal context is only used when required by the query.

Clear permissioning for integrations

For access to third-party services (for example Gmail or scheduling), Comet describes opt-in “connector” permissions that can be revoked.

Credential storage in OS vaults

The FAQ states that credentials such as passwords and payment details are stored locally in an OS-level secure vault and not stored on Perplexity’s servers.

These controls matter because they constrain how much of the user’s world is accessible to the assistant by default — and they establish a consent boundary for more sensitive actions.

Independent audit signals: threat modelling and prompt injection

A strong “day one” security programme usually includes external testing.

Trail of Bits published a detailed account of an audit commissioned by Perplexity ahead of Comet’s launch. Their write-up describes:

  • an ML-centred threat model splitting the system into trust zones (local machine vs Perplexity servers)

  • prompt injection techniques designed to exploit the assistant’s tools and data access

  • proof-of-concepts demonstrating sensitive data exposure (e.g., extracting information from authenticated Gmail sessions)

Their conclusion is broader than any single product: AI agents that don’t treat external content as untrusted input are vulnerable. For Comet and other AI browsers, the defensive programme needs to explicitly address that reality.

Practical guidance: what “secure by design” should look like for an AI browser

Whether you’re evaluating Comet or any agentic browser, these are the security primitives you should expect.

1) Trust boundaries (and minimal privileges)

  • External web content should be treated as hostile input.

  • The assistant should only have access to the minimum context needed to complete the task.

2) Explicit user intent and confirmations

  • High-impact actions (logins, purchases, message sending) should require clear confirmation.

  • “Auto-run” behaviours should be tightly constrained.

3) Validation layers against prompt injection

  • Separate user instructions from page instructions.

  • Use a policy layer or validator to block unsafe tool calls.

  • Log decisions so risky patterns can be investigated.

4) Data handling that’s defensible

  • Local-first storage where possible.

  • Opt-in cloud sync if introduced later.

  • Straightforward deletion controls and retention transparency.

5) Security testing that mirrors reality

  • Red teaming with realistic browsing tasks.

  • Phishing and scam scenario testing.

  • Independent audits and responsible disclosure.

What organisations should do before rolling out an AI browser

If you’re considering Comet for teams (or any agentic browser), treat it as a new class of endpoint.

A lightweight enterprise rollout checklist

  • Start with a sandbox pilot: non-sensitive workflows first.

  • Define “no-go” actions: payments, privileged admin consoles, regulated data.

  • Set policy defaults: disable risky capabilities until validated.

  • Use least-privilege connectors: only the integrations required for the pilot.

  • Add monitoring: logs, anomaly detection, and incident pathways.

  • Train users: “what to ask” and “when not to delegate”.

Where Generation Digital helps

AI tooling doesn’t become safe by accident. It becomes safe when governance, security engineering, and user enablement work together.

Generation Digital helps teams:

  • define security and governance guardrails for AI agents

  • assess risk across workflows and data classes

  • build adoption plans that keep autonomy aligned with policy

Summary

Comet’s published security posture emphasises local-first data by default, explicit consent for “personal searches”, opt-in connectors, and OS-level credential vault storage. Independent security research also shows why prompt injection is a defining risk for AI browsers — and why threat modelling, validation layers, and realistic red teaming are essential from day one.

Next steps

  1. Decide which workflows are safe to delegate to an AI browser in your organisation.

  2. Pilot with tight guardrails and least-privilege integrations.

  3. Validate prompt-injection and phishing scenarios before enabling broader autonomy.

  4. If you want support designing a governed rollout, contact Generation Digital.

FAQs

Q1: How does Perplexity ensure security in Comet?
A: Comet’s published documentation emphasises privacy and user control: local-first storage by default, explicit “personal searches” when personal context is needed, opt-in integration permissions, and OS-level secure credential storage.

Q2: What makes Comet’s security approach unique?
A: The key differentiator is treating the AI browser assistant as a security boundary: requiring explicit intent for personal-context actions, keeping credentials in secure local vaults, and designing controls for agentic browsing risks such as prompt injection.

Q3: Can users trust Comet for secure interactions?
A: Comet includes privacy controls and security design features, but like all agentic AI browsers it needs strong safeguards against phishing and prompt injection. Users should be cautious with sensitive tasks until controls and organisational guardrails are validated.

Q4: What is prompt injection and why does it matter for AI browsers?
A: Prompt injection is when content on a web page includes instructions that an AI assistant mistakenly follows. In an AI browser, that can lead to unsafe tool use, data exposure, or actions taken on your behalf.

Q5: What should enterprises do before deploying Comet widely?
A: Start with a controlled pilot, restrict high-risk actions, use least-privilege connectors, test phishing/prompt-injection scenarios, and ensure logging and incident pathways are in place.

Recibe noticias y consejos sobre IA cada semana en tu bandeja de entrada

Al suscribirte, das tu consentimiento para que Generation Digital almacene y procese tus datos de acuerdo con nuestra política de privacidad. Puedes leer la política completa en gend.co/privacy.

Próximos talleres y seminarios web

A diverse group of professionals collaborating around a table in a bright, modern office setting.
A diverse group of professionals collaborating around a table in a bright, modern office setting.

Claridad Operacional a Gran Escala - Asana

Webinar Virtual
Miércoles 25 de febrero de 2026
En línea

A diverse group of professionals collaborating around a table in a bright, modern office setting.
A diverse group of professionals collaborating around a table in a bright, modern office setting.

Trabaja con compañeros de equipo de IA - Asana

Taller Presencial
Jueves 26 de febrero de 2026
Londres, Reino Unido

A diverse group of professionals collaborating around a table in a bright, modern office setting.
A diverse group of professionals collaborating around a table in a bright, modern office setting.

De Idea a Prototipo: IA en Miro

Seminario Web Virtual
Miércoles 18 de febrero de 2026
En línea

Generación
Digital

Oficina en Reino Unido

Generation Digital Ltd
33 Queen St,
Londres
EC4R 1AP
Reino Unido

Oficina en Canadá

Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canadá

Oficina en EE. UU.

Generation Digital Américas Inc
77 Sands St,
Brooklyn, NY 11201,
Estados Unidos

Oficina de la UE

Software Generación Digital
Edificio Elgee
Dundalk
A91 X2R3
Irlanda

Oficina en Medio Oriente

6994 Alsharq 3890,
An Narjis,
Riad 13343,
Arabia Saudita

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo (Background Removed)

Número de la empresa: 256 9431 77 | Derechos de autor 2026 | Términos y Condiciones | Política de Privacidad

Generación
Digital

Oficina en Reino Unido

Generation Digital Ltd
33 Queen St,
Londres
EC4R 1AP
Reino Unido

Oficina en Canadá

Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canadá

Oficina en EE. UU.

Generation Digital Américas Inc
77 Sands St,
Brooklyn, NY 11201,
Estados Unidos

Oficina de la UE

Software Generación Digital
Edificio Elgee
Dundalk
A91 X2R3
Irlanda

Oficina en Medio Oriente

6994 Alsharq 3890,
An Narjis,
Riad 13343,
Arabia Saudita

UK Fast Growth Index UBS Logo
Financial Times FT 1000 Logo
Febe Growth 100 Logo (Background Removed)


Número de Empresa: 256 9431 77
Términos y Condiciones
Política de Privacidad
Derechos de Autor 2026