Understanding the present, shaping the future.

Search
11:56 AM UTC · SUNDAY, MAY 10, 2026 XIANDAI · Xiandai
May 10, 2026 · Updated 11:56 AM UTC
Cybersecurity

Brave researchers uncover critical prompt injection vulnerability in Perplexity Comet

Brave security engineers discovered that Perplexity’s Comet AI browser can be tricked into exfiltrating sensitive user data through hidden, malicious instructions embedded in websites.

Ryan Torres

2 min read

Brave researchers uncover critical prompt injection vulnerability in Perplexity Comet
Photo: fedscoop.com

Security researchers at Brave have identified a significant vulnerability in Perplexity’s Comet browser that allows attackers to hijack an AI assistant using indirect prompt injection. The flaw enables malicious actors to bypass standard web security by embedding hidden commands that the AI executes without user consent.

Artem Chaikin, a senior mobile security engineer at Brave, led the investigation alongside VP of Privacy and Security Shivan Kaul Sahib. The team found that Comet fails to distinguish between legitimate user prompts and untrusted content retrieved from the web. When a user asks the AI to summarize a page, it processes every element on that page as an instruction.

The mechanics of the exploit

Attackers can hide malicious prompts within web content using techniques such as white text on white backgrounds, HTML comments, or user-generated comments on social platforms like Reddit. When the AI processes these pages, it follows the hidden commands as if they were direct user requests.

In a proof-of-concept demonstration, Brave researchers showed how the exploit could lead to full account takeover. By embedding instructions on a Reddit post, an attacker forced the Comet AI to navigate to the user’s Perplexity account settings, extract their email address, and retrieve a one-time password from the user’s Gmail account. The AI then exfiltrated this sensitive information back to the attacker.

"The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques," the researchers noted in their report. The vulnerability highlights a fundamental disconnect in current agentic AI designs, which often treat external data as trusted input.

Brave disclosed these findings to Perplexity to address the risks posed by autonomous browser agents. As browsers move toward using AI to perform complex tasks like booking flights or managing logins, the potential for data exfiltration grows. The research serves as a warning that without strict sandboxing between content and system-level commands, AI agents may become conduits for credential theft.

Comments