Security Boulevard

Using threat modeling and prompt injection to audit Comet

Perplexity engaged Trail of Bits to assess the security of Comet, their AI-powered browser, before its launch. Using their TRAIL threat model, the testers identified four prompt injection techniques to extract user data, specifically from Gmail. These exploits demonstrated vulnerabilities arising from the browser's AI assistant's handling of external content. The tests revealed how the AI agent would act on instructions disguised within webpages. Techniques included using fake security mechanisms, summarization instructions, and simulated system messages. The exploits utilized various methods like fake CAPTCHAs, content fragments, and security validators to target the AI. These vulnerabilities were used to design exploits that retrieved and exfiltrated a user's Gmail contents. Trail of Bits' review led to five security recommendations focused on threat modeling, establishing clear boundaries, and systematic testing. The team also suggests applying the principle of least privilege and treating AI input as untrusted. Perplexity addressed these findings by integrating changes to prevent prompt injection.
favicon
bsky.app
Hacker & Security News on Bluesky @hacker.at.thenote.app
favicon
securityboulevard.com
securityboulevard.com