RSS Security Boulevard

HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage

Tenable Research discovered seven new vulnerabilities in ChatGPT, encompassing indirect prompt injections, data exfiltration, persistence, and safety bypasses. Attackers can exploit these vulnerabilities in the latest GPT-5 model through seemingly innocuous user interactions. The vulnerabilities allow attackers to extract private information from user memories and chat histories. A key issue is indirect prompt injection, where malicious instructions are embedded in external sources, manipulating the LLM. ChatGPT retains information using a "memory" feature, potentially storing private user data across conversations. ChatGPT also uses a web tool with browsing capabilities, making it susceptible to prompt injections. One vulnerability involves injecting malicious prompts via trusted sites during browsing, compromising the user. A "0-click" vulnerability enables prompt injection simply by asking a question that triggers a web search. A safety mechanism designed to filter unsafe URLs can be bypassed using Bing tracking links.
favicon
securityboulevard.com
securityboulevard.com
favicon
bsky.app
Hacker & Security News on Bluesky @hacker.at.thenote.app