A critical vulnerability in Anthropic's Claude AI allows attackers to exfiltrate user data via a chained exploit that abuses the platform's own File API.
A security researcher has exposed this flaw, which enables attackers to steal user data by turning the AI's own tools against itself.
The vulnerability allows hidden commands to hijack Claude's Code Interpreter, tricking the AI into using Anthropic's own File API to send sensitive data, like chat histories, directly to an attacker.
Anthropic initially dismissed the report on October 25 but reversed its decision on October 30, acknowledging a "process hiccup".
Author's summary: Critical vulnerability in Claude AI allows data theft.