Anthropic's Claude Code Leak Triggers Security Crisis
Updated April 4, 2026 · 5 sources
Anthropic is grappling with a significant security incident after the source code for Claude Code was inadvertently leaked during the release of version 2.1.88. The leak exposed more than 512,000 lines of code across nearly 2,000 files, revealing unreleased features including a Tamagotchi-like pet and an always-on, data-collecting agent named Kairos.

Security experts have raised alarms about the operational integrity of AI systems, as hackers are now distributing the leaked code alongside malware, heightening the risk of malicious exploitation. Although Anthropic maintains that no sensitive user data was compromised, the incident has ignited broad debate over software vulnerabilities, competitive dynamics in the AI industry, and the implications for user privacy and data security. As the situation develops, stakeholders remain concerned about the potential ramifications for both Anthropic and the wider AI landscape.