
Anthropic Accidentally Exposes Claude Code Source via NPM Package

Anthropic confirmed a packaging error revealed the internal source code for its Claude Code tool. Security researcher Chaofan Shou discovered the leak, which included over 512,000 lines of TypeScript. The incident highlights risks in AI tool distribution and build security protocols.

La Era



Anthropic confirmed on March 31, 2026, that a release error exposed the internal source code of its Claude Code tool, a high-profile security lapse for the AI sector. The official npm package shipped with a source map that linked to unobfuscated TypeScript files rather than only the compiled output, bypassing the protections that normally keep the codebase private. The mistake drew immediate attention from the cybersecurity community and from the developers worldwide who rely on the tool for coding assistance.

Key Details

Security researcher Chaofan Shou discovered the vulnerability shortly after the latest version of the package was published. He reported that the exposure let third parties read the proprietary logic behind the AI coding assistant used by millions of developers. Snapshots of the leaked code spread to public GitHub repositories within hours of discovery, greatly amplifying the incident's reach.

The leaked archive contained approximately 1,900 TypeScript files totaling more than 512,000 lines of code, detailing the application's internal architecture and logic. The files included the full libraries of slash commands and built-in tools the application uses to interact with the underlying language model and execute complex tasks. Interest among security professionals was high: the repository hosting the code received more than 41,500 forks within the first day.

Technical Breakdown

Technical analysis revealed the error stemmed from a source map, included in the package for debugging purposes, that referenced the unobfuscated source. The map pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket, intended for use during the development cycle before release. Developers could download and decompress the archive to view the original source files directly without compilation, bypassing the obfuscation that normally protects the published bundle.
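The `sources` and optional `sourcesContent` fields of a source map are what make this kind of leak possible. A minimal, hypothetical map (the file paths and contents here are illustrative, not Anthropic's) shows how much an unstripped map can reveal:

```python
import json

# Hypothetical, minimal source map illustrating the failure mode:
# a production bundle's .map file carrying references to (or inline
# copies of) the original, unobfuscated source.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["../src/commands/slash.ts"],
  "sourcesContent": ["export const slashCommands = ['/help', '/clear'];"]
}
""")

# Anyone holding the map can enumerate the original file paths...
for path in source_map["sources"]:
    print(path)

# ...and, when sourcesContent is populated, read the source verbatim.
for original in source_map.get("sourcesContent", []):
    print(original)
```

Even when `sourcesContent` is absent, as in the Claude Code case, the `sources` entries can point at an externally hosted copy of the original files.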

"Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson told The Register. The company confirmed the issue was a release packaging error caused by human error rather than a malicious attack on its infrastructure or customer data, and said it is rolling out measures to prevent similar mistakes in future releases.

Community and Analysis

Software engineer Gabriel Anhaia analyzed the exposed code and highlighted how severe a build-pipeline or package-configuration mistake can be. He noted that a single misconfigured .npmignore or files field in package.json can publish everything to the public internet without warning. The incident is a critical reminder for developers to rigorously check their build pipelines before publishing updates.
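One common safeguard is an explicit allowlist: when the `files` field is present, npm publishes only what it names (plus a handful of always-included files such as package.json and the README), rather than everything not excluded by .npmignore. A minimal, hypothetical example:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/cli.js"
  ]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that would be uploaded, which makes a stray source map or src/ directory easy to spot.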

The original uploader of the Claude Code source repurposed his GitHub repository to host a Python port of the tool's features instead of the leaked data, citing concerns that he could be held legally liable for hosting Anthropic's intellectual property without permission. Several forks and mirrors remain available for those wishing to inspect the exposed code for educational or security-auditing purposes.

Future Implications

The incident underscores the risks of shipping source maps intended for debugging alongside obfuscated production code. Such files are generally excluded from production releases precisely because they can reveal sensitive source to anyone who looks. Trust remains a critical asset for AI companies building developer tools, and release hygiene is part of earning it.
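As a hedge against this failure mode, a release pipeline can scan the staged package for map files and `sourceMappingURL` comments before anything is uploaded. A minimal sketch, with an illustrative directory layout (not Anthropic's actual pipeline):

```python
import tempfile
from pathlib import Path


def find_sourcemap_artifacts(package_dir: Path) -> list[Path]:
    """Return files that could leak source: .map files, and JS bundles
    that still carry a sourceMappingURL comment."""
    leaks = []
    for path in package_dir.rglob("*"):
        if not path.is_file():
            continue
        if path.suffix == ".map":
            leaks.append(path)
        elif path.suffix == ".js" and "sourceMappingURL" in path.read_text(errors="ignore"):
            leaks.append(path)
    return leaks


# Demonstration against a throwaway staging directory.
with tempfile.TemporaryDirectory() as tmp:
    staged = Path(tmp)
    (staged / "cli.js").write_text("console.log('hi');\n//# sourceMappingURL=cli.js.map\n")
    (staged / "cli.js.map").write_text("{}")
    for leak in sorted(find_sourcemap_artifacts(staged)):
        print(leak.name)
```

Wiring a check like this into a prepublish step, and failing the release when it finds anything, turns a silent leak into a loud build error.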
