Anthropic has confirmed that source code for Claude Code — the company's AI-powered coding assistant — was accidentally leaked inside a published npm package. The exposure was unintentional and has since been remediated, but not before the code was publicly accessible on the npm registry for a period of time.
Claude Code is normally closed source. Unlike many AI developer tools built on top of open-source frameworks, Anthropic has kept Claude Code's implementation details proprietary. The accidental npm publication temporarily exposed that codebase to anyone who downloaded or inspected the package.
Anthropic moved quickly to acknowledge the incident and pull the affected package version, stating in a public notice that no customer data, API keys, or credentials were exposed as part of the leak.
What Was Leaked
According to Anthropic's disclosure, the leaked code represents the Claude Code CLI source code — the implementation of the command-line interface that powers the Claude Code experience. This is the code behind:
- The terminal-based AI coding assistant
- Agent loop logic governing how Claude Code reads files, writes code, and executes commands
- Tool definitions and permission handling
- Session management and context window handling
Anthropic characterized the leak as an accidental inclusion of source files during the npm package build and publish process. The likely cause: a .npmignore file or build configuration that failed to exclude the source directory from the published package tarball.
What Was NOT Leaked
Anthropic was explicit in confirming that the following were not exposed:
| Data Type | Status |
|---|---|
| Customer data | Not exposed |
| API keys | Not exposed |
| Credentials | Not exposed |
| Model weights | Not exposed |
| Internal training data | Not exposed |
| User conversation history | Not exposed |
The exposure was limited to the CLI application source code itself — the tooling layer, not the underlying model infrastructure or customer-facing data.
How It Happened
While Anthropic has not published a full post-mortem, accidental source code exposure via npm follows a well-understood pattern. When publishing an npm package, developers use either a .npmignore file or a files field in package.json to control what gets included in the published tarball.
Common causes of accidental source inclusion:
```json
// package.json — if the 'files' array is missing or overly permissive:
{
  "name": "@anthropic-ai/claude-code",
  "version": "x.x.x",
  "files": [
    "dist/**",
    "src/**"  // ← accidentally including source
  ]
}
```

Alternatively, a missing .npmignore combined with a src/ directory can result in the source being swept up by npm publish's default inclusion behavior.
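A safer pattern is an allow-list that names only build output. The sketch below assumes the build emits to dist/; the package name and version are carried over from the illustrative example above, not taken from Anthropic's actual manifest:

```json
// package.json — allow-list approach: only paths named in 'files' are
// packed (npm always includes package.json, README, and LICENSE)
{
  "name": "@anthropic-ai/claude-code",
  "version": "x.x.x",
  "files": [
    "dist/**"
  ]
}
```

With an allow-list, forgetting an entry means a missing file in the tarball — a visible, recoverable failure — rather than a silent leak of everything not excluded.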
Checking what a package will publish before running npm publish is a straightforward safeguard that can prevent this class of incident:
```shell
# Dry run — see exactly what npm publish would include, without publishing
npm publish --dry-run

# Or explicitly list the packed files
npm pack --dry-run
```

Security Implications
The immediate risk to users of Claude Code is low. No credentials or customer data were in the leak, and Claude Code's functionality depends on model APIs that remain fully secured on Anthropic's backend infrastructure.
However, the exposure of the source code has several potential downstream implications:
For Anthropic:
- Proprietary implementation details, algorithmic approaches, and architecture decisions are now potentially known to competitors
- Any security mitigations or obfuscation techniques in the CLI are visible to researchers and adversaries
- The agent loop logic — how Claude Code interprets commands and manages permissions — could be studied to identify exploitable behaviors or bypass conditions
For Claude Code users:
- Security researchers (and threat actors) can now analyze the CLI's file access patterns, tool dispatch logic, and how it handles sensitive operations
- Any hardcoded configurations, default behaviors, or trust assumptions in the code are publicly documented
- The risk of targeted social engineering or prompt injection attacks designed around known code paths is elevated
For the broader AI developer tools ecosystem:
- This incident reinforces the importance of treating AI coding assistants as software products that require rigorous supply chain security practices, not just capable chat interfaces
What Users Should Do
For current Claude Code users, no immediate action is required:
- Continue using Claude Code normally — there are no known security vulnerabilities introduced by this leak
- Do not share API keys or credentials in Claude Code sessions (this is always best practice)
- Stay current on Anthropic's security advisories for any follow-up guidance
- Review your Claude Code installation to ensure you are running an official, signed release from the npm registry
```shell
# Verify your Claude Code version and installation source
claude --version
npm list -g @anthropic-ai/claude-code
```

Anthropic's Response
Anthropic has:
- Removed the affected package version from the npm registry
- Published a public disclosure acknowledging the accidental leak
- Confirmed the scope of what was and was not exposed
- Stated the incident is under internal review
The company's swift response and transparency about the incident reflect positively on its security posture, even if the underlying build process failure represents an operational gap that will need to be addressed.
Lessons for AI Tooling Teams
This incident highlights a class of risk that is easy to overlook when teams are moving fast on developer tooling: build and publish automation can silently include more than intended.
For teams building and publishing npm packages — especially for commercial or sensitive products:
```shell
# Always test what will be included before publishing
npm pack --dry-run

# Keep .npmignore tightly scoped — it should explicitly exclude
# src/, tests/, *.ts, internal docs, etc.
cat .npmignore

# Better: use the 'files' allow-list in package.json so that only
# explicitly listed files and directories are included
```

Automated pre-publish checks that validate the package tarball contents can catch these issues before they reach the public registry.
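Such a check can be a small script run from a prepublish hook. This is a sketch, not Anthropic's actual tooling: the allow-list patterns and file names below are illustrative assumptions, and in CI the path list would come from parsing the output of `npm pack --dry-run --json` rather than being hardcoded.

```javascript
// Sketch of an automated pre-publish tarball check (illustrative only).
// Flags any packed path not matched by an allow-list pattern.
function findUnexpectedFiles(
  paths,
  allowed = [/^dist\//, /^package\.json$/, /^README/i, /^LICENSE/i]
) {
  // Anything the allow-list does not match is a policy violation
  return paths.filter((p) => !allowed.some((re) => re.test(p)));
}

// Example: a tarball that accidentally picked up the source directory
// (file names here are hypothetical)
const packed = ["dist/cli.js", "package.json", "README.md", "src/agent-loop.ts"];
const violations = findUnexpectedFiles(packed);
if (violations.length > 0) {
  // In a real prepublish hook this would exit non-zero to block publishing
  console.error("Unexpected files in tarball:", violations);
} else {
  console.log("Tarball contents OK");
}
```

Wired into a `prepublishOnly` script, a check like this turns an accidental source inclusion into a failed build instead of a public incident.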
Source: BleepingComputer — April 1, 2026