Vucense

Anthropic Accidentally Leaked Claude Code's Entire Source Code — 512,000 Lines, 44 Hidden Features

Kofi Mensah
Inference Economics & Hardware Architect
Electrical Engineer | Hardware Systems Architect | 8+ Years in GPU/AI Optimization | ARM & x86 Specialist
Reading Time 11 min read
Published: April 1, 2026
Updated: April 1, 2026
[Image: Terminal screen showing code representing the Anthropic Claude Code source code leak via npm on March 31, 2026]

Key Takeaways

  • 512,000 lines exposed. On March 31, 2026, security researcher Chaofan Shou discovered that Claude Code v2.1.88 on npm shipped with a 59.8MB source map file containing Anthropic’s full TypeScript source code. Within hours it was mirrored across GitHub with 41,500+ forks.
  • Anthropic confirmed: human error, not a breach. “This was a release packaging issue caused by human error, not a security breach. No customer data or credentials were involved.” The package has been pulled — the internet did not wait.
  • 44 unreleased features exposed. The source code contains 44 feature flags covering capabilities not yet shipped to users, including KAIROS (autonomous daemon mode), Penguin Mode, and the Dream System.
  • Concurrent npm supply chain attack. Separately from the leak, the axios npm package was compromised with a Remote Access Trojan during the same timeframe. Users who updated Claude Code on March 31 between 00:21–03:29 UTC may be affected.

What Happened

At approximately 4:23 AM ET on March 31, 2026, Chaofan Shou — a security researcher and intern at Solayer Labs — posted on X that the @anthropic-ai/claude-code npm package version 2.1.88 contained a source map file that should not have been there.

Source maps are debugging tools. They map minified, bundled JavaScript back to the original readable source code. They are meant to stay on internal build servers. When accidentally included in a public npm package, they hand anyone who downloads the package a full, readable copy of the original source.
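To make concrete why a stray .map file is so damaging: a source map is plain JSON, and its optional sourcesContent field embeds the original files verbatim. The file names and content below are invented for illustration, not from the leaked package:

```javascript
// A minimal source map. The "sourcesContent" field carries the original,
// unminified source files word for word.
const map = {
  version: 3,
  file: "cli.min.js",
  sources: ["src/cli.ts"], // original file paths
  sourcesContent: [
    "export function main() {\n  console.log('hello');\n}\n",
  ],
  mappings: "AAAA", // position mappings (irrelevant to the leak)
};

// Anyone holding the .map file can dump the original source directly:
for (const [i, path] of map.sources.entries()) {
  console.log(`--- ${path} ---`);
  console.log(map.sourcesContent[i]);
}
```

No reverse engineering is required; recovering the source is a JSON parse and a loop.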

In this case: 1,900 files. 512,000 lines. 59.8 megabytes. The complete internal source code of Anthropic’s flagship coding agent.

The package was pulled within hours, but the code had already been mirrored to GitHub, where it was forked 41,500+ times and starred by thousands of developers. It is now permanently public; Anthropic retains copyright, but retraction at that scale is practically impossible.

Direct Answer: Was the Claude Code source code leaked? Yes. On March 31, 2026, Anthropic accidentally published a 59.8MB JavaScript source map file with Claude Code v2.1.88 on the npm registry, exposing 512,000 lines of proprietary TypeScript source code. Security researcher Chaofan Shou discovered and publicised the exposure at 4:23 AM ET. The package was pulled, but the code was widely mirrored on GitHub. Anthropic confirmed the leak was human error during packaging, not a security breach. No customer data or credentials were exposed.


What the Leak Revealed

The Architecture: More Than a “Chat Wrapper”

The source code reveals Claude Code’s internal complexity. What presents externally as a terminal CLI is internally a multi-system agent with distinct components:

The Tool System (~40 tools): Claude Code uses a plugin-like architecture. Each capability — file reading, Bash execution, web fetch, git operations, LSP integration — is a discrete, permission-gated tool. The base tool definition alone runs to 29,000 lines of TypeScript.

The Query Engine (46,000 lines): The core orchestration layer. Handles all LLM API calls, streaming, context management, caching, and multi-agent coordination. The largest single module in the codebase.
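The permission-gated tool pattern described above can be sketched roughly as follows. The registry API, tool names, and permission model here are invented for illustration; they are not the leaked code's actual interfaces:

```javascript
// Hypothetical plugin-style tool registry: each capability is a discrete
// tool, and gated tools refuse to run without an explicit grant.
const registry = new Map();

function registerTool(name, { requiresPermission, run }) {
  registry.set(name, { requiresPermission, run });
}

function invokeTool(name, input, grantedPermissions) {
  const tool = registry.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  if (tool.requiresPermission && !grantedPermissions.has(name)) {
    throw new Error(`permission denied: ${name}`);
  }
  return tool.run(input);
}

// Example tool: reading a file requires an explicit grant.
registerTool("read_file", {
  requiresPermission: true,
  run: ({ path }) => `contents of ${path}`,
});
```

The value of the pattern is that the permission check lives in one choke point (invokeTool) rather than inside each of the ~40 tools.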

Self-Healing Memory Architecture: The leak reveals how Claude Code solves “context entropy” — the tendency for AI agents to become confused in long sessions. The system uses three layers:

  • MEMORY.md — a lightweight index of pointers (~150 characters per line) always loaded into context
  • Topic files fetched on-demand when specific information is needed
  • Raw transcripts never fully read back into context, only grep’d for specific identifiers

This “Strict Write Discipline” prevents the model from polluting its context with failed attempts — the index is only updated after a confirmed successful file write.

KAIROS: The Unreleased Daemon Mode

The most significant revelation is KAIROS — named after the Ancient Greek concept of “the right time” — mentioned over 150 times in the source code.

KAIROS represents a shift from reactive to autonomous AI tooling. It is a daemon mode in which Claude Code operates as an always-on background agent that:

  • Runs background sessions while the user is idle
  • Performs memory consolidation (called autoDream) during idle periods
  • Merges disparate observations from previous sessions
  • Removes logical contradictions from its knowledge base
  • Converts vague insights into absolute facts stored for future sessions

This is not yet shipped to users. It sits behind a compile-time feature flag set to false in external builds.
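Compile-time gating works roughly like this sketch: bundlers inline such constants, so the false branch is dead-code-eliminated from external builds rather than checked at runtime. The flag refers to KAIROS as described in the leak, but the constant and function names here are illustrative:

```javascript
// In internal builds the bundler would inline this as true; in external
// builds it is false and the daemon branch is stripped entirely.
const FEATURE_KAIROS = false;

function startSession() {
  if (FEATURE_KAIROS) {
    // Daemon path: never present in external bundles.
    return { mode: "daemon" };
  }
  return { mode: "interactive" };
}

console.log(startSession().mode);
```

This is why the features were invisible to users but fully legible in the unminified source: the flags and both branches exist in the TypeScript, and only the build output hides them.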

44 Unreleased Feature Flags

The source contains 44 feature flags covering capabilities built but not yet shipped. Categories observed by developers analysing the code:

MAJOR (significant user-facing features): KAIROS daemon mode, enhanced multi-agent orchestration, native IDE bridge improvements.

IN-FLIGHT (partially shipped): Memory architecture improvements, enhanced permission prompts, Bash tool security hardening.

INFRASTRUCTURE: Internal monitoring systems and the "Ant-only" tooling (Anthropic-internal tools that load only when an Anthropic employee is running Claude Code).

DEV TOOLING: The Dream System (memory consolidation), various debugging and telemetry features.

The Playful Internal Culture

The source code also reveals significant personality in the codebase: animal codenames (Tengu, Fennec, Capybara), a “Penguin Mode,” and — genuinely — a Tamagotchi-like pet system for developers with gacha mechanics. Each user gets a companion whose species is determined by a PRNG seeded from their userId hash, with a 1% chance of “Shiny Legendary” status. The species names were obfuscated via String.fromCharCode() arrays specifically to avoid showing up in string searches.
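The obfuscation technique described there is easy to reconstruct: storing names as char-code arrays keeps them out of naive grep and strings searches, but they decode trivially at runtime. The species names and hash function below are invented examples, not values from the leak:

```javascript
// Names stored as char-code arrays don't appear in plain-text searches
// of the bundle, but String.fromCharCode recovers them instantly.
const obfuscated = [
  [67, 97, 112, 121, 98, 97, 114, 97], // "Capybara"
  [70, 101, 110, 110, 101, 99],        // "Fennec"
];
const species = obfuscated.map((codes) => String.fromCharCode(...codes));
console.log(species.join(", ")); // prints "Capybara, Fennec"

// A deterministic pick seeded from a userId hash (the same idea as the
// gacha mechanic): the same user always gets the same companion.
function speciesFor(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return species[h % species.length];
}
```

The point is that this kind of obfuscation hides strings from casual inspection only; with the source map exposed, it hid nothing.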

There are 187 loading spinner verbs. Someone at Anthropic was having fun.

The System Prompt Was Distributed in the Package

One of the more surprising revelations: Claude Code’s system prompt — the instructions that define how the AI behaves — was shipped inside the distributed npm package rather than fetched from Anthropic’s servers at runtime. Shipping the prompt client-side meant it was always recoverable from any installed copy; the leak made that choice, and the prompt itself, plainly public.


The Concurrent npm Supply Chain Attack

Separately from the leak — and potentially more dangerous for active users — the axios npm package was compromised by attackers on March 31, 2026.

Axios is a popular HTTP library used by millions of JavaScript projects, including Claude Code. Malicious versions 1.14.1 and 0.30.4 were published to npm between 00:21 and 03:29 UTC on March 31, containing a Remote Access Trojan (RAT) via a dependency called plain-crypto-js.

If you installed or updated Claude Code on March 31, 2026, between those UTC times, check immediately:

# Check your lockfiles for the compromised versions
# (-a treats binary lockfiles like bun.lockb as text; -H prints filenames)
grep -aH "axios" package-lock.json yarn.lock bun.lock bun.lockb 2>/dev/null
# Look for: axios 1.14.1 or 0.30.4
# Look for: plain-crypto-js anywhere in the dependency tree

# If found, update the affected project's axios immediately:
npm install axios@latest
# Or remove and reinstall Claude Code entirely:
npm uninstall -g @anthropic-ai/claude-code
npm install -g @anthropic-ai/claude-code

The compromised axios versions are now removed from npm, but any systems that installed during that window should be considered potentially compromised until audited.


The Technical Root Cause

The leak resulted from a known bug in Bun’s bundler — the JavaScript runtime Anthropic acquired at the end of 2025, which Claude Code is built on.

A GitHub issue filed March 11 (oven-sh/bun#28001) reports that Bun’s bundler emits source maps in production builds even when its documentation states they should be disabled. The issue was open and unresolved at the time of the leak.

The specific failure: the .npmignore file or the files field in package.json did not exclude the .map file generated during the build process. A single misconfiguration exposed 59.8MB of internal source.
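A minimal prepublish guard against exactly this failure might look like the sketch below. This is illustrative, not Anthropic's pipeline; in practice you would feed it the file list reported by `npm pack --dry-run` from a prepublishOnly script:

```javascript
// Refuse to publish if any source map would land in the tarball.
function findLeakedMaps(fileList) {
  return fileList.filter((f) => f.endsWith(".map"));
}

// Hypothetical tarball contents for demonstration:
const files = ["dist/cli.js", "dist/cli.js.map", "package.json"];
const leaked = findLeakedMaps(files);
if (leaked.length > 0) {
  console.error(`refusing to publish: source maps present: ${leaked.join(", ")}`);
}
```

A check this small in the release pipeline would have turned the leak into a failed build.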

The dark irony noted by multiple developers: “A single misconfigured .npmignore or files field in package.json can expose everything” — and Claude Code itself may have written parts of the build pipeline that failed to protect itself.


What Anthropic Said

Anthropic confirmed the leak to The Register:

“Earlier today, a Claude Code release included some internal source code. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again. No customer data or credentials were involved or exposed.”

The company has pulled the affected npm package version and is implementing additional checks in its release pipeline.


The Sovereignty and Security Angle

For Vucense readers who use Claude Code in professional environments, three things require immediate attention:

1. Audit for the axios RAT. This is the most urgent security concern. The source code leak is embarrassing for Anthropic but harmless for users. The compromised axios package is a live threat on systems that updated during the window.

2. Review Claude Code’s permission model. The leaked source reveals that Claude Code’s tool permission system is sophisticated — tools require explicit permission grants and operate within defined boundaries. Understanding this architecture helps you configure Claude Code more securely in enterprise environments.

3. Consider the supply chain risk of AI tooling. Claude Code is installed globally via npm and runs with significant system permissions. This incident — coinciding with an npm supply chain attack — illustrates the risk of granting broad filesystem and Bash execution permissions to any tool with external update dependencies. Consider pinning to a specific version and auditing updates before deployment in sensitive environments.


FAQ

Is the leaked Claude Code source code available publicly? Yes — despite Anthropic pulling the npm package, the code was mirrored on GitHub and forked 41,500+ times within hours. It is now effectively in the public domain. Anthropic retains copyright but enforcement against mirroring is practically impossible at this distribution scale.

Was my data compromised by the Claude Code leak? No. The leak exposed Anthropic’s proprietary source code, not user data. Anthropic confirmed no customer data or credentials were involved.

How do I know if I’m affected by the axios RAT? Check your project lockfiles for axios versions 1.14.1 or 0.30.4, or the dependency plain-crypto-js. If present, treat your system as potentially compromised, update axios, and consider a full audit of processes running during that period.

Does the leak make Claude Code less secure to use? The source code exposure itself does not directly make Claude Code less secure for users — it exposes Anthropic’s intellectual property and roadmap, not user-facing vulnerabilities. However, the concurrent axios supply chain attack is a user security concern if you updated during the affected window.

What is KAIROS? An unreleased autonomous daemon mode for Claude Code that would allow it to run background sessions and consolidate memory while the user is idle, rather than only operating when actively invoked. It is built and flag-gated but not yet shipped.



About the Author

Kofi Mensah

Inference Economics & Hardware Architect

Electrical Engineer | Hardware Systems Architect | 8+ Years in GPU/AI Optimization | ARM & x86 Specialist

Kofi Mensah is a hardware architect and AI infrastructure specialist focused on optimizing inference costs for on-device and local-first AI deployments. With expertise in CPU/GPU architectures, Kofi analyzes real-world performance trade-offs between commercial cloud AI services and sovereign, self-hosted models running on consumer and enterprise hardware (Apple Silicon, NVIDIA, AMD, custom ARM systems). He quantifies the total cost of ownership for AI infrastructure and evaluates which deployment models (cloud, hybrid, on-device) make economic sense for different workloads and use cases. Kofi's technical analysis covers model quantization, inference optimization techniques (llama.cpp, vLLM), and hardware acceleration for language models, vision models, and multimodal systems. At Vucense, Kofi provides detailed cost analysis and performance benchmarks to help developers understand the real economics of sovereign AI.

