512,000 Lines of Code, Naked: How a “Minor Mistake” at Anthropic Became AI’s Biggest Drama of the Year

On March 31, a 59.8MB file did what no hacker, rival, or whistleblower had managed to do before: it tore straight through Anthropic’s carefully constructed defenses.

Chaofan Shou, an AI security researcher, was making his usual rounds on the npm package registry when he stumbled upon an unexpected gift. Anthropic’s flagship AI coding tool, Claude Code, had just released version 2.1.88. And with it, the company had accidentally packaged and publicly distributed a complete source map file.

Inside that file lay more than 512,000 lines of TypeScript source code: nearly 2,000 source files spanning over 40 tool modules. This was the code that was supposed to be obfuscated, guarded, and treated as the crown jewels of a $38 billion AI unicorn. Instead, it was sitting there, exposed to the entire world.

A “Minor Mistake” With Major Consequences

This wasn’t a sophisticated hack. There was no nation-state actor, no elaborate phishing campaign. The breach happened because someone forgot to exclude a file.

Claude Code is distributed as an npm package, the standard way JavaScript tools are shared with developers. To protect its intellectual property, Anthropic obfuscated the published JavaScript code, making it difficult to read or reverse-engineer. But when the team pushed version 2.1.88, they failed to remove the accompanying `.map` files.
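
In practice, keeping `.map` files out of a published package comes down to the `files` allowlist in package.json, an `.npmignore` entry, or a guard in the release pipeline. Below is a minimal sketch of such a guard; the script name and the `dist` directory are illustrative assumptions, not Anthropic’s actual tooling.

```ts
// check-no-sourcemaps.ts: a hypothetical prepublish guard, not Anthropic's actual tooling.
// Fails the release if any .map file would ship inside the published package.
import { readdirSync } from "node:fs";
import { join } from "node:path";

function findSourceMaps(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) hits.push(...findSourceMaps(path));
    else if (entry.name.endsWith(".map")) hits.push(path);
  }
  return hits;
}

// "dist" is an assumed build-output directory, for illustration only.
const maps = findSourceMaps("dist");
if (maps.length > 0) {
  console.error(`Refusing to publish: found ${maps.length} source map(s):`);
  for (const m of maps) console.error(`  ${m}`);
  process.exit(1);
}
```

Wired into a `prepublishOnly` script, a check like this fails the publish before the tarball ever leaves the machine.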

For those unfamiliar with web development, a source map is essentially a translation key. It takes compressed, minified, unreadable code and maps it back to the original, human-readable source code. Including a source map in a production release is the equivalent of printing the safe combination on the outside of the vault door.
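
What makes this worse is that modern bundlers usually embed the original files outright in the map’s `sourcesContent` field, so recovering the source doesn’t even require reversing the mappings; it is little more than parsing JSON. A sketch of the idea (the file name is illustrative):

```ts
// dump-sources.ts: illustrative recovery of originals embedded in a source map.
import { readFileSync } from "node:fs";

// A v3 source map is plain JSON. `sources` lists the original file names,
// and `sourcesContent`, when present, embeds each file's full text.
const map = JSON.parse(readFileSync("cli.js.map", "utf8"));
map.sources.forEach((name: string, i: number) => {
  const text: string | undefined = map.sourcesContent?.[i];
  console.log(`--- ${name} (${text?.length ?? 0} chars) ---`);
  if (text) console.log(text);
});
```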

The news spread through X at the speed of a Silicon Valley wildfire. Chaofan Shou’s post racked up millions of views within hours. Users immediately mirrored the leaked code to GitHub repositories, which accumulated thousands of forks before Anthropic could scramble to issue DMCA takedown requests. By then, the code was already scattered across the internet.

Anthropic’s official response was measured and precise. A spokesperson characterized the incident as “a human error in the packaging process for a release,” adding that “no customer data or credentials were exposed.”

Technically true. But it sidestepped a more uncomfortable question: when the soul of your product, not just its skin, ends up naked in public, can you really claim nothing of value was lost?

What the Code Revealed

The developers and researchers who actually dug into the leaked files found something far more revealing than any press release.

The discovery that set the developer community ablaze was a feature codenamed BUDDY: a digital pet designed to live inside the command line, perched beside the user’s input prompt. The code contained configurations for 18 different creatures, from a duck and a dragon to an axolotl, sorted into rarity tiers ranging from “common” to the coveted “1% legendary.” Each pet came with five dynamic attributes: Debugging, Patience, Chaos, Wisdom, and, most entertainingly, Sarcasm.
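
Based on those descriptions, the pet configuration presumably looks something like the sketch below; every name in it is inferred from the reporting, not lifted from the leaked code, and the intermediate rarity tiers are guesses.

```ts
// Hypothetical reconstruction of the reported BUDDY config shape.
// Field and type names are guesses based on descriptions of the leak.
type Rarity = "common" | "uncommon" | "rare" | "legendary"; // "legendary" reportedly a 1% roll

interface PetStats {
  debugging: number; // the five reported dynamic attributes
  patience: number;
  chaos: number;
  wisdom: number;
  sarcasm: number;
}

interface Pet {
  species: string; // e.g. "duck", "dragon", "axolotl"; 18 creatures reported in total
  rarity: Rarity;
  stats: PetStats;
}
```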

BUDDY was reportedly planned as an April Fools’ teaser, with an internal beta set for May. Now, the surprise is spoiled for everyone.

But BUDDY was the charming headline. The deeper story lived in a system codenamed KAIROS.

The leaked files described KAIROS as an “Always-On Claude,” a persistent background agent capable of executing tasks across multiple sessions without constant user prompting. It maintained long-term memory, storing project context and user preferences in a private directory. Perhaps most fascinating was a mechanism the developers had dubbed “night dreaming”: to prevent the AI’s long-term memory from growing into an unmanageable mess, KAIROS was designed to activate while the user slept, reviewing the day’s interactions, discarding redundant information, and distilling the essentials into permanent memory.
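
The reporting doesn’t include KAIROS’s actual implementation, but the loop it describes (wake while the user is away, summarize the day’s log against existing memory, persist the distilled result) might look roughly like the sketch below; every name here is hypothetical.

```ts
// Hypothetical sketch of the reported "night dreaming" consolidation loop.
// None of these names come from the leaked code; they mirror the described behavior.
interface MemoryStore {
  readDailyLog(): Promise<string[]>;   // the day's raw interactions
  readLongTerm(): Promise<string>;     // current distilled permanent memory
  writeLongTerm(summary: string): Promise<void>;
  clearDailyLog(): Promise<void>;
}

// `summarize` stands in for a model call that merges the log into prior memory,
// dropping redundant detail and keeping the essentials.
async function nightDream(
  store: MemoryStore,
  summarize: (log: string[], prior: string) => Promise<string>,
): Promise<void> {
  const [log, prior] = await Promise.all([store.readDailyLog(), store.readLongTerm()]);
  if (log.length === 0) return; // nothing new to consolidate
  const distilled = await summarize(log, prior);
  await store.writeLongTerm(distilled);
  await store.clearDailyLog();
}
```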

If BUDDY was the whimsical Easter egg, KAIROS revealed the true scale of Anthropic’s ambition. The company isn’t just building a coding assistant. It’s building toward a vision of AI that quietly, persistently works alongside its human counterpart, learning and anticipating.

The Fake Confession That Fooled Everyone

As the tech world tried to make sense of the leak, a new story began circulating. A post appeared on social media from Kevin Naughton Jr., who identified himself as an Anthropic engineer. In the post, he confessed to making the packaging mistake, claimed he had been fired, and offered a heartfelt apology.

The post was convincing. It spread rapidly, drawing sympathy and sparking conversations about whether a single junior engineer should bear the weight of such a catastrophic error.

There was only one problem: Kevin Naughton Jr. never worked at Anthropic.

He was an entrepreneur who had seized on a real news event to stage an elaborate piece of performance art or, more cynically, a marketing stunt. After the post had accumulated millions of views, he quietly added a link to his own startup’s product in the comment thread, complete with a discount code. The incident was quickly dubbed one of the most audacious marketing stunts of 2026.

A Week of Wounds

Perhaps the most troubling detail was this: the Claude Code leak wasn’t an isolated incident. It was the second major data exposure in a single week.

Days earlier, the company had inadvertently leaked information about an unreleased model codenamed “Mythos.” Thousands of internal files were exposed, including draft blog posts detailing performance benchmarks, information the company had intended to keep under wraps until a formal announcement.

Two leaks. One week. The pattern suggested something deeper than bad luck: a systemic breakdown in Anthropic’s release engineering, internal review processes, and perhaps even its culture around operational security.

The timing could not have been worse. According to reports, Anthropic has been in discussions about an initial public offering, with a potential listing expected as early as October 2026. Investors are currently evaluating the company’s engineering rigor, its ability to protect intellectual property, and its readiness for the heightened scrutiny that comes with being a public company. Two unforced errors in seven days do not inspire confidence.

The Code Is Out There

Anthropic has promised to implement additional safeguards to prevent similar incidents in the future. But in a world where DMCA takedown requests travel slower than forks and mirrors, “preventing future incidents” is more about public reassurance than actual containment.

The code is out there. It has been cloned, archived, analyzed, and internalized. What Anthropic spent years building is now, in some meaningful sense, part of the commons, a gift, however unintentional, to every AI lab, startup, and independent researcher curious enough to go looking.

For a company that built its reputation on safety, security, and doing things the right way, that’s a legacy no press release can rewrite.
