Leaked draft blog posts from Anthropic reveal the company has finished training a new AI model that sits above its existing Opus line, described internally as achieving "dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity" compared to Claude Opus 4.6. The leak, first reported by Fortune, happened because a default setting in Anthropic's content management system made uploaded files publicly accessible, exposing close to 3,000 internal documents.
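The article does not say which CMS or storage backend was involved, but the failure mode is a familiar one: a default access policy that quietly grants read access to everyone. As a purely illustrative sketch, assuming S3-style ACL grants (the function name, grant structure, and example data here are hypothetical), a pre-publish audit could flag any object whose ACL includes the anonymous AllUsers group:

```python
# The global group URI that S3-style ACLs use for anonymous ("everyone") access.
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(grants):
    """Return True if any ACL grant gives READ (or full control)
    to the anonymous AllUsers group."""
    return any(
        g.get("Grantee", {}).get("URI") == ALL_USERS_URI
        and g.get("Permission") in ("READ", "FULL_CONTROL")
        for g in grants
    )

# Hypothetical example: a default ACL that quietly includes an AllUsers READ grant.
default_grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner"}, "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS_URI}, "Permission": "READ"},
]
print(is_publicly_readable(default_grants))  # → True
```

The point of a check like this is that it runs before publication, so a permissive default gets caught by tooling rather than by a reporter.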
The documents show two name candidates for the same model: "Mythos" and "Capybara." In one version, the body text was swapped to read "Capybara" throughout, but the subtitle still said "Claude Mythos," and both versions used the same justification for the name, describing it as evoking "the deep connective tissue that links together knowledge and ideas." Anthropic told Fortune the documents were "early drafts of content that were being considered for publication."
What the drafts actually say
According to the leaked posts, reviewed independently by The Decoder, the model represents a new class: "larger and more intelligent than our Opus models, which were, until now, our most powerful." When Fortune asked for comment, an Anthropic spokesperson confirmed the company is training and testing the model. "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity," the spokesperson said. "We consider this model a step change and the most capable we've built to date."
The drafts also describe a deliberately slow release strategy. The model is reportedly "currently far ahead of any other AI model in cyber capabilities" but "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Anthropic plans to start with a small group of early-access customers focused on cybersecurity evaluation, with API access expanding gradually. The drafts acknowledge the model is "very expensive for us to serve, and will be very expensive for our customers to use," and that Anthropic is working to make it "much more efficient before any general release."
This is not just a technical disclosure. As we covered earlier this week, the decision to withhold a model over cybersecurity concerns raises real governance questions about who gets to make that call.
The financial pressure behind the curtain
The slow-release strategy sounds prudent in isolation. But it collides with a financial reality that makes patience expensive.
According to a legal filing reported by The Register, Anthropic CFO Krishna Rao disclosed that the company has raised $30 billion but generated only $5 billion in revenue, while spending $10 billion on inference and training alone. The company is reportedly planning an IPO as soon as Q4 2026.
Meanwhile, Chinese AI labs are compressing the cost curve at a pace that threatens Anthropic's pricing model. On OpenRouter's LLM Rankings, the six most popular models now all come from Chinese companies: Xiaomi's MiMo-V2-Pro, stepfun's Step 3.5 Flash, DeepSeek V3.2, and three MiniMax models. Anthropic's Claude Opus 4.6 and Sonnet 4.6 sit at positions seven and eight. According to The Register, Anthropic's market share on the platform dropped from 29.1 percent in March 2025 to 13.3 percent in March 2026.
The price gap is stark. When developer tool Kilo Code compared Claude Opus 4.6 against MiniMax M2.7, it found MiniMax delivered 90 percent of the quality for 7 percent of the cost: $0.27 versus $3.67, according to their published analysis.
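Kilo Code's headline ratio is easy to sanity-check from its own published figures: $0.27 is roughly 7.4 percent of $3.67, consistent with the rounded "7 percent" claim.

```python
# Per-task costs from Kilo Code's published comparison (USD).
minimax_cost = 0.27  # MiniMax M2.7
opus_cost = 3.67     # Claude Opus 4.6

cost_ratio = minimax_cost / opus_cost
print(f"{cost_ratio:.1%}")  # → 7.4%
```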
Anthropic has publicly accused MiniMax, Moonshot AI, and DeepSeek of distilling its Claude models. But as The Register noted, the track record of enforcing US intellectual property norms on Chinese firms is not encouraging.
The safety-cost squeeze
Here is the bind Anthropic is in: its brand is built on safety-first development. That brand won it enterprise customers and differentiated it from OpenAI. But that same posture now creates friction on multiple fronts.
Security researchers have reported growing frustration with Claude's refusal rates for legitimate vulnerability research. The Register spoke to researchers who described the model as "very, very, very heavily censored," with the CBRN (Chemical, Biological, Radiological, and Nuclear) safety filter generating excessive false positives. At least one researcher told The Register they cancelled a $200/month Max subscription over the issue.
Anthropic confirmed to The Register that it added new cyber safeguards with the Opus 4.6 release. The company's own documentation concedes these guardrails "may also block dual-use cybersecurity activities with legitimate defensive purposes."
So the most powerful model in the company's history sits behind a slow rollout, acknowledged as very expensive to serve, while the existing product line loses ground to competitors that cost a fraction of the price and face no comparable safety constraints.
What to watch
Two things will determine whether this leak is remembered as a footnote or a turning point.
First, pricing. If Mythos (or Capybara, or whatever it ships as) arrives at a premium only large enterprises can absorb, Anthropic narrows its addressable market at exactly the wrong time. The leaked drafts already flag cost as a concern, and the company's track record suggests the final price will not be cheap.
Second, timing. OpenAI is reportedly preparing its own next-generation model, codenamed "Spud," with CEO Sam Altman claiming internally it can "really accelerate the economy," according to The Decoder. Both companies are likely positioning their strongest releases ahead of planned IPOs. If OpenAI ships first at a lower price point, Anthropic's careful rollout could look less like responsible governance and more like a missed window.
The leak itself was an operational embarrassment, a misconfigured CMS setting that exposed thousands of internal assets. But the real story it surfaced is the tension between building the most capable model and building a business that can survive long enough to release it.
Paul Menon covers AI policy and safety for The Daily Vibe.