OpenAI kills Sora to feed 'Spud,' its next major model
Updated March 24, 2026 at 7:00 PM ET with details on OpenAI's next model and internal restructuring.
When OpenAI shut down Sora, the company framed it as a strategic refocus on productivity and enterprise tools. The fuller picture, now emerging from multiple sources, suggests something more deliberate: OpenAI is clearing the decks for its next major model -- internally codenamed "Spud" -- and restructuring its leadership to get there. The video generator's shutdown is context. Spud is the story.
The codename that follows a pattern
According to The Information, Sam Altman told staff that OpenAI has completed initial development of Spud. The model carries no public capabilities announcement, no release date, and no confirmed public name. What it does carry is a recognizable naming convention: the same food-codename pattern that turned "Strawberry" into o1, OpenAI's first explicitly reasoning-focused model.
The question is whether Spud represents a comparable step-change in reasoning capability, a new architecture, or something else entirely. OpenAI has not said. What Altman's announcement to staff does confirm is that the model exists and that initial development has cleared its first internal gate -- a milestone that in OpenAI's development cycle typically precedes months of evaluation, red-teaming, and staged rollout.
The compute math behind Sora's exit
Sora's shutdown didn't come as a surprise to everyone inside the company. Employees working on the video generation product had complained internally that it was "a drag on the company's computing resources," according to The Information -- a friction point that became harder to justify as OpenAI faced intensifying competition from Anthropic and Google.
Video generation is computationally expensive, and unlike spending on language and reasoning models, the expense doesn't compound into broader capability over time. Training runs that render high-fidelity video output don't make the underlying model smarter. In a resource-constrained environment where frontier model development is the explicit priority, that's a difficult allocation to defend.
The compute freed from Sora's shutdown is likely being redirected toward Spud's training and deployment infrastructure. OpenAI has not confirmed the specifics of that reallocation, but the directional logic is straightforward.
"No side quests"
The internal framing from OpenAI's head of applications, Fidji Simo, has been direct. At a March 16 all-hands meeting, she told employees: "We cannot miss this moment because we are distracted by side quests... We really have to nail productivity in general and particularly productivity on the business front."
Simo named three explicit priorities: Codex, OpenAI's coding model; winning business customers; and transforming ChatGPT into a productivity platform. She also characterized Anthropic's dominance in enterprise accounts as a "wake-up call" -- a notable concession from a company that has long positioned itself as the category leader in applied AI.
On X, Simo framed the Sora decision in organizational terms: "Companies go through phases of exploration and phases of refocus... when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions."
The Wall Street Journal also reported that OpenAI is developing a ChatGPT "desktop superapp" focused on Codex and an AI browser, extending the productivity platform thesis beyond the existing web interface.
Altman steps back from safety oversight
The more structurally significant development, also reported by The Information, involves what Sam Altman is stepping back from. According to the report, Altman is reducing his day-to-day oversight of safety and security teams, with those groups now reporting through different organizational channels. His stated focus is shifting to capital raising ahead of a likely IPO, supply chain management, and datacenter buildout -- the infrastructure layer that determines whether OpenAI can run Spud at scale when it ships.
The safety reporting change is a genuine governance shift. OpenAI has faced sustained criticism over the pace and transparency of its safety evaluations, including the departures of prominent safety researchers over the past two years. The question is whether the new reporting structure reflects a mature, institutionalized safety function that no longer requires direct CEO oversight -- or a deprioritization under competitive pressure. The answer matters more than the org chart, particularly for enterprise buyers in regulated industries who are extending API dependencies.
What OpenAI is walking away from
For context on what Sora represented: the video generation product launched publicly in September 2025 and drew immediate attention for output quality. OpenAI had positioned it as the anchor of a broader media strategy, including a three-year licensing deal with Disney covering more than 200 characters from Disney, Marvel, Pixar, and Star Wars, paired with a planned $1 billion equity investment. That deal is now dead. NBC News reported that "Disney's deal with OpenAI is not proceeding."
Disney's statement was diplomatically neutral: "As the nascent AI field advances rapidly, we respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere."
The creative industries responded to Sora with wariness from the start. In November 2025, CODA -- a Japanese trade group representing major rights holders including Studio Ghibli -- demanded that OpenAI stop using their content for Sora 2 training. The copyright pressure on AI-generated video never let up, and the litigation risk added another line item to an already difficult cost structure.
The competitor pressure driving the pivot
Simo's reference to Anthropic wasn't rhetorical filler. Anthropic has made concrete inroads with enterprise customers through Claude's document analysis, code review, and extended context capabilities -- often at the direct expense of OpenAI's enterprise book. Google's Gemini models have improved at a pace that has surprised observers, particularly in multimodal and long-context tasks.
In that context, OpenAI's concentration on Codex and ChatGPT-as-productivity-platform is a direct competitive response. Enterprise software buyers want AI that integrates into existing workflows, produces auditable outputs, and reduces engineering overhead. A capable coding assistant and a productivity superapp address that brief. Video generation does not.
Practitioner implications
For teams building on OpenAI's API or evaluating its enterprise products, several threads are worth monitoring.
The Codex investment is likely to produce meaningful improvements in code generation quality and context utilization over the next two to three quarters. Engineering teams already using models for code review, generation, and test coverage have a concrete reason to track the roadmap.
The ChatGPT productivity platform direction -- including the reported desktop superapp and AI browser -- signals OpenAI's intent to compete for the surface layer above the API, where enterprise users actually spend time.
The Altman reorg warrants a closer look for compliance-sensitive enterprise buyers. How safety evaluations are conducted, by whom, and with what accountability structure is a material question for organizations extending deep API dependencies in regulated environments.
And Spud -- whatever its final form -- is the product OpenAI is spending its scarcest resource to build. When it ships, the context in which it arrives will have been shaped by every decision described above: the compute freed from Sora, the enterprise pivot, the capital raise, the datacenter buildout. The timing and capability bar of Spud's public release will be the most direct test of whether this consolidation was the right call.
Kai Nakamura covers AI for The Daily Vibe.
This article was AI-generated.