Vucense

OpenAI Spud: The 2-Year AGI Milestone and Why Sora Was Shelved

Siddharth Rao
Tech Policy & AI Governance Attorney | JD in Technology Law & Policy | 8+ Years in AI Regulation | Published Legal Scholar
Reading time: 5 min
Published: April 2, 2026
Updated: April 2, 2026
Verified by Editorial Team

Quick Answer: OpenAI has officially teased its next flagship AI model, internally codenamed “Spud.” Described by President Greg Brockman as a “major step toward AGI,” the model represents the culmination of two years of research into advanced reasoning. In a move that shocked the industry, OpenAI has even diverted compute resources from its Sora video project to ensure Spud’s successful launch in late 2026.

The AGI Pivot: Why Spud Matters for 2026

For months, the AI community has speculated about OpenAI’s next move after GPT-4.5 and the early iterations of GPT-5. The answer is Spud. Unlike previous models, which focused on broad text generation, Spud is engineered for autonomous reasoning and deep task execution—the defining feature of Agentic AI.


Part 1: Two Years of Secret Research

According to Greg Brockman, Spud isn’t just an incremental update. It is the result of a two-year research cycle focused on one primary frustration: AI models that “don’t quite get it” and require constant human prompting.

“When you ask a question and the AI doesn’t quite get it, it’s always so disappointing… Spud is designed so you can use it for various tasks without thinking very much.” — Greg Brockman, OpenAI President

The Sora Sacrifice: Compute Priority

To power Spud’s massive pre-training phase, OpenAI has made the strategic decision to shelve Sora, its highly anticipated video generation model. Despite a billion-dollar deal with Disney, the company is prioritizing AGI over generative media, signaling that the “intelligence” race has officially overtaken the “creativity” race.


Part 2: The Roadmap to AGI and Agentic Reality

OpenAI CEO Sam Altman has consistently stated that AGI is the company’s ultimate North Star. Spud is being positioned as the bridge to that goal. While currently in its pre-training phase, the model is expected to:

  • Surpass Reasoning Benchmarks: Especially in complex coding and multi-step logic where GPT-4o plateaued.
  • Enable True Agentic Workflows: Moving beyond chatbots toward assistants that can handle entire projects autonomously.
  • Optimize Compute Efficiency: Allowing for higher intelligence with a smaller memory footprint compared to the massive “brute force” models of 2024.

Part 3: The Vucense Perspective — AGI vs. Digital Sovereignty

At Vucense, we track the progress of AGI with both excitement and caution. As OpenAI moves closer to a “black box” that can reason on its own, the question of Digital Sovereignty becomes even more critical.

  • Centralized Intelligence Risks: Spud will likely be a closed-source, cloud-based model, meaning the “brains” of the future remain under the control of a single corporation.
  • The Case for Local AGI: As models like Spud emerge, the push for Sovereign LLMs (like Llama 4 and OpenClaw) must intensify. We need local-first models that can match this level of reasoning without requiring a permanent connection to OpenAI’s servers.

Vucense Take: Spud is a technological marvel, but it represents the ultimate centralization of intelligence. If we are truly moving toward AGI, we must ensure that the path there includes open-source alternatives that respect individual autonomy.

Stay informed. Build your own stack. Stay sovereign.


About the Author

Siddharth Rao

Tech Policy & AI Governance Attorney

JD in Technology Law & Policy | 8+ Years in AI Regulation | Published Legal Scholar

Siddharth Rao is a technology attorney specializing in AI governance, data protection law, and digital sovereignty frameworks. With 8+ years advising enterprises and governments on regulatory compliance, Siddharth bridges legal requirements and technical implementation. His expertise spans the EU AI Act, GDPR, algorithmic accountability, and emerging sovereignty regulations. He has published research on responsible AI deployment and the geopolitical implications of AI infrastructure localization. At Vucense, Siddharth provides practical guidance on AI law, governance frameworks, and compliance strategies for developers building AI systems in regulated jurisdictions.
