The most important decision in AI right now isn't what to build
February 2024. Sam Altman calls Sora "a remarkable moment." The internet loses its mind. The future of video has arrived.
September 2025. The Sora app launches publicly. One million downloads in five days. The GPT-3.5 moment for video, if you believed the hype.
December 2025. Disney signs a billion-dollar deal to put 200 of its characters inside Sora. Mickey Mouse, Marvel, Star Wars, all of it.
March 2026. OpenAI announces Sora is shutting down. Disney found out less than an hour before the public did.
Twenty-four months from revolution to shutdown. There has been a lot of criticism of OpenAI for this "failure". But if you ask me (and given you're reading my blog, I'm sure you are eager to hear what I think), they should be commended as much for their ambition to experiment as for their discipline in stopping.
The maths that stopped working
According to the Wall Street Journal, Sora was burning roughly a million dollars a day. User numbers peaked at around a million and then collapsed to fewer than 500,000. Every GPU generating a ten-second video of a dancing Pikachu was a GPU not running the text and code tools that actually pay OpenAI's bills.
With an IPO reportedly coming as early as late 2026, the maths stopped working. Wall Street wants recurring revenue, not just experiments.
Meanwhile, Anthropic doesn't generate images. Doesn't do video. Just text and code. Their run-rate revenue hit $19 billion in March 2026, according to Bloomberg. Up from $9 billion three months earlier. The growth is driven almost entirely by Claude Code, their coding tool.
OpenAI tried to do everything. Text, code, images, video, hardware, search, social media. Anthropic do less, and have a more expensive product targeted at the enterprise. And the focused company is winning on the metric that actually matters: money.
I wrote recently that strategy is what you choose not to do. Sora is the case study. OpenAI killing it wasn't a failure. It was, belatedly, a strategic act. However, Anthropic never needing to kill their equivalent is an even better one.
The pattern is everywhere
This isn't just an AI story. The same logic keeps showing up across industries, and the companies that get it right tend to win.
Apple spent a decade and reportedly more than $10 billion on Project Titan, its electric car. In February 2024 they killed it. Two thousand employees were redirected to the AI and machine learning division. Investors shrugged. Some applauded. The market understood: the opportunity cost of continuing was higher than the cost of stopping.
In January 2026, Mercedes pulled its Level 3 self-driving system from the facelifted S-Class entirely. BMW followed a month later. Both companies went backwards, from fully autonomous highway driving to a system where the human stays actively engaged. And as I wrote in a blog on AI autonomy, Mercedes' CEO described the downgraded experience as better.
Going backwards was the right move. Because Level 3 self-driving had the same problem Sora had: technically impressive, commercially unworkable, and consuming resources that could be better spent elsewhere.
Steve Jobs understood this instinctively. When he returned to Apple in 1997, he killed 70% of the product line. Drew his famous two-by-two grid. Four products. Everything else was gone. The strategy wasn't what Apple decided to build. It was the dozens of things they decided to stop.
What Google gets wrong
Of course, not all stopping is strategic.
Google kills products constantly. Reader, Stadia, Inbox, the list runs into the hundreds. But Google's product shutdowns aren't exercises in strategic focus. They're a side effect of internal incentives that reward launching over maintaining. Engineers get promoted for shipping new things, not for keeping old things alive. That's not discipline. It's dysfunction wearing discipline's clothes.
The difference matters. Stopping because the economics don't work, like Sora, or because you're focusing on what matters, like Anthropic and Apple: that's strategy. Stopping because you got bored, like Google with Reader, is something else entirely.
The incentive problem
Here's the thing that makes strategic stopping so rare. The entire incentive structure of the tech industry, and honestly most industries, rewards building.
VC rewards growth. Media covers launches. LinkedIn celebrates announcements. Founders write threads about what they're building next. Nobody writes a press release about the thing they decided not to do.
At Topham Guerin we run into a version of this constantly when advising clients - whether on AI transformation or sharpening their marketing operation. The temptation is always to adopt the next thing. The harder, more valuable conversation is about which tools to drop. Which experiments didn't work. Which capabilities looked promising six months ago but aren't earning their keep.
That conversation never gets a standing ovation. But it's where the actual strategy lives.
The AI industry right now is obsessed with what to build next. The winners over the next twelve months will increasingly be defined by something less glamorous: what they choose to stop.
Enjoyed this? I write occasionally about politics, tech, and media.