Lost to the Algorithm

Why Smashing failed to fix your feed.

Hey — It’s Nico.

Welcome to another Failory edition. This issue takes 5 minutes to read.

If you only have one, here are the 5 most important things:

Let’s get into it.

This Week In Startups

🔗 Resources

Conversation Quality and Scale.

SSEBITDA – A steady-state profit metric for SaaS companies.

📰 News

Anthropic is launching a new program to study AI ‘model welfare’.

OpenAI’s new reasoning AI models hallucinate more.

EU fines Apple, Meta millions for breaching tech competition rules.

xAI’s Grok chatbot can now ‘see’ the world around it.

💸 Fundraising

HelloSky raised $5.5 million to use AI for recruiting hard-to-reach executives.

Electra raised $186 million to make iron and steel production cleaner and more sustainable.

Satellite imagery startup Albedo, which can track people from space, is raising a new round at a $285 million valuation.

Fail(St)ory

Smashing the Feed

A few days ago, Smashing, the content curation app from the Goodreads founder, announced it’s shutting down.

They wanted to fix how we discover content online: to show you what’s good, not just what’s popular. But they couldn’t make it work fast enough.

What Was Smashing:

Smashing set out to challenge how most platforms serve content. Instead of relying on algorithms that reward clicks and virality, Smashing aimed to recommend high-quality, thoughtful content, personalized to each user, but curated by the community.

Here’s how it worked:

  • Users followed topics they cared about — from tech news to niche hobbies.

  • They’d get a mix of articles, podcasts, tweets, and blog posts, not because they were trending, but because others in the community thought they were valuable.

  • Anyone could submit content and vote on what they found useful or interesting.

But it didn’t stop there:

  • Smashing used AI to summarize long reads, so you didn’t have to waste time on full articles.

  • It offered an interactive feature called Smashing AI Questions, letting users explore different angles on a story. Prompts like “Show all sides” or “Tear this apart” helped people break out of echo chambers.

  • The AI didn’t decide what was important — people did. The AI helped enhance the experience, but the community’s input drove what surfaced.

The goal? Time Well Spent. No clickbait, no viral garbage. Just content that actually made you smarter.

It was a refreshing idea. But building something that fights the algorithm-driven web is easier said than done.

The Numbers:

  • 📅 Founded in 2022.

  • 💰 Raised $3.4 million.

  • 🧑‍💻 Team of 7 people.

Reasons for Failure: 

  • Growth Was Too Slow: The team said it straight: “We simply didn’t grow fast enough to keep going.” For a consumer app like this, slow growth means no future. They didn’t have the luxury of figuring it out over years. With only 7 people and limited cash, they needed traction quickly.

  • Hard to Stand Out: The idea wasn’t unique. Other apps like Bulletin, Particle and Feeeed are all trying to help users find better content. Smashing didn’t have a standout feature or a network effect strong enough to pull people in from those alternatives. In a crowded space, it’s not enough to be good; you have to be different enough for users to switch.

  • Community Building Takes Time: Smashing relied heavily on users submitting and voting on content. That kind of engagement is hard to get, especially early on. Most people just want to consume, not curate. Building a true community requires time, momentum, and usually a much bigger team. 

  • Fighting Virality Is an Uphill Battle: Smashing’s core belief was that meaningful content beats viral junk. But the internet often rewards the opposite. Competing against platforms optimized for engagement (and built by companies with massive resources) is tough. Smashing wasn’t just building another algorithm; they were trying to change user behavior. That’s a big challenge, and with limited users, they couldn’t prove it worked.

Why It Matters: 

  • Community-driven platforms need serious time and resources before they take off.

  • Competing with viral content is hard, even with better ideas.

  • Strong features don’t always equal strong retention.

Trend

The Era of Experience

Last week, Google quietly published a paper that might just redefine where AI is headed next. It introduces a new phase they’re calling The Era of Experience — and if it plays out the way they suggest, it could leave models like ChatGPT looking outdated.

Why It Matters:

  • We’ve hit a ceiling: You can’t outlearn humans by only studying human data.

  • Google’s proposing something different: Let AI learn by experiencing, not just reading.

  • That means agents collecting their own data, learning from real interactions — not the internet’s leftovers.

According to the authors, which include AlphaGo creator David Silver, modern AI has evolved through two major eras:

Era 1: The Simulation Era

Back in the mid-2010s, AI researchers were obsessed with games. They trained models like AlphaGo and AlphaZero by having them play games like Go or Chess over and over in simulated environments. These models learned through reinforcement learning – figuring out how to win by trial and error.

This worked well for narrow tasks where rewards were clear (win the game, get the point). But these AIs weren’t much help when it came to messier, real-world problems with no clear scorecard.
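To make “trial and error with a clear scorecard” concrete, here’s a minimal sketch of tabular Q-learning — the textbook version of the reinforcement learning idea behind systems like AlphaZero, not code from those systems. The toy environment (a 5-state chain with a reward at the end) and all numbers are made up for illustration:

```python
import random

# Toy reinforcement learning: an agent learns, purely by trial and error,
# to walk right along a 5-state chain to reach a reward at the far end.
# Minimal tabular Q-learning sketch; environment and numbers are illustrative.

N_STATES = 5          # states 0..4; entering state 4 yields reward 1
ACTIONS = [-1, +1]    # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

# Q[(state, action)] = the agent's running estimate of long-term reward
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)       # move, clamped to the chain
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # clear scorecard: reach the end
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The point is the shape of the loop: no human examples anywhere — just actions, a reward signal, and an estimate that improves with experience. It works here precisely because the reward is unambiguous, which is the limitation the paragraph above describes.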

Era 2: The Human Data Era

In 2017, another Google paper changed everything: “Attention Is All You Need”. This kicked off the era we’re in now – training AI on huge datasets made by humans. Think text from websites, books, code, images. The idea was simple: the more human data an AI could learn from, the better it could understand the world.

This led to ChatGPT, Claude, Gemini, and all the other big-name AI tools we use today. But here’s the catch: models trained this way can only be as good as the data they’re given. As the paper puts it: “Agents cannot go beyond existing human knowledge.”

And that’s the problem Google is now trying to solve.

The Future: Era of Experience

Google’s new idea is to let AI generate its own data, not just rely on us.

AI agents should interact with the world, set their own goals, make mistakes, and learn from feedback. They’d generate their own training data through real-world experience — not just scrape it from websites.

What does that look like according to the authors?

  • A health AI that tracks your sleep, activity, and heart rate, then adapts your goals based on what it learns about you.

  • An education AI that uses your test results to shape how it teaches you, learning from every interaction.

  • A science AI that runs real-world experiments, adjusts its approach based on outcomes, and keeps iterating until it discovers something new.
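The adaptive pattern running through these examples can be sketched in a few lines: an agent that sets its own target and revises it from observed outcomes, rather than following a fixed human-written rule. This is purely illustrative — the health-goal scenario, update rule, and all numbers are hypothetical, not from the paper:

```python
# Hypothetical sketch of an "experience-driven" agent: it adapts a daily
# step goal from real outcomes instead of using a fixed human-set target.
# The update rule and all numbers are invented for illustration.

def adapt_goal(goal: float, achieved: float) -> float:
    """Raise the goal ~5% when it was met; when it was missed,
    ease it 10% of the way toward what the user actually managed."""
    if achieved >= goal:
        return goal * 1.05
    return goal + 0.1 * (achieved - goal)

goal = 8000.0                              # initial daily step goal
history = [8500, 9000, 6000, 7000, 9500]   # observed daily steps
for steps in history:
    goal = adapt_goal(goal, steps)
print(round(goal))
```

Trivial as it is, the feedback loop is the point: each day’s real-world outcome becomes the next day’s training signal, which is the shift from the human-data era that the paper is arguing for.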

The goal is to break free from the limits of human data. Today’s LLMs are great at giving answers based on what’s already out there. But they don’t grow, they don’t adapt over time, and they definitely don’t explore.

In the Era of Experience, that changes. These AI agents won’t just think in short bursts. They’ll learn continuously, carrying knowledge from one task to the next. They’ll remember what worked, what didn’t, and use that to improve. Think of an AI that doesn’t just help once, but gets better every time you use it.

And importantly, success won’t be measured by human approval. It will be grounded in real-world results.

In some ways, this feels like a return to the early days, when AI learned by playing games and chasing simple goals. But now, the rewards aren’t points or wins in a simulation. They’re tied to what actually happens in the world. These agents won’t just optimize for a score, they’ll adapt to complex environments, learn from real consequences, and improve over time, just like we do.

Help Me Improve Failory

How Was Today's Newsletter?

If this issue was a startup, how would you rate it?


That's all for this edition.

Cheers,

Nico