One Patient, One Drug

Why this startup built custom drugs for each patient.

Hey - It’s Nico.

Welcome to another Failory edition. This issue takes 5 minutes to read.

If you only have one minute, here are the 3 most important things:

A huge thanks to today’s sponsor, Oceans. Hire U.S.-caliber marketing talent at up to 80% less cost with their help.

The Secret to Consistent Growth: Hire Global with Oceans Talent *

What do Soapbox, Magic Spoon, and Cheers Health founders have in common? They hired experienced, vetted global talent to own their marketing function.

They hired with Oceans Talent.

Oceans Talent screens 1,500+ applicants each month and matches you with experienced, professional marketers who own their channel end to end: influencer management, paid search, paid social, lifecycle email, SEO, content, brand management or creative ops.

You get:

  • Skills plus personality vetting

  • An integration plan to support the first 100 days with defined expectations, success metrics, and recommendations

  • Ongoing support and upskilling so execution stays on track

86% of first matches are the right fit. That's why 400+ founders trust Oceans Talent marketing specialists.

This Week In Startups

🔗 Resources

The Software Shakeout: What is durable and what is not in the Age of AI?

Strategic choices: When both options are good

AI image and video as a core business skill, live * 

📰 News

Google settles with Epic Games, drops its Play Store commissions to 20%.

Claude Code rolls out a voice mode capability.

ChatGPT uninstalls surged by 295% after DoD deal.

💸 Fundraising

Startup making AI chips more power-efficient raises $500 million.

UK self-driving startup Oxa raises $103 million.

Data management startup Validio raises $30 million.

Neura Robotics raises €1 billion.

* sponsored

Fail(St)ory

Truly Personalized Care

This week, EveryONE Medicines shut down.

They were trying to build a company around custom drugs for people with ultra-rare genetic diseases. And by custom, I mean genuinely custom: in some cases, one drug for one patient.

The rough part is that they shut down just as regulators were finally starting to move in their direction.

What Was EveryONE:

Most biotech startups are built around one drug, or one group of drugs, for a defined set of patients. EveryONE was trying to do something different.

Their idea was to treat ultra-rare cases almost like a manufacturing problem. A child shows up with a mutation that is either unique or close to it. You figure out whether that mutation can be targeted. If it can, you design a drug specifically for that kid. Then you test it, make it, get it into the clinic, and somehow do all of that fast enough to matter.

That’s what made the company unusual. It wasn’t really selling a single medicine. It was trying to build the machinery for making lots of tiny custom medicines, each one built for a different patient. 

The company focused on ultra-rare pediatric neurological diseases, which is about as easy to build around as it sounds. Tiny patient populations. Very high urgency. Hard manufacturing. No settled reimbursement model. Regulators still figuring it out in real time. 

Still, EveryONE did make significant progress. In the UK they were involved in the Rare Therapies Launch Pad effort, and by late 2025 they had gotten MHRA approval for a master protocol that could cover multiple diseases and medicines under one framework. Then in January 2026, they reached the point of treating a patient in the UK with a custom drug they developed.

The Numbers:

  • 🚀 Founded: 2020

  • 👷 Team size: fewer than 10 full-time employees as of mid-2024  

  • ✅ Key milestone: MHRA-backed master protocol approved in October 2025  

  • 💥 Shutdown: March 3, 2026

Reasons for Failure: 

  • They were trying to scale something that still behaves like a one-off case. That’s the core tension. EveryONE wanted to turn custom therapies into a repeatable category, but each case still came with a lot of bespoke work. Different mutation, different patient, different clock ticking in the background. You can standardize parts of the system, sure, but this still isn’t SaaS with prettier margins.

  • Reimbursement was probably the real landmine: Making a custom therapy is hard. Getting somebody to pay for it, repeatedly, at scale, is arguably worse. EveryONE clearly knew this, which is why reimbursement showed up as such a central part of its pitch. But the whole field still seems stuck in a world where payment is negotiated case by case, often with a lot of institutional improvisation.

  • The rules were improving, but not fast enough: In late February, the FDA introduced draft guidance for a new “plausible mechanism” pathway for individualized therapies, basically an attempt to create a more workable approval route for treatments aimed at very specific genetic conditions. That mattered because the old model treated each custom therapy too much like a standard drug program, which is painfully slow for these cases. But draft guidance is still draft guidance, and for a startup burning cash, “the rules may improve soon” is not the same as having a business now.

Why It Matters: 

  • If your unit of delivery is a custom job, you’re running a services company until you prove otherwise.

  • Category-building doesn’t mean value capture. EveryONE helped push the world toward individualized therapies… and then died right before that world could pay them back.

Trend

Interpretable AI

Since GPT first rolled out, AI has been facing the same problem: the models keep getting more useful, but we still do not really know what is going on inside them.

That trade-off was easy to accept when AI was writing emails and summarizing PDFs. It gets a lot less cute once these models start writing code, handling support, reviewing contracts, or making decisions inside actual products.

Last week, Guide Labs released an 8B model that is a big step forward in making AI more interpretable. So today, I wanted to use this as an excuse to talk about AI interpretability: what it is, why it matters, and where AI seems to be heading.

Why it Matters

  • This changes what “trustworthy AI” could mean in practice. If a model is going to sit inside a real workflow, you eventually need more than “it usually works.” You need to know what pushed it toward an answer.

  • Interpretability is turning into a control layer. Anthropic’s persona vectors work shows that behaviors like hallucination and sycophancy may be tied to internal patterns you can actually monitor and influence.

  • This is starting to look like a category, not a side quest. Goodfire just raised $150 million at a $1.25 billion valuation to build around interpretability and model control. Once capital starts piling in, it usually means the market sees a real bottleneck.

The Problem

The core issue here is that today’s language models often arrive at answers through internal processes that are still mostly opaque. You get the result, but not the mechanism.

For a while, chain-of-thought made it feel like models were becoming easier to understand. Ask them how they got the answer and they give you something neat and convincing. The catch is that this explanation is not always a faithful readout of what actually happened internally. Anthropic has shown that models can produce clean-sounding reasoning while the real computation underneath is more tangled.

So the field has been stuck with an awkward reality. The models are already useful enough to deploy. They are not understandable enough to fully trust. That gap is where all this recent work sits.

The Progress

The freshest signal is Guide Labs. On February 23, it released Steerling-8B, which it describes as the first inherently interpretable 8B language model. The company says the model was trained on 1.35 trillion tokens and built around explicit concept modules, so outputs can be traced back to prompt pieces, concepts, and training origins.

Anthropic has been pushing this from another direction, and in a more sustained way. In March 2025, it published work on tracing the thoughts of a large language model, showing internal reasoning paths and concept-level traces inside Claude. In May, it open-sourced circuit tracing tools. In August, it published persona vectors, tying internal activations to behaviors like hallucination and sycophancy and showing they could be steered. 
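To make the persona-vector idea concrete, here is a toy sketch (illustrative only, not Anthropic’s actual implementation): a behavior is associated with a direction in activation space, projecting an activation onto that direction “monitors” the behavior, and subtracting the projection “steers” it away. The vector and activation values below are made up.

```python
# Toy persona-vector sketch. A behavior (e.g. sycophancy) is represented
# as a direction v in activation space; we can measure how strongly an
# activation points along v, and remove that component to suppress it.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def monitor(activation, v):
    """How strongly the activation expresses the behavior direction v."""
    return dot(activation, v) / dot(v, v) ** 0.5

def steer_away(activation, v):
    """Remove the component of the activation that lies along v."""
    scale = dot(activation, v) / dot(v, v)
    return [a - scale * vi for a, vi in zip(activation, v)]

v = [0.0, 1.0, 0.0]      # hypothetical "sycophancy" direction
act = [0.5, 2.0, -1.0]   # hypothetical activation vector

print(monitor(act, v))                 # 2.0: behavior strongly present
print(monitor(steer_away(act, v), v))  # 0.0: component removed
```

Real persona vectors live in spaces with thousands of dimensions and are found empirically, but the monitor-then-steer loop is the same shape.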

OpenAI is also exploring something called “sparse circuits.” This is not a new GPT model or product feature, but a research effort to understand how language models work internally.

Today’s models are extremely dense: information flows through thousands of interconnected pathways at once, which makes it hard to trace which parts actually produced an answer.

Sparse circuits explore whether models could be organized more cleanly, with only a small set of pathways activating for a given task. If that works, researchers could more easily map which internal circuits correspond to specific skills or concepts.
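A toy sketch of why sparsity helps (again, illustrative only, not OpenAI’s actual method): in a dense layer every input can influence every output, so everything is a suspect when you try to attribute an answer; with a sparse connectivity mask, each output depends on only a few inputs and can be traced directly.

```python
# Dense vs. sparse pathways: mask[i][j] = 1 means input j feeds output i.

def forward(x, weights, mask):
    """Compute outputs, keeping only the connections the mask allows."""
    return [
        sum(w * m * xj for w, m, xj in zip(w_row, m_row, x))
        for w_row, m_row in zip(weights, mask)
    ]

def trace(mask, output_idx):
    """Return the input indices that can influence a given output."""
    return [j for j, m in enumerate(mask[output_idx]) if m]

dense_mask  = [[1, 1, 1, 1], [1, 1, 1, 1]]   # every pathway active
sparse_mask = [[1, 0, 0, 0], [0, 0, 1, 1]]   # few pathways per output

print(trace(dense_mask, 0))   # [0, 1, 2, 3] -> everything is a suspect
print(trace(sparse_mask, 0))  # [0]          -> output 0 traces to input 0
```

In a real transformer the “mask” is learned structure across billions of weights, but the attribution payoff is the same: fewer active pathways means a shorter list of places to look.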

The Trend

This seems pretty clear now: a lot of serious players are working on interpretability.

That usually means the problem is real. And it usually means the next wave of models will not just compete on benchmarks, speed, or price. They will also compete on how well you can inspect them, steer them, and trust them inside real workflows.

That matters even more if you build in regulated or high-stakes spaces. Finance, healthcare, law, government. In those markets, black-box AI has always had a ceiling. Interpretable AI breaks through that ceiling.

Help Me Improve Failory

How useful did you find today’s newsletter?

Your feedback helps me make future issues more relevant and valuable.


That's all for today’s edition.

Cheers,

Nico