Redefining Success

There’s an expression in tech that I think is often misunderstood, and then used to justify bad behaviour: “fail fast, fail early”. The idea behind it is that if an idea is going to fail, it’s better for it to fail early, before sunk costs tempt you into throwing good effort after bad.

This sounds sensible, and encourages people to build a Pilot rather than a Season 1, which I think is generally a good thing. But I have two major concerns with this expression.

  1. It tricks people into thinking that if we want to fail quickly, then we need to finish quickly. “Let’s just try this real quick and see if it works”. This shrinks all deadlines and can lead to rushed, low-quality work even in the “success” case.
  2. It normalises failure, which isn’t actually the goal. The goal is rapid validated learning. Failure can lead to validated learning, but not all failures are created equal.

I propose instead that we redefine what success and failure mean when doing experimental development work. When we set out to prove or disprove a given hypothesis, success is getting an outcome, no matter which way it falls. We only fail if we are unable to determine the result.

If I’m trying to test a new feature, I can spike a simple version and show it to a small subset of customers. If they like it, I can say I succeeded in showing that customers like the feature. If they don’t, I can say I succeeded in showing that they don’t.
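This three-outcome framing can be sketched in code. Everything here is hypothetical and for illustration only: the `Experiment` and `Outcome` names, the response format, and the thresholds are assumptions, not a real framework.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    """Under this framing, PROVED and DISPROVED are both successes;
    only INCONCLUSIVE counts as failure."""
    PROVED = "proved"
    DISPROVED = "disproved"
    INCONCLUSIVE = "inconclusive"


@dataclass
class Experiment:
    hypothesis: str
    responses: list  # True: customer liked the spike; False: they didn't

    def outcome(self, min_responses: int = 5,
                like_threshold: float = 0.6) -> Outcome:
        # Too few data points means we couldn't determine a result:
        # the only genuine failure in this framing.
        if len(self.responses) < min_responses:
            return Outcome.INCONCLUSIVE
        liked = sum(self.responses) / len(self.responses)
        return Outcome.PROVED if liked >= like_threshold else Outcome.DISPROVED
```

For example, `Experiment("customers want the feature", [True, True, True, True, False]).outcome()` yields `Outcome.PROVED`, while the same hypothesis with only two responses yields `Outcome.INCONCLUSIVE`, regardless of how those two customers felt.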

If I start building the feature but abandon it after months of delays, I’ve done neither. Even if I build and ship the feature but never measure the reaction to it, I’ve done neither. I’ve failed to prove or disprove whether customers like the feature. Maybe I’ve proven whether or not I can build the feature, but honestly this is rarely the biggest concern in software development.

Framing success this way helps to address my two concerns with “fail fast, fail early”.

  1. Success doesn’t necessarily mean “we successfully built and shipped production-grade software”; it can just mean “we successfully proved/disproved demand for a feature”. It’s okay if the deadline for this is short; we can invest time in productionising once we’ve proven demand.
  2. We aren’t normalising failure; failure isn’t really a good thing. Disproving a hypothesis is good: we knew something was a risk or assumption, and we made a deliberate plan to validate it quickly or cheaply. We didn’t just take a wild stab at something and hope for the best.