Hypothesis Kit V4
Welcome to another iteration of the Hypothesis Kit, which has been tuned and refined over eight years now.
It’s just a nice, simple way of getting people to describe their Hypothesis (for an Experiment, Prototype, MVP or Feature Change) so that the important thinking isn’t left out :-)
Firstly, you need to understand how “The Hypothesis” is used, so reading this is a great introduction to why it’s about ‘shifting from your default’:
The Hypothesis Kit:
Secondly, many thanks to Lukas Vermeer & Molly Stevens of Booking.com. Whilst Lukas originally helped with the first iterations of the kit, they both recently presented their latest thinking, and I have shamelessly stolen some of their best ideas and incorporated them into this new iteration of the kit.
Don’t give me credit — thank everyone else who’s had to run a completely #random test idea and helped create an antidote!
Theory
One of the original strengths of the hypothesis kit is that it made you start with the word “Because”, forcing you to have an actual reason to run a test. Someone once told me, “Craig, just because a cosmic particle went through your brain and a neuron fired, it doesn’t make it a good experiment idea.” Why, oh why, are we actually running this experiment? This is the Why bit!
Based on (data/research):
The observation, insight, data or research that was the catalyst for this hypothesis
we believe that (change):
This is the specific change you are making
for (population):
The cohort, audience, group that are exposed to the experiment
will cause (impact):
The outcome you believe that (change) will cause.
Note: you don’t need to use these exact words. Try to describe it as a behavioural change or just a straight difference in something. For example, “will encourage them to”, “will make it easier for them to” and “will increase the number of them who can” are all good. The same goes for “will increase”, “will decrease”, “will change” or “will remove”. Describe what you believe will happen. The sketch below shows how the four parts read as one sentence.
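To make the structure concrete, here is a minimal sketch in Python. It is purely illustrative: the helper name and the example values are my own (loosely borrowed from the Product Detail Page story further down), not part of the kit itself.

# Purely illustrative sketch: stitch the four parts of the Theory section
# into a single hypothesis sentence. The function name and the example
# values are assumptions for demonstration, not prescribed by the kit.
def theory_statement(data: str, change: str, population: str, impact: str) -> str:
    return (
        f"Based on {data}, "
        f"we believe that {change} "
        f"for {population} "
        f"will cause {impact}."
    )

print(theory_statement(
    data="complaints from customers who bought a subscription by mistake",
    change="separating the one-off and subscription options on the Product Detail Page",
    population="visitors who reach the Product Detail Page",
    impact="fewer accidental subscription purchases and less negative feedback",
))

Reading the output back as a single sentence is a quick sanity check: if any of the four parts is missing or woolly, the sentence will not make sense.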
Validation
The evidence that would tell you whether your theory should change your default (or not), in terms of the specific metrics and/or feedback that you expect, or do not expect, to see.
Note: this should include the primary or secondary metrics you are expecting to see change. Don’t forget to include the possibility of negative impact, for example: “We will know this when we see a decrease in churn and don’t see an increase in negative customer feedback or lower survey ratings.” Remember that there may be a qualitative impact as well as a quantitative one. I have run plenty of experiments where the revenue goal was met, but at the cost of the coherence of the design and user experience.
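If it helps to keep those validation criteria structured rather than as free text, here is a small illustrative sketch. The field names and example metrics are placeholders of my own, not prescribed by the kit; it simply captures the expected movers, the guardrails and the qualitative signals mentioned above.

from dataclasses import dataclass, field

# Illustrative only: one way to record validation criteria in a structured form.
# Field names and example metrics are placeholders, not part of the kit.
@dataclass
class Validation:
    expected_changes: list[str]                                    # metrics you expect to move
    guardrails: list[str] = field(default_factory=list)           # things that must not get worse
    qualitative_signals: list[str] = field(default_factory=list)  # feedback, ratings, UX coherence

validation = Validation(
    expected_changes=["decrease in churn"],
    guardrails=["no increase in negative customer feedback", "no drop in survey ratings"],
    qualitative_signals=["design and user experience remain coherent"],
)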
Outcome
Another great idea from Booking.com: this allows you to articulate the experiment goal in terms of the impact on Customers, The Business, Partners, Suppliers and/or Staff. OKRs or specific goals are good things to tie the outcome back to the other parts of the hypothesis!
Here’s an example from a recent change that resulted in a 20% increase in sitewide conversion and a doubling of subscription revenue. Nothing changed apart from the Product Detail Page. Churn also looks healthier, now that it no longer includes a large group of customers who accidentally purchased a subscription when they wanted a one-off transaction. All those angry people who picked the wrong option felt like they had been cheated, so it helps to get rid of that too! The data looks promising, but churn is a lagging indicator, so I’ll know better in two months’ time. It all started with this:
Two examples from Booking:
Lastly, thanks to Colin McFarland, Lukas Vermeer, Rik Higham, Doug Hall, Michael Aagaard, Molly Stevens, Cassie Kozyrkov and countless others who have helped with this or influenced my thinking.