Experiment examples
Putting experimentation at the core of your startup requires an experimentation playbook: a guide to which experiments are applicable to your business and when each type of experiment should be considered, as well as the practical steps to implement them.
Now that we’ve covered the theory, we’d like to share our own experimentation playbook.
Having run thousands of experiments, and dozens of each type, we’ve built up a list of our favorites. It isn’t definitive, but these are the ones we think founders should run most often; many of them were first suggested by Eric Ries (author of The Lean Startup) or Strategyzer. These experiments include:
- Discovery Survey: This is an open-ended questionnaire used to collect information from a sample of customers. The focus here is on better understanding a problem area you are exploring, not on validating your solution just yet.
- Validation Survey: This is a more focussed questionnaire used to elicit specific opinions from possible customers. The focus here is on suggesting solutions to a problem you now understand well, to gauge how desirable they would be to the customer.
- Customer Interview: A round of interviews that can serve either problem discovery or solution validation. Insights gathered here are typically richer than those from a survey, so long as the interviews are run professionally, without bias or leading questions.
- Brochure: Before building anything, you can show customers a physical, visual explainer of your imagined product and its features or value propositions. It gives them something tangible to talk over, and is suitable when you have direct access to customers.
- Explainer Video: Similar to the brochure, this asset walks people through what you want to build (or have built), sometimes with a call-to-action at the end. It’s suitable when your customers are out of reach, but is often harder to produce than a simple pamphlet or brochure.
- Discussion Forums: A lot of unmet needs (whether general or specific to your product) can be discovered on online forums, where people speak honestly and anonymously about their experiences.
- Online Ads: You can sell something before you’ve built it, by running ads that pretend the product is ready and then monitoring the click-through rate (CTR) or cost-per-click (CPC) of the ads. This can both indicate desirability and hint at how expensive it would be to acquire customers through paid digital channels, which helps answer your questions around viability (a back-of-the-envelope sketch of this arithmetic follows this list).
- Boomerang: This is where you either use a competitor’s product to become more intimate with its strengths and flaws, or try to re-sell a competitor’s product to learn more about its customers and the barriers to adopting such a product.
- Clickable Prototype: A step up from the brochure, this is a prototype mimicking the functionality of the product (often once wireframes or designs have been completed), allowing customers to get a firm grasp of the user flows. Conducting customer walkthroughs with this kind of asset can elicit very strong evidence about how they would interact with your product.
- Email Campaign: Communicating your value proposition to a mailing list you’ve acquired (with some kind of call-to-action at the end), or asking for further action from an existing set of customers, is a great way to see how engaged your audience is with your product idea. Beyond the call-to-action, even looking at the open rates and read times of your email campaign can yield important insights around engagement.
- Search Trend Analysis: Even basic desktop research can be framed as an experiment. Discovering whether there are high-volume search terms related to your product idea (or a lack thereof) can tell you a lot about whether there is proven interest in the idea, and whether that interest sits online or in offline communities.
- A/B Test: Also known as a Split Test, this is any experiment where you compare two or more versions of a change/feature/activity over the same period of time to see which performs better. It could be two different methods of emailing prospective customers, two different ads, two variations of your landing page, and so on (a sketch of how to judge the winner statistically follows this list).
- Referral Program: To see whether the virality of your product can be influenced by an incentive, you could try different referral campaigns to get better word-of-mouth (WoM) about your offering, often through online competitions or referral codes.
- Simple Landing Page: Another form of selling before building, this is a single page explaining your value proposition, with a call-to-action at the end such as joining a waitlist, to see how many people would sign up for your product and presumably become paying customers. The concept of a waitlist can also be used by existing businesses for new features or services.
- Concierge: This is where you manually fulfil components of your product/service, simulating the customer experience you are trying to create, but using people instead of technology. While your customers will know they are not dealing with an end-to-end automated product yet, it allows you to do several trial runs of your value proposition before you’ve built anything substantial.
- Wizard of Oz: Similar to the Concierge, in this experiment you still have team members manually processing parts of the customer experience, but you try to create the impression that this is all done automatically by your system - the customer never knows there are humans ‘behind the curtain’ fulfilling their requests. This is as close as you can get to a live simulation of your minimum viable product (MVP) before actually building a technical MVP.
- Single Feature MVP: This is likely the experiment requiring the most effort, and should be last on your experimentation roadmap. Here, you actually build out your product and deliver it to customers, but the focus is on developing the one or two ‘killer features’ that make your product stand out, which have hopefully been validated to a large extent already by this stage. There can still be manual components to your MVP, but you’ve chosen to build out (and possibly automate) the core functionality. From this point onwards, you have a live and functional product in the market, and additional features you add to your MVP can be incrementally introduced via the build-measure-learn feedback loop.
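As a rough illustration of the Online Ads arithmetic mentioned above, here is a minimal Python sketch of the back-of-the-envelope sums involved. Every number in it is a hypothetical placeholder; substitute your own impressions, clicks, spend, and sign-ups.

```python
# Back-of-the-envelope metrics for a 'fake door' ad test.
# Every figure below is a hypothetical placeholder, not a benchmark.

impressions = 20_000   # times the ad was shown
clicks = 240           # clicks through to the landing page
spend = 300.00         # total ad spend, in your currency
signups = 18           # waitlist sign-ups resulting from those clicks

ctr = clicks / impressions          # click-through rate
cpc = spend / clicks                # cost per click
click_to_signup = signups / clicks  # landing-page conversion rate
estimated_cac = spend / signups     # rough cost to acquire one sign-up

print(f"CTR: {ctr:.2%}")                          # 1.20%
print(f"CPC: {cpc:.2f}")                          # 1.25
print(f"Click-to-signup: {click_to_signup:.2%}")  # 7.50%
print(f"Estimated CAC: {estimated_cac:.2f}")      # 16.67
```

The point of the exercise is simply that CPC and the click-to-signup rate together give you an early, rough estimate of customer acquisition cost, which you can hold up against what you expect to earn per customer.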
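Similarly, for the A/B Test above, the sketch below shows one common way to judge whether the gap between two variants is more than noise: a pooled two-proportion z-test. The visitor and sign-up counts are hypothetical, and the 0.05 threshold mentioned in the comments is a convention rather than a hard rule; with small samples you will usually need to let the test run longer before the result means much.

```python
import math

# Hypothetical results from showing two landing-page variants over the same period.
visitors_a, signups_a = 1_000, 52   # variant A
visitors_b, signups_b = 1_000, 71   # variant B

rate_a = signups_a / visitors_a
rate_b = signups_b / visitors_b

# Pooled two-proportion z-test: is the difference in conversion rates
# bigger than what random noise would plausibly produce?
pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

print(f"Variant A: {rate_a:.1%}, Variant B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally below 0.05) suggests the difference is real;
# otherwise, keep the test running or treat the variants as equivalent for now.
```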
Each of the above warrants an entire guide on setting up, executing, and analysing the experiment. Each has an associated cost (in both time and money) to run, and each yields a different strength of evidence.
The remarkable thing about knowing all these experiments is that you can line them up in a way that bridges the gap between where your startup stands today and the MVP you want to have in the market some time from now. You can create a sequence of experiments that, step by step, gets you to the point of a validated venture, or helps you pivot/kill the venture the moment things start looking too risky from a desirability, feasibility, or viability standpoint.
A tried-and-tested sequence of experiments to run when founding your next venture could look as follows:
- Talking to your friends and family about a problem you’ve seen in the market, and listening to their advice.
- Doing some follow-up keyword research and discussion forum analysis, to further understand the market and some specific challenges that different customer types are facing.
- Equipped with this knowledge, interviewing 3-5 possible customers to deep-dive into how the problem breaks down into specific pains that impact their lives or businesses (qualitative research).
- Taking all the factors raised in the interviews, and encoding them in a survey to get a broader, more empirical understanding of the ranking of the pains uncovered (quantitative research).
- Ideating a solution, and taking that solution back to the customers you interviewed and those who participated in the survey (if you collected contact information), possibly with the aid of a pitch deck or a brochure.
- Refining that solution based on their feedback, and coming up with mockups or wireframes that you again walk these ‘early adopters’ through to get live feedback.
- Taking screenshots of the different features in the refined wireframes, and making adverts out of them to see which messaging and which feature get the most attention via ad clicks.
- Building a landing page which showcases that ‘killer feature’, and getting ad traffic to it using the best-performing ad from the last experiment, prompting visitors to join a waitlist.
- Manually delivering the product to those early-adopters as well as some members of the waitlist, in a Wizard of Oz or Concierge format.
- Building a single feature MVP and launching it to the rest of the waitlist.
What this 10-step process demonstrates is that, by the time you get to launching your single feature MVP, you could already have an entire list of trusted early customers, many of whom were engaged during the research phase, and anything you include in that MVP will have been highly validated by the steps preceding it. All that’s left for you to do is execute on your validated business model and grow your customer base, possibly experimenting here and there to determine the next best feature or the next best channel to sell through.
At any point in the above process, if there was a strong signal that the venture didn’t have legs, you could have called it quits and saved yourself time and money down the line; and if at a certain stage the signals were positive, you could go into the next experiment with an entire scaffolding of supporting evidence to help you run that experiment well. Marvellous.
We’ve run these 10 steps before, and can attest to the power of building a business this way. It’s a vastly superior approach to building an entire product before talking to a single customer, which was my crucial mistake when I first entered the world of startups.
In the next (and final) guide I’ll walk you through 5 case studies of experiments we’ve run alongside founders, including the original test cards we used when designing the experiments.
You don’t have to make the same mistakes I did. At The Delta we’ve validated hundreds of ideas for early-stage entrepreneurs and unicorns alike. It’s what has helped us curate a venture portfolio worth more than €3.4 billion.
If you’re a founder, startup, or scaleup with a great idea for your next business, feature, or product, and are unsure how to get it off the ground today, talk to us and get a free consultation with one of our top strategists.