Go from 10 to 100 experiments per year
Note: This article is Part 2 in a series dedicated to helping you increase your online experiment velocity. Stay tuned for future instalments.
Part 1: Building the Frame
Part 2: Plumbing – How to move experiments & ideas through the pipe
Plumbing gets sh*t from point A to point B. In the world of experimentation, plumbing is all about what goes on behind the scenes to make an experiment exist. Marketers and software providers often talk about flashy results and high-velocity programs while omitting the blood, sweat, tears, and innovation that go into achieving those results. It’s not an easy task.
Behind the scenes there are real people following processes and protocols that achieve these outcomes. A good plumbing system should allow a great idea to flow from anywhere in your organization back into your bank account without getting stifled or politicized along the way. During that journey the idea will get shaped, refined, supported, designed, communicated, developed, QA’d, launched, analyzed, deployed, etc. The result that took you ten minutes to read often took tens of hours to produce.
Production can be expensive.
The costs of running an experimentation program typically fall into two major categories:
- Human Capital
- Technology & Infrastructure
Technology costs usually scale with traffic volumes and impressions, while human capital costs scale to match experiment velocity (you need bodies to design, develop, and analyze experiments). Because of this, as your output increases, human capital will become a proportionally larger part of your program costs.
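To make that scaling dynamic concrete, here is a toy cost model in Python. Every figure in it is an illustrative assumption, not a benchmark; the point is only that technology cost tracks traffic while the human share of total cost grows with velocity.

```python
# Toy cost model: human cost scales with experiment velocity,
# technology cost scales with traffic. All figures are illustrative
# assumptions, not industry benchmarks.
def annual_cost(experiments_per_year: int,
                monthly_impressions: int,
                cost_per_experiment: float = 3_000,
                cost_per_1k_impressions: float = 0.50) -> tuple[float, float]:
    human = experiments_per_year * cost_per_experiment
    tech = monthly_impressions / 1_000 * cost_per_1k_impressions * 12
    return human, tech

for n in (10, 100):
    human, tech = annual_cost(n, monthly_impressions=10_000_000)
    share = human / (human + tech)
    print(f"{n:>3} experiments/yr: human share of total cost = {share:.0%}")
# In this example the human share rises from 33% to 83% as velocity grows.
```

With these made-up numbers, technology cost is fixed by traffic while human cost grows tenfold with velocity, which is why staffing dominates the budget of a high-velocity program.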
Just as there is value in optimizing your customer experience, there is value in optimizing your process. This post is a guide to the key decision points and techniques involved in crafting an effective and efficient experimentation process (the plumbing) in your organization.
Creating an experiment is not a one-person job. There are many people filling a lot of different roles, often across different teams and departments. Each of these teams and departments likely use different technology and have different means of communicating. Do your developers work in the same systems as your marketers? Do they have the same priorities?
This challenge makes compliance with a unified system one of the most important jobs for the leader of an experimentation program, and it is the first step in laying down your effective process.
At Widerfunnel, we suggest introducing an Experimentation Guide that acts as the operating manual for the program. This 8-15 page PDF document should contain all of the operational standards that will govern experiment-related work across the organization. Creating such a document will require leaders from each team involved in testing to agree on how they will operate inside the program.
Common pages of this document should include:
This document should be produced not only for education, but also for inspiration. Some experimentation topics, such as statistics, can be intimidating for people who want to get involved with the program. If written properly, the guide should be easily understood by absolute beginners and should leave them with an understanding of the basics of how experiments are run inside the organization. When complete, the Experimentation Guide should live in a centrally accessible location and be updated periodically as the program grows. New employees should be trained on these standards, and existing team members should be expected to uphold them.
This level of detail is not required when a program is just trying to get off the ground, as your focus should be on showcasing the value of experimentation. As your program grows and begins to involve multiple people and teams with independent goals, a degree of standardization and buy-in is required. Note: it is common for different teams to want to follow their own standards. While some degree of variance is acceptable, the important thing is that results and insights are centrally formatted, stored, and circulated. If teams are resistant, the value of shared insights is usually the most compelling argument for bringing stakeholders together.
Great ideas can come from anywhere. In order to achieve scale with your program, idea generation cannot be a bottleneck. You must find a way to tap into the broader organization and intake ideas from those closest to the problem. An important distinction to make is that there is a major difference between the cultural efforts that cause people to want to submit ideas (motivation) and the process improvements that make it so people can submit ideas (ease). This article will focus on the latter. Culture is a beast for a future article.
Our ideal outcome is that it is easy for any member of the organization to submit high quality ideas.
While we want few accessibility barriers to submitting an idea, we still want to introduce some cognitive barriers to entry. What I mean is that there has to be some onus on the submitter to put adequate thought into the idea and back it up with sufficient evidence. We are trying to avoid the “throw it over the fence” trap that many organizations fall into. Without a cognitive barrier your intake will become a dumping ground for ideas like “Update the new form”, which is essentially meaningless without additional information. A high volume of low-quality submissions will create political stress on the program, as people will feel like their ideas are not being actioned. Setting the right level of resistance is important to keep a flowing list of high-quality ideas. Note that over time you may want to adjust both the cognitive and accessibility barriers: a new program will likely have fewer cognitive barriers, and a more advanced program will likely introduce more. This is where the Experimentation Guide can come in handy. If individuals buy into the idea of standardization and have easy access to definitions, you can demand higher-quality idea input from the team.
Tips for a successful intake process:
- Centrally accessible: People need to know where to submit ideas. This can be an online form or an existing ticketing system.
- Standardized: Require specific fields and force users to come to specific realizations. Fields like hypothesis and evidence should be required. An organization will only learn a finite number of technical terms outside its domain, so I recommend sticking to these two simple criteria until you are confident the organization is ready for more terms to be added to your intake process. Leading with advanced terms like minimum detectable effect (MDE) and statistical significance can be intimidating and turn away potentially great ideas. Provide documentation and instruction for your fields, and work your way up to all the fields you want.
- Have feedback loops: Intake without feedback is one of the most common mistakes I see. The worst-case scenario is that the organization feels like it is submitting ideas and never seeing them come to life. Regardless of whether an idea gets developed, people need to receive feedback. Accepting the idea? Great, tell them where it is slotting in. Need more information? That’s okay, train them to fill out the additional information. Rejecting the idea? No problem, tell the person why. Rejecting ideas can be politically tumultuous if it isn’t turned into an educational opportunity. Coach individuals on their submissions so their next idea is impactful and more likely to be accepted. If particularly influential individuals are submitting their first ideas, I recommend working directly with them to sculpt the idea into something that goes through. Their buy-in to the system will be worth more over the long term than the resources spent to run the experiment.
- Educational: Intake is your biggest education opportunity for the broader organization. It’s the time when the submitter’s motivations are aligned with yours. Guide them to think about what they are trying to achieve using your standardized framework and challenge their idea from a few angles.
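The intake tips above can be sketched as a small validation routine. This is a hypothetical illustration, not a real intake system: the required field names follow the hypothesis-and-evidence criteria recommended earlier, and the returned messages stand in for the feedback loop.

```python
# Hypothetical sketch of a standardized intake check. Field names
# (hypothesis, evidence) follow the two criteria recommended above;
# everything else here is an assumption for illustration.
REQUIRED_FIELDS = ("hypothesis", "evidence")

def validate_submission(submission: dict) -> list[str]:
    """Return feedback messages for any missing or empty required field."""
    feedback = []
    for field in REQUIRED_FIELDS:
        if not submission.get(field, "").strip():
            feedback.append(f"Please add a '{field}' before this idea can be reviewed.")
    return feedback

# A "throw it over the fence" idea fails the cognitive barrier:
idea = {"title": "Update the new form", "hypothesis": "", "evidence": ""}
for message in validate_submission(idea):
    print(message)
```

The design choice is that a submission is never silently dropped: it is either accepted into the backlog or returned with specific, coachable feedback, which is the feedback loop the tips above call for.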
It is very common for opportunity cost to be the single biggest constraint on your testing program. You can only produce a fraction of the great ideas you receive, which means you need to pick and choose the ideas that give your program the best chance of success.
There are three key criteria to consider when determining the order of your backlog. At Widerfunnel, we use the PIE Framework:
- Potential: How positive an outcome do you expect this test to have? Do you have strong evidence to support it?
- Importance: Is this idea aligned with core business questions and challenges?
- Ease: How easy will it be to produce this test? Both politically and technically.
While we primarily use PIE, a number of other prioritization frameworks can help you determine which experiments to tackle first. RICE is another popular alternative among product owners. If you decide to build your own, ensure it is as objective as possible.
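As a sketch, PIE scoring can be as simple as averaging the three criteria. The equal weights, the 1-10 scale, and the example ideas below are all assumptions for illustration; the article does not prescribe a specific formula.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    potential: int   # 1-10: expected impact, backed by evidence
    importance: int  # 1-10: alignment with core business questions
    ease: int        # 1-10: political and technical ease of production

    @property
    def pie_score(self) -> float:
        # Equal weighting is an assumption; weight to taste.
        return (self.potential + self.importance + self.ease) / 3

# Hypothetical backlog, sorted highest PIE score first:
backlog = [
    Idea("Simplify checkout form", potential=8, importance=9, ease=6),
    Idea("New hero image", potential=4, importance=3, ease=9),
    Idea("Pricing page restructure", potential=9, importance=8, ease=3),
]

for idea in sorted(backlog, key=lambda i: i.pie_score, reverse=True):
    print(f"{idea.pie_score:.1f}  {idea.name}")
```

Keeping the scoring this explicit is what makes the framework objective: two stakeholders can disagree about a score, but not about how the ranking was produced from the scores.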
One of the most effective ways to optimize a process is choosing the right software to manage the program. Software allows for efficiencies through automation that you just can’t get from generic documents and spreadsheets. To move from running 10 experiments per year to running over 100, you will need established software to manage your projects.
Your choice of software is going to depend on the level of specialization you require. You will decide between a generic project management tool and specialized experimentation software.
Generic project management solutions:
Generic project management tools have your basic needs covered. You will be able to push projects through your process and assign resources to tasks with ease. In our experience these tools are sufficient up to a point, but tend to fall apart when you try to automate experimentation-specific tasks, analyze and store robust data, or integrate with experimentation software.
For those ready to consider a more specialized solution, Widerfunnel has developed an experimentation project management software called Liftmap. Built by experimentation experts, Liftmap provides the architecture and blueprint needed to run a high-powered experimentation program.
Liftmap focuses on providing teams with the following:
- An expert process (fully customizable)
- A system to intake & prioritize test ideas
- Integrations with leading software
- A centralized place to communicate
- A place to conduct results analyses
- A fully searchable repository and tagging system
- An overview of program KPIs
- Visibility into the production roadmap
- A system for linking experiments to business objectives
After years of being exclusively used by the Widerfunnel team and our clients, Liftmap is now available to the public.
The nuts and bolts. What should the process actually look like?
While this process will vary slightly from team to team based on available resources and team structure, every experiment should go through similar stages. Most commonly we see programs use the following stages:
Within each of these stages you will need “statuses”, as many steps have to occur to complete each stage. For example, the development stage at Widerfunnel contains over 15 statuses (e.g., briefs, code review, goals). I recommend reviewing the 8 stages mentioned above with your team and considering what statuses you may need to create within them.
Stages and statuses are valuable because they provide clarity for stakeholders. You want the system detailed enough that you don’t have to answer the question “where is this test at?”, but not so detailed that an outsider can’t interpret a status.
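One way to model stages and statuses is as an ordered state machine. The stage and status names below are hypothetical placeholders, not Widerfunnel’s actual stages; the structure is the point: each stage owns an ordered list of statuses, and an experiment advances one step at a time.

```python
# Illustrative state machine for experiment stages and statuses.
# All stage/status names are hypothetical placeholders.
STAGES = {
    "Ideation":    ["Submitted", "Reviewed", "Prioritized"],
    "Design":      ["Brief", "Wireframe", "Approved"],
    "Development": ["In progress", "Code review", "QA"],
    "Live":        ["Launched", "Monitoring"],
    "Analysis":    ["Results drafted", "Insights circulated"],
}

def next_status(stage: str, status: str) -> tuple[str, str]:
    """Advance an experiment one step, rolling into the next stage when needed."""
    statuses = STAGES[stage]
    i = statuses.index(status)
    if i + 1 < len(statuses):          # next status within the same stage
        return stage, statuses[i + 1]
    stage_order = list(STAGES)         # stage complete: move to the next stage
    nxt = stage_order[stage_order.index(stage) + 1]
    return nxt, STAGES[nxt][0]
```

Because every experiment can only be in exactly one (stage, status) pair, a stakeholder can answer “where is this test at?” by reading the current pair rather than asking the team.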
If your organization can address all of the sections above, you will be well on your way to an effective plumbing system for your experimentation program. You won’t hear flashy case studies about the behind-the-scenes work, but process really is the backbone of any program. If you think you are doing well, consider that the most successful programs in the world have spent years refining their processes to the point where many are self-service and anyone in the organization can launch an experiment. There is always room for improvement!