A/B testing is not a process run by analysts alone; every experiment requires several different team members. In this post, I'm going to break down the specific steps for building an A/B testing process and who is involved in each one.
 

What Is A/B Testing?

First, for those less familiar with the topic, A/B testing is a randomized experiment that compares two or more variations of a digital experience to determine which version is more effective. Building a strong testing process is important because every business needs a reliable way to turn its data into actionable insights.
 

7 Steps to Building a Strong Testing Process

From my perspective, a testing process has 7 key steps. Each step has a main contributor and specific deliverables. Let's do a quick overview of what's involved in each one.
 
Step 1: Build a strong hypothesis.
First, let's start with what a hypothesis is in digital experimentation. A hypothesis is a prediction supported by evidence, created before the test runs. The main contributor to this step is the product/marketing management team, and the deliverable is a test idea backed by a pre-analysis. Structure-wise, a hypothesis is broken down into three parts.
 
If [variation], then [outcome] because [argument].
 
The variation is the element in the experience that will be changed to improve the performance of the digital channel. The outcome is the expected result of the updated element. And finally, the argument is the data (qualitative or quantitative) that supports the claim and makes the hypothesis stronger. Together, these three parts make up the hypothesis structure.
 
Here's a working example:

[Image: hypothesis example]

For illustration, a hypothesis in this format might read: if we move the sign-up button above the fold, then the sign-up rate will increase, because scroll-depth data shows most visitors never reach its current position.
Step 2: Craft test variations.
After the hypothesis is built, the design team steps in as the main contributor, and they deliver the mock-ups.
 
In experimentation, variations are different versions of an experience that are tested against each other to determine which performs better for a specific goal.
 
There are two basic types of variations:
[Chart: the two basic types of test variations]
 
Step 3: Architect the experiment.
After you craft the test variations, you can build the blueprint of the online experiment. This blueprint is known as the testing plan, delivered by the main contributor in this step: your analytics team. Some of the questions you may be asking at this step include the following (a rough sketch of the sample-size and duration math appears after the list):

  • What is our hypothesis?
  • What variations are we including in the test?
  • How is the test going to be set up from a technical perspective?
  • What is the sample size needed?
  • Who is the target audience?
  • What is the duration of the experiment?
  • How are we measuring success?
  • Who is part of the team?
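
Two of those questions, sample size and duration, can be estimated up front. Here's a minimal sketch using the standard two-proportion sample-size approximation; the baseline rate, minimum detectable effect, and traffic figure are invented placeholders, not numbers from a real plan:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)        # rate we hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Placeholder inputs: 4% baseline conversion rate, 10% relative lift to detect,
# and 2,000 visitors per variant per day.
n = sample_size_per_variant(baseline_rate=0.04, relative_mde=0.10)
print(f"~{n:,} visitors per variant, roughly {math.ceil(n / 2_000)} days of traffic")
```

With these placeholder inputs, the test needs roughly 40,000 visitors per variant, which is why low-traffic sites often have to run experiments for weeks.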

 
Step 4: Implement the actual variations.
For this step, the main contributor is the development team, and they deliver the variations along with their tracking. The implementation can be approached in two different ways:

[Chart: ways to implement variations]
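
Whichever route you choose, the core mechanics are assigning each user to a variant and tracking the exposure. Here's a minimal server-side sketch; the experiment ID, user ID, and the print-based tracking stub are placeholders for whatever your stack actually uses:

```python
import hashlib

VARIANTS = ["control", "variation_b"]  # hypothetical 50/50 split

def assign_variant(user_id: str, experiment_id: str) -> str:
    """Deterministically bucket a user, so returning visitors see the same experience."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def track_exposure(user_id: str, experiment_id: str, variant: str) -> None:
    """Stand-in for a real tracking call to your analytics platform."""
    print(f"exposure: experiment={experiment_id} user={user_id} variant={variant}")

variant = assign_variant("user-123", "homepage-cta-test")
track_exposure("user-123", "homepage-cta-test", variant)
```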

Once you start the test and have results coming in, it’s time for the next step in the testing process.


 
Step 5: Monitor the live results.
The main contributor for this step is your analytics team, who will deliver the results dashboards. The minimum recommended test duration can vary significantly depending on the volume of traffic, the percentage of traffic leading to success, the minimum detectable effect, and the user learning curve.
 
Keep in mind that monitoring can be done in real time once analysts set up dashboards. When the test is finalized, the results are shared with stakeholders through an internal platform.
 
Here’s a great example of what it looks like when you monitor incoming data:
 
[Image: incoming data results]
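
To make that concrete, here's a toy version of the numbers such a dashboard surfaces: visitors, conversions, and the running conversion rate per variant. The counts are invented for illustration:

```python
# Invented counts standing in for a live dashboard feed.
results = {
    "control":     {"visitors": 10_480, "conversions": 402},
    "variation_b": {"visitors": 10_512, "conversions": 451},
}

for name, r in results.items():
    rate = r["conversions"] / r["visitors"]
    print(f"{name:12s} visitors={r['visitors']:6,d} conversions={r['conversions']:4d} rate={rate:.2%}")
```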
 
Step 6: Make a decision.
Once the test stops and you have the monitored data in place, it's time to decide whether to fully release the tested variation(s). The main contributor at this stage is the product/marketing management team. When making decisions, it's important to refer back to the goals in the initial testing plan, calculate the statistical significance of the results, and quantify any potential trade-offs.
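
Here's a sketch of that significance check as a two-sided, two-proportion z-test, reusing the invented counts from the monitoring example above; in practice, use whichever statistical method your testing plan committed to:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_p_value(402, 10_480, 451, 10_512)
print(f"p-value = {p:.4f} -> {'significant' if p < 0.05 else 'not significant'} at alpha = 0.05")
```

Note that with these invented counts the lift is real in absolute terms but the p-value lands around 0.10, so the test would not clear a 0.05 threshold; this is exactly the kind of trade-off the decision stage has to weigh.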
 
Step 7: Get a better understanding of the results.
Finally, after the decision has been made, it's time to analyze the data more deeply to better understand the results. The main contributing team in this step is your analytics team. They should provide a post-test analysis, along with any recommendations.
 
You may be wondering why further analysis is needed. Whether a test wins, loses, or is inconclusive, post-test analysis helps the organization build new, more data-driven hypotheses. Usually, post-test analysis is done after the release for winning tests, before any release decision for inconclusive tests, and before iterating for losing tests.
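
One common form of post-test analysis is a segment breakdown, which often surfaces the next hypothesis. Here's a sketch with invented numbers, showing how an overall result can hide opposite effects on desktop and mobile:

```python
# Invented per-segment counts: (visitors, conversions).
segments = {
    "desktop": {"control": (6_200, 265), "variation_b": (6_150, 331)},
    "mobile":  {"control": (4_280, 137), "variation_b": (4_362, 120)},
}

for segment, data in segments.items():
    rates = {v: conv / n for v, (n, conv) in data.items()}
    lift = rates["variation_b"] / rates["control"] - 1
    print(f"{segment}: control={rates['control']:.2%} "
          f"variation={rates['variation_b']:.2%} lift={lift:+.1%}")
```

In this made-up case the variation wins on desktop but loses on mobile, which would be a natural seed for the next hypothesis to test.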
 

Summary

All of these steps may seem a little overwhelming at first for any organization to take on. Many different teams are involved, and the process takes time, but keep in mind that our goal is to simplify A/B testing. It's also important to remember that experimentation is a cycle: the learnings from each deeper analysis become the ideas that go through prioritization as the next hypotheses to test.

[Diagram: A/B testing cycle flow]
This overview is just the beginning of how we can help you simplify the process of testing your hypotheses. Stay tuned for future posts as we dive into each step more in depth.

About the author

Robert Petrescu