A Practical Guide To AB Testing In Python

Luke Clarke
9 min read · May 31, 2021

First of all, what is AB Testing?

AB testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. AB testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.

Running an AB test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app, and then collect data about the impact of that change.

In this way, continuous experimentation through AB testing is a highly effective method that companies can leverage to gain a greater understanding of their customers’ pains and needs, and then use that information to direct product development.

Testing takes the guesswork out of website optimisation and enables data-informed decisions that shift business conversations from “we think” to “we know.” By measuring the impact that changes have on your metrics, you can ensure that every change produces positive results.

Our Application

In this article we’ll go over the process of analysing an AB test, from formulating a hypothesis to testing it and, finally, interpreting the results. For our data, we’ll use a dataset from Kaggle which contains the results of an AB test on what seem to be two different designs of a website page (old_page vs. new_page).

To make it a bit more realistic, here’s a potential scenario for our study:

Let’s imagine you work on the product team at a medium-sized e-commerce business. The UX designer worked really hard on a new version of the product page, with the hope that it will lead to a higher conversion rate. The product manager (PM) told you that the current conversion rate is about 13% on average throughout the year, and that the team would be happy with an increase of 2 percentage points, meaning that the new design will be considered a success if it raises the conversion rate to 15%.

Before rolling out the change, the team would be more comfortable testing it on a small number of users to see how it performs, so you suggest running an AB test on a subset of your user base.

For anyone who is interested in following along, the dataset and notebook I used for this project are available on my GitHub here: https://github.com/lukeclarke12/personal_project_studies/tree/main/ab_testing/

Table Of Contents:

  1. Design our experiment
  2. Collecting and preparing the data
  3. Visualising the results
  4. Testing the hypothesis
  5. Drawing conclusions

1. Designing Our Experiment

Formulating a hypothesis

First things first, we want to make sure we formulate a hypothesis at the start of our project. This will make sure our interpretation of the results is correct as well as rigorous.

Given that we don’t know whether the new design will perform better than, worse than, or the same as our current design, we’ll choose a two-tailed test:

H0: p = p0

Ha: p ≠ p0

where p and p0 stand for the conversion rate of the new and old design, respectively. We’ll also set a confidence level of 95%:

α = 0.05

The alpha value (α) refers to the level of significance. This is a threshold we set, by which we say “if the probability of observing a result at least as extreme as ours (the p-value) is lower than α, then we reject the null hypothesis”. Since our α = 0.05 (indicating a 5% probability), our level of confidence (1 − α) is 95%.

Don’t worry if you are not familiar with the above; all this really means is that, whatever conversion rate we observe for our new design in our test, we want to be 95% confident it is statistically different from the conversion rate of our old design before we decide to reject the null hypothesis H0.

Choosing the variables

For our test we’ll need two groups:

A control group — They’ll be shown the old design.

A treatment (or experimental) group — They’ll be shown the new design.

This will be our Independent Variable. The reason we have two groups, even though we know the baseline conversion rate, is that we want to control for other variables that could have an effect on our results, such as seasonality. By having a control group we can directly compare its results to those of the treatment group: the only systematic difference between the groups is the design of the product page, so we can attribute any difference in results to the designs.

For our Dependent Variable (i.e. what we are trying to measure), we are interested in capturing the conversion rate. A way we can encode this is by assigning each user session a binary value:

0 — The user did not buy the product during this user session

1 — The user bought the product during this user session

This way, we can easily calculate the mean for each group to get the conversion rate of each design.
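As a quick illustration (plain NumPy, not tied to our dataset), the conversion rate is simply the mean of that 0/1 column:

```python
import numpy as np

# Five user sessions, two of which ended in a purchase
sessions = np.array([0, 1, 0, 0, 1])

conversion_rate = sessions.mean()
print(conversion_rate)  # 0.4, i.e. a 40% conversion rate
```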

Choosing a sample size

It is important to note that since we won’t test the whole user base (our population), the conversion rates that we’ll get will inevitably be only estimates of the true rates.

The number of people (or user sessions) we decide to capture in each group will have an effect on the precision of our estimated conversion rates: the larger the sample size, the more precise our estimates (i.e. the narrower our confidence intervals) and the higher the chance of detecting a difference between the two groups, if one is present.

On the other hand, the larger our sample gets, the more expensive (and impractical) our study becomes.

So how many people should we have in each group?

The sample size we need is estimated through something called Power analysis, and it depends on a few factors:

  • Power of the test (1 − β) — This represents the probability of detecting a statistically significant difference between the groups in our test when a difference is actually present. This is usually set at 0.8 by convention.
  • Alpha value (α) — The significance level we set earlier to 0.05.
  • Effect size — How big a difference we expect there to be between the conversion rates. Since our team would be happy with a lift of 2 percentage points, we can use 13% and 15% to calculate the effect size we expect.

Luckily, Python takes care of all these calculations for us:
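Here is a sketch of how that power analysis can be done with statsmodels, plugging in the 13% vs. 15% rates, α = 0.05 and power = 0.8 from above:

```python
import statsmodels.stats.api as sms
from math import ceil

# Effect size for the baseline (13%) vs. target (15%) conversion rates
effect_size = sms.proportion_effectsize(0.13, 0.15)

# Solve for the number of observations needed per group
required_n = sms.NormalIndPower().solve_power(
    effect_size,
    power=0.8,    # 1 - beta
    alpha=0.05,   # significance level
    ratio=1       # equal group sizes
)

required_n = ceil(required_n)  # round up to a whole number of observations
print(required_n)
```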

We’d need at least 4720 observations for each group.

Setting the power parameter to 0.8 means in practice that, if there is an actual difference in conversion rate between our designs of the size we estimated (13% vs. 15%), we have about an 80% chance of detecting it as statistically significant in our test with the sample size we calculated.

2. Collecting and Preparing The Data

So now that we have our required sample size, we need to collect the data. Usually at this point you would work with your team to set up the experiment, likely with the help of the Engineering team, and make sure that you collect enough data based on the sample size needed.

However, since we’ll use a dataset that we found online, in order to simulate this situation we’ll:

  1. Download the dataset
  2. Read the data into a pandas DataFrame
  3. Check and clean the data as needed
  4. Randomly sample n=4720 rows from the DataFrame for each group

Note: normally we would not need to perform step 4; this is just for the sake of the exercise.
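Assuming the Kaggle file has been downloaded as ab_data.csv (the file name here is an assumption), reading it into pandas is straightforward:

```python
import pandas as pd

# Load the Kaggle export into a DataFrame (file name assumed to be ab_data.csv)
df = pd.read_csv('ab_data.csv')

df.head()
```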

There are 294,478 rows in the DataFrame, each representing a user session, as well as 5 columns:

  • user_id — The user ID of each session
  • timestamp — Timestamp for the session
  • group — Which group the user was assigned to for that session {control, treatment}
  • landing_page — Which design each user saw on that session {old_page, new_page}
  • converted — Whether the session ended in a conversion or not (binary, 0=not converted, 1=converted)

We’ll actually only use the group and converted columns for the analysis.

Before we go ahead and sample the data to get our subset, let’s make sure there are no users that have been sampled multiple times.

There are, in fact, users that appear more than once. Since the number is pretty low, we’ll go ahead and remove them from the DataFrame to avoid sampling the same users twice.
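One way to find and drop those repeated users looks roughly like this:

```python
# Count how many sessions each user has
session_counts = df['user_id'].value_counts()

# Users that appear in more than one session
users_to_drop = session_counts[session_counts > 1].index
print(f'{len(users_to_drop)} users appear multiple times')

# Drop every session belonging to those users
df = df[~df['user_id'].isin(users_to_drop)]
```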

Sampling

Now that our DataFrame is nice and clean, we can proceed and sample n=4720 entries for each of the groups. We can use pandas’ DataFrame.sample() method to do this, which will perform Simple Random Sampling for us.

Note: I’ve set random_state=22 so that the results are reproducible if you feel like following along in your own notebook: just use random_state=22 in your function call and you should get the same sample as I did.
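A sketch of the sampling step, with ab_test as the name for the combined sample used in the snippets below:

```python
# Simple random sample of n=4720 sessions per group
control_sample = df[df['group'] == 'control'].sample(n=4720, random_state=22)
treatment_sample = df[df['group'] == 'treatment'].sample(n=4720, random_state=22)

# Stack the two samples into a single DataFrame for the analysis
ab_test = pd.concat([control_sample, treatment_sample], axis=0)
ab_test = ab_test.reset_index(drop=True)

ab_test['group'].value_counts()
```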

Great, looks like everything went as planned, and we are now ready to analyse our results.

3. Visualising The Results

The first thing we can do is to calculate some basic statistics to get an idea of what our samples look like.
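One way to compute them, sketched with plain pandas:

```python
# Conversion rate (mean of the binary column), standard deviation and sample size per group
conversion_rates = ab_test.groupby('group')['converted'].agg(['mean', 'std', 'count'])
conversion_rates.columns = ['conversion_rate', 'std_deviation', 'n']
conversion_rates
```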

Judging by the stats above, it does look like our two designs performed very similarly, with our new design performing slightly better: approx. 12.3% vs. 12.6% conversion rate.

Plotting the data will make these results easier to grasp:
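A minimal bar-plot sketch with pandas and matplotlib (any plotting library would do):

```python
import matplotlib.pyplot as plt

# The mean of the binary `converted` column is the conversion rate of each group
rates = ab_test.groupby('group')['converted'].mean()

rates.plot(kind='bar', figsize=(8, 6), rot=0)
plt.title('Conversion rate by group')
plt.xlabel('Group')
plt.ylabel('Conversion rate')
plt.ylim(0, 0.17)
plt.show()
```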

The conversion rates for our groups are indeed very close. Also note that the conversion rate of the control group is lower than what we would have expected given what we knew about our avg. conversion rate (12.3% vs. 13%). This goes to show that there is some variation in results when sampling from a population.

So… the treatment group’s value is higher. Is this difference statistically significant?

4. Testing The Hypothesis

The last step of our analysis is testing our hypothesis. Since we have a very large sample, we can use the normal approximation for calculating our p-value (i.e. z-test).

Again, Python makes all the calculations very easy. We can use the statsmodels.stats.proportion module to get the p-value and confidence intervals:
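A sketch of that calculation using proportions_ztest and proportion_confint:

```python
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

control_results = ab_test[ab_test['group'] == 'control']['converted']
treatment_results = ab_test[ab_test['group'] == 'treatment']['converted']

nobs = [control_results.count(), treatment_results.count()]
successes = [control_results.sum(), treatment_results.sum()]

# Two-sided z-test on the two proportions
z_stat, pval = proportions_ztest(successes, nobs=nobs)

# 95% confidence interval for each group's conversion rate
(lower_con, lower_treat), (upper_con, upper_treat) = proportion_confint(successes, nobs=nobs, alpha=0.05)

print(f'z statistic: {z_stat:.2f}')
print(f'p-value: {pval:.3f}')
print(f'95% CI for control group: [{lower_con:.3f}, {upper_con:.3f}]')
print(f'95% CI for treatment group: [{lower_treat:.3f}, {upper_treat:.3f}]')
```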

5. Drawing Conclusions

Since our p-value of 0.732 is way above our α = 0.05, we cannot reject the null hypothesis H0, which means that our new design did not perform significantly differently (let alone better) than our old one :(

Additionally, if we look at the confidence interval for the treatment group ([0.116, 0.135], i.e. 11.6–13.5%) we notice that:

  1. It includes our baseline value of 13% conversion rate
  2. It does not include our target value of 15% (the 2 percentage point uplift we were aiming for)

What this means is that the true conversion rate of the new design is more likely to be similar to our baseline than to the 15% target we had hoped for. This is further evidence that our new design is unlikely to be an improvement on our old design, and that, unfortunately, we are back to the drawing board!
