A/B Split Testing and Identifying Target Segments for Marketing Strategies

The Challenge

Many businesses utilise online marketing strategies to bring more visitors to their websites, for example via e-mailshots, banner advertisements, social media channels and Search Engine Optimisation (SEO) techniques. However, increased traffic alone does not make a website successful. The design of the overall site, as well as of individual pages, also affects performance and can be optimised to help convert visitors into customers and sales.

In order to make informed decisions going forward, companies need a way to assess the effectiveness of their marketing campaigns, whilst minimising the cost, time and disruption to their existing and potential customers. Suppose, for example, that a company has developed a new design for one of their webpages, perhaps changing the styling and layout, or the images and text displayed. The crucial question is whether the new scheme is adding value, i.e., does the updated design increase visitor conversion to sales?

The Approach

By directing some traffic to one version of the webpage with the new design scheme (active group) and some traffic to another version with the original design (control group), we can record and then compare the conversion rates between the two groups using a statistical test. This is often referred to as A/B or split testing.
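One simple way to implement such a split, sketched below on the assumption that each visitor carries a stable identifier (the function name and the 50/50 default share are illustrative, not details from this case study), is to hash that identifier so each visitor is consistently shown the same version:

```python
# A minimal sketch of deterministic traffic splitting: hash a stable
# visitor identifier so that the same visitor always sees the same
# version of the page. The function name and the 50/50 default split
# are illustrative assumptions, not details from this case study.
import hashlib

def assign_group(visitor_id: str, active_share: float = 0.5) -> str:
    """Assign a visitor to the 'active' or 'control' group."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # uniform in [0, 1)
    return "active" if bucket < active_share else "control"

print(assign_group("visitor-42"))
```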

By directing visitors to the two pages simultaneously, we control for external influences such as time of day or seasonal effects, so that any difference between the conversion rates can be attributed to the new design scheme alone. For example, see the results given in the table below.

                   Active Group   Control Group
Visitors              1,523,000         229,000
Customers                74,627           8,702
Conversion Rate            4.9%            3.8%

Statistical tests reveal whether the data are consistent with the two conversion rates being the same, i.e., whether the proportion of visitors that placed an order in the active group is the same as the proportion of visitors that placed an order in the control group. The answer depends on the size of the observed difference in the rates (if any) and on the sample size, i.e., the number of visitors tested. In the above example, there is compelling evidence of a difference in uptake between active and control visitors. Although conversion rates of 3.8% and 4.9% may seem rather similar, with such large sample sizes, i.e., hundreds of thousands of visitors, statistical methods can establish that this difference is very unlikely to have occurred by chance.
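As an illustration, the figures in the table above can be compared with a two-proportion z-test; the sketch below uses Python's statsmodels library, one of several suitable tools:

```python
# Two-proportion z-test on the figures from the table above,
# using the statsmodels library (one of several suitable tools).
from statsmodels.stats.proportion import proportions_ztest

conversions = [74_627, 8_702]       # orders:   active, control
visitors    = [1_523_000, 229_000]  # visitors: active, control

# Null hypothesis: both groups share the same underlying conversion rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

print(f"z statistic: {z_stat:.1f}")
print(f"p-value:     {p_value:.2e}")  # a tiny p-value: this difference is
                                      # very unlikely to be chance alone
```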

Statistical calculations can also be used before performing such an experiment, to determine the number of visitors needed before making a decision, i.e., to ensure sufficient power to detect a difference if there is one. This minimises the time for which a design needs to be tested and the number of visitors directed to a potentially less profitable page, whilst still collecting sufficient evidence to make an informed decision. More sophisticated techniques can also be used to help achieve the optimal number and allocation of visitors to the active and control groups. For example, Google Analytics content experiments use a variety of weighting and stopping rules to help keep experiments as short and as cost-efficient as possible.
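A standard power calculation of this kind might look as follows. This is a sketch using statsmodels, taking the rates from the table above as the baseline and the smallest uplift worth detecting; in practice these inputs are chosen by the business before the experiment:

```python
# Sketch of a pre-experiment sample-size calculation with statsmodels.
# Question: how many visitors per group are needed to detect an uplift
# from a 3.8% baseline to 4.9%, with 80% power at the 5% significance
# level? The rates here are taken from the example above; in practice
# the baseline and the smallest worthwhile uplift are business inputs.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = proportion_effectsize(0.049, 0.038)  # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # significance level (false-positive rate)
    power=0.80,   # chance of detecting the uplift if it is real
    ratio=1.0,    # equally sized active and control groups
)
print(f"Visitors needed per group: {n_per_group:,.0f}")
```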

More detailed studies can also be made of sub-populations of the visitors. Different subgroups, such as those with different demographics, geographical locations and psychographics (e.g., lifestyle preferences), may prefer different designs and could respond to different marketing strategies. By recording and including these sorts of customer details in a statistical model of the conversion rates, we can determine not only their individual effects on conversion but also the effect of the design scheme on the conversion rate for each customer segment. This would enable targeting of those visitors most likely to become customers, and tailoring of the pages displayed to different customers.
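A sketch of such a model is given below, assuming visit-level data with hypothetical column and file names ('converted', 'design', 'region', "visits.csv"); a logistic regression with an interaction term estimates a separate design effect for each segment:

```python
# Sketch of a segment-aware conversion model (logistic regression via
# statsmodels). All column and file names here are hypothetical: each
# row is one visit, 'converted' is 0/1, 'design' is the page version
# shown, and 'region' stands in for a recorded customer attribute.
import pandas as pd
import statsmodels.formula.api as smf

visits = pd.read_csv("visits.csv")  # hypothetical visit-level log

# 'design * region' fits main effects for design and region plus their
# interaction, i.e., a separate design effect within each region.
model = smf.logit("converted ~ design * region", data=visits).fit()
print(model.summary())
```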

In fact, not taking these factors into account can give rise to quite misleading results. For example, a difference in conversion rates could be due to an imbalance in the demographics between the visitors in the control and active groups, rather than the new design itself. See our recent blog post on Simpson’s paradox for further discussion on the importance of understanding the context of your data.
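The illustration below uses made-up numbers to show the effect: the new design performs better within every segment, yet appears worse on the aggregated figures because the two groups contain different mixes of visitors:

```python
# Made-up numbers illustrating Simpson's paradox in an A/B test:
# design B beats design A within both segments, but because B's
# traffic is dominated by the low-converting mobile segment, it
# looks worse when the segments are aggregated.
import pandas as pd

data = pd.DataFrame({
    "design":   ["A", "A", "B", "B"],
    "segment":  ["mobile", "desktop", "mobile", "desktop"],
    "visitors": [1_000, 9_000, 9_000, 1_000],
    "orders":   [22, 558, 225, 70],
})

data["rate"] = data["orders"] / data["visitors"]
print(data)  # B wins within each segment (2.5% > 2.2%, 7.0% > 6.2%)

totals = data.groupby("design")[["orders", "visitors"]].sum()
print(totals["orders"] / totals["visitors"])  # yet A wins overall
```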

The Value

A/B split testing enables us to assess the effectiveness of online marketing campaigns and to make evidence-based business decisions in order to increase visitor conversions. When designing a test, sample size calculations can be used to help minimise the testing time and the number of people exposed to potentially less effective sites or campaigns. Furthermore, by taking customer attributes into account, statistical modelling can provide valuable information about which customers to target going forwards and how best to tailor the site to their preferences.

Services