Effective A/B Split Testing


Introduction

Email is one of the most powerful tools you have when communicating with prospective customers. The problem is, it’s very difficult to know how the design of your email might impact customer behaviour. Will a larger “Buy Now” button increase conversions or should you try it in a different colour? How does the subject of the email influence open rates? A well-designed email campaign can have a significant impact on your bottom line, so how do you know that you’re using the best possible design? You test!

What Is A/B Split Testing?

An A/B test is much like a scientific experiment: it is randomised, and it uses a control and a treatment to collect statistics about customer behaviour. Let’s look at that statement in greater detail before we go any further.

The control is one version of an email. It uses your company’s normal design; nothing is changed from what you’d send your contacts if this wasn’t a test. We’ll call this version ‘A’. The treatment is identical to the control except for one element that you change, such as the subject, the font, or the call-to-action. (We’ll go into more detail on the kinds of things you can change later on.) We’ll call this version ‘B’.

When you run the test, both emails are sent to your list, but the list is randomly split so that half of your contacts receive version A and the other half receive version B. For example, I might want to send an email to 2,000 subscribers to try to generate sales through my e-commerce website. I design an email (the control) and then create another email with one element changed – the call-to-action (CTA).

  • The control goes to 1,000 people, with a CTA that says “Offer ends October 31! Use code A1.”
  • The treatment goes to 1,000 people, with a CTA that says “Offer ends soon! Use code B1.”

I can then monitor which email has a higher success rate (in this instance, the one that generates more sales) by tracking the use of the two promotional codes. The more effective CTA will then be used in all future sales emails.
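
Everlytic handles this split for you, but the mechanics are simple enough to sketch in a few lines. Here is a minimal Python example of a randomised 50/50 split, with placeholder addresses standing in for a real subscriber list:

    import random

    # Placeholder list standing in for real subscriber data.
    subscribers = [f"contact{i}@example.com" for i in range(2000)]

    random.shuffle(subscribers)           # randomise the order so the split is unbiased
    midpoint = len(subscribers) // 2
    group_a = subscribers[:midpoint]      # receives the control (code A1)
    group_b = subscribers[midpoint:]      # receives the treatment (code B1)

    print(len(group_a), len(group_b))     # 1000 1000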

What Kinds of Things Can You Test?

The most important part of an A/B test is that it has a defined, measurable outcome. You need to know exactly what you are testing. Here are a few ideas for things you can test:

  • Email subject
    • Personalisation vs. no personalisation
    • Shorter vs. longer subject lines
    • Brand name in subject line
  • Call-to-Action
    • Deadlines to increase urgency
  • Email body text
    • Special offers
    • Different product features or benefits
    • Different tone: emotional, intriguing, research-based etc.
  • Graphics
    • Larger vs. smaller graphics
    • Photographs vs. animations
  • Colour
  • Email layout
    • One vs. two columns

Split Testing Protocol

Testing can only produce valuable results when you test repeatedly. A once-off test might provide some interesting results, but you need to run multiple tests to make sure that you are getting real insight into customer behaviour. Each test should suggest new avenues to explore, and stimulate subsequent rounds of testing. The following steps outline a testing protocol that should help you to get the most out of your A/B testing:

  1. Identify your control email: Your control email is the current layout, before you have done any optimisation. When a treatment email performs better than the control, you will make that the new control email and test against it in the future.
  2. Establish the goals and parameters for your tests: Decide what a successful test looks like (more sales, more sign-ups, a higher click rate) and which metrics you will use to measure it. Clear parameters make it possible to evaluate the success of each test objectively.
  3. Determine a sufficient test volume: Identify the number of emails you need to send to generate statistically significant results. This number will vary from business to business. You should be able to confidently declare a winner at the end of your test.
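
This number is usually worked out with a power calculation. As a rough illustration, here is a Python sketch of the standard two-proportion sample-size formula; it assumes SciPy is installed, and the 2% and 3% conversion rates are made-up figures:

    from math import ceil
    from scipy.stats import norm

    def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
        """Contacts needed in each group to detect a change from rate p1 to p2."""
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
        z_power = norm.ppf(power)           # desired statistical power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

    # e.g. to detect a lift from a 2% to a 3% conversion rate:
    print(sample_size_per_group(0.02, 0.03))  # -> 3823 contacts per group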

Evaluating the Results

An A/B test campaign can only be successful if you have clearly defined a measurable outcome before you start testing. Some examples of measurable outcomes are:

  • Number of sales made
  • Click-through rate
  • Number of people signing up for a newsletter

As we mentioned in the previous section, it is important that you set an appropriate volume for your test. If you don’t send the test to enough people, you may not achieve statistical confidence. Statistical confidence determines whether or not the results of your test are significant; if your sample is too small, you might give too much weight to actions that don’t reflect your whole audience. There are online calculators to help you work out whether or not your results are statistically significant.

You may also find that the results of your A/B split tests are unintuitive. You’re testing for the variation that makes people click on your CTA, and this could turn out to be a bright yellow button on a blue background. It may not be pretty, but you’re not testing for aesthetics; you’re testing for conversions. Don’t dismiss the results because they contradict your expectations.
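
If you want to sanity-check a calculator’s answer, a two-proportion z-test is one common way to compute significance. The following Python sketch uses made-up results (30 sales from version A versus 52 from version B, with 1,000 sends each) and assumes SciPy is installed:

    from scipy.stats import norm

    def ab_p_value(conv_a, sent_a, conv_b, sent_b):
        """Two-sided p-value for the difference between two conversion rates."""
        rate_a, rate_b = conv_a / sent_a, conv_b / sent_b
        pooled = (conv_a + conv_b) / (sent_a + sent_b)
        std_err = (pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b)) ** 0.5
        z = (rate_a - rate_b) / std_err
        return 2 * norm.sf(abs(z))        # chance of a difference this large by luck

    # e.g. 30 sales from version A and 52 from version B, 1,000 sends each:
    p = ab_p_value(30, 1000, 52, 1000)
    print(p, p < 0.05)                    # p ≈ 0.013, significant at the 95% level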

Tips

Running an A/B split test is a complex activity; these tips should help you keep your ducks in a row:

  • Make your test consistent across the whole email. If you change the colour of a sign-up button at the top of the newsletter, make sure it is the same colour everywhere else it appears in the email.
  • Do repeated A/B tests. A single test can only tell you about one aspect of your email, and you can only get one of three results: positive, negative, or no result. Repeated testing over time will give you a much stronger picture of your audience.
  • Run the two versions of your test simultaneously. You can’t send one version of the email today and the other version tomorrow because you can’t account for other variables that might change between today and tomorrow.

  • Make sure that each person is always offered the same promotion. If you offer me 15% off today and 10% off tomorrow, I may get upset, especially if I was expecting 15% off.
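
If you ever manage a split yourself rather than through a tool, one way to guarantee that each contact always sees the same version is to assign groups deterministically, for example by hashing the email address. The sketch below is purely illustrative; the salt value and function are hypothetical:

    import hashlib

    def assign_group(email, salt="summer-sale-test"):
        """Return 'A' or 'B' consistently for the same address within one test."""
        # Hashing the salted address gives a stable, effectively random bucket,
        # so a contact lands in the same group on every send of this test.
        digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    print(assign_group("jane@example.com"))  # same output on every run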

A/B Testing in Everlytic

Everlytic has designed a dedicated A/B Split Testing tool to help you to create beautiful, effective testing campaigns. First things first, create a list of contacts who will receive the test emails. Give this list an obvious name so that it is easy to find in the list selection step. Once you’ve done that, click on the campaign icon (the paper aeroplane) in the left navigation, and click Create Campaign. This will take you to the campaign type selection screen. Click Select on the A/B Split Testing card.

Screenshot of Campaign Type Page

Campaign Properties

In the next step, fill in the following properties for your campaign:

  • Campaign Name
  • From Name (the name your contacts will see in their inboxes)
  • From Email (the email address your contacts will see)

All of these fields are compulsory. You can insert personalisation tags in any of these fields: just click the Personalise button that appears when you hover over the field, and choose the personalisation field you want to include from the pop-up. Click Continue to move to the next step.

List Selection

Search for the list you set up for this test and check its checkbox. You can segment the list with new or existing filters. When you are ready to move on, click Continue.

Campaign Settings

Here you can create the individual emails to use in your campaign. Click on Compose under the card for each email in the split test and follow our normal email composition steps.

Screenshot of List Split Page

Once you’ve composed your two emails, click Continue.

Campaign Confirmation

The final page in the split test campaign creation is the Campaign Confirmation page. Here you can review all the settings you’ve chosen for this campaign to make sure you’ve got everything just right. Once you are happy, click Send to get the campaign on the road.
