
Split-run test: some practical advice

Written by Michael Heering and Jeffrey Bertoen

Split-run tests, also known as A/B tests, are an effective way to optimize email marketing. They help you learn more about your target groups, increase ROI and send more effective emailings. As long as your open or click rate hasn't hit the limit of 100%, you can improve every new campaign with a larger or smaller split-run test.

List size

A split-run test involves nothing more than creating two or more versions of an emailing that differ in (preferably) one variable. Send the test versions to different parts of your mailing list to see what works best. Do you have a large contact database of more than 5,000 contacts? Create test groups from a percentage of the database and send the most successful version to the rest of your database.

If your database is too small for that, send each version of the emailing to half of your database and apply what you learn to the next emailing. Alternatively, run the same type of test on all emailings for a month and analyze the combined results. Frequent testing makes a test more reliable, even when sending to a smaller number of addressees and when using more variations.
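The split described above is easy to automate. The sketch below is a hypothetical illustration (the function name, 10% test fraction and example addresses are assumptions, not something prescribed by a particular email tool): it randomly divides a contact list into equally sized test groups, holding the remainder back for the winning version.

```python
import random

def make_test_groups(contacts, test_fraction=0.10, n_versions=2):
    """Randomly split a contact list into equally sized test groups,
    plus a remainder that receives the winning version afterwards."""
    shuffled = contacts[:]            # copy, so the original list stays intact
    random.shuffle(shuffled)
    group_size = int(len(shuffled) * test_fraction)
    groups = [shuffled[i * group_size:(i + 1) * group_size]
              for i in range(n_versions)]
    remainder = shuffled[n_versions * group_size:]
    return groups, remainder

# Example: a database of 5,000 contacts, two versions tested on 10% each
contacts = [f"user{i}@example.com" for i in range(5000)]
groups, remainder = make_test_groups(contacts)
# Each test group holds 500 addresses; 4,000 remain for the winning version.
```

For a small database, calling it with `test_fraction=0.5` and `n_versions=2` sends each version to half of the list, as described above.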

Goals of split-run tests

Testing is only useful when you set a goal. How else will you determine what type of test will lead to results? Think about what you would like to optimize:

  • More people opening your emails
  • More clicks on hyperlinks
  • More clicks on one specific hyperlink
  • More sharing of content through social media
  • Fewer unsubscribes
  • Etc.

Your experience and instinct as a marketer should help you determine which split-run test will be the most effective. Do you suspect your subject lines are too boring, your newsletter is too long, your use of images is excessive or your timing is all wrong? Put that feeling to the test and see what the results tell you.

Tests

As mentioned previously, different goals lead to different tests. Perform one single test per campaign to find out which factor has the most impact on your goals and target group. Some examples:

  • To improve the open rate: run a split-run test with different From headers. Send one version of the emailing from an address belonging to a person within your company, and the other from a general company address.
  • To improve the number of clicks / the click-through rate: run a split-run test on different calls to action. You can vary the CTA in many ways, for example by using different buttons or different text within the CTA.
  • To reduce unsubscribes: run tests in which you increase or decrease the sending frequency, or try changing the tone of voice in your emailings.

Winning factor

Decide beforehand which test result matters most, so you can determine which version is the 'winner' of the split-run. The winning factor can be simple, such as the most clicks in the email, or a combination of results. Say you want more opens but consider clicks more important: set up a formula in which a click counts for twice the value of an open, and the version with the highest score wins. Good software makes it easy to apply these kinds of conditions.
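The clicks-count-double formula above can be sketched in a few lines. The weights and the example numbers here are illustrative assumptions, not figures from the article:

```python
def weighted_score(opens, clicks, open_weight=1, click_weight=2):
    """Combined winning factor: a click counts twice as much as an open."""
    return open_weight * opens + click_weight * clicks

# Hypothetical results for two versions of an emailing:
version_a = weighted_score(opens=400, clicks=50)   # 400*1 + 50*2 = 500
version_b = weighted_score(opens=350, clicks=90)   # 350*1 + 90*2 = 530
winner = "A" if version_a > version_b else "B"     # version B wins
```

Note that version A has more opens, yet version B wins because clicks carry double weight; that is exactly the kind of trade-off the formula is meant to settle.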

Possible winning factors:

  • Total number of opens
  • Total number of clicks
  • Number of clicks on a specific link
  • Conversion (measured outside the email, for example website conversions or sales)
  • Fewest abuse reports
  • Fewest unsubscribes

Results of the split-run test

By varying only one variable per test, you can find out exactly what a target group prefers. Note: don't just look at what generates better results, but also ask yourself why it generates better results. That way you can improve your emailings structurally.

Sometimes we perform split-run tests too hastily: the statistics one week or one month after the test may point to a different winner. That is why it is important to allow a reasonable testing time. Keep checking your strategy.
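One way to judge whether a difference between two versions is real, rather than chance that may flip a week later, is a simple significance check. The article does not prescribe this; the following is a minimal stdlib-only sketch of a two-proportion z-test on open rates, using a normal approximation:

```python
from math import sqrt, erf

def open_rate_p_value(opens_a, sent_a, opens_b, sent_b):
    """Two-sided p-value for the difference between two open rates
    (two-proportion z-test, normal approximation). Values below ~0.05
    suggest the difference is unlikely to be chance."""
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical: version A opened 150/500 times, version B 100/500 times
p = open_rate_p_value(150, 500, 100, 500)
```

If the p-value is still high after the test group has been mailed, that is a signal to test longer or on a larger group before declaring a winner.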

Test results can also be a good reason to redefine target groups. Why not look at the results on an individual level? You might discover that a certain group of recipients responds differently to an emailing. Do they share a common factor? Add it to your database and use it in an upcoming emailing.

With every new campaign you can take the optimization of your emailings a step further. Keep checking what is most profitable and which improvement matters most for reaching your marketing goals.