Email A/B testing

The journey to higher open rates sometimes requires trial and error. To get the best possible click rate, A/B testing needs to be part of your email marketing efforts.

What is email A/B testing?

Email A/B testing (or split testing) is a widely used, data-driven strategy for optimizing email campaigns. It involves creating message variants by changing one element at a time, such as your landing page, CTA, email content, visuals, or subject line. While it doesn’t guarantee higher open or conversion rates, marketers who test regularly are far more likely to reach their best possible metrics.

In fact, A/B testing is how marketers discovered the email marketing best practices we use today. If you run split tests regularly and optimize your email marketing strategy with the data, you may find yourself with better and more reliable metrics.

What is A/B testing in marketing?

A/B testing optimization is simple: you send different versions of an email message to separate segments of your audience, gather data, and find out which one performs better. Then you take the winning variant and test it again, and you keep split testing until you start seeing diminishing returns (for example, when the difference between variants drops below 3%).
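
For a rough sense of how that loop works, here is a minimal Python sketch, assuming you track sends and opens per variant. The 3% cutoff mirrors the diminishing-returns threshold mentioned above, and the function names are purely illustrative:

```python
def open_rate(opens: int, sends: int) -> float:
    """Open rate as a fraction of delivered emails."""
    return opens / sends if sends else 0.0

def pick_winner(a_opens, a_sends, b_opens, b_sends, min_diff=0.03):
    """Compare two variants and decide whether to keep testing.

    Returns (winner, keep_testing); keep_testing turns False once the gap
    between variants falls below min_diff (diminishing returns).
    """
    rate_a = open_rate(a_opens, a_sends)
    rate_b = open_rate(b_opens, b_sends)
    winner = "A" if rate_a >= rate_b else "B"
    keep_testing = abs(rate_a - rate_b) >= min_diff
    return winner, keep_testing

# Example: variant B wins 24% vs 19%; the 5-point gap says keep iterating.
print(pick_winner(a_opens=190, a_sends=1000, b_opens=240, b_sends=1000))
```

In practice your email platform reports these numbers for you; the point is simply that you declare a winner, then feed it into the next round of testing.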

The strategy behind A/B testing is extrapolation. If 1000 people respond a certain way, then you can expect a similar response from 100,000 people. The first step when testing email marketing campaigns is choosing your email list sample size:

  • If you have a small email list (under 1,000 contacts), send variant A to 50% of the list and variant B to the other 50%, then ultimately select the content that produces higher open rates or better overall metrics
  • If you have a large email list (10K+), your priority is getting enough data from the smallest possible sample. That might be 20% to 40%, depending on the total size. Once the results are in, you can send the winning version to the rest of the list (a rough sketch of this split logic follows below)
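
To make those guidelines concrete, here is a small Python sketch of the split logic. The 1,000-contact threshold and the 20% to 40% sampling range come straight from the list above, while the function name and default percentage are assumptions for illustration:

```python
def test_group_sizes(list_size: int, sample_pct: float = 0.3) -> dict:
    """Suggest how many contacts to put in each test variant.

    Small lists (under 1,000) are split 50/50; large lists (10K+) test on a
    20-40% sample (sample_pct) and send the winner to everyone else.
    Mid-size lists fall somewhere in between, so adjust sample_pct to taste.
    """
    pct = 0.5 if list_size < 1000 else sample_pct
    sample = int(list_size * pct)
    half = sample // 2
    return {"variant_a": half, "variant_b": half, "rest_of_list": list_size - 2 * half}

print(test_group_sizes(800))      # small list: roughly 400 / 400, nobody left out
print(test_group_sizes(50_000))   # large list: 7,500 per variant, 35,000 get the winner
```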

Note: While it’s possible to test more than two versions at once, it’s not recommended, and the same goes for testing too many variables. It’s better to run a dozen simple tests than to test everything at the same time, because you can more easily pinpoint why version A or version B performed better.

After selecting your sample size, identify the email message you want to test and the elements you’ll change. Most email automation platforms let you design A/B testing campaigns directly. If you don’t know where to start, try the subject line or the call-to-action button.

Once you start A/B testing, wait one to two weeks to gather enough data from your test results to optimize your campaigns.
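
If you want a quick sanity check at the end of that window, one common option is a standard two-proportion z-test on open rates; this isn’t tied to any particular email platform, the sketch below uses only Python’s standard library, and the example figures are made up:

```python
from math import erf, sqrt

def significance_of_difference(a_opens, a_sends, b_opens, b_sends):
    """Two-proportion z-test on open rates: is the A/B gap likely real?

    Returns a two-sided p-value; small values (e.g. below 0.05) suggest the
    winning variant's lead is probably not just noise.
    """
    p_a = a_opens / a_sends
    p_b = b_opens / b_sends
    pooled = (a_opens + b_opens) / (a_sends + b_sends)
    se = sqrt(pooled * (1 - pooled) * (1 / a_sends + 1 / b_sends))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example after a two-week test window: 190/1000 opens vs 240/1000 opens.
print(round(significance_of_difference(190, 1000, 240, 1000), 4))
```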