
Chances are you’ve always got a couple of ideas about how to approach a specific email campaign, and on certain occasions you’ve probably decided to play it safe and stick to what you know has worked in the past. That is a perfectly sensible approach, but it can make you hesitant to try something new.

What is A/B testing?

You’ve probably heard of A/B testing for your email marketing; you may also have heard it called ‘split testing’. Put simply, A/B testing is a way of working out which of two versions of a campaign will be most effective for engagement – whether that is how many people open the email, or how many click on a call to action. A/B testing lets you try new things while mitigating the risk, and it lets you verify whether what you think has been working is actually working.

How does A/B testing work?

When you A/B test an email campaign you send two variations of an email to a small percentage of your proposed campaign recipients. Half of this test group will get version A, the other half will receive version B – hence the rather on-the-nose name, A/B testing! Whichever variation performs best with your test group is the version that the rest of your campaign list will receive, which should result in better performance while also giving you valuable insight to inform future campaigns.

Simples!

What should you test?

The truth is you can test anything you want! Some of the most common variables tested using email campaign A/B testing are:

  • Subject lines
  • Call to action wording or positioning
  • The layout of your email (placing different elements in different positions)
  • Personalisation (using title and surname vs just forename, for example)
  • Testing one offer against another

At this point your mind is probably going into overdrive with the possibilities – and they are limited only by your creativity – BUT you should pick only one thing to test for each campaign.

Why only test one variable at a time? If you test multiple variables at once, for example the subject line and an offer in the same email, you won’t be able to say with any certainty which one was responsible for the change in performance.

How do you know which has performed better?

When you’re deciding which variable to test, you should first decide what success looks like, or what you’re trying to improve. For example, if you want more guests to open your emails, you might decide to test subject lines on your next campaign. In this case, whichever variation got the higher open rate would be the ‘winner’ and would be sent to the rest of your send list.

The same applies if you are testing variations in the wording of your call to action, perhaps in a pre-stay email encouraging guests to book dinner. Whichever version resulted in the higher click-through rate, or more bookings, would be the winner.

It is important to be clear about which metric you are measuring before you send your test. Success is easy to interpret when you vary elements such as the subject line, friendly from or preview text, because the winner is simply the version with the higher open rate. It becomes harder to interpret when you are testing different creative elements and clicks are your metric of success.

Have a clear vision of which creative component you are going to test, whether it is the wording of your call to action, the placement of your call to action, or image placement.
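To show how straightforward the comparison is once you have agreed the metric, here is a minimal Python sketch of picking a winner from two test groups. The dictionary fields, figures and function names are hypothetical and purely for illustration; they are not part of any particular email platform.

```python
def rate(successes, sends):
    """Proportion of recipients who performed the measured action."""
    return successes / sends if sends else 0.0

def pick_winner(results_a, results_b, metric="opens"):
    """Compare the two test groups on a single, pre-agreed metric.

    results_a / results_b are dictionaries like
    {"sends": 100, "opens": 38, "clicks": 9}; the field names and
    figures here are made up for illustration only.
    """
    rate_a = rate(results_a[metric], results_a["sends"])
    rate_b = rate(results_b[metric], results_b["sends"])
    return ("A" if rate_a >= rate_b else "B", rate_a, rate_b)

# Hypothetical subject line test, so opens are the success metric
winner, rate_a, rate_b = pick_winner(
    {"sends": 100, "opens": 38, "clicks": 9},
    {"sends": 100, "opens": 44, "clicks": 11},
    metric="opens",
)
print(f"Version {winner} wins ({rate_a:.0%} vs {rate_b:.0%})")
```

If you were testing a call to action instead, you would pass "clicks" as the metric; the point is that the metric is chosen once, before the send, and never changed afterwards.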

When should I run my test, and for how long?

When working with our hotel partners on A/B testing of their campaigns, we recommend using 20% of the available contact data as a starting point for your test, with a run time of two hours.

This means that 10% will get Version A, 10% will get Version B and the remaining 80% will get the version of the campaign that ‘wins’ the test.
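To make that arithmetic concrete, here is a short, illustrative Python sketch of splitting a contact list along those lines. The contact list, fractions and function name are assumptions for the example, not a description of any specific tool.

```python
import random

def split_for_ab_test(contacts, test_fraction=0.20, seed=42):
    """Split a contact list into Version A, Version B and remainder groups.

    Uses the 20% test sample suggested above: half of the test group
    (10% of all contacts) gets Version A, half gets Version B, and the
    remaining 80% are held back for whichever version wins.
    """
    shuffled = contacts[:]
    random.Random(seed).shuffle(shuffled)   # randomise so both groups are comparable

    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2

    group_a = shuffled[:half]
    group_b = shuffled[half:test_size]
    remainder = shuffled[test_size:]        # will receive the winning version
    return group_a, group_b, remainder

# Hypothetical list of 1,000 guest email addresses
contacts = [f"guest{i}@example.com" for i in range(1000)]
group_a, group_b, remainder = split_for_ab_test(contacts)
print(len(group_a), len(group_b), len(remainder))   # 100 100 800
```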

For email campaigns, the time of sending also needs to be taken into consideration. For example, if you regularly send email campaigns at 8am because you get a good open rate at that time, it wouldn’t be advisable to run your split test from 6am for two hours: although the remaining 80% have a high chance of opening at 8am, 6am could be too early for the test itself to produce worthwhile results.

Trial and error

No two campaigns or databases are the same. For example, while a 20% test sample is a good rule of thumb, you may want to adjust it up or down depending on the size of your database.

The key to making the most of your A/B tests is simply to start testing, evaluate the results and adjust as you go.

Why not get in touch with us to see how we could help you take your email marketing campaigns to the next level?
