Have you ever used an A/B Split Test Wizard? This is one of the features included in our ListManager software. The tool lets you send different versions of a message to random subsets of your mailing list, then compare the results to see which version is most effective. Because the subsets are chosen at random, this type of testing removes the demographic or action-based bias that could otherwise skew the results.
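If you are not using the Wizard, the underlying idea is simple to reproduce yourself. Here is a minimal Python sketch of splitting a mailing list into two random test groups; the subscriber list, group sizes, and function name are made up for illustration and are not part of ListManager:

```python
import random

def split_test_groups(recipients, test_fraction=0.2, seed=42):
    """Randomly split a mailing list into two equal A/B test groups.

    test_fraction is the share of the whole list used for the test;
    the remainder is held back to receive the winning version later.
    """
    pool = list(recipients)
    random.Random(seed).shuffle(pool)   # random order is what removes audience bias
    test_size = int(len(pool) * test_fraction)
    half = test_size // 2
    group_a = pool[:half]               # receives version A
    group_b = pool[half:half * 2]       # receives version B
    remainder = pool[half * 2:]         # receives the winner later
    return group_a, group_b, remainder

# Hypothetical subscriber list
subscribers = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = split_test_groups(subscribers)
print(len(a), len(b), len(rest))        # 100 100 800
```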
What you discover, whether you use the Wizard as a Dundee customer or another ESP's method, is that small changes to your email can have a big impact on your campaigns, while big changes may not. A/B split testing matters, and successful results depend on the how, the why, and the when.
Generally, if you are not happy with your mailings even after running A/B splits and acting on the results, it may be because you didn't have a plan or process for your testing; in other words, you simply created the tests and went with whichever results looked good. You will likely get a better response from your audience if you plan out a testing process you can reuse each time.
Before you write that process, formulate a theory of how you think your testing should go. Start by developing a hypothesis (an educated guess, if you will) based on your conversion goals: for example, you want to increase the percentage of email recipients who click a link within the email and complete a desired action, such as a purchase. A hypothesis lets you examine the results you get and compare them to your expectations.
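One way to keep a hypothesis concrete is to write it down with numbers attached. A small, purely illustrative sketch; the rates below are invented, not measured:

```python
# Hypothetical hypothesis: "Adding the recipient's first name to the subject line
# will lift the click-to-purchase conversion rate from about 2% to at least 2.5%."
baseline_conversion = 0.020   # assumed current rate: purchases / emails delivered
target_conversion   = 0.025   # rate we expect the new version to reach

expected_lift = (target_conversion - baseline_conversion) / baseline_conversion
print(f"Expected relative lift: {expected_lift:.0%}")   # 25%
```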
Then consider: what do you expect to learn, or what problem do you expect to solve, with this test? Once you have your hypothesis, compose the process you will use to test.
A process will help you develop tests that are quantifiable and specific: tests with measurable results that are useful now and that help you plan A/B tests for future email campaigns.
Once you develop a test plan that spells out a specific process, you need only focus on testing. Your plan may include what you want to test, for example (a simple way to record such a plan is sketched after the list):
- Subject line
- Message length
- Message format
- Links and images
- Personalized content
- Content
- Call to action
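A test plan doesn't need special tooling; even a plain record per test keeps things quantifiable and specific. A minimal sketch with made-up field values, assuming you track plans in Python rather than a spreadsheet:

```python
from dataclasses import dataclass

@dataclass
class ABTestPlan:
    element: str        # which part of the email is being tested
    variant_a: str      # control version
    variant_b: str      # challenger version
    metric: str         # what "better" means: open rate, click rate, conversions...
    hypothesis: str     # the expected outcome, written down before the send

# Hypothetical entry for a subject-line test
plan = ABTestPlan(
    element="Subject line",
    variant_a="March newsletter",
    variant_b="Anna, your March newsletter is here",
    metric="open rate",
    hypothesis="Personalizing the subject line lifts opens by at least 10%",
)
print(plan.element, "->", plan.metric)
```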
Once you have your process down, you need some form of prioritization to determine the order in which to run specific tests.
Test the ideas with the highest chance of producing the best results first. Assign weights or values to the various parts of your email. Allocate a higher value to, say, the subject line, since a change to the way the subject line is written may have a greater effect on your audience than removing one of the images. Consider your confidence level as well: are you confident that adding some personalization to the subject line will be better received, and opened more often, than a change to the content? Add points for ease of execution, and double points if there's an A/B wizard to do the work.
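One common way to do this weighting (an illustration, not a ListManager feature) is a simple score that combines expected impact, confidence, and ease, with a bonus when the wizard can run the test for you:

```python
def priority_score(impact, confidence, ease, has_wizard=False):
    """Score a test idea on a 1-10 scale per factor; higher scores run first.

    impact:     how much you expect the change to move your metric
    confidence: how sure you are the change will be well received
    ease:       how little work the test takes to set up
    """
    score = impact + confidence + ease
    if has_wizard:
        score += ease          # "double points" for ease when an A/B wizard exists
    return score

# Hypothetical ideas, scored and sorted into a test order
ideas = {
    "Personalized subject line": priority_score(impact=9, confidence=7, ease=8, has_wizard=True),
    "Remove one image":          priority_score(impact=3, confidence=5, ease=9, has_wizard=True),
    "Shorter message":           priority_score(impact=6, confidence=6, ease=4),
}
for name, score in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>3}  {name}")
```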
And finally, don't make any of this complicated; this isn't rocket science, as they say.
In summary, A/B testing is a method of comparing different versions of parts of an email against each other to determine which one performs better. Generate a hypothesis about how you could improve your campaigns, set up your test, and send. In theory, this method gives you solid evidence of what performs well and what does not. Through testing, you get a clear idea of your customers' preferences, which helps you create the kind of email they will read and act on.
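When the results come back, "performs better" should mean more than a slightly bigger number. Below is a minimal sketch of one standard way to check that a difference is real, a two-proportion z-test, with invented counts; this is general statistics, not a claim about how ListManager reports results:

```python
from math import sqrt

def two_proportion_z(clicks_a, sent_a, clicks_b, sent_b):
    """Return the z statistic for the difference between two click rates."""
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical results: version B looks better, but is the evidence solid?
z = two_proportion_z(clicks_a=42, sent_a=1000, clicks_b=61, sent_b=1000)
print(f"z = {z:.2f}")                                        # about 1.92
print("convincing" if abs(z) > 1.96 else "keep testing")     # ~95% confidence threshold
```

In this made-up case the difference falls just short of the usual 95% confidence threshold, which is exactly the kind of thing a bare click-rate comparison would hide.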
Can you really trust A/B tests? Yes, if you do them right.