The design and usability of your website have a huge impact on conversions. A/B testing can determine which elements of your site or campaign need to be reworked, leading to a higher percentage of newsletter signups, sales, or feed subscriptions. You may encounter bumps along the road, so understanding the pitfalls that can arise while A/B testing is crucial to success. In this article we’ll review five steps to address when A/B testing.
1. Planning

Before you engage in A/B testing, it’s important to develop a plan that will set the groundwork for your testing efforts. You won’t get the results you’re looking for unless you’ve identified the issues that need to be resolved.
Without planning, A/B testing will demand much more of your time and effort. If others have a stake in the site, consult with them. Additionally, have you developed a timescale for implementation? It will tell you whether your data-gathering and testing periods are realistic.
The following are some common goals you might want to achieve for your site:
• Optimizing purchases and sales, converting a higher percentage of visitors to customers.
• Improving sign-up rate, reducing bounce rate, increasing newsletter subscriptions.
2. Understanding Metrics
Understanding metrics will help you determine what the changes to elements on your site are achieving. On a broad scale, metrics reveal your business’s problems so that design can be used to solve them. They inform you of what visitors are doing as a result of the changes you’ve made, and can pinpoint how many people are bouncing off your page, how many aren’t converting to signed-up users, and other useful data.
Google Analytics will provide the valuable user-behavior insight you’ll need. Suppose, for example, it shows that your ‘affiliate’ and ‘referral’ traffic isn’t converting: you can then go back and modify those sources to make them a better fit for your marketing needs.
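To make the idea concrete, here’s a minimal sketch of the kind of per-source conversion breakdown an analytics tool gives you, computed by hand from raw visit records. The data and field names below are invented purely for illustration.

```python
# Hypothetical example: conversion rate per traffic source,
# the kind of breakdown Google Analytics reports for you.
from collections import Counter

# Made-up visit records for illustration only.
visits = [
    {"source": "organic", "converted": True},
    {"source": "organic", "converted": False},
    {"source": "organic", "converted": True},
    {"source": "affiliate", "converted": False},
    {"source": "affiliate", "converted": False},
    {"source": "referral", "converted": False},
]

totals = Counter(v["source"] for v in visits)
conversions = Counter(v["source"] for v in visits if v["converted"])

for source in totals:
    rate = conversions[source] / totals[source]
    print(f"{source}: {rate:.0%} ({conversions[source]}/{totals[source]})")
```

In this made-up data, ‘affiliate’ and ‘referral’ convert at 0%, which is exactly the signal that would send you back to rework those sources.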
3. Test Interference
Nothing can skew test results more than interference. Test interference can arise when you test variations of the design on two separate pages.
To prevent this interference, check that your testing software allows you to keep different page designs independent of each other. Your software should explain how it handles multiple tests running at once. You don’t want to run a multi-page test with variations of a design on both pages, risking that variables affecting customer behavior will change mid-test.
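One common way tools keep concurrent tests independent is deterministic bucketing: hash each visitor’s ID together with a per-test name, so a visitor always sees the same variant of a given test, while assignments across different tests are uncorrelated. The sketch below shows the idea; the function and test names are illustrative, not any particular tool’s API.

```python
# A minimal sketch of deterministic variant assignment, assuming each
# test has its own name acting as a salt. A given visitor always lands
# in the same bucket for a given test, but their buckets in two
# different tests are independent of each other.
import hashlib

def assign_variant(visitor_id: str, test_name: str, variants=("A", "B")) -> str:
    # Hash the test name plus visitor ID so each test buckets
    # visitors differently.
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Stable per test: repeated calls return the same variant.
print(assign_variant("visitor-42", "homepage-headline"))
# A different test uses a different salt, so its split is independent.
print(assign_variant("visitor-42", "checkout-button"))
```

The design choice here is that no assignment state needs to be stored: the hash itself is the record, which is why many real testing tools work this way.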
4. Testing Attribution
Attribution simply means that if there’s more than one way to accomplish a task, you should make yourself aware of all of them, because test results may be skewed when more than one path leads to the same goal.
Use A/B testing to understand whether two variations in images, headlines, graphics, or copy affect visitor behavior more, less, or the same when tested against each other. By testing, learning, and repeating, you’ll be able to make confident marketing decisions. This in turn will allow you to achieve your marketing goals.
5. Confidence Levels
After all this testing, you’ll need to determine whether your data is significant, which is where confidence levels come in. Statistical confidence ensures you’re evaluating your results the correct way and prevents you from reading too much into results when you only have a few conversions.
For instance, you may run a test where the results display a ‘95 percent confidence level’ for the winner, meaning there’s only a 5% chance that the observed difference was caused by random variation.
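If you’re curious where that number comes from, here’s a hedged sketch of one standard way to compute it, a two-proportion z-test, using only Python’s standard library. The visitor and conversion counts are invented for illustration; real testing tools handle this calculation for you.

```python
# A sketch of statistical confidence for an A/B result via a
# two-proportion z-test. Counts below are made up for illustration.
from math import erf, sqrt

def confidence_level(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that the gap between the two
    conversion rates is not due to chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Standard normal CDF via erf; confidence = 1 - two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# Variant A: 200 conversions from 5,000 visitors (4.0%)
# Variant B: 260 conversions from 5,000 visitors (5.2%)
print(f"{confidence_level(200, 5000, 260, 5000):.1%}")
```

With these invented numbers the confidence comes out above 95%, so you could call variant B the winner; with only a handful of conversions the same percentage gap would fall well short of that threshold.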
Errors are a normal part of any A/B testing endeavor. To make sure your A/B testing efforts work well for you, it’s wise to address these steps so that you get the positive results you’re looking for. Do you have an approach to A/B testing that hasn’t been addressed in this article? Let us know in the comments!
This post was written by Ruben Corbo, a freelance writer who writes for multiple websites, including Maxymiser, which helps you evolve your website by deploying A/B testing. When Ruben is not writing, he’s producing or composing music for short films and other visual arts. You can follow him on Twitter.