When you’re developing a product or service, you want it to appeal to as many users as possible. But not everyone has the same values and taste as you do. That’s why, no matter what you’re creating, you should always get feedback. A/B testing will help you determine which elements of your product work and which don’t, as well as how to improve them.
When you’re creating a website, you want it to be perfect. You spend days, or even months, trying to make it so. But you often make one mistake: you create a website that appeals to you, your employees and your friends, and when you show it to the world, the world doesn’t necessarily share your opinion. New users may say the site is unintuitive, confusing, or prevents them from finding the information they need.
An easy solution to this common problem is A/B testing.
What is A/B testing?
A/B testing is a method in which you test multiple versions of an element or an entire website at the same time. A user who enters the website is randomly assigned one version by the system, so different users may see completely different versions of the same page. After a specified period of time, you compare the results and keep the version that appeals to the bigger audience.
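The random assignment described above can be sketched in a few lines. This is a minimal illustration, not any particular tool’s implementation: it hashes a hypothetical user ID so that the same visitor always lands in the same bucket, instead of being re-randomized on every visit.

```python
import hashlib

def assign_version(user_id: str, versions=("A", "B")) -> str:
    """Deterministically assign a user to one version.

    Hashing the user ID (instead of rolling the dice on every
    request) guarantees that the same visitor always sees the
    same version, even across sessions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(versions)
    return versions[bucket]

# The same (made-up) user always gets the same version:
print(assign_version("user-42") == assign_version("user-42"))  # → True
```

Real testing tools add more signals than a plain ID (cookies, device fingerprints), but the principle is the same.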
Is it complicated?
Not really. A/B tests existed long before the age of the internet. For example, have you ever come to your favourite supermarket only to discover that everything changed? Healthy food is now closer to the entrance, carbonated drinks are at the back etc. Every product was removed from its shelf and placed somewhere else, based on some unknown formula. At first glance, it’s total chaos. You have to find your way to each product all over again.
Wait, why does everything look different today?
A couple of weeks later, things may get back to normal, sometimes with slight changes. And, if you think about it, you may realize you’ve just been a part of a testing process. During those weeks, the shop monitored the sales, prepared profit and loss reports, and compiled sales statistics. At the end, they determined if the new product arrangement resulted in better sales or more satisfied clients, or served another purpose that was being tested. If the test results were positive, the arrangement stayed the same until next time. If they were negative, everything went back to how it was at the beginning.
Sounds simple, right? You define a goal (e.g. financial profit, more clients, higher customer satisfaction) and conduct the appropriate A/B tests.
What should you test?
You can test anything you can measure, but you should only introduce changes whose effect on the measured element you can actually observe. If you make changes to a website where you can’t measure the results, testing is pointless.
How to find elements for A/B testing?
Ask your users about things that annoy them and things they’d like included;
Test the elements that have the biggest impact on sales or customer acquisition;
Test the elements that may drive away potential clients;
Study the page statistics and find the places where the largest percentage of users leave the page.
Out of all these elements, choose the ones that could turn the biggest profit.
There are a couple of widely-used expressions that will help you navigate the world of A/B testing.
Conversion is a specific, desired action of a user, e.g. adding a product to cart, signing up for a newsletter or placing an order. You decide which element’s conversion you want to measure (i.e. conversion goal).
Conversion rate equals the number of conversions divided by the number of users, expressed as a percentage.
For example, 5 out of 450 users visiting an online store place an order, which equals a 1.1% conversion rate. Increasing that to 1.2–1.3% gives you roughly a 10–20% increase in sales, with costs staying the same. Some changes can boost the conversion rate by as much as 100%, doubling your sales.
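The arithmetic behind the example above is simple enough to write down directly:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate as a percentage."""
    return 100.0 * conversions / visitors

# 5 orders out of 450 visitors, as in the example above:
rate = conversion_rate(5, 450)
print(f"{rate:.1f}%")  # → 1.1%
```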
The conversion rate is the main measurable element that proves how effective the new version is.
When you create content, you should always have a goal in mind. Take, for example, users signing up for a newsletter. Your conversion rate is 10%, and you want to raise it to 12%, so the conversion goal is a 20% relative increase in conversions. Once you have a conversion goal, you know what to test and what to improve. It may also turn out that you only reach a specific goal after several A/B tests, each of them bringing you a bit closer.
Types of A/B tests
A/B testing lets you compare different approaches to the same problem. Each approach is called a variant: it can be a version of the website with a different headline, colour scheme or call-to-action button.
Your original version is always variant A. A test is run between several variants, e.g. A/B/C, and each variant is shown to an equal share of users. With three variants, the traffic is split evenly between them, i.e. 33.3% each. Then you measure each variant and calculate its conversion rate.
You also have to remember: the more variants in one test, the longer it will take.
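The even split between variants can be sketched as a uniform random pick. This is only an illustration of the idea; real tools combine it with sticky per-user assignment so a visitor doesn’t flip between variants.

```python
import random

def pick_variant(variants=("A", "B", "C")) -> str:
    """Split traffic evenly: each variant gets 1/len(variants) of users."""
    return random.choice(variants)

# Over many visitors the split approaches 33.3% per variant:
counts = {v: 0 for v in ("A", "B", "C")}
for _ in range(30000):
    counts[pick_variant()] += 1
print(counts)  # roughly 10,000 visitors per variant
```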
Why run A/B tests?
Are A/B tests for you? Of course they are. We all want to improve, don’t we? But the way we go about it may be a shot in the dark. You may make random changes to your website, your product, your life, and then what? After a couple of months or years, it may turn out that things either went in the right direction or you messed everything up and have to start from scratch.
Contrary to what you might think, people more often fail than succeed. Why not introduce something that has the potential to help? With A/B testing, every change is verified by the users, and you know if everything is progressing the way it should. If something is not working out, you can react right away.
In time, you can get to know your users better, and, in turn, improve yourself and your product.
You have the aid of statistics and comparisons to help you determine whether you’re progressing or regressing, which lets you react quickly.
Not everyone knows that global brands like Facebook, Google or Amazon also run A/B tests. In 2011 alone, Google ran over 7,000 tests, which is nearly 20 tests a day. Users on Google forums would often write that some new functionality had suddenly appeared, while the rest of the community was surprised because they hadn’t seen the change. That is exactly what A/B testing looks like in practice: a small share of users (e.g. 5%) saw the change, and Google monitored their behaviour.
If Google cares so much about A/B testing, so should you.
At Google, the testing process may include changing button colours, font sizes or the placement of recommended products, as well as adding completely new functionalities. Such changes are often invisible to users, who use the new functionality without realising it. After a while, the tested functionalities either become available to everyone or are dropped, depending on how well they served their purpose.
The costs of A/B testing are small, sometimes close to zero. Without paying for advertising, you can significantly increase the effectiveness of every element on your website.
Accuracy of A/B testing
So, you came up with new approaches, then created and ran the test. The first conversions suggest that the new, more colourful version of your website is the way to go. Is it time to decide, end the test and show the improved version to all users?
Not yet. First, you should look at the number of conversions.
According to many different sources, each version should gain at least 100 conversions.
The more conversions you collect, the smaller the chance that a brief fluctuation in user behaviour skews the result, which means a smaller measurement error.
A small number of visitors will make the test last longer. And if you’re testing more than two versions (the original plus several altered ones), that extends the process further.
This way of estimating is, of course, not exact. To make the test mathematically sound, you can use the calculators available from Visual Website Optimizer. According to them, if you average 1,000 visitors per day, 7 days is enough, at 80% statistical confidence, to pick the better version of your website.
The more users you have, the more you can shorten the testing time.
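A rough duration estimate follows from simple arithmetic: split the daily traffic between the versions, multiply by the conversion rate, and see how long it takes each version to collect the 100 conversions mentioned earlier. The numbers below are illustrative, not taken from any specific calculator.

```python
import math

def days_needed(visitors_per_day: int, conversion_rate: float,
                versions: int, min_conversions: int = 100) -> int:
    """Rough test length: each version must collect min_conversions.

    conversion_rate is a fraction, e.g. 0.011 for 1.1%.
    Traffic is split evenly, so each version sees
    visitors_per_day / versions visitors daily.
    """
    per_version_daily = visitors_per_day / versions * conversion_rate
    return math.ceil(min_conversions / per_version_daily)

# 1,000 visitors/day, 1.1% conversion, two versions:
print(days_needed(1000, 0.011, 2))  # → 19
```

Adding a third version with the same traffic pushes the estimate up noticeably, which matches the rule of thumb above: more variants, longer test.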
A/B test accuracy vs intuition
Should you ever end a test sooner, before you establish its accuracy? There are a couple of reasons why you might consider it:
The effectiveness of the new version is extremely low and causes you to lose money;
The new version is extremely effective;
You’re testing 3 versions – A/B/C, which allows you to switch off a particularly bad version and continue with the others.
Should you decide to end the testing process on your own? There’s no right answer to this question. If you run more tests, you’ll gain more knowledge about your users’ behaviour and their reaction to change. On the other hand, if their actions don’t differ much during similar tests, and the test results have been stable for a couple of days, you might consider ending the process sooner.
In the end, longer-lasting tests guarantee higher accuracy, but sometimes you’ll have to compromise.
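One way to judge whether results are stable enough to stop is a standard two-proportion z-test on the conversion rates. This is a generic statistical sketch (the article doesn’t prescribe a specific method); the example figures are made up.

```python
import math

def two_sided_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates.

    A p-value below 0.05 is the conventional threshold for calling
    the difference between variants statistically significant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical test: 100/5000 (2.0%) vs 150/5000 (3.0%):
p = two_sided_p_value(100, 5000, 150, 5000)
print(f"p = {p:.4f}")  # well below 0.05, so the difference looks real
```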
A/B testing tools
There are many testing tools on the market, both free and paid. Free versions often offer more advanced functions for a fee, but if you’re just starting out, a free version should be enough. Besides, you might have to try a couple of tools before you find the right one. When comparing them, look for features such as:
Easy implementation: script implementation on a website is intuitive and, in most cases, doesn’t require any knowledge of programming;
Visual text editor: all changes are made through a system, without programming, so anyone can create a test that applies changes to a website;
Flexible test planning: you can decide when to start the test and set the time. Additionally, you can control the test based on its length and assign a percentage of traffic to each version. You can also decide when to end the test (e.g. based on the conversion rate);
Full reporting: you get all the information on the number of conversions, number of users and statistics over time. It allows you to draw the right conclusions;
Ability to target the test audience: lets you check, for example, how the versions performed for users aged 20-25 from England. A very useful option;
Technical support: you may often find websites offering extensive support, tutorials or even direct contact with the support team;
Compatibility with external statistics websites (e.g. Google Analytics): if you’re already using some ready-made statistics, you’ll be able to combine them with your A/B tests. It’ll be easier to control the testing process in a familiar environment, and you’ll obtain more interesting data than with a testing tool alone.
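The "full reporting" point above boils down to aggregating raw visit data into per-variant conversion rates. A minimal sketch, with a made-up event format of `(variant, converted)` pairs:

```python
from collections import Counter

def report(events):
    """Summarise raw (variant, converted) events into rates per variant.

    events: iterable of (variant, converted) pairs, where converted
    is True when the visitor completed the conversion goal.
    """
    visitors, conversions = Counter(), Counter()
    for variant, converted in events:
        visitors[variant] += 1
        if converted:
            conversions[variant] += 1
    return {v: 100.0 * conversions[v] / visitors[v] for v in visitors}

events = [("A", False), ("A", True), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
print(report(events))  # → {'A': 25.0, 'B': 50.0}
```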
Testing software can be divided into a few categories.
A lot of tools only let you test websites, but mobile apps are becoming more and more popular, and you can A/B test them too. Just a few years ago this was very difficult, because you had to program everything yourself.
Now, there are tools like Firebase from Google, which allows you to easily implement various tests inside a mobile app. You can use it for free and get great results without having to pay for advanced features. I highly recommend it.
Should you write your own testing tool?
If you’re a programmer, you’d probably like to write your own stuff. And, sure, you can do it, but it won’t be as complex as the existing tools. It may seem like a free option, but how valuable is your time? If you’re willing to spend tens of hours working on your own tool, maybe it would be more worthwhile to improve an existing one instead?
You don’t want to reinvent the wheel.
Besides, the existing tools have already proven effective: they are protected against, for example, bot traffic that can falsify the results. These systems can also identify users by more than just cookies, so you can be sure each user sees only one version of the tested software.
As someone who has written his own A/B tests, I can assure you it’s not even worth considering at the beginning. Existing tools will do it better and more cheaply.
In this article, I wanted to encourage you to test and expand your knowledge of A/B testing. It will change your way of thinking.
Over the course of a few years, I’ve conducted dozens of tests. Once, I rebuilt an order form and, during testing, achieved a 30% increase in the conversion rate. That’s just a number and may mean little to you, but converted into the company’s turnover it was 5000 PLN per day. Pretty impressive, given that the only cost was the time needed to redesign one form.
Of course, not all tests were successful. That’s why patience is crucial.
You create the A/B tests, run them and wait, then apply changes, test them and, once again, wait. The quality of the tested product will be getting better and better, but it won’t happen in a day. Be patient, and you’ll be rewarded with very satisfying results.
Marcin Frątczak, PHP Developer
A PHP programmer who likes to keep himself busy. Most of all, he enjoys discovering new things, trying to make the most of them and incorporate them into his everyday life. In his free time, he plays chess and strategy games for an intellectual challenge.