A comprehensive guide to A/B testing

By UserTesting | May 24, 2019
[Image: A/B testing guide]

How much impact can one tiny, little feature on a webpage really have?

A whole lot, as it turns out.

Through A/B testing, hotel booking site arenaturist.com found that a vertical form (vs. a horizontal form) had a huge impact on its users and its conversion rate.

[Images: arenaturist.com A/B test – horizontal booking form vs. vertical booking form]

Their aim was to increase form submissions, and this small change certainly delivered – submissions rose by a huge 52%.

Switching from a horizontal form to a vertical form had a significant impact on users, and massive potential to increase conversions and revenue. Especially when you remember this is just the first step in the booking process.

What is A/B testing?

A/B testing can be used to compare the performance of a webpage, app, email, or advert by pitting a control, ‘A’, against a test variant, ‘B’.

A/B testing works by randomly splitting your inbound traffic equally between the two versions. The data captured from each group's interactions with its version helps you make informed, evidence-based decisions about marketing, design, and user experience.
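
In practice, a testing tool needs assignment to be both random and sticky, so a returning visitor keeps seeing the same version. Here's a minimal sketch of one common approach – hashing a stable user ID into a bucket. The names and hash choice are illustrative, not any particular vendor's implementation:

```typescript
// Minimal sketch of deterministic 50/50 variant assignment. Hashing a
// stable user ID means a returning visitor always sees the same version.
// All names and the hash choice here are illustrative.

type Variant = "A" | "B";

function hashString(input: string): number {
  // FNV-1a hash; real tools use more robust bucketing schemes.
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return hash >>> 0; // force unsigned 32-bit
}

function assignVariant(userId: string, experimentId: string): Variant {
  // Salting with the experiment ID keeps assignments independent
  // across concurrent tests.
  return hashString(`${experimentId}:${userId}`) % 2 === 0 ? "A" : "B";
}

// The same user always lands in the same bucket for a given test:
console.log(assignVariant("user-123", "checkout-form-test"));
```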

As such, an A/B test can often tell you whether a proposed new feature or design change is a good or bad idea before you commit to it permanently. This is done by measuring the impact on a given conversion goal – for example, clicking a link, completing a form, buying a product, or finishing a checkout process.

Common elements for testing include headlines, text, images, colors, buttons, layouts, and features. Ideally, though, only one element should differ between variations – if you’re testing the layout, don’t alter the text as well. By isolating individual elements in this way, you get a clear picture of each one’s influence within the user journey.

Why should you A/B test?

A/B testing ultimately takes the risk and guesswork away from making changes.

Developing a website or app can require significant investment and may mean changes to your overall marketing strategy. If a substantial design change causes a downturn in conversion performance, that investment is wasted and business performance can suffer.

Through A/B testing, this risk is greatly reduced. You can target individual features to see the effects of changing them without making any permanent commitments, allowing you to predict long-term outcomes and plan your resources accordingly.

Furthermore, A/B testing can help you understand the features and elements that best improve your user experience. Sometimes, even the most minor changes can have a huge impact on conversion rates.

For example, one test found that a red CTA button gained 21% more clicks than a green CTA button. A/B test data can inform objective decisions and changes to individual webpages, a whole site, or even a wider marketing strategy.

[Image: A/B testing example – red vs. green CTA button]

Benefits of A/B testing

Carrying out A/B tests can have a number of key benefits:

1: Improved user experience

Users are always trying to accomplish something on your website. Often, blockers or pain points hold them back from completing their task, and the result is a poor user experience. User research can identify these blockers; A/B testing is how you find a solution. The data gathered through A/B testing informs decisions that improve the user experience, which in turn has a positive impact on engagement and conversions.

2: Improved return on investment (ROI)

A/B testing lets you test and increase conversions using your existing traffic – without the expense of reaching out to new audiences. Even a minor change can produce a marked improvement in ROI, and A/B testing enables it with relatively little investment.

3: It’s cheap and easy to set up

Most A/B testing platforms are relatively easy to use and cheap to install. Even the least code-savvy won’t struggle to set up and implement their tests.

4: You can test your competitor’s ideas

A competitor may add or update a certain feature. Through A/B testing, you can determine whether a similar update would benefit your user journey and conversions.

5: It can validate (or invalidate) opinions

Everyone has their own ideas on what will work best on your site. And some people’s opinions can be more influential than others. A/B testing these ideas can provide clear data to support—or disprove—whether ideas and implementations are worthwhile.

6: Your business can become more customer-centric

Optimizing features to improve the user journey naturally focuses your business on the customer – especially when tests are repeated and features become progressively more user-friendly.

Ultimately, A/B testing provides a relatively simple way to analyze and improve the performance of a webpage. With limited risk and cost involved, you can continually optimize content to improve the user journey, experience, and conversions.

When should you A/B test?

1: After completing user research

User research can uncover issues and pain points within the user journey.

For example, people might be struggling with a menu layout or a form. Your user research could include session recordings, live testing, focus groups, or surveys. You might then ask questions or set tasks to find out why people struggle with the menu and which features cause the most trouble. You could then use the insights from this research to design a new menu – and A/B test it against the old one to see if performance improves.

2: When your conversion rate is falling

Carry out A/B testing if your conversion rate isn’t what you expect. Use existing data to pinpoint where you’re losing conversions and seeing drop-offs, then test features at those points.

3: When redesigning your website

Web redesigns can damage traffic and conversion rates – think 404s, lost backlinks, crawl errors, and plummeting rankings. User experience can also take a hit: the site might look better, but users may no longer know how to navigate it. A/B tests should be carried out before, during, and after any redesign to make sure the site is as effective and usable as possible.

Use existing A/B test data to inform the redesign itself. And if the redesign doesn’t produce statistically significant results in an A/B test – rethink and optimize your strategy.

4: When you add a new feature, plug-in, or service

A/B testing can be crucial if you make a change to your page that affects the user journey or the point of purchase. For example, a poorly optimized change to a shopping cart or email sign-up page has the potential to lose traffic and conversions. Make sure any changes at these points will enhance the user experience.

5: When you want to increase revenue

An optimized website can improve user experience, and ultimately lead to higher conversions—and so higher revenue. Follow an ongoing A/B testing process to optimize your site continually.

What should you be testing?

Some features of a page have a more significant impact on users than others. Use A/B testing to optimize influential features within the user journey:

1: Headlines

The page headline hooks viewers to stay on the page. It needs to generate interest and engage users. You could test headline variations, such as:

  • Long vs short
  • Positive vs negative
  • Benefit-oriented vs feature-oriented

Email subject lines can undergo similar A/B testing, with the goal of getting users to open the email. You can test subject lines with different features:

  • Questions vs statements
  • Personalization
  • With emojis vs without emojis
  • Different power words or verbs

2: Design and layout

Pages should include relevant content in a layout that isn’t cluttered or confusing. You can find the optimum combination of content and layout by testing:

  • Features in different locations
  • Customer reviews and testimonials
  • Images vs video

Some user research tools include qualitative features like heatmaps and click maps. These can supplement A/B test data to reveal distractions or dead clicks, i.e. clicks on elements that don’t link anywhere.

3: Copy

Your audience will respond better to copy which is optimized specifically for them. You could use testing to find whether your audience responds better to:

  • Long vs short text
  • Persuasive, informative, or descriptive tones
  • Formal vs informal
  • Full-bodied paragraphs vs bullet-pointed lists

4: Calls-to-action (CTAs)

CTAs are where the action on a page takes place – and they may also be the conversion goal of a test. An example would be a colored button that says “click here.” A number of factors can influence user behavior at this point:

  • CTA text vs button
  • Persuasive vs informational copy
  • Color of text/button
  • Size of text/button
  • Contrast between button and text size/color
  • Location of CTA on the page

5: Navigation

Navigation can be the key to improving the user journey, engagement, and conversions. A/B testing can help optimize site navigation, making sure the site is well structured and each click leads to the desired page. Possible test features include:

  • Horizontal vs vertical navigation bars
  • Page or product categories
  • Menu lengths and copy

6: Forms

Forms require users to provide data to convert. While individual forms depend on their function, you can use A/B testing to optimize:

  • Number of fields
  • Form design, structure, and layout
  • Required vs non-required fields
  • Form location
  • Form heading and copy

7: Media

Images, videos, and audio files can be used differently depending on the customer journey, e.g. product images vs product demonstration videos. A/B testing may determine how media can optimize different aspects of the user journey:

  • Image vs video
  • Sound vs no sound
  • Product-based vs benefit-based
  • One media piece vs several
  • Autoplay videos
  • Narration vs closed captions

8: Product descriptions

While descriptions are likely to depend heavily on the product, some elements can be optimized, such as:

  • Brief and clear vs longer and detailed
  • Full-bodied paragraphs vs bullet-pointed lists

9: Social proof

A/B testing could help determine whether adding social proof to your page is beneficial. You could test the formats and locations of features like:

  • Customer reviews
  • Testimonials and endorsements
  • Influencer content
  • Media mentions

A/B testing framework:

A/B testing should be treated as an ongoing process of continual rounds of testing, each round building on the data collected before it. Within a single round, though, you can use the following framework to start, run, and complete a test.

1: Select features to test

Your existing data can help you find ‘problem’ areas – pages with low conversion or high drop-off rates, for example. Use that data to decide which features to test. You could base A/B testing on your own or someone else’s gut instinct, but tests based on objective data and well-formed hypotheses are far more likely to produce valid results.

2: Formulate hypotheses

Generate a valid testing hypothesis – consider how and why a certain variation may be better.

For example, you might want to improve the open rate of emails. So your hypothesis could be: “Research shows users are not responding to impersonal subject lines. If we include the user’s name in the subject line, significantly more users will open our email.”

3: Select an A/B testing platform

Testing tools are embedded into your site to run the test for you. A number of tools are available, each with its own pros and cons, so research the different platforms to find the best fit for your tests and goals.

4: Set conversion goals

Decide how you will determine the success of each variation: clicking through to a certain page, buying a product, or completing a form, for example. You set this goal within the testing platform so it knows what to count as a successful conversion.

If the goal is to complete a form, for example, a conversion is logged once the user reaches a “thanks for your submission” message; if the goal is to play a video, it’s logged once the video is viewed. The testing platform records each conversion along with the variation shown to the user. This will almost always require a snippet of tracking code to be added to one or more webpages.
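
To make that concrete, here is a rough sketch of what such a tracking call might look like. The endpoint, payload shape, and names are all hypothetical – real platforms supply their own snippet to paste into the page – but the shape of the idea is the same:

```typescript
// Illustrative sketch of a conversion-tracking call. The endpoint,
// payload shape, and names are hypothetical, not any real vendor's API.

interface ConversionEvent {
  experimentId: string;
  variant: "A" | "B";
  goal: string; // e.g. "form_submitted", "video_played"
  timestamp: number;
}

function logConversion(experimentId: string, variant: "A" | "B", goal: string): void {
  const event: ConversionEvent = { experimentId, variant, goal, timestamp: Date.now() };
  // Fire-and-forget beacon so logging doesn't block page navigation.
  navigator.sendBeacon("/api/experiments/convert", JSON.stringify(event));
}

// Called from the "thanks for your submission" page:
logConversion("signup-form-test", "B", "form_submitted");
```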

5: Create variations to test

You can create variations in two ways – usually within your chosen testing tool. Don’t forget to include a control or ‘champion’ page.

  • Create two variations of a feature: To test a single feature on a page, create two different variations within your testing tool. The A/B tool will randomly select which variation the user sees.
  • Create and redirect to two different pages: Create two almost identical pages, changing only the feature under test, so you have two different URLs. The testing tool will randomly redirect some users to the alternate URL (see the sketch after this list).
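
As a rough illustration of the redirect approach, here is a minimal server-side sketch using Express. The routes, cookie name, and file paths are invented for illustration – an off-the-shelf testing tool handles all of this for you:

```typescript
// Hypothetical sketch of a split-URL test; routes and paths are invented.
import express from "express";

const app = express();

app.get("/booking", (req, res) => {
  const cookie = req.headers.cookie ?? "";
  // Reuse a prior assignment so returning visitors see a consistent page.
  const bucketB =
    cookie.includes("ab=b") || (!cookie.includes("ab=a") && Math.random() < 0.5);
  res.setHeader("Set-Cookie", `ab=${bucketB ? "b" : "a"}; Path=/`);
  if (bucketB) {
    res.redirect(302, "/booking-b"); // near-identical variant page at its own URL
  } else {
    res.sendFile("booking-a.html", { root: "pages" }); // control page
  }
});

app.listen(3000);
```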

6: Run the test

Run the test over a pre-set time scale, and don’t cut it short if you see results earlier than planned. A typical duration is 10-14 days, but this ultimately depends on your site and its traffic.

It’s crucial to decide on a sample size and stick to it. The sample size needed usually depends on the change you expect to see: the stronger the expected effect, the fewer users you need to confirm it.
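
As a rough guide, here is a back-of-envelope sample size calculation for comparing two conversion rates at 95% confidence and 80% power. This is a simplified sketch, not a replacement for your testing tool’s own calculator:

```typescript
// Back-of-envelope sample size per variant for comparing two conversion
// rates at 95% confidence and 80% power. A simplified sketch only.

function sampleSizePerVariant(baselineRate: number, expectedRate: number): number {
  const zAlpha = 1.96;  // two-sided 95% confidence
  const zBeta = 0.8416; // 80% power
  const variance =
    baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate);
  const effect = expectedRate - baselineRate;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / effect ** 2);
}

// A small expected lift needs far more users than a large one:
console.log(sampleSizePerVariant(0.05, 0.06)); // ≈ 8,156 per variant
console.log(sampleSizePerVariant(0.05, 0.10)); // ≈ 432 per variant
```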

Also bear in mind that some users may have seen the page before; they may automatically respond differently to users seeing the page for the first time.

If you’re testing emails and their content, you may have more control over the sample and what individual users see. You may need to randomly split the sample, and create and schedule the emails manually.

7: Collect and analyze data

Analyze your results using your chosen testing tools. Regardless of whether the test had a positive or negative outcome, it’s important to calculate whether the data is statistically significant.

Statistical significance indicates how unlikely it is that the difference between variations is down to chance alone. It tells you whether the test results are reliable, and whether they justify making a change to your site.

Generally, running your test for a longer period of time – and allowing more users to be involved – lowers the risk of results being down to chance. Larger samples are often more representative of overall audiences. And so their behavior can more reliably represent how a whole audience would behave.

Analyzing the data

While A/B testing software will present quantitative data, you still need to analyze the results effectively. There are a few things to consider when doing so.

1: Focus on your goal metric

A/B testing can collect a lot of data on many aspects of user behavior. Focusing on your original goal metric is important. Your analysis and results will then align with your original hypothesis and goals.

2: Measure the statistical significance

As noted above, statistical significance indicates whether a difference is likely to be real rather than down to chance, and therefore whether it justifies making a change to your site. Most testing tools include an analysis feature to calculate the statistical significance of the data collected.
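
If you want to sanity-check a tool’s output, the underlying calculation is often a two-proportion z-test. Here is a minimal sketch, assuming a simple two-sided test at 95% confidence; the figures in the example are made up:

```typescript
// Sketch of the two-proportion z-test many tools run under the hood.
// A |z| above 1.96 corresponds to significance at the 95% level (two-sided).

function zScore(convA: number, usersA: number, convB: number, usersB: number): number {
  const rateA = convA / usersA;
  const rateB = convB / usersB;
  // Pooled conversion rate under the "no real difference" assumption.
  const pooled = (convA + convB) / (usersA + usersB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (rateB - rateA) / standardError;
}

// Made-up figures: 500 of 10,000 users converted on A, 570 of 10,000 on B.
const z = zScore(500, 10_000, 570, 10_000); // ≈ 2.2
console.log(Math.abs(z) > 1.96 ? "significant at 95%" : "not significant");
```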

3: Base your next steps on the test results

If one variation proves to be a statistically significant positive change, then implement it. If the results are inconclusive or negative, use this knowledge as a basis for your next tests.

Understanding the why of A/B testing using qualitative research

As we said earlier, A/B testing provides quantitative (numbers-based) data, but it may not reveal the actual reasons why visitors to each page behave the way they do.

UserZoom customers often run a usability study to understand the quantitative data coming from their A/B tests. For example, they may run a think-aloud study to probe their two A/B designs more deeply and find out why one is performing better than the other.

Top tips:

  • Focus on your goal metric: even though you’ll be collecting a lot of data, focus on your initial goal.
  • Measure significance: calculate whether results are statistically significant, i.e. are they enough to justify making a change?
  • Take action based on results. If one variation is statistically better, go with that. If not, go again. Don’t count negative results as failures – it’s important to learn what doesn’t work and what to try next.
  • Reinforce your results by using qualitative testing to verify quantitative findings.

Challenges of A/B testing

A/B testing can carry some challenges. But they can usually be overcome by following an objective and thorough procedure:

1: Deciding what to test

Use existing data to determine when – and why – to test a feature. For example, check for pages or links with fewer conversions. Test one element at a time so you can easily pinpoint the influence on users.

2: Formulating hypotheses

Data should be used to see where issues lie, and to formulate objective theories and solutions.

3: Sticking to the sample size

The sample size should be determined before the test runs. People commonly cut testing short because they want quick results. But A/B tests need a sufficient sample size for representative results.

4: Maintaining an ongoing test procedure

A/B tests should be repeated to ensure pages are continuously improved – this will help optimization efforts to be effective long-term. Use results and insights from each test to form the basis of the next. Learn from successes and failures. They can indicate how your users behave, and how other features can be optimized.

5: Staying objective

Try to set aside your own opinions and the anecdotal results others report. Focus on the statistical significance of your results to make sure the data – and any subsequent changes – are justified.

When planning an A/B test, be sure to consider any external factors, e.g. public holidays or sales, that may influence web traffic. You should also research which testing tool is best suited to your needs – some may slow your site down, or lack necessary qualitative features, e.g. heatmaps and session recordings.

When driven by data and executed objectively, A/B testing can generate great ROI, drive conversions, and improve user experience. Subjective opinion and guesswork are taken out of the optimization process, so A/B testing can ultimately inform strategic decisions across your marketing efforts, driven purely by data.

If decisions and tests are carried out at random or based purely on opinion, they’ll probably fail. Start the testing process off the back of clear data, a strong webpage, and a controlled process. It’s the best way to begin a cycle of effective A/B tests – and a great first step toward a well-optimized site and user journey.


About the author(s)
UserTesting

With UserTesting’s on-demand platform, you uncover ‘the why’ behind customer interactions. In just a few hours, you can capture the critical human insights you need to confidently deliver what your customers want and expect.