What is A/B Testing?
A/B testing (also known as split testing or bucket testing) is an experiment to discover which of two or more variants of a campaign is most effective at encouraging opens or clicks. Two or more variants of an email or page are shown to users at random, and statistical analysis is used to determine which variant drives more conversions.
How does A/B Testing work?
For an A/B test, you take an email, web page, or app screen and modify it to create a second version. For an email, the change can be as simple as a different subject line, body content, or signature. For a web page, it can be a single headline or button change, or a complete redesign of the page. Then half of your target audience is sent one email and the other half the other variant. For a web page, half of your traffic is shown the original version of the page (known as the control) and half is shown the modified version (the variation).
As the different variations are served to visitors, their engagement with each one is measured, collected in an analytics dashboard, and analyzed through a statistical engine. You can then determine whether changing the experience had a positive, negative, or no effect on visitor behavior.
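To make the mechanics concrete, here is a minimal Python sketch of how a testing tool might assign visitors to variants and tally engagement. The experiment name, the 50/50 split, and the in-memory counters are assumptions for illustration, not any particular vendor's implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically bucket a user into 'control' or 'variation'."""
    # Hashing (experiment + user_id) gives a stable, effectively random
    # assignment: the same user always sees the same version.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # map the hash to 0..99
    return "control" if bucket < 50 else "variation"

# Engagement is then tallied per variant:
results = {"control": {"visitors": 0, "conversions": 0},
           "variation": {"visitors": 0, "conversions": 0}}

def record_visit(user_id: str, converted: bool) -> None:
    variant = assign_variant(user_id)
    results[variant]["visitors"] += 1
    if converted:
        results[variant]["conversions"] += 1
```

Hashing the user ID, rather than flipping a coin on every visit, keeps the assignment stable so that a returning visitor always sees the same version.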
How to Perform an A/B Test?
Broadly, performing an A/B test involves the following steps:
Collect Data: Gather data on how users currently behave: how many visitors come to the site, how many read your emails, which pages drive the most traffic, and what the conversion goals of the different pages are. Your analytics will often provide insight into where you can begin optimizing.
Identify Goals: Your conversion goals are the metrics you use to decide whether a variation is more successful than the original version. Goals can be anything from clicking a button or link to purchasing a product or signing up for email.
Observe and Generate a Hypothesis: Once you've identified a goal, you can begin generating A/B testing ideas and hypotheses about why you think they will beat the current version. Forming a hypothesis can be tricky. You might, for example, hypothesize that a different CTA button on your free trial page will increase signups. The best way to use the data you have collected is to analyze it, make careful observations, and draw website and user insights from it to formulate data-backed hypotheses.
Create Variations: Next, create a variation based on your hypothesis and A/B test it against the existing version (the control). A variation is another version of your current page with the changes you want to test; you can test multiple variations against the control to see which works best. Build each variation around your hypothesis of what might work from a UX perspective. For example, are too few people completing your form? Does it have too many fields? Does it ask for personal information? Try one variation with a shorter form and another that omits the fields asking for personal information.
Run Experiments: Kick off your experiment and wait for visitors to participate. At this point, visitors to your site or app are randomly assigned to either the control or the variation. Their interactions with each version are measured, counted, and compared to determine how each performs.
Analyze Results: Once your experiment is complete, analyze the results. Your A/B testing software will present the data from the experiment, showing how the two versions of your page performed and whether the difference between them is statistically significant (a sketch of such a check follows this list).
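To illustrate the final step, here is a minimal Python sketch of the kind of significance check a testing tool performs: a two-proportion z-test on conversion counts. The counts are invented for the example, and real tools also handle sample-size planning and stopping rules.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: control converted 120/2400, variation 156/2400.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value falls below your chosen threshold (conventionally 0.05), the difference is treated as statistically significant rather than noise.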
A/B testing an email campaign:
You can A/B test an email campaign in one of three ways:
Subject Line: For this test, campaign versions A and B are identical except for the subject line. For example, you could test whether a generic subject gets more opens than a longer, more specific one. A sketch of splitting a list for this kind of test follows the examples below.
To create different variants of a subject line, you could:
- Use two completely different topics as subject lines, to see which content is of most interest to subscribers.
- Add personalization to otherwise identical subject lines, to see if a first-name greeting, for example, gets a better response.
- See what kind of promotion works best, say by offering "Free Shipping" versus "15% Off".
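As referenced above, here is a minimal Python sketch of randomly splitting a subscriber list between two subject lines. The addresses and subjects are placeholders, and the print call stands in for a real email provider's send API.

```python
import random

subscribers = [f"user{i}@example.com" for i in range(1000)]  # placeholder list
subjects = {
    "A": "Free Shipping on All Orders",  # Version A subject
    "B": "15% Off Your Next Order",      # Version B subject
}

random.shuffle(subscribers)              # randomize order before splitting
half = len(subscribers) // 2
groups = {"A": subscribers[:half], "B": subscribers[half:]}

for version, recipients in groups.items():
    for address in recipients:
        # Stand-in for your email provider's send call.
        print(f"sending version {version} ({subjects[version]!r}) to {address}")
```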
From Name: Sender details matter because many people will not open an email if they don't recognize who it's from. With a From name test, you can use a different name and email address for Version A versus Version B, or change just one or the other. The best approach depends on your relationship with the subscriber: consider whether they are more likely to recognize an individual's name, your company name, or the name of the product your campaigns are about.
Email Content: This test compares different elements of the campaign itself, for example section titles, article length, calls to action, header images, and more. You might even test two completely different designs to see which one gets the most clicks.
Campaigns for this type of test can be created using one of your saved templates, or designed externally and imported (if that option is available to you). If you're doing an email content test, follow the on-screen instructions to set up Version A first; you'll then be prompted to repeat the process for Version B.
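Many email tools also offer a "send the winner" workflow: both versions go to a sample of the list first, and the better performer goes to everyone else. Here is a minimal Python sketch of that split, with the 20% test fraction and the integer subscriber IDs as illustrative assumptions:

```python
import random

def split_for_test(subscribers: list, test_fraction: float = 0.2):
    """Hold out test_fraction of the list, split evenly between A and B."""
    pool = subscribers[:]            # copy so the original list is untouched
    random.shuffle(pool)
    n_test = int(len(pool) * test_fraction)
    test, remainder = pool[:n_test], pool[n_test:]
    half = n_test // 2
    return test[:half], test[half:], remainder  # group A, group B, the rest

group_a, group_b, remainder = split_for_test(list(range(10_000)))
# 1. Send Version A to group_a and Version B to group_b.
# 2. After a waiting period, compare open or click rates (e.g., with the
#    z-test sketched earlier).
# 3. Send the winning version to the remainder of the list.
```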
Advantages and Disadvantages:
Advantages:
- Clear evidence for design decisions
- Answers to specific design questions
- Reduced bounce rate
- Increased conversion rate
- Ease of analysis
Disadvantages:
- Can take a lot of time and resources.
- Only works for specific, measurable goals.