How to A/B Test your Outbound Sales Campaigns
September 18th 2020
Outbound Sales
Analytics and reporting are becoming a central part of sales. Tracking metrics across your activities, such as impressions, open rates, clicks, and positive replies, has become vital for reaching your sales goals. A/B testing builds on this: it is a methodical comparison of two variants against those metrics. To take a simple example, suppose you are sending cold emails for your outbound sales campaign and your subject line gets a subpar open rate. Intuitively, you would change the subject line to improve open rates. But how do you decide which subject line to test next? This is where you implement A/B testing across your different cold email variants and compare their performance. We'll cover more on this in the sections below.

When planning your next outbound campaign, it initially sounds easier to just search Google for "best sales email" and copy it, but will that guarantee the same results for your campaign? The answer is no. When creating an outbound campaign, you shouldn't ask what works for everyone else; instead, find out what works for you and your SaaS product or service. And even if you plan your outbound sales campaign on gut and intuition, you need a systematic way to run it so you can analyze the results and improve your methods.

That is where A/B testing comes in handy. It’s the most reliable way to find out what exactly works best for your business, rather than relying on gut feeling and crowdsourced best practices.

Here's a quick hands-on guide to get your A/B testing up and running.

Table of contents:

  • What exactly is A/B testing?
  • 3 tips for performing reliable A/B tests
  • How to conduct A/B testing?
  • Is A/B testing always helpful?

What exactly is A/B testing?

A/B testing means comparing two different versions of something, typically with one variable changed, to see which performs better. "Something" is deliberately vague: you can test anything from your website design to your cold email templates. It is an experiment to discover which of two campaign options is more effective at encouraging opens or clicks. In sales and marketing, it is widely used to test different approaches and find the most effective one.

It's also possible to test more than just two versions, in which case you would have an A/B/C or A/B/C/D test, and so on (commonly referred to as an A/B/n test).

If properly carried out, A/B testing allows you to confidently determine the best version of your outbound sales campaign, unique to your business, and to learn more about your prospects along the way so you can sell better.

3 tips for performing reliable A/B tests:

To get the most out of your A/B testing, you need to be thoroughly prepared and clear about your ultimate goal and the outcome you want from the results. Here are some tips to keep in mind before you get started:

  • You need to know exactly what you're trying to achieve - When we compare two variants against each other on a given metric, the expected outcome is that one will score better than the other; that, in a nutshell, is the result of the A/B test. But should the final decision rely entirely on that result? What if email A gets more responses, but the prospects who respond to email B have a higher lifetime value? Which one is the winner? This is why you should be clear about your ultimate goal before you start A/B testing.
  • Carry out the test properly and analyze the results correctly - An unbiased mindset is important when you test your variants. You might have a personal opinion about which campaign is better suited, but that judgement must not influence how you deploy your variants. That means the audience for each variation has to be randomized. For example, if your "gut" told you that email A was better than B, you might send A to your best prospects, skewing the results.
  • Take into account statistical significance and confidence levels - To have confidence in the results, you need a large enough sample to be sure the result is reliable, and not a fluke or due to an unrelated, uncontrolled variation (see the sketch after this list).
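
To make that last tip concrete, here is a minimal sketch in Python of how you might estimate the sample size needed per variant before trusting a difference in reply rates. It uses the standard normal-approximation formula for comparing two proportions; the baseline rate, the lift you want to detect, and the 5% significance / 80% power defaults are all assumptions you would adjust for your own campaign.

    from statistics import NormalDist

    def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
        """Rough sample size per variant for a two-proportion test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p_base + p_target) / 2                # pooled rate
        effect = abs(p_target - p_base)                # lift you want to detect
        n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / effect ** 2
        return int(n) + 1

    # e.g. detecting a lift from a 5% to an 8% positive reply rate:
    print(sample_size_per_variant(0.05, 0.08))  # roughly 1,060 per variant

If the required sample is larger than your prospect list, you may need to test a bigger change or let the test run longer.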

How to conduct A/B testing?

Let’s cover the steps to conduct your A/B testing:

Step 1: Decide your goal

Start by specifying your goal. Having a clear goal makes it easy to decide which key performance metric to monitor during the test; otherwise, it's all too easy to get caught up in all the metrics available. Although you'll measure a number of metrics, choose a primary one to focus on before you run the test. This is your "dependent variable". For an outbound sales email campaign, you might start with positive reply rate, meetings booked, or a similar outcome-based metric.
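
As a minimal sketch, here is how you might compute one primary metric, positive reply rate, from raw campaign records in Python. The record fields are hypothetical; most sales engagement tools report this for you, but computing it yourself keeps the definition explicit.

    emails = [  # hypothetical export of sent-email records
        {"variant": "A", "delivered": True, "positive_reply": True},
        {"variant": "A", "delivered": True, "positive_reply": False},
        {"variant": "B", "delivered": True, "positive_reply": True},
    ]

    def positive_reply_rate(records, variant):
        sent = [r for r in records if r["variant"] == variant and r["delivered"]]
        replies = sum(r["positive_reply"] for r in sent)
        return replies / len(sent) if sent else 0.0

    print(positive_reply_rate(emails, "A"))  # 0.5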

Step 2: Pick a variable to test

As you plan your outbound campaign, you might find there are a number of variables you want to test. But to evaluate how effective a change is, you'll want to isolate one "independent variable" and measure its performance; otherwise, you can't be sure which change was responsible for the difference in results.

With your desired outcome in mind, choose what you're going to test. It's usually best to focus on just one variable at a time. Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these small changes are usually easier to measure than bigger ones. For an outbound sales email campaign, here are some commonly used variables for A/B testing:

  • Subject lines
  • Calls to action
  • Images
  • Body text
  • Sender name
  • Time and day of send

Step 3: Create the control and challenger variations

You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you're testing as your "control". From there, build a variation, or "challenger": a different version with your one variable changed. Create your A and B test pieces in line with best practices for outbound campaigns. For example, say your goal is to encourage customers to buy your product by offering a promotional discount on your landing page. Your variable is the call-to-action button, with two variants, "Free Shipping" and "15% Off". You would set up the test with the "Free Shipping" CTA as the control and the "15% Off" CTA as the challenger.

Remember, you should change only the one variable under test; otherwise, you won't be sure what caused the change in response.
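
As a minimal illustration, assuming hypothetical field names and copy, here is one way to build a challenger in Python that is guaranteed to differ from the control in exactly one field:

    control = {
        "subject": "Quick question about your sales workflow",
        "body": "...",              # identical in both variants
        "cta": "Free Shipping",
    }
    challenger = {**control, "cta": "15% Off"}  # copy control, change one field

    changed = [k for k in control if control[k] != challenger[k]]
    assert changed == ["cta"], "more than one variable changed"

Copying the control and overriding a single field makes it hard to accidentally change two things at once.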

Step 4: Split your sample groups equally and randomly

As mentioned earlier, a reliable result is only possible if you balance your audience across variants. How you do this will vary depending on the A/B testing tool you use; most tools automatically split traffic between your variations so that each gets a random sample of your audience.

Sometimes you also need to determine your sample size. If you're A/B testing an email, you'll probably want to send the test to a smaller portion of your list to get statistically significant results; eventually, you pick a winner and send the winning variation to the rest of the list.
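
Here is a minimal sketch of both ideas in Python, assuming a plain list of prospect emails and a hypothetical 20% test slice: shuffle the list so assignment is random, split the test slice evenly between A and B, and hold the rest back for the winner.

    import random

    prospects = [f"prospect{i}@example.com" for i in range(1000)]  # hypothetical list
    random.shuffle(prospects)              # randomize to avoid "gut feeling" skew

    test_size = len(prospects) // 5        # e.g. test on 20% of the list
    group_a = prospects[: test_size // 2]
    group_b = prospects[test_size // 2 : test_size]
    holdout = prospects[test_size:]        # later receives the winning variant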

Step 5: Analyze the results

Once the test is complete, look at how both variations performed. First, decide how significant your results need to be to justify choosing one variation over the other. Statistical significance is an essential, and often misunderstood, part of the A/B testing process. The higher your confidence level, the more sure you can be about your results. Assuming the results are statistically significant, you will hopefully have a clear winner.
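
As a minimal sketch of such a check, here is a two-sided two-proportion z-test in Python. The reply counts are hypothetical; a p-value below 0.05 corresponds to the commonly used 95% confidence level.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_p_value(replies_a, n_a, replies_b, n_b):
        """p-value for 'variants A and B have different true reply rates'."""
        p_a, p_b = replies_a / n_a, replies_b / n_b
        p_pool = (replies_a + replies_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

    # e.g. 18/200 positive replies for A vs 34/200 for B:
    print(two_proportion_p_value(18, 200, 34, 200))  # ~0.017, significant at 95%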

Even if you don't have a winner, you should still be able to learn something from the results. Was your prediction correct? If so, how can you use that in future emails? If not, why might that be?

Step 6: Start all over again


The A/B test you just finished may have helped you discover a new way to make your content more effective, but don't stop there. There's always room for more optimization, and A/B testing should be ongoing. Your next test could use the same variable with different variants, or a different variable altogether. For example, if you just tested the subject line of your sales campaign, why not run a new test on body copy or images? Always keep an eye out for opportunities to increase conversion rates and leads. Even when you've discovered the "perfect" email, with every conceivable variable explored, times change: what works for you today may fail tomorrow. Keep testing, keep learning, and keep improving.

Is A/B testing always helpful?

Running A/B tests on your outbound sales campaign is a great way to learn how to generate more leads from the outreach you're already doing. Even small tweaks to an email, cold call, or call-to-action button can affect the number of leads your company attracts and converts. The potential benefits of A/B testing your outbound campaign are:

  • Improved messaging
  • Higher conversion rate
  • Higher customer engagement
  • Reduced bounce rate

But is this methodical testing flawless? Does it always guarantee a successful outcome? Suppose you spend $100 per month generating content for your outbound sales campaign, and it brings in 200 new leads. Now you decide to A/B test your campaign, spending an additional $50 to test different variants of your content.

After running A/B tests for a significant amount of time, two outcomes are possible: either you observe a significant increase in new leads, in which case all is well, or the test fails and there is no significant change in new leads, or even worse, a decrease. Does this mean your $50 on A/B testing has gone to waste? In the long run, the answer is no. To put numbers on it: your baseline cost per lead is $100 / 200 = $0.50, and at $150 total spend the testing pays for itself as soon as it lifts you past 300 leads per month. You might not get a positive result on your key metric from one test, but you will learn what not to do in future campaigns. So in your next A/B test you can be more confident about which variants to choose, and your results will progressively improve. That is why it is important to run A/B tests in recurring cycles until you get a successful outcome. No matter how many times an A/B test fails, its eventual success will almost always outweigh the cost of conducting it.

ABOUT THE AUTHOR
Rohit Kumar Singh
Growth Manager