A/B test your email blasts

A/B testing sends two versions of your email blast to measure which version receives more engagement from your recipients.


📌 Note: A/B testing availability differs by plan. For more information on plan types, please see the add-ons page in your nation.


Create a variant

Begin by creating an email blast. Add recipients, select a theme, and navigate back to the Settings or Review and Send step.

The Review and Send tab is where you will review the content of your email blast and set up your A/B test; clicking the Add A/B test variants link will bring you back to your Settings step. You can test one of two variables:

  • Email sender (this is who the email is “from”)

  • Subject line

Note that you will test Sender A and Sender B from the same Broadcaster. Both variants will be sent from the same email address but listed with different sender names.

1. Choose a variable to test. Click + Add variant above the appropriate text box. You can only test one variable at a time. 

2. A second text box will appear. Enter content for the two test variants.

You can remove the variant by clicking “- Remove variant” above the second text box. If you remove the variant, you will lose any content entered into the text boxes.

3. Enter content for the body of your email. 

4. Click Save settings to save your changes.

Review and set up your test

1. Review your email within the Review and send step; both test variants will display.

2. Click Configure A/B test to set up your test.

3. Select the percentage of recipients you want to use for the test. Sliding the handle right or left will adjust the size of your testing groups; the remainder will receive the winning variant. Recipients are randomly divided into these three groups (test group A, test group B, and the remainder), and you cannot select which group a recipient will fall into.
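For a concrete picture of the split, here is a minimal Python sketch. The function name and numbers are illustrative only, not NationBuilder's actual implementation:

```python
import random

def split_recipients(recipients, test_percentage):
    """Randomly partition recipients: a test_percentage slice is divided
    evenly between variants A and B; everyone else waits for the winner."""
    pool = list(recipients)
    random.shuffle(pool)  # assignment is random; you cannot pick groups
    test_size = int(len(pool) * test_percentage / 100)
    half = test_size // 2
    return pool[:half], pool[half:test_size], pool[test_size:]

# Example: 13,000 recipients with a 72% test segment (see the sizing
# example below) yields two test groups of 4,680 and a remainder of 3,640.
group_a, group_b, remainder = split_recipients(range(13_000), 72)
print(len(group_a), len(group_b), len(remainder))  # 4680 4680 3640
```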

4. You can have your blast send automatically when the test completes by checking the appropriate box. Please note that you can only auto-send if the test can calculate meaningful results. Therefore, you must have a large enough sending population and testing groups of a certain size, and you must let the test run for the full recommended time.

The ideal total population is upwards of 21,000 recipients, but you can test with fewer; you will just need a larger test segment and more time. For example, with 13,000 emails and a 72% test segment, it will take roughly six hours to automatically select a winner. Alternatively, you can opt to pick and send a winner manually at a time of your choosing.

5. Select Open rate or Click rate from the dropdown menu. This will determine how the winner of your test is selected.
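Conceptually, the winner is simply the variant with the higher rate on the metric you choose here. A minimal sketch, assuming plain per-variant counters (the field names are illustrative):

```python
def pick_winner(stats_a, stats_b, metric="open_rate"):
    """Return the variant with the higher selected rate. Each stats
    dict is assumed to hold raw counts for one variant."""
    def rate(stats):
        count = stats["opens"] if metric == "open_rate" else stats["clicks"]
        return count / stats["sent"]
    return "A" if rate(stats_a) >= rate(stats_b) else "B"

winner = pick_winner(
    {"sent": 4680, "opens": 1200, "clicks": 310},
    {"sent": 4680, "opens": 1125, "clicks": 355},
    metric="click_rate",
)
print(winner)  # B: 355/4680 beats 310/4680
```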

You will see a suggestion for how long your test should run. This is the amount of time needed to conclusively declare a winner based on data collected.

Factors that affect your test length include:

  • The size of your testing groups. Smaller groups will need a longer test. 

  • Whether you selected open or click rate. Click rate will require a longer test.

6. You may choose to extend the length of your A/B test. Enter a number into the provided text box.

This will add the specified number of hours to the minimum test length. Longer tests are more accurate, so you might consider this option if you want greater accuracy without increasing the size of your testing groups.

The total length of your test will display in a notification below the settings. 

Start your test 

Begin your A/B test by clicking Send A/B test now. You may also schedule your A/B test for a future date. 

1. Click Schedule A/B test.

2. Enter the date and time for the start of your test. 

3. Click Schedule. Edit the date of your scheduled test by clicking on the grey date/time button.

Adjust your date and time and click Schedule. Unschedule the test by clicking Unschedule.

If you selected automatic sending, the blast will send after the scheduled test has completed.  

Your test in progress

Test results will display on the Preview screen of your email blast. 

You will be able to track: 

  • When it was sent

  • Email stats

  • The time remaining in the test if you selected automatic sending

Send your blast

There are three ways to send your winning blast.

1. You may opt for automatic sending. Select the appropriate checkbox while setting up your A/B test. When the test completes, the winning variant will be sent to the remaining recipients. If the test is inconclusive, Variant A will send (see the sketch after this list).

2. Schedule your blast for a future time and date. Once your A/B test is in progress, you can manually judge which variant has the most engagement. Select the chosen variant from the dropdown menu and click Schedule.

Select the date and time for your blast to send. You may edit the date at any time by clicking on the blue link with the date and adjusting the information. 

3. Manually send the blast. While the test is in progress, select the variant you want to send from the dropdown menu and click Send to remaining recipients now.
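The automatic-sending fallback described in option 1 comes down to a small decision rule. A hypothetical sketch (send_email is a stand-in for the real delivery step):

```python
def send_email(blast, recipient):
    # Stand-in for the actual delivery call.
    print(f"sending {blast!r} to {recipient}")

def send_after_test(winner, conclusive, variant_a, variant_b, remainder):
    """Send the winning variant to the remaining recipients; an
    inconclusive test falls back to Variant A."""
    if not conclusive:
        winner = "A"
    blast = variant_a if winner == "A" else variant_b
    for recipient in remainder:
        send_email(blast, recipient)

# An inconclusive test sends Variant A even if B was leading:
send_after_test("B", conclusive=False,
                variant_a="Subject line A", variant_b="Subject line B",
                remainder=["ada@example.com"])
```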

Final results

After the test is complete and the winning variant sent, the results from the test and final send will remain accessible on the Dashboard of the email. You will also be able to see the stats of the email at [Broadcaster] > Email > Blasts.

Understanding the open rate

Like the opened rate displayed elsewhere in the product, the open rate displayed in the stat box of an A/B test will reflect the number of recipients who opened the mailing out of the total number of recipients for whom open activity could be tracked. In other words, any recipients whose data cannot be verified due to email client privacy protection will be entirely excluded from the calculation.

The open rates displayed next to the A or B variant of an email work a little differently. Here, the rate is calculated out of the total number of recipients who were sent the A or B version of the test, whether or not those recipients had privacy protection enabled. If a significant number of your recipients block open tracking, these percentages may appear slightly lower, because only trackable opens are counted against everyone who was sent the mailing.
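In other words, the two rates differ only in their denominators. A small illustration with made-up counts:

```python
def overall_open_rate(tracked_opens, trackable_recipients):
    """Stat-box rate: recipients whose opens cannot be tracked
    (e.g. privacy protection) are excluded from the denominator."""
    return tracked_opens / trackable_recipients

def variant_open_rate(tracked_opens, total_sent):
    """Per-variant rate: the denominator is everyone who was sent that
    variant, trackable or not, so it can read slightly lower."""
    return tracked_opens / total_sent

# Say 4,680 recipients got Variant A, 780 of them block open tracking,
# and 1,200 trackable opens were recorded:
print(f"{overall_open_rate(1200, 4680 - 780):.1%}")  # 30.8%
print(f"{variant_open_rate(1200, 4680):.1%}")        # 25.6%
```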

Additionally, once a winner has been selected, open activities will no longer be tracked towards the initial variants of the test. This means that if a recipient of Variant A opened the mailing after Variant B had been selected as a winner, their open would be counted among the “remaining recipients”.

A note on meaningful results

You will notice that automatic sending is built around "meaningful" results. Here is how meaningful results are calculated:

There is a standard period of time that a typical recipient needs to make a decision about an email: to open it, to click, or to delete it. This standard was calculated from the aggregate historical data of all email blasts sent in NationBuilder. When you set up an A/B test, the specifics of your test are measured against this standard and a recommended running time is produced. Running your test for the full recommended time will lead to meaningful results with 95% confidence in accuracy.
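NationBuilder's exact formula is not published in this article, but a standard way to decide whether a difference between two rates is significant at 95% confidence is a two-proportion z-test. The sketch below illustrates the general idea, including why larger groups reach a meaningful result sooner:

```python
import math

def is_meaningful(opens_a, sent_a, opens_b, sent_b, z_critical=1.96):
    """Two-proportion z-test: True when the observed difference in
    rates is significant at 95% confidence (z_critical = 1.96).
    An illustration of the idea, not NationBuilder's formula."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return abs(p_a - p_b) / se >= z_critical

# The same 26% vs. 30% open rates, at two different group sizes:
print(is_meaningful(130, 500, 150, 500))      # False: too little data
print(is_meaningful(1300, 5000, 1500, 5000))  # True: same gap, more data
```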

Keep in mind that you have some control over the recommended running time. The factors listed below will affect the runtime: 

1. Open vs. click rate. More people will open your email than click a link within it. Therefore, it will take longer to collect accurate data on the people who clicked in your email.

2. Size of testing groups. Increasing the size of your testing groups will always decrease the recommended runtime because it means more data is collected before a final decision is made.

3. Size of recipient population. The larger your list, the more data you can collect in a shorter period of time. Note that A/B testing is often not meaningful for lists smaller than 21,000 recipients because of the data needed to accurately declare results.

When sending an A/B test, it is important not to make a decision on the data too quickly. Early statistics can be misleading, and it is hard to be certain that your test has led to a meaningful conclusion. The provided runtime is a recommendation meant to ensure that your results are as meaningful as possible.
