Economic Impact of Email Testing


“Just test it” is a phrase every email marketer has heard one too many times. Every conference, every vendor, and every thought leader has pushed email testing for years. Yet past studies show that 20%-40% of marketers do not perform any email testing regularly. Regularly means testing at least one element on at least 60% of all your email sends. These regular tests can be one-offs, such as subject lines, or longer-running efforts, such as messaging or creative elements.

If you don’t have a plan to test regularly, your organization might be missing out on a few critical areas for your program’s optimization. Missed opportunities inhibit engagement, increase the likelihood of attrition, and, most importantly, hinder revenue growth.

iPost believes that every email test performed has an economic impact. In this white paper, we will explore the organizational mentality needed to succeed, examine the different types of tests, show how to create a groundswell inside your organization around email testing, and quantify the economic impact you could be missing out on right now by not performing even the most basic of tests.


Email Testing 2023 – A Nimbleness Mentality

If your organization subscribes to agile methodologies, then a nimbleness mentality for testing should be easy to adopt. The key to a nimbleness testing mindset is organizing your tests into one-week sprints, quickly making decisions based on statistically significant results, and looking for ways to revise the plan to beat the new control fast. There are four elements in a nimbleness mentality for email testing:


Prioritization in a nimbleness model means that you must weigh each test's outcome against its potential impact on the program as you rapidly develop your plan. For example, some tests, such as one-off time-of-day tests, are easy to pull off but offer little to no long-term programmatic impact. Prioritization will separate the nice-to-have tests from the need-to-haves.



Change is hard. Organizations like feeling comfortable and don't like it when things change rapidly. In email testing, we need to be prepared to change and to ensure that the test plan or calendar changes based on prioritization or results. For example, if we were running a COI (Confirmed Opt-In) subject line test but found that CTOR (Click-To-Open Rate) was still underperforming, we might change the next test in our COI plan to focus on copy elements, such as word or character counts or the ratio of copy to images.



Decision-making is about empowerment. In a nimbleness model, a small group of people must have the ability to make decisions quickly based on test data. Long gone are the days when a monthly test review was presented to the broader marketing group and many people with little proximity to the email program weighed in on next steps. This small group should consist of the email program owner, an analyst, an agency, an ESP strategist (if available), and one other representative from the marketing department.



Vaulting is the act of creating an on-demand library of all your test results. The key to vaulting is to check proposed tests against the historical outcomes of similar tests. The statement "we have already done that" should always be backed up with that test's categorical results. For example, if we wanted to test an urgent vs. humanized tone in our holiday kickoff campaigns, having a vault of how urgent or humanized tones in subject lines performed in the past might change our decision to do so. We can then decide to either repeat or modify this test in a multitude of ways.
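As a minimal sketch of the idea (the class and field names here are illustrative, not an iPost specification), a vault can start as a simple categorized log of outcomes that is queried before any new test is approved:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    category: str   # e.g. "subject_line_tone"
    variant: str    # e.g. "urgent" or "humanized"
    metric: str     # e.g. "open_rate"
    value: float
    campaign: str

class TestVault:
    """On-demand library of categorical test results."""
    def __init__(self):
        self.results = []

    def record(self, result):
        self.results.append(result)

    def history(self, category):
        # "We have already done that" should be backed by these results.
        return [r for r in self.results if r.category == category]

vault = TestVault()
vault.record(TestResult("subject_line_tone", "urgent", "open_rate", 0.21, "holiday_2022"))
vault.record(TestResult("subject_line_tone", "humanized", "open_rate", 0.19, "holiday_2022"))
prior = vault.history("subject_line_tone")  # prior results inform the repeat-or-modify call
```

Before approving a repeat test, the team pulls the category's history and decides whether to rerun, modify, or skip.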

A nimbleness model does not happen overnight, even in the most progressive organizations. You have to give careful consideration to each element and ensure that everyone involved is aligned. In some companies, it might be beneficial to bring in an outside firm that can offer ways to prioritize and implement each element.

Email Testing Classifications vs. Email Testing Types

There are four main classifications of tests: A/B, multivariate, hybrid multivariate, and Design of Experiments (aka DOE, DOX, or experimental design). While a separate white paper could be written about each of these classifications, highlighting the pros, cons, and payoffs for your email program, we at iPost believe that test types are the starting point and should be decided before picking the testing classification. To that end, test types break out as follows:


Pre-Send testing is often overlooked in many organizations for many reasons, but we believe it can be the single most significant factor in down-funnel engagement. Pre-send testing involves testing the experiences that surround the email program.

In many organizations, the website and UX are owned by different groups. Still, if email marketing plays a massive part in revenue, it's time for us to take a significant role. Some examples of pre-send testing are as follows:


  • Location of email sign-up
  • Data collected during and post-sign-up
  • Preference center branding and experience
  • Acquisition source spending and tracking
  • Opt-Down and unsubscribe experience
  • Frequency caps and holdout group testing

As email marketers, we need to think about the entire program, from beginning to end, and it starts
way before you send the first email.


Pre-Open testing involves testing elements and sub-elements that will optimize the open/read of the email itself. For example, a subject line test is a pre-open test because its sub-elements affect whether or not the subscriber decides to engage with that particular email. The subject line is the main element, and the following could be sub-elements used to optimize the pre-open:


  • Urgent vs. Curious
  • Emoji vs. Non-Emoji
  • Short vs. Long
  • $ vs. no $
  • Personalization vs. No Personalization.

There can be well over a dozen combinations of sub-elements in a subject line test alone. Some other main elements of a pre-open test include day of the week, pre-header, from name, domain segmentation, and the famous time of day.


Pre-Click testing is testing anything inside the email creative that will drive the subscriber to action. Pre-click testing is fun to talk about but a little more complicated to pull off. A solid hypothesis with concrete, aligned KPIs to track is essential to test things like:


  • CTAs
  • Personalization (beginner to advanced)
  • Imagery
  • Dynamic content (images to messaging)
  • Headers
  • Content

Pre-click testing can take on many forms, and depending on the type of test, it can have tens of thousands of combinations.


Post-Click testing involves the subscriber experience once they leave the inbox. As an email marketing program owner, you should have oversight of and influence on the subscriber experience, and if you don't, demand it.

Some examples of post-click testing include things like landing pages, in-app experiences, and site navigation.

Essentially, test classifications are how we execute a test, while test types are what we test. Context matters here: whatever names you use for tests or their execution, the two are entirely different things.

Every Email Test Has Economic Impact

If you don't believe that every test in email marketing has an economic impact, you are in the wrong profession. What is frustrating, however, is that whenever the economic effect of email testing is discussed or presented, examples from retail or eCommerce tend to dominate the story. This can frustrate marketers in the publishing, travel, B2B, affiliate, restaurant/franchise, and entertainment industries, because it's often hard to differentiate and stand out in retail's shadow.

To show you, the reader, that iPost thinks a little differently, we will focus on three industries to discuss email testing's financial impact.


Publishing Economic Impact

The publishing industry competes heavily for inbox attention and revenue in the new digital age, so testing across all four test types is paramount to ongoing growth in the program. To illustrate how testing can impact a publishing organization, we are going to assume the following:

Publishing Company Assumption


  • Your list has 2,000,000 subscribers.
  • Your average unique open rate is 19%.
  • Your average unique CTR is 4%.
  • Your average unique CTOR (Click-To-Open-Rate) is 14.5%.
  • The revenue per page view is $0.07.

A common pre-open test that most publishing organizations like to run, but don't run as often as they should, is subject-line testing. In the example below, you will see that consistent subject-line experimentation can pay off over time.

Subject Line A: a 19% open rate with 4% of openers clicking, at $0.07 per page view and an average of 2.7 pages per click, generates gross revenue of about $2,873 (2,000,000 × 19% = 380,000 opens; 380,000 × 4% = 15,200 clicks; 15,200 × 2.7 pages × $0.07 ≈ $2,873).

If we add in another SL to test against:

Subject Line B: a 20% open rate and a 4.25% click rate with 2.7 pages per click yields gross revenue of $3,213, which is 11.8% greater than Subject Line A (400,000 opens × 4.25% = 17,000 clicks × 2.7 × $0.07 = $3,213).

The $341 difference might not seem like much, but multiply an 11.8% impact across the long tail, and pre-open testing looks a bit more interesting. Adding a 3rd or 4th subject line to the test is where things can take off in terms of content and downstream revenue. If you are a publisher not testing multiple subject lines on every send, you might be missing out on revenue and/or engagement from every subscriber, especially since your program is content- and copy-driven.
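The subject-line math above can be sketched as a small model (the function and variable names are mine; clicks are taken as a share of opens, matching the worked example):

```python
def gross_revenue(list_size, open_rate, click_rate, pages_per_click, rev_per_page):
    """Gross revenue for one send, with clicks taken as a share of opens."""
    opens = list_size * open_rate
    clicks = opens * click_rate
    return clicks * pages_per_click * rev_per_page

rev_a = gross_revenue(2_000_000, 0.19, 0.04, 2.7, 0.07)    # Subject Line A: ~$2,873
rev_b = gross_revenue(2_000_000, 0.20, 0.0425, 2.7, 0.07)  # Subject Line B: ~$3,213
lift = (rev_b - rev_a) / rev_a                             # ~11.8%
```

Swapping in your own program's rates shows quickly whether a subject-line lift is worth chasing on your list.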

Using the same programmatic assumptions above, a pre-click button copy impact test might play out as follows:

Button copy test

  • Button Copy A: Read More
  • Button Copy B: Show Me!
  • Button Copy C: Take Me There!
  • Button Copy D: Read On!

At iPost, we have seen a 1-4 percentage-point lift in CTR when button copy is tested over a period of time. A 1-point lift on this list results in 20,000 more clicks, and the revenue numbers look like this:

2M subscribers with a 4% CTR = 80,000 clicks × $0.07 = $5,600
2M subscribers with a 5% CTR = 100,000 clicks × $0.07 = $7,000 (a 25% increase)

The numbers above assume that every click results in a single page view before the subscriber abandons. Even with the average number of page views per clicker flat at one, the gain is significant, and revenue increases further when clickers view multiple pages.
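Note that this example takes CTR against the full list and values each click as one page view, unlike the subject-line example; a quick sketch of the math (variable names are mine):

```python
list_size = 2_000_000
rev_per_page = 0.07  # $ per page view; one page per click, per the assumption above

rev_4pct = list_size * 0.04 * rev_per_page  # 80,000 clicks -> $5,600
rev_5pct = list_size * 0.05 * rev_per_page  # 100,000 clicks -> $7,000
lift = (rev_5pct - rev_4pct) / rev_4pct     # a 25% increase
```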

Conclusion for publishers: there is no silver bullet when it comes to testing and revenue impact. You have to put in the work to get the desired outcome, but there is little doubt that effort equals reward.


Restaurant/Franchise Economic Impact

As any restaurant operator or franchisee knows, loyal customers are the key to success, and these days loyalty to any brand is difficult to gain and retain. Critical areas of testing for the restaurant/franchisee should focus on Pre-Send, Pre-Open, and Pre-Click test types.

To illustrate how testing can impact a restaurant/franchisee organization, we will assume the following:

Restaurant Assumption


  • You have 500,000 subscribers on your list
  • Your average unique open rate is 18%
  • Your average unique click-thru rate is 2.4%
  • 35% of your list is enrolled in your loyalty/rewards program.
  • Your Customer Lifetime Value for those in the loyalty program is
    $75 while those not in the loyalty program have a CLV of $40
  • At sign-up, the only data you capture is name and email.
  • You send one welcome email with a coupon and then start
    sending out regular marketing emails.

In this instance, the pre-open and pre-click test types are critical for the email program. Since the CLV of a loyalty/rewards customer is nearly double, testing an onboarding series that converts subscribers to the loyalty/rewards program should be paramount. The average consumer today belongs to many different loyalty/rewards programs, all with various structures and reward tiers, so your program needs to stand out even more to retain your CLV.

Using the assumptions above, 175,000 subscribers are part of your loyalty/rewards program. If you were to test messaging and positioning around your loyalty/rewards program inside a series of onboarding emails and lift enrollment across your 500,000-subscriber list by 15 percentage points (from 35% to 50%), you would gain 75,000 new members who contribute nearly double the CLV. Even if you retain only 50% of those new members through the first 24 months, the return on the time and resources invested in converting new loyalists is substantial.
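Under the figures above, the incremental lifetime value can be sketched as follows (a deliberately simple model; assuming retained members realize the full CLV uplift is my simplification, not a claim from the text):

```python
new_members = 75_000    # loyalty enrollment gain from the onboarding test
clv_member = 75         # CLV with the loyalty program
clv_non_member = 40     # CLV without it
retention_24mo = 0.50   # share of new members retained through 24 months

clv_uplift = clv_member - clv_non_member                     # $35 per converted member
gross_uplift = new_members * clv_uplift                      # $2,625,000 if all are retained
retained_uplift = new_members * retention_24mo * clv_uplift  # $1,312,500 at 50% retention
```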

Lastly, a pre-send test that captures additional information from the subscriber is vital. For example, you could capture dietary-restriction data such as gluten intolerance or other allergies. Once you have this information, you could isolate those data points and begin pre-open and pre-click test types to see whether using that data yields greater engagement and conversion, whether that means dining or joining your loyalty/rewards program.

The economic impact of testing for restaurants and franchises can be substantial, but only if testing is done systematically and regularly. The possibilities are endless.

Associations/Non-Profit Economic Impact

Parts of the iPost team have spent a good portion of their careers working with or inside associations/non-profit email programs. These organizations provide tremendous opportunities for programmatic optimization, and it starts with testing.

To illustrate how testing can impact an association/non-profit organization, we will assume the following:

Association/Non-Profit Assumption


  • You have 320,000 subscribers on your list.
  • Your average open rate is 23%
  • Your average click-thru rate is 4.90%
  • 60% of your list is either paying members of the association or donors to the non-profit
  • Your email program is monetized via in-email advertisements, membership dues, page-views on-site, supplemental products, or paid partnerships with other organizations. In other words, your email program is measured on the ability to influence and help convert subscribers.
  • The current attribution model is last-click, and there are no plans to migrate to a time-decay, linear, or custom model.

One of the most critical test types for an association/non-profit is pre-send. The fluidity and transparency of the sign-up experience set the tone for what is to come from the brand. Mimicking a retail sign-up experience is no longer an accepted best practice, because the association/non-profit has to humanize its message and brand promise quickly or run the risk of silent abandonment. People need to know who you are, what you stand for, and what to expect before they will give up their email address.

A typical tactical element that associations/non-profits like to follow when building out their subscriber list is Confirmed Opt-In (COI). COI is when the marketer/organization sends a confirmation email to the subscriber's email address containing a link that must be clicked to confirm consent for the organization to continue sending email. COI has also been called Double Opt-In in some email circles. In this instance, the COI email would be considered a pre-send test type, because if the subscriber takes no action, no further emails can be sent. In our experience, typical COI rates range from 45%-85%, and no matter where your organization falls on that conversion scale, there is always room for improvement.

Imagine creating a pre-send test of the COI email that results in a 10% increase in conversions. What is an email address worth to your organization, and how does getting more people to confirm convert into revenue? A pre-send COI test can involve elements such as cadence, frequency, SL length, word count, and CTA buttons, all of which can have a substantial long-term impact on the program's revenue.
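A rough way to size that COI lift (the confirmation volume and value-per-address figures below are hypothetical placeholders, not numbers from the text):

```python
confirmations_sent = 10_000   # COI emails sent in a period (hypothetical)
baseline_coi_rate = 0.60      # within the 45%-85% range cited above
relative_lift = 0.10          # the 10% increase from the pre-send test
value_per_address = 5.00      # hypothetical value of one confirmed address

extra_confirms = confirmations_sent * baseline_coi_rate * relative_lift  # ~600 more subscribers
extra_revenue = extra_confirms * value_per_address                       # ~$3,000
```

Replacing the placeholders with your own volumes and address valuation turns "what is an email address worth?" into a concrete number.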

Post-click testing is also crucial for associations/non-profits, especially when the essential CTAs are membership or donation. Having award-winning creative and actionable SLs with clear email CTAs, but no influence over the post-click experience, is the equivalent of running out of gas on the last turn of a car race. You can get the subscriber to click, but you can't get them to convert unless you can test the post-click experience. Using the assumptions above, imagine the 15,680 people who click on your emails only to land on an unoptimized page with little to no information, where the conversion rate drops. What would it mean in terms of revenue to improve the conversion rate by 2%? Context and experience matter in this uber-competitive world, so you, as the email marketer, need to ensure you have influence over or ownership of the post-click experience, especially if success in your job depends on it.
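Sizing that post-click opportunity (the clicks follow from the list assumptions above; reading the 2% as a 2-point conversion lift and the value-per-conversion figure are my assumptions):

```python
list_size = 320_000
ctr = 0.049
conversion_lift = 0.02        # the 2-point landing-page improvement discussed above
value_per_conversion = 50.00  # hypothetical average membership/donation value

clicks = list_size * ctr                      # 15,680 clicks per send
extra_conversions = clicks * conversion_lift  # ~314 additional conversions per send
extra_revenue = extra_conversions * value_per_conversion
```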


Email Testing Takeaways & Conclusion

Consultants, influencers, and thought leaders like to use phrases like "just test it" because they have been programmed to say them. They are right, but testing is not easy to start, and it can often lead to frustration at many levels of even the most agile organizations. However, once a rhythm of results is established and acted on, testing becomes the fun part of email marketing, because it provides economic impact in both the short and long term.

Prioritizing your test types can be a difference-maker for your email program's economic impact. You need to know your program's and organization's strengths and weaknesses to determine which types will have the most significant impact. In most cases, the easiest tests are not the ones that should be given high priority.

Every test you perform has an economic impact, but a nimbleness mentality, with structured and prioritized test types and a vault of the results, will serve your company well in the years to come.

