7 Split Tests You Can Implement Today

How many times have you heard “you should test to see what works best for you” or something to that effect? Probably too many to count, right?

The reason you hear it so often is that when it comes to email marketing (as well as any other marketing channel), testing separates the pros from the Joes.

It’s one thing to think we know what works best. But when we apply a little scientific method to our marketing, we not only find out for sure, we also learn more about our visitors and subscribers, which helps us predict more accurately what will work in the future.

The challenge for a lot of people (including us at AWeber) is deciding what to test. There are simply so many small changes we can make to our forms, messages and other parts of our campaigns that it’s easy to get stuck on deciding where to start.

So, to help you get started with split testing (or to get back into it if you’ve gotten complacent and stopped testing regularly), here are seven split tests you can run on your web forms and broadcasts to get and retain more subscribers, lower spam complaints, and increase response.

7 Split Tests

Give these a try and see how they affect your subscribers’ response (not to mention your perception of your subscribers).

See our Knowledge Base for instructions on how to create a web form split test and a broadcast split test.

You might have some interesting findings (if so, please share them!)…

…but even if you don’t — even if you run all 7 of these split tests and none of them bring immediate, significant changes to your campaign — you’ll still be more familiar with split testing than you were yesterday, and better prepared to test and improve your campaigns in the future.

Want More Email Marketing Tips and Ideas Like These?

Join the AWeber Blog newsletter and you’ll get them sent straight to your inbox 1-2 times per week.

You’ll also get a free copy of our Email Deliverability Guide detailing 12 Do’s and Don’ts to get more email delivered.

Naturally, as a permission-based email marketing company, we respect your privacy.

What Do YOU Split Test?

Are there other split tests that you run regularly?

Share them with your fellow email marketers below!


By: Justin Premick, the former Director of Educational Products at AWeber.


31 Comments

  1. My split tests focus on subject lines to see which one gets opened more frequently.

    Surprisingly, the subject line I thought would be more popular ended up in a tie with the more-bland subject.

    Since this happened every time, I stopped testing, but what you suggest in this post is quite interesting. It’s difficult to fit more work into a day, but perhaps I’ll try one of these options in the near future.

    6/10/2008 9:00 am
  2. Very good advice. I recently ran a split test of only one element – subject line. The click-through rate on the winning headline was exactly 65.9% higher than the click-through rate on the losing headline. It’s amazing what one can learn from split testing.

    Thanks for a great service!

    6/10/2008 9:45 am
  3. I like the button v. text link idea in HTML. Hadn’t occurred to me. Good point Justin.

    One split test I did was to compare click-through rates for my list depending on which picture I displayed as the main image in my newsletter. My list is about 2/3 male. Over several weeks I split tested identical HTML newsletters – one with a pic of an attractive man as the main image and one with a pic of an attractive woman.

    The CTR for the newsletter with the pic of the woman was 20% higher than for the one with the pic of the guy.

    Lesson here: Women can make men do anything.

    6/11/2008 7:48 am
  4. Statistically significant results?

    Is a 40% conversion rate better than a 10% conversion rate? There is NO WAY TO KNOW until you know the SAMPLE SIZE, or number of visitors involved.

    To make use of split testing you MUST know if your results are STATISTICALLY SIGNIFICANT enough to make a decision.

    In other words, how do you know when you can stop testing and be CONFIDENT that you’ve got some results worth taking action on?

    Sorry this is a bit nerdy, but it’s VERY necessary to make good decisions when split testing. (I was a statistics minor)

    Let’s keep this SIMPLE though – it doesn’t have to be tough.

    Here’s a scenario as an example:

    If option A gets a 23% conversion after 100 visitors and option B gets a SEEMINGLY better 30% conversion after 100 visitors is there really a significant difference between the two options that warrants dropping option A and committing to Option B?

    Probably not. You need more testing.
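
    For the curious, here’s a minimal Python sketch of the standard two-proportion z-test applied to that exact scenario (my own back-of-the-envelope check, assuming each visitor is an independent yes/no trial):

        from math import erf, sqrt

        def two_proportion_z(conv_a, n_a, conv_b, n_b):
            # Pooled rate: the null hypothesis says both options share one true rate
            p_pool = (conv_a + conv_b) / (n_a + n_b)
            # Standard error of the difference between the two observed rates
            se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (conv_b / n_b - conv_a / n_a) / se
            # Two-tailed p-value via the normal CDF
            p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
            return z, p_value

        # The scenario above: 23 conversions out of 100 vs. 30 out of 100
        z, p = two_proportion_z(23, 100, 30, 100)
        print("z = %.2f, p = %.2f" % (z, p))  # z = 1.12, p = 0.26

    A p-value of 0.26 is nowhere near the usual 0.05 cutoff, so with those numbers you’d keep testing before dropping option A.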

    You can quickly find out if your testing is producing significant differences using a free SIMPLE tool like this one:

    http://www.splittestaccelerator.com/freetool.php

    Make CONFIDENT decisions by understanding and using a simple tool that identifies the STATISTICAL SIGNIFICANCE of your results.

    6/11/2008 3:14 pm
  5. Jim,

    Statistical significance does matter, but as you say, it’s best to keep things simple.

    Start worrying too much about p-values, confidence intervals, etc. and you run the risk of just abandoning testing entirely and keeping your controls forever, because that takes less effort than testing.

    Personally, I like the way Paul Myers broke down testing in a newsletter he sent a couple months ago:

    Count.

    Change or add or remove something.

    Count again.

    More or less?

    More? Keep it.

    Less? Go back to the original and try something else.

    For people who haven’t done a lot of testing yet, that’s a great way to look at it. Once you start getting into more fine-tuned testing, I totally agree – statistical significance is critical.
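
    In code, Paul’s whole approach fits in a few lines (just my sketch of the idea, with made-up counts; nothing from his newsletter):

        def keep_or_revert(count_before, count_after):
            # Count, change one thing, count again; keep the change only if it helped
            if count_after > count_before:
                return "keep it"
            return "go back to the original and try something else"

        # e.g. clicks on the last issue vs. clicks after changing one element
        print(keep_or_revert(count_before=120, count_after=151))  # -> keep it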

    Thanks for the reminder about the value of stats — and that tool looks quite useful… thanks for sharing!

    6/11/2008 4:02 pm | Follow me on Twitter
  6. Awesome post Justin!

    Really great ideas.

    I especially like the "not everyone wants to *submit* to receive your mesages." Lol!

    Makes me imagine testing the use of *Surrender* on the button to see how that would convert.

    But seriously, very worthwhile and profitable information.

    Way to go man and have a great one.

    6/11/2008 7:56 pm
  7. Great. It looks like I misspelled the tongue twister, "messages"

    and left out subjects throughout in the name of being more conversational.

    In fact, the only sentence containing proper subject and predicate usage is the one beginning with the word "I," which supposedly is not good to use either, because it’s too much about the one writing and not about the reader.

    Oh well, there is time to improve, and here’s the lesson to be learned from my mistake:

    Proofread your comments before clicking *surrender*… I mean, submit.

    Respectfully submitted at your service

    6/11/2008 8:00 pm
  8. For us also, the most compelling split tests have involved the subject line.

    We’re in the dating advice niche, so your mileage may vary, but we split tested the combo of a descriptive subject, a direct question (more inductive) and an intriguing quote lifted directly from the text.

    The time-staggered broadcast that you mentioned brought us very interesting results, including some counter-intuitive trends. We’ve also split-tested which days to send on.

    Others we’ve done:

    1) Text vs. Lite HTML

    2) "In This Issue" teaser vs. none

    3) Rearranging links/subheadings within an e-mail (the ones at the top and the very bottom of the unique content get clicked on the most)

    4) ALL CAP SUBJECTS vs. Natural sentence structure… vs. Capitalized Standard Title Format

    5) Length of copy

    6) Single offer vs. adding a secondary P.S. and/or front-end teaser offer

    7) Using blatant spam filter shields (e.g. "f.ree") vs. changing the syntax altogether

    8) Frequency of e-mails

    6/12/2008 2:01 am
  9. You can test the addition of a personal photo on your squeeze page vs just a name and signature.

    Photos generally increase conversion rates but…. not necessarily!

    Always test =)

    6/12/2008 6:54 am
  10. Hi Justin,

    Some great split-test ideas in this post, though there’s a bit of an issue with the last suggestion:

    "For your next broadcast, add a permission reminder (‘you’re receiving this email because you signed up at ____’ etc.) in the message. Compare your clickthrough rates, and your spam complaint rates."

    Having that message in an email is a spam trigger. I had a client who put that in their messages, and they took it out once they started working with us.

    Here’s the SpamAssassin score for that:

    0.4 BODY: Gives a lame excuse about why spam was sent

    LOL

    If your messages are already pushing your SpamAssassin score limits, test this one at your own risk, or you could end up in the Junk folder… ;)

    6/12/2008 2:50 pm
  11. We also test the sender (both the alias and the actual email address).

    After all, the most challenging part of email marketing is getting the message opened, and there are only two things for a recipient to consider when deciding to open or delete: the sender and the subject line.

    6/12/2008 4:22 pm
  12. Scot,

    I’m a fan of the "In This Issue" teaser – just haven’t run enough tests on it yet to decide if I’m going to use it regularly. How’d it work out for you?

    John,

    Yeah, SpamAssassin does that sometimes – but I don’t let it bug me. Like you say, unless your score is already high, that little bit isn’t going to be an issue.

    Paul,

    Great call – this is one of the concepts we teach in the How to Get Started webinar. Choose an effective "from" line. Definitely worth testing going forward, too.

    6/12/2008 5:15 pm | Follow me on Twitter
  13. Great info for a newbie like me – thanks all.

    May I ask, how do you split test a subject line? Do you send half of the list one subject and the remainder another? If so, how can that be an accurate test when the campaign is being delivered to 2 isolated groups?

    Still learning…

    6/15/2008 2:45 am
  14. Adam,

    There are a couple ways you can do it that may be appropriate for different situations.

    One common way is to create 3 groups – divided into, say, 20%/20%/60% of your subscribers. Send out your split test to the 2 smaller groups, then once you’ve seen which version of your email pulls better, send that version to the largest group.

    This is useful when you’re trying to maximize response to one particular email (such as a sales promotion).

    Another way to split test is to do as you say – simply split your subscribers into even groups and send each group a different version.

    This has the advantage of giving you more data points in your test, but does not let you "do something with that data" as quickly as the first split test method does. This second method is more useful for learning about your subscribers’ general tendencies and preferences. (For example, this is what we did in our "button vs. text link" split tests.)
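
    If you ever want to do the grouping yourself from a list export, here’s a rough Python sketch of that first method (an illustration only, not how our system does it behind the scenes; the addresses are made up):

        import random

        def split_for_test(subscribers, fractions=(0.2, 0.2, 0.6)):
            shuffled = list(subscribers)
            random.shuffle(shuffled)  # randomize so the groups are comparable
            groups, start = [], 0
            for frac in fractions[:-1]:
                end = start + round(frac * len(shuffled))
                groups.append(shuffled[start:end])
                start = end
            groups.append(shuffled[start:])  # everyone left over goes in the last group
            return groups

        subscribers = ["ann@example.com", "ben@example.com", "cal@example.com",
                       "dee@example.com", "eve@example.com"]
        test_a, test_b, holdout = split_for_test(subscribers)
        # Send version A to test_a, version B to test_b, then once you know
        # which pulls better, send the winning version to holdout.

    Passing fractions=(0.5, 0.5) gives you the even split for the second method.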

    I’m not quite sure what you mean by your question about isolated groups – can you clarify that?

    6/16/2008 10:25 am | Follow me on Twitter
  15. Thanks for the clarification Justin, that makes sense.

    Re my isolated groups: I know that a particular bunch of my subscribers are avid readers/clickers… so if the split test went to that group (of highly active openers/clickers) and the other group is less active, are the split test results a real indication?

    In other words, some people will almost always open my mail, so if they end up in list A or list B then my results will be skewed. Does that make any sense :-?

    6/17/2008 7:44 am
  16. Great stuff! I especially love Aaron’s post about the picture of the woman. That will definitely apply to our list. We pretty much always use text emails, but we’ll need to test some HTML.

    Thanks!

    6/17/2008 10:19 am
  17. @Jim,

    Thanks.

    This week we feature marketing babe Robin Meade of Headline News.

    (This pic might be a bit too extreme…I don’t know.)

    http://www.FullTiltBlogging.com/news

    6/17/2008 5:56 pm
  18. I knew I had to do split tests but really didn’t know what to test first. Good article for beginners.

    6/18/2008 3:50 am
  19. This is a great checklist and action plan. My biggest problem is not deciding what to split-test but rather getting organized to actually conduct the test! I’ve summarized the 7 points and posted the list on my wall to remind me to actually start split testing! Thanks for the post!

    6/18/2008 11:36 pm
  20. Thanks Justin. Split testing and constantly upgrading emails, articles, and white papers will increase overall results many times over. The first few times I tested things I saw an increase of 5 to 10 times my original response rates.

    One thing that has had a big impact, is testing the subject line of an email, or the title of an article or white paper.

    The subject line determines what percentage of the time the email gets opened. If it doesn’t get opened, nothing else matters, no matter how well the rest is written or how valuable your offer is.

    You’ve got to get ATTENTION in the subject line to get people to stop and read the rest.

    The subject lines that get the most opens are the ones that grab the most attention, and they tend to do the following:
    1) Mention a PAIN that your reader is experiencing.
    2) State the VALUE they will get from reading further.
    3) Position the message as something that will HELP them remove a problem and move toward valuable results.

    What not to do:
    1) Don’t just give a generic title.
    2) Don’t sell something.
    3) Don’t just name something that is in the article.

    7/29/2008 9:17 am
  21. Ash

    Whoa! I totally forgot about this. Where have I been? I just recently (2 days ago) set up my own opt-in form (I usually do it for my clients) for my free e-book. I totally forgot about trying different options. I guess I would have gotten around to it some day, but I am glad this e-mail came to me today. I have a few hours to kill. I’m going for it.

    Thanks for the great articles.

    7/24/2009 12:00 pm
  22. PA

    What timeline do you suggest for split tests before measuring results?

    I would’ve thought that 1 month would be OK, but then again I’ve seen that certain products I promote can have a 20% difference in monthly sales, so it would be hard to measure minute changes if the same thing applies to email opt-in rates.

    11/23/2010 4:00 pm
  23. My split tests never seem to work out that well. I will have to try some of your suggestions…

    Thanks a lot for your article, I think it will help me out in my own testing.

    12/12/2010 1:02 am
  24. Always great to learn from the gurus!!
    Thank you

    1/8/2011 2:55 pm
  25. Thank you for these practical and easy-to-implement suggestions! I’ll get started!

    1/22/2011 2:50 pm
  26. Have you looked at a morning drop vs an evening drop? What did you find?

    3/29/2011 7:30 am
