Friday, March 21, 2014

is your a/b testing flawed?

we all do it, and we all get varying degrees of success with it... that's right, i'm talking about a/b testing. but when we a/b test, are we getting false positives and negatives, or constructing our tests for the desired outcome?
we all think about what to test - subject line, CTAs, colors, creative, and so forth... but here are some things to consider when you test that could have an impact on the success or failure of your test...

  1. type of message: is it a one-time message, a welcome series, an offer, your annual privacy notice... things like that. the type of message can impact the variables you test. one type of message could be a good candidate for testing subject lines, another for preheaders, and another for the template. think about this when you're testing. for example, an annual privacy statement might be a good place to test preheaders, where (a) is "important information about our privacy statement" and (b) is "we can keep a secret, here's proof" and the measurement would be open rate.
  2. success measurement: have you defined the success of your campaign by open rate, by click-through, by conversion? these kpis can dictate what you're testing. obviously open rates are driven by subject line, so an open rate test screams a/b of the subject line.
  3. number of variables you test: don't test multiple variables within the a and the b - it's a/b testing, not a2/b2 testing, after all. the pitfall is that success for this type of testing is usually defined by click-through, and you can get a false positive on this very quickly (a quick way to sanity-check a "winner" is a significance test; see the sketch after this list). the intent is good, but it's not a good practice. we have "test, test, test" drilled into us, but that statement is meant to show the importance of testing, not the number of variables.
  4. automated vs. manual testing: marketing automation is great, it does so much for us, but it's a program, not intelligence... i use ExactTarget (and I bleed orange), but the click-through a/b testing in ET is flawed: it counts unsubs in the automation. so learn the capabilities of your esp before testing, find out these things, and then run the test - but don't send the winning email automatically at the end of the test. review the metrics and determine through discovery which is best (one way to do that review is sketched after this list).
  5. length of time: if you're testing something like a flash sale, then sure, you've got about 24 hours to test (maybe longer, more on that later), but if it's a welcome series you may need to plan for weeks of testing to get the data you need to make a sound marketing decision.
  6. time of year: so you're testing out the welcome series, and you've determined that you want to give it a month. you have also decided to replace the full series instead of running old and new side by side, which is okay... but take this into account: is it fair to test that new creative against what ran the previous month, or do you need to dig up the previous year's analytics for the same testing time period? the results might surprise you. your business could be seasonal, and while this new creative wins for month-over-month testing, did it beat last year's numbers?
  7. your list makeup: do you have a lot of mobile readers? does gmail own your list? a lot of military or government addresses? consumer or business addresses? knowing the makeup of your lists can direct your testing variables... want to test a fancy responsive email when the majority of the list is gmail android users? flag on the play... you're wasting time and resources hiding content and being responsive when the email clients don't support it - a/b testing of this sort isn't worth a bad user experience.
  8. duration of the test: remember length of time? gmail tabs have made a virtual mall in our pockets and on our desktops. readers may not look at the promotions tab every day. if you're a/b testing a flash sale, do it over a three-day weekend or extend the sale for three days to allow those readers time to "get the message".
  9. stacking the deck: remember, you are testing something because you want to see how it will perform; you should never construct a test so that it yields the desired outcome.
  10. utilize outside sources: using google analytics will really open your eyes. sure, the esps tell us things like opens, clicks, unsubs, deliverability... but what they don't give you is a good measurement of conversion. using google analytics will help to fill in the gaps in your data and help determine the overall success of the test. google analytics is great because it captures the click data - where people are coming from, what campaign drove them there, and even more data if you use all of the tags - and it does so regardless of whether the person downloaded the images or not. if the goal of the campaign is to get someone to the website to purchase something - say you have started a cart abandonment email - you'll be able to see more clearly if the person came back on their own or if your offer of free shipping really worked, even if they didn't "open the email" but looked at it in the preview pane (a quick sketch of campaign tagging follows this list).
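to make point 10 concrete, here's a minimal sketch of campaign tagging for google analytics. the parameter names (utm_source, utm_medium, utm_campaign, utm_content) are the standard GA campaign tags; the destination url and the values are made up for illustration, so swap in your own.

```python
# minimal sketch: build a GA-tagged link so each test cell shows up as its own
# campaign/content value in google analytics. values below are hypothetical.
from urllib.parse import urlencode

def tag_link(url, source, medium, campaign, content):
    """Append GA campaign parameters (assumes the url has no existing query string)."""
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": content,  # use this one to tell version a from version b
    })
    return f"{url}?{params}"

print(tag_link("https://example.com/sale", "newsletter", "email",
               "cart-abandon-march", "version-a"))
```

with the a and b links tagged this way, GA reports the visit and the purchase against the right version even when the esp never records an "open".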
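and on point 3's false positives: before declaring a winner on click-through, it's worth a quick significance check. this is a plain-python sketch of a two-proportion z-test with made-up counts - not something your email platform provides - just to show how a "win" of a few tenths of a percent can still be noise.

```python
# minimal sketch: two-proportion z-test on click counts from an a/b split.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Return (z score, two-sided p-value) for the difference in click rates."""
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx
    return z, p_value

# hypothetical results: version a looks better (4.2% vs 3.6%), but is it real?
z, p = two_proportion_z_test(clicks_a=210, sent_a=5000, clicks_b=180, sent_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is above 0.05 here, so the "win" may be noise
```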
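and on point 4's manual review: the sketch below assumes the skew comes from clicks on the unsubscribe link being counted as clicks - that's my reading of the problem, and the click-log format here is hypothetical, not an ExactTarget export - but it shows the kind of filtering to do before you compare versions yourself.

```python
# minimal sketch: recompute click-through per version with unsubscribe clicks removed.
clicks = [
    {"version": "A", "url": "https://example.com/sale"},
    {"version": "A", "url": "https://example.com/unsubscribe"},
    {"version": "B", "url": "https://example.com/sale"},
    {"version": "B", "url": "https://example.com/sale"},
]
sent = {"A": 2500, "B": 2500}  # made-up send counts per version

def click_through_rate(version, exclude_unsub=True):
    relevant = [c for c in clicks
                if c["version"] == version
                and not (exclude_unsub and "unsubscribe" in c["url"])]
    return len(relevant) / sent[version]

for v in ("A", "B"):
    print(v, "raw:", click_through_rate(v, exclude_unsub=False),
          "cleaned:", click_through_rate(v))
```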
all of that said, there is no argument that a/b testing is your friend, but you have to think about the big picture, not the desired outcome. don't look at it as a case of winner/loser; look at it as effective/least effective - the ideas you applied to the least effective creative can be used later to a/b test against something else, or even to test for seasonality... even with well-constructed a/b tests, remember that your success (or failure) could be short-lived and that you either hit your list on the right day or the wrong one.
