By Lucy Acheson, Head of Data Planning at WDMP
As marketers we have a clear need to establish the most effective route to gaining and keeping a consumer’s attention. Measurability and test-and-learn principles are the lifeblood of what we do, and should sit squarely at the centre of all communication planning.
And at the forefront of that measurability and metrics battle lies the modest but brilliantly effective A/B, or split, test. This method of testing allows for objective campaign planning, strips away guesswork and opinion, and enables brands to communicate with their consumers in a quantitatively proven and ultimately effective manner.
An A/B test is a clean and clear experimental approach: two variations of an email, differing in just one element, are sent to statistically valid subsets of the target audience. The winning variation can then be rolled out to the remaining audience with confidence, knowing that on the day, Route A was preferred to Route B. The key is to keep everything constant bar that one element. If more than one aspect is altered, it will not be possible to isolate which factor is responsible for the change in consumer behaviour, and you will have learned nothing.
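The random split itself can be sketched in a few lines. This is a minimal illustration, not a production tool: the function name, the fixed seed and the user identifiers are all hypothetical, and real campaign platforms handle this assignment for you.

```python
import random

def ab_split(audience, seed=42):
    """Randomly split an audience into two equal test cells.

    `audience` is a list of contact identifiers. The seed is fixed so the
    split is reproducible; a random (not alphabetical or chronological)
    split is what keeps the two cells comparable.
    """
    pool = list(audience)
    random.Random(seed).shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # cell A, cell B

cell_a, cell_b = ab_split([f"user{i}" for i in range(10)])
```

Everyone in cell A receives the control email and everyone in cell B receives the variant; because assignment is random, any difference in response can be attributed to the one altered element.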
Both cells of an A/B test need to be run at exactly the same time. This ensures that the results can be attributed to the altered variable and not to external influences on response: time of day, day of week, marketplace conditions, the economy and so on.
Another golden rule is to ensure that each cell is large enough to give statistically valid test results. This can be worked out with a sample-size tool, or by an analyst, calculating the crucial confidence levels based on the proposed cell size and your expected response rate. A 95% confidence level is the norm for email marketing purposes.
Tests should also always be allowed to run their natural course. There is a temptation to dive in and grapple with the figures as soon as they become available, but an A/B test needs to be measured over the same period as campaigns are normally tracked. If, historically, 80% of response is captured within 48 hours, then let the A/B test run for a similar time frame, once again ensuring the statistical robustness of the results.
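Once the test has run its full course, the question "did Route A really beat Route B?" can be answered with a simple two-proportion z-test. This is a generic statistical sketch, not a method prescribed by the article; the function name and figures are illustrative.

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score for an A/B test result.

    conv_a / conv_b: responders in each cell
    n_a / n_b:       recipients in each cell
    |z| >= 1.96 indicates a significant difference at the 95% level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Example: 300 responders from 10,000 in cell A vs 250 from 10,000 in cell B
z = z_score(300, 10_000, 250, 10_000)
```

Only when the score clears the chosen threshold should the winning route be rolled out to the remaining audience; a difference that falls short may simply be noise.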
Always ensure that the conversion funnel is primed and ready to receive the increased volume of traffic that follows communication optimisation. This is especially true at peak sales or response periods throughout the year. There is little point in working hard to optimise response if backend systems such as call centres, websites and e-shops aren’t stocked and ready to receive the increased volume of consumers being driven towards them. In essence, think past the marketing strategy and include other stakeholders in planning your campaign.
As a small housekeeping point, store and name your test campaigns in a structured way. You are expending energy and budget to learn something, so those learnings should be made available to everyone concerned, avoiding the corporate marketing amnesia we all face as campaigns come and go and personnel change on a regular basis.
One other point to remember: once a consumer has been treated in a certain way, make sure you interact with them consistently from then on. The tracking tools and campaign management systems available today should give a clear understanding of who has seen what, and deliver content accordingly.
Lastly, continue to test, one discrete element at a time, always striving to enhance the consumer’s experience and, as a direct result, elicit the behaviour you seek. Today’s outright winner in an A/B test will be tomorrow’s control cell.