In today’s paid social landscape, with more campaign optimization automation than ever before, creative has never been more important, as we outlined recently. It remains one of the few levers that marketers can still pull to influence performance.
To put a finer point on things, Nielsen estimates that creative quality accounts for 47% of advertising's total sales impact, and Facebook data shows that we have about 1.7 seconds to grab users’ attention on mobile.
So what does this mean?
Simply put, it means that deciphering which creative styles, hooks, formats, and so on work most effectively is key to paid social success, and the best (and really the only) way to figure this out is through a robust creative testing strategy.
Below, we’ll discuss the purpose of iterative creative testing, the key elements needed to see success, and the best practices to ensure we continue to learn and develop our strategies.
Understanding Creative Testing Roadmaps
So, what are creative testing roadmaps, and why are they important?
A creative testing roadmap or blueprint acts as a systematic framework through which advertisers can consistently create and scale creative ideas and hypotheses.
The benefit is twofold:
- First, it enables advertisers to figure out which creatives work best to scale in the short term.
- And second, it acts as a feedback loop for advertisers and creative teams to consistently learn, hone and develop their creative strategies (and testing roadmap) based on the feedback and learnings from each test.
For example, if we were marketing a D2C skincare brand, we might want to figure out what hook in a video works best for us. We would create two to four almost-identical videos, with the only difference being the hook to grab attention in the first 2 seconds.
Once we run this test, if at least one performs well - and distinctly better than the others - we can scale this ad in our current campaigns, gaining immediate benefit from an optimized creative. Importantly though, our creative team can also internalize this learning and apply it to future creatives, because we now know that the winning hook works well for our goals.
Setting and Measuring Goals for Ad Creative
That brings us to our next point: what are our goals, and how do we measure them? Before we start to put together our various testing hypotheses, we need to figure out our benchmarked goals.
What do these goals look like?
Effectively, our overarching goal is to iteratively optimize our ad creative and to figure out what performs best. But these goals need to be SMART, and key to that is making them measurable.
As such, we recommend examining both your own top performing creatives and industry benchmarks to figure out what numbers would equate to success or strong performance.
The KPIs that we like to look at when testing creatives are:
- Hook Rate - a video-only metric that tells you how effective your ad is at stopping thumbs from scrolling further. This is calculated by dividing 3-second video plays by impressions. A good benchmark here is around 25-30%.
- Hold Rate - another video metric that tells you how effective your ad is at retaining the viewer after you have hooked them. If this is below benchmarks, it tells us that either: a) the rest of the ad isn’t engaging enough; or b) the hook may be misleading or potentially click-baity. This is calculated by dividing ThruPlays (15-second or complete video views) by impressions. A good benchmark here is around 15-20%.
- Click-Through Rate (CTR) - this is simple enough, as it will tell you how effective your ad is at getting people to visit your website or landing page. This will typically be the first metric we look at when testing image creatives. This is simply outbound clicks divided by impressions, and a good benchmark here is around 1-3%.
- Conversion Rate (CVR) - this one isn’t relevant all the time, but can help to tell you whether your landing page aligns well with your ad. It’s calculated by dividing website purchases/conversions by link clicks. A rough benchmark here is around 3%, but this depends heavily on item ticket price, among other factors.
- Return on Ad Spend (ROAS) - no real need to go too much into this one. Calculated by dividing revenue by ad spend, it will give you an idea of whether you’re profitable or not.
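The formulas above are simple ratios, so they are easy to compute from any reporting export. A minimal sketch follows; the field names and figures are illustrative, not a specific platform's schema:

```python
# Hypothetical ad-level metrics from a reporting export (illustrative values).
ad = {
    "impressions": 40_000,
    "three_sec_plays": 11_200,   # 3-second video plays
    "thruplays": 6_800,          # 15-second or complete views
    "outbound_clicks": 520,
    "conversions": 18,
    "revenue": 1_450.00,
    "spend": 600.00,
}

def rate(numerator, denominator):
    """Return a percentage, guarding against division by zero."""
    return 100 * numerator / denominator if denominator else 0.0

hook_rate = rate(ad["three_sec_plays"], ad["impressions"])   # benchmark ~25-30%
hold_rate = rate(ad["thruplays"], ad["impressions"])         # benchmark ~15-20%
ctr       = rate(ad["outbound_clicks"], ad["impressions"])   # benchmark ~1-3%
cvr       = rate(ad["conversions"], ad["outbound_clicks"])   # rough benchmark ~3%
roas      = ad["revenue"] / ad["spend"] if ad["spend"] else 0.0

print(f"Hook rate: {hook_rate:.1f}%  Hold rate: {hold_rate:.1f}%")
print(f"CTR: {ctr:.2f}%  CVR: {cvr:.2f}%  ROAS: {roas:.2f}x")
```

Computing every KPI the same way for every test variant keeps comparisons apples-to-apples across test cycles.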
5 Key Elements of a Successful Creative Testing Roadmap
Now that we’ve discussed why creative testing roadmaps are important, and talked about the ways that we measure their success, let’s look at the key components of a good creative test and roadmap.
1. A Clear Hypothesis
Clear ideas or theories are vital to a good creative test. It’s also essential that these hypotheses can give us answers of value that we can then iterate upon. Some hypothesis examples would be: “adding a call to action over our imagery will increase click-throughs and conversions”, or “adding a testimonial will lead to increased trust and credibility, thus leading to higher conversion rates”.
2. Selection of Testable Creative Elements
Similar to having a clear hypothesis, we must have an easily accessible/producible selection of creative elements that we can test.
Simply put, in our first hypothesis example above, this might involve having a photo of a product from a photo shoot with no text overlaid, plus a variation of the exact same photo but with a call to action overlaid.
3. Proper Sample Size, KPIs & Duration
We’ve outlined the various KPIs that we’re likely to use above, so now we must examine how much data we need to collect in order to make informed judgements, and how long/how much budget that will require.
A general guideline is to dedicate about 10-20% of your ad spend budget toward testing, so once you’ve determined this budget, it’s necessary to project how long it will take to reach statistically significant results.
In short, this means that we need to have enough data to be able to judge that the results we have observed are attributable to our hypothesis, and not just a result of chance. In practice, this means that we need to ensure that we’re getting enough video views, clicks and conversions to make this judgment. Once we’ve defined this, then it will be straightforward enough to determine how long we need to run our test cycles for.
4. Rigorous Data Analysis
Once we’ve reached a point of statistical significance (often platforms like Facebook Ads Manager’s Experiments tool will help to define this point), we must analyze our data thoroughly.
Using the KPIs that we’ve outlined above, we must first measure our data within the test environment. Once we’ve determined our winner (or winners), we then must measure our data against our existing benchmarks.
Simply enough, this is to ensure that we’re not rolling out creative in our scaled campaigns that has demonstrably lower performance than our existing scaled creatives.
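That baseline check can be automated with a simple comparison against the KPIs of your existing scaled creatives. A minimal sketch, with assumed benchmark and winner values:

```python
# Illustrative baselines from our existing scaled creatives (assumed values).
benchmarks = {"hook_rate": 27.0, "hold_rate": 16.0, "ctr": 1.4}

# KPIs of the test's winning variant (also assumed).
winner = {"hook_rate": 29.5, "hold_rate": 17.2, "ctr": 1.2}

# Flag any KPI where the winner underperforms the scaled baseline,
# so we don't roll out a creative that is demonstrably weaker.
shortfalls = {kpi: (winner[kpi], benchmarks[kpi])
              for kpi in benchmarks if winner[kpi] < benchmarks[kpi]}

if shortfalls:
    for kpi, (got, want) in shortfalls.items():
        print(f"Hold rollout: {kpi} at {got} vs baseline {want}")
else:
    print("Winner clears all baselines - safe to scale.")
```

Here the winner beats the baseline on hook and hold rate but falls short on CTR, which is exactly the kind of nuance worth catching before scaling.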
5. Iterative Testing Process
The final key element is to ensure that this process is iterative, and we do this in two ways.
Firstly, the result of this test should be passed back to the creative team, who can internalize the result (e.g. we should include CTAs on our ad creative going forward).
Secondly, we can use this result to come up with fresh hypotheses - for example, does a certain CTA work better than another? What’s important is that we continue to learn and develop our hypotheses based on past tests, so that we’re always moving forward.
Best Practices for Creative Testing Roadmaps
So there you have it. These five elements and the KPIs should set you up for success in building your creative testing strategy. But before we wrap up, we want to highlight the key tips to keep you on the path to a successful iterative process.
Test Incrementally
Don’t try to learn too much at once. Testing incrementally allows you to isolate specific variables or creative elements, rather than testing multiple things at once in an uncontrolled manner, which ultimately adds noise to your results.
Avoid False Positives
This one is straightforward enough - make sure you have enough data to make an accurate judgment. It’s no use running your test for a limited time, and assessing ad creative that has only spent $20 and received 15 clicks.
Similarly, when results between creatives are close, avoid confirmation bias: if there’s no clear winner, either abandon the hypothesis (it may not make enough of an impact to matter) or test again.
Embrace Failures as Learning Opportunities
This is worth remembering and repeating - not all tests will work! Sometimes, we either won’t get discernible differences in the data, or our results will be lower than our benchmarks. That’s ok - but it’s important to document and internalize these results for future theories and tests.
Stay Organized & Document your Findings
Keep track! Make sure that you’ve got a central document or tool where you’re documenting your findings, whether that’s a Google Sheet or the project management tool of your choice. It’s important to document your findings both for your future hypotheses and for the creative team going forward, so that they have a clear hub they can rely on to follow creative best practices.
Creative testing is crucial for success in paid social. It allows you to test and optimize different creative elements and strategies to achieve your marketing goals, and to build an iterative, optimized process your creative team can constantly learn from. By following these best practices, you can achieve better campaign performance and drive greater success for your business.