“If you look at the brand marketing teams that obsess over going viral, they jump on a moment in a playful way that works with their customer base. They go viral because they’re different.”
Dean Murr
Founder and CEO of Programmai

To create a successful campaign, there are three levers to pull on. We’ve already talked about data. Now, let’s move on to creative.

  1. Data – to power your targeting, optimisation and measurement
  2. Creative – messages and imagery that get attention
  3. Money – budget to generate sustainable growth

Creative often comes last in our thinking.

Yet without something compelling to show or say to your audience – not to mention the right audience – it’s going to be a struggle to influence anything.

 

On finding something to say.

If you step into your user’s shoes, the ad is all they care about. They don’t know about your smartly set budget, audience strategies or custom optimisation events. They just see an ad. 

  • Good creative + poor targeting = can still deliver results if users are inspired, surprised or informed
  • Poor creative + good targeting = might seem to perform well but is highly unlikely to actually influence behaviour

I say ‘seem to perform well’ because you’ll always get some conversions. This can be misleading unless you have a good baseline.

Bidding algorithms will serve a portion of ads to users who are already en route to convert. 

Let’s say we show these users a box with white pixels. If they were already en route to convert, your campaign will still attribute the sale to the ad. But realistically? That white box had nothing to do with it.

 

What makes a good ad?

Every brand and target audience is different. Broadly speaking, here’s a checklist of what makes a great ad.

  • Triggers an emotion. 
  • Designed to hit a clear objective, e.g. brand awareness or conversion.
  • Personalised around the end user.
  • Uses a format (video/image/carousel) that suits audience and objective.
  • Showcases your product, brand or lifestyle.
  • Includes copy that’s long enough to tell a story and short enough to not be skipped over. 
  • Includes emojis, if it suits your brand. 🎢 

 

Get testing!

Optimise your ads, relentlessly.

If you use a robust testing framework, you can optimise with minimal effort and gain shareable insights on your brand and target audience.

We love a lift test with a holdout to measure the incrementality of an activity, but it’s not necessary here. Since all other variables are the same, any difference in performance when split testing will come only from the creative.

Additional conversions from one ad over the other are therefore incremental. You’ve driven a behavioural change of users from your ad by showing them something better than the theoretical white box.

 

Limitations of split testing

On Facebook, you can run split tests with or without holdout groups. Yet, these split tests have their limitations.

When you split up your test population, you’ll need an extra ad set for each additional ad. More ad sets sharing the same budget will extend your learning phase and potentially affect performance.

Likewise, you’ll want to have a variety of ads live at the same time to avoid saturating your target audience with the same creative. This would not be possible in a split test.

 

Enter: the mini A/B test.

Alternatively, you could run something less scientific.

It’s a bit more hacky, but in many circumstances it gets you results with more flexibility. Let’s call it a mini A/B test.

Write some hypotheses and rank them by expected impact on performance. If you explain your objective well, your colleagues can be a rich source of interesting hypotheses. For each hypothesis, create two or more ad variants to test. 

The higher the conversion volume of your campaign, the more variants and hypotheses you can test at once. Run too many ads, though, and you risk spreading your budget too thin – resulting in poor overall performance and few meaningful learnings.
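To put a rough number on that trade-off, here’s a back-of-the-envelope sketch using the standard two-sample size formula. The 95% confidence / 80% power z-values are conventional defaults, and the baseline rate and lift in the example are hypothetical:

```python
import math

def sample_size_per_variant(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per ad variant to detect a relative
    lift in conversion rate at ~95% confidence and ~80% power."""
    p_test = p_base * (1 + lift)
    # pooled variance of the two conversion rates
    var = p_base * (1 - p_base) + p_test * (1 - p_test)
    n = ((z_alpha + z_beta) ** 2) * var / (p_test - p_base) ** 2
    return math.ceil(n)

# e.g. a 2% baseline conversion rate and a hoped-for 20% relative lift
print(sample_size_per_variant(0.02, 0.20))
```

Even at these illustrative numbers, each variant needs tens of thousands of users – which is why low-volume campaigns can only support a couple of variants at a time.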

 

How to run a series of A/B tests:

  1. Launch one or more mini tests at the same time. Launching at the same time will give each ad an equal chance of ‘winning’. It’ll also mean your campaign spends less time in the learning phase. It’s a win-win.
  2. Ads that Facebook has the most confidence in will convert more. They’ll also receive more spend. 
  3. After a few days, you should see a clear difference between A and B. Turn off the ad that heavily underperforms or underspends. Turning off an ad doesn’t always reset the learning phase in Facebook – another benefit.
  4. If you aren’t seeing significant differences in performance, you could A/B test properly if you think it’s an important hypothesis. If it’s not, you can conclude that your tested variable isn’t important.
  5. Clearly document your hypothesis, tests, results and conclusions. Share them as validated customer learnings with the wider marketing team.

Note that with a proper split test, you’re looking for statistically significant differences between A/B. 

With these mini-tests, you’re relying on the budget optimisation algorithm to show what works better.
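For the proper split-test case, the significance check is a standard two-proportion z-test. A minimal pure-Python sketch follows; the conversion and audience numbers are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value) for the
    difference in conversion rate between ads A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. ad A: 120 conversions from 5,000 users; ad B: 90 from 5,000
z, p = two_proportion_z(120, 5000, 90, 5000)
print(round(z, 2), round(p, 3))
```

A p-value under 0.05 would count as a statistically significant difference; in the mini-test, by contrast, you’re simply trusting Facebook’s budget optimisation to surface the stronger ad.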

 

How much should you test?

How much do you want to learn and improve? 

I’d recommend setting yourself a target to spend a percentage of your ad spend on these tests. Focus on outcomes rather than output.

Most advertisers don’t invest enough in testing and improving their ads. This is your competitive advantage.

 

Test for ad performance predictors

It’s worth digging into the data to inform tests and hypotheses. Here are some ad performance predictors that you can test for. 

  • The first product purchased, which can predict a higher-than-average customer lifetime value (pCLV)
  • The products your users spend the most time browsing – or your hero product
  • Historic ad performance, scraped to predict which imagery, formats, copy, colours and emojis have driven performance and lift
  • Products viewed that drive up customers’ pCLV

 

To get started.

  1. Set yourself a testing target.
  2. Write and rank your hypotheses.
  3. Test & (l)earn.

 

In the next part of this series, we’re going to look at what makes the world go round: Money, and more precisely, which budgets will drive sustainable growth for your business. 

Stay tuned!