The creative testing system that slashed our CAC (and scaled our spend)

We scaled Meta ad spend by 74.6% and dropped CAC by 40%. Here’s how.

Nathan Hudson

At some point, apps hit a wall when scaling: spend goes up, but so does customer acquisition cost (CAC). Sound familiar?

Now, what if I told you we scaled ad spend by 74.6% while cutting CAC by 40% — in just three weeks? No magic tricks, no fluffy theories. Just a methodical, structured approach to creative testing that gets results.

In this article, I’ll break down the “left-field”, “not what the playbook says” strategy that the team and I at Perceptycs implemented to get one of our fintech clients’ ad spend working smarter, not harder.

Here’s how we did it, step by step:

The challenge: scaling profitably without wasting budget

[Infographic] Most teams fall into one of three traps when testing creatives: not testing enough (leads to fatigue), testing without scaling (missed potential), and chaotic testing (unclear learnings).

Whenever we take on a new ad account, we typically see that teams have made one of three mistakes:

  1. They don’t test enough and play it safe, scaling clear historical winners until they fatigue and performance crashes.
  2. They test, but don’t scale properly — so winners never reach their full potential.
  3. They test, but it’s chaotic: either the account structure is messy, or it’s unclear what happens to winners and learnings.

On this occasion, it was a mixture of #1 and #3: there was some testing happening, but no process in place to test systematically.

And that was probably because there was so much to test! The product was exceptional: it had several clear benefits, multiple use cases and jobs to be done (JTBD), and a range of audiences that could use it.

What this meant was that performance had hit a ceiling: stable, but subpar.

Here’s what we did.

A 70-ad creative blitz

The core hypothesis was simple: More ad creatives = more data = faster insights.

We needed to gather as much data as possible in the shortest amount of time to pick a direction to go in. That meant launching a high volume of ads to test a wide range of hypotheses.

So, in Week 1, we launched 70 creatives. All of them statics. This wasn’t a haphazard approach. Each ad was designed to test a specific hypothesis – a different value proposition, target audience, JTBD or use case. And all of these creatives followed the same creative format.

Why this worked:

  • More variations = faster identification of early winners: With 70 ads running simultaneously, we quickly saw which messages resonated with our target audience.
  • We tested across multiple audiences to see what stuck: We weren’t just testing creatives; we were also testing audience segments. This allowed us to identify the most receptive groups for each ad.
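
To make that hypothesis grid concrete, here’s a minimal sketch of how a batch like this could be planned. The value propositions, audiences, and use cases below are made-up placeholders, not the client’s actual list; the point is that every static maps back to exactly one hypothesis.

```python
from itertools import product

# Hypothetical dimensions -- placeholders, not the client's actual list.
value_props = ["save time", "cut fees", "automate budgeting", "instant payouts", "security"]
audiences = ["freelancers", "small business owners", "students", "frequent travellers"]
use_cases = ["invoicing", "expense tracking", "cross-border payments", "subscriptions"]

# One static per (value prop, audience, use case) combination, all in the same
# creative format, so each ad maps back to exactly one hypothesis.
test_grid = [
    {"value_prop": vp, "audience": aud, "use_case": uc, "format": "static"}
    for vp, aud, uc in product(value_props, audiences, use_cases)
]

# 5 x 4 x 4 = 80 combinations; trimming the weakest pairings gets you to ~70 launches.
print(len(test_grid))
```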

So we pushed them live and the data came pouring in. Some flopped. Some did ‘okay’. And three absolutely crushed it. We knew within days what had potential to scale. 

Lesson: You can’t scale what you don’t test. Most apps wait too long to find winners. We forced the process in one week.

Refining and doubling down

A decent assumption would be that in Week 2 we tested another 70 creatives. But we slowed down the pace of testing to focus on amplifying what was working.

The goal of creative testing isn’t to hit an arbitrary number of creatives launched. Sure, there’s a place for forcing volume. But the goal is to find winners and scale them.

And we’d already found three. (Well… three “potential” winners.)

So in Week 2, we launched 30 more creatives. In Week 3, we launched 20 more. Why? To balance testing with scaling. We allocated 40% of our budget to those winners and left 60% for testing.
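
As a rough illustration of that 40/60 split, the allocation is simple arithmetic. The daily budget figure and ad names below are hypothetical, not the client’s real numbers.

```python
def split_budget(daily_budget: float, winners: list[str], test_ads: list[str]) -> dict:
    """Split a daily budget 40/60 between scaling winners and ongoing testing.

    The 40/60 ratio comes from the process described above; the even spread
    within each pool is a simplification for illustration.
    """
    winner_pool = daily_budget * 0.40
    test_pool = daily_budget * 0.60
    return {
        "winners": {ad: winner_pool / len(winners) for ad in winners},
        "tests": {ad: test_pool / len(test_ads) for ad in test_ads},
    }

# Hypothetical Week 2: $1,000/day, 3 winners, 30 new test creatives.
allocation = split_budget(1_000, winners=["W1", "W2", "W3"],
                          test_ads=[f"T{i}" for i in range(1, 31)])
# Each winner gets ~$133/day, each of the 30 test ads gets $20/day.
```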

But what made these new creatives different?

  • Instead of just volume, we focused on creative diversity: We started experimenting with different creative formats: more native-looking statics, motion videos, and UGC video scripts.
  • Instead of just launching net new creatives, we began iterating on the top performers from Week 1: We already knew certain hooks captured attention, so we asked: how could we use these hooks, and hooks like them, to produce new ads?

This approach allowed us to maximise the impact of our testing budget. We weren’t just throwing more darts at the board; we were getting closer to the bullseye with each throw.

Scaling the winners

Once we had identified our “clear winners”, it was time to scale them aggressively. But we didn’t just increase budgets across the board. It was time for a power move.

Scaling wasn’t just about increasing budgets—it was about expanding markets.

Here’s how we did it:

  • Took high-performing creatives and launched them into different regions: If an ad was crushing it in the US, we kept scaling it there and immediately launched it in Europe with a scaled budget.

This allowed us to increase spend incredibly quickly whilst also testing different audiences. We didn’t have to worry about scaling 20% per day or resetting the learning phase. We could just launch in a new market with a higher budget.

Long term, this also extended the lifetime of a winner in our core market, since it took longer to reach the point of fatigue.

The results & why this approach works

The results speak for themselves:

  • CAC dropped by 40%.
  • Ad spend increased by 74.6%.
  • And we did it in just three weeks.

But why did this approach work so well?

  • It’s structured (not random creative testing): We didn’t just launch a bunch of ads and hope for the best. We had a clear framework for testing hypotheses, identifying winners, and scaling them efficiently.
  • It’s scalable (finding winners and expanding intelligently): We didn’t just increase budgets across the board. We found the most profitable markets for each creative and angle, then doubled down on those.
  • It’s continuous (testing never stops, but it evolves): We didn’t stop testing after three weeks. We continued to refine our approach, test new creatives, and expand into new markets.

The big takeaway

[Diagram] The 5-part loop we used to scale Meta ad performance efficiently: deploy ad variants, track performance, identify winners, refine creatives, and scale across regions.
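
In code terms, one week of that loop could be sketched roughly like this. The CAC threshold, sample results, and helper logic are placeholders for whatever tooling and numbers you already use; it’s an illustration of the cadence, not a drop-in script.

```python
# A minimal sketch of the weekly loop from the diagram above. Data and
# thresholds are hypothetical examples.

CAC_TARGET = 25.0  # hypothetical target CAC, in account currency


def identify_winners(results: dict[str, float]) -> list[str]:
    # 3. Identify winners: keep ads that beat the CAC target.
    return [ad for ad, cac in results.items() if cac <= CAC_TARGET]


def refine(winners: list[str]) -> list[str]:
    # 4. Refine creatives: iterate on winning hooks (placeholder naming scheme).
    return [f"{ad}-iteration" for ad in winners]


def scale_plan(winners: list[str], regions: list[str]) -> list[tuple[str, str]]:
    # 5. Scale across regions: duplicate each winner into new markets at full budget.
    return [(ad, region) for ad in winners for region in regions]


# Steps 1 and 2 (deploy variants, track performance) happen in the ad platform;
# here we just feed last week's (made-up) per-ad CAC results into the loop.
week_results = {"ad_01": 41.0, "ad_02": 19.5, "ad_03": 62.0, "ad_04": 22.0}

winners = identify_winners(week_results)            # ["ad_02", "ad_04"]
next_batch = refine(winners)                        # iterations to launch next week
launches = scale_plan(winners, ["UK", "DE", "FR"])  # regional duplicates of winners
```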

A lot of apps struggle to grow on Meta because they don’t have a dynamic creative testing process. They are either following a strict, one-size-fits-all playbook that someone gave away in a “comment on my LinkedIn post and I’ll DM it to you” offer (I joke, but you know what I mean).

OR, they don’t really have a testing process at all.

Ultimately, it comes down to building a process that is adaptable enough to be tweaked based on new learnings, insights, priorities, and ideas, but robust enough to be followed and executed on a weekly basis.

If you’re stuck scaling, ask yourself:

  • Are you testing enough?
  • Are you scaling intelligently?
  • Are you feeding the right learnings and context into your testing cycles?
  • Are you being hamstrung by a process, framework or playbook?
  • Are you diversifying creative properly?

When you get this right, CAC drops, spend increases, and the creative flywheel starts to spin.

No guesswork. Just a repeatable process that works.

(And if you’re not sure how to implement this… well, you know where to find me.)
