AnalysisJournal 36 · Aleksandr Dadaev · August 25
I use these notes as a cheat sheet when talking to UA or Data teams. They’ve also been super helpful during job interviews at various companies. I hope you’ll find them useful too.
When you’re launching new ad creatives, the first thing to check is IPM (installs per mille). Or CTR, if you’re more used to that; the two are closely related, since IPM is effectively CTR multiplied by the click-to-install rate, scaled to 1,000 impressions. The key point is to understand how well the creative grabs attention right from the first impressions.
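If it helps to see the math, here’s a minimal Python sketch of both metrics (the counts are invented for illustration):

```python
# Toy sketch of both metrics; the counts are invented for illustration.

def ipm(installs: int, impressions: int) -> float:
    """Installs per mille: installs per 1,000 impressions."""
    return installs / impressions * 1000

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate, as a percentage of impressions."""
    return clicks / impressions * 100

# 120 installs and 4,500 clicks out of 300,000 impressions:
print(f"IPM: {ipm(120, 300_000):.2f}")     # IPM: 0.40
print(f"CTR: {ctr(4_500, 300_000):.2f}%")  # CTR: 1.50%
```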
Ad networks start with “learning” impressions so their algorithms can train. But their goal is simple: to make money. So they’ll start pushing the creatives whose IPM is at least as good as your current ones. Not better than the competition, better than your own mix: the algorithm compares new creatives to your existing ones, not your competitors’.
If your optimization is set to post-install events (like purchases), up to 70% of your budget will likely go to top-performing creatives. That means any new creative will probably be going up against just 1–2 strong, established videos. So it’s pretty normal to have only a handful of creatives spending the majority of your budget.
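You can check how concentrated your own account is in a few lines. A toy sketch, with made-up spend figures:

```python
# Invented spend figures; the point is the concentration check itself.

spend_by_creative = {
    "video_A": 42_000,
    "video_B": 23_000,
    "playable_C": 6_000,
    "video_D": 3_500,
    "video_E": 1_200,
}

total_spend = sum(spend_by_creative.values())
top_two = sorted(spend_by_creative.values(), reverse=True)[:2]
share = sum(top_two) / total_spend

print(f"Top-2 creatives take {share:.0%} of spend")  # ~86% in this toy data
```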
If you have to wait two weeks just to figure out if a creative is working — it’s probably not. Good creatives show results quickly. No IPM, no point.
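One way to turn that into practice is a simple early go/no-go rule: after some minimum number of learning impressions, a new creative has to land within striking distance of your current mix’s IPM, or it gets paused. A rough sketch; the window and threshold below are placeholder assumptions, not recommendations:

```python
# Hedged sketch of an early go/no-go rule. The learning window and the
# relative floor are placeholder assumptions, not recommendations.

MIN_IMPRESSIONS = 20_000  # assumed minimum before judging a creative
RELATIVE_FLOOR = 0.8      # must reach at least 80% of the current mix's IPM

def keep_creative(installs: int, impressions: int, current_ipm: float) -> bool:
    """Return True while a creative is still worth spending on."""
    if impressions < MIN_IMPRESSIONS:
        return True  # still in the learning window, too early to judge
    new_ipm = installs / impressions * 1000
    return new_ipm >= RELATIVE_FLOOR * current_ipm

# A creative at IPM 0.2 against a current mix at 0.4 gets cut early:
print(keep_creative(installs=5, impressions=25_000, current_ipm=0.4))  # False
```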
This is the unicorn. A creative that:
- grabs attention from the first impressions (high IPM),
- scales, pulling in a large share of spend, and
- stays profitable as that spend grows.
They’re rare — maybe 1% of everything you produce. But they’re the ones that actually drive growth in effective spend. That’s why your entire test and rotation system should be focused on finding these top creatives.
Because ultimately, it’s the growth of profitable spend that matters most. Everything else is secondary.
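As a rough illustration, you could flag unicorn candidates as creatives that clear thresholds on all three dimensions at once. The field names and cutoffs below are assumptions for the sketch, not benchmarks:

```python
# Illustrative filter; all field names and thresholds are assumptions.
# Tune them against your own data.

creatives = [
    {"name": "video_A",    "ipm": 0.9, "spend": 42_000, "roas": 1.3},
    {"name": "video_B",    "ipm": 0.5, "spend": 23_000, "roas": 0.8},
    {"name": "playable_C", "ipm": 1.1, "spend": 2_000,  "roas": 1.5},
]

def is_unicorn(c, min_ipm=0.8, min_spend=10_000, min_roas=1.0):
    """Attention (IPM) + scale (spend) + profitability (ROAS), all at once."""
    return c["ipm"] >= min_ipm and c["spend"] >= min_spend and c["roas"] >= min_roas

print([c["name"] for c in creatives if is_unicorn(c)])  # ['video_A']
```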
Some teams use proxy metrics to get deeper insights into creative performance.
Are they useful? Sometimes, yes. But be careful — these are proxy metrics. If you’re 100% sure they reflect actual business value, go ahead and use them. If not, stick with spend and IPM. In most cases, those are more than enough to guide your decisions.
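If you do want to keep a proxy, one way to earn that confidence is to check whether it actually tracks business value across your creatives. A quick sketch on invented data, using a hypothetical watch-through rate as the proxy:

```python
# Sanity check on invented data: does a hypothetical proxy (here, a
# watch-through rate) actually track business value? Requires Python 3.10+.

from statistics import correlation

watch_through = [0.35, 0.52, 0.41, 0.60, 0.28]  # proxy metric, per creative
roas          = [0.70, 1.10, 0.90, 1.30, 0.60]  # business value, per creative

r = correlation(watch_through, roas)
print(f"Pearson r = {r:.2f}")  # close to 1.0 here; a weak r means drop the proxy
```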
Benchmarks aren’t very reliable. Creatives are subjective, and they attract different types of users. The best way to test is by comparing similar concepts — AI with AI, misleads with misleads, playables with playables. That’s how you find out what actually works.
But testing completely different concepts in the same test campaign? Not a great idea. The audiences will be different, they’ll monetize differently, and the results could mislead you. For example, a new concept might have a higher CPI than your control — and you kill it. But what if that concept brought in payers and would’ve been profitable?
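Here’s that trap in numbers. A toy sketch with invented figures, comparing the two concepts on revenue per dollar (ROAS) instead of CPI alone:

```python
# Invented numbers showing how CPI alone can flip the decision.

concepts = {
    "control":     {"spend": 10_000, "installs": 5_000, "revenue": 9_000},
    "new_concept": {"spend": 10_000, "installs": 2_500, "revenue": 12_000},
}

for name, c in concepts.items():
    cpi = c["spend"] / c["installs"]
    roas = c["revenue"] / c["spend"]
    print(f"{name}: CPI ${cpi:.2f}, ROAS {roas:.2f}")

# control:     CPI $2.00, ROAS 0.90  -> cheaper installs, but loses money
# new_concept: CPI $4.00, ROAS 1.20  -> double the CPI, yet profitable
```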
Don’t base decisions on a single metric. Always look at the full picture: is this creative really helping you grow, or just looking nice in a spreadsheet?