Measure the true causal impact of your Apple Ads campaigns using Catchbase.
Not all paid installs are created equal. Incrementality analysis measures the true impact of your campaigns by answering a critical question: Would these users have installed your app anyway, even without seeing your ads?
Incremental downloads: downloads that happened only because users saw your ad. These represent genuine growth driven by your advertising investment.
Cannibalized downloads: downloads from users who clicked your ad but would have found and installed your app organically anyway. These represent wasted ad spend with zero incremental value.
Organic downloads: downloads that occur naturally through search, word-of-mouth, and brand awareness. This is your baseline growth without any paid advertising.
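To make the decomposition concrete, here is a minimal sketch in Python. All figures are hypothetical, and the split between incremental and cannibalized installs is assumed to come from an incrementality test rather than from attribution data alone:

```python
# Hypothetical one-month snapshot; none of these figures come from a real campaign.
total_downloads = 12_000                       # everything the App Store reports
paid_attributed = 4_000                        # installs attributed to Apple Ads
incremental = 2_500                            # installs that exist only because of the ads
cannibalized = paid_attributed - incremental   # 1,500 users would have installed anyway
organic_baseline = total_downloads - paid_attributed

incrementality_rate = incremental / paid_attributed    # 0.625
cannibalization_rate = cannibalized / paid_attributed  # 0.375

print(f"Incrementality rate:  {incrementality_rate:.0%}")   # 62%
print(f"Cannibalization rate: {cannibalization_rate:.0%}")  # 38%
```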
Catchbase uses AI-powered causal impact analysis to measure your advertising effectiveness. Our platform automates the entire testing process from test design through analysis to insights delivery.
[Chart: downloads across a control period (campaign paused) and a test period (campaign active). The solid line shows actual downloads; the red dashed line shows predicted downloads without ads. The gap between them represents your true incremental lift.]
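That gap can be computed directly. The sketch below is a deliberately simplified illustration of the idea, not Catchbase's production models: it projects the control-period average forward as the counterfactual, whereas real baselines also incorporate control series, trend, and seasonality. All numbers are made up.

```python
import numpy as np

# Hypothetical daily download counts (installs per day).
control = np.array([410, 395, 430, 420, 405, 415, 400,
                    425, 410, 435, 420, 415, 405, 430])  # campaign paused
test    = np.array([520, 505, 540, 515, 530, 525, 510,
                    535, 520, 545, 530, 515, 525, 540])  # campaign active

# Simplest possible counterfactual: project the control-period mean forward.
predicted = np.full_like(test, control.mean(), dtype=float)

daily_lift = test - predicted
incremental_installs = daily_lift.sum()
print(f"Estimated incremental installs over the test period: {incremental_installs:.0f}")
```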
Configure test parameters and campaign selection. Catchbase automatically manages campaign pausing and activation to create clean control and test periods.
ML models use control series to account for seasonality, trends, and external factors, creating accurate counterfactual baselines that isolate true campaign impact.
Receive clear metrics on incrementality, cannibalization, and effective CPI. Optimize bids and reallocate budget to maximize true incremental growth.
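As an illustration of the effective-CPI readout, here is a toy calculation with made-up numbers. The key point: whenever cannibalization exists, the cost per incremental user is higher than the naive attributed CPI.

```python
# Hypothetical test readout; all figures are illustrative.
spend = 6_000.00         # ad spend during the test period (USD)
paid_attributed = 3_000  # installs the network attributes to the campaign
incremental = 1_800      # installs above the counterfactual baseline

naive_cpi = spend / paid_attributed  # $2.00 (what most dashboards show)
effective_cpi = spend / incremental  # $3.33 (cost per truly new user)

print(f"Naive CPI:     ${naive_cpi:.2f}")
print(f"Effective CPI: ${effective_cpi:.2f}")
```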
Incrementality insights fuel smarter bidding and budget allocation, creating a continuous optimization cycle that maximizes true ROI:
Automated experiments measure true incremental impact of campaigns
Causal impact analysis quantifies incrementality and cannibalization rates
RL algorithms automatically adjust bids to maximize incremental value (a simplified sketch follows after this list)
Continuous monitoring validates performance and feeds new insights
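Catchbase's bidding algorithms are not public, so the following is only a stand-in: a toy proportional feedback rule that nudges a bid toward a target effective CPI. A real RL policy would learn from far more signals; the function name and parameters here are hypothetical.

```python
def adjust_bid(current_bid: float,
               effective_cpi: float,
               target_cpi: float,
               step: float = 0.15,
               floor: float = 0.10) -> float:
    """Toy feedback rule (not Catchbase's RL policy): raise the bid when
    incremental users cost less than target, lower it when they cost more."""
    if effective_cpi <= 0:
        return current_bid
    # Ratio > 1 means incremental users are cheaper than our target.
    ratio = target_cpi / effective_cpi
    new_bid = current_bid * (1 + step * (ratio - 1))
    return max(floor, round(new_bid, 2))

# Example: incremental users cost $3.33 against a $3.00 target, so bid down.
print(adjust_bid(current_bid=2.50, effective_cpi=3.33, target_cpi=3.00))  # 2.46
```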
Strategic testing at key decision points ensures your budget flows to channels delivering genuine incremental growth.
Branded keywords frequently capture users already searching for your app. Test brand and generic campaigns separately to determine where you are buying genuine growth and where you are merely intercepting existing demand.
Before scaling spend up or down, validate incrementality to ensure marginal budget delivers proportional returns. Not all budget increases result in growth.
As your brand grows and market dynamics shift, campaigns that once drove incremental value may begin cannibalizing organic traffic. Quarterly checks ensure continued effectiveness.
We recommend a minimum of 14 days for each period. This duration allows our predictive models to establish a stable baseline and measure true campaign effects with statistical confidence.
For campaigns with lower conversion volumes, extend test periods to 21-30 days. More data strengthens confidence intervals and improves measurement precision.
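A rough way to see why duration matters: under a simple Poisson assumption (daily variance roughly equal to the daily mean), uncertainty around the baseline shrinks with the square root of the number of days. This back-of-envelope sketch is not Catchbase's statistical machinery, just an illustration:

```python
import math

def ci_halfwidth(daily_installs: float, days: int, z: float = 1.96) -> float:
    """Approximate 95% CI half-width (installs/day) for the mean daily
    baseline, assuming roughly Poisson daily counts (variance ~ mean)."""
    return z * math.sqrt(daily_installs / days)

for days in (7, 14, 21, 30):
    hw = ci_halfwidth(daily_installs=50, days=days)
    print(f"{days:>2} days: baseline known to within ±{hw:.1f} installs/day")
```

Doubling the test length tightens the interval by about 30%, which is why lower-volume campaigns benefit from 21-30 day windows.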
Every measurement includes 95% confidence intervals, providing statistical assurance that observed effects are real, not random variation. This rigor enables confident budget decisions.
Measurement precision improves with higher conversion volumes and stable historical baselines. Campaigns with consistent patterns yield tighter confidence bounds and more actionable insights.
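Catchbase does not publish its exact interval construction; one standard way to obtain such intervals is a bootstrap over daily lift estimates, sketched here with hypothetical values:

```python
import numpy as np

# Bootstrap a 95% CI for total incremental lift (illustrative only).
rng = np.random.default_rng(42)
daily_lift = np.array([95, 110, 88, 120, 105, 99, 112,
                       101, 93, 118, 107, 96, 109, 115])  # hypothetical test days

boot = np.array([
    rng.choice(daily_lift, size=daily_lift.size, replace=True).sum()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Incremental installs: {daily_lift.sum()} (95% CI: {lo:.0f} to {hi:.0f})")
```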
Our models automatically adjust for day-of-week effects, trends, and seasonal patterns when building counterfactual baselines. This prevents false attribution of normal seasonal variation to campaign activity.
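As a minimal illustration of seasonality adjustment (not Catchbase's actual models), the sketch below fits an ordinary least-squares baseline with a linear trend and day-of-week dummies to synthetic pre-test data, then extrapolates it into the test window:

```python
import numpy as np

# Synthetic 28 days of pre-test downloads with a weekend bump and a mild trend.
rng = np.random.default_rng(0)
days = np.arange(28)
downloads = 400 + 2 * days + 60 * (days % 7 >= 5) + rng.normal(0, 10, 28)

# Design matrix: intercept, linear trend, and day-of-week dummies (Monday dropped).
dow = np.eye(7)[days % 7][:, 1:]
X = np.column_stack([np.ones(28), days, dow])
coef, *_ = np.linalg.lstsq(X, downloads, rcond=None)

# Project the baseline 14 days forward; the same features extrapolate the
# trend and weekly seasonality into the test window.
future = np.arange(28, 42)
Xf = np.column_stack([np.ones(14), future, np.eye(7)[future % 7][:, 1:]])
counterfactual = Xf @ coef
print(counterfactual.round(1))
```

Because weekend days carry their own coefficients, a test window that starts mid-week is not mistaken for lift.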
Schedule tests during stable periods. Avoid major holidays, product launches, or PR events that create irregular baseline shifts and complicate measurement.
Brief pauses (2-3 weeks) have minimal long-term impact on performance. Campaigns typically recover quickly after reactivation. The value of understanding true incremental impact far outweighs any temporary performance dip.
More importantly, these tests often reveal which campaigns aren't delivering real value. Many clients find they can reallocate budget from low-performing campaigns to channels with proven incremental lift, significantly improving overall returns.
While technically possible, we recommend sequential testing to isolate effects cleanly. Testing branded and generic campaigns simultaneously obscures which campaign type drives incremental lift.
Catchbase helps you schedule experiments across multiple campaigns, giving you a clear overview of your testing calendar.
Non-significant results mean we can't confidently separate campaign impact from normal variation. This doesn't prove zero impact—it means we need more data to detect real lift above baseline noise.
When results show no measurable impact (actual and predicted performance match closely), your campaign likely drives minimal incremental value. This is valuable: reallocate that spend to channels delivering proven lift.
If results are inconclusive, run the test longer. More data improves accuracy and clarifies whether you're seeing real impact or just statistical noise.
Discover your true cost per incremental user and eliminate wasted ad spend.