Incrementality Testing
An experiment that measures the true causal impact of a marketing activity by comparing a test group that sees it against a holdout group that does not.
Why It Matters
Incrementality testing answers the question attribution cannot: "Would these conversions have happened anyway without this marketing?"
How It Works
You split your audience into a test group (exposed to the campaign) and a holdout group (not exposed). After the test period, you compare conversion rates between groups. The difference represents the incremental lift caused by the campaign.
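The arithmetic behind this comparison can be sketched in a few lines of Python. The group sizes and conversion counts below are hypothetical, purely for illustration:

```python
# Hypothetical numbers: 50,000 users randomly assigned to each group.
test_users, test_conversions = 50_000, 1_200
holdout_users, holdout_conversions = 50_000, 1_000

test_rate = test_conversions / test_users           # 0.024
holdout_rate = holdout_conversions / holdout_users  # 0.020

# Incremental lift: extra conversions caused by the campaign, per user.
absolute_lift = test_rate - holdout_rate         # 0.004
relative_lift = absolute_lift / holdout_rate     # 0.20, i.e. 20% lift

# Incrementality rate: the share of the test group's conversions that
# are truly incremental -- the rest would have happened anyway.
incrementality_rate = absolute_lift / test_rate  # ~0.167
```

In the branded-search example below, an incrementality rate of 0.10 would mean only 10% of the conversions credited to the ads were actually caused by them.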
Real-World Example
An incrementality test shows that branded search ads produce only 10% incremental conversions — the other 90% would have come through organic search anyway.
Common Mistakes
Running the test for too short a period to reach statistical significance
Not ensuring the holdout group is truly isolated from exposure
Related Terms
Holdout Testing
A testing method where a control group is deliberately excluded from a campaign to measure its true incremental impact.
Data-Driven Attribution
An attribution model that uses machine learning to analyze your actual conversion data and assign credit to each touchpoint based on its real contribution.
Marketing Mix Modeling (MMM)
A statistical method that measures the impact of all marketing channels — including offline — on overall business outcomes like revenue.
Incrementality Testing FAQs
How is incrementality testing different from A/B testing?
A/B testing compares two versions of something; incrementality testing compares "something vs. nothing" to measure the true causal impact of the marketing activity.
Which channels benefit most from incrementality testing?
Retargeting and branded search are the most common candidates because they often claim credit for conversions that would have happened organically.
Ready to Get Started?
Get matched with a vetted specialist in 48 hours. No recruitment fees, no lengthy hiring process, just results.