Incrementality is the hardest measurement problem in retail media — and arguably the most important. Every retail media platform reports attributed revenue: sales that occurred after a consumer interacted with an ad, within the attribution window. But attributed revenue is not the same as caused revenue. The platform is measuring correlation, not causation.
A consumer who buys laundry detergent every two weeks on Amazon may click a Sponsored Products ad on week 1 of a campaign and be counted as an attributed conversion. But they would have repurchased regardless. The ad captured their click on the way to a purchase they were already going to make. This is cannibalization — attributed revenue that was not incremental.
Incrementality asks: if this consumer had never seen the ad, would they still have purchased? If yes, the attributed revenue is not incremental. If no, the ad caused the purchase, and that revenue is genuinely incremental.
Attributed ROAS vs. Incremental ROAS (iROAS)
Attributed ROAS uses total attributed revenue in the numerator. iROAS uses only the revenue that would not have occurred without advertising. iROAS is always lower than attributed ROAS for any campaign with meaningful cannibalization — and cannibalization is present in virtually every retail media campaign to some degree.
| Campaign Type | Typical Attributed ROAS | Typical Incrementality Rate | Estimated iROAS |
|---|---|---|---|
| Branded keyword (own brand) | 6–12x | 35–55% | 2.1–6.6x |
| Category keyword (non-branded) | 3–6x | 55–75% | 1.7–4.5x |
| Conquest (competitor brand) | 2–4x | 70–85% | 1.4–3.4x |
| Retargeting (cart abandoners) | 5–15x | 25–45% | 1.25–6.75x |
| Display / awareness | 2–5x | 40–60% | 0.8–3x |
These ranges are illustrative — actual incrementality rates vary significantly by brand maturity, category competition, and specific campaign configuration. Branded keyword campaigns typically show the highest cannibalization (lowest incrementality) because consumers already searching for a brand name have high organic purchase probability.
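The Estimated iROAS column is simple arithmetic: attributed ROAS multiplied by the incrementality rate. A minimal sketch in Python (the function name and example figures are illustrative, not a standard formula from any platform):

```python
def estimated_iroas(attributed_roas: float, incrementality_rate: float) -> float:
    """Discount attributed ROAS by the share of conversions the ads
    actually caused. incrementality_rate is a fraction, e.g. 0.40."""
    return attributed_roas * incrementality_rate

# Branded keyword example: 8x attributed ROAS at 40% incrementality
print(estimated_iroas(8.0, 0.40))  # 3.2
```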
Retargeting campaigns often show extremely high attributed ROAS (10x or more) because they target consumers who already have high purchase intent: cart abandoners and recent product page visitors. But because these consumers were likely to convert anyway, incrementality rates are low. The high attributed ROAS is largely borrowed from organic conversions, and true iROAS on retargeting campaigns is often well below what attributed figures suggest.
How to Measure Incrementality
Holdout testing (gold standard)
A holdout test randomly assigns consumers or geographic markets to exposed and unexposed (holdout) groups. The campaign runs normally for the exposed group; the holdout group either sees no ads or sees ads for a different product. After the test period, compare conversion rates between groups.
Incrementality Rate = (Exposed Conversion Rate − Holdout Conversion Rate) / Exposed Conversion Rate
If the exposed group converts at 2.4% and the holdout group converts at 1.6%, incrementality = (2.4 − 1.6) / 2.4 = 33%. That means 33% of attributed conversions are incremental; 67% would have occurred without the campaign.
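A minimal sketch of this calculation in Python (the function name is ours; a real analysis would also attach a confidence interval before acting on the point estimate):

```python
def incrementality_rate(exposed_cr: float, holdout_cr: float) -> float:
    """Fraction of exposed-group conversions caused by the campaign.

    exposed_cr: conversion rate of the group that saw ads
    holdout_cr: conversion rate of the randomized holdout group
    """
    if exposed_cr <= 0:
        raise ValueError("exposed conversion rate must be positive")
    return (exposed_cr - holdout_cr) / exposed_cr

# Worked example from above: 2.4% exposed vs. 1.6% holdout
print(f"{incrementality_rate(0.024, 0.016):.0%}")  # 33%
```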
Amazon Marketing Cloud (AMC)
Amazon Marketing Cloud supports custom holdout analysis through its clean room environment. Advertisers can run SQL queries that compare conversion behavior between exposed and unexposed user groups, enabling incrementality measurement at campaign or ad type level. AMC access requires enrollment in Amazon's advertiser program and SQL competency; most mid-market agencies use third-party tools to interface with AMC data.
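As a hedged sketch of the shape such a query takes, the snippet below holds a simplified exposed-versus-holdout query. The table and column names are placeholders, not AMC's actual schema; the structure is the point: bucket users by exposure, then compare conversion rates.

```python
# Placeholder schema for illustration only -- AMC's real table and
# column names differ and depend on the advertiser's data sources.
EXPOSED_VS_HOLDOUT = """
WITH users AS (
    SELECT user_id,
           MAX(CASE WHEN impressions > 0 THEN 1 ELSE 0 END) AS exposed
    FROM   ad_impressions            -- placeholder table name
    GROUP  BY user_id
)
SELECT u.exposed,
       COUNT(DISTINCT u.user_id) AS users,
       COUNT(DISTINCT c.user_id) AS converters,
       COUNT(DISTINCT c.user_id) * 1.0
           / COUNT(DISTINCT u.user_id) AS conversion_rate
FROM   users u
LEFT JOIN conversions c              -- placeholder table name
       ON u.user_id = c.user_id
GROUP  BY u.exposed
"""
```

The two conversion rates this returns feed the same incrementality calculation shown earlier.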
Geo holdouts
Rather than randomizing individual users (which requires platform support), geo holdouts suppress ads in matched geographic markets and compare sales performance between suppressed and non-suppressed markets. Geo holdouts are logistically simpler but require matched market selection methodology to avoid selection bias — markets with different competitive dynamics or demographic profiles will produce unreliable results.
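A minimal sketch of the comparison, assuming markets are already matched into test/control pairs; a production analysis would match on historical sales, use a difference-in-differences or synthetic control design, and test for significance:

```python
import statistics

def geo_lift(test_sales: list[float], control_sales: list[float]) -> float:
    """Average relative sales lift across matched market pairs.

    test_sales[i] and control_sales[i] are sales for the i-th matched
    pair during the test period; ads ran only in the test markets.
    """
    lifts = [(t - c) / c
             for t, c in zip(test_sales, control_sales, strict=True)]
    return statistics.mean(lifts)

# Hypothetical matched pairs (sales in $k over the test window)
print(f"{geo_lift([112, 98, 105], [100, 95, 99]):.1%}")  # 7.1%
```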
New-to-Brand as a proxy
Amazon's New-to-Brand metric is an approximation of incrementality: conversions from consumers who haven't purchased from the brand in the trailing 12 months are more likely to represent genuine new customer acquisition than repeat purchasers who would have bought anyway. High NTB rates correlate with higher incrementality, though they are not a precise substitute for holdout testing.
Why Incrementality Is Not Reported Natively
Retail media platforms have a structural incentive to maximize attributed revenue, not incremental revenue. A platform that measures and reports its own cannibalization rate is, in effect, telling advertisers that a significant portion of their spend is wasteful — which is unlikely to increase ad spend.
Attribution windows are set by platforms to maximize the number of conversions credited to ads. Longer windows capture more delayed purchases; view-through attribution counts purchases from consumers who may never have noticed the ad. These choices increase reported ROAS and make the ad product look more effective, regardless of actual incremental impact.
This is not unique to retail media — it is a structural feature of all self-reported advertising measurement. The solution is external measurement: holdout tests designed and controlled by the advertiser or their agency, not the platform.
Incrementality and Normalized ROAS
Incrementality measurement and ROAS normalization address different problems and are complementary, not competing approaches.
Normalization corrects for methodological differences between platforms — making cross-platform ROAS figures comparable by adjusting for window length, model type, and view-through inclusion. It answers: "which platform is actually performing better?"
Incrementality measures the true causal impact within a single platform or campaign — answering "how much of this platform's performance is actually caused by the advertising?"
The most rigorous approach applies both: normalize ROAS across platforms to enable fair comparison, then apply incrementality rates from holdout testing to adjust for cannibalization. The result is incremental normalized ROAS (iNROAS) — the most accurate signal available for budget allocation decisions.
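A minimal sketch of the combined calculation; the simple multiplicative form here is an illustration, not a description of how any particular normalization pipeline applies its adjustments:

```python
def inroas(attributed_roas: float,
           normalization_factor: float,
           incrementality_rate: float) -> float:
    """Incremental normalized ROAS: adjust for attribution methodology
    first, then discount for cannibalization.

    normalization_factor: cross-platform adjustment for window length,
    model type, and view-through inclusion (1.0 = no adjustment).
    incrementality_rate: fraction of conversions the ads caused,
    taken from holdout testing.
    """
    return attributed_roas * normalization_factor * incrementality_rate

# Hypothetical: 9x attributed, 0.85 normalization, 45% incrementality
print(round(inroas(9.0, 0.85, 0.45), 2))  # 3.44
```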
Frequently Asked Questions
What is the difference between attribution and incrementality?
Attribution determines which ad receives credit for a conversion that occurred. Incrementality asks whether the ad actually caused the conversion, or whether the purchase would have happened anyway without it. Attribution is a measurement methodology; incrementality is a causal question. High attributed ROAS does not imply high incrementality. Branded keyword campaigns routinely show 8–12x attributed ROAS with 30–50% incrementality because they're capturing credit for organic purchases from loyal customers.
How much of attributed revenue is typically incremental?
Industry research and practitioner experience suggest that across all retail media campaign types, incremental revenue represents 40–70% of attributed revenue on average, meaning 30–60% of attributed ROAS reflects cannibalization of organic sales. This varies enormously by campaign type: branded keyword campaigns may be 35–45% incremental; conquest and prospecting campaigns may be 65–80% incremental. These are rough estimates; actual rates require holdout testing for each campaign configuration.
Can incrementality be measured without holdout testing?
Not with precision. Some proxies exist: Amazon's New-to-Brand rate approximates customer acquisition incrementality, and media mix modeling can estimate incremental contribution at the channel level over long time horizons. But neither provides the campaign-level measurement a holdout test does. If running controlled holdout tests is not operationally feasible, applying a conservative incrementality discount (e.g., multiplying attributed ROAS by 0.6–0.7 for mature branded campaigns) is a pragmatic approach for planning purposes; it is better than treating 100% of attributed revenue as incremental.
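As a sketch of that planning discount (the 0.6–0.7 defaults mirror the range suggested above for mature branded campaigns and should be tuned per campaign type):

```python
def planning_iroas_band(attributed_roas: float,
                        low: float = 0.6,
                        high: float = 0.7) -> tuple[float, float]:
    """Conservative iROAS band for planning when no holdout data
    is available. Defaults suit mature branded campaigns."""
    return attributed_roas * low, attributed_roas * high

# 10x attributed ROAS -> plan against roughly 6-7x incremental
print(planning_iroas_band(10.0))  # about (6.0, 7.0)
```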
RetailNorm's normalization pipeline corrects for the attribution methodology differences between platforms — the first layer of measurement accuracy. Pairing normalized ROAS with incrementality estimates from holdout testing gives agencies the most reliable performance signal available for cross-platform budget allocation.