Guide · March 2026 · 7 min read

The Cross-Platform ROAS Problem in Retail Media

Every platform’s ROAS figure is internally consistent and externally incomparable. Placing Amazon, Walmart, and Criteo numbers side by side in a report doesn’t produce a comparison—it produces a collision of different measurement systems that looks like a comparison.

The retail media industry has a measurement problem that no amount of better reporting tooling can solve at the platform level. Each network defines its own rules for what counts as a conversion, how long after an ad interaction a purchase can be credited, and which types of interactions qualify. The result is a set of ROAS figures that are precise within their own systems and meaningless when compared across systems.

This is not a flaw that platforms are working to fix. It is a structural feature that serves their interests. Generous attribution windows and inclusive interaction counting make performance look better, which makes the ad product more attractive to buyers. There is no commercial incentive for any retailer to adopt a measurement standard that would make their numbers look worse in comparison.

The Three Layers of Incompatibility

Cross-platform ROAS comparison fails at three independent levels. Each layer compounds the others, meaning the total distortion is multiplicative rather than additive.
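For example, if a longer window inflates attributed revenue by 20% and view-through inclusion inflates it by another 30%, the combined overstatement is 1.20 × 1.30 = 1.56, a 56% distortion rather than the 50% that simple addition would suggest.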

Layer 1: Attribution window length

The attribution window defines how many days after an ad interaction a resulting purchase can be credited to that ad. A 30-day window will always report more revenue than a 14-day window applied to the same campaign, because it captures delayed purchases that fall outside the shorter boundary.

For fast-moving consumer goods, the difference between a 14-day and a 30-day window typically represents 12–18% additional attributed revenue. For longer-consideration categories like electronics or appliances, the gap can reach 25–35%. A platform using a 30-day window will therefore appear to deliver 12–35% higher ROAS on an identical campaign than a platform using a 14-day window, purely as a matter of measurement methodology.
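To make the window effect concrete, here is a minimal sketch of a window correction in Python, assuming purchase delays follow an exponential distribution. The function names and the 7-day mean delay are illustrative assumptions, not calibrated values or any platform's API:

```python
import math

def window_capture_share(window_days: float, mean_delay_days: float) -> float:
    """Share of eventual attributed purchases that fall inside the
    attribution window, assuming purchase delays are exponentially
    distributed with the given mean (an illustrative assumption)."""
    return 1 - math.exp(-window_days / mean_delay_days)

def window_correction_factor(platform_window_days: float,
                             standard_window_days: float,
                             mean_delay_days: float) -> float:
    """Multiplier that rescales revenue reported under the platform's
    window onto the common standard window."""
    return (window_capture_share(standard_window_days, mean_delay_days)
            / window_capture_share(platform_window_days, mean_delay_days))

# FMCG-style example with an assumed 7-day mean purchase delay:
factor = window_correction_factor(platform_window_days=30,
                                  standard_window_days=14,
                                  mean_delay_days=7)
print(f"correction factor: {factor:.3f}")
# -> 0.877: under these assumptions the 30-day figure overstates the
#    14-day basis by ~14%, inside the 12-18% range cited above.
```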

Layer 2: Attribution model type

The attribution model determines which touchpoint in the consumer journey receives credit for the conversion. Last-click gives full credit to the final ad interaction before purchase. First-click gives full credit to the initial ad interaction that introduced the consumer to the product. Linear distributes credit evenly across all touchpoints.

First-click models systematically favor upper-funnel awareness campaigns. They credit the ad that generated initial interest, regardless of how many subsequent interactions occurred. Last-click models favor lower-funnel retargeting campaigns that appear just before purchase. Running a first-click platform alongside a last-click platform produces figures that reward different parts of the funnel—making direct comparison misleading for budget allocation decisions.
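The divergence is easiest to see on a single journey. The sketch below distributes credit for one conversion under each of the three models; the journey and touchpoint names are hypothetical:

```python
def credit_shares(touchpoints: list[str], model: str) -> dict[str, float]:
    """Distribute credit for one conversion across a touchpoint journey
    under a given attribution model."""
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    else:
        raise ValueError(f"unknown attribution model: {model}")
    credit: dict[str, float] = {}
    for touchpoint, weight in zip(touchpoints, weights):
        credit[touchpoint] = credit.get(touchpoint, 0.0) + weight
    return credit

# One hypothetical journey: awareness display -> sponsored search -> retargeting.
journey = ["awareness_display", "sponsored_search", "retargeting_display"]
for model in ("first_click", "last_click", "linear"):
    print(f"{model:>11}: {credit_shares(journey, model)}")
# first_click awards everything to the awareness ad, last_click awards
# everything to the retargeting ad, linear splits credit a third each.
```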

Layer 3: Interaction type inclusion

Platforms diverge on whether passive impressions (views) count alongside active clicks as qualifying interactions. A platform that counts view-through conversions will report higher attributed revenue than one counting click-through only, because it captures purchases from consumers who were served an ad impression but never engaged with it.

For display-heavy campaigns, view-through inclusion can inflate reported ROAS by 15–40% compared to click-only measurement. When one platform in a comparison counts views and another counts only clicks, the view-inclusive platform will appear more efficient by a significant margin that has nothing to do with actual performance.
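One common correction approach is to keep click-attributed revenue in full and discount view-attributed revenue to its estimated incremental portion. A minimal sketch, with an assumed revenue split and an assumed 20% lift that would in practice come from incrementality testing:

```python
def discounted_revenue(click_revenue: float, view_revenue: float,
                       view_through_lift: float) -> float:
    """Keep click-attributed revenue in full; keep only the estimated
    incremental fraction of view-attributed revenue."""
    return click_revenue + view_revenue * view_through_lift

# Assumed split for a display-heavy campaign: 65k click / 35k view revenue,
# with an assumed 20% view-through incremental lift.
reported = 65_000 + 35_000                            # platform-reported: 100,000
corrected = discounted_revenue(65_000, 35_000, 0.20)  # 72,000
print(f"reported overstates corrected by {reported / corrected - 1:.0%}")
# -> ~39%, at the top of the 15-40% range cited above.
```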

What This Costs in Practice

The practical consequence of cross-platform ROAS comparison without normalization is systematic budget misallocation. Capital moves toward platforms that report the highest ROAS—but if that ROAS is inflated by longer windows or view-through inclusion, the reallocation moves budget away from the platform that is actually performing better.

| Platform | Reported ROAS | Normalized ROAS | Raw Decision | Correct Decision |
|---|---|---|---|---|
| Amazon Ads | 4.8x | 4.8x | Reduce budget | Increase budget |
| Walmart Connect | 6.2x | 4.1x | Increase budget | Reduce budget |
| Criteo | 5.5x | 3.9x | Increase budget | Reduce budget |

In this example, raw ROAS comparison leads to the opposite budget allocation from what the normalized data supports. Budget moves from Amazon (which is actually performing best) toward Walmart and Criteo (which appear to lead but are inflated by longer windows and view-through inclusion). The client loses real revenue on the same total budget.
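The reversal is just a sort-order flip over the same data. Using the figures from the table above:

```python
# Figures taken from the table above.
platforms = {
    "Amazon Ads":      {"reported": 4.8, "normalized": 4.8},
    "Walmart Connect": {"reported": 6.2, "normalized": 4.1},
    "Criteo":          {"reported": 5.5, "normalized": 3.9},
}

def ranking(metric: str) -> list[str]:
    return sorted(platforms, key=lambda name: platforms[name][metric],
                  reverse=True)

print("raw ranking:       ", ranking("reported"))
print("normalized ranking:", ranking("normalized"))
# raw:        ['Walmart Connect', 'Criteo', 'Amazon Ads']
# normalized: ['Amazon Ads', 'Walmart Connect', 'Criteo']
```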

The agency accountability trap

When reallocated budget delivers lower-than-expected results, the agency is accountable for the recommendation. The platform’s methodology is not questioned—the agency’s judgment is. Normalization creates a defensible paper trail: the recommendation was based on comparable numbers, not platform-reported figures.

Why Industry Standardization Will Not Solve This

The retail media industry has been discussing attribution standardization for years. The IAB, the MRC, and various retail media coalitions have produced measurement frameworks. None have achieved broad platform adoption.

The reason is commercial incentive. Platforms that adopt stricter attribution standards—shorter windows, click-only counting, last-click models—will report lower ROAS figures than competitors using looser methodologies. Lower reported ROAS makes the ad product look less attractive. No platform has a commercial reason to adopt a standard that disadvantages it in buyer comparisons.

As retail media networks proliferate beyond the current 277+ networks globally, the fragmentation worsens rather than resolves. Each new network that launches brings its own measurement rules, adding another incompatible variable to the comparison problem.

The Correct Solution: A Correction Layer

Since platforms will not standardize measurement, the correction must happen outside of the platforms. A correction layer sits between the raw platform exports and the decision-making process, applying systematic normalization to bring all platforms onto a common measurement basis.

An effective correction layer must do three things simultaneously. It must adjust for window length differences using a decay model calibrated by category and purchase cycle. It must discount or exclude view-through revenue based on estimated incremental lift. And it must convert between attribution model types using touchpoint distribution assumptions appropriate to the campaign type.

The corrections cannot be applied as independent, one-off adjustments; because the dimensions interact (view-through conversions, for instance, are themselves subject to window decay), they must be evaluated together as a single combined factor to avoid compounding errors. A platform with a 30-day window, view-through inclusion, and a first-click model requires one correction factor that accounts for all three dimensions at once.
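As an illustration, a combined factor can be computed jointly from a platform's disclosed methodology, following the three-factor product form this article describes. Everything in the sketch below, including the field names, the 14-day click-only last-click standard, and all calibration inputs, is an assumption for demonstration rather than an actual implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class PlatformMethodology:
    """A platform's disclosed attribution settings (field names are
    illustrative; real exports differ by network)."""
    window_days: float
    counts_view_through: bool
    model: str  # e.g. "first_click", "last_click", "linear"

def combined_correction(m: PlatformMethodology,
                        mean_delay_days: float,
                        view_revenue_share: float,
                        view_through_lift: float,
                        model_conversion: float,
                        standard_window_days: float = 14.0) -> float:
    """Single multiplicative factor mapping reported ROAS onto a common
    standard (here: 14-day window, click-only, last-click). Every
    calibration input is an assumption the agency must supply."""
    window = ((1 - math.exp(-standard_window_days / mean_delay_days))
              / (1 - math.exp(-m.window_days / mean_delay_days)))
    view = 1.0
    if m.counts_view_through:
        # Keep click revenue whole; keep only the incremental share of
        # view-attributed revenue.
        view = (1 - view_revenue_share) + view_revenue_share * view_through_lift
    return window * view * model_conversion

methodology = PlatformMethodology(window_days=30, counts_view_through=True,
                                  model="first_click")
factor = combined_correction(methodology, mean_delay_days=7,
                             view_revenue_share=0.35, view_through_lift=0.20,
                             model_conversion=0.95)  # assumed conversion ratio
print(f"normalized ROAS = reported ROAS x {factor:.3f}")  # -> x 0.600
```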

The output is a normalized ROAS figure for each platform that reflects performance under a common standard. Comparisons made against this figure are structurally valid. Budget allocation decisions based on normalized ROAS will consistently outperform decisions based on raw reported figures.

RetailNorm is a correction layer for retail media agencies. It applies a three-factor normalization (window decay × view-through discount × model conversion) across Amazon, Walmart, Criteo, Instacart, and other networks, producing comparable ROAS figures and evidence-based budget allocation recommendations.

Run a normalized comparison on your own platform data →