A media planner at a mid-sized retail media agency manages campaigns for a single client across Amazon Ads, Walmart Connect, and Criteo. At the end of every week, they pull performance data from each platform, drop the numbers into a spreadsheet, and present a consolidated ROAS comparison to the client.
The spreadsheet shows Amazon delivering a 5.2x ROAS, Walmart at 4.1x, and Criteo at 3.8x. The recommendation seems obvious: shift budget toward Amazon.
The problem is that none of these numbers are measuring the same thing. Amazon counts a sale if it happens within 14 days of the last ad click. Walmart counts a sale if it happens within 30 days. Criteo counts a sale if it happens within 7 days of the first ad click. These are not different lenses on the same reality. They are fundamentally different measurement systems producing outputs that cannot be compared without structural correction.
This is not a data quality issue. The data from each platform is accurate within its own rules. The problem is that the rules are different—and when you place the outputs side by side, the comparison is structurally misleading.
What Attribution Windows Actually Measure
An attribution window defines the maximum time that can elapse between an ad interaction and a conversion event for the conversion to be credited to the ad. In effect, the window sets the boundary of causality that each platform is willing to claim.
The three critical dimensions where platforms diverge are the window length, the attribution model, and the interaction types they count. These dimensions compound to produce dramatically different revenue figures from the same underlying consumer behavior.
| Platform | Window length | Attribution model | Interactions counted |
|---|---|---|---|
| Amazon Ads | 14 days | Last-click | Clicks only |
| Walmart Connect | 30 days | Last-click | Clicks + views |
| Criteo | 7 days | First-click | Clicks + views |
| Instacart Ads | 14 days | Last-click | Clicks + views |
| CitrusAd | 28 days | Last-click | Clicks only |
When a platform uses a longer attribution window, it captures more conversions—including some that would have occurred regardless of the ad. A 30-day window can never report less attributed revenue than a 14-day window applied to the same campaign, and in practice it reports meaningfully more, because the longer window catches delayed purchases that fall outside the shorter boundary.
A Numerical Example
Consider a household goods brand running identical monthly campaigns across Amazon and Walmart with $50,000 ad spend on each platform. The underlying consumer behavior is the same: similar products, similar audiences, similar purchase cycles.
Amazon, with its 14-day last-click window, reports $260,000 in attributed revenue. ROAS: 5.2x.
Walmart, with its 30-day window that includes view-through conversions, reports $310,000 in attributed revenue. ROAS: 6.2x.
The raw numbers suggest Walmart is outperforming Amazon by a full ROAS point. But when you normalize both platforms to a standard 14-day last-click measurement, the picture shifts. Walmart’s normalized revenue drops to approximately $221,000—a 4.4x ROAS. The “extra” revenue came from two sources: conversions that occurred between day 15 and day 30 (window inflation), and view-through conversions that Walmart counts but Amazon does not (interaction-type inflation).
Under normalized measurement, Amazon at 5.2x is actually outperforming Walmart at 4.4x—the exact opposite of what the raw data suggested.
If the planner follows the raw data, they shift budget from Amazon to Walmart—moving money from the higher-performing channel to the lower-performing one. The client loses money, and the agency cannot explain why performance declined after the reallocation.
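The arithmetic behind the normalization is simple to sketch. In this minimal Python illustration, the two coefficients (a 14-day capture share of 85% and a click-through share of 84%) are assumptions chosen to reproduce the figures above, not platform-published values:

```python
# Illustrative normalization arithmetic. Both coefficients are assumed
# for this example; real values would be estimated per category from
# conversion-lag and view-through data.

AD_SPEND = 50_000

# Walmart as reported: 30-day last-click window, clicks + views counted.
walmart_reported = 310_000

window_capture_14d = 0.85   # assumed share of 30-day revenue inside 14 days
click_through_share = 0.84  # assumed share of revenue from clicks, not views

walmart_normalized = walmart_reported * window_capture_14d * click_through_share

print(f"Reported ROAS:   {walmart_reported / AD_SPEND:.1f}x")    # 6.2x
print(f"Normalized ROAS: {walmart_normalized / AD_SPEND:.1f}x")  # 4.4x
```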
Why Excel Cannot Solve This
The standard industry workaround is manual adjustment in spreadsheets. A planner might apply a flat discount to Walmart’s numbers (“cut 20% because of the longer window”) or exclude certain conversion types from the raw exports. This approach has three structural flaws.
The coefficients are not static
The relationship between a 30-day and 14-day attribution window is not a fixed ratio. It varies by product category, purchase cycle length, seasonality, and promotional cadence. A blanket 20% discount might be approximately correct for a pantry staple brand but wildly wrong for a consumer electronics campaign where consideration cycles are longer. Using a single coefficient across all clients and categories introduces systematic error.
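A toy calculation shows how the flat haircut misfires in both directions. The 14-day capture rates below are hypothetical, but the pattern is the point: one coefficient cannot fit two categories with different purchase cycles.

```python
# Hypothetical 14-day capture rates for two categories under a 30-day window.
# A blanket "cut 20%" heuristic treats both identically.

reported_30d = 100_000  # attributed revenue under a 30-day window

capture_14d = {
    "pantry staples": 0.85,        # short purchase cycle
    "consumer electronics": 0.72,  # long consideration cycle
}

flat_estimate = reported_30d * 0.80  # the flat 20% haircut

for category, rate in capture_14d.items():
    true_14d = reported_30d * rate
    error_pct = (flat_estimate - true_14d) / true_14d * 100
    print(f"{category}: flat estimate off by {error_pct:+.0f}%")
```

Under these assumptions, the same haircut understates the pantry brand's performance by about 6% and overstates the electronics campaign's by about 11%: systematic error in both directions.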
View-through adjustments require granular data
Separating view-through from click-through attributed revenue is not always possible from standard platform exports. When it is possible, the planner must manually identify and remove view-through revenue from each platform’s export, then recalculate ROAS—a process that is both time-consuming and error-prone at scale.
The process does not survive client review
When a head of commerce at a brand asks the agency to explain the normalization methodology, “we apply a flat percentage haircut based on gut feel” does not inspire confidence. Agencies need a defensible, systematic methodology that can withstand scrutiny from sophisticated clients who understand measurement.
Excel is not the wrong tool because it lacks computational power. It is the wrong tool because it forces planners to make ad-hoc assumptions where systematic correction is needed.
Structural Normalization: A Different Approach
Structural normalization addresses the root cause: it adjusts platform-reported figures to a common measurement standard before comparison. Rather than discounting outputs after the fact, normalization applies conversion coefficients that account for the specific differences between each platform’s attribution configuration.
The normalization process applies corrections across three dimensions simultaneously.
Window length adjustment
Converting a 30-day attributed revenue figure to a 14-day equivalent requires understanding the revenue decay curve—how much incremental revenue is captured between day 14 and day 30. This is not a flat percentage: incremental conversions taper off over time, so cumulative capture follows a roughly logarithmic curve whose shape varies by category. For fast-moving consumer goods, approximately 85% of attributed revenue occurs within the first 14 days of a 30-day window. For considered purchases, that share drops to 70–75%.
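As a sketch, that capture curve can be modeled with a single category-specific steepness parameter. Both the functional form and the parameter values here are assumptions, chosen so the curve reproduces the 85% and roughly 74% figures above:

```python
import math

def capture_share(t_days: float, window_days: float, k: float) -> float:
    """Assumed cumulative capture curve: the share of a window_days
    attribution window's revenue that lands within the first t_days.
    Logarithmic shape; k is a category-specific steepness parameter."""
    return math.log(1 + k * t_days) / math.log(1 + k * window_days)

# Hypothetical steepness values for two category archetypes.
for category, k in [("fast-moving consumer goods", 5.0),
                    ("considered purchases", 0.4)]:
    share = capture_share(14, 30, k)
    print(f"{category}: {share:.0%} of 30-day revenue inside 14 days")
```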
Attribution model conversion
First-click and last-click models distribute credit differently across the purchase journey. A first-click model credits the ad that introduced the consumer to the product; a last-click model credits the ad that preceded the final purchase. Converting between them requires understanding the typical number of touchpoints in the purchase journey and how credit distribution shifts between models. In retail media, first-click models typically attribute 8–15% more revenue to upper-funnel campaigns than last-click models.
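A minimal example makes the redistribution concrete. The journey data here is invented and deliberately exaggerates the effect; the 8–15% figure above reflects real-world journey mixes, not this toy:

```python
# Toy journeys: ordered campaign touchpoints plus order value.
journeys = [
    (["display_upper", "search_lower"], 40.0),
    (["display_upper", "display_upper", "search_lower"], 60.0),
    (["search_lower"], 25.0),
]

def attribute(journeys, model):
    """Give 100% of each order's revenue to one touchpoint:
    the first under first-click, the last under last-click."""
    credit = {}
    for touchpoints, revenue in journeys:
        winner = touchpoints[0] if model == "first-click" else touchpoints[-1]
        credit[winner] = credit.get(winner, 0.0) + revenue
    return credit

print(attribute(journeys, "first-click"))  # upper-funnel credited 100.0
print(attribute(journeys, "last-click"))   # upper-funnel credited 0.0
```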
Interaction type standardization
Platforms that count view-through conversions inherently report higher attributed revenue than those counting only click-through. View-through conversions—where a consumer sees but does not click an ad, then later purchases—are legitimate signals, but including them alongside click-only metrics inflates the comparison. Normalization applies a view-through discount factor that reflects the incremental lift attributable to ad views versus organic purchase intent.
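In code, the standardization reduces to a discount on the view-through slice of reported revenue. Both inputs below (the view-through share and the incrementality factor) are assumed for illustration:

```python
def standardize_interactions(reported: float, vt_share: float,
                             vt_incrementality: float = 0.0) -> float:
    """Discount view-through revenue so the figure is comparable to a
    clicks-only platform. vt_share is the fraction of reported revenue
    attributed to views; vt_incrementality is the assumed share of that
    revenue which is genuinely incremental (0 drops it entirely)."""
    click_revenue = reported * (1 - vt_share)
    view_revenue = reported * vt_share
    return click_revenue + view_revenue * vt_incrementality

# Strict click-only comparison: drop view-through entirely.
print(standardize_interactions(310_000, vt_share=0.16))  # 260,400
# Partial credit: treat 25% of view-through revenue as incremental.
print(standardize_interactions(310_000, vt_share=0.16, vt_incrementality=0.25))
```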
What Normalized Data Makes Possible
Once all platforms are measured against a common standard, several decisions that were previously guesswork become tractable.
Budget allocation across platforms can be based on actual comparative performance rather than platform-specific metrics that reward longer windows or looser attribution. A planner can identify which platform is delivering the most efficient return per dollar and reallocate accordingly—with confidence that the comparison is fair.
Client reporting gains credibility when the methodology is consistent and defensible. Instead of presenting numbers that each platform self-reports (where every platform looks good within its own rules), the agency delivers a unified view that the client can trust.
Performance trends become visible across networks. If normalized ROAS on Walmart is declining while Amazon holds steady, that signal is meaningful—it reflects actual performance changes, not measurement artifacts.
The Market Context
Retail media ad spend is projected to exceed $100 billion in 2025, distributed across more than 200 networks globally. Each network operates its own measurement methodology. The industry has discussed standardization for years—and it has not arrived. Retailers have no commercial incentive to adopt a common attribution standard; generous measurement makes their ad product look more effective.
Large holding companies like Publicis and Omnicom have built proprietary normalization layers for their enterprise clients. Mid-market agencies—the 5-to-50-person firms that manage the majority of retail media accounts—do not have access to these tools. They are left with Excel and manual approximation.
The measurement gap is not closing. As retail media networks proliferate, it is widening. Every new network that launches brings its own attribution rules, adding another variable to an already unmanageable comparison.
Toward a Common Measurement Layer
The solution to attribution window distortion is not persuading platforms to change their measurement. It is building a correction layer that sits between the raw platform data and the decision-making process. This layer must apply systematic, category-aware normalization coefficients, operate transparently enough to survive methodology review, and produce outputs that are directly actionable—not just analytically interesting.
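Composed, the three corrections described earlier form the core of such a layer. The sketch below is conceptual, with hypothetical coefficients; it is not a description of any vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class PlatformCoefficients:
    """Hypothetical per-platform, per-category normalization coefficients."""
    window_capture_14d: float   # share of windowed revenue inside 14 days
    model_adjustment: float     # attribution-model correction factor
    click_through_share: float  # share of revenue from clicks, not views

def normalize(reported_revenue: float, c: PlatformCoefficients) -> float:
    """Convert platform-reported revenue to a 14-day, last-click,
    click-only equivalent by composing the three corrections."""
    return (reported_revenue
            * c.window_capture_14d
            * c.model_adjustment
            * c.click_through_share)

walmart = PlatformCoefficients(window_capture_14d=0.85,
                               model_adjustment=1.0,  # already last-click
                               click_through_share=0.84)
print(normalize(310_000, walmart))  # ~221,000, the figure from the example
```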
RetailNorm applies standardized 14-day last-click normalization across every retail media platform, producing comparable ROAS figures and evidence-based budget allocation recommendations. The methodology is deterministic, auditable, and improves as more category-level data accumulates.