Ad spend divided by attributed revenue, expressed as a percentage. The inverse of ROAS.
ACOS = (Ad Spend / Attributed Revenue) × 100. An ACOS of 20% means the advertiser spent $20 for every $100 of attributed revenue, which corresponds to a 5x ROAS. Amazon popularized ACOS as the primary reporting metric for sponsored ads; most other retail media networks report ROAS instead.
ACOS is particularly useful for understanding profit margins: if a product has a 30% gross margin, any ACOS above 30% means the campaign is losing money at the product level before accounting for overhead. The break-even ACOS is the gross margin percentage.
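The arithmetic can be expressed directly. A minimal sketch in Python, using the hypothetical figures from the definitions above:

```python
def acos(ad_spend, attributed_revenue):
    """ACOS = (Ad Spend / Attributed Revenue) x 100, as a percentage."""
    return ad_spend / attributed_revenue * 100

def roas(ad_spend, attributed_revenue):
    """ROAS is the inverse ratio: attributed revenue per dollar of spend."""
    return attributed_revenue / ad_spend

print(acos(20.0, 100.0))   # 20.0 (a 20% ACOS...)
print(roas(20.0, 100.0))   # 5.0  (...is a 5x ROAS)

# Break-even ACOS equals the gross margin percentage:
gross_margin_pct = 30.0
campaign_acos = acos(35.0, 100.0)
print(campaign_acos > gross_margin_pct)   # True: losing money at the product level
```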
Like ROAS, ACOS is subject to attribution window distortion when compared across platforms. An ACOS figure from Amazon's 14-day window is not directly comparable to one from a 30-day window platform without normalization. See also: TACOS, which includes both paid and organic revenue in the denominator.
Amazon's unique 10-character product identifier, used as the base unit for campaign targeting and attribution.
Every product listed on Amazon receives a unique ASIN. In retail media, ASINs serve as the atomic unit of campaign structure: advertisers target specific ASINs with sponsored ads, and attribution ties resulting sales back to the advertised ASIN (or, in the case of brand halo, to related ASINs in the same portfolio).
For agencies managing large catalogs, ASIN-level performance data is essential for identifying which products drive efficient ROAS versus which consume budget without generating proportionate returns. Most advanced Amazon advertising strategies involve tiered ASIN prioritization: hero ASINs receive aggressive bidding, supporting ASINs receive maintenance-level spend, and tail ASINs may be excluded entirely.
ASIN data does not transfer to other retail media platforms, which use their own SKU or item identifier systems. When reporting cross-platform performance, agencies must map Amazon ASINs to Walmart item IDs and Criteo product IDs manually or through a shared product catalog.
The rule that determines which ad touchpoint receives credit for a conversion when multiple interactions precede the purchase.
An attribution model answers the question: when a consumer interacts with multiple ads before purchasing, how should credit for that purchase be distributed? Different models produce dramatically different ROAS figures for the same underlying behavior, making cross-platform comparison unreliable when platforms use different models.
The major attribution model types are: Last-click (100% credit to the final ad before purchase — Amazon's default), first-click (100% credit to the first ad in the journey — Criteo's approach), linear (credit distributed evenly across all touchpoints), time-decay (more credit to touchpoints closer to conversion), and data-driven (credit distributed algorithmically based on observed conversion lift).
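The rules-based models can be made concrete in a few lines of Python. Data-driven attribution is omitted because it requires a trained model rather than a fixed rule, and the halving scheme used for time-decay here is one illustrative weighting among many:

```python
def distribute_credit(touchpoints, model):
    """Distribute one conversion's credit across an ordered journey of touchpoints."""
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        # One common choice: each later touch counts twice the previous one.
        raw = [2.0 ** i for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touchpoints, weights))

journey = ["display_awareness", "sponsored_search", "retargeting"]
print(distribute_credit(journey, "last_click"))   # all credit to retargeting
print(distribute_credit(journey, "first_click"))  # all credit to display
print(distribute_credit(journey, "time_decay"))   # weights 1/7, 2/7, 4/7
```

Running all four models over the same journey makes the core point tangible: identical consumer behavior yields entirely different credit splits depending on the rule chosen.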
In retail media specifically, the choice between first-click and last-click models has a systematic directional effect: last-click models favor lower-funnel retargeting campaigns that appear just before purchase, while first-click models favor upper-funnel awareness campaigns that introduced the consumer to the brand. Budget allocation based on model-specific ROAS without normalization systematically over-invests in whichever funnel stage the model happens to favor.
The time period after an ad interaction during which a resulting purchase is credited to that ad.
An attribution window defines the maximum delay between an ad interaction (a click or a view) and a conversion event for the platform to credit that conversion to the ad. If a consumer clicks an ad on day 1 and purchases on day 18, the sale is attributed to the ad only if the window is 18 days or longer.
| Platform | Default Window | Model | Views Counted |
|---|---|---|---|
| Amazon Ads | 14 days | Last-click | No |
| Walmart Connect | 30 days | Last-click | Yes |
| Criteo | 30 days | First-click | Yes |
| Instacart Ads | 14 days | Last-click | Yes |
| CitrusAd | 28 days | Last-click | No |
| Tesco Media | 14 days | Last-click | Yes |
The window length is the primary driver of cross-platform ROAS incomparability. A longer window captures more delayed conversions, inflating reported revenue and ROAS. For FMCG categories, the gap between 14-day and 30-day windows represents roughly 12–20% additional attributed revenue. For considered-purchase categories, the gap can reach 30%. This means platforms with longer windows will appear to outperform those with shorter windows even when underlying performance is identical.
Normalization converts all platforms to a common window (RetailNorm uses 14-day last-click) using revenue decay curves that model how attributed revenue accumulates over time within each window.
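The normalization step can be sketched as follows, assuming an exponential accumulation curve with a 7-day half-life as an illustrative stand-in for fitted per-category decay curves (the actual curve parameters would be estimated from data):

```python
def cumulative_share(day, half_life=7.0):
    """Fraction of a window's total attributable revenue accumulated by `day`,
    under an assumed exponential accumulation model (illustrative parameters,
    not actual fitted curves)."""
    return 1.0 - 0.5 ** (day / half_life)

def normalize_to_window(revenue, source_window, target_window=14, half_life=7.0):
    """Rescale attributed revenue from one window length to another."""
    return (revenue * cumulative_share(target_window, half_life)
            / cumulative_share(source_window, half_life))

# A 30-day attributed revenue figure restated on a 14-day basis:
rev_30d = 120_000.0
print(round(normalize_to_window(rev_30d, 30)))   # roughly 79% of the 30-day figure
```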
Serving ads to defined consumer segments based on behavioral, demographic, or purchase data rather than search intent or keyword context.
Audience targeting in retail media leverages retailer first-party data — purchase history, browsing behavior, loyalty membership — to define which consumers see a given ad. Unlike keyword targeting, which reaches consumers based on what they search for, audience targeting reaches consumers based on who they are and what they've historically purchased.
Common audience segments in retail media include: purchasers of a competitor product (conquest targeting), lapsed buyers of the advertiser's own brand (reactivation), and high-value category shoppers who haven't yet purchased the brand (prospecting). Retailers with large loyalty programs — Walmart, Kroger, Target — can offer particularly rich audience segments because they have purchase-level data tied to individual shoppers across both online and in-store channels.
Audience-targeted campaigns typically have different attribution characteristics than search-driven campaigns. Because the consumer isn't expressing immediate purchase intent, the time between ad exposure and conversion is often longer, making audience campaigns more susceptible to attribution window distortion. A 30-day window captures far more audience-driven conversions than a 14-day window, disproportionately inflating reported ROAS for display and audience campaigns relative to sponsored search.
The aggregate ROAS across all retail media platforms combined, calculated by dividing total attributed revenue by total ad spend.
Blended ROAS collapses performance across multiple platforms into a single metric: total attributed revenue across all networks divided by total ad spend across all networks. It is useful as a headline KPI for client reporting but masks the individual platform dynamics that drive budget allocation decisions.
A blended ROAS figure is only meaningful when the underlying platform figures are normalized. If Amazon contributes 14-day revenue, Walmart contributes 30-day revenue, and Criteo contributes first-click revenue, the blended number aggregates three incompatible measurements. The result is a figure that shifts based on platform mix rather than actual performance changes. If budget moves from Amazon to Walmart, blended ROAS may rise purely because of Walmart's more generous attribution — while real revenue declines.
Agencies that use normalized blended ROAS in client reporting provide a more defensible and consistent metric. Changes in normalized blended ROAS reflect actual performance changes rather than shifts in the mix of platforms and their measurement methodologies.
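The difference between raw and normalized blending can be sketched as follows. All spend, revenue, and normalization figures are hypothetical; in practice the window factors come from fitted revenue decay curves:

```python
# Hypothetical per-platform figures: (spend, attributed_revenue, window_days)
platforms = {
    "amazon":  (50_000, 250_000, 14),
    "walmart": (30_000, 180_000, 30),
    "criteo":  (20_000,  90_000, 30),
}

# Assumed factors for restating each window on a 14-day basis (illustrative).
to_14d = {14: 1.00, 30: 0.80}

total_spend = sum(s for s, _, _ in platforms.values())
raw_blended = sum(r for _, r, _ in platforms.values()) / total_spend
norm_blended = sum(r * to_14d[w] for _, r, w in platforms.values()) / total_spend

print(f"raw blended ROAS:        {raw_blended:.2f}x")    # mixes 14- and 30-day revenue
print(f"normalized blended ROAS: {norm_blended:.2f}x")   # consistent 14-day basis
```

Shifting budget toward the 30-day platforms raises the raw figure without any change in underlying performance; the normalized figure is insensitive to that mix shift.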
The distribution of total ad spend across platforms, campaigns, or ad types to maximize return on the total investment.
Budget allocation is the decision of how to divide a fixed total spend across competing platforms and campaigns. It is one of the highest-leverage decisions in retail media management: the difference between an optimal and a suboptimal allocation on a $100,000 monthly budget can represent $10,000–25,000 in additional attributed revenue on identical spend.
Optimal budget allocation is not achieved by moving money toward the platform with the highest average ROAS. Average ROAS is a historical ratio that reflects past performance across all spend levels; it says nothing about what the next dollar will generate on each platform. The correct input for allocation is marginal ROAS — the incremental return from the next unit of spend. Optimal allocation equalizes marginal ROAS across all platforms: if platform A generates more at the margin than platform B, shift a dollar from B to A.
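A minimal version of this logic is a greedy allocator that repeatedly funds whichever platform offers the highest marginal ROAS; with diminishing-returns curves this converges toward equalized marginals. The response curves below are hypothetical saturating functions, not fitted data:

```python
def marginal_roas(curve, spend, step=1_000):
    """Approximate return generated by the next `step` dollars on a platform."""
    return (curve(spend + step) - curve(spend)) / step

def allocate(curves, total_budget, step=1_000):
    """Greedy allocation: fund the platform with the highest marginal ROAS,
    one increment at a time."""
    spend = {name: 0 for name in curves}
    for _ in range(total_budget // step):
        best = max(curves, key=lambda name: marginal_roas(curves[name], spend[name], step))
        spend[best] += step
    return spend

# Hypothetical saturating revenue curves (not fitted to real data):
curves = {
    "amazon":  lambda s: 400_000 * s / (s + 60_000),
    "walmart": lambda s: 250_000 * s / (s + 40_000),
}
print(allocate(curves, 100_000))
```

Note that the allocator never consults average ROAS at all; only the slope of each curve at the current spend level matters.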
A second common error is allocating based on raw reported ROAS figures without normalization. If Walmart reports a higher ROAS than Amazon due to a longer attribution window rather than better performance, moving budget to Walmart on the basis of that figure destroys real value while appearing to optimize toward a higher number.
Incremental sales of non-advertised products in the same brand portfolio that result from a sponsored ad campaign on a specific product.
When a consumer discovers a brand through a sponsored ad for one product and subsequently purchases a different product from the same brand, the sales of the non-advertised product constitute brand halo. For example, a consumer clicks a Sponsored Products ad for a brand's shampoo, lands on the detail page, and then navigates to purchase the brand's conditioner — the conditioner sale is a halo sale.
Amazon's New-to-Brand reporting distinguishes halo effects from direct attributed sales. Some third-party analytics platforms further break out halo sales at the ASIN level, allowing advertisers to understand which hero products drive the broadest catalog lift. This matters for budget allocation: a product with a modest direct ROAS but high halo contribution may justify more budget than its own attributed figures suggest.
Halo effects complicate cross-platform ROAS comparison because not all platforms report halo sales in the same way or at all. Platforms that report broader attributed revenue (including halo) will show higher absolute revenue figures than those reporting only direct product-level attribution, even for identical campaigns.
When paid ad-attributed sales replace organic sales that would have occurred anyway, rather than generating genuinely incremental revenue.
Cannibalization occurs when a consumer who would have purchased a product organically — without seeing or clicking an ad — is instead tracked as an attributed sale because they happened to interact with an ad before converting. The ad spend generates no incremental revenue; it merely "buys" credit for an organic sale that would have happened regardless.
Cannibalization is highest for branded keyword campaigns targeting consumers who are already searching specifically for the brand's products. A consumer typing a brand name into Amazon's search bar has high purchase intent regardless of advertising; bidding on that branded keyword captures their click but may not change their purchase behavior. The resulting attributed revenue may be largely cannibalized from what would otherwise be organic sales.
Measuring cannibalization requires holdout testing: comparing conversion rates in a control group that doesn't see the ads against the exposed group. The difference represents true incrementality. Most retail media platforms do not report cannibalization metrics natively; agencies must design and run their own holdout tests or use media mix modeling.
A secure, privacy-compliant environment where two parties can jointly analyze overlapping data without either party seeing the other's raw data.
A data clean room allows a brand and a retailer (or a brand and a media platform) to match their customer data and run joint analyses without exposing individual-level records to either party. The clean room processes queries on the combined dataset and returns only aggregate results that cannot be reverse-engineered to identify individual consumers.
Amazon Marketing Cloud (AMC) is the most prominent retail media clean room. It allows advertisers to run custom SQL queries against their Amazon campaign data combined with Amazon's first-party purchase signals — enabling analyses like customer journey across ad types, time-to-conversion distributions, and audience overlap between campaigns. These analyses are not possible through standard Amazon reporting interfaces.
For attribution purposes, clean rooms are particularly useful for incrementality measurement and multi-touch analysis. By overlapping ad exposure data with purchase data in a privacy-safe environment, advertisers can estimate true lift beyond what platform-reported attributed revenue shows. The limitation is that clean room analyses require SQL fluency and are typically accessible only to larger advertisers with technical resources — making them impractical for most mid-market agencies without tooling support.
A conversion where the consumer clicked an ad before purchasing, establishing a direct interaction signal between the ad and the sale.
A click-through conversion (also called a post-click conversion) occurs when a consumer clicks an ad and subsequently completes a purchase within the attribution window. The click creates an explicit causal signal: the consumer interacted directly with the ad before converting. Most retail media platforms count click-through conversions in their headline ROAS metrics.
Click-through conversions are generally considered more reliable as an incrementality signal than view-through conversions, because the click demonstrates active engagement rather than passive exposure. However, they are still subject to attribution window distortion: a click-through conversion counted within a 30-day window and one counted within a 14-day window are measuring different things, even though both involve a click.
The click-through rate (CTR) — clicks divided by impressions — is a secondary quality metric in retail media. High CTR indicates strong creative-audience alignment; very low CTR on a high-impression campaign suggests the ad is reaching consumers at scale but failing to generate interest, whether because of weak creative or poor audience fit.
Serving ads based on the content context of the page or product listing, rather than based on audience data or behavioral profiles.
Contextual targeting places ads in environments that are semantically relevant to the product being advertised, without requiring consumer-level behavioral data. In retail media, contextual targeting typically means showing a sponsored product ad on pages for related or complementary products — for example, appearing on a dog food detail page when advertising dog treats.
Contextual targeting is gaining importance as third-party cookies phase out and privacy regulations restrict behavioral audience targeting. Retail media networks are relatively well-positioned here because their contextual signals — product pages, category browse, search results — are highly predictive of purchase intent without requiring individual consumer tracking.
For attribution purposes, contextual targeting campaigns have characteristics similar to upper-funnel display: the consumer may not be immediately ready to purchase, making longer attribution windows capture more apparent conversions. Normalization is essential when comparing contextual campaign ROAS across platforms using different window lengths.
Synonym for attribution window. The period during which a conversion is credited back to the ad interaction that preceded it.
Conversion window is used interchangeably with attribution window and lookback window across different platform documentation and industry contexts. Some platforms and practitioners use "conversion window" to refer specifically to the click-based window, and "view-through window" separately for impression-based attribution — though this distinction is inconsistently applied across the industry.
The conversion window is set at the campaign or account level on most platforms, with defaults ranging from 7 to 30 days. Advertisers can typically customize the window within platform-allowed ranges; however, changing the window mid-flight creates apples-to-oranges comparisons within the same campaign's historical data.
The average amount paid each time a consumer clicks on an ad. Total spend divided by total clicks.
CPC is the dominant pricing model for sponsored product and sponsored keyword ads in retail media. Advertisers set a maximum bid for each click; the platform's auction determines the actual CPC, which is typically below the maximum bid in second-price auction systems.
CPC is a cost efficiency metric, not a revenue metric. A low CPC is not necessarily better: if cheap clicks produce low conversion rates, a higher CPC with stronger purchase intent may generate better ROAS. The relationship between CPC, conversion rate, and average order value determines the true efficiency of a keyword or placement.
CPC levels vary dramatically across retail media platforms, categories, and competitive landscapes. Amazon keyword CPCs in competitive categories (vitamins, protein powder, beauty) can reach $3–8 per click, while niche or less competitive categories may see CPCs below $0.50. Cross-platform CPC comparisons are secondary to ROAS analysis: a platform with higher CPC can still be the right allocation choice if its conversion rates produce better returns.
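The relationship between CPC, conversion rate, and average order value reduces to a one-line expected-ROAS calculation. A small sketch with hypothetical keyword figures:

```python
def expected_roas(cpc, conversion_rate, avg_order_value):
    """Expected revenue per dollar of click spend: each click costs `cpc` and
    yields conversion_rate * avg_order_value in expected attributed revenue."""
    return conversion_rate * avg_order_value / cpc

# Hypothetical keywords: the cheap click is not automatically the better buy.
print(expected_roas(cpc=0.50, conversion_rate=0.02, avg_order_value=25))   # 1.0
print(expected_roas(cpc=4.00, conversion_rate=0.25, avg_order_value=80))   # 5.0
```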
The cost of 1,000 ad impressions. The standard pricing model for display and video campaigns in retail media.
CPM (from the Latin mille, meaning thousand) is the price per thousand ad impressions. It is the dominant pricing model for display, video, and streaming TV campaigns run through retail media DSPs. Unlike CPC, which charges per interaction, CPM charges for reach regardless of whether consumers engage with the ad.
In retail media, CPM campaigns are typically upper-funnel awareness placements — banner ads on retailer homepages, display units in category browse pages, streaming audio, and video. These placements reach consumers who may not be actively searching for the product, making the causal chain between impression and purchase longer and less direct than keyword-triggered sponsored products.
CPM campaigns are more susceptible to view-through conversion inflation than CPC campaigns, because the interaction signal is an impression rather than a click. Platforms that count view-through conversions from CPM campaigns will report significantly higher attributed revenue than those that don't — creating a large source of distortion when comparing CPM-heavy campaigns across platforms with different view-through policies.
An attribution model that uses machine learning to distribute conversion credit across touchpoints based on observed lift, rather than applying a fixed rule.
Data-driven attribution (DDA) analyzes the historical paths of consumers who did and didn't convert, and assigns credit to touchpoints based on how much each one influenced the conversion probability. Unlike rules-based models (last-click, first-click, linear), DDA doesn't apply a predetermined credit distribution — it learns from actual data which touchpoints in the journey have the highest incremental effect.
In practice, DDA is primarily available through large platforms with sufficient conversion volume to train the underlying models — Google, Amazon Marketing Cloud, Meta. Most mid-market retail media campaigns on individual platforms do not have the scale required for reliable DDA.
For cross-platform comparison purposes, DDA introduces a different kind of incompatibility: each platform's DDA model is proprietary and trained on different signals. Amazon's DDA and Walmart's DDA are not using the same methodology, meaning "data-driven" does not guarantee comparable results across platforms. Normalization to a common standard (such as 14-day last-click) remains necessary even when individual platforms offer DDA.
The principle that each additional unit of spend on a platform generates less incremental revenue than the previous unit, because the most efficient inventory is purchased first.
Diminishing returns is the foundational economic principle underlying retail media budget optimization. At low spend levels, campaigns capture the most efficient auction slots — high-intent search terms, premium placements, well-matched audiences — at relatively low cost. As spend increases, campaigns must bid into progressively less efficient inventory: broader keywords with lower conversion rates, wider audience segments, lower-quality placements.
The practical implication is that a platform's reported average ROAS overestimates the return from additional spend. A campaign delivering 5x average ROAS at $50,000/month may only generate 2.5x ROAS on an incremental $10,000. Planners who allocate additional budget based on average ROAS consistently overfund platforms that have reached saturation.
Diminishing returns follows a characteristic curve shape — steep initial returns flattening as spend increases — that can be modeled mathematically using Hill curves or similar saturation functions. Fitting this curve to historical data allows planners to estimate marginal returns at any spend level, enabling principled budget allocation decisions.
A programmatic advertising platform that allows advertisers to buy display, video, and audio impressions across multiple ad exchanges and publisher networks through automated bidding.
A DSP aggregates inventory from multiple sources — publisher networks, ad exchanges, streaming platforms — and allows advertisers to bid for impressions in real-time using audience targeting, contextual signals, and frequency controls. In retail media, the major retailer-operated DSPs are Amazon DSP, Walmart DSP (via The Trade Desk partnership), and Criteo Commerce Max.
Retail media DSPs are distinguished from traditional programmatic DSPs by their access to retailer first-party purchase data. Amazon DSP, for example, allows advertisers to target consumers based on their Amazon purchase history, browse behavior, and product affinity signals — and to measure resulting sales that occur on Amazon. This closed-loop attribution is a key advantage over traditional DSPs that lack purchase signal data.
DSP campaigns are almost exclusively CPM-priced and focused on upper-funnel awareness and consideration objectives. Their attribution is inherently more complex than sponsored product campaigns: the path from a display impression to a purchase is longer and less deterministic, making these campaigns particularly sensitive to view-through window length and model type in attribution reporting.
An attribution model that gives 100% of conversion credit to the first ad a consumer interacted with in the purchase journey.
First-click attribution (also called first-touch attribution) credits the entire conversion to the ad interaction that first introduced the consumer to the product or brand — regardless of how many subsequent touchpoints occurred before purchase. Criteo uses a first-click model by default, crediting the first interaction in the session regardless of whether the consumer interacted with other ads before converting.
First-click models systematically favor upper-funnel awareness campaigns. An ad that generates the consumer's initial product awareness receives full credit even if the consumer later clicked a retargeting ad, browsed the product page multiple times, and converted a week after the first interaction. This makes first-click ROAS appear high for awareness placements and low for retargeting campaigns.
When comparing first-click attributed ROAS (e.g., from Criteo) against last-click attributed ROAS (e.g., from Amazon), the figures are measuring different causal constructs. Normalizing from first-click to last-click requires an understanding of the typical number of touchpoints in the purchase journey for the category — longer journeys amplify the divergence between models.
Data collected directly by a business from its own customers or users — purchase history, loyalty membership, site behavior — without intermediaries.
First-party data is the core competitive advantage of retail media networks. Retailers like Amazon, Walmart, Kroger, and Tesco have accumulated years of verified purchase data tied to identifiable loyalty members — data that is far richer than anything available through third-party cookies or browser-based tracking.
For advertisers, retailer first-party data enables three capabilities that external data cannot match: precise audience targeting based on actual purchase behavior (not inferred interests), closed-loop attribution that ties ad exposure to confirmed purchases, and category-level insights about purchase patterns, basket size, and competitive switching behavior.
The strategic importance of first-party data has increased as third-party cookies phase out and device identifiers become less reliable. Brands that build direct customer relationships — email lists, loyalty programs, product registration — develop their own first-party data assets that can be used in data clean room analyses with retailers, enabling measurement capabilities that were previously available only through third-party tracking.
A limit on how many times a single consumer can be shown the same ad within a defined time period.
Frequency capping prevents ad fatigue by limiting the number of times an individual consumer sees the same creative in a day, week, or month. Without a frequency cap, budget can concentrate on a small group of consumers who are repeatedly served the same ad without converting — generating impressions and spend without proportionate attribution value.
For attribution purposes, high frequency creates a distinct distortion: a consumer who has seen an ad 20 times is likely to have a conversion attributed to the ad even if the purchase was driven by habit or organic intent. The marginal attribution value of each additional impression diminishes, but standard platform reporting does not capture this.
Frequency caps are primarily relevant for display and DSP campaigns. Sponsored search ads are inherently frequency-limited because they only appear when a consumer actively searches for relevant terms — the consumer's own search behavior acts as a natural frequency control.
A mathematical function that models the diminishing relationship between ad spend and revenue, used to estimate marginal returns and identify budget saturation points.
A Hill curve (also called a saturation curve or response curve) describes the characteristic shape of revenue as a function of ad spend: steep initial returns that gradually flatten as spend increases and efficient inventory is exhausted. The mathematical form is Revenue(spend) = R_max × spend^n / (K^n + spend^n), where R_max is the theoretical revenue ceiling, K is the spend level at which revenue reaches 50% of R_max, and n controls the curve steepness.
Fitting a Hill curve to historical campaign data requires spend-revenue pairs at multiple budget levels over time — the more variation in historical spend, the better the curve fit. Once fitted, the curve provides two critical planning outputs: marginal ROAS at any spend level (the derivative of the curve at that point), and the saturation point where additional spend generates diminishing returns beyond a practical threshold.
Hill curve fitting is computationally straightforward with modern nonlinear regression tools, but requires normalized revenue data as input. If the revenue figures fed into the curve come from different platforms with different attribution windows, the curve parameters will be distorted. A 30-day attributed revenue series will produce a curve that overstates the platform's true efficiency at every spend level, leading to overinvestment in that platform relative to its actual returns.
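A dependency-free sketch of the fitting step, using a crude grid search as a stand-in for proper nonlinear regression (e.g. `scipy.optimize.curve_fit`); the observations are hypothetical normalized spend/revenue pairs:

```python
import itertools

def hill(spend, r_max, k, n):
    """Revenue(spend) = R_max * spend^n / (K^n + spend^n)."""
    return r_max * spend ** n / (k ** n + spend ** n)

# Hypothetical normalized spend/revenue observations at varied budget levels.
observations = [(5_000, 30_000), (10_000, 52_000), (20_000, 82_000),
                (40_000, 110_000), (60_000, 122_000), (80_000, 128_000)]

def sse(params):
    """Sum of squared errors of a candidate curve against the observations."""
    return sum((hill(s, *params) - r) ** 2 for s, r in observations)

# Crude grid search over candidate parameters (illustrative only).
best = min(itertools.product(range(120_000, 220_001, 5_000),   # R_max candidates
                             range(10_000, 50_001, 2_000),     # K candidates
                             (0.8, 0.9, 1.0, 1.1, 1.2, 1.3)),  # n candidates
           key=sse)

def marginal_roas(spend, eps=100.0):
    """Numerical derivative of the fitted curve: return on the next dollar."""
    return (hill(spend + eps, *best) - hill(spend, *best)) / eps

print("fitted (R_max, K, n):", best)
print(f"marginal ROAS at $10k: {marginal_roas(10_000):.2f}")
print(f"marginal ROAS at $60k: {marginal_roas(60_000):.2f}")
```

The falling marginal ROAS between the two spend levels is the diminishing-returns signal planners use to locate the saturation point.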
A controlled experiment that withholds advertising from a matched control group to measure the true incremental effect of a campaign.
A holdout test measures incrementality by randomly or geographically assigning consumers to exposed and unexposed groups, then comparing conversion rates between the groups. The lift above the control group's baseline conversion rate represents the true causal effect of the advertising — the sales that would not have occurred without the campaign.
In retail media, holdout tests are typically implemented as geo holdouts (suppressing ads in matched geographic markets) or user holdouts (randomly excluding a percentage of eligible users from being served ads). Both methods require retailer cooperation and are not universally available on all platforms.
The results of a holdout test often reveal substantial cannibalization: attributed sales that far exceed incremental sales, meaning the platform is claiming credit for purchases that would have happened anyway. The ratio of incremental sales to attributed sales is the "true incrementality rate" — a rate of 60% means 40% of attributed conversions were cannibalized from organic sales. Agencies with access to holdout test data can use incrementality rates to adjust their normalized ROAS figures for budget allocation decisions.
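The calculation from a holdout test's raw counts is straightforward. A sketch with hypothetical figures:

```python
def true_incrementality_rate(exposed_users, exposed_conversions,
                             control_users, control_conversions,
                             attributed_conversions):
    """Incremental conversions are the lift over the control baseline, scaled
    to the exposed population; the rate is their share of attributed ones."""
    lift = exposed_conversions / exposed_users - control_conversions / control_users
    incremental = lift * exposed_users
    return incremental / attributed_conversions

# Hypothetical holdout: 100k exposed users, 100k held out as control.
rate = true_incrementality_rate(100_000, 5_000, 100_000, 3_000,
                                attributed_conversions=5_000)
print(f"{rate:.0%} incremental")   # 40% incremental, 60% cannibalized
```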
The process of linking a consumer's interactions across devices, channels, and sessions into a unified identity profile for targeting and attribution.
Identity resolution connects a consumer's ad exposure on a mobile device to their purchase on a desktop, or their click on a display ad to their in-store purchase at a loyalty-linked checkout. Without identity resolution, these interactions appear as separate, unconnected events — making cross-device and cross-channel attribution impossible.
Retailers with large loyalty programs have a significant identity resolution advantage: when consumers log in or use loyalty accounts at checkout, the retailer can link in-store purchases to online ad exposure. This is why retail media networks can offer closed-loop attribution that general digital advertising cannot — they have a verified identity (the loyalty account) that bridges online and offline behavior.
The technical complexity of identity resolution means it is implemented differently across platforms, creating another source of attribution incomparability. Platform A's attribution may successfully resolve identities across 85% of purchasers while Platform B resolves only 65%, creating systematic differences in attributed revenue that have nothing to do with actual campaign effectiveness.
The additional sales that resulted specifically from the advertising — revenue that would not have occurred without the campaign.
Incrementality is the gold standard metric for retail media effectiveness. Where attributed ROAS measures all revenue that occurred after an ad interaction within the window, incremental ROAS (iROAS) measures only the revenue that was caused by the ad — revenue that would not have occurred in the absence of advertising.
The gap between attributed ROAS and incremental ROAS can be substantial. On branded keyword campaigns, where the consumer is already searching specifically for the brand, incremental ROAS may be 40–60% of attributed ROAS because much of the attributed revenue is cannibalized from organic sales that would have happened regardless. On conquest campaigns targeting competitors' customers, incrementality tends to be higher because the consumer is less likely to have converted organically.
Measuring incrementality requires either holdout testing, media mix modeling, or access to incrementality-calibrated attribution signals. Some platforms offer incrementality reporting natively — Amazon provides "new-to-brand" metrics as a proxy — but true incremental measurement requires controlled experimentation. For agencies that have run holdout tests, incrementality rates can be used as a correction factor on top of normalized ROAS to produce the most accurate performance signal available.
A campaign setting that controls how closely a consumer's search query must match a bid keyword for the ad to be eligible to show.
Keyword match types define the flexibility of the relationship between a bid keyword and the search terms that trigger the ad. The three standard match types across most platforms are Exact (the search query must match the keyword precisely, with only minor variants), Phrase (the keyword phrase must appear within the search query, but other words may surround it), and Broad (the platform can show the ad for semantically related searches, synonyms, and loosely related queries).
Match type selection is a fundamental lever for balancing reach and precision. Exact match minimizes irrelevant impressions and typically produces the highest conversion rates, but reaches only consumers using that exact query. Broad match maximizes reach but may show ads in contexts with low purchase intent, reducing conversion rates and ROAS.
For attribution purposes, match type affects the search intent of the converting consumer. Exact match conversions represent high-intent searches; broad match conversions may include more exploratory queries where the consumer wasn't specifically looking for the product. This means ROAS from exact match campaigns and broad match campaigns is measuring different consumer intent levels — another dimension of incomparability that affects budget allocation decisions.
An attribution model that gives 100% of conversion credit to the final ad a consumer clicked before completing a purchase.
Last-click attribution (also called last-touch attribution) is the most widely used model in retail media. It assigns full conversion credit to the most recent ad click before the purchase event. Amazon Ads, Instacart, and Tesco Media all use last-click as their default model. Its prevalence makes it the most logical normalization baseline for cross-platform comparison.
Last-click systematically favors lower-funnel campaign types that appear close to the conversion event. Retargeting campaigns, branded search terms, and sponsored product ads serving consumers who are already in-market tend to show high last-click ROAS because they capture high-intent consumers at the moment of decision. Upper-funnel campaigns that build awareness earlier in the journey receive no credit under last-click even if they were instrumental in creating purchase intent.
Despite its limitations, last-click remains the practical standard for cross-platform normalization because it is the most widely supported and consistently defined model. Converting from other models (first-click, linear, multi-touch) to last-click requires assumptions about the distribution of touchpoints in consumer journeys — assumptions that introduce uncertainty but are less distorting than leaving model differences unaddressed.
An attribution model that distributes conversion credit equally across all ad touchpoints in the consumer's journey to purchase.
Linear attribution divides 100% of conversion credit evenly across every ad interaction in the path to purchase. If a consumer interacted with four ads before converting, each receives 25% of the credit. This model assumes all touchpoints contributed equally to the conversion, which is rarely accurate in practice but avoids the extreme credit concentration of single-touch models (first-click, last-click).
Linear attribution produces lower ROAS figures than last-click for lower-funnel campaigns, because it shares credit with earlier touchpoints that wouldn't receive any credit under last-click. Conversely, it produces higher apparent ROAS for awareness campaigns by giving them partial credit for all conversions where they appeared in the path — including conversions that last-click would attribute entirely to a later retargeting ad.
Linear attribution is rarely used as a default in retail media platform reporting, but appears in some third-party analytics tools and media mix modeling contexts. When comparing linear-attributed figures against last-click figures from retail media platforms, the linear figures will consistently appear lower for performance-focused sponsored search and higher for display — creating systematic directional biases in cross-channel budget allocation decisions.
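The directional bias is easy to see in a toy credit calculation. The touchpoint names below are illustrative, and the sketch assumes each touchpoint appears once in the path:

```python
def linear_credit(path, revenue):
    """Split conversion revenue evenly across every touchpoint in the path."""
    return {tp: revenue / len(path) for tp in path}

def last_click_credit(path, revenue):
    """Assign all conversion revenue to the final touchpoint before purchase."""
    return {tp: (revenue if tp == path[-1] else 0.0) for tp in path}

# Hypothetical three-touch journey ending in a retargeting click:
path = ["display_awareness", "sponsored_brands", "retargeting"]
print(linear_credit(path, 90.0))      # each touchpoint credited $30
print(last_click_credit(path, 90.0))  # retargeting credited the full $90
```

Under last-click the display campaign reports zero revenue; under linear it reports a third of it. Neither figure is "correct", but mixing the two in one report biases allocation.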
Synonym for attribution window. The historical period a platform looks back from a conversion event to find an eligible ad interaction to credit.
Lookback window is used interchangeably with attribution window and conversion window in platform documentation and industry discourse. The term emphasizes the direction of the query: at the moment of conversion, the platform looks backward in time for an eligible ad interaction to credit.
Some practitioners use "lookback window" specifically in the context of programmatic and DSP campaigns, where the window often applies to impression-based view-through attribution, and reserve "attribution window" for click-based conversion tracking. This usage distinction is not standardized across the industry.
The incremental revenue generated by the next unit of ad spend on a platform. The correct metric for budget allocation decisions.
Marginal ROAS is the derivative of the revenue curve at the current spend level — the answer to "how much additional revenue would the next dollar on this platform generate?" It is categorically different from average ROAS, which is total attributed revenue divided by total spend. Because of diminishing returns, marginal ROAS is always lower than average ROAS for any platform that is not severely underfunded.
Example: A platform with $60,000 spend and 4.5x average ROAS may have a marginal ROAS of only 2.1x at that spend level. The next $10,000 on that platform would generate approximately $21,000, not $45,000.
Optimal budget allocation across platforms occurs when marginal ROAS is equalized across all active platforms. If Platform A has mROAS of 3.8x and Platform B has mROAS of 2.2x, shifting budget from B to A increases total revenue on the same total spend. Continue shifting until mROAS converges. At that point, any further reallocation would reduce total revenue.
Calculating marginal ROAS requires fitting a return curve to historical spend-revenue data, then taking the curve's derivative at the current spend level. This process requires normalized revenue data — if attributed revenue is inflated by long attribution windows, the fitted curve and resulting mROAS estimates will overstate true efficiency, leading to overinvestment in that platform.
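A minimal sketch of this process, assuming a power-law return curve (revenue = a × spend^b) fitted in log space. The spend and revenue figures are hypothetical, and a production version would use more history and a richer curve family:

```python
import numpy as np

def fit_return_curve(spend, revenue):
    """Fit revenue = a * spend^b by linear regression in log space.

    b < 1 encodes diminishing returns. Inputs should be normalized
    (spend, attributed revenue) pairs from historical periods.
    """
    b, log_a = np.polyfit(np.log(spend), np.log(revenue), 1)
    return np.exp(log_a), b

def marginal_roas(a, b, spend_level):
    """Derivative of the fitted curve: d(revenue)/d(spend) = a*b*spend^(b-1)."""
    return a * b * spend_level ** (b - 1)

# Hypothetical history: average ROAS at $60k spend is 3.3x, but the
# marginal dollar earns noticeably less.
spend = np.array([10_000, 20_000, 40_000, 60_000])
revenue = np.array([52_000, 88_000, 150_000, 198_000])
a, b = fit_return_curve(spend, revenue)
print(marginal_roas(a, b, 60_000))  # ~2.5x marginal vs 3.3x average
```

For a pure power curve, marginal ROAS is simply b × average ROAS, which makes the "marginal is always below average" property explicit whenever b < 1.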
A statistical approach that uses historical spend and sales data to estimate the contribution of each media channel to total revenue, without relying on user-level tracking.
Media mix modeling uses regression analysis to decompose observed sales into contributions from advertising spend, organic factors, seasonality, promotions, and macroeconomic variables. Unlike attribution models that track individual user journeys, MMM operates at the aggregate market level — it asks "when we spent more on TV, did total sales go up?" rather than "did this consumer buy because they saw this ad?"
The primary advantage of MMM is that it is privacy-agnostic and works without user-level tracking data. It can incorporate offline channels (TV, radio, in-store) alongside digital and retail media — making it the only methodology capable of providing a unified view across all channels. For agencies managing clients with significant offline spend, MMM is essential for understanding the true contribution of retail media relative to other channels.
The limitation of MMM is that it requires substantial historical data (typically 2+ years) and cannot measure effects at the granular campaign or creative level. It is better suited to strategic budget allocation across channel categories than to tactical optimization within retail media campaigns. MMM and platform-level attribution are complementary rather than competing approaches — MMM provides the macro picture while normalized attribution provides the operational decision layer.
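A toy decomposition illustrates the aggregate-level idea, using synthetic weekly data and plain least squares. A production MMM would add adstock transforms, saturation curves, and Bayesian priors; every figure here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data, the typical MMM minimum

# Synthetic channel spend and a seasonality index
tv = rng.uniform(5_000, 20_000, weeks)
retail_media = rng.uniform(2_000, 10_000, weeks)
season = 1 + 0.3 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

# "True" data-generating process: baseline + channel effects + seasonality + noise
sales = (50_000 + 1.8 * tv + 3.2 * retail_media
         + 20_000 * season + rng.normal(0, 2_000, weeks))

# OLS decomposition of aggregate sales into per-driver contributions
X = np.column_stack([tv, retail_media, season, np.ones(weeks)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef[:2])  # recovered revenue-per-dollar for TV and retail media (~1.8, ~3.2)
```

Note that the regression sees only totals: no user identifiers, no journeys. That is exactly what makes the approach privacy-agnostic and offline-compatible.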
Attribution approaches that distribute conversion credit across multiple ad touchpoints in the consumer journey, rather than crediting only one interaction.
Multi-touch attribution (MTA) is the collective term for attribution models that acknowledge the contribution of multiple ads to a single conversion. It encompasses linear attribution (equal credit across all touchpoints), time-decay (more credit to touchpoints closer to conversion), U-shaped or position-based (40% each to first and last, 20% distributed among middle touchpoints), and data-driven attribution (ML-based credit distribution).
The theoretical appeal of MTA is that it more accurately reflects the multi-touchpoint reality of modern consumer journeys. In practice, MTA faces a critical data problem in retail media: the data required to observe complete cross-platform consumer journeys is fragmented across competing retailers who do not share it. Amazon does not know what Walmart ads a consumer saw, and Walmart does not know what Amazon search results they clicked. Each platform can only observe the touchpoints that occurred within its own ecosystem.
This makes true cross-platform MTA impossible without a data clean room arrangement between competing retailers — an arrangement that does not currently exist at scale. The practical alternative is to normalize each platform's single-platform attribution to a common standard, creating comparable figures without requiring cross-platform journey data.
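The time-decay and position-based variants mentioned above can be sketched as follows. The 7-day half-life and the 40/20/40 split are conventional defaults rather than standards, and touchpoint names are assumed unique:

```python
def position_based_credit(path, revenue, endpoint_share=0.4):
    """U-shaped model: 40% each to first and last touch, 20% over the middle."""
    if len(path) == 1:
        return {path[0]: revenue}
    if len(path) == 2:
        return {path[0]: revenue / 2, path[1]: revenue / 2}
    credit = {tp: 0.0 for tp in path}
    credit[path[0]] += revenue * endpoint_share
    credit[path[-1]] += revenue * endpoint_share
    for tp in path[1:-1]:
        credit[tp] += revenue * (1 - 2 * endpoint_share) / (len(path) - 2)
    return credit

def time_decay_credit(ages_days, revenue, half_life_days=7.0):
    """Exponential time-decay: a touch 7 days before conversion gets half
    the weight of a touch at the moment of conversion."""
    weights = [0.5 ** (age / half_life_days) for age in ages_days]
    total = sum(weights)
    return [revenue * w / total for w in weights]

print(position_based_credit(["search", "display", "brand", "retarget"], 100.0))
print(time_decay_credit([14, 7, 0], 100.0))  # oldest touch gets the least credit
```

Both models still require observing the full path, which is exactly the data no single retail media platform has for cross-platform journeys.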
A conversion from a consumer who has not purchased from the brand in the previous 12 months, representing genuine customer acquisition rather than repeat-purchase stimulation.
New-to-brand (NTB) metrics distinguish between conversions from new customers and those from existing buyers. A 100% NTB campaign is acquiring customers who hadn't purchased from the brand in the past year; a low NTB% campaign is primarily driving repeat purchases from existing buyers who likely would have repurchased regardless.
Amazon reports NTB metrics for Sponsored Brands and Sponsored Display campaigns, including NTB purchase rate, NTB revenue, and NTB orders. These metrics are a proxy for incrementality: NTB orders are more likely to represent genuine incremental sales because the consumer wasn't already a recent buyer who would have repurchased organically. A campaign with high attributed ROAS but low NTB% is largely re-attributing sales that organic channels would have captured.
NTB metrics are currently Amazon-specific; Walmart Connect and most other retail media networks do not offer equivalent reporting. This creates an asymmetry in how agencies can evaluate incrementality across platforms — NTB-adjusted analysis is possible on Amazon but not available as a native metric on other networks.
A ROAS figure that has been adjusted to a common attribution standard, making it directly comparable across platforms that use different windows, models, and interaction types.
Normalized ROAS is the output of applying systematic correction factors to raw platform-reported ROAS figures to bring all platforms onto a common measurement basis. The normalization process adjusts for three dimensions: attribution window length (converting from each platform's native window to a standard, typically 14-day), attribution model type (converting from first-click, linear, or other models to last-click), and interaction type (discounting or removing view-through conversions).
The correction factors are not flat percentages — they are derived from revenue decay curves, category-specific purchase cycle data, and view-through lift estimates. A 30-day to 14-day window conversion for a fast-moving consumer goods brand may require a 15% revenue haircut, while the same conversion for an electronics campaign may require 28%, because consumers in longer consideration categories take more time after an ad interaction to complete their purchase.
Normalized ROAS is the only metric on which valid cross-platform budget allocation decisions can be made. Raw ROAS figures from platforms with different methodologies are not comparable; they reflect measurement differences as much as performance differences. Agencies that present normalized ROAS in client reporting have a defensible, consistent methodology that survives client scrutiny and produces better allocation outcomes.
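A simplified version of such a normalization, with every correction factor treated as a hypothetical input rather than a calibrated value:

```python
def normalize_roas(reported_roas, window_factor=1.0, model_factor=1.0,
                   vtc_share=0.0, vtc_lift=0.3):
    """Adjust a platform-reported ROAS to a common basis (illustrative sketch).

    window_factor: share of reported revenue falling inside the standard
        14-day window, taken from a category decay curve (e.g. 0.85).
    model_factor: correction from the platform's model to last-click.
    vtc_share: fraction of attributed revenue from view-through conversions.
    vtc_lift: estimated truly-incremental fraction of view-through revenue.
    """
    click_part = reported_roas * (1 - vtc_share)
    view_part = reported_roas * vtc_share * vtc_lift
    return (click_part + view_part) * window_factor * model_factor

# A 30-day, view-inclusive 6.0x ROAS normalizes to roughly a 4.2x equivalent:
print(normalize_roas(6.0, window_factor=0.85, vtc_share=0.25))
```

A click-only, 14-day, last-click platform passes through unchanged (all factors at their defaults), which is what makes that configuration a natural baseline.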
The increase in organic (non-ad-attributed) sales that results from advertising activity — typically through improved search rank, brand awareness, or review accumulation.
Organic sales lift captures the halo effect of paid advertising on a product's unpaid performance. In retail media, the most significant mechanism is search rank improvement: when an ad campaign generates clicks and conversions on a product, the platform's algorithm typically rewards the product with higher organic search placement, which in turn generates additional organic sales at no incremental ad cost.
This is why TACOS (Total Advertising Cost of Sale) is a more complete efficiency metric than ACOS for brands investing in long-term retail media strategies: by including organic sales in the denominator, TACOS captures the downstream benefit of paid advertising on overall category performance. A brand might accept a 30% ACOS on sponsored campaigns if the organic lift from those campaigns brings TACOS down to 15%.
Organic lift makes attribution-only ROAS analysis incomplete for brand-level performance assessment. The attributed revenue from a campaign understates total economic value when significant organic lift results. Measuring organic lift requires comparing organic conversion rates and rank positions before and after campaigns — ideally with controlled holdout regions to isolate the ad effect from other variables.
An advertising platform operated by a retailer, allowing brands to reach shoppers using the retailer's first-party purchase and behavioral data.
A retail media network (RMN) is the advertising infrastructure a retailer builds and operates to monetize its audience and data assets. The retailer becomes the media owner: brands buy ads to reach consumers on the retailer's digital properties (website, app, search results) and, increasingly, off-site through the retailer's audience data applied to external programmatic inventory.
The global retail media landscape has expanded dramatically: as of 2026, over 277 retail media networks operate globally, ranging from Amazon's dominant network generating $50B+ in annual ad revenue to niche single-category retailers with modest but highly targeted audiences. Major networks include Amazon Ads, Walmart Connect, Kroger Precision Marketing, Instacart Ads, Target Roundel, Tesco Media and Insight Platform, Carrefour Links, and hundreds of smaller regional and specialty retailers.
Each retail media network operates its own attribution methodology — measurement by its own rules, using its own data, with its own reporting interface. This fragmentation is the core problem that makes cross-platform measurement in retail media structurally difficult. A brand managing campaigns across five retail media networks is operating in five separate measurement systems with no common standard, requiring normalization before any meaningful cross-platform comparison is possible.
Serving ads specifically to consumers who have previously visited a product page, viewed an ad, or otherwise expressed interest without converting.
Retargeting in retail media serves ads to consumers who have demonstrated purchase intent without completing a purchase — product detail page visitors, add-to-cart abandoners, and past purchasers being targeted for repeat purchase. Because the consumer has already expressed interest, retargeting campaigns typically convert at higher rates than prospecting campaigns targeting cold audiences.
Higher conversion rates produce higher attributed ROAS for retargeting campaigns, but this figure is particularly susceptible to cannibalization. A consumer who added a product to their cart and then abandoned has already demonstrated high purchase intent; they may well return to complete the purchase organically without retargeting. The incremental effect of retargeting on high-intent audiences is often lower than the attributed ROAS suggests.
Retargeting is also particularly sensitive to view-through attribution: a consumer who sees a retargeting ad, doesn't click, but purchases the same day may or may not have been influenced by the ad. Because retargeting by definition reaches consumers who are already in the funnel, the probability of organic conversion without the ad is high, making view-through conversions from retargeting campaigns especially likely to represent cannibalization.
The mathematical model of how attributed revenue accumulates over time within an attribution window, showing that most conversions occur shortly after an ad interaction and taper off thereafter.
Revenue decay describes the temporal distribution of conversions within an attribution window. Conversion probability is highest immediately after an ad interaction — when purchase intent is fresh — and decays approximately exponentially as time passes. A consumer who clicked an ad is much more likely to convert on day 1 than on day 14, and more likely on day 14 than on day 28.
The decay rate is not uniform across categories. Fast-moving consumer goods have steep decay curves: purchase decisions are made quickly, so most attributed revenue from a 30-day window occurs in the first 7–10 days. Considered-purchase categories (electronics, appliances, furniture) have flatter decay curves: consumers may research for weeks after initial ad exposure before purchasing, so a higher proportion of 30-day attributed revenue occurs in days 15–30.
Understanding revenue decay is essential for window normalization. Converting a 30-day attributed revenue figure to a 14-day equivalent requires estimating what percentage of the 30-day revenue fell within the first 14 days — and this percentage is category-specific. A blanket assumption that 14-day captures 80% of 30-day revenue is accurate for some categories and materially wrong for others. RetailNorm applies decay curve models calibrated by inferred category type to produce more accurate window conversion factors.
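A simplified exponential decay model illustrates category-specific window conversion. The half-life values below are hypothetical calibrations, chosen so the resulting haircuts roughly match the 15% and 28% examples cited in the Normalized ROAS entry:

```python
import math

def window_conversion_factor(from_days, to_days, half_life_days):
    """Share of revenue attributed in a from_days window that falls within
    the first to_days, under an exponential decay model of conversions."""
    k = math.log(2) / half_life_days
    captured = 1 - math.exp(-k * to_days)
    total = 1 - math.exp(-k * from_days)
    return captured / total

# Steep FMCG-style curve: ~15% haircut converting 30-day revenue to 14-day.
print(round(window_conversion_factor(30, 14, half_life_days=5.5), 2))  # 0.85
# Flatter considered-purchase curve: ~28% haircut for the same conversion.
print(round(window_conversion_factor(30, 14, half_life_days=9.5), 2))  # 0.72
```

The same function with the same arguments produces materially different factors depending only on the decay rate, which is why a single blanket conversion percentage cannot be accurate across categories.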
Attributed revenue divided by ad spend. The primary efficiency metric in retail media advertising.
ROAS = Attributed Revenue / Ad Spend. A ROAS of 4x means every dollar of ad spend generated four dollars of attributed revenue within the platform's attribution window. It is the headline performance metric reported by every retail media platform and the primary input for budget allocation decisions.
ROAS is a ratio, not an absolute measure of profitability. A 4x ROAS on a product with 25% gross margins means the campaign is breaking even at the product level; a 4x ROAS on a product with 50% margins is highly profitable. Agencies must understand client margin structures to interpret ROAS targets correctly — a blanket "target 3x ROAS" applies very differently to different product categories.
Critical limitation: ROAS figures from different platforms are not directly comparable. Amazon's 14-day click-only ROAS and Walmart's 30-day view-inclusive ROAS measure different things. Comparing them without normalization systematically misleads budget allocation decisions. See Normalized ROAS.
The inverse of ROAS is ACOS: ACOS = (1 / ROAS) × 100. ROAS and ACOS convey identical information in different forms; ROAS is more natural for cross-platform reporting while ACOS maps directly to margin analysis. A 5x ROAS = 20% ACOS; a 3.33x ROAS = 30% ACOS.
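Both the ROAS/ACOS relationship and the margin arithmetic are mechanical and easy to verify; the figures below mirror the illustrative numbers in this entry:

```python
def acos_from_roas(roas):
    """ACOS (%) = (1 / ROAS) x 100."""
    return 100.0 / roas

def product_level_profit(spend, roas, gross_margin):
    """Profit before overhead: attributed revenue x gross margin, minus spend."""
    return spend * roas * gross_margin - spend

print(acos_from_roas(5.0))                      # 20.0 -> 5x ROAS = 20% ACOS
print(product_level_profit(10_000, 4.0, 0.25))  # 0.0 -> 4x ROAS at 25% margin breaks even
print(product_level_profit(10_000, 4.0, 0.50))  # 10000.0 -> same ROAS, 50% margin, profitable
```

The break-even condition reduces to ROAS × margin = 1, which is the same statement as "break-even ACOS equals the gross margin percentage."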
A retail media ad format that displays a brand logo, custom headline, and multiple products in a prominent banner placement — typically at the top of search results.
Sponsored Brands (SB) ads appear prominently above, within, or below search results and allow advertisers to display their brand logo, a customized headline, and up to three products simultaneously. Unlike Sponsored Products (which link to a single product detail page), Sponsored Brands can link to a brand store, a custom landing page, or a collection of products — providing greater creative flexibility and narrative control.
Sponsored Brands campaigns are available on Amazon and have equivalents on Walmart Connect and several other retail media networks. They are best suited for brand awareness, consideration, and new-to-brand customer acquisition objectives — goals where the multi-product display and brand presence provide advantages over single-product Sponsored Products formats.
Attribution for Sponsored Brands is more complex than for Sponsored Products. Amazon's Sponsored Brands attribution includes the "brand halo" in reporting: sales of any product from the brand within 14 days of the click are attributed to the Sponsored Brands campaign, not just the specific products featured in the ad. This makes SB ROAS figures not directly comparable to SP ROAS figures even within the same platform, and especially not comparable to non-halo-inclusive ROAS from other platforms.
A retail media ad format that serves display ads to targeted audiences on the retailer's owned properties and, in some cases, external publisher sites.
Sponsored Display (SD) extends retail media advertising beyond keyword-triggered placements into audience-targeted display inventory. Unlike Sponsored Products (search-triggered) and Sponsored Brands (search banner), Sponsored Display can serve ads to defined audience segments regardless of what they're currently searching for — reaching consumers who have viewed the product, related products, or competitive products, at various points in their browse experience.
Amazon's Sponsored Display offers both on-Amazon placements (product detail pages, shopping results, homepage) and off-Amazon placements across third-party publishers through Amazon's DSP infrastructure — blurring the line between sponsored ads and DSP campaigns. This distinction matters for attribution: on-Amazon SD uses Amazon's standard attribution window, while off-Amazon placements may use different attribution configurations.
Sponsored Display is more susceptible to view-through attribution distortion than sponsored search formats, because it reaches consumers who may not be actively shopping. A consumer who sees a Sponsored Display ad while reading a review, never clicks it, and purchases three weeks later may be counted as a view-through conversion — adding to reported revenue while contributing minimal incremental lift.
The core keyword-targeted ad format in retail media, serving individual product ads in search results and product detail pages triggered by consumer search queries.
Sponsored Products is the flagship ad format across all major retail media networks: Amazon Sponsored Products, Walmart Connect Sponsored Products, Instacart Sponsored Products. The format is simple: an ad that looks like a regular search result but is promoted above or among organic listings, triggered when a consumer's search query matches the advertiser's keyword bids.
Sponsored Products campaigns are predominantly lower-funnel, capturing consumers who are actively searching for products in the advertised category. This makes them the highest-intent and typically highest-converting format in retail media — and the most commonly measured and reported by agencies in weekly performance reports.
Because SP campaigns are triggered by search queries (explicit purchase intent signals), they have different attribution characteristics than display or audience-targeted campaigns. Most SP conversions are click-through (the consumer clicked the ad before purchasing), with minimal view-through contribution. This makes SP ROAS relatively less inflated by view-through inclusion differences between platforms — though attribution window length and model type differences still create significant cross-platform distortion.
Ad spend divided by total sales (paid + organic), measuring advertising efficiency as a share of total category revenue rather than just attributed revenue.
TACOS = Ad Spend / (Attributed Sales + Organic Sales). Unlike ACOS, which divides spend by attributed revenue only, TACOS includes organic sales in the denominator — giving a more complete picture of advertising efficiency at the brand level. A brand spending $10,000 on ads that generates $30,000 in attributed sales and $20,000 in organic sales has an ACOS of 33% but a TACOS of 20%.
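The worked example above, as a quick calculation:

```python
def acos_pct(spend, attributed_sales):
    """ACOS (%): spend over attributed revenue only."""
    return spend / attributed_sales * 100

def tacos_pct(spend, attributed_sales, organic_sales):
    """TACOS (%): spend over total (paid + organic) revenue."""
    return spend / (attributed_sales + organic_sales) * 100

# $10,000 spend, $30,000 attributed sales, $20,000 organic sales:
print(round(acos_pct(10_000, 30_000)))            # 33
print(round(tacos_pct(10_000, 30_000, 20_000)))   # 20
```

The gap between the two percentages is the organic contribution; as organic sales grow relative to attributed sales, TACOS falls while ACOS stays put.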
TACOS is particularly relevant for brands investing in long-term retail media strategies where advertising drives organic rank improvement. If a campaign generates search rank gains that produce lasting organic sales lift, the ad spend that created that lift should be evaluated against total revenue improvement — not just attributed revenue. A high-ACOS campaign that dramatically improves organic rank may be an excellent investment when measured by TACOS.
The practical challenge with TACOS is obtaining reliable organic sales data, which requires either platform-level total sales reports (available for own-brand catalog) or third-party analytics tools that track organic rank and estimated organic sales. Additionally, TACOS is a brand-level metric that doesn't map directly to campaign-level optimization — agencies typically monitor TACOS as a strategic health indicator while managing campaigns against ACOS or ROAS targets at the tactical level.
A bid strategy that automatically adjusts bids to achieve a specified ROAS target, using the platform's machine learning to optimize spend toward high-conversion opportunities.
Target ROAS bidding instructs the platform's algorithm to automatically raise and lower bids in real time to achieve a specified revenue-per-spend ratio. If the tROAS target is 4x, the system will bid aggressively in auctions where it predicts high conversion probability and reduce bids in auctions where predicted conversion probability would result in ROAS below the target.
tROAS bidding requires a learning period — typically 2–4 weeks — during which the algorithm accumulates sufficient conversion data to make accurate predictions. During this period, performance may be volatile. Campaigns with fewer than 15–20 conversions per week typically do not have enough signal for tROAS to outperform manual or enhanced CPC bidding.
A critical subtlety: tROAS targets are set against the platform's reported ROAS — which is unadjusted for attribution window or model. A tROAS target of 4x on Walmart is not the same efficiency requirement as a tROAS target of 4x on Amazon, because Walmart's reported ROAS is typically inflated by a longer window and view-through inclusion. Agencies that set uniform tROAS targets across platforms without normalization will underbid on platforms with conservative attribution and overbid on platforms with generous attribution.
A conversion credited to an ad that the consumer saw but did not click — the consumer was served an impression, didn't engage, but later purchased within the attribution window.
A view-through conversion (VTC) occurs when a consumer is served an ad impression (the ad loads in their browser or app), does not click it, and subsequently makes a purchase within the platform's view-through attribution window. The platform credits the sale to the ad impression despite the absence of a direct click interaction.
View-through attribution is theoretically valid: ad exposure can influence consumer behavior even without a direct click, particularly for brand awareness and consideration campaigns. The problem is that view-through conversions are extremely difficult to separate from organic purchases that would have occurred regardless. A consumer who was going to repurchase a household staple they use weekly is likely to make that purchase whether or not they were served a display impression — but if they happened to see an ad within 14 days of their repurchase, the sale may be credited as a view-through conversion.
| Platform | VTC Default | VT Window | Separate Reporting |
|---|---|---|---|
| Amazon Ads | No | N/A | N/A |
| Walmart Connect | Yes | 14 days | Yes |
| Criteo | Yes | 1 day | Yes |
| Instacart Ads | Yes | 14 days | Partial |
The practical impact on ROAS figures is significant: for FMCG display campaigns on platforms with view-through enabled, VTCs can represent 15–40% of total attributed revenue. When comparing this figure against Amazon's click-only ROAS, the view-inclusive platform will appear to outperform by a substantial margin that is a measurement artifact, not a performance difference. Normalization requires discounting view-through revenue using an estimated incremental lift factor — not blanket exclusion, but a calibrated correction that retains the genuine lift signal while removing the organic purchase inflation.
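A sketch of that calibrated correction, with the lift factor as a hypothetical input (a real value would come from holdout testing or lift studies):

```python
def adjust_view_through(click_revenue, view_revenue, vt_lift=0.25):
    """Discount view-through revenue by an estimated incremental lift factor
    rather than excluding it outright. vt_lift is the (assumed) fraction of
    VTC revenue judged truly incremental."""
    return click_revenue + view_revenue * vt_lift

# Hypothetical FMCG display campaign: 35% of $100k attributed revenue from VTCs.
print(adjust_view_through(65_000, 35_000))  # 73750.0, comparable to click-only figures
```

Setting vt_lift to 0 reproduces blanket exclusion and setting it to 1 reproduces the platform's raw figure; the calibrated value sits between the two.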
RetailNorm normalizes all of these metrics across platforms — attribution windows, view-through adjustments, model type corrections — and produces comparable ROAS figures in a single report. No spreadsheets.
Run a normalized analysis →