Try the engine — upload your CSVs →

Know where your
next dollar
should go.

Every retail media platform measures ROAS differently. RetailNorm corrects the distortion, normalizes attribution across all networks, and shows you exactly where to move budget for maximum return.

Built for performance agencies managing 3–15 retail media accounts.
6
platforms normalized
Amazon, Walmart, Criteo, Tesco, Instacart, Carrefour — corrected to one common baseline.
20–50%
typical ROAS inflation
The gap between what platforms report and what attribution actually supports. You're allocating budget based on these numbers.
10s
to corrected output
Upload CSVs. Get corrected performance and allocation recommendations in under 10 seconds.
4
correction factors
Window decay, view-through discount, model conversion, and confidence weighting — fully auditable.

You're making budget decisions
with numbers that don't mean the same thing.

Three platforms. Three attribution models. Three different answers to "what's our ROAS?" — and none of them are comparable.
What platforms report — raw ROAS
Amazon Ads
Last-click
Attribution window
14 days
4.2
× ROAS
Reported performance
Industry baseline. 14-day window is the shortest of the three — captures the fewest conversions.
Walmart Connect
Last-click
Attribution window
30 days
5.5
× ROAS
Reported performance
Looks like the winner. But a 30-day window captures roughly twice the conversions of a 14-day one — many would have happened anyway.
Criteo
First-click
Attribution window
7 days
3.5
× ROAS
Reported performance
Different model entirely. First-click credits the initial touchpoint — inflating top-of-funnel even with a short window.
Same campaign budget. Three incompatible scales. The platform with the longest window always looks like the winner. You're allocating real dollars based on this comparison.
After RetailNorm
Corrected ROAS — 14-day last-click baseline
Amazon Ads
3.6
× ROAS
−14% from reported
Walmart Connect
3.8
× ROAS
−31% from reported
Criteo
2.8
× ROAS
−20% from reported
Inflated ROAS doesn't just mislead.
It misallocates capital.
$8k–$40k
Misallocated per month
When a platform's ROAS is inflated 30% by a longer attribution window, and you allocate budget proportionally, you're moving real dollars based on phantom performance. For an agency managing $200k/month across three platforms, that's $8k–$40k in the wrong place every month.
Compounding over 6 months
The platform that gets more budget reports more revenue — because it has more budget to attribute. The over-credited platform looks stronger. The under-credited one gets cut. The gap widens. After six months, your allocation is structurally wrong.
Invisible
To your clients — until it's not
Client ROAS stalls. Efficiency plateaus. You suspect something is off but can't pinpoint it because the raw numbers all look reasonable. The problem isn't performance — it's measurement.
Upload. Correct.
Reallocate.
Click each step to see exactly what happens to your data. No black boxes — every correction is visible.
01 — PARSE

Upload platform exports

Auto-detect schemas & windows
02 — WINDOW

Correct attribution windows

Decay longer windows to 14d baseline
03 — VIEW-THROUGH

Discount phantom revenue

Remove inflated view-through credit
04 — MODEL

Convert attribution models

Align first-click, linear → last-click
05 — CONFIDENCE

Weight by data quality

Penalize thin data, reward signal
06 — RECOMMEND

Reallocate budget

Hill curve optimizer → max ROAS
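The pipeline above boils down to one idea: reported ROAS times a chain of multiplicative correction factors. A minimal sketch — the factor values here are illustrative, not engine estimates:

```python
# Minimal sketch of the multiplicative correction chain (steps 02-05).
# Factor values below are illustrative, not the engine's estimates.

def corrected_roas(reported_roas, factors):
    """Apply each correction factor in turn; order doesn't matter
    because the factors are purely multiplicative."""
    result = reported_roas
    for factor in factors.values():
        result *= factor
    return result

walmart = corrected_roas(5.5, {
    "window_decay": 0.82,   # 30d window decayed to a 14d baseline
    "view_through": 0.88,   # phantom view-through revenue removed
    "model": 1.00,          # already last-click
    "confidence": 0.97,     # moderate-variance penalty
})
print(round(walmart, 2))  # → 3.85
```

Because the factors multiply, each one stays individually auditable: print the chain and you can see exactly where the inflation came from.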
Auto-detect everything
Drop exports from any supported platform. The engine identifies the platform, maps columns, and extracts attribution metadata — no configuration needed.
amazon_ads.csv
14d · last-click · 15 rows
walmart_connect.csv
30d · last-click · 15 rows
criteo_retail.csv
7d · first-click · 10 rows
Platform identified via column fingerprint
Attribution window extracted: 14d, 30d, 7d
View-through column detected (Walmart)
Model type mapped: last-click, first-click
Currency & delimiter normalized
No setup required. Fuzzy column matching handles naming variations across platform export formats. Works with standard CSV exports from all major retail media networks (RMNs).
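A rough sketch of how fuzzy header matching might work — the canonical field names, matching order, and cutoff below are hypothetical, not RetailNorm's actual schema:

```python
import difflib

# Hypothetical canonical schema, for illustration only.
CANONICAL = ["spend", "revenue", "impressions", "clicks", "view_through_revenue"]

def map_columns(export_headers):
    """Map raw export headers onto canonical field names:
    exact match first, then longest substring match, then a
    fuzzy fallback for typos and minor naming variations."""
    mapping = {}
    for header in export_headers:
        normalized = header.lower().replace(" ", "_").replace("-", "_")
        hit = next((c for c in CANONICAL if c == normalized), None)
        if hit is None:
            subs = [c for c in CANONICAL if c in normalized]
            hit = max(subs, key=len) if subs else None
        if hit is None:
            close = difflib.get_close_matches(normalized, CANONICAL, n=1, cutoff=0.6)
            hit = close[0] if close else None
        if hit is not None:
            mapping[header] = hit
    return mapping

print(map_columns(["Spend", "Attributed Revenue", "View-through Revenue"]))
# → {'Spend': 'spend', 'Attributed Revenue': 'revenue',
#    'View-through Revenue': 'view_through_revenue'}
```

The longest-substring rule matters: "View-through Revenue" must map to the view-through field, not plain `revenue`, or the discount step downstream would double-count.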
Window decay correction
Longer attribution windows capture more conversions — inflating revenue. The engine decays each platform to a common 14-day last-click baseline.
Amazon Ads
14d
×1.00
Already at baseline
Walmart
30d
×0.82
16 extra days decayed
Criteo
7d
×1.12
Shorter window adjusted up
Revenue impact
Walmart revenue
$22.6k → $18.5k
Criteo revenue
$10.1k → $11.3k
Bayesian λ estimation. Decay rate isn't arbitrary — it's estimated from the data using a Bayesian posterior that blends platform-specific priors with observed daily ROAS patterns.
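As a sketch of the mechanics: assume conversions after a click arrive with exponential lag at rate λ. The correction factor is then the share of a longer window's conversions that a 14-day window would also have captured. The fixed λ below is illustrative — the engine estimates it per platform from observed daily ROAS:

```python
import math

def window_decay_factor(window_days, lam=0.11, baseline_days=14.0):
    """Fraction of a window's conversions a 14-day window would have
    captured, assuming exponential conversion lag with rate `lam`.
    The real engine estimates `lam` per platform via a Bayesian
    posterior; the fixed value here is illustrative only."""
    captured = lambda t: 1.0 - math.exp(-lam * t)
    return captured(baseline_days) / captured(window_days)

print(round(window_decay_factor(30), 2))  # → 0.82 (30d decayed down)
print(window_decay_factor(7) > 1.0)       # shorter window adjusted up
```

With this λ the 30-day factor lands near the ×0.82 shown for Walmart; how far a shorter window gets adjusted upward depends on the platform-specific posterior.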
View-through discount
Some platforms bundle view-through conversions into reported revenue. This inflates ROAS by crediting sales to impressions that may not have influenced the purchase.
Amazon Ads
No VT reported
×1.00
Walmart
18% view-through
×0.88
−$4.1k rev
Criteo
No VT column
×1.00
Walmart's 5.5× raw ROAS includes $4.1k in view-through revenue that likely would have converted organically. After discounting, their contribution drops — but it's more accurate.
Dynamic β estimation. The discount factor adapts based on platform credibility score, sample size, and observed variance in view-through ratios across campaigns.
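One plausible shape for the discount, assuming a fraction β of view-through revenue would have converted organically. The β here is a fixed placeholder — the engine estimates it dynamically, as noted above:

```python
def view_through_factor(vt_share, beta=2/3):
    """Multiplier on reported revenue after discounting view-through.
    vt_share: fraction of reported revenue credited to view-through.
    beta: fraction of that view-through revenue assumed organic
    (the engine estimates beta dynamically; 2/3 is illustrative)."""
    return 1.0 - vt_share * beta

print(round(view_through_factor(0.18), 2))  # → 0.88
```

With β = 2/3, Walmart's 18% view-through share yields exactly the ×0.88 multiplier on its card: two-thirds of view-through revenue is treated as revenue that would have arrived anyway.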
Attribution model alignment
Platforms use different attribution models. First-click, last-click, linear, and time-decay each assign credit differently — making cross-platform comparison meaningless without conversion.
Amazon Ads
Last-click
×1.00
Baseline model
Criteo
First-click
×0.90
First-click over-credits
Walmart
Last-click
×1.00
Same as baseline
Why this matters
First-click attribution credits the initial touchpoint with all revenue — even if the customer clicked 5 other ads before buying. This inflates top-of-funnel campaigns and makes Criteo's retargeting appear more effective than it actually is.
Bayesian shrinkage. Conversion factors use calibrated priors per model type, blended with observed data. When iROAS columns are detected, the engine switches to incremental-based conversion automatically.
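The shrinkage step can be sketched as a precision-weighted blend of a per-model prior with observed data. Prior values and weights below are illustrative stand-ins, not the engine's calibrated priors:

```python
# Illustrative per-model priors, not RetailNorm's calibrated values.
MODEL_PRIORS = {"last-click": 1.00, "first-click": 0.90, "linear": 0.95}

def model_factor(model, observed=None, n=0, prior_weight=20):
    """Shrink an observed conversion factor toward the model-type prior.
    `prior_weight` acts as a pseudo-sample count: with little data the
    prior dominates; with lots of data the observed factor takes over."""
    prior = MODEL_PRIORS[model]
    if observed is None or n == 0:
        return prior
    return (prior * prior_weight + observed * n) / (prior_weight + n)

print(model_factor("first-click"))                        # → 0.9 (prior only)
print(round(model_factor("first-click", 0.84, n=10), 2))  # → 0.88 (shrunk)
```

Ten observed campaigns move the factor only part of the way from the 0.90 prior toward the observed 0.84 — thin evidence gets partial credit, which is the point of shrinkage.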
Confidence weighting
Not all data is created equal. Platforms with more data points and lower variance get higher confidence. Platforms with thin or volatile data get penalized.
Amazon Ads
96%
×0.99
15 campaigns · low variance · strong signal
Walmart
92%
×0.97
15 campaigns · moderate variance · reliable
Criteo
78%
×0.93
10 campaigns · higher variance · penalized
Criteo's thinner dataset means its corrected ROAS carries a 7% confidence penalty. You still see the number — but you know to weight it less in your allocation decision.
Monte Carlo propagation. Confidence isn't just a single number — 500 simulations propagate uncertainty through all 4 factors to produce P5–P95 revenue bands for each platform.
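A compact sketch of the propagation: treat each correction factor as a distribution instead of a point value, draw all four per simulation, and read percentiles off the simulated corrected ROAS. The factor means and spreads here are assumptions, not engine estimates:

```python
import random

def monte_carlo_roas(reported, factors, n=500, seed=0):
    """Propagate factor uncertainty: each factor is (mean, std); draw
    all factors per simulation, multiply, and return approximate
    P5 / P50 / P95 of the corrected ROAS."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n):
        roas = reported
        for mean, std in factors.values():
            roas *= max(rng.gauss(mean, std), 0.0)  # clamp draws at zero
        sims.append(roas)
    sims.sort()
    return sims[int(0.05 * n)], sims[n // 2], sims[int(0.95 * n)]

p5, p50, p95 = monte_carlo_roas(5.5, {
    "window": (0.82, 0.04), "view_through": (0.88, 0.03),
    "model": (1.00, 0.00), "confidence": (0.97, 0.02),
})
print(f"P5 {p5:.2f} / P50 {p50:.2f} / P95 {p95:.2f}")
```

The band, not the point estimate, is what goes in front of the client: a wide P5–P95 spread is itself a signal that the platform's data is too thin to reallocate against aggressively.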
Budget reallocation
After all four corrections, the engine models diminishing returns per platform and tells you exactly where to move budget for maximum blended ROAS.
Amazon Ads
$5.4k
$4.7k
−$700
Walmart
$4.1k
$5.0k
+$900 ↑
Criteo
$2.9k
$2.7k
−$200
Amazon $4.7k (38%)
Walmart $5.0k (40%)
Criteo $2.7k (22%)
Projected revenue uplift at same total spend
+$3.1k / week
Same spend. More return. The optimizer doesn't ask for more budget — it reallocates what you already have based on corrected marginal ROAS per platform.
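A toy version of the optimizer: model each platform with a saturating Hill response curve, then hand out budget in small increments to whichever platform currently offers the highest marginal revenue. The curve parameters below are made up for illustration:

```python
def hill_revenue(spend, vmax, s50, k=1.0):
    """Hill response curve: revenue saturates as spend grows.
    With k = 1 the curve is concave everywhere, so greedy
    marginal allocation is exact."""
    if spend <= 0:
        return 0.0
    return vmax * spend**k / (s50**k + spend**k)

def greedy_allocate(total, curves, step=100.0):
    """Assign budget in `step` increments to whichever platform
    currently has the highest marginal revenue."""
    alloc = {p: 0.0 for p in curves}
    for _ in range(int(total / step)):
        best = max(curves, key=lambda p: hill_revenue(alloc[p] + step, *curves[p])
                                         - hill_revenue(alloc[p], *curves[p]))
        alloc[best] += step
    return alloc

curves = {  # (vmax, s50) — hypothetical fitted parameters
    "amazon":  (25_000, 6_000),
    "walmart": (30_000, 7_000),
    "criteo":  (12_000, 5_000),
}
print(greedy_allocate(12_400, curves))
```

The greedy loop never asks for more budget — it only trades increments between platforms until every marginal dollar earns the same return, which is exactly the "same spend, more return" property described above.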
This is what your
allocation engine looks like.
Upload CSVs. The engine corrects attribution, models return curves, flags anomalies, and recommends the optimal budget split — in under 10 seconds.
🔒 app.retailnorm.com/engine
Engine
Allocation Engine
Return Curves
Simulator
Intelligence
Confidence 92%
Calibration v3
Anomalies 2
Output
Reports
Settings
NovaBrew Co.
Engine live · 3 platforms · Jan 6–10
↑ Upload CSVs
Generate Report →
Normalization Engine
3 platforms · 14-day last-click baseline · v3.2
Executive
Technical
Performance Reality Check
3.87x
Reported ROAS
3.37x
Normalized ROAS
12.9%
Inflation
92%
Confidence
Your reported performance appears inflated by 12.9% due to attribution window expansion and view-through over-crediting. Actual blended ROAS is 3.37x, not 3.87x.
Platform Performance
Platform · Spend · Norm. ROAS · Confidence · Status
Amazon Ads
$5.4k
3.58x
96%
Healthy
Walmart Connect
$4.1k
4.69x
88%
Healthy
Criteo
$2.9k
3.14x
72%
Attention
Budget Recommendation
Amazon
$5.4k → $4.7k (−13%)
Decrease — performance drops after normalization.
Walmart
$4.1k → $5.0k (+22%)
Increase — strong normalized performance.
Criteo
$2.9k → $2.7k (−7%)
Decrease — lower confidence data.
01 — REALITY CHECK
Executive Summary in 5 Seconds
Reported vs. normalized ROAS, inflation %, and confidence score — with a plain-language narrative your CMO can scan before the meeting starts.
02 — PROGRESSIVE DETAIL
Click to Reveal the Why
Clean 5-column table shows what matters. Click any platform to expand correction factors — attribution window impact, view-through over-credit, and model alignment.
03 — ACTIONABLE CARDS
Budget Moves, Not Spreadsheets
Each platform gets a card: current vs. suggested budget, directional arrow, and a one-sentence explanation. Projected revenue uplift shown at the bottom.
Try It Out →
Upload your CSVs and see the correction on your own data
What you see inside
Every view in the dashboard
serves a specific decision.
Report Output
R
NovaBrew Co. — Weekly Report
Feb 17–23 · 4 platforms
Executive Summary
2.79x
Blended ROAS
−15.3%
Attribution Gap
$52.1k
Corr. Revenue
Platform Breakdown
Amazon Walmart Criteo Instacart
Client-Ready PDF
One-click export with executive summary, correction breakdown, and budget recommendation. Branded, professional, ready for the weekly call.
Return Curves
Hill Response Curves — mROAS by Spend Level
current · mROAS · spend →
Amazon Walmart Criteo
Diminishing Returns
Hill function models per platform show exactly where additional spend stops generating return. The optimizer finds the intersection point.
Confidence
Platform Data Quality Score
Amazon Ads96%
15 campaigns · low variance
Walmart88%
15 campaigns · moderate variance
Criteo72%
10 campaigns · higher variance
Monte Carlo: 500 samples
P5 2.21x → P50 2.79x → P95 3.38x
Uncertainty Quantified
Thin data gets penalized. Every platform gets a quality score. Monte Carlo turns correction uncertainty into actionable confidence bands.
Simulator
Budget Simulator — What If Analysis
Total Budget
$18.7k
→ same spend, different split
Amazon
30%
Walmart
42%
Criteo
18%
Instacart
10%
Projected blended ROAS 3.14x
What-If Scenarios
Drag sliders to test different allocations. The engine recalculates projected ROAS in real time using corrected Hill curves and confidence bands.
Read the full technical documentation →
An opinionated engine, not a data warehouse.
Mathematical correction handles the hard part. AI handles interpretation. You handle the client relationship.
Core engine

Attribution Correction

Four multiplicative correction factors — window decay, view-through discount, model conversion, and confidence weighting. Every factor is clamped to safety floors. Every adjustment is visible.

Core engine

Hill Curve Optimization

Response curves per platform model diminishing returns. The greedy marginal ROAS optimizer redistributes budget across platforms for maximum blended return.

Intelligence

Anomaly Detection

Z-score deviation analysis with sigmoid saturation catches when normalization gaps exceed expected ranges. Alerts fire before anomalies reach your weekly report.
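In sketch form: compute the z-score of the observed normalization gap against its expected range, then squash it through a shifted sigmoid so extreme deviations saturate near 1 instead of growing without bound. The expected range and scale below are illustrative, not the engine's calibration:

```python
import math

def anomaly_score(gap, expected_mean, expected_std, scale=2.0):
    """Z-score of the observed normalization gap vs. its expected
    range, passed through a sigmoid shifted by `scale` so that only
    deviations beyond ~2 sigma start scoring high. All thresholds
    here are illustrative, not the engine's calibration."""
    z = (gap - expected_mean) / expected_std
    return 1.0 / (1.0 + math.exp(-(abs(z) - scale)))

# A 45% gap when 20% ± 8% is expected scores high;
# a gap right at the expected mean scores low.
print(round(anomaly_score(0.45, 0.20, 0.08), 2))  # → 0.75
print(round(anomaly_score(0.20, 0.20, 0.08), 2))  # → 0.12
```

The sigmoid is what keeps alerts usable: a 10-sigma data glitch scores about the same as a 4-sigma one, so a single corrupt export can't drown out every other alert.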

Intelligence

Budget Simulator

Drag sliders to test what-if scenarios. See how shifting $2k from Amazon to Walmart affects total normalized ROAS, revenue, and per-platform efficiency in real time.

Safety

Monte Carlo Confidence

500-sample Monte Carlo propagation through all 4 factors generates P5–P95 revenue bands. You know how certain the numbers are before presenting to clients.

AI-powered

Report Narrative

AI generates an executive summary your client understands — explaining what changed, why, and what to do next. Copy-paste into your deck or send as a branded PDF.

Differentiation

A correction layer. Not another platform.

RetailNorm answers one question: where should the next dollar go? It sits between your platform exports and your allocation decisions — focused, opinionated, and built for the agency that manages 3–8 clients across multiple retail media networks.

$3k–10k/month enterprise campaign suite (Skai, Pacvue)
→ From $199/month, no contract, no onboarding
Data warehouse + BI team required (Mimbi)
→ Upload CSVs, get corrected allocation in 10 seconds
Manual Excel normalization with arbitrary adjustment factors
→ Bayesian correction model with auditable confidence bands
Marketing mix modeling requiring 12+ months of data
→ Works from a single week of CSV exports
Who is this for
Built for the agency that actually allocates the budget.
📊
Media planners across 2–4 RMNs
You manage Amazon Ads, Criteo, Walmart Connect — and need to know which one is actually winning before moving budget.
⚖️
Agencies of 5–50 people
Too sophisticated to ignore attribution differences. Too lean for a $3k/month enterprise tool or an in-house data team.
🎯
Commerce managers optimizing ROAS
Your client asks "where should we put the next $10k?" — now you have a data-driven answer backed by normalized performance and Hill curve optimization.
Why now
277
Retail media networks
with different metrics
$108B
Global retail media spend
with no unified measurement
54%
Of marketers cite analytics
resources as #1 barrier
0
Industry incentive to
standardize attribution

Stop presenting numbers
you can't compare.

Upload your first CSV and see the attribution gap in your own data.

Try for Free →