
Promo Baseline Estimator

Quantify how baseline-estimation bias propagates into incremental-volume error, fire the pathological-incrementality alert when the math breaks, and bring a baseline-methodology recommendation to the JBP before the ROI argument starts. The same interactive model the full RGM Academy course uses for TPO Lesson 3 — no auth, no paywall.

Updated 23 April 2026 · Extracted from the Trade Promotion Optimisation module, Lesson 3: Baseline vs Incremental Volume
Scenario walkthrough


5.1 Scenario setup

The starting SKU, market, and assumptions the model makes.

You are the Trade Marketing Director on a mainstream juice brand. The Joint Business Plan is in three weeks, and the retailer category team claims your last wave of trade events delivered 5,200 incremental units while your own Sales team claims 8,000. Same 52 weeks of POS data, same 8 promo events, same $1.89 promo margin — but different baseline methodologies.

Your CFO wants a single defensible ROI number before the JBP. Your options: (a) fight the retailer's baseline; (b) concede and take the lower number; (c) quantify how much baseline bias distorts the ROI math and surface a jointly-agreed methodology BEFORE the ROI argument starts.

Your job by Friday: use the baseline estimator to quantify how much a ±10% methodological bias costs in incremental-volume reporting, identify the boundary where the math breaks (pathological incrementality), and bring a baseline-methodology recommendation — not a promo ROI claim — to the review. Getting the JBP to agree on HOW to build the baseline is the prerequisite for agreeing on what the promos delivered.

Your objective

Use the baseline simulator to quantify how baseline bias propagates into incremental-volume error, find the offset band that corresponds to industry-median 15-25% manufacturer-retailer disagreement, and identify the pathological boundary beyond which downstream ROI math becomes physically meaningless.

Key assumptions
  • MAPE = |offset|: in this simulator, the baseline error is UNIFORM across all 52 weeks (a single %-offset applied to every week's true baseline). That is a deliberate simplification — real-world baseline methods produce heteroscedastic errors (tighter in non-promo weeks, looser in seasonal peaks or promo-week estimation). For teaching purposes, the uniform-offset model preserves the single most important teaching point: a modest %-bias on baseline translates into a large %-error on incremental volume.

  • Incremental volume is a small difference of two large numbers: Σactual ≈ 276K at default; Σbaseline ≈ 240K; difference ≈ 37K. A 10% bias on 240K is a 24K error on the 37K incremental — a 65% error on the thing that actually matters. The 3-5× multiplier (baseline error → incremental error) is the industry rule-of-thumb documented in the source lesson's Hook; the default scenario here lands even higher, at roughly 6.5×, because the promo weeks are only 8 of 52.

  • Pathological incrementality (<0% or >100%) is physically meaningless — it is the model screaming that the baseline has been pushed past the point where the math makes sense. Real-world BI dashboards that report estIncrementality > 100% are a signal of baseline under-estimation; those reporting <0% are signalling baseline over-estimation. Either way, the first move is to fix the baseline, not interpret the incrementality.

  • Industry-median JBP baseline disagreement is 15-25%: below 15% is rare (requires aligned methodology); above 25% is the UNUSABLE band where negotiations stall. The sandbox's baselineQuality state machine mirrors this: EXCELLENT <5% / ACCEPTABLE <15% / PROBLEMATIC 15-25% / UNUSABLE ≥25%.

  • This sandbox is simplified on purpose. A real baseline method has to handle non-uniform seasonality per SKU, competitor activity memory, post-promo dip estimation, distribution changes, non-promotional price shifts, and control-store selection. Four sliders cannot capture that complexity — but they DO capture the core teaching: baseline methodology choice matters more than which statistical method you pick.
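The "small difference of two large numbers" effect can be reproduced with a few lines of arithmetic on the default sums quoted above (Σactual ≈ 276,750, Σtrue baseline ≈ 239,850). The function below is an illustrative sketch, not the calculator's source code:

```python
def incremental_error(total_actual: float, true_baseline: float, offset_pct: float):
    """Propagate a uniform %-bias on the baseline into incremental-volume error.

    Positive offset = over-estimated baseline (understates incrementality);
    negative offset = under-estimated baseline (overstates it).
    """
    est_baseline = true_baseline * (1 + offset_pct / 100)
    true_inc = total_actual - true_baseline      # ~36,900 units at defaults
    est_inc = total_actual - est_baseline        # what your methodology reports
    rel_error_pct = (true_inc - est_inc) / true_inc * 100
    return est_inc, rel_error_pct

# Default scenario: a +10% baseline bias shrinks the reportable incremental
# from 36,900 to 12,915 units, a 65% error on the number that matters.
est_inc, err = incremental_error(276_750, 239_850, 10)
print(round(est_inc), round(err))
```

Flipping `offset_pct` to −10 gives est_inc ≈ 60,885: the same 65% error in the opposite direction, which is why reporting MAPE alone, magnitude without direction, is not enough.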

5.2 Controls & toggles

Every input the calculator exposes, its range, and what it changes.

| Control | Range | Default | What it changes |
| --- | --- | --- | --- |
| Baseline Trend Growth slider | −10% to +15% per year (1% step) | +5% (mid-growth category) | Underlying year-over-year category growth, independent of promos. Scales every weekly baseline by a linear trend factor from week 1 to week 52. Positive in growing categories, negative in declining ones. |
| Seasonality Amplitude slider | 0% to 40% peak-to-trough (1% step) | 15% (moderate seasonal category like juice) | Amplitude of the sine-wave seasonal pattern peaking around week 24. 0% = flat year-round; 40% = heavy seasonal swing. Higher amplitude makes naïve moving-average baselines diverge further from reality. |
| Promo Uplift Factor slider | 1.2x to 4.0x (0.1x step) | 2.0x (doubling on promo weeks) | Volume multiplier during each of the 8 fixed promo weeks. Doesn't change the underlying baseline, only the actuals the simulator generates. |
| Your Baseline Estimate slider | −25% to +25% vs true (1% step) | +10% (non-zero self-demo seed — drag to 0 for an accurate baseline) | Your methodological bias. The slider IS the MAPE. Positive = over-estimated baseline (understates incrementality, makes promos look worse than they are); negative = under-estimated baseline (overstates incrementality, makes promos look better). |
| Baseline Quality verdict (derived) | EXCELLENT <5% / ACCEPTABLE <15% / PROBLEMATIC 15-25% / UNUSABLE ≥25% | ACCEPTABLE at default (MAPE 10%) | State machine over MAPE. The 15-25% band is where industry-median JBP baseline disagreements land — both sides producing technically correct math from incompatible baselines. |
| Pathological alert (derived) | OK / firing | OK at default (est. incrementality 4.7%, inside the 0-100% band) | Red banner fires when estimated incrementality goes <0% or >100% — physically impossible values signalling that baseline error has broken the downstream math. The default promo mix tips pathological at offset ≥ +16%. |
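The two derived rows can be mirrored in a few lines. The thresholds are the ones stated in the table; the function names are illustrative, not the calculator's internals:

```python
def baseline_quality(mape_pct: float) -> str:
    """Map baseline MAPE (%) to the sandbox's four-band quality verdict."""
    if mape_pct < 5:
        return "EXCELLENT"
    if mape_pct < 15:
        return "ACCEPTABLE"
    if mape_pct < 25:
        return "PROBLEMATIC"
    return "UNUSABLE"

def is_pathological(est_incrementality_pct: float) -> bool:
    """Incrementality outside 0-100% is physically impossible: baseline is broken."""
    return not (0 <= est_incrementality_pct <= 100)

print(baseline_quality(10))    # default scenario: ACCEPTABLE
print(is_pathological(4.7))    # default est. incrementality: False
print(is_pathological(-0.5))   # offset pushed to +16% or beyond: True
```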
5.3 Step-by-step exploration

7-step guided exploration of the scenario.

  1. Read the default self-demo scenario

    Leave every control at default — trend +5%, season 15%, uplift 2.0x, baseline offset +10%. Read the four KPI tiles and the amber warning banner.

    Expected outcome: True Baseline ≈ 239,850 units (52-week sum); Your Estimate ≈ 263,835 (10% higher). True Incremental ≈ 36,900 units; Est. Incremental ≈ 12,915 units (~65% lower). Amber warning fires: 'Your baseline estimate is high by 10%. This understates incrementality — making promotions look worse than they are.' Baseline Quality: ACCEPTABLE (MAPE 10%). Not pathological — est. incrementality is 4.7% (within 0-100% band). The headline story: a 10% baseline bias shrinks the reportable incremental volume from 36,900 to 12,915 — 65% of the event's value disappears from the ROI math because of methodology alone.
  2. Zero the bias — see what an aligned baseline looks like

    Drag Baseline Offset to 0%. Read the KPI tiles.

    Expected outcome: Your Estimate exactly matches True Baseline (both ≈ 239,850). Est. Incremental = True Incremental (both ≈ 36,900). Amber warning disappears. Baseline Quality flips to EXCELLENT (MAPE 0%). This is the defensible baseline the tool is designed to help you reach. In real-world practice, you reach it by agreeing the methodology with the retailer BEFORE the promo cycle, not by arguing about baselines AFTER.
  3. Find the pathological boundary

    With other sliders at default, drag Baseline Offset upward until the red PATHOLOGICAL banner fires.

    Expected outcome: Banner fires first at approximately offset +16%. At that point Σ estimated baseline ≈ 239,850 × 1.16 ≈ 278,200, which exceeds Σactual ≈ 276,750 — so est. incremental goes NEGATIVE. Red alert: "your baseline assumption produces an estimated incrementality of [negative]%... not physically meaningful." Baseline Quality: PROBLEMATIC (MAPE 16%, in the 15-25% band). Teaching point: the pathological boundary isn't at some extreme offset; it sits inside the industry-median JBP-disagreement band. A retailer methodology that overshoots the true baseline by 16% is producing numbers that are mathematically impossible — yet that degree of overshoot is normal in practice.
  4. Test trend-growth sensitivity

    Reset offset to 0%. Drag Trend Growth slider from +5% to +15% (aggressive growth category).

    Expected outcome: Σbaseline climbs from ~239,850 to ~256,500 (about 7% higher because the trend multiplier reaches 1.15 by week 52). Σactual rises proportionally. Now re-introduce a +10% baseline offset — amber warning fires again, but the ABSOLUTE error in units is now larger (~26K instead of ~24K). MAPE still = 10% (the bias is proportional, not absolute). Lesson: the same %-methodology-error costs more absolute units in a high-growth category — so baseline discipline is MORE important, not less, when your category is growing.
  5. Test seasonality sensitivity

    Reset. Then drag Seasonality Amplitude from 15% to 40% (heavy seasonal category).

    Expected outcome: Weekly baseline now swings from ~2,700 (trough, week 50) to ~6,300 (peak, week 24). Σbaseline is unchanged because the sine wave averages to 1.0 over a full year — but the weekly variance is much higher. The key insight: at high seasonality, a naïve moving-average baseline (not this simulator's method — which assumes you know seasonality) would overestimate the trough weeks and underestimate the peak weeks, producing a heteroscedastic error that the uniform-MAPE model here cannot capture. For high-seasonality categories, invest in seasonal-decomposition methods (STL, Prophet, regression with Fourier seasonality terms) before running any baseline-based ROI analysis.
  6. Test promo-intensity effect on baseline sensitivity

    Reset. Then drag Promo Uplift Factor from 2.0x to 4.0x (aggressive deep-discount event).

    Expected outcome: Σactual climbs from ~276,750 to ~350,250 (the extra ~73K is the additional 2.0x of uplift × 8 promo weeks × ~4,600 baseline per week). True Incremental climbs from ~37K to ~110K. At offset +10%, the absolute error stays ~24K units — the offset applies to the baseline, which the uplift factor doesn't touch — so Est. Incremental reads ~86K and the relative error shrinks from ~65% to ~22%. MAPE still = 10%. Teaching point: deeper-discount / higher-uplift events are MORE susceptible to baseline methodology disputes because the absolute dollar stakes are larger: at the $1.89 promo margin, the incremental value under negotiation grows from ~$70K on a 2.0x event to ~$209K on a 4.0x event, with ~$45K of it exposed to the same 10% baseline bias either way.
  7. Map back to Source-of-Volume and Promo ROI

    Open the related-concept links (Promo Baseline, Promo ROI, Source of Volume, 13 TPO Levers). Then run the [Source-of-Volume Decomposer](/tools/source-of-volume-decomposer) and the [Promo ROI Calculator](/tools/promo-roi-calculator) on the same scenario.

    Expected outcome: Understanding that baseline is the UPSTREAM input into the whole TPO analytical pipeline. The Promo Baseline Estimator answers: what is the true incremental volume? The Source-of-Volume Decomposer answers: of that incremental volume, what proportion is productive (Switching / Expansion / Consumption) vs unproductive (Pantry)? The Promo ROI Calculator answers: what is the net profit from the productive volume minus the subsidy cost of the unproductive volume? Every number in the pipeline inherits the baseline error at the 3-5× multiplier — which is why a 15-25% baseline disagreement blows up the entire downstream scorecard.
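Step 3's +16% boundary can be reproduced from the two annual sums alone: scan offsets at the slider's 1% step until the estimated baseline overtakes the actuals. A sketch using the lesson's default sums (the loop is illustrative, not the tool's code):

```python
def pathological_boundary(total_actual: float, true_baseline: float) -> int:
    """Smallest whole-% baseline offset at which est. incremental turns negative."""
    for offset in range(0, 26):                    # positive half of the slider range
        est_baseline = true_baseline * (1 + offset / 100)
        if total_actual - est_baseline < 0:        # est. incremental < 0: pathological
            return offset
    return -1  # never fires inside the slider range

# Default promo mix: fires first at +16%, inside the 15-25% PROBLEMATIC band.
print(pathological_boundary(276_750, 239_850))
```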
5.4 Reading the output

Every KPI, the formula behind it, and how to interpret a positive or negative value.

| KPI | Formula | How to read it |
| --- | --- | --- |
| True Baseline (Σ annual) | Σ_{w=1..52} 4500 × trendFactor(w) × seasonFactor(w) | The simulator's ground-truth counterfactual: what the model says would have been sold without ANY promotion. Unobservable in real life; the whole craft of baseline estimation is inferring this from the observed actuals minus the promo weeks. |
| Your Estimate (Σ annual) | trueBaseline × (1 + offset% / 100) | Your methodology's output. In this sandbox, a uniform-offset scaling; in real life, the output of a regression / STL decomposition / Prophet model / control-store comparison. The offset slider IS your methodological bias. |
| True Incremental | Σactual − Σtrue baseline | What the promotions genuinely delivered above the underlying demand. This is the number every retailer-facing ROI claim SHOULD be built on. |
| Est. Incremental | Σactual − Σestimated baseline | What your methodology says the promotions delivered. If your baseline is high, this reads low (understates); if your baseline is low, this reads high (overstates). The gap vs True Incremental is the cost of the methodology bias. |
| Incrementality Ratio (Est.) | estIncremental / totalActual × 100 | Share of total observed volume that your methodology attributes to the promotions. Real-world values live in 0-100%; values outside that band are pathological — your baseline has been pushed past the point where the math makes sense. |
| MAPE (Baseline Quality) | abs(baseline offset) (in this sandbox) | Mean Absolute Percentage Error of the baseline estimate. Quality bands: <5% EXCELLENT, 5-15% ACCEPTABLE, 15-25% PROBLEMATIC, ≥25% UNUSABLE. Industry-median JBP disagreement is 15-25% — squarely in the PROBLEMATIC band. |
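The formulas in the table above can be run end-to-end on a tiny made-up series (four weeks, illustrative numbers, not the simulator's 52-week data):

```python
# Hypothetical 4-week example: week 3 is a 2.0x promo week.
true_baseline = [100, 110, 105, 95]
actuals       = [100, 110, 210, 95]   # promo doubles week 3's volume
offset_pct    = 10                    # your methodology's uniform bias

est_baseline = [b * (1 + offset_pct / 100) for b in true_baseline]

true_inc  = sum(actuals) - sum(true_baseline)   # Σactual − Σtrue baseline
est_inc   = sum(actuals) - sum(est_baseline)    # Σactual − Σestimated baseline
inc_ratio = est_inc / sum(actuals) * 100        # share attributed to the promo
mape      = abs(offset_pct)                     # uniform offset ⇒ MAPE = |offset|

print(true_inc, round(est_inc), round(inc_ratio, 1), mape)
```

Even in this toy case the multiplier shows up: a 10% baseline bias turns a true incremental of 105 into a reported 64, a ~39% error, consistent with the 3-5× rule-of-thumb.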

Read Est. vs True Incremental first — that gap is the cost of the methodology bias, denominated in units the retailer will push back on. Then read MAPE / Baseline Quality — that's the one-word verdict you can take into a review. Finally, check the PATHOLOGICAL alert — if it's firing, the baseline is broken and downstream ROI calculations are unusable; fix the baseline methodology BEFORE presenting any promo ROI number. The whole pipeline of TPO math (SoV, Bridge, Performance Grid) is downstream of this — a broken baseline poisons everything.

5.5 5 common mistakes to avoid

Diagnostic patterns that catch most misuse of this calculator in practice.

  1. Mistake 1: Treating the baseline as a neutral technical detail rather than a commercial lever
    Symptom: JBP ROI arguments circle endlessly between manufacturer and retailer; both sides produce technically-correct numbers from incompatible baselines.
    Fix: Baseline methodology is NEVER neutral — it determines who wins the ROI argument by construction. The fix is to align methodology with the retailer BEFORE the promo cycle (typically as part of the annual JBP), not AFTER when incentives have hardened. The questions to agree on: (a) what statistical method (moving average, regression, STL/Prophet decomposition, control-store Difference-in-Differences), (b) what comparison period (trailing 13 weeks / trailing 52 weeks / same-period-last-year), (c) what seasonal adjustment, (d) what treatment of trend, (e) what handling of distribution changes.
  2. Mistake 2: Confusing MAPE (baseline error magnitude) with baseline bias direction
    Symptom: A 10% MAPE gets treated as 'small error, doesn't matter' when in fact the direction (over vs under) flips the sign of every downstream ROI claim.
    Fix: MAPE measures the MAGNITUDE of the error, not the DIRECTION. A +10% bias (over-estimated baseline) understates incrementality — by ~65% at this simulator's defaults — making promos look too expensive. A −10% bias (under-estimated baseline) overstates incrementality by the same amount — making promos look too cheap. Both are 10% MAPE; both are deeply misleading in opposite directions. Always report BOTH direction (over/under/accurate) and magnitude (MAPE) — not just one.
  3. Mistake 3: Assuming a uniform offset model is representative of real baseline errors
    Symptom: Rolling out a new baseline methodology that scores 5% MAPE against a synthetic uniform-offset test, then finding it produces 20%+ MAPE in production on weeks with distribution changes or competitor counter-promos.
    Fix: This sandbox's uniform-offset model is a simplification for teaching. Real baselines face heteroscedastic errors (tighter in non-promo weeks, looser in transition periods, badly wrong when competitors run counter-promos). Test your production baseline methodology against: (a) distribution-change weeks, (b) competitor-counter-promo weeks, (c) cross-category cannibalisation weeks, (d) one-off supply-chain disruption weeks. A method that hits 5% MAPE in the clean periods but 25%+ in the hard periods is a PROBLEMATIC method in production.
  4. Mistake 4: Reporting Est. Incrementality % without reporting the pathological boundary
    Symptom: A BI dashboard shows Est. Incrementality 115% for the last 52 weeks; the business treats it as 'very productive promos' when in fact the baseline is badly under-estimated and >100% incrementality is physically impossible.
    Fix: Any reported incrementality outside 0-100% should automatically trigger a baseline-review, not a downstream ROI read. Build the pathological check INTO your BI layer: if est. incrementality goes <0% or >100% in any reporting period, suppress the ROI number and fire a 'baseline-methodology review needed' alert instead. The tool here shows the mechanism; your production systems should mirror it.
  5. Mistake 5: Running the Source of Volume Decomposer or Promo ROI Calculator without first validating the baseline
    Symptom: The [Source-of-Volume Decomposer](/tools/source-of-volume-decomposer) shows Productivity Ratio 65%; the [Promo ROI Calculator](/tools/promo-roi-calculator) shows +$8,785 net value — but both are built on a baseline that's 15% low (under-estimated, which overstates incrementality), so the REAL productive volume is ~30% lower than reported and the REAL net value is actually negative.
    Fix: Baseline is the upstream input for the entire TPO analytical pipeline. The sequence is: Baseline (this tool) → [Source-of-Volume Decomposer](/tools/source-of-volume-decomposer) → [Promo ROI Calculator](/tools/promo-roi-calculator) → Performance Grid verdict. Each downstream step inherits the baseline error at roughly a 3-5× multiplier. Before defending any SoV / ROI / Performance Grid number, defend the baseline — because "technically correct math on top of a wrong baseline" is the default failure mode in FMCG TPO work.
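Mistake 4's fix — suppress the ROI read and raise a methodology alert whenever incrementality leaves the 0-100% band — can be built into a reporting layer as a small guard. A sketch with illustrative names, not a reference to any particular BI product:

```python
def roi_readout(est_incrementality_pct: float, roi_value: float) -> dict:
    """Gate a promo ROI number behind the pathological-incrementality check."""
    if not (0 <= est_incrementality_pct <= 100):
        # >100% signals an under-estimated baseline; <0% an over-estimated one.
        direction = "under" if est_incrementality_pct > 100 else "over"
        return {
            "roi": None,  # suppress the number rather than publish nonsense
            "alert": f"baseline-methodology review needed: baseline looks {direction}-estimated",
        }
    return {"roi": roi_value, "alert": None}

print(roi_readout(115, 8_785))   # 115% incrementality: ROI suppressed, alert fired
print(roi_readout(4.7, 8_785))   # default scenario: ROI passes through
```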
Related concepts

Go deeper on the theory

Continue with the lessons

Go further inside Trade Promotion Optimization

This calculator is the sandbox slice of Lesson 3: Baseline vs Incremental Volume. Each of the other 7 Trade Promotion Optimization lessons teaches a complementary concept that sharpens how you read the output above.

See Promo Baseline Estimator inside the full lesson

RGM Academy lets you pull every commercial lever yourself inside a senior-practitioner simulator, with the AI RGM Strategist coaching every decision you make.

Claim 50% off — unlock the full lesson

Or sign up free — 12 lessons included