RGM Maturity Diagnostic
Walk twelve questions across six RGM dimensions in roughly five minutes and see where your team's RGM practice actually sits today, which three gaps would move the needle most if closed first, and the specific lessons and tools that close them. Designed for senior commercial teams as a self-assessment ahead of an integrated annual planning cycle.
5.1 Scenario setup
The starting SKU, market, and assumptions the model makes.
You are the new Commercial Director (or RGM lead, or CFO sponsoring an RGM agenda) at a mainstream FMCG / CPG company. The board has asked for an RGM capability assessment within 30 days. You need a structured way to grade your team's practice across the six RGM dimensions, identify the highest-leverage gaps, and bring back a prioritised investment plan that ties each capability gap to a specific upgrade path. The diagnostic answers four questions in five minutes: where are we mature, where are we weak, which gaps are most expensive to leave open, and what is the right first move to close them.
Use the diagnostic as a structured self-assessment across the six RGM dimensions (Strategic Pricing, Price Pack Architecture, Trade Promotion Optimization, Trade Terms, P&L Impact, Integration / Cross-Lever). Identify the maturity band, pinpoint the top three priority gaps (dimensions scoring below 60%), and walk the recommended lessons + tools that close each gap. The outcome is a prioritised capability investment plan grounded in specific, named upgrade points, rather than a generic 'we should be better at RGM' statement.
The diagnostic measures organisational practice, not theoretical knowledge. Each option asks how the team operates today, not how the playbook says they should. The most useful self-assessments come from picking the option that genuinely describes the current month, not the aspirational version of the team.
Two questions per dimension is a deliberate floor, not a ceiling. Twelve questions keeps completion time to roughly five minutes and yields a four-band overall score that is robust to single-answer noise. Adding more questions would lift precision but reduce completion rate; the gap dimensions surface clearly enough at this granularity.
The recommendations engine pulls from existing course assets only. Every lesson, tool, and playbook surfaced as a recommendation is a real public URL on rgmacademy.app or rgmacademy.app/preview. There are no fabricated upgrade paths; the diagnostic is grounded in the same materials a senior RGM director would point a junior team at.
No data is stored or sent. The diagnostic runs entirely in the browser; answers exist only in React state until the page is closed. The URL hydration parameter `?a=2,3,1,4,3,2,4,3,2,3,1,2` lets users share or bookmark a diagnostic without server-side persistence.
The diagnostic is dimension-agnostic, not category-specific. It works for biscuits, beverages, dairy, frozen, beauty, household care, and any other FMCG / CPG category. Specific category nuances (e.g., elasticity ranges that differ for premium-only categories, promo-frequency norms in highly promoted categories) sit downstream in the recommended lessons; the diagnostic itself measures the discipline, not the category-specific number.
5.2 Controls & toggles
Every input the calculator exposes, its range, and what it changes.
| Control | Range | Default | What it changes |
|---|---|---|---|
| Question 1 to 2: Strategic Pricing | 4 weighted options (1 to 4 points each) | Unanswered | Q1 grades how list-price changes are set (cost-plus / margin-target / elasticity-informed / elasticity-modelled with break-even hurdle). Q2 grades the break-even discipline before approval (none / margin coverage / BESC / BESC + Break-Even Elasticity + sensitivity). The lowest-friction recommendations when this dimension is weak: the Price Elasticity Calculator and the Break-Even Calculator. |
| Question 3 to 4: Price Pack Architecture | 4 weighted options (1 to 4 points each) | Unanswered | Q3 grades whether the portfolio has a documented incentive curve (RSP per kg or per litre across pack sizes). Q4 grades pack-role definition (implicit / loose / Entry-Routine-Upsize-Upscale with guardrails / actively managed cross-lever). Most teams have neither in formal documentation; this is a frequent gap. |
| Question 5 to 6: Trade Promotion Optimization | 4 weighted options (1 to 4 points each) | Unanswered | Q5 grades event-level ROI measurement (sell-in only / sell-out volume / sell-out incremental units times incremental margin minus full event cost / above + sub-event decomposition). Q6 grades forward-buying awareness (don't know / above 30% / 15-30% with reduction plan / below 15% with scan-back). Pair: the Promo ROI Calculator + the Promo Baseline Estimator + the Promo Mechanic Selector close most of the gap. |
| Question 7 to 8: Trade Terms | 4 weighted options (1 to 4 points each) | Unanswered | Q7 grades trade-terms structure across customers (one-size-fits-all / size-tiered / size + cost-to-serve / + strategic fit + performance hooks). Q8 grades GTN refresh cadence (annual finance close / quarterly view-only / quarterly with optimisation / monthly with reallocation). Recommended tool: the Gross-to-Net Waterfall. |
| Question 9 to 10: P&L Impact | 4 weighted options (1 to 4 points each) | Unanswered | Q9 grades retailer P&L modelling (no / occasional / routine for top customers / always with dual-view bridges). Q10 grades pre-price-action modelling (no / pass-through only / pass-through + front margin / all three with sensitivity). Recommended tools: Manufacturer P&L Sensitivity + Retailer P&L Simulator. |
| Question 11 to 12: Integration / Cross-Lever | 4 weighted options (1 to 4 points each) | Unanswered | Q11 grades cross-lever coordination (lever-by-lever / year-end sync / joint plan with gates / always coordinated with simulation + measured Coordination Gap). Q12 grades RGM organisation (no function / embedded / dedicated team / C-suite line authority). Recommended tool: the Cross-Lever Impact Simulator. |
5.3 Step-by-step exploration
7-step guided exploration of the scenario.
1. **Run the diagnostic for your team's current state.** Open the tool and answer all twelve questions, picking the option that most accurately describes how the team operates today, not how the playbook says it should. Do not pick the most aspirational option just because it sounds better: the diagnostic is most useful when the answers reflect what the team actually does this month. Allow roughly five minutes on your own, or up to fifteen if you stop to discuss with the team; the progress bar at the top tracks how many of the twelve questions you have answered.
   Expected outcome: Twelve questions answered, progress bar at 100%. The 'Reveal my diagnostic' button enables; click it to scroll to the results panel.
2. **Read the overall maturity band first.** The first card in the results panel shows the overall band (NASCENT, EMERGING, DEVELOPED, BEST-IN-CLASS) along with the % score, a summary of what that band means, and a 'Where to focus next' line. Read these before scrolling further. The band sets the framing for everything below: NASCENT means foundations first; EMERGING means dimension-by-dimension upgrade; DEVELOPED means cross-lever integration; BEST-IN-CLASS means edge cases. The band determines the right kind of next move, not just the right size of move.
   Expected outcome: A clear sense of which kind of upgrade work is most appropriate. A NASCENT band signals that running the Cross-Lever Impact Simulator next would be premature; the right move is foundational lessons. A BEST-IN-CLASS band signals that the standard 5-lever frame may already be mastered and the remaining gaps live in cross-portfolio cannibalisation and competitive war-gaming.
3. **Read the per-dimension radar to see the gap pattern.** The radar chart below the band card shows each of the six dimensions on a 0 to 100% scale. The shape of the radar is more informative than the absolute scores: a balanced shape (all six dimensions clustered in a narrow band) means the team has a similar level of maturity across the board; a skewed shape (one or two dimensions much lower than the others) means the team has been investing unevenly. Skewed shapes are diagnostically the most useful: they point at specific functions that have been under-invested relative to the rest of the team.
   Expected outcome: A visual sense of whether your team is uniformly mature, uniformly weak, or skewed. The per-dimension table below the radar lists each dimension with its % score colour-coded (red below 50%, amber 50 to 75%, green above 75%) so the gap pattern surfaces at a glance.
4. **Walk the top three priority gaps.** The 'Top 3 priority gaps' card lists the three lowest-scoring dimensions (any dimension scoring below 60% qualifies; the card lists at most three). For each gap, the card surfaces the specific lessons, tools, and playbooks from the course that close it. Click through the recommendations one at a time. Each recommendation is annotated with what the asset teaches and why it is the right next move for the named gap. The recommendations are not a generic 'study more RGM' nudge; they are specific URLs anchored to specific gaps.
   Expected outcome: Two or three named priority gaps with three to four named recommendations each. Total reading or running time per gap is roughly 20 to 60 minutes (a free-preview lesson is about 15 minutes; a tool sandbox session is about 5 to 15 minutes). The cumulative upgrade path for a typical EMERGING-band team is roughly 4 to 8 hours of focused work across the priority gaps.
5. **Re-run the diagnostic in 90 days.** Bookmark the URL once all twelve questions are answered: the `?a=` query parameter populates with your answers and lets you reopen the same diagnostic later or share it with a colleague. Re-run the diagnostic in 90 days, having worked through the priority-gap recommendations. The dimensions you addressed should close visibly on the radar; the dimensions you did not touch should hold their position. Compare the radar shapes side by side. The size of the movement reflects how deeply the recommended assets were genuinely worked through, not skimmed.
   Expected outcome: A second radar that you can compare against the first. The dimensions you targeted should show a closing gap; dimensions you did not touch should hold roughly steady. The result is **a board-defensible RGM capability narrative**: 'Over 90 days we closed gaps on three named dimensions, with the underlying lever-by-lever changes attributable to specific course assets any team member can re-run.' That narrative is rare in commercial-team review packs and is itself a strong artefact.
6. **Use the diagnostic as a cross-functional alignment tool.** Bring the diagnostic into your next cross-functional commercial review. Ask each function to answer the two questions in their own dimension: Pricing or RGM answers questions 1 to 2, Category Management answers 3 to 4, Trade Marketing answers 5 to 6, Commercial Finance or Key Account Management answers 7 to 8, Finance answers 9 to 10, and the RGM Director or Commercial Director answers 11 to 12. Compare the answers. The disagreements between functions are usually more diagnostic than the answers themselves: if Pricing thinks the team runs BESC discipline at every price decision but Finance disagrees, the gap is often in the handoff between the two functions, not in either function's individual capability.
   Expected outcome: A cross-functional view of practice maturity that exposes process handoff gaps. Often the highest-leverage upgrade is not adding a tool or a lesson, but tightening a handoff between two existing functions. The diagnostic surfaces this when used as a discussion artefact across the team rather than a single-leader self-assessment.
7. **Decide whether the right next move is internal upgrade or external help.** After running the diagnostic at least once, the answer to 'do we need outside help?' becomes much sharper. NASCENT or EMERGING with three or more red dimensions usually warrants either a structured internal capability programme (the recommended lessons sequenced as a 6 to 12-week curriculum) or an external partner who can compress the timeline. DEVELOPED with one stubborn red dimension is usually internal: the right tool, the right lesson, and a focused 4-week sprint typically close the gap. BEST-IN-CLASS with no reds is the band where external help shifts from foundational to edge-case (competitive war-gaming, cross-portfolio cannibalisation simulation, advanced customer-pool optimisation).
   Expected outcome: A clear answer to the 'build versus buy' question on RGM capability. **The diagnostic is built around real internal upgrade paths first**, rather than pushing toward consulting at every score, because internal capability building is usually the right move. External help is recommended only at specific score patterns where timeline pressure or the edge-case nature of the gap genuinely warrants it.
5.4 Reading the output
Every KPI, the formula behind it, and how to interpret a positive or negative value.
| KPI | Formula | How to read it |
|---|---|---|
| Overall Maturity Band | ((sum of 12 answers - 12) / 36) * 100, mapped to 4 bands at 25/50/75% thresholds | **The high-level frame for what kind of upgrade is appropriate.** NASCENT means foundations work (start with the core lessons); EMERGING means dimension-led upgrade (close the priority gaps one at a time); DEVELOPED means integration work (cross-lever simulation, dual-view P&L); BEST-IN-CLASS means edge cases (competitive, cross-portfolio, customer-pool). The band is more informative than the % score because it points at the right kind of move, not just the size of the move. |
| Per-dimension % score | ((sum of 2 answers in dimension - 2) / 6) * 100 | **Granular maturity by dimension.** The colour coding (red below 50%, amber 50 to 75%, green above 75%) reads at a glance. Below 50% on any single dimension is the strongest priority signal: it usually means the team has either no formal practice or has process discipline but no measurement / tooling underneath. |
| Radar shape | Six dimensions plotted at 0 to 100% on a polar axis | **The pattern across dimensions matters more than the absolute scores.** A balanced shape (all six bunched at similar levels) means the team has invested evenly across the lever set; the upgrade path is to lift the whole shape up. A skewed shape (one or two dimensions far lower than the others) means uneven investment historically; the upgrade path targets the lagging dimensions specifically. Skewed shapes are usually the easier win because the underinvested dimension catches up faster than the already-mature ones. |
| Top 3 priority gaps | Dimensions with score below 60%, ranked by ascending % score, top 3 | **The actionable list.** Each gap surfaces 2 to 4 specific recommendations (lessons + tools + playbooks) anchored to the dimension. The recommendations are real URLs to existing course assets; click-through to start running them. Working through the priority gaps in the order surfaced is the right sequence: the gaps are ranked by absolute score, so the smallest-but-most-painful gaps come first. |
| Recommendations engine | Per-dimension lookup against the course's existing lessons + tools + playbooks | **Every recommendation is a real public URL.** No fabricated upgrade paths. The lessons surfaced are free-preview lessons (paid lessons are not pushed at the diagnostic stage); the tools are 100% free with no auth; the playbooks are public decision guides. The engine is deliberately thin: it picks the 2 to 4 most-relevant assets per dimension rather than dumping the whole course catalogue. When all dimensions clear the 60% threshold, the engine instead points to the Cross-Lever Impact Simulator as the natural next-level surface. |
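The two scoring formulas and the gap-ranking rule in the table above can be combined into a short sketch. This is an illustrative Python rendering of the stated arithmetic, assuming the 25/50/75% band thresholds are lower-inclusive (the table does not specify inclusivity); the function names are hypothetical:

```python
# Dimension order follows the question pairing: Q1-2, Q3-4, ..., Q11-12.
DIMENSIONS = [
    "Strategic Pricing", "Price Pack Architecture",
    "Trade Promotion Optimization", "Trade Terms",
    "P&L Impact", "Integration / Cross-Lever",
]


def overall_pct(answers):
    # Overall maturity: ((sum of 12 answers - 12) / 36) * 100.
    # Twelve answers, each worth 1-4 points, so the raw range is 12-48.
    return (sum(answers) - 12) / 36 * 100


def band(pct):
    # Four bands at 25/50/75% thresholds (lower-inclusive is an assumption).
    for cutoff, name in [(75, "BEST-IN-CLASS"), (50, "DEVELOPED"), (25, "EMERGING")]:
        if pct >= cutoff:
            return name
    return "NASCENT"


def dimension_scores(answers):
    # Per-dimension: ((sum of the 2 answers in the dimension - 2) / 6) * 100.
    return {dim: (answers[2 * i] + answers[2 * i + 1] - 2) / 6 * 100
            for i, dim in enumerate(DIMENSIONS)}


def priority_gaps(scores, threshold=60.0, top=3):
    # Dimensions below 60%, ranked by ascending % score, at most three listed.
    gaps = [(dim, pct) for dim, pct in scores.items() if pct < threshold]
    return sorted(gaps, key=lambda g: g[1])[:top]


# The sample answer string from the hydration example: ?a=2,3,1,4,3,2,4,3,2,3,1,2
answers = [2, 3, 1, 4, 3, 2, 4, 3, 2, 3, 1, 2]
pct = overall_pct(answers)  # sum = 30, so (30 - 12) / 36 * 100 = 50.0
print(band(pct))            # DEVELOPED (exactly at the 50% boundary, under this assumption)
print(priority_gaps(dimension_scores(answers)))
```

For this sample, Integration / Cross-Lever scores lowest (1 + 2 gives roughly 17%) and tops the priority-gap list, which matches the document's point that the gap ranking is by ascending absolute score.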
Read the results from the top down: band first, then radar shape, then per-dimension scores, then priority gaps, then recommendations. The band sets the framing; the radar shows the pattern; the priority gaps surface the specific upgrade points. The recommendations turn the diagnostic from an assessment into an action list.
The diagnostic deliberately does not output a numerical 'overall score' as the headline (it outputs the band first). Numerical scores invite false precision; bands invite the right kind of conversation about what to do next. A NASCENT team with a 23% score and a NASCENT team with a 7% score should both run the same kind of next move (foundational lessons), so collapsing both to NASCENT is more useful than separating them by 16 percentage points.
The URL hydration parameter `?a=` lets you bookmark a diagnostic or share it with a colleague (e.g., to compare answers across functions in a cross-functional session). The diagnostic is dimension-agnostic, so it works across categories; the recommendations engine is course-asset-anchored, so the upgrade paths surfaced are always the same materials a senior RGM director would point a junior team at.
5.5 5 common mistakes to avoid
Diagnostic patterns that catch most misuse of this calculator in practice.
- **Mistake 1: Picking the most aspirational answer rather than the most accurate.**
  Symptom: The diagnostic landed at DEVELOPED (62%) on the first run, with all dimensions in the green or amber band. The team felt good but could not name a single specific RGM artefact (BESC threshold, incentive curve refresh, scan-back funding mechanic, dual-view P&L bridge) they had actually shipped in the last quarter. A more grounded re-run landed at EMERGING (41%) with three red dimensions, which actually matched the team's lived experience.
  Fix: **Pick the option that describes how the team operates this month, not how the playbook says it should.** The fastest sanity check: for each question, ask 'when did we last actually do this?' If the answer is 'we should' or 'we plan to' rather than a specific recent example, downgrade by one option. The diagnostic is most useful when the answers reflect what the team is genuinely doing right now, because an inflated score leads to misallocated capability investment.
- **Mistake 2: Treating the % score as the headline rather than the band.**
  Symptom: The team improved from 47% to 53% over a quarter and reported it as 'progress'. The actual band moved from EMERGING to DEVELOPED, which is a structural shift in what the right next move is (from dimension-led upgrade to cross-lever integration), but the team kept running the same EMERGING-band playbook. Three months of cross-lever work would have been more valuable than continuing to chip at single-dimension scores.
  Fix: **Read the band first, the % second.** The band tells you what kind of next move is right; the % score is granular signal within the band. A 3% lift inside a band rarely changes the right next move; a band change is a structural signal. Use the band to set the next 90-day plan, not the absolute score.
- **Mistake 3: Ignoring radar skew and focusing only on the lowest dimension.**
  Symptom: The team scored Pricing at 30% (red) and the other five dimensions at 60 to 70% (amber). They invested heavily in pricing capability over six months and brought Pricing up to 65%. The net overall score lift was only a few percentage points, because the gain on one dimension was diluted across the other five, which stagnated during the same period.
  Fix: **The radar shape matters more than any single dimension's score.** A skewed radar where one dimension is much lower than the others is usually the easier win, but only if the rest of the team's discipline holds. While the underinvested dimension catches up, keep the others on a maintenance cadence. Use the diagnostic in 90-day cycles to track radar shape, not just absolute scores.
- **Mistake 4: Running the diagnostic as a single-leader self-assessment when cross-functional disagreement is the real signal.**
  Symptom: The Commercial Director ran the diagnostic alone and answered each question from their corner-office perspective. Pricing scored 75% (DEVELOPED). When the actual Pricing Manager ran the same questions independently, Pricing scored 35% (EMERGING). The 40-percentage-point gap was the diagnostic; neither absolute score was 'right'. The handoff between the two roles was the real upgrade target.
  Fix: **Run the diagnostic per-function in a cross-functional session.** Ask each function to answer the two questions in their own dimension; aggregate after. Disagreements between functions on the same dimension are usually the most diagnostic signal: they point at process-handoff gaps rather than function-level capability gaps. The fix is often a clearer interface between two functions, not a new tool or lesson for either function in isolation.
- **Mistake 5: Stopping at the diagnostic without committing to the recommendations.**
  Symptom: The team ran the diagnostic, agreed the priority gaps were correct, bookmarked the page, and never opened a single recommended lesson or tool. Six months later the diagnostic was re-run and showed identical scores. The diagnostic surfaced the right gaps but the team did not work through the upgrade paths.
  Fix: **Convert the diagnostic into a 90-day capability plan immediately.** For each priority gap, schedule the recommended assets into the team's calendar: free-preview lessons as 30-minute slots, tool runs as 15-minute slots, playbook reads as 20-minute slots. Aim for 4 to 8 hours of focused work over the 90 days, distributed across the priority gaps. Re-run the diagnostic at day 90 to validate the lift. The diagnostic is most useful when paired with a follow-up diagnostic; a one-shot run yields self-knowledge but not capability uplift.
Go deeper on the theory
- Integration Lab · The Five RGM Levers
- Integration Lab · Cross-Lever P&L Sensitivity
- P&L Impact Lab · Manufacturer P&L Sensitivity
- P&L Impact Lab · Retailer P&L
- Price Pack Architecture · OBPPC Framework
- Trade Promotion Optimization · Promotion ROI
- Trade Terms · Joint Business Plan (JBP)
- Pricing · Price Elasticity of Demand
Continue with the lessons: go further inside Cross-Lever Integration
This calculator is the sandbox slice of Lesson 7: RGM Maturity Diagnostic. Each of the other 6 Cross-Lever Integration lessons teaches a complementary concept that sharpens how you read the output above.
- Cross-Lever Integration · Lesson 1 (free preview): Gross-to-Net Waterfall (Integration). The Gross-to-Net waterfall as a single chart that shows how Pricing, PPA, TPO, Terms, and Mix all interact.
- Cross-Lever Integration · Lesson 2 (free preview): P&L Sensitivity. Price moves your profit about 3 times as much as volume does at typical FMCG margins, and why that matters for every decision.
- Cross-Lever Integration · Lesson 3 (sign up to unlock): Volume, Price, and Mix Decomposition. Breaking your sales growth into three drivers: how much you sold, what you charged, and what mix of products people bought.
- Cross-Lever Integration · Lesson 4 (sign up to unlock): SONA and Profit Pool. Working out which lever earned the growth, and what it cost the rest of your portfolio to get it.
- Cross-Lever Integration · Lesson 5 (sign up to unlock): Mix Management. How the shape of your portfolio quietly drives your margin, even when nothing else has changed.
- Cross-Lever Integration · Lesson 6 (sign up to unlock): Cross-Lever Integration. The capstone: pulling all five levers at once on a single scenario, and reading the integrated answer end to end.
See RGM Maturity Diagnostic inside the full lesson
RGM Academy lets you pull every commercial lever yourself inside a senior-practitioner simulator, with the AI RGM Strategist coaching every decision you make.