Free tool · no login required

JBP Readiness Score

Answer twelve questions across six dimensions of JBP practice in roughly five minutes and see whether the draft Joint Business Plan in front of you is genuinely signable today, where the top three gaps sit ahead of the senior strategy summit, and which course lessons and tools will close each of them. The diagnostic is built as the desktop complement to the cross-functional pre-JBP review the team runs the week before sign-off.

Updated 30 April 2026 · Extracted from the Integration Lab module, lesson 8: JBP Readiness Score
Scenario walkthrough


5.1 Scenario setup

The starting SKU, market, and assumptions the model makes.

You are the Key Account Director or Customer Marketing Lead drafting the next-cycle Joint Business Plan for a top-3 retailer customer. The signing window is six weeks out. Internally, the team has been building the calendar, the activation plan, the assortment commitments, and the trade-investment ask in parallel workstreams. The senior strategy summit is in four weeks. The CFO wants to see a credible JBP-readiness read before another £30M of trade investment goes on the table; the retailer is pressing for the draft to land earlier than usual; and the cross-functional team is unsure whether the JBP is genuinely signable or held together by goodwill from previous cycles.

The diagnostic is the desktop complement to the senior strategy summit. It surfaces, in five minutes, which of the six JBP dimensions the team has under control and which carry structural gaps that will damage trust in execution. Six dimensions, two questions per dimension, four weighted answer options each. Roughly five minutes on your own, or up to fifteen if you want to talk it through with the cross-functional team.

Your objective

Use the diagnostic to surface the two or three highest-leverage gaps in the draft JBP, walk the recommended lessons and tools that close each gap, and land at a READY (51 to 75%) or BEST IN CLASS (76 to 100%) band before the JBP signing window. The output is a one-page action list aligned to course assets, with a shareable URL the cross-functional team can re-run.

Key assumptions
  • The diagnostic is a manufacturer-side read of how complete and coherent the draft JBP looks today. It does not model the retailer's separate read of the same JBP. The retailer-side reciprocity question is surfaced indirectly through the Strategic Alignment and Governance dimensions but not modelled directly.

  • Weighted answer options grade practice patterns common across mainstream FMCG / CPG, not industry-specific frameworks. Each option's text describes a recognisable practice; the diagnostic only ranks them on a 1 to 4 maturity axis.

  • Bands are diagnostic conventions, not factual claims: 0 to 25% NOT READY, 26 to 50% NEEDS WORK, 51 to 75% READY, 76 to 100% BEST IN CLASS. Per-dimension scores normalise the same way (both questions at weight 3 read 66.7%, both at weight 4 read 100%, both at weight 1 read 0%).

  • The recommendations engine surfaces existing course assets only. When a dimension scores below 60%, the relevant lessons + tools (drawn from RGM Academy's registered surface) appear with one-line descriptions of what each closes. No fabricated upgrade paths.

  • No data is stored or sent anywhere. The diagnostic is computed entirely in the browser. The shareable URL contains only the 12 answer digits in ?a= and uses window.history.replaceState so no fetch ever leaves the device.
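The shareable-URL behaviour described in the assumptions above can be sketched as two pure helpers. Only the format comes from the page itself (twelve comma-separated digits, each 1 to 4, written in place with `history.replaceState`); the function names are illustrative, not the tool's actual identifiers.

```javascript
// Illustrative encode/decode for the ?a= parameter; names are
// assumptions, only the 12-digit comma-separated format is documented.
function encodeAnswers(answers) {
  // `answers` holds the 12 selected option weights, each 1..4.
  if (answers.length !== 12 || answers.some((a) => a < 1 || a > 4)) {
    throw new Error("expected 12 answers, each between 1 and 4");
  }
  return answers.join(",");
}

function decodeAnswers(param) {
  // Returns the 12 answers, or null if the string is malformed.
  const digits = param.split(",").map(Number);
  return digits.length === 12 && digits.every((d) => Number.isInteger(d) && d >= 1 && d <= 4)
    ? digits
    : null;
}

// In the browser the tool rewrites the URL in place, so no request
// ever leaves the device:
//   history.replaceState(null, "", `?a=${encodeAnswers(answers)}`);
```

Because the state lives entirely in the query string, "sharing the diagnostic" is just sharing a URL; anyone opening it can have their answers restored by `decodeAnswers`.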

5.2 Controls & toggles

Every input the calculator exposes, its range, and what it changes.

Every control follows the same shape: two questions per dimension, each answered on a 1 to 4 weight scale, and unanswered until you select an option. What each control changes:

  • Strategic Alignment: Q1 looks at whether the JBP growth targets are jointly set with the retailer or just presented to them, and Q2 looks at whether the category role inside the retailer's wider strategy is documented and refreshed each year. The pattern that shows up most often when this dimension reads low is a JBP assembled internally and handed across the table as a fait accompli, where the retailer signs but never feels ownership of the targets.

  • Activation Plan Rigor: Q3 looks at whether the promotional calendar carries per-event incrementality forecasts and back-tested ROI benchmarks, and Q4 looks at whether the new-product launch pipeline is fully integrated into the JBP rather than discussed in parallel. Teams that score low here typically have a calendar of events with no rationale per event plus an NPD pipeline managed outside the JBP cadence, so the activation plan and the growth ambition end up unrelated to each other.

  • Distribution and Assortment: Q5 looks at whether SKU-level distribution commitments live inside the JBP, and Q6 looks at whether the assortment-rationalisation conversation is handled there rather than drifting ad-hoc through the year. Teams that read low on this dimension usually discuss distribution in headline terms only, and tail SKUs accumulate cycle after cycle because the cleanup never lands as a binding plan with named delists.

  • Performance Measurement: Q7 looks at whether KPIs are jointly agreed (sell-out, share, distribution, shopper KPIs) rather than internal-only, and Q8 looks at how often the JBP is actually reviewed against those KPIs. The familiar low-score pattern is internal sell-in numbers reviewed annually with no shopper-side measurement at all, which leaves neither side with a current read on whether the JBP is working in-market.

  • Governance and Relationship: Q9 looks at whether named owners exist on both sides of every workstream with an agreed escalation path, and Q10 looks at how often senior leaders meet senior leaders to discuss JBP delivery. Teams that read low usually have named owners on the manufacturer side only and a single transactional senior touchpoint a year, so when something needs to escalate mid-cycle there is no defined route and the delivery drifts.

  • Investment and ROI Discipline: Q11 looks at what share of trade investment is conditional on retailer performance versus paid as entitlement, and Q12 looks at how rigorously JBP ROI is reviewed after the cycle ends. The pattern that shows up most often is mostly-unconditional investment paid against soft criteria with no formal post-cycle ROI review at all, which means the team commits to similar trade dollars each year without ever knowing what the previous year returned.
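The control surface described in section 5.2 reduces to a small data shape. A minimal sketch, with key names invented for illustration; only the six labels, the two-questions-per-dimension structure, and the 1 to 4 weights come from the page:

```javascript
// Illustrative shape of the control set; `key` values are assumptions,
// the labels and weight range are quoted from the controls table.
const DIMENSIONS = [
  { key: "strategicAlignment", label: "Strategic Alignment" },
  { key: "activationRigor",    label: "Activation Plan Rigor" },
  { key: "distribution",       label: "Distribution and Assortment" },
  { key: "measurement",        label: "Performance Measurement" },
  { key: "governance",         label: "Governance and Relationship" },
  { key: "investment",         label: "Investment and ROI Discipline" },
];

// Two questions per dimension; each question offers four options
// weighted 1 (least mature) to 4 (most mature), unanswered by default.
const QUESTIONS_PER_DIMENSION = 2;
const OPTION_WEIGHTS = [1, 2, 3, 4];
const TOTAL_QUESTIONS = DIMENSIONS.length * QUESTIONS_PER_DIMENSION; // 12
```

This ordering also explains the `?a=` parameter: the twelve digits read in dimension order, two per dimension.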
5.3 Step-by-step exploration

7-step guided exploration of the scenario.

  1. Read the questions before answering

    Skim all twelve questions in one pass before answering any of them. The questions are deliberately ordered by dimension (Strategic Alignment, Activation Plan Rigor, Distribution and Assortment, Performance Measurement, Governance and Relationship, Investment and ROI Discipline) so you can see the spread of what the diagnostic is grading. Each question has four answer options ranked from least to most mature. Pick the option that genuinely describes how the draft JBP looks today, not the option that sounds best.

    Expected outcome: You arrive at the answering phase with a working mental model of the diagnostic's surface. The top of the page shows the empty progress bar (0 of 12 answered). The 'Reveal my JBP readiness' button is disabled until all twelve questions are answered.
  2. Answer all twelve questions, picking the option that describes today

    Walk through the twelve questions in order. For each, select the radio option whose text best describes the draft JBP's current state. The option that 'should' be true is not the right answer; the option that genuinely describes today is. The diagnostic only reads correctly if the inputs reflect the draft as it actually sits today. The progress bar updates after every click, and the URL ?a= parameter populates automatically once all twelve are answered, so you can bookmark or share the diagnostic.

    Expected outcome: Progress bar reaches 12 of 12 answered. The 'Reveal my JBP readiness' button enables. The browser URL has populated `?a=` with twelve comma-separated digits (each 1 to 4) so you can share the link with the cross-functional team.
  3. Reveal the diagnostic and read the overall band

    Click 'Reveal my JBP readiness'. The page scrolls smoothly down to the results panel. The first card is the overall readiness band card, colour-coded to the band: red NOT READY (0 to 25%), amber NEEDS WORK (26 to 50%), cyan READY (51 to 75%), or emerald BEST IN CLASS (76 to 100%). The card shows the band name, the percent score, the band range, a one-paragraph summary of what that band means in commercial reality, and a one-paragraph 'where to focus next' recommendation.

    Expected outcome: Overall band card visible with colour-coded headline. The band tells you whether the JBP is signable on its own merits today (READY or above) or whether structural work is required before sign-off (NOT READY or NEEDS WORK). Both cases are useful; the first goes into integration mode, the second into foundations mode.
  4. Read the per-dimension radar to see the gap pattern

    Below the overall band is the per-dimension radar chart. Six dimensions plotted on a 0 to 100% scale: Strategic Alignment, Activation Plan Rigor, Distribution and Assortment, Performance Measurement, Governance and Relationship, Investment and ROI Discipline. The radar shape is more diagnostic than any single dimension reading. A symmetrical hexagon at 60% is a different commercial situation than a 90% on three dimensions and 30% on the other three; the second case is a JBP held together by half the team while the other half cannot deliver.

    Expected outcome: Radar chart visible with the six dimensions labelled at the corners. A per-dimension table below the radar shows the exact percent reading on each dimension, colour-coded red below 50%, amber 50 to 75%, emerald 75% and above. The shape tells you whether the JBP is balanced (regular hexagon), lopsided (some dimensions much stronger than others), or thin all-round (foundational work needed everywhere).
  5. Read the top three priority gaps and click through the recommended assets

    Below the radar, the top three priority gaps panel surfaces dimensions scoring below 60%, ranked by gap size (lowest first). Each gap surfaces specific lessons + tools from the course registry that close that exact gap. The recommendations are not generic; each entry has a one-line description of what the lesson or tool teaches and why it matters for this gap specifically. Click through the highest-priority gap's recommendations first; bookmark the rest for the cross-functional pre-JBP review.

    Expected outcome: Up to three priority-gap cards visible, each with a colour-coded left border matching the dimension's accent colour. Each card lists 2 to 3 recommended assets with a TYPE badge (LESSON, TOOL, PLAYBOOK), the asset title as a clickable link, and a one-line description. Clicking any link navigates to the corresponding /preview/[lesson] or /tools/[slug] page in the same tab.
  6. Build the one-page action list from the priority gaps

    Open a fresh document. For each priority gap, write three lines: the dimension name and the percent score, the named action that closes the gap (drawn from the recommendations), and the workstream owner on the JBP team responsible for executing the action before the signing window. Three lines per gap, three gaps, nine lines total. Close with a target line: 'Aim to land at READY (51% or above) within the next 4 weeks; re-run the diagnostic at the senior strategy summit and again at sign-off.'

    Expected outcome: A nine-line action list mapped to course assets and workstream owners. The list is the single artefact you take into the cross-functional pre-JBP review. The diagnostic URL (with `?a=` populated) goes at the top of the document so the team can re-run the diagnostic on demand.
  7. Share the URL with the cross-functional team

    Copy the URL from the address bar (the ?a= parameter is now populated). Share it in the JBP working-group channel and ask each cross-functional partner (Pricing, Trade Marketing, Customer Marketing, Finance, Supply Chain) to run the diagnostic against their own read of the JBP. The disagreements between functions usually surface the workstream where the JBP will under-deliver in execution, and bringing the four or five diagnostics into the senior strategy summit means the radar overlays make the gap conversation faster than any deck.

    Expected outcome: Cross-functional team has the URL; each partner runs the diagnostic in their own browser; the four or five different radar shapes go into the senior strategy summit as a single agenda artefact. The summit conversation moves from 'what is the JBP we want' to 'where does the JBP actually fall short and what closes each gap'.
5.4 Reading the output

Every KPI, the formula behind it, and how to interpret the reading.

  • Overall Readiness Band. Formula: sum of the 12 weighted answers (12 to 48 raw), normalised to 0 to 100% via (raw - 12) / 36 * 100, then mapped to NOT READY (0 to 25), NEEDS WORK (26 to 50), READY (51 to 75), BEST IN CLASS (76 to 100). **The headline number for the senior strategy summit.** A NEEDS WORK or NOT READY band at 4 weeks out is a signal that the JBP needs structural work before sign-off; READY is the gate-pass level; BEST IN CLASS means the JBP is operating as an integrated system and the next move is cross-portfolio or competitive war-gaming. The diagnostic conventions here are pedagogical, not factual; the interpretation is in the directional shift you see when running it across cycles.

  • Per-dimension percentages (radar). Formula: sum of the two weighted answers per dimension (2 to 8 raw), normalised to 0 to 100% via (raw - 2) / 6 * 100. **The shape that tells you how to plan the next four weeks.** A symmetric hexagon at 60% means the team needs to lift every dimension a notch; a lopsided shape means a few dimensions need urgent attention while others can be left until the next cycle. Common shape patterns: 'foundational' (all dimensions in red or amber), 'lopsided governance' (Strategic and Governance dimensions weak, others strong; the JBP is technically defensible but politically fragile), 'lopsided investment' (Investment and ROI dimensions weak, others strong; the JBP commits trade dollars without measurement infrastructure to defend the spend).

  • Top 3 priority gaps. Formula: dimensions ranked ascending by percent score; take any below 60% and slice the top 3. **The four-week action plan, surfaced in code.** Each gap maps to specific course assets that close it. The 60% threshold is conservative; teams already at 60% on a dimension still have room to lift but the diagnostic prioritises dimensions in the red and amber bands. If all six dimensions read above 60%, no priority-gap panel renders; the diagnostic instead surfaces the Cross-Lever Impact Simulator as the natural integration step.

  • Recommendations per gap. Source: curated lesson + tool + playbook list per dimension, drawn from registered course assets only (no fabrications). **The materials a senior RGM director would point a junior team at.** Each recommendation has a one-line description of what the asset teaches and why it matters for that specific gap. Recommendations are not generic; they are the most direct path to closing the named gap. Bookmark them, walk through them, and re-run the diagnostic in two weeks to confirm the gap has closed.

  • Shareable URL (?a=). Format: 12 comma-separated digits, each 1 to 4, in OPT_TERMS-equivalent question order. **The artefact for the cross-functional pre-JBP review.** Copy the URL after completing the diagnostic and share it in the working-group channel so each cross-functional partner can run the diagnostic against their own read. The asymmetric reads between functions usually surface the workstream where the JBP will under-deliver. The URL contains no PII and survives normal browser-history cleanup; for privacy-strict environments, paste the digit string into a working-group document instead.
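The scoring and gap-ranking rules above can be sketched in a few pure functions. The formulas and band ranges are quoted from the table; the function names are illustrative, and how the tool rounds or compares fractional percentages at band edges is an assumption:

```javascript
// Overall score: 12 answers weighted 1..4, raw 12..48, normalised to 0..100%.
function overallPercent(answers) {
  const raw = answers.reduce((sum, a) => sum + a, 0);
  return ((raw - 12) / 36) * 100;
}

// Band mapping per the quoted ranges; boundary handling is an assumption.
function band(pct) {
  if (pct <= 25) return "NOT READY";
  if (pct <= 50) return "NEEDS WORK";
  if (pct <= 75) return "READY";
  return "BEST IN CLASS";
}

// Per-dimension score: two answers, raw 2..8, normalised via (raw - 2) / 6.
function dimensionPercent(q1, q2) {
  return ((q1 + q2 - 2) / 6) * 100;
}

// Priority gaps: dimensions under 60%, lowest first, top three surfaced.
function topGaps(dimPercents) {
  return dimPercents
    .filter((d) => d.pct < 60)
    .sort((a, b) => a.pct - b.pct)
    .slice(0, 3);
}
```

Note how the sanity checks from the assumptions section fall out directly: two weight-3 answers give (6 - 2) / 6 = 66.7%, two weight-4 answers give 100%, two weight-1 answers give 0%.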

Read the diagnostic in three layers. The overall band answers the binary question every committee asks first: is this JBP signable today, and if not, how far off is it? NOT READY and NEEDS WORK are foundational-work signals, READY is the gate-pass level, and BEST IN CLASS is the integration-led signal. The per-dimension radar answers the planning question of where the work happens, so a symmetric shape calls for a balanced lift while a lopsided shape calls for surgical action on the weakest two or three dimensions. The priority gaps and recommendations answer the execution question of what specifically each gap needs and which named course assets close it.

The diagnostic is at its most useful when run multiple times. Run it once today against the draft JBP as it actually sits in front of you. Run it again imagining the retailer's read on the same JBP, because the asymmetry between the two reads is usually more diagnostic than either reading alone. Run it a third time imagining the version of the JBP you want to sign at the senior strategy summit, because the gap between today's read and the target read is the four-week capability plan.

The diagnostic is built around real upgrade paths rather than a sales funnel: the recommendations engine surfaces existing course assets only and does not propose anything the user cannot click into and use immediately. NOT READY teams get foundational lessons (Trade Terms 1, TPO 1) and the most-leveraged tools (Promo ROI Calculator, Trade Terms Optimizer); NEEDS WORK and READY teams get the integration tools (Cross-Lever Impact Simulator, dual-view P&L); BEST IN CLASS teams get the edge-case simulators that handle scenarios the standard frame misses. No fabricated upgrade paths, no consulting-first push.
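The band-to-asset routing in the paragraph above amounts to a plain lookup. The asset names are quoted from the text; the function shape itself is an assumption about how the recommendations engine might be organised:

```javascript
// Illustrative band-to-asset routing; asset names are quoted from the
// page, the lookup structure is an assumption.
function recommendedTrack(bandName) {
  switch (bandName) {
    case "NOT READY":
      // Foundational lessons plus the most-leveraged tools.
      return ["Trade Terms 1", "TPO 1", "Promo ROI Calculator", "Trade Terms Optimizer"];
    case "NEEDS WORK":
    case "READY":
      // Integration tools for teams past the foundations.
      return ["Cross-Lever Impact Simulator", "dual-view P&L"];
    case "BEST IN CLASS":
      // Edge-case simulators for scenarios the standard frame misses.
      return ["edge-case simulators"];
    default:
      return [];
  }
}
```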

5.5 Five common mistakes to avoid

The diagnostic patterns behind most misuse of this calculator in practice.

  1. Mistake 1: Picking the answer that sounds best instead of the answer that describes today
    Symptom: The diagnostic returns a READY (60 to 75%) band but the JBP visibly under-delivers in execution two months later. Reviewing the answers in retrospect, the team picked the second-most-mature option on most questions because the most-mature option felt 'aspirational'. The band reading was inflated by a polite-but-wrong answer pattern.
    Fix: **The diagnostic only reads correctly if the inputs reflect the JBP as it actually sits today.** Pick the option that genuinely describes how the JBP works today, including the workstreams that are running on goodwill from previous cycles. The 'Where to focus next' recommendation is calibrated to the actual band, not to the polite band; an inflated band points the team at integration work when foundations work is what is actually missing. If in doubt between two options, pick the lower one; the diagnostic surfaces gaps a too-high reading would hide.
  2. Mistake 2: Running the diagnostic only once, only on your own desk
    Symptom: The diagnostic returns a clean band on the manufacturer side, but the retailer-side review of the same JBP is a different conversation. The senior strategy summit goes off-script because the retailer was reading dimensions the manufacturer rated highly as actually weak. The diagnostic was useful but its value got left on the table.
    Fix: **Run the diagnostic at least three times before the senior strategy summit:** once for the manufacturer's working read of today, once imagining the retailer answering on their side, and once imagining the version of the JBP you want to sign at the summit. The differences between those three radar shapes are usually more diagnostic than any one reading on its own. Share the URL with the cross-functional team and ask each partner to add their own read, because the asymmetric pattern across functions and across the manufacturer-retailer divide is where the structural work lives.
  3. Mistake 3: Ignoring the radar shape and only reading the overall band
    Symptom: The team got a 65% READY band and signed off on the JBP. Closer inspection of the radar shape would have shown 90% on Strategic Alignment and Performance Measurement but 35% on Governance and Investment. The JBP shipped, the trade investment landed, and the post-cycle ROI review never happened because the Investment-and-ROI dimension's structural gap was never closed before sign-off.
    Fix: **The shape is the diagnosis; the band is the headline.** A 65% band can come from many different shapes, and the shapes have very different commercial implications. A balanced 65% is a 'lift everything one notch' situation; a lopsided 65% is a 'fix the two weak dimensions before the cycle starts' situation. Always read the radar before reading the recommendations, and structure the action list around the shape, not around the band.
  4. Mistake 4: Using the diagnostic as a pass / fail gate without acting on the recommendations
    Symptom: The diagnostic returned NEEDS WORK on the first run, which generated a frank conversation in the cross-functional team. The recommendations were noted but never actioned. Six weeks later the JBP shipped at the same NEEDS WORK level it was diagnosed at. The diagnostic served as documentation rather than as a capability-building tool.
    Fix: **The diagnostic is the input, the recommendations are the work.** Each priority gap maps to specific course assets that close it. Walk the recommendations one by one; assign owners on the JBP team to each one; re-run the diagnostic in two weeks to confirm the gap has closed. A diagnostic that shows the same band three times in a row is a signal that the team is using it for documentation, not capability-building. Re-frame: the band reading is a baseline, not a verdict.
  5. Mistake 5: Treating the conditional-investment question as a procurement-only conversation
    Symptom: The Investment-and-ROI Discipline dimension scores low because most trade investment is paid as entitlement. The team responds by tightening procurement rules: more paperwork, more compliance checks, more delayed payments. The retailer relationship deteriorates; the conditional spend rises but the ROI review is still missing. The dimension's score moves from 35% to 45%; the JBP is technically more defensible but no easier to execute.
    Fix: **Conditional investment is a planning conversation, not a procurement conversation.** Walk the Trade Terms Optimizer (the conditional-versus-unconditional split is exactly what that tool models) and read the four sentinels: Restructuring Progress, Conditional Savings Depth, Face Value Discipline, Phase Alignment. Bring the result into the Investment-and-ROI dimension as a planning artefact; the conditionality is built into the JBP from the planning stage, not enforced after the fact. The downstream ROI review (Q12) gets easier when the conditionality (Q11) is embedded in the plan, not procedurally enforced.

Go further inside Cross-Lever Integration

This calculator is the sandbox slice of Lesson 8: JBP Readiness Score. Each of the other 6 Cross-Lever Integration lessons teaches a complementary concept that sharpens how you read the output above.

See JBP Readiness Score inside the full lesson

RGM Academy lets you pull every commercial lever yourself inside a senior-practitioner simulator, with the AI RGM Strategist coaching every decision you make.

Claim 50% off and unlock the full lesson

Or sign up free, 12 lessons included