
Prioritization Techniques – an inventory

Last October I created an extensive mind map of factors to consider when allocating resources or prioritizing. Back then, I signed off by saying “the next step would be to import the mind-map into a spreadsheet that supports assigning weights for each mid-level branch” and then numerically score each option considered in a prioritization exercise. That thought still has merit; today, however, I circled back to the topic of prioritization to inventory the many techniques and frameworks in use.

By far the best existing inventory I found is 20 Product Prioritization Techniques: A Map and Guided Tour, from the Folding Burritos consultancy. I really like their “periodic table” organization of the techniques along axes for ‘internal versus external input’ and ‘qualitative versus quantitative’. The twenty techniques follow, in the order presented in the article.

  • The Kano Model — A four-quadrant model where the horizontal axis is the extent that the product functionality is complete (achievement) and the vertical axis is the extent that the customer is satisfied. Three lines traverse through the quadrants for ‘must have’ (expected quality, do these first), ‘wants’ (normal quality, do these next), and ‘delighters’ (exciting quality, do these last). Included in the book Product Roadmaps Relaunched.
    • A simplified variant of Kano is the Importance–Satisfaction Quadrant — First, survey customers and end-users regarding (1) how they rate the importance of each functional area, and (2) what is their current satisfaction with that area. Then, plot these results in a 2×2 matrix. The top priorities are those that have high importance and low satisfaction. This Survature blog provides further details via an example.
    • Also see: How to Visualize Importance vs. Satisfaction Survey Responses.
  • Quality Function Deployment (QFD) — Balances the ‘Voice of the Company’ with the ‘Voice of the Customer’.
  • Opportunity Scoring, from Anthony Ulwick.
  • Buy a Feature — Each feature is assigned a “price” based on complexity, effort, the actual cost to develop, etc. Then participants receive a “play money” budget to spend as they like on the items being prioritized. In his Scrum Product Owner book, Bob Galen describes a variation, “Bidding on a Feature”, where the facilitator plays the auctioneer. In my past, I have also used a hybrid of these techniques where items were not assigned a price, but estimates of complexity and duration were noted for each item. Participants then spend whatever money they care to on each item.
  • Story Mapping — Product Roadmaps Relaunched calls this ‘Critical Path’.
  • MoSCoW — Must have, Should have, Could have, Won’t have. Included in Product Roadmaps Relaunched.
  • Prune the Product Tree — “The analogy in the game is that the product is a tree that will be pruned to our liking.”
  • Speed Boat — “The boat is the product and anchors are the features that customers feel frustrated with.”
  • Financial Analysis — Net Present Value (NPV), Internal Rate of Return (IRR), and Discounted Payback Period.
  • Ian McAllister’s Framework — Define and prioritize themes for the product, generate project ideas, estimate project impact and costs, and then prioritize projects within themes.
  • Impact on Business Goal — Makes use of the AARRR metrics: Acquisition, Activation, Retention, Revenue, Referral. In a recent Mind the Product podcast Holly Donohue described how early in her prioritization journey she used ranking of expected Business Outcomes defined simply by value to the customer and value to the business. Later Holly evolved to a Value Matrix defined by: Increasing Revenue, Protecting Revenue, Reducing Costs, and Avoiding Costs.
  • Value vs. Risk — “The goal is to look for a balanced approach, going for High-risk/High-value first, Low-risk/High-value second and finally Low-risk/Low-value. High-risk/Low-value items are best avoided.”
  • Value vs. Cost, also ‘Value vs. Effort’ [Included in Product Roadmaps Relaunched], or ‘Value vs. Complexity’. — This simple technique is what I most often used years ago when I last worked as a business analyst.
  • Scorecard — This is what I had in mind as I set aside my work in October. Product Roadmaps Relaunched suggests the three-factor scorecard of Desirability, Feasibility, Viability that I mentioned in my October blog. The authors extend the three factors to divide by Effort, as in the bullet above, and multiply by a ‘confidence’ factor.
  • Theme Screening — Another specific scorecard. Define evaluation criteria and pick a baseline for each feature or theme, then score the remaining criteria as either ‘+’, ‘0’, or ‘−’.
  • Classification Ranking — Essentially just the team’s opinion collected as a 1 to 5 or 1 to 10 scale, or via T-shirt sizing (extra-small, small, medium, large, extra-large), which is mentioned in Product Roadmaps Relaunched.
  • Systemico Model — Assigns ‘User Goals’ and ‘User Engagement’ (Core, Use, Engage, and Explore) for each user story (feature) and then uses a visual mapping analogous to Story Mapping to cluster product versions (releases).
  • Stacked Ranking — A variation on Classification Ranking. Again, entirely driven by opinion.
  • Feature Buckets — Assigns options to one of four buckets: Metric Movers, Customer Requests, Delight, and Strategic. “A well-balanced product release should typically include features from all these buckets.”
  • KJ Method — A facilitation method as follows:
    • Determine a focus question
    • Organize a cross-functional group to participate
    • Put opinions (or data) onto Sticky Notes
    • Put Sticky Notes on the wall
    • Group similar items
    • Name each group
    • Vote for the most important groups
    • Rank the most important groups
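
The Scorecard arithmetic above can be sketched in a few lines of code. This is an illustrative assumption of how the factors combine — the factor names and the decision to sum Desirability, Feasibility, and Viability before dividing by Effort are my reading of the Product Roadmaps Relaunched description, not a standard formula:

```python
# Hypothetical scorecard sketch: sum three 1-10 factors, divide by effort,
# scale by a 0-1 confidence factor. All inputs here are made-up examples.

def scorecard(desirability, feasibility, viability, effort, confidence):
    """Higher score = higher priority; confidence discounts uncertain estimates."""
    return (desirability + feasibility + viability) / effort * confidence

options = {
    "Option A": scorecard(8, 6, 7, 5, 0.8),
    "Option B": scorecard(5, 9, 6, 2, 0.5),
}
ranked = sorted(options, key=options.get, reverse=True)
```

Here Option B wins despite lower total factor scores, because its much lower effort dominates — which is exactly the behavior the divide-by-Effort variant is meant to produce.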

Additional popular techniques not covered in the above include:

  • RICE — A variation on Value vs Effort defined as ‘(Reach x Impact x Confidence) divided by Effort’. This Intercom blog covers the technique in detail.
  • Cost of Delay — “A way of communicating the impact of time on the outcomes we hope to achieve.” “Cost of Delay combines urgency and value – two things that humans are not very good at distinguishing between. To make decisions, we need to understand not just how valuable something is, but how urgent it is.” (Black Swan Farming)
  • Weighted Shortest Job First (WSJF) — “A prioritization model used to sequence jobs (e.g., Features, Capabilities, and Epics) to produce maximum economic benefit. In the [Scaled Agile] SAFe [Framework], WSJF is estimated as the Cost of Delay (CoD) divided by job size.” (Scaled Agile)
  • CD3, Cost of Delay Divided by Duration — “Maximizes the value delivered in a given time period when you have limited capacity.” “CD3 is one specific form of the ‘WSJF’ queuing method. We could choose weight by other things (risk, stakeholder importance, length of time waiting, etc). In the case of CD3 we are weighting by Cost of Delay.” (Black Swan Farming)
  • Qualitative Cost of Delay — “While a qualitative approach would lack many of the benefits of a quantitative approach, perhaps it would at least get people thinking about Cost of Delay?” “Cost of Delay has two essential ingredients: Value and Urgency…These ingredients are NOT additive…, but rather they act on each other.” For the analysis, create a 3×3 matrix with Value on the vertical axis and Urgency on the horizontal and use some intentionally very qualitative labels for each. For qualitative value use ‘Killer’, ‘Bonus’, and ‘Meh’. For qualitative urgency use ‘ASAP’, ‘Soon’, and ‘Whenever’. The nine cells are then assigned Very High, High, Medium, Low, and Very Low priorities. (Black Swan Farming)
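
The quantitative formulas in this list reduce to simple arithmetic, sketched below. The function names and all the example inputs are my own illustrations; RICE is ‘(Reach × Impact × Confidence) ÷ Effort’ as defined in the Intercom article, and CD3 is Cost of Delay divided by Duration per Black Swan Farming:

```python
# Sketches of RICE and CD3/WSJF scoring. Example values are invented
# for illustration only.

def rice(reach, impact, confidence, effort):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

def cd3(cost_of_delay, duration):
    """WSJF weighted by Cost of Delay (CD3): CoD / Duration."""
    return cost_of_delay / duration

backlog = [
    ("Feature X", rice(reach=500, impact=2, confidence=0.8, effort=4)),
    ("Feature Y", rice(reach=200, impact=3, confidence=1.0, effort=2)),
]
backlog.sort(key=lambda item: item[1], reverse=True)  # highest score first
```

Either score is then used the same way: sort the backlog descending and work from the top, re-scoring as estimates change.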

I attempted to be exhaustive with this inventory. Are there popular techniques that I have missed? Please reply in comments.

Further reading: Weighted Goals: A Prioritization Framework, Just one of the many possibilities for deciding what to do next, by Sari Harrison (May 2019)

Published in Business Analysis Resources

