If everything is “high risk”, you’ve learned nothing

Your risk register shows twelve high risks, seven medium risks, and three low risks. The board meeting is in an hour. You’ve done the work, followed the process, assigned the scores. Everything looks rigorous.

Then someone asks: “So what should we actually do about these?”

And you realise the scores tell you nothing.

This is the fundamental problem with risk scoring methodology as it’s commonly practised. We’ve built elaborate systems for calculating risk values while losing sight of what those values are supposed to achieve. The scores look precise, feel objective, and fail completely to inform decisions.

The illusion of mathematical rigour

Most risk scoring follows a formula. Likelihood times impact, or some variation. Maybe you use a five-point scale, maybe it’s four-by-four, maybe you’ve customised it for your organisation. The details vary, but the pattern is consistent: quantify likelihood, quantify impact, multiply them together, get a score.
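As a sketch, the whole formula amounts to a few lines. The five-point scales and the band cut-offs below are illustrative assumptions, not a standard:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact score on two five-point scales."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a raw score to a band. These cut-offs are assumed for illustration."""
    if score > 10:
        return "high"
    if score > 5:
        return "medium"
    return "low"

print(risk_band(risk_score(3, 4)))  # a 3x4 risk lands in the "high" band
```

The multiplication is trivial. Everything contentious happens in choosing the inputs and the cut-offs.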

It feels mathematical. It feels defensible. It lets you point to a number and say “this is a high risk” with apparent objectivity.

But scratch the surface and the rigour disappears. Your likelihood scores are guesses dressed up as measurements. Your impact scores mix incompatible things – financial cost, reputational damage, operational disruption – into single numbers. And your multiplication assumes that likelihood and impact relate to each other in ways they often don’t.

The formula creates precision that doesn’t exist. You’re not measuring risk, you’re creating the appearance of measurement. And that appearance becomes dangerous when people start treating the scores as facts rather than structured guesses.

When everything becomes “high risk”

Here’s what happens in practice. Someone identifies a risk. You assess likelihood – could happen, probably won’t, but possible. Call it a three out of five. You assess impact – would be bad, definitely disruptive, maybe serious. Call it a four out of five.

Three times four is twelve. On your twenty-five-point scale, that’s high risk. Add it to the register, flag it for the board, start discussing mitigations.

Then someone identifies another risk. Different nature, different context, but the scoring logic leads to another twelve. High risk. Then another. Then another.

Six months later, your register has fifteen high risks. The category has lost all meaning. “High risk” no longer indicates priority. It just means “we followed the formula and got a number above ten.”
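You can see the collapse by brute force. Assuming two five-point scales and a “high” cut-off of anything above ten, a large share of the grid lands in the same band, and multiplication can only ever produce a handful of distinct values:

```python
from itertools import product

# Every combination of likelihood (1-5) and impact (1-5).
scores = [l * i for l, i in product(range(1, 6), repeat=2)]

# Count how many combinations clear an assumed "high" threshold of >10.
high = [s for s in scores if s > 10]
print(len(scores), len(set(scores)), len(high))  # 25 combinations, 14 distinct scores, 8 "high"
```

Eight of twenty-five cells share one label, and five-point inputs can never produce more than fourteen distinct products. The scale has less resolution than it appears to.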

This isn’t a failing of the people doing the scoring. It’s a structural problem with treating scoring as if it reveals priority rather than requiring judgment about priority. When you let the formula decide what matters, you end up with meaningless risk scores that nobody can use to make actual decisions.

The hidden assumptions in scoring systems

Every risk scoring methodology contains assumptions. Often they’re not explicit. Sometimes they’re not even conscious. But they shape every score you calculate.

Assumptions like: that likelihood and impact are independent variables. That a risk with medium likelihood and high impact is equivalent to one with high likelihood and medium impact. That impacts across different domains can be meaningfully compared on the same scale. That five-point scales capture meaningful distinctions.

None of these assumptions are obviously true. Many are obviously false for specific risks. But they’re baked into the methodology, so they shape your scores whether they make sense or not.

The problem compounds when different people apply the methodology. One person’s “likely” is another’s “possible.” One person’s “major impact” is another’s “significant.” You add calibration sessions and definitions to standardise the scoring, but you’re still building precision on top of fundamentally subjective judgments.

And that’s fine – as long as you remember the scores are structured opinions, not measurements. But when the methodology becomes gospel, people forget what the numbers actually represent.

Qualitative versus quantitative risk assessment

Some organisations respond to scoring problems by getting more quantitative. Instead of likelihood scales, use probabilities. Instead of impact scales, use financial figures. Calculate expected loss, prioritise by value at risk.
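The quantitative version replaces scales with estimates of probability and monetary loss. A minimal sketch, with figures invented purely for illustration:

```python
# Hypothetical risks: annual probability and financial impact in pounds.
risks = {
    "server outage":   {"p": 0.20, "loss": 50_000},
    "minor data loss": {"p": 0.60, "loss": 5_000},
}

# Expected loss = probability x financial impact. Only meaningful when
# the probabilities come from real data rather than gut feel.
expected = {name: r["p"] * r["loss"] for name, r in risks.items()}
for name, el in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected loss £{el:,.0f}")
```

The arithmetic is easy. The hard part is that most business risks offer no defensible numbers to feed into it.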

This works when you have actual data. If you’re a bank assessing credit risk, you can build statistical models. If you’re an insurer pricing policies, you have actuarial tables. If you’re managing a portfolio of identical transactions, you can learn from frequency.

But most business risks don’t come with data. You don’t have historical frequencies for “key developer quits during critical project.” You don’t have loss distributions for “regulatory requirements change unexpectedly.” You’re not managing thousands of similar events where statistics become meaningful.

You’re making judgments about one-off situations where the uncertainty is genuine. Trying to force quantitative methods onto qualitative situations doesn’t make the assessment more rigorous. It just hides the judgment inside false precision.

Understanding risk properly means being honest about what you know and what you’re guessing. Sometimes that means quantitative analysis. More often, for the kind of risks SMEs and growing businesses face, it means structured qualitative judgment.

What actually helps you prioritise

If scoring doesn’t reliably indicate priority, what does?

Start by being clear about what prioritisation means. It’s not “which risks have the highest scores.” It’s “which uncertainties warrant action now, given everything else we’re dealing with.”

That requires context. A risk that scores high might not warrant immediate action if it’s stable, understood, and already partially mitigated. A risk that scores medium might demand urgent attention if it’s accelerating, poorly understood, or connected to critical operations.

The failure isn’t the maths. It’s that the maths is being asked to make decisions it was never designed to support.

Prioritising risk means asking:

Does this uncertainty affect something we’ve agreed matters? Not in theory, but in practice. Would this actually disrupt what we’re trying to achieve or protect?

Could we do something meaningful about it? Not “are there possible controls” but “are there actions we could take that would be proportionate and effective?”

Is now the right time to act? Sometimes the answer is yes. Sometimes it’s “not yet.” Sometimes it’s “we should monitor but not mitigate.”

What else are we prioritising? Risk decisions compete with other decisions for time, attention, and resources. Priority means “this matters more than other things we could be doing.”

These questions require judgment. They require conversation. They require understanding your specific context. And they produce better prioritisation than any scoring formula because they’re actually designed to support decisions.

When scores actively mislead

The worst case isn’t when scoring fails to help. It’s when scoring actively misleads.

This happens when people trust the scores more than their judgment. When someone raises a concern but the methodology scores it medium, so it gets deprioritised. When a risk scores high but doesn’t actually warrant action, so you waste effort mitigating something that doesn’t matter. When the scoring process becomes a way to avoid making difficult decisions rather than supporting them.

It also happens when scoring creates perverse incentives. If high-scoring risks get attention and resources, people learn to inflate their assessments. If medium-scoring risks get ignored, people learn to either over-score or not bother reporting. The methodology stops measuring risk and starts measuring who’s good at gaming the system.

Risk management tools can help by making relationships between risks and actual operations explicit. When you can see that a risk connects to a critical asset or key dependency, priority becomes clearer. But the tool can’t do the thinking for you. It can make the context visible so your judgment is better informed.

What to do instead

If you’re going to score risks – and sometimes there are good reasons to – do it with eyes open about what the scores mean and what they don’t.

Scores can help with initial filtering. They can help structure conversations. They can provide a starting point for prioritisation. They can create shared language for discussing uncertainty.

But they can’t replace judgment about what matters. They can’t decide priorities for you. And they definitely can’t tell you what to do.

Use scoring as input to decision-making, not as a substitute for it. Calculate the scores if it helps. Then ask: given this score, given our context, given what we know about this risk, does it warrant action? What action? When? Why?

Have the conversation. Make the decision. Document what you decided and why – not just what score you calculated.

And if your scoring methodology produces results that contradict your informed judgment about priority, trust your judgment. The methodology is a tool. You’re the one making the decision.

Making scoring useful

If you want risk scoring that actually informs decisions:

Keep it simple. Three levels work better than five. High-medium-low forces meaningful distinctions. Complex scales create false precision.

Define what the scores mean. Not abstractly, but in terms of decisions. What does “high risk” imply about action? If it doesn’t imply anything, the category is useless.
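One way to make this concrete is to tie each band to a decision rule rather than a description. The mapping below is a hypothetical example, not a standard:

```python
# Each band implies an action and a review cadence. Illustrative values only:
# the point is that every band must answer "so what do we do?"
band_actions = {
    "high":   {"action": "mitigate now",          "review": "monthly"},
    "medium": {"action": "monitor, plan response", "review": "quarterly"},
    "low":    {"action": "accept, no mitigation",  "review": "annually"},
}

def implied_action(band: str) -> str:
    """A band that maps to no action is a useless category."""
    return band_actions[band]["action"]

print(implied_action("high"))  # -> mitigate now
```

If you cannot write the right-hand side of that mapping, the band on the left is just decoration.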

Review scores in context. Don’t just look at the number. Look at what’s being scored, why it matters, what you could do about it, and whether that’s proportionate.

Update scores when understanding changes. Not on a schedule, but when you learn something new about the risk or your context shifts. The score should reflect current understanding.

Remember scores are opinions. Structured, informed opinions, but opinions nonetheless. Don’t treat them as measurements.

The goal isn’t to eliminate scoring. It’s to use scoring in service of understanding rather than as a replacement for it. Score risks if it helps you think about them more clearly. But don’t let the scores do your thinking for you.

When everything scores high, you haven’t identified your priorities. You’ve just demonstrated that your methodology can’t distinguish between them. The answer isn’t to recalibrate the formula. It’s to recognise that prioritising risk requires judgment that no formula can provide.

If you come away from risk assessment with a list of scores but no clear view of what you should actually do differently, you’ve done the work but missed the point. Risk management exists to support decisions, not to generate numbers. Start there, and scoring becomes useful rather than theatrical.
