Your training dashboard shows 96% completion. Your audit report notes that mandatory security training is complete for the year. Your board paper confirms that staff have received appropriate awareness education.
None of this tells you whether anyone can actually do the thing you trained them to do.
Attendance is evidence of presence. Competence is evidence of capability. We’ve spent years confusing the two, and that confusion is why security training doesn’t work the way we need it to.
The measurement gap
When you measure attendance, you’re measuring whether someone logged in, clicked through the content, and submitted the quiz answers. You’re measuring participation in a process.
When you measure competence, you’re measuring whether someone can recognise a realistic scenario, make the correct decision, and explain why that decision matters. You’re measuring the outcome the process was supposed to produce.
These are not the same thing. In most fields, we understand this instinctively. A surgeon who attended lectures on appendectomy isn’t competent to perform one. A pilot who completed ground school isn’t competent to fly. We require demonstrated capability under realistic conditions.
In security training, we’ve lowered the bar to attendance and called it sufficient. It isn’t.
What completion actually proves
A training completion record proves that someone was present for information delivery. It might prove they could answer multiple-choice questions immediately after receiving that information. In the best case, it proves short-term recall of simplified examples.
It doesn’t prove they can apply that knowledge in their actual work. It doesn’t prove they’ll remember it next week, let alone next month. It doesn’t prove they can perform under pressure, distraction, or uncertainty.
If your training includes a quiz, you’re testing recognition: can someone identify the correct answer when it’s presented alongside obviously wrong alternatives? That’s a very different cognitive task from recall: can someone retrieve the correct answer from memory when prompted by a realistic situation?
Recognition is easier. Recognition produces higher scores. Recognition makes the training look more effective. And recognition is almost useless as a predictor of real-world behaviour.
The evidence you actually need
If you want to know whether training has reduced risk, you need evidence that answers three questions:
Can people recognise the situation when it occurs in their actual work?
Can they execute the correct response under realistic time and attention constraints?
Does this capability persist beyond the immediate training window?
Attendance doesn’t answer any of these questions. Neither does quiz performance. What you need is evidence of behaviour in context.
This evidence can take different forms. Realistic simulations that match actual work scenarios. Observed performance in controlled conditions. Longitudinal testing that checks retention over time.
What it can’t be is a completion timestamp. That measures the wrong thing entirely.
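To make the contrast concrete, here is a minimal sketch of the two metrics side by side. It assumes a hypothetical data shape: a completion record per person (what the LMS gives you) and a log of realistic simulation results over time (the behavioural evidence described above). The names and schema are illustrative, not a real tool’s API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SimulationResult:
    person: str
    test_date: date
    passed: bool  # e.g. reported the simulated phish rather than clicking it

def completion_rate(completed: dict[str, bool]) -> float:
    """The dashboard metric: share of staff who finished the course."""
    return sum(completed.values()) / len(completed)

def competence_rate(results: list[SimulationResult],
                    staff: list[str], since: date) -> float:
    """Share of staff who passed at least one realistic simulation
    after `since` -- evidence of behaviour in context, not attendance."""
    demonstrated = {r.person for r in results
                    if r.test_date >= since and r.passed}
    return len(demonstrated & set(staff)) / len(staff)

# Hypothetical data: everyone completed the course...
completed = {"ana": True, "ben": True, "cho": True, "dev": True}
# ...but later simulations tell a different story.
results = [
    SimulationResult("ana", date(2024, 6, 1), True),
    SimulationResult("ben", date(2024, 6, 1), False),
    SimulationResult("cho", date(2024, 6, 1), True),
    # dev was never tested under realistic conditions at all
]

print(completion_rate(completed))                                    # 1.0
print(competence_rate(results, list(completed), date(2024, 1, 1)))   # 0.5
```

The same population produces a 100% completion rate and a 50% competence rate: the gap between those two numbers is exactly the false assurance the rest of this piece is about.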
Why we measure the wrong thing
We measure attendance because it’s easy to track and easy to report. LMS platforms are built around completion metrics. Audit frameworks ask whether training occurred. Compliance requirements specify that staff must complete training annually.
None of this requires evidence of competence because competence is harder to measure. It requires more sophisticated assessment. It produces messier data. It can’t be reduced to a single percentage in a dashboard.
So we optimise for what we can easily measure, and we convince ourselves that completion equals effectiveness. We’re measuring our effort, not our outcome.
This is understandable but inadequate. If the goal is risk reduction, and risk reduction requires behavioural change, then you need evidence of behaviour. Anything less is faith, not assurance.
The cost of false assurance
When you mistake attendance for competence, you create false assurance. Leadership believes the risk is managed because the training is complete. Auditors are satisfied because the requirement is met. The organisation relaxes because the metrics look good.
Then an incident occurs. Someone clicks a phishing link. Someone misconfigures a system. Someone bypasses a control. And the investigation reveals that yes, they completed the training. They passed the quiz. They have the certificate.
They just didn’t have the competence to execute the behaviour when it mattered.
This is worse than no training at all. No training creates no false assurance. You know the gap exists. Ineffective training that produces completion metrics creates the illusion that the gap is closed. You’re blind to the actual risk.
Moving to evidence-based training
The shift from attendance to competence starts with asking different questions.
Not “Did everyone complete the training?” but “Can people perform the behaviour?”
Not “What’s our completion rate?” but “What evidence do we have of capability?”
Not “Who hasn’t logged in yet?” but “Who can demonstrate understanding under realistic conditions?”
These questions are harder to answer. They require better assessment design. They produce more complex evidence. They can’t be reduced to a single dashboard metric.
But they measure what actually reduces risk. This is why awareness alone doesn’t reduce risk, and why behaviour has to be evidenced, not assumed.