This isn’t a training problem. It’s a design problem.
When someone bypasses a security control, the instinct is to assume they didn’t understand the risk, didn’t read the policy, or simply didn’t care. This assumption is almost always wrong.
People bypass controls because the control is misaligned with how their work actually happens. They’re not ignoring the policy. They’re solving for something the policy doesn’t account for: getting their actual job done.
This matters because if you misdiagnose the problem as non-compliance, you’ll respond with more training, stricter enforcement, or clearer communication. None of these will work, because the problem isn’t that people don’t understand. It’s that the system is misdesigned.
How work actually happens
Policy is written for idealised conditions. It assumes people have time to follow every step, that processes are linear, that systems work as intended, that competing priorities don’t exist.
Work happens under deadline pressure, with incomplete information, using systems that crash or require workarounds, while balancing multiple conflicting requirements from different stakeholders.
When policy and reality collide, people optimise for delivery. They find the path that gets the work done, even if it’s not the path the policy describes. Not because they’re reckless, but because delivery is what they’re measured on, what their role requires, what their manager expects.
Security controls that add friction to this process without adding obvious value get bypassed. Not maliciously. Practically.
The shadow IT problem
Shadow IT is the classic example of this optimisation. People need to share large files. The approved system is slow, unreliable, or has file size restrictions that make it useless. So they use a consumer file-sharing service instead.
They know it’s not approved. They know there’s a policy. They also know that the approved option doesn’t work for their actual needs. So they optimise.
The security response is often to block the unauthorised service, send a reminder about policy, and threaten consequences for non-compliance. This addresses the symptom. It doesn’t address the problem.
The problem is that the approved system is inadequate. People are bypassing it because it fails to meet a legitimate work requirement. Until that requirement is met by an approved method, people will continue to find workarounds.
You can’t train people out of needing to do their jobs.
The password complexity trap
Password policies are another frequent example. Requirements for 12+ characters, uppercase, lowercase, numbers, symbols, no dictionary words, changed every 90 days, no reuse of the previous 24 passwords.
These requirements are intended to improve security. In practice, they encourage people to write passwords down, reuse variations across systems, or create patterns that meet the technical requirements while being easy to remember—and easy to crack.
People aren’t ignoring the policy. They’re optimising around an impossible cognitive load. They have dozens of accounts, each with different requirements, each expiring on different schedules. The policy demands more than human memory can reliably handle.
So they create a system that works for them. It might be a written list. It might be a pattern-based approach. It might be reusing one strong password everywhere and accepting the risk.
They’ve solved for the problem the policy created: how to access the systems they need while meeting requirements that weren’t designed around realistic human capability.
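The trap is easy to see in code. Here is a minimal sketch of a typical complexity check — the specific rules are illustrative, not any real organisation’s policy — showing that a predictable word-plus-year pattern sails through it on every forced rotation, while a long, memorable passphrase fails:

```python
import re

# A typical "complex" policy: 12+ chars, upper, lower, digit, symbol.
# These rules are illustrative, not any specific organisation's policy.
def meets_policy(pw: str) -> bool:
    return (
        len(pw) >= 12
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"\d", pw) is not None
        and re.search(r"[^A-Za-z0-9]", pw) is not None
    )

# The predictable pattern people actually fall back on: word + year +
# symbol, bumped at every forced 90-day reset.
rotations = [f"Summer{year}!Aa" for year in (2023, 2024, 2025)]
print([meets_policy(pw) for pw in rotations])      # → [True, True, True]

# Meanwhile a long, memorable passphrase fails the same check.
print(meets_policy("correct horse battery staple"))  # → False
```

The check measures character classes, not guessability — which is exactly why the optimisation it provokes is both compliant and weak.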
Awareness doesn’t reduce risk. Behaviour does. And if the behaviour you’re getting isn’t the behaviour you want, the problem is usually the system, not the person.
Operational pressure beats abstract risk
People prioritise what they’re measured on. If your role is measured by delivery speed, customer satisfaction, or meeting deadlines, you will optimise for those outcomes.
Security policies that slow down delivery, add steps to customer interactions, or create delays are working against your measured priorities. You’ll follow them when you can. When you can’t—when the pressure is high, the deadline is real, the customer is waiting—you’ll optimise.
This isn’t a moral failure. It’s a design failure. The policy asks people to make trade-offs between what they’re measured on and what security requires. Most people, most of the time, will prioritise the thing they’re measured on.
If you want different behaviour, you need to change the system. Make the secure path the easy path. Make security controls work with delivery pressures, not against them. Measure security outcomes alongside delivery outcomes.
Telling people to prioritise security while measuring them solely on delivery is setting them up to fail.
The workaround culture
In organisations with rigid controls and inflexible processes, workarounds become institutionalised. Everyone knows how to bypass the system. The workaround becomes the actual process. The official process becomes theatre—something you document for compliance while doing the real work a different way.
This is dangerous because it normalises non-compliance. It creates a culture where bypassing controls is expected, where people assume the official process doesn’t reflect reality, where asking how things really get done produces a different answer than asking how they’re supposed to get done.
Once workaround culture is established, it’s difficult to reverse. People don’t trust that new policies will be realistic. They assume they’ll need to find workarounds. They don’t engage with the policy design process because they expect it to be disconnected from their actual work.
The solution isn’t stricter enforcement. It’s designing policies that work with how work actually happens.
Policy as a starting point, not an endpoint
Good policy should be treated as a hypothesis: we believe this control will reduce risk while allowing necessary work to continue. Then you test that hypothesis.
If people are consistently bypassing the control, that’s data. It tells you the hypothesis was wrong. The control either doesn’t reduce the risk you thought it did, or it creates friction that makes compliance unrealistic.
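One way to make that concrete — a minimal sketch, where the event names and the 25% threshold are assumptions for illustration, not a real telemetry schema — is to measure how often the approved path is actually used:

```python
from collections import Counter

# Hypothetical file-transfer events pulled from logs.
events = [
    "approved_share", "consumer_service", "consumer_service",
    "approved_share", "consumer_service", "email_attachment",
]

counts = Counter(events)
bypass_rate = 1 - counts["approved_share"] / len(events)

# A high bypass rate is design feedback, not a list of offenders.
if bypass_rate > 0.25:
    print(f"{bypass_rate:.0%} of transfers bypass the approved path: "
          "revisit the control, not the people")
```

The point isn’t the tooling; it’s that the metric is read as evidence about the control’s design, not as a disciplinary report.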
This requires humility. It requires accepting that the policy might be wrong, even if it was based on best practice, even if it’s required by a standard, even if it’s what every other organisation does.
If your people can’t comply without breaking the work, the policy needs to change. Not the people.
What redesign looks like
Redesigning policies around actual behaviour means involving the people who do the work. Not just telling them what the policy will be, but understanding how their work happens, what constraints they face, what trade-offs they’re already making.
It means testing controls before rolling them out. Piloting them with real users in real conditions. Asking “can people actually follow this?” and accepting the answer, even if it’s no.
It means designing for realistic human capability. Not perfect memory, unlimited attention, or ideal conditions. Actual people, under actual pressure, doing actual work.
The path of least resistance should be the secure path. If the secure option is slower, harder, or more complicated, people will find alternatives.
The risk perspective
From a risk management perspective, this reframing is critical. If you believe people are non-compliant, you address it with training, communication, and enforcement. If you understand they’re optimising around misaligned systems, you address it with redesign.
The latter approach is more effective. It’s also more respectful. It treats people as capable adults solving real problems, not as weak links who need to be controlled.
Behaviour isn’t a separate risk category. It’s how most other risks either materialise or get prevented. When people bypass controls, it’s a signal. Not of malice or incompetence, but of misalignment between what the system demands and what the work requires.
Moving forward
If your people are routinely bypassing controls, the first question shouldn’t be “how do we enforce compliance?” It should be “what does this tell us about our design?”
Are the controls realistic? Do they account for time pressure, competing priorities, system limitations? Do they work with how work actually happens, or how we wish it happened?
Are we measuring people on delivery while demanding they prioritise security? Are we creating impossible trade-offs and then blaming people for making the rational choice?
The goal isn’t to eliminate all workarounds. It’s to eliminate the workarounds that create risk. Some workarounds are people solving legitimate problems the system doesn’t address. Those need to be incorporated into the official process, not punished.
Security policies should enable work, not obstruct it. When they obstruct it, people will find ways around them. Every time.
People don’t ignore policy. They optimise around it. If you want different optimisation, you need different conditions.
The failure isn’t theirs. It’s yours. And that’s actually good news, because it means you can fix it.

