Someone at a client's company left sensitive contracts on his laptop hard drive, then sent it off for repair. Unencrypted. The kind of thing that makes you wince when you hear it, and makes the person involved want to disappear.
It was a genuine security incident. It needed to be addressed. And nobody quite knew how to address it without turning the person into an example. So it became a quiet thing. Mentioned carefully. Not really debriefed. The lesson stayed locked inside the awkwardness, and the organisation moved on without learning much at all.
This is the moment where security awareness training is supposed to help. And it's the exact moment where it fails.
The Annual Ritual
Most organisations do some version of it. A module. A slide deck. A quiz at the end. Sometimes there's a simulated phishing email that catches a few people out and produces a metric someone can report upward. The training gets completed, the completion rates get logged, and the whole thing resets twelve months later.
The purpose, in theory, is behavioural change. In practice, it's evidence generation. The organisation needs to demonstrate that awareness training happened, because the standard says it should. ISO 27001 requires competence and awareness (clauses 7.2 and 7.3). Auditors want to see records. So records get produced.
But records of completion are not evidence of learning. A 95% completion rate tells you that 95% of people clicked through something. It tells you nothing about whether anyone would behave differently tomorrow. The quiz scores measure short-term recall of whatever was on the slides. Recognising a real threat in the middle of a busy workday is something else entirely.
There's a deeper problem. Annual training treats security awareness as something you top up once a year, like a flu jab. But awareness isn't a state you achieve and maintain. It's a habit. And habits aren't formed in annual two-hour sessions. They're formed through repeated, contextual experience. Through encountering situations, reflecting on them, and adjusting.
The person who left contracts on his laptop didn't need a refresher on data handling policies. He probably knew the policy. Most people do. What he needed was a way to think through what happened, understand the risk in concrete terms, and figure out what he'd do differently. What he got instead was silence, because the organisation had no mechanism for that kind of reflection that didn't involve making him feel exposed.
The training programme, the one that cost real money and real time, had nothing useful to offer at the exact moment it was needed.
Why Instruction Doesn't Stick
The reason most awareness training doesn't produce behavioural change has less to do with content quality and more to do with a mistaken assumption about how people learn.
Instruction tells you what to know. It doesn't help you figure out what to do when the situation doesn't match the textbook. And real security incidents almost never match the textbook. The phishing email in the training module looks nothing like the one that actually fools someone. The data handling scenario is neat and obvious. The real one is messy and ambiguous.
There's a model that does work, and it comes from a domain where the cost of not learning is existential: the military after-action review.
AARs have a specific structure. What was supposed to happen? What actually happened? Why was there a difference? What do we do about it? But the structure isn't what makes them effective. What makes them effective is the conditions under which they happen.
First, they're immediate. Not six months later in a training module. Right after the event, while the experience is still raw and the details are still fresh.
Second, they're contextual. They deal with what actually happened in this specific situation, not a generic scenario someone wrote for a broad audience. The learning is anchored in real experience, which means it has somewhere to stick.
Third, and this is the part most compliance systems miss entirely, they're psychologically safe. In a well-run AAR, rank is suspended. The point is to understand what happened, not to assign blame. The private who spotted the problem speaks with the same authority as the officer who missed it.
That third condition explains why the laptop incident became a quiet thing rather than a learning opportunity. The organisation had no way to create the safety required for honest reflection. The only options were public, which meant shame, or private, which meant silence. So silence won.
This is what's missing from most security training: not better content, but a better container. A phishing simulation tests whether people click. It doesn't help them understand why they clicked, what they were thinking, what cues they missed. There's no after-action review. There's just a score.
A researcher we've been working with on cybersecurity training put it sharply: effective knowledge transfer requires interactive debriefs with relatable, domain-specific mentors. Not generic avatars delivering canned scenarios, but something that understands your context, your role, the specific pressures of your actual work. The reason the generic training module fails isn't that it's poorly designed. It's that it has nothing to do with your actual job.
Training That Disappears Into the Work
So what would awareness training look like if it were designed around the principles that actually produce learning? Immediate, contextual, and safe.
The military solved the immediacy and context problems through structured debriefs built into operations. The learning happens as part of the work, not in a classroom weeks later. The challenge for most organisations is the third element: safety. Creating the conditions where people can be honest about what went wrong without it becoming a career risk.
This is where we think AI agents change the equation.
Consider the laptop incident again. An AI agent, available through a private direct message in Slack or Teams, could have given that employee something the organisation couldn't: a space to work through what happened without an audience. No manager copied in. No incident report with his name on it. Just a conversation.
The agent can walk through the after-action review structure. What were you trying to do? What actually happened? What was the risk? What would you do differently? It can provide specific, contextual information about why unencrypted data on a repair-bound device matters. Not as a lecture, but as part of the dialogue. And it can do this immediately, while the experience is still fresh and the motivation to understand is highest.
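The four questions can be pictured as a small loop. What follows is a sketch, not an implementation: the function and parameter names (`run_debrief`, `answer_fn`) are hypothetical, and `answer_fn` stands in for whatever conversational turn the agent actually takes with the employee.

```python
# A minimal sketch of an agent stepping through the after-action
# review structure and keeping a transcript of the exchange.

AAR_QUESTIONS = [
    "What were you trying to do?",
    "What actually happened?",
    "What was the risk?",
    "What would you do differently?",
]

def run_debrief(answer_fn):
    """Walk through the AAR questions, collecting one answer per question.

    `answer_fn` is a placeholder for the conversational turn: it takes
    a question and returns the employee's reply. In a real agent this
    would be an interactive chat exchange, not a callable.
    """
    transcript = []
    for question in AAR_QUESTIONS:
        reply = answer_fn(question)
        transcript.append({"question": question, "answer": reply})
    return transcript

# Example: scripted answers for the laptop incident.
scripted = iter([
    "Get the laptop repaired quickly.",
    "Sent it out with unencrypted contracts still on the drive.",
    "Anyone handling the device could read client data.",
    "Encrypt the disk, or wipe it before it leaves the building.",
])
debrief = run_debrief(lambda question: next(scripted))
```

The point of the structure is that the agent never skips from "what happened" straight to advice; each question has to be answered before the next one is asked.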
The privacy is the crucial part. The incident still gets tracked. The agent can log the relevant details without identifying the individual. But the learning requires honesty, and honesty requires safety. People don't reflect openly when they're worried about judgement.
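One way to make "tracked but not identified" concrete: replace the person's identity with a salted one-way hash before anything is logged, so the organisation can still see recurring patterns without a name attached. This is an illustrative sketch with hypothetical field names, not a compliance-grade design; real pseudonymisation would need proper salt management and a privacy review.

```python
import hashlib
import time

def log_incident(category, details, user_id, salt="example-salt"):
    """Record an incident for organisational visibility without
    storing the individual's identity.

    The user id is replaced with a salted SHA-256 hash: the same
    person produces the same pseudonym (so repeat patterns are
    visible), but the raw identity never reaches the log.
    """
    pseudonym = hashlib.sha256(
        (salt + ":" + user_id).encode()
    ).hexdigest()[:12]
    return {
        "logged_at": time.time(),
        "category": category,
        "details": details,
        "reporter": pseudonym,  # no raw identity stored
    }

record = log_incident(
    "data-handling",
    "Unencrypted drive sent out for third-party repair",
    user_id="employee-42",
)
```

With a fixed salt the pseudonym is stable across incidents, which is what makes trend reporting possible; rotating the salt would trade that visibility for stronger unlinkability.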
This is the shift from annual training events to continuous micro-learning embedded in daily work. When someone encounters a real situation, the learning happens right there, in context, with an agent that understands the specifics. The training disappears into the work because the work itself becomes the curriculum.
The evidence takes care of itself. Every conversation, every reflection, every behaviour change that follows is captured as a natural byproduct. No separate documentation task. No retrospective form-filling to prove that learning happened. The learning is the evidence.
We're still working out what this looks like in practice. The technology is ready, but the design questions are harder than the engineering ones. How much should the agent prompt versus wait? When does helpful become intrusive? How do you balance individual privacy with organisational visibility? These are questions we're actively exploring with clients and researchers, and we don't have complete answers yet.
The Goal Isn't Better Training
It's less need for training. And eventually, for the most part, no need to pay for it at all.
Consider what organisations actually spend on awareness training. The platform licences. The content subscriptions. The hours lost to annual modules that people click through while checking their email. The simulated phishing campaigns. The reporting on completion rates that prove nothing about capability. It adds up, and it produces remarkably little.
When the environment itself teaches, when incidents become learning moments instead of shameful secrets, when reflection is built into the flow of work rather than bolted on as an annual event, most of that spend becomes unnecessary. Not because awareness doesn't matter, but because it's being cultivated continuously through the same agents that are already handling compliance operations. The awareness training isn't a separate line item anymore. It's a byproduct of a system that's already there, already contextual, already trusted.
The person who left contracts on that laptop didn't have a knowledge problem. He had a reflection problem, and the organisation had a safety problem. Better slides wouldn't have helped. A private conversation with an agent that understood his context might have. And it wouldn't have cost a separate training budget to deliver it.
The question isn't how to make people sit through better training. It's how to build environments where the right behaviour emerges from the work itself, and the training programme quietly becomes redundant.