
The Attack Surface Nobody Models

March 18, 2026 · Ben Visser · 5 min read

Ask most security teams where their biggest risk lives and they'll tell you the same thing: people. Phishing. Misconfiguration. The department that keeps working around the access controls because the approved process is too slow. Humans, in other words, are where risk concentrates.

Then look at the management system that's supposed to track and reduce that risk. You'll find a detailed model of assets, systems, and controls. You'll find documented processes and risk registers. What you won't find is a model of the people: their roles, their track record under security pressure, the relationship between where they sit in the organisation and the incidents that tend to follow.

This isn't an oversight. Connecting roles, incident patterns, control coverage, and organisational context into something a management system could act on simply wasn't feasible until recently. The data existed in separate places. The tools to join it meaningfully didn't. Two things have changed that: AI agents that can operate continuously across organisational systems, and knowledge graphs that give those agents the structure they need to reason well. Together, they make something possible that wasn't before.

What Incidents Know That Policies Don't

When a breach occurs and the investigation goes back through the logs, the pattern is usually there. Access anomalies. Near-misses. Control exceptions. Each one documented at the time, each one closed. What the management system couldn't do was connect them, because it had no structure for connections to exist in.

Incident data is the richest behavioural signal most organisations are already producing. It tells you which departments generate friction with their controls, which roles are structurally more exposed, where the gap between written policy and actual practice is widest. Most compliance systems can't read this signal at the level it becomes useful, because the records sit flat, isolated from each other and from the context that would make them meaningful.

A knowledge graph changes this. The design we've been developing with TalTech researchers (as part of an Estonian grant proposal on human behaviour modelling in security systems) represents roles and teams as nodes, with edges connecting them to controls, risk scenarios, and incident history. Each node carries properties built from what the ISMS has already recorded: incidents reported, access review exceptions logged, training completed, control tests passed or failed. No new data collection. Just structure applied to what already exists.

The node isn't a person. It's a role. The function, the access that comes with it, and the incident patterns associated with it over time. A finance team with a cluster of access exceptions and two near-misses in a quarter looks different in the graph than one with a clean record. Not because anyone is watching individuals, but because the organisation's own documented history, properly connected, tells a story the flat record never could.
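The structure described above can be sketched in a few lines. This is a minimal, illustrative model only: the role names, counts, and control identifiers are invented for the example, and a plain dict-of-dicts stands in for whatever graph store an ISMS would actually use.

```python
# Role-level nodes built from records an ISMS already holds.
# All names and numbers here are hypothetical, for illustration only.
nodes = {
    "finance-team": {"kind": "role", "incidents": 2,
                     "access_exceptions": 5, "near_misses": 2},
    "eng-platform": {"kind": "role", "incidents": 0,
                     "access_exceptions": 0, "near_misses": 0},
    "ctrl-access-review":    {"kind": "control"},
    "risk-privilege-misuse": {"kind": "risk"},
}

# Edges connect roles to the controls and risk scenarios they touch,
# carrying the history the flat records already contained.
edges = [
    ("finance-team", "ctrl-access-review", {"exceptions_last_quarter": 5}),
    ("finance-team", "risk-privilege-misuse", {}),
    ("eng-platform", "ctrl-access-review", {"exceptions_last_quarter": 0}),
]

# The same flat records, now connected and queryable per role.
print(nodes["finance-team"]["access_exceptions"])  # 5
```

Nothing here is new data collection: every property on a node is something the management system was already recording, just attached to a role instead of sitting in an isolated log entry.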

What incidents know is that risk isn't evenly distributed. Some departments generate more friction with their controls. Some patterns recur in ways a reactive, incident-by-incident response will never address. The graph makes this visible, built from information the organisation was already producing but had never joined up.

From Reactive to Anticipatory

The conventional response to a risk concentration is a general intervention: training for everyone, a policy reminder, a company-wide simulation. These aren't wrong; they're just blunt. They go everywhere because the system can't see where the problem actually is. A department where risk has been quietly building for months gets the same response as one with a clean record. The effort is spread evenly across a problem that isn't evenly distributed.

A management system that models behavioural risk can direct interventions where the model says they're needed. A team with a pattern of access-related near-misses gets a targeted access review and a focused simulation rather than a generic awareness programme. A department where control test failure rates have been climbing gets specific attention early, rather than after something goes wrong. The ISMS knows this because it's been tracking signals it was already supposed to track, just connected now rather than isolated.
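The routing from pattern to intervention can be as simple as a lookup, which is part of the point: once the graph has surfaced where the problem is, directing the response is cheap. The pattern names and interventions below are assumptions made up for the sketch, not a real taxonomy.

```python
# Illustrative mapping from an observed risk pattern to targeted actions,
# in place of one blanket, company-wide response. All names are hypothetical.
INTERVENTIONS = {
    "access_near_misses": ["targeted access review",
                           "focused phishing simulation"],
    "rising_control_failures": ["early control re-test",
                                "walkthrough with control owner"],
}

def plan_interventions(observed_patterns):
    """Return targeted actions for the patterns the graph surfaced,
    falling back to a generic refresher for anything unrecognised."""
    return {pattern: INTERVENTIONS.get(pattern,
                                       ["generic awareness refresher"])
            for pattern in observed_patterns}

print(plan_interventions(["access_near_misses"]))
```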

This is where AI agents operating against a structured graph shift from interesting to actually useful. Asking an agent to surface patterns across unstructured incident logs, policy documents, and training records is slow and unreliable; the model is reasoning through noise. An agent connected to a well-maintained knowledge graph is reading a model. The query is fast, the answer is grounded, and continuous monitoring of behavioural risk becomes something an organisation can actually sustain rather than revisit at audit time.
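The kind of grounded query an agent would run against the role graph can be sketched as a simple scoring pass. The field names and the additive scoring rule are assumptions for illustration; a real system would calibrate the score, which is one of the open questions mentioned below.

```python
def high_friction_roles(role_props, threshold=2):
    """Rank role nodes by a simple friction score over their recorded
    history. A stand-in for a grounded graph query, not a real model:
    field names and the scoring rule are illustrative assumptions."""
    scored = [
        (role, props.get("incidents", 0)
             + props.get("near_misses", 0)
             + props.get("access_exceptions", 0))
        for role, props in role_props.items()
    ]
    flagged = [(role, score) for role, score in scored if score >= threshold]
    return sorted(flagged, key=lambda item: -item[1])

# Hypothetical role-node properties, as an ISMS would already have recorded them.
roles = {
    "finance-team": {"incidents": 2, "near_misses": 2, "access_exceptions": 5},
    "eng-platform": {"incidents": 0, "near_misses": 0, "access_exceptions": 0},
}
print(high_friction_roles(roles))  # [('finance-team', 9)]
```

The query is trivial precisely because the structure already exists; the agent is reading a model rather than mining logs.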

The auditor, in this world, gains access to something more useful than an assembled evidence pack. They can query a live model of how the organisation has actually behaved: which areas have patterns worth examining, which controls have the most friction at the human layer, where the management system has been directing its attention and why. The evidence hasn't been collected for the occasion. It's been accumulating as a byproduct of the system doing its job.

Your ISMS knows what controls you have. It should also know where your people are generating risk.

We're still working through what this looks like in practice. The TalTech collaboration is live, and the questions around graph schema, calibration, and human judgement boundaries are open. We're publishing as we go rather than when everything is settled, because the direction feels important enough to share now.

The primary attack surface has always been human behaviour. The tools to model it properly have finally arrived.
