Yesterday I got on a call with someone I'd connected with on LinkedIn. We'd been exchanging messages about risk quantification and AI, the kind of conversation that happens easily enough, and we agreed to jump on a video call. I knew his name was Jack Jones. I knew he worked in risk. I did not know, until he mentioned it partway through the conversation, that he had created FAIR.
FAIR — Factor Analysis of Information Risk — is the framework that changed how enterprises think about cyber risk. It's the reason some organisations can answer "how much could we actually lose?" instead of pointing at a traffic-light risk matrix and hoping for the best. And its creator was sitting across from me on a Google Meet, casually explaining an idea that I'm still turning over in my head this morning.
He said: if frameworks like ISO are the anatomy, FAIR is the physiology.
I'm writing this partly as a thank you to Jack, and partly because I need to think through what that metaphor actually means for the work we're doing. It reframed something I'd been circling for months. Why compliance implementations can feel complete and hollow at the same time. Why organisations can tick every control and still not understand their risk posture. Why the certificate arrives and the nagging feeling doesn't leave.
I want to credit Jack properly for this metaphor, because it's the lens for everything that follows.
The Skeleton Without the Nervous System
ISO 27001 gives you a remarkable inventory of what a secure organisation should have. Access controls. Incident management. Business continuity. Cryptography. The Annex A controls read like a parts list for a well-defended building: walls here, locks there, cameras at each entrance.
Most organisations implement it that way. They go through the controls, determine which apply, implement them, document them. Each control gets its own line in a spreadsheet. Each one is assessed independently: is it in place? Is there evidence? The auditor checks the list. The certificate arrives.
The problem isn't that this is wrong. The controls matter. Having them is better than not having them. The problem is that a parts list doesn't tell you how the machine works.
Consider access control. You've implemented role-based access. You review permissions quarterly. There's a joiner/mover/leaver process. All documented. All evidenced. But what does this actually do to your risk exposure? How much does it reduce the likelihood of a data breach? What happens to your risk posture if the quarterly review slips to six-monthly? What if your access control is flawless but your endpoint protection is weak?
These aren't abstract questions. They're the questions that determine whether your security investment is well-placed or merely well-documented.
A framework tells you what to have. It doesn't tell you how what you have works together. It gives you the anatomy of security without the physiology. And the gap between the two is where most compliance value quietly disappears.
I think most people involved in compliance sense this. You finish the implementation, you get the certificate, and there's a feeling that something is missing. Not a control. Not a document. Understanding. You know what you have. You don't know what it does.
Controls as a Living System
Jack's distinction between anatomy and physiology unlocked something specific for me. FAIR doesn't just quantify risk in the abstract. Through FAIR-CAM, the FAIR Controls Analytics Model, it describes how controls operate as a complex, interconnected system, how they actually affect the risk scenarios they're meant to address. This is the physiology: not just what you have, but how it functions.
When you start treating controls as a system rather than a checklist, something shifts. A control isn't just "implemented" or "not implemented." It has an effect on specific risk scenarios. It interacts with other controls. It has a strength that varies depending on context. And crucially, its value only makes sense in relation to the threat it's addressing and the asset it's protecting.
This is what risk quantification tries to do. Not just list risks with traffic-light ratings, but model how controls actually affect exposure. When you ask "what's the probable frequency of this threat?" and "what's the probable magnitude of loss?" and "how does this specific control change those numbers?", you're doing physiology. You're understanding function, not just cataloguing structure.
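To make that concrete, here is a minimal sketch of what a FAIR-style Monte Carlo looks like under the hood. Everything in it is hypothetical: the triangular distributions stand in for calibrated min / most-likely / max estimates you'd actually elicit from people who know the scenario, and the numbers are illustrative, not drawn from any real model.

```python
import random

def simulate_scenario(n_trials=100_000, seed=42):
    """Monte Carlo sketch of a single FAIR-style risk scenario.

    Hypothetical inputs throughout: loss event frequency (events/year)
    and loss magnitude (per event) use triangular distributions as
    stand-ins for calibrated expert estimates.
    """
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(n_trials):
        # Loss event frequency: min 0.1, most likely 0.5, max 2.0
        # events per year (illustrative estimates).
        lef = rng.triangular(0.1, 2.0, 0.5)
        # Turn the fractional rate into a whole event count for the year.
        events = int(lef) + (1 if rng.random() < lef - int(lef) else 0)
        # Loss magnitude per event: min £50k, most likely £200k, max £2m
        # (again, illustrative).
        loss = sum(rng.triangular(50_000, 2_000_000, 200_000)
                   for _ in range(events))
        annual_losses.append(loss)
    annual_losses.sort()
    return {
        "mean_ale": sum(annual_losses) / n_trials,   # annualised loss exposure
        "p90": annual_losses[int(0.9 * n_trials)],   # 90th-percentile bad year
    }
```

The output isn't a single number but a distribution, which is the point: "probable frequency" and "probable magnitude" combine into a range of annual outcomes you can reason about, rather than a red/amber/green label.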
The traditional objection is that this is too complex for smaller organisations. Risk modelling sounds like something that requires a dedicated team and specialised software. And historically, that was fair. FAIR was born in the enterprise world. The expertise to apply it was scarce, expensive, and entirely out of reach for a 15-person company trying to get through its first certification.
What's changing is that AI can do the analytical heavy lifting. We've been building agent skills that construct FAIR scenarios, walk users through their assumptions, challenge where estimates seem off, and run Monte Carlo simulations. Not to replace judgement, but to make the thinking accessible. The same way a good consultant would sit with you and work through it, except available at any time and without the day rate. Jack's reaction when I described this was encouraging. He called the approach "very smart," which from the person who created the framework felt like meaningful validation.
But here's where it gets interesting, and where I think most automation gets it wrong. The point isn't to generate a quantified risk report and hand it over. That's just replacing one kind of governance theatre with a more sophisticated version. The point is to use the process of quantification as a way of building genuine understanding. You come out of it knowing why your access controls matter more than your physical security, or vice versa. Not because a system told you, but because you reasoned through it.
This is the difference between having a risk register and understanding your risk posture. One is a document. The other is a capability.
From Having Controls to Understanding Risk
When an organisation makes this shift, several things change at once.
The conversation with leadership transforms. Instead of presenting a list of controls and asking for sign-off, you can say: "Our top risk scenario is X. The annualised loss exposure is in this range. These three controls reduce it by roughly this much. Here's where we're still exposed, and here's what closing that gap would cost." That's a conversation a CEO can engage with. It connects security to the language of business decisions: money, probability, trade-offs. This is precisely the problem that led Jack to create FAIR in the first place. His executives at Nationwide Insurance didn't want to hear about controls and vulnerabilities. They wanted to know about loss.
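The arithmetic behind that conversation is simple once the scenario is quantified. A toy sketch, with entirely hypothetical figures, of how a single control's effect might be expressed in leadership terms:

```python
def control_value(base_ale, frequency_reduction, annual_cost):
    """Express a control's estimated effect in business terms.

    All inputs are hypothetical illustrations, not outputs of any
    real model: base_ale is the scenario's annualised loss exposure,
    frequency_reduction the estimated fraction of loss events the
    control prevents, annual_cost what the control costs to run.
    """
    residual_ale = base_ale * (1 - frequency_reduction)
    return {
        "risk_reduction": base_ale - residual_ale,
        "residual_ale": residual_ale,
        "net_benefit": (base_ale - residual_ale) - annual_cost,
    }

# Hypothetical example: a scenario with £600k annualised loss exposure,
# a control estimated to cut event frequency by ~60%, costing £40k/year.
summary = control_value(600_000, 0.60, 40_000)
# risk_reduction ≈ 360_000, residual_ale ≈ 240_000, net_benefit ≈ 320_000
```

That final dictionary is the CEO conversation in miniature: how exposed we are, how much this control buys us, and whether it's worth what it costs.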
Evidence starts to mean something different too. In a control-checklist world, evidence proves you did the thing. In a risk-understood world, evidence shows the thing is working. The shift is from "we have access reviews" to "access reviews caught three inappropriate permissions last quarter, which we estimate reduced our exposure by this amount." The evidence isn't just a compliance artefact. It's operational intelligence.
And then there's the part we're still figuring out: the management system becomes something people actually use between audits. When your risk model is connected to your controls, changes in the business trigger real questions. New product? What risk scenarios does this introduce? New vendor? How does this affect our supply chain exposure? The system stops being a static archive and becomes a reasoning tool.
None of this eliminates the need for the controls themselves. You still need access management and incident response and business continuity. The anatomy still matters. But anatomy in service of physiology, controls in service of risk understanding, is a fundamentally different posture than controls for their own sake.
I got on a call yesterday expecting a good conversation about risk. I didn't expect to walk away with a mental model I'd keep reaching for every time I looked at what we're building. But that's what happens when you stumble into a conversation with the person who actually solved the problem you've been circling.
Jack created FAIR because his executives kept asking a question the industry couldn't answer: how much could we actually lose? Two decades later, most organisations doing ISO 27001 still can't answer it. Not because the methodology doesn't exist, but because it's been locked behind enterprise budgets and specialist expertise.
That's changing. If you're building a management system right now, or maintaining one that feels more like theatre than security, try asking that question seriously. Not "which controls do we have?" but "what do our controls actually do to our exposure?" If you can't answer it, you've got anatomy without physiology. And now you know there's a way to fix that.
Thank you, Jack. The metaphor landed.