Part 1: The Measurement Trap
There's a pattern that repeats across organisations of every size. Something matters, so someone decides to measure it. A dashboard appears. Targets get set. And slowly, invisibly, the thing that mattered starts to change shape.
Goodhart's Law has been quoted in boardrooms for decades: when a measure becomes a target, it ceases to be a good measure. But knowing the trap exists doesn't stop organisations walking into it. If anything, the instinct to measure has intensified. More data, more dashboards, more KPIs. The assumption underneath rarely gets examined: that measurement creates accountability.
It doesn't. Or rather, it creates a particular kind of accountability — one based on proof rather than trust. And that distinction matters more than most people realise.
When you measure to prove, you're solving for evidence. You want documentation that something happened, that someone did what they were supposed to do. The system assumes, quietly, that without this proof, the thing might not happen. That people might not do the right thing unless they know they're being watched.
This assumption has consequences. People start optimising for the metric rather than the outcome. They learn what gets measured and adjust accordingly — not because they're cynical, but because that's what the system rewards. The sales team hits their call targets while customer relationships suffer. The development team ships features on schedule while technical debt accumulates. The compliance team produces evidence while actual security practices drift.
The measurement didn't fail. It worked exactly as designed. It just measured the wrong thing, or measured the right thing in a way that changed its nature.
In compliance, this plays out constantly. Organisations build elaborate systems to prove they're following their own policies. Evidence gets manufactured. Checklists get ticked. Auditors see what they expect to see. But ask the people inside whether the management system reflects how work actually happens, and you'll often get a knowing look. The documentation and the reality have diverged. Everyone maintains the fiction because that's what compliance looks like.
This is governance theatre. It satisfies the requirement for proof while hollowing out the substance underneath.
The uncomfortable question is: what if the measurement itself is part of the problem?
Part 2: The Trust Inversion
Most measurement systems start from a position of suspicion. Not overtly — no one writes "we don't trust our people" into a policy document. But the architecture assumes it. Why else would you need a dashboard to verify that work is happening? Why else would you require sign-offs and approvals and audit trails for routine decisions?
The logic feels reasonable in isolation. Of course we need to track progress. Of course we need evidence for auditors. Of course we need to know what's happening. But each layer of measurement sends a signal: we're checking because we can't be sure. And that signal, received consistently over time, pushes behaviour towards exactly the outcome you were trying to prevent.
People stop using judgement because judgement isn't what's being measured. They follow the letter because the letter is what gets audited. They ask permission because permission is what's documented. The system designed to ensure good outcomes gradually crowds out the conditions that produce them.
What would it look like to start from the opposite assumption?
Starting from trust isn't naive optimism. It's a design choice. You can build systems that assume people want to do good work and create structures that support that intention. The measurement changes from surveillance to reflection. From proving to understanding.
We've been experimenting with this in how we think about contribution and compensation at Askara. The conventional approach treats pay as a transaction: you do work, we measure the work, we pay based on the measurement. The measurement typically involves proxies — hours worked, seniority, output metrics — that attempt to quantify something fundamentally hard to quantify.
The problem isn't just that these proxies are imperfect. It's that they shift the locus of judgement away from the person doing the work. Someone else decides what counts. Someone else evaluates whether you've done enough. The system assumes you can't be trusted to assess your own contribution.
We're trying something different. Instead of measuring contribution from the outside, we're asking people to articulate their own sense of what they're bringing. Not in a vacuum — there's context, there's conversation, there's calibration against what the organisation needs. But the starting point is self-assessment rather than external measurement.
This creates a different dynamic. When you ask someone to reflect on their contribution, they have to think about it. What did I actually do? What difference did it make? Where did I fall short? The reflection itself is valuable, in a way that a manager's rating or a dashboard metric never is. It builds the muscle of honest self-evaluation.
It also surfaces things that traditional metrics miss. The person who didn't ship the most code but made the team better. The person who spotted the risk no one else saw. The person who held things together during a difficult stretch in ways that never showed up in a sprint review. When contribution is self-articulated rather than externally measured, these things become visible.
There's a deeper principle here about what accountability actually means. In a low-trust system, accountability is transactional: you account for your actions to someone with authority over you. In a high-trust system, accountability is relational: you account for your actions because you're part of something and you care whether it succeeds. The first requires surveillance. The second requires clarity of purpose and the space to reflect honestly.
The measurement doesn't disappear. But it changes character. Instead of checking up, you're checking in. Instead of proving, you're understanding.
Part 3: The Compliance Connection
This isn't just about compensation or team dynamics. The same pattern — measurement that undermines what it's meant to ensure — runs through most compliance implementations.
Consider how a typical ISO 27001 management system gets built. The standard requires documented procedures, defined responsibilities, evidence of controls. Organisations respond by creating documentation. Lots of it. Policies proliferate. Sign-offs multiply. Evidence gets collected, organised, stored for audit.
The implicit logic runs: if we can prove we did it, we must be doing it right.
But proof and practice are different things. A documented procedure doesn't mean the procedure is followed. A signed acknowledgment doesn't mean the person understood what they signed. An audit trail doesn't mean the decisions in it were good ones. The evidence-generation system runs parallel to the actual work, sometimes connected, often not.
This is the measurement trap applied to governance. The organisation optimises for producing compliance evidence rather than being genuinely compliant. The management system becomes a performance for auditors rather than a tool for running the business.
The alternative is compliance that emerges from how work actually happens. Evidence as a byproduct, not a separate burden. Documentation that explains why, not just what. Systems designed so that doing the work correctly is the path of least resistance, with proof captured naturally along the way.
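As a loose sketch of what "evidence as a byproduct" can mean in practice (every name here is invented for illustration; no real compliance tooling or API is assumed), the idea is that performing the work automatically emits the audit record, rather than evidence being assembled afterwards as a separate task:

```python
# Hypothetical sketch: evidence captured as a side effect of doing the work,
# not collected separately for an audit. All names are illustrative.
import datetime

AUDIT_LOG = []  # stand-in for an append-only evidence store


def with_evidence(action_name):
    """Wrap a piece of work so a structured record falls out of performing it."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)  # the actual work happens first
            AUDIT_LOG.append({
                "action": action_name,
                "detail": result,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator


@with_evidence("grant_access")
def grant_access(user, system):
    # The real task; no separate evidence-collection step is needed.
    return f"{user} granted access to {system}"


grant_access("dana", "billing")
```

The mechanics matter less than the property: the record exists because the work went through the system, so proof and practice can't quietly drift apart.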
This requires trusting that people want to do good work and building structures that support them. It means pushing context outward rather than pulling decisions inward. It means designing for competence rather than compensating for its assumed absence.
The technology exists to make this practical. Automation can handle the repetitive, low-judgement compliance tasks. It can surface the right guidance at the right moment, inside the tools people already use. It can generate evidence without interrupting flow. What it can't do is replace the underlying philosophy. If the intent is surveillance, you get sophisticated surveillance. If the intent is enablement, you get something genuinely useful.
The question for any organisation building measurement systems — whether for compensation, performance, or compliance — is the same one that shapes everything: are we designing for people we trust, or people we don't?
The answer is visible in every dashboard, every approval chain, every evidence requirement. And the people inside the system know which one it is, even if no one says it aloud.
Trust-based measurement isn't the absence of accountability. It's a different foundation for it. One where the system supports people in doing good work rather than assuming they won't. Where reflection replaces surveillance. Where evidence emerges from thoughtful action rather than being manufactured to satisfy a requirement.
From measuring to prove, to measuring to understand. From compliance as performance, to compliance as how the organisation actually operates.
The trap is real. But it's not inevitable. It's a design choice.