
The Instrument

March 10, 2026 · Ben Visser · 5 min read

Hey there,

Some weeks the work itself reminds you of something you already know but haven't had to articulate in a while. This was one of those weeks. After a stretch of deep flow building a new agent for Askara, I surfaced into a podcast conversation with the team that drifted into geopolitics, AI in warfare, and ultimately the question that sits at the heart of everything we're building: what happens when the instrument gets wielded without intention?

The Flow and the Fracture

The week started beautifully. I'd been locked into building our new business continuity simulation agent, one of those stretches where hours disappear and you emerge with something genuinely valuable. The agent runs risk scenarios drawn from a company's risk register and presents them to employees as interactive simulations. People respond in real time, making decisions about how they'd handle an unfolding incident, and the AI evaluates their choices against what a well-run cybersecurity programme would require. It analyses where the gaps are, where training needs strengthening, where the documentation doesn't reflect how people actually behave under pressure.
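
For the technically curious, here's roughly the shape of that loop in miniature. This is a deliberately simplified sketch with hypothetical names, fields, and a toy scoring rule; nothing here is lifted from the real agent:

```python
# A minimal sketch of the simulation loop described above. The risk-register
# fields, the scoring rule, and the report format are all illustrative
# assumptions, not the production implementation.
from dataclasses import dataclass, field


@dataclass
class RiskScenario:
    """One entry drawn from a company's risk register."""
    title: str
    prompt: str  # the unfolding incident presented to the employee
    expected_actions: set[str] = field(default_factory=set)  # what a well-run programme requires


@dataclass
class SimulationResult:
    scenario: str
    taken: set[str]   # expected actions the participant actually took
    missed: set[str]  # gaps: required actions nobody took


def evaluate(scenario: RiskScenario, actions_taken: set[str]) -> SimulationResult:
    """Compare real-time decisions against the expected controls.

    The agent never decides for the participant; it only records where
    behaviour diverged from the documented programme.
    """
    return SimulationResult(
        scenario=scenario.title,
        taken=actions_taken & scenario.expected_actions,
        missed=scenario.expected_actions - actions_taken,
    )


if __name__ == "__main__":
    ransomware = RiskScenario(
        title="Ransomware on a file server",
        prompt="Monday 09:12: finance reports encrypted files and a ransom note.",
        expected_actions={"isolate_host", "notify_incident_lead", "preserve_logs"},
    )
    # One employee's choices during the interactive run:
    result = evaluate(ransomware, actions_taken={"isolate_host", "email_whole_company"})
    print(f"{result.scenario}: gaps -> {sorted(result.missed)}")
```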

I was buzzing from that work when we sat down to record the podcast. And as often happens when energy is high and the conversation is loose, we drifted into territory none of us had planned to explore. Recent geopolitical developments came up: the escalation with Iran, and specifically the speculation swirling around AI's involvement in the bombing of a girls' school in Minab. Reports had surfaced that AI systems were being used in military strike planning, and while it remains unconfirmed whether AI specifically selected that target, the very possibility has planted a seed of dread that's hard to shake. The fear that speculation generates is real regardless of whether it proves accurate, and the fear itself reveals something important about the moment we're in.

The Mirror

What struck me was the contrast I'd just lived through in a single afternoon. Hours earlier, I'd been building an AI system whose entire purpose is to help humans understand their own readiness. To surface gaps in how organisations protect themselves. To turn the messy reality of human decision-making into structured insight that improves how a company operates. The AI doesn't decide what your cybersecurity programme should look like. It holds up a mirror and lets you see where you stand.

This is a very intentional design choice. It reflects what we've formalised as Askara's foundational why: seeking a safe direction for humanity by helping people use technology better. The business continuity agent embodies this completely. It's the signature approach we're developing around workshops and scenario simulations: we leverage what humans are genuinely good at, the creativity, the strategic thinking, the contextual judgement that comes from actually being in the room when things go wrong, and we let AI handle the manual aftercare that turns all of that into documentation, analysis, and actionable improvements.

The AI is fully in supportive mode. It doesn't make decisions for you about how to run your cybersecurity programme. It creates the conditions for you to make better decisions yourself.
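
One way to make that concrete is in type terms. In a toy sketch (hypothetical names again, not our codebase), a supportive agent's output is a finding, never an action:

```python
# What "supportive mode" means in type terms: the agent's output is an
# observation plus a suggestion, never an executed change. Illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    observation: str  # what the simulation revealed
    suggestion: str   # a possible improvement, phrased as an option

# There is deliberately no apply() or execute() on this type. Acting on a
# Finding is a human decision; the agent stops at holding up the mirror.

finding = Finding(
    observation="Two of five responders skipped log preservation.",
    suggestion="Consider a one-page 'first fifteen minutes' checklist.",
)
print(finding.suggestion)
```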

And then you look at the other end of the spectrum, where the same underlying technology is reportedly being used to plan targets in a military campaign, and the distance between those two applications becomes almost vertiginous.

The Instrument

AI is an instrument. The observation sounds obvious until you sit with what it means. An instrument doesn't have intent. A knife can prepare a meal or take a life, and the technology is identical in both cases. What changes everything is the consciousness of the person holding it, their awareness, their willingness to think carefully about consequences before acting.

The conversation that's missing right now is the one between the people who see AI's potential and the people who see its dangers. The doomers and the optimists aren't wrong about different things. They're looking at different applications of the same instrument and drawing conclusions based on what they see. The optimist watches an AI system help a small company understand its cybersecurity gaps and thinks this is going to democratise expertise. The doomer reads about AI in military targeting and thinks this is going to automate atrocity. Both are observing something real. Both are incomplete on their own.

What's needed is for those perspectives to actually meet, to have the honest and uncomfortable conversation about what this instrument is and what it could become. Because AI isn't going anywhere. It is too powerful to ignore, too useful to abandon, and too dangerous to deploy unconsciously. The question isn't whether it will be used but how, and by whom, and with what level of awareness about what's at stake.

The Choice

You can choose how you apply AI in your own reality. You can choose to build systems that amplify human judgement rather than replace it, to create tools that make people more capable rather than more dependent. But this requires a conscious and deliberate approach to how you engage with technology. It means not blindly following the direction set by whoever happens to have the most resources or the loudest voice.

The mainstream narrative around AI is shaped by established players whose interests don't necessarily align with what most people actually need. The optimisation targets of the largest AI deployments are engagement, efficiency, profit. Not wisdom, not human flourishing, not the careful consideration of consequences.

At Askara, we're making a deliberate choice to stay on the side of the instrument that supports rather than supplants human agency. Our business continuity agent doesn't tell you what to do. It helps you understand what you're already doing and where you might do better. That distinction might seem subtle, but it carries within it an entire philosophy about what technology should be for.

We're fully aware that other approaches exist, and sometimes autonomous AI decision-making is genuinely the right tool for the situation. But the default should never be to hand over judgement without thinking carefully about what you're giving away. The default should be consciousness about what you're building and why.

That's the choice available to every builder, every organisation, every person engaging with these tools right now. And it's a choice that compounds. One intentional feature at a time, one human-centred design decision at a time, until the instrument serves what it was meant to serve.

With care,
Ben