From Human Supervision to Policy Governance

Supervision has latency. A human watching an AI system needs time to observe what happened, recognise that something is wrong, decide what to do, and then act. That sequence takes…

If Your AI Guardrails Live in the Prompt, They Aren’t Guardrails

When OpenClaw's security audit found over 500 vulnerabilities, the most serious category wasn't a flaw in the code. It was prompt injection. Prompt injection is what happens when an AI…

Adding Controls To AI Isn’t Governance

Most organisations discover they have a governance problem the same way. Something goes wrong. An automated workflow sends the wrong response to the wrong customer. An AI agent escalates an…

When Humans Leave the Loop

Many software systems that appear stable are not stable because they are well designed. They are stable because humans quietly make them so. People notice when something feels off. They…