Your AI Governance Was Built for Assistants, Not Agents
You think you are solving AI governance. In reality, you may be capping your AI ROI.
We recently worked with the Lloyd’s Market Association on research into AI Governance and Risk. The finding that stayed with me was not that the market has moved beyond experimentation. It was that much of today’s governance is still built for AI assistance, not AI delegation.
Roughly 72% of firms now have AI frameworks in place. Over 60% require human-in-the-loop review for AI outputs. Those are sensible controls for copilots. They are not, by themselves, a delegation model.
That distinction matters because the risk shifts from content to consequences. Assistive AI helps people work faster. Delegated or Agentic AI acts within a workflow. It retrieves information, calls tools, triggers processes, and changes records. It takes action.
The governance problem is different. It is no longer just: “Was the output reviewed?” It becomes: “Was this system authorized to take this specific action, in this specific context, for this specific purpose, at this specific moment?”
That is where many AI ROI cases will hit the wall.
1. The Productivity Trap
Assistive AI (Copilots) makes individuals faster at existing tasks. It’s “Typewriter 2.0.” You get an efficiency gain, but your business model stays exactly the same. The bottlenecks just move around.
The larger ROI prize will not come from making existing tasks marginally faster. It will come from redesigning workflows so that AI can safely take on bounded actions.
The next step is therefore not a better chatbot. It is a system that can plan a workflow, call tools, trigger actions, and provide an auditable record of what it did and why.
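As a minimal sketch of what that auditable record might look like, assuming a hypothetical tool name, plan structure, and in-memory log (none of this refers to a specific vendor or framework):

```python
# Minimal sketch of an agent step that records what it did and why.
# Tool names, the step structure, and the audit fields are illustrative
# assumptions, not a reference to any particular platform.
import json
import uuid
from datetime import datetime, timezone

def execute_step(agent_id: str, step: dict, audit_log: list) -> dict:
    """Run one planned step and append an auditable record of the action."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": step["tool"],            # e.g. "claims_lookup"
        "arguments": step["arguments"],  # what the agent passed to the tool
        "rationale": step["rationale"],  # why the agent chose this action
    }
    result = {"status": "simulated"}     # placeholder for the real tool call
    record["result"] = result
    audit_log.append(record)
    return result

audit_log: list = []
execute_step(
    agent_id="claims-agent-01",
    step={
        "tool": "claims_lookup",
        "arguments": {"claim_id": "C-1042"},
        "rationale": "Need claim details before drafting a settlement proposal.",
    },
    audit_log=audit_log,
)
print(json.dumps(audit_log, indent=2))
```

The point is not the plumbing; it is that every action carries its own rationale and evidence at the moment it happens, rather than being reconstructed later.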
2. The Permissions Problem Nobody Wants to Own
The subtle technical hurdle isn’t about traditional security; it’s about Contextual Authorization.
Traditional identity and access management was built around relatively stable roles: underwriter, claims handler, finance user, administrator.
Agentic AI breaks that model. An agent is not simply a user. It is a non-human actor operating on behalf of a user, within a task, using tools, data, and inferred intent.
Static permissions are too blunt for that reality.
The question is not simply: “Can this agent access the payments system?”
The better question is: “Does this agent have a valid, policy-aligned intent that means it is authorized at this moment in time to trigger this payment, for this claim, under these policy terms, within this authority limit, with this evidence trail?”
If the answer is broad standing access, you have created a control weakness. If the answer is human approval at every step, you have removed the operating advantage.
That is the authorization gap. Current Identity and Access Management (IAM) systems can only see the login, not the intent or the context.
3. Logic Drift
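To make the shape of that check concrete, here is a minimal, hypothetical sketch of a contextual authorization decision for a payment action. The field names, policy structure, and limits are illustrative assumptions, not features of any existing IAM product:

```python
# Hypothetical sketch of a contextual authorization check: the decision
# depends on the action, purpose, authority limit, and evidence trail,
# not just on whether the agent can reach the payments system.
from dataclasses import dataclass

@dataclass
class ActionRequest:
    agent_id: str
    action: str          # e.g. "trigger_payment"
    claim_id: str
    purpose: str         # inferred intent, e.g. "settle_approved_claim"
    amount: float
    evidence_refs: list  # documents supporting the action

def authorize(request: ActionRequest, policy: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a single, specific action in context."""
    if request.action not in policy["permitted_actions"]:
        return False, "Action not permitted for this agent"
    if request.purpose not in policy["approved_purposes"]:
        return False, "Purpose does not match an approved intent"
    if request.amount > policy["authority_limit"]:
        return False, "Amount exceeds authority limit; escalate to a human"
    if not request.evidence_refs:
        return False, "No evidence trail attached to the action"
    return True, "Authorized for this action, in this context, at this moment"

policy = {
    "permitted_actions": {"trigger_payment"},
    "approved_purposes": {"settle_approved_claim"},
    "authority_limit": 10_000.0,
}
allowed, reason = authorize(
    ActionRequest("claims-agent-01", "trigger_payment", "C-1042",
                  "settle_approved_claim", 2_500.0, ["loss-adjuster-report"]),
    policy,
)
print(allowed, reason)
```

The specific checks matter less than the pattern: the decision is evaluated per action, per context, before execution, and returns a reason that can be logged.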
When you delegate, you aren’t just delegating a task; you are delegating interpretation.
The risk here isn’t just a “hallucination”. It is Logic Drift, where an agent’s interpretation of compliance shifts after a model update or, more subtly, over time. If your governance relies on reviewing the output after the fact, you are already too late. You are operating outside your authority in real time.
The Provocation: You Are Optimizing for What Is
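One way to catch drift before it matters is to replay a fixed set of reference cases against the agent's decision logic after every model update. This is a minimal sketch under that assumption; the cases and the decide() function are purely illustrative:

```python
# Sketch of a logic-drift check: replay reference cases after a model update
# and flag any case where the current decision diverges from the expected,
# policy-aligned decision. All data and logic here are illustrative.
REFERENCE_CASES = [
    {"claim_id": "C-0001", "facts": {"amount": 900.0, "coverage": "in_scope"},
     "expected_decision": "approve"},
    {"claim_id": "C-0002", "facts": {"amount": 900.0, "coverage": "excluded"},
     "expected_decision": "decline"},
]

def decide(facts: dict) -> str:
    """Placeholder for the agent's decision logic under the current model."""
    return "approve" if facts["coverage"] == "in_scope" else "decline"

def drift_report(cases: list) -> list:
    """Return the cases where the current model disagrees with policy."""
    return [
        case["claim_id"]
        for case in cases
        if decide(case["facts"]) != case["expected_decision"]
    ]

drifted = drift_report(REFERENCE_CASES)
if drifted:
    print(f"Logic drift detected on cases: {drifted}")
else:
    print("No drift against the reference cases.")
```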
By sticking to human-review-only frameworks, you are optimizing what you do now, not your future capability. You are building a framework to survive an audit, not to compete in a different operating model.
When a competitor learns how to govern the intent of an agent in real time, rather than reviewing the output after the fact, the gap will not just be technological. It will be operational.
You cannot compete on manual review cycles against a firm that has made delegated action governable, observable, and safe.
Meanwhile, if formal governance cannot distinguish between safe and unsafe delegation, business units will eventually find informal workarounds, and those workarounds will be harder to observe, audit, and control.
You are then disadvantaged twice: too slow to capture the upside, and too weakly instrumented to control the risk.
The So-What: How to Govern Delegation
The practical answer is not to abandon human oversight. It is to move human judgement to the right point in the system.
Firms will need to govern delegation through architecture, not just policy and model controls. That means:
- Contextual controls: authorization based on task, role, purpose, data, limit, workflow state, and evidence.
- Action-level auditability: a record not only of what the model produced, but what it did, why it did it, and under whose authority.
- Pre-action guardrails: real-time controls that prevent unauthorized action before it occurs, not merely review it afterwards.
- Clear escalation rules: humans intervene where judgement, ambiguity, or authority limits require it, not as a universal brake on every action.
Most controls, most of the time, should be code that can run in real time and at scale.
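As a sketch of what such code might look like, assuming illustrative thresholds and action types, here is a pre-action guardrail that allows routine actions automatically, escalates where authority limits or low confidence call for human judgement, and blocks anything outside policy:

```python
# Sketch of a pre-action guardrail with escalation rules: evaluate the action
# before it executes, not after. Thresholds, action types, and the confidence
# field are illustrative assumptions.
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate_to_human"
    BLOCK = "block"

def guardrail(action: dict, policy: dict) -> Decision:
    """Decide whether a proposed action may proceed, needs a human, or stops."""
    if action["type"] not in policy["permitted_actions"]:
        return Decision.BLOCK
    if action["amount"] > policy["hard_limit"]:
        return Decision.BLOCK
    if action["amount"] > policy["auto_approve_limit"] or action["confidence"] < 0.8:
        return Decision.ESCALATE   # human judgement only where it is needed
    return Decision.ALLOW          # routine action proceeds at machine speed

policy = {
    "permitted_actions": {"trigger_payment", "update_reserve"},
    "auto_approve_limit": 5_000.0,
    "hard_limit": 50_000.0,
}
print(guardrail({"type": "trigger_payment", "amount": 1_200.0, "confidence": 0.95}, policy))
print(guardrail({"type": "trigger_payment", "amount": 12_000.0, "confidence": 0.95}, policy))
```

The design choice is the escalation tier: humans see the exceptions, not every action.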
The Bottom Line:
If your AI strategy depends on a human approving every meaningful action, you may still have a useful productivity tool. But you do not yet have a delegation model. And without a delegation model you will not achieve transformational benefits from the new technology.
The problem is not human involvement. The problem is indiscriminate human involvement.
Human judgement should be reserved for the points where judgement is genuinely needed. If every action requires manual approval, delegation fails. If no action requires manual escalation, governance fails.
Most firms are building AI governance for reviewable outputs. The next ROI frontier requires governance for authorized action.
See https://lmalloyds.com/ai-and-ml-in-actuarial-and-risk/ for the report.
About me: I help organizations turn complex data into clear decisions and commercial outcomes. My focus is on enabling better decision-making and unlocking new value through data-driven innovation.
Follow me on LinkedIn for more insights.