The AI Dependency Trap

Do you have governance, or just a subscription to someone else’s risk appetite?

AI
AI Governance
Risk Management
Tech Strategy
CDO
Published

April 27, 2026

Your most difficult AI governance risk may not come from your own projects at all.

We worked with the Lloyd's Market Association (LMA) to understand the state of AI adoption and to probe how risk and governance have evolved. While the insurance market at Lloyd's is a specialized business, I believe the results and insights are generally useful and applicable.

Much of our discussion with Chief Risk Officers (CROs) was naturally about in-house implementations, the value they have delivered, and the risk analysis and controls the organizations had adopted. Ingestion of unstructured data for use in structured underwriting and claims systems was a common use case with clear and measurable business value. Most had IT processes for procuring and controlling the tools. Requiring human oversight, validation, and decision-making was almost universally adopted as the main operational risk control.

There are some issues with this, but they are not for this post. Today, I want to focus on the hidden risk that many have ignored.

The hidden risk

Third-party AI is where governance becomes harder, because the firm does not control the system, the roadmap, or the commercial terms.

Firms are responding to this risk in a sensible way at the front end. Over 60% include AI-specific contractual clauses, a similar proportion apply AI-specific due diligence during procurement, and more than half maintain ongoing monitoring of third-party performance, security, and compliance.

These are sensible defences that firms in any industry can and should adopt.

But the more interesting question is whether they are enough once a vendor is strategically embedded in a core operating process. Because if it takes years to change an IT solution that supports underwriting, claims, finance, or some other key workflow, then the real issue is not simply whether the vendor uses AI. The real issue is what happens when that vendor introduces AI into the product, changes the terms, and you do not like the updated clauses.

At that point, the governance question is not just ‘can we object?’. It is ‘what is our practical alternative?’. And in many cases the honest answer may be: very little.

That is why I think the sharpest third-party risk here is dependency. A firm’s formal right to object may be much stronger than its practical ability to exit.

So the gap is not that firms have ignored vendor risk. It is that many have focused on diligence and clauses at the front end, without yet solving the harder issue of bargaining power, resilience, and realistic alternatives once the dependency already exists.

You may object that this is not different from every other vendor decision and argue this is not a new risk.

I think you’d be wrong.

The $15 trillion reason

About $15 trillion of market value depends on AI becoming a foundational tool across the economy, or some similar vision of AI "success". If you are a software company, then approximately¹ 100% of your stock market valuation depends on you having a compelling "AI story". And therefore about 100% of your bonus, if you are a manager there.

Your software vendors will introduce more and more AI in their tools. $15 trillion says so. They will do so at an unprecedented pace and they will need to do it whether you want it or not because their investors demand it. $15 trillion says so.

This is new, and this is why it is different from vendor risks in the past. You are not looking at one vendor perhaps introducing something over time. You are looking at all vendors definitely introducing AI, all at once.

Reclaiming bargaining power

If the gap is bargaining power, what can you practically do to get it back?

Begin by looking at your data strategy. The real lock-in happens when your data is trapped in a vendor's proprietary workflows and formats. To maintain strategic resilience, prioritize data portability.

Data portability is one critical step to ensuring you have practical alternatives. Without portability, you don't have governance; you have a subscription to someone else's risk appetite.

Third-party AI governance is not just a procurement exercise. It is a strategic resilience issue. By the time a critical vendor presents AI-enabled terms on a take-it-or-leave-it basis, you are already negotiating from a weakened position. And this will happen, guaranteed, and soon. Fifteen trillion dollars says it will. Now is the time to wake up and prepare.

→ What’s your take? How prepared are you? Reach out to discuss not just AI governance but also how to drive EBITDA impact from AI innovation.

See: https://lmalloyds.com/ai-and-ml-in-actuarial-and-risk/ for the research summary.


Footnotes

  1. Exaggerated for effect, but not by much.