Friday, 9 January 2026

The liability gap: why your AI contracts won't protect you when things go wrong

I've done an awful lot of talking to marketing directors, compliance teams, and operational leaders about their AI strategies. The conversations follow a pattern.

They're excited about AI's potential. They've integrated tools across their operations. They're seeing results.

Then I ask about liability.

The room gets quiet.

The assumption that's costing organisations millions

Here's what I keep hearing: "We use ChatGPT, but it's OpenAI's problem if something goes wrong. We have contracts."

This assumption is dangerous.

When you enter sensitive information into the consumer version of ChatGPT, it can be used to train future models and may resurface in responses to other users' prompts. That's a data leakage problem. That's a compliance violation.

And OpenAI's liability? Limited to $100 (approximately £75) or the fees you paid in the past 12 months, whichever is greater.

The contract sits with the individual employee using the tool, not with your enterprise. Your organisation can't bring a claim about confidentiality or security risks. The legal protection you assumed existed doesn't.

The deployer versus developer confusion

The EU AI Act brings fines of up to 7% of global annual turnover or €35 million (whichever is greater) for the most serious breaches. For UK businesses operating in the EU market, the bulk of the Act's obligations apply from August 2026.

In the UK, the Information Commissioner's Office continues to enforce data protection standards, with notable fines in 2025 including £14 million against Capita for a cyber-attack that exposed the personal data of 6.6 million people. Whilst the UK's approach to AI regulation remains principles-based rather than prescriptive, UK GDPR fines can still reach £17.5 million or 4% of annual global turnover, whichever is greater.
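
To make the "whichever is greater" wording concrete, here is a rough back-of-the-envelope sketch in Python. The turnover figure is purely hypothetical, and the numbers are the headline statutory maximums quoted above, not a prediction of what any regulator would actually levy.

```python
# Back-of-the-envelope view of the headline maximum fines quoted above.
# The turnover figure is hypothetical; actual penalties depend on the
# breach, the regulator, and mitigating factors.

def eu_ai_act_max_fine(global_turnover_eur: float) -> float:
    """Most serious EU AI Act breaches: 7% of global turnover or €35m, whichever is greater."""
    return max(0.07 * global_turnover_eur, 35_000_000)

def uk_gdpr_max_fine(global_turnover_gbp: float) -> float:
    """UK GDPR upper tier: 4% of global turnover or £17.5m, whichever is greater."""
    return max(0.04 * global_turnover_gbp, 17_500_000)

turnover = 250_000_000  # hypothetical global annual turnover (€ or £ respectively)
print(f"EU AI Act headline maximum: €{eu_ai_act_max_fine(turnover):,.0f}")  # €35,000,000
print(f"UK GDPR headline maximum: £{uk_gdpr_max_fine(turnover):,.0f}")      # £17,500,000
```

Notice that for a mid-sized firm the fixed floor, not the percentage, sets the exposure: the "whichever is greater" clause means smaller turnover doesn't shrink the headline risk.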

The EU AI Act creates two categories: developers (which the Act calls "providers") and deployers.

Developers build the AI system. They control design decisions, training data, and core functionality.

Deployers use the AI system in their operations. They don't control how it was built, but they control how it's used.

The problem? The line between these categories blurs fast.

If you put your name or trademark on a high-risk AI system, you become a provider. If you make substantial modifications to how the system works, you become a provider. If you customise the system beyond basic configuration, you might become a provider.
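
As a rough illustration of how easily that line gets crossed, the conditions above reduce to a simple triage rule. This is a sketch for flagging systems for legal review, not a legal determination; the flag names are my own shorthand for the scenarios just described.

```python
# Sketch: triage check for whether a deployer may have slipped into the
# provider role. The flags mirror the scenarios described above; any True
# answer should trigger proper legal review, not a DIY conclusion.

def may_have_become_provider(
    own_name_or_trademark_on_high_risk_system: bool,
    substantial_modification_to_system: bool,
    customisation_beyond_basic_configuration: bool,
) -> bool:
    return (
        own_name_or_trademark_on_high_risk_system
        or substantial_modification_to_system
        or customisation_beyond_basic_configuration
    )

# Example: white-labelling a vendor's high-risk system under your own brand
print(may_have_become_provider(True, False, False))  # True -> escalate to legal
```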

Most organisations don't realise they've crossed this line until they're facing regulatory scrutiny.

What the data reveals about your exposure

The numbers tell a story about how unprepared organisations are for this reality.

According to EY's 2025 Responsible AI Pulse survey, nearly 98% of UK respondents reported experiencing financial losses due to unmanaged AI risks, with an average loss estimated at approximately £3 million.

What changed? Organisations realised one AI lapse cascades into customer attrition, investor scepticism, regulatory scrutiny, and litigation.

Almost two-thirds (64%) of UK companies surveyed allow 'citizen developers' (employees independently creating or deploying AI agents), but only 53% have formal policies in place to ensure responsible AI practices. The gap between adoption and governance creates massive exposure.

Organisations adopting AI governance measures, such as real-time monitoring and oversight committees, report significant improvements. Of the UK respondents interviewed, those with an oversight committee reported 35% more revenue growth, a 40% increase in cost savings, and a 40% rise in employee satisfaction.

Yet the skills gap remains stark. Many compliance professionals now handle AI governance responsibilities without specific training for these expanded roles.

UK AI regulation: the principles-based approach

Unlike the EU's prescriptive AI Act, the UK has adopted a principles-based framework built on five core principles: safety and security, transparency and explainability, fairness, accountability and governance, and contestability and redress.

The UK government has announced plans to introduce legislation addressing AI risks, though recent comments indicate the first UK AI Bill is unlikely before the second half of 2026. In the interim, existing regulators, including the ICO, Financial Conduct Authority, and Competition and Markets Authority, enforce AI standards within their respective sectors.

For UK businesses with EU operations, this creates a dual compliance challenge. The EU AI Act's extraterritorial reach means UK firms developing or deploying AI systems for the EU market must comply with both frameworks.

That word matters: both.

You can't point to your vendor and claim immunity. You can't hide behind a service agreement. Whether you're operating under UK principles or EU requirements, if you're deploying the system, you're responsible for how it affects people.

Why your contracts don't work the way you think

I've reviewed dozens of AI service agreements over the past year. They follow a pattern.

The organisation using the AI tries to push all liability onto the provider. The provider limits their exposure to minimal amounts. Nobody addresses who's responsible when things go wrong in practice.

OpenAI's terms require you to indemnify and hold them harmless from third-party claims arising from your use of their services. You're protecting them, not the other way around.

Section 230 immunity is a US legal concept, but the underlying point travels: AI vendors aren't platforms passively hosting your content. They're providing tools that generate new content, and the platform-style legal protections you assumed don't apply.

What compliance teams need to understand now

The division of responsibility between deployers and developers isn't academic. It determines who pays when something goes wrong.

A deployer using an AI system doesn't control design decisions made by the company that developed it. A developer doesn't control how another organisation deploys their system.

But both can be liable.

Here's what that means for your compliance approach:

Document everything about how you're using AI tools. Not just what tools you're using, but how you've configured them, what data you're feeding them, and what decisions they're informing.

Audit your AI vendors' compliance posture. Don't assume they're handling the regulatory side. Ask specific questions about their data handling, their security measures, and their own compliance programmes.

Map your AI systems to regulatory categories. Understand which systems qualify as high-risk under various frameworks. The EU AI Act has specific risk categories; the UK framework assesses risk proportionally within each sector. (A minimal register sketch follows this checklist.)

Create clear policies about personal data and AI. Your team needs to know what information can and cannot be entered into AI tools. One employee mistake can create enterprise-wide liability under UK GDPR.

Review your insurance coverage. Most policies weren't written with AI liability in mind. You may have gaps you don't know about.
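
To show what "document everything" and "map your AI systems" can look like day to day, here is a minimal sketch of an internal AI system register. The field names, categories, and example entry are illustrative assumptions, not a schema prescribed by either framework.

```python
# Minimal sketch of an internal AI system register.
# Field names and categories are illustrative; adapt them to your
# organisation's actual frameworks and record-keeping requirements.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                      # e.g. "Vendor chatbot for customer service"
    vendor: str                    # who builds and maintains it
    role: str                      # "deployer" or "provider" under the EU AI Act
    eu_risk_category: str          # e.g. "minimal", "limited", "high", "prohibited"
    personal_data_processed: bool  # does it touch personal data under UK GDPR?
    data_fed_in: list[str] = field(default_factory=list)         # categories of input data
    decisions_informed: list[str] = field(default_factory=list)  # what it influences
    last_vendor_audit: str = "never"                              # date of last vendor review

# Example (hypothetical) entry
register = [
    AISystemRecord(
        name="CV screening assistant",
        vendor="ExampleVendor Ltd",
        role="deployer",
        eu_risk_category="high",   # recruitment tools are typically treated as high-risk
        personal_data_processed=True,
        data_fed_in=["CVs", "cover letters"],
        decisions_informed=["interview shortlisting"],
        last_vendor_audit="2025-11-01",
    ),
]

# Simple triage: surface high-risk systems touching personal data for review
for record in register:
    if record.eu_risk_category == "high" and record.personal_data_processed:
        print(f"Review needed: {record.name} (last vendor audit: {record.last_vendor_audit})")
```

Even a register this simple answers the questions a regulator or insurer will ask first: what is in use, who is responsible for it, what data it touches, and when anyone last checked.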

The integration challenge nobody's talking about

According to the EY survey, 80% of UK respondents reported that adopting AI has led to improvements in innovation, whilst 79% said it had improved efficiency and productivity.

But adoption doesn't mean effectiveness.

The challenge isn't creating an AI policy. The challenge is integrating AI governance into existing compliance frameworks whilst maintaining operational agility.

Your team is already managing data protection, privacy regulations, industry-specific compliance requirements, and security protocols. Now you're adding AI governance on top.

The frameworks don't align neatly. UK GDPR focuses on personal data. The EU AI Act focuses on risk levels and use cases. The UK's principles-based approach delegates to sector regulators. Your industry regulations focus on sector-specific concerns.

You need a compliance approach that addresses all of these simultaneously without creating so much friction that your organisation can't move.

What I'm seeing work in practice

The organisations handling this well share common approaches.

They've created cross-functional AI governance teams that include legal, compliance, IT, and business stakeholders. No single department owns this problem because it touches everything.

They're conducting regular AI audits to understand what systems are in use, how they're configured, and what data they're processing. Shadow AI is a bigger problem than most organisations realise.

They're building AI literacy across the organisation. Compliance teams need to understand how the technology works. Technical teams need to understand the regulatory landscape. Business teams need to understand the risks.

They're treating AI governance as an ongoing process, not a one-time project. The technology evolves. The regulations evolve. Your approach needs to evolve with them.

The path forward

You can't avoid AI. The competitive pressure is too strong. The efficiency gains are too significant. The customer expectations are too high.

But you can't ignore the liability gap either.

The organisations that succeed will be the ones that build robust governance frameworks now, before regulatory enforcement ramps up and before a major incident forces their hand.

Start by understanding your actual exposure. Map your AI systems. Review your contracts. Identify the gaps between your assumed protection and your actual protection.

Then build a governance approach that's proportional to your risk. Not every AI system needs the same level of oversight. Focus your resources where the consequences of failure are highest.

The liability gap is real. Your contracts won't protect you the way you think they will. But clear-eyed assessment and proactive governance can.

The question isn't whether to integrate AI into your operations. The question is whether you'll do it with your eyes open to the legal realities or whether you'll learn about them the hard way.
