Article | 6 min read
New tech, old law?
UKJT considers liability for AI harm

AI adoption continues to accelerate across sectors, with a significant majority1 of businesses reporting major performance gains as a result. At the same time, internal governance around AI adoption isn’t always keeping pace2, and AI-related litigation is a key concern for many in-house legal teams3. Add to this the absence of any overarching legal framework for AI liability in the UK,4 and the net result is a heady mix of short-term gains and (at least perceived) longer-term uncertainty and risk.

Published: 20 February 2026
Authors: Alex Bishop, Peter Richards-Gaskin, Alex Kirkhope, Sarah Reynolds, Claire Kershaw, Stefanie Hughes

Against this background, the UK Jurisdiction Taskforce (UKJT) has recently published, and is consulting on, a draft legal statement on liability for non-deliberate AI harms under the private law of England and Wales (the Statement).  The core issue the Statement is concerned with is “in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI”.

The UKJT’s approach is founded on the premise that AI (defined in the Statement as “a technology that is autonomous”5) does not have legal personality in English law, so liability for any harms caused by its use must be attributed to other ‘legal persons’. Despite the novel context, the UKJT suggests such liability can be attributed using ordinary existing legal principles within the ‘inherently flexible’ English common law system.

From that starting point, the Statement addresses the following questions:

Key takeaways / comment

Overall, the Statement serves as a helpful reminder of both (i) the extent to which existing legal principles can be applied to the relatively novel scenarios and risks of AI use, and (ii) the ability of the English common law system to adapt further where necessary to address them.

In most commercial contexts, the contractual arrangements between the various parties to an AI supply chain will be the most important legal framework governing liability for any non-deliberate harm arising. The need to apply or analyse tortious or other legal principles is likely to be relatively rare in practice, and parties within the AI supply chain should therefore prioritise clear and robust contractual allocations of risk, viewing any potential claim in negligence amongst themselves as a fallback. Claims brought by end users against professional services firms are, however, likely to be brought in professional negligence, whether under the terms of their contract or in tort.

Meticulous drafting of contracts has always been important, and highly complex supply chains are nothing new. Nonetheless, in the context of AI supply chains (the structure of which, the Statement notes, is likely to change as the technology advances), contracting parties and their advisers will benefit from renewed emphasis on, and particular attention to detail around, the ‘basics’ of warranties, indemnities, limitations and exclusions. A thorough understanding of the technology, and of the roles and responsibilities of all parties within the supply chain, will be key to apportioning liability and managing risk appropriately.

It’s worth noting in this context that, to date, AI vendors have been reluctant to offer detailed liability cover in respect of ‘downstream’ use of their products and platforms, though basic protection such as IPR indemnities for AI outputs is becoming more common with larger vendors. Over time, our expectation is that, as happened with data protection, and under increased customer demand, more detailed template clauses will begin to take hold, requiring vendors to verify not only legal compliance but also the broader legal, security and ethical processes and governance they have in place for their AI systems. Given the dynamic nature of AI systems, their performance, ongoing development and outputs, such governance would cover the full lifecycle of the system: its historic development and training, its current performance, and ongoing oversight (human or otherwise), providing customers with adequate assurance as to the product’s accuracy and integrity over the term of its deployment.

In the rarer cases where liability cannot be determined under contractual arrangements (and with reference to the four questions noted above):

Next steps

While the Statement goes some way to dispelling perceived legal uncertainties around liability for AI-related harm, real uncertainties inevitably remain. Robust governance frameworks and contractual arrangements are essential to mitigating the most significant risks and uncertainties, ensuring management time and resource can remain focussed on driving the business forward. Flowing that governance and contractual assurance right through an organisation’s AI supply chain will increasingly become essential rather than a ‘nice to have’.

The consultation closed on 13 February 2026 and the UKJT states it intends, once responses have been considered, to publish a final version of the Statement as soon as possible.

More information

If you’re navigating the governance or contractual issues raised in this article – and need to manage your organisation’s risk and liability when procuring AI systems – our AI and dispute resolution and litigation teams are ready to help you move forward with confidence.

1 64% of respondents to Shoosmiths’ AI Governance Report: Uncharted territory: Navigating AI governance for better business performance.
2 32% of respondents to the above survey reported having advanced accountability frameworks in place.
3 55% of respondents to Shoosmiths’ Litigation Risk Report 2026 expect AI-related litigation to increase.
4 Other jurisdictions, including the EU with its stalled AI Liability Directive, have also struggled to codify clear liability principles in relation to AI.
5 Note that there is currently no universally accepted definition of AI, though the definition adopted by the UKJT within its Statement deliberately shares lineage with definitions used by UK Government, the EU and the OECD.
5 See in particular South Australia Asset Management Corporation v York Montague Ltd [1996] UKHL 10 and Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20.
7 In accordance with the ‘Bolam test’ from Bolam v Friern Hospital Management Committee [1957] 1 WLR 582.
8 This is a nuanced area given the state of the art in relation to software delivery methods and the current precedent that software-as-a-service does not amount to “goods” under UK legislation.