AI adoption continues to accelerate across sectors, with a significant majority [1] of businesses reporting major performance gains as a result. At the same time, the internal governance around AI adoption isn’t always keeping pace [2], and AI-related litigation is a key concern for many in-house legal teams [3]. Add to this the absence of any overarching legal framework for liability in relation to AI in the UK [4], and the net result is a somewhat heady mix of short-term gains with (at least perceived) longer-term uncertainty and risk.
Published: 20 February 2026
Authors: Alex Bishop, Peter Richards-Gaskin, Alex Kirkhope, Sarah Reynolds, Claire Kershaw, Stefanie Hughes
Against this background, the UK Jurisdiction Taskforce (UKJT) has recently published, and is consulting on, a draft legal statement on liability for non-deliberate AI harms under the private law of England and Wales (the Statement). The core issue the Statement is concerned with is “in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI”.
The UKJT’s approach to this is founded on the premise that AI (defined in the Statement as “a technology that is autonomous” [5]) does not have legal personality in English law, so liability for any harms caused by its use must be attributed to other ‘legal persons’. Despite the novel context, the UKJT suggests such liability can be attributed using ordinary existing legal principles within the ‘inherently flexible’ English common law system.
From that starting point, the Statement addresses the following questions:
- does the principle of vicarious liability apply to loss caused by AI?
- in what circumstances can a professional be liable for using or failing to use AI in the provision of their services? If AI used in the provision of professional services produces erroneous output, is the professional liable for loss resulting from the error?
- can a person ever be liable for harms caused by use of AI where there is no fault on their part?
- does liability attach to false statements made by an AI chatbot?
Key takeaways / comment
Overall, the Statement serves as a helpful reminder of both (i) the extent to which existing legal principles can be applied to the relatively novel scenarios and risks of AI use, and (ii) the ability of the English common law system to adapt further where necessary to address these.
In most commercial contexts, the contractual arrangements between the various parties to an AI supply chain will be the most important legal framework governing liability for any non-deliberate harm arising. The need to apply or analyse tortious or other legal principles is likely to be relatively rare in practice, and parties within the AI supply chain should therefore prioritise clear and robust contractual allocations of risk, viewing any potential claim in negligence amongst themselves as a fallback. Claims brought by end users against professional services firms are, however, likely to be brought in professional negligence, whether under the terms of their contract or in tort.
Meticulous drafting of contracts has always been important, and highly complex supply chains are also nothing new. Nonetheless in the context of AI supply chains (the structure of which, the Statement notes, is likely to change as the technology advances), contracting parties and their advisers will benefit from renewed emphasis and particular attention to detail around the ‘basics’ of warranties, indemnities, limitations and exclusions. A thorough understanding of the technology, roles and responsibilities of all parties within the supply chain will be key in apportioning liability and managing risk appropriately.
It’s worth noting in this context that, to date, AI vendors have been reluctant to offer detailed liability cover in respect of ‘downstream’ use of their products and platforms, though basic protection such as IPR indemnities for AI outputs is becoming more common with larger vendors. Over time our expectation is that, as happened with data protection and under increased customer demand, more detailed template clauses will begin to take hold, requiring vendors to verify not only legal compliance but also the broader legal, security and ethical processes and governance they have in place in respect of their AI systems. Given the dynamic nature of AI systems, their performance, ongoing development and outputs, such governance would cover the full lifecycle of the system, spanning its historic development and training, current performance and ongoing oversight (human or otherwise), to provide customers with adequate assurance as to the product’s accuracy and integrity over the term of its deployment.
In the rarer cases where liability cannot be determined under contractual arrangements (and with reference to the four questions noted above):
- the principle of vicarious liability is unlikely to apply to harm caused by AI. Since an AI system has no legal personality itself, it cannot have the primary liability upon which the vicarious liability of another (legally recognised) person could be based. That said, an employer may be vicariously liable for loss resulting from the negligent use of AI by an employee (subject to the usual analysis).
- the question of whether a professional has been negligent in their use of AI – or for failing to use AI – is likely to be determined in much the same way as the question of whether they have been negligent in using – or not using – any other tool available to them, applying well-established legal principles [6]. It will be highly fact specific; for example:
- evidence as to the standard of care exercised by reasonably competent members of the relevant profession [7] will be crucial. While this may be informed by any applicable guidance from regulators/professional bodies (to which professionals must of course pay close attention), as the UKJT notes this “tends to be somewhat high level”, and evidence as to consistent market practice may be hard to pin down for a variety of reasons.
- particular uncertainties may arise in relation to causation where it is necessary to understand why AI produced the output it did (not only that it was incorrect). Maintaining clear and comprehensive audit trails (for example of prompts to the AI tool and steps taken to check/refine output, amongst other things) may help to fill evidential gaps.
- whether in the context of AI use or generally, a person will not normally be liable for harm caused through no fault of their own, save for a failure to meet an agreed standard or result. Accordingly, if the professional advice produced falls below the standard expected of a reasonably competent professional in that field, then a professional services firm is likely to be found liable in negligence to an end user/client, even if it was the AI that caused or made the error. In addition, where an AI system is incorporated into a tangible product (such as a CD-ROM, for example), the Consumer Protection Act 1987 will apply, imposing strict liability for defective products that cause certain physical harms. The Law Commission has stated it intends to review this area of the law, including to address the status of “pure software” [8].
- non-contractual liability may in principle arise for statements made by an AI chatbot, in negligent misstatement or misrepresentation, deceit, libel, slander and/or malicious falsehood. As ever, much will depend on the particular facts: for example, whether the chatbot’s response was autonomous or pre-determined, whether it is clear the AI chatbot is just that (as opposed to appearing as a chat function with a human), or whether the chatbot was well known to hallucinate.
Next steps
While the Statement goes some way to dispelling perceived legal uncertainties around liability for AI-related harm, needless to say, real uncertainties remain. Robust governance frameworks and contractual arrangements are essential to mitigating the most significant risks and uncertainties, ensuring management time and resource can remain focussed on driving the business forward. Flowing that governance and contractual assurance right through an organisation’s AI supply chain will increasingly become essential rather than a ‘nice to have’.
The consultation closed on 13 February 2026 and the UKJT states it intends, once responses have been considered, to publish a final version of the Statement as soon as possible.
More information
If you’re navigating the governance or contractual issues raised in this article – and need to manage your organisation’s risk and liability when procuring AI systems – our AI and dispute resolution and litigation teams are ready to help you move forward with confidence.
[1] 64% of respondents to Shoosmiths’ AI Governance Report: Uncharted territory: Navigating AI governance for better business performance.
[2] 32% of respondents to the above survey reported having advanced accountability frameworks in place.
[3] 55% of respondents to Shoosmiths’ Litigation Risk Report 2026 expect AI-related litigation to increase.
[4] Other jurisdictions, including the EU with its stalled AI Liability Directive, have also struggled to codify clear liability principles in relation to AI.
[5] Note that there is currently no universally accepted definition of AI, though the definition adopted by the UKJT within its Statement deliberately shares lineage with definitions used by the UK Government, the EU and the OECD.
[6] See in particular South Australia Asset Management Corporation v York Montague Ltd [1996] UKHL 10 and Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20.
[7] In accordance with the ‘Bolam test’ from Bolam v Friern Hospital Management Committee [1957] 1 WLR 582.
[8] This is a nuanced area given the state of the art in relation to software delivery methods and the current precedent that software-as-a-service does not amount to “goods” under UK legislation.