Article | 3 min read
Closing the AI governance gap before regulation bites

AI adoption is accelerating across UK businesses, yet governance is lagging behind. Our latest research highlights a widening divide between AI leaders and organisations still without clear accountability, training or oversight. With regulatory enforcement approaching, the priority is no longer experimentation – it’s compliance, risk management and responsible deployment at scale.

Published: 2 March 2026

Our recently published report into AI governance (the Report) confirms that AI adoption is widespread and that use of the technology is firmly embedded within long-term business strategy.

The governance divide widens

With the abundance of use cases, it is no surprise that, of the 200 business leaders surveyed, some 64% reported a transformational or significantly positive impact from AI adoption. We identified the emergence of a cohort of ‘AI Leaders’, allowing us to draw inferences from the organisations considered most advanced in their deployment of AI. While those AI Leaders paint a positive picture of AI use, the research also highlights a stark contrast with organisations that appear to have fallen behind on internal AI governance and strategy.

One of the key findings of our research was an apparent disconnect between the adoption of AI systems across business functions and the corresponding internal governance. Fewer than a third of the businesses surveyed have fully developed or implemented governance and accountability structures across their organisation.

Regulation is about to catch up

As we move towards full implementation of the EU AI Act (the Act) later in 2026 (pending any adjustments the EU may make to the EU digital acquis as part of the ‘EU Digital Omnibus’ currently under discussion), a shift toward regulatory enforcement may help to focus the minds of those who are not yet as advanced in their governance efforts. Encouragingly, organisations that fear they may be falling behind can use our Report to aid their internal benchmarking and analysis.

Responsible deployment of nascent technology should always be underpinned by comprehensive internal governance, where the risks of adoption have been carefully considered and procedures have been put in place to mitigate those risks effectively. Our Report provides an opportunity for organisations to identify gaps in their current compliance strategies, which should make it easier to implement practical next steps towards compliance.

Take, for example, the current overarching requirement within the Act for businesses to ensure that their staff have a sufficient understanding of the risks, benefits and proper use of the AI tools available to them (see Article 4 of the Act, which sets out an AI literacy requirement). For an organisation to meet its literacy obligations, it should keep employees regularly apprised of internal AI policies and clearly outline how responsible use manifests within that organisation. It would also be good practice to ensure that all employees have a general awareness of the risks and limitations associated with AI.

Training is the first line of defence

Our Report shows that AI Leaders are nearly twice as likely as other organisations to regularly provide employees with training and awareness programmes, a clear indication of their commitment to meeting one of the more straightforward regulatory obligations.

But what about those organisations that lack a structured internal training programme? At best, a lack of training could prevent employees from using AI tools to their full potential, in turn hindering an organisation’s operational performance. At worst, it could lead to employees inadvertently (or unknowingly) using prohibited or non-approved AI tools, and a lack of awareness of the risks of those tools could result in employees uploading confidential, sensitive or proprietary business information to them. Depending on the terms of use that apply, information entered by the employee is unlikely to be protected from wider dissemination, and may even be used by the tool to generate future outputs, or to train or fine-tune its underlying model.

This misuse of business information may leave an organisation open to reputational damage and litigation risk. In fact, the risk of litigation was the number one risk of AI adoption identified by respondents to our Report. Similar levels of concern were raised by a separate cohort of respondents in our recent 2026 Litigation Risk Report. By providing employees with clear parameters, training and policies on the use of AI, organisations can meet their regulatory obligations while also mitigating their risk of AI-related litigation. This is just one of many issues that illustrate why businesses should be developing and deploying robust internal AI governance frameworks and policies as a priority.

The risk of enforcement action and litigation is set to increase toward the end of 2026, with upcoming implementation deadlines, lower barriers to entry for litigation, and increased general awareness and use of AI across organisations.

More information

At Shoosmiths, our AI team’s mission is to support our clients in using AI within their organisations in a way that is legally compliant, ethically robust and engenders stakeholder trust. If you’d like to discuss any aspect of your AI governance or compliance programme, get in touch with us.