While there are many positives to AI adoption, unauthorised use, known as shadow AI, poses significant risks for employers. Here's how to mitigate them.
The emergence of new AI technology
The rapid advancement of artificial intelligence (AI) technology has made it more accessible than ever before. It is evolving at an unprecedented pace, with new tools and applications emerging regularly. While these tools are designed to enhance efficiency and accuracy, their widespread availability means that employees can often access and use them without the knowledge or approval of their employers.
In addition, the integration of AI into everyday applications has made it easier for employees to leverage these tools for various tasks. From AI-powered document checking software to generative AI assistants, the range of available tools is continually expanding. This accessibility is further amplified by the fact that many AI tools are available as cloud-based services, allowing employees to use them from anywhere, at any time.
This increasing accessibility underscores the need for employers to stay vigilant regarding AI usage within their business.
Risks of unknown or unregulated AI usage
Unknown or unregulated use of AI can lead to several risks:
- inconsistent and inappropriate use: without proper oversight, employees may use AI in ways that are not aligned with company policies or objectives
- inaccuracy: unauthorised AI tools may produce inaccurate results, leading to poor decision-making and potential harm to the business
- cybersecurity threats: the use of unapproved AI can expose the company to cybersecurity risks, as these tools may not adhere to the company's security protocols
- data disclosure risks: employees may inadvertently upload company data to unauthorised AI platforms, risking the disclosure of sensitive information outside the business. Sharing company information with unlicensed, non-corporate versions of AI tools can lead to significant security and confidentiality breaches
The need for a robust AI framework
To mitigate these risks, employers should develop a framework for AI usage within their business. This framework should include:
- clear policies: establishing clear policies on when and how AI can be used. These policies should be communicated effectively to all employees
- training: providing training to employees on the proper use of AI, ensuring they understand the potential risks and benefits
- enforcement: implementing mechanisms to ensure compliance with established policies
By implementing a robust AI framework, employers can ensure that AI is used responsibly and effectively, minimising risks and maximising benefits.
Implementing an AI workplace policy
An AI policy should generally include the following components:
- governance and accountability: outline any general prohibitions or limits on the use of AI technology; specify permitted AI products; and define roles and responsibilities for AI implementation and oversight
- legal compliance: ensure that AI use complies with relevant laws and regulations, such as data protection laws
- data management, privacy, and information security: implement measures to protect the data used and generated by AI systems, including conducting Data Protection Impact Assessments (DPIAs) and ensuring compliance with data protection regulations
- ethics, transparency, and bias mitigation: promote the ethical use of AI; implement strategies to identify and mitigate biases; and ensure fairness and transparency in AI systems
- training and awareness: provide training to employees on the responsible use of AI and implement awareness programmes to keep employees informed about the employer's AI policy and any updates to it
- monitoring and enforcement: establish mechanisms to monitor AI usage and enforce compliance with the employer’s AI policy
- regular review: review and update the AI policy to reflect developments in AI technology and regulatory changes
By incorporating these components, an AI policy can help ensure that employees use AI technologies responsibly and in compliance with relevant regulations, thereby safeguarding the business.
Conclusion
As AI technology continues to evolve and become more accessible, it is crucial for employers to be alive to the issues and risks it raises while also maximising its benefits.
By ensuring that employees use only approved AI products, and by implementing strong AI governance practices, businesses can safeguard their organisations against the potential pitfalls of shadow AI and harness AI's full potential in a controlled and secure manner.
Disclaimer
This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2025.