Unintended Consequences of Artificial Intelligence (AI) in Employment

AI is quickly transforming the workplace by streamlining processes and enhancing productivity. While the benefits of AI are widely publicised, the legal and practical consequences of AI in the employment context are beginning to emerge.

The Transfer of Undertakings (Protection of Employment) Regulations 2006 (TUPE) 

Of course, the TUPE regulations were not designed with AI in mind. In its simplest form, TUPE is a set of rules designed to protect employees’ rights when a business or service is transferred or outsourced.

One of the tests in determining whether TUPE applies to a transfer of service is whether the activities before and after the transfer are “fundamentally the same”. This has the potential to muddy the waters where AI is introduced to replace or supplement jobs.

For example, imagine a company that currently employs a team of customer service agents to handle customer queries and complaints. To cut costs and improve response times, the company decides to outsource this function to a cloud-based AI helpdesk platform. This platform uses automated responses and generative AI to resolve most issues without human intervention. In this situation, it is arguable that TUPE does not apply as the services are not fundamentally the same.

If AI merely supplements a job rather than completely replacing it, for example where AI is introduced in hospitals to analyse medical records and make suggestions for diagnosis and treatment which are then acted upon by medical professionals, then TUPE is likely to still apply.

As businesses increasingly outsource services to AI platforms or acquire tech-savvy service providers, questions frequently arise about whether TUPE applies. This is likely to lead to uncertainty and increased disputes in outsourcing arrangements, mergers and acquisitions.

Employment tribunal proceedings

AI is not just a tool for employers. Increasingly, Claimants are using AI to draft tribunal claims, prepare correspondence to Respondents and the Tribunal, draft witness statements, or even prepare cross-examination questions. While AI can offer assistance to litigants in person in some cases, it also raises concerns about accuracy, confidentiality and procedural fairness.

It is well known that AI can generate incorrect information, including fabricated legal principles and cases. We have seen this first-hand: an Employment Appeal Tribunal Judge recently stated in an order that a case name cited by a Claimant in an application did not exist and was likely generated by ChatGPT.

If used carefully, AI could support litigants in person in formulating and organising their own ideas into more coherent and persuasive language. However, if not checked carefully, it could create misleading legal arguments and jeopardise the fairness of proceedings.

Due to the speed at which AI can generate content, many Respondents are also noticing a much greater volume of correspondence from Claimants, with some Claimants sending lengthy emails on an almost daily basis – often showing the typical characteristics of AI-generated text. Even where this correspondence lacks substantive merit, Respondents will often have to spend time engaging with it, adding to the administrative burden of running the claim and potentially increasing legal costs where solicitors are instructed. In the absence of clear guidance on the use of AI from the Tribunals, it is likely that we will see increasing use of AI by Claimants.

Corporate transactions

AI is becoming increasingly embedded in business operations and this raises new due diligence challenges in merger and acquisition transactions.

As well as assessing the legal status of employees and overcoming the difficulties around the application of TUPE discussed above, buyers must now also conduct due diligence around AI systems being used by the business they are acquiring.

For example, they will need to assess whether those systems are GDPR compliant (for example, do they comply with GDPR Article 22, which imposes limits on fully automated decision-making?), and whether they have been trained on biased data and therefore pose a risk of discrimination claims from workers or employees.

There is also a risk that the buyer may inherit liability for decisions made by AI before the acquisition, for example, if AI is used to form redundancy pools and the buyer later makes redundancies based on this process.

Bias in AI

Many businesses are using AI tools in an attempt to remove human bias, and there is increasing pressure in the market on recruiters and talent acquisition teams to implement AI in their hiring processes. However, they should be cautious in doing so, as many AI systems are trained on biased data. Instead of removing human bias, AI may replicate or even amplify it, posing a significant risk of discrimination claims for employers.

For example, a CV screening tool which penalises candidates for gaps in employment may disproportionately affect women and carers, giving rise to potential claims for indirect discrimination.

Bias is not just a problem in recruitment. Any AI which carries out employment functions poses a risk of discrimination. For example, if AI is used to automatically schedule shift assignments, this could save time and cost but if, over time, the system routinely assigns late shifts to the same staff who happen to be mothers with childcare responsibilities, this again could lead to claims for indirect discrimination. To mitigate this risk, employers should conduct periodic audits of AI outputs.

The potential legal implications are profound. Under the Equality Act 2010, employers can be liable for discrimination even where it is carried out by AI. Under the GDPR rules, employers using AI must ensure that they have obtained the necessary consent from those whose data they are processing, or that they can rely on another lawful basis for processing. In addition, solely automated decision-making must be subject to a right of human review.

Employers must ensure that they know how their AI tools work, maintain transparency regarding their use, and implement regular testing to identify and mitigate bias. As legislation and regulatory frameworks evolve, new guidance is likely to emerge, but given the rapid pace of AI development it is doubtful that regulation will keep pace with new developments. This places greater responsibility on businesses to proactively manage the risks arising from their use of AI.

Conclusion

While the advantages of AI are clear, the possible unintended consequences must be appreciated. AI can be an excellent tool if it is adopted in a way which aligns with ethical standards and legal obligations but, as it continues to evolve, businesses, HR professionals and legal advisers alike must remain vigilant and proactive in identifying and mitigating these risks.

Disclaimer

This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2025.
