Back to basics: Roles within the AI Value Chain under the EU AI Act

Explore the key roles in the AI value chain under the EU AI Act – Provider, Deployer, Importer, and more – and understand the compliance duties each carries, especially for high-risk systems. Clarity on your role is essential for effective governance.

In the second instalment of our series on the EU AI Act (Act), we offer a brief overview of the different categories of stakeholder within the AI ecosystem (also known as the AI value chain) and the key compliance obligations that organisations in these roles should be aware of. The Act outlines a number of roles, including Providers, Deployers, Importers and Distributors (among others). Defining your role can be complex, so it is vital that organisations take appropriate advice to understand where they fall within this value chain for any particular AI system – and the role (or roles) they may assume under the Act – in order to map their obligations and take steps to demonstrate effective compliance.

As many of the obligations under the Act are determined by the level of risk associated with the AI system or model in question (with the most stringent requirements reserved for technologies deemed to pose ‘high’ or ‘systemic’ risk), this article gives an indication of what organisations may expect within the context of a high-risk AI system.

Which roles exist under the Act?

Providers

The Act defines a ‘Provider’ as follows:

‘a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge’.

In other words, these are the entities ultimately responsible for creating the AI system in question. Accordingly, they bear the largest compliance burden under the Act. In the context of high-risk AI systems, Providers have a wide range of responsibilities and must be cognisant of the Act from the initial stages of development of their product and throughout its lifecycle. For example, at the outset of the development cycle, Providers need to build and implement an appropriate risk management system. They must also ensure that training data meets strict quality requirements, and prevent, or at the very least mitigate, any propagation of bias within their AI system. These are just a few examples of the prescriptive requirements Providers must adhere to when developing their technology.

Deployers

The Act defines a ‘Deployer’ as follows:

‘a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity’.

These are the entities that use AI systems under their own authority. For example, a company using a third-party AI system to screen and filter job applications as part of its recruitment processes would be a Deployer of that system.

As Deployers are not themselves responsible for developing the AI system in question, they are subject to lesser compliance obligations. Nonetheless, Deployers play a critical role in operationalising AI responsibly and ethically.

For example, in relation to high-risk AI systems specifically, Deployers are charged with monitoring the system’s operation and reporting to the Provider any use ‘presenting a risk’ (within the meaning of the relevant EU legislation1). Deployers must also ensure that any high-risk AI system they use is subject to appropriate human oversight. In practice, this means the Deployer will need to ensure its employees are sufficiently well trained to interpret and interrogate the output of any high-risk AI system.

Deployers should further be aware that there is a fine line between the roles of Provider and Deployer, and accordingly, their role has the potential to morph into that of a Provider if certain thresholds are reached. One example of when an organisation may assume the additional, more onerous responsibilities of a Provider under the Act is if it makes a substantial modification to the AI system. Organisations should also be aware that the roles of Provider and Deployer are not mutually exclusive – meaning that if an organisation first modifies, and then uses, the AI system, it will be considered both a Provider and a Deployer under the Act.

Other roles defined within the Act

Organisations should also be aware of the additional roles within the value chain: Importer, Distributor and Authorised Representative. Each of these stakeholders plays an important part in fulfilling the aims of the legislation. Importers are EU-based entities that bring AI systems from outside the EU onto the EU market. Importers may act as an important compliance safeguard, ensuring that technical documentation has been prepared and that an EU declaration of conformity is present in the case of high-risk AI systems.

Distributors are a further vehicle by which AI systems may be made available within the EU. These entities also perform a compliance function, ensuring that declarations of conformity are present before distributing a system and cooperating with authorities to the extent required. Authorised Representatives act on behalf of Providers of high-risk AI systems where those Providers are established in a third country. They retain robust records in relation to those systems and, crucially, serve as the main point of contact for EU authorities.

Why does this matter?

Organisations that inadvertently misidentify their role within the AI value chain face increased regulatory risk due to gaps in accountability and unfulfilled obligations. Understanding these roles is therefore more than semantics – it is the foundation of an effective compliance strategy. As outlined in our previous article (GPAI Models: What you need to know), preparation, robust assessments and record keeping are the cornerstones of effective AI governance. Ensure you are clear about which role you play within the AI value chain, and how you will meet the corresponding obligations under the Act.

Look out for further instalments in this series, where we’ll continue to delve into the Act, taking a closer look at concepts such as risk, compliance and governance. 

 

1 According to the Act, AI systems ‘presenting a risk’ shall be understood to be ‘products presenting a risk’. See further Article 3, point 19 of Regulation (EU) 2019/1020.

Disclaimer

This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2025.

 
