The use of artificial intelligence (AI) is ever increasing. Thus far, the use of AI has not been regulated in the UK by dedicated legislation. However, UK businesses may still be affected by legislation passed by the European Union (EU) on this topic.
The EU Artificial Intelligence Act (AI Act) affects certain UK businesses by imposing compliance obligations for the EU market. In this article, we provide a brief overview of compliance obligations UK businesses may face when providing or deploying AI systems for or in the EU or EEA.
The European AI Act
Until now, AI systems in the EU were subject only to generally applicable rules stemming from other areas of law, such as data protection law. The EU is the first jurisdiction worldwide to create a specific legal regime for AI. The official title is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence. It is, however, commonly referred to simply as the AI Act.
- As a regulation, the AI Act is directly applicable in all EU Member States without the need for further national implementing laws.
- The EU AI Act entered into force on 1 August 2024 and will, in principle, start applying from 2 August 2026.
- Certain provisions which may be applicable to UK businesses will start applying earlier, e.g. the provisions on prohibited systems will apply from 2 February 2025.
- Certain obligations for high-risk systems will start applying only three years after the entry into force of the AI Act, i.e. from 2 August 2027.
Applicability of the AI Act to UK Businesses
The AI Act applies to AI systems being placed on the market or put into service, or general-purpose AI models being placed on the market, in the EU or European Economic Area (EEA), regardless of whether the providers are established or located in the EU or EEA or in a third country, such as the UK.
The AI Act will also apply to deployers located in third countries, such as the UK, if they use an AI system whose output, including predictions, content, recommendations, or decisions, is used in the EU or EEA.
UK businesses classed as providers or deployers (in certain instances) in terms of the AI Act will therefore be obliged to comply with the AI Act, even if they have no location or establishment in the EU or EEA.
Roles covered by the AI Act
How to identify whether your business is classed as a provider or deployer in terms of the AI Act:
Providers
The most heavily regulated roles under the AI Act are providers of AI systems. UK companies will be classed as providers in terms of the AI Act
- if they develop an AI system or a general-purpose AI model or have an AI system or a general-purpose AI model developed
- and place it on the market in the EU/EEA (defined as the first making available of an AI system or a general-purpose AI model on the EU or EEA market)
- or put the AI system into service in the EU/EEA (defined as the supply of an AI system for first use directly to the deployer or for own use in the EU or EEA) under their own name or trademark, whether for payment or free of charge.
Deployers
UK businesses will be termed a deployer in terms of the AI Act and be subject to the AI Act obligations if they use an AI system under their authority and the output thereof is used in the EU or EEA.
For example, a UK company buys a licence for an AI system that creates images and videos. These images and videos are then used for a marketing campaign on behalf of the UK company in the EU.
Further roles
Further roles covered by the AI Act include, e.g., importers and the authorised representatives of providers of high-risk AI systems that are not established in the EU, but the examination of these is beyond the scope of this article.
Subject matter of the AI Act
Aside from falling within a specified role, the subject matter and scope of the AI Act is important for determining whether your business is affected.
The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
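As a rough, non-authoritative illustration of how a compliance team might screen systems against this definition, consider the following sketch. The field and function names are our own assumptions, not statutory terms, and a positive result is only a prompt for proper legal review:

```python
from dataclasses import dataclass

# Hypothetical, simplified screen mirroring the definitional elements quoted
# above. Note that adaptiveness ("may exhibit") is optional under the
# definition and is therefore not checked here.
@dataclass
class SystemProfile:
    machine_based: bool            # runs as hardware/software
    operates_with_autonomy: bool   # some independence from human control
    infers_from_input: bool        # derives outputs from the input it receives
    generates_outputs: bool        # predictions, content, recommendations, decisions
    influences_environments: bool  # outputs can affect physical or virtual environments

def may_be_ai_system(profile: SystemProfile) -> bool:
    """Rough first-pass screen; a True result calls for proper legal review."""
    return all([profile.machine_based,
                profile.operates_with_autonomy,
                profile.infers_from_input,
                profile.generates_outputs,
                profile.influences_environments])

print(may_be_ai_system(SystemProfile(True, True, True, True, True)))  # True
```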
AI systems are divided into several categories by the AI Act, with a set of specific rules for each category. AI systems that do not fall under any of these categories are outside the scope of EU regulation and hence not subject to any specific rules.
UK businesses thus need to confirm both their role under the AI Act, and whether the category of AI system provided or deployed is affected.
Categories of AI Systems under the AI Act
Following a risk-based approach, the AI Act categorises AI systems into four risk categories:
- unacceptable risk (prohibited AI practices),
- high risk (high-risk AI systems),
- limited risk (AI systems intended to interact with individuals), and
- minimal and/or no risk (all other AI systems that are outside the scope of the AI Act).
Lastly, specific rules are established for general-purpose AI models.
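To make the tiering concrete, here is a simplified, illustrative triage sketch. The predicate names are our own shorthand and deliberately gloss over the detailed legal tests for each tier:

```python
from enum import Enum

# Illustrative only: the four risk tiers of the AI Act as an enum.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited AI practice"
    HIGH = "high-risk AI system"
    LIMITED = "transparency obligations apply"
    MINIMAL = "outside the scope of the AI Act"

def triage(is_prohibited_practice: bool,
           is_annex_i_or_iii_system: bool,
           interacts_with_individuals: bool) -> RiskTier:
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_annex_i_or_iii_system:
        return RiskTier.HIGH
    if interacts_with_individuals:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# E.g. a customer-service chatbot that is neither prohibited nor listed
# in Annex I or III:
print(triage(False, False, True))  # RiskTier.LIMITED
```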
Prohibited AI systems
Certain AI-based practices will be prohibited in the EU in their entirety. The AI Act enumerates AI systems which contravene European values, for instance by violating fundamental rights, and pose an unacceptable risk to the affected individuals.
For example, the following AI systems are prohibited from 2 February 2025:
- AI systems used for the purpose of social scoring,
- AI systems used for the purpose of cognitive behavioural manipulation,
- Real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, whereby certain exceptions apply, such as for targeted searches for specific potential victims of crime,
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (likely an answer to the practices of Clearview AI),
- AI systems for emotion recognition in the workplace and in education institutions.
Emotion recognition is defined as an AI system “for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement.” Physical states, such as fatigue or pain, are excluded, as is the “mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions”. It is noteworthy that the prohibited status of such an AI system hinges on the context of its deployment.
UK businesses currently supplying or intending to supply an AI system to the EU market should ensure their AI system does not fall under the prohibited AI systems list. In certain cases this is easily done: simply restrict the use cases and do not allow the system to be used in the workplace or in education institutions.
Providers of prohibited AI systems face penalties of up to EUR 35 million or 7 % of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for breaches of the rules on prohibited AI practices. This exceeds the maximum fines under the GDPR.
High-risk AI systems
Definition of high-risk AI systems
Most of the provisions of the AI Act pertain to AI systems that create a high risk to the health and safety or fundamental rights of natural persons (so-called high-risk AI systems). They are divided into two categories:
- The first category covers AI systems intended to be used as safety components of products, or that are themselves products, which, according to the EU legal acts listed in Annex I to the AI Act, are required to undergo a third-party conformity assessment. This category covers AI systems used as safety components in medical devices, lifts, certain vehicles and aircraft, among others.
- The second category covers stand-alone AI systems with fundamental rights implications. The list of such AI systems is provided in Annex III of the AI Act and is divided according to specific areas of deployment and certain categories of data used. For example:
- AI systems in the area of employment, workers’ management and access to self-employment, which are intended to be used to make decisions affecting terms of work-related relationships.
- AI systems in the area of education and vocational training, which are intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels.
- AI systems using biometrics for emotion recognition.
Note: Certain AI systems may fall within an exception (e.g., if they merely perform a narrow procedural or a merely preparatory task, or if the system is intended to improve the result of a previously completed human activity). A UK business providing such an AI system will need to document that the exception applies, and the AI system nonetheless needs to be registered in the EU database for high-risk AI systems listed in Annex III.
Obligations for High-risk AI system providers
These high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. Among other obligations, providers of high-risk AI systems need to establish a quality management system that shall ensure compliance with the AI Act, and a risk management system covering the entire lifecycle of a high-risk AI system. Furthermore, the AI Act requires them to draw up detailed technical documentation on the AI system.
If data is used to train the model, the data sets used for training, validation and testing need to comply with the requirements set forth in Art. 10 of the AI Act.
There are also certain technical requirements for high-risk AI systems. For example, they have to generate logs while in operation, thereby ensuring the traceability of the system’s functioning.
High-risk AI systems shall be developed in such a way that they can be effectively overseen by natural persons when in use. Among other things, this includes providing a stop button or a similar procedure by way of which the AI system can be safely stopped. Furthermore, high-risk AI systems shall be designed and developed in a way that ensures their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
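The AI Act does not prescribe a concrete technical implementation for these requirements. Purely as an illustrative sketch, a provider might combine structured operation logs with an operator-controlled stop flag along the following lines; the logger name, file name and log fields are our own assumptions:

```python
import json
import logging
import threading
import time

# Append-only audit log of the system's operation (one JSON record per line).
logging.basicConfig(filename="ai_system_audit.jsonl",
                    level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

# A human overseer sets this event (the "stop button") to halt the system safely.
stop_requested = threading.Event()

def run_inference_loop(model, inputs):
    for item in inputs:
        if stop_requested.is_set():
            logger.info(json.dumps({"ts": time.time(),
                                    "event": "stopped_by_operator"}))
            break
        output = model(item)
        # One log record per operation, supporting traceability of the
        # system's functioning while it is in operation.
        logger.info(json.dumps({"ts": time.time(), "event": "inference",
                                "output": repr(output)}))
```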
Especially relevant for businesses in the UK: If the provider is not established in the EU and directly provides its high-risk AI system to the EU market, it will be obliged to appoint an authorised representative in the EU.
AI systems intended to interact with individuals – transparency obligations
For the third category of AI systems (systems that interact with individuals) the AI Act introduces certain transparency obligations. In particular, this concerns four types of systems:
- The providers of AI systems intended to interact with individuals, e.g. AI-based chatbots, shall ensure that persons using such systems are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
- Providers of AI systems that create synthetic audio, image, video or text content shall ensure that the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated, unless an exception applies (for instance, if the AI system only performs an assistive function for standard editing or does not substantially alter the input data or the semantics thereof). An illustrative sketch of such machine-readable marking follows after this list.
- Deployers of an emotion recognition system or a biometric categorisation system shall inform the affected individuals of the operation of the system.
- Deployers of an AI system that creates so-called deep fakes shall disclose that the content has been artificially generated or manipulated.
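The AI Act does not mandate one specific marking technique. As a minimal sketch only, a provider of an image generator could embed a marker in the image’s metadata, for example via Pillow’s PNG text chunks; the marker key and values below are our own assumptions, and established provenance standards such as C2PA may be more appropriate in practice:

```python
from PIL import Image  # Pillow
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable 'AI generated' marker in a PNG's metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")           # hypothetical marker key
    meta.add_text("generator", "example-model-v1")  # hypothetical provenance note
    image.save(dst_path, pnginfo=meta)

# Usage (paths are placeholders):
# mark_as_ai_generated("output.png", "output_marked.png")
```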
If an AI system in this category also fulfils the criteria for a high-risk AI system, the requirements imposed on high-risk systems have to be fulfilled in addition to the transparency obligations mentioned in this section.
UK companies providing chatbots or using AI systems for the creation of so-called deepfakes, and making the use of the AI systems or their output, as applicable, available in the EU or EEA, will therefore be caught by these obligations.
Obligations for all AI systems – AI literacy
All providers and deployers of AI systems are obliged to take appropriate measures to ensure a sufficient level of AI literacy of their staff. In doing so, they have to take into account the staff’s technical knowledge, experience, education and training, the context in which the AI systems are going to be used, and the groups of persons on whom the AI systems are to be used. Importantly, this obligation applies to all providers and deployers of AI systems, even if their AI systems do not fall within one of the risk categories regulated by the AI Act.
UK companies providing or deploying AI systems affected by the AI Act are thus encouraged to begin thinking about how to provide their staff with the required AI literacy.
Penalties under the AI Act
Penalties under the AI Act are capped at EUR 35 million or 7 % of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for breaches of the rules on prohibited AI practices; EUR 15 million or 3 % of the company’s turnover for other violations; and EUR 7.5 million or 1 % of the company’s turnover for the supply of incorrect information to the authorities.
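For companies, the applicable maximum in each tier is therefore the higher of the fixed amount and the turnover-based percentage. A short worked example (the tier labels are our own shorthand):

```python
# Fine caps per tier: (fixed amount in EUR, share of worldwide annual turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """For companies, the applicable cap is the higher of the two values."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover breaching the prohibition rules:
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0 (7 % > EUR 35m)
```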
How should UK companies proceed?
UK companies are well advised to start investigating whether they fall under the provisions of the AI Act, and if so, to start preparing for compliance with its provisions as early as possible.
In particular, this holds true for providers of high-risk AI systems. The AI Act not only requires such companies to adopt extensive governance structures and prepare appropriate documentation, but will also likely result in the need to modify their AI systems (e.g., to have them produce logs or to integrate a stop button). Once the AI system is compliant with the AI Act, a conformity assessment will have to be conducted as well.
While two years until the start of enforcement of the AI Act may seem like a long period of time, the requirements under the AI Act are substantial, and past experience with the EU GDPR has demonstrated that companies starting only a few months before the rules become applicable will likely have a hard time achieving compliance in time.