EU AI ACT

Article by Maxime Bertomeu-Savalle, Katrien Baetens and Alexia Kaztaridou from Linklaters, as published in Insight/Out magazine #31

On 1 August 2024, the eagerly anticipated EU Artificial Intelligence Act (the “AI Act”) officially came into force.

The AI Act introduces a regulatory framework where stringency escalates with perceived risk levels. Under this framework, AI systems deemed to carry an “unacceptable risk” are strictly prohibited, while AI systems classified as “high-risk” are subject to substantial regulatory obligations. Additionally, the AI Act introduces specific regulations for general-purpose artificial intelligence (“GPAI”) through a tiered approach, imposing additional obligations on GPAI models with “systemic risks”. While most AI systems will fall under limited regulation, it is crucial to accurately identify which ones are subject to the strictest regulatory tiers.

The implementation of the AI Act will be phased over a three-year period. Most provisions will apply from 2 August 2026. However, the prohibitions on AI systems deemed to carry an “unacceptable risk” will apply from 2 February 2025, six months after the entry into force of the AI Act, while the rules for GPAI models will apply from 2 August 2025, twelve months after entry into force. The rules for high-risk AI systems used as safety components of products will apply from 2 August 2027.

Material scope

The material scope of the AI Act is based on the OECD’s definition of an AI system, which can cover a wide range of artificial intelligence techniques and approaches (e.g., machine or deep learning and neural networks). The European Commission is left room to complement the AI Act through delegated or implementing acts to account for technological advancements and new techniques. That being said, it remains practically difficult to determine with absolute certainty which systems fall within the AI Act’s definition. There is likely to be a presumption that systems using underlying techniques commonly accepted as “AI”, such as machine or deep learning, meet the definition of “AI systems”.

Territorial scope

Territorially, the scope of the AI Act is exceptionally broad and, similarly to the GDPR, is expected to impact organisations globally. Not only does it extend to organisations outside the EU that place AI products on the EU market or put them into service in the EU, but it also applies, with specific exceptions, to outputs delivered by AI products which are used by individuals within the EU. In particular, the AI Act applies to:

- providers, wherever located, placing AI systems or GPAI models on the EU market or putting AI systems into service in the EU;
- deployers of AI systems established or located within the EU;
- providers and deployers of AI systems, wherever located, where the output produced is used in the EU;
- importers and distributors of AI systems;
- product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
- authorised representatives of non-EU providers; and
- any person within the EU affected by AI systems covered by the AI Act.

Risk-based approach

The AI Act follows a risk-based approach with obligations set forth depending on the level of risk the relevant AI system can generate (unacceptable, high or limited).

AI systems deemed to pose unacceptable risks of violating EU fundamental rights and values are prohibited. The list of prohibited AI systems is short, focusing on AI systems used to manipulate or exploit individuals’ vulnerabilities to cause harm, for intrusive or discriminatory purposes, for social scoring, or for remote biometric identification. However, these practices are unlikely to be relevant to most organisations.

AI systems whose intended purpose can adversely impact people’s health, safety or fundamental rights are classified as high-risk and subject to significant obligations. The AI Act lists the specific areas and sectors concerned, including the employment context and the financial sector.

AI systems that pose limited risks may be either subject to transparency obligations under the AI Act (e.g., users must be made aware that they are interacting with an AI system or that image, audio or video content has been generated by an AI system, and employees must be informed that their employer deploys AI systems in the workplace) or not subject to any obligations beyond currently applicable legislation (e.g., the GDPR).

Impact of the AI Act on the financial sector

The AI Act was adopted at a time when, as a recent survey by the CSSF suggests, the use of AI in the Luxembourg financial sector was still at an early stage, but investments in technology, especially in machine learning, were soaring, promising a revolution in the industry. The AI Act aims to help the industry embrace this challenge by promoting the development and use of safe and trustworthy AI.

It was also developed to apply to a wide array of businesses and does not specifically target the financial sector. The fact remains that it will affect the industry depending on the AI applications used and their categorisation as detailed above. For its part, the European Commission is preparing guidelines for the financial sector on the implementation of the AI Act and the high-risk classification in its specific market areas. Many AI systems used in the financial sector, whose services may be regarded as essential in the EU, are expected to fall under the “high-risk” use case category and will therefore be subject to stricter rules and standards, such as maintaining data quality, human oversight, transparency, and risk assessment and management. In addition, it is likely that most companies investing in AI systems in Luxembourg will be subject to the obligations of “deployers” of AI systems under the AI Act.

A number of relevant AI systems have already been categorised as “high-risk” in Annex III to the AI Act, including systems assessing the creditworthiness of natural persons and systems used for risk assessment and pricing in relation to life and health insurance. The same applies to systems used in employment, workers’ management and access to self-employment (e.g., for screening and candidate assessments, decision-making affecting working conditions, task assignment based on individual behaviour or personal traits, and monitoring and assessing the performance and conduct of employees).

Finally, in addition to the AI Act, existing legislation (the GDPR, the ePrivacy Directive, DORA, the NIS Directive and, soon, NIS 2) may apply to AI systems used in the financial sector.

Five key steps to prepare for the AI Act

In practice, chances are that many of the largest organisations have AI systems in place that are subject to the AI Act. Determining with certainty which systems are affected is critical, as compliance costs for high-risk systems, and the related sanctions, can be significant.

Although the obligations under the AI Act do not have any immediate effect, it is crucial to begin preparations, which will vary by organisation. There are five key steps most organisations will need to follow:

1- Identify AI systems: Catalogue any software and hardware products used within (or provided by) your organisation and determine whether any of them qualifies as an “AI system”;

2- Assess the territorial scope: Verify whether products identified as AI systems fall under the broad territorial scope outlined in the AI Act;

3- Determine the regulatory tier: Ascertain, for AI systems potentially subject to the AI Act, which regulatory tier applies to each product;

4- Understand your obligations: Identify your organisational role (as provider, deployer, etc.) in relation to high-risk AI systems, bearing in mind that obligations will vary significantly depending on your role; and

5- Develop your compliance action plan: Develop and implement a plan to meet these obligations and integrate this plan into your broader digital regulation compliance framework, considering AI legislation from other relevant jurisdictions.