The new European rules on artificial intelligence


The European Union aims to be a world leader in the development of safe, reliable and ethical artificial intelligence (Recital 5).

In April 2021, the Commission published the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

We analyse its most relevant aspects below.

The proposed regulation pursues the following specific goals:

  • ensure that AI systems placed on the Union market and used are safe and comply with existing law on fundamental rights and Union values;
  • ensure legal certainty to facilitate investment and innovation in artificial intelligence;
  • improve governance and effective enforcement of existing legislation on fundamental rights and security requirements applicable to AI systems;
  • facilitate the development of a single market for licit, secure and reliable AI applications and prevent market fragmentation.

In order to achieve these objectives, the Union has once again chosen a regulation as the legal instrument, combined with a so-called 'risk-based' approach.

The choice of regulation

The use of a regulation rather than a directive ensures greater uniformity of application within the European Union, facilitating the development of a single market where AI systems can circulate freely. This is a choice that the European Union has made frequently in recent times in strategic or particularly relevant areas (think of EU Reg. 679/2016 on the processing of personal data, EU Reg. 745/2017 and 746/2017 on medical devices and in vitro medical devices or EU Reg. 536/2014 on clinical trials of medicinal products for human use).

The risk-based approach

The Proposal for a Regulation does not seek to dictate a 'locked-in' system of standards, which would risk hampering technological development in an evolving field and thus the evolution of the related market; the Union has therefore opted for a proportionate, risk-based regulatory approach. The risk will have to be assessed not only during the design and creation of the AI system but also after it has been placed on the market. Again, this is an approach the Union has already adopted several times, for example in the GDPR.

The definition of AI

The definition of an AI system itself is quite broad:

Art. 3 (Definitions)

  • 'artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

In fact, it encompasses any software that, for human-defined purposes, is able to generate outputs capable of influencing the environments, and hence also the people, with which it interacts.

The proposed definition is flexible enough not to hinder future technological developments in AI systems and at the same time to cover such future developments; it also covers software designed to operate with varying levels of autonomy and to be used as a stand-alone element or as a component of a product, irrespective of whether the system is physically embedded in the product (integrated) or assists the functionality of the product without being embedded in it (non-integrated).

The definition of provider

Also interesting is the definition of provider.

Art. 3 (Definitions)

  • 'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;

It is clear from this definition that the Proposal for a Regulation is addressed both to private entities and to public authorities that place an AI system on the market under their own name or trademark.

The Scope of Application

The scope of the new rules, which concern the placing on the market, putting into service and use of AI systems, is also wide.

Pursuant to Article 2 (Scope)

This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) users of AI systems located within the Union;

(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.

To summarise, whenever an AI system is placed on the Union market, is used by users within the Union, or generates output that is used within the Union, European law applies.
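The territorial-scope test of Article 2 can be sketched as a simple decision rule. The following Python fragment is purely illustrative; the data structure, field names and helper function are our own invention, not part of any official text or API.

```python
from dataclasses import dataclass

@dataclass
class AISystemDeployment:
    """Hypothetical description of where a system and its actors sit."""
    provider_in_union: bool       # is the provider established in the EU?
    placed_on_union_market: bool  # placed on the market / put into service in the EU?
    users_in_union: bool          # are users located within the EU?
    output_used_in_union: bool    # is the system's output used in the EU?

def regulation_applies(d: AISystemDeployment) -> bool:
    """Art. 2(1)(a)-(c): the Regulation applies if the system is placed on
    the Union market, used by users in the Union, or its output is used in
    the Union - regardless of where the provider is established."""
    return (
        d.placed_on_union_market   # (a) irrespective of provider's location
        or d.users_in_union        # (b)
        or d.output_used_in_union  # (c)
    )

# A third-country provider whose system's output is nonetheless used in the Union:
example = AISystemDeployment(
    provider_in_union=False,
    placed_on_union_market=False,
    users_in_union=False,
    output_used_in_union=True,
)
print(regulation_applies(example))  # True
```

Note how the provider's own place of establishment never enters the test: any of the three connecting factors alone is enough to bring the system within scope.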

Prohibited, high-risk, low-risk AI systems

Following the risk-based approach, the Proposed Regulation differentiates between AI systems according to whether they entail (i) unacceptable risk; (ii) high risk; (iii) low or minimal risk.

Systems that result in an unacceptable risk (e.g. because they violate fundamental rights) are prohibited and a list can be found in Title II (Prohibited Artificial Intelligence Practices) Art. 5.

Low-risk systems are essentially unregulated, but providers are encouraged (Title IX) to adopt codes of conduct.

On the other hand, high-risk systems are regulated in detail.
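The three-tier scheme just described can be summarised as a classification by intended purpose. This is a toy sketch under our own assumptions: the enum, function and example purpose strings are invented for illustration and do not reproduce the actual Art. 5 or Annex III lists.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Title II, Art. 5)"
    HIGH = "high-risk (Annex III)"
    LOW = "low or minimal risk (codes of conduct encouraged, Title IX)"

def classify(intended_purpose: str,
             prohibited_practices: set[str],
             annex_iii_purposes: set[str]) -> RiskTier:
    # Classification turns on the system's intended purpose,
    # not merely on the technique used to build it.
    if intended_purpose in prohibited_practices:
        return RiskTier.UNACCEPTABLE
    if intended_purpose in annex_iii_purposes:
        return RiskTier.HIGH
    return RiskTier.LOW

# Invented example lists, loosely echoing categories discussed in the Proposal:
prohibited = {"social scoring by public authorities"}
high_risk = {"CV screening for recruitment"}

tier = classify("CV screening for recruitment", prohibited, high_risk)
print(tier.value)  # high-risk (Annex III)
```

The point of the sketch is the ordering of the checks: a purpose caught by the prohibitions never reaches the high-risk analysis, and anything caught by neither list falls into the residual low-risk tier.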

High-risk systems

The classification of an AI system as high-risk is based on its intended purpose, in line with existing EU product safety legislation. Consequently, classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose for which, and the manner in which, that system is used (Explanatory Memorandum, section 5.2.3).

Annex III to the Proposal already contains a list of systems classified as high risk, and it is envisaged that the Commission may extend this list by applying a set of criteria and a risk assessment methodology.

Chapter 2 of Title III defines the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security.

Conformity assessment and certification

A system of conformity assessment (Art. 48) and CE marking (Art. 49) is provided for, which differs according to whether the AI systems are:

  1. intended to be used as safety components of products regulated under the new regulatory framework (e.g. machinery, toys, medical devices, etc.), in which case they will be subject to the same ex ante and ex post compliance and enforcement mechanisms as the products of which they are a component. The key difference lies in the fact that the ex ante and ex post mechanisms will ensure compliance not only with the requirements set out in the sectoral legislation, but also with those set out in the proposed AI Regulation.
  2. stand-alone, in which case the Proposal provides for an internal ex ante and ex post evaluation system by providers, who will then issue the declaration of conformity. An exception is made for remote biometric identification systems, which would be subject to conformity assessment by notified bodies.

The post-market monitoring system

Providers are required to establish a post-market monitoring plan that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. The monitoring system must be capable of actively and systematically collecting, documenting, and analysing relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifecycle, and must enable the provider to assess the continued compliance of AI systems with regulatory requirements.

The EU database of high-risk systems

A European database of high-risk systems is to be set up, in which providers enter the information listed in Annex VIII. The data entered in the database will be accessible to the public.

Measures to support innovation

With a view to fostering technological development, Article 53 provides for AI regulatory sandboxes, established by one or more competent authorities of the Member States or by the European Data Protection Supervisor, which are to constitute a controlled environment facilitating the development, testing and validation of innovative AI systems for a limited period of time before they are placed on the market.

* * *

The Union's intention to contribute to the definition of global norms and standards and to promote trustworthy AI consistent with the Union's values and interests is certainly an ambitious challenge. It should be read in close connection with the EU Data Strategy (Communication from the Commission, A European Strategy for Data, COM/2020/66 final), which has among its stated aims the establishment of reliable mechanisms and services for re-using, sharing and pooling data, which are essential for the development of high-quality data-driven AI models.