AI ACT: CRIMINAL LIABILITY OF THE PROVIDER AND DEPLOYER
The AI Act, the world's first legislation regulating the development, placing on the market and use of artificial intelligence systems, is about to come into force.
Although the debate on liability currently focuses mainly on civil law remedies - which, by their very nature, are a particularly effective means of protecting people harmed by artificial intelligence systems - the advent of AI will also have an impact on criminal liability.
In our previous articles, we have already provided a detailed overview of the subjects covered by the AI Act. In this article, we will therefore look at the scenarios opened up by the Regulation, with particular reference to the criminal law implications for players such as the provider and the deployer.
The regulatory framework
Who are the provider and the deployer?
With a view to promoting the dissemination of safe and reliable artificial intelligence, the new regulation lays down specific rules not only for suppliers of AI systems (so-called providers), but also for their users (so-called deployers).
Let us then begin with definitions.
Referred to in Art. 2, which delimits the Regulation's scope of application, the notions of provider and deployer are defined in Art. 3. According to the final version of the AI Act:
- The provider is ‘a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge’;
- The deployer is ‘a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity’.
The Regulation therefore applies not only to natural persons using AI systems, but also to entities - including public administrations - that use AI systems, or make them available to users on the market, under their authority, except in cases of personal non-professional use. The figure of the deployer is thus distinct from that of the supplier or importer, which are subject to different rules.
In this regard, two clarifications are essential:
- the Regulation also applies to providers and deployers of AI systems established or located in a third country, in all cases where the output produced by the system is used within the territory of the Union (Article 2(1)(c));
- under recital 84, in specific circumstances (e.g. where a substantial modification is made to a high-risk AI system or where its intended purpose is changed) any distributor, importer, deployer or other third party is deemed to be a ‘provider’. In such a case, since the role of provider will in practice be assumed by the deployer, the latter will consequently bear the obligations and responsibilities that the Regulation places on the provider. However, ‘those provisions should apply without prejudice to more specific provisions established in certain Union harmonisation legislation based on the New Legislative Framework, together with which this Regulation should apply. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes should not be considered to be modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation’.
What are their obligations?
Depending on whether an operator qualifies as a provider or a deployer, the AI Act provides for a number of different obligations and responsibilities.
In this respect, the classification of AI systems according to the risk associated with their use plays a fundamental role. Indeed, in line with a distinctly "risk-based" approach - a characteristic element of the new regulatory discipline - the greater the risk that the use of the system may entail for the user, the stricter the rules.
The Regulation attaches particular importance to high-risk AI systems (Art. 6) - which include medical devices, by virtue of the reference to the harmonisation legislation contained in Annex I, to which Art. 6(1)(a) expressly refers - imposing specific obligations on providers.
The obligations placed on the provider of high-risk systems, listed in Article 16, include, for example:
- ensuring that their systems comply with the requirements expressly laid down in the Regulation;
- having a quality management system;
- keeping the logs automatically generated by high-risk systems when under their control;
- ensuring that the system undergoes the conformity assessment procedure before it is placed on the market or put into service;
- drawing up an EU declaration of conformity and affixing the CE marking to the high-risk AI system or, where this is not possible, to its packaging or accompanying documents, to indicate compliance with the Regulation;
- complying with registration obligations;
- taking any necessary corrective measures and providing the necessary information;
- demonstrating, upon request of a competent national authority, the conformity of the high-risk AI system with the requirements of the Regulation.
Recital 73 also requires the provider to identify appropriate human oversight measures (Art. 14) before the system is placed on the market or put into service. These measures must therefore ‘guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role’.
Providers of AI systems are responsible for ensuring that their systems comply with the criteria set out in the AI Act, particularly in relation to safety, reliability and transparency, but the regulation also introduces specific rules for deployers, as users of AI systems.
Pursuant to Article 26, the main obligations placed on the deployer of a high-risk AI system include:
- taking appropriate technical and organisational measures to ensure that the systems are used in accordance with the instructions for use provided by the provider;
- entrusting human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support;
- monitoring the operation of the high-risk AI system on the basis of the instructions for use and, where necessary, informing the provider accordingly;
- informing the provider or distributor and the relevant market surveillance authority without delay, and suspending use of the system, where they have reason to believe that use of the high-risk AI system in accordance with the instructions for use may present a risk to the health, safety or fundamental rights of persons;
- immediately informing the provider, and subsequently the importer or distributor and the relevant market surveillance authorities, where they have identified a serious incident.
Impact in terms of criminal liability
From the above overview, it can be seen that the AI Act lays down for providers and deployers of AI systems an articulated set of standards of conduct and obligations of compliance and transparency, the breach of which is sanctioned solely through administrative fines, whose amount will be determined by each Member State in accordance with the parameters set out in the Regulation (Articles 99 and 101).
Criminal liability, on the other hand, is largely absent from the new legal framework and, pending legislative adjustments, reference will have to be made to the existing rules.
Let us see how.
The system established by the AI Act promotes a 'proactive' rather than a 'reactive' approach, imposing on the parties involved (and thus, in most cases, on companies) a series of obligations and activities aimed at risk management, and giving rise to a complex system of responsibilities.
In line with the 'risk-based' approach, the AI Act defines a zone of acceptable risk, identifying a series of requirements that must be met for the AI system to be considered compliant and for companies to be able to produce and place AI systems on the market.
Compliance functions will therefore play a crucial role in meeting the new requirements of the AI Act.
In this respect, it will be essential to carry out a prior assessment of the possible risks associated with the activities carried out by or through AI systems, which will need to remain within the limits of acceptable risk. Once the appropriate precautions have been identified, liability for any harmful or dangerous events arising from a failure to comply with them could be attributed to the individuals responsible for their adoption, as well as to the entity itself.
Concrete support for risk management can be found in the Organisation and Management Models (MOG) under Legislative Decree 231/01. These models identify the persons with decision-making powers and the methods of intervention where an offence - falling within the predicate offences giving rise to the entity's administrative liability - is committed as a result of using the AI system.
As the Italian Court of Cassation has also confirmed, where an organisational structure negligently fails to take the necessary precautions to prevent the commission of offences, the harmful event is attributable to the entity on the basis of "organisational fault", and the sanctions provided for by Legislative Decree 231/2001 may therefore be imposed on it.
As mentioned above, the Regulation says nothing, on the other hand, from a reactive perspective: although the use of AI systems may give rise to criminal offences, the European legislator does not impose any obligation to criminalise such conduct.
However, there are other applicable frameworks.
When identifying the types of offences that may be committed, it is essential to note that an offence can be relevant in both an individual and a collective dimension.
In fact, when products are generally mass-produced and placed on the market in large quantities, any defect, non-conformity or alteration in their condition constitutes a risk to the health, safety or life not only of an individual but of an indeterminate community. The sanctions provided for in the specific legislation on consumer protection, i.e. the Consumer Code (Legislative Decree 206/2005), are therefore of fundamental importance.
Among these, with regard to criminal liability, Article 112(2) of the Consumer Code is particularly relevant: it provides that "unless the act constitutes a more serious offence, the producer who places dangerous products on the market shall be punished by imprisonment of up to one year and a fine of between 10,000 and 50,000 euros". The provider could therefore be held liable under this provision if it places on the market a product that does not meet the definition of a "safe product".
In addition to the collective dimension, there is also the individual dimension, which concerns the danger or damage that the product may cause to the individual user. In this scenario, although there are no specific consumer-protection rules, offences against life and physical integrity may be committed.
Among these, homicide and bodily injury - especially in their negligent form - are the offences most likely to apply, because they protect the personal interests of the person who suffers damage or whose life or safety is put at risk as a result of using the defective or altered product.
Particularly relevant in this respect is the concept of human oversight (Art. 14). Human oversight aims ‘to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse’. This is achieved not only through measures identified by the provider and built into the high-risk AI system before it is placed on the market or put into service, but also through measures which, identified by the provider at that same stage, are suitable for implementation by the deployer.
Given the central role assigned to this form of oversight, from a criminal law perspective the ‘human overseer’ (be it the provider or the deployer) could be held liable for failing to take appropriate measures to prevent harmful events arising from the operation of the AI system.