Artificial intelligence and claims for damages in healthcare: how is Europe charting the way forward?

05/10/2022

In September 2022, the European Commission submitted a proposal for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence) to regulate civil liability for damage caused by software using Artificial Intelligence systems.

This proposal, which we will shortly discuss in detail, is closely related to COM(2021) 206, the Proposal for a Regulation laying down harmonised rules on artificial intelligence (still under discussion in the EU, with possible approval in spring 2023), because of the strong impact both texts will have on the health sector.

The full effectiveness of EU Reg. 2017/745 on medical devices (under which much more software falls within the notion of medical device), the major push towards the digitisation of health services envisaged by the PNRR (Italy's National Recovery and Resilience Plan), and the new design of territorial care (Ministry of Health Decree No. 77 of 23 May 2022) will surely lead to an increase in the use of digital tools in healthcare. Many of these tools already use AI systems today. The evolving framework of liability for AI systems is therefore bound to have a wide impact on our legal system.

Therefore, to grasp the scope of the proposed Directive, it is necessary to start from the framework of the proposed AI Regulation and then analyse the connections between the two texts. In other words, we need to start with the (future) rules applying to AI software, and then move on to the (future) rules on compensation where such software causes harm to a patient.

The proposal for a Regulation on artificial intelligence

The proposed Artificial Intelligence Regulation (released in April 2021) aims to introduce a harmonised EU-wide framework on artificial intelligence.

The proposal, which is the first example worldwide of a uniform set of rules on AI, has a legal structure resembling that of EU Reg. 2017/745 on medical devices (MDR). Indeed, it appears to be based entirely on a risk-based approach and on the implementation of a risk management system (Art. 9). It provides for requirements and obligations to be met by:

  • high-risk AI systems (Art. 8-15),
  • providers (Art. 16-24),
  • authorised representatives (Art. 25),
  • importers (Art. 26),
  • distributors (Art. 27),
  • professional users (Art. 29).

The draft Regulation also governs the accreditation and operation of Notified Bodies (Art. 30 to 39), harmonised standards (Art. 40) and common specifications (Art. 41), as well as the detailed requirements for the CE certificate to be issued by the Notified Bodies once they have verified compliance (Art. 44). Art. 49 provides for the affixing of the CE marking to the AI system and its registration in a specific EU database (Art. 51 and 60) before it is placed on the market or put into service, followed by a structured system of market surveillance and post-market monitoring.

The proposal for a Directive on civil liability for AI systems

Building on the legal framework outlined in the proposed AI Regulation, the purpose of this draft Directive is to establish uniform rules for the compensation of damage caused by AI systems, thereby providing broader protection for victims (whether individuals or companies) and greater clarity as to the respective responsibilities of the parties.

More precisely:

  • a presumption of fault is introduced where the rules on the design and implementation of the system under the proposed AI Regulation have been infringed;
  • access to evidence during the trial is facilitated.

Let’s now look at these two issues in more detail.

Presumption of fault

There is a presumption of fault on the part of the provider when all of the following three conditions are met (Art. 4 of the draft Directive):

  1. there is a breach of a duty of care under Union or national law directly intended to protect against the harm that has occurred; more precisely, the provisions of Articles 9-16 of the proposed AI Regulation have been violated;
  2. it is reasonably likely that this fault influenced the output produced by the AI system, or the AI system's failure to produce an output;
  3. the plaintiff (patient) has proved that the output produced by the AI system, or the failure of the AI system to produce an output, caused the harm.

With regard to point 1, the proposal specifies that the following violations of the proposed AI Regulation trigger the presumption of fault:

  • the AI system is not developed on the basis of training, validation and testing data that meet the quality criteria set out in Article 10(2) to (4) of the proposed AI Regulation. Specifically, this article establishes the quality requirements for the data used to train the software and for the related data governance;
  • the AI system is not designed and developed in such a way as to meet the transparency requirements set out in Art. 13 of the draft AI Regulation. This obligation intersects with Art. 13 GDPR as well as with Annex I, point 23, of the MDR, as specified in MDCG 2019-16 'Guidance on Cybersecurity for medical devices' (the latter, in particular, explains precisely what information must be disclosed to professionals and patients);
  • the AI system is not designed and developed in such a way as to allow effective oversight by natural persons during the period in which the AI system is in use, within the meaning of Article 14 of the draft AI Regulation. Here the importance of human intervention is emphasised at both the design and the operational level; in other words, although the AI system is 'intelligent' software, it is essential that humans are able to monitor the machine's output and intervene where necessary;
  • the AI system is not designed and developed so as to achieve an adequate level of accuracy, robustness and cybersecurity within the meaning of Articles 15 and 16(a) of the draft AI Regulation; here, too, the provision intersects with Art. 5 GDPR and Annex I MDR;
  • the necessary corrective actions were not taken immediately to bring the AI system into compliance with the obligations set out in Title III of the draft Regulation, or to withdraw or recall the system pursuant to Articles 16(g) and 21 of the same Regulation. Here, the emphasis falls on the entire post-market monitoring and control system.

Disclosure of evidence during trial

The other element (more strictly procedural in nature, but with a very strong practical and operational impact) concerns evidence in the course of a possible claim for damages by the patient.

Here, the lawmaker starts from a quite realistic assumption: the difficulty a patient or consumer would face in gathering sufficient and suitable evidence to prove the above-mentioned elements (and thus trigger the liability of the AI system provider). On this point, Art. 3 of the proposed Directive states that where the affected patient is unable to gather such evidence, they may request the court to order the AI provider to disclose it.

It should be noted that the provider's obligation to produce evidence on the functioning of the AI system will be subject to the principles of necessity and proportionality.

In addition, the court will have to take into account any acts or documents that are protected at company level under Directive (EU) 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure, and take the specific measures necessary to preserve confidentiality. Perhaps this will also prompt companies to reflect on their implementation of that Directive, which is of the utmost importance yet seems to be overlooked by most.

Civil liability in the digital age: the Commission's proposals

On 28 September 2022, the European Commission adopted two proposals to regulate liability in the digital age, in the circular economy and along global value chains.

The first proposal (COM(2022) 495, Proposal for a Directive of the European Parliament and of the Council on liability for defective products) aims to 'modernise' the now obsolete Directive 85/374/EEC on the strict liability of producers for defective products, a legal framework that is clearly no longer adequate for today's digital environment.

The second proposal (COM(2022) 496 final, Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence), on the other hand, covers the specific field of Artificial Intelligence.

The first proposal, in brief:

  • modernises liability rules for circular economy business models and product liability rules in the digital age, allowing compensation for damage caused by products such as robots and drones;
  • creates a level playing field between EU and non-EU producers, as consumers harmed by products imported from non-EU countries will be able to turn to the importer or to the manufacturer's EU representative for compensation;
  • puts consumers on an equal footing with manufacturers, by requiring manufacturers to disclose evidence in court, by introducing more flexibility in terms of filing claims, and by easing the burden of proof for victims in complex cases, such as those involving pharmaceutical products.

Conclusions

This proposal for a Directive has only just been submitted. Approval typically takes about two years, so it certainly cannot yet be regarded as a text in force.

This, however, does not diminish its significance. The overall design is clearly outlined and is bound to influence the design of AI software from today onwards. In the health sector in particular, AI software will therefore have to take into account the interplay between the MDR, the GDPR and the directions set by both the proposed Regulation on AI and the proposed Directive on liability for damage caused by AI.