High-risk AI systems and medical devices: data governance

21/02/2023

The Proposal on artificial intelligence regulates in detail systems classified as high-risk, a category that includes medical devices.

We analyse the implications of a high-risk artificial intelligence system from a data governance perspective.

Article 3 of the Commission's proposal for a regulation on artificial intelligence provides a very broad definition of an 'artificial intelligence system', in fact encompassing any software that, for purposes determined by humans, is able to generate outputs capable of influencing the environments - hence also the people - with which it interacts.

The Proposal was submitted to the Parliament, to various EU institutions and finally to the Council, which was called upon to examine a compromise text on 6th December 2022; that compromise text contains a definition of an artificial intelligence system substantially different from that of the Proposal:

"artificial intelligence system” (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.

It will therefore be crucial to see what the final definition of an artificial intelligence system will be, in order to determine whether or not a product falls under it.

Unacceptable, high or low risk

The Proposal differentiates AI systems according to whether they pose (i) unacceptable risk; (ii) high risk; (iii) low or minimal risk, devoting ample space to high-risk systems.

When is an AI system high risk?

Article 6 of the Proposal states that an AI system is considered high-risk if both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

The compromise text approved on 6th December 2022 contains an Article 6 whose first two paragraphs are modified in wording but not in substance:

An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation.

An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product.

Thus, for classification as high-risk, two conditions must coexist:

  • the AI system must be a safety component of a product, or itself be a product, subject to the harmonisation legislation,
  • and it must simultaneously be subject to conformity assessment by third-party bodies before it can be placed on the market.

Are medical devices equipped with AI systems to be considered high-risk?

Annex II of the AI Proposal, cited by Art. 6(1)(a), includes both EU Regulation 745/2017 on medical devices and EU Regulation 746/2017 on in vitro diagnostic medical devices in the list of Union harmonisation legislation.

Based on the classification rules contained in Annex VIII to EU Reg. 745/2017, medical devices using an AI system will rarely fall into a risk class that exempts them from conformity assessment by a Notified Body.

Consequently, medical devices that use a safety component consisting of an AI system, or that are themselves an AI system, are to be considered high-risk products within the meaning of the proposed regulation.

Moreover, the European legislator's focus on AI systems that could adversely affect people's health is also confirmed in the recitals, where recital 28 reads: "...in the health sector, where the stakes for life and health are particularly high, increasingly sophisticated human diagnostic and decision-support systems should be reliable and accurate".

Requirements for high-risk AI systems

If an AI system is classified as high-risk, the draft regulation requires compliance with a series of requirements set out in Chapter 2 of Title III.

In particular, Art. 10 regulates the use of data and, more specifically, of data sets.

Data governance

As is well known, one of the problems linked to AI is precisely that of 'data accuracy', understood as the exactness of the statistical modelling performed by the software: in essence, the notion of data accuracy in AI systems concerns not only the input data, but also the logic and functioning of the AI software itself, as well as the final output of such processing.

The quality of the data sets used is crucial.

For this reason, Article 10 provides that the data sets used for training, validation and testing of the AI system must comply with the quality criteria set out in paragraphs 2 to 5 of that article. In particular, the data sets must be relevant, representative, free of errors, complete and possess appropriate statistical properties, taking into account the particular characteristics or elements of the specific geographical, behavioural or functional context within which the high-risk AI system is to be used.
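Purely by way of illustration (the Proposal prescribes no technical means), a provider could automate part of these checks. The sketch below assumes a pandas DataFrame and a hypothetical demographic column age_group; the checks are narrow proxies for completeness, freedom from errors and representativeness, not a compliance test.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, group_col: str) -> dict:
    """Minimal, illustrative checks loosely mapped to the Art. 10(3) criteria."""
    return {
        # Completeness: share of missing values per column
        "missing_ratio": df.isna().mean().to_dict(),
        # Freedom from errors (one narrow proxy): exact duplicate records
        "duplicate_rows": int(df.duplicated().sum()),
        # Representativeness: distribution over a relevant group
        "group_distribution": df[group_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical toy data standing in for a clinical training set
df = pd.DataFrame({
    "age_group": ["18-40", "41-65", "41-65", "65+", None],
    "measurement": [1.2, 0.9, 0.9, 1.5, 1.1],
})
print(basic_quality_report(df, "age_group"))
```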

The adoption of data management practices is required, covering the items below (a sketch of how such practices might be recorded follows the list):

(a) relevant design choices;

(b) how the data are collected;

(c) processing operations relevant to the preparation of the data, such as annotation, labelling, cleaning, enrichment and aggregation;

(d) the formulation of relevant assumptions, in particular as to what the data are supposed to measure and represent;

(e) a preliminary assessment of the availability, quantity and adequacy of the necessary data sets;

(f) an examination to assess possible biases;

(g) the identification of any gaps or deficiencies in the data and how these can be filled.
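As a purely illustrative sketch, these practices could be captured in a machine-readable record kept alongside the data set; all field names and example values below are our own assumptions, since the Proposal prescribes no format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataGovernanceRecord:
    """Illustrative record of the practices listed in Art. 10(2); names are hypothetical."""
    design_choices: list[str]            # (a) relevant design choices
    collection_sources: list[str]        # (b) how the data were collected
    processing_operations: list[str]     # (c) annotation, labelling, cleaning, ...
    assumptions: list[str]               # (d) what the data measure and represent
    availability_assessment: str         # (e) availability, quantity, adequacy
    bias_examination: dict[str, float]   # (f) e.g. subgroup shares in the data set
    identified_gaps: list[str] = field(default_factory=list)  # (g) gaps and remedies

record = DataGovernanceRecord(
    design_choices=["binary classification of scan anomalies"],
    collection_sources=["hospital PACS export, 2019-2021"],
    processing_operations=["de-identification", "radiologist labelling"],
    assumptions=["labels approximate the ground-truth diagnosis"],
    availability_assessment="sufficient volume overall; paediatric cases scarce",
    bias_examination={"female": 0.31, "male": 0.69},
    identified_gaps=["paediatric cases under-represented; plan further collection"],
)
print(json.dumps(asdict(record), indent=2))
```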

Finally, paragraph 5 provides that AI system providers may process special categories of personal data,

'subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use of the data and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the intended purpose'.
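By way of example only, pseudonymisation of a direct identifier can be implemented with a keyed hash before any data reach the training pipeline. The key name and handling below are our own assumptions; the key must be stored separately from the pseudonymised data, otherwise re-identification remains trivial.

```python
import hashlib
import hmac
import os

# Secret key read from the environment; the variable name is hypothetical.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# The same input always yields the same token, so records remain linkable
# without storing the raw identifier.
print(pseudonymise("patient-0042"))
```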

We have already discussed the links with the GDPR and the implications of the AI Proposal for data processing here: https://www.studiolegalestefanelli.it/it/approfondimenti/intelligenza-artificiale-e-protezione-dei-dati-una-convivenza-possibile/.

Also of interest on this point is a recent intervention by the ICO (Information Commissioner's Office), which published the guidance 'How to use AI and personal data appropriately and lawfully'.

The first section of the document, 'How to improve and how to handle AI and personal information', contains a series of practical tips and guidance on how to comply with the GDPR in the design and operation of AI systems.

In the second section, 'Artificial intelligence and personal information - frequently asked questions', the ICO provides answers to nine frequently asked questions in the context of AI and data protection.