European Union and Artificial Intelligence: towards the development of an ecosystem of excellence

05/03/2020

On 19 February 2020, the European Commission presented the highly anticipated White Paper on Artificial Intelligence. The White Paper is part of the "coordinated European approach to the human and ethical implications of AI" announced by Commission President Ursula von der Leyen at the beginning of her term of office.

The proposals in the White Paper are open for public consultation until 19 May 2020.

In the White Paper, the Commission addresses the important issue of how to adapt, where possible, the rich European regulatory framework already in place to the specificities of AI systems, including, for instance, its points of intersection with the General Product Safety Directive[1] and the Medical Devices Regulation.[2]

Although the documents circulated so far do not define a comprehensive European legal framework for AI, they do identify the EU's key priorities and what the next steps will be. The European Union is, in fact, keen to become a leader in the AI revolution.

Let us now analyze the European strategy step by step, in chronological order.

On 10 April 2018, 25 EU member states signed the Declaration of Cooperation on Artificial Intelligence. Subsequently, on 25 April 2018, the Commission published the communication "Artificial Intelligence for Europe", addressed to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, laying the foundations of the European initiative on Artificial Intelligence. The aims outlined are:

  • Promoting the industrial and technological capacity of the European Union through investment in Artificial Intelligence, in both the private and public sectors (increasing total investment to more than EUR 20 billion per year over the next decade). The communication explicitly mentions the need to make large amounts of data available and public (in compliance with the GDPR), including through the creation of a new support centre for data sharing;
  • Preparing the EU for the socio-economic changes brought about by Artificial Intelligence;
  • Ensuring the creation of an ethical and regulatory framework based on EU values and respectful of the Charter of Fundamental Rights (including, among other things, guidance on the current rules on liability for defective products).

In early June 2018, the independent High-Level Expert Group on Artificial Intelligence (HLEG) was created and the European Alliance on Artificial Intelligence was launched. This work resulted in the Ethics Guidelines for Trustworthy Artificial Intelligence, as well as an assessment checklist to verify the trustworthiness of AI systems.

On 19 February 2020, as anticipated at the beginning of this article, the "White Paper on Artificial Intelligence - A European approach to excellence and trust" was published. On the same date, the Commission published two communications on its five-year digital strategy, "Shaping Europe's digital future" and the "European Data Strategy", together with the "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics".

The "White Paper on Artificial Intelligence - A European approach to excellence and trust"

Let us now highlight the most important features of the White Paper, which is the most relevant of these documents as regards the legal aspects.

Why is it important?

Because the Commission outlines the legislative options that the EU could implement to promote greater uptake of AI, while addressing the risks associated with this technology.

The Commission stresses the importance of establishing a uniform approach to AI in the EU in order to preserve the effectiveness of the European single market, and outlines a risk-based regulatory approach to Artificial Intelligence. Among the risks discussed in the White Paper, violations of fundamental rights and the allocation of liability are highlighted.

It is interesting that the Commission identifies as a dangerous capability the ability of an AI system to trace and de-anonymize data relating to natural persons, thus creating a new risk of violating data protection principles even for datasets that would not, in themselves, be subject to the GDPR because they contain only anonymized data. This shows how slippery the terrain of data anonymity is, and how difficult and risky it is to treat data as genuinely anonymous.
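To make the point concrete, the sketch below illustrates the classic mechanism behind such de-anonymization, a so-called linkage attack: joining an "anonymized" dataset with a public one on shared quasi-identifiers. This is purely illustrative; the datasets, column names and records are entirely hypothetical.

```python
# A minimal, self-contained sketch of a linkage attack: re-identifying
# records in an "anonymized" dataset by joining it with a public dataset
# on quasi-identifiers. All names and records are hypothetical.
import pandas as pd

# "Anonymized" health data: direct identifiers removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "zip_code": ["20121", "20121", "00184"],
    "birth_year": [1980, 1955, 1980],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A separate public dataset (e.g. an electoral roll) containing names.
public = pd.DataFrame({
    "name": ["Mario Rossi", "Anna Bianchi"],
    "zip_code": ["20121", "00184"],
    "birth_year": [1980, 1980],
})

# Joining on quasi-identifiers can re-attach identities to medical records,
# even though neither dataset contained names next to diagnoses on its own.
reidentified = anonymized.merge(public, on=["zip_code", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```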

The Commission presents a number of problematic situations related to the existing regulatory framework:

  1. Limitations in the scope of application of existing European legislation: under EU product safety legislation, software that is part of a final product follows the rules on product safety. Two important issues therefore remain open:
    1. Is stand-alone software subject to product safety rules where no explicit provisions exist (as there are, for example, in the Medical Devices Regulation)?
    2. The legislation applies only to products, so how should trade in services based on Artificial Intelligence (e.g. health services, financial services, transport services, etc.) be regulated?
  2. Functional changes in AI-based systems: software integration and updates can change a product's functionality during its life cycle; this concerns, in particular, software that requires continuous updates or is based on machine learning. In the Commission's view, the current legislation is not suited to address the risks that may result from updates or functional changes that were not present when the product was placed on the market.
  3. Uncertainty regarding the attribution of liability for defective products: in general, under European legislation, responsibility for the safety of a product (covering all its components) lies with the manufacturer who placed it on the market. According to the Commission, these rules would be difficult to apply in cases where, for example, Artificial Intelligence is added to a product by a third party after it has been placed on the market.

Moreover, the spread of products based on, or incorporating, AI systems has changed the notion of product 'safety', as new risks have emerged, such as risks related to cybersecurity and loss of connectivity.

A relevant passage concerns the Commission's proposal to focus on and regulate only 'high-risk' applications of AI. An AI application is considered high risk:

  1. If it is used in a high-risk sector, including health, transport and energy;
  2. If it is used in a way that poses a significant risk: this criterion serves to avoid deeming high risk every AI application in a sector considered high risk. For example, software used to schedule appointments for medical visits would not be considered high risk, given its slight impact on the individual.

Applications of AI considered high risk would be subject to regulation through mandatory conformity assessment requirements prior to placing on the market.

One wonders whether it will be possible for a regulatory text to define with certainty what is high risk and what is not. The danger is that certain grey areas will slip through the wide meshes of such a vague determination.

These requirements would apply to all operators offering AI products or services in the EU, regardless of their place of establishment. The new conformity assessment could also be integrated into conformity assessment procedures that already exist for certain products (e.g. medical devices).

For all other AI applications, the Commission proposes a voluntary labelling scheme, which would allow suppliers to "signal that their AI-enabled products and services are trustworthy". Although the scheme is entirely voluntary, once a supplier chooses to use the label, the requirements become binding.

But which aspects of AI could be regulated as part of the mandatory conformity assessment prior to placing on the market?

The Commission suggests these requirements:

  1. Data for AI training

Ensuring that AI is trained on broad, representative and non-discriminatory datasets, always processed in accordance with the GDPR.

  2. Records and data retention

Requiring companies to maintain detailed documentation of:

  • The data set used for the training and testing phase of the algorithm, including a description of its main features and why it was chosen;
  • In some specific cases, the retention of the training dataset itself may be required;
  • Documentation related to the programming and training methodology of the algorithm, as well as the procedures to test and validate it.

The aim is to make it possible to trace and verify ex post how problematic AI decisions were made.
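As an illustration, the sketch below models what such documentation could look like as a structured record. This is only one possible shape, under our own assumptions; the White Paper does not prescribe any format, and all class and field names here are hypothetical.

```python
# A minimal sketch of the kind of record-keeping the White Paper envisages.
# The structure and field names are hypothetical; they mirror the items above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrainingDatasetRecord:
    """Documentation of a dataset used to train and test an AI system."""
    name: str
    main_features: str        # description of the dataset's main characteristics
    selection_rationale: str  # why this dataset was chosen
    retained_copy_path: Optional[str] = None  # set where retention of the dataset itself is required

@dataclass
class AlgorithmRecord:
    """Documentation of how the algorithm was programmed, trained and validated."""
    programming_methodology: str
    training_methodology: str
    test_and_validation_procedures: List[str] = field(default_factory=list)
    datasets: List[TrainingDatasetRecord] = field(default_factory=list)

# Hypothetical usage example:
record = AlgorithmRecord(
    programming_methodology="gradient-boosted decision trees",
    training_methodology="supervised learning, 80/20 train/test split",
    test_and_validation_procedures=["hold-out accuracy", "bias audit per subgroup"],
    datasets=[TrainingDatasetRecord(
        name="claims_2019",
        main_features="500k insurance claims, 12 features, EU-wide",
        selection_rationale="most recent representative sample available",
    )],
)
print(record.datasets[0].name)
```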

  3. Information

Communicating to citizens in a consistent, objective and easily understandable way when they are interacting with an AI system, as well as its purpose and its limitations. The Commission also recognizes that, since Article 13(2)(f) of the GDPR already requires data controllers to inform data subjects about automated decision-making, no excessive obligations should be imposed on providers where it is obvious to data subjects that they are interacting with an AI system.

  4. Robustness and accuracy

Requirements relating to maintaining the accuracy of the software throughout its life cycle and preserving, over time, its ability to handle errors, to withstand cyber attacks and to resist attempts to manipulate its data or its algorithm.

  5. Human oversight

An adequate level of human oversight of the algorithm's work must always be ensured. This could be achieved through one of the following measures (a minimal sketch of the first pattern follows the list):

  • The output of the AI system does not become effective until it has been reviewed by a human agent;
  • The output of the AI system takes effect immediately, but human intervention is ensured afterwards;
  • Constant monitoring of the AI system, with the ability for the human operator to intervene in real time and deactivate it;
  • Imposing, in the design phase, certain operational limits on the system (e.g. a driverless car that stops operating in conditions of poor visibility).
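To illustrate the first of these measures, here is a minimal, self-contained sketch of a human-in-the-loop gate, in which AI output is queued until a human reviewer approves it. All class and function names are our own hypothetical choices, not anything prescribed by the White Paper.

```python
# A minimal sketch of the first oversight pattern: AI output does not take
# effect until a human reviewer approves it. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class AIDecision:
    subject: str
    recommendation: str
    approved: bool = False

class HumanReviewGate:
    """Holds AI outputs in a pending queue until a human operator reviews them."""
    def __init__(self):
        self.pending = []    # outputs awaiting human review
        self.effective = []  # outputs a human has approved

    def submit(self, decision: AIDecision) -> None:
        # AI output is queued, not applied.
        self.pending.append(decision)

    def review(self, decision: AIDecision, approve: bool) -> None:
        # Only the human decision makes the output effective.
        self.pending.remove(decision)
        if approve:
            decision.approved = True
            self.effective.append(decision)

gate = HumanReviewGate()
d = AIDecision(subject="loan application #42", recommendation="reject")
gate.submit(d)                 # nothing takes effect yet
gate.review(d, approve=True)   # a human makes the output effective
```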

Specific requirements for biometric recognition are then suggested.

The regulatory framework suggested by the Commission closely reflects the approach that has emerged over the years and is now applied to medical devices.

The White Paper concludes that compliance with the future regulatory framework should be ensured by a governance structure working with existing authorities (e.g. those for medical devices or medicines), conducting assessments of compliance with the legal requirements and independently testing AI-based systems.



[1] Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety

[2] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC