Artificial Intelligence and data protection: a possible coexistence?


The aim of European legislation on AI is to ensure - as stated in the Explanatory Memorandum of the Proposal for a Regulation on AI - that 'European citizens can benefit from new technologies developed and operated in accordance with the values, fundamental rights and principles of the Union'.

Among these rights, the right to the protection of personal data certainly plays a primary role.

Proof of this is the fact that privacy issues are paramount in European AI working groups, where a sustainable balance is being sought between (responsible, ethical, fair) innovation and the protection of personal data and, therefore, of the individuals to whom they relate.

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), in their joint opinion of June 2021, also contributed with suggestions for amendments to enrich and improve the Proposal.

Today, although the EDPB and the EDPS emphasise that there is 'still a long way to go before a well-functioning legal framework can emerge from the Proposal that can effectively complement the GDPR', we can read the Proposal and the GDPR to identify the main points of contact.

On closer analysis, the implications regarding data protection law are clear from reading the first Recitals.

Among the most relevant topics are: the risk analysis approach and accountability, data quality and accuracy, privacy by design and by default, transparency and the sanction mechanism.

  1. The risk analysis approach
The Proposal, in particular Article 9 and Recital 42, requires each provider to adopt a risk management system broadly similar to the one envisaged by the GDPR. In fact, the Proposal prescribes mapping known and foreseeable risks, mitigating and managing them, informing those concerned of their existence, and constantly monitoring and updating the management system. This approach deviates little from the risk analysis that the GDPR prescribes in Articles 25 and 35.
  2. The quality and accuracy of data
    Data quality and accuracy is one of the requirements for the marketing of high-risk AI systems. It is a specificity introduced by Article 10 of the Proposal, which stipulates in paragraph 3 that the datasets used for training models must be 'relevant, representative, error-free and complete'. Although each principle must be read in the specific context in which it is inserted, the parallelism with the principles applicable to the processing of personal data listed in Article 5 GDPR (minimisation, accuracy, integrity) is immediate.
  3. Privacy by design and by default
Article 10 of the Proposal also introduces an approach that seems to reflect (at least in part) the principle of privacy by design and by default set out in Article 25 GDPR. In fact, it is envisaged that AI systems, and in particular those involving the use of data to train models, are developed taking into account a number of essential aspects, such as: the collection of data (b); the processing operations relevant to data preparation, including data cleaning, aggregation and enrichment (c); a preliminary assessment of the availability, quantity and adequacy of the necessary datasets (e); and the identification of any gaps or deficiencies in the data and how they can be filled (g). This method seems to echo the conceptual approach of prior mapping, design and analysis that serves as a compass for the data protection framework.
  4. Principle of transparency
    Article 13 of the Proposal provides for transparency obligations where it states that 'high-risk artificial intelligence systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to allow users to interpret the output of the system and use it appropriately'. Although it does not explicitly provide for the intersection of these requirements with the GDPR, it seems logical to refer to the principle of transparency laid down in Article 5 of the GDPR, which requires data controllers to make data subjects aware of how their data will be handled in relation to the specific processing carried out, as well as the risks associated with it.
    This duty of transparency is embodied in compliance with the information duties set out in Articles 13 and 14 of the GDPR, which require the data controller to inform data subjects of how the data relating to them will be handled and of the rights they can exercise with regard to data protection.
  5. Automated decision-making processes
Article 14 of the Proposal identifies the cases in which human oversight of the high-risk AI system is necessary, with the aim of preventing or minimising risks to the health, safety or fundamental rights of individuals. Thus, measures are envisaged to make those entrusted with oversight aware of the limitations of the AI system and to enable them to critically evaluate the outputs it produces, as well as to deviate from them where necessary.
    Here too, the provision bears an affinity with Article 22 of the GDPR, which grants data subjects the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
    Article 22 does not, in fact, impose an absolute prohibition: by way of derogation, the data controller may take a decision based on an automated system when (1) it is necessary for entering into or performing a contract with the data subject, (2) it is authorised by Union or Member State law, or (3) the data subject has given explicit consent.
    In the first and third cases, the data subject has the right to obtain human intervention in the automated decision-making process, to express his or her point of view and to contest the decision. Moreover, returning to the information duties mentioned in the previous point, the data controller will have to inform the data subject of the existence of the automated decision-making system, the logic it uses and the consequences that such processing will have on him or her, thus raising difficult issues related to the 'explainability' of AI systems.
  6. Sanctions
    Article 72 of the Draft Regulation provides that the European Data Protection Supervisor may impose administrative fines on Union institutions, agencies and bodies that fall within the scope of the AI Regulation, following the 'ceiling' criterion already provided for in the GDPR:
  • up to EUR 500 000 for non-compliance with the prohibition of artificial intelligence practices referred to in Article 5;
  • up to EUR 250 000 for non-compliance of the AI system with the requirements of Article 10 (data and data governance).
    When determining the amount of the administrative fine, the EDPS shall take into account the nature, gravity and duration of the breach, the degree of cooperation with the Authority to remedy the breach and any previous similar breaches committed by the same body.