Profiling, monitoring and automated decision-making: work-related risks

04/11/2024

article by Guido Lepore

One of the areas most affected by the introduction and diffusion of Artificial Intelligence systems is undoubtedly the work sector.

In fact, the use of AI technologies in this sector can range from the pre-employment phase (such as the search for and selection of personnel) to the management of the employment relationship (job changes, promotions, transfers, performance monitoring, etc.) and its termination (the decision whether or not to dismiss).

Artificial intelligence can also be an effective work tool, supporting the functionalities of health and safety equipment or assisting workers themselves in performing their tasks.

But while the potential of this tool is enormous, we must also be aware of the risks associated with its use.

The concrete risks include the invasive monitoring of workers, their systematic profiling and automatic decision-making that can lead to discriminatory treatment.

Let us look at them in detail.

CONTROL OF WORKERS

With the rise of AI technologies in the workplace, the concept of "remote control" of workers in labour law has expanded significantly.

Indeed, AI systems can continuously monitor workers' performance in detail, analyse their productivity and emotions, flag abnormal behaviour or even predict errors and inefficiencies. All this undoubtedly extends the employer's power of control over its employees, which is problematic from the perspective of labour law, whose very purpose is to protect workers from employer interference.

In Italy, indirect control in the workplace has been regulated by Article 4 of the 1970 Workers' Statute (as amended by the 2015 so-called 'Jobs Act'). This provision allows the introduction of systems that enable the indirect control of workers (e.g. video surveillance) only where two conditions are met:

  1. the existence of specific reasons, i.e. organisational and production requirements, safety at work, protection of company assets, and
  2. the prior conclusion of a trade union agreement or, failing that, authorisation from the National Labour Inspectorate (paragraph 1).

In addition, the use of workers' personal data collected by means of these instruments is conditional on workers being given specific information, in accordance with Article 4(3) of the Workers' Statute and Article 13 of the GDPR.

Despite the sophisticated defensive apparatus of the 1970 Workers' Statute (still relevant today), the requirements described above do not apply to instruments used by the worker to perform the work (Article 4(2)). There is therefore a residual risk of control over the worker whenever AI technology is embedded in the very tools used to carry out a work task.

EMPLOYEE PROFILING AND AUTOMATED DECISION-MAKING

Another critical issue arising from the use of AI systems in the workplace is the systematic profiling of employees.

According to Article 4(4) GDPR, profiling is any form of automated processing of personal data used to evaluate certain personal aspects of a natural person, in particular to analyse or predict aspects such as performance at work, reliability, behaviour, interests or preferences.

Profiling (or even rating) employees is already common in the pre-employment selection phase. In fact, many companies use AI systems capable of analysing and comparing candidates' CVs in order to select the most suitable candidate among them, based on the parameters used by the algorithm.
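To make the mechanism concrete, the logic of such screening tools can be sketched in a few lines. This is a purely illustrative toy example, not any real vendor's system: the keywords, weights and candidates are invented, and real systems are far more sophisticated. The point it demonstrates is the one made above: the outcome of the selection follows entirely from the parameters chosen by whoever designed the algorithm.

```python
# Purely illustrative sketch of a keyword-based CV screener.
# The keywords, weights and candidate data below are invented for the example.

WEIGHTS = {"python": 3, "sql": 2, "teamwork": 1}  # parameters set by the designer

def score_cv(cv_text: str) -> int:
    """Score a CV by summing the weights of the keywords it contains."""
    text = cv_text.lower()
    return sum(w for kw, w in WEIGHTS.items() if kw in text)

def shortlist(cvs: dict[str, str], top_n: int = 1) -> list[str]:
    """Return the top_n candidate names, ranked by score (highest first)."""
    ranked = sorted(cvs, key=lambda name: score_cv(cvs[name]), reverse=True)
    return ranked[:top_n]

candidates = {
    "Anna": "Python and SQL developer, strong teamwork",
    "Bruno": "SQL analyst",
    "Carla": "Project manager, teamwork",
}

# Who is "most suitable" is decided entirely by WEIGHTS, not by any
# human judgement of the individual candidates.
print(shortlist(candidates, top_n=2))  # → ['Anna', 'Bruno']
```

Changing a single weight in `WEIGHTS` reshuffles the shortlist, which is why the choice (and opacity) of these parameters is legally significant.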

In the context of the employment relationship, the same practice serves to classify employees, and the resulting classifications can then feed into significant employment decisions (career progression, access to training or employment opportunities, transfers, and even the selection of redundant staff to be dismissed).

Profiling, especially when used by AI systems to make automated decisions, carries an extremely high risk of discriminatory behaviour and/or unequal treatment (also referred to as algorithmic bias).

And although Article 22 of the GDPR explicitly provides for the right not to be subject to a decision based solely on automated processing and to obtain human intervention, this right is often difficult to guarantee when AI systems are used. This is due to the opacity of algorithms, which makes it extremely difficult to know and explain the decision-making process behind a given outcome.
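One elementary way an unequal outcome can be made visible, even when the algorithm itself is opaque, is to compare selection rates across groups of candidates. The following sketch is illustrative only: the group labels and decisions are invented, and real discrimination audits use far richer methods than this single ratio.

```python
# Illustrative only: a minimal disparity check on a hypothetical set of
# automated hiring decisions. Groups "A"/"B" and the outcomes are invented.

def selection_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of candidates belonging to `group` who were selected (True)."""
    in_group = [selected for g, selected in decisions if g == group]
    return sum(in_group) / len(in_group)

# (group, selected) pairs produced by a hypothetical automated screener
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rate_a = selection_rate(decisions, "A")  # 0.75
rate_b = selection_rate(decisions, "B")  # 0.25
ratio = rate_b / rate_a                  # large gaps warrant investigation
print(round(ratio, 2))  # → 0.33
```

A check like this does not explain *why* the system decided as it did — that is precisely the opacity problem — but it can flag that the output treats comparable groups very differently.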

Finally, the use of AI tools to make automated decisions can be found throughout the employment cycle: from the use of chatbots in job interviews, to automated selection in the allocation of tasks, duties and shifts, promotions and pay, to the identification of employees to be laid off.

In the almost dystopian scenario of fully automated labour management (known in the jargon as "algorithmic management"), the risk of violating fundamental precepts of labour law (such as the prohibition of discriminatory acts or the duty to exercise employer powers fairly) is a concrete problem to be reckoned with.

PROVISIONS OF THE AI ACT AND THE PROTECTION OF WORKERS

Having clarified the above, let us now see how Regulation (EU) 2024/1689 (the so-called AI Act) fits into this context.

The European legislator, well aware of the risks mentioned above, has introduced a number of provisions that explicitly address labour issues.

First, Article 5 of the AI Act identifies a number of prohibited AI practices.

These include:

  • the placing on the market, putting into service or use of AI systems to infer the emotions of a natural person in the workplace (Article 5(1)(f));
  • the placing on the market, putting into service or use of biometric categorisation systems that classify natural persons individually on the basis of their biometric data in order to deduce or infer their race, political opinions, trade-union membership, etc. (Article 5(1)(g)).

With regard to the first of these practices, it should be noted that Recital 18 excludes physical states (such as pain or fatigue) from the concept of emotion, as well as the mere detection of 'readily apparent' expressions, gestures and movements (unless they are used to infer emotions).

Such practices have been deemed too risky for workers' rights, to the point that they are banned outright or allowed only within narrow exceptions.

Other types of AI systems are allowed by the AI Act, but they are considered high-risk due to their potential impact and regulated accordingly. These are the systems listed in Article 6.

This provision, one of the cornerstones of the AI Act, states that high-risk AI systems include not only the safety components of products covered by the EU legislation listed in Annex I (including the PPE and Machinery Regulations), but also AI systems used in the areas listed in Annex III.

Specifically, point 4 of this Annex is entitled "Employment, workers’ management and access to self-employment".

In this area, the following AI systems are considered high-risk:

  • AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
  • AI systems intended to be used to make decisions affecting terms of work relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

The European legislator's effort to cover the entire lifecycle of the employment relationship, including access to employment, within these two broad categories is evident.

Moreover, the AI Act is the first horizontal EU Regulation to introduce a specific framework for the user of AI systems, referred to as the "deployer" (Art. 3(4)).

The deployer (in this case, the employer who decides to use AI systems) must comply with the obligations set out in Article 26, including:

  • the use and implementation of the AI system in accordance with the instructions given by the provider (or supplier) (para.1)
  • ensuring human oversight in accordance with Article 14, entrusted to natural persons who have the necessary competence and training (para. 2 and 3)
  • monitoring and reporting obligations in the event of serious incidents (para. 5)
  • informing workers' representatives and the affected workers before a high-risk AI system is put into service or used in the workplace (para. 8).

The latter obligation demonstrates the laudable intention to strengthen the role of trade union organisations, giving them a concrete role in the introduction and management of AI systems in the workplace.

Finally, although Article 35 GDPR requires a data protection impact assessment (DPIA) only where the processing is likely to result in a high risk to the rights and freedoms of data subjects, the latest doctrine considers that the use of high-risk AI systems in the workplace inherently meets that threshold, making the DPIA de facto mandatory for deployers/employers.

LABOUR PROTECTION BEYOND THE AI ACT: THE GDPR AND ITALIAN LEGISLATION

The Artificial Intelligence Regulation, although broad in scope, is not sufficient on its own to guarantee the full protection of workers from the risks of artificial intelligence.

For this reason, it needs to be read together with other regulatory instruments, first and foremost Regulation (EU) 2016/679 (GDPR) on the protection of personal data, with which coordination is straightforward: the two regulations share the same risk-based approach and many similar mechanisms (e.g. the obligation to provide prior information to data subjects under the GDPR and to the persons affected by AI systems under the AI Act).

Finally, at the national level, the AI Act needs to be coordinated not only with established legislation, such as the Workers' Statute, but also with more recent regulatory interventions, such as the Legislative Decree on Artificial Intelligence and Article 1-bis of Legislative Decree 152/1997 (as amended on 4 May 2023) on information obligations where automated decision-making or monitoring systems are used.

CONCLUSIONS

In drafting the AI Act, European legislators have shown that they are aware not only of the many possible uses of AI in the workplace, but also of the specific risks it can pose to workers.

Indeed, the use of AI systems in this sector means that, in almost all cases, such systems are classified as high-risk under the AI Act, making it mandatory for employers to comply with the requirements of Article 26 (violation of which triggers the sanctions set out in Article 99). Employers wishing to use AI systems must therefore take due precautions and measures, since prior compliance activities (as well as the involvement of workers' union representatives) are required.