Automation of Tender Procedures and Human Oversight

19/11/2024

The automation of tendering procedures relies on algorithms and represents a clear opportunity to optimise the management of public resources, reduce execution times and minimise calculation and evaluation errors. However, this process must be properly balanced by the legal principle of human oversight, which ensures human control over the process and thus the legitimacy of administrative decisions.

Human control has deep roots in the theory of the administrative organ and the principle of imputation of administrative action. Traditionally, public administration operates through the intervention of natural persons acting as organs of the public body. This model, well defined in the theory of organic imputation, guarantees that each administrative action is attributed to a natural person. Therefore, in the case of automation, the lack of human intervention in administrative decisions would entail the risk of issuing acts lacking legal legitimacy.

 

Key Definitions: Algorithm, Automation and Artificial Intelligence

A distinction between algorithm, automation and Artificial Intelligence allows us to understand the different ways in which technology and administrative activity interact:

  • Algorithm: a sequence of programmed instructions that allows software to solve a set of problems in a deterministic way, i.e. providing the same output for the same input. In public tenders, for example, the algorithm can rank bids according to pre-defined criteria, determine scores, exclude outlier bids and assist in the verification of documentation, thus reducing human intervention in the preliminary stages of evaluation.
  • Automation: the use of the algorithm to perform tasks without human intervention, delegating operations such as calculation or selection to the software. A distinction can be made between 'traditional' automation, based on algorithms that follow pre-defined rules, and Artificial Intelligence, which includes algorithms capable of learning and adapting. In public procurement, it can be used to check the formal requirements of tenders.
  • Artificial Intelligence (AI): a set of advanced technologies, such as machine learning, that enable a machine to 'learn' from data and refine its results. Unlike traditional algorithms, AI can generate decisions that are not predetermined at the programming stage, allowing for more sophisticated interpretations of information. In the context of public tenders, AI can play increasingly complex roles, such as identifying patterns of irregularity in bids or estimating the reliability of an economic operator based on past data.
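The deterministic character of a 'traditional' algorithm described above can be made concrete with a small sketch that ranks bids according to pre-defined criteria. This is purely illustrative: the weights, field names and figures are invented and do not come from any real tender or regulation.

```python
# Minimal sketch of a deterministic bid-ranking algorithm.
# All names, weights and figures are hypothetical illustrations.

def score_bid(bid, weights):
    """Weighted score from pre-defined criteria: same input, same output."""
    return sum(weights[criterion] * bid[criterion] for criterion in weights)

def rank_bids(bids, weights):
    """Rank bids by descending score; ties broken by bidder name for determinism."""
    return sorted(bids, key=lambda b: (-score_bid(b, weights), b["bidder"]))

weights = {"price": 0.6, "quality": 0.4}   # pre-defined evaluation criteria
bids = [
    {"bidder": "A", "price": 70, "quality": 90},
    {"bidder": "B", "price": 85, "quality": 60},
]
ranking = rank_bids(bids, weights)
```

Because the scoring rule is fixed in advance, running the program twice on the same bids always yields the same ranking; it is precisely this predictability that distinguishes such an algorithm from AI systems whose outputs are not fully predetermined at the programming stage.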

 

Defects in Administrative Acts: Nullity and Voidability of Automated Decisions

In the case of decisions taken by algorithms and/or AI, the concepts of nullity and voidability become particularly relevant, as an error or technical defect may jeopardise the validity of the entire tendering process. Case law has made important distinctions in this context:

  • Nullity: In automated procedures, nullity may result from a fundamental defect, such as a failure to assign decision-making power to a competent body or a breach of constitutional principles. For example, the use of an algorithm that discriminates or violates the principles of impartiality and transparency could lead to the nullity of the act, as it would undermine the fundamental principles of administrative action.
  • Voidability: established for less serious defects that do not radically invalidate the decision but affect its legitimacy. In algorithmic decisions, cases of voidability may include procedural errors, incomplete assessments or omissions. For example, if an algorithm fails to take into account relevant information because of its predefined instructions, the defect would be voidable, as it can be corrected by revision and does not irreversibly invalidate the decision.

 

Human oversight therefore serves as a safeguard against the risks of nullity and voidability in algorithmic decisions. The need for human intervention is not only an ethical but also a legal guarantee. There must be a human contribution capable of verifying, modifying or correcting any errors in the algorithm. The absence of such intervention could lead to irremediable defects, resulting in the nullity of the act, especially if the algorithm operates on discretionary decisions where human control is necessary.

The use of algorithms that do not respect the principle of human oversight could therefore lead to situations where the act is declared null and void, particularly in cases of discrimination or error, with serious economic and procedural consequences.

 

The imputability of decisions

The human oversight principle also requires that any automated administrative decision must be attributable to a natural or legal person who is legally and financially liable for it. Otherwise, a loophole would be created in which the algorithm would operate independently, without effective control and without the possibility of sanctioning any errors it may have made.

Administrative judges have already stated on several occasions that software, although it performs complex calculations and operations, must be supervised and controlled by competent officials. This means that the use of algorithms does not exempt administrative officials from taking full responsibility for the decisions taken. The same approach is taken in the new Procurement Code, which specifies that automation is only allowed if it is accompanied by guarantees of transparency and accountability.

Within this framework, Article 30 of the new Procurement Code requires contracting authorities to ensure that the decision-making rationale is comprehensible and that the source code is available so that any errors can be detected and corrected. This obligation is designed to reduce the risk of nullity by facilitating human control over the functioning of the algorithms.
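One way to make a decision-making rationale comprehensible in practice is to record, alongside each automated outcome, a human-readable explanation of the rule applied. The sketch below illustrates this idea only; the admission rule, names and threshold are hypothetical and are not drawn from the Procurement Code or any real system.

```python
# Hypothetical sketch: pairing each automated decision with a recorded,
# human-readable rationale that an official can inspect and verify.
# The rule, names and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    bidder: str
    admitted: bool
    rationale: str  # plain-language explanation of the rule applied

def check_admission(bidder, price, threshold=100.0):
    """Apply a pre-defined rule and record why the outcome was reached."""
    if price > threshold:
        return Decision(bidder, False,
                        f"price {price} exceeds threshold {threshold}")
    return Decision(bidder, True,
                    f"price {price} within threshold {threshold}")

decisions = [check_admission("A", 80.0), check_admission("B", 130.0)]
```

A record of this kind gives the supervising official something concrete to review: each outcome can be traced back to the rule that produced it, which is the precondition for detecting and correcting errors.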

 

Risks and limitations of algorithmic decision making

The use of algorithms and artificial intelligence in public procurement is a practice that offers benefits in terms of efficiency and consistency of evaluation. However, it also entails a number of specific risks and limitations, which case law and administrative law seek to mitigate.

  • Algorithmic discrimination: the algorithm may reflect unconscious bias if the training data contains past biased information. For example, an automated system that favours certain suppliers based on quantitative criteria or a history of previous contracts could inadvertently exclude new participants or emerging companies, thereby consolidating systemic inequality. To address this risk, Legislative Decree 36/2023 requires that continuous monitoring and review mechanisms be in place to prevent discriminatory effects, with the possibility of corrective action by administrative staff.
  • Data Inaccuracy: As the algorithm relies on input data to generate results, the quality and accuracy of this data is critical. Errors or inconsistencies in the input datasets can lead to incorrect decisions and thus to a flawed evaluation of tenders. Therefore, it is emphasised that the public administration must also set up regular audits of the data used and verify that they reflect the necessary and relevant information for the tender. Incorrect decisions due to data quality must not compromise the principles of good performance and impartiality that the administration is bound to respect.
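The data audits mentioned above can begin as automated consistency checks on the input dataset, run before the algorithm evaluates anything, with the findings passed to a human official. The following is a minimal sketch under invented assumptions: the field names and validation rules are hypothetical, not taken from any real procurement system.

```python
# Hypothetical sketch of a pre-evaluation data audit for tender records.
# Field names and rules are illustrative, not drawn from any real system.

REQUIRED_FIELDS = {"bidder", "price", "submission_date"}

def audit_record(record):
    """Return a list of human-readable problems found in one tender record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    price = record.get("price")
    if isinstance(price, (int, float)) and price <= 0:
        problems.append("non-positive price")
    return problems

def audit_dataset(records):
    """Map each record index to its problems, for review by an official."""
    return {i: p for i, rec in enumerate(records) if (p := audit_record(rec))}

records = [
    {"bidder": "A", "price": 120.0, "submission_date": "2024-01-10"},
    {"bidder": "B", "price": -5.0, "submission_date": "2024-01-11"},
    {"bidder": "C", "price": 99.0},  # missing submission_date
]
report = audit_dataset(records)
```

The point of such a check is not to replace human judgement but to surface data defects before they propagate into a flawed evaluation, so that the official can intervene while correction is still possible.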

 

Case Law and Human Oversight: Selected Cases

By way of example, we consider a series of judgments that have defined and delimited the role of the algorithm and the need for human oversight in automated administrative decisions, introducing elements aimed at protecting the transparency and accountability of the public body, even before the entry into force of the new Public Procurement Code.

 

  • Council of State, Judgment No. 2270/2019:

Although this judgment predates the new Procurement Code, it analyses the case of an automated procedure for the allocation of posts to teachers recruited under an extraordinary plan provided for by law. The appellants essentially complained about the lack of transparency of the algorithm used to assign teaching posts and the irrationality of its results, with allocations that did not respect the preferences expressed, to the detriment of candidates who were better placed in the ranking list.

The complainants also pointed to the impossibility of understanding the criteria on which the automated decisions were based and denounced the fact that the results derived from them appeared to be unmotivated.

The algorithm was therefore perceived as a 'black box', inaccessible and incomprehensible, which prevented control over the fairness of the allocation of teaching posts.

In light of this, the Council of State reaffirmed that the algorithm must be intelligible in all its aspects, including its underlying logic, the criteria used for decision-making and the way in which the data are processed. Such intelligibility strengthens the principle of transparency and allows for judicial control.

Furthermore, this judgment expresses the concept of the algorithm as a 'computerised administrative act'. In practice, an algorithm is not an autonomous entity but an extension of the administrative will. Therefore, it must be designed and managed in accordance with the principles of proportionality, rationality and transparency. Administrative discretion must be exercised in the programming phase of the algorithm, not in its execution.

The administrative court must therefore be able to assess the logic and rationality of the algorithm, the correctness of the data entered and the decisions taken. This guarantees the citizen's right of defence and the full effectiveness of judicial review.

Finally, the judgment makes clear that the use of algorithms does not relieve the administration of its responsibility to ensure the fairness and lawfulness of decisions, stressing that any error in the automated process is attributable to the public body. It also states that the public administration must provide ways of verifying the results produced by algorithms, so that any errors or inconsistencies in the decision-making process can be corrected and made transparent.

 

  • TAR Lazio, Judgment No. 3769/2017

A similar but earlier case concerned access to administrative acts relating to the algorithm used by the Ministry of Education and Research (MIUR) to manage the interprovincial mobility of teachers in the 2016/2017 school year. The applicant requested access to the source code of the software implementing the algorithm, after the administration had refused such access and merely provided a general description of how the algorithm worked.

However, the TAR (Regional Administrative Court) did not consider the mere description of the algorithm provided by the administration to be sufficient and stressed that access to the source code was necessary for a full understanding of how the algorithm functioned, so that any errors or inconsistencies in the automated processing could be verified.

Although software is protected as intellectual property, this protection does not override the right of access when this is necessary to ensure respect for the rights of interested parties, provided that access does not jeopardise the economic exploitation of the software.

 

Conclusions and future prospects

In conclusion, the introduction of algorithms in tenders has clear advantages, but these tools must be used carefully to avoid grounds for nullity and voidability. The principle of human oversight is a safeguard against the risk that fully automated administrative action loses transparency and legitimacy. To ensure the correct application of algorithms, administrations must provide for human supervision and for transparency of the logic used.

In addition, the new Italian Procurement Code requires that the decision-making logic be comprehensible and that the source code be accessible, so that any errors can be identified and corrected.

In short, it is possible to strike a balance between automation and lawfulness. But this requires a solid regulatory framework and constant vigilance to ensure that technological efficiency does not compromise the protection of tenderers' rights and the legitimacy of administrative action.