Should you disclose that you used an AI system to generate content? Transparency obligations for deployers
AI systems are increasingly being used to generate text, images, audio and video content.
Already in 2019, the Commission-appointed High-Level Expert Group on Artificial Intelligence (AI HLEG) drafted the Ethics Guidelines for Trustworthy AI, which set out seven principles for trustworthy and ethically sound AI.
The seven principles are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; social and environmental well-being; and accountability.
The recent Regulation (EU) 2024/1689 (the AI Act) places great emphasis on the principle of transparency.
Firstly, it clarifies that transparency means that “AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights” (Recital 27 AI Act), but also that deployers should “clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the AI output accordingly and disclosing its artificial origin” (Recital 134).
Article 50 then specifically sets out transparency obligations for both providers and deployers, the latter being defined as “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity” (Article 3, ‘Definitions’).
In this article, we will specifically address the obligation of transparency imposed on deployers in relation to content generated with the aid of an AI system in the exercise of a professional activity.
Transparency obligations for deployers
In relation to content generated with the aid of an AI system, Article 50(4) of the AI Act lays down the following obligations for deployers:
- Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. (…) Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
- Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
Let us analyse the above provisions.
In relation to image, audio or video content, Recital 134 helps us to understand what is meant by a deep fake and therefore when the transparency obligation applies. It states that deployers must disclose, in a clear and unambiguous manner, that content has been artificially created or manipulated where the image, audio or video content appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (a so-called ‘deep fake’). Deployers must comply with this obligation by labelling the AI-generated output and disclosing its artificial origin.
In the case of AI-generated text, the obligation to disclose its artificial origin applies only where the text is published with the purpose of informing the public on matters of public interest. The text must therefore be primarily addressed to the public, and its content and purpose must be to ‘inform’ it.
This obligation does not apply where the text has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication. In that case, however, the practical difficulty lies in proving that the published text is not merely the output of a machine but the result of human editing.
Having examined the transparency obligations imposed on deployers of AI systems, we conclude this brief discussion with a practical example of how they might apply.
It is easy to imagine that the use of AI touches on various areas, including advertising.
In this respect, a brand may use AI to generate images for advertising and promotional purposes. However, as mentioned above, the public must be properly informed of the image's artificial origin, and the principles of transparency enshrined in the Regulation, which also underlie the advertising regulatory framework (including the principle of non-deception), must be respected.
Although the transparency obligations of the AI Act are not yet applicable, for transparency reasons it would already be appropriate to include a disclaimer such as "Image generated/modified with AI".
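By way of illustration only, a deployer publishing images online could attach such a disclaimer programmatically. The sketch below is purely hypothetical: the AI Act requires clear disclosure but does not prescribe any particular wording, mechanism or data structure, and the `ImageAsset` class and `with_disclosure` function are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical disclaimer wording; the AI Act does not mandate a specific formula.
AI_DISCLAIMER = "Image generated/modified with AI"


@dataclass
class ImageAsset:
    """A published image together with its visible caption (illustrative only)."""
    filename: str
    caption: str
    ai_generated: bool = False


def with_disclosure(asset: ImageAsset) -> ImageAsset:
    """Return the asset with the AI-origin disclaimer appended to its caption.

    Only AI-generated assets are labelled, and the disclaimer is not
    duplicated if it is already present.
    """
    if asset.ai_generated and AI_DISCLAIMER not in asset.caption:
        labelled_caption = f"{asset.caption} ({AI_DISCLAIMER})".strip()
        return ImageAsset(asset.filename, labelled_caption, asset.ai_generated)
    return asset
```

For example, `with_disclosure(ImageAsset("ad.png", "Summer campaign", ai_generated=True))` would yield a caption ending in the disclaimer, while a non-AI image would pass through unchanged.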
Finally, with regard to advertising in particular, in addition to the application of Article 50 analysed above, attention must also be paid to the content of Article 5 of the AI Act.
In a nutshell, Article 5 prohibits, among other practices, the use of AI systems that deploy subliminal or purposefully manipulative techniques to distort a person's behaviour, inducing them to take decisions they would not otherwise have taken, in a manner that causes or is likely to cause significant harm.
This provision is particularly relevant to advertising, whose very purpose is to promote a particular service, product or brand by appealing to the public through a variety of techniques and means of communication in order to win its approval.
Nevertheless, as the same article makes clear, the public's self-determination, and therefore its free choice, cannot be manipulated, even for the benefit of a commercial activity that, by its very nature, aims to attract a large audience.
In this respect, it should be noted that the principle just described, together with the obligation of transparency, is also fully consistent with the principles of the Italian regulatory framework on advertising (Art. 20 et seq. of Legislative Decree 206/2005, the so-called "Consumer Code"), such as the prohibition of subliminal advertising, which is aimed at protecting consumers.