On May 8, 2026, the European Commission (“Commission”) published draft guidelines (“Guidelines”) on the implementation of the transparency obligations under Article 50 of the EU Artificial Intelligence Act (“AI Act”), opening a targeted consultation that runs until June 3, 2026.
The Guidelines are non-binding, but they are the first Commission instrument to provide interpretive guidance across the full scope of Article 50. They were prepared in parallel with the related, but more narrowly scoped, Code of Practice on Transparency of AI-Generated Content (“Code of Practice” or “Code”), the second draft of which was published on March 5, 2026.
Below, we summarise 10 takeaways.
1. The Guidelines cover the full scope of Article 50 — including obligations the Code of Practice does not address
A point that bears emphasis at the outset is the difference in coverage between the Guidelines and the draft Code of Practice. The Code of Practice addresses only the obligations under Article 50(2) (machine-readable marking and detection of AI-generated content, applicable to providers) and Article 50(4) (labelling of deepfakes and certain AI-generated text publications, applicable to deployers). The Guidelines cover Article 50 as a whole, including two categories of obligations that are not addressed by the Code:
- Article 50(1): The obligation of providers of interactive AI systems (e.g., chatbots, virtual assistants, and AI companions) to inform natural persons that they are interacting with an AI system;
- Article 50(3): The obligation of deployers of emotion recognition and biometric categorisation systems to inform exposed natural persons of the system’s operation.
The Guidelines also provide standalone guidance on Article 50(5) — the horizontal requirement that transparency information required under Article 50(1)-(4) be provided in a “clear and distinguishable manner” at the latest at the time of first interaction or exposure.
2. Responsibilities across the AI value chain — and their limits
The Guidelines clarify how Article 50 responsibilities are distributed across the AI value chain:
- Providers bear the upstream obligations: designing interactive AI systems to disclose their artificial nature (Article 50(1)) and ensuring that synthetic content is marked and detectable (Article 50(2));
- Deployers bear the downstream-facing obligations: informing individuals exposed to emotion recognition and biometric categorisation systems (Article 50(3)) and labelling deepfakes and certain AI-generated text publications (Article 50(4)).
Notably, the Guidelines clarify that actors whose role is limited to disseminating or transmitting AI-generated content created by third parties — including online platforms — are not deployers within the meaning of the AI Act, because they do not exercise “authority” over the AI system that generated the content. The Guidelines nonetheless encourage these actors to preserve the machine-readable marks and labels applied upstream and to take “appropriate measures” to ensure that individuals exposed to the content are informed of its artificial origin.
3. Art. 50(1): When interaction with an AI system is — and is not — “obvious”
Article 50(1) exempts providers from the disclosure obligation where the AI nature of the interaction is “obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect.” The Guidelines adopt the “average consumer” standard from EU consumer protection law as the benchmark for assessing obviousness, and provide a multi-factor test that considers the target audience, the potential for vulnerable groups (including children, elderly persons, and persons with disabilities) to be part of that audience, and the level of AI and digital literacy among the intended users.
Notably, the Guidelines provide concrete examples of when the exception does and does not apply. AI-powered code assistance chatbots available only to professional developers, as well as AI-enabled non-playable characters (NPCs) in video games, may meet the obviousness threshold. By contrast, AI-enabled robotic companion pets designed to mimic natural human-pet interaction, AI avatars in immersive environments, and chatbots embedded in online helpdesks may not.
The Guidelines also confirm that agentic AI systems fall within the scope of Article 50(1) where they are designed to interact with the persons instructing them or with other natural persons. Where the provider cannot reliably determine whether the AI agent will interact with a natural person, the agent should be instructed to disclose itself as an AI system in every situation where such interaction is likely.
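To make the timing requirement concrete, the sketch below shows one way an interactive system might surface a disclosure at the latest at the time of the first interaction. It is a minimal illustration only: the generate_reply() backend, the class name, and the disclosure wording are all hypothetical, and nothing in the Guidelines prescribes this particular pattern.

```python
# Minimal sketch of an Article 50(1)-style disclosure wrapper.
# generate_reply(), the class name, and the disclosure wording are
# hypothetical; this is an illustration, not a prescribed pattern.

DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)


def generate_reply(message: str) -> str:
    # Placeholder for the actual model call (hypothetical backend).
    return f"Echo: {message}"


class DisclosingChatSession:
    """Chat session that discloses its artificial nature at the latest
    at the time of the first interaction."""

    def __init__(self) -> None:
        self._disclosed = False

    def send(self, message: str) -> str:
        reply = generate_reply(message)
        if not self._disclosed:
            # Surface the disclosure with the first reply, kept separate
            # from the generated content so it remains "clear and
            # distinguishable" in the sense of Article 50(5).
            self._disclosed = True
            return f"[{DISCLOSURE}]\n{reply}"
        return reply


if __name__ == "__main__":
    session = DisclosingChatSession()
    print(session.send("Hello"))   # first reply carries the disclosure
    print(session.send("Thanks"))  # later replies do not repeat it
```

Keeping the disclosure separate from the generated reply, rather than woven into it, also anticipates the Article 50(5) requirement (discussed in takeaway 8) that the information be distinguishable from the surrounding content.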
4. Art. 50(2): Multi-layered marking, technical feasibility, and the boundaries of “standard editing”
For providers of AI systems generating or manipulating synthetic content, the Guidelines confirm the draft Code of Practice’s position that no single marking technique currently meets the Article 50(2) requirements of effectiveness, interoperability, robustness, and reliability simultaneously. A combination of techniques is therefore necessary under the current state of the art.
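To illustrate what “a combination of techniques” can mean, the sketch below layers an in-band technique (metadata embedded in the output file) with an out-of-band one (a registry entry keyed to a hash of the pixel data, which survives metadata stripping). It is a toy example under stated assumptions: Pillow is used for PNG handling, and the “ai_generated” field and registry layout are invented for the illustration, not taken from the Code of Practice.

```python
# Toy sketch of multi-layered machine-readable marking for a PNG output.
# Assumes Pillow; the "ai_generated" field and the registry format are
# hypothetical, not the Code of Practice's actual schema.
import hashlib

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def pixel_digest(img: Image.Image) -> str:
    # Hash the decoded pixels, not the file bytes, so the record still
    # matches if a downstream actor strips the embedded metadata.
    return hashlib.sha256(img.convert("RGB").tobytes()).hexdigest()


def mark(img: Image.Image, path: str, registry: dict) -> None:
    # Layer 1: in-band metadata that travels with the file itself.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(path, pnginfo=meta)
    # Layer 2: out-of-band registry entry keyed by content hash.
    registry[pixel_digest(img)] = {"ai_generated": True}


def detect(path: str, registry: dict) -> bool:
    with Image.open(path) as img:
        if img.info.get("ai_generated") == "true":
            return True  # layer 1 intact
        return pixel_digest(img) in registry  # fall back to layer 2


if __name__ == "__main__":
    registry: dict = {}
    img = Image.new("RGB", (64, 64), "white")  # stand-in for generated output
    mark(img, "out.png", registry)
    # True via layer 1; layer 2 would still match if metadata were stripped.
    print(detect("out.png", registry))
```

Neither layer on its own survives re-encoding or cropping, which is precisely why the Guidelines treat robustness as achievable only through complementary techniques; a production pipeline would typically add an imperceptible watermark and use a standardised, signed provenance format (such as C2PA manifests) rather than the ad hoc fields above.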
The Guidelines also define what “technically feasible” means under Article 50(2): solutions capable of being implemented using currently available technology and methods, within the specific technical architecture and operational environment. Notably, the Guidelines clarify that technical feasibility is an “objective notion that is not dependent on the specific resources and capabilities of individual providers” — suggesting that the Commission does not intend to allow smaller providers to argue that compliance is technically infeasible simply because it would be costly or resource-intensive for them.
The Guidelines note certain narrow cases in which compliance obligations may be reduced or disapplied. For example, AI systems generating outputs in closed industrial environments (e.g., navigation systems in vehicles) may rely on a single marking technique where the risks of deception are low. There is also a “business-to-business” exception for strictly technical AI outputs (e.g., engineering designs, industrial production workflows) intended only for a limited, pre-defined set of professionals. Ephemeral, real-time content in video games may also be exempted where users are aware of the AI origin and the content is consumed immediately without being stored.
The Guidelines also provide detailed examples of what constitutes “standard editing” (exempt) versus substantive alteration (not exempt). Grammar correction, spellchecking, noise reduction, and minor colour adjustments would be exempt. AI-generated translations, summaries, object removal, face alteration, and converting black-and-white images to colour are substantive changes that require marking.
5. Art. 50(3): Emotion recognition and biometric categorisation — deployers must inform, with flexibility on format
The Guidelines’ treatment of Article 50(3) is relatively straightforward. Deployers of emotion recognition and biometric categorisation systems must inform all natural persons exposed to the system’s operation — including children — at the latest at the time of first exposure. The AI Act does not prescribe a particular format; depending on the context, the information may be provided in writing, by standardised icons, orally, or through a combination of these.
6. Art. 50(4) deepfakes: Intent to deceive is irrelevant; artistic work obligations are attenuated, not eliminated
The Guidelines’ treatment of deepfakes under Article 50(4) is notable in several respects. First, the definition of “deep fake” under Article 3(60) of the AI Act requires that content “would falsely appear to a person to be authentic or truthful.” The Guidelines clarify that this assessment does not consider the deployer’s intention to deceive or mislead. Rather, it requires due consideration of the “possible (diverse) composition of the audience” — including whether children, elderly persons, or other groups with lower digital literacy may be exposed to the content. The Guidelines also state that content may constitute a deepfake so long as it resembles “someone or something that can exist or could have existed in reality” (para. 107, emphasis added). This suggests that resemblance to an actually existing, recognisable person, thing, or event (e.g., a celebrity, artwork, place, or building) is not required.
In addition, the Guidelines provide interpretive guidance on “evidently artistic, creative, satirical, fictional or analogous” works or programmes under Article 50(4), emphasising that such content receives an attenuated — but not eliminated — transparency obligation. Deployers must still disclose the AI origin but may do so in a manner that does not hamper the display or enjoyment of the work. They must also ensure that the rights and freedoms of third parties are safeguarded and respected (e.g., intellectual property rights). Importantly, the Guidelines exclude from the scope of the artistic carve-out content that “serves primarily an informative or commercial purpose and is recognisable as such” — meaning, for example, that AI-generated deepfakes of celebrities in commercial advertising cannot benefit from it.
Notably, the Code of Practice complements the draft Guidelines on this front by specifying modality-specific placement requirements (e.g., icons or labels displayed consistently throughout short videos, at the beginning and at regular intervals for long videos) as well as a proposed “common EU icon” featuring the capitalised acronym “AI” for visual content.
7. Art. 50(4) human review exception for AI-generated text publications: Narrowly construed
For AI-generated or manipulated text published with the purpose of informing the public on matters of public interest, Article 50(4) provides an exception where the text has undergone “human review or editorial control” and a natural or legal person holds “editorial responsibility.” The Guidelines construe this exception narrowly: human review must involve “deliberate examination of the substance of the content” by persons with “relevant competence and professional judgement.” Superficial, solely formal or procedural checks — such as spell-checking, grammatical correction, or cursory editorial approval without substantive engagement — cannot satisfy the exception.
8. Art. 50(5): “Clear and distinguishable” disclosure — what it means in practice
The Guidelines define the two limbs of the Article 50(5) standard. Information is “clear” where it is noticeable and easy to understand by the person concerned, including persons with accessibility needs. It is “distinguishable” where it is easy to identify as separate from other information and the environment in which the content is presented. Per the Guidelines, a disclosure buried in terms and conditions, manuals, or layered menu options would not meet this standard.
The Guidelines also clarify that “first interaction or exposure” is not a one-time obligation tied to the first person who encounters the content — it applies with respect to each natural person who is exposed to the output of the AI system. The Guidelines illustrate this by noting that viewers may tune into a broadcast featuring deepfakes midway through, meaning that disclosures should be repeated, or persist, throughout the broadcast.
9. Interplay with the DSA: Complementary obligations, converging in practice
The Guidelines address the interplay between the AI Act’s transparency obligations and those of the Digital Services Act (“DSA”). The AI Act requires deployers to label deepfakes under Article 50(4); the DSA requires providers of very large online platforms and search engines (“VLOPs” and “VLOSEs”) to assess and mitigate systemic risks arising from AI-generated content under Articles 34 and 35 DSA — which may include (as a mitigation) ensuring that deepfakes presented on their online interfaces are “distinguishable through prominent markings” (Article 35(1)(k)).
The DSA obligation is broader in one respect: it covers manipulated content that falsely appears authentic regardless of the technology used to create it, not only AI-generated content. However, in practice the two frameworks converge, and the Guidelines note that where a VLOP or VLOSE makes labelling tools available to deployers, those deployers may rely on such tools to fulfil their Article 50(4) obligations. The Guidelines also clarify that AI Act marking and labelling requirements do not obviate the assessment of whether content is illegal under other laws: a labelled deepfake may still be unlawful (e.g., under criminal law or intellectual property rules), and an unlabelled deepfake may itself constitute “illegal content” within the meaning of the DSA by virtue of non-compliance with Article 50.
10. Enforcement: Non-signatories to the Code of Practice will face greater scrutiny
Finally, the Guidelines contain an important signal regarding enforcement. For providers and deployers that sign the Code of Practice (once finalised), the Commission and competent market surveillance authorities will “focus their supervisory activities on assessing whether those signatories have adhered to the code of practice.” Such signatories will “benefit from increased trust from the Commission, the other competent market surveillance authorities and other stakeholders.”
By contrast, providers and deployers that are not signatories to the Code of Practice will be expected to demonstrate how they have complied with Articles 50(2) and 50(4), including by carrying out a gap analysis comparing their measures against the Code.
This two-track enforcement approach appears intended to create a practical incentive for regulated entities to join the Code of Practice, even though adherence is formally voluntary.
Timeline and Next Steps
The Guidelines arrive less than three months before the Article 50 transparency obligations become applicable on August 2, 2026 — including the interactive AI disclosure requirement (Article 50(1)), emotion recognition and biometric categorisation (Article 50(3)), and deepfake labelling (Article 50(4)).
For providers subject to the marking and detection obligations under Article 50(2), the timeline may differ: on May 7, 2026, the European Parliament and Council reached a provisional agreement on the Digital Omnibus on AI, which, if adopted, would grant providers of generative AI systems already on the EU market before August 2, 2026 a transitional period until December 2, 2026 to bring their systems into compliance. Systems placed on the EU market or put into service in the EU from August 2, 2026 onwards must comply from that date.
The consultation on the draft Guidelines closes on June 3, 2026. The final version of the Code of Practice is also expected in June. Together, the two instruments will form the primary compliance framework for the AI Act’s transparency regime — with the Guidelines providing the Commission’s interpretation of Article 50 and the Code providing the technical implementation details for Articles 50(2) and 50(4).