Commission Starts Drafting AI Act Code of Practice on GenAI Transparency
The European Commission has launched work on a Code of Practice to support compliance with Article 50 of the EU AI Act, which introduces transparency obligations for providers and deployers of generative AI systems. The initiative aims to ensure that AI-generated or manipulated content can be clearly identified, thereby reducing risks of deception and safeguarding trust in public information.
The Code will address a wide range of generative outputs, including text, audio, images, and video. Its objective is to promote clear labeling and technical solutions that make AI-generated content recognizable, while keeping requirements proportionate and technologically feasible. This focus reflects growing concerns about deep fakes and the broader societal impact of synthetic media.
Drafting will be led by two dedicated working groups. One will focus on providers of generative AI systems and develop guidance on machine-readable marking and other robust technical measures. The other will address deployers, with particular attention to disclosure obligations for AI-generated content and deep fakes, especially where matters of public interest are involved. Cross-cutting issues, such as user transparency and cooperation across the AI value chain, will also be covered.
Participation is open to a broad range of stakeholders, including AI developers, civil society organizations, academic experts, and large online platforms. The process will run for approximately seven months, supported by independent chairs and vice-chairs, with European and international observers contributing where relevant. The final Code of Practice is expected by May or June 2026, ahead of the AI Act becoming fully applicable in August 2026.