EU starts work on its code of practice for AI content labelling
The European Commission has initiated work on a voluntary code of practice for marking and labelling AI‑generated content, with a kick‑off plenary held on 5 November 2025. Under the AI Act, providers and deployers of certain AI systems—especially generative and interactive tools—will be subject to transparency obligations requiring clear identification of synthetic content, including deepfakes, text, audio, images, and video. The move responds to the increasing difficulty of distinguishing AI outputs from human‑produced material and aims to curb misinformation, fraud, impersonation, and consumer deception.
An independent group of experts appointed by the European AI Office will lead a seven‑month, stakeholder‑driven drafting process for the code, incorporating input from a public consultation and an open call for stakeholders. The code is intended as a practical instrument to help providers operationalize compliance with the AI Act’s transparency rules, focusing on machine‑readable labelling to support detection and verification across platforms and tools. It will also guide deployers on disclosure obligations when using deepfakes or synthetic content in contexts of public interest.
The envisaged technical measures include consistent metadata, watermarking, and other machine‑readable signals to identify AI involvement. The Commission’s emphasis is on interoperability and robustness, aiming for solutions that can be implemented across diverse media formats and distribution channels. The code seeks to balance innovation with accountability, avoiding prescriptive burdens while promoting best practices that can be rapidly adopted and audited.
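To make the idea of a machine-readable signal concrete, here is a minimal sketch of a provenance label: a JSON manifest that binds an "AI-generated" assertion to a hash of the content, so that any later modification of the media invalidates the label. This is an illustration only; the field names and structure are hypothetical and do not reflect the C2PA standard, the AI Act, or anything the code of practice has adopted.

```python
import hashlib
import json

def make_ai_label(content: bytes, generator: str) -> str:
    """Build a simplified machine-readable provenance label.

    Illustrative only: field names are hypothetical, not drawn from
    any adopted labelling standard.
    """
    manifest = {
        "ai_generated": True,  # the core disclosure signal
        "generator": generator,  # tool that produced the content
        # Hash binds the label to this exact content
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_ai_label(content: bytes, label: str) -> bool:
    """Check that a label still matches the content it describes."""
    manifest = json.loads(label)
    return (
        manifest.get("ai_generated") is True
        and manifest["sha256"] == hashlib.sha256(content).hexdigest()
    )
```

A platform receiving labelled content can recompute the hash and reject labels that no longer match, which captures in miniature the robustness and verifiability properties the Commission emphasizes; real-world schemes additionally use cryptographic signatures and embedded watermarks rather than a detachable sidecar.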
These transparency obligations will become applicable in August 2026, complementing existing AI Act provisions on high‑risk systems and general‑purpose AI models. The code of practice is expected to accelerate readiness among providers and deployers, facilitate alignment with platform policies, and enhance cross‑industry coordination on detection standards. Legal teams and compliance officers should track the drafting process and prepare for integration of machine‑readable labels and disclosure workflows into product and communication pipelines.