AI Act Omnibus Moves Forward with Delayed High-Risk Obligations
The European Parliament backs delayed AI Act obligations, fixed application dates, a ban on nudifier apps, and added flexibility for regulated products and growing EU tech companies.
EU policymakers stress that simplifying EU digital laws must preserve strong regulatory interplay between the GDPR, DSA, DMA, and AI rules to ensure consistent enforcement and protect fundamental rights.
MEPs advance AI Act amendments extending high-risk compliance deadlines and tightening deepfake bans, amid industry concerns over limited simplification and overlapping EU digital regulation.
The FRIA guide explains how to assess and manage fundamental rights risks of high-risk AI systems under the EU AI Act.
The Commission’s second draft AI transparency code simplifies marking and labelling duties under the AI Act, adding flexibility for providers and deployers ahead of August 2026.
The European Parliament has disabled built‑in AI tools on work devices, citing data security and cloud processing risks, underscoring growing institutional caution toward AI use.
EDPB and EDPS back simplification for AI Act implementation but warn against measures that weaken data protection, urging narrow use of sensitive data, retained registration, DPA oversight, and timely rules.
The Commission’s draft AI Code of Practice outlines voluntary transparency measures, including a common EU icon and watermarking, to help companies comply with AI Act deepfake rules.
The Commission has started drafting an AI Act Code of Practice to clarify transparency duties for generative AI ahead of the AI Act’s application in August 2026.
EDPS has mapped high-risk AI use across EU institutions, preparing its market surveillance role under the AI Act and identifying priority areas such as AFSJ and AI in recruitment.