European Parliament Restricts AI Use on Official Devices
The European Parliament has disabled built‑in AI tools on work devices, citing data security and cloud processing risks, underscoring growing institutional caution toward AI use.
EDPB and EDPS back simplification for AI Act implementation but warn against measures that weaken data protection, urging narrow use of sensitive data, retention of registration requirements, oversight by data protection authorities (DPAs), and timely implementing rules.
The Commission’s draft AI Code of Practice outlines voluntary transparency measures, including a common EU icon and watermarking, to help companies comply with AI Act deepfake rules.
The Commission has started drafting an AI Act Code of Practice to clarify transparency duties for generative AI ahead of the AI Act’s application in August 2026.
EDPS has mapped high-risk AI use across EU institutions, preparing its market surveillance role under the AI Act and identifying priority areas such as the area of freedom, security and justice (AFSJ) and AI in recruitment.
FRA warns that AI Act implementation is undermined by weak fundamental rights impact assessment tools, an unclear high-risk scope, and ineffective oversight, urging broader safeguards for fundamental rights.
The Commission’s new AI Act whistleblower tool offers a secure, confidential EU-wide channel to report suspected AI Act breaches directly to the EU AI Office.
Council Decision (EU) 2025/2350 backs the Council of Europe’s AI equality recommendation, provided it stays fully consistent with the EU AI Act and imposes no additional obligations.
The Commission plans to delay high-risk AI rules to August 2027 to allow technical standards to mature and to support competitiveness, while keeping core AI Act prohibitions in force.
EDPS guidance sets technical risk controls for fairness, accuracy, data minimisation, and security in AI systems, stressing interpretability, lifecycle governance, provider transparency, and support for data subject rights.