Commission Issues Guidelines on General-Purpose AI Model Obligations
The EU Commission’s new guidelines clarify AI Act obligations for general-purpose AI model providers, detailing compliance timelines, exemptions, and enforcement.
The Commission published a report analyzing public consultation feedback on the AI Act, clarifying definitions and prohibited practices to support effective compliance.
Funding shortages and lack of technical expertise threaten the effective enforcement of the EU AI Act, raising concerns about member states’ regulatory capacity.
Delays in EU AI Act technical standards mean companies may face uncertainty until at least 2026 as standardization bodies work to reach consensus on standards that support compliance.
The EU AI Office launched a living repository of AI literacy practices to support Article 4 of the AI Act, promoting transparency, skill-building, and collaboration among AI providers and deployers.
The EU Commission has adopted rules to establish a scientific panel of AI experts to assist in the enforcement and governance of the AI Act.
The EU’s AI Act faces scrutiny over the lack of guidance on banned systems, raising concerns about enforcement and exceptions as the February deadline looms.
The EU’s AI Office needs more staff to handle AI regulations, as it currently lags behind the UK’s AI oversight capacity, a gap that poses risks to EU citizens and businesses.
The second draft of the General-Purpose AI Code of Practice outlines compliance measures for AI providers under the AI Act, focusing on transparency, risk management, and systemic risk obligations.
The General-Purpose AI Code of Practice drafting process will start with an online kick-off plenary on September 30, involving nearly 1000 global stakeholders.