EU AI Act Published
The European Union’s landmark AI Act has been officially published and comes into force on August 1, 2024. The comprehensive regulation takes a phased approach to AI governance, with most provisions applying from August 2026. The law sorts AI use cases into risk tiers, each carrying different obligations for developers. Most AI applications are deemed low risk and face no new obligations, but certain high-risk uses, such as biometric AI and AI in law enforcement, must meet stringent data quality and anti-bias requirements.
The Act also introduces transparency requirements for general-purpose AI (GPAI) models, such as OpenAI’s GPT family. The most powerful GPAIs, identified by compute thresholds, will additionally need to conduct systemic risk assessments. Despite heavy lobbying from parts of the industry and from some member states, the EU has kept robust obligations on GPAIs, aiming to balance innovation with safety and ethical considerations.
Key compliance deadlines have been set. Six months after entry into force, in February 2025, the Act’s prohibitions take effect, outlawing practices such as social credit scoring and the untargeted scraping of facial images to build recognition databases. Nine months in, by May 2025, codes of practice for AI developers are due, with the EU’s AI Office overseeing the drafting process. Transparency requirements for GPAIs start to apply 12 months after entry into force, in August 2025.
The most generous compliance deadline is reserved for certain high-risk AI systems (chiefly AI embedded in products already regulated under EU product-safety law), which have 36 months post-enforcement, until August 2027, to meet their obligations. Other high-risk systems must comply within 24 months, by August 2026. The phased implementation aims to strike a balance, ensuring AI development aligns with ethical standards while leaving room for innovation.
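Since every milestone above is simply a month offset from the August 1, 2024 entry-into-force date, the minimal Python sketch below can serve as a quick reference; the labels and offsets restate the timeline described in this article, and the exact legal application dates fall at the start of each month shown.

```python
from datetime import date

# Entry into force of the AI Act, per the article above.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (every offset here lands on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Month offsets restating the phased timeline described in the article.
MILESTONES = [
    (6,  "Prohibitions on banned AI practices apply"),
    (9,  "Codes of practice for AI developers due"),
    (12, "Transparency rules for GPAI models apply"),
    (24, "Most high-risk AI systems must comply"),
    (36, "Remaining high-risk AI systems must comply"),
]

for offset, label in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, offset):%B %Y}: {label}")
```

Running it prints the month of each milestone, from February 2025 through August 2027.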