EU Commission Publishes Q&A on AI Literacy
The European Commission’s Q&A clarifies that AI literacy obligations under the AI Act apply from February 2025, with enforcement starting August 2026 and broad requirements for training staff.
The European Commission is reviewing Meta’s risk assessment for its AI chat tool to ensure compliance with the Digital Services Act amid ongoing regulatory scrutiny in the EU.
The third draft of the EU Code of Practice for general-purpose AI models refines commitments for transparency, copyright, and systemic risk, aligning with the AI Act and inviting stakeholder feedback.
The EU Commission’s updated cybersecurity blueprint enhances crisis management, strengthens collaboration, and promotes resilience against large-scale cyber incidents.
The European Parliament continues to push for AI liability rules despite the European Commission’s plan to withdraw the directive due to negotiation challenges.
X is challenging a Berlin court’s decision on DSA compliance, citing due-process and impartiality concerns, in a dispute with implications for user privacy and free speech amid tensions over EU digital law enforcement.
Many European start-ups believe the EU AI Act stifles innovation, but experts argue that its risk-based regulation of AI systems in fact fosters both trust and innovation.
The second draft of the General-Purpose AI Code of Practice outlines compliance measures for AI providers under the AI Act, focusing on transparency, risk management, and systemic risk obligations.
The EU is investigating TikTok for potential DSA violations related to election integrity during the Romanian elections, focusing on foreign interference and political advertising policies.
The OECD report outlines AI’s potential benefits and risks, urging policymakers to establish effective governance and safety measures.