AI Act Still Lacks Guidance on Banned AI Systems
The EU’s AI Act faces scrutiny over a lack of guidance on banned systems, with concerns about enforcement and exceptions as the February deadline looms.
The EU’s AI Office needs more staff to handle its AI regulation workload; it currently lags behind the UK’s AI oversight capacity, posing risks to EU citizens and businesses.
The second draft of the General-Purpose AI Code of Practice outlines compliance measures for AI providers under the AI Act, focusing on transparency, risk management, and systemic risk obligations.
The General-Purpose AI Code of Practice drafting process will start with an online kick-off plenary on September 30, involving nearly 1,000 global stakeholders.
The European Parliament is forming a monitoring group to oversee the AI Act’s implementation, emphasizing transparency and civil society involvement.
The European Commission allows AI providers to draft their own compliance codes, raising concerns about civil society’s limited role and potential industry bias.
The first AI Board meeting set the groundwork for the AI Act’s implementation, focusing on governance, supervision, and organizational structure.
The EU AI Act highlights the need for comprehensive evaluations of AI models to mitigate systemic risks and ensure safe deployment.
The European Commission has established a new AI Office, led by Lucilla Sioli, to oversee implementation of and compliance with the AI Act; the unit is set to employ 140 experts and began operations on 16 June.
Europe’s AI Act progresses as member states are asked to appoint AI regulators, setting a foundation for unified AI governance.