AI Act Still Lacks Guidance on Banned AI Systems
The EU’s AI Act faces scrutiny over a lack of guidance on banned systems, with concerns about enforcement and exceptions, as the February deadline looms.
OpenAI’s Media Manager, a promised tool for creators to control whether their work is included in AI training data, remains undeveloped, drawing skepticism about its effectiveness in addressing IP concerns.
The AI Action Summit in France aims to position Europe as a leader in AI by fostering trust, sustainability, and global cooperation, following similar events in the UK and South Korea.
The Dutch data protection watchdog urges faster AI standardization under the EU’s AI Act, emphasizing compliance and safety, while national and EU initiatives prepare businesses for the upcoming rules.
Experts caution against excessive detail in EU digital laws, advocating for clarity and simplicity to ensure effective regulatory frameworks.
Some European start-ups believe the EU AI Act stifles innovation, but experts argue the opposite: by regulating AI systems according to their risk level, it fosters both trust and innovation.
The EU’s AI Office needs more staff to enforce the bloc’s AI regulations; it currently lags behind the UK’s AI oversight capacity, posing risks to EU citizens and businesses.
The second draft of the General-Purpose AI Code of Practice outlines compliance measures for AI providers under the AI Act, focusing on transparency, risk management, and systemic risk obligations.
The OECD report outlines AI’s potential benefits and risks, urging policymakers to establish effective governance and safety measures.
FPF’s report highlights the need for structured AI impact assessments to manage risks, urging organizations to improve information gathering, education, and risk measurement strategies.