OECD report: Risks, Benefits, and Governance Strategies
The OECD report outlines AI’s potential benefits and risks, urging policymakers to establish effective governance and safety measures.
FPF’s report highlights the need for structured AI impact assessments to manage risks, urging organizations to improve information gathering, education, and risk measurement strategies.
Tech giants and European firms are embracing “sovereign AI,” focusing on local infrastructure and data sovereignty to enhance competitiveness and cultural alignment within EU digital law frameworks.
The EU’s draft Code of Practice for AI model providers focuses on transparency and systemic risk management, seeking feedback to refine compliance guidelines under the AI Act.
The EU is consulting stakeholders to refine AI Act guidelines, focusing on defining AI systems and banned uses, with guidance expected in early 2025.
The EU Parliament’s AI monitoring group will oversee AI Act implementation, co-chaired by McNamara and Benifei, focusing on AI regulation based on societal risk.
The EU leads in AI regulation with its new framework; LatticeFlow's Compl-AI tool assesses AI models' compliance, revealing gaps in fairness and resilience and underscoring the need for balanced development.
The World Economic Forum and GEP’s guide helps businesses adopt AI responsibly, focusing on transparency, accountability, and ethical principles to drive growth and efficiency.
The EU Commission’s first Code of Practice plenary for general-purpose AI revealed significant disagreements between GPAI providers and other stakeholders over risk-management and transparency requirements.
The complementary impact assessment calls for a broader and more balanced regulatory framework for AI and software liability in the EU.