FPF publishes report on assessing AI risks
FPF’s report highlights the need for structured AI impact assessments to manage risks, urging organizations to improve information gathering, education, and risk measurement strategies.
The EU’s draft Code of Practice for AI model providers focuses on transparency and systemic risk management, seeking feedback to refine compliance guidelines under the AI Act.
ENISA seeks industry feedback on NIS2 cybersecurity guidance to enhance EU digital infrastructure resilience, with comments due by December 9, 2024.
The World Economic Forum and GEP’s guide helps businesses adopt AI responsibly, focusing on transparency, accountability, and ethical principles to drive growth and efficiency.
The European Commission has requested information from YouTube, Snapchat, and TikTok about their recommender systems under the DSA, with responses due by November 15.
The EU is defining “significant” cybersecurity incidents under NIS2, stressing rapid reporting and setting thresholds that balance accurate incident assessment with societal security needs.
The EU Commission’s first Code of Practice plenary for general-purpose AI revealed significant disagreements between GPAI providers and other stakeholders on risk management and transparency requirements.
Enforcement of the EU’s Digital Services Act is intensifying, with ongoing probes into major platforms such as Meta and TikTok, while some member states lag in appointing the required national regulators.