FPF publishes report on assessing AI risks
As artificial intelligence (AI) systems become increasingly prevalent, organizations are adopting structured approaches to managing potential risks. The Future of Privacy Forum (FPF) has released a report titled “AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments,” highlighting the growing use of AI impact assessments to evaluate and mitigate risks. The report emphasizes the need for companies to operationalize these assessments, identify risks, and implement comprehensive risk management strategies.
Despite the rise in AI governance laws and resources, many organizations remain uncertain about how to conduct AI impact assessments. FPF surveyed more than 60 private sector stakeholders to understand common practices and challenges in these assessments. The findings show that organizations are considering both intended and unintended uses of AI models, but face difficulties obtaining complete information from developers and measuring the effectiveness of risk management strategies.
FPF’s report identifies several insights, including the integration of AI impact assessments into existing enterprise risk management processes and the use of both qualitative and quantitative methods to identify AI-related risks. Organizations are advised to enhance their processes for gathering information from third-party developers, improve internal education on AI risks, and develop better techniques for measuring risk management effectiveness.
The FPF Center for Artificial Intelligence aims to promote collaboration and shared knowledge among stakeholders in support of the responsible development of AI. The report, produced with input from numerous experts, addresses key knowledge gaps and provides a comprehensive overview of current practices and challenges in AI impact assessments.