Comparing the EU AI Act and Council of Europe AI Convention
The “Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law”, adopted in May 2024 and opened for signature on 5 September 2024, establishes a legal framework aimed at safeguarding human rights, democracy, and the rule of law in the development and use of AI. The convention emphasizes principles such as transparency, accountability, risk management, and special protection for vulnerable groups. It aligns with the European Union AI Act, which categorizes AI systems by risk level and introduces specific regulatory mechanisms for high-risk AI applications.
The EU AI Act stands out for its market-centric approach, offering clear regulatory guidelines that aim to create a safe, innovation-friendly environment for businesses while protecting consumer rights. Its risk-based framework allows oversight to be calibrated to the risk posed by an AI application, with the strictest requirements reserved for high-risk sectors such as healthcare and transportation. This precision fosters compliance and encourages AI development within clear ethical boundaries. The Council of Europe Framework Convention, by contrast, has a broader scope: it centers on human rights, democracy, and the rule of law, emphasizing transparency, accountability, and inclusivity across all sectors.
While both the EU AI Act and the Council of Europe Framework Convention emphasize human rights, transparency, and accountability, they differ in several key areas. The EU AI Act is a regulation that establishes binding legal obligations for entities operating within the EU. The Council of Europe Convention, as a treaty open to a broader range of countries, provides a more general framework and allows its parties flexibility in implementation. The Convention also places a stronger emphasis on protecting democratic processes and the rule of law, and it explicitly requires public consultation and multistakeholder involvement in AI governance discussions.
Both frameworks provide strong foundations for AI regulation but leave notable gaps, particularly in adapting to rapid technological change and in addressing the ethical use of AI in military and national security contexts. And although both emphasize international cooperation, neither offers a clear path for integrating its approach into a broader global AI governance system. Addressing these issues would make both frameworks more comprehensive and more effective in governing AI responsibly.