EU Impact Assessment Suggests Mixed Liability for AI Systems
The complementary impact assessment calls for a broader and more balanced regulatory framework for AI and software liability in the EU.
The European Commission study highlights AI’s potential to improve EU public services, identifies adoption challenges, and offers policy recommendations for accelerating AI integration.
The EU’s voluntary AI pact aims for trustworthy AI, with over 100 signatories, but lacks support from major tech firms like Meta and Apple, raising concerns about its overall impact.
The Council of Europe Framework Convention and the EU AI Act both emphasize transparency and human rights in AI, but differ in scope, with the former being broader and the latter offering detailed, market-centric regulations.
Professor Sandra Wachter critiques the EU's Artificial Intelligence Act and related directives for significant regulatory gaps, arguing that lobbying and political pressures have produced broad exemptions and weak enforcement that could undermine AI governance and risk management globally.
The ECNL report on the AI Act emphasizes the need for a unified, human-centric AI regulatory framework in the EU to protect digital rights and promote ethical AI practices.
Hogan Lovells offers AI Act compliance services to help organizations evaluate the Act’s applicability and ensure HR systems meet new EU regulations.
The Council of Europe has launched the first legally binding AI treaty to ensure compliance with human rights, democracy, and the rule of law.
The Dutch DPA’s AI & Algorithmic Risks Report emphasizes the need for vigilant AI risk management in the Netherlands, highlighting issues in trust, information provision, and democratic control.
A new study by Padova and Thess outlines recommendations for the EU's GPAI code of practice, aiming to support innovation while ensuring alignment with international standards.