Expert Analysis of the EU AI Act and AI Liability Directives’ Limitations
Professor Sandra Wachter of the Oxford Internet Institute has highlighted significant regulatory gaps in the European Union’s Artificial Intelligence Act (AIA) and related directives. In her essay ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States and Beyond’, published in the Yale Journal of Law & Technology, Wachter argues that lobbying by tech companies and political pressure have diluted the AIA, resulting in broad exemptions and a heavy reliance on self-assessment and co-regulation. These loopholes, she asserts, could have far-reaching implications for AI governance and risk management in the EU, the United States, and beyond.
Despite commendable efforts by European lawmakers, Wachter points out that the AIA, together with the Product Liability Directive (PLD) and the Artificial Intelligence Liability Directive (AILD), fails to adequately address critical issues such as discrimination, bias, explainability, misinformation, copyright, data protection, and environmental impact. The current framework lacks clear, practical requirements for AI developers and providers and has weak enforcement mechanisms. Recourse mechanisms focus primarily on material harms and financial damages, neglecting immaterial and societal harms such as discrimination and privacy infringement.
Wachter emphasizes the need for a broader understanding of harm, noting that intangible and invisible harms, such as those caused by biased or privacy-invasive AI, can be just as damaging as physical harm. She also criticizes the AIA’s exemptions, which could allow companies to evade accountability for AI-related harms. Remote biometric identification and predictive policing systems, known for their poor accuracy and potential to generate biased results, are among the technologies that could exploit these loopholes.
To address these issues, Wachter proposes several regulatory mechanisms and policy actions aimed at closing the identified loopholes, preventing harmful technologies, and fostering ethically and societally beneficial innovation.