AI Act Still Lacks Guidance on Banned AI Systems
As the 2 February deadline approaches, concern is mounting over the European Commission's failure to publish guidance on the AI systems banned under the AI Act. Civil society groups warn that without interpretive guidelines, it is unclear how the prohibitions should be applied in practice. Although companies have until mid-next year to comply with the Act's broader provisions, the ban on specific AI systems, including facial recognition, takes effect earlier. The Commission's AI Office promised guidelines by early 2025, but the documents remain unpublished, leaving advocacy groups uneasy.
The AI Act prohibits AI systems deemed to pose unacceptable risks to society, but allows exceptions where a public interest, such as law enforcement, is judged to prevail. Critics such as Caterina Rodelli of Access Now argue that these exceptions hollow out the prohibitions, potentially permitting unreliable systems such as predictive policing and profiling in migration contexts. EDRi's Ella Jakubowska similarly warns that such loopholes could be exploited by companies and governments to continue harmful AI practices.
The AI Act's extraterritorial scope means non-EU companies can also face penalties of up to 7% of global annual turnover for non-compliance. While most provisions apply next year, member states must designate national regulators by August. Some countries have begun preparations, assigning oversight to existing data protection or telecoms bodies, but in several member states it remains unclear which authorities will be responsible for enforcing the rules.
The debate over facial recognition systems was one of the most contentious during the AI Act's negotiation, with repeated calls for an outright ban. The resulting patchwork of regulatory oversight raises questions about how effectively the Act will be enforced. Jakubowska points to the fragmented landscape of market surveillance authorities and the uncertainty surrounding notified bodies, both of which complicate uniform application of the AI Act across the EU.