FRA report exposes AI Act implementation gaps for fundamental rights in the EU
The EU Agency for Fundamental Rights (FRA) has published a new report highlighting that providers and deployers of high-risk artificial intelligence systems often lack the tools and expertise to assess and mitigate their impact on fundamental rights. This gap persists despite the growing deployment of AI in highly sensitive contexts such as asylum and migration procedures, education, employment, law enforcement and social benefits administration. The findings underscore that practical readiness to implement the AI Act in a rights-compliant way lags behind the regulatory framework itself.
According to the study, which draws on interviews with AI providers, deployers, experts and rights holders in Germany, Ireland, the Netherlands, Spain and Sweden, awareness is comparatively high when it comes to data protection, privacy and non-discrimination. Broader fundamental rights impacts, however, receive much less attention. For example, AI tools used to assess children’s reading abilities raise questions about the right to education, yet such implications are rarely evaluated systematically.
The FRA also identifies fragmented and largely untested mitigation practices. Human oversight, often framed as a key safeguard in the AI Act, is not always effective in practice when users over-rely on system outputs or lack the skills to detect errors and biases. In parallel, there is uncertainty about what qualifies as an AI system and when a system falls into the high-risk category under the AI Act. The report warns that a broad application of the exemption “filter” for simple or preparatory tools, which removes them from the high-risk category, may create loopholes when such tools nevertheless influence decisions affecting fundamental rights.
To address these shortcomings, the FRA recommends a broad interpretation of the definition of an AI system to ensure comprehensive rights protection, coupled with a narrow and closely monitored reading of the high-risk exemption “filter.” It calls for detailed guidance on fundamental rights impact assessments that go beyond privacy and non-discrimination, as well as sustained investment in research, testing and empirical studies, particularly in high-risk sectors. The report also stresses the need for independent, well-resourced oversight bodies with strong fundamental rights expertise. While the findings do not directly address the Digital Omnibus proposal, they build on the FRA’s prior work on AI, fundamental rights and algorithmic bias, and they are essential reading for practitioners preparing for AI Act implementation.