Guide to Fundamental Rights Impact Assessments Under the EU AI Act
The FRIA guide explains how to assess and manage fundamental rights risks of high-risk AI systems under the EU AI Act.
FRA warns that AI Act implementation is undermined by weak rights impact tools, unclear high-risk scope and ineffective oversight, urging broader safeguards for fundamental rights.
FRA urges EU justice systems to embed robust rights safeguards, inclusive access, and non-digital alternatives in the rollout of digital and AI tools.
The draft UN Cybercrime Convention is opposed by experts and organizations for its broad scope and potential to undermine EU digital laws and human rights.
The Dutch government abstains from supporting the current EU Regulation on combating online child sexual abuse material due to concerns over privacy and digital security.
The Council of Europe has launched the first legally binding AI treaty to ensure compliance with human rights, democracy, and the rule of law.
A UN committee has approved the first global cybercrime treaty, aimed at fostering international cooperation and criminalizing a range of cyber offenses, despite strong opposition from human rights groups and tech companies concerned about potential rights infringements.
The proposed UN Cybercrime Convention risks expanding surveillance powers without robust privacy safeguards, threatening global human rights and privacy protections.
The European Parliament is forming a monitoring group to oversee the AI Act’s implementation, emphasizing transparency and civil society involvement.
The AI Act’s ban on predictive policing faces challenges from potential loopholes and national security exemptions, which risk undermining its effectiveness in safeguarding fundamental rights.