European AI Legislation Faces Criticism Over Security Loopholes
The EU’s AI Act bans certain AI uses to protect citizens, but it faces criticism over exemptions that allow law enforcement and migration authorities to deploy otherwise-prohibited AI systems in cases involving serious crimes.
The EHDS regulation enhances EU citizens’ control over health data, boosts research potential, and ensures interoperability across member states.
The Dutch watchdog urges faster AI standardization under the EU’s AI Act, emphasizing compliance and safety, while national and EU initiatives prepare businesses for upcoming regulations.
Many European start-ups are misled into believing the EU AI Act stifles innovation, but experts argue that by regulating AI systems according to their risk level it fosters both trust and innovation.
The OECD report outlines AI’s potential benefits and risks, urging policymakers to establish effective governance and safety measures.
FPF’s report highlights the need for structured AI impact assessments to manage risks, urging organizations to improve information gathering, education, and risk measurement strategies.
The EU adopts rules for eID Wallets, ensuring secure, interoperable digital identity management across Member States by 2026.
Tech giants and European firms are embracing “sovereign AI,” focusing on local infrastructure and data sovereignty to enhance competitiveness and cultural alignment within EU digital law frameworks.
Norway plans to raise the social media age limit to 15, seeking EU-style solutions to protect minors online.
Experts and organizations oppose the draft UN Cybercrime Convention, warning that its broad scope could undermine EU digital laws and human rights.