European Start-ups Misled on EU AI Act’s Impact
European start-ups are misled into thinking the EU AI Act stifles innovation, but experts argue it fosters trust and innovation by regulating AI systems based on risk levels.
The EU is consulting stakeholders to refine AI Act guidelines, focusing on defining AI systems and banned uses, with guidance expected in early 2025.
The EU’s digital rulebook aims to streamline governance, but its complexity makes compliance difficult, so organizations need effective digital governance frameworks to meet its objectives.
Norway plans to raise the social media age limit to 15, seeking EU-style solutions to protect minors online.
Only Belgium and Croatia have notified the EU of their NIS2 implementation, with a deadline approaching and potential fines for non-compliance.
The Dutch government abstains from supporting the current EU Regulation on combating online child sexual abuse material due to concerns over privacy and digital security.
The Council of Europe Framework Convention and the EU AI Act both emphasize transparency and human rights in AI, but differ in scope, with the former being broader and the latter offering detailed, market-centric regulations.
Professor Sandra Wachter argues that lobbying and political pressure have left the EU’s Artificial Intelligence Act and related directives with significant regulatory gaps, including broad exemptions and weak enforcement, which could undermine AI governance and risk management globally.
Hogan Lovells offers AI Act compliance services to help organizations evaluate the Act’s applicability and ensure HR systems meet new EU regulations.
The Council of Europe has launched the first legally binding AI treaty to ensure compliance with human rights, democracy, and the rule of law.