EU Parliament Pushes Forward on AI Liability Rules
The European Parliament continues to push for AI liability rules despite the European Commission’s plan to withdraw the directive due to negotiation challenges.
The European Commission has halted the proposed EU AI Liability Directive due to industry pressure and aims to simplify digital regulations, raising concerns about potential compliance challenges.
The AI Action Summit in Paris shifted focus from regulation to innovation, with leaders such as U.S. Vice President JD Vance and French President Emmanuel Macron advocating a lighter regulatory touch to support AI growth, while maintaining commitments to governance and safety.
OpenEuroLLM aims to strengthen EU digital sovereignty by developing open-source AI models, addressing computing challenges, and fostering innovation within European businesses.
The EU Commission’s guidelines on AI system definitions aim to clarify rules under the AI Act, balancing innovation with safety and rights protection.
The EU guidelines outline unacceptable AI practices to ensure compliance with the AI Act, balancing innovation with fundamental rights protection.
The EU’s AI Act bans certain AI uses to protect citizens but faces criticism for exemptions allowing law enforcement and migration authorities to use AI for serious crimes.
As of February 2, 2025, the first wave of AI Act requirements came into force, introducing critical obligations for companies operating within the EU.
The EU’s AI Act faces scrutiny over the lack of guidance on prohibited systems, with concerns about enforcement and carve-outs mounting as the February compliance deadline arrives.
OpenAI’s Media Manager, a promised tool for creators to control whether their work is included in AI training data, remains undeveloped and faces skepticism over its effectiveness in addressing IP concerns.