The Commission publishes guidelines on AI system definition
The European Commission has issued guidelines on the definition of artificial intelligence (AI) systems under the AI Act. The guidelines are intended to help providers and other stakeholders determine whether a given software system qualifies as an AI system, thereby facilitating the effective application of the AI Act's rules. Although not legally binding, the guidelines are expected to evolve over time in light of practical experience and emerging questions.
The guidelines complement the AI Act's existing framework, which categorizes AI systems by risk level, including prohibited and high-risk categories as well as systems subject to transparency obligations. The AI Act is designed to balance innovation with the protection of health, safety, and fundamental rights, ensuring that AI systems are used responsibly within the EU. As of February 2, 2025, certain provisions of the AI Act have started to apply, including the AI system definition, AI literacy obligations, and a small number of prohibitions on AI use cases that pose unacceptable risks.
The Commission has also released guidelines on prohibited AI practices, reinforcing the AI Act's commitment to safeguarding EU citizens. Although still in draft form, these guidelines provide a framework for understanding which AI applications are considered unacceptable. This proactive approach underscores the Commission's dedication to maintaining high standards for AI deployment across the EU while fostering an environment conducive to technological advancement.
It is important to note that while the Commission has approved the draft guidelines on the AI system definition, they have not yet been formally adopted. This ongoing development signals the Commission's readiness to adapt the guidelines as needed, keeping them relevant and effective in addressing the fast-moving nature of AI technologies.