Meta’s AI EU Launch Delayed by DSA Review
The European Commission is awaiting a risk assessment from Meta to determine whether the company’s new AI chat feature, MetaAI, complies with the Digital Services Act (DSA). The DSA, which governs safety and transparency obligations for online platforms, requires companies to submit annual risk assessments and to evaluate new features before deploying them. The Commission has said it will carefully review Meta’s submissions to ensure the AI tool meets EU standards and does not pose undue risks to users.
MetaAI, already available in the US, India, and the UK, has faced delays in its European rollout due to regulatory scrutiny. Last summer, the Irish Data Protection Commission asked Meta to postpone the launch, citing concerns over the use of Facebook and Instagram user data to train the company’s large language models. These episodes illustrate the complexity of navigating Europe’s stringent digital regulatory framework.
Meta has expressed confidence in its compliance with the DSA and emphasized its transparency with the European Commission throughout the process. The company says it has held ongoing discussions with EU regulators and remains committed to meeting all legal obligations. In a recent blog post, Meta acknowledged the delays caused by Europe’s regulatory environment but expressed enthusiasm for finally launching MetaAI in the region.
Despite these assurances, Meta’s leadership, including CEO Mark Zuckerberg, has been critical of Europe’s approach to regulating US tech firms. This criticism has intensified in light of recent geopolitical shifts, including the change in US administration. The situation underscores the growing tension between global tech companies and the EU’s robust regulatory framework for digital services.