Commission launches new DSA investigation into X and Grok
The European Commission has opened a new formal investigation against X under the Digital Services Act (DSA), while also extending its ongoing proceedings launched in December 2023. The new investigation focuses on whether X properly assessed and mitigated systemic risks linked to the deployment of Grok’s functionalities in the EU, particularly risks related to the dissemination of illegal content, including manipulated sexually explicit material and content that may amount to child sexual abuse material.
The Commission will examine whether X complied with its obligations under Articles 34 and 35 of the DSA to diligently identify, assess, and mitigate systemic risks arising from Grok’s integration into the platform. This includes risks related to gender‑based violence and serious harm to users’ physical and mental well‑being. A key element of the investigation is whether X conducted and submitted an ad hoc risk assessment to the Commission before deploying Grok functionalities that significantly altered the platform’s risk profile.
In parallel, the Commission has expanded its December 2023 proceedings to assess X’s compliance with DSA requirements concerning recommender systems, including the platform’s recent shift to a Grok‑based recommender system. If such failures are confirmed, they could constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) DSA. During the investigation, the Commission may adopt interim measures, request further information, conduct inspections, or accept commitments offered by X.
The Commission is closely cooperating with Coimisiún na Meán, the Irish Digital Services Coordinator, which will be associated with the proceedings under Article 66(3) DSA. These proceedings build on earlier enforcement action, including a €120 million fine imposed in December 2025 for breaches related to deceptive design, advertising transparency, and data access for researchers. As a designated very large online platform, X remains subject to heightened scrutiny regarding its risk management obligations, particularly where AI‑driven features may affect fundamental rights and the protection of minors.