EU assesses how to regulate ChatGPT
The European Commission is expected to take until mid-2026 to decide how the Digital Services Act (DSA) applies to ChatGPT, even though the service has surpassed 120 million monthly users for its search functionality. The DSA’s framework, drafted before generative AI’s mainstream adoption, does not neatly capture chatbot services. Regulators must therefore determine whether ChatGPT should be designated as a very large online platform (VLOP) or a very large online search engine (VLOSE), and whether the designation should cover only the search feature or the broader chatbot service. This decision will shape the scope of risk assessments, mitigation duties, and transparency obligations, with fines of up to 6 percent of global turnover for non-compliance.
ChatGPT is already subject to the EU AI Act, which since August has required risk assessment and mitigation, with penalties of up to €15 million for breaches. A DSA designation would add a separate systemic risk regime—covering areas such as civic integrity, elections, public health, and fundamental rights—alongside obligations related to algorithmic systems and recommender design. A narrow designation limited to search could lighten the burden, for example by sparing the service from a notice-and-action mechanism, while a broader designation could entail comprehensive scrutiny of the underlying large language model. OpenAI may challenge any designation, potentially extending timelines.
The interaction between the AI Act and the DSA poses coordination challenges. While the AI Act categorizes AI risks (unacceptable, high, limited, minimal), the DSA requires mitigation of systemic risks at the platform level. In some integrated cases—such as Google’s AI features within its designated VLOPs and VLOSEs—filing DSA assessments can imply AI Act compliance for those features, but this assumption is less clear for vertically integrated providers like OpenAI. Some domains (e.g., disinformation, deepfakes) will primarily fall under the DSA, while others (e.g., AI in hiring) remain under the AI Act, leaving potential gaps.
Regulators must determine the breadth of obligations and reporting expected from OpenAI, including over the design and operation of relevant algorithmic systems. The Commission’s choice between designating ChatGPT’s search component alone or the entire service will set the compliance perimeter. With user numbers far above the DSA’s 45 million threshold, and overlapping duties under the AI Act and DSA, OpenAI faces a complex compliance landscape. Safe harbor principles may limit liability for third-party content, but they do not displace the DSA’s due diligence regime.