Meta refuses EU AI Code of Practice citing legal concerns
Meta has declined to sign the European Union’s voluntary Code of Practice for general-purpose AI models, just weeks before enforcement of the EU’s AI Act begins. The Code, published by the European Commission, is a framework intended to help providers of AI models align with the legislation’s requirements. It asks signatories to maintain up-to-date documentation on their models, to avoid training on pirated content, and to honor content owners’ requests to opt out of having their works used in training data.
Joel Kaplan, Meta’s chief global affairs officer, stated that the Code introduces legal ambiguities for developers and imposes obligations that exceed the AI Act’s intended scope. He criticized the EU’s approach as regulatory overreach, warning that it could hinder the development and deployment of advanced AI models in Europe and limit opportunities for European businesses to innovate on these platforms.
The AI Act itself is a risk-based regulation. It outright bans uses deemed to pose “unacceptable risk,” such as cognitive behavioral manipulation and social scoring, and imposes strict requirements on “high-risk” applications, including biometric identification and AI tools used in education and employment. Developers of high-risk systems must register them and meet risk- and quality-management obligations.
Despite pushback from major technology companies, including Alphabet, Microsoft, and Mistral AI, the European Commission is holding to its timeline. It recently released guidelines for providers of general-purpose AI models ahead of the rules taking effect on August 2, 2025. Providers whose models are already on the market have until August 2, 2027, to come into compliance.