Meta has refused to sign the European Union’s code of practice for its AI Act, just weeks before the bloc’s rules for general-purpose AI models take effect. Meta’s global affairs chief Joel Kaplan criticized the EU’s approach, calling it an “overreach” that threatens innovation.
The EU’s voluntary code of practice, published earlier this month, is designed to help companies align with the AI Act: signatories commit to documenting their models, not training on pirated content, and honoring content owners’ requests to opt their works out of training data. Meta, however, argued that the code introduces legal uncertainty and imposes obligations that go beyond the AI Act itself.
Kaplan warned that Europe’s rules would “throttle the development and deployment of frontier AI models in Europe and stunt European businesses.” The AI Act, a risk-based framework, already bans unacceptable AI uses like social scoring and mandates strict rules for high-risk applications such as biometrics and education.
Other tech companies, including Google, Microsoft, and Mistral AI, have also lobbied for delays, but the European Commission has held firm, publishing new compliance guidelines on Friday.
Providers of general-purpose AI models deemed to carry systemic risk, a category that includes OpenAI, Meta, and Anthropic, must fully comply by August 2, 2027. The EU maintains that these rules are essential to ensuring ethical and safe AI deployment.