Meta CEO Mark Zuckerberg and Spotify CEO Daniel Ek have criticized European Union (EU) artificial intelligence (AI) regulations for impeding open-source innovation. In a joint statement, the two tech leaders expressed concerns about the EU's regulatory approach and warned that the region might miss out on the benefits of AI.
The two companies have previously aligned on regulatory issues, including support for the EU Digital Markets Act (DMA) and criticism of Apple's rules issued in response to it. Now, however, they are the ones raising concerns about EU regulations.
The CEOs complained about how the EU applies the General Data Protection Regulation (GDPR) to AI model training. Their statement followed regulators recently telling Meta to delay training its open-source AI models on public data from Facebook and Instagram until the applicable rules are settled.
Zuckerberg cautioned that the delay in settling these rules would leave developers without access to European data for training AI models. As a result, the most advanced AI models may not capture the subtleties of European cultural diversity, leaving EU residents with AI that is not tailored to their needs.
He said:
“With more open-source developers than America has, Europe is particularly well placed to make the most of this open-source AI wave. Yet its fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers.”
Meta is a significant player in open-source AI development, and many of its AI technologies, including the Llama large language models (LLMs), are free and open source. Zuckerberg therefore warned that EU organizations, institutions, and developers that rely on these open-source technologies might miss out on the next wave of innovation.
Meanwhile, Spotify also criticized the rules, noting that its early investments in AI made it the streaming leader and a European tech success story. The company believes open-source AI could deliver even greater advantages for its business in the future, which is why it considers a simple regulatory structure essential.
Ek said:
“Europe needs to make it easier to start great companies, and to do a better job of holding on to its talent. Many of its best and brightest minds in AI choose to work outside Europe.”
The companies acknowledged that regulating against known harms is necessary. However, they questioned preemptive regulation of theoretical harms that may never materialize, warning that Europe's risk aversion could cost it the benefits of AI technologies.
Perhaps to prove the point that the regulations disadvantage EU users, Meta has announced that it will not release its upcoming multimodal Llama models in the EU, citing regulatory uncertainty in the region. The multimodal models are open source and can understand images as well as text, and the companies argue that their unavailability will hurt organizations in the EU.
Meta is not the first major tech company to withhold its latest AI developments from the EU. Apple has made a similar decision, holding back Apple Intelligence and other new features in the region over privacy concerns tied to the DMA.
Spotify sees this as the failure of a law originally designed to benefit European businesses. The companies wrote:
“The stark reality is that laws designed to increase European sovereignty and competitiveness are achieving the opposite. This isn’t limited to our industry: many European chief executives, across a range of industries, cite a complex and incoherent regulatory environment as one reason for the continent’s lack of competitiveness.”
Meanwhile, the companies have called on EU regulators to change their approach by adopting “clearer policies and more consistent enforcement.” They want Europe to harmonize its regulations so that innovation on the continent can catch up with America and Asia.