It’s an old joke: ‘You innovate in the United States, replicate in China, and regulate in Europe.’ But the joke contains an essential grain of truth, as shown by a very concerned open letter sent today to European policymakers, including the Dutch Parliament, and published in newspapers such as the Financial Times and Le Monde. ‘Compared to other regions, Europe has become less competitive and less innovative and now risks falling further behind in the A.I. era due to inconsistent decision-making in regulation,’ the authors write.
‘Europe risks falling further behind in the A.I. era due to inconsistent decision-making.’
The open letter was signed by Mark Zuckerberg (Meta), Spotify founder Daniel Ek, Ericsson CEO Börje Ekholm, SAP CEO Christian Klein, and many researchers and institutions in the field. In the Netherlands, 8vance CEO Han Stoffels is probably the most notable signatory. His company, which develops A.I. to match job seekers with vacancies based on skills, is deeply concerned about current developments.
Can we no longer train and fine-tune?
‘European privacy regulators are currently blocking the training of A.I. models on European personal data,’ explains Laurens Waling, evangelist at 8vance. ‘This affects Meta, X, and us as well. Overly strict privacy interpretations by the Dutch Data Protection Authority (AP), for example, make it difficult to develop technology that helps job seekers find new jobs. The AP requires consent, but in practice that is not feasible: consent can always be withdrawn, yet once an A.I. model has been trained on data, that data cannot be removed. The European Commission has previously reprimanded the AP for this, but to no avail.’
‘Privacy regulators are currently blocking the training of A.I. models with European personal data.’
By the end of the year, the European Data Protection Board (EDPB) must adopt a common position that will set the framework for using personal data to train and fine-tune A.I. models in the coming years, he continues. ‘The problem is that this position is being drawn up without consultation or input from third parties. If the EDPB decides that “consent” is required in all cases, no one in the EU will be able to train proper A.I. models, meet the high standards of the AI Act, or adequately localize models (such as speaking Dutch or Frisian, or matching people to jobs, as at 8vance).’
Brake on open A.I. models
According to the letter’s authors, Europe is especially at risk of missing out on two “pillars of A.I. innovation.” ‘The first is the development of free, open models that are available for public use, modification, and expansion, delivering measurable socio-economic benefits. Open A.I. models enhance sovereignty and control, allowing organizations to download and adjust models without having to transfer their data to third parties.’
‘The difference between purely textual and multimodal models is like the difference between having just one sense versus all five.’
The second pillar the letter addresses is the latest multimodal models, which process text, images, and speech. ‘It is the next step in the development of artificial intelligence, boosting the economy’s competitiveness, improving the efficiency of public services, and supporting technologies for people with disabilities. The difference between purely textual and multimodal models is like having just one sense versus all five.’
‘Saving hundreds of billions’
The European economy could significantly increase productivity and support scientific research if it were more open to modern textual and multimodal models. According to the letter’s authors, this could yield ‘hundreds of billions of euros.’ ‘Public institutions and research centres use such models to accelerate medical research or contribute to the preservation of languages.’
‘Generative A.I. could increase global GDP by 10% over the next 10 years.’
On the other hand, such open and multimodal models can offer established companies and young start-ups access to tools they could never acquire or build on their own. Without them, A.I. development would take place elsewhere, and Europeans would be cut off from technological advances made in the U.S., China, and India instead. According to the letter, generative A.I. could increase global GDP by an estimated 10% over the next ten years, and EU citizens must not be denied these benefits.
‘Regulation is too unpredictable’
Developing generative A.I. requires many billions of euros. The companies and institutions that signed the letter are willing to invest, but find the current regulation too ‘fragmented and unpredictable.’ Recent interventions by European data protection authorities (such as the Dutch AP) have further increased uncertainty about which data may be used to train A.I. models. ‘This means that new open-source A.I. models, like all the products and services built on them, will not properly understand or reflect European culture or languages.’
‘Europe faces a choice whose effects will be felt for decades.’
And that could hurt a lot, the authors say. ‘Europe faces a choice whose effects will be felt for decades. We can choose harmonization in a consistent regulatory framework, like the GDPR, and propose a new version of these rules that respects the underlying values. Then we can continue A.I. innovation on the same scale and pace as elsewhere. Or we can reject progress, deny the idea of the internal market, and passively watch as the rest of the world grows thanks to technologies to which Europeans no longer have access.’
‘Urgent decisions needed’
Europe cannot afford this, they state. ‘We urgently need harmonized, consistent, clear, and quick decisions within the framework of EU data regulations, so that European data can be used to train A.I. models for the benefit of all Europeans. Strong action is needed to unleash creativity, ingenuity, and entrepreneurship. These bring prosperity, development, and a place at the forefront of modern technologies.’
‘Strong action is needed to unleash creativity, ingenuity, and entrepreneurship.’
According to Waling, the future of a well-functioning labour market also depends on it: responsible matching will become increasingly difficult if A.I. models can no longer be trained on European data. ‘It seems crucial to me that not only technical know-how but also social and economic considerations play a role in the upcoming decision of the European Data Protection Board, and that politicians therefore send a clear signal to the regulators.’