Press Release

Major Researchers, Think Tanks and CSOs Support the Tiered Approach in the EU AI Act for Managing Foundation Model Risks

Brussels, Belgium – Today, a number of major organizations, including the Center for Democracy & Technology, the Avaaz Foundation, and The Future Society, along with prominent figures such as former MEP Marietje Schaake, world-class AI scientist Yoshua Bengio, and emeritus professor Gary Marcus, announced in an open letter their support for the tiered approach, backed by the European Parliament and defended by the Spanish presidency, to managing the risks associated with foundation models in the European Union's Artificial Intelligence Act.

Addressed to European legislators and published by Euractiv, the open letter emphasizes the necessity of a tiered approach to managing risks from AI foundation models. It first argues this on economic grounds. As Sorbonne University emeritus professor Raja Chatila emphasizes, "If Foundation Models are not regulated, industries using them to develop and provide their specific applications will be the only ones responsible for any model unreliability. This situation is unfair. It makes these industries vulnerable and dependent on Foundation Models suppliers."

The letter then discusses why the tiered approach is well suited to mitigating the risks of the most powerful systems. Signatory Yoshua Bengio, the second most cited AI scientist and recently appointed to lead an international scientific report on AI often likened to an "IPCC for AI", explains that "next generations of frontier AI models will bring their own risks even before they are put in the hands of application-specific deployers. These harms thus cannot be mitigated by regulation focused only on the deployment use-cases."

The support for the Spanish approach comes at a critical juncture in the EU AI Act trilogue negotiations, with several member states, namely France, Germany, and Italy, advocating instead that companies self-regulate their development of these powerful models.

In the aftermath of the dramatic events affecting OpenAI, the developer of ChatGPT, emeritus professor and signatory Gary Marcus emphasizes that “the chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”

Key Highlights:

Unique Risk Profile: Foundation models present a unique risk profile due to their generality, development cost, and potential as a single point of failure for numerous applications. These risks, not yet fully understood and potentially affecting millions of European citizens, necessitate a comprehensive risk management strategy.

Developer Responsibility: The proposed regulation places an appropriate degree of responsibility on the handful of developers training advanced foundation models, as they are uniquely capable of addressing the systemic risks inherent in them. This approach is crucial for safeguarding EU citizens and industries.

Balanced Regulation: The Spanish proposal suggests a balanced regulatory framework that acknowledges the global footprint of foundation models. This aligns with Europe's regulatory leadership in AI, setting a precedent for managing these technologies.

Protection for EU Industry: Far from being a burden, this regulation provides essential protection for EU industry, shielding it from liabilities and risks stemming from advanced foundation models that smaller companies do not have the resources to manage.

Innovation and Risk Management: The proposed compute thresholds offer a practical and measurable basis for regulation, striking a balance between innovation and effective risk management.



Some signatories (pictured): Gary Marcus.