
Open Letter to Support Regulating Foundation Models with a Tiered Approach in the EU AI Act 

To European legislators,

As European stakeholders, including researchers, SMEs, workers, consumers, citizens, think tanks, and organisations working on AI ethics, digital rights, risk and privacy, we write this letter to support the efforts of the Spanish Presidency and the European Parliament in addressing the unique challenges posed by foundation models through a tiered approach within the AI Act.

Europe’s regulatory leadership is an asset that should be valued: Amid a growing number of non-regulatory global AI governance efforts to manage foundation model risks, such as the G7 Code of Conduct, the UK and US AI Safety Institutes, and the White House Executive Order on AI, Europe stands in a unique position to enforce the first horizontal regulation on AI, including foundation models with a global footprint. The Spanish Presidency’s proposal therefore offers a balanced approach to regulating foundation models, refining the European Parliament’s cross-party position to set obligations for such models in the EU AI Act.

Foundation models as a technology present a unique risk profile that must be managed: Foundation models differ significantly from traditional AI. Their generality, cost of development, and ability to act as a single point of failure for thousands of downstream applications mean they carry a distinct risk profile – one that is systemic, not yet fully understood, and affecting virtually all sectors of society (and hundreds of millions of European citizens). It is essential that we assess and manage these risks comprehensively along the value chain, with responsibility lying in the hands of those best placed to address them. Since downstream actors lack both the technical means and the access needed to fix underlying flaws in a foundation model once it is deployed and adapted to an application, there is no reasonable approach to risk management other than placing some responsibility on the upstream developers who provide the technology.

Far from being a burden for European industry, regulation applied to the technology of foundation models offers essential protection that will benefit the EU industry and emerging AI ecosystem. The very large resources needed to train high-impact models limit the number of developers, so the scope of such regulation would be narrow: fewer than 20 regulated entities in the world, all capitalised at more than 100 million dollars, compared to the thousands of potential EU deployers. These large developers can and should bear the responsibility of risk management on current powerful models if the Act aims to minimise burdens across the wider EU ecosystem. Requirements for large upstream developers provide transparency and trust to numerous smaller downstream actors. Otherwise, European citizens are exposed to risks that downstream deployers, and SMEs in particular, simply can’t manage technically: lack of robustness, explainability and trustworthiness. Model cards and voluntary, and therefore unenforceable, codes of conduct won’t suffice. EU companies deploying these models would become liability magnets. Regulation of foundation models is an important safety shield for EU industry and for citizens.

The Spanish Presidency’s approach balances risk management and innovation: We support the proposal as a suitable compromise between the European Parliament and the Member States through a tiered approach. The proposed use of compute thresholds, an easily measurable criterion that correlates well with risk, offers a practical basis for regulation that makes sense from a risk management perspective while preserving SMEs’ AI development efforts. For future-proofing, the thresholds will have to be adjusted and the criteria complemented as technology evolves and as the science of measuring risks improves, but they provide a good starting baseline*. We believe that this will crucially allow the EU AI Act to manage the risks that European citizens are and will be exposed to.

Resisting narrow lobbying interests to protect the democratic process: Over more than two years, the EU AI Act process has consulted a broad range of representative stakeholders: developers, European industry, SMEs, civil society, think tanks and more. On that basis, it is crucial to prevent the vested lobbying efforts of Big Tech and a few large AI companies from circumventing this democratic process. The ability of European regulators to protect society and support SMEs must not be compromised by the interests of a select few.

The integration of foundation models into the AI Act is not just a regulatory necessity but a democratic imperative and a necessary step towards responsible AI development and deployment. We urge all involved parties to work constructively, building on the Spanish proposal and the consensus reached at the trilogue of 24 October, to find a suitable regulatory solution for the benefit of a safer, more trustworthy and sovereign AI landscape in Europe.


* The rate of algorithmic improvement, the main driver of advances that compute thresholds would have to adjust to, is roughly 2.5x per year (Erdil et al., 2023). At that pace, effective compute grows by about one order of magnitude every 2 to 3 years, so the threshold would only need to be lowered on that timescale, which seems very manageable.
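As a quick check on that arithmetic, here is a minimal sketch in Python; the 2.5x-per-year rate is the only figure taken from the letter, and the script is purely illustrative.

    # Sketch of the footnote's arithmetic. The 2.5x/year algorithmic-improvement
    # rate is the letter's figure (Erdil et al., 2023); nothing else is assumed.
    import math

    rate_per_year = 2.5  # yearly gain in effective compute from algorithmic progress

    # Years until effective compute grows tenfold, i.e. how often a compute
    # threshold would need lowering by one order of magnitude to keep tracking
    # the same capability level:
    years_per_order_of_magnitude = math.log(10) / math.log(rate_per_year)
    print(f"{years_per_order_of_magnitude:.1f} years per 10x adjustment")  # ~2.5 years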

Add your signature by filling in this form (for organisations and individuals).


Signatories

Organisations

European Digital SME Alliance, EU

Avaaz Foundation, Global

reciTAL (AI Startup), France

Privacy Network, Italy

Open Markets Institute, Europe, Global

The Future Society, Global

Future of Life Institute, Global

SaferAI, Paris

AI Ethics Alliance, Brussels

Pax Machina AI, Italy

International Center for Future Generations, Brussels

Existential Risk Observatory, Netherlands

Centre for Democracy & Technology, Europe, Brussels

Defend Democracy, Global

CyberEthics Lab, Italy

Notable Figures

Marietje Schaake, former MEP, International Policy Director at Stanford University Cyber Policy Center

Huberta von Voss, Executive Director, ISD Germany

Yoshua Bengio, 2nd Most Cited AI Scientist, Chair of State of Science Report (an “IPCC for AI”), Professor - University of Montreal, Scientific Director - Mila Quebec AI Institute

Wolfgang M. Schröder, Prominent Professor of Philosophy, University of Wuerzburg, German expert in CEN-CENELEC and ISO/IEC through the DIN AI Standardization Committee.

Emilia Tantar, Chief Data and AI Officer, Black Swan LUX, convenor CEN and CENELEC JTC21 WG2 "Operational aspects" 

Fosca Giannotti, Professor of Computer Science, Scuola Normale Superiore - Pisa - Italy, ERC P.I., Coordinator of the Center for Big Data and Artificial Intelligence of Tuscany, Member of the Scientific Board of the Italian National Project on AI, Next Generation EU, FAIR - Future AI research

Marc Rotenberg, President & Founder, Center for AI and Digital Policy

Dino Pedreschi, Professor, University of Pisa, Member of GPAI (Global Partnership on AI), Director of "Social AI" research line of Humane-AI-Net, Director of "Human-centered AI" spoke of Next Generation EU partnership "FAIR - Future AI Research"

Thomas Metzinger, German philosopher, Professor at Johannes Gutenberg University of Mainz, former Member of the European Commission’s High-Level Expert Group on Artificial Intelligence

Giulio Rossetti, Senior Researcher, CNR-ISTI, Italy

Julia Reinhardt, Fellow, European New School of Digital Studies, Europa-Universität Viadrina & AI Governance, Advisory Board Member Cambrian Futures

Raja Chatila, AI Professor Emeritus, Sorbonne University, Paris. Former member of the EU High-Level Expert Group on AI

Ciro Cattuto, Scientific Director, ISI Foundation, Italy

Francesca Pratesi, Researcher, National Research Council of Italy

Philip Brey, Professor of Philosophy and Ethics of Technology, Winner of 2022 Weizenbaum Award, University of Twente, The Netherlands

Nicolas Miailhe, Founder & President, The Future Society (TFS), Member of the Global Partnership on AI (Responsible AI Working Group), Member of the UNESCO High Level Expert Group on AI Ethics implementation, Member of the OECD Network of Expert on AI Governance

Max Tegmark, MIT Professor, Center for Brains, Minds & Machines, Swedish Citizen

Francesca Bria, Executive Board Member Italian public media company, Innovation Economist, UCL 

Alessandro Lenci, Full professor in computational linguistics, University of Pisa, Director of the Computational Linguistics Lab, Member of the PNRR FAIR project 

Dirk Helbing, Professor of Computational Social Science, ETH Zurich, Switzerland, and Elected Member of the German Academy of Sciences "Leopoldina"

Simon Friederich, Associate Professor of Philosophy of Science, University of Groningen

Salvatore Ruggieri, Professor of Computer Science, University of Pisa, Italy

Gary Marcus, Founder and CEO, Geometric Intelligence, Professor Emeritus, NYU 

Luca Pappalardo, Senior researcher at the Institute of Information Science and Technologies (ISTI) at the National Research Council of Italy (CNR)

Alistair Knott, Professor of Artificial Intelligence; Co-Lead, Global Partnership on AI's Social Media Governance project; Member of AI working groups for the Global Internet Forum to Counter Terrorism, the Christchurch Call to eliminate Terrorist and Extremist Content Online, the Forum for Information and Democracy 

Ramon Lopez de Mantaras, Founder and former Director of the Artificial Intelligence Research Institute of the Spanish National Research Council, pioneer of AI in Spain and Europe, active in AI since 1975. EurAI Fellow, recipient of the AAAI Robert S. Engelmore award and the 2018 Spanish National Scientific Research Prize in Mathematics and ICT

Anka Reuel, Computer Science PhD Researcher & KIRA Founding Member, Stanford University

Karl von Wendt, Writer (pen name Karl Olsberg), Ph.D. in AI, Germany

John Burden, Senior Research Associate, Cambridge Institute for Technology and Humanity

Jan-Willem van Putten, Co-Founder and EU AI Policy Lead at Training for Good 

Diego Albano, Analytics leader, ForHumanity, Spain 

Dr. Aleksei Gudkov, AI Governance Counsellor

Dr. Adrian Hutter, Senior Research Engineer at Google

Greg Elliott, HZ University of Applied Sciences

Janis Hecker, NERA Economic Consulting

Alexandru Enachioaie, Head of Infrastructure at Altmetric

Mathias Ljungberg, Product Owner AI Products at Ahlsell (Sweden), Data Scientist, PhD in Physics