

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

September 27, 2023

For the past two years, TFS has conducted extensive research on governing general-purpose AI (GPAI) and related foundation models. We have compiled these insights into a holistic, risk-based, tiered approach to GPAI, which we present in our latest blueprint, “Heavy is the Head that Wears the Crown”.

An executive summary is available here. The blueprint explains why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI, covering the technicalities involved, which requirements should be imposed at each tier, and how to enforce them. A summary of our findings is below:

  1. We identify seven distinct challenges that arise primarily from GPAI and generative AI, ranging from generalisation to concentration of power and misuse.
  2. Most definitions conflate generative AI, foundation models, and GPAI. We explain why separating GPAI models from the generative AI systems built on top of them is important for proportionality.
  3. We propose three tiers: generative AI systems, Type-I GPAI models, and Type-II (cutting-edge) GPAI models.
  4. In a tiered approach to GPAI regulation, requirements are imposed on models in proportion to their risk potential. Type-II GPAI models pose different and more severe challenges than Type-I models, which in turn pose different and more severe challenges than generative AI systems.
  5. Therefore, Type-II models (currently ~10 providers) must comply with the full set of listed requirements, while Type-I models (currently ~14 providers, including 6 that also fall under Type-II) face only a subset of these requirements, and generative AI systems (>400 providers) face an even smaller subset, reflecting risk-based proportionality.
  6. The distinction between tiers is based on the generality of capabilities, which is a good predictor of how risky a GPAI model is. Generality can be approximated by the compute used for training, a metric that is readily available to providers and predictable because it is a major cost driver; this proxy can be updated over time as better metrics become available (a minimal, hypothetical sketch follows this list).
  7. Requirements for each tier are summarized in the Executive Summary.
  8. We present additional measures for effective enforcement, open source governance, combinations of GPAI models, and value chain governance.
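To make point 6 concrete, here is a minimal, purely illustrative Python sketch of how training compute could serve as a proxy for assigning a GPAI model to a tier. The threshold value, class and function names, and example figures are hypothetical assumptions for illustration only; they are not taken from the blueprint or the EU AI Act.

```python
# Illustrative only: the threshold and example figures below are hypothetical
# placeholders, not values from the blueprint or the EU AI Act.
from dataclasses import dataclass

# Assumed training-compute cut-off (in FLOP) separating Type-I from
# Type-II (cutting-edge) GPAI models.
TYPE_II_FLOP_THRESHOLD = 1e25

@dataclass
class GPAIModel:
    name: str
    training_compute_flop: float  # total compute used for training, in FLOP

def classify_gpai_model(model: GPAIModel) -> str:
    """Place a GPAI model in a tier, using training compute as a rough proxy
    for generality of capabilities (point 6 above).

    Generative AI systems form a separate tier defined by their role
    (a system built on top of a GPAI model), not by training compute.
    """
    if model.training_compute_flop >= TYPE_II_FLOP_THRESHOLD:
        return "Type-II GPAI model (full set of requirements)"
    return "Type-I GPAI model (subset of requirements)"

if __name__ == "__main__":
    for m in (GPAIModel("frontier-model", 3e25),
              GPAIModel("mid-scale-model", 5e23)):
        print(f"{m.name}: {classify_gpai_model(m)}")
```

If the proxy is later refined, only the threshold (or the classification rule) would need updating, which is one practical argument for anchoring tiers in a measurable, predictable metric.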


Related resources

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director

A look at our work in 2024 and The Future Society's ten-year anniversary, as well as what's ahead for AI governance in 2025.

The Future Society Newsletter: November 2024

In This Newsletter: Preview of The Athens Roundtable on Artificial Intelligence and the Rule of Law on December 9. Interim findings from our expert consultation for France’s 2025 AI Action Summit. Our report on AI Safety Institutes’ collaboration. Op-ed on the GPAI Code of Practice for the EU AI Act.

Delivering Solutions: An Interim Report of Expert Insights for France’s 2025 AI Action Summit

We share initial findings from 84 global experts who contributed to the first part of our open consultation for France's 2025 AI Action Summit, revealing four key shifts needed in AI governance and concrete recommendations for action. The complete analysis will be presented at The Athens Roundtable on December 9.

The Future Society Newsletter: Fall 2024

Updates from The Future Society: Announcing the Sixth Edition of The Athens Roundtable. Launch of our public consultation for France’s AI Action Summit. Opportunities for civil society engagement. New resource: Open foundation model risk mitigation strategies. We are hiring: Chief Operating Officer.

Springtime Updates

A leadership transition, an Athens Roundtable report, and a few more big announcements!