Main Insight
The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risk comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.
Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI
March 12, 2025
From railroads to nuclear power, history has shown us again and again how breakthrough technologies serve society best when there’s a good regulatory framework in place.
Some industries have learned this the hard way. Aircraft manufacturing and air travel, for example, have had to modify certain processes and incorporate key insights from major accidents, reflecting the central role that trust and safety play in the mass adoption and market development of safety-critical technologies.
Today, the world is grappling with the crucial question of how to govern increasingly capable AI. With the implementation of the EU AI Act, we have the opportunity, and in our opinion the obligation, to learn from the examples set by other industries. This particularly applies to the EU Code of Practice, which is designed to help providers of General-Purpose AI (GPAI) Models with systemic risk comply with the EU AI Act.
A strong Code protects citizens and promotes innovation
The process to produce the GPAI Code of Practice has been a remarkable effort in and of itself, bringing together hundreds of experts from civil society, academia, and the tech industry as well as the wider private sector.
A central question has been: Does protecting citizens come at the cost of European innovation? We find it’s quite the opposite.
First, there seems to be a misconception about the scope of the Code. The majority of Commitments in the Code apply only to major technology companies producing the most powerful models, which commonly cost several hundred million dollars to develop. Second, a strong Code enables effective downstream adoption, which is the key lever for European AI innovation. This is because the Code introduces the kind of assurance (in terms of reliability, testing, and transparency) that downstream providers, start-ups, and in particular Small and Medium-sized Enterprises (SMEs) require of their suppliers to confidently build innovative applications on top of these models.
The release of the third draft of the Code this week marks the beginning of the final feedback round. Through our preliminary analysis of the text, we identify specific improvements in priority areas that are essential for the Code to realise the AI Act’s promise.
Revisions in five priority areas for a successful Code
1. External pre-deployment assessment (Measure II.11.1.)
The EU requires mandatory external assessment in almost all safety-critical industries, covering a plethora of everyday products such as syringes and boilers. However, the current version of the Code exempts providers from external assessment if they have sufficient expertise and their model is similarly safe to another model. First, expertise does not guarantee independence, a key factor in assessing risks. Second, we are concerned about the notion of “similarly safe”. After all, if a car manufacturer builds a cheaper version of a premium model that performs better on metrics such as mileage and emissions standards, should it be entirely exempted from third-party assessment? To us, it is clear that the Code should not allow for this type of exemption.
Proposed revisions: We propose simplifying the Code by removing the exemption for similarly safe models and instead requiring external assessment as the default. The only exemption should cover cases where no external assessor could be found, and the time allowed for that search should be extended from two to four weeks. Lastly, providers should be required to give preference to external assessors recognised by the AI Office over assessors they have evaluated themselves.
2. Pre-deployment notification (Measure II.14.4.)
The current draft takes a “deploy first, ask questions later” approach. But effective enforcement of the Code will require constructive engagement between providers and the AI Office, and to this end, information-sharing is key. Unfortunately, the third draft only requires providers to share their Model Report, which contains all risk-relevant assessments, by the time they place their model on the market. The current practice of providers working on their models right up until the day of deployment is the problem, not the solution.
Proposed revisions: We suggest that Model Reports be shared at least eight weeks before a model is placed on the market; for open-weight or open-source models, this should extend to twelve weeks. The AI Office should be mandated to review model-specific adequacy assessments and share an opinion with the provider within four weeks of receiving them, giving providers as much advance notice and certainty as possible about the likely compliance status of their models.
3. Whistleblower protections (Commitment II.13.)
From Suchir Balaji to Frances Haugen, whistleblowers have proven essential for uncovering malpractice and incentivising good behaviour by industry players. The third draft has removed the more specific language on whistleblowing, introducing instead a high-level principle of non-retaliation and a vague reference to the relevant Directive (EU) 2019/1937 in a recital.
Proposed revisions: To ensure legal certainty, we suggest a full revision of the Commitment, clarifying the concrete mechanisms that providers of GPAI models with systemic risk should implement to protect individuals who are willing to speak up in the face of wrongdoing. A Directive, which must first be transposed into national law and may be implemented differently across Member States, cannot by itself provide this level of legal certainty.
4. Serious incident response readiness (Measure II.6.3.)
Anticipating and preparing for major incidents is common practice in many safety-critical industries. When you are in a burning building, wouldn’t you want there to be emergency processes with clear responsibilities? We do, but the third draft waters down the corresponding measures considerably compared to the second draft.
Proposed revisions: We suggest requiring providers to define emergency processes, identify corrective measures for specific systemic risks, articulate response timelines, and document all of these measures in a corresponding framework that invites scrutiny from the EU AI Office.
5. Risk taxonomy (Appendix 1.1.)
The systemic risk taxonomy is the foundation of the Code, articulating the types of risks that providers must assess and mitigate. The EU AI Act adopts an expansive understanding of AI risks, setting out to protect the health, safety, and fundamental rights of Europeans. In stark contrast, the third draft reduces the scope of risks covered, removing large-scale discrimination from the prioritised list of systemic risks.
Proposed revisions: We suggest that the Code honour the intent of the Act (Recital 110 and Article 3(65)) by prioritising large-scale discrimination, risks to fundamental rights, and risks from major accidents, among others. The EU AI Office should develop and maintain the list of systemic risks, ensuring regular updates to reflect emergent risks and insights from serious incidents.
Why providers should sign on to the Code
Once the Code of Practice has been finalised, GPAI model providers will face a choice: do they sign on to the Code, or do they demonstrate compliance with the AI Act in an alternative way? The decision should be easy. We see at least three compelling reasons why providers should adhere to the Code:
- The Code is the outcome of an unparalleled, inclusive process, convening leading experts from academia, civil society, and industry. Their work has spanned several months and hundreds of thousands of working hours. Signing the Code demonstrates a concern for the public interest rather than the sole pursuit of profit.
- The Code is strongly grounded in existing industry practice as well as international voluntary measures and norms co-shaped by industry. Complying with the Code therefore imposes limited regulatory burden, especially when viewed in proportion to the vast costs involved in developing the relevant models.
- If a provider wants to deploy in Europe, compliance with the AI Act is required, as it is a binding European law. However, pursuing alternative means of compliance shifts the burden of proof to the GPAI model provider.