

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

November 15, 2024


As the capabilities of artificial intelligence increase and its impact on society continues to grow, governments worldwide are establishing mechanisms to manage the risks and harness the benefits of these transformative technologies. Among the most prominent of these initiatives is the creation of AI Safety Institutes (AISIs)—specialized, government-funded entities tasked with overseeing the safe development of AI systems, especially those on the frontier of technological advancement.

A significant step toward multilateral coordination was achieved with the launch of the International Network of AI Safety Institutes (AISI Network) in May 2024. The Network brings together members from diverse regions, including Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. Its inaugural formal gathering, scheduled for November 20-21, 2024, in San Francisco, aims to lay the groundwork for “advancing global collaboration and knowledge-sharing on AI safety.”

Our report, Weaving a Safety Net, examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Section 1 introduces AISIs, providing a high-level overview of their purpose and core functions. Section 2 examines the benefits and challenges of collaboration among AISIs, while Section 3 outlines the current state of that collaboration. Building on this foundation, Section 4 considers potential structures for the AISI Network to enhance coordination. Finally, Section 5 explores how AISIs can engage with and complement other prominent actors within the global AI governance ecosystem.

Together, these sections analyze the current state of collaboration between AISIs and other key institutions, and propose strategic directions for deepening these collaborative efforts both within and beyond the Network, enhancing its impact in addressing global AI challenges.

Figure: A network diagram of publicly announced bilateral collaborations between AISIs and with major AI companies. Line color indicates the primary area of collaboration, inferred from the most emphasized topic in press releases about each partnership. Line thickness represents the scope of collaboration, with thicker lines denoting a greater number of distinct areas of collaboration. A comprehensive source list for each connection in the diagram is available in the appendix of the report.

Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) models with systemic risks comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of three National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly and achieve national objectives and inclusive and sustainable...

Working group publishes A Manifesto on Enforcing Law in the Age of AI

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.