
Main Insight

AI global governance approaches will need to borrow and adapt from other governance regimes including climate change, internet governance, arms control, international trade, and finance.

Why We Need an Intergovernmental Panel for Artificial Intelligence

December 5, 2018

The rise of AI involves an unprecedented combination of complex dynamics, which poses challenges for multilateral efforts to govern its development and use. Global governance approaches will need to strike the right balance between enabling beneficial innovation and mitigating risks and adverse effects. They will also need to borrow and adapt from other governance regimes, including climate change, internet governance, arms control, international trade, and finance. Given the high systemic complexity, uncertainty, and ambiguity surrounding the rise of AI, its dynamics, and its consequences – a context similar to climate change – creating an equivalent of the Intergovernmental Panel on Climate Change (IPCC) for AI, or "IPAI", can help build a solid base of facts and benchmarks against which to measure progress.

Read Nicolas Miailhe’s article: “AI & Global Governance: Why We Need an Intergovernmental Panel for Artificial Intelligence.”


Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of general-purpose AI (GPAI) models with systemic risk comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of three National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly and to achieve national objectives and inclusive and sustainable development.