Main Insight

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

August 30, 2024

This summer, Partnership on AI (PAI) published risk mitigation strategies for the open foundation model value chain. This effort provides practical guidance for managing risks associated with foundation models that have openly available model weights. The Future Society is proud to have contributed to this effort, as part of our work to inform pragmatic measures, practices, and precautions that foundation model developers can take to address risks arising downstream in the value chain.

These guidelines emerged from a workshop co-hosted by PAI and GitHub in San Francisco in April 2024, which convened experts from leading AI companies, academia, and civil society. Our Director of Global AI Governance, Yolanda Lannquist, shared insights on the misuse of open foundation models, where downstream actors can fine-tune the models and circumvent safety guardrails put in place by the original developers. More details about the workshop and a list of participants are available here.

For policymakers navigating the complexities of governing open foundation models, it is crucial to balance the numerous benefits of openness with the need for safe and responsible AI development and use. The value-chain perspective is indispensable as it addresses the diverse roles and responsibilities of actors involved in building and operationalizing AI systems—including model developers, compute and cloud providers, data providers, model adapters, hosting services, and application developers. It helps identify where interventions can make a difference and considers the specific contexts and environments in which these systems operate.

Looking ahead, significant work remains to define and reach broad international consensus on specific capability and risk thresholds for open—as well as closed—foundation models. These thresholds will be essential for identifying categories of models that require additional evaluation, oversight, and potential restrictions on their release. Similarly, there is a need for continued exploration of accountability and transparency mechanisms that apply to providers of all foundation models, regardless of whether they are “open” or not. We look forward to future collaborations towards a common goal of ensuring the responsible development, release, and use of foundation models.

Banner image generated using artificial intelligence.

Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of 3 National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...

Working group publishes A Manifesto on Enforcing Law in the Age of AI

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto calling for the effective and legitimate enforcement of laws concerning AI systems.