Main Insight
The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.
Guidance on Risk Mitigation for the Open Foundation Model Value Chain
August 30, 2024
This summer, Partnership on AI (PAI) published risk mitigation strategies for the open foundation model value chain. This effort provides practical guidance for managing risks associated with foundation models whose weights are openly available. The Future Society is proud to have contributed to this effort as part of our work to inform pragmatic measures, practices, and precautions that foundation model developers can take to address downstream risks.
These guidelines grew out of a workshop co-hosted by PAI and GitHub in San Francisco in April 2024, which convened experts from leading AI companies, academia, and civil society. Our Director of Global AI Governance, Yolanda Lannquist, shared insights on the misuse of open foundation models, where downstream actors can fine-tune models to circumvent safety guardrails put in place by the original developers. PAI has published more details about the workshop, along with a list of participants.
For policymakers navigating the complexities of governing open foundation models, it is crucial to balance the numerous benefits of openness with the need for safe and responsible AI development and use. The value-chain perspective is indispensable because it accounts for the diverse roles and responsibilities of the actors involved in building and operationalizing AI systems—including model developers, compute and cloud providers, data providers, model adapters, hosting services, and application developers. It helps identify where interventions can make a difference and considers the specific contexts and environments in which these systems operate.
Looking ahead, significant work remains to define and reach broad international consensus on specific capability and risk thresholds for open—as well as closed—foundation models. These thresholds will be essential for identifying categories of models that require additional evaluation, oversight, and potential restrictions on their release. Similarly, there is a need for continued exploration of accountability and transparency mechanisms that apply to providers of all foundation models, whether "open" or not. We look forward to future collaborations toward the common goal of ensuring the responsible development, release, and use of foundation models.
Banner image generated using artificial intelligence.