Caroline Jeanmaire
Caroline is a Senior Associate at The Future Society. Her work, grounded in her background in international relations and public policy, centers on ensuring the trustworthiness of advanced AI systems. She serves as an independent expert at the Collège Numérique of the French government. Caroline's expertise spans AI policy across the United States, the United Kingdom, France, and the European Union, with the aim of advancing AI regulation globally.
Previously the Director of Strategic Research and Partnerships at UC Berkeley's Center for Human-Compatible AI (CHAI), Caroline has a rich history of building research communities and stakeholder relationships focused on AI safety and ethics. Her work includes organizing high-impact conferences and workshops, such as the CHAI flagship conference and two editions of the Global Governance of AI roundtable with The Future Society.
Her publications cover various AI governance topics, including building transatlantic consensus on AI governance, exploring positive AI economic futures, reviewing annual progress in public trust in AI governance, discussing AI geopolitics, and writing policy reports for the White House and the European Union. Her work has been featured by the World Economic Forum.
Caroline holds dual master's degrees from Peking University and Sciences Po Paris, and a bachelor's degree in political science from Sciences Po Paris. She is a PhD candidate at Oxford's Blavatnik School of Government, focusing on AI's critical risks. Her accolades include being named one of the "35 under 35" future leaders by the Barcelona Centre for International Affairs and recognition as one of the "100 Brilliant Women in AI Ethics."

Paris
Related resources
Ten AI Governance Priorities: Survey of 44 Civil Society Organizations
Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.
Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts?
Our official consultation for the AI Action Summit delivered a clear message: Harnessing AI's potential requires "constructive vigilance." We examine specific priorities identified during the consultation by citizens and experts, evaluating what the Summit successfully delivered and where gaps remain.
Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events
The Future Society is honored to be participating in the AI Action Summit and side events in Paris. Learn about how we've been engaging so far and what we'll be participating in during this momentous week for AI governance.
Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit
Our comprehensive consultation of 10,000 citizens and 200+ experts worldwide reveals five priority areas for AI governance, with concrete deliverables for the 2025 AI Action Summit. The results demonstrate strong public support for robust AI regulation and a vigilant approach to AI development, emphasizing the need for inclusive, multistakeholder governance.
Delivering Solutions: An Interim Report of Expert Insights for France’s 2025 AI Action Summit
We share initial findings from 84 global experts who contributed to the first part of our open consultation for France's 2025 AI Action Summit, revealing four key shifts needed in AI governance and concrete recommendations for action. The complete analysis will be presented at The Athens Roundtable on December 9.
TFS joins U.S. NIST AI Safety Institute Consortium
TFS is collaborating with NIST in the AI Safety Institute Consortium to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy AI and its responsible use.