Main Insight

Our official consultation for the AI Action Summit delivered a clear message: Harnessing AI's potential requires "constructive vigilance." We examine specific priorities identified during the consultation by citizens and experts, evaluating what the Summit successfully delivered and where gaps remain.

Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts? 

February 28, 2025

Ahead of the Paris AI Action Summit, we co-organized the official consultation of citizens and experts from around the world to inform Summit planning at a critical moment of rapid technological advancement. The submissions delivered a clear message: While there is broad citizen and expert support for harnessing AI’s potential, it must be pursued with “constructive vigilance.” As we reflect on the outcomes of the Summit, a key question remains: How well did the AI Action Summit answer this call?

The Summit’s Consultation Process

To inform the Summit’s agenda, we collaborated with several organizations, including the Tech and Global Affairs Innovation Hub at Sciences Po, the AI & Society Institute, the French Digital Council (CNNum), and Make.org. Together, we gathered insights through: 

  • A citizen consultation answering the question “What are your ideas for shaping AI to serve the public good?”, which generated more than 570 proposals and 120,000 votes from 11,661 citizens around the world.
  • An expert consultation that identified 15 concrete deliverables for the Summit based on the recommendations of over 200 civil society and academic associations, think tanks, research groups, and individual experts. These contributors represented diverse perspectives across five continents, including participants from France, the United States, China, India, Brazil, South Africa, Kenya, the United Kingdom, Germany, Pakistan, Argentina, South Korea, Italy, and many other countries. 

This was the first consultation of this scale ever conducted in partnership with an AI Summit, a groundbreaking effort. Findings and conclusions were shared with Summit Envoys and working groups in December 2024. The full report and a four-page summary are available online.

Overall, the Summit implemented 55% of the consultation’s recommendations.

Key Successes of the Summit

The Summit achieved several successes aligned with the priorities identified in the consultation:

  • Launch of the Current AI Foundation, a public interest partnership, with an initial endowment of $400 million and a $2.5 billion funding target.
  • Creation of AI and future of work observatories across 11 nations.
  • Presentation of the first International Scientific Report on the Safety of Advanced AI.
  • Development of a pilot AI Energy Score Leaderboard to track sustainability.

Notable Gaps in the Summit’s Outcomes

While progress was made in several areas, the Summit fell short on addressing some crucial priorities identified by citizens and experts:

  • No proposals on risk thresholds or on system capabilities that could pose severe risks, despite the commitments made in Seoul. Among the citizens who voted on the consultation proposal of “setting risk thresholds beyond which AI development must pause,” 68% agreed.
  • No corporate accountability mechanisms to track and monitor commitments made at previous Summits, even though 77% of citizens voting on the proposal endorsed “ensuring AI giants honor commitments made at summits.”
  • No comprehensive education and reskilling programs.
  • No concrete mechanisms to ensure the meaningful inclusion of Civil Society Organizations at the next Summit.

“Constructive Vigilance”: Safeguards for the Public Interest

The strongest message from citizens in the consultation was the need for “constructive vigilance”: a call for innovation with safeguards. However, while the Summit heavily emphasized AI’s opportunities, it gave significantly less attention to concrete governance measures. 

Notably, the Summit dismissed systemic risks as “science fiction,” contradicting findings from the expert consultation and the International Scientific Report on the Safety of Advanced AI. Concrete oversight mechanisms were absent, even though 84% of citizens voting on this matter supported keeping AI under human oversight to prevent autonomous operation.

Moreover, the Summit fell short of broad international support: although the Paris Declaration gathered more signatories than the Bletchley Declaration, it was missing significant ones, most notably the United States and the United Kingdom.

Detailed Comparison: Consultation Priorities vs. Summit Outcomes

The following provides a comprehensive breakdown of how the Summit’s outcomes aligned with citizen and expert priorities. For each thematic area, we examine specific priorities identified during the consultation, evaluate what the Summit successfully delivered, and highlight remaining gaps. Overall, 5.5 of the 10 consultation priorities were reflected in the Summit’s deliverables, while the remaining 4.5 were not.

Public Interest
Priorities from Consultation
  • 📃 Global Charter for Public Interest AI
  • 🗳️ AI Commons Initiative to Empower Citizens in AI Design
Summit Successes
  • Public Interest: Significant emphasis on Public Interest AI throughout the Summit, pushing the ecosystem to place the public at the center of AI development and governance.
  • Current AI: Launch of Current AI, a public interest AI partnership with an initial $400 million investment from the French government, AI Collaborative, other leading governments, and philanthropic and industry partners, and plans to raise a total of $2.5 billion over the next five years. Its focus areas of data, responsibility, and openness for trust and safety align with the consultation.
  • The Paris Charter on Artificial Intelligence in the Public Interest: A voluntary charter signed by ten countries that endorses public interest initiatives. It focuses on openness, accountability, participation, and transparency. Similar to the Charter proposed in the consultation, it emphasizes societal well-being, human rights, democratic participation, and equitable benefit distribution. The charter also stresses the importance of preventing both individual and collective harm.
  • Citizens Empowerment in AI Design: Many of the initiatives of Current AI mention the need for inclusive governance, representing a first step towards a concrete initiative.
  • AI & Democracy: A global call for an Alliance on AI and Democracy has been signed by numerous individuals and is now supported by 35 companies, NGOs, and research labs. The Alliance aims to support over 100 concrete initiatives to safeguard democracy in the face of AI-driven challenges.
Remaining Gaps
  • Few Signatures: With signatures from only ten countries, the Public Interest Charter lacks broad global consensus.
  • Commons: A formal AI Commons Initiative to involve citizens in designing AI systems was not launched. The role of citizen oversight has not been formalized yet.

Overall: 2/2 consultation priorities reflected

Innovation and Culture
Priorities from Consultation
  • 🔍 Auditable Model Efficiency Standards
  • 🌱 Global Green AI Leaderboard
Summit Successes
  • Audits: Audits are expected to form part of the priorities of the Current AI partnership, with particular attention paid to environmental impacts and child protection.
  • Data for Public Interest: Observatoire IA was launched by the French Ministry of the Economy and Finance to collect and disseminate data for uses in areas such as healthcare and the environment.
  • AI Energy Score: A new leaderboard sets out to compare over 200 models on their energy performance, following the consultation’s Global Green AI Leaderboard recommendation. While some of the largest current models are missing, this is an important first step.
Remaining Gaps
  • Efficiency Standards: No auditable model efficiency standards were presented.
  • Limitations of the AI Energy Score: The AI Energy Score leaderboard does not include the frontier models that likely account for most of the energy use of foundation models. It also focuses on inference and does not compare the usefulness or capability of models.

Overall: 2/2 consultation priorities reflected

Future of Work
Priorities from Consultation
  • 📚 Global AI Education and Critical Thinking Initiative
  • 👷 International Task Force on AI and Labor Market Disruption
Summit Successes
  • Network of Observatories on AI’s Impact on Work: A voluntary platform to exchange knowledge and reinforce dialogue on the impact of AI on work was announced. It includes 11 national institutions (Australia, Germany, Austria, Brazil, Canada, Chile, India, Italy, Japan, the UK, and France) and partners (Adecco, Capgemini Invent, Human Technology Foundation) coordinated by the ILO and OECD.
  • Pledge for a Trustworthy AI in the World of Work: A wide range of companies made commitments in several areas. These include promoting social dialogue, investing in human capital, and ensuring workplace safety and dignity. Companies also pledged to avoid discrimination, protect worker privacy, and promote productivity and inclusiveness throughout their value chains.
Remaining Gaps
  • Critical Thinking & Education: While citizens and experts emphasized the importance of teaching critical thinking regarding AI development, none of these initiatives reflect this priority.
  • Reskilling: No fully resourced education and reskilling program (e.g., a Global AI Education Initiative) was created.

Overall: 1/2 consultation priorities reflected

Global Governance
Priorities from Consultation
  • 🛡️ International Scientific Panel on AI Risks
  • 🌍 Mechanisms for Civil Society Organizations and Global South inclusion
Summit Successes
  • Scientific Assessment: The first International Scientific Report on the Safety of Advanced AI was presented, though only briefly. Meetings held on the sidelines of the Summit addressed the report’s continuation, and it is set to continue.
  • Inclusion: Civil society and academia were consulted and involved more extensively than at past Summits. International inclusion improved, with more stakeholders from the Global South attending, further supported by a wide variety of side events. Workshops were also held at the Summit itself with nearly 100 international partners.
  • Governance Cartography: A mapping of AI governance issues was created, highlighting the need for technical standards, policies, safety frameworks, and ethical guidelines.
Remaining Gaps
  • Follow-ups on Scientific Assessments: There was no commitment to create an equivalent of the Intergovernmental Panel on Climate Change for AI.
  • The Future of Civil Society and Global South Inclusion: No concrete mechanisms to include Civil Society Organizations and the Global South have been set forth for the next Summit.

Overall: 0.5/2 consultation priorities reflected

Trust in AI
Priorities from Consultation
  • 🚨 Common Thresholds for AI Oversight
  • 📈 AI Corporation Commitments Report Card
Summit Successes
  • Launch of ROOST (Robust Open Online Safety Tools): The project aims to build scalable and resilient safety infrastructure and tools for online safety. It is supported by large tech companies, open-source actors, universities, and philanthropists, with particular attention paid to the protection of children, and backed by an initial investment of over $25 million.
  • AI-Content Detection: The French government’s Digital Regulation Expertise Center (PEReN) and the Service for Vigilance and Protection Against Foreign Digital Interference (VIGINUM) developed an open-source AI-generated content detection tool. Additionally, an open-source tool called “D3lta” was published by VIGINUM and the OECD to identify large-scale text manipulation tactics, accompanied by a VIGINUM report on using AI to combat information manipulation.
  • Cybersecurity Analysis: The French agency ANSSI published an analysis of risks and opportunities in cybersecurity, developed with 200 experts and signed by 14 countries.
  • New Safety Commitments: Several companies, including Meta, Amazon, Microsoft, and xAI, made new frontier model safety commitments around the Summit.
Remaining Gaps
  • Common Thresholds: The Summit made no progress on defining red lines and risk thresholds, despite this being a key commitment from Seoul, and proposed no plan to achieve this at future Summits.
  • Reporting and Standards: There is no AI Corporate Commitments Report Card, which would increase the chances of companies following through on their commitments, and long-term monitoring mechanisms remain underdeveloped.
  • Advancements on Company Commitments: Some companies that had promised to make commitments, such as Mistral and Zhipu AI, did not deliver any public progress on them.

Overall: 0/2 consultation priorities reflected

Looking Forward

As India prepares to host the next Summit in November 2025, and with Switzerland expressing interest in following, both countries have the opportunity to build on the Paris Summit’s achievements while addressing key gaps. The Current AI partnership and monitoring initiatives provide valuable foundations for progress, but five critical areas remain underdeveloped: risk thresholds (despite the Seoul commitments), serious risk assessment, accountability mechanisms, inclusive participation, and broader international consensus. 

India brings unique strengths to this challenge: democratic practice, digital infrastructure expertise, G20 experience, and Global South leadership. Success will depend on achieving the “constructive vigilance” that citizens and experts worldwide have prioritized, setting safeguards that promote purposeful innovation in service of the public interest.

Related resources

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

International Coordination for Accountability in AI Governance: Key Takeaways from the Sixth Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law

We share highlights from the Sixth Edition of the Athens Roundtable on AI and the Rule of Law on December 9, 2024, including 15 strategic recommendations that emerged from the day's discussions.

Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events

The Future Society is honored to be participating in the AI Action Summit and side events in Paris. Learn about how we've been engaging so far and what we'll be participating in during this momentous week for AI governance.

Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit

Our comprehensive consultation of 10,000 citizens and 200+ experts worldwide reveals five priority areas for AI governance, with concrete deliverables for the 2025 AI Action Summit. The results demonstrate strong public support for robust AI regulation and a vigilant approach to AI development, emphasizing the need for inclusive, multistakeholder governance.

Open Consultation to Engage Citizens, Academics, and Civil Society Around the World for the Upcoming AI Action Summit

Civil society organizations, academics, and individuals are invited to participate in a landmark consultation to shape the future of AI governance, organized by Sciences Po, the AI & Society Institute, The Future Society, CNNum, and Make.org, in partnership with the French Presidency.

Join us for the fourth edition of The Athens Roundtable on AI and the Rule of Law

The fourth edition of The Athens Roundtable on AI and the Rule of Law will take place December 1-2, 2022.