Main Insight

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

April 2, 2025

As artificial intelligence becomes increasingly powerful, critical gaps remain in the governance needed to address urgent risks. The 2025 Paris AI Action Summit laid significant groundwork by launching initiatives such as the CurrentAI Foundation, AI workforce observatories, and the AI Energy Score Leaderboard. Yet without concrete, enforceable governance mechanisms, AI risks will continue to escalate unchecked.

To identify the most urgent governance interventions for 2025, we conducted a prioritization exercise. Building on the Summit’s global consultation of 11,000+ citizens and 200+ experts, we co-organized a structured workshop on the margins of the Summit, where 61 participants mapped critical governance gaps. We then surveyed 44 civil society organizations—including research institutes, think tanks, and advocacy groups—to rank ten potential governance mechanisms. Though limited in scale, this sample offers an indication of emerging civil society consensus. 

The survey responses showed the strongest consensus around: 

  1. Establishing legally binding “red lines” for high-risk or uncontrollable AI systems, 
  2. Mandating independent third-party audits with verification of safety commitments, 
  3. Implementing crisis management frameworks, 
  4. Enacting robust whistleblower protections, and 
  5. Ensuring meaningful civil society participation in governance.

Our analysis organizes these priorities into a four-dimensional roadmap: enforcement mechanisms, risk management systems, inclusive governance structures, and beneficial applications. This roadmap provides actionable guidance for key governance initiatives, including the upcoming India AI Summit, the OECD AI Policy Observatory, the implementation of UNESCO’s AI Ethics Recommendation, the UN Global Digital Compact, and regional frameworks such as the EU’s GPAI Code of Practice. It also offers civil society organizations a shared snapshot of priorities for advocacy and strategic coordination.

By working together to implement these priority mechanisms, policymakers and civil society can create a foundation for making AI safer, more accountable, and more aligned with the public interest.

Background and Methodology

The survey and the analysis of its results build heavily on insights gathered through a structured consultation process on AI governance priorities.

Foundation: AI Action Summit Consultation (October – December 2024)
The Future Society, Make.org, Sciences Po, AI and Society Institute, and CNNum conducted the global consultation for the AI Action Summit, gathering input from over 11,000 citizens and 200+ expert organizations across five continents. The consultation identified key areas of concern including AI risk thresholds, corporate accountability for commitments, and civil society inclusion.

Phase 1: Paris AI Action Summit Side Event (February 11, 2025)
Building on the global consultation findings, an official side event titled “Global AI Governance: Empowering Civil Society” was held during the Paris AI Action Summit. This event was co-organized by Renaissance Numérique, The Future Society, Wikimedia France, the Avaaz Foundation, Connected by Data, Digihumanism, and the European Center for Not-for-Profit Law. The results of the consultation were presented during the event, and 61 participants identified governance gaps and submitted concrete proposals to fill them via an interactive polling platform, enabling real-time consolidation of diverse inputs. These submissions were then synthesized into 15 potential governance priorities.

Phase 2: Priority Ranking Survey (February 15-25, 2025)
Renaissance Numérique and The Future Society consolidated the 15 initial priorities into 10 for easier voting. The ranking survey of these priorities was distributed through multiple civil society networks and professional channels to ensure broad participation, with 44 organizations ultimately responding. While participation was anonymous, all respondents self-identified as representatives of civil society organizations, think tanks, or academic institutions with expertise in AI governance.

Scoring Methodology
The prioritization used a weighted ranking system where the highest-ranked items received 10 points, second-ranked received 9 points, and so on. Final scores represent average points across all organizations, ranging from 3.16 to 7.25, with higher scores indicating stronger consensus.
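For illustration, the minimal sketch below computes such weighted-ranking scores, assuming items an organization leaves unranked contribute zero points (consistent with the note in Figure 1 that a score of 0 means no organization ranked the item). The rankings and item names shown are hypothetical, not the survey data.

```python
def weighted_scores(rankings, num_items=10):
    """Average weighted-ranking scores across all responding organizations.

    Each inner list orders items from highest priority (num_items points)
    down to lowest (1 point); unranked items earn 0 points (assumption).
    """
    n_orgs = len(rankings)
    totals = {}
    for ranking in rankings:
        for position, item in enumerate(ranking):
            # Top choice earns num_items points, second earns num_items - 1, etc.
            totals[item] = totals.get(item, 0) + (num_items - position)
    # Divide by the number of organizations, so unranked items pull scores down.
    return {item: total / n_orgs for item, total in totals.items()}

# Hypothetical rankings from three organizations (top choice listed first):
example = [
    ["red_lines", "audits", "crisis_frameworks"],
    ["audits", "red_lines", "whistleblower_protections"],
    ["red_lines", "crisis_frameworks", "audits"],
]
print(weighted_scores(example))
# ≈ {'red_lines': 9.67, 'audits': 9.0, 'crisis_frameworks': 5.67,
#    'whistleblower_protections': 2.67}
```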

While the sample size is modest and cannot claim to represent all civil society perspectives, the structured methodology and diverse participation provide a valuable snapshot of where consensus may be emerging. The priorities identified should be understood as a starting point for further dialogue and research rather than as a definitive ranking.

Poll Results: Civil Society Organizations’ Governance Priorities 

The survey results revealed varying levels of consensus among civil society organizations regarding which governance mechanisms should be prioritized for immediate action. While all ten options were identified as valuable through the consultation process, their relative importance was ranked as follows:

Figure 1: Civil Society Organizations’ AI Governance Priorities by Consensus Score (Scale: 0-10). In our weighted ranking system, a score of 10 would indicate unanimous agreement that an item should be the top priority, while a score of 0 would indicate that no organization ranked that item at all. The actual range of scores (3.16 to 7.25) demonstrates that all items received meaningful support, with a clear stratification of priorities. The difference between consecutive scores indicates the relative strength of preference between items. You can find the original Sli.do results here.

  1. Establish legally binding “red lines” prohibiting certain high-risk or uncontrollable AI systems incompatible with human rights obligations. (7.25)
  2. Mandate systematic, independent third-party audits of general-purpose AI systems—covering bias, transparency, and accountability—and require rigorous verification of AI safety commitments with follow-up and penalties for noncompliance. (6.52)
  3. Establish crisis management frameworks and rapid-response security protocols to identify, mitigate, and contain urgent AI risks. (5.98)
  4. Enact robust whistleblower protections, shielding those who expose AI-related malpractices or safety violations from retaliation. (5.27)
  5. Ensure civil society, including Global South organizations, plays a strong role in AI governance via agenda-setting, expert groups, and oversight. (4.73)
  6. Support the global AI Safety and Security Institutes network, fostering shared research, auditing standards, and best practices across regions. (4.36)
  7. Publish periodic, independent global scientific reports on AI risks, real-world impacts, and emerging governance gaps. (4.21)
  8. Invest in large-scale “AI for Good” collaborations in sectors like health, climate, and education under transparent frameworks. (3.77)
  9. Incorporate sustainability, equity, and labor protections into AI governance structures. (3.57)
  10. Hold regular high-level AI Action Summits modeled on multilateral environmental conferences. (3.16)

This ranking should not be interpreted as suggesting that lower-ranked items are unimportant. In fact, all these governance mechanisms were already selected as priorities through the different consultation phases. Rather, it indicates where civil society organizations see the most urgent need for immediate action given limited policy resources and attention.

Analysis: A Framework for AI Governance in 2025

The survey results reveal a roadmap for 2025 with four interconnected dimensions. This framework moves beyond high-level principles to concrete implementation mechanisms.

Figure 2. A Roadmap for AI Governance in 2025

1. Enforcement Mechanisms 

Participating civil society organizations demonstrated overwhelming consensus on legally binding measures, with enforcement mechanisms receiving the highest support across all priorities:

  • Establish legally binding “red lines” prohibiting certain high-risk or uncontrollable AI systems incompatible with human rights obligations. (Highest ranked overall)
  • Mandate systematic, independent third-party audits of general-purpose AI systems—covering bias, transparency, and accountability—and require rigorous verification of AI safety commitments with follow-up and penalties for noncompliance. (Second highest consensus)

Professor Stuart Russell, President Pro Tem of the International Association for Safe and Ethical AI, emphasizes that effective red lines must cover “both unacceptable uses of AI by humans and unacceptable behaviors by AI systems.” For system behaviors, he argues that “the onus should be on developers to demonstrate with high confidence that their systems will never exhibit such behaviors.” This aligns with the 2024 international IDAIS-Beijing consensus statement, which identified specific examples, including prohibitions on autonomous replication, power-seeking behaviors, weapons of mass destruction development, harmful cyberattacks, and deception about capabilities. These concrete applications likely contributed to binding prohibitions emerging as the top priority in the civil society survey. Professor Russell, like the other experts quoted throughout this article, provided his perspective on the survey findings when invited to comment.

Sarah Andrew, Legal and Campaign Director at Avaaz, said that “AI Governance developed without red lines to protect human rights and dignity would be a hollow sham,” adding that “the evidence of AI’s potential to harm humanity, at both system and general-purpose model levels, is mounting.” Avaaz has notably published real stories of people harmed by AI systems’ decisions.

The second priority of independent verification addresses a fundamental flaw in current approaches. David Harris, a Chancellor’s Public Scholar at UC Berkeley, warns of the “fundamental conflict of interest in allowing AI companies to evaluate their own products for compliance under laws like the EU AI Act.” He emphasizes that “we must not allow AI companies to grade their own homework,” arguing instead for external third-party auditors who “undergo a certification process to verify their capacity and neutrality.”

2. Risk Management 

Building on enforcement foundations, organizations showed strong consensus around complementary mechanisms to identify and address risks:

  • Establishing crisis management frameworks for rapid response to urgent AI risks
  • Protecting whistleblowers who expose AI-related violations
  • Supporting the global network of AI Safety/Security Institutes
  • Publishing periodic independent scientific reports on AI risks and impacts

These mechanisms work together: research identifies emerging risks, formal audits verify compliance, whistleblower protections surface issues that formal processes miss, and crisis protocols enable swift response when needed.

Jess Whittlestone, Head of AI Policy at the Centre for Long-Term Resilience (CLTR), emphasizes the urgency of crisis preparedness: “Governments must increase focus on incident preparedness—the ability to effectively anticipate, plan for, contain and bounce back from incidents involving AI systems that threaten national security and public safety. This will require a range of measures, including updated incident response plans as well as ensuring there are appropriate legal powers to intervene in a crisis. If this does not become a priority now, we’re concerned that incidents could cause avoidable catastrophic harms within the next few years.”

The whistleblower protection priority reflects growing recognition of its crucial role in the AI governance ecosystem. Karl Koch, founder of the AI whistleblowing-focused non-profit Third Opinion, calls these protections “a uniquely efficient tool to rapidly uncover and prevent risks at their source.” He advocates for measures that “shield individuals who disclose potential critical risks” while noting that regional progress is insufficient: “we need global coverage through, at a minimum, national laws, especially at the U.S. federal level, in China, and in Israel.”

3. Inclusive Governance 

For both enforcement and risk management to function effectively across diverse deployment contexts, governance requires varied perspectives:

  • Strengthening civil society representation, particularly from the Global South
  • Holding regular high-level AI Summits with multi-stakeholder involvement

Zakariyau Yusuf, Director of the Tech Governance Project based in Nigeria, cautions against developing AI governance without Global South participation. Such frameworks, he warns, “risk overlooking socio-economic contexts or exploiting resources and data without equitable benefits.” This exclusion creates dangerous blind spots in regulation while simultaneously undermining trust in the resulting systems. When properly engaged, Yusuf explains, Global South stakeholders bring critical perspectives that ensure “unique challenges, identities, and perspectives from their nations are appropriately represented.” This inclusion is not merely about representation but about creating governance structures that preserve “autonomy, liberty, and a fair governance structure” for all affected communities.

Jhalak Kakkar, Executive Director of the Centre for Communication Governance at National Law University in Delhi, pinpoints a structural problem in AI development: “AI is often built in contexts disjointed from their deployment.” This geographical and resource disconnect creates a significant risk—AI systems developed in high-resource environments but deployed in lower-resource settings frequently produce unintended harms. This reality, she explains, makes Global South participation in governance essential, as these organizations understand the specific “societal, economic and cultural contexts” that determine AI’s real-world impacts. Their expertise is crucial for designing “effective oversight and enforcement to protect the most marginalized and vulnerable people.” 

The upcoming India AI Summit represents what Kakkar calls “a pivotal moment” to integrate inclusion throughout “the AI value chain and ecosystem,” embedding these principles in both global frameworks and local governance approaches.

4. Beneficial Applications 

Complementing risk-focused governance, organizations emphasized directing AI toward positive outcomes:

  • Investing in large-scale “AI for Good” collaborations in sectors like health and climate
  • Incorporating sustainability, equity, and labor protections into governance frameworks

These priorities address a key strategic challenge identified by B Cavello, Director of Emerging Technologies at Aspen Digital: “People today do not feel like AI works for them. While we work to minimize potential harms from AI use, we must also get concrete about prioritizing progress on the things that matter to the public.” This legitimacy gap could undermine support for governance itself if not addressed.

Cavello articulates a specific approach: “Public accountability demands that we develop meaningful measures of impact on important issues like standards of living and be transparent about how things are going. There are many positive applications of these tools, and those efforts must be prioritized as research goals, not merely byproducts.” This outcomes-oriented framing suggests beneficial applications should be governed through measurable impact metrics rather than aspirational principles.

Practical Applications

These priorities have direct implications for upcoming governance initiatives. 

India AI Summit (November 2025)

With the India AI Summit planned for November 2025, there is an urgent opportunity to translate these priorities into action. Policymakers preparing for the Summit should consider:

  • Establishing technical working groups specifically focused on defining red lines and risk thresholds. 
  • Establishing accountability mechanisms to verify that safety commitments made at previous summits have been fulfilled, with transparent reporting on progress and gaps.
  • Developing standardized methodologies for independent verification that can work across jurisdictional boundaries.
  • Creating crisis response protocols with clear intervention thresholds and coordinated response mechanisms.
  • Implementing transparent civil society participation mechanisms that ensure equitable representation.

Global and Regional Implementation Frameworks

Beyond the India AI Summit, civil society’s AI governance priorities can directly support key regional and global governance processes entering critical implementation phases in 2025. Policymakers can strengthen these processes with the following recommendations:

  • European Union (GPAI Code of Practice): Mandate independent audits covering bias, transparency, and accountability, with clear verification processes and penalties; establish robust whistleblower protections and crisis response protocols.
  • OECD (AI Policy Observatory): Expand the AI Incidents Monitor (AIM) for systematic risk tracking and rapid crisis response; regularly publish independent AI risk assessments.
  • UNESCO (AI Ethics Recommendation): Operationalize ethical impact assessments (EIA) to proactively prevent high-risk AI applications; enhance civil society engagement, particularly from the Global South.
  • UN Global Digital Compact: Establish regular global scientific AI risk assessments and ensure inclusive civil society participation, especially from the Global South.
  • AI Safety and Security Institutes Network: Facilitate international collaboration on verification standards and crisis response frameworks, and share auditing outcomes broadly.
  • NATO (AI Security and Governance): Implement NATO-aligned AI verification standards and robust whistleblower protections, and closely coordinate crisis protocols with EU and UN initiatives.

Conclusion: A Roadmap for Effective Governance

The priorities identified by civil society organizations provide a clear mandate for AI governance: binding prohibitions, independent verification, and crisis response frameworks. This four-dimensional roadmap—enforcement mechanisms, risk management systems, inclusive governance, and beneficial applications—offers concrete guidance for policymakers at both global and regional levels.

As AI capabilities advance rapidly, the window for establishing effective governance is narrowing. The upcoming India AI Summit and implementation of existing frameworks present critical opportunities to translate these priorities into action. Further research should expand civil society consultation to include more diverse voices, ensuring governance frameworks truly reflect global needs and perspectives. Replacing aspirational principles with concrete, enforceable mechanisms remains essential to ensuring AI development proceeds in ways that are safe, beneficial, and aligned with the public interest.


This analysis was produced by The Future Society and Renaissance Numérique, cross-published on both websites. 

Authors: Caroline Jeanmaire, Jessica Galissaire, Tereza Zoumpalova, and Niki Iliadis 

We acknowledge the valuable contributions from the 44 civil society organizations, along with over 11,000 citizens, 200 expert organizations, and 61 participants of the AI Action Summit’s official side event, whose collective insights have significantly informed this priority-setting initiative.

Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts?

Our official consultation for the AI Action Summit delivered a clear message: Harnessing AI's potential requires "constructive vigilance." We examine specific priorities identified during the consultation by citizens and experts, evaluating what the Summit successfully delivered and where gaps remain.

International Coordination for Accountability in AI Governance: Key Takeaways from the Sixth Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law

We share highlights from the Sixth Edition of the Athens Roundtable on AI and the Rule of Law on December 9, 2024, including 15 strategic recommendations that emerged from the day's discussions.

Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events

The Future Society is honored to be participating in the AI Action Summit and side events in Paris. Learn about how we've been engaging so far and what we'll be participating in during this momentous week for AI governance.

Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit

Our comprehensive consultation of 10,000 citizens and 200+ experts worldwide reveals five priority areas for AI governance, with concrete deliverables for the 2025 AI Action Summit. The results demonstrate strong public support for robust AI regulation and a vigilant approach to AI development, emphasizing the need for inclusive, multistakeholder governance.