The Future Society (https://thefuturesociety.org/)

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?
https://thefuturesociety.org/aicrisisexplainer/ (26 May 2025)

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

We are in an era increasingly shaped by artificial intelligence (AI). Since the public launch of ChatGPT in late 2022 and subsequent breakthroughs, the world has become more aware of AI’s capabilities and its transformative potential. It brings the opportunity for immense benefits, such as boosting productivity, preventing disease, and mitigating climate change. But alongside these benefits are profound and novel risks to humanity, including to fundamental rights, public safety, democratic processes, and both national and global security.

The world is underprepared for how quickly AI hazards could spiral into crises, potentially outpacing our ability to respond. The concept of preparedness has long driven disaster risk reduction, emergency response, and accident prevention, with specific applications in safety-critical areas like nuclear energy, cybersecurity, and public health.

Yet we currently lack a global framework for handling a potential AI crisis, one that covers anticipating, detecting, assessing, containing, responding to, and recovering from it. Without coordinated global capacity and action, we risk facing systemic AI disruptions with fragmented and insufficient responses.

This explainer covers what AI crisis preparedness means, why it is urgently needed to minimize avoidable harms, and how the international community can take actions designed to set us all up for a safer and more secure future. 

What Is an AI Crisis?

To support proactive and coordinated planning, we outline an escalation pathway that shows how serious harms caused or intensified by the design, deployment, misuse, or failure of AI can evolve from early-stage hazards to full-scale crises.

This pathway is not a predictive model, but rather a working tool to support anticipatory thinking and response planning. It maps four escalating stages—hazard, incident, emergency, and crisis. 

Each stage reflects increasing severity, scale, and urgency, drawing on terminology used by multilateral institutions like the OECD and established in academic literature. For each stage, we give illustrative examples drawn from global security—such as military AI, state-level cybersecurity, and critical infrastructure protection—to show what escalating harm can look like in practice.

Stage 1: AI Hazard

AI Hazard: An event, circumstance, or built-in system feature through which AI development or use could lead to harm. It signals potential risk but has not yet caused harm.

Example: Advanced lethal autonomous weapons are being used by both state and non‑state actors without binding international rules, clear accountability, rigorous testing, or guarantees of meaningful human control. These systems can operate faster than humans can respond and may misinterpret signals or behave unpredictably. Even a small glitch or mistaken identification could trigger a rapid chain of events (for instance, a drone falsely identifying a threat and prompting others to attack), leading to unintended military escalation and global security consequences. In these cases, while no harm has yet occurred, the situation is volatile: monitoring and mitigation are essential to prevent it from becoming an incident.

Stage 2: AI Incident

AI Incident: An occurrence when AI development or use directly or indirectly causes harm. This harm may include injury, critical infrastructure disruption, rights violations, or environmental damage. An incident is typically limited in scope, contained without widespread spillover, and recoverable.

Example: An AI-enabled cyber defense system mistakenly flags a routine software update from an allied country as a hostile intrusion. Its built-in countermeasure temporarily disrupts a non-essential communication link used by the allied military. The problem is identified quickly and resolved through standard diplomatic and technical processes. The harm that took place was localized and short-lived.

Stage 3: AI Emergency

AI Emergency: A fast-moving, urgent threat related to AI. It often affects national security, with a scope or speed that can overwhelm routine response mechanisms.

Example: A sophisticated, multi-pronged, AI-driven cyberattack simultaneously targets and disrupts core operational systems across multiple critical infrastructure sectors within a nation, including significant portions of the national power grid, critical communication networks, and key logistical hubs. The attacks evolve dynamically, bypassing conventional defenses. Essential services falter, and panic begins to spread. Traditional response systems are too slow, requiring immediate national or even international intervention.

Stage 4: AI Crisis

AI Crisis: This is the most severe stage. A deep, systemic, and long-lasting AI-driven disruption that causes large-scale harm. It often spreads across borders or sectors, producing mass harm and undermining geopolitical stability.

Example: Autonomous military AI systems across several countries are compromised by adversarial cyberattacks. False inputs trigger retaliatory strikes without human review, hitting both military targets and critical civilian infrastructure. The resulting cascade paralyzes global financial systems, collapses energy grids, and causes mass casualties. Trust in international coordination breaks down, making recovery slow and fraught.

Figure 1. An escalation pathway illustrating how an initial hazard can develop into an incident, escalate into an emergency, and ultimately culminate in a crisis if left unmanaged.
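
To complement Figure 1, the sketch below (in Python) expresses the four stages and the criteria described above as a simple data structure. The attribute names and the classification logic are illustrative assumptions introduced here for clarity, not part of any established taxonomy or standard.

```python
from dataclasses import dataclass
from enum import IntEnum


class Stage(IntEnum):
    """The four escalating stages, ordered by severity."""
    HAZARD = 1     # potential for harm, none yet realized
    INCIDENT = 2   # harm occurred, but contained and recoverable
    EMERGENCY = 3  # fast-moving harm that overwhelms routine responses
    CRISIS = 4     # systemic, cross-border, long-lasting disruption


@dataclass
class Assessment:
    """Illustrative attributes an analyst might record for an AI-related event."""
    harm_occurred: bool       # has any harm actually materialized?
    contained: bool           # is the harm limited in scope and recoverable?
    overwhelms_routine: bool  # does it outpace routine response mechanisms?
    systemic: bool            # is it long-lasting and spreading across borders or sectors?


def classify(a: Assessment) -> Stage:
    """Map an assessment onto the escalation pathway, testing the most severe criteria first."""
    if a.systemic:
        return Stage.CRISIS
    if a.overwhelms_routine:
        return Stage.EMERGENCY
    if a.harm_occurred:
        return Stage.INCIDENT if a.contained else Stage.EMERGENCY
    return Stage.HAZARD


# Example: the Stage 2 scenario above (a contained, short-lived disruption).
print(classify(Assessment(harm_occurred=True, contained=True,
                          overwhelms_routine=False, systemic=False)))  # Stage.INCIDENT
```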

Why Should We Prepare for an AI Crisis?

We define AI crisis preparedness as the ability to stop AI hazards, incidents, and emergencies from becoming a full-blown crisis. It also means limiting the impact of such crises if they do occur.

This includes steps across the escalation pathway. It covers early detection of AI hazards and incidents; triggering coordination and rapid responses across jurisdictions and sectors; minimizing harm; and supporting recovery, accountability, and long-term resilience. These measures may include legal frameworks, policy coordination, institutional arrangements, technical infrastructure, operational protocols, and strategic communication systems.

While prevention is generally better than cure, preparedness assumes that prevention can fail. It builds the resilience needed for when shocks happen. It combines proactive anticipation (for instance, through forecasting and monitoring) with reactive capacity, akin to public health systems preparing for pandemics through both surveillance and surge capacity.

The imperative for AI crisis preparedness is no longer theoretical. Several converging trends, from technical to geopolitical and institutional, are rapidly increasing the likelihood of AI harms that could have significant implications for national security and economic stability across the world. That’s why international governance is key.

  1. Speed, scale, and “agency”: AI systems can operate at speeds that exceed human response times. Agentic AI, specifically, has the ability to act with limited or no real-time human intervention, increasing the potential for rapid, uncontrolled escalation of harm. This means AI-driven failures or attacks could rise to crisis levels far faster than previously experienced in other domains, demanding new approaches to detection and response.
  2. Growing evidence base of AI incidents: Current preventative measures are failing to stop AI incidents from happening, whether related to algorithmic bias, safety and security failures, malicious use, or other issues. And the growing number of warnings from leading AI scientists, frontier lab whistleblowers, industry figures, and national security experts about severe risks from future AI systems highlights the need for forward-looking preparedness strategies at national and international levels.
  3. Integration into critical infrastructure: AI is increasingly being integrated into critical national and international infrastructure: energy grids, financial markets, telecommunications, healthcare systems, and defense networks. As these infrastructures become more reliant on AI, they will be more vulnerable to systemic failures. A single flaw or outage could cascade across sectors and borders, making AI both a force multiplier for efficiency and a potential vector for collapse.
  4. Market concentration: High market concentration exists among the few providers developing frontier AI models and essential cloud or hardware infrastructure. This dependency creates potent single points of failure, which means that flaws or outages in one dominant system could trigger simultaneous disruptions across many sectors. The July 2024 CrowdStrike outage, in which a single faulty update disrupted systems worldwide, is an example of what could happen.
  5. Heightened international instability: We live in an era of overlapping crises—climate volatility, geopolitical conflict and rivalry, declining trust in institutions, and fragile global supply chains. Incidents will not unfold in a vacuum; they will coincide with, intensify, and be exacerbated by other global shocks. In such a world, preparedness must account not only for AI-specific risks but also for their interaction with an already unstable global system.

How Can We Prepare for an AI Crisis? 

While AI presents heightened challenges, the broader task of planning for uncertain, high-impact incidents is not unprecedented. We can draw on dedicated practices related to disaster risk reduction and emergency response.

Frameworks like the Sendai Framework for Disaster Risk Reduction, though still undergoing real-world stress tests, offer useful models for AI crisis preparedness efforts. What makes the Sendai Framework notable is its emphasis on the mechanics of emergency response, including:

  • Clear escalation triggers; 
  • Public-private surge financing; 
  • Liability protections that encourage early disclosure;
  • Trusted ombudspeople who can communicate incidents without inciting panic; and
  • An equity lens that prioritizes the most vulnerable. 

The fields with mature crisis preparedness frameworks that can guide AI preparedness approaches and initiatives include:

  1. Cybersecurity preparedness: Regular drills and strong public-private partnerships have significantly strengthened cybersecurity resilience. Sector-wide simulations and threat-sharing protocols have taught us the value of tested response plans and information sharing before harm occurs. AI preparedness must similarly prioritize rehearsal, red-teaming, and real-time information exchange between stakeholders.
  2. Nuclear safety preparedness: The nuclear sector developed a mature safety regime built on hard lessons learned. International conventions now mandate early warning and mutual assistance, coordinated by the International Atomic Energy Agency (IAEA). Regular, large-scale international exercises rigorously simulate severe accidents, allowing nations to test global readiness and identify and fix weaknesses. This process highlights the need for international standards, transparency, and tested protocols.
  3. Pandemic preparedness: Pandemic surveillance and dedicated agencies have helped to improve readiness. International frameworks for early warning and coordination, such as the International Health Regulations, proved crucial in the case of the H1N1 pandemic by leveraging prior H5N1 preparation. A draft pandemic agreement to be submitted to the World Health Assembly in May 2025 intends to strengthen preparedness further, for example, through facilitating the transfer of pandemic-related technology and knowledge and mobilizing a skilled and multidisciplinary national and global health emergency workforce. This illustrates how institutional foresight, investment, and cooperation can close readiness gaps over time.

AI preparedness will look different from these fields, but it can draw on seven core elements that enable a strong international framework for response in times of crisis: legal frameworks and oversight, multilateral policy coordination, standardized incident reporting, monitoring and early warning, operational protocols, trusted communication systems, and recovery and accountability.

What Are the Key Elements of Global AI Crisis Preparedness?

Successfully navigating AI crises requires strong global governance approaches. This means:

  • Fostering international coordination through pre-agreed playbooks and Memoranda of Understanding that synchronize cross-border alerts and align response standards.
  • Investing significantly in real-time monitoring systems, simulation capabilities, red-team sandboxes, and well-tested response protocols.
  • Equitably building capacity worldwide by bridging North-South disparities so that all regions have the necessary technical infrastructure and institutional mechanisms to respond effectively.

A crucial step is establishing an international framework for AI crisis preparedness. Such a framework can be composed of several elements, including:

  • Legal oversight: There is a need for clear legal mechanisms that mandate continuous AI risk assessment, define escalation thresholds, and grant regulators the authority to restrict or recall systems posing imminent harm. The Chemical Weapons Convention and the World Health Organization (WHO) International Health Regulations signal that states are likely to accept legally binding inspection and verification regimes, including “stop buttons” for high-risk systems. Ensuring the legitimacy of such efforts requires trust, transparency, and third-party verification mechanisms that can also guarantee appropriate access for civil society organizations, academics, and researchers to relevant non-sensitive data and governance processes, thereby enabling independent oversight, diverse perspectives, and accountability.
  • Cross-border policy coordination: Just as the IAEA Convention on Early Notification of a Nuclear Accident mandates synchronization of global alerts within hours, AI preparedness could benefit from pre-negotiated playbooks and MoUs that unlock shared resources immediately after a crisis signal. Such coordination is crucial for ensuring that national authorities and international bodies can act in concert, leveraging collective capabilities and shared understanding to mitigate AI risks effectively through a coherent global response.
  • Standardized incident reporting: To build shared awareness and enable analysis, a robust global system is needed to consistently track AI incidents, near-misses, and emerging risks. Valuable initiatives like the OECD AI Incidents Monitor and the AI Incident Database provide important foundations for tracking. However, achieving comprehensive global coverage requires statutes that mandate incident reporting beyond publicly reported cases, as well as the adoption of internationally agreed-upon common classification standards and reporting formats to ensure data consistency and comparability.
  • Monitoring and early warning: To enable timely intervention before crises escalate, the framework must establish shared thresholds or ‘red lines’ that automatically trigger alerts (a minimal illustration follows this list). Aviation’s Annex 13 accident-reporting rules show how compulsory, harmonized formats accelerate learning loops; an AI equivalent can do the same. Building on practices from cybersecurity threat-intelligence sharing, clear protocols are needed for rapidly escalating these warnings between frontier AI companies, national authorities, and international bodies, leveraging real-time monitoring pipelines and red-team sandboxes, with escalation thresholds defined as part of legal frameworks.
  • Operational protocols: Effective action requires operational readiness. Regular simulations such as the IAEA INEX drills and NATO’s Locked Shields cyber war-game build muscle-memory long before disaster strikes. This involves developing standardized but adaptable response playbooks for various AI hazards and regularly testing coordination through joint international simulations. These protocols can include tested playbooks covering rapid model rollback, user notification, and emergency sandboxing, regularly drilled and updated for new hazard classes. 
  • Trusted communication systems: Maintaining trust and coordination during a crisis demands both secure official channels and transparent public messaging. The WHO’s multilingual risk-communication guidelines and Cold War nuclear “hotlines” remind us that these are two sides of the same coin. Protocols should ensure clear, multilingual updates that inform stakeholders, counter misinformation, and sustain public trust before, during, and after a crisis. Equally vital are pre-established secure channels, akin to diplomatic or industry security networks, for rapid information sharing among government, industry, and international actors, forming the technical backbone that delivers actionable data to decision-makers.
  • Recovery and accountability: Preparedness includes planning for the aftermath. The International Civil Aviation Organization’s independent accident investigations and the Organisation for the Prohibition of Chemical Weapons’ fact-finding missions show how mandatory post-mortems drive accountability and systemic learning. Drawing on investigation models from sectors like aviation safety, mandated assessments and independent audits after significant events are vital to understand failures, ensure accountability, and apply lessons learned going forward, with institutional mechanisms in place to oversee and audit every step from early warning to post-crisis review. Embedding dedicated AI crisis response and analysis expertise within relevant international institutions could also bolster global support and shared learning.
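
As a minimal illustration of the monitoring and early-warning element above, the sketch below shows how pre-agreed thresholds could automatically trigger alerts and route them to different recipients. The risk indicator, threshold values, and recipient lists are hypothetical placeholders introduced here; in practice they would be negotiated among frontier AI companies, national authorities, and international bodies.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical pre-agreed thresholds on a normalized 0-1 risk indicator.
# Real frameworks would define separate indicators per hazard class.
THRESHOLDS = [
    (0.9, "crisis",    ["international bodies", "national authorities", "provider"]),
    (0.7, "emergency", ["national authorities", "provider"]),
    (0.4, "incident",  ["provider", "incident registry"]),
]


@dataclass
class Alert:
    level: str
    recipients: List[str]
    indicator: float


def evaluate(indicator: float) -> Optional[Alert]:
    """Return an alert if the indicator crosses a pre-agreed threshold ('red line')."""
    for threshold, level, recipients in THRESHOLDS:  # checked from most to least severe
        if indicator >= threshold:
            return Alert(level=level, recipients=recipients, indicator=indicator)
    return None  # below all thresholds: keep monitoring, no escalation


# Example: a monitoring pipeline reports an indicator of 0.75.
alert = evaluate(0.75)
if alert:
    print(f"Escalate '{alert.level}' alert to: {', '.join(alert.recipients)}")
```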

Why Does The Future Society Care About Preparing for an AI Crisis?

The stakes have never been higher. Investment in resilience is essential and must be sustained, given the systemic risks that general-purpose AI systems pose and the rapid pace of innovation. By adapting lessons from nuclear safety, cybersecurity, and pandemic response, we can establish an international framework that identifies AI hazards early and prevents them from spiraling into a crisis.

But preparedness isn’t just about sidestepping future costs—it’s about confronting the hard truth that rapid AI advances have already put our global systems on shaky ground. Ignoring the warning signs today risks far greater economic, social, and security fallout tomorrow. Proactive measures now are the key to building a safer, more stable future for everyone.

Acknowledgments

Thanks to our colleagues, Caroline Jeanmaire, Kathrin Gardhouse, Sam Baskeyfield, Tereza Zoumpalova and Toni Lorente, for their contributions. We also appreciate the feedback and valuable perspectives provided by Julia Chen, Justin Bullock, Matthijs Maas, and Miles Brundage.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations
https://thefuturesociety.org/cso-ai-governance-priorities/ (2 April 2025)

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

As artificial intelligence becomes increasingly powerful, critical gaps remain in the governance needed to address urgent risks. The 2025 Paris AI Action Summit laid significant groundwork by launching initiatives such as the Current AI Foundation, AI workforce observatories, and the AI Energy Score Leaderboard. Yet, without concrete, enforceable governance mechanisms, AI risks will continue to escalate unchecked.

To identify the most urgent governance interventions for 2025, we conducted a prioritization exercise. Building on the Summit’s global consultation of 11,000+ citizens and 200+ experts, we co-organized a structured workshop on the margins of the Summit, where 61 participants mapped critical governance gaps. We then surveyed 44 civil society organizations—including research institutes, think tanks, and advocacy groups—to rank ten potential governance mechanisms. Though limited in scale, this sample offers an indication of emerging civil society consensus. 

The survey responses showed the strongest consensus around: 

  1. Establishing legally binding “red lines” for high-risk or uncontrollable AI systems, 
  2. Mandating independent third-party audits with verification of safety commitments, 
  3. Implementing crisis management frameworks, 
  4. Enacting robust whistleblower protections, and 
  5. Ensuring meaningful civil society participation in governance.

Our analysis organizes these priorities into a four-dimensional roadmap: enforcement mechanisms, risk management systems, inclusive governance structures, and beneficial applications. This roadmap provides actionable guidance for key governance initiatives including the upcoming India AI Summit, the OECD AI Policy Observatory, UNESCO’s AI Ethics initiative implementation, the UN Global Digital Compact, and regional frameworks such as the EU’s GPAI Code of Practice. It also offers civil society organizations a shared snapshot of priorities for advocacy and strategic coordination. 

By working together to implement these priority mechanisms, policymakers and civil society can create a foundation for making AI safer, more accountable, and more aligned with the public interest.

Background and Methodology

The survey and its results analysis build heavily upon insights gathered through a structured consultation process on AI governance priorities.

Foundation: AI Action Summit Consultation (October – December 2024)
The Future Society, Make.org, Sciences Po, AI and Society Institute, and CNNum conducted the global consultation for the AI Action Summit, gathering input from over 11,000 citizens and 200+ expert organizations across five continents. The consultation identified key areas of concern including AI risk thresholds, corporate accountability for commitments, and civil society inclusion.

Phase 1: Paris AI Action Summit Side Event (February 11, 2025)
Building on the global consultation findings, an official side event titled “Global AI Governance: Empowering Civil Society” was held during the Paris AI Action Summit. This event was co-organized by Renaissance Numérique, The Future Society, Wikimedia France, the Avaaz Foundation, Connected by Data, Digihumanism, and the European Center for Not-for-Profit Law. The results of the consultation were presented during the event, and 61 participants identified governance gaps and submitted concrete proposals to fill those gaps via an interactive polling platform. These submissions were then synthesized into 15 potential governance priorities, enabling real-time consolidation of diverse inputs.

Phase 2: Priority Ranking Survey (February 15-25, 2025)
Renaissance Numérique and The Future Society consolidated the 15 initial priorities into 10 for easier voting. The ranking survey of these priorities was distributed through multiple civil society networks and professional channels to ensure broad participation, with 44 organizations ultimately responding. While participation was anonymous, all respondents self-identified as representatives of civil society organizations, think tanks, or academic institutions with expertise in AI governance.

Scoring Methodology
The prioritization used a weighted ranking system where the highest-ranked items received 10 points, second-ranked received 9 points, and so on. Final scores represent average points across all organizations, ranging from 3.16 to 7.25, with higher scores indicating stronger consensus.
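
The sketch below illustrates how such a weighted ranking can be computed. The ballots and item names are invented for illustration only and are not the actual survey responses.

```python
from collections import defaultdict

# Each ballot ranks items from most to least important; the top-ranked item
# receives 10 points, the second 9 points, and so on. Items an organization
# did not rank simply receive no points from that ballot.
ballots = [
    ["red_lines", "audits", "crisis_frameworks", "whistleblowers"],
    ["audits", "red_lines", "whistleblowers", "crisis_frameworks"],
    ["red_lines", "crisis_frameworks", "audits", "whistleblowers"],
]

totals = defaultdict(float)
for ballot in ballots:
    for rank, item in enumerate(ballot):
        totals[item] += 10 - rank  # 10 points for rank 1, 9 for rank 2, ...

# Final score = average points across all responding organizations.
scores = {item: points / len(ballots) for item, points in totals.items()}

for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:.2f}")
```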

While the sample size is modest and cannot claim to represent all civil society perspectives, the structured methodology and diverse participation provide a valuable snapshot of where consensus may be emerging. The priorities identified should be understood as a starting point for further dialogue and research rather than as a definitive ranking.

Poll Results: Civil Society Organizations’ Governance Priorities 

The survey results revealed varying levels of consensus among civil society organizations regarding which governance mechanisms should be prioritized for immediate action. While all ten options were identified as valuable through the consultation process, their relative importance was ranked as follows:

Figure 1: Civil Society Organizations’ AI Governance Priorities by Consensus Score (Scale: 0-10). In our weighted ranking system, a score of 10 would indicate unanimous agreement that an item should be the top priority, while a score of 0 would indicate that no organization ranked that item at all. The actual range of scores (3.16 to 7.25) demonstrates that all items received meaningful support, with a clear stratification of priorities. The difference between consecutive scores indicates the relative strength of preference between items. You can find the original Sli.do results here.

  1. Establish legally binding “red lines” prohibiting certain high-risk or uncontrollable AI systems incompatible with human rights obligations. (7.25)
  2. Mandate systematic, independent third-party audits of general-purpose AI systems—covering bias, transparency, and accountability—and require rigorous verification of AI safety commitments with follow-up and penalties for noncompliance. (6.52)
  3. Establish crisis management frameworks and rapid-response security protocols to identify, mitigate, and contain urgent AI risks. (5.98)
  4. Enact robust whistleblower protections, shielding those who expose AI-related malpractices or safety violations from retaliation. (5.27)
  5. Ensure civil society, including Global South organizations, plays a strong role in AI governance via agenda-setting, expert groups, and oversight. (4.73)
  6. Support the global AI Safety and Security Institutes network, fostering shared research, auditing standards, and best practices across regions. (4.36)
  7. Publish periodic, independent global scientific reports on AI risks, real-world impacts, and emerging governance gaps. (4.21)
  8. Invest in large-scale “AI for Good” collaborations in sectors like health, climate, and education under transparent frameworks. (3.77)
  9. Incorporate sustainability, equity, and labor protections into AI governance structures. (3.57)
  10. Hold regular high-level AI Action Summits modeled on multilateral environmental conferences. (3.16)

This ranking should not be interpreted as suggesting that lower-ranked items are unimportant. In fact, all these governance mechanisms were already selected as priorities through the different consultation phases. Rather, it indicates where civil society organizations see the most urgent need for immediate action given limited policy resources and attention.

Analysis: A Framework for AI Governance in 2025

The survey results reveal a roadmap for 2025 with four interconnected dimensions. This framework moves beyond high-level principles to concrete implementation mechanisms.

Figure 2. A Roadmap for AI Governance in 2025

1. Enforcement Mechanisms 

Participating civil society organizations demonstrated overwhelming consensus on legally binding measures, with enforcement mechanisms receiving the highest support across all priorities:

  • Establish legally binding “red lines” prohibiting certain high-risk or uncontrollable AI systems incompatible with human rights obligations. (Highest ranked overall)
  • Mandate systematic, independent third-party audits of general-purpose AI systems—covering bias, transparency, and accountability—and require rigorous verification of AI safety commitments with follow-up and penalties for noncompliance. (Second highest consensus)

Professor Stuart Russell, President Pro Tem of the International Association for Safe and Ethical AI, emphasizes that effective red lines must cover “both unacceptable uses of AI by humans and unacceptable behaviors by AI systems.” For system behaviors, he argues that “the onus should be on developers to demonstrate with high confidence that their systems will never exhibit such behaviors.” This aligned with the international IDAIS-Beijing consensus statement from 2024 that identified specific examples, including prohibitions on autonomous replication, power-seeking behaviors, weapons of mass destruction development, harmful cyberattacks, and deception about capabilities. These concrete applications likely contributed to binding prohibitions emerging as the top priority in the civil society survey.  Professor Russell, like the other experts quoted throughout this article, provided expert perspective on the survey findings when solicited for comment.

Sarah Andrew, Legal and Campaign Director at Avaaz, said that “AI Governance developed without red lines to protect human rights and dignity would be a hollow sham,” adding that “the evidence of AI’s potential to harm humanity, at both system and general-purpose model levels, is mounting.” Avaaz has notably published real stories of people harmed by AI systems’ decisions.

The second priority of independent verification addresses a fundamental flaw in current approaches. David Harris, a Chancellor’s Public Scholar at UC Berkeley, warns of the “fundamental conflict of interest in allowing AI companies to evaluate their own products for compliance under laws like the EU AI Act.” He emphasizes that “we must not allow AI companies to grade their own homework,” arguing instead for external third-party auditors who “undergo a certification process to verify their capacity and neutrality.”

2. Risk Management 

Building on enforcement foundations, organizations showed strong consensus around complementary mechanisms to identify and address risks:

  • Establishing crisis management frameworks for rapid response to urgent AI risks
  • Protecting whistleblowers who expose AI-related violations
  • Supporting the global network of AI Safety/Security Institutes
  • Publishing periodic independent scientific reports on AI risks and impacts

These mechanisms work together: research identifies emerging risks, formal audits verify compliance, whistleblower protections help surface issues that formal processes miss, and crisis protocols enable swift response when needed.

Jess Whittlestone, Head of AI Policy at the Centre for Long-Term Resilience (CLTR), emphasizes the urgency of crisis preparedness: “Governments must increase focus on incident preparedness— the ability to effectively anticipate, plan for, contain and bounce back from incidents involving AI systems that threaten national security and public safety. This will require a range of measures, including updated incident response plans as well as ensuring there are appropriate legal powers to intervene in a crisis. If this does not become a priority now, we’re concerned that incidents could cause avoidable catastrophic harms within the next few years.”

The whistleblower protection priority reflects growing recognition of its crucial role in the AI governance ecosystem. Karl Koch, founder of the AI whistleblowing-focused non-profit Third Opinion, calls these protections “a uniquely efficient tool to rapidly uncover and prevent risks at their source.” He advocates for measures that “shield individuals who disclose potential critical risks” while noting that regional progress is insufficient: “we need global coverage through, at a minimum, national laws, especially at the U.S. federal level, in China, and in Israel.”

3. Inclusive Governance 

For both enforcement and risk management to function effectively across diverse deployment contexts, governance requires varied perspectives:

  • Strengthening civil society representation, particularly from the Global South
  • Holding regular high-level AI Summits with multi-stakeholder involvement

Zakariyau Yusuf, Director of the Tech Governance Project based in Nigeria, cautions against developing AI governance without Global South participation. Such frameworks, he warns, “risk overlooking socio-economic contexts or exploiting resources and data without equitable benefits.” This exclusion creates dangerous blind spots in regulation while simultaneously undermining trust in the resulting systems. When properly engaged, Yusuf explains, Global South stakeholders bring critical perspectives that ensure “unique challenges, identities, and perspectives from their nations are appropriately represented.” This inclusion is not merely about representation but about creating governance structures that preserve “autonomy, liberty, and a fair governance structure” for all affected communities.

Jhalak Kakkar, Executive Director of the Centre for Communication Governance at National Law University in Delhi, pinpoints a structural problem in AI development: “AI is often built in contexts disjointed from their deployment.” This geographical and resource disconnect creates a significant risk—AI systems developed in high-resource environments but deployed in lower-resource settings frequently produce unintended harms. This reality, she explains, makes Global South participation in governance essential, as these organizations understand the specific “societal, economic and cultural contexts” that determine AI’s real-world impacts. Their expertise is crucial for designing “effective oversight and enforcement to protect the most marginalized and vulnerable people.” 

The upcoming India AI Summit represents what Kakkar calls “a pivotal moment” to integrate inclusion throughout “the AI value chain and ecosystem,” embedding these principles in both global frameworks and local governance approaches.

4. Beneficial Applications 

Complementing risk-focused governance, organizations emphasized directing AI toward positive outcomes:

  • Investing in large-scale “AI for Good” collaborations in sectors like health and climate
  • Incorporating sustainability, equity, and labor protections into governance frameworks

These priorities address a key strategic challenge identified by B Cavello, Director of Emerging Technologies at Aspen Digital: “People today do not feel like AI works for them. While we work to minimize potential harms from AI use, we must also get concrete about prioritizing progress on the things that matter to the public.” This legitimacy gap could undermine support for governance itself if not addressed.

Cavello articulates a specific approach: “Public accountability demands that we develop meaningful measures of impact on important issues like standards of living and be transparent about how things are going. There are many positive applications of these tools, and those efforts must be prioritized as research goals, not merely byproducts.” This outcomes-oriented framing suggests beneficial applications should be governed through measurable impact metrics rather than aspirational principles.

Practical Applications

These priorities have direct implications for upcoming governance initiatives. 

India AI Summit (November 2025)

With the India Summit planned for November 2025, there is an urgent opportunity to translate these priorities into action. Based on these priorities, policymakers preparing for the Summit should consider:

  • Establishing technical working groups specifically focused on defining red lines and risk thresholds. 
  • Establishing accountability mechanisms to verify that safety commitments made at previous summits have been fulfilled, with transparent reporting on progress and gaps.
  • Developing standardized methodologies for independent verification that can work across jurisdictional boundaries.
  • Creating crisis response protocols with clear intervention thresholds and coordinated response mechanisms.
  • Implementing transparent civil society participation mechanisms that ensure equitable representation.

Global and Regional Implementation Frameworks

Beyond the India AI Summit, civil society AI governance priorities can directly support key regional and global governance processes entering critical implementation phases in 2025. Policymakers can enhance their effectiveness with the following concise recommendations:

  • European Union (GPAI Code of Practice): Mandate independent audits covering bias, transparency, and accountability, with clear verification processes and penalties; establish robust whistleblower protections and crisis response protocols.
  • OECD (AI Policy Observatory): Expand AI Incidents Monitor (AIM) for systematic risk tracking and rapid crisis response; regularly publish independent AI risk assessments.
  • UNESCO (AI Ethics Recommendation): Operationalize ethical assessments (EIA) to proactively prevent high-risk AI applications; enhance civil society engagement, particularly from the Global South.
  • UN Global Digital Compact: Establish regular global scientific AI risk assessments and ensure inclusive civil society participation, especially from the Global South.
  • AI Safety and Security Institutes Network: Facilitate international collaboration on verification standards, crisis response frameworks, and sharing auditing outcomes broadly.
  • NATO (AI Security and Governance): Implement NATO-aligned AI verification standards and robust whistleblower protections, and closely coordinate crisis protocols with EU and UN initiatives.

Conclusion: A Roadmap for Effective Governance

The priorities identified by civil society organizations provide a clear mandate for AI governance: binding prohibitions, independent verification, and crisis response frameworks. This four-dimensional roadmap—enforcement mechanisms, risk management systems, inclusive governance, and beneficial applications—offers concrete guidance for policymakers at both global and regional levels.

As AI capabilities advance rapidly, the window for establishing effective governance is narrowing. The upcoming India AI Summit and implementation of existing frameworks present critical opportunities to translate these priorities into action. Further research should expand civil society consultation to include more diverse voices, ensuring governance frameworks truly reflect global needs and perspectives. Replacing aspirational principles with concrete, enforceable mechanisms remains essential to ensuring AI development proceeds in ways that are safe, beneficial, and aligned with the public interest.


This analysis was produced by The Future Society and Renaissance Numérique, cross-published on both websites. 

Authors: Caroline Jeanmaire, Jessica Galissaire, Tereza Zoumpalova, and Niki Iliadis 

We acknowledge the valuable contributions from the 44 civil society organizations, along with over 11,000 citizens, 200 expert organizations, and 61 participants of the AI Action Summit’s official side event, whose collective insights have significantly informed this priority-setting initiative.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI
https://thefuturesociety.org/copthirddraft (12 March 2025)

The EU Code of Practice helps providers of General-Purpose AI (GPAI) models with systemic risks comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

From railroads to nuclear power, history has shown us again and again how breakthrough technologies serve society best when there’s a good regulatory framework in place. 

Some industries have learned this the hard way. Aircraft manufacturing and air travel, for example, have had to modify certain processes and incorporate key insights from major accidents, reflecting the central role that trust and safety play in the mass adoption and market development of safety-critical technologies.

Today, the world is grappling with the crucial question of how to govern increasingly capable AI. With the implementation of the EU AI Act, we have the opportunity, and in our opinion the obligation, to learn from the examples set by other industries. This particularly applies to the EU Code of Practice, designed to help providers of General-Purpose AI (GPAI) models with systemic risk comply with the EU AI Act.

A strong Code protects citizens and promotes innovation

The process to produce the GPAI Code of Practice has been a remarkable effort in and of itself, bringing together hundreds of experts from civil society, academia, and the tech industry as well as the wider private sector. 

A central question has been: Does protecting citizens come at the cost of European innovation? We find it’s quite the opposite. 

First, there seems to be a misconception about the scope of the Code. The majority of Commitments in the Code apply only to major technology companies that are producing the most powerful models, which commonly cost several hundred million dollars to develop. Second, a strong Code enables effective downstream adoption, which is the key lever for European AI innovation. This is because the Code introduces the kind of assurance, in terms of reliability, testing, and transparency, that downstream providers, start-ups, and in particular Small and Medium-sized Enterprises (SMEs) require of their suppliers to confidently build innovative applications on top of these models.

The release of the third draft of the Code this week marks the beginning of the final feedback round. Through our preliminary analysis of the text, we identify specific improvements in priority areas that are essential for the Code to realise the AI Act’s promise.

Revisions in five priority areas for a successful Code

1. External pre-deployment assessment (Measure II.11.1.)

The EU requires mandatory external assessment in almost all safety critical industries, including a plethora of everyday products, such as syringes and boilers. However, the current version of the Code exempts providers from external assessment if they have sufficient expertise and their model is similarly safe to another model. On the one hand, expertise does not guarantee independence, a key factor in assessing risks. On the other hand, we are concerned about the notion of “similarly safe”. After all, if a car manufacturer builds a cheaper version of a premium model that performs better on metrics such as mileage and emissions standards, should it be entirely exempted from third-party assessment? To us, it is clear that the Code should not allow for this type of exemption.

Proposed revisions: We propose a simplification of the Code, removing the exemptions for similarly safe models and instead requiring external assessment as a default. The only exemption should apply where no external assessor can be found, and the window for finding one should be extended from two to four weeks. Lastly, providers should be required to give preference to external assessors recognised by the AI Office over assessors they have evaluated themselves.

2. Pre-deployment notification (Measure II.14.4.)

The current draft takes a “deploy first, question later” approach. But effective enforcement of the Code will require constructive engagement between providers and the AI Office, and to this end, information-sharing is key. Unfortunately, the third draft only requires providers to share their Model Report, containing all risk-relevant assessments, by the time they deploy their model. The current practice of providers working on their models right until the day they deploy is the problem, not the solution. 

Proposed revisions: We suggest that Model Reports be shared at least eight weeks before a model is placed on the market. For open-weight or open-source models, this should extend to 12 weeks. The AI Office should be mandated to review model-specific adequacy assessments and share an opinion with the provider within four weeks of receiving them, to provide as much advance notice and certainty as possible about the likely compliance status of their models.

3. Whistleblower protections (Commitment II.13.)

From Suchir Balaji to Frances Haugen, whistleblowers have proven essential for uncovering malpractice and incentivising good behaviour by industry players. The third draft has removed more specific language on whistleblowing, introducing instead a high-level principle of non-retaliation and a vague reference to the relevant Directive (EU) 2019/1937 in a recital.

Proposed revisions: To ensure legal certainty, we suggest a full revision of the commitment, clarifying the concrete mechanisms that providers of GPAI models with systemic risk should implement to protect the individuals who are willing to speak up in the face of wrongdoing. By definition, a Directive cannot provide this level of legal certainty.

4. Serious incident response readiness (Measure II.6.3.)

Anticipating and preparing for major incidents is common practice in many safety-critical industries. When you are in a burning building, wouldn’t you want there to be emergency processes with clear responsibilities? We do, but the Code waters down corresponding measures considerably from the second draft. 

Proposed revisions: We suggest requiring providers to define emergency processes, identifying corrective measures for specific systemic risks, articulating response timelines, and documenting all measures in a corresponding framework to invite scrutiny from the EU AI Office. 

5. Risk taxonomy (Appendix 1.1.)

The systemic risk taxonomy is the foundation of the Code, articulating the types of risks that providers must assess and mitigate. The EU AI Act adopts an expansive understanding of AI risks, setting out to protect the health, safety, and fundamental rights of Europeans. In stark contrast, the third draft reduces the scope of risks covered, removing large-scale discrimination from the prioritised list of systemic risks.

Proposed revisions: We suggest that the Code honours the intent of the Act (Recital 110 and Article 3(65)) by prioritising large-scale discrimination, risks to fundamental rights, and risks from major accidents, among others. The EU AI Office should develop and manage the list of systemic risks, ensuring regular updates to reflect emergent risks and insights from serious incidents.

Why providers should sign on to the Code

Once the Code of Practice has been finalised, GPAI model providers will face a choice: do they sign on to the Code or will they demonstrate compliance with the AI Act in an alternative way? The decision should be easy. We see at least three compelling reasons why providers should adhere to the Code:

  1. The Code is the outcome of an unparalleled, inclusive process, convening leading experts from academia, civil society, and industry. Their work has spanned several months and hundreds of thousands of working hours. Signing the Code demonstrates a concern for the public interest rather than the sole pursuit of profit.
  2. The Code is strongly grounded in existing industry practice as well as international voluntary measures and norms co-shaped by industry. Thus, complying with the Code poses limited regulatory burden, especially when viewed in proportion to the vast costs involved in developing the relevant models.
  3. If a provider wants to deploy in Europe, compliance with the AI Act is required, as it is a binding European law. However, pursuing alternative means of compliance shifts the burden of proof to the GPAI model provider.

Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts?
https://thefuturesociety.org/aiactionsummitvspublicpriorities/ (28 February 2025)

Our official consultation for the AI Action Summit delivered a clear message: Harnessing AI's potential requires "constructive vigilance." We examine specific priorities identified during the consultation by citizens and experts, evaluating what the Summit successfully delivered and where gaps remain.

Ahead of the Paris AI Action Summit, we co-organized the official consultation of citizens and experts from around the world to inform Summit planning at a critical moment of rapid technological advancement. The submissions delivered a clear message: While there is broad citizen and expert support for harnessing AI’s potential, it must be pursued with “constructive vigilance.” As we reflect on the outcomes of the Summit, a key question remains: How well did the AI Action Summit answer this call?

The Summit’s Consultation Process

To inform the Summit’s agenda, we collaborated with several organizations, including the Tech and Global Affairs Innovation Hub at Sciences Po, the AI & Society Institute, the French Digital Council (CNNum), and Make.org. Together, we gathered insights through: 

  • A citizen consultation answering the question “What are your ideas for shaping AI to serve the public good?”, which received more than 570 proposals and 120,000 votes from 11,661 citizens around the world.
  • An expert consultation that identified 15 concrete deliverables for the Summit based on the recommendations of over 200 civil society and academic associations, think tanks, research groups, and individual experts. These contributors represented diverse perspectives across five continents, including participants from France, the United States, China, India, Brazil, South Africa, Kenya, the United Kingdom, Germany, Pakistan, Argentina, South Korea, Italy, and many other countries.

This was the first consultation of this scale ever done in partnership with an AI Summit, a groundbreaking effort. Findings and conclusions were shared with Summit Envoys and working groups in December 2024. The full report and a four-page summary are available online.

Overall, the Summit implemented 55% of the consultation’s recommendations.

Key Successes of the Summit

The Summit achieved several successes aligned with the priorities identified in the consultation:

  • Launch of the Current AI Foundation, a public interest partnership, with an initial endowment of €400 million and a €2.5 billion funding target.
  • Creation of AI and future of work observatories across 11 nations.
  • Presentation of the first International Scientific Report on the Safety of Advanced AI.
  • Development of a pilot AI Energy Score Leaderboard to track sustainability.

Notable Gaps in the Summit’s Outcomes

While progress was made in several areas, the Summit fell short on addressing some crucial priorities identified by citizens and experts:

  • No proposals on risk thresholds or system capabilities that could pose severe risks, as was committed to in Seoul. Out of the citizens who voted on the proposal of “setting risk thresholds beyond which AI development must pause” in the consultation, 68% agreed.
  • No corporate accountability mechanisms to track and monitor commitments made at previous Summits, although 77% of citizens voting on the proposal endorsed “ensuring AI giants honor commitments made at summits.”
  • No comprehensive education and reskilling programs.
  • No concrete mechanisms to ensure the meaningful inclusion of Civil Society Organizations at the next Summit.

“Constructive Vigilance”: Safeguards for the Public Interest

The strongest message from citizens in the consultation was the need for ‘constructive vigilance’—calling for innovation with safeguards. However, while the Summit heavily emphasized AI’s opportunities, it gave significantly less attention to concrete governance measures. 

Notably, the Summit dismissed systemic risks as “science fiction,” contradicting findings from the expert consultation and the International Scientific Report on the Safety of Advanced AI. Concrete oversight mechanisms were absent, even though 84% of the citizens who voted on this matter supported subjecting AI to human oversight to prevent it from acting autonomously.

Moreover, international support for the Summit's outcomes was incomplete: although the Paris Declaration attracted more signatories than the Bletchley Declaration, it was missing significant ones, most notably the United States and the United Kingdom.

Detailed Comparison: Consultation Priorities vs. Summit Outcomes

The following provides a comprehensive breakdown of how the Summit’s outcomes aligned with citizen and expert priorities. For each thematic area, we examine specific priorities identified during the consultation, evaluate what the Summit successfully delivered, and highlight remaining gaps. Overall, 5.5 of the ten consultation priorities were reflected in the Summit’s deliverables, while the remaining 4.5 were not.

Public Interest
Priorities from Consultation
  • 📃 Global Charter for Public Interest AI
  • 🗳 AI Commons Initiative to Empower Citizens in AI Design
Summit Successes
  • Public Interest: Significant emphasis on Public Interest AI throughout the Summit, pushing the ecosystem to place the public in the center of AI development and its governance.
  • Current AI: Launch of Current AI, a public interest AI partnership with an initial $400 million investment from the French government, AI Collaborative, other leading governments, philanthropic and industry partners, and plans to raise a total of €2.5 billion over the next five years. The focus areas of data, responsibility, and openness for trust and safety align with the consultation.
  • The Paris Charter on Artificial Intelligence in the Public Interest: A voluntary charter signed by ten countries that endorses public interest initiatives. It focuses on openness, accountability, participation, and transparency. Similar to the Charter proposed in the consultation, it emphasizes societal well-being, human rights, democratic participation, and equitable benefit distribution. The charter also stresses the importance of preventing both individual and collective harm.
  • Citizens Empowerment in AI Design: Many of the initiatives of Current AI mention the need for inclusive governance, representing a first step towards a concrete initiative.
  • AI & Democracy: A global call for an Alliance on AI and Democracy has been signed by numerous individuals and is now supported by 35 companies, NGOs, and research labs. The Alliance aims to support over 100 concrete initiatives to safeguard democracy in the face of AI-driven challenges.
Remaining Gaps
  • Few Signatures: With signatures from only ten countries, the Public Interest Charter lacks broad global consensus.
  • Commons: A formal AI Commons Initiative to involve citizens in designing AI systems was not launched. The role of citizen oversight has not been formalized yet.

Overall: 2/2 consultation priorities reflected

Innovation and Culture
Priorities from Consultation
  • 🔍 Auditable Model Efficiency Standards
  • 🌱 Global Green AI Leaderboard
Summit Successes
  • Audits: Audits should form a part of the priorities of the Current AI partnership, with particular attention paid to environmental impacts and the protection of children.
  • Data for Public Interest: Observatoire IA was launched by the French Ministry of the Economy and Finance to collect and disseminate data for uses in areas such as healthcare and the environment.
  • AI Energy Score: A new leaderboard sets out to compare over 200 models in terms of their energy performance, in line with the consultation’s Global Green AI Leaderboard recommendation. While some of the large current models are missing, this is an important first step.
Remaining Gaps
  • Efficiency Standards: No auditable model efficiency standards were presented.
  • Limitations of the AI Energy Score: The AI Energy Score leaderboard does not include the frontier models that likely account for most of the energy use of foundation models. It also focuses only on inference and does not compare the usefulness or capability of models.

Overall: 2/2 consultation priorities reflected

Future of Work
Priorities from Consultation
  • 📚 Global AI Education and Critical Thinking Initiative
  • 👷 International Task Force on AI and Labor Market Disruption
Summit Successes
  • Network of Observatories on AI’s Impact on Work: A voluntary platform to exchange knowledge and reinforce dialogue on the impact of AI on work was discussed. It includes 11 national institutions (Australia, Germany, Austria, Brazil, Canada, Chile, India, Italy, Japan, the UK, and France) and partners (Adecco, Capgemini Invent, Human Technology Foundation), coordinated by the ILO and OECD.
  • Pledge for a Trustworthy AI in the World of Work: A wide range of companies made commitments in several areas. These include promoting social dialogue, investing in human capital, and ensuring workplace safety and dignity. Companies also pledged to avoid discrimination, protect worker privacy, and promote productivity and inclusiveness throughout their value chains.
Remaining Gaps
  • Critical Thinking & Education: While citizens and experts emphasized the importance of teaching critical thinking regarding AI development, none of these initiatives reflect this priority.
  • Task Force: A fully resourced education/reskilling program (e.g., a Global AI Education Initiative) was not created.

Overall: 1/2 consultation priorities reflected

Global Governance
Priorities from Consultation
  • 🛡 International Scientific Panel on AI Risks
  • 🌍 Mechanisms for Civil Society Organizations and Global South inclusion
Summit Successes
  • Scientific Assessment: The first International Scientific Report on the Safety of Advanced AI was presented, though only briefly. Meetings held on the sidelines of the Summit addressed the report’s continuation, and it is set to continue.
  • Inclusion: Civil society and academia were consulted and invited more than at past Summits. International inclusion improved, with more stakeholders from the Global South attending, further supported by the wide variety of side events. Workshops were also held at the Summit itself with nearly 100 international partners.
  • Governance Cartography: A mapping of AI governance issues was created, highlighting the need for technical standards, policies, safety frameworks, and ethical guidelines.
Remaining Gaps
  • Follow-ups on Scientific Assessments: There was no commitment to create an equivalent of the Intergovernmental Panel on Climate Change for AI.
  • The Future of Civil Society and Global South Inclusion: No concrete mechanisms to include Civil Society Organizations and the Global South have been set forth for the next Summit.

Overall: 0.5/2 consultation priorities reflected

Trust in AI
Priorities from Consultation
  • 🚨 Common Thresholds for AI Oversight
  • 📈 AI Corporation Commitments Report Card
Summit Successes
  • Launch of ROOST (Robust Open Online Safety Tools): The project aims to build scalable and resilient safety infrastructure and tools for online safety. It is supported by large tech companies, open-source actors, universities, and philanthropists, with an initial investment of over $25 million and particular attention paid to the protection of children.
  • AI-Content Detection: The French government’s Digital Regulation Expertise Center (PEReN) and the Service for Vigilance and Protection Against Foreign Digital Interference (VIGINUM) developed an open-source AI-generated content detection tool. Additionally, an open-source tool called “D3lta” was published by VIGINUM and the OECD to identify large-scale text manipulation tactics, accompanied by a VIGINUM report on using AI to combat information manipulation.
  • Cybersecurity Analysis: The French agency ANSSI published an analysis of AI-related risks and opportunities in cybersecurity, prepared by 200 experts and signed by 14 countries.
  • New Safety Commitments: Some companies, such as Meta, Amazon, Microsoft, and xAI, made new frontier model safety commitments around the Summit.
Remaining Gaps
  • Common Thresholds: The Summit did not make any progress on defining red lines and risk thresholds, despite this being a key commitment from the Seoul Summit, nor did it propose a plan for achieving this at future Summits.
  • Reporting and Standards: No AI Corporation Commitments Report Card was created, even though such a mechanism could increase the chances of companies following through on their commitments. Long-term monitoring mechanisms also remain underdeveloped.
  • Advancements on Company Commitments: Some companies that had promised to make commitments, such as Mistral and Zhipu AI among others, did not deliver any public progress on them.

Overall: 0/2 consultation priorities reflected

Looking Forward

As India prepares to host the next Summit in November 2025, with Switzerland expressing interest in hosting the edition after that, both countries have the opportunity to build on the achievements of Paris while addressing key gaps. The Current AI partnership and monitoring initiatives provide valuable foundations for progress, but five critical areas remain underdeveloped: risk thresholds (despite the Seoul commitments), serious risk assessment, accountability mechanisms, inclusive participation, and broader international consensus.

India brings unique strengths to this challenge—democratic practice, digital infrastructure expertise, G20 experience, and Global South leadership. Success will depend on achieving the “constructive vigilance” that citizens and experts worldwide have prioritized: setting safeguards that promote purposeful innovation in service of the public interest.

The post Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts?  appeared first on The Future Society.

International Coordination for Accountability in AI Governance: Key Takeaways from the Sixth Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law https://thefuturesociety.org/international-coordination-for-accountability Fri, 07 Feb 2025 02:54:54 +0000 https://thefuturesociety.org/?p=9524 We share highlights from the Sixth Edition of the Athens Roundtable on AI and the Rule of Law on December 9, 2024, including 15 strategic recommendations that emerged from the day's discussions.

The post International Coordination for Accountability in AI Governance: Key Takeaways from the Sixth Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law appeared first on The Future Society.


How do we tackle one of the most critical challenges of our time: ensuring AI is governed in alignment with the rule of law? As AI deployment accelerates, so does the urgency of enforceable governance. Robust accountability mechanisms are needed to ensure AI governance can actually work. How do we do this?

It begins with dialogue among diverse stakeholders. For our Sixth Edition of the Athens Roundtable on AI and the Rule of Law on December 9, 2024, we brought together 135 leaders from government, civil society, academia, and the tech industry in person, along with 830 online participants from 108 countries, to chart a way forward towards greater accountability. This year’s roundtable welcomed leadership from the OECD, United Nations, UNESCO, the European AI Office, and the Network of AI Safety Institutes.

Prominent policymakers and thought leaders in attendance included the Hellenic Digital Minister Dimitris Papastergiou, Ambassador of Rwanda to France François Nkulikiyimfura, Tanzania Member of Parliament Neema Lugangira, Members of the European Parliament Axel Voss and Brando Benifei, Special Envoy of the French President to the AI Action Summit Anne Bouverot, European AI Office Director Lucilla Sioli, U.S. AISI Director Elizabeth Kelly, UN AI Advisory Board Co-Chair Carme Artigas, Anthropic Co-Founder and Head of Policy Jack Clark, The Leadership Conference on Civil and Human Rights President and CEO Maya Wiley, and UC Berkeley Professor Stuart Russell, among the many esteemed participants.

Our report International Coordination for Accountability in AI Governance highlights key takeaways from the Roundtable, which was held at the OECD Headquarters in Paris. It captures ideas, recommendations, and quotes from the day of thoughtful discussions, which covered assessing progress in AI regulation, identifying gaps, crafting solutions, and setting priorities for the AI Action Summit in February 2025.

One key takeaway: Building effective AI governance is an ongoing endeavor—one that demands international coordination, with legal, policy, and normative clarity and decisive action now.

Recommendations for Key Stakeholders

Building on discussions during The Athens Roundtable, our report presents 15 strategic recommendations for strengthening international coordination and accountability in AI governance. Read the full report for more context.

Governments

  • Harmonize international AI standards through multilateral agreements, securing high-level political commitment to shared governance principles.
  • Move from voluntary industry pledges to binding legal frameworks, with clear mandates for oversight and safety assessments.
  • Require rigorous independent evaluations and third-party audits to verify AI compliance with regulatory benchmarks.

Intergovernmental Organizations

  • Establish globally recognized standards through coordinated efforts with AI Safety Institutes and regulatory bodies.
  • Develop binding international frameworks that enforce compliance across jurisdictions and prevent regulatory fragmentation.
  • Institutionalize joint oversight mechanisms, such as an international incident reporting system, to manage cross-border AI risks effectively.

Industry

  • Enhance transparency through standardized reporting on risk assessments, safety measures, and compliance with ethical AI principles.
  • Implement lifecycle accountability, ensuring AI models are monitored for unintended consequences post-deployment.
  • Align development with responsible AI commitments, integrating ethical guardrails and robust safety mechanisms.

Civil Society

  • Advocate for inclusive AI governance, ensuring underrepresented voices are included in global discussions and policy decisions.
  • Monitor AI impacts through independent research and accountability frameworks to track corporate and governmental adherence to ethical AI standards.
  • Push for adaptive global benchmarks that continuously assess AI risks and societal implications.

Individuals

  • Support ethical AI development through informed choices, including ethical consumer behavior and advocacy for responsible AI use.
  • Demand greater transparency and governance accountability from AI developers and regulators.
  • Participate in public consultations to influence AI policies and oversight mechanisms.

Looking Ahead

As the global AI landscape evolves, the key takeaways from the Sixth Edition of the Athens Roundtable can help shape governance priorities in 2025 and beyond. With discussions centered on strengthening accountability in AI governance, the event underscored the necessity of international coordination and enforceable oversight mechanisms.

The upcoming AI Action Summit in Paris can build on these insights, aiming to advance concrete steps toward a globally coordinated accountability framework. Next steps should include formalizing commitments into binding agreements, enhancing AI oversight institutions, and ensuring governance mechanisms reflect a diverse array of stakeholder perspectives.

The conversation on AI accountability does not end here. To maintain momentum, stakeholders must push for the integration of robust accountability measures throughout the AI lifecycle. The Sixth Edition reaffirmed that AI governance is a continuous process—one that demands sustained collaboration, institutional innovation, and decisive leadership to navigate the complex challenges ahead.

The post International Coordination for Accountability in AI Governance: Key Takeaways from the Sixth Edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law appeared first on The Future Society.

Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events https://thefuturesociety.org/towards-progress-in-paris-our-engagement-with-the-ai-action-summit-and-side-events/ Thu, 06 Feb 2025 20:01:29 +0000 https://thefuturesociety.org/?p=9508 The Future Society is honored to be participating in the AI Action Summit and side events in Paris. Learn about how we've been engaging so far and what we'll be participating in during this momentous week for AI governance.

The post Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events appeared first on The Future Society.

The AI Action Summit hosted by France on February 10 and 11 could not be happening at a more critical time. Recent breakthroughs in artificial intelligence technology underscore what many of us have long been saying: AI is moving fast, and greater international coordination on AI governance is urgently needed. The opportunities and challenges that AI presents are global, so the response must also be global.

Bringing together more than 80 heads of state and hundreds of global leaders, the Summit is a window for action that can determine the course of history. It is a moment of scientific progress, given the launch of the International AI Safety Report and the integration of France’s AI Safety Institute (Institut national pour l’évaluation et la sécurité de l’intelligence artificielle) into the International Network of AI Safety Institutes. It is also a moment of political progress, with the possibility of a €2.5 billion Foundation for AI Commons and the Summit serving as one of the few venues for U.S.-China dialogue on AI at a time of geopolitical uncertainty. And it marks a promising step towards inclusive governance, with civil society making up about a third of attendees, which can help ensure that AI policies are sustainable and grounded in the realities of communities worldwide.

As an organization that has worked on AI governance for more than a decade, The Future Society has witnessed how previous AI Summits at Bletchley Park and in Seoul have helped to move the needle on AI governance. They created a foundation for constructive international cooperation and launched vital efforts and institutions such as the AI Safety Institutes.

Now, France is sparking new momentum with this Summit, including through a groundbreaking effort to engage experts and the broader public via the consultation we co-organized. More than 11,000 citizens and over 200 expert organizations shared their hopes for the Summit. Findings were presented at our Sixth Edition of The Athens Roundtable on AI and the Rule of Law, an official Road to the AI Action Summit event held at the OECD Headquarters in Paris this past December. During the Roundtable, Summit Envoys and participants discussed the role of the AI Action Summit in advancing a more accountable AI governance ecosystem. Two months later, it’s time to make this a reality.

We are looking forward to this next week in Paris, where we will be represented at the Summit by our Executive Director Nicolas Moës. Our Senior Associate Caroline Jeanmaire and Associate Tereza Zoumpalova will also join a number of events organized on the Summit’s margins, including the side event we are co-hosting on February 11 to explore and strengthen the role of civil society organizations in global AI dialogues. Read on for more details.

Follow our social media accounts on LinkedIn and X for updates, and get in touch if you want to connect in Paris. 

Where We’ll Be

These are only some of the many interesting events that we plan to attend. The full list of 100+ Summit side events is available here.

The post Towards Progress in Paris: Our Engagement with the AI Action Summit and Side Events appeared first on The Future Society.

Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director https://thefuturesociety.org/happy-new-year-2025/ Tue, 31 Dec 2024 12:15:00 +0000 https://thefuturesociety.org/?p=9470 A look at our work in 2024 and The Future Society's ten year anniversary, as well as what's ahead for AI governance in 2025.

The post Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director appeared first on The Future Society.

Dear Friends and Partners,

As 2024 comes to a close, I want to take a moment to thank you for being part of our community. The mission of aligning AI through better governance is a significant undertaking, requiring engagement with a broad and diverse ecosystem including government, industry, civil society, academia, and individual citizens. Our ability to drive impact depends on your collaboration and support.

There were many ways that you helped us realize our mission this past year. In particular, I want to recognize everyone who joined us in person and online for The Sixth Edition of The Athens Roundtable, all who contributed to our consultation for France’s AI Action Summit, and those who have been working alongside us throughout the European AI Act Code of Practice process. These are just some of the ways that we are collectively strengthening AI governance and holding industry players accountable through more inclusive processes.

I have been honored to lead this unique organization for a year now, having been a member of the team since 2018. I am deeply grateful to all who have partnered with us on the journey so far.

This year marked ten years since the founding of The Future Society in 2014. Our organization has been working on how to govern AI technologies since before concerns about their consequences became mainstream. I can’t help but reflect on what AI governance would look like today if The Future Society had never existed. There remains a lot of work, of course, especially given the rapid technological acceleration of the past three years. But there is much to celebrate from The Future Society’s first decade, and I invite you to watch this short film with ten examples of our achievements over these ten years.

While looking back at our impact, it’s important to be clear-eyed about our plan for 2025. This year will begin with France’s AI Action Summit, the first such convening to invite public participation through the consultation we co-led. Combined with the finalization of the EU Code of Practice, Paris and Brussels are taking a more active role in shaping the governance of general-purpose AI. We are also hopeful about the prospects of the AI Safety Institute Network improving international cooperation toward secure, safe, and trustworthy AI. We are excited to be scaling up The Future Society’s work in the United States, as we set up our U.S. AI Governance theme under the leadership of our newest director. Across our work, The Future Society is committed to ensuring leaders can benefit from the insights and expertise of civil society.

The coming years will be decisive for establishing and defending meaningful rules for general-purpose AI. During these pivotal times, we are doubling down on research, convening, and advisory efforts to match the prominence that AI governance has taken on the agendas of world leaders. To do this, we will continue to rely on contributions from those who share our view that AI governance is one of the most pressing challenges of our time.

We hope you will consider helping to sustain and expand our work. I thank our donors for providing the financial support that makes our nonprofit work possible.

Finally, I’d like to express my deepest appreciation for the sagacity of our Board of Directors and for the efforts of our team. On a working retreat earlier this year, I asked my colleagues why they care about AI governance. Each of their responses reflected a genuine commitment to serving the public interest and reminded me of how privileged I am to work with such exceptional people, whose good intentions, brilliance, and dedication inspire me every day.

On behalf of everyone on our team, I wish you a fulfilling 2025 and I look forward to continuing our work in the new year. Together, we can make progress towards greater accountability in AI and steer towards a more positive future for all of us.

Warmest regards,

Nicolas Moës
Executive Director, The Future Society

The Future Society team at The Sixth Edition of The Athens Roundtable on December 9, 2024, at the OECD Headquarters in Paris, France

The post Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director appeared first on The Future Society.

Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit https://thefuturesociety.org/aiactionsummitconsultationreport/ Sun, 08 Dec 2024 22:40:41 +0000 https://thefuturesociety.org/?p=9447 Our comprehensive consultation of 10,000 citizens and 200+ experts worldwide reveals five priority areas for AI governance, with concrete deliverables for the 2025 AI Action Summit. The results demonstrate strong public support for robust AI regulation and a vigilant approach to AI development, emphasizing the need for inclusive, multistakeholder governance.

The post Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit appeared first on The Future Society.

The Future Society, alongside the AI & Society Institute (ENS-PSL), Sciences Po’s Tech & Global Affairs Innovation Hub, the French Digital Council (CNNum), and Make.org, has concluded a landmark consultation process that gathered unprecedented insights from more than 10,000 citizens and 200 experts worldwide, in partnership with France’s 2025 AI Action Summit.

This consultation marks the first time a global AI summit’s preparation has included such extensive stakeholder engagement, setting a new standard for inclusive policy development in technological governance.

The Future Society’s Expert Consultation

The Future Society led the expert consultation component, engaging with over 200 leading voices from academia, nonprofits, unions and scientific organizations across five continents. Through structured dialogue and in-depth analysis, we gathered detailed insights on implementation protocols, stakeholder coordination mechanisms, and transition strategies. This expert input was crucial in developing concrete, actionable recommendations that bridge theoretical frameworks with practical implementation challenges.

Five Priority Areas for Action

The consultation identified five key areas requiring immediate attention at the Summit:

  1. Global Multistakeholder AI Governance: Participants called for the establishment of an International Scientific Cooperation on AI Risks and new mechanisms to ensure meaningful participation from Civil Society Organizations and Global South stakeholders.
  2. Shared Standards for Safe AI: The consultation revealed strong support for concrete accountability measures, including an AI Corporation Commitments Report Card and clear thresholds for oversight.
  3. Universal AI Education: Stakeholders emphasized the need for a Global AI Education and Critical Thinking Initiative, complemented by an International Task Force to address labor market disruption.
  4. Environmental Sustainability: Participants supported the creation of a “GreenAI Leaderboard” and mandatory efficiency standards, demonstrating strong environmental consciousness in AI development.
  5. Public Interest Solutions: The consultation highlighted specific sectors where AI applications received broad support, particularly in healthcare, climate action, and combating disinformation.

A Pragmatic Path Forward

The results reveal a mature, nuanced approach to AI adoption. Rather than advocating for widespread deployment, participants favor targeted implementation in areas where AI’s value is clearly demonstrated and risks can be effectively managed.

The findings are being presented at The Sixth Edition of The Athens Roundtable at the OECD headquarters in Paris, where leaders from government, industry, academia, and civil society are engaging in dialogue on international coordination for AI governance.

This consultation represents just the beginning of an ongoing engagement process. Its success demonstrates that inclusive, decentralized approaches can effectively bridge diverse stakeholder interests and create a foundation for responsible AI governance that serves the greater good.

The post Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit appeared first on The Future Society.

The Future Society Newsletter: November 2024 https://thefuturesociety.org/newsletter-november-2024/ Fri, 22 Nov 2024 17:28:10 +0000 https://thefuturesociety.org/?p=9435 In This Newsletter: Preview of The Athens Roundtable on Artificial Intelligence and the Rule of Law on December 9. Interim findings from our expert consultation for France’s 2025 AI Action Summit. Our report on AI Safety Institutes’ collaboration. OpEd on the GPAI Code of Practice for the EU AI Act.

The post The Future Society Newsletter: November 2024 appeared first on The Future Society.

Want to receive updates about the field of AI governance and The Future Society’s efforts? Sign up for our newsletter.

Join Us Online: Register for The Sixth Edition of the Athens Roundtable

Our annual flagship event, The Athens Roundtable, is fast approaching on December 9, 2024. This year’s focus is on enhancing international coordination for accountability in AI governance. See our agenda.

Sessions will focus on mapping the emerging landscape of AI accountability and governance mechanisms, identifying key gaps and opportunities for international coordination, presenting takeaways from the global consultation for France’s AI Action Summit, and envisioning the role of civil society in global AI governance initiatives. Read more in a blog post on The OECD’s The AI Wonk.

Lightning talks will kick off each session, with speakers who include Co-Chair of the UN’s AI Advisory Board Carme Artigas, the OECD’s Deputy Secretary-General Ulrik Vestergaard Knudsen, Hellenic Minister of Digital Governance Dimitris Papastergiou, European AI Office Director Lucilla Sioli, France’s AI Action Summit Secretary General Chloé Goupille, U.S. AI Safety Institute Executive Director Elizabeth Kelly, Rwanda’s Minister of ICT and Innovation H.E. Paula Ingabire, Anthropic Co-Founder and Head of Policy Jack Clark, Tsinghua University Professor Dr. Lan Xue, and UC Berkeley Professor Stuart Russell. See our participants list.

We invite our global community to join virtually via an interactive livestream of sessions from 10 a.m. to 6 p.m. CET.

This edition is organized by The Future Society in partnership with the OECD, UNESCO, France’s AI Action Summit, the AI & Society Institute, the French Digital Council (CNNum), Make·org, and the Tech and Global Affairs Innovation Hub of Sciences Po, with support from Arnold & Porter and Fathom and under the patronage of H.E. the President of the Hellenic Republic Ms. Katerina Sakellaropoulou.


Interim Report on Our Consultation for France’s AI Action Summit

With France’s AI Action Summit in February 2025 set to be a significant milestone for AI governance, we are honored to have coordinated the public consultation for experts from civil society and academia to share their visions and concrete proposals for the Summit.

We published initial findings from 84 global experts who contributed to the first part of our open consultation for France’s 2025 AI Action Summit. Delivering Solutions: An Interim Report of Expert Insights for France’s 2025 AI Action Summit revealed four key shifts needed in AI governance and concrete recommendations for action.

We will share a full report of all 231 contributions to the consultation at The Athens Roundtable on December 9. This consultation is part of a wider effort to involve civil society and citizens in planning for the Summit, together with the Tech and Global Affairs Innovation Hub of Sciences Po, the AI & Society Institute, the French Digital Council (Conseil National du Numérique, CNNum), and Make·org.

New Publication: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

As members of the International Network of AI Safety Institutes meet this week in San Francisco, we published key insights on how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Our report, Weaving a Safety Net, provides an analysis of the current state of collaboration between AISIs and other key multilateral institutions, and proposes concrete models and strategic directions for expanding its impact in addressing global AI challenges.

A companion blog post on the OECD’s The AI Wonk shares a succinct overview of the report’s key themes and recommendations.

Recently Published: Why does the GPAI Code of Practice matter to Europe and to the rest of the world – a boon for responsible AI innovation?

“As a major piece of legislation on AI with global significance, the EU AI Act is charting a new path forward for AI governance and innovation. The GPAI Code of Practice can showcase how intention becomes action.”

At the beginning of November, in an OpEd for Encompass, our Executive Director Nicolas Moës wrote about the Code of Practice for governing general-purpose AI through the groundbreaking EU AI Act. The first draft of the Code has since been published, which we hope will ultimately trigger a “Brussels Effect” for better AI innovation in the EU and beyond.

We are hiring

Senior Associate, AI Governance: We are seeking a driven and experienced professional with 6+ years of relevant experience in developing, advocating for, and/or implementing international AI policy and governance mechanisms. This role will monitor and assess the global AI governance landscape, design and lead high-impact AI policy recommendations, and engage a diverse network of stakeholders to champion these initiatives. Deadline: December 4, 2024

Our Recent and Upcoming Activities

Over the last two months, The Future Society Team participated in the following AI governance events:

In the next month, we will take part in:

Let us know if you’ll be there too!


Support our work

The Future Society is a nonprofit 501(c)(3) organization. Philanthropy is what makes our work possible.

As we approach Giving Tuesday, we hope you will consider supporting our mission of aligning AI through better governance.

The post The Future Society Newsletter: November 2024 appeared first on The Future Society.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration https://thefuturesociety.org/aisi-report/ Fri, 15 Nov 2024 13:07:48 +0000 https://thefuturesociety.org/?p=9402 Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

The post Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration appeared first on The Future Society.



As the capabilities of artificial intelligence increase and its impact on society continues to grow, governments worldwide are establishing mechanisms to manage the risks and harness the benefits of these transformative technologies. Among the most prominent of these initiatives is the creation of AI Safety Institutes (AISIs)—specialized, government-funded entities tasked with overseeing the safe development of AI systems, especially those on the frontier of technological advancement.

A significant step toward multilateral coordination was achieved with the launch of the International Network of AI Safety Institutes (AISI Network) in May 2024. The Network brings together members from diverse regions, including Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. Its inaugural formal gathering, scheduled for November 20-21, 2024, in San Francisco, aims to lay the groundwork for “advancing global collaboration and knowledge-sharing on AI safety.”

Our report, Weaving a Safety Net, examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Section 1 introduces AISIs, providing a high-level overview of their purpose and core functions. Section 2 examines the benefits and challenges of collaboration among AISIs, while Section 3 outlines the current state of that collaboration. Building on this foundation, Section 4 considers potential structures for the AISI Network to enable enhanced coordination. Finally, Section 5 explores how AISIs can engage with and complement other prominent actors within the global AI governance ecosystem.

Together, these sections analyze the current state of collaboration between AISIs and other key institutions and propose strategic directions to deepen these collaborative efforts inside and outside of the Network, enhancing its impact in addressing global AI challenges.

A network diagram of publicly announced bilateral collaborations between AISIs as well as with major AI companies. Line color indicates the primary area of collaboration, as inferred from the most emphasized topic in press releases about each partnership. Line size represents the scope of collaboration, with broader lines denoting a greater number of distinct areas of collaboration. A comprehensive source list for each connection in the diagram is available in the appendix of the report.

The post Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration appeared first on The Future Society.
