
Main Insight

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

May 26, 2025

We are in an era increasingly shaped by artificial intelligence (AI). Since the public launch of ChatGPT in late 2022 and subsequent breakthroughs, the world has become more aware of AI’s capabilities and its transformative potential. It brings the opportunity for immense benefits, such as boosting productivity, preventing disease, and mitigating climate change. But alongside these benefits are profound and novel risks to humanity, including to fundamental rights, public safety, democratic processes, and both national and global security.

The world is underprepared for how quickly AI hazards could spiral into crises, potentially outpacing our ability to respond. The concept of preparedness has long been a driving factor for disaster risk reduction, emergency response, and accident prevention, with specific applications in safety-critical areas like nuclear energy, cybersecurity, and public health.

Yet we currently lack a global framework for handling a potential AI crisis, including anticipating, detecting, assessing, containing, responding to, and recovering from one. Without coordinated global capacity and action, we risk facing systemic AI disruptions with fragmented and insufficient responses.

This explainer covers what AI crisis preparedness means, why it is urgently needed to minimize avoidable harms, and how the international community can take actions designed to set us all up for a safer and more secure future. 

What Is an AI Crisis?

To support proactive and coordinated planning, we outline an escalation pathway that shows how serious harms—caused or intensified by the design, deployment, misuse, or failure of AI—can evolve from early-stage hazards to full-scale crises.

This pathway is not a predictive model, but rather a working tool to support anticipatory thinking and response planning. It maps four escalating stages—hazard, incident, emergency, and crisis. 

Each stage reflects increasing severity, scale, and urgency, drawing on terminology used by multilateral institutions like the OECD and established in academic literature. For each stage, we give illustrative examples drawn from global security—such as military AI, state-level cybersecurity, and critical infrastructure protection—to show what escalating harm can look like in practice.

Stage 1: AI Hazard

AI Hazard: An event, circumstance, or built-in system feature where AI development or use could lead to harm. It signals potential risk but has not yet caused harm.

Example: Advanced lethal autonomous weapons are being used by both state and non‑state actors without binding international rules, clear accountability, rigorous testing, or guarantees of meaningful human control. These systems can operate faster than humans can respond and may misinterpret signals or behave unpredictably. Even a small glitch or mistaken identification could trigger a rapid chain of events (for instance, a drone falsely identifying a threat and prompting others to attack), leading to unintended military escalation and global security consequences. In these cases, while no harm has yet occurred, the situation is volatile: monitoring and mitigation are essential to prevent it from becoming an incident.

Stage 2: AI Incident

AI Incident: An occurrence when AI development or use directly or indirectly causes harm. This harm may include injury, critical infrastructure disruption, rights violations, or environmental damage. An incident is typically limited in scope, contained without widespread spillover, and recoverable.

Example: An AI-enabled cyber defense system mistakenly flags a routine software update from an allied country as a hostile intrusion. Its built-in countermeasure temporarily disrupts a non-essential communication link used by the allied military. The problem is identified quickly and resolved through standard diplomatic and technical processes. The harm that took place was localized and short-lived.

Stage 3: AI Emergency

AI Emergency: A fast-moving, urgent threat related to AI. It often affects national security, with a scope or speed that can overwhelm routine response mechanisms.

Example: A sophisticated, multi-pronged, AI-driven cyberattack simultaneously targets and disrupts core operational systems across multiple critical infrastructure sectors within a nation, including significant portions of the national power grid, critical communication networks, and key logistical hubs. The attacks evolve dynamically, bypassing conventional defenses. Essential services falter, and panic begins to spread. Traditional response systems are too slow, requiring immediate national or even international intervention.

Stage 4: AI Crisis

AI Crisis: The most severe stage. A deep, systemic, and long-lasting AI-driven disruption that causes large-scale harm, often spreading across borders or sectors and undermining geopolitical stability.

Example: Autonomous military AI systems across several countries are compromised by adversarial cyberattacks. False inputs trigger retaliatory strikes without human review, hitting both military targets and critical civilian infrastructure. The resulting cascade paralyzes global financial systems, collapses energy grids, and causes mass casualties. Trust in international coordination breaks down, making recovery slow and fraught.

Figure 1. An escalation pathway illustrating how an initial hazard can develop into an incident, escalate into an emergency, and ultimately culminate in a crisis if left unmanaged.
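
For readers who build monitoring or incident-tracking tools, here is a minimal sketch, in Python, of how the four stages in Figure 1 could be encoded as an ordered severity scale. The class, stage names, and the simple escalation check are illustrative assumptions on our part, not part of any existing standard or taxonomy.

  from enum import IntEnum

  class EscalationStage(IntEnum):
      """Ordered severity scale mirroring the four-stage pathway in Figure 1."""
      HAZARD = 1     # potential for harm; none realized yet
      INCIDENT = 2   # localized, contained, recoverable harm
      EMERGENCY = 3  # fast-moving threat that overwhelms routine response
      CRISIS = 4     # systemic, cross-border, long-lasting disruption

  def has_escalated(previous: EscalationStage, current: EscalationStage) -> bool:
      """True when an event has moved up the pathway and warrants a higher-tier response."""
      return current > previous

  # Example: a hazard that causes contained harm has escalated into an incident.
  assert has_escalated(EscalationStage.HAZARD, EscalationStage.INCIDENT)

Ordering the stages numerically makes "has this escalated?" a simple comparison, which is the kind of unambiguous trigger that anticipatory planning and response protocols depend on.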

Why Should We Prepare for an AI Crisis?

We define AI crisis preparedness as the ability to stop AI hazards, incidents, and emergencies from becoming a full-blown crisis. It also means limiting the impact of such crises if they do occur.

This includes steps across the escalation pathway. It covers early detection of AI hazards and incidents; triggering coordination and rapid responses across jurisdictions and sectors; minimizing harm; and supporting recovery, accountability, and long-term resilience. These measures may include legal frameworks, policy coordination, institutional arrangements, technical infrastructure, operational protocols, and strategic communication systems.

While prevention is generally better than cure, preparedness assumes that prevention can fail. It builds the resilience needed for when shocks happen. It combines proactive anticipation (for instance, through forecasting and monitoring) with reactive capacity, akin to public health systems preparing for pandemics through both surveillance and surge capacity.

The imperative for AI crisis preparedness is no longer theoretical. Several converging trends, from technical to geopolitical and institutional, are rapidly increasing the likelihood of AI harms that could have significant implications for national security and economic stability across the world. That’s why international governance is key.

  1. Speed, scale, and “agency”: AI systems can operate at speeds that exceed human response times. Agentic AI, specifically, has the ability to act with limited or no real-time human intervention, increasing the potential for rapid, uncontrolled escalation of harm. This means AI-driven failures or attacks could rise to crisis levels far faster than previously experienced in other domains, demanding new approaches to detection and response.
  2. Growing evidence base of AI incidents: Current preventative measures are failing to stop AI incidents from happening, whether related to algorithmic bias, safety and security failures, malicious use, or other issues. And the growing number of warnings from leading AI scientists, frontier lab whistleblowers, industry figures, and national security experts about severe risks from future AI systems highlights the need for forward-looking preparedness strategies at national and international levels.
  3. Integration into critical infrastructure: AI is increasingly being integrated into critical national and international infrastructure: energy grids, financial markets, telecommunications, healthcare systems, and defense networks. As these infrastructures become more reliant on AI, they will be more vulnerable to systemic failures. A single flaw or outage could cascade across sectors and borders, making AI both a force multiplier for efficiency and a potential vector for collapse.
  4. Market concentration: High market concentration exists among the few providers developing frontier AI models and essential cloud or hardware infrastructure. This dependency creates potent single points of failure, which means that flaws or outages in one dominant system could trigger simultaneous disruptions across many sectors. The July 2024 CrowdStrike incident is an example of what could happen.
  5. Heightened international instability: We live in an era of overlapping crises—climate volatility, geopolitical conflict and rivalry, declining trust in institutions, and fragile global supply chains. Incidents will not unfold in a vacuum; they will coincide with, intensify, and be exacerbated by other global shocks. In such a world, preparedness must account not only for AI-specific risks but also for their interaction with an already unstable global system.

How Can We Prepare for an AI Crisis? 

While AI presents heightened challenges, the broader task of planning for uncertain, high-impact incidents is not unprecedented. We can draw on dedicated practices related to disaster risk reduction and emergency response.

Frameworks like the Sendai Framework for Disaster Risk Reduction, though still undergoing real-world stress tests, offer useful models for AI crisis preparedness efforts. What makes the Sendai Framework notable is its emphasis on the mechanics of emergency response, including:

  • Clear escalation triggers; 
  • Public-private surge financing; 
  • Liability protections that encourage early disclosure;
  • Trusted ombudspeople who communicate incidents without sowing panic; and 
  • An equity lens that prioritizes the most vulnerable. 

The fields with mature crisis preparedness frameworks that can guide AI preparedness approaches and initiatives include:

  1. Cybersecurity preparedness: Regular drills and strong public-private partnerships have significantly strengthened cybersecurity resilience. Sector-wide simulations and threat-sharing protocols have taught us the value of tested response plans and information sharing before harm occurs. AI preparedness must similarly prioritize rehearsal, red-teaming, and real-time information exchange between stakeholders.
  2. Nuclear safety preparedness: The nuclear sector developed a mature safety regime built on hard lessons learned. International conventions now mandate early warning and mutual assistance, coordinated by the International Atomic Energy Agency (IAEA). Regular, large-scale international exercises rigorously simulate severe accidents, allowing nations to test global readiness and identify and fix weaknesses. This process highlights the need for international standards, transparency, and tested protocols.
  3. Pandemic preparedness: Pandemic surveillance and dedicated agencies have helped to improve readiness. International frameworks for early warning and coordination, such as the International Health Regulations, proved crucial in the case of the H1N1 pandemic by leveraging prior H5N1 preparation. A draft pandemic agreement to be submitted to the World Health Assembly in May 2025 intends to strengthen preparedness further, for example, through facilitating the transfer of pandemic-related technology and knowledge and mobilizing a skilled and multidisciplinary national and global health emergency workforce. This illustrates how institutional foresight, investment, and cooperation can close readiness gaps over time.

AI preparedness will look different from these fields, but it can draw on seven core elements that enable a strong international framework for response in times of crisis: legal frameworks and oversight, multilateral policy coordination, standardized incident reporting, monitoring and early warning, operational protocols, trusted communication systems, and recovery and accountability.

What Are the Key Elements of Global AI Crisis Preparedness?

Successfully navigating AI crises requires strong global governance approaches. This means:

  • Fostering international coordination through pre-agreed playbooks and Memoranda of Understanding that synchronize cross-border alerts and align response standards.
  • Investing significantly in real-time monitoring systems, simulation capabilities, red-team sandboxes, and well-tested response protocols.
  • Equitably building capacity worldwide by bridging North-South disparities so that all regions have the necessary technical infrastructure and institutional mechanisms to respond effectively.

A crucial step is establishing an international framework for AI crisis preparedness. Such a framework can be composed of several elements, including:

  • Legal oversight: There is a need for clear legal mechanisms that mandate continuous AI risk assessment, define escalation thresholds, and grant regulators authority to restrict or recall systems posing imminent harm. The Chemical Weapons Convention and the World Health Organization (WHO) International Health Regulations signal that states are willing to accept legally binding inspection and verification regimes, including “stop buttons” for high-risk systems. Ensuring the legitimacy of such efforts requires trust, transparency, and third-party verification mechanisms that can also guarantee appropriate access for civil society organizations, academics, and researchers to relevant non-sensitive data and governance processes, thereby enabling independent oversight, diverse perspectives, and accountability.
  • Cross-border policy coordination: Just as the IAEA Convention on Early Notification of a Nuclear Accident mandates synchronization of global alerts within hours, AI preparedness could benefit from pre-negotiated playbooks and MoUs that unlock shared resources immediately after a crisis signal. Such coordination is crucial for ensuring that national authorities and international bodies can act in concert, leveraging collective capabilities and shared understanding to mitigate AI risks effectively through a coherent global response.
  • Standardized incident reporting: To build shared awareness and enable analysis, a robust global system is needed to consistently track AI incidents, near-misses, and emerging risks. Valuable initiatives like the OECD AI Incidents Monitor and the AI Incident Database provide important foundations for tracking. However, achieving comprehensive global coverage requires statutes that mandate incident reporting beyond publicly reported cases, as well as the adoption of internationally agreed-upon common classification standards and reporting formats to ensure data consistency and comparability (a minimal, hypothetical sketch of such a format appears after this list).
  • Monitoring and early warning: To enable timely intervention before crises escalate, the framework must establish shared thresholds or ‘red lines’ that automatically trigger alerts (also illustrated in the sketch after this list). Aviation’s Annex 13 accident-reporting rules show how compulsory, harmonized formats accelerate learning loops; an AI equivalent can do the same. Building on practices from cybersecurity threat intelligence sharing, clear protocols are needed for rapidly escalating these warnings between frontier AI companies, national authorities, and international bodies, leveraging real-time monitoring pipelines and red-team sandboxes, and defining escalation thresholds as part of legal frameworks.
  • Operational protocols: Effective action requires operational readiness. Regular simulations such as the IAEA INEX drills and NATO’s Locked Shields cyber war-game build muscle memory long before disaster strikes. This involves developing standardized but adaptable response playbooks for various AI hazards and regularly testing coordination through joint international simulations. These protocols can include tested playbooks covering rapid model rollback, user notification, and emergency sandboxing, regularly drilled and updated for new hazard classes.
  • Trusted communication systems: Maintaining trust and coordination during a crisis demands both secure official channels and transparent public messaging. The WHO’s multilingual risk-communication guidelines and Cold War nuclear “hotlines” remind us that these are two sides of the same coin. Protocols should ensure clear, multilingual updates that inform stakeholders, counter misinformation, and sustain public trust before, during, and after a crisis. Equally vital are pre-established secure channels, akin to diplomatic or industry security networks, for rapid information sharing among government, industry, and international actors, forming the technical backbone that delivers actionable data to decision-makers.
  • Recovery and accountability: Preparedness includes planning for the aftermath. The International Civil Aviation Organization’s independent accident investigations and the Organisation for the Prohibition of Chemical Weapons’ fact-finding missions show how mandatory post-mortems drive accountability and systemic learning. Drawing on investigation models from sectors like aviation safety, mandated assessments and independent audits after significant events are vital to understand failures, ensure accountability, and apply lessons learned going forward, with institutional mechanisms in place to oversee and audit every step from early warning to post-crisis review. Embedding dedicated AI crisis response and analysis expertise within relevant international institutions could also bolster global support and shared learning.
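
Several of the elements above, particularly standardized incident reporting and monitoring and early warning, depend on a common reporting format with pre-agreed escalation thresholds. The sketch below, in Python, shows one way such a record and threshold check could look; the field names, severity scale, and cut-off are hypothetical illustrations, not the format used by the OECD AI Incidents Monitor, the AI Incident Database, or any treaty.

  from dataclasses import dataclass
  from datetime import datetime, timezone

  # Hypothetical reporting format and threshold: field names and cut-offs are
  # illustrative assumptions, not an agreed international standard.
  SEVERITY_ALERT_THRESHOLD = 3  # 3 = "emergency" on the four-stage pathway

  @dataclass
  class IncidentReport:
      reported_at: datetime
      jurisdiction: str            # reporting country or body
      affected_sectors: list[str]  # e.g. ["energy", "telecommunications"]
      severity: int                # 1 = hazard ... 4 = crisis
      cross_border: bool
      description: str

  def should_alert_internationally(report: IncidentReport) -> bool:
      """Escalate to international channels once a pre-agreed 'red line' is crossed."""
      return report.severity >= SEVERITY_ALERT_THRESHOLD or report.cross_border

  report = IncidentReport(
      reported_at=datetime.now(timezone.utc),
      jurisdiction="EU",
      affected_sectors=["energy", "telecommunications"],
      severity=3,
      cross_border=True,
      description="AI-enabled intrusion disrupting grid control systems",
  )
  print(should_alert_internationally(report))  # True

A shared, machine-readable format of this kind is what would let national authorities and international bodies exchange comparable data and trigger alerts consistently across borders.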

Why Does The Future Society Care About Preparing for an AI Crisis?

The stakes have never been higher. Investment in resilience is essential and must be sustained, given the systemic risks that general-purpose AI systems pose and the rapid pace of innovation. By adapting lessons from nuclear safety, cybersecurity, and pandemic response, we can establish an international framework that identifies AI hazards early and prevents them from spiraling into a crisis.

But preparedness isn’t just about sidestepping future costs—it’s about confronting the hard truth that rapid AI advances have already put our global systems on shaky ground. Ignoring the warning signs today risks far greater economic, social, and security fallout tomorrow. Proactive measures now are the key to building a safer, more stable future for everyone.

Acknowledgments

Thanks to our colleagues, Caroline Jeanmaire, Kathrin Gardhouse, Sam Baskeyfield, Tereza Zoumpalova and Toni Lorente, for their contributions. We also appreciate the feedback and valuable perspectives provided by Julia Chen, Justin Bullock, Matthijs Maas, and Miles Brundage.

Related resources

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of 3 National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries’ efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...

Working group publishes A Manifesto on Enforcing Law in the Age of AI

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.