
Main Insight

Understanding the intersection between Artificial Intelligence and Cybersecurity, with co-author Roman V. Yampolskiy

The Intersection & Governance of Artificial Intelligence and Cybersecurity

May 21, 2020

As individuals, businesses and governments increasingly rely on digital devices and IT, cyber attacks have grown in number, scale and impact, and the attack surface is expanding. First, this paper outlines characteristics of the cybersecurity and AI landscape that are relevant to policy and governance. Second, it reviews the ways that AI may affect cybersecurity: through vulnerabilities in AI models themselves, and by enabling both cyber offense and defense. Third, it surveys current governance and policy initiatives at the level of international and multilateral institutions, nation-states, industry, and the computer science and engineering communities. Finally, it explores open questions and recommendations relevant to key stakeholders including the public, the computing community and policymakers. Key issues include the extent to which international law applies to cyberspace, how AI will alter the offense-defense balance, boundaries for cyber operations, and how to incentivize vulnerability disclosure.

Yolanda Lannquist, Jia Yuan Loke, Nicolas Miailhe, Cyrus Hodes and Roman V. Yampolskiy authored the paper.

Find the paper on ResearchGate here and on the OECD AI Policy Observatory here.


Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?


Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations


Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI


The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration


Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain


The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

National AI Strategies for Inclusive & Sustainable Development


From 2020 to 2022, TFS supported the development of three National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs build capacity through AI policy, regulatory, and governance frameworks to support countries' efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...