Main Insight

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.

Working group publishes A Manifesto on Enforcing Law in the Age of AI

October 25, 2022

From June to October 2022, a working group of legal practitioners and academics from both sides of the Atlantic, convened under the title Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence,” drafted A Manifesto on Enforcing Law in the Age of “Artificial Intelligence,” which calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, the manifesto recognizes the important and complementary role of standards and compliance practices. It builds upon the group's 2021 publication, A Manifesto in Defense of Democracy and the Rule of Law in the Age of “Artificial Intelligence.”

Katherine Evans, Nicolas Miailhe, Samuel Curtis, and Denia Psarrou (left to right) present remarks at the launch event hosted by John Cabot University.
Denia Psarrou and Bruce Hedin (left to right) provide responses to the Manifesto at the launch event.

Background

The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” first formed in 2021, with several TFS team members among its contributors and with the support of The Athens Roundtable on AI and the Rule of Law. That year, the group drafted and published its first manifesto, A Manifesto in Defense of Democracy and the Rule of Law in the Age of “Artificial Intelligence,” which focused on the relationship between democratic law-making and technology.

This first manifesto affirmed the need to empower citizens through institutions of countervailing power, called for cooperation between technologists and policy communities on both sides of the Atlantic, and stressed that technological solutionism must not supplant the painstaking work of democracy. Over 125 individuals signed the first manifesto, including prominent political leaders, academics exploring the implications of AI for the rule of law, developers of AI systems, and practitioners designing AI governance mechanisms.

Francesco Lapenta, Paul Nemitz, Wendell Wallach (front, left to right) and Niki Iliadis (on screen) present opening remarks at the launch event.

Reconvening to produce a second manifesto

A short year on, many of the challenges addressed in the first manifesto are ever more acute. Technology firms are developing and deploying AI systems poised to massively reshape our societies, while governments—many of which have committed to the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI—have yet to demonstrate adherence to these commitments or fulfillment of their duties to citizens through adequate oversight. Though there have been incremental developments toward legislating rules for AI systems, including the proposed EU AI Act and a limited number of US state laws, there remains a lack of meaningful constraints on the deployment of AI systems and of meaningful progress toward a level of transparency that might make it possible to trust the outcomes that AI systems produce.

With these challenges in mind, the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” reconvened in June 2022 to begin work on a second manifesto, shifting the focus from the design of law in the age of AI to its enforcement.

Between June and September 2022, the group held a series of virtual meetings and conducted writing and editing asynchronously. With the support of the Institute of Future and Innovation Studies at John Cabot University—an American university in Rome—the group reached a final consensus in meetings on 8–9 October and publicly presented the Manifesto on 10 October 2022 in Rome. More details about the launch event can be found here.

Contributors to the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” in 2022 were (listed alphabetically by last name): Julie Cohen, Samuel Curtis, Nicolas Economou, Katherine Evans, Bruce Hedin, Niki Iliadis, Anja Kaspersen, Francesco Lapenta, Nicolas Miailhe, Paul Nemitz, Denia Psarrou, Stefano Quintarelli, Marc Rotenberg, Sarah Spiekermann, Thomas Streinz, and Wendell Wallach.

Katherine Evans reads from the Manifesto at the launch event.

Recommendations

A Manifesto on Enforcing Law in the Age of “Artificial Intelligence” offers 10 recommendations for addressing the enforcement challenges shared by stakeholders on both sides of the Atlantic. The recommendations set out practically feasible steps that policymakers and regulators can take to create enforcement regimes that, in the age of AI, are both effective throughout the lifecycle of AI systems and consistent with democratic values.

  1. Implement international agreements on AI and translate them into national laws and standards.
  2. Establish new laws and governance mechanisms for AI.
  3. Establish and enforce clear prohibitions for deployments of AI systems that violate fundamental human rights.
  4. Support coordinated efforts for technical standards, benchmarking, and certifications of compliance.
  5. Build capacities across institutions to effectively enforce laws.
  6. Ensure that public interest supersedes countervailing assertions of intellectual property rights.
  7. Launch a transatlantic observatory for reporting AI incidents to help hold developers and deployers of AI systems accountable.
  8. Empower academia, civil society, and the public to participate meaningfully in oversight and enforcement.
  9. Invest in the development of trustworthy AI tools to assist in the enforcement of regulatory law.
  10. Sanction corporations that violate laws or use AI to undermine the rule of law.

More detailed descriptions of the recommendations can be found in the manifesto. Those who support these recommendations are invited to sign the manifesto.

Next steps

A broader discussion with the participation of national and international policymakers, industry, the legal community, and other stakeholders is planned to take place at the fourth edition of The Athens Roundtable on AI and the Rule of Law on December 1st and 2nd, 2022 at the European Parliament in Brussels. Click here to learn more and register.

Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

Response to NIST Generative AI Public Working Group Request for Resources

TFS submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).