
Towards Effective Governance of Foundation Models and Generative AI

March 19, 2024

Today, we are publishing “Towards Effective Governance of Foundation Models and Generative AI,” a report sharing highlights and key recommendations from the fifth edition of The Athens Roundtable on AI and the Rule of Law, which took place on November 30 and December 1, 2023, in Washington, D.C. The event brought together over 1,150 attendees for a two-day dialogue focused on governance mechanisms for foundation models and generative AI. Participants were encouraged to generate innovative “institutional solutions”—binding regulations, inclusive policy, standards-development processes, and robust enforcement mechanisms—to align the development and deployment of AI systems with the rule of law.

2023 was a year in which AI governance climbed the agenda of policymakers and decision-makers worldwide. The release of technologies with increasingly general capabilities generated intrigue and hype, accelerated a concentration of power in big tech, and triggered a societal-scale wake-up call. The growing threats that generative AI systems pose to democratic processes and human rights have prompted calls for regulation. We must collectively demand rigorous standards of safety, security, transparency, and oversight to ensure that these systems are developed and deployed responsibly.

In our report, we share highlights from the panels, fireside chats, and other dialogues at the fifth edition of The Athens Roundtable and present 8 key recommendations that emerged from those discussions.

Key recommendations emerging from discussions

  1. Adopt comprehensive horizontal and vertical regulations: It is crucial that countries adopt legally binding requirements in the form of regulation to effectively steer the behavior of AI developers and deployers towards the public interest. Self-regulation and soft governance have not delivered on their promises of responsible AI and safety, especially when it comes to foundation models. Sector-specific and umbrella regulations should be adopted in a complementary manner across jurisdictions to fill the existing gap in AI governance. This approach enables robust governance across the entire AI value chain, from design and development to monitoring, including for general-purpose foundation models that do not fit into any particular sector and may not be covered by current or future sectoral regulations.
  2. Strengthen the resilience of democratic institutions: There is an urgent need to build resilience in democratic institutions against disruptions from technological developments, notably advanced general-purpose AI systems. Key elements in building resilience are capacity-building across government institutions, in the form of employee training and talent attraction and retention; institutional innovation to bring public-sector structures and processes up to date; enforcement authority spanning oversight of the development and deployment of AI systems; and effective public participation. The latter is crucial to ensure that state institutions remain democratic, maintain citizens’ trust, and act in the public interest.
  3. Enhance coordination among civil society organizations (CSOs) to advance responsible AI policies: In a policy environment with heavy industry lobbying and many conflicting viewpoints, it will be crucial for CSOs to coordinate efforts in order to amplify promising policy recommendations. Key to this coordination will be ensuring that CSOs involved are demographically, culturally, and politically representative of the population at large, and that they consistently listen to the voices of the communities most impacted by emerging technologies.
  4. Invest in the development of methods to measure and evaluate foundation models’ capabilities, risks, and impacts: Measurement and evaluation methods play an indispensable role in understanding and monitoring technological capabilities, establishing safeguards to protect fundamental rights, and mitigating large-scale risks to society. However, current methods remain imperfect and will require sustained development in the years to come. Governments should invest in multidisciplinary efforts to develop measurement and evaluation methods, such as benchmarks, capability evaluations, red-teaming tests, auditing techniques, risk assessments, and impact assessments.
  5. Include global majority representation and impacted stakeholders in standard-setting initiatives: Many standard-setting initiatives still lack input from civil society organizations that represent impacted communities. Policymakers and leaders of such initiatives must strive to understand and address the structural factors that have led to the under-representation or lack of participation of certain groups in international standard-setting efforts. Potential mechanisms to promote participation include remunerating underrepresented groups and restructuring internal processes to engage them tangibly, rather than providing merely formal representation.
  6. Develop and adopt liability frameworks for foundation models and generative AI: Liability frameworks must address the complex, evolving AI value chain to disincentivize potentially harmful behavior and mitigate risks. Companies that make foundation models available to downstream deployers across a range of domains benefit from a liability gap, where the causal chain between development choices and any harm caused by the model is currently overlooked. Regulation that establishes liability along the AI value chain is crucial to engender accountability and fairly distribute legal responsibility, avoiding liability being transferred exclusively onto deployers or users of AI systems.
  7. Develop and implement a set of regulatory mechanisms to operationalize safety by design in foundation models: Given the borderless character of the AI value chain, regulatory mechanisms must be interoperable across jurisdictions. Regulators should invest in regulatory sandbox programs to test and refine foundation models and corresponding regulatory safeguards before deployment.
  8. Create a special governance regime for dual-use foundation model release: Decisions regarding the release methods for dual-use foundation models should be scrutinized, as these decisions carry societal risks. Exhaustive testing before release would be in the public interest for models at the frontier. Further discussion among stakeholders should identify model release methods that maximize the benefits of open science and innovation without sacrificing public safety.

The fifth edition

The fifth edition of The Athens Roundtable was organized by The Future Society and co-hosted by esteemed partners—the Institute for International Science and Technology Policy (IISTP), the NIST-NSF Institute for Trustworthy AI in Law & Society (TRAILS), UNESCO, OECD, the World Bank, IEEE, Homo Digitalis, the Center for AI and Digital Policy (CAIDP), Paul, Weiss LLP, Arnold & Porter, and the Patrick J. McGovern Foundation—and was proudly held under the aegis of the Greek Embassy to the United States.

Looking ahead

The Future Society will continue to facilitate dialogues and collaborations that aim to steer the development of AI in a manner that upholds fundamental rights and the rule of law.

Moving forward, we maintain one fundamental commitment with respect to The Athens Roundtable: to reexamine our current practices and assumptions, welcoming input and feedback from broad audiences, with particular attention to engaging underrepresented communities.

Team members

Amanda Leal

Niki Iliadis

Samuel Curtis

Related resources

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director

A look at our work in 2024 and The Future Society's ten-year anniversary, as well as what's ahead for AI governance in 2025.

The Future Society Newsletter: November 2024

In This Newsletter: Preview of The Athens Roundtable on Artificial Intelligence and the Rule of Law on December 9. Interim findings from our expert consultation for France’s 2025 AI Action Summit. Our report on AI Safety Institutes’ collaboration. Op-ed on the GPAI Code of Practice for the EU AI Act.

Delivering Solutions: An Interim Report of Expert Insights for France’s 2025 AI Action Summit

We share initial findings from 84 global experts who contributed to the first part of our open consultation for France's 2025 AI Action Summit, revealing four key shifts needed in AI governance and concrete recommendations for action. The complete analysis will be presented at The Athens Roundtable on December 9.

The Future Society Newsletter: Fall 2024

Updates from The Future Society: Announcing the Sixth Edition of The Athens Roundtable. Launch of our public consultation for France’s AI Action Summit. Opportunities for civil society engagement. New resource: Open foundation model risk mitigation strategies. We are hiring: Chief Operating Officer.

Springtime Updates

A leadership transition, an Athens Roundtable report, and a few more big announcements!