Samuel Curtis
Samuel is interested in predicting and preventing large-scale risks posed by emergent AI/ML and biotechnological tools. His interests span transformative AI, synthetic media, and synthetic biology. At The Future Society, his work involves applied policy research, strategy, and communications.
Prior to joining TFS, he conducted research on China's use of facial recognition technology during the COVID-19 pandemic, which received an Outstanding Capstone Award from Schwarzman College and contributed to a publication by the Beijing Academy of AI. His past employment includes designing cancer therapeutics at a Seattle biotechnology company, teaching English in Central Asia, and conducting research on North Korea's nuclear program with the former Economic Advisor to the President of Kazakhstan.
He completed his BA in Biochemistry, Biophysics, and Molecular Biology at Whitman College and his MSc in Global Affairs at Tsinghua University through the Schwarzman Scholars program. He speaks Russian (advanced) and Mandarin Chinese (pre-intermediate).

Portland
Related resources
TFS joins U.S. NIST AI Safety Institute Consortium
TFS is collaborating with NIST through the AI Safety Institute Consortium to establish a new measurement science, enabling the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy AI and its responsible use.
Towards Effective Governance of Foundation Models and Generative AI
We share highlights from the fifth edition of The Athens Roundtable on AI and the Rule of Law and present eight key recommendations that emerged from the discussions.
The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.
The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington, D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI and to explore governance mechanisms that could reduce those risks.
A Blueprint for the European AI Office
We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.
Model Protocol for Electronically Stored Information (ESI)
The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.
Response to U.S. OSTP Request for Information on National Priorities for AI
Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.
Strengthening the AI operating environment
In a paper published at the International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (Legal AIIA), Dr. Bruce Hedin and Samuel Curtis present an argument for distributed competence as a means to mitigate risks posed by AI systems.
Response to U.S. NTIA AI Accountability Policy Request for Comment
Our response emphasized the need for scrutiny in the design and development of general-purpose AI systems (GPAIS). We encouraged the implementation of third-party assessments and audits, contestability tools for impacted persons, and a horizontal regulatory approach toward GPAIS.
Policy achievements in the EU AI Act
The draft AI Act approved by the European Parliament contains a number of provisions for which TFS has been advocating, including a special governance regime tailored to general-purpose AI systems. Collectively, these operationalize safety, fairness, accountability, and transparency in the development and deployment of AI systems.
Giving Agency to the AI Act
Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.
Takeaways from the fourth edition of The Athens Roundtable
Held at the European Parliament in Brussels, the dialogue focused on implementation and enforcement of AI governance mechanisms at this critical juncture in human history.
Working group publishes A Manifesto on Enforcing Law in the Age of AI
The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems.
TFS champions Regulatory Sandboxes in the EU AI Act
The Future Society has been advocating for regulatory sandboxes to be implemented via the EU AI Act and has designed a three-phase rollout program.
AI Policy Seminar for U.S. State Legislators
In the United States, many paramount AI issues are litigated, regulated, and evaluated in lower jurisdictions before reaching the federal government. States, in essence, serve as laboratories for broader policymaking, providing insight into national regulatory trajectories. In spite of state legislators' critical role, relatively little is known about their AI expertise,…
Progress with GPAI AI & Pandemic Response Subgroup at Paris Summit 2021
On November 12th, 2021, an update on our work on the "AI-Powered Immediate Response to Pandemics" project was presented at the GPAI Paris Summit 2021.
Classifying AI systems used in COVID-19 response
To aid our information-gathering efforts, starting today we are inviting those involved in developing AI systems used in COVID-19 pandemic response to complete our survey by August 2nd, 2021.