Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies

July 22, 2020

The COVID-19 global pandemic has highlighted complex health, economic, and ethical trade-offs, and inspired unprecedented coordination between policymakers and companies. Several AI-enabled solutions have been deployed to deal with the crisis, ranging from innovative algorithms to accelerate the discovery of vaccines and treatments, to contact tracing applications and temperature scanning drones. However, without an effective governance framework and alignment between companies and policymakers, these technologies have the potential to undermine our fundamental human rights and civil liberties. We therefore need to bridge AI’s trust gaps and build a common understanding among policy and industry leaders about the ethical risks raised by AI applications during the pandemic and beyond. 

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s Think Tank) and Thought Leadership Consulting, A Euromoney Institutional Investor company. Over the course of a year, our team analyzed AI ethical guidelines and surveyed leading companies and policymakers’ perceptions of AI applications across key sectors such as healthcare, aviation, law, retail, and financial services. 

The report was co-authored by The Future Society team: Nicolas Miailhe, Sacha Alanoca, Adriana Bora, Yolanda Lannquist, and Arohi Jain, in collaboration with the EYQ team: Gil Forer, Gautam Jaggi, Prianka Srinivasan, and Ben Falk.

We asked policymakers and companies to identify the most important ethical principles when regulating a range of AI applications, and found divergent priorities between them across use cases. Ethical misalignments generally concentrate in four areas: fairness and avoiding bias, innovation, data access, and privacy and data rights.

Top insight 1: Policymakers have a clear vision of AI ethical risks 

We are at a critical transition point in AI governance, as the locus of activity shifts from articulating ethical principles to implementing them. Policymakers have achieved consensus on the ethical principles they intend to prioritize, and there are signs of increasing collaboration across jurisdictions. This alignment is evident in the survey data. On the use of AI for facial recognition check-ins in airports, hotels, and banks, for example, policymakers show a clear ethical vision: they consistently rate “fairness and avoiding bias” and “privacy and data rights” as the two most important principles, and do so by a wide margin (Exhibit 1). Their consensus on the key ethical issues reveals a thorough understanding of the risks raised by AI applications.

Top insight 2: Companies and policymakers have different ethical priorities

Companies, on the other hand, show weaker consensus: their responses were fairly evenly distributed across all ethical principles. Overall, companies seem to focus on the principles prioritized by existing regulations such as GDPR (e.g., privacy and cybersecurity) rather than on emerging issues that will become critical in the age of AI (e.g., explainability, fairness, and non-discrimination) (Exhibit 2).

Exhibit 2: 

Top insight 3: Varying expectations about who will lead AI governance

In addition to ethical misalignments, the survey reveals a large expectation gap over who will lead AI governance efforts: 84% of policymakers expect state actors to fulfill this role, while 38% of companies believe the private sector will conduct any rule-making process (Exhibit 3).

Exhibit 3:

Interested in other key insights? You can read our full report here.

Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Closing 2024, Looking Ahead to 2025: A Message from The Future Society’s Executive Director

A look at our work in 2024 and The Future Society's ten year anniversary, as well as what's ahead for AI governance in 2025.

The Future Society Newsletter: November 2024

In This Newsletter: Preview of The Athens Roundtable on Artificial Intelligence and the Rule of Law on December 9. Interim findings from our expert consultation for France’s 2025 AI Action Summit. Our report on AI Safety Institutes’ collaboration. OpEd on the GPAI Code of Practice for the EU AI Act.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.