
Main Insight

“To what extent should societies delegate to machines decisions that affect people?” This question permeates all discussions on the sweeping ascent of artificial intelligence. Sometimes, the answer seems self-evident.

A ‘Principled’ Artificial Intelligence could improve justice

October 3, 2017

If asked whether entirely autonomous, artificially intelligent judges should ever have the power to send humans to jail, most of us would recoil in horror. Our answer would be a firm "Never!"

But assume that AI judges, devoid of biases or prejudices, could make substantially more equitable, consistent and fair systemwide decisions than humans could, nearly eliminating errors and inequities. Would (should?) our answer be different? What about entirely autonomous AI public defenders, capable of achieving demonstrably better results for their clients than their overworked and underpaid human counterparts? And finally, what about AI lawmakers, capable of designing optimal laws to meet key public policy objectives? Should we trust those AI caretakers with our well-being?

Fortunately, these questions, formulated as a binary choice, are not imminent. However, it would be a mistake to ignore their early manifestations in the increasingly common “hybrid intelligence” systems being used today that combine human and artificial intelligence. (It is worth noting that there is no unanimously accepted definition of “artificial intelligence”—or even of “intelligence” for that matter.)

The manner in which machine-made assessments are already being inserted into legal decision-making offers an ominous warning. Courts do not yet rely on AI to assign guilt or innocence, but several states use "risk assessment" algorithms in sentencing. In Wisconsin, a judge sentenced a man to six years in prison based in part on such a risk profile. The system's algorithm remained secret: the defendant was not allowed to examine how it arrived at its assessment, and no independent, scientifically sound evaluations of its efficacy were presented.

On what basis, then, did the judge choose to rely on it? Did he understand its decision-making pathways and its potential for errors or biases, or did he unduly trust it by virtue of its ostensible scientific basis or the glitter of its marketing? More broadly, are judges even competent to assess whether such algorithms are reliable in a particular instance? Even if a judge happened to be a computer scientist, could she assess the algorithm absent access to its internal workings or to sound scientific evidence establishing its effectiveness in the real-world application in which it is about to be used? And even if the algorithms are in fact effective, and even if the judge is in fact competent to evaluate their effectiveness, is society as a whole equipped with the information and assurances it needs to place its trust in such machine-based systems?

Read the full article by Nicolas Economou here.

Nicolas Economou is the CEO of the electronic discovery and information retrieval firm H5, a senior advisor to the AI Initiative of the Future Society at Harvard Kennedy School, and an advocate of the application of scientific methods to electronic discovery.



Related resources

What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?

Our explainer defines the concept of AI crisis preparedness, exploring how an international framework could stop hazards from escalating into crises.

Ten AI Governance Priorities: Survey of 44 Civil Society Organizations

Through our survey, 44 civil society organizations—including research institutes, think tanks, and advocacy groups—ranked ten potential AI governance mechanisms.

Closing the Gap: Five Recommendations for Improving the Third Draft of the EU Code of Practice for General-Purpose AI

The EU Code of Practice helps providers of General-Purpose AI (GPAI) Models with systemic risks to comply with the EU AI Act. Our preliminary analysis of the third draft of the Code identifies opportunities for improvement in five key areas.

Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration

Our report examines how the AISI Network can be structured and strategically integrated into the broader AI governance ecosystem to effectively fulfill its mandate toward secure, safe, and trustworthy AI.

Guidance on Risk Mitigation for the Open Foundation Model Value Chain

The Future Society contributed to Partnership on AI’s risk mitigation strategies for the open foundation model value chain, which identify interventions for the range of actors involved in building and operationalizing AI systems.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of three National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs built capacity through AI policy, regulatory, and governance frameworks to support countries' efforts to harness AI responsibly, to achieve national objectives and inclusive and sustainable...