Machines We Trust

Perspectives on Dependable AI

Ebook
On sale Aug 24, 2021 | 174 Pages | ISBN 9780262362160

About

Experts from disciplines that range from computer science to philosophy consider the challenges of building AI systems that humans can trust.

Artificial intelligence-based algorithms now marshal an astonishing range of our daily activities, from driving a car ("turn left in 400 yards") to making a purchase ("products recommended for you"). How can we design AI technologies that humans can trust, especially in high-stakes applications such as law enforcement, recruitment, and hiring? In this volume, experts from a range of disciplines discuss the ethical and social implications of the proliferation of AI systems, considering bias, transparency, and other issues.

The contributors, offering perspectives from computer science, engineering, law, and philosophy, first lay out the terms of the discussion, considering the "ethical debts" of AI systems, the evolution of the AI field, and the problems of trust and trustworthiness in the context of AI. They go on to discuss specific ethical issues and present case studies from application areas such as medicine and robotics, inviting us to shift the focus from the perspective of a "human-centered AI" to that of an "AI-decentered humanity." Finally, they consider the future of AI, arguing that, as we move toward a hybrid society of cohabiting humans and machines, AI technologies can become humanity's allies.

Table of Contents

List of Figures
Preface
1 Introduction
I SETTING THE STAGE
2 Shortcuts to Artificial Intelligence
3 Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums
II ISSUES
4 The Issue of Bias: The Framing Powers of Machine Learning
5 Adjudicating with Inscrutable Decision Rules
6 Cobra AI: Exploring Some Unintended Consequences of Our Most Powerful Technology
7 The Importance of Prediction in Designing Artificial Intelligence Systems
III PROSPECTS
8 A Human-Centered Agenda for Intelligible Machine Learning
9 The AI of Ethics
Contributors