How to Stay Smart in a Smart World

Why Human Intelligence Still Beats Algorithms

About

How to stay in charge in a world populated by algorithms that beat us in chess, find us romantic partners, and tell us to “turn right in 500 yards.”

Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place—while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all seem to agree: machines will soon do everything better than humans. In How to Stay Smart in a Smart World, Gerd Gigerenzer shows why that’s not true, and tells us how we can stay in charge in a world populated by algorithms.

Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent “black box” algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the “like” button. We shouldn’t trust smart technology unconditionally, Gigerenzer tells us, but we shouldn’t fear it unthinkingly, either.

Table of Contents

Introduction
The Human Affair with AI
1. Is true love just a click away?
2. What AI is best at: The stable-world principle
3. Machines influence how we think of intelligence
4. Are self-driving cars just down the road?
5. Common sense and AI
6. One data point can beat big data
High Stakes
7. Transparency
8. Sleepwalking into surveillance
9. The psychology of getting users hooked
10. Safety and self-control
11. Fact or fake?
Acknowledgments
Notes
References
Index

Excerpt

Technological solutionism is the belief that every societal problem is a “bug” that needs a “fix” through an algorithm. Technological paternalism is its natural consequence, government by algorithms. It doesn’t need to peddle the fiction of a superintelligence; it instead expects us to accept that corporations and governments record where we are, what we are doing, and with whom, minute by minute, and also to trust that these records will make the world a better place. As Google’s former CEO Eric Schmidt explains, “The goal is to enable Google users to be able to ask the question such as ‘What shall I do tomorrow’ and ‘What job shall I take?’”23 Quite a few popular writers instigate our awe of technological paternalism by telling stories that are, at best, economical with the truth.24 More surprisingly, even some influential researchers see no limits to what AI can do, arguing that the human brain is merely an inferior computer and that we should replace humans with algorithms whenever possible.25 AI will tell us what to do, and we should listen and follow. We just need to wait a bit until AI gets smarter. Oddly, the message is never that people need to become smarter as well.

I have written this book to enable people to gain a realistic appreciation of what AI can do and how it is used to influence us. We do not need more paternalism; we’ve had more than our share in the past centuries. But nor do we need technophobic panic, which is revived with every breakthrough technology. When trains were invented, doctors warned that passengers would die from suffocation.26 When radio became widely available, the concern was that listening too much would harm children because they need repose, not jazz.27 Instead of fright or hype, the digital world needs better-informed and healthily critical citizens who want to keep control of their lives in their own hands.

23. Daniel and Palmer, “Google’s Goal.”
24. Overstated claims about algorithms without supporting evidence can be found, for instance, in Harari, Homo Deus. I provide examples in chapter 11.
25. See the spectrum of opinions in Brockman, Possible Minds. Also, Kahneman (“Comment,” 609) poses the question whether AI can eventually do whatever people can do: “Will there be anything that is reserved for human beings? Frankly, I don’t see any reason to set limits on what AI can do.” And: “You should replace humans by algorithms whenever possible” (610).
26. Gigerenzer, Risk Savvy.
27. On fear cycles, see Orben, “Sisyphean Cycle.”

Author

Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam, Director Emeritus at the Max Planck Institute for Human Development, and Partner of Simply Rational—the Institute for Decisions. He is the author of Calculated Risks, Gut Feelings, Risk Savvy, and How to Stay Smart in a Smart World (MIT Press).
