Imagine sitting in a laboratory with your brain connected to a computer. You’re in a rigid chair, waiting patiently, when a scientist walks into the room and offers you a deal. “In just a few seconds, I can upload everything psychologists know about human judgment, including a complete list of biases and how they undermine rational thinking, into your mind. All I need is your permission.”
There is a good argument for saying “No. Absolutely not.”
In the early 2000s, the psychologists Emily Pronin at Princeton and Lee Ross at Stanford conducted a series of studies that examined what happens when you ask people to evaluate themselves and then teach them about self-serving biases. Psychologists have known for years that most people believe they are above average on just about every measurable trait—sociability, humor, intelligence, driving skill—but Pronin and Ross wanted to know whether telling people about their egocentric habits would deflate their sense of self. It’s like saying, “OK, now that you know 95 percent of people believe they are above average, would you like to amend anything you just said about yourself?”
Across many studies, Pronin and Ross found that self-ratings were unaffected by the news. It was as if someone from the Flat Earth Society were sent to the International Space Station, peeked out the window, and concluded that our planet was indeed flat.
It’s been four years since the Nobel Prize winner Daniel Kahneman, professor of psychology and public affairs at Princeton University, published Thinking, Fast and Slow, a book that documents nearly forty years of research into the many ways we make poor decisions. Kahneman, along with his late collaborator Amos Tversky, showed that, contrary to the economic model of human nature, we’re prone to a suite of biases that systematically distort how we perceive the world.
Given what we know about how people react when they learn about biases, it’s worth wondering whether popular books outlining how we screw up, including Thinking, Fast and Slow, may not only fail to change behavior but even instill overconfidence. It’s easy to conclude, just as the participants in Pronin and Ross’s studies did, that merely learning about biases makes us immune to them, as if they were something we could permanently fix.
We used to think that the hard part of the question “How do I improve judgment?” had to do with understanding judgment. But it may have more to do with understanding the environment in which we make decisions. Many researchers now believe, to varying degrees, that in order to make better decisions, we’ve got to redesign the environment around our foibles instead of simply listing them.
In “A User’s Guide to Debiasing,” Jack Soll and his colleagues begin by clarifying what they mean by “debiasing.” Although the researchers list several tools to help people overcome their limitations, they also insist that psychologists should focus less on achieving perfect rationality and more on modifying the environment to help people achieve their goals. “This approach accepts that there is bias,” the researchers write, “but strives to create situations in which a bias is either irrelevant or may even be helpful.”
Defaults, which leverage our tendency to take the path of least resistance, are one example. They have been used to increase flu vaccination rates and retirement savings. Some readers might be familiar with research by Eric Johnson and Dan Goldstein, who found in 2003 that the default on organ donation forms in several European countries dramatically influenced how many people became donors. Donation rates were as much as 90 percent higher in countries where the default was to donate.
Because we’re deeply swayed by how numbers and ratios are framed, the EPA has taken steps to help people better understand fuel economy. Although trading in a car that gets 25 MPG for a hybrid that gets 50 MPG might seem like the bigger improvement, someone who swaps a gas-guzzling pickup truck that gets 10 MPG for a sedan that gets 15 MPG will actually save about 1.3 gallons more every hundred miles. MPG is not a linear metric, since gains are much larger at the low end of the scale, yet most people perceive it as if it were. As a result, the EPA began including GPM (gallons per mile) on labels in 2013 to help shoppers make more informed decisions.
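The arithmetic behind the label change is easy to check: fuel consumed per hundred miles is just the reciprocal of MPG, scaled by 100. Here is a minimal sketch in Python (the function name is illustrative, not EPA terminology):

```python
def gallons_per_100_miles(mpg):
    """Invert miles-per-gallon to get fuel consumed per 100 miles of driving."""
    return 100.0 / mpg

# Swapping a 25 MPG car for a 50 MPG hybrid:
hybrid_savings = gallons_per_100_miles(25) - gallons_per_100_miles(50)  # 4.0 - 2.0 = 2.0 gallons

# Swapping a 10 MPG pickup for a 15 MPG sedan:
truck_savings = gallons_per_100_miles(10) - gallons_per_100_miles(15)   # 10.0 - 6.67 = 3.33 gallons

# The truck swap saves about 1.33 gallons more per 100 miles:
print(round(truck_savings - hybrid_savings, 2))  # 1.33
```

Because fuel use is the reciprocal of MPG, an equal-looking MPG jump at the low end of the scale yields a much larger fuel saving, which is exactly the nonlinearity the gallons-based label makes visible.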
“Planning prompts” help people follow through on an intention by prompting them to visualize how and when they will carry it out. In the weeks leading up to Pennsylvania’s April 2008 presidential primary, a Harvard behavioral scientist named Todd Rogers scripted a phone call that went out to nearly 20,000 Democratic households in the state. Compared to a control condition, in which Rogers simply encouraged people to vote rather than prompting them to make a specific plan to vote, those in the experimental condition were four times more likely to go to the polls. Planning prompts have been used to help people in several other areas, such as dieting and scheduling colonoscopies, where willpower is notoriously unreliable.
It’s tempting to read “A User’s Guide to Debiasing” as evidence that human reason is deeply flawed. We might laugh at how easily fooled we are by something as important as the difference between miles per gallon and gallons per mile, or how something as trivial as defaults and planning can protect us from the flu. Pronin and Ross might be right. We’re blind to our blindness.
However, a better interpretation begins with the assumption that even though we systematically screw up, we’re smart enough to expect mistakes and account for them.
In The Checklist Manifesto, Atul Gawande writes about the difference between errors of ignorance—mistakes we make because we don’t have enough information—and errors of ineptitude—mistakes we make when we have enough information but don’t use it properly.
Under the economic model, in which people are assumed to easily understand confusing ratios and complicated statistics, misunderstanding MPG was an error of ignorance. Now we know that conflating MPG with GPM is an error of ineptitude. The problem isn’t the people shopping for cars. It’s the designers at the EPA who printed those misleading labels.
There is a memorable scene in The Matrix in which the protagonist, Neo, learns kung fu in a few seconds by downloading it into his brain. Although Neo “knows” kung fu, he still requires hours of training to learn how to use it. The Matrix was released in 1999, but the scene embodies a trope that dates back at least to the ancient Greeks. There is a constant tension in Western intellectual history between knowing-that and knowing-how, between acquiring knowledge and using it.
Soll and his colleagues show that this debate might be an antiquated one. If we want to live in a post-Kahneman world, we should spend more time reforming the environment and less time reforming ourselves. Diligently reading Thinking, Fast and Slow will only get us so far. The brain is not a computer we can debug; its biases are fixed features that we must design the choice environment around.