Does Reading Cognitive Bias Research Distort the Mind?


Over break I read The Invention of Science: A New History of the Scientific Revolution by the British historian David Wootton. Wootton writes that modern science was invented between 1572 (when the Danish astronomer Tycho Brahe saw a nova) and 1704 (when Isaac Newton published Opticks). A big part of the revolution was technological. The telescope, barometer, and printing press allowed people to study the world with more precision and share their findings with scholarly communities, such as the Royal Society. But, more importantly, the scientific revolution involved a new set of conceptual tools.

Take the concept of discovery, for instance. Until around the 16th century, most scholars believed that humanity’s greatest achievements were in the past—Aristotle, the preeminent source for all intellectual inquiry, still towered over European thought like a colossus, despite his deeply flawed ideas about the nature of the universe. When Columbus sailed to America in 1492, he did not use the word “discover” to describe what he had done because he was not familiar with the concept. After Amerigo Vespucci used the new word in 1504, it quickly spread into other European languages. Soon, intellectuals of the era not only began to investigate the world in new ways; they began to treat the world as something to be investigated.

I liked Wootton’s book because it helped me understand something I’ve noticed ever since the Nobel Prize-winning psychologist Daniel Kahneman published Thinking, Fast and Slow in 2011. Kahneman’s book is about the biases that distort judgment: how to identify them and what we can do to avoid them. In the traditional view, emotion is the enemy, and people are thought to be generally rational, their thinking sound. Nearly four decades of decision-making research reveal a new perspective. Systematic biases not only undermine the idea that people are rational; they are also largely invisible to us. We are “blind to our blindness,” Kahneman says.

The deeper lesson to glean from this body of research is not just that our decisions are occasionally flawed—we’ve known about our mental foibles since at least the Greeks—or even that the conscious mind is convinced that it’s not flawed. It’s that if the conscious mind functions like a press secretary, someone who does not seek the truth but justifies intuitive judgments and glosses over its own shortcomings and delusions, then we should be very careful when we read a book like Thinking, Fast and Slow. Although it’s easy to grasp the idea that people deviate from the standards of rationality, it’s much harder to resist the belief that reading about this research will automatically help us think more clearly. Learning about judgment errors elicits a feeling of enlightenment, a sense that knowledge has been gained, and that very feeling can further distort how we perceive the world.

We do not, in other words, absorb this body of research like a scientist studies the physical world. Ironically, we interpret decision-making mistakes in a way that makes us look good, often falling prey to the very biases that we are advised to avoid, such as overconfidence and the above-average effect. So while readers and students genuinely understand that they’re “biased” in a nominal sense, they conflate learning about how people make decisions with a real improvement in judgment. This is what happens when the object of inquiry is also the tool of inquiry, and when that tool is specifically designed to generate an unjustified sense of righteousness.

When I first noticed this paradox a few years ago, I had a hard time describing it. I used the clumsy term “confirmation bias bias” to describe how learning about decision-making mistakes engenders a myopic view of the mind, as if our persistent tendency to search for confirming evidence inevitably causes us to only view the mind in those same terms. Reading Wootton’s book helped me understand the broader insight. There is a difference between discovering something new, on the one hand, and that discovery changing the way we think, on the other. Much like Columbus discovered America without grasping the broader concept of “discovery,” the judgment and decision-making community has discovered new insights into how the mind makes decisions without considering how those insights affect the mind. Thinking, Fast and Slow is such an intriguing book because, by learning about the details of our intellectual hardware, we change them.

If learning about how our own mind screws up distorts judgment instead of improving it, then the question we should ask is not how the mind works but what it means to have a mind in the first place. One irony of the scientific revolution is that we began to treat the mind as an object of scientific study, just as Brahe and Newton treated novas and beams of light, even though, unlike the physical world, the mind changes each time we examine and scrutinize it; that is, it changes because we examine and scrutinize it. And while we should rely on the scientific method to interpret everything in the natural world, we need to remember that the method was developed by the very organ it is now being used to study, and that this conflict of interest could lead us astray. As the writer and neurologist Robert Burton says, “Our brains possess involuntary mechanisms that make unbiased thought impossible yet create the illusion that we are rational creatures capable of fully understanding the mind created by these same mechanisms.”

So what is a mind? Nailing down that definition represents, I think, one of the central tasks of modern neuroscience. But, more importantly, it is a task that must inform cognitive psychology—if the goal is in fact to correct outdated assumptions about human nature.

The Invention of Science, a survey of how we made discoveries about the world and how those discoveries replaced the conceptual tools we used to perceive the world, is a lesson in intellectual humility. It’s a story about the persistent belief that we see the world as it is, on the one hand, and our willingness to test that belief, on the other. The purpose of this essay is to test the belief that you can use your mind to understand your mind, and I’d proceed cautiously if that test elicited a sense of enlightenment. We should expect nothing less from an organ that evolved to produce exactly that feeling.

Postscript

Daniel Kahneman often says that despite forty years in academia, he still falls for the very same biases that his research has revealed. He is pessimistic, and it might seem from this article that I am, too.

Wootton’s book explained how we began to not only study the physical world but also recognize that we don’t see it objectively. That is, we began to study the physical world because we recognized that we don’t see it objectively. The fact that we’re talking about biases here in the 21st century suggests that something has changed in the last few hundred years. We now dedicate large swaths of academic work to researching not only the physical world but also how reason can make us perceive the physical world incorrectly. The judgment and decision-making literature is a direct descendant of Descartes, who emphasized reason and reflection over sense experience. In an era when educated people believed in alchemy and witches but not germs, Descartes wanted people to improve their beliefs. We have.

More importantly, we’ve dramatically improved how we think, not just what we think. Consider Superforecasting, by the Wharton psychologist Philip Tetlock. Tetlock is famous for publishing a longitudinal study a decade ago that measured “expert prediction.” He found that the professional pundits invited to take part in his study performed no better than chance when they made long-term forecasts about political events, wars, and the like. His new book documents an elite group of people, superforecasters, who have a remarkable track record of making accurate forecasts. They’re really good at answering questions like, “Will there be a terrorist attack in Europe between August 2016 and December 2016?” When Tetlock investigated what made these people so good, he did not find that they were smarter. He found that they possessed better mental tools, which is to say that they used basic probability and statistics to think about the future, avoided ideology, and sought out dissonant views. In short, they did a good job of correcting their biases.
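
A note on measurement: Tetlock’s forecasting tournaments score people with the Brier score, the mean squared difference between a probabilistic forecast and what actually happened. Here is a minimal sketch in Python; the function name and sample numbers are mine, purely for illustration:

    def brier_score(forecasts, outcomes):
        """Mean squared error between probability forecasts and binary outcomes.

        0.0 is perfect; hedging at 0.5 on every question earns 0.25.
        """
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # A forecaster who leans toward what actually happens...
    print(brier_score([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047

    # ...beats one who hedges at 50% on everything.
    print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25

The superforecasters’ edge shows up as consistently lower Brier scores, question after question, which is hard to achieve by luck alone.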

I’ve become wary of judgment and decision-making, not the research but the way people talk about it and the way it is reported online and in print. I suppose you could say that even though the JDM community has done a tremendous job explaining how people actually make decisions, it has not fulfilled its promise to explain “how the mind works,” as so many subtitles seem to suggest. My impetus for writing the article was to recommend that the JDM community be mindful of meta-biases and remember its relatively small slice of the broader cognitive science pie–and the fact that cognitive science is so young. We still know so little about the mind and the brain, perhaps about as much as Columbus knew about the New World.

Ironically, it’s those catchy subtitles that got me into this fascinating body of knowledge in the first place.


8 Responses to “Does Reading Cognitive Bias Research Distort the Mind?”

  1. Barry Schachter

    The potential flaw in your reasoning is not acknowledging that the conscious and unconscious serve different roles. Reflective thought is how the unconscious is provided new rules for fast thinking and how existing rules are updated and refined. Delaying the rush to judgment is perhaps the meta-lesson of a book like TFaS. Such a lesson, once learned, enhances decision making. This, at least, is not a self-referential paradox I think ; )

  2. Sam McNerney

    (Note to readers and myself)

    Dave Nussbaum read a draft and raised a question about our ability to overcome biases, not just by studying them, but by designing around them (choice architecture interventions). Does the fact that we can improve decision-making by studying it undermine my thesis?

    No, because I’m not claiming that learning about decision-making automatically undermines decision-making. As I told Dave, I think his comment is perfectly compatible with the view that a mind can never know itself. There is a difference between redesigning a choice environment to help people and inquiring into the consequences of having a mind that is specifically programmed to not see itself clearly.

  3. Jori

    Moral philosophers are no more moral than the rest of us.
    Psychologists are no more sane than the rest of us.
    Lawyers are no more law abiding than the rest of us.

    And so it should come as no surprise that behavioral economists, or at least people who are familiar with the research, are no better at overcoming their cognitive biases.

    What all these examples have in common is that the information that is gained from studying a topic may not be enough to help us put it into action in our own lives.

    The disconnect here is that in each of these fields a person is learning generally about the human condition, as opposed to gaining direct observation and experience of their specific condition (though of course the two do not have to be mutually exclusive).

    I would imagine that learning about biases in the general sense might not be as effective as learning about them in the specific, which would entail an individual sitting down, observing how their own mind works, and experiencing the biases first hand.

    So, here’s an idea:

    Three groups of people participate in a classic behavioral economics study.

    (First Group) People who practice insight meditation but have no knowledge of recent findings on cognitive bias.

    (Second Group) People who have knowledge of cognitive bias but have no experience meditating.

    (Third Group) People who have a regular insight meditation practice and are knowledgeable of recent findings on cognitive biases.

    It would be interesting to see which group, if any, had better luck at being conscious of and then circumventing their cognitive biases.
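
    To make that comparison concrete, here is a minimal Python sketch of how the three groups might be scored on a classic anchoring task. The susceptibility numbers are invented purely for illustration; a real study would estimate them from data.

        import random

        random.seed(42)

        # Hypothetical anchoring susceptibility per group (0 = immune, 1 = fully anchored)
        GROUPS = {
            "meditation only": 0.60,
            "bias knowledge only": 0.55,
            "meditation + knowledge": 0.40,
        }

        def anchored_estimate(true_value, anchor, susceptibility):
            """A participant's estimate drifts toward the anchor in proportion to susceptibility."""
            noise = random.gauss(0, 0.05 * true_value)
            return true_value + susceptibility * (anchor - true_value) + noise

        def mean_abs_error(susceptibility, true_value=100, anchor=160, n=200):
            """Average distance of n simulated estimates from the true value."""
            errors = [abs(anchored_estimate(true_value, anchor, susceptibility) - true_value)
                      for _ in range(n)]
            return sum(errors) / n

        for group, s in GROUPS.items():
            print(f"{group}: mean absolute error {mean_abs_error(s):.1f}")

    Whichever group’s estimates sit closest to the true value would be the one most successfully circumventing the anchor.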

  4. Sam McNerney

    (note to self)

    In my reading of Ray Monk’s biography, Wittgenstein, more than any other philosopher, would appreciate the difference between learning about biases and recognizing the damaging effects learning about them can have on the mind. He believed that all of philosophy must begin with a confession–an approach he borrowed from reading Augustine’s Confessions. The idea, I believe, is that philosophy is as much about problems of the will as it is about problems of the intellect. If we want to think clearly and acquire genuine understanding, we have to abandon any sense of pride. “If anyone is unwilling to descend into himself, because this is too painful, he will remain superficial in his writing,” Wittgenstein says.

    By contrasting economic assumptions about human nature with assumptions about human nature supported by cognitive psychology research, many popular psychology writers believe that they are promoting a more accurate view of the human mind, when in fact they are inflating that sense of pride Wittgenstein warned us to reject. The sense of enlightenment gained when we learn about new insights into human nature can trigger something epistemologically dangerous–namely, self-deception. I must emphasize this point because it is so easy to gloss over: a proper and honest reading of the literature on cognitive biases would be nearly impossible because it challenges the positive illusions the mind must maintain in order to function. This is why we read this body of knowledge as a story of enlightenment. It preserves the ego.

    Wittgenstein appreciated this invisible danger so much that he wrote a confession and delivered it to his closest friends. According to Monk, the confession mostly focused on moments in which Wittgenstein was not honest with himself, or when he failed to do something he felt he should have done. This, I believe, was tremendously courageous. Socrates bragged about his ignorance, Montaigne wondered about his ignorance, Descartes discussed his ignorance; Wittgenstein lived with it so genuinely that he was nearly unable to philosophize.

  5. Sam McNerney

    (note to self, extended)

    Consider the connection between ease of recall (and, more broadly, the notion of “fluency”) and judgment. The German psychologist Norbert Schwarz has shown that people asked to list twelve instances in which they were assertive rated themselves as less assertive than participants who had to list only six. This finding has been extended to reveal the same pattern elsewhere. For instance, we become less confident in a decision when we’re asked to produce more arguments to support it. It seems the mind has its own bullshit detector, but it is not on the same team as the part of the mind in charge of defending the ego.

    One major problem with TFS (and other books in the aisle) is that it is easy to read. The implication of Schwarz’s study, as well as the science of availability and recall in general, is that if we want to develop a more accurate view of ourselves and not simply rely on intuition and memory, we need to pay close attention to how easily we recall examples when we make decisions and form judgments. Often, the easier the recall, the more dubious the belief. As readers breeze through the passage about Schwarz’s study in TFS, they jump to conclusions about their own judgment, much like the participants in the study who only listed six instances of assertiveness. (We should be very skeptical each time we feel like we “get it.” It is insidious that this feeling of understanding is so closely associated with illusions of understanding.)

    One implication is that we must encourage disfluency, not just to improve judgment but to remind us that judgment is something that can always be improved. Research involving the infamous bat-and-ball problem, as well as other “cognitive reflection” tasks, is a good example. When this tricky problem is written in a difficult-to-read font, or in a language a participant is only moderately competent in, performance actually improves. One wishes Kahneman had considered publishing TFS in his own handwriting. Doing so would have forced a closer reading and actually improved how readers perceived themselves.
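
    For readers who don’t know it, the bat-and-ball problem goes like this: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. How much does the ball cost? The intuitive answer, ten cents, fails the stated constraint. The worked arithmetic, with x as the price of the ball:

        x + (x + 1.00) = 1.10
        2x = 0.10
        x = 0.05

    A ten-cent ball would force a $1.10 bat and a $1.20 total. Slowing down enough to write out the equation is exactly the kind of disfluency described above.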

    Philosophy is a problem of the will because we constantly encounter moments where we have the option to encourage disfluency but don’t; self-delusion is the result. The only people I know who appreciate this tradeoff are artists—writers, musicians, or anyone who creates original work and puts it into the world for other people to scrutinize. By trying to transfer an idea from the mind to paper, creatives are (unlike, say, the modern business worker) constantly bumping up against their self-delusions. This is the central theme of the movie Adaptation, in which the protagonist, a screenwriter named Charlie Kaufman (based on the real Charlie Kaufman), worries about becoming a “walking cliché.” He is contrasted with Donald Kaufman, Charlie’s twin brother, who spends the movie writing a screenplay filled with the usual Hollywood tropes–car chases, provocative but unrealistic romance, and a few predictable plot twists. Donald is a fool, but his illusions keep him happy and confident; he is constantly giving Charlie horrible advice without knowing it. Charlie, on the other hand, is miserable. Without grandiose illusions about his talents, he suffers, not just as a writer but as a person. Adaptation is clever because it finishes with a car chase, a romance, and a plot twist–as if Charlie had to give in to clichés in order to survive.

    “How often have we designated a work of art or invention a masterpiece or a classic,” Sarah Lewis asks in The Rise, “while its creator considers it incomplete, permanently unfinished, riddled with difficulties and flaws?” Lewis cites Franz Kafka, who, on his deathbed, insisted that his friend Max Brod burn his entire oeuvre. Brod ignored Kafka’s request, and it’s a good thing that he did. Amerika, The Trial, and The Castle would never have been published otherwise.

    I hope you don’t feel comfortable.