Is Rory Sutherland’s Wedding Invitation Analogy a Good One?

In an article published last summer in New Statesman, Ian Leslie, a writer and veteran of the ad industry, points out that a brand’s message can be far more powerful when it is visible to large groups of people at the same time—the Super Bowl or Times Square, for example. The creatives at ad agencies know how to take advantage of these shared moments to help brands. The engineers at Facebook and Google, by contrast, tend to conceive of ads, as Leslie puts it, “as a mere conduit for information about the product.”

According to Leslie, Rory Sutherland, Vice-Chairman of Ogilvy & Mather, compares viewing an online ad to receiving a wedding invitation by email. If ads really were just a “conduit for information,” then it wouldn’t matter whether a wedding invitation appeared in your inbox or your mailbox.

But it does matter. Mailing is more expensive than emailing. And as every bride or groom knows, look and feel are powerful things. You get a pretty good sense of what kind of wedding it is going to be the second you glance at the envelope, well before you’re exposed to the actual information.

Rory is right—the medium is the message. But his point might be too theoretical to provide ad agencies with a way to win back profits from tech giants including Facebook and Google, each of which now “has a market value exceeding the combined value of the six largest advertising and marketing holding companies,” says media commentator Ken Auletta in The New Yorker.   

Does receiving a wedding invitation via email really change how you feel about the wedding? Do you remember what the last invitation you received “felt” like? Is a Super Bowl ad worth it?

For decades the value of mass-market ad campaigns has been assumed. But with the rise of data-rich tech companies—and their unchallenged ability to track how we click and buy—the burden of proof has shifted. As one of my tech friends said to me when I was trying to convince him of Rory’s analogy: “I’ll believe you. Just show me the data.”

I’ve heard Rory make his email analogy a few times—first in his entertaining presentations and then in an essay for a website dedicated to contemporary intellectuals and scientists. When I encountered the email analogy in Leslie’s article, I figured it would be a good idea to test it.

So that’s what my team and I did. We recruited a few hundred participants online and randomly separated them into two groups. The first group saw an image of a wedding invitation and the second group saw the same image embedded in a mock Gmail account. We told the first group, “Imagine receiving this wedding invitation in your mailbox.” We told the second group the same thing but changed “your mailbox” to “your inbox.” We asked both groups a question inspired by Rory: “What are the odds this wedding will have a cash bar?”

We found that the average result from each group was nearly identical—just around 50 percent, suggesting that people took one look at the invitation and randomly guessed.  

We decided to conduct a follow-up experiment. We recruited another group of participants and asked a different question: “How would you judge the overall quality of this wedding?” This time, we found that participants who received the invitation in their inbox judged the wedding to be of worse quality.
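For readers curious about the mechanics, a design like ours can be sketched in a few lines of code. The ratings below are invented for illustration (they are not the study’s data), and the permutation test is a simple, generic way to check whether a difference in group means could be due to chance:

```python
import random

random.seed(0)

def permutation_test(a, b, trials=10_000):
    """Compare two groups' means; estimate how often random
    reassignment produces a difference at least as large."""
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = (sum(pooled[:len(a)]) / len(a)
                - sum(pooled[len(a):]) / len(b))
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / trials

# Hypothetical 1-10 quality ratings for each randomly assigned condition.
mailbox = [random.gauss(7.0, 1.5) for _ in range(150)]
inbox = [random.gauss(6.6, 1.5) for _ in range(150)]
diff, p_value = permutation_test(mailbox, inbox)
```

The point of the sketch is the logic, not the numbers: random assignment means the only systematic difference between the groups is the word “mailbox” versus “inbox.”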

The effect was small. But that made it interesting. Based on the data we could better estimate what a bride and groom stand to lose if they send their invitation over email. When you think about all the moments before a wedding that contribute to your expectation of it, from the save-the-date card to the choice of registry, it’s easy to see how small impressions add up. They color an experience in a way we rarely notice.  

As Rory says, “10 percent of advertising is information. The rest is inference.”  

Leslie ends his article with a quote from the French philosopher Gilles Deleuze: “It is not the slumber of reason that engenders monsters, but vigilant and insomniac rationality.” Leslie’s point is that the tech nerds who design our smartphones and engineer our social media feeds are indifferent to the concept of a brand. To them, shopping is an engineering problem. The goal is to “identify the precise moment that a consumer needs something so that it can trigger a sale.”

In 2013, the feminine hygiene brand Always launched the “Like A Girl” campaign. The TV commercial featured people of all ages interpreting the phrase “like a girl,” as in “run like a girl,” “throw like a girl,” and “fight like a girl.” In the first few frames, the camera shows girls entering puberty who flail their arms unathletically after being instructed to “run like a girl.” Then the camera shifts to younger girls who have not been alive long enough for the link between the phrase and running style to solidify. So when the director asks one of the younger girls, “What does it mean to you when I say ‘run like a girl’?” she responds, “It means run as fast as you can.” She proceeds to sprint around the studio enthusiastically, uninfluenced by a stereotype that instantly feels outdated.

Brands work on us. When Listerine famously attached the label “halitosis” to bad breath, the company didn’t pressure people into believing that their breath smelled bad. It convinced everyone that their friends thought their breath smelled bad and weren’t speaking up—hence the gossipy taglines “If your friends were entirely frank with you” and “They say it behind your back,” which appeared in the 1920s.

But how does that process work? How does a brand become part of common knowledge? These are the questions ad agencies spend their time thinking about. They study how people use brands to broadcast signals without consciously acknowledging them. They learn how to interpret and then redesign those signals. They know that advertising works not by preying on our emotions but by “changing the landscape of cultural meanings, which in turn changes how we are perceived by others when we use a product,” says software engineer Kevin Simler in a blog post.

The digital revolution works differently. It designs interfaces that make it easier for us to get the things we want. Good design, in this view, is targeted and invisible. In the “Like A Girl” campaign, the opposite is true. Your thinking is interrupted. Your beliefs are addressed. A social norm is challenged.

Like a wedding invitation buried in a pile of mail, a great ad momentarily interrupts mindless routine to create an impression. It stands out. It draws you in. It’s not invisible.

I’ll never fully convince my friend of Rory’s analogy. I ran the study and I collected the data. But the world is too chaotic to determine if Rory is right with the kind of certainty my friend is looking for. He insisted that brands did not “work” on him like they worked on most people. Based on his outfit, I was inclined to believe him. And yet, I wouldn’t be surprised if he had a bottle of Listerine in his bathroom.

A Digital Collaborator: The Future of Search

In 1941, Jorge Luis Borges published a short story about an unending library composed of hexagonal rooms. The library contains every book ever written, every book that will be written, and every book that could be written, in all languages. One book contains a detailed history of the future. Another describes the true story of your death. There’s commentary on the gospel of Basilides, and commentary on the commentary.

The library contains books filled with every possible combination of letters—one book has the letters M C V repeated from start to finish—rendering most books nonsense. Some residents diligently search for a perfect index of the library, but it’s a quixotic search. How could they distinguish the faithful catalogue of the library from the innumerable false ones?

Borges’ short story, The Library of Babel, is an eerie illustration of a problem we encounter every day: information overload. We live in an era where information, once scarce and expensive, has become a commodity. And while access to more information is a good thing, it often comes at the expense of having to sort through heaps of gibberish. In a way, we’re all living in the Library of Babel.

Or are we?

If you trace the history of information, from the first spoken languages to the Internet, you’ll notice that each time we invent something that spews more information into the world, we ingeniously respond by creating a system that organizes the new information. Contemporary critics rightfully complain about information overload—we’re suffocating from “Data Smog,” as author David Shenk puts it—but it’s simultaneously true that we’re living in an era of extreme organization. It’s never been easier to store, retrieve, and share information. Not even close.

Yet the ability to access the world’s knowledge with just a swipe and a click might come at a cost. What John Stuart Mill said of happiness—that it “was only to be attained by not making it the direct end”—also describes the nature of discovery. We tend to arrive at good ideas obliquely, as Financial Times writer John Kay puts it. That is, scientists and artists make discoveries when they’re contemplating something that is only vaguely related to their original question. It’s an overlooked aspect of the creative process that repeats itself—Archimedes in the bathtub, Darwin reading Malthus, Fleming experimenting with bacteria.

At this point, you might suspect that this essay is about the fundamental tradeoff between structure and serendipity. If we generate good ideas by welcoming a dose of unexpected encounters, then each time we organize information we risk impeding intellectual progress. Being productive and creative is about injecting the right dose of disorder and chaos into your daily routine, right?

The problem with this view is that it involves debating two abstractions. What’s at stake is not balancing the Apollonian with the Dionysian but answering a more concrete question: How do search interfaces influence search behavior?

This is where things get interesting. The field of information retrieval is based on a search model that we’ve inherited from the early days of computer science. That model assumes that retrieving information from a database involves going to a computer, searching the database, finding the document, and leaving. “It just wasn’t intuitive to imagine a cohesive information environment where people could search many databases at the same time,” Marcia Bates, Professor Emerita of Information Studies at UCLA, says.

Google was such a significant breakthrough because it indexed the World Wide Web and not just one database. It used an algorithm that ranked websites by the number and quality of inbound links instead of simply counting keywords. The logic of the algorithm, which Google co-founder Larry Page wryly called PageRank, is similar to the logic of academic citations: the quality of a paper is determined by how many times it has been cited.
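The citation analogy can be made concrete. Below is a minimal, illustrative sketch of the core PageRank idea, a toy power-iteration version with the standard damping factor; the production algorithm is far more elaborate:

```python
# A page's score is the sum of the scores of the pages linking to it,
# with each linking page dividing its own score among its outbound links.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A tiny web: every other page links to "home", so it earns the top rank.
toy_web = {
    "home": ["about"],
    "about": ["home"],
    "blog": ["home"],
}
scores = pagerank(toy_web)
```

Notice that “blog,” which nobody links to, ends up with the minimum score no matter how many keywords it contains. That is the break from keyword counting.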

Google has since improved search by incorporating slick new features such as Autocomplete, and it has become better at distinguishing the meaning of a query from the literal words within it. Yet the difference between Google and Gerard Salton’s “SMART”, an early information retrieval system developed in the 1960s, is a difference of degree, not kind. In terms of organizing information online, we don’t need to worry about data smog. We need to replace an interface that’s over 50 years old.

When search experts and information scientists talk about the future of search, they talk about having “a space to explore” and the opportunity “to go in various directions,” as Anabel Quan-Haase, an Associate Professor of Information Science at the University of Western Ontario in London, put it to me. This is not the simple idea that Google will get better at answering your questions. It’s the more groundbreaking hypothesis that in the future, Google (or a competitor) might inspire a few questions, too.

To understand the difference, I spoke with Tuukka Ruotsalo, who leads a team of researchers at the Helsinki Institute for Information Technology. Ruotsalo and his team completed a new search engine called SciNet a few years ago. “The project started from the idea that search has developed dramatically in the last few years but search interfaces have not,” Ruotsalo says. “We still type in keywords and get a list of documents. We are trying to help users recognize topics they’re interested in, and a big part of that is visualization.”

SciNet’s interface looks like a Copernican solar system. The searched word or phrase appears in the middle (“machine vision”) and related keywords and topics dot the periphery (“nano-technology,” “neural networks,” “artificial technology”). Users drag new keywords toward the middle across concentric circles; the closer the keywords are to the center, the more they influence the results, which are listed on an adjacent column. A simple color-coding system makes it easy for users to spot useful articles. Taken together, it’s a wonderful experience.
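I don’t have access to SciNet’s code, but the radial idea might translate into ranking weights roughly like this. The function names and the linear falloff are my own illustration, not SciNet’s actual implementation:

```python
# Hypothetical sketch: the closer a keyword sits to the center of the
# radial display, the more weight it carries when scoring documents.
def keyword_weight(distance_from_center, max_radius=1.0):
    """Linearly map radial distance to a 0-1 relevance weight."""
    d = min(max(distance_from_center, 0.0), max_radius)
    return 1.0 - d / max_radius

def score_document(doc_terms, keyword_positions):
    """Sum the weights of the query keywords the document mentions."""
    return sum(
        keyword_weight(dist)
        for kw, dist in keyword_positions.items()
        if kw in doc_terms
    )

# "machine vision" sits at the center; "nano-technology" near the rim.
positions = {"machine vision": 0.0, "neural networks": 0.3,
             "nano-technology": 0.9}
doc = {"machine vision", "neural networks"}
score = score_document(doc, positions)  # 1.0 + 0.7
```

Dragging a keyword inward simply shrinks its distance, which re-weights every document in the result column—an interface gesture becomes a ranking parameter.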

Ruotsalo said that SciNet is “not trying to beat Google,” emphasizing that his team “designed it to help scientific people find useful scientific information.” But the team has since established Etsimo, a company that will explore a commercial version of SciNet. If conducting research is about finding material at the periphery, this is a promising development. SciNet may bring us closer to the next generation of search by making it more visual and dynamic.

As I spoke with Ruotsalo it became clear that the question of structure versus serendipity is misleading. If early IR systems were like hitting every red light down a long road, Google was the engineer who reprogrammed the lights to make the system work better. The future of search, breaking from this approach entirely, would play the role of an erudite driving buddy, stimulating the conversation at just the right moments. You’ll still take the shortest route, but now you’ll have a digital collaborator to help you think through your hunch. In this view, the history of search is best seen not as an ongoing attempt to organize information, but as an ongoing attempt to simulate a real conversation, where critical feedback and original ideas are exchanged, not just facts.

And yet, when we contemplate the future of search we tend to imagine ourselves caught in Borges’ library, frantically searching for a way to escape. This fear emerges each time information becomes easier and cheaper to produce and share. And while worrying about overload is not completely misguided—imagine living through the 18th century, when the number of books in print doubled from about 331,000,000 to 628,000,000—it ignores a much broader and more important trend.

Right now, search is a completely unimaginative experience, akin to hanging out with a dull accountant. If the creative mind is fundamentally dialectical, constantly questioning and scrutinizing itself, thriving on exchange and dialogue, then search must become a collaborator, a process by which a singular idea emerges out of interaction.

We extract two kinds of information when we collaborate with other people, the explicit stuff and the nonverbal cues—the smiles and subtle gestures that “envelop nearly all human action,” as Nietzsche said. If those cues are essential to human communication—psychologists insist that speaking and listening are fundamentally nonverbal—then our latest innovations in search are impressive but relatively primitive.

When you reflect on the inevitable rise of voice recognition software, virtual reality, and artificial intelligence, it’s easy to see what the future of search will look like: more human.

The Administrative Nudge Versus the Cultural Jig

Last September, President Obama issued an executive order officially endorsing the nudge. The order represented a major victory for the behavioral science community. What was once an esoteric talking point among rogue economists has since become a widely popular research topic, ripe for application outside academia in domains such as public policy. It’s as if we’ve finally acknowledged something that a lot of people have known for a long time: that we are not mythical Econs who maximize utility but real Humans who sometimes screw up.

The question is: Does the government have the right to nudge? Critics argue that nudges undermine freedom and autonomy, even when they are disclosed or implemented with good intentions. Proponents point out that the government—along with other people and private businesses—nudges inevitably. The belief that we inhabit “neutral choice environments” is a myth.

The problem with this debate, like so many others, is that there are more than two sides. That false dichotomy brings me to Matthew Crawford’s The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. Crawford, a motorcycle fabricator with a Ph.D. in political philosophy from the University of Chicago, says that in addition to debating the ethics of nudging we also need to examine the broader cultural forces, such as the introduction of credit in the early 20th century, that gave rise to nudges in the first place.

His argument begins with the jig, a tool familiar to metal- and woodworkers, which he defines as a “device or procedure that guides a repeated action by constraining the environment in such a way as to make the action go smoothly, the same each time.” If a carpenter needs to cut two dozen boards to the same length, he won’t measure and cut each board separately. Rather, he will slide each board under a band saw until it hits the jig, ensuring a perfect cut each time.

At first glance, the nudge and the jig appear to be very similar. Both involve shifting the burden of attention from System 2 to System 1 by manipulating the environment. However, the jig typically involves skilled practitioners performing relatively tedious tasks while the nudge does not. If the jig engrains habits and refocuses attention, the nudge simply helps people make better decisions, usually without them even realizing it.

Consider arithmetic, a kind of mental jig. If you need to figure out the product of 18 and 12, you could multiply 18 by 10 to get 180, and then multiply 18 by 2 to get 36, and then add 180 and 36 to get 216. Calculating the product of 911 and 356, however, would be nearly impossible to do in your head. The solution is not to focus harder—the burden on working memory would be too much—but a different method altogether, one that breaks the problem down into a series of smaller calculations and relies on pencil and paper. “With this simple expedient,” Crawford says, “we vastly extend our intellectual capacities.”
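The pencil-and-paper expedient is easy to make explicit. Long multiplication breaks 911 × 356 into one small partial product per digit of the multiplier, then sums them:

```python
# Each partial product is a single-digit multiplication shifted by a
# power of ten -- exactly what the written method records column by column.
partials = [911 * 6, 911 * 50, 911 * 300]  # 5466, 45550, 273300
total = sum(partials)                       # 324316
assert total == 911 * 356
```

No step in the decomposition strains working memory; the paper holds the intermediate results so your attention doesn’t have to.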

Crawford’s essential insight involves the concept of a cultural jig. Much like a jig diffuses information through constraints in the environment, a cultural jig diffuses knowledge and practical wisdom through “linguistic, social, political, and institutional constraints.” A jig helps people cut, saw, and weld; a cultural jig helps people make judgments and decisions, typically by ingraining useful conventions and norms.

Crawford cites the concept of thrift in early America, which emerged in tandem with the Protestant ethic. According to this view, the goal was parsimonious spending over conspicuous consumption—accumulating wealth was an indication that your life was on track, not a signal that you could indulge. As Crawford puts it, “The debtor cannot speak frankly to the man he owes money to; he must make himself pleasing and hope for continued forbearance.”

The invention of consumer credit (and the subsequent norm that carrying debt was a normal aspect of adult life) dismantled thrift as a cultural jig. Crawford hypothesizes that today we are “very fat” and “very prone to divorce” because other cultural jigs also disintegrated. How? He points to the “liberating and deregulating efforts of the right and left,” which in turn dramatically increased the demand for self-regulation. We now stay out of debt, in shape, and in wedlock not through religion, social norms, or shared customs but something much less potent: human willpower.

This is the perspective Crawford uses to understand the nudge. He is not an opponent (he actually endorses the nudge) and he is wise enough to realize that even if government doesn’t nudge, corporations certainly will. His point, simply, is that “Getting people to save money through administrative nudges such as the opt-out 401(k) plan is best seen not as a remedy for our failure to be rational as individuals, but as an attempt to compensate for the dismantling of those cultural jigs we once relied on to act (and think and feel) in ways that support thrift.”

It’s worth taking this point seriously for at least two reasons. The first is that if government wants people to spend wisely, stay in shape, and stay married, it should focus on promoting the right cultural jigs, not just the right nudges. That’s a much broader debate about the role of government, of course, but it’s a debate worth remembering the next time you think about the ethics of nudging.

The second relates to a central theme of Crawford’s book: attention. Before he opened his motorcycle shop in Virginia, Crawford worked in a think tank in Washington D.C., where he was trained to “project an image of rationality without indulging too much in actual thinking.” He believes, correctly, that in a world containing more and more demands on attention, we risk forgoing the opportunity to fully submit ourselves to an activity, to see our efforts directly impact the world and to receive immediate feedback from it. He prescribes becoming absorbed in a skilled activity as a remedy for becoming intellectually bankrupt at work, insisting that “knowledge work” is much more physically monotonous than we think.

Many administrative nudges exist because demands on cognition have become so unrelenting that even easy tasks, such as answering an email, trigger tremendous bouts of stress. Don’t even think about someone actually taking time to change the default from “Yes” to “No.”

There’s no doubt, to be sure, that the administrative nudge is not only a good idea but a smart way to save money and even a few lives. In 2013 a team from Columbia Business School led by Eric Johnson tested a nudge to help people sign up for health care plans that could save customers and taxpayers approximately $10 billion annually. Last year Melissa Knoll and a team at the Social Security Administration created a nudge that could mean a difference of tens of thousands of dollars for beneficiaries.

However, as you skim through the forms at the DMV, it’s worth pausing to pay attention—not just to the content of the form, but to how you pay attention in general. Crawford isn’t a Luddite yearning for simpler times. His goal is to suggest that some administrative nudges might be band-aids on a deeper cultural wound that is not healing, and could even be widening.

Does Reading Cognitive Bias Research Distort the Mind?

Over break I read The Invention of Science: A New History of the Scientific Revolution by the British historian David Wootton. Wootton writes that modern science was invented between 1572 (when the Danish astronomer Tycho Brahe saw a nova) and 1704 (when Isaac Newton published Opticks). A big part of the revolution was technological. The telescope, barometer, and printing press allowed people to study the world with more precision and share their findings with scholarly communities, such as The Royal Society. But, more importantly, the scientific revolution involved a new set of conceptual tools.

Take the concept of discovery, for instance. Until around the 16th century, most scholars believed that humanity’s greatest achievements were in the past—Aristotle, the preeminent source for all intellectual inquiry, still towered over European thought like a colossus, despite his deeply flawed ideas about the nature of the universe. When Columbus sailed to America in 1492, he did not use the word “discover” to describe what he had done because he was not familiar with the concept. After Amerigo Vespucci used the new word in 1504, it quickly spread into other European languages. Soon, intellectuals of the era began to not only investigate the world in new ways. They began to treat the world as something to be investigated.

I liked Wootton’s book because it helped me understand something I’ve noticed ever since the Nobel-Prize winning psychologist Daniel Kahneman published Thinking, Fast and Slow in 2011. Kahneman’s book is about biases that distort judgment, how to identify them and what we can do to avoid them. In the traditional view, emotion is the enemy and people are thought to be generally rational, their thinking sound. Nearly four decades of decision-making research reveal a new perspective. Systematic biases not only undermine the idea that people are rational but they are largely invisible to us. We are “blind to our blindness,” Kahneman says.

The deeper lesson to glean from this body of research is not just that our decisions are occasionally flawed—we’ve known about our mental foibles since at least the Greeks—or even that the conscious mind is convinced that it’s not flawed. It’s that if the conscious mind functions like a press secretary, someone who does not seek the truth but justifies intuitive judgments and glosses over its own shortcomings and delusions, then we should be very careful when we read a book like Thinking, Fast and Slow. Although it’s easy to grasp the idea that people deviate from the standards of rationality, it’s much harder to resist the belief that reading about this research will automatically help us think clearer. Learning about judgment errors elicits a feeling of enlightenment, the sense that knowledge has been gained, that can further distort how we perceive the world.

We do not, in other words, absorb this body of research like a scientist studies the physical world. Ironically, we interpret decision-making mistakes in a way that makes us look good, often falling prey to the very biases that we are advised to avoid, such as overconfidence and the above-average effect. So while readers and students genuinely understand that they’re “biased” in a nominal sense, they conflate learning about how people make decisions with a real improvement in judgment. This is what happens when the object of inquiry is also the tool of inquiry, and when that tool is specifically designed to generate an unjustified sense of righteousness.

When I first noticed this paradox a few years ago, I had a hard time describing it. I used the clumsy term “confirmation bias bias” to describe how learning about decision-making mistakes engenders a myopic view of the mind, as if our persistent tendency to search for confirming evidence inevitably causes us to only view the mind in those same terms. Reading Wootton’s book helped me understand the broader insight. There is a difference between discovering something new, on the one hand, and that discovery changing the way we think, on the other. Much like Columbus discovered America without grasping the broader concept of “discovery,” the judgment and decision-making community has discovered new insights into how the mind makes decisions without considering how those insights affect the mind. Thinking, Fast and Slow is such an intriguing book because, by learning about the details of our intellectual hardware, we change them.

If learning about how our own mind screws up distorts judgment instead of improving it, then the question we should ask is not how the mind works, but what it means to have a mind in the first place. One irony of the scientific revolution is that we began to treat the mind as an object of scientific study, just like how Brahe and Newton treated novas and beams of light, even though unlike the physical world, it changes each time we examine and scrutinize it; that is, it changes because we examine and scrutinize it. And while we should rely on the scientific method to interpret everything in the natural world, we need to remember where that method was developed, and how that conflict of interest could lead us astray. As the writer and neurologist Robert Burton says, “Our brains possess involuntary mechanisms that make unbiased thought impossible yet create the illusion that we are rational creatures capable of fully understanding the mind created by these same mechanisms.”

So what is a mind? Nailing down that definition represents, I think, one of the central tasks of modern neuroscience. But, more importantly, it is a task that must inform cognitive psychology—if the goal is in fact to correct outdated assumptions about human nature.

The Invention of Science, a survey of how we made discoveries about the world and how those discoveries replaced the conceptual tools we used to perceive the world, is a lesson in intellectual humility. It’s a story about the persistent belief that we see the world as it is, on the one hand, and our willingness to test that belief, on the other. The purpose of this essay is to test the belief that you can use your mind to understand your mind, and I’d proceed cautiously if that test elicited a sense of enlightenment. We should expect nothing less from an organ that evolved to do just that.


Daniel Kahneman often says that despite forty years in academia, he still falls for the very same biases that his research has revealed. He is pessimistic, and it might seem from this article that I am, too.

Wootton’s book explained how we began to not only study the physical world but also recognize that we don’t see it objectively. That is, we began to study the physical world because we recognized that we don’t see it objectively. The fact that we’re talking about biases here in the 21st century suggests that something has changed in the last few hundred years. We now dedicate large swaths of academic work to researching not only the physical world but also how reason can make us perceive the physical world incorrectly. The judgment and decision-making literature is a direct descendent of Descartes, who emphasized reason and reflection over sense experience. In an era when educated people believed in alchemy and witches but not germs, Descartes wanted to get people to improve their beliefs. We have.

More importantly, we’ve dramatically improved how we think, not just what we think. Consider Superforecasting, by Wharton psychologist Phil Tetlock. Tetlock is famous for publishing a longitudinal study a decade ago that measured “expert prediction.” He found that the professional pundits invited to take part in his study performed no better than chance when they made long-term forecasts about political events, wars, and so on. Tetlock’s new book documents an elite group of people, superforecasters, who have a remarkable track record of making accurate forecasts. They’re really good at answering questions like, “Will there be a terrorist attack in Europe between August 2016 and December 2016?” When Tetlock investigated what made these people so good, he did not find that they were smarter. He found that they possessed better mental tools, which is to say that they used basic probability and statistics to think about the future, avoided ideology, and welcomed dissonance. In short, they did a good job of correcting their biases.

I’ve become wary of judgment and decision-making—not the research but the way people talk about it and the way it is reported online and in print. I suppose you could say that even though the JDM community has done a tremendous job explaining how people actually make decisions, it has not fulfilled its promise to explain “how the mind works,” as so many subtitles seem to suggest. My impetus for writing the article was to recommend that the JDM community be mindful of meta-biases and remember its relatively small role in the broader cognitive science pie, and the fact that cognitive science is so young. We still know so little about the mind and the brain, perhaps an equivalent amount to what Columbus knew about the New World.

Ironically, it’s those catchy subtitles that got me into this fascinating body of knowledge in the first place.