Blog

A Digital Collaborator: The Future of Search

In 1941, Jorge Luis Borges published a short story about an unending library composed of hexagonal rooms. The library contains every book ever written, every book that will be written, and every book that could be written, in all languages. One book contains a detailed history of the future. Another describes the true story of your death. There’s commentary on the Gospel of Basilides, and commentary on the commentary.

The library contains books filled with every possible combination of letters—one book has the letters M C V repeated from start to finish—rendering most books nonsense. Some residents diligently search for a perfect index of the library, but it’s a quixotic search. How could they distinguish the faithful catalogue of the library from the innumerable false ones?

Borges’ short story, The Library of Babel, is an eerie illustration of a problem we encounter every day: information overload. We live in an era where information, once scarce and expensive, has become a commodity. And while access to more information is a good thing, it often comes at the expense of having to sort through heaps of gibberish. In a way, we’re all living in the Library of Babel.

Or are we?

If you trace the history of information, from the first spoken languages to the Internet, you’ll notice that each time we invent something that spews more information into the world, we ingeniously respond by creating a system that organizes the new information. Contemporary critics rightfully complain about information overload—we’re suffocating from “Data Smog,” as author David Shenk puts it—but it’s simultaneously true that we’re living in an era of extreme organization. It’s never been easier to store, retrieve, and share information. Not even close.

Yet the ability to access the world’s knowledge with just a swipe and a click might come at a cost. What John Stuart Mill said of happiness—that it “was only to be attained by not making it the direct end”—also describes the nature of discovery. We tend to arrive at good ideas obliquely, as Financial Times writer John Kay puts it. That is, scientists and artists make discoveries when they’re contemplating something that is only vaguely related to their original question. It’s an overlooked aspect of the creative process that repeats itself—Archimedes in the bathtub, Darwin reading Malthus, Fleming experimenting with bacteria.

At this point, you might suspect that this essay is about the fundamental tradeoff between structure and serendipity. If we generate good ideas by welcoming a dose of unexpected encounters, then each time we organize information we risk impeding intellectual progress. Being productive and creative is about injecting the right dose of disorder and chaos into your daily routine, right?

The problem with this view is that it involves debating two abstractions. What’s at stake is not balancing the Apollonian with the Dionysian but answering a more concrete question: How do search interfaces influence search behavior?

This is where things get interesting. The field of information retrieval is based on a search model that we’ve inherited from the early days of computer science. That model assumes that retrieving information from a database involves going to a computer, searching the database, finding the document, and leaving. “It just wasn’t intuitive to imagine a cohesive information environment where people could search many databases at the same time,” Marcia Bates, Professor Emerita of Information Studies at UCLA, says.

Google was such a significant breakthrough because it indexed the World Wide Web and not just one database. It used an algorithm that ranked websites by the number and quality of inbound links instead of simply counting keywords. The logic of the algorithm, which Google co-founder Larry Page wryly called PageRank, is similar to the logic of academic citations: the quality of a paper is determined by how many times it has been cited.
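For readers who want that intuition in concrete form, here is a minimal, textbook-style sketch of link-based ranking in Python. The damping factor and iteration count are standard illustrative choices rather than details from the essay, and this is a toy model, nothing like Google’s production system.

```python
# Toy illustration of the citation-style logic described above: a page gains
# rank from the rank of the pages linking to it. Textbook PageRank with a
# damping factor of 0.85 and a fixed iteration count; not Google's system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                # Otherwise its rank flows along each outgoing link.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A page that is "cited" by well-cited pages ends up ranking highest.
example = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(example))
```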

Google has since improved search by incorporating slick new features such as Autocomplete, and it has gotten better at distinguishing the meaning of a query from the literal words it contains. Yet the difference between Google and Gerard Salton’s “SMART”, an early information retrieval system developed in the 1960s, is a difference of degree, not kind. In terms of organizing information online, we don’t need to worry about data smog. We need to replace an interface that’s over 50 years old.

When search experts and information scientists talk about the future of search, they talk about having “a space to explore” and the opportunity “to go in various directions,” as Anabel Quan-Haase, an Associate Professor of Information Science at the University of Western Ontario in London, Ontario, put it to me. This is not the simple idea that Google will get better at answering your questions. It’s the more groundbreaking hypothesis that in the future, Google (or a competitor) might help inspire a few questions of its own, too.

To understand the difference, I spoke with Tuukka Ruotsalo, who leads a team of researchers at the Helsinki Institute for Information Technology. Ruotsalo and his team completed a new search engine called SciNet a few years ago. “The project started from the idea that search has developed dramatically in the last few years but search interfaces have not,” Ruotsalo says. “We still type in keywords and get a list of documents. We are trying to help users recognize topics they’re interested in, and a big part of that is visualization.”

SciNet’s interface looks like a Copernican solar system. The searched word or phrase appears in the middle (“machine vision”) and related keywords and topics dot the periphery (“nano-technology,” “neural networks,” “artificial technology”). Users drag new keywords toward the middle across concentric circles; the closer the keywords are to the center, the more they influence the results, which are listed on an adjacent column. A simple color-coding system makes it easy for users to spot useful articles. Taken together, it’s a wonderful experience.
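SciNet’s actual ranking model isn’t described here, but the core interaction (the closer a keyword sits to the center, the more it influences the results) can be sketched with a simple distance-to-weight rule. Everything below, from the linear weighting to the function names and sample data, is an illustrative assumption, not SciNet’s code.

```python
# Illustrative only: a minimal sketch of the "closer to the center, the more
# influence" idea. The linear weighting and all names here are assumptions.

def keyword_weight(distance, radius=1.0):
    """Map a keyword's radial distance (0 = center) to a weight in [0, 1]."""
    return max(0.0, 1.0 - distance / radius)

def score_document(doc_text, keyword_positions):
    """Sum the weights of the user's keywords that appear in the document."""
    text = doc_text.lower()
    return sum(
        keyword_weight(distance)
        for keyword, distance in keyword_positions.items()
        if keyword.lower() in text
    )

# Dragging "neural networks" toward the center (a smaller distance) boosts
# documents that mention it.
positions = {"machine vision": 0.0, "neural networks": 0.3, "nanotechnology": 0.9}
docs = [
    "A survey of neural networks for machine vision",
    "Nanotechnology fabrication methods",
]
ranked = sorted(docs, key=lambda d: score_document(d, positions), reverse=True)
print(ranked)
```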

Ruotsalo said that SciNet is “not trying to beat Google,” emphasizing that his team “designed it to help scientific people find useful scientific information,” but they have since established Etsimo, a company that will explore a commercial version of SciNet. If conducting research is about finding material at the periphery, this is a promising development. SciNet may bring us closer to the next generation of search by making it more visual and dynamic.

As I spoke with Ruotsalo it became clear that the question of structure versus serendipity is misleading. If early IR systems were like hitting every red light down a long road, Google was the engineer who reprogrammed the lights to make the system work better. The future of search, breaking from this approach entirely, would play the role of an erudite driving buddy, stimulating the conversation at just the right moments. You’ll still take the shortest route, but now you’ll have a digital collaborator to help you think through your hunch. In this view, the history of search is best seen not as an ongoing attempt to organize information, but as an ongoing attempt to simulate a real conversation, where critical feedback and original ideas are exchanged, not just facts.

And yet, when we contemplate the future of search we tend to imagine ourselves caught in Borges’ library, frantically searching for a way to escape. This fear emerges each time information becomes easier and cheaper to produce and share. And while worrying about overload is not completely misguided—imagine living through the 18th century, when the number of books in print doubled from about 331,000,000 to 628,000,000—it ignores a much broader and more important trend.

Right now, search is a completely unimaginative experience, akin to hanging out with a dull accountant. If the creative mind is fundamentally dialectic, constantly questioning and scrutinizing itself, thriving from exchange and dialog, then search must become a collaborator, a process by which a singular idea emerges out of interaction.

We extract two kinds of information when we collaborate with other people, the explicit stuff and the nonverbal cues—the smiles and subtle gestures that “envelop nearly all human action,” as Nietzsche said. If those cues are essential to human communication—psychologists insist that speaking and listening are fundamentally nonverbal—then our latest innovations in search are impressive but relatively primitive.

When you reflect on the inevitable rise of voice recognition software, virtual reality, and artificial intelligence, it’s easy to see what the future of search will look like: more human.

The Administrative Nudge Versus the Cultural Jig

Last September, President Obama issued an executive order officially endorsing the nudge. The order represented a major victory for the behavioral science community. What was once an esoteric talking point amongst rogue economists has since become a widely popular research topic, ripe for application outside academia in domains such as public policy. It’s as if we’ve finally acknowledged something that a lot of people have known for a long time: that we are not mythical Econs who maximize utility but real Humans who sometimes screw up.

The question is: Does the government have the right to nudge? Critics argue that nudges undermine freedom and autonomy, even when they are disclosed or implemented with good intentions. Proponents point out that the government—along with other people and private businesses—nudges inevitably. The belief that we inhabit “neutral choice environments” is a myth.

The problem with this debate, like so many others, is that there are more than two sides. That false dichotomy brings me to Matthew Crawford’s The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. Crawford, a motorcycle fabricator with a Ph.D. in political philosophy from the University of Chicago, says that in addition to debating the ethics of nudging we need to also examine broader cultural forces, such as the introduction of credit in the early 20th century, that gave rise to nudges in the first place.

His argument begins with the jig, a tool familiar to metal- and woodworkers, which he defines as a “device or procedure that guides a repeated action by constraining the environment in such a way as to make the action go smoothly, the same each time.” If a carpenter needs to cut two-dozen boards to the same length, he won’t measure and cut each board separately. Rather, he will slide each board under a band saw until it hits the jig, ensuring a perfect cut each time.

At first glance, the nudge and the jig appear to be very similar. Both involve shifting the burden of attention from System 2 to System 1 by manipulating the environment. However, the jig typically involves skilled practitioners performing relatively tedious tasks while the nudge does not. If the jig engrains habits and refocuses attention, the nudge simply helps people make better decisions, usually without them even realizing it.

Consider arithmetic, a kind of mental jig. If you need to figure out the product of 18 and 12, you could multiply 18 by 10 to get 180, then multiply 18 by 2 to get 36, and then add 180 and 36 to get 216. Calculating the product of 911 and 356 in your head, however, would be nearly impossible. The solution is not to focus harder—the burden on working memory would be too much—but a different method altogether, one that breaks the problem down into a series of smaller calculations and relies on pencil and paper. “With this simple expedient,” Crawford says, “we vastly extend our intellectual capacities.”

Crawford’s essential insight involves the concept of a cultural jig. Much like a jig diffuses information through constraints in the environment, a cultural jig diffuses knowledge and practical wisdom through “linguistic, social, political, and institutional constraints.” A jig helps people cut, saw, and weld; a cultural jig helps people make judgments and decisions, typically by ingraining useful conventions and norms.

Crawford cites the concept of thrift in early America, which emerged in tandem with the Protestant ethic. According to this view, the goal was parsimonious spending over conspicuous consumption—accumulating wealth was an indication that your life was on track, not a signal that you could indulge. As Crawford puts it, “The debtor cannot speak frankly to the man he owes money to; he must make himself pleasing and hope for continued forbearance.”

The invention of consumer credit (and the subsequent norm that carrying debt was a normal aspect of adult life) dismantled thrift as a cultural jig. Crawford hypothesizes that today we are “very fat” and “very prone to divorce” because other cultural jigs also disintegrated. How? He points to the “liberating and deregulating efforts of the right and left,” which in turn dramatically increased the demand for self-regulation. We now stay out of debt, in shape, and in wedlock not through religion, social norms, or shared customs but something much less potent: human willpower.

This is the perspective Crawford uses to understand the nudge. He is not an opponent (he actually endorses the nudge) and he is wise enough to realize that even if government doesn’t nudge, corporations certainly will. His point, simply, is that “Getting people to save money through administrative nudges such as the opt-out 401(k) plan is best seen not as a remedy for our failure to be rational as individuals, but as an attempt to compensate for the dismantling of those cultural jigs we once relied on to act (and think and feel) in ways that support thrift.”

It’s worth taking this point seriously for at least two reasons. The first is that if government wants people to spend wisely, stay in shape, and stay married, it should focus on promoting the right cultural jigs, not just the right nudges. That’s a much broader debate about the role of government, of course, but it’s a debate worth remembering the next time you think about the ethics of nudging.

The second relates to a central theme of Crawford’s book: attention. Before he opened his motorcycle shop in Virginia, Crawford worked in a think tank in Washington D.C., where he was trained to “project an image of rationality without indulging too much in actual thinking.” He believes, correctly, that in a world containing more and more demands on attention, we risk forgoing the opportunity to fully submit ourselves to an activity, to see our efforts directly impact the world and to receive immediate feedback from it. He prescribes becoming absorbed in a skilled activity as a remedy for becoming intellectually bankrupt at work, insisting that “knowledge work” is much more cognitively monotonous than we think.

Many administrative nudges exist because demands on cognition have become so unrelenting that even easy tasks, such as answering an email, trigger tremendous bouts of stress. Don’t even think about someone actually taking time to change the default from “Yes” to “No.”

To be sure, the administrative nudge is not only a good idea but a smart way to save money and even a few lives. In 2013 a team from Columbia Business School led by Eric Johnson tested a nudge to help people sign up for health care plans that could save customers and taxpayers approximately $10 billion annually. Last year Melissa Knoll and a team at the Social Security Administration created a nudge that could mean a difference of tens of thousands of dollars for beneficiaries.

However, as you skim through the forms at the DMV, it’s worth pausing to pay attention—not just to the content of the form, but to how you pay attention in general. Crawford isn’t a Luddite yearning for simpler times. His goal is to suggest that some administrative nudges might be band-aids on a deeper cultural wound that is not healing, and could even be widening.

Does Reading Cognitive Bias Research Distort the Mind?

Over break I read The Invention of Science: A New History of the Scientific Revolution by the British historian David Wootton. Wootton writes that modern science was invented between 1572 (when the Danish astronomer Tycho Brahe saw a nova) and 1704 (when Isaac Newton published Opticks). A big part of the revolution was technological. The telescope, barometer, and printing press allowed people to study the world with more precision and share their findings with scholarly communities, such as the Royal Society. But, more importantly, the scientific revolution involved a new set of conceptual tools.

Take the concept of discovery, for instance. Until around the 16th century, most scholars believed that humanity’s greatest achievements were in the past—Aristotle, the preeminent source for all intellectual inquiry, still towered over European thought like a colossus, despite his deeply flawed ideas about the nature of the universe. When Columbus sailed to America in 1492, he did not use the word “discover” to describe what he had done because he was not familiar with the concept. After Amerigo Vespucci used the new word in 1504, it quickly spread into other European languages. Soon, intellectuals of the era were not only investigating the world in new ways; they were beginning to treat the world as something to be investigated.

I liked Wootton’s book because it helped me understand something I’ve noticed ever since the Nobel Prize-winning psychologist Daniel Kahneman published Thinking, Fast and Slow in 2011. Kahneman’s book is about the biases that distort judgment: how to identify them and what we can do to avoid them. In the traditional view, emotion is the enemy and people are thought to be generally rational, their thinking sound. Nearly four decades of decision-making research reveal a new perspective. Systematic biases not only undermine the idea that people are rational; they are also largely invisible to us. We are “blind to our blindness,” Kahneman says.

The deeper lesson to glean from this body of research is not just that our decisions are occasionally flawed—we’ve known about our mental foibles since at least the Greeks—or even that the conscious mind is convinced that it’s not flawed. It’s that if the conscious mind functions like a press secretary, someone who does not seek the truth but justifies intuitive judgments and glosses over its own shortcomings and delusions, then we should be very careful when we read a book like Thinking, Fast and Slow. Although it’s easy to grasp the idea that people deviate from the standards of rationality, it’s much harder to resist the belief that reading about this research will automatically help us think more clearly. Learning about judgment errors elicits a feeling of enlightenment, a sense that knowledge has been gained, and that feeling can further distort how we perceive the world.

We do not, in other words, absorb this body of research like a scientist studies the physical world. Ironically, we interpret decision-making mistakes in a way that makes us look good, often falling prey to the very biases that we are advised to avoid, such as overconfidence and the above-average effect. So while readers and students genuinely understand that they’re “biased” in a nominal sense, they conflate learning about how people make decisions with a real improvement in judgment. This is what happens when the object of inquiry is also the tool of inquiry, and when that tool is specifically designed to generate an unjustified sense of righteousness.

When I first noticed this paradox a few years ago, I had a hard time describing it. I used the clumsy term “confirmation bias bias” to describe how learning about decision-making mistakes engenders a myopic view of the mind, as if our persistent tendency to search for confirming evidence inevitably causes us to only view the mind in those same terms. Reading Wootton’s book helped me understand the broader insight. There is a difference between discovering something new, on the one hand, and that discovery changing the way we think, on the other. Much like Columbus discovered America without grasping the broader concept of “discovery,” the judgment and decision-making community has discovered new insights into how the mind makes decisions without considering how those insights affect the mind. Thinking, Fast and Slow is such an intriguing book because, by learning about the details of our intellectual hardware, we change them.

If learning about how our own mind screws up distorts judgment instead of improving it, then the question we should ask is not how the mind works, but what it means to have a mind in the first place. One irony of the scientific revolution is that we began to treat the mind as an object of scientific study, just like how Brahe and Newton treated novas and beams of light, even though unlike the physical world, it changes each time we examine and scrutinize it; that is, it changes because we examine and scrutinize it. And while we should rely on the scientific method to interpret everything in the natural world, we need to remember where that method was developed, and how that conflict of interest could lead us astray. As the writer and neurologist Robert Burton says, “Our brains possess involuntary mechanisms that make unbiased thought impossible yet create the illusion that we are rational creatures capable of fully understanding the mind created by these same mechanisms.”

So what is a mind? Nailing down that definition represents, I think, one of the central tasks of modern neuroscience. But, more importantly, it is a task that must inform cognitive psychology—if the goal is in fact to correct outdated assumptions about human nature.

The Invention of Science, a survey of how we made discoveries about the world and how those discoveries replaced the conceptual tools we used to perceive the world, is a lesson in intellectual humility. It’s a story about the persistent belief that we see the world as it is, on the one hand, and our willingness to test that belief, on the other. The purpose of this essay is to test the belief that you can use your mind to understand your mind, and I’d proceed cautiously if that test elicited a sense of enlightenment. We should expect nothing less from an organ that evolved to do just that.

Postscript

Daniel Kahneman often says that despite forty years in academia, he still falls for the very same biases that his research has revealed. He is pessimistic, and it might seem from this article that I am, too.

Wootton’s book explained how we began to not only study the physical world but also recognize that we don’t see it objectively. That is, we began to study the physical world because we recognized that we don’t see it objectively. The fact that we’re talking about biases here in the 21st century suggests that something has changed in the last few hundred years. We now dedicate large swaths of academic work to researching not only the physical world but also how reason can make us perceive the physical world incorrectly. The judgment and decision-making literature is a direct descendant of Descartes, who emphasized reason and reflection over sense experience. In an era when educated people believed in alchemy and witches but not germs, Descartes wanted to get people to improve their beliefs. We have.

More importantly, we’ve dramatically improved how we think, not just what we think. Consider Superforecasting, by Wharton psychologist Phil Tetlock. Tetlock is famous for publishing a longitudinal study a decade ago that measured “expert prediction.” He found that the professional pundits invited to take part in his study performed no better than chance when they made long-term forecasts about the future of political events, wars, etc. Tetlock’s new book documents an elite group of people, superforecasters, who have a remarkable track record of making accurate forecasts. They’re really good at answering questions like, “Will there be a terrorist attack in Europe between August 2016 and December 2016?” When Tetlock investigated what made these people so good, he did not find that they were smarter. He found that they possessed better mental tools, which is to say that they used basic probability and statistics to think about the future, avoided ideology, and encouraged dissonance. In short, they did a good job of correcting their biases.

I’ve become wary of judgment and decision-making: not the research itself, but the way people talk about it and the way it is reported online and in print. I suppose you could say that even though the JDM community has done a tremendous job explaining how people actually make decisions, it has not fulfilled its promise to explain “how the mind works,” as so many subtitles seem to suggest. My impetus for writing the article was to recommend that the JDM community be mindful of meta-biases and remember its relatively small role in the broader cognitive science pie, and the fact that cognitive science is so young. We still know so little about the mind and the brain, perhaps about as much as Columbus knew about the New World.

Ironically, it’s those catchy subtitles that got me into this fascinating body of knowledge in the first place.

The World Beyond Your Head: On Becoming an Individual in an Age of Distraction

The fundamental premise of improv theater is the “Yes, And” rule. While this rule is often subtle—improv actors rarely use those exact words—it can be used to prevent a scene from hitting an awkward dead end or taking a confusing detour. Once you accept the fact that you’re a drunken fraternity brother, an arthritic grandparent, or an unemployed model train collector, it’s much easier for the audience to believe your character. At that point, the humor comes naturally.

The “Yes, And” mantra has unfortunately evolved into an eye-rolling business cliché. Improv theaters from New York to Los Angeles regularly conduct corporate workshops to encourage employees to embrace their colleagues’ ideas rather than dispute or reject them. And in an era where it’s fashionable to talk about disrupting the status quo, these sessions have become very appealing to unimaginative middle managers charged with “making the company more innovative.” The essential insight of “Yes, And” lends itself almost too well to the sterile concept that there is no such thing as a bad idea.

I thought about “Yes, And” as I read The World Beyond Your Head: On Becoming an Individual in an Age of Distraction by Matthew Crawford, a senior fellow at the University of Virginia’s Institute for Advanced Studies in Culture.

In his first book, Shop Class as Soulcraft, Crawford—who worked at a think tank in Washington D.C. before opening a motorcycle repair shop in Virginia—writes about the difference between intellectual work and manual work, insisting that even though knowledge workers are hired to analyze information and generate original ideas, they spend most of their time performing cognitively monotonous tasks. “Knowledge work” is much more intellectually bankrupt than we might think.

In The World Beyond Your Head Crawford picks up on one theme from his first book—attention. Just as food engineers have carefully studied our taste buds to design snacks that deliver the perfect combination of sugar, salt, and fat—there is a reason why eating only one Oreo is so damn hard—corporations have carefully studied attention to design experiences, interfaces, games, ads, and apps that are impossible to resist. One of Crawford’s central claims, which emerged from his experience in the checkout line at a grocery store, where he was prompted with an ad between swiping his credit card and confirming the total, is not that businesses are constantly fighting for our attention. It’s that we risk losing the right not to be addressed.

Taken at face value, this idea might appear to be the tip of a broader inquiry into the perils of advertising and, to take an example from a Domino’s commercial I saw last week advertising 34 million different pizza combinations, the consequences of living in an era of choice overload. But it takes just a few paragraphs to notice that Crawford is scratching the surface of a much deeper question about the nature of the self and human flourishing. If we’re willing to forgo our right not to be addressed—not just by real people who speak to us face-to-face but by nameless corporations who don’t—then what will happen to the individual who gives that right away?

To appreciate Crawford’s concern, it helps to see how Enlightenment thinkers shaped the way we understand our relationship to the world. Very broadly, the Enlightenment project shifted the burden of epistemic responsibility from higher authorities to the individual. We became the ultimate source of knowledge, and reason evolved into an impersonal tool used to process information. Importantly, the standard of truth relocated from an outside source to our own heads. “We are to take a detached stance toward our own experience and subject it to critical analysis from a perspective that isn’t infected with our own subjectivity,” Crawford says.

This mental shift influenced an understanding of the brain that we are just beginning to overturn. In the old view, the brain passively processes sensory inputs and creates “representations” of the world. In this view, which was fueled not just by Enlightenment assumptions but the digital revolution, the brain is treated as a powerful all-purpose computer. The new view considers the obvious fact that a brain is connected to a neck, which is in turn connected to a body that moves through physical space. “We think through the body,” Crawford says. Perception is not something that happens. It’s something we do.

For Crawford, accepting this new perspective forces us to rethink how we engage the world with tools and activities. In his first book Crawford emphasized the experience of seeing a “direct effect of our actions in the world”—hitting a three-pointer, finishing a manuscript, fixing a broken diesel engine—and how the feeling of agency, a key component of well-being, arises when we submit to these activities. That’s why he describes knowledge work, which tends to strip away the feeling of agency, as “intellectually bankrupt.”

In his new book, Crawford emphasizes that submitting to these activities—activities that require skill and rely on immediate feedback from the world—is a potent source of pleasure and fulfillment because they blur the boundary between mind and environment. The hockey player’s stick becomes an extension of his cognition. The experienced motorcyclist, who pays close attention to subtle vibrations in his bike and the contours of the road, becomes part of the traffic. “To understand human cognition it is a mistake to focus only on what goes on inside the skull,” Crawford says. Perception is intimately bound up with action—that is, how we move in the world.

If enlightenment means embracing autonomy and thinking for oneself, Crawford is prescribing the opposite. To be absorbed in an activity is to rely on a higher authority—not a political authority, but a community of professionals who maintain the rules and uphold the history of their profession. I like golf because unlike office work, which can be maddeningly opaque, I can see the direct output of my skills. But there is something else going on that makes this frustrating sport so enjoyable. For four hours I subject myself to a notoriously strict set of rules and social norms that actually limit my autonomy. If you think about it, the condition of being subjected to an outside force—heteronomy—“brings out facets of experience that don’t fit easily into our Enlightenment framework,” Crawford says.

Crawford quotes the British philosopher Iris Murdoch, who said that when she is learning Russian she is “confronted by an authoritative structure which commands my respect.” A prerequisite for acquiring skills is trust in the broader community that appreciates those skills. “What it means to learn Russian is to become part of the community of Russian speakers, without whom there would be no such thing as ‘Russian.’”

The World Beyond Your Head is not a book about technology written by a frustrated Luddite. It’s a book about what we should give our attention to, and therefore what we should value, written by a blue-collar academic. Crawford isn’t trying to persuade you to stop checking email and smell the roses. His project is much deeper than that. He is urging us to pay more attention to how we pay attention.

Think back to “Yes, And.” I wrote that it is “unfortunate” that this rule has become an eye-rolling cliché not to tease chubby middle managers but to remind them that the purpose of the rule is not only about agreement. It’s about teaching improv actors to pay attention. After all, the best way to undermine good improv is to worry about what you’re going to say next. As soon as you direct your attention inward and frantically scrutinize the contents of your own thoughts, you by default ignore everything that’s happening on stage—the actors, the story, and the general flow of each scene. But, if you’re willing to redirect your attention outward and give yourself to the contents of the stage, you’ll be in a much better position to contribute to the story as it unfolds.

Although the difference between these two perspectives might only seem relevant to improvisers, it highlights a central question Crawford poses in his book. Should we think of ourselves as independent subjects within an “objective,” neutral environment? Or are we fundamentally situational, constantly bound up in and shaped by context? Even though the feeling of being an autonomous self as distinct from the environment is convincing, the second view is more consistent with the evidence. Our thinking, scientists now believe to varying degrees, arises from how the brain interacts with the world and not just how it processes information from the world. That is, our involvement with an object, another person, or an improv performance is influenced by the agenda we bring and what we learn along the way.

In an era where businesses diligently fight for the spotlight of our attention by putting shiny objects in front of it, we risk forgoing the opportunity to fully submit ourselves to an activity. I like philosophy because it makes my inner monologues seem less boring. That’s why, for example, I enjoy riding the elevator and eating lunch in a quiet restaurant. They provide space to think and, more importantly, a chance to become competent in a discipline.

And yet, I’ve noticed that in moments of low-level anxiety, instead of immersing ourselves in the current situation—the real thing—we turn to our devices and engage the world through a representation. We prefer, in other words, a quick digital binge to a moment of reverie. I suppose that’s why corporate improv classes have become so popular. We go to learn how to pay attention to the world beyond our heads.