The Marginalian

This Explains Everything: 192 Thinkers on the Most Elegant Theory of How the World Works

Every year since 1998, intellectual impresario and Edge editor John Brockman has been posing a single grand question to some of our time’s greatest thinkers across a wide spectrum of disciplines, then collecting the answers in an annual anthology. Last year’s answers to the question “What scientific concept will improve everybody’s cognitive toolkit?” were released in This Will Make You Smarter: New Scientific Concepts to Improve Your Thinking, one of the year’s best psychology and philosophy books.

In 2012, the question Brockman posed, suggested by none other than Steven Pinker, was “What is your favorite deep, elegant, or beautiful explanation?” The answers, representing an eclectic mix of 192 (alas, overwhelmingly male) minds spanning psychology, quantum physics, social science, political theory, philosophy, and more, are collected in the edited compendium This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works (UK; public library) and are also available online.

In the introduction preceding the micro-essays, Brockman frames the question and its ultimate objective, adding to history’s most timeless definitions of science:

The ideas presented on Edge are speculative; they represent the frontiers in such areas as evolutionary biology, genetics, computer science, neurophysiology, psychology, cosmology, and physics. Emerging out of these contributions is a new natural philosophy, new ways of understanding physical systems, new ways of thinking that call into question many of our basic assumptions.

[…]

Perhaps the greatest pleasure in science comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way. These explanations are called ‘beautiful’ or ‘elegant.’

[…]

The contributions presented here embrace scientific thinking in the broadest sense: as the most reliable way of gaining knowledge about anything — including such fields of inquiry as philosophy, mathematics, economics, history, language, and human behavior. The common thread is that a simple and nonobvious idea is proposed as the explanation of a diverse and complicated set of phenomena.

Stanford neuroscientist Robert Sapolsky, eloquent as ever, marvels at the wisdom of the crowd and the emergence of swarm intelligence:

Observe a single ant, and it doesn’t make much sense, walking in one direction, suddenly careening in another for no obvious reason, doubling back on itself. Thoroughly unpredictable.

The same happens with two ants, a handful of ants. But a colony of ants makes fantastic sense. Specialized jobs, efficient means of exploiting new food sources, complex underground nests with temperature regulated within a few degrees. And critically, there’s no blueprint or central source of command—each individual ant has algorithms for its behaviors. But this is not wisdom of the crowd, where a bunch of reasonably informed individuals outperform a single expert. The ants aren’t reasonably informed about the big picture. Instead, the behavior algorithms of each ant consist of a few simple rules for interacting with the local environment and local ants. And out of this emerges a highly efficient colony.

Ant colonies excel at generating trails that connect locations in the shortest possible way, accomplished with simple rules about when to lay down a pheromone trail and what to do when encountering someone else’s trail—approximations of optimal solutions to the Traveling Salesman problem. This has useful applications. In “ant-based routing,” simulations using virtual ants with similar rules can generate optimal ways of connecting the nodes in a network, something of great interest to telecommunications companies. It applies to the developing brain, which must wire up vast numbers of neurons with vaster numbers of connections without constructing millions of miles of connecting axons. And migrating fetal neurons generate an efficient solution with a different version of ant-based routing.

A wonderful example is how local rules about attraction and repulsion (i.e., positive and negative charges) allow simple molecules in an organic soup to occasionally form more complex ones. Life may have originated this way without the requirement of bolts of lightning to catalyze the formation of complex molecules.

And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blueprint, they don’t require a blueprint maker. If they don’t require lightning bolts, they don’t require Someone hurling lightning bolts.
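
Sapolsky’s description of trail-laying rules that approximate solutions to the Traveling Salesman problem is the intuition behind ant-colony optimization, the technique used in “ant-based routing.” Below is a minimal illustrative sketch (the four-edge network, the evaporation rate, and the pheromone rule are all invented assumptions, not any production routing algorithm): virtual ants pick edges in proportion to pheromone strength and inverse distance, and the shorter route ends up with the stronger trail.

```python
# A minimal sketch of "ant-based routing": virtual ants reinforce short paths
# with pheromone, and the colony converges on the shortest route.
# The graph, parameters, and constants here are illustrative assumptions only.
import random

# Edges of a small network with distances (node pairs are symmetric).
edges = {("A", "B"): 2.0, ("B", "D"): 2.0,   # short route: A-B-D, length 4
         ("A", "C"): 3.0, ("C", "D"): 3.0}   # long route:  A-C-D, length 6
pheromone = {e: 1.0 for e in edges}

def neighbors(node):
    return [(a, b) for (a, b) in edges if a == node or b == node]

def walk(start, goal):
    """One ant walks from start to goal, choosing edges in proportion
    to pheromone / distance (greedier toward strong, short trails)."""
    node, path, visited = start, [], {start}
    while node != goal:
        options = [e for e in neighbors(node)
                   if (e[0] if e[1] == node else e[1]) not in visited]
        if not options:
            return None                       # dead end; this ant gives up
        weights = [pheromone[e] / edges[e] for e in options]
        e = random.choices(options, weights)[0]
        node = e[0] if e[1] == node else e[1]
        visited.add(node)
        path.append(e)
    return path

for _ in range(200):                          # release 200 ants
    path = walk("A", "D")
    if path is None:
        continue
    length = sum(edges[e] for e in path)
    for e in pheromone:                       # evaporation keeps trails fresh
        pheromone[e] *= 0.95
    for e in path:                            # shorter paths get more pheromone
        pheromone[e] += 1.0 / length

print(sorted(pheromone.items(), key=lambda kv: -kv[1]))
# The heavily reinforced edges are A-B and B-D: the colony has "found" the
# shortest path using only local rules, exactly as Sapolsky describes.
# No ant ever sees the whole map.
```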

Developmental psychologist Howard Gardner, who famously developed the seminal theory of multiple intelligences, echoes Anaïs Nin in advocating for the role of the individual and Susan Sontag in stressing the impact of individual acts on collective fate. His answer, arguing for the importance of human beings, comes as a welcome antidote to a question that risks being inherently reductionist:

On a planet now occupied by seven billion inhabitants, I am amazed by the difference that one human being can make. Think of classical music without Mozart or Stravinsky; of painting without Caravaggio, Picasso or Pollock; of drama without Shakespeare or Beckett. Think of the incredible contributions of Michelangelo or Leonardo, or, in recent times, the outpouring of deep feeling at the death of Steve Jobs (or, for that matter, Michael Jackson or Princess Diana). Think of human values in the absence of Moses or Christ.

[…]

Despite the laudatory efforts of scientists to ferret out patterns in human behavior, I continue to be struck by the impact of single individuals, or of small groups, working against the odds. As scholars, we cannot and should not sweep these instances under the investigative rug. We should bear in mind anthropologist Margaret Mead’s famous injunction: ‘Never doubt that a small group of thoughtful committed citizens can change the world. It is the only thing that ever has.’

Uber-curator Hans Ulrich Obrist, who also contributed to last year’s volume, considers the parallel role of patterns and chance in the works of iconic composer John Cage and painter Gerhard Richter, and the role of uncertainty in the creative process:

In art, the title of a work can often be its first explanation. And in this context I am thinking especially of the titles of Gerhard Richter. In 2006, when I visited Richter in his studio in Cologne, he had just finished a group of six corresponding abstract paintings to which he gave the title Cage.

There are many relations between Richter’s painting and the compositions of John Cage. In a book about the Cage series, Robert Storr has traced them from Richter’s attendance at a Cage performance at the Festum Fluxorum Fluxus in Düsseldorf in 1963 to analogies in their artistic processes. Cage has often applied chance procedures in his compositions, notably with the use of the I Ching. Richter in his abstract paintings also intentionally allows effects of chance. In these paintings, he applies the oil paint on the canvas by means of a large squeegee. He selects the colors on the squeegee, but the factual trace that the paint leaves on the canvas is to a large extent the outcome of chance.

[…]

Richter’s concise title, Cage, can be unfolded into an extensive interpretation of these abstract paintings (and of other works)—but, one can say, the short form already contains everything. The title, like an explanation of a phenomenon, unlocks the works, describing their relation to one of the most important cultural figures of the twentieth century, John Cage, who shares with Richter the great themes of chance and uncertainty.

Writer, artist, and designer Douglas Coupland, whose biography of Marshall McLuhan remains indispensable, offers a lyrical meditation on the peculiar odds behind coincidences and déjà vus:

I take comfort in the fact that there are two human moments that seem to be doled out equally and democratically within the human condition—and that there is no satisfying ultimate explanation for either. One is coincidence, the other is déjà vu. It doesn’t matter if you’re Queen Elizabeth, one of the thirty-three miners rescued in Chile, a South Korean housewife or a migrant herder in Zimbabwe—in the span of 365 days you will pretty much have two déjà vus as well as one coincidence that makes you stop and say, “Wow, that was a coincidence.”

The thing about coincidence is that when you imagine the umpteen trillions of coincidences that can happen at any given moment, the fact is that, in practice, coincidences almost never do occur. Coincidences are actually so rare that when they do occur they are, in fact, memorable. This suggests to me that the universe is designed to ward off coincidence whenever possible—the universe hates coincidence—I don’t know why—it just seems to be true. So when a coincidence happens, that coincidence had to work awfully hard to escape the system. There’s a message there. What is it? Look. Look harder. Mathematicians perhaps have a theorem for this, and if they do, it might, by default, be a theorem for something larger than what they think it is.

What’s both eerie and interesting to me about déjà vus is that they occur almost like metronomes throughout our lives, about one every six months, a poetic timekeeping device that, at the very least, reminds us we are alive. I can safely assume that my thirteen-year-old niece, Stephen Hawking and someone working in a Beijing luggage-making factory each experience two déjà vus a year. Not one. Not three. Two.

The underlying biodynamics of déjà vus is probably ascribable to some sort of tingling neurons in a certain part of the brain, yet this doesn’t tell us why they exist. They seem to me to be a signal from a larger point of view that wants to remind us that our lives are distinct, that they have meaning, and that they occur throughout a span of time. We are important, and what makes us valuable to the universe is our sentience and our curse and blessing of perpetual self-awareness.

MIT social scientist Sherry Turkle, author of the cyber-dystopian Alone Together: Why We Expect More from Technology and Less from Each Other, considers the role of “transitional objects” in our relationship with technology:

I was a student in psychology in the mid-1970s at Harvard University. The grand experiment that had been “Social Relations” at Harvard had just crumbled. Its ambition had been to bring together the social sciences in one department, indeed, most in one building, William James Hall. Clinical psychology, experimental psychology, physical and cultural anthropology, and sociology, all of these would be in close quarters and intense conversation.

But now, everyone was back in their own department, on their own floor. From my point of view, what was most difficult was that the people who studied thinking were on one floor and the people who studied feeling were on another.

In this Balkanized world, I took a course with George Goethals in which we learned about the passion in thought and the logical structure behind passion. Goethals, a psychologist who specialized in adolescence, was teaching a graduate seminar in psychoanalysis. … Several classes were devoted to the work of [Donald] Winnicott and his notion of the transitional object. Winnicott called transitional the objects of childhood—the stuffed animals, the bits of silk from a baby blanket, the favorite pillows—that the child experiences as both part of the self and of external reality. Winnicott writes that such objects mediate between the child’s sense of connection to the body of the mother and a growing recognition that he or she is a separate being. The transitional objects of the nursery—all of these are destined to be abandoned. Yet, says Winnicott, they leave traces that will mark the rest of life. Specifically, they influence how easily an individual develops a capacity for joy, aesthetic experience, and creative playfulness. Transitional objects, with their joint allegiance to self and other, demonstrate to the child that objects in the external world can be loved.

[…]

Winnicott believes that during all stages of life we continue to search for objects we experience as both within and outside the self. We give up the baby blanket, but we continue to search for the feeling of oneness it provided. We find them in moments of feeling “at one” with the world, what Freud called the “oceanic feeling.” We find these moments when we are at one with a piece of art, a vista in nature, a sexual experience.

As a scientific proposition, the theory of the transitional object has its limitations. But as a way of thinking about connection, it provides a powerful tool for thought. Most specifically, it offered me a way to begin to understand the new relationships that people were beginning to form with computers, something I began to study in the late 1970s and early 1980s. From the very beginning, as I began to study the nascent digital culture, I could see that computers were not “just tools.” They were intimate machines. People experienced them as part of the self, separate but connected to the self.

[…]

When, in the late 1970s, I began to study the computer’s special evocative power, my time with George Goethals and the small circle of Harvard graduate students immersed in Winnicott came back to me. Computers served as transitional objects. They bring us back to the feelings of being “at one” with the world. Musicians often hear the music in their minds before they play it, experiencing the music from within and without. The computer similarly can be experienced as an object on the border between self and not-self. Just as musical instruments can be extensions of the mind’s construction of sound, computers can be extensions of the mind’s construction of thought.

This way of thinking about the computer as an evocative object puts us on the inside of a new inside joke. For when psychoanalysts talked about object relations, they had always been talking about people. From the beginning, people saw computers as “almost-alive” or “sort of alive.” With the computer, object relations psychoanalysis can be applied to, well, objects. People feel at one with video games, with lines of computer code, with the avatars they play in virtual worlds, with their smartphones. Classical transitional objects are meant to be abandoned, their power recovered in moments of heightened experience. When our current digital devices—our smartphones and cellphones—take on the power of transitional objects, a new psychology comes into play. These digital objects are never meant to be abandoned. We are meant to become cyborg.

Anthropologist Scott Atran considers the role of the absurd in religion and cause-worship, and the Beckett-like notion of the “ineffable”:

The notion of a transcendent force that moves the universe or history or determines what is right and good—and whose existence is fundamentally beyond reason and immune to logical or empirical disproof—is the simplest, most elegant, and most scientifically baffling phenomenon I know of. Its power and absurdity perturb mightily, and merit careful scientific scrutiny. In an age where many of the most volatile and seemingly intractable conflicts stem from sacred causes, scientific understanding of how to best deal with the subject has also never been more critical.

Call it love of Group or God, or devotion to an Idea or Cause, it matters little in the end. This is “the privilege of absurdity; to which no living creature is subject, but man only” of which Hobbes wrote in Leviathan. In The Descent of Man, Darwin cast it as the virtue of “morality,” with which winning tribes are better endowed in history’s spiraling competition for survival and dominance. Unlike other creatures, humans define the groups to which they belong in abstract terms. Often they strive to achieve a lasting intellectual and emotional bonding with anonymous others, and seek to heroically kill and die, not in order to preserve their own lives or those of people they know, but for the sake of an idea—the conception they have formed of themselves, of “who we are.”

[…]

There is an apparent paradox that underlies the formation of large-scale human societies. The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today’s nations, transnational movements, and other “imagined communities” of fictive kin — seems to depend upon what Kierkegaard deemed this “power of the preposterous” … Humankind’s strongest social bonds and actions, including the capacity for cooperation and forgiveness, and for killing and allowing oneself to be killed, are born of commitment to causes and courses of action that are “ineffable,” that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for costs and consequences. The more materially inexplicable one’s devotion and commitment to a sacred cause — that is, the more absurd—the greater the trust others place in it and the more that trust generates commitment on their part.

[…]

Religion and the sacred, banned so long from reasoned inquiry by ideological bias of all persuasions—perhaps because the subject is so close to who we want or don’t want to be — is still a vast, tangled and largely unexplored domain for science, however simple and elegant for most people everywhere in everyday life.

Psychologist Timothy Wilson, author of the excellent Redirect: The Surprising New Science of Psychological Change, explores the Möbius loop of self-perception and behavior:

My favorite is the idea that people become what they do. This explanation of how people acquire attitudes and traits dates back to the philosopher Gilbert Ryle, but was formalized by the social psychologist Daryl Bem in his self-perception theory. People draw inferences about who they are, Bem suggested, by observing their own behavior.

Self-perception theory turns common wisdom on its head. … Hundreds of experiments have confirmed the theory and shown when this self-inference process is most likely to operate (e.g., when people believe they freely chose to behave the way they did, and when they weren’t sure at the outset how they felt).

Self-perception theory is elegant in its simplicity. But it is also quite deep, with important implications for the nature of the human mind. Two other powerful ideas follow from it. The first is that we are strangers to ourselves. After all, if we knew our own minds, why would we need to guess what our preferences are from our behavior? If our minds were an open book, we would know exactly how honest we are and how much we like lattes. Instead, we often need to look to our behavior to figure out who we are. Self-perception theory thus anticipated the revolution in psychology in the study of human consciousness, a revolution that revealed the limits of introspection.

But it turns out that we don’t just use our behavior to reveal our dispositions—we infer dispositions that weren’t there before. Often, our behavior is shaped by subtle pressures around us, but we fail to recognize those pressures. As a result, we mistakenly believe that our behavior emanated from some inner disposition.

Harvard physician and social scientist Nicholas Christakis, who also appeared in Brockman’s Culture: Leading Scientists Explore Societies, Art, Power, and Technology, offers one of the most poetic answers, tracing the history of our understanding of why the sky is blue:

My favorite explanation is one that I sought as a boy. It is the explanation for why the sky is blue. It’s a question every toddler asks, but it is also one that most great scientists since the time of Aristotle, including da Vinci, Newton, Kepler, Descartes, Euler, and even Einstein, have asked.

One of the things I like most about this explanation—beyond the simplicity and overtness of the question itself—is how long it took to arrive at it correctly, how many centuries of effort, and how many branches of science it involves.

Aristotle is the first, so far as we know, to ask the question about why the sky is blue, in the treatise On Colors; his answer is that the air close at hand is clear and the deep air of the sky is blue the same way a thin layer of water is clear but a deep well of water looks black. This idea was still being echoed in the 13th century by Roger Bacon. Kepler too reinvented a similar explanation, arguing that the air merely looks colorless because the tint of its color is so faint when in a thin layer. But none of them offered an explanation for the blueness of the atmosphere. So the question actually has two, related parts: why the sky has any color, and why it has a blue color.

[…]

The sky is blue because the incident light interacts with the gas molecules in the air in such a fashion that more of the light in the blue part of the spectrum is scattered, reaching our eyes on the surface of the planet. All the frequencies of the incident light can be scattered this way, but the high-frequency (short wavelength) blue is scattered more than the lower frequencies in a process known as Rayleigh scattering, described in the 1870s. John William Strutt, Lord Rayleigh, who also won the Nobel Prize in physics in 1904 for the discovery of argon, demonstrated that, when the wavelength of the light is much larger than the size of the gas molecules, the intensity of scattered light varies inversely with the fourth power of its wavelength. Shorter wavelengths like blue (and violet) are scattered more than longer ones. It’s as if all the molecules in the air preferentially glow blue, which is what we then see everywhere around us.

Yet the sky should appear violet, since violet light is scattered even more than blue light. But the sky does not appear violet to us because of the final, biological part of the puzzle, which is the way our eyes are designed: they are more sensitive to blue than to violet light.

The explanation for why the sky is blue involves so much of the natural sciences: the colors within the visual spectrum, the wave nature of light, the angle at which sunlight hits the atmosphere, the mathematics of scattering, the size of nitrogen and oxygen molecules, and even the way human eyes perceive color. It’s most of science in a question that a young child can ask.
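
The fourth-power law Christakis cites is easy to make concrete with a few lines of arithmetic. Here is a back-of-the-envelope sketch (the wavelength figures are standard approximate values for visible light; the comparison is simply the Rayleigh ratio):

```python
# Rayleigh scattering: intensity of scattered light ~ 1 / wavelength^4.
# Compare how strongly the air scatters violet, blue, and red sunlight.
wavelengths_nm = {"violet": 400, "blue": 450, "red": 650}

red = wavelengths_nm["red"] ** -4
for color, wl in wavelengths_nm.items():
    ratio = (wl ** -4) / red
    print(f"{color:>6}: scattered ~{ratio:.1f}x as strongly as red")

# violet: ~7.0x, blue: ~4.4x, red: 1.0x -- short wavelengths dominate the
# scattered light, which is why the sky looks blue (and would look violet,
# were our eyes not less sensitive to violet light).
```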

Nature editor-in-chief Philip Campbell considers the beauty of a sunrise, echoing Richard Feynman’s thoughts on science and mystery and Denis Dutton’s evolutionary theory of beauty:

Scientific understanding enhances rather than destroys nature’s beauty. All of these explanations for me contribute to the beauty in a sunrise.

Ah, but what is the explanation of beauty? Brain scientists grapple with nuclear-magnetic resonance images—a recent meta-analysis indicated that all of our aesthetic judgements seem to include the use of neural circuits in the right anterior insula, an area of the cerebral cortex typically associated with visceral perception. Perhaps our sense of beauty is a by-product of the evolutionary maintenance of the senses of belonging and of disgust. For what it’s worth, as exoplanets pour out of our telescopes, I believe that we will encounter astrochemical evidence for some form of extraterrestrial organism well before we achieve a deep, elegant or beautiful explanation of human aesthetics.

But my favorite essay comes from social media researcher and general genius Clay Shirky, author of Cognitive Surplus: Creativity and Generosity in a Connected Age, who considers the propagation of ideas in culture and the problems with Richard Dawkins’s notion of the meme in a context of combinatorial creativity:

Something happens to keep one group of people behaving in a certain set of ways. In the early 1970s, both E.O. Wilson and Richard Dawkins noticed that the flow of ideas in a culture exhibited similar patterns to the flow of genes in a species—high flow within the group, but sharply reduced flow between groups. Dawkins’ response was to assume a hypothetical unit of culture called the meme, though he also made its problems clear—with genetic material, perfect replication is the norm, and mutations rare. With culture, it is the opposite — events are misremembered and then misdescribed, quotes are mangled, even jokes (pure meme) vary from telling to telling. The gene/meme comparison remained, for a generation, an evocative idea of not much analytic utility.

Dan Sperber has, to my eye, cracked this problem. In a slim, elegant volume of 15 years ago with the modest title Explaining Culture, he outlined a theory of culture as the residue of the epidemic spread of ideas. In this model, there is no meme, no unit of culture separate from the blooming, buzzing confusion of transactions. Instead, all cultural transmission can be reduced to one of two types: making a mental representation public, or internalizing a mental version of a public presentation. As Sperber puts it, “Culture is the precipitate of cognition and communication in a human population.”

Sperber’s two primitives—externalization of ideas, internalization of expressions—give us a way to think of culture not as a big container people inhabit, but rather as a network whose traces, drawn carefully, let us ask how the behaviors of individuals create larger, longer-lived patterns. Some public representations are consistently learned and then re-expressed and re-learned—Mother Goose rhymes, tartan patterns, and peer review have all survived for centuries. Others move from ubiquitous to marginal in a matter of years. …

[…]

This is what is so powerful about Sperber’s idea: culture is a giant, asynchronous network of replication, ideas turning into expressions which turn into other, related ideas. … Sperber’s idea also suggests increased access to public presentation of ideas will increase the dynamic range of culture overall.
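
Shirky’s contrast between high-fidelity genes and lossy retellings can be made concrete with a toy simulation in the spirit of Sperber’s “cultural attractors.” The sketch below is purely illustrative (the attractor position, noise level, and pull strength are invented parameters; Sperber’s account is qualitative, not this equation): every transmission mangles a version of an idea, every mind reconstructs it toward a form it finds natural, and a stable cultural form emerges without a single perfect copy.

```python
# A toy sketch of Sperber's "epidemiology of representations": copying is
# never exact (unlike genes), yet culture stays stable, because each
# internalization is biased toward a cognitive "attractor".
# All numbers here are invented for illustration.
import random

ATTRACTOR = 0.8   # the version of the idea minds find most natural
NOISE = 0.3       # how badly each retelling mangles the expression
PULL = 0.5        # how strongly reconstruction drifts toward the attractor

def retell(version):
    """Externalize, then internalize: a noisy copy, nudged toward the attractor."""
    heard = version + random.gauss(0, NOISE)      # mangled in transmission
    return heard + PULL * (ATTRACTOR - heard)     # reconstructed by the mind

# Start from wildly different versions of the "same" idea...
population = [random.uniform(0, 1) for _ in range(1000)]
for generation in range(50):
    population = [retell(random.choice(population)) for _ in population]

mean = sum(population) / len(population)
print(f"after 50 generations, versions cluster near {mean:.2f}")
# ...and the population converges near the attractor (0.8): stable culture
# with no perfectly replicated "meme" anywhere in the system.
```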

Characteristically thought-provoking and reliably cross-disciplinary, This Explains Everything is a must-read in its entirety.


Published January 22, 2013

https://www.themarginalian.org/2013/01/22/this-explains-everything-brockman-edge-question/
