Brain Pickings


11 FEBRUARY, 2014

Big Thinkers on the Only Things Worth Worrying About


A cross-disciplinary kaleidoscope of intelligent concerns for the self and the species.

In his famous and wonderfully heartening letter of fatherly advice, F. Scott Fitzgerald gave his young daughter Scottie a list of things to worry and not worry about in life. Among the unworriables, he named popular opinion, the past, the future, triumph, and failure “unless it comes through your own fault.” Among the worry-worthy, courage, cleanliness, and efficiency. What Fitzgerald touched on, of course, is the quintessential anxiety of the human condition, which drives us to worry about things big and small, mundane and monumental, often confusing the two classes. It was this “worryability” that young Italo Calvino resolved to shake from his life. A wonderful 1934 book classified all of our worries in five general categories that endure with astounding prescience and precision, but we still struggle to identify the things truly worth worrying about — and, implicitly, working to resolve — versus those that only strain our psychoemotional capacity with the deathly grip of anxiety.

'My Wheel of Worry' by Andrew Kuo, depicting his inner worries, arguments, counterarguments, and obsessions in the form of charts and graphs.


In What Should We Be Worried About? (public library), intellectual jockey and Edge founder John Brockman tackles this issue with his annual question — which has previously addressed such conundrums as the single most elegant theory of how the world works (2012) and the best way to make ourselves smarter (2011) — and asks some of our era’s greatest thinkers in science, psychology, technology, philosophy, and more to each contribute one valid “worry” about our shared future. Rather than alarmist anxiety-slinging, however, the ethos of the project is quite the opposite — to put in perspective the things we worry about but shouldn’t, whether by our own volition or thanks to ample media manipulation, and contrast them with issues of actual concern, at which we ought to aim our collective attention and efforts in order to ensure humanity’s progress and survival.

Behavioral neuroscientist Kate Jeffery offers one of the most interesting answers, reminiscent of Alan Watts’s assertion that “without birth and death … the world would be static, rhythm-less, undancing, mummified,” exploring our mortality paradox and pointing to the loss of death as a thing to worry about:

Every generation our species distills the best of itself, packages it up and passes it on, shedding the dross and creating a fresher, newer, shinier generation. We have been doing this now for four billion years, and in doing so have transmogrified from unicellular microorganisms that do little more than cling to rocks and photosynthesize, to creatures of boundless energy and imagination who write poetry, make music, love each other and work hard to decipher the secrets of themselves and their universe.

And then they die.

Death is what makes this cyclical renewal and steady advance in organisms possible. Discovered by living things millions of years ago, aging and death permit a species to grow and flourish. Because natural selection ensures that the child-who-survives-to-reproduce is better than the parent (albeit infinitesimally so, for that is how evolution works), it is better for many species that the parent step out of the way and allow its (superior) child to succeed in its place. Put more simply, death stops a parent from competing with its children and grandchildren for the same limited resources. So important is death that we have, wired into our genes, a self-destruct senescence program that shuts down operations once we have successfully reproduced, so that we eventually die, leaving our children—the fresher, newer, shinier versions of ourselves—to carry on with the best of what we have given them: the best genes, the best art, and the best ideas. Four billion years of death has served us well.

Now, all this may be coming to an end, for one of the things we humans, with our evolved intelligence, are working hard at is trying to eradicate death. This is an understandable enterprise, for nobody wants to die—genes for wanting to die rarely last long in a species. For millennia, human thinkers have dreamed of conquering old age and death: the fight against it permeates our art and culture, and much of our science. We personify death as a specter and loathe it, fear it and associate it with all that is bad in the world. If we could conquer it, how much better life would become.

Celebrated filmmaker Terry Gilliam leans toward the philosophical with an answer somewhere between John Cage and Yoda:

I’ve given up asking questions. I merely float on a tsunami of acceptance of anything life throws at me… and marvel stupidly.

Music pioneer Brian Eno, a man of strong opinions on art and unconventional approaches to creativity, is concerned that we see politics, a force that impacts our daily lives on nearly every level, as something other people do:

Most of the smart people I know want nothing to do with politics. We avoid it like the plague — like Edge avoids it, in fact. Is this because we feel that politics isn’t where anything significant happens? Or because we’re too taken up with what we’re doing, be it Quantum Physics or Statistical Genomics or Generative Music? Or because we’re too polite to get into arguments with people? Or because we just think that things will work out fine if we let them be — that The Invisible Hand or The Technosphere will mysteriously sort them out?

Whatever the reasons for our quiescence, politics is still being done — just not by us. It’s politics that gave us Iraq and Afghanistan and a few hundred thousand casualties. It’s politics that’s bleeding the poorer nations for the debts of their former dictators. It’s politics that allows special interests to run the country. It’s politics that helped the banks wreck the economy. It’s politics that prohibits gay marriage and stem cell research but nurtures Gaza and Guantanamo.

But we don’t do politics. We expect other people to do it for us, and grumble when they get it wrong. We feel that our responsibility stops at the ballot box, if we even get that far. After that we’re as laissez-faire as we can get away with.

What worries me is that while we’re laissez-ing, someone else is faire-ing.

Barbara Strauch, science editor of The New York Times, echoes Richard Feynman’s lament about the general public’s scientific ignorance — not the good kind, but the kind that leads to the resurgence of preventable diseases — as well as the dismal state of science education. She sees oases of hope in that desert of ignorance but finds the disconnect worrisome:

Something quite serious has been lost. . . . This decline in general-interest science coverage comes at a time of divergent directions in the general public. At one level, there seems to be increasing ignorance. After all, it’s not just science news coverage that has suffered, but also the teaching of science in schools. And we just went through a political season that saw how all this can play out, with major political figures spouting off one silly statement after another, particularly about women’s health. . . .

But something else is going on, as well. Even as we have in some pockets what seems like increasing ignorance of science, we have at the same time, a growing interest of many. It’s easy to see, from where I sit, how high that interest is. Articles about anything scientific, from the current findings in human evolution to the latest rover landing on Mars, not to mention new genetic approaches to cancer — and yes, even the Higgs boson — zoom to the top of our newspaper’s most emailed list.

We know our readers love science and cannot get enough of it. And it’s not just our readers. As the rover Curiosity approached Mars, people of all ages in all parts of the country had “Curiosity parties” to watch news of the landing. Mars parties! Social media, too, has shown us how much interest there is across the board, with YouTube videos and tweets on science often becoming instant megahits.

So what we have is a high interest and a lot of misinformation floating around. And we have fewer and fewer places that provide real information to a general audience that is understandable, at least by those of us who do not yet have our doctorates in astrophysics. The disconnect is what we should all be worried about.

Nicholas Carr, author of the techno-dystopian The Shallows: What the Internet Is Doing to Our Brains, considers the effects that digital communication might be having on our intricate internal clocks and the strange ways in which our brains warp time:

I’m concerned about time — the way we’re warping it and it’s warping us. Human beings, like other animals, seem to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones, and we can still make pretty good estimates about time intervals. But that faculty can also be easily distorted. Our perception of time is subjective; it changes with our circumstances and our experiences. When things are happening quickly all around us, delays that would otherwise seem brief begin to feel interminable. Seconds stretch out. Minutes go on forever. . . .

Given what we know about the variability of our time sense, it seems clear that information and communication technologies would have a particularly strong effect on personal time perception. After all, they often determine the pace of the events we experience, the speed with which we’re presented with new information and stimuli, and even the rhythm of our social interactions. That’s been true for a long time, but the influence must be particularly strong now that we carry powerful and extraordinarily fast computers around with us all day long. Our gadgets train us to expect near-instantaneous responses to our actions, and we quickly get frustrated and annoyed at even brief delays.

I know that my own perception of time has been changed by technology. . . .

As we experience faster flows of information online, we become, in other words, less patient people. But it’s not just a network effect. The phenomenon is amplified by the constant buzz of Facebook, Twitter, texting, and social networking in general. Society’s “activity rhythm” has never been so harried. Impatience is a contagion spread from gadget to gadget.

One of the gravest yet most lucid and important admonitions comes from classicist-turned-technologist Tim O’Reilly, who echoes Susan Sontag’s concerns about anti-intellectualism and cautions that the plague of ignorance might spread far enough to drive our civilization into extinction:

For so many in the techno-elite, even those who don’t entirely subscribe to the unlimited optimism of the Singularity, the notion of perpetual progress and economic growth is somehow taken for granted. As a former classicist turned technologist, I’ve always lived with the shadow of the fall of Rome, the failure of its intellectual culture, and the stasis that gripped the Western world for the better part of a thousand years. What I fear most is that we will lack the will and the foresight to face the world’s problems squarely, but will instead retreat from them into superstition and ignorance.

[…]

History teaches us that conservative, backward-looking movements often arise under conditions of economic stress. As the world faces problems ranging from climate change to the demographic cliff of aging populations, it’s wise to imagine widely divergent futures.

Yes, we may find technological solutions that propel us into a new golden age of robots, collective intelligence, and an economy built around “the creative class.” But it’s at least as probable that as we fail to find those solutions quickly enough, the world falls into apathy, disbelief in science and progress, and after a melancholy decline, a new dark age.

Civilizations do fail. We have never yet seen one that hasn’t. The difference is that the torch of progress has in the past always passed to another region of the world. But we’ve now, for the first time, got a single global civilization. If it fails, we all fail together.

Biological anthropologist Helen Fisher, who studies the brain on love and whose Why We Love remains indispensable, worries that we misunderstand men. She cites her research for some findings that counter common misconceptions and illustrate how gender stereotypes limit us:

Men fall in love faster too — perhaps because they are more visual. Men experience love at first sight more regularly; and men fall in love just as often. Indeed, men are just as physiologically passionate. When my colleagues and I have scanned men’s brains (using fMRI), we have found that they show just as much activity as women in neural regions linked with feelings of intense romantic love. Interestingly, in the 2011 sample, I also found that when men fall in love, they are faster to introduce their new partner to friends and parents, more eager to kiss in public, and want to “live together” sooner. Then, when they are settled in, men have more intimate conversations with their wives than women do with their husbands—because women have many of their intimate conversations with their girlfriends. Last, men are just as likely to believe you can stay married to the same person forever (76% of both sexes). And other data show that after a break up, men are 2.5 times more likely to kill themselves.

[…]

In the Iliad, Homer called love “magic to make the sanest man go mad.” This brain system lives in both sexes. And I believe we’ll make better partnerships if we embrace the facts: men love — just as powerfully as women.

David Rowan, editor of Wired UK and scholar of the secrets of entrepreneurship, worries about the growing disconnect between the data-rich and the data-poor:

Each day, according to IBM, we collectively generate 2.5 quintillion bytes — a tsunami of structured and unstructured data that’s growing, in IDC’s reckoning, at 60 per cent a year. Walmart drags a million hourly retail transactions into a database that long ago passed 2.5 petabytes; Facebook processes 2.5 billion pieces of content and 500 terabytes of data each day; and Google, whose YouTube division alone gains 72 hours of new video every minute, accumulates 24 petabytes of data in a single day. . . . Certainly there are vast public benefits in the smart processing of these zetta- and yottabytes of previously unconstrained zeroes and ones. . . .

Yet as our lives are swept unstoppably into the data-driven world, such benefits are being denied to a fast-emerging data underclass. Any citizen lacking a basic understanding of, and at least minimal access to, the new algorithmic tools will increasingly be disadvantaged in vast areas of economic, political and social participation. The data disenfranchised will find it harder to establish personal creditworthiness or political influence; they will be discriminated against by stock markets and by social networks. We need to start seeing data literacy as a requisite, fundamental skill in a 21st-century democracy, and to campaign — and perhaps even to legislate — to protect the interests of those being left behind.

Some, like social and cognitive scientist Dan Sperber, go meta, admonishing that our worries about worrying are ushering in a new age of anxiety, the consequences of which are debilitating:

Worrying is an investment of cognitive resources laced with emotions from the anxiety spectrum and aimed at solving some specific problem. It has its costs and benefits, and so does not worrying. Worrying for a few minutes about what to serve for dinner in order to please one’s guests may be a sound investment of resources. Worrying about what will happen to your soul after death is a total waste. Human ancestors and other animals with foresight may have only worried about genuine and pressing problems such as not finding food or being eaten. Ever since they have become much more imaginative and have fed their imagination with rich cultural inputs, that is, since at least 40,000 years (possibly much more), humans have also worried about improving their lot individually and collectively — sensible worries — and about the evil eye, the displeasure of dead ancestors, the purity of their blood — misplaced worries.

A new kind of misplaced worries is likely to become more and more common. The ever-accelerating current scientific and technological revolution results in a flow of problems and opportunities that presents unprecedented cognitive and decisional challenges. Our capacity to anticipate these problems and opportunities is swamped by their number, novelty, speed of arrival, and complexity.

[…]

What I am particularly worried about is that humans will be less and less able to appreciate what they should really be worrying about and that their worries will do more harm than good. Maybe, just as on a boat in rapids, one should try not to slow down anything but just to optimize a trajectory one does not really control, not because safety is guaranteed and optimism is justified — the worst could happen — but because there is no better option than hope.

Mathematician and economist Eric R. Weinstein considers our conventional wisdom on what it takes to cultivate genius, including the myth of the 10,000-hour rule, and argues instead that the pursuit of excellence is a social malady that gets us nowhere meaningful:

We cannot excel our way out of modern problems. Within the same century, we have unlocked the twin nuclei of both cell and atom and created the conditions for synthetic biological and even digital life with computer programs that can spawn with both descent and variation on which selection can now act. We are in genuinely novel territory which we have little reason to think we can control; only the excellent would compare these recent achievements to harmless variations on the invention of the compass or steam engine. So surviving our newfound god-like powers will require modes that lie well outside expertise, excellence, and mastery.

Going back to Sewall Wright’s theory of adaptive landscapes of fitness, we see four modes of human achievement paired with what might be considered their more familiar accompanying archetypes:

A) Climbing—Expertise: Moving up the path of steepest ascent towards excellence for admission into a community that holds and defends a local maximum of fitness.

B) Crossing—Genius: Crossing the ‘Adaptive Valley’ to an unknown and unoccupied even higher maximum level of fitness.

C) Moving—Heroism: Moving ‘mountains of fitness’ for one’s group.

D) Shaking—Rebellion: Leveling peaks and filling valleys for the purpose of changing the landscape to be more even.

The essence of genius as a modality is that it seems to reverse the logic of excellence.

He adds the famous anecdote of Feynman’s Challenger testimony:

In the wake of the Challenger disaster, Richard Feynman was mistakenly asked to become part of the Rogers commission investigating the accident. In a moment of candor Chairman Rogers turned to Neil Armstrong in a men’s room and said “Feynman is becoming a real pain.” Such is ever the verdict pronounced by steady hands over great spirits. But the scariest part of this anecdote is not the story itself but the fact that we are, in the modern era, now so dependent on old Feynman stories having no living heroes with which to replace him: the ultimate tragic triumph of runaway excellence.

This view, however, is remarkably narrow and defeatist. As Voltaire memorably remarked, “Appreciation is a wonderful thing: It makes what is excellent in others belong to us as well.” Without appreciation for the Feynmans of the past we duly don our presentism blinders and refuse to acknowledge the fact that genius is a timeless quality that belongs to all ages, not a cultural commodity of the present. Many of our present concerns have been addressed with enormous prescience in the past, often providing more thoughtful and richer answers than we are able to today, whether it comes to the value of space exploration or the economics of media or the essence of creativity or even the grand question of how to live. Having “living heroes” is an admirable aspiration, but they should never replace — only enhance and complement — the legacy and learnings of those who came before.

Indeed, this presentism bias is precisely what Noga Arikha, historian of ideas and author of Passions and Tempers: A History of the Humours, points to as her greatest worry in one of the most compelling answers. It’s something I’ve voiced as well in a recent interview with the Guardian. Arikha writes:

I worry about the prospect of collective amnesia.

While access to information has never been so universal as it is now — thanks to the Internet — the total sum of knowledge of anything beyond the present seems to be dwindling among those people who came of age with the Internet. Anything beyond 1945, if then, is a messy, remote landscape; the centuries melt into each other in an insignificant magma. Famous names are flickers on a screen, their dates irrelevant, their epochs dusty. Everything is equalized.

She points to a necessary antidote to this shallowing of our cultural hindsight:

There is a way out: by integrating the teaching of history within the curricula of all subjects—using whatever digital or other means we have to redirect attention to slow reading and old sources. Otherwise we will be condemned to living without perspective, robbed of the wisdom and experience with which to build for the future, confined by the arrogance of our presentism to repeating history without noticing it.

Berkeley developmental psychologist Alison Gopnik, author of The Philosophical Baby: What Children’s Minds Tell Us About Truth, Love, and the Meaning of Life, worries that much of modern parenting is concerned with the wrong things — particularly the push for overachievement — when evidence strongly indicates that the art of presence is the most important gift a parent can bestow upon a child:

Thinking about children, as I do for a living, and worrying go hand in hand. There is nothing in human life so important and urgent as raising the next generation, and yet it also feels as if we have very little control over the outcome. . . .

[But] “parenting” worries focus on relatively small variations in what parents and children do — co-sleeping or crying it out, playing with one kind of toy rather than another, more homework or less. There is very little evidence that any of this makes much difference to the way that children turn out in the long run. There is even less evidence that there is any magic formula for making one well-loved and financially supported child any smarter or happier or more successful as an adult than another.

Instead, she argues, it is neglect that parents should be most worried about — a moral intuition as old as the world, yet one lamentably diluted by modern parents’ misguided concerns:

More recently research into epigenetics has helped demonstrate just how the mechanisms of care and neglect work. Research in sociology and economics has shown empirically just how significant the consequences of early experience actually can be. The small variations in middle-class “parenting” make very little difference. But providing high-quality early childhood care to children who would otherwise not receive it makes an enormous and continuing difference up through adulthood. In fact, the evidence suggests that this isn’t just a matter of teaching children particular skills or kinds of knowledge—a sort of broader institutional version of “parenting.” Instead, children who have a stable, nurturing, varied early environment thrive in a wide range of ways, from better health to less crime to more successful marriages. That’s just what we’d expect from the evolutionary story. I worry more and more about what will happen to the generations of children who don’t have the uniquely human gift of a long, protected, stable childhood.

Journalist Rolf Dobelli, author of The Art of Thinking Clearly, offers an almost Alan Wattsian concern about the paradox of material progress:

As mammals, we are status seekers. Non-status seeking animals don’t attract suitable mating partners and eventually exit the gene pool. Thus goods that convey high status remain extremely important, yet out of reach for most of us. Nothing technology brings about will change that. Yes, one day we might re-engineer our cognition to reduce or eliminate status competition. But until that point, most people will have to live with the frustrations of technology’s broken promise. That is, goods and services will be available to everybody at virtually no cost. But at the same time, status-conveying goods will inch even further out of reach. That’s a paradox of material progress.

Columbia biologist Stuart Firestein, author of the fantastic Ignorance: How It Drives Science and champion of “thoroughly conscious ignorance,” worries about our unreasonable expectations of science:

Much of science is failure, but it is a productive failure. This is a crucial distinction in how we think about failure. More important is that not all wrong science is bad science. As with the exaggerated expectations of scientific progress, expectations about the validity of scientific results have simply become overblown. Scientific “facts” are all provisional, all needing revision or sometimes even outright upending. But this is not bad; indeed it is critical to continued progress. Granted it’s difficult, because you can’t just believe everything you read. But let’s grow up and recognize that undeniable fact of life. . . .

So what’s the worry? That we will become irrationally impatient with science, with its wrong turns and occasional blind alleys, with its temporary results that need constant revision. And we will lose our trust and belief in science as the single best way to understand the physical universe. . . . From a historical perspective the path to discovery may seem clear, but the reality is that there are twists and turns and reversals and failures and cul de sacs all along the path to any discovery. Facts are not immutable and discoveries are provisional. This is the messy process of science. We should worry that our unrealistic expectations will destroy this amazing mess.

Neuroscientist Sam Harris, who has previously explored the psychology of lying, is concerned about bad incentives that bring out the worst in us, as individuals and as a society:

We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them.

Writer Douglas Rushkoff, author of Present Shock: When Everything Happens Now, offers a poignant and beautifully phrased, if exceedingly anthropocentric, concern:

We should worry less about our species losing its biosphere than losing its soul.

Our collective perceptions and cognition are our greatest evolutionary achievement. This is the activity that gives biology its meaning. Our human neural network is in the process of deteriorating and our perceptions are becoming skewed — both involuntarily and by our own hand — and all that most of us in the greater scientific community can do is hope that somehow technology picks up the slack, providing more accurate sensors, faster networks, and a new virtual home for complexity.

We should worry such networks won’t be able to function without us; we should also worry that they will.

Harvard’s Lisa Randall, one of the world’s leading theoretical physicists and the author of, most recently, Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World, worries about the decline in major long-term investments in research, the kind that made the Large Hadron Collider possible, which would in turn diminish our capacity for exploring the most intensely fascinating aspects of the unknown:

I’m worried I won’t know the answer to questions I care deeply about. Theoretical research (what I do) can of course be done more cheaply. A pencil and paper and even a computer are pretty cheap. But without experiments, or the hope of experiments, theoretical science can’t truly advance either.

One of the most poignant answers comes from psychologist Susan Blackmore, author of Consciousness: An Introduction, who admonishes that we’re disconnecting our heads from our hands by outsourcing so much of our manual humanity to machines, in the process amputating the present for the sake of some potential future. She writes:

What should worry us is that we seem to be worrying more about the possible disasters that might befall us than who we are becoming right now.

From 'Things I have learned in my life so far' by Stefan Sagmeister.


What Should We Be Worried About? is an awakening read in its entirety. For more of Brockman’s editorial-curatorial mastery, revisit the Edge Question compendiums from 2013 and 2012, and see Nobel-winning behavioral economist Daniel Kahneman on the marvels and flaws of our intuition.


30 OCTOBER, 2013

How Our Minds Mislead Us: The Marvels and Flaws of Our Intuition


“The confidence people have in their beliefs is not a measure of the quality of evidence but of the coherence of the story that the mind has managed to construct.”

Every year, intellectual impresario and Edge editor John Brockman summons some of our era’s greatest thinkers and unleashes them on one provocative question, whether it’s the single most elegant theory of how the world works or the best way to enhance our cognitive toolkit. This year, he sets out on the most ambitious quest yet, a meta-exploration of thought itself: Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction (public library) collects short essays and lecture adaptations from such celebrated and wide-ranging (though not in gender) minds as Daniel Dennett, Jonathan Haidt, Dan Gilbert, and Timothy Wilson, covering subjects as diverse as morality, essentialism, and the adolescent brain.

One of the most provocative contributions comes from Nobel-winning psychologist Daniel Kahneman — author of the indispensable Thinking, Fast and Slow, one of the best psychology books of 2012 — who examines “the marvels and the flaws of intuitive thinking.”

In the 1970s, Kahneman and his colleague Amos Tversky, self-crowned “prophets of irrationality,” began studying what they called “heuristics and biases” — mental shortcuts we take, which frequently result in cognitive errors. Those errors, however, reveal a great deal about how our minds work:

If you want to characterize how something is done, then one of the most powerful ways of characterizing how the mind does anything is by looking at the errors that the mind produces while it’s doing it because the errors tell you what it is doing. Correct performance tells you much less about the procedure than the errors do.

One of the most fascinating examples of heuristics and biases is what we call intuition — a complex cluster of cognitive processes, sometimes helpful but often misleading. Kahneman notes that thoughts come to mind in one of two ways: Either by “orderly computation,” which involves a series of stages of remembering rules and then applying them, or by perception, an evolutionary function that allows us to predict outcomes based on what we’re perceiving. (For instance, seeing a woman’s angry face helps us predict the general sentiment and disposition of what she’s about to say.) It is the latter mode that precipitates intuition. Kahneman explains the interplay:

There is no sharp line between intuition and perception. … Perception is predictive. . . . If you want to understand intuition, it is very useful to understand perception, because so many of the rules that apply to perception apply as well to intuitive thinking. Intuitive thinking is quite different from perception. Intuitive thinking has language. Intuitive thinking has a lot of word knowledge organized in different ways more than mere perception. But some very basic characteristics [of] perception are extended almost directly to intuitive thinking.

He then considers how the two types of mental operations established by modern cognitive science illuminate intuition:

Type 1 is automatic, effortless, often unconscious, and associatively coherent. . . . Type 2 is controlled, effortful, usually conscious, tends to be logically coherent, rule-governed. Perception and intuition are Type 1. … Type 2 is more controlled, slower, is more deliberate. . . . Type 2 is who we think we are. [And yet] if one made a film on this, Type 2 would be a secondary character who thinks that he is the hero because that’s who we think we are, but in fact, it’s Type 1 that does most of the work, and it’s most of the work that is completely hidden from us.

Type 1 also encompasses all of our practiced skills — for instance, driving, speaking, and understanding a language — which after a certain threshold of mastery enter autopilot mode. (Though this presents its own set of problems.) Underpinning that mode of thinking is our associative memory, which Kahneman unpacks:

You have to think of [your associative memory] as a huge repository of ideas, linked to each other in many ways, including causal links and other links, and activation spreading from ideas to other ideas until a small subset of that enormous network is illuminated, and the subset is what’s happening in the mind at the moment. You’re not conscious of it, you’re conscious of very little of it.

This leads to something Kahneman has termed “associative coherence” — the notion that “everything reinforces everything else.” Much like our attention, which sees only what it wants and expects to see, our associative memory looks to reinforce our existing patterns of association and deliberately discounts evidence that contradicts them. And therein lies the triumph and tragedy of our intuitive mind:

The thing about the system is that it settles into a stable representation of reality, and that is just a marvelous accomplishment. … That’s not a flaw, that’s a marvel. [But] coherence has its cost.

Coherence means that you’re going to adopt one interpretation in general. Ambiguity tends to be suppressed. This is part of the mechanism that you have here that ideas activate other ideas and the more coherent they are, the more likely they are to activate each other. Other things that don’t fit fall away by the wayside. We’re enforcing coherent interpretations. We see the world as much more coherent than it is.

Put another way, our chronic discomfort with ambiguity — which, ironically, is critical to both our creativity and the richness of our lives — leads us to lock down safe, comfortable, familiar interpretations, even if they are only partial representations of or fully disconnected from reality.

The Type 1 modality of thought gives rise to a System 1 of interpretation, which is at the heart of what we call “intuition” — but which is far less accurate and reliable than we like to believe:

System 1 infers and invents causes and intentions. [This] happens automatically. Infants have it. . . . We’re equipped … for the perception of causality.

It neglects ambiguity and suppresses doubt and … exaggerates coherence. Associative coherence [is] in large part where the marvels turn into flaws. We see a world that is vastly more coherent than the world actually is. That’s because of this coherence-creating mechanism that we have. We have a sense-making organ in our heads, and we tend to see things that are emotionally coherent, and that are associatively coherent.

But the greatest culprit in the failures of our intuition is another cognitive property Kahneman names “what you see is all there is” — a powerful and persistent flaw of System-1 thinking:

This is a mechanism that takes whatever information is available and makes the best possible story out of the information currently available, and tells you very little about information it doesn’t have. So what you get are people jumping to conclusions. I call this a “machine for jumping to conclusions.”

This jumping to conclusions, Kahneman adds, is immediate and based on unreliable information. And that’s a problem:

That will very often create a flaw. It will create overconfidence. The confidence people have in their beliefs is not a measure of the quality of evidence [but] of the coherence of the story that the mind has managed to construct. Quite often you can construct very good stories out of very little evidence. . . . People tend to have great belief, great faith in the stories that are based on very little evidence.

Most treacherous of all is our tendency to use our very confidence — and overconfidence — as evidence itself:

What’s interesting is that many a time people have intuitions that they’re equally confident about except they’re wrong. That happens through the mechanism I call “the mechanism of substitution.” You have been asked a question, and instead you answer another question, but that answer comes by itself with complete confidence, and you’re not aware that you’re doing something that you’re not an expert on because you have one answer. Subjectively, whether it’s right or wrong, it feels exactly the same. Whether it’s based on a lot of information, or a little information, this is something that you may step back and have a look at. But the subjective sense of confidence can be the same for intuition that arrives from expertise, and for intuitions that arise from heuristics. . . .

In other words, intuition, like attention, is “an intentional, unapologetic discriminator [that] asks what is relevant right now, and gears us up to notice only that” — a humbling antidote to our culture’s propensity for self-righteousness, and above all a reminder to allow yourself the uncomfortable luxury of changing your mind.

Thinking is excellent and mind-expanding in its entirety. Complement it with Brockman’s This Will Make You Smarter: New Scientific Concepts to Improve Your Thinking, one of the best psychology books of 2012.



22 JANUARY, 2013

This Explains Everything: 192 Thinkers on the Most Elegant Theory of How the World Works


“The greatest pleasure in science comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way.”

Every year since 1998, intellectual impresario and Edge editor John Brockman has been posing a single grand question to some of our time’s greatest thinkers across a wide spectrum of disciplines, then collecting the answers in an annual anthology. Last year’s answers to the question “What scientific concept will improve everybody’s cognitive toolkit?” were released in This Will Make You Smarter: New Scientific Concepts to Improve Your Thinking, one of the year’s best psychology and philosophy books.

In 2012, the question Brockman posed, proposed by none other than Steven Pinker, was “What is your favorite deep, elegant, or beautiful explanation?” The answers, representing an eclectic mix of 192 (alas, overwhelmingly male) minds spanning psychology, quantum physics, social science, political theory, philosophy, and more, are collected in the edited compendium This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works (UK; public library) and are also available online.

In the introduction preceding the micro-essays, Brockman frames the question and its ultimate objective, adding to history’s most timeless definitions of science:

The ideas presented on Edge are speculative; they represent the frontiers in such areas as evolutionary biology, genetics, computer science, neurophysiology, psychology, cosmology, and physics. Emerging out of these contributions is a new natural philosophy, new ways of understanding physical systems, new ways of thinking that call into question many of our basic assumptions.

[…]

Perhaps the greatest pleasure in science comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way. These explanations are called ‘beautiful’ or ‘elegant.’

[…]

The contributions presented here embrace scientific thinking in the broadest sense: as the most reliable way of gaining knowledge about anything — including such fields of inquiry as philosophy, mathematics, economics, history, language, and human behavior. The common thread is that a simple and nonobvious idea is proposed as the explanation of a diverse and complicated set of phenomena.

Stanford neuroscientist Robert Sapolsky, eloquent as ever, marvels at the wisdom of the crowd and the emergence of swarm intelligence:

Observe a single ant, and it doesn’t make much sense, walking in one direction, suddenly careening in another for no obvious reason, doubling back on itself. Thoroughly unpredictable.

The same happens with two ants, a handful of ants. But a colony of ants makes fantastic sense. Specialized jobs, efficient means of exploiting new food sources, complex underground nests with temperature regulated within a few degrees. And critically, there’s no blueprint or central source of command—each individual ant has algorithms for its behaviors. But this is not wisdom of the crowd, where a bunch of reasonably informed individuals outperform a single expert. The ants aren’t reasonably informed about the big picture. Instead, the behavior algorithms of each ant consist of a few simple rules for interacting with the local environment and local ants. And out of this emerges a highly efficient colony.

Ant colonies excel at generating trails that connect locations in the shortest possible way, accomplished with simple rules about when to lay down a pheromone trail and what to do when encountering someone else’s trail—approximations of optimal solutions to the Traveling Salesman problem. This has useful applications. In “ant-based routing,” simulations using virtual ants with similar rules can generate optimal ways of connecting the nodes in a network, something of great interest to telecommunications companies. It applies to the developing brain, which must wire up vast numbers of neurons with vaster numbers of connections without constructing millions of miles of connecting axons. And migrating fetal neurons generate an efficient solution with a different version of ant-based routing.

A wonderful example is how local rules about attraction and repulsion (i.e., positive and negative charges) allow simple molecules in an organic soup to occasionally form more complex ones. Life may have originated this way without the requirement of bolts of lightning to catalyze the formation of complex molecules.

And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blueprint, they don’t require a blueprint maker. If they don’t require lightning bolts, they don’t require Someone hurling lightning bolts.
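Sapolsky’s “ant-based routing” corresponds to what computer scientists call ant colony optimization. Purely as an illustration of the two local rules he describes (follow stronger pheromone trails, and deposit pheromone in proportion to how short your completed path was), here is a minimal sketch in Python; the toy network, parameter values, and function names are illustrative assumptions, not anything drawn from his essay or from a particular simulation.

```python
# A sketch of the pheromone-trail idea behind "ant-based routing": each virtual
# ant follows two local rules only. It prefers edges with more pheromone (and
# shorter length), and after finishing a walk it deposits pheromone in
# proportion to how short that walk was. The network and parameters are made up.

import random

# Toy network: node -> {neighbor: distance}
GRAPH = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0, "D": 4.0},
    "C": {"A": 5.0, "B": 1.0, "D": 1.0},
    "D": {"B": 4.0, "C": 1.0},
}

def ant_walk(start, goal, pheromone, alpha=1.0, beta=2.0):
    """One ant wanders from start to goal, biased by pheromone and edge length."""
    path, node, visited = [start], start, {start}
    while node != goal:
        choices = [n for n in GRAPH[node] if n not in visited]
        if not choices:          # dead end: give up on this walk
            return None
        weights = [
            (pheromone[(node, n)] ** alpha) * ((1.0 / GRAPH[node][n]) ** beta)
            for n in choices
        ]
        node = random.choices(choices, weights=weights, k=1)[0]
        visited.add(node)
        path.append(node)
    return path

def path_length(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

def ant_colony(start="A", goal="D", n_ants=20, n_rounds=30, evaporation=0.5):
    # Seed every edge with a little pheromone so all routes stay explorable.
    pheromone = {(a, b): 1.0 for a in GRAPH for b in GRAPH[a]}
    best = None
    for _ in range(n_rounds):
        walks = [w for w in (ant_walk(start, goal, pheromone) for _ in range(n_ants)) if w]
        # Evaporation: trails fade unless they keep being reinforced.
        for edge in pheromone:
            pheromone[edge] *= 1.0 - evaporation
        # Reinforcement: shorter walks lay down more pheromone on their edges.
        for walk in walks:
            deposit = 1.0 / path_length(walk)
            for a, b in zip(walk, walk[1:]):
                pheromone[(a, b)] += deposit
                pheromone[(b, a)] += deposit
            if best is None or path_length(walk) < path_length(best):
                best = walk
    return best, path_length(best)

if __name__ == "__main__":
    route, length = ant_colony()
    print("Shortest route found:", " -> ".join(route), "with length", length)
```

Run for a few dozen rounds, the short A-to-D route accumulates the most pheromone and comes to dominate the ants’ choices, which is the sense in which simple local rules, with no blueprint, yield an efficient collective outcome.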

Developmental psychologist Howard Gardner, who famously formulated the seminal theory of multiple intelligences, echoes Anaïs Nin in advocating for the role of the individual and Susan Sontag in stressing the impact of individual acts on collective fate. His answer, arguing for the importance of human beings, comes as a welcome antidote to a question that risks being inherently reductionist:

In a planet occupied now by seven billion inhabitants, I am amazed by the difference that one human being can make. Think of classical music without Mozart or Stravinsky; of painting without Caravaggio, Picasso or Pollock; of drama without Shakespeare or Beckett. Think of the incredible contributions of Michelangelo or Leonardo, or, in recent times, the outpouring of deep feeling at the death of Steve Jobs (or, for that matter, Michael Jackson or Princess Diana). Think of human values in the absence of Moses or Christ.

[…]

Despite the laudatory efforts of scientists to ferret out patterns in human behavior, I continue to be struck by the impact of single individuals, or of small groups, working against the odds. As scholars, we cannot and should not sweep these instances under the investigative rug. We should bear in mind anthropologist Margaret Mead’s famous injunction: ‘Never doubt that a small group of thoughtful committed citizens can change the world. It is the only thing that ever has.’

Uber-curator Hans Ulrich Obrist, who also contributed to last year’s volume, considers the parallel role of patterns and chance in the works of iconic composer John Cage and painter Gerhard Richter, and the role of uncertainty in the creative process:

In art, the title of a work can often be its first explanation. And in this context I am thinking especially of the titles of Gerhard Richter. In 2006, when I visited Richter in his studio in Cologne, he had just finished a group of six corresponding abstract paintings which he gave the title Cage.

There are many relations between Richter’s painting and the compositions of John Cage. In a book about the Cage series, Robert Storr has traced them from Richter’s attendance of a Cage performance at the Festum Fluxorum Fluxus in Düsseldorf 1963 to analogies in their artistic processes. Cage has often applied chance procedures in his compositions, notably with the use of the I Ching. Richter in his abstract paintings also intentionally allows effects of chance. In these paintings, he applies the oil paint on the canvas by means of a large squeegee. He selects the colors on the squeegee, but the factual trace that the paint leaves on the canvas is to a large extent the outcome of chance.

[…]

Richter’s concise title, Cage, can be unfolded into an extensive interpretation of these abstract paintings (and of other works)—but, one can say, the short form already contains everything. The title, like an explanation of a phenomenon, unlocks the works, describing their relation to one of the most important cultural figures of the twentieth century, John Cage, who shares with Richter the great themes of chance and uncertainty.

Writer, artist, and designer Douglas Coupland, whose biography of Marshall McLuhan remains indispensable, offers a lyrical meditation on the peculiar odds behind coincidences and déjà vus:

I take comfort in the fact that there are two human moments that seem to be doled out equally and democratically within the human condition—and that there is no satisfying ultimate explanation for either. One is coincidence, the other is déjà vu. It doesn’t matter if you’re Queen Elizabeth, one of the thirty-three miners rescued in Chile, a South Korean housewife or a migrant herder in Zimbabwe—in the span of 365 days you will pretty much have two déjà vus as well as one coincidence that makes you stop and say, “Wow, that was a coincidence.”

The thing about coincidence is that when you imagine the umpteen trillions of coincidences that can happen at any given moment, the fact is, that in practice, coincidences almost never do occur. Coincidences are actually so rare that when they do occur they are, in fact memorable. This suggests to me that the universe is designed to ward off coincidence whenever possible—the universe hates coincidence—I don’t know why—it just seems to be true. So when a coincidence happens, that coincidence had to work awfully hard to escape the system. There’s a message there. What is it? Look. Look harder. Mathematicians perhaps have a theorem for this, and if they do, it might, by default be a theorem for something larger than what they think it is.

What’s both eerie and interesting to me about déjà vus is that they occur almost like metronomes throughout our lives, about one every six months, a poetic timekeeping device that, at the very least, reminds us we are alive. I can safely assume that my thirteen-year-old niece, Stephen Hawking and someone working in a Beijing luggage-making factory each experience two déjà vus a year. Not one. Not three. Two.

The underlying biodynamics of déjà vus is probably ascribable to some sort of tingling neurons in a certain part of the brain, yet this doesn’t tell us why they exist. They seem to me to be a signal from a larger point of view that wants to remind us that our lives are distinct, that they have meaning, and that they occur throughout a span of time. We are important, and what makes us valuable to the universe is our sentience and our curse and blessing of perpetual self-awareness.

MIT social scientist Sherry Turkle, author of the cyber-dystopian Alone Together: Why We Expect More from Technology and Less from Each Other, considers the role of “transitional objects” in our relationship with technology:

I was a student in psychology in the mid-1970s at Harvard University. The grand experiment that had been “Social Relations” at Harvard had just crumbled. Its ambition had been to bring together the social sciences in one department, indeed, most in one building, William James Hall. Clinical psychology, experimental psychology, physical and cultural anthropology, and sociology, all of these would be in close quarters and intense conversation.

But now, everyone was back in their own department, on their own floor. From my point of view, what was most difficult was that the people who studied thinking were on one floor and the people who studied feeling were on another.

In this Balkanized world, I took a course with George Goethals in which we learned about the passion in thought and the logical structure behind passion. Goethals, a psychologist who specialized in adolescence, was teaching a graduate seminar in psychoanalysis. … Several classes were devoted to the work of Donald Winnicott and his notion of the transitional object. Winnicott called transitional the objects of childhood—the stuffed animals, the bits of silk from a baby blanket, the favorite pillows—that the child experiences as both part of the self and of external reality. Winnicott writes that such objects mediate between the child’s sense of connection to the body of the mother and a growing recognition that he or she is a separate being. The transitional objects of the nursery—all of these are destined to be abandoned. Yet, says Winnicott, they leave traces that will mark the rest of life. Specifically, they influence how easily an individual develops a capacity for joy, aesthetic experience, and creative playfulness. Transitional objects, with their joint allegiance to self and other, demonstrate to the child that objects in the external world can be loved.

[…]

Winnicott believes that during all stages of life we continue to search for objects we experience as both within and outside the self. We give up the baby blanket, but we continue to search for the feeling of oneness it provided. We find them in moments of feeling “at one” with the world, what Freud called the “oceanic feeling.” We find these moments when we are at one with a piece of art, a vista in nature, a sexual experience.

As a scientific proposition, the theory of the transitional object has its limitations. But as a way of thinking about connection, it provides a powerful tool for thought. Most specifically, it offered me a way to begin to understand the new relationships that people were beginning to form with computers, something I began to study in the late 1970s and early 1980s. From the very beginning, as I began to study the nascent digital culture, I could see that computers were not “just tools.” They were intimate machines. People experienced them as part of the self, separate but connected to the self.

[…]

When in the late 1970s, I began to study the computer’s special evocative power, my time with George Goethals and the small circle of Harvard graduate students immersed in Winnicott came back to me. Computers served as transitional objects. They bring us back to the feelings of being “at one” with the world. Musicians often hear the music in their minds before they play it, experiencing the music from within and without. The computer similarly can be experienced as an object on the border between self and not-self. Just as musical instruments can be extensions of the mind’s construction of sound, computers can be extensions of the mind’s construction of thought.

This way of thinking about the computer as an evocative object puts us on the inside of a new inside joke. For when psychoanalysts talked about object relations, they had always been talking about people. From the beginning, people saw computers as “almost-alive” or “sort of alive.” With the computer, object relations psychoanalysis can be applied to, well, objects. People feel at one with video games, with lines of computer code, with the avatars they play in virtual worlds, with their smartphones. Classical transitional objects are meant to be abandoned, their power recovered in moments of heightened experience. When our current digital devices—our smartphones and cellphones—take on the power of transitional objects, a new psychology comes into play. These digital objects are never meant to be abandoned. We are meant to become cyborg.

Anthropologist Scott Atran considers the role of the absurd in religion and cause-worship, and the Beckett-like notion of the “ineffable”:

The notion of a transcendent force that moves the universe or history or determines what is right and good—and whose existence is fundamentally beyond reason and immune to logical or empirical disproof—is the simplest, most elegant, and most scientifically baffling phenomenon I know of. Its power and absurdity perturbs mightily, and merits careful scientific scrutiny. In an age where many of the most volatile and seemingly intractable conflicts stem from sacred causes, scientific understanding of how to best deal with the subject has also never been more critical.

Call it love of Group or God, or devotion to an Idea or Cause, it matters little in the end. This is “the privilege of absurdity; to which no living creature is subject, but man only” of which Hobbes wrote in Leviathan. In The Descent of Man, Darwin cast it as the virtue of “morality,” with which winning tribes are better endowed in history’s spiraling competition for survival and dominance. Unlike other creatures, humans define the groups to which they belong in abstract terms. Often they strive to achieve a lasting intellectual and emotional bonding with anonymous others, and seek to heroically kill and die, not in order to preserve their own lives or those of people they know, but for the sake of an idea—the conception they have formed of themselves, of “who we are.”

[…]

There is an apparent paradox that underlies the formation of large-scale human societies. The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today’s nations, transnational movements, and other “imagined communities” of fictive kin — seems to depend upon what Kierkegaard deemed this “power of the preposterous” … Humankind’s strongest social bonds and actions, including the capacity for cooperation and forgiveness, and for killing and allowing oneself to be killed, are born of commitment to causes and courses of action that are “ineffable,” that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for costs and consequences. The more materially inexplicable one’s devotion and commitment to a sacred cause — that is, the more absurd—the greater the trust others place in it and the more that trust generates commitment on their part.

[…]

Religion and the sacred, banned so long from reasoned inquiry by ideological bias of all persuasions—perhaps because the subject is so close to who we want or don’t want to be—remain a vast, tangled, and largely unexplored domain for science, however simple and elegant they seem to most people everywhere in everyday life.

Psychologist Timothy Wilson, author of the excellent Redirect: The Surprising New Science of Psychological Change, explores the Möbius loop of self-perception and behavior:

My favorite is the idea that people become what they do. This explanation of how people acquire attitudes and traits dates back to the philosopher Gilbert Ryle, but was formalized by the social psychologist Daryl Bem in his self-perception theory. People draw inferences about who they are, Bem suggested, by observing their own behavior.

Self-perception theory turns common wisdom on its head. … Hundreds of experiments have confirmed the theory and shown when this self-inference process is most likely to operate (e.g., when people believe they freely chose to behave the way they did, and when they weren’t sure at the outset how they felt).

Self-perception theory is elegant in its simplicity. But it is also quite deep, with important implications for the nature of the human mind. Two other powerful ideas follow from it. The first is that we are strangers to ourselves. After all, if we knew our own minds, why would we need to guess what our preferences are from our behavior? If our minds were an open book, we would know exactly how honest we are and how much we like lattes. Instead, we often need to look to our behavior to figure out who we are. Self-perception theory thus anticipated the revolution in psychology’s study of human consciousness, a revolution that revealed the limits of introspection.

But it turns out that we don’t just use our behavior to reveal our dispositions—we infer dispositions that weren’t there before. Often, our behavior is shaped by subtle pressures around us, but we fail to recognize those pressures. As a result, we mistakenly believe that our behavior emanated from some inner disposition.

Harvard physician and social scientist Nicholas Christakis, who also appeared in Brockman’s Culture: Leading Scientists Explore Societies, Art, Power, and Technology, offers one of the most poetic answers, tracing the history of our understanding of why the sky is blue:

My favorite explanation is one that I sought as a boy. It is the explanation for why the sky is blue. It’s a question every toddler asks, but it is also one that most great scientists since the time of Aristotle, including da Vinci, Newton, Kepler, Descartes, Euler, and even Einstein, have asked.

One of the things I like most about this explanation—beyond the simplicity and overtness of the question itself—is how long it took to arrive at it, how many centuries of effort, and how many branches of science it involves.

Aristotle is the first, so far as we know, to ask why the sky is blue, in the treatise On Colors; his answer is that the air close at hand is clear and the deep air of the sky is blue the same way a thin layer of water is clear but a deep well of water looks black. This idea was still being echoed in the 13th century by Roger Bacon. Kepler, too, arrived at a similar explanation, arguing that the air merely looks colorless because the tint of its color is so faint when in a thin layer. But none of them offered an explanation for the blueness of the atmosphere. So the question actually has two related parts: why the sky has any color, and why it has a blue color.

[…]

The sky is blue because the incident light interacts with the gas molecules in the air in such a fashion that more of the light in the blue part of the spectrum is scattered, reaching our eyes on the surface of the planet. All the frequencies of the incident light can be scattered this way, but the high-frequency (short-wavelength) blue is scattered more than the lower frequencies in a process known as Rayleigh scattering, described in the 1870s. John William Strutt, Lord Rayleigh, who also won the Nobel Prize in Physics in 1904 for the discovery of argon, demonstrated that, when light is scattered by particles much smaller than its wavelength (as it is by the gas molecules of the air), the intensity of the scattered light varies inversely with the fourth power of the wavelength. Shorter wavelengths like blue (and violet) are scattered more than longer ones. It’s as if all the molecules in the air preferentially glow blue, which is what we then see everywhere around us.

Yet the sky should appear violet, since violet light is scattered even more than blue light. But the sky does not appear violet to us because of the final, biological part of the puzzle, which is the way our eyes are designed: they are more sensitive to blue than to violet light.

The explanation for why the sky is blue involves so much of the natural sciences: the colors within the visual spectrum, the wave nature of light, the angle at which sunlight hits the atmosphere, the mathematics of scattering, the size of nitrogen and oxygen molecules, and even the way human eyes perceive color. It’s most of science in a question that a young child can ask.
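To make Rayleigh’s inverse-fourth-power law concrete, here is a quick back-of-the-envelope sketch (mine, not Christakis’s), comparing how strongly two illustrative wavelengths, blue at roughly 450 nanometers and red at roughly 700, are scattered:

```python
# Rough illustration of Rayleigh's inverse-fourth-power law:
# scattered intensity is proportional to 1 / wavelength**4.
# The wavelengths below are illustrative round numbers, in nanometers.

def relative_scattering(wavelength_nm: float) -> float:
    """Relative scattering strength (arbitrary units), proportional to 1/lambda^4."""
    return 1.0 / wavelength_nm ** 4

blue = relative_scattering(450.0)  # blue light, ~450 nm
red = relative_scattering(700.0)   # red light, ~700 nm

print(f"Blue is scattered about {blue / red:.1f} times more strongly than red.")
# Prints roughly 5.9 -- hence the scattered daylight reaching our eyes is blue.
```

Run the same numbers for violet, at roughly 410 nanometers, and the ratio climbs higher still, which is exactly why the biological half of the explanation, our eyes’ weaker sensitivity to violet, is needed to finish the story.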

Nature editor-in-chief Philip Campbell considers the beauty of a sunrise, echoing Richard Feynman’s thoughts on science and mystery and Denis Dutton’s evolutionary theory of beauty:

Scientific understanding enhances rather than destroys nature’s beauty. All of these explanations for me contribute to the beauty in a sunrise.

Ah, but what is the explanation of beauty? Brain scientists grapple with nuclear magnetic resonance images—a recent meta-analysis indicated that all of our aesthetic judgements seem to include the use of neural circuits in the right anterior insula, an area of the cerebral cortex typically associated with visceral perception. Perhaps our sense of beauty is a by-product of the evolutionary maintenance of the senses of belonging and of disgust. For what it’s worth, as exoplanets pour out of our telescopes, I believe that we will encounter astrochemical evidence for some form of extraterrestrial organism well before we achieve a deep, elegant or beautiful explanation of human aesthetics.

But my favorite essay comes from social media researcher and general genius Clay Shirky, author of Cognitive Surplus: Creativity and Generosity in a Connected Age, who considers the propagation of ideas in culture and the problems with Richard Dawkins’s notion of the meme in a context of combinatorial creativity:

Something happens to keep one group of people behaving in a certain set of ways. In the early 1970s, both E.O. Wilson and Richard Dawkins noticed that the flow of ideas in a culture exhibited similar patterns to the flow of genes in a species—high flow within the group, but sharply reduced flow between groups. Dawkins’ response was to assume a hypothetical unit of culture called the meme, though he also made its problems clear—with genetic material, perfect replication is the norm, and mutations rare. With culture, it is the opposite — events are misremembered and then misdescribed, quotes are mangled, even jokes (pure meme) vary from telling to telling. The gene/meme comparison remained, for a generation, an evocative idea of not much analytic utility.

Dan Sperber has, to my eye, cracked this problem. In a slim, elegant volume published 15 years ago with the modest title Explaining Culture, he outlined a theory of culture as the residue of the epidemic spread of ideas. In this model, there is no meme, no unit of culture separate from the blooming, buzzing confusion of transactions. Instead, all cultural transmission can be reduced to one of two types: making a mental representation public, or internalizing a mental version of a public presentation. As Sperber puts it, “Culture is the precipitate of cognition and communication in a human population.”

Sperber’s two primitives—externalization of ideas, internalization of expressions—give us a way to think of culture not as a big container people inhabit, but rather as a network whose traces, drawn carefully, let us ask how the behaviors of individuals create larger, longer-lived patterns. Some public representations are consistently learned and then re-expressed and re-learned—Mother Goose rhymes, tartan patterns, and peer review have all survived for centuries. Others move from ubiquitous to marginal in a matter of years. …

[…]

This is what is so powerful about Sperber’s idea: culture is a giant, asynchronous network of replication, ideas turning into expressions which turn into other, related ideas. … Sperber’s idea also suggests increased access to public presentation of ideas will increase the dynamic range of culture overall.
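Sperber’s two primitives are abstract, but a toy simulation can make them concrete. The sketch below is mine, not Sperber’s or Shirky’s: a small population repeatedly externalizes a mental representation, here a Mother Goose rhyme, and internalizes one another’s expressions, with each step copying imperfectly; whatever variants remain in circulation at the end are the “precipitate”:

```python
import random

# Toy sketch (not from Sperber or Shirky): culture as the "precipitate" of
# repeated externalization and internalization, each step copying imperfectly.

random.seed(42)

VARIANTS = ["ring around the rosie", "ring around the rosy", "ring a ring o' roses"]

def noisy_copy(representation: str, error_rate: float = 0.1) -> str:
    """Copy a representation, occasionally substituting a randomly chosen variant."""
    if random.random() < error_rate:
        return random.choice(VARIANTS)
    return representation

def externalize(mental: str) -> str:
    """Make a mental representation public (imperfectly)."""
    return noisy_copy(mental)

def internalize(public: str) -> str:
    """Form a mental version of a public expression (imperfectly)."""
    return noisy_copy(public)

# A small population, each member starting with some version of the rhyme.
population = [random.choice(VARIANTS) for _ in range(50)]

# Asynchronous transmission: one person expresses, another takes it in.
for _ in range(2000):
    speaker, listener = random.sample(range(len(population)), 2)
    population[listener] = internalize(externalize(population[speaker]))

# The surviving variants and their share of the population.
for variant in VARIANTS:
    print(f"{population.count(variant):2d}  {variant}")
```

There is no “meme” anywhere in this sketch, only transactions, yet stable, longer-lived patterns can still emerge from the network of copies, which is the point Shirky credits to Sperber.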

Characteristically thought-provoking and reliably cross-disciplinary, This Explains Everything is a must-read in its entirety.
