Brain Pickings


25 JANUARY, 2013

Virginia Woolf on the Creative Benefits of Keeping a Diary


“The habit of writing thus for my own eye only is good practice. It loosens the ligaments.”

Literary icon Virginia Woolf (January 25, 1882 — March 28, 1941) was not only a masterful letter-writer and little-known children’s book author, but also a dedicated diarist on par with Susan Sontag and Anaïs Nin. A fairly late journaling bloomer, she began writing in 1915, at the age of 33, and continued until her last entry in 1941, four days before her death, leaving behind 26 volumes written in her own hand. More than a mere tool of self-exploration, however, Woolf approached the diary as a kind of R&D lab for her craft. As her husband observes in the introduction to her collected journals, A Writer’s Diary (UK; public library), Woolf’s journaling was “a method of practicing or trying out the art of writing.”

In an entry from April 20th, 1919, Woolf makes a case for the creative benefits of keeping a diary — something Joan Didion echoed nearly half a century later in her timeless essay on keeping a notebook — and argues for it as an essential tool for honing one’s writing style:

I got out this diary and read, as one always does read one’s own writing, with a kind of guilty intensity. I confess that the rough and random style of it, often so ungrammatical, and crying for a word altered, afflicted me somewhat. I am trying to tell whichever self it is that reads this hereafter that I can write very much better; and take no time over this; and forbid her to let the eye of man behold it. And now I may add my little compliment to the effect that it has a slapdash and vigour and sometimes hits an unexpected bull’s eye. But what is more to the point is my belief that the habit of writing thus for my own eye only is good practice. It loosens the ligaments. Never mind the misses and the stumbles. Going at such a pace as I do I must make the most direct and instant shots at my object, and thus have to lay hands on words, choose them and shoot them with no more pause than is needed to put my pen in the ink. I believe that during the past year I can trace some increase of ease in my professional writing which I attribute to my casual half hours after tea. Moreover there looms ahead of me the shadow of some kind of form which a diary might attain to. I might in the course of time learn what it is that one can make of this loose, drifting material of life; finding another use for it than the use I put it to, so much more consciously and scrupulously, in fiction. What sort of diary should I like mine to be? Something loose knit and yet not slovenly, so elastic that it will embrace anything, solemn, slight or beautiful that comes into my mind. I should like it to resemble some deep old desk, or capacious hold-all, in which one flings a mass of odds and ends without looking them through. 
I should like to come back, after a year or two, and find that the collection had sorted itself and refined itself and coalesced, as such deposits so mysteriously do, into a mould, transparent enough to reflect the light of our life, and yet steady, tranquil compounds with the aloofness of a work of art. The main requisite, I think on re-reading my old volumes, is not to play the part of censor, but to write as the mood comes or of anything whatever; since I was curious to find how I went for things put in haphazard, and found the significance to lie where I never saw it at the time. But looseness quickly becomes slovenly. A little effort is needed to face a character or an incident which needs to be recorded. Nor can one let the pen write without guidance; for fear of becoming slack and untidy. . . .

It was also an autobiographical tool. In an entry from January 20th, 1919, a 36-year-old Woolf, days shy of her 37th birthday, considers the utility of the diaries to her future self, noting with equal parts sharp self-awareness and near-comic self-consciousness her own young-person’s perception of 50 as an “elderly” age:

I note however that this diary writing does not count as writing, since I have just re-read my year’s diary and am much struck by the rapid haphazard gallop at which it swings along, sometimes indeed jerking almost intolerably over the cobbles. Still if it were not written rather faster than the fastest type-writing, if I stopped and took thought, it would never be written at all; and the advantage of the method is that it sweeps up accidentally several stray matters which I should exclude if I hesitated, but which are the diamonds of the dustheap. If Virginia Woolf at the age of 50, when she sits down to build her memoirs out of these books, is unable to make a phrase as it should be made, I can only condole with her and remind her of the existence of the fireplace, where she has my leave to burn these pages to so many black films with red eyes in them. But how I envy her the task I am preparing for her! There is none I should like better. Already my 37th birthday next Saturday is robbed of some of its terrors by the thought. Partly for the benefit of this elderly lady (no subterfuges will then be possible: 50 is elderly, though I anticipate her protest and agree that it is not old) partly to give the year a solid foundation I intend to spend the evenings of this week of captivity in making out an account of my friendships and their present condition, with some account of my friends’ characters; and to add an estimate of their work and a forecast of their future works. The lady of 50 will be able to say how near to the truth I come; but I have written enough for tonight (only 15 minutes, I see).

On March 9th, 1920, she returns to her future “elderly” self, this time with more hopefulness, presenting the diary as fodder for her future creative output — the building blocks of her combinatorial creativity:

In spite of some tremors I think I shall go on with this diary for the present. I sometimes think that I have worked through the layer of style which suited it — suited the comfortable bright hour, after tea; and the thing I’ve reached now is less pliable. Never mind; I fancy old Virginia, putting on her spectacles to read of March 1920 will decidedly wish me to continue. Greetings! my dear ghost; and take heed that I don’t think 50 a very great age. Several good books can be written still; and here’s the bricks for a fine one.

Portrait of Virginia Woolf by Roger Fry, 1917, via Wikimedia Commons

And yet Woolf’s relationship with the diary is at times ambivalent: On June 14th, 1925, she laments:

A disgraceful confession — this is Sunday morning and just after ten, and here I am sitting down to write diary and not fiction or reviews, without any excuse, except the state of my mind.

As any dedicated diarist has experienced first-hand, Woolf observes the troublesome friction between the diary as a therapeutic tool for exorcising one’s demons and its uncomfortable implications for one’s ego. On October 25th, 1920, she records her anguish:

(First day of winter time) Why is life so tragic; so like a little strip of pavement over an abyss. I look down; I feel giddy; I wonder how I am ever to walk to the end. But why do I feel this: Now that I say it I don’t feel it. The fire burns; we are going to hear the Beggar’s Opera. Only it lies about me; I can’t keep my eyes shut. It’s a feeling of impotence; of cutting no ice. Here I sit at Richmond, and like a lantern stood in the middle of a field my light goes up in darkness. Melancholy diminishes as I write. Why then don’t I write it down oftener? Well, one’s vanity forbids. I want to appear a success even to myself. Yet I don’t get to the bottom of it.

On December 29th, 1940, some eight years after her self-constructed brink of elderly age, she observes wistfully, aligning the desire to write in the diary with her writerly and existential vitality:

There are moments when the sail flaps. Then, being a great amateur of the art of life, determined to suck my orange, off, like a wasp if the blossom I’m on fades, as it did yesterday — I ride across the downs to the cliffs. A roll of barbed wire is hooped on the edge. I rubbed my mind brisk along the Newhaven road. Shabby old maids buying groceries, in that desert road with the villas; in the wet. And Newhaven gashed. But tire the body and the mind sleeps. All desire to write diary here has flagged. What is the right antidote? I must sniff round. I think Mme. de Sevigne. Writing to be a daily pleasure. I detest the hardness of old age — I feel it. I rasp. I’m tart.

Exactly three months later, Woolf filled her pockets with stones, walked into the river near her home in Sussex, and drowned herself.

In the introduction to A Writer’s Diary, from which all of these excerpts come, Woolf’s husband adds an apt caveat:

The diary is too personal to be published as a whole during the lifetime of many people referred to in it. It is, I think, nearly always a mistake to publish extracts from diaries or letters, particularly if the omissions have to be made in order to protect the feelings or reputations of the living. The omissions almost always distort or conceal the true character of the diarist or letter-writer and produce spiritually what an Academy picture does materially, smoothing out the wrinkles, warts, frowns, and asperities. At the best and even unexpurgated, diaries give a distorted or one-sided portrait of the writer, because, as Virginia Woolf herself remarks somewhere in these diaries, one gets into the habit of recording one particular kind of mood — irritation or misery, say — and of not writing one’s diary when one is feeling the opposite. The portrait is therefore from the start unbalanced, and, if someone then deliberately removes another characteristic, it may well become a mere caricature.


22 JANUARY, 2013

This Explains Everything: 192 Thinkers on the Most Elegant Theory of How the World Works


“The greatest pleasure in science comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way.”

Every year since 1998, intellectual impresario and Edge editor John Brockman has been posing a single grand question to some of our time’s greatest thinkers across a wide spectrum of disciplines, then collecting the answers in an annual anthology. Last year’s answers to the question “What scientific concept will improve everybody’s cognitive toolkit?” were released in This Will Make You Smarter: New Scientific Concepts to Improve Your Thinking, one of the year’s best psychology and philosophy books.

In 2012, the question Brockman posed, proposed by none other than Steven Pinker, was “What is your favorite deep, elegant, or beautiful explanation?” The answers, representing an eclectic mix of 192 (alas, overwhelmingly male) minds spanning psychology, quantum physics, social science, political theory, philosophy, and more, are collected in the edited compendium This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works (UK; public library) and are also available online.

In the introduction preceding the micro-essays, Brockman frames the question and its ultimate objective, adding to history’s most timeless definitions of science:

The ideas presented on Edge are speculative; they represent the frontiers in such areas as evolutionary biology, genetics, computer science, neurophysiology, psychology, cosmology, and physics. Emerging out of these contributions is a new natural philosophy, new ways of understanding physical systems, new ways of thinking that call into question many of our basic assumptions.


Perhaps the greatest pleasure in science comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way. These explanations are called ‘beautiful’ or ‘elegant.’


The contributions presented here embrace scientific thinking in the broadest sense: as the most reliable way of gaining knowledge about anything — including such fields of inquiry as philosophy, mathematics, economics, history, language, and human behavior. The common thread is that a simple and nonobvious idea is proposed as the explanation of a diverse and complicated set of phenomena.

Stanford neuroscientist Robert Sapolsky, eloquent as ever, marvels at the wisdom of the crowd and the emergence of swarm intelligence:

Observe a single ant, and it doesn’t make much sense, walking in one direction, suddenly careening in another for no obvious reason, doubling back on itself. Thoroughly unpredictable.

The same happens with two ants, a handful of ants. But a colony of ants makes fantastic sense. Specialized jobs, efficient means of exploiting new food sources, complex underground nests with temperature regulated within a few degrees. And critically, there’s no blueprint or central source of command—each individual ant has algorithms for its behavior. But this is not wisdom of the crowd, where a bunch of reasonably informed individuals outperform a single expert. The ants aren’t reasonably informed about the big picture. Instead, the behavior algorithms of each ant consist of a few simple rules for interacting with the local environment and local ants. And out of this emerges a highly efficient colony.

Ant colonies excel at generating trails that connect locations in the shortest possible way, accomplished with simple rules about when to lay down a pheromone trail and what to do when encountering someone else’s trail—approximations of optimal solutions to the Traveling Salesman problem. This has useful applications. In “ant-based routing,” simulations using virtual ants with similar rules can generate optimal ways of connecting the nodes in a network, something of great interest to telecommunications companies. It applies to the developing brain, which must wire up vast numbers of neurons with vaster numbers of connections without constructing millions of miles of connecting axons. And migrating fetal neurons generate an efficient solution with a different version of ant-based routing.
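The pheromone feedback Sapolsky describes can be sketched in a few lines of Python. Everything below — the path lengths, the evaporation rate, the deposit rule — is an illustrative toy of my own construction, not any specific ant-based-routing algorithm: each simulated ant picks a path in proportion to its pheromone level, and shorter paths get reinforced more, so the colony converges on the shorter route with no central command.

```python
import random

def ant_path_choice(lengths=(1.0, 2.0), n_ants=200, evaporation=0.1, seed=42):
    """Toy pheromone feedback: each ant picks one of two paths with
    probability proportional to that path's pheromone level, then
    deposits pheromone in inverse proportion to the path's length."""
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]  # both paths start out equally attractive
    for _ in range(n_ants):
        total = pheromone[0] + pheromone[1]
        path = 0 if rng.random() < pheromone[0] / total else 1
        # evaporation keeps early accidents from locking in forever
        pheromone = [p * (1 - evaporation) for p in pheromone]
        # shorter paths are traversed faster, so they get reinforced more
        pheromone[path] += 1.0 / lengths[path]
    return pheromone

final = ant_path_choice()
print(final)  # the shorter path (index 0) ends up with more pheromone
```

No ant knows which path is shorter; the local deposit-and-evaporate rules alone tilt the colony toward the better route — the emergence the essay is pointing at.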

A wonderful example is how local rules about attraction and repulsion (i.e., positive and negative charges) allow simple molecules in an organic soup to occasionally form more complex ones. Life may have originated this way without the requirement of bolts of lightning to catalyze the formation of complex molecules.

And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blueprint, they don’t require a blueprint maker. If they don’t require lightning bolts, they don’t require Someone hurling lightning bolts.

Developmental psychologist Howard Gardner, who famously originated the theory of multiple intelligences, echoes Anaïs Nin in advocating for the role of the individual and Susan Sontag in stressing the impact of individual acts on collective fate. His answer, arguing for the importance of human beings, comes as a welcome antidote to a question that risks being inherently reductionist:

In a planet occupied now by seven billion inhabitants, I am amazed by the difference that one human being can make. Think of classical music without Mozart or Stravinsky; of painting without Caravaggio, Picasso or Pollock; of drama without Shakespeare or Beckett. Think of the incredible contributions of Michelangelo or Leonardo, or, in recent times, the outpouring of deep feeling at the death of Steve Jobs (or, for that matter, Michael Jackson or Princess Diana). Think of human values in the absence of Moses or Christ.


Despite the laudatory efforts of scientists to ferret out patterns in human behavior, I continue to be struck by the impact of single individuals, or of small groups, working against the odds. As scholars, we cannot and should not sweep these instances under the investigative rug. We should bear in mind anthropologist Margaret Mead’s famous injunction: ‘Never doubt that a small group of thoughtful committed citizens can change the world. It is the only thing that ever has.’

Uber-curator Hans Ulrich Obrist, who also contributed to last year’s volume, considers the parallel role of patterns and chance in the works of iconic composer John Cage and painter Gerhard Richter, and the role of uncertainty in the creative process:

In art, the title of a work can often be its first explanation. And in this context I am thinking especially of the titles of Gerhard Richter. In 2006, when I visited Richter in his studio in Cologne, he had just finished a group of six corresponding abstract paintings which he gave the title Cage.

There are many relations between Richter’s painting and the compositions of John Cage. In a book about the Cage series, Robert Storr has traced them from Richter’s attendance of a Cage performance at the Festum Fluxorum Fluxus in Düsseldorf in 1963 to analogies in their artistic processes. Cage often applied chance procedures in his compositions, notably with the use of the I Ching. Richter in his abstract paintings also intentionally allows effects of chance. In these paintings, he applies the oil paint on the canvas by means of a large squeegee. He selects the colors on the squeegee, but the factual trace that the paint leaves on the canvas is to a large extent the outcome of chance.


Richter’s concise title, Cage, can be unfolded into an extensive interpretation of these abstract paintings (and of other works)—but, one can say, the short form already contains everything. The title, like an explanation of a phenomenon, unlocks the works, describing their relation to one of the most important cultural figures of the twentieth century, John Cage, who shares with Richter the great themes of chance and uncertainty.

Writer, artist, and designer Douglas Coupland, whose biography of Marshall McLuhan remains indispensable, offers a lyrical meditation on the peculiar odds behind coincidences and déjà vus:

I take comfort in the fact that there are two human moments that seem to be doled out equally and democratically within the human condition—and that there is no satisfying ultimate explanation for either. One is coincidence, the other is déjà vu. It doesn’t matter if you’re Queen Elizabeth, one of the thirty-three miners rescued in Chile, a South Korean housewife or a migrant herder in Zimbabwe—in the span of 365 days you will pretty much have two déjà vus as well as one coincidence that makes you stop and say, “Wow, that was a coincidence.”

The thing about coincidence is that when you imagine the umpteen trillions of coincidences that can happen at any given moment, the fact is that, in practice, coincidences almost never do occur. Coincidences are actually so rare that when they do occur they are, in fact, memorable. This suggests to me that the universe is designed to ward off coincidence whenever possible—the universe hates coincidence—I don’t know why—it just seems to be true. So when a coincidence happens, that coincidence had to work awfully hard to escape the system. There’s a message there. What is it? Look. Look harder. Mathematicians perhaps have a theorem for this, and if they do, it might, by default, be a theorem for something larger than what they think it is.

What’s both eerie and interesting to me about déjà vus is that they occur almost like metronomes throughout our lives, about one every six months, a poetic timekeeping device that, at the very least, reminds us we are alive. I can safely assume that my thirteen-year-old niece, Stephen Hawking, and someone working in a Beijing luggage-making factory each experience two déjà vus a year. Not one. Not three. Two.

The underlying biodynamics of déjà vus is probably ascribable to some sort of tingling neurons in a certain part of the brain, yet this doesn’t tell us why they exist. They seem to me to be a signal from a larger point of view that wants to remind us that our lives are distinct, that they have meaning, and that they occur throughout a span of time. We are important, and what makes us valuable to the universe is our sentience and our curse and blessing of perpetual self-awareness.

MIT social scientist Sherry Turkle, author of the cyber-dystopian Alone Together: Why We Expect More from Technology and Less from Each Other, considers the role of “transitional objects” in our relationship with technology:

I was a student in psychology in the mid-1970s at Harvard University. The grand experiment that had been “Social Relations” at Harvard had just crumbled. Its ambition had been to bring together the social sciences in one department, indeed, most in one building, William James Hall. Clinical psychology, experimental psychology, physical and cultural anthropology, and sociology, all of these would be in close quarters and intense conversation.

But now, everyone was back in their own department, on their own floor. From my point of view, what was most difficult was that the people who studied thinking were on one floor and the people who studied feeling were on another.

In this Balkanized world, I took a course with George Goethals in which we learned about the passion in thought and the logical structure behind passion. Goethals, a psychologist who specialized in adolescence, was teaching a graduate seminar in psychoanalysis. … Several classes were devoted to the work of Donald Winnicott and his notion of the transitional object. Winnicott called transitional the objects of childhood—the stuffed animals, the bits of silk from a baby blanket, the favorite pillows—that the child experiences as both part of the self and of external reality. Winnicott writes that such objects mediate between the child’s sense of connection to the body of the mother and a growing recognition that he or she is a separate being. The transitional objects of the nursery—all of these are destined to be abandoned. Yet, says Winnicott, they leave traces that will mark the rest of life. Specifically, they influence how easily an individual develops a capacity for joy, aesthetic experience, and creative playfulness. Transitional objects, with their joint allegiance to self and other, demonstrate to the child that objects in the external world can be loved.


Winnicott believes that during all stages of life we continue to search for objects we experience as both within and outside the self. We give up the baby blanket, but we continue to search for the feeling of oneness it provided. We find them in moments of feeling “at one” with the world, what Freud called the “oceanic feeling.” We find these moments when we are at one with a piece of art, a vista in nature, a sexual experience.

As a scientific proposition, the theory of the transitional object has its limitations. But as a way of thinking about connection, it provides a powerful tool for thought. Most specifically, it offered me a way to begin to understand the new relationships that people were beginning to form with computers, something I began to study in the late 1970s and early 1980s. From the very beginning, as I began to study the nascent digital culture, I could see that computers were not “just tools.” They were intimate machines. People experienced them as part of the self, separate but connected to the self.


When, in the late 1970s, I began to study the computer’s special evocative power, my time with George Goethals and the small circle of Harvard graduate students immersed in Winnicott came back to me. Computers served as transitional objects. They bring us back to the feelings of being “at one” with the world. Musicians often hear the music in their minds before they play it, experiencing the music from within and without. The computer similarly can be experienced as an object on the border between self and not-self. Just as musical instruments can be extensions of the mind’s construction of sound, computers can be extensions of the mind’s construction of thought.

This way of thinking about the computer as an evocative object puts us on the inside of a new inside joke. For when psychoanalysts talked about object relations, they had always been talking about people. From the beginning, people saw computers as “almost-alive” or “sort of alive.” With the computer, object relations psychoanalysis can be applied to, well, objects. People feel at one with video games, with lines of computer code, with the avatars they play in virtual worlds, with their smartphones. Classical transitional objects are meant to be abandoned, their power recovered in moments of heightened experience. When our current digital devices—our smartphones and cellphones—take on the power of transitional objects, a new psychology comes into play. These digital objects are never meant to be abandoned. We are meant to become cyborg.

Anthropologist Scott Atran considers the role of the absurd in religion and cause-worship, and the Beckett-like notion of the “ineffable”:

The notion of a transcendent force that moves the universe or history or determines what is right and good—and whose existence is fundamentally beyond reason and immune to logical or empirical disproof—is the simplest, most elegant, and most scientifically baffling phenomenon I know of. Its power and absurdity perturbs mightily, and merits careful scientific scrutiny. In an age where many of the most volatile and seemingly intractable conflicts stem from sacred causes, scientific understanding of how to best deal with the subject has also never been more critical.

Call it love of Group or God, or devotion to an Idea or Cause, it matters little in the end. This is “the privilege of absurdity; to which no living creature is subject, but man only” of which Hobbes wrote in Leviathan. In The Descent of Man, Darwin cast it as the virtue of “morality,” with which winning tribes are better endowed in history’s spiraling competition for survival and dominance. Unlike other creatures, humans define the groups to which they belong in abstract terms. Often they strive to achieve a lasting intellectual and emotional bonding with anonymous others, and seek to heroically kill and die, not in order to preserve their own lives or those of people they know, but for the sake of an idea—the conception they have formed of themselves, of “who we are.”


There is an apparent paradox that underlies the formation of large-scale human societies. The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today’s nations, transnational movements, and other “imagined communities” of fictive kin — seems to depend upon what Kierkegaard deemed this “power of the preposterous” … Humankind’s strongest social bonds and actions, including the capacity for cooperation and forgiveness, and for killing and allowing oneself to be killed, are born of commitment to causes and courses of action that are “ineffable,” that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for costs and consequences. The more materially inexplicable one’s devotion and commitment to a sacred cause — that is, the more absurd—the greater the trust others place in it and the more that trust generates commitment on their part.


Religion and the sacred, banned so long from reasoned inquiry by ideological bias of all persuasions—perhaps because the subject is so close to who we want or don’t want to be — is still a vast, tangled and largely unexplored domain for science, however simple and elegant for most people everywhere in everyday life.

Psychologist Timothy Wilson, author of the excellent Redirect: The Surprising New Science of Psychological Change, explores the Möbius loop of self-perception and behavior:

My favorite is the idea that people become what they do. This explanation of how people acquire attitudes and traits dates back to the philosopher Gilbert Ryle, but was formalized by the social psychologist Daryl Bem in his self-perception theory. People draw inferences about who they are, Bem suggested, by observing their own behavior.

Self-perception theory turns common wisdom on its head. … Hundreds of experiments have confirmed the theory and shown when this self-inference process is most likely to operate (e.g., when people believe they freely chose to behave the way they did, and when they weren’t sure at the outset how they felt).

Self-perception theory is elegant in its simplicity. But it is also quite deep, with important implications for the nature of the human mind. Two other powerful ideas follow from it. The first is that we are strangers to ourselves. After all, if we knew our own minds, why would we need to guess what our preferences are from our behavior? If our minds were an open book, we would know exactly how honest we are and how much we like lattes. Instead, we often need to look to our behavior to figure out who we are. Self-perception theory thus anticipated the revolution in psychology in the study of human consciousness, a revolution that revealed the limits of introspection.

But it turns out that we don’t just use our behavior to reveal our dispositions—we infer dispositions that weren’t there before. Often, our behavior is shaped by subtle pressures around us, but we fail to recognize those pressures. As a result, we mistakenly believe that our behavior emanated from some inner disposition.

Harvard physician and social scientist Nicholas Christakis, who also appeared in Brockman’s Culture: Leading Scientists Explore Societies, Art, Power, and Technology, offers one of the most poetic answers, tracing the history of our understanding of why the sky is blue:

My favorite explanation is one that I sought as a boy. It is the explanation for why the sky is blue. It’s a question every toddler asks, but it is also one that most great scientists since the time of Aristotle, including da Vinci, Newton, Kepler, Descartes, Euler, and even Einstein, have asked.

One of the things I like most about this explanation—beyond the simplicity and overtness of the question itself—is how long it took to arrive at the correct answer, how many centuries of effort, and how many branches of science it involves.

Aristotle is the first, so far as we know, to ask the question about why the sky is blue, in the treatise On Colors; his answer is that the air close at hand is clear and the deep air of the sky is blue the same way a thin layer of water is clear but a deep well of water looks black. This idea was still being echoed in the 13th century by Roger Bacon. Kepler too reinvented a similar explanation, arguing that the air merely looks colorless because the tint of its color is so faint when in a thin layer. But none of them offered an explanation for the blueness of the atmosphere. So the question actually has two, related parts: why the sky has any color, and why it has a blue color.


The sky is blue because the incident light interacts with the gas molecules in the air in such a fashion that more of the light in the blue part of the spectrum is scattered, reaching our eyes on the surface of the planet. All the frequencies of the incident light can be scattered this way, but the high-frequency (short wavelength) blue is scattered more than the lower frequencies in a process known as Rayleigh scattering, described in the 1870s. John William Strutt, Lord Rayleigh, who also won the Nobel Prize in physics in 1904 for the discovery of argon, demonstrated that, when the wavelength of the light is on the same order as the size of the gas molecules, the intensity of scattered light varies inversely with the fourth power of its wavelength. Shorter wavelengths like blue (and violet) are scattered more than longer ones. It’s as if all the molecules in the air preferentially glow blue, which is what we then see everywhere around us.
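The fourth-power law Christakis cites is easy to check numerically. The wavelength figures below are rough textbook values, not numbers from the essay; the calculation simply shows how steeply scattering strength climbs as wavelength shrinks:

```python
# Rayleigh scattering: scattered intensity varies as 1 / wavelength^4
wavelengths_nm = {"violet": 400, "blue": 450, "green": 550, "red": 700}

def relative_scattering(wavelength_nm, reference_nm=700):
    """Scattering strength relative to red light at 700 nm."""
    return (reference_nm / wavelength_nm) ** 4

for color, wl in wavelengths_nm.items():
    print(f"{color} ({wl} nm): {relative_scattering(wl):.2f}x red")
# violet scatters roughly 9.4x as strongly as red, blue roughly 5.9x
```

The output makes the essay’s final twist concrete: violet is scattered even more than blue, so only the biology of the eye — more sensitive to blue than violet — explains why the sky looks blue rather than violet.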

By this logic, the sky should appear violet, since violet light is scattered even more than blue light. But it does not appear violet to us because of the final, biological part of the puzzle, which is the way our eyes are built: they are more sensitive to blue than to violet light.

The explanation for why the sky is blue involves so much of the natural sciences: the colors within the visual spectrum, the wave nature of light, the angle at which sunlight hits the atmosphere, the mathematics of scattering, the size of nitrogen and oxygen molecules, and even the way human eyes perceive color. It’s most of science in a question that a young child can ask.

Nature editor-in-chief Philip Campbell considers the beauty of a sunrise, echoing Richard Feynman’s thoughts on science and mystery and Denis Dutton’s evolutionary theory of beauty:

Scientific understanding enhances rather than destroys nature’s beauty. All of these explanations for me contribute to the beauty in a sunrise.

Ah, but what is the explanation of beauty? Brain scientists grapple with nuclear magnetic resonance images—a recent meta-analysis indicated that all of our aesthetic judgements seem to include the use of neural circuits in the right anterior insula, an area of the cerebral cortex typically associated with visceral perception. Perhaps our sense of beauty is a by-product of the evolutionary maintenance of the senses of belonging and of disgust. For what it’s worth, as exoplanets pour out of our telescopes, I believe that we will encounter astrochemical evidence for some form of extraterrestrial organism well before we achieve a deep, elegant or beautiful explanation of human aesthetics.

But my favorite essay comes from social media researcher and general genius Clay Shirky, author of Cognitive Surplus: Creativity and Generosity in a Connected Age, who considers the propagation of ideas in culture and the problems with Richard Dawkins’s notion of the meme in a context of combinatorial creativity:

Something happens to keep one group of people behaving in a certain set of ways. In the early 1970s, both E.O. Wilson and Richard Dawkins noticed that the flow of ideas in a culture exhibited similar patterns to the flow of genes in a species—high flow within the group, but sharply reduced flow between groups. Dawkins’ response was to assume a hypothetical unit of culture called the meme, though he also made its problems clear—with genetic material, perfect replication is the norm, and mutations rare. With culture, it is the opposite — events are misremembered and then misdescribed, quotes are mangled, even jokes (pure meme) vary from telling to telling. The gene/meme comparison remained, for a generation, an evocative idea of not much analytic utility.

Dan Sperber has, to my eye, cracked this problem. In a slim, elegant volume of 15 years ago with the modest title Explaining Culture, he outlined a theory of culture as the residue of the epidemic spread of ideas. In this model, there is no meme, no unit of culture separate from the blooming, buzzing confusion of transactions. Instead, all cultural transmission can be reduced to one of two types: making a mental representation public, or internalizing a mental version of a public presentation. As Sperber puts it, “Culture is the precipitate of cognition and communication in a human population.”

Sperber’s two primitives—externalization of ideas, internalization of expressions—give us a way to think of culture not as a big container people inhabit, but rather as a network whose traces, drawn carefully, let us ask how the behaviors of individuals create larger, longer-lived patterns. Some public representations are consistently learned and then re-expressed and re-learned—Mother Goose rhymes, tartan patterns, and peer review have all survived for centuries. Others move from ubiquitous to marginal in a matter of years. . . .


This is what is so powerful about Sperber’s idea: culture is a giant, asynchronous network of replication, ideas turning into expressions which turn into other, related ideas. … Sperber’s idea also suggests increased access to public presentation of ideas will increase the dynamic range of culture overall.

Characteristically thought-provoking and reliably cross-disciplinary, This Explains Everything is a must-read in its entirety.

Donating = Loving

Bringing you (ad-free) Brain Pickings takes hundreds of hours each month. If you find any joy and stimulation here, please consider becoming a Supporting Member with a recurring monthly donation of your choosing, between a cup of tea and a good dinner:

You can also become a one-time patron with a single donation in any amount:

Brain Pickings has a free weekly newsletter. It comes out on Sundays and offers the week’s best articles. Here’s what to expect. Like? Sign up.

21 JANUARY, 2013

Remembering Aaron Swartz: David Foster Wallace on the Meaning of Life


“Worship your intellect, being seen as smart — you will end up feeling stupid, a fraud, always on the verge of being found out.”

This past weekend, I attended the heartbreaking memorial for open-access activist Aaron Swartz, who for the past two years had been relentlessly and unscrupulously prosecuted for making academic journal articles freely available online and who had taken his own life a week prior. A speaker at the service read a piece by one of Aaron’s personal heroes, David Foster Wallace — an excerpt from Wallace’s famous Kenyon College commencement address, the only public talk he ever gave on his views of life, which was eventually adapted into a slim book titled This Is Water: Some Thoughts, Delivered on a Significant Occasion, about Living a Compassionate Life (public library).

I’ve written about the speech previously, but the particular excerpt read at Aaron’s memorial resonates with chilling clarity in light of recent meditations on the meaning of life, how to find one’s purpose, morality vs. intelligence, and whether money can really buy happiness. Wallace remarks:

If you worship money and things — if they are where you tap real meaning in life — then you will never have enough. Never feel you have enough. It’s the truth. Worship your own body and beauty and sexual allure and you will always feel ugly, and when time and age start showing, you will die a million deaths before they finally plant you. On one level, we all know this stuff already — it’s been codified as myths, proverbs, clichés, bromides, epigrams, parables: the skeleton of every great story. The trick is keeping the truth up-front in daily consciousness. Worship power — you will feel weak and afraid, and you will need ever more power over others to keep the fear at bay. Worship your intellect, being seen as smart — you will end up feeling stupid, a fraud, always on the verge of being found out. And so on.

Also speaking at the memorial, data visualization godfather Edward Tufte captured the essence of Aaron’s character:

Aaron’s unique quality was that he was marvelously and vigorously different. There’s a scarcity of that.

Hear This Is Water in its entirety, with notable excerpts, here. Help fight the broken system that mauled Aaron here. Honor his legacy with a contribution to Creative Commons here.

Portrait: Aaron Swartz by Fred Benenson under Creative Commons
