Brain Pickings

Search results for “popper”

The Writing of “Silent Spring”: Rachel Carson and the Culture-Shifting Courage to Speak Inconvenient Truth to Power

“It is, in the deepest sense, a privilege as well as a duty to have the opportunity to speak out — to many thousands of people — on something so important.”


“Life and Reality are not things you can have for yourself unless you accord them to all others,” philosopher Alan Watts wrote in the 1950s as he contemplated the interconnected nature of the universe. What we may now see as an elemental truth of existence was then a notion both foreign and frightening to the Western mind. But it was a scientist, not a philosopher, who levered this monumental shift in consciousness: Rachel Carson (May 27, 1907–April 14, 1964), a Copernicus of biology who ejected the human animal from its hubristic place at the center of Earth’s ecological cosmos and recast it as one of myriad organisms, all worthy of wonder, all imbued with life and reality. Her lyrical writing rendered her not a mere translator of the natural world, but an alchemist transmuting the steel of science into the gold of wonder. The message of her iconic Silent Spring (public library) rippled across public policy and the popular imagination — it led to the creation of the Environmental Protection Agency, inspired generations of activists, and moved Joni Mitchell to write lyrics as beloved as “Hey farmer farmer — / Put away the DDT / Give me spots on my apples, / but leave me the birds and the bees. / Please!”

A scientist without a Ph.D. or a Y chromosome or academic affiliation became the most powerful voice of resistance against ruinous public policy driven by the self-interest of government and industry, against the hauteur and short-sightedness threatening to destroy this precious pale blue dot which we, along with countless other animals, call home.

Carson had grown up in a picturesque but impoverished village in Pennsylvania. It was there, amid a tumultuous family environment, that she fell in love with nature and grew particularly enchanted with birds. A voracious reader and gifted writer from a young age, she became a published author at the age of ten, when a story of hers appeared in a children’s literary magazine. She entered the Pennsylvania College for Women with the intention of becoming a writer, but a zestful zoology professor — herself a rare specimen as a female scientist in that era — rendered young Carson besotted with biology. A scholarship allowed her to pursue a Master’s degree in zoology and genetics at Johns Hopkins University, but when her already impecunious family fell on hard times during the Great Depression, she was forced to leave the university in search of a full-time paying job before completing her doctorate.

After working as a lab assistant for a while, she began writing for the Baltimore Sun and was eventually hired as a junior aquatic biologist for what would later become the U.S. Fish and Wildlife Service. Her uncommon gift for writing was soon recognized and Carson was tasked with editing other scientists’ field reports, then promoted to editor in chief for the entire agency. Out of this necessity to reconcile science and writing was born her self-invention as a scientist who refused to give up on writing and a writer who refused to give up on science — the same refusal that marks today’s greatest poets of science.

Rachel Carson at her microscope and her typewriter

In 1935, 28-year-old Carson was asked to write a brochure for the Fisheries Bureau. When she turned in something infinitely more poetic than her supervisor had envisioned, he asked her to rewrite the brochure but encouraged her to submit the piece as an essay for The Atlantic Monthly. She did. It was accepted and published as “Undersea” in 1937 — a first-of-its-kind, immensely lyrical journey into the science of the ocean floor, inviting an understanding of Earth from a nonhuman perspective. Readers and publishers were instantly smitten. Carson, by then the sole provider for her mother and her two orphaned nieces after her older sister’s death, expanded her Atlantic article into her first book, Under the Sea-Wind — the culmination of a decade of her oceanographic research, which rendered her an overnight literary success.

Against towering cultural odds, her books about the sea established her — once a destitute girl from landlocked Pennsylvania — as the most celebrated science writer of her time.

But the more Carson studied and wrote about nature, the more wary she became of humanity’s rampant quest to dominate it. Witnessing the devastation of the atomic bomb awakened her to the unintended consequences of science unmoored from morality, of a hysterical enthusiasm for technology that deafened humanity to the inner voice of ethics. In her 1952 acceptance speech for the John Burroughs Medal, she concretized her credo:

It seems reasonable to believe — and I do believe — that the more clearly we can focus our attention on the wonders and realities of the universe about us the less taste we shall have for the destruction of our race. Wonder and humility are wholesome emotions, and they do not exist side by side with a lust for destruction.

Photograph by Charles O’Rear from the Environmental Protection Agency’s Documerica project (U.S. National Archives)

One of the consequences of wartime science and technology was the widespread use of DDT, initially intended for protecting soldiers from malaria-bearing mosquitoes. After the end of the war, the toxic chemical was lauded as a miracle substance. People were sprayed down with DDT to ward off disease, and airplanes doused agricultural plots to decimate pests and maximize crop yields. It was neither uncommon nor disquieting to see a class of schoolchildren eating their lunch while an airplane aiming at a nearby field sprinkled them with DDT. A sort of blind faith enveloped the use of these pesticides, with an indifferent government and an incurious public raising no questions about their unintended consequences.

In January of 1958, Carson received a letter from an old writer friend named Olga Owens Huckins, alerting her that the aerial spraying of DDT had devastated a local wildlife sanctuary. Huckins described the ghastly deaths of birds, claws clutched to their breasts and bills agape in agony. This local tragedy was the final straw in Carson’s decade-long collection of what she called her “poison-spray material” — a dossier of evidence for the harmful, often deadly effects of toxic chemicals on wildlife and human life. That May, she signed a contract with Houghton Mifflin for what would become Silent Spring in 1962 — the firestarter of a book that ignited the conservation movement and awakened the modern environmental consciousness.

Photograph by Charles O’Rear from the Environmental Protection Agency’s Documerica project (U.S. National Archives)

But the book also spurred violent pushback from those most culpable in the destruction of nature — a heedless government that had turned a willfully blind eye to its regulatory responsibilities and an avaricious agricultural and chemical industry determined to maximize profits at all costs. Those inconvenienced by the truths Carson exposed immediately attacked her for her indictment of elected officials’ and corporations’ deliberate deafness to fact. They used every means at their disposal — a propaganda campaign designed to discredit her, litigious bullying of her publisher, and the most frequent accusation of all: that of being a woman. Former Secretary of Agriculture Ezra Taft Benson, who would later become Prophet of the Mormon Church, asked why “a spinster with no children” was so concerned about genetics. He didn’t hesitate to offer his own theory: because she was a Communist. (The lazy hand-grenade of “spinster” was often hurled at Carson in an attempt to erode her credibility, as if there were any correlation between a scientist’s home life and her expertise — never mind that, as it happened, Carson did have one of the most richly rewarding relationships a human being could hope for, albeit not the kind that conformed to the era’s narrow accepted modalities.)

Photograph by Marc St. Gil from the Environmental Protection Agency’s Documerica project (U.S. National Archives)

Carson withstood the criticism with composure and confidence, shielded by the integrity of her facts. But another battle raged invisible to the public eye — she was dying.

She had been diagnosed with cancer in 1960; it had since metastasized due to her doctor’s negligence. In 1963, when Silent Spring stirred President Kennedy’s attention and he called for a Congressional hearing to investigate and regulate the use of pesticides, Carson didn’t hesitate to testify even as her body was giving out from the debilitating pain of the disease and the wearying radiation treatments. With her testimony as a pillar, JFK and his Science Advisory Committee invalidated her critics’ arguments, heeded Carson’s cautionary call to reason, and created the first federal policies designed to protect the planet.

Carson endured the attacks — those of her cancer and those of her critics — with unwavering heroism. She met the former with a biologist’s calm acceptance of the cycle of life and had anticipated the latter all along. She was a spirited idealist, but she wasn’t a naïve one — from the outset, she was acutely aware that her book was a clarion call for nothing less than a revolution and that it was her moral duty to be the revolutionary she felt called to be. Just a month after signing the book contract, she articulates this awareness in a letter found in Always, Rachel: The Letters of Rachel Carson and Dorothy Freeman, 1952–1964 (public library) — the record of her beautiful and unclassifiable relationship with her dearest friend and beloved.

Carson writes to Freeman:

I know you dread the unpleasantness that will inevitably be associated with [the book’s] publication. That I can understand, darling. But it is something I have taken into account; it will not surprise me! You do know, I think, how deeply I believe in the importance of what I am doing. Knowing what I do, there would be no future peace for me if I kept silent… It is, in the deepest sense, a privilege as well as a duty to have the opportunity to speak out — to many thousands of people — on something so important.

Photograph by Boyd Norton from the Environmental Protection Agency’s Documerica project (U.S. National Archives)

In that sense, the eventual title of Silent Spring was a dual commentary on how human hubris is robbing Earth of its symphonic aliveness and on the moral inadmissibility of remaining silent about the destructive forces driving this loss. Carson upheld that sense of duty while confronting her own creaturely finitude as she underwent rounds of grueling cancer treatment. In a letter to Freeman from the autumn of 1959, she reports:

Mostly, I feel fairly good but I do realize that after several days of concentrated work on the book I’m suddenly no good at all for several more. Some people assume only physical work is tiring — I guess because they use their minds little! Friday night … my exhaustion invaded every cell of my body, I think, and really kept me from sleeping well all night.

And yet mind rose over matter as Carson mobilized every neuron to keep up with her creative vitality. In another letter from the same month, she writes to Freeman about her “happiness in the progress of The Book”:

The other day someone asked Leonard Bernstein about his inexhaustible energy and he said “I have no more energy than anyone who loves what he is doing.” Well, I’m afraid mine has to be recharged at times, but anyway I do seem just now to be riding the crest of a wave of enthusiasm and creativity, and although I’m going to bed late and often rising in very dim light to get in an hour of thinking and organizing before my household stirs, my weariness seems easily banished.

Stirring her household was Roger — her niece’s orphaned nine-year-old son, whom Carson had adopted and was single-parenting, doing all the necessary cooking, cleaning, and housework while writing Silent Spring and undergoing endless medical treatments. All of this she did with unwavering devotion to the writing and the larger sense of moral obligation that animated her. In early March of 1961, in the midst of another incapacitating radiation round, she writes to Freeman:

About the book, I sometimes have a feeling (maybe 100% wishful thinking) that perhaps this long period away from active work will give me the perspective that was so hard to attain, the ability to see the woods in the midst of the confusing multitude of trees.

With an eye to Albert Schweitzer’s famous 1954 Nobel Prize acceptance speech, which appeared under the title “The Problem of Peace” and made the unnerving assertion that “we should all of us realize that we are guilty of inhumanity” in reflecting on the circumstances that led to the two world wars, she adds:

Sometimes … I want [the book] to be a much shortened and simplified statement, doing for this subject (if this isn’t too presumptuous a comparison) what Schweitzer did in his Nobel Prize address for the allied subject of radiation.

In June of that year, Carson shares with Freeman a possible opening sentence, which didn’t end up being the final one but which nonetheless synthesizes the essence of her groundbreaking book:

This is a book about man’s war against nature, and because man is part of nature it is also inevitably a book about man’s war against himself.

At that point, Carson was considering The War Against Nature and At War with Nature as possible titles, but settled on Silent Spring in September — a title inspired by Keats, Carson’s favorite poet: “The sedge is withered from the lake, / And no birds sing.”

Four months later, in January of 1962, she reports to Freeman the completion of her Herculean feat:

I achieved the goal of sending the 15 chapters to Marie [Rodell, Carson’s literary agent] — like reaching the last station before the summit of Everest.

Photograph by Bill Reaves from the Environmental Protection Agency’s Documerica project (U.S. National Archives)

Rodell had sent a copy of the manuscript to longtime New Yorker editor William Shawn, who gave Carson the greatest and most gratifying surprise of her life. Struggling to override her typical self-effacing humility, she relays the episode to Freeman:

Last night about 9 o’clock the phone rang and a mild voice said, “This is William Shawn.” If I talk to you tonight you will know what he said and I’m sure you can understand what it meant to me. Shamelessly, I’ll repeat some of his words — “a brilliant achievement” — “you have made it literature” “full of beauty and loveliness and depth of feeling.” … I suddenly feel full of what Lois once called “a happy turbulence.”

In an exquisite letter to Freeman penned later that day — a letter that is itself a literary masterpiece — Carson echoes Zadie Smith’s assertion that the best reason for writing books is “to experience those four and a half hours after you write the final word.” She writes:

After Roger was asleep I took Jeffie [Carson’s cat] into the study and played the Beethoven violin concerto — one of my favorites, you know. And suddenly the tensions of four years were broken and I got down and put my arms around Jeffie and let the tears come. With his little warm, rough tongue he told me that he understood. I think I let you see last summer what my deeper feelings are about this when I said I could never again listen happily to a thrush song if I had not done all I could. And last night the thoughts of all the birds and other creatures and the loveliness that is in nature came to me with such a surge of deep happiness, that now I had done what I could — I had been able to complete it — now it had its own life!

Silent Spring, original 1962 edition.

Silent Spring was published on September 27, 1962 and adrenalized a new public awareness of the fragile interconnectedness of this living world. Several months later, CBS host Eric Sevareid captured its impact most succinctly in lauding Carson as “a voice of warning and a fire under the government.” In the book, she struck a mighty match:

When the public protests, confronted with some obvious evidence … it is fed little tranquilizing pills of half truth.

How tragic to observe that in the half-century since, our so-called leaders have devolved from half-truths to “alternative facts” — that is, to whole untruths that fail the ultimate criterion for truth: a correspondence with reality.

Carson, who was posthumously awarded the Presidential Medal of Freedom, never lived to see the sea change of policy and public awareness that her book precipitated. Today, as a new crop of political and corporate interests threatens her hard-won legacy of environmental consciousness, I think of that piercing Adrienne Rich line channeling the great 16th-century Danish astronomer Tycho Brahe, another scientist who fundamentally revolutionized our understanding of the universe and our place in it: “Let me not seem to have lived in vain.”

Let’s not let Rachel Carson seem to have lived in vain.

UPDATE: For more on Carson, her epoch-making cultural contribution, and her unusual private life, she is the crowning figure in my book Figuring.


This Idea Must Die: Some of the World’s Greatest Thinkers Each Select a Major Misconception Holding Us Back

From the self to left brain vs. right brain to romantic love, a catalog of broken theories that hold us back from the conquest of Truth.

“To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact,” asserted Charles Darwin in one of the eleven rules for critical thinking known as Prospero’s Precepts. If science and human knowledge progress in leaps and bounds of ignorance, then the recognition of error and the transcendence of falsehood are the springboard for the leaps of progress. That’s the premise behind This Idea Must Die: Scientific Theories That Are Blocking Progress (public library) — a compendium of answers Edge founder John Brockman collected by posing his annual question — “What scientific idea is ready for retirement?” — to 175 of the world’s greatest scientists, philosophers, and writers. Among them are Nobel laureates, MacArthur geniuses, and celebrated minds like theoretical physicist and mathematician Freeman Dyson, biological anthropologist Helen Fisher, cognitive scientist and linguist Steven Pinker, media theorist Douglas Rushkoff, philosopher Rebecca Newberger Goldstein, psychologist Howard Gardner, social scientist and technology scholar Sherry Turkle, actor and author Alan Alda, futurist and Wired founding editor Kevin Kelly, and novelist, essayist, and screenwriter Ian McEwan.

Brockman paints the backdrop for the inquiry:

Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858–1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals.

Many of the answers are redundant — but this is a glorious feature rather than a bug of Brockman’s series, for its chief reward is precisely this cumulative effect of discerning the zeitgeist of ideas with which some of our era’s greatest minds are tussling in synchronicity. They point to such retirement-ready ideas as IQ, the self, race, the left brain vs. right brain divide, human nature and essentialism, free will, and even science itself. What emerges is the very thing Carl Sagan deemed vital to truth in his Baloney Detection Kit — a “substantive debate on the evidence by knowledgeable proponents of all points of view.”

Illustration by Lizi Boyd from ‘Flashlight.’

One of the most profound undercurrents across the answers has to do with our relationship with knowledge, certainty, and science itself. And one of the most profound contributions in that regard comes from MacArthur fellow Rebecca Newberger Goldstein, a philosopher who thinks deeply and dimensionally about some of the most complex questions of existence. Assailing the idea that science makes philosophy obsolete — that science is the transformation of “philosophy’s vagaries into empirically testable theories” and philosophy merely the “cold-storage room in which questions are shelved until the sciences get around to handling them” — Goldstein writes:

The obsolescence of philosophy is often taken to be a consequence of science. After all, science has a history of repeatedly inheriting — and definitively answering — questions over which philosophers have futilely hemmed and hawed for unconscionable amounts of time.

The gravest problem with this theory, Goldstein notes, is its internal incoherence:

You can’t argue for science making philosophy obsolete without indulging in philosophical arguments… When pressed for an answer to the so-called demarcation problem, scientists almost automatically reach for the notion of “falsifiability” first proposed by Karl Popper. His profession? Philosophy. But whatever criterion you offer, its defense is going to implicate you in philosophy.

This is something that Dorion Sagan, Carl Sagan’s son, has previously addressed, but Goldstein brings to it unparalleled elegance of thought and eloquence of expression:

A triumphalist scientism needs philosophy to support itself. And the lesson here should be generalized. Philosophy is joined to science in reason’s project. Its mandate is to render our views and our attitudes maximally coherent.

In doing so, she argues, philosophy provides “the reasoning that science requires in order to claim its image as descriptive.” As a proponent of the vital difference between information and wisdom — the former being the material of science, the latter the product of philosophy, and knowledge the change agent that transmutes one into the other — I find the provocative genius of Goldstein’s conclusion enormously invigorating:

What idea should science retire? The idea of “science” itself. Let’s retire it in favor of the more inclusive “knowledge.”

Neuroscientist Sam Harris, author of the indispensable Waking Up: A Guide to Spirituality Without Religion, echoes this by choosing our narrow definition of “science” as the idea to be put to rest:

Search your mind, or pay attention to the conversations you have with other people, and you’ll discover that there are no real boundaries between science and philosophy — or between those disciplines and any other that attempts to make valid claims about the world on the basis of evidence and logic. When such claims and their methods of verification admit of experiment and/or mathematical description, we tend to say our concerns are “scientific”; when they relate to matters more abstract, or to the consistency of our thinking itself, we often say we’re being “philosophical”; when we merely want to know how people behaved in the past, we dub our interests “historical” or “journalistic”; and when a person’s commitment to evidence and logic grows dangerously thin or simply snaps under the burden of fear, wishful thinking, tribalism, or ecstasy, we recognize that he’s being “religious.”

The boundaries between true intellectual disciplines are currently enforced by little more than university budgets and architecture… The real distinction we should care about — the observation of which is the sine qua non of the scientific attitude — is between demanding good reasons for what one believes and being satisfied with bad ones.

In a sentiment that calls to mind both Richard Feynman’s spectacular ode to a flower and Carl Sagan’s enduring wisdom on our search for meaning, Harris applies this model of knowledge to one of the great mysteries of science and philosophy — consciousness:

Even if one thinks the human mind is entirely the product of physics, the reality of consciousness becomes no less wondrous, and the difference between happiness and suffering no less important. Nor does such a view suggest that we’ll ever find the emergence of mind from matter fully intelligible; consciousness may always seem like a miracle. In philosophical circles, this is known as “the hard problem of consciousness” — some of us agree that this problem exists, some of us don’t. Should consciousness prove conceptually irreducible, remaining the mysterious ground for all we can conceivably experience or value, the rest of the scientific worldview would remain perfectly intact.

The remedy for all this confusion is simple: We must abandon the idea that science is distinct from the rest of human rationality. When you are adhering to the highest standards of logic and evidence, you are thinking scientifically. And when you’re not, you’re not.

Illustration from ‘Once Upon an Alphabet’ by Oliver Jeffers.

Psychologist Susan Blackmore, who studies this very problem — famously termed “the hard problem of consciousness” by philosopher David Chalmers in 1996 — benches the idea that there are neural correlates of consciousness. Tempting as it may be to interpret neural activity as the wellspring of that special something we call “consciousness” or “subjective experience,” as opposed to the “unconscious” rest of the brain, Blackmore admonishes that such dualism is past its cultural expiration date:

Dualist thinking comes naturally to us. We feel as though our conscious experiences were of a different order from the physical world. But this is the same intuition that leads to the hard problem seeming hard. It’s the same intuition that produces the philosopher’s zombie — a creature identical to me in every way except that it has no consciousness. It’s the same intuition that leads people to write, apparently unproblematically, about brain processes being either conscious or unconscious… Intuitively plausible as it is, this is a magic difference. Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it’s an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents, and so build up the false idea of a persisting self that has consciousness and free will.

Much of the allure of identifying such neural correlates of consciousness, Blackmore argues, lies in cultural mythologies rooted in fantasy rather than fact:

While people are awake they must always be conscious of something or other. And that leads along the slippery path to the idea that if we knew what to look for, we could peer inside someone’s brain and find out which processes were the conscious ones and which the unconscious ones. But this is all nonsense. All we’ll ever find are the neural correlates of thoughts, perceptions, memories, and the verbal and attentional processes that lead us to think we’re conscious.

When we finally have a better theory of consciousness to replace these popular delusions, we’ll see that there’s no hard problem, no magic difference, and no NCCs.

Illustration by Rob Hunter from ‘A Graphic Cosmogony.’

In a related grievance, social psychologist Bruce Hood — author of the uncomfortable yet strangely comforting The Self Illusion — does away with the notion of the self. Half a century after Alan Watts enlisted Eastern philosophy in this mission, Hood presents a necessary integration of science and philosophy:

It seems almost redundant to call for the retirement of the free willing self, as the idea is neither scientific nor is this the first time the concept has been dismissed for lack of empirical support. The self did not have to be discovered; it’s the default assumption most of us experience, so it wasn’t really revealed by methods of scientific inquiry.

[…]

Yet the self, like a conceptual zombie, refuses to die. It crops up again and again in recent theories of decision making, as an entity with free will which can be depleted. It reappears as an interpreter in cognitive neuroscience, as able to integrate parallel streams of information arising from separable neural substrates. Even if these appearances of the self are understood to be convenient ways of discussing the emergent output of multiple parallel processes, students of the mind continue to implicitly endorse the idea that there’s a decision maker, an experiencer, a point of origin.

We know the self is constructed because it can be so easily deconstructed — through damage, disease, and drugs. It must be an emergent property of a parallel system processing input, output, and internal representations. It’s an illusion because it feels so real, but that experience is not what it seems. The same is true for free will. Although we can experience the mental anguish of making a decision… the choices and decisions we make are based on situations that impose on us. We don’t have the free will to choose the experiences that have shaped our decisions.

[…]

By abandoning the free willing self, we’re forced to reexamine the factors that are truly behind our thoughts and behavior and the way they interact, balance, override, and cancel out. Only then will we begin to make progress in understanding how we really operate.

Illustration by Ben Newman from ‘A Graphic Cosmogony.’

Among the most provocative answers, in fact, is one examining the factors that underlie one of the most complex and seemingly human of our experiences: love. Biological anthropologist Helen Fisher, who studies the brain on love, points to romantic love and addiction as two concepts in need of serious reformulation and reframing — one best accomplished by understanding the intersection of the two. Fisher argues that we ought to broaden the definition of addiction and do away with science’s staunch notion that all addiction is harmful. Love, she argues with a wealth of neurobiological evidence in hand, is in fact a state that closely resembles that of addiction in terms of what happens in the brain during it — and yet love, anguishing as it may be at times, is universally recognized as the height of positive experience. In that respect, it presents a case of “positive addiction.” Fisher writes:

Love-besotted men and women show all the basic symptoms of addiction. Foremost, the lover is stiletto-focused on his/her drug of choice, the love object. The lover thinks obsessively about him or her (intrusive thinking), and often compulsively calls, writes, or stays in touch. Paramount in this experience is intense motivation to win one’s sweetheart, not unlike the substance abuser fixated on the drug. Impassioned lovers distort reality, change their priorities and daily habits to accommodate the beloved, experience personality changes (affect disturbance), and sometimes do inappropriate or risky things to impress this special other. Many are willing to sacrifice, even die for, “him” or “her.” The lover craves emotional and physical union with the beloved (dependence). And like addicts who suffer when they can’t get their drug, the lover suffers when apart from the beloved (separation anxiety). Adversity and social barriers even heighten this longing (frustration attraction).

In fact, besotted lovers express all four of the basic traits of addiction: craving, tolerance, withdrawal, and relapse. They feel a “rush” of exhilaration when they’re with their beloved (intoxication). As their tolerance builds, they seek to interact with the beloved more and more (intensification). If the love object breaks off the relationship, the lover experiences signs of drug withdrawal, including protest, crying spells, lethargy, anxiety, insomnia or hypersomnia, loss of appetite or binge eating, irritability, and loneliness. Lovers, like addicts, also often go to extremes, sometimes doing degrading or physically dangerous things to win back the beloved. And lovers relapse the way drug addicts do. Long after the relationship is over, events, people, places, songs, or other external cues associated with their abandoning sweetheart can trigger memories and renewed craving.

Fisher points to fMRI studies that have shown intense romantic love to trigger the brain’s reward system and the dopamine pathways responsible for “energy, focus, motivation, ecstasy, despair, and craving,” as well as the brain regions most closely associated with addiction and substance abuse. In shedding light on the neurochemical machinery of romantic love, Fisher argues, science reveals it to be a “profoundly powerful, natural, often positive addiction.”

Illustration by Christine Rösch from ‘The Mathematics of Love.’

Astrophysicist Marcelo Gleiser, who has written beautifully about the necessary duality of knowledge and mystery, wants to do away with “the venerable notion of Unification.” He points out that smaller acts of unification and simplification are core to the scientific process — from the laws of thermodynamics to Newton’s law of universal gravitation — but simplification as sweeping as reducing the world to a single Theory of Everything is misplaced:

The trouble starts when we take this idea too far and search for the Über-unification, the Theory of Everything, the arch-reductionist notion that all forces of nature are merely manifestations of a single force. This is the idea that needs to go.

Noting that at some point along the way, “math became equated with beauty and beauty with truth,” Gleiser writes:

The impulse to unify it all runs deep in the souls of mathematicians and theoretical physicists, from the Langlands program to superstring theory. But here’s the rub: Pure mathematics isn’t physics. The power of mathematics comes precisely from its detachment from physical reality. A mathematician can create any universe she wants and play all sorts of games with it. A physicist can’t; his job is to describe nature as we perceive it. Nevertheless, the unification game has been an integral part of physics since Galileo and has produced what it should: approximate unifications.

And yet this unification game, as integral as it may be to science, is also antithetical to it in the long run:

The scientific impulse to unify is crypto-religious… There’s something deeply appealing in equating all of nature to a single creative principle: To decipher the “mind of God” is to be special, is to answer to a higher calling. Pure mathematicians who believe in the reality of mathematical truths are monks of a secret order, open only to the initiated. In the case of high energy physics, all unification theories rely on sophisticated mathematics related to pure geometric structures: The belief is that nature’s ultimate code exists in the ethereal world of mathematical truths and that we can decipher it.

Echoing Richard Feynman’s spectacular commencement address admonishing against “cargo cult science,” Gleiser adds:

Recent experimental data has been devastating to such belief — no trace of supersymmetric particles, of extra dimensions, or of dark matter of any sort, all long-awaited signatures of unification physics. Maybe something will come up; to find, we must search. The trouble with unification in high energy physics is that you can always push it beyond the experimental range. “The Large Hadron Collider got to 7 TeV and found nothing? No problem! Who said nature should opt for the simplest versions of unification? Maybe it’s all happening at much higher energies, well beyond our reach.”

There’s nothing wrong with this kind of position. You can believe it until you die, and die happy. Or you can conclude that what we do best is construct approximate models of how nature works and that the symmetries we find are only descriptions of what really goes on. Perfection is too hard a burden to impose on nature.

People often see this kind of argument as defeatist, as coming from someone who got frustrated and gave up. (As in “He lost his faith.”) Big mistake. To search for simplicity is essential to what scientists do. It’s what I do. There are essential organizing principles in nature, and the laws we find are excellent ways to describe them. But the laws are many, not one. We’re successful pattern-seeking rational mammals. That alone is cause for celebration. However, let’s not confuse our descriptions and models with reality. We may hold perfection in our mind’s eye as a sort of ethereal muse. Meanwhile nature is out there doing its thing. That we manage to catch a glimpse of its inner workings is nothing short of wonderful. And that should be good enough.

Ceramic tile by Debbie Millman courtesy of the artist

Science writer Amanda Gefter takes issue with one particular manifestation of our propensity for oversimplification — the notion of the universe. She writes:

Physics has a time-honored tradition of laughing in the face of our most basic intuitions. Einstein’s relativity forced us to retire our notions of absolute space and time, while quantum mechanics forced us to retire our notions of pretty much everything else. Still, one stubborn idea has stood steadfast through it all: the universe.

[…]

In recent years, however, the concept of a single shared spacetime has sent physics spiraling into paradox. The first sign that something was amiss came from Stephen Hawking’s landmark work in the 1970s showing that black holes radiate and evaporate, disappearing from the universe and purportedly taking some quantum information with them. Quantum mechanics, however, is predicated upon the principle that information can never be lost.

Gefter points to recent breakthroughs in physics that produced one particularly puzzling such paradox, known as the “firewall paradox,” solved by the idea that spacetime is divided not by horizons but by the reference frames of the observers, “as if each observer had his or her own universe.”

But the solution isn’t a multiverse theory:

Yes, there are multiple observers, and yes, any observer’s universe is as good as any other’s. But if you want to stay on the right side of the laws of physics, you can talk only about one at a time. Which means, really, that only one exists at a time. It’s cosmic solipsism.

Here, psychology, philosophy, and cosmology converge, for what such theories suggest is what we already know about the human psyche — as I’ve put it elsewhere, the stories that we tell ourselves, whether they be false or true, are always real. Gefter concludes:

Adjusting our intuitions and adapting to the strange truths uncovered by physics is never easy. But we may just have to come around to the notion that there’s my universe and there’s your universe — but there’s no such thing as the universe.

Biological anthropologist Nina Jablonski points to the notion of race as urgently retirement-ready. Noting that it has always been a “vague and slippery concept,” she traces its origins to Hume and Kant — the first to divide humanity into geographic groupings called “races” — and the pseudoscientific seeds of racism this division planted:

Skin color, as the most noticeable racial characteristic, was associated with a nebulous assemblage of opinions and hearsay about the inherent natures of the various races. Skin color stood for morality, character, and the capacity for civilization; it became a meme.

Even though the atrocious “race science” that emerged in the 19th and early 20th century didn’t hold up — whenever scientists looked for actual sharp boundaries between groups, none came up — and race came to be something people identify themselves with as a shared category of experiences and social bonds, Jablonski argues that the toxic aftershocks of pseudoscience still poison culture:

Even after it has been shown that many diseases (adult-onset diabetes, alcoholism, high blood pressure, to name a few) show apparent racial patterns because people share similar environmental conditions, groupings by race are maintained. The use of racial self-categorization in epidemiological studies is defended and even encouraged. Medical studies of health disparities between “races” become meaningless when sufficient variables — such as differences in class, ethnic social practices, and attitudes — are taken into account.

Half a century after the ever-prescient Margaret Mead made the same point, Jablonski urges:

Race has a hold on history but no longer has a place in science. The sheer instability and potential for misinterpretation render race useless as a scientific concept. Inventing new vocabularies to deal with human diversity and inequity won’t be easy, but it must be done.

Psychologist Jonathan Gottschall, who has previously explored why storytelling is so central to the human experience, argues against the notion that there can be no science of art. With an eye to our civilization’s long struggle to define art, he writes:

We don’t even have a good definition, in truth, for what art is. In short, there’s nothing so central to human life that’s so incompletely understood.

Granted, Gottschall is only partly right, for there are some excellent definitions of art — take, for instance, Jeanette Winterson’s or Leo Tolstoy’s — but the fact that they don’t come from scientists only speaks to his larger point. He argues that rather than being unfit to shed light on the role of art in human life, science simply hasn’t applied itself to the problem adequately:

Scientific work in the humanities has mainly been scattered, preliminary, and desultory. It doesn’t constitute a research program.

If we want better answers to fundamental questions about art, science must jump into the game with both feet. Going it alone, humanities scholars can tell intriguing stories about the origins and significance of art, but they don’t have the tools to patiently winnow the field of competing ideas. That’s what the scientific method is for — separating the more accurate stories from the less accurate stories. But a strong science of art will require both the thick, granular expertise of humanities scholars and the clever hypothesis-testing of scientists. I’m not calling for a scientific takeover of the arts, I’m calling for a partnership.

[…]

The Delphic admonition “Know thyself” still rings out as the great prime directive of intellectual inquiry, and there will always be a gaping hole in human self-knowledge until we develop a science of art.

In a further testament to the zeitgeist-illuminating nature of the project, actor, author, and science-lover Alan Alda makes a passionate case for the same concept:

The trouble with truth is that not only is the notion of eternal, universal truth highly questionable, but simple, local truths are subject to refinement as well. Up is up and down is down, of course. Except under special circumstances. Is the North Pole up and the South Pole down? Is someone standing at one of the poles right-side up or upside-down? Kind of depends on your perspective.

When I studied how to think in school, I was taught that the first rule of logic was that a thing cannot both be and not be at the same time and in the same respect. That last note, “in the same respect,” says a lot. As soon as you change the frame of reference, you’ve changed the truthiness of a once immutable fact.

[…]

This is not to say that nothing is true or that everything is possible — just that it might not be so helpful for things to be known as true for all time, without a disclaimer… I wonder — and this is just a modest proposal — whether scientific truth should be identified in a way acknowledging that it’s something we know and understand for now, and in a certain way.

[…]

Facts, it seems to me, are workable units, useful in a given frame or context. They should be as exact and irrefutable as possible, tested by experiment to the fullest extent. When the frame changes, they don’t need to be discarded as untrue but respected as still useful within their domain. Most people who work with facts accept this, but I don’t think the public fully gets it.

That’s why I hope for more wariness about implying we know something to be true or false for all time and for everywhere in the cosmos.

Illustration from ‘Once Upon an Alphabet’ by Oliver Jeffers.

And indeed this elasticity of truth across time is at the heart of what I find to be the most beautiful and culturally essential contribution to the collection. As someone who believes that the stewardship of enduring ideas is at least as important as the genesis of new ones — not only because past ideas are the combinatorial building blocks of future ones but also because in order to move forward we always need a backdrop against which to paint the contrast of progress and improvement — I was most bewitched by writer Ian McEwan’s admonition against the arrogance of retiring any idea as an impediment to progress:

Beware of arrogance! Retire nothing! A great and rich scientific tradition should hang onto everything it has. Truth is not the only measure. There are ways of being wrong that help others to be right. Some are wrong, but brilliantly so. Some are wrong but contribute to method. Some are wrong but help found a discipline. Aristotle ranged over the whole of human knowledge and was wrong about much. But his invention of zoology alone was priceless. Would you cast him aside? You never know when you might need an old idea. It could rise again one day to enhance a perspective the present cannot imagine. It would not be available to us if it were fully retired.

To appreciate McEwan’s point, one need only look at something like Bertrand Russell’s timely thoughts on boredom, penned in 1930 and yet astoundingly resonant with our present anxieties about the societal side effects of current technology. McEwan captures this beautifully:

Every last serious and systematic speculation about the world deserves to be preserved. We need to remember how we got to where we are, and we’d like the future not to retire us. Science should look to literature and maintain a vibrant living history as a monument to ingenuity and persistence. We won’t retire Shakespeare. Nor should we Bacon.

Complement This Idea Must Die, the entirety of which weaves a mind-stretching mesh of complementary and contradictory perspectives on our relationship with knowledge, with some stimulating answers to previous editions of Brockman’s annual question, exploring the only thing worth worrying about (2013), the single most elegant theory of how the world works (2012), and the best way to make ourselves smarter (2011).


The Spirituality of Imperfection: Storytelling and the Search for Meaning

“If a thing is worth doing, it is worth doing badly.”

The poet John Keats once described the ideal state of the psyche as negative capability — the capacity for “being in uncertainties, Mysteries, doubts without any irritable reaching after fact & reason.” “The truth of life is its mystery,” echoed Joyce Carol Oates. This comfort with mystery and the unknown, indeed, is at the heart not only of poetic existence but also of the most rational of human intellectual endeavors, as many of history’s greatest scientific minds have attested. And yet, caught between the opinion culture we live in and our deathly fear of being wrong, we long desperately for absolutism, certitude, and perfect truth.

Originally published in 1993, The Spirituality of Imperfection: Storytelling and the Search for Meaning (public library) explores what’s arguably the most important dimension of what it means to be human — our inherent imperfection — and the many ways in which we violate it daily, delivering a constellation of wisdom and practical insight on how to live in a way that enables, rather than disempowers, our humanity.

Authors Ernest Kurtz and Katherine Ketcham describe the spirituality of imperfection as “a spirituality of not having all the answers,” one in which “stories convey the mystery and the miracle — the adventure — of being alive.” Though much of the focus falls on the Alcoholics Anonymous program — hailed by many as one of the most important organized movements of the 20th century and criticized by some for its own imperfections — the book, which passes the skepticism radar even of someone as non-religious as myself, is really about cultivating our capacity for uncertainty, for mystery, for having the right questions rather than the right answers.

The problem with organized religions, Bill Wilson once complained, ‘is their claim how confoundedly right all of them are.’ The spirituality of imperfection … makes no claim to be ‘right.’ It is a spirituality more interested in questions than in answers, more a journey toward humility than a struggle for perfection.

The spirituality of imperfection begins with the recognition that trying to be perfect is the most tragic human mistake.

Adding to the ongoing discussion of the psychology and philosophy of spirituality, Kurtz and Ketcham observe:

We are not ‘everything,’ but neither are we ‘nothing.’ Spirituality is discovered in that space between paradox’s extremes, for there we confront our helplessness and powerlessness, our woundedness. In seeking to understand our limitations, we seek not only an easing of our pain but an understanding of what it means to hurt and what it means to be healed. Spirituality begins with the acceptance that our fractured being, our imperfection, simply is: There is no one to ‘blame’ for our errors — neither ourselves nor anyone nor anything else. Spirituality helps us first to see, and then to understand, and eventually to accept the imperfection that lies at the very core of our human be-ing. Spirituality accepts that ‘If a thing is worth doing, it is worth doing badly.’

Further:

This is not a spirituality for the saints or the gods, but for people who suffer from what the philosopher-psychologist William James called ‘torn-to-pieces-hood’ (his trenchant translation of the German Zerrissenheit). We have all known that experience, for to be human is to feel at times divided, fractured, pulled in a dozen directions … and to yearn for serenity, for some healing of our ‘torn-to-pieces-hood.’

Much has been written — and debated — about the science of storytelling in recent weeks, so this excerpt on spirituality and story is of particular note:

Without imperfection’s ‘gap between intentions and results,’ there would be no story.

[…]

Listening to stories and telling them helped our ancestors to live humanly — to be human. But somewhere along the way our ability to tell (and to listen to) stories was lost. As life speeded up, as the possibility of both communication and annihilation became ever more instantaneous, people came to have less tolerance for that which comes only over time. The demand for perfection and the craving for ever more control over a world that paradoxically seemed ever more out of control eventually bred impatience with story. As time went by, the art of storytelling fell by the wayside, and those who went before us gradually lost part of what had been the human heritage — the ability to ask the most basic questions, the spiritual questions.

It all circles back to our discomfort with the mysterious and the unanswered, highlighting the urgency of relaxing into rather than tensing against it:

We modern people are problem-solvers, but the demand for answers crowds out patience — and perhaps, especially, patience with mystery, with that which we cannot control. Intolerant of ambiguity, we deny our own ambivalences, searching for answers to our most anguished questions in technique, hoping to find an ultimate healing in technology. But feelings of dislocation, isolation, and off-centeredness persist, as they always have.

If The Spirituality of Imperfection reminds you of Brené Brown’s excellent The Gifts of Imperfection, it’s for good reason — both go to the heart of our deepest conditioning, the kind of personal and cultural narratives we’ve come of age believing yet ones that keep us from fully inhabiting our own selves.

Thanks, Kirstin


Creating a “Fourth Culture” of Knowledge: Jonah Lehrer on Why Science and Art Need Each Other

From Gertrude Stein to Karl Popper, or how to architect “negative capability” and live with mystery.

One of my favorite books of all time is Jonah Lehrer’s Proust Was a Neuroscientist, which tells the story of how a handful of iconic creators each discovered an essential truth about the mind long before modern science was able to label and pinpoint it — for instance, George Eliot detected neuroplasticity, Gertrude Stein uncovered the deep structure of language, Cézanne fathomed how vision works, and Proust demonstrated the imperfections of memory. I was recently reminded of this powerful passage, in which Lehrer makes a case for the extraordinary importance of the cross-pollination of disciplines, the essence of Brain Pickings’ founding philosophy, particularly of art and science — a convergence Lehrer calls a “fourth culture” that empowers us to “freely transplant knowledge between the sciences and the humanities, and focus on connecting the reductionist fact to our actual experience.”

We now know enough to know that we will never know everything. This is why we need art: it teaches us how to live with mystery. Only the artist can explore the ineffable without offering us an answer, for sometimes there is no answer. John Keats called this romantic impulse ‘negative capability.’ He said that certain poets, like Shakespeare, had ‘the ability to remain in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason.’ Keats realized that just because something can’t be solved, or reduced into the laws of physics, doesn’t mean it isn’t real. When we venture beyond the edge of our knowledge, all we have is art.

But before we can get a fourth culture, our two existing cultures must modify their habits. First of all, the humanities must sincerely engage with the sciences. Henry James defined the writer as someone on whom nothing is lost; artists must heed his call and not ignore science’s inspiring descriptions of reality. Every humanist should read Nature.

At the same time, the sciences must recognize that their truths are not the only truths. No knowledge has a monopoly on knowledge. That simple idea will be the starting premise of any fourth culture. As Karl Popper, an eminent defender of science, wrote, ‘It is imperative that we give up the idea of ultimate sources of knowledge, and admit that all knowledge is human; that it is mixed with our errors, our prejudices, our dreams, and our hopes; that all we can do is to grope for truth even though it is beyond our reach. There is no authority beyond the reach of criticism.’

Lehrer’s new book, Imagine: How Creativity Works, comes out later this month.

HT Wired


