Brain Pickings

Search results for “time travel”

A Madman Dreams of Tuning Machines: The Story of Joseph Weber, the Tragic Hero of Science Who Followed Einstein’s Vision and Pioneered the Sound of Spacetime

…and a remarkable letter from Freeman Dyson on the difficult, necessary art of changing one’s mind.

In 1916, a year after completing his general theory of relativity, Albert Einstein predicted gravitational waves — ripples in the fabric of spacetime caused by cataclysmic events of astronomical energy. Although fundamental to our understanding of the universe, gravitational waves were a purely theoretical construct for him. He lived in an era when any human-made tool for detecting something this faint and far away was simply unimaginable, even by the greatest living genius, and many of the cosmic objects capable of producing such tremendous tumult — black holes, for instance — were yet to be discovered.

One September morning in 2015, almost exactly a century after Einstein published his famous paper, scientists turned his mathematical dream into a tangible reality — or, rather, an audible one.

The Laser Interferometer Gravitational-Wave Observatory — an enormous international collaboration known as LIGO, consisting of two massive listening instruments 3,000 kilometers apart, decades in the making — recorded the sound of a gravitational wave produced by two mammoth black holes that had collided more than a billion years ago, more than a billion light-years away.
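
A brief aside on why two listening instruments rather than one (the arithmetic here is mine, not Levin's): a gravitational wave crosses Earth at the speed of light, so it can reach the two sites at most

$$\Delta t \le \frac{d}{c} = \frac{3\times10^{6}\ \mathrm{m}}{3\times10^{8}\ \mathrm{m/s}} = 10\ \mathrm{ms}$$

apart. Any genuine astrophysical signal must therefore register in both detectors within that ten-millisecond window, a coincidence requirement that helps rule out local seismic and instrumental noise.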

One of the most significant discoveries in the history of science, this landmark event introduces a whole new modality of curiosity in our quest to know the cosmos, its thrill only amplified by the fact that we had never actually seen black holes before hearing them. Nearly everything we know about the universe today, we know through four centuries of optical observation of light and particles. Now begins a new era of sonic exploration. Turning an inquisitive ear to the cosmos might, and likely will, revolutionize our understanding of it as radically as Galileo did when he first pointed his telescope at the skies.

In Black Hole Blues and Other Songs from Outer Space (public library) — one of the finest and most beautifully written books I’ve ever read, which I recently reviewed for The New York Times — astrophysicist and novelist Janna Levin tells the story of LIGO and its larger significance as a feat of science and the human spirit. Levin, a writer who bends language with effortless might and uses it not only as an instrument of thought but also as a Petri dish for emotional nuance, probes deep into the messy human psychology that animated these brilliant and flawed scientists as they persevered in this ambitious quest against enormous personal, political, and practical odds.

Levin — who has written beautifully about free will and the relationship between genius and madness — paints the backdrop for this improbable triumph:

Somewhere in the universe two black holes collide — as heavy as stars, as small as cities, literally black (the complete absence of light) holes (empty hollows). Tethered by gravity, in their final seconds together the black holes course through thousands of revolutions about their eventual point of contact, churning up space and time until they crash and merge into one bigger black hole, an event more powerful than any since the origin of the universe, outputting more than a trillion times the power of a billion Suns. The black holes collide in complete darkness. None of the energy exploding from the collision comes out as light. No telescope will ever see the event.
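
A quick sanity check of that staggering figure, using numbers published after the eventual detection rather than anything in Levin's passage: the merging black holes briefly radiated on the order of $3.6\times10^{49}$ watts in gravitational waves, while the Sun shines at roughly $3.8\times10^{26}$ watts, giving

$$\frac{3.6\times10^{49}\ \mathrm{W}}{3.8\times10^{26}\ \mathrm{W}} \approx 10^{23}\ \text{Suns},$$

comfortably more than a trillion ($10^{12}$) times a billion ($10^{9}$), which is $10^{21}$.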

What nobody could see LIGO could hear — a sensitive, sophisticated ear pressed to the fabric of spacetime, tuned to what Levin so poetically eulogizes as “the total darkness, the empty space, the vacuity, the great expanse of nothingness, of emptiness, of pure space and time.” She writes of this astonishing instrument:

An idea sparked in the 1960s, a thought experiment, an amusing haiku, is now a thing of metal and glass.

But what makes the book most enchanting is Levin’s compassionate insight into the complex, porous, often tragic humanity undergirding the metal and glass — nowhere more tragic than in the story of Joseph Weber, the controversial pioneer who became the first to bring Einstein’s equations into the lab. Long before LIGO was even so much as a thought experiment, Weber envisioned and built a very different instrument for listening to the cosmos.

Weber was born Yonah Geber to a family of Lithuanian Jewish immigrants in early-twentieth-century New Jersey. His mother’s heavy accent caused his teacher to mishear the boy’s name as “Joseph,” so he became Joe. After he was hit by a bus at the age of five, young Joe required speech rehabilitation therapy, which replaced his Yiddish accent with a generic American one that led his family to call him “Yankee.” As a teenager, he dropped out of Cooper Union to spare his parents’ finances and joined the Navy instead, where he served on an aircraft carrier that was sunk during WWII. When the war ended, he became a microwave engineer and was hired as a professor at the University of Maryland at the then-enviable salary — especially for a 29-year-old — of $6,500 a year.

Eager to do microwave research, he turned to the great physicist George Gamow, who had theorized cosmic microwave background radiation — a thermal remnant of the Big Bang, which would provide unprecedented insight into the origin of the universe and which Weber wanted to dedicate his Ph.D. career to detecting. But Gamow inexplicably snubbed him. Two other scientists eventually discovered cosmic microwave background radiation by accident and received the Nobel Prize for the discovery. Weber then turned to atomic physics and devised the maser — the predecessor of the laser — but, once again, other scientists beat him to the public credit and received a Nobel for that discovery, too.

Levin writes:

Joe’s scientific life is defined by these significant near misses… He was Shackleton many times, almost the first: almost the first to see the big bang, almost the first to patent the laser, almost the first to detect gravitational waves. Famous for nearly getting there.

And that is how Weber got to gravitational waves — a field he saw as so small and esoteric that he stood a chance of finally being the first. Levin writes:

In 1969 Joe Weber announced that he had achieved an experimental feat widely believed to be impossible: He had detected evidence for gravitational waves. Imagine his pride, the pride to be the first, the gratification of discovery, the raw shameless pleasure of accomplishment. Practically single-handedly, through sheer determination, he conceives of the possibility. He fills multiple notebooks, hundreds of pages deep, with calculations and designs and ideas, and then he makes the experimental apparatus real. He builds an ingenious machine, a resonant bar, a Weber bar, which vibrates in sympathy with a gravitational wave. A solid aluminum cylinder about 2 meters long, 1 meter in diameter, and in the range of 3,000 pounds, as guitar strings go, isn’t easy to pluck. But it has one natural frequency at which a strong gravitational wave would ring the bar like a tuning fork.

Joseph Weber with his cylinder
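
The “tuning fork” image can be made quantitative with a back-of-the-envelope estimate (mine, not Levin’s): the fundamental longitudinal resonance of a free bar of length $L$, Young’s modulus $E$, and density $\rho$ is

$$f_1 = \frac{1}{2L}\sqrt{\frac{E}{\rho}} \approx \frac{1}{2 \times 2\ \mathrm{m}}\sqrt{\frac{7.0\times10^{10}\ \mathrm{Pa}}{2.7\times10^{3}\ \mathrm{kg/m^3}}} \approx 1.3\ \mathrm{kHz},$$

using textbook values for aluminum, the same order as the roughly 1,660 Hz at which Weber’s actual bars were tuned to ring.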

Following his announcement, Weber became an overnight celebrity. His face graced magazine covers. NASA put one of his instruments on the Moon. He received ample praise from his peers. Even the formidable J. Robert Oppenheimer, a man of slim capacity for compliments, encouraged him with a remark Weber never forgot: “The work you’re doing,” Oppenheimer told him, “is just about the most exciting work going on anywhere around here.”

Under the spell of this collective excitement, scientists around the world began building replicas of Weber’s cylinder. But one after another, they were unable to replicate his results — the electrifying eagerness to hear gravitational waves was met with the dead silence of the cosmos.

Weber plummeted from grace as quickly as he had ascended. (Einstein himself famously scoffed at the fickle nature of fame.) Levin writes:

Joe Weber’s claims in 1969 to have detected gravitational waves, the claims that catapulted his fame, that made him possibly the most famous living scientist of his generation, were swiftly and vehemently refuted. The subsequent decades offered near total withdrawal of support, both from scientific funding agencies and his peers. He was almost fired from the University of Maryland.

Among Weber’s most enthusiastic initial supporters was the great theoretical physicist Freeman Dyson. Perhaps out of his staunch belief that no question is unanswerable, Dyson had emboldened Weber to attempt what no one had attempted before — to hear the sound of spacetime. But when the evidence against Weber’s data began to mount, Dyson was anguished by a sense of personal responsibility for having encouraged him, so he wrote Weber an extraordinary letter urging him to practice the immensely difficult art of changing one’s mind. Levin quotes the letter, penned on June 5, 1975:

Dear Joe,

I have been watching with fear and anguish the ruin of our hopes. I feel a considerable personal responsibility for having advised you in the past to “stick your neck out.” Now I still consider you a great man unkindly treated by fate, and I am anxious to save whatever can be saved. So I offer my advice again for what it is worth.

A great man is not afraid to admit publicly that he has made a mistake and has changed his mind. I know you are a man of integrity. You are strong enough to admit that you are wrong. If you do this, your enemies will rejoice but your friends will rejoice even more. You will save yourself as a scientist, and you will find that those whose respect is worth having will respect you for it.

I write now briefly because long explanations will not make the message clearer. Whatever you decide, I will not turn my back on you.

With all good wishes,
Yours ever
Freeman

But Weber decided not to heed his friend’s warm caution. His visionary genius coexisted with one of the most unfortunate and most inescapable of human tendencies — our bone-deep resistance to the shame of admitting error. He paid a high price: His disrepute soon veered into cruelty — he was ridiculed and even baited by false data intended to trick him into reaffirming his claims, only to be publicly humiliated all over again. In one of the archival interviews Levin excavates, he laments:

I simply cannot understand the vehemence and the professional jealousy, and why each guy has to feel that he has to cut off a pound of my flesh… Boltzmann committed suicide with this sort of treatment.

Here, I think of Levin’s penchant for celebrating tragic heroes whose posthumous redemption only adds to their tragedy. Her magnificent novel A Madman Dreams of Turing Machines is based on the real lives of computing pioneer Alan Turing and mathematician Kurt Gödel, both of whom committed suicide — Turing after particularly cruel mistreatment. Levin’s writing emanates a deep sympathy for those who have fallen victim to some combination of their own fallible humanity and the ferocious inhumanity of unforgiving, bloodthirsty others. No wonder Weber’s story sings to her. A madman dreams of tuning machines.

Without diminishing the role of personal pathology and individual neurochemistry — and given what psychologists know about suicide prevention — social support likely played a vital role in Weber’s ability to withstand the barrage of viciousness: Dyson’s sympathetic succor, but most of all the love of his wife, the astronomer Virginia Trimble, perhaps the most unambivalently likable character in the book. Levin writes:

She called him Weber and he called her Trimble. They married in March 1972 after a cumulative three weekends together. She laughs. “Weber never had any trouble making up his mind.” Twenty-three years her senior, he always insisted she do what she wanted and needed to do. Perhaps trained in part by his first wife, Anita, a physicist who took a protracted break to raise their four boys, the widower had no reservations about Virginia’s work, her independence, or her IQ. (Stratospheric. In an issue of Life magazine with a now-vintage cover, in an article titled “Behind a Lovely Face, a 180 I.Q.” about the then eighteen-year-old astrophysics major, she is quoted as classifying the men she dates into three types: “Guys who are smarter than I am, and I’ve found one or two. Guys who think they are — they’re legion. And those who don’t care.”)

Behind a Lovely Face, a 180 I.Q.: Virginia Trimble in LIFE magazine, 1962

Trimble was the second woman ever allowed at the famed Palomar Observatory, a year after pioneering astronomer Vera Rubin broke the optical-glass ceiling by becoming the first to observe there. Levin, whose subtle kind-natured humor never fails to delight, captures Trimble’s irreverent brilliance:

In her third year, having demonstrated her tenacity — particularly manifest in the fact that she still hadn’t married, she suspects — she was awarded a fellowship from the National Science Foundation. When she arrived at Caltech, she was delighted. “I thought, ‘Look at all of these lovely men.’” In her seventies, with her coral dress, matching shoes and lip color, Moon earrings, and gold animal-head ring, she beams. Still a lovely face. And still an IQ of 180.

This fierce spirit never left Trimble, who tells Levin:

When I fell and broke my hip last September, I spent four days on the floor of my apartment singing songs and reciting poetry until I was found.

It isn’t hard to see why Weber — why anyone — would fall in love with Trimble. But although their love sustained him and he didn’t take his own life, he suffered an end equally heartbreaking.

By the late 1980s, Weber had submerged himself even deeper into the quicksand of his convictions, stubbornly trying to prove that his instrument could hear the cosmos. For the next twenty years, he continued to operate his own lab funded out of pocket — a drab concrete box in the Maryland woods, where he was both head scientist and janitor. Meanwhile, LIGO — a sophisticated instrument that would eventually cost more than $1 billion total, operated by a massive international team of scientists — was gathering momentum nearby, thanks largely to the scientific interest in gravitational astronomy that Weber’s early research had sparked.

He was never invited to join LIGO. Trimble surmises that even if he had been, he would’ve declined.

One freezing winter morning in 2000, just as LIGO’s initial detectors were being built, 81-year-old Weber went to clean his lab, slipped on the ice in front of the building, hit his head, and fell unconscious. He was found two days later and taken to the E.R., but he never recovered. He died at the hospital several months later from the lymphoma he’d been battling. The widowed Trimble extracts from her husband’s tragedy an unsentimental parable of science — a testament to the mismatch between the time-scale of human achievement, with all the personal glory it brings, and that of scientific progress:

Science is a self-correcting process, but not necessarily in one’s own lifetime.

When the LIGO team published the official paper announcing the groundbreaking discovery, Weber was acknowledged as the pioneer of gravitational wave research. But as with Alan Turing, who was granted a posthumous pardon by the Queen more than half a century after he perished by inhumane injustice, Weber’s redemption is culturally bittersweet at best. I’m reminded of a beautiful passage from Levin’s novel about Turing and Gödel, strangely perfect in the context of Weber’s legacy:

Their genius is a testament to our own worth, an antidote to insignificance; and their bounteous flaws are luckless but seemingly natural complements, as though greatness can be doled out only with an equal measure of weakness… Their broken lives are mere anecdotes in the margins of their discoveries. But then their discoveries are evidence of our purpose, and their lives are parables on free will.

Free will, indeed, is what Weber exercised above all — he lived by it and died by it. In one of the interviews Levin unearths, he reflects from the depths of his disrepute:

If you do science the principal reason to do it is because you enjoy it and if you don’t enjoy it you shouldn’t do it, and I enjoy it. And I must say I’m enjoying it… That’s the best you can do.

At the end of the magnificent and exceptionally poetic Black Hole Blues, the merits of which I’ve extolled more fully here, Levin offers a wonderfully lyrical account of LIGO’s triumph as she peers into the furthest reaches of the spacetime odyssey that began with Einstein, gained momentum with Weber, and is only just beginning to map the course of human curiosity across the universe:

Two very big stars lived in orbit around each other several billion years ago. Maybe there were planets around them, although the two-star system might have been too unstable or too simple in composition to accommodate planets. Eventually one star died, and then the other, and two black holes formed. They orbited in darkness, probably for billions of years before that final 200 milliseconds when the black holes collided and merged, launching their loudest gravitational wave train into the universe.

The sound traveled to us from 1.4 billion light-years away. One billion four hundred million light-years.

[…]

We heard black holes collide. We’ll point to where the sound might have come from, to the best of our abilities, a swatch of space from an earlier epoch. Somewhere in the southern sky, pulling away from us with the expansion of the universe, the big black hole will roll along its own galaxy, dark and quiet until something wanders past, an interstellar dust cloud or an errant star. After a few billion years the host galaxy might collide with a neighbor, tossing the black hole around, maybe toward a supermassive black hole in a growing galactic center. Our star will die. The Milky Way will blend with Andromeda. The record of this discovery along with the wreckage of our solar system will eventually fall into black holes, as will everything else in the cosmos, the expanding space eventually silent, and all the black holes will evaporate into oblivion near the end of time.

Nietzsche on Dreams as an Evolutionary Time Machine for the Human Brain

“Dreams carry us back to the earlier stages of human culture and afford us a means of understanding it more clearly.”

We spend a third of our lives in a parallel nocturnal universe, and the half-imagined, half-remembered experiences we have there are in constant dynamic interaction with our waking selves. Our nightly dreams are both fragmentary reflections of our conscious lives, rearranged into barely recognizable mosaics by our unconscious, and potent agents of psychic transmutation — a powerful dream can cast an unshakable mood over the wakeful hours, or even days, that follow it. Science is only just beginning to shed light on the role of dreams in memory consolidation and mood, but their nature and purpose remain largely a mystery. “We feel dreamed by someone else, a sleeping counterpart,” the poet Mark Strand wrote in his beautiful ode to dreams.

Friedrich Nietzsche (October 15, 1844–August 25, 1900) saw this sleeping counterpart as our link to primitive humanity — an atavistic remnant of the pre-rational human mind. More than two decades before Freud’s seminal treatise on dreams, Nietzsche explored the mystique of the nocturnal unconscious in a portion of Human, All Too Human: A Book for Free Spirits (public library | free ebook) — his altogether terrific 1878 inquiry into how we become who we are.

In a section on dreams and civilization, he writes:

In the dream … we have the source of all metaphysic. Without the dream, men would never have been incited to an analysis of the world. Even the distinction between soul and body is wholly due to the primitive conception of the dream, as also the hypothesis of the embodied soul, whence the development of all superstition, and also, probably, the belief in god. “The dead still live: for they appear to the living in dreams.” So reasoned mankind at one time, and through many thousands of years.

Therein lies Nietzsche’s most intriguing point: Sleep, he suggests, is a kind of evolutionary time machine — a portal to the primitive past of our sensemaking instincts. He paints the sleeping brain as a blunt Occam’s Razor — in seeking out the simplest explanations for our daily confusions, it ends up succumbing to the simplistic. This, Nietzsche argues, is how superstitions and religious mythologies may have originated:

The function of the brain which is most encroached upon in slumber is the memory; not that it is wholly suspended, but it is reduced to a state of imperfection as, in primitive ages of mankind, was probably the case with everyone, whether waking or sleeping. Uncontrolled and entangled as it is, it perpetually confuses things as a result of the most trifling similarities, yet in the same mental confusion and lack of control the nations invented their mythologies, while nowadays travelers habitually observe how prone the savage is to forgetfulness, how his mind, after the least exertion of memory, begins to wander and lose itself until finally he utters falsehood and nonsense from sheer exhaustion. Yet, in dreams, we all resemble this savage. Inadequacy of distinction and error of comparison are the basis of the preposterous things we do and say in dreams, so that when we clearly recall a dream we are startled that so much idiocy lurks within us. The absolute distinctness of all dream-images, due to implicit faith in their substantial reality, recalls the conditions in which earlier mankind were placed, for whom hallucinations had extraordinary vividness, entire communities and even entire nations laboring simultaneously under them. Therefore: in sleep and in dream we make the pilgrimage of early mankind over again.

Illustration by Tom Seidmann-Freud, Freud’s cross-dressing niece, from a philosophical 1922 children’s book about dreams

Just as the dreaming self contains vestiges of every self we’ve inhabited since childhood, resurrected in sleep, so too, Nietzsche argues, the dreaming brain contains vestiges of the primitive stages of the human brain, when our cognitive capacity for problem-solving was far more limited and unmoored from critical thinking. Nearly a century before modern scientists came to study what actually happens to the brain and body while we sleep, he writes:

Everyone knows from experience how a dreamer will transform one piercing sound, for example, that of a bell, into another of quite a different nature, say, the report of cannon. In his dream he becomes aware first of the effects, which he explains by a subsequent hypothesis and becomes persuaded of the purely conjectural nature of the sound. But how comes it that the mind of the dreamer goes so far astray when the same mind, awake, is habitually cautious, careful, and so conservative in its dealings with hypotheses? Why does the first plausible hypothesis of the cause of a sensation gain credit in the dreaming state? (For in a dream we look upon that dream as reality, that is, we accept our hypotheses as fully established). I have no doubt that as men argue in their dreams to-day, mankind argued, even in their waking moments, for thousands of years: the first causa, that occurred to the mind with reference to anything that stood in need of explanation, was accepted as the true explanation and served as such… In the dream this atavistic relic of humanity manifests its existence within us, for it is the foundation upon which the higher rational faculty developed itself and still develops itself in every individual. Dreams carry us back to the earlier stages of human culture and afford us a means of understanding it more clearly.

Illustration by Judith Clay from Thea’s Tree

Nietzsche considers the cognitive machinery of this dreamsome deduction:

If we close our eyes the brain immediately conjures up a medley of impressions of light and color, apparently a sort of imitation and echo of the impressions forced in upon the brain during its waking moments. And now the mind, in co-operation with the imagination, transforms this formless play of light and color into definite figures, moving groups, landscapes. What really takes place is a sort of reasoning from effect back to cause.

[…]

The imagination is continually interposing its images inasmuch as it participates in the production of the impressions made through the senses day by day: and the dream-fancy does exactly the same thing — that is, the presumed cause is determined from the effect and after the effect: all this, too, with extraordinary rapidity, so that in this matter, as in a matter of jugglery or sleight-of-hand, a confusion of the mind is produced and an after effect is made to appear a simultaneous action, an inverted succession of events, even. — From these considerations we can see how late strict, logical thought, the true notion of cause and effect must have been in developing, since our intellectual and rational faculties to this very day revert to these primitive processes of deduction, while practically half our lifetime is spent in the super-inducing conditions.

In a sentiment that calls to mind Polish Nobel laureate Wislawa Szymborska’s wonderful notion that the poet is “the spiritual heir of primitive humanity,” Nietzsche adds:

Even the poet, the artist, ascribes to his sentimental and emotional states causes which are not the true ones. To that extent he is a reminder of early mankind and can aid us in its comprehension.

Human, All Too Human is a spectacular read in its totality. Complement this particular portion with the science of how REM sleep regulates our negative moods and the psychology of dreams and why we have nightmares, then revisit Nietzsche on the power of music, how to find yourself, why a fulfilling life requires embracing rather than running from difficulty, and his ten rules for writers.

Rebecca Solnit on Hope in Dark Times, Resisting the Defeatism of Easy Despair, and What Victory Really Means for Movements of Social Change

“This is an extraordinary time full of vital, transformative movements that could not be foreseen. It’s also a nightmarish time. Full engagement requires the ability to perceive both.”

“There is no love of life without despair of life,” wrote Albert Camus — a man who in the midst of World War II, perhaps the darkest period in human history, saw grounds for luminous hope and issued a remarkable clarion call for humanity to rise to its highest potential on those grounds. It was his way of honoring the same duality that artist Maira Kalman would capture nearly a century later in her marvelous meditation on the pursuit of happiness, where she observed: “We hope. We despair. We hope. We despair. That is what governs us. We have a bipolar system.”

In my own reflections on hope, cynicism, and the stories we tell ourselves, I’ve considered the necessity of these two poles working in concert. Indeed, the stories we tell ourselves about these poles matter. The stories we tell ourselves about our public past shape how we interpret and respond to and show up for the present. The stories we tell ourselves about our private pasts shape how we come to see our personhood and who we ultimately become. The thin line between agency and victimhood is drawn in how we tell those stories.

The language in which we tell ourselves these stories matters tremendously, too, and no writer has weighed the complexities of sustaining hope in our times of readily available despair more thoughtfully and beautifully, nor with greater nuance, than Rebecca Solnit does in Hope in the Dark: Untold Histories, Wild Possibilities (public library).

Rebecca Solnit (Photograph: Sallie Dean Shatz)

Expanding upon her previous writings on hope, Solnit writes in the foreword to the 2016 edition of this foundational text of modern civic engagement:

Hope is a gift you don’t have to surrender, a power you don’t have to throw away. And though hope can be an act of defiance, defiance isn’t enough reason to hope. But there are good reasons.

Solnit — one of the most singular, civically significant, and poetically potent voices of our time, emanating echoes of Virginia Woolf’s luminous prose and Adrienne Rich’s unflinching political conviction — originally wrote these essays in 2003, six weeks after the start of the Iraq war, in an effort to speak “directly to the inner life of the politics of the moment, to the emotions and preconceptions that underlie our political positions and engagements.” Although the specific conditions of the day may have shifted, their undergirding causes and far-reaching consequences have only gained in relevance and urgency in the dozen years since. This slim book of tremendous potency is therefore, today more than ever, an indispensable ally to every thinking, feeling, civically conscious human being.

Solnit looks back on this seemingly distant past as she peers forward into the near future:

The moment passed long ago, but despair, defeatism, cynicism, and the amnesia and assumptions from which they often arise have not dispersed, even as the most wildly, unimaginably magnificent things came to pass. There is a lot of evidence for the defense… Progressive, populist, and grassroots constituencies have had many victories. Popular power has continued to be a profound force for change. And the changes we’ve undergone, both wonderful and terrible, are astonishing.

[…]

This is an extraordinary time full of vital, transformative movements that could not be foreseen. It’s also a nightmarish time. Full engagement requires the ability to perceive both.

Illustration by Charlotte Pardi from Cry, Heart, But Never Break by Glenn Ringtved

With an eye to such disheartening developments as climate change, growing income inequality, and the rise of Silicon Valley as a dehumanizing global superpower of automation, Solnit invites us to be equally present for the counterpoint:

Hope doesn’t mean denying these realities. It means facing them and addressing them by remembering what else the twenty-first century has brought, including the movements, heroes, and shifts in consciousness that address these things now.

Enumerating Edward Snowden, marriage equality, and Black Lives Matter among those, she adds:

This has been a truly remarkable decade for movement-building, social change, and deep, profound shifts in ideas, perspective, and frameworks for broad parts of the population (and, of course, backlashes against all those things).

With great care, Solnit — whose mind remains the sharpest instrument of nuance I’ve encountered — maps the uneven terrain of our grounds for hope:

It’s important to say what hope is not: it is not the belief that everything was, is, or will be fine. The evidence is all around us of tremendous suffering and tremendous destruction. The hope I’m interested in is about broad perspectives with specific possibilities, ones that invite or demand that we act. It’s also not a sunny everything-is-getting-better narrative, though it may be a counter to the everything-is-getting-worse narrative. You could call it an account of complexities and uncertainties, with openings.

Solnit’s conception of hope reminds me of the great existential psychiatrist Irvin D. Yalom’s conception of meaning: “The search for meaning, much like the search for pleasure,” he wrote, “must be conducted obliquely.” That is, it must take place in the thrilling and terrifying terra incognita that lies between where we are and where we wish to go, ultimately shaping where we do go. Solnit herself has written memorably about how we find ourselves by getting lost, and finding hope seems to necessitate a similar surrender to uncertainty. She captures this idea beautifully:

Hope locates itself in the premises that we don’t know what will happen and that in the spaciousness of uncertainty is room to act. When you recognize uncertainty, you recognize that you may be able to influence the outcomes — you alone or you in concert with a few dozen or several million others. Hope is an embrace of the unknown and the unknowable, an alternative to the certainty of both optimists and pessimists. Optimists think it will all be fine without our involvement; pessimists take the opposite position; both excuse themselves from acting. It’s the belief that what we do matters even though how and when it may matter, who and what it may impact, are not things we can know beforehand. We may not, in fact, know them afterward either, but they matter all the same, and history is full of people whose influence was most powerful after they were gone.

Illustration from The Harvey Milk Story, a picture-book biography of the slain LGBT rights pioneer

Amid a 24-hour news cycle that nurses us on the illusion of immediacy, this recognition of incremental progress and the long gestational period of consequences — something at the heart of every major scientific revolution that has changed our world — is perhaps our most essential yet most endangered wellspring of hope. Solnit reminds us, for instance, that women’s struggle for the right to vote took seven decades:

For a time people liked to announce that feminism had failed, as though the project of overturning millennia of social arrangements should achieve its final victories in a few decades, or as though it had stopped. Feminism is just starting, and its manifestations matter in rural Himalayan villages, not just first-world cities.

She considers one particularly prominent example of this cumulative cataclysm — the Arab Spring, “an extraordinary example of how unpredictable change is and how potent popular power can be,” the full meaning of which, and the conclusions from which, we have yet to draw. Although our cultural lore traces the spark of the Arab Spring to the moment Mohamed Bouazizi set himself on fire in an act of protest, Solnit traces the unnoticed accretion of tinder across space and time:

You can tell the genesis story of the Arab Spring other ways. The quiet organizing going on in the shadows beforehand matters. So does the comic book about Martin Luther King and civil disobedience that was translated into Arabic and widely distributed in Egypt shortly before the Arab Spring. You can tell of King’s civil disobedience tactics being inspired by Gandhi’s tactics, and Gandhi’s inspired by Tolstoy and the radical acts of noncooperation and sabotage of British women suffragists. So the threads of ideas weave around the world and through the decades and centuries.

In a brilliant counterpoint to Malcolm Gladwell’s notoriously short-sighted view of social change, Solnit sprouts a mycological metaphor for this imperceptible, incremental buildup of influence and momentum:

After a rain mushrooms appear on the surface of the earth as if from nowhere. Many do so from a sometimes vast underground fungus that remains invisible and largely unknown. What we call mushrooms mycologists call the fruiting body of the larger, less visible fungus. Uprisings and revolutions are often considered to be spontaneous, but less visible long-term organizing and groundwork — or underground work — often laid the foundation. Changes in ideas and values also result from work done by writers, scholars, public intellectuals, social activists, and participants in social media. It seems insignificant or peripheral until very different outcomes emerge from transformed assumptions about who and what matters, who should be heard and believed, who has rights.

Ideas at first considered outrageous or ridiculous or extreme gradually become what people think they’ve always believed. How the transformation happened is rarely remembered, in part because it’s compromising: it recalls the mainstream when the mainstream was, say, rabidly homophobic or racist in a way it no longer is; and it recalls that power comes from the shadows and the margins, that our hope is in the dark around the edges, not the limelight of center stage. Our hope and often our power.

[…]

Change is rarely straightforward… Sometimes it’s as complex as chaos theory and as slow as evolution. Even things that seem to happen suddenly arise from deep roots in the past or from long-dormant seeds.

One of Beatrix Potter’s little-known scientific studies and illustrations of mushrooms

And yet Solnit’s most salient point deals with what comes after the revolutionary change — with the notion of victory not as a destination but as a starting point for recommitment and continual nourishment of our fledgling ideals:

A victory doesn’t mean that everything is now going to be nice forever and we can therefore all go lounge around until the end of time. Some activists are afraid that if we acknowledge victory, people will give up the struggle. I’ve long been more afraid that people will give up and go home or never get started in the first place if they think no victory is possible or fail to recognize the victories already achieved. Marriage equality is not the end of homophobia, but it’s something to celebrate. A victory is a milestone on the road, evidence that sometimes we win, and encouragement to keep going, not to stop.

Solnit examines this notion more closely in one of the original essays from the book, titled “Changing the Imagination of Change” — a meditation of even more acute timeliness today, more than a decade later, in which she writes:

Americans are good at responding to crisis and then going home to let another crisis brew both because we imagine that the finality of death can be achieved in life — it’s called happily ever after in personal life, saved in politics — and because we tend to think political engagement is something for emergencies rather than, as people in many other countries (and Americans at other times) have imagined it, as a part and even a pleasure of everyday life. The problem seldom goes home.

[…]

Going home seems to be a way to abandon victories when they’re still delicate, still in need of protection and encouragement. Human babies are helpless at birth, and so perhaps are victories before they’ve been consolidated into the culture’s sense of how things should be. I wonder sometimes what would happen if victory was imagined not just as the elimination of evil but the establishment of good — if, after American slavery had been abolished, Reconstruction’s promises of economic justice had been enforced by the abolitionists, or, similarly, if the end of apartheid had been seen as meaning instituting economic justice as well (or, as some South Africans put it, ending economic apartheid).

It’s always too soon to go home. Most of the great victories continue to unfold, unfinished in the sense that they are not yet fully realized, but also in the sense that they continue to spread influence. A phenomenon like the civil rights movement creates a vocabulary and a toolbox for social change used around the globe, so that its effects far outstrip its goals and specific achievements — and failures.

Invoking James Baldwin’s famous proclamation that “not everything that is faced can be changed, but nothing can be changed until it is faced,” Solnit writes:

It’s important to emphasize that hope is only a beginning; it’s not a substitute for action, only a basis for it.

What often obscures our view of hope, she argues, is a kind of collective amnesia that lets us forget just how far we’ve come as we grow despondent over how far we have yet to go. She writes:

Amnesia leads to despair in many ways. The status quo would like you to believe it is immutable, inevitable, and invulnerable, and lack of memory of a dynamically changing world reinforces this view. In other words, when you don’t know how much things have changed, you don’t see that they are changing or that they can change.

Illustration by Isabelle Arsenault from Mr. Gauguin’s Heart by Marie-Danielle Croteau, the story of how Paul Gauguin used the grief of his childhood as a catalyst for a lifetime of art

This lack of a long view is perpetuated by the media, whose raw material — the very notion of “news” — divorces us from the continuity of life and keeps us fixated on the current moment in artificial isolation. Meanwhile, Solnit argues in a poignant parallel, such amnesia poisons and paralyzes our collective conscience by the same mechanism that depression poisons and paralyzes the private psyche — we come to believe that the acute pain of the present is all that will ever be and cease to believe that things will look up. She writes:

There’s a public equivalent to private depression, a sense that the nation or the society rather than the individual is stuck. Things don’t always change for the better, but they change, and we can play a role in that change if we act. Which is where hope comes in, and memory, the collective memory we call history.

A dedicated rower, Solnit ends with the perfect metaphor:

You row forward looking back, and telling this history is part of helping people navigate toward the future. We need a litany, a rosary, a sutra, a mantra, a war chant for our victories. The past is set in daylight, and it can become a torch we can carry into the night that is the future.

Hope in the Dark is a robust anchor of intelligent idealism amid our tumultuous era of disorienting defeatism — a vitalizing exploration of how we can withstand the marketable temptations of false hope and easy despair. Complement it with Camus on how to ennoble our minds in dark times and Viktor Frankl on why idealism is the best realism, then revisit Solnit on the rewards of walking, what reading does for the human spirit, and how modern noncommunication is changing our experience of time, solitude, and communion.

The Charter of Free Inquiry: The Buddha’s Timeless Toolkit for Critical Thinking and Combating Dogmatism

“Do not go upon what has been acquired by repeated hearing; nor upon tradition; nor upon rumor…”

More than two millennia before Carl Sagan penned his famous Baloney Detection Kit for critical thinking, another sage of the ages laid out a similar set of criteria for sound logical reasoning to help navigate the ideological maze of truth, falsehood, and dogma-driven manipulation. Siddhartha Gautama, better known as the Buddha, formulated his tenets of critical thinking in response to a question from a clan called the Kalamas — the inhabitants of the small village of Kesaputta, which he passed while traveling across eastern India.

The Kalamas, the story goes, asked the Buddha how they could discern whom to trust among the countless wandering holy men passing through their land and seeking to convert them to various, often conflicting preachings. His answer, delivered as a sermon known today as the Kalama Sutta or the Buddha’s “charter of free inquiry,” discourages blind faith, encourages a continual critical assessment of all claims, and outlines a cognitive toolkit for defying dogmatism.

Included in the altogether fantastic Sit Down and Shut Up: Punk Rock Commentaries on Buddha, God, Truth, Sex, Death, and Dogen’s Treasury of the Right Dharma Eye (public library), it reads as follows:

Do not go upon what has been acquired by repeated hearing; nor upon tradition; nor upon rumor; nor upon what is in a scripture; nor upon surmise; nor upon an axiom; nor upon specious reasoning; nor upon a bias towards a notion that has been pondered over; nor upon another’s seeming ability; nor upon the consideration, “The monk is our teacher.” But when you yourselves know: “These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness,” enter on and abide in them.

But the most heartening part of the Buddha’s sutta is the timeless measure of integrity implicit in it — it is the mark of a noble and secure intellect to encourage questioning even of one’s own convictions. The Buddha was, after all, just one of the holy men passing through the Kalamas’ land, and he was urging them to apply these very principles in assessing his own teachings.

Complement with Galileo on critical thinking and the folly of believing our preconceptions, Michael Faraday on how to cure our propensity for self-deception, and Maria Konnikova on why even the most rational of us are susceptible to deception, then revisit the great Buddhist teacher D.T. Suzuki on what freedom really means and the 1919 manifesto Declaration of the Independence of the Mind.
