“Knowledge consists in the search for truth… It is not the search for certainty.”
By Maria Popova
“I dream of a world where the truth is what shapes people’s politics, rather than politics shaping what people think is true,” astrophysicist Neil deGrasse Tyson lamented. Nearly half a century earlier, Hannah Arendt captured the crux of the problem in her incisive reflection on thinking vs. knowing, in which she wrote: “The need of reason is not inspired by the quest for truth but by the quest for meaning.”
This distinction between truth and meaning is vital, especially today, as political propaganda and the “alternative facts” establishment manipulate a public that would rather know than think, preying on the desire for the certitude of ready-made meaning among those unwilling to engage in the work of critical thinking necessary for arriving at truth — truth measured by its correspondence with reality and not by its correspondence with one’s personal agendas, comfort zones, and preexisting beliefs.

That distinction is what philosopher of science Karl Popper (July 28, 1902–September 17, 1994) examines in In Search of a Better World: Lectures and Essays from Thirty Years (public library). He writes:
All things living are in search of a better world. Men, animals, plants, even unicellular organisms are constantly active. They are trying to improve their situation, or at least to avoid its deterioration… Every organism is constantly preoccupied with the task of solving problems. These problems arise from its own assessments of its condition and of its environment; conditions which the organism seeks to improve… We can see that life — even at the level of the unicellular organism — brings something completely new into the world, something that did not previously exist: problems and active attempts to solve them; assessments, values; trial and error.
Popper argues that because the identification of error is so central to the problem-solving process, its corrective — that is, truth — is a core component of our quest for betterment:
The search for truth … no doubt counts among the best and greatest things that life has created in the course of its long search for a better world.
We have made great mistakes — all living creatures make mistakes. It is indeed impossible to foresee all the unintended consequences of our actions. Here science is our greatest hope: its method is the correction of error.
Looking back on the sometimes troubled but ultimately exponential reach for a better world that had unfolded over the eighty-seven years of his life — “a time of two senseless world wars and of criminal dictatorships” — Popper writes:
In spite of everything, and although we have had so many failures, we, the citizens of the western democracies, live in a social order which is better (because more favourably disposed to reform) and more just than any other in recorded history. Further improvements are of the greatest urgency. (Yet improvements that increase the power of the state often bring about the opposite of what we are seeking.)
What often warps and frustrates our quest for betterment, Popper notes in a 1982 lecture included in the volume, is our failure to distinguish between the search for truth and the assertion of certainty:
Knowledge consists in the search for truth — the search for objectively true, explanatory theories.
It is not the search for certainty. To err is human. All human knowledge is fallible and therefore uncertain. It follows that we must distinguish sharply between truth and certainty. That to err is human means not only that we must constantly struggle against error, but also that, even when we have taken the greatest care, we cannot be completely certain that we have not made a mistake… To combat the mistake, the error, means therefore to search for objective truth and to do everything possible to discover and eliminate falsehoods. This is the task of scientific activity. Hence we can say: our aim as scientists is objective truth; more truth, more interesting truth, more intelligible truth. We cannot reasonably aim at certainty.
Since we can never know anything for sure, it is simply not worth searching for certainty; but it is well worth searching for truth; and we do this chiefly by searching for mistakes, so that we can correct them.
In a sentiment of piercing pertinence today, as a litany of “alternative facts” attempts to gaslight an uncritical public, Popper offers a definition and admonition of elegant acuity:
A theory or a statement is true, if what it says corresponds to reality.
Truth and certainty must be sharply distinguished.
Condemning relativistic approaches to truth — ones that regard truth as “what is accepted; or what is put forward by society; or by the majority; or by my interest group; or perhaps by television” — he cautions:
The philosophical relativism that hides behind [Kant’s] “old and famous question” “What is truth?” may open the way to evil things, such as a propaganda of lies inciting men to hatred.
Relativism … is a betrayal of reason and of humanity.
It is useful here to revisit Arendt’s distinction between truth and meaning, for where truth is absolute — a binary correspondence with reality: a premise either reflects reality or does not — meaning can be relative; it is shaped by one’s subjective interpretation, which is contingent upon beliefs and can be manipulated. Certainty lives in the realm of meaning, not of truth. The very notion of an “alternative fact,” which manipulates certainty at the expense of truth, is therefore the sort of criminal relativism against which Popper so rigorously cautions — something that, as he puts it, “results from mixing-up the notions of truth and certainty.” All propaganda is in the business of manipulating certainty, but it can never manipulate truth. Arendt had articulated this brilliantly a decade earlier in her timely treatise on defactualization in politics: “No matter how large the tissue of falsehood that an experienced liar has to offer, it will never be large enough … to cover the immensity of factuality.”
Popper argues that the ability to discern truth by testing our theories against reality using critical reasoning is a distinctly human faculty — no other animal does this. A generation before him, Bertrand Russell — perhaps the twentieth century’s greatest patron saint of reason — called this ability “the will to doubt” and extolled it as our greatest self-defense against propaganda. The cultural evolution of our species, Popper notes, was propelled by the necessity of honing that ability — we developed a language that contains true and false statements, which gave rise to criticism, which in turn catalyzed a new phase of selection. He writes:
Natural selection is amplified and partially overtaken by critical, cultural selection. The latter permits us a conscious and critical pursuit of our errors: we can consciously find and eradicate our errors, and we can consciously judge one theory as inferior to another… There is no knowledge without rational criticism, criticism in the service of the search for truth.
But this rational criticism, Popper notes, should also be applied to science itself. Cautioning that the antidote to relativism isn’t scientism — a form of certitude equally corrosive to truth — he writes:
Despite my admiration for scientific knowledge, I am not an adherent of scientism. For scientism dogmatically asserts the authority of scientific knowledge; whereas I do not believe in any authority and have always resisted dogmatism; and I continue to resist it, especially in science. I am opposed to the thesis that the scientist must believe in his theory. As far as I am concerned “I do not believe in belief,” as E. M. Forster says; and I especially do not believe in belief in science. I believe at most that belief has a place in ethics, and even here only in a few instances. I believe, for example, that objective truth is a value — that is, an ethical value, perhaps the greatest value there is — and that cruelty is the greatest evil.
Gathered here are exceptional books that accomplish at least two of the three, assembled in the spirit of my annual best-of reading lists, which I continue to consider Old Year’s resolutions in reverse — not a list of priorities for the year ahead, but a reflection on the reading most worth prioritizing in the year being left behind.
Everything we know about the universe so far comes from four centuries of sight — from peering into space with our eyes and their prosthetic extension, the telescope. Now commences a new mode of knowing the cosmos through sound. The detection of gravitational waves is one of the most significant discoveries in the entire history of physics, marking the dawn of a new era as we begin listening to the sound of space — the probable portal to mysteries as unimaginable to us today as galaxies and nebulae and pulsars and other cosmic wonders were to the first astronomers. Gravitational astronomy, as Janna Levin elegantly puts it in Black Hole Blues and Other Songs from Outer Space (public library), promises a “score to accompany the silent movie humanity has compiled of the history of the universe from still images of the sky, a series of frozen snapshots captured over the past four hundred years since Galileo first pointed a crude telescope at the sky.”
Astonishingly enough, Levin wrote the book before the Laser Interferometer Gravitational-Wave Observatory (LIGO) — the monumental instrument at the center of the story, decades in the making — made the actual detection of a ripple in the fabric of spacetime caused by the collision of two black holes in the autumn of 2015, exactly a century after Einstein first envisioned the possibility of gravitational waves. So the story she tells is not that of the triumph but that of the climb, which renders it all the more enchanting — because it is ultimately a story about the human spirit and its incredible tenacity, about why human beings choose to devote their entire lives to pursuits strewn with unimaginable obstacles and bedeviled by frequent failure, uncertain rewards, and meager public recognition.
Indeed, what makes the book interesting is that it tells the story of this monumental discovery, but what makes it enchanting is that Levin comes at it from a rather unusual perspective. She is a working astrophysicist who studies black holes, but she is also an incredibly gifted novelist — an artist whose medium is language and thought itself. This is no popular science book but something many orders of magnitude higher in its artistic vision, in the impeccable craftsmanship of its language, and in the sheer pleasure of its prose. The story is structured almost as a series of short, integrated novels, with each chapter devoted to one of the key scientists involved in LIGO. With Dostoyevskian insight and nuance, Levin paints a psychological, even philosophical portrait of each protagonist, revealing how intricately interwoven the genius and the foibles are in the fabric of personhood and what a profoundly human endeavor science ultimately is.
Scientists are like those levers or knobs or those boulders helpfully screwed into a climbing wall. Like the wall is some cemented material made by mixing knowledge, which is a purely human construct, with reality, which we can only access through the filter of our minds. There’s an important pursuit of objectivity in science and nature and mathematics, but still the only way up the wall is through the individual people, and they come in specifics… So the climb is personal, a truly human endeavor, and the real expedition pixelates into individuals, not Platonic forms.
In Time Travel: A History (public library), Gleick, who examined the origin of our modern anxiety about time with remarkable prescience nearly two decades ago, traces the invention of the notion of time travel to H.G. Wells’s 1895 masterpiece The Time Machine. Although Wells — like Gleick, like any reputable physicist — knew that time travel was a scientific impossibility, he created an aesthetic of thought which never previously existed and which has since shaped the modern consciousness. Gleick argues that the art this aesthetic produced — an entire canon of time travel literature and film — not only permeated popular culture but even influenced some of the greatest scientific minds of the past century, including Stephen Hawking, who once cleverly hosted a party for time travelers and, when no one showed up, considered the impossibility of time travel proven, and John Archibald Wheeler, who popularized the term “black hole” and coined “wormhole,” both key tropes of time travel literature.
Gleick considers how a scientific impossibility can become such fertile ground for the artistic imagination:
Why do we need time travel, when we already travel through space so far and fast? For history. For mystery. For nostalgia. For hope. To examine our potential and explore our memories. To counter regret for the life we lived, the only life, one dimension, beginning to end.
Wells’s Time Machine revealed a turning in the road, an alteration in the human relationship with time. New technologies and ideas reinforced one another: the electric telegraph, the steam railroad, the earth science of Lyell and the life science of Darwin, the rise of archeology out of antiquarianism, and the perfection of clocks. When the nineteenth century turned to the twentieth, scientists and philosophers were primed to understand time in a new way. And so were we all. Time travel bloomed in the culture, its loops and twists and paradoxes.
I wrote about Gleick’s uncommonly pleasurable book at length here.
A very different take on time, not as cultural phenomenon but as individual psychological interiority, comes from German psychologist Marc Wittmann in Felt Time: The Psychology of How We Perceive Time (public library) — a fascinating inquiry into how our subjective experience of time’s passage shapes everything from our emotional memory to our sense of self. Bridging disciplines as wide-ranging as neuroscience and philosophy, Wittmann examines questions of consciousness, identity, happiness, boredom, money, and aging, exposing the centrality of time in each of them. What emerges is the disorienting sense that time isn’t something which happens to us — rather, we are time.
One of Wittmann’s most pause-giving points has to do with how temporality mediates the mind-body problem. He writes:
Presence means becoming aware of a physical and psychic self that is temporally extended. To be self-conscious is to recognize oneself as something that persists through time and is embodied.
In a sense, time is a construction of our consciousness. Two generations after Hannah Arendt observed in her brilliant meditation on time that “it is the insertion of man with his limited life span that transforms the continuously flowing stream of sheer change … into time as we know it,” Wittmann writes:
Self-consciousness — achieving awareness of one’s own self — emerges on the basis of temporally enduring perception of bodily states that are tied to neural activity in the brain’s insular lobe. The self and time prove to be especially present in boredom. They go missing in the hustle and bustle of everyday life, which results from the acceleration of social processes. Through mindfulness and emotional control, the tempo of life that we experience can be reduced, and we can regain time for ourselves and others.
Perception necessarily encompasses the individual who is doing the perceiving. It is I who perceives. This might seem self-evident. Perception of myself, my ego, occurs naturally when I consider myself. I “feel” and think about myself. But who is the subject if I am the object of my own attention? When I observe myself, after all, I become the object of observation. Clearly, this intangibility of the subject as a subject — and not an object — poses a philosophical problem: as soon as I observe myself, I have already become the object of my observation.
All life is lived in the shadow of its own finitude, of which we are always aware — an awareness we systematically blunt through the daily distraction of living. But when this finitude is made acutely imminent, one suddenly collides with awareness so acute that it leaves no choice but to fill the shadow with as much light as a human being can generate — the sort of inner illumination we call meaning: the meaning of life.
That tumultuous turning point is what neurosurgeon Paul Kalanithi chronicles in When Breath Becomes Air (public library) — his piercing memoir of being diagnosed with terminal cancer at the peak of a career bursting with potential and a life exploding with aliveness. Partway between Montaigne and Oliver Sacks, Kalanithi weaves together philosophical reflections on his personal journey with stories of his patients to illuminate the only thing we have in common — our mortality — and how it spurs all of us, in ways both minute and monumental, to pursue a life of meaning.
What emerges is an uncommonly insightful, sincere, and sobering revelation of how much our sense of self is tied up with our sense of potential and possibility — the selves we would like to become, those we work tirelessly toward becoming. Who are we, then, and what remains of “us” when that possibility is suddenly snipped?
At age thirty-six, I had reached the mountaintop; I could see the Promised Land, from Gilead to Jericho to the Mediterranean Sea. I could see a nice catamaran on that sea that Lucy, our hypothetical children, and I would take out on weekends. I could see the tension in my back unwinding as my work schedule eased and life became more manageable. I could see myself finally becoming the husband I’d promised to be.
And then the unthinkable happens. He recounts one of the first incidents in which his former identity and his future fate collided with jarring violence:
My back stiffened terribly during the flight, and by the time I made it to Grand Central to catch a train to my friends’ place upstate, my body was rippling with pain. Over the past few months, I’d had back spasms of varying ferocity, from simple ignorable pain, to pain that made me forsake speech to grind my teeth, to pain so severe I curled up on the floor, screaming. This pain was toward the more severe end of the spectrum. I lay down on a hard bench in the waiting area, feeling my back muscles contort, breathing to control the pain — the ibuprofen wasn’t touching this — and naming each muscle as it spasmed to stave off tears: erector spinae, rhomboid, latissimus, piriformis…
A security guard approached. “Sir, you can’t lie down here.”
“I’m sorry,” I said, gasping out the words. “Bad … back … spasms.”
“You still can’t lie down here.”
I pulled myself up and hobbled to the platform.
Like the book itself, the anecdote speaks to something larger and far more powerful than the particular story — in this case, our cultural attitude toward what we consider the failings of our bodies: pain and, in the ultimate extreme, death. We try to dictate the terms on which these perceived failings may occur; to make them conform to wished-for realities; to subvert them by will and witless denial. All this we do because, at bottom, we deem them impermissible — in ourselves and in each other.
“Try not to get overly attached to a hypothesis just because it’s yours,” Carl Sagan urged in his excellent Baloney Detection Kit — and yet our tendency is to do just that, becoming increasingly attached to what we’ve come to believe because the belief has sprung from our own glorious, brilliant, fool-proof minds. How con artists take advantage of this human hubris is what New Yorker columnist and psychology writer Maria Konnikova explores in The Confidence Game: Why We Fall for It … Every Time (public library) — a thrilling psychological detective story investigating how con artists, the supreme masterminds of malevolent reality-manipulation, prey on our hopes, our fears, and our propensity for believing what we wish were true. Through a tapestry of riveting real-life con artist profiles interwoven with decades of psychology experiments, Konnikova illuminates the inner workings of trust and deception in our everyday lives.
It’s the oldest story ever told. The story of belief — of the basic, irresistible, universal human need to believe in something that gives life meaning, something that reaffirms our view of ourselves, the world, and our place in it… For our minds are built for stories. We crave them, and, when there aren’t ready ones available, we create them. Stories about our origins. Our purpose. The reasons the world is the way it is. Human beings don’t like to exist in a state of uncertainty or ambiguity. When something doesn’t make sense, we want to supply the missing link. When we don’t understand what or why or how something happened, we want to find the explanation. A confidence artist is only too happy to comply — and the well-crafted narrative is his absolute forte.
Konnikova describes the basic elements of the con and the psychological susceptibility into which each of them plays:
The confidence game starts with basic human psychology. From the artist’s perspective, it’s a question of identifying the victim (the put-up): who is he, what does he want, and how can I play on that desire to achieve what I want? It requires the creation of empathy and rapport (the play): an emotional foundation must be laid before any scheme is proposed, any game set in motion. Only then does it move to logic and persuasion (the rope): the scheme (the tale), the evidence and the way it will work to your benefit (the convincer), the show of actual profits. And like a fly caught in a spider’s web, the more we struggle, the less able to extricate ourselves we become (the breakdown). By the time things begin to look dicey, we tend to be so invested, emotionally and often physically, that we do most of the persuasion ourselves. We may even choose to up our involvement ourselves, even as things turn south (the send), so that by the time we’re completely fleeced (the touch), we don’t quite know what hit us. The con artist may not even need to convince us to stay quiet (the blow-off and fix); we are more likely than not to do so ourselves. We are, after all, the best deceivers of our own minds. At each step of the game, con artists draw from a seemingly endless toolbox of ways to manipulate our belief. And as we become more committed, with every step we give them more psychological material to work with.
Needless to say, the book bears remarkable relevance to the recent turn of events in American politics and its ripples in the mass manipulation machine known as the media.
“This is the entire essence of life: Who are you? What are you?” young Leo Tolstoy wrote in his diary. For Tolstoy, this was a philosophical inquiry — or a metaphysical one, as it would have been called in his day. But between his time and ours, science has unraveled the inescapable physical dimensions of this elemental question, rendering the already disorienting attempt at an answer all the more complex and confounding.
In The Gene: An Intimate History (public library), physician and Pulitzer-winning author Siddhartha Mukherjee offers a rigorously researched, beautifully written detective story about the genetic components of what we experience as the self, rooted in Mukherjee’s own painful family history of mental illness and radiating a larger inquiry into how genetics illuminates the future of our species.
Three profoundly destabilizing scientific ideas ricochet through the twentieth century, trisecting it into three unequal parts: the atom, the byte, the gene. Each is foreshadowed by an earlier century, but dazzles into full prominence in the twentieth. Each begins its life as a rather abstract scientific concept, but grows to invade multiple human discourses — thereby transforming culture, society, politics, and language. But the most crucial parallel between the three ideas, by far, is conceptual: each represents the irreducible unit — the building block, the basic organizational unit — of a larger whole: the atom, of matter; the byte (or “bit”), of digitized information; the gene, of heredity and biological information.
Why does this property — being the least divisible unit of a larger form — imbue these particular ideas with such potency and force? The simple answer is that matter, information, and biology are inherently hierarchically organized: understanding that smallest part is crucial to understanding the whole.
“In wildness is the preservation of the world,” Thoreau wrote 150 years ago in his ode to the spirit of sauntering. But in a world increasingly unwild, where we are in touch with nature only occasionally and only in fragments, how are we to nurture the preservation of our Pale Blue Dot?
The story, told in Jenni Desmond’s The Polar Bear (public library), follows a little girl who, in a delightful meta-touch, pulls this very book off the bookshelf and begins learning about the strange and wonderful world of the polar bear, its life, and the science behind it — its love of solitude, the black skin that hides beneath its yellowish-white fur, the built-in sunglasses protecting its eyes from the harsh Arctic light, why it evolved to have an unusually long neck and slightly inward paws, how it maintains the same temperature as us despite living in such extreme cold, why it doesn’t hibernate.
Beyond its sheer loveliness, the book is suddenly imbued with a new layer of urgency. At a time when we can no longer count on politicians to protect the planet and educate the next generations about preserving it, the task falls solely on parents and educators. Desmond’s wonderful project alleviates that task by offering a warm, empathic invitation to care about, which is the gateway to caring for, one of the creatures most vulnerable to our changing climate and most needful of our protection.
“We are — as far as we know — the only part of the universe that’s self-conscious,” the poet Mark Strand marveled in his beautiful meditation on the artist’s task to bear witness to existence, adding: “We could even be the universe’s form of consciousness. We might have come along so that the universe could look at itself… It’s such a lucky accident, having been born, that we’re almost obliged to pay attention.” Scientists are rightfully reluctant to ascribe a purpose or meaning to the universe itself but, as physicist Lisa Randall has pointed out, “an unconcerned universe is not a bad thing — or a good one for that matter.” Where poets and scientists converge is in the idea that while the universe itself isn’t inherently imbued with meaning, it is in this self-conscious human act of paying attention that meaning arises.
Physicist Sean Carroll terms this view poetic naturalism and examines its rewards in The Big Picture: On the Origins of Life, Meaning, and the Universe Itself (public library) — a nuanced inquiry into “how our desire to matter fits in with the nature of reality at its deepest levels,” in which Carroll offers an assuring dose of what he calls “existential therapy” reconciling the various and often seemingly contradictory dimensions of our experience.
With an eye to his life’s work of studying the nature of the universe — an expanse of space and time against the incomprehensibly enormous backdrop of which the dramas of a single human life claim no more than a photon of the spotlight — Carroll offers a counterpoint to our intuitive cowering before such magnitudes of matter and mattering:
I like to think that our lives do matter, even if the universe would trundle along without us.
I want to argue that, though we are part of a universe that runs according to impersonal underlying laws, we nevertheless matter. This isn’t a scientific question — there isn’t data we can collect by doing experiments that could possibly measure the extent to which a life matters. It’s at heart a philosophical problem, one that demands that we discard the way that we’ve been thinking about our lives and their meaning for thousands of years. By the old way of thinking, human life couldn’t possibly be meaningful if we are “just” collections of atoms moving around in accordance with the laws of physics. That’s exactly what we are, but it’s not the only way of thinking about what we are. We are collections of atoms, operating independently of any immaterial spirits or influences, and we are thinking and feeling people who bring meaning into existence by the way we live our lives.
Carroll’s captivating term poetic naturalism builds on a worldview that has been around for centuries, dating back at least to the Scottish philosopher David Hume. It fuses naturalism — the idea that the reality of the natural world is the only reality, that it operates according to consistent patterns, and that those patterns can be studied — with the poetic notion that there are multiple ways of talking about the world and of framing the questions that arise from nature’s elemental laws.
In The Hidden Life of Trees: What They Feel, How They Communicate (public library), German forester Peter Wohlleben chronicles what his own experience of managing a forest in the Eifel mountains has taught him about the astonishing language of trees and how trailblazing arboreal research from scientists around the world reveals “the role forests play in making our world the kind of place where we want to live.” As we’re only just beginning to understand nonhuman consciousnesses, what emerges from Wohlleben’s revelatory reframing of our oldest companions is an invitation to see anew what we have spent eons taking for granted and, in this act of seeing, to care more deeply about these remarkable beings that make life on this planet we call home not only infinitely more pleasurable, but possible at all.
“The act of smelling something, anything, is remarkably like the act of thinking itself,” the great science storyteller Lewis Thomas wrote in his beautiful 1985 meditation on the poetics of smell as a mode of knowledge. But, like the conditioned consciousness out of which our thoughts arise, our olfactory perception is beholden to our cognitive, cultural, and biological limitations. The 438 cubic feet of air we inhale each day are loaded with an extraordinary richness of information, but we are able to access and decipher only a fraction. And yet we know, on some deep creaturely level, just how powerful and enlivening the world of smell is, how intimately connected with our ability to savor life. “Get a life in which you notice the smell of salt water pushing itself on a breeze over the dunes,” Anna Quindlen advised in her indispensable Short Guide to a Happy Life — but the noticing eclipses the getting, for the salt water breeze is lost on any life devoid of this sensorial perception.
Dogs, who “see” the world through smell, can teach us a great deal about that springlike sensorial aliveness which E.E. Cummings termed “smelloftheworld.” So argues cognitive scientist and writer Alexandra Horowitz, director of the Dog Cognition Lab at Barnard College, in Being a Dog: Following the Dog Into a World of Smell (public library) — a fascinating tour of what Horowitz calls the “surprising and sometimes alarming feats of olfactory perception” that dogs perform daily, and what they can teach us about swinging open the doors of our own perception by relearning some of our long-lost olfactory skills that grant us access to hidden layers of reality.
I am besotted with dogs, and to know a dog is to be interested in what it’s like to be a dog. And that all begins with the nose.
What the dog sees and knows comes through his nose, and the information that every dog — the tracking dog, of course, but also the dog lying next to you, snoring, on the couch — has about the world based on smell is unthinkably rich. It is rich in a way we humans once knew about, once acted on, but have since neglected.
Savor more of the wonderland of canine olfaction here.
I CONTAIN MULTITUDES
“I have observed many tiny animals with great admiration,” Galileo marveled as he peered through his microscope — a tool that, like the telescope, he didn’t invent himself but used in such a visionary way as to render it revolutionary. The revelatory discoveries he made in the universe within the cell are increasingly proving to be as significant as his telescopic discoveries in the universe without — a significance humanity has been even slower and more reluctant to accept than his radical revision of the cosmos. It is this universe within that science writer Ed Yong explores in I Contain Multitudes: The Microbes Within Us and a Grander View of Life (public library). He writes:
Even when we are alone, we are never alone. We exist in symbiosis — a wonderful term that refers to different organisms living together. Some animals are colonised by microbes while they are still unfertilised eggs; others pick up their first partners at the moment of birth. We then proceed through our lives in their presence. When we eat, so do they. When we travel, they come along. When we die, they consume us. Every one of us is a zoo in our own right — a colony enclosed within a single body. A multi-species collective. An entire world.
All zoology is really ecology. We cannot fully understand the lives of animals without understanding our microbes and our symbioses with them. And we cannot fully appreciate our own microbiome without appreciating how those of our fellow species enrich and influence their lives. We need to zoom out to the entire animal kingdom, while zooming in to see the hidden ecosystems that exist in every creature. When we look at beetles and elephants, sea urchins and earthworms, parents and friends, we see individuals, working their way through life as a bunch of cells in a single body, driven by a single brain, and operating with a single genome. This is a pleasant fiction. In fact, we are legion, each and every one of us. Always a “we” and never a “me.”
There are ample reasons to admire and appreciate microbes, well beyond the already impressive facts that they ruled “our” Earth for the vast majority of its 4.54-billion-year history and that we ourselves evolved from them. By pioneering photosynthesis, they became the first organisms capable of making their own food. They dictate the planet’s carbon, nitrogen, sulphur, and phosphorus cycles. They can survive anywhere and populate just about every corner of the Earth, from the hydrothermal vents at the bottom of the ocean to the loftiest clouds. They are so diverse that the microbes on your left hand are different from those on your right.
But perhaps most impressively — for we are, after all, the solipsistic species — they influence innumerable aspects of our biological and even psychological lives. Yong offers a cross-section of this microbial dominion:
The microbiome is infinitely more versatile than any of our familiar body parts. Your cells carry between 20,000 and 25,000 genes, but it is estimated that the microbes inside you wield around 500 times more. This genetic wealth, combined with their rapid evolution, makes them virtuosos of biochemistry, able to adapt to any possible challenge. They help to digest our food, releasing otherwise inaccessible nutrients. They produce vitamins and minerals that are missing from our diet. They break down toxins and hazardous chemicals. They protect us from disease by crowding out more dangerous microbes or killing them directly with antimicrobial chemicals. They produce substances that affect the way we smell. They are such an inevitable presence that we have outsourced surprising aspects of our lives to them. They guide the construction of our bodies, releasing molecules and signals that steer the growth of our organs. They educate our immune system, teaching it to tell friend from foe. They affect the development of the nervous system, and perhaps even influence our behaviour. They contribute to our lives in profound and wide-ranging ways; no corner of our biology is untouched. If we ignore them, we are looking at our lives through a keyhole.
“No woman should say, ‘I am but a woman!’ But a woman! What more can you ask to be?” astronomer Maria Mitchell, who paved the way for women in American science, admonished the first class of female astronomers at Vassar in 1876. By the middle of the next century, a team of unheralded women scientists and engineers was powering space exploration at NASA’s Jet Propulsion Laboratory.
Meanwhile, across the continent and in what was practically another country, a parallel but very different revolution was taking place: In the segregated South, a growing number of black female mathematicians, scientists, and engineers were steering early space exploration and helping America win the Cold War at NASA’s Langley Research Center in Hampton, Virginia.
Long before the term “computer” came to signify the machine that dictates our lives, these remarkable women were working as human “computers” — highly skilled professional reckoners, who thought mathematically and computationally for their living and for their country. When Neil Armstrong set foot on the moon, his “giant leap for mankind” had been powered by womankind, particularly by Katherine Johnson — the “computer” who calculated Apollo 11’s launch windows and who was awarded the Presidential Medal of Freedom by President Obama at age 97 in 2015, three years after the accolade was conferred upon John Glenn, the astronaut whose flight trajectory Johnson had made possible.
Their story is what Margot Lee Shetterly tells in Hidden Figures (public library), noting the wider resonance of such recovered histories:

Just as islands — isolated places with unique, rich biodiversity — have relevance for the ecosystems everywhere, so does studying seemingly isolated or overlooked people and events from the past turn up unexpected connections and insights to modern life.
Against a sobering cultural backdrop, Shetterly captures the enormous cognitive dissonance the very notion of these black female mathematicians evokes:
Before a computer became an inanimate object, and before Mission Control landed in Houston; before Sputnik changed the course of history, and before the NACA became NASA; before the Supreme Court case Brown v. Board of Education of Topeka established that separate was in fact not equal, and before the poetry of Martin Luther King Jr.’s “I Have a Dream” speech rang out over the steps of the Lincoln Memorial, Langley’s West Computers were helping America dominate aeronautics, space research, and computer technology, carving out a place for themselves as female mathematicians who were also black, black mathematicians who were also female.
Shetterly herself grew up in Hampton, which dubbed itself “Spacetown USA,” amid this archipelago of women who were her neighbors and teachers. Her father, who had built his first rocket in his early teens after seeing the Sputnik launch, was one of Langley’s African American scientists in an era when words we now shudder to hear were used instead of “African American.” Like him, the first five black women who joined Langley’s research staff in 1943 entered a segregated workplace at what was then the NACA — even though, as Shetterly points out, the agency was among the most inclusive employers in the country, with more than four times the national percentage of black scientists and engineers.
Over the next forty years, the number of these trailblazing black women mushroomed to more than fifty, revealing the mycelia of a significant groundswell. Shetterly’s favorite Sunday school teacher had been one of the early computers — a retired NASA mathematician named Kathleen Land. And so Shetterly, who considers herself “as much a product of NASA as the Moon landing,” grew up believing that black women simply belonged in science and space exploration as a matter of course — after all, they populated her father’s workplace and her town, a town whose church “abounded with mathematicians.”
Building 1236, my father’s daily destination, contained a byzantine complex of government-gray cubicles, perfumed with the grown-up smells of coffee and stale cigarette smoke. His engineering colleagues with their rumpled style and distracted manner seemed like exotic birds in a sanctuary. They gave us kids stacks of discarded 11×14 continuous-form computer paper, printed on one side with cryptic arrays of numbers, the blank side a canvas for crayon masterpieces. Women occupied many of the cubicles; they answered phones and sat in front of typewriters, but they also made hieroglyphic marks on transparent slides and conferred with my father and other men in the office on the stacks of documents that littered their desks. That so many of them were African American, many of them my grandmother’s age, struck me as simply a part of the natural order of things: growing up in Hampton, the face of science was brown like mine.
The community certainly included black English professors, like my mother, as well as black doctors and dentists, black mechanics, janitors, and contractors, black cobblers, wedding planners, real estate agents, and undertakers, several black lawyers, and a handful of black Mary Kay salespeople. As a child, however, I knew so many African Americans working in science, math, and engineering that I thought that’s just what black folks did.
But despite the opportunities at NASA, almost countercultural in their contrast to the norms of the time, life for these courageous and brilliant women was no idyll — persons and polities are invariably products of their time and place. Shetterly captures the sundering paradoxes of the early computers’ experience:
I interviewed Mrs. Land about the early days of Langley’s computing pool, when part of her job responsibility was knowing which bathroom was marked for “colored” employees. And less than a week later I was sitting on the couch in Katherine Johnson’s living room, under a framed American flag that had been to the Moon, listening to a ninety-three-year-old with a memory sharper than mine recall segregated buses, years of teaching and raising a family, and working out the trajectory for John Glenn’s spaceflight. I listened to Christine Darden’s stories of long years spent as a data analyst, waiting for the chance to prove herself as an engineer. Even as a professional in an integrated world, I had been the only black woman in enough drawing rooms and boardrooms to have an inkling of the chutzpah it took for an African American woman in a segregated southern workplace to tell her bosses she was sure her calculations would put a man on the Moon.
And while the black women are the most hidden of the mathematicians who worked at the NACA, the National Advisory Committee for Aeronautics, and later at NASA, they were not sitting alone in the shadows: the white women who made up the majority of Langley’s computing workforce over the years have hardly been recognized for their contributions to the agency’s long-term success. Virginia Biggins worked the Langley beat for the Daily Press newspaper, covering the space program starting in 1958. “Everyone said, ‘This is a scientist, this is an engineer,’ and it was always a man,” she said in a 1990 panel on Langley’s human computers. She never got to meet any of the women. “I just assumed they were all secretaries,” she said.
These women’s often impossible dual task of preserving their own sanity and dignity while pushing culture forward is perhaps best captured in the words of African American NASA mathematician Dorothy Vaughan:
What I changed, I could; what I couldn’t, I endured.
Predating NASA’s women mathematicians by a century was a devoted team of female amateur astronomers — “amateur” being a reflection not of their skill but of the dearth of academic accreditation available to women at the time — who came together at the Harvard Observatory at the end of the nineteenth century around an unprecedented quest to catalog the cosmos by classifying the stars and their spectra.
Decades before they were allowed to vote, these women, who came to be known as the “Harvard computers,” classified hundreds of thousands of stars according to a system they invented, which astronomers continue to use today. Their calculations became the basis for the discovery that the universe is expanding. Their spirit of selfless pursuit of truth and knowledge stands as a timeless testament to pioneering physicist Lise Meitner’s definition of the true scientist.
In The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars (public library), Dava Sobel takes on the role of rigorous reporter and storyteller bent on preserving the unvarnished historical integrity of the story. She paints the backdrop:
A little piece of heaven. That was one way to look at the sheet of glass propped up in front of her. It measured about the same dimensions as a picture frame, eight inches by ten, and no thicker than a windowpane. It was coated on one side with a fine layer of photographic emulsion, which now held several thousand stars fixed in place, like tiny insects trapped in amber. One of the men had stood outside all night, guiding the telescope to capture this image, along with another dozen in the pile of glass plates that awaited her when she reached the observatory at 9 a.m. Warm and dry indoors in her long woolen dress, she threaded her way among the stars. She ascertained their positions on the dome of the sky, gauged their relative brightness, studied their light for changes over time, extracted clues to their chemical content, and occasionally made a discovery that got touted in the press. Seated all around her, another twenty women did the same.
Among the “Harvard computers” were Antonia Maury, who had graduated from Maria Mitchell’s program at Vassar; Annie Jump Cannon, who catalogued more than 20,000 variable stars in a short period after joining the observatory; Henrietta Swan Leavitt, a Radcliffe alumna whose discoveries later became the basis for Hubble’s Law demonstrating the expansion of the universe and whose work was so valued that she was paid 30 cents an hour, five cents over the standard wage of the computers; and Cecilia Helena Payne-Gaposchkin, who became not only the first woman but the first person of any gender to earn a Ph.D. in astronomy from Radcliffe College.
Helming the team was Williamina Fleming — a Scotswoman whom Edward Charles Pickering, the thirty-something director of the observatory, first hired as a second maid at his residence in 1879 before recognizing her mathematical talents and assigning her the role of part-time computer.
For a lighter companion to the two books above, one aimed at younger readers, artist and author Rachel Ignotofsky offers Women in Science: 50 Fearless Pioneers Who Changed the World (public library) — an illustrated encyclopedia of fifty influential and inspiring women in STEM since long before we acronymized the conquest of curiosity through discovery and invention, ranging from the ancient astronomer, mathematician, and philosopher Hypatia in the fourth century to Iranian mathematician Maryam Mirzakhani, born in 1977.
True as it may be that being an outsider is an advantage in science and life, role models furnish young hearts with the assurance that people who are in some way like them can belong and shine in fields composed primarily of people drastically unlike them. It is this ethos that Ignotofsky embraces by deliberately ensuring that the scientists included come from a vast variety of ethnic backgrounds, nationalities, orientations, and cultural traditions.
“In the course of looking deeply within ourselves, we may challenge notions that give comfort before the terrors of the world.”
By Maria Popova
“Unless we are very, very careful,” wrote psychologist-turned-artist Anne Truitt in contemplating compassion and the cure for our chronic self-righteousness, “we doom each other by holding onto images of one another based on preconceptions that are in turn based on indifference to what is other than ourselves.” She urged “the honoring of others in a way that grants them the grace of their own autonomy and allows mutual discovery.” But how are we to find in ourselves the capacity — the willingness — to honor otherness where we see only ignorance and bigotry in beliefs not only diametrically opposed to our own but dangerous to the very fabric of society?
Sagan considers how we can bridge conviction and compassion in dealing with those who disagree with and even attack our beliefs, an inquiry he takes up in The Demon-Haunted World: Science as a Candle in the Dark (public library). Although he addresses the particular problems of pseudoscience and superstition, his elegant and empathetic argument applies to any form of ignorance and bigotry. He explores how we can remain sure-footed and rooted in truth and reason when confronted with such dangerous ideologies, yet retain a humane and compassionate intention to understand the deeper fears and anxieties out of which such unreasonable beliefs arise in those who hold them. He writes:
When we are asked to swear in American courts of law — that we will tell “the truth, the whole truth, and nothing but the truth” — we are being asked the impossible. It is simply beyond our powers. Our memories are fallible; even scientific truth is merely an approximation; and we are ignorant about nearly all of the Universe…
If it is to be applied consistently, science imposes, in exchange for its manifold gifts, a certain onerous burden: We are enjoined, no matter how uncomfortable it might be, to consider ourselves and our cultural institutions scientifically — not to accept uncritically whatever we’re told; to surmount as best we can our hopes, conceits, and unexamined beliefs; to view ourselves as we really are… Because its explanatory power is so great, once you get the hang of scientific reasoning you’re eager to apply it everywhere. However, in the course of looking deeply within ourselves, we may challenge notions that give comfort before the terrors of the world.
Sagan notes that all of us are deeply attached to and even defined by our beliefs, for they shape our reality and are thus elemental to our very selves, so any challenge to our core beliefs tends to feel like a personal attack. This is as true of us as it is of those who hold opposing beliefs — such is the human condition. He considers how we can reconcile our sense of intellectual righteousness with our human fallibility:
In the way that skepticism is sometimes applied to issues of public concern, there is a tendency to belittle, to condescend, to ignore the fact that, deluded or not, supporters of superstition and pseudoscience are human beings with real feelings, who, like the skeptics, are trying to figure out how the world works and what our role in it might be. Their motives are in many cases consonant with science. If their culture has not given them all the tools they need to pursue this great quest, let us temper our criticism with kindness. None of us comes fully equipped.
But kindness, Sagan cautions, doesn’t mean assent — there are instances, like when we are faced with bigotry and hate speech, in which we absolutely must confront and critique these harmful beliefs, for “every silent assent will encourage [the person] next time, and every vigorous dissent will cause him next time to think twice.” He writes:
If we offer too much silent assent about [ignorance] — even when it seems to be doing a little good — we abet a general climate in which skepticism is considered impolite, science tiresome, and rigorous thinking somehow stuffy and inappropriate. Figuring out a prudent balance takes wisdom.
The greatest detriment to reason, Sagan argues, is that we let our reasonable and righteous convictions slip into self-righteousness, that deadly force of polarization:
The chief deficiency I see in the skeptical movement is in its polarization: Us vs. Them — the sense that we have a monopoly on the truth; that those other people who believe in all these stupid doctrines are morons; that if you’re sensible, you’ll listen to us; and if not, you’re beyond redemption. This is unconstructive… Whereas, a compassionate approach that from the beginning acknowledges the human roots of pseudoscience and superstition might be much more widely accepted. If we understand this, then of course we feel the uncertainty and pain of the abductees, or those who dare not leave home without consulting their horoscopes, or those who pin their hopes on crystals from Atlantis.
Or, say, those who vote for a racist, sexist, homophobic, misogynistic, climate-change-denying political leader.
Sagan’s central point is that we humans — all of us — are greatly perturbed by fear, anxiety, and uncertainty, and in seeking to becalm ourselves, we sometimes anchor ourselves to irrational and ignorant ideologies that offer certitude and stability, however illusory. In understanding those who succumb to such false refuges, Sagan calls for “compassion for kindred spirits in a common quest.” Echoing 21-year-old Hillary Rodham’s precocious assertion that “we are all of us exploring a world that none of us understand,” he argues that the dangerous beliefs of ignorance arise from “the feeling of powerlessness in a complex, troublesome and unpredictable world.”
In envisioning a society capable of cultivating both critical thinking and kindness, Sagan’s insistence on the role and responsibility of the media resonates with especial poignancy today:
Both skepticism and wonder are skills that need honing and practice. Their harmonious marriage within the mind of every schoolchild ought to be a principal goal of public education. I’d love to see such a domestic felicity portrayed in the media, television especially: a community of people really working the mix — full of wonder, generously open to every notion, dismissing nothing except for good reason, but at the same time, and as second nature, demanding stringent standards of evidence — and these standards applied with at least as much rigor to what they hold dear as to what they are tempted to reject with impunity.
“The protection of minorities is vitally important; and even the most orthodox of us may find himself in a minority some day, so that we all have an interest in restraining the tyranny of majorities.”
By Maria Popova
“We must believe before we can doubt, and doubt before we can deny,” W.H. Auden observed in his commonplace book. Half a century earlier, Bertrand Russell (May 18, 1872–February 2, 1970), the great poet laureate of reason, addressed the central equation of free thinking in his 1922 Conway Memorial Lecture, later published as Free Thought and Official Propaganda (public library | free ebook) — a short and searing book charged with Russell’s characteristic intellectual electricity, whose immense power melts an entire century into astonishing timeliness and speaks directly to the present day. Russell begins by considering what it means for thought to be “free”:
When we speak of anything as “free,” our meaning is not definite unless we can say what it is free from. Whatever or whoever is “free” is not subject to some external compulsion, and to be precise we ought to say what this kind of compulsion is. Thus thought is “free” when it is free from certain kinds of outward control which are often present. Some of these kinds of control which must be absent if thought is to be “free” are obvious, but others are more subtle and elusive.
Writing three years after the magnificent Declaration of the Independence of the Mind, which he signed alongside luminaries like Albert Einstein and Jane Addams, Russell points to two primary meanings of “free thought” — the narrower sense of resisting traditional dogma and a broader sense that encompasses all forms of propaganda pervading public life. A patron saint of nonbelievers, Russell writes:
I am myself a dissenter from all known religions, and I hope that every kind of religious belief will die out. I do not believe that, on the balance, religious belief has been a force for good. Although I am prepared to admit that in certain times and places it has had some good effects, I regard it as belonging to the infancy of human reason, and to a stage of development which we are now outgrowing.
But there is also a wider sense of “free thought,” which I regard as of still greater importance. Indeed, the harm done by traditional religions seems chiefly traceable to the fact that they have prevented free thought in this wider sense.
He considers the three essential elements of this wider conception of free thought:
Thought is not “free” when legal penalties are incurred by the holding or not holding of certain opinions, or by giving expression to one’s belief or lack of belief on certain matters… The most elementary condition, if thought is to be free, is the absence of legal penalties for the expression of opinions.
Legal penalties are, however, in the modern world, the least of the obstacles to freedom of thought. The two great obstacles are economic penalties and distortion of evidence. It is clear that thought is not free if the profession of certain opinions makes it impossible to earn a living. It is clear also that thought is not free if all the arguments on one side of a controversy are perpetually presented as attractively as possible, while the arguments on the other side can only be discovered by diligent search.
Echoing the essence of Descartes’s twelve tenets of critical thinking, penned three centuries earlier, Russell returns to the centerpiece of free thought — the willingness to doubt:
William James used to preach the “will to believe.” For my part, I should wish to preach the “will to doubt.” None of our beliefs are quite true; all have at least a penumbra of vagueness and error. The methods of increasing the degree of truth in our beliefs are well known; they consist in hearing all sides, trying to ascertain all the relevant facts, controlling our own bias by discussion with people who have the opposite bias, and cultivating a readiness to discard any hypothesis which has proved inadequate.
Every man of science whose outlook is truly scientific is ready to admit that what passes for scientific knowledge at the moment is sure to require correction with the progress of discovery; nevertheless, it is near enough to the truth to serve for most practical purposes, though not for all. In science, where alone something approximating to genuine knowledge is to be found, men’s attitude is tentative and full of doubt.
In religion and politics, on the contrary, though there is as yet nothing approaching scientific knowledge, everybody considers it de rigueur to have a dogmatic opinion, to be backed up by inflicting starvation, prison, and war, and to be carefully guarded from argumentative competition with any different opinion. If only men could be brought into a tentatively agnostic frame of mind about these matters, nine-tenths of the evils of the modern world would be cured. War would become impossible, because each side would realize that both sides must be in the wrong. Persecution would cease. Education would aim at expanding the mind, not at narrowing it. Men would be chosen for jobs on account of fitness to do the work, not because they flattered the irrational dogmas of those in power.
He points to Einstein and the relativity theory he had formulated just seven years earlier as an epitome of this disposition:
His theory upsets the whole theoretical framework of traditional physics; it is almost as damaging to orthodox dynamics as Darwin was to Genesis. Yet physicists everywhere have shown complete readiness to accept his theory as soon as it appeared that the evidence was in its favour. But none of them, least of all Einstein himself, would claim that he has said the last word… This critical undogmatic receptiveness is the true attitude of science.
Russell offers a disquieting thought experiment of sorts:
If Einstein had advanced something equally new in the sphere of religion or politics … the truth or falsehood of his doctrine would be decided on the battlefield, without the collection of any fresh evidence for or against it. This method is the logical outcome of William James’s will to believe. What is wanted is not the will to believe, but the wish to find out, which is its exact opposite.
He considers the core obstacles to this vital rational doubt:
A great deal of this is due to the inherent irrationality and credulity of average human nature. But this seed of intellectual original sin is nourished and fostered by other agencies, among which three play the chief part — namely, education, propaganda, and economic pressure.
Russell examines each of the three in turn, beginning with education — a subject he would come to consider closely four years later in his masterwork on education and the good life. Education’s formal institutions, he argues, are set up “to impart information without imparting intelligence” and designed “not to give true knowledge, but to make the people pliable to the will of their masters” — a seedbed of political and cultural propaganda that begins in elementary school, with the teaching of a history told by those in power, and results in the widespread manipulation of public opinion. Lamenting “the paradoxical fact that education has become one of the chief obstacles to intelligence and freedom of thought,” he envisions the remedy:
Education should have two objects: first, to give definite knowledge — reading and writing, languages and mathematics, and so on; secondly, to create those mental habits which will enable people to acquire knowledge and form sound judgments for themselves. The first of these we may call information, the second intelligence.
Turning to the second of the three agencies — propaganda — Russell cautions against locating its danger solely in its appeal to feeling:

Too much fuss is sometimes made about the fact that propaganda appeals to emotion rather than reason. The line between emotion and reason is not so sharp as some people think.
The objection to propaganda is not only its appeal to unreason, but still more the unfair advantage which it gives to the rich and powerful. Equality of opportunity among opinions is essential if there is to be real freedom of thought; and equality of opportunity among opinions can only be secured by elaborate laws directed to that end, which there is no reason to expect to see enacted. The cure is not to be sought primarily in such laws, but in better education and a more sceptical public opinion.
Turning to the final impediment to free thought — the economic pressure of conformity, under which one is rewarded for siding with those in power and adopting their dogmas — Russell writes:
There are two simple principles which, if they were adopted, would solve almost all social problems. The first is that education should have for one of its aims to teach people only to believe propositions when there is some reason to think that they are true. The second is that jobs should be given solely for fitness to do the work.
The protection of minorities is vitally important; and even the most orthodox of us may find himself in a minority some day, so that we all have an interest in restraining the tyranny of majorities. Nothing except public opinion can solve this problem.
It’s a sentiment of enormous poignancy and prescience, illustrating both how far we’ve come — Russell was writing more than three decades before the peak of the civil rights movement and the Equal Pay Act — and how far we have yet to go in a culture where, a century later, sexism and racism are far from gone and many workplaces still systematically discriminate against minorities like Muslims and the LGBT community.
The cultivation of a public opinion that advances equality and justice rather than upholding oppressive power structures comes back to the “will to doubt” at the heart of Russell’s case. He writes:
Some element of doubt is essential to the practice, though not to the theory, of toleration… If there is to be toleration in the world, one of the things taught in schools must be the habit of weighing evidence, and the practice of not giving full assent to propositions which there is no reason to believe true.
The role of the educator, he argues, is to teach young minds how to infer what actually happened “from the biased account of either side” and to instill in them the awareness that “everything in newspapers is more or less untrue” — a task all the more urgent today, when the old role of the newspapers has been largely taken over by incessant opinion-streams barraging us, online and off, with the certitude of their respective versions of reality masquerading as truth.
Russell returns to the basic human predicament obstructing freedom of thought and envisions its only fruitful solution:
The evils of the world are due to moral defects quite as much as to lack of intelligence. But the human race has not hitherto discovered any method of eradicating moral defects; preaching and exhortation only add hypocrisy to the previous list of vices. Intelligence, on the contrary, is easily improved by methods known to every competent educator. Therefore, until some method of teaching virtue has been discovered, progress will have to be sought by improvement of intelligence rather than of morals. One of the chief obstacles to intelligence is credulity, and credulity could be enormously diminished by instruction as to the prevalent forms of mendacity.
Credulity is a greater evil in the present day than it ever was before, because, owing to the growth of education, it is much easier than it used to be to spread misinformation, and, owing to democracy, the spread of misinformation is more important than in former times to the holders of power.
He concludes by considering what it would take for us to implement these two pillars of free thought — an education system that fosters critical thinking rather than conformity and a meritocratic workforce where jobs are earned based on acumen rather than ideological alignment with power structures:
It must be done by generating an enlightened public opinion. And an enlightened public opinion can only be generated by the efforts of those who desire that it should exist.