Brain Pickings

Posts Tagged ‘John Brockman’

23 FEBRUARY, 2015

This Idea Must Die: Some of the World’s Greatest Thinkers Each Select a Major Misconception Holding Us Back

From the self to left brain vs. right brain to romantic love, a catalog of broken theories that hold us back from the conquest of Truth.

“To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact,” asserted Charles Darwin in one of the eleven rules for critical thinking known as Prospero’s Precepts. If science and human knowledge progress in leaps and bounds of ignorance, then the recognition of error and the transcendence of falsehood are the springboard for the leaps of progress. That’s the premise behind This Idea Must Die: Scientific Theories That Are Blocking Progress (public library) — a compendium of answers Edge founder John Brockman collected by posing his annual question — “What scientific idea is ready for retirement?” — to 175 of the world’s greatest scientists, philosophers, and writers. Among them are Nobel laureates, MacArthur geniuses, and celebrated minds like theoretical physicist and mathematician Freeman Dyson, biological anthropologist Helen Fisher, cognitive scientist and linguist Steven Pinker, media theorist Douglas Rushkoff, philosopher Rebecca Newberger Goldstein, psychologist Howard Gardner, social scientist and technology scholar Sherry Turkle, actor and author Alan Alda, futurist and Wired founding editor Kevin Kelly, and novelist, essayist, and screenwriter Ian McEwan.

Brockman paints the backdrop for the inquiry:

Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858–1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals.

Many of the answers are redundant — but this is a glorious feature rather than a bug of Brockman’s series, for its chief reward is precisely this cumulative effect of discerning the zeitgeist of ideas with which some of our era’s greatest minds are tussling in synchronicity. They point to such retirement-ready ideas as IQ, the self, race, the left brain vs. right brain divide, human nature and essentialism, free will, and even science itself. What emerges is the very thing Carl Sagan deemed vital to truth in his Baloney Detection Kit — a “substantive debate on the evidence by knowledgeable proponents of all points of view.”

Illustration by Lizi Boyd from 'Flashlight.'

One of the most profound undercurrents across the answers has to do with our relationship with knowledge, certainty, and science itself. And one of the most profound contributions in that regard comes from MacArthur fellow Rebecca Newberger Goldstein, a philosopher who thinks deeply and dimensionally about some of the most complex questions of existence. Assailing the idea that science makes philosophy obsolete — that science is the transformation of “philosophy’s vagaries into empirically testable theories” and philosophy merely the “cold-storage room in which questions are shelved until the sciences get around to handling them” — Goldstein writes:

The obsolescence of philosophy is often taken to be a consequence of science. After all, science has a history of repeatedly inheriting — and definitively answering — questions over which philosophers have futilely hemmed and hawed for unconscionable amounts of time.

The gravest problem with this theory, Goldstein notes, is its internal incoherence:

You can’t argue for science making philosophy obsolete without indulging in philosophical arguments… When pressed for an answer to the so-called demarcation problem, scientists almost automatically reach for the notion of “falsifiability” first proposed by Karl Popper. His profession? Philosophy. But whatever criterion you offer, its defense is going to implicate you in philosophy.

This is something that Dorion Sagan, Carl Sagan’s son, has previously addressed, but Goldstein brings to it unparalleled elegance of thought and eloquence of expression:

A triumphalist scientism needs philosophy to support itself. And the lesson here should be generalized. Philosophy is joined to science in reason’s project. Its mandate is to render our views and our attitudes maximally coherent.

In doing so, she argues, philosophy provides “the reasoning that science requires in order to claim its image as descriptive.” As a proponent of the vital difference between information and wisdom — the former being the material of science, the latter the product of philosophy, and knowledge the change agent that transmutes one into the other — I find the provocative genius of Goldstein’s conclusion enormously invigorating:

What idea should science retire? The idea of “science” itself. Let’s retire it in favor of the more inclusive “knowledge.”

Neuroscientist Sam Harris, author of the indispensable Waking Up: A Guide to Spirituality Without Religion, echoes this by choosing our narrow definition of “science” as the idea to be put to rest:

Search your mind, or pay attention to the conversations you have with other people, and you’ll discover that there are no real boundaries between science and philosophy — or between those disciplines and any other that attempts to make valid claims about the world on the basis of evidence and logic. When such claims and their methods of verification admit of experiment and/or mathematical description, we tend to say our concerns are “scientific”; when they relate to matters more abstract, or to the consistency of our thinking itself, we often say we’re being “philosophical”; when we merely want to know how people behaved in the past, we dub our interests “historical” or “journalistic”; and when a person’s commitment to evidence and logic grows dangerously thin or simply snaps under the burden of fear, wishful thinking, tribalism, or ecstasy, we recognize that he’s being “religious.”

The boundaries between true intellectual disciplines are currently enforced by little more than university budgets and architecture… The real distinction we should care about — the observation of which is the sine qua non of the scientific attitude — is between demanding good reasons for what one believes and being satisfied with bad ones.

In a sentiment that calls to mind both Richard Feynman’s spectacular ode to a flower and Carl Sagan’s enduring wisdom on our search for meaning, Harris applies this model of knowledge to one of the great mysteries of science and philosophy — consciousness:

Even if one thinks the human mind is entirely the product of physics, the reality of consciousness becomes no less wondrous, and the difference between happiness and suffering no less important. Nor does such a view suggest that we’ll ever find the emergence of mind from matter fully intelligible; consciousness may always seem like a miracle. In philosophical circles, this is known as “the hard problem of consciousness” — some of us agree that this problem exists, some of us don’t. Should consciousness prove conceptually irreducible, remaining the mysterious ground for all we can conceivably experience or value, the rest of the scientific worldview would remain perfectly intact.

The remedy for all this confusion is simple: We must abandon the idea that science is distinct from the rest of human rationality. When you are adhering to the highest standards of logic and evidence, you are thinking scientifically. And when you’re not, you’re not.

Illustration from 'Once Upon an Alphabet' by Oliver Jeffers.

Psychologist Susan Blackmore, who studies this very problem — famously termed “the hard problem of consciousness” by philosopher David Chalmers in 1996 — benches the idea that there are neural correlates to consciousness. Tempting as it may be to interpret neural activity as the wellspring of that special something we call “consciousness” or “subjective experience,” as opposed to the “unconscious” rest of the brain, Blackmore admonishes that such dualism is past its cultural expiration date:

Dualist thinking comes naturally to us. We feel as though our conscious experiences were of a different order from the physical world. But this is the same intuition that leads to the hard problem seeming hard. It’s the same intuition that produces the philosopher’s zombie — a creature identical to me in every way except that it has no consciousness. It’s the same intuition that leads people to write, apparently unproblematically, about brain processes being either conscious or unconscious… Intuitively plausible as it is, this is a magic difference. Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it’s an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents, and so build up the false idea of a persisting self that has consciousness and free will.

Much of the allure of identifying such neural correlates of consciousness, Blackmore argues, lies in cultural mythologies rooted in fantasy rather than fact:

While people are awake they must always be conscious of something or other. And that leads along the slippery path to the idea that if we knew what to look for, we could peer inside someone’s brain and find out which processes were the conscious ones and which the unconscious ones. But this is all nonsense. All we’ll ever find are the neural correlates of thoughts, perceptions, memories, and the verbal and attentional processes that lead us to think we’re conscious.

When we finally have a better theory of consciousness to replace these popular delusions, we’ll see that there’s no hard problem, no magic difference, and no NCCs.

Illustration by Rob Hunter from 'A Graphic Cosmogony.'

In a related grievance, social psychologist Bruce Hood — author of the uncomfortable yet strangely comforting The Self Illusion — does away with the notion of the self. Half a century after Alan Watts enlisted Eastern philosophy in this mission, Hood presents a necessary integration of science and philosophy:

It seems almost redundant to call for the retirement of the free willing self, as the idea is neither scientific nor is this the first time the concept has been dismissed for lack of empirical support. The self did not have to be discovered; it’s the default assumption most of us experience, so it wasn’t really revealed by methods of scientific inquiry.

[…]

Yet the self, like a conceptual zombie, refuses to die. It crops up again and again in recent theories of decision making, as an entity with free will which can be depleted. It reappears as an interpreter in cognitive neuroscience, as able to integrate parallel streams of information arising from separable neural substrates. Even if these appearances of the self are understood to be convenient ways of discussing the emergent output of multiple parallel processes, students of the mind continue to implicitly endorse the idea that there’s a decision maker, an experiencer, a point of origin.

We know the self is constructed because it can be so easily deconstructed — through damage, disease, and drugs. It must be an emergent property of a parallel system processing input, output, and internal representations. It’s an illusion because it feels so real, but that experience is not what it seems. The same is true for free will. Although we can experience the mental anguish of making a decision… the choices and decisions we make are based on situations that impose on us. We don’t have the free will to choose the experiences that have shaped our decisions.

[…]

By abandoning the free willing self, we’re forced to reexamine the factors that are truly behind our thoughts and behavior and the way they interact, balance, override, and cancel out. Only then will we begin to make progress in understanding how we really operate.

Illustration by Ben Newman from 'A Graphic Cosmogony.'

Among the most provocative answers, in fact, is one examining the factors that underlie one of the most complex and seemingly human of our experiences: love. Biological anthropologist Helen Fisher, who studies the brain on love, points to romantic love and addiction as two concepts in need of serious reformulation and reframing — one best accomplished by understanding the intersection of the two. Fisher argues that we ought to broaden the definition of addiction and do away with science’s staunch notion that all addiction is harmful. Love, she argues with a wealth of neurobiological evidence in hand, is in fact a state that closely resembles that of addiction in terms of what happens in the brain during it — and yet love, anguishing as it may be at times, is universally recognized as the height of positive experience. In that respect, it presents a case of “positive addiction.” Fisher writes:

Love-besotted men and women show all the basic symptoms of addiction. Foremost, the lover is stiletto-focused on his/her drug of choice, the love object. The lover thinks obsessively about him or her (intrusive thinking), and often compulsively calls, writes, or stays in touch. Paramount in this experience is intense motivation to win one’s sweetheart, not unlike the substance abuser fixated on the drug. Impassioned lovers distort reality, change their priorities and daily habits to accommodate the beloved, experience personality changes (affect disturbance), and sometimes do inappropriate or risky things to impress this special other. Many are willing to sacrifice, even die for, “him” or “her.” The lover craves emotional and physical union with the beloved (dependence). And like addicts who suffer when they can’t get their drug, the lover suffers when apart from the beloved (separation anxiety). Adversity and social barriers even heighten this longing (frustration attraction).

In fact, besotted lovers express all four of the basic traits of addiction: craving, tolerance, withdrawal, and relapse. They feel a “rush” of exhilaration when they’re with their beloved (intoxication). As their tolerance builds, they seek to interact with the beloved more and more (intensification). If the love object breaks off the relationship, the lover experiences signs of drug withdrawal, including protest, crying spells, lethargy, anxiety, insomnia or hypersomnia, loss of appetite or binge eating, irritability, and loneliness. Lovers, like addicts, also often go to extremes, sometimes doing degrading or physically dangerous things to win back the beloved. And lovers relapse the way drug addicts do. Long after the relationship is over, events, people, places, songs, or other external cues associated with their abandoning sweetheart can trigger memories and renewed craving.

Fisher points to fMRI studies that have shown intense romantic love to trigger the brain’s reward system and the dopamine pathways responsible for “energy, focus, motivation, ecstasy, despair, and craving,” as well as the brain regions most closely associated with addiction and substance abuse. In shedding light on the neurochemical machinery of romantic love, Fisher argues, science reveals it to be a “profoundly powerful, natural, often positive addiction.”

Illustration by Christine Rösch from 'The Mathematics of Love.'

Astrophysicist Marcelo Gleiser, who has written beautifully about the necessary duality of knowledge and mystery, wants to do away with “the venerable notion of Unification.” He points out that smaller acts of unification and simplification are core to the scientific process — from the laws of thermodynamics to Newton’s law of universal gravitation — but simplification as sweeping as reducing the world to a single Theory of Everything is misplaced:

The trouble starts when we take this idea too far and search for the Über-unification, the Theory of Everything, the arch-reductionist notion that all forces of nature are merely manifestations of a single force. This is the idea that needs to go.

Noting that at some point along the way, “math became equated with beauty and beauty with truth,” Gleiser writes:

The impulse to unify it all runs deep in the souls of mathematicians and theoretical physicists, from the Langlands program to superstring theory. But here’s the rub: Pure mathematics isn’t physics. The power of mathematics comes precisely from its detachment from physical reality. A mathematician can create any universe she wants and play all sorts of games with it. A physicist can’t; his job is to describe nature as we perceive it. Nevertheless, the unification game has been an integral part of physics since Galileo and has produced what it should: approximate unifications.

And yet this unification game, as integral as it may be to science, is also antithetical to it in the long run:

The scientific impulse to unify is crypto-religious… There’s something deeply appealing in equating all of nature to a single creative principle: To decipher the “mind of God” is to be special, is to answer to a higher calling. Pure mathematicians who believe in the reality of mathematical truths are monks of a secret order, open only to the initiated. In the case of high energy physics, all unification theories rely on sophisticated mathematics related to pure geometric structures: The belief is that nature’s ultimate code exists in the ethereal world of mathematical truths and that we can decipher it.

Echoing Richard Feynman’s spectacular commencement address admonishing against “cargo cult science,” Gleiser adds:

Recent experimental data has been devastating to such belief — no trace of supersymmetric particles, of extra dimensions, or of dark matter of any sort, all long-awaited signatures of unification physics. Maybe something will come up; to find, we must search. The trouble with unification in high energy physics is that you can always push it beyond the experimental range. “The Large Hadron Collider got to 7 TeV and found nothing? No problem! Who said nature should opt for the simplest versions of unification? Maybe it’s all happening at much higher energies, well beyond our reach.”

There’s nothing wrong with this kind of position. You can believe it until you die, and die happy. Or you can conclude that what we do best is construct approximate models of how nature works and that the symmetries we find are only descriptions of what really goes on. Perfection is too hard a burden to impose on nature.

People often see this kind of argument as defeatist, as coming from someone who got frustrated and gave up. (As in “He lost his faith.”) Big mistake. To search for simplicity is essential to what scientists do. It’s what I do. There are essential organizing principles in nature, and the laws we find are excellent ways to describe them. But the laws are many, not one. We’re successful pattern-seeking rational mammals. That alone is cause for celebration. However, let’s not confuse our descriptions and models with reality. We may hold perfection in our mind’s eye as a sort of ethereal muse. Meanwhile nature is out there doing its thing. That we manage to catch a glimpse of its inner workings is nothing short of wonderful. And that should be good enough.

Ceramic tile by Debbie Millman courtesy of the artist

Science writer Amanda Gefter takes issue with one particular manifestation of our propensity for oversimplification — the notion of the universe. She writes:

Physics has a time-honored tradition of laughing in the face of our most basic intuitions. Einstein’s relativity forced us to retire our notions of absolute space and time, while quantum mechanics forced us to retire our notions of pretty much everything else. Still, one stubborn idea has stood steadfast through it all: the universe.

[…]

In recent years, however, the concept of a single shared spacetime has sent physics spiraling into paradox. The first sign that something was amiss came from Stephen Hawking’s landmark work in the 1970s showing that black holes radiate and evaporate, disappearing from the universe and purportedly taking some quantum information with them. Quantum mechanics, however, is predicated upon the principle that information can never be lost.

Gefter points to recent breakthroughs in physics that produced one particularly puzzling such paradox, known as the “firewall paradox,” solved by the idea that spacetime is divided not by horizons but by the reference frames of the observers, “as if each observer had his or her own universe.”

But the solution isn’t a multiverse theory:

Yes, there are multiple observers, and yes, any observer’s universe is as good as any other’s. But if you want to stay on the right side of the laws of physics, you can talk only about one at a time. Which means, really, that only one exists at a time. It’s cosmic solipsism.

Here, psychology, philosophy, and cosmology converge, for what such theories suggest is what we already know about the human psyche — as I’ve put it elsewhere, the stories that we tell ourselves, whether they be false or true, are always real. Gefter concludes:

Adjusting our intuitions and adapting to the strange truths uncovered by physics is never easy. But we may just have to come around to the notion that there’s my universe and there’s your universe — but there’s no such thing as the universe.

Biological anthropologist Nina Jablonski points to the notion of race as urgently retirement-ready. Pointing out that it has always been a “vague and slippery concept,” she traces its origins to Hume and Kant — the first to divide humanity into geographic groupings called “races” — and the pseudoscientific seeds of racism this division planted:

Skin color, as the most noticeable racial characteristic, was associated with a nebulous assemblage of opinions and hearsay about the inherent natures of the various races. Skin color stood for morality, character, and the capacity for civilization; it became a meme.

Even though the atrocious “race science” that emerged in the 19th and early 20th century didn’t hold up — whenever scientists looked for actual sharp boundaries between groups, none came up — and race came to be something people identify themselves with as a shared category of experiences and social bonds, Jablonski argues that the toxic aftershocks of pseudoscience still poison culture:

Even after it has been shown that many diseases (adult-onset diabetes, alcoholism, high blood pressure, to name a few) show apparent racial patterns because people share similar environmental conditions, groupings by race are maintained. The use of racial self-categorization in epidemiological studies is defended and even encouraged. Medical studies of health disparities between “races” become meaningless when sufficient variables — such as differences in class, ethnic social practices, and attitudes — are taken into account.

Half a century after the ever-prescient Margaret Mead made the same point, Jablonski urges:

Race has a hold on history but no longer has a place in science. The sheer instability and potential for misinterpretation render race useless as a scientific concept. Inventing new vocabularies to deal with human diversity and inequity won’t be easy, but it must be done.

Psychologist Jonathan Gottschall, who has previously explored why storytelling is so central to the human experience, argues against the notion that there can be no science of art. With an eye to our civilization’s long struggle to define art, he writes:

We don’t even have a good definition, in truth, for what art is. In short, there’s nothing so central to human life that’s so incompletely understood.

Granted, Gottschall is only partly right, for there are some excellent definitions of art — take, for instance, Jeanette Winterson’s or Leo Tolstoy’s — but the fact that they don’t come from scientists only speaks to his larger point. He argues that rather than being unfit to shed light on the role of art in human life, science simply hasn’t applied itself to the problem adequately:

Scientific work in the humanities has mainly been scattered, preliminary, and desultory. It doesn’t constitute a research program.

If we want better answers to fundamental questions about art, science must jump into the game with both feet. Going it alone, humanities scholars can tell intriguing stories about the origins and significance of art, but they don’t have the tools to patiently winnow the field of competing ideas. That’s what the scientific method is for — separating the more accurate stories from the less accurate stories. But a strong science of art will require both the thick, granular expertise of humanities scholars and the clever hypothesis-testing of scientists. I’m not calling for a scientific takeover of the arts, I’m calling for a partnership.

[…]

The Delphic admonition “Know thyself” still rings out as the great prime directive of intellectual inquiry, and there will always be a gaping hole in human self-knowledge until we develop a science of art.

In a further testament to the zeitgeist-illuminating nature of the project, actor, author, and science-lover Alan Alda makes a passionate case for the same concept:

The trouble with truth is that not only is the notion of eternal, universal truth highly questionable, but simple, local truths are subject to refinement as well. Up is up and down is down, of course. Except under special circumstances. Is the North Pole up and the South Pole down? Is someone standing at one of the poles right-side up or upside-down? Kind of depends on your perspective.

When I studied how to think in school, I was taught that the first rule of logic was that a thing cannot both be and not be at the same time and in the same respect. That last note, “in the same respect,” says a lot. As soon as you change the frame of reference, you’ve changed the truthiness of a once immutable fact.

[…]

This is not to say that nothing is true or that everything is possible — just that it might not be so helpful for things to be known as true for all time, without a disclaimer… I wonder — and this is just a modest proposal — whether scientific truth should be identified in a way acknowledging that it’s something we know and understand for now, and in a certain way.

[…]

Facts, it seems to me, are workable units, useful in a given frame or context. They should be as exact and irrefutable as possible, tested by experiment to the fullest extent. When the frame changes, they don’t need to be discarded as untrue but respected as still useful within their domain. Most people who work with facts accept this, but I don’t think the public fully gets it.

That’s why I hope for more wariness about implying we know something to be true or false for all time and for everywhere in the cosmos.

Illustration from 'Once Upon an Alphabet' by Oliver Jeffers.

And indeed this elasticity of truth across time is at the heart of what I find to be the most beautiful and culturally essential contribution to the collection. As someone who believes that the stewardship of enduring ideas is at least as important as the genesis of new ones — not only because past ideas are the combinatorial building blocks of future ones but also because in order to move forward we always need a backdrop against which to paint the contrast of progress and improvement — I was most bewitched by writer Ian McEwan’s admonition against the arrogance of retiring any idea as an impediment to progress:

Beware of arrogance! Retire nothing! A great and rich scientific tradition should hang onto everything it has. Truth is not the only measure. There are ways of being wrong that help others to be right. Some are wrong, but brilliantly so. Some are wrong but contribute to method. Some are wrong but help found a discipline. Aristotle ranged over the whole of human knowledge and was wrong about much. But his invention of zoology alone was priceless. Would you cast him aside? You never know when you might need an old idea. It could rise again one day to enhance a perspective the present cannot imagine. It would not be available to us if it were fully retired.

To appreciate McEwan’s point, one need only look at something like Bertrand Russell’s timely thoughts on boredom, penned in 1930 and yet astoundingly resonant with our present anxieties about the societal side effects of current technology. McEwan captures this beautifully:

Every last serious and systematic speculation about the world deserves to be preserved. We need to remember how we got to where we are, and we’d like the future not to retire us. Science should look to literature and maintain a vibrant living history as a monument to ingenuity and persistence. We won’t retire Shakespeare. Nor should we Bacon.

Complement This Idea Must Die, the entirety of which weaves a mind-stretching mesh of complementary and contradictory perspectives on our relationship with knowledge, with some stimulating answers to previous editions of Brockman’s annual question, exploring the only thing worth worrying about (2013), the single most elegant theory of how the world works (2012), and the best way to make ourselves smarter (2011).


23 JULY, 2014

The Universe, “Branes,” and the Science of Multiple Dimensions

How a needle, a shower curtain, and a New England clam explain the possibility of parallel universes.

“The mystery of being is a permanent mystery,” John Updike once observed in pondering why the universe exists, and yet of equal permanence is the allure this mystery exerts upon the scientists, philosophers, and artists of any given era. The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos (public library | IndieBound) collects twenty-one illuminating, mind-expanding meditations on various aspects of that mystery, from multiple dimensions to quantum monkeys to why the universe looks the way it does, by some of the greatest scientific thinkers of our time. It is the fourth installment in an ongoing series by Edge editor John Brockman, following Thinking (2013), Culture (2011), and The Mind (2011).

In one of the essays, theoretical physicist Leonard Susskind marvels at the unique precipice we’re fortunate to witness:

The beginning of the 21st century is a watershed in modern science, a time that will forever change our understanding of the universe. Something is happening which is far more than the discovery of new facts or new equations. This is one of those rare moments when our entire outlook, our framework for thinking, and the whole epistemology of physics and cosmology are suddenly undergoing real upheaval. The narrow 20th-century view of a unique universe, about 10 billion years old and 10 billion light years across with a unique set of physical laws, is giving way to something far bigger and pregnant with new possibilities.

Gradually physicists and cosmologists are coming to see our ten billion light years as an infinitesimal pocket of a stupendous megaverse.

Here, an inevitable note on a different kind of human narrowness: I am not one to advocate for a blind quota-filling approach, where there must be equal representation on all levels at all cost. And yet it’s rather disappointing to see only one female scientist alongside her twenty-two male peers. (One of the twenty-one essays has three authors.) To be sure, Edge itself is far from gender-balanced — one could rationalize that this is simply the state of science still — but the site’s vast archive, spanning fifteen years of conversations and essays, does feature a number of female scientists, which renders the 5% female representation in this collection editorially lamentable.

This gender gap lends double meaning to Susskind’s reflections on the progress of science in the twenty-first century as he notes: “Man’s place in the universe is also being reexamined and challenged.” Woman’s, evidently, is not.

And yet, it’s perhaps not coincidental that the sole female contributor is none other than Harvard’s Lisa Randall, one of the most influential theoretical physicists of our time, and her essay is the most intensely interesting in the entire collection. (Perchance Brockman considered its weighted quotient equal to several of the male essays combined. No, not really, but when the skies of equality get particularly cloudy, what is one to do but squint for silver linings?)

Lisa Randall (Photograph: Phil Knott)

Randall’s essay explores her work on the physics of extra dimensions of space, particularly the concept of “branes” — membrane-like objects that have fewer dimensions than the space they inhabit. (Randall illustrates this with the visual metaphor of a shower curtain, “virtually a two-dimensional object in a three-dimensional space.”) To understand why branes matter — more than that, why they are so infinitely interesting — we first need a primer on the physics of what is known as the “TeV scale.” Randall explains:

Particle physicists measure energy in units of electron volts. “TeV” means “a trillion electron volts.” This is a very high energy and challenges the limits of current technology, but it’s low from the perspective of quantum gravity, whose consequences are likely to show up only at energies 16 orders of magnitude higher. This energy scale is interesting, because we know that the as-yet-undiscovered part of the theory associated with giving elementary particles their masses should be found there… Back at the very beginning, the entire universe could have been squeezed to the size of an elementary particle. Quantum fluctuations could shake the entire universe, and there would be an essential link between cosmology and the microworld.
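To make those scales concrete, here is a minimal back-of-the-envelope sketch in Python (not from the book; it assumes the standard textbook value of the Planck energy, roughly 1.22 × 10¹⁹ GeV, the energy at which quantum-gravity effects are expected to appear) showing where the figure of 16 orders of magnitude comes from:

    import math

    # Illustrative arithmetic only; the Planck energy is a standard textbook
    # value (~1.22e19 GeV), not a number quoted in Randall's essay.
    TEV_IN_EV = 1e12                      # 1 TeV = 10^12 electron volts
    PLANCK_ENERGY_IN_EV = 1.22e19 * 1e9   # ~1.22e19 GeV, converted to eV

    gap = math.log10(PLANCK_ENERGY_IN_EV / TEV_IN_EV)
    print(f"The Planck scale sits about {gap:.0f} orders of magnitude above the TeV scale")

Run as written, the sketch reports a gap of about 16 orders of magnitude, which is the gulf Randall is describing between the energies our accelerators can reach and the energies at which quantum gravity becomes unavoidable.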

This ghostly playground of particles raises the question of whether “space and time are so complicated and screwed up that we can’t really talk about a beginning in time” — which brings us to string theory and its peculiar predicament. Randall writes:

The one thing that’s rather unusual about string theory from the viewpoint of the sociology and history of science is that it’s one of the few instances where physics has been held up by a lack of the relevant mathematics. In the past, physicists have generally taken fairly old-fashioned mathematics off the shelf. Einstein used 19th-century non-Euclidean geometry, and the pioneers in quantum theory used group theory and differential equations that had essentially been worked out long beforehand. But string theory poses mathematical problems that aren’t yet solved, and has actually brought math and physics closer together.

String theory is the dominant approach right now, and it has some successes already, but the question is whether it will develop to the stage where we can actually solve problems that can be tested observationally. If we can’t bridge the gap between this ten-dimensional theory and anything that we can observe, it will grind to a halt. In most versions of string theory, the extra dimensions above the normal three are all wrapped up very tightly, so that each point in our ordinary space is like a tightly wrapped origami in six dimensions. We see just three dimensions; the rest are invisible to us because they are wrapped up very tightly. If you look at a needle, it looks like a one-dimensional line from a long distance, but really it’s three-dimensional. Likewise, the extra dimensions could be seen if you looked at things very closely. Space on a very tiny scale is grainy and complicated — its smoothness is an illusion of the large scale. That’s the conventional view in these string theories.

The Cat's Eye Nebula, from 'Hubble: Imaging Space and Time.'

This is where Randall’s work on branes comes in as a promising contender for a better solution. She writes:

According to this theory, there could be other universes, perhaps separated from ours by just a microscopic distance; however, that distance is measured in some fourth spatial dimension, of which we are not aware. Because we are imprisoned in our three dimensions, we can’t directly detect these other universes. It’s rather like a whole lot of bugs crawling around on a big two-dimensional sheet of paper, who would be unaware of another set of bugs that might be crawling around on another sheet of paper that could be only a short distance away in the third dimension.

Of course, the concepts of multiple dimensions and parallel universes are far from new and can be traced as far back as another trailblazing woman in scientific thought, Margaret Cavendish, Duchess of Newcastle — her 1666 book The Blazing World features a heroine who passes into a world with different stars through a space-time portal near the North Pole.

Randall takes us into her own time machine to trace the history of multiple dimensions in contextualizing what makes branes so special:

People entertained the idea of extra dimensions before string theory came along, although such speculations were soon forgotten or ignored. It’s natural to ask what would happen if there were different dimensions of space; after all, the fact that we see only three spatial dimensions doesn’t necessarily mean that only three exist, and Einstein’s general relativity doesn’t treat a three-dimensional universe preferentially. There could be many unseen ingredients to the universe. However, it was first believed that if additional dimensions existed they would have to be very small in order to have escaped our notice. The standard supposition in string theory was that the extra dimensions were curled up into incredibly tiny scales — 10⁻³³ centimeters, the so-called Planck length and the scale associated with quantum effects becoming relevant. In that sense, this scale is the obvious candidate: If there are extra dimensions, which are obviously important to gravitational structure, they’d be characterized by this particular distance scale. But if so, there would be very few implications for our world. Such dimensions would have no impact whatsoever on anything we see or experience.

[…]

Branes are special, particularly in the context of string theory, because there’s a natural mechanism to confine particles to the brane; thus not everything need travel in the extra dimensions, even if those dimensions exist. Particles confined to the brane would have momentum and motion only along the brane, like water spots on the surface of your shower curtain. Branes allow for an entirely new set of possibilities in the physics of extra dimensions, because particles confined to the brane would look more or less as they would in a three-plus-one-dimension world; they never venture beyond it. Protons, electrons, quarks, all sorts of fundamental particles could be stuck on the brane. In that case, you may wonder why we should care about extra dimensions at all, since despite their existence the particles that make up our world do not traverse them. However, although all known standard-model particles stick to the brane, this is not true of gravity. The mechanisms for confining particles and forces mediated by the photon or the electroweak gauge bosons to the brane do not apply to gravity. Gravity, according to the theory of general relativity, must necessarily exist in the full geometry of space. Furthermore, a consistent gravitational theory requires that the graviton, the particle that mediates gravity, has to couple to any source of energy, whether that source is confined to the brane or not. Therefore, the graviton would also have to be out there in the region encompassing the full geometry of higher dimensions—a region known as the bulk—because there might be sources of energy there. Finally, there’s a string-theory explanation of why the graviton is not stuck to any brane: The graviton is associated with the closed string, and only open strings can be anchored to a brane.

Meanwhile, scientists haven’t studied gravity as intensely as they have other particles, largely because gravity is an extremely weak force. (It might not seem so every time you trip and fall, but as Randall points out, that’s because the entire Earth is pulling you down at that moment, whereas “the result of coupling an individual graviton to an individual particle is quite small.”) What makes branes especially intriguing is that including them into string theory allows us to contemplate, to use Randall’s technical term, “crazily large extra dimensions.” These, in turn, might explain why gravity is so weak — if its force is spread out across these gigantic dimensions, no wonder it would be this diluted on any one brane.
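The dilution idea can be written down with one piece of standard textbook reasoning (this is the generic “large extra dimensions” picture, sketched here for illustration rather than quoted from Randall’s essay): by Gauss’s law, gravity spreads into every spatial dimension available to it, so with n extra dimensions of size R the force between two masses falls off faster than usual at short range and only looks Newtonian, with a diluted effective strength, at distances much larger than R:

    F(r) \propto \frac{m_1 m_2}{r^{2+n}} \quad (r \ll R), \qquad
    F(r) \propto \frac{m_1 m_2}{R^{n} r^{2}} \quad (r \gg R), \qquad
    \text{so} \quad G_{\mathrm{eff}} \sim \frac{G_{4+n}}{R^{n}}

The larger the extra dimensions, the more the gravitational flux is spread out and the weaker gravity appears on our brane; the warped-geometry mechanism Randall describes next reaches a similar conclusion by a different route.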

But it gets even more interesting — citing her work with Johns Hopkins scientist Raman Sundrum, Randall writes:

A more natural explanation for the weakness of gravity could be the direct result of the gravitational attraction associated with the brane itself. In addition to trapping particles, branes carry energy. We showed that from the perspective of general relativity this means that the brane curves the space around it, changing gravity in its vicinity. When the energy in space is correlated with the energy on the brane so that a large flat three-dimensional brane sits in the higher-dimensional space, the graviton — the particle communicating the gravitational force — is highly attracted to the brane. Rather than spreading uniformly in an extra dimension, gravity stays localized, very close to the brane.

René Descartes's 1644 model of the universe, from 'The Book of Trees.'

Randall’s discoveries get even more mind-bending. Outlining a finding that calls to mind the legendary Victorian allegory Flatland: A Romance of Many Dimensions (which in turn inspired Norton Juster’s brilliant 1963 book and film The Dot and the Line: A Romance in Lower Mathematics), she writes:

Conventionally, it was thought that extra dimensions must be curled up or bounded between two branes, or else we would observe higher-dimensional gravity. The aforementioned second brane appeared to serve two purposes: It explained the hierarchy problem because of the small probability for the graviton to be there, and it was also responsible for bounding the extra dimension so that at long distances, bigger than the dimension’s size, only three dimensions are seen.

The concentration of the graviton near the Planck brane can, however, have an entirely different implication. If we forget the hierarchy problem for the moment, the second brane is unnecessary. That is, even if there’s an infinite extra dimension and we live on the Planck brane in this infinite dimension, we wouldn’t know about it. In this “warped geometry,” as the space with exponentially decreasing graviton amplitude is known, we would see things as if this dimension did not exist and the world were only three-dimensional.

[…]

Because the graviton makes only infrequent excursions into the bulk, a second brane or a curled-up dimension isn’t necessary to get a theory that describes our three-dimensional world, as had previously been thought. We might live on the Planck brane and address the hierarchy problem in some other manner—or we might live on a second brane out in the bulk, but this brane would not be the boundary of the now infinite space. It doesn’t matter that the graviton occasionally leaks away from the Planck brane; it’s so highly localized there that the Planck brane essentially mimics a world of three dimensions, as though an extra dimension didn’t exist at all. A four-spatial-dimensions world, say, would look almost identical to one with three spatial dimensions. Thus all the evidence we have for three spatial dimensions could equally well be evidence for a theory in which there are four spatial dimensions of infinite extent.
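For readers who want to see the “warped geometry” in symbols: the metric from Randall and Sundrum’s published work (reproduced here for illustration; it is not spelled out in the essay itself) carries an exponential warp factor, and it is that factor which keeps the graviton concentrated near the Planck brane:

    ds^{2} = e^{-2k|y|} \, \eta_{\mu\nu} \, dx^{\mu} dx^{\nu} + dy^{2}

Here y is the coordinate along the extra dimension, k sets the curvature of the higher-dimensional bulk, and the graviton’s amplitude falls off exponentially with |y| away from the brane at y = 0, which is why even an infinite extra dimension can remain effectively invisible to observers living on the brane.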

So why does any of this matter, this “exciting but frustrating game” of speculation, as Randall elegantly puts it? For one thing, there might be subtle but important differences between these different dimensions and different worlds — for instance, black holes may not behave the same way in each of them. If energy leaks off a brane, a black hole might spit out particles into an extra dimension as it perishes. (If you’ve ever steamed a New England clam, you may have noticed it “spitting” water at you in its final moments — perhaps this is somewhat akin to what Randall describes.) Most importantly, multiple dimensions offer endless possibilities for the very structure of space. Randall writes:

There can be different numbers of dimensions and there might be arbitrary numbers of branes contained within. Branes don’t even all have to be three-plus-one-dimensional; maybe there are other dimensions of branes in addition to those that look like ours and are parallel to ours. This presents an interesting question about the global structure of space, since how space evolves with time would be different in the context of the presence of many branes. It’s possible that there are all sorts of forces and particles we don’t know about that are concentrated on branes and can affect cosmology.

Lisa Randall

So where does this leave us? Randall echoes Marie Curie’s famous words upon receiving her second graduate degree — a sentiment no doubt common to any great scientist who understands that not-knowing is the currency of meaningful work — and concludes:

In general, the problems that get solved, although they seem very complicated, are in many ways simple problems. There’s much more work to be done; exciting discoveries await, and they will have implications for other fields… It’s my hope that time and experiments will distinguish among the possibilities.

Randall’s essay is a spectacular, mind-bending read in its entirety, as are the rest of the contributions in The Universe. Complement it with Brockman’s compendium of leading scientists’ selections of the most elegant theory of how the world works and the single most important concept to make you smarter.


11 FEBRUARY, 2014

Big Thinkers on the Only Things Worth Worrying About

A cross-disciplinary kaleidoscope of intelligent concerns for the self and the species.

In his famous and wonderfully heartening letter of fatherly advice, F. Scott Fitzgerald gave his young daughter Scottie a list of things to worry and not worry about in life. Among the unworriables, he named popular opinion, the past, the future, triumph, and failure “unless it comes through your own fault.” Among the worry-worthy, courage, cleanliness, and efficiency. What Fitzgerald touched on, of course, is the quintessential anxiety of the human condition, which drives us to worry about things big and small, mundane and monumental, often confusing the two classes. It was this “worryability” that young Italo Calvino resolved to shake from his life. A wonderful 1934 book classified all of our worries in five general categories that endure with astounding prescience and precision, but we still struggle to identify the things truly worth worrying about — and, implicitly, working to resolve — versus those that only strain our psychoemotional capacity with the deathly grip of anxiety.

'My Wheel of Worry' by Andrew Kuo, depicting his inner worries, arguments, counterarguments, and obsessions in the form of charts and graphs.


In What Should We Be Worried About? (public library), intellectual jockey and Edge founder John Brockman tackles this issue with his annual question — which has previously answered such conundrums as the single most elegant theory of how the world works (2012) and the best way to make ourselves smarter (2011) — and asks some of our era’s greatest thinkers in science, psychology, technology, philosophy, and more to each contribute one valid “worry” about our shared future. Rather than alarmist anxiety-slinging, however, the ethos of the project is quite the opposite — to put in perspective the things we worry about but shouldn’t, whether by our own volition or thanks to ample media manipulation, and contrast them with issues of actual concern, at which we ought to aim our collective attention and efforts in order to ensure humanity’s progress and survival.

Behavioral neuroscientist Kate Jeffery offers one of the most interesting answers, reminiscent of Alan Watts’s assertion that “without birth and death … the world would be static, rhythm-less, undancing, mummified,” exploring our mortality paradox and pointing to the loss of death as a thing to worry about:

Every generation our species distills the best of itself, packages it up and passes it on, shedding the dross and creating a fresher, newer, shinier generation. We have been doing this now for four billion years, and in doing so have transmogrified from unicellular microorganisms that do little more than cling to rocks and photosynthesize, to creatures of boundless energy and imagination who write poetry, make music, love each other and work hard to decipher the secrets of themselves and their universe.

And then they die.

Death is what makes this cyclical renewal and steady advance in organisms possible. Discovered by living things millions of years ago, aging and death permit a species to grow and flourish. Because natural selection ensures that the child-who-survives-to-reproduce is better than the parent (albeit infinitesimally so, for that is how evolution works), it is better for many species that the parent step out of the way and allow its (superior) child to succeed in its place. Put more simply, death stops a parent from competing with its children and grandchildren for the same limited resources. So important is death that we have, wired into our genes, a self-destruct senescence program that shuts down operations once we have successfully reproduced, so that we eventually die, leaving our children—the fresher, newer, shinier versions of ourselves—to carry on with the best of what we have given them: the best genes, the best art, and the best ideas. Four billion years of death has served us well.

Now, all this may be coming to an end, for one of the things we humans, with our evolved intelligence, are working hard at is trying to eradicate death. This is an understandable enterprise, for nobody wants to die—genes for wanting to die rarely last long in a species. For millennia, human thinkers have dreamed of conquering old age and death: the fight against it permeates our art and culture, and much of our science. We personify death as a specter and loathe it, fear it and associate it with all that is bad in the world. If we could conquer it, how much better life would become.

Celebrated filmmaker Terry Gilliam leans toward the philosophical with an answer somewhere between John Cage and Yoda:

I’ve given up asking questions. I merely float on a tsunami of acceptance of anything life throws at me… and marvel stupidly.

Music pioneer Brian Eno, a man of strong opinions on art and unconventional approaches to creativity, is concerned that we see politics, a force that impacts our daily lives on nearly every level, as something other people do:

Most of the smart people I know want nothing to do with politics. We avoid it like the plague — like Edge avoids it, in fact. Is this because we feel that politics isn’t where anything significant happens? Or because we’re too taken up with what we’re doing, be it Quantum Physics or Statistical Genomics or Generative Music? Or because we’re too polite to get into arguments with people? Or because we just think that things will work out fine if we let them be — that The Invisible Hand or The Technosphere will mysteriously sort them out?

Whatever the reasons for our quiescence, politics is still being done — just not by us. It’s politics that gave us Iraq and Afghanistan and a few hundred thousand casualties. It’s politics that’s bleeding the poorer nations for the debts of their former dictators. It’s politics that allows special interests to run the country. It’s politics that helped the banks wreck the economy. It’s politics that prohibits gay marriage and stem cell research but nurtures Gaza and Guantanamo.

But we don’t do politics. We expect other people to do it for us, and grumble when they get it wrong. We feel that our responsibility stops at the ballot box, if we even get that far. After that we’re as laissez-faire as we can get away with.

What worries me is that while we’re laissez-ing, someone else is faire-ing.

Barbara Strauch, science editor of The New York Times, echoes Richard Feynman’s lament about the general public’s scientific ignorance — not the good kind, but the kind that leads to the resurgence of preventable diseases — as well as the dismal state of science education. She sees oases of hope in that desert of ignorance but finds the disconnect worrisome:

Something quite serious has been lost. . . . This decline in general-interest science coverage comes at a time of divergent directions in the general public. At one level, there seems to be increasing ignorance. After all, it’s not just science news coverage that has suffered, but also the teaching of science in schools. And we just went through a political season that saw how all this can play out, with major political figures spouting off one silly statement after another, particularly about women’s health. . . .

But something else is going on, as well. Even as we have in some pockets what seems like increasing ignorance of science, we have at the same time, a growing interest of many. It’s easy to see, from where I sit, how high that interest is. Articles about anything scientific, from the current findings in human evolution to the latest rover landing on Mars, not to mention new genetic approaches to cancer — and yes, even the Higgs boson — zoom to the top of our newspaper’s most emailed list.

We know our readers love science and cannot get enough of it. And it’s not just our readers. As the rover Curiosity approached Mars, people of all ages in all parts of the country had “Curiosity parties” to watch news of the landing. Mars parties! Social media, too, has shown us how much interest there is across the board, with YouTube videos and tweets on science often becoming instant megahits.

So what we have is a high interest and a lot of misinformation floating around. And we have fewer and fewer places that provide real information to a general audience that is understandable, at least by those of us who do not yet have our doctorates in astrophysics. The disconnect is what we should all be worried about.

Nicholas Carr, author of the techno-dystopian The Shallows: What the Internet Is Doing to Our Brains, considers the effects that digital communication might be having on our intricate internal clocks and the strange ways in which our brains warp time:

I’m concerned about time — the way we’re warping it and it’s warping us. Human beings, like other animals, seem to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones, and we can still make pretty good estimates about time intervals. But that faculty can also be easily distorted. Our perception of time is subjective; it changes with our circumstances and our experiences. When things are happening quickly all around us, delays that would otherwise seem brief begin to feel interminable. Seconds stretch out. Minutes go on forever. . . .

Given what we know about the variability of our time sense, it seems clear that information and communication technologies would have a particularly strong effect on personal time perception. After all, they often determine the pace of the events we experience, the speed with which we’re presented with new information and stimuli, and even the rhythm of our social interactions. That’s been true for a long time, but the influence must be particularly strong now that we carry powerful and extraordinarily fast computers around with us all day long. Our gadgets train us to expect near-instantaneous responses to our actions, and we quickly get frustrated and annoyed at even brief delays.

I know that my own perception of time has been changed by technology. . . .

As we experience faster flows of information online, we become, in other words, less patient people. But it’s not just a network effect. The phenomenon is amplified by the constant buzz of Facebook, Twitter, texting, and social networking in general. Society’s “activity rhythm” has never been so harried. Impatience is a contagion spread from gadget to gadget.

One of the gravest yet most lucid and important admonitions comes from classicist-turned-technologist Tim O’Reilly, who echoes Susan Sontag’s concerns about anti-intellectualism and cautions that the plague of ignorance might spread far enough to drive our civilization into extinction:

For so many in the techno-elite, even those who don’t entirely subscribe to the unlimited optimism of the Singularity, the notion of perpetual progress and economic growth is somehow taken for granted. As a former classicist turned technologist, I’ve always lived with the shadow of the fall of Rome, the failure of its intellectual culture, and the stasis that gripped the Western world for the better part of a thousand years. What I fear most is that we will lack the will and the foresight to face the world’s problems squarely, but will instead retreat from them into superstition and ignorance.

[…]

History teaches us that conservative, backward-looking movements often arise under conditions of economic stress. As the world faces problems ranging from climate change to the demographic cliff of aging populations, it’s wise to imagine widely divergent futures.

Yes, we may find technological solutions that propel us into a new golden age of robots, collective intelligence, and an economy built around “the creative class.” But it’s at least as probable that as we fail to find those solutions quickly enough, the world falls into apathy, disbelief in science and progress, and after a melancholy decline, a new dark age.

Civilizations do fail. We have never yet seen one that hasn’t. The difference is that the torch of progress has in the past always passed to another region of the world. But we’ve now, for the first time, got a single global civilization. If it fails, we all fail together.

Biological anthropologist Helen Fisher, who studies the brain on love and whose Why We Love remains indispensable, worries that we misunderstand men. She cites findings from her research that counter common misconceptions and illustrate how gender stereotypes limit us:

Men fall in love faster too — perhaps because they are more visual. Men experience love at first sight more regularly; and men fall in love just as often. Indeed, men are just as physiologically passionate. When my colleagues and I have scanned men’s brains (using fMRI), we have found that they show just as much activity as women in neural regions linked with feelings of intense romantic love. Interestingly, in the 2011 sample, I also found that when men fall in love, they are faster to introduce their new partner to friends and parents, more eager to kiss in public, and want to “live together” sooner. Then, when they are settled in, men have more intimate conversations with their wives than women do with their husbands—because women have many of their intimate conversations with their girlfriends. Last, men are just as likely to believe you can stay married to the same person forever (76% of both sexes). And other data show that after a break up, men are 2.5 times more likely to kill themselves.

[…]

In the Iliad, Homer called love “magic to make the sanest man go mad.” This brain system lives in both sexes. And I believe we’ll make better partnerships if we embrace the facts: men love — just as powerfully as women.

David Rowan, editor of Wired UK and scholar of the secrets of entrepreneurship, worries about the growing disconnect between the data-rich and the data-poor:

Each day, according to IBM, we collectively generate 2.5 quintillion bytes — a tsunami of structured and unstructured data that’s growing, in IDC’s reckoning, at 60 per cent a year. Walmart drags a million hourly retail transactions into a database that long ago passed 2.5 petabytes; Facebook processes 2.5 billion pieces of content and 500 terabytes of data each day; and Google, whose YouTube division alone gains 72 hours of new video every minute, accumulates 24 petabytes of data in a single day. . . . Certainly there are vast public benefits in the smart processing of these zetta- and yottabytes of previously unconstrained zeroes and ones. . . .

Yet as our lives are swept unstoppably into the data-driven world, such benefits are being denied to a fast-emerging data underclass. Any citizen lacking a basic understanding of, and at least minimal access to, the new algorithmic tools will increasingly be disadvantaged in vast areas of economic, political and social participation. The data disenfranchised will find it harder to establish personal creditworthiness or political influence; they will be discriminated against by stock markets and by social networks. We need to start seeing data literacy as a requisite, fundamental skill in a 21st-century democracy, and to campaign — and perhaps even to legislate — to protect the interests of those being left behind.

Some, like social and cognitive scientist Dan Sperber, go meta, admonishing that our worries about worrying are ushering in a new age of anxiety, the consequences of which are debilitating:

Worrying is an investment of cognitive resources laced with emotions from the anxiety spectrum and aimed at solving some specific problem. It has its costs and benefits, and so does not worrying. Worrying for a few minutes about what to serve for dinner in order to please one’s guests may be a sound investment of resources. Worrying about what will happen to your soul after death is a total waste. Human ancestors and other animals with foresight may have only worried about genuine and pressing problems such as not finding food or being eaten. Ever since they have become much more imaginative and have fed their imagination with rich cultural inputs, that is, since at least 40,000 years (possibly much more), humans have also worried about improving their lot individually and collectively — sensible worries — and about the evil eye, the displeasure of dead ancestors, the purity of their blood — misplaced worries.

A new kind of misplaced worries is likely to become more and more common. The ever-accelerating current scientific and technological revolution results in a flow of problems and opportunities that presents unprecedented cognitive and decisional challenges. Our capacity to anticipate these problems and opportunities is swamped by their number, novelty, speed of arrival, and complexity.

[…]

What I am particularly worried about is that humans will be less and less able to appreciate what they should really be worrying about and that their worries will do more harm than good. Maybe, just as on a boat in rapids, one should try not to slow down anything but just to optimize a trajectory one does not really control, not because safety is guaranteed and optimism is justified — the worst could happen — but because there is no better option than hope.

Mathematician and economist Eric R. Weinstein considers our conventional wisdom on what it takes to cultivate genius, including the myth of the 10,000-hour rule, and argues instead that the pursuit of excellence is a social malady that gets us nowhere meaningful:

We cannot excel our way out of modern problems. Within the same century, we have unlocked the twin nuclei of both cell and atom and created the conditions for synthetic biological and even digital life with computer programs that can spawn with both descent and variation on which selection can now act. We are in genuinely novel territory which we have little reason to think we can control; only the excellent would compare these recent achievements to harmless variations on the invention of the compass or steam engine. So surviving our newfound god-like powers will require modes that lie well outside expertise, excellence, and mastery.

Going back to Sewall Wright’s theory of adaptive landscapes of fitness, we see four modes of human achievement paired with what might be considered their more familiar accompanying archetypes:

A) Climbing—Expertise: Moving up the path of steepest ascent towards excellence for admission into a community that holds and defends a local maximum of fitness.

B) Crossing—Genius: Crossing the ‘Adaptive Valley’ to an unknown and unoccupied even higher maximum level of fitness.

C) Moving—Heroism: Moving ‘mountains of fitness’ for one’s group.

D) Shaking—Rebellion: Leveling peaks and filling valleys for the purpose of changing the landscape to be more even.

The essence of genius as a modality is that it seems to reverse the logic of excellence.
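
To make Wright’s landscape metaphor concrete, here is a minimal Python sketch (an editorial illustration, not part of Weinstein’s essay) of the difference between “climbing” and “crossing”: a greedy hill-climber that accepts only uphill moves settles on the nearest local peak of a toy fitness landscape, while the higher peak lies across an adaptive valley the climber will never enter. The landscape function, step size, and starting points are invented for the sake of the example.

```python
import math

def fitness(x):
    # A toy one-dimensional fitness landscape (invented for illustration):
    # a modest local peak near x = 1 and a higher peak near x = 4,
    # separated by an "adaptive valley" of low fitness.
    return math.exp(-(x - 1) ** 2) + 2.0 * math.exp(-((x - 4) ** 2) / 0.5)

def climb(x, step=0.05, iterations=200):
    # "Climbing": follow the path of steepest ascent, never accepting a
    # downhill move. Greedy local search can only reach the peak whose
    # slope it already stands on.
    for _ in range(iterations):
        x = max((x - step, x, x + step), key=fitness)
    return x

# Starting on the near side of the valley, the climber settles on the
# lesser, local maximum...
x_local = climb(0.0)
print(f"start at 0.0 -> x = {x_local:.2f}, fitness = {fitness(x_local):.2f}")

# ...whereas "crossing" means tolerating a temporary loss of fitness in the
# valley; here we simply begin again on its far side to reveal the higher peak.
x_higher = climb(3.0)
print(f"start at 3.0 -> x = {x_higher:.2f}, fitness = {fitness(x_higher):.2f}")
```

In the terms of the passage above, expertise optimizes within the peak it already occupies; genius pays the cost of the valley.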

He adds the famous anecdote of Feynman’s Challenger testimony:

In the wake of the Challenger disaster, Richard Feynman was mistakenly asked to become part of the Rogers commission investigating the accident. In a moment of candor Chairman Rogers turned to Neil Armstrong in a men’s room and said “Feynman is becoming a real pain.” Such is ever the verdict pronounced by steady hands over great spirits. But the scariest part of this anecdote is not the story itself but the fact that we are, in the modern era, now so dependent on old Feynman stories having no living heroes with which to replace him: the ultimate tragic triumph of runaway excellence.

This view, however, is remarkably narrow and defeatist. As Voltaire memorably remarked, “Appreciation is a wonderful thing: It makes what is excellent in others belong to us as well.” Without appreciation for the Feynmans of the past, we duly don our presentism blinders and refuse to acknowledge that genius is a timeless quality that belongs to all ages, not a cultural commodity of the present. Many of our present concerns have been addressed with enormous prescience in the past, often providing more thoughtful and richer answers than we are able to today, whether it comes to the value of space exploration or the economics of media or the essence of creativity or even the grand question of how to live. Having “living heroes” is an admirable aspiration, but they should never replace — only enhance and complement — the legacy and learnings of those who came before.

Indeed, this presentism bias is precisely what Noga Arikha, historian of ideas and author of Passions and Tempers: A History of the Humours, points to as her greatest worry in one of the most compelling answers. It’s something I’ve voiced as well in a recent interview with the Guardian. Arikha writes:

I worry about the prospect of collective amnesia.

While access to information has never been so universal as it is now — thanks to the Internet — the total sum of knowledge of anything beyond the present seems to be dwindling among those people who came of age with the Internet. Anything beyond 1945, if then, is a messy, remote landscape; the centuries melt into each other in an insignificant magma. Famous names are flickers on a screen, their dates irrelevant, their epochs dusty. Everything is equalized.

She points to a necessary antidote to this shallowing of our cultural hindsight:

There is a way out: by integrating the teaching of history within the curricula of all subjects—using whatever digital or other means we have to redirect attention to slow reading and old sources. Otherwise we will be condemned to living without perspective, robbed of the wisdom and experience with which to build for the future, confined by the arrogance of our presentism to repeating history without noticing it.

Berkeley developmental psychologist Alison Gopnik, author of The Philosophical Baby: What Children’s Minds Tell Us About Truth, Love, and the Meaning of Life, worries that much of modern parenting is concerned with the wrong things — particularly the push for overachievement — when evidence strongly indicates that the art of presence is the most important gift a parent can bestow upon a child:

Thinking about children, as I do for a living, and worrying go hand in hand. There is nothing in human life so important and urgent as raising the next generation, and yet it also feels as if we have very little control over the outcome. . . .

[But] “parenting” worries focus on relatively small variations in what parents and children do — co-sleeping or crying it out, playing with one kind of toy rather than another, more homework or less. There is very little evidence that any of this makes much difference to the way that children turn out in the long run. There is even less evidence that there is any magic formula for making one well-loved and financially supported child any smarter or happier or more successful as an adult than another.

Instead, she argues, it is neglect that parents should be most worried about — a moral intuition as old as the world, yet one lamentably diluted by modern parents’ misguided concerns:

More recently research into epigenetics has helped demonstrate just how the mechanisms of care and neglect work. Research in sociology and economics has shown empirically just how significant the consequences of early experience actually can be. The small variations in middle-class “parenting” make very little difference. But providing high-quality early childhood care to children who would otherwise not receive it makes an enormous and continuing difference up through adulthood. In fact, the evidence suggests that this isn’t just a matter of teaching children particular skills or kinds of knowledge—a sort of broader institutional version of “parenting.” Instead, children who have a stable, nurturing, varied early environment thrive in a wide range of ways, from better health to less crime to more successful marriages. That’s just what we’d expect from the evolutionary story. I worry more and more about what will happen to the generations of children who don’t have the uniquely human gift of a long, protected, stable childhood.

Journalist Rolf Dobelli, author of The Art of Thinking Clearly, offers an almost Alan Wattsian concern about the paradox of material progress:

As mammals, we are status seekers. Non-status seeking animals don’t attract suitable mating partners and eventually exit the gene pool. Thus goods that convey high status remain extremely important, yet out of reach for most of us. Nothing technology brings about will change that. Yes, one day we might re-engineer our cognition to reduce or eliminate status competition. But until that point, most people will have to live with the frustrations of technology’s broken promise. That is, goods and services will be available to everybody at virtually no cost. But at the same time, status-conveying goods will inch even further out of reach. That’s a paradox of material progress.

Columbia biologist Stuart Firestein, author of the fantastic Ignorance: How It Drives Science and champion of “thoroughly conscious ignorance,” worries about our unreasonable expectations of science:

Much of science is failure, but it is a productive failure. This is a crucial distinction in how we think about failure. More important is that not all wrong science is bad science. As with the exaggerated expectations of scientific progress, expectations about the validity of scientific results have simply become overblown. Scientific “facts” are all provisional, all needing revision or sometimes even outright upending. But this is not bad; indeed it is critical to continued progress. Granted, it’s difficult, because you can’t just believe everything you read. But let’s grow up and recognize that undeniable fact of life. . . .

So what’s the worry? That we will become irrationally impatient with science, with its wrong turns and occasional blind alleys, with its temporary results that need constant revision. And we will lose our trust and belief in science as the single best way to understand the physical universe. . . . From a historical perspective the path to discovery may seem clear, but the reality is that there are twists and turns and reversals and failures and cul-de-sacs all along the path to any discovery. Facts are not immutable and discoveries are provisional. This is the messy process of science. We should worry that our unrealistic expectations will destroy this amazing mess.

Neuroscientist Sam Harris, who has previously explored the psychology of lying, is concerned about bad incentives that bring out the worst in us, as individuals and as a society:

We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them.

Writer Douglas Rushkoff, author of Present Shock: When Everything Happens Now, offers a poignant and beautifully phrased, if exceedingly anthropocentric, concern:

We should worry less about our species losing its biosphere than losing its soul.

Our collective perceptions and cognition are our greatest evolutionary achievement. This is the activity that gives biology its meaning. Our human neural network is in the process of deteriorating and our perceptions are becoming skewed — both involuntarily and by our own hand — and all that most of us in the greater scientific community can do is hope that somehow technology picks up the slack, providing more accurate sensors, faster networks, and a new virtual home for complexity.

We should worry such networks won’t be able to function without us; we should also worry that they will.

Harvard’s Lisa Randall, one of the world’s leading theoretical physicists and the author of, most recently, Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World, worries about the decline in major long-term investments in research, the kind that made the Large Hadron Collider possible, a decline that would in turn diminish our capacity for exploring the most intensely fascinating aspects of the unknown:

I’m worried I won’t know the answer to questions I care deeply about. Theoretical research (what I do) can of course be done more cheaply. A pencil and paper and even a computer are pretty cheap. But without experiments, or the hope of experiments, theoretical science can’t truly advance either.

One of the most poignant answers comes from psychologist Susan Blackmore, author of Consciousness: An Introduction, who admonishes that we’re disconnecting our heads from our hands by outsourcing so much of our manual humanity to machines, in the process amputating the present for the sake of some potential future. She writes:

What should worry us is that we seem to be worrying more about the possible disasters that might befall us than who we are becoming right now.

From 'Things I have learned in my life so far' by Stefan Sagmeister.

What Should We Be Worried About? is an awakening read in its entirety. For more of Brockman’s editorial-curatorial mastery, revisit the Edge Question compendiums from 2013 and 2012, and see Nobel-winning behavioral economist Daniel Kahneman on the marvels and flaws of our intuition.
