Brain Pickings

Posts Tagged ‘science’

06 NOVEMBER, 2012

The Half-Life of Facts: Dissecting the Predictable Patterns of How Knowledge Grows

“No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives.”

Concerns about the usefulness of knowledge and the challenges of information overload predate contemporary anxieties by decades, centuries, if not millennia. In The Half-life of Facts: Why Everything We Know Has an Expiration Date (public library) — which gave us this fantastic illustration of how the Gutenberg press embodied combinatorial creativity — Samuel Arbesman explores why, in a world in constant flux with information proliferating at overwhelming rates, understanding the underlying patterns of how facts change equips us for better handling the uncertainty around us. (He defines a fact as “a bit of knowledge that we know, either as individuals or as a society, as something about the state of the world.”)

Arbesman writes in the introduction:

Knowledge is like radioactivity. If you look at a single atom of uranium, whether it’s going to decay — breaking down and unleashing its energy — is highly unpredictable. It might decay in the next second, or you might have to sit and stare at it for thousands, or perhaps even millions, of years before it breaks apart.

But when you take a chunk of uranium, itself made up of trillions upon trillions of atoms, suddenly the unpredictable becomes predictable. We know how uranium atoms work in the aggregate. As a group of atoms, uranium is highly regular. When we combine particles together, a rule of probability known as the law of large numbers takes over, and even the behavior of a tiny piece of uranium becomes understandable. If we are patient enough, half of a chunk of uranium will break down in 704 million years, like clockwork. This number — 704 million years — is a measurable amount of time, and it is known as the half-life of uranium.

It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: We can measure the amount of time for half of a subject’s knowledge to be overturned. There is science that explores the rates at which new facts are created, new technologies developed, and even how facts spread. How knowledge changes can be understood scientifically.

This is a powerful idea. We don’t have to be at sea in a world of changing knowledge. Instead, we can understand how facts grow and change in the aggregate, just like radioactive materials. This book is a guide to the startling notion that our knowledge — even what each of us has in our head — changes in understandable and systematic ways.
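The decay arithmetic Arbesman borrows from physics is simple enough to write down. Here is a minimal sketch (mine, not the book's) of the half-life relation the passage describes; the 704-million-year figure is the one quoted above (the half-life of uranium-235), and applying the same formula to a field's body of knowledge is Arbesman's analogy rather than a measured value:

```python
# Half-life decay: after time t, the fraction of the original quantity still
# remaining is 0.5 ** (t / half_life). The same rule describes uranium atoms
# and, in Arbesman's analogy, the share of a field's facts not yet overturned.

def fraction_remaining(t, half_life):
    """Fraction of the original quantity left after time t."""
    return 0.5 ** (t / half_life)

print(fraction_remaining(704e6, 704e6))    # 0.5  -> half the uranium has decayed
print(fraction_remaining(1408e6, 704e6))   # 0.25 -> after two half-lives, a quarter remains
```

Measuring a subject's half-life, then, amounts to estimating the single number that makes this curve fit the rate at which its findings get overturned.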

Indeed, Arbesman’s conception depicts facts as the threads of which our networked knowledge and combinatorial creativity are woven:


Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.

Facts, says Arbesman, live on a continuum from the very rapidly changing (like the stock market and the weather) to those whose pace of change is so slow it’s imperceptible to us (like the number of continents on Earth and the number of fingers on the human hand), in the mid-range of which live mesofacts — the facts that change at the meso-, or middle, timescale. These include facts that change over a single lifetime. For instance, my grandmother, who celebrates her 76th birthday today, learned in grade school that there were a little over two billion people living on Earth and a hundred elements in the periodic table, but we’ve recently passed seven billion and there are now 118 known elements. But, rather than fretting about this impossibly rapid informational treadmill, Arbesman finds comfort in patterns:

Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge’s evolution can we be better prepared for its change.

He offers a curious example of the exponential nature of knowledge through the history of scientific research:

If you look back in history you can get the impression that scientific discoveries used to be easy. Galileo rolled objects down slopes; Robert Hooke played with a spring to learn about elasticity; Isaac Newton poked around his own eye with a darning needle to understand color perception. It took creativity and knowledge (and perhaps lack of squeamishness or regard for one’s own well-being) to ask the right questions, but the experiments themselves could be very simple. Today, if you want to make a discovery in physics, it helps to be part of a ten-thousand-member team that runs a multibillion-dollar atom smasher. It takes even more money, more effort, and more people to find out new things.

Indeed, until very recently, no one was particularly interested in the increasing difficulty of discovery, but Arbesman and his team decided to examine the precise pace of change in just how much harder discovery is getting. He looked at the history of three specific fields of science — mammal species, asteroids, and chemical elements — and determined that size was a good proxy for ease of discovery: Smaller creatures and asteroids are harder to discover; in chemistry, he used inverse size since larger elements are harder to create and detect. He plotted the results and what emerged was a clear pattern of exponential decay in the ease of discovery:

What this means is that the ease of discovery doesn’t drop by the same amount every year — it declines by the same fraction each year, a sort of reverse compound interest. For example, the size of asteroids discovered annually gets 2.5 percent smaller each year. In the first few years, the ease of discovery drops off quickly; after early researchers pick the low-hanging fruit, it continues to ‘decay’ for a long time, becoming slightly harder without ever quite becoming impossible.
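To see the “reverse compound interest” at work, here is a minimal sketch (illustrative only: the 2.5 percent annual decline is the asteroid figure Arbesman cites, while the starting index of 100 is an arbitrary placeholder, not a number from the book):

```python
import math

annual_decline = 0.025   # 2.5 percent per year, the asteroid figure from the passage
ease = 100.0             # hypothetical "ease of discovery" index, chosen for illustration

# A constant fractional decline, compounding year over year.
for year in range(0, 101, 25):
    print(f"year {year:3d}: ease = {ease * (1 - annual_decline) ** year:6.2f}")

# A constant fractional decline implies a half-life: the time for the ease
# of discovery to fall to half of its current value.
half_life = math.log(0.5) / math.log(1 - annual_decline)
print(f"half-life of discovery ease ~ {half_life:.1f} years")
```

A half-life of roughly 27 years is what “becoming slightly harder without ever quite becoming impossible” looks like numerically: the curve keeps shrinking by the same proportion but never reaches zero.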

And yet:

However it happens, scientific discovery marches forward. We are in an exceptional time, when the number of scientists is growing rapidly and the scientists alive today make up the majority of scientists who have ever lived. We have massive collaborative projects, from the Manhattan Project to particle accelerators, that have unearthed and are still unearthing the secrets of our cosmos. Yet, while this era of big science has allowed for the shockingly fast accumulation of knowledge, this growth of science is not unexpected.

Arbesman highlights the practical application beyond the cerebral understanding of how knowledge becomes obsolete:

Scholars in the field of information science in the 1970s were concerned with understanding the half-life of knowledge for a specific reason: protecting libraries from being overwhelmed.

In our modern digital information age, this sounds strange. But in the 1970s librarians everywhere were coping with the very real implications of the exponential growth of knowledge: Their libraries were being inundated. They needed ways to figure out which volumes they could safely discard. If they knew the half-life of a book or article’s time to obsolescence, it would go a long way to providing a means of avoiding overloading a library’s capacity. Knowing the half-lives of a library’s volumes would give a librarian a handle on how long books should be kept before they are just taking up space on the shelves, without being useful.

So a burst of research was conducted into this area. Information scientists examined citation data, and even usage data in libraries, in order to answer such questions as, If a book isn’t taken out for decades, is it that important anymore? And should we keep it on our shelves?

These questions, of course, strike very close to home, given that much of what makes my own heart sing is the excavation of near-forgotten gems that are at once timeless and timely, but that rot away in the dusty corners of humanity’s intellectual library in a culture that has conditioned us to fetishize the newest. In fact, contrary to what Arbesman suggests, those fears of the 1970s are not at all “strange” in the “digital information age” — if anything, they are, or should be, all the more exacerbated given the self-perpetuating nature of our knowledge biases: the internet is wired to give more weight to information that a greater number of people have already seen, sending the near-forgotten into an increasingly rapid spiral to the bottom, however “timeless and timely” that information may inherently be.

Still, The Half-life of Facts offers a fascinating and necessary look at the pace of human knowledge and what its underlying patterns might reveal about the secrets of intellectual progress, both for us as individuals and collectively, as a culture and a civilization.


05 NOVEMBER, 2012

Scientists and Philosophers Answer Kids’ Most Pressing Questions About How the World Works

Why we fall in love, what we’re all made of, how dreams work, and more deceptively simple mysteries of living.

UPDATE: The sequel is here, featuring a 9-year-old girl’s question about why we have books, answered by yours truly.

“If you wish to make an apple pie from scratch,” Carl Sagan famously observed in Cosmos, “you must first invent the universe.” The questions children ask are often so simple, so basic, that they turn unwittingly yet profoundly philosophical in requiring apple-pie-from-scratch type of answers. To explore this fertile intersection of simplicity and expansiveness, Gemma Elwin Harris asked thousands of primary school children between the ages of four and twelve to send in their most restless questions, then invited some of today’s most prominent scientists, philosophers, and writers to answer them. The result is Big Questions from Little People & Simple Answers from Great Minds (public library) — a compendium of fascinating explanations of deceptively simple everyday phenomena, featuring such modern-day icons as Mary Roach, Noam Chomsky, Philip Pullman, Richard Dawkins, and many more, with a good chunk of the proceeds being donated to Save the Children.

Alain de Botton explores why we have dreams:

Most of the time, you feel in charge of your own mind. You want to play with some Lego? Your brain is there to make it happen. You fancy reading a book? You can put the letters together and watch characters emerge in your imagination.

But at night, strange stuff happens. While you’re in bed, your mind puts on the weirdest, most amazing and sometimes scariest shows.

[…]

In the olden days, people believed that our dreams were full of clues about the future. Nowadays, we tend to think that dreams are a way for the mind to rearrange and tidy itself up after the activities of the day.

Why are dreams sometimes scary? During the day, things may happen that frighten us, but we are so busy we don’t have time to think properly about them. At night, while we are sleeping safely, we can give those fears a run around. Or maybe something you did during the day was lovely but you were in a hurry and didn’t give it time. It may pop up in a dream. In dreams, you go back over things you missed, repair what got damaged, make up stories about what you’d love, and explore the fears you normally put to the back of your mind.

Dreams are both more exciting and more frightening than daily life. They’re a sign that our brains are marvellous machines — and that they have powers we don’t often give them credit for, when we’re just using them to do our homework or play a computer game. Dreams show us that we’re not quite the bosses of our own selves.

Evolutionary biologist Richard Dawkins breaks down the math of evolution and cousin marriages to demonstrate that we are all related:

Yes, we are all related. You are a (probably distant) cousin of the Queen, and of the president of the United States, and of me. You and I are cousins of each other. You can prove it to yourself.

Everybody has two parents. That means, since each parent had two parents of their own, that we all have four grandparents. Then, since each grandparent had to have two parents, everyone has eight great-grandparents, and sixteen great-great-grandparents and thirty-two great-great-great-grandparents and so on.

You can go back any number of generations and work out the number of ancestors you must have had that same number of generations ago. All you have to do is multiply two by itself that number of times.

Suppose we go back ten centuries, that is to Anglo-Saxon times in England, just before the Norman Conquest, and work out how many ancestors you must have had alive at that time.

If we allow four generations per century, that’s about forty generations ago.

Two multiplied by itself forty times comes to more than a thousand billion. Yet the total population of the world at that time was only around three hundred million. Even today the population is seven billion, yet we have just worked out that a thousand years ago your ancestors alone were more than 150 times as numerous.

[…]

The real population of the world at the time of Julius Caesar was only a few hundred million, and all of us, all seven billion of us, are descended from them. We are indeed all related. Every marriage is between more or less distant cousins, who already share lots and lots of ancestors before they have children of their own.

By the same kind of argument, we are distant cousins not only of all human beings but of all animals and plants. You are a cousin of my dog and of the lettuce you had for lunch, and of the next bird that you see fly past the window. You and I share ancestors with all of them. But that is another story.
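Dawkins’s back-of-envelope arithmetic is easy to check. Here is a minimal sketch of it (the generation count and population figures are the ones from his answer above; the variable names are just for illustration):

```python
# Naive ancestor count: doubling every generation, at roughly four
# generations per century, as in Dawkins's answer above.
generations = 40                       # about ten centuries back
naive_ancestors = 2 ** generations     # 1,099,511,627,776 -> over a thousand billion

world_population_then = 300_000_000    # rough world population a thousand years ago
world_population_now = 7_000_000_000   # the figure used in the passage

print(naive_ancestors)                           # 1099511627776
print(naive_ancestors // world_population_now)   # 157 -> "more than 150 times as numerous"
print(naive_ancestors // world_population_then)  # 3665 -> vastly more slots than people
```

The naive count can only exceed the number of people who actually lived if the same individuals occupy many branches of everyone’s family tree, which is exactly Dawkins’s point: every marriage is between more or less distant cousins.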

Neuroscientist David Eagleman explains why we can’t tickle ourselves:

To understand why, you need to know more about how your brain works. One of its main tasks is to try to make good guesses about what’s going to happen next. While you’re busy getting on with your life, walking downstairs or eating your breakfast, parts of your brain are always trying to predict the future.

Remember when you first learned how to ride a bicycle? At first, it took a lot of concentration to keep the handlebars steady and push the pedals. But after a while, cycling became easy. Now you’re not aware of the movements you make to keep the bike going. From experience, your brain knows exactly what to expect so your body rides the bike automatically. Your brain is predicting all the movements you need to make.

You only have to think consciously about cycling if something changes — like if there’s a strong wind or you get a flat tyre. When something unexpected happens like this, your brain is forced to change its predictions about what will happen next. If it does its job well, you’ll adjust to the strong wind, leaning your body so you don’t fall.

Why is it so important for our brains to predict what will happen next? It helps us make fewer mistakes and can even save our lives.

[…]

Because your brain is always predicting your own actions, and how your body will feel as a result, you cannot tickle yourself. Other people can tickle you because they can surprise you. You can’t predict what their tickling actions will be.

And this knowledge leads to an interesting truth: if you build a machine that allows you to move a feather, but the feather moves only after a delay of a second, then you can tickle yourself. The results of your own actions will now surprise you.

Particle physicist and cosmologist Lawrence Krauss explains why we’re all made of stardust:

Everything in your body, and everything you can see around you, is made up of tiny objects called atoms. Atoms come in different types called elements. Hydrogen, oxygen and carbon are three of the most important elements in your body.

[…]

How did those elements get into our bodies? The only way they could have got there, to make up all the material on our Earth, is if some of those stars exploded a long time ago, spewing all the elements from their cores into space. Then, about four and a half billion years ago, in our part of our galaxy, the material in space began to collapse. This is how the Sun was formed, and the solar system around it, as well as the material that forms all life on earth.

So, most of the atoms that now make up your body were created inside stars! The atoms in your left hand might have come from a different star from those in your right hand. You are really a child of the stars.

But my favorite answers are to the all-engulfing question, How do we fall in love? Author Jeanette Winterson offers this breathlessly poetic response:

You don’t fall in love like you fall in a hole. You fall like falling through space. It’s like you jump off your own private planet to visit someone else’s planet. And when you get there it all looks different: the flowers, the animals, the colours people wear. It is a big surprise falling in love because you thought you had everything just right on your own planet, and that was true, in a way, but then somebody signalled to you across space and the only way you could visit was to take a giant jump. Away you go, falling into someone else’s orbit and after a while you might decide to pull your two planets together and call it home. And you can bring your dog. Or your cat. Your goldfish, hamster, collection of stones, all your odd socks. (The ones you lost, including the holes, are on the new planet you found.)

And you can bring your friends to visit. And read your favourite stories to each other. And the falling was really the big jump that you had to make to be with someone you don’t want to be without. That’s it.

PS You have to be brave.

Evolutionary psychologist and sociologist Robin Dunbar balances out the poetics with a scientific look at what goes on inside the brain when we love:

What happens when we fall in love is probably one of the most difficult things in the whole universe to explain. It’s something we do without thinking. In fact, if we think about it too much, we usually end up doing it all wrong and get in a terrible muddle. That’s because when you fall in love, the right side of your brain gets very busy. The right side is the bit that seems to be especially important for our emotions. Language, on the other hand, gets done almost completely in the left side of the brain. And this is one reason why we find it so difficult to talk about our feelings and emotions: the language areas on the left side can’t send messages to the emotional areas on the right side very well. So we get stuck for words, unable to describe our feelings.

But science does allow us to say a little bit about what happens when we fall in love. First of all, we know that love sets off really big changes in how we feel. We feel all light-headed and emotional. We can be happy and cry with happiness at the same time. Suddenly, some things don’t matter any more and the only thing we are interested in is being close to the person we have fallen in love with.

These days we have scanner machines that let us watch a person’s brain at work. Different parts of the brain light up on the screen, depending on what the brain is doing. When people are in love, the emotional bits of their brains are very active, lighting up. But other bits of the brain that are in charge of more sensible thinking are much less active than normal. So the bits that normally say ‘Don’t do that because it would be crazy!’ are switched off, and the bits that say ‘Oh, that would be lovely!’ are switched on.

Why does this happen? One reason is that love releases certain chemicals in our brains. One is called dopamine, and this gives us a feeling of excitement. Another is called oxytocin and seems to be responsible for the light-headedness and cosiness we feel when we are with the person we love. When these are released in large quantities, they go to parts of the brain that are especially responsive to them.

But all this doesn’t explain why you fall in love with a particular person. And that is a bit of a mystery, since there seems to be no good reason for our choices. In fact, it seems to be just as easy to fall in love with someone after you’ve married them as before, which seems the wrong way round. And here’s another odd thing. When we are in love, we can trick ourselves into thinking the other person is perfect. Of course, no one is really perfect. But the more perfect we find each other, the longer our love will last.

Big Questions from Little People is a wonderful complement to The Where, the Why, and the How: 75 Artists Illustrate Wondrous Mysteries of Science and is certain to give you pause about much of what you thought you knew, or at the very least rekindle that childlike curiosity about and awe at the basic fabric of the world we live in.


30 OCTOBER, 2012

Mind and Cosmos: Philosopher Thomas Nagel’s Brave Critique of Scientific Reductionism

How our hunger for definitive answers robs us of the intellectual humility necessary for understanding the universe and our place in it.

“The purpose of science is not to cure us of our sense of mystery and wonder,” Stanford’s Robert Sapolsky famously noted, “but to constantly reinvent and reinvigorate it.” And yet, we live in a media culture that warps seeds of scientific understanding into sensationalist, definitive headlines about the gene for obesity or language or homosexuality and maps where, precisely, love or fear or the appreciation of Jane Austen is located in the brain — even though we know that it isn’t the clinging to answers but the embracing of ignorance that drives science.

In 1974, philosopher Thomas Nagel penned the essay “What Is It Like to Be a Bat?”, which went on to become one of the seminal texts of contemporary philosophy of mind. Nearly four decades later, he returns with Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (public library) — a provocative critique of the limits of scientific reductionism, exploring what consciousness might be if it isn’t easily explained as a direct property of physical interactions and if the door to the unknown were, as Richard Feynman passionately advocated, left ajar.

To be sure, Nagel is far from siding with the intellectual cop-outs of intelligent design. His criticism of reductive materialism isn’t based on religious belief (or on any belief in a particular alternative, for that matter) but, rather, on the insistence that a recognition of these very limitations is a necessary precondition for exploring such alternatives, “or at least being open to their possibility” — a possibility that makes mind central to understanding the natural order, rather than an afterthought or a mere byproduct of physical laws.

He writes in the introduction:

[T]he mind-body problem is not just a local problem, having to do with the relation between mind, brain, and behavior in living animal organisms, but that it invades our understanding of the entire cosmos and its history.

[…]

Humans are addicted to the hope for a final reckoning, but intellectual humility requires that we resist the temptation to assume that tools of the kind we now have are in principle sufficient to understand the universe as a whole.

As a proponent of making the timeless timely again through an intelligent integration of history with contemporary culture, I find Nagel’s case for weaving a historical perspective into the understanding of mind particularly compelling:

The world is an astonishing place, and the idea that we have in our possession the basic tools needed to understand it is no more credible now than it was in Aristotle’s day.

[…]

The greatest advances in the physical and biological sciences were made possible by excluding the mind from the physical world. This has permitted a quantitative understanding of the world, expressed in timeless, mathematically formulated physical laws. But at some point it will be necessary to make a new start on a more comprehensive understanding that includes the mind. It seems inevitable that such an understanding will have a historical dimension as well as a timeless one. The idea that historical understanding is part of science has become familiar through the transformation of biology by evolutionary theory. But more recently, with the acceptance of the big bang, cosmology has also become a historical science. Mind, as a development of life, must be included as the most recent stage of this long cosmological history, and its appearance, I believe, casts its shadow back over the entire process and the constituents and principles on which the process depends.

Ultimately, Nagel echoes John Updike’s reflection on the possibility of “permanent mystery”:

It is perfectly possible that the truth is beyond our reach, in virtue of our intrinsic cognitive limitations and not merely beyond our grasp in humanity’s present stage of intellectual development.

Though Mind and Cosmos isn’t a neat package of scientific, or even philosophical, answers, it’s a necessary thorn in the side of today’s all-too-prevalent scientific reductionism and a poignant affirmation of Albert Einstein’s famous contention that “the most beautiful experience we can have is the mysterious.”

Image: Orion Nebula; public domain courtesy of The Smithsonian


22 OCTOBER, 2012

The Science of Why We Blush, Animated

What adrenaline-responsive blood vessels have to do with the social signaling of remorse.

Earlier this month, The Where, the Why, and the How, that wonderful illustrated compendium of scientific mysteries, shed light on the science of why we blush. Just a couple of days later, the creative duo behind AsapSCIENCE — who have previously illuminated such enigmas as the science of lucid dreaming, how music enchants the brain, the neurobiology of orgasms, and the science of procrastination — brought their signature style of sketchnote science storytelling to the same question. Blushing, in fact, has perplexed scientists since Charles Darwin, who famously studied human emotional expressions and called blushing “the most peculiar and most human of all expressions,” and theories as to its exact evolutionary purpose remain unreconciled.

For more on the science behind the body’s peculiar involuntary conducts, see Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond.
