“No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives.”
Concerns about the usefulness of knowledge and the challenges of information overload predate contemporary anxieties by decades, centuries, if not millennia. In The Half-life of Facts: Why Everything We Know Has an Expiration Date (public library) — which gave us this fantastic illustration of how the Gutenberg press embodied combinatorial creativity — Samuel Arbesman explores why, in a world in constant flux with information proliferating at overwhelming rates, understanding the underlying patterns of how facts change better equips us to handle the uncertainty around us. (He defines fact as “a bit of knowledge that we know, either as individuals or as a society, as something about the state of the world.”)
Arbesman writes in the introduction:
Knowledge is like radioactivity. If you look at a single atom of uranium, whether it’s going to decay — breaking down and unleashing its energy — is highly unpredictable. It might decay in the next second, or you might have to sit and stare at it for thousands, or perhaps even millions, of years before it breaks apart.
But when you take a chunk of uranium, itself made up of trillions upon trillions of atoms, suddenly the unpredictable becomes predictable. We know how uranium atoms work in the aggregate. As a group of atoms, uranium is highly regular. When we combine particles together, a rule of probability known as the law of large numbers takes over, and even the behavior of a tiny piece of uranium becomes understandable. If we are patient enough, half of a chunk of uranium will break down in 704 million years, like clockwork. This number — 704 million years — is a measurable amount of time, and it is known as the half-life of uranium.
It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: We can measure the amount of time for half of a subject’s knowledge to be overturned. There is science that explores the rates at which new facts are created, new technologies developed, and even how facts spread. How knowledge changes can be understood scientifically.
This is a powerful idea. We don’t have to be at sea in a world of changing knowledge. Instead, we can understand how facts grow and change in the aggregate, just like radioactive materials. This book is a guide to the startling notion that our knowledge — even what each of us has in our head — changes in understandable and systematic ways.
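The half-life arithmetic behind Arbesman's analogy can be made concrete with a brief sketch. This is my own illustration, not from the book; only the 704-million-year half-life of uranium comes from the text, and the function name is hypothetical:

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of an initial quantity left after elapsed_years,
    for exponential decay with the given half-life."""
    return 0.5 ** (elapsed_years / half_life_years)

# After one half-life (704 million years), half the uranium remains:
print(remaining_fraction(704e6, 704e6))      # 0.5
# After two half-lives, half of that half — a quarter — remains:
print(remaining_fraction(2 * 704e6, 704e6))  # 0.25
```

The same functional form is what lets Arbesman speak of measuring "the amount of time for half of a subject's knowledge to be overturned": the individual decay events are unpredictable, but the aggregate curve is regular.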
Indeed, Arbesman’s conception depicts facts as the threads of which our networked knowledge and combinatorial creativity are woven:
Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.
Facts, says Arbesman, live on a continuum from the very rapidly changing (like the stock market and the weather) to those whose pace of change is so slow it’s imperceptible to us (like the number of continents on Earth and the number of fingers on the human hand), in the mid-range of which live mesofacts — the facts that change at the meso, or middle, of the timescale. These include facts that change over a single lifetime. For instance, my grandmother, who celebrates her 76th birthday today, learned in grade school that there were a little over 2 billion people living on Earth and a hundred elements in the periodic table, but we’ve recently passed seven billion and there are now 118 known elements. But, rather than fretting about this impossibly rapid informational treadmill, Arbesman finds comfort in patterns:
Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge’s evolution can we be better prepared for its change.
He offers a curious example of the exponential nature of knowledge through the history of scientific research:
If you look back in history you can get the impression that scientific discoveries used to be easy. Galileo rolled objects down slopes; Robert Hooke played with a spring to learn about elasticity; Isaac Newton poked around his own eye with a darning needle to understand color perception. It took creativity and knowledge (and perhaps lack of squeamishness or regard for one’s own well-being) to ask the right questions, but the experiments themselves could be very simple. Today, if you want to make a discovery in physics, it helps to be part of a ten-thousand-member team that runs a multibillion-dollar atom smasher. It takes even more money, more effort, and more people to find out new things.
Indeed, until very recently, no one was particularly interested in the increasing difficulty of discovery, but Arbesman and his team decided to examine the precise pace of change in just how much harder discovery is getting. He looked at the history of three specific areas of discovery — mammal species, asteroids, and chemical elements — and determined that size was a good proxy for ease of discovery: smaller creatures and asteroids are harder to find; in chemistry, he used inverse size, since larger elements are harder to create and detect. He plotted the results, and what emerged was a clear pattern of exponential decay in the ease of discovery:
What this means is that the ease of discovery doesn’t drop by the same amount every year — it declines by the same fraction each year, a sort of reverse compound interest. For example, the size of asteroids discovered annually gets 2.5 percent smaller each year. In the first few years, the ease of discovery drops off quickly; after early researchers pick the low-hanging fruit, it continues to ‘decay’ for a long time, becoming slightly harder without ever quite becoming impossible.
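The "same fraction each year" pattern described above can be sketched as a one-line model. This is a hypothetical illustration of multiplicative decay using the 2.5 percent figure from the text; the function name and the halving estimate are my own:

```python
def ease_of_discovery(years, initial_ease=1.0, annual_rate=0.025):
    """Exponential decay: ease drops by the same fraction (2.5%)
    each year — a multiplication, not a fixed subtraction."""
    return initial_ease * (1 - annual_rate) ** years

print(ease_of_discovery(1))    # 0.975 — one year in
print(ease_of_discovery(100))  # still above zero: harder, never impossible
```

At 2.5 percent per year the ease of discovery roughly halves about every 27 years, yet the curve never touches zero — which is exactly the "harder without ever quite becoming impossible" shape the passage describes.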
However it happens, scientific discovery marches forward. We are in an exceptional time, when the number of scientists is growing rapidly and includes the majority of scientists who have ever lived. We have massive collaborative projects, from the Manhattan Project to particle accelerators, that have unearthed and are unearthing secrets of our cosmos. Yet, while this era of big science has allowed for the shockingly fast accumulation of knowledge, this growth of science is not unexpected.
Arbesman highlights the practical application beyond the cerebral understanding of how knowledge becomes obsolete:
Scholars in the field of information science in the 1970s were concerned with understanding the half-life of knowledge for a specific reason: protecting libraries from being overwhelmed.
In our modern digital information age, this sounds strange. But in the 1970s librarians everywhere were coping with the very real implications of the exponential growth of knowledge: Their libraries were being inundated. They needed ways to figure out which volumes they could safely discard. If they knew the half-life of a book or article’s time to obsolescence, it would go a long way to providing a means of avoiding overloading a library’s capacity. Knowing the half-lives of a library’s volumes would give a librarian a handle on how long books should be kept before they are just taking up space on the shelves, without being useful.
So a burst of research was conducted into this area. Information scientists examined citation data, and even usage data in libraries, in order to answer such questions as, If a book isn’t taken out for decades, is it that important anymore? And should we keep it on our shelves?
These questions, of course, strike very close to home: much of what makes my own heart sing is the excavation of near-forgotten gems that are at once timeless and timely, yet rot away in the dusty corners of humanity’s intellectual library in a culture conditioned to fetishize the newest. In fact, contrary to what Arbesman suggests, those fears of the 1970s are not at all “strange” in the “digital information age” — if anything, they are, or should be, all the more exacerbated given the self-perpetuating nature of our knowledge biases: the internet is wired to give more weight to information that a greater number of people have already seen, sending the near-forgotten into an ever faster spiral to the bottom, however “timeless and timely” that information may inherently be.
Still, The Half-life of Facts offers a fascinating and necessary look at the pace of human knowledge and what its underlying patterns might reveal about the secrets of intellectual progress, both for us as individuals and collectively, as a culture and a civilization.