Brain Pickings

Posts Tagged ‘technology’

09 JUNE, 2014

The Birth of the Information Age: How Paul Otlet’s Vision for Cataloging and Connecting Humanity Shaped Our World


“Everyone from his armchair will be able to contemplate creation, in whole or in certain parts.”

Decades before Alan Turing pioneered computer science and Vannevar Bush imagined the web, a visionary Belgian idealist named Paul Otlet (August 23, 1868–December 10, 1944) set out to organize the world’s information. For nearly half a century, he worked unrelentingly to index and catalog every significant piece of human thought ever published or recorded, building a massive Universal Bibliography of 15 million books, magazines, newspapers, photographs, posters, museum pieces, and other assorted media. His monumental collection was predicated not on ownership but on access and sharing — while amassing it, he kept devising increasingly ambitious schemes for enabling universal access, fostering peaceful relations between nations, and democratizing human knowledge through a global information network he called the “Mundaneum” — a concept partway between Voltaire’s Republic of Letters, Marshall McLuhan’s “global village,” and the übermind of the future. Otlet’s work would go on to inspire generations of information science pioneers, including the founding fathers of the modern internet and the world wide web. (Even the visual bookshelf I use to manage the Brain Pickings book archive is named after him.)

In Cataloging the World: Paul Otlet and the Birth of the Information Age (public library), writer, educator, and design historian Alex Wright traces Otlet’s legacy not only in technology and information science, but also in politics, social reform, and peace activism, illustrating why not only Otlet’s ideas, but also his idealism matter as we contemplate the future of humanity.

The Mundaneum, with its enormous filing system designed by Otlet himself, allowed people to request information by mail-order. By 1912, Otlet and his team were fielding 1,500 such requests per year.

(Image: Mundaneum Archive, Belgium)

Wright writes:

Paul Otlet … seems to connect a series of major turning points in the history of the early twentieth-century information age, synthesizing and incorporating their ideas along with his own, and ultimately coming tantalizingly close to building a fully integrated global information network.

[…]

Otlet embraced the new internationalism and emerged as one of its most prominent apostles in Europe in the early twentieth century. In his work we can see many of these trends intersecting — the rise of industrial technologies, the problem of managing humanity’s growing intellectual output, and the birth of a new internationalism. To sustain it Otlet tried to assemble a great catalog of the world’s published information, create an encyclopedic atlas of human knowledge, build a network of federated museums and other cultural institutions, and establish a World City that would serve as the headquarters for a new world government. For Otlet these were not disconnected activities but part of a larger vision of worldwide harmony. In his later years he started to describe the Mundaneum in transcendental terms, envisioning his global knowledge network as something akin to a universal consciousness and as a gateway to collective enlightenment.

In 1903, Otlet developed a revolutionary index card system for organizing information.

(Image: Mundaneum Archive, Belgium)

Otlet's primarily female staff answered information requests by hand. Without the digital luxury of keyword searches, a single query could take painstaking hours, even days, of sifting through the elaborate index card catalog.

(Image: Mundaneum Archive, Belgium)

The Mundaneum, which officially opened its doors in 1920, a decade after Otlet first dreamt it up, wasn’t merely a prescient vision for the utilitarian information-retrieval function of the modern internet, but the ideological framework for a far nobler and more ambitious goal to unite the world around a new culture of networked peace and understanding, which would shepherd humanity toward reaching its spiritual potential — an idea that makes the Mundaneum’s fate in actuality all the more bitterly ironic.

At the peak of Otlet’s efforts to organize the world’s knowledge around a generosity of spirit, humanity’s greatest tragedy of ignorance and cruelty descended upon Europe. As the Nazis seized power, they launched a calculated campaign to thwart critical thought by banning and burning all books that didn’t agree with their ideology — the very atrocity that prompted Helen Keller’s scorching letter on book-burning — and even paved the muddy streets of Eastern Europe with such books so the tanks would pass more efficiently. When the Nazi inspectors responsible for the censorship effort eventually got to Otlet’s collection, they weren’t quite sure what to make of it. One report summed up their contemptuous bafflement:

The institute and its goals cannot be clearly defined. It is some sort of … ‘museum for the whole world,’ displayed through the most embarrassing and cheap and primitive methods… The library is cobbled together and contains, besides a lot of waste, some things we can use. The card catalog might prove rather useful.

But behind the “waste” and the “embarrassing” methods of organizing it lay far greater ideas that evaded, as is reliably the case, small minds. Wright outlines the remarkable prescience of Otlet’s vision:

What the Nazis saw as a “pile of rubbish,” Otlet saw as the foundation for a global network that, one day, would make knowledge freely available to people all over the world. In 1934, he described his vision for a system of networked computers — “electric telescopes,” he called them — that would allow people to search through millions of interlinked documents, images, and audio and video files. He imagined that individuals would have desktop workstations — each equipped with a viewing screen and multiple movable surfaces — connected to a central repository that would provide access to a wide range of resources on whatever topics might interest them. As the network spread, it would unite individuals and institutions of all stripes — from local bookstores and classrooms to universities and governments. The system would also feature so-called selection machines capable of pinpointing a particular passage or individual fact in a document stored on microfilm, retrieved via a mechanical indexing and retrieval tool. He dubbed the whole thing a réseau mondial: a “worldwide network” or, as the scholar Charles van den Heuvel puts it, an “analog World Wide Web.”

Twenty-five years before the first microchip, forty years before the first personal computer, and fifty years before the first Web browser, Paul Otlet had envisioned something very much like today’s Internet.

Otlet articulated this vision in his own writing, describing an infrastructure remarkably similar to the underlying paradigm of the modern web:

Everything in the universe, and everything of man, would be registered at a distance as it was produced. In this way a moving image of the world will be established, a true mirror of [its] memory. From a distance, everyone will be able to read text, enlarged and limited to the desired subject, projected on an individual screen. In this way, everyone from his armchair will be able to contemplate creation, in whole or in certain parts.

Otlet’s prescience, Wright notes, didn’t end there — he also envisioned speech recognition tools, wireless networks that would enable people to upload files to remote servers, social networks and virtual communities around individual pieces of media that would allow people to “participate, applaud, give ovations, sing in the chorus,” and even concepts we are yet to crack with our present technology, such as transmitting sensory experiences like smell and taste.

Otlet's sketch for the 'worldwide network' he envisioned

(Image: Mundaneum Archive, Belgium)

But Otlet’s most significant vision wasn’t about the technology of it — it was about politics and peace, the very things that most bedevil the modern web, from cyber terrorism to the ongoing struggle for net neutrality. Wright writes:

An ardent “internationalist,” Otlet believed in the inevitable progress of humanity toward a peaceful new future, in which the free flow of information over a distributed network would render traditional institutions — like state governments — anachronistic. Instead, he envisioned a dawning age of social progress, scientific achievement, and collective spiritual enlightenment. At the center of it all would stand the Mundaneum, a bulwark and beacon of truth for the whole world.

But when the Nazis swept Europe and crept closer to Belgium, it became clear to Otlet that not only the physical presence of the Mundaneum but also its political ideals stood at grave risk. He grew increasingly concerned. In swelling desperation to save his life’s work, he sent President Roosevelt a telegram offering the entire collection to the United States “as nucleus of a great World Institution for World Peace and Progress with a seat in America.” Otlet’s urgent plea made it all the way to the Belgian press, who printed the telegram, but he never heard back from Roosevelt. He sent a second telegram, even more urgent, once Belgium was invaded, but again received no response. Finally, in an act of despair, he decided to make “an appeal on behalf of humanity” and try persuading the Nazi inspectors that the Mundaneum was worth saving. Predictably, they were unmoved. A few days later, Nazi soldiers destroyed 63 tons’ worth of books, the meticulously preserved and indexed materials that constituted the heart of Otlet’s collection.

Otlet was devastated, but continued to labor quietly over his dream of a global information network throughout the occupation. Four months after the liberation of Paris, he died. And yet the ghost of his work went on to greatly influence the modern information world. Wright contextualizes Otlet’s legacy:

While Otlet did not by any stretch of the imagination “invent” the Internet — working as he did in an age before digital computers, magnetic storage, or packet-switching networks — nonetheless his vision looks nothing short of prophetic. In Otlet’s day, microfilm may have qualified as the most advanced information storage technology, and the closest thing anyone had ever seen to a database was a drawer full of index cards. Yet despite these analog limitations, he envisioned a global network of interconnected institutions that would alter the flow of information around the world, and in the process lead to profound social, cultural, and political transformations.

By today’s standards, Otlet’s proto-Web was a clumsy affair, relying on a patchwork system of index cards, file cabinets, telegraph machines, and a small army of clerical workers. But in his writing he looked far ahead to a future in which networks circled the globe and data could travel freely. Moreover, he imagined a wide range of expression taking shape across the network: distributed encyclopedias, virtual classrooms, three-dimensional information spaces, social networks, and other forms of knowledge that anticipated the hyperlinked structure of today’s Web. He saw these developments as fundamentally connected to a larger utopian project that would bring the world closer to a state of permanent and lasting peace and toward a state of collective spiritual enlightenment.

And yet there’s a poignant duality in how the modern web came to both embody and defy Otlet’s ideals:

During its brief heyday, Otlet’s Mundaneum was also a window onto the world ahead: a vision of a networked information system spanning the globe. Today’s Internet represents both a manifestation of Otlet’s dream and also, arguably, the realization of his worst fears. For the system he imagined differed in crucial ways from the global computer network that would ultimately take shape during the Cold War. He must have sensed that his dream was over when he confronted Krüss and the Nazi delegation on that day in 1940. But before we can fully grasp the importance of Otlet’s vision, we need to look further back, to where it all began.

Comparing the Mundaneum with Sir Tim Berners-Lee’s original 1989 proposal for the world wide web, both premised on an essential property of universality, Wright notes both the parallels between the two and the superiority, in certain key aspects, of Otlet’s ideals compared to how the modern web turned out:

[Otlet] never framed his thinking in purely technological terms; he saw the need for a whole-system approach that encompassed not just a technical solution for sharing documents and a classification system to bind them together, but also the attendant political, organizational, and financial structures that would make such an effort sustainable in the long term. And while his highly centralized, controlled approach may have smacked of nineteenth-century cultural imperialism (or, to put it more generously, at least the trappings of positivism), it had the considerable advantages of any controlled system, or what today we might call a “walled garden”: namely, the ability to control what goes in and out, to curate the experience, and to exert a level of quality control on the contents that are exchanged within the system.

Paul Otlet in 1932, years before the Nazis destroyed his Mundaneum

(Image: Mundaneum Archive, Belgium)

But Otlet’s greatest ambition, and his most enduring precisely because it remains unfulfilled, was the Mundaneum’s humanistic effect of strengthening the invisible bonds that link us together — an ethos rather antithetical to the individualistic, almost narcissistic paradigm of today’s social web. Wright explains:

The contemporary construct of “the user” that underlies so much software design figures nowhere in Otlet’s work. He saw the mission of the Mundaneum as benefiting humanity as a whole, rather than serving the whims of individuals. While he imagined personalized workstations (those Mondotheques), he never envisioned the network along the lines of a client-server “architecture” (a term that would not come into being for another two decades). Instead, each machine would act as a kind of “dumb” terminal, fetching and displaying material stored in a central location.

The counterculture programmers who paved the way for the Web believed they were participating in a process of personal liberation. Otlet saw it as a collective undertaking, one dedicated to a higher purpose than mere personal gratification. And while he might well have been flummoxed by the anything-goes ethos of present-day social networking sites like Facebook or Twitter, he also imagined a system that allowed groups of individuals to take part in collaborative experiences like lectures, opera performances, or scholarly meetings, where they might “applaud” or “give ovations.” It seems a short conceptual hop from here to Facebook’s ubiquitous “Like” button.

A reproduction of Otlet's original Mondotheque desk

(Image: Mundaneum Archive, Belgium)

In this regard, Otlet’s idea of collective intelligence working toward a common good presaged modern concepts like crowdsourcing and “cognitive surplus” as well as initiatives like Singularity University. Wright considers the essence of his legacy:

Otlet’s work invites us to consider a simple question: whether the path to liberation requires maximum personal freedom of the kind that characterizes today’s anything-goes Internet, or whether humanity would find itself better served by pursuing liberation through the exertion of discipline.

Considering the darker side of the modern internet in information monopolies like Google and Facebook, Wright reflects on how antithetical this dominance of private enterprise is to Otlet’s vision of a democratic, publicly funded international network. “He likely would have seen the pandemonium of today’s Web as an enormous waste of humanity’s intellectual and spiritual potential,” Wright writes, contemplating the messy machinery of money and motives propelling the modern web:

Would the Internet have turned out any differently had Paul Otlet’s vision come to fruition? Counterfactual history is a fool’s game, but it is perhaps worth considering a few possible lessons from the Mundaneum. First and foremost, Otlet acted not out of a desire to make money — something he never succeeded at doing — but out of sheer idealism. His was a quest for universal knowledge, world peace, and progress for humanity as a whole. The Mundaneum was to remain, as he said, “pure.” While many entrepreneurs vow to “change the world” in one way or another, the high-tech industry’s particular brand of utopianism almost always carries with it an underlying strain of free-market ideology: a preference for private enterprise over central planning and a distrust of large organizational structures. This faith in the power of “bottom-up” initiatives has long been a hallmark of Silicon Valley culture, and one that all but precludes the possibility of a large-scale knowledge network emanating from anywhere but the private sector.

But rather than a hapless historical lament, Wright argues, Otlet’s work can serve as an ideal — moral, social, political — to aspire to as we continue to shape this fairly young medium. It could lead us to devise more intelligent intellectual property regulations, build more sophisticated hyperlinks, and hone our ability to curate and contextualize information in more meaningful ways. He writes:

That is why Paul Otlet still matters. His vision was not just cloud castles and Utopian scheming and positivist cant but in some ways more relevant and realizable now than at any point in history. To be sure, some of his most cherished ideas seem anachronistic by today’s standards: his quest for “universal” truth, his faith in international organizations, and his conviction in the inexorable progress of humanity. But as more and more of us rely on the Internet to conduct our everyday lives, we are also beginning to discover the dark side of such extreme decentralization. The hopeful rhetoric of the early years of the Internet revolution has given way to the realization that we may be entering a state of permanent cultural amnesia, in which the sheer fluidity of the Web makes it difficult to keep our bearings. Along the way, many of us have also entrusted our most valued personal data — letters, photographs, films, and all kinds of other intellectual artifacts — to a handful of corporations who are ultimately beholden not to serving humanity but to meeting Wall Street quarterly earnings estimates. For all the utopian Silicon Valley rhetoric about changing the world, the technology industry seems to have little appetite for long-term thinking beyond its immediate parochial interests.

[…]

Otlet’s Mundaneum will never be. But it nonetheless offers us a kind of Platonic object, evoking the possibility of a technological future driven not by greed and vanity, but by a yearning for truth, a commitment to social change, and a belief in the possibility of spiritual liberation. Otlet’s vision for an international knowledge network—always far more expansive than a mere information retrieval tool—points toward a more purposeful vision of what the global network could yet become. And while history may judge Otlet a relic from another time, he also offers us an example of a man driven by a sense of noble purpose, who remained sure in his convictions and unbowed by failure, and whose deep insights about the structure of human knowledge allowed him to peer far into the future…

His work points to a deeply optimistic vision of the future: one in which the world’s knowledge coalesces into a unified whole, narrow national interests give way to the pursuit of humanity’s greater good, and we all work together toward building an enlightened society.

Cataloging the World: Paul Otlet and the Birth of the Information Age is a remarkable read in its entirety, not only in illuminating history but in extracting from it a beacon for the future. Complement it with Vannevar Bush’s 1945 “memex” concept and George Dyson’s history of bits. And lest we forget, it all started with a woman — Ada Lovelace, Lord Byron’s only legitimate daughter and the world’s first computer programmer.

Thanks, Liz


29 MAY, 2014

A Brief History of the Toilet


How the most appropriately named inventor in history saved humanity from a centuries-long crisis.

“Civilized man has always been outraged by what he sees, or else there would be no civilization,” Norman Mailer once wrote. And, in fact, among the greatest feats of civilization is a technology that has enabled us to get one of humanity’s most primal yet most outrageous sights as far away from us, and as quickly, as possible: the modern toilet.

From Bill Bryson’s wonderfully edifying At Home: A Short History of Private Life (public library) comes the curious history of how this staple of civilization came to be — a story not for the faint of heart or gut, but one brilliantly emblematic of how scientific innovation unfolds, with all its desperation-driven revolutions, cumulative advances, and dormant breakthroughs.

Bryson begins by tracing the colorful etymological history of the word itself:

Perhaps no word in English has undergone more transformations in its lifetime than toilet. Originally, in about 1540, it was a kind of cloth, a diminutive form of ‘toile’, a word still used to describe a type of linen. Then it became a cloth for use on dressing tables. Then it became the items on the dressing table (whence toiletries). Then it became the dressing table itself, then the act of dressing, then the act of receiving visitors while dressing, then the dressing room itself, then any kind of private room near a bedroom, then a room used lavatorially, and finally the lavatory itself. Which explains why toilet water in English can describe something you would gladly daub on your face or, simultaneously and more basically, water in a toilet.

Meanwhile, the fate of the actual toilet water — or what is referred to by that term today — was far less polished. As recently as the beginning of the 18th century, most sewage still went into cesspools, which were frequently neglected to the point of spilling into adjoining water supplies or overflowing into the streets. Bryson cites one man’s diary record of such an incident spurred by his neighbor’s neglected cesspit:

Going down into my cellar… I put my foot into a great heap of turds … by which I found that Mr. Turner’s house of office is full and comes into my cellar, which doth trouble me.

And just when one feels things couldn’t get any more nauseating, Bryson introduces the people who cleaned the cesspits, semi-euphemistically known as “nightsoil men.” Their duties put in perspective any present-day complaints about the struggle to find fulfilling work:

They worked in teams of three or four. One man — the most junior, we may assume — was lowered into the pit itself to scoop waste into buckets. A second stood by the pit to raise and lower the buckets, and the third and fourth carried the buckets to a waiting cart. Nightsoil work was dangerous as well as disagreeable. Workers ran the risk of asphyxiation and even of explosions, since they worked by the light of a lantern in powerfully gaseous environments.

Given this was unfolding during the heyday of Adam Smith, it is perhaps unsurprising that nightsoil workers made up for the extreme disagreeableness of the job and the skewed supply-demand ratio by charging formidable fees. This presented another problem: Poorer districts, often in the overcrowded inner city, couldn’t afford their services, which caused their cesspits to overflow regularly. Given the extreme population density — in London’s most compressed districts, 54,000 people were packed into a few blocks, and one report claimed that 11,000 lived in 27 houses on a single alley — this was a problem.

A new word crept into the vernacular to describe such neighborhoods: slums. Though its exact origin remains unknown, Charles Dickens was among the first to use it, in a letter penned in 1851.

A solution to the cesspit crisis was desperately needed. But when a successful one finally arrived, it wasn’t the result of a eureka! moment for groundbreaking technology — it was a concept that had been around since the end of the 16th century but whose prototype, as is the case with many scientific and technological breakthroughs ahead of their time, had never been perfected enough to gain commercial traction.

That solution was the flush toilet, which John Harington, the godson of Queen Elizabeth I, had built for the Queen in 1597. Delighted by his invention, she promptly installed it in Richmond Palace, but it never expanded beyond the royal dwellings. Bryson writes:

Almost 200 years passed before Joseph Bramah, a cabinet maker and locksmith, patented the first modern flush toilet in 1778. It caught on in a modest way. Many others followed… But early toilets often didn’t work well. Sometimes they backfired, filling the room with even more of what the horrified owner had very much hoped to be rid of. Until the development of the U-bend and water trap — which create that little reservoir of water that returns to the bottom of the bowl after each flush — every toilet bowl acted as a conduit to the smells of cesspit and sewer. The backwaft of odors, particularly in hot weather, could be unbearable.

The final link in this chain of problem-solving came from an inventor with perhaps the most brilliantly appropriate name in history: Thomas Crapper. Bryson ties the loose ends of the story:

[Crapper] was born into a poor family in Yorkshire and reputedly walked to London at the age of 11. There he became an apprentice plumber in Chelsea. Crapper invented the classic and still familiar toilet with an elevated cistern activated by the pull of a chain. Called the Marlboro Silent Water Waste Preventer, it was clean, leak-proof, odor-free and wonderfully reliable, and their manufacture made Crapper very rich and so famous that it is often assumed that he gave his name to the slang term crap and its many derivatives. In fact, crap in the lavatorial sense is very ancient, and crapper for a toilet is an Americanism not recorded by the Oxford English Dictionary before 1922. Crapper’s name, it seems, was just a happy accident.

In the rest of At Home: A Short History of Private Life, Bryson goes on to explore with equal parts wit and scientific rigor the everyday miracles in each room of the house and the colorful backstories behind those modern comforts we’ve come to take for granted, from pipes to pillows.


07 APRIL, 2014

Isaac Asimov on the Thrill of Lifelong Learning, Science vs. Religion, and the Role of Science Fiction in Advancing Society


“It’s insulting to imply that only a system of rewards and punishments can keep you a decent human being.”

Isaac Asimov was an extraordinary mind and spirit — the author of more than 400 science and science fiction books and a tireless advocate of space exploration, he also took great joy in the humanities (and once annotated Lord Byron’s epic poem “Don Juan”), championed humanism over religion, and celebrated the human spirit itself (he even wrote young Carl Sagan fan mail). Like many of the best science fiction writers, he was as exceptional at predicting the future as he was at illuminating some of the most timeless predicaments of the human condition. In a 1988 interview with Bill Moyers, found in Bill Moyers: A World of Ideas (public library) — the same remarkable tome that gave us philosopher Martha Nussbaum on how to live with our human fragility — Asimov explores several subjects that still stir enormous cultural concern and friction. With his characteristic eloquence and sensitivity to the various dimensions of these issues, he presages computer-powered lifelong learning and online education decades before it existed, weighs the question of how authors will make a living in a world of free information, bemoans the extant attempts of religious fundamentalism to drown out science and rational thought, and considers the role of science fiction as a beacon of the future.

The conversation begins with a discussion of Asimov’s passionate belief that when given the right tools, we can accomplish far more than what we can with the typical offerings of formal education:

MOYERS: Do you think we can educate ourselves, that any one of us, at any time, can be educated in any subject that strikes our fancy?

ASIMOV: The key words here are “that strikes our fancy.” There are some things that simply don’t strike my fancy, and I doubt that I can force myself to be educated in them. On the other hand, when there’s a subject I’m ferociously interested in, then it is easy for me to learn about it. I take it in gladly and cheerfully…

[What's exciting is] the actual process of broadening yourself, of knowing there’s now a little extra facet of the universe you know about and can think about and can understand. It seems to me that when it’s time to die, there would be a certain pleasure in thinking that you had utilized your life well, learned as much as you could, gathered in as much as possible of the universe, and enjoyed it. There’s only this one universe and only this one lifetime to try to grasp it. And while it is inconceivable that anyone can grasp more than a tiny portion of it, at least you can do that much. What a tragedy just to pass through and get nothing out of it.

MOYERS: When I learn something new — and it happens every day — I feel a little more at home in this universe, a little more comfortable in the nest. I’m afraid that by the time I begin to feel really at home, it’ll all be over.

ASIMOV: I used to worry about that. I said, “I’m gradually managing to cram more and more things into my mind. I’ve got this beautiful mind, and it’s going to die, and it’ll all be gone.” And then I thought, “No, not in my case. Every idea I’ve ever had I’ve written down, and it’s all there on paper. I won’t be gone. It’ll be there.”

Page from 'Charley Harper: An Illustrated Life'

Asimov then considers how computers would usher in this profound change in learning and paints the outline of a concept that Clay Shirky would detail and term “cognitive surplus” two decades later:

MOYERS: Is it possible that this passion for learning can be spread to ordinary folks out there? Can we have a revolution in learning?

ASIMOV: Yes, I think not only that we can but that we must. As computers take over more and more of the work that human beings shouldn’t be doing in the first place — because it doesn’t utilize their brains, it stifles and bores them to death — there’s going to be nothing left for human beings to do but the more creative types of endeavor. The only way we can indulge in the more creative types of endeavor is to have brains that aim at that from the start.

You can’t take a human being and put him to work at a job that underuses the brain and keep him working at it for decades and decades, and then say, “Well, that job isn’t there, go do something more creative.” You have beaten the creativity out of him. But if from the start children are educated into appreciating their own creativity, then probably almost all of us can be creative. In the olden days, very few people could read and write. Literacy was a very novel sort of thing, and it was felt that most people just didn’t have it in them. But with mass education, it turned out that most people could be taught to read and write. In the same way, once we have computer outlets in every home, each of them hooked up to enormous libraries, where you can ask any question and be given answers, you can look up something you’re interested in knowing, however silly it might seem to someone else.

Asimov goes on to point out the flawed industrial model of education — something Sir Ken Robinson would lament articulately two decades later — and tells Moyers:

Today, what people call learning is forced on you. Everyone is forced to learn the same thing on the same day at the same speed in class. But everyone is different. For some, class goes too fast, for some too slow, for some in the wrong direction. But give everyone a chance, in addition to school, to follow up their own bent from the start, to find out about whatever they’re interested in by looking it up in their own homes, at their own speed, in their own time, and everyone will enjoy learning.

Later, in agreeing with Moyers that this revolution in learning isn’t merely for the young, Asimov adds:

That’s another trouble with education as we now have it. People think of education as something that they can finish. And what’s more, when they finish, it’s a rite of passage. You’re finished with school. You’re no more a child, and therefore anything that reminds you of school — reading books, having ideas, asking questions — that’s kid’s stuff. Now you’re an adult, you don’t do that sort of thing anymore…

Every kid knows the only reason he’s in school is because he’s a kid and little and weak, and if he manages to get out early, if he drops out, why he’s just a premature man.

Embroidered map of the infant Internet in 1983 by Debbie Millman

Speaking at a time when the Internet as we know it today was still an infant, and two decades before the golden age of online education, Asimov offers a remarkably prescient vision for how computer-powered public access to information would spark the very movement of lifelong learning that we’ve witnessed in the past decade:

You have everybody looking forward to no longer learning, and you make them ashamed afterward of going back to learning. If you have a system of education using computers, then anyone, any age, can learn by himself, can continue to be interested. If you enjoy learning, there’s no reason why you should stop at a given age. People don’t stop things they enjoy doing just because they reach a certain age. They don’t stop playing tennis just because they’ve turned forty. They don’t stop with sex just because they’ve turned forty. They keep it up as long as they can if they enjoy it, and learning will be the same thing. The trouble with learning is that most people don’t enjoy it because of the circumstances. Make it possible for them to enjoy learning, and they’ll keep it up.

When Moyers asks him to describe what such a teaching machine would look like — again, in 1988, when personal computers had only just begun to appear in homes — Asimov envisions a kind of Siri-like artificial intelligence, combined with the functionality of a discovery engine:

I suppose that one essential thing would be a screen on which you could display things… And you’ll have to have a keyboard on which you ask your questions, although ideally I would like to see one that could be activated by voice. You could actually talk to it, and perhaps it could talk to you too, and say, “I have something here that may interest you. Would you like to have me print it out for you?” And you’d say, “Well, what is it exactly?” And it would tell you, and you might say, “Oh all right, I’ll take a look at it.”

But one of his most prescient remarks actually has to do not with the mechanics of freely available information but with the ethics and economics of it. Long before our present conundrum of how to make online publishing both in the public interest and financially sustainable for publishers, Asimov shares with Moyers the all too familiar question he has been asking himself — “How do you arrange to pay the author for the use of the material?” — and addresses it with equal parts realism and idealism:

After all, if a person writes something, and this then becomes available to everybody, you deprive him of the economic reason for writing. A person like myself, if he was assured of a livelihood, might write anyway, just because he enjoyed it, but most people would want to do it in return for something. I imagine how they must have felt when free libraries were first instituted. “What? My book in a free library? Anyone can come in and read it for free?” Then you realize that there are some books that wouldn’t be sold at all if you didn’t have libraries.

(A century earlier, Schopenhauer had issued a much sterner admonition against the cultural malady of writing solely for material rewards.)

Painting of hell by William Blake from John Milton's 'Paradise Lost'

Asimov then moves on to the subject of science vs. religion — something he would come to address with marvelous eloquence in his memoir — and shares his concern about how mysticism and fundamentalism undercut society:

I’d like to think that people who are given a chance to learn facts and broaden their knowledge of the universe wouldn’t seek so avidly after mysticism.

[…]

It isn’t right to sell a person phony stock, and take money for it, and this is what mystics are doing. They’re selling people phony knowledge and taking money for it. Even if people feel good about it, I can well imagine that a person who really believes in astrology is going to have a feeling of security because he knows that this is a bad day, so he’ll stay at home, just as a guy who’s got phony stock may look at it and feel rich. But he still has phony stock, and the person who buys mysticism still has phony knowledge.

He offers a counterpoint and considers what real knowledge is, adding to history’s best definitions of science:

Science doesn’t purvey absolute truth. Science is a mechanism, a way of trying to improve your knowledge of nature. It’s a system for testing your thoughts against the universe and seeing whether they match. This works not just for the ordinary aspects of science, but for all of life.

Asimov goes on to bemoan the cultural complacency that has led to the decline of science in mainstream culture — a decline we feel even today more sharply than ever when, say, a creationist politician tries to stop a little girl’s campaign for a state fossil because such an effort would “endorse” evolution. Noting that “we are living in a business society” where fewer and fewer students take math and science, Asimov laments how we’ve lost sight of the fact that science is driven by not-knowing rather than certitude:

MOYERS: You wrote a few years ago that the decline in America’s world power is in part brought about by our diminishing status as a world science leader. Why have we neglected science?

ASIMOV: Partly because of success. The most damaging statement that the United States has ever been subjected to is the phrase “Yankee know-how.” You get the feeling somehow that Americans — just by the fact that they’re American — are somehow smarter and more ingenious than other people, which really is not so. Actually, the phrase was first used in connection with the atomic bomb, which was invented and brought to fruition by a bunch of European refugees. That’s “Yankee know-how.”

MOYERS: There’s long been a bias in this country against science. When Benjamin Franklin was experimenting with the lightning rod, a lot of good folk said, “You don’t need a lightning rod. If you want to prevent lightning from striking, you just have to pray about it.”

ASIMOV: The bias against science is part of being a pioneer society. You somehow feel the city life is decadent. American history is full of fables of the noble virtuous farmer and the vicious city slicker. The city slicker is an automatic villain. Unfortunately, such stereotypes can do damage. A noble ignoramus is not necessarily what the country needs.

(What might Asimov, who in 1980 voiced fears that the fundamentalists coming into power with President Reagan would turn the country even more against science by demanding that biblical creationism be given an equal footing with evolution in the classroom, have said had he known that a contemporary television station can edit out Neil deGrasse Tyson’s mention of evolution?)

'The Expulsion of Adam and Eve from the Garden of Eden' by William Blake from John Milton's 'Paradise Lost'

But when Moyers asks the writer whether he considers himself an enemy of religion, Asimov answers in the negative and offers this beautifully thoughtful elaboration on the difference between the blind faith of religion and the critical thinking at the heart of science:

My objection to fundamentalism is not that they are fundamentalists but that essentially they want me to be a fundamentalist, too. Now, they may say that I believe evolution is true and I want everyone to believe that evolution is true. But I don’t want everyone to believe that evolution is true, I want them to study what we say about evolution and to decide for themselves. Fundamentalists say they want to treat creationism on an equal basis. But they can’t. It’s not a science. You can teach creationism in churches and in courses on religion. They would be horrified if I were to suggest that in churches they should teach secular humanism as an alternate way of looking at the universe or evolution as an alternate way of considering how life may have started. In the church they teach only what they believe, and rightly so, I suppose. But on the other hand, in schools, in science courses, we’ve got to teach what scientists think is the way the universe works.

He extols the “thoroughly conscious ignorance” at the heart of science as a much safer foundation of reality than dogma:

That is really the glory of science — that science is tentative, that it is not certain, that it is subject to change. What is really disgraceful is to have a set of beliefs that you think is absolute and has been so from the start and can’t change, where you simply won’t listen to evidence. You say, “If the evidence agrees with me, it’s not necessary, and if it doesn’t agree with me, it’s false.” This is the legendary remark of Omar when they captured Alexandria and asked him what to do with the library. He said, “If the books agree with the Koran, they are not necessary and may be burned. If they disagree with the Koran, they are pernicious and must be burned.” Well, there are still these Omar-like thinkers who think all of knowledge will fit into one book called the Bible, and who refuse to allow it is possible ever to conceive of an error there. To my way of thinking, that is much more dangerous than a system of knowledge that is tentative and uncertain.

Riffing off the famous and rather ominous Dostoevsky line that “if God is dead, everything is permitted,” Asimov revisits the notion of intrinsic vs. extrinsic rewards — similarly to his earlier remark that good writing is driven by intrinsic motives rather than external incentives, he argues that good-personhood is steered not by dogma but by one’s own conscience:

It’s insulting to imply that only a system of rewards and punishments can keep you a decent human being. Isn’t it conceivable a person wants to be a decent human being because that way he feels better?

I don’t believe that I’m ever going to heaven or hell. I think that when I die, there will be nothingness. That’s what I firmly believe. That’s not to mean that I have the impulse to go out and rob and steal and rape and everything else because I don’t fear punishment. For one thing, I fear worldly punishment. And for a second thing, I fear the punishment of my own conscience. I have a conscience. It doesn’t depend on religion. And I think that’s so with other people, too.

'The Rout of the Rebel Angels' by William Blake from John Milton's 'Paradise Lost'

He goes on to extend this conscience-driven behavior to the domain of science, which he argues is strongly motivated by morality and a generosity of spirit uncommon in most other disciplines, where ego consumes goodwill. (Mark Twain memorably argued that no domain was more susceptible to human egotism than religion.) Asimov offers a heartening example:

I think it’s amazing how many saints there have been among scientists. I’ll give you an example. In 1900, De Vries studied mutations. He found a patch of evening primrose of different types, and he studied how they inherited their characteristics. He worked out the laws of genetics. Two other guys worked out the laws of genetics at the same time, a guy called Karl Correns, who was a German, and Erich Tschermak von Seysenegg, who was an Austrian. All three worked out the laws of genetics in 1900, and having done so, all three looked through the literature, just to see what had been done before. All three discovered that in the 1860s Gregor Mendel had worked out the laws of genetics, and people hadn’t paid any attention then. All three reported their findings as confirmation of what Mendel had found. Not one of the three attempted to say that it was original with him. And you know what it meant. It meant that two of them, Correns and Tschermak von Seysenegg, lived in obscurity. De Vries is known only because he was also the first to work out the theory of mutations. But as far as discovering genetics is concerned, Mendel gets all the credit. They knew at the time that this would happen. That’s the sort of thing you just don’t find outside of science.

Moyers, in his typical perceptive fashion, then asks Asimov why, given how much the truth of science excites him, he is best-known for writing science fiction, and Asimov responds with equal insight and outlines the difference, both cultural and creative, between fiction in general and science fiction:

In serious fiction, fiction where the writer feels he’s accomplishing something besides simply amusing people — although there’s nothing wrong with simply amusing people — the writer is holding up a mirror to the human species, making it possible for you to understand people better because you’ve read the novel or story, and maybe making it possible for you to understand yourself better. That’s an important thing.

Now science fiction uses a different method. It works up an artificial society, one which doesn’t exist, or one that may possibly exist in the future, but not necessarily. And it portrays events against the background of this society in the hope that you will be able to see yourself in relation to the present society… That’s why I write science fiction — because it’s a way of writing fiction in a style that enables me to make points I can’t make otherwise.

Painting by Rowena Morrill

But perhaps the greatest benefit of science fiction, Moyers intimates and Asimov agrees, is its capacity to warm people up to changes that are inevitable but that seem inconceivable at the present time — after all, science fiction writers do have a remarkable record of getting the future right. Asimov continues:

Society is always changing, but the rate of change has been accelerating all through history for a variety of reasons. One, the change is cumulative. The very changes you make now make it easier to make further changes. Until the Industrial Revolution came along, people weren’t aware of change or a future. They assumed the future would be exactly like it had always been, just with different people… It was only with the coming of the Industrial Revolution that the rate of change became fast enough to be visible in a single lifetime. People were suddenly aware that not only were things changing, but that they would continue to change after they died. That was when science fiction came into being as opposed to fantasy and adventure tales. Because people knew that they would die before they could see the changes that would happen in the next century, they thought it would be nice to imagine what they might be.

As time goes on and the rate of change still continues to accelerate, it becomes more and more important to adjust what you do today to the fact of change in the future. It’s ridiculous to make your plans now on the assumption that things will continue as they are now. You have to assume that if something you’re doing is going to reach fruition in ten years, that in those ten years changes will take place, and perhaps what you’re doing will have no meaning then… Science fiction is important because it fights the natural notion that there’s something permanent about things the way they are right now.

Painting by William Blake from Dante's 'Divine Comedy'

Given that accepting impermanence doesn’t come easily to us, that stubborn resistance to progress and the inevitability of change is perhaps also what Asimov sees in the religious fundamentalism he condemns — dogma, after all, is based on the premise that truth is absolute and permanent, never mind that the cultural context is always changing. Though he doesn’t draw the link directly, in another part of the interview he revisits the problem with fundamentalism with words that illuminate the stark contrast between the cultural role of religion and that of science fiction:

Fundamentalists take a statement that made sense at the time it was made, and because they refuse to consider that the statement may not be an absolute, eternal truth, they continue following it under conditions where to do so is deadly.

Indeed, Asimov ends the conversation on a related note as he considers what it would take to transcend the intolerance that such fundamentalism breeds:

MOYERS: You’ve lived through much of this century. Have you ever known human beings to think with the perspective you’re calling on them to think with now?

ASIMOV: It’s perhaps not important that every human being think so. But how about the leaders and opinion-makers thinking so? Ordinary people might follow them. It would help if we didn’t have leaders who were thinking in exactly the opposite way, if we didn’t have people who were shouting hatred and suspicion of foreigners, if we didn’t have people who were shouting that it’s more important to be unfriendly than to be friendly, if we didn’t have people shouting that the people inside the country who don’t look exactly the way the rest of us look have something wrong with them. It’s almost not necessary for us to do good; it’s only necessary for us to stop doing evil, for goodness’ sake.

Bill Moyers: A World of Ideas is a remarkable tome in its entirety. Complement this particular sample-taste with Asimov on religion vs. humanism, Buckminster Fuller’s vision for the future of education, and Carl Sagan on science and spirituality.
