Brain Pickings


The Hole: A Philosophical Scandinavian Children’s Book About the Meaning of Existence

“Hello, I’ve discovered a hole in my apartment… It moves… If you could come take a look… Bring it down, you say? What? Hello?!”

Brooklyn-based independent publisher Enchanted Lion Books has given us countless gems, including my labor-of-love pet project, young Mark Twain’s Advice to Little Girls. Now comes The Hole (public library) by artist Øyvind Torseter, one of Norway’s most celebrated illustrators and the talent behind the lovely My Father’s Arms Are a Boat. It tells the story of a lovable protagonist who wakes up one day and discovers a mysterious hole in his apartment, which moves and seems to have a mind of its own. Befuddled, he looks for its origin — in vain. He packs it in a box and takes it to a lab, but still no explanation.

With Torseter’s minimalist yet visually eloquent pen-and-digital line drawings, vaguely reminiscent of Sir Quentin Blake and Tomi Ungerer yet decidedly distinctive, the story is at once simple and profound, amusing and philosophical, the sort of quiet meditation that gently, playfully tickles us into existential inquiry.

What makes the book especially magical is that a die-cut hole runs from the wonderfully gritty cardboard cover through every page and all the way out through the back cover — an especial delight for those of us who swoon over masterpieces of die-cut whimsy. On every page, the hole is masterfully incorporated into the visual narrative, adding an element of tactile delight that only an analog book can afford. The screen thus does it little justice, as these digital images feature a mere magenta-rimmed circle where the die-cut hole actually appears, but I’ve tried to capture its charm in a few photographs accompanying the page illustrations.

Complement The Hole with Enchanted Lion’s equally heartening Little Bird and Bear Despair, then revisit Torseter’s My Father’s Arms Are a Boat.

Page images courtesy of Enchanted Lion Books; photographs by Maria Popova


E. B. White’s Poignant and Playful Obituary for His Beloved Dog Daisy

“She suffered from a chronic perplexity. … She died sniffing life, and enjoying it.”

Literary history brims with famous authors who adored their pets, and E. B. White — extraordinary essayist, celebrator of New York, champion of integrity, upholder of linguistic style — was chief among them. His beloved Scotty Daisy, one of the dozen dogs he had over the course of his life, was the only witness to White’s wedding to the love of his life, and it was Daisy who “wrote” this utterly endearing letter to Katharine White on the occasion of her pregnancy. So when Daisy was killed by a swerving cab, White exorcised his grief the best way he knew how: by seeking solace in his literary flair and writing — soulfully, wittily, beautifully — about his dear canine companion. From E. B. White on Dogs (public library) — the fantastic collection of the author’s finest letters, poems, sketches, and essays celebrating his canine companions, compiled by his granddaughter and literary executor, Martha White — comes this moving obituary White penned for Daisy, originally published in The New Yorker on March 12, 1932.


Daisy (“Black Watch Debatable”) died December 22, 1931, when she was hit by a Yellow Cab in University Place. At the moment of her death she was smelling the front of a florist’s shop. It was a wet day, and the cab skidded up over the curb — just the sort of excitement that would have amused her, had she been at a safe distance. She is survived by her mother, Jeannie; a brother, Abner; her father, whom she never knew; and two sisters, whom she never liked. She was three years old.

Daisy was born at 65 West Eleventh Street in a clothes closet at two o’clock of a December morning in 1928. She came, as did her sisters and brothers, as an unqualified surprise to her mother, who had for several days previously looked with a low-grade suspicion on the box of bedding that had been set out for the delivery, and who had gone into the clothes closet merely because she had felt funny and wanted a dark, awkward place to feel funny in. Daisy was the smallest of the litter of seven, and the oddest.

Her life was full of incident but not of accomplishment. Persons who knew her only slightly regarded her as an opinionated little bitch, and said so; but she had a small circle of friends who saw through her, cost what it did. At Speyer hospital, where she used to go when she was indisposed, she was known as “Whitey,” because, the man told me, she was black. All her life she was subject to moods, and her feeling about horses laid her sanity open to question. Once she slipped her leash and chased a horse for three blocks through heavy traffic, in the carking belief that she was an effective agent against horses. Drivers of teams, seeing her only in the moments of her delirium, invariably leaned far out of their seats and gave tongue, mocking her; and thus made themselves even more ridiculous, for the moment, than Daisy.

She had a stoical nature, and spent the latter part of her life an invalid, owing to an injury to her right hind leg. Like many invalids, she developed a rather objectionable cheerfulness, as though to deny that she had cause for rancor. She also developed, without instruction or encouragement, a curious habit of holding people firmly by the ankle without actually biting them — a habit that gave her an immense personal advantage and won her many enemies. As far as I know, she never even broke the thread of a sock, so delicate was her grasp (like a retriever’s), but her point of view was questionable, and her attitude was beyond explaining to the person whose ankle was at stake. For my own amusement, I often tried to diagnose this quirkish temper, and I think I understand it: she suffered from a chronic perplexity, and it relieved her to take hold of something.

She was arrested once, by Patrolman Porco. She enjoyed practically everything in life except motoring, an exigency to which she submitted silently, without joy, and without nausea. She never grew up, and she never took pains to discover, conclusively, the things that might have diminished her curiosity and spoiled her taste. She died sniffing life, and enjoying it.

Katharine S. White with Daisy on a leash, New York City, 1931

In the introduction to the anthology, White’s granddaughter poignantly observes that her grandfather revealed so much of himself through his writing about his dogs, riffing on his remembrance of Daisy:

My grandfather also suffered from a chronic perplexity, I believe, and he spent his career trying to take hold of it, not infrequently through the literary device of his dogs.

In this particular case, it seems, Malcolm Gladwell was wrong in asserting, “Dogs are not about something else. Dogs are about dogs.” Dogs, for White, were about dogs, but also about how to be human.

E. B. White on Dogs is superb in its entirety, dancing across the entire spectrum from the soul-stirring to the heart-rending to the infinitely heartening. Complement it with John Updike’s harrowing poem on the loss of his dog, then lift your spirits with The Big New Yorker Book of Dogs and Jane Goodall’s charming children’s book about the healing power of pet love.


“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter

“A public library keeps no intentional secrets about its mechanisms; a search engine keeps many.”

“The dangerous time when mechanical voices, radios, telephones, take the place of human intimacies, and the concept of being in touch with millions brings a greater and greater poverty in intimacy and human vision,” Anaïs Nin wrote in her diary in 1946, decades before the internet as we know it even existed. Her fear has since been echoed again and again with every incremental advance in technology, often with simplistic arguments about the attrition of attention in the age of digital distraction. But in Smarter Than You Think: How Technology Is Changing Our Minds for the Better (public library), Clive Thompson — one of the finest technology writers I know, with regular bylines for Wired and The New York Times — makes a powerful and rigorously thought-out counterpoint. He argues that our technological tools — from search engines to status updates to sophisticated artificial intelligence that defeats the world’s best chess players — are now inextricably linked to our minds, working in tandem with them and profoundly changing the way we remember, learn, and “act upon that knowledge emotionally, intellectually, and politically,” and that this is a promising rather than perilous thing.

He writes in the introduction:

These tools can make even the amateurs among us radically smarter than we’d be on our own, assuming (and this is a big assumption) we understand how they work. At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.

Page from ‘Charley Harper: An Illustrated Life’

But Thompson is nothing if not a dimensional thinker with extraordinary sensitivity to the complexities of cultural phenomena. Rather than revisiting painfully familiar and trite-by-overuse notions like distraction and information overload, he examines the deeper dynamics of how these new tools are affecting the way we make sense of the world and of ourselves. Several decades after Vannevar Bush’s now-legendary meditation on how technology will impact our thinking, Thompson reaches even further into the fringes of our cultural sensibility — past the cheap techno-dystopia, past the pollyannaish techno-utopia, and into that intricate and ever-evolving intersection of technology and psychology.

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it. Thompson contextualizes the phenomenon, which isn’t new, then asks the obvious, important question about our culturally unprecedented solutions to it:

Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.


What’s the line between our own, in-brain knowledge and the sea of information around us? Does it make us smarter when we can dip in so instantly? Or dumber with every search?

Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management

That concern, of course, is far from unique to our age — from the invention of writing to Alvin Toffler’s Future Shock, new technology has always been a source of paralyzing resistance and apprehension:

Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth and how he invented writing, offering it as a gift to the king of Egypt, Thamus, who met the present with defiant indignation:

This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

If you are going to read widely but often read books only once; if you are going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.

Giuseppe Arcimboldo, The Librarian, ca. 1566

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.

The solution came in the late 19th century by way of Melvil Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

But Thompson argues that despite history’s predictable patterns of resistance followed by adoption and adaptation, there’s something immutably different about our own era:

The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.


Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.

This ability to “google” one another’s memory stores, Thompson argues, is the defining feature of our evolving relationship with information — and it’s profoundly shaping our experience of knowledge:

Transactive memory helps explain how we’re evolving in a world of on-tap information.

He illustrates this by turning to the work of Betsy Sparrow, a graduate student of Wegner’s, who conducted a series of experiments demonstrating that when we know a digital tool will store information for us, we’re far less likely to commit it to memory. On the surface, this may appear like the evident and worrisome shrinkage of our mental capacity. But there’s a subtler yet enormously important layer that such techno-dystopian simplifications miss: This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone. It wasn’t until recently that computer memory became fast enough to be consulted on the fly, but once it did — with search engines boasting that they return results in tenths of a second — our transactive habits adapted.

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.

But algorithms, as the filter bubble has taught us, come with their own biases — most of which remain intentionally obscured from view — and this requires a whole new kind of literacy:

The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools can actually hamper the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré famously termed “sudden illuminations.” Without a mental catalog of materials with which to mull and let incubate in our fringe consciousness, our capacity for such illuminations is greatly deflated. Thompson writes:

These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.

But while this is a valid concern, Thompson doubts that we’re outsourcing too many bits of knowledge and thus curtailing our creativity. He argues, instead, that we’re mostly employing this newly evolved skill to help us sift the meaningful from the meaningless, but we remain just as capable of absorbing that which truly stimulates us:

Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.

He illustrates this deep-seated psychological tendency with a famous 1979 experiment:

Scientists gave a detailed description of a fictitious half inning of baseball to two groups: one composed of avid baseball fans, the other of people who didn’t know the game well. When asked later to recall what they’d read, the baseball fans had “significantly greater” recall than the nonfans. Because the former cared deeply about baseball, they fit the details into their schema of how the game works. The nonfans had no such mental model, so the details didn’t stick. A similar study found that map experts retained far more details from a contour map than nonexperts. The more you know about a field, the more easily you can absorb facts about it.

The question, then, becomes: How do we get people interested in things beyond their existing interests? (Curiously, this has been the Brain Pickings mission since the very beginning in 2005.) Thompson considers:

In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.

In the epilogue, Thompson offers his ultimate take on that solution, at once romantic and beautifully grounded in critical thinking:

Understanding how to use new tools for thought requires not just a critical eye, but curiosity and experimentation. … A tool’s most transformative uses generally take us by surprise.


How should you respond when you get powerful new tools for finding answers?

Think of harder questions.

Smarter Than You Think is excellent and necessary in its entirety, covering everything from the promise of artificial intelligence to how technology is changing our ambient awareness.


