“Euclid alone has looked on Beauty bare,” Edna St. Vincent Millay wrote in her lovely ode to how the father of geometry transformed the way we see and comprehend the world. But although the ancient Alexandrian mathematician provided humanity’s only framework for understanding space for centuries to come, shaping both science and art, his beautiful system was marred by one ineluctable flaw: Euclid’s famous fifth postulate, known as the parallel postulate — which states that through any point not on a given line, only one line can be drawn parallel to the given line, and that the two lines, however infinitely they may be extended into space, will remain parallel forever — is not a logical consequence of his other axioms.
This troubled Euclid. He spent the remainder of his life trying to prove the fifth postulate mathematically, and failing. Generations of mathematicians did the same for the next two thousand years. It even stumped Gauss, considered by many the greatest mathematician of all time. It took a Hungarian teenager to solve the ancient quandary.
In 1820, more than two millennia after Euclid’s death, the seventeen-year-old János Bolyai (December 15, 1802–January 27, 1860) told his father — the mathematician Wolfgang Bolyai, who had introduced his son to the enchantment of mathematics four years earlier — about his obsession with the parallel postulate. His father, who had wrestled with the problem himself, wrote back with a dire warning:
Don’t waste an hour on that problem. Instead of reward, it will poison your whole life. The world’s greatest geometers have pondered the problem for hundreds of years and not proved the parallel postulate without a new axiom. I believe that I myself have investigated all the possible ideas.
But the young man persisted. On November 3, 1823, the twenty-year-old mathematical maverick wrote to his father while serving as an officer in the army of the Austrian Empire:
I have resolved to publish a work on the theory of parallels as soon as I have arranged the material and my circumstances allow it. I have almost been overwhelmed by them, and it would be the cause of constant regret if they were lost. When you see them, my dear father, you too will understand. At present I can say nothing except this: I have created a new universe from nothing. All that I have sent to you till now is but a house of cards in comparison with a tower. I’m fully persuaded that this will bring me honor, as if I had already completed the discovery.
The discovery in which he exults is one of humanity’s most groundbreaking insights into the nature of reality: Bolyai had laid the foundation of non-Euclidean geometry — a wholly novel way of apprehending space, which describes everything from the shape of a calla lily blossom to the growth pattern of a coral reef, and which would become a centerpiece of relativity; without it, Einstein couldn’t have revolutionized our understanding of the universe with his notion of spacetime, the curvature of which is a supreme embodiment of non-Euclidean geometry.
Impressed by his son’s tenacity and swayed by the significance of the breakthrough, Wolfgang pivoted 180 degrees and now urged his son to publish his findings as soon as possible in order to ensure priority of discovery:
If you have really succeeded in the question, it is right that no time be lost in making it public, for two reasons: first, because ideas pass easily from one to another, who can anticipate its publication, and secondly, [because] there is some truth in the fact that many things have an epoch, in which they are discovered at the same time in several places, just as the violets appear on every side in spring.
These were words of remarkable prescience. When János’s paper, completed in 1829 and published as an appendix to a book of his father’s in 1832, reached Gauss — an old friend of Wolfgang Bolyai’s — the great German mathematician was astonished. He responded that he couldn’t praise János’s work, for it would mean praising himself — the young mathematician’s breakthrough, from the central questions he had tackled to the path he had pursued in answering them to the results he had obtained, coincided “almost entirely” with what had been occupying Gauss’s own mind for more than thirty years, though he had resolved never to publish these meditations in his lifetime. With the selfless graciousness of a true scientist, who sets aside all personal ego and celebrates any triumph of knowledge, Gauss wrote to János’s father:
So far as my own work is concerned, of which up till now I have put little on paper, my intention was not to let it be published during my lifetime… On the other hand it was my idea to write down all this later so that at least it should not perish with me. It is therefore a pleasant surprise for me that I am spared this trouble, and I am very glad that it is just the son of my old friend who anticipates me in such a remarkable manner.
But the young Bolyai’s elation at having “created a new universe from nothing” was swiftly grounded when he realized that a third mathematician — Nikolai Lobachevsky in Russia — had preceded both him and Gauss in publishing a paper outlining the selfsame ideas. In a reminder of how information traveled in the pre-Internet era, it took Bolyai sixteen years to learn of Lobachevsky’s book. Once he read it, he reconciled himself to the loss of priority by rooting his ego in the animating principle of science, which he recorded in an uncommonly poetic and profound meditation in his notebook:
The nature of real truth of course cannot be but one and the same [in Hungary] as in Kamchatka and on the Moon, or, to be brief, anywhere in the world; and what one finite, sensible being discovers, can also not impossibly be discovered by another.
The discovery at which these three finite, sensible beings had arrived simultaneously and independently forever changed not only mathematics but our fundamental grasp of nature. For a fine complement, see mathematician Lillian Lieber’s 1961 masterpiece drawing on the non-Euclidean revolution to illustrate the building blocks of moral values like democracy and social justice, then revisit physicist Alan Lightman on the shared psychology of creative breakthrough in art and science.
An imaginative extension of Euclid’s parallel postulate into life, liberty, and the pursuit of happiness.
By Maria Popova
“The joy of existence must be asserted in each one, at every instant,” Simone de Beauvoir wrote in her paradigm-shifting treatise on how freedom demands that happiness become our moral obligation. A decade and a half later, the mathematician and writer Lillian R. Lieber (July 26, 1886–July 11, 1986) examined the subject from a refreshingly disparate yet kindred angle.
Einstein was an ardent fan of Lieber’s unusual, conceptual books — books discussing serious mathematics in a playful way that bridges science and philosophy, composed in a thoroughly singular style. Like Einstein himself, Lieber thrives at the intersection of science and humanism. Like Edwin Abbott and his classic Flatland, she draws on mathematics to invite a critical shift in perspective in the assumptions that keep our lives small and our world inequitable. Like Dr. Seuss, she wrests from simple verses and excitable punctuation deep, calm, serious wisdom about the most abiding questions of existence. She emphasized that her deliberate line breaks and emphatic styling were not free verse but a practicality aimed at facilitating rapid reading and easier comprehension of complex ideas. But Lieber’s stubborn insistence that her unexampled work is not poetry should be taken with the same grain of salt as Hannah Arendt’s stubborn insistence that her visionary, immensely influential political philosophy is not philosophy.
In her hundred years, Lieber composed seventeen such peculiar and profound books, illustrated with lovely ink drawings by her husband, the artist Hugh Lieber. Among them was the 1961 out-of-print gem Human Values and Science, Art and Mathematics (public library) — an inquiry into the limits and limitless possibilities of the human mind, beginning with the history of the greatest revolution in geometry and ending with the fundamental ideas and ideals of a functioning, fertile democracy.
Lieber paints the conceptual backdrop for the book:
This book is really about
Life, Liberty, and the Pursuit of Happiness,
using ideas from mathematics
to make these concepts less vague.
We shall see first what is meant by
“thinking” in mathematics,
and the light that it sheds on both the
CAPABILITIES and the LIMITATIONS
of the human mind.
And we shall then see what bearing this can have
on “thinking” in general —
even, for example, about such matters as Life, Liberty, and the Pursuit of Happiness!
For we must admit that our “thinking”
about such things,
without this aid,
often leads to much confusion —
mistaking LICENSE for LIBERTY,
often resulting in juvenile delinquency;
mistaking MONEY for HAPPINESS,
often resulting in adult delinquency;
mistaking for LIFE itself
just a sordid struggle for mere existence!
Embedded in the history of mathematics, Lieber argues, is an allegory of what we are capable of as a species and how we can use those capabilities to rise to our highest possible selves. In the first chapter, titled “Freedom and Responsibility,” she chronicles the revolution in our understanding of nature and reality ignited by the advent of non-Euclidean geometry — the momentous event Lieber calls “The Great Discovery of 1826.” She writes:
One of the amazing things
in the history of mathematics
happened at the beginning of the 19th century.
As a result of it,
the floodgates of discovery
were open wide,
and the flow of creative contributions
is still on the increase!
This amazing phenomenon
was due to a mere
CHANGE OF ATTITUDE!
Perhaps I should not say “mere,”
since the effect was so immense —
which only goes to show that
a CHANGE OF ATTITUDE
can be extremely significant
and we might do well to examine our ATTITUDES
toward many things, and people —
this might be the most rewarding,
as it proved to be in mathematics.
In order to fully comprehend a revolutionary change in attitude, Lieber points out, we need to first understand the old attitude — the former worldview — supplanted by the revolution. To appreciate “The Great Discovery of 1826,” we must go back to Euclid:
[Euclid] first put together
the various known facts of geometry
into a SYSTEM,
instead of leaving them as
isolated bits of information —
as in a quiz program!
[His system] has served for many centuries
as a MODEL for clear thinking,
and has been and still is
of the greatest value to the human race.
Lieber unpacks what it means to build such a “model for clear thinking” — networked logic that makes it easier to learn and faster to make new discoveries. With elegant simplicity, she examines the essential building blocks of such a system and outlines the basics of mathematical logic:
In constructing a system,
one must begin with
a few simple statements
[from which,]
by means of logic,
one derives the “consequences.”
We can thus
“figure out the consequences”
before they hit us.
And this we certainly need more of!
Thus Euclid stated such
[basic statements]
(called “postulates” in mathematics)
[as:]
“It shall be possible to draw
a straight line joining
any two points,”
and others like it.
[From these]
he derived many complicated theorems
like the well-known
[Pythagorean theorem]
and many, many others.
And, as we all know,
to “prove” any theorem
one must show how
to “derive” it from the postulates —
every claim made in a “proof”
must be supported by reference to
the postulates or
to theorems which have previously
already been so “proved”
from the postulates.
Of course Theorem #1
must follow from
the postulates ONLY.
Now what about
the postulates themselves?
How can THEY be “proved”?
[They] CANNOT be PROVED at all —
since there is nothing preceding them
from which to derive them!
This may seem disappointing to those who
thought that in
[mathematics]
EVERYTHING is proved!
But you can see that
this is IMPOSSIBLE,
even in mathematics,
since EVERY SYSTEM must necessarily
START with POSTULATES,
and these are NOT provable,
since there is nothing preceding them
from which to derive them.
This circularity of certainty permeates all of science. In fact, strangely enough, the more mathematical a science is, the more we consider it a “hard science,” implying unshakable solidity of logic. And yet the more mathematical a mode of thinking, the fuller it is of this circularity reliant upon assumption and abstraction. Euclid, of course, was well aware of this. He reconciled the internal contradiction of the system by considering his unproven postulates to be “self-evident truths.” His system was predicated on using logic to derive from these postulates certain consequences, or theorems. And yet among them was one particular postulate — the famous parallel postulate — which troubled Euclid.
The parallel postulate states that if you were to draw a line between two points, A and B, and then take a third point, C, not on that line, you can only draw one line through C that will be parallel to the line between A and B; and that however much you may extend the two parallel lines in space, they will never cross.
Euclid, however, wasn’t convinced this was a self-evident truth — he thought it ought to be mathematically proven, but he failed to prove it. Generations of mathematicians failed to prove it over the following two millennia. And then, in the early nineteenth century, three mathematicians — Nikolai Lobachevsky in Russia, János Bolyai in Hungary, and Carl Friedrich Gauss in Germany — independently arrived at the same insight: The challenge of the parallel postulate lay not in the proof but, as Lieber puts it, in “the very ATTITUDE toward what postulates are” — rather than considering them to be “self-evident truths” about nature, they should be considered human-made assumptions about how nature works, which may or may not reflect the reality of how nature actually works.
This may sound like a confounding distinction, but it is a profound one — it allowed mathematicians to see the postulates not as sacred and inevitable but as fungible, pliable, tinkerable with. Leaving the rest of the Euclidean system intact, these imaginative nineteenth-century mathematicians changed the parallel postulate to posit that not one but two lines can be drawn through point C that would be parallel to the line between A and B, and the entire system would still be self-consistent. This resulted in a number of revolutionary theorems, including the notion that the sum of angles in a triangle could be different from 180 degrees — greater if the triangle is drawn on the surface of a sphere, for instance, or lesser if drawn on a concave surface.
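The curved-surface case can even be checked numerically. The sketch below — the function name and the choice of triangle are my own illustration, not drawn from the text — computes the angle sum of a triangle on a unit sphere via the spherical law of cosines; for a triangle with one vertex at the north pole and two on the equator a quarter-turn apart, all three angles are right angles, so the sum is roughly 270 degrees rather than 180.

```python
import math

def spherical_angle_sum(A, B, C):
    """Sum of interior angles, in degrees, of a triangle on the unit
    sphere whose vertices A, B, C are given as unit 3-vectors."""
    def arc(u, v):
        # great-circle side length = angle between the two unit vectors
        dot = sum(x * y for x, y in zip(u, v))
        return math.acos(max(-1.0, min(1.0, dot)))
    a, b, c = arc(B, C), arc(A, C), arc(A, B)  # side opposite each vertex
    def angle(opp, s1, s2):
        # spherical law of cosines, solved for the vertex angle
        return math.acos((math.cos(opp) - math.cos(s1) * math.cos(s2))
                         / (math.sin(s1) * math.sin(s2)))
    return math.degrees(angle(a, b, c) + angle(b, a, c) + angle(c, a, b))

# North pole plus two equator points a quarter-turn apart:
# three right angles, so the sum is ~270, not 180.
print(spherical_angle_sum((0, 0, 1), (1, 0, 0), (0, 1, 0)))
```

On a flat sheet of paper no such triangle can be drawn, which is exactly the insight that could not be fathomed, much less diagrammed, in Euclid’s flat space.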
It was a radical, thoroughly counterintuitive insight that simply cannot be fathomed, much less diagrammed, in flat space. And yet it wasn’t a mere thought experiment, an amusing and suspicious mental diversion. It burst open the floodgates of creativity in mathematics and physics by giving rise to non-Euclidean geometry — an understanding of curved three-dimensional space which we now know is every bit as real as the geometry of flat surfaces, abounding in nature in everything from the blossom of a calla lily to the growth pattern of a coral reef to the fabric of spacetime of which everything that ever was and ever will be is woven. In fact, Einstein himself would not have been able to arrive at his relativity theory, nor bridge space and time into the revolutionary notion of spacetime, without non-Euclidean geometry.
Here, Lieber makes the conceptual leap that marks her books as singular achievements in thought — the leap from mathematics and the understanding of nature to psychology, sociology, and the understanding of human nature. Reflecting on the larger revolution in thought that non-Euclidean geometry embodied in its radical refusal to accept any truth as self-evident, she questions the notion of “eternal verities” — a term popularized by the eighteenth-century French philosopher Claude Buffier to signify the aspects of human consciousness that allegedly furnish universal, indubitable moral and humane values. Considering how the relationship between creative limitation, freedom, humility, and responsibility shapes our values, Lieber writes:
Even though mathematics is
only a MAN-MADE enterprise,
man has done very well for himself
in this domain,
where he has
FREEDOM WITH RESPONSIBILITY —
though he has learned the
HUMILITY that goes with
knowing that he does
NOT have access to
“Self-evident truths” and
that he is NOT God —
yet he knows also that
he is not a mouse either,
but a man,
with all the
HUMAN DIGNITY and the
[INGENUITY]
needed to develop
the wonderful domain of
[MATHEMATICS.]
The very dignity and ingenuity driving mathematics, Lieber points out in another lovely conceptual bridging of ideas, is also the motive force behind the central aspiration of human life, the one which Albert Camus saw as our moral obligation — the pursuit of happiness.
In the final chapter, titled “Life, Liberty and the Pursuit of Happiness,” Lieber recounts the principle of metamathematics demanding that a set of postulates within any system not contradict one another in order for the system to be self-consistent, and considers mathematics as a sandbox for the subterranean morality without which human life is unlivable:
[This] means of course that
[a self-contradictory postulate set]
CANNOT SERVE as an instrument of thought!
Now is not this statement
usually considered to be
a MORAL principle?
[And yet]
without it we cannot have
ANY satisfactory mathematical system,
nor ANY satisfactory system of thought —
indeed we cannot even PLAY a GAME properly
with CONTRADICTORY rules!
In a similar way,
I wish to make the point that
there are other important MORAL ideas
BEHIND THE SCENES,
without which there cannot be
ANY MATHEMATICS or SCIENCE.
And therefore, in this sense,
Science is NOT AT ALL AMORAL —
any more than one could have
a fruitful and non-trivial postulate set
which is not subject to
the METAmathematical demand for
[CONSISTENCY!]
One of these “behind-the-scenes” moral ideas, Lieber argues, is the notion of taking Life itself as a basic postulate:
[For without it]
there can be
no living thing —
no human race —
also of course
no music, no art…
I am not suggesting that we consider here WHETHER life is worth living,
whether it would make more “sense”
to commit suicide,
whether it is all just
“Sound and fury, signifying nothing.”
I am proposing that
LIFE be taken as a POSTULATE,
and therefore not subject to proof,
just like any other postulate.
But I propose to MODIFY this
and take more specifically as
[Postulate I that]
THE PRESERVATION OF
LIFE FOR THE HUMAN RACE
is a goal of human effort.
This does not mean that
we are to go about
wantonly killing animals,
but to do this only when
it is necessary to support
HUMAN life —
for prevention of disease,
Indeed a horse or dog or other animals,
through their friendliness and sincerity,
might actually HELP to sustain
Man’s spirit and faith and even his life.
And I interpret this postulate
also to mean that
or “ganging up” on one little fox —
a whole gang of men and women
(and corrupting even horses and dogs
to help!) —
is really a cowardly act,
so unsportsmanlike that it is amazing
how this activity could ever be called
[a “sport”!]
All this is by way of interpreting
the meaning to be given to Postulate I:
ACCEPTANCE of LIFE for the HUMAN RACE.
Surely everyone will accept the idea that
[this postulate]
is definitely present,
behind the scenes,
in science or mathematics.
But this is not all.
For, I take this postulate to mean also
that we are not to limit it to
only a PART of the human race,
as Hitler did,
because this inevitably leads to WAR,
and in this day of
[atomic weapons]
and CBR (chemical, biological, radiological)
[warfare,]
this would certainly contradict
[Postulate I],
would it not?
May I say at the very outset that
the “SYSTEM” suggested here
makes no pretense of finality (!),
remembering how difficult it is,
EVEN in MATHEMATICS,
to have a postulate set which is
[CONSISTENT].
Nevertheless, one must go on,
one must TRY,
one must do one’s BEST,
as in mathematics and sciences.
And so, let us continue, in all humility,
to try to make
what can only at best be regarded as […]
in the hope that the basic idea —
that there is a MORALITY behind the scenes
in Mathematics and Science —
may prove to be helpful
and may be further
improved and strengthened
as time goes on.
Drawing on the consequence of the Second Law of Thermodynamics, which implies for living things an inevitable degradation toward destruction, Lieber offers additional postulates for the moral system that undergirds a thriving democracy:
Each INDIVIDUAL HUMAN BEING
must fight this “degeneration,”
must cling to LIFE as long as possible,
must grow and create —
physically and/or mentally.
And for this we need
[LIBERTY]:
We must all have the LIBERTY to
so grow and create,
without of course interfering
with each other’s growth.
This Freedom or LIBERTY
must be accompanied by
[RESPONSIBILITY]
if it is not to lead to
[domination by]
individuals or groups
which would of course
CONTRADICT the other postulates.
All this is of course very DIFFICULT to do,
accepting LIFE without whimpering,
growing without interfering with
the growth of others,
it involves what Goethe called […]
But how can this be done?
It seems clear that we must now add
The PURSUIT of HAPPINESS
is a goal of human effort.
For without some happiness,
or at least the hope of some happiness
(the “pursuit” of happiness)
it would be impossible
to accept “cheerfully”
the program outlined above.
And such acceptance leads to
a calm, sane performance of our work,
in the spirit in which a mathematician
accepts the postulates of a system
and accepts his creative work
based on these —
accepting even the Great Difficulties
which he encounters
and is determined to conquer.
And I finally believe that
the results of such a formulation
will re-discover the conclusions
reached by the
great religious leaders and the […]
Lieber distills from this conception of the system “some invariants and some differences,” drawing from science and mathematics a working model for democracy:
(1) Invariants: LIFE — which demands
(a) Sufficient and proper food;
(b) Good Health;
(c) Education — both mental and physical;
(d) NO VIOLENCE!
(a real scientist does NOT go
into his laboratory with an AXE
with which to DESTROY his apparatus,
but rather with a well-developed BRAIN,
and lots of PATIENCE
with which to CREATE new things
which will be BENEFICIAL to the
[human race!])
This of course implies PEACE,
and better still
(e) FRIENDSHIP between K and K’!
(f) Humility — remembering that
[man] will NEVER know THE “truth”
(g) And all this of course
requires a great deal of […]
(2) Differences which will
NOT PREVENT both K and K’
from studying the universe
WITH EQUAL RIGHT and EQUAL
[OPPORTUNITY],
which is certainly
the clearest concept of
what DEMOCRACY is:
(a) Different coordinate systems
(b) Differences in color of skin!
(c) Different languages — or
other means of communication.
And please do not consider this program
as an unattainable “Utopia,”
for it really WORKS in
Science and Mathematics,
as we have seen,
and can also work in
[human affairs]
if we would only
put our BEST EFFORTS into it,
[instead of]
fighting WARS —
(HOT or COLD)
or even PREPARING for wars —
HATING other people…
“Virtually as soon as humans developed the ability to speak and write, somebody somewhere felt the desire to say something to somebody else that could not be understood by others.”
During WWII, when Richard Feynman was recruited as one of the country’s most promising physicists to work on the Manhattan Project in a secret laboratory in Los Alamos, his young wife Arline was writing him love letters in code from her deathbed. While Arline was merely having fun with the challenge of bypassing the censors at the laboratory’s Intelligence Office, all across the country thousands of women were working as cryptographers for the government — women who would come to constitute more than half of America’s codebreaking force during the war. While Alan Turing was decrypting Nazi communication across the Atlantic, some eleven thousand women were breaking enemy code in America.
Their story is the subject of Liza Mundy’s Code Girls: The Untold Story of the American Women Code Breakers of World War II (public library). A splendid writer and an impressive scholar, Mundy tracked down and interviewed more than twenty surviving “code girls,” trawled hundreds of boxes containing archival documents, and successfully petitioned for the declassification of more than a dozen oral histories. Out of these puzzle pieces she constructs a masterly portrait of the brilliant, unheralded women — women with names like Blanche and Edith and Dot — who were recruited into lives they never could have imagined, lives believed to have saved incalculable other lives by hastening the end of the war.
Driven partly by patriotism, but mostly by pure love of that singular intersection of mathematics and language where cryptography lives, these “high grade” young women, as the military recruiters called them, came from all over the country and had only one essential thing in common — their answers to two seemingly strange questions. Mundy traces the inception of this female codebreaking force:
A handful of letters materialized in college mailboxes as early as November 1941. Ann White, a senior at Wellesley College in Massachusetts, received hers on a fall afternoon not long after leaving an exiled poet’s lecture on Spanish romanticism.
The letter was waiting when she returned to her dormitory for lunch. Opening it, she was astonished to see that it had been sent by Helen Dodson, a professor in Wellesley’s Astronomy Department. Miss Dodson was inviting her to a private interview in the observatory. Ann, a German major, had the sinking feeling she might be required to take an astronomy course in order to graduate. But a few days later, when Ann made her way along Wellesley’s Meadow Path and entered the observatory, a low domed building secluded on a hill far from the center of campus, she found that Helen Dodson had only two questions to ask her.
Did Ann White like crossword puzzles, and was she engaged to be married?
Elizabeth Colby, a Wellesley math major, received the same unexpected summons. So did Nan Westcott, a botany major; Edith Uhe (psychology); Gloria Bosetti (Italian); Blanche DePuy (Spanish); Bea Norton (history); and Ann White’s good friend Louise Wilde, an English major. In all, more than twenty Wellesley seniors received a secret invitation and gave the same replies. Yes, they liked crossword puzzles, and no, they were not on the brink of marriage.
Letters and clandestine questioning sessions spread across other campuses, particularly those known for strong scientific curricula — from Vassar, where astronomer Maria Mitchell paved the way for American women in science, to Mount Holyoke, the “castle of science” where Emily Dickinson composed her botanical herbarium. The young women who answered the odd questions correctly were summoned to secret meetings, where they learned they were being invited to work for the U.S. Navy as “cryptanalysts.” They were to undergo training in codebreaking and, if they completed it successfully, would take jobs with the Navy after graduation, as civilians. They could tell no one about the appointment — not their parents, not their girlfriends, not their fiancés.
First, they had to solve a series of problem sets, which would be graded in Washington to determine if they made the cut to the next stage. Mundy writes:
And so the young women did their strange new homework. They learned which letters of the English language occur with the greatest frequency; which letters often travel together in pairs, like s and t; which travel in triplets, like est and ing and ive, or in packs of four, like tion. They studied terms like “route transposition” and “cipher alphabets” and “polyalphabetic substitution cipher.” They mastered the Vigenère square, a method of disguising letters using a tabular method dating back to the Renaissance. They learned about things called the Playfair and Wheatstone ciphers. They pulled strips of paper through holes cut in cardboard. They strung quilts across their rooms so that roommates who had not been invited to take the secret course could not see what they were up to. They hid homework under desk blotters. They did not use the term “code breaking” outside the confines of the weekly meetings, not even to friends taking the same course.
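The “polyalphabetic substitution cipher” of that homework — the principle behind the Vigenère square the recruits mastered — can be sketched in a few lines of modern code. This is my own illustration, not anything from the book: each letter of the message is shifted by the alphabet position of the corresponding letter of a repeating key, so the same plaintext letter encrypts differently at different positions.

```python
from itertools import cycle

def vigenere(text, key, decrypt=False):
    """Polyalphabetic substitution: shift each A-Z letter of `text`
    by the alphabet position of the matching (repeating) key letter.
    Assumes letters only, no spaces or punctuation."""
    sign = -1 if decrypt else 1
    return ''.join(
        chr((ord(c) - ord('A') + sign * (ord(k) - ord('A'))) % 26 + ord('A'))
        for c, k in zip(text.upper(), cycle(key.upper()))
    )

secret = vigenere("MEETATDAWN", "LEMON")           # -> "XIQHNEHMKA"
assert vigenere(secret, "LEMON", decrypt=True) == "MEETATDAWN"
```

Because a single plaintext letter no longer maps to a single ciphertext letter, raw frequency counts alone cannot crack it — which is why the students also studied letter pairs, triplets, and repeating patterns.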
These young women’s acumen, and their willingness to accept the cryptic invitations, would become America’s secret weapon in assembling a formidable wartime codebreaking operation in record time. They would also furnish a different model of genius — one more akin to the relational genius that makes a forest successful. Mundy writes:
Code breaking is far from a solitary endeavor, and in many ways it’s the opposite of genius. Or, rather: Genius itself is often a collective phenomenon. Success in code breaking depends on flashes of inspiration, yes, but it also depends on the careful maintaining of files, so that a coded message that has just arrived can be compared to a similar message that came in six months ago. Code breaking during World War II was a gigantic team effort. The war’s cryptanalytic achievements were what Frank Raven, a renowned naval code breaker from Yale who supervised a team of women, called “crew jobs.” These units were like giant brains; the people working in them were a living, breathing, shared memory. Codes are broken not by solitary individuals but by groups of people trading pieces of things they have learned and noticed and collected, little glittering bits of numbers and other useful items they have stored up in their heads like magpies, things they remember while looking over one another’s shoulders, pointing out patterns that turn out to be the key that unlocks the code.
But although codebreaking has entered the popular imagination through the portal of war, often depicted with a kind of intellectual glamor that aligns it with spies and superheroes, it spans a far vaster cultural spectrum of uses as a tool of communication and un-communication. Mundy examines its history and essential elements:
Codes have been around for as long as civilization, maybe longer. Virtually as soon as humans developed the ability to speak and write, somebody somewhere felt the desire to say something to somebody else that could not be understood by others. The point of a coded message is to engage in intimate, often urgent communication with another person and to exclude others from reading or listening in. It is a system designed to enable communication and to prevent it.
Both aspects are important. A good code must be simple enough to be readily used by those privy to the system but tough enough that it can’t be easily cracked by those who are not. Julius Caesar developed a cipher in which each letter was replaced by a letter three spaces ahead in the alphabet (A would be changed to D, B to E, and so forth), which met the ease-of-use requirement but did not satisfy the “toughness” standard. Mary, Queen of Scots, used coded missives to communicate with the faction that supported her claim to the English throne, which — unfortunately for her — were read by her cousin Elizabeth and led to her beheading. In medieval Europe, with its shifting alliances and palace intrigues, coded letters were an accepted convention, and so were quiet attempts to slice open diplomatic pouches and read them. Monks used codes, as did Charlemagne, the Inquisitor of Malta, the Vatican (enthusiastically and often), Islamic scholars, clandestine lovers. So did Egyptian rulers and Arab philosophers. The European Renaissance — with its flowering of printing and literature and a coming-together of mathematical and linguistic learning — led to a number of new cryptographic systems. Armchair philosophers amused themselves pursuing the “perfect cipher,” fooling around with clever tables and boxes that provided ways to replace or redistribute the letters in a message, which could be sent as gibberish and reassembled at the other end. Some of these clever tables were not broken for centuries; trying to solve them became a Holmes-and-Moriarty contest among thinkers around the globe.
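Caesar’s scheme from the passage above is easy to state in code — and the same few lines show why it fails the “toughness” standard: with only twenty-five possible shifts, an interceptor can simply try them all. (The snippet is my own illustration; only the shift of three is Caesar’s.)

```python
def caesar(text, shift):
    """Replace each A-Z letter with the letter `shift` places ahead,
    wrapping around the alphabet; other characters pass through."""
    return ''.join(
        chr((ord(c) - ord('A') + shift) % 26 + ord('A')) if c.isalpha() else c
        for c in text.upper()
    )

secret = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"

# Breaking it is a brute-force stroll through 25 candidate shifts,
# one of which is guaranteed to be readable:
candidates = [caesar(secret, -s) for s in range(1, 26)]
assert "ATTACK AT DAWN" in candidates
```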
Even in the context of war, even in the subset of women cryptographers, the history of codebreaking predates WWII. It stretches back to the world’s first Great War, to a strange haven under the auspices of a Mad Hatter character by the name of George Fabyan — an eccentric, habitually disheveled millionaire with little formal education, who built himself an elaborate private Wonderland complete with a working lighthouse, a Japanese garden, a Roman-style bathing pool fed by fresh spring water, a Dutch mill transported piece by piece from Holland, and an enormous rope replica of a spider’s web for recreation. On these strange grounds, Fabyan constructed Riverbank Laboratories — a pseudo-scientific shrine to his determination to “wrest the secrets of nature” by way of acoustics, agriculture, and, crucially, literary manuscripts.
Fabyan subscribed to a conspiracy theory that the works of William Shakespeare were actually authored by Sir Francis Bacon, who allegedly encoded evidence of his authorship into the texts. The millionaire acquired rare manuscripts, including a 1623 First Folio of Shakespeare’s plays, then hired a team of researchers — he could afford the best minds in the country — to prove the theory by analyzing the text in search of coded messages. Under these improbable circumstances, he incubated the talent that would become the U.S. military’s first concerted cryptanalytic force.
Among Fabyan’s hires was Elizebeth Smith — an intelligent and driven young midwesterner, one of nine children, who had put herself through college after her father denied her the opportunity. In 1916, Fabyan recruited Smith to be the public face of his Baconian codebreaking operation. Soon after she moved to Riverbank Laboratories, Smith began to suspect that the Shakespearean conspiracy theory was just that, sustained by a cultish team of cranks who fed on confirmation bias as they searched for “evidence.” Among Fabyan’s staff was another doubter — William Friedman, a polymathic geneticist from Cornell, living on the second floor of the windmill. Elizebeth and William bonded over their dissent on long bike rides and swims in the Roman pool. Within a year, they were married — a marriage of equals in every way. But although they saw clearly the ludicrousness of Fabyan’s theory, they were too fascinated by the pure art-science of codes and ciphers to leave. Elizebeth moved into the windmill. The couple would soon become the country’s most sought-after codebreaking team as the government outsourced its cryptanalytic efforts to Riverbank. But although the Friedmans worked in tandem, when the Army set out to hire them, it offered William $3,000 and Elizebeth $1,520.
When the team began working for the government in Washington — both still in their twenties, heading a team of thirty — they were decoding every kind of intercepted foreign communication suspected to contain military information. Some did. Most did not — one turned out to be a Czechoslovakian love letter.
Elizebeth Friedman — who went on to have a formidable career in law enforcement, training men for a new codebreaking unit for the Coast Guard — is one of the many women whose stories, all different and all fascinating, Liza Mundy tells in Code Girls, a thoroughly wonderful read in its entirety. Complement it with the story of the unheralded women astronomers who revolutionized our understanding of the universe decades before they could vote.
“What we cannot know creates the space for myth, for stories, for imagination, as much as for science… Stories are crucial in providing the material for what one day might be known. Without stories, we wouldn’t have any science at all.”
By Maria Popova
In a recent MoMA talk about the lacuna between truth and meaning, I proposed that, just as there is a limit to the speed of light arising from the fundamental laws of physics that govern the universe, there might be a fundamental cognitive limit that keeps human consciousness from ever fully comprehending itself. After all, the moment a system becomes self-referential, it becomes susceptible to limitation and paradox — the logical equivalent of Audre Lorde’s memorable metaphor that “the master’s tools will never dismantle the master’s house.”
Pioneering astronomer Maria Mitchell articulated this splendidly when she wrote in her diary in 1854:
The world of learning is so broad, and the human soul is so limited in power! We reach forth and strain every nerve, but we seize only a bit of the curtain that hides the infinite from us.
The century and a half since has been strewn with myriad scientific breakthroughs that have repeatedly transmuted what we once thought to be unknowable into what is merely unknown and therefore knowable, then eventually known. Evolutionary theory and the discovery of DNA have answered age-old questions considered unanswerable for all but the last blink of our species’ history. Einstein’s relativity and the rise of quantum mechanics have radically revised our understanding of the universe and the nature of reality.
And yet the central question remains: Against the infinity of the knowable, is there a fundamental finitude to our capacity for knowing?
That’s what English mathematician Marcus du Sautoy, chair for the Public Understanding of Science at Oxford University, explores with intelligent and imaginative zest in The Great Unknown: Seven Journeys to the Frontiers of Science (public library) — an inquiry into the puzzlement and promise of seven such unknowns, which Du Sautoy terms “edges,” marking horizons of knowledge beyond which we can’t currently see.
For any scientist the real challenge is not to stay within the secure garden of the known but to venture out into the wilds of the unknown.
The knowledge of what we don’t know seems to expand faster than our catalog of breakthroughs. The known unknowns outstrip the known knowns. And it is those unknowns that drive science. A scientist is more interested in the things he or she can’t understand than in telling all the stories we already know the answers to. Science is a living, breathing subject because of all those questions we can’t answer.
Among those are questions like whether the universe is infinite or finite, what dark matter is made of, the perplexity of multiverses, and the crowning curio of devising a model of reality that explains the nature and behavior of all energy and matter — often called a “theory of everything” or a “final theory” — unifying the two presently incompatible models of Einstein’s theory of relativity, which deals with the largest scale of physics, and quantum field theory, which deals with the smallest scale.
Would we want to know everything? Scientists have a strangely ambivalent relationship with the unknown. On the one hand, what we don’t know is what intrigues and fascinates us, and yet the mark of success as a scientist is resolution and knowledge, to make the unknown known.
And yet, too often, our human tendency when faced with unknowns is to capitulate to their unknowability prematurely — nowhere more famously, nor more absurdly, than in the proclamation Lord Kelvin, one of the most esteemed scientists of his era, made before the British Association for the Advancement of Science in 1900: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” Elsewhere in Europe, Einstein was incubating the ideas that would precipitate humanity’s greatest leap in physics just five years later. Lord Kelvin had failed to see beyond the edge of the known.
There are natural phenomena that will never be tamed and known. Chaos theory asserts that I cannot know the future of certain systems because they are too sensitive to small inaccuracies. Because we can never have complete knowledge of the present, chaos theory denies us access to the future.
That’s not to say that all futures are unknowable. Very often we are in regions that aren’t chaotic, where small fluctuations have little effect. This is why mathematics has been so powerful in helping us to predict and plan. The power of mathematical equations has allowed us to land spaceships on other planets, predict the paths of deadly typhoons on Earth, and model the effects of deadly viruses, allowing us to take action before they become a pandemic. But at other times we cannot accurately predict or control outcomes.
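The sensitivity to small inaccuracies that du Sautoy describes can be seen in one of the simplest chaotic systems — the logistic map, a textbook example (mine, not his). Two starting points differing by one part in a billion diverge until their trajectories are effectively unrelated:

```python
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    """Iterate the logistic map x -> r*x*(1-x) from x0 for `steps` steps.

    At r = 4.0 the map is in its chaotic regime: nearby starting points
    separate roughly exponentially fast.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-9, 50)  # a tiny "inaccuracy" in the present
# Within a few dozen steps the initial billionth of difference is amplified
# until the two trajectories bear no useful resemblance to each other.
```

This is the practical content of the quote above: since we can never measure the present to infinite precision, the future of such a system is closed to us no matter how good our equations are.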
This, Du Sautoy notes, is representative of the common denominator between all of the “edges” he identifies — the idea, also reflected in the aforementioned problem of consciousness, that we might be fundamentally unable to grasp a system from a bird’s-eye perspective so long as we are caged inside that system. Perhaps the most pervasive manifestation of this paradox is language itself, the hallmark of our cognitive evolution — language contains and carries knowledge, but language is a system, be it the language of the written word or that of mathematics.
Du Sautoy reflects on this possible meta-limitation:
Many philosophers identify language as a problem when it comes to the question of consciousness. Understanding quantum physics is also a problem because the only language that helps us navigate its ideas is mathematics.
At the heart of this tendency is what is known as “the paradox of unknowability” — the logical proof that unless you know all there is to be known, there will always exist for you truths that are inherently unknowable. And yet truth can exist beyond logic because logic itself has fundamental limits, which the great mathematician Kurt Gödel so elegantly demonstrated in the 1930s.
So where does this leave us? With an eye to his seven “edges,” Du Sautoy writes:
Perhaps the best we can hope for is that science gives us verisimilitudinous knowledge of the universe; that is, it gives us a narrative that appears to describe reality. We believe that a theory that makes our experience of the world intelligible is one that is close to the true nature of the world, even if philosophers tell us we’ll never know. As Niels Bohr said, “It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.”
In consonance with my foundational belief that the cross-pollination of disciplines is what catalyzes the combinatorial creativity out of which every meaningful new idea is born, Du Sautoy adds:
Science flourishes when we share the unknowable with other disciplines. If the unknowable has an impact on how we lead our lives, then it is worth having ways to probe the consequences of choosing an answer to an unknowable. Music, poetry, stories, and art are powerful tools for exploring the implications of the unknowable.
Chaos theory implies that … humans are in some ways part of the unknowable. Although we are physical systems, no amount of data will help us completely predict human behavior. The humanities are the best language we have for understanding as much as we can about what it is to be human.
Studies into consciousness suggest boundaries beyond which we cannot go. Our internal worlds are potentially unknowable to others. But isn’t that one of the reasons we write and read novels? It is the most effective way to give others access to that internal world.
What we cannot know creates the space for myth, for stories, for imagination, as much as for science. We may not know, but that doesn’t stop us from creating stories, and these stories are crucial in providing the material for what one day might be known. Without stories, we wouldn’t have any science at all.