Chapter 1: The Engines of Calculation

Central Question: Can thought be mechanized?


1.1 Babbage’s Obsession

It is 1821, and we are watching a young mathematician die of frustration.

Charles Babbage sits hunched over a desk at the Astronomical Society of London, surrounded by page after page of mathematical tables—logarithms, trigonometric functions, navigational ephemerides. These tables are the infrastructure of the Industrial Revolution, as essential to the age of steam as coal and iron. A sea captain plotting a course across the Atlantic depends on these numbers. An engineer calculating the stress on a railway bridge consults them. A banker computing compound interest reaches for the same worn volumes.

And they are riddled with errors.

Babbage and his friend John Herschel have been checking calculations for the Society, comparing computed values against the printed tables. The work is mind-numbing. Hours pass in a blur of digits. And mistake after mistake emerges from the pages—computational errors, transcription errors, typesetting errors. Each wrong digit is a potential shipwreck, a collapsed bridge, a financial ruin.

“I wish to God these calculations had been executed by steam,” Babbage mutters.

Herschel looks up. “It is quite possible,” he replies.

In that moment, according to Babbage’s later recollection, the obsession of his life is born. If the great engines of industry can weave complex patterns in silk and stamp precise forms in metal, why can they not compute? Why can they not calculate with absolute, mechanical certainty, banishing forever the curse of human error?

Charles Babbage is, in many ways, the archetypal Victorian polymath—brilliant, cantankerous, and utterly convinced of his own rectitude. Born in 1791 to a wealthy London banker, he arrives at Cambridge already knowing more mathematics than most of his tutors. He helps found the Analytical Society, dedicated to importing the superior Continental notation for calculus into the backward precincts of British mathematics. He is elected to the Royal Society at twenty-four. He holds the Lucasian Professorship—Newton’s chair—though he never delivers a single lecture.

He is also impossible. He wages a decades-long war against street musicians, whom he considers a plague upon the intellectual life of London. He quarrels with everyone: the Royal Society, the British government, his own engineers. He cannot resist a scientific controversy or an opportunity to demonstrate his own superior understanding. His dinner parties are legendary, featuring automata, demonstrations of machinery, and the most glittering company in London. One frequent guest is Ada Lovelace, whom he will one day call “the Enchantress of Number.”

But beneath the combativeness and the social display burns an absolute conviction: the problems of the world are problems of calculation, and calculation can be mechanized.

By 1822, Babbage has a working model of what he calls the Difference Engine—a brass and steel contraption of gears and columns that can compute mathematical tables automatically. The key insight is the method of finite differences, a mathematical technique that reduces the evaluation of polynomials to nothing but repeated addition.

Consider the problem of computing squares: 1, 4, 9, 16, 25, 36… These are the values of f(x) = x^2 for x = 1, 2, 3, and so on. Direct calculation requires multiplication. But notice the differences between consecutive terms: 3, 5, 7, 9, 11… And now notice the differences of those differences: 2, 2, 2, 2… The second differences are constant!

This is no accident. For any polynomial of degree n, the nth differences are constant. Which means that once we set the initial values correctly, we can generate all subsequent values through pure addition—no multiplication, no division, just the simplest arithmetic operation, mechanized through the rotation of numbered wheels.


Technical Box: The Method of Finite Differences

Consider the polynomial f(x) = x^2. We want to compute f(1), f(2), f(3), … without multiplication.

First, we build a difference table:

x    f(x) = x^2    Δf (1st diff)    Δ²f (2nd diff)
1        1              3                  2
2        4              5                  2
3        9              7                  2
4       16              9                  2
5       25             11

The second differences are constant (always 2). To compute the next value:

  1. Add the first difference to the current value: 25 + 11 = 36
  2. Add the constant second difference to the first difference, ready for the next step: 11 + 2 = 13

Thus f(6) = 36, computed through addition alone.

In general, for a polynomial of degree n:

  • The nth differences are constant
  • Once the initial values are set, all subsequent values can be computed by “propagating” additions through the difference table

The Difference Engine mechanizes this process. Each column of gears represents a level of the difference table, and a turn of the mechanism adds each column into the one above it, generating the next function value—computed by the turning of brass wheels, not the fallible attention of a human “computer.”
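
To see the procedure in executable form, here is a minimal sketch in Python (the function name and the list representation are ours, not Babbage’s), playing the role of the engine’s columns:

    def difference_engine(row, steps):
        # row holds the current function value followed by its finite
        # differences, e.g. [1, 3, 2] for f(x) = x^2 at x = 1.
        row = list(row)
        values = [row[0]]
        for _ in range(steps):
            # One turn of the crank: the function value absorbs the first
            # difference, the first difference absorbs the second, and so on.
            for i in range(len(row) - 1):
                row[i] += row[i + 1]
            values.append(row[0])
        return values

    # Squares by addition alone: f(1) = 1, first difference 3,
    # constant second difference 2.
    print(difference_engine([1, 3, 2], 5))  # [1, 4, 9, 16, 25, 36]

Nothing in the sketch multiplies: set the initial differences correctly and every later value falls out of repeated addition, exactly as the box describes.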


The British government sees promise. In 1823, they provide Babbage with 1,500 pounds to construct a full-scale Difference Engine—the first government funding for a computing project, more than a century before the National Science Foundation. But the project spirals into a nightmare of cost overruns, engineering disputes, and Babbage’s own perfectionism. By 1842, after 17,000 pounds of government money and even more of Babbage’s own fortune, the project is abandoned, incomplete.

Yet even as the Difference Engine founders, Babbage’s imagination has leaped ahead to something far more ambitious: the Analytical Engine.

Where the Difference Engine can only tabulate polynomials, the Analytical Engine is conceived as a general-purpose calculating machine. It has a “store” (memory) holding a thousand 50-digit numbers. It has a “mill” (processor) that can perform any arithmetic operation. It has a control mechanism reading instructions from punched cards—borrowed from the Jacquard loom, which uses such cards to weave complex patterns in silk. It can branch and loop based on intermediate results.

The Analytical Engine is, in every essential respect, a computer. Babbage has invented the architecture of modern computing a century before the transistor.

The person who understands this best is not Babbage himself, but Augusta Ada King, Countess of Lovelace—daughter of the poet Lord Byron, whom she never knew, and of the mathematically inclined Annabella Milbanke, who raised Ada to be everything her father was not: disciplined, rational, scientific.

Ada meets Babbage in 1833, at seventeen, and is immediately captivated by his machines. Over the next decade, she becomes his most profound intellectual collaborator. When the Italian mathematician Luigi Menabrea publishes a description of the Analytical Engine, Ada undertakes to translate it into English—and her “Notes” to the translation are three times longer than the original article.

In Note A, she makes a crucial distinction. The Difference Engine, she explains, can only tabulate; it “can do nothing but add.” The Analytical Engine, by contrast, can perform any operation that can be described in terms of its basic functions. It “weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”

And then, in the same note, she sees further still. The engine’s operations, she writes, “might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” Music, for instance: if the relationships of musical sounds can be expressed in terms of the engine’s operations, “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

Ada has glimpsed something that will take another century to fully articulate: the principle of symbolic computation. The gears do not care whether they are manipulating numbers representing quantities, or numbers representing notes, or numbers representing propositions of logic. Computation is not essentially numerical. It is symbolic. It is, at its core, the manipulation of formal structures according to precise rules.

She also notes, in Note G—and this will echo down through the centuries—that “the Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

This is sometimes called “Lady Lovelace’s objection” to artificial intelligence. The machine cannot create; it can only execute. It cannot think; it can only follow instructions. Whether this objection is ultimately valid is a question we will return to throughout this book.

Ada dies of cancer in 1852, at thirty-six—the same age as her father at his death. Babbage outlives her by nearly two decades, but the Analytical Engine is never built. His drawings and notebooks, thousands of pages of the most detailed mechanical designs ever conceived, gather dust. The computer has been invented, and the world does not notice.

But the questions Babbage and Ada raised do not disappear. Can thought be mechanized? Can reasoning be reduced to calculation? The gears have asked the question. Now it is time for the algebra.


1.2 Boolean Dreams

We travel now from the clanking workshops of Victorian London to a very different scene: a cobbler’s shop in Lincoln, in the English Midlands, where a boy of modest means is teaching himself mathematics from borrowed books.

George Boole, born in 1815, has no formal education beyond primary school. His father, a tradesman with intellectual aspirations but no money, gives him what learning he can—a little Latin, a fascination with mathematics. The rest George acquires himself, working through the great texts of the mathematical tradition: Lacroix, Lagrange, Laplace. By sixteen, he is teaching in village schools. By twenty, he has opened his own school. By twenty-four, he has published original research in the Cambridge Mathematical Journal.

The young Boole is drawn not to computation but to something deeper: the foundations of thought itself. Where does mathematical truth come from? What is the relationship between the symbols on the page and the reasoning in the mind? Is there, beneath the surface diversity of mathematics, a unified logical structure?

In 1847, Boole publishes a pamphlet with an ambitious title: “The Mathematical Analysis of Logic.” In it, he proposes something audacious: that the laws of human reasoning can be expressed as an algebra. Just as ordinary algebra manipulates numerical quantities, this new algebra manipulates logical propositions. Just as x + y = y + x in arithmetic, so “A and B” equals “B and A” in logic.

The full flowering of this idea comes in 1854, in a book whose title announces Boole’s grand ambition: An Investigation of the Laws of Thought, on Which are Founded the Mathematical Sciences of Logic and Probabilities.

Boole’s algebra operates on symbols representing propositions—statements that are either true or false. Let x represent “it is raining.” Let y represent “I have an umbrella.” Then xy (which Boole writes as multiplication) represents “it is raining AND I have an umbrella.” The expression x + y - xy represents “it is raining OR I have an umbrella” (where we subtract xy to avoid counting the overlap twice). And (1 - x) represents “it is NOT raining.”

The key insight is that this algebra has remarkably simple rules. Since a proposition is either true (1) or false (0), we have x^2 = x (a proposition ANDed with itself is just itself). This “law of duality” has no analogue in ordinary arithmetic—no number except 0 and 1 equals its own square. It marks Boolean algebra as fundamentally different, suited to the binary world of yes and no, true and false.

From this foundation, Boole derives a complete system. He shows how syllogisms—the traditional forms of logical argument going back to Aristotle—can be expressed as algebraic equations. He demonstrates that logical reasoning is a kind of calculation, as mechanical in its rules as arithmetic.


Technical Box: Boolean Algebra Basics

Boolean algebra operates on binary values: 1 (true) and 0 (false).

The fundamental operations:

AND (conjunction), written as multiplication or the logical wedge (∧):

A   B   A AND B
0   0      0
0   1      0
1   0      0
1   1      1

OR (disjunction), written as + or the logical vee (∨):

A   B   A OR B
0   0     0
0   1     1
1   0     1
1   1     1

NOT (negation), written with an overbar or the negation symbol (¬):

A   NOT A
0     1
1     0

Key laws:

  • Identity: A AND 1 = A; A OR 0 = A
  • Domination: A AND 0 = 0; A OR 1 = 1
  • Idempotence: A AND A = A; A OR A = A
  • Complementation: A AND (NOT A) = 0; A OR (NOT A) = 1
  • De Morgan’s Laws: NOT(A AND B) = (NOT A) OR (NOT B); NOT(A OR B) = (NOT A) AND (NOT B)

Example: Express “If it is raining and I don’t have an umbrella, then I will get wet.”

Let R = “it is raining,” U = “I have an umbrella,” W = “I will get wet.”

The statement becomes: (R AND (NOT U)) implies W

Using the identity (P implies Q) = (NOT P) OR Q:

NOT(R AND (NOT U)) OR W = (NOT R) OR U OR W

This algebraic manipulation reveals the logical structure: “Either it’s not raining, or I have an umbrella, or I will get wet.”
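
Because every variable takes only the values 0 and 1, identities like these can be checked exhaustively. Here is a short Python sketch (the helper functions are ours, written to mirror Boole’s arithmetic notation):

    from itertools import product

    def NOT(a):
        return 1 - a            # Boole's (1 - x)

    def AND(a, b):
        return a * b            # Boole's multiplication

    def OR(a, b):
        return a + b - a * b    # Boole's x + y - xy

    def implies(p, q):
        return OR(NOT(p), q)    # (P implies Q) = (NOT P) OR Q

    # Check De Morgan's law and the umbrella example for all eight
    # assignments of R, U, W.
    for r, u, w in product((0, 1), repeat=3):
        assert NOT(AND(r, u)) == OR(NOT(r), NOT(u))
        assert implies(AND(r, NOT(u)), w) == OR(OR(NOT(r), u), w)

    print("All identities hold for every truth assignment.")

The loop settles by brute force what would otherwise take a page of case analysis: reasoning done, quite literally, by calculation.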


Boole’s algebra is a profound intellectual achievement. It demonstrates that logic—the very structure of rational thought—can be captured in mathematical form. Reasoning becomes calculation. Arguments become equations. The intuitive, seemingly irreducible act of thinking is revealed to have an algebraic skeleton.

But there is a gap. Boole’s algebra exists only on paper. The symbols are manipulated by human minds, not by machines. Babbage had gears without algebra; Boole has algebra without gears. The marriage of the two—mechanical implementation of logical operations—lies decades in the future.

Boole himself never makes the connection to machinery. He dies young, in 1864, of pneumonia—contracted, according to legend, after walking to class in the rain and lecturing in wet clothes. He is forty-nine. His widow, Mary Everest Boole (niece of George Everest, for whom the mountain was later named), will spend the rest of her life promoting his ideas.

For half a century, Boolean algebra remains a mathematical curiosity, admired by logicians but seemingly disconnected from practical application. Philosophers recognize its importance. Mathematicians appreciate its elegance. But no one quite sees how to use it.

The connection to machinery comes in 1937, when a twenty-one-year-old MIT graduate student named Claude Shannon writes what may be the most important master’s thesis in history. Shannon realizes that the on/off, open/closed states of electrical relay switches correspond exactly to the 1s and 0s of Boolean algebra. Complex circuits can be described as Boolean expressions. And conversely, any Boolean expression can be built as a circuit.

In a flash, Boole’s pure mathematics becomes the foundation of digital electronics. Every logic gate in every computer is a physical implementation of Boolean algebra. The AND gate, the OR gate, the NOT gate—these are Boole’s operations, realized in silicon. His dream of an algebra of thought becomes, quite literally, the hardware of the thinking machine.
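
Shannon’s dictionary is easy to dramatize in code. In the toy sketch below (the names are ours), a switch is open (0) or closed (1): switches in series conduct only if both are closed, switches in parallel conduct if either is closed, and an inverting contact supplies NOT. From these three circuits we can wire up genuine arithmetic:

    # A switch is open (0) or closed (1). Series wiring conducts only if
    # both switches are closed (AND); parallel wiring conducts if either
    # is closed (OR); an inverting contact conducts when its input is off (NOT).
    def series(a, b):
        return a & b

    def parallel(a, b):
        return a | b

    def invert(a):
        return 1 - a

    def half_adder(a, b):
        # One-bit addition from the three basic circuits.
        carry = series(a, b)
        total = series(parallel(a, b), invert(carry))  # XOR from AND/OR/NOT
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            total, carry = half_adder(a, b)
            print(f"{a} + {b} -> sum {total}, carry {carry}")

A handful of relays, composed this way, add numbers; compose enough of them and you are on the way to Babbage’s mill, rebuilt in Boole’s algebra.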

But we are getting ahead of ourselves. Before the machines can compute, we must first understand what computation itself is. And this understanding comes from an unexpected direction: a crisis in the foundations of mathematics itself.


1.3 Hilbert’s Program and Its Destruction

We are now in the early twentieth century, and mathematics is in trouble.

For millennia, mathematicians had worked with a comfortable certainty: their methods were reliable, their results were true. Euclid’s geometry was the model of certainty—axioms led to theorems through unbreakable chains of logical reasoning. Yes, there were occasional puzzles and paradoxes. But the edifice seemed solid.

Then cracks appear.

The first shock comes from geometry itself. In the 1820s and 1830s, Lobachevsky and Bolyai independently discover that Euclid’s parallel postulate—through a point not on a line, exactly one parallel line can be drawn—is not a necessary truth. Consistent geometries exist in which the postulate fails. There are worlds (at least mathematically speaking) where the angles of a triangle do not sum to 180 degrees.

If geometry is not about the necessary structure of space, then what is mathematics about? The question becomes urgent when Georg Cantor, in the 1870s and 1880s, develops his theory of infinite sets—and finds paradoxes lurking in the foundations. Some infinite sets are bigger than others: the real numbers cannot be put in one-to-one correspondence with the integers. But how can infinity have sizes? And if we are not careful in defining “set,” we arrive at contradictions: the set of all sets that do not contain themselves, Bertrand Russell’s famous paradox.

By 1900, the great mathematician David Hilbert is calling for a foundational program. Mathematics must be placed on an absolutely secure footing. We must:

  1. Formalize: Express all of mathematics in a precise symbolic language, with explicit rules for deriving theorems from axioms.
  2. Prove completeness: Show that every true statement can be proved within the system.
  3. Prove consistency: Show that the system cannot prove contradictions.
  4. Find a decision procedure: Discover an algorithm that can determine, for any statement, whether it is true or false.

The last requirement Hilbert calls the Entscheidungsproblem—the “decision problem.” Is there a mechanical procedure that can answer any mathematical question? Can thought, in effect, be mechanized?

For three decades, mathematicians labor on Hilbert’s program. Progress is made. Formal systems are developed—notably the Principia Mathematica of Russell and Whitehead, which attempts to derive all of mathematics from pure logic, and which famously takes hundreds of pages to prove that 1 + 1 = 2. The dream of a complete, consistent, decidable mathematics seems almost within reach.

And then, in 1931, a twenty-five-year-old Austrian logician demolishes it.

Kurt Gödel is, by all accounts, a strange man—brilliant, obsessive, increasingly paranoid. In his later years at the Institute for Advanced Study in Princeton, he becomes convinced that unknown enemies are trying to poison him. He will eat only food prepared by his wife. When she is hospitalized for an extended period, he starves to death rather than trust anyone else’s cooking. He weighs sixty-five pounds at his death.

But in 1931, his mind is at its keenest, and he proves two theorems that reshape our understanding of mathematics, logic, and ultimately computation.

Gödel’s First Incompleteness Theorem: In any consistent formal system powerful enough to express basic arithmetic, there exist statements that are true but cannot be proved within the system.

Gödel’s Second Incompleteness Theorem: Such a system cannot prove its own consistency.

The proofs are technical masterpieces, but the core idea can be sketched. Gödel constructs a way to encode logical statements as numbers—a technique now called “Gödel numbering.” Every symbol, formula, and proof can be represented as a unique integer. Mathematical statements about numbers become, in a sense, statements about themselves.
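
One standard way to realize such an encoding (a simplified sketch, not the details of Gödel’s own 1931 scheme) assigns each symbol a small code number and packs a sequence of symbols into a single integer as a product of prime powers. Unique factorization guarantees that the number can be decoded again:

    # Toy Gödel numbering: the symbol codes below are arbitrary choices
    # made for illustration, not Gödel's.
    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

    def primes():
        # Unbounded prime generator by trial division.
        candidate, found = 2, []
        while True:
            if all(candidate % p for p in found):
                found.append(candidate)
                yield candidate
            candidate += 1

    def godel_number(formula):
        # Encode symbol codes c1, c2, ..., ck as 2^c1 * 3^c2 * 5^c3 * ...
        n = 1
        for p, symbol in zip(primes(), formula):
            n *= p ** SYMBOLS[symbol]
        return n

    print(godel_number("0=0"))  # 2^1 * 3^3 * 5^1 = 270

Every formula becomes a single number, and so does every proof, by the same trick applied to sequences of formulas; statements about provability become statements about ordinary arithmetic.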

Using this encoding, Gödel constructs a sentence that says, essentially, “This statement is not provable.” Call it G. If the system proves G, then G is false (since G claims to be unprovable), and the system has proved a falsehood—the system is inconsistent. If the system does not prove G, then G is true (since G claims to be unprovable), and the system has failed to prove a truth—the system is incomplete.

Either way, Hilbert’s dream is broken. Mathematics cannot be both complete and consistent. No finite set of axioms can capture all mathematical truth. The hope of mechanizing all reasoning—of building a machine that could solve any mathematical problem—seems to be dashed.

But wait. Gödel has shown that some true statements are unprovable. He has shown that consistency cannot be proved internally. But he has not directly addressed the Entscheidungsproblem. Perhaps there is still an algorithm that can at least decide questions—even if it cannot prove all truths, it might be able to sort statements into “true” and “false.”

The question becomes sharper: What exactly do we mean by “algorithm”? What does it mean for a procedure to be “mechanical”? We need a precise mathematical definition of computation itself.

This question is taken up independently by several brilliant minds in the mid-1930s: Alonzo Church at Princeton with his lambda calculus, Emil Post with his production systems, and a young British mathematician who will change everything: Alan Turing.

Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” introduces what we now call the Turing machine—an imaginary device of breathtaking simplicity and profound power. With it, Turing gives the first convincing definition of what it means for a function to be computable, what it means for a procedure to be algorithmic.

And using this definition, Turing proves that the Entscheidungsproblem is unsolvable. There is no algorithm that can decide, for every statement in mathematics, whether it is true or false. There are limits to mechanical reasoning—limits not of engineering but of logic itself.

We stand now at the threshold of modern computer science. Babbage has shown that calculation can be mechanized. Boole has shown that logic can be algebra. Gödel has revealed the limits of formal systems. And Turing is about to show us what lies within those limits—the vast territory of the computable, and the strange boundary that separates it from the incomputable.

The question “Can thought be mechanized?” has not been answered. But it has been transformed into something far more precise and tractable: “What can be computed?”

This is the question we take up in Chapter 2, where we meet Alan Turing’s machine and the birth of computer science itself.


Next: Chapter 2 - The Universal Machine