In this episode, I hope to illustrate some of the ideas about logic and formal systems that emerged around the turn of the century. The work on logic as it pertains to human thought paved the way for the explosion of invention and enterprise that we'll discuss soon, as we're getting closer to the introduction of the electronic technology that in some sense embodied these ideas, ushering in the digital age.
This week’s recommended book:
Mentioned in this episode: Turing’s Cathedral: The Origins of the Digital Universe by George Dyson.
Links to learn more about Frege and Russell:
Link to the Philosophy Bites interview with Michael Dummett about Frege:
Links to learn more about Russell's Paradox:
Episode Transcription
Hello again, this is Wes Doyle. Welcome back to another episode of the Bitwise Podcast, where we explore the history of computer science, the discipline of software development, and the implications of an increasingly digital world.
Thanks for joining me on our current exploration of the history of computer science. In the last episode, we talked about the work of the famous logician George Boole, discussing how Boolean logic is fundamental to software engineering and digital circuit design. I mentioned the Austrian logician and mathematician Kurt Gödel and the Hungarian mathematician Rózsa Péter briefly at the end of the last episode. We'll explore Gödel and touch on the work of Rózsa Péter in greater detail in a future episode.
While working on this episode, I realized it would be worthwhile to spend a bit more time in the period leading up to Gödel's monumental Incompleteness Theorems, because I think that this is a particularly formative window of time in terms of the development of logic.
So, for now, we'll be wandering our way out of the fog of the Industrial Revolution and into the late 19th century and early 20th century. The thread that we'll follow, at a very high level, is that of the further development of logic, which is a hot topic in mathematics around this time.
This episode's reading recommendation is the book "Turing's Cathedral: The Origins of the Digital Universe," by George Dyson. Dyson, as I'm sure you're aware, is a historian of science. Turing's Cathedral is an engaging account of the emergence and early use of the computer and several of the fascinating figures behind that development, most notably John von Neumann, who will certainly come up again in this podcast. The book is a rewarding read and includes plenty of peripheral history about the context in which some of the first computing machines came into being and were used.
In this episode, I hope to illustrate some of the ideas about logic and formal systems that emerged around the turn of the century. The work on logic as it pertains to human thought paved the way for the explosion of invention and enterprise that we'll discuss soon, as we get closer to the introduction of the electronic technology that in some sense embodied these ideas and ushered in the digital age. Keep in mind that, at this point, there are no electronic systems. There are no electronic calculators. There are no transistor radios. We don't even have a theory of computation yet. James Clerk Maxwell's theories of electromagnetism from the mid-19th century are only just being confirmed experimentally by people like Heinrich Hertz. In short, these are the embryonic years of modern physics, and the ideas about logic that we're going to discuss shortly existed solely in the minds of individuals. We had yet to apply these ideas to the design of physical systems like computers. I'm offering this as a preface to the topics of this episode because I want to emphasize the incredible amount of work and thought that went into developing logic during this time.
I think it's nearly impossible for many of us, myself included, to set aside what we've internalized through everyday experience with the devices of our time and to appreciate the ingenuity that was required to conceive of them.
How do you get a machine to represent a real number, for instance? How do you get a machine to add any two numbers? Frankly, I'd argue it's miraculous that we've built machines that can do this - machines operating at a microscopic scale that far and away eclipse our own limited abilities to perform calculations. The topics we'll cover shortly are perhaps a bit esoteric, and to explain how these ideas relate to the device you're using to listen to this podcast, for instance, is to tell a long and multi-threaded story.
If that still doesn't convince you that the development of a notoriously dry subject like logic matters to this history, then you might take some consolation in the fact that the mid-20th century will, for its part, be packed with drama.
In any case, I'll do my best to tell the story of what I've learned about this history. I'd also like to encourage you to get in touch at bitwisepodcast.com if you have any feedback about the show, if you notice errors or inconsistencies, or have requests for deeper dives into particular areas. I'd love to hear from you.
We can trace the origin of modern logic in some sense back to Aristotle and his Organon, and follow it through several periods of major development. Some 2,000 years after Aristotle, Gottfried Wilhelm Leibniz picked up the thread between 1670 and 1690, and it was further developed by people like George Boole, Charles Sanders Peirce, John Venn, who invented the Venn diagram, and Augustus De Morgan, and later by people like Gottlob Frege, Kurt Gödel, and Alfred Tarski.
Augustus De Morgan, whom you may remember from our episode about Ada Lovelace, tutored Lovelace in mathematics through an informal correspondence. You might be familiar with De Morgan's laws, which are rules in Boolean logic that allow conjunctions and disjunctions to be expressed in terms of each other using negation. Just think of the word conjunction as referring to AND, and the word disjunction as referring to OR. So, De Morgan's laws show that we can restate Boolean expressions built from ANDs in terms of NOTs and ORs, and expressions built from ORs in terms of NOTs and ANDs.
The rules are, put simply, that:
"The negation of a disjunction is the conjunction of the negations,"
"The negation of a conjunction is the disjunction of the negations."
I don't know about you, but these are just the kind of sentences that make my eyes glaze over immediately, though I think that's just because of how awkward and mechanical these things sound at first when mapped to natural language. The proof of these laws is actually much easier to illustrate in pictorial form, but, given our current medium, I'll do my best to show what they mean with a natural-language example.
Let's say that, for some strange reason, I don't drink coffee or tea. I drink neither. If that were the case, and someone came to me with the question "Would you like coffee or tea?" I might respond by saying, "I would NOT like coffee (N)OR tea," which is the same as saying I would NOT like coffee AND I would NOT like tea. These are two symbolically different but equivalent ways of saying I would like neither. Notice that one uses a conjunction - "I would NOT like coffee AND NOT like tea" - and the other a disjunction - "I would like neither coffee nor tea." In English, of course, for grammatical reasons, we use neither in place of NOT and nor in place of OR, but the structure and meaning are the same.
To give another example, let's say I'm always up for a coffee or a tea, which is more likely the case, of course, but I never have both at the same time. If someone asks the same question - "Would you like coffee or tea?" - I can express two equivalent responses: the first, "Well, I would NOT like coffee AND tea," and the second, transformed by De Morgan's law, "I would like NOT coffee OR NOT tea." Again, two symbolically different but equivalent ways of saying "not both" using different logical operators.
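To make the symmetry explicit, here's a quick sketch of the two laws in modern logical notation, using C for "I'd like coffee" and T for "I'd like tea" (the letters are just stand-ins I'm choosing for illustration):

```latex
\begin{align*}
\neg(C \lor T) &\equiv \neg C \land \neg T && \text{(neither coffee nor tea)} \\
\neg(C \land T) &\equiv \neg C \lor \neg T && \text{(not both coffee and tea)}
\end{align*}
```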
Alright, so here's a practical link from this bit of trivia to modern software development. JetBrains - and this is not a plug, by the way; I'm not associated with the company - makes many popular development tools, like IntelliJ and ReSharper. You may have seen or used a feature in these tools for refactoring Boolean expressions using De Morgan's laws. It lets you automatically convert a Boolean expression in your code from using, say, ANDs to using NOTs and ORs, or from using ORs to using NOTs and ANDs, much like the coffee-or-tea example we just talked about.
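To make that concrete, here's a minimal sketch in Python - not JetBrains' actual output, just an illustration of the kind of rewrite such a refactoring performs:

```python
def wants_neither(likes_coffee: bool, likes_tea: bool) -> bool:
    # Original form: the negation of a disjunction.
    return not (likes_coffee or likes_tea)

def wants_neither_refactored(likes_coffee: bool, likes_tea: bool) -> bool:
    # After applying De Morgan's law: the conjunction of the negations.
    return (not likes_coffee) and (not likes_tea)

# The two forms agree on every combination of inputs.
for coffee in (True, False):
    for tea in (True, False):
        assert wants_neither(coffee, tea) == wants_neither_refactored(coffee, tea)
```

Either form is logically correct; the refactoring just lets you choose whichever reads more clearly in context.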
In any case, these are De Morgan's laws, and he deserves more than an anecdotal reference in this history, so if you're interested in more about the history of logic, you'll certainly come across his work.
One of the crucial points in the history of the philosophy of language and logic comes with the work of Gottlob Frege.
Frege was born in Germany in 1848 and worked as a mathematician and professor of mathematics, though he was relatively unknown to much of the world during his lifetime, with the notable exception of the philosopher Bertrand Russell. Frege was apparently an introverted and private individual, and he drew little attention in the mathematical and philosophical circles of his time. Nonetheless, he would eventually become recognized as a founding father of analytic philosophy. His work influenced many notable philosophers in this domain, including Russell and, in particular, Ludwig Wittgenstein. For those interested in the philosophical side, Frege's work also influenced many others, including G.E. Moore and Rudolf Carnap, and it had implications for virtually the entire branch of analytic philosophy. We'll come back to the significance of Frege's relationship with Bertrand Russell toward the end of the episode.
Frege's work was driven by the pursuit of a universal concept language - some formal system, built entirely on the foundations of logic, that could be used to describe thoughts. If you recall, this is very similar to the motivations of people like Boole and Leibniz, and even, to some extent, Panini, whose rigorous linguistic analysis of Sanskrit was mentioned in the first episode of the podcast.
When we picture Frege, I think we should keep in mind that this is someone who looked at the state of mathematics in his time, and was convinced that things like proof and arithmetic itself were being used without sufficient rigor. He wanted to clarify the foundation of arithmetic such that we'd never need to appeal to anything but logic itself to do serious work.
Frege didn't want to appeal to intuition or human psychology to understand the concept of a number. He insisted that logic was objective and, contrary to beliefs about the subject that had prevailed at least since Aristotle, independent of human psychology. This approach ran counter to that of some of Frege's contemporaries, like Edmund Husserl, whose Philosophy of Arithmetic, published in 1891, investigated the origins of numbers from a more psychological perspective.
Frege believed that arithmetic could be deduced from logic, and that logic itself is objective. We can catch a glimpse of this in his set-theoretic definition of the natural numbers: a natural number n is the collection of all sets with exactly n elements. So if I treat two episodes of this podcast as a set, that set would belong to the same collection as every other set with exactly two members, and we'd call that collection the number 2. A definition along these lines can be used to define any cardinal number in purely logical terms. With a definition of the natural numbers rooted in logic, Frege could work to build up arithmetic.
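In modern set-builder notation - a sketch of the Frege-Russell idea rather than Frege's own two-dimensional notation - the number 2 can be written as the class of all sets with exactly two members:

```latex
\[
  2 \;=\; \{\, x \mid \exists a \, \exists b \, ( a \neq b \,\wedge\, x = \{ a, b \} ) \,\}
\]
```

More generally, two sets get assigned the same number exactly when their members can be paired off one to one, a relation that can itself be defined in purely logical terms.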
He published his book Begriffsschrift in 1879, which is regarded as one of the most significant works on logic ever published. Ultimately, he was interested in finding a way to express general ideas using a formal system.
In Begriffsschrift, which might be loosely translated as "concept-script," or "the putting of concepts into notation," we find Frege laying out his logic of functions and truth values, in which a proposition is analyzed into a function and its arguments.
Frege works to carry the ideas of function and argument from mathematics over to the domain of logic. He realizes that there is a distinction between the functional part of a statement and its arguments. He notices that a statement keeps its structure when certain words are replaced by other words - that part of the statement plays a functional or structural role independent of the particular words filling it. In this, he begins to draw the distinction between syntax and semantics, and he helps pivot the course of Western philosophy toward the formal study of language itself.
The small amount I've managed to learn about Frege's system, even at a very high level, is pretty interesting. In his system, sentences denote truth values, which he calls The True and The False. Sentences can be constructed such that objects are mapped to The True or The False depending on the structure of the function and its arguments.
So, as an example, something like 2 + _ = 4 maps to The True when the blank is 2. Frege worked to generalize this to non-mathematical predicates. Let's take the predicate "is liquid," which signals a function of a single variable (we're using the verb "is") that maps its argument to either The True or The False. If an argument satisfies the predicate "is liquid," the function signaled by this predicate maps that argument to The True. If the argument doesn't satisfy the predicate, the function maps it to The False.
So, using our predicate "is liquid," and ignoring certain subtleties of meaning, we could have the argument "water," which would map to The True, and, say, "sand," which would map to The False.
If we use a placeholder to mark the location of the argument - so, say, "blank is liquid" - we have what Frege referred to as a "concept." Concepts, to Frege, are functions that map all of their arguments to truth values. Frege, again like Leibniz with his calculus ratiocinator, was driving toward the formal, logical representation of thoughts.
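As a loose, anachronistic sketch - Frege had nothing like a programming language, of course - we might model a Fregean concept in Python as a function that maps each argument to a truth value. The predicates and the little collection of liquids here are made-up examples, not anything from Frege:

```python
# A Fregean "concept" modeled, very loosely, as a function from an argument
# to a truth value - what Frege called The True and The False.

def is_liquid(substance: str) -> bool:
    # Toy predicate: membership in a small, made-up collection of liquids.
    return substance in {"water", "coffee", "tea"}

def completes_two_plus_blank_equals_four(blank: int) -> bool:
    # The arithmetic example: "2 + _ = 4" maps to The True when the blank is 2.
    return 2 + blank == 4

print(is_liquid("water"))                       # True  -> "The True"
print(is_liquid("sand"))                        # False -> "The False"
print(completes_two_plus_blank_equals_four(2))  # True  -> "The True"
```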
Frege went on to further develop his ideas, publishing them in several works, including the two volumes of The Basic Laws of Arithmetic. The works, I gather, are incredibly detailed and abstract, and it seems they weren't fully appreciated until Bertrand Russell wrote about them around 1903. Whereas Frege was introverted and relatively unknown at the time, Russell was an outspoken and well-known philosopher, political activist, and popular writer. His grandfather, Lord John Russell, had been a British prime minister, and his godfather was none other than the famous utilitarian philosopher John Stuart Mill. I should note that although Russell was and is well known among the public, his work on the foundations of mathematics and logic was rigorous. Russell lived a long and productive life, from 1872 to 1970.
For more information about Frege, I'd encourage you to check out a great podcast called The Partially Examined Life, which has an early episode about his work. There's also an episode of Philosophy Bites containing an interview with the late Michael Dummett about Frege.
Russell went on to uncover a crucial contradiction, leading to a paradox, in Frege's work on the foundations of mathematics. It was a contradiction in set theory, which had first been laid out by Georg Cantor. In 1902, Russell wrote to Frege describing the paradox, which undermined one of Frege's axioms. Russell essentially considered a predicate defined as a predicate that cannot be predicated of itself. This created a sort of strange loop: Russell had defined something that must exist according to the axioms Frege had laid out, and yet whose very definition produced a contradiction within that system.
You may have heard of this paradox, called Russell's Paradox, which revealed a major problem in naive set theory, and specifically in an axiom known as unrestricted comprehension. Consider a set as a collection of any objects whatsoever. Unrestricted comprehension says that, given any property, there exists a set containing all objects that have that property. To name some examples, we might have a set that contains a single raindrop. We might define a set that contains all water molecules. We might even conceive of a set that contains all sets. This is where things start to get a little tricky. The set of all sets has an interesting self-referential property: as the set of all sets, it must contain itself.
This isn't a property of the normal, everyday sets we might consider. The set of all water molecules, for instance, does not contain itself. It just contains all water molecules. But if we can say that there is a set of all sets, and we can say that there are sets which do not contain themselves, then we should also be able to define the set of all sets that do not contain themselves as members. This seems straightforward enough, and it would include the set of all water molecules, for instance, or the set containing a raindrop. It's just the set of all sets that do not contain themselves as members.
What if we pose the question:
"Does the set of all sets that do not contain themselves contain itself?"
Well, as a thought experiment, let's try to answer the question, which must be either yes or no. Let's try yes first. If the set does contain itself, then by its own definition it must be a set that does not contain itself, so that can't be right. The answer must be no, then. But if it does not contain itself, then it is itself a set that does not contain itself, and so it must belong to the set of all such sets - that is, it must contain itself. In fact, this set contains itself if and only if it does not contain itself.
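In modern notation - a standard textbook rendering, not Russell's or Frege's original symbolism - unrestricted comprehension lets us form the set R of all sets that are not members of themselves, and asking whether R belongs to itself collapses into a contradiction:

```latex
\[
  R = \{\, x \mid x \notin x \,\}
  \qquad \Longrightarrow \qquad
  R \in R \iff R \notin R
\]
```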
It may take a few minutes to work this out, but when you do, you'll see how much trouble this kind of self-reference causes. To take another popular example, consider what's often called the Barber Paradox. Imagine a city with a single barber, who shaves a person if and only if that person does not shave themselves. The analogous question is: does the barber shave himself? Let's work it out. If the barber does not shave himself, then he must shave himself, because he shaves all of those people who do not shave themselves. Hmmm. And if he does shave himself, then he must not shave himself, because he only shaves people who do not shave themselves. Oof.
It should be noted that later logicians and mathematicians worked out ways to avoid or sidestep Russell's paradox while developing axiomatic set theory. Russell himself went on to develop his theory of types in an attempt to root out self-referential problems from mathematics and set theory. His own monumental work with Alfred North Whitehead, Principia Mathematica, was in turn cleverly undermined by the Austrian Kurt Gödel, but we'll save that for the next episode.
This letter from Russell reached Frege just a short time before Frege was to publish the second volume of his Basic Laws of Arithmetic, and it obviously wasn't good news. The paradox put a crack in the foundation, showing that one of the crucial axioms of his system led to contradiction. In response to the discovery, Frege wrote a late appendix to the second volume, and the blow clearly shook his confidence. He wrote, as translated by the historian Jean van Heijenoort,
"Hardly anything more unfortunate can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished. This was the position I was placed in by a letter of Mr. Bertrand Russell, just when the printing of this volume was nearing its completion."
While this must have been a significant blow to Frege, and it apparently did lead him, ultimately, to abandon his pursuits related to the foundations of mathematics, he handled Russell's discovery gracefully, at least according to Russell himself. I stumbled upon a quote by Russell reproduced in From Frege to Gödel, also by van Heijenoort and published by Harvard University Press, and reprinted in the article on Russell's Paradox in the Stanford Encyclopedia of Philosophy. Russell states,
"As I think about acts of integrity and grace, I realise that there is nothing in my knowledge to compare with Frege’s dedication to truth. His entire life’s work was on the verge of completion, much of his work had been ignored to the benefit of men infinitely less capable, his second volume was about to be published, and upon finding that his fundamental assumption was in error, he responded with intellectual pleasure clearly submerging any feelings of personal disappointment. It was almost superhuman and a telling indication of that of which men are capable if their dedication is to creative work and knowledge instead of cruder efforts to dominate and be known."
I mentioned Cantor briefly as the founder of set theory. We'll see his ideas come up again when we talk about enumeration, and specifically the non-enumerability of the real numbers, and how that relates to Alan Turing's work on computability.
So, what I'm about to say is obvious, but I think it's worth keeping in mind as we come up for air from the history for a moment: at this point in time, there is absolutely nothing in existence approximating a computer or a programming language. The number of digital devices in the world is exactly zero. Frege's Begriffsschrift was written roughly 15 years before the first intentional radio signal was sent by a human. And yet, with a bit of squinting, I think we can already begin to see the shape - or perhaps the mold for - something like functional programming starting to emerge. Frege's logic of functions and truth values, and the study of functions in general, are things we'll likely revisit when we dive into the lambda calculus, introduced by Alonzo Church in the 1930s.
OK, that's it for this episode - in the next episode, we'll continue with another fascinating figure - Kurt Gödel. In the meantime, be sure to check out bitwisepodcast.com to get notified whenever new episodes are released, or to get in touch with questions or comments about the podcast. Thanks for listening.