#005 | George Boole

In this episode, we explore the life and work of the great logician George Boole. We also talk about the use and implementation of Boolean logic in software development, idempotent functions, and referential transparency.

This week’s recommended book:

Mentioned in this episode: An Investigation of the Laws of Thought, by George Boole.


Links to learn more about George Boole:

Links to learn more about referential transparency and idempotent functions:

Episode Transcription

Hello again. This is Wes Doyle. Welcome back, after a fairly significant hiatus, to another episode of the Bitwise Podcast, where we explore the history of computer science, the discipline of software development, and the implications of an increasingly digital world.

I apologize for the radio silence of the last few months. Things get pretty busy around the New Year, but I'm excited to pick up where we left off in our exploration of the history of computer science, and I'll do my best to get back on track.

Up until this point in our journey, we've talked about the origins of computation, reaching way back to the emergence of the abacus and similar accounting instruments in ancient Mesopotamia and China. We've covered the inventions of many great thinkers like Pascal and Leibniz, most recently Charles Babbage’s concepts of the Difference and Analytical Engines, and we've examined the story of Ada Lovelace and her extraordinary vision of the great implications that these types of machines would have on the world.

In this episode we’ll turn to the life of a philosopher and mathematician whose name is recognizable to any software developer and perhaps also known to anyone who's studied a bit of logic or mathematics - and that's George Boole.

If you’ve spent any time writing software, you'll already be familiar with the concept of the Boolean data type; if not, no worries - the concept is really straightforward. In the context of computer science, a Boolean type is one that can be in just one of two possible states at any point in time, so a Boolean value represents one bit of information in the most general sense. In many programming languages like Python, Java, and C#, Boolean values are written as the words true and false. It's interesting to note that the underlying implementation of this concept differs slightly among languages.

It's kind of interesting, actually, just to take a few quick examples. In Python, for instance, Boolean values are just a special case of integer types, with zero representing False and one representing True. In fact, if you open up an interactive Python interpreter and execute the expression:

True + True + True

- you'll see that the output is actually 3. On the other hand, in Ruby, another dynamic language, true and false are actually objects and belong to separate classes, along with nil, another special class that represents the absence of a thing and can act a bit like false - in programming lingo, we call it falsey. There is some other interesting stuff going on down at the level of implementation, but for now, it's just worth noting that there's this interesting difference in approach and design between two popular modern languages.
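If you're curious, here's a quick sketch you can paste into a Python interpreter to see this for yourself - nothing here is specific to this episode, just standard interpreter behavior:

# In Python, bool is actually a subclass of int:
# True behaves like 1 and False behaves like 0.
print(isinstance(True, int))   # True
print(True + True + True)      # 3
print(False * 10)              # 0
print(int(True), int(False))   # 1 0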

In any case, as a concept, the Boolean type just represents something with two possible states, so it's useful in modeling anything that has a binary representation, like “on and off” or “true and false”. Because of this, Boolean values are useful in constructing conditional statements, which can be used to change the flow of a procedure depending on some initial state. You can think of them as enabling a kind of railroad switch that allows a programmer to control the path of some set of instructions down one track or another.

They allow us to construct statements for things like:

if it is true that a user is logged in, then show them their profile

or

if it is false that a seat belt is fastened and true that a passenger is in the car, then make a beeping sound.

Likewise, Boolean values provide something that can be returned from relational operators - things like greater than, less than, and equal to - so we can construct meaningful procedures. In this case, saying something like:

Given an album costs $10 to purchase, if a customer has less than $10 available, then reject their purchase transaction
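As a rough, hypothetical sketch of that kind of check in Python - the names album_price and customer_balance are just made up for illustration:

# A relational operator like >= returns a Boolean value,
# which then controls the flow of the procedure.
album_price = 10
customer_balance = 7

can_afford = customer_balance >= album_price   # False in this case

if can_afford:
    print("Purchase approved")
else:
    print("Purchase rejected")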

or something like that. In general terms, Boolean logic is a kind of algebra that employs three operators:

AND, OR, NOT

and variables that can take one of two states, as we've mentioned. So let's say,

TRUE, FALSE.

Though its elements are simple, this framework turns out to be crucial to topics like digital circuit design, and, as we've just mentioned, software development. Note that the concept of True and False employed in this framework can be substituted with other things like 0 or 1, as mentioned, or even physical properties like a high or low voltage.
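In Python, those operators are spelled with the keywords and, or, and not, and because True and False behave like 1 and 0, you can see that substitution directly - a quick sketch:

# The three Boolean operators over the two possible states.
print(True and False)   # False
print(True or False)    # True
print(not True)         # False

# Substituting 1 and 0 for True and False gives the same structure.
print(1 and 0)   # 0
print(1 or 0)    # 1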

Using this kind of framework, we can, in fact, begin to model complex systems that describe the world. So, just to make another contrived example, I might imagine building an app for my phone that reminds me to eat breakfast. Now, in some method in this system, I'd most likely use Boolean logic to evaluate a proposition like: “It is true that I have eaten breakfast.” I'll need a Boolean variable to hold the truth state of whether or not I've eaten breakfast. With this, I can then use those logical operators AND, OR, and NOT to construct procedures whose behavior changes depending on the composition of this and other facts.

So, for example, maybe I'll build new propositions like “I have not eaten breakfast, and it is after 9 AM” to define some other downstream behavior. Notice that “after 9 AM” would probably make use of the relational operator greater than, which would return a Boolean value itself. So you can kind of see how the different pieces start to come together to form statements that we can use to control the flow of our program.
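Here's a small, hypothetical Python sketch of that composition - the variable names and the 9 AM rule are just the contrived breakfast example from above:

from datetime import datetime

have_eaten_breakfast = False          # a Boolean variable holding a truth state
current_hour = datetime.now().hour    # e.g. 10 for 10 AM

# "I have not eaten breakfast, and it is after 9 AM"
is_after_nine = current_hour > 9      # the relational operator returns a Boolean
should_remind = (not have_eaten_breakfast) and is_after_nine

if should_remind:
    print("Reminder: eat breakfast!")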

Okay, great. So we have a simple concept, albeit with some nuance in how it's implemented in different languages. But why do we call truth values Boolean in the first place? To answer this question, we'll have to take another trip back to the early 19th century, this time to Lincolnshire, England, to talk about George Boole.

Boole was born in 1815 in Lincoln, England, into a family that ran a shoemaking business. 1815, if you recall, was also the year, incidentally, that Ada Lovelace was born. This was around the time that Jane Austen was finishing writing her novel Emma. The War of 1812 was coming to an end. Beethoven, Schubert, and Rossini were all actively composing music, and it was around this time that the first commercially successful steam locomotives were produced in the United Kingdom, still several years before the technology would propagate to the U.S. As for George Boole, his life was off to a busy start.

His father, John Boole, greatly encouraged his son's interest in language, learning, math, and astronomy. He seems to have had a strong natural talent for learning, which was encouraged and nurtured by his father. He studied Latin and several European languages as a child. Although he left school when he was 16, Boole continued his studies independently and remained drawn to mathematics and languages. John Boole’s shoemaking business ultimately failed, and George actually supported his family by taking up a job teaching at several schools. Throughout his early adult life, he continued teaching and studying mathematics, and he published many papers on calculus in particular. One of his papers even won a gold medal from the Royal Society in London. Boole also published work that helped to form the basis for a branch of abstract algebra called invariant theory, which I know nothing about. It's apparently concerned with transformations on mathematical objects and the things that do not change under such transformations. My apologies to the more technical listeners of the show if I have completely butchered that high-level description.

Interestingly, Boole’s work in establishing invariant theory helped provide some of the tools used by Einstein in constructing his Theory of Relativity. Again, I'm going way outside of my wheelhouse here, but the link seems to have something to do with the invariance of the speed of light among observers. I can't say much about any of this at all, so I'll stop there, simply noting that it's an interesting connection.

In 1849, when Boole was in his mid-thirties, he joined Queen's College, Cork, in the far south of Ireland, now called University College Cork. It was founded in 1845, so at that time it was a brand new university. It was here that he developed the ideas about logic that would change the world forever.

It's important to note that Boole now became a member of an institution made up mainly of more privileged members of society. This was during a period of extreme destitution throughout Ireland known as the Great Famine, a famine caused by a significant potato blight that affected much of Europe throughout the 1840s and resulted in a mass starvation event in Ireland. Its devastation in County Cork, where Boole had moved, was significant. Across all of Ireland, approximately 1,000,000 people died and over 1,000,000 others emigrated from the country. Remember that Boole came from a very modest background, having had to support his parents as a teacher while still a teenager. It's clear that the mass suffering in Ireland was on Boole’s mind as he worked at the university.

You can actually read some of Boole’s letters online as original scans that have been professionally archived, thanks to an amazing collection made available by the library at University College Cork. I will provide a link on the Bitwise podcast website page for this episode. In a letter to his sister in October of 1849, he writes about a train ride from Dublin to Cork, noting the desolation he observed in the countryside.

It's also important to mention here that George Boole was a deeply religious individual. It seems that perhaps the most significant driving factor in his intellectual exploration was indeed his faith. His views on religion seemed to be essentially nondenominational Christian, as he attended several different churches as an adult.

Boole’s work “An Investigation of the Laws of Thought,” published in 1854, provides a kind of mathematical foundation for logic. This book is widely recognized as his most important contribution to logic and was a follow-up to his first book, “The Mathematical Analysis of Logic,” published in 1847. In these works, he really lays out the idea that logical statements can be expressed using symbols that can then be used in calculation. In other words, logical statements - statements about sets of things - can be constructed and manipulated in a way analogous to arithmetic.

So, for example, A + B is in some way analogous to A or B, and A multiplied by B is in the same way analogous to A and B. He defined multiplication in this algebra to be the intersection of sets. Boole also defined something he called the “Law of Duality,” later called “The Idempotent Law” by Benjamin Peirce, often represented as:

x^2 = x

Notice that this equation, “x-squared equals x,” is true in normal arithmetic for the numbers 1 and 0 only - rewriting it as x(x - 1) = 0 makes it clear that those are the only solutions. Mathematical operations are called “idempotent” if they produce the same result when applied once as when applied any number of subsequent times in succession.

So, take the number “1” in normal, everyday arithmetic. “1” is idempotent under multiplication because no matter how many times you multiply some value by 1, the result is always equal to what it was after the first application. Likewise, 0 is idempotent under addition, because no matter how many times you add zero to some initial value, the result is always the same as it was the first time.
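A few quick checks in Python make the arithmetic concrete (the value 7 here is arbitrary):

x = 7

# Multiplying by 1 again and again never changes the result after the first application.
print(x * 1)          # 7
print(x * 1 * 1 * 1)  # still 7

# Adding 0 again and again behaves the same way.
print(x + 0)          # 7
print(x + 0 + 0 + 0)  # still 7

# And x squared equals x only for 0 and 1 - Boole's "Law of Duality."
print([n for n in range(10) if n * n == n])   # [0, 1]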

To take a more practical example, consider a keyless remote for a car that has two buttons: one to lock the car and the other to unlock the car. The output, if you will, of pressing the unlock button is that the car doors are in an unlocked state. No matter how many times you press the unlock button, the result is always the same regardless of the initial state of the car before the first time the button was pressed.
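As a tiny, hypothetical sketch of that remote in Python - the Car class and its locked attribute are invented purely for illustration:

class Car:
    def __init__(self):
        self.locked = True   # initial state: doors locked

def press_unlock(car):
    # Idempotent: pressing unlock once or many times leaves the car in the same state.
    car.locked = False

car = Car()
press_unlock(car)
press_unlock(car)
press_unlock(car)
print(car.locked)   # False - the same as after the very first press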

There are other interesting functions in mathematics and computer science which are also idempotent - for example, the floor, ceiling, and fractional part functions. Take the floor function, which you can think of as an “always round down” function. So, given any real number, the floor function will return the greatest integer less than or equal to that input.

So if I floor 42.9, for instance, I'll get 42, and I'll continue to get 42 regardless of how many times I subsequently apply the floor function. The floor of 42.9 is 42, the floor of that is 42, the floor of that is 42 - forever.
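In Python, the floor function lives in the standard math module, and you can watch the repeated applications settle immediately:

import math

print(math.floor(42.9))                          # 42
print(math.floor(math.floor(42.9)))              # still 42
print(math.floor(math.floor(math.floor(42.9))))  # still 42 - forever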

Both of the Boolean operators AND and OR are also idempotent. So, if I make the statement,

I like mountain biking and hiking.

This is equivalent to saying:

I like mountain biking and hiking and hiking.

It is the same situation for OR. If I say:

I would like to visit Oregon or Washington.

that's logically equivalent to the statement:

I would like to visit Oregon or Washington or Washington.

It sounds kind of absurd to make such a statement precisely because it's redundant. Applying “or Washington” to the statement here doesn't change the set of places I'd like to visit. The output, if you will, is the same as it was the first time I applied “or Washington.”
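You can check the idempotence of both operators directly in Python, and the same redundancy shows up if you model the travel example as a set of destinations:

# AND and OR are idempotent for both possible Boolean values.
for a in (True, False):
    print((a and a) == a, (a or a) == a)   # True True, both times

# Adding "Washington" to the set of places again changes nothing.
places = {"Oregon", "Washington"}
print((places | {"Washington"}) == places)   # True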

In functional programming, which is a particular approach to the way software is written, as opposed to, say, imperative programming or object-oriented programming, the concept of idempotency is related to something called referential transparency. It's a pretty fancy term, but all it really means for a function to be referentially transparent is that an expression calling it can be replaced by its output - its value - without changing the program’s behavior.

So, for example, if I have a function called TWICE that takes an input integer and multiplies its value by two, then returns that value, and there's nothing else to modify the state of the program in between those two events, then I have a referentially transparent function. I always get the same output for a given input, and there's nothing about that function that will change the state of my system.

So, for example, if I have this function TWICE and I pass it the number 2, I can safely replace that expression, TWICE 2, anywhere in my system with the number 4 without changing the system's behavior. If this is the case, then that function TWICE is referentially transparent. It requires that the function is “pure” - in other words, it can have no side effects. This just means that the function can't do anything to modify some state outside of its own scope. Sometimes we write methods in software that do more than return a value - we write procedures that change something in our program, and sometimes even something outside of our program.

So, if my TWICE function did something like insert a record into a database, for instance, then it would modify the state of that system, and we'd no longer have a referentially transparent function. This observed change of state outside of the scope of a function is what we call a side effect.
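Here's a small Python sketch of the difference - twice is the pure, referentially transparent version from the example above, while twice_and_log is a hypothetical impure variant whose write to a log list stands in for something like a database insert:

def twice(x):
    # Pure: same output for the same input, and it touches nothing outside its own scope.
    return x * 2

audit_log = []

def twice_and_log(x):
    # Impure: it also modifies state outside its own scope - a side effect.
    audit_log.append(x)
    return x * 2

# Because twice is referentially transparent, the expression twice(2)
# can be replaced by the value 4 anywhere without changing behavior.
print(twice(2) + 1)   # 5
print(4 + 1)          # 5, identical

# twice_and_log(2) returns the same value, but the program's state has changed.
print(twice_and_log(2) + 1)   # 5
print(audit_log)              # [2]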

The application of Boolean logic to digital circuits, in work pioneered by Claude Shannon more than 80 years after Boole wrote his “Investigation of the Laws of Thought,” would ultimately help establish the foundation of information science. We'll investigate the life and work of Claude Shannon in a future episode of the podcast.

In reading about George Boole for the creation of this episode, one of the things that struck me the most was not only his search for and discovery of a framework in which we could manipulate symbols to make logical assertions about the world, but also the context in which that search took place. I wonder how learning multiple languages as a child, in particular, affected his thoughts about establishing symbolic representations of statements. I imagine it must have helped drive some desire to find a language in which statements could be made at a very atomic level. What were the underlying statements - the concepts that transcended the languages he had spent so much time learning? And how could he capture those ideas symbolically so that we could then use them to describe the world in a precise fashion?

So, here we have a figure born into modest socioeconomic circumstances, fortunate to have a family that encouraged intellectual exploration, and bestowed with an exceptional natural talent for learning. He experienced the failure of a family business and the need to support a family while still barely an adult. He saw firsthand the devastation of a mass starvation event while situated in relative comfort as an intellectual at a new university. I get the impression that Boole was an outsider in some respects, with access to the inside and the resources it provided to pursue a life of learning. George Boole was an original thinker, a seeker driven by deeply human instincts. His work toward discovering a framework for expressing logical propositions in the form of algebraic equations has echoed throughout the history of human technological development.

So, the next time you unlock your phone, or pull up a playlist to listen to your favorite music, or watch the street lights flicker on at dusk - or perhaps write a bit of code that enables anything like this to happen - maybe take a minute to reflect on the fact that all of these otherwise mundane events are possible in part as a result of human reasoning and the pursuits of real people trying to figure out the world and our minds and how they work.

In the next episode we’ll continue onward in our exploration of the foundations of computer science with a look at a few interesting figures, including Rózsa Péter, known as the founding mother of recursion theory, as well as how the work of Kurt Gödel and Bertrand Russell contributed to the foundations of computer science. We’ll then dive into the work of Alonzo Church and Alan Turing over the course of several subsequent episodes.

Thanks for listening!