# Circles 2: Defining Infinity

In our previous post, we began a discussion of circularity, first in dictionary definitions and then in mathematics. Today, we dig a little bit deeper by considering what should be a rather straightforward mathematical notion, and certainly a foundational issue for this blog: the definition of the term “finite set”. (And hence, by negation, the definition of “infinite set”. Four years into a blog about infinity and we’re only now rigorously defining what it is we’ve been talking about. How embarrassing!)

We’ve talked about sets quite a bit here; all you need to remember today is that a set is simply a collection of objects. A set can, of course, either be finite or infinite. Most of you reading this probably think that you understand this distinction quite well and, when presented with a relatively simply defined set, you could quite reliably and accurately classify it as either finite or infinite. For example, the set of all real numbers is infinite, whereas the set $\{\pi, e, 2\}$ is finite, as it contains only three elements. This intuitive understanding of “finite” and “infinite” works fine in everyday practice, but it does not amount to a definition, and if we want to study sets generally rather than specific instances of sets, or if we want to study the notion of “infinity”, then it will be important to have a solid definition of the concept “finite set”. And, considering that this is a concept that we are all quite familiar with, this turns out to be surprisingly tricky.

(Before continuing, take a moment to think about how you might try formally to define “finite set”. Can you think of multiple approaches to the task?)

When asked to define the term “finite set”, an immediate first attempt might be something like this:

Definition Attempt #1: A finite set is a set with only finitely many elements.

This is obviously a bad definition, though. When we’re defining the term “finite set”, what we’re really trying to do is define the word “finite”, so it’s unacceptable to use the word “finite” or “finitely” in its definition.

Let’s think about things a bit more carefully. Saying that a set is finite is making an assertion about its size, so let’s recall how we think about the size of a set. We say that two sets $A$ and $B$ have the same size if their elements can be put into a one-to-one correspondence with one another. Now, when you think about a finite set, you probably think about the act of counting the elements in the set, and eventually completing this count. What do you use to count the elements of the set? The natural numbers! (Recall that natural numbers are whole numbers of the form $0, 1, 2, \ldots$.) A finite set should thus be a set whose size is the same as some natural number. This might lead us to formulate the following attempted definition:

Definition Attempt #2: A finite set is a set whose elements can be placed into a one-to-one correspondence with some natural number.

This is much better than Definition Attempt #1, and in fact this is exactly how “finite set” is often defined. When doing mathematics though, we need to make sure we’re being rigorous, so let’s examine this definition a little more closely. Our definition makes crucial reference to the notion of a “natural number”, so if we are to declare success, then we should already have a solid definition of “natural number”.

But defining “natural number” is quite tricky in its own right. (Try it!) Notice that, when reminding you what a natural number is a few paragraphs ago, I did so by pointing to some examples, namely 0, 1, and 2, and saying, “it’s like those”. And this is a perfectly valid way of describing the natural numbers in conversational English, but it is certainly not precise enough to rigorously ground mathematics.

So let’s think more carefully about natural numbers. We can certainly talk about certain specific natural numbers, like 0 and 1. For example, 0 is the size of the empty set, and 1 is the size of the set $\{\emptyset\}$. We can also talk meaningfully about what it means to add 1 to a number: given a number $x$, find a set $A$ of size $x$ and an object $b$ that is not an element of $A$, and then say that $x + 1$ is the size of the set $A \cup \{b\}$ formed by adding $b$ to $A$ as an element.

Now notice that the natural numbers are precisely the numbers you can obtain by starting with 0 and then repeatedly adding 1. (For example, 4 = 0 + 1 + 1 + 1 + 1.) Great. It seems like we now have a fine definition of “natural number”, which in turn means that we have a fine definition of “finite set”. But wait a minute. When we say “repeatedly adding 1”, what do we really mean? How many times are we allowed to “repeatedly add 1”? Well, we want to specify that we’re only allowed to add 1 a finite number of times. But we haven’t defined “finite” yet, and, even worse, our current candidate for a definition of “finite” relies on the notion of “natural number”, so it would create a circle if we were to define “natural number” in terms of “finite”, and since we’re trying to be at least somewhat rigorous here, we definitely want to avoid that.
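The “start with 0 and repeatedly add 1” picture can be sketched in code. The encoding below is our own illustration, using the standard von Neumann convention (0 is the empty set, and $n + 1$ is $n \cup \{n\}$), which the post hasn’t committed to; it is a sketch, not the official definition.

```python
# A sketch (ours, not the post's official definition) of "start with 0 and
# repeatedly add 1", using the standard von Neumann encoding of the naturals:
# 0 is the empty set, and n + 1 is the set n ∪ {n}.
def zero():
    return frozenset()

def successor(n):
    # n + 1 contains everything in n, plus n itself as a new element
    return n | {n}

def natural(k):
    # "Repeatedly add 1" -- but note that the repetition is controlled by an
    # ordinary integer k, quietly smuggling in the very notion being defined!
    n = zero()
    for _ in range(k):
        n = successor(n)
    return n

# Under this encoding, the number n is itself a set with exactly n elements.
assert len(natural(4)) == 4
```

Amusingly, the sketch exhibits exactly the circularity discussed above: the loop that “repeatedly adds 1” is itself counted out by an ordinary integer.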

We could certainly continue down this path if we wanted and isolate a definition of “natural number” that does not depend on the general notion of “finite”. For our purposes, though, I think there’s a shorter, more elegant approach, so let’s change tack slightly. Rather than trying to characterize finite sets, let’s try to characterize infinite sets and use this to arrive at a definition of finiteness.

One of the first observations we make when we start thinking about infinity mathematically is one that might initially be rather unintuitive and disorienting: there are infinite sets $A$ and $B$ such that $B$ obviously contains “more” elements than $A$, and yet $A$ and $B$ have the same size. The first example of this you probably saw is the one in which $A$ is the set of even natural numbers and $B$ is the set of all natural numbers. $B$ obviously contains all of the elements of $A$ plus many more elements that are not in $A$, and yet we can see that $A$ and $B$ have the same size via the one-to-one correspondence that matches the 0 from $A$ with the 0 from $B$, the 2 from $A$ with the 1 from $B$, the 4 from $A$ with the 2 from $B$, and so on. (I know we made this pivot precisely because we haven’t rigorously defined the notion of “natural number”, but that’s fine. This is just an illustrative example, not a foundational part of what will eventually be our definition.)
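This correspondence is simple enough to spell out as a few lines of code (a minimal sketch; the function names are ours):

```python
# The one-to-one correspondence from the text, spelled out: match the natural
# number n (from B) with the even number 2n (from A).
def to_even(n):       # B -> A
    return 2 * n

def from_even(m):     # A -> B
    return m // 2

# Round-tripping in both directions shows the matching never skips or
# double-counts an element on either side.
for n in range(1000):
    assert from_even(to_even(n)) == n            # B -> A -> B
    assert to_even(from_even(2 * n)) == 2 * n    # A -> B -> A
```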

When we reflect on this further, this phenomenon in fact seems to be a fundamental feature of what it means for a set to be infinite. Given any infinite set $B$, it is always possible to find a proper subset $A$ of $B$ such that $A$ and $B$ have the same size (remember that $A$ is a proper subset of $B$ if every element of $A$ is also an element of $B$ but $B$ contains at least one element that is not in $A$). Indeed, if you start with an infinite set $B$ and simply remove one element from $B$, you always end up with a set of the same size as $B$. This is certainly not the case with finite sets. If you have a set of size 6, for instance, and take an element out of it, you now have a set of size 5, which is strictly smaller than your original set. Since this phenomenon seems so characteristic of infinite sets, then, why don’t we take it as the definition of “infinite”, and hence take its negation as the definition of “finite”?

Definition Attempt #3: A finite set is a set with the property that there is no proper subset of the set that has the same size as the set itself. In other words, a set is finite if, whenever you take an element out of the set, you make the set strictly smaller.
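For small, concrete sets, Definition Attempt #3 can even be checked by brute force. The sketch below is our own illustration (and of course no program can run this check on an infinite set): it searches every proper subset for a one-to-one correspondence with the whole set and, as expected, never finds one.

```python
from itertools import combinations, permutations

# A brute-force check of Definition Attempt #3 on small concrete sets.
# A set S has a same-size proper subset T only if some one-to-one matching
# pairs each element of S with a distinct element of T.
def has_same_size_proper_subset(S):
    S = list(S)
    for r in range(len(S)):               # r < len(S), so T is a proper subset
        for T in combinations(S, r):
            # permutations(T, len(S)) enumerates the candidate matchings;
            # it yields nothing whenever T has fewer elements than S
            for matching in permutations(T, len(S)):
                return True
    return False

# Every set we can write out element by element passes the finiteness test:
assert not has_same_size_proper_subset({"pi", "e", 2})
assert not has_same_size_proper_subset(set(range(6)))
```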

This is now a non-circular, rigorous definition (for a certain definition of the word “rigorous”…) and, to my mind at least, it is quite elegant and matches up with our experience and intuition of finite sets.

This definition of “finite” in fact has a relatively long history. It was first proposed by the German mathematician Richard Dedekind in 1888 and was the first modern definition of “finite” that did not rely on the definition of “natural number”. Nowadays, a set satisfying our Definition Attempt #3 is said to be “Dedekind-finite”, while a set satisfying Definition Attempt #2 (after a rigorous definition of “natural number” has been made) is simply said to be “finite”.

Are these two definitions “the same”? Well, they both seem to match our intuition of “finite set”, and for a number of years after the introduction of “Dedekind-finite sets”, it was widely believed among mathematicians that they were the same. And, under the standard axioms of set theory, it is in fact the case that they are the same, i.e., that the finite sets are precisely the Dedekind-finite sets. However, one of these standard axioms of set theory is the sometimes controversial Axiom of Choice. Since the Axiom of Choice often leads to unintuitive consequences (the Banach-Tarski paradox, for instance), mathematicians often like to consider what happens if we don’t adopt the Axiom of Choice. And it turns out that, without the Axiom of Choice, it is impossible to prove that these two definitions of “finite” are the same. In fact, there are mathematical universes that satisfy all of the standard axioms of set theory except for the Axiom of Choice and in which there are sets that are Dedekind-finite and yet are not the same size as any natural number. Try to think about what such a set would be like!

I had meant to use this post to discuss hereditarily finite sets, which provide an example of a mathematical definition that at first glance looks circular but turns out not to be. It took us so much work simply to define finite sets, though, that I think we’ll save that for next time!

Cover image: “Still life, pitcher and fruit,” by Paul Cézanne

# Circles 1: Circular Definitions

circular definition. noun Logic. a definition in which the definiendum (the expression being defined) or a variant of it appears in the definiens (the expression that defines it).

-dictionary.com

Today we return to our “Circles” series, which we introduced what seems like years ago (though it was actually just this past March), with its first real installment, on circular definitions.

We are rightfully taught from a young age to avoid using a word in its own definition. This makes obvious sense: a definition is, in part, supposed to convey the meaning of a word to a reader who is unfamiliar with it, ideally using words that the reader is already familiar with or can become familiar with. If the unfamiliar word itself appears in the definition, then the definition can hardly convey this meaning effectively; it has failed at one of its primary tasks.

Even if we diligently avoid using words within their own definitions, though, circularity can easily crop up across multiple definitions. For example, we might make the following sequence of definitions:

ice. noun. the solid form of water.

steam. noun. the gaseous form of ice.

water. noun. the liquid form of steam.

Although none of these definitions is circular on its own, they form a closed circle when taken as a set, and this obscures the meaning of all three words. A person who is unfamiliar with all of the words “ice”, “steam”, and “water” will come away from these definitions knowing something, namely that the three words represent, respectively, the solid, gas, and liquid forms of a certain substance, but not much more than that. They would not know, for instance, any relevant properties of this substance, or where this substance might be found, or what it is used for, and these three definitions would certainly not be printed together in any decent dictionary. (One might argue that the definitions of “ice” and “steam” are basically fine, if a bit limited, and that the definition of “water” should be altered to something more descriptive, thus breaking this instance of circularity. This is essentially what is done by Merriam-Webster, for instance).

Or here is an example actually occurring in Merriam-Webster. (To be fair, the full definitions in M-W are much longer and have multiple entries that we are not quoting in their entirety here. Still, what we reproduce below are exact quotes of the first definitions of the words in question. (The definition of “weigh” we quote is the first given as an intransitive verb, which is the sense in which it is being used in the definition of “weight”.))

weight. noun. the amount that a thing weighs.

weigh. verb. to have a certain heaviness.

This probably doesn’t seem like an ideal scenario, but satisfactorily removing this circularity from the definitions of “weight”, “weigh”, and “heaviness” (which redirects to “heavy”) is more difficult than it might initially seem. I encourage you to spend a few minutes thinking about it. Let me know if you come up with something significantly better.

One might be wondering at this point if it is even possible to avoid circularity entirely in a system of definitions.  In fact, a rather simple argument, of a sort that is ubiquitous in the mathematical field of graph theory, shows that it is impossible. Let us look at this argument now.

In our analysis, we will represent certain information from the dictionary in a directed graph, i.e., a collection of nodes and a collection of arrows between certain of these nodes. In our graph, each node will represent a word, and there will be an arrow pointing from Word 1 to Word 2 if Word 2 appears in the definition of Word 1. For example, a portion of this graph containing some of the words in our “ice”, “steam”, and “water” definitions given above might look like this:

We now make two assumptions that I think will seem quite reasonable:

1. There are only finitely many words in the dictionary.
2. Every word that is used in the dictionary itself has a definition in the dictionary.

We will now see that these two innocuous assumptions necessarily lead to a circular set of definitions. To see this, choose any word in the dictionary and start following an arbitrary path of arrows in the directed graph derived from the dictionary. For example, in the segment above, we might start at “ice”, then move to “water”, then to “liquid”, and then elsewhere, maybe to “flowing”, for example. Let us label the words we visit on this path as $w_1, w_2, w_3,$ and so on. Recalling the meaning of the graph, this means that the definition of $w_1$ contains $w_2$, the definition of $w_2$ contains $w_3$, and so on.

A consequence of Assumption 2 is that each of the words $w_n$ visited in our path itself has a definition, and therefore there are certainly arrows pointing out of the node associated with $w_n$. This means that we never get stuck on our path. We can always choose a next node, so we can continue our path indefinitely and find a word $w_n$ for every natural number $n$.

We now have an infinite path of words. But Assumption 1 says there are only finitely many words in the dictionary, so this means that our path must repeat some words! In other words, there are numbers $m < n$ such that $w_m$ and $w_n$ are the same word. For example, maybe $w_6$ is the same word as $w_{10}$. But in that case, look at what we have:

• the definition of $w_6$ contains $w_7$;
• the definition of $w_7$ contains $w_8$;
• the definition of $w_8$ contains $w_9$;
• the definition of $w_9$ contains $w_6$ (since $w_6 = w_{10}$).

Thus, $w_6$, $w_7$, $w_8$, and $w_9$ form a circular set of definitions. (The same sort of analysis obviously holds for any values of $m$ and $n$, not just for 6 and 10.)
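The whole argument can be sketched in a few lines of code on the toy “ice”/“steam”/“water” dictionary from earlier. In a real dictionary, every word in every definition would have its own entry (Assumption 2); in this three-entry miniature, we simply follow, at each step, a definition word that has an entry of its own. The path is forced back around to a word it has already visited.

```python
# The toy dictionary as a directed graph: each word points to the content
# words used in its definition.
definitions = {
    "ice":   ["solid", "water"],
    "steam": ["gaseous", "ice"],
    "water": ["liquid", "steam"],
}

def find_repeat(start, graph):
    # Follow arrows until some word is visited twice, then return the
    # circular stretch of the path. We always step to the first definition
    # word that has an entry of its own in our miniature dictionary.
    path = [start]
    seen = {start: 0}
    while True:
        next_word = next(w for w in graph[path[-1]] if w in graph)
        if next_word in seen:
            return path[seen[next_word]:] + [next_word]
        seen[next_word] = len(path)
        path.append(next_word)

# Starting from "ice", the path comes back around: ice -> water -> steam -> ice
assert find_repeat("ice", definitions) == ["ice", "water", "steam", "ice"]
```

Since the graph is finite and every node we visit has an outgoing arrow, the `while` loop can never run out of next words, so a repeat is inevitable, exactly the pigeonhole argument above.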

The fact that circularity is unavoidable should not, though, be a cause for despair, but can perhaps help us to clarify certain ideas about language and dictionaries. A dictionary is not meant to secure a language on a perfectly logically sound, immutable foundation. Nobody is worried that an inconsistency will be found in the dictionary and, as a result, the English language will collapse (whereas this sort of worry certainly does exist, in a limited form, in mathematics). Rather, it is meant to serve as a living documentation of a language as it is actually used in practice, and our everyday use of language is delightfully full of imprecisions and ambiguities and circularities. I would not want things to be any other way.

(This is not to say, of course, that all circular definitions should be uncritically embraced. Circular definitions are often simply unhelpful and can easily be improved. Our definitions of “ice”, “steam”, and “water”, for instance, would be greatly improved simply by defining “water” as something like “the liquid form of a substance whose molecules consist of two hydrogen atoms and one oxygen atom, which freezes at 0 degrees Celsius and boils at 100 degrees Celsius, and which is the principal component of the Earth’s rivers, oceans, lakes, and rain”.)

In the spirit of play, though, and in anticipation of the forthcoming mathematical discussion, let us consider two ways in which our language could be altered to actually allow us to avoid circles. These two ways essentially amount to denying the two Assumptions isolated in our proof above. The first way consists of allowing our language to be infinite. Our argument above hinged on the fact that our language has only finitely many words, so our path must return to a previously visited node at some point. If we had infinitely many words, though, we could line them all up and write down definitions so that each word’s definition only depended on words appearing later. So, for example, Word 1 might be defined in terms of Words 3, 5, and 17, and Word 3 might then be defined in terms of Words 6, 12, and 24, and so on. In practice, this is untenable for obvious physical reasons, but, even absent these physical concerns, it’s also unclear how any actual meaning would arise from such a system.

The second way consists of retaining a finite language but specifying one or more words as not needing definitions. These would be something like atomic words, perhaps expressing pure, indivisible concepts. In the graph diagram in our argument above, the nodes corresponding to these words would have no arrows coming out of them, so our paths would simply come to a halt, with no contradiction reached, if we ever reached such a node. Indeed, it is not hard to see how, given even a small number of such atomic words, one could build an entire dictionary, free from any circularity, on their foundation.

In some sense, this is actually close to how language works in practice. You can look up, for instance, the word “the” in the dictionary and find a definition that, at least for the first few entries, studiously avoids the use of “the”. Such definitions are certainly of use to, say, linguists, or poets. And yet nobody actually learns the word “the” from its definition. Similarly, you probably learned a word like “cat” by, over time, seeing many instances of actual cats, or images or sounds of cats, being told each time that what you were witnessing is a cat, and subconsciously collating the similarities across these examples into a robust notion of the word “cat”. Reading a definition may help clarify some formal aspects of the concept of “cat”, but it probably won’t fundamentally alter your understanding of the word.

One place where circularity certainly should be avoided, though, is in mathematical reasoning. Just as definitions of words make use of other words, proofs of mathematical theorems often make use of other theorems. For example, if I wanted to prove that there is a real number that is not a solution of any polynomial equation with integer coefficients (i.e., there exists a transcendental number), the shortest path might be to make use of two theorems due to Cantor, the first being that the set of polynomial equations with integer coefficients has the same size as the set of natural numbers, and the second being that the set of real numbers is strictly larger than the set of natural numbers. Since these theorems have already been proven, we are free to use them in our proof of the existence of transcendental numbers.

Carelessness about circularity can lead to problems here, though. Suppose we have “proved” three theorems (Theorem A, Theorem B, and Theorem C), but suppose also that our proof of Theorem A depends on the truth of Theorem B, our proof of Theorem B depends on the truth of Theorem C, and our proof of Theorem C depends on the truth of Theorem A. Have we really proven anything? Well, no, we haven’t. In fact, if you allow for such circular reasoning, it’s easy to prove all sorts of obviously false statements (and I encourage you to try!).

How does mathematics avoid this problem and actually get started proving things? Exactly by the second method outlined above for potentially removing circular definitions from language. Just as we specified certain words as not needing definitions there, here we specify certain mathematical statements as being true without needing proof. These statements are known as axioms; they are accepted as being true, and they serve as the foundation on which mathematical theories are built.
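The contrast between a circular trio of theorems and an axiom-grounded theory can be sketched as a dependency graph, using the topological sorter in Python 3.9+’s standard library (the statement names below are invented for illustration). A valid order in which to prove everything exists exactly when there is no circular chain of dependencies, and the axioms, which depend on nothing, come first.

```python
# Proof dependencies as a graph: each statement maps to the statements its
# proof uses. Axioms depend on nothing. (The statement names are invented.)
from graphlib import TopologicalSorter, CycleError

depends_on = {
    "Axiom 1": [],
    "Axiom 2": [],
    "Theorem A": ["Axiom 1"],
    "Theorem B": ["Axiom 2", "Theorem A"],
}

# A valid order in which to prove everything: axioms first, and every
# statement appears only after the statements its proof relies on.
order = list(TopologicalSorter(depends_on).static_order())

# The circular trio from above -- A needs B, B needs C, C needs A -- admits
# no such order at all:
circular = {"Thm A": ["Thm B"], "Thm B": ["Thm C"], "Thm C": ["Thm A"]}
try:
    list(TopologicalSorter(circular).static_order())
except CycleError:
    print("no valid proof order exists")
```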

Statements can be chosen as axioms for a number of reasons. Often they are statements that are either seen as obviously true or definitionally true. Sometimes long experience with a particular mathematical field leads to recognition that adopting a certain statement as an axiom is particularly useful or leads to particularly interesting consequences. Different sets of axioms are adopted by different mathematicians depending on their goals and area of study. But they are always necessary to make mathematical progress.

A proper discussion of axioms would fill up many books, so let us end this post here. In our next entry in our Circles series, we will look at the important mathematical idea of well-foundedness and how it allows us to develop mathematical definitions or arguments that at first glance may appear circular but are in fact perfectly valid.

Cover image: “Etretat in the Rain” by Claude Monet