Circles 1: Circular Definitions

circular definition. noun Logic. a definition in which the definiendum (the expression being defined) or a variant of it appears in the definiens (the expression that defines it).

-dictionary.com

Today we return to our “Circles” series, which we introduced what seems like years ago but was actually just this past March, with its first real installment: circular definitions.


We are rightly taught from a young age to avoid using a word in its own definition. This makes obvious sense: a definition is, in part, supposed to convey the meaning of a word to a reader who is unfamiliar with it. Ideally, it does so using words that the reader is already familiar with or can become familiar with. If the unfamiliar word itself appears in the definition, then the definition can hardly convey this meaning effectively; it has failed at one of its primary tasks.

Even if we diligently avoid using words within their own definitions, though, circularity can easily crop up across multiple definitions. For example, we might make the following sequence of definitions:

ice. noun. the solid form of water.

steam. noun. the gaseous form of ice.

water. noun. the liquid form of steam.

Although none of these definitions is circular on its own, they form a closed circle when taken as a set, and this obscures the meaning of all three words. A person unfamiliar with all of the words “ice”, “steam”, and “water” will come away from these definitions knowing something, namely that the three words name, respectively, the solid, gaseous, and liquid forms of a single substance, but not much more than that. They would not know, for instance, any relevant properties of this substance, where it might be found, or what it is used for, and these three definitions would certainly not be printed together in any decent dictionary. (One might argue that the definitions of “ice” and “steam” are basically fine, if a bit limited, and that the definition of “water” should be altered to something more descriptive, thus breaking this instance of circularity. This is essentially what Merriam-Webster does, for instance.)

Or here is an example actually occurring in Merriam-Webster. (To be fair, the full entries in M-W are much longer and have multiple senses that we are not quoting in their entirety here. Still, what we reproduce below are exact quotations of the first definitions of each of the three respective words; the definition of “weigh” we quote is the first given for the intransitive verb, which is the sense in which it is used in the definition of “weight”.)

weight. noun. the amount that a thing weighs.

weigh. verb. to have a certain heaviness.

heavy. adjective. having great weight.

This probably doesn’t seem like an ideal scenario, but satisfactorily removing this circularity from the definitions of “weight”, “weigh”, and “heaviness” (which redirects to “heavy”) is more difficult than it might initially seem. I encourage you to spend a few minutes thinking about it. Let me know if you come up with something significantly better.


One might be wondering at this point if it is even possible to avoid circularity entirely in a system of definitions.  In fact, a rather simple argument, of a sort that is ubiquitous in the mathematical field of graph theory, shows that it is impossible. Let us look at this argument now.

In our analysis, we will represent certain information from the dictionary in a directed graph, i.e., a collection of nodes and a collection of arrows between certain of these nodes. In our graph, each node will represent a word, and there will be an arrow pointing from Word 1 to Word 2 if Word 2 appears in the definition of Word 1. For example, a portion of this graph containing some of the words in our “ice”, “steam”, and “water” definitions given above might look like this:

A portion of our “Dictionary directed graph”. Note the circle formed by “ice”, “water”, and “steam”.

We now make two assumptions that I think will seem quite reasonable:

  1. There are only finitely many words in the dictionary.
  2. Every word that is used in the dictionary itself has a definition in the dictionary.

We will now see that these two innocuous assumptions necessarily lead to a circular set of definitions. To see this, choose any word in the dictionary and start following an arbitrary path of arrows in the directed graph derived from the dictionary. For example, in the segment above, we might start at “ice”, then move to “water”, then to “liquid”, and then elsewhere, maybe to “flowing”, for example. Let us label the words we visit on this path as w_1, w_2, w_3, and so on. Recalling the meaning of the graph, this means that the definition of w_1 contains w_2, the definition of w_2 contains w_3, and so on.

A consequence of Assumption 2 is that each of the words w_n visited in our path itself has a definition, and therefore there are certainly arrows pointing out of the node associated with w_n. This means that we never get stuck on our path. We can always choose a next node, so we can continue our path indefinitely and find a word w_n for every natural number n.

We now have an infinite path of words. But Assumption 1 says there are only finitely many words in the dictionary, so this means that our path must repeat some words! In other words, there are numbers m < n such that w_m and w_n are the same word. For example, maybe w_6 is the same word as w_{10}. But in that case, look at what we have:

  • the definition of w_6 contains w_7;
  • the definition of w_7 contains w_8;
  • the definition of w_8 contains w_9;
  • the definition of w_9 contains w_6 (since w_6 = w_{10}).
A circle of definitions.

Thus, w_6, w_7, w_8, and w_9 form a circular set of definitions. (The same sort of analysis obviously holds for any values of m and n, not just for 6 and 10.)
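To make the argument concrete, here is a minimal sketch in Python of the path-following procedure just described. The toy dictionary is invented for illustration; in it, each word maps to the list of words appearing in its definition.

```python
def find_circle(dictionary, start):
    """Follow definition arrows from `start` until a word repeats,
    then return the resulting circle of definitions."""
    path = [start]
    seen = {start: 0}
    while True:
        # Assumption 2: every word used has a definition, so there is
        # always a next arrow to follow (here we just take the first).
        current = dictionary[path[-1]][0]
        if current in seen:
            # Assumption 1 (finitely many words) guarantees we get here.
            return path[seen[current]:] + [current]
        seen[current] = len(path)
        path.append(current)

# An invented fragment of a dictionary.
toy_dictionary = {
    "ice": ["solid", "water"],
    "solid": ["ice"],
    "water": ["liquid", "steam"],
    "liquid": ["water"],
    "steam": ["gas", "ice"],
    "gas": ["steam"],
}

print(find_circle(toy_dictionary, "ice"))  # ['ice', 'solid', 'ice']
```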


The fact that circularity is unavoidable should not, though, be a cause for despair, but can perhaps help us to clarify certain ideas about language and dictionaries. A dictionary is not meant to secure a language on a perfectly logically sound, immutable foundation. Nobody is worried that an inconsistency will be found in the dictionary and, as a result, the English language will collapse (whereas this sort of worry certainly does exist, in a limited form, in mathematics). Rather, it is meant to serve as a living documentation of a language as it is actually used in practice, and our everyday use of language is delightfully full of imprecisions and ambiguities and circularities. I would not want things to be any other way.

(This is not to say, of course, that all circular definitions should be uncritically embraced. Circular definitions are often simply unhelpful and can easily be improved. Our definitions of “ice”, “steam”, and “water”, for instance, would be greatly improved simply by defining “water” as something like “the liquid form of a substance whose molecules consist of two hydrogen atoms and one oxygen atom, which freezes at 0 degrees Celsius and boils at 100 degrees Celsius, and which is the principal component of the Earth’s rivers, oceans, lakes, and rain”.)

In the spirit of play, though, and in anticipation of the forthcoming mathematical discussion, let us consider two ways in which our language could be altered to actually allow us to avoid circles. These two ways essentially amount to denying the two Assumptions isolated in our proof above. The first way consists of allowing our language to be infinite. Our argument above hinged on the fact that our language has only finitely many words, so our path must return to a previously visited node at some point. If we had infinitely many words, though, we could line them all up and write down definitions so that each word’s definition only depended on words appearing later. So, for example, Word 1 might be defined in terms of Words 3, 5, and 17, and Word 3 might then be defined in terms of Words 6, 12, and 24, and so on. In practice, this is untenable for obvious physical reasons, but, even absent these physical concerns, it’s also unclear how any actual meaning would arise from such a system.

The second way consists of retaining a finite language but specifying one or more words as not needing definitions. These would be something like atomic words, perhaps expressing pure, indivisible concepts. In the graph diagram in our argument above, the nodes corresponding to these words would have no arrows coming out of them, so our paths would simply come to a halt, with no contradiction reached, if we ever reached such a node. Indeed, it is not hard to see how, given even a small number of such atomic words, one could build an entire dictionary, free from any circularity, on their foundation.
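Here is a companion sketch (again with an invented mini-dictionary; the choice of atomic words is ours) checking that every definitional path grounds out in an atomic word, so that no circularity can arise.

```python
def grounds_out(dictionary, word, visiting=None):
    """Return True if every definition path from `word` halts at an
    atomic word, i.e. one with no definition arrows out of it."""
    if visiting is None:
        visiting = set()
    if word in visiting:
        return False  # we have come back around: a circle
    if not dictionary.get(word):
        return True   # an atomic word: the path halts here
    visiting.add(word)
    ok = all(grounds_out(dictionary, w, visiting) for w in dictionary[word])
    visiting.remove(word)
    return ok

# "substance" and "cold" are taken as atomic in this toy example.
atomic_toy = {
    "substance": [],
    "cold": [],
    "ice": ["cold", "substance"],
    "water": ["substance"],
}

print(all(grounds_out(atomic_toy, w) for w in atomic_toy))  # True
```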

In some sense, this is actually close to how language works in practice. You can look up, for instance, the word “the” in the dictionary and find a definition that, at least for the first few entries, studiously avoids the use of “the”. Such definitions are certainly of use to, say, linguists, or poets. And yet nobody actually learns the word “the” from its definition. Similarly, you probably learned a word like “cat” by, over time, seeing many instances of actual cats, or images or sounds of cats, being told each time that what you were witnessing is a cat, and subconsciously collating the similarities across these examples into a robust notion of the word “cat”. Reading a definition may help clarify some formal aspects of the concept of “cat”, but it probably won’t fundamentally alter your understanding of the word.


One place where circularity certainly should be avoided, though, is in mathematical reasoning. Just as definitions of words make use of other words, proofs of mathematical theorems often make use of other theorems. For example, if I wanted to prove that there is a real number that is not a solution of any polynomial equation with integer coefficients (i.e., there exists a transcendental number), the shortest path might be to make use of two theorems due to Cantor, the first being that the set of polynomial equations with integer coefficients has the same size as the set of natural numbers, and the second being that the set of real numbers is strictly larger than the set of natural numbers. Since these theorems have already been proven, we are free to use them in our proof of the existence of transcendental numbers.

Carelessness about circularity can lead to problems here, though. Suppose we have “proved” three theorems (Theorem A, Theorem B, and Theorem C), but suppose also that our proof of Theorem A depends on the truth of Theorem B, our proof of Theorem B depends on the truth of  Theorem C, and our proof of Theorem C depends on the truth of Theorem A. Have we really proven anything? Well, no, we haven’t. In fact, if you allow for such circular reasoning, it’s easy to prove all sorts of obviously false statements (and I encourage you to try!).

How does mathematics avoid this problem and actually get started proving things? Exactly by the second method outlined above for potentially removing circular definitions from language. Just as we specified certain words as not needing definitions there, here we specify certain mathematical statements as being true without needing proof. These statements are known as axioms; they are accepted as being true, and they serve as the foundation on which mathematical theories are built.

Statements can be chosen as axioms for a number of reasons. Often they are statements that are either seen as obviously true or definitionally true. Sometimes long experience with a particular mathematical field leads to recognition that adopting a certain statement as an axiom is particularly useful or leads to particularly interesting consequences. Different sets of axioms are adopted by different mathematicians depending on their goals and area of study. But they are always necessary to make mathematical progress.

A proper discussion of axioms would fill up many books, so let us end this post here. In our next entry in our Circles series, we will look at the important mathematical idea of well-foundedness and how it allows us to develop mathematical definitions or arguments that at first glance may appear circular but are in fact perfectly valid.


Cover image: “Etretat in the Rain” by Claude Monet

Pythagoras’ Table of Opposites

On March 14 (aka Pi Day, aka International Day of Mathematics, aka Save a Spider Day), the New York Times reposted a 2012 piece about infinity by Natalie Angier (originally published on New Year’s Eve, aka Leap Second Time Adjustment Day, aka Make Up Your Mind Day). For obvious reasons, I thought the essay might be of interest to readers of this blog. It doesn’t delve particularly deeply into any one facet of infinity, but offers a nice overview of some views of the infinite through history.

One thing to which I was introduced by Angier’s piece is Pythagoras’ Table of Opposites, which collects ten pairs of contrasting qualities, placing those viewed positively by the Pythagoreans on the left and those viewed negatively on the right (which is sort of ironic given the fourth pair of opposites in the table below). Here is a translation of a version of the Pythagorean Table of Opposites found in the work of Aristotle:

Finite / Infinite
Odd / Even
One / Many
Right / Left
Male / Female
Rest / Motion
Straight / Curved
Light / Darkness
Good / Evil
Square / Oblong

I’m not going to write too much about this, as I’m certainly not an expert in ancient Greek philosophy, but let me just make a few observations:

  • It’s no surprise that “infinite” was placed on the negative side of the table, given the Pythagoreans’ antipathy to irrational numbers and the accompanying infinities. We have written previously about mathematicians’ growing acceptance of infinity, especially in the last 150 years, though there are still a few mathematicians who would rather do away with the concept of infinity in mathematics.
  • The pairs “One/Many” and “Rest/Motion” play central roles in the paradoxes of Zeno of Elea, covered in this previous post.
  • The association of “right” with “good” and “left” with “bad” is certainly not unique to the ancient Greeks, showing up in a number of other societies throughout the world. It even persists in slightly hidden form in modern etymology: the Latin word for “right” is “dexter”, from which the English words “dextrous” and “dexterity” derive, while the Latin word for “left” is “sinister”, from which, naturally, the English word “sinister” derives. For a deeper and more rigorous account of “left” and “right” and other pairs of opposites in ancient Greek philosophy, see G.E.R. Lloyd’s paper, “Right and Left in Greek Philosophy”.

Cover Image: “PH-929” by Clyfford Still

Circles 0

Life is a full circle, widening until it joins the circle motions of the infinite.

-Anaïs Nin

No shape has captured the human imagination quite like the circle has. Its perfect symmetry and constant radius stand in contrast to the messy variability of our everyday lives. We have inner circles, vicious circles, fairy circles, crop circles, family circles, circles of influence, the circle of life. Circles permeate our conception of time, as they provide the shape of our clocks, and when things return to their original configuration, we say that they have come full circle.

We have seen intimate connections between circles and infinity in many previous posts. The circle is the one-point compactification of the infinite straight line, which itself can be thought of as a circle with infinite radius. The process of circle inversion provides a useful duality between the finite region inside a circle and the infinite region outside. The Poincaré disk model provides an elegant finite setting for a decidedly infinite instantiation of hyperbolic geometry.

In the next few posts, we will be exploring circles more deliberately, through lenses of mathematics, philosophy, linguistics, art, and literature. We will think about circular definitions and closed timelike curves, about causality and the Liar Paradox.

These are for future weeks, though. For today, to lay the foundations and whet the appetite, please enjoy these three essential pieces of circle-related culture:

Although popularly every one called a Circle is deemed a Circle, yet among the better educated Classes it is known that no Circle is really a Circle, but only a Polygon with a very large number of very small sides. As the number of the sides increases, a Polygon approximates to a Circle; and, when the number is very great indeed, say for example three or four hundred, it is extremely difficult for the most delicate touch to feel any polygonal angles. Let me say rather, it would be difficult: for, as I have shown above, Recognition by Feeling is unknown among the highest society, and to feel a circle would be considered a most audacious insult. This habit of abstention from Feeling in the best society enables a Circle the more easily to sustain the veil of mystery in which, from his earliest years, he is wont to enwrap the exact nature of his Perimeter or Circumference. Three feet being the average Perimeter, it follows that, in a Polygon of three hundred sides, each side will be no more than the tenth part of an inch; and in a Polygon of six or seven hundred sides the sides are little larger than the diameter of a Spaceland pin-head. It is always assumed, by courtesy, that the Chief Circle for the time being has ten thousand sides.

Edwin A. Abbott, Flatland: A Romance of Many Dimensions

The Subtle Art of Go, or Finite Simulations of Infinite Games

Tatta hito-ban to
uchihajimeta wa
sakujitsu nari

Saying `just one game’
they began to play . . .
That was yesterday.

-Senryū (trans. William Pinckard)

A few weeks ago, I found myself in Virginia’s finest book store and made a delightful discovery: a newly published translation of A Short Treatise Inviting the Reader to Discover the Subtle Art of Go, by Pierre Lusson, Georges Perec, and Jacques Roubaud (two mathematicians and two poets, all associates of the French literary workshop Oulipo), originally published in France in 1969.

Go, of course, is a notoriously difficult and abstract board game, invented in China over 2500 years ago and further developed in Korea and Japan. After millennia of being largely unknown outside of East Asia, it has in the last century become popular throughout the world (the publication of this book played a significant role in introducing it to France) and has even been in the news recently, as computer programs using neural networks have defeated some of the best professional go players in highly publicized matches.

The stated goal of Lusson, Perec, and Roubaud’s book is to “[provide], in a clear, complete, and precise manner, the rules of the game of GO” and to “heighten interest in this game.” As a practical manual on the rules and basic tactics and strategy of the game, the modern reader can do much better with other books. As a literary meditation on play, on art, and on infinity, it is dazzling. It is this latter aspect of the book that I want to touch on here today.


A theme running throughout the book is the idea that the practice of go is akin to a journey into infinity, and this theme is expressed both with respect to one’s relationship with other players and with one’s relationship to the game itself.

A joy of learning any game is developing relationships and rivalries with other players, and this is especially true with go, for two main reasons. First, an individual match is not simply won or lost but rather is won or lost by a certain number of points. Second, there is a robust handicapping system whereby a substantially weaker player can legitimately compete with a stronger player, in a match of interest to both players, by placing a specific number of pieces on specific points of the board before the first move. Through these two mechanisms, a rich and rewarding go relationship can thus develop, even between players of unequal skill, over not just one match but a progression of matches, during which handicaps can be finely calibrated and can, indeed, change over time, as the players learn more about each other’s play and about the game in general.

As such, GO, at its limits, constitutes the best finite simulation of an infinite game. The two adversaries battle on the Goban the way cyclists pursue each other in a velodrome.

This is not as much the case with, say, chess, where the result of a game is purely binary and handicap systems are clumsier, altering the character of the game more substantially. As a result, chess matches between players of unequal skill will frequently be frustrating for the weaker player and boring for the stronger.


It is a cliché to say that one can never truly master a subject, that there is always more to learn. But the richness of go makes it especially true here, and, in a sense, quantifiably so. The number of legal positions in go is more than 2 \times 10^{170}. This is a truly astounding number, dwarfing the estimated number of atoms in the observable universe (10^{80}) or the estimated number of legal positions in chess (a piddling 10^{43}). The possibilities in go are, for all intents and purposes, infinite. No matter how much one learns, one knows essentially nothing.

Crucially, though, there is a well-developed hierarchy through which one progresses and by which one can measure one’s skill, even if it remains practically zero when taking a wider view. Lusson, Perec, and Roubaud write about this better than I could in the following two excerpts, so let us simply listen to them:

The genius of GO stems precisely from what it hides as well as what it reveals, at any moment, at any level, in its different, hierarchized mysteries whose progressive mastery transforms the game every time:

A garden of bifurcating pathways, a labyrinth, the game of Babel, each step forward is decisive and each step forward is useless: we will never have finished learning…

(Note the surely intentional nod to Borges in the last sentence above).

From a beginner to a classified amateur in the bottom ranks of kyu, a player can rise to the top kyus and then, one by one, climb the six amateur echelons, approaching (but only approaching) the inaccessible regions where the true players reign, the professionals…

In this last excerpt, we hear echoes of a mathematical concept from set theory, my personal area of expertise. The authors temper the reader’s (and their own) dreams of go mastery by noting that, no matter how high an amateur go player may ascend in their study of go, they will still never reach the “inaccessible” realm of the true masters of the game. These professionals also inhabit a hierarchy, but it is a separate hierarchy, visible but unreachable from below.

This calls to mind the concept of an inaccessible cardinal, which is an uncountable cardinal number \kappa that cannot be reached from below through the standard procedures of climbing to the next cardinal, applying cardinal exponentiation, or taking unions of a small number of small sets. (More formally, \kappa is (strongly) inaccessible if it is regular, uncountable, and, for all \lambda < \kappa, we have 2^\lambda < \kappa.)

It cannot be proven from the standard axioms of set theory that inaccessible cardinals exist, or even that their existence is consistent, and the assumption of their consistency has significant implications for what one can prove (see a previous post for more information about inaccessible and other “large cardinals”). On the simplest descriptive level, an inaccessible cardinal divides the hierarchy of infinite cardinals into two realms that cannot communicate via the standard operations of cardinal arithmetic: those above and those below.

(A modern version of the book would surely posit a third separate region of hierarchies in go: that of the neural networks that with stunning swiftness have become stronger than the strongest professionals.)


So why bother? If one can spend one’s whole life studying go and still hardly understand the game, if one can develop to one’s full potential and still be nowhere close to the level of professional players, let alone the newly ascendant artificial intelligences, then why start at all?

The authors consider this question, but ultimately they reject its premises. The study of go is not worthwhile in spite of the fact that it is an “infinite pathway.” It is worthwhile because of it.

And this clearly has implications outside of go. Why devote much of my life to mathematical research if I can never know more than a minuscule portion of what remains undiscovered? Why write if it is physically impossible to write more than about 10,000,000 words in a lifetime, and if everything I may write is already contained in Borges’ Library of Babel anyway? Perhaps because the best life is a finite simulation of an infinite life.

Only one activity exists to which GO may be reasonably compared.

We will have understood it is writing.


PS: We have had occasion to mention chess and its relation to the infinite in a previous post. One of the joys of A Short Treatise… is the exaggerated contempt expressed by its authors for the game of chess. We end by offering you just a taste:

Good news!

One of the best European players, Zoran Mutabzija, abandoned chess, to which he had devoted himself since the age of four, as soon as someone taught him GO!

In related news.

We just received a detailed report concerning a certain Louis A. caught in the act of robbing a gas station attendant on the Nationale 6. According to the report, whose source and information cannot be called into question, Louis A. is a notorious chess player.

Grandi’s Series and “Creation Ex Nihilo” (Supertasks 3)

The universe works on a math equation

That never even ever really even ends in the end

Infinity spirals out creation

-Modest Mouse, “Never Ending Math Equation”

In our last post, we discussed Thomson’s Lamp, a supertask in which a lamp that is initially off is switched on and then off again infinitely many times over the course of two seconds (the amount of time between successive switches of the lamp gets exponentially smaller and smaller so that these infinitely many switches can fit within two seconds). We can then ask the perplexing question: “At the end of these two seconds, is the lamp on or off?” Either answer seems wrong; it cannot be on, since there is no previous time at which we turned the lamp on and left it on. But by the same token, it cannot be off, since there is no previous time at which we turned the lamp off and left it off. And yet it must be either on or off at the end of the two seconds, in a state that is seemingly uncaused by any prior state.

Today, we’re going to strip away the physical details of this thought experiment and get to its mathematical core. To start, we note that it is common in electrical devices to denote the state of being “on” with the number 1 and the state of being “off” with the number 0.  With this framework, we can think of the act of turning the lamp on as being equivalent to adding 1 to the numerical state of the lamp, and we can think of the act of turning the lamp off as equivalent to subtracting 1 from the numerical state of the lamp. Our act of switching the lamp on and off an infinite number of times can then be reduced to the infinitely long sum

 1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 + \ldots

and the question, “What is the state of the lamp after two seconds?” is equivalent to the question, “What is the value of this infinite sum?”


Before we actually answer this question (in two different ways!) let’s travel back to 18th century Italy to visit Luigi Guido Grandi, a mathematician and theologian. Grandi was one of the first to study the infinite sum

1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 + \ldots ,

which is often called Grandi’s series in his honor. (What we are calling an infinite sum here is more formally (and more correctly) called an infinite series.) Grandi made the devious observation that, if one cleverly adds parentheses in different ways to the infinite sum, one can seemingly make it attain two different values. First, if one adds parentheses like this:

 (1 - 1) + (1 - 1) + (1 - 1) + (1 - 1) + \ldots

then the sum becomes:

 0 + 0 + 0 + 0 + \ldots ,

which clearly equals 0. On the other hand, if one adds parentheses like this:

 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + \ldots

then the sum becomes:

 1 + 0 + 0 + 0 + \ldots ,

which just as clearly equals 1! Depending on one’s perspective, the value of the sum can be either 0 or 1. The lamp can equally well be off or on.

Grandi, as a theologian, makes a somewhat more ambitious claim:

By putting parentheses into the expression 1 – 1 + 1 – 1… in different ways, I can, if I want to, obtain 0 or 1. But then the idea of the creation ex nihilo is perfectly plausible.

Creation ex nihilo, of course, is creation out of nothing. This is certainly the only instance I can think of in which clever parenthesis placement is used to make theological arguments about the creation of the universe.


Grandi’s arguments don’t hold up to modern mathematical scrutiny. (And, to be fair, Grandi didn’t really believe them himself; more on that later.) To say more, we need to say what we might actually mean when we assert that an infinite sum has a value. Very roughly speaking, the standard way of treating infinite sums in modern mathematics, the way I taught it to my calculus students a couple of weeks ago, is as follows. To determine whether the infinite sum

a_0 + a_1 + a_2 + a_3 + \ldots + a_k + \ldots

has a definite value, one considers longer and longer finite initial sums:

 S_0 = a_0 \newline  S_1 = a_0 + a_1 \newline S_2 = a_0 + a_1 + a_2 \newline \ldots \newline S_n = a_0 + a_1 + a_2 + \ldots + a_n

If these initial sums S_n approach a fixed number L as n becomes arbitrarily large, then we say that the value of the sum is in fact L:

a_0 + a_1 + a_2 + a_3 + \ldots + a_k + \ldots = L.

Otherwise, the value of the infinite sum is left undefined, and we say that the sum diverges.

This leads to a beautiful mathematical theory with some striking results. For example:

1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \ldots + \frac{1}{k^2} + \ldots = \frac{\pi^2}{6}

When we consider Grandi’s series, though, the initial sums alternate between 0 and 1. They never approach a single fixed number, so the value of Grandi’s series is undefined. The lamp is neither on nor off.
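This is easy to check numerically. Here is a short sketch in Python (a toy computation using the definition above) contrasting a convergent sum with Grandi’s series:

```python
from itertools import accumulate, count, islice
import math

# Initial sums of 1 + 1/4 + 1/9 + ... : they approach pi^2 / 6.
basel = list(islice(accumulate(1 / k**2 for k in count(1)), 10000))
print(basel[-1], math.pi ** 2 / 6)   # 1.64483..., 1.64493...

# Initial sums of Grandi's series 1 - 1 + 1 - 1 + ... : no limit.
grandi = list(islice(accumulate((-1) ** k for k in count(0)), 10))
print(grandi)                        # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```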


The story isn’t quite over yet, though. As I mentioned earlier, Grandi didn’t actually think that he could change the value of his infinite sum just by placing parentheses in different ways. But he also didn’t think that its value should be left undefined, as we teach our students in calculus class today. Instead, he thought the series should have a definite value, and that that value, in what could be seen as an early anticipation of the idea of quantum superposition, should be 1/2. The lamp is half on and half off.

Grandi was not alone in thinking this, and a number of arguments have been put forward for why 1/2 is the correct value of the series

1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 + \ldots

Let’s examine a couple of them here. First, Grandi advanced the following story as an explanation: Suppose that two siblings inherit a valuable gem from their parents. They are forbidden from selling the gem, and cutting it in half would ruin its value. So the siblings agree that they will alternate ownership of the gem, exchanging it every New Year’s Day. Assuming this arrangement continues indefinitely, then, from the point of view of the first sibling to take the gem, their ownership of the gem can be represented by the series

1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 + \ldots .

And yet each sibling owns the gem half of the time, so the value of this series should be 1/2.

Next, we consider a more mathematical argument from a young Leibniz. This proceeds via a sequence of equations, each constructed from the previous one. First, he notes that

\frac{1}{2} = 1 - \frac{1}{2}.

Then, substituting the expression 1 - \frac{1}{2} in for \frac{1}{2} on the right side of this equation, he finds that

\frac{1}{2} = 1 - (1 - \frac{1}{2}).

Continuing with similar substitutions, he obtains the equations

\frac{1}{2} = 1 - (1 - (1 - \frac{1}{2})) \newline \frac{1}{2} = 1 - (1 - (1 - (1 - \frac{1}{2}))) \newline \frac{1}{2} = 1 - (1 - (1 - (1 - (1 - \frac{1}{2})))).

Taking this process through infinitely many steps, one should then obtain

\frac{1}{2} = 1 - (1 - (1 - (1 - (1 - (1 - \ldots))))).

And finally, distributing the minus signs through the parentheses, one ends up with

\frac{1}{2} = 1 - 1 + 1 - 1 + 1 - 1 + \ldots ,

which is exactly what we were trying to show.


Neither of these arguments would be considered rigorous or particularly convincing in any mathematics department today, but they can be useful to build intuition. And there is a rigorous framework for infinite sums in which Grandi’s series does equal 1/2. It is known as Cesàro summation, after the late-19th century Italian mathematician Ernesto Cesàro. In Cesàro summation, one doesn’t compute the limit of the finite initial sums S_0, S_1, S_2, \ldots as above. Instead, one computes the limit of the averages of the initial sums. More precisely, given an infinite sum a_0 + a_1 + a_2 + \ldots and initial sums S_0 = a_0, S_1 = a_0 + a_1, S_2 = a_0 + a_1 + a_2, \ldots, one then defines a sequence A_0, A_1, A_2, \ldots by letting

A_0 = S_0 \newline A_1 = \frac{1}{2}(S_0 + S_1) \newline A_2 = \frac{1}{3}(S_0 + S_1 + S_2) \newline \ldots \newline A_n = \frac{1}{n+1}(S_0 + S_1 + S_2 + \ldots + S_n) .

If these averages A_n approach a fixed number L as n becomes arbitrarily large, then that number L is said to be the value of the infinite sum a_0 + a_1 + a_2 + \ldots .

Cesàro summation can be seen as an extension of the standard theory of infinite sums explained above. If an infinite sum has a value under the standard account, then it still has that value under Cesàro summation. But there are a number of infinite sums that diverge in the standard theory but have defined values under Cesàro summation. Grandi’s series is one of these sums, and, as expected, its Cesàro value is 1/2.
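Here is a short sketch of Cesàro summation applied to Grandi’s series (a toy computation using the formulas above): the averages A_n settle toward 1/2 even though the initial sums S_n never settle at all.

```python
from itertools import accumulate, count, islice

# Initial sums S_0, S_1, S_2, ... of Grandi's series 1 - 1 + 1 - 1 + ...
S = list(islice(accumulate((-1) ** k for k in count(0)), 1000))

# Cesaro averages A_n = (S_0 + ... + S_n) / (n + 1).
A = [s / (n + 1) for n, s in enumerate(accumulate(S))]

print(S[:8])   # [1, 0, 1, 0, 1, 0, 1, 0] -- the initial sums oscillate
print(A[:8])   # [1.0, 0.5, 0.666..., 0.5, 0.6, 0.5, 0.571..., 0.5]
print(A[-1])   # 0.5 -- the averages approach 1/2
```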


Cover Image: “Starfield” by Vija Celmins

Thomson’s Lamp and Determinism (Supertasks II)

In our previous post, we began a discussion of supertasks, or processes with infinitely many steps that are completed in a finite amount of time. Today, we visit the source of the word supertask (or, in its original formulation, super-task), J.F. Thomson’s 1954 paper, “Tasks and Super-tasks”, published in Analysis.

The main thesis of Thomson’s paper is that supertasks are paradoxical, that it is impossible for someone to have completed an infinite sequence of tasks. (He is careful to distinguish between (1) completing an infinite sequence of tasks, which he thinks is impossible and (2) having an infinite number of tasks that one can potentially do. This is essentially the classical distinction between actual and potential infinity that we have discussed here a number of times.)

To illustrate the impossibility of supertasks, Thomson introduces a particular supertask, now known as Thomson’s Lamp. The idea is as follows: Suppose we have a lamp with a button on its base. If the lamp is off, then pressing the button turns it on; if it is on, then pressing the button turns it off. You have probably encountered such a lamp at some point in your life. Now imagine the following sequence of actions. The lamp is initially off. After one second, I press the button to turn it on. After a further 1/2 second, I press the button again to turn it off. After a further 1/4 second, I press the button to turn it on. After a further 1/8 second, I press the button to turn it off. And so on. (As in our last post, we are ignoring the obvious physical impossibilities of this situation, such as the fact that my finger would eventually be moving faster than the speed of light or that the wires in the lamp would become arbitrarily hot.) In the course of 2 seconds, I press the button infinitely many times. Moreover, there is no “last” button press; each press is quickly followed by another. So the question is this: after 2 seconds, is the lamp on or off? Thomson feels that either answer is impossible, and yet one of them must hold:

It cannot be on, because I did not ever turn it on without at once turning it off. It cannot be off, because I did in the first place turn it on, and thereafter I never turned it off without at once turning it on. But the lamp must be either on or off. This is a contradiction.

Thomson concludes that supertasks are impossible. (I am oversimplifying a little bit here; I encourage interested readers to check out Thomson’s original paper, which is fairly short and well-written.) As anyone passingly familiar with the history of philosophy could guess, though, not everybody was convinced by Thomson’s arguments, and Thomson’s paper initiated a small but lively philosophical discussion on supertasks, some of which we may examine in future posts.
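Before moving on, it is worth noting that the puzzle lives entirely at t = 2: at any time strictly before 2 seconds, the lamp’s state is perfectly well defined. Here is a small sketch computing it (a toy calculation; the press schedule is the one described above, and the floating-point arithmetic is only approximate at the press times themselves):

```python
import math

def lamp_state(t):
    """State of Thomson's lamp at time 0 <= t < 2 (seconds).
    The lamp starts off; presses occur at 2 - 2**(1 - n) seconds
    for n = 1, 2, 3, ..., i.e. at 1, 1.5, 1.75, ..."""
    presses = 0 if t < 1 else math.floor(1 - math.log2(2 - t))
    return "on" if presses % 2 == 1 else "off"

for t in [0.5, 1.0, 1.2, 1.6, 1.9, 1.99, 1.999999]:
    print(t, lamp_state(t))
# As t approaches 2, the state oscillates forever and approaches no
# limit -- lamp_state(2) is exactly the question with no answer.
```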

Today, though, I don’t want to focus directly on the possibility of supertasks in general or Thomson’s Lamp in particular, but rather to assume that they are possible and think about what this would mean for a topic of great interest in philosophy: determinism.

Roughly speaking, determinism is the assertion that all events are uniquely determined by what has happened before. Given the history of the universe, its future is fixed in stone. The first modern expression of this type of determinism is typically credited to the French mathematician Pierre-Simon Laplace, who imagined the existence of a demon who knows the precise location and momentum of every object in the universe and is therefore able to calculate all future events.

Determinism is a vast subject, and we don’t have nearly enough time to touch on its many connections with topics such as religion, free will, or quantum mechanics. We do have time, however, to show that Thomson’s Lamp, plus a bit of intuition (always dangerous when dealing with infinity, but let’s allow it for the sake of argument), places serious doubt on the truth of determinism.

To see this, let’s suppose that the supertask of Thomson’s Lamp is in fact coherent, and let’s revisit the question: after 2 seconds, is the lamp on or off? I claim that either answer is possible. To see this, let’s be a little bit more precise. Let’s call our supertask, in which the lamp starts off and we press the button infinitely many times in 2 seconds, Process A. Suppose my claim is wrong and that, at the conclusion of Process A, the lamp is necessarily on (if the lamp is necessarily off, then a symmetric argument will work).

Now imagine that Process B is the same as Process A but with the roles of “on” and “off” reversed: the lamp starts on and, over the course of 2 seconds, we press the button infinitely many times. One would expect the final state of Process B to be just the reverse of the final state of Process A, namely that the lamp is necessarily off at the conclusion of Process B. (If this doesn’t seem intuitively obvious to you, try imagining a more symmetric situation in which the switch doesn’t turn the lamp on or off but instead switches the color of light emitted from red to blue.)

But now suppose that an observer arrives at Process A exactly 1 second late and thus misses the first press of the button. From their perspective, the remainder of the supertask looks exactly like Process B, just sped up by a factor of two. This speeding up intuitively shouldn’t affect the final outcome, so they should rightly expect the lamp to be off at the end of the process, whereas I should rightly expect the lamp to be on at the end of the process, since I know we are completing Process A. We can’t both be correct, so this is a contradiction.

But if, at the end of the supertask, the lamp could consistently either be on or off, then the state of the lamp after two seconds is not uniquely determined by past events. Laplace’s demon cannot predict the behavior of the lamp after two seconds, even if it knows everything about the universe at all times before two seconds have elapsed. In other words, if we accept the coherence of Thomson’s lamp, then determinism must be false!


This was admittedly a very non-rigorous argument, but I hope it provides some food for thought or at least some amusement. Next time, we’ll return to doing some real mathematics and connect ideas of supertasks with the values of infinite series!

Cover image: “L’empire des lumières” by René Magritte

Infinite Collisions, or Something from Nothing (Supertasks I)

Hello! It’s been over a year since my last post; I took some time off as I moved to a new city and a new job and became generally more busy. I missed writing this blog, though, and recently I found myself reading about supertasks, tasks or processes with infinitely many steps but which are completed in a finite amount of time. This made the finitely many things I was doing seem lazy by comparison, so I think I’m going to try updating this site regularly again.

The first supertasks were possibly those considered by Zeno in his paradoxes of motion (which we’ve thought about here before). In the simplest of these, Achilles is running a 1 kilometer race down a road. To complete this race, though, Achilles must first run the first 1/2 kilometer. He must then run the next 1/4 kilometer, and then the next 1/8 kilometer, and so on. He must seemingly complete an infinite number of tasks just in order to finish the race.

Achilles running a race. (Image by Martin Grandjean, CC BY-SA 4.0)
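For the record, these distances do sum to the full kilometer; the infinitely many stretches form a geometric series with a finite sum:

\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1.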

If this idea leaves you a bit confused, you’re not alone! It seems that the same idea can be applied to any continuous process: first, half of the process must take place, then the next quarter, then the next eighth, and so on. How does anything ever get done? Is every task a supertask?

Supertasks reemerged as a topic of philosophical study in the mid-twentieth century, when J.F. Thomson introduced both the term supertask and a new example of a supertask, now known as Thomson’s Lamp: Imagine you have a lamp with an on/off switch. Now imagine that you switch the lamp on at time t = 0 seconds. After 1/2 second, you switch the lamp off again. After a further 1/4 second, you switch the lamp on again. After a further 1/8 second, you switch the lamp off again, and so on. After 1 full second, you have switched the lamp on and off infinitely many times. Now ask yourself: at time t = 1, is the lamp on or off? Is this even a coherent question to ask?

We’ll have more to say about this and other supertasks in future posts, but today I want to present a particularly elegant supertask introduced by Jon Perez Laraudogoitia in the aptly named paper “A Beautiful Supertask”, published in the journal Mind in 1996. Before describing the supertask, let me note that it will obviously not be compatible with our physical universe, or perhaps any coherent physical universe, for a number of reasons: it assumes perfectly elastic collisions with no energy lost (no sound or heat created, for instance), it assumes that particles with a fixed mass can be made arbitrarily small, it assumes that nothing will go horribly wrong if we enclose infinitely much matter in a bounded region, and it neglects the effects of gravity. Let’s put aside these objections for today, though, and simply enjoy the thought experiment.

Imagine that we have infinitely many objects, all of the same mass. Call these objects m1, m2, m3, and so on. They are all arranged on a straight line, which we’ll think of as being horizontal. To start, each of these objects is standing still. Object m1 is located 1 meter to the right of a point we will call the origin, object m2 is 1/2 meter to the right of the origin, object m3 is 1/4 meter to the right of the origin, and so on. (Again, we’re ignoring gravity here. Also, so that we may fit these infinitely many objects in a finite amount of space, you can either assume that they are point masses without dimension or you can imagine them as spheres that get progressively smaller (but retain the same mass) as their index increases.) Meanwhile, another object of the same mass, m0, appears to the right of m1, moving to the left at 1 meter per second. At time t = 0 seconds, it collides with m1.

Our system immediately before t = 0.

What happens after time t = 0? Well, since we’re assuming our collisions are perfectly elastic, m0 transfers all of its momentum to m1, so that m0 is motionless 1 meter from the origin and m1 is moving to the left at 1 meter per second. After a further half second, at t = 1/2, m1 collides with m2, so that m1 comes to rest 1/2 meter from the origin while m2 begins moving to the left at 1 meter per second. This continues. At time t = 3/4, m2 collides with m3. At time t = 7/8, m3 collides with m4, and so on.
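Here is a small sketch in Python of the kinematics just described (a toy calculation under the idealized assumptions above). At any time 0 <= t < 1, at most one object is in motion, so each position is easy to write down.

```python
def position(k, t):
    """Position (meters right of the origin) of object m_k at time
    0 <= t < 1. Object m_k (k >= 1) starts at 2**(1 - k) meters, is
    struck at time 1 - 2**(1 - k), moves left at 1 m/s, and comes to
    rest at 2**(-k) meters at time 1 - 2**(-k)."""
    if k == 0:
        return 1.0  # m0 stops at 1 meter at t = 0
    start, hit, stop = 2.0 ** (1 - k), 1 - 2.0 ** (1 - k), 1 - 2.0 ** (-k)
    if t <= hit:
        return start                # not yet struck
    if t < stop:
        return start - (t - hit)    # the one moving object
    return 2.0 ** (-k)              # at rest where m_{k+1} started

# At t = 0.9 only m4 is moving; every other object is at rest.
print([position(k, 0.9) for k in range(6)])
# [1.0, 0.5, 0.25, 0.125, 0.1, 0.0625]
```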

Where are we after 1 full second? By time t = 1, each object has been hit by the previous object and has in turn collided with the next object. m0 is motionless where m1 started, m1 is motionless where m2 started, m2 is motionless where m3 started, and so on. Our entire system is at rest. Moreover, compare the system {m0, m1, m2, …} at t=1 to the system {m1, m2, m3, …} at t=0. They’re indistinguishable! Our initial system {m1, m2, m3, …} has absorbed the new particle m0 and all of its kinetic energy and emerged entirely unchanged!

Things become seemingly stranger if we reverse time and consider this process backwards. We begin with the system {m0, m1, m2, …}, entirely at rest. But then, at t = -1, for no apparent reason, a chain reaction of collisions begins around the origin (“begins” might not be quite the right term — there’s no first collision setting it all off). The collisions propagate rightward, and at t = 0 the system spits out particle m0, moving to the right at 1 meter per second, while the system {m1, m2, m3, …} is now identical to the system {m0, m1, m2, …} at time t = -1. The ejection of this particle from a previously static system is an inexplicable event. It has no root cause; each collision in the chain was caused by the one before it, ad infinitum. A moving object has seemingly emerged from nothing; kinetic energy has seemingly been created for free.


Cover Image: “4 Spheres” by Victor Vasarely