Thomson’s Lamp and Determinism (Supertasks II)

In our previous post, we began a discussion of supertasks, or processes with infinitely many steps that are completed in a finite amount of time. Today, we visit the source of the word supertask (or, in its original formulation, super-task), J.F. Thomson’s 1954 paper, “Tasks and Super-tasks”, published in Analysis.

The main thesis of Thomson’s paper is that supertasks are paradoxical, that it is impossible for someone to have completed an infinite sequence of tasks. (He is careful to distinguish between (1) completing an infinite sequence of tasks, which he thinks is impossible and (2) having an infinite number of tasks that one can potentially do. This is essentially the classical distinction between actual and potential infinity that we have discussed here a number of times.)

To illustrate the impossibility of supertasks, Thomson introduces a particular supertask, now known as Thomson’s Lamp. The idea is as follows: Suppose we have a lamp with a button on its base. If the lamp is off, then pressing the button turns it on; if it is on, then pressing the button turns it off. You have probably encountered such a lamp at some point in your life. Now imagine the following sequence of actions. The lamp is initially off. After one second, I press the button to turn it on. After a further 1/2 second, I press the button again to turn it off. After a further 1/4 second, I press the button to turn it on. After a further 1/8 second, I press the button to turn it off. And so on. (As in our last post, we are ignoring the obvious physical impossibilities of this situation, such as the fact that my finger would eventually be moving faster than the speed of light or that the wires in the lamp would become arbitrarily hot.) In the course of 2 seconds, I press the button infinitely many times. Moreover, there is no “last” button press; each press is quickly followed by another. So the question is this: after 2 seconds, is the lamp on or off? Thomson feels that either answer is impossible, and yet one of them must hold:

It cannot be on, because I did not ever turn it on without at once turning it off. It cannot be off, because I did in the first place turn it on, and thereafter I never turned it off without at once turning it on. But the lamp must be either on or off. This is a contradiction.

Thomson concludes that supertasks are impossible. (I am oversimplifying a little bit here; I encourage interested readers to check out Thomson’s original paper, which is fairly short and well-written.) As anyone passingly familiar with the history of philosophy could guess, though, not everybody was convinced by Thomson’s arguments, and Thomson’s paper initiated a small but lively philosophical discussion on supertasks, some of which we may examine in future posts.

Today, though, I don’t want to focus directly on the possibility of supertasks in general or Thomson’s Lamp in particular, but rather to assume that they are possible and think about what this would mean for a topic of great interest in philosophy: determinism.

Roughly speaking, determinism is the assertion that all events are uniquely determined by what has happened before. Given the history of the universe, its future is fixed in stone. The first modern expression of this type of determinism is typically credited to the French mathematician Pierre-Simon Laplace, who imagined the existence of a demon who knows the precise location and momentum of every object in the universe and is therefore able to calculate all future events.

Determinism is a vast subject, and we don’t have nearly enough time to touch on its many connections with topics such as religion, free will, or quantum mechanics. We do have time, however, to show that Thomson’s Lamp, plus a bit of intuition (always dangerous when dealing with infinity, but let’s allow it for the sake of argument), places serious doubt on the truth of determinism.

To see this, let’s suppose that the supertask of Thomson’s Lamp is in fact coherent, and let’s revisit the question: after 2 seconds, is the lamp on or off? I claim that either answer is possible. To see this, let’s be a little bit more precise. Let’s call our supertask, in which the lamp starts off and we press the button infinitely many times in 2 seconds, Process A. Suppose my claim is wrong and that, at the conclusion of Process A, the lamp is necessarily on (if the lamp is necessarily off, then a symmetric argument will work). Now imagine that Process B is the same as Process A but with the role of “on” and “off” reversed: the lamp starts on and, over the course of 2 seconds, we press the button infinitely many times. One would expect that the final state of Process B would just be the reverse of the final state of Process A, namely that the lamp is necessarily off at the conclusion of Process B. (If this doesn’t seem intuitively obvious to you, try imagining a more symmetric situation in which the switch doesn’t turn the lamp on or off but instead switches the color of light emitted from red to blue.) But now suppose that an observer arrives at Process A exactly 1 second late and thus misses the first press of the button. From their perspective, the remainder of the supertask looks exactly like Process B, just sped up by a factor of two. This speeding up intuitively shouldn’t affect the final outcome, so they should rightly expect the lamp to be off at the end of the process, whereas I should rightly expect the lamp to be on at the end of the process, since I know we are completing Process A. We can’t both be correct, so this is a contradiction.

But if, at the end of the supertask, the lamp could consistently either be on or off, then the state of the lamp after two seconds is not uniquely determined by past events. Laplace’s demon cannot predict the behavior of the lamp after two seconds, even if it knows everything about the universe at all times before two seconds have elapsed. In other words, if we accept the coherence of Thomson’s lamp, then determinism must be false!


This was admittedly a very non-rigorous argument, but I hope it provides some food for thought or at least some amusement. Next time, we’ll return to doing some real mathematics and connect ideas of supertasks with the values of infinite series!

Cover image: “L’empire des lumières” by René Magritte


Infinite Collisions, or Something from Nothing (Supertasks I)

Hello! It’s been over a year since my last post; I took some time off as I moved to a new city and a new job and became generally more busy. I missed writing this blog, though, and recently I found myself reading about supertasks, tasks or processes with infinitely many steps but which are completed in a finite amount of time. This made the finitely many things I was doing seem lazy by comparison, so I think I’m going to try updating this site regularly again.

The first supertasks were possibly those considered by Zeno in his paradoxes of motion (which we’ve thought about here before). In the simplest, Achilles is running a 1 kilometer race down a road. To complete this race, though, Achilles must first run the first 1/2 kilometer. He must then run the next 1/4 kilometer, and then the next 1/8 kilometer, and so on. He must seemingly complete an infinite number of tasks just in order to finish the race.
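For what it’s worth, the arithmetic itself poses no problem: the distances Achilles covers form a geometric series, and

\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1,

so the infinitely many stretches add up to exactly the 1 kilometer he set out to run (and, at constant speed, the infinitely many time intervals likewise sum to a finite total). Whether this computation really dissolves Zeno’s worry is, of course, the philosophical question.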

Achilles running a race. (Image by Martin Grandjean, CC BY-SA 4.0)

If this idea leaves you a bit confused, you’re not alone! It seems that the same idea can be applied to any continuous process: first, half of the process must take place, then the next quarter, then the next eighth, and so on. How does anything ever get done? Is every task a supertask?

Supertasks reemerged as a topic of philosophical study in the mid-twentieth century, when J.F. Thomson introduced both the term supertask and a new example of a supertask, now known as Thomson’s Lamp: Imagine you have a lamp with an on/off switch. Now imagine that you switch the lamp on at time t = 0 seconds. After 1/2 second, you switch the lamp off again. After a further 1/4 second, you switch the lamp on again. After a further 1/8 second, you switch the lamp off again, and so on. After 1 full second, you have switched the lamp on and off infinitely many times. Now ask yourself: at time t = 1, is the lamp on or off? Is this even a coherent question to ask?
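Just to pin down the bookkeeping (and only the bookkeeping; a finite simulation obviously settles nothing about t = 1), here is a small Python sketch of the first few presses. The function name and representation are my own, not Thomson’s.

```python
from fractions import Fraction

def lamp_presses(n):
    """Times of the first n presses of Thomson's Lamp (this post's version,
    with the first press at t = 0) and the lamp's state right afterwards."""
    t, is_on, history = Fraction(0), False, []
    for k in range(n):
        is_on = not is_on                  # each press toggles the lamp
        history.append((t, "on" if is_on else "off"))
        t += Fraction(1, 2 ** (k + 1))     # wait 1/2, then 1/4, then 1/8, ...
    return history

for time, state in lamp_presses(6):
    print(f"press at t = {time}: lamp is {state}")
# The press times 0, 1/2, 3/4, 7/8, ... approach t = 1, while the state after
# the n-th press simply alternates, so no amount of finite data tells us what
# the lamp is doing at t = 1.
```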

We’ll have more to say about this and other supertasks in future posts, but today I want to present a particularly elegant supertask introduced by Jon Perez Laraudogoitia in the aptly named paper “A Beautiful Supertask”, published in the journal Mind in 1996. Before describing the supertask, let me note that it will obviously not be compatible with our physical universe, or perhaps any coherent physical universe, for a number of reasons: it assumes perfectly elastic collisions with no energy lost (no sound or heat created, for instance), it assumes that particles with a fixed mass can be made arbitrarily small, it assumes that nothing will go horribly wrong if we enclose infinitely much matter in a bounded region, and it neglects the effects of gravity. Let’s put aside these objections for today, though, and simply enjoy the thought experiment.

Imagine that we have infinitely many objects, all of the same mass. Call these objects m1, m2, m3, and so on. They are all arranged on a straight line, which we’ll think of as being horizontal. To start, each of these objects is standing still. Object m1 is located 1 meter to the right of a point we will call the origin, object m2 is 1/2 meter to the right of the origin, object m3 is 1/4 meter to the right of the origin, and so on. (Again, we’re ignoring gravity here. Also, so that we may fit these infinitely many objects in a finite amount of space, you can either assume that they are point masses without dimension or you can imagine them as spheres that get progressively smaller (but retain the same mass) as their index increases.) Meanwhile, another object of the same mass, m0, appears to the right of m1, moving to the left at 1 meter per second. At time t = 0 seconds, it collides with m1.

Our system immediately before t = 0.

What happens after time t = 0? Well, since we’re assuming our collisions are perfectly elastic, m0 transfers all of its momentum to m1, so that m0 is motionless 1 meter from the origin and m1 is moving to the left at 1 meter per second. After a further half second, at t = 1/2, m1 collides with m2, so that m1 comes to rest 1/2 meter from the origin while m2 begins moving to the left at 1 meter per second. This continues. At time t = 3/4, m2 collides with m3. At time t = 7/8, m3 collides with m4, and so on.
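To make the timetable concrete, here is a small Python sketch (the bookkeeping and names are mine, not Laraudogoitia’s) that tabulates, under the idealizations above, when each collision happens and where each object comes to rest.

```python
from fractions import Fraction

def collision_schedule(n):
    """First n collisions: m_k starts at rest 1/2**(k-1) meters right of the
    origin (k >= 1), and m_0 arrives at 1 m/s, striking m_1 at t = 0."""
    schedule, t = [], Fraction(0)
    for k in range(1, n + 1):
        pos_k = Fraction(1, 2 ** (k - 1))    # starting position of m_k
        schedule.append((k, t, pos_k))       # m_{k-1} stops here at time t
        t += pos_k - Fraction(1, 2 ** k)     # m_k travels to m_{k+1} at 1 m/s
    return schedule

for k, t, pos in collision_schedule(6):
    print(f"collision {k}: t = {t} s; m_{k - 1} comes to rest {pos} m from the origin")
# The collision times 0, 1/2, 3/4, 7/8, ... approach 1 second, and the
# resting places 1, 1/2, 1/4, ... march toward the origin.
```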

Where are we after 1 full second? By time t = 1, each object has been hit by the previous object and has in turn collided with the next object. m0 is motionless where m1 started, m1 is motionless where m2 started, m2 is motionless where m3 started, and so on. Our entire system is at rest. Moreover, compare the system {m0, m1, m2, …} at t=1 to the system {m1, m2, m3, …} at t=0. They’re indistinguishable! Our initial system {m1, m2, m3, …} has absorbed the new particle m0 and all of its kinetic energy and emerged entirely unchanged!

Things become seemingly stranger if we reverse time and consider this process backwards. We begin with the system {m0, m1, m2, …}, entirely at rest. But then, at t = -1, for no apparent reason, a chain reaction of collisions begins around the origin (“begins” might not be quite the right term — there’s no first collision setting it all off). The collisions propagate rightward, and at t = 0 the system spits out particle m0, moving to the right at 1 meter per second, while the system {m1, m2, m3, …} is now identical to the system {m0, m1, m2, …} at time t = -1. The ejection of this particle from a previously static system is an inexplicable event. It has no root cause; each collision in the chain was caused by the one before it, ad infinitum. A moving object has seemingly emerged from nothing; kinetic energy has seemingly been created for free.


Cover Image: “4 Spheres” by Victor Vasarely

Common Knowledge

The phrase “common knowledge,” in its common usage, refers to knowledge that is held by (nearly) everyone, at least within some given community. For example, it is common knowledge in modern society that the Earth is round and orbits the sun, or that 2+2=4. These are facts that I can assume that you, my readers, already know or can easily verify, and it’s common practice not to give citations or acknowledge sources for them.

Today, though, we’re not really going to be discussing this common, informal usage of “common knowledge.” Nor are we going to devote this post to facts about the musician behind the classic hip hop album, Resurrection. (Though did you know that the track “I Used to Love H.E.R.” launched a feud with Ice Cube that was eventually settled at a private meeting organized by Louis Farrakhan at his house??) Rather, we’re going to delve into “common knowledge” as a fascinating technical concept in philosophical logic, epistemology, and social science.

Let’s start with the idea of mutual knowledge. Suppose that there is a group of people, and there is a fact that is known by everyone in the group. Then this fact is mutual knowledge among the members of the group. The classification of a fact as mutual knowledge seems like a pretty strong assertion, but we can (and often do) ask for even more. We can ask, for example, that not only does everyone in the group know the fact, but everyone in the group knows that everyone in the group knows the fact. Or we can go further, and ask that everyone in the group knows that everyone in the group knows that everyone in the group knows the fact. Carrying this process to infinity, we obtain the concept of common knowledge.

To be more precise, let us say that a fact is first-order knowledge among members of a group if everyone knows it, i.e., it is mutual knowledge. Let us say it is second-order knowledge if everyone in the group knows that everyone in the group knows it. Continue in this manner. In general, if n is a natural number and we have already defined the notion of n-th order knowledge, then a fact is (n+1)-st order knowledge among members of a group if it is n-th order knowledge and everyone in the group knows that it is n-th order knowledge. A fact is then common knowledge among members of a group if it is n-th order knowledge for every natural number n.
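For readers who like their definitions symbolic, this is often formalized as follows (standard epistemic-logic notation, which we won’t need below): writing K_i \varphi for “agent i knows \varphi” and G for the group, let

E\varphi = \bigwedge_{i \in G} K_i \varphi, \qquad E^1\varphi = E\varphi, \qquad E^{n+1}\varphi = E^n\varphi \wedge E(E^n\varphi), \qquad C\varphi = \bigwedge_{n \geq 1} E^n\varphi.

Then E^n\varphi says that \varphi is n-th order knowledge among the members of G, and C\varphi says that \varphi is common knowledge.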

Let’s look at an example to try to clarify the distinction between mutual knowledge and common knowledge. Suppose that all of the students in a class have arrived to a classroom early and are sitting at their desks, and there is a large beetle crawling on the wall. Every student sees the beetle, so the presence of the beetle in the classroom is mutual knowledge among the students. None of the students particularly wants to deal with the beetle, though, so each student pretends to ignore it. As far as each student is concerned, they might be the only student who has noticed the beetle. The presence of the beetle is thus not even second-order knowledge among the students. Then the professor walks in the room, sees the beetle, points at it, and loudly exclaims, “Ah, I see we have a beetle auditing the class today!” Now, suddenly, the students know that they all know that there is a beetle in the room, and they know that they all know that they all know that there is a beetle in the room, and so on. The presence of the beetle has suddenly become common knowledge.

In the grand scheme of things, the study of common knowledge, and higher order knowledge in general, is a rather recent development in philosophy; David Hume was perhaps the first major philosopher to consider it, in A Treatise of Human Nature. It has become extensively studied since then, though, and, if one knows where to look, it can be found almost anywhere.

Consider the following example, due to philosopher Alan Hájek. You are in a restaurant, and a server walking by you trips on his shoelaces and spills a bit of soup on your shirt. You are, naturally, upset, and the server becomes apologetic and says, “It was my fault.” This is of course something almost everyone would expect him to say, but let’s look at it more closely. Why exactly did he say it, and why is it important? Everyone involved in the incident already knew that the server was at fault, so this utterance would seem to add no new information. However, it actually can be the key to ensuring that the incident does not escalate further, by establishing the server’s fault as common knowledge. Indeed, before the server’s admission, you might have thought that the server considered you at fault, which surely would have made you even angrier. After the server’s statement, some common ground has been established, and, with luck and maybe a free dessert, the situation can calmly be resolved.

The notion of higher order knowledge also shows up all the time in game theory (and, therefore, economics, international relations, etc.). When formulating an ideal strategy for a game, one must consider not only the current state and rules of the game, but also the knowledge and strategies of all of one’s partners and adversaries. But they, in turn, of course, are considering your knowledge and strategies when formulating their own, and one quickly reaches vertiginous levels of higher order knowledge. If you want to see common knowledge repeatedly raise its many confusing heads, just head to a high stakes poker game. (“I know that she would almost never bluff in this spot. But we’ve been playing together for days, so I also know that she knows that I know this, so maybe she’s actually more likely to bluff. But maybe she knows that I know that she knows that I know this, which means…I don’t know. I call.”)


We end today with a puzzle that has common knowledge at its core: There are two prisoners, Louis and Marie, each in their own prison cell. Each looks out on their own little yard. Louis can see 8 trees in his yard; Marie can see 12. One day, the warden brings them together and tells them both, “Between the two of you, you can see a total of either 18 or 20 trees. Every day at 5 pm, I will go to each of you. If one of you tells me the correct total number of trees in your two yards, you will immediately be set free. If you are incorrect, you will immediately be executed. If you say nothing, you will stay in prison for another day.”

Louis and Marie are not able to directly communicate with one another during this meeting with the warden, and no communication is possible between their two cells. Both were present and attentive at the meeting, though, so the contents of the warden’s speech have become common knowledge between them.

Q: Assuming Louis and Marie are entirely rational (and that their rationality is common knowledge between them), can they guarantee their release? If so, how? How long will it take?

We’ll talk about the solution beneath this nice painting of trees. Meanwhile, take a few minutes to think about it yourselves.

Gustav Klimt, Pine Forest II. If you can correctly tell me how many trees you see, you will be set free…

At first, it might not seem like the warden’s speech gave any really useful information. Louis now knows that Marie sees either 10 or 12 trees, and Marie now knows that Louis sees either 6 or 8 trees. But it seems that neither can be sure which of the two numbers the other sees.

If we delve deeper into the thought processes of the prisoners, though, and consider some counterfactual situations, a different picture arises. First, suppose that Marie actually saw 19 or 20 trees out her window. Then, on Day 1, Marie could confidently say, “We see 20 trees between us,” because she sees more than 18 trees herself. Therefore, back in the real world, after neither prisoner says anything on Day 1 and both remain in their cells, Louis knows that Marie sees at most 18 trees. You might naturally raise the following objection here: “Of course Louis knows that Marie sees at most 18 trees. He didn’t have to wait until after Day 1 to know this. He already knew this immediately after the warden’s speech, when he knew that Marie sees either 10 or 12 trees!”

But the important point here is not just that Louis knows that Marie sees at most 18 trees, but that this fact is now common knowledge, since it follows immediately from the information given by the warden and by the fact that nobody said anything on Day 1, both of which are themselves common knowledge.

Furthermore, just for the sake of completeness, and at the risk of getting lost in the weeds, let us argue that the fact that Marie sees at most 18 trees was not common knowledge before 5 pm on Day 1. Indeed, Louis knew then that Marie sees 10 or 12 trees. But Marie does not know that Louis knows this. This is because, as far as Marie knows, Louis could see 6 trees, in which case Louis would know that Marie sees either 12 or 14 trees. Therefore, Marie can only know that Louis knows that Marie sees either 10, 12, or 14 trees. But, in turn, Louis cannot know that Marie knows that, since, as far as Louis knows, Marie could see 10 trees, in which case Marie would only know that Louis knows that Marie sees either 8, 10, or 12 trees. It follows that Louis only knows that Marie knows that Louis knows that Marie sees either 8, 10, 12, or 14 trees. And we can continue in this way. The higher order the knowledge becomes, the more uncertainty is introduced into the range, until we reach the statement: Marie knows that Louis knows that Marie knows that Louis knows that Marie knows that Louis knows that Marie knows that Louis knows that Marie sees 4, 6, 8, 10, 12, 14, 16, 18, or 20 trees. Therefore, before 5 pm on Day 1, the knowledge that Marie sees at most 18 trees is not even ninth order knowledge!

I’m sure that was crystal clear. So let’s continue. We’ve established that, after Day 1, it is common knowledge that Marie sees at most 18 trees. Suppose now that it happened to be the case that Louis saw fewer than 2 trees. Then this fact, combined with the knowledge that Marie sees at most 18 trees, would lead Louis to conclude that the prisoners see fewer than 20 trees between them, so they must see exactly 18. In the real world, when Day 2 goes by and the prisoners are still in their cells, it therefore becomes common knowledge that Louis sees at least 2 trees. Again, this was not common knowledge before 5 pm on Day 2.

It should now be clear how to continue. After Day 3 goes by with no change, it becomes common knowledge that Marie sees at most 16 trees. Then, after Day 4, it becomes common knowledge that Louis sees at least 4 trees. After Day 5, Marie sees at most 14 trees. After Day 6, Louis sees at least 6 trees. After Day 7, Marie sees at most 12 trees.

So far, none of this common knowledge is enough for either prisoner to deduce the total number of trees. In fact, neither of the prisoners has learned anything new about the number of trees seen by the other. However, their higher order knowledge has steadily grown. Indeed, the increasing ranges we saw creeping into the prisoners’ higher order knowledge on Day 1 have gradually been winnowed by the passage of time. And this pays off after Day 8, when it becomes common knowledge that Louis sees at least 8 trees. Marie then carefully recounts the trees outside her window, confirms that there are 12, and concludes that, together, they see at least (and therefore exactly) 20 trees. She gives the correct answer on Day 9, and the prisoners are released!
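If you’d like to check this reasoning mechanically, here is a small Python sketch (the modeling choices are mine) that treats the puzzle as an iterated elimination of possible worlds: a world is a pair of tree counts compatible with the warden’s announcement, and each evening it becomes common knowledge that nobody answered, so every world in which someone could have answered is discarded.

```python
def prisoners(actual_louis=8, actual_marie=12, totals=(18, 20)):
    """Iterated elimination of possible worlds for the warden's puzzle.
    A world is a pair (louis, marie) compatible with the announcement;
    each day, a prisoner answers iff every remaining world matching what
    they see yields the same total."""
    worlds = {(a, t - a) for t in totals for a in range(t + 1)}

    def can_answer(who, sees, live_worlds):
        consistent = {w for w in live_worlds if w[who] == sees}
        return len({a + b for a, b in consistent}) == 1

    day = 0
    while True:
        day += 1
        if can_answer(0, actual_louis, worlds):
            return day, "Louis", actual_louis + actual_marie
        if can_answer(1, actual_marie, worlds):
            return day, "Marie", actual_louis + actual_marie
        # Nobody spoke, and that silence is common knowledge, so discard
        # every world in which one of the prisoners could have answered.
        worlds = {w for w in worlds
                  if not can_answer(0, w[0], worlds)
                  and not can_answer(1, w[1], worlds)}

print(prisoners())   # (9, 'Marie', 20): Marie announces a total of 20 on Day 9
```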


There’s another famous puzzle involving common knowledge, initially even more baffling than this one. It is known as the blue-eyed islanders puzzle. It has been written about extensively elsewhere, so let me just point you to one such place, namely, Terence Tao’s blog.


For more reading on common knowledge, see this entry at the Stanford Encyclopedia of Philosophy.


Cover Image: Spock performing a Vulcan mind meld, thereby establishing common knowledge with Dr. Simon Van Gelder.

Playing Games II: The Rules

In an earlier examination of games, we ran into some trouble when Hypergame, a “game” we defined, led to a contradiction. This ended up being a positive development, as the ideas we developed there led us to a (non-contradictory) proof of Cantor’s Theorem, but it indicates that, if we are going to be serious about our study of games, we need to be more careful about our definitions.

So, what is a game? Here’s what Wittgenstein had to say about the question in his famous development of the notion of language games and family resemblances:

66. Consider for example the proceedings that we call “games.” I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? Don’t say: “There must be something common, or they would not be called ‘games’” but look and see whether there is anything common to all. For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look! Look for example at board-games, with their multifarious relationships. Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball-games, much that is common is retained, but much is lost. Are they all ‘amusing’? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball-games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and crisscrossing: sometimes overall similarities, sometimes similarities of detail.

-Ludwig Wittgenstein, Philosophical Investigations

This is perhaps the correct approach to take when studying the notion of “game” as commonly used in the course of life, but that is not what we are doing here. We want to isolate a concrete mathematical notion of game amenable to rigorous analysis, and for this purpose we must be precise. No doubt there will be things that many people consider games that will be left out of our analysis, and perhaps some of our games would not be recognized as such out in the wild, but this is beside the point.


To narrow the scope of our investigation, let us say more about what type of games we are interested in. First, for simplicity, we are interested in two-player games, in which the players play moves one at a time. We are also (for now, at least) interested in games that necessarily end in a finite number of moves (though, for any particular game, there may be no a priori finite upper bound on the number of moves in a run of that game). Finally, we will be interested in games for which the game must end in victory for one of the players. Our theory can easily be adapted to deal with ties, but this will just unnecessarily complicate things.

One way to think about a move in a game is as a transformation of the current game into a different one. Consider chess (and, just so it satisfies our constraints, suppose that a classical “tie” counts as a win for black). A typical game of chess starts with all of the pieces in their traditional spots (for simplicity, let’s be agnostic about which color moves first). However, we can consider a slightly different game, called chess_1, that has all of the same rules as chess except that white’s king pawn starts on e4, two squares up from its traditional square. This is a perfectly fine game, and white’s opening move of e2-e4 can be seen as a transformation of chess into chess_1.

With this idea in mind, it makes sense to think of a game as two sets of other games: one set is the set of games that one player can transform the game into by making a move, and the other set is the set of games that the other player can transform the game into by making a move. We will refer to our players as Left (L) and Right (R), so a game G can be thought of as a pair (L | R), where L and R are sets of games. This in fact leads to our first rule of games:

First Rule of Games: If L and R are sets of games, then G = (L | R) is a game.

Depending on one’s background assumptions, this rule does not necessarily rule out games with infinite runs, or pathological games like Hypergame. We therefore explicitly forbid this:

Second Rule of Games: There is no infinite sequence \langle G_i = (L_i | R_i) \mid i \in \mathbb{N} \rangle of games such that, for all i \in \mathbb{N}, G_{i+1} \in L_i \cup R_i.

And that’s it! Now we know what games are…

The skeptics among you may think this is not enough. It may not even be immediately evident that there are any games at all! But there are. Note that the empty set is certainly a set of games (all of its elements are certainly games). Therefore, G = (\emptyset | \emptyset) is a game. It is a boring game in which neither player can make any moves, but it is a game nonetheless. We can now begin to construct more interesting games, like (\{(\emptyset | \emptyset), (\{(\emptyset | \emptyset)\} | \{(\emptyset | \emptyset)\})\} | \{(\emptyset | \emptyset)\}), or chess.

There’s one crucial aspect of games we haven’t dealt with yet: who wins? We deal with this in the obvious way. Let us suppose that, in an actual run of a game, the players must alternate moves (though a game by itself does not specify who makes the first move). During a run of a game, a player loses if it is their turn to move and they have no moves to make, e.g., the game has reached a position (L | R), it is R’s turn to move, and R = \emptyset.


Let us look now at a simple, illustrative example of a game: Nim. A game of Nim starts with a finite number of piles, each containing a finite number of objects. On a player’s move, they choose one of these piles and remove any non-zero number of objects from that pile. The loser is the first player who is unable to remove any objects.

Let us denote games of Nim by finite arrays of numbers, arranged in increasing order. For example, the game of Nim starting with four piles of, respectively, 1,3,5, and 7 objects will be represented by [1,3,5,7]. The trivial game of Nim, consisting of zero piles, and in which the first player to move automatically loses, will be represented by [0].

Let us see that Nim falls into the game framework that we developed above. The trivial game of Nim is clearly equivalent to the trivial game, (\emptyset | \emptyset). We can now identify other games of Nim as members of our framework by induction on, say, the total number of objects involved in the game at the start. Thus, suppose we are trying to identify [1,3,5,7] as a game and we have already succeeded in identifying all instances of Nim with fewer than 16 objects. What instances of Nim can [1,3,5,7] be transformed into by a single move? Well, a player can remove all of the objects from a pile, resulting in [1,3,5], [1,3,7], [1,5,7], or [3,5,7]. Alternatively, they can remove parts of the 3, 5, or 7 piles, resulting in things like [1,3,4,5], [1,1,5,7], etc. All of these Nim instances, clearly, have fewer than 16 objects, so, if we let X denote the set of Nims that can result after one move from [1,3,5,7], then we have shown that X is a set of games, in the sense of our formal framework. We can therefore define a game (X | X), which is clearly equivalent to [1,3,5,7].
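Here is a small Python sketch of this identification (the representation and names are my own): a game is a pair of frozensets of games, the trivial game is the pair of empty sets, and a Nim position becomes the game (X | X), where X is the set of positions reachable in a single move.

```python
from functools import lru_cache

ZERO = (frozenset(), frozenset())   # the trivial game, with no moves for either player

@lru_cache(maxsize=None)
def nim(piles):
    """The Nim position with the given piles, viewed as a game (X | X):
    in Nim, both players have exactly the same moves available.  The
    recursion bottoms out at the trivial game, so the Second Rule holds."""
    piles = tuple(sorted(p for p in piles if p > 0))
    moves = set()
    for i, p in enumerate(piles):
        for take in range(1, p + 1):    # remove 1..p objects from pile i
            moves.add(nim(piles[:i] + (p - take,) + piles[i + 1:]))
    X = frozenset(moves)
    return (X, X)

print(nim(()) == ZERO)       # the empty Nim is the trivial game: True
print(len(nim((1, 3))[0]))   # distinct positions reachable in one move from [1, 3]: 4
```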

In the next post, we’ll look at strategies for games. When can we say for sure which player wins a game? How can we derive winning strategies for games? And what does it all mean?


Cover image: Paul Cézanne, “The Card Players”

Playing Games I: Setting Up the Pieces

Life is more fun if you play games.

-Roald Dahl

Combinatory play seems to be the essential feature in productive thought.

-Albert Einstein

Observant readers will have noted the multiple occasions on which games have shown up in our posts here at Point at Infinity. We have examined the paradoxes of Hypergame in pursuit of a proof of Cantor’s Theorem. We have callously decided the fates of prisoners by playing games with hat colors. We have seen mysterious characters engage in a variant of Nim in Last Year at Marienbad. Some may even accuse us of playing a few games ourselves.

There are reasons for this. Games are fun, for one. And, more to the point, games often provide a useful lens through which to view more “serious” topics. So, over the next few weeks, we are going to be taking a deeper look at all kinds of games and the light they can shed on the infinite. We will discover a winning strategy for Marienbad (among other games). We will investigate Conway’s surreal numbers (for real this time) in the context of game values. We will consider the profound and often surprising role infinite games have played in modern set theory, in particular with regard to questions around the strange and counter-intuitive Axiom of Determinacy. We may even venture into muddier philosophical waters to look at Wittgenstein’s language games or James Carse’s ideas about finite and infinite games.

It will be fun, and I hope you will join us. For today, though, just enjoy this video of an instance of Conway’s Game of Life, implemented inside another instance of Conway’s Game of Life:


Cover image: Still from The Seventh Seal.

Marienbad

Trying to make sense of it doesn’t make sense.

-Last Year at Marienbad

A spa town in the Czech Republic, Marienbad was a favored vacation spot of European royalty and celebrities during the 19th and early 20th centuries. Some mathematicians came, too: Karl Weierstrass, Gösta Mittag-Leffler, and Sofia Kovalevskaya were all drawn by the combination of the restful atmosphere and sparkling social life that could be found at the spa.

Though its golden era ended in the early 20th century, Marienbad remained popular between the world wars. Among the visitors during that time was a young Kurt Gödel. According to some accounts, Gödel’s interest in the sciences was kindled by a teenage visit to Marienbad, during which he and his brother studied Goethe’s philosophical theory of color and found it lacking in comparison to Newton’s more strictly scientific account.

After the war we were in Marienbad quite often with my brother, and I remember that we once read Chamberlain’s biography of Goethe together. At several points, he took a special interest in Goethe’s theory of color, which also served as a source of his interest in the natural sciences. In any case, he preferred Newton’s analysis of the color spectrum to Goethe’s.

-Rudolf Gödel

Goethe himself was a frequent visitor to Marienbad. During an 1823 trip, the 73-year-old Goethe became infatuated with the 18-year-old Baroness Ulrike von Levetzow. The pain caused by her rejection of his marriage proposal led him to write the famous Marienbad Elegy (updated in 1999 by the great W.G. Sebald).


…who dance, stroll up and down, and swim in the pool, as if this were a summer resort like Los Teques or Marienbad.

-Adolfo Bioy Casares, The Invention of Morel

In 1961, Last Year at Marienbad was released. Directed by Alain Resnais and written by Alain Robbe-Grillet, the film is as beautiful as it is inexplicable. On its face, the film is set at a resort hotel; an unnamed man (‘X’) becomes infatuated with an unnamed woman (‘A’) and attempts to convince her that they had an affair the previous year. The film unfolds in combinatorial play, with narration and scenes repeated in ever-evolving and bewildering variation.

A popular theory is that Last Year at Marienbad is actually an adaptation of Adolfo Bioy Casares’ The Invention of Morel, a novel of which our friend Jorge Luis Borges wrote, “To classify it as perfect is neither an imprecision nor a hyperbole.” I will not say much about this, so as not to spoil the book (you should go read it right now), but will only mention that Morel was, in a way, an homage to Louise Brooks, a Hollywood actress with whom Casares was somewhat obsessed and whose performance in Pandora’s Box provided a model for Delphine Seyrig’s performance as ‘A’ in Marienbad. The idea that there is a direct line from Casares and Brooks to the main characters in Morel to ‘X’ and ‘A’ in Marienbad, and that the obscurities of both novel and film are at heart simply odes to the power of cinema is an appealing one.

This theory about the connection between Marienbad and Morel was never acknowledged by the filmmakers (some say that the source text for the film is not Casares’ novel, but rather Wittgenstein’s Philosophical Investigations). Perhaps its fullest explication is given by this article in Senses of Cinema, in which the only sources given by the author are the dust jacket of a different Casares work and an Encyclopedia Britannica article which has since been removed from the online archives. Regardless of the theory’s truth, though, when one views the film through the lens of Morel, it comes tantalizingly close to making sense; the characters of the film lose their agency, consigned to repeating their roles ad infinitum.

At first sight, it seemed impossible to lose your way. At first sight…

Last Year at Marienbad


The third main character in Marienbad is ‘M’, a man who may or may not be the husband of ‘A’. Throughout the film, we see ‘M’ playing a version of the mathematical game of Nim with ‘X’.

This version of Nim became known by the name “Marienbad” and was a brief craze in certain circles. It even got written about in Time:

Last week the Marienbad game was popping up at cocktail parties (with colored toothpicks), on commuter trains (with paper matches), in offices (paper clips) and in bars (with swizzle sticks). Only two can play, but any number can kibitz — and everyone, it seems, has a system for duplicating “X’s” talent for winning.

-“Games: Two on a Match,” Time, Mar. 23, 1962

The game is a theoretical win for the second player, although, as it is unlikely that a player will stumble upon the winning strategy by accident, ‘X’ is able to win even as the first player. We will return to general winning strategies for Nim and other games in later posts.

-The one who starts, wins.

-You must take an even number.

-You must take the smallest odd number.

-It’s a logarithmic series.

-You must switch rows as you go.

-And divide by three.

-Seven times seven is forty-nine.

-kibitzers in Last Year at Marienbad


I leave you now with Nick Cave’s exquisite “Girl in Amber.” Another secret adaptation of The Invention of Morel? Possible…

Cantor and the Absolute (Universal Structures IV)

As long-time readers of this blog will know, the mathematical discipline of set theory, and with it the modern mathematical investigation of infinity, was almost single-handedly initiated by Georg Cantor, a mathematical giant of the late 19th century. Among the factors contributing to Cantor’s revolutionary achievements was his willingness to work with actual infinities, to consider infinite sets as completed entities that can be manipulated as mathematical objects in their own right. Two of the foundational ideas made possible by Cantor’s acceptance of actual infinity are:

  • the existence of multiple infinite cardinalities;
  • the concept of transfinite ordinal numbers.

It was realized quite early in the development of set theory, though, that a naïve conception of infinity and sets would lead to paradoxes. (Indeed, the existence of these paradoxes is one of the reasons for the reluctance of mathematicians to accept actual infinities and provided fodder for Cantor’s mathematical enemies.) Two of the most consequential of these paradoxes are Russell’s Paradox and the Burali-Forti Paradox.

Russell’s Paradox: Essentially, Russell’s Paradox asserts that there can be no set of all sets. For suppose that such a set exists (call it S). Then, by any reasonable formulation of set theory, we could form the set of all sets that do not contain themselves. Formally, this set, which we will call R, would be \{x \in S \mid x \not\in x\}. Now comes the crucial question: can R be a member of itself? Either answer leads to contradiction: if R \in R, then  R \not\in R, and if R \not\in R, then R \in R. Thus, there can be no set of all sets.

Burali-Forti Paradox: The Burali-Forti Paradox goes even further than Russell’s Paradox: not only can the set of all sets not exist, but the set of all ordinal numbers cannot exist either. For suppose that T is the set of all ordinal numbers. Recall that an ordinal number is, essentially, the order type of a well-ordered set. Recall also that the class of ordinal numbers is well-ordered: any non-empty set of ordinals has a least element. This implies that T is itself a well-ordered set and therefore has an order type, which is an ordinal number. But since T contains all of the ordinal numbers, its order type must be strictly larger than every ordinal number, including itself. This is a contradiction.

The collection of all sets and the collection of all ordinal numbers are therefore not sets. But it seems that we are nonetheless able to conceive of them, to reason about them. They seem to have some form of reality. So what are they?

The standard response in modern set theory is simply to say that these collections are what is known as proper classes. They are collections that are “too large” to be sets, that are “too large” to be manipulated in ways that sets are manipulated. This has served us well, to this point banishing paradox from the subject. Cantor’s response was similar but was imbued with something more. Consider this excerpt from a letter sent by Cantor to the English mathematician Grace Chisholm Young:

I have never assumed a “Genus Supremum” of the actual infinite. Quite on the contrary I have proved that there can be no such “Genus Supremum” of the actual infinite. What lies beyond all that is finite and transfinite is not a “Genus”; it is the unique, completely individual unity, in which everything is, which comprises everything, the ‘Absolute’, for human intelligence unfathomable, also that not subject to mathematics, unmeasurable, the “ens simplicissimum”, the “Actus purissimus”, which is by many called “God”.

For Cantor, infinity comes in not two but three varieties: the potential infinity of limits, sequences, and series; the transfinite infinity of the infinite cardinal and ordinal numbers; and the absolute infinity, which transcends mathematics and human comprehension, and which is identified with God.


Much has been made of the possibility that Georg Cantor’s family had Jewish roots and that Cantor’s conceptions of infinity and mathematical philosophy were influenced by Jewish mysticism and Kabbalah. On the former issue we will just say that there is evidence, from correspondence and genealogical records, pointing in both directions. On the latter, we will simply note that the symbol Cantor chose to denote the infinite cardinals was ‘aleph’ (ℵ), the first letter of the Hebrew alphabet and a letter of importance in Kabbalah as the opening of the words Ein Sof (infinity, roughly) and Elohim (a name for God).

Also, here’s a little-known fact: the symbol that Cantor used to denote the class of all cardinal numbers (too big to be a set, of course) is ‘tav’ (ת), the last letter in the Hebrew alphabet and the representative, in Kabbalah, of perfection, of the synthesis of all that exists.


Another fascinating intersection between Cantor’s set theory and theology involves the Catholic Church. Long-time readers may recall that, the last time the Church appeared in our story, the mathematician involved was Galileo, and the relationship between the two could be described, at the risk of understatement, as antagonistic.

250 years later, the relationship between the Catholic Church and the day’s leading provocateur of the infinite would turn out decidedly differently. This was due in large part to Pope Leo XIII, who assumed the position in 1878 and, the following year, issued the encyclical Aeterni Patris. The encyclical called for the revival of the Scholastic philosophy of Thomas Aquinas. It sought to modernize the Church and to increase its interest and participation in scientific inquiry.

For, the investigation of facts and the contemplation of nature is not alone sufficient for their profitable exercise and advance; but, when facts have been established, it is necessary to rise and apply ourselves to the study of the nature of corporeal things, to inquire into the laws which govern them and the principles whence their order and varied unity and mutual attraction in diversity arise. To such investigations it is wonderful what force and light and aid the Scholastic philosophy, if judiciously taught, would bring.

-Leo XIII, Aeterni Patris

Neo-Scholastic scholars thus gained prominence in the Church, and the Church began engaging more with the cutting-edge scientific work of the day. Cantor’s brand new ideas about infinity were naturally of interest. Cantor, with religious tendencies of his own and upset by the skeptical or hostile reactions of most of his fellow mathematicians to his work, was himself eager to explain his ideas to the Church and ensure that they were properly understood. And thus began the remarkable correspondence between Cantor and the leading Catholic scholars and clergy of the day.

In any case it is necessary to undertake a serious examination of the latter question concerning the truth of the Transfinitum, for were I correct in asserting its truth in terms of the possibility of the Transfinitum, then there would be (without doubt) a certain danger of religious error for those of the opposite opinion, since: error circa creaturas redundat in falsam de Deo scientiam.

-letter from Cantor to Jeiler von Pfingsten, 1888

One of the issues that arose in Cantor’s correspondence with the Church involved the fundamental existence of Cantor’s transfinite numbers. (Note that I am in no way a theological scholar, so this account is probably grossly oversimplified.) A number of Catholic intellectuals were hesitant to accept the objective existence of Cantor’s transfinite numbers on the grounds that, as God is often theologically identified with the infinite, the acceptance of the existence of transfinite numbers would lead inevitably to Pantheism, a doctrine which was not only implicitly but explicitly (by Pius IX in 1861) condemned by the Church. Cantor responded by emphasizing his distinction between the absolute infinity and the transfinite infinity, between an “Infinitum aeternum increatum sive Absolutum” and an “Infinitum creatum sive Transfinitum.” The former is reserved for God; the latter manifests itself in mathematics and in the universe.

The neo-Scholastic thinkers were largely convinced by Cantor’s distinction and came to accept many of his ideas about infinity. Cantor himself took great pride in this achievement, even going so far as, in a moment of mathematical self-doubt engendered by his inability to solve the Continuum Problem, to express the following remarkable sentiment in an 1894 letter to the French mathematician Charles Hermite:

Now I only thank God, the all-wise and all-good, that he always denied me the fulfillment of this wish [for a position at University in either Göttingen or Berlin], for He thereby constrained me, through a deeper penetration into theology, to serve Him and His Holy Roman Catholic Church better than I would have been able to with my probably weak mathematical powers through an exclusive occupation with mathematics.


P.S.: Cantorian set theory recently made a surprise appearance in the New York Times crossword puzzle:


Rex Parker was not happy!


Acknowledgement: The material about Cantor’s correspondence with the Catholic Church came largely from Joseph W. Dauben’s excellent paper, “Georg Cantor and Pope Leo XIII: Mathematics, Theology, and the Infinite.”

Cover Image: Harald Sohlberg, “Winter Night in the Mountains,” 1901. Nasjonalgalleriet, Oslo.