# Infinity in the Classroom II

We’re between posts about non-Euclidean geometry here at Point at Infinity. In the meantime, take a look at this article about a fascinating connection between hyperbinary numbers and the countability of the rationals, and about exploring this connection in the classroom. Enjoy!

# Parallel Lines

Detest it just as much as lewd intercourse; it can deprive you of all your leisure, your health, your rest, and the whole happiness of your life.

Do not try the parallels in that way: I know that way all along. I have measured that bottomless night, and all the light and all the joy of my life went out there.

-Letters from Farkas Bolyai to his son, János, attempting to dissuade him from his investigations of the Parallel Postulate.

In our previous post, cataloging various notions of mathematical independence, we introduced the idea of logical independence. Briefly, given a consistent set of axioms, $T$, a sentence $\varphi$ is independent from $T$ if it can be neither proven nor disproven from the sentences in $T$. Today, we discuss one of the most prominent and interesting instances of logical independence: Euclid’s Parallel Postulate.

Among the most famous sets of axioms (top 5, certainly) are Euclid’s postulates, five statements underpinning (together with 23 definitions and five other statements putting forth the properties of equality) the mathematical system of Euclidean geometry set forth in the Elements and still taught in high school classrooms to this day. (We should note here that, from a modern viewpoint, Euclid’s proofs do not always strictly conform to the standards of mathematical rigor, and some of his results rely on methods or assumptions not justified by his five postulates. This has been fixed, for example by Hilbert, who gave a different set of axioms for Euclidean geometry in 1899. Now that we have noted this, we will proceed to forget it for the remainder of the post.)

Euclid’s first four postulates are quite elegant in their simplicity and self-evidence. Reformulated in modern language, they are roughly as follows:

1. Given any two points, there is a unique line segment connecting them.
2. Given any line segment, there is a unique line (unbounded in both directions) containing it.
3. Given any point $P$ and any radius $r$, there is a unique circle of radius $r$ centered at $P$.
4. All right angles are congruent.

The fifth postulate, however, which is known as the Parallel Postulate, is, quite unsatisfyingly, markedly more complicated and less self-evident:

5. If two lines intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines must intersect one another on that side.

A picture might help illustrate this postulate:

This postulate doesn’t seem to be explicitly about parallel lines, so the reader may be wondering why it is often called the Parallel Postulate. The reason becomes evident, though, when considering the following statement and learning that, in the context of the other four postulates, it is in fact equivalent to the Parallel Postulate:

1. Given any line $\ell$ and any point $P$ not on $\ell$, there is exactly one line through $P$ that is parallel to $\ell$.

(Recall that two lines are parallel if they do not intersect.) This reformulation of the Parallel Postulate is often named Playfair’s Axiom, after the 18th-century Scottish mathematician John Playfair, though it was stated already by Proclus in the 5th century.

The Parallel Postulate was considered undesirably unwieldy and less satisfactory than the other four postulates, even by Euclid himself, who made a point of proving the first 28 results of the Elements without recourse to the Parallel Postulate. The general opinion among mathematicians for the next two millennia was that the Parallel Postulate should not be an axiom but rather a theorem; it should be possible to prove it using just the other four postulates.

Many attempts were made to prove the Parallel Postulate, and many claimed success at this task. Errors were then inevitably discovered by later mathematicians, many of whom subsequently put forth false proofs of their own. The aforementioned Proclus, for example, after pointing out flaws in a purported proof of Ptolemy, gives his own proof, which suffers from two instructive flaws. The first is relatively minor: Proclus assumes a consequence of Archimedes’ Axiom, which essentially states that, given any two line segments, there is a natural number $n$ such that $n$ times the length of the shorter line segment will exceed the length of the longer. (We encountered Archimedes’ Axiom in a previous post, about infinitesimals, which the reader is invited to revisit.) Archimedes’ Axiom seems like an entirely reasonable axiom to assume, but it notably does not follow from Euclid’s postulates.

Proclus’ more serious error, though, is that he makes the assumption that any two parallel lines have a constant distance between them. But this does not follow from the first four postulates. In fact, the statement, “The set of points equidistant from a straight line on one side of it forms a straight line,” known as Clavius’ Axiom, is, in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. Proclus’ proof is therefore just a sophisticated instance of begging the question.

In the course of the coming centuries’ attempts to prove the Parallel Postulate, a number of other axioms were unearthed that are, at least in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. In addition to Playfair’s Axiom and Clavius’ Axiom, these include the following:

• (Clairaut) Rectangles exist. (A rectangle, of course, being a quadrilateral with four right angles.)
• (Legendre) Given an angle $\alpha$ and a point $P$ in the interior of the angle, there is a line through $P$ that meets both sides of the angle.
• (Wallis) Given any triangle, there are similar triangles of arbitrarily large size.
• (Farkas Bolyai) Given any three points, not all lying on the same line, there is a circle passing through all three points.

A key line of investigation into the Parallel Postulate was carried out, probably independently, by Omar Khayyam, an 11th-century Persian mathematician, astronomer, and poet, and by Giovanni Gerolamo Saccheri, an 18th-century Italian Jesuit priest and mathematician. For concreteness, let us consider Saccheri’s account, which has the wonderful title, “Euclid Freed from Every Flaw.”

Saccheri and Khayyam were, similarly to their predecessors, attempting to prove the Parallel Postulate. Their method was proof by contradiction: assume that the Parallel Postulate is false and derive a contradiction. To do this, they considered figures that came to be known as Khayyam-Saccheri quadrilaterals.

To form a Khayyam-Saccheri quadrilateral, take a line segment (say, BC). Take two line segments of equal length and form perpendiculars, in the same direction, at B and C (forming, say, AB and DC). Now connect the ends of those two line segments with a line segment (AD) to form a quadrilateral. A picture is given below.

By construction, the angles at B and C are right angles, but the angles at A and D are unclear. Saccheri proves that these two angles are equal. He also proves that, if these angles are obtuse, then they are obtuse for every such quadrilateral; if they are right, then they are right for every such quadrilateral; and, if they are acute, then they are acute for every such quadrilateral. This then naturally divides geometries into three categories: those satisfying the Obtuse Hypothesis, those satisfying the Right Hypothesis, and those satisfying the Acute Hypothesis. (These types of geometries subsequently became known as semielliptic, semieuclidean, and semihyperbolic, respectively.)

At this point, Saccheri attempts to prove that the Obtuse Hypothesis and the Acute Hypothesis both lead to contradiction. (Note that a geometry satisfying all five of Euclid’s postulates must satisfy the Right Hypothesis. The converse is not true, so even a successful refutation of the Obtuse and Acute Hypotheses would not be enough to establish the Parallel Postulate.) Saccheri is able to prove (in the presence of Archimedes’ Axiom) that the Obtuse Hypothesis leads to the conclusion that straight lines are finite, thus contradicting the second postulate. He is unable to obtain a logical contradiction from the Acute Hypothesis, though. Instead, he derives a number of counter-intuitive statements from it and then concludes that the Acute Hypothesis must be false because it is “repugnant to the nature of a straight line.”

The next big steps towards the establishment of the independence of the Parallel Postulate were made by Nikolai Lobachevsky and János Bolyai (who fortunately did not heed his father’s letters quoted at the top of this post), 19th-century mathematicians from Russia and Hungary, respectively. (Similar work was probably done by Gauss, as well, though it was never published.) Their work entailed a crucial shift in perspective – rather than attempt to prove the Parallel Postulate from the others, the mathematicians seriously considered the possibility that it is not provable and thought of non-Euclidean geometries (i.e., those failing to satisfy the Parallel Postulate) as legitimate objects of mathematical study in their own right. In particular, they were interested in hyperbolic geometry, in which the Parallel Postulate is replaced by the assertion that, given any line $\ell$ and any point $P$ not on the line, there are at least two distinct lines passing through $P$ and parallel to $\ell$. (Not surprisingly, considering the nomenclature, hyperbolic geometries are semihyperbolic, i.e., they satisfy the Acute Hypothesis.) This viewpoint was vindicated when, in 1868, Eugenio Beltrami produced a model of hyperbolic geometry. This shows that, as long as Euclidean geometry is consistent, the Parallel Postulate is independent of the other four postulates: all five postulates are true, for example, in the Cartesian plane, while the first four are true and the Parallel Postulate is false in any model of hyperbolic geometry.

A number of other models for hyperbolic geometry are now known. In our next post, we will look at a particularly elegant one: the Poincaré disk model.

Cover Image: Michael Tompsett, Parallel Lines

For more information on this and many other geometric topics, I highly recommend Robin Hartshorne’s excellent book, Geometry: Euclid and Beyond.

# Independence

I am no bird; and no net ensnares me: I am a free human being with an independent will.

-Charlotte Brontë, Jane Eyre

An object is independent from others if it is, in some meaningful way, outside of their area of influence, if it has some meaningful measure of self-determination. Independence is important. Nations have gone to war to obtain independence from other nations or empires. Adolescents go through rebellious periods, yearning for independence from parents or other authority figures. Though perhaps less immediately exciting, notions of independence permeate mathematics, as well. Viewed in the right light, they can even be seen as direct analogues of the more familiar notions considered above: in various mathematical structures, there is often a natural way of defining an area of influence of an element or subset of the structure. A different element is then independent of this element or subset if it is outside its area of influence. Such notions have proven to be of central importance in a wide variety of mathematical contexts. Today, in anticipation of some deeper dives in future posts, we take a brief look at a few prominent examples.

Graph Independence: Recall that a graph is a pair $G = (V,E)$, where $V$ is a set of vertices and $E$ is a set of edges between these vertices. If $u \in V$ is a vertex, then its neighborhood is the set of all vertices that are connected to $u$ in the graph, i.e., $\{v \in V \mid \{u,v\} \in E\}$. One could naturally consider a vertex’s area of influence in the graph $G$ to consist of the vertex itself together with all of its neighbors. With this viewpoint, we can say that a vertex $u \in V$ is independent from a subset $A \subseteq V$ if, for all $v \in A$, $u$ is not in the neighborhood of $v$, i.e., $u$ is not in the area of influence of any element of $A$. Similarly, we may say that a set $A \subseteq V$ of vertices is independent if each element of $A$ is independent from the rest of the elements of $A$, i.e., if each $v \in A$ is independent from $A \setminus \{v\}$.
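These definitions translate directly into code. Here is a minimal Python sketch (the function names and the edge representation as a set of frozensets are my own choices) that checks independence in a small graph:

```python
def neighborhood(u, edges):
    # all vertices sharing an edge with u
    return {v for e in edges for v in e if u in e and v != u}

def is_independent_from(u, A, edges):
    # u is independent from A if it lies in no member of A's neighborhood
    return all(u not in neighborhood(v, edges) for v in A)

def is_independent_set(A, edges):
    # A is independent if each vertex is independent from the rest of A
    return all(is_independent_from(u, A - {u}, edges) for u in A)

edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}  # a 4-cycle
print(is_independent_set({1, 3}, edges))  # opposite corners: independent
print(is_independent_set({1, 2}, edges))  # adjacent vertices: not independent
```

In the 4-cycle, the two "diagonal" pairs $\{1,3\}$ and $\{2,4\}$ are exactly the independent sets of maximum size.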

Fun Fact: In computer science, there are a number of interesting computational problems involving independent sets in graphs. These problems are often quite difficult; for example, the maximum independent set problem, in which one is given a graph and must produce an independent set of maximum size, is known to be NP-hard.
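For small graphs one can still find a maximum independent set by brute force. The sketch below (a toy illustration, not a practical algorithm) simply tries subsets from largest to smallest, which hints at why the problem becomes intractable as graphs grow:

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    # exponentially many subsets to check in the worst case
    for size in range(len(vertices), 0, -1):
        for A in combinations(vertices, size):
            if all(frozenset(pair) not in edges for pair in combinations(A, 2)):
                return set(A)
    return set()

cycle5 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]}
print(max_independent_set([1, 2, 3, 4, 5], cycle5))  # a set of size 2, e.g. {1, 3}
```

In a 5-cycle, no three vertices can avoid sharing an edge, so the independence number is 2.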

Linear Independence: Let $n$ be a natural number, and consider the real $n$-dimensional Euclidean space $\mathbb{R}^n$, which consists of all $n$-tuples of real numbers. Given $\vec{u} = (u_1,\ldots,u_n)$ and $\vec{v} = (v_1, \ldots, v_n)$ in $\mathbb{R}^n$ and a real number $r \in \mathbb{R}$, we can define the elements $\vec{u} + \vec{v}$ and $r\vec{u}$ in $\mathbb{R}^n$ as follows:

$\vec{u} + \vec{v} = (u_1 + v_1, \ldots, u_n + v_n)$

$r\vec{u} = (ru_1, \ldots, ru_n).$

(In this way, $\mathbb{R}^n$ becomes what is known as a vector space over $\mathbb{R}$.) Given a subset $A \subseteq \mathbb{R}^n$, the natural way to think about its linear area of influence is as $\mathrm{span}(A)$, the set of all $n$-tuples of the form

$r_1\vec{v}_1 + \ldots + r_k\vec{v}_k,$

where $k$ is a natural number, $r_1, \ldots, r_k$ are real numbers, and $\vec{v}_1, \ldots, \vec{v}_k$ are elements of $A$.

In this way, we say that an $n$-tuple $\vec{u} \in \mathbb{R}^n$ is linearly independent from a set $A \subseteq \mathbb{R}^n$ if $\vec{u}$ is not in $\mathrm{span}(A)$. A set $A$ is linearly independent if each element $\vec{u}$ of $A$ is not in $\mathrm{span}(A \setminus \{\vec{u}\})$, i.e., if each element of $A$ is linearly independent from the set formed by removing that element from $A$. It is a nice exercise to show that every linearly independent subset of $\mathbb{R}^n$ has size at most $n$ and is maximal if and only if it has size equal to $n$.
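Numerically, linear independence can be tested with a rank computation: a finite set of vectors is linearly independent exactly when the matrix having those vectors as rows has rank equal to the number of vectors. A small sketch using NumPy (the helper name is my own):

```python
import numpy as np

def is_linearly_independent(vectors):
    # independent iff the row matrix has full row rank
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([(1, 0, 0), (0, 1, 0)]))                # True
print(is_linearly_independent([(1, 2, 3), (2, 4, 6)]))                # False: one is twice the other
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1),
                               (1, 1, 1)]))                           # False: 4 vectors in R^3
```

The last example illustrates the exercise above: no linearly independent subset of $\mathbb{R}^3$ can have more than 3 elements.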

Fun Fact: Stay tuned until the end of the post!

Thou of an independent mind,
With soul resolv’d, with soul resign’d;
Prepar’d Power’s proudest frown to brave,
Who wilt not be, nor have a slave;
Virtue alone who dost revere,
Thy own reproach alone dost fear—
Approach this shrine, and worship here.

-Robert Burns, “Inscription for an Altar of Independence”

Algebraic Independence: If $A$ is a set of real numbers, then one can say that its algebraic area of influence (over $\mathbb{Q}$, the set of rational numbers) is the set of all real roots of polynomial equations with coefficients in $A \cup \mathbb{Q}$, i.e., the set of all real numbers that are solutions to equations of the form:

$r_kx^k + \ldots + r_1x + r_0 = 0,$

where $k$ is a natural number and $r_0, \ldots, r_k$ are elements of $A \cup \mathbb{Q}$, not all equal to $0$. With this definition, a real number $s$ is algebraically independent (over $\mathbb{Q}$) from a set $A \subseteq \mathbb{R}$ if $s$ is not the root of a polynomial equation with coefficients in $A \cup \mathbb{Q}$. A set $A \subseteq \mathbb{R}$ is algebraically independent (over $\mathbb{Q}$) if each element $s \in A$ is algebraically independent from $A \setminus \{s\}$.
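For instance, $\sqrt{2} + \sqrt{3}$ is not algebraically independent from the empty set: squaring twice shows it is a root of $x^4 - 10x^2 + 1$, which has rational coefficients. A quick numerical sanity check (floating-point arithmetic, of course, not a proof):

```python
import math

s = math.sqrt(2) + math.sqrt(3)
# s^2 = 5 + 2*sqrt(6), so (s^2 - 5)^2 = 24, i.e. s^4 - 10*s^2 + 1 = 0
value = s**4 - 10 * s**2 + 1
print(abs(value) < 1e-9)  # True, up to rounding error
```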

Fun Fact: Note that a 1-element set $\{s\}$ is algebraically independent over $\mathbb{Q}$ if and only if it is transcendental, i.e., is not the root of a polynomial with rational coefficients. $\pi$ and $e$ are famously both transcendental numbers, yet it is still unknown whether the 2-element set $\{\pi, e\}$ is algebraically independent over $\mathbb{Q}$. It is not even known if $\pi + e$ is irrational!

Logical Independence: Let $T$ be a consistent set of axioms, i.e., a set of sentences from which one cannot derive a contradiction. We can say that the logical area of influence of $T$ is the set of sentences that can be proven from $T$, together with their negations. In other words, it is the set of sentences which, if one takes the sentences in $T$ as axioms, can be proven either true or false. A sentence $\phi$ is then logically independent from $T$ if neither $\phi$ nor its negation can be proven from the sentences in $T$.

Logical independence is naturally of great importance in the study of the foundations of mathematics. Much of modern set theory, and much of my personal mathematical research, involves statements that are independent from the Zermelo-Fraenkel Axioms with Choice (ZFC), which is a prominent set of axioms for set theory and indeed for all of mathematics. These are statements, then, that in our predominant mathematical framework can neither be proven true nor proven false. The most well-known of these is the Continuum Hypothesis (CH), which, in one of its formulations, is the statement that there are no infinite cardinalities strictly between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. To prove that CH is independent from ZFC, one both produces a mathematical structure that satisfies ZFC and in which CH is true (which Kurt Gödel did in 1938) and produces a mathematical structure that satisfies ZFC and in which CH is false (which Paul Cohen did in 1963). Since Cohen’s result in 1963, a great number of natural mathematical statements have been proven to be independent from ZFC.

In our next post, we will consider a logical independence phenomenon of a somewhat simpler nature: the independence of Euclid’s Parallel Postulate from Euclid’s four other axioms for plane geometry, which will lead us to considerations of exotic non-Euclidean geometries.

Fun Fact: In the setting of general vector spaces, which generalize the vector spaces $\mathbb{R}^n$ from the above discussion of linear independence, a basis is a linearly independent set whose span (what we referred to as its linear area of influence) is the entire vector space. A basis for $\mathbb{R}^n$ is thus any linearly independent set of size $n$. Using the Axiom of Choice, one can prove that every vector space has a basis. However, there are models of ZF (i.e., the Zermelo-Fraenkel Axioms without Choice) in which there are (infinite-dimensional) vector spaces without a basis. Thus, the statement, “Every vector space has a basis,” is logically independent from ZF.

Solitude is independence.

-Hermann Hesse, Steppenwolf

# Circle Inversion and the Pappus Chain

There is a pledge of the big and of the small in the infinite.

-Dejan Stojanović

In the next two posts, we are going to look at two interesting geometric ideas of the 19th century involving circles. Next time, we will consider Poincaré’s disk model for hyperbolic geometry. Today, though, we immerse ourselves in the universe of inversive geometry.

Consider a circle in the infinite 2-dimensional plane:

This circle divides the plane into two regions: the bounded region inside the circle and the unbounded region outside the circle (let’s say that the points on the circle belong to both regions). A natural thing to want to do, now, especially in the context of this blog, would be to try to exchange these two regions, to map the infinite space outside the circle into the bounded space of the circle, and vice versa, in a “natural” way.

I could be bounded in a nutshell, and count myself a king of infinite space.

-William Shakespeare, Hamlet

Upon first reflection, one might be tempted to say that we want to “reflect” points across the circle. And this is sort of right, but reflection already carries a meaning in geometry. Truly reflecting points across the circle would preserve their distance from the circle, so the inside of the circle could only be mapped onto a finite ring whose outer radius is twice that of the original circle. Moreover, it would not be clear how to reflect points from outside this ring into the circle.

Instead, we want to consider a process known as “inversion.” Briefly speaking, we want to arrange so that points arbitrarily close to the center of the circle get sent to points arbitrarily far away from the center of the circle, and vice versa. For simplicity, let us suppose that the circle is centered at the origin of the plane and has a radius of 1. The most natural way to achieve our aim is to send a point $P$ to a point $P'$ that lies in the same direction from the origin as $P$ and whose distance from the origin is the reciprocal of the distance from $P$ to the origin. Here’s an example:

One can check that, algebraically, this inversion sends a point $P$ with coordinates $(x,y)$ to a point $P'$ with coordinates $(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2})$. Points inside the circle are sent to points outside the circle, points outside the circle are sent to points inside the circle, and points on the circle are sent to themselves. Moreover, as one might expect from the name, the inversion map is its own inverse: applying it twice, we end up where we started. Perfect!
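In code, the unit-circle inversion and its self-inverse property look like this (a minimal sketch; the function name is my own):

```python
def invert(p):
    # inversion in the unit circle centered at the origin; assumes p != (0, 0)
    x, y = p
    s = x * x + y * y
    return (x / s, y / s)

p = (0.3, 0.4)   # inside the unit circle, at distance 0.5 from the origin
q = invert(p)    # outside the circle, at distance 1 / 0.5 = 2
r = invert(q)    # back where we started, up to floating-point rounding
print(q, r)
```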

Wait a second, though. We’re being a little too hasty. What about the origin? Where is it sent? Our procedure doesn’t seem to tell us, and if we try to use our algebraic expression, we end up dividing by zero. Since the origin is inside the circle, it should certainly be sent to a point outside the circle, but all of those points are already taken. Also, since points arbitrarily close to the origin get mapped to points arbitrarily far from the origin, we want to send the origin to a point as far away from itself as possible. At first glance, we might seem to be in a quandary here, but longtime readers of this blog will see an obvious solution: the origin gets mapped to a point at infinity! (And the point at infinity, in turn, gets mapped to the origin.)

(Technical note: Since we’ve added a point at infinity, the inversion map should be seen not as a map on the plane $\mathbb{R}^2$, but on its one-point compactification (or Alexandroff compactification), $\hat{\mathbb{R}}^2$. In fact, the inversion map is a homeomorphism of $\hat{\mathbb{R}}^2$ with itself.)

Let’s examine what the inversion map does to simple geometric objects. We have already seen what happens to points. It should also be obvious that straight lines through the origin get mapped to themselves. For example, in the image above, the line connecting $P$ and $P'$ gets mapped to itself. (Here we are specifying, of course, that every line contains the point at infinity.)

A bit of thought and calculation will convince you that lines not passing through the origin get sent to circles that do pass through the origin.
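One can check this numerically. Under inversion in the unit circle, the vertical line $x = 2$ should map onto the circle of radius $\frac{1}{4}$ centered at $(\frac{1}{4}, 0)$, which passes through the origin. A sketch under those assumptions:

```python
import math

def invert(p):
    # inversion in the unit circle: (x, y) -> (x, y) / (x^2 + y^2)
    x, y = p
    s = x * x + y * y
    return (x / s, y / s)

# sample points on the vertical line x = 2, which misses the origin
for t in [-10, -1, 0, 0.5, 3, 100]:
    x, y = invert((2, t))
    # distance from the image point to (1/4, 0) should always be 1/4
    print(round(math.hypot(x - 0.25, y), 12))
```

As $t$ grows, the image points approach the origin, the image of the line's point at infinity.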

Since the inversion map is its own inverse, circles passing through the origin get mapped to lines that don’t pass through the origin. Circles that don’t pass through the origin, on the other hand, get mapped to other circles that don’t pass through the origin.

There’s an important special case of this phenomenon: a circle that is met perpendicularly by the circle through which we are inverting gets mapped to itself.

We thus have a sort of duality between lines and circles that has been revealed through the process of circle inversion. Lines, when seen in the right light, are simply circles with an infinite radius. We’re going to move on to some applications of circle inversion in just a sec, but, first, a pretty picture of an inverted checkerboard.

The introduction of the method of circle inversion is widely attributed to the Swiss mathematician Jakob Steiner, who wrote a treatise on the matter in 1824. When combined with the more familiar rigid transformations of rotation, translation, and reflection, the decidedly non-rigid transformation of inversion gives rise to inversive geometry, which became a major topic of study in nineteenth-century geometry. It was perhaps most notably applied by William Thomson (later to become 1st Baron Kelvin, immortalized in the name of a certain temperature scale), at the age of 21, to solve problems in electrostatics. Circle inversion also allows for extremely elegant proofs of classical geometric facts. We end today’s post with an example.

Consider three half-circles, all tangent to one another and centered on the same horizontal line, with two placed inside the third, as follows:

This figure (or, more precisely, the grey region enclosed by the semicircles) is known as an arbelos, and its first known appearance dates back to The Book of Lemmas by Archimedes. A remarkable fact about the arbelos is that, starting with the smallest of the semicircles in the figure, one can nestle into it an infinite sequence of increasingly small circles, each tangent to the two larger semicircles and the circle appearing before it, thus creating the striking Pappus chain, named for Pappus of Alexandria, who investigated the figure in the 3rd century AD:

Let us label the circles in the Pappus chain (starting with the smallest semicircle in the arbelos) $\mathcal{C}_0, \mathcal{C}_1, \mathcal{C}_2$, etc. (So, in the picture above, $P_1$ is the center of $\mathcal{C}_1$, $P_2$ is the center of $\mathcal{C}_2$, and so on.) Clearly, the size of $\mathcal{C}_n$ decreases as $n$ increases, but it is natural to ask how quickly it decreases. It is also natural to ask how the position of the point $P_n$ changes as $n$ increases. In particular, what is the height of $P_n$ above the base of the figure? It turns out that the answers to these two questions are closely related, a fact discovered by Pappus through a long and elaborate derivation in Euclidean geometry, and which we will derive quickly and elegantly through circle inversion.

Let $d_n$ denote the diameter of the circle $\mathcal{C}_n$, and let $h_n$ denote the height of the point $P_n$ above the base of the Pappus chain (i.e., the line segment $AB$). We will prove the remarkable formula:

For all $n \in \mathbb{N}$, $h_n = n \cdot d_n$.

For concreteness, let us demonstrate the formula for $\mathcal{C}_3$. The same argument will work for each of the circles in the Pappus chain. As promised, we are going to use circle inversion. Our first task is to find a suitable circle across which to invert our figure. And that circle, it turns out, will be the circle centered at $A$ and perpendicular to $\mathcal{C}_3$:

Now, what happens when we invert our figure? First, consider the two larger semicircles in the arbelos, with diameters $AC$ and $AB$. The circles of which these form the upper half pass through the center of our circle of inversion and thus, as discussed above, are mapped to straight lines by our inversion. Moreover, since the centers of these circles lie directly to the right of $A$, a moment’s thought should convince you that they are mapped to vertical lines.

Now, what happens to the circles in the Pappus chain? Well, none of them pass through $A$, so they will all get mapped to circles. $\mathcal{C}_3$ is perpendicular to the circle of inversion, so it gets mapped to itself. But, in the original diagram, $\mathcal{C}_3$ is tangent to the larger semicircles in the arbelos. Since circle inversion preserves tangency, in the inverted diagram, $\mathcal{C}_3$ is tangent to the two vertical lines that these semicircles are mapped to. And, of course, the same is true of all of the other circles in the Pappus chain. Finally, note that, since the center of $\mathcal{C}_0$ lies on the base of the figure, which passes through the center of our inversion circle, it also gets mapped to a point on the base of the figure. Putting this all together, we end up with the following striking figure:

The circle with diameter $AB$ gets mapped to the vertical line through $B'$, and the circle with diameter $AC$ gets mapped to the vertical line through $C'$. Our Pappus chain, meanwhile, is transformed by inversion into an infinite tower of circles, all of the same size, bounded by these vertical lines. Moreover, the circle $\mathcal{C}_3$ and the point $P_3$ are left in place by the inversion. It is now straightforward to use this tower to calculate the height $h_3$ of $P_3$ in terms of the diameter $d_3$ of $\mathcal{C}_3$. To get from $P_3$ down to the base, we must first pass through half of $\mathcal{C}_3$, which has a height of $\frac{d_3}{2}$. We then must pass through the image of $\mathcal{C}_2$ under the inversion, which has a height of $d_3$. Then the image of $\mathcal{C}_1$, which also has a height of $d_3$. And, finally, the image of the smallest semicircle of the arbelos, which has a height of $\frac{d_3}{2}$. All together, we get:

$h_3 = \frac{d_3}{2} + d_3 + d_3 + \frac{d_3}{2} = 3d_3$.

Pretty nice!
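The whole argument can also be checked by computation. The sketch below (the helper name and the particular arbelos dimensions are my own choices) builds the tower of equal circles between the two vertical lines, maps each back through the inversion, and confirms $h_n = n \cdot d_n$:

```python
def invert_circle(center, radius, k2=1.0):
    # image, under inversion of radius sqrt(k2) about the origin,
    # of a circle that does not pass through the origin
    cx, cy = center
    power = cx * cx + cy * cy - radius * radius  # power of the origin w.r.t. the circle
    return (k2 * cx / power, k2 * cy / power), abs(k2 * radius / power)

# arbelos with A at the origin: AB = 1, AC = 0.6 (arbitrary choices)
AB, AC = 1.0, 0.6
x1, x2 = 1.0 / AB, 1.0 / AC   # images of the two semicircles through A: vertical lines
D = x2 - x1                   # common diameter of the circles in the tower
cx = (x1 + x2) / 2            # the tower circles are centered between the lines

for n in range(8):
    # the image of C_n sits at height n * D; invert it back to recover C_n itself
    (px, py), r = invert_circle((cx, n * D), D / 2)
    h_n, d_n = py, 2 * r
    print(n, abs(h_n - n * d_n) < 1e-9)  # True: the formula h_n = n * d_n holds
```

Since the inversion is its own inverse, inverting the tower circle at height $n \cdot D$ recovers $\mathcal{C}_n$ exactly, and the height-to-diameter ratio $n$ is preserved.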

For further reading on circle inversion, see Harold P. Boas’ excellent article, “Reflections on the Arbelos.”

Cover image: René Magritte, The false mirror

# Playing Games II: The Rules

In an earlier examination of games, we ran into some trouble when Hypergame, a “game” we defined, led to a contradiction. This ended up being a positive development, as the ideas we developed there led us to a (non-contradictory) proof of Cantor’s Theorem, but it indicates that, if we are going to be serious about our study of games, we need to be more careful about our definitions.

So, what is a game? Here’s what Wittgenstein had to say about the question in his famous development of the notion of language games and family resemblances:

66. Consider for example the proceedings that we call “games.” I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? Don’t say: “There must be something common, or they would not be called ‘games’” but look and see whether there is anything common to all. For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look! Look for example at board-games, with their multifarious relationships. Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball-games, much that is common is retained, but much is lost. Are they all ‘amusing’? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball-games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and crisscrossing: sometimes overall similarities, sometimes similarities of detail.

-Ludwig Wittgenstein, Philosophical Investigations

This is perhaps the correct approach to take when studying the notion of “game” as commonly used in the course of life, but that is not what we are doing here. We want to isolate a concrete mathematical notion of game amenable to rigorous analysis, and for this purpose we must be precise. No doubt there will be things that many people consider games that will be left out of our analysis, and perhaps some of our games would not be recognized as such out in the wild, but this is beside the point.

To narrow the scope of our investigation, let us say more about what type of games we are interested in. First, for simplicity, we are interested in two-player games, in which the players play moves one at a time. We are also (for now, at least) interested in games that necessarily end in a finite number of moves (though, for any particular game, there may be no a priori finite upper bound on the number of moves in a run of that game). Finally, we will be interested in games for which the game must end in victory for one of the players. Our theory can easily be adapted to deal with ties, but this will just unnecessarily complicate things.

One way to think about a move in a game is as a transformation of the current game into a different one. Consider chess (and, just so it satisfies our constraints, suppose that a classical “tie” counts as a win for black). A typical game of chess starts with all of the pieces in their traditional spots (for simplicity, let’s be agnostic about which color moves first). However, we can consider a slightly different game, called chess_1, that has all of the same rules as chess except that white’s king pawn starts on e4, two squares up from its traditional square. This is a perfectly fine game, and white’s opening move of e2-e4 can be seen as a transformation of chess into chess_1.

With this idea in mind, it makes sense to think of a game as two sets of other games: one set is the set of games that one player can transform the game into by making a move, and the other set is the set of games that the other player can transform the game into by making a move. We will refer to our players as Left (L) and Right (R), so a game $G$ can be thought of as a pair $(L | R)$, where $L$ and $R$ are sets of games. This in fact leads to our first rule of games:

First Rule of Games: If $L$ and $R$ are sets of games, then $G = (L | R)$ is a game.

Depending on one’s background assumptions, this rule does not necessarily rule out games with infinite runs, or pathological games like Hypergame. We therefore explicitly forbid this:

Second Rule of Games: There is no infinite sequence $\langle G_i = (L_i | R_i) \mid i \in \mathbb{N} \rangle$ of games such that, for all $i \in \mathbb{N}$, $G_{i+1} \in L_i \cup R_i$.

And that’s it! Now we know what games are…

The skeptics among you may think this is not enough. It may not even be immediately evident that there are any games at all! But there are. Note that the empty set is certainly a set of games (all of its elements are certainly games). Therefore, $G = (\emptyset | \emptyset)$ is a game. It is a boring game in which neither player can make any moves, but it is a game nonetheless. We can now begin to construct more interesting games, like $(\{(\emptyset | \emptyset), (\{(\emptyset | \emptyset)\} | \{(\emptyset | \emptyset)\})\} | \{(\emptyset | \emptyset)\})$, or chess.
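To make this concrete, here is a minimal Python sketch (our own illustration, not part of the formalism itself) that represents a game as a pair of frozensets and builds the games just mentioned. The `depth` function computes the length of a longest possible run; the Second Rule of Games is exactly what guarantees that this recursion bottoms out.

```python
# Illustrative sketch: a game as a pair (L, R) of frozensets of games.
# Frozensets are hashable, so games can be members of other games' move sets.
def game(left, right):
    return (frozenset(left), frozenset(right))

zero = game([], [])          # (∅ | ∅): neither player can move

# The example from the text: ({(∅|∅), ({(∅|∅)}|{(∅|∅)})} | {(∅|∅)})
star = game([zero], [zero])
example = game([zero, star], [zero])

def depth(g):
    """Length of a longest possible run of g; the Second Rule of Games
    guarantees that this recursion terminates on every game."""
    left, right = g
    return 1 + max((depth(h) for h in left | right), default=-1)
```

Here `depth(zero)` is 0, since no moves are available, while `depth(example)` is 2: a player can move to `star`, from which one further move is possible.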

There’s one crucial aspect of games we haven’t dealt with yet: who wins? We deal with this in the obvious way. Let us suppose that, in an actual run of a game, the players must alternate moves (though a game by itself does not specify who makes the first move). During a run of a game, a player loses if it is their turn to move and they have no moves to make, e.g., the game has reached a position $(L | R)$, it is R’s turn to move, and $R = \emptyset$.

Let us look now at a simple, illustrative example of a game: Nim. A game of Nim starts with a finite number of piles, each containing a finite number of objects. On a player’s move, they choose one of these piles and remove any non-zero number of objects from that pile. The loser is the first player who is unable to remove any objects.

Let us denote games of Nim by finite arrays of numbers, arranged in increasing order. For example, the game of Nim starting with four piles of, respectively, 1,3,5, and 7 objects will be represented by [1,3,5,7]. The trivial game of Nim, consisting of zero piles, and in which the first player to move automatically loses, will be represented by [0].

Let us see that Nim falls into the game framework that we developed above. The trivial game of Nim is clearly equivalent to the trivial game, $(\emptyset | \emptyset)$. We can now identify other games of Nim as members of our framework by induction on, say, the total number of objects involved in the game at the start. Thus, suppose we are trying to identify [1,3,5,7] as a game and we have already succeeded in identifying all instances of Nim with fewer than 16 objects. What instances of Nim can [1,3,5,7] be transformed into by a single move? Well, a player can remove all of the objects from a pile, resulting in [1,3,5], [1,3,7], [1,5,7], or [3,5,7]. Alternatively, they can remove part of the 3, 5, or 7 piles, resulting in things like [1,3,4,5], [1,1,5,7], etc. All of these Nim instances clearly have fewer than 16 objects, so, if we let $X$ denote the set of Nim instances that can result after one move from [1,3,5,7], then we have shown that $X$ is a set of games, in the sense of our formal framework. We can therefore define a game $(X | X)$, which is clearly equivalent to [1,3,5,7].
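As a sanity check on this identification, a short Python sketch (our illustration, with one small deviation from the text's convention: an exhausted position is written as the empty tuple `()` rather than `[0]`) can enumerate the one-move successors of a Nim position and build the corresponding formal game $(X | X)$ recursively.

```python
from functools import lru_cache

def nim_moves(piles):
    """All Nim positions reachable in one move from `piles`,
    a sorted tuple of pile sizes; emptied piles are dropped."""
    out = set()
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            rest = piles[:i] + ((p - take,) if p > take else ()) + piles[i+1:]
            out.add(tuple(sorted(rest)))
    return out

@lru_cache(maxsize=None)
def nim_as_game(piles):
    """Identify a Nim position with the formal game (X | X) from the text:
    both players have exactly the same set of available moves."""
    X = frozenset(nim_as_game(q) for q in nim_moves(piles))
    return (X, X)
```

For instance, `nim_moves((1, 3, 5, 7))` yields the sixteen positions described above, and `nim_as_game(())` is the trivial game $(\emptyset | \emptyset)$.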

In the next post, we’ll look at strategies for games. When can we say for sure which player wins a game? How can we derive winning strategies for games? And what does it all mean?

Cover image: Paul Cézanne, “The Card Players”

# Playing Games I: Setting Up the Pieces

Life is more fun if you play games.

-Roald Dahl

Combinatory play seems to be the essential feature in productive thought.

-Albert Einstein

Observant readers will have noted the multiple occasions on which games have shown up in our posts here at Point at Infinity. We have examined the paradoxes of Hypergame in pursuit of a proof of Cantor’s Theorem. We have callously decided the fates of prisoners by playing games with hat colors. We have seen mysterious characters engage in a variant of Nim in Last Year at Marienbad. Some may even accuse us of playing a few games ourselves.

There are reasons for this. Games are fun, for one. And, more to the point, games often provide a useful lens through which to view more “serious” topics. So, over the next few weeks, we are going to be taking a deeper look at all kinds of games and the light they can shed on the infinite. We will discover a winning strategy for Marienbad (among other games). We will investigate Conway’s surreal numbers (for real this time) in the context of game values. We will consider the profound and often surprising role infinite games have played in modern set theory, in particular with regard to questions around the strange and counter-intuitive Axiom of Determinacy. We may even venture into muddier philosophical waters to look at Wittgenstein’s language games or James Carse’s ideas about finite and infinite games.

It will be fun, and I hope you will join us. For today, though, just enjoy this video of an instance of Conway’s Game of Life, implemented inside another instance of Conway’s Game of Life:

Cover image: Still from The Seventh Seal.

# Tree decomposition in Budapest

I am spending this week in Budapest in order to participate in the 6th European Set Theory Conference, and I want to take the occasion to present a nice little result of Paul Erdős, one of the great mathematicians of the twentieth century. Erdős was born and spent the first decades of his life in Budapest before spending most of his adult life traveling the world, living and working with a vast network of mathematical collaborators.

Erdős contributed to a huge array of mathematical disciplines, including set theory, my own primary field of specialization and the field from which today’s result is drawn. Like those of most other Hungarians working in set theory, Erdős’s results in the field have a distinctly combinatorial flavor.

In order to state and prove the result, we need to review a bit of terminology. Recall first that the Continuum Hypothesis is the assertion that the size of the set of all real numbers is $\aleph_1$, i.e., that there are no sizes of infinity strictly between the sizes of the set of natural numbers and the set of real numbers. As we have discussed, the Continuum Hypothesis is independent of the axioms of set theory.

The Continuum Hypothesis can be shown to be equivalent to a surprisingly diverse collection of other mathematical statements. We will be concerned with one of these statements today, coming from the field of graph theory. If you need a brief review of terminology regarding graphs, visit this previous post.

If $Z$ is a set, then we say that the complete graph on $Z$ is the graph whose vertex set is $Z$ and which contains all possible edges between distinct elements of $Z$.

If $G = (V, E)$ is a graph and $k \geq 3$ is a natural number, then a $k$-cycle in $G$ is a sequence of $k$ distinct elements of $V$, $(v_1, v_2, \ldots, v_k)$, such that $\{v_1, v_2\}, \{v_2, v_3\}, \ldots, \{v_{k-1}, v_k\}, \{v_k, v_1\}$ are all in $E$.

A graph without any cycles is called a tree. (Note that many sources require a tree to be connected as well as cycle-free and call a cycle-free graph a forest. This leads to the pleasing definition, “A tree is a connected forest.” We will ignore this distinction here, though.)

A complete graph is very far from being a tree: every possible cycle is there. We will be interested in the question: “How many trees does it take to make a complete graph?”

Let us be more precise. An edge-decomposition of a graph $G = (V,E)$ is a disjoint partition of $E$, i.e., a collection of sets $\{E_i \mid i \in I\}$, indexed by a set $I$, such that $E_i \cap E_j = \emptyset$ for distinct elements $i,j \in I$ and $\bigcup_{i \in I} E_i = E$. An edge-decomposition thus decomposes the graph $G = (V,E)$ into graphs $G_i = (V, E_i)$ for $i \in I$.

We will be interested in the number of pieces required for an edge-decomposition of a complete graph into trees. In the above image, we provide an edge-decomposition of the complete graph on 8 vertices into 4 trees (in fact, into 4 Hamiltonian paths). For an infinite complete graph, though, no finite number of pieces will ever suffice. The question we will be interested in today is the number of pieces necessary to provide an edge-decomposition of the complete graph on the real numbers, $\mathbb{R}$, into trees.

Theorem. (Erdős-Kakutani and Erdős-Hajnal) The following statements are equivalent:

1. The Continuum Hypothesis
2. There is an edge-decomposition of the complete graph on $\mathbb{R}$ into countably many trees.

Proof. Suppose first that the Continuum Hypothesis holds. Then $\mathbb{R}$ can be enumerated as $\langle r_\alpha \mid \alpha < \omega_1 \rangle$. For all $\beta < \omega_1$, we know that $\beta$ is countable, so we can fix a function $e_\beta:\beta \rightarrow \mathbb{N}$ that is one-to-one, i.e., if $\alpha_0$ and $\alpha_1$ are distinct ordinals less than $\beta$, then $e_\beta(\alpha_0) \neq e_\beta(\alpha_1)$.

We now specify an edge-decomposition of the complete graph on $\mathbb{R}$ into countably many graphs. The edge-sets of these graphs will be denoted by $E_n$ for $n \in \mathbb{N}$. To specify this decomposition, it suffices to specify, for each pair $\alpha < \beta$ of countable ordinals, a natural number $n_{\alpha, \beta}$ such that $\{r_\alpha, r_\beta\} \in E_{n_{\alpha, \beta}}$. And we have a natural way of doing this: simply let $n_{\alpha, \beta} = e_\beta(\alpha)$.
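The construction is transfinite, but a finite analogue (our own illustration, with $\omega_1$ replaced by a finite set of vertices and each injection $e_\beta$ by the identity map) can be checked directly in Python: coloring each edge $\{a, b\}$ with $a < b$ by $e_b(a) = a$ makes every color class a star, hence cycle-free.

```python
from itertools import combinations

def color(a, b):
    # Finite analogue of n_{alpha,beta} = e_beta(alpha), with e_b(a) = a,
    # an injection from {0, ..., b-1} into the natural numbers.
    return min(a, b)

def is_acyclic(n_vertices, edges):
    # Union-find cycle check: an edge inside one component closes a cycle.
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

N = 8
classes = {}
for a, b in combinations(range(N), 2):
    classes.setdefault(color(a, b), []).append((a, b))
assert all(is_acyclic(N, edges) for edges in classes.values())
```

Each class $E_n = \{\{n, b\} \mid b > n\}$ is a star centered at $n$, so the assertion succeeds; the proof below shows that the injectivity of the maps $e_\beta$ forces the same conclusion in the uncountable setting.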

We claim that each $E_n$ is cycle-free. To prove this, suppose, for the sake of contradiction, that $k,n \in \mathbb{N}$ and $E_n$ has a $k$-cycle. This means there are distinct ordinals $\alpha_1, \alpha_2, \ldots, \alpha_k$ such that $\{r_{\alpha_1}, r_{\alpha_2}\}, \{r_{\alpha_2}, r_{\alpha_3}\}, \ldots, \{r_{\alpha_{k-1}}, r_{\alpha_k}\}, \{r_{\alpha_k}, r_{\alpha_1}\}$ are all in $E_n$.

Now fix $\ell \leq k$ such that $\alpha_\ell$ is the maximum of the set $\{\alpha_1, \ldots, \alpha_k\}$. Without loss of generality, let us suppose that $1 < \ell < k$ (if this is not the case, simply shift the numbering of the cycle). Then we have $\{r_{\alpha_{\ell - 1}}, r_{\alpha_\ell}\}, \{r_{\alpha_\ell}, r_{\alpha_{\ell + 1}}\} \in E_n$. Since $\alpha_\ell > \alpha_{\ell - 1}, \alpha_{\ell + 1}$, our definition of $E_n$ implies that $e_{\alpha_\ell}(\alpha_{\ell - 1}) = n = e_{\alpha_\ell}(\alpha_{\ell + 1})$, contradicting the assumption that $e_{\alpha_\ell}$ is one-to-one and finishing the proof of the implication $1. \Rightarrow 2.$

To prove that $2.$ implies $1.$, we will actually prove that the negation of $1.$ implies the negation of $2.$. So, suppose that the Continuum Hypothesis fails, i.e., $|\mathbb{R}| \geq \aleph_2$. Let $\langle r_\alpha \mid \alpha < |\mathbb{R}|\rangle$ be an enumeration of $\mathbb{R}$, and suppose $\langle E_n \mid n \in \mathbb{N} \rangle$ is an edge-decomposition of the complete graph on $\mathbb{R}$. This means that, for all pairs of ordinals $\alpha < \beta$ smaller than $|\mathbb{R}|$, there is a unique natural number $n_{\alpha, \beta}$ such that $\{r_\alpha, r_\beta\} \in E_{n_{\alpha, \beta}}$. We will prove that there is a natural number $n$ such that $E_n$ contains a 4-cycle.

Let $X$ be the set of ordinals that are bigger than $\omega_1$ but less than $|\mathbb{R}|$. Clearly, $|X| = |\mathbb{R}| \geq \aleph_2$. For each $\beta \in X$ and each $n < \omega$, let $A_{\beta, n}$ be the set of ordinals $\alpha$ less than $\omega_1$ such that $n_{\alpha, \beta} = n$. Then $\bigcup_{n \in \mathbb{N}} A_{\beta, n} = \omega_1$, so, since $\mathbb{N}$ is countable and $\omega_1$ is uncountable, there must be some natural number $n$ such that $A_{\beta, n}$ is uncountable. Let $n_\beta$ be such an $n$.

For $n \in \mathbb{N}$, let $X_n = \{\beta \in X \mid n_\beta = n\}$. Then $\bigcup_{n \in \mathbb{N}}X_n = X$. Since $|X| \geq \aleph_2$, this means that there must be some natural number $n^*$ such that $|X_{n^*}| \geq \aleph_2$.

Now, for each $\beta \in X_{n^*}$, let $\alpha_{\beta, 0}$ and $\alpha_{\beta, 1}$ be the least two elements of $A_{\beta, n^*}$. Since there are only $\aleph_1$ different choices for $\alpha_{\beta, 0}$ and $\alpha_{\beta, 1}$, and since $|X_{n^*}| \geq \aleph_2$, it follows that we can find $\beta_0 < \beta_1$ in $X_{n^*}$ and $\alpha_0 < \alpha_1 < \omega_1$ such that $\alpha_{\beta_0, 0} = \alpha_0 = \alpha_{\beta_1, 0}$ and $\alpha_{\beta_0, 1} = \alpha_1 = \alpha_{\beta_1, 1}$. It follows that $\{r_{\alpha_0}, r_{\beta_0}\}, \{r_{\beta_0}, r_{\alpha_1}\}, \{r_{\alpha_1}, r_{\beta_1}\}, \{r_{\beta_1}, r_{\alpha_0}\} \in E_{n^*}$. In other words, $(r_{\alpha_0}, r_{\beta_0}, r_{\alpha_1}, r_{\beta_1})$ forms a 4-cycle in $E_{n^*}$. This completes the proof of the Theorem.
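The pigeonhole argument at the heart of this direction can also be illustrated on finite graphs. The following Python sketch (our own, with “uncountable” relaxed to “has at least two elements”) searches an edge-colored complete graph for a monochromatic 4-cycle exactly as in the proof: for each vertex $\beta$ and color $n$ it records the two least elements of $A_{\beta, n}$, and a repeated record closes a 4-cycle.

```python
from collections import defaultdict

def find_mono_4cycle(vertices, color):
    """Mimic the proof: for each vertex b and color n, record the two
    least a < b with color(a, b) == n; if two vertices b0 < b1 produce
    the same record (n, a0, a1), then (a0, b0, a1, b1) is a monochromatic
    4-cycle in color class n."""
    seen = {}  # (n, a0, a1) -> earlier vertex with that record
    for b in vertices:
        classes = defaultdict(list)
        for a in vertices:
            if a < b:
                classes[color(a, b)].append(a)  # appended in increasing order
        for n, A in sorted(classes.items()):
            if len(A) >= 2:
                key = (n, A[0], A[1])
                if key in seen:
                    return n, (A[0], seen[key], A[1], b)
                seen[key] = b
    return None
```

With the star coloring from the first direction (every edge $\{a, b\}$ colored by $\min(a, b)$), each color class of smaller neighbors is a singleton and the search correctly finds nothing; with a single color and at least four vertices, it succeeds immediately.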

Notes: The proofs given here do not exactly match those given by Erdős and collaborators; rather, they are simplifications made possible by the increased sophistication in dealing with ordinals and cardinals that set theorists have developed over the intervening decades. The implication from 1. to 2. is due to Erdős and Kakutani and can be found in this paper from 1943. The implication from 2. to 1. was stated as an unsolved problem in the Erdős-Kakutani paper. It was proven (in a much more general context) in a landmark paper by Erdős and Hajnal from 1966.