# Life on the Poincaré Disk

Just at this time I left Caen, where I was then living, to go on a geological excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.

-Henri Poincaré, Science and Method

You’re out for a walk one day, contemplating the world, and you suddenly have an out-of-body experience, your perspective floating high above your corporeal self. As you rise, everything seems perfectly normal at first, but, when you reach a sufficient altitude, you notice something strange: your body appears to be at the center of a perfect circle, beyond which there is simply…nothing!

You watch yourself walk towards the edge of the circle. It initially looks like you will reach the edge in a surprisingly short amount of time, but, as you continue watching, you notice yourself getting smaller and slowing down. By the time you are halfway to the edge, you are moving at only 3/4 of your original speed. When you are 3/4 of the way to the edge, you are moving at only 7/16 of your original speed. Maybe you will never reach the edge after all? What is happening?
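For the curious, the slowdown follows a precise law: in the Poincaré metric on the disk, Euclidean lengths at radius $r$ are stretched by a factor of $2/(1-r^2)$, so a walker moving at constant hyperbolic speed appears, to an outside Euclidean observer, to move at $(1-r^2)$ times their central speed. A quick sketch in Python (the scaling law is the standard one for this model, not stated explicitly in the story):

```python
def apparent_speed_factor(r):
    """Euclidean (outside-observer) speed of a walker moving at constant
    hyperbolic speed, relative to their speed at the center (r = 0).
    Follows from the Poincare metric ds = 2|dx| / (1 - r^2)."""
    return 1 - r * r

# The figures from the story:
assert apparent_speed_factor(0.5) == 0.75      # halfway out: 3/4 speed
assert apparent_speed_factor(0.75) == 7 / 16   # three-quarters out: 7/16 speed
```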

At some point, you see your physical self notice some friends, standing some distance away in the circle. You wave to one another, and your friends beckon you over. You start walking toward them, but, strangely, you walk in what looks not to be a straight line but rather an arc, curving in towards the center of the circle before curving outward again to meet your friends. And, equally curiously, your friends don’t appear to be surprised or annoyed by your seemingly inefficient route. You puzzle things over for a few seconds before having a moment of insight. ‘Oh!’ you think. ‘My physical body is living on a Poincaré disk model for hyperbolic geometry, which my mind has somehow transcended during this out-of-body experience. Of course!’

The Poincaré disk model, which was actually put forth by Eugenio Beltrami, is one of the first and, to my mind, most elegant models of non-Euclidean geometry. Recall from our previous post that a Euclidean geometry is a geometry satisfying Euclid’s five postulates. The first four of these postulates are simple and self-evident. The fifth, known as the Parallel Postulate (recall also that two lines are parallel if they do not intersect), is unsatisfyingly complex and non-immediate. To refresh our memories, here is an equivalent form of the Parallel Postulate, known as Playfair’s Axiom:

Given any line $\ell$ and any point $P$ not on $\ell$, there is exactly one line through $P$ that is parallel to $\ell$.

A non-Euclidean geometry is a geometry that satisfies the first four postulates of Euclid but fails to satisfy the Parallel Postulate. Non-Euclidean geometries began to be seriously investigated in the 19th century; Beltrami, working in the context of Euclidean geometry, was the first to actually produce models of non-Euclidean geometry, thus proving that, supposing Euclidean geometry is consistent, then so is non-Euclidean geometry.

The Poincaré disk model, one of Beltrami’s models, is a model for hyperbolic geometry, in which the Parallel Postulate is replaced by the following statement:

Given any line $\ell$ and any point $P$ not on $\ell$, there are at least two distinct lines through $P$ that are parallel to $\ell$.

Points and lines are the basic objects of geometry, so, to describe the Poincaré disk model, we must first describe the set of points and lines of the model. The set of points of the model is the set of points strictly inside a given circle. For concreteness, let us suppose we are working on the Cartesian plane, and let us take the unit circle, i.e., the circle of radius one, centered at the origin, as our given circle. The points in the Poincaré disk model are then the points in the plane whose distances from the origin are strictly less than one.

Lines in the Poincaré disk model (which we will sometimes call hyperbolic lines) are arcs formed by taking one of the following types of objects and intersecting it with the unit disk:

1. Straight lines (in the Euclidean sense) through the center of the circle.
2. Circles (in the Euclidean sense) that are perpendicular to the unit circle.

(These can, of course, be seen as two instances of the same thing, if one takes the viewpoint that, in Euclidean space, straight lines are just circles of infinite radius.)

It’s already pretty easy to see that this geometry satisfies our hyperbolic replacement of the Parallel Postulate. In fact, given a line $\ell$ and a point $P$ not on $\ell,$ there are infinitely many lines through $P$ parallel to $\ell$. Here’s an illustration of a typical case, with three parallel lines drawn:

We’re not quite able right now to prove that the disk model satisfies the first four of Euclid’s postulates, in part because we haven’t yet specified what it means for two line segments in the model to be congruent (we don’t, for example, have a notion of distance in our model yet). We’ll get to this in just a minute, but let us first show that our model satisfies the first postulate: Given any two distinct points, there is a line containing both of them.

To this end, let $A$ and $B$ be two points in the disk. If the (Euclidean) line that contains $A$ and $B$ passes through the center of the disk, then its intersection with the disk is also a line in the disk model, and we are done. Otherwise, the (Euclidean) line that contains $A$ and $B$ does not pass through the center of the disk. In this case, we use the magic of circle inversion, which we saw in a previous post. Let $A'$ be the result of inverting $A$ across the unit circle. Since the line through $A$ and $B$ misses the center, while $A$ and $A'$ lie on a common ray through the center, the points $A$, $A'$, and $B$ are distinct and non-collinear, so there is a unique circle (call it $\gamma$) containing all three. Since $\gamma$ passes through both $A$ and its inverse $A'$, it is perpendicular to the unit circle. Therefore, its intersection with the unit disk is a line in the disk model containing both $A$ and $B.$ Here’s a picture:
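The construction can be carried out numerically. In the sketch below (the coordinates are arbitrary choices of mine, not from the post), we invert $A$ across the unit circle, find the circumcenter of $A$, $A'$, $B$, and confirm the orthogonality condition: a circle of radius $\rho$ whose center is at distance $d$ from the origin is perpendicular to the unit circle exactly when $d^2 = \rho^2 + 1$.

```python
import math

def invert(x, y):
    """Invert (x, y) across the unit circle: same ray, distance 1/r."""
    r2 = x * x + y * y
    return x / r2, y / r2

def circumcenter(a, b, c):
    """Center of the unique circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

A, B = (0.3, 0.2), (-0.1, 0.5)          # two points inside the disk
A_prime = invert(*A)                     # A inverted across the unit circle
ox, oy = circumcenter(A, A_prime, B)     # center of gamma
rho = math.hypot(ox - A[0], oy - A[1])   # radius of gamma

# gamma is perpendicular to the unit circle: d^2 = rho^2 + 1
assert abs((ox * ox + oy * oy) - (rho * rho + 1)) < 1e-9
```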

We turn now to distance in the Poincaré disk model. And here, for the sake of brevity, I’m not even going to try to explain why things are the way they are but will just give you a formula. Given two points $A$ and $B$ in the disk, consider the hyperbolic line containing them, and let $P$ and $Q$ be the points where this line meets the boundary circle (with $P$ closer to $A$ and $Q$ closer to $B$). Then the hyperbolic distance between $A$ and $B$ is given by:

$d(A,B) = \ln\left(\frac{|PB|\cdot|AQ|}{|PA|\cdot|BQ|}\right)$.

This is likely inscrutable right now. That’s fine. Let’s think about what it means for this to be the correct notion of distance, though. For one thing, it means that, given two points in the disk model, the shortest path between them is not, in general, the straight Euclidean line that connects them, but rather the hyperbolic line that connects them. This explains your body’s behavior in the story at the start of this post. When you were walking over to your friends, what appeared to your mind (which was outside the disk, in the Euclidean realm) as a curved arc, and therefore an inefficient path, was in fact a hyperbolic line and, because your body was inside the hyperbolic disk, the shortest path between you and your friends.

This notion of distance also means that distances inside the disk which appear equal to an external Euclidean observer in fact get longer and longer the closer they are to the edge of the disk. This is also consistent with the observations at the beginning of the post: as your body got further toward the edge of the disk, it appeared from an external viewpoint to be moving more and more slowly. From a viewpoint inside the disk, though, it was moving at constant speed and would never reach the edge of the disk, which is infinitely far away. The disk appears bounded from the external Euclidean view, but from within it is entirely unbounded and limitless.
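Both effects can be checked numerically. For two points at signed positions $a < b$ on a common diameter, the boundary points are $P = -1$ and $Q = 1$, and the distance formula above reduces to $d = \ln\frac{(1+b)(1-a)}{(1+a)(1-b)}$. A short Python check (the sample coordinates are my own):

```python
import math

def diameter_distance(a, b):
    """Hyperbolic distance between points at signed positions a < b
    on a diameter of the unit disk (so P = -1 and Q = 1 in the formula)."""
    return math.log((1 + b) * (1 - a) / ((1 + a) * (1 - b)))

# Sanity check: d(0, 1/2) = ln 3.
assert abs(diameter_distance(0, 0.5) - math.log(3)) < 1e-12

# The edge is infinitely far away: d(0, r) grows without bound as r -> 1.
assert diameter_distance(0, 0.9) < diameter_distance(0, 0.99) < diameter_distance(0, 0.999)

# Segments of equal Euclidean length are hyperbolically longer near the edge.
assert diameter_distance(0.0, 0.1) < diameter_distance(0.89, 0.99)
```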

Let’s close by looking at two familiar shapes, interpreted in the hyperbolic disk. First, circles. Recall that a circle is simply the set of points that are some fixed distance away from a given center. Now, what happens when we interpret this definition inside the hyperbolic disk? Perhaps somewhat surprisingly, we get Euclidean circles! (Sort of.) To be more precise, hyperbolic circles in the Poincaré disk model are precisely the Euclidean circles that lie entirely within the disk. (I’m not going to go through the tedious calculations to prove this; I’ll leave that up to you…) Beware, though! The hyperbolic center of the circle is generally different from the Euclidean center. (This should make sense if you think about our distance definition. The hyperbolic center will be further toward the edge of the disk than the Euclidean center, coinciding only if the Euclidean center of the circle is in fact the center of the hyperbolic disk.)
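For a circle whose Euclidean center lies on a diameter, the offset between the two centers is easy to compute. Along a diameter, the hyperbolic position of a point at Euclidean position $x$ is $2\operatorname{artanh}(x)$, so the hyperbolic center is the artanh-midpoint of the circle’s two diameter endpoints. A sketch (the particular circle is a hypothetical example of mine):

```python
import math

def hyperbolic_center_on_diameter(c, rho):
    """Hyperbolic center of the Euclidean circle with Euclidean center (c, 0)
    and radius rho, assuming the circle lies entirely inside the unit disk.
    Along a diameter, hyperbolic position is 2*artanh(x), so the hyperbolic
    center is the artanh-midpoint of the two diameter endpoints."""
    p, q = c - rho, c + rho
    return math.tanh((math.atanh(p) + math.atanh(q)) / 2)

c, rho = 0.5, 0.2
h = hyperbolic_center_on_diameter(c, rho)

# The hyperbolic center sits further toward the edge than the Euclidean one...
assert h > c
# ...and the two coincide when the Euclidean center is the disk's center.
assert abs(hyperbolic_center_on_diameter(0.0, 0.3)) < 1e-12
```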

Next, triangles. A triangle is, of course, a polygon with three sides. This definition works perfectly fine in hyperbolic geometry; we simply require that our sides are hyperbolic line segments rather than Euclidean line segments. If we assume the first four of Euclid’s postulates, then the Parallel Postulate is actually equivalent to the statement that the sum of the interior angles of a triangle is 180 degrees. In the Poincaré disk model (and, in fact, in any model of hyperbolic geometry) all triangles have angles that sum to less than 180 degrees. This should be evident if we look at a typical triangle:

Things become interesting when you start to ask how much less than 180 degrees the angles of a hyperbolic triangle sum to. The remarkable fact is that the angle sum of a hyperbolic triangle depends entirely on its (hyperbolic) area! The smaller a triangle is, the larger the sum of its interior angles: as triangles get smaller and smaller, approaching a single point, the sum of their angles approaches 180 degrees from below. Correspondingly, as triangles get larger and larger, the sum of their angles approaches 0 degrees. In fact, if we consider an “ideal triangle”, in which the three vertices are in fact points on the bounding circle (and thus not real points in the disk model), then the sum of the angles of this “triangle” is actually 0 degrees!
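The exact relationship, which I won’t prove here, is the hyperbolic case of the Gauss–Bonnet theorem: for a triangle with interior angles $\alpha$, $\beta$, $\gamma$ measured in radians (and curvature $-1$),

$\mathrm{Area} = \pi - (\alpha + \beta + \gamma)$.

In particular, no hyperbolic triangle has area greater than $\pi$, and an ideal triangle, with all three angles equal to $0$, attains exactly this maximum.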

A consequence of this is the fact that, in the Poincaré disk model, if two triangles are similar, then they are in fact congruent!

This leads us to our final topic: one of the perks of living in a Poincaré disk model. Perhaps the most frequent complaint I hear from people living on a Euclidean plane is that there aren’t enough ways to tile the plane with triangles. Countless people come up to me and say, “Chris, I want to tile the plane with triangles, and I want this tiling to have the following two pleasing properties:

1. All of the triangles are congruent, they don’t overlap, and they fill the entire plane.
2. At every vertex of the tiling, all angles meeting that vertex are the same.

But there are only four essentially different ways of doing this, and I’m tired of all of them! What should I do?”

(Exercise for the reader: Find all four such tilings!)

It just so happens that I have a simple answer for these people: “Move to a Poincaré disk model, where there are infinitely many tilings with these properties!” Here are just a few (all by Tamfang and in the public domain):

I’ll leave you with that! Hyperbolic geometry is fascinating, and I encourage you to investigate further on your own. The previously mentioned Geometry: Euclid and Beyond, by Hartshorne, is a nice place to start.

This also wraps up (for now, at least) a couple of multi-part investigations here at Point at Infinity: a look at the interesting geometry of circles, which started in our post on circle inversion, and a look at various notions of independence in mathematics, the other posts being here and here. Join us next time for something new!

Cover Image: M. C. Escher, Circle Limit III

# Parallel Lines

Detest it as lewd intercourse, it can deprive you of all your leisure, your health, your rest, and the whole happiness of your life.

Do not try the parallels in that way: I know that way all along. I have measured that bottomless night, and all the light and all the joy of my life went out there.

-Letters from Farkas Bolyai to his son, János, attempting to dissuade him from his investigations of the Parallel Postulate.

In our previous post, cataloging various notions of mathematical independence, we introduced the idea of logical independence. Briefly, given a consistent set of axioms, $T$, a sentence $\varphi$ is independent from $T$ if it can be neither proven nor disproven from the sentences in $T$. Today, we discuss one of the most prominent and interesting instances of logical independence: Euclid’s Parallel Postulate.

Among the most famous sets of axioms (top 5, certainly) are Euclid’s postulates, five statements underpinning (together with 23 definitions and five other statements putting forth the properties of equality) the mathematical system of Euclidean geometry set forth in the Elements and still taught in high school classrooms to this day. (We should note here that, from a modern viewpoint, Euclid’s proofs do not always strictly conform to the standards of mathematical rigor, and some of his results rely on methods or assumptions not justified by his five postulates. This has been fixed, for example by Hilbert, who gave a different set of axioms for Euclidean geometry in 1899. Now that we have noted this, we will proceed to forget it for the remainder of the post.)

Euclid’s first four postulates are quite elegant in their simplicity and self-evidence. Reformulated in modern language, they are roughly as follows:

1. Given any two points, there is a unique line segment connecting them.
2. Given any line segment, there is a unique line (unbounded in both directions) containing it.
3. Given any point $P$ and any radius $r$, there is a unique circle of radius $r$ centered at $P$.
4. All right angles are congruent.

The fifth postulate, however, which is known as the Parallel Postulate, is, quite unsatisfyingly, markedly more complicated and less self-evident:

5. If two lines intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines must intersect one another on that side.

A picture might help illustrate this postulate:

This postulate doesn’t seem to be explicitly about parallel lines, so the reader may be wondering why it is often called the Parallel Postulate. The reason becomes evident, though, when considering the following statement and learning that, in the context of the other four postulates, it is in fact equivalent to the Parallel Postulate:

1. Given any line $\ell$ and any point $P$ not on $\ell$, there is exactly one line through $P$ that is parallel to $\ell$.

(Recall that two lines are parallel if they do not intersect.) This reformulation of the Parallel Postulate is often named Playfair’s Axiom, after the 18th-century Scottish mathematician John Playfair, though it was stated already by Proclus in the 5th century.

The Parallel Postulate was considered undesirably unwieldy and less satisfactory than the other four postulates, even by Euclid himself, who made a point of proving the first 28 results of the Elements without recourse to the Parallel Postulate. The general opinion among mathematicians for the next two millennia was that the Parallel Postulate should not be an axiom but rather a theorem; it should be possible to prove it using just the other four postulates.

Many attempts were made to prove the Parallel Postulate, and many claimed success at this task. Errors were then inevitably discovered by later mathematicians, many of whom subsequently put forth false proofs of their own. The aforementioned Proclus, for example, after pointing out flaws in a purported proof of Ptolemy, gives his own proof, which suffers from two instructive flaws. The first is relatively minor: Proclus assumes a consequence of Archimedes’ Axiom, which essentially states that, given any two line segments, there is a natural number $n$ such that $n$ times the length of the shorter line segment will exceed the length of the longer. (We encountered Archimedes’ Axiom in a previous post, about infinitesimals, which the reader is invited to revisit.) Archimedes’ Axiom seems like an entirely reasonable axiom to assume, but it notably does not follow from Euclid’s postulates.

Proclus’ more serious error, though, is that he makes the assumption that any two parallel lines have a constant distance between them. But this does not follow from the first four postulates. In fact, the statement, “The set of points equidistant from a straight line on one side of it form a straight line,” known as Clavius’ Axiom, is, in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. Proclus’ proof is therefore just a sophisticated instance of begging the question.

In the course of the coming centuries’ attempts to prove the Parallel Postulate, a number of other axioms were unearthed that are, at least in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. In addition to Playfair’s Axiom and Clavius’ Axiom, these include the following:

• (Clairaut) Rectangles exist. (A rectangle, of course, being a quadrilateral with four right angles.)
• (Legendre) Given an angle $\alpha$ and a point $P$ in the interior of the angle, there is a line through $P$ that meets both sides of the angle.
• (Wallis) Given any triangle, there are similar triangles of arbitrarily large size.
• (Farkas Bolyai) Given any three points, not all lying on the same line, there is a circle passing through all three points.

A key line of investigation into the Parallel Postulate was carried out, probably independently, by Omar Khayyam, an 11th-century Persian mathematician, astronomer, and poet, and by Giovanni Gerolamo Saccheri, an 18th-century Italian Jesuit priest and mathematician. For concreteness, let us consider Saccheri’s account, which has the wonderful title, “Euclid Freed from Every Flaw.”

Saccheri and Khayyam were, similarly to their predecessors, attempting to prove the Parallel Postulate. Their method of proof was contradiction: assume that the Parallel Postulate is false and derive a false statement from it. To do this, they considered figures that came to be known as Khayyam-Saccheri quadrilaterals.

To form a Khayyam-Saccheri quadrilateral, take a line segment (say, BC). Take two line segments of equal length and form perpendiculars, in the same direction, at B and C (forming, say, AB and DC). Now connect the ends of those two line segments with a line segment (AD) to form a quadrilateral. A picture is given below.

By construction, the angles at B and C are right angles, but the angles at A and D are unclear. Saccheri proves that these two angles are equal. He also proves that, if these angles are obtuse, then they are obtuse for every such quadrilateral; if they are right, then they are right for every such quadrilateral; and, if they are acute, then they are acute for every such quadrilateral. This then naturally divides geometries into three categories: those satisfying the Obtuse Hypothesis, those satisfying the Right Hypothesis, and those satisfying the Acute Hypothesis. (These types of geometries subsequently became known as semielliptic, semieuclidean, and semihyperbolic, respectively.)

At this point, Saccheri attempts to prove that the Obtuse Hypothesis and the Acute Hypothesis both lead to contradiction. (Note that a geometry satisfying all five of Euclid’s postulates must satisfy the Right Hypothesis. The converse is not true, so even a successful refutation of the Obtuse and Acute Hypotheses would not be enough to establish the Parallel Postulate.) Saccheri is able to prove (in the presence of Archimedes’ Axiom) that the Obtuse Hypothesis leads to the conclusion that straight lines are finite, thus contradicting the second postulate. He is unable to obtain a logical contradiction from the Acute Hypothesis, though. Instead, he derives a number of counter-intuitive statements from it and then concludes that the Acute Hypothesis must be false because it is “repugnant to the nature of a straight line.”

The next big steps towards the establishment of the independence of the Parallel Postulate were made by Nikolai Lobachevsky and János Bolyai (who fortunately did not heed his father’s letters quoted at the top of this post), 19th-century mathematicians from Russia and Hungary, respectively. (Similar work was probably done by Gauss, as well, though it was never published.) Their work entailed a crucial shift in perspective – rather than attempt to prove the Parallel Postulate from the others, the mathematicians seriously considered the possibility that it is not provable and thought of non-Euclidean geometries (i.e., those failing to satisfy the Parallel Postulate) as legitimate objects of mathematical study in their own right. In particular, they were interested in hyperbolic geometry, in which the Parallel Postulate is replaced by the assertion that, given any line $\ell$ and any point $P$ not on the line, there are at least two distinct lines passing through $P$ and parallel to $\ell$. (Not surprisingly, considering the nomenclature, hyperbolic geometries are semihyperbolic, i.e., they satisfy the Acute Hypothesis.) This viewpoint was vindicated when, in 1868, Eugenio Beltrami produced a model of hyperbolic geometry. This shows that, as long as Euclidean geometry is consistent, then the Parallel Postulate is independent of the other four postulates: all five postulates are true, for example, in the Cartesian plane, while the first four are true and the Parallel Postulate is false in any model of hyperbolic geometry.

A number of other models for hyperbolic geometry are now known. In our next post, we will look at a particularly elegant one: the Poincaré disk model.

Cover Image: Michael Tompsett, Parallel Lines

For more information on this and many other geometric topics, I highly recommend Robin Hartshorne’s excellent book, Geometry: Euclid and Beyond.

# Independence

I am no bird; and no net ensnares me: I am a free human being with an independent will.

-Charlotte Brontë, Jane Eyre

An object is independent from others if it is, in some meaningful way, outside of their area of influence. If it has some meaningful measure of self-determination. Independence is important. Nations have gone to war to obtain independence from other nations or empires. Adolescents go through rebellious periods, yearning for independence from parents or other authority figures. Though perhaps less immediately exciting, notions of independence permeate mathematics, as well. Viewed in the right light, they can even be seen as direct analogues of the more familiar notions considered above: in various mathematical structures, there is often a natural way of defining an area of influence of an element or subset of the structure. A different element is then independent of this element or subset if it is outside its area of influence. Such notions have proven to be of central importance in a wide variety of mathematical contexts. Today, in anticipation of some deeper dives in future posts, we take a brief look at a few prominent examples.

Graph Independence: Recall that a graph is a pair $G = (V,E)$, where $V$ is a set of vertices and $E$ is a set of edges between these vertices. If $u \in V$ is a vertex, then its neighborhood is the set of all vertices that are connected to $u$ by an edge of the graph, i.e., $\{v \in V \mid \{u,v\} \in E\}$. One could naturally consider a vertex’s area of influence in the graph $G$ to consist of the vertex itself together with all of its neighbors. With this viewpoint, we can say that a vertex $u \in V$ is independent from a subset $A \subseteq V$ if, for all $v \in A$, $u$ is not in the neighborhood of $v$, i.e., $u$ is not in the area of influence of any element of $A$. Similarly, we may say that a set $A \subseteq V$ of vertices is independent if each element of $A$ is independent from the rest of the elements of $A$, i.e., if each $v \in A$ is independent from $A \setminus \{v\}$.
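The definition translates directly into code. A minimal sketch in Python (the 4-cycle below is my own toy example):

```python
def neighborhood(edges, u):
    """All vertices sharing an edge with u."""
    return {w for e in edges for w in e if u in e and w != u}

def is_independent(edges, A):
    """True if no two distinct vertices of A are joined by an edge."""
    return all(v not in neighborhood(edges, u) for u in A for v in A)

# A 4-cycle: 0 - 1 - 2 - 3 - 0.
edges = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
assert is_independent(edges, {0, 2})      # opposite corners: independent
assert not is_independent(edges, {0, 1})  # adjacent vertices: not independent
```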

Fun Fact: In computer science, there are a number of interesting computational problems involving independent sets in graphs. These problems are often quite difficult; for example, the maximum independent set problem, in which one is given a graph and must produce an independent set of maximum size, is known to be NP-hard.

Linear Independence: Let $n$ be a natural number, and consider the real $n$-dimensional Euclidean space $\mathbb{R}^n$, which consists of all $n$-tuples of real numbers. Given $\vec{u} = (u_1,\ldots,u_n)$ and $\vec{v} = (v_1, \ldots, v_n)$ in $\mathbb{R}^n$ and a real number $r \in \mathbb{R}$, we can define the elements $\vec{u} + \vec{v}$ and $r\vec{u}$ in $\mathbb{R}^n$ as follows:

$\vec{u} + \vec{v} = (u_1 + v_1, \ldots, u_n + v_n)$

$r\vec{u} = (ru_1, \ldots, ru_n).$

(In this way, $\mathbb{R}^n$ becomes what is known as a vector space over $\mathbb{R}$.) Given a subset $A \subseteq \mathbb{R}^n$, the natural way to think about its linear area of influence is as $\mathrm{span}(A)$, the set of all $n$-tuples of the form

$r_1\vec{v}_1 + \ldots + r_k\vec{v}_k,$

where $k$ is a natural number, $r_1, \ldots, r_k$ are real numbers, and $\vec{v}_1, \ldots, \vec{v}_k$ are elements of $A$.

In this way, we say that an $n$-tuple $\vec{u} \in \mathbb{R}^n$ is linearly independent from a set $A \subseteq \mathbb{R}^n$ if $\vec{u}$ is not in $\mathrm{span}(A)$. A set $A$ is linearly independent if each element $\vec{u}$ of $A$ is not in $\mathrm{span}(A \setminus \{\vec{u}\})$, i.e., if each element of $A$ is linearly independent from the set formed by removing that element from $A$. It is a nice exercise to show that every linearly independent subset of $\mathbb{R}^n$ has size at most $n$ and is maximal if and only if it has size equal to $n$.
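Linear independence can be tested mechanically: a set of vectors is linearly independent exactly when the matrix they form has full row rank. A self-contained sketch (pure Python, using Gaussian elimination; the example vectors are my own):

```python
def rank(rows, eps=1e-12):
    """Row rank of a matrix, by Gaussian elimination with partial pivoting."""
    m = [list(r) for r in rows]
    ncols = len(m[0]) if m else 0
    rk, col = 0, 0
    while rk < len(m) and col < ncols:
        pivot = max(range(rk, len(m)), key=lambda i: abs(m[i][col]))
        if abs(m[pivot][col]) < eps:
            col += 1          # no pivot in this column; move on
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
        col += 1
    return rk

def is_linearly_independent(vectors):
    return rank(vectors) == len(vectors)

assert is_linearly_independent([(1, 0, 0), (0, 1, 0)])
assert not is_linearly_independent([(1, 2), (2, 4)])          # one is a multiple of the other
assert not is_linearly_independent([(1, 0), (0, 1), (1, 1)])  # 3 vectors in R^2
```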

Fun Fact: Stay tuned until the end of the post!

Thou of an independent mind,
With soul resolv’d, with soul resign’d;
Prepar’d Power’s proudest frown to brave,
Who wilt not be, nor have a slave;
Virtue alone who dost revere,
Thy own reproach alone dost fear—
Approach this shrine, and worship here.

-Robert Burns, “Inscription for an Altar of Independence”

Algebraic Independence: If $A$ is a set of real numbers, then one can say that its algebraic area of influence (over $\mathbb{Q}$, the set of rational numbers) is the set of all real roots of polynomial equations with coefficients in $A \cup \mathbb{Q}$, i.e., the set of all real numbers that are solutions to equations of the form:

$r_kx^k + \ldots + r_1x + r_0 = 0,$

where $k$ is a natural number and $r_0, \ldots, r_k$ are elements of $A \cup \mathbb{Q}$, not all equal to zero. With this definition, a real number $s$ is algebraically independent (over $\mathbb{Q}$) from a set $A \subseteq \mathbb{R}$ if $s$ is not the root of any such polynomial equation with coefficients in $A \cup \mathbb{Q}$. A set $A \subseteq \mathbb{R}$ is algebraically independent (over $\mathbb{Q}$) if each element $s \in A$ is algebraically independent from $A \setminus \{s\}$.

Fun Fact: Note that a 1-element set $\{s\}$ is algebraically independent over $\mathbb{Q}$ if and only if $s$ is transcendental, i.e., is not the root of any nonzero polynomial with rational coefficients. $\pi$ and $e$ are famously both transcendental numbers, yet it is still unknown whether the 2-element set $\{\pi, e\}$ is algebraically independent over $\mathbb{Q}$. It is not even known if $\pi + e$ is irrational!

Logical Independence: Let $T$ be a consistent set of axioms, i.e., a set of sentences from which one cannot derive a contradiction. We can say that the logical area of influence of $T$ is the set of sentences that can be proven from $T$, together with their negations. In other words, it is the set of sentences which, if one takes the sentences in $T$ as axioms, can be proven either true or false. A sentence $\phi$ is then logically independent from $T$ if neither $\phi$ nor its negation can be proven from the sentences in $T$.

Logical independence is naturally of great importance in the study of the foundations of mathematics. Much of modern set theory, and much of my personal mathematical research, involves statements that are independent from the Zermelo-Fraenkel Axioms with Choice (ZFC), which is a prominent set of axioms for set theory and indeed for all of mathematics. These are statements, then, that in our predominant mathematical framework can neither be proven true nor proven false. The most well-known of these is the Continuum Hypothesis (CH), which, in one of its formulations, is the statement that there are no infinite cardinalities strictly between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. To prove that CH is independent from ZFC, one both produces a mathematical structure that satisfies ZFC and in which CH is true (which Kurt Gödel did in 1938) and produces a mathematical structure that satisfies ZFC and in which CH is false (which Paul Cohen did in 1963). Since Cohen’s result in 1963, a great number of natural mathematical statements have been proven to be independent from ZFC.

In our next post, we will consider a logical independence phenomenon of a somewhat simpler nature: the independence of Euclid’s parallel postulate from Euclid’s four other axioms for plane geometry, which will lead us to considerations of exotic non-Euclidean geometries.

Fun Fact: In the setting of general vector spaces, which generalize the vector spaces $\mathbb{R}^n$ from the above discussion of linear independence, a basis is a linearly independent set whose span (what we referred to as its linear area of influence) is the entire vector space. A basis for $\mathbb{R}^n$ is thus any linearly independent set of size $n$. Using the Axiom of Choice, one can prove that every vector space has a basis. However, there are models of ZF (i.e., the Zermelo-Fraenkel Axioms without Choice) in which there are (infinite-dimensional) vector spaces without a basis. Thus, the statement, “Every vector space has a basis,” is logically independent from ZF.

Solitude is independence.

-Hermann Hesse, Steppenwolf

# Infinite Acceleration: Risset Rhythms

In our most recent post, we took a look at and a listen to Shepard tones and their cousins, Shepard-Risset glissandos, which are tones or sequences of tones that create the illusion of perpetually rising (or falling) pitch. The illusion is created by overlaying a number of tones, separated by octaves, rising in unison. The volumes gradually increase from low pitch to middle pitch and gradually decrease from middle pitch to high pitch, leading to a fairly seamless continuous tone.

The same idea can be applied, mutatis mutandis, to percussive loops instead of tones, and to speed instead of pitch, thus creating the illusion of a rhythmic track that is perpetually speeding up (or slowing down). (The mechanism is exactly the same as that of the Shepard tone, so rather than provide an explanation here, I will simply refer the reader to the previous post.) Such a rhythm is known as a Risset rhythm.
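The construction can be sketched quantitatively. In the toy model below (the parameter names and window shape are my own choices, not taken from the post's SuperCollider code), each layer's tempo doubles over one cycle while a raised-cosine gain window fades layers in at the slow end and out at the fast end; at the end of a cycle each layer has exactly taken over its neighbor's role, so the loop repeats seamlessly:

```python
import math

def layer_tempo(base_bpm, layer, phase):
    """Tempo of one layer: doubles as phase runs from 0 to 1."""
    return base_bpm * 2 ** (layer + phase)

def layer_gain(layer, phase, n_layers):
    """Raised-cosine window over the layer stack: silent at the slowest
    and fastest extremes, loudest in the middle."""
    x = (layer + phase) / n_layers
    return 0.5 * (1 - math.cos(2 * math.pi * x))

# Seamlessness: at the end of a cycle, layer i has exactly the tempo and
# gain that layer i+1 had at the start, so the wrap-around is inaudible.
n = 4
for i in range(n - 1):
    assert layer_tempo(120, i, 1.0) == layer_tempo(120, i + 1, 0.0)
    assert abs(layer_gain(i, 1.0, n) - layer_gain(i + 1, 0.0, n)) < 1e-12
```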

I coded up some very basic examples in SuperCollider. Here’s an accelerating Risset rhythm:

And a decelerating Risset rhythm:

Here’s a more complex Risset rhythm:

And, finally, a piece of electronic music employing Risset rhythms: “Calculus,” by Stretta.

# Playing Games II: The Rules

In an earlier examination of games, we ran into some trouble when Hypergame, a “game” we defined, led to a contradiction. This ended up being a positive development, as the ideas we developed there led us to a (non-contradictory) proof of Cantor’s Theorem, but it indicates that, if we are going to be serious about our study of games, we need to be more careful about our definitions.

So, what is a game? Here’s what Wittgenstein had to say about the question in his famous development of the notion of language games and family resemblances:

66. Consider for example the proceedings that we call “games.” I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? Don’t say: “There must be something common, or they would not be called ‘games’” but look and see whether there is anything common to all. For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look! Look for example at board-games, with their multifarious relationships. Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball-games, much that is common is retained, but much is lost. Are they all ‘amusing’? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball-games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and crisscrossing: sometimes overall similarities, sometimes similarities of detail.

-Ludwig Wittgenstein, Philosophical Investigations

This is perhaps the correct approach to take when studying the notion of “game” as commonly used in the course of life, but that is not what we are doing here. We want to isolate a concrete mathematical notion of game amenable to rigorous analysis, and for this purpose we must be precise. No doubt there will be things that many people consider games that will be left out of our analysis, and perhaps some of our games would not be recognized as such out in the wild, but this is beside the point.

To narrow the scope of our investigation, let us say more about what type of games we are interested in. First, for simplicity, we are interested in two-player games, in which the players play moves one at a time. We are also (for now, at least) interested in games that necessarily end in a finite number of moves (though, for any particular game, there may be no a priori finite upper bound on the number of moves in a run of that game). Finally, we will be interested in games for which the game must end in victory for one of the players. Our theory can easily be adapted to deal with ties, but this will just unnecessarily complicate things.

One way to think about a move in a game is as a transformation of the current game into a different one. Consider chess (and, just so it satisfies our constraints, suppose that a classical “tie” counts as a win for black). A typical game of chess starts with all of the pieces in their traditional spots (for simplicity, let’s be agnostic about which color moves first). However, we can consider a slightly different game, called chess_1, that has all of the same rules as chess except that white’s king pawn starts on e4, two squares up from its traditional square. This is a perfectly fine game, and white’s opening move of e2-e4 can be seen as a transformation of chess into chess_1.

With this idea in mind, it makes sense to think of a game as two sets of other games: one set is the set of games that one player can transform the game into by making a move, and the other set is the set of games that the other player can transform the game into by making a move. We will refer to our players as Left (L) and Right (R), so a game $G$ can be thought of as a pair $(L | R)$, where $L$ and $R$ are sets of games. This in fact leads to our first rule of games:

First Rule of Games: If $L$ and $R$ are sets of games, then $G = (L | R)$ is a game.

Depending on one’s background assumptions, this rule does not necessarily rule out games with infinite runs, or pathological games like Hypergame. We therefore explicitly forbid these:

Second Rule of Games: There is no infinite sequence $\langle G_i = (L_i | R_i) \mid i \in \mathbb{N} \rangle$ of games such that, for all $i \in \mathbb{N}$, $G_{i+1} \in L_i \cup R_i$.

And that’s it! Now we know what games are…

The skeptics among you may think this is not enough. It may not even be immediately evident that there are any games at all! But there are. Note that the empty set is certainly a set of games (all of its elements are certainly games). Therefore, $G = (\emptyset | \emptyset)$ is a game. It is a boring game in which neither player can make any moves, but it is a game nonetheless. We can now begin to construct more interesting games, like $(\{(\emptyset | \emptyset), (\{(\emptyset | \emptyset)\} | \{(\emptyset | \emptyset)\})\} | \{(\emptyset | \emptyset)\})$, or chess.
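For readers who like to experiment, the two rules translate directly into code. Here is a small Python sketch (an illustration of my own, not any standard library) representing a game as a pair of frozensets of games; the labels `ZERO` and `STAR` are just names chosen here:

```python
def game(left, right):
    """A game (L | R): a pair of frozensets of games.  Frozensets are
    hashable, so games can contain games."""
    return (frozenset(left), frozenset(right))

# The trivial game (∅ | ∅), in which neither player can move.
ZERO = game([], [])

# The game ({(∅|∅)} | {(∅|∅)}): either player can move to the trivial game.
STAR = game([ZERO], [ZERO])

# The more interesting example from the text:
# ({(∅|∅), ({(∅|∅)} | {(∅|∅)})} | {(∅|∅)})
G = game([ZERO, STAR], [ZERO])
```

Note that any game built this way automatically satisfies the Second Rule: a finite nested data structure cannot contain an infinite descending chain of members, so the well-foundedness comes for free.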

There’s one crucial aspect of games we haven’t dealt with yet: who wins? We deal with this in the obvious way. Let us suppose that, in an actual run of a game, the players must alternate moves (though a game by itself does not specify who makes the first move). During a run of a game, a player loses if it is their turn to move and they have no moves to make, e.g., the game has reached a position $(L | R)$, it is R’s turn to move, and $R = \emptyset$.

Let us look now at a simple, illustrative example of a game: Nim. A game of Nim starts with a finite number of piles, each containing a finite number of objects. On a player’s move, they choose one of these piles and remove any non-zero number of objects from that pile. The loser is the first player who is unable to remove any objects.

Let us denote games of Nim by finite arrays of numbers, arranged in increasing order. For example, the game of Nim starting with four piles of, respectively, 1,3,5, and 7 objects will be represented by [1,3,5,7]. The trivial game of Nim, consisting of zero piles, and in which the first player to move automatically loses, will be represented by [0].

Let us see that Nim falls into the game framework that we developed above. The trivial game of Nim is clearly equivalent to the trivial game, $(\emptyset | \emptyset)$. We can now identify other games of Nim as members of our framework by induction on, say, the total number of objects involved in the game at the start. Thus, suppose we are trying to identify [1,3,5,7] as a game and we have already succeeded in identifying all instances of Nim with fewer than 16 objects. What instances of Nim can [1,3,5,7] be transformed into by a single move? Well, a player can remove all of the objects from a pile, resulting in [1,3,5], [1,3,7], [1,5,7], or [3,5,7]. Alternatively, they can remove parts of the 3, 5, or 7 piles, resulting in positions like [1,3,4,5], [1,1,5,7], etc. All of these Nim instances clearly have fewer than 16 objects, so, if we let $X$ denote the set of Nim instances that can result from one move in [1,3,5,7], then we have shown that $X$ is a set of games, in the sense of our formal framework. We can therefore define a game $(X | X)$, which is clearly equivalent to [1,3,5,7].
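This inductive identification can be sketched in a few lines of Python (again a toy illustration of my own, with positions represented as sorted tuples of pile sizes):

```python
def nim_moves(piles):
    """Given a Nim position as a tuple of pile sizes, return the set of
    positions reachable in one move: choose a pile and remove any non-zero
    number of objects from it.  Emptied piles are dropped, and positions
    are kept sorted."""
    piles = tuple(piles)
    moves = set()
    for i, k in enumerate(piles):
        for take in range(1, k + 1):
            rest = piles[:i] + ((k - take,) if k - take > 0 else ()) + piles[i + 1:]
            moves.add(tuple(sorted(rest)))
    return moves
```

For the position [1,3,5,7], every reachable position has fewer than 16 objects, matching the induction in the text; the set $X$ of reachable positions, each already identified as a game, then yields the game $(X | X)$.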

In the next post, we’ll look at strategies for games. When can we say for sure which player wins a game? How can we derive winning strategies for games? And what does it all mean?

Cover image: Paul Cézanne, “The Card Players”

# Playing Games I: Setting Up the Pieces

Life is more fun if you play games.

-Roald Dahl

Combinatory play seems to be the essential feature in productive thought.

-Albert Einstein

Observant readers will have noted the multiple occasions on which games have shown up in our posts here at Point at Infinity. We have examined the paradoxes of Hypergame in pursuit of a proof of Cantor’s Theorem. We have callously decided the fates of prisoners by playing games with hat colors. We have seen mysterious characters engage in a variant of Nim in Last Year at Marienbad. Some may even accuse us of playing a few games ourselves.

There are reasons for this. Games are fun, for one. And, more to the point, games often provide a useful lens through which to view more “serious” topics. So, over the next few weeks, we are going to be taking a deeper look at all kinds of games and the light they can shed on the infinite. We will discover a winning strategy for Marienbad (among other games). We will investigate Conway’s surreal numbers (for real this time) in the context of game values. We will consider the profound and often surprising role infinite games have played in modern set theory, in particular with regard to questions around the strange and counter-intuitive Axiom of Determinacy. We may even venture into muddier philosophical waters to look at Wittgenstein’s language games or James Carse’s ideas about finite and infinite games.

It will be fun, and I hope you will join us. For today, though, just enjoy this video of an instance of Conway’s Game of Life, implemented inside another instance of Conway’s Game of Life:

Cover image: Still from The Seventh Seal.

# Tree decomposition in Budapest

I am spending this week in Budapest in order to participate in the 6th European Set Theory Conference, and I want to take the occasion to present a nice little result of Paul Erdős, one of the great mathematicians of the twentieth century, who was born and spent the first decades of his life in Budapest before spending most of his adult life traveling the world, living and working with a vast network of mathematical collaborators.

Erdős contributed to a huge array of mathematical disciplines, including set theory, my own primary field of specialization and the field from which today’s result is drawn. Like most other Hungarians working in set theory, Erdős’s results in the field have a distinctly combinatorial flavor.

In order to state and prove the result, we need to review a bit of terminology. Recall first that the Continuum Hypothesis is the assertion that the size of the set of all real numbers is $\aleph_1$, i.e., that there are no sizes of infinity strictly between the sizes of the set of natural numbers and the set of real numbers. As we have discussed, the Continuum Hypothesis is independent of the axioms of set theory.

The Continuum Hypothesis can be shown to be equivalent to a surprisingly diverse collection of other mathematical statements. We will be concerned with one of these statements today, coming from the field of graph theory. If you need a brief review of terminology regarding graphs, visit this previous post.

If $Z$ is a set, then we say that the complete graph on $Z$ is the graph whose vertex set is $Z$ and which contains all possible edges between distinct elements of $Z$.

If $G = (V, E)$ is a graph and $k \geq 3$ is a natural number, then a $k$-cycle in $G$ is a sequence of $k$ distinct elements of $V$, $(v_1, v_2, \ldots, v_k)$, such that $\{v_1, v_2\}, \{v_2, v_3\}, \ldots, \{v_{k-1}, v_k\}, \{v_k, v_1\}$ are all in $E$.

A graph without any cycles is called a tree. (Note that many sources require a tree to be connected as well as cycle-free and call a cycle-free graph a forest. This leads to the pleasing definition, “A tree is a connected forest.” We will ignore this distinction here, though.)

A complete graph is very far from being a tree: every possible cycle is there. We will be interested in the question: “How many trees does it take to make a complete graph?”

Let us be more precise. An edge-decomposition of a graph $G = (V,E)$ is a disjoint partition of $E$, i.e., a collection of sets $\{E_i \mid i \in I\}$, indexed by a set $I$, such that $E_i \cap E_j = \emptyset$ for distinct elements $i,j \in I$ and $\bigcup_{i \in I} E_i = E$. An edge-decomposition thus decomposes the graph $G = (V,E)$ into graphs $G_i = (V, E_i)$ for $i \in I$.

We will be interested in the number of pieces required for an edge-decomposition of a complete graph into trees. In the above image, we provide an edge-decomposition of the complete graph on 8 vertices into 4 trees (in fact, into 4 Hamiltonian paths). For an infinite complete graph, though, no finite number of pieces will ever suffice. The question we will be interested in today is the number of pieces necessary to provide an edge-decomposition of the complete graph on the real numbers, $\mathbb{R}$, into trees.

Theorem. (Erdős-Kakutani and Erdős-Hajnal) The following statements are equivalent:

1. The Continuum Hypothesis
2. There is an edge-decomposition of the complete graph on $\mathbb{R}$ into countably many trees.

Proof. Suppose first that the Continuum Hypothesis holds. Then $\mathbb{R}$ can be enumerated as $\langle r_\alpha \mid \alpha < \omega_1 \rangle$. For all $\beta < \omega_1$, we know that $\beta$ is countable, so we can fix a function $e_\beta:\beta \rightarrow \mathbb{N}$ that is one-to-one, i.e., if $\alpha_0$ and $\alpha_1$ are distinct ordinals less than $\beta$, then $e_\beta(\alpha_0) \neq e_\beta(\alpha_1)$.

We now specify an edge-decomposition of the complete graph on $\mathbb{R}$ into countably many graphs. The edge-sets of these graphs will be denoted by $E_n$ for $n \in \mathbb{N}$. To specify this decomposition, it suffices to specify, for each pair $\alpha < \beta$ of countable ordinals, a natural number $n_{\alpha, \beta}$ such that $\{r_\alpha, r_\beta\} \in E_{n_{\alpha, \beta}}$. And we have a natural way of doing this: simply let $n_{\alpha, \beta} = e_\beta(\alpha)$.

We claim that each $E_n$ is cycle-free. To prove this, suppose for sake of contradiction that $k,n \in \mathbb{N}$ and $E_n$ has a $k$-cycle. This means there are distinct ordinals $\alpha_1, \alpha_2, \ldots, \alpha_k$ such that $\{r_{\alpha_1}, r_{\alpha_2}\}, \{r_{\alpha_2}, r_{\alpha_3}\}, \ldots, \{r_{\alpha_{k-1}}, r_{\alpha_k}\}, \{r_{\alpha_k}, r_{\alpha_1}\}$ are all in $E_n$.

Now fix $\ell \leq k$ such that $\alpha_\ell$ is the maximum of the set $\{\alpha_1, \ldots, \alpha_k\}$. Without loss of generality, let us suppose that $1 < \ell < k$ (if this is not the case, simply shift the numbering of the cycle). Then we have $\{r_{\alpha_{\ell - 1}}, r_{\alpha_\ell}\}, \{r_{\alpha_\ell}, r_{\alpha_{\ell + 1}}\} \in E_n$. Since $\alpha_\ell > \alpha_{\ell - 1}, \alpha_{\ell + 1}$, our definition of $E_n$ implies that $e_{\alpha_\ell}(\alpha_{\ell - 1}) = n = e_{\alpha_\ell}(\alpha_{\ell + 1})$, contradicting the assumption that $e_{\alpha_\ell}$ is one-to-one and finishing the proof of the implication $1. \Rightarrow 2.$
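The combinatorial heart of this direction can be played with in a finite setting. The Python sketch below (a toy analogue of my own, not the actual transfinite construction) colors each edge $\{a, b\}$ of the complete graph on $\{0, \ldots, n-1\}$, with $a < b$, by $e_b(a)$, taking each injection $e_b$ to simply be the identity; the same maximum-vertex argument then shows each color class is cycle-free (here each class is in fact a star):

```python
import itertools

def decompose(n):
    """Finite analogue of the Erdős–Kakutani coloring: color the edge
    {a, b} with a < b by e_b(a), where e_b is a one-to-one map from
    {0,...,b-1} into the naturals -- here, simply the identity."""
    colors = {}
    for a, b in itertools.combinations(range(n), 2):  # all edges of K_n, a < b
        colors[(a, b)] = a                            # e_b(a) = a is injective
    return colors

def has_cycle(vertices, edges):
    """Check whether the undirected graph (vertices, edges) contains a cycle,
    via union-find: an edge joining two vertices already in the same
    component closes a cycle."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return True
        parent[ra] = rb
    return False
```

Grouping the edges of $K_8$ by color and checking each class with `has_cycle` confirms that every class is a tree, mirroring the argument above; the Continuum Hypothesis is what lets the same trick run through all of $\mathbb{R}$ with only countably many colors.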

To prove that $2.$ implies $1.$, we will actually prove that the negation of $1.$ implies the negation of $2.$. So, suppose that the Continuum Hypothesis fails, i.e., $|\mathbb{R}| \geq \aleph_2$. Let $\langle r_\alpha \mid \alpha < |\mathbb{R}|\rangle$ be an enumeration of $\mathbb{R}$, and suppose $\langle E_n \mid n \in \mathbb{N} \rangle$ is an edge-decomposition of the complete graph on $\mathbb{R}$. This means that, for all pairs of ordinals $\alpha < \beta$ smaller than $|\mathbb{R}|$, there is a unique natural number $n_{\alpha, \beta}$ such that $\{r_\alpha, r_\beta\} \in E_{n_{\alpha, \beta}}$. We will prove that there is a natural number $n$ such that $E_n$ contains a 4-cycle.

Let $X$ be the set of ordinals that are bigger than $\omega_1$ but less than $|\mathbb{R}|$. Clearly, $|X| = |\mathbb{R}| \geq \aleph_2$. For each $\beta \in X$ and each $n \in \mathbb{N}$, let $A_{\beta, n}$ be the set of ordinals $\alpha$ less than $\omega_1$ such that $n_{\alpha, \beta} = n$. Then $\bigcup_{n \in \mathbb{N}} A_{\beta, n} = \omega_1$, so, since $\mathbb{N}$ is countable and $\omega_1$ is uncountable, there must be some natural number $n$ such that $A_{\beta, n}$ is uncountable. Let $n_\beta$ be such an $n$.

For $n \in \mathbb{N}$, let $X_n = \{\beta \in X \mid n_\beta = n\}$. Then $\bigcup_{n \in \mathbb{N}}X_n = X$. Since $|X| \geq \aleph_2$, this means that there must be some natural number $n^*$ such that $|X_{n^*}| \geq \aleph_2$.

Now, for each $\beta \in X_{n^*}$, let $\alpha_{\beta, 0}$ and $\alpha_{\beta, 1}$ be the least two elements of $A_{\beta, n^*}$. Since there are only $\aleph_1$ different choices for $\alpha_{\beta, 0}$ and $\alpha_{\beta, 1}$, and since $|X_{n^*}| \geq \aleph_2$, it follows that we can find $\beta_0 < \beta_1$ in $X_{n^*}$ and $\alpha_0 < \alpha_1 < \omega_1$ such that $\alpha_{\beta_0, 0} = \alpha_0 = \alpha_{\beta_1, 0}$ and $\alpha_{\beta_0, 1} = \alpha_1 = \alpha_{\beta_1, 1}$. It follows that $\{r_{\alpha_0}, r_{\beta_0}\}, \{r_{\beta_0}, r_{\alpha_1}\}, \{r_{\alpha_1}, r_{\beta_1}\}, \{r_{\beta_1}, r_{\alpha_0}\} \in E_{n^*}$. In other words, $(r_{\alpha_0}, r_{\beta_0}, r_{\alpha_1}, r_{\beta_1})$ forms a 4-cycle in $E_{n^*}$. This completes the proof of the Theorem.

Notes: The proofs given here do not exactly match those given by Erdős and collaborators; rather, they are simplifications made possible by the increased sophistication in dealing with ordinals and cardinals that set theorists have developed in the intervening decades. The implication from 1. to 2. is due to Erdős and Kakutani and can be found in this paper from 1943. The implication from 2. to 1. was stated as an unsolved problem in the Erdős-Kakutani paper. It was proven (in a much more general context) in a landmark paper by Erdős and Hajnal from 1966.