Infinity in the Classroom II

We’re between posts about non-Euclidean geometry here at Point at Infinity. In the meantime, take a look at this article about a fascinating connection between hyperbinary numbers and the countability of the rationals, and about exploring this connection in the classroom. Enjoy!


Parallel Lines

Detest it as lewd intercourse; it can deprive you of all your leisure, your health, your rest, and the whole happiness of your life.

Do not try the parallels in that way: I know that way all along. I have measured that bottomless night, and all the light and all the joy of my life went out there.

-Letters from Farkas Bolyai to his son, János, attempting to dissuade him from his investigations of the Parallel Postulate.

In our previous post, cataloging various notions of mathematical independence, we introduced the idea of logical independence. Briefly, given a consistent set of axioms, T, a sentence \varphi is independent from T if it can be neither proven nor disproven from the sentences in T. Today, we discuss one of the most prominent and interesting instances of logical independence: Euclid’s Parallel Postulate.

Among the most famous sets of axioms (top 5, certainly) are Euclid’s postulates, five statements underpinning (together with 23 definitions and five other statements putting forth the properties of equality) the mathematical system of Euclidean geometry set forth in the Elements and still taught in high school classrooms to this day. (We should note here that, from a modern viewpoint, Euclid’s proofs do not always strictly conform to the standards of mathematical rigor, and some of his results rely on methods or assumptions not justified by his five postulates. This has been fixed, for example by Hilbert, who gave a different set of axioms for Euclidean geometry in 1899. Now that we have noted this, we will proceed to forget it for the remainder of the post.)

Euclid’s first four postulates are quite elegant in their simplicity and self-evidence. Reformulated in modern language, they are roughly as follows:

  1. Given any two points, there is a unique line segment connecting them.
  2. Given any line segment, there is a unique line (unbounded in both directions) containing it.
  3. Given any point P and any radius r, there is a unique circle of radius r centered at P.
  4. All right angles are congruent.

The fifth postulate, however, which is known as the Parallel Postulate, is, quite unsatisfyingly, markedly more complicated and less self-evident:

  5. If two lines intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines must intersect one another on that side.

A picture might help illustrate this postulate:

The two indicated angles sum to less than two right angles, so, by the Parallel Postulate, the two lines, if extended far enough, will intersect on that side of the third line. By Harkonnen2, CC BY-SA 3.0

This postulate doesn’t seem to be explicitly about parallel lines, so the reader may be wondering why it is often called the Parallel Postulate. The reason becomes evident, though, when considering the following statement and learning that, in the context of the other four postulates, it is in fact equivalent to the Parallel Postulate:

  • Given any line \ell and any point P not on \ell, there is exactly one line through P that is parallel to \ell.

(Recall that two lines are parallel if they do not intersect.) This reformulation of the Parallel Postulate is often named Playfair’s Axiom, after the 18th-century Scottish mathematician John Playfair, though it was stated already by Proclus in the 5th century.

The Parallel Postulate was considered undesirably unwieldy and less satisfactory than the other four postulates, even by Euclid himself, who made a point of proving the first 28 results of the Elements without recourse to the Parallel Postulate. The general opinion among mathematicians for the next two millennia was that the Parallel Postulate should not be an axiom but rather a theorem; it should be possible to prove it using just the other four postulates.

Many attempts were made to prove the Parallel Postulate, and many claimed success at this task. Errors were then inevitably discovered by later mathematicians, many of whom subsequently put forth false proofs of their own. The aforementioned Proclus, for example, after pointing out flaws in a purported proof of Ptolemy, gives his own proof, which suffers from two instructive flaws. The first is relatively minor: Proclus assumes a consequence of Archimedes’ Axiom, which essentially states that, given any two line segments, there is a natural number n such that n times the length of the shorter line segment will exceed the length of the longer. (We encountered Archimedes’ Axiom in a previous post, about infinitesimals, which the reader is invited to revisit.) Archimedes’ Axiom seems like an entirely reasonable axiom to assume, but it notably does not follow from Euclid’s postulates.

Proclus’ more serious error, though, is that he makes the assumption that any two parallel lines have a constant distance between them. But this does not follow from the first four postulates. In fact, the statement, “The set of points equidistant from a straight line on one side of it form a straight line,” known as Clavius’ Axiom, is, in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. Proclus’ proof is therefore just a sophisticated instance of begging the question.

In the course of the coming centuries’ attempts to prove the Parallel Postulate, a number of other axioms were unearthed that are, at least in the presence of Archimedes’ Axiom and the first four postulates, equivalent to the Parallel Postulate. In addition to Playfair’s Axiom and Clavius’ Axiom, these include the following:

  • (Clairaut) Rectangles exist. (A rectangle, of course, being a quadrilateral with four right angles.)
  • (Legendre) Given an angle \alpha and a point P in the interior of the angle, there is a line through P that meets both sides of the angle.
  • (Wallis) Given any triangle, there are similar triangles of arbitrarily large size.
  • (Farkas Bolyai) Given any three points, not all lying on the same line, there is a circle passing through all three points.

A key line of investigation into the Parallel Postulate was carried out, probably independently, by Omar Khayyam, an 11th-century Persian mathematician, astronomer, and poet, and by Giovanni Gerolamo Saccheri, an 18th-century Italian Jesuit priest and mathematician. For concreteness, let us consider Saccheri’s account, which has the wonderful title, “Euclid Freed from Every Flaw.”

Saccheri and Khayyam were, similarly to their predecessors, attempting to prove the Parallel Postulate. Their method of proof was contradiction: assume that the Parallel Postulate is false and derive a contradiction. To do this, they considered figures that came to be known as Khayyam-Saccheri quadrilaterals.

To form a Khayyam-Saccheri quadrilateral, take a line segment (say, BC). Take two line segments of equal length and form perpendiculars, in the same direction, at B and C (forming, say, AB and DC). Now connect the ends of those two line segments with a line segment (AD) to form a quadrilateral. A picture is given below.

A Khayyam-Saccheri quadrilateral. By HR – Own work, CC BY-SA 3.0

By construction, the angles at B and C are right angles, but the angles at A and D are unclear. Saccheri proves that these two angles are equal. He also proves that, if these angles are obtuse, then they are obtuse for every such quadrilateral; if they are right, then they are right for every such quadrilateral; and, if they are acute, then they are acute for every such quadrilateral. This then naturally divides geometries into three categories: those satisfying the Obtuse Hypothesis, those satisfying the Right Hypothesis, and those satisfying the Acute Hypothesis. (These types of geometries subsequently became known as semielliptic, semieuclidean, and semihyperbolic, respectively.)

At this point, Saccheri attempts to prove that the Obtuse Hypothesis and the Acute Hypothesis both lead to contradiction. (Note that a geometry satisfying all five of Euclid’s postulates must satisfy the Right Hypothesis. The converse is not true, so even a successful refutation of the Obtuse and Acute Hypotheses would not be enough to establish the Parallel Postulate.) Saccheri is able to prove (in the presence of Archimedes’ Axiom) that the Obtuse Hypothesis leads to the conclusion that straight lines are finite, thus contradicting the second postulate. He is unable to obtain a logical contradiction from the Acute Hypothesis, though. Instead, he derives a number of counter-intuitive statements from it and then concludes that the Acute Hypothesis must be false because it is “repugnant to the nature of a straight line.”

The next big steps towards the establishment of the independence of the Parallel Postulate were made by Nikolai Lobachevsky and János Bolyai (who fortunately did not heed his father’s letters quoted at the top of this post), 19th-century mathematicians from Russia and Hungary, respectively. (Similar work was probably done by Gauss, as well, though it was never published.) Their work entailed a crucial shift in perspective – rather than attempt to prove the Parallel Postulate from the others, the mathematicians seriously considered the possibility that it is not provable and thought of non-Euclidean geometries (i.e., those failing to satisfy the Parallel Postulate) as legitimate objects of mathematical study in their own right. In particular, they were interested in hyperbolic geometry, in which the Parallel Postulate is replaced by the assertion that, given any line \ell and any point P not on the line, there are at least two distinct lines passing through P and parallel to \ell. (Not surprisingly, considering the nomenclature, hyperbolic geometries are semihyperbolic, i.e., they satisfy the Acute Hypothesis.) This viewpoint was vindicated when, in 1868, Eugenio Beltrami produced a model of hyperbolic geometry. This shows that, as long as Euclidean geometry is consistent, then the Parallel Postulate is independent of the other four postulates: all five postulates are true, for example, in the Cartesian plane, while the first four are true and the Parallel Postulate is false in any model of hyperbolic geometry.

A number of other models for hyperbolic geometry are now known. In our next post, we will look at a particularly elegant one: the Poincaré disk model.

Cover Image: Michael Tompsett, Parallel Lines

For more information on this and many other geometric topics, I highly recommend Robin Hartshorne’s excellent book, Geometry: Euclid and Beyond.


I am no bird; and no net ensnares me: I am a free human being with an independent will.

-Charlotte Brontë, Jane Eyre

An object is independent from others if it is, in some meaningful way, outside of their area of influence. If it has some meaningful measure of self-determination. Independence is important. Nations have gone to war to obtain independence from other nations or empires. Adolescents go through rebellious periods, yearning for independence from parents or other authority figures. Though perhaps less immediately exciting, notions of independence permeate mathematics, as well. Viewed in the right light, they can even be seen as direct analogues of the more familiar notions considered above: in various mathematical structures, there is often a natural way of defining an area of influence of an element or subset of the structure. A different element is then independent of this element or subset if it is outside its area of influence. Such notions have proven to be of central importance in a wide variety of mathematical contexts. Today, in anticipation of some deeper dives in future posts, we take a brief look at a few prominent examples.

Graph Independence: Recall that a graph is a pair G = (V,E), where V is a set of vertices and E is a set of edges between these vertices. If u \in V is a vertex, then its neighborhood is the set of all vertices that are connected to u in the graph, i.e., \{v \in V \mid \{u,v\} \in E\}. One could naturally consider a vertex’s area of influence in the graph G to consist of the vertex itself together with all of its neighbors. With this viewpoint, we can say that a vertex u \in V is independent from a subset A \subseteq V if, for all v \in A, u is not in the neighborhood of v, i.e., u is not in the area of influence of any element of A. Similarly, we may say that a set A \subseteq V of vertices is independent if each element of A is independent from the rest of the elements of A, i.e., if each v \in A is independent from A \setminus \{v\}.

The blue vertices form a (maximum) independent set in this graph. By Life of Riley – Own work, GFDL
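For the programmatically inclined, the definitions above translate directly into a few lines of Python. This is just an illustrative sketch (the graph, a 5-cycle, is my own example), representing each edge as a 2-element set:

```python
from itertools import combinations

def is_independent(vertices, edges):
    """Check that no edge of the graph joins two of the given vertices."""
    return all({u, v} not in edges for u, v in combinations(vertices, 2))

# A 5-cycle: vertices 0..4, each joined to its two neighbors.
edges = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}]

print(is_independent({0, 2}, edges))  # True: 0 and 2 are not adjacent
print(is_independent({0, 1}, edges))  # False: {0, 1} is an edge
```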


Fun Fact: In computer science, there are a number of interesting computational problems involving independent sets in graphs. These problems are often quite difficult; for example, the maximum independent set problem, in which one is given a graph and must produce an independent set of maximum size, is known to be NP-hard.
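To get a feel for why the problem is hard, here is the obvious brute-force approach in Python (a toy sketch, not a serious algorithm): try every subset, from largest to smallest, and return the first independent one. The running time is exponential in the number of vertices, which is consistent with the problem being NP-hard.

```python
from itertools import combinations

def max_independent_set(n, edges):
    """Brute force: try all subsets of {0, ..., n-1}, largest first.

    Exponential time -- fine for toy graphs, hopeless in general."""
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all({u, v} not in edges for u, v in combinations(subset, 2)):
                return set(subset)
    return set()

# The 5-cycle again: its largest independent sets have size 2.
edges = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}]
print(max_independent_set(5, edges))
```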

Linear Independence: Let n be a natural number, and consider the real n-dimensional Euclidean space \mathbb{R}^n, which consists of all n-tuples of real numbers. Given \vec{u} = (u_1,\ldots,u_n) and \vec{v} = (v_1, \ldots, v_n) in \mathbb{R}^n and a real number r \in \mathbb{R}, we can define the elements \vec{u} + \vec{v} and r\vec{u} in \mathbb{R}^n as follows:

\vec{u} + \vec{v} = (u_1 + v_1, \ldots, u_n + v_n)

r\vec{u} = (ru_1, \ldots, ru_n).

(In this way, \mathbb{R}^n becomes what is known as a vector space over \mathbb{R}). Given a subset A \subseteq \mathbb{R}^n, the natural way to think about its linear area of influence is as \mathrm{span}(A), which is equal to all n-tuples which are of the form

r_1\vec{v}_1 + \ldots + r_k\vec{v}_k,

where k is a natural number, r_1, \ldots, r_k are real numbers, and \vec{v}_1, \ldots, \vec{v}_k are elements of A.

In this way, we say that an n-tuple \vec{u} \in \mathbb{R}^n is linearly independent from a set A \subseteq \mathbb{R}^n if \vec{u} is not in \mathrm{span}(A). A set A is linearly independent if each element \vec{u} of A is not in \mathrm{span}(A \setminus \{\vec{u}\}), i.e., if each element of A is linearly independent from the set formed by removing that element from A. It is a nice exercise to show that every linearly independent subset of \mathbb{R}^n has size at most n and is maximal if and only if it has size equal to n.
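These definitions are also easy to experiment with in code. The following Python sketch (my own, using only the standard library) decides linear independence by row-reducing the vectors and counting the nonzero rows, i.e., computing the rank; a set is independent exactly when its rank equals its size.

```python
def rank(vectors, eps=1e-9):
    """Row-reduce a list of n-tuples and count the pivot rows."""
    rows = [list(v) for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        # find a row with a nonzero entry in this column to use as a pivot
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > eps), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # clear this column from every other row
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > eps:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def is_linearly_independent(vectors):
    """A set is independent iff its span has dimension equal to its size."""
    return rank(vectors) == len(vectors)

print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
```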

Fun Fact: Stay tuned until the end of the post!

Thou of an independent mind,
With soul resolv’d, with soul resign’d;
Prepar’d Power’s proudest frown to brave,
Who wilt not be, nor have a slave;
Virtue alone who dost revere,
Thy own reproach alone dost fear—
Approach this shrine, and worship here.

-Robert Burns, “Inscription for an Altar of Independence”

Algebraic Independence: If A is a set of real numbers, then one can say that its algebraic area of influence (over \mathbb{Q}, the set of rational numbers), is the set of all real roots of polynomial equations with coefficients in A \cup \mathbb{Q}, i.e., the set of all real numbers that are solutions to equations of the form:

r_kx^k + \ldots + r_1x + r_0 = 0,

where k is a natural number and r_0, \ldots, r_k are elements of A \cup \mathbb{Q}. With this definition, a real number s is algebraically independent (over \mathbb{Q}) from a set A \subseteq \mathbb{R} if s is not the root of a polynomial equation with coefficients in A \cup \mathbb{Q}. A set A \subseteq \mathbb{R} is algebraically independent (over \mathbb{Q}) if each element s \in A is algebraically independent from A \setminus \{s\}.

Fun Fact: Note that a 1-element set \{s\} is algebraically independent over \mathbb{Q} if and only if it is transcendental, i.e., is not the root of a polynomial with rational coefficients. \pi and e are famously both transcendental numbers, yet it is still unknown whether the 2-element set \{\pi, e\} is algebraically independent over \mathbb{Q}. It is not even known if \pi + e is irrational!
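There is no computation one can run to certify transcendence, but the other direction, exhibiting a polynomial witness that a number is algebraic, is easy to check numerically. For instance (my example, not from the post), \sqrt{2} + \sqrt{3} is algebraic over \mathbb{Q} because it is a root of x^4 - 10x^2 + 1:

```python
from math import sqrt, isclose

def poly(x):
    # x^4 - 10x^2 + 1, a polynomial with rational coefficients
    return x**4 - 10 * x**2 + 1

s = sqrt(2) + sqrt(3)
# poly(s) is zero up to floating-point error, witnessing that s is algebraic
print(isclose(poly(s), 0.0, abs_tol=1e-9))  # True
```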

Logical Independence: Let T be a consistent set of axioms, i.e., a set of sentences from which one cannot derive a contradiction. We can say that the logical area of influence of T is the set of sentences that can be proven from T, together with their negations. In other words, it is the set of sentences which, if one takes the sentences in T as axioms, can be proven either true or false. A sentence \phi is then logically independent from T if neither \phi nor its negation can be proven from the sentences in T.

Logical independence is naturally of great importance in the study of the foundations of mathematics. Much of modern set theory, and much of my personal mathematical research, involves statements that are independent from the Zermelo-Fraenkel Axioms with Choice (ZFC), which is a prominent set of axioms for set theory and indeed for all of mathematics. These are statements, then, that in our predominant mathematical framework can neither be proven true nor proven false. The most well-known of these is the Continuum Hypothesis (CH), which, in one of its formulations, is the statement that there are no infinite cardinalities strictly between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. To prove that CH is independent from ZFC, one both produces a mathematical structure that satisfies ZFC and in which CH is true (which Kurt Gödel did in 1938) and produces a mathematical structure that satisfies ZFC and in which CH is false (which Paul Cohen did in 1963). Since Cohen’s result in 1963, a great number of natural mathematical statements have been proven to be independent from ZFC.

In our next post, we will consider a logical independence phenomenon of a somewhat simpler nature: the independence of Euclid’s parallel postulate from Euclid’s four other axioms for plane geometry, which will lead us to considerations of exotic non-Euclidean geometries.

Fun Fact: In the setting of general vector spaces, which generalize the vector spaces \mathbb{R}^n from the above discussion of linear independence, a basis is a linearly independent set whose span (what we referred to as its linear area of influence) is the entire vector space. A basis for \mathbb{R}^n is thus any linearly independent set of size n. Using the Axiom of Choice, one can prove that every vector space has a basis. However, there are models of ZF (i.e., the Zermelo-Fraenkel Axioms without Choice) in which there are (infinite-dimensional) vector spaces without a basis. Thus, the statement, “Every vector space has a basis,” is logically independent from ZF.

Solitude is independence.

-Hermann Hesse, Steppenwolf

Circle Inversion and the Pappus Chain

There is a pledge of the big and of the small in the infinite.

-Dejan Stojanović

In the next two posts, we are going to look at two interesting geometric ideas of the 19th century involving circles. Next time, we will consider Poincaré’s disk model for hyperbolic geometry. Today, though, we immerse ourselves in the universe of inversive geometry.

Consider a circle in the infinite 2-dimensional plane:


This circle divides the plane into two regions: the bounded region inside the circle and the unbounded region outside the circle (let’s say that the points on the circle belong to both regions). A natural thing to want to do, now, especially in the context of this blog, would be to try to exchange these two regions, to map the infinite space outside the circle into the bounded space of the circle, and vice versa, in a “natural” way.

I could be bounded in a nutshell, and count myself a king of infinite space.

-William Shakespeare, Hamlet

Upon first reflection, one might be tempted to say that we want to “reflect” points across the circle. And this is sort of right, but reflection already carries a meaning in geometry. Truly reflecting points across the circle would preserve their distance from the circle, so the inside of the circle could only be mapped onto a finite ring whose outer radius is twice that of the circle. Moreover, it would not be clear how to reflect points from outside this ring into the circle.

Instead, we want to consider a process known as “inversion.” Briefly speaking, we want to arrange so that points arbitrarily close to the center of the circle get sent to points arbitrarily far away from the center of the circle, and vice versa. For simplicity, let us suppose that the circle is centered at the origin of the plane and has a radius of 1. The most natural way to achieve our aim is to send a point P to a point P' that lies in the same direction from the origin as P and whose distance from the origin is the reciprocal of the distance from P to the origin. Here’s an example:

P and P’ get swapped by inversion.

One can check that, algebraically, this inversion sends a point P with coordinates (x,y) to a point P' with coordinates (\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}). Points inside the circle are sent to points outside the circle, points outside the circle are sent to points inside the circle, and points on the circle are sent to themselves. Moreover, as one might expect from the name, the inversion map is its own inverse: applying it twice, we end up where we started. Perfect!
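Both properties are easy to confirm in a few lines of Python (a throwaway sketch; the point (3, 4) is just an example):

```python
from math import isclose, hypot

def invert(x, y):
    """Inversion across the unit circle centered at the origin."""
    d2 = x * x + y * y  # squared distance from the origin
    return x / d2, y / d2

x, y = 3.0, 4.0           # a point at distance 5 from the origin
xi, yi = invert(x, y)
print(hypot(xi, yi))      # 0.2 (up to rounding): the reciprocal distance, 1/5
xb, yb = invert(xi, yi)
print(xb, yb)             # back to (3, 4), up to rounding: inversion is its own inverse
```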

Wait a second, though. We’re being a little too hasty. What about the origin? Where is it sent? Our procedure doesn’t seem to tell us, and if we try to use our algebraic expression, we end up dividing by zero. Since the origin is inside the circle, it should certainly be sent to a point outside the circle, but all of those points are already taken. Also, since points arbitrarily close to the origin get mapped to points arbitrarily far from the origin, we want to send the origin to a point as far away from itself as possible. At first glance, we might seem to be in a quandary here, but longtime readers of this blog will see an obvious solution: the origin gets mapped to a point at infinity! (And the point at infinity, in turn, gets mapped to the origin.)

(Technical note: Since we’ve added a point at infinity, the inversion map should be seen not as a map on the plane \mathbb{R}^2, but on its one-point compactification (or Alexandroff compactification), \hat{\mathbb{R}}^2. In fact, the inversion map is a homeomorphism of \hat{\mathbb{R}}^2 with itself.)

Let’s examine what the inversion map does to simple geometric objects. We have already seen what happens to points. It should also be obvious that straight lines through the origin get mapped to themselves. For example, in the image above, the line connecting P and P' gets mapped to itself. (Here we are specifying, of course, that every line contains the point at infinity.)

A bit of thought and calculation will convince you that lines not passing through the origin get sent to circles that do pass through the origin.

The red line, when inverted across the black circle, gets sent to the red circle.
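If you'd rather check this numerically than by calculation, the following sketch does so for one concrete case of my choosing: points on the vertical line x = 2, inverted across the unit circle, all land on the circle of radius 1/4 centered at (1/4, 0), which passes through the origin.

```python
from math import isclose, hypot

def invert(x, y):
    """Inversion across the unit circle centered at the origin."""
    d2 = x * x + y * y
    return x / d2, y / d2

# Sample points on the vertical line x = 2 (which misses the origin);
# their images should lie on the circle of radius 1/4 centered at (1/4, 0).
center, radius = (0.25, 0.0), 0.25
for t in [-10, -1, 0, 0.5, 3, 100]:
    xi, yi = invert(2.0, t)
    print(isclose(hypot(xi - center[0], yi - center[1]), radius))  # True every time
```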

Since the inversion map is its own inverse, circles passing through the origin get mapped to lines that don’t pass through the origin. Circles that don’t pass through the origin, on the other hand, get mapped to other circles that don’t pass through the origin.

The red circle on the left is sent to the red circle on the right through inversion.

There’s an important special case of this phenomenon: a circle that is met perpendicularly by the circle through which we are inverting gets mapped to itself.

The red circle is perpendicular to the circle of inversion and is thus sent to itself.

We thus have a sort of duality between lines and circles that has been revealed through the process of circle inversion. Lines, when seen in the right light, are simply circles with an infinite radius. We’re going to move on to some applications of circle inversion in just a sec, but, first, a pretty picture of an inverted checkerboard.

Left: A checkerboard. Right: A checkerboard inverted across a circle centered at the middle of the board with radius equal to the side length of one checkerboard square. (from Mathographics by R. Dixon)

The introduction of the method of circle inversion is widely attributed to the Swiss mathematician Jakob Steiner, who wrote a treatise on the matter in 1824. When combined with the more familiar rigid transformations of rotation, translation, and reflection, the decidedly non-rigid transformation of inversion gives rise to inversive geometry, which became a major topic of study in nineteenth-century geometry. It was perhaps most notably applied by William Thomson (later to become 1st Baron Kelvin, immortalized in the name of a certain temperature scale), at the age of 21, to solve problems in electrostatics. Circle inversion also allows for extremely elegant proofs of classical geometric facts. We end today’s post with an example.

Consider three half-circles, all tangent to one another and centered on the same horizontal line, with two placed inside the third, as follows:

An arbelos. (Original by Julio Reis, new version by Rubber Duck, CC BY-SA 3.0)

This figure (or, more precisely, the grey region enclosed by the semicircles) is known as an arbelos, and its first known appearance dates back to The Book of Lemmas by Archimedes. A remarkable fact about the arbelos is that, starting with the smallest of the semicircles in the figure, one can nestle into it an infinite sequence of increasingly small circles, each tangent to the two larger semicircles and the circle appearing before it, thus creating the striking Pappus chain, named for Pappus of Alexandria, who investigated the figure in the 3rd century AD:

A Pappus chain. (By Pbroks13, CC BY 3.0)

Let us label the circles in the Pappus chain (starting with the smallest semicircle in the arbelos) \mathcal{C}_0, \mathcal{C}_1, \mathcal{C}_2, etc. (So, in the picture above, P_1 is the center of \mathcal{C}_1, P_2 is the center of \mathcal{C}_2, and so on.) Clearly, the size of \mathcal{C}_n decreases as n increases, but it is natural to ask how quickly it decreases. It is also natural to ask how the position of the point P_n changes as n increases. In particular, what is the height of P_n above the base of the figure? It turns out that the answers to these two questions are closely related, a fact discovered by Pappus through a long and elaborate derivation in Euclidean geometry, and which we will derive quickly and elegantly through circle inversion.

Let d_n denote the diameter of the circle \mathcal{C}_n, and let h_n denote the height of the point P_n above the base of the Pappus chain (i.e., the line segment AB). We will prove the remarkable formula:

For all n \in \mathbb{N}, h_n = n \cdot d_n.

For concreteness, let us demonstrate the formula for \mathcal{C}_3. The same argument will work for each of the circles in the Pappus chain. As promised, we are going to use circle inversion. Our first task is to find a suitable circle across which to invert our figure. And that circle, it turns out, will be the circle centered at A and perpendicular to \mathcal{C}_3:

We will invert our figure across the red circle.

Now, what happens when we invert our figure? First, consider the two larger semicircles in the arbelos, with diameters AC and AB. The circles of which these form the upper half pass through the center of our circle of inversion and thus, as discussed above, are mapped to straight lines by our inversion. Moreover, since the centers of these circles lie directly to the right of A, a moment’s thought should convince you that they are mapped to vertical lines.

Now, what happens to the circles in the Pappus chain? Well, none of them pass through A, so they will all get mapped to circles. \mathcal{C}_3 is perpendicular to the circle of inversion, so it gets mapped to itself. But, in the original diagram, \mathcal{C}_3 is tangent to the larger semicircles in the arbelos. Since circle inversion preserves tangency, in the inverted diagram, \mathcal{C}_3 is tangent to the two vertical lines that these semicircles are mapped to. And, of course, the same is true of all of the other circles in the Pappus chain. Finally, note that, since the center of \mathcal{C}_0 lies on the base of the figure, which passes through the center of our inversion circle, it also gets mapped to a point on the base of the figure. Putting this all together, we end up with the following striking figure:

Pappus chain inversion: before and after. (from “Reflections on the Arbelos” by Harold P. Boas)

The circle with diameter AB gets mapped to the vertical line through B', and the circle with diameter AC gets mapped to the vertical line through C'. Our Pappus chain, meanwhile, is transformed by inversion into an infinite tower of circles, all of the same size, bounded by these vertical lines. Moreover, the circle \mathcal{C}_3 and the point P_3 are left in place by the inversion. It is now straightforward to use this tower to calculate the height h_3 of P_3 in terms of the diameter d_3 of \mathcal{C}_3. To get from P_3 down to the base, we must first pass through half of \mathcal{C}_3, which has a height of \frac{d_3}{2}. We then must pass through the image of \mathcal{C}_2 under the inversion, which has a height of d_3. Then the image of \mathcal{C}_1, which also has a height of d_3. And, finally, the image of the smallest semicircle of the arbelos, which has a height of \frac{d_3}{2}. All together, we get:

h_3 = \frac{d_3}{2} + d_3 + d_3 + \frac{d_3}{2} = 3d_3.

Pretty nice!
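The whole argument can also be verified numerically. The Python sketch below (my own setup) places A at the origin, B at (1, 0), and C at (r, 0), builds the tower of equal circles in the inverted picture, and maps each one back through the inversion. For simplicity it uses an inversion circle of radius 1 centered at A rather than the one perpendicular to \mathcal{C}_3; the choice of radius rescales the figure but does not affect the ratio h_n / d_n.

```python
from math import isclose

def invert_circle(cx, cy, rho, k=1.0):
    """Image under inversion (center at the origin, radius k) of a circle
    with center (cx, cy) and radius rho that misses the origin."""
    s = cx * cx + cy * cy - rho * rho
    return k * k * cx / s, k * k * cy / s, k * k * rho / abs(s)

# Arbelos with A at the origin, B = (1, 0), C = (r, 0).
r = 0.4

# In the inverted picture, the two big semicircles become the vertical
# lines x = 1 and x = 1/r, and the Pappus chain becomes a tower of
# equal circles squeezed between them.
rho = (1 / r - 1) / 2          # common radius of the tower circles
x0 = (1 + 1 / r) / 2           # common x-coordinate of their centers

for n in range(1, 6):
    # the n-th tower circle sits at height 2 * n * rho; mapping it back
    # gives the n-th circle of the Pappus chain
    cx, cy, rad = invert_circle(x0, 2 * n * rho, rho)
    h_n, d_n = cy, 2 * rad
    print(n, isclose(h_n, n * d_n))  # True for every n
```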

For further reading on circle inversion, see Harold P. Boas’ excellent article, “Reflections on the Arbelos.”

Cover image: René Magritte, The false mirror

Infinite Acceleration: Risset Rhythms

In our most recent post, we took a look at and a listen to Shepard tones and their cousins, Shepard-Risset glissandos, which are tones or sequences of tones that create the illusion of perpetually rising (or falling) pitch. The illusion is created by overlaying a number of tones, separated by octaves, rising in unison. The volumes gradually increase from low pitch to middle pitch and gradually decrease from middle pitch to high pitch, leading to a fairly seamless continuous tone.

The same idea can be applied, mutatis mutandis, to percussive loops instead of tones, and to speed instead of pitch, thus creating the illusion of a rhythmic track that is perpetually speeding up (or slowing down). (The mechanism is exactly the same as that of the Shepard tone, so rather than provide an explanation here, I will simply refer the reader to the previous post.) Such a rhythm is known as a Risset rhythm.

I coded up some very basic examples in SuperCollider. Here’s an accelerating Risset rhythm:

And a decelerating Risset rhythm:

Here’s a more complex Risset rhythm:

And, finally, a piece of electronic music employing Risset rhythms: “Calculus,” by Stretta.


Infinite Ascent: Shepard Tones

Have you ever been watching a movie and noticed that the musical score seemed, impossibly, to be perpetually rising, ratcheting up the intensity of the film more and more? Or perhaps it seemed to be perpetually falling, creating a deeper and deeper sense of doom onscreen? If so, it is likely that this effect was achieved using a Shepard tone, a way of simulating an unbounded auditory ascent (or descent) in a bounded range.

To understand how Shepard tones work, let’s look at a simplified implementation of one. We will have three musical voices (middle, low, and high), with an octave between successive voices. The voices then start to move, in unison, and always an octave apart, up through a single octave, over, say, five seconds. As they go, though, they also change their volumes: the middle voice stays at full volume the whole time, the low voice gradually increases from zero volume to full volume, and the high voice gradually decreases from full volume to zero volume. The result will simply sound like a tone rising through an octave, and it can be represented visually as follows.


This by itself is nothing special, though. The trick of the Shepard tone is that this pattern is then repeated over, and over, and over again. Each repetition of the pattern sounds like a tone ascending an octave, but, because of the volume modulation, successive patterns are aurally glued together: the low voice from one cycle leads seamlessly to the middle voice of the next, the middle voice from one cycle leads seamlessly to the high voice of the next, and the high voice simply fades away. The result sounds like a perpetually increasing tone.


Note the similarity to the visual barber pole illusion, in which a rotating pole causes stripes to appear to be perpetually rising. Also, this whole story can be turned upside down, which will lead to a perpetually falling tone.

Let’s hear some Shepard tones in action! Now, in practice, using only three voices does not create a particularly convincing illusion, so, to make these sounds, I used nine voices, spread across nine octaves. Also, linearly varying the volume, as in the above visualization, seems to make it more noticeable when voices enter or fade away, so I used something more like a bell curve.

(Technical notes: These Shepard tones were created in SuperCollider, using modified code written by Eli Fieldsteel, from whose YouTube tutorials I have learned a great deal of what I know about SuperCollider. Also, I used a formant oscillator instead of the more traditional sine oscillator.)
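If you'd like to experiment without installing SuperCollider, here is a minimal pure-Python sketch of a rising Shepard-Risset glissando along the lines just described: nine voices spaced an octave apart, each sweeping upward and wrapping around quietly, with a bell-curve volume envelope. The base frequency, cycle length, and envelope width here are my own choices, not the settings used for the recordings above.

```python
import math
import struct
import wave

RATE = 44100        # samples per second
CYCLE = 2.0         # seconds for the pattern to rise one octave
VOICES = 9          # voices spaced an octave apart
BASE = 27.5         # bottom of the range (A0) -- an arbitrary choice

def shepard(duration):
    """Synthesize `duration` seconds of a rising Shepard-Risset glissando."""
    two_pi = 2 * math.pi
    phases = [0.0] * VOICES
    out = []
    for i in range(int(RATE * duration)):
        t = i / RATE
        s = 0.0
        for v in range(VOICES):
            # each voice climbs through the whole 9-octave range,
            # wrapping around (while nearly silent) at the top
            pos = ((v + t / CYCLE) % VOICES) / VOICES
            freq = BASE * 2 ** (pos * VOICES)
            phases[v] = (phases[v] + two_pi * freq / RATE) % two_pi
            # bell-curve volume: full in the middle of the range,
            # fading to near silence at the bottom and top
            amp = math.exp(-0.5 * ((pos - 0.5) / 0.18) ** 2)
            s += amp * math.sin(phases[v])
        out.append(s / VOICES)
    return out

samples = shepard(3.0)
with wave.open("shepard.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(32767 * x)) for x in samples))
```

Keeping a running phase for each voice (rather than computing sin(2πft) directly) is what avoids clicks as the frequencies sweep.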

First, a simple ascending Shepard tone:

The effect becomes more convincing, and the tone more interesting, if multiple Shepard tones are played simultaneously at a fixed interval. Here, we have two ascending Shepard tones separated by a tritone, a.k.a. the devil’s interval, a.k.a. half an octave:

Next, three descending Shepard tones, arranged in a minor triad:

Finally, two Shepard tones, with one ascending and the other descending:

The Shepard tone originated with Roger Shepard, a 20th-century American cognitive scientist, who introduced it as a sequence of discrete notes. The continuous version, known as the Shepard scale or the Shepard-Risset glissando, which our code approximates, was introduced by the French composer Jean-Claude Risset, who perhaps most notably used it in his 1968 Computer Suite from Little Boy.

More recently, it has prominently been deployed by Christopher Nolan and Hans Zimmer, as the basis for the Batpod sound in The Dark Knight and in the Dunkirk soundtrack.

Cover image: M.C. Escher, Waterfall

Infinite Cities: Calvino and Chess

…trying to master chess is like trying to master the infinite, and the psychological consequences can be transcendent or terrifying.

I’ve been busy traveling lately, so, in lieu of a new post, I’m just giving you a couple of literary links today.

First, a piece at The Millions about depictions of chess in literature. Chess, like the infinite, is often depicted in the popular imagination as an object of obsession, a pursuit that can lead either to transcendence or madness. This is often a little overwrought, but certainly entertaining.

“Plunging Into the Infinite: How Literature Captures the Essence of Chess” by Matthew James Seidel

Next, at LitHub, a selection of artwork inspired by Italo Calvino’s wonderful Invisible Cities, a novel exploring the infinite permutability of the urban environment.

“Art Inspired by Italo Calvino’s Invisible Cities” by Emily Temple

Cover image: “Zenobia” by Maria Monsonet