Independence

I am no bird; and no net ensnares me: I am a free human being with an independent will.

-Charlotte Brontë, Jane Eyre

An object is independent from others if it is, in some meaningful way, outside of their area of influence. If it has some meaningful measure of self-determination. Independence is important. Nations have gone to war to obtain independence from other nations or empires. Adolescents go through rebellious periods, yearning for independence from parents or other authority figures. Though perhaps less immediately exciting, notions of independence permeate mathematics, as well. Viewed in the right light, they can even be seen as direct analogues of the more familiar notions considered above: in various mathematical structures, there is often a natural way of defining an area of influence of an element or subset of the structure. A different element is then independent of this element or subset if it is outside its area of influence. Such notions have proven to be of central importance in a wide variety of mathematical contexts. Today, in anticipation of some deeper dives in future posts, we take a brief look at a few prominent examples.

Graph Independence: Recall that a graph is a pair G = (V,E), where V is a set of vertices and E is a set of edges between these vertices. If u \in V is a vertex, then its neighborhood is the set of all vertices that are connected to u in the graph, i.e., \{v \in V \mid \{u,v\} \in E\}. One could naturally consider a vertex’s area of influence in the graph G to consist of the vertex itself together with all of its neighbors. With this viewpoint, we can say that a vertex u \in V is independent from a subset A \subseteq V if, for all v \in A, u is not in the neighborhood of v, i.e., u is not in the area of influence of any element of A. Similarly, we may say that a set A \subseteq V of vertices is independent if each element of A is independent from the rest of the elements of A, i.e., if each v \in A is independent from A \setminus \{v\}.

[Figure] The blue vertices form a (maximum) independent set in this graph. By Life of Riley – Own work, GFDL

Fun Fact: In computer science, there are a number of interesting computational problems involving independent sets in graphs. These problems are often quite difficult; for example, the maximum independent set problem, in which one is given a graph and must produce an independent set of maximum size, is known to be NP-hard.
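To make these definitions concrete, here is a minimal Python sketch (the names is_independent and maximum_independent_set are purely illustrative choices, not taken from any particular library) that checks independence directly from the definition and finds a maximum independent set by brute force. The brute-force search takes time exponential in the number of vertices, which is in keeping with the NP-hardness mentioned above.

```python
from itertools import combinations

def is_independent(edges, subset):
    """A set of vertices is independent if no edge joins two of its members."""
    subset = set(subset)
    return not any(u in subset and v in subset for u, v in edges)

def maximum_independent_set(vertices, edges):
    """Brute-force search over subsets, from largest to smallest.

    Exponential in the number of vertices, in keeping with the NP-hardness
    of the maximum independent set problem.
    """
    vertices = list(vertices)
    for size in range(len(vertices), -1, -1):
        for subset in combinations(vertices, size):
            if is_independent(edges, subset):
                return set(subset)

# Example: a 5-cycle, whose largest independent sets have size 2.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(maximum_independent_set(V, E))  # e.g. {1, 3}
```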

Linear Independence: Let n be a natural number, and consider the real n-dimensional Euclidean space \mathbb{R}^n, which consists of all n-tuples of real numbers. Given \vec{u} = (u_1,\ldots,u_n) and \vec{v} = (v_1, \ldots, v_n) in \mathbb{R}^n and a real number r \in \mathbb{R}, we can define the elements \vec{u} + \vec{v} and r\vec{u} in \mathbb{R}^n as follows:

\vec{u} + \vec{v} = (u_1 + v_1, \ldots, u_n + v_n)

r\vec{u} = (ru_1, \ldots, ru_n).

(In this way, \mathbb{R}^n becomes what is known as a vector space over \mathbb{R}.) Given a subset A \subseteq \mathbb{R}^n, the natural way to think about its linear area of influence is as \mathrm{span}(A), the set of all n-tuples of the form

r_1\vec{v}_1 + \ldots + r_k\vec{v}_k,

where k is a natural number, r_1, \ldots, r_k are real numbers, and \vec{v}_1, \ldots, \vec{v}_k are elements of A.
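For example, in \mathbb{R}^3, \mathrm{span}(\{(1,0,0), (0,1,0)\}) consists of all triples of the form (r_1, r_2, 0); the linear area of influence of these two vectors is thus the plane of all triples whose last coordinate is 0.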

We then say that an n-tuple \vec{u} \in \mathbb{R}^n is linearly independent from a set A \subseteq \mathbb{R}^n if \vec{u} is not in \mathrm{span}(A). A set A is linearly independent if each element \vec{u} of A is not in \mathrm{span}(A \setminus \{\vec{u}\}), i.e., if each element of A is linearly independent from the set formed by removing that element from A. It is a nice exercise to show that every linearly independent subset of \mathbb{R}^n has size at most n and is maximal if and only if it has size equal to n.
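One can also experiment with linear independence numerically: a finite set of vectors in \mathbb{R}^n is linearly independent exactly when the matrix whose rows are those vectors has rank equal to the number of vectors. Here is a minimal sketch, assuming NumPy is available (the helper name is_linearly_independent is just an illustrative choice):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Rows are linearly independent iff the matrix they form has rank equal
    to the number of rows (computed here up to floating-point tolerance)."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# Three vectors in R^3 that are linearly independent:
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))             # True

# Four vectors in R^3 can never be linearly independent, since a linearly
# independent subset of R^n has size at most n:
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]))  # False
```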

Fun Fact: Stay tuned until the end of the post!

Thou of an independent mind,
With soul resolv’d, with soul resign’d;
Prepar’d Power’s proudest frown to brave,
Who wilt not be, nor have a slave;
Virtue alone who dost revere,
Thy own reproach alone dost fear—
Approach this shrine, and worship here.

-Robert Burns, “Inscription for an Altar of Independence”

Algebraic Independence: If A is a set of real numbers, then one can say that its algebraic area of influence (over \mathbb{Q}, the set of rational numbers) is the set of all real roots of polynomial equations with coefficients in A \cup \mathbb{Q}, i.e., the set of all real numbers that are solutions to equations of the form:

r_kx^k + \ldots + r_1x + r_0 = 0,

where k is a natural number and r_0, \ldots, r_k are elements of A \cup \mathbb{Q}, not all of which are zero. With this definition, a real number s is algebraically independent (over \mathbb{Q}) from a set A \subseteq \mathbb{R} if s is not a root of any such polynomial equation, i.e., if s lies outside the algebraic area of influence of A. A set A \subseteq \mathbb{R} is algebraically independent (over \mathbb{Q}) if each element s \in A is algebraically independent from A \setminus \{s\}.
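For example, \sqrt{8} is not algebraically independent from the set \{\sqrt{2}\}: since \sqrt{2} \cdot \sqrt{8} = \sqrt{16} = 4, the number \sqrt{8} is a root of the equation \sqrt{2}x - 4 = 0, whose coefficients \sqrt{2} and -4 lie in \{\sqrt{2}\} \cup \mathbb{Q}. In particular, the 2-element set \{\sqrt{2}, \sqrt{8}\} is not algebraically independent over \mathbb{Q}.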

Fun Fact: Note that a 1-element set \{s\} is algebraically independent over \mathbb{Q} if and only if s is transcendental, i.e., is not a root of any nonzero polynomial with rational coefficients. \pi and e are famously both transcendental numbers, yet it is still unknown whether the 2-element set \{\pi, e\} is algebraically independent over \mathbb{Q}. It is not even known whether \pi + e is irrational!

Logical Independence: Let T be a consistent set of axioms, i.e., a set of sentences from which one cannot derive a contradiction. We can say that the logical area of influence of T is the set of sentences that can be proven from T, together with their negations. In other words, it is the set of sentences which, if one takes the sentences in T as axioms, can be proven either true or false. A sentence \phi is then logically independent from T if neither \phi nor its negation can be proven from the sentences in T.

Logical independence is naturally of great importance in the study of the foundations of mathematics. Much of modern set theory, and much of my personal mathematical research, involves statements that are independent from the Zermelo-Fraenkel Axioms with Choice (ZFC), which is a prominent set of axioms for set theory and indeed for all of mathematics. These are statements, then, that in our predominant mathematical framework can neither be proven true nor proven false. The most well-known of these is the Continuum Hypothesis (CH), which, in one of its formulations, is the statement that there are no infinite cardinalities strictly between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. To prove that CH is independent from ZFC, one must produce both a mathematical structure that satisfies ZFC in which CH is true (which Kurt Gödel did in 1938) and a mathematical structure that satisfies ZFC in which CH is false (which Paul Cohen did in 1963). Since Cohen's result in 1963, a great number of natural mathematical statements have been proven to be independent from ZFC.

In our next post, we will consider a logical independence phenomenon of a somewhat simpler nature: the independence of Euclid’s parallel postulate from Euclid’s four other axioms for plane geometry, which will lead us to considerations of exotic non-Euclidean geometries.

Fun Fact: In the setting of general vector spaces, which generalize the vector spaces \mathbb{R}^n from the above discussion of linear independence, a basis is a linearly independent set whose span (what we referred to as its linear area of influence) is the entire vector space. A basis for \mathbb{R}^n is thus any linearly independent set of size n. Using the Axiom of Choice, one can prove that every vector space has a basis. However, there are models of ZF (i.e., the Zermelo-Fraenkel Axioms without Choice) in which there are (infinite-dimensional) vector spaces without a basis. Thus, the statement, “Every vector space has a basis,” is logically independent from ZF.

Solitude is independence.

-Hermann Hesse, Steppenwolf
