Once the dimension of a vector space is known, determining whether a set of vectors is linearly independent, or whether it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for $\complex{m}$.

Goldilocks' Theorem

We begin with a useful theorem that we will need later, and in the proof of the main theorem in this subsection. This theorem says that we can extend linearly independent sets, one vector at a time, by adding vectors from outside the span of the linearly independent set, all the while preserving the linear independence of the set.

Theorem ELIS (Extending Linearly Independent Sets) Suppose $V$ is a vector space and $S$ is a linearly independent set of vectors from $V$. Suppose $\vect{w}$ is a vector such that $\vect{w}\not\in\spn{S}$. Then the set $S^\prime=S\cup\set{\vect{w}}$ is linearly independent.

Proof.  
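
Though the theorem is about abstract vector spaces, it is easy to check numerically in a column vector space. Here is a minimal sketch in Python with SymPy (the vectors are made up for illustration and are not part of the text's development): a set $S$ of two linearly independent vectors in $\complex{4}$ is extended by a vector $\vect{w}$ outside $\spn{S}$, and the enlarged set is again linearly independent.

```python
from sympy import Matrix

# A linearly independent set S = {v1, v2} in C^4 (vectors chosen for illustration)
v1 = Matrix([1, 0, 2, 0])
v2 = Matrix([0, 1, 1, 0])
S = Matrix.hstack(v1, v2)
assert S.rank() == 2                      # S is linearly independent

# A vector w outside span(S): adjoining it increases the rank
w = Matrix([0, 0, 0, 1])
S_prime = Matrix.hstack(S, w)
assert S_prime.rank() == S.rank() + 1     # so w is not in span(S)

# Theorem ELIS: S' = S union {w} is again linearly independent,
# i.e. its rank equals its number of vectors (columns)
assert S_prime.rank() == S_prime.shape[1]
```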

In the story Goldilocks and the Three Bears, the young girl Goldilocks visits the empty house of the three bears while out walking in the woods. One bowl of porridge is too hot, another too cold, the third is just right. One chair is too hard, one too soft, the third is just right. So it is with sets of vectors --- some are too big (linearly dependent), some are too small (they don't span), and some are just right (bases). Here's Goldilocks' Theorem.

Theorem G (Goldilocks) Suppose that $V$ is a vector space of dimension $t$. Let $S=\set{\vectorlist{v}{m}}$ be a set of vectors from $V$. Then

  1. If $m>t$, then $S$ is linearly dependent.
  2. If $m < t$, then $S$ does not span $V$.
  3. If $m=t$ and $S$ is linearly independent, then $S$ spans $V$.
  4. If $m=t$ and $S$ spans $V$, then $S$ is linearly independent.

Proof.  

There is a tension in the construction of a basis. Make a set too big and you will end up with relations of linear dependence among the vectors. Make a set too small and you will not have enough raw material to span the entire vector space. Make a set just the right size (the dimension) and you only need to have linear independence or spanning, and you get the other property for free. These roughly-stated ideas are made precise by Theorem G.

The structure and proof of this theorem also deserve comment. The hypotheses seem innocuous. We presume we know the dimension of the vector space in hand, then we mostly just look at the size of the set $S$. From this we get big conclusions about spanning and linear independence. Each of the four proofs relies on ultimately contradicting Theorem SSLD, so in a way we could think of this entire theorem as a corollary of Theorem SSLD. (See technique LC.) The proofs of the third and fourth parts parallel each other in style (add $\vect{w}$, toss $\vect{v}_k$) and then turn on Theorem ELIS before contradicting Theorem SSLD.

Theorem G is useful in both concrete examples and as a tool in other proofs. We will use it often to bypass verifying linear independence or spanning.
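
As an illustration of this shortcut, here is a small SymPy sketch (using column vectors in $\complex{4}$, chosen for convenience rather than the polynomial and matrix spaces of the examples below): since $\dimension{\complex{4}}=4$, once a set of four vectors is verified to be linearly independent, Theorem G says it must also span, so it is a basis with no further work.

```python
from sympy import Matrix

# Four vectors in C^4, a space of dimension t = 4, so the size is "just right"
vectors = [Matrix([1, 2, 0, 1]),
           Matrix([0, 1, 1, 0]),
           Matrix([1, 0, 1, 1]),
           Matrix([0, 0, 0, 2])]
A = Matrix.hstack(*vectors)

# Linear independence: the rank equals the number of vectors
if A.rank() == len(vectors):
    # Theorem G, part 3: size = dimension + linear independence => spanning,
    # so the set is a basis of C^4 and no spanning argument is needed
    print("The set is linearly independent, hence a basis of C^4")
```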

Example BPR: Bases for $P_n$, reprised.  

Example BDM22: Basis by dimension in $M_{22}$.  

Example SVP4: Sets of vectors in $P_4$.  

A simple consequence of Theorem G is the observation that proper subspaces have strictly smaller dimension. This may seem intuitively obvious, but it still requires proof, and we will cite this result later.

Theorem PSSD (Proper Subspaces have Smaller Dimension) Suppose that $U$ and $V$ are subspaces of the vector space $W$, such that $U\subsetneq V$. Then $\dimension{U} < \dimension{V}$.

Proof.  

The final theorem of this subsection is an extremely powerful tool for establishing the equality of two sets that are subspaces. Notice that the hypotheses include the equality of two integers (dimensions) while the conclusion is the equality of two sets (subspaces). It is the extra "structure" of a vector space and its dimension that makes possible this huge leap from an integer equality to a set equality.

Theorem EDYES (Equal Dimensions Yields Equal Subspaces) Suppose that $U$ and $V$ are subspaces of the vector space $W$, such that $U\subseteq V$ and $\dimension{U}=\dimension{V}$. Then $U=V$.

Proof.  

Ranks and Transposes

We now prove one of the most surprising theorems about matrices. Notice the paucity of hypotheses compared to the precision of the conclusion.

Theorem RMRT (Rank of a Matrix is the Rank of the Transpose) Suppose $A$ is an $m\times n$ matrix. Then $\rank{A}=\rank{\transpose{A}}$.

Proof.  

This says that the row space and the column space of a matrix have the same dimension, which should be very surprising. It does not say that the column space and the row space are identical. Indeed, if the matrix is not square, then the sizes (number of slots) of the vectors in each space are different, so the two sets are not even comparable.

It is not hard to construct your own examples of matrices that illustrate Theorem RMRT, since it applies equally well to any matrix. Grab a matrix, row-reduce it, count the nonzero rows or the leading 1's. That's the rank. Transpose the matrix, row-reduce that, count the nonzero rows or the leading 1's. That's the rank of the transpose. The theorem says the two will be equal. Here's an example anyway.
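
The same computation can be done mechanically. Below is a hypothetical check in Python with SymPy; the particular $3\times 5$ matrix is made up, and any matrix you care to type in will do.

```python
from sympy import Matrix

# A made-up 3 x 5 matrix; any matrix will do
A = Matrix([[1, 2, 0, 1, 3],
            [2, 4, 1, 0, 5],
            [1, 2, 1, -1, 2]])

# Row-reduce A and count the pivot columns (the leading 1's)
rank_A = len(A.rref()[1])

# Do the same for the transpose
rank_At = len(A.T.rref()[1])

print(rank_A, rank_At)     # Theorem RMRT: the two counts agree
assert rank_A == rank_At
```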

Example RRTI: Rank, rank of transpose, Archetype I.  

Dimension of Four Subspaces

That the rank of a matrix equals the rank of its transpose is a fundamental and surprising result. However, by applying Theorem FS we can easily determine the dimensions of all four fundamental subspaces associated with a matrix.

Theorem DFS (Dimensions of Four Subspaces) Suppose that $A$ is an $m\times n$ matrix, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows. Then

  1. $\dimension{\nsp{A}}=n-r$
  2. $\dimension{\csp{A}}=r$
  3. $\dimension{\rsp{A}}=r$
  4. $\dimension{\lns{A}}=m-r$

Proof.  

There are many different ways to state and prove this result, and indeed, the equality of the dimensions of the column space and row space is just a slight expansion of Theorem RMRT. However, we have restricted our techniques to applying Theorem FS and then determining dimensions with bases provided by Theorem BNS and Theorem BRS. This provides an appealing symmetry to the results and the proof.
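
A quick computational sketch of Theorem DFS, again with SymPy and a made-up matrix: the sizes of the bases returned for the null space, column space, row space, and left null space (computed here as the null space of the transpose) match $n-r$, $r$, $r$, and $m-r$.

```python
from sympy import Matrix

# The same made-up 3 x 5 matrix used above
A = Matrix([[1, 2, 0, 1, 3],
            [2, 4, 1, 0, 5],
            [1, 2, 1, -1, 2]])
m, n = A.shape
r = A.rank()                              # number of nonzero rows in the RREF of A

assert len(A.nullspace()) == n - r        # dim N(A) = n - r
assert len(A.columnspace()) == r          # dim C(A) = r
assert len(A.rowspace()) == r             # dim R(A) = r
assert len(A.T.nullspace()) == m - r      # dim L(A) = m - r (null space of the transpose)
```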

Direct Sums

Some of the more advanced ideas in linear algebra are closely related to decomposing (technique DC) vector spaces into direct sums of subspaces. With our previous results about bases and dimension, now is the right time to state and collect a few results about direct sums, though we will only mention these results in passing until we get to Section NLT:Nilpotent Linear Transformations, where they will get a heavy workout.

A direct sum is a shorthand way to describe the relationship between a vector space and two, or more, of its subspaces. As we will use it, it is not a way to construct new vector spaces from others.

Definition DS (Direct Sum) Suppose that $V$ is a vector space with two subspaces $U$ and $W$ such that for every $\vect{v}\in V$,

  1. There exist vectors $\vect{u}\in U$, $\vect{w}\in W$ such that $\vect{v}=\vect{u}+\vect{w}$.
  2. If $\vect{v}=\vect{u}_1+\vect{w}_1$ and $\vect{v}=\vect{u}_2+\vect{w}_2$, where $\vect{u}_1,\,\vect{u}_2\in U$ and $\vect{w}_1,\,\vect{w}_2\in W$, then $\vect{u}_1=\vect{u}_2$ and $\vect{w}_1=\vect{w}_2$.

Then $V$ is the direct sum of $U$ and $W$ and we write $V=U\ds W$.
(This definition contains Notation DS.)

Informally, when we say $V$ is the direct sum of the subspaces $U$ and $W$, we are saying that each vector of $V$ can always be expressed as the sum of a vector from $U$ and a vector from $W$, and this expression can only be accomplished in one way (i.e. uniquely). This statement should begin to feel something like our definitions of nonsingular matrices (Definition NM) and linear independence (Definition LI). It should not be hard to imagine the natural extension of this definition to the case of more than two subspaces. Could you provide a careful definition of $V=U_1\ds U_2\ds U_3\ds ...\ds U_m$ (exercise PD.M50)?
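
Here is a small, hypothetical illustration in SymPy (distinct from Example SDS below): inside $\complex{3}$, choose $U$ and $W$ whose basis vectors taken together form a basis of $\complex{3}$. Then any $\vect{v}$ has exactly one expression as $\vect{u}+\vect{w}$, found by solving a linear system.

```python
from sympy import Matrix

# Hypothetical subspaces of C^3: U = span{e1, e2}, W = span{(1,1,1)}
U_basis = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]
W_basis = [Matrix([1, 1, 1])]

# Stacking all basis vectors as columns gives an invertible matrix, so every
# v in C^3 is u + w for exactly one choice of coefficients (Definition DS)
B = Matrix.hstack(*U_basis, *W_basis)
assert B.det() != 0

v = Matrix([4, 7, 2])
coeffs = B.solve(v)                       # the unique coefficients
u = coeffs[0] * U_basis[0] + coeffs[1] * U_basis[1]
w = coeffs[2] * W_basis[0]
assert u + w == v
print(u.T, w.T)                           # u = (2, 5, 0), w = (2, 2, 2)
```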

Example SDS: Simple direct sum.  

Example SDS is easy to generalize into a theorem.

Theorem DSFB (Direct Sum From a Basis) Suppose that $V$ is a vector space with a basis $B=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,...,\,\vect{v}_n}$ and $m\leq n$. Define

\begin{align*} U&=\spn{\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,...,\,\vect{v}_m}} & W&=\spn{\set{\vect{v}_{m+1},\,\vect{v}_{m+2},\,\vect{v}_{m+3},\,...,\,\vect{v}_n}} \end{align*}

Then $V=U\ds W$.

Proof.  

Given one subspace of a vector space, we can always find another subspace that will pair with the first to form a direct sum. The main idea of this theorem, and its proof, is to extend a linearly independent subset into a basis with repeated applications of Theorem ELIS.

Theorem DSFOS (Direct Sum From One Subspace) Suppose that $U$ is a subspace of the vector space $V$. Then there exists a subspace $W$ of $V$ such that $V=U\ds W$.

Proof.  

There are several different ways to define a direct sum. Our next two theorems give equivalences (technique E) for direct sums, and therefore could have been employed as definitions. The first should further cement the notion that a direct sum has some connection with linear independence.

Theorem DSZV (Direct Sums and Zero Vectors) Suppose $U$ and $W$ are subspaces of the vector space $V$. Then $V=U\ds W$ if and only if

  1. For every $\vect{v}\in V$, there exist vectors $\vect{u}\in U$, $\vect{w}\in W$ such that $\vect{v}=\vect{u}+\vect{w}$.
  2. Whenever $\zerovector=\vect{u}+\vect{w}$ with $\vect{u}\in U$ and $\vect{w}\in W$, then $\vect{u}=\vect{w}=\zerovector$.

Proof.  

Our second equivalence lends further credence to calling a direct sum a decomposition. The two subspaces of a direct sum have no (nontrivial) elements in common.

Theorem DSZI (Direct Sums and Zero Intersection) Suppose $U$ and $W$ are subspaces of the vector space $V$. Then $V=U\ds W$ if and only if

  1. For every $\vect{v}\in V$, there exist vectors $\vect{u}\in U$, $\vect{w}\in W$ such that $\vect{v}=\vect{u}+\vect{w}$.
  2. $U\cap W=\set{\zerovector}$.

Proof.  

If the statement of Theorem DSZV did not remind you of linear independence, the next theorem should establish the connection.

Theorem DSLI (Direct Sums and Linear Independence) Suppose $U$ and $W$ are subspaces of the vector space $V$ with $V=U\ds W$. Suppose that $R$ is a linearly independent subset of $U$ and $S$ is a linearly independent subset of $W$. Then $R\cup S$ is a linearly independent subset of $V$.

Proof.  

Our last theorem in this collection will go some way towards explaining the word "sum" in the moniker "direct sum," while also partially explaining why these results appear in a section devoted to a discussion of dimension.

Theorem DSD (Direct Sums and Dimension) Suppose $U$ and $W$ are subspaces of the vector space $V$ with $V=U\ds W$. Then $\dimension{V}=\dimension{U}+\dimension{W}$.

Proof.  
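
Continuing the hypothetical $\complex{3}$ example from above, here is a SymPy check of Theorem DSD, together with the zero-intersection condition of Theorem DSZI via the standard dimension formula $\dimension{U\cap W}=\dimension{U}+\dimension{W}-\dimension{U+W}$ (a fact not proved in this section).

```python
from sympy import Matrix

# The same hypothetical decomposition of C^3 as before
U_basis = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]
W_basis = [Matrix([1, 1, 1])]

dim_U = Matrix.hstack(*U_basis).rank()
dim_W = Matrix.hstack(*W_basis).rank()
dim_sum = Matrix.hstack(*U_basis, *W_basis).rank()   # dim(U + W)

# Zero intersection (Theorem DSZI): dim(U n W) = dim U + dim W - dim(U + W) = 0
assert dim_U + dim_W - dim_sum == 0

# Theorem DSD: the dimensions add up to dim C^3 = 3
assert dim_U + dim_W == 3
```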

There is a certain appealing symmetry in the previous proof, where both linear independence and spanning properties of the bases are used, both of the first two conclusions of Theorem G are employed, and both conditions of Theorem DSZI are quoted.

One final theorem tells us that we can successively decompose direct sums into sums of smaller and smaller subspaces.

Theorem RDS (Repeated Direct Sums) Suppose $V$ is a vector space with subspaces $U$ and $W$ with $V=U\ds W$. Suppose that $X$ and $Y$ are subspaces of $W$ with $W=X\ds Y$. Then $V=U\ds X\ds Y$.

Proof.  

Remember that when we write $V=U\ds W$ there always needs to be a "superspace," in this case $V$. The statement $U\ds W$ is meaningless. Writing $V=U\ds W$ is simply a shorthand for a somewhat complicated relationship between $V$, $U$ and $W$, as described in the two conditions of Definition DS, or Theorem DSZV, or Theorem DSZI. Theorem DSFB and Theorem DSFOS give us sure-fire ways to build direct sums, while Theorem DSLI, Theorem DSD and Theorem RDS tell us interesting properties of direct sums. This subsection has been long on theorems and short on examples. If we were to use the term "lemma" we might have chosen to label some of these results as such, since they will be important tools in other proofs, but may not have much interest on their own (see technique LC). We will be referencing these results heavily in later sections, and will remind you then to come back for a second look.