We have seen in Section SD: Similarity and Diagonalization that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.

Triangular Matrices

An upper, or lower, triangular matrix is exactly what it sounds like it should be, but here are the two relevant definitions.

Definition UTM (Upper Triangular Matrix) The $n\times n$ square matrix $A$ is upper triangular if $\matrixentry{A}{ij} =0$ whenever $i>j$.

Definition LTM (Lower Triangular Matrix) The $n\times n$ square matrix $A$ is lower triangular if $\matrixentry{A}{ij} =0$ whenever $i < j$.
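
For example, with $n=3$, the matrix $A$ below is upper triangular, since every entry below the diagonal is zero, while $B$ is lower triangular, since every entry above the diagonal is zero.
\begin{align*}
A=\begin{bmatrix} 2 & -1 & 3 \\ 0 & 5 & 1 \\ 0 & 0 & 4 \end{bmatrix}
&&
B=\begin{bmatrix} 2 & 0 & 0 \\ -1 & 5 & 0 \\ 3 & 1 & 4 \end{bmatrix}
\end{align*}
Notice that $B$ is the transpose of $A$, a hint at why statements about one type of triangular matrix convert so readily into statements about the other.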

Obviously, properties of lower triangular matrices will have analogues for upper triangular matrices. Rather than stating two very similar theorems, we will say that matrices are "triangular of the same type" as a convenient shorthand to cover both possibilities and then give a proof for just one type.

Theorem PTMT (Product of Triangular Matrices is Triangular) Suppose that $A$ and $B$ are square matrices of size $n$ that are triangular of the same type. Then $AB$ is also triangular of that type.

Proof.  
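
As a quick check of Theorem PTMT, here is a product of two upper triangular matrices of size 3,
\begin{align*}
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}
\begin{bmatrix} 2 & 0 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 5 \end{bmatrix}
=
\begin{bmatrix} 2 & 2 & 22 \\ 0 & 4 & 37 \\ 0 & 0 & 30 \end{bmatrix}
\end{align*}
which is again upper triangular. Notice also that each diagonal entry of the product is the product of the corresponding diagonal entries of the factors, an observation consistent with the statement about diagonal entries in the next theorem.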

The inverse of a triangular matrix is triangular, of the same type.

Theorem ITMT (Inverse of a Triangular Matrix is Triangular) Suppose that $A$ is a nonsingular matrix of size $n$ that is triangular. Then the inverse of $A$, $\inverse{A}$, is triangular of the same type. Furthermore, the diagonal entries of $\inverse{A}$ are the reciprocals of the corresponding diagonal entries of $A$. More precisely, $\matrixentry{\inverse{A}}{ii}=\matrixentry{A}{ii}^{-1}$.

Proof.  
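
As a quick instance of Theorem ITMT, the nonsingular upper triangular matrix $A$ below has an upper triangular inverse, and the diagonal entries of $\inverse{A}$ are the reciprocals $2^{-1}$ and $4^{-1}$ of the diagonal entries of $A$,
\begin{align*}
A=\begin{bmatrix} 2 & 3 \\ 0 & 4 \end{bmatrix}
&&
\inverse{A}=\begin{bmatrix} \frac{1}{2} & -\frac{3}{8} \\ 0 & \frac{1}{4} \end{bmatrix}
\end{align*}
A direct check confirms that $A\inverse{A}$ is the $2\times 2$ identity matrix: for instance, the $(1,2)$ entry of the product is $2\left(-\frac{3}{8}\right)+3\left(\frac{1}{4}\right)=0$.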

Upper Triangular Matrix Representation

Not every matrix is diagonalizable, but every linear transformation has a matrix representation that is an upper triangular matrix, and the basis that achieves this representation is especially pleasing. Here's the theorem.

Theorem UTMR (Upper Triangular Matrix Representation) Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation. Then there is a basis $B$ for $V$ such that the matrix representation of $T$ relative to $B$, $\matrixrep{T}{B}{B}$, is an upper triangular matrix. Each diagonal entry is an eigenvalue of $T$, and if $\lambda$ is an eigenvalue of $T$, then $\lambda$ occurs $\algmult{T}{\lambda}$ times on the diagonal.

Proof.  

A key step in this proof was the construction of the subspace $W$ with dimension strictly less than that of $V$. This required an eigenvalue/eigenvector pair, which was guaranteed to us by Theorem EMHE. Digging deeper, the proof of Theorem EMHE requires that we can factor polynomials completely, into linear factors. This will not always happen if our set of scalars is the reals, $\real{\null}$. So this is our final explanation of our choice of the complex numbers, $\complexes$, as our set of scalars. In $\complexes$ polynomials factor completely, so every matrix has at least one eigenvalue, and an inductive argument will get us to upper triangular matrix representations.
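
For a small, concrete instance of Theorem UTMR, define $\ltdefn{T}{\complex{2}}{\complex{2}}$ by $T\left(\vect{x}\right)=A\vect{x}$ where
\begin{align*}
A=\begin{bmatrix} 2 & 1 \\ -1 & 4 \end{bmatrix}
\end{align*}
The characteristic polynomial of $A$ is $\lambda^2-6\lambda+9=(\lambda-3)^2$, so $\lambda=3$ is the only eigenvalue of $T$ and $\algmult{T}{3}=2$, yet the eigenspace is spanned by the single vector $\begin{bmatrix}1\\1\end{bmatrix}$, so $T$ is not diagonalizable. Extending that eigenvector to a basis still produces an upper triangular representation,
\begin{align*}
B=\left\{\begin{bmatrix}1\\1\end{bmatrix},\ \begin{bmatrix}0\\1\end{bmatrix}\right\}
&&
\matrixrep{T}{B}{B}=\begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}
\end{align*}
since $T\begin{bmatrix}1\\1\end{bmatrix}=3\begin{bmatrix}1\\1\end{bmatrix}+0\begin{bmatrix}0\\1\end{bmatrix}$ and $T\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}1\\4\end{bmatrix}=1\begin{bmatrix}1\\1\end{bmatrix}+3\begin{bmatrix}0\\1\end{bmatrix}$. As the theorem predicts, the eigenvalue $\lambda=3$ appears on the diagonal $\algmult{T}{3}=2$ times.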

In the case of linear transformations defined on $\complex{m}$, we can use the inner product (Definition IP) profitably to fine-tune the basis that yields an upper triangular matrix representation. Recall that the adjoint of matrix $A$ (Definition A) is written as $\adjoint{A}$.

Theorem OBUTR (Orthonormal Basis for Upper Triangular Representation) Suppose that $A$ is a square matrix. Then there is a unitary matrix $U$, and an upper triangular matrix $T$, such that

\begin{align*} \adjoint{U}AU=T \end{align*}

and $T$ has the eigenvalues of $A$ as the entries of the diagonal.

Proof.  
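
Returning to the matrix $A=\begin{bmatrix} 2 & 1 \\ -1 & 4 \end{bmatrix}$ used to illustrate Theorem UTMR, we can trade the basis employed there for an orthonormal one. Normalizing the eigenvector $\begin{bmatrix}1\\1\end{bmatrix}$ and completing it to an orthonormal basis of $\complex{2}$ produces a unitary matrix $U$, and then
\begin{align*}
U=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
&&
\adjoint{U}AU=\begin{bmatrix} 3 & 2 \\ 0 & 3 \end{bmatrix}
\end{align*}
an upper triangular matrix with the lone eigenvalue $\lambda=3$ of $A$ filling the diagonal, exactly as Theorem OBUTR guarantees.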

Normal Matrices

Normal matrices comprise a broad class of interesting matrices, many of which we have met already. But they are most interesting since they define exactly which matrices we can diagonalize via a unitary matrix. This is the upcoming Theorem OD. Here's the definition.

Definition NRML (Normal Matrix) The square matrix $A$ is normal if $\adjoint{A}A=A\adjoint{A}$.

So a normal matrix commutes with its adjoint. Part of the beauty of this definition is that it includes many other types of matrices. A diagonal matrix will commute with its adjoint, since the adjoint is again diagonal and the entries are just conjugates of the entries of the original diagonal matrix. A Hermitian (self-adjoint) matrix (Definition HM) will trivially commute with its adjoint, since the two matrices are the same. A real, symmetric matrix is Hermitian, so these matrices are also normal. A unitary matrix (Definition UM) has its adjoint as its inverse, and inverses commute (Theorem OSIS), so unitary matrices are normal. Another class of normal matrices is the skew-Hermitian matrices, those satisfying $\adjoint{A}=-A$, since such a matrix also commutes with its adjoint, $\adjoint{A}A=-A^2=A\adjoint{A}$. However, these broad descriptions still do not capture all of the normal matrices, as the next example shows.

Example ANM: A normal matrix.  
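
To verify the earlier claim about skew-Hermitian matrices in a specific case, the matrix $A$ below satisfies $\adjoint{A}=-A$, and a direct computation confirms that it is normal, even though it is neither diagonal, nor Hermitian, nor unitary,
\begin{align*}
A=\begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}
&&
\adjoint{A}A=\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}=A\adjoint{A}
\end{align*}
We will return to this matrix shortly.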

Orthonormal Diagonalization

A diagonal matrix is very easy to work with in matrix multiplication (Example HPDM) and an orthonormal basis also has many advantages (Theorem COB). How about converting a matrix to a diagonal matrix through a similarity transformation using a unitary matrix (i.e. build a diagonal matrix representation relative to an orthonormal basis)? That'd be fantastic! When can we do this? We can always accomplish this feat when the matrix is normal, and normal matrices are the only ones that behave this way. Here's the theorem.

Theorem OD (Orthonormal Diagonalization) Suppose that $A$ is a square matrix. Then there is a unitary matrix $U$ and a diagonal matrix $D$, with diagonal entries equal to the eigenvalues of $A$, such that $\adjoint{U}AU=D$ if and only if $A$ is a normal matrix.

Proof.  
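
To see Theorem OD in action, return to the skew-Hermitian (hence normal) matrix $A=\begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}$ from above. Its eigenvalues are $\lambda=2i$ and $\lambda=-2i$, with eigenvectors $\begin{bmatrix}1\\i\end{bmatrix}$ and $\begin{bmatrix}1\\-i\end{bmatrix}$ respectively. These eigenvectors are orthogonal, and after normalization they become the columns of a unitary matrix,
\begin{align*}
U=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}
&&
\adjoint{U}AU=\begin{bmatrix} 2i & 0 \\ 0 & -2i \end{bmatrix}=D
\end{align*}
a diagonal matrix whose diagonal entries are the eigenvalues of $A$. Note that $A$ is not Hermitian, so its eigenvalues need not be real; normality is the hypothesis doing the work here.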

We can rearrange the conclusion of this theorem to read $A=UD\adjoint{U}$. Recall that a unitary matrix can be viewed as a geometry-preserving transformation (isometry), or more loosely as a rotation of sorts. Then a matrix-vector product, $A\vect{x}$, can be viewed instead as a sequence of three transformations. $\adjoint{U}$ is unitary, so it is a rotation. Since $D$ is diagonal, it just multiplies each entry of a vector by a scalar. Diagonal entries that are positive or negative, with absolute values bigger or smaller than 1, evoke descriptions like reflection, expansion and contraction. Generally we can say that $D$ "stretches" a vector in each component. Final multiplication by $U$ undoes (inverts) the rotation performed by $\adjoint{U}$. So a normal matrix is a rotation-stretch-rotation transformation.
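
For instance, with the matrix $A$, unitary matrix $U$ and diagonal matrix $D$ of the example above, and $\vect{x}=\begin{bmatrix}1\\0\end{bmatrix}$, the three stages are
\begin{align*}
\adjoint{U}\vect{x}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
&&
D\adjoint{U}\vect{x}=\frac{1}{\sqrt{2}}\begin{bmatrix} 2i \\ -2i \end{bmatrix}
&&
UD\adjoint{U}\vect{x}=\begin{bmatrix} 0 \\ -2 \end{bmatrix}
\end{align*}
and the final result agrees with computing $A\vect{x}$ directly.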

The orthonormal basis formed from the columns of $U$ can be viewed as a system of mutually perpendicular axes. The rotation by $\adjoint{U}$ allows the transformation by $A$ to be replaced by the simple transformation $D$ along these axes, and then $U$ brings the result back to the original coordinate system. For this reason Theorem OD is known as the Principal Axis Theorem.

The columns of the unitary matrix in Theorem OD create an especially nice basis for use with the normal matrix. We record this observation as a theorem.

Theorem OBNM (Orthonormal Bases and Normal Matrices) Suppose that $A$ is a normal matrix of size $n$. Then there is an orthonormal basis of $\complex{n}$ composed of eigenvectors of $A$.

Proof.  
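
In the example following Theorem OD, the columns of the unitary matrix $U$,
\begin{align*}
\left\{\frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ i \end{bmatrix},\ \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -i \end{bmatrix}\right\}
\end{align*}
are precisely such a set: an orthonormal basis of $\complex{2}$ composed of eigenvectors of the normal matrix $A$.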

In a vague way Theorem OBNM is an improvement on Theorem HMOE, which said that eigenvectors of a Hermitian matrix for different eigenvalues are always orthogonal. Hermitian matrices are normal, and we see that we can find at least one basis where every pair of eigenvectors is orthogonal. Notice that this is not a generalization, since Theorem HMOE states a weak result which applies to many (but not all) pairs of eigenvectors, while Theorem OBNM is a seemingly stronger result, but only asserts that there is one collection of eigenvectors with the stronger property.