In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM:Matrix Inverses and Nonsingular Matrices, Section OD:Orthonormal Diagonalization). Because we have chosen to use $\complexes$ as our set of scalars, this section is a bit more, uh, ... complex than it would be for the real numbers. We'll explain as we go along how things get easier for the real numbers ${\mathbb R}$. If you haven't already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO:Complex Number Operations. With that done, we can extend the basics of complex number arithmetic to our study of vectors in $\complex{m}$.

Complex Arithmetic and Vectors

We know how the addition and multiplication of complex numbers are employed in defining the operations for vectors in $\complex{m}$ (Definition CVA and Definition CVSM). We can also extend the idea of the conjugate to vectors.

Definition CCCV (Complex Conjugate of a Column Vector) Suppose that $\vect{u}$ is a vector from $\complex{m}$. Then the conjugate of the vector, $\conjugate{\vect{u}}$, is defined by

\begin{align*} \vectorentry{\conjugate{\vect{u}}}{i} &=\conjugate{\vectorentry{\vect{u}}{i}} &&\text{$1\leq i\leq m$} \end{align*}
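In words, we conjugate the vector entry by entry. For instance (with a vector invented purely for illustration, written with a standard bmatrix),

\begin{align*} \vect{u}=\begin{bmatrix}2+3i\\-1\\4-6i\end{bmatrix} && \conjugate{\vect{u}}=\begin{bmatrix}2-3i\\-1\\4+6i\end{bmatrix} \end{align*}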

With this definition we can show that the conjugate of a column vector behaves as we would expect with regard to vector addition and scalar multiplication.

Theorem CRVA (Conjugation Respects Vector Addition) Suppose $\vect{x}$ and $\vect{y}$ are two vectors from $\complex{m}$. Then \begin{equation*} \conjugate{\vect{x}+\vect{y}}=\conjugate{\vect{x}}+\conjugate{\vect{y}} \end{equation*}

Proof.  

Theorem CRSM (Conjugation Respects Vector Scalar Multiplication) Suppose $\vect{x}$ is a vector from $\complex{m}$, and $\alpha\in\complexes$ is a scalar. Then \begin{equation*} \conjugate{\alpha\vect{x}}=\conjugate{\alpha}\,\conjugate{\vect{x}} \end{equation*}

Proof.  

These two theorems together tell us how we can "push" complex conjugation through linear combinations.
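For example, for scalars $\alpha,\,\beta\in\complexes$ and vectors $\vect{x},\,\vect{y}\in\complex{m}$, Theorem CRVA followed by Theorem CRSM gives

\begin{equation*} \conjugate{\alpha\vect{x}+\beta\vect{y}} =\conjugate{\alpha\vect{x}}+\conjugate{\beta\vect{y}} =\conjugate{\alpha}\,\conjugate{\vect{x}}+\conjugate{\beta}\,\conjugate{\vect{y}} \end{equation*}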

Inner Products

Definition IP (Inner Product) Given the vectors $\vect{u},\,\vect{v}\in\complex{m}$ the inner product of $\vect{u}$ and $\vect{v}$ is the scalar quantity in $\complex{\null}$, \begin{equation*} \innerproduct{\vect{u}}{\vect{v}}= \vectorentry{\vect{u}}{1}\conjugate{\vectorentry{\vect{v}}{1}}+ \vectorentry{\vect{u}}{2}\conjugate{\vectorentry{\vect{v}}{2}}+ \vectorentry{\vect{u}}{3}\conjugate{\vectorentry{\vect{v}}{3}}+ \cdots+ \vectorentry{\vect{u}}{m}\conjugate{\vectorentry{\vect{v}}{m}} = \sum_{i=1}^{m}\vectorentry{\vect{u}}{i}\conjugate{\vectorentry{\vect{v}}{i}} \end{equation*}

This operation is a bit different in that we begin with two vectors but produce a scalar. Computing one is straightforward.

Example CSIP: Computing some inner products.  

In the case where the entries of our vectors are all real numbers (as in the second part of Example CSIP), the computation of the inner product may look familiar and be known to you as a dot product or scalar product. So you can view the inner product as a generalization of the scalar product to vectors from $\complex{m}$ (rather than ${\mathbb R}^m$).

Also, note that we have chosen to conjugate the entries of the second vector listed in the inner product, while many authors choose to conjugate the entries of the first vector. It really makes no difference which choice is made; it just requires that subsequent definitions and theorems are consistent with that choice. The conclusion of Theorem IPAC describes exactly the extent of the difference that results from this choice. But be careful as you read other treatments of the inner product, or its use in applications, and be sure you know ahead of time which choice has been made.
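If you wish to experiment numerically, here is a minimal sketch of Definition IP in Python with NumPy (not part of the text; the helper function and the vectors are invented for this illustration). Note that NumPy's built-in \texttt{vdot} conjugates its \emph{first} argument, the opposite of the convention adopted here, so either swap the arguments or write the conjugation out explicitly.

\begin{verbatim}
import numpy as np

def inner_product(u, v):
    # <u, v> = sum of u_i * conjugate(v_i)  (Definition IP:
    # conjugation applied to the entries of the second vector)
    u = np.asarray(u, dtype=complex)
    v = np.asarray(v, dtype=complex)
    return np.sum(u * np.conj(v))

u = [2 + 3j, 5 + 2j, -3 + 1j]
v = [1 + 2j, -4 + 5j, 0 + 5j]

print(inner_product(u, v))   # our convention: conjugate the second vector
print(np.vdot(v, u))         # NumPy conjugates its first argument, so swap
\end{verbatim}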

There are several quick theorems we can now prove, and they will each be useful later.

Theorem IPVA (Inner Product and Vector Addition) Suppose $\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}$. Then

\begin{align*} \text{1.}   \innerproduct{\vect{u}+\vect{v}}{\vect{w}}&=\innerproduct{\vect{u}}{\vect{w}}+\innerproduct{\vect{v}}{\vect{w}}\\ \text{2.}   \innerproduct{\vect{u}}{\vect{v}+\vect{w}}&=\innerproduct{\vect{u}}{\vect{v}}+\innerproduct{\vect{u}}{\vect{w}} \end{align*}

Proof.  

Theorem IPSM (Inner Product and Scalar Multiplication) Suppose $\vect{u},\,\vect{v}\in\complex{m}$ and $\alpha\in\complex{\null}$. Then

\begin{align*} \text{1.}   \innerproduct{\alpha\vect{u}}{\vect{v}}&=\alpha\innerproduct{\vect{u}}{\vect{v}}\\ \text{2.}   \innerproduct{\vect{u}}{\alpha\vect{v}}&=\conjugate{\alpha}\innerproduct{\vect{u}}{\vect{v}} \end{align*}

Proof.  

Theorem IPAC (Inner Product is Anti-Commutative) Suppose that $\vect{u}$ and $\vect{v}$ are vectors in $\complex{m}$. Then $\innerproduct{\vect{u}}{\vect{v}}=\conjugate{\innerproduct{\vect{v}}{\vect{u}}}$.

Proof.  

Norm

If we treat linear algebra in a more geometric fashion, the length of a vector occurs naturally, and it is what you would expect from its name. With complex numbers, we will define a similar function. Recall that if $c$ is a complex number, then $\modulus{c}$ denotes its modulus (Definition MCN).

Definition NV (Norm of a Vector) The norm of the vector $\vect{u}$ is the scalar quantity in $\complex{\null}$ \begin{equation*} \norm{\vect{u}}= \sqrt{ \modulus{\vectorentry{\vect{u}}{1}}^2+ \modulus{\vectorentry{\vect{u}}{2}}^2+ \modulus{\vectorentry{\vect{u}}{3}}^2+ \cdots+ \modulus{\vectorentry{\vect{u}}{m}}^2 } = \sqrt{\sum_{i=1}^{m}\modulus{\vectorentry{\vect{u}}{i}}^2} \end{equation*}

Computing a norm is also easy to do.

Example CNSV: Computing the norm of some vectors.  

Notice how the norm of a vector with real number entries is just the length of the vector. Inner products and norms are related by the following theorem.

Theorem IPN (Inner Products and Norms) Suppose that $\vect{u}$ is a vector in $\complex{m}$. Then $\norm{\vect{u}}^2=\innerproduct{\vect{u}}{\vect{u}}$.

Proof.  

When our vectors have entries only from the real numbers, Theorem IPN says that the dot product of a vector with itself is equal to the square of the length of the vector.
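As a quick numerical check of Theorem IPN (again an illustrative sketch in Python with NumPy, using a made-up vector), the squared norm and the inner product of a vector with itself agree:

\begin{verbatim}
import numpy as np

u = np.array([3 + 2j, 1 - 6j, 2 + 4j])

norm_u = np.sqrt(np.sum(np.abs(u) ** 2))   # Definition NV
ip_uu  = np.sum(u * np.conj(u))            # <u, u> per Definition IP

# <u, u> has zero imaginary part and equals ||u||^2 (Theorem IPN)
print(norm_u ** 2, ip_uu)
\end{verbatim}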

Theorem PIP (Positive Inner Products) Suppose that $\vect{u}$ is a vector in $\complex{m}$. Then $\innerproduct{\vect{u}}{\vect{u}}\geq 0$ with equality if and only if $\vect{u}=\zerovector$.

Proof.  

Notice that Theorem PIP contains three implications:

\begin{align*} \vect{u}\in\complex{m}&\Rightarrow\innerproduct{\vect{u}}{\vect{u}}\geq 0\\ \vect{u}=\zerovector&\Rightarrow\innerproduct{\vect{u}}{\vect{u}}=0\\ \innerproduct{\vect{u}}{\vect{u}}=0&\Rightarrow\vect{u}=\zerovector \end{align*}

The results contained in Theorem PIP are summarized by saying "the inner product is positive definite."

Orthogonal Vectors

"Orthogonal" is a generalization of "perpendicular." You may have used mutually perpendicular vectors in a physics class, or you may recall from a calculus class that perpendicular vectors have a zero dot product. We will now extend these ideas into the realm of higher dimensions and complex scalars.

Definition OV (Orthogonal Vectors) A pair of vectors, $\vect{u}$ and $\vect{v}$, from $\complex{m}$ are orthogonal if their inner product is zero, that is, $\innerproduct{\vect{u}}{\vect{v}}=0$.

Example TOV: Two orthogonal vectors.  

We extend this definition of orthogonality from pairs of vectors to whole sets by requiring every pair of vectors in the set to be orthogonal. Despite using the same word in both situations, careful thought about which objects you are working with will eliminate any source of confusion.

Definition OSV (Orthogonal Set of Vectors) Suppose that $S=\set{\vectorlist{u}{n}}$ is a set of vectors from $\complex{m}$. Then $S$ is an orthogonal set if every pair of different vectors from $S$ is orthogonal, that is $\innerproduct{\vect{u}_i}{\vect{u}_j}=0$ whenever $i\neq j$.

We now define the prototypical orthogonal set, which we will reference repeatedly.

Definition SUV (Standard Unit Vectors) Let $\vect{e}_j\in\complex{m}$, $1\leq j\leq m$ denote the column vectors defined by

\begin{align*} \vectorentry{\vect{e}_j}{i} &= \begin{cases} 0&\text{if $i\neq j$}\\ 1&\text{if $i=j$} \end{cases} \end{align*}

Then the set

\begin{align*} \set{\vectorlist{e}{m}}&=\setparts{\vect{e}_j}{1\leq j\leq m} \end{align*}

is the set of standard unit vectors in $\complex{m}$.

Notice that $\vect{e}_j$ is identical to column $j$ of the $m\times m$ identity matrix $I_m$ (Definition IM). This observation will often be useful. It is not hard to see that the set of standard unit vectors is an orthogonal set. We will reserve the notation $\vect{e}_i$ for these vectors.

Example SUVOS: Standard Unit Vectors are an Orthogonal Set.  

Example AOS: An orthogonal set.  

So far, this section has seen lots of definitions, and lots of theorems establishing unsurprising consequences of those definitions. But here is our first theorem that suggests that inner products and orthogonal vectors have some utility. It is also one of our first illustrations of how to arrive at linear independence as the conclusion of a theorem.

Theorem OSLI (Orthogonal Sets are Linearly Independent) Suppose that $S$ is an orthogonal set of nonzero vectors. Then $S$ is linearly independent.

Proof.  

Gram-Schmidt Procedure

The Gram-Schmidt Procedure is really a theorem. It says that if we begin with a linearly independent set of $p$ vectors, $S$, then we can do a number of calculations with these vectors and produce an orthogonal set of $p$ vectors, $T$, so that $\spn{S}=\spn{T}$. Given the large number of calculations involved, it is indeed a procedure, and one best carried out on a computer. However, it also has value in proofs, where we may on occasion wish to replace a linearly independent set by an orthogonal set.

Theorem GSP (Gram-Schmidt Procedure) Suppose that $S=\set{\vectorlist{v}{p}}$ is a linearly independent set of vectors in $\complex{m}$. Define the vectors $\vect{u}_i$, $1\leq i\leq p$ by \begin{equation*} \vect{u}_i=\vect{v}_i -\frac{\innerproduct{\vect{v}_i}{\vect{u}_1}}{\innerproduct{\vect{u}_1}{\vect{u}_1}}\vect{u}_1 -\frac{\innerproduct{\vect{v}_i}{\vect{u}_2}}{\innerproduct{\vect{u}_2}{\vect{u}_2}}\vect{u}_2 -\frac{\innerproduct{\vect{v}_i}{\vect{u}_3}}{\innerproduct{\vect{u}_3}{\vect{u}_3}}\vect{u}_3 -\cdots -\frac{\innerproduct{\vect{v}_i}{\vect{u}_{i-1}}}{\innerproduct{\vect{u}_{i-1}}{\vect{u}_{i-1}}}\vect{u}_{i-1} \end{equation*} If $T=\set{\vectorlist{u}{p}}$, then $T$ is an orthogonal set of nonzero vectors, and $\spn{T}=\spn{S}$.

Proof.  

Example GSTV: Gram-Schmidt of three vectors.  
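For readers who want to experiment, here is a minimal Python/NumPy transcription of the formula in Theorem GSP. It is an illustrative sketch only, written for clarity rather than numerical robustness; the input vectors are chosen just for this example, and the inner product conjugates the second vector as in Definition IP.

\begin{verbatim}
import numpy as np

def ip(u, v):
    # Inner product per Definition IP: conjugate the second vector
    return np.sum(u * np.conj(v))

def gram_schmidt(vectors):
    # Theorem GSP: from a linearly independent list, produce an
    # orthogonal list with the same span
    ortho = []
    for v in vectors:
        u = v.astype(complex)
        for w in ortho:
            u = u - (ip(v, w) / ip(w, w)) * w   # subtract projection onto w
        ortho.append(u)
    return ortho

S = [np.array([1, 1 + 1j, 1]),
     np.array([-1j, 1, 1 + 1j]),
     np.array([0, 1j, 1j])]

T = gram_schmidt(S)
# inner products of distinct vectors in T should be (numerically) zero
print([abs(ip(T[i], T[j])) for i in range(3) for j in range(i + 1, 3)])
\end{verbatim}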

We close with one final definition related to orthogonal vectors.

Definition ONS (OrthoNormal Set) Suppose $S=\set{\vectorlist{u}{n}}$ is an orthogonal set of vectors such that $\norm{\vect{u}_i}=1$ for all $1\leq i\leq n$. Then $S$ is an orthonormal set of vectors.

Once you have an orthogonal set of nonzero vectors, it is easy to convert it to an orthonormal set: multiply each vector by the reciprocal of its norm, and the resulting vector will have norm 1. This scaling of each vector will not affect the orthogonality properties (apply Theorem IPSM).
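Explicitly, if $\vect{u}_i\neq\zerovector$, then $\frac{1}{\norm{\vect{u}_i}}$ is a positive real number (so it equals its own conjugate), and Theorem IPSM together with Theorem IPN gives

\begin{equation*} \innerproduct{\frac{1}{\norm{\vect{u}_i}}\vect{u}_i}{\frac{1}{\norm{\vect{u}_i}}\vect{u}_i} =\frac{1}{\norm{\vect{u}_i}}\,\conjugate{\left(\frac{1}{\norm{\vect{u}_i}}\right)}\innerproduct{\vect{u}_i}{\vect{u}_i} =\frac{\norm{\vect{u}_i}^2}{\norm{\vect{u}_i}^2} =1 \end{equation*}

so the scaled vector does indeed have norm 1.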

Example ONTV: Orthonormal set, three vectors.  

Example ONFV: Orthonormal set, four vectors.  

We will see orthonormal sets again in Subsection MINM.UM:Matrix Inverses and Nonsingular Matrices: Unitary Matrices. They are intimately related to unitary matrices (Definition UM) through Theorem CUMOS. Some of the utility of orthonormal sets is captured by Theorem COB in Subsection B.OBC:Bases: Orthonormal Bases and Coordinates. Orthonormal sets appear once again in Section OD:Orthonormal Diagonalization where they are key in orthonormal diagonalization.