Further Linear Algebra. Chapter VI. Inner Product Spaces

Andrei Yafaev
Notice that a symmetric bilinear form is positive definite if and only if its canonical form (over $\mathbb{R}$) is $I_n$.

Clearly $x_1^2 + \cdots + x_n^2$ is positive definite on $\mathbb{R}^n$. Conversely, suppose $B$ is a basis such that the matrix with respect to $B$ is the canonical form, so each diagonal entry is $1$, $-1$ or $0$. For any basis vector $b_i$, positive definiteness gives $\langle b_i, b_i \rangle > 0$, and hence $\langle b_i, b_i \rangle = 1$. So the canonical form is $I_n$.
For all $u, v \in V$,
$$\langle u, v \rangle = \overline{\langle v, u \rangle}.$$

If $A$ is a Hermitian matrix then the following is a Hermitian form on $\mathbb{C}^n$:
$$\langle v, w \rangle = v^t A \overline{w}.$$

Notice that $\langle u, u \rangle = \overline{\langle u, u \rangle}$, so $\langle u, u \rangle \in \mathbb{R}$.
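The symmetry property is easy to test numerically. Below is a minimal sketch (Python with numpy; the particular matrix and vectors are arbitrary illustrative choices, not taken from these notes) checking that $\langle v, w \rangle = v^t A \overline{w}$ is a Hermitian form when $A$ is Hermitian.

```python
import numpy as np

# Illustrative Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)

def form(v, w):
    # <v, w> = v^t A conj(w), as in the definition above
    return v @ A @ w.conj()

rng = np.random.default_rng(0)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

assert np.isclose(form(v, w), np.conj(form(w, v)))  # <v,w> = conj(<w,v>)
assert abs(form(v, v).imag) < 1e-12                 # <v,v> is real
```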
Definition 1.4 By an inner product space we shall mean one of the following:

either a finite dimensional vector space $V$ over $\mathbb{R}$ with a positive definite symmetric bilinear form;

or a finite dimensional vector space $V$ over $\mathbb{C}$ with a positive definite Hermitian form.
Example 1.3 Consider the vector space $V$ of all continuous functions $[0, 1] \to \mathbb{C}$. Then we can define
$$\langle f, g \rangle = \int_0^1 f(x) \overline{g(x)} \, dx.$$
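This inner product can be approximated numerically. A rough illustration (Python with numpy; the functions below are arbitrary choices) using a Riemann sum:

```python
import numpy as np

# Approximate <f, g> = integral over [0,1] of f(x) conj(g(x)) dx.
x = np.linspace(0.0, 1.0, 100_000, endpoint=False)
dx = x[1] - x[0]

f = np.exp(2j * np.pi * x)      # f(x) = e^{2*pi*i*x}
g = np.exp(4j * np.pi * x)      # g(x) = e^{4*pi*i*x}

inner = lambda fv, gv: np.sum(fv * np.conj(gv)) * dx

print(inner(f, f))              # ~ 1: <f, f> = integral of |f|^2 = 1
print(inner(f, g))              # ~ 0: distinct exponentials are orthogonal
```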
Setting $\lambda = \frac{\langle u, v \rangle}{\|v\|^2}$ we have:
$$0 \le \|u - \lambda v\|^2 = \|u\|^2 - \overline{\lambda} \langle u, v \rangle - \lambda \langle v, u \rangle + |\lambda|^2 \|v\|^2 = \|u\|^2 - \frac{|\langle u, v \rangle|^2}{\|v\|^2}.$$

Hence
$$\|u\|^2 \|v\|^2 \ge |\langle u, v \rangle|^2.$$

Taking the square root of both sides we get the result.
Proof. We have
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \|u\|^2 + 2\,\Re\langle u, v \rangle + \|v\|^2 \le \|u\|^2 + 2\|u\|\|v\| + \|v\|^2 = (\|u\| + \|v\|)^2,$$
using $\Re\langle u, v \rangle \le |\langle u, v \rangle| \le \|u\|\|v\|$ (Cauchy–Schwarz). Hence
$$\|u + v\| \le \|u\| + \|v\|.$$
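Both inequalities are easy to sanity-check numerically. A quick sketch (Python with numpy; random vectors, standard inner product on $\mathbb{C}^5$):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = lambda a, b: a @ b.conj()            # standard inner product
norm = lambda a: np.sqrt(inner(a, a).real)

assert abs(inner(u, v)) <= norm(u) * norm(v)  # Cauchy-Schwarz
assert norm(u + v) <= norm(u) + norm(v)       # triangle inequality
```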
Definition 1.6 Two vectors $v, w$ in an inner product space are called orthogonal if $\langle v, w \rangle = 0$.
Theorem 1.7 (Pythagoras' Theorem) Let $(V, \langle\,,\rangle)$ be an inner product space. If $v, w \in V$ are orthogonal, then
$$\|v\|^2 + \|w\|^2 = \|v + w\|^2.$$

Proof. Since
$$\|v + w\|^2 = \|v\|^2 + 2\,\Re\langle v, w \rangle + \|w\|^2,$$
if $\langle v, w \rangle = 0$ we have
$$\|v\|^2 + \|w\|^2 = \|v + w\|^2.$$
2 Gram–Schmidt Orthogonalisation
Definition 2.1 Let $V$ be an inner product space. We shall call a basis $B$ of $V$ an orthonormal basis if $\langle b_i, b_j \rangle = \delta_{i,j}$.
Proof. If the basis $B = (b_1, \ldots, b_n)$ is orthonormal, then the matrix of $\langle\,,\rangle$ in this basis is the identity $I_n$. The proposition follows.
$$c_1 = b_1$$
$$c_2 = b_2 - \frac{\langle b_2, c_1 \rangle}{\langle c_1, c_1 \rangle} c_1$$
$$c_3 = b_3 - \frac{\langle b_3, c_1 \rangle}{\langle c_1, c_1 \rangle} c_1 - \frac{\langle b_3, c_2 \rangle}{\langle c_2, c_2 \rangle} c_2$$
$$\vdots$$
$$c_n = b_n - \sum_{r=1}^{n-1} \frac{\langle b_n, c_r \rangle}{\langle c_r, c_r \rangle} c_r,$$

is orthogonal. Furthermore the basis $D$ defined by
$$d_r = \frac{1}{\|c_r\|} c_r,$$
is orthonormal.
$$\langle c_r, c_s \rangle = \langle b_r, c_s \rangle - \frac{\langle b_r, c_s \rangle}{\langle c_s, c_s \rangle} \langle c_s, c_s \rangle = \langle b_r, c_s \rangle - \langle b_r, c_s \rangle = 0$$

(notice that $\langle c_t, c_s \rangle = 0$ unless $t = s$). This shows that $\{c_1, \ldots, c_r\}$ are orthogonal. Hence $C$ is an orthogonal basis. It follows easily that $D$ is orthonormal.
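The process above is a short algorithm. A minimal sketch in Python with numpy (real case; the example basis is an arbitrary choice), following the formulas $c_n = b_n - \sum_r \frac{\langle b_n, c_r \rangle}{\langle c_r, c_r \rangle} c_r$ and $d_r = c_r / \|c_r\|$ exactly:

```python
import numpy as np

def gram_schmidt(B):
    """Columns of B form a basis; return orthonormal basis as columns."""
    B = np.asarray(B, dtype=float)
    C = []
    for b in B.T:                           # process b_1, ..., b_n in turn
        c = b.copy()
        for prev in C:                      # subtract projections on c_1..c_{r-1}
            c -= (b @ prev) / (prev @ prev) * prev
        C.append(c)
    # normalise: d_r = c_r / ||c_r||
    return np.column_stack([c / np.linalg.norm(c) for c in C])

B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
D = gram_schmidt(B)
print(np.round(D.T @ D, 10))                # identity: columns are orthonormal
```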
For a subset $S$ of $V$ we define the orthogonal complement
$$S^\perp = \{ v \in V : \langle v, w \rangle = 0 \ \text{for all } w \in S \}.$$
Theorem 2.4 If $(V, \langle\,,\rangle)$ is an inner product space and $W$ is a subspace of $V$ then
$$V = W \oplus W^\perp,$$
and hence any $v \in V$ can be written as
$$v = w + w^\perp,$$
where $w \in W$ and $w^\perp \in W^\perp$.

Proof. Choose an orthonormal basis $e_1, \ldots, e_r$ of $W$ and extend it to an orthonormal basis $e_1, \ldots, e_n$ of $V$. Given $v \in V$, write $v = \sum_{i=1}^{n} \lambda_i e_i$. Now
$$\sum_{i=1}^{r} \lambda_i e_i \in W.$$
For any $w = \sum_{i=1}^{r} \mu_i e_i \in W$,
$$\left\langle w, \sum_{j=r+1}^{n} \lambda_j e_j \right\rangle = \sum_{i=1}^{r} \sum_{j=r+1}^{n} \mu_i \overline{\lambda_j} \langle e_i, e_j \rangle = 0.$$
Hence
$$\sum_{i=r+1}^{n} \lambda_i e_i \in W^\perp.$$
Therefore
$$V = W + W^\perp.$$
Next suppose $v \in W \cap W^\perp$. Then $\langle v, v \rangle = 0$ and so $v = 0$. Hence $V = W \oplus W^\perp$, and so any vector $v \in V$ can be expressed uniquely as
$$v = w + w^\perp,$$
where $w \in W$ and $w^\perp \in W^\perp$.
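The decomposition $v = w + w^\perp$ can be computed via orthogonal projection. A small sketch in Python with numpy (the subspace $W$ and vector $v$ are arbitrary illustrative choices; the projection formula $M(M^tM)^{-1}M^t$ is the standard one for $W = $ column span of $M$):

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                  # columns span W inside R^3
v = np.array([3.0, 1.0, 2.0])

# orthogonal projection of v onto W: w = M (M^t M)^{-1} M^t v
w = M @ np.linalg.solve(M.T @ M, M.T @ v)
w_perp = v - w

print(w + w_perp)                           # recovers v
print(np.round(M.T @ w_perp, 10))           # ~ 0: w_perp lies in W-perp
```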
3 Adjoints.
Definition 3.1 An adjoint of a linear map $T : V \to V$ is a linear map $T^*$ such that $\langle T(u), v \rangle = \langle u, T^*(v) \rangle$ for all $u, v \in V$.

Notice that here we have used that the basis is orthonormal: we said that the matrix of $\langle\,,\rangle$ was the identity.

(Uniqueness) Let $T^*$, $T^{*\prime}$ be two adjoints. Then we have
$$\langle u, (T^* - T^{*\prime})v \rangle = 0$$
for all $u, v \in V$. In particular, let $u = (T^* - T^{*\prime})v$; then $\|(T^* - T^{*\prime})v\| = 0$, hence $T^*(v) = T^{*\prime}(v)$ for all $v \in V$. Therefore $T^* = T^{*\prime}$.
Example 3.2 Consider $V = \mathbb{C}^2$ with the standard orthonormal basis and let $T$ be represented by
$$A = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}.$$
Then $T^* = T$ (such a linear map is called self-adjoint).

Notice that $T$ being self-adjoint is equivalent to the matrix representing it (in an orthonormal basis) being Hermitian.
Now let $T$ be represented by
$$A = \begin{pmatrix} 2i & 1+i \\ -1+i & i \end{pmatrix}.$$
Then $T^* = -T$.

We also see that $T^* = -T$ directly, using that $T^*$ is represented by $\overline{A}^t$.
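The defining property $\langle T(u), v \rangle = \langle u, T^*(v) \rangle$, with $T^*$ represented by $\overline{A}^t$, can be checked numerically. A sketch in Python with numpy using the matrix from the example just above (random test vectors):

```python
import numpy as np

A = np.array([[2j, 1 + 1j],
              [-1 + 1j, 1j]])
A_star = A.conj().T                         # matrix of the adjoint T*

rng = np.random.default_rng(2)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

inner = lambda a, b: a @ b.conj()           # standard inner product on C^2
assert np.isclose(inner(A @ u, v), inner(u, A_star @ v))  # <Tu, v> = <u, T*v>
assert np.allclose(A_star, -A)              # this A is skew-adjoint: T* = -T
```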
4 Isometries.
Theorem 4.1 Let $T : V \to V$ be a linear map of a Euclidean space $V$. Then the following are equivalent.

(i) $T^* T = \mathrm{Id}$ (i.e. $T^* = T^{-1}$);

(ii) $\langle T(u), T(v) \rangle = \langle u, v \rangle$ for all $u, v \in V$;

(iii) $\|T(v)\| = \|v\|$ for all $v \in V$.

Definition 4.1 If $T$ satisfies any of the above (and so all of them) then $T$ is called an isometry.
For the second equality, notice that
$$2\,\Re\langle T(u), T(v) \rangle = \|T(u) + T(v)\|^2 - \|T(u)\|^2 - \|T(v)\|^2 = \|u + v\|^2 - \|u\|^2 - \|v\|^2 = 2\,\Re\langle u, v \rangle$$
and
$$\Im\langle T(u), T(v) \rangle = \Re\langle T(u), i T(v) \rangle = \Re(\langle T(u), T(iv) \rangle) = \Re(\langle u, iv \rangle) = \Im\langle u, v \rangle.$$
Hence
$$\langle T(u), T(v) \rangle = \langle u, v \rangle.$$

(ii) implies (i):
$$\langle T^* T u, v \rangle = \langle T u, T v \rangle = \langle u, v \rangle$$
for all $u, v \in V$, hence $T^* T = \mathrm{Id}$.
(ii) The columns of $A$ form an orthonormal basis for $\mathbb{R}^n$ (for the standard inner product on $\mathbb{R}^n$);

(iii) The rows of $A$ form an orthonormal basis for $\mathbb{R}^n$.
Hence
$$\langle f_i, f_j \rangle = \left\langle \sum_{k=1}^{n} p_{k,i} e_k, \sum_{l=1}^{n} p_{l,j} e_l \right\rangle = \sum_{k=1}^{n} \sum_{l=1}^{n} p_{k,i} p_{l,j} \langle e_k, e_l \rangle = \sum_{k=1}^{n} p_{k,i} p_{k,j} = (P^t P)_{i,j}.$$
where $\theta$ is not a multiple of $\pi$. The characteristic polynomial is $x^2 - 2\cos(\theta)x + 1$. Then, as $\cos(\theta)^2 - 1 < 0$, there are no real eigenvalues and the matrix is not diagonalisable over $\mathbb{R}$.
Notice that for a given matrix $A$, it is easy to check whether the columns are orthonormal. If that is the case, then $A$ is in $O_n(\mathbb{R})$ and it is easy to calculate the inverse: $A^{-1} = A^t$.
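This check amounts to verifying $A^t A = I_n$. A short sketch in Python with numpy, using a rotation matrix as in the example above:

```python
import numpy as np

theta = 0.7                                          # any angle works here
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # rotation by theta

assert np.allclose(A.T @ A, np.eye(2))               # columns orthonormal: A in O_2(R)
assert np.allclose(np.linalg.inv(A), A.T)            # hence A^{-1} = A^t
print(np.linalg.eigvals(A))                          # complex: no real eigenvalues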
5 Orthogonal Diagonalisation.
Definition 5.1 Let $V$ be an inner product space. A linear map $T : V \to V$ is self-adjoint if
$$T^* = T.$$

Suppose $Av = \lambda v$ with $v \ne 0$. It follows that
$$\lambda \langle v, v \rangle = \langle Av, v \rangle = \langle v, Av \rangle = \overline{\lambda} \langle v, v \rangle,$$
so $\lambda = \overline{\lambda}$: the eigenvalues of a self-adjoint map are real.
Proof. This is rather similar to Theorem 5.4.

We use induction on $\dim(V) = n$. The result is true for $n = 1$, so suppose it holds for $n - 1$ and let $\dim(V) = n$.

Since $T$ is self-adjoint, if $E$ is an orthonormal basis for $V$ and $A$ is the matrix representing $T$ in $E$ then
$$A = \overline{A}^t.$$

Let $e_1$ be an eigenvector of $T$ with $\|e_1\| = 1$, and let $W = \mathrm{Span}(e_1)^\perp$, so $\dim(W) = n - 1$.

We claim that $T : W \to W$, i.e. $T(W) \subseteq W$. Let $w = \lambda e_1 \in W^\perp$, $\lambda \in \mathbb{R}$, and $v \in W$. Then
$$\langle T(v), w \rangle = \langle v, T(w) \rangle = \lambda \mu \langle v, e_1 \rangle = 0,$$
where $\mu$ is the eigenvalue of $e_1$, since $e_1 \perp W$. Hence $T : W \to W$.

By induction there exists an orthonormal basis of eigenvectors $\{e_2, \ldots, e_n\}$ for $W$. But $V = W \oplus W^\perp$, so $E = \{e_1, \ldots, e_n\}$ is a basis for $V$, and $\langle e_1, e_i \rangle = 0$ for $2 \le i \le n$ and $\|e_1\| = 1$. Hence $E$ is an orthonormal basis of eigenvectors for $V$.
Example 5.4 Let
$$A = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}.$$
This matrix is self-adjoint.

One calculates the characteristic polynomial and finds $t(t - 2)$ (in particular the minimal polynomial is the same, hence you know that the matrix is diagonalisable for reasons other than being self-adjoint). For eigenvalue zero, one finds eigenvector
$$\begin{pmatrix} -i \\ 1 \end{pmatrix}.$$
For eigenvalue $2$, one finds
$$\begin{pmatrix} i \\ 1 \end{pmatrix}.$$
Then we normalise the vectors: $v_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} -i \\ 1 \end{pmatrix}$ and $v_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} i \\ 1 \end{pmatrix}$. We let
$$P = \frac{1}{\sqrt{2}} \begin{pmatrix} -i & i \\ 1 & 1 \end{pmatrix}$$
and
$$P^{-1} A P = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}.$$
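The example can be verified numerically. A sketch in Python with numpy:

```python
import numpy as np

A = np.array([[1, 1j],
              [-1j, 1]])
P = np.array([[-1j, 1j],
              [1, 1]]) / np.sqrt(2)               # normalised eigenvectors as columns

assert np.allclose(P.conj().T @ P, np.eye(2))     # P is unitary
print(np.round(np.linalg.inv(P) @ A @ P, 10))     # diag(0, 2)
```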
For $V(9)$, one finds $v_1 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$. To make this orthonormal, divide by the norm, i.e. replace $v_1$ by $\frac{1}{3} v_1$.

For $V(0)$, one finds $V(0) = \mathrm{Span}(v_2, v_3)$ with
$$v_2 = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}$$
and
$$v_3 = \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}.$$
By the Gram–Schmidt process we replace $v_2$ by
$$\frac{1}{\sqrt{5}} \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}$$
and $v_3$ by
$$\frac{1}{3\sqrt{5}} \begin{pmatrix} -2 \\ -4 \\ 5 \end{pmatrix}.$$
Let
$$P = \begin{pmatrix} 1/3 & -2/\sqrt{5} & -2/(3\sqrt{5}) \\ 2/3 & 1/\sqrt{5} & -4/(3\sqrt{5}) \\ 2/3 & 0 & 5/(3\sqrt{5}) \end{pmatrix}.$$
We have
$$P^{-1} A P = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
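This example can also be checked numerically. The statement of $A$ is not repeated above, but the eigendata (eigenvalue $9$ on $\mathrm{Span}\{(1,2,2)^t\}$, eigenvalue $0$ on its orthogonal complement) force $A = (1,2,2)^t (1,2,2)$; a sketch in Python with numpy under that assumption:

```python
import numpy as np

# A is determined by the eigendata: 9 on span{(1,2,2)^t}, 0 elsewhere.
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 4.0, 4.0],
              [2.0, 4.0, 4.0]])

s5 = np.sqrt(5.0)
P = np.array([[1/3, -2/s5, -2/(3*s5)],
              [2/3,  1/s5, -4/(3*s5)],
              [2/3,  0.0,   5/(3*s5)]])

assert np.allclose(P.T @ P, np.eye(3))            # P is orthogonal
print(np.round(np.linalg.inv(P) @ A @ P, 10))     # diag(9, 0, 0)
```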