Chapter 7: Sections 7.1–7.3

This document discusses eigenvalues and eigenvectors of matrices. It begins by defining eigenvalues and eigenvectors and gives examples of finding them for 2×2 and 3×3 matrices. It then covers properties such as the eigenvalues of a triangular matrix being its diagonal entries, finding bases for eigenspaces, the behavior of eigenvalues under matrix powers, the connection between invertibility and eigenvalues, and the diagonalization (including orthogonal diagonalization) of matrices.


7.1 Eigenvalues and Eigenvectors
Definition
If A is an n×n matrix, then a nonzero vector x in R^n is called an eigenvector of A if Ax is a scalar multiple of x; that is,

    Ax = λx

for some scalar λ. The scalar λ is called an eigenvalue of A, and x is said to be an eigenvector of A corresponding to λ.
Example 1: Eigenvector of a 2×2 Matrix
The vector

    x = [ 1 ]
        [ 2 ]

is an eigenvector of

    A = [ 3   0 ]
        [ 8  -1 ]

corresponding to the eigenvalue λ = 3, since

    Ax = [ 3   0 ] [ 1 ]  =  [ 3 ]  =  3x
         [ 8  -1 ] [ 2 ]     [ 6 ]
To find the eigenvalues of an n×n matrix A, we rewrite Ax = λx as

    Ax = λIx

or, equivalently,

    (λI - A)x = 0     (1)

For λ to be an eigenvalue, there must be a nonzero solution of this equation. However, by Theorem 6.4.5, Equation (1) has a nonzero solution if and only if

    det(λI - A) = 0

This is called the characteristic equation of A; the scalars λ satisfying this equation are the eigenvalues of A. When expanded, the determinant det(λI - A) is a polynomial p(λ) in λ called the characteristic polynomial of A.
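As an added illustration (not part of the original slides), the characteristic polynomial and its roots can be computed numerically; here numpy.poly forms the coefficients of det(λI - A) for the 2×2 matrix of Example 1.

```python
import numpy as np

# Matrix from Example 1
A = np.array([[3.0,  0.0],
              [8.0, -1.0]])

# Coefficients of det(lambda*I - A), highest power first:
# lambda^2 - 2*lambda - 3
coeffs = np.poly(A)
print(coeffs)

# The eigenvalues are the roots of the characteristic polynomial: 3 and -1
print(np.roots(coeffs))

# Direct computation agrees
print(np.linalg.eigvals(A))
```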
Example 2: Eigenvalues of a 3×3 Matrix (1/3)
Find the eigenvalues of

    A = [ 0    1   0 ]
        [ 0    0   1 ]
        [ 4  -17   8 ]

Solution.
The characteristic polynomial of A is

    det(λI - A) = det [ λ   -1    0   ]  =  λ^3 - 8λ^2 + 17λ - 4
                      [ 0    λ   -1   ]
                      [ -4   17   λ-8 ]

The eigenvalues of A must therefore satisfy the cubic equation

    λ^3 - 8λ^2 + 17λ - 4 = 0     (2)
Example 2: Eigenvalues of a 3×3 Matrix (2/3)
To solve this equation, we shall begin by searching for integer solutions. This task can be greatly simplified by exploiting the fact that all integer solutions (if there are any) of a polynomial equation with integer coefficients

    λ^n + c_1 λ^(n-1) + ... + c_n = 0

must be divisors of the constant term c_n. Thus, the only possible integer solutions of (2) are the divisors of -4, that is, ±1, ±2, ±4. Successively substituting these values in (2) shows that λ = 4 is an integer solution. As a consequence, λ - 4 must be a factor of the left side of (2). Dividing λ - 4 into λ^3 - 8λ^2 + 17λ - 4 shows that (2) can be rewritten as

    (λ - 4)(λ^2 - 4λ + 1) = 0
Example 2: Eigenvalues of a 3×3 Matrix (3/3)
Thus, the remaining solutions of (2) satisfy the quadratic equation

    λ^2 - 4λ + 1 = 0

which can be solved by the quadratic formula. Thus, the eigenvalues of A are

    λ = 4,   λ = 2 + √3,   λ = 2 - √3
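As a numerical cross-check (an added sketch, not from the original example), the eigenvalues of this matrix can also be computed directly:

```python
import numpy as np

A = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [4.0, -17.0, 8.0]])

# Expect 4, 2 + sqrt(3) ≈ 3.732, and 2 - sqrt(3) ≈ 0.268
print(sorted(np.linalg.eigvals(A).real))
```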
Example 3: Eigenvalues of an Upper Triangular Matrix (1/2)
Find the eigenvalues of the upper triangular matrix

    A = [ a11  a12  a13  a14 ]
        [ 0    a22  a23  a24 ]
        [ 0    0    a33  a34 ]
        [ 0    0    0    a44 ]

Solution.
Recalling that the determinant of a triangular matrix is the product of the entries on the main diagonal (Theorem 2.2.2), we obtain
Example 3: Eigenvalues of an Upper Triangular Matrix (2/2)

    det(λI - A) = det [ λ-a11  -a12    -a13    -a14  ]
                      [ 0       λ-a22  -a23    -a24  ]
                      [ 0       0       λ-a33  -a34  ]
                      [ 0       0       0       λ-a44 ]

                = (λ - a11)(λ - a22)(λ - a33)(λ - a44)

Thus, the characteristic equation is

    (λ - a11)(λ - a22)(λ - a33)(λ - a44) = 0

and the eigenvalues are

    λ = a11,  λ = a22,  λ = a33,  λ = a44

which are precisely the diagonal entries of A.
Theorem 7.1.1
If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
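A quick numerical illustration of Theorem 7.1.1 (an added sketch; the triangular matrix below is an arbitrary example, not one from the slides):

```python
import numpy as np

# An arbitrary upper triangular matrix, chosen only for illustration
A = np.array([[2.0, 7.0, -3.0],
              [0.0, 5.0,  1.0],
              [0.0, 0.0, -4.0]])

print(np.linalg.eigvals(A))  # 2, 5, -4
print(np.diag(A))            # the same values: the main diagonal entries
```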
Example 4: Eigenvalues of a Lower Triangular Matrix
By inspection, the eigenvalues of the lower triangular matrix

    A = [  1/2   0     0   ]
        [ -1     2/3   0   ]
        [  5    -8    -1/4 ]

are λ = 1/2, λ = 2/3, and λ = -1/4.
Theorem 7.1.2: Equivalent Statements
If A is an n×n matrix and λ is a real number, then the following are equivalent.
a) λ is an eigenvalue of A.
b) The system of equations (λI - A)x = 0 has nontrivial solutions.
c) There is a nonzero vector x in R^n such that Ax = λx.
d) λ is a solution of the characteristic equation det(λI - A) = 0.
Finding Bases for Eigenspaces
The eigenvectors of A corresponding to an eigenvalue λ are the nonzero vectors x that satisfy Ax = λx. Equivalently, the eigenvectors corresponding to λ are the nonzero vectors in the solution space of (λI - A)x = 0. We call this solution space the eigenspace of A corresponding to λ.
Example 5: Bases for Eigenspaces (1/5)
Find bases for the eigenspaces of

    A = [ 0   0  -2 ]
        [ 1   2   1 ]
        [ 1   0   3 ]

Solution.
The characteristic equation of A is

    λ^3 - 5λ^2 + 8λ - 4 = 0

or, in factored form, (λ - 1)(λ - 2)^2 = 0; thus, the eigenvalues of A are λ = 1 and λ = 2, so there are two eigenspaces of A.
Example 5: Bases for Eigenspaces (2/5)
By definition,

    x = [ x1 ]
        [ x2 ]
        [ x3 ]

is an eigenvector of A corresponding to λ if and only if x is a nontrivial solution of (λI - A)x = 0, that is, of

    [ λ    0    2   ] [ x1 ]   [ 0 ]
    [ -1   λ-2  -1  ] [ x2 ] = [ 0 ]     (3)
    [ -1   0    λ-3 ] [ x3 ]   [ 0 ]

If λ = 2, then (3) becomes
Example 5: Bases for Eigenspaces (3/5)

    [  2   0   2 ] [ x1 ]   [ 0 ]
    [ -1   0  -1 ] [ x2 ] = [ 0 ]
    [ -1   0  -1 ] [ x3 ]   [ 0 ]

Solving this system yields

    x1 = -s,  x2 = t,  x3 = s

Thus, the eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form

    x = [ -s ]     [ -s ]   [ 0 ]       [ -1 ]     [ 0 ]
        [  t ]  =  [  0 ] + [ t ]  =  s [  0 ] + t [ 1 ]
        [  s ]     [  s ]   [ 0 ]       [  1 ]     [ 0 ]

Since

    [ -1 ]        [ 0 ]
    [  0 ]  and   [ 1 ]
    [  1 ]        [ 0 ]
Example 5: Bases for Eigenspaces (4/5)
are linearly independent, these vectors form a basis for the eigenspace corresponding to λ = 2.
If λ = 1, then (3) becomes

    [  1   0   2 ] [ x1 ]   [ 0 ]
    [ -1  -1  -1 ] [ x2 ] = [ 0 ]
    [ -1   0  -2 ] [ x3 ]   [ 0 ]

Solving this system yields

    x1 = -2s,  x2 = s,  x3 = s
Example 5: Bases for Eigenspaces (5/5)
Thus, the eigenvectors corresponding to λ = 1 are the nonzero vectors of the form

    [ -2s ]     [ -2 ]            [ -2 ]
    [  s  ] = s [  1 ]   so that  [  1 ]
    [  s  ]     [  1 ]            [  1 ]

is a basis for the eigenspace corresponding to λ = 1.
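The bases found above can be cross-checked numerically. The sketch below is an addition (it assumes SciPy is available for scipy.linalg.null_space); it returns an orthonormal basis of the null space of λI - A for each eigenvalue, which spans the same eigenspace even if the individual vectors differ from the hand-computed ones.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])
I = np.eye(3)

for lam in (2.0, 1.0):
    basis = null_space(lam * I - A)   # columns span the eigenspace
    print(f"lambda = {lam}: dimension {basis.shape[1]}")
    print(basis)
```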
Theorem 7.1.3
If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
Example 6: Using Theorem 7.1.3 (1/2)
In Example 5 we showed that the eigenvalues of

    A = [ 0   0  -2 ]
        [ 1   2   1 ]
        [ 1   0   3 ]

are λ = 2 and λ = 1, so from Theorem 7.1.3 both λ = 2^7 = 128 and λ = 1^7 = 1 are eigenvalues of A^7. We also showed that

    [ -1 ]        [ 0 ]
    [  0 ]  and   [ 1 ]
    [  1 ]        [ 0 ]

are eigenvectors of A corresponding to the eigenvalue λ = 2, so from Theorem 7.1.3 they are also eigenvectors of A^7 corresponding to λ = 2^7 = 128. Similarly, the eigenvector
Example 6: Using Theorem 7.1.3 (2/2)

    [ -2 ]
    [  1 ]
    [  1 ]

of A corresponding to the eigenvalue λ = 1 is also an eigenvector of A^7 corresponding to λ = 1^7 = 1.
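A brief numerical check of Theorem 7.1.3 on this example (an added sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])
A7 = np.linalg.matrix_power(A, 7)

# Eigenvalues of A^7 are the 7th powers of those of A: 128, 128, 1
print(np.linalg.eigvals(A7))

# The eigenvector [-2, 1, 1] for lambda = 1 is still an eigenvector of A^7
x = np.array([-2.0, 1.0, 1.0])
print(A7 @ x)   # equals 1^7 * x = x
```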
Theorem 7.1.4
A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
Example 7: Using Theorem 7.1.4
The matrix A in Example 5 is invertible since it has eigenvalues λ = 1 and λ = 2, neither of which is zero. We leave it for the reader to check this conclusion by showing that det(A) ≠ 0.
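Both conditions are easy to verify in code (an added check):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

print(np.linalg.det(A))      # 4.0, which is nonzero, so A is invertible
print(np.linalg.eigvals(A))  # 2, 2, 1 -- zero is not an eigenvalue
```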
Theorem 7.1.5: Equivalent Statements (1/3)
If A is an n×n matrix, and if T_A : R^n → R^n is multiplication by A, then the following are equivalent.
a) A is invertible.
b) Ax = 0 has only the trivial solution.
c) The reduced row-echelon form of A is I_n.
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every n×1 matrix b.
f) Ax = b has exactly one solution for every n×1 matrix b.
g) det(A) ≠ 0.
Theorem 7.1.5: Equivalent Statements (2/3)
h) The range of T_A is R^n.
i) T_A is one-to-one.
j) The column vectors of A are linearly independent.
k) The row vectors of A are linearly independent.
l) The column vectors of A span R^n.
m) The row vectors of A span R^n.
n) The column vectors of A form a basis for R^n.
o) The row vectors of A form a basis for R^n.
Theorem 7.1.5: Equivalent Statements (3/3)
p) A has rank n.
q) A has nullity 0.
r) The orthogonal complement of the nullspace of A is R^n.
s) The orthogonal complement of the row space of A is {0}.
t) A^T A is invertible.
u) λ = 0 is not an eigenvalue of A.
7.2 Diagonalization
Definition
A square matrix A is called diagonalizable if there is an invertible matrix P such that P^(-1)AP is a diagonal matrix; the matrix P is said to diagonalize A.
Theorem 7.2.1
If A is an n×n matrix, then the following are equivalent.
a) A is diagonalizable.
b) A has n linearly independent eigenvectors.
Procedure for Diagonalizing a Matrix
The preceding theorem guarantees that an n×n matrix A with n linearly independent eigenvectors is diagonalizable, and the proof provides the following method for diagonalizing A.
Step 1. Find n linearly independent eigenvectors of A, say p1, p2, ..., pn.
Step 2. Form the matrix P having p1, p2, ..., pn as its column vectors.
Step 3. The matrix P^(-1)AP will then be diagonal with λ1, λ2, ..., λn as its successive diagonal entries, where λi is the eigenvalue corresponding to pi, for i = 1, 2, ..., n.
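The three steps translate directly into code. Below is a minimal added sketch; it uses np.linalg.eig to produce the eigenvectors rather than the hand computation of the text, so the columns of P (and the order of the diagonal entries) may differ from Example 1.

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

# Steps 1 and 2: eig returns eigenvalues and a matrix P whose columns
# are corresponding eigenvectors
eigvals, P = np.linalg.eig(A)

# Step 3: if the columns of P are linearly independent, P diagonalizes A
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # diagonal, with the eigenvalues on the diagonal
print(eigvals)
```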
Example 1: Finding a Matrix P That Diagonalizes a Matrix A (1/2)
Find a matrix P that diagonalizes

    A = [ 0   0  -2 ]
        [ 1   2   1 ]
        [ 1   0   3 ]

Solution.
From Example 5 of the preceding section we found the characteristic equation of A to be

    (λ - 1)(λ - 2)^2 = 0

and we found the following bases for the eigenspaces:

    λ = 2:  p1 = [ -1 ]   p2 = [ 0 ]        λ = 1:  p3 = [ -2 ]
                 [  0 ]        [ 1 ]                     [  1 ]
                 [  1 ]        [ 0 ]                     [  1 ]
Example 1: Finding a Matrix P That Diagonalizes a Matrix A (2/2)
There are three basis vectors in total, so the matrix A is diagonalizable and

    P = [ -1   0  -2 ]
        [  0   1   1 ]
        [  1   0   1 ]

diagonalizes A. As a check, the reader should verify that

    P^(-1)AP = [ -1   0  -2 ]^(-1) [ 0   0  -2 ] [ -1   0  -2 ]   [ 2  0  0 ]
               [  0   1   1 ]      [ 1   2   1 ] [  0   1   1 ] = [ 0  2  0 ]
               [  1   0   1 ]      [ 1   0   3 ] [  1   0   1 ]   [ 0  0  1 ]
Example 2: A Matrix That Is Not Diagonalizable (1/4)
Find a matrix P that diagonalizes

    A = [  1   0   0 ]
        [  1   2   0 ]
        [ -3   5   2 ]

Solution.
The characteristic polynomial of A is

    det(λI - A) = det [ λ-1   0    0   ]  =  (λ - 1)(λ - 2)^2
                      [ -1    λ-2  0   ]
                      [  3   -5    λ-2 ]
Example 2: A Matrix That Is Not Diagonalizable (2/4)
so the characteristic equation is

    (λ - 1)(λ - 2)^2 = 0

Thus, the eigenvalues of A are λ = 1 and λ = 2. We leave it for the reader to show that bases for the eigenspaces are

    λ = 1:  p1 = [  1/8 ]        λ = 2:  p2 = [ 0 ]
                 [ -1/8 ]                     [ 0 ]
                 [  1   ]                     [ 1 ]

Since A is a 3×3 matrix and there are only two basis vectors in total, A is not diagonalizable.
Example 2: A Matrix That Is Not Diagonalizable (3/4)
Alternative Solution.
If one is interested only in determining whether a matrix is diagonalizable and is not concerned with actually finding a diagonalizing matrix P, then it is not necessary to compute bases for the eigenspaces; it suffices to find the dimensions of the eigenspaces. For this example, the eigenspace corresponding to λ = 1 is the solution space of the system

    [  0   0   0 ] [ x1 ]   [ 0 ]
    [ -1  -1   0 ] [ x2 ] = [ 0 ]
    [  3  -5  -1 ] [ x3 ]   [ 0 ]

The coefficient matrix has rank 2. Thus, the nullity of this matrix is 1 by Theorem 5.6.3, and hence the solution space is one-dimensional.
Example 2: A Matrix That Is Not Diagonalizable (4/4)
The eigenspace corresponding to λ = 2 is the solution space of the system

    [  1   0   0 ] [ x1 ]   [ 0 ]
    [ -1   0   0 ] [ x2 ] = [ 0 ]
    [  3  -5   0 ] [ x3 ]   [ 0 ]

This coefficient matrix also has rank 2 and nullity 1, so the eigenspace corresponding to λ = 2 is also one-dimensional. Since the eigenspaces produce a total of only two basis vectors, the matrix A is not diagonalizable.
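The rank/nullity test used in the alternative solution is easy to automate. A small added sketch (it takes the eigenvalues 1 and 2 from the characteristic equation above) counts the eigenspace dimensions and compares the total with n:

```python
import numpy as np

A = np.array([[ 1.0, 0.0, 0.0],
              [ 1.0, 2.0, 0.0],
              [-3.0, 5.0, 2.0]])
n = A.shape[0]
I = np.eye(n)

total = 0
for lam in (1.0, 2.0):   # eigenvalues from the characteristic equation
    # geometric multiplicity = nullity of (lam*I - A) = n - rank
    geo_mult = n - np.linalg.matrix_rank(lam * I - A)
    print(f"lambda = {lam}: geometric multiplicity {geo_mult}")
    total += geo_mult

print("diagonalizable:", total == n)   # False: only 2 of the 3 needed vectors
```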
Theorem 7.2.2
If v1, v2, ..., vk are eigenvectors of A corresponding to distinct eigenvalues λ1, λ2, ..., λk, then {v1, v2, ..., vk} is a linearly independent set.
Theorem 7.2.3
If an n×n matrix A has n distinct eigenvalues, then A is diagonalizable.
Example 3: Using Theorem 7.2.3
We saw in Example 2 of the preceding section that

    A = [ 0    1   0 ]
        [ 0    0   1 ]
        [ 4  -17   8 ]

has three distinct eigenvalues, λ = 4, λ = 2 + √3, and λ = 2 - √3. Therefore, A is diagonalizable. Further,

    P^(-1)AP = [ 4   0       0      ]
               [ 0   2 + √3  0      ]
               [ 0   0       2 - √3 ]

for some invertible matrix P. If desired, the matrix P can be found using the method shown in Example 1 of this section.
Example 4: A Diagonalizable Matrix
From Theorem 7.1.1, the eigenvalues of a triangular matrix are the entries on its main diagonal. Thus, a triangular matrix with distinct entries on the main diagonal is diagonalizable. For example,

    A = [ -1   2   4   0 ]
        [  0   3   1   7 ]
        [  0   0   5   8 ]
        [  0   0   0  -2 ]

is a diagonalizable matrix.
Theorem 7.2.4: Geometric and Algebraic Multiplicity
If A is a square matrix, then:
a) For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity.
b) A is diagonalizable if and only if the geometric multiplicity is equal to the algebraic multiplicity for every eigenvalue.
Computing Powers of a Matrix (1/2)
There are numerous problems in applied mathematics that require the computation of high powers of a square matrix. We shall conclude this section by showing how diagonalization can be used to simplify such computations for diagonalizable matrices.
If A is an n×n matrix and P is an invertible matrix, then

    (P^(-1)AP)^2 = P^(-1)APP^(-1)AP = P^(-1)AIAP = P^(-1)A^2 P

More generally, for any positive integer k,

    (P^(-1)AP)^k = P^(-1)A^k P     (8)
Computing Powers of a Matrix (2/2)
It follows from this equation that if A is diagonalizable and P^(-1)AP = D is a diagonal matrix, then

    P^(-1)A^k P = (P^(-1)AP)^k = D^k     (9)

Solving this equation for A^k yields

    A^k = P D^k P^(-1)     (10)

This last equation expresses the kth power of A in terms of the kth power of the diagonal matrix D. But D^k is easy to compute; for example, if

    D = [ d1  0   ...  0  ]               D^k = [ d1^k  0     ...  0    ]
        [ 0   d2  ...  0  ]    then              [ 0     d2^k  ...  0    ]
        [ :   :        :  ]                      [ :     :          :    ]
        [ 0   0   ...  dn ]                      [ 0     0     ...  dn^k ]
Example 5: Power of a Matrix (1/2)
Use (10) to find A^13, where

    A = [ 0   0  -2 ]
        [ 1   2   1 ]
        [ 1   0   3 ]

Solution.
We showed in Example 1 that the matrix A is diagonalized by

    P = [ -1   0  -2 ]
        [  0   1   1 ]
        [  1   0   1 ]

and that

    D = P^(-1)AP = [ 2  0  0 ]
                   [ 0  2  0 ]
                   [ 0  0  1 ]
Example 5: Power of a Matrix (2/2)
Thus, from (10),

    A^13 = P D^13 P^(-1)

         = [ -1   0  -2 ] [ 2^13  0     0    ] [  1   0   2 ]
           [  0   1   1 ] [ 0     2^13  0    ] [  1   1   1 ]
           [  1   0   1 ] [ 0     0     1^13 ] [ -1   0  -1 ]

         = [ -8190   0     -16382 ]
           [  8191   8192   8191  ]
           [  8191   0      16383 ]
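The result can be confirmed directly (an added check, not part of the original slides):

```python
import numpy as np

A = np.array([[0, 0, -2],
              [1, 2,  1],
              [1, 0,  3]])
P = np.array([[-1, 0, -2],
              [ 0, 1,  1],
              [ 1, 0,  1]])
D13 = np.diag([2**13, 2**13, 1**13])

# A^13 via the diagonalization: A^k = P D^k P^(-1)
A13 = P @ D13 @ np.linalg.inv(P)
print(np.round(A13).astype(int))

# Agrees with direct repeated multiplication
print(np.linalg.matrix_power(A, 13))
```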
7.3 Orthogonal Diagonalization
The Orthogonal Diagonalization Problem: Matrix Form
Given an n×n matrix A, if there exists an orthogonal matrix P such that the matrix P^(-1)AP = P^T AP is diagonal, then A is said to be orthogonally diagonalizable and P is said to orthogonally diagonalize A.
Theorem 7.3.1
If A is an n×n matrix, then the following are equivalent.
a) A is orthogonally diagonalizable.
b) A has an orthonormal set of n eigenvectors.
c) A is symmetric.
Theorem 7.3.2
If A is a symmetric matrix, then:
a) The eigenvalues of A are real numbers.
b) Eigenvectors from different eigenspaces are orthogonal.
Diagonalization of Symmetric Matrices
As a consequence of the preceding theorem, we obtain the following procedure for orthogonally diagonalizing a symmetric matrix.
Step 1. Find a basis for each eigenspace of A.
Step 2. Apply the Gram-Schmidt process to each of these bases to obtain an orthonormal basis for each eigenspace.
Step 3. Form the matrix P whose columns are the basis vectors constructed in Step 2; this matrix orthogonally diagonalizes A.
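For symmetric matrices, this entire procedure is what np.linalg.eigh performs in one call: it returns the eigenvalues together with an orthonormal set of eigenvectors. A minimal added sketch, using the symmetric matrix of the following example:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])

# eigh is for symmetric matrices; the columns of P are orthonormal
# eigenvectors, so P is an orthogonal matrix
eigvals, P = np.linalg.eigh(A)

print(eigvals)                    # 2, 2, 8
print(np.round(P.T @ P, 10))      # identity: P is orthogonal
print(np.round(P.T @ A @ P, 10))  # diag(2, 2, 8)
```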
Example 1: An Orthogonal Matrix P That Diagonalizes a Matrix A (1/3)
Find an orthogonal matrix P that diagonalizes

    A = [ 4  2  2 ]
        [ 2  4  2 ]
        [ 2  2  4 ]

Solution.
The characteristic equation of A is

    det(λI - A) = det [ λ-4  -2   -2  ]  =  (λ - 2)^2 (λ - 8) = 0
                      [ -2   λ-4  -2  ]
                      [ -2   -2   λ-4 ]
Example 1: An Orthogonal Matrix P That Diagonalizes a Matrix A (2/3)
Thus, the eigenvalues of A are λ = 2 and λ = 8. By the method used in Example 5 of Section 7.1, it can be shown that

    u1 = [ -1 ]        u2 = [ -1 ]
         [  1 ]             [  0 ]
         [  0 ]             [  1 ]

form a basis for the eigenspace corresponding to λ = 2. Applying the Gram-Schmidt process to {u1, u2} yields the following orthonormal eigenvectors:

    v1 = [ -1/√2 ]        v2 = [ -1/√6 ]
         [  1/√2 ]             [ -1/√6 ]
         [  0    ]             [  2/√6 ]
Example 1: An Orthogonal Matrix P That Diagonalizes a Matrix A (3/3)
The eigenspace corresponding to λ = 8 has

    u3 = [ 1 ]
         [ 1 ]
         [ 1 ]

as a basis. Applying the Gram-Schmidt process to {u3} yields

    v3 = [ 1/√3 ]
         [ 1/√3 ]
         [ 1/√3 ]

Finally, using v1, v2, and v3 as column vectors, we obtain

    P = [ -1/√2  -1/√6   1/√3 ]
        [  1/√2  -1/√6   1/√3 ]
        [  0      2/√6   1/√3 ]

which orthogonally diagonalizes A.