
Linear Algebra: Matrix Eigenvalue Problems

A matrix eigenvalue problem considers the vector equation

(1)  Ax = λx.

Here A is a given square matrix, λ an unknown scalar, and x an unknown vector. In a
matrix eigenvalue problem, the task is to determine λ's and x's that satisfy (1). Since
x = 0 is always a solution for any λ and thus not interesting, we only admit solutions
with x ≠ 0.
The solutions to (1) are given the following names: the λ's that satisfy (1) are called
eigenvalues of A, and the corresponding nonzero x's that also satisfy (1) are called
eigenvectors of A.
From this rather innocent looking vector equation flows an amazing amount of relevant
theory and an incredible richness of applications. Indeed, eigenvalue problems come up
all the time in engineering, physics, geometry, numerics, theoretical mathematics, biology,
environmental science, urban planning, economics, psychology, and other areas. Thus, in
your career you are likely to encounter eigenvalue problems.
We start with a basic and thorough introduction to eigenvalue problems in Sec. 8.1 and
explain (1) with several simple matrices. This is followed by a section devoted entirely
to applications ranging from mass–spring systems of physics to population control models
of environmental science. We show you these diverse examples to train your skills in
modeling and solving eigenvalue problems. Eigenvalue problems for real symmetric,
skew-symmetric, and orthogonal matrices are discussed in Sec. 8.3 and their complex
counterparts in Sec. 8.5.
General Approach

Written in components, equation (1) is

a₁₁x₁ + ⋯ + a₁ₙxₙ = λx₁
a₂₁x₁ + ⋯ + a₂ₙxₙ = λx₂
· · · · · · · · · · · · · · · ·
aₙ₁x₁ + ⋯ + aₙₙxₙ = λxₙ.

Transferring the terms on the right side to the left side, we have

(2)  (a₁₁ − λ)x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = 0
     a₂₁x₁ + (a₂₂ − λ)x₂ + ⋯ + a₂ₙxₙ = 0
     · · · · · · · · · · · · · · · · · · · ·
     aₙ₁x₁ + aₙ₂x₂ + ⋯ + (aₙₙ − λ)xₙ = 0.

In matrix notation,

(3)  (A − λI)x = 0.


By Cramer's theorem in Sec. 7.7, this homogeneous linear system of equations has a
nontrivial solution if and only if the corresponding determinant of the coefficients is zero:

(4)  D(λ) = det(A − λI) = \begin{vmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{vmatrix} = 0.

A − λI is called the characteristic matrix and D(λ) the characteristic determinant of
A. Equation (4) is called the characteristic equation of A. By developing D(λ) we obtain
a polynomial of nth degree in λ. This is called the characteristic polynomial of A.
This proves the following important theorem.

THEOREM 1  Eigenvalues

The eigenvalues of a square matrix A are the roots of the characteristic equation
(4) of A.
Hence an n × n matrix has at least one eigenvalue and at most n numerically
different eigenvalues.

For larger n, the actual computation of eigenvalues will, in general, require the use
of Newton's method (Sec. 19.2) or another numeric approximation method in Secs.
20.7–20.9.
Example 1 demonstrates how to systematically solve a simple eigenvalue problem.
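Readers who wish to experiment can also verify Theorem 1 by machine. The following short NumPy sketch (our addition, not part of the original text) uses the matrix of Example 1 below; np.poly returns the coefficients of the characteristic polynomial of a square matrix.

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [ 2.0, -2.0]])   # matrix of Example 1 below

# Coefficients of the characteristic polynomial, highest power first;
# for this A they are 1, 7, 6, i.e. lambda^2 + 7*lambda + 6.
coeffs = np.poly(A)

# Theorem 1: the eigenvalues are the roots of the characteristic equation.
print(np.roots(coeffs))      # [-6. -1.]
print(np.linalg.eigvals(A))  # same eigenvalues, computed directly
```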

EXAMPLE 1  Determination of Eigenvalues and Eigenvectors

We illustrate all the steps in terms of the matrix

A = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix}.

Solution. (a) Eigenvalues. These must be determined first. Equation (1) is

Ax = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \lambda \begin{bmatrix} x_1 \\ x_2 \end{bmatrix};   in components,   −5x₁ + 2x₂ = λx₁,   2x₁ − 2x₂ = λx₂.

Transferring the terms on the right to the left, we get

(2*)  (−5 − λ)x₁ + 2x₂ = 0
      2x₁ + (−2 − λ)x₂ = 0.

This can be written in matrix notation

(3*)  (A − λI)x = 0

because (1) is Ax − λx = Ax − λIx = (A − λI)x = 0, which gives (3*). We see that this is a homogeneous
linear system. By Cramer's theorem in Sec. 7.7 it has a nontrivial solution x ≠ 0 (an eigenvector of A we are
looking for) if and only if its coefficient determinant is zero, that is,

(4*)  D(λ) = det(A − λI) = \begin{vmatrix} -5-\lambda & 2 \\ 2 & -2-\lambda \end{vmatrix} = (−5 − λ)(−2 − λ) − 4 = λ² + 7λ + 6 = 0.
We call D(λ) the characteristic determinant or, if expanded, the characteristic polynomial, and D(λ) = 0
the characteristic equation of A. The solutions of this quadratic equation are λ₁ = −1 and λ₂ = −6. These
are the eigenvalues of A.
(b1) Eigenvector of A corresponding to λ₁. This vector is obtained from (2*) with λ = λ₁ = −1, that is,

−4x₁ + 2x₂ = 0
 2x₁ −  x₂ = 0.

A solution is x₂ = 2x₁, as we see from either of the two equations, so that we need only one of them. This
determines an eigenvector corresponding to λ₁ = −1 up to a scalar multiple. If we choose x₁ = 1, we obtain
the eigenvector

x₁ = \begin{bmatrix} 1 \\ 2 \end{bmatrix},   Check:  Ax₁ = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \end{bmatrix} = (−1)x₁ = λ₁x₁.

(b2) Eigenvector of A corresponding to λ₂. For λ = λ₂ = −6, equation (2*) becomes

 x₁ + 2x₂ = 0
2x₁ + 4x₂ = 0.

A solution is x₂ = −x₁/2 with arbitrary x₁. If we choose x₁ = 2, we get x₂ = −1. Thus an eigenvector of A
corresponding to λ₂ = −6 is

x₂ = \begin{bmatrix} 2 \\ -1 \end{bmatrix},   Check:  Ax₂ = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \end{bmatrix} = \begin{bmatrix} -12 \\ 6 \end{bmatrix} = (−6)x₂ = λ₂x₂.

For the matrix in the intuitive opening example at the start of Sec. 8.1, the characteristic equation is
λ² − 13λ + 30 = (λ − 10)(λ − 3) = 0. The eigenvalues are {10, 3}. Corresponding eigenvectors are
[3, 4]ᵀ and [−1, 1]ᵀ, respectively. The reader may want to verify this.
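The two checks above can also be done by machine. A minimal NumPy sketch (our addition), using only the eigenpairs computed in this example:

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [ 2.0, -2.0]])

# Eigenpairs found in Example 1; each must satisfy the defining relation (1).
for lam, x in [(-1.0, np.array([1.0, 2.0])),
               (-6.0, np.array([2.0, -1.0]))]:
    assert np.allclose(A @ x, lam * x)   # A x = lambda x
```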

The eigenvalues must be determined first. Once these are known, corresponding
eigenvectors are obtained from the system (2), for instance, by Gauss elimination,
where λ is the eigenvalue for which an eigenvector is wanted. This is what we did in
Example 1 and shall do again in the examples below. (To prevent misunderstandings:
numeric approximation methods, such as in Sec. 20.8, may determine eigenvectors first.)
Eigenvectors have the following properties.

THEOREM 2  Eigenvectors, Eigenspace

If w and x are eigenvectors of a matrix A corresponding to the same eigenvalue λ,
so are w + x (provided x ≠ −w) and kx for any k ≠ 0.
Hence the eigenvectors corresponding to one and the same eigenvalue λ of A,
together with 0, form a vector space (cf. Sec. 7.4), called the eigenspace of A
corresponding to that λ.

PROOF  Aw = λw and Ax = λx imply A(w + x) = Aw + Ax = λw + λx = λ(w + x) and
A(kw) = k(Aw) = k(λw) = λ(kw); hence A(kw + ℓx) = λ(kw + ℓx).

In particular, an eigenvector x is determined only up to a constant factor. Hence we
can normalize x, that is, multiply it by a scalar to get a unit vector (see Sec. 7.9). For
instance, x₁ = [1, 2]ᵀ in Example 1 has length ‖x₁‖ = √(1² + 2²) = √5; hence
[1/√5, 2/√5]ᵀ is a normalized eigenvector (a unit eigenvector).

In the next section we shall need the following simple theorem.

THEOREM 3  Eigenvalues of the Transpose

The transpose Aᵀ of a square matrix A has the same eigenvalues as A.

PROOF  Transposition does not change the value of the characteristic determinant, as follows from
Theorem 2d in Sec. 7.7.

Having gained a first impression of matrix eigenvalue problems, we shall illustrate their
importance with some typical applications in Sec. 8.2.
Examples 2 and 3 will illustrate that an n × n matrix may have n linearly independent
eigenvectors, or it may have fewer than n. In Example 4 we shall see that a real matrix
may have complex eigenvalues and eigenvectors.
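Theorem 2 and the normalization remark can be illustrated numerically. A small NumPy sketch (our addition), using the eigenvector x₁ = [1, 2]ᵀ of Example 1:

```python
import numpy as np

A = np.array([[-5.0, 2.0],
              [ 2.0, -2.0]])
x1 = np.array([1.0, 2.0])            # eigenvector of A for lambda = -1

for k in (2.0, -3.5):                # Theorem 2: k*x1 is again an eigenvector
    assert np.allclose(A @ (k * x1), -1.0 * (k * x1))

u1 = x1 / np.linalg.norm(x1)         # normalized: [1/sqrt(5), 2/sqrt(5)]
print(u1, np.linalg.norm(u1))        # unit length 1.0
```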
EXAMPLE 2  Multiple Eigenvalues

Find the eigenvalues and eigenvectors of

A = \begin{bmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{bmatrix}.
Solution. For our matrix, the characteristic determinant gives the characteristic equation

−λ³ − λ² + 21λ + 45 = 0.

The roots (eigenvalues of A) are λ₁ = 5, λ₂ = λ₃ = −3. (If you have trouble finding roots, you may want to
use a root-finding algorithm such as Newton's method (Sec. 19.2). Your CAS or scientific calculator can find
roots. However, to really learn and remember this material, you have to do some exercises with paper and pencil.)
To find eigenvectors, we apply the Gauss elimination (Sec. 7.3) to the system (A − λI)x = 0, first with λ = 5
and then with λ = −3. For λ = 5 the characteristic matrix is

A − λI = A − 5I = \begin{bmatrix} -7 & 2 & -3 \\ 2 & -4 & -6 \\ -1 & -2 & -5 \end{bmatrix}.   It row-reduces to   \begin{bmatrix} -7 & 2 & -3 \\ 0 & -24/7 & -48/7 \\ 0 & 0 & 0 \end{bmatrix}.

Hence it has rank 2. Choosing x₃ = −1 we have x₂ = 2 from −(24/7)x₂ − (48/7)x₃ = 0 and then x₁ = 1 from
−7x₁ + 2x₂ − 3x₃ = 0. Hence an eigenvector of A corresponding to λ = 5 is x₁ = [1, 2, −1]ᵀ.

For λ = −3 the characteristic matrix

A − λI = A + 3I = \begin{bmatrix} 1 & 2 & -3 \\ 2 & 4 & -6 \\ -1 & -2 & 3 \end{bmatrix}   row-reduces to   \begin{bmatrix} 1 & 2 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

Hence it has rank 1. From x₁ + 2x₂ − 3x₃ = 0 we have x₁ = −2x₂ + 3x₃. Choosing x₂ = 1, x₃ = 0 and
x₂ = 0, x₃ = 1, we obtain two linearly independent eigenvectors of A corresponding to λ = −3 [as they must
exist by (5), Sec. 7.5, with rank = 1 and n = 3],

x₂ = \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}   and   x₃ = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}.
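A machine check of this example (our NumPy sketch, not part of the original text): the computed eigenvalues are {5, −3, −3}, both eigenvectors found for λ = −3 satisfy Ax = −3x, and, by Theorem 2, so does any linear combination of them.

```python
import numpy as np

A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

print(np.linalg.eigvals(A))   # approximately [5., -3., -3.]

x2 = np.array([-2.0, 1.0, 0.0])
x3 = np.array([ 3.0, 0.0, 1.0])
for x in (x2, x3):
    assert np.allclose(A @ x, -3.0 * x)

# Theorem 2: the eigenspace is closed under linear combinations.
x = 4.0 * x2 - 7.0 * x3
assert np.allclose(A @ x, -3.0 * x)
```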

The order Mλ of an eigenvalue λ as a root of the characteristic polynomial is called the
algebraic multiplicity of λ. The number mλ of linearly independent eigenvectors
corresponding to λ is called the geometric multiplicity of λ. Thus mλ is the dimension
of the eigenspace corresponding to this λ.
Since the characteristic polynomial has degree n, the sum of all the algebraic
multiplicities must equal n. In Example 2 for λ = −3 we have mλ = Mλ = 2. In general,
mλ ≤ Mλ, as can be shown. The difference Δλ = Mλ − mλ is called the defect of λ.
Thus Δ₋₃ = 0 in Example 2, but positive defects Δλ can easily occur:

EXAMPLE 3  Algebraic Multiplicity, Geometric Multiplicity. Positive Defect

The characteristic equation of the matrix

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}   is   det(A − λI) = \begin{vmatrix} -\lambda & 1 \\ 0 & -\lambda \end{vmatrix} = λ² = 0.

Hence λ = 0 is an eigenvalue of algebraic multiplicity M₀ = 2. But its geometric multiplicity is only m₀ = 1,
since eigenvectors result from −0x₁ + x₂ = 0, hence x₂ = 0, in the form [x₁, 0]ᵀ. Hence for λ = 0 the defect
is Δ₀ = 1.
Similarly, the characteristic equation of the matrix

A = \begin{bmatrix} 3 & 2 \\ 0 & 3 \end{bmatrix}   is   det(A − λI) = \begin{vmatrix} 3-\lambda & 2 \\ 0 & 3-\lambda \end{vmatrix} = (3 − λ)² = 0.

Hence λ = 3 is an eigenvalue of algebraic multiplicity M₃ = 2, but its geometric multiplicity is only m₃ = 1,
since eigenvectors result from 0x₁ + 2x₂ = 0 in the form [x₁, 0]ᵀ.
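Numerically, the geometric multiplicity can be read off as n minus the rank of the characteristic matrix A − λI. A short sketch (our addition) for the two matrices of this example:

```python
import numpy as np

for A, lam in [(np.array([[0.0, 1.0], [0.0, 0.0]]), 0.0),
               (np.array([[3.0, 2.0], [0.0, 3.0]]), 3.0)]:
    n = A.shape[0]
    geo = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, geo)   # algebraic multiplicity 2, geometric multiplicity 1
```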

EXAMPLE 4  Real Matrices with Complex Eigenvalues and Eigenvectors

Since real polynomials may have complex roots (which then occur in conjugate pairs), a real matrix may have
complex eigenvalues and eigenvectors. For instance, the characteristic equation of the skew-symmetric matrix

A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}   is   det(A − λI) = \begin{vmatrix} -\lambda & 1 \\ -1 & -\lambda \end{vmatrix} = λ² + 1 = 0.

It gives the eigenvalues λ₁ = i, λ₂ = −i. Eigenvectors are obtained from −ix₁ + x₂ = 0 and
ix₁ + x₂ = 0, respectively, and we can choose x₁ = 1 to get

\begin{bmatrix} 1 \\ i \end{bmatrix}   and   \begin{bmatrix} 1 \\ -i \end{bmatrix}.
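A quick numerical check of this example (our NumPy sketch; np.linalg.eig switches to complex arithmetic where needed):

```python
import numpy as np

A = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

lam, X = np.linalg.eig(A)
print(lam)   # [0.+1.j 0.-1.j], the conjugate pair i, -i
for k in range(2):
    # Columns of X are scalar multiples of [1, i]^T and [1, -i]^T.
    assert np.allclose(A @ X[:, k], lam[k] * X[:, k])
```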

Symmetric, Skew-Symmetric, and Orthogonal Matrices

DEFINITIONS  Symmetric, Skew-Symmetric, and Orthogonal Matrices

A real square matrix A = [aⱼₖ] is called
symmetric if transposition leaves it unchanged,

(1)  Aᵀ = A,  thus  aₖⱼ = aⱼₖ,

skew-symmetric if transposition gives the negative of A,

(2)  Aᵀ = −A,  thus  aₖⱼ = −aⱼₖ,

orthogonal if transposition gives the inverse of A,

(3)  Aᵀ = A⁻¹.

EXAMPLE 1  Symmetric, Skew-Symmetric, and Orthogonal Matrices

The matrices

\begin{bmatrix} -3 & 1 & 5 \\ 1 & 0 & -2 \\ 5 & -2 & 4 \end{bmatrix},   \begin{bmatrix} 0 & 9 & -12 \\ -9 & 0 & 20 \\ 12 & -20 & 0 \end{bmatrix},   \begin{bmatrix} 2/3 & 1/3 & 2/3 \\ -2/3 & 2/3 & 1/3 \\ 1/3 & 2/3 & -2/3 \end{bmatrix}

are symmetric, skew-symmetric, and orthogonal, respectively.

EXAMPLE 2  Illustration of Formula (4)

Any real square matrix A may be written as the sum of a symmetric matrix R and a skew-symmetric matrix S, where

(4)  R = ½(A + Aᵀ)   and   S = ½(A − Aᵀ).

For example,

A = \begin{bmatrix} 9 & 5 & 2 \\ 2 & 3 & -8 \\ 5 & 4 & 3 \end{bmatrix} = R + S = \begin{bmatrix} 9.0 & 3.5 & 3.5 \\ 3.5 & 3.0 & -2.0 \\ 3.5 & -2.0 & 3.0 \end{bmatrix} + \begin{bmatrix} 0 & 1.5 & -1.5 \\ -1.5 & 0 & -6.0 \\ 1.5 & 6.0 & 0 \end{bmatrix}.
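Formula (4) is easy to verify by machine. A minimal NumPy sketch (our addition), using the matrix A of this example:

```python
import numpy as np

A = np.array([[9.0, 5.0,  2.0],
              [2.0, 3.0, -8.0],
              [5.0, 4.0,  3.0]])

R = 0.5 * (A + A.T)   # symmetric part
S = 0.5 * (A - A.T)   # skew-symmetric part

assert np.allclose(R, R.T)      # R is symmetric
assert np.allclose(S, -S.T)     # S is skew-symmetric
assert np.allclose(A, R + S)    # formula (4)
```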
THEOREM 1  Eigenvalues of Symmetric and Skew-Symmetric Matrices

(a) The eigenvalues of a symmetric matrix are real.
(b) The eigenvalues of a skew-symmetric matrix are pure imaginary or zero.

This basic theorem (and an extension of it) will be proved in Sec. 8.5.
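As a numerical illustration of Theorem 1 (our sketch; it uses the matrices as reconstructed in Example 1 above):

```python
import numpy as np

sym = np.array([[-3.0, 1.0, 5.0],
                [ 1.0, 0.0, -2.0],
                [ 5.0, -2.0, 4.0]])
skew = np.array([[ 0.0,   9.0, -12.0],
                 [-9.0,   0.0,  20.0],
                 [12.0, -20.0,   0.0]])

print(np.linalg.eigvals(sym))    # all real
print(np.linalg.eigvals(skew))   # pure imaginary or zero
```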
An orthogonal transformation is a transformation y = Ax with an orthogonal matrix A. For
instance, the plane rotation through an angle θ,

(6)  y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},

is an orthogonal transformation. It can be shown that any orthogonal transformation in
the plane or in three-dimensional space is a rotation (possibly combined with a reflection
in a straight line or a plane, respectively).
The main reason for the importance of orthogonal matrices is as follows.

THEOREM 2  Invariance of Inner Product

An orthogonal transformation preserves the value of the inner product of vectors
a and b in Rⁿ, defined by

(7)  a • b = aᵀb = [a_1 \ \cdots \ a_n] \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}.

That is, for any a and b in Rⁿ, orthogonal n × n matrix A, and u = Aa, v = Ab
we have u • v = a • b.
Hence the transformation also preserves the length or norm of any vector a in
Rⁿ given by

(8)  ‖a‖ = √(a • a) = √(aᵀa).

PROOF  Let A be orthogonal. Let u = Aa and v = Ab. We must show that u • v = a • b. Now
(Aa)ᵀ = aᵀAᵀ by (10d) in Sec. 7.2 and AᵀA = A⁻¹A = I by (3). Hence

(9)  u • v = uᵀv = (Aa)ᵀAb = aᵀAᵀAb = aᵀIb = aᵀb = a • b.
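Theorem 2 can be checked numerically for the rotation (6). A short NumPy sketch (our addition), with an arbitrary angle and arbitrary vectors:

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation, orthogonal

a = np.array([3.0, -1.0])
b = np.array([2.0,  5.0])
u, v = A @ a, A @ b

assert np.isclose(u @ v, a @ b)                          # inner product preserved
assert np.isclose(np.linalg.norm(u), np.linalg.norm(a))  # norm preserved
```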
Orthogonal matrices have further interesting properties as follows.

THEOREM 3  Orthonormality of Column and Row Vectors

A real square matrix is orthogonal if and only if its column vectors a₁, …, aₙ (and
also its row vectors) form an orthonormal system, that is,

(10)  aⱼ • aₖ = aⱼᵀaₖ = 0 if j ≠ k,  and = 1 if j = k.

PROOF  (a) Let A be orthogonal. Then A⁻¹A = AᵀA = I. In terms of the column vectors a₁, …, aₙ,

(11)  I = A⁻¹A = AᵀA = \begin{bmatrix} a_1^T \\ \vdots \\ a_n^T \end{bmatrix} [a_1 \ \cdots \ a_n] = \begin{bmatrix} a_1^T a_1 & a_1^T a_2 & \cdots & a_1^T a_n \\ \vdots & \vdots & & \vdots \\ a_n^T a_1 & a_n^T a_2 & \cdots & a_n^T a_n \end{bmatrix}.

The entries of the last matrix equal those of the unit matrix, which gives (10) for the column
vectors. Now the column vectors of A⁻¹ (= Aᵀ) are the row vectors of A. Hence the row vectors
of A also form an orthonormal system.
(b) Conversely, if the column vectors of A satisfy (10), the off-diagonal entries in (11)
must be 0 and the diagonal entries 1. Hence AᵀA = I, as (11) shows. Similarly, AAᵀ = I.
This implies Aᵀ = A⁻¹ because also A⁻¹A = AA⁻¹ = I and the inverse is unique. Hence
A is orthogonal. Similarly when the row vectors of A form an orthonormal system, by
what has been said at the end of part (a).

THEOREM 4  Determinant of an Orthogonal Matrix

The determinant of an orthogonal matrix has the value +1 or −1.

PROOF  From det AB = det A det B (Sec. 7.8, Theorem 4) and det Aᵀ = det A (Sec. 7.7,
Theorem 2d), we get for an orthogonal matrix

1 = det I = det(AA⁻¹) = det(AAᵀ) = det A det Aᵀ = (det A)².

EXAMPLE 4  Illustration of Theorems 3 and 4

The last matrix in Example 1 and the matrix in (6) illustrate Theorems 3 and 4 because their determinants are
−1 and +1, as you should verify.

THEOREM 5  Eigenvalues of an Orthogonal Matrix

The eigenvalues of an orthogonal matrix A are real or complex conjugates in pairs
and have absolute value 1.
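Theorem 5 can be illustrated with the rotation matrix from (6) (our NumPy sketch; the angle is arbitrary):

```python
import numpy as np

theta = 1.2
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal (rotation)

lam = np.linalg.eigvals(Q)
print(lam)        # complex conjugate pair exp(+i*theta), exp(-i*theta)
print(np.abs(lam))   # absolute values are 1, as Theorem 5 states
```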
Eigenbases

Eigenvectors of an n × n matrix A may (or may not) form a basis for Rⁿ. If such an
eigenbasis x₁, …, xₙ exists, every x in Rⁿ can be written x = c₁x₁ + ⋯ + cₙxₙ, and
applying A gives

Ax = A(c₁x₁ + ⋯ + cₙxₙ) = c₁Ax₁ + ⋯ + cₙAxₙ = c₁λ₁x₁ + ⋯ + cₙλₙxₙ.

This shows that we have decomposed the complicated action of A on an arbitrary vector
x into a sum of simple actions (multiplication by scalars) on the eigenvectors of A. This
is the point of an eigenbasis.
Now if the n eigenvalues are all different, we do obtain a basis:

THEOREM 1  Basis of Eigenvectors

If an n × n matrix A has n distinct eigenvalues, then A has a basis of eigenvectors
x₁, …, xₙ for Rⁿ.

PROOF  All we have to show is that x₁, …, xₙ are linearly independent. Suppose they are not. Let
r be the largest integer such that {x₁, …, xᵣ} is a linearly independent set. Then r < n
and the set {x₁, …, xᵣ, xᵣ₊₁} is linearly dependent. Thus there are scalars c₁, …, cᵣ₊₁,
not all zero, such that

(2)  c₁x₁ + ⋯ + cᵣ₊₁xᵣ₊₁ = 0

(see Sec. 7.4). Multiplying both sides by A and using Axⱼ = λⱼxⱼ, we obtain

(3)  A(c₁x₁ + ⋯ + cᵣ₊₁xᵣ₊₁) = c₁λ₁x₁ + ⋯ + cᵣ₊₁λᵣ₊₁xᵣ₊₁ = A0 = 0.

To get rid of the last term, we subtract λᵣ₊₁ times (2) from this, obtaining

c₁(λ₁ − λᵣ₊₁)x₁ + ⋯ + cᵣ(λᵣ − λᵣ₊₁)xᵣ = 0.

Here c₁(λ₁ − λᵣ₊₁) = 0, …, cᵣ(λᵣ − λᵣ₊₁) = 0 since {x₁, …, xᵣ} is linearly independent.
Hence c₁ = ⋯ = cᵣ = 0, since all the eigenvalues are distinct. But with this, (2) reduces to
cᵣ₊₁xᵣ₊₁ = 0, hence cᵣ₊₁ = 0, since xᵣ₊₁ ≠ 0 (an eigenvector!). This contradicts the fact
that not all scalars in (2) are zero.

EXAMPLE 1  Eigenbasis. Nondistinct Eigenvalues. Nonexistence

The matrix A = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix} has a basis of eigenvectors [1, 1]ᵀ, [1, −1]ᵀ for R² corresponding to the eigenvalues λ₁ = 8,
λ₂ = 2. (See Example 1 in Sec. 8.2.)
Even if not all n eigenvalues are different, a matrix A may still provide an eigenbasis for Rⁿ. See Example 2
in Sec. 8.1, where n = 3.
On the other hand, A may not have enough linearly independent eigenvectors to make up a basis. For
instance, A in Example 3 of Sec. 8.1 is

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}   and has only one eigenvector   \begin{bmatrix} k \\ 0 \end{bmatrix}   (k ≠ 0, arbitrary).

Actually, eigenbases exist under much more general conditions than those in Theorem 1.
An important case is the following.

THEOREM 2  Symmetric Matrices

A symmetric matrix has an orthonormal basis of eigenvectors for Rⁿ.

For a proof (which is involved) see Ref. [B3], vol. 1, pp. 270–272.

EXAMPLE 2  Orthonormal Basis of Eigenvectors

The first matrix in Example 1 is symmetric, and an orthonormal basis of eigenvectors is
[1/√2, 1/√2]ᵀ, [1/√2, −1/√2]ᵀ.
Similarity of Matrices. Diagonalization

Eigenbases also play a role in reducing a matrix A to a diagonal matrix whose entries are
the eigenvalues of A. This is done by a "similarity transformation," which is defined as
follows (and will have various applications in numerics in Chap. 20).

DEFINITION  Similar Matrices. Similarity Transformation

An n × n matrix Â is called similar to an n × n matrix A if

(4)  Â = P⁻¹AP

for some (nonsingular!) n × n matrix P. This transformation, which gives Â from
A, is called a similarity transformation.
The key property of this transformation is that it preserves the eigenvalues of A:

THEOREM 3  Eigenvalues and Eigenvectors of Similar Matrices

If Â is similar to A, then Â has the same eigenvalues as A.
Furthermore, if x is an eigenvector of A, then y = P⁻¹x is an eigenvector of Â
corresponding to the same eigenvalue.

Indeed, these y are eigenvectors of the diagonal matrix Â, and the eigenvectors x₁ and x₂ are the columns of P. This suggests the general method of transforming a
matrix A to diagonal form D by using P = X, the matrix with eigenvectors as columns.

By a suitable similarity transformation we can now transform a matrix A to a diagonal
matrix D whose diagonal entries are the eigenvalues of A:

THEOREM 4  Diagonalization of a Matrix

If an n × n matrix A has a basis of eigenvectors, then

(5)  D = X⁻¹AX

is diagonal, with the eigenvalues of A as the entries on the main diagonal. Here X
is the matrix with these eigenvectors as column vectors. Also,

(5*)  Dᵐ = X⁻¹AᵐX   (m = 2, 3, …).
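Theorem 4 is easy to reproduce by machine. A short NumPy sketch (our addition) diagonalizing the symmetric matrix of Example 1 above:

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])

lam, X = np.linalg.eig(A)        # columns of X are eigenvectors
D = np.linalg.inv(X) @ A @ X     # similarity transformation (5)
print(np.round(D, 12))           # diagonal, with the eigenvalues of A

# (5*): powers of A diagonalize with the same X.
D3 = np.linalg.inv(X) @ np.linalg.matrix_power(A, 3) @ X
assert np.allclose(D3, D @ D @ D)
```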
Some Applications of Eigenvalue Problems

We have selected some typical examples from the wide range of applications of matrix
eigenvalue problems. Example 4 is included to keep our discussion independent of Chapter 4.
(However, the reader not interested in ODEs may want to skip Example 4 without loss
of continuity.)

EXAMPLE 1  Stretching of an Elastic Membrane


An elastic membrane in the x₁x₂-plane with boundary circle x₁² + x₂² = 1 (Fig. 160) is stretched so that a point
P: (x₁, x₂) goes over into the point Q: (y₁, y₂) given by

(1)  y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = Ax = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix};   in components,   y₁ = 5x₁ + 3x₂,   y₂ = 3x₁ + 5x₂.

Find the principal directions, that is, the directions of the position vector x of P for which the direction of the
position vector y of Q is the same or exactly opposite. What shape does the boundary circle take under this
deformation?
Solution. We are looking for vectors x such that y = λx. Since y = Ax, this gives Ax = λx, the equation
of an eigenvalue problem. In components, Ax = λx is

(2)  5x₁ + 3x₂ = λx₁        or        (5 − λ)x₁ + 3x₂ = 0
     3x₁ + 5x₂ = λx₂                  3x₁ + (5 − λ)x₂ = 0.

The characteristic equation is

(3)  \begin{vmatrix} 5-\lambda & 3 \\ 3 & 5-\lambda \end{vmatrix} = (5 − λ)² − 9 = 0.
Its solutions are λ₁ = 8 and λ₂ = 2. These are the eigenvalues of our problem. For λ = λ₁ = 8, our system (2)
becomes

−3x₁ + 3x₂ = 0,
 3x₁ − 3x₂ = 0.        Solution x₂ = x₁, x₁ arbitrary, for instance, x₁ = x₂ = 1.

For λ₂ = 2, our system (2) becomes

3x₁ + 3x₂ = 0,
3x₁ + 3x₂ = 0.        Solution x₂ = −x₁, x₁ arbitrary, for instance, x₁ = 1, x₂ = −1.

We thus obtain as eigenvectors of A, for instance, [1, 1]ᵀ corresponding to λ₁ and [1, −1]ᵀ corresponding to
λ₂ (or a nonzero scalar multiple of these). These vectors make 45° and 135° angles with the positive x₁-direction.
They give the principal directions, the answer to our problem. The eigenvalues show that in the principal
directions the membrane is stretched by factors 8 and 2, respectively; see Fig. 160.
Accordingly, if we choose the principal directions as directions of a new Cartesian u₁u₂-coordinate system,
say, with the positive u₁-semi-axis in the first quadrant and the positive u₂-semi-axis in the second quadrant of
the x₁x₂-system, and if we set u₁ = r cos φ, u₂ = r sin φ, then a boundary point of the unstretched circular
membrane has coordinates cos φ, sin φ. Hence, after the stretch we have

z₁ = 8 cos φ,   z₂ = 2 sin φ.

Since cos²φ + sin²φ = 1, this shows that the deformed boundary is an ellipse (Fig. 160)

(4)  z₁²/8² + z₂²/2² = 1.

Fig. 160. Undeformed and deformed membrane in Example 1
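The principal directions and stretch factors can also be recovered numerically. A short NumPy sketch (our addition; the sign of each computed eigenvector, and hence the angle modulo 180°, is arbitrary):

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])

lam, X = np.linalg.eigh(A)     # symmetric: orthogonal eigenvectors
print(lam)                     # [2. 8.], the stretch factors
for k in range(2):
    angle = np.degrees(np.arctan2(X[1, k], X[0, k]))
    print(lam[k], angle)       # principal directions at 135° and 45° (mod 180°)
```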

EXAMPLE 2  Eigenvalue Problems Arising from Markov Processes

Markov processes as considered in Example 13 of Sec. 7.2 lead to eigenvalue problems if we ask for the limit
state of the process, in which the state vector x is reproduced under the multiplication by the stochastic matrix
A governing the process, that is, Ax = x. Hence x should be an eigenvector of A corresponding to the
eigenvalue λ = 1.
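Such a limit state can be computed as the eigenvector for the eigenvalue 1. A NumPy sketch (our addition; the 3 × 3 column-stochastic matrix below is hypothetical, not the data of Example 13 in Sec. 7.2):

```python
import numpy as np

# Hypothetical stochastic matrix: each column sums to 1.
A = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.9, 0.2],
              [0.1, 0.0, 0.8]])

lam, X = np.linalg.eig(A)
k = np.argmin(np.abs(lam - 1.0))   # a stochastic matrix has eigenvalue 1
x = np.real(X[:, k])
x = x / x.sum()                    # scale to a probability vector
print(x)                           # limit state: reproduced by A
assert np.allclose(A @ x, x)
```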