Chapter 4 Solving Eigenvalues and Eigenvectors of a Matrix
Definition 4.1 For an n-order matrix A, if there exist a scalar λ and a nonzero vector x such that
Ax = λx
we say λ is an eigenvalue of A, and x is an eigenvector of A corresponding to λ.
Note. If λ is an eigenvalue of A, then
det(λI − A) = 0
24/6/19 1
Theorem 4.1 If λ1, λ2, …, λn are the eigenvalues of A and p(x) is a polynomial, then p(A) has eigenvalues p(λ1), p(λ2), …, p(λn). In particular:
(1) A^k has eigenvalues λ1^k, λ2^k, …, λn^k;
(2) if A is invertible, then 1/λ1, 1/λ2, …, 1/λn are the eigenvalues of A^(−1), with the same eigenvectors.
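A quick numerical check of Theorem 4.1 for p(x) = x^2 (the 2×2 example matrix and the helper function below are my own, not from the text):

```python
# Check Theorem 4.1 for p(x) = x^2: the eigenvalues of A^2 are the squares
# of the eigenvalues of A. The 2x2 example matrix is my own.

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[2.0, 1.0], [1.0, 2.0]]                       # eigenvalues 1 and 3
A2 = [[sum(A[i][k] * A[k][j] for k in range(2))
       for j in range(2)] for i in range(2)]       # A^2 = [[5, 4], [4, 5]]
print(eig2(*A[0], *A[1]))    # [1.0, 3.0]
print(eig2(*A2[0], *A2[1]))  # [1.0, 9.0] = squares of 1 and 3
```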

Theorem 4.2 If A is real symmetric, then all eigenvalues of A are real numbers and the eigenvectors can be chosen orthogonal to each other. Furthermore, there exists an orthogonal matrix Q such that
Q^T A Q = diag(λ1, λ2, …, λn)
Theorem 4.3 If |P| ≠ 0 and B = P^(−1) A P, we say A is similar to B.

§1 The Power Method

1.1 The Power Method

Using the power method, we can obtain the eigenvalue with the greatest modulus (the dominant eigenvalue) and an eigenvector corresponding to it.
Theorem 4.4 Suppose A has n linearly independent eigenvectors x1, x2, …, xn, and let λ1, λ2, …, λn be the corresponding eigenvalues, such that
|λ1| > |λ2| ≥ |λ3| ≥ … ≥ |λn|
Then for any initial vector v0, the sequence
vk = A v_{k−1} = … = A^k v0,   k = 1, 2, …
satisfies
(1) lim_{k→∞} vk / λ1^k = a1 x1

(2) lim_{k→∞} (v_{k+1})_m / (vk)_m = λ1

where (vk)_m is the m-th entry of vk.


Proof. Because x1, x2, …, xn are linearly independent, for any vector v0 we can write
v0 = a1 x1 + a2 x2 + … + an xn   (a1 ≠ 0)
vk = A^k v0 = a1 A^k x1 + a2 A^k x2 + … + an A^k xn
   = a1 λ1^k x1 + a2 λ2^k x2 + … + an λn^k xn
   = λ1^k [ a1 x1 + a2 (λ2/λ1)^k x2 + … + an (λn/λ1)^k xn ]
Suppose a1 ≠ 0; since |λ2/λ1| < 1, …, |λn/λ1| < 1, we get

lim_{k→∞} vk / λ1^k = lim_{k→∞} [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^k xi ] = a1 x1

lim_{k→∞} (v_{k+1})_m / (vk)_m
  = lim_{k→∞} { λ1^{k+1} [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^{k+1} xi ] }_m / { λ1^k [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^k xi ] }_m
  = λ1 (a1 x1)_m / (a1 x1)_m = λ1
Note 1. The speed of convergence is determined by the ratio |λ2/λ1|: the smaller this ratio, the faster the convergence.
Note 2. Even if the dominant eigenvalue is a multiple root, the result still holds.
Let 1  2    r,and |r || r 1 |  | n |
r n
vk  A v0   [ ai xi  a(
k k j k
1 j 1) xj ]
i 1 j  r 1
r n j k
 [ ai xi   a j ( ) x j ] r
k

vk
1
1
  ai xi
i 1 j  r 1
lim k  lim
k   k  1 k
1 i 1
r n  j k 1 r
{1 [ ai xi   a j ( ) x j ]}m
k 1
[ ai xi ]m
(v ) i 1 j  r 1 1
lim k 1 m  lim   i 1
 1
k  (v ) k  r n j k 1 r
k m
{1 [ ai xi   a j ( ) x j ]}m
k
[ ai xi ]m
i 1 j  r 1 1 i 1
Note 3. When |λ1| > 1, the entries of {vk} grow like |λ1|^k and tend to infinity; on the contrary, if |λ1| < 1, all entries tend to 0. To deal with this problem, we only need to normalize every vector in the sequence.

1.2 The Modified Power Method

Let v be a nonzero vector; normalize it as
u = v / max(v)
where max(v) is the entry of v with the greatest absolute value. Start with
u0 = v0 / max(v0)
v1 = A u0 = A v0 / max(v0),   u1 = v1 / max(v1) = A v0 / max(A v0)
v2 = A u1 = A^2 v0 / max(A v0),   u2 = v2 / max(v2) = A^2 v0 / max(A^2 v0)
……
vk = A u_{k−1} = A^k v0 / max(A^{k−1} v0),   uk = vk / max(vk) = A^k v0 / max(A^k v0)
Finally,

lim_{k→∞} uk = lim_{k→∞} λ1^k [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^k xi ] / max{ λ1^k [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^k xi ] } = x1 / max(x1)
lim_{k→∞} vk = lim_{k→∞} λ1^k [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^k xi ] / max{ λ1^{k−1} [ a1 x1 + Σ_{i=2}^n ai (λi/λ1)^{k−1} xi ] } = λ1 x1 / max(x1)

lim_{k→∞} max(vk) = λ1
Eg. 4.1 Find the dominant eigenvalue and the corresponding eigenvector of the matrix
A =
[ 2   4   6  ]
[ 3   9   15 ]
[ 4   16  36 ]
Solution. As shown in the table:

k    vk                        uk
0    1      1      1          1       1       1
1    12.00  27.00  56.00      0.2143  0.4821  1
2    8.357  19.98  44.57      0.1875  0.4483  1
3    8.168  19.60  43.92      0.1860  0.4463  1
4    8.157  19.57  43.88      0.1859  0.4460  1
5    8.157  19.57  43.88      0.1859  0.4460  1
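The table above can be reproduced with a short program. A minimal pure-Python sketch of the modified power method of Section 1.2 (the function name and stopping tolerance are my own choices):

```python
# A sketch of the modified power method of Section 1.2, applied to the matrix
# of Eg. 4.1. Pure Python, no libraries needed.

def power_method(A, v0, tol=1e-6, max_iter=100):
    """Return (dominant eigenvalue, eigenvector scaled so its max entry is 1)."""
    n = len(A)
    u = v0[:]
    lam = 0.0
    for _ in range(max_iter):
        v = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]  # v_k = A u_{k-1}
        m = max(v, key=abs)            # max(v): entry of greatest absolute value
        u_new = [vi / m for vi in v]   # u_k = v_k / max(v_k)
        if abs(m - lam) < tol:         # max(v_k) converges to lambda_1
            return m, u_new
        lam, u = m, u_new
    return lam, u

A = [[2.0, 4.0, 6.0],
     [3.0, 9.0, 15.0],
     [4.0, 16.0, 36.0]]
lam, u = power_method(A, [1.0, 1.0, 1.0])
print(round(lam, 2), [round(x, 4) for x in u])   # matches the last row of the table
```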
1.3 Inverse Iteration
We can use inverse iteration to find the eigenvalue with the smallest absolute value and its eigenvector. Let
|λ1| ≥ |λ2| ≥ … ≥ |λ_{n−1}| > |λn| > 0
with corresponding eigenvectors x1, x2, …, xn. Then the eigenvalues of A^(−1) satisfy
1/|λn| > 1/|λ_{n−1}| ≥ … ≥ 1/|λ1|
so when we find the dominant eigenvalue of A^(−1), we obtain the eigenvalue of A with the smallest absolute value.
For any initial nonzero vector v0, form the sequence
u0 = v0 / max(v0)
vk = A^(−1) u_{k−1},   uk = vk / max(vk),   k = 1, 2, …

lim_{k→∞} uk = xn / max(xn),   lim_{k→∞} max(vk) = 1/λn

Note. We can obtain vk by solving the linear system A vk = u_{k−1}.
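The note above can be sketched in code: each iteration solves A vk = u_{k−1} rather than forming A^(−1) explicitly. A minimal pure-Python sketch (the Gaussian-elimination helper and the function names are my own, not library routines):

```python
# A sketch of inverse iteration (Section 1.3). Instead of forming A^{-1},
# each step solves A v_k = u_{k-1}; solve() is a minimal helper written
# for this sketch.

def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def inverse_iteration(A, v0, steps=50):
    """Return (eigenvalue of A of smallest modulus, its eigenvector)."""
    u = [vi / max(v0, key=abs) for vi in v0]
    m = 1.0
    for _ in range(steps):
        v = solve(A, u)          # v_k = A^{-1} u_{k-1}
        m = max(v, key=abs)      # max(v_k) converges to 1 / lambda_n
        u = [vi / m for vi in v]
    return 1.0 / m, u

A = [[2.0, 4.0, 6.0], [3.0, 9.0, 15.0], [4.0, 16.0, 36.0]]
lam_min, x = inverse_iteration(A, [1.0, 1.0, 1.0])
print(lam_min)   # the eigenvalue of A closest to zero
```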

§2 Jacobi's Method
Jacobi's method is a global method: it obtains all the eigenvalues and eigenvectors of a matrix. When the matrix A is symmetric, all its eigenvalues are real numbers and its eigenvectors are orthogonal to each other. Furthermore, there exists an orthogonal matrix Q such that

Q^T A Q = diag(λ1, λ2, …, λn)
2.1 Diagonalization of a real symmetric matrix

The idea of Jacobi's method is to find a sequence of orthogonal matrices {Sk} such that

lim_{k→∞} S1 S2 … Sk = Q

We have
Tk = Sk^T S_{k−1}^T … S2^T S1^T A S1 S2 … S_{k−1} Sk = Sk^T T_{k−1} Sk → diag(λ1, λ2, …, λn)

The matrices {Tk} are all similar to each other. Let t_ij^(k) and s_ij^(k) denote the entries of Tk and Sk.
Definition 4.2
vk = Σ_{i=1}^n Σ_{j=1, j≠i}^n ( t_ij^(k) )^2,   wk = Σ_{i=1}^n ( t_ii^(k) )^2,   k = 1, 2, …
We restrict {Tk} to satisfy
(1) w_{k+1} > wk and v_{k+1} < vk for every k
(2) lim_{k→∞} vk = 0
Then we have
lim_{k→∞} Tk = diag(λ1, λ2, …, λn)
Sk = S(p, q) is equal to the identity matrix except in the p-th and q-th rows and columns:

s_pp = cos θk,   s_pq = sin θk
s_qp = −sin θk,  s_qq = cos θk

S(p, q) is orthogonal, and S(p, q)^T A S(p, q) changes only the p-th and q-th rows and the p-th and q-th columns of A. S(p, q) is called a Givens matrix.
Tk  S kT Tk 1S k , so we have
t (pjk )  t (pjk 1) cos  k  tqj( k 1) sin  k
 (k )
tqj  t (pjk 1) sin  k  tqj( k 1) cos  k j  p, q
 (k )
tip  tip( k 1) cos  k  tiq( k 1) sin  k
 (k )
tiq  tip( k 1) sin  k  tiq( k 1) cos  k i  p, q
t ( k )  t (ppk 1) cos 2  k  tq( qk 1) sin 2  k  2t (pqk 1) sin  k cos  k
 pp
tqq
(k )
 t (ppk 1) sin 2  k  tqq
( k 1)
cos 2  k  2t (pqk 1) sin  k cos  k

t ( k ) 1 ( k 1) ( k 1)
 pq  t  (t pp  tqq ) sin 2 k  t (pqk 1) cos 2 k
(k )
qp
2
 (k ) ( k 1)
tij  tij i  p , q; j  p , q
Select θk such that t_pq^(k) = 0:
a = cot 2θk = ( t_qq^(k−1) − t_pp^(k−1) ) / ( 2 t_pq^(k−1) ),   −π/2 < 2θk ≤ π/2
Let t = tan θk. According to tan^2 θk + 2 tan θk cot 2θk − 1 = 0, t satisfies
t^2 + 2at − 1 = 0
Taking the root of smaller modulus,
t = 1 / ( a + √(1 + a^2) )   if a ≥ 0
t = 1 / ( a − √(1 + a^2) )   if a < 0
so that in both cases |t| = 1 / ( |a| + √(1 + a^2) ) ≤ 1, and
cos θk = 1 / √(1 + t^2),   sin θk = t · cos θk
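The angle selection above is straightforward to code. A small sketch (the function name is my own; the caller must ensure t_pq ≠ 0):

```python
import math

# A sketch of the angle selection above: given the current entries t_pp, t_qq,
# t_pq (with t_pq != 0), compute cos(theta_k) and sin(theta_k) using the root
# of t^2 + 2at - 1 = 0 of smaller modulus, which keeps |theta_k| <= pi/4.

def rotation(t_pp, t_qq, t_pq):
    a = (t_qq - t_pp) / (2.0 * t_pq)        # a = cot(2 theta_k)
    if a >= 0:
        t = 1.0 / (a + math.sqrt(1.0 + a * a))
    else:
        t = 1.0 / (a - math.sqrt(1.0 + a * a))
    c = 1.0 / math.sqrt(1.0 + t * t)        # cos(theta_k)
    s = t * c                               # sin(theta_k)
    return c, s

# When t_pp = t_qq we get a = 0, t = 1, theta_k = pi/4:
c, s = rotation(1.0, 1.0, 1.0)
print(round(c, 5), round(s, 5))   # 0.70711 0.70711
```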

Theorem 4.5 The transformations constructed in the above way satisfy
wk > w_{k−1},   vk < v_{k−1}
and
lim_{k→∞} Tk = diag(λ1, λ2, …, λn)
2.2 The Classical Jacobi Method

The classical Jacobi method always changes the off-diagonal entry with the greatest absolute value into 0. In general, the process needs infinitely many steps, because a later transformation can change a zero entry into a nonzero one again.
The process of the algorithm:
(1) Select the off-diagonal entry with the greatest absolute value,
| a_{i1 j1} | = max_{i≠j} | a_ij | ≠ 0
and construct S1 = S(i1, j1) such that T1 = S1^T A S1 with
t_{i1 j1}^(1) = t_{j1 i1}^(1) = 0
(2) Similarly, select
| t_{i2 j2}^(1) | = max_{i≠j} | t_ij^(1) | ≠ 0
and construct S2 = S(i2, j2) such that T2 = S2^T T1 S2 with
t_{i2 j2}^(2) = t_{j2 i2}^(2) = 0
(3) Continue the above process until a solution
satisfying the required precision is obtained.
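The three steps above can be sketched as a short program, using the rotation formulas of Section 2.1 (a minimal pure-Python sketch; function names and the iteration limit are my own choices):

```python
import math

# A sketch of the classical Jacobi method described above: repeatedly zero
# the off-diagonal entry of greatest absolute value until all off-diagonal
# entries fall below the given precision eps.

def jacobi_eigen(A, eps=5e-4, max_rotations=100):
    """Return (diagonal of T, Q) with Q^T A Q approximately diagonal."""
    n = len(A)
    T = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rotations):
        # step (1)/(2): pick the off-diagonal entry of greatest absolute value
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(T[ij[0]][ij[1]]))
        if abs(T[p][q]) < eps:         # step (3): precision reached
            break
        # choose theta_k so that the (p, q) entry of T becomes zero
        a = (T[q][q] - T[p][p]) / (2.0 * T[p][q])
        t = 1.0 / (a + math.copysign(math.sqrt(1.0 + a * a), a))
        c = 1.0 / math.sqrt(1.0 + t * t)
        s = t * c
        # apply the Givens rotation: T <- S^T T S, Q <- Q S
        for k in range(n):
            tpk, tqk = T[p][k], T[q][k]
            T[p][k] = c * tpk - s * tqk
            T[q][k] = s * tpk + c * tqk
        for k in range(n):
            tkp, tkq = T[k][p], T[k][q]
            T[k][p] = c * tkp - s * tkq
            T[k][q] = s * tkp + c * tkq
        for k in range(n):
            qkp, qkq = Q[k][p], Q[k][q]
            Q[k][p] = c * qkp - s * qkq
            Q[k][q] = s * qkp + c * qkq
    return [T[i][i] for i in range(n)], Q

# Small demo: [[1, 2], [2, 1]] has eigenvalues -1 and 3.
lams, _ = jacobi_eigen([[1.0, 2.0], [2.0, 1.0]])
print(sorted(lams))   # approximately [-1.0, 3.0]
```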
Eg. 4.2 Find all eigenvalues and eigenvectors of the matrix
A =
[ 1.0   1.0   0.50 ]
[ 1.0   1.0   0.25 ]
[ 0.50  0.25  2.0  ]
using Jacobi's method with precision 0.0005.
Solution. (1) Change the entries a12, a21 of A into 0:
a1 = ( a22 − a11 ) / ( 2 a12 ) = 0  ⇒  t1 = 1,  θ1 = π/4,  cos θ1 = sin θ1 = √2/2

S1 =
[ 0.70711   0.70711  0 ]
[ −0.70711  0.70711  0 ]
[ 0         0        1 ]
 0 0 0.17678 
 
T1  S1T AS1   0 2 0.53038 
 0.17678 0.53038 2 
 
(1) (1)
( 2 ) Change the entriest 23 ,t 32
of T2 into 0
(1)
t33  t22(1)
a2  (1)
0  t2  1
2t23
 2
2   cos  2  sin  2 
4 2
1 0  0
 
S2   0 0.70711 0.70711
0 0.70711 0.70711

 0 0.12500 0.12500 
 
T2  S 2T T1S 2   0.12500 1.4697 0 
 0.12500 0 2. 5304 
24/6/19   22
(3) Change the entries t12^(2), t21^(2) of T2 into 0:
a3 = ( t22^(2) − t11^(2) ) / ( 2 t12^(2) ) = −5.8788,
t3 = 1 / ( −5.8788 − √( 1 + (−5.8788)^2 ) ),
cos θ3 = 1 / √( 1 + t3^2 ) = 0.99645,   sin θ3 = t3 · cos θ3 = −0.084145

S3 =
[ 0.99645   −0.084145  0 ]
[ 0.084145  0.99645    0 ]
[ 0         0          1 ]

T3 = S3^T T2 S3 =
[ −0.010556  0          0.12456   ]
[ 0          1.4802     −0.010518 ]
[ 0.12456    −0.010518  2.5304    ]

Continue the process.
 0.016647 0.000409 0.000005 
 
T5   0.000409 1.4801 0 
 0.000005 0 2.5336 
 
 0.72135 0.44404 0.53584 
 
Q  S1S 2 S3 S 4 S5   0.68616 0.56234 0.46147 
 0.093844 0.69757 0.71033 
 
Finally, we get three eigenvalues of A with precision 0.0005
1=-0.016647 , 2=1.4801 and 3=2.5366, the three eigenvectors
are
 0.72135   0.44404   0.53584 
     
x1   0.68616  , x2   0.56234  , x3   0.46147 
 0.093844   0.69757   0.71033 
     
