
MA5158 ENGINEERING MATHEMATICS I

UNIT I - MATRICES
Section 4. Diagonalization using similarity transformation

Faculty
Department of Mathematics
Anna University, Chennai



Section 4. Diagonalization using similarity transformation

Contents
Similarity transformation
Diagonalization process
Application: Finding powers of a matrix using diagonalization
Examples
Practice Problems and MCQs



Similarity transformation

Let us recall the definition of similar matrices. Two square matrices A and B of size n are said to be similar if there exists a non-singular matrix P such that P^(-1) A P = B.
This transformation of A into B is known as a similarity transformation.
Also recall from the properties of eigen values and eigen vectors that similar matrices have the same eigen values.
We will use this similarity transformation to diagonalize a square matrix.
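
As a quick numerical illustration (not part of the original slides), the following Python/NumPy sketch forms B = P^(-1) A P for an arbitrarily chosen non-singular P and checks that A and B have the same eigen values; the particular matrices are chosen only for this example.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # any square matrix
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # any non-singular matrix
B = np.linalg.inv(P) @ A @ P        # similarity transformation of A into B

print(np.sort(np.linalg.eigvals(A)))   # [2. 5.]
print(np.sort(np.linalg.eigvals(B)))   # [2. 5.]  (same eigen values)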



Diagonalization

Theorem: An n × n square matrix A with n linearly independent eigen vectors is similar to a diagonal matrix D whose diagonal elements are the eigen values of A.

Proof: We prove the result for n = 3.
Let A be a square matrix of order 3. Let λ1, λ2, λ3 be its eigen values and let

    X1 = (x1, y1, z1)^T,   X2 = (x2, y2, z2)^T,   X3 = (x3, y3, z3)^T

be the corresponding eigen vectors.
Let

    P = [X1 X2 X3] = [ x1  x2  x3 ]
                     [ y1  y2  y3 ]
                     [ z1  z2  z3 ].



P is a square matrix whose column vectors are the eigen vectors of A.
Since the eigen vectors X1, X2, X3 are linearly independent, P^(-1) exists.
Consider

    AP = A[X1 X2 X3] = [AX1 AX2 AX3]
       = [λ1 X1   λ2 X2   λ3 X3]
       = [X1 X2 X3] [ λ1   0    0  ]
                    [  0   λ2   0  ]
                    [  0   0    λ3 ]
       = P D,

where D = [ λ1   0    0  ]
          [  0   λ2   0  ]
          [  0   0    λ3 ]
is a diagonal matrix whose diagonal elements are the eigen values of A.



Pre-multiplying both sides by P^(-1), we get P^(-1) A P = D. Thus we can transform A into a diagonal matrix D using a similarity transformation.
The matrix P which diagonalizes the matrix A is called the modal matrix of A.

Remark: Note that not all square matrices are diagonalizable.
If a square matrix A of size n has fewer than n linearly independent eigen vectors, then A is not diagonalizable.
For example, let us refer to Example 4 in Section 1 of this unit.
In this example, the matrix

    A = [  1  2  2 ]
        [  0  2  1 ]
        [ −1  2  2 ]

is of size 3. The eigen values of A are λ = 1, 2, 2.



 
An eigen vector corresponding to the eigen value λ = 1 is (−1, −1, 1)^T.
An eigen vector corresponding to the eigen value λ = 2 is (2, 1, 0)^T.
This is the only linearly independent eigen vector (up to scalar multiples) corresponding to the repeated eigen value λ = 2.
Therefore, in this case we are able to find only two linearly independent eigen vectors in total.
Since we do not have three linearly independent eigen vectors for A, A is not diagonalizable.
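
A short Python/NumPy sketch (an illustration added here, not from the slides) that counts the independent eigen vectors of this A via the rank of A − λI; the geometric multiplicity of an eigen value λ is 3 − rank(A − λI).

import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [0.0, 2.0, 1.0],
              [-1.0, 2.0, 2.0]])

# Number of independent eigen vectors for eigen value lam = 3 - rank(A - lam*I)
for lam in (1.0, 2.0):
    print(lam, 3 - np.linalg.matrix_rank(A - lam * np.eye(3)))
# 1.0 1
# 2.0 1   (only one independent eigen vector for the repeated eigen value 2)
# Total = 2 < 3 independent eigen vectors, so A is not diagonalizable.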



Application: Finding powers of a matrix

Let A be a square matrix which is diagonalizable. Let P be a non-singular matrix such that

    P^(-1) A P = D.

Then

    (P^(-1) A P)^2 = D^2
    P^(-1) A (P P^(-1)) A P = D^2
    P^(-1) A^2 P = D^2.

Similarly, P^(-1) A^3 P = D^3.
In general, P^(-1) A^m P = D^m.



Therefore, A^m = P D^m P^(-1), where

    D^m = [ λ1^m   0      0     ...   0    ]
          [  0     λ2^m   0     ...   0    ]
          [  ...   ...    ...   ...   ...  ]
          [  0     0      0     ...   λn^m ].

It is easy to find D^m. Then we can find A^m, which is equal to P D^m P^(-1).
This involves less computation than computing A^m directly.
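
A minimal Python/NumPy sketch of this procedure (illustrative only; the function name matrix_power_by_diagonalization is chosen here for the example and it assumes A is diagonalizable):

import numpy as np

def matrix_power_by_diagonalization(A, m):
    # Columns of P are (numerically computed) eigen vectors of A
    eigvals, P = np.linalg.eig(A)
    Dm = np.diag(eigvals ** m)          # D^m: m-th powers of the eigen values
    # A^m = P D^m P^(-1); assumes P is invertible, i.e. A is diagonalizable
    return P @ Dm @ np.linalg.inv(P)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.round(matrix_power_by_diagonalization(A, 5)).astype(int))
# [[122 121]
#  [121 122]]
print(np.linalg.matrix_power(A.astype(int), 5))   # direct computation agrees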



Examples
Example 1: (A has distinct eigen values)
Find the matrix P which transforms the matrix

    A = [ 1  6  1 ]
        [ 1  2  0 ]
        [ 0  0  3 ]

to diagonal form. Hence calculate A^8.
Solution:
Step 1: The characteristic equation of A is |A − λI| = 0:

    | 1−λ    6     1  |
    |  1    2−λ    0  | = 0
    |  0     0    3−λ |

    (1 − λ)[(2 − λ)(3 − λ)] − 6(3 − λ) = 0
    i.e., (λ + 1)(λ − 3)(λ − 4) = 0

Step 2: We get λ1 = −1, λ2 = 3, λ3 = 4 as the eigen values of A.
Step 3: An eigen vector X = (x1, x2, x3)^T corresponding to an eigen value λ is given by (A − λI)X = 0.
When λ = −1, the corresponding eigen vector is given by

    (A − λI)X = [ 2  6  1 ] [ x1 ]   [ 0 ]
                [ 1  3  0 ] [ x2 ] = [ 0 ]
                [ 0  0  4 ] [ x3 ]   [ 0 ]

    2x1 + 6x2 + x3 = 0
    x1 + 3x2 = 0
    4x3 = 0
    =⇒ x3 = 0 and x1 = −3x2.

Take x2 = 1. Then x1 = −3.
The vector X1 = (−3, 1, 0)^T is an eigen vector corresponding to the eigen value λ = −1 of A.



When λ = 3, the corresponding eigen vector is given by

    (A − λI)X = [ −2   6  1 ] [ x1 ]   [ 0 ]
                [  1  −1  0 ] [ x2 ] = [ 0 ]
                [  0   0  0 ] [ x3 ]   [ 0 ]

    −2x1 + 6x2 + x3 = 0
    x1 − x2 = 0
    =⇒ x1 = x2 and x3 = −4x2.

Take x1 = 1. Then x2 = 1 and x3 = −4.
The vector X2 = (1, 1, −4)^T is an eigen vector corresponding to the eigen value λ = 3.



When λ = 4, the corresponding eigen vector is given by

    (A − λI)X = [ −3   6   1 ] [ x1 ]   [ 0 ]
                [  1  −2   0 ] [ x2 ] = [ 0 ]
                [  0   0  −1 ] [ x3 ]   [ 0 ]

    −3x1 + 6x2 + x3 = 0
    x1 − 2x2 = 0
    −x3 = 0
    =⇒ x3 = 0 and x1 = 2x2.

Take x2 = 1 so that x1 = 2. We have x3 = 0.
The vector X3 = (2, 1, 0)^T is an eigen vector corresponding to the eigen value λ = 4.



Let P be the matrix whose column vectors are the eigen vectors X1, X2, X3 of A. Thus

    P = [X1 X2 X3] = [ −3   1  2 ]   [ p11  p12  p13 ]
                     [  1   1  1 ] = [ p21  p22  p23 ]
                     [  0  −4  0 ]   [ p31  p32  p33 ]

P is the required modal matrix.
We have to find P^(-1) = (1/|P|) adj P.
The co-factors are given by

    P11 = p22 p33 − p23 p32 = 4;     P12 = −(p21 p33 − p23 p31) = 0;
    P13 = p21 p32 − p22 p31 = −4;    P21 = −(p12 p33 − p13 p32) = −8;
    P22 = p11 p33 − p13 p31 = 0;     P23 = −(p11 p32 − p31 p12) = −12;
    P31 = p12 p23 − p22 p13 = −1;    P32 = −(p11 p23 − p21 p13) = 5;
    P33 = p11 p22 − p21 p12 = −4.



    ∴ adj P = transpose of [  4   0   −4 ]  =  [  4   −8   −1 ]
                           [ −8   0  −12 ]     [  0    0    5 ]
                           [ −1   5   −4 ]     [ −4  −12   −4 ]

    |P| = −3(0 + 4) − 1(0) + 2(−4) = −20

    P^(-1) = (1/−20) [  4   −8   −1 ]
                     [  0    0    5 ]
                     [ −4  −12   −4 ]

Step 4: Diagonalization
Consider the matrix D = P^(-1) A P.

    D = (1/−20) [  4   −8   −1 ] [ 1  6  1 ] [ −3   1  2 ]
                [  0    0    5 ] [ 1  2  0 ] [  1   1  1 ]
                [ −4  −12   −4 ] [ 0  0  3 ] [  0  −4  0 ]



  
    D = (1/−20) [  −4    8    1 ] [ −3   1  2 ]
                [   0    0   15 ] [  1   1  1 ]
                [ −16  −48  −16 ] [  0  −4  0 ]

      = (1/−20) [ 20    0    0 ]   [ −1  0  0 ]
                [  0  −60    0 ] = [  0  3  0 ]
                [  0    0  −80 ]   [  0  0  4 ]

D is a diagonal matrix which contains the eigen values −1, 3 and 4 of A as its diagonal elements. Thus we have diagonalized A.

Step 5: To find A^8.

Since P^(-1) A P = D, we have A = P D P^(-1).
Therefore, A^8 = P D^8 P^(-1).



    A^8 = P D^8 P^(-1)

        = [ −3   1  2 ] [ (−1)^8   0     0   ] (1/−20) [  4   −8   −1 ]
          [  1   1  1 ] [   0     3^8    0   ]         [  0    0    5 ]
          [  0  −4  0 ] [   0      0    4^8  ]         [ −4  −12   −4 ]

        = (1/20) [ −3   1  2 ] [ 1    0       0   ] [ −4   8   1 ]
                 [  1   1  1 ] [ 0  6561      0   ] [  0   0  −5 ]
                 [  0  −4  0 ] [ 0    0    65536  ] [  4  12   4 ]

        = (1/20) [ −3    6561   131072 ] [ −4   8   1 ]
                 [  1    6561    65536 ] [  0   0  −5 ]
                 [  0  −26244        0 ] [  4  12   4 ]

        = (1/20) [ 524300  1572840  491480 ]
                 [ 262140   786440  229340 ]
                 [      0        0  131220 ]

    ∴ A^8 = [ 26215  78642  24574 ]
            [ 13107  39322  11467 ]
            [     0      0   6561 ]
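
As a quick numerical check of this result (not part of the original slides), NumPy reproduces the same A^8 from P and D:

import numpy as np

A = np.array([[1, 6, 1],
              [1, 2, 0],
              [0, 0, 3]])
P = np.array([[-3, 1, 2],
              [1, 1, 1],
              [0, -4, 0]], dtype=float)
D8 = np.diag([(-1.0)**8, 3.0**8, 4.0**8])

A8 = P @ D8 @ np.linalg.inv(P)      # A^8 = P D^8 P^(-1)
print(np.round(A8).astype(int))
# [[26215 78642 24574]
#  [13107 39322 11467]
#  [    0     0  6561]]
print(np.linalg.matrix_power(A, 8))  # direct computation gives the same matrix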
Example 2: (A has distinct eigen values)
Find a matrix P which diagonalizes the matrix

    A = [ 4  1 ]
        [ 2  3 ].

Verify P^(-1) A P = D where D is a diagonal matrix. Hence find A^6.
Solution:
Step 1: The characteristic equation of A is |A − λI| = 0:

    | 4−λ    1  |
    |  2    3−λ | = 0

    (4 − λ)(3 − λ) − 2 = 0
    i.e., (λ − 2)(λ − 5) = 0



Step 2: We get λ1 = 2, λ2 = 5 as the eigen values of A.

Step 3: An eigen vector X = (x1, x2)^T corresponding to an eigen value λ is given by (A − λI)X = 0.
When λ = 2, the corresponding eigen vector is given by

    (A − λI)X = [ 2  1 ] [ x1 ] = [ 0 ]
                [ 2  1 ] [ x2 ]   [ 0 ]

    2x1 + x2 = 0
    x2 = −2x1

Take x1 = 1, then x2 = −2.
Hence X1 = (1, −2)^T is an eigen vector of A corresponding to the eigen value λ = 2.



When λ = 5, the corresponding eigen vector is given by

    (A − λI)X = [ −1   1 ] [ x1 ] = [ 0 ]
                [  2  −2 ] [ x2 ]   [ 0 ]

    −x1 + x2 = 0
    x2 = x1

We take x1 = 1 so that x2 = 1.
Then X2 = (1, 1)^T is an eigen vector of A corresponding to the eigen value λ = 5.
Hence the modal matrix is

    P = [X1 X2] = [  1  1 ]
                  [ −2  1 ].



Step 4: Diagonalization

    P^(-1) = (1/3) [ 1  −1 ]
                   [ 2   1 ]

    P^(-1) A P = (1/3) [ 1  −1 ] [ 4  1 ] [  1  1 ]
                       [ 2   1 ] [ 2  3 ] [ −2  1 ]

               = (1/3) [  2  −2 ] [  1  1 ]
                       [ 10   5 ] [ −2  1 ]

               = (1/3) [ 6   0 ] = [ 2  0 ] = D
                       [ 0  15 ]   [ 0  5 ]

D is a diagonal matrix containing the eigen values 2, 5 of A as the diagonal elements.



Step 5: To find A^6.

    A^6 = P D^6 P^(-1) = [  1  1 ] [ 2^6   0  ] (1/3) [ 1  −1 ]
                         [ −2  1 ] [  0   5^6 ]       [ 2   1 ]

        = (1/3) [   64  15625 ] [ 1  −1 ]
                [ −128  15625 ] [ 2   1 ]

        = (1/3) [ 31314  15561 ]
                [ 31122  15753 ]

    ∴ A^6 = [ 10438  5187 ]
            [ 10374  5251 ]
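
A quick NumPy check of Example 2 (illustrative, not from the slides): it verifies P^(-1) A P = diag(2, 5) and reproduces A^6.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [-2.0, 1.0]])         # modal matrix found above

print(np.round(np.linalg.inv(P) @ A @ P, 10))   # diag(2, 5), as in Step 4

A6 = P @ np.diag([2.0**6, 5.0**6]) @ np.linalg.inv(P)
print(np.round(A6).astype(int))                 # [[10438 5187] [10374 5251]]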



Example 3: (A has repeated eigen values)
Diagonalize the matrix

    A = [ 2  2  1 ]
        [ 1  3  1 ]
        [ 1  2  2 ].

Solution:
Step 1: The characteristic equation of A is |A − λI| = 0:

    | 2−λ    2     1  |
    |  1    3−λ    1  | = 0
    |  1     2    2−λ |

i.e., λ^3 − 7λ^2 + 11λ − 5 = 0 is the characteristic equation of A.

Step 2: The eigen values of A (roots of this characteristic equation) are λ = 1, 1, 5.



Step 3: To find the eigen vectors:
Let X = (x1, x2, x3)^T be an eigen vector corresponding to the eigen value λ.
Then (A − λI)X = 0, i.e.

    [ 2−λ    2     1  ] [ x1 ]   [ 0 ]
    [  1    3−λ    1  ] [ x2 ] = [ 0 ]        (1)
    [  1     2    2−λ ] [ x3 ]   [ 0 ]

When λ = 5 in equation (1), we have the following system of equations:



    −3x1 + 2x2 + x3 = 0        (2)
    x1 − 2x2 + x3 = 0          (3)
    x1 + 2x2 − 3x3 = 0         (4)

Since the determinant of the coefficient matrix is zero, at most two of these equations are linearly independent. Therefore, from equations (2) and (3) we have, by cross multiplication,

    x1/(2 + 2) = x2/(−(−3 − 1)) = x3/(6 − 2)

or

    x1/4 = x2/4 = x3/4,   i.e.   x1/1 = x2/1 = x3/1.



Therefore an eigen vector corresponding to the eigen value λ = 5 is X1 = (1, 1, 1)^T.
Any non-zero multiple of this vector is also an eigen vector corresponding to the eigen value λ = 5.
When λ = 1 in equation (1), we have the following system of equations:

    x1 + 2x2 + x3 = 0          (5)
    x1 + 2x2 + x3 = 0          (6)
    x1 + 2x2 + x3 = 0          (7)



Thus we get only one independent equation,

    x1 + 2x2 + x3 = 0.         (8)

We may take one of the coordinates to be zero, say x2 = 0 in equation (8). Then we get x1 + x3 = 0, or x1 = −x3. Let x3 be an arbitrary non-zero real number, say 1. (We cannot take x3 = 0, as that would force x1 = 0 as well, and an eigen vector must be a non-zero vector.) Then x1 = −1.
Therefore, one independent eigen vector corresponding to the eigen value λ = 1 is X2 = (−1, 0, 1)^T.



To choose another linearly independent eigen vector corresponding to λ = 1, choose either x1 = 0 or x3 = 0 in equation (8). Suppose we take x3 = 0. Then we get x1 + 2x2 = 0. Take x2 = 1. Then x1 = −2.
Therefore, the second independent eigen vector for λ = 1 is X3 = (−2, 1, 0)^T.
Therefore, 5, 1, 1 are the eigen values and X1 = (1, 1, 1)^T, X2 = (−1, 0, 1)^T, X3 = (−2, 1, 0)^T are the corresponding eigen vectors of A.



Step 4: Formation of the modal matrix and diagonalization
Now we form the modal matrix

    P = [X1 X2 X3] = [ 1  −1  −2 ]
                     [ 1   0   1 ]
                     [ 1   1   0 ]

    P^(-1) = [  1/4   1/2   1/4 ]
             [ −1/4  −1/2   3/4 ]
             [ −1/4   1/2  −1/4 ]

Consider

    P^(-1) A P = [  1/4   1/2   1/4 ] [ 2  2  1 ] [ 1  −1  −2 ]
                 [ −1/4  −1/2   3/4 ] [ 1  3  1 ] [ 1   0   1 ]
                 [ −1/4   1/2  −1/4 ] [ 1  2  2 ] [ 1   1   0 ]

               = [ 5  0  0 ]
                 [ 0  1  0 ] = D.
                 [ 0  0  1 ]

Thus we have diagonalized A.
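
A quick NumPy check of Example 3 (illustrative, not from the slides): with the modal matrix P built from X1, X2, X3, the product P^(-1) A P comes out as diag(5, 1, 1), so A is diagonalizable even though the eigen value 1 is repeated.

import numpy as np

A = np.array([[2.0, 2.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 2.0, 2.0]])
P = np.array([[1.0, -1.0, -2.0],    # columns are X1, X2, X3 from above
              [1.0,  0.0,  1.0],
              [1.0,  1.0,  0.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))              # diag(5, 1, 1)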



Practice Problems

1. Diagonalize the following matrices:

    (i)  A = [ −19   7 ]
             [ −42  16 ]

    (ii) A = [ −1   2  −2 ]
             [  1   2   1 ]
             [ −1  −1   0 ]



Multiple Choice Questions

1. Two square matrices A and B are similar if

(a) A = B
(b) B = P^(-1) A P
(c) A^T = B^T
(d) A^(-1) = B^(-1)

2. The matrices A = [ 4  1 ]  and  B = [ 2   0 ]  are similar.
                    [ 3  2 ]          [ 0  −5 ]

(a) True
(b) False



Answers to MCQs

1. (b)
2. (b)
