Law of Matrices PT 1

1. A matrix is Hermitian if it is equal to its conjugate transpose. 2. A matrix is Skew-Hermitian if it is equal to the negative of its conjugate transpose. 3. Scalar multiplication of a matrix obeys the basic properties: it distributes over scalar addition and over matrix addition, and multiplying a matrix by a product of scalars is the same as multiplying by each scalar in turn.

Theory of Matrices

Example 1.4.3. The matrix
A = [  0   a   b ]
    [ -a   0   c ]
    [ -b  -c   0 ]
is skew-symmetric, for
A' = [ 0  -a  -b ]
     [ a   0  -c ]
     [ b   c   0 ] = -A.
1.5 THE CONJUGATE OF A MATRIX: HERMITIAN AND SKEW-HERMITIAN MATRICES
Definition 1.5.1. Let A be an m × n matrix having complex numbers as its elements. The matrix of order m × n which is obtained from A by replacing each element of A by its conjugate is called the conjugate of A, denoted by Ā. Thus if
A = [a_ij]_{m×n}, then Ā = [ā_ij]_{m×n},
where ā_ij is the conjugate of a_ij.
If A is a real matrix, then Ā = A.

It is also obvious that the conjugate of Ā is A itself.


Example 1.5.1. If A = [ -i   1+2i ]
                      [  3   2-3i ]
then Ā = [  i   1-2i ]
         [  3   2+3i ]

Definition 1.5.2. The conjugate of the transpose of a matrix A is called the conjugate transpose of A, denoted by A*. Thus if A = [a_ij], then
A* = (A')‾ = [ā_ji].
Clearly the conjugate of the transpose is the same as the transpose of the conjugate, i.e.
A* = (A')‾ = (Ā)'.
Also clearly (A*)* = A.
If A is a real matrix, then A* = A'.
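Since the chapter's matrices are small, the conjugate and the conjugate transpose are easy to check numerically. A minimal sketch in plain Python (lists of complex numbers; the helper names are ours, not the book's):

```python
def conjugate(A):
    # Replace each element by its complex conjugate (the matrix A-bar).
    return [[z.conjugate() for z in row] for row in A]

def transpose(A):
    # Rows become columns: the matrix A'.
    return [list(col) for col in zip(*A)]

def ctranspose(A):
    # The conjugate transpose A* = (A')-bar = (A-bar)'.
    return transpose(conjugate(A))

A = [[1 + 2j, -1j],
     [3 + 0j, 2 - 3j]]

# Conjugating and transposing commute, and (A*)* = A.
assert ctranspose(A) == conjugate(transpose(A)) == transpose(conjugate(A))
assert ctranspose(ctranspose(A)) == A
```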

Example 1.5.2. If A = [ 1+2i  2-3i  3+4i ]
                      [ 4-5i  5+6i  6-7i ]
                      [ 8     7+8i  7    ]
then
A' = [ 1+2i  4-5i  8    ]
     [ 2-3i  5+6i  7+8i ]
     [ 3+4i  6-7i  7    ]
and
A* = (A')‾ = [ 1-2i  4+5i  8    ]
             [ 2+3i  5-6i  7-8i ]
             [ 3-4i  6+7i  7    ]

Definition 1.5.3. A square matrix A such that A = A* is called Hermitian. Thus, a square matrix A = [a_ij] is Hermitian if a_ij = ā_ji for all i and j.
Particularly, for i = j, a_ii = ā_ii.
Thus, the diagonal elements of a Hermitian matrix are real, and the symmetrically placed elements with respect to the diagonal of A are complex conjugates. In case all elements are real, the matrix is symmetric.
Example 1.5.3. The matrix
A = [ 1     2+i   3+2i ]
    [ 2-i   3     3i   ]
    [ 3-2i  -3i   -2   ]
is Hermitian.
For A' = [ 1     2-i   3-2i ]
         [ 2+i   3     -3i  ]
         [ 3+2i  3i    -2   ]
and A* = (A')‾ = [ 1     2+i   3+2i ]
                 [ 2-i   3     3i   ]
                 [ 3-2i  -3i   -2   ] = A.
Definition 1.5.4. A square matrix A such that A* = -A is called Skew-Hermitian. Thus a square matrix A = [a_ij] is Skew-Hermitian if
a_ij = -ā_ji for all i and j.
For i = j, a_ii = -ā_ii, so each a_ii is purely imaginary or zero.
Example 1.5.4. The matrix
A = [ i      2+i   3+2i ]
    [ -2+i   3i    -3i  ]
    [ -3+2i  -3i   0    ]
is Skew-Hermitian.
For A' = [ i     -2+i  -3+2i ]
         [ 2+i   3i    -3i   ]
         [ 3+2i  -3i   0     ]
and A* = (A')‾ = [ -i    -2-i  -3-2i ]
                 [ 2-i   -3i   3i    ]
                 [ 3-2i  3i    0     ] = -A.
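The definitional tests A = A* and A* = -A translate directly into code. A small sketch in plain Python (the example matrices are those of Examples 1.5.3 and 1.5.4 as read here; function names are ours):

```python
def ctranspose(A):
    # Conjugate transpose A*.
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

def is_hermitian(A):
    # A = A*: real diagonal, symmetrically placed entries complex conjugates.
    return A == ctranspose(A)

def is_skew_hermitian(A):
    # A* = -A: diagonal entries purely imaginary or zero.
    return ctranspose(A) == [[-z for z in row] for row in A]

H = [[1, 2 + 1j, 3 + 2j],
     [2 - 1j, 3, 3j],
     [3 - 2j, -3j, -2]]
S = [[1j, 2 + 1j, 3 + 2j],
     [-2 + 1j, 3j, -3j],
     [-3 + 2j, -3j, 0]]
assert is_hermitian(H) and not is_skew_hermitian(H)
assert is_skew_hermitian(S) and not is_hermitian(S)
```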
For this type of matrix, the symmetrically placed elements are the negatives of the conjugates of each other. In case an element is real, its symmetrically placed element would be its negative. If all the elements are real, the diagonal elements would be zero and the symmetric pairs would be negatives of each other; in other words, the matrix would be skew-symmetric.

1.6 SUBMATRICES

Definition 1.6.1. A submatrix of a matrix A is an array formed by deleting one or more rows or columns, or a combination of rows and columns, of A. The definition also allows the deletion of no rows or columns, so that A is a submatrix of itself.
Example 1.6.1. Let A = [ 2   4   6  ]
                       [ -1  7   13 ]
                       [ 3   9   16 ]
Then the deletion of the third column of A gives the submatrix
[ 2   4 ]
[ -1  7 ]
[ 3   9 ]
The deletion of the third row and third column of A will give the submatrix
[ 2   4 ]
[ -1  7 ]
PROBLEM 1.1
1. Put a tick (✓) against the correct statement. A square matrix A is symmetric iff
(a) Ā = A, (b) A = A*, (c) A = A', (d) A = -A', (e) A = -A*.
2. Choose the correct answer out of the following. A square matrix A is Hermitian if and only if
(a) Ā = A, (b) A = A*, (c) A = A', (d) A = -A', (e) A = -A*.
3. Choose the correct answer. A square matrix A is Skew-symmetric iff
(a) Ā = A, (b) A = A*, (c) A = A', (d) A = -A', (e) A = -A*.
4. Choose the correct answer. A square matrix A is Skew-Hermitian iff
(a) Ā = A, (b) A = A*, (c) A = A', (d) A = -A', (e) A = -A*.
Algebra of Matrices
The equation X + A = B may always be solved for X when A and B are given:
X = B - A.

2.3 SCALAR MULTIPLES OF MATRICES


If A = [a_ij]_{m×n} and α is a scalar, we define αA = Aα = [α a_ij]_{m×n}. In words, to multiply a matrix A by a scalar α, multiply every element of A by α. This definition is suggested by the fact that if we add n A's, we obtain a matrix whose elements are those of A each multiplied by n. This operation is known as scalar multiplication of a matrix.

Example 2.3.1. (Properties of scalar multiplication.) The operation of multiplying a matrix by a scalar has the following basic properties. Let α, β be scalars and let A and B be two matrices of the same order. Then
(a) (α + β)A = αA + βA,
(b) α(A + B) = αA + αB,
(c) α(βA) = (αβ)A.
Solution. (a) Let A = [a_ij]_{m×n}. Then
(α + β)A = (α + β)[a_ij] = [(α + β)a_ij] = [αa_ij + βa_ij] = [αa_ij] + [βa_ij] = αA + βA.
(b) We have
α(A + B) = α([a_ij] + [b_ij]) = α[a_ij + b_ij] = [αa_ij + αb_ij] = [αa_ij] + [αb_ij] = αA + αB.
(c) We have
α(βA) = α[βa_ij] = [αβa_ij] = (αβ)[a_ij] = (αβ)A.
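The three identities can be spot-checked numerically. A small sketch in plain Python (helper names ours):

```python
def smul(a, A):
    # Scalar multiple aA: multiply every element of A by a.
    return [[a * x for x in row] for row in A]

def madd(A, B):
    # Element-wise sum of two matrices of the same order.
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, -1], [0, 2]]
a, b = 3, -2
assert smul(a + b, A) == madd(smul(a, A), smul(b, A))       # (a + b)A = aA + bA
assert smul(a, madd(A, B)) == madd(smul(a, A), smul(a, B))  # a(A + B) = aA + aB
assert smul(a, smul(b, A)) == smul(a * b, A)                # a(bA) = (ab)A
```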
Example 2.3.2. If
A = [ 1  2   -3 ]   B = [ 3  -1  2 ]   C = [ 4  1   2 ]
    [ 5  0   2  ]       [ 4  2   5 ]       [ 0  3   2 ]
    [ 1  -1  1  ]       [ 2  0   3 ]       [ 1  -2  3 ]
verify A + (B - C) = (A + B) - C.

Solution. Here
B - C = [ 3-4  -1-1  2-2 ]   [ -1  -2  0 ]
        [ 4-0  2-3   5-2 ] = [ 4   -1  3 ]
        [ 2-1  0+2   3-3 ]   [ 1   2   0 ]
and
A + (B - C) = [ 1-1  2-2   -3+0 ]   [ 0  0   -3 ]
              [ 5+4  0-1   2+3  ] = [ 9  -1  5  ]
              [ 1+1  -1+2  1+0  ]   [ 2  1   1  ]
Also
A + B = [ 1+3  2-1   -3+2 ]   [ 4  1   -1 ]
        [ 5+4  0+2   2+5  ] = [ 9  2   7  ]
        [ 1+2  -1+0  1+3  ]   [ 3  -1  4  ]
and
(A + B) - C = [ 4-4  1-1   -1-2 ]   [ 0  0   -3 ]
              [ 9-0  2-3   7-2  ] = [ 9  -1  5  ]
              [ 3-1  -1+2  4-3  ]   [ 2  1   1  ]
Hence A + (B - C) = (A + B) - C.
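The verification is mechanical, so it can be repeated in code. A quick sketch in plain Python (`A`, `B`, `C` are the matrices of Example 2.3.2 as recovered here; helper names ours):

```python
def madd(A, B):
    # Element-wise sum.
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def msub(A, B):
    # Element-wise difference.
    return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

A = [[1, 2, -3], [5, 0, 2], [1, -1, 1]]
B = [[3, -1, 2], [4, 2, 5], [2, 0, 3]]
C = [[4, 1, 2], [0, 3, 2], [1, -2, 3]]

lhs = madd(A, msub(B, C))   # A + (B - C)
rhs = msub(madd(A, B), C)   # (A + B) - C
assert lhs == rhs == [[0, 0, -3], [9, -1, 5], [2, 1, 1]]
```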
Example 2.3.3. Show that
(a) (A + B)' = A' + B',
(b) the conjugate of A + B is Ā + B̄.
Proof. (a) Let A = [a_ij]_{m×n} and B = [b_ij]_{m×n}, so that A' = [a_ji]_{n×m} and B' = [b_ji]_{n×m}. Then
A + B = [a_ij + b_ij]
and (A + B)' = [a_ji + b_ji] = [a_ji] + [b_ji] = A' + B'.
(b) We have A + B = [a_ij + b_ij], and taking conjugates element by element,
(A + B)‾ = [ā_ij + b̄_ij] = [ā_ij] + [b̄_ij] = Ā + B̄.
Example 2.3.4. For any Hermitian matrix A, show that iA is Skew-Hermitian.
Proof. Let A = [a_ij] be a Hermitian matrix. Then
A* = A, i.e., a_ij = ā_ji for all i and j.
Now iA = [i a_ij], and
(iA)* = ((iA)')‾ = [(i a_ji)‾] = [-i ā_ji] = -i[ā_ji] = -iA* = -(iA).
Hence iA is Skew-Hermitian.
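The same computation can be confirmed on a concrete Hermitian matrix. A short sketch in plain Python (the 2 × 2 matrix is our own illustration, not from the text):

```python
def ctranspose(A):
    # Conjugate transpose A*.
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

H = [[2, 1 - 1j],
     [1 + 1j, 5]]                       # Hermitian: H = H*
iH = [[1j * z for z in row] for row in H]

# (iH)* = -iH, so iH is Skew-Hermitian.
assert ctranspose(H) == H
assert ctranspose(iH) == [[-z for z in row] for row in iH]
```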
Example 2.3.5. Show that every square matrix A can be expressed uniquely as a sum of a symmetric matrix B and a skew-symmetric matrix C.
Proof. For any square matrix A, we see that
(A + A')' = A' + (A')' = A' + A = A + A'
and (A - A')' = A' - (A')' = A' - A = -(A - A').
Therefore A + A' is symmetric and A - A' is skew-symmetric. Thus, we write
A = (1/2)(A + A') + (1/2)(A - A') = B + C,
where B = (1/2)(A + A') is symmetric and C = (1/2)(A - A') is skew-symmetric.
For uniqueness, let A = R + S, where R' = R and S' = -S. Then
A' = R' + S' = R - S,
and we have
R = (A + A')/2 = B and S = (A - A')/2 = C.
This completes the proof.
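The decomposition is easy to compute. A minimal sketch in plain Python (`sym_skew_parts` is our name):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def sym_skew_parts(A):
    # B = (A + A')/2 is symmetric, C = (A - A')/2 is skew-symmetric, A = B + C.
    n = len(A)
    B = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    C = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]
    return B, C

A = [[1, 4], [2, 3]]
B, C = sym_skew_parts(A)
assert B == transpose(B)                                  # symmetric part
assert C == [[-x for x in row] for row in transpose(C)]   # skew-symmetric part
assert [[B[i][j] + C[i][j] for j in range(2)] for i in range(2)] == A
```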
Example 2.3.6. Show that every square matrix A can be expressed uniquely as P + iQ, where P and Q are Hermitian.
Proof. For any square matrix A we see that
(A + A*)* = A* + (A*)* = A* + A = A + A*
and (A - A*)* = A* - (A*)* = A* - A = -(A - A*).
Therefore (1/2)(A + A*) and (1/2)(A - A*) are Hermitian and Skew-Hermitian respectively.
Thus, we write
A = (1/2)(A + A*) + i · (1/2i)(A - A*) = P + iQ,
where P = (1/2)(A + A*) is Hermitian and Q = (1/2i)(A - A*) is also Hermitian, for
Q* = ((1/2i)(A - A*))* = -(1/2i)(A* - A) = (1/2i)(A - A*) = Q.
Thus A has been expressed as P + iQ, where P and Q are Hermitian. For uniqueness, let
A = R + iS, where R* = R and S* = S.
Then
A* = (R + iS)* = R* + (iS)* = R* - iS* = R - iS,
so that
R = (1/2)(A + A*) = P and S = (1/2i)(A - A*) = Q.
This completes the proof.
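The Hermitian decomposition can likewise be checked numerically. A short sketch in plain Python (`ctranspose` as in earlier snippets; the test matrix is our own):

```python
def ctranspose(A):
    # Conjugate transpose A*.
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

def hermitian_parts(A):
    # P = (A + A*)/2 and Q = (A - A*)/(2i) are Hermitian, with A = P + iQ.
    n = len(A)
    As = ctranspose(A)
    P = [[(A[i][j] + As[i][j]) / 2 for j in range(n)] for i in range(n)]
    Q = [[(A[i][j] - As[i][j]) / 2j for j in range(n)] for i in range(n)]
    return P, Q

A = [[1 + 1j, 2 - 1j], [3j, 4]]
P, Q = hermitian_parts(A)
assert P == ctranspose(P) and Q == ctranspose(Q)   # both parts Hermitian
assert all(abs(P[i][j] + 1j * Q[i][j] - A[i][j]) < 1e-12
           for i in range(2) for j in range(2))    # A = P + iQ
```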
Example 2.3.7. Show that every Hermitian matrix A can be uniquely expressed as P + iQ, where P is real and symmetric and Q is real and skew-symmetric.
Proof. Let A = P + iQ, where P and Q are real with P' = P and Q' = -Q. Then
A* = P* + (iQ)* = P' - iQ'   (since P and Q are real)
   = P - i(-Q) = P + iQ = A.
Hence A is Hermitian. For uniqueness, let A = R + iS with R real and symmetric and S real and skew-symmetric. Then
R + iS = P + iQ,
and equating real and imaginary parts gives R = P and S = Q.
This completes the proof.

PROBLEM 2.1
1. Prove that, if A, B and C are matrices all of the same size,
(A + C) - (A + B) = C - B.
2. Solve the equation
X + [ 0  0  1 ]   [ 2  1  2 ]
    [ 0  1  0 ] = [ 1  2  3 ]
    [ 1  0  0 ]   [ 4  3  4 ]
for the matrix X.
3. If A = [ 2  5   -1 ] and B = [ 7  …   … ]
          [ 4  3   2  ]         [ 5  -3  4 ]
evaluate the matrices (i) A + B, (ii) Ā + B̄, (iii) 5A - B, (iv) 2A + 2B.
4. If P = [ 3  1  2 ] and Q = [ …  -1  … ], find 3P - 4Q.
5. Evaluate the matrices (i) 2A + C, (ii) A - B, (iii) 3A - B + C.


5 Rank and Equivalence

5.1 THE CONCEPT OF A RANK
In the present chapter the square submatrices of a matrix A, which are defined to be either A or any matrix remaining after certain lines are deleted from A, are of particular importance to us. For example, the 3 × 4 matrix
A = [ 1   2   -1  3  ]
    [ 2   4   -4  7  ]
    [ -1  -2  -1  -2 ]
has 4 square submatrices of maximum order 3:
[ 1   2   -1 ]  [ 1   2   3  ]  [ 1   -1  3  ]  [ 2   -1  3  ]
[ 2   4   -4 ]  [ 2   4   7  ]  [ 2   -4  7  ]  [ 4   -4  7  ]
[ -1  -2  -1 ]  [ -1  -2  -2 ]  [ -1  -1  -2 ]  [ -2  -1  -2 ]
and it has many square submatrices of order 2.

Now we recall the definition of a minor of a given matrix: the determinant of a square submatrix of order r of a given matrix will be called a (determinant) minor of order r of the matrix.
Definition 5.1.1. The rank of a matrix A is the order, say r, of its largest non-zero minor.
In this definition, all minors of this order r may be non-zero, only some may be non-zero, or only one may be non-zero. But every minor of order (r + 1) must be zero. Thus, a matrix is said to be of rank r if and only if it has at least one (determinant) minor of order r which is not zero, but every minor of order (r + 1) is zero. A matrix is said to be of rank zero if and only if all its elements are zero.

Thus the rank r of an m × n non-zero matrix A is a positive number which is less than or equal to the minimum of m and n, i.e., r ≤ min(m, n). By this, the rank of a non-singular matrix of order n is always n. The rank of a matrix A may be denoted by ρ(A).
Example 5.1.1. Find the rank of the matrix
A = [ 2  1  -1 ]
    [ 0  3  -2 ]
    [ 2  4  -3 ]
Solution. It is a square matrix of order 3. Therefore the minor of largest order is the determinant |A| of order 3. Expanding along the first column,
|A| = 2(-9 + 8) - 0(-3 + 4) + 2(-2 + 3) = -2 - 0 + 2 = 0.
Here the minor of order 3 is zero. Therefore ρ(A) ≠ 3, ρ(A) < 3.
Now we consider the minors of A of order 2; for example,
| 2  1 |
| 0  3 | = 6 - 0 = 6 ≠ 0.
By the definition, the rank of A = 2.

Example 5.1.2. Find the rank of the matrix
A = [ 1  2  3  4  ]
    [ 2  4  6  8  ]
    [ 3  6  9  12 ]
Solution. The matrix A is of order 3 × 4, so it has 4 square submatrices of largest order 3:
A1 = [ 1  2  3 ]  A2 = [ 1  2  4  ]  A3 = [ 1  3  4  ]  A4 = [ 2  3  4  ]
     [ 2  4  6 ]       [ 2  4  8  ]       [ 2  6  8  ]       [ 4  6  8  ]
     [ 3  6  9 ]       [ 3  6  12 ]       [ 3  9  12 ]       [ 6  9  12 ]
Now we compute the minors |A1|, |A2|, |A3|, |A4|. In each of them the second row is twice the first, so
|A1| = 0, |A2| = 0, |A3| = 0, |A4| = 0.
So the rank of A cannot be 3; it will be less than 3, ρ(A) < 3.
Now we compute the minors of order 2. It is clear that every minor of order 2 of the elements of the square submatrices A1, A2, A3 and A4 is zero as well, since the rows of A are proportional. So ρ(A) < 2; therefore ρ(A) = 1.
Example 5.1.3. The rank of the transpose of a matrix is the same as that of the original matrix:
ρ(A') = ρ(A).
Solution. Let A = [a_ij]_{m×n} be an m × n matrix of rank r. Then A' = [a_ji]_{n×m} is the transpose of A, of order n × m. There exists a non-singular square submatrix M_r of A of order r. Its transpose M_r' is a submatrix of A', and
|M_r'| = |M_r| ≠ 0.
Hence ρ(A') ≥ r.   ...(1)
Now consider any square submatrix A_{r+1} of A of order (r + 1). Since ρ(A) = r, |A_{r+1}| = 0. Its transpose A'_{r+1} is a square submatrix of A' with |A'_{r+1}| = |A_{r+1}| = 0, so ρ(A') ≤ r. Combining with (1), ρ(A') = r.
Example 5.1.4. Prove that three points (x1, y1), (x2, y2), (x3, y3) in the plane are collinear if and only if the rank of the matrix
[ x1  y1  1 ]
[ x2  y2  1 ]
[ x3  y3  1 ]
is less than 3.
Proof. Suppose the points (x1, y1), (x2, y2), (x3, y3) are collinear, so that they lie on a line. Let the equation of the line be
ax + by + c = 0.   ...(1)
Since the three points lie on (1), we have
ax1 + by1 + c = 0   ...(2)
ax2 + by2 + c = 0   ...(3)
ax3 + by3 + c = 0   ...(4)
Eliminating a, b, c from (2), (3) and (4), we get
| x1  y1  1 |
| x2  y2  1 | = 0.
| x3  y3  1 |
This shows that the rank of the matrix is less than 3. Conversely, if the rank is less than 3 the determinant vanishes, so the system (2)-(4) has a non-trivial solution (a, b, c), and the three points lie on the line ax + by + c = 0.
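The criterion is a one-line test once a 3 × 3 determinant is available. A small sketch in plain Python (function names ours):

```python
def det3(M):
    # Determinant of a 3 x 3 matrix by cofactor expansion.
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def collinear(p1, p2, p3):
    # Rank of [[x1,y1,1],[x2,y2,1],[x3,y3,1]] is < 3 iff this determinant vanishes.
    return det3([[p1[0], p1[1], 1],
                 [p2[0], p2[1], 1],
                 [p3[0], p3[1], 1]]) == 0

assert collinear((0, 0), (1, 1), (2, 2))        # all on the line y = x
assert not collinear((0, 0), (1, 0), (0, 1))
```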


5.2 ELEMENTARY TRANSFORMATIONS
To find the rank of a matrix by a direct application of the definition would be very tedious except in the simplest cases. We therefore investigate some methods of altering a matrix in such a way that the rank remains the same but is simpler to determine. These methods are based on the following types of operations, which are called elementary transformations of a matrix.
5.2.1. Elementary Transformations
(a) The interchange of the ith and jth rows (columns), denoted by R_ij (C_ij).
(b) The multiplication of all elements of the ith row (column) by the same non-zero constant k, denoted by kR_i (kC_i).
(c) The addition to the ith row (column) of an arbitrary multiple of the jth row (column), denoted by R_i + kR_j (C_i + kC_j).
The inverse of an elementary transformation is defined to be the operation which transforms the matrix obtained by the elementary transformation back to the original matrix. That is, if we apply the elementary transformation R_ij to a matrix A, then R_ij itself is the inverse of R_ij. Similarly, (1/k)R_i is the inverse of kR_i, and R_i - kR_j is the inverse of R_i + kR_j. Thus the inverse of an elementary transformation is an elementary transformation of the same type.
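Each transformation and its inverse can be coded directly; applying one after the other restores the matrix. A minimal sketch in plain Python (the functions return new matrices; names ours):

```python
def swap_rows(A, i, j):
    # R_ij: interchange rows i and j.
    B = [row[:] for row in A]
    B[i], B[j] = B[j], B[i]
    return B

def scale_row(A, i, k):
    # kR_i: multiply row i by the non-zero constant k.
    B = [row[:] for row in A]
    B[i] = [k * x for x in B[i]]
    return B

def add_row(A, i, j, k):
    # R_i + kR_j: add k times row j to row i.
    B = [row[:] for row in A]
    B[i] = [x + k * y for x, y in zip(B[i], B[j])]
    return B

A = [[1, 2], [3, 4]]
# Each transformation is undone by its inverse of the same type.
assert swap_rows(swap_rows(A, 0, 1), 0, 1) == A
assert scale_row(scale_row(A, 0, 2), 0, 0.5) == A
assert add_row(add_row(A, 0, 1, 3), 0, 1, -3) == A
```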
5.3 EQUIVALENT MATRICES
Definition 5.3.1. Two matrices A and B are said to be equivalent, A ~ B, if the matrix B can be obtained from the matrix A by applying elementary transformations to A, and A can be obtained from B by applying elementary transformations to B.
5.4 ELEMENTARY MATRICES
Definition 5.4.1. A matrix obtained from the unit matrix by applying to it any one of the three elementary transformations is called an elementary matrix.
Theorem 5.4.1. When an elementary transformation is applied to a matrix, there results a matrix of the same rank. That is, equivalent matrices have the same rank.
Proof. We have three elementary transformations, so this result will be proved in three stages.
Case I. The interchange of the ith and jth rows (columns) of a matrix A.
Let r be the rank of the given matrix A. Then there exists a minor of order r of the matrix A which is not zero; let this minor be M_r. Let B be the matrix obtained by interchanging the ith and jth rows of A. If the minor M_r contains both the ith and jth rows, then the interchange changes M_r into -M_r, which is not zero. If M_r contains neither the ith nor the jth row, then M_r is unaltered. This shows that the rank of B is also r.
Case II. The multiplication of the elements of the ith row of the matrix A by a non-zero constant k.
Let r be the rank of the matrix A. Then there exists a minor of order r of the matrix A which is not zero; let this minor be M_r. Let B be the matrix obtained by multiplying the ith row of A by k. If the minor contains the ith row, then the multiplication changes M_r into kM_r, which is not zero; if it does not contain the ith row, then M_r remains unchanged. This shows that the rank of B is also r.
Case III. The addition to the elements of a row of k times the corresponding elements of another row of the matrix A.
Let A = [a_ij] be of order m × n with m ≤ n. If the rank of A is m, the minor of order m is not zero, and the operation R_i + kR_j does not alter its value (a determinant is unchanged when a multiple of one row is added to another); so this elementary transformation does not change the rank of A.
If ρ(A) < m, let r and r' be the ranks respectively of A and of B, where B is obtained by adding to the elements of the ith row of A k times the corresponding elements of its jth row. Now consider a minor |B_{r+1}| of B of order (r + 1) and the correspondingly placed minor |A_{r+1}| of A. Here we have three possibilities:
1. If |B_{r+1}| contains both the ith and jth rows, then |B_{r+1}| = |A_{r+1}|.
2. If |B_{r+1}| does not contain the ith row, then |B_{r+1}| = |A_{r+1}|.
3. If |B_{r+1}| contains the ith row but not the jth row, then the elements of the ith row of |B_{r+1}| are a_{is} + k a_{js}, s = 1, 2, .... Therefore
|B_{r+1}| = |A_{r+1}| + k|C_{r+1}|,
where |C_{r+1}| is the determinant obtained by replacing the ith row of |A_{r+1}| by the corresponding elements of the jth row, so that up to sign it is also a minor of A of order (r + 1).
Since the rank of A is r, every minor of order (r + 1) of the matrix A is zero: |A_{r+1}| = 0 and |C_{r+1}| = 0, and consequently |B_{r+1}| = 0 in every case. Hence r' ≤ r. Since the inverse operation R_i - kR_j is of the same type, we also have r ≤ r', so r' = r.
Hence elementary transformations do not alter the rank of a matrix.
Example 5.4.1. Find the rank of the matrix
A = [ 1   2   -1  3  ]
    [ 2   4   -4  7  ]
    [ -1  -2  -1  -2 ]
Solution. Applying R2 - 2R1 and R3 + R1, we have
A ~ [ 1  2  -1  3 ]
    [ 0  0  -2  1 ]
    [ 0  0  -2  1 ]
and then, applying R3 - R2,
B = [ 1  2  -1  3 ]
    [ 0  0  -2  1 ]
    [ 0  0  0   0 ]
All third order submatrices of the last matrix are singular, but the minor of order 2
| -1  3 |
| -2  1 | = -1 + 6 = 5 ≠ 0.
Hence ρ(A) = 2.
Example 5.4.2. Find the rank of the matrix
A = [ 1  3  4   5 ]
    [ 3  9  12  3 ]
    [ 1  3  4   1 ]
Solution. Applying C2 - 3C1, C3 - 4C1 and C4 - C1, we have
A ~ [ 1  0  0  4 ]
    [ 3  0  0  0 ]
    [ 1  0  0  0 ]
then, applying C(2,4),
~ [ 1  4  0  0 ]
  [ 3  0  0  0 ]
  [ 1  0  0  0 ]
and, applying R2 - 3R3 and then R(2,3),
~ [ 1  4  0  0 ]
  [ 1  0  0  0 ]
  [ 0  0  0  0 ]
It is clear that all third order submatrices of this last matrix are singular, but one minor of order 2,
| 1  4 |
| 1  0 | = 0 - 4 = -4 ≠ 0.
Hence ρ(A) = 2.
Hence
Example 5.4.3. Determine the rank of the matrix
A = [ 1  2  3  1 ]
    [ 2  4  6  2 ]
    [ 1  2  3  2 ]
Solution. Applying R2 - 2R1 and R3 - R1, we have
A ~ [ 1  2  3  1 ]
    [ 0  0  0  0 ]
    [ 0  0  0  1 ]
and then, applying R(2,3),
~ [ 1  2  3  1 ]
  [ 0  0  0  1 ]
  [ 0  0  0  0 ]
It is clear that all third order submatrices of the last matrix are singular. But we have a minor of order 2,
| 3  1 |
| 0  1 | = 3 - 0 = 3 ≠ 0.
Hence ρ(A) = 2.

PROBLEM 5.2
1. Find the rank of the following matrices:
(a) A = [ 1   3   4   3  ]   ρ(A) = 1.
        [ 3   9   12  9  ]
        [ -1  -3  -4  -3 ]
(b) A = [ 8   3   6  ]   ρ(A) = 3.
        [ 3   2   2  ]
        [ -8  -1  -3 ]
(c) A = [ 6   1  3   8  ]   ρ(A) = 2.
        [ 4   2  6   -1 ]
        [ 10  3  9   7  ]
        [ 16  4  12  15 ]
(d) A = [ 2  3   -1  -1 ]   ρ(A) = 3.
        [ 1  -1  -2  -4 ]
        [ 3  1   3   -2 ]
        [ 6  3   0   -7 ]
(e) A = [ 3  -2  0   -1  -7 ]   ρ(A) = 4.
        [ 0  2   2   1   -5 ]
        [ 1  -2  -3  -2  1  ]
        [ 0  1   2   1   -6 ]
5.5 NORMAL FORM
Every non-zero matrix of rank r can, by a finite sequence of elementary transformations, be reduced to one of the forms
I_r,   [ I_r  0 ],   [ I_r ],   [ I_r  0 ]
                     [ 0   ]    [ 0    0 ]
where r is its rank and I_r is an identity matrix of order r. The above four forms are called the Normal form or Canonical form of the given matrix A.
We get the normal form of the matrix A by subjecting A to elementary transformations in the following manner:
1. We first use an elementary transformation of type (a), if necessary, to obtain a non-zero element (preferably a 1) in the first row and first column of the given matrix.
2. We divide the first row by this element, if it is not 1.
3. We subtract appropriate multiples of the first row from the other rows so as to obtain zeros in the remainder of the first column.
4. We subtract appropriate multiples of the first column from the other columns so as to obtain zeros in the remainder of the first row.
5. We repeat steps (1) to (4) starting with the element in the second row and second column.
6. We continue thus down the main diagonal, either until the end of the diagonal is reached or until all the remaining elements in the matrix are zero.
The final matrix then has one of the forms given above. We know that elementary transformations do not alter the rank of a matrix. Therefore the rank of the normal form will be the same as the rank of the given matrix A.
therank
Example 5.5.1. Reduce the following matrices to their normal forms and find their ranks:
(1) [ 8   1   3   6 ]   (2) [ 1  4  3  2 ]
    [ 0   3   2   2 ]       [ 1  2  3  4 ]
    [ -8  -1  -3  4 ]       [ 2  6  7  5 ]
(3) [ -2  -1  -3  -1 ]   (4) [ 2  3   -1  -1 ]
    [ 1   2   3   -1 ]       [ 1  -1  -2  -4 ]
    [ 1   0   1   1  ]       [ 3  1   3   -2 ]
    [ 0   1   1   -1 ]       [ 6  3   0   -7 ]
Solution. (1) Applying R3 + R1, we have
A ~ [ 8  1  3  6  ]
    [ 0  3  2  2  ]
    [ 0  0  0  10 ]
then, applying C(1,2) followed by C2 - 8C1, C3 - 3C1, C4 - 6C1 and R2 - 3R1,
~ [ 1  0    0   0   ]
  [ 0  -24  -7  -16 ]
  [ 0  0    0   10  ]
Applying (-1/24)C2, then C3 + 7C2 and C4 + 16C2,
~ [ 1  0  0  0  ]
  [ 0  1  0  0  ]
  [ 0  0  0  10 ]
and finally, applying (1/10)R3 and C(3,4),
~ [ 1  0  0  0 ] = [ I3 : 0 ].
  [ 0  1  0  0 ]
  [ 0  0  1  0 ]
Hence ρ(A) = 3.
(2) Applying R2 - R1 and R3 - 2R1, we have
A ~ [ 1  4   3  2 ]
    [ 0  -2  0  2 ]
    [ 0  -2  1  1 ]
then, applying C2 - 4C1, C3 - 3C1, C4 - 2C1 and R3 - R2,
~ [ 1  0   0  0  ]
  [ 0  -2  0  2  ]
  [ 0  0   1  -1 ]
Applying (-1/2)C2, then C4 - 2C2 and C4 + C3,
~ [ 1  0  0  0 ] = [ I3 : 0 ].
  [ 0  1  0  0 ]
  [ 0  0  1  0 ]
Hence ρ(A) = 3.
14. Give an example to show that the normal form of a product of two matrices is not necessarily the product of their normal forms.
15. Give an example to show that the rank of a product of matrices may be less than the rank of either factor. Show that it is possible, by using only row operations, to reduce any matrix A to an equivalent matrix [a_ij] such that a_ij = 0 if i > j. If A has rank r, then no more than r of the elements a_ii may differ from zero.
16. Prove that not every matrix A can be reduced to a normal form by row transformations only.
[Hint: Exhibit a matrix which cannot be reduced to a normal form by row transformations only.]
5.6 ELEMENTARY TRANSFORMATION BY MATRIX MULTIPLICATION
Here we shall see an important fact: the elementary transformations of a given matrix can be brought about by pre- or post-multiplying it by suitably chosen square matrices of very simple types.
Theorem 5.6.1. To effect an elementary transformation on a given matrix A, we may first perform the same elementary transformation on an identity matrix of appropriate order, and then pre-multiply A by the result if the operation is on rows, post-multiply if it is on columns.
Proof. Suppose first that we wish to interchange the second and third rows of a given matrix. This can be done by first interchanging the 2nd and 3rd rows of the unit matrix and pre-multiplying the given matrix by it. That is,
[ 1  0  0 ] [ a11  a12  a13 ]   [ a11  a12  a13 ]
[ 0  0  1 ] [ a21  a22  a23 ] = [ a31  a32  a33 ]
[ 0  1  0 ] [ a31  a32  a33 ]   [ a21  a22  a23 ]
On the other hand, the interchange of the 2nd and 3rd columns is effected by interchanging the 2nd and 3rd columns of the unit matrix and post-multiplying the given matrix by it. Thus
[ a11  a12  a13 ] [ 1  0  0 ]   [ a11  a13  a12 ]
[ a21  a22  a23 ] [ 0  0  1 ] = [ a21  a23  a22 ]
[ a31  a32  a33 ] [ 0  1  0 ]   [ a31  a33  a32 ]
From this it is obvious that a 1 on the diagonal leaves the corresponding row (column) unaltered, but an off-diagonal 1 selects the elements of the row (column) corresponding to the column (row) in which that 1 is situated.
Next suppose we wish to multiply a row or a column by a constant k:
[ 1  0  0 ] [ a11  a12  a13 ]   [ a11   a12   a13  ]
[ 0  k  0 ] [ a21  a22  a23 ] = [ ka21  ka22  ka23 ]
[ 0  0  1 ] [ a31  a32  a33 ]   [ a31   a32   a33  ]
and post-multiplication by the same elementary matrix multiplies the second column by k.
Finally, to add, say, k times the third row to the second row, or k times the third column to the second column, we have
[ 1  0  0 ] [ a11  a12  a13 ]   [ a11       a12       a13       ]
[ 0  1  k ] [ a21  a22  a23 ] = [ a21+ka31  a22+ka32  a23+ka33 ]
[ 0  0  1 ] [ a31  a32  a33 ]   [ a31       a32       a33       ]
and
                   [ 1  0  0  0 ]
[ a1  a2  a3  a4 ] [ 0  1  0  0 ]   [ a1  a2+ka3  a3  a4 ]
[ b1  b2  b3  b4 ] [ 0  k  1  0 ] = [ b1  b2+kb3  b3  b4 ]
                   [ 0  0  0  1 ]
Here an off-diagonal k in the ij-position of the pre-multiplier (post-multiplier) adds to the ith row (jth column) of the multiplied matrix k times its jth row (ith column).
From the above discussion we conclude that any elementary row transformation on a given matrix can be accomplished by pre-multiplying it by a suitable elementary matrix, and any elementary column transformation can be accomplished by post-multiplying it by a suitable elementary matrix.
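These facts are easy to observe numerically: build the elementary matrix from the identity and multiply. A short sketch in plain Python (helper names ours):

```python
def matmul(A, B):
    # Ordinary matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Interchanging rows 2 and 3 of I3 gives an elementary matrix E.
E = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
assert matmul(E, A) == [[1, 2, 3], [7, 8, 9], [4, 5, 6]]   # pre-multiply: swaps rows
assert matmul(A, E) == [[1, 3, 2], [4, 6, 5], [7, 9, 8]]   # post-multiply: swaps columns

# A k in the (2,3)-position of I3 adds k times row 3 to row 2 on pre-multiplication.
Ek = [[1, 0, 0], [0, 1, 2], [0, 0, 1]]
assert matmul(Ek, A) == [[1, 2, 3], [18, 21, 24], [7, 8, 9]]
```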
Theorem 5.6.2. If A and B are equivalent matrices, then there exist non-singular matrices C and D such that B = CAD.
Proof. Since A and B are equivalent matrices, B is obtained from A by applying to A a sequence of elementary row and column transformations. But an elementary row transformation can be accomplished by pre-multiplying A by an elementary matrix of appropriate order, and an elementary column transformation can be accomplished by post-multiplying A by an elementary matrix of appropriate order. Hence
C_p C_{p-1} ... C_1 A D_1 D_2 ... D_q = B,
i.e., CAD = B,
where C = C_p C_{p-1} ... C_1 and D = D_1 D_2 ... D_q.
Since elementary matrices are non-singular, C and D are non-singular.
Corollary 1. Every non-singular matrix can be expressed as a product of elementary matrices.
Proof. Let A be a non-singular matrix of order n and let I_n be the identity matrix of order n. Since A and I_n are equivalent, we have
A = C_p ... C_1 I_n D_1 ... D_q = C_p ... C_1 D_1 ... D_q,
a product of elementary matrices.
Corollary 2. If A is an m × n matrix of rank r, then there exist two non-singular matrices C and D of order m and n respectively such that
CAD = [ I_r  0 ]
      [ 0    0 ]
Proof. By a sequence of elementary row and column transformations, A can be reduced to the normal form
[ I_r  0 ]
[ 0    0 ]
So we can say that A and this normal form are equivalent. But elementary row and column transformations can be effected by pre- and post-multiplying A by elementary matrices. So we have
C_p C_{p-1} ... C_1 A D_1 D_2 ... D_q = [ I_r  0 ]
                                        [ 0    0 ]
i.e., CAD = [ I_r  0 ]
            [ 0    0 ]
where C = C_p C_{p-1} ... C_1 and D = D_1 D_2 ... D_q, and each C_i and D_j is an elementary matrix. Since elementary matrices are non-singular, C and D are non-singular.
Example 5.6.1. For the following matrices, find non-singular matrices P and Q such that PAQ is in the normal form:
(i) [ 1  1   2  ]   (ii) [ 1  -1  2   -1 ]
    [ 1  2   3  ]        [ 4  2   -1  2  ]
    [ 0  -1  -1 ]        [ 2  2   -2  0  ]
(iii) [ 1  2  3 ]   (iv) [ 3  -1   2   1 ]
      [ 3  2  1 ]        [ 1  4    6   1 ]
      [ 1  3  2 ]        [ 7  -11  -6  1 ]
      [ 2  1  3 ]        [ 7  2    12  3 ]
Solution. (i) We write A = I3 A I3, i.e.,
[ 1  1   2  ]   [ 1  0  0 ]   [ 1  0  0 ]
[ 1  2   3  ] = [ 0  1  0 ] A [ 0  1  0 ]
[ 0  -1  -1 ]   [ 0  0  1 ]   [ 0  0  1 ]
Now we reduce the matrix A on the left-hand side to the normal form by applying elementary transformations. Whenever we apply an elementary row (column) transformation, the same transformation is applied to the prefactor (post-factor) of A on the right-hand side.
Applying R2 - R1:
[ 1  1   2  ]   [ 1   0  0 ]
[ 0  1   1  ] = [ -1  1  0 ] A I3
[ 0  -1  -1 ]   [ 0   0  1 ]
Applying R3 + R2:
[ 1  1  2 ]   [ 1   0  0 ]
[ 0  1  1 ] = [ -1  1  0 ] A I3
[ 0  0  0 ]   [ -1  1  1 ]
Applying C2 - C1, C3 - 2C1 and then C3 - C2:
[ 1  0  0 ]   [ 1   0  0 ]   [ 1  -1  -1 ]
[ 0  1  0 ] = [ -1  1  0 ] A [ 0  1   -1 ]
[ 0  0  0 ]   [ -1  1  1 ]   [ 0  0   1  ]
Thus we have the required normal form, where
P = [ 1   0  0 ]   Q = [ 1  -1  -1 ]
    [ -1  1  0 ]       [ 0  1   -1 ]
    [ -1  1  1 ]       [ 0  0   1  ]
and ρ(A) = 2.
(ii) We write A = I3 A I4:
[ 1  -1  2   -1 ]   [ 1  0  0 ]   [ 1  0  0  0 ]
[ 4  2   -1  2  ] = [ 0  1  0 ] A [ 0  1  0  0 ]
[ 2  2   -2  0  ]   [ 0  0  1 ]   [ 0  0  1  0 ]
                                  [ 0  0  0  1 ]
Applying C2 + C1, C3 - 2C1, C4 + C1:
[ 1  0  0   0 ]        [ 1  1  -2  1 ]
[ 4  6  -9  6 ] = I3 A [ 0  1  0   0 ]
[ 2  4  -6  2 ]        [ 0  0  1   0 ]
                       [ 0  0  0   1 ]
Applying R2 - 4R1, R3 - 2R1 (the post-factor unchanged):
[ 1  0  0   0 ]   [ 1   0  0 ]
[ 0  6  -9  6 ] = [ -4  1  0 ] A Q
[ 0  4  -6  2 ]   [ -2  0  1 ]
Applying (1/6)R2, then C3 + (3/2)C2 and C4 - C2:
[ 1  0  0  0  ]   [ 1     0    0 ]   [ 1  1  -1/2  0  ]
[ 0  1  0  0  ] = [ -2/3  1/6  0 ] A [ 0  1  3/2   -1 ]
[ 0  4  0  -2 ]   [ -2    0    1 ]   [ 0  0  1     0  ]
                                     [ 0  0  0     1  ]
Applying R3 - 4R2 (the post-factor unchanged):
[ 1  0  0  0  ]   [ 1     0     0 ]
[ 0  1  0  0  ] = [ -2/3  1/6   0 ] A Q
[ 0  0  0  -2 ]   [ 2/3   -2/3  1 ]
Applying (-1/2)R3 and C(3,4):
[ 1  0  0  0 ]
[ 0  1  0  0 ] = P A Q = [ I3 : 0 ]
[ 0  0  1  0 ]
This is the required normal form, where
P = [ 1     0    0    ]   Q = [ 1  1  0   -1/2 ]
    [ -2/3  1/6  0    ]       [ 0  1  -1  3/2  ]
    [ -1/3  1/3  -1/2 ]       [ 0  0  0   1    ]
                              [ 0  0  1   0    ]
and ρ(A) = 3.
(iii) We write A = I4 A I3:
[ 1  2  3 ]   [ 1  0  0  0 ]   [ 1  0  0 ]
[ 3  2  1 ] = [ 0  1  0  0 ] A [ 0  1  0 ]
[ 1  3  2 ]   [ 0  0  1  0 ]   [ 0  0  1 ]
[ 2  1  3 ]   [ 0  0  0  1 ]
Applying R2 - 3R1, R3 - R1, R4 - 2R1:
[ 1  2   3  ]   [ 1   0  0  0 ]
[ 0  -4  -8 ] = [ -3  1  0  0 ] A I3
[ 0  1   -1 ]   [ -1  0  1  0 ]
[ 0  -3  -3 ]   [ -2  0  0  1 ]
Applying C2 - 2C1, C3 - 3C1 (the prefactor unchanged):
[ 1  0   0  ]       [ 1  -2  -3 ]
[ 0  -4  -8 ] = P A [ 0  1   0  ]
[ 0  1   -1 ]       [ 0  0   1  ]
[ 0  -3  -3 ]
Applying (-1/4)R2:
[ 1  0   0  ]   [ 1    0     0  0 ]
[ 0  1   2  ] = [ 3/4  -1/4  0  0 ] A Q
[ 0  1   -1 ]   [ -1   0     1  0 ]
[ 0  -3  -3 ]   [ -2   0     0  1 ]
Applying R3 - R2, R4 + 3R2:
[ 1  0  0  ]   [ 1     0     0  0 ]
[ 0  1  2  ] = [ 3/4   -1/4  0  0 ] A Q
[ 0  0  -3 ]   [ -7/4  1/4   1  0 ]
[ 0  0  3  ]   [ 1/4   -3/4  0  1 ]
Applying C3 - 2C2 (the prefactor unchanged):
[ 1  0  0  ]       [ 1  -2  1  ]
[ 0  1  0  ] = P A [ 0  1   -2 ]
[ 0  0  -3 ]       [ 0  0   1  ]
[ 0  0  3  ]
Applying (-1/3)R3 and then R4 - 3R3:
[ 1  0  0 ]   [ 1     0      0     0 ]
[ 0  1  0 ] = [ 3/4   -1/4   0     0 ] A Q
[ 0  0  1 ]   [ 7/12  -1/12  -1/3  0 ]
[ 0  0  0 ]   [ -3/2  -1/2   1     1 ]
This is the required normal form, where
P = [ 1     0      0     0 ]   Q = [ 1  -2  1  ]
    [ 3/4   -1/4   0     0 ]       [ 0  1   -2 ]
    [ 7/12  -1/12  -1/3  0 ]       [ 0  0   1  ]
    [ -3/2  -1/2   1     1 ]
and ρ(A) = 3.

(iv) We write A = I4 A I4:
[ 3  -1   2   1 ]   [ 1  0  0  0 ]   [ 1  0  0  0 ]
[ 1  4    6   1 ] = [ 0  1  0  0 ] A [ 0  1  0  0 ]
[ 7  -11  -6  1 ]   [ 0  0  1  0 ]   [ 0  0  1  0 ]
[ 7  2    12  3 ]   [ 0  0  0  1 ]   [ 0  0  0  1 ]
Applying R(1,2):
[ 1  4    6   1 ]   [ 0  1  0  0 ]
[ 3  -1   2   1 ] = [ 1  0  0  0 ] A I4
[ 7  -11  -6  1 ]   [ 0  0  1  0 ]
[ 7  2    12  3 ]   [ 0  0  0  1 ]
Applying R2 - 3R1, R3 - 7R1, R4 - 7R1:
[ 1  4    6    1  ]   [ 0  1   0  0 ]
[ 0  -13  -16  -2 ] = [ 1  -3  0  0 ] A I4
[ 0  -39  -48  -6 ]   [ 0  -7  1  0 ]
[ 0  -26  -30  -4 ]   [ 0  -7  0  1 ]
Applying C2 - 4C1, C3 - 6C1, C4 - C1 (the prefactor unchanged):
[ 1  0    0    0  ]       [ 1  -4  -6  -1 ]
[ 0  -13  -16  -2 ] = P A [ 0  1   0   0  ]
[ 0  -39  -48  -6 ]       [ 0  0   1   0  ]
[ 0  -26  -30  -4 ]       [ 0  0   0   1  ]
Applying (-1/13)R2, then R3 + 39R2 and R4 + 26R2:
[ 1  0  0      0    ]   [ 0      1     0  0 ]
[ 0  1  16/13  2/13 ] = [ -1/13  3/13  0  0 ] A Q
[ 0  0  0      0    ]   [ -3     2     1  0 ]
[ 0  0  2      0    ]   [ -2     -1    0  1 ]
Applying C3 - (16/13)C2, C4 - (2/13)C2 (the prefactor unchanged):
[ 1  0  0  0 ]       [ 1  -4  -14/13  -5/13 ]
[ 0  1  0  0 ] = P A [ 0  1   -16/13  -2/13 ]
[ 0  0  0  0 ]       [ 0  0   1       0     ]
[ 0  0  2  0 ]       [ 0  0   0       1     ]
Applying R(3,4) and then (1/2)R3:
[ 1  0  0  0 ]   [ 0      1     0  0   ]
[ 0  1  0  0 ] = [ -1/13  3/13  0  0   ] A Q
[ 0  0  1  0 ]   [ -1     -1/2  0  1/2 ]
[ 0  0  0  0 ]   [ -3     2     1  0   ]
This is the required normal form, where
P = [ 0      1     0  0   ]   Q = [ 1  -4  -14/13  -5/13 ]
    [ -1/13  3/13  0  0   ]       [ 0  1   -16/13  -2/13 ]
    [ -1     -1/2  0  1/2 ]       [ 0  0   1       0     ]
    [ -3     2     1  0   ]       [ 0  0   0       1     ]
Hence ρ(A) = 3.

5.7 COMPUTATION OF THE INVERSE OF A MATRIX BY ELEMENTARY TRANSFORMATIONS
Let A be a non-singular matrix of order n. The matrix A can be reduced to I_n by a sequence of elementary row transformations. Therefore there exist elementary matrices E_1, E_2, ..., E_p such that
E_p ... E_2 E_1 A = I_n,
i.e., BA = I_n, where B = E_p ... E_2 E_1. Hence B = A^{-1}: the product of the elementary matrices which reduce A to the identity is the inverse of A.