Law of Matrices PT 1
1.5 THE CONJUGATE OF A MATRIX: HERMITIAN AND SKEW-HERMITIAN MATRICES
Definition 1.5.1. Let A be an m × n matrix having complex numbers as its elements. The matrix of order m × n which is obtained from A by replacing each element of A by its conjugate is called the conjugate of A, denoted by Ā. Thus if
A = [a_ij]_(m×n), then Ā = [ā_ij]_(m×n),
where ā_ij is the conjugate of a_ij.
If A is a real matrix, then Ā = A.
Definition 1.5.2. The conjugate of the transpose of a matrix A is called the conjugate transpose of A and is denoted by A*. Thus if A = [a_ij], then
A* = (Ā)' = [ā_ji].
Clearly the conjugate of the transpose is the same as the transpose of the conjugate, i.e.
A* = (Ā)' = (A')‾.
Also, clearly (A*)* = A.
If A is a real matrix, then A* = A'.
For example, if
A = | 1+2i   2-3i   3+4i |
    | 4-5i   5+6i   6-7i |
    | 8      7+8i   7    |
then
A' = | 1+2i   4-5i   8    |
     | 2-3i   5+6i   7+8i |
     | 3+4i   6-7i   7    |
and
A* = (A')‾ = | 1-2i   4+5i   8    |
             | 2+3i   5-6i   7-8i |
             | 3-4i   6+7i   7    |
Definition 1.5.3. A square matrix A such that A = A* is called Hermitian. Thus, a square matrix A = [a_ij] is Hermitian if a_ij = ā_ji for all i and j. In particular, for i = j, a_ii = ā_ii.
Thus, the diagonal elements of a Hermitian matrix are real, and the symmetrically placed elements with respect to the diagonal of A are complex conjugates. In case all elements are real, the matrix is symmetric.
Example 1.5.3. The matrix
A = | 1      2+i    3+2i |
    | 2-i    3      -3i  |
    | 3-2i   3i     -2   |
is Hermitian. For
A' = | 1      2-i    3-2i |
     | 2+i    3      3i   |
     | 3+2i   -3i    -2   |
and
A* = (A')‾ = | 1      2+i    3+2i |
             | 2-i    3      -3i  |
             | 3-2i   3i     -2   |  = A.
Definition 1.5.4. A square matrix A such that A* = -A is called Skew-Hermitian. Thus a square matrix A = [a_ij] is Skew-Hermitian if a_ij = -ā_ji for all i and j.
For i = j, a_ii = -ā_ii, so that each diagonal element a_ii is purely imaginary or zero.
Example 1.5.4. The matrix
A = | i       2+i    3+2i |
    | -2+i    3i     -3i  |
    | -3+2i   -3i    0    |
is Skew-Hermitian. For
A' = | i       -2+i   -3+2i |
     | 2+i     3i     -3i   |
     | 3+2i    -3i    0     |
and
A* = (A')‾ = | -i      -2-i   -3-2i |
             | 2-i     -3i    3i    |
             | 3-2i    3i     0     |
           = - | i       2+i    3+2i |
               | -2+i    3i     -3i  |
               | -3+2i   -3i    0    |  = -A.
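Both definitions are easy to test numerically. The following is a minimal sketch (assuming Python with NumPy; the helper names is_hermitian and is_skew_hermitian are ours, not library functions) that checks the matrices of Examples 1.5.3 and 1.5.4:

```python
import numpy as np

def is_hermitian(A, tol=1e-12):
    """A is Hermitian when A equals its conjugate transpose A*."""
    return np.allclose(A, A.conj().T, atol=tol)

def is_skew_hermitian(A, tol=1e-12):
    """A is Skew-Hermitian when A* = -A."""
    return np.allclose(A.conj().T, -A, atol=tol)

# Matrix of Example 1.5.3 (Hermitian) and of Example 1.5.4 (Skew-Hermitian).
H = np.array([[1,      2 + 1j,  3 + 2j],
              [2 - 1j, 3,      -3j    ],
              [3 - 2j, 3j,     -2     ]])
S = np.array([[1j,      2 + 1j,  3 + 2j],
              [-2 + 1j, 3j,     -3j    ],
              [-3 + 2j, -3j,     0     ]])

print(is_hermitian(H), is_skew_hermitian(S))   # True True
```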
For this type of matrix, the symmetrically placed elements are the negatives of the conjugates of each other. In case an element is real, it would be the negative of its symmetrically placed element. If each element is real, the diagonal elements would be zero and all symmetric pairs would be negatives of each other; in other words, the matrix would be skew-symmetric.

1.6 SUBMATRICES
Example 2.3.1. Properties of scalar multiplication. The operation of multiplying a matrix by a scalar has the following basic properties. Let α, β be scalars and let A and B be two matrices of the same order. Then
(a) (α + β)A = αA + βA,
(b) α(A + B) = αA + αB,
(c) α(βA) = (αβ)A.
Solution. (a) Let A = [a_ij]_(m×n). Then
(α + β)A = (α + β)[a_ij] = [(α + β)a_ij] = [αa_ij + βa_ij] = [αa_ij] + [βa_ij] = αA + βA.
(b) We have
α(A + B) = α([a_ij] + [b_ij]) = α[a_ij + b_ij] = [αa_ij + αb_ij] = [αa_ij] + [αb_ij] = αA + αB.
(c) We have
α(βA) = α[βa_ij] = [αβa_ij] = (αβ)[a_ij] = (αβ)A.
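These identities can be spot-checked numerically. A minimal sketch, assuming Python with NumPy and arbitrarily chosen matrices and scalars:

```python
import numpy as np

# Numerical spot-check of properties (a)-(c); matrices and scalars are arbitrary.
rng = np.random.default_rng(0)
A, B = rng.integers(-5, 5, (3, 3)), rng.integers(-5, 5, (3, 3))
alpha, beta = 2.0, -3.0

print(np.array_equal((alpha + beta) * A, alpha * A + beta * A))  # (a)
print(np.array_equal(alpha * (A + B), alpha * A + alpha * B))    # (b)
print(np.array_equal(alpha * (beta * A), (alpha * beta) * A))    # (c)
```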
Example 2.3.2. If
A = | 1   2  -3 |    B = | 3  -1   2 |    C = | 4   1   2 |
    | 5   0   2 |        | 4   2   5 |        | 0   3   2 |
    | 1  -1   1 |        | 2   0   3 |        | 1  -2   3 |
show that A + (B - C) = (A + B) - C.
Solution. We have
B - C = | -1  -2   0 |
        |  4  -1   3 |
        |  1   2   0 |
and
A + (B - C) = | 1-1   2-2   -3+0 |   | 0   0  -3 |
              | 5+4   0-1    2+3 | = | 9  -1   5 |
              | 1+1  -1+2    1+0 |   | 2   1   1 |
Also
A + B = | 4   1  -1 |
        | 9   2   7 |
        | 3  -1   4 |
and
(A + B) - C = | 4-4   1-1   -1-2 |   | 0   0  -3 |
              | 9-0   2-3    7-2 | = | 9  -1   5 |
              | 3-1  -1+2    4-3 |   | 2   1   1 |
Hence A + (B - C) = (A + B) - C.
Example 2.3.3. Show that
(a) (A + B)' = A' + B',
(b) (A + B)‾ = Ā + B̄.
Proof. (a) Let
A = [a_ij]_(m×n), B = [b_ij]_(m×n);
then
A' = [a_ji]_(n×m), B' = [b_ji]_(n×m)
and
A + B = [a_ij] + [b_ij] = [a_ij + b_ij],
so that
(A + B)' = [a_ji + b_ji] = [a_ji] + [b_ji] = A' + B'.
(b) We have
A + B = [a_ij + b_ij],
and taking conjugates elementwise,
(A + B)‾ = [ā_ij + b̄_ij] = [ā_ij] + [b̄_ij] = Ā + B̄.
Example 2.3.4. For any Hermitian matrix A, show that iA is Skew-Hermitian.
Proof. Let A = [a_ij] be a Hermitian matrix. Then
A* = A, i.e. ā_ji = a_ij for all i and j.
Now iA = [i a_ij], and
(iA)* = ((iA)')‾ = [(i a_ji)‾] = [-i ā_ji] = -i[ā_ji] = -i[a_ij] = -(iA).
Hence iA is Skew-Hermitian.
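A quick numerical illustration of this result, assuming Python with NumPy and an arbitrarily chosen Hermitian matrix:

```python
import numpy as np

# Start from a Hermitian A and confirm that iA is Skew-Hermitian, i.e. (iA)* = -(iA).
A = np.array([[1,      2 + 1j],
              [2 - 1j, 5     ]])          # Hermitian: A* = A
iA = 1j * A
print(np.allclose(A.conj().T, A))          # True,  A is Hermitian
print(np.allclose(iA.conj().T, -iA))       # True,  iA is Skew-Hermitian
```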
Example 2.3.5. Show that every square matrix A can be expressed uniquely as a sum of a symmetric matrix B and a skew-symmetric matrix C.
Proof. For any square matrix A, we see that
(A + A')' = A' + (A')' = A' + A = A + A'
and
(A - A')' = A' - (A')' = A' - A = -(A - A').
Therefore A + A' is symmetric and A - A' is skew-symmetric. Thus, we note
A = (1/2)(A + A') + (1/2)(A - A') = B + C,
where B = (1/2)(A + A') is symmetric and C = (1/2)(A - A') is skew-symmetric.
For uniqueness, let A = R + S, where R' = R and S' = -S. Then
A' = R' + S' = R - S,
and we have
R = (A + A')/2 = B and S = (A - A')/2 = C.
This completes the proof.
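The decomposition is immediate to compute. A minimal sketch, assuming Python with NumPy and an arbitrary square matrix:

```python
import numpy as np

# Split a square matrix into B = (A + A')/2 (symmetric) and C = (A - A')/2 (skew-symmetric).
A = np.array([[1., 2., 0.],
              [4., 3., 7.],
              [5., 6., 9.]])
B = (A + A.T) / 2
C = (A - A.T) / 2
print(np.allclose(B, B.T), np.allclose(C, -C.T), np.allclose(A, B + C))
# True True True
```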
Example 2.3.6. Show that every square matrix A can be expressed uniquely as P + iQ, where P and Q are Hermitian.
Proof. For any square matrix A we see that
(A + A*)* = A* + (A*)* = A* + A = A + A*
and
(A - A*)* = A* - (A*)* = A* - A = -(A - A*).
Therefore (1/2)(A + A*) and (1/2)(A - A*) are Hermitian and Skew-Hermitian respectively. Now write
A = (1/2)(A + A*) + (1/2)(A - A*) = P + iQ,
where
P = (1/2)(A + A*) and Q = (1/2i)(A - A*).
Here P is Hermitian, and Q is also Hermitian, for
Q* = (1/2i)‾ (A - A*)* = -(1/2i)(A* - A) = (1/2i)(A - A*) = Q.
Thus A has been expressed as P + iQ, where P and Q are Hermitian. For uniqueness, let
A = R + iS, where R* = R, S* = S.
Then
A* = (R + iS)* = R* + (iS)* = R* - iS* = R - iS,
so that
R = (1/2)(A + A*) = P and S = (1/2i)(A - A*) = Q.
This completes the proof.
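Again the decomposition can be computed directly. A minimal sketch, assuming Python with NumPy and an arbitrary complex square matrix:

```python
import numpy as np

# For a square complex A, take P = (A + A*)/2 and Q = (A - A*)/(2i);
# both are Hermitian and A = P + iQ.
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 4j, 5 + 6j]])
Astar = A.conj().T
P = (A + Astar) / 2
Q = (A - Astar) / (2j)
print(np.allclose(P, P.conj().T), np.allclose(Q, Q.conj().T))  # True True
print(np.allclose(A, P + 1j * Q))                              # True
```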
Example 2.3.7. Show that every Hermitian matrix A can be uniquely expressed as P + iQ, where P is real and symmetric and Q is real and skew-symmetric.
Proof. Let P and Q be the real and imaginary parts of A, so that A = P + iQ with P and Q real. Since A is Hermitian and P, Q are real, A* = P' - iQ' = A = P + iQ, so that P' = P and Q' = -Q; that is, P is symmetric and Q is skew-symmetric. Conversely, if P' = P and Q' = -Q, then
A* = P* + (iQ)* = P* - iQ*
   = P - i(-Q)      since P* = P' = P and Q* = Q' = -Q
   = P + iQ = A,
so A is Hermitian. For uniqueness, let A = R + iS with R and S real; then
R + iS = P + iQ,
and equating real and imaginary parts, R = P and S = Q.
This completes the proof.
PROBLEM 2.1
1. Prove that, if A, B and C are matrices all of the same size,
   (A + C) - (A + B) = C - B.
2. Solve the equation
   X + | 0  0  1 |   | 2  1  2 |
       | 0  1  0 | = | …  2  3 |
       | 1  0  0 |   | 4  3  4 |
   for the matrix X.
3. If A = | 3  …  … |   and   B = | …   …  … |
          | 2  5 -1 |             | 7   …  … |
          | 4  3  2 |             | 5  -3  4 |
   evaluate the matrices (i) A + B, (ii) A - B, (iii) 5A - B, (iv) 2A + 2B.
4. If P = | 3  1  2 |   and   Q = | …  …  … |,   find 3P - 4Q.
          | …  … -1 |             | …  …  … |
The matrix
A = |  1   2  -1   3 |
    |  2   4  -4   7 |
    | -1  -2  -1  -2 |
has 4 square submatrices of maximum order 3:
|  1   2  -1 |   |  1   2   3 |   |  1  -1   3 |   |  2  -1   3 |
|  2   4  -4 |   |  2   4   7 |   |  2  -4   7 |   |  4  -4   7 |
| -1  -2  -1 |   | -1  -2  -2 |   | -1  -1  -2 |   | -2  -1  -2 |
and it has many square submatrices of order 2 as well.
Now we recall the definition of a minor of a given matrix: the determinant of a square submatrix of order r of a given matrix will be called a (determinant) minor of order r of the matrix.
Definition 5.1.1. The rank of a matrix A is the order, say r, of its largest non-zero minor.
In this definition, all minors of this order r may be non-zero, only some may be non-zero, or only one may be non-zero; but every minor of order (r + 1) must be zero. Thus, a matrix is said to be of rank r if and only if it has at least one (determinant) minor of order r which is not zero, but every minor of order (r + 1) is zero. A matrix is said to be of rank zero if and only if all its elements are zero.
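The definition can be applied directly, by searching for the largest order at which some minor is non-zero; this is inefficient for large matrices but convenient for checking small examples. A minimal brute-force sketch, assuming Python with NumPy (the function name rank_by_minors is ours):

```python
import numpy as np
from itertools import combinations

def rank_by_minors(A, tol=1e-9):
    """Rank per Definition 5.1.1: the largest r with a non-zero r x r minor."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for r in range(min(m, n), 0, -1):                 # try the largest order first
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return r                          # found a non-zero minor of order r
    return 0                                          # all elements were zero

print(rank_by_minors([[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12]]))  # 1
```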
Example 5.1.2. Find the rank of the matrix
A = | 1   2   3   4 |
    | 2   4   6   8 |
    | 3   6   9  12 |
Solution. The matrix A is of order 3 × 4, so it has 4 square submatrices of largest order 3:
A1 = | 1  2  3 |   A2 = | 1  2  4 |   A3 = | 1  3  4 |   A4 = | 2  3  4 |
     | 2  4  6 |        | 2  4  8 |        | 2  6  8 |        | 4  6  8 |
     | 3  6  9 |        | 3  6 12 |        | 3  9 12 |        | 6  9 12 |
Now we compute the minors |A1|, |A2|, |A3| and |A4|:
|A1| = | 1  2  3 |       | 1  2  3 |
       | 2  4  6 | = 2 × | 1  2  3 | = 0,
       | 3  6  9 |       | 3  6  9 |
|A2| = | 1  2  4 |       | 1  2  4 |
       | 2  4  8 | = 2 × | 1  2  4 | = 0.
       | 3  6 12 |       | 3  6 12 |
Similarly, |A3| = 0 and |A4| = 0. So the rank of A cannot be 3; it will be less than 3, ρ(A) < 3.
Now we compute the minors of order 2. It is clear that every minor of order 2 formed from the elements of the square submatrices A1, A2, A3 and A4 is zero, so ρ(A) < 2. Since A is a non-zero matrix, ρ(A) ≥ 1; therefore ρ(A) = 1.
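The same answer is obtained from NumPy's built-in rank routine, which works from singular values rather than minors; a one-line cross-check, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],
              [3, 6, 9, 12]])
print(np.linalg.matrix_rank(A))   # 1
```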
Example 5.1.3. The rank of the transpose of a matrix is the same as that of the original matrix: ρ(A') = ρ(A).
Solution. Let A = [a_ij]_(m×n) be an m × n matrix of rank r. Then A' = [a_ji]_(n×m) is the transpose of A, of order n × m. Since ρ(A) = r, there exists a non-singular square submatrix M_r of A of order r. Its transpose M_r' is a square submatrix of A', and
|M_r'| = |M_r| ≠ 0.
Hence ρ(A') ≥ r.    ...(1)
Now consider any square submatrix A_(r+1) of A of order (r + 1). Since ρ(A) = r, |A_(r+1)| = 0; and every square submatrix of A' of order (r + 1) is the transpose of such a submatrix, so its determinant is also zero. Hence ρ(A') ≤ r. Combining this with (1), ρ(A') = r = ρ(A).
Example 5.1.4. Prove that three points (x1, y1), (x2, y2), (x3, y3) in the plane are collinear if and only if the rank of the matrix
| x1  y1  1 |
| x2  y2  1 |
| x3  y3  1 |
is less than 3.
Proof. Suppose the points (x1, y1), (x2, y2), (x3, y3) are collinear; then they lie on a line. Let the equation of the line be
ax + by + c = 0.    ...(1)
Since (x1, y1), (x2, y2), (x3, y3) lie on (1), we have
ax1 + by1 + c = 0,    ...(2)
ax2 + by2 + c = 0,    ...(3)
ax3 + by3 + c = 0.    ...(4)
Eliminating a, b, c from (2), (3) and (4), we get
| x1  y1  1 |
| x2  y2  1 | = 0,
| x3  y3  1 |
so the only minor of order 3 vanishes and the rank of the matrix is less than 3.
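The criterion is easy to apply numerically. A minimal sketch, assuming Python with NumPy (the helper name collinear is ours):

```python
import numpy as np

# Three points are collinear exactly when the 3 x 3 matrix below is singular,
# i.e. when its rank is less than 3.
def collinear(p1, p2, p3, tol=1e-9):
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return np.linalg.matrix_rank(M, tol=tol) < 3

print(collinear((0, 0), (1, 1), (2, 2)))   # True,  points on the line y = x
print(collinear((0, 0), (1, 1), (2, 5)))   # False
```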
Elementary transformations do not alter the rank of a matrix.
Example 5.4.1. Find the rank of the matrix A.
Solution. Applying elementary row transformations (R2 - 2R1, and so on), A reduces to
| 1   2  -1   3 |
| 0   0   2   1 |
| 0   0   0   0 |
All third order submatrices of the last matrix are singular, but it has a non-zero minor of order 2. Hence ρ(A) = 2.
Example 5.4.2. Find the rank of the matrix
A = | 1   3   4   5 |
    | 3   9  12   3 |
    | 1   3   4   1 |
Solution. We have, applying C2 - 3C1, C3 - 4C1 and C4 - C1,
A ~ | 1   0   0   4 |
    | 3   0   0   0 |
    | 1   0   0   0 |
and then, interchanging the second and fourth columns and applying R2 - 3R3,
  ~ | 1   4   0   0 |
    | 0   0   0   0 |
    | 1   0   0   0 |
It is clear that all third order submatrices of this last matrix are singular, but one minor of order 2,
| 1   4 |
| 1   0 | = 0 - 4 = -4 ≠ 0.
Hence ρ(A) = 2.
Example 5.4.3. Determine the rank of the matrix
A = | 1   2   3   1 |
    | 2   4   6   2 |
    | 1   2   3   2 |
Solution. We have, applying R2 - 2R1 and R3 - R1,
A ~ | 1   2   3   1 |
    | 0   0   0   0 |
    | 0   0   0   1 |
and, interchanging the second and third rows (R(2,3)),
  ~ | 1   2   3   1 |
    | 0   0   0   1 |
    | 0   0   0   0 |
It is clear that all third order submatrices of the last matrix are singular. But we have a minor of order 2,
| 3   1 |
| 0   1 | = 3 - 0 = 3 ≠ 0.
Hence ρ(A) = 2.
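The row reductions of Examples 5.4.1-5.4.3 can be reproduced mechanically: the rank is the number of non-zero rows left after elimination. A minimal sketch, assuming Python with NumPy; the elimination below uses partial pivoting and is not the exact sequence of operations used in the examples above:

```python
import numpy as np

def rank_by_elimination(A, tol=1e-9):
    """Rank = number of non-zero rows after Gaussian elimination."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    row = 0
    for col in range(n):
        pivot = row + np.argmax(np.abs(A[row:, col]))        # partial pivoting
        if abs(A[pivot, col]) < tol:
            continue                                          # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]
        A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
        row += 1
        if row == m:
            break
    return row

print(rank_by_elimination([[1, 3, 4, 5], [3, 9, 12, 3], [1, 3, 4, 1]]))   # 2, Example 5.4.2
print(rank_by_elimination([[1, 2, 3, 1], [2, 4, 6, 2], [1, 2, 3, 2]]))    # 2, Example 5.4.3
```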
PROBLEM 5.2
1. Find the rank of the following matrices:
(a) A = |  1   3   4   3 |        ρ(A) = 1.
        |  3   9  12   9 |
        | -1  -3  -4  -3 |
(b) A = |  8   3   6 |            ρ(A) = 3.
        |  3   2   2 |
        | -8  -1  -3 |
(c) A = |  6   1   3   8 |        ρ(A) = 2.
        |  4   2   6  -1 |
        | 10   3   9   7 |
        | 16   4  12  15 |
(d) A = |  2   3  -1  -1 |        ρ(A) = 3.
        |  1  -1  -2  -4 |
        |  3   1   3  -2 |
        |  6   3   0  -7 |
(e) A = |  3  -2   0  -1  -7 |    ρ(A) = 4.
        |  0   2   2   1  -5 |
        |  1  -2  -3  -2   1 |
        |  0   1   2   1  -6 |
5.5 NORMAL FORM
Every non-zero matrix of rank r can, by a finite sequence of elementary transformations, be reduced to one of the forms

    I_r,      [ I_r   0 ],      | I_r |,      | I_r   0 |
                                |  0  |       |  0    0 |

where r is its rank and I_r is an identity matrix of order r. These four forms are called the Normal form or Canonical form of the given matrix A.
We get the normal form of the matrix A by subjecting A to elementary transformations in the following manner:
1. We first use the elementary transformation of the type (a), if necessary, to obtain a non-zero element (preferably a 1) in the first row and the first column of the given matrix.
2. We divide the first row by this element, if it is not 1.
3. We subtract appropriate multiples of the first row from the other rows so as to obtain zeros in the remainder of the first column.
4. We subtract appropriate multiples of the first column from the other columns so as to obtain zeros in the remainder of the first row.
5. We repeat steps (1) to (4) starting with the element in the second row and the second column.
6. We continue thus down the main diagonal, either until the end of the diagonal is reached or until all the remaining elements in the matrix are zero.
The final matrix then has one of the forms given above. We know that elementary transformations do not alter the rank or the order of a matrix; therefore the rank of the normal form will be the same as the rank of the given matrix A.
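The six steps above translate directly into a procedure. A minimal sketch, assuming Python with NumPy; it ignores the "preferably a 1" refinement and simply scales whatever non-zero pivot it finds, and the matrix used for the check is matrix (2) of the example that follows:

```python
import numpy as np

def normal_form(A, tol=1e-9):
    """Reduce A to its normal form by elementary row and column operations."""
    N = np.array(A, dtype=float)
    m, n = N.shape
    r = 0
    while r < min(m, n):
        # Step 1: bring a non-zero element into position (r, r).
        rows, cols = np.nonzero(np.abs(N[r:, r:]) > tol)
        if rows.size == 0:                       # remaining block is zero: stop
            break
        i, j = rows[0] + r, cols[0] + r
        N[[r, i]] = N[[i, r]]                    # row interchange
        N[:, [r, j]] = N[:, [j, r]]              # column interchange
        N[r] /= N[r, r]                          # Step 2: make the pivot 1
        for i in range(m):                       # Step 3: clear the rest of column r
            if i != r:
                N[i] -= N[i, r] * N[r]
        for j in range(n):                       # Step 4: clear the rest of row r
            if j != r:
                N[:, j] -= N[r, j] * N[:, r]
        r += 1                                   # Steps 5-6: move down the diagonal
    return N, r

N, r = normal_form([[1, 4, 3, 2], [1, 2, 3, 4], [2, 6, 7, 5]])
print(r)        # 3
print(N)        # [[1 0 0 0], [0 1 0 0], [0 0 1 0]]  -- the form [I3 : 0]
```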
Example 5.5.1. Reduce the following matrices to their normal forms and find their ranks:
(1) |  8   1   3   6 |        (2) | 1   4   3   2 |
    |  0   3   2   2 |            | 1   2   3   4 |
    | -8  -1  -3   4 |            | 2   6   7   5 |

(3) | -2  -1  -3  -1 |        (4) | 2   3  -1  -1 |
    |  …   …   …   … |            | 1  -1  -2  -4 |
    |  …   …   …   … |            | 3   1   3  -2 |
    |  …   …   …   … |            | 6   3   0  -7 |
Solution. (1) Applying (1/8)C1 and then R3 + R1,
A ~ |  1   1   3   6 |   ~   | 1   1   3   6 |
    |  0   3   2   2 |       | 0   3   2   2 |
    | -1  -1  -3   4 |       | 0   0   0  10 |
Applying C2 - C1, C3 - 3C1, C4 - 6C1 and then (1/3)C2,
  ~ | 1   0   0   0 |   ~   | 1   0   0   0 |
    | 0   3   2   2 |       | 0   1   2   2 |
    | 0   0   0  10 |       | 0   0   0  10 |
Applying C3 - 2C2 and C4 - 2C2, then interchanging C3 and C4 and applying (1/10)C3,
  ~ | 1   0   0   0 |
    | 0   1   0   0 |
    | 0   0   1   0 |
which is the normal form [I3 : 0]. Hence ρ(A) = 3.
(2) We have
A = | 1   4   3   2 |
    | 1   2   3   4 |
    | 2   6   7   5 |
Applying R2 - R1 and R3 - 2R1,
A ~ | 1   4   3   2 |
    | 0  -2   0   2 |
    | 0  -2   1   1 |
and then, applying C2 - 4C1, C3 - 3C1 and C4 - 2C1,
  ~ | 1   0   0   0 |
    | 0  -2   0   2 |
    | 0  -2   1   1 |
Continuing in the same way (clearing the second row and column, and then the third), the matrix reduces to the normal form [I3 : 0]; hence ρ(A) = 3.
4. Give an example to show that the normal form of a product of two matrices is not necessarily the product of their normal forms.
5. Give an example to show that the rank of a product of matrices may be less than the rank of either factor.
6. Show that it is possible, by using only row operations, to reduce any matrix A to an equivalent matrix [a_ij] such that a_ij = 0 for i > j. If A has rank r, then no more than r of the elements a_ii may differ from zero.
7. Prove that not every matrix A can be reduced to a normal form by row transformations only.
   [Hint: Exhibit a matrix which cannot be reduced to a normal form by row transformations only.]
5.6 ELEMENTARY TRANSFORMATION BY MATRIX MULTIPLICATION
Here we shall see an important thing: the elementary transformations of a given matrix can be brought about by pre- or post-multiplying it by suitably chosen square matrices of very simple types.
Theorem 5.6.1. To effect an elementary transformation on a given matrix A, we may first perform the same elementary transformation on an identity matrix of appropriate order, and then pre-multiply A by the result if the operation is on rows, or post-multiply if it is on columns.
Proof. Suppose first that we wish to interchange the second and third rows of a given matrix. This can be done by first interchanging the 2nd and 3rd rows of the unit matrix and then pre-multiplying the given matrix by it. That is,
| 1  0  0 | | a11  a12  a13  a14 |   | a11  a12  a13  a14 |
| 0  0  1 | | a21  a22  a23  a24 | = | a31  a32  a33  a34 |
| 0  1  0 | | a31  a32  a33  a34 |   | a21  a22  a23  a24 |
On the other hand, the interchange of the 2nd and 3rd columns is effected by interchanging the 2nd and 3rd columns of the unit matrix and post-multiplying the given matrix by it. Thus
| a11  a12  a13  a14 | | 1  0  0  0 |   | a11  a13  a12  a14 |
| a21  a22  a23  a24 | | 0  0  1  0 | = | a21  a23  a22  a24 |
| a31  a32  a33  a34 | | 0  1  0  0 |   | a31  a33  a32  a34 |
                       | 0  0  0  1 |
From this it is obvious that a 1 on the diagonal leaves the corresponding (row, column) unaltered, but an off-diagonal 1 selects the elements of the (row, column) corresponding to the (column, row) in which that 1 is situated.
Next suppose we wish to multiply a row or a column by a constant k:
| 1  0  0 | | a11  a12  a13 |   | a11   a12   a13  |
| 0  k  0 | | a21  a22  a23 | = | ka21  ka22  ka23 |
| 0  0  1 | | a31  a32  a33 |   | a31   a32   a33  |
and
| a11  a12  a13 | | 1  0  0 |   | a11  a12  ka13 |
| a21  a22  a23 | | 0  1  0 | = | a21  a22  ka23 |
| a31  a32  a33 | | 0  0  k |   | a31  a32  ka33 |
Finally, to add, say, k times the third row to the second row, or k times the third column to the second column, we have
| 1  0  0 | | a11  a12  a13  a14 |   | a11        a12        a13        a14       |
| 0  1  k | | a21  a22  a23  a24 | = | a21+ka31   a22+ka32   a23+ka33   a24+ka34  |
| 0  0  1 | | a31  a32  a33  a34 |   | a31        a32        a33        a34       |
and
| a11  a12  a13  a14 | | 1  0  0  0 |   | a11  a12+ka13  a13  a14 |
| a21  a22  a23  a24 | | 0  1  0  0 | = | a21  a22+ka23  a23  a24 |
| a31  a32  a33  a34 | | 0  k  1  0 |   | a31  a32+ka33  a33  a34 |
                       | 0  0  0  1 |
Here an off-diagonal k in the (i, j) position of the pre-multiplier (post-multiplier) adds to the ith row (jth column) of the multiplied matrix k times its jth row (ith column).
From the above discussion we conclude that any elementary row transformation on a given matrix can be accomplished by pre-multiplying it by a suitable elementary matrix, and any elementary column transformation can be accomplished by post-multiplying it by a suitable elementary matrix.
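This observation is easy to verify numerically. A minimal sketch, assuming Python with NumPy and an arbitrary 3 × 4 matrix:

```python
import numpy as np

# Perform an elementary operation on an identity matrix, then pre-multiply
# (row operation) or post-multiply (column operation) by the result.
A = np.arange(1, 13).reshape(3, 4)                      # an arbitrary 3 x 4 matrix

E_row = np.eye(3); E_row[[1, 2]] = E_row[[2, 1]]        # interchange rows 2 and 3 of I3
E_col = np.eye(4); E_col[:, [1, 2]] = E_col[:, [2, 1]]  # interchange columns 2 and 3 of I4
E_add = np.eye(3); E_add[1, 2] = 5.0                    # add 5 x (row 3) to row 2

print(np.array_equal(E_row @ A, A[[0, 2, 1]]))                   # rows 2, 3 swapped
print(np.array_equal(A @ E_col, A[:, [0, 2, 1, 3]]))             # columns 2, 3 swapped
print(np.array_equal(E_add @ A, A + np.outer([0, 5, 0], A[2])))  # 5 x row 3 added to row 2
```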
Theorem 5.6.2. If A and B are equivalent matrices, then there exist non-singular matrices C and D such that B = CAD.
Proof. Since A and B are equivalent matrices, B is obtained from A by applying to A a sequence of elementary row and column transformations. But an elementary row transformation can be accomplished by pre-multiplying A by an elementary matrix of appropriate order, and an elementary column transformation can be accomplished by post-multiplying A by an elementary matrix of appropriate order. Hence
C_s C_(s-1) ... C_1 A D_1 D_2 ... D_t = B,
i.e.
CAD = B,
where C = C_s C_(s-1) ... C_1 and D = D_1 D_2 ... D_t. Since elementary matrices are non-singular, C and D are non-singular.
Corollary 1. Every non-singular matrix can be expressed as a product of elementary matrices.
Proof. Let A be a non-singular matrix of order n and let I_n be the identity matrix of order n. Since A and I_n are equivalent, we have
A = C_1 C_2 ... C_s I_n D_1 D_2 ... D_t = C_1 C_2 ... C_s D_1 D_2 ... D_t,
which is a product of elementary matrices.
Corollary 2. If A is an m × n matrix of rank r, then there exist two non-singular matrices C and D of order m and n respectively such that
CAD = | I_r   0 |
      | 0     0 |
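Corollary 2 can also be checked constructively: carry out the reduction of Section 5.5 while applying every row operation to a pre-factor C (initially I_m) and every column operation to a post-factor D (initially I_n). A minimal sketch, assuming Python with NumPy; the operations it chooses are not necessarily those used in the worked example that follows, and the test matrix is matrix (ii) of that example:

```python
import numpy as np

def normal_form_factors(A, tol=1e-9):
    """Return C, D, r with C @ A @ D equal to the normal form [[I_r, 0], [0, 0]]."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    N, C, D = A.copy(), np.eye(m), np.eye(n)
    r = 0
    while r < min(m, n):
        rows, cols = np.nonzero(np.abs(N[r:, r:]) > tol)
        if rows.size == 0:
            break
        i, j = rows[0] + r, cols[0] + r
        N[[r, i]], C[[r, i]] = N[[i, r]], C[[i, r]]                # row interchange
        N[:, [r, j]], D[:, [r, j]] = N[:, [j, r]], D[:, [j, r]]    # column interchange
        C[r] /= N[r, r]; N[r] /= N[r, r]                           # scale the pivot row
        for k in range(m):                                         # clear column r
            if k != r:
                C[k] -= N[k, r] * C[r]
                N[k] -= N[k, r] * N[r]
        for k in range(n):                                         # clear row r
            if k != r:
                D[:, k] -= N[r, k] * D[:, r]
                N[:, k] -= N[r, k] * N[:, r]
        r += 1
    return C, D, r

A = np.array([[1, -1, 2, -1], [4, 2, -1, 2], [2, 2, -2, 0]])   # matrix (ii) below
C, D, r = normal_form_factors(A)
print(r)                                      # 3
print(np.allclose(C @ A @ D, np.eye(3, 4)))   # True: C A D = [I3 : 0]
```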
Example. Find non-singular matrices P and Q such that PAQ is in the normal form, and hence find the rank of A, where A is:
(i)   …                          (ii)  | 1  -1   2  -1 |
                                       | 4   2  -1   2 |
                                       | 2   2  -2   0 |

(iii) | 1   2   3 |              (iv)  | 1  -1   2   1 |
      | 3   2   1 |                    | …   …   …   … |
      | 1   3   2 |                    | 7 -11  -6   … |
      | 2   1   3 |                    | 7   2  12   3 |
Solution. (i) We write A = I3 A I3.
Now we reduce the matrix A on the left hand side to the normal form by applying elementary transformations. In reducing the matrix A on the left hand side to the normal form, whenever we apply an elementary row (column) transformation, the same elementary row (column) transformation is applied to the pre-factor (post-factor) of A on the right hand side.
Carrying this out, the left hand side reduces to the normal form
| 1  0  0 |
| 0  1  0 |
| 0  0  0 |
Thus we have the required normal form, ρ(A) = 2, and the matrices P and Q are read off as the final pre-factor and post-factor of A on the right hand side.
(ii) We write A = I3 A I4, i.e.
| 1  -1   2  -1 |   | 1  0  0 |     | 1  0  0  0 |
| 4   2  -1   2 | = | 0  1  0 |  A  | 0  1  0  0 |
| 2   2  -2   0 |   | 0  0  1 |     | 0  0  1  0 |
                                    | 0  0  0  1 |
Applying C2 + C1, C3 - 2C1 and C4 + C1,
| 1   0   0   0 |   | 1  0  0 |     | 1  1  -2  1 |
| 4   6  -9   6 | = | 0  1  0 |  A  | 0  1   0  0 |
| 2   4  -6   2 |   | 0  0  1 |     | 0  0   1  0 |
                                    | 0  0   0  1 |
Continuing with row and column transformations applied simultaneously to both sides (R2 - 4R1, R3 - 2R1, and then transformations clearing the second and third rows and columns), the left hand side is reduced to
| 1  0  0 : 0 |
| 0  1  0 : 0 |
| 0  0  1 : 0 |
This is the required normal form [I3 : 0], so ρ(A) = 3, and P and Q are the final pre-factor and post-factor of A on the right hand side.
(iii) We write A = I4 A I3, i.e.
| 1   2   3 |   | 1  0  0  0 |     | 1  0  0 |
| 3   2   1 | = | 0  1  0  0 |  A  | 0  1  0 |
| 1   3   2 |   | 0  0  1  0 |     | 0  0  1 |
| 2   1   3 |   | 0  0  0  1 |
Applying R2 - 3R1, R3 - R1 and R4 - 2R1 to both sides, then C2 - 2C1 and C3 - 3C1, then (-1/4)R2 followed by R3 - R2 and R4 + 3R2, then C3 - 2C2, and finally (-1/3)R3 and R4 - 3R3, the left hand side passes through the stages
| 1   2   3 |     | 1   0   0 |     | 1   0   0 |     | 1   0   0 |
| 0  -4  -8 |     | 0  -4  -8 |     | 0   1   2 |     | 0   1   0 |
| 0   1  -1 |  ~  | 0   1  -1 |  ~  | 0   0  -3 |  ~  | 0   0   1 |
| 0  -3  -3 |     | 0  -3  -3 |     | 0   0   3 |     | 0   0   0 |
and reaches the normal form
| I3 |
| 0  |
Hence ρ(A) = 3.
This is the required normal form, where carrying the same transformations along on the right hand side gives
P = |  1       0       0     0 |        Q = | 1  -2   1 |
    |  3/4    -1/4     0     0 |            | 0   1  -2 |
    |  7/12   -1/12   -1/3   0 |            | 0   0   1 |
    | -3/2    -1/2     1     1 |
so that PAQ is the normal form.
(iv) In the same way we write A = I4 A I4 and apply elementary row and column transformations to both sides simultaneously (R2 - 3R1, R3 - 2R1, a row interchange R(3,4), and so on) until the left hand side is in normal form. The required P and Q are then the final pre-factor and post-factor, and ρ(A) is the order of the identity block in the normal form so obtained.