
16 CHAPTER 1. MATRICES

Theorem 1.3.6 is very useful due to the following reasons:

1. The orders of the matrices P, Q, H and K are smaller than those of A or B.

2. It may be possible to block the matrix in such a way that a few blocks are either identity matrices
or zero matrices. In this case, it may be easy to handle the matrix product using the block form.

3. When we want to prove results using induction, we may assume the result for r × r
submatrices and then consider (r + 1) × (r + 1) submatrices, and so on.
 
" # a b
1 2 0  
For example, if A = and B =  c d  , Then
2 5 0
e f
" #" # " # " #
1 2 a b 0 a + 2c b + 2d
AB = + [e f ] = .
2 5 c d 0 2a + 5c 2b + 5d
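The block computation above can be checked numerically. The sketch below (with arbitrary numeric values assumed for the symbols a, ..., f, which the text leaves symbolic) verifies that the block product agrees with the ordinary product:

```python
import numpy as np

# Assumed numeric values for the symbols a..f in the example.
a, b, c, d, e, f = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0

A = np.array([[1, 2, 0],
              [2, 5, 0]], dtype=float)
B = np.array([[a, b],
              [c, d],
              [e, f]])

# Block product: A = [A1 | A2] with A1 2x2, A2 2x1; B = [B1 ; B2] with B1 2x2, B2 1x2.
A1, A2 = A[:, :2], A[:, 2:]
B1, B2 = B[:2, :], B[2:, :]
block_product = A1 @ B1 + A2 @ B2

# The closed form given in the text.
expected = np.array([[a + 2*c, b + 2*d],
                     [2*a + 5*c, 2*b + 5*d]])

assert np.allclose(block_product, A @ B)
assert np.allclose(block_product, expected)
```

Because the last column of A is zero, the second block term contributes nothing, which is exactly why this partition simplifies the computation.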
 
If
\[
A = \begin{bmatrix} 0 & -1 & 2 \\ 3 & 1 & 4 \\ -2 & 5 & -3 \end{bmatrix},
\]
then A can be decomposed as follows:
\[
A = \left[\begin{array}{cc|c} 0 & -1 & 2 \\ 3 & 1 & 4 \\ \hline -2 & 5 & -3 \end{array}\right],
\quad\text{or}\quad
A = \left[\begin{array}{c|cc} 0 & -1 & 2 \\ \hline 3 & 1 & 4 \\ -2 & 5 & -3 \end{array}\right],
\quad\text{or}\quad
A = \left[\begin{array}{cc|c} 0 & -1 & 2 \\ \hline 3 & 1 & 4 \\ -2 & 5 & -3 \end{array}\right],
\]
and so on.

"m1 m#2 "s1 s2 #


Suppose A = n1 P Q and B = r1 E F . Then the matrices P, Q, R, S and
n2 R S r2 G H
E, F, G, H, are called the blocks of the matrices A and B, respectively.
Even if A + B is defined, the orders of P and E may not be same and hence, we " may not be able#
P +E Q+F
to add A and B in the block form. But, if A + B and P + E is defined then A + B = .
R+G S+H
Similarly, if the product AB is defined, the product P E need not be defined. Therefore, we can talk
of matrix product AB as block " product of matrices,#if both the products AB and P E are defined. And
P E + QG P F + QH
in this case, we have AB = .
RE + SG RF + SH
That is, once a partition of A is fixed, the partition of B has to be properly chosen for
purposes of block addition or multiplication.
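The compatibility rule can be illustrated with a small numerical sketch (matrix sizes and partition points chosen arbitrarily for this check): once the column partition of A is fixed at m1, the row partition of B must also fall at m1 for every block product to be defined.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
B = rng.standard_normal((7, 4))

n1, m1 = 2, 3   # A is split after row n1 and column m1
r1, s1 = m1, 2  # B's row split r1 must equal m1 for the block product

P, Q = A[:n1, :m1], A[:n1, m1:]
R, S = A[n1:, :m1], A[n1:, m1:]
E, F = B[:r1, :s1], B[:r1, s1:]
G, H = B[r1:, :s1], B[r1:, s1:]

# Assemble the block product and compare with the ordinary product.
blockAB = np.block([[P @ E + Q @ G, P @ F + Q @ H],
                    [R @ E + S @ G, R @ F + S @ H]])
assert np.allclose(blockAB, A @ B)
```

Choosing r1 different from m1 would make the inner products such as P @ E fail with a shape mismatch, which is the point of the compatibility requirement.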

Exercise 1.3.7 1. Compute the matrix product AB using block matrix multiplication for the matrices
\[
A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} 1 & 2 & 2 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 1 & 1 \\ -1 & 1 & -1 & 1 \end{bmatrix}.
\]

2. Let
\[
A = \begin{bmatrix} P & Q \\ R & S \end{bmatrix}.
\]
If P, Q, R and S are symmetric, what can you say about A? Are P, Q, R and S symmetric when A is symmetric?

3. Let A = [aij ] and B = [bij ] be two matrices. Suppose a1 , a2 , . . . , an are the rows of A and
b1 , b2 , . . . , bp are the columns of B. If the product AB is defined, then show that
\[
AB = [Ab_1, Ab_2, \ldots, Ab_p]
= \begin{bmatrix} a_1 B \\ a_2 B \\ \vdots \\ a_n B \end{bmatrix}.
\]

[That is, left multiplication by A is the same as multiplying each column of B by A. Similarly, right
multiplication by B is the same as multiplying each row of A by B.]

1.4 Matrices over Complex Numbers


Here the entries of the matrix are complex numbers. All the earlier definitions still hold; one only
needs the following additional definitions.

Definition 1.4.1 (Conjugate Transpose of a Matrix) 1. Let A be an m × n matrix over C. If A = [aij ],
then the conjugate of A, denoted by \(\overline{A}\), is the matrix B = [bij ] with \(b_{ij} = \overline{a_{ij}}\).
For example, let
\[
A = \begin{bmatrix} 1 & 4+3i & i \\ 0 & 1 & i-2 \end{bmatrix}.
\]
Then
\[
\overline{A} = \begin{bmatrix} 1 & 4-3i & -i \\ 0 & 1 & -i-2 \end{bmatrix}.
\]

2. Let A be an m × n matrix over C. If A = [aij ], then the conjugate transpose of A, denoted by A∗ , is
the matrix B = [bij ] with \(b_{ij} = \overline{a_{ji}}\).
For example, let
\[
A = \begin{bmatrix} 1 & 4+3i & i \\ 0 & 1 & i-2 \end{bmatrix}.
\]
Then
\[
A^* = \begin{bmatrix} 1 & 0 \\ 4-3i & 1 \\ -i & -i-2 \end{bmatrix}.
\]
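As a quick numerical check of these two definitions (using NumPy, which is of course not part of the text), the conjugate and the conjugate transpose of the example matrix can be computed directly:

```python
import numpy as np

# The example matrix from the text; i - 2 is written as -2 + 1j.
A = np.array([[1, 4 + 3j, 1j],
              [0, 1, -2 + 1j]])

A_bar = np.conj(A)       # entrywise conjugate, the matrix written with an overline
A_star = A.conj().T      # conjugate transpose A*

# The conjugate transpose computed in the text.
expected_star = np.array([[1, 0],
                          [4 - 3j, 1],
                          [-1j, -2 - 1j]])
assert np.allclose(A_star, expected_star)
assert np.allclose(A_bar.T, A_star)   # conjugate then transpose gives the same result
```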

3. A square matrix A over C is called Hermitian if A∗ = A.

4. A square matrix A over C is called skew-Hermitian if A∗ = −A.

5. A square matrix A over C is called unitary if A∗ A = AA∗ = I.

6. A square matrix A over C is called normal if AA∗ = A∗ A.
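The four classes of matrices defined above can be tested numerically. In the sketch below the example matrices are chosen for this check and are not taken from the text:

```python
import numpy as np

def is_hermitian(A, tol=1e-12):
    return np.allclose(A, A.conj().T, atol=tol)

def is_skew_hermitian(A, tol=1e-12):
    return np.allclose(A.conj().T, -A, atol=tol)

def is_unitary(A, tol=1e-12):
    return np.allclose(A.conj().T @ A, np.eye(A.shape[0]), atol=tol)

def is_normal(A, tol=1e-12):
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])            # Hermitian: real diagonal, conjugate off-diagonal
K = np.array([[1j, 1 + 1j],
              [-1 + 1j, 2j]])          # skew-Hermitian: purely imaginary diagonal
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)   # unitary: columns are orthonormal

assert is_hermitian(H) and is_skew_hermitian(K) and is_unitary(U)
# Hermitian, skew-Hermitian and unitary matrices are all normal.
assert is_normal(H) and is_normal(K) and is_normal(U)
```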

Remark 1.4.2 If A = [aij ] with aij ∈ R, then A∗ = At .

Exercise 1.4.3 1. Give examples of Hermitian, skew-Hermitian and unitary matrices that have entries
with non-zero imaginary parts.

2. Restate the results on transpose in terms of conjugate transpose.


3. Show that for any square matrix A, S = (A + A∗ )/2 is Hermitian, T = (A − A∗ )/2 is skew-Hermitian,
and A = S + T .

4. Show that if A is a complex triangular matrix and AA∗ = A∗ A then A is a diagonal matrix.
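Exercise 3 above asks for a proof; purely as a sanity check (not the requested proof), the Hermitian/skew-Hermitian decomposition can be verified numerically for a randomly generated complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

S = (A + A.conj().T) / 2   # claimed Hermitian part
T = (A - A.conj().T) / 2   # claimed skew-Hermitian part

assert np.allclose(S, S.conj().T)    # S is Hermitian
assert np.allclose(T.conj().T, -T)   # T is skew-Hermitian
assert np.allclose(S + T, A)         # A = S + T
```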
Chapter 2

Linear System of Equations

2.1 Introduction
Let us look at some examples of linear systems.

1. Suppose a, b ∈ R. Consider the system ax = b.

(a) If a ≠ 0, then the system has a unique solution x = b/a.


(b) If a = 0 and
i. b ≠ 0, then the system has no solution;
ii. b = 0, then the system has infinitely many solutions, namely all x ∈ R.

2. We now consider a system with 2 equations in 2 unknowns.


Consider the equation ax + by = c. If one of the coefficients a or b is non-zero, then this linear
equation represents a line in R2 . Thus for the system

a1 x + b1 y = c1 and a2 x + b2 y = c2 ,

the set of solutions is given by the points of intersection of the two lines. There are three cases to
be considered. Each case is illustrated by an example.

(a) Unique Solution


x + 2y = 1 and x + 3y = 1. The unique solution is (x, y)t = (1, 0)t .
Observe that in this case, a1 b2 − a2 b1 ≠ 0.
(b) Infinite Number of Solutions
x + 2y = 1 and 2x + 4y = 2. The set of solutions is (x, y)t = (1 − 2y, y)t = (1, 0)t + y(−2, 1)t
with y arbitrary. In other words, both the equations represent the same line.
Observe that in this case, a1 b2 − a2 b1 = 0, a1 c2 − a2 c1 = 0 and b1 c2 − b2 c1 = 0.
(c) No Solution
x + 2y = 1 and 2x + 4y = 3. The equations represent a pair of parallel lines and hence there
is no point of intersection.
Observe that in this case, a1 b2 − a2 b1 = 0 but a1 c2 − a2 c1 ≠ 0.
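The three cases can be packaged in a small helper, a hypothetical function written for this sketch rather than taken from the text, which classifies a 2 × 2 system by the quantities a1 b2 − a2 b1, a1 c2 − a2 c1 and b1 c2 − b2 c1:

```python
def classify_2x2(a1, b1, c1, a2, b2, c2):
    """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        # Lines intersect in one point; solve by Cramer's rule.
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return "unique", (x, y)
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return "infinite", None   # both equations describe the same line
    return "none", None           # parallel distinct lines

# The three examples from the text.
assert classify_2x2(1, 2, 1, 1, 3, 1) == ("unique", (1.0, 0.0))
assert classify_2x2(1, 2, 1, 2, 4, 2)[0] == "infinite"
assert classify_2x2(1, 2, 1, 2, 4, 3)[0] == "none"
```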

3. As a last example, consider 3 equations in 3 unknowns.


A linear equation ax + by + cz = d represents a plane in R3 provided (a, b, c) ≠ (0, 0, 0). As in the
case of 2 equations in 2 unknowns, we have to look at the points of intersection of the given three
planes. Here again, we have three cases. The three cases are illustrated by examples.

(a) Unique Solution


Consider the system x + y + z = 3, x + 4y + 2z = 7 and 4x + 10y − z = 13. The unique solution
to this system is (x, y, z)t = (1, 1, 1)t ; i.e. the three planes intersect at a point.
(b) Infinite Number of Solutions
Consider the system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 11. The set of
solutions to this system is (x, y, z)t = (1, 2 − z, z)t = (1, 2, 0)t + z(0, −1, 1)t, with z arbitrary:
the three planes intersect on a line.
(c) No Solution
The system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 13 has no solution. In this
case, we get three parallel lines as intersections of the above planes taken two at a time.
The reader is advised to supply the proof.
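The three 3 × 3 examples above can be checked numerically. The sketch below verifies the unique-solution case with a linear solver and confirms the one-parameter solution family in the infinite case:

```python
import numpy as np

# Unique-solution case from the text.
A = np.array([[1, 1, 1],
              [1, 4, 2],
              [4, 10, -1]], dtype=float)
b = np.array([3, 7, 13], dtype=float)
x = np.linalg.solve(A, b)
assert np.allclose(x, [1, 1, 1])

# Infinite-solutions case: the coefficient matrix is singular,
# but (1, 2 - z, z) solves the system for every value of z.
A2 = np.array([[1, 1, 1],
               [1, 2, 2],
               [3, 4, 4]], dtype=float)
b2 = np.array([3, 5, 11], dtype=float)
for z in (0.0, 1.0, -2.5):
    v = np.array([1, 2 - z, z])
    assert np.allclose(A2 @ v, b2)
```

For the no-solution case, `np.linalg.solve` would raise `LinAlgError` on the singular coefficient matrix; detecting inconsistency requires comparing ranks, which the text takes up later.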

2.2 Definition and a Solution Method


Definition 2.2.1 (Linear System) A linear system of m equations in n unknowns x1 , x2 , . . . , xn is a set of
equations of the form

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}
\tag{2.2.1}
\]

where for 1 ≤ i ≤ m and 1 ≤ j ≤ n, aij , bi ∈ R. Linear System (2.2.1) is called homogeneous if
b1 = b2 = · · · = bm = 0 and non-homogeneous otherwise.

We rewrite the above equations


 in the form Ax =b, where

a11 a12 · · · a1n x1 b1
     
 a21 a22 · · · a2n   x2   b2 
A= .
 . . .  , x =  .  , and b = 
  
 .. 
 .. .. .. ..   .. 

 . 
am1 am2 · · · amn xn bm
The matrix A is called the coefficient matrix and the block matrix [A b] , is the augmented
matrix of the linear system (2.2.1).
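In code, the augmented matrix is simply the coefficient matrix with b appended as a last column. A minimal sketch, with a small system assumed for illustration:

```python
import numpy as np

# An assumed instance of (2.2.1): m = 2 equations in n = 3 unknowns.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 5.0, 0.0]])
b = np.array([[1.0],
              [3.0]])

# Augmented matrix [A b]: coefficient matrix with b as an extra column.
augmented = np.hstack([A, b])
assert augmented.shape == (2, 4)
assert np.allclose(augmented[:, :-1], A)
assert np.allclose(augmented[:, -1:], b)
```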

Remark 2.2.2 Observe that the ith row of the augmented matrix [A b] represents the ith equation,
and the j th column of the coefficient matrix A corresponds to the coefficients of the j th variable xj .
That is, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, the entry aij of the coefficient matrix A belongs to the ith
equation and the j th variable xj .

For a system of linear equations Ax = b, the system Ax = 0 is called the associated homogeneous
system.

Definition 2.2.3 (Solution of a Linear System) A solution of the linear system Ax = b is a column vector
y with entries y1 , y2 , . . . , yn such that the linear system (2.2.1) is satisfied by substituting yi in place of xi .

That is, if yt = [y1 , y2 , . . . , yn ] then Ay = b holds.


Note: The zero n-tuple x = 0 is always a solution of the system Ax = 0 and is called the trivial
solution. A non-zero n-tuple x satisfying Ax = 0 is called a non-trivial solution.
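A small numerical illustration (the matrix is chosen for this sketch): a singular coefficient matrix admits non-trivial solutions of Ax = 0, while the trivial solution always exists:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # singular: the second row is twice the first

trivial = np.zeros(2)
assert np.allclose(A @ trivial, 0)   # the trivial solution always works

nontrivial = np.array([2.0, -1.0])   # lies in the null space of this particular A
assert np.any(nontrivial != 0)
assert np.allclose(A @ nontrivial, 0)
```

An invertible A would admit only the trivial solution, a fact the rank criteria of later sections make precise.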
