
Chapter Three

Matrices

INTRODUCTION: Information in science and mathematics is often organized into rows and
columns to form rectangular arrays, called "matrices" (plural of "matrix"). Matrices are often
tables of numerical data that arise from physical observations, but they also occur in various
mathematical contexts. For example, we shall see in this chapter that to solve a system of
equations such as

all of the information required for the solution is embodied in the matrix

* +

and that the solution can be obtained by performing appropriate operations on this matrix. This is
particularly important in developing computer programs to solve systems of linear equations
because computers are well suited for manipulating arrays of numerical information. However,
matrices are not simply a notational tool for solving systems of equations; they can be viewed as
mathematical objects in their own right, and there is a rich and important theory associated with
them that has a wide variety of applications. In this chapter we will begin the study of matrices.
3.1. Definition of Matrix

Definition 3.1.1:- A matrix is a rectangular array of numbers or variables, which we will


enclose in brackets. The numbers (or variables) are called entries or, less commonly, elements of
the matrix. The horizontal lines of entries are called rows, and the vertical lines of entries are
called columns. A matrix with m rows and n columns has the form:

A = [a_jk] =
    [ a_11  a_12  ...  a_1n ]
    [ a_21  a_22  ...  a_2n ]
    [ ...    ...   ...  ... ]
    [ a_m1  a_m2  ...  a_mn ],  where j = 1, ..., m and k = 1, ..., n.

By an m × n matrix (read as "m by n matrix") we mean a matrix with m rows and n
columns—rows always come first! The pair m × n is called the size/order/shape/dimension of the matrix. We
shall denote matrices by capital boldface letters A, B, C, …, or by writing the general entry in
brackets, like A = [a_jk], and so on. The element a_jk, called the (j, k) entry, appears in row j and
column k. Thus a_21 is the entry in Row 2 and Column 1.
Example 3.1.1:

[ ] [ ]

The dimension/size of matrix A is 3×4 and the dimension/size of matrix B is 3×3. The entry
in matrix A is 7 and the entry in matrix A is 9.

NB: Matrices are important because they let us express large amounts of data and functions in
an organized and concise form.
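As a quick illustration (a minimal sketch assuming Python with NumPy, which is not part of the text), a matrix and its size can be represented as follows:

```python
import numpy as np

# A hypothetical 3 x 4 matrix: 3 rows and 4 columns
A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])

print(A.shape)   # (3, 4) -> the size m x n, rows first
print(A[1, 0])   # 5      -> the entry in Row 2, Column 1 (indexing is 0-based in code)
```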

Example 3.1.2: Given a matrix A = [ ] , then

a) Find the size of A c) List rows of A


b) List columns of A d) List elements of A

Exercise 3.1.1:

1. Let be a real number. Assume that B = [ ]. Then, determine:


a) The size of B c) The number of columns of B
b) The rows of B d) The elements of B
2. Construct a 3×3 matrix whose (i, j) entry is given by 2j - i.

3.2. Types of Matrices


In matrix theory, there are many special kinds of matrices that are important because they
possess certain properties. The following is a list of some of these matrices.
i. Zero Matrix (Null Matrix): A matrix that consists of all zero entries is called a zero
matrix and is denoted by a bold zero, 0.

Example: [ ]

ii. Square Matrix: An m × n matrix is called a square matrix if m = n. The order of a
square matrix is n × n, or simply n.

Example: [ ]

iii. Rectangular Matrix: A matrix of any size is called a rectangular matrix; this includes
square matrices as a special case.
iv. Row Matrix (row vector): A 1 × n matrix (a matrix with a single row) is called a row matrix (row vector).
[ ]
v. Column Matrix (column vector): An m × 1 matrix (a matrix with a single column) is called a column matrix (column vector).

( )
vi. Diagonal Matrix: A square matrix is said to be a diagonal matrix if all its
entries except the main diagonal entries are zeros.

[ ]

Remark: A diagonal matrix whose diagonal elements are all equal is called a scalar matrix.
vii. Identity Matrix (Unit Matrix): An identity matrix (unit matrix) is a diagonal matrix in which every diagonal element is 1.

[ ] is an identity matrix.

viii. Triangular Matrix: A square matrix in which all the entries above the main diagonal
are zero is called lower triangular, and a square matrix in which all the entries below the
main diagonal are zero is called upper triangular. A matrix that is either upper triangular
or lower triangular is called triangular.

[ ] [ ]

Upper Triangular Lower Triangular

Remark: Observe that diagonal matrices are both upper triangular and lower triangular since
they have zeros below and above the main diagonal.

Example: [ ] [ ]

Lower triangular matrix Upper triangular matrix
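The special matrices above can be generated directly with NumPy; the following sketch (an added illustration with assumed sizes and entries) builds a zero, identity, diagonal, scalar, and triangular matrix:

```python
import numpy as np

Z = np.zeros((2, 3))            # 2 x 3 zero (null) matrix
I = np.eye(3)                   # 3 x 3 identity matrix
D = np.diag([2, 5, 7])          # diagonal matrix with the given main-diagonal entries
S = 4 * np.eye(3)               # scalar matrix: a diagonal matrix with equal diagonal entries

A = np.arange(1, 10).reshape(3, 3)
U = np.triu(A)                  # upper triangular part (zeros below the main diagonal)
L = np.tril(A)                  # lower triangular part (zeros above the main diagonal)
print(U)
print(L)
```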


3.3. Basic Operations on Matrices
I. Equality of Matrices
Definition: Two matrices A and B are defined to be equal, denoted by A = B, if they have the same size
and their corresponding entries are equal.
Example: Find the values of the unknown entries for which matrix A is equal to matrix B.

* + and [ ]

Solution: The two matrices are equal if and only if all of their corresponding entries are equal.


II. Matrix Addition and Subtraction
Definition: Let A = [a_jk] and B = [b_jk] be two matrices of the same size, say m × n matrices.
Then the sum of A and B, written A + B, is the matrix obtained by adding the entries of B to the
corresponding entries of A. The difference A - B is the matrix obtained by subtracting the entries
of B from the corresponding entries of A. In matrix notation, if A = [a_jk] and B = [b_jk] have the
same size, then
A + B = [a_jk + b_jk]   and   A - B = [a_jk - b_jk].
Example: Consider the following matrices

[ ] [ ]

Then find

Solution: [ ] [ ]

[ ] [ ]

Remark: Matrices of different sizes cannot be added or subtracted.
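A small numerical illustration of entrywise addition and subtraction (a NumPy sketch; the matrices are made up, since the original example values were not preserved):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 0], [-1, 2]])

print(A + B)    # entrywise sum:        [[ 6  2] [ 2  6]]
print(A - B)    # entrywise difference: [[-4  2] [ 4  2]]

# Matrices of different sizes cannot be added:
C = np.array([[1, 2, 3]])
# A + C would raise a ValueError because the shapes (2, 2) and (1, 3) do not match
```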
III. Scalar Multiplication of Matrix

Definition 3.1.1.3: If A is any matrix and c is any scalar, then the product cA is the matrix
obtained by multiplying each entry of the matrix A by c. The matrix cA is said to be a scalar
multiple of A, i.e.

[ ]

[ ] [ ]

Example: Let [ ] and [ ]. Then compute the following

a. b.

Solution: a. [ ] [ ] [ ]

b. [ ]

[ ] [ ]
Exercise:
1. Find the values of and for the following matrix equation.

* + * +

2. Find matrix if * +

Remark: If A_1, A_2, …, A_n are matrices of the same size and c_1, c_2, …, c_n are scalars, then an
expression of the form c_1A_1 + c_2A_2 + … + c_nA_n is called a linear combination of A_1, A_2, …, A_n
with coefficients c_1, c_2, …, c_n.
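As an illustration of scalar multiples and linear combinations (a NumPy sketch with assumed matrices and coefficients):

```python
import numpy as np

A1 = np.array([[1, 0], [0, 1]])
A2 = np.array([[0, 1], [1, 0]])
A3 = np.array([[1, 1], [1, 1]])

print(3 * A1)                        # scalar multiple: every entry of A1 times 3

# linear combination c1*A1 + c2*A2 + c3*A3 with coefficients 2, -1, 4
c = [2, -1, 4]
L = c[0] * A1 + c[1] * A2 + c[2] * A3
print(L)                             # [[6 3] [3 6]]
```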

Properties of Matrix Addition and Scalar Multiplication

Suppose A, B, and C are matrices (having the same size) and c and k are scalars. Then

i. A + B = B + A (commutative law of addition)
ii. (A + B) + C = A + (B + C) (associative law of addition)
iii. A + 0 = 0 + A = A (existence of additive identity)
iv. A + (-A) = 0 (existence of additive inverse)
v. c(A + B) = cA + cB
vi. (c + k)A = cA + kA
vii. c(kA) = (ck)A
viii. 1A = A and 0A = 0
IV. Product of Matrices
Definition: Let A be an m × n matrix and B be an n × p matrix (i.e. the number of
columns of A is equal to the number of rows of B). Then the product of A and B, denoted by AB,
is the m × p matrix obtained by multiplying the elements of the i-th row of A by the corresponding
elements of the j-th column of B and adding the products, i.e. if A = [a_ik] and B = [b_kj], then

AB = [c_ij], where c_ij = a_i1 b_1j + a_i2 b_2j + … + a_in b_nj = Σ_{k=1}^{n} a_ik b_kj, for i = 1, …, m and j = 1, …, p.
- Consider the following matrices

[ ] and [ ]

Rows of are:
[ ]
[ ]

[ ]
Columns of are:
[ ]
[ ]

[ ]
Then

[ ] [ ]

Example-1: Find the product of the matrices

Solution:

* +

Exercise: Determine the size of the product matrix if the sizes of and are and
respectively.
Note: The product of lower triangular matrices is lower triangular, and the product of upper
triangular matrices is upper triangular.
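The entry formula c_ij = Σ_k a_ik b_kj can be checked against NumPy's built-in matrix product; the sketch below uses assumed 2 × 3 and 3 × 2 matrices:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [3, 1, 4]])        # 2 x 3
B = np.array([[2, 1],
              [0, 1],
              [1, 5]])           # 3 x 2

# c_ij = sum over k of a_ik * b_kj
C = np.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

print(C)          # [[ 2.  3.] [10. 24.]]
print(A @ B)      # same result; the product of a 2x3 and a 3x2 matrix is 2x2
```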
Properties of Matrix Multiplication
1) AB ≠ BA in general (matrix product is not commutative).
2) (AB)C = A(BC) (matrix multiplication is associative).
3) A(B + C) = AB + AC and (B + C)A = BA + CA (multiplication of matrices is
distributive with respect to addition).
4) If AB = 0, it does not mean that either A = 0 or B = 0.

Example: For matrix A and B given by * + * + we have

* + is a null matrix even though and are not a null matrix.

5) The relation AB = AC does not imply that B = C. In other words, the cancellation law
does not hold as it does for real numbers.

Example: if [ ], [ ] and [ ]

We have, [ ] , but
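The cautionary properties 1), 4), and 5) can be demonstrated numerically; the matrices in this sketch are hypothetical stand-ins for the examples whose entries were lost above:

```python
import numpy as np

# 1) AB != BA in general
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(np.array_equal(A @ B, B @ A))   # False

# 4) AB = 0 with A != 0 and B != 0
A = np.array([[1, 0], [2, 0]])
B = np.array([[0, 0], [3, 4]])
print(A @ B)                          # [[0 0] [0 0]], yet neither A nor B is a zero matrix

# 5) AB = AC does not imply B = C
A = np.array([[1, 1], [1, 1]])
B = np.array([[1, 0], [0, 1]])
C = np.array([[0, 1], [1, 0]])
print(np.array_equal(A @ B, A @ C))   # True, although B != C
```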

Exercise: If * + * + and * +, then verify that:

a) b)

V. Transpose of a Matrix
Definition: If A is an m × n matrix, then the transpose of A, denoted by A^T, is defined to be the
n × m matrix that results from interchanging the rows and columns of A.
Remark: The transpose of a row matrix is a column matrix and the transpose of a column matrix
is a row matrix.
Example: The following are some examples of matrices and their transposes.

Properties of Matrix Transpose


1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (AB)^T = B^T A^T
Proof: Ex.
Note: The transpose of a lower triangular matrix is upper triangular and the transpose of an
upper triangular matrix is lower triangular.
Orthogonal Matrix: An orthogonal matrix A is a matrix such that A^T A = A A^T = I. A typical
orthogonal matrix is:

||√ √ |
|

√ √
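A short numerical check of the transpose properties and of the orthogonality condition A^T A = I (a NumPy sketch; the 2 × 2 matrix built from 1/√2 entries is an assumed example):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [2, 5]])

print(np.array_equal(A.T.T, A))                # (A^T)^T = A
print(np.array_equal((A + B).T, A.T + B.T))    # (A + B)^T = A^T + B^T
print(np.array_equal((A @ B).T, B.T @ A.T))    # (AB)^T = B^T A^T

# An orthogonal matrix: Q^T Q = Q Q^T = I
Q = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
print(np.allclose(Q.T @ Q, np.eye(2)))         # True
```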

VI. Trace of Matrix


Definition: If A is a square matrix, then the trace of A, denoted by tr(A), is defined to be the
sum of the entries on the main diagonal of A. The trace of A is undefined if A is not a square matrix.
Example: The following are examples of matrices and their traces.
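For instance (a small NumPy sketch, since the example matrices above were not preserved):

```python
import numpy as np

A = np.array([[2, 7, 0],
              [1, 5, 3],
              [4, 0, 6]])
print(np.trace(A))   # 13 = 2 + 5 + 6, the sum of the main-diagonal entries
```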

VII. Polynomial of Matrix
For any square matrix A and for any polynomial p(x) = a_0 + a_1x + a_2x^2 + … + a_nx^n,
where a_0, a_1, …, a_n are scalars, we define p(A) = a_0I + a_1A + a_2A^2 + … + a_nA^n. If p(A) = 0, then A
is a zero (root) of the polynomial.

Examples: Let * + and let . Then compute .

Solution:
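Evaluating a matrix polynomial follows the definition directly; the sketch below uses an assumed matrix A and the assumed polynomial p(x) = x^2 - 5x + 6:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# p(x) = x^2 - 5x + 6  ->  p(A) = A^2 - 5A + 6I  (the constant term multiplies the identity)
pA = A @ A - 5 * A + 6 * np.eye(2)
print(pA)        # [[8. 0.] [0. 8.]]
```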
Exercise:

1. Let * + .Then compute:

a) b)
VIII. Symmetric and Skew-Symmetric Matrices:
Definition: Let A be a square matrix. Then A is said to be
• Symmetric if A^T = A, i.e. a_ij = a_ji.
• Skew-symmetric (anti-symmetric) if A^T = -A, i.e. a_ij = -a_ji.

Example: A = [ ] B=[ ] C= [ ]

Symmetric Skew-Symmetric Neither
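These conditions are easy to test numerically (a sketch with assumed matrices):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 5, 4],
              [3, 4, 9]])
B = np.array([[0, 2, -1],
              [-2, 0, 4],
              [1, -4, 0]])

print(np.array_equal(A.T, A))    # True -> A is symmetric
print(np.array_equal(B.T, -B))   # True -> B is skew-symmetric (note the zero diagonal)
```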


Exercise: Determine whether the following matrices are symmetric, skew symmetric or neither.

a) * +
d) [ ]

b) * +
e) * +

c) [ ]

3.4. Elementary Row Operations and Echelon Form of Matrices


i. Elementary Row Operations

A matrix A is said to be row equivalent to a matrix B, written A ~ B, if B is obtained from A by a
finite sequence of elementary row operations. These elementary row operations are:
i. Interchanging the i-th row and the j-th row (i.e. R_i ↔ R_j)
ii. Multiplying the i-th row by a non-zero scalar k (i.e. R_i → kR_i, k ≠ 0)
iii. Replacing the i-th row by k times the j-th row plus the i-th row (i.e. R_i → R_i + kR_j)
Example: Apply all elementary row operations on the given matrix:

[ ] [ ] [ ] [ ]
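The three elementary row operations translate directly into array manipulations; the following NumPy sketch applies them to an assumed matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

A[[0, 1]] = A[[1, 0]]     # i.   interchange row 1 and row 2 (R1 <-> R2)
A[2] = 3 * A[2]           # ii.  multiply row 3 by the non-zero scalar 3 (R3 -> 3*R3)
A[1] = A[1] + 2 * A[2]    # iii. replace row 2 by row 2 plus 2 times row 3 (R2 -> R2 + 2*R3)
print(A)
```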

Exercise: Let [ ] . Then find matrix which is row equivalent to with and

.
Remark: The first non-zero entry in a row is called the leading entry (pivotal entry) of that row.

[ ]

Entries are leading entries of respectively, and no leading entry for row .

ii. Echelon Form of a Matrix


Definition1: An m × n matrix is said to be in echelon form (EF) provided the following two
conditions hold.
1. Any zero rows (if there are any) are at the bottom, below the non-zero rows.
2. The leading entry of each row is to the right of the leading entries of the rows above it (i.e. all
entries below a leading entry are zero).

Example: [ ] [ ] [ ] [ ]

Matrices , and are in echelon form, but matrix is not.


Definition2: An m × n matrix is said to be in row-echelon form (REF) provided the following two
conditions hold.
1. The matrix is in echelon form.
2. All leading entries are equal to 1.

Example: [ ] [ ] [ ] [ ]

Matrices are in row echelon form (REF) but is not.


Definition3: An m × n matrix is said to be in reduced row-echelon form (RREF) provided the
following two conditions hold.
1. The matrix is in row echelon form.
2. In every column that contains a leading entry, all other entries are equal to 0.

Example: [ ] [ ] [ ] [ ]

Remark: Any matrix can be reduced to its echelon form by applying some elementary row operations
on the given matrix.
Example: Reduce matrix to its row echelon form by applying elementary row operations where

A=[ ]

Solution: Applying we have: [ ]

Applying and we get: [ ]

Applying we get: [ ⁄ ]

Applying we get: [ ⁄ ]

Applying ⁄ we get: [ ⁄ ]

Applying we get: [ ]. This is in row echelon form.

Rank of a Matrix
Suppose an m × n matrix A is reduced by row operations to an echelon form E. Then the rank of A,
denoted by rank(A), is defined to be
Rank(A) = number of pivots (leading entries), or
= number of nonzero rows in E, or
= number of basic columns in A,
where the basic columns of A are defined to be those columns in A which contain the pivotal positions.
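In practice the rank and the basic (pivot) columns can be read off from a computed reduced row echelon form; the sketch below uses SymPy's rref and NumPy's matrix_rank (outside tools, not part of the text) on an assumed matrix:

```python
import numpy as np
import sympy as sp

A = np.array([[1, 2, 1, 1],
              [2, 4, 2, 2],
              [1, 2, 3, 5]])

R, pivot_cols = sp.Matrix(A).rref()   # reduced row echelon form and pivot column indices
print(pivot_cols)                     # (0, 2) -> columns 1 and 3 are the basic columns
print(len(pivot_cols))                # 2 = number of pivots = rank
print(np.linalg.matrix_rank(A))       # 2, computed directly
```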

Example 1: Let matrix A = [ ]. Then rank(A) = 3 since matrix A is in echelon form and

has three non-zero rows.


Example 2: Determine the rank, and identify the basic columns in the matrix

[ ]

Solution: [ ] [ ] [ ]

and

Basic Columns {( ) ( )}

Exercise: Reduce each of the following matrices to its echelon form and determine its rank & identify
the basic columns.

a. [ ]
b.

[ ]

3.5. Inverse of a Matrix and Its Properties


Definition: If A is an n × n square matrix, and if a matrix B of the same size can be found such
that AB = BA = I, where I is the n × n identity matrix, then A is said to be invertible (non-singular)
and B is called an inverse of A. If no such matrix B can be found, then A is said to be singular.

Example: The matrix

Since

And

Remark:
1.

2. Matrices which are not square matrices have no inverses


3. Not all square matrices have an inverse
4.
Properties of Inverse Matrices
1. If B and C are both inverses of a matrix A, then B = C; that is, the inverse of a matrix is unique
and is denoted by A^{-1}.
2. If A and B are invertible matrices of the same size, then
a) AB is invertible and (AB)^{-1} = B^{-1}A^{-1}
b)
3. A^{-1} is invertible and (A^{-1})^{-1} = A

4. Let D = [d_i] be an n × n diagonal matrix where d_i ≠ 0 for all i (i = 1, …, n).

Then the inverse of D, denoted by D^{-1}, is the diagonal matrix with diagonal entries 1/d_i:

[ ]
Exercise: Verify that .

Finding Inverse of a Matrix by Using Elementary Row Operations (Gauss-Jordan
Elimination Method)
Let A be an n × n matrix and let I be the n × n identity matrix. Then, to find the inverse of A:
1. Adjoin the identity matrix I to A to form the augmented matrix [A | I].
2. Compute the reduced echelon form of [A | I]. If the reduced echelon form is of the type [I | B],
then B is the inverse of A. If the reduced echelon form is not of the type [I | B], in that the first
n × n submatrix is not I, then A has no inverse.
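This procedure can be mirrored in SymPy by row-reducing the augmented matrix [A | I]; the matrix below is an assumed invertible example:

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [5, 3]])                  # assumed invertible matrix
n = A.rows

aug = A.row_join(sp.eye(n))              # step 1: form the augmented matrix [A | I]
R, pivots = aug.rref()                   # step 2: reduced row echelon form

if list(pivots) == list(range(n)):       # left block reduced to I  =>  A is invertible
    A_inv = R[:, n:]
    print(A_inv)                         # Matrix([[3, -1], [-5, 2]])
else:
    print("A is singular; no inverse")
```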
Example-1: Find the inverse of A if

[ ]

Solution: [ ] [ ]

Applying we get:

[ ]

Applying we get:

[ ]

Applying we get:

[ ]

Applying ; and we get:

[ ]

Therefore, B = [ ]

Example-2: Find the inverse of [ ] by using elementary row operations (Gauss-Jordan

Method).

Solution: [ ] [ ]

Applying , we get:

[ ]

Applying we get:

[ ]

Applying we get:

[ ]

Applying we get:

[ ]

Applying and we get:

[ ]

Applying we get:

[ ]

[ ] [ ]

[ ]

Example-3: Determine the inverse of the matrix

[ ]

Solution: [ ]

Applying we get

[ ]

Applying we get

[ ]

Applying we get

[ ]

Applying we get

[ ] Thus [ ]

Exercise: Determine the inverse of each of the following matrices, if it exists.

[ ] [ ] [ ] [ ]

Chapter Four
Determinant of Matrices
4.1. Definition of Determinant
Introduction: In this section, we shall study the "determinant function," which is a real-valued function of a
matrix variable in the sense that it associates a real number with a square matrix A. Our work on
determinant functions will have important applications to the theory of systems of linear equations and will also
lead us to an explicit formula for the inverse of an invertible matrix.
Definition: A “determinant” is a certain kind of function that associates a real number with a square
matrix A, and it is usually denoted by det(A) or |A|; i.e. if A is an n × n square matrix, then

| |

a. Determinant of a 1 × 1 matrix
Let A = [a_11] be a 1 × 1 matrix. Then det(A) = a_11.
Example: Let [ ]. Then
b. Determinant of a 2 × 2 matrix
Let A = [ a_11  a_12 ; a_21  a_22 ] be a 2 × 2 matrix. Then det(A) = |A| = a_11 a_22 - a_12 a_21.

Example: Find det(A) if A = * +

Solution:
c. Determinant of an n × n matrix
Definition-1: Let A be an n × n square matrix and M_ij be the (n-1) × (n-1) matrix obtained from
matrix A by deleting the row and the column containing the entry a_ij. Then det(M_ij) is called the minor
of a_ij.
Remark: The matrix M_ij is called a submatrix of A.
Definition-2: The cofactor of a_ij, denoted by C_ij, is defined as C_ij = (-1)^(i+j) det(M_ij).

Example: Let [ ]. Then find the minor and cofactor of and .

Solution: Minor of | |

Cofactor of

Minor of | |

Cofactor of
Definition-3: The determinant of an n × n matrix A is given by either of the following two formulas.
i. det(A) = Σ_{j=1}^{n} a_ij C_ij, for a fixed row i (expansion along row i)
ii. det(A) = Σ_{i=1}^{n} a_ij C_ij, for a fixed column j (expansion along column j)
Example 1: Evaluate the determinant for the matrices

[ ] and [ ]

Solution: Let us take row-1 of matrix for . Then

∑ ∑

Let us take column-1 of matrix for .

∑ ∑

Example 2: Let be vectors in . Then show

that | |

Solution: | | [ ] [ ] [ ]

=
=
=

Note: The determinant of a 3 × 3 square matrix A = [a_ij] is given by

det(A) = a_11 |M_11| - a_12 |M_12| + a_13 |M_13|
       = a_11 (a_22 a_33 - a_23 a_32) - a_12 (a_21 a_33 - a_23 a_31) + a_13 (a_21 a_32 - a_22 a_31).
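A recursive cofactor expansion along the first row reproduces these formulas; the sketch below (an added illustration, not an efficient algorithm) cross-checks the result against NumPy's determinant on an assumed matrix:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # delete row 1 and column j+1
        total += (-1) ** j * A[0, j] * det_cofactor(minor)       # sign (-1)^(1+(j+1)) = (-1)^j
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(det_cofactor(A))    # 22.0
print(np.linalg.det(A))   # essentially 22.0 (floating-point LU-based value)
```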

4.2. Properties of Determinants


Let A, B, and C be n × n square matrices and k be any scalar. Then
a. det(A^T) = det(A)
b. det(kA) = k^n det(A), where n is the size of A.
c. det(AB) = det(A) det(B)
d. det(ABC) = det(A) det(B) det(C)

e. det(A^{-1}) = 1/det(A), if A is an invertible (non-singular) matrix.

f. The determinant of any triangular matrix is the product of its diagonal entries,

i.e. det(A) = a_11 a_22 ⋯ a_nn for triangular A.
g. The determinant of a diagonal matrix is the product of its diagonal entries. Thus det(I) = 1,
where I is an identity matrix.
h. If any two rows (or columns) of a square matrix are proportional to each other (i.e. one is a scalar
multiple of the other), then det(A) = 0.
i. If any row (or column) of a square matrix is zero, then det(A) = 0.
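Several of these properties can be spot-checked numerically (a NumPy sketch with assumed matrices; floating-point equality is tested with isclose):

```python
import numpy as np

A = np.array([[2.0, 1.0], [4.0, 3.0]])
B = np.array([[1.0, 5.0], [0.0, 2.0]])
k, n = 3.0, 2

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                        # det(A^T) = det(A)
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))               # det(kA) = k^n det(A)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # det(AB) = det(A)det(B)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))       # det(A^-1) = 1/det(A)
```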
Exercise:
1. Evaluate the determinant for each of the following matrices.


, [ √ ], ,

[ ] [ ]

[ ], [ ]

2. Let and , and let and be a matrices. Then find


i. iii.
ii. iv.

3. Let * +and let . Then find the value/s of .

Theorem-1: Let A be an invertible matrix. Then det(A^{-1}) = 1/det(A).

Proof: A A^{-1} = I; taking the determinant on both sides we get det(A) det(A^{-1}) = det(I) = 1,
and therefore det(A^{-1}) = 1/det(A).
Example: If * +, then .

Note: We can evaluate the determinant of any square matrix by reducing it to its echelon form, while
keeping track of the following facts.
a. If B is the matrix that results when a particular row of A is multiplied by a non-zero constant k,
then det(B) = k det(A).
b. If B is the matrix that results when two rows or two columns of A are interchanged, then
det(B) = -det(A).
c. If B is the matrix that results when a multiple of one row of A is added to another row, or when a
multiple of one column is added to another column, then det(B) = det(A).

Exercise: Let [ ] and let . Then find , and if

[ ], [ ] and [ ]

4.3. Adjoint and Inverse of a Matrix

Definition-1: Let A = [a_ij] be an n × n square matrix and let C_ij be the cofactor of

a_ij. Then the matrix [C_ij], whose (i, j) entry is the cofactor C_ij, is called the matrix of cofactors

of the entries of A, and its transpose [C_ij]^T is called the adjoint of A, denoted by adj(A).

Example: Find the matrix of cofactors and adjoint of the matrix

[ ]

Solution:
The cofactors of each elements of A are

| | | |

| | | |

| | | |

| | | |

| |

Matrix of Cofactors [ ] [ ]

[ ]

Exercise: Find the adjoint of A if:

[ ]

Definition-2: Let A be an n × n square matrix with det(A) ≠ 0. Then the inverse of the matrix A, denoted

by A^{-1}, is defined as A^{-1} = (1/det(A)) adj(A).

Note: If det(A) = 0, then the matrix A has no inverse (i.e. A is a singular matrix).
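The definition translates into a short routine: build the matrix of cofactors, transpose it to get the adjoint, and divide by the determinant. The sketch below uses an assumed matrix and cross-checks against NumPy's built-in inverse:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors: C_ij = (-1)^(i+j) * det(M_ij), where M_ij is the minor of a_ij."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])              # hypothetical example matrix
C = cofactor_matrix(A)
adjA = C.T                                   # adjoint (adjugate) = transpose of the cofactor matrix
A_inv = adjA / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```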

Example: Find the inverse of the matrix [ ]

Solution:

[ ]

[ ]

[ ]
Exercise: Verify whether each of the following matrices has an inverse, and find the inverse if it has one.

* +, [ ] and [ ]

4.4. System of Linear Equations


A system of linear equations in n unknowns (variables) is given by

The matrix form of this system of linear equations is

[ ][ ] [ ]

A x b

where A is the coefficient matrix, x is the unknown vector, and b is the known (constant) vector.
The augmented matrix, [A | b], for the above system of linear equations is

[ ]

Remark: If a system of linear equations is of the form Ax = 0, i.e. all entries of b are equal to 0, then the
system is called a homogeneous system of linear equations; otherwise it is called non-homogeneous.
Example: Write the coefficient matrix and the augmented matrix for the following system of linear
equations.

Solution: Coefficient Matrix [ ]

Augmented Matrix [ ] [ ]

Solving Systems of Linear Equations


1. Cramer’s Rule
Let Ax = b be a system of n linear equations in n unknowns such that det(A) ≠ 0. Then the system has
a unique solution. This solution is:

x_1 = det(A_1)/det(A),  x_2 = det(A_2)/det(A),  …,  x_n = det(A_n)/det(A),

where A_j (for j = 1, 2, …, n) is the matrix obtained by replacing the entries in the j-th column of A by the

entries in the matrix b (the column of constants).
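A direct implementation of the rule (a sketch; the 2 × 2 system shown is an assumed example, not the one solved in the example below):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (requires det(A) != 0)."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b                       # replace column j of A by b
        x[j] = np.linalg.det(Aj) / detA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])     # assumed system: 2x + y = 3, x + 3y = 5
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                  # [0.8 1.4]
print(np.linalg.solve(A, b))               # same answer
```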

Example: Using Cramer’s Rule, solve the following system of linear equations.

Solution:

[ ] [ ]

[ ] [ ]

, , , and

Thus and

Exercise: Solve the following system of linear equations using Cramer’s Rule:

a) {
b)

2. Gaussian Elimination Method

Definition: The process of using elementary row operations to transform the augmented matrix of a linear
system into one that is in row echelon form is called Gaussian elimination.

Let Ax = b be a system of linear equations. Then, to solve the system by the Gaussian elimination
method, use the following procedure.

i. Write down the augmented matrix for the system.


ii. Reduce this augmented matrix to its row echelon form.
iii. Use back substitution to arrive at the solution.
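A bare-bones version of this procedure, assuming every pivot encountered is non-zero so that no row interchanges are needed, might look like the following; the system solved is an assumed example:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b: forward elimination to an upper triangular (echelon) form,
    then back substitution. Assumes the pivots are non-zero (no row swaps)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))   # [ 2.  3. -1.]
print(np.linalg.solve(A, b))        # same result
```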
Example: By using the Gaussian-Elimination method, solve the following system of linear equations.

Exercise: By using the Gaussian-Elimination method, solve the following system of linear equations.

i. ii.

3. Inverse Matrix Method

Theorem: If A is an invertible n × n matrix, then for each n × 1 matrix b, the system Ax = b has

exactly one solution, namely x = A^{-1}b.

Example 1: Solve the following system of linear equations by using the inverse method.

In matrix form, this system can be written as Ax = b, where

A=[ ] [ ] and [ ]

But [ ] and we have:

[ ][ ] [ ] [ ]

Thus

Exercise: Solve the following system of linear equations by using inverse method.

a)
b)

Remark:-

1. A system of equations that has no solution is called inconsistent; if there is at least one solution
of the system, it is called consistent.
2. A system of linear equations may have no solution, exactly one solution (a unique
solution), or infinitely many solutions.
3. Every homogeneous system of linear equations (Ax = 0) is consistent, since x_i = 0 (for
i = 1, …, n) is a solution. This solution is called the trivial solution; if there are other solutions,
they are called nontrivial solutions. If the system has nontrivial solutions, then it has infinitely
many of them.
4. If the number of variables is greater than the number of equations in a consistent system of linear
equations, then the system has infinitely many solutions. The arbitrary values that are assigned to the free
variables are often called parameters.
5. Let Ax = b be a system of non-homogeneous linear equations in which the number of variables is
equal to the number of equations (i.e. A is a square matrix). And let A_j (for j = 1, …, n) be the
matrix obtained by replacing the entries in the j-th column of A by the entries in the matrix
b (the column of constants). Then

i. if det(A) ≠ 0, then the system has a unique solution.

ii. if det(A) = 0, then the system has
a. infinitely many solutions if det(A_j) = 0 for all j
b. no solution if det(A_j) ≠ 0 for at least one j
4.5. Eigenvalues and Eigenvectors

Definition: Let A be an n × n square matrix and x be a non-zero column vector. Then x is called an
eigenvector (or right eigenvector or right characteristic vector) of A if there exists a scalar λ such that

Ax = λx ............... (1)

Then, λ is called an eigenvalue or characteristic value of A. An eigenvalue may be zero, but an eigenvector
cannot be the zero vector.

Example: Show that * + is an eigenvector corresponding to the eigenvalue for the matrix

* +.

Solution: from equation (1) we have

* +* + * + * +

From equation (1), Ax = λx can be rewritten as (A − λI)x = 0.

• |A − λI|, a polynomial in λ, is called the characteristic polynomial of A.

• The equation |A − λI| = 0 is called the characteristic equation of A.
• For each eigenvalue λ, the corresponding eigenvectors x are found by substituting λ back into the
equation (A − λI)x = 0.
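Numerically, the characteristic polynomial, eigenvalues, and eigenvectors can be obtained as follows (a NumPy sketch on an assumed 2 × 2 matrix, not the worked example below):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # assumed 2 x 2 example

# coefficients of det(A - lambda*I): here lambda^2 - 7*lambda + 10
print(np.poly(A))                     # [ 1. -7. 10.]

eigvals, eigvecs = np.linalg.eig(A)   # the columns of eigvecs are the eigenvectors
print(eigvals)                        # eigenvalues 5 and 2 (order may vary)
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True: A v = lambda v
```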

Example: Determine the eigenvalues and corresponding eigenvectors of the matrix * +

Solution: For this matrix * + * + * + and hence

. Thus, the characteristic equation of A is


and upon solving this we get

i. The eigenvectors corresponding to the first eigenvalue are obtained by solving equation (1) above for x = * +. With
this value of the eigenvalue, after substituting and rearranging, we get:

(* + * +) * + * + * +* + * +

This is equivalent to the set of linear equations given below:

The solution to this system is with arbitrary, so the eigenvectors corresponding to

are,

* + [ ] [ ] with arbitrary.

ii. For the second eigenvalue, equation (1) above may be written as:

(* + * +) * + * + * +* + * +

This is equivalent to the set of linear equations given below:

The solution to this system is with arbitrary, so the eigenvectors corresponding to


are,

* + * + * + with arbitrary

Exercise: Determine the eigenvalues and eigenvectors of the following matrices, if they exist:

[ ] [ ]

4.6. Diagonalization of a Symmetric Matrix
