Linear Algebra Lecture 1

The document is a comprehensive guide on matrix algebra and its applications in economics, covering definitions, types of matrices, and operations such as addition, subtraction, and multiplication. It also discusses systems of linear equations, special determinants, and linear programming methods. The content is organized into chapters that progressively build on the concepts, providing examples and mathematical properties relevant to economists.

Table of Contents

CHAPTER ONE: MATRIX ALGEBRA
1. INTRODUCTION
1.1. Overview of Matrix definition
1.2. Types of Matrices
1.3. Matrix operations
1.4. Determinant of a matrix
1.5. Inverse of a matrix
1.6. Partitioned matrix
1.7. Trace of a matrix
CHAPTER TWO: SYSTEM OF LINEAR EQUATIONS
2.1. Matrix representation of linear equations
2.2. Solving Systems of Linear Equations
2.2.1. The inverse matrix method
2.2.2. The Gauss-Jordan Method (Gaussian elimination)
2.2.3. Cramer's rule
Chapter 3. Special Determinants and Matrices in Economics
3.1. Introduction
3.2. The Jacobian Determinant
3.3. The Hessian Determinant
3.3.1. Definite Hessian Matrices
3.4. Eigenvectors and Eigenvalues
Property 1
Property 2
Property 3
Property 4
Property 5
Property 6
Chapter 4: Input-Output analysis and Linear Programming
4.1. Input-Output Model (Leontief Model)
4.1.1. Introduction
4.2. Linear programming
4.2.1. Introduction
4.2.2. Basic structure of a linear programming problem
4.2.3. LINEAR PROGRAMMING ASSUMPTIONS AND GENERAL LIMITATIONS
4.3. Solving Linear Programming Problems
4.3.1. The Graphic Method
Example: Mindoro Mines
4.4. The Duality Theorem
REFERENCES

Linear Algebra for Economists, Organized by Ahmed A.

CHAPTER ONE: MATRIX ALGEBRA

1. INTRODUCTION
The aim of this chapter is to introduce the concept of a matrix, which is a convenient
mathematical way of representing information displayed in a table. By defining the matrix
operations of addition, subtraction and multiplication, it is possible to develop an algebra of
matrices.

In this chapter you are also shown how to calculate the inverse of a matrix. This is analogous to
the reciprocal of a number and enables matrix equations to be solved. In particular, inverses
provide an alternative way of solving systems of simultaneous linear equations and so can be
used to solve problems in statics.

1.1. Overview of Matrix definition


What is matrix algebra?

A matrix is a rectangular array (or arrangement) of numbers (or variables) arranged in such a
way that each number has a definite position allotted to it. The numbers (parameters or variables)
are referred to as elements (entries) of the matrix. The numbers in the horizontal line are called
rows; the numbers in a vertical line are called columns. A matrix is denoted by capital letters in bold
type (that is, A, B, C . . .) and their elements by the corresponding lower-case letter in ordinary
type. In fact, we use a rather clever double subscript notation so that aij stands for the element of
A which occurs in row i and column j.
A general matrix of order (m x n) is written as:

A = [ x11  x12  ...  x1n
      x21  x22  ...  x2n
      ...  ...  ...  ...
      xm1  xm2  ...  xmn ]  (m x n)

Matrix A above has m rows and n columns, or it is said to be a matrix of order (size) m x n (read as "m by n").

Example:

A = [ a11  a12  a13
      a21  a22  a23
      a31  a32  a33 ]
Here A is a general matrix composed of 3 rows and 3 columns and is said to have order 3x3 with
9 elements. The elements all have double subscripts which give the address or placement of the
element in the matrix; the first subscript identifies the row in which the element appears and the
second identifies the column. For instance, a23 is the element which appears in the second row
and the third column and a32 is the element which appears in the third row and the second
column.

Example 1: Given

A = [ 3  5  7
      ·  4  · ]

(the dots denote entries not shown here), then matrix A has order 2 x 3; the element a13 is 7, the element a22 is 4, and so forth.
1.2. Types of Matrices
Dear learner! Can you mention some of the types of matrices?
Row Matrix – A matrix which has exactly one row is called a row matrix or a row vector; its

dimension is 1xn. Example A= ( 1 2 3 4)


This is a row vector of order one by four, (1 x 4). We can simply say "a row vector of order
four.”
Column Matrix –A matrix which has exactly one column is called a column matrix or a column
vector; its dimension is m x 1.

Example:

[  2
  -1
   · ]

is a column matrix (vector) of order three.
Null or Zero Matrix –A matrix each of whose elements is zero is known as a null or zero
matrix.

Example 1:

[ 0 0 0
  0 0 0 ]

is a 2 x 3 null matrix.
Square Matrix – A matrix whose number of rows is equal to the number of columns is called a square matrix.

Example:

A = [ 1 2
      · · ]

is a 2 x 2 square matrix. It can simply be referred to as "a square matrix of order two."
Diagonal Matrix –A square matrix whose every element other than the diagonal elements is
zero is known as a diagonal matrix.

If

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]

then elements a11, a22 and a33 are the diagonal elements of matrix A.
In order to classify a matrix as a diagonal matrix;
a) It should be a square matrix; that is its number of rows should be equal to its number of
columns, and
b) All the non-diagonal elements should be zero. Note that the diagonal elements
themselves may or may not be zero.
Example

A = [ 1 0 0
      0 2 0
      0 0 · ]

B = [ 0 0
      0 · ]

C = [ 0 0 0
      0 0 0
      0 0 · ]
Scalar Matrix –A diagonal matrix whose diagonal elements are equal is called scalar matrix.
Therefore, a scalar matrix must be a) a square matrix, b) a diagonal matrix and c) all its diagonal
elements must be equal.
Example:

A = [ 3 0
      0 3 ]

The matrix A above is a scalar matrix.
Identity (unit) Matrix – A scalar matrix whose every diagonal element is equal to one is called
Identity or Unit matrix. Identity matrix is a) square matrix, b) diagonal matrix, c) scalar matrix,
and d) all its diagonal elements equal to 1. Therefore, there is only one unit matrix for each
square order; thus it is unique.
Example:

A = [ 1 0
      0 1 ]

A is called the identity matrix of order 2.

B = [ 1 0 0
      0 1 0
      0 0 1 ]

B is called the identity matrix of order 3.

C = [ 1 0 0 0
      0 1 0 0
      0 0 1 0
      0 0 0 1 ]

C is called the identity matrix of order 4.

The identity matrix has characteristics similar to the number 1 in ordinary algebra. If A is any matrix and I is an identity matrix, and A and I are conformable for multiplication, then
A x I = I x A = A
Triangular Matrix – A matrix whose every element above (or below) the diagonal is equal to zero is called a triangular matrix. Specifically, a square matrix whose aij = 0 whenever i < j is called a lower triangular matrix. Analogously, a square matrix whose aij = 0 whenever i > j is called an upper triangular matrix.
Example 1

A = [ 5 0 0
      3 1 0
      · · · ]

B = [ 1 0 0 0
      2 3 0 0
      0 5 0 0
      · · · · ]

C = [ 0 0 0
      0 0 0
      · · · ]

D = [ 1 0 0
      0 1 0
      · · · ]

E = [ 2 0
      · · ]
All the above matrices are lower triangular matrices because their elements above the diagonal are equal to zero. Note that their a12, a13, a14, ..., a23, a24, a25, ..., a34, a35, a36, ... are zero. That is, every element whose row number is less than its column number (aij, where i < j) is equal to zero.
A symmetric matrix: a matrix is said to be symmetric if A = A′ (A equals its own transpose).

Example:

A = [ 8 2 1
      2 3 4
      1 4 5 ]
Idempotent matrix: this is a matrix having the property that A² = A.

Example:

A = [ 2/3  1/3
      2/3  1/3 ]

and indeed

A² = [ 2/3  1/3
       2/3  1/3 ] = A
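As a quick numerical check of the idempotent property, the short Python/NumPy sketch below (NumPy is assumed to be available; it is not part of these notes) squares the example matrix and compares the result with the original.

```python
import numpy as np

# The idempotent example from the text: both rows are (2/3, 1/3)
A = np.array([[2/3, 1/3],
              [2/3, 1/3]])

A_squared = A @ A                    # the matrix product A*A

# allclose is used because the entries are floating-point fractions
print(np.allclose(A_squared, A))     # True, so A is idempotent
```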

Matrix equality: Two matrices A and B are said to be equal if and only if:

a) A and B are of the same order, and


b) Every pair of corresponding elements in A and B is the same.
Therefore, a matrix is equal to itself only.

Example 1: If

A = [ 1 2 0
      5 3 4 ]

and

B = [ 1 2 0
      5 3 4 ]

then A = B, because every element of A is equal to the corresponding element of B. Matrices A and B are therefore an example of equal matrices.

1.3. Matrix operations


All we have done so far is to explain what matrices are and to provide some notation for handling
them. A matrix certainly gives us convenient shorthand to describe information presented in a
table. However, we would like to go further than this and to use matrices to solve problems in
economics. To do this we describe several mathematical operations that can be performed on
matrices, namely:
 Transposition of matrices
 Addition and subtraction of matrices
 Scalar multiplication
 Matrix multiplication
One obvious omission from the list is matrix division. Strictly speaking, it is impossible to divide one matrix by another, although we can get fairly close to the idea of division by defining something called an inverse, which we consider in a later section.
A. Transpose of a matrix:
The transpose of a matrix is an important construct that is frequently encountered when working with matrices, and is represented variously by A^T, A^tr, Ã, A′, or rarely A^t. Most linear algebra texts use A^T, while a number of publications use A′. For that reason, we will generally use A′ to represent the transpose of a matrix. Operationally, the transpose of a matrix is created by 'converting' its rows into the corresponding columns of its transpose, meaning the first row of a matrix becomes the first column of its transpose, the second row becomes the second column, and so on. Thus, if

A = [ 1 2 3
      4 5 6
      7 8 9 ]

the transpose of A is

A′ = [ 1 4 7
       2 5 8
       3 6 9 ]
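For readers who want to experiment, a minimal NumPy sketch (assuming NumPy is installed; the variable names are only illustrative) reproduces the transpose example above: the rows of A become the columns of A′.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

A_transpose = A.T        # rows of A become the columns of A'

print(A_transpose)
# [[1 4 7]
#  [2 5 8]
#  [3 6 9]]
```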

B. Addition of matrices: Two matrices A and B can be added or subtracted if and only if they have the same order, that is, the same number of rows and columns; otherwise, they are said to be non-conformable for addition. If they are conformable for addition, their sum is the matrix formed by adding each pair of corresponding elements. If they are non-conformable for addition, the sum of the matrices does not exist.
Generally, if

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]

and

B = [ b11 b12 b13
      b21 b22 b23
      b31 b32 b33 ]

Then

A + B = [ a11+b11  a12+b12  a13+b13
          a21+b21  a22+b22  a23+b23
          a31+b31  a32+b32  a33+b33 ]
Example 1: Let

A = [ 2 0
      · · ]

B = [ 3 6
      · · ]

Then

A + B = [ 2+3  0+6
          ·    ·   ]
      = [ 5 6
          · · ]
If two or more matrices, say A, B and C, are conformable for addition (that is if they are of the
same order), then:
i) Matrix addition is commutative
A+B=B+A
ii) Matrix addition is associative
(A + B) + C = A + (B + C) = A + B + C
iii) If 0 denotes a zero matrix of the same order as that of A, then
A+0=0+A=A

iv) λ(A + B) = λA + λB, where λ is a scalar (distributive law)


Examples:

Let

A = [ 2 3
      · · ]

B = [ -1 4
      ·  · ]

C = [ 0 1
      · · ]

and the zero matrix is given by

O = [ 0 0
      0 0 ]

Then

a) A + B = [ 2+(-1)  3+4
             ·       ·   ]
         = [ 1 7
             · · ]

b) B + C = [ -1+0  4+1
             ·     ·   ]
         = [ -1 5
             ·  · ]

c) (A + B) + C = [ 1+0  7+1
                   ·    ·   ]
               = [ 1 8
                   · · ]

d) A + (B + C) = [ 2+(-1)  3+5
                   ·       ·   ]
               = [ 1 8
                   · · ]

e) A + B + C = [ 2+(-1)+0  3+4+1
                 ·         ·     ]
             = [ 1 8
                 · · ]

Therefore, (A + B) + C = A + (B + C) = A + B + C = [ 1 8
                                                     · · ]
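The four properties listed above are easy to verify numerically. The sketch below (assuming NumPy) uses small arbitrary 2 x 2 matrices of its own, not the partially shown matrices from the example, purely as an illustration.

```python
import numpy as np

# Arbitrary conformable matrices used only to illustrate the properties
A = np.array([[2, 3], [1, 0]])
B = np.array([[-1, 4], [5, 2]])
C = np.array([[0, 1], [3, -2]])
O = np.zeros((2, 2))            # zero matrix of the same order as A
k = 5                           # an arbitrary scalar

print(np.array_equal(A + B, B + A))                  # i)  commutativity
print(np.array_equal((A + B) + C, A + (B + C)))      # ii) associativity
print(np.array_equal(A + O, A))                      # iii) additive identity
print(np.array_equal(k * (A + B), k * A + k * B))    # iv) scalar distributes
```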
C. Subtraction of Matrices –A matrix can be subtracted from another matrix if both are of the
same order; otherwise, they are said to be non-conformable for subtraction. If they are
conformable for subtraction, the difference between the two matrices will be the matrix
obtained by subtracting each corresponding element.
Generally, if

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]

and

B = [ b11 b12 b13
      b21 b22 b23
      b31 b32 b33 ]

Then

A - B = [ a11-b11  a12-b12  a13-b13
          a21-b21  a22-b22  a23-b23
          a31-b31  a32-b32  a33-b33 ]
Example:

Let

A = [ 1 9
      · · ]

and

B = [ -2 4
      ·  · ]

then

A - B = [ 1-(-2)  9-4
          ·       ·   ]
      = [ 3 5
          · · ]

B - A = [ -2-1  4-9
          ·     ·   ]
      = [ -3 -5
          ·  ·  ]
D. Matrix Multiplication: Hopefully, you have found the matrix operations considered so far
in previous sections easy to understand. We now turn our attention to matrix multiplication.
There are two types of matrix multiplication: multiplication by scalar and multiplication by
matrix.
a) Scalar multiplication: in this type of multiplication we multiply the scalar by each
element of the given matrix.
In general, if k is any scalar and X is a matrix given by

X = [ x11 x12
      x21 x22 ]

then

kX = [ kx11 kx12
       kx21 kx22 ]

For example, find aB if a = 3 and B is given as follows:

B = [ 3 4 0
      1 2 3
      · · · ]

Then

aB = 3B = [ 9 12 0
            3  6 9
            ·  ·  · ]
b) Multiplication by matrix:

Two matrices A and B can be multiplied together to get AB if the number of columns in A is
equal to the number of rows in B. Matrices that fulfill this condition are said to be conformable
for multiplication. Otherwise, multiplication is impossible and the two matrices are said to be
non-conformable for multiplication.
If A is a matrix of order (m x n) and B is another matrix of order (n x p), then multiplying A to B
is possible because the number of columns of A is “n” which is the same as the number of rows
of B. The product matrix, AB, will be of (m x p) order. That is, the product matrix will have the
same number of rows as the first matrix and the same number of columns as the second matrix.
That the number of columns of A (n) is the same as the number of rows of B (n) confirms that AB exists, and therefore A and B are conformable for multiplication. AB will be a matrix of order m x p.
Examples:
A = [ 1 2
      3 4
      0 1 ]  (3 x 2)

and

B = [ 2 1 4
      3 0 5 ]  (2 x 3)

Then

A x B = [ (1x2)+(2x3)  (1x1)+(2x0)  (1x4)+(2x5)
          (3x2)+(4x3)  (3x1)+(4x0)  (3x4)+(4x5)
          (0x2)+(1x3)  (0x1)+(1x0)  (0x4)+(1x5) ]

      = [  8  1 14
          18  3 32
           3  0  5 ]  (3 x 3)
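The conformability rule and the product above can be checked with a few lines of NumPy (a sketch, assuming NumPy is available; the @ operator performs matrix multiplication).

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [0, 1]])           # 3 x 2
B = np.array([[2, 1, 4],
              [3, 0, 5]])        # 2 x 3

AB = A @ B                       # columns of A (2) match rows of B (2)
print(AB.shape)                  # (3, 3)
print(AB)
# [[ 8  1 14]
#  [18  3 32]
#  [ 3  0  5]]
```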

Properties of matrix multiplication: there are several properties of matrix multiplication.

1. The distributive law is valid for matrices; if A, B, and C are matrices, then A(B + C) = AB + AC and (B + C)A = BA + CA.

Example: Let

A = [ -2 5
      ·  · ]

B = [ 1 2
      · · ]

C = [ 8 2
      · · ]

a) B + C = [ 9 4
             · · ]

   A(B + C) = [ -8 32
                ·  ·  ]

   AB = [ 23 31
          ·  ·  ]

   AC = [ -31 1
          ·   · ]

   AB + AC = [ -8 32
               ·  ·  ]

Therefore, A (B + C) = AB + AC

2. If I is an identity matrix, then IA = AI = A.

Example: Let

A = [ 3 5
      2 6 ]

and the identity matrix is given by

I = [ 1 0
      0 1 ]

Then

IA = [ 1x3+0x2  1x5+0x6
       0x3+1x2  0x5+1x6 ]
   = [ 3 5
       2 6 ]

AI = [ 3x1+5x0  3x0+5x1
       2x1+6x0  2x0+6x1 ]
   = [ 3 5
       2 6 ]

Therefore, IA = AI = A
1.4. Determinant of a matrix

The determinant of matrix A is denoted by |A|.

Let A = [ a11 a12
          a21 a22 ]

Then |A| is known as a determinant of order two and its value is given as:

|A| = a11a22 - a12a21

E.g. A = [ 6 4
           7 9 ] ;   |A| = 6(9) - 4(7) = 54 - 28 = 26

Note that if one of the rows or columns of a matrix consists entirely of zeros, then the determinant will be zero.

Another property of determinants is that the value of the determinant is unchanged if a constant multiple of any row (or column) is added to another row (or column).
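Both properties can be illustrated numerically. The sketch below (assuming NumPy) computes the determinant of the 2 x 2 example and then adds a multiple of one row to the other to show that the determinant is unchanged.

```python
import numpy as np

A = np.array([[6, 4],
              [7, 9]], dtype=float)

print(np.linalg.det(A))            # approximately 26 = 6*9 - 4*7

# Add 3 times row 1 to row 2: the determinant is unchanged
B = A.copy()
B[1, :] = B[1, :] + 3 * B[0, :]
print(np.linalg.det(B))            # still approximately 26

# A matrix with a zero row has determinant zero
C = np.array([[6, 4],
              [0, 0]], dtype=float)
print(np.linalg.det(C))            # 0 (up to rounding)
```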

Before we can discuss the determinant of a matrix we need to introduce an additional concept known as
a cofactor. Corresponding to each element aij of a matrix A, there is a cofactor, Aij.

Minors: The minor of an element aij of a determinant A is denoted by Mij and is the determinant
obtained from A by deleting the row and the column where aij occurs. The minor Mij of an element in a
matrix is obtained by deleting the ith row and the jth column of the matrix.

Cofactor: The cofactor of an element aij with minor Mij is denoted by Cij and is defined as

Cij = Mij if i + j is even, and Cij = -Mij if i + j is odd.


Thus, cofactors are signed minors.

The cofactor Cij of an element in a matrix A is a signed minor. The following formula is used to find the cofactor:

|Cij| = (-1)^(i+j) |Mij|

where i and j refer to the ith row and jth column, respectively, and |Mij| is the minor of that element.

In the 2 x 2 case

A = [ a11 a12
      a21 a22 ]

we have M11 = a22, M12 = a21, M21 = a12, M22 = a11.

Also C11 = a22, C12 = -a21, C21 = -a12, C22 = a11.

For the more general case, consider the following example.

A 3 × 3 matrix has nine elements, so there are nine cofactors to be computed. The cofactor, Aij, is
defined to be the determinant of the 2 × 2 matrix obtained by deleting row i and column j of A, prefixed
by a ‘+’ or ‘−’ sign according to the following pattern:

+ − +
− + −
+ − +

For example, suppose we wish to calculate the minors and cofactors of the matrix given below:

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]

Each minor |Mij| is the determinant of the 2 x 2 matrix obtained from A by deleting row i and column j. The values of the |Mij| are:

M11 = a22a33 - a23a32,   M12 = a21a33 - a23a31,   M13 = a21a32 - a31a22
M21 = a12a33 - a32a13,   M22 = a11a33 - a31a13,   M23 = a11a32 - a31a12
M31 = a12a23 - a22a13,   M32 = a11a23 - a21a13,   M33 = a11a22 - a21a12

For example, suppose we wish to calculate A23, which is the cofactor associated with a23 in the above
matrix A.

The element a23 lies in the second row and third column. Consequently, we delete the second row and
third column to produce the 2 × 2 matrix.

[ a11 a12
  a31 a32 ]

(obtained by striking out the second row and third column of A)

The cofactor, A23, is the determinant of this 2 × 2 matrix prefixed by a ‘−’ sign because from the

Pattern:

+ − +
− + −
+ − +

We see that a23 is in a minus position. In other words,


A23 = - | a11 a12 |
        | a31 a32 |

    = -(a11a32 - a12a31)

    = -a11a32 + a12a31

Example 1: Find the |Cij| for

A = [ 2 3
      7 9 ]

To use the cofactor formula we need to find the minors |Mij|. These are

|M11| = 9,  |M12| = 7,  |M21| = 3  and  |M22| = 2

Using these elements in |Cij| = (-1)^(i+j) |Mij| yields

|C11| = 9,  |C12| = -7,  |C21| = -3,  |C22| = 2

Now if the matrix of cofactors is called C, where

C = [ C11 C12
      C21 C22 ]

then the cofactor matrix C for this example is

C = [  9 -7
      -3  2 ]

and its transpose is

Cᵗ = [  9 -3
       -7  2 ]
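A short sketch (assuming NumPy) that builds the matrices of minors and cofactors for this 2 x 2 example in exactly the way described above: delete row i and column j, take the determinant of what is left, and attach the sign (-1)^(i+j).

```python
import numpy as np

A = np.array([[2, 3],
              [7, 9]], dtype=float)

n = A.shape[0]
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # minor: determinant of A with row i and column j deleted
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

print(C)        # [[ 9. -7.]
                #  [-3.  2.]]
print(C.T)      # the transpose of the cofactor matrix (the adjoint)
```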

Determinants of a matrix
Now it is easy to find the determinant of a matrix. A determinant can be expanded as the sum of the

products of the elements in any one row or column and their respective cofactors.

Let the determinant of matrix A be denoted by |A|. For a 2 x 2 matrix

A = [ a11 a12
      a21 a22 ]

we have, as before, |A| = a11a22 - a12a21, the determinant of order two.

Let

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]

Then |A| is called a third order determinant. Expanding along the first row,

|A| = +a11 M11 - a12 M12 + a13 M13
    = a11(a22a33 - a32a23) - a12(a21a33 - a31a23) + a13(a21a32 - a31a22)

where M11, M12 and M13 are the minors defined above.
E.g. let

A = [  1  2 4
       0 -1 0
      -2  0 3 ]

Find |A|.

|A| = 1(-1x3 - 0x0) - 2(0x3 - (-2x0)) + 4(0x0 - (-2x-1))

    = -3 - 0 - 8

    = -11
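The cofactor expansion can be cross-checked with NumPy (a sketch, assuming NumPy is available).

```python
import numpy as np

A = np.array([[ 1,  2, 4],
              [ 0, -1, 0],
              [-2,  0, 3]], dtype=float)

# Expansion along the first row, exactly as in the text
det_by_cofactors = (  1 * ((-1) * 3 - 0 * 0)
                    - 2 * (0 * 3 - (-2) * 0)
                    + 4 * (0 * 0 - (-2) * (-1)))

print(det_by_cofactors)        # -11
print(np.linalg.det(A))        # approximately -11
```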

1.5. Inverse of a matrix

Consider, first, the division of two numbers. If we divide b into a, we can write this as a/b, where b ≠ 0. Alternatively, we could write 1/b = b⁻¹ as the reciprocal or inverse of b, and define division as the multiplication of a and b⁻¹: a/b = ab⁻¹.
b

This slightly more roundabout way of defining division is in fact a more useful one when dealing with matrices. Note that by the inverse of a number b we mean the number b⁻¹ that has the property

b·b⁻¹ = b⁻¹·b = 1

For example, the inverse of the number 2 is 1/2, since 2(1/2) = 1. This rather obvious fact is worth spelling out when we proceed to consider matrix division. The inverse matrix A⁻¹ of a square matrix A of order n is the matrix that satisfies the condition that:

A⁻¹·A = A·A⁻¹ = In

where In is the identity matrix of order n.


A⁻¹ is only defined for square matrices. However, not all square matrices have an inverse.

Obtaining the inverse matrix involves obtaining the elements of A⁻¹ as a solution to the equation:

A⁻¹·A = I

When we deal with real numbers in ordinary algebra, we know that for a real number a, the operation a(1/a) = 1 is valid if and only if a ≠ 0. In the case of matrices, A⁻¹ corresponds to (1/a) in simple algebra, but now, for A⁻¹ to exist, it is not sufficient simply to assume that A is different from the null matrix. Any matrix A for which A⁻¹ does not exist is known as a singular matrix. A matrix A for which A⁻¹ exists is known as a nonsingular matrix.

There are four steps in computing the inverse of any matrix:

1. The cofactor of each element of the matrix must be found


2. The matrix of cofactors must be transposed. The transpose of the cofactor matrix of A is called the adjoint of matrix A.
3. The determinant of the matrix must be found, and
4. The reciprocal of the determinant, which is a scalar, must be multiplied by the adjoint matrix:

A⁻¹ = (1/|A|) adjoint(A), where adjoint(A) = Cᵗ and C is the matrix of cofactors.

Example: find the inverse of

A = [ 1 3 1
      2 5 0
      4 0 7 ]

Find the cofactors first, then transpose them to find the adjoint matrix.

The minors of the matrix are
|M11| = 35, |M12| = 14, |M13| = -20, |M21| = 21, |M22| = 3,
|M23| = -12, |M31| = -5, |M32| = -2, |M33| = -1

From this the cofactors of the matrix are
|C11| = 35, |C12| = -14, |C13| = -20, |C21| = -21, |C22| = 3,
|C23| = 12, |C31| = -5, |C32| = 2, |C33| = -1

C = [ 35 -14 -20
     -21   3  12
      -5   2  -1 ]

Cᵗ = adjoint(A) = [ 35 -21 -5
                   -14   3  2
                   -20  12 -1 ]

|A| = a11|C11| + a12|C12| + a13|C13| = 1(35) + 3(-14) + 1(-20) = -27

A⁻¹ = (1/|A|) adjoint(A) = -(1/27) [ 35 -21 -5
                                    -14   3  2
                                    -20  12 -1 ]
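The four-step adjoint procedure can be written directly in NumPy and compared with the library inverse; the sketch below (assuming NumPy) uses the matrix from the example.

```python
import numpy as np

A = np.array([[1, 3, 1],
              [2, 5, 0],
              [4, 0, 7]], dtype=float)

# Steps 1 and 2: cofactor matrix, then its transpose (the adjoint)
n = A.shape[0]
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
adjoint = C.T

# Steps 3 and 4: determinant, then (1/|A|) times the adjoint
det_A = np.linalg.det(A)               # approximately -27
A_inv = adjoint / det_A

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(3)))      # True
```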

1.6. Partitioned matrix

A partitioned matrix contains submatrices as elements. The submatrices are obtained by partitioning the rows and columns of the original matrix.

Example:

A = [ 1 2  3 1
      2 1  0 1
      3 0 -1 2 ]  =  [ A11 A12
                       A21 A22 ]

where

A11 = [ 1 2 3
        2 1 0 ]

A12 = [ 1
        1 ]

A21 = [ 3 0 -1 ]

A22 = [ 2 ]
The rules for addition (subtraction) and multiplication of matrices apply directly to partitioned matrices, provided that the submatrices are of suitable order. Let A and B be two partitioned matrices, written as:

A = [ A11 A12
      A21 A22 ]

and

B = [ B11 B12
      B21 B22 ]

Then

A + B = [ A11+B11  A12+B12
          A21+B21  A22+B22 ]

provided that A and B are of the same overall dimension and that A11 and B11, A12 and B12, A21 and B21, and A22 and B22 are of the same order.
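Block addition is easy to illustrate: as long as the two matrices are partitioned the same way, adding block by block and reassembling gives the same result as adding the full matrices. The sketch below (assuming NumPy; the second matrix B is an arbitrary illustrative choice, not taken from the text) checks this for the partition used in the example.

```python
import numpy as np

A = np.array([[1, 2,  3, 1],
              [2, 1,  0, 1],
              [3, 0, -1, 2]], dtype=float)

# An arbitrary matrix of the same overall dimension, partitioned the same way
B = np.arange(12, dtype=float).reshape(3, 4)

# Partition both matrices after row 2 and after column 3
A11, A12, A21, A22 = A[:2, :3], A[:2, 3:], A[2:, :3], A[2:, 3:]
B11, B12, B21, B22 = B[:2, :3], B[:2, 3:], B[2:, :3], B[2:, 3:]

# Add block by block and reassemble
block_sum = np.block([[A11 + B11, A12 + B12],
                      [A21 + B21, A22 + B22]])

print(np.array_equal(block_sum, A + B))   # True
```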

1.7. Trace of a matrix

The trace is defined only for square matrices. The trace of a square matrix A is given by the sum of the elements of the main diagonal. In other words, if A is an n x n matrix, then the trace is defined as:

Trace(A) = a11 + a22 + a33 + ... + ann

The trace of a matrix, if defined, has some very attractive simplifying properties.

For two matrices A and B of dimension m x n and n x m respectively, we have that AB is m x m and BA is n x n, and

Trace(AB) = Trace(BA)

Example: For the matrices A and B below, verify that the Trace(AB) = Trace (BA)

A = [ 2 1 3
      2 1 4 ]

and

B = [ 3  1
      2 -1
      1  0 ]

In this case, A is a 2 x 3 and B is a 3 x 2 matrix. Therefore, AB is a 2 x 2 and BA is a 3 x 3 matrix.

AB = [ 11 1
       12 1 ]

Then, trace(AB) = 11 + 1 = 12

BA = [ 8 4 13
       2 1  2
       2 1  3 ]

And trace(BA) = 8 + 1 + 3 = 12

Therefore, Trace (AB) = Trace (BA).
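The property Trace(AB) = Trace(BA) from the example can be confirmed in a few lines (a sketch, assuming NumPy).

```python
import numpy as np

A = np.array([[2, 1, 3],
              [2, 1, 4]])         # 2 x 3
B = np.array([[3,  1],
              [2, -1],
              [1,  0]])           # 3 x 2

print(np.trace(A @ B))            # 12  (AB is 2 x 2)
print(np.trace(B @ A))            # 12  (BA is 3 x 3)
```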
