
Lilongwe University of Agriculture and

Natural Resources

Bunda Campus

Basic Sciences Department

MAT 32202: Elementary Linear Algebra and


Complex Numbers

Matrices and Linear Equations

Lecture Notes

Francisco Chamera

November 25, 2023


Matrices and Linear Equations, Francisco Chamera, LUANAR - Bunda 1

Contents

1 Matrices 2

1.1 Definitions and Operations . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.1 2 × 2 Determinants . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.2 Cofactor Method . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.3 Sarrus Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.3 Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.4 Row Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

1.4.1 Row Reduction and Echelon Form . . . . . . . . . . . . . . . . 13

1.4.2 Row Reduction and Inverses . . . . . . . . . . . . . . . . . . . 14

1.4.3 Row Reduction and Determinants . . . . . . . . . . . . . . . . 16

2 Systems of Linear Equations 21

2.1 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.2 The Equation Ax = b . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.3 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.4 Gauss-Jordan Eliminations . . . . . . . . . . . . . . . . . . . . . . . . 25

2.5 Applications of Linear Equations . . . . . . . . . . . . . . . . . . . . 28

3 Linear Combinations, Dependence and Independence 30

3.1 Linear Dependence and Independence . . . . . . . . . . . . . . . . . . 30

3.2 The rank of a Matrix and Existence of Solutions of Linear Equations 34

4 Introduction to Linear Transformations 38

4.1 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4.2 Matrices for Linear Transformations . . . . . . . . . . . . . . . . . . . 42



1 Matrices

1.1 Definitions and Operations

Definition 1.1. A matrix A = (aij) is a rectangular array of numbers, i.e., numbers arranged in rows and columns.

The order or size of a matrix with m rows and n columns is m × n. The general presentation of a matrix A with order m × n is

A = [ a11 a12 ... a1n ]
    [ a21 a22 ... a2n ]
    [ ...             ]
    [ am1 am2 ... amn ]

For example the matrix

B = [  2 −1  6 11  3 ]
    [  0  2 21 16 −7 ]
    [ −8 10  4 −3  9 ]

has order 3 × 5.

Definition 1.2. Two matrices A and B are equal if and only if they have the same
size (that is, the same number of rows and the same number of columns) and their
corresponding entries are equal. That is if aij = bij for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.

We sometimes denote the ij-th entry of a matrix A by (A)ij. This is taken to be the same thing as aij.

Definition 1.3. 1. A zero matrix is a matrix whose entries are all zeros.

2. A square matrix is a matrix with the same number of rows and columns.

3. In a square matrix, A = (aij ), of order n, the entries a11 , a22 , ..., ann are called
the diagonal entries and form the main diagonal (also called principal
diagonal) of A.
4. A square matrix A = (aij) with aij = 1 if i = j and aij = 0 if i ≠ j is called the identity matrix, denoted by In.

5. A square matrix A is said to be upper triangular if the entries beneath the


main diagonal are all zeros, i.e., aij = 0 if i > j.

6. A square matrix is lower triangular if the entries above the main diagonal
are all zeros, that is aij = 0 if i < j.

7. A square matrix A is said to be triangular if it is an upper or a lower trian-


gular matrix.

8. A matrix A = (aij) is called a diagonal matrix if A is both upper triangular and lower triangular, i.e., aij = 0 if i ≠ j.

Example 1.4.

I2 = [ 1 0; 0 1 ], I3 = [ 1 0 0; 0 1 0; 0 0 1 ] and the matrix A = [ 2 1 4; 0 3 −1; 0 0 −2 ] is an upper triangular matrix.

Definition 1.5. Let A and B be m × n matrices. We define addition of matrices by

(A + B)ij = (A)ij + (B)ij.

By Definition 1.5, the ij-th entry of (A + B) is the sum of the ij-th entry of A with
the ij-th entry of B.

Example 1.6.

If A = [ 1 2 4; 2 3 1; 5 0 3 ] and B = [ 2 −1 3; 2 4 2; 3 6 1 ] then A + B = [ 3 1 7; 4 7 3; 8 6 4 ].
Definition 1.7. Let A be an m × n matrix and t ∈ R, a scalar. We define the
scalar multiplication of matrices by

(tA)ij = t(A)ij .

By Definition 1.7, the ij-th entry of tA is t times the ij-th entry of A.

Example 1.8.

3 [ 1 −1; 2 8; −10 −5 ] = [ 3(1) 3(−1); 3(2) 3(8); 3(−10) 3(−5) ] = [ 3 −3; 6 24; −30 −15 ].

We write −A for the scalar product (−1)A and define A + (−1)B as matrix subtraction A − B.

Example 1.9.

If A = [ 1 2 4; 2 3 1; 5 0 3 ] and B = [ 2 −1 3; 2 4 2; 3 6 1 ] then A − B = [ −1 3 1; 0 −1 −1; 2 −6 2 ].
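The entry-wise definitions above are easy to express in code. Below is a minimal Python sketch using plain lists of lists (the function names are our own, not from any library), checked against Examples 1.6 and 1.9:

```python
def mat_add(A, B):
    # (A + B)_ij = (A)_ij + (B)_ij; A and B must have the same order.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(t, A):
    # (tA)_ij = t * (A)_ij.
    return [[t * a for a in row] for row in A]

def mat_sub(A, B):
    # A - B is defined as A + (-1)B.
    return mat_add(A, scalar_mul(-1, B))

# The matrices from Examples 1.6 and 1.9:
A = [[1, 2, 4], [2, 3, 1], [5, 0, 3]]
B = [[2, -1, 3], [2, 4, 2], [3, 6, 1]]
print(mat_add(A, B))  # [[3, 1, 7], [4, 7, 3], [8, 6, 4]]
print(mat_sub(A, B))  # [[-1, 3, 1], [0, -1, -1], [2, -6, 2]]
```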
Theorem 1.10. Let A, B and C be matrices of order m × n and let k, l ∈ R. Suppose further that O represents the m × n zero matrix. Then

1. A + B = B + A (commutativity)
2. (A + B) + C = A + (B + C) (associativity)
3. A + O = A
4. there is an m × n matrix A0 such that A + A0 = O
5. k(lA) = (kl)A
6. (k + l)A = kA + lA
7. k(A + B) = kA + kB
8. 0A = O

Proof. This is an easy consequence of ordinary addition, as matrix addition is simply


entry-wise addition and multiplication by scalar k is simply entry-wise multiplication
by the number k.
Definition 1.11. Suppose that A = (aij ) and B = (bij ) are m×n and n×p matrices
respectively. Then the matrix product AB is given by the m × p matrix AB = (qij )
where for every i = 1, ..., m and j = 1, ..., p we have
qij = Σ_{k=1}^{n} aik bkj = ai1 b1j + ai2 b2j + ... + ain bnj.

The product AB of two matrices A and B is found by multiplying rows of A by


columns of B. AB exists if and only if the number of columns of A equals the
number of rows of B.
Example 1.12.

If A = [ 1 −5; 2 2 ] and B = [ 1 2 4; −5 2 1 ] then

AB = [ 1 + 25   2 − 10   4 − 5 ] = [ 26 −8 −1 ]
     [ 2 − 10   4 + 4    8 + 2 ]   [ −8  8 10 ].
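Definition 1.11 can be sketched in the same plain-Python style (the function name is ours):

```python
def mat_mul(A, B):
    # q_ij = sum_k a_ik * b_kj; needs columns of A == rows of B.
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# The matrices from Example 1.12:
A = [[1, -5], [2, 2]]
B = [[1, 2, 4], [-5, 2, 1]]
print(mat_mul(A, B))  # [[26, -8, -1], [-8, 8, 10]]
```

Note that `mat_mul(B, A)` fails the size check, mirroring the remark after Example 1.12 that BA is not defined here.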

Note that in Example 1.12, while AB is defined, the product BA is not defined.
However, for square matrices A and B of the same order, both the product AB and
BA are defined, though the products are generally not equal. If AB = BA we say
that A and B are commutative.

Definition 1.13. Two square matrices A and B are said to commute if AB = BA.

Remark 1.14. If A is a square matrix of size n then AIn = In A = A.

Theorem 1.15. 1. Let A be an m × n matrix, B an n × p matrix and C a p × r


matrix. Then (AB)C = A(BC) (associativity)

2. Suppose that A is an m × n matrix, B is an n × p matrix, and that k ∈ R.


Then k(AB) = (kA)B = A(kB).

3. Suppose that A is an m × n matrix and B and C are n × p matrices. Then


A(B + C) = AB + AC. (distributive law)

4. Suppose that A and B are m × n matrices and C is an n × p matrix. Then


(A + B)C = AC + BC. (distributive law)

Definition 1.16. The transpose of a matrix A, denoted AT , is the matrix whose


rows are columns of A (and whose columns are rows of A). That is if A = (aij )
then AT = (aji ).

Example 1.17.

[ 1 2 4; −5 2 1 ]^T = [ 1 −5; 2 2; 4 1 ],  [ 4 7; 7 0 ]^T = [ 4 7; 7 0 ]  and  [ 7; 8; 9 ]^T = [ 7 8 9 ].
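A short sketch of the transpose in Python (lists of lists; names ours):

```python
def transpose(A):
    # Rows of A^T are the columns of A.
    return [list(col) for col in zip(*A)]

print(transpose([[1, 2, 4], [-5, 2, 1]]))  # [[1, -5], [2, 2], [4, 1]]

def is_symmetric(A):
    # A square matrix is symmetric exactly when A^T = A (Definition 1.19).
    return transpose(A) == A
```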

The following theorem gives properties of the transpose.

Theorem 1.18. Let A and B be matrices. Then

1. (A^T)^T = A

2. (AB)^T = B^T A^T and (A1 A2 ... Ak)^T = Ak^T ... A2^T A1^T

3. (A + B)^T = A^T + B^T

4. (rA)^T = r A^T

Proof. We prove the first two and leave the rest as exercise.

1. Let A = (aij ), AT = (bij ) and (AT )T = (cij ). Then the definition of transpose
gives

cij = bji = aij

and the result follows.

2. It is enough to show the first part, as the general claim follows by repeatedly applying the first claim with k = 2. This is just an explicit calculation:

((AB)^T)ij = (AB)ji = Σ_{k=1}^{n} Ajk Bki = Σ_{k=1}^{n} Bki Ajk = Σ_{k=1}^{n} (B^T)ik (A^T)kj = (B^T A^T)ij.

Definition 1.19. A square matrix A is said to be symmetric if AT = A. A square


matrix B is skew-symmetric if B T = −B.

Example 1.20.

Let A = [ 1 2 3; 2 4 −2; 3 −2 4 ] and B = [ 0 1 2; −1 0 −3; −2 3 0 ]. Then A is a symmetric matrix and B is a skew-symmetric matrix.

Proposition 1.21. For any square matrix A the matrices B = AAT and C = A+AT
are symmetric.

Proof.
B T = (AAT )T = (AT )T AT = AAT = B
and
C T = (A + AT )T = AT + (AT )T = AT + A = C.

Exercise 1.22.

1. Show that the product of two lower triangular matrices is a lower triangular
matrix. Show that the product of two upper triangular matrices is an upper
triangular matrix.

2. Let A and B be symmetric matrices. Show that AB is symmetric if and only


if AB = BA.

3. Show that the diagonal entries of a skew-symmetric matrix are zero.

4. Let A, B be skew-symmetric matrices with AB = BA. Is the matrix AB


symmetric or skew-symmetric?

5. Let A be a symmetric matrix of order n with A2 = 0. Is it necessarily true


that A = 0?
6. Show that for any square matrix A, R = (1/2)(A + A^T) is symmetric, S = (1/2)(A − A^T) is skew-symmetric, and A = R + S.
7. For a square matrix A of order n, we define trace of A, denoted by tr(A) as
tr(A) = a11 + a22 + ... + ann . Let A and B be square matrices of the same
order. Show that tr(A + B) = tr(A) + tr(B) and that tr(AB) = tr(BA).

1.2 Determinants

The determinant of a matrix is a scalar (number), obtained from the elements of a


matrix, by specified operations, which is characteristic of the matrix. The determi-
nants are defined for square matrices only. The determinant of a square matrix A
is denoted by det(A) or |A|.

1.2.1 2 × 2 Determinants
 
Definition 1.23. Let A be a 2 × 2 matrix, i.e., A = [ a11 a12; a21 a22 ]. Then the determinant of A is given by |A| = a11 a22 − a12 a21.
Example 1.24.

If A = [ 3 1; −2 3 ] then |A| = 3(3) − 1(−2) = 9 + 2 = 11.

1.2.2 Cofactor Method

Definition 1.25. Let A be an n × n matrix. The (i, j)-th minor, denoted Aij , is
the determinant of the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th
row and the j-th column.

Definition 1.26. Let A be an n × n matrix. The (i, j)-th cofactor, denoted Cij, is defined in terms of the minor by

Cij = (−1)^(i+j) Aij.

Example 1.27.

Let A = [ 1 2 3; 4 5 6; 7 8 9 ]. Then A23 = | 1 2; 7 8 | = 8 − 14 = −6 and C23 = (−1)^(2+3)(−6) = 6.

The following are steps for finding the determinant of a matrix using cofactors.

1. Choose any row or column.


2. Multiply each element in the chosen row or column by its cofactor.
3. Add the products.

This method is called cofactor method and is sometimes referred to as expanding


along a row or column.
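The three steps above, applied recursively (here always expanding along the first row), give a complete if inefficient determinant routine. A minimal Python sketch, with a function name of our own choosing:

```python
def det(A):
    # Determinant by cofactor (Laplace) expansion along the first row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

Applied to the matrices of Examples 1.28 and 1.29 below, this returns 45 and 196 respectively.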
Example 1.28.

Compute the determinant of B = [ 5 −2 2; 0 3 −3; 2 −4 7 ].

Solution

1. We choose the first row.

2. B11 = | 3 −3; −4 7 | = 21 − 12 = 9 and C11 = 9.
   B12 = | 0 −3; 2 7 | = 0 − (−6) = 6 and C12 = (−1)^(1+2) × 6 = −6.
   B13 = | 0 3; 2 −4 | = 0 − 6 = −6 and C13 = (−1)^(1+3)(−6) = −6.

3. |B| = 5(9) + (−2)(−6) + 2(−6) = 45 + 12 − 12 = 45.
Example 1.29.

Find |D| given that D = [ 2 4 2 1; 0 3 5 3; 0 5 7 −4; 0 −7 −7 0 ].

Solution

Expanding along the first column gives

|D| = 2C11 = 2 | 3 5 3; 5 7 −4; −7 −7 0 | = 2 × 98 = 196.

1.2.3 Sarrus Rule

To calculate

|A| = | a11 a12 a13; a21 a22 a23; a31 a32 a33 |,

write the first two columns again on the right side of the determinant, forming a 3 × 5 array

[ a11 a12 a13 a11 a12 ]
[ a21 a22 a23 a21 a22 ]
[ a31 a32 a33 a31 a32 ].

Then sum the products on all lines parallel to the main diagonal and subtract the products on the lines parallel to the second diagonal (see Figure 1).

Figure 1: Sarrus Rule

|A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a11 a23 a32 − a12 a21 a33 .
Example 1.30.

Find | 2 0 2; 0 1 0; −1 0 1 |.

Solution

The corresponding 3 × 5 array is

[  2 0 2  2 0 ]
[  0 1 0  0 1 ]
[ −1 0 1 −1 0 ].

Hence the determinant is

2(1)(1) + 0(0)(−1) + 2(0)(0) − (−1)(1)(2) − 0(0)(2) − 1(0)(0) = 2 + 2 = 4.

Definition 1.31. A square matrix A is called singular if |A| = 0 and non-singular if |A| ≠ 0.

Example 1.32.

The matrix A = [ 3 4; 6 8 ] is singular since |A| = 0.
Theorem 1.33. Let A and B be square matrices of size n. Then

1. det(AB) = det(A)det(B)

2. det(AT ) = det(A)

Proof. Exercise.

Corollary 1.34. Let A and B be square matrices of size n. Then


det(BA) = det(AB).

Proof. By Theorem 1.33, det(AB) = det(A)det(B) = det(B)det(A) = det(BA).

1.3 Inverses

Definition 1.35. A square matrix A of size n is said to be invertible if there exists


a unique matrix B of the same size such that

AB = BA = In .

The matrix B is called the inverse of A and is denoted by

B = A−1 .
 
Definition 1.36. The inverse of a 2 × 2 matrix A = [ a11 a12; a21 a22 ] is given by

A^{-1} = (1/|A|) [ a22 −a12; −a21 a11 ] = (1/(a11 a22 − a12 a21)) [ a22 −a12; −a21 a11 ]

provided that |A| ≠ 0.

Example 1.37.

Find the inverse of the matrix A = [ 3 1; 4 2 ].

Solution

A^{-1} = (1/(3(2) − 1(4))) [ 2 −1; −4 3 ] = (1/2) [ 2 −1; −4 3 ] = [ 1 −1/2; −2 3/2 ].
To find the inverse of a 3 × 3 matrix A using the cofactor method we follow these steps:

1. Find |A|, the determinant of A. If |A| = 0, then the inverse of A does not exist, so stop. Otherwise (if |A| ≠ 0) proceed to find the inverse using the steps below.

2. Find the cofactor for each entry of A.

3. Replace each entry in A by its cofactor to form the matrix of cofactors.

4. Find the adjoint (also called adjugate) of A by taking the transpose of the matrix of cofactors.

5. Find A^{-1} = (1/|A|) Adj(A), where Adj(A) is the adjoint of A.
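The five steps can be sketched in Python (exact arithmetic via the standard-library `Fraction`; function names ours):

```python
from fractions import Fraction

def det(A):
    # Cofactor expansion along the first row.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def inverse(A):
    # Steps 1-5 above: A^{-1} = (1/|A|) Adj(A).
    n, d = len(A), det(A)
    if d == 0:
        raise ValueError("|A| = 0: the inverse does not exist")
    # Steps 2-3: cofactor C_ij = (-1)^(i+j) * minor_ij.
    C = [[(-1) ** (i + j)
          * det([row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i])
          for j in range(n)] for i in range(n)]
    # Step 4: adjoint = transpose of the cofactor matrix; step 5: divide by |A|.
    return [[Fraction(C[j][i], d) for j in range(n)] for i in range(n)]
```

On the matrix of Example 1.38 below, `inverse` reproduces the adjugate divided by |A| = 1.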
Example 1.38.

Find the inverse of A = [ 7 2 1; 0 3 −1; −3 4 −2 ].

Solution

Please confirm that |A| = 1 and that Adj(A) = [ −2 8 −5; 3 −11 7; 9 −34 21 ].

Hence

A^{-1} = (1/|A|) Adj(A) = (1/1) [ −2 8 −5; 3 −11 7; 9 −34 21 ] = [ −2 8 −5; 3 −11 7; 9 −34 21 ].
Theorem 1.39. Suppose that A and B are two invertible n × n matrices. Then

1. (A−1 )−1 = A
2. |A^{-1}| = 1/|A|
3. (AT )−1 = (A−1 )T

4. (AB)−1 = B −1 A−1 .

5. if AB = AC where C is an n × n matrix, then B = C.



Proof. 1. The result follows from Definition 1.35.

2. |A| × |A−1 | = |AA−1 | = |In | = 1 and the result follows.

3. Take transposes of all three sides of (A−1 )A = A(A−1 ) = In :

((A−1 )A)T = (A(A−1 ))T = InT ⇒ AT (A−1 )T = (A−1 )T AT = In .

Hence (A−1 )T is the inverse of AT .

4. Using the associative law for matrix multiplication repeatedly gives:

(B −1 A−1 )AB = B −1 (A−1 A)B = B −1 (In )B = B −1 (In B) = B −1 B = In

and

(AB)(B −1 A−1 ) = A(BB −1 )A−1 = A(In )A−1 = (AIn )A−1 = AA−1 = In .

5. AB = AC ⇒ A−1 AB = A−1 AC ⇒ B = C.

Corollary 1.40. For any invertible matrix A, det(ABA−1 ) = det(B).

Proof. By Theorem 1.33, det(ABA−1 ) = det(A)det(B)det(A−1 ) = det(B)det(A)det(A−1 ).


Theorem 1.39 gives det(A)det(A−1 ) = 1, so det(ABA−1 ) = det(B).

1.4 Row Reduction

Given a matrix A, the three elementary row operations are

1. Interchange rows.

2. Multiply a row by a non-zero scalar.

3. Replace a row by its sum with a scalar multiple of another row.

Definition 1.41. Two matrices A and B are said to be row equivalent if B can
be obtained by applying a sequence of elementary row operations to A.

Definition 1.42. A matrix is called an elementary matrix if it is obtained by


performing an elementary row operation on an identity matrix.

1.4.1 Row Reduction and Echelon Form

Definition 1.43. A leading entry of a matrix A is the first nonzero entry in a row.

Definition 1.44. A matrix is in row echelon form (REF) if these hold:

1. In any nonzero row, the leading (first nonzero) entry is 1.

2. If row k has a leading entry, then either the leading entry in row k + 1 is to
the right of the one for row k or row k + 1 is a zero row.

3. All rows with only zeros for entries, if available, are at the bottom.

Example 1.45.
 
1 1 2 −1 3
The matrix 0 0 1 0 −4 is in row echelon form.
0 0 0 0 0

Definition 1.46. A matrix is in reduced row echelon form (RREF) if these


hold:

1. The matrix is in row echelon form.

2. The leading entry in a row is the only nonzero entry in its column. That is,
any column containing a leading entry has zeros in all other positions.

Example 1.47.
 
1 0 0 7
The matrix 0 1 0 4 is in reduced row echelon form.
0 0 1 3
Example 1.48.
 
Put the matrix [ 3 −2 4 7; 2 1 0 −3; 2 8 −8 2 ] in row echelon form and then in reduced row echelon form.

Solution

R1 → R1 − R2, R3 → R3 − R2:
[ 1 −3  4 10 ]
[ 2  1  0 −3 ]
[ 0  7 −8  5 ]

R2 → R2 − 2R1:
[ 1 −3  4  10 ]
[ 0  7 −8 −23 ]
[ 0  7 −8   5 ]

R3 → R3 − R2, R2 → (1/7)R2:
[ 1 −3  4     10    ]
[ 0  1 −8/7 −23/7 ]
[ 0  0  0     28    ]

R3 → (1/28)R3:
[ 1 −3  4     10    ]
[ 0  1 −8/7 −23/7 ]
[ 0  0  0     1     ]

The last matrix is in row echelon form.

R1 → R1 + 3R2:
[ 1  0  4/7    1/7  ]
[ 0  1 −8/7 −23/7 ]
[ 0  0  0      1    ]

R2 → R2 + (23/7)R3, R1 → R1 − (1/7)R3:
[ 1  0  4/7  0 ]
[ 0  1 −8/7  0 ]
[ 0  0  0    1 ]

The last matrix is in reduced row echelon form.


Remark 1.49. A matrix may have different row echelon forms but its reduced row
echelon form is unique. This means every matrix is row equivalent to one and only
one matrix in reduced row echelon form.
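The reduction above can be automated. A minimal Python sketch of Gauss-Jordan reduction to RREF (`Fraction` keeps the arithmetic exact; the function name is ours):

```python
from fractions import Fraction

def rref(A):
    # Gauss-Jordan reduction to the (unique) reduced row echelon form.
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivot = 0
    for j in range(cols):
        # Find a row at or below `pivot` with a nonzero entry in column j.
        p = next((i for i in range(pivot, rows) if M[i][j] != 0), None)
        if p is None:
            continue                      # no leading entry in this column
        M[pivot], M[p] = M[p], M[pivot]   # row interchange
        M[pivot] = [x / M[pivot][j] for x in M[pivot]]  # scale to leading 1
        for i in range(rows):             # zero out the rest of column j
            if i != pivot and M[i][j] != 0:
                M[i] = [a - M[i][j] * b for a, b in zip(M[i], M[pivot])]
        pivot += 1
    return M
```

Run on the matrix of Example 1.48, it returns the same reduced row echelon form obtained by hand there.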

1.4.2 Row Reduction and Inverses

Note that performing an elementary row operation on an identity matrix produces an


elementary matrix corresponding to that elementary row operation. Any elementary
row operation is equivalent to left multiplying by the corresponding elementary
matrix.

Suppose A is an n × n invertible matrix. Then, its reduced row echelon form is the
identity matrix. In other words, by a series of successive row operations, the matrix
A is reduced to In . Since each row operation is equivalent to left multiplication by
an elementary matrix, there exist elementary matrices E1 , E2 , ..., Ek such that

(Ek ...E2 E1 )A = In .

By definition this implies

A−1 = Ek ...E2 E1 = Ek ...E2 E1 (In ).

The last equation says that the exact same row operations that reduce A to the
identity In also transforms the identity matrix In to A−1 .

Therefore, to find the inverse of A, we first create an n × (2n) matrix by adding the identity to the right of A: [A|In]. We then perform row operations until the first n columns form the identity matrix. When the first n columns form the identity matrix, the remaining columns form the inverse A^{-1}:

[A|In ] −→ [In |A−1 ].
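A minimal Python sketch of this [A|In] → [In|A^{-1}] procedure (names ours; `Fraction` for exact arithmetic):

```python
from fractions import Fraction

def inverse_by_row_reduction(A):
    # Reduce [A | I_n] to [I_n | A^{-1}] with elementary row operations.
    n = len(A)
    M = [[Fraction(x) for x in row]
         + [Fraction(1) if j == i else Fraction(0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        p = next((i for i in range(col, n) if M[i][col] != 0), None)
        if p is None:
            raise ValueError("matrix is not invertible")
        M[col], M[p] = M[p], M[col]                 # swap in a pivot row
        M[col] = [x / M[col][col] for x in M[col]]  # scale pivot to 1
        for i in range(n):                          # clear the column
            if i != col and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[col])]
    return [row[n:] for row in M]                   # right-hand block
```

On the matrices of Examples 1.50 and 1.51 below, this reproduces the inverses found by hand.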

Example 1.50.

Use row reduction to find the inverse of A = [ 2 −1; 1 −1 ].

Solution

Note that |A| = −1, so A is invertible.

We set up the augmented matrix

[ 2 −1 | 1 0 ]
[ 1 −1 | 0 1 ].

R1 ↔ R2:
[ 1 −1 | 0 1 ]
[ 2 −1 | 1 0 ]

R2 → R2 − 2R1:
[ 1 −1 | 0  1 ]
[ 0  1 | 1 −2 ]

R1 → R1 + R2:
[ 1 0 | 1 −1 ]
[ 0 1 | 1 −2 ]

Notice that the left hand part is now the identity. The right hand part is the inverse. Hence

A^{-1} = [ 1 −1; 1 −2 ].
Example 1.51.

Use row reduction to find the inverse of B = [ 3 4 −1; 1 −1 1; −1 2 3 ].

Solution

We augment B with the 3 × 3 identity matrix:

[  3  4 −1 | 1 0 0 ]
[  1 −1  1 | 0 1 0 ]
[ −1  2  3 | 0 0 1 ]

R1 ↔ R2:
[  1 −1  1 | 0 1 0 ]
[  3  4 −1 | 1 0 0 ]
[ −1  2  3 | 0 0 1 ]

R2 → R2 − 3R1, R3 → R3 + R1:
[ 1 −1  1 | 0  1 0 ]
[ 0  7 −4 | 1 −3 0 ]
[ 0  1  4 | 0  1 1 ]

R3 ↔ R2:
[ 1 −1  1 | 0  1 0 ]
[ 0  1  4 | 0  1 1 ]
[ 0  7 −4 | 1 −3 0 ]

R1 → R1 + R2, R3 → R3 − 7R2:
[ 1 0   5 | 0   2  1 ]
[ 0 1   4 | 0   1  1 ]
[ 0 0 −32 | 1 −10 −7 ]

R3 → −(1/32)R3:
[ 1 0 5 |  0     2     1    ]
[ 0 1 4 |  0     1     1    ]
[ 0 0 1 | −1/32 10/32  7/32 ]

R1 → R1 − 5R3, R2 → R2 − 4R3:
[ 1 0 0 |  5/32 14/32 −3/32 ]
[ 0 1 0 |  4/32 −8/32  4/32 ]
[ 0 0 1 | −1/32 10/32  7/32 ]

Since [B|I3] reduces to [I3|B^{-1}], we have

B^{-1} = (1/32) [ 5 14 −3; 4 −8 4; −1 10 7 ].

1.4.3 Row Reduction and Determinants

Theorem 1.52. Let A be an n × n square matrix.

1. If two rows of A are interchanged to produce B, then |B| = −|A|.

2. If one row of A is multiplied by k to produce B, then |B| = k|A|.

Proof. 1. First suppose that the two interchanged rows are consecutive:

A = [ *  *  ... *  ]        B = [ *  *  ... *  ]
    [ ...          ]            [ ...          ]
    [ v1 v2 ... vn ]            [ w1 w2 ... wn ]
    [ w1 w2 ... wn ]            [ v1 v2 ... vn ]
    [ ...          ]            [ ...          ]
    [ *  *  ... *  ]            [ *  *  ... *  ]

If both |A| and |B| are computed by expanding along the row [w1 w2 ... wn], then all the cofactors are the same except that the signs are flipped. Therefore |B| = −|A|.

Now suppose that the two interchanged rows i < i′ are not consecutive.

A = [ ...          ]        B = [ ...          ]
    [ v1 v2 ... vn ]            [ w1 w2 ... wn ]
    [ ...          ]            [ ...          ]
    [ w1 w2 ... wn ]            [ v1 v2 ... vn ]
    [ ...          ]            [ ...          ]

Here [v1 v2 ... vn] is row i and [w1 w2 ... wn] is row i′. We can transform A into B by performing an odd number of interchanges of consecutive rows. First we move [v1 v2 ... vn] down from row i to row i′ one step at a time. This takes i′ − i interchanges of consecutive rows. Afterwards, [v1 v2 ... vn] is in the right place, while [w1 w2 ... wn] is in row i′ − 1. Therefore we must move [w1 w2 ... wn] up from row i′ − 1 to row i, which takes i′ − i − 1 interchanges of consecutive rows. The total number of interchanges is

(i′ − i) + (i′ − i − 1) = 2(i′ − i) − 1,

which is an odd number. Therefore

|B| = (−1)^(odd number) |A| = −|A|.

2. Suppose we choose to compute the determinant of A by expanding along the i-th row [ai1 ai2 ... ain]. Then

|A| = ai1 Ci1 + ai2 Ci2 + ... + ain Cin.

Let B be the matrix obtained by multiplying the i-th row of A by k. Then the i-th row of B is [kai1 kai2 ... kain] and

|B| = kai1 Ci1 + kai2 Ci2 + ... + kain Cin = k(ai1 Ci1 + ai2 Ci2 + ... + ain Cin) = k|A|.

Lemma 1.53. Let A be an n × n matrix.

1. If A has a row of zeros, then |A| = 0.


2. If two rows of A are the same, then |A| = 0.
3. If one row of A is a scalar multiple of another row, then |A| = 0.

Proof. 1. By part 2 of Theorem 1.52, if A has a whole row of 0’s we can multiply
this row by 0 to obtain B which is in fact equal to A. So we have |A| = 0|A|
giving |A| = 0.

2. By part 1 of Theorem 1.52, exchanging the two identical rows gives the original
matrix, but its determinant is negated. The only scalar which is its own
negation is 0. Therefore, the determinant of the matrix is 0.
3. This follows from part 2 of Theorem 1.52 and part 2 of this Lemma.

Theorem 1.54. Let A be an n × n square matrix. If a multiple of one row of A is


added to another row to produce a matrix B, then |B| = |A|.

Proof. Suppose that [w1 w2 ... wn] (in A) and [w1 + k·v1 ... wn + k·vn] (in B) are in row i. Then

|A| = w1 Ci1 + w2 Ci2 + ... + wn Cin

and

|B| = (w1 + k·v1)Ci1 + (w2 + k·v2)Ci2 + ... + (wn + k·vn)Cin
    = (w1 Ci1 + w2 Ci2 + ... + wn Cin) + k·(v1 Ci1 + v2 Ci2 + ... + vn Cin).

This means that

|B| = det(A) + k · det [ ...          ]
                       [ v1 v2 ... vn ]
                       [ ...          ]
                       [ v1 v2 ... vn ]
                       [ ...          ]

The last matrix has two rows which are the same, so its determinant is zero by part 2 of Lemma 1.53. We conclude that

|B| = |A| + k · 0 = |A|.

Theorem 1.55. Let A be an n × n upper or lower triangular or diagonal matrix.


Then the determinant of A is the product of entries in the main diagonal, i.e.,

|A| = a11 a22 a33 ...ann .

Proof. If we use cofactor expansion along the first row or first column, the only term
in the expansion that is not zero is the first, and that term is the product of the first
entry and its cofactor matrix. By induction the cofactor matrix has a determinant
which is the product of its diagonal entries.
Corollary 1.56. The determinant of an identity matrix is equal to 1, i.e., |In | = 1.

Proof. The result follows from Theorem 1.55 and definition for identity matrix.

Algorithm for Calculating Determinant

Let A be an n × n matrix. To find |A| the determinant of A,

1. use elementary row operations to obtain a matrix B from A whose determinant


is easy to calculate. (B need not be in row echelon form). Calculate |B| and
then obtain |A| from |B| using steps 2 and 3 below.

2. add up the number of times you performed a row swap. If this number is even,
then do nothing. If this number is odd, then multiply |B| by -1.

3. let r1, ..., rm be a list of all the scalars used in an operation of the type "multiply a row by a non-zero scalar". Then take your result from step (2) and multiply it by (1/r1)(1/r2)...(1/rm). Note that it does not matter in what order the row multiplications took place, or which rows were multiplied.
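A Python sketch of this algorithm (names ours). This version never multiplies a row by a scalar, so the bookkeeping in step 3 is trivially empty and only the swap count of step 2 matters; after elimination the matrix is triangular and Theorem 1.55 applies:

```python
from fractions import Fraction

def det_by_elimination(A):
    # Forward elimination, tracking row swaps (each swap negates |A|).
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for col in range(n):
        p = next((i for i in range(col, n) if M[i][col] != 0), None)
        if p is None:
            return Fraction(0)            # no pivot: determinant is 0
        if p != col:
            M[col], M[p] = M[p], M[col]
            sign = -sign                  # Theorem 1.52: a swap negates |A|
        for i in range(col + 1, n):       # Theorem 1.54: adding a multiple
            f = M[i][col] / M[col][col]   # of one row to another keeps |A|
            M[i] = [a - f * b for a, b in zip(M[i], M[col])]
    result = Fraction(sign)
    for i in range(n):                    # triangular: product of the diagonal
        result *= M[i][i]
    return result
```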

Example 1.57.

Use elementary row operations to find the determinant of the matrix

A = [  5 −1 −3 ]
    [ −2  2  3 ]
    [  4  8  3 ].

Solution

R2 → R2 + R1, R3 → R3 + R1:
[ 5 −1 −3 ]
[ 3  1  0 ] = B.
[ 9  7  0 ]

The determinant of B can easily be found by expanding along the third column:

|B| = (−1)^(1+3)(−3) | 3 1; 9 7 | = −3(21 − 9) = −36.

Since the row operations used do not change the determinant,

|A| = |B| = −36.

Example 1.58.

Use elementary row operations to find the determinant of the matrix

A = [  0 5 −2 −4 ]
    [  2 4 −2  8 ]
    [ −3 4 −1  1 ]
    [  5 5 −8  9 ].

Solution

R1 ↔ R2:
[  2 4 −2  8 ]
[  0 5 −2 −4 ]
[ −3 4 −1  1 ]
[  5 5 −8  9 ]

R1 → (1/2)R1:
[  1 2 −1  4 ]
[  0 5 −2 −4 ]
[ −3 4 −1  1 ]
[  5 5 −8  9 ]

R3 → R3 + 3R1, R4 → R4 − 5R1:
[ 1  2 −1   4 ]
[ 0  5 −2  −4 ]
[ 0 10 −4  13 ]
[ 0 −5 −3 −11 ]

R3 → R3 − 2R2, R4 → R4 + R2:
[ 1 2 −1   4 ]
[ 0 5 −2  −4 ]
[ 0 0  0  21 ]
[ 0 0 −5 −15 ]

R3 ↔ R4:
[ 1 2 −1   4 ]
[ 0 5 −2  −4 ]
[ 0 0 −5 −15 ]
[ 0 0  0  21 ] = B.

Since B is an upper triangular matrix,

|B| = 1(5)(−5)(21) = −525.

We have performed two row swaps and the only scalar multiplied by a row is 1/2. Hence

|A| = (1/(1/2))|B| = 2|B| = 2 × (−525) = −1050.
Exercise 1.59.

1. Use elementary row operations to find determinant of each matrix.


 
  2 4 8 8 4  
2 4 0 4   2 1 0 1
0 5 1 0 0 −2 −4 1 3 0 1
 0 1 1
 1 2
(a) 
  (b) 1 4 8  (c)  0 0 3 0 −3 (d) 2 3
 
2 9 0 4 1 2
1 6 15 0 0 1 1 0
1 2 0 1 1 1 2 1
0 0 1 0 1

2 Systems of Linear Equations

2.1 Linear Equations

The first problem of linear algebra is to solve a system of m linear equations in n


unknowns.

Definition 2.1. A system of equations is linear if it can be written

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm        (1)

where a11, ..., amn are numbers and x1, x2, ..., xn are unknowns.

To solve the system of equations in (1) means finding all the n-tuples of scalars (x̄1, x̄2, ..., x̄n) that satisfy the system when the constants x̄j are substituted for the unknowns xj, 1 ≤ j ≤ n. The solutions must satisfy each equation in the system.

In the case of more than one solution, we use upper indices to indicate constant values. For example x1^(1) and x1^(2) denote two different values for the unknown x1.

Definition 2.2. If all the right hand constants bi, 1 ≤ i ≤ m, are equal to 0, then the system is homogeneous. Otherwise it is inhomogeneous. If you set all the constants bi in an inhomogeneous system (1) to zero, you get the homogeneous system associated with the inhomogeneous one.

Theorem 2.3. If (x1^(1), x2^(1), ..., xn^(1)) and (x1^(2), x2^(2), ..., xn^(2)) are solutions of the inhomogeneous system (1), then their difference

(x1^(1) − x1^(2), x2^(1) − x2^(2), ..., xn^(1) − xn^(2))

is a solution of the associated homogeneous system.

Proof. Just subtract the corresponding equations. The details are left as an exercise.

Recall that a system of linear equations may or may not have a solution. If the
solution exists it may or may not be unique.

Definition 2.4. A system of linear equations that does not have any solutions is inconsistent. A system with at least one solution is consistent.

Corollary 2.5. A consistent inhomogeneous system has exactly one solution if and
only if the corresponding homogeneous system has only one solution, which must be
the trivial solution (trivial solution means all the variables are zero).

Proof. This is a direct consequence of Theorem 2.3 and the existence of the trivial
solution for homogeneous systems of equations.

2.2 The Equation Ax = b

The system (1) can be expressed in the form

Ax = b

where

A = [ a11 a12 ... a1n ]
    [ a21 a22 ... a2n ]
    [ ...             ]
    [ am1 am2 ... amn ]

is the coefficient matrix, x = [x1 x2 ... xn]^T and b = [b1 b2 ... bm]^T.
Theorem 2.6. Let A be an invertible matrix. Then the system of linear equations

Ax = b

has a unique solution


x = A−1 b.

Proof. Observe that


A(A−1 b) = (AA−1 )b = Ib = b.
Thus A−1 b is a solution. This solution is unique since if x0 is any solution then

Ax0 = b

giving
A−1 (Ax0 ) = A−1 b
and so
x0 = A−1 b.

Example 2.7.

Solve the following system of linear equations:

5x1 + 15x2 + 56x3 = 35
−4x1 − 11x2 − 41x3 = −26
−x1 − 3x2 − 11x3 = −7.

Solution

Writing the system as Ax = b we have

[  5  15  56 ] [ x1 ]   [  35 ]
[ −4 −11 −41 ] [ x2 ] = [ −26 ]
[ −1  −3 −11 ] [ x3 ]   [  −7 ].

First, we will find the inverse of the coefficient matrix by augmenting with the identity.

[  5  15  56 | 1 0 0 ]
[ −4 −11 −41 | 0 1 0 ]
[ −1  −3 −11 | 0 0 1 ]

R1 → (1/5)R1:
[  1   3  56/5 | 1/5 0 0 ]
[ −4 −11 −41   | 0   1 0 ]
[ −1  −3 −11   | 0   0 1 ]

R2 → R2 + 4R1, R3 → R3 + R1:
[ 1 3 56/5 | 1/5 0 0 ]
[ 0 1 19/5 | 4/5 1 0 ]
[ 0 0 1/5  | 1/5 0 1 ]

R3 → 5R3, R1 → R1 − 3R2:
[ 1 0 −1/5 | −11/5 −3 0 ]
[ 0 1 19/5 |  4/5   1 0 ]
[ 0 0 1    |  1     0 5 ]

R2 → R2 − (19/5)R3, R1 → R1 + (1/5)R3:
[ 1 0 0 | −2 −3   1 ]
[ 0 1 0 | −3  1 −19 ]
[ 0 0 1 |  1  0   5 ]

This gives

A^{-1} = [ −2 −3 1; −3 1 −19; 1 0 5 ].

So

x = A^{-1} b = [ −2 −3 1; −3 1 −19; 1 0 5 ] [ 35; −26; −7 ] = [ 1; 2; 0 ].

Hence x1 = 1, x2 = 2 and x3 = 0.
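The same answer can be computed mechanically. Below is a Python sketch that solves Ax = b by reducing the augmented matrix [A|b], which for invertible A is equivalent to forming x = A^{-1}b (function name ours; the code assumes A is invertible):

```python
from fractions import Fraction

def solve(A, b):
    # Solve Ax = b for invertible A by reducing the augmented matrix [A | b].
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])]
         for i, row in enumerate(A)]
    for col in range(n):
        p = next(i for i in range(col, n) if M[i][col] != 0)  # pivot row
        M[col], M[p] = M[p], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for i in range(n):
            if i != col and M[i][col] != 0:
                M[i] = [a - M[i][col] * v for a, v in zip(M[i], M[col])]
    return [row[n] for row in M]   # the last column is the solution x
```

For the system of Example 2.7, `solve([[5, 15, 56], [-4, -11, -41], [-1, -3, -11]], [35, -26, -7])` returns the solution (1, 2, 0).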

2.3 Cramer’s Rule

Theorem 2.8 (Cramer's Rule). Let A be an invertible n × n matrix. The solutions xk to the system Ax = b are given by

xk = |Ak| / |A|

where Ak is the matrix obtained from A by replacing the k-th column of A by b.

Proof. Since A is invertible, the system Ax = b has a unique solution

         1              1   [ A11 A21 ... An1 ]
    x = --- Adj(A).b = ---  [  .   .       .  ] b
        |A|            |A|  [ A1n A2n ... Ann ]

where . denotes matrix multiplication. By the formula for matrix multiplication,
the k-th unknown, xk, can be written as

    xk = (1/|A|) Σ(i=1 to n) Aik bi = (1/|A|) det[c1 ... ck−1 b ck+1 ... cn]

where ci denotes the i-th column of A. The second equality follows from the cofactor
expansion of the matrix

    [c1 ... ck−1 b ck+1 ... cn]

along the k-th column.
Example 2.9.

Solve the system

    2x1 + 3x2 = −5
    2x1 − 3x2 = 13.

Solution

    D = | 2  3 | = 2(−3) − 2(3) = −6 − 6 = −12.
        | 2 −3 |

    Dx1 = | −5  3 | = −24   and   Dx2 = | 2 −5 | = 36.
          | 13 −3 |                     | 2 13 |

Hence x1 = Dx1 / D = −24 / −12 = 2 and x2 = Dx2 / D = 36 / −12 = −3.
Example 2.10.

Solve the system

    x1 + x2 − x3 = 6
    3x1 − 2x2 + x3 = −5
    x1 + 3x2 − 2x3 = 14.

Solution

        | 1  1 −1 |              |  6  1 −1 |              | 1  6 −1 |
    D = | 3 −2  1 | = −3,  Dx1 = | −5 −2  1 | = −3,  Dx2 = | 3 −5  1 | = −9,
        | 1  3 −2 |              | 14  3 −2 |              | 1 14 −2 |

          | 1  1  6 |
    Dx3 = | 3 −2 −5 | = 6.
          | 1  3 14 |

Hence x1 = Dx1 / D = 1, x2 = Dx2 / D = 3 and x3 = Dx3 / D = −2.
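Cramer's rule is straightforward to mechanize. The sketch below (plain Python; the helper `det3` expands a 3 × 3 determinant along its first row) reproduces the result of Example 2.10:

```python
# Cramer's rule for a 3x3 system, sketched with plain Python lists.

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve Ax = b for an invertible 3x3 matrix A via Cramer's rule."""
    D = det3(A)
    xs = []
    for k in range(3):
        # A_k: replace column k of A by b.
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        xs.append(det3(Ak) / D)
    return xs

A = [[1, 1, -1], [3, -2, 1], [1, 3, -2]]
b = [6, -5, 14]
print(cramer3(A, b))  # [1.0, 3.0, -2.0]
```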

2.4 Gauss-Jordan Eliminations

For the system (1), the matrix

                 [ a11 a12 ... a1n | b1 ]
    Ab = [A|b] = [ a21 a22 ... a2n | b2 ]
                 [  .   .       .  |  . ]
                 [ am1 am2 ... amn | bm ]

is called the augmented matrix of the system.

Recall steps for the Gaussian elimination method for solving a system of linear
equations:

1. Write the augmented matrix of the system.


2. Use elementary row operations to transform the augmented matrix and obtain
the row echelon form (REF).
3. Stop the process in step 2 if you obtain a row whose elements are all zeros except
   the last one on the right. In that case, the system is inconsistent (has no
   solution). Otherwise, finish step 2 and read the solutions of the system from the
   final matrix.

The Gauss-Jordan elimination method is similar to Gaussian elimination method


only that the augmented matrix is further reduced to obtain reduced row echelon
form (RREF).
Definition 2.11. In a linear system, the variables that correspond to columns with
leading 1’s are called basic variables. Any remaining variables are called free
variables.

If a linear system is consistent:

• if every variable is a basic variable, for example

      [ 1 ∗ ∗ ∗ ∗ ∗ ]
      [ 0 1 ∗ ∗ ∗ ∗ ]
      [ 0 0 1 ∗ ∗ ∗ ]
      [ 0 0 0 1 ∗ ∗ ]
      [ 0 0 0 0 1 ∗ ]

  then the system has a unique solution and this can be found by back-substitution.

• if there is at least one free variable, for example

      [ 1 ∗ ∗ ∗ ∗ ∗ ]
      [ 0 1 ∗ ∗ ∗ ∗ ]
      [ 0 0 0 1 ∗ ∗ ]
      [ 0 0 0 0 1 ∗ ]
      [ 0 0 0 0 0 0 ]

  then the system has infinitely many solutions.
Example 2.12.

Solve the system

    x1 + 4x2 + x3 = 3
    2x1 − 3x2 − 2x3 = 5
    2x1 + 4x2 + 2x3 = 6.

Solution

The augmented matrix is

    [ 1  4  1 | 3 ]
    [ 2 −3 −2 | 5 ]
    [ 2  4  2 | 6 ] .

Now we use elementary row operations to put this matrix into RREF as follows.

    R3 = R3 − 2R1   [ 1   4  1 |  3 ]              [ 1   4  1 |  3 ]
    R2 = R2 − 2R1   [ 0 −11 −4 | −1 ]   R2 ↔ R3    [ 0  −4  0 |  0 ]
    ------------>   [ 0  −4  0 |  0 ]   ------->   [ 0 −11 −4 | −1 ]

    R2 = −(1/4)R2   [ 1   4  1 |  3 ]   R3 = R3 + 11R2   [ 1 4  1 |  3 ]
    ------------>   [ 0   1  0 |  0 ]   ------------->   [ 0 1  0 |  0 ]
                    [ 0 −11 −4 | −1 ]                    [ 0 0 −4 | −1 ]

    R1 = R1 − 4R2   [ 1 0 1 |  3  ]   R1 = R1 − R3   [ 1 0 0 | 11/4 ]
    R3 = −(1/4)R3   [ 0 1 0 |  0  ]   ----------->   [ 0 1 0 |  0   ]
    ------------>   [ 0 0 1 | 1/4 ]                  [ 0 0 1 | 1/4  ] .

The last matrix is in reduced row echelon form.

Hence x1 = 11/4, x2 = 0 and x3 = 1/4.
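The whole procedure can be automated. Below is a minimal reduced-row-echelon sketch in plain Python, using exact `fractions.Fraction` arithmetic (with no pivoting strategy beyond "first non-zero entry"); applied to the augmented matrix of Example 2.12 it reproduces the solution.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row == rows:
            break
        # Find a row at or below pivot_row with a non-zero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        # Scale the pivot row so the leading entry is 1.
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]
        # Eliminate this column from every other row.
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

aug = [[1, 4, 1, 3], [2, -3, -2, 5], [2, 4, 2, 6]]
R = rref(aug)
print([row[-1] for row in R])  # [Fraction(11, 4), Fraction(0, 1), Fraction(1, 4)]
```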
Example 2.13.

Solve the system

    x1 + 2x2 + 3x3 = 12
    4x1 + 5x2 + 6x3 = 11
    7x1 + 8x2 + 9x3 = −10.

Solution

The augmented matrix is

    [ 1 2 3 |  12 ]
    [ 4 5 6 |  11 ]
    [ 7 8 9 | −10 ] .

Applying appropriate row operations gives the following matrix in reduced row
echelon form:

    [ 1 0 −1 | 0 ]
    [ 0 1  2 | 0 ]
    [ 0 0  0 | 1 ] .

Since 0 ≠ 1, this system is inconsistent.

Example 2.14.

Solve the system

    x1 − x2 + x3 = 3
    2x1 − x2 + 4x3 = 7
    3x1 − 5x2 − x3 = 7.

Solution

The augmented matrix is

    [ 1 −1  1 | 3 ]
    [ 2 −1  4 | 7 ]
    [ 3 −5 −1 | 7 ] .

Applying appropriate row operations gives the following matrix in reduced row
echelon form:

    [ 1 0 3 | 4 ]
    [ 0 1 2 | 1 ]
    [ 0 0 0 | 0 ] .

This shows that there are infinitely many solutions as x3 is a free variable.

The original linear system has the same solutions as


    x1 + 3x3 = 4
    x2 + 2x3 = 1

and solutions can be found by giving values to the free variable x3 then evaluating
the basic variables x1 and x2 .

To describe the general solution, assign the arbitrary value t to x3 . Back-substitution


now shows

    x2 = 1 − 2x3 = −2t + 1   and   x1 = 4 − 3x3 = −3t + 4.



So the general solution to this system of equations is

(x1 , x2 , x3 ) = (−3t + 4, −2t + 1, t)

where t ∈ R can have any value, and the set of all solutions is

{(−3t + 4, −2t + 1, t)|t ∈ R}.

In this example the arbitrary value t ∈ R is called a parameter, and the general
solution (−3t + 4, −2t + 1, t) is called a parametric solution.
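A parametric solution is easy to spot-check: substitute several values of the parameter back into the original equations. A small sketch:

```python
# Check that (x1, x2, x3) = (-3t + 4, -2t + 1, t) satisfies the system of
# Example 2.14 for several values of the parameter t.

def satisfies(x1, x2, x3):
    return (x1 - x2 + x3 == 3
            and 2 * x1 - x2 + 4 * x3 == 7
            and 3 * x1 - 5 * x2 - x3 == 7)

results = [satisfies(-3 * t + 4, -2 * t + 1, t) for t in range(-2, 3)]
print(all(results))  # True
```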

2.5 Applications of Linear Equations

Some of the applications of systems of linear equations are:

1. Reaction stoichiometry (balancing equations)

2. Electronic circuit analysis (current flow in networks)

3. Structural analysis (linear deformations of various constructions)

4. Statistics (least squares analysis)

5. Economics: optimization problems

6. System of non-linear equations: approximation of solutions.


Example 2.15.

Two cars, one traveling 10 km/h faster than the other car, start at the same time
from the same point and travel in opposite directions. In 3 hours, they are 300 km
apart. Find the rate of each car.

Solution

Let x1 and x2 be the speeds of the first car and the second car respectively. The
table below summarises the given information.

Rate Time Distance


First Car x1 3 3x1
Second Car x2 3 3x2

The rate of one car is 10 km/h faster than the other:

x1 = x2 + 10 ⇒ x1 − x2 = 10.

After 3 hours, the distance between the two cars is 300km:

3x1 + 3x2 = 300 ⇒ x1 + x2 = 100.

We have the system of equations

    x1 − x2 = 10
    x1 + x2 = 100.

Solving this system gives x1 = 55 and x2 = 45.

Hence the rates of the two cars are 55 km/h and 45 km/h.

Example 2.16.

What is the stoichiometry of the complete combustion of propane?

C3 H8 + x1 O2 → x2 CO2 + x3 H2 O.

Solution

Oxygen: 2x1 = 2x2 + x3

Carbon: 3 = x2 ⇒ x2 = 3

Hydrogen: 8 = 2x3 ⇒ x3 = 4

Substituting: 2x1 = 2x2 + x3 ⇒ 2x1 = 10 ⇒ x1 = 5.
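The same elimination can be written out in a few lines of code; the sketch below simply mirrors the substitutions above:

```python
# Balance C3H8 + x1 O2 -> x2 CO2 + x3 H2O by solving the element balances.

x2 = 3                   # carbon:   3 = x2
x3 = 8 / 2               # hydrogen: 8 = 2*x3
x1 = (2 * x2 + x3) / 2   # oxygen:   2*x1 = 2*x2 + x3

print(x1, x2, x3)  # 5.0 3 4.0
```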

Example 2.17.

In the past three men's soccer games, a Lilongwe based team called Town Rangers
averaged 5/3 goals per game. They scored the same number of goals in the most
recent two games, but three games ago they scored two additional goals. How many
goals did they score in each game?

Solution

Let the number of goals in the past three games be x1 , x2 and x3 with x1 being the
most recent.
That they averaged 5/3 goals means

    (x1 + x2 + x3) / 3 = 5/3 ⇒ x1 + x2 + x3 = 5.

That the past two games had the same number of goals means
x1 = x2 ⇒ x1 − x2 = 0.
That three games ago there were two more goals can be represented with the equa-
tion
x3 = x2 + 2 ⇒ x3 − x2 = 2.
Thus, we have the system of linear equations

    x1 + x2 + x3 = 5
    x1 − x2 = 0
    x3 − x2 = 2.

The RREF of the augmented matrix of the system is

    [ 1 0 0 | 1 ]
    [ 0 1 0 | 1 ]
    [ 0 0 1 | 3 ] .

Hence x1 = 1, x2 = 1 and x3 = 3.

3 Linear Combinations, Dependence and Independence

3.1 Linear Dependence and Independence

Definition 3.1. The linear combination of the vectors {v1 , v2 , ..., vk } with scalars
r1 , r2 , ..., rk is the vector r1 v1 + r2 v2 + ... + rk vk .
Definition 3.2. A set of vectors {v1 , v2 , ..., vk } is linearly independent if the
only linear combination r1 v1 + r2 v2 + ... + rk vk equal to the zero vector is the one
with r1 = r2 = ... = rk = 0.

A set of vectors which is not linearly independent is linearly dependent.


Definition 3.3. A set of vectors {v1 , v2 , ..., vk } is linearly dependent if there are
scalars r1 , r2 , ..., rk not all zeros such that r1 v1 + r2 v2 + ... + rk vk = 0.
Example 3.4.

1. 2(1, 2, 3) + (3, 5, 7) − (5, 9, 13) = 0 and 3(1, 2, 3) − (3, 5, 7) − (0, 1, 2) = 0, so the
   two sets of vectors {(1, 2, 3), (3, 5, 7), (5, 9, 13)} and {(1, 2, 3), (3, 5, 7), (0, 1, 2)}
   are linearly dependent.
   
       

 1 0 0 0 
       
0 1 0 0

2. {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)} is a linearly independent subset of R4 .


 0 0 1 0 

 

3. {1, x, x^2} is a linearly independent subset of P2, the set of polynomials over R
   of degree at most 2.
4. {1, x, x2 , ..., xn } is a linearly independent subset of Pn .

Verification of Linearly Dependency/Independency

Suppose the given set of vectors is S = {v1 , v2 , ..., vk }.

1. Equate the linear combination of these vectors to the zero vector, that is
r1 v1 + r2 v2 + ... + rk vk = 0 where r’s are scalars that we have to find.
2. Solve for the scalars r1 , r2 , ..., rk . If all of them must equal zero then S is a
   linearly independent set; otherwise (if at least one ri can be non-zero) S is
   linearly dependent.
Example 3.5.

Show whether the set S = {(2, 0, 6), (1, 2, −4), (3, 2, 2)} is linearly dependent or
independent.

Solution

Let r1 , r2 and r3 be scalars such that


r1 (2, 0, 6) + r2 (1, 2, −4) + r3 (3, 2, 2) = (0, 0, 0).
Then we have
(2r1 + r2 + 3r3 , 2r2 + 2r3 , 6r1 − 4r2 + 2r3 ) = (0, 0, 0)
which is equivalent to the system

    2r1 + r2 + 3r3 = 0 ....(i),
    2r2 + 2r3 = 0 ....(ii),
    6r1 − 4r2 + 2r3 = 0 ....(iii).

Multiplying Equation (i) by 3 and subtracting Equation (iii) gives


7r2 + 7r3 = 0....(iv).
Clearly Equation (iv) is equivalent to Equation (ii). This implies that the above
system has infinitely many solutions.

Putting r2 = 1 in Equation (iv) gives r3 = −1. Substituting in Equation (i) we have


r1 = 1.

Hence S is linearly dependent.
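The dependence relation just found (r1 = 1, r2 = 1, r3 = −1) can be verified directly:

```python
# Verify that r1 = 1, r2 = 1, r3 = -1 is a non-trivial solution of
# r1*(2,0,6) + r2*(1,2,-4) + r3*(3,2,2) = (0,0,0), so S is linearly dependent.

v1, v2, v3 = (2, 0, 6), (1, 2, -4), (3, 2, 2)
r1, r2, r3 = 1, 1, -1

combo = tuple(r1 * a + r2 * b + r3 * c for a, b, c in zip(v1, v2, v3))
print(combo)  # (0, 0, 0)
```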


Example 3.6.

Show whether the set V = {(1, 2, 3), (1, 0, 2), (2, 1, 5)} is linearly dependent or inde-
pendent.

Solution

Let r1 , r2 and r3 be scalars such that

r1 (1, 2, 3) + r2 (1, 0, 2) + r3 (2, 1, 5) = (0, 0, 0).

Then we have

(r1 + r2 + 2r3 , 2r1 + r3 , 3r1 + 2r2 + 5r3 ) = (0, 0, 0)

which is equivalent to the system



    r1 + r2 + 2r3 = 0 ....(i),
    2r1 + r3 = 0 ....(ii),
    3r1 + 2r2 + 5r3 = 0 ....(iii).

Solving this system we get r1 = r2 = r3 = 0.

Hence V is linearly independent.


Example 3.7.

Consider the polynomials p(x) = 1 + 3x + 2x2 , q(x) = 3 + x + 2x2 and r(x) = 2x + x2 .


Show whether {p(x), q(x), r(x)} is linearly dependent or independent.

Solution

Let a1 , a2 and a3 be scalars such that a1 p(x) + a2 q(x) + a3 r(x) = 0, that is

0 = a1 (1 + 3x + 2x2 ) + a2 (3 + x + 2x2 ) + a3 (2x + x2 )


= (a1 + 3a2 ) + (3a1 + a2 + 2a3 )x + (2a1 + 2a2 + a3 )x2 .

This corresponds to the following system of linear equations

    a1 + 3a2 = 0 ....(i),
    3a1 + a2 + 2a3 = 0 ....(ii),
    2a1 + 2a2 + a3 = 0 ....(iii).

We compute

    [ 1 3 0 ]   R2 = R2 − 3R1   [ 1  3 0 ]   R3 = R2 − 2R3   [ 1  3 0 ]
    [ 3 1 2 ]   R3 = R3 − 2R1   [ 0 −8 2 ]   ------------>   [ 0 −8 2 ]
    [ 2 2 1 ]   ------------>   [ 0 −4 1 ]                   [ 0  0 0 ] .

Hence, {p(x), q(x), r(x)} is linearly dependent.


Example 3.8.

Let X = {sin x, cos x}. Is X linearly dependent or linearly independent?

Solution

Suppose that r1 sin x + r2 cos x = 0. Notice that this equation holds for all x ∈ R.
If we put x = 0 we get r1 · 0 + r2 · 1 = 0, and putting x = π/2 we get r1 · 1 + r2 · 0 = 0,
so we must have r1 = r2 = 0.

Hence, X is linearly independent.


Lemma 3.9. Suppose that the vectors v1 , v2 , ..., vk are linearly independent but
v1 , v2 , ..., vk , vk+1 are linearly dependent. Then vk+1 must be a linear combination
of v1 , v2 , ..., vk .

Proof. Since v1 , v2 , ..., vk , vk+1 are linearly dependent, there exist scalars r1 , r2 , ..., rk , rk+1
not all zeros such that

r1 v1 + r2 v2 + ... + rk vk + rk+1 vk+1 = 0.

Note that rk+1 can not be equal to zero (why?). Now

    vk+1 = −(r1 / rk+1) v1 − (r2 / rk+1) v2 − ... − (rk / rk+1) vk .

Hence vk+1 is a linear combination of v1 , v2 , ..., vk .
Theorem 3.10. The set of vectors V = {v1 , v2 , ..., vk } is linearly dependent if and
only if one vector in V can be expressed as a linear combination of the other vectors.

Proof. Exercise.
Exercise 3.11.

1. Show whether each of the following sets of vectors is linearly dependent or
   linearly independent.

   (a) S = {(1, 1, 1, 1), (1, 2, 3, 4), (1, 2, 1, 2), (3, 5, 5, 7)}

(b) B = {(1, 1, 3), (0, 2, 1), (0, 0, 0)}


(c) R = {x, sin x, cos x}
(d) C = {(1, 0, 0), (0, 1, 1), (1, 0, 1)}
(e) D = {(1, 1, −1), (2, −3, 5), (−2, 1, 4)}
(f) X = {ex , e2x , e3x }
     
(g) T = {[1 0; 0 1], [0 −1; 0 0], [1 0; 0 0]} (2 × 2 matrices; rows separated by semicolons)
(h) E = {(2, 3, −1), (−4, 2, −5), (5, −4, 9)}
(i) W = {1 + x − x2 , 2 + x2 , x3 − 2x4 }
(j) A = {(1, 1, 1), (0, 1, 1), (2, 0, 1)}
2. Determine the value of k such that the set {(1, 2, 1), (k, 3, 1), (2, k, 0)} is lin-
early dependent in R3 .
     
3. Find the values of h for which the set of vectors {(1, 0, 0), (h, 1, −h), (1, 2h, 3h + 1)}
   is linearly independent.
4. Suppose M is an n × n upper-triangular matrix. If the diagonal entries of M
are all non-zero, then prove that the column vectors are linearly independent.
Does the conclusion hold if we do not assume that M has non-zero diagonal
entries?
5. Show that any set of vectors with the zero vector is linearly dependent.
6. Show that the set of vectors {v1 , v2 , ..., vn } is linearly dependent if and only if
one vector vi (1 ≤ i ≤ n) can be expressed as linear combination of the other
vectors.

3.2 The Rank of a Matrix and Existence of Solutions of Linear Equations

Definition 3.12. The rank of a matrix A denoted rank(A) is the maximum number
of rows in A which are linearly independent.

Recall that two matrices A and B are row equivalent if we can convert A to B by
applying a sequence of elementary row operations.
Lemma 3.13. If A and B are row equivalent, then they have the same rank.

Proof. Exercise.

Lemma 3.14. If a matrix A is in row echelon form, then rank(A) is the number of
non-zero rows of A.

Proof. Exercise.

Lemma 3.13 and Lemma 3.14 suggest that to find the rank of a matrix A, convert A
to a matrix A′ in row echelon form, then count the number of non-zero rows of A′.

Example 3.15.

Compute the rank of the matrix

    A = [ 1 2 ]
        [ 0 1 ]
        [ 3 4 ] .

Solution

    [ 1 2 ]   R3 = R3 − 3R1   [ 1  2 ]   R3 = R3 + 2R2   [ 1 2 ]
    [ 0 1 ]   ------------>   [ 0  1 ]   ------------>   [ 0 1 ]
    [ 3 4 ]                   [ 0 −2 ]                   [ 0 0 ] .

Hence the rank of A is 2.

Example 3.16.

Compute the rank of the matrix

    B = [ 1 2 3 4 ]
        [ 5 6 7 8 ]
        [ 3 2 1 0 ] .

Solution

    [ 1 2 3 4 ]   R3 = R3 − 3R1   [ 1  2  3   4 ]   R3 = R3 − R2    [ 1 2 3 4 ]
    [ 5 6 7 8 ]   R2 = R2 − 5R1   [ 0 −4 −8 −12 ]   R2 = −(1/4)R2   [ 0 1 2 3 ]
    [ 3 2 1 0 ]   ------------>   [ 0 −4 −8 −12 ]   ------------>   [ 0 0 0 0 ] .

Hence, B has rank 2.
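This recipe (reduce to row echelon form, count the non-zero rows) is easy to code. A minimal sketch in plain Python with exact `Fraction` arithmetic reproduces Examples 3.15 and 3.16:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix: forward elimination, then count the pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row == rows:
            break
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        for r in range(pivot_row + 1, rows):
            factor = M[r][col] / M[pivot_row][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return pivot_row  # number of pivots = non-zero rows in the echelon form

print(rank([[1, 2], [0, 1], [3, 4]]))                     # 2
print(rank([[1, 2, 3, 4], [5, 6, 7, 8], [3, 2, 1, 0]]))   # 2
```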

Lemma 3.17. The rank of a matrix A is the same as the rank of its transpose, AT .

Proof. Exercise.

Theorem 3.18. Let Ab = [A|b] be the augmented matrix of a linear system Ax = b


in n unknowns. Then we have

1. The linear system is consistent if and only if rank(A) = rank(A|b).



2. If the linear system is consistent, then it has a unique solution if and only if
rank(A) = rank(A|b) = n.

Proof. Exercise.

Example 3.19.

Is the following linear system consistent? Does it have a unique solution?


    2x1 + 2x2 − x3 = 1
    4x1 + 2x3 = 2
    6x2 − x3 = 4

Solution

We have
   
2 2 −1 2 2 −1 1
A = 4 0 2  and Ab = 4 0 2 2.
0 6 −1 0 6 −1 4

Now we compute rank(A).

1 1 − 21
       
2 2 −1 2 2 −1 R =R + 6 R 2 2 −1 R = 1 R
R2 =R2 +2R1 3 3 4 2 3 5 3
4 0 2  7−−−−−−−→ 0 −4 4  − 7 −−−−− 1
−→ 0 1 −1 7−− −−1
→ 0 1 −1 .
R2 =− 4 R2 R1 = 2 R1
0 6 −1 0 6 −1 0 0 5 0 0 1

The last matrix is in row echelon form and all the three row vectors are non-zero.
This gives rank(A) = 3.

For Ab ,
     
2 2 −1 1 2 2 −1 1 R =R + 6 R 2 2 −1 1
R2 =R2 +2R1 3 3 4 2
4 0 2 2 7− −−−−−−→ 0 −4 4 0 7−−−−−− −→ 0 1 −1 0 .
R2 =− 41 R2
0 6 −1 4 0 6 −1 4 0 0 5 4

1 1 − 21 12
   
2 2 −1 1 R = 1 R
1 2 1
0 1 −1 0 7−− −−→ 0 1 −1 0 .
R3 = 51 R3
0 0 5 4 0 0 1 45

This gives rank(Ab ) = 3.

Hence the system is consistent and has a unique solution.



Corollary 3.20. Let A be an m × n matrix. A homogeneous system of equations


Ax = 0 will have a unique solution, the trivial solution x = 0, if and only if
rank(A) = n. In all other cases, it will have infinitely many solutions. As a
consequence, if n > m, i.e., if the number of unknowns is larger than the number of
equations, then the system will have infinitely many solutions.

Proof. Either a system of linear equations has no solution, a unique solution or infinitely
many solutions. Since x = 0 is always a solution, the first case is eliminated as a
possibility, so the system is consistent. Therefore, we must always have rank(A) =
rank(A|b) ≤ n. Equality will hold if and only if x = 0 is the only solution. When it
does not hold, we are always in the third case, so there are infinitely many solutions
for the system. If n > m, then we need only note that rank(A) ≤ m < n to see
that the system has to have infinitely many solutions.

Recall that a set of k vectors {v1 , v2 , ..., vk } is linearly independent if the equation

    r1 v1 + r2 v2 + ... + rk vk = 0,

where the ri 's are scalars, has only r1 = r2 = ... = rk = 0 as a solution; otherwise
the vectors are linearly dependent. Let's assume the vectors are all m × 1 column
vectors (if they are rows, just transpose them). Now, use the basic matrix trick
(BMT) to put the equation in matrix form:

    Ar = 0   where A = [v1 , v2 , ..., vk ] and r = (r1 r2 ... rk )^T .

Determining whether the vectors are linearly independent or dependent is the same
as determining whether the homogeneous system Ar = 0 has a nontrivial solution.
Combining this observation with Corollary 3.20 gives another way to check for linear
independence/dependence by finding the rank of a matrix.
Corollary 3.21. A set of k column vectors {v1 , v2 , ..., vk } is linearly independent
if and only if the associated matrix A = [v1 , v2 , ..., vk ] has rank(A) = k.
Example 3.22.

Determine whether the vectors below are linearly independent or linearly dependent.

    v1 = (1, −1, 2, 1)^T , v2 = (2, 1, 1, −1)^T , v3 = (0, 1, −1, −1)^T .

Solution

Form the associated matrix A and perform row reduction on it:

        [  1  2  0 ]      [ 1 0 −2/3 ]
    A = [ −1  1  1 ]  ⇔   [ 0 1  1/3 ]
        [  2  1 −1 ]      [ 0 0   0  ]
        [  1 −1 −1 ]      [ 0 0   0  ] .

The rank of A is 2 < k = 3. Hence the vectors are linearly dependent.

4 Introduction to Linear Transformations

4.1 Linear Transformations

Definition 4.1. A linear transformation is a function T : Rn → Rm such that

1. T (v + u) = T (v) + T (u) for all v, u ∈ Rn .

2. T (rv) = rT (v) for all v ∈ Rn and r ∈ R.

Definition 4.1 implies that a linear transformation is a function T which preserves


addition and scalar multiplication.

The following are trivial examples of linear transformations:

1. T (v) = 0 for all v ∈ Rn . This is called the zero transformation.

2. T (v) = v for all v ∈ Rn . This is called the identity transformation.

Example 4.2.

Other examples of linear transformations are

1. two examples of T : R2 → R2 : rotations around the origin and reflections along
   a line through the origin.

2. T : Pn → Pn−1 , the derivative function that maps each polynomial p(x) to its
derivative p0 (x).

The following theorem gives properties of linear transformations.

Theorem 4.3. Suppose T : Rn → Rm is a linear transformation. Then



1. T (0) = 0.
2. T (−v) = −T (v) for all v ∈ Rn .
3. T (u − v) = T (u) − T (v) for all u, v ∈ Rn .
4. If
       v = r1 v1 + r2 v2 + ... + rn vn
   then

   T (v) = T (r1 v1 + r2 v2 + ... + rn vn ) = r1 T (v1 ) + r2 T (v2 ) + ... + rn T (vn ).

Proof. 1. By the second property of Definition 4.1, we have

    T (0) = T (0 · 0) = 0 T (0) = 0.

2. Similarly
T (−v) = T ((−1)v) = (−1)T (v) = −T (v).

3. By the first property of Definition 4.1, we have

T (u − v) = T (u + (−1)v) = T (u) + T ((−1)v) = T (u) − T (v).

4. To prove (4), we use induction on n. For n = 1 we have, by second property


of Definition 4.1, T (r1 v1 ) = r1 T (v1 ).
For n = 2, by the two properties of Definition 4.1, we have

T (r1 v1 + r2 v2 ) = r1 T (v1 ) + r2 T (v2 ).

So, (4) is proved for n = 2. Now, we assume that the formula (4) is valid for
n − 1 vectors and prove it for n. We have

T (r1 v1 + r2 v2 + ... + rn vn ) = T (r1 v1 + r2 v2 + ... + rn−1 vn−1 ) + T (rn vn )


= r1 T (v1 ) + r2 T (v2 ) + ... + rn−1 T (vn−1 ) + rn T (vn ).

Every linear transformation T : Rn → Rm is given by left multiplication with some


m × n matrix.
Theorem 4.4. Suppose A is a matrix of size m × n. Given a vector
v = (v1 , v2 , ..., vn )^T ∈ R^n , define

    T (v) = Av.

Then T is a linear transformation from R^n to R^m .

Proof. From properties of matrix multiplication, for u, v ∈ Rn and scalar r we have

T (u + v) = A(u + v) = A(u) + A(v) = T (u) + T (v)

and
T (ru) = A(ru) = rAu = rT (u).

Example 4.5.

Let T (v1 , v2 , v3 ) = (2v1 + v2 , 2v2 − 3v3 , v1 − v3 ). Find

1. T (−4, 5, 1).

2. The preimage of (4, 1, −1).

Solution

1. T (−4, 5, 1) = (2(−4) + 5, 2(5) − 3(1), −4 − 1) = (−3, 7, −5).

2. Suppose (v1 , v2 , v3 ) is the preimage of (4, 1, −1). Then

   T (v1 , v2 , v3 ) = (4, 1, −1) = (2v1 + v2 , 2v2 − 3v3 , v1 − v3 ).

This gives the following system of linear equations:

    2v1 + v2 = 4
    2v2 − 3v3 = 1
    v1 − v3 = −1.

The augmented matrix for this system is

    [ 2 1  0 |  4 ]
    [ 0 2 −3 |  1 ]
    [ 1 0 −1 | −1 ]

which can be row reduced to get the matrix

    [ 1 0 0 |  4/7 ]
    [ 0 1 0 | 20/7 ]
    [ 0 0 1 | 11/7 ] .

Hence v1 = 4/7, v2 = 20/7 and v3 = 11/7, and the preimage of (4, 1, −1) is
(4/7, 20/7, 11/7).
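Both parts of this example can be checked in code. The sketch below writes T as a plain Python function (taking the second component as 2v2 − 3v3, the form used in the worked part 2) and confirms that the preimage maps back to (4, 1, −1):

```python
from fractions import Fraction as F

# T from Example 4.5, with second component 2*v2 - 3*v3.
def T(v1, v2, v3):
    return (2 * v1 + v2, 2 * v2 - 3 * v3, v1 - v3)

print(T(-4, 5, 1))                      # part 1: (-3, 7, -5)
print(T(F(4, 7), F(20, 7), F(11, 7)))   # part 2: maps back to (4, 1, -1)
```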

Example 4.6.

Determine whether the function

T : R2 → R2

given by
T (x, y) = (x2 , y)
is linear.

Solution

We have

    T ((x, y) + (w, z)) = T (x + w, y + z) = ((x + w)^2 , y + z)
                        ≠ (x^2 , y) + (w^2 , z) = T (x, y) + T (w, z).

Hence T does not preserve addition, so T is not linear.

Alternatively we could show that T does not preserve scalar multiplication, or we
could check failure to preserve addition numerically, for example

    T ((1, 1) + (2, 0)) = T (3, 1) = (9, 1) ≠ (5, 1) = T (1, 1) + T (2, 0).
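That numerical counterexample takes only a few lines to confirm:

```python
# T(x, y) = (x^2, y) fails additivity at (1, 1) and (2, 0).
def T(x, y):
    return (x * x, y)

lhs = T(1 + 2, 1 + 0)                                  # T applied to the sum
rhs = tuple(a + b for a, b in zip(T(1, 1), T(2, 0)))   # sum of the images
print(lhs, rhs)  # (9, 1) (5, 1)
```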

Example 4.7.

Let T : R3 → R3 be a linear transformation such that

T (1, 0, 0) = (2, 4, −1), T (0, 1, 0) = (1, 3, −2) and T (0, 0, 1) = (0, −2, 2).

Find T (−2, 4, −1).

Solution

We have
(−2, 4, −1) = −2(1, 0, 0) + 4(0, 1, 0) − (0, 0, 1).

So

T (−2, 4, −1) = −2T (1, 0, 0) + 4T (0, 1, 0) − T (0, 0, 1)


= −2(2, 4, −1) + 4(1, 3, −2) − (0, −2, 2)
= (−4, −8, 2) + (4, 12, −8) − (0, −2, 2)
= (0, 6, −8).

Example 4.8.

Let T : R2 → R2 be a linear transformation such that

T (1, 1) = (0, 2) and T (1, −1) = (2, 0)

Find

1. T (1, 4).
2. T (−2, 1)

Solution

1. First we write
       (1, 4) = a(1, 1) + b(1, −1).
Solving we get a = 2.5 and b = −1.5, so

T (1, 4) = 2.5T (1, 1) − 1.5T (1, −1) = 2.5(0, 2) − 1.5(2, 0) = (−3, 5).

2. Writing
(−2, 1) = a(1, 1) + b(1, −1)
and solving for a and b gives a = −0.5 and b = −1.5.
Hence

T (−2, 1) = −0.5T (1, 1) − 1.5T (1, −1) = −0.5(0, 2) − 1.5(2, 0) = (−3, −1).

4.2 Matrices for Linear Transformations

As already mentioned, for any linear transformation

T : Rn → R m ,

we can find a matrix A such that

T (v) = Av

for v ∈ Rn . We will look at how to find the matrix A.


     
Let e1 = (1, 0, ..., 0)^T , e2 = (0, 1, ..., 0)^T , ..., en = (0, 0, ..., 1)^T . Then ei is the i-th
column of the identity matrix in R^n . The set B = {e1 , e2 , ..., en } is called the
standard basis of the vector space R^n (but this is discussion for another day).
Theorem 4.9. Let T : R^n → R^m be a linear transformation. Write

    T (e1 ) = (a11 , a21 , ..., am1 )^T , T (e2 ) = (a12 , a22 , ..., am2 )^T , ..., T (en ) = (a1n , a2n , ..., amn )^T .

These columns T (e1 ), T (e2 ), ..., T (en ) form an m × n matrix

        [ a11 a12 ... a1n ]
    A = [ a21 a22 ... a2n ]
        [  .   .       .  ]
        [ am1 am2 ... amn ] .

This matrix has the property that

    T (v) = Av for all v ∈ R^n .

Proof. We can write

    v = (v1 , v2 , ..., vn )^T = v1 e1 + v2 e2 + ... + vn en .

Now

    Av = v1 (a11 , a21 , ..., am1 )^T + v2 (a12 , a22 , ..., am2 )^T + ... + vn (a1n , a2n , ..., amn )^T
       = v1 T (e1 ) + v2 T (e2 ) + ... + vn T (en )
       = T (v1 e1 + v2 e2 + ... + vn en )
       = T (v).

The matrix A in Theorem 4.9 is called the standard matrix of T .

Example 4.10.

Let T (x, y, z) = (5x − 3y + z, 2z + 4y, 5x + 3y) be a linear transformation. Find the


standard matrix of T .

Solution
 
We have T : R^3 → R^3 . We write vectors v ∈ R^3 as columns v = (x, y, z)^T instead of
(x, y, z). Let

    e1 = (1, 0, 0)^T , e2 = (0, 1, 0)^T and e3 = (0, 0, 1)^T in R^3 .

We have

    T (e1 ) = (5, 0, 5)^T , T (e2 ) = (−3, 4, 3)^T and T (e3 ) = (1, 2, 0)^T .

Hence the standard matrix of T is

    [ 5 −3 1 ]
    [ 0  4 2 ]
    [ 5  3 0 ] .
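Assembling the standard matrix column-by-column from T(e1), T(e2), T(e3) is mechanical, as the sketch below shows for this T:

```python
# Standard matrix of T(x, y, z) = (5x - 3y + z, 2z + 4y, 5x + 3y),
# built by applying T to the standard basis vectors.

def T(x, y, z):
    return (5 * x - 3 * y + z, 2 * z + 4 * y, 5 * x + 3 * y)

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
columns = [T(*e) for e in basis]                      # T(e1), T(e2), T(e3)
A = [[col[i] for col in columns] for i in range(3)]   # columns -> rows of A
print(A)  # [[5, -3, 1], [0, 4, 2], [5, 3, 0]]
```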

Example 4.11.

Let T (x, y, z) = (2x+y, 3y −z) be a linear transformation. Write down the standard
matrix of T and use it to find T (0, 1, −1).

Solution

Here T : R^3 → R^2 . With

    e1 = (1, 0, 0)^T , e2 = (0, 1, 0)^T and e3 = (0, 0, 1)^T

we have

    T (e1 ) = (2, 0)^T , T (e2 ) = (1, 3)^T and T (e3 ) = (0, −1)^T .

So the standard matrix of T is

    A = [ 2 1  0 ]
        [ 0 3 −1 ] .

Therefore

    T (0, 1, −1) = [ 2 1  0 ] [  0 ]   [ 1 ]
                   [ 0 3 −1 ] [  1 ] = [ 4 ] .
                              [ −1 ]
Writing the answer as a row vector we have

T (0, 1, −1) = (1, 4).

Example 4.12.

Let T be the reflection in the line y = x in R2 . So T (x, y) = (y, x).

1. Write down the standard matrix of T .

2. Use the standard matrix to compute T (3, 4).

Solution

1. In this case T : R^2 → R^2 . With e1 = (1, 0)^T and e2 = (0, 1)^T we have

       T (e1 ) = (0, 1)^T and T (e2 ) = (1, 0)^T .

   Hence the standard matrix of T is

       A = [ 0 1 ]
           [ 1 0 ] .

2. We know that T (3, 4) = (4, 3) but we want to find the same answer using the
   standard matrix.

       A [ 3 ] = [ 0 1 ] [ 3 ] = [ 4 ]
         [ 4 ]   [ 1 0 ] [ 4 ]   [ 3 ] .

   Hence T (3, 4) = (4, 3).

Lemma 4.13. Suppose T : R^2 → R^2 is the counterclockwise rotation by a fixed angle
θ. Then

    T (x, y) = [ cos θ  − sin θ ] [ x ]
               [ sin θ    cos θ ] [ y ] .

Proof. We can write

    x = r cos α,  y = r sin α  where  r = √(x^2 + y^2)  and  tan α = y/x.

By definition (see Figure 2)

    T (x, y) = (r cos(α + θ), r sin(α + θ)).

Using trigonometric formulas

    r cos(α + θ) = r cos α cos θ − r sin α sin θ = x cos θ − y sin θ

and

    r sin(α + θ) = r sin α cos θ + r cos α sin θ = y cos θ + x sin θ.

Hence

    T (x, y) = [ cos θ  − sin θ ] [ x ]
               [ sin θ    cos θ ] [ y ] .

Figure 2: Counterclockwise Rotation T (x, y) = Rα
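The rotation matrix of Lemma 4.13 is easy to exercise numerically; the sketch below rotates the point (1, 0) counterclockwise by 90° (θ = π/2):

```python
import math

def rotate(theta, x, y):
    """Counterclockwise rotation of (x, y) about the origin by angle theta."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = rotate(math.pi / 2, 1.0, 0.0)
print(round(x, 10), round(y, 10))  # 0.0 1.0
```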

Exercise 4.14.

Let T be the counterclockwise rotation in R2 by the angle 120◦ . Write down the
standard matrix of T and find T (2, 2).

Example 4.15.

Standard matrices for linear transformations T : R^2 → R^2 .

1. Reflection along the x-axis: A = [ 1  0 ]
                                    [ 0 −1 ] .

2. Reflection along the y-axis: A = [ −1 0 ]
                                    [  0 1 ] .

3. Reflection along the line y = x: A = [ 0 1 ]
                                        [ 1 0 ] .

4. Scaling of the x-axis: A = [ k 0 ]
                              [ 0 1 ] .

5. Scaling of the y-axis: A = [ 1 0 ]
                              [ 0 k ] .
