
A TASTE OF MATHEMATICS

AIME-T-ON LES MATHÉMATIQUES

Volume / Tome II

ALGEBRA
INTERMEDIATE METHODS
Revised Edition

Bruce Shawyer
Memorial University of Newfoundland

The ATOM series


The booklets in the series, A Taste of Mathematics, are published by the
Canadian Mathematical Society (CMS). They are designed as enrichment materials for high school students with an interest in and aptitude for mathematics.
Some booklets in the series will also cover the materials useful for mathematical
competitions at national and international levels.

La collection ATOM
Publiés par la Société mathématique du Canada (SMC), les livrets
de la collection Aime-t-on les mathématiques (ATOM) sont destinés au
perfectionnement des étudiants du cycle secondaire qui manifestent un intérêt
et des aptitudes pour les mathématiques. Certains livrets de la collection ATOM
servent également de matériel de préparation aux concours de mathématiques sur
l'échiquier national et international.

Editorial Board / Conseil de rédaction

Editor-in-Chief / Rédacteur-en-chef
Richard J. Nowakowski
Dalhousie University / Université Dalhousie

Associate Editors / Rédacteurs associés

Edward J. Barbeau
University of Toronto / Université de Toronto

Bruce Shawyer
Memorial University of Newfoundland / Université Memorial de Terre-Neuve

Managing Editor / Rédacteur-gérant

Graham P. Wright
University of Ottawa / Université d'Ottawa

This is printed in a different font from the first printing. As a result, there
may be slight differences in line breaks and page breaks.

Table of Contents

Foreword                                          iv

Introduction                                       1

1 Mathematical Induction                           2
  1.1 Induction                                    2
  1.2 Horses                                       3
  1.3 Strong Induction                             4

2 Series                                           5
  2.1 Telescoping Series                           5
  2.2 Sums of Powers of Natural Numbers            6
  2.3 Arithmetic Series                            7
  2.4 Geometric Series                             8
  2.5 Infinite Series                              9

3 Binomial Coefficients                           12
  3.1 Factorials                                  12
  3.2 Binomial Coefficients                       12
  3.3 Pascal's Triangle                           13
  3.4 Properties                                  14

4 Solution of Polynomial Equations                15
  4.1 Quadratic Equations                         15
  4.2 Cubic Equations                             18
  4.3 Quartic Equations                           20
  4.4 Higher Degree Equations                     21
  4.5 Symmetric Functions                         23
  4.6 Iterative Methods                           25

5 Vectors and Matrices                            26
  5.1 Vectors                                     26
  5.2 Properties of Vectors                       26
  5.3 Addition                                    26
  5.4 Multiplication by a Scalar                  26
  5.5 Scalar or Dot Multiplication                27
  5.6 Vector or Cross Product                     27
  5.7 Triple Scalar Product                       27
  5.8 Matrices                                    28
  5.9 Properties of Matrices                      28
  5.10 Multiplication by a Scalar                 28
  5.11 Multiplication of Vectors and Matrices     29
  5.12 Square Matrices                            30
  5.13 Determinants                               31
  5.14 Properties of determinants                 33
  5.15 Determinants and Inverses of Matrices      34
  5.16 Generalities                               34
  5.17 Cramer's Rule                              36
  5.18 Underdetermined systems                    37
  5.19 Conics                                     38
    5.19.1 Shifting the origin                    38
    5.19.2 Rotating the axes                      40
    5.19.3 Classification of conics               41

Foreword
This volume contains a selection of some of the basic algebra that is useful in solving
problems at the senior high school level. Many of the problems in the booklet admit
several approaches. Some worked examples are shown, but most are left to the ingenuity
of the reader.
While I have tried to make the text as correct as possible, some mathematical and
typographical errors might remain, for which I accept full responsibility. I would be
grateful to any reader drawing my attention to errors as well as to alternative solutions.
Also, I should like to express my sincere appreciation for the help given by Ed Barbeau
in the preparation of this material.
It is the hope of the Canadian Mathematical Society that this collection may find its way
to high school students who may have the talent, ambition and mathematical expertise
to represent Canada internationally. Those who wish more problems can find further
examples in:
1. The International Mathematical Talent Search (problems can be obtained from
the author, or from the magazine Mathematics & Informatics Quarterly, subscriptions for which can be obtained (in the USA) by writing to Professor
Susan Schwartz Wildstrom, 10300 Parkwood Drive, Kensington, MD USA 20895
(ssw@umd5.umd.edu), or (in Canada) to Professor Ed Barbeau, Department of
Mathematics, University of Toronto, Toronto, ON Canada M5S 3G3
(barbeau@math.utoronto.ca);
2. The Skoliad Corner in the journal Crux Mathematicorum with Mathematical Mayhem (subscriptions can be obtained from the Canadian Mathematical Society, 577
King Edward, PO Box 450, Station A, Ottawa, ON, Canada K1N 6N5);
3. The book The Canadian Mathematical Olympiad 1969-1993 / L'Olympiade
mathématique du Canada, which contains the problems and solutions of the first
twenty-five Olympiads held in Canada (published by the Canadian Mathematical
Society, 577 King Edward, PO Box 450, Station A, Ottawa, ON, Canada K1N
6N5);
4. The book Five Hundred Mathematical Challenges, by E.J. Barbeau, M.S.
Klamkin & W.O.J. Moser (published by the Mathematical Association of America,
1529 Eighteenth Street NW, Washington, DC 20036, USA).
Bruce L.R. Shawyer
Department of Mathematics and Statistics
Memorial University of Newfoundland
St. John's, NF
Canada A1C 5S7

Introduction
The purpose of this booklet is to gather together some of the miscellaneous mathematics that is useful in problem solving, but is often missing from high school mathematics textbooks.
The topics are grouped under five main headings:
1. Mathematical Induction.
2. Series.
3. Binomial Coefficients.
4. Solution of Polynomial Equations.
5. Vectors and Matrices.
A little explanation is necessary.
1. Mathematical Induction.
This method is very useful for proving results about whole numbers. It is,
in fact, basic to the number system that we use.
2. Series.
Although infinite series is really a university level topic, a gentle introduction
leads to some very useful results.
3. Binomial Coefficients.
Binomial coefficients are everywhere. Many of their properties can be obtained
using the results of the previous two sections. However, many of the properties
can also be obtained using counting arguments. See, for example, the forthcoming
ATOM booklet on Combinatorics.
4. Solution of Polynomial Equations.
Almost everyone knows how to solve a quadratic equation. But few people
know about the methods for solving cubic and quartic equations. Also included are properties of the roots of polynomials. This used to be a well-studied topic, but is now almost lost from the curriculum.
5. Vectors and Matrices.
The basic properties are followed by properties of determinants and properties of conic sections. Again, these are lost branches of mathematics.

1 Mathematical Induction

A very important method of proof is the Principle of Mathematical Induction.


This is used to prove results about the natural numbers, such as the fact that the
sum of the first $n$ natural numbers is $\frac{n(n+1)}{2}$.
We use functional notation: $P(n)$ means a result (proposition) for each
natural number $n$. For example, the statement that the sum of the first $n$ natural
numbers is equal to $\frac{n(n+1)}{2}$ is written:
$$P(n):\qquad \sum_{k=0}^{n} k = \frac{n(n+1)}{2}\,.$$

1.1 Induction

Proof by induction consists of three parts:


TEST: Find an appropriate starting number for the result (if it is not already
given in the statement). Usually 1 is appropriate (sometimes 0 is better). For the
purposes of this explanation, we shall take 1 as the starting number.
Test (check) the result for this number: is P (1) correct?
If YES, we proceed: if NO, the suggested result is false.
STEP: We assume that the result is indeed true for some particular (general)
natural number, say $k$. This is the inductive hypothesis. With this assumed result
$P(k)$, we deduce the result for the next natural number $k+1$. In other words, we
try to prove the implication
$$P(k) \implies P(k+1)\,.$$
Make sure that your logic includes the case $k = 1$.
If YES, we proceed: if NO, then we are in trouble with this method of proof.
PROOF: The formal proof is now:
$P(1)$ is true.
$P(k) \implies P(k+1)$ for $k \ge 1$.
Hence $P(1) \implies P(2)$,
and so $P(2) \implies P(3)$,
and so $P(3) \implies P(4)$,
and so on.
Therefore $P(n)$ is true for all natural numbers $n$.
The last part is usually omitted in practice.
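The TEST/STEP pattern can also be sanity-checked numerically. The sketch below (an illustration only; checking instances is not a proof) verifies the base case and the inductive step of the sum formula:

```python
# Numerically check instances of P(n): 1 + 2 + ... + n = n(n+1)/2.
# Only the induction argument proves the result for all n.

def lhs(n):
    """Sum of the first n natural numbers, computed directly."""
    return sum(range(1, n + 1))

def rhs(n):
    """Closed form n(n+1)/2."""
    return n * (n + 1) // 2

# TEST: base case P(1).
assert lhs(1) == rhs(1) == 1

# STEP (illustrated): if P(k) holds, then adding k+1 gives P(k+1).
for k in range(1, 200):
    assert rhs(k) + (k + 1) == rhs(k + 1)
```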
Prove the following using mathematical induction:
Problem 1 $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.

Problem 2 $1^2 + 2^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$.

Problem 3 $1^3 + 2^3 + \cdots + n^3 = \frac{n^2(n+1)^2}{4}$.

Problem 4 $\displaystyle\sum_{k=0}^{n} \binom{n}{k} \alpha^k \beta^{n-k} = (\alpha + \beta)^n$.

Problem 5 $\displaystyle\sum_{k=1}^{n} \frac{1}{\sqrt{k}} < 2\sqrt{n}$.

Problem 6 $\dfrac{n^5}{5} + \dfrac{n^4}{2} + \dfrac{n^3}{3} - \dfrac{n}{30}$ is an integer for $n = 0, 1, 2, \ldots$.

Problem 7 If $x \in [0, \pi]$, prove that $|\sin(nx)| \le n \sin(x)$ for $n = 0, 1, 2, \ldots$.

Problem 8 If $a_k \ge 1$ for all $k$, prove that
$$2^{n-1}\left(\prod_{k=1}^{n} a_k + 1\right) \ge \prod_{k=1}^{n} (1 + a_k)$$
for $n = 1, 2, \ldots$.

1.2

Horses

Just to check that you really understand the Principle of Mathematical Induction,
find the fallacy in the following proof.
Proposition. Let P (n) mean in any set of n horses, all the horses are the same
colour.
TEST. Consider a set consisting of exactly one horse. All the horses in it are the
same colour (since there is only one).
STEP. Assume that P (k) is true for some natural number k, that is, in any set
of k horses, all the horses are the same colour. This is the inductive hypothesis.
Now, consider a set consisting of k + 1 horses. We place each horse in a stable, in
a row, and number the stables, 1, 2, . . ., k, k + 1.
[Figure: a row of stables numbered 1, 2, ..., k−1, k, k+1.]

Consider the horses in stables numbered 1, 2, ..., k−1, k.
This is a set of k horses, and so, by the inductive hypothesis, must consist of
horses, all of the same colour.
Now, consider the horses in stables numbered 2, 3, ..., k, k+1.
This is also a set of k horses, and so, by the inductive hypothesis, must consist of
horses, all of the same colour.
By observing the overlap between the two sets, we see that all the horses in the
set of k+1 horses must be of the same colour. And so, we are done!
Clearly, this is nonsense! We know that all horses are not the same colour. The
reader is asked to examine this proof and find out where it goes wrong. The
explanation may be found on page 42.

1.3

Strong Induction

Strong induction (which is mathematically equivalent to induction) has an apparently stronger STEP condition:
STRONG STEP: We assume that the result is indeed true for all natural
numbers $j$ with $1 \le j \le k$. This is the strong inductive hypothesis. With these assumed
results $P(1), P(2), \ldots, P(k)$, we deduce the result for the next natural number
$k+1$. In other words, we try to prove the implication
$$\{P(1), P(2), \ldots, P(k)\} \implies P(k+1)\,.$$
Again, make sure that your logic includes the case $k = 1$.
Problem 9 Pick's theorem states that the area of a polygon, whose vertices have
integer coordinates (that is, are lattice points) and whose sides do not cross, is
given by
$$I + \frac{B}{2} - 1\,,$$
where $I$ and $B$ are the numbers of interior and boundary lattice points respectively.
Prove Pick's Theorem for a triangle directly.
Use strong induction to prove Pick's theorem in general.
Problem 10 Prove that every natural number may be written as the product of
primes.
Problem 11 Assume Bertrands theorem: for every x > 1, there is a prime
number strictly between x and 2x.
Prove that every positive integer can be written as a sum of distinct primes.
(You may take 1 to be a prime in this problem.)

Problem 12 Show that every positive integer can be written as a sum of distinct
Fibonacci numbers.

Problem 13 For the Fibonacci numbers, show that $F_{n+1}^2 + F_n^2 = F_{2n+1}$.
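The claim of Problem 12 can be explored with a greedy strategy: repeatedly take the largest Fibonacci number not exceeding what remains. This sketch illustrates the claim; it is not the strong-induction proof asked for:

```python
def fibs_up_to(n):
    """Distinct Fibonacci values 1, 2, 3, 5, ... not exceeding n."""
    fs = [1, 2]
    while fs[-1] + fs[-2] <= n:
        fs.append(fs[-1] + fs[-2])
    return [f for f in fs if f <= n]

def distinct_fib_sum(n):
    """Greedily write n as a sum of distinct Fibonacci numbers."""
    parts = []
    for f in reversed(fibs_up_to(n)):
        if f <= n:
            parts.append(f)
            n -= f
    assert n == 0          # the greedy choice always succeeds
    return parts

for n in range(1, 2000):
    parts = distinct_fib_sum(n)
    assert sum(parts) == n and len(parts) == len(set(parts))
```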

2 Series

We start with a given sequence $\{a_j\}$, and we use sigma notation for addition:
$$\sum_{j=k+1}^{n} a_j := a_{k+1} + a_{k+2} + \cdots + a_n\,.$$
We have series when we add up sequences. They may start with term 0 or with
term 1 as is convenient. We define the partial sum of a series by
$$A_k := \sum_{j=0}^{k} a_j\,,$$
so that
$$a_{k+1} + a_{k+2} + \cdots + a_n = \sum_{j=k+1}^{n} a_j = A_n - A_k\,.$$

We note that series may be added term by term, that is
$$\sum_{j=k+1}^{n} (a_j + b_j) = \sum_{j=k+1}^{n} a_j + \sum_{j=k+1}^{n} b_j\,.$$

2.1 Telescoping Series

A series is telescoping if the sequence of terms satisfies
$$a_j = f_j - f_{j-1}\,,$$
and so we get
$$\sum_{j=1}^{n} a_j = \sum_{j=1}^{n} (f_j - f_{j-1}) = f_n - f_0\,.$$
For example
$$\frac{1}{j(j+1)} = \frac{1}{j} - \frac{1}{j+1}\,, \quad\text{so that } f_j = -\frac{1}{j+1}\,, \text{ and}$$
$$\sum_{j=1}^{n} \frac{1}{j(j+1)} = \sum_{j=1}^{n} \left(\frac{1}{j} - \frac{1}{j+1}\right) = 1 - \frac{1}{n+1}\,.$$
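The collapse of the partial fractions can be checked exactly in Python (an illustrative check with exact rational arithmetic, not part of the original text):

```python
from fractions import Fraction

def telescoping_sum(n):
    """Sum of 1/(j(j+1)) for j = 1..n, computed exactly."""
    return sum(Fraction(1, j * (j + 1)) for j in range(1, n + 1))

# The telescoped closed form 1 - 1/(n+1) agrees term for term.
for n in (1, 2, 5, 50):
    assert telescoping_sum(n) == 1 - Fraction(1, n + 1)
```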

Fibonacci numbers are given by
$$F_0 = 0\,, \quad F_1 = 1\,, \quad F_n = F_{n-1} + F_{n-2} \ \text{ for } n \ge 2\,.$$

Problem 14 Evaluate $\displaystyle\sum_{j=1}^{n} \frac{1}{j(j+1)(j+2)}$.

Problem 15 Evaluate $\displaystyle\sum_{j=1}^{n} \frac{1}{j(j+1)(j+2)(j+3)}$.
Generalize this!

Problem 16 Evaluate $\displaystyle\sum_{j=1}^{n} \frac{j}{j^4 + j^2 + 1}$.

2.2 Sums of Powers of Natural Numbers

We are interested in evaluating $\sum_{j=1}^{n} j^{\ell}$, where $\ell$ is a natural number.
If $\ell = 0$, then we have $\sum_{j=1}^{n} 1$. It is easy to see that the value of this is $n$.
The next case is $\ell = 1$. To evaluate $\sum_{j=1}^{n} j$, we first note that
$j^2 - (j-1)^2 = 2j - 1$. Therefore
$$\begin{aligned}
n^2 - (n-1)^2 &= 2n - 1\\
(n-1)^2 - (n-2)^2 &= 2(n-1) - 1\\
&\;\;\vdots\\
2^2 - 1^2 &= 2(2) - 1\\
1^2 - 0^2 &= 2(1) - 1\,.
\end{aligned}$$
The sum of the left sides is $n^2 - 0^2 = n^2$ and the sum of the right sides is
$2\sum_{j=1}^{n} j - n$ (did you get the last term?). This gives
$$\sum_{j=1}^{n} j = \frac{n(n+1)}{2}\,.$$
This technique can be used to generalize this result to larger values of $\ell$.
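The same telescoping trick one degree higher, with $j^3 - (j-1)^3 = 3j^2 - 3j + 1$, yields the sum of squares. A short sketch of the bookkeeping (anticipating Problem 17 below):

```python
from fractions import Fraction

def sum_squares_by_telescoping(n):
    """Derive sum of j^2 from j^3 - (j-1)^3 = 3j^2 - 3j + 1.
    Summing over j = 1..n telescopes the left side to n^3, so
        n^3 = 3*S2 - 3*S1 + n,  where S1 = n(n+1)/2.
    Solve for S2."""
    s1 = Fraction(n * (n + 1), 2)
    return (n**3 + 3 * s1 - n) / 3

for n in range(1, 100):
    assert sum_squares_by_telescoping(n) == sum(j * j for j in range(1, n + 1))
```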


Problem 17 Show that $\displaystyle\sum_{j=1}^{n} j^2 = \frac{n(n+1)(2n+1)}{6}$.

Problem 18 Show that $\displaystyle\sum_{j=1}^{n} j^3 = \left(\frac{n(n+1)}{2}\right)^{2}$.

Problem 19 Calculate a formula for $\displaystyle\sum_{j=1}^{n} j^4$.

Problem 20 Calculate a formula for $\displaystyle\sum_{j=1}^{n} j^5$.

2.3 Arithmetic Series

A sequence $\{a_j\}$ is called arithmetic if $a_{j+1} - a_j = \delta$ (a fixed quantity, called the
common difference) for all values of $j \ge 0$. Thus, we have
$$a_j = a_{j-1} + \delta = a_{j-2} + \delta + \delta = \cdots = a_0 + j\delta\,.$$
Thus
$$\sum_{j=k+1}^{n} a_j = \sum_{j=k+1}^{n} (a_0 + j\delta)
= (n-k)a_0 + \delta\left(\sum_{j=0}^{n} j - \sum_{j=0}^{k} j\right)
= (n-k)a_0 + \delta\left(\frac{n(n+1)}{2} - \frac{k(k+1)}{2}\right)\,. \tag{1}$$
Another way to look at arithmetic series is to write the sum down in both directions!
$$\text{Sum} = a_0 + a_1 + \cdots + a_{n-1} + a_n\,,$$
$$\text{Sum} = a_n + a_{n-1} + \cdots + a_1 + a_0\,.$$
Adding gives
$$2\,\text{Sum} = (a_0 + a_n) + (a_1 + a_{n-1}) + \cdots + (a_{n-1} + a_1) + (a_n + a_0)\,.$$
Note that
$$a_0 + a_n = a_1 + a_{n-1} = \cdots = a_k + a_{n-k} = \cdots = a_n + a_0\,.$$
Therefore
$$2\,\text{Sum} = (n+1)(a_0 + a_n) = (n+1)(\text{First} + \text{Last})\,,$$
so that
$$\text{Sum} = \frac{(n+1)(a_0 + a_n)}{2} = \frac{(n+1)(\text{First} + \text{Last})}{2}\,.$$
Put into words, we see that the sum of an arithmetic series is the average of the
first and last terms, multiplied by the number of terms.
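The "average of first and last, times the number of terms" rule translates directly into code (a quick check with an illustrative sequence):

```python
def arithmetic_sum(a0, d, n):
    """Sum of a_0, a_1, ..., a_n where a_j = a0 + j*d  (n+1 terms)."""
    first, last = a0, a0 + n * d
    # (n+1)(first+last) is always even for integer inputs, so // is exact.
    return (n + 1) * (first + last) // 2

assert arithmetic_sum(3, 4, 10) == sum(3 + 4 * j for j in range(11))
```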

2.4 Geometric Series

A sequence $\{a_j\}$ is called geometric if $a_{j+1} = a_j r$ ($r$ a fixed quantity, called the
common ratio) for all values of $j \ge 0$. Thus, we have
$$a_j = a_{j-1}\, r = a_{j-2}\, r \cdot r = \cdots = a_0\, r^j\,.$$
For notational ease, we write $a$ for $a_0$.
We start with the easiest case: $r = 1$. Here, each term is the same as the one
before. So it is easy to add them up. In this case
$$\sum_{j=k+1}^{n} a_j = \sum_{j=k+1}^{n} a\, r^j = \sum_{j=k+1}^{n} a = a(n-k)\,.$$
In words, we have $a$ times the number of terms.
For the remainder of this section, we shall assume that $r \ne 1$. Thus
$$\sum_{j=k+1}^{n} a_j = \sum_{j=k+1}^{n} a\, r^j
= a\left(r^{k+1} + r^{k+2} + r^{k+3} + \cdots + r^{n-1} + r^n\right)\,.$$

Also
$$r \sum_{j=k+1}^{n} a\, r^j = \sum_{j=k+1}^{n} a\, r^{j+1}
= a\left(r^{k+2} + r^{k+3} + \cdots + r^{n-1} + r^n + r^{n+1}\right)\,.$$
Subtracting these gives
$$(r-1) \sum_{j=k+1}^{n} a\, r^j = a\left(r^{n+1} - r^{k+1}\right)\,,$$
so that (remember that $r \ne 1$)
$$\sum_{j=k+1}^{n} a\, r^j = a\,\frac{r^{n+1} - r^{k+1}}{r-1}\,.$$

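The closed form for the geometric sum can be verified directly (exact rational arithmetic avoids rounding; the examples are illustrative):

```python
from fractions import Fraction

def geometric_sum(a, r, k, n):
    """Sum of a*r^j for j = k+1..n, via a*(r^(n+1) - r^(k+1))/(r - 1), r != 1."""
    r = Fraction(r)
    return a * (r**(n + 1) - r**(k + 1)) / (r - 1)

assert geometric_sum(5, 3, 2, 7) == sum(5 * 3**j for j in range(3, 8))
assert geometric_sum(1, Fraction(1, 2), -1, 10) == 2 - Fraction(1, 2)**10
```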
In the ATOM volume on trigonometry², you will see how to use geometric
series to find
$$\sum_{j=k+1}^{n} \cos(j\theta) \quad\text{and}\quad \sum_{j=k+1}^{n} \sin(j\theta)\,.$$

2.5 Infinite Series

In order to do infinite series properly, you need a good grounding in the theory of
limits. This is university material (usually second year). However, a few results
are useful to know.
1. Infinite series either converge or diverge.
2. Those which diverge are in four categories:
(a) those which diverge to $+\infty$³; for example $\sum_{k=1}^{\infty} 2^k$;
(b) those which diverge to $-\infty$; for example $\sum_{k=1}^{\infty} -2^k$;
(c) those whose sequence of partial sums oscillates unboundedly; for example
$\sum_{k=1}^{\infty} (-2)^k$; or $\sum_{k=1}^{\infty} (-1)^k\, 2^{\lfloor k/2 \rfloor}$, where $\lfloor x \rfloor$ means the greatest integer
less than or equal to $x$;
(d) those whose sequence of partial sums oscillates boundedly; for example
$\sum_{k=1}^{\infty} (-1)^k$.

3. Convergent series have a bounded sequence of partial sums.
4. A bounded sequence of partial sums does not guarantee convergence.
5. A series of positive terms either converges or diverges to $+\infty$.
6. For a series of positive terms, a bounded sequence of partial sums does
guarantee convergence.
7. If $\sum a_k$ converges, then $\lim_{k\to\infty} a_k = 0$.
8. If $\lim_{k\to\infty} a_k = 0$, then $\sum a_k$ does not necessarily converge; for example
$\sum \frac{1}{k}$.
9. An infinite arithmetic series cannot converge unless both $a_0 = 0$ and $\delta = 0$;
in other words, we are adding up zeros! See (1).
² A forthcoming volume in this series.
³ We emphasize that $\infty$ is not a number, but a symbol used to describe the behaviour of
limits. Since the sum of an infinite series is in fact the limit of the sequence of partial sums, the
symbol is also used to describe the sum of certain divergent infinite series.

Some examples of positive series are:
$$\sum_{k=1}^{\infty} r^k \ \text{ converges for } |r| < 1\,; \qquad
\sum_{k=1}^{\infty} r^k \ \text{ diverges for } |r| \ge 1\,;$$
$$\sum_{k=1}^{\infty} \frac{1}{k} \ \text{ diverges (see below)}\,; \tag{2}$$
$$\sum_{k=1}^{\infty} \frac{1}{k^2} \ \text{ converges (see below)}\,; \tag{3}$$
$$\sum_{k=1}^{\infty} \frac{1}{k^p} \ \text{ converges for } p > 1\,; \qquad
\sum_{k=1}^{\infty} \frac{1}{k^p} \ \text{ diverges for } p \le 1\,;$$
$$\sum_{k=1}^{\infty} \frac{1}{k \log(k)} \ \text{ diverges}\,; \qquad
\sum_{k=1}^{\infty} \frac{1}{k \log(k) \log(\log(k))} \ \text{ diverges}\,.$$

An infinite geometric series may converge or diverge. For it to converge, we
require that $\lim_{n\to\infty} r^n$ exists. This is true if $-1 < r \le 1$.
If $r = 1$, the partial sum to the $n^{\text{th}}$ term is $an$, which does not tend to a
finite limit. Thus $\sum_{k=1}^{\infty} a$ diverges.
If $|r| < 1$, then $\lim_{n\to\infty} r^n = 0$, and this gives the sum to infinity
$$\sum_{k=0}^{\infty} r^k = \frac{1}{1-r}\,.$$
For (3), we note that, for each integer $n$, there is an integer $m$ such that $2^m \le n < 2^{m+1}$. Thus

$$\begin{aligned}
\sum_{k=1}^{n} \frac{1}{k^2} &= 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{n^2}\\
&< 1 + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{1}{16} + \frac{1}{16} + \frac{1}{16} + \frac{1}{16}\right)
+ \cdots + \left(\frac{1}{2^{2m}} + \cdots + \frac{1}{2^{2m}}\right)\\
&= 1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^m}
= 2\left(1 - \frac{1}{2^{m+1}}\right) < 2\,.
\end{aligned}$$
Thus, the sequence of partial sums is bounded above. Since series of positive terms
either converge or diverge to plus infinity, it follows that (3) converges. We do not
know the value of the sum, only that it is less than or equal to 2. For those
interested in numerical values, the actual sum is approximately 1.644934.
Problem 21 Investigate the exact value of $\displaystyle\sum_{k=1}^{\infty} \frac{1}{k^2}$.
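Computing partial sums is a natural first step for Problem 21 (the exact value, famously found by Euler, is $\pi^2/6$; the code below only checks consistency with the bound derived above):

```python
import math

def partial_sum(n):
    """Partial sum of 1/k^2 for k = 1..n."""
    return sum(1.0 / (k * k) for k in range(1, n + 1))

# Every partial sum stays below the bound 2 obtained above,
# and the sums approach pi^2/6 ~ 1.644934.
for n in (10, 100, 10000):
    assert partial_sum(n) < 2
assert abs(partial_sum(10**6) - math.pi**2 / 6) < 1e-5
```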

For (2), using the same $m$ as above, we have
$$\begin{aligned}
\sum_{k=1}^{2^m} \frac{1}{k} &= 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right)
+ \cdots + \left(\frac{1}{2^{m-1}+1} + \cdots + \frac{1}{2^m}\right)\\
&> 1 + \frac{1}{2} + \left(\frac{1}{4} + \frac{1}{4}\right)
+ \cdots + \left(\frac{1}{2^m} + \frac{1}{2^m} + \cdots + \frac{1}{2^m}\right)\\
&= 1 + \frac{1}{2} + \frac{1}{2} + \cdots + \frac{1}{2} = 1 + \frac{m}{2}\,.
\end{aligned}$$
Thus, the sequence of partial sums increases without bound. Since series of positive
terms either converge or diverge to plus infinity, it follows that (2) diverges.

We also note that the partial sums of $\sum_{k=1}^{\infty} \frac{1}{k}$ grow slowly. If you try to sum this
series on a computer, you may reach the conclusion that it converges, because the
rate of growth will be too slow for the accuracy of the computer to detect. The
partial sums grow like $\log(n)$. In fact
$$\sum_{k=1}^{n} \frac{1}{k} = \gamma + \log(n) + \varepsilon_n\,,$$
where $\gamma$ is Euler's constant, and $\varepsilon_n$ is an error term that tends to zero as $n$ tends
to infinity. For those interested in numerical values, $\gamma \approx 0.577215665$.
Problem 22 Investigate Euler's constant.
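One way to start Problem 22 is to watch the difference between the partial sums and $\log(n)$ settle down (a small numeric sketch):

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# H_n - log(n) tends to Euler's constant, approximately 0.577215665.
est = harmonic(10**6) - math.log(10**6)
assert abs(est - 0.577215665) < 1e-5
```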


With general series, the situation is a bit different. For example
$$\sum_{k=1}^{\infty} (-1)^k \frac{1}{k^p}$$
is a convergent series if $p > 0$. What we need here is the alternating series test,
which states that:
If $a_n$ decreases monotonically to $0$, then $\sum_{k=1}^{\infty} (-1)^k a_k$ is convergent.
Care is required about what is meant by the sum of such a series. For
example $\sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k} = \log(2)$. But this is only so if the terms are added up in
the order given and not re-arranged.
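The stated value can be seen numerically by summing in the given order (a check of the limit, not of the rearrangement subtleties):

```python
import math

def alt_harmonic(n):
    """Partial sum of (-1)^(k+1)/k for k = 1..n, in the given order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# Partial sums straddle log(2) and close in on it.
assert abs(alt_harmonic(10**6) - math.log(2)) < 1e-5
```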

3 Binomial Coefficients

3.1 Factorials

For every natural number $k$, we define $k$ factorial⁴, written as $k!$, as the running
product of all the natural numbers from 1 up to $k$:
$$k! := 1 \cdot 2 \cdot 3 \cdots (k-1) \cdot k\,.$$
This can also be defined inductively from
$$1! := 1\,, \qquad k! := k \cdot (k-1)!\,.$$
We also define $0! = 1$ for reasons of consistency that will become clear.
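The inductive definition translates directly into code (with the $0! = 1$ convention as the base case):

```python
def factorial(k):
    """k! defined inductively: 0! = 1 and k! = k * (k-1)!."""
    return 1 if k == 0 else k * factorial(k - 1)

assert [factorial(k) for k in range(6)] == [1, 1, 2, 6, 24, 120]
```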

3.2 Binomial Coefficients

The Binomial Coefficient $\binom{n}{k}$, usually read as "$n$ choose $k$", is defined⁵, initially
for positive integers $n$, $k$ with $n \ge k$, from:
$$\binom{n}{k} := \frac{n!}{k!\,(n-k)!}\,.$$
This is equivalent to
$$\binom{n}{k} = \frac{n(n-1)(n-2) \cdots (n-k+1)}{k!}\,. \tag{4}$$

⁴ Also read as "factorial $k$".
⁵ Older notation includes $_{n}C_{k}$.

We see that
$$\binom{n}{n} = \binom{n}{0} = 1 \quad\text{and that}\quad \binom{n}{k} > 0\,.$$
We also note these coefficients in the Binomial Theorem:
If $n$ is a positive integer, then
$$(1 + x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k\,.$$
This can be proved by induction on $n$. The important property required is
given in Problem 25.
Because the expression (4) makes sense when $n$ is not a positive integer, we
can use it to extend the definitions of binomial coefficients as follows:
If $k$ is an integer greater than (the integer) $n$, then $\binom{n}{k} = 0$. For example,
$$\binom{3}{5} = \frac{3 \cdot 2 \cdot 1 \cdot 0 \cdot (-1)}{5!} = 0\,.$$
Let $n$ be any real number. Note that $\binom{n}{k}$ is non-zero when $n$ is not an
integer and $k$ is greater than $n$. For example,
$$\binom{-1/2}{2} = \frac{(-1/2)(-3/2)}{1 \cdot 2} = \frac{3}{8}\,.$$

3.3 Pascal's Triangle

One way to generate the binomial coefficients is to use Pascal's Triangle.


1
1   1
1   2   1
1   3   3   1
1   4   6   4   1
1   5   10  10  5   1
1   6   15  20  15  6   1
1   7   21  35  35  21  7   1
.   .   .   .   .   .   .   .   .

We presume that you know how to generate this!


It is better to write this as an (infinite) matrix, putting in a 0 in every space
to the right of the last 1. This matrix is considered to begin with row and column
numbered 0.

14

n
n
n
n
n
n
n
n

=0
=1
=2
=3
=4
=5
=6
=7
.

k=0
1
1
1
1
1
1
1
1
.

k=1
0
1
2
3
4
5
6
7
.

k=2
.
0
1
3
6
10
15
21
.

k=3
.
.
0
1
4
10
20
35
.

k=4
.
.
.
0
1
5
15
35
.

k=5
.
.
.
.
0
1
6
21
.

k=6
.
.
.
.
.
0
1
7
.

k=7
.
.
.
.
.
.
0
1
.

k=8
.
.
.
.
.
.
.
0
.

 
n
The binomial coefficient
occurs at position (n, k) of this matrix.
k
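The rows of Pascal's triangle can be generated with the familiar rule that each interior entry is the sum of the two entries above it; a short sketch:

```python
def pascal_row(n):
    """Row n of Pascal's triangle: [C(n,0), C(n,1), ..., C(n,n)]."""
    row = [1]
    for _ in range(n):
        # Each new interior entry is the sum of the two entries above it.
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

assert pascal_row(7) == [1, 7, 21, 35, 35, 21, 7, 1]
```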

3.4 Properties

The following formulae should be proved:


Problem 23 $k \dbinom{n}{k} = n \dbinom{n-1}{k-1}$.

Problem 24 $\dbinom{n}{k} = \dbinom{n}{n-k}$.

Problem 25 $\dbinom{n+1}{k+1} = \dbinom{n}{k} + \dbinom{n}{k+1}$.

Problem 26 $\dbinom{n}{k+1} = \dfrac{n-k}{k+1} \dbinom{n}{k}$.

Problem 27 $\dbinom{n+1}{k} = \dbinom{n}{k} + \dbinom{n}{k-1}$.

Problem 28 $\dbinom{n+1}{k+1} = \dbinom{k}{k} + \dbinom{k+1}{k} + \cdots + \dbinom{n}{k}$.

Problem 29 $\dbinom{k+n+1}{n} = \dbinom{k}{0} + \dbinom{k+1}{1} + \dbinom{k+2}{2} + \cdots + \dbinom{k+n}{n}$.

Problem 30 $\dbinom{n}{1} = \dbinom{n}{n-1} = n$.

Problem 31 $2^n = \dbinom{n}{0} + \dbinom{n}{1} + \dbinom{n}{2} + \cdots + \dbinom{n}{n}$.

Problem 32 $0 = \dbinom{n}{0} - \dbinom{n}{1} + \dbinom{n}{2} - \cdots + (-1)^n \dbinom{n}{n}$.

Problem 33 $2^{n-1} = \dbinom{n}{0} + \dbinom{n}{2} + \dbinom{n}{4} + \cdots$.

Problem 34 $2^{n-1} = \dbinom{n}{1} + \dbinom{n}{3} + \dbinom{n}{5} + \cdots$.

Problem 35 $\dbinom{2n}{n} = \dbinom{n}{0}^2 + \dbinom{n}{1}^2 + \dbinom{n}{2}^2 + \cdots + \dbinom{n}{n}^2$.

Problem 36 $n\, 2^{n-1} = \dbinom{n}{1} + 2\dbinom{n}{2} + 3\dbinom{n}{3} + \cdots + n\dbinom{n}{n}$.

Problem 37 $0 = \dbinom{n}{1} - 2\dbinom{n}{2} + 3\dbinom{n}{3} - \cdots + (-1)^{n-1}\, n \dbinom{n}{n}$.

Problem 38 $\dbinom{m+n}{k} = \displaystyle\sum_{j=0}^{k} \dbinom{n}{j} \dbinom{m}{k-j}$.

There are several methods for proving these identities. For example, one
could use mathematical induction (as developed earlier in this booklet), or the
binomial theorem. They can also be obtained combinatorially. See, for example,
the ATOM booklet on Combinatorics⁶.
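Besides induction and counting arguments, the identities can at least be spot-checked numerically. A sketch verifying a few of them (Pascal's rule, the row sum, the squared-row sum, and Vandermonde's identity with an illustrative m = 5) for small n:

```python
from math import comb  # Python 3.8+; comb(n, k) is 0 when k > n

for n in range(0, 12):
    # Row sum is 2^n.
    assert sum(comb(n, k) for k in range(n + 1)) == 2**n
    # Sum of squared row entries is C(2n, n).
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)
    for k in range(n + 1):
        # Pascal's rule: C(n+1, k+1) = C(n, k) + C(n, k+1).
        assert comb(n + 1, k + 1) == comb(n, k) + comb(n, k + 1)
        # Vandermonde: C(m+n, k) = sum_j C(n, j) C(m, k-j), here m = 5.
        assert comb(5 + n, k) == sum(comb(n, j) * comb(5, k - j)
                                     for j in range(k + 1))
```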

4 Solution of Polynomial Equations

4.1 Quadratic Equations

We begin with the quadratic equation $0 = a x^2 + b x + c$, where $x$, $a$, $b$ and $c$ are
real numbers.

⁶ A planned ATOM booklet.

There are only certain conditions under which this equation has a solution: first,
if $a = b = c = 0$, then any $x$ satisfies the equation. If both $a = b = 0$ and $c \ne 0$,
then the equation makes no sense. So we shall assume that at least one of $a$ and
$b$ is a non-zero real number.
If $a = 0$, then $b \ne 0$ and so we can solve $0 = b x + c$ to get $x = -\frac{c}{b}$. This is easy.
So we shall assume henceforth that $a \ne 0$.
Since $a \ne 0$, we may divide throughout by $a$ to get:
$$0 = x^2 + \frac{b}{a} x + \frac{c}{a}\,. \tag{5}$$

The standard technique for solution is to complete the square. Since
$(x+u)^2 = x^2 + 2xu + u^2$, we get
$$0 = x^2 + \frac{b}{a}x + \frac{c}{a}
= \left(x + \frac{b}{2a}\right)^{2} + \frac{c}{a} - \left(\frac{b}{2a}\right)^{2}
= \left(x + \frac{b}{2a}\right)^{2} - \frac{b^2 - 4ac}{4a^2}\,, \tag{6}$$
and this leads to the standard solution:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\,.$$

The expression $b^2 - 4ac$, called the Discriminant and usually denoted by $\Delta$⁷, must
be non-negative if the quadratic equation is to have a real number solution. If
$\Delta = 0$, then the quadratic equation has only one solution: we have a perfect
square. If $\Delta > 0$, then we have two distinct solutions.
If we allow $x$ to be a complex number, in particular, when $\Delta < 0$, then the
solutions occur in conjugate pairs, that is, of the form $p \pm i q$, where $p$, $q$ are real
numbers.
If we also allow the coefficients $a$, $b$, $c$ to be complex numbers, and
allow complex solutions, then we always have two (distinct) solutions unless
$\Delta = 0$, which we can interpret as two identical solutions.
The word "root" of an equation is another word for the solution of an equation.
This is frequently used in circumstances like those in the next paragraph.
Suppose that $\alpha$ and $\beta$ are the roots of (5). Then we have that
$$(x - \alpha)(x - \beta) = x^2 + \frac{b}{a} x + \frac{c}{a}\,.$$
By multiplying out the left side of this identity, we obtain that
$$\alpha + \beta = -\frac{b}{a} \quad\text{and}\quad \alpha\beta = \frac{c}{a}\,.$$

⁷ This is the upper case Greek letter, delta.

In words, this tells us that in the quadratic equation (5), the constant term is
the product of the roots, and that the coefficient of $x$ is minus the sum of the
roots. For more on this, see the section entitled "symmetric functions", later in
this booklet.
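A small solver following the standard formula, together with a check of the root-sum and root-product relations just described (cmath handles the Δ < 0 case; the example coefficients are illustrative):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 (a != 0) via the standard formula."""
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a))

r1, r2 = solve_quadratic(1, 5, 7)          # discriminant < 0: conjugate pair
assert abs((r1 + r2) - (-5)) < 1e-12       # sum of roots = -b/a
assert abs((r1 * r2) - 7) < 1e-12          # product of roots = c/a
```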
Problem 39 If $\alpha$ and $\beta$ are the roots of $x^2 + 5x + 7 = 0$, find a quadratic equation
with roots $\frac{1}{\alpha}$ and $\frac{1}{\beta}$.
Solution 39 We have $\alpha + \beta = -5$ and $\alpha\beta = 7$. The equation we want is
$$\begin{aligned}
0 &= \left(x - \frac{1}{\alpha}\right)\left(x - \frac{1}{\beta}\right)\\
&= x^2 - \left(\frac{1}{\alpha} + \frac{1}{\beta}\right)x + \frac{1}{\alpha\beta}\\
&= x^2 - \frac{\alpha + \beta}{\alpha\beta}\,x + \frac{1}{\alpha\beta}\\
&= x^2 + \frac{5}{7}\,x + \frac{1}{7}\,,
\end{aligned}$$
or $7x^2 + 5x + 1 = 0$.
Problem 40 If $\alpha$ and $\beta$ are the roots of $x^2 + 5x + 7 = 0$, find a quadratic equation
with roots $\frac{1}{\alpha^2}$ and $\frac{1}{\beta^2}$.
Solution 40 As before, we have $\alpha + \beta = -5$ and $\alpha\beta = 7$. The equation we want
is
$$\begin{aligned}
0 &= \left(x - \frac{1}{\alpha^2}\right)\left(x - \frac{1}{\beta^2}\right)\\
&= x^2 - \left(\frac{1}{\alpha^2} + \frac{1}{\beta^2}\right)x + \frac{1}{(\alpha\beta)^2}\\
&= x^2 - \frac{\alpha^2 + \beta^2}{(\alpha\beta)^2}\,x + \frac{1}{(\alpha\beta)^2}\\
&= x^2 - \frac{(\alpha + \beta)^2 - 2\alpha\beta}{(\alpha\beta)^2}\,x + \frac{1}{(\alpha\beta)^2}\\
&= x^2 - \frac{(-5)^2 - 2 \cdot 7}{7^2}\,x + \frac{1}{7^2}
= x^2 - \frac{11}{49}\,x + \frac{1}{49}\,,
\end{aligned}$$
or $49x^2 - 11x + 1 = 0$.

4.2

Cubic Equations

We start with a consideration of the roots of 0 = A x3 + B x2 + C x + D with


A 6= 0. First, as in the previous section, we write this as
0 = x3 +

B 2 C
D
x + x+ .
A
A
A

(7)

Suppose that the roots are , and . Then we have that


(x )(x )(x ) = x3 ( + + )x2 + ( + + )x .
Comparing this with the equation above, we obtain that
++ =

B
C
, + + =
A
A

and =

D
.
A

In words, this tells us that in the cubic equation (7), the constant term is minus
the product of the roots, that the coefficient of x is the sum of the products
of the roots taken in pairs, and that the coefficient of x is minus the product
of the roots. For more on this, see the section entitled symmetric functions,
later in this booklet.
Problem 41 If α, β and γ are the roots of x³ − 2x² + 3x − 4 = 0, find an equation
with roots α², β² and γ².

Solution 41 We have α + β + γ = 2, αβ + βγ + γα = 3 and αβγ = 4. The
equation we seek is

0 = (x − α²)(x − β²)(x − γ²)
  = x³ − (α² + β² + γ²)x² + (α²β² + β²γ² + γ²α²)x − α²β²γ² .

Here, we will make use of

(α + β + γ)² = α² + β² + γ² + 2(αβ + βγ + γα) ,
(αβ + βγ + γα)² = α²β² + β²γ² + γ²α² + 2(αβ²γ + α²βγ + αβγ²)
                = α²β² + β²γ² + γ²α² + 2αβγ(α + β + γ)

to give us

α² + β² + γ² = (α + β + γ)² − 2(αβ + βγ + γα) = 2² − 2(3) = −2 ,
α²β² + β²γ² + γ²α² = (αβ + βγ + γα)² − 2αβγ(α + β + γ) = 3² − 2(4)(2) = −7 ,
α²β²γ² = (αβγ)² = 4² = 16 .

Hence, the required equation is

0 = x³ + 2x² − 7x − 16 .


We now look for a general solution.

The general form 0 = Ax³ + Bx² + Cx + D with A ≠ 0 is usually reduced
to the standard form

0 = x³ + ax² + bx + c .

The substitution x = y − a/3 leads to the reduced form

0 = y³ + py + q .

This can be solved using Cardano's Formula.

Here is the technique. Let y = u + v. (This may seem to be complicating
matters by having two variables instead of one, but it actually gives more room to
manoeuvre.) The reduced equation becomes

0 = u³ + 3uv(u + v) + v³ + py + q = u³ + v³ + (3uv + p)y + q .

We have one equation in two unknowns, so there is not a unique solution. We thus
have the freedom to impose the condition 3uv + p = 0. Now, we have a system
of two equations in two unknowns:

u³ + v³ = −q ,
u³v³ = −p³/27 .

Thinking now about the roots of quadratic equations, we see that u³ and v³ are
the roots of the quadratic equation

0 = x² + qx − p³/27 .

This leads to the solutions

u = (−q/2 + √∆)^{1/3} ,    (8)
v = (−q/2 − √∆)^{1/3} ,    (9)

where

∆ = (q/2)² + (p/3)³ ,    (10)

and the cube roots are chosen so that uv = −p/3. Let

y₁ = u + v ,    (11)
y₂ = −(u + v)/2 + i√3 · (u − v)/2 ,    (12)
y₃ = −(u + v)/2 − i√3 · (u − v)/2 .    (13)


Now, we can check that y₁, y₂ and y₃ are the three solutions of the reduced
equation.

There are three cases.

If ∆ > 0, then we get one real solution and two conjugate complex solutions.

If ∆ = 0, then we get three real solutions (including a double root).

If ∆ < 0, then we get three real solutions, which can be found from trigonometry
(see the ATOM volume on trigonometry, a forthcoming volume in this series, if
necessary):

y₁ = 2√(|p|/3) cos(φ/3) ;    (14)
y₂ = 2√(|p|/3) cos(φ/3 − 2π/3) ;    (15)
y₃ = 2√(|p|/3) cos(φ/3 + 2π/3) ;    (16)

where φ = cos⁻¹( (−q/2) / √((|p|/3)³) ).
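The recipe above can be turned into a short program. The sketch below follows equations (8)–(13) directly, using complex arithmetic throughout so that all three cases are handled at once; the function name is my own.

```python
import cmath

def cardano(p, q):
    """Roots of the reduced cubic y^3 + p*y + q = 0, via equations (8)-(13)."""
    delta = (q / 2) ** 2 + (p / 3) ** 3                    # the quantity (10)
    s = cmath.sqrt(delta)
    u = (-q / 2 + s) ** (1 / 3)                            # a cube root, as in (8)
    if u == 0:                                             # degenerate branch: use (9) first
        u = (-q / 2 - s) ** (1 / 3)
    v = -p / (3 * u) if u != 0 else 0j                     # chosen so that u*v = -p/3
    y1 = u + v                                             # (11)
    y2 = -(u + v) / 2 + 1j * cmath.sqrt(3) * (u - v) / 2   # (12)
    y3 = -(u + v) / 2 - 1j * cmath.sqrt(3) * (u - v) / 2   # (13)
    return y1, y2, y3

# y^3 - 7y + 6 = (y - 1)(y - 2)(y + 3): here delta < 0, and all three
# roots come out (numerically) real.
roots = cardano(-7, 6)
```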

4.3 Quartic Equations

Again, we could start with a consideration of the roots, but we will leave that until
the next section.

The general form 0 = Ax⁴ + Bx³ + Cx² + Dx + E with A ≠ 0 is usually
reduced to the standard form

0 = x⁴ + ax³ + bx² + cx + d .

The substitution x = y − a/4 leads to the reduced form

0 = y⁴ + py² + qy + r ,

but, for all u, this is equivalent to

0 = y⁴ + y²u + u²/4 + py² + qy + r − u²/4 − y²u
  = (y² + u/2)² − ((u − p)y² − qy + (u²/4 − r)) .    (17)


The first term is a perfect square, say P² with P = y² + u/2. The second term is
also a perfect square Q² for those values of u such that

q² = 4(u − p)(u²/4 − r) .    (18)

(This is obtained by setting the discriminant of the quadratic in y equal to zero.)

Consider (18) as a cubic in u; that is,

H(u) = u³ − pu² − 4ru + (4pr − q²) = 0 .

We note that there is a root of H that is greater than p. This follows since
H(p) = −q² ≤ 0, since H(u) → ∞ as u → ∞, and since H is continuous.

So, this cubic in u can be solved as in the previous section, and has at least one
root, say u₁, satisfying u₁ ≥ p. Substituting back into (17) gives the form

(y² + u₁/2 + Q)(y² + u₁/2 − Q) = 0 ,    (19)

where

Q = αy − β ,  with  α = √(u₁ − p)  and  β = q/(2α) .

Thus, the desired roots of the monic⁹ quartic are the roots of the two quadratic
factors of (19). These quadratics have real coefficients if the original quartic has
real coefficients. Thus a quartic with real coefficients has an even number of real
roots (counted with multiplicity)!
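The same steps can be carried out numerically. In the sketch below (my own code, not from the booklet), the resolvent root u₁ ≥ p is found by simple bisection rather than by Cardano's formula, and the biquadratic case q = 0 is handled separately, since there α = 0:

```python
import cmath

def _quad_roots(b, c):
    """Roots of y^2 + b*y + c = 0."""
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

def reduced_quartic_roots(p, q, r):
    """Roots of y^4 + p*y^2 + q*y + r = 0 via the resolvent cubic (18)."""
    if q == 0:                       # biquadratic: a quadratic in y^2
        s1, s2 = _quad_roots(p, r)
        t1, t2 = cmath.sqrt(s1), cmath.sqrt(s2)
        return t1, -t1, t2, -t2
    H = lambda u: u**3 - p*u**2 - 4*r*u + (4*p*r - q*q)
    lo, hi = p, p + 1.0              # H(p) = -q^2 < 0 and H(u) -> +infinity
    while H(hi) < 0:
        hi = p + 2 * (hi - p)
    for _ in range(200):             # bisection for a root u1 >= p
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(mid) < 0 else (lo, mid)
    u1 = hi
    alpha = (u1 - p) ** 0.5          # alpha = sqrt(u1 - p)
    beta = q / (2 * alpha)           # beta = q / (2 alpha)
    # the two quadratic factors of (19):
    return _quad_roots(alpha, u1 / 2 - beta) + _quad_roots(-alpha, u1 / 2 + beta)

# y^4 - 7y^2 + 6y = y(y - 1)(y - 2)(y + 3) is already in reduced form.
roots = reduced_quartic_roots(-7.0, 6.0, 0.0)
```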

Historical Note
The solution of the quadratic was known to both ancient Hindu and Greek
mathematicians. The solutions of the cubic and quartic are due to the Italian
mathematicians Scipione del Ferro (1515) and Ludovico Ferrari (1545).

4.4 Higher Degree Equations

There are no general algorithms for solving fifth and higher degree polynomial
equations using the methods described above. This was shown in the nineteenth
century by Abel and Galois. Despite this, there is the Fundamental Theorem
of Algebra, which states that every polynomial equation of degree n ≥ 1 has a
(complex) solution, and hence, counting solutions according to their multiplicity,
n solutions. The proof of this apparently simple theorem requires some techniques
of advanced mathematics, and is usually not given until second or third year
university courses.

However, there are several methods for approximating the roots of polynomials.
First, we note the following:

If the monic polynomial equation

p(x) = xⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + ··· + a₁x + a₀ = 0

⁹ A polynomial is called monic if the coefficient of the highest power of the variable is 1.


has a root between the values x₁ and x₂, that is, if p(x₁) and p(x₂) have opposite
signs, then x₃ is an approximate value to a root of p(x) = 0, where

x₃ = x₁ − (x₂ − x₁)p(x₁) / (p(x₂) − p(x₁)) .

[Diagram: the interpolation points x₀, x₁, x₂ and x₃ marked on the x-axis.]

We have no guarantee that x₃ is a better approximation than either x₁ or x₂. This
method is known as linear interpolation, and sometimes as Regula falsi (the rule of
false position).

However, all is not lost. If p(x₃) = 0, then we are done. Otherwise, p(x₃)
will have opposite sign to one of p(x₁) or p(x₂), so that we can proceed again.
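The step above is easy to program; here is a minimal sketch (names mine) that keeps a sign change at every step, exactly as described:

```python
def regula_falsi(p, x1, x2, steps=60):
    """Repeatedly apply the linear-interpolation step, keeping opposite signs."""
    assert p(x1) * p(x2) < 0, "p(x1) and p(x2) must have opposite signs"
    for _ in range(steps):
        x3 = x1 - (x2 - x1) * p(x1) / (p(x2) - p(x1))
        if p(x3) == 0:
            return x3
        if p(x1) * p(x3) < 0:      # a root lies between x1 and x3
            x2 = x3
        else:                       # a root lies between x3 and x2
            x1 = x3
    return x3

# p(x) = x^3 - 2 changes sign on [1, 2], so this approximates 2^(1/3).
root = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0)
```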
Another procedure, known as Newton's Method, requires a knowledge of
Calculus, and applies (as does Regula falsi) to more than polynomials.

If x₁ is an approximation to a root of the polynomial equation

p(x) = xⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + ··· + a₁x + a₀ = 0 ,

then a better approximation may be obtained (it usually works, but not always)
from

x₂ = x₁ − p(x₁)/p′(x₁) ,

provided that p′(x₁) ≠ 0.

As with Regula falsi, this may be repeated for better approximations.

Newton's method usually works if you start close to a suspected root. For
polynomials, and when there is no other root close by, the procedure works well,
giving rapid convergence to the root. However, considerable care is necessary in
general. Real difficulties arise when roots are close together.

Newton's method is especially useful for estimating n-th roots of numbers
when no calculator is available (as in competitions like the IMO).


Let p(x) = xⁿ − A, where A is a positive real.

Then p′(x) = nxⁿ⁻¹, so that the Newton approximation equation becomes

xₖ₊₁ = xₖ − (xₖⁿ − A) / (nxₖⁿ⁻¹) .

For example, if n = 3 and A = 10, this is

xₖ₊₁ = xₖ − (xₖ³ − 10) / (3xₖ²) .

We now choose (almost arbitrarily) a first approximation, say x₁ = 2. Thus

x₂ = 2 − ((2)³ − 10)/(3(2)²) = 2 + 2/12 = 2 + 1/6 = 13/6 .

We now repeat the process:

x₃ = 13/6 − ((13/6)³ − 10)/(3(13/6)²) = 3277/1521 ≈ 2.1545 .

Note that 10^{1/3} ≈ 2.1544.

Also note that, if we start with a rational number, this process will always yield
a rational approximation to a root of a polynomial equation with rational coefficients.
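Since each Newton step uses only field operations, the whole computation can be checked in exact rational arithmetic (verification code of my own, not from the booklet):

```python
from fractions import Fraction

def newton_nth_root(A, n, x0, steps):
    """Iterate x <- x - (x^n - A) / (n x^(n-1)), starting from a rational x0."""
    x = Fraction(x0)
    for _ in range(steps):
        x = x - (x**n - A) / (n * x**(n - 1))
    return x

x2 = newton_nth_root(10, 3, 2, 1)   # one step from x1 = 2
x3 = newton_nth_root(10, 3, 2, 2)   # two steps from x1 = 2
# x2 and x3 are the exact rationals 13/6 and 3277/1521 computed above.
```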

4.5 Symmetric Functions

We note that the monic polynomial can be written as

p(x) = xⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + ··· + a₁x + a₀
     = (x − x₁)(x − x₂)(x − x₃) ··· (x − xₙ) ,

where x₁, x₂, x₃, . . . , xₙ are the roots of p(x) = 0.

The elementary symmetric functions Sₖ are defined from Vieta's Theorem:

S₁ = x₁ + x₂ + ··· + xₙ = −aₙ₋₁ ;    (20)
S₂ = x₁x₂ + x₁x₃ + ··· + x₂x₃ + x₂x₄ + ··· + xₙ₋₁xₙ = +aₙ₋₂ ;    (21)
S₃ = x₁x₂x₃ + x₁x₂x₄ + ··· + xₙ₋₂xₙ₋₁xₙ = −aₙ₋₃ ;    (22)
  ⋮
Sₙ = x₁x₂x₃ ··· xₙ = (−1)ⁿ a₀ .    (23)


In the problems that follow, you will be making use of the known result that
any symmetric polynomial in the roots can be expressed as a polynomial in the
elementary symmetric functions.

Problem 42 If x₁, x₂, x₃ are the roots of x³ + 9x² + 24x + 18 = 0, prove that

∑_{k=1}^{3} xₖ² = 33    and    ∑_{k=1}^{3} xₖ³ = −135 .
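For a cubic, the power sums need not be computed from the roots themselves: Newton's identities p₁ = S₁, p₂ = S₁p₁ − 2S₂ and p₃ = S₁p₂ − S₂p₁ + 3S₃ give them directly from the elementary symmetric functions. A small check of the values stated in Problem 42 (code of my own, not from the booklet):

```python
def power_sums_cubic(S1, S2, S3):
    """Power sums p_m = x1^m + x2^m + x3^m of a cubic's roots,
    from the elementary symmetric functions, via Newton's identities."""
    p1 = S1
    p2 = S1 * p1 - 2 * S2
    p3 = S1 * p2 - S2 * p1 + 3 * S3
    return p1, p2, p3

# For x^3 + 9x^2 + 24x + 18 = 0: S1 = -9, S2 = 24, S3 = -18 by (20)-(23).
p1, p2, p3 = power_sums_cubic(-9, 24, -18)
# p2 = 33 and p3 = -135, as claimed in Problem 42.
```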

Problem 43 If x₁, x₂, x₃ are the roots of x³ − ax² + bx − c = 0, prove that

(x₁ − x₂)² + (x₂ − x₃)² + (x₃ − x₁)² = 2a² − 6b .

Problem 44 If x₁, x₂, x₃ are the roots of x³ + ax² + bx + c = 0, find the values
of

(i) x₁²x₂² + x₂²x₃² + x₃²x₁² ,

(ii) x₁(x₂² − x₃²) + x₂(x₃² − x₁²) + x₃(x₁² − x₂²) .

Problem 45 If x₁, x₂, x₃ are the roots of x³ + px + q = 0, find equations whose
roots are

(i) x₁², x₂², x₃² ,

(ii) 1/x₁, 1/x₂, 1/x₃ ,

(iii) 1/x₁ + 1/x₂, 1/x₂ + 1/x₃, 1/x₃ + 1/x₁ ,

(iv) x₁/(x₂x₃), x₂/(x₃x₁), x₃/(x₁x₂) .

Problem 46 If x₁, x₂, x₃, x₄ are the roots of x⁴ + px³ + qx² + rx + s = 0, find
the values of

(i) ∑ x₁²x₂x₃ , where the sum is over all terms of the given form;

(ii) ∑ x₁³x₂ , where the sum is over all terms of the given form.
Problem 47 If x₁, x₂, x₃, the roots of px³ + qx² + rx + s = 0, are in geometric
progression, prove that pr³ = q³s.

Problem 48 If x₁, x₂, x₃, the roots of x³ + px² + qx + r = 0, are in arithmetic
progression, prove that 2p³ = 9(pq − 3r).
Problem 49 If x₁, x₂, x₃ are the roots of x³ + 2x² − 36x − 72 = 0 and
1/x₁ + 1/x₂ = 2/x₃, find the values of x₁, x₂ and x₃.


4.6 Iterative Methods

We have already mentioned Newton's method. We conclude this section with an
iterative method that was advertised in the 1970s when hand calculators became
readily available.

Suppose that P(x) = xⁿ − aₙ₋₁xⁿ⁻¹ − ··· − a₁x − a₀ with a₀ ≠ 0 and n > 1.
This means that x = 0 is not a solution. To find a solution, we must solve

0 = xⁿ − aₙ₋₁xⁿ⁻¹ − ··· − a₁x − a₀ .

Write this as xⁿ = aₙ₋₁xⁿ⁻¹ + ··· + a₁x + a₀ and divide both sides of the
equation by xⁿ⁻¹.

So we have

x = aₙ₋₁ + aₙ₋₂/x + ··· + a₁/xⁿ⁻² + a₀/xⁿ⁻¹ .

On the left side, replace x by xₖ₊₁, and on the right side, replace x by xₖ.
This leads to the iteration

xₖ₊₁ = aₙ₋₁ + aₙ₋₂/xₖ + ··· + a₁/xₖⁿ⁻² + a₀/xₖⁿ⁻¹ .    (24)

If we start with a guess, x₀, we generate a sequence {xₖ}.

If we take limits as k → ∞ on both sides, provided (24) converges, we will
have generated a solution to P(x) = 0.

This is all very appealing, but, unfortunately, in general, we cannot conclude that
the sequence converges. It can lead to "strange attractors" or "chaos". It is
suggested that the reader try the following example.
Problem 50 Let P(x) = (x − 1)(x − 2)(x − 3) = x³ − 6x² + 11x − 6.

Construct the iteration (24), and investigate the behaviour of the sequence
{xₖ} with the following choices of x₀:

1. x₀ = 0.1.    2. x₀ = 0.5.    3. x₀ = 0.9.
4. x₀ = 1.1.    5. x₀ = 1.5.    6. x₀ = 1.9.
7. x₀ = 2.1.    8. x₀ = 2.5.    9. x₀ = 2.9.
10. x₀ = 3.1.   11. x₀ = 3.5.   12. x₀ = 3.9.
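For Problem 50, the iteration (24) reads xₖ₊₁ = 6 − 11/xₖ + 6/xₖ². A few lines of code (mine, not from the booklet) make the experiment easy to run; starting at x₀ = 3.1, for instance, the sequence settles on the root 3:

```python
def iterate(x0, steps=100):
    """Run the iteration (24) for P(x) = x^3 - 6x^2 + 11x - 6."""
    x = x0
    for _ in range(steps):
        x = 6 - 11 / x + 6 / x**2
    return x

# Near x = 3 the iteration map has derivative 11/9 - 12/27 (size < 1),
# so 3 attracts; at x = 2 the derivative is 11/4 - 12/8 = 1.25, so 2 repels.
x = iterate(3.1)
```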

It will also be instructive to try this example too.

Problem 51 Let P(x) = (x − α)(x − β) = x² − 2ax − b with a and b real.

1. Suppose that α and β are real and distinct with |α| > |β| > 0.

Construct the iteration (24), and investigate the behaviour of the sequence
{xₖ} with the following choices of x₀:

1. x₀ = β/2.    2. x₀ = (α + β)/3.    3. x₀ = 2α.

2. Suppose that α = β is real and non-zero.

Construct the iteration (24), and investigate the behaviour of the sequence
{xₖ} with the following choices of x₀:

1. x₀ = α/2.    2. x₀ = 2α.

3. Suppose that α and β are complex.

Construct the iteration (24), and investigate the behaviour of the sequence
{xₖ} with the following choices of x₀:

1. x₀ = 1.    2. x₀ = −2.    3. x₀ = 2.

5 Vectors and Matrices

5.1 Vectors

There is a commonly held notion, especially in the physical sciences, that a vector is
a thing that points! This is because vectors are often denoted by arrows. However,
this is only a loose way of describing the important properties of a vector.

A vector is best defined as an ordered set of numbers (coordinates). For
example, in 2-space (the Euclidean plane), we write

x = (x₁, x₂) ,

and in n-space,

x = (x₁, x₂, . . . , xₙ) .

5.2 Properties of Vectors

5.3 Addition

The sum of two vectors x, y is defined by

x + y := (x₁ + y₁, x₂ + y₂, . . . , xₙ + yₙ) .

Note that it is necessary for both vectors to have the same number of coordinates.

5.4 Multiplication by a Scalar

The product of a real number (scalar), λ, with a vector, x, is defined by

λx := (λx₁, λx₂, . . . , λxₙ) .


5.5 Scalar or Dot Multiplication

The scalar product of two vectors x, y is defined by

x · y := ∑_{k=1}^{n} xₖyₖ .

Note that it is necessary for both vectors to have the same number of coordinates.

This leads to the length or norm of a vector being defined by

|x| := √(x · x) .

Geometrically, the scalar product is then

x · y = |x| |y| cos(θ) ,

where θ is the angle between the vectors x and y.

5.6 Vector or Cross Product

This is restricted to 3-dimensional space. The vector product of two vectors x, y
is defined by

x × y := (x₂y₃ − x₃y₂, x₃y₁ − x₁y₃, x₁y₂ − x₂y₁) .

Geometrically, the vector product is then a vector, orthogonal to both x and y,
with norm |x| |y| sin(θ), directed according to the right hand screw convention,
where θ is the angle between the vectors x and y. (The right hand screw
convention is illustrated by the following diagram.)
[Diagram: the vector x × y drawn perpendicular to the plane of x and y, with a
curved arrow from x towards y indicating the right hand screw convention.]

Problem 52 Prove that x × (y × z) = (x · z)y − (x · y)z .

Problem 53 Prove that x × (y × z) + y × (z × x) + z × (x × y) = 0 .
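The component formulas above are direct to implement, and give a quick numerical check of Problem 52's identity on a sample triple (helper names mine; a check on one triple is of course not a proof):

```python
def dot(x, y):
    """Scalar product of two 3-vectors."""
    return sum(a * b for a, b in zip(x, y))

def cross(x, y):
    """Vector product, by the component formula above."""
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 10)
lhs = cross(x, cross(y, z))
rhs = tuple(dot(x, z) * yi - dot(x, y) * zi for yi, zi in zip(y, z))
# With integer coordinates, the identity of Problem 52 is checked exactly.
```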

5.7 Triple Scalar Product

This is restricted to 3-dimensional space. The scalar product of x with y × z is
called the triple scalar product; it is defined by [x, y, z] := x · (y × z), and gives
the (signed) volume of the parallelepiped with sides x, y, z at a vertex.


Problem 54 Show that x · (y × x) = 0 .

Problem 55 Show that x · (y × z) = y · (z × x) .

Problem 56 Show that (x × y) · (z × w) = [x, y, z × w] = [y, z × w, x] .

Problem 57 Show that [x, y, z] w = [w, y, z] x + [x, w, z] y + [x, y, w] z .

This means that any vector w may be expressed as a linear combination of
any three given non-coplanar vectors, x, y and z.

5.8 Matrices

A matrix is a rectangular array of elements. We shall refer to the rows and columns
of this matrix. Suppose that the matrix has n rows and k columns. We then write

A = (aᵢ,ⱼ)_{1≤i≤n, 1≤j≤k} ,

where aᵢ,ⱼ is the element in the i-th row and the j-th column. Written in full, this
is

    [ a₁,₁  a₁,₂  a₁,₃  ···  a₁,ₖ ]
A = [ a₂,₁  a₂,₂  a₂,₃  ···  a₂,ₖ ]
    [ a₃,₁  a₃,₂  a₃,₃  ···  a₃,ₖ ]
    [  ⋮     ⋮     ⋮    ⋱     ⋮  ]
    [ aₙ,₁  aₙ,₂  aₙ,₃  ···  aₙ,ₖ ] .

Matrices give useful ways of describing other mathematical quantities.

Note that a matrix with one row is a vector. This is often called a row vector to
distinguish it from a matrix with one column, which is known as a column vector.
The matrix obtained by interchanging the rows and columns of a matrix, A, is
known as the transpose of the given matrix, and is written as Aᵀ.

5.9 Properties of Matrices

5.10 Multiplication by a Scalar

The product of a scalar, λ, with a matrix, A, is given by

λA := (λaᵢ,ⱼ)_{1≤i≤n, 1≤j≤k} .


5.11 Multiplication of Vectors and Matrices

The scalar product of two vectors can be written as a product of a row vector with
a column vector thus:

x · y = (x₁, x₂, . . . , xₙ) [ y₁ ]
                            [ y₂ ]   =  x₁y₁ + x₂y₂ + ··· + xₙyₙ .
                            [ ⋮  ]
                            [ yₙ ]

This gives the basis for multiplying a row vector by a matrix. Consider the
matrix A as consisting of a set of column vectors (of course, the number of
coordinates in the row vector must be the same as the number of rows in the
matrix). For a matrix with two columns y and z, this gives, for example,

x A = (x₁, x₂, . . . , xₙ) [ y₁ z₁ ]
                           [ y₂ z₂ ]   =  ( ∑_{i=1}^{n} xᵢyᵢ , ∑_{i=1}^{n} xᵢzᵢ ) ,
                           [ ⋮   ⋮ ]
                           [ yₙ zₙ ]

and, in general,

x A = ( ∑_{i=1}^{n} xᵢaᵢ,₁ , ∑_{i=1}^{n} xᵢaᵢ,₂ , . . . , ∑_{i=1}^{n} xᵢaᵢ,ₖ ) ,

or, more compactly,

x A = ( ∑_{i=1}^{n} xᵢaᵢ,ⱼ )_{1≤j≤k} .

Similarly, we can multiply a matrix by a column vector (of course, the number
of coordinates in the column vector must be the same as the number of columns
in the matrix). This gives

A x = ( ∑_{j=1}^{k} aᵢ,ⱼxⱼ )_{1≤i≤n} .


We are now ready to multiply two matrices! The first condition is that the
number of columns in the left matrix must equal the number of rows in the right
matrix. So, let

A = (aᵢ,ⱼ)_{1≤i≤n, 1≤j≤m}    and    B = (bⱼ,ₖ)_{1≤j≤m, 1≤k≤p} ,

and we get that

A B := ( ∑_{j=1}^{m} aᵢ,ⱼ bⱼ,ₖ )_{1≤i≤n, 1≤k≤p} .

Note that even if we can calculate A B, it is not necessary that B A makes
sense. And if it does, with A being n × k and so with B being k × n, we have that
A B is n × n and B A is k × k. The only possibility for equality is if n = k, that
is, for square matrices. But even then we shall find that equality is not necessary.
For example

[ 1 1 ] [ 1 0 ]   [ 2 1 ]
[ 0 1 ] [ 1 1 ] = [ 1 1 ] ,

whereas

[ 1 0 ] [ 1 1 ]   [ 1 1 ]
[ 1 1 ] [ 0 1 ] = [ 1 2 ] .

If a pair of matrices satisfy A B = B A, then we say that they commute.
The pair of matrices given above do not commute.
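The definition translates directly into code; the sketch below (function name mine) reproduces the non-commuting pair above:

```python
def matmul(A, B):
    """Matrix product from the definition (A B)_{i,k} = sum_j a_{i,j} b_{j,k}."""
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
# matmul(A, B) gives [[2, 1], [1, 1]] while matmul(B, A) gives [[1, 1], [1, 2]],
# so this pair does not commute.
```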

5.12 Square Matrices

For square matrices, it is easy to show that

(A B) C = A (B C) .

A special square matrix is the one with 1 in each position where i = j (the
main diagonal), and 0 in every other position. For example

     [ 1 0 0 ]
I := [ 0 1 0 ] .
     [ 0 0 1 ]

This is known as the Identity matrix, for it is easy to show that, for any square
matrix,

A I = I A = A .

We now ask if, given a square matrix A, there exists a square matrix, say B,
such that A B = I. If such a matrix exists, we call it a right inverse of A. (Is it
also a left inverse?) If a matrix is both a left inverse and a right inverse, we call it
the inverse of A, and denote it by A⁻¹. Note that this does not mean 1/A, because
that expression has no meaning!


For example, if

A = [ 1 1 ]      then      A⁻¹ = [ 1 −1 ] .
    [ 0 1 ]                      [ 0  1 ]

The condition for a square matrix to have an inverse is well known. It is


that a quantity known as the determinant of the matrix should have a non-zero
value.

5.13 Determinants

We shall start by considering a set of two equations in two unknowns:

a x + b y = c ,    (25)
p x + q y = r .    (26)

You can think of this algebraically, or, if you know some Cartesian geometry,
geometrically, as two lines.

A unique solution of these equations will exist under certain conditions;
geometrically, this is if the lines are not parallel. If the lines are parallel, then
there will be either no solution (when the lines are distinct) or an infinite
number of solutions (when the lines coincide). In the cases with no solution, the
slopes of the lines must be the same: that is (unless the lines are parallel to the
y-axis, that is, b = q = 0),

a/b = p/q    or    aq − bp = 0 .

(Note that the second form is also true in the case when the lines are parallel to
the y-axis.)

So, if we assume that aq − bp ≠ 0, we can find the unique solution of (25)
and (26). This is

x = (cq − br)/(aq − bp) ,    (27)
y = (ar − cp)/(aq − bp) .    (28)

If we write (25) and (26) in matrix form, we have

[ a b ] [ x ]   [ c ]
[ p q ] [ y ] = [ r ] .

This leads to the idea of writing aq − bp in the form

| a b |
| p q | .


This is called the determinant of

[ a b ]
[ p q ] .

Thus we can write (27) and (28) in the form

    | c b |           | a c |
    | r q |           | p r |
x = ------- ,     y = ------- .    (29), (30)
    | a b |           | a b |
    | p q |           | p q |

So, we see that the system of two simultaneous linear equations, (25) and
(26), has a unique solution if and only if

| a b |
| p q | ≠ 0 .

This is a very useful criterion.

We can now consider three equations in three unknowns:

a x + b y + c z = p ,    (31)
d x + e y + f z = q ,    (32)
g x + h y + i z = r .    (33)

Three dimensional geometric considerations lead us to a similar result.

The algebraic condition for a unique solution is

aei + bfg + cdh − ahf − dbi − gec ≠ 0 .

It is not easy to see how to relate this to the matrix equation

[ a b c ] [ x ]   [ p ]
[ d e f ] [ y ] = [ q ] .    (34)
[ g h i ] [ z ]   [ r ]

We shall define the determinant of

[ a b c ]
[ d e f ]
[ g h i ]

to be

aei + bfg + cdh − ahf − dbi − gec .

In both cases, with the appropriate interpretation, we have a matrix equation
A x = p, and we write the determinant of the square matrix A as det(A) or |A|.


5.14 Properties of Determinants

Let A be a square matrix (for the moment, either 2 × 2 or 3 × 3).

1. If B is a matrix obtained from A by interchanging two rows, then
det(B) = −det(A).

2. If B is a matrix obtained from A by interchanging two columns, then
det(B) = −det(A).

These can readily be checked by direct computation.

3. If two rows of a matrix (or two columns of a matrix) are the same, then the
determinant has value zero.

Suppose the matrix is A and the matrix obtained by interchanging the two
identical rows (or columns) is B. From the previous result, det(A) = −det(B).
But A and B are in fact identical. Thus det(A) = −det(A). This means that
det(A) must be zero.
4. The addition of a constant multiple of a row (resp. column) of a determinant
to another row (resp. column) does not change the value of the determinant.

We show this for a 2 × 2 determinant. Let

det(A) = | a b |        and        det(B) = | a + λp  b + λq |
         | p q |                            |   p        q   | .

Then,

det(B) = (a + λp)q − (b + λq)p = aq − bp = det(A) .

This enables us to devise methods for evaluating determinants efficiently.

By using appropriate additions of constant multiples of rows to other rows
(resp. columns to other columns), we can put zeros into specific entries. For
example, with λ = −b/q (assuming q ≠ 0),

| a + λp  b + λq |   | a − (bp)/q  0 |
|   p        q   | = |     p       q |

                   = ((aq − bp)/q) · q − 0 = aq − bp, of course!

Provided that det(A) is non-zero, it is possible to perform these operations
and arrive at a determinant with non-zero terms only on the main diagonal:
for example, we can proceed in the above example and get

| (aq − bp)/q  0 |
|      0       q | .


5. The determinant of the transposed matrix is the same as the determinant of
the matrix; that is, |Aᵀ| = |A|.

6. The determinant of the product of two matrices is equal to the product of
the determinants of the matrices; that is, |A B| = |A| |B|.

5.15 Determinants and Inverses of Matrices

In the section on square matrices, we mentioned that the condition for a square
matrix to have an inverse was that its determinant was non-zero. We will show
this here for 2 × 2 matrices. We recall that the solution of

[ a b ] [ x ]   [ c ]
[ p q ] [ y ] = [ r ]

is given by (29) and (30). These two equations can be written in matrix form as
follows:

[ x ]       1     [  q −b ] [ c ]
[ y ] = ——————— [ −p  a ] [ r ] .
        aq − bp

In other words, if

A = [ a b ] ,      then the matrix      1     [  q −b ]
    [ p q ]                          —————— [ −p  a ]
                                     det(A)

is the inverse of the matrix A, and a necessary (and sufficient) condition for this
is that the determinant

|A| = | a b |
      | p q |

is non-zero.

Writing this in matrix form, we have that A x = p has a solution x = B p,
provided that det(A) is non-zero. Here, B is the inverse of A. Note that
det(B) = 1/det(A).
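The 2 × 2 inverse formula can be wrapped up as a function (name mine, not from the booklet); it fails exactly when the determinant vanishes:

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix [[a, b], [p, q]] via the formula above."""
    (a, b), (p, q) = A
    det = a * q - b * p
    if det == 0:
        raise ValueError("matrix is not invertible: determinant is zero")
    return [[q / det, -b / det], [-p / det, a / det]]

A = [[1, 1], [0, 1]]
# inverse_2x2(A) reproduces the inverse [[1, -1], [0, 1]] shown earlier.
```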

5.16 Generalities

To bring all this together, we have defined the determinants for 2 × 2 and 3 × 3
matrices as follows: if

A = [ a₁,₁ a₁,₂ ]
    [ a₂,₁ a₂,₂ ] ,

then

det(A) = |A| := a₁,₁a₂,₂ − a₁,₂a₂,₁ .

This is also written as

|A| = | a₁,₁ a₁,₂ |
      | a₂,₁ a₂,₂ | .

For 3 × 3 matrices, the rule is in terms of 2 × 2 determinants, expanding
along the top row:


|A| = | a₁,₁ a₁,₂ a₁,₃ |
      | a₂,₁ a₂,₂ a₂,₃ |
      | a₃,₁ a₃,₂ a₃,₃ |

    = a₁,₁ | a₂,₂ a₂,₃ |  −  a₁,₂ | a₂,₁ a₂,₃ |  +  a₁,₃ | a₂,₁ a₂,₂ |
           | a₃,₂ a₃,₃ |         | a₃,₁ a₃,₃ |         | a₃,₁ a₃,₂ | .

To the element aᵢ,ⱼ in the matrix A, the minor Aᵢ,ⱼ is the matrix obtained
from A by deleting the row with index i and the column with index j; in other
words, delete the row and column containing aᵢ,ⱼ. We also associate with the
position (i, j) the number εᵢ,ⱼ := +1 if (i + j) is even, and εᵢ,ⱼ := −1 if (i + j) is
odd. This gives an array of signs:

+ − + − ···
− + − + ···
+ − + − ···
⋮ ⋮ ⋮ ⋮ ⋱
and now we get

|A| = | a₁,₁ a₁,₂ a₁,₃ |
      | a₂,₁ a₂,₂ a₂,₃ |
      | a₃,₁ a₃,₂ a₃,₃ |

    = ε₁,₁ a₁,₁ |A₁,₁| + ε₁,₂ a₁,₂ |A₁,₂| + ε₁,₃ a₁,₃ |A₁,₃|

    = ∑_{j=1}^{3} ε₁,ⱼ a₁,ⱼ |A₁,ⱼ| .

Note that taking the sum of the elements of one row multiplied by the minors
corresponding to elements of a different row (or similarly for columns) will result
in a value of 0 (why?).

In general, we get

|A| = ∑_{j=1}^{k} ε₁,ⱼ a₁,ⱼ |A₁,ⱼ| .

However, we are not required to expand along any particular row (or column), for,
expanding along row i, we have

|A| = ∑_{j=1}^{k} εᵢ,ⱼ aᵢ,ⱼ |Aᵢ,ⱼ| .
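The expansion along the first row works for any size, and is easy to write recursively (my own helper, not from the booklet), although it is a very inefficient way to compute a large determinant:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        sign = 1 if j % 2 == 0 else -1                    # the +/- sign array
        total += sign * A[0][j] * det(minor)
    return total

# det([[1, 2], [3, 4]]) gives 1*4 - 2*3 = -2.
```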


Since we know that addition of multiples of rows (or columns) to a row (or
column) does not change the value of a determinant, we note that it may be a
good strategy to do such additions first before expanding a determinant.

Finally, for 3 × 3 matrices, there is a diagonal process that parallels the 2 × 2
case. But it must be emphasised that this does not extend to higher orders.
We consider diagonals that go down and to the right to be positive, and those
that go up and to the right to be negative.


|A| = | a b |
      | c d | = ad − bc ;

the down-right diagonal gives the positive product ad and the up-right diagonal
gives the negative product bc.

|A| = | a b c |
      | d e f |
      | g h i | = (aei + bfg + cdh) − (gec + hfa + idb) ;

here, repeating the first two columns to the right of the array,

a b c a b
d e f d e
g h i g h

the three down-right diagonals give the positive products and the three up-right
diagonals give the negative products.

We also see that the triple scalar product x · (y × z) can be evaluated as a
determinant:

| x₁ x₂ x₃ |
| y₁ y₂ y₃ |
| z₁ z₂ z₃ | .

5.17 Cramer's Rule

Returning to the solution of the matrix equation:

[ a b c ] [ x ]   [ p ]
[ d e f ] [ y ] = [ q ] .    (34)
[ g h i ] [ z ]   [ r ]
