
Chapter One

Polynomials

1-1 Introduction

Definition 1.1: A monomial in the variable x is an expression of the form c x^k, where c is a constant and k is a nonnegative integer. The constant c can be an integer, a rational, a real or a complex number. The following are examples of monomials:

3, 3x, −2xy, 51x^3 z, x^5, 14x−2 (Leung, Mok, & Suen, 1992)

Definition 1.2: A polynomial in x is a sum of finitely many monomials in x. In other words, it is an expression of the form

P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0   (*)

The constants a_0, ..., a_n in (*) are the coefficients of the polynomial P. The degree of the polynomial P is the greatest degree of the monomials it contains and is denoted by deg(P); if a_n ≠ 0 in (*), then deg(P) = n. In particular, polynomials of degree one, two and three are called linear, quadratic and cubic respectively. A nonzero constant polynomial has degree 0, while the zero polynomial is P(x) = 0. (Leung, Mok, & Suen, 1992)
Example 1.1: P(x) = x^3(x + 1) + (1 − x^2)^2 = 2x^4 + x^3 − 2x^2 + 1 is a polynomial with integer coefficients of degree 4.

Q(x) = 0x^2 − √2 x + 3 is a linear polynomial with real coefficients.

R(x) = |x|, S(x) = 1/x and T(x) = √(2x + 1) are not polynomials.

P(x) = x^3 + 2x + 1 has degree 3.

1-2 Domain of Polynomial

Sometimes it is very important to emphasize the fact that the coefficients a_i of a polynomial P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 are real numbers. To do so, we say that P(x) is a polynomial in x with real coefficients, P(x) is a polynomial in x with coefficients in R, or P(x) is a polynomial (in x) over R. The set of all polynomials in x over R is denoted by R[x]. Here the letter R indicates that the coefficients are taken from the system R of real numbers and the letter x indicates the indeterminate under consideration. We shall call R[x] the domain of polynomials in x over R, or the domain of polynomials with real coefficients. Monomials and polynomials with coefficients taken from other number systems are similarly defined. Consider the domains Z[x], Q[x], C[x], i.e. polynomials in x whose coefficients are respectively integers, rational numbers and complex numbers. We can now say that a polynomial is a function of the form

f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0

where a_n, a_{n-1}, ..., a_0 are values in the domain of the polynomial and x is the variable. For each c we call f(c) the value of the polynomial function f(x) at x = c.

1-3 Synthetic Substitution

Given a polynomial function f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 in the variable x, the value f(c) at x = c can be calculated in different ways. For example, we can first calculate successively the powers c^n, c^{n-1}, ..., c, 1 of c and then multiply each of them by the corresponding coefficient a_i to get

a_n c^n, a_{n-1} c^{n-1}, ..., a_1 c, a_0

and finally, we add up to get f(c).

Alternatively, we can devise a scheme of synthetic substitution based on the following identity with n − 1 pairs of nested brackets:

f(c) = (...((a_n c + a_{n-1})c + a_{n-2})c + ... + a_1)c + a_0.

If P(x) = 2x^2 + 3x − 10, find P when x = 5. By substituting, P(5) = 2(5)^2 + 3(5) − 10 = 55.


The process of direct substitution can become a real pain for higher degree polynomial expressions. There is another method, called synthetic substitution, that will make evaluating a polynomial a very simple process. Given some polynomial Q(x) = 3x^3 + 10x^2 − 5x − 4 in one variable, you can evaluate Q when x = 2 by plugging in that value as we did before.

Q(x) = 3x^3 + 10x^2 − 5x − 4
     = 3(2)^3 + 10(2)^2 − 5(2) − 4
     = 24 + 40 − 10 − 4
     = 50

So, the value of Q is 50 when x is 2. Or, by using synthetic substitution, we would write the coefficients of Q:

Q(x) = 3x^3 + 10x^2 − 5x − 4

Now, we'll leave a space under those coefficients and draw a line. We will also write down the value of the variable to be plugged in. Once we do that, we are set up to evaluate Q when x = 2.

To accomplish that, we bring down the first number, 3, and multiply by 2, then add. Keep repeating this process. The last value will be the value of Q when x is 2.

2 |  3   10   -5   -4
          6   32   54
     3   16   27   50

Notice, we did get 50 as we did before.
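The multiply-then-add step in the table is easy to automate. The short Python sketch below (the function name synthetic_substitution is ours, not the text's) evaluates a polynomial from its coefficient list exactly as in the scheme above; for this example it returns 50.

def synthetic_substitution(coeffs, c):
    """Evaluate a polynomial at x = c by synthetic substitution (Horner's scheme).

    coeffs lists the coefficients from the highest power down to the constant,
    e.g. 3x^3 + 10x^2 - 5x - 4  ->  [3, 10, -5, -4].
    """
    value = 0
    for a in coeffs:
        value = value * c + a   # bring down / multiply by c / add the next coefficient
    return value

# The example from the text: Q(x) = 3x^3 + 10x^2 - 5x - 4 at x = 2
print(synthetic_substitution([3, 10, -5, -4], 2))   # prints 50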


1-4 Algebra on polynomials

Two polynomials in x,

f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0

g(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0

are equal if and only if n = m and a_i = b_i for i = 0, 1, ..., n. It follows that in writing a polynomial as a sum of monomials it is immaterial in which order its terms appear. Two polynomials in different indeterminates are never equal. We define the sum f(x) + g(x) of the two polynomials f(x) and g(x) of R[x] above to be the polynomial

h(x) = f(x) + g(x) = c_k x^k + c_{k-1} x^{k-1} + ... + c_1 x + c_0

where c_i = a_i + b_i (taking a_i = 0 for i > n and b_i = 0 for i > m).

The product f(x)g(x) is defined to be the polynomial

h(x) = f(x)g(x) = c_k x^k + c_{k-1} x^{k-1} + ... + c_1 x + c_0

where the coefficient c_k is given by

c_k = a_0 b_k + a_1 b_{k-1} + ... + a_{k-1} b_1 + a_k b_0 = Σ_{i+j=k} a_i b_j.

Example 1.2: To add two or more polynomials, add the coefficients of the terms with the same degree.

P(x) = 2x^3 + 5x − 3,  Q(x) = 4x − 3x^2 + 2x^3

P(x) + Q(x) = (2x^3 + 5x − 3) + (2x^3 − 3x^2 + 4x)
            = 2x^3 + 2x^3 − 3x^2 + 5x + 4x − 3
            = 4x^3 − 3x^2 + 9x − 3

Example 1.3: Find the product of 7x^3 + 2x^2 − 4x + 7 and 10x^4 − 5x^2 + 7.

Using a scheme of detached coefficients, we can calculate the coefficients c_k as follows (the rows are the partial products of the coefficients of f by 10, −5 and 7, shifted to their proper columns):

  f:     7    2   -4    7
  g:    10    0   -5    0    7

        x^7  x^6  x^5  x^4  x^3  x^2  x^1  x^0
  10·f:  70   20  -40   70
  -5·f:            -35  -10   20  -35
   7·f:                      49   14  -28   49
  sum:   70   20  -75   60   69  -21  -28   49

Thus, the product is 70x^7 + 20x^6 − 75x^5 + 60x^4 + 69x^3 − 21x^2 − 28x + 49.
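The detached-coefficients scheme is just the convolution formula c_k = Σ_{i+j=k} a_i b_j carried out by hand. A minimal Python sketch of the same computation (the function name poly_mul is ours) reproduces the product above.

def poly_mul(f, g):
    """Multiply two polynomials given as coefficient lists, highest power first.

    The coefficient of x^k in the product is the sum of f_i * g_j over i + j = k,
    which is exactly the detached-coefficients scheme shown above.
    """
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] += a * b
    return result

# (7x^3 + 2x^2 - 4x + 7) * (10x^4 - 5x^2 + 7)
print(poly_mul([7, 2, -4, 7], [10, 0, -5, 0, 7]))
# [70, 20, -75, 60, 69, -21, -28, 49]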

We take note that for any two polynomials f(x) and g(x) of R[x], their sum f(x) + g(x) and product f(x)g(x) are both polynomials of the same domain R[x]. We may therefore say that the domain R[x] is closed under the addition and the multiplication defined above. Secondly, the sum and the product of two constant polynomials a and b of R[x] are a + b and ab, which are respectively the sum and the product of the real numbers a and b of R. Therefore, we may say that the two algebraic operations of R[x] extend those of R. It is not difficult to verify that the usual laws of arithmetic hold in R[x].

1-5 Properties of sum and product

Let f(x), g(x) and h(x) be polynomials. It is not difficult to verify that the usual laws of arithmetic hold in R[x]:

1) The commutative law of addition: f(x) + g(x) = g(x) + f(x).
2) The commutative law of multiplication: f(x)g(x) = g(x)f(x).
3) The associative law of addition: (f(x) + g(x)) + h(x) = f(x) + (g(x) + h(x)).
4) The associative law of multiplication: (f(x)g(x))h(x) = f(x)(g(x)h(x)).
5) The distributive law: f(x)(g(x) + h(x)) = f(x)g(x) + f(x)h(x).

Moreover, the constant polynomials 0 and 1 satisfy the following special conditions:

f(x) + 0 = f(x) and 1·f(x) = f(x)

for every f(x) ∈ R[x]. In fact, they are characterized by the above properties.

Theorem 1.1: f(x) + g(x) = f(x) if and only if g(x) = 0. For any non-zero polynomial f(x), f(x)h(x) = f(x) if and only if h(x) = 1. (Leung, Mok, & Suen, 1992)

As in the case of ordinary arithmetic, we may also subtract one polynomial g(x) from another polynomial f(x) to obtain a difference d(x), which is itself a polynomial. More precisely, the difference d(x) is given by

d(x) = f(x) + (−1)g(x) = f(x) − g(x).


For the degree of the sum and the product we have the useful properties of the theorem
below.

Theorem 1.2: Let f(x) and g(x) be non-zero polynomials of R[x]. Then

deg(f(x) + g(x)) ≤ max{deg f(x), deg g(x)}   (provided f(x) + g(x) ≠ 0)

deg(f(x)g(x)) = deg f(x) + deg g(x).

Proof: Let f(x) = a_0 + a_1 x + ... + a_n x^n and g(x) = b_0 + b_1 x + ... + b_m x^m with a_n ≠ 0 and b_m ≠ 0. By definition, deg(f(x) + g(x)) = n if m < n and deg(f(x) + g(x)) = m if n < m; but deg(f(x) + g(x)) ≤ n if n = m, since the leading terms may cancel. Therefore in all cases deg(f(x) + g(x)) ≤ max(deg f(x), deg g(x)). It follows from a_n ≠ 0 and b_m ≠ 0 that a_n b_m ≠ 0 in f(x)g(x) = a_0 b_0 + (a_0 b_1 + a_1 b_0)x + ... + a_n b_m x^{n+m}; hence deg f(x)g(x) = deg f(x) + deg g(x). We note that in the first formula of the theorem, deg(f(x) + g(x)) ≤ max(deg f(x), deg g(x)), the inequality sign cannot be replaced by the equality sign. Take for example f(x) = 3x^2 + 2x + 1 and g(x) = −3x^2 + 5x + 2. Then deg(f(x) + g(x)) = 1 < 2 = max(deg f(x), deg g(x)). It follows from the second formula of the theorem, deg f(x)g(x) = deg f(x) + deg g(x), that if f(x) ≠ 0 and g(x) ≠ 0 then f(x)g(x) ≠ 0. This leads us to formulate the following very simple but very important properties of R[x]. ∎

Corollary 1.1: Let f(x) and g(x) be polynomials of R[x]. Then f(x)g(x) = 0 if and only if f(x) = 0 or g(x) = 0. (Leung, Mok, & Suen, 1992, p. 11)

Corollary 1.2: Let f(x), g(x) and h(x) be polynomials of R[x]. If f(x) ≠ 0 and f(x)g(x) = f(x)h(x), then g(x) = h(x).

The last corollary says effectively that we may cancel a non-zero factor from both sides of
an equation.

1-6 The remainder theorem

Given any polynomial f(x) of degree n and any number c, there exists a polynomial q(x) of degree n − 1 such that f(x) = (x − c)q(x) + f(c). (Leung, Mok, & Suen, 1992)

Theorem: If f(x) is a polynomial of degree n ≥ 1 in R[x] and c is an arbitrary real number, then there exists a unique polynomial q(x) of degree n − 1 in R[x] such that

f(x) = (x − c)q(x) + f(c).

The polynomial q(x) is called the quotient and the constant f(c) the remainder of the division of f(x) by x − c. This nomenclature is suggested by the following 'long division' of f(x) by x − c.

Dividing f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 by x − c gives the quotient

q(x) = d_n x^{n-1} + d_{n-1} x^{n-2} + ... + d_2 x + d_1,

whose coefficients are obtained step by step as the leading terms are cancelled:

d_n = a_n,   d_k = a_k + c·d_{k+1}   (k = n−1, ..., 1),

and the last line of the division gives the remainder

d_0 = a_0 + c·d_1 = f(c).
Therefore q(x) and f(c) do turn out to be the quotient and the remainder of a division. Because of the enormous importance and usefulness of the theorem, we felt that it would be instructive to go through another proof to consolidate the idea.

ALTERNATIVE PROOF OF THE REMAINDER THEOREM. We shall not offer another proof for the uniqueness but shall carry out a proof of the existence of the quotient q(x) by induction on the degree n of f(x). For deg f(x) = 1 we have f(x) = ax + b with a ≠ 0. Putting q(x) = a we get

ax + b = (x − c)a + (ac + b), i.e. f(x) = (x − c)q(x) + f(c).

Thus the existence of q(x) of degree 0 is proved. Suppose that for all polynomials of degree less than n such quotients exist. Let

f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,   a_n ≠ 0

be a polynomial of degree n and let c be a real number. Then the two polynomials f(x) and (x − c)a_n x^{n-1} have identical leading terms. Thus

g(x) = f(x) − (x − c)a_n x^{n-1}

is a polynomial of degree ≤ n − 1. Moreover

g(c) = f(c).

By the induction assumption, there is a quotient h(x) of degree ≤ n − 2 such that

g(x) = (x − c)h(x) + g(c).

Therefore

f(x) − (x − c)a_n x^{n-1} = (x − c)h(x) + g(c).

Putting q(x) = a_n x^{n-1} + h(x), which is of degree n − 1, we have

f(x) = (x − c)q(x) + f(c).


The induction is now complete. The remainder theorem provides us with particularly useful information on the polynomial f(x) if the chosen constant c happens to satisfy the condition f(c) = 0, i.e. c is a zero (or a root) of f(x). In this case we have f(c) = 0, and consequently

f(x) = (x − c)q(x).

Therefore if c is a root of f(x), then the linear polynomial x − c is a factor of f(x). Conversely, if x − c is a factor of f(x), then f(x) = (x − c)q(x) for some polynomial q(x) whose degree is less than that of f(x) by 1. Recalling that the order of substitution and multiplication can be interchanged, we see that f(c) = (c − c)q(c) = 0, i.e. c is a root of f(x). This relationship between a root c and the linear polynomial x − c is known as the factor theorem. Thus we have proved the factor theorem. (Leung, Mok, & Suen, 1992) ∎
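Since the third row of the synthetic scheme consists of the quotient coefficients followed by f(c), the remainder theorem can be checked numerically. A minimal Python sketch (the function name divide_by_linear is ours) that returns both quotient and remainder:

def divide_by_linear(coeffs, c):
    """Divide f(x) (coefficients highest power first) by (x - c).

    Returns (quotient_coeffs, remainder); by the remainder theorem the
    remainder equals f(c).
    """
    running = []
    acc = 0
    for a in coeffs:
        acc = acc * c + a
        running.append(acc)
    remainder = running.pop()       # the last running value is f(c)
    return running, remainder

# f(x) = x^4 + x^3 - x - 1 (Example 1.4 below), divided by x - 1
q, r = divide_by_linear([1, 1, 0, -1, -1], 1)
print(q, r)   # [1, 2, 2, 1] 0, i.e. q(x) = x^3 + 2x^2 + 2x + 1 and f(1) = 0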

Theorem 1.3: Let f(x) be a polynomial of R[x] and c be a real number. Then c is a root of f(x) if and only if x − c is a factor of f(x), i.e. f(x) = (x − c)q(x) for some q(x) of R[x]. (Leung, Mok, & Suen, 1992)

Example 1.4: Let f(x) = x^4 + x^3 − x − 1. Since 1 and −1 are real roots of f(x), by the factor theorem we have

f(x) = (x − 1)(x + 1)(x^2 + x + 1)

The remaining roots of f(x) must be the roots of the quadratic polynomial h(x) = x^2 + x + 1. However, h(x) has a negative discriminant; it has no real root. Therefore, the real roots of f(x) are 1 and −1. From this example we see that in R[x] not only the linear polynomials x − 1 and x + 1 are factors of the polynomial f(x), but also their product (x − 1)(x + 1).

Theorem 1.4: Let f(x) be a polynomial of R[x]. If the real (respectively complex) numbers c_1, c_2, ..., c_k are distinct roots of f(x), that is f(c_i) = 0 for i = 1, 2, ..., k, then the k-th degree polynomial (x − c_1)(x − c_2)...(x − c_k) is a factor of f(x), i.e.

f(x) = (x − c_1)(x − c_2)...(x − c_k)g(x)

for some polynomial g(x) of R[x] (respectively of C[x]).

Proof: The following inductive proof is based on the factor theorem and the fact that the product of two real (complex) numbers is zero if and only if at least one of the numbers is zero. The induction is carried out on the number k of distinct roots. For k = 1 the present theorem is just the factor theorem. Assume that the present theorem holds for k − 1 distinct roots. Let c_1, c_2, ..., c_k be k distinct roots of a polynomial f(x). Then by the induction assumption

f(x) = (x − c_2)...(x − c_k)h(x)

for some polynomial h(x). Now f(c_1) = 0, so (c_1 − c_2)...(c_1 − c_k)h(c_1) = 0, since the order of substitution and multiplication may be interchanged. The root c_1 is different from c_2, ..., c_k; therefore (c_1 − c_2)...(c_1 − c_k) ≠ 0. Hence h(c_1) = 0. Applying the factor theorem to h(x) and c_1, we have h(x) = (x − c_1)g(x) for some polynomial g(x). Therefore

f(x) = (x − c_1)(x − c_2)...(x − c_k)g(x)

We remark that if the roots c_i are not all distinct, then the conclusion of the theorem may not hold. Take for example the polynomial f(x) = x^2 − 1. c_1 = −1 and c_2 = −1 are roots of f(x); but (x + 1)^2 is not a factor of f(x). One important consequence of this theorem is that a polynomial with k distinct roots must have degree at least equal to k. From this remark a number of corollaries follow. (Leung, Mok, & Suen, 1992) ∎

1-7 Divisibility

In the subsequent discussion we shall tacitly assume that the zero polynomial is excluded and that all polynomials are taken from the domain R[x].

Definition 1.3: We say that a polynomial g(x) is divisible by a polynomial f(x) if g(x) = f(x)h(x) for some polynomial h(x). In this case we also say that f(x) is a factor (or a divisor) of g(x), or that g(x) is a multiple of f(x), and write f(x) | g(x). A non-zero constant polynomial a is a factor of every polynomial, because for every a ≠ 0 and every g(x) = b_m x^m + ... + b_1 x + b_0 we always have g(x) = a((b_m/a)x^m + ... + (b_1/a)x + b_0/a). Similarly, if f(x) | g(x), then af(x) | g(x) for every non-zero constant a. Other general properties of divisibility are listed in the theorem below.

Theorem 1.5: Let f(x), g(x), h(x), k(x) be polynomials. Then the following statements hold:

(a) If f(x) | g(x) and g(x) | h(x), then f(x) | h(x).

(b) If f(x) | g(x) and h(x) | k(x), then f(x)h(x) | g(x)k(x).

(c) If f(x) | g(x) and g(x) | f(x), then f(x) = ag(x) for some non-zero constant a.

(d) If f(x) | g(x), then deg f(x) ≤ deg g(x).

(e) If f(x) | g(x) and f(x) | h(x), then f(x) | (p(x)g(x) + q(x)h(x)) for arbitrary polynomials p(x) and q(x).

Proof: We shall only prove (c) and (d) and leave the proof of the other statements as an exercise.

(c) It follows from the hypothesis that g(x) = p(x)f(x) and f(x) = q(x)g(x) for some polynomials p(x) and q(x). Therefore f(x) = p(x)q(x)f(x); whence p(x)q(x) = 1 by Corollary 1.2. By Theorem 1.2, deg p(x) + deg q(x) = 0. Therefore both p(x) and q(x) are constant polynomials; so f(x) = ag(x) for some non-zero constant a.

(d) If f(x) | g(x), then g(x) = f(x)p(x) for some polynomial p(x). Therefore deg f(x) + deg p(x) = deg g(x). Since deg p(x) ≥ 0, we conclude that deg f(x) ≤ deg g(x).

Before we study further properties of divisibility, let us compare the statements of the above theorem with their counterparts in the arithmetic of Z. For non-zero integers a, b, c and d the corresponding statements are:

(a') If a | b and b | c, then a | c.

(b') If a | b and c | d, then ac | bd.

(c') If a | b and b | a, then a = ±b.

(d') If a | b, then |a| ≤ |b|.

(e') If a | b and a | c, then a | (xb + yc) for arbitrary integers x and y.

We discover that (a), (b) and (e) are the exact parallels of (a'), (b') and (e') respectively, while there are minor differences between (c) and (c'), and between (d) and (d'). To restore the similarity between (d) and (d'), we can regard the absolute value as a measurement of the magnitude of integers and the degree as a measurement of the magnitude of polynomials. From this point of view, the statements (d) and (d') are now parallel.

Next we observe that 1 and −1 are the only integers that have a reciprocal which is also an integer; they are known as the invertible elements or units of Z. On the other hand, the invertible polynomials are the non-zero constant polynomials, since they are exactly the polynomials that have a polynomial reciprocal. Therefore we may also call non-zero constant polynomials units of R[x]. The similarity between (c) and (c') is now completely restored: (c) if f(x) and g(x) divide each other, then f(x) = ag(x) for some unit a of R[x]; (c') if a and b divide each other, then a = cb for some unit c of Z.

If divisibility of integers is our main concern, then we may replace any integer a by −a in a statement about divisibility without altering its validity. Similarly, in a statement on divisibility of polynomials, we may replace any polynomial f(x) by any multiple af(x) as long as a is a non-zero constant. This leads us to the following terminology. Two polynomials f(x) and g(x) are said to be associated, or associates of each other, if f(x) = ag(x) for some non-zero constant a. Among the associates of a given polynomial f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 (a_n ≠ 0), there is one that has leading coefficient equal to 1, namely (1/a_n)f(x). This is called the monic polynomial associated to f(x). In general, every polynomial with leading coefficient 1 is a monic polynomial.

Corresponding to the prime numbers of Z we have the irreducible polynomials. Recall that an integer is a prime number if it is different from 1 and −1, and if it is not a product of two non-units. Thus we say that a non-constant polynomial is irreducible if it is not a product of two non-units, i.e. it is not a product of two polynomials, both of positive degree. In other words, if f(x) is irreducible and f(x) = g(x)h(x), then deg g(x) = 0 or deg h(x) = 0. For example, all linear polynomials are irreducible; so are the quadratic polynomials x^2 + 1 and x^2 + x + 1, for otherwise they would have a linear factor and hence, by the factor theorem, a root in R, which is impossible. Clearly if f(x) is irreducible and if f(x) and g(x) are associated, then g(x) is also irreducible; in particular the monic polynomial associated to f(x) is irreducible. A non-constant polynomial which is not irreducible is said to be reducible; a reducible polynomial can therefore be written as a product of two non-units, i.e. a product of two polynomials of positive degree. In other words, if f(x) is reducible, then f(x) = g(x)h(x) for some g(x) and h(x) in R[x] such that 1 ≤ deg g(x) < deg f(x) and 1 ≤ deg h(x) < deg f(x). For example, x^2 − 1, x^3, x^3 − 1 are reducible. Like prime numbers, an irreducible polynomial is divisible only by units and its associates. ∎
The Division Algorithm, Theorem 1.6: If p(x) and d(x) ≠ 0 are any two polynomials, then there exist unique polynomials q(x) and r(x) such that p(x) = d(x)·q(x) + r(x), where the degree of r(x) is strictly less than the degree of d(x) when deg d(x) ≥ 1, or else r(x) ≡ 0.

Proof: We apply induction on the degree n of p(x). We let m denote the degree of the divisor d(x). We will establish uniqueness after we establish the existence of q(x) and r(x). If n = 0, then p(x) = c, where c is a constant.

Case 1: m = 0. d(x) = k, where k is a constant, and since d(x) ≠ 0 we know k ≠ 0. In this case choose q(x) = c/k and choose r(x) ≡ 0. Then d(x)·q(x) + r(x) = k·(c/k) + 0 = c = p(x); in this case r(x) ≡ 0.

Case 2: m > 0. In this case let q(x) ≡ 0 and let r(x) = c. Then clearly d(x)·q(x) + r(x) = d(x)·0 + c = c = p(x). In this case the degree of r(x) is strictly less than the degree of d(x).

Now assume there exist polynomials q_1(x) and r_1(x) such that p_1(x) = d(x)·q_1(x) + r_1(x) whenever p_1(x) is any polynomial of degree less than or equal to k. Let p(x) be a polynomial of degree k + 1. We assume p(x) = a_{k+1} x^{k+1} + a_k x^k + ... + a_1 x + a_0, where a_{k+1} ≠ 0. We must show the theorem statement holds for p(x).

Case 1: m = 0. d(x) = k, where k is a constant, and since d(x) ≠ 0 we know k ≠ 0. In this case choose q(x) = (1/k)·p(x) and choose r(x) ≡ 0. Then d(x)·q(x) + r(x) = k·(1/k)·p(x) + 0 = p(x); in this case r(x) ≡ 0.

Case 2: m > 0. Let d(x) = d_m x^m + ... + d_1 x + d_0, where d_m ≠ 0. Note that a_{k+1}/d_m ≠ 0, since both constants are nonzero. Let p_1(x) = p(x) − (a_{k+1}/d_m) x^{k+1−m}·d(x). Then the subtraction on the right cancels the leading term of p(x), so p_1(x) is a polynomial of degree k or less, and we can apply the induction assumption to p_1(x) to conclude there exist polynomials q_1(x) and r_1(x) such that p_1(x) = d(x)·q_1(x) + r_1(x), where the degree of r_1(x) is strictly less than that of d(x):

d(x)·q_1(x) + r_1(x) = p_1(x) = p(x) − (a_{k+1}/d_m) x^{k+1−m}·d(x)

Now we solve this equation for p(x):

p(x) = (a_{k+1}/d_m) x^{k+1−m}·d(x) + d(x)·q_1(x) + r_1(x)

p(x) = d(x)·[ (a_{k+1}/d_m) x^{k+1−m} + q_1(x) ] + r_1(x)

So we may let q(x) = (a_{k+1}/d_m) x^{k+1−m} + q_1(x) and let r(x) = r_1(x), and we have established that the theorem holds for p(x) of degree k + 1. The induction proof that establishes the existence part of the theorem is now complete.

To establish uniqueness, suppose p(x) = d(x)·q_1(x) + r_1(x) = d(x)·q_2(x) + r_2(x).

Then we have d(x)·[q_1(x) − q_2(x)] = r_2(x) − r_1(x); call this equation (*).

Case 1: m = 0. In this case both remainders must be identically zero, and this means r_1(x) ≡ r_2(x). In turn, this means d(x)·[q_1(x) − q_2(x)] ≡ 0, and since d(x) ≠ 0 we must have q_1(x) − q_2(x) ≡ 0, which of course implies q_1(x) ≡ q_2(x).

Case 2: m > 0. If [q_1(x) − q_2(x)] ≠ 0, then we can compute the degrees of the polynomials on both sides of equation (*). The degree of the left side is greater than or equal to the degree of d(x). But on the right side, both remainders have degrees less than that of d(x), so their difference has a degree less than or equal to that of either, which is less than the degree of d(x). This is a contradiction. So we must have d(x)·[q_1(x) − q_2(x)] ≡ 0, hence q_1(x) ≡ q_2(x); and when this is the case, the entire left side of equation (*) is identically 0, so we may conclude that the two remainders are also identically equal. ∎
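The existence argument is constructive: at each step the leading term of the dividend is cancelled by a suitable multiple of d(x). The Python sketch below (the function name poly_divmod is ours; exact rational arithmetic via fractions is our implementation choice, not part of the text) follows that procedure and returns a quotient and remainder with deg r < deg d.

from fractions import Fraction

def poly_divmod(p, d):
    """Divide p(x) by d(x) (coefficient lists, highest power first).

    Returns (q, r) with p = d*q + r and deg r < deg d, following the
    leading-term cancellation used in the existence proof above.
    """
    p = [Fraction(a) for a in p]
    d = [Fraction(a) for a in d]
    q = [Fraction(0)] * max(len(p) - len(d) + 1, 1)
    while len(p) >= len(d) and any(p):
        shift = len(p) - len(d)
        coef = p[0] / d[0]                  # cancels the current leading term
        q[len(q) - 1 - shift] = coef
        p = [a - coef * b for a, b in zip(p, d + [Fraction(0)] * shift)]
        p.pop(0)                            # the leading term is now zero; drop it
    return q, p                             # what is left of p is the remainder

# (x^3 - 2x^2 - 4) divided by (x - 3): quotient x^2 + x + 3, remainder 5
print(poly_divmod([1, -2, 0, -4], [1, -3]))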

1-8 Greatest common factor

Definition 1.4: Let f(x) and g(x) be non-zero polynomials of R[x]. A polynomial d(x) of R[x] is a highest common factor (HCF for short) of f(x) and g(x) if the following conditions are satisfied:

(i) d(x) | f(x) and d(x) | g(x),

(ii) if d'(x) | f(x) and d'(x) | g(x), then d'(x) | d(x).

A polynomial m(x) of R[x] is a lowest common multiple (LCM for short) of f(x) and g(x) if the following conditions are satisfied:

(iii) f(x) | m(x) and g(x) | m(x),

(iv) if f(x) | m'(x) and g(x) | m'(x), then m(x) | m'(x).

Example: If f(x) = x^3 − 2x^2 − x + 2 = (x^2 − 1)(x − 2) and g(x) = x^3 + 2x^2 − x − 2 = (x^2 − 1)(x + 2), then they have x^2 − 1 as an HCF and x^4 − 5x^2 + 4 = (x^2 − 1)(x^2 − 4) as an LCM. The HCF and the LCM of two polynomials are not unique. Clearly any associate of an HCF (respectively an LCM) is an HCF (respectively an LCM), and conversely any two HCFs (respectively LCMs) are associates. We shall usually ignore the distinction between associates and denote any one HCF of f(x) and g(x) by HCF(f(x), g(x)) and any one LCM of f(x) and g(x) by LCM(f(x), g(x)). We shall prove the existence of the HCF and LCM later in this section. In the meantime we proceed to study their properties under the assumption that they exist.

Theorem 1.7: For any non-zero polynomials f(x), g(x) and h(x), the following statements hold:

(a) HCF(f(x)h(x), g(x)h(x)) = h(x)·HCF(f(x), g(x)).

(b) LCM(f(x)h(x), g(x)h(x)) = h(x)·LCM(f(x), g(x)).

(c) HCF(f(x), g(x)) and f(x) are associates if and only if f(x) | g(x).

(d) LCM(f(x), g(x)) and f(x) are associates if and only if g(x) | f(x).

(e) HCF(f(x), g(x)) = HCF(g(x), r(x)) if f(x) = g(x)q(x) + r(x).

(f) HCF(f(x), g(x))·LCM(f(x), g(x)) is associated to f(x)g(x).

Proof: We leave the proof of (a) to (d) to the interested reader as an exercise.

(e) Let d(x) = HCF(g(x), r(x)). Then d(x) | g(x) and d(x) | (g(x)q(x) + r(x)). Therefore d(x) | f(x) and d(x) | g(x), i.e. condition (i) of Definition 1.4 is satisfied. Suppose d'(x) | f(x) and d'(x) | g(x). Then d'(x) | (f(x) − g(x)q(x)), i.e. d'(x) | r(x). Since d(x) = HCF(g(x), r(x)), we must have d'(x) | d(x), i.e. condition (ii) of Definition 1.4 is also satisfied. Therefore d(x) = HCF(f(x), g(x)). This completes the proof of (e).

(f) Let m(x) = LCM(f(x), g(x)). Then it follows from the fact that f(x)g(x) is a common multiple of f(x) and g(x) that f(x)g(x) = d(x)m(x) for some d(x). Therefore it remains to show that d(x) = HCF(f(x), g(x)), i.e. to verify (i) and (ii) of Definition 1.4.

Condition (i). It follows from m(x) = LCM(f(x), g(x)) that m(x) = g(x)s(x) for some s(x) of R[x]. Then f(x)g(x) = d(x)m(x) = d(x)g(x)s(x). Therefore f(x) = d(x)s(x), and hence d(x) | f(x). Similarly d(x) | g(x).

Condition (ii). Let d'(x) | f(x) and d'(x) | g(x). Then f(x) = d'(x)h(x) and g(x) = d'(x)k(x) for some h(x) and k(x) of R[x]. It follows from f(x)g(x) = d'(x)h(x)g(x) = d'(x)k(x)f(x) that h(x)g(x) = k(x)f(x). Putting n(x) = h(x)g(x), we see that n(x) is a common multiple of f(x) and g(x). Since m(x) = LCM(f(x), g(x)), we get n(x) = m(x)p(x). Now it follows from d(x)m(x) = f(x)g(x) = d'(x)n(x) = d'(x)m(x)p(x) that d(x) = d'(x)p(x). Therefore d'(x) | d(x), and therefore d(x) = HCF(f(x), g(x)). The proof of (f) is now complete. ∎

Theorem 1.8: Two polynomials f(x) and g(x) of R[x] always have an HCF, which can be written in the form

a(x)f(x) + b(x)g(x)

for some polynomials a(x) and b(x) of R[x].

Proof: Consider the set S = {s(x)f(x) + t(x)g(x) : s(x), t(x) ∈ R[x]}. The set S is clearly non-empty and contains non-zero polynomials, since both f(x) and g(x) belong to S. Among the non-zero polynomials of S we pick any one d(x) which has the lowest degree, say d(x) = a(x)f(x) + b(x)g(x) for certain a(x) and b(x) of R[x]. The theorem will be proved if we can show that d(x) is an HCF of f(x) and g(x). The verification of condition (ii) of Definition 1.4 is easy: since d(x) is of the form a(x)f(x) + b(x)g(x), if d'(x) | f(x) and d'(x) | g(x), then d'(x) | d(x). To show that d(x) satisfies condition (i) of Definition 1.4 we shall have to use the device of the Euclidean algorithm. Since both f(x) and g(x) belong to S, it suffices to show that every polynomial of S is divisible by d(x). Suppose to the contrary that there is one element, say s(x)f(x) + t(x)g(x), of S which is not divisible by d(x). Then upon division by d(x) it would leave a non-zero remainder r(x):

s(x)f(x) + t(x)g(x) = d(x)q(x) + r(x)

with deg r(x) < deg d(x). Then

r(x) = s(x)f(x) + t(x)g(x) − d(x)q(x)
     = [s(x) − a(x)q(x)]f(x) + [t(x) − b(x)q(x)]g(x)

would be a polynomial of the set S with a degree strictly less than that of d(x). This would contradict our choice of d(x) as a polynomial of S with lowest degree. Therefore d(x) divides every polynomial of S, and hence it divides both f(x) and g(x).

Two polynomials f(x) and g(x) are said to be relatively prime if they have no non-unit common factor, in other words if HCF(f(x), g(x)) = 1. Two polynomials being relatively prime is at one extreme of the possibilities with respect to the availability of common non-unit factors. At the other extreme we would find two polynomials being associates; in this case the polynomials have all non-unit factors in common. Some useful properties of relatively prime polynomials follow, in conjunction with Theorem 1.7(f). ∎

Theorem 1.9: Euclidean Algorithm for Polynomials

Let p(x) and q(x) be any two polynomials with degrees ≥ 1. Then there exists a polynomial d(x) such that d(x) divides evenly into both p(x) and q(x). Moreover, d(x) is such that if a(x) is any other common divisor of p(x) and q(x), then a(x) divides evenly into d(x). This polynomial, called the Greatest Common Divisor of p(x) and q(x), is sometimes denoted by GCD(p(x), q(x)). Except for constant multiples, d(x) is unique.

Proof: Without loss of generality we assume the degree of p(x) is larger than or equal to the degree of q(x). By the Division Algorithm we may write

p(x) = q(x)·q_1(x) + r_1(x)   (1)

where q_1(x) is the quotient polynomial and r_1(x) is the remainder. If r_1(x) = 0, then we stop. Otherwise, if r_1(x) ≠ 0, we note from the above equation that any common divisor of both q(x) and r_1(x) must be a divisor of the right side of the equation and therefore a divisor of the left side: any common divisor of q(x) and r_1(x) must be a divisor of p(x). Next, by writing

p(x) − q(x)·q_1(x) = r_1(x)

we can see that every common divisor of p(x) and q(x) must be a divisor of r_1(x) and thus a common divisor of q(x) and r_1(x). So GCD(p(x), q(x)) = GCD(q(x), r_1(x)). We continue by applying the Division Algorithm again to write

q(x) = r_1(x)·q_2(x) + r_2(x)   (2)

If r_2(x) = 0, we stop. Otherwise, repeating the above reasoning, GCD(q(x), r_1(x)) = GCD(r_1(x), r_2(x)). Now apply the Division Algorithm again:

r_1(x) = r_2(x)·q_3(x) + r_3(x)   (3)

If r_3(x) = 0, we stop. Otherwise we note GCD(r_1(x), r_2(x)) = GCD(r_2(x), r_3(x)), and we continue to apply the Division Algorithm to get

r_2(x) = r_3(x)·q_4(x) + r_4(x)   (4)

If r_4(x) = 0, we stop. Otherwise we continue this process. However, we cannot continue this process forever, because the degrees of the remainders r_i(x) keep strictly decreasing:

degree(r_4(x)) < degree(r_3(x)) < degree(r_2(x)) < degree(r_1(x)) < degree(q(x))

So after applying the Division Algorithm at most the number of times that is the degree of q(x), some remainder must become the identically zero polynomial. We claim GCD(p(x), q(x)) is the last nonzero remainder. For example, suppose

r_{n−2}(x) = r_{n−1}(x)·q_n(x) + r_n(x)   (n)

and

r_{n−1}(x) = r_n(x)·q_{n+1}(x)   (n+1)

where r_{n+1}(x) = 0 and is not written. The last equation shows r_n(x) is a divisor of r_{n−1}(x), so GCD(r_{n−1}(x), r_n(x)) = r_n(x).

Now GCD(p(x), q(x)) = GCD(q(x), r_1(x)) = GCD(r_1(x), r_2(x)) = GCD(r_2(x), r_3(x)) = ... = GCD(r_{n−1}(x), r_n(x)) = r_n(x), the last nonzero remainder. ∎
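The chain of divisions above is easy to run in code. The following Python sketch (the function names poly_mod and poly_gcd are ours) repeats the remainder step until it reaches the zero polynomial and returns the last nonzero remainder, normalized to be monic; on the two cubics from the Section 1-8 example it returns x^2 − 1.

from fractions import Fraction

def poly_mod(p, d):
    """Remainder of p(x) divided by d(x); coefficient lists, highest power first."""
    p = [Fraction(a) for a in p]
    d = [Fraction(a) for a in d]
    while len(p) >= len(d):
        coef = p[0] / d[0]
        p = [a - coef * b for a, b in zip(p, d + [Fraction(0)] * (len(p) - len(d)))]
        p.pop(0)
    while p and p[0] == 0:          # strip leading zeros of the remainder
        p.pop(0)
    return p

def poly_gcd(p, q):
    """GCD(p, q) by the Euclidean algorithm: the last nonzero remainder, made monic."""
    while q:
        p, q = q, poly_mod(p, q)
    return [a / p[0] for a in p]

# HCF of x^3 - 2x^2 - x + 2 and x^3 + 2x^2 - x - 2 from the Section 1-8 example
print(poly_gcd([1, -2, -1, 2], [1, 2, -1, -2]))   # the monic HCF x^2 - 1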

Corollary 1.3 (to the Euclidean Algorithm for Polynomials): The GCD of any two polynomials p(x) and q(x) may be expressed as a linear combination of p(x) and q(x).

Proof: The following were the series of equations that led up to the creation of the GCD polynomial:

p(x) = q(x)·q_1(x) + r_1(x)   (1)

q(x) = r_1(x)·q_2(x) + r_2(x)   (2)

r_1(x) = r_2(x)·q_3(x) + r_3(x)   (3)

r_2(x) = r_3(x)·q_4(x) + r_4(x)   (4)

...

r_{n−3}(x) = r_{n−2}(x)·q_{n−1}(x) + r_{n−1}(x)   (n−1)

r_{n−2}(x) = r_{n−1}(x)·q_n(x) + r_n(x)   (n)

Now, starting with the last equation, we solve for the GCD:

r_n(x) = r_{n−2}(x) − r_{n−1}(x)·q_n(x)   (*)

Note this shows how to write the GCD as a linear combination of r_{n−2}(x) and r_{n−1}(x). But in the next-to-last equation we can solve for r_{n−1}(x) and substitute into (*):

r_n(x) = r_{n−2}(x) − [r_{n−3}(x) − r_{n−2}(x)·q_{n−1}(x)]·q_n(x)
       = [1 + q_{n−1}(x)·q_n(x)]·r_{n−2}(x) − q_n(x)·r_{n−3}(x)

We have now shown how to write r_n(x) as a linear combination of r_{n−2}(x) and r_{n−3}(x). Clearly we can continue to work backwards, solve each next equation for the previous remainder, and then substitute that remainder (which is a linear combination of its two previous remainders) into our equation to continually write r_n(x) as a linear combination of the two most recent remainders. As we work our way up the list, we will eventually have

r_n(x) = f(x)·r_1(x) + g(x)·r_2(x)

and when we solve the second equation for r_2(x) and substitute we get

r_n(x) = f(x)·r_1(x) + g(x)·[q(x) − r_1(x)·q_2(x)]
       = [f(x) − g(x)·q_2(x)]·r_1(x) + g(x)·q(x)

Lastly we solve the first equation for r_1(x) and substitute, and we get

r_n(x) = [f(x) − g(x)·q_2(x)]·[p(x) − q(x)·q_1(x)] + g(x)·q(x)
       = [f(x) − g(x)·q_2(x)]·p(x) + {[g(x)·q_2(x) − f(x)]·q_1(x) + g(x)}·q(x)

This shows that the GCD can be written as a linear combination of p(x) and q(x). ∎
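The back-substitution in this proof is usually organized forward as the extended Euclidean algorithm, carrying the combination coefficients along with every division step. The Python sketch below (all function names are ours, and the division helper mirrors the one sketched after the Division Algorithm) returns polynomials a(x), b(x) with a(x)p(x) + b(x)q(x) = GCD; for the two cubics of the Section 1-8 example it returns an associate of x^2 − 1.

from fractions import Fraction

def poly_divmod(p, d):
    """Quotient and remainder of p(x) by d(x); coefficient lists, highest power first."""
    p = [Fraction(a) for a in p]
    d = [Fraction(a) for a in d]
    q = [Fraction(0)] * max(len(p) - len(d) + 1, 1)
    while len(p) >= len(d):
        coef = p[0] / d[0]
        q[len(q) - (len(p) - len(d)) - 1] = coef
        p = [a - coef * b for a, b in zip(p, d + [Fraction(0)] * (len(p) - len(d)))]
        p.pop(0)
    while p and p[0] == 0:
        p.pop(0)
    return q, p

def poly_add(f, g):
    f, g = f[::-1], g[::-1]                       # low degree first for easy addition
    s = [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
         for i in range(max(len(f), len(g)))]
    return s[::-1]

def poly_mul(f, g):
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def extended_gcd(p, q):
    """Return (g, a, b) with a*p + b*q = g, g an associate of GCD(p, q)."""
    # invariants: r0 = a0*p + b0*q and r1 = a1*p + b1*q at every step
    r0, a0, b0 = [Fraction(c) for c in p], [Fraction(1)], [Fraction(0)]
    r1, a1, b1 = [Fraction(c) for c in q], [Fraction(0)], [Fraction(1)]
    while r1:
        quot, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        a0, a1 = a1, poly_add(a0, poly_mul([-1], poly_mul(quot, a1)))
        b0, b1 = b1, poly_add(b0, poly_mul([-1], poly_mul(quot, b1)))
    return r0, a0, b0

g, a, b = extended_gcd([1, -2, -1, 2], [1, 2, -1, -2])
print(g)      # -4x^2 + 4, an associate of the HCF x^2 - 1
print(a, b)   # a(x) = 1, b(x) = -1: indeed p(x) - q(x) = -4x^2 + 4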

Chapter Two

Roots of Polynomials

2.1 Introduction
Definition 2.1: Let p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n be a polynomial. Then λ ∈ F is called a root, solution, or zero of p if p(λ) = 0.

Theorem 2.1: If p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n is a polynomial and deg(p) = n ≥ 1, then λ ∈ F is a root of p if and only if there exists a polynomial q(x) with deg(q) = n − 1 such that p(x) = (x − λ)q(x). ∎

Theorem 2.2: If p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n is a polynomial such that deg(p) = n ≥ 0, then p has at most n distinct roots. ∎

Corollary 2.1: If p(x) = a_0 + a_1 x + ... + a_m x^m, where a_0, a_1, ..., a_m ∈ F, and p(x) = 0 for all x ∈ F, then a_0 = a_1 = ... = a_m = 0.


Corollary 2.2: If two polynomials f(x) and g(x), both of degree n over R, agree at n + 1 distinct places, i.e. f(c) = g(c) for n + 1 distinct values of c, then they are identical: f(x) = g(x). (Leung, Mok, & Suen, 1992, p. 22)

Corollary 2.3: If two polynomials f(x) and g(x) of R[x] are such that f(c) = g(c) for an infinite number of values c ∈ R, then f(x) = g(x). (Leung, Mok, & Suen, 1992, p. 23)

Fundamental Theorem of Algebra

a) Every polynomial of degree n ≥ 1 has at least one zero among the complex numbers.

b) If p(x) denotes a polynomial of degree n, then p(x) has exactly n roots (counted with multiplicity), some of which may be irrational numbers or complex numbers.

Proof: The proof is not given here. Gauss proved this theorem in 1799 as his Ph.D. dissertation topic.

2.2 Relation between roots and coefficients


We recall that the roots of a quadratic equation with leading coefficient 1,

x^2 + b_1 x + b_0 = 0,

are given as

r_1 = −b_1/2 + √(b_1^2 − 4b_0)/2   and   r_2 = −b_1/2 − √(b_1^2 − 4b_0)/2

On the other hand, by the factor theorem we have

x^2 + b_1 x + b_0 = (x − r_1)(x − r_2)

whence

b_1 = −(r_1 + r_2)

b_0 = r_1 r_2

Thus we have two sets of relations between the roots and the coefficients of the given monic quadratic equation, the first set consisting of expressions of the roots in terms of the coefficients and the second set consisting of expressions of the coefficients in terms of the roots. Similarly, given a cubic equation

x^3 + c_2 x^2 + c_1 x + c_0 = 0

we also have two such sets of relations. Now the expressions of the roots r_1, r_2 and r_3 in terms of the coefficients c_0, c_1 and c_2 constitute the substance of Cardano's formulae, which are too complicated to be reproduced here. To obtain the second set of relations we use the factorization

x^3 + c_2 x^2 + c_1 x + c_0 = (x − r_1)(x − r_2)(x − r_3)

After expanding the right-hand side and comparing corresponding terms, we get

c_2 = −(r_1 + r_2 + r_3)

c_1 = r_1 r_2 + r_1 r_3 + r_2 r_3

c_0 = −r_1 r_2 r_3

Theorem 2.3: In an equation in the unknown x of degree n, in which the leading coefficient is one, the sum of the n roots equals the negative of the coefficient of x^{n−1}, the sum of the (n choose 2) products of the roots taken two at a time equals the coefficient of x^{n−2}, the sum of the (n choose 3) products of the roots taken three at a time equals the negative of the coefficient of x^{n−3}, etc.; finally, the product of the n roots equals the constant term or its negative according as n is even or odd. (Leung, Mok, & Suen, 1992, p. 83)

Proof: Given an equation f(x) = 0, to have the first set of relations expressing the roots in terms of the coefficients amounts to a complete solution of the proposed equation. This is, therefore, not always possible. For example, we would not have such a set of relations for a quintic equation with general coefficients. We shall show that it is always possible to get the second set of relations which, under certain circumstances, may even lead to a complete solution of the equation. Let

f(x) = x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0

be a monic equation with real coefficients. By the fundamental theorem of algebra and the factor theorem, the monic polynomial f(x) has a (real or complex) root r_1 and can be factorized into f(x) = (x − r_1)f_1(x), where f_1(x) is a monic polynomial of degree n − 1. For the same reason f_1(x) has a root r_2 and we get f(x) = (x − r_1)(x − r_2)f_2(x). Further factorization will lead finally to the complete factorization of f(x):

f(x) = (x − r_1)(x − r_2)...(x − r_n)

where the numbers r_1, r_2, ..., r_n, not necessarily all distinct, are the n roots of the equation f(x) = 0. After expanding the right-hand side, we get

f(x) = x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + ... + a_2 x^2 + a_1 x + a_0
     = x^n − (r_1 + ... + r_n) x^{n-1} + (r_1 r_2 + ... + r_{n-1} r_n) x^{n-2} + ... + (−1)^n r_1 r_2 ... r_n

Hence a comparison of the corresponding terms will yield:

−a_{n-1} = r_1 + r_2 + ... + r_n

a_{n-2} = r_1 r_2 + r_1 r_3 + ... + r_{n-1} r_n

−a_{n-3} = r_1 r_2 r_3 + r_1 r_2 r_4 + ... + r_{n-2} r_{n-1} r_n

...

(−1)^n a_0 = r_1 r_2 ... r_n

which is the second set of relations. ∎
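These relations can be checked numerically by expanding (x − r_1)...(x − r_n) and comparing its coefficients with the signed elementary symmetric functions of the roots. A small Python sketch (function names ours), using the roots 2, 2, 1 of x^3 − 5x^2 + 8x − 4, the cubic that reappears in Example 2.1 below:

from itertools import combinations
from math import prod

def expand_from_roots(roots):
    """Coefficients (highest power first) of the monic polynomial with the given roots."""
    coeffs = [1]
    for r in roots:
        shifted = coeffs + [0]                 # multiply the current polynomial by x
        for i, c in enumerate(coeffs):
            shifted[i + 1] -= r * c            # ... and subtract r times it
        coeffs = shifted
    return coeffs

def elementary_symmetric(roots, k):
    """Sum of all products of the roots taken k at a time."""
    return sum(prod(c) for c in combinations(roots, k))

roots = [2, 2, 1]
print(expand_from_roots(roots))                # [1, -5, 8, -4]
for k in range(1, 4):
    print((-1) ** k * elementary_symmetric(roots, k))   # -5, 8, -4: the same coefficients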

Example 2.1: Solve x^3 − 5x^2 + 8x − 4 = 0, given that two of its roots are equal.

Solution: Let r_1, r_2, r_3 be the three unknown roots of the equation. The extra information is that two of them are identical, say r_1 = r_2. The three relations between roots and coefficients are then

2r_1 + r_3 = 5

r_1^2 + 2 r_1 r_3 = 8

r_1^2 r_3 = 4

The first two relations yield r_1 = r_2 = 2 and r_3 = 1, or r_1 = r_2 = 4/3 and r_3 = 7/3. The first set of values of r_1, r_2, r_3 is seen to satisfy the third relation.
Example 2.2: Solve x^4 − 2x^3 − 21x^2 + 22x + 40 = 0, given that its roots are in arithmetic progression.

Solution: The extra information allows us to write the four roots of the given equation as α − 3β, α − β, α + β, α + 3β with unknowns α and β. To find two unknowns we usually only need two relations among them. Choose, for example,

2 = (α − 3β) + (α − β) + (α + β) + (α + 3β)

−21 = (α − 3β)(α − β) + (α − 3β)(α + β) + (α − 3β)(α + 3β) + (α − β)(α + β) + (α − β)(α + 3β) + (α + β)(α + 3β)
    = 6α^2 − 10β^2

We find α = 1/2 and β = ±3/2. Both values of β yield the arithmetic progression −4, −1, 2, 5. To ascertain that these are the roots of the given equation, we may either verify the remaining two relations between roots and coefficients or verify the given equation by substitution.

2.3 Integer and rational roots

Theorem 2.4 (Rational roots): Let p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_3 x^3 + a_2 x^2 + a_1 x + a_0 be any polynomial with integer coefficients. If the rational number c/d (in lowest terms) is a root of p(x) = 0, then c must be a factor of a_0 and d must be a factor of a_n.

Proof: Let q(x) = x^n + (a_{n-1}/a_n)x^{n-1} + (a_{n-2}/a_n)x^{n-2} + ... + (a_3/a_n)x^3 + (a_2/a_n)x^2 + (a_1/a_n)x + a_0/a_n. By the product-of-the-roots theorem, we know the product of the roots of this polynomial is the fraction (−1)^n·a_0/a_n. Thus if c/d is a root, c must be a factor of a_0 and d must be a factor of a_n. ∎

Integer roots, Theorem 2.5: Let p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_3 x^3 + a_2 x^2 + a_1 x + a_0 be any polynomial with integer coefficients and with a leading coefficient of 1. If p(x) has any rational zeros, then those zeros must all be integers.

Proof: By the rational roots theorem we know the denominator of any rational zero must divide the leading coefficient, which in this case is 1. Thus any denominator must be ±1, making the rational zero a pure integer. ∎

The rational zero test: If f(x) is a polynomial f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 with integer coefficients, then every rational zero of f(x) has the form x = p/q, where p and q have no common factors other than 1, p is a factor of the constant term a_0, and q is a factor of the leading coefficient a_n.

Example 2.3: Find all the possible rational zeros of f(x) = 3x^3 + 2x − 2.

The possible zeros are ±(factor of −2)/(factor of 3); this will be all combinations of ±1 or ±2 divided by ±1 or ±3, which gives ±{1/1, 2/1, 1/3, 2/3}, i.e. {1, −1, 2, −2, 1/3, −1/3, 2/3, −2/3}. Guess what? None of these is actually a zero, because the one real zero that exists is an irrational number. Remember, this test only gives the possible rational zeros. The only guarantee is that if there is a rational zero, it will be contained in the list generated.
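The candidate list in this test is mechanical to generate and check. A Python sketch (function names ours) for the polynomial of Example 2.3; as the text notes, none of the candidates turns out to be a zero.

from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def possible_rational_zeros(coeffs):
    """Candidates p/q from the rational zero test: p | constant term, q | leading coefficient."""
    lead, const = coeffs[0], coeffs[-1]
    cands = {Fraction(sign * p, q)
             for p in divisors(const) for q in divisors(lead) for sign in (1, -1)}
    return sorted(cands)

def evaluate(coeffs, x):
    value = 0
    for a in coeffs:
        value = value * x + a
    return value

f = [3, 0, 2, -2]                                   # 3x^3 + 2x - 2
cands = possible_rational_zeros(f)
print(cands)                                        # the eight candidates ±1, ±2, ±1/3, ±2/3
print([c for c in cands if evaluate(f, c) == 0])    # [] -- none is an actual zero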

Complex zeros occur in conjugate pairs: If f(x) is a polynomial with real coefficients and has one complex zero x = a + bi, then x = a − bi will also be a zero. Furthermore, x^2 − 2ax + a^2 + b^2 will be a factor of f(x).

Example 2.4: Find the zeros of f(x) = x^3 − x^2 − x − 2. Then show that the complex zeros occur in a conjugate pair. If we use the rational zero test, we find that the possible rational zeros are ±{2, 1}, i.e. {2, −2, 1, −1}. Only x = 2 is an actual zero. Dividing f(x) by x − 2 via synthetic division results in

2 |  1   -1   -1   -2
          2    2    2
     1    1    1    0

So f(x) = (x − 2)(x^2 + x + 1). Setting x^2 + x + 1 = 0 gives the quadratic solutions x = (−1 ± √(1 − 4))/2, i.e. x = −1/2 + (√3/2)i and x = −1/2 − (√3/2)i. As you can see, when we solve the quadratic factors, we always get complex conjugate pairs.

2.4 Bounds and location of roots

Theorem 2.6: Let p(x) be any polynomial with real coefficients and a positive leading coefficient.

(Upper Bound) If a > 0 and p(a) > 0, and if in applying synthetic substitution to compute p(a) all numbers in the 3rd row are positive, then a is an upper bound for all the roots of p(x) = 0.

(Lower Bound) If a < 0 and p(a) ≠ 0, and if in applying synthetic substitution to compute p(a) all the numbers in the 3rd row alternate in sign, then a is a lower bound for all the roots of p(x) = 0.

[In either bound case, we can allow any number of zeros in any positions in the 3rd row except in the first and last positions. The first number is assumed to be positive and the last number is p(a) ≠ 0. For upper bounds, we can state alternatively and more precisely that no negatives are allowed in the 3rd row. In the lower bound case the alternating sign requirement is not strict either, as any 0 value can assume either sign as required. In practice you may rarely see any zeros in the 3rd row. However, a slightly stronger and more precise statement is that the bounds still hold even when zeros are present anywhere as interior entries in the 3rd row.]

Upper and lower bounds theorem, proof:

Upper bound: Let b be any root of the equation p(x) = 0. We must show b < a. If b = 0 then clearly b < a, since a is positive in this case. So we assume b ≠ 0. If the constant term of p(x) is 0, then we could factor x or a pure power of x from p(x) and just operate on the resulting polynomial, which is then guaranteed to have a nonzero constant term. So we can implicitly assume p(0) ≠ 0. The last number in the third row of the synthetic substitution process is positive, and it is p(a). Since b is a root, we know by the factor theorem that p(x) = (x − b)·q(x), where q(x) is the quotient polynomial. The leading coefficient of p(x) is also the leading coefficient of q(x), and since all of q(x)'s remaining coefficients are positive, and since a > 0, we must have q(a) > 0. Finally, p(a) = (a − b)·q(a). Since q(a) > 0 we may divide by q(a) and get (a − b) = p(a)/q(a). Now since p(a) and q(a) are both positive, (a − b) > 0, which implies b < a. Note that since the leading coefficient of q(x) is positive and since a > 0, we don't really need all positive numbers in the last row: as long as q(x)'s remaining coefficients are nonnegative we can guarantee that q(a) > 0.

Lower bound: Let b be any root of the equation p(x) = 0. We must show a < b. As in the upper bound proof, we can easily dispense with the case b = 0: clearly a < b when b = 0, because a is negative. We can further implicitly assume no pure power of x is a factor of p(x), and this also allows us to assume p(0) ≠ 0. Since p(b) = 0, by the factor theorem we may write p(x) = (x − b)·q(x). Substituting x = a we have p(a) = (a − b)·q(a). Since p(a) ≠ 0 we know q(a) ≠ 0, so we can divide by q(a) to get (a − b) = p(a)/q(a). Now q(a) is either positive or negative. Because a < 0 and the leading term in q(x) has a positive coefficient, the constant term in q(x) has the same sign as q(a). This fact can be established by considering the two cases of the even or odd degree that q(x) may have. For example, q(x) = 1x^5 − 2x^4 + 3x^3 − 4x^2 + 5x − 6: with a < 0, q(a) < 0, and q(a) and q(x)'s constant term agree in sign. Or q(x) = 1x^4 − 2x^3 + 3x^2 − 4x + 5: with a < 0, q(a) > 0, and again q(a) and q(x)'s constant term agree in sign. We might note that in these examples it would make no difference if any of the interior coefficients were 0. This is because the first term has a positive coefficient, and all the remaining terms just add fuel to the fire with the same sign as the first term. The presence of an interior zero just means you might not get as big a fire, but the first term guarantees there is a flame. Another note is that p(0) = (−b)·q(0), and since we are assuming b ≠ 0, we can divide this equation by −b to conclude that q(0) ≠ 0 when p(0) ≠ 0. So assuming neither b nor the constant term in p(x) is zero guarantees that the constant term in q(x) must be strictly positive or strictly negative. Since the numbers in the third row alternate in sign, p(a) differs in sign from the constant term in q(x). But since the constant term in q(x) has the same sign as q(a), we know p(a) and q(a) differ in sign. So (a − b) = p(a)/q(a) < 0, i.e. a < b. ∎
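Both tests in Theorem 2.6 only look at the third row of the synthetic-substitution table. The Python sketch below (function names ours; it adopts the lenient treatment of zero entries allowed by the bracketed remark) checks the two conditions for the cubic x^3 − x^2 − x − 2 used in Example 2.4:

def third_row(coeffs, a):
    """Third row of the synthetic-substitution table for p(x) at x = a (ends with p(a))."""
    row, acc = [], 0
    for c in coeffs:
        acc = acc * a + c
        row.append(acc)
    return row

def is_upper_bound(coeffs, a):
    """Theorem 2.6 upper-bound test: a > 0, p(a) > 0, no negative entries in the third row."""
    row = third_row(coeffs, a)
    return a > 0 and row[-1] > 0 and all(v >= 0 for v in row)

def is_lower_bound(coeffs, a):
    """Theorem 2.6 lower-bound test: a < 0, p(a) != 0, and the third row alternates
    in sign (zero entries may take either sign)."""
    row = third_row(coeffs, a)
    if a >= 0 or row[-1] == 0:
        return False
    signs = [1]                            # the leading coefficient is assumed positive
    for v in row[1:]:
        expected = -signs[-1]
        if v == 0 or (v > 0) == (expected > 0):
            signs.append(expected)
        else:
            return False
    return True

p = [1, -1, -1, -2]            # x^3 - x^2 - x - 2; its only real zero is x = 2
print(is_upper_bound(p, 3))    # True: 3 is an upper bound for the roots
print(is_lower_bound(p, -1))   # True: -1 is a lower bound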

Example 2.5: c is an upper bound for all real zeros of f(x) if the leading coefficient a is positive, provided none of the numbers in the bottom row of the synthetic division is negative.

Example 2.6: c is a lower bound for all real zeros of f(x) if the bottom row of the synthetic division alternates in sign. If a value in the bottom row is zero, it can be considered to be positive or negative as needed to show the alternating pattern.


Intermediate value theorem 2.7: If f(x) is a polynomial and f(a) ≠ f(b) for a < b, then f(x) takes on every value between f(a) and f(b) on the closed interval [a, b]. Applied to polynomial zeros, the intermediate value theorem states that if f(a) < 0 and f(b) > 0, then there must be a value x = c in the interval [a, b] such that f(c) = 0. In other words, if the graph of a polynomial passes from negative to positive, it must pass through the x-axis at the value of a zero.

Proof: (omitted)


Example 2.7: Given f(x) = x^3 − 2x − 1, f(1) = −2, and f(2) = 3, what does the intermediate value theorem tell us about the existence of a zero?

f(x) must take on all values from y = −2 to y = 3. Thus it must take on the value y = 0. This means that there must be a zero at an x-value between x = 1 and x = 2. In fact, there is such a zero: f(x) = (x + 1)(x^2 − x − 1), so the zero in [1, 2] is (1 + √5)/2 ≈ 1.618.
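The intermediate value theorem is also the basis of the bisection method for locating such a zero numerically. A Python sketch (function name ours) applied to the polynomial of Example 2.7:

def bisect_root(f, a, b, tol=1e-10):
    """Locate a zero of f in [a, b], assuming f(a) and f(b) have opposite signs
    (the situation described by the intermediate value theorem)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:       # the sign change is in [a, m]
            b, fb = m, fm
        else:                 # the sign change is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

f = lambda x: x**3 - 2*x - 1          # Example 2.7: f(1) = -2, f(2) = 3
print(bisect_root(f, 1, 2))           # about 1.618..., i.e. (1 + sqrt(5))/2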


Lemma 2.1: Descartes's rule of signs: Each negative root of p(x) corresponds to a positive root of p(−x). That is, if a < 0 and a is a zero of p(x), then −a is a positive zero of p(−x).

Proof: The graph of the function y = p(−x) is just the graph of y = p(x) reflected over the y-axis. So, if a < 0 and p(a) = 0, then −a > 0, and when x = −a, then p(−x) = p(−(−a)) = p(a) = 0. So −a is a positive zero of p(−x). ∎

Theorem 2.8 (Descartes's rule of signs): Let p(x) be any polynomial with real coefficients.

Positive roots: the number of positive roots of p(x) = 0 is either equal to the number of sign variations in the coefficients of p(x) or else is less than this number by an even integer.

Negative roots: the number of negative roots of p(x) = 0 is either equal to the number of sign variations in the coefficients of p(−x) or else is less than this number by an even integer. Note that when determining sign variations we can ignore terms with zero coefficients.

Proof: The statement about the number of positive roots of p(x) = 0 is exactly the statement of Lemma ... that has already been proved. To prove the statement about the number of negative roots of p(x) = 0 we need only apply Lemma 2.1: each negative root of p(x) corresponds to a positive root of p(−x), and by Lemma ..., the number of positive roots of any polynomial (like p(−x)) is either equal to the number of sign variations in that polynomial (p(−x)), or is less than the number of sign variations in that polynomial by a positive even integer. ∎

Example 2.8: If f(x) = x^4 − 3x^3 + 2x^2 + x − 1, there are k = 3 sign changes, so there will be either k = 3 positive real zeros or k − m = 3 − 2 = 1 positive real zero.

Example 2.9: If f(x) = x^4 − 3x^3 + 2x^2 + x − 1, then f(−x) = x^4 + 3x^3 + 2x^2 − x − 1. There is k = 1 sign change, so there will be k = 1 negative real zero; here the only even integer m with k − m ≥ 0 is m = 0, giving k − m = 1 − 0 = 1 negative real zero.

Remark 2.1: Note that missing terms (coefficient = 0) are not considered. For example, f(x) = −x^5 − 3x^3 − 2x^2 − 1 would have zero sign changes.
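Counting sign variations, ignoring zero coefficients, is mechanical. A Python sketch (function names ours) that reproduces the counts in Examples 2.8 and 2.9:

def sign_variations(coeffs):
    """Number of sign changes in a coefficient list, ignoring zero coefficients."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def negate_variable(coeffs):
    """Coefficients of p(-x), given those of p(x) (highest power first)."""
    n = len(coeffs) - 1
    return [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]

p = [1, -3, 2, 1, -1]                       # x^4 - 3x^3 + 2x^2 + x - 1
print(sign_variations(p))                   # 3 -> 3 or 1 positive real zeros
print(sign_variations(negate_variable(p)))  # 1 -> exactly 1 negative real zero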

Chapter Three

Finding Roots of Polynomials of Small Degree

Step I Divide the polynomial by its leading coefficient so that the new leading coefficient is
equal to 1.

Step II Make a substitution so that the new coefficient immediately following the leading
coefficient is equal to 0.

Step III Make a substitution so that the new trailing coefficient is equal to 1.

For polynomials of degree 1, performing Step I will be enough for us to easily find the root.
In the degree 2 case, we will be able to find the roots after performing Steps I and II. At that
point, we will have succeeded in deriving familiar formulas that you have undoubtedly
come across in your earlier algebra courses. Not surprisingly, the situation for polynomials
of degree 3 will be more difficult. Not only will we need to apply Steps I, II, and III, but we
will also need to make a substitution that appears to come out of nowhere. The degree 4
case will involve even more computations, but in some ways, these computations will be
more natural and motivated than those required to complete the degree 3 case.

Roots of Polynomials of Degree 1: Given

ax + b = 0

with a ≠ 0, apply Step I and divide both sides of the equation by a to obtain

x + b/a = 0

Then subtract b/a from both sides to obtain the root x = −b/a.

Roots of Polynomials of Degree 2: Given

a x^2 + bx + c = 0

with a ≠ 0, apply Step I and divide both sides of the equation by a to obtain

x^2 + (b/a)x + c/a = 0

Next, to apply Step II, make the substitution x = y − b/(2a). Note that if we can find y, then we can certainly find x, thus it suffices to find y. Our previous equation now becomes

(y − b/(2a))^2 + (b/a)(y − b/(2a)) + c/a = 0

Expanding out the terms in this equation, we obtain

y^2 − (b/a)y + b^2/(4a^2) + (b/a)y − b^2/(2a^2) + c/a = 0

which simplifies to

y^2 + (4ac − b^2)/(4a^2) = 0

We then subtract (4ac − b^2)/(4a^2) from both sides to obtain

y^2 = (b^2 − 4ac)/(4a^2)

Taking square roots is an allowable operation, and when we take the square root of both sides we obtain

y = ±√(b^2 − 4ac)/(2a)

Having successfully solved for y, we can now go back to the substitution x = y − b/(2a) and see that

x = (−b ± √(b^2 − 4ac))/(2a)

This is, of course, the familiar quadratic formula.

Example : …
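The derivation above is precisely the quadratic formula, and it translates directly into code. A minimal Python sketch (function name ours; cmath is used so that complex roots are returned as well):

import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 (a != 0) via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -5, 6))   # 3 and 2
print(quadratic_roots(1, 1, 1))    # the roots of x^2 + x + 1: (-1 ± i*sqrt(3)) / 2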

Roots of Polynomials of Degree 3: Given

a x^3 + b x^2 + cx + d = 0

with a ≠ 0, apply Step I and divide both sides of the equation by a to obtain

x^3 + (b/a)x^2 + (c/a)x + d/a = 0

To apply Step II, we now make the substitution x = y − b/(3a). Once again, if we can find y, then we can certainly find x, so it suffices to find y. You should check that our previous equation now becomes

y^3 + ((−b^2 + 3ac)/(3a^2))y + (2b^3 − 9abc + 27a^2 d)/(27a^3) = 0

To simplify matters, we will let B = (−b^2 + 3ac)/(3a^2) and C = (2b^3 − 9abc + 27a^2 d)/(27a^3). Our previous equation now simplifies to

y^3 + By + C = 0

If C = 0, then we can factor and obtain

0 = y^3 + By = y(y^2 + B).

Therefore, one of the roots is 0 and the others can be found by applying the quadratic formula to y^2 + B = 0. As a result, we will assume that C ≠ 0 and, to apply Step III, we will make the substitution y = C^{1/3} z. Note that if we can solve for z, then we will know the value of y, so it suffices to solve for z. Our substitution turns the equation y^3 + By + C = 0 into

0 = (C^{1/3} z)^3 + B(C^{1/3} z) + C = C z^3 + B C^{1/3} z + C

Since C ≠ 0, we can divide this equation by C to obtain z^3 + (B/C^{2/3})z + 1 = 0. We can now further simplify this equation by letting D = B/C^{2/3}, thereby giving us the equation

z^3 + Dz + 1 = 0.

This is as far as Steps I, II, and III can take us. Therefore, in order to finish this problem, we will need another substitution or idea. There is a substitution that works at this point, but, unfortunately, it is rather unmotivated and appears to come out of nowhere. However, it does get the job done, and that is the most important thing. We let z = v − D/(3v) and once again note that if we can find v, then we will know the value of z. Thus, it suffices to find v. Our previous equation now becomes

0 = (v − D/(3v))^3 + D(v − D/(3v)) + 1 = (v^3 − Dv + D^2/(3v) − D^3/(27v^3)) + (Dv − D^2/(3v)) + 1

which simplifies to

v^3 − D^3/(27v^3) + 1 = 0

Multiplying this equation by v^3 and then letting E = −D^3/27 gives

v^6 + v^3 + E = 0.

Finally, let v = w^{1/3}; if we can solve for w, then we can certainly find v. With this substitution, the previous equation now becomes

(w^{1/3})^6 + (w^{1/3})^3 + E = 0

which immediately simplifies to

w^2 + w + E = 0

At various points in this procedure we have taken cube roots, which is an allowable operation. We can now use the quadratic formula to find w. Knowing w, we can then, in order, find v, z, y, and then x. The actual expression for x in terms of a, b, c, d is quite long and complicated. However, at this point, we are much more interested in the fact that such an expression actually exists. Thus, we have succeeded in showing that there is indeed a formula for the roots of a x^3 + b x^2 + cx + d, which only involves combining a, b, c, d using addition, subtraction, multiplication, division, taking square roots, and taking cube roots.
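The whole chain of substitutions can be followed numerically. The Python sketch below (function name ours) computes B, C, D, E, solves w^2 + w + E = 0 with the quadratic formula, and works back through v, z, y to x. It is only a sketch: it returns a single root (possibly complex), picks principal cube roots, and chooses a nonzero solution for w, but it illustrates that the recipe really does produce a root.

import cmath

def cubic_root_via_substitutions(a, b, c, d):
    """One root of a*x^3 + b*x^2 + c*x + d = 0, following the substitutions above."""
    a, b, c, d = complex(a), complex(b), complex(c), complex(d)
    B = (-b * b + 3 * a * c) / (3 * a * a)
    C = (2 * b ** 3 - 9 * a * b * c + 27 * a * a * d) / (27 * a ** 3)
    if C == 0:                               # y*(y^2 + B) = 0: take the root y = 0
        y = 0
    else:
        c13 = C ** (1 / 3)                   # a fixed cube root of C
        D = B / (c13 * c13)
        E = -D ** 3 / 27
        # w^2 + w + E = 0; pick the root of larger modulus so that w != 0
        w1 = (-1 + cmath.sqrt(1 - 4 * E)) / 2
        w2 = (-1 - cmath.sqrt(1 - 4 * E)) / 2
        w = w1 if abs(w1) >= abs(w2) else w2
        v = w ** (1 / 3)
        z = v - D / (3 * v)
        y = c13 * z
    return y - b / (3 * a)

print(cubic_root_via_substitutions(1, -6, 11, -6))  # (x-1)(x-2)(x-3): returns 2 (C = 0 branch)
print(cubic_root_via_substitutions(1, 0, -2, -5))   # x^3 - 2x - 5: one root, possibly complex

The remaining roots can then be obtained by dividing out (x − root) and applying the quadratic formula.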

Roots of Polynomials of Degree 4: Given

a x^4 + b x^3 + c x^2 + dx + e = 0

with a ≠ 0, apply Step I and divide both sides of the equation by a to obtain

x^4 + (b/a)x^3 + (c/a)x^2 + (d/a)x + e/a = 0

In light of our discussion before the degree 3 case, it should now come as no surprise that to apply Step II, we make the substitution x = y − b/(4a). As before, in order to find x, it suffices to find y. Our equation now becomes ... (Leung, Mok, & Suen, 1992)
