MA2730
(MA2712b, MA2730)
Contents

0 Introduction, Overview
1 Taylor Polynomials
  1.1 Lecture 1: Taylor Polynomials, Definition
    1.1.1 Reminder from Level 1 about Differentiable Functions
    1.1.2 Definition of Taylor Polynomials
  1.2 Lectures 2 and 3: Taylor Polynomials, Examples
    1.2.1 Example: Compute and plot T_n f for f(x) = e^x
    1.2.2 Example: Find the Maclaurin polynomials of f(x) = sin x
    1.2.3 Find the Maclaurin polynomial T_11 f for f(x) = sin(x^2)
    1.2.4 Questions for Chapter 6: Error Estimates
  1.3 Lectures 4 and 5: Calculus of Taylor Polynomials
    1.3.1 General Results
  1.4 Lecture 6: Various Applications of Taylor Polynomials
    1.4.1 Relative Extrema
    1.4.2 Limits
    1.4.3 How to Calculate Complicated Taylor Polynomials?
  1.5 Exercise Sheet 1
    1.5.1 Exercise Sheet 1a
    1.5.2 Feedback for Sheet 1a
2 Real Sequences
  2.1 Lecture 7: Definitions, Limit of a Sequence
    2.1.1 Definition of a Sequence
    2.1.2 Limit of a Sequence
    2.1.3 Graphic Representations of Sequences
  2.2 Lecture 8: Algebra of Limits, Special Sequences
    2.2.1 Infinite Limits
    2.2.2 Algebra of Limits
    2.2.3 Some Standard Convergent Sequences
  2.3 Lecture 9: Bounded and Monotone Sequences
    2.3.1 Bounded Sequences
    2.3.2 Convergent Sequences and Closed Bounded Intervals
  2.4 Lecture 10: Monotone Sequences
    2.4.1 Convergence of Monotone, Bounded Sequences
  2.5 Exercise Sheet 2
    2.5.1 Exercise Sheet 2a
    2.5.2 Feedback for Sheet 2a
4 Real Series
  4.1 Lecture 12: Series
    4.1.1 A Tale of a Rabbit and a Turtle following Zeno's
    4.1.2 Definition of a Series
    4.1.3 Convergent Series, Geometric Series
  4.2 Lecture 13: Important Series
    4.2.1 A Criterion for Divergence
    4.2.2 Telescopic Series
    4.2.3 Harmonic Series
    4.2.4 Algebra of Series
  4.3 Lecture 14: Tests for Convergence
    4.3.1 Comparison Test
    4.3.2 Integral Test
  4.4 Lecture 15: Further Tests for Convergence
    4.4.1 Comparing a Series with a Geometric Series: the Root and Ratio Tests
  4.5 Exercise Sheet 3
    4.5.1 Exercise Sheet 3a
    4.5.2 Additional Exercise Sheet 3b
    4.5.3 Feedback for Sheet 3a
5 Deeper Results on Sequences and Series
  5.1 Lecture 16: (ε, N)-Definition of Limits
    5.1.1 Practical Aspects of Estimates of Convergent Sequences
    5.1.2 Divergent Sequences
  5.2 Extra-curricular material: Error Estimates from the Tests
    5.2.1 Error Estimates from the Ratio and Root Tests
    5.2.2 Error Estimates for the Integral Test
  5.3 Lecture 17: Absolute Convergence of Series and the Leibniz Criterion of Convergence
    5.3.1 Alternating Sequences
  5.4 Exercise Sheet 4
    5.4.1 Exercise Sheet 4a
    5.4.2 Additional Exercise Sheet 4b
    5.4.3 Short Feedback for Exercise Sheet 4a
    5.4.4 Short Feedback for the Additional Exercise Sheet 4b
    5.4.5 Feedback for Exercise Sheet 4a
7 Power and Taylor Series
  7.1 Lecture 19: About Taylor and Maclaurin Series
    7.1.1 Some Special Examples of Maclaurin Series
    7.1.2 Convergence Issues about Taylor Series
    7.1.3 Valid Taylor Series Expansions
  7.2 Lecture 20: Power Series, Radius of Convergence
    7.2.1 Behaviour at the Boundary of the Interval of Convergence
    7.2.2 Elementary Calculus of Taylor Series
  7.3 Lecture 21: More on Power and Taylor Series
    7.3.1 The Binomial Theorem
  7.4 Extra-curricular material: General Theorem About Taylor Series
    7.4.1 Examples of Power Series Revisited
    7.4.2 Leibniz' Formulas for ln 2 and π/4
  7.5 Extra-curricular material: Taylor Series and Fibonacci Numbers
    7.5.1 Taylor Series in Number Theory
    7.5.2 Taylor's Formula and Fibonacci Numbers
    7.5.3 More about the Fibonacci Numbers
  7.6 Extra-curricular Christmas treat: Series of Functions Can Be Difficult Objects
    7.6.1 What Can Go 'Wrong' with Taylor Approximation?
    7.6.2 The Day That All Chemistry Stood Still
    7.6.3 Series Can Define Bizarre Functions: Continuous but Nowhere Differentiable Functions
  7.7 Exercise Sheet 6
    7.7.1 Exercise Sheet 6a
    7.7.2 Feedback for Exercise Sheet 6a
    7.7.3 Additional Exercise Sheet 6b
    7.7.4 Exercise Sheet 0c
    7.7.5 Feedback for Exercise Sheet 0c
List of Figures
4.1 The area under the curve is less than the area under the line y = f(k − 1) and it is greater than the area under the line y = f(k).
Chapter 0
Introduction, Overview
The first 12 lectures (Chapters 1-3) contribute to the study blocks MA2730 (for M, FM, MSM
and MCS) and MA2712b (for MMS and MCC). Those blocks feed into the assessment blocks
MA2812 and MA2815 (for M, FM, MSM and MCS) or MA2810 (for MMS and MCC). The
purpose of those lectures is to make you familiar with important concepts in Calculus and
Analysis, namely those of sequences and series as well as Taylor polynomials and series.
In the first three chapters, you shall be introduced to elementary ideas about these concepts, so that you can understand them and follow and perform the relevant calculations. Students studying the full Analysis study block (MA2730) will continue, revisiting, broadening and deepening these concepts. In particular, you will be given the means to use more formal definitions and to prove the results stated in the following first set of lectures.
Most of Calculus (MA1711) and Fundamentals of Mathematics (MA1712) may be used in this set of lectures. For quick reference, we have put some essential Level 1 background material in a revision section on Blackboard. You will have two lectures a week with one seminar (the class is split into four seminar groups). The 24 lectures are split into 6 chapters, with each section of those chapters corresponding to a lecture. There will be one set of exercise sheets per chapter, so most of these sheets will last for a few weeks. For Exercise Sheet Na, N = 1, ..., 6, we will first give you short feedback (to check your answers) and later rather detailed feedback. Additional exercises are given in Exercise Sheets Nb, N = 1, ..., 6; for these there will only be short feedback.
1. to evaluate limits,
In the first two operations, the result of the replacement will be an exact value, in the last it
will only be an approximation.
In the five lectures of this chapter, we shall first define Taylor polynomials, then give examples of the calculation of those Taylor polynomials. This involves the calculation of many derivatives. You should pay attention to the Product, Quotient and Chain Rules for differentiation.
In the second lecture we shall also give a first idea about the error R_n^a f = f − T_n^a f made by replacing a function f by one of its Taylor polynomials T_n^a f. In practical terms, the calculation of the Taylor polynomial of a complicated function can be simplified by using calculus rules to obtain a calculus of Taylor polynomials. To justify those calculations we need the information we obtain in the second lecture about the error term. Those are the topics of the third lecture. We end the chapter with two lectures on the application of Taylor polynomials to calculate limits and to study the type of degenerate critical points of real functions.
Chapter 4: Real Series (4 Lectures)
The first half of the course finishes in this chapter with the study of series of real numbers. Series are infinite sums of sequences, say \sum_{n=1}^{\infty} a_n. You have already seen that a geometric
series converges if and only if its common ratio is strictly smaller than 1 in modulus. In
lecture nine, we give the definition of the convergence of a series, using the convergence of
the sequence of partial sums. In the next lecture ten, we look at other important series
and show that there exists an algebra of convergent series. This leads to the discussion
of many tests to assess the convergence or divergence of series without calculating their
sum. In the last two lectures of the chapter, eleven and twelve, we discuss the Comparison,
Integral, Ratio and Root Tests, usually by comparing with known convergent or divergent
series, like the geometric or harmonic series. In practice, this means that we first establish the convergence or divergence of a series, and only then find its exact sum (when possible) or give estimates for it.
This finishes the first half of the block. We now move to deepen our understanding of the first
12 lectures.
initial segments correspond to the Taylor polynomials T_n e^x of e^x. On the other hand, as we let n tend to ∞, what happens to T_n^a f(x)? How far is it from f(x)? Those questions are studied in the last 5 lectures. In lectures twenty-one and twenty-two, we derive fundamental properties shared by all power series. We finish in the last two lectures by applying those results to Taylor series.
Chapter 1
Taylor Polynomials
p(x) = 1 + x + x^2 + x^3
around x = 3, is to recast p(x) such that the behaviour of p near x = 3 is more transparent. Set x = 3 + y, y = x − 3, and calculate:
1 + x + x^2 + x^3 = 1 + (3 + y) + (3 + y)^2 + (3 + y)^3
                  = 40 + 34y + 10y^2 + y^3
                  = 40 + 34(x − 3) + 10(x − 3)^2 + (x − 3)^3.     (1.1)
This last expression allows us to see more clearly what happens when we are close to x = 3,
that is, when |y| = |x − 3| < 1, because then the powers of x − 3 decrease in value.
There is a question: the final form (1.1) of p involved some calculations (that we skipped).
How is it possible to avoid them? The answer is to look at the derivatives of p at
x = 3. We have
p'(x) = 1 + 2x + 3x^2,   p''(x) = 2 + 6x,   p'''(x) = 6.
Therefore, p(3) = 40, p'(3) = 34, p''(3) = 20, p'''(3) = 6, and thus
p(x) = 40 + 34(x − 3) + \frac{20}{2!}(x − 3)^2 + \frac{6}{3!}(x − 3)^3 = 40 + 34(x − 3) + 10(x − 3)^2 + (x − 3)^3.
Both sides in the above equation are third degree polynomials, and their derivatives of order
0, 1, 2 and 3 are the same at x = 3, so they must be the same polynomial.
We shall generalise that idea to any function f : [a, b] → R. Later in MA2712 and MA2715,
you shall see that those ideas can also be generalised to any function of several variables.
1.1.1 Reminder from Level 1 about Differentiable Functions
But first, a reminder of some Level 1 material about calculus, in particular differentiable functions. A real-valued function f in one dimension is a triple f : I ⊆ R → J ⊆ R, where I is the domain, J is the co-domain, and x ↦ f(x) is the rule assigning to each x ∈ I a value f(x) ∈ J.
Remark 1.1. When the domain and co-domains are clear from the context, we often use only
the rule f (x) to define f : I → J, x 7→ f (x). An example is f (x) = sin(x) or f (x) = ex when
I = J = R (for instance). This is not precise, but often convenient.
The first, second and third derivatives of f are denoted by f', f'' and f'''. For higher derivatives, we use the notation f^{(k)} to mean the k-th derivative of f. By definition, f^{(0)} = f. For k ∈ N, the notation k! means k-factorial, which by definition is
k! = 1 · 2 · ⋯ · (k − 1) · k = \prod_{j=1}^{k} j.
Recall that we also defined 0! = 1! = 1. The following two theorems are fundamental.
Theorem 1.2 (Rolle’s1 Theorem). Let f : [a, b] → R be continuous on [a, b] and differentiable
on (a, b) such that f (a) = f (b) = 0, then there exists a < c < b such that f ′ (c) = 0.
The theoretical underpinning for these facts about Taylor polynomials is the Mean Value
Theorem, which itself depends upon some fairly subtle properties of the real numbers.
Theorem 1.3 (Mean Value Theorem). Let f : [a, b] → R be continuous on [a, b] and dif-
ferentiable on (a, b). For every x ∈ (a, b), there exists a < ξ < x, ξ dependent on x, such
that
f'(ξ) = \frac{f(x) − f(a)}{x − a}.
Definition 1.4. Let I ⊆ R be an open interval and f : I → R an n-times differentiable function at a ∈ I. The Taylor polynomial T_n^a f of degree n of f at the point a is the polynomial
T_n^a f(x) = f(a) + f'(a)(x − a) + \frac{f''(a)}{2!}(x − a)^2 + ⋯ + \frac{f^{(n)}(a)}{n!}(x − a)^n.     (1.2)
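As an illustration (not part of the original notes), formula (1.2) can be checked directly on a computer. The sketch below assumes the Python library sympy is available; the helper name taylor_poly is ours.

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, a, n):
    """Build T_n^a f(x) = sum_{k=0}^n f^(k)(a)/k! * (x - a)^k, as in (1.2)."""
    return sum(f.diff(x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
               for k in range(n + 1))

# The cubic from the introduction, expanded about a = 3:
p = 1 + x + x**2 + x**3
print(sp.expand(taylor_poly(p, 3, 3) - p))   # 0: the same polynomial, as in (1.1)

# The first few Maclaurin polynomials of e^x:
print(taylor_poly(sp.exp(x), 0, 2))          # x**2/2 + x + 1
```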
In the following theorem we show that T_n^a f is indeed the polynomial we seek.
Theorem 1.5. Let I ⊆ R be an open interval and f : I → R an n-times differentiable function at a ∈ I. The Taylor polynomial T_n^a f is the only polynomial p of degree n for which
p(a) = f(a),  p'(a) = f'(a),  p''(a) = f''(a),  ...,  p^{(n)}(a) = f^{(n)}(a)
holds.
Proof. The result depends on a lemma about polynomials that will be proved in the last exercise of Sheet 1a. Namely, given any polynomial p of degree n, say, and a ∈ R, there exist a_i ∈ R, 0 ≤ i ≤ n, such that
p(x) = \sum_{i=0}^{n} a_i (x − a)^i = a_0 + a_1(x − a) + ⋯ + a_n(x − a)^n.     (1.3)
Clearly, taking the derivatives of p in the shape of (1.3) and evaluating at x = a, we find
p(a) = a_0,  p'(a) = a_1,  p''(a) = 2a_2,  p^{(3)}(a) = 2 · 3 a_3, ...
and, in general,
p^{(k)}(a) = k! a_k,  0 ≤ k ≤ n.
Therefore, p is unique and its coefficients in (1.3) are given by
a_k = \frac{f^{(k)}(a)}{k!},  0 ≤ k ≤ n.
Remark 1.6. Note that the zeroth order Taylor polynomial is just a constant,
T_0^a f(x) = f(a),
while the first order Taylor polynomial is
T_1^a f(x) = f(a) + f'(a)(x − a).
This is exactly the linear approximation of f for x close to a which was derived at Level 1. Taylor polynomials generalise this first order approximation by providing 'higher order approximations' to f.
² C. Maclaurin (1698–1746) was a Scottish mathematician. He got his degree at 14 years of age. He spread Newton's theory of fluxions (Newton's version of calculus), developed the integral test for series and worked on geometry. He is the founder of actuarial studies.
1.2 Lectures 2 and 3: Taylor Polynomials, Examples
We look here at a few examples of Taylor and Maclaurin polynomials.
For f(x) = e^x, the first few Maclaurin polynomials are
T_0 f(x) = 1,
T_1 f(x) = 1 + x,
T_2 f(x) = 1 + x + \frac{x^2}{2}.
Figure 1.1: The graphs of y = T_0 f(x), y = T_1 f(x) and y = T_2 f(x).
Contemplating the graphs in Figure 1.1, the following comments are in order:
1. The zeroth order Taylor polynomial T_0 f has the right value at x = 0 but it does not know whether or not the function f is increasing at x = 0. T_0 f(x) is close to e^x for small x, by virtue of the continuity of e^x, because e^x does not change very much if x stays close to x = 0.
3. The graphs of both T_0 f and T_1 f are straight lines, while the graph of y = e^x is curved (in fact, convex). The graph of T_2 f is a parabola and, since it has the same first and second derivative at x = 0, it captures this convexity. It has the right curvature at x = 0. So it seems that
y = T_2 f(x) = 1 + x + x^2/2
is an approximation to y = e^x which beats both T_0 f and T_1 f.
In Figure 1.2, the top edge of the shaded region is the graph of y = e^x. The other graphs are of the functions y = 1 + x + Cx^2 for various values of C. These graphs are all tangent to y = e^x at x = 0, but one of the parabolas matches the graph of y = e^x better than any of the others.
Figure 1.2: The graph of y = e^x together with the parabolas y = 1 + x + Cx^2 for C = −1/2, 1/2, 1 and 3/2.
f(x) = sin x,  f'(x) = cos x,  f''(x) = −sin x,  f^{(3)}(x) = −cos x,     (1.4)
and thus
f^{(4)}(x) = sin x.
So, after four derivatives, you are back to where you started, and the sequence of derivatives
of sin x cycles through the pattern (1.4). At x = 0, you then get the following values for the
derivatives: f(0) = 0, f'(0) = 1, f''(0) = 0, f'''(0) = −1, after which the pattern 0, 1, 0, −1 repeats. The Maclaurin polynomials are therefore
T_0 f(x) = 0,
T_1 f(x) = x,
T_2 f(x) = T_1 f(x),
T_3 f(x) = x − \frac{x^3}{3!} = T_4 f(x),
T_5 f(x) = x − \frac{x^3}{3!} + \frac{x^5}{5!} = T_6 f(x), ...
Note that the Maclaurin polynomial Tn f of any function is a polynomial of degree at most n,
and this example shows that the degree can sometimes be strictly less.
Figure 1.3: The graphs of y = sin x and of the Maclaurin polynomials T_1 f(x), T_5 f(x) and T_9 f(x) on [−2π, 2π].
We shall see in Lecture 1.2 that this is to be expected and will be a useful way not to have to
calculate an enormous number of derivatives.
2. Given T_n^a f and ε, on how large an interval I ∋ a does T_n^a f achieve that tolerance?
To be able to be precise about those statements we shall introduce the following idea.
In general we shall see that there are formulas for the n-th order remainder that we can estimate to give answers to the previous questions. But this is for the future. At present we concentrate on the calculations and some uses of the Taylor polynomials without detailed justification. We shall revisit the justifications later.
Proof. The proof is not really difficult. We leave it to the exercise sheet. It boils down to using l'Hôpital's Rule n times on
h_n(x) = \frac{R_n^a f(x)}{(x − a)^n},  x ≠ a.
³ G. Peano (1858–1932) was an Italian analyst and a founder of mathematical logic. He worked on differential equations, set theory and axiomatics. Peano had a great skill in seeing that theorems were incorrect by spotting exceptions. At the end of his life he became interested in universal languages, both between humans and for teaching and working in mathematics.
1.3 Lectures 4 and 5: Calculus of Taylor Polynomials
The obvious question to ask about Taylor polynomials is ‘What are the first so-many terms in
the Taylor polynomial of some function expanded at some point?’. The most straightforward
way to deal with this is just to do what is indicated by the formula (1.2): take however high
order derivatives you need and plug in x = a. However, very often this is not at all the most
efficient method, especially in a situation where we are interested in a composite function f of the form f(x^n) or, more generally, f(p(x)), where p is a polynomial, or even f(g(x)), where g is a function, all with 'familiar' Taylor polynomial expansions.
The fundamental principles of those calculations follow from the following results. Recall that the degree of a polynomial p is the degree of the highest non-zero monomial appearing in p. It is denoted deg(p).
Lemma 1.10. 1. Let p and q be two polynomials. Then deg(p · q) = deg(p) + deg(q) and deg(p(q)) = deg(p) · deg(q).
2. When the Taylor polynomial T_n^a f of a function f is in the form (1.2), we easily obtain T_k^a f, k ≤ n, by ignoring the terms with powers (x − a)^l, l > k, in T_n^a f.
The proof of this lemma will be provided in Chapter 6, but you can attempt it; it is not difficult.
2. The Taylor polynomial of the product of two functions f and g is given by the terms of degree up to n of the product of their Taylor polynomials. Namely, the terms of degree up to n of:
T_n^a h = T_n^a (T_n^a f · T_n^a g).     (1.8)
Examples.
1. Consider f(x) = e^x and g(x) = sin(x). To illustrate (1.7) and (1.8), we calculate the Maclaurin polynomials of order 3 of h_1(x) = f(x) + g(x) and of h_2(x) = e^x sin(x). For h_1,
T_3 h_1(x) = T_3 f(x) + T_3 g(x) = \left(1 + x + \frac{x^2}{2} + \frac{x^3}{3!}\right) + \left(x − \frac{x^3}{3!}\right) = 1 + 2x + \frac{x^2}{2}.
The idea from (1.8) is just that you multiply the polynomials for e^x and sin(x). And so,
T_3 h_2(x) = T_3\left[\left(1 + x + \frac{x^2}{2} + \frac{x^3}{3!}\right)\left(x − \frac{x^3}{3!}\right)\right] = x + x^2 + \left(−\frac{1}{3!} + \frac{1}{2}\right)x^3 = x + x^2 + \frac{1}{3}x^3.     (1.9)
2. As a more complicated example of (1.8), let f(x) = e^{2x}/(1 + 3x). We exploit our knowledge of the Maclaurin polynomials of both factors, e^{2x} and 1/(1 + 3x). As an example, what is T_3 f? We have
T_3 e^{2x} = 1 + 2x + \frac{2^2 x^2}{2!} + \frac{2^3 x^3}{3!} = 1 + 2x + 2x^2 + \frac{4}{3}x^3,
and
T_3\left(\frac{1}{1 + 3x}\right) = 1 − 3x + 9x^2 − 27x^3.
Then multiply these two and keep the terms of degree up to 3:
T_3 f(x) = 1 − x + 5x^2 − \frac{41}{3}x^3.
(A short computational check of this example is sketched below.)
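The following minimal sketch, assuming the Python library sympy is available (the variable names are ours), carries out exactly the truncated multiplication of rule (1.8).

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin polynomials of order 3 of the two factors, as in the example:
T3_exp = sp.series(sp.exp(2*x), x, 0, 4).removeO()    # 1 + 2x + 2x^2 + 4x^3/3
T3_geo = sp.series(1/(1 + 3*x), x, 0, 4).removeO()    # 1 - 3x + 9x^2 - 27x^3

# Multiply and keep only the terms of degree up to 3 (this is (1.8)):
product = sp.expand(T3_exp * T3_geo)
T3_f = sum(product.coeff(x, k) * x**k for k in range(4))
print(T3_f)                                            # -41*x**3/3 + 5*x**2 - x + 1

# Compare with the Taylor polynomial computed directly from the quotient:
print(sp.series(sp.exp(2*x)/(1 + 3*x), x, 0, 4).removeO())
```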
Remark 1.13. Pay attention to the fact that, with the composition of functions, we need the Taylor polynomial of the second function f about the value of the first one, g(a). In the expression h(x) = f(g(x)) we apply first g, then f. Note that we often use, in Theorem 1.12, functions g such that g(0) = 0, so that (1.10) becomes simply
T_n h = T_n(T_n f(T_n g)).     (1.11)
I am getting tired of differentiating; can you find f^{(12)}(x)? After a lot of work we give up at the sixth derivative, and all we have found is
T_6 f(x) = 1 − x^2 + x^4 − x^6.
By the way,
T_6 g(t) = 1 + t + t^2 + t^3 + t^4 + t^5 + t^6.
T_4 \sin(x) = x − \frac{x^3}{6}
and
T_4 \exp(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24}.
Note that, because sin(0) = 0, T_4^{\sin(0)} \exp(x) = T_4^0 \exp(x) = T_4 \exp(x). And so, from (1.10),
T_4 e^{\sin x} = T_4\big(T_4 \exp(T_4 \sin(x))\big) = 1 + x + \frac{x^2}{2} − \frac{x^4}{8}.
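A hedged computational check of this composition, assuming (as the surrounding expansions suggest) that the function being expanded is e^{sin x}, and assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')

T4_sin = sp.series(sp.sin(x), x, 0, 5).removeO()    # x - x^3/6
T4_exp = sp.series(sp.exp(x), x, 0, 5).removeO()    # 1 + x + x^2/2 + x^3/6 + x^4/24

# Substitute T4 sin(x) into T4 exp and keep the terms of degree <= 4, as in (1.10):
composed = sp.expand(T4_exp.subs(x, T4_sin))
T4_h = sum(composed.coeff(x, k) * x**k for k in range(5))
print(T4_h)                                          # 1 + x + x**2/2 - x**4/8

# The same result computed directly:
print(sp.series(sp.exp(sp.sin(x)), x, 0, 5).removeO())
```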
Theorem 1.14. 1. The Taylor polynomial of the derivative of f is given by the term-by-term derivative of the Taylor polynomial, that is,
T_n^a f'(x) = \big(T_{n+1}^a f(x)\big)'.     (1.14)
2. The Taylor polynomial of the anti-derivative (primitive/integral) F of f is given by integrating the Taylor polynomial term by term, that is,
T_{n+1}^a F(x) = C + \int T_n^a f(x)\, dx.     (1.15)
Example. To find the Maclaurin polynomials for f(x) = tan^{−1}(x) = arctan(x), we start with an example we already know. Thinking about derivatives and anti-derivatives, we see that f is the anti-derivative of f'(x) = \frac{1}{1 + x^2}. So, we would have a plan if we knew the Maclaurin polynomial for \frac{1}{1 + x^2}. Well, we know it. So the plan is this:
T_{2n} f'(x) = 1 − x^2 + x^4 − x^6 + x^8 − ⋯ + (−1)^n x^{2n}.
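The plan of integrating term by term can be carried out mechanically; the following sketch (assuming sympy, and not the notes' own code) is one way to do it.

```python
import sympy as sp

x = sp.symbols('x')
n = 4  # gives T_{2n} of f' and hence T_{2n+1} of arctan

# Maclaurin polynomial of f'(x) = 1/(1+x^2): 1 - x^2 + x^4 - ... + (-1)^n x^(2n)
T_fprime = sum((-1)**k * x**(2*k) for k in range(n + 1))

# Integrate term by term; the constant is arctan(0) = 0 (Theorem 1.14, part 2):
T_arctan = sp.integrate(T_fprime, x)
print(T_arctan)                                   # x - x^3/3 + x^5/5 - x^7/7 + x^9/9
print(sp.series(sp.atan(x), x, 0, 10).removeO())  # agrees
```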
When a = 0, Theorems 1.11, 1.12 and 1.14 lead to the following classical, useful examples about Maclaurin polynomials.
2. If h(x) = \frac{1}{1 − p(x)}, then
T_n h(x) = T_n\left(\sum_{k=0}^{n} (p(x))^k\right).
4. If h(x) = cos(p(x)), then
T_n h(x) = T_n\left(\sum_{k=0}^{n} \frac{(−1)^k}{(2k)!} (p(x))^{2k}\right).
1.4 Lecture 6: Various Applications of Taylor Polynomials
We shall look at two areas of use of Taylor approximations:
1. generalised criteria using derivatives to determine the type of a critical point (maximum, minimum or saddle point),
2. the evaluation of limits.
Then,
2. if n is even,
f(x) = T_n^a f(x) + R_n^a f(x) = f(a) + f'(a)(x − a) + ⋯ + \frac{f^{(n)}(a)}{n!}(x − a)^n + R_n^a f(x)
     = f(a) + \frac{f^{(n)}(a)}{n!}(x − a)^n + R_n^a f(x)
     = f(a) + \left(\frac{f^{(n)}(a)}{n!} + h_n(x)\right)(x − a)^n,
⁴ G.F.A. marquis de l'Hôpital (1661–1704) was a French mathematician. He wrote the first textbook of Calculus, in which he gave the rule bearing his name.
where h_n appears in the Peano form of the remainder.
Define g_1(x) = \frac{f^{(n)}(a)}{n!} + h_n(x), and denote by g_2 the constant function g_2(x) = \frac{f^{(n)}(a)}{n!}. The function g_1 is close to the constant function g_2 when x is close to a. Therefore, we can deduce that for x close to a the value g_1(x) has the same sign as the constant \frac{f^{(n)}(a)}{n!}.
Thus, it is enough to study the local extrema of the approximation
f(a) + \frac{f^{(n)}(a)}{n!}(x − a)^n.
It is now simple calculus to show that our conclusions are correct.
Examples.
1. Let f(x) = \sin(x^2) − x^2\cos(x). Determine the type of the critical point x = 0.
The first non-zero terms of the Maclaurin polynomials are
\sin(x^2) = x^2 − \frac{x^6}{6} + ⋯,
x^2\cos(x) = x^2\left(1 − \frac{x^2}{2} + \frac{x^4}{24}\right) = x^2 − \frac{x^4}{2} + \frac{x^6}{24} + ⋯.
And so, f(x) = \frac{x^4}{2} − \frac{5}{24}x^6 + ⋯. Hence, x = 0 is a (local) minimum.
2. Let g(x) = \cos(x^2) − e^{\sin x} − \ln(1 − x) + ax^3, where a ∈ R is a parameter. Discuss the type of the critical point x = 0 as a function of a.
The first non-zero terms of the Maclaurin polynomials are
\cos(x^2) = 1 − \frac{x^4}{2} + \frac{x^8}{24} + ⋯,
e^{\sin x} = 1 + \left(x − \frac{x^3}{6}\right) + \frac{1}{2}\left(x − \frac{x^3}{6}\right)^2 + \frac{1}{6}\left(x − \frac{x^3}{6}\right)^3 + ⋯
          = 1 + x + \frac{x^2}{2} − \frac{x^4}{8} + ⋯,
−\ln(1 − x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + ⋯.
And so,
g(x) = \left(\frac{1}{3} + a\right)x^3 − \frac{x^4}{8} + ⋯.
Therefore, when a ≠ −1/3, x = 0 is a saddle point, but when a = −1/3, x = 0 is a (local) maximum. This local maximum is indeed 'illustrated' in Figure 1.4. (A computational check of both examples is sketched below.)
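Both examples can be double-checked by asking a computer algebra system for the first few series terms. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

x, a = sp.symbols('x a')

# Example 1: the leading term of sin(x^2) - x^2*cos(x) at x = 0
f = sp.sin(x**2) - x**2*sp.cos(x)
print(sp.series(f, x, 0, 7))   # x**4/2 - 5*x**6/24 + O(x**7): even power, positive -> minimum

# Example 2: the leading terms of g, with the parameter a kept symbolic
g = sp.cos(x**2) - sp.exp(sp.sin(x)) - sp.log(1 - x) + a*x**3
print(sp.series(g, x, 0, 5))   # (a + 1/3)*x**3 - x**4/8 + O(x**5)
```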
Figure 1.4: The graph near x = 0 of g(x) = \cos(x^2) − e^{\sin x} − \ln(1 − x) − \frac{x^3}{3}.
1.4.2 Limits
At Level 1 you have seen l'Hôpital's Rule. Let h(x) = \frac{f(x)}{g(x)} and suppose that there exists an a such that f(a) = g(a) = 0 and g'(a) ≠ 0; then \lim_{x→a} h(x) = \frac{f'(a)}{g'(a)}. But if g'(a) = 0 this is of no help. We now prove an extension of that result using Taylor polynomials.
Theorem 1.17 (Generalised l'Hôpital Rule). Let I ⊆ R be an open set, a ∈ I, and f, g : I ⊆ R → R be, respectively, (n+1)- and (m+1)-times continuously differentiable at x = a. Suppose
f(a) = f'(a) = ⋯ = f^{(n−1)}(a) = 0,  f^{(n)}(a) ≠ 0,
g(a) = g'(a) = ⋯ = g^{(m−1)}(a) = 0,  g^{(m)}(a) ≠ 0.
Then h : I → R, defined by h(x) = \frac{f(x)}{g(x)}, has the following limits as x → a:
1. if n > m, \lim_{x→a} h(x) = 0,
2. if n = m, \lim_{x→a} h(x) = \frac{f^{(n)}(a)}{g^{(n)}(a)},
3. if n < m, the limit \lim_{x→a} h(x) is unbounded.
where h_n appears in the Peano form of the remainder. Similarly for g: for x close to a,
g(x) = g(a) + g'(a)(x − a) + ⋯ + \frac{g^{(m)}(a)}{m!}(x − a)^m + R_m^a g(x)
     = \frac{g^{(m)}(a)}{m!}(x − a)^m + R_m^a g(x) = \left(\frac{g^{(m)}(a)}{m!} + h_m(x)\right)(x − a)^m.
Now, write
h(x) = \frac{f(x)}{g(x)} = (x − a)^{n−m}\,\frac{f^{(n)}(a)/n! + h_n(x)}{g^{(m)}(a)/m! + h_m(x)}.
Note that the last fraction is bounded as x → a. Now we consider our three cases:
Example. With the two functions from the previous section, let
h(x) = \frac{\sin(x^2) − x^2\cos(x)}{\cos(x^2) − e^{\sin x} − \ln(1 − x) + ax^3}.
Then
\lim_{x→0} h(x) = \lim_{x→0} \frac{x^4/2}{(1/3 + a)x^3 − x^4/8} = \begin{cases} 0, & a ≠ −1/3;\\ −4, & a = −1/3.\end{cases}
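A quick check of this limit, assuming sympy is available (the choice a = 1/2 below merely stands in for 'any a ≠ −1/3'):

```python
import sympy as sp

x = sp.symbols('x')
num = sp.sin(x**2) - x**2*sp.cos(x)

for a in (sp.Rational(1, 2), sp.Rational(-1, 3)):
    den = sp.cos(x**2) - sp.exp(sp.sin(x)) - sp.log(1 - x) + a*x**3
    print(a, sp.limit(num/den, x, 0))   # 0 for a = 1/2,  -4 for a = -1/3
```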
Remark 1.18. In Chapter 6, with a more precise expression for the error term, we shall also look at the use of Taylor polynomial approximations to estimate integrals, for instance
\int_a^b f(x)\,dx ≈ \int_a^b T_n^a f(x)\,dx ≈ \int_a^b T_n^b f(x)\,dx ≈ \int_a^b T_n^c f(x)\,dx,
where c = \frac{a+b}{2}. Note that those are only approximations. In Chapter 6 we shall be able to estimate the maximum error of the different formulas and so determine which one is more appropriate depending on what we want to achieve.
1.4.3 How to Calculate Complicated Taylor Polynomials?
We consider a few examples illustrating Corollary 1.15 and showing how to work out Maclaurin polynomials of arbitrary order (when such formulae exist). The question is to find the Maclaurin polynomial of order n of the following functions. (A computational check of some of the resulting formulas is sketched after the examples.)
Examples.
1. Let f(x) = \frac{x + 2}{3 + 2x}. Simplify
f(x) = \frac{x + 2}{3} \cdot \frac{1}{1 + (2x)/3}.
We know that
T_n \frac{1}{1 + (2x)/3} = \sum_{k=0}^{n} (−1)^k \left(\frac{2x}{3}\right)^k.
Multiplying by \frac{x + 2}{3} and collecting the terms up to the power x^n, we find
T_n f(x) = \frac{2}{3} + \sum_{k=1}^{n} (−1)^k \frac{2^{k−1}}{3^{k+1}} x^k.
2. Let f(x) = \frac{1}{\sqrt{1 − x}}. You can use the Binomial Theorem (from Level 1) directly or calculate the derivatives of f. We get
f^{(k)}(0) = \frac{1 · 3 · 5 ⋯ (2k − 1)}{2^k}.
Recall that 2 · 4 · 6 ⋯ (2k) = 2^k (k!). And so, by multiplying top and bottom by the product of the even terms, we get f^{(k)}(0) = \frac{(2k)!}{4^k (k!)}. Therefore
T_n f(x) = \sum_{k=0}^{n} \frac{(2k)!}{4^k (k!)^2} x^k.
Remark 1.19. Note that f is equal to (−2) times the derivative of g(x) = \sqrt{1 − x}. So, if we know T_n g, then T_n f = −2(T_n g)'.
3. Let f(x) = \sqrt{1 + x^4}. We substitute z = x^4 into the Maclaurin polynomials of g(z) = \sqrt{1 + z}. Recall that
T_n g(z) = 1 + \sum_{k=1}^{n} (−1)^{k+1} \frac{(2k)!}{4^k (k!)^2 (2k − 1)} z^k.
And so,
T_n f(x) = 1 + \sum_{k=1}^{⌊n/4⌋} (−1)^{k+1} \frac{(2k)!}{4^k (k!)^2 (2k − 1)} x^{4k}.
4. Let f(x) = \ln\sqrt{\frac{1 + x}{1 − x}}. We simplify
f(x) = \frac{1}{2}\ln(1 + x) − \frac{1}{2}\ln(1 − x).
Recall the Maclaurin polynomials
T_n \ln(1 + x) = \sum_{k=1}^{n} (−1)^{k+1} \frac{x^k}{k},
T_n \ln(1 − x) = −\sum_{k=1}^{n} \frac{x^k}{k}.
Hence,
T_n f(x) = \sum_{k=0}^{⌊(n−1)/2⌋} \frac{x^{2k+1}}{2k + 1}.
5. Let f(x) = arctan(x). Recall that f'(x) = \frac{1}{1 + x^2}. Use Theorem 1.14, integrating the Maclaurin polynomial of f':
T_n f'(x) = \sum_{k=0}^{⌊n/2⌋} (−1)^k x^{2k}.
We get that
T_n f(x) = \sum_{k=0}^{⌊(n−1)/2⌋} (−1)^k \frac{x^{2k+1}}{2k + 1}.
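Closed-form coefficient formulas like those in Examples 2 and 4 are easy to mistype; a short sympy sketch (ours, not part of the notes) can compare them with the series computed directly.

```python
import sympy as sp

x = sp.symbols('x')
n = 8

# Example 2: T_n f(x) = sum (2k)!/(4^k (k!)^2) x^k for f(x) = 1/sqrt(1-x)
closed_sqrt = sum(sp.factorial(2*k) / (4**k * sp.factorial(k)**2) * x**k
                  for k in range(n + 1))
direct_sqrt = sp.series(1/sp.sqrt(1 - x), x, 0, n + 1).removeO()
print(sp.simplify(closed_sqrt - direct_sqrt))   # 0

# Example 4: T_n f(x) = sum x^(2k+1)/(2k+1) for f(x) = ln(sqrt((1+x)/(1-x)))
closed_log = sum(x**(2*k + 1) / (2*k + 1) for k in range((n - 1)//2 + 1))
direct_log = sp.series(sp.log(sp.sqrt((1 + x)/(1 - x))), x, 0, n + 1).removeO()
print(sp.simplify(closed_log - direct_log))     # 0
```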
Summary of Chapter 1
We have seen:
1.5 Exercise Sheet 1
3. Compute T_0^a f(x), T_1^a f(x) and T_2^a f(x) for the following functions.
5. Find the Maclaurin polynomials T_n f of any order n for the following rules.
(i) f(t) = e^{kt}, for some constant k. (ii) f(t) = e^{1+t}. (iii) f(t) = e^{−t^2}. (iv) f(t) = cos(t^5). (v) f(t) = \frac{1+t}{1−t}. (vi) f(t) = \frac{1}{1+2t}. (vii) f(t) = \frac{1}{2−t}. (viii) f(t) = \frac{e^t}{1−t}. (ix) f(t) = \frac{1}{\sqrt{1−t}}. (x) f(t) = \frac{1}{2−t−t^2}. (xi) f(t) = \ln(1 − t^2). (xii) f(t) = \sin t \cos t.
6. Let f : R → R be defined as f(x) = |x|^3.
[a] a = 0?
[b] a = 1?
[c] a = −1?
(ii) When f has Taylor polynomials, determine them (in the three cases).
(iii) p(x) = 4x^3 − 4x^2 − 3x + 1.
2. (i) [−25, ∞).
(ii) 5 + \frac{x}{10} − \frac{x^2}{1000} + \frac{x^3}{5 · 10^4}.
3. Only T_2^a f(x) is given (think why?).
(i) T_2 f(x) = 0, T_2^1 f(x) = 1 + 3(x−1) + 3(x−1)^2 and T_2^2 f(x) = 8 + 12(x−2) + 6(x−2)^2.
(ii) T_2^1 f(x) = 1 − (x−1) + (x−1)^2 and T_2^2 f(x) = \frac{1}{2} − \frac{1}{4}(x−2) + \frac{1}{8}(x−2)^2.
(iii) T_2^1 f(x) = 1 + \frac{1}{2}(x−1) − \frac{1}{8}(x−1)^2.
(iv) T_2^1 f(x) = (x−1) − \frac{1}{2}(x−1)^2 and T_2^{e^2} f(x) = 2 + e^{−2}(x − e^2) − \frac{e^{−4}}{2}(x − e^2)^2.
(v) Hint: note that \ln(\sqrt{x}) = \frac{1}{2}\ln(x), then use the answer of (iv)!
(vi) T_2 f(x) = 2x and T_2^{π/4} f(x) = 1 − 2(x − π/4)^2.
(vii) T_2^π f(x) = −1 + \frac{1}{2}(x − π)^2.
(viii) T_2 f(x) = 1 − 2x + x^2 and T_2^1 f(x) = (x − 1)^2.
(ix) T_2 f(x) = 1 − x + \frac{x^2}{2}.
4. (i) T_4 f(x) = 1 + x + \frac{3}{2}x^2 + \frac{7}{6}x^3 + \frac{25}{24}x^4.
Hint: use T_4 e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \frac{z^4}{4!} in part 1. of Corollary 1.15, because p(x) = x^2 + x satisfies p(0) = 0.
(ii) T_4 f(x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4}.
Hint: recall ln(1/a) = −ln(a). Then use part 5. of Corollary 1.15 with p(x) = −x.
(iii) T_4 f(x) = e − \frac{e}{2}x^2 + \frac{e}{6}x^4.
Hint: this is the more difficult one. Note cos(0) = 1, so use the Taylor polynomial of e^z near z = 1 with z = T_4 \cos x = 1 − \frac{x^2}{2} + \frac{x^4}{24}.
(iv) T_4 f(x) = 1 − x + \frac{x^3}{3} − \frac{x^4}{6}.
Hint: look at (1.9).
5. Recall that ⌊n⌋ denotes the largest integer not exceeding n. This will be useful to stop the sums so that the powers of the variable do not exceed n. Unless indicated, use Corollary 1.15 with the right p (check, and arrange that p(0) = 0 if needed).
(i) T_n f(t) = \sum_{j=0}^{n} \frac{k^j}{j!} t^j.
(ii) T_n f(t) = \sum_{k=0}^{n} \frac{e}{k!} t^k.
(iii) T_n f(t) = \sum_{k=0}^{⌊n/2⌋} \frac{(−1)^k}{k!} t^{2k}.
(iv) T_n f(t) = \sum_{k=0}^{⌊n/10⌋} \frac{(−1)^k}{(2k)!} t^{10k}.
(v) T_n f(t) = 1 + 2\sum_{k=1}^{n} t^k.
1.5.2 Feedback for Sheet 1a
1. We use Theorem 1.5.
2. (i) To determine the largest domain of a rule f, you need to find the x's such that f(x) makes sense. In this case, x + 25 must be non-negative, so x + 25 ≥ 0, hence D(f) = [−25, ∞).
(ii) From Theorem 1.5, p is the Taylor polynomial T_3^0 f of f. Straightforward differentiation gets you p(0) = f(0) = 5, p'(0) = f'(0) = 1/10, p''(0) = f''(0) = −1/500 and p'''(0) = f'''(0) = 6/50000. Hence
p(x) = 5 + \frac{x}{10} − \frac{x^2}{1000} + \frac{x^3}{5 · 10^4}.
3. You are being asked to provide the Taylor polynomials of f up to order 2. Note that if you keep the Taylor polynomial in the shape (1.2), the Taylor polynomial of lower order, say T_k^a f, can be found from the Taylor polynomial of higher order, say T_l^a f, by cutting out all the terms (x − a)^j of order k < j ≤ l from T_l^a f (though this must be at the same a). Hence we only calculate and give the answer as T_2^a f(x), in the shape of (1.2).
(v) Let g(x) = ln(x). Because \ln(\sqrt{x}) = \frac{1}{2}\ln(x), we have T_2^1 f = \frac{1}{2} T_2^1 g (think about it). Using the answer of (iv), T_2^1 f(x) = \frac{1}{2}(x−1) − \frac{1}{4}(x−1)^2.
(vi) T_2 f(x) = 2x and T_2^{π/4} f(x) = 1 − 2(x − π/4)^2.
(vii) T_2^π f(x) = −1 + \frac{1}{2}(x − π)^2.
(viii) T_2 f(x) = 1 − 2x + x^2 and T_2^1 f(x) = (x − 1)^2.
(ix) T_2 f(x) = 1 − x + \frac{x^2}{2}.
4. In the following you do not need to evaluate all powers of (x − a), only the ones up to order 4 (because we want the Taylor polynomials of order 4).
(i) Use part 1. of Corollary 1.15 with p(x) = x + x^2. Because p(0) = 0, we can substitute it into the fourth order Maclaurin polynomial of e^z (around z = 0), namely T_4 e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \frac{z^4}{4!}. We get
T_4 f(x) = T_4\left(1 + (x + x^2) + \frac{(x + x^2)^2}{2!} + \frac{(x + x^2)^3}{3!} + \frac{(x + x^2)^4}{4!}\right)
         = 1 + x + \frac{3}{2}x^2 + \frac{7}{6}x^3 + \frac{25}{24}x^4.
(ii) Recall that ln(1/a) = −ln(a). Use part 5. of Corollary 1.15 with p(x) = −x. Then
T_4 f(x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4}.
(iii) Because cos(0) = 1, we need to use the Taylor polynomial of e^z around z = 1, that is,
T_4^1 e^z = e + e(z − 1) + e\frac{(z − 1)^2}{2!} + e\frac{(z − 1)^3}{3!} + e\frac{(z − 1)^4}{4!}.
Then we replace z − 1 by the Maclaurin expansion of order 4 of
\cos x − 1 = −\frac{x^2}{2} + \frac{x^4}{24}.
Therefore,
T_4 f(x) = e − \frac{e}{2}x^2 + \frac{e}{6}x^4.
(iv) Use (1.9) in Theorem 1.11. We multiply together the Maclaurin polynomials of order 4 of e^{−x} and cos(x), namely,
T_4 e^{−x} = 1 + (−x) + \frac{1}{2}(−x)^2 + \frac{1}{6}(−x)^3 + \frac{1}{24}(−x)^4,
T_4 \cos(x) = 1 − \frac{1}{2}x^2 + \frac{1}{24}x^4.
Hence, collecting all terms up to order 4,
T_4 f(x) = T_4\big(T_4 e^{−x} · T_4 \cos(x)\big) = 1 − x + \frac{x^3}{3} − \frac{x^4}{6}.
5. Recall that ⌊n⌋ denotes the largest integer not exceeding n. This will be useful to stop the sums so that the powers of the variable do not exceed n.
(i) Use part 1. of Corollary 1.15 with p(t) = kt. We get T_n f(t) = \sum_{j=0}^{n} \frac{(kt)^j}{j!} = \sum_{j=0}^{n} \frac{k^j}{j!} t^j.
(ii) Because e^{1+t} = e · e^t, we get T_n f(t) = \sum_{k=0}^{n} \frac{e}{k!} t^k.
(iii) Use part 1. of Corollary 1.15 with p(t) = −t^2. Because we need to stop at degree 2k = n, we only need to add the terms up to k = ⌊n/2⌋. Finally, we get T_n f(t) = \sum_{k=0}^{⌊n/2⌋} \frac{(−1)^k}{k!} t^{2k}.
(iv) Use part 4. of Corollary 1.15 with p(t) = t^5. Because we need to stop at degree 10k = n, we get T_n f(t) = \sum_{k=0}^{⌊n/10⌋} \frac{(−1)^k}{(2k)!} t^{10k}.
(v) Decompose f in partial fractions: f(t) = \frac{1+t}{1−t} = −1 + \frac{2}{1−t}. Using the GP formula, you get T_n f(t) = 1 + 2\sum_{k=1}^{n} t^k.
(vi) Substitute u = −2t in the GP formula for 1/(1 − u). You get T_n f(t) = \sum_{k=0}^{n} (−2)^k t^k.
(vii) To use part 2. of Corollary 1.15, you need to write 2 − t = 2(1 − t/2). And so,
T_n f(t) = \frac{1}{2}\sum_{k=0}^{n} \left(\frac{t}{2}\right)^k = \sum_{k=0}^{n} \frac{1}{2^{k+1}} t^k.
(viii) Let g(t) = e^t and h(t) = 1/(1 − t). Then,
T_n g(t) = \sum_{k=0}^{n} \frac{t^k}{k!},   T_n h(t) = \sum_{k=0}^{n} t^k,
and so
T_n f(t) = \sum_{k=0}^{n} \big(T_k \exp(1)\big) t^k.
(ix) Calculate the derivatives of (1 − t)^{−1/2}, being careful with the minus signs (alternatively, use the Binomial Theorem). The term of order k in the Taylor polynomial is
\frac{1 · 3 ⋯ (2k − 1)}{2 · 4 ⋯ 2k}.
To simplify it, note that the product of the even terms is
2 · 4 · ⋯ · 2n = 2^n (n!).
Hence, if you multiply top and bottom by the missing even terms, you get
T_n f(t) = \sum_{k=0}^{n} \frac{(2k)!}{(k!\, 2^k)^2} t^k.
(x) Decompose in partial fractions:
\frac{1}{2 − t − t^2} = \frac{1}{3(2 + t)} + \frac{1}{3(1 − t)} = \frac{1}{6(1 + (t/2))} + \frac{1}{3(1 − t)}.
Using the Taylor polynomials of a geometric progression, we get
T_n f(t) = \frac{1}{6}\sum_{k=0}^{n} (−t/2)^k + \frac{1}{3}\sum_{k=0}^{n} t^k = \sum_{k=0}^{n} \frac{1}{3}\left(1 + \frac{(−1)^k}{2^{k+1}}\right) t^k.
(xii) Recall that \sin t \cos t = \frac{1}{2}\sin(2t). Using part 3. of Corollary 1.15 with p(t) = 2t, we get
T_n f(t) = \sum_{k=0}^{⌊(n−1)/2⌋} \frac{(−1)^k 4^k}{(2k + 1)!} t^{2k+1}.
6. Recall from Level 1 that the rule f(x) can be written as f(x) = x^3 for x ≥ 0 and f(x) = −x^3 for x < 0.
It is clear that all the derivatives of f exist at x = 1 or x = −1. At x = 0, only f(0), f'(0) and f''(0) exist (all equal to 0); the higher derivatives do not. And so, we have the following Taylor polynomials.
7. (i) At the limit, the fraction is of type 0/0. To eliminate the ambiguity, it is quick to see that we need to go to order at least 3 (maybe higher, but let's start at 3). Recall that T_3 \sin x = x − x^3/6 and T_2 \cos x = 1 − x^2/2, and so
T_3 \tan x = T_3\left(\frac{T_3 \sin x}{T_3 \cos x}\right) = T_3\left[\left(x − \frac{x^3}{6}\right)\left(1 + \frac{x^2}{2}\right)\right] = x + \frac{x^3}{3}.
We can now replace everything and get, up to third order,
T_3(x(1 + \cos x) − 2\tan x) = 2x − \frac{x^3}{2} − 2x − \frac{2x^3}{3} = −\frac{7}{6}x^3,
T_3(2x − \sin x − \tan x) = 2x − x + \frac{x^3}{6} − x − \frac{x^3}{3} = −\frac{1}{6}x^3.
And so the quotient, hence the limit, is 7.
Note that we would need to differentiate 3 times to use l'Hôpital's Rule (try it!).
(ii) We need to expand the functions around x = 1 to identify the leading order term. We get
T_n^1 \ln x = (x − 1) − \frac{1}{2}(x − 1)^2 + ⋯.
And so, for the numerator, T_n^1(1 − x + \ln x) = −\frac{1}{2}(x − 1)^2 + ⋯. The denominator can also be written as
1 − \sqrt{2x − x^2} = 1 − \sqrt{1 − (x − 1)^2}
(iii) The two fractions tend to +∞ as x tends to 0. It is easier to recast the expression as one fraction, namely \frac{\ln(1 + x) − x}{x\ln(1 + x)}, and evaluate the leading order terms. We have T_n \ln(1 + x) = x − \frac{x^2}{2} + ⋯, and so the elements of the fraction become
T_n(\ln(1 + x) − x) = −\frac{x^2}{2} + ⋯,
T_n(x\ln(1 + x)) = x^2 + ⋯,
and so the limit is −1/2.
(iv) We again have the difference '∞ − ∞'. Again, recast the expression as a fraction:
\frac{(\sin x)^2 − x}{x(\sin x)^2}.
The leading order terms are
T_n((\sin x)^2 − x) = −x + ⋯,
T_n(x(\sin x)^2) = x^3 + ⋯,
and so the limit is −∞.
(v) We start by establishing the leading order term of the denominator around x = 0. Recall that
2 T_n \sinh x = T_n(e^x − e^{−x}) = T_n e^x − T_n e^{−x} = 2x + \frac{x^3}{3} + ⋯,
and so x − \sinh x = −\frac{x^3}{6} + ⋯. Therefore we need to look at the numerator and fix α such that its leading term is at least cubic. The difficult term to deal with is
T_3\, x e^{−\sin x} = x\, T_3\left(1 − \sin x + \frac{1}{2}\sin^2 x\right) = x − x^2 + \frac{x^3}{2}.
Then
T_3\left(x e^{−\sin x} − \sin x + αx^2\right) = x − x^2 + \frac{x^3}{2} − x + \frac{x^3}{6} + αx^2 = (α − 1)x^2 + \frac{2}{3}x^3.
Therefore we need to set α = 1, and the limit is (2/3)/(−1/6) = −4.
8. (i) We know that
T_n \sin(x^2) = x^2 − \frac{x^6}{6} + ⋯,
T_n(x^2 \cos x) = x^2\left(1 − \frac{x^2}{2} + ⋯\right) = x^2 − \frac{x^4}{2} + ⋯,
and so the leading order terms of the function are
\frac{x^4}{2} − \frac{x^6}{6} + ⋯
and so x = 0 is a minimum.
(ii) The rule f(x) is odd, f(−x) = −f(x), and so the critical point x = 0 can only be a saddle point. We need to find the leading term of f(x) at x = 0. We know that
T_n \sin x = x − \frac{x^3}{6} + \frac{x^5}{120} − \frac{x^7}{7!} + ⋯.
For the fraction, using that (1 + z)^{−1} = 1 − z + z^2 − z^3 + ⋯, we have the following: when b ≠ a + 1/6, the leading term is cubic; when b = a + 1/6 and a ≠ −7/60, the leading term is quintic; and when (a, b) = (−7/60, 1/20), the leading term is of order 7.
9. Consider g(x) = f(x) − T_n^a f(x). Then h_n(x) = \frac{g(x)}{(x − a)^n}. Now, the (n−1)-th derivatives of g and of (x − a)^n are f^{(n−1)}(x) − f^{(n−1)}(a) − f^{(n)}(a)(x − a) and n!(x − a), respectively. Clearly they are both 0 at x = a, so, using l'Hôpital's Rule,
\lim_{x→a} h_n(x) = \lim_{x→a} \frac{g'(x)}{n(x − a)^{n−1}} = 0.
10. Set a_0 = p(a). Let q_1(x) = p(x) − a_0, so deg(q_1) = deg(p) and q_1(a) = 0. Hence, there exists a unique polynomial r_1(x) of degree deg(q_1) − 1 = deg(p) − 1 such that q_1(x) = r_1(x)(x − a). Therefore,
p(x) = a_0 + r_1(x)(x − a).
Now, let a_1 = r_1(a) and let q_2(x) = r_1(x) − a_1. Again, deg(q_2) = deg(r_1) = deg(p) − 1 and q_2(a) = 0. So, there exists a unique polynomial r_2 of degree deg(r_1) − 1 = deg(p) − 2 such that q_2(x) = r_2(x)(x − a). Hence, r_1(x) = a_1 + r_2(x)(x − a) and
p(x) = a_0 + a_1(x − a) + r_2(x)(x − a)^2.
The process can be iterated until the degree of r_n is 0, and the conclusion follows.
Chapter 2
Real Sequences
1, 2, 3, 4, . . . 1, 4, 9, 16, . . . − 1, 1, −1, 1, . . . 1/2, 1/4, 1/8, . . . 1/4, 1/2, 1/16, 1/8, . . . (2.1)
are all sequences. We will be interested in the limits of sequences, i.e. whether they get
closer and closer to a particular value as we take more and more terms. In the examples above,
providing we make reasonable assumptions about how the tail of the sequence behaves, the
first two sequences are tending to infinity, the third oscillates and so does not tend to a
limit and the fourth and fifth tend to 0. Determining whether a sequence converges can
be extremely difficult, so we will concentrate on a few situations and tools we can use. In
particular, we will see how to link a sequence with a real function and use the connection
with limits of functions to determine its convergence. You have seen and practiced limits
of functions at Level 1. Further, more general information, will be the subject of Chapter 5.
Cautionary Tale 2.1. The limit behaviour of a sequence is really determined by its (infinite) tail, not by its initial terms. We shall come back to this point later.
Definition 2.2. A sequence (of real numbers) is the infinite ordered list of real numbers
f (1), f (2), f (3), . . . , obtained from a function f : N → R.
Conversely, given a general sequence of real numbers a1 , a2 , a3 , a4 , . . ., we can define a function
f : N → R by setting f (n) = an . So, a sequence of real numbers and a function
f : N → R are the same thing.
Cautionary Tale 2.3. In principle, the function f defining a sequence need not have a formula f(n) for its rule. For instance, a sequence starting with −π, e^2, 8, −4.67, ... is unlikely to have a 'reasonable rule', although it will still define a function.
Examples.
Remark 2.4. Some authors use the notation (n^2)_{n=1}^{∞} to emphasise that the order of the terms in the sequence matters (curly brackets are used for sets, where the order of the terms is irrelevant). Because the notation with curly brackets is more common, we make that choice here.
Our definition requires sequences to start with terms from n = 1. However, it can sometimes be convenient to start with a different value of n.
Examples.
1. Let a_n = \frac{1}{n^2}. To show that \lim_{n→∞} a_n = 0, consider f : [1, ∞) → R, f(x) = \frac{1}{x^2}. Now a_n = f(n) and \lim_{x→∞} \frac{1}{x^2} = 0. Then, by definition, \lim_{n→∞} a_n = 0.
2. To determine whether the sequence \left\{\frac{n^2}{3n^2 + 2n + 1}\right\}_{n=1}^{∞} is convergent, set a_n = \frac{n^2}{3n^2 + 2n + 1} and f : [1, ∞) → R, f(x) = \frac{x^2}{3x^2 + 2x + 1}, so that f(n) = a_n when n ∈ N. Because
\lim_{x→∞} f(x) = \lim_{x→∞} \frac{x^2}{3x^2 + 2x + 1} = \lim_{x→∞} \frac{1}{3 + 2/x + 1/x^2} = \frac{1}{3},
we conclude \lim_{n→∞} a_n = \lim_{x→∞} f(x) = \frac{1}{3}. (A computational check is sketched below.)
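These function-based limits can be confirmed computationally; a minimal sketch, assuming the Python library sympy is available:

```python
import sympy as sp

x = sp.symbols('x')

# The associated functions from the two examples above:
print(sp.limit(1/x**2, x, sp.oo))                    # 0
print(sp.limit(x**2/(3*x**2 + 2*x + 1), x, sp.oo))   # 1/3

# A quick numerical sanity check with a large n:
n = 10**6
print(n**2 / (3*n**2 + 2*n + 1))                     # approximately 0.333333
```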
Definition 2.5 is typical of definitions in mathematics where we use an explicit second object (like the function f) to define a property of a first one (the limit of the sequence {a_n}_{n=1}^{∞}). For such definitions to make sense we need to show that for ANY choice of the second object we arrive at the same property. In our case we need the following result.
Lemma 2.6. Let {a_n}_{n=1}^{∞} be a convergent sequence with two functions f, g : [1, ∞) → R such that a_n = f(n) = g(n) and \lim_{x→∞} f(x) = l_1, \lim_{x→∞} g(x) = l_2, with l_1, l_2 ∈ R. Then l_1 = l_2.
Proof. By contradiction, suppose l_1 ≠ l_2. Take ε = |l_1 − l_2|/3 > 0, so that the intervals (l_1 − ε, l_1 + ε) and (l_2 − ε, l_2 + ε) do not intersect. By definition of the limit of functions as x → ∞, there exist m_1, m_2 > 0 such that |f(x) − l_1| < ε when x > m_1 and |g(x) − l_2| < ε when x > m_2. Now, when x > max(m_1, m_2),
3ε = |l_1 − l_2| ≤ |l_1 − f(x)| + |f(x) − g(x)| + |l_2 − g(x)| < 2ε + |f(x) − g(x)|.
And so, ε < |f(x) − g(x)| for all x > max(m_1, m_2). This is a contradiction because f(n) = a_n = g(n) for all integers n > max(m_1, m_2).
Cautionary Tale 2.7. In the definition of the limit it is important that the function f with f(n) = a_n has a limit as x → ∞. The following example shows why. The sequence {0}_{n=1}^{∞} is clearly convergent to 0. Indeed, using the definition, let a_n = 0 and f : [1, ∞) → R, f(x) = 0; then \lim_{n→∞} a_n = \lim_{x→∞} f(x) = 0.
But, if we made another, 'strange', choice of function, things could go wrong. Choose f_2 : R → R, f_2(x) = sin(2πx); then f_2(n) = 0 for n ∈ N. However, \lim_{x→∞} f_2(x) does not exist. This is not a contradiction. One important hypothesis of the definition is that \lim_{x→∞} f(x) exists for (at least) one function f. That is the case for our first choice f, hence we can conclude, although it is not the case for our second choice f_2. Note that, because it has no limit, f_2 tells us nothing about whether the sequence converges or not.
2.1.3 Graphic Representations of Sequences
In the following Figures 2.1 and 2.2, we portray the sequences \{1 + \frac{(−1)^n}{n}\}_{n=1}^{∞} and \{1 + \frac{\cos n}{n}\}_{n=1}^{∞}.
Figure 2.1: The first few values of a sequence alternating about the limit, like 1 + \frac{(−1)^n}{n}.
They both converge to 1. We can take f_1(x) = 1 + \frac{\cos(πx)}{x} for the first sequence and f_2(x) = 1 + \frac{\cos x}{x} for the second. Clearly,
1 = \lim_{x→∞}\left(1 + \frac{\cos(πx)}{x}\right) = \lim_{x→∞}\left(1 + \frac{\cos x}{x}\right).
The first one alternates up and down as it goes to 1, the second oscillates in no organised way
towards 1.
Figure 2.2: The first few values of a sequence converging in a random fashion to the limiting value, like 1 + \frac{\cos n}{n}.
2.2 Lecture 8: Algebra of Limits, Special Sequences
In the previous lecture the limit of a sequence was defined by linking the limit of a sequence to the limit of an associated function. We can use this association further and establish results that are very useful for evaluating the limits of convergent sequences explicitly. In practice, using the algebra of limits, we can split limits into components we can evaluate, and put them together to find the final limit. In this lecture we also introduce the notion of an infinite limit.
Cautionary Tale 2.9. Recall that ±∞ are not real numbers. In particular, sequences with
limits +∞ or −∞ are divergent.
Remark 2.10. A divergent sequence does not need to grow unbounded. It can oscillate
between two values, for instance {(−1)n }∞
n=1 (see Exercise 2(k) in Sheet 2a). Therefore a
sequence either converges or diverges, but not both.
Consider, for example, limits such as
\lim_{n→∞}\left(2 + \frac{\cos(n^2)}{3n} + n\ln(1 − 2/n)\right),     (2.2)
or
\lim_{n→∞} \cos\big(n\sin(1/n) − 1\big).     (2.3)
4. \lim_{n→∞} \frac{a_n}{b_n} = \frac{\lim_{n→∞} a_n}{\lim_{n→∞} b_n}, provided b_n ≠ 0 for every natural n and \lim_{n→∞} b_n ≠ 0.
5. Let f be a continuous function around a = \lim_{n→∞} a_n; then the sequence {f(a_n)}_{n=1}^{∞} converges to f(a).
Proof. Those results are left as Exercises 7 and 8 in Sheet 2a. They follow directly from the
definition of a limit and the same results for limits of functions you have seen at Level 1.
This result is not enough to get the previous limit (2.2). We need one more result. The second
theorem is the Sandwich Rule for sequences.
Theorem 2.12 (Squeeze Theorem, Sandwich Rule). Let {a_n}_{n=1}^{∞}, {b_n}_{n=1}^{∞} and {c_n}_{n=1}^{∞} be sequences such that \lim_{n→∞} a_n = \lim_{n→∞} c_n = l and, for all n, a_n ≤ b_n ≤ c_n. Then \lim_{n→∞} b_n = l.
n→∞ n→∞ n→∞
Proof. This result is left as Exercise 9 in Sheet 2a. It follows from the definition of a limit
and the Squeeze Theorem for functions you have seen at Level 1.
Cautionary Tale 2.13. The results of Theorem 2.11 do not hold if the limits are not finite. For instance, for the product of limits, consider a_n = n^2, b_n = 1/n and c_n = 1/n^3. Then
\lim_{n→∞}(a_n b_n) = +∞ ≠ 0 = \lim_{n→∞}(a_n c_n),
even though \lim_{n→∞} b_n = \lim_{n→∞} c_n = 0.
Examples. We can now revisit the previous examples to see how we can use Theorems 2.11 and 2.12. (A computational check of the pieces is sketched after these examples.)
1. For (2.2), we know that
−\frac{|\cos(n^2)|}{3n} ≤ \frac{\cos(n^2)}{3n} ≤ \frac{|\cos(n^2)|}{3n},
and \lim_{n→∞} \frac{|\cos(n^2)|}{3n} ≤ \lim_{n→∞} \frac{1}{3n} = 0, and so the limit of this fraction is 0. For \lim_{n→∞} n\ln\left(1 − \frac{2}{n}\right), we can use a Taylor polynomial expansion:
\lim_{n→∞} n\ln\left(1 − \frac{2}{n}\right) = \lim_{x→∞} x\ln\left(1 − \frac{2}{x}\right) = \lim_{x→∞} x · \left(−\frac{2}{x}\right) = −2.
And so we can collect everything together and find the final limit
\lim_{n→∞}\left(2 + \frac{\cos(n^2)}{3n} + n\ln(1 − 2/n)\right) = 2 + 0 − 2 = 0.
2. For the second limit, using a Taylor polynomial expansion,
\lim_{n→∞} n\sin(1/n) = \lim_{x→∞} x · (1/x) = 1.
And so,
\lim_{n→∞} \cos\big(n\sin(1/n) − 1\big) = \lim_{x→∞} \cos\big(x\sin(1/x) − 1\big) = \cos\left(\lim_{x→∞} x\sin(1/x) − 1\right) = \cos(0) = 1.
2.2.3 Some Standard Convergent Sequences
We state next a few standard sequences with ‘well known’ limits. We can use the previous
theorems to get from their behaviour the limits of more complicated sequences. The proofs
are not always trivial!
Proposition 2.14.
1. Let p > 0; then \lim_{n→∞} 1/n^p = 0.
2. The limit \lim_{n→∞} r^n = 0 when |r| < 1. The sequence diverges when |r| > 1.
3. The limit \lim_{n→∞} n^{1/n} = 1.
4. If c > 0 then \lim_{n→∞} c^{1/n} = 1.
5. This is a very important limit:
\lim_{n→∞} \left(1 + \frac{1}{n}\right)^n = e,
and it has an equally important generalisation for a ∈ R:
\lim_{n→∞} \left(1 + \frac{a}{n}\right)^n = e^a.
6. For all a, p > 0, \lim_{n→∞} \frac{n^p}{e^{an}} = 0.
7. For all p > 0, \lim_{n→∞} \frac{\ln n}{n^p} = 0.
Proof. The proofs will mainly follow from the application of l'Hôpital's Rule or the following idea. If f is a positive function such that \lim_{x→∞} \ln(f(x)) exists, then, because exp x and ln x are continuous on their domains of definition, the following limit makes sense:
\lim_{x→∞} f(x) = \exp\left(\lim_{x→∞} \ln f(x)\right).     (2.4)
1. Using (2.4),
\lim_{n→∞} \frac{1}{n^p} = \exp\left(\lim_{x→∞} \ln x^{−p}\right) = \exp\left(−p \lim_{x→∞} \ln x\right) = 0.
2. Again, we shall be using (2.4) and the Squeeze Theorem. When r > 0,
\lim_{x→∞} x\ln r = −∞ if r < 1, and +∞ if r > 1.
And so, when 0 ≤ r < 1,
\lim_{n→∞} r^n = \exp\left(\lim_{x→∞} x\ln r\right) = 0.
3. Using (2.4),
\lim_{x→∞} x^{1/x} = \exp\left(\lim_{x→∞} \frac{1}{x}\ln x\right) = \exp\left(\lim_{x→∞} \frac{\ln x}{x}\right) = \exp(0) = 1,
having used l'Hôpital's Rule for the limit \lim_{x→∞} \frac{\ln x}{x} = \lim_{x→∞} \frac{1/x}{1} = 0.
4. Using (2.4),
\lim_{x→∞} c^{1/x} = \exp\left(\lim_{x→∞} \frac{1}{x}\ln c\right) = \exp\left(\lim_{x→∞} \frac{\ln c}{x}\right) = \exp(0) = 1.
5. Using (2.4), and a Taylor series expansion for the last limit, we have for all a ∈ R,
\lim_{n→∞} \left(1 + \frac{a}{n}\right)^n = \exp\left(\lim_{x→∞} x\ln\left(1 + \frac{a}{x}\right)\right) = \exp\left(\lim_{x→∞} x · \frac{a}{x}\right) = e^a.
6. Use the Squeeze Theorem, because 0 ≤ n^p ≤ n^N for any integer N > p. Then use l'Hôpital's Rule of the type '∞/∞' N times:
\lim_{x→∞} \frac{x^N}{e^{ax}} = ⋯ = \lim_{x→∞} \frac{N!}{a^N e^{ax}} = 0.
7. Using l'Hôpital's Rule once more,
\lim_{x→∞} \frac{\ln x}{x^p} = \lim_{x→∞} \frac{1/x}{p x^{p−1}} = \lim_{x→∞} \frac{1}{p x^p} = 0.
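A quick numerical illustration of some of these standard limits (plain Python, no extra libraries; the values of n are arbitrary):

```python
import math

# n^(1/n) -> 1,  (1 + 2/n)^n -> e^2 (about 7.389),  ln(n)/sqrt(n) -> 0
for n in (10, 1000, 100000):
    print(n, n**(1/n), (1 + 2/n)**n, math.log(n)/n**0.5)
```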
2.3 Lecture 9: Bounded and Monotone Sequences
We defined the convergence of a sequence and found some ways of evaluating limits. It may often be difficult to calculate the explicit limit of a sequence, so we split the problem into two stages:
1. decide whether the sequence converges;
2. if it does, find its limit.
It is only the first part that interests us here. We are going to determine sufficient conditions for the convergence of sequences.
From the contrapositive statement (see Level 1) of Theorem 2.16, it follows that ‘not
bounded’ implies ‘not convergent’: an unbounded sequence is divergent.
Similarly, if xn ≤ 0, then l ≤ 0.
Proof. We will defer this until a later lecture, when we meet the (ε, N)-definition of convergence.
What follows is an important property characterising the limit of a convergent sequence remaining in a closed bounded interval.
Corollary 2.18 (Closed bounded intervals contain their limits). If {x_n}_{n=1}^{∞} is a convergent sequence in the closed bounded interval [a, b], then the limit l ∈ [a, b].
Proof. We show that l ≥ a; a similar argument shows that l ≤ b. The interval being closed, this means that l ∈ [a, b]. Consider the sequence {y_n}_{n=1}^{∞} where y_n = x_n − a. Then y_n ≥ 0 and so \lim_{n→∞} y_n = l − a ≥ 0.
Definition 2.19. A sequence {an }∞ n=1 is increasing (or decreasing) if an ≤ an+1 (or
an ≥ an+1 ) for all n ≥ 1 or, in other words, if a1 ≤ a2 ≤ a3 ≤ . . . (increasing) or if
a1 ≥ a2 ≥ a3 ≥ . . . (decreasing). A sequence that is either increasing or decreasing is called
monotone.
Remark 2.20. From our definitions, a sequence can be both increasing and decreasing. In
that case it must be of constant value, say c. Formally such a sequence {c}_{n=1}^{∞} is monotone.
Examples.
1. To determine whether the sequence \left\{\frac{n}{n+1}\right\}_{n=1}^{∞} is increasing or decreasing, we could simply calculate the sign of the difference a_{n+1} − a_n, where a_n = \frac{n}{n+1}. We get
a_{n+1} − a_n = \frac{n+1}{n+2} − \frac{n}{n+1} = \frac{(n+1)^2 − n(n+2)}{(n+1)(n+2)} = \frac{1}{(n+1)(n+2)} > 0,
so the sequence is increasing.
When a sequence is obtained from a continuously differentiable function f : [1, ∞) → R, we
can use Level 1 results about the monotonicity of functions to get the following result.
Proposition 2.21. Let f : [1, ∞) ⊆ R → R be continuously differentiable and let {a_n}_{n=1}^{∞} be the sequence obtained from f by a_n = f(n), n ∈ N. If f'(x) ≥ 0 for x ∈ [1, ∞), then {a_n}_{n=1}^{∞} is increasing; if f'(x) ≤ 0 for x ∈ [1, ∞), then {a_n}_{n=1}^{∞} is decreasing.
Proof. Observe that, from the Mean Value Theorem, and with (n + 1) − n = 1,
a_{n+1} − a_n = \frac{f(n+1) − f(n)}{(n+1) − n} = f'(ξ_n),
where ξ_n ∈ (n, n+1). If the derivative of f is non-negative then the sequence is increasing; if it is non-positive, the sequence is decreasing.
Examples.
1. Let f(x) = \frac{x}{x+1} = 1 − \frac{1}{x+1}. Then f'(x) = \frac{1}{(x+1)^2} ≥ 0 for x ≥ 1, and so, as expected, the sequence {f(n)}_{n=1}^{∞} is increasing.
2. Let f(x) = \frac{x}{x^2+1}. Then f'(x) = \frac{1 − x^2}{(x^2+1)^2} ≤ 0 for x ≥ 1. And so, the sequence {f(n)}_{n=1}^{∞} is decreasing.
Remark 2.22. As a comment here, a minority of texts define ‘increasing’ as x1 < x2 <
· · · and ‘decreasing’ as x1 > x2 > · · · which will be defined in Analysis later as strictly
increasing and strictly decreasing. Then, those texts use the terms ‘non decreasing’ for
what we define here as increasing and ‘non increasing’ for decreasing. This minor difference
in terminology makes no difference to any of the results covered in this study block.
The proof of that theorem will have to wait until Analysis II (MA2731) in Term 2.
Remark 2.24. We shall also see in Term 2 that the limit of a monotone convergent sequence {s_n}_{n=1}^{∞} can be determined by, respectively,
We shall not give here precise definitions of the sup and the inf of a set. This will happen in
MA2731 next term. Simply take them as
1. the 'lowest upper bound' for the sup;
This result indicates that we have a precise description of the limit of a monotone sequence, provided we know the definition of, and how to calculate, the sup and inf of a set. We illustrate what happens in Figures 2.3 and 2.4.
Figure 2.3: A sequence increasing to 1.
Figure 2.4: A sequence decreasing to 1.
We now finish by looking at some examples. We could calculate the limits using our previous techniques, but here we show that they satisfy the hypotheses of Theorem 2.23.
1. The sequence {a_n}_{n=1}^{∞} where a_n = \frac{1}{2n+3} is convergent because it is
(i) bounded, as clearly 0 ≤ a_n and a_n ≤ 1 (since 2n + 3 > 1) for all n ≥ 1;
(ii) monotone decreasing, because
a_{n+1} − a_n = \frac{1}{2(n+1)+3} − \frac{1}{2n+3} = \frac{(2n+3) − (2n+5)}{(2n+5)(2n+3)} = −\frac{2}{(2n+5)(2n+3)} < 0.
2. The sequence \left\{\frac{\sqrt{n}}{n+1}\right\}_{n=1}^{∞} is convergent because it is bounded, 0 ≤ \frac{\sqrt{n}}{n+1} ≤ 1, and, defining f : [1, ∞) → R by f(x) = \frac{\sqrt{x}}{x+1}, its derivative is non-positive when x ≥ 1:
f'(x) = \frac{\frac{1}{2\sqrt{x}}(x+1) − \sqrt{x}}{(x+1)^2} = \frac{(x+1) − 2x}{2\sqrt{x}(x+1)^2} = \frac{1 − x}{2\sqrt{x}(x+1)^2} ≤ 0.
Summary of Chapter 2
We have seen:
2.5 Exercise Sheet 2
2. Determine which of the following sequences converge, and for those that do, find the
limit.
n n o∞ n 1 o∞ n 1 o∞
(a) ; (b) 1 − n ; (c) cos ;
nn + 2n on=1
(−1) ∞ n 12 on=1
∞ n n on=1
nπ ∞
(d) 2
; (e) ln ; (f) sin ;
n n no∞ n=1 n n 1 n=1
n o∞ n n 2o n=1
(g) ; (h) 1− ; (i) ;
2n on=1
n ln n n=1
n 2n +n1o
n n o (−1)
(j) ; (k) (−1)n ; (l) .
n n
(ii) x_n = \frac{n^2 + n + 1}{3n^2 + 4}.
(iii) x_n = \sqrt{n^4 + n^2} − n^2.
(iv) x_n = \frac{(n+1)^4}{n^2} − n^4.
(v) x_n = −n + \sqrt{n^2 + n}.
(vi) x_n = \frac{\sin n}{n} + (\sqrt{n+1} − \sqrt{n}).
(vii) x_n = \frac{n^2 + 500n + 1}{5n^2 + 3}.
(viii) x_n = \sqrt{n(n+1)} − \sqrt{n(n−1)}.
1. Bookwork.
(i) convergent.
(ii) 3/2.
(iii) 1/3.
(iv) 0.
(v) 1.
(vi) 1/2.
(vii) ∞.
(viii) −∞.
5. 0.
(i) 1/6.
(ii) 1/3.
(iii) 1/2.
(iv) −∞.
(v) 1/2.
(vi) 0.
(vii) 1/5.
(viii) 1.
7. Use the equivalent results you have seen at Level 1 for the limits of functions.
8. Use the fact that f is continuous and that you have seen at Level 1 a similar result for
limits of functions.
9. To use the Squeeze Theorem for limits of functions, you need to use a continuous function obtained by joining consecutive points of the sequence with straight lines.
∗
10. Compare x1 − x0 in both cases.
3. (a) Let $x_n = (\sin n)(\sin(1/n))$. We know that $|\sin n| \le 1$ and that $|\sin x| \le |x|$ for all $x \in \mathbb{R}$, therefore $|x_n| \le |\sin(1/n)| \le 1/n \to 0$, and so $x_n \to 0$ by the Squeeze Theorem.
4. .
5. .
6. .
7. .
8. .
9. .
56
∗
10. Observe that by the Mean Value Theorem (a result you have met in Calculus at Level
1)
xn+1 − xn = φ(xn ) − φ(xn−1 ) = φ′ (ξn )(xn − xn−1 )
where ξn is some point between xn−1 and xn . If the starting point x0 is such that
x1 = φ(x0 ) > x0 then this successively implies that x2 − x1 > 0, x3 − x2 > 0, · · · ,
xn+1 − xn > 0. That is xn+1 > xn > · · · > x2 > x1 > x0 , i.e. the sequence is increasing.
Similar reasoning shows that if instead x1 < x0 then the sequence is decreasing.
57
Chapter 3
However, we do need to be careful here. We cannot just deal with the ∞ as if it were a
number, because it’s not a number. There is another reason why we cannot just do this.
Usually when we integrate, we use the Fundamental Theorem of Calculus to compute the integral as the difference between two evaluations of an antiderivative. For example, we might say $\int_a^b f(x)\,dx = \big[F(x)\big]_a^b$ where $F$ is an antiderivative for $f$. We have only proved the Fundamental Theorem of Calculus for finite intervals, so we can't just plug in infinity and hope for the best.
Definition 3.1 (Improper Integrals of Type 1).
1. If $\int_a^t f(x)\,dx$ exists (i.e. can be computed and is finite) for all $t \ge a$, then we define
$$\int_a^{\infty} f(x)\,dx = \lim_{t\to\infty}\int_a^t f(x)\,dx,$$
provided this limit exists.
2. If $\int_t^b f(x)\,dx$ exists (i.e. can be computed and is finite) for all $t \le b$, then we define
$$\int_{-\infty}^{b} f(x)\,dx = \lim_{t\to-\infty}\int_t^b f(x)\,dx,$$
provided this limit exists.
If an improper integral exists we say that the integral is convergent. Otherwise we say that it is divergent.
Example. Determine whether $\int_1^{\infty}\dfrac{1}{x}\,dx$ is convergent and if so evaluate it.
Solution. We have
$$\int_1^{\infty}\frac{1}{x}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{x}\,dx = \lim_{t\to\infty}\big[\ln x\big]_1^t = \lim_{t\to\infty}\ln t = \infty.$$
Hence the integral is divergent.
Here, though, we give a harder example; you should complete the details yourself for practice.
Example. Determine $\int_{-\infty}^{\infty}\dfrac{e^{-x}}{1+e^{-2x}}\,dx$.
Solution. We have
$$\int_{-\infty}^{\infty}\frac{e^{-x}}{1+e^{-2x}}\,dx = \int_{-\infty}^{0}\frac{e^{-x}}{1+e^{-2x}}\,dx + \int_{0}^{\infty}\frac{e^{-x}}{1+e^{-2x}}\,dx,$$
provided both of these integrals converge. To evaluate the integral over $[0,\infty)$, we substitute $u = e^{-x}$. Then $\frac{du}{dx} = -e^{-x}$ and so "$-du = e^{-x}\,dx$". When $x = 0$, $u = 1$ and when $x = t$, $u = e^{-t}$.
Consequently
$$\lim_{t\to\infty}\int_0^t\frac{e^{-x}}{1+e^{-2x}}\,dx = \lim_{t\to\infty}\int_1^{e^{-t}}\frac{-1}{1+u^2}\,du = \lim_{t\to\infty}\big[-\tan^{-1}u\big]_1^{e^{-t}} = \lim_{t\to\infty}\big(-\tan^{-1}(e^{-t}) + \tan^{-1}1\big).$$
Because $\tan$ is continuous on $(-\pi/2,\pi/2)$, the function $\tan^{-1}$ is continuous and so $\lim_{t\to\infty}\tan^{-1}(e^{-t}) = \tan^{-1}0 = 0$.
Hence $\lim_{t\to\infty}\int_0^t\dfrac{e^{-x}}{1+e^{-2x}}\,dx = \pi/4$.
On the other hand
$$\int_{-\infty}^{0}\frac{e^{-x}}{1+e^{-2x}}\,dx = \lim_{t\to-\infty}\int_t^0\frac{e^{-x}}{1+e^{-2x}}\,dx = \lim_{t\to-\infty}\big(-\tan^{-1}1 + \tan^{-1}(e^{-t})\big).$$
Now as $t \to -\infty$, $e^{-t} \to \infty$ and since $\tan^{-1}$ is continuous, $\lim_{t\to-\infty}\tan^{-1}(e^{-t}) = \lim_{y\to\infty}\tan^{-1}y = \pi/2$.
Hence $\lim_{t\to-\infty}\int_t^0\dfrac{e^{-x}}{1+e^{-2x}}\,dx = \pi/2 - \pi/4 = \pi/4$. Consequently
$$\int_{-\infty}^{\infty}\frac{e^{-x}}{1+e^{-2x}}\,dx = \pi/4 + \pi/4 = \pi/2.$$
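For readers who would like a quick numerical sanity check of the value $\pi/2$, the following Python sketch (not part of the original notes; it assumes NumPy and SciPy are available) uses the algebraically equivalent form $e^{-x}/(1+e^{-2x}) = 1/(e^x+e^{-x}) = 1/(2\cosh x)$, which is numerically stable for large $|x|$.

```python
# Hedged numerical check of the worked example: the integral over R should be pi/2.
import numpy as np
from scipy.integrate import quad

integrand = lambda x: 0.5 / np.cosh(x)   # same function as e^{-x}/(1+e^{-2x})

value, abserr = quad(integrand, -np.inf, np.inf)
print(value, np.pi / 2)                  # both approximately 1.570796...
```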
Exercise 3.2. Determine whether the following integrals are convergent and if so evaluate them.
(a) $\displaystyle\int_1^{\infty}\frac{1}{x^3}\,dx$; (b) $\displaystyle\int_{-\infty}^{0}e^x\,dx$; (c) $\displaystyle\int_{-\infty}^{\infty}\frac{1}{1+x^2}\,dx$; (d) $\displaystyle\int_{-\infty}^{\infty}x^3\,dx$.
Summary
We have seen:
• that when we integrate over an infinite range, we cannot just treat ∞ like a number;
60
3.2 Self-study for Lecture 11: Improper Integrals —
Type 2
Reference: Stewart Chapter 8.4, Pages 547–550.
So far we have only integrated functions that are continuous or at least consist of a number
of continuous pieces. This is because whenever we compute a definite integral, we are using
the Fundamental Theorem of Calculus and this only applies to continuous functions. In this
session we will see how we can integrate functions that are discontinuous. In some cases the
functions may not even be defined at all points in the range of integration.
Suppose that f is continuous on the interval [a, b) but has a vertical asymptote at x = b, that
is, f is undefined at x = b. We can still try to find the area beneath the graph of f . Let
$$A(t) = \int_a^t f(x)\,dx.$$
Suppose that $A(t)$ tends to a finite limit as $t \to b^-$; then it would be reasonable to define the area under $f$ to be $\lim_{t\to b^-}A(t)$ and say that
$$\int_a^b f(x)\,dx = \lim_{t\to b^-}\int_a^t f(x)\,dx.$$
1. Suppose that $f$ is continuous on $[a,b)$ and discontinuous at $b$. ($f$ does not even have to be defined at $x = b$.) Then we define
$$\int_a^b f(x)\,dx = \lim_{t\to b^-}\int_a^t f(x)\,dx,$$
provided this limit exists.
2. Suppose that $f$ is continuous on $(a,b]$ and discontinuous at $a$. ($f$ does not even have to be defined at $x = a$.) Then we define
$$\int_a^b f(x)\,dx = \lim_{t\to a^+}\int_t^b f(x)\,dx,$$
provided this limit exists.
3. Suppose that $f$ is continuous on $[a,b]$ except at $c \in (a,b)$ where it may not even be defined, and that $\int_a^c f(x)\,dx$ and $\int_c^b f(x)\,dx$ both exist. Then we define
$$\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx.$$
61
Again we describe integrals as convergent or divergent depending on whether they exist.
Example. Determine whether $\int_{-3}^{1}\dfrac{x}{\sqrt{9-x^2}}\,dx$ is convergent and if so evaluate it.
Solution. We have
$$\int_{-3}^{1}\frac{x}{\sqrt{9-x^2}}\,dx = \lim_{t\to-3^+}\int_t^1\frac{x}{\sqrt{9-x^2}}\,dx = \lim_{t\to-3^+}\big[-\sqrt{9-x^2}\big]_t^1 = \lim_{t\to-3^+}\big(\sqrt{9-t^2}-\sqrt{8}\big).$$
As $t \to -3^+$, $9-t^2 \to 0^+$ and so $\sqrt{9-t^2} \to 0$. Hence $\int_{-3}^{1}\dfrac{x}{\sqrt{9-x^2}}\,dx = -\sqrt{8}$.
Example. Determine whether $\int_{-1}^{3}\dfrac{1}{x^2}\,dx$ is convergent and if so evaluate it.
Solution. We have
$$\int_{-1}^{3}\frac{1}{x^2}\,dx = \int_{-1}^{0}\frac{1}{x^2}\,dx + \int_{0}^{3}\frac{1}{x^2}\,dx,$$
providing both of these integrals are convergent. We will try to evaluate the first one.
$$\int_{-1}^{0}\frac{1}{x^2}\,dx = \lim_{t\to0^-}\int_{-1}^{t}\frac{1}{x^2}\,dx = \lim_{t\to0^-}\Big[-\frac{1}{x}\Big]_{-1}^{t} = \lim_{t\to0^-}\Big(-\frac{1}{t}-1\Big).$$
This limit does not exist, so the original integral is divergent.
Had we ignored the discontinuity at $x = 0$ and applied the Fundamental Theorem of Calculus directly, we would have obtained $\big[-\frac{1}{x}\big]_{-1}^{3} = -\frac{1}{3}-1 = -\frac{4}{3}$. This cannot possibly be correct because we have integrated a positive function and obtained a negative answer.
Exercise 3.4. Find:
(a) $\displaystyle\int_1^2\frac{1}{1-x}\,dx$; (b) $\displaystyle\int_0^1\frac{1}{\sqrt{1-x}}\,dx$; (c) $\displaystyle\int_1^4\frac{1}{(x-2)^{2/3}}\,dx$; (d) $\displaystyle\int_0^{\infty}\frac{1}{\sqrt{x}}\,dx$.
Summary
We have seen:
• that more care is required to integrate functions when they are not continuous;
• how to evaluate integrals when the integrand is not continuous / undefined at certain
points using a limit.
62
3.3 Homework for improper integrals, Types 1 and 2
1. Finish off any remaining in-class exercises.
2. Determine which of the following integrals exist and evaluate them when possible.
(a) $\displaystyle\int_1^{\infty}\frac{1}{(3x+1)^2}\,dx$; (b) $\displaystyle\int_0^{\infty}\frac{1}{x^2+5x+6}\,dx$; (c) $\displaystyle\int_0^{\infty}\cos x\,dx$;
(d) $\displaystyle\int_0^{\infty}e^{-x}\cos x\,dx$; (e) $\displaystyle\int_0^{2}x^2\ln x\,dx$; (f) $\displaystyle\int_0^{\pi/2}\tan x\,dx$;
(g) $\displaystyle\int_0^{1}\frac{e^{1/x}}{x^2}\,dx$; (h) $\displaystyle\int_{-\infty}^{\infty}\frac{1}{x^2+4}\,dx$; (i) $\displaystyle\int_0^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx$.
[Hint: It should be helpful to know that for any $k > 0$, $\lim_{x\to0^+}x^k\ln x = 0$. How do we prove this?]
3. Find the values of p for which the following integrals exist and evaluate the integrals
when they exist.
(a) $\displaystyle\int_0^1 x^p\,dx$; (b) $\displaystyle\int_1^{\infty}x^p\,dx$; (c) $\displaystyle\int_e^{\infty}\frac{1}{x(\ln x)^p}\,dx$.
4. (a) Show that $e^{-x^2} \le e^{-x}$ for $x \ge 1$.
(b) Determine $\int_1^{\infty}e^{-x}\,dx$.
(c) What can you deduce about $\int_1^{\infty}e^{-x^2}\,dx$?
5. If $f$ is continuous on $[0,\infty)$, then its Laplace transform is a function $F$ for which the rule is $F(s) = \int_0^{\infty}f(t)e^{-st}\,dt$, and the domain of $F$ consists of those $s$ for which the integral converges.
Find the Laplace transform of:
6. The definitions of the trigonometric functions and hyperbolic functions can be extended
to all complex numbers. Use Euler’s theorem concerning eix and the definition of sinh
and cosh to show that sinh x = −i sin(ix) and cosh x = cos ix.
7. In this question we have a random mixture of integrals. You have to work out the
method required. A couple of them are not too hard, most of them are slightly tricky
and some are very tricky. Evaluate:
63
(a) $\displaystyle\int_0^1\frac{1}{\sqrt{x^2+4}}\,dx$; (b) $\displaystyle\int_0^1\frac{e^x}{\sqrt{e^{2x}+1}}\,dx$; (c) $\displaystyle\int_{-1}^{2}|x-x^2|\,dx$;
(d) $\displaystyle\int\frac{x^4}{x^{10}+16}\,dx$; (e) $\displaystyle\int_0^{\pi/4}\cos^2t\,\tan^2t\,dt$; (f) $\displaystyle\int_{\pi/4}^{\pi/3}\sin(4x)\cos(3x)\,dx$;
(g) $\displaystyle\int\frac{1}{\sqrt{x^2+4x+8}}\,dx$; (h) $\displaystyle\int\frac{1}{e^x-e^{-x}}\,dx$; (i) $\displaystyle\int\frac{1}{\sqrt{x+1}-\sqrt{x}}\,dx$;
(j) $\displaystyle\int_{-1}^{1}\frac{e^{\tan^{-1}(y)}}{1+y^2}\,dy$; (k) $\displaystyle\int x\sin^2x\,dx$; (l) $\displaystyle\int e^{x+e^x}\,dx$;
(m) $\displaystyle\int\frac{1}{1+e^x}\,dx$; (n) $\displaystyle\int\ln(x^2-1)\,dx$; (o) $\displaystyle\int_1^3 r^4\ln r\,dr$.
8. Challenge Question: Find the value of the constant $C$ for which the integral
$$\int_0^{\infty}\Big(\frac{x}{x^2+1} - \frac{C}{x+1}\Big)\,dx$$
exists. Evaluate the integral for this value of $C$.
3.4 Feedback
(a)
$$\int_1^{\infty}\frac{1}{x^3}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{x^3}\,dx = \lim_{t\to\infty}\Big[-\frac{1}{2x^2}\Big]_1^t = \lim_{t\to\infty}\Big(-\frac{1}{2t^2}+\frac{1}{2}\Big) = \frac{1}{2}.$$
(b)
$$\int_{-\infty}^{0}e^x\,dx = \lim_{t\to-\infty}\int_t^0 e^x\,dx = \lim_{t\to-\infty}\big[e^x\big]_t^0 = \lim_{t\to-\infty}(1-e^t) = 1.$$
(c)
$$\int_{-\infty}^{\infty}\frac{1}{1+x^2}\,dx = \int_{-\infty}^{0}\frac{1}{1+x^2}\,dx + \int_0^{\infty}\frac{1}{1+x^2}\,dx,$$
providing both of these integrals converge. Now
$$\int_0^{\infty}\frac{1}{1+x^2}\,dx = \lim_{t\to\infty}\int_0^t\frac{1}{1+x^2}\,dx = \lim_{t\to\infty}\big[\tan^{-1}x\big]_0^t = \lim_{t\to\infty}\tan^{-1}t = \frac{\pi}{2}.$$
Similarly
$$\int_{-\infty}^{0}\frac{1}{1+x^2}\,dx = \lim_{t\to-\infty}\int_t^0\frac{1}{1+x^2}\,dx = \lim_{t\to-\infty}\big[\tan^{-1}x\big]_t^0 = \lim_{t\to-\infty}(-\tan^{-1}t) = \frac{\pi}{2}.$$
Hence
$$\int_{-\infty}^{\infty}\frac{1}{1+x^2}\,dx = \int_{-\infty}^{0}\frac{1}{1+x^2}\,dx + \int_0^{\infty}\frac{1}{1+x^2}\,dx = \pi/2 + \pi/2 = \pi.$$
(d)
$$\int_{-\infty}^{\infty}x^3\,dx = \int_{-\infty}^{0}x^3\,dx + \int_0^{\infty}x^3\,dx,$$
providing both of these integrals converge. But $\int_0^{\infty}x^3\,dx = \lim_{t\to\infty}\big[x^4/4\big]_0^t = \lim_{t\to\infty}t^4/4 = \infty$. So at least one of the two integrals diverges and hence $\int_{-\infty}^{\infty}x^3\,dx$ diverges.
Exercise 1:
(a) $\dfrac{1}{1-x}$ is undefined at $x = 1$ but is continuous on $(1,2]$. Hence
$$\int_1^2\frac{1}{1-x}\,dx = \lim_{t\to1^+}\int_t^2\frac{1}{1-x}\,dx = \lim_{t\to1^+}\big[-\ln|1-x|\big]_t^2 = \lim_{t\to1^+}\ln(t-1) = -\infty,$$
so the integral is divergent.
(c) $\dfrac{1}{(x-2)^{2/3}}$ is undefined at $x = 2$ but is continuous on $[1,4]\setminus\{2\}$. Hence
$$\int_1^4\frac{1}{(x-2)^{2/3}}\,dx = \int_1^2\frac{1}{(x-2)^{2/3}}\,dx + \int_2^4\frac{1}{(x-2)^{2/3}}\,dx,$$
providing both of these integrals converge. Now
$$\int_1^2\frac{1}{(x-2)^{2/3}}\,dx = \lim_{t\to2^-}\int_1^t\frac{1}{(x-2)^{2/3}}\,dx = \lim_{t\to2^-}\big[3(x-2)^{1/3}\big]_1^t = 0-(-3) = 3$$
and
$$\int_2^4\frac{1}{(x-2)^{2/3}}\,dx = \lim_{t\to2^+}\int_t^4\frac{1}{(x-2)^{2/3}}\,dx = \lim_{t\to2^+}\big[3(x-2)^{1/3}\big]_t^4 = 3\cdot2^{1/3}-0 = 3\cdot2^{1/3}.$$
Hence
$$\int_1^4\frac{1}{(x-2)^{2/3}}\,dx = \int_1^2\frac{1}{(x-2)^{2/3}}\,dx + \int_2^4\frac{1}{(x-2)^{2/3}}\,dx = 3 + 3\times2^{1/3}.$$
(d) This integral is improper for two reasons. First, the range of integration is infinite and second, $\frac{1}{\sqrt{x}}$ is undefined at $0$. However $\frac{1}{\sqrt{x}}$ is continuous on $(0,\infty)$. So
$$\int_0^{\infty}\frac{1}{\sqrt{x}}\,dx = \int_0^1\frac{1}{\sqrt{x}}\,dx + \int_1^{\infty}\frac{1}{\sqrt{x}}\,dx,$$
providing both of these integrals converge. But $\int_1^{\infty}\frac{1}{\sqrt{x}}\,dx = \lim_{t\to\infty}\big[2\sqrt{x}\big]_1^t = \infty$. So, at least one of the two integrals diverges and hence $\int_0^{\infty}\frac{1}{\sqrt{x}}\,dx$ does not converge.
66
(b) We have
$$\int_0^{\infty}\frac{1}{x^2+5x+6}\,dx = \lim_{t\to\infty}\int_0^t\frac{1}{x^2+5x+6}\,dx.$$
We will determine $\int_0^t\frac{1}{x^2+5x+6}\,dx$ using the method of partial fractions.
Factorising gives $x^2+5x+6 = (x+2)(x+3)$ and so
$$\frac{1}{x^2+5x+6} = \frac{A}{x+2} + \frac{B}{x+3},$$
where $A$ and $B$ are constants to be determined. Standard calculations yield $A = 1$ and $B = -1$. So
$$\int_0^t\frac{1}{x^2+5x+6}\,dx = \int_0^t\frac{1}{x+2}\,dx - \int_0^t\frac{1}{x+3}\,dx = \big[\ln|x+2|\big]_0^t - \big[\ln|x+3|\big]_0^t = \ln\Big(\frac{t+2}{t+3}\Big) + \ln\frac{3}{2}.$$
Hence
$$\lim_{t\to\infty}\int_0^t\frac{1}{x^2+5x+6}\,dx = \lim_{t\to\infty}\Big(\ln\Big(\frac{t+2}{t+3}\Big) + \ln\frac{3}{2}\Big).$$
As $t\to\infty$, $\dfrac{t+2}{t+3}\to 1$ and since $\ln$ is continuous at $1$, we can use the result from Continuity III which allows us to say that
$$\lim_{t\to\infty}\ln\Big(\frac{t+2}{t+3}\Big) = \ln\Big(\lim_{t\to\infty}\frac{t+2}{t+3}\Big) = \ln 1 = 0.$$
(We will use this argument lots of times on this sheet, so it is really important that you try to understand it fully.) Consequently $\int_0^{\infty}\dfrac{1}{x^2+5x+6}\,dx = \ln\dfrac{3}{2}$.
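If you would like to verify this value numerically, the following short Python sketch (not part of the original notes; it assumes SciPy is installed) compares the improper integral with $\ln(3/2)$.

```python
# Hedged numerical check: the improper integral above should equal ln(3/2) ~ 0.405465.
import numpy as np
from scipy.integrate import quad

value, abserr = quad(lambda x: 1.0 / (x**2 + 5*x + 6), 0, np.inf)
print(value, np.log(1.5))
```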
(c)
$$\int_0^{\infty}\cos x\,dx = \lim_{t\to\infty}\int_0^t\cos x\,dx = \lim_{t\to\infty}\big[\sin x\big]_0^t = \lim_{t\to\infty}\sin t.$$
But this limit does not exist, because $\sin$ keeps oscillating between $-1$ and $1$. Hence the integral is divergent.
(d) We have $\int_0^{\infty}e^{-x}\cos x\,dx = \lim_{t\to\infty}\int_0^t e^{-x}\cos x\,dx$.
We will determine $I(t) = \int_0^t e^{-x}\cos x\,dx$ using integration by parts.
Let $f(x) = e^{-x}$ and $g'(x) = \cos x$. Then $f'(x) = -e^{-x}$ and $g(x) = \sin x$. Hence
$$I(t) = \big[e^{-x}\sin x\big]_0^t + \int_0^t e^{-x}\sin x\,dx.$$
Now we evaluate the second integral by integrating by parts again. Let $h(x) = e^{-x}$ and $k'(x) = \sin x$. Then $h'(x) = -e^{-x}$ and $k(x) = -\cos x$. So
$$I(t) = \big[e^{-x}\sin x\big]_0^t + \big[-e^{-x}\cos x\big]_0^t - \int_0^t e^{-x}\cos x\,dx.$$
Consequently
$$2I(t) = \big[e^{-x}\sin x\big]_0^t + \big[-e^{-x}\cos x\big]_0^t$$
and so
$$I(t) = \frac{1}{2}\big(e^{-t}\sin t - e^{-t}\cos t + 1\big).$$
Now for any $t$, $-e^{-t} \le e^{-t}\sin t \le e^{-t}$ and $-e^{-t} \le e^{-t}\cos t \le e^{-t}$. Furthermore $\lim_{t\to\infty}e^{-t} = 0$, so we can apply the Sandwich Theorem to see that $\lim_{t\to\infty}e^{-t}\sin t = \lim_{t\to\infty}e^{-t}\cos t = 0$.
Hence $\lim_{t\to\infty}I(t) = \frac{1}{2}$ and so
$$\int_0^{\infty}e^{-x}\cos x\,dx = \lim_{t\to\infty}I(t) = \frac{1}{2}.$$
As t → π/2− , cos t → 0 but cos t > 0, whenever 0 ≤ t < π/2. Hence as t → π/2− ,
sec t → ∞ and so ln | sec t| → ∞. Consequently the integral is divergent.
(g) $\dfrac{e^{1/x}}{x^2}$ is undefined at $0$, but continuous on $(0,1]$. Hence,
$$\int_0^1\frac{e^{1/x}}{x^2}\,dx = \lim_{t\to0^+}\int_t^1\frac{e^{1/x}}{x^2}\,dx.$$
To determine $\int_t^1\frac{e^{1/x}}{x^2}\,dx$, we substitute $u = 1/x$. Then $\frac{du}{dx} = -\frac{1}{x^2}$, so "$\frac{1}{x^2}\,dx = -du$". When $x = t$, $u = \frac{1}{t}$, and when $x = 1$, $u = 1$. Thus
$$\int_t^1\frac{e^{1/x}}{x^2}\,dx = -\int_{1/t}^{1}e^u\,du = -\big[e^u\big]_{1/t}^{1} = e^{1/t} - e.$$
As $t \to 0^+$, $1/t \to \infty$ and so $e^{1/t}\to\infty$. Hence the integral is divergent.
(i) This integral is improper in two ways. First, the range of integration is infinite and second, $\dfrac{1}{\sqrt{x}(x+1)}$ is undefined at $0$. We have
$$\int_0^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx = \int_0^1\frac{1}{\sqrt{x}(x+1)}\,dx + \int_1^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx,$$
providing both of these integrals exist.
Now
$$\int_0^1\frac{1}{\sqrt{x}(x+1)}\,dx = \lim_{t\to0^+}\int_t^1\frac{1}{\sqrt{x}(x+1)}\,dx.$$
We evaluate $\int_t^1\frac{1}{\sqrt{x}(x+1)}\,dx$ using integration by substitution. Let $u = \sqrt{x}$. Then $\frac{du}{dx} = \frac{1}{2\sqrt{x}}$ and so "$\frac{1}{\sqrt{x}}\,dx = 2\,du$". When $x = t$, $u = \sqrt{t}$ and when $x = 1$, $u = 1$. Hence
$$\int_t^1\frac{1}{\sqrt{x}(x+1)}\,dx = 2\int_{\sqrt{t}}^1\frac{1}{u^2+1}\,du = \big[2\tan^{-1}u\big]_{\sqrt{t}}^1 = \pi/2 - 2\tan^{-1}\sqrt{t}.$$
As $t\to0^+$, $\sqrt{t}\to0$. Since $\tan^{-1}$ is continuous,
$$\lim_{t\to0^+}\tan^{-1}\sqrt{t} = \tan^{-1}\Big(\lim_{t\to0^+}\sqrt{t}\Big) = \tan^{-1}0 = 0.$$
Hence
$$\int_0^1\frac{1}{\sqrt{x}(x+1)}\,dx = \lim_{t\to0^+}\int_t^1\frac{1}{\sqrt{x}(x+1)}\,dx = \pi/2.$$
Similarly
$$\int_1^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{\sqrt{x}(x+1)}\,dx = \lim_{t\to\infty}\big[2\tan^{-1}u\big]_1^{\sqrt{t}} = \lim_{t\to\infty}2\tan^{-1}\sqrt{t} - \pi/2.$$
As $t\to\infty$, $\sqrt{t}\to\infty$ and because $\tan^{-1}$ is continuous we have $\lim_{t\to\infty}\tan^{-1}(\sqrt{t}) = \lim_{u\to\infty}\tan^{-1}u = \pi/2$. Thus
$$\int_1^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{\sqrt{x}(x+1)}\,dx = \pi/2.$$
Consequently
$$\int_0^{\infty}\frac{1}{\sqrt{x}(x+1)}\,dx = \pi/2 + \pi/2 = \pi.$$
3. How should we tackle a problem like this with a parameter? The usual idea is just to
work through the problem treating the parameter as if it were a number. At some point
we might see that different things will happen for different values of the parameter, so
we might end up with a number of different cases.
(a) Clearly there is no problem if p ≥ 0. If p < 0 then xp is not defined at x = 0. In
that case Z 1 Z 1
p
x dx = lim+ xp dx.
0 t→0 t
Now if p 6= −1,
1 h xp+1 i1 1 tp+1
Z
lim+ xp dx = lim+ = lim+ − .
t→0 t t→0 p+1 t t→0 p+1 p+1
Whether or not this limit exists depends on whether or not lim+ tp+1 exists. From
t→0
lectures on limits we know that this limit exists (and equals zero) if and only if
p + 1 ≥ 0, or equivalently if and only if p ≥ −1. However we are not currently
considering the case p = −1 because in that case the integral has a different form.
So we know that the integral exists if p > −1 and does not exist if p < −1.
If p = −1, Z 1
1
xp dx = lim+ ln x t = lim+ (− ln t) = ∞.
lim+
t→0 t t→0 t→0
70
(b) When p 6= −1 we have
Z ∞ Z t h xp+1 it tp+1
p p 1
x dx = lim x dx = lim = lim − .
1 t→∞ 1 t→∞ p + 1 1 t→∞ p + 1 p+1
Using results from lectures on limits, we know that this limit exists (and equals
zero) if and only if p + 1 < 0, or equivalently if and only if p < −1. Again we are
not considering the case when p = −1 at this point, so we have shown that the
integral converges if p < −1 and diverges if p > −1.
When p = −1, Z t
t
xp dx = lim ln x 1 = lim ln t = ∞.
lim
t→∞ 1 t→∞ t→∞
If p 6= 1 then
ln t h x1−p iln t (ln t)1−p
1 1
Z
p
dx = = − .
1 x 1−p 1 1−p 1−p
As t → ∞, ln t → ∞ and so lim (ln t)1−p exists if and only if 1 − p < 0, or
t→∞
equivalently if p > 1. So we have shown that the integral converges if p > 1 and
diverges if p < 1.
If p = 1 then
Z ln t
1 ln t
lim dx = lim ln x 1
= lim ln ln t = ∞.
t→∞ 1 xp t→∞ t→∞
71
(c) We can't say anything about the precise value of $\int_1^{\infty}e^{-x^2}\,dx$, but it should seem reasonable that the integral will converge because $0 \le e^{-x^2} \le e^{-x}$ for $x \ge 1$.
5. (a) We have
$$F(s) = \int_0^{\infty}e^{-st}\,dt = \lim_{y\to\infty}\int_0^y e^{-st}\,dt = \lim_{y\to\infty}\Big[-\frac{e^{-st}}{s}\Big]_0^y = \lim_{y\to\infty}\Big(\frac{1}{s} - \frac{e^{-sy}}{s}\Big).$$
This derivation is only valid for $s \ne 0$. If $s = 0$ then we have $F(0) = \int_0^{\infty}1\,dt$ and it is easy to see that this is not convergent.
So for the integral to converge we need $\lim_{y\to\infty}e^{-sy}$ to exist. From our results on limits, we know that this limit exists if and only if $s \ge 0$. But $s = 0$ is already excluded for convergence and so $F(s)$ exists if and only if $s > 0$, in which case it equals $\lim_{y\to\infty}\Big(\dfrac{1}{s} - \dfrac{e^{-sy}}{s}\Big) = \dfrac{1}{s}$.
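As a quick numerical illustration of this Laplace transform (not part of the original notes; assumes SciPy), the improper integral can be evaluated at a sample value of $s$ and compared with $1/s$.

```python
# Hedged numerical check: for f(t) = 1 the Laplace transform is F(s) = 1/s for s > 0.
import numpy as np
from scipy.integrate import quad

s = 2.0
value, abserr = quad(lambda t: np.exp(-s * t), 0, np.inf)
print(value, 1.0 / s)   # both approximately 0.5
```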
(b) We have
Z ∞ Z y h et(1−s) iy ey(1−s)
t −st 1
F (s) = e e dt = lim et(1−s) dt = lim = lim − .
0 y→∞ 0 y→∞ 1 − s 0 y→∞ 1−s 1−s
This derivation is only valid for s 6= 1. If s = 1 then we have
Z ∞ Z ∞
t −t
F (1) = e e dt = 1 dt
0 0
72
6. We have $e^{i\theta} = \cos\theta + i\sin\theta$ and $e^{-i\theta} = \cos\theta - i\sin\theta$. Consequently $\cos\theta = \dfrac{e^{i\theta}+e^{-i\theta}}{2}$ and $i\sin\theta = \dfrac{e^{i\theta}-e^{-i\theta}}{2}$. Now let $\theta = ix$ to get
$$\cos(ix) = \frac{e^{i^2x}+e^{-i^2x}}{2} = \frac{e^{x}+e^{-x}}{2} = \cosh x$$
and
$$-i\sin(ix) = -\frac{e^{i^2x}-e^{-i^2x}}{2} = \frac{e^{x}-e^{-x}}{2} = \sinh x.$$
(b) We substitute $u = e^x$. If we do this then the denominator becomes $\sqrt{u^2+1}$ and so we can be hopeful of turning it into one of our standard integrals. We have $\frac{du}{dx} = e^x$ and hence "$du = e^x\,dx$". When $x = 0$, $u = 1$ and when $x = 1$, $u = e$. Consequently
$$\int_0^1\frac{e^x}{\sqrt{e^{2x}+1}}\,dx = \int_1^e\frac{1}{\sqrt{1+u^2}}\,du = \Big[\sinh^{-1}(u)\Big]_1^e = \ln\frac{e+\sqrt{e^2+1}}{1+\sqrt{2}}.$$
(c) The function we are integrating contains an absolute value. Our first aim should
be to remove this by setting up a piecewise definition. We have
(
2 2
x − x2 = x − x if x − x ≥ 0,
x2 − x if x − x2 < 0.
In order to make use of this piecewise definition, we must determine when x−x2 ≥ 0.
First we determine when x − x2 = 0. We have x − x2 = x(1 − x) so x − x2 = 0 if
x = 0 or x = 1. We split up the range [−1, 2] into intervals bounded by the roots
of x − x2 = 0 to get the intervals [−1, 0), (0, 1) and (1, 2]. On [−1, 0), x − x2 < 0,
on (0, 1), x − x2 > 0 and on (1, 2], x − x2 < 0. We also have x − x2 = 0 when x = 0
or when x = 1. Hence
(
2
x − x = x − x if x ∈ [0, 1],
2
x2 − x if x ∈ [−1, 0) ∪ (1, 2].
73
(d) We substitute $u = x^5$. Why should we do this? First, notice that the denominator of the integral is $x^{10}+16$ so this substitution will change this to $u^2+16$ which is a familiar function to have in the denominator. Also the numerator is a constant times $\frac{du}{dx}$ which means that it will disappear when we do the substitution. We have $\frac{du}{dx} = 5x^4$ and so "$x^4\,dx = \frac{1}{5}\,du$". Hence
$$\int\frac{x^4}{x^{10}+16}\,dx = \frac{1}{5}\int\frac{1}{u^2+16}\,du = \frac{1}{5}\times\frac{1}{4}\tan^{-1}\frac{u}{4} + c = \frac{1}{20}\tan^{-1}\frac{x^5}{4} + c,$$
where $c$ is a constant.
(e) We have
$$\int_0^{\pi/4}\cos^2t\,\tan^2t\,dt = \int_0^{\pi/4}\sin^2t\,dt = \int_0^{\pi/4}\frac{1}{2}\big(1-\cos(2t)\big)\,dt = \Big[\frac{t}{2}-\frac{\sin 2t}{4}\Big]_0^{\pi/4} = \frac{\pi}{8}-\frac{1}{4}.$$
(f) Since we have a product where it is simple to either integrate or differentiate both
parts of the product, we try to integrate by parts. Let f (x) = sin(4x) and let
g ′ (x) = cos(3x). (Doing this the other way round should work too.) Then f ′ (x) =
sin(3x)
4 cos(4x) and g(x) = and so
3
Z π/3
sin 3x iπ/3 4 π/3
h Z
sin(4x) cos(3x) dx = sin(4x) − cos(4x) sin(3x) dx
π/4 3 π/4 3 π/4
4 π/3
Z
=− cos(4x) sin(3x) dx.
3 π/4
Now we integrate by parts again, setting f (x) = cos(4x) and g ′ (x) = sin(3x). (This
time we have to be careful to choose f and g ′ this way round, if we made the initial
choice as we have done here, else we end up back where we started.) We have
cos(3x)
f ′ (x) = −4 sin(4x) and g(x) = − and so
3
The final term is just the integral we started with so rearranging gives
π/3
7 4
Z
π/3
sin(4x) cos(3x) dx = − cos(4x) cos(3x) π/4 .
9 π/4 9
So
π/3
2 √
Z
sin(4x) cos(3x) dx = ( 2 − 1).
π/4 7
74
(g) We have $\displaystyle\int\frac{1}{\sqrt{x^2+4x+8}}\,dx = \int\frac{1}{\sqrt{(x+2)^2+4}}\,dx$. Substituting $u = x+2$ yields
$$\int\frac{1}{\sqrt{(x+2)^2+4}}\,dx = \int\frac{1}{\sqrt{u^2+4}}\,du = \sinh^{-1}\frac{u}{2} + c = \sinh^{-1}\frac{x+2}{2} + c,$$
where $c$ is a constant.
(h) We multiply the top and bottom of the fraction by ex to get
1 ex
Z Z
dx = dx.
ex − e−x e2x − 1
The denominator of the integral is a composition f (g(x)) where g(x) = ex and
du
f (x) = x2 − 1. So we substitute u = ex . Then = ex . Hence “ex dx = du”. So
dx
ex 1
Z Z
2x
dx = 2
du.
e −1 u −1
In the last integral the denominator factorises as (u − 1)(u + 1), so we can use
partial fractions to compute the integral. We have
1 1 A B
= = + ,
u2 −1 (u − 1)(u + 1) u−1 u+1
where A and B are constants to be determined. Multiplying out gives
1 = A(u + 1) + B(u − 1).
1
By substituting u = 1, we get 1 = 2A and so A = . Similarly substituting u = −1
2
1
gives 1 = −2B and so B = − . Hence
2
1 1 1
= − .
u2 −1 2(u − 1) 2(u + 1)
So
1 1 1 1 1
Z Z
2
du = − = ln |u − 1| − ln |u + 1| + c
u −1 2(u − 1) 2(u + 1) 2 2
1 u − 1 1 ex − 1
= ln + c = ln x + c,
2 u+1 2 e +1
where c is a constant.
(i) We have
$$\int\frac{1}{\sqrt{x+1}-\sqrt{x}}\,dx = \int\frac{1}{\sqrt{x+1}-\sqrt{x}}\cdot\frac{\sqrt{x+1}+\sqrt{x}}{\sqrt{x+1}+\sqrt{x}}\,dx = \int\frac{\sqrt{x+1}+\sqrt{x}}{x+1-x}\,dx$$
$$= \int\big(\sqrt{x+1}+\sqrt{x}\big)\,dx = \frac{2}{3}(x+1)^{3/2} + \frac{2}{3}x^{3/2} + c,$$
where $c$ is a constant.
75
(j) One part of this function is a composition because etan (y) = f (g(y)) where g(y) =
−1
1+y 2
where c is a constant.
(k) We have
x x 1
Z Z Z Z
2
x sin x dx = (1 − cos(2x)) dx = dx − x cos(2x) dx.
2 2 2
We evaluate the second integral using integration by parts with f (x) = x and
sin(2x)
g ′ (x) = cos(2x). Hence f ′ (x) = 1 and g(x) = . Consequently
2
x2 x sin(2x) 1 x2 x sin(2x) cos(2x)
Z Z
2
x sin x dx = − + sin(2x) dx = − − + c,
4 4 4 4 4 8
where c isZa constant. Z
x x
(l) We have ex+e dx = ex ee dx. Again we have a composition so we substitute
where c is a constant.
(m) There are lots of ways in which this integral can be written as a composition but
it looks a reasonable idea to try to substitute u = ex and see what happens. We
have du = ex and hence “du = ex dx” So we have
dx
1 ex 1
Z Z Z
x
dx = x x
dx = du.
1+e e (1 + e ) u(u + 1)
This last integral is found using partial fractions. We have
1 A B
= + .
u(u + 1) u u+1
Multiplying out gives
1 = A(u + 1) + Bu.
76
(n) We have $\displaystyle\int\ln(x^2-1)\,dx = \int\ln(x-1)\,dx + \int\ln(x+1)\,dx$.
Now we remember how to find $\int\ln x\,dx$ using integration by parts with $f(x) = \ln x$ and $g'(x) = 1$. We get $\int\ln x\,dx = x\ln x - x + c$, where $c$ is a constant.
Now return to the original integral. Substitute $u = x-1$ in the first integral and $y = x+1$ in the second integral to give
$$\int\ln(x^2-1)\,dx = \int\ln u\,du + \int\ln y\,dy = u\ln u - u + y\ln y - y + c = (x-1)\ln(x-1) + (x+1)\ln(x+1) - 2x + c,$$
where $c$ is a constant.
(o) We will integrate by parts with $f(r) = \ln r$ and $g'(r) = r^4$. Then $f'(r) = \dfrac{1}{r}$ and $g(r) = \dfrac{r^5}{5}$. Hence
$$\int_1^3 r^4\ln r\,dr = \Big[\frac{r^5\ln r}{5}\Big]_1^3 - \int_1^3\frac{r^4}{5}\,dr = \Big[\frac{r^5\ln r}{5}\Big]_1^3 - \Big[\frac{r^5}{25}\Big]_1^3 = \frac{243\ln 3}{5} - \frac{242}{25}.$$
8. We have
$$\int_0^{\infty}\Big(\frac{x}{x^2+1}-\frac{C}{x+1}\Big)\,dx = \lim_{t\to\infty}\int_0^t\Big(\frac{x}{x^2+1}-\frac{C}{x+1}\Big)\,dx = \lim_{t\to\infty}\Big[\frac{1}{2}\ln(x^2+1) - C\ln|x+1|\Big]_0^t$$
$$= \lim_{t\to\infty}\Big(\frac{1}{2}\ln(t^2+1) - C\ln(t+1)\Big) = \lim_{t\to\infty}\ln\frac{\sqrt{t^2+1}}{(t+1)^C}.$$
We need to establish $\lim_{t\to\infty}\dfrac{\sqrt{t^2+1}}{(t+1)^C}$. If this is finite and non-zero then $\lim_{t\to\infty}\ln\dfrac{\sqrt{t^2+1}}{(t+1)^C}$ exists, if it is zero then the limit is $-\infty$ and if it is $\infty$ then the limit is $\infty$. We have
$$\frac{\sqrt{1+t^2}}{(t+1)^C} = \frac{\sqrt{1+1/t^2}}{(1+1/t)(t+1)^{C-1}} \to \begin{cases}1 & \text{if } C = 1,\\ 0 & \text{if } C > 1,\\ \infty & \text{if } C < 1,\end{cases}\qquad\text{as } t\to\infty.$$
Hence $\displaystyle\int_0^{\infty}\Big(\frac{x}{x^2+1}-\frac{C}{x+1}\Big)\,dx = 0$ if $C = 1$ and otherwise fails to converge.
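A numerical sanity check (not part of the original notes; assumes SciPy) for the case $C = 1$: the integrand then decays like $1/x^2$, so the improper integral converges and should be close to $0$.

```python
# Hedged numerical check of the challenge question: with C = 1 the integral is 0.
import numpy as np
from scipy.integrate import quad

C = 1.0
f = lambda x: x / (x**2 + 1) - C / (x + 1)
value, abserr = quad(f, 0, np.inf)
print(value)   # approximately 0, up to quadrature error
```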
77
Chapter 4
Real Series
78
of equations in terms of the place of the meeting $L$ and its time $T$. The rabbit will reach $L = T\cdot v_r$ and the turtle $L = l_0 + T\cdot v_t$. This is a linear system of equations,
$$L - v_r\cdot T = 0,$$
$$L - v_t\cdot T = l_0.$$
This process is infinite and a 'paradoxical' conclusion is that the rabbit will never catch the turtle. That point of view was used to discuss discrete versus continuous space and time. We leave the metaphysics aside and shall see how to reconcile both points of view using geometric series. From school, recall that $\sum_{n=0}^{\infty}r^n = \dfrac{1}{1-r}$ when $|r| < 1$ (we shall soon revisit that formula). Thus, we find that the total time is
$$T = (l_0/v_r)\sum_{k=0}^{\infty}(v_t/v_r)^k = \frac{l_0}{v_r}\cdot\frac{1}{1-v_t/v_r} = \frac{l_0}{v_r-v_t}.$$
s1 = a1 , s2 = a1 + a2 , s3 = a1 + a2 + a3 , . . . , sn = a1 + a2 + · · · + an , . . .
This very special sort of sequence is what we will work with in this chapter.
79
Definition 4.1. Given a sequence $\{a_n\}_{n=1}^{\infty}$, the corresponding series is the sequence $\{s_n\}_{n=1}^{\infty}$ of partial sums for which the $n$-th term is
$$s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^{n}a_k.$$
We denote the full series by $\sum_{k=1}^{\infty}a_k$.
Examples.
1. The infinite sum $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty}\frac{1}{n}$ is a series for the sequence $\{\frac{1}{n}\}_{n=1}^{\infty}$.
2. A series can have alternating positive and negative terms like $\sum_{n=1}^{\infty}\frac{(-1)^n}{n} = \sum_{n=1}^{\infty}\frac{\cos(n\pi)}{n}$, or 'random' changes of sign like in $\sum_{n=1}^{\infty}\frac{\cos(n)}{n^2}$ (no $\pi$ in the cosine).
Definition 4.2. Given a series $\sum_{k=1}^{\infty}a_k$, let $s_n$ denote the $n$-th partial sum. If $\{s_n\}_{n=1}^{\infty}$ is convergent, the series $\sum_{k=1}^{\infty}a_k$ is convergent and the sum of the series is given by $\sum_{k=1}^{\infty}a_k = \lim_{n\to\infty}s_n$. If the series does not converge, it is called divergent.
Remark 4.3. While the value of the sum depends on all the terms $\{a_n\}_{n=1}^{\infty}$ involved, the convergence or divergence of the series depends only on the entries in the tail of the sequence $\{a_n\}_{n=1}^{\infty}$. For example, if $\sum_{n=100}^{\infty}a_n$ converges then $\sum_{n=1}^{\infty}a_n$ also converges, with the sums related by
$$\sum_{n=1}^{\infty}a_n = \Big(\sum_{n=1}^{99}a_n\Big) + \Big(\sum_{n=100}^{\infty}a_n\Big).$$
As we will see, series where all the terms are positive (or all the terms are negative) can be
dealt with much more easily than sequences where some of the terms are positive and some
are negative. Weird things can happen with these series as the following example shows.
80
Cautionary Tale 4.4. Suppose we try to compute
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots$$
by grouping together adjacent terms. So
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \Big(1-\frac{1}{2}\Big) + \Big(\frac{1}{3}-\frac{1}{4}\Big) + \Big(\frac{1}{5}-\frac{1}{6}\Big) + \cdots = \frac{1}{2} + \frac{1}{12} + \frac{1}{30} + \cdots = \sum_{k=1}^{\infty}\frac{1}{2k(2k-1)}.$$
Alternatively, since addition is commutative, we might hope to rearrange the terms to get
$$1 + \frac{1}{3} - \frac{1}{2} + \frac{1}{5} + \frac{1}{7} - \frac{1}{4} + \frac{1}{9} + \frac{1}{11} - \frac{1}{6} + \cdots = \frac{5}{6} + \frac{13}{140} + \frac{7}{198} + \cdots = \sum_{k=1}^{\infty}\frac{8k-3}{2k(4k-3)(4k-1)}.$$
So, we apparently get the same sum being equal to two different series. The terms in the second sum are all bigger than the corresponding terms in the first sum because
$$\frac{8k-3}{2k(4k-3)(4k-1)} > \frac{1}{2k(2k-1)}, \qquad k \ge 1.$$
It can be shown that both series converge but the second one converges to a limit that is strictly greater than the first one. This shows that if we rearrange the order of adding terms in an infinite series with positive and negative terms then we may change the value of the sum. When we are dealing with infinite sums, the rule that addition is commutative is no longer always valid and extreme care is required.
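The phenomenon above is easy to observe numerically. The following Python sketch (not part of the original notes) compares partial sums of the two orderings; the first approaches $\ln 2 \approx 0.693$, while the rearranged one approaches a strictly larger value (it is in fact $\tfrac{3}{2}\ln 2$, a standard result not proved here).

```python
# Hedged numerical illustration of Cautionary Tale 4.4: two orderings, two limits.
import math

def alternating_partial_sum(n_terms):
    # 1 - 1/2 + 1/3 - 1/4 + ...
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged_partial_sum(n_groups):
    # groups 1/(4k-3) + 1/(4k-1) - 1/(2k): 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    return sum(1/(4*k - 3) + 1/(4*k - 1) - 1/(2*k) for k in range(1, n_groups + 1))

print(alternating_partial_sum(100000), math.log(2))        # ~0.6931
print(rearranged_partial_sum(100000), 1.5 * math.log(2))   # ~1.0397, strictly larger
```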
We have seen that the following important series is actually one of the only 'easily' computable series.
Definition 4.5. Let $a \ne 0$ and $r$ be real numbers. They determine a geometric series (of common ratio $r$) equal to
$$a + ar + ar^2 + \cdots + ar^{n-1} + \cdots = \sum_{n=1}^{\infty}ar^{n-1}.$$
Note that this series is often written as $\sum_{n=0}^{\infty}ar^n$, starting at $n = 0$, to simplify the notation.
81
Examples.
1. To see if the series $\sum_{k=1}^{\infty}2^{2k}3^{1-k}$ converges or diverges, we can write it as the geometric series $\sum_{k=1}^{\infty}3\big(\frac{4}{3}\big)^k$. From Theorem 4.6, the series diverges.
2. For what values of $x$ does the series $\sum_{k=1}^{\infty}\dfrac{x^k}{2^k}$ converge? Here $x$ is variable, but, again, this series is a geometric series with common ratio $\dfrac{x}{2}$. The series converges if and only if $\big|\frac{x}{2}\big| < 1$, i.e. if $|x| < 2$ or, equivalently, if $-2 < x < 2$. In that case its sum is $\dfrac{x}{2-x}$.
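A quick numerical check of this closed form (not part of the original notes) can be made by summing enough terms of the geometric series for a sample $x$ inside the interval of convergence.

```python
# Hedged numerical check: for |x| < 2 the partial sums of sum_{k>=1} (x/2)^k approach x/(2-x).
def geometric_partial_sum(x, n_terms):
    return sum((x / 2) ** k for k in range(1, n_terms + 1))

x = 1.5
print(geometric_partial_sum(x, 200), x / (2 - x))   # both approximately 3.0
```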
82
4.2 Lecture 13: Important Series
Reference: Stewart Chapter 12.2, Pages 727–728, Chapter 12.3, Pages 733–736, Chapter 12.6,
Pages 750–754. (Edition?)
The converse of the theorem fails. It is not necessarily true that if $\lim_{n\to\infty}a_n = 0$, then $\sum_{k=1}^{\infty}a_k$ is convergent. We shall see examples later where this happens. But it is a cheap test.
Examples.
1. The series $\sum_{n=1}^{\infty}\frac{1}{n}$, $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}$ and $\sum_{n=1}^{\infty}\frac{1}{n^2}$ all satisfy the criterion, but we shall see that the first one diverges while the second and third converge.
2. It is not clear if the series $\sum_{n=1}^{\infty}\sin(n)$ converges or not. It sums positive and negative numbers in a random fashion. But we know it must be divergent because $\lim_{n\to\infty}\sin(n)$ is not determined (hence not $0$).
3. In Theorem 4.6, it is clear that if $|r| \ge 1$, then the series is divergent as $\lim_{n\to\infty}|r|^n \ne 0$, and may be convergent when $|r| < 1$ because, in this case, $\lim_{n\to\infty}r^n = 0$.
4. Let $a_n = \dfrac{n+3}{n+7}$. Then $\lim_{n\to\infty}a_n = 1 \ne 0$ and so the series $\sum_{n=1}^{\infty}a_n$ diverges.
5. Let
$$a_n = (-1)^{n^3-n^2+n}\,\frac{n+3}{n+7}.$$
In that case, $a_n$ changes sign (note that it changes sign like $(-1)^n$). But, anyway, $a_n$ does not converge to $0$, and so the series $\sum_{n=1}^{\infty}a_n$ diverges.
83
4.2.2 Telescopic Series
Let us first consider an explicit example that will (hopefully) speak for itself.
Example. How can we check if the series $\sum_{k=1}^{\infty}\dfrac{1}{k(k+1)}$ is convergent and find its sum?
Let $s_n = \sum_{k=1}^{n}\dfrac{1}{k(k+1)}$. We can rewrite $\dfrac{1}{k(k+1)}$ using partial fractions. We have
$$\frac{1}{k(k+1)} = \frac{A}{k} + \frac{B}{k+1}, \qquad 1 = A(k+1) + Bk,$$
which gives $A = 1$ and $B = -1$. So,
$$s_n = \sum_{k=1}^{n}\frac{1}{k(k+1)} = \sum_{k=1}^{n}\Big(\frac{1}{k}-\frac{1}{k+1}\Big) = 1 - \frac{1}{2} + \frac{1}{2} - \frac{1}{3} + \frac{1}{3} - \frac{1}{4} + \cdots + \frac{1}{n-1} - \frac{1}{n} + \frac{1}{n} - \frac{1}{n+1} = 1 - \frac{1}{n+1},$$
hence $\lim_{n\to\infty}s_n = 1$.
This sort of sum is an example of what is called a telescoping sum because its terms cancel each other, collapsing into almost nothing.
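For illustration only (not part of the original notes), the closed form $s_n = 1 - \frac{1}{n+1}$ is easy to confirm numerically.

```python
# Hedged numerical check: the telescoping partial sums equal 1 - 1/(n+1) and tend to 1.
def s(n):
    return sum(1.0 / (k * (k + 1)) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, s(n), 1 - 1 / (n + 1))   # the two computed columns agree
```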
Pn
Definition 4.8. A finite sum sn = k=1 ak in which subsequent terms cancel each other,
leaving only the initial and final terms is called a telescopic sum.
Examples.
is telescopic.
84
2. We have seen before that the series $\sum_{n=1}^{\infty}\sin(n)$ is divergent. Remarkably we can use telescoping sums to show such a result. Using trigonometric identities, we have
$$\sum_{n=1}^{N}\sin(n) = \csc(1/2)\sum_{n=1}^{N}\frac{1}{2}\big(2\sin(1/2)\sin(n)\big) = \frac{1}{2}\csc(1/2)\sum_{n=1}^{N}\Big(\cos\frac{2n-1}{2} - \cos\frac{2n+1}{2}\Big) = \frac{1}{2}\csc(1/2)\Big(\cos\frac{1}{2} - \cos\frac{2N+1}{2}\Big).$$
Proof. Let $a_n = \dfrac{1}{n}$. Consider the following inequalities:
$$a_1 = 1 \ge 1,$$
$$a_1 + a_2 = 1 + \frac{1}{2} \ge 1 + \frac{1}{2},$$
$$a_1 + a_2 + a_3 + a_4 = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} \ge 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} = 1 + \frac{2}{2},$$
$$a_1 + \cdots + a_8 = 1 + \frac{1}{2} + \cdots + \frac{1}{7} + \frac{1}{8} \ge 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = 1 + \frac{3}{2},$$
$$\cdots \ge \cdots,$$
$$a_1 + \cdots + a_{2^n} \ge 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \cdots + \frac{1}{2^n} + \cdots + \frac{1}{2^n} = 1 + \frac{n}{2}.$$
This shows that we can make $\sum_{k=1}^{n}\frac{1}{k}$ as large as we like by choosing $n$ large enough. Consequently the series cannot converge.
Cautionary Tale 4.10. Note that, even though lim an = 0, this series diverges. This is
n→∞
the standard example of a series that behaves in this way.
A similar technique to the above can be used to show that the series of the inverse of squares
of integers converges.
85
Proof. Let
2(n+1)
X−1
1 1 1 1 1 1 1
tn = = 1+ 2 + 2+ +···+ 2 +···+ + · · · + (n+1)
k=1
k2 2 3 42 7 22n (2 − 1)2
2n
2 4 1 1
6 1+ 2 + 2 − − + · · · + 2n
2 4 4 9 2
1 1 1 5
= 1+ + 2 +···+ n −( )
2 2 2 36
(n+1)
1 − (1/2) 5 67 1
= −( )= − n < 2,
1 − (1/2) 36 36 2
where in the above we have used the formula for summing a geometric series. The sequence of
67
partial sums is hence increasing and bounded above by 36 ≈ 1.861 and so, by the monotone
convergence theorem it converges to a limit smaller than 2.
Remark 4.12. We shall see that, as a consequence of our analysis of Fourier series (see MA2712), we can calculate the sum of the series:
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} \approx 1.645\ldots < 2.$$
2. $\sum_{k=1}^{\infty}(a_k - b_k)$ converges to $a-b$;
3. $\sum_{k=1}^{\infty}ca_k$ converges to $ca$.
Proof. We will prove the first part. The other parts are left as Exercise 7 of Sheet 3a.
Let us define the sequences $\{s_n\}_{n=1}^{\infty}$, $\{t_n\}_{n=1}^{\infty}$ and $\{u_n\}_{n=1}^{\infty}$ by
$$s_n = \sum_{k=1}^{n}a_k, \qquad t_n = \sum_{k=1}^{n}b_k, \qquad u_n = \sum_{k=1}^{n}(a_k+b_k).$$
Then $u_n = s_n + t_n$. Since $\lim_{n\to\infty}s_n = a$ and $\lim_{n\to\infty}t_n = b$, using the algebra of limits for sequences we have that $\{u_n\}_{n=1}^{\infty}$ converges. Consequently,
$$\sum_{k=1}^{\infty}(a_k+b_k) = \lim_{n\to\infty}u_n = \lim_{n\to\infty}(s_n+t_n) = a+b.$$
Therefore $\sum_{k=1}^{\infty}(a_k+b_k)$ converges to $a+b$.
This result will be used to split complicated formulae for $a_k$ into manageable parts.
87
4.3 Lecture 14: Test for Convergence
Reference: Stewart Chapter 12.3, Pages 733-739, Chapter 12.4, Pages 741-744 and Chapter
12.6, Pages 752-754. (Edition?)
Many series are impossible to sum exactly. Second best is to determine its convergence (or
divergence). There are tests that allow us to determine whether a series converges, but they
will not tell us anything about their sum. We shall see in Chapter 7 that many functions
can be represented by a series of monomial terms. Knowing that a series converges will allow
us to perform some manipulations on the terms of the series and hence on the function. For
instance, we should be able to integrate and differentiate these series and so to solve some
differential equations (see MA3610 in Level 3). We will look at four tests. Some tests may be
inconclusive on a given series. In that case we should use them one by one until a conclusion
is reached (all four might fail though).
Before we look into those tests, we prove an important lemma. Recall that bounded monotone
sequences converge. If the terms are positive, the sequence of partial sums of a series must
be increasing, and so if we can bound them a priori, the series must be convergent.
This leads to the following useful result where we can ignore the change of signs of some series.
Lemma 4.14. Suppose that the series $\sum_{n=1}^{\infty}|a_n|$ converges. Then, the series $\sum_{n=1}^{\infty}a_n$ converges.
Proof. Let $b_k = |a_k| + a_k$. Note that $0 \le b_k \le 2|a_k|$. Let $s_n = \sum_{k=1}^{n}b_k$. Then, $\{s_n\}_{n=1}^{\infty}$ is an increasing sequence (because $b_k \ge 0$) and bounded because
$$0 \le s_n = \sum_{k=1}^{n}b_k \le 2\sum_{k=1}^{n}|a_k| \le 2\sum_{k=1}^{\infty}|a_k|.$$
Consequently, from Theorem 2.23, it converges. Now, $a_k = b_k - |a_k|$ and both $\sum_{k=1}^{\infty}b_k$ and $\sum_{k=1}^{\infty}|a_k|$ converge. Therefore $\sum_{k=1}^{\infty}a_k$ converges from Theorem 4.13.
Remark 4.15. We shall see in Section 5.3 that a convergent series $\sum_{n=1}^{\infty}a_n$ such that $\sum_{n=1}^{\infty}|a_n|$ does not converge behaves quite extraordinarily.
Examples.
1. The series $\sum_{n=1}^{\infty}\dfrac{\cos(n)}{n^2}$ converges because $\Big|\dfrac{\cos(n)}{n^2}\Big| \le \dfrac{1}{n^2}$ and the series $\sum_{n=1}^{\infty}\dfrac{1}{n^2}$ converges (Lemma 4.14 and the Comparison Test). But its sum is difficult to calculate.
2. We cannot apply Lemma 4.14 to $\sum_{n=1}^{\infty}\dfrac{\cos(n\pi)}{n}$ because the harmonic series diverges. This follows because $\cos(n\pi) = (-1)^n$ and so $\Big|\dfrac{\cos(n\pi)}{n}\Big| = \dfrac{1}{n}$. We shall see that this series has some interesting, although unexpected, properties.
88
4.3.1 Comparison Test
Finally, we can use our results about monotone sequences to give the following existence result.
Proposition 4.16. Suppose that $\{a_n\}_{n=1}^{\infty}$ is a sequence of positive numbers; then the series $\sum_{n=1}^{\infty}a_n$ converges if and only if the sequence of partial sums is bounded.
Proof. If the terms of the series are positive, its sequence of partial sums is monotonically increasing, and so, from the Monotone Convergence Theorem 2.23, it has a limit if and only if it is bounded.
Given the above comments we restrict attention to non-negative sequences $\{a_n\}_{n=1}^{\infty}$. Hence we establish convergence or divergence by testing whether or not the sequence of partial sums $\{s_n\}_{n=1}^{\infty}$ is bounded. This is usually done by comparing the given series with a 'simpler' series which is known to converge or diverge. We make this precise in the following result.
Theorem 4.17 (Comparison Test). Let $\{a_n\}_{n=1}^{\infty}$ and $\{b_n\}_{n=1}^{\infty}$ be sequences of real numbers. Suppose that there exists an $N > 0$ such that $0 \le a_n \le b_n$ for all $n \ge N$. Then,
(i) if $\sum_{n=1}^{\infty}b_n$ converges, then $\sum_{n=1}^{\infty}a_n$ converges;
(ii) if $\sum_{n=1}^{\infty}a_n$ diverges, then $\sum_{n=1}^{\infty}b_n$ diverges.
Examples.
1. Let $a_n = \dfrac{n+3}{n^2+7}$. We can compare the series $\sum_{n=1}^{\infty}a_n$ with the harmonic series because $a_n \ge \dfrac{1}{n}$ for $n \ge 3$. Hence the series $\sum_{n=1}^{\infty}a_n$ diverges because the harmonic series diverges.
2. Let now $a_n = \dfrac{n+3}{n^3+7}$. We cannot compare $a_n$ with $\dfrac{1}{n^2}$ directly, but $a_n \le \dfrac{2}{n^2}$ for $n \ge 3$. And so the series $\sum_{n=1}^{\infty}a_n$ converges by comparison with $\sum_{n=1}^{\infty}(2/n^2)$.
This test follows from Lemma 4.14 and Theorem 2.23. We shall look at its proof in Chapter 5.
89
4.3.2 Integral Test
We now consider a test which can be used for series of the type
$$\sum_{n=1}^{\infty}\frac{1}{n^p}$$
for fixed $p > 0$. When $p \le 1$, we see that $n^p \le n$, $\frac{1}{n} \le \frac{1}{n^p}$, and so the series diverges using the Comparison Test. When $p \ge 2$, we have $n^2 \le n^p$, $\frac{1}{n^p} \le \frac{1}{n^2}$, and so the series converges using the Comparison Test. When $1 < p < 2$ we cannot conclude. We need another idea. The next test that we will consider applies to series of the form $\sum_{n=1}^{\infty}f(n)$ where $f : [1,\infty) \to [0,\infty)$ is a positive decreasing continuous function with $f(x) \to 0$ as $x \to \infty$. In the above case the function $f(x) = 1/x^p$ has this property.
Theorem 4.19 (Integral Test). If $f : [1,\infty) \to [0,\infty)$ is a positive, decreasing, continuous function, then the quantities
$$s = \sum_{n=1}^{\infty}f(n) \qquad\text{and}\qquad I = \int_1^{\infty}f(x)\,dx$$
either both converge or both diverge.
[Figure 4.1: The area under the curve $y = f(x)$ on $[k-1,k]$ is less than the area under the line $y = f(k-1)$ and it is greater than the area under the line $y = f(k)$.]
90
By considering the area under the curves of $y = f(k)$, $y = f(x)$ and $y = f(k-1)$, when $k-1 \le x \le k$, we have
$$f(k) = \int_{k-1}^{k}f(k)\,dx \le \int_{k-1}^{k}f(x)\,dx \le \int_{k-1}^{k}f(k-1)\,dx = f(k-1),$$
so that, summing over $k$,
$$s_n - f(1) = \sum_{k=2}^{n}f(k) \le \Big(\int_1^2 + \int_2^3 + \cdots + \int_{n-1}^{n}\Big)f(x)\,dx \le \sum_{k=2}^{n}f(k-1) = \sum_{k=1}^{n-1}f(k) = s_{n-1}.$$
That is,
$$s_n - f(1) \le I_n \le s_{n-1}.$$
Hence, if $\{I_n\}_{n=1}^{\infty}$ converges then $\{s_n\}_{n=1}^{\infty}$ converges and if $\{I_n\}_{n=1}^{\infty}$ diverges then $\{s_n\}_{n=1}^{\infty}$ diverges, from the Comparison Test.
Remarks 4.20. As before, the convergence result will remain true if the hypotheses are true for $n$, $x$ on an interval $[N,\infty)$ for some $N$ large enough (see Exercise 6 of Sheet 3a). Moreover, integrating functions can be very difficult but, using the Limit Comparison Test, we can simplify $f(n)$ into an expression that can be integrated.
Example. Check for which $p > 0$, $\sum_{n=1}^{\infty}\frac{1}{n^p}$ is convergent. For the Integral Test, let $a_n = \frac{1}{n^p}$ and let $f : [1,\infty) \to \mathbb{R}$, $f(x) = \frac{1}{x^p}$. Then $f$ is continuous on $[1,\infty)$, positive and decreasing. Furthermore $a_n = f(n)$ and so we may apply the test. We have
$$\int_1^{\infty}\frac{1}{x^p}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{x^p}\,dx = \lim_{t\to\infty}\begin{cases}\dfrac{t^{1-p}-1}{1-p}, & 0 < p < 1;\\[1mm] \ln(t), & p = 1;\\[1mm] \dfrac{1}{p-1}\Big(1-\dfrac{1}{t^{p-1}}\Big), & p > 1.\end{cases}$$
The first two limits are infinite, but in the last case $1/t^{p-1}\to0$, so the limit is finite. Consequently the integral, and hence the series, is divergent when $p \le 1$ and is convergent when $p > 1$.
∞
X
The next test can be used for series of the type f (n) where f is integrable basically on
n=1
the positive real line.
Cautionary Tale 4.21. Note that the sum of the series is not necessarily equal to the improper integral. After all, the series $\sum_{n=1}^{\infty}\frac{1}{n^2}$ starts $1 + \frac{1}{4} + \frac{1}{9} + \cdots$ and consists of positive terms, so its sum is larger than $1.5$ (summing the first 10 terms). But the Integral Test gives
$$\int_1^{\infty}\frac{1}{x^2}\,dx = \lim_{t\to\infty}\int_1^t\frac{1}{x^2}\,dx = \lim_{t\to\infty}\Big[-\frac{1}{x}\Big]_1^t = \lim_{t\to\infty}\Big(1-\frac{1}{t}\Big) = 1.$$
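The gap between the sum and the integral is easy to see numerically; the following sketch (not part of the original notes) compares a partial sum with $\pi^2/6$ and with the integral's value $1$.

```python
# Hedged numerical illustration of Cautionary Tale 4.21: the sum of 1/n^2 is not the integral of 1/x^2.
import math

partial_sum = sum(1.0 / n**2 for n in range(1, 100001))
print(partial_sum)        # ~1.64492..., close to pi^2/6
print(math.pi**2 / 6)     # the actual sum of the series
print(1.0)                # the value of the improper integral on [1, infinity)
```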
91
4.4 Lecture 15: Further Tests for Convergence
∞
X
1. If ρ1 < 1, the series an converges.
n=1
∞
X
2. If ρ1 > 1, the series an diverges.
n=1
3. If ρ1 = 1, we cannot conclude.
∞
X
1. If ρ2 < 1, the series an converges.
n=1
∞
X
2. If ρ2 > 1, the series an diverges.
n=1
3. If ρ = 1, we cannot conclude.
Those two results are corollaries of the following lemma about the convergence/divergence of
series with general terms.
92
1. either
|an+1 /an | ≤ ρ1 < 1, n ≥ N,
or
|an |1/n ≤ ρ2 < 1, n ≥ N,
then the series converges.
2. If, either
|an+1 /an | ≥ ρ1 > 1, n ≥ N,
or
|an |1/n ≥ ρ2 > 1, n ≥ N,
then the series diverges.
Proof. This proof has been removed. It had a flaw. See the lecture and seminar notes.
Example. We can determine the values of $x$ for which $\sum_{n=1}^{\infty}\dfrac{x^n}{n!}$ is convergent. If the sum depends on a variable $x$ like this one does, it is often a good idea to use the Ratio Test. We have $a_n = \dfrac{x^n}{n!}$ and we check the hypotheses of the test. First, we need $a_n$ to be non-zero and this is ok providing $x \ne 0$, so let's assume $x \ne 0$ for the moment. Second, $\dfrac{a_{n+1}}{a_n} = \dfrac{x^{n+1}/(n+1)!}{x^n/n!} = \dfrac{x}{n+1}$. So, we have
$$\lim_{n\to\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = \lim_{n\to\infty}\frac{|x|}{n+1} = 0.$$
Using the Ratio Test, we see that the series converges providing $x \ne 0$. But the case when $x = 0$ is easy because then every term of the series is zero and so it converges. Consequently the series converges for all $x$.
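Since $\sum_{n\ge0}x^n/n!$ is the exponential series, the partial sums of the series above (which starts at $n = 1$) approach $e^x - 1$; the following Python sketch (not part of the original notes) illustrates the convergence for several values of $x$, including negative ones.

```python
# Hedged numerical check: partial sums of sum_{n>=1} x^n / n! approach e^x - 1 for every x.
import math

def partial_sum(x, n_terms):
    return sum(x**n / math.factorial(n) for n in range(1, n_terms + 1))

for x in (-3.0, 0.5, 5.0):
    print(x, partial_sum(x, 60), math.exp(x) - 1)   # the last two columns agree
```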
Cautionary Tale 4.25. Notice that if l = 1 then the Ratio Test tells us nothing. Typical
examples are when we have rational functions (i.e. ratio of polynomials).
∞
X 1
Example. Check for which p > 0, is convergent applying the last two tests.
n=1
np
93
2. For the Root Test, consider
1 ln n
bn = ln(|an |1/n ) = ln(n−p ) = −p ,
n n
and so limn→∞ bn = 0 and the test is inconclusive because
Remark 4.26. How do we know whether one of these tests will tell us about convergence?
We don’t for sure. However, if the general term of the series looks like something we could
integrate, then we should try the integral test. On the other hand, if the ratio of consecutive
terms of the series is simple enough, we should try the ratio test.
94
Summary of Chapter 3
We have seen:
• an example of a series where the terms tend to zero but the series diverges;
• that we must check carefully that it is appropriate to use either of the tests;
• that the series $\sum_{n=1}^{\infty}\dfrac{1}{n^p}$, $p \ge 0$, converges for $p > 1$ and diverges for $p \le 1$.
95
4.5 Exercise Sheet 3
2. In each of the following cases determine whether or not the series converges.
(a) $\sum_{n=1}^{\infty}\dfrac{1}{2^n+1}$. (b) $\sum_{n=1}^{\infty}\dfrac{4n^2-n+3}{n^3+2n}$. (c) $\sum_{n=1}^{\infty}\dfrac{n+\sqrt{n}}{2n^3-1}$. (d) $\sum_{n=1}^{\infty}n^4e^{-n^2}$.
3. For each of the following series, determine the values of $x \in \mathbb{R}$ such that the given series converges.
(a) $\sum_{k=0}^{\infty}\dfrac{x^k}{k!}$.
(c) $\sum_{k=0}^{\infty}\dfrac{k^3x^k}{3^k}$.
(d) $\sum_{k=0}^{\infty}k^kx^k$.
(e) $\sum_{k=0}^{\infty}a_kx^k = 1 + 2x + x^2 + 2x^3 + x^4 + \cdots$, i.e. with $a_{2k} = 1$ and $a_{2k+1} = 2$ for $k = 0, 1, 2, \cdots$.
(f) $\sum_{k=1}^{\infty}\dfrac{\sqrt{x^2+k} - |x|}{k^2}$.
(g) $\sum_{k=1}^{\infty}\Big(\dfrac{\cos kx}{k^3} + 3\dfrac{\sin kx}{k^2}\Big)$.
4. Show that
1 1 1 1
+ + +··· = .
1·3 3·5 5·7 2
5. Use the Integral Test to test the convergence of the following series for $c > 0$:
(a) $\sum_{n=2}^{\infty}\dfrac{1}{n(\ln n)^c}$. (b) $\sum_{n=2}^{\infty}\dfrac{1}{n(\ln n)(\ln\ln n)^c}$.
6. Show that the Integral Test remain valid if its hypotheses on f are only true for some
interval [N, ∞), N ∈ N.
96
7. Prove Parts 2 and 3 of Theorem 4.13.
8. Determine the values of $x$ for which the following series converge. Find the sum of the series for those values of $x$. (a) $\sum_{k=1}^{\infty}\dfrac{x^k}{3^k}$. (b) $\sum_{k=1}^{\infty}(x-4)^k$. (c) $\sum_{k=1}^{\infty}\tan^kx$.
9. Show that the Ratio Test is always inconclusive for series whose terms are rational functions, that is a ratio of polynomials $a_n = \dfrac{p(n)}{q(n)}$ for $p$, $q$ some polynomials. Deduce by other means the convergence of the series $\sum_{n=1}^{\infty}a_n$.
10. Get the details of the proof of the Root Test when ρ2 < 1.
2. Determine which of the following series converge and for those series that do converge find the sum.
(a) $\sum_{k=1}^{\infty}\dfrac{3}{5^k}$; (b) $\sum_{k=1}^{\infty}\dfrac{1}{(k+2)(k+3)}$; (c) $\sum_{k=1}^{\infty}\dfrac{1}{9k^2+3k-2}$; (d) $\sum_{k=1}^{\infty}\dfrac{k^2+2}{k^3+3}$; (e) $\sum_{k=1}^{\infty}\dfrac{3^k+2^k}{6^k}$.
3. Determine whether $\sum_{n=1}^{\infty}\dfrac{7(n+1)}{n\,2^n}$ converges or diverges.
4. Determine whether $\sum_{n=1}^{\infty}\dfrac{n^4+3n^2}{n^{9/2}+n}$ converges or diverges.
1. (a) 3/5.
(b) 1/2.
(c) divergent.
(d) 3/4.
2. (a) convergent.
(b) divergent.
(c) convergent.
(d) convergent.
3. (a) x ∈ R.
(b) |x| < 1.
(c) |x| < 3.
97
(d) x = 0.
(e) |x| < 1.
(f) x ∈ R. Compare with a power of k.
(g) x ∈ R.
1. (a) Divergent;
(b) convergent;
(c) divergent;
(d) divergent;
(e) convergent.
2. (a) 3/4;
(b) telescopic;
(c) telescopic;
(d) divergent;
(e) 3/2.
3.
98
4.5.3 Feedback for Sheet 3a
1. It is a geometric series of ratio $3/8$ and so it converges to its limit $\dfrac{3/8}{1-3/8} = \dfrac{3}{5}$.
We have a telescopic series. We decompose
$$\frac{1}{(k+1)(k+2)} = \frac{a}{k+1} + \frac{b}{k+2} = \frac{1}{k+1} - \frac{1}{k+2},$$
2. (a) We could show convergence here by using the Ratio or Root Tests or more simply by using the Comparison Test by noting that
$$0 \le \frac{1}{2^n+1} \le \frac{1}{2^n}.$$
The upper bound is a term from a convergent geometric series.
(b) This is divergent. Using the Limit Comparison Test with $a_n = \dfrac{4n^2-n+3}{n^3+2n}$ and $b_n = 1/n$, we get
$$\lim_{n\to\infty}\frac{a_n}{b_n} = \lim_{n\to\infty}na_n = \lim_{n\to\infty}\frac{n(4n^2-n+3)}{n^3+2n} = 4,$$
and so both series diverge together.
(c) This converges. Using the simple Comparison Theorem we see that
$$\frac{n+\sqrt{n}}{2n^3-1} \le \frac{2n}{2n^3-1}$$
and, using the Limit Comparison Test with $a_n = \dfrac{2n}{2n^3-1}$ and $b_n = 1/n^2$, we get
$$\lim_{n\to\infty}\frac{a_n}{b_n} = \lim_{n\to\infty}n^2a_n = \lim_{n\to\infty}\frac{2n^3}{2n^3-1} = 1,$$
and so both series converge together, since $\sum_{n=1}^{\infty}1/n^2$ converges.
(d) Using the Root Test, $\big(n^4e^{-n^2}\big)^{1/n} = \big(n^{1/n}\big)^4e^{-n} \to 1\cdot0 = 0 < 1$. Here the result is a consequence of $\lim_{n\to\infty}n^{1/n} = 1$ and $\lim_{n\to\infty}e^{-n} = 0$. And so, the series converges.
99
3. (a) Let ak = xk /k! and use the Ratio Test. We have
k+1
ak+1
lim
= lim |x| /(k + 1)! = lim |x| = 0.
k→∞ ak k→∞ |x|k /k! k→∞ k + 1
From the Ratio Test, the series actually converges for all x ∈ R.
(b) Let ak = α(α − 1) · · · (α − k + 1)xk /k!. Using the Ratio Test
ak+1
lim = lim α − k |x| = lim α/k − 1 |x| = |x|.
k→∞ ak k→∞ k + 1 k→∞ 1 + 1/k
Thus the series converges if |x| < 1. If |x| > 1 then the terms of the series are
unbounded and thus the series diverges. What happens when x = −1 or x = 1
needs more refined tests to determine if the series converges or diverges and the
outcome depends on α. This will not be considered further here.
(c) The Root Test is the easiest test to use here. With ak = k 3 xk /3k we have
This only converges if x = 0 and is unbounded for x 6= 0. Hence the series only
converges when x = 0.
(e) Let bk = ak xk . The Ratio Test does not give any information here as ak+1 /ak does
not have a limit as k → ∞. However we can still use the Root Test. Since
1/k
1 6 ak 6 2, 1 6 lim ak 6 lim 21/k = 1.
k→∞ k→∞
Thus,
1/k
lim |bk |1/k = lim ak |x| = |x|.
k→∞ k→∞
The series converges if |x| < 1 and diverges if |x| > 1. By inspection the series
diverges if x = 1 as the terms of the series do not tend to 0 as k → ∞. It can be
shown that the series also diverges when x = −1.
(f) Since x2 ≥ 0,
√
x2 + k − |x| (x2 + k) − x2 1 1
ak = = √ = √ 6 ,
k2 ( x2 + k + |x|)k 2 ( x2 + k + |x|)k k 3/2
100
(g) Let ak denotes the k-th term of the series. From the triangle inequality and the
fact that | sin kx| 6 1 and | cos kx| 6 1, we have
1 1
|ak | 63
+3· 2
k k
for all x ∈ R.PSince k=1 1/k 3 and ∞
P∞ P 2
k=1 1/k are standard convergent series, it
follows that ∞ k=1 |ak | converges from the Comparison Test. Hence the original
series converges for all x.
1 1/2 1/2
= − ,
(2k − 1)(2k + 1) 2k − 1 2k + 1
5. (a) We use the Integral Test. We have to evaluate the convergence of the integral
Z ∞
dx
2 x(ln x)c
and check that the integrand is decreasing. The last claims follows readily from
1
the derivative of :
x(ln x)c
′
1 1 c
c
=− 2 c
− 2 .
x(ln x) x (ln x) x (ln x)c+1
We know that the integral converges if and only if c > 1. Hence the series converges
if c > 1 and diverges otherwise.
(b) We proceed in the same way. The derivative of the integrand is
′
1 −1 −1 −c
c
= 2 c
+ 2 2 c
+ 2 ,
x(ln x)(ln ln x) x (ln x)(ln ln x) x (ln x) (ln ln x) x (ln x) (ln ln x)c+1
2
101
6. Because N is finite,
∞
X N
X −1 ∞
X
an = an + an .
n=1 n=1 n=N
with g that satisfies the hypotheses of the Integral Test and so all integrals and all series
converge or diverge together.
Finally, let
n
X n
X
un = (cak ) = c( ak ) = c · sn .
k=1 k=1
8. Those are geometric series. Recall that they converge if and only if their ratio $r$ is smaller than $1$ (in modulus). And so,
(a) $r = x/3$: the series converges if $|x| < 3$, that is, $x \in (-3,3)$, to $\dfrac{x}{3-x}$;
(b) $r = x-4$: the series converges if $|x-4| < 1$, that is, $x \in (3,5)$, to $\dfrac{x-4}{5-x}$;
(c) $r = \tan(x)$: the series converges if $|\tan(x)| < 1$, that is, $x \in (-\pi/4+k\pi, \pi/4+k\pi)$, $k \in \mathbb{Z}$, to $\dfrac{\tan x}{1-\tan x} = \dfrac{\sin x}{\cos x - \sin x}$.
9. The Ratio Test takes the form $\displaystyle\lim_{n\to\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = \lim_{n\to\infty}\Big|\frac{p(n+1)}{p(n)}\cdot\frac{q(n)}{q(n+1)}\Big|$. Note that the leading coefficients of $p(n+1)$ and $p(n)$ are the same for any polynomial, and so $\lim_{n\to\infty}\dfrac{p(n+1)}{p(n)} = 1$, and likewise for the ratio of $q$; therefore the Ratio Test is always inconclusive. Because the degrees of $p$ and $q$ are non-negative integers, using the Limit Comparison Test, we see that the series converges if and only if the degree of $q$ is larger than or equal to the degree of $p$ plus $2$.
102
Chapter 5
Definition 5.1. A sequence {an }∞ n=1 has limit a if for every ǫ > 0, there is an integer N(ǫ)
(usually depending on ǫ) such that, whenever n > N(ǫ), |an − a| < ǫ. If {an }∞
n=1 has limit a we
∞
write lim an = a. If lim an exists, we say that the sequence {an }n=1 converges. Otherwise
n→∞ n→∞
we say it diverges.
Proof. Given any ǫ > 0, we know from the definition of lim f (x) that there exists M such
x→∞
that, whenever x > M, we have |f (x) − l| < ǫ. So take N = ⌈M⌉, that is, M rounded up
to the nearest integer (see Fundamentals at Level 1). Then, if n > N, we have |an − l| =
|f (n) − l| < ǫ because n > N ≥ M.
Remark 5.3. Because the limit of a sequence depends only on the (N)-tail of a sequence
{xn }∞
n=1 , defined as { xn : n > N }. Those results are valid for any tail being positive, or
bounded in [a, b].
As with limit of functions, we can define what it means to have limit ±∞.
Definition 5.4. We say that lim an = +∞ if for every M ∈ R there exists an integer N
n→∞
such that for all n > N , an ≥ M. We say that lim an = −∞ if for every M ∈ R there
n→∞
exists an integer N such that for all n > N , an ≤ M.
103
Note that the idea in the definition is that M can be as large as we wish. Conversely, in
the following, M is as large as we wish in modulus but negative.
Remarks 5.5. 1. You should also note that N(ǫ) is not unique since, if we have an N(ǫ)
corresponding to a given ǫ > 0, then any natural number larger that this N will also
suffice to satisfy the definition. This will be useful later.
2. The definition also implies that any number of the initial values of xn are irrelevant
for convergence.
1. Have a candidate for l. That candidate could come from calculating, guessing etc.
3. Set the upper bound to be smaller than ǫ. If the bound is simple, then we can get easily
the value of N(ǫ).
Examples.
1. A very simple example is the sequence $\{x_n\}_{n=1}^{\infty}$ with $x_n = \dfrac{1}{n}$. We believe, or calculated using our knowledge of Level 1, that $l = 0$. We can use the definition to check if this is correct. We need to evaluate (recall $n \in \mathbb{N}$)
$$|x_n - l| = \Big|\frac{1}{n} - 0\Big| = \frac{1}{n} < \epsilon.$$
And so, for $n > N(\epsilon) = \lfloor 1/\epsilon\rfloor$, where $\lfloor x\rfloor$ represents the integer part of $x$ (the largest integer not exceeding $x$), we have $\dfrac{1}{n} < \epsilon$.
Note that, as expected, $N(\epsilon)$ increases (to infinity) as $\epsilon$ tends to $0$.
104
2. Using the sequence $\{x_n\}_{n=1}^{\infty}$ with $x_n = \dfrac{1}{n+n^2}$, we can again estimate $N(\epsilon)$. The candidate for the limit is again $l = 0$. And so,
$$|x_n - l| = \frac{1}{n+n^2} < \frac{1}{n} < \epsilon,$$
because $n^2 \ge 0$. We could use the previous value for $N(\epsilon) = \lfloor 1/\epsilon\rfloor$. This value for $N(\epsilon)$ is far from optimal as we are going to see now.
3. To get a more concrete understanding of what $N(\epsilon)$ does, consider the previous example with $x_n = \dfrac{1}{n+n^2}$. Take $\epsilon = 0.1$, say. The estimate $n \ge \dfrac{1}{\epsilon}$ is easy to calculate; basically, if $n > (0.1)^{-1} = 10$, $\dfrac{1}{n+n^2} < 0.1$, but it is not optimal. Indeed, it is enough to choose $n \ge 3$ to get $\dfrac{1}{n+n^2} \le \dfrac{1}{12} < \dfrac{1}{10}$. Actually, in the exercises, we are going to see that, from the analysis of quadratic inequalities you did early in Level 1, if
$$n > N(\epsilon) = \Big\lfloor -\frac{1}{2} + \sqrt{\frac{4+\epsilon}{4\epsilon}}\Big\rfloor = \Big\lfloor\frac{2}{\sqrt{\epsilon}\,(\sqrt{4+\epsilon}+\sqrt{\epsilon})}\Big\rfloor, \qquad (5.1)$$
then $\dfrac{1}{n+n^2} < \epsilon$. Again, note that $N(\epsilon)$ increases as $\epsilon$ tends to $0$. But, clearly, (5.1) is very complicated, although it gives the first 'exact' estimate of $N(\epsilon)$. For $\epsilon = 10^{-1}$, you get $n > \lfloor\sqrt{10.25} - 0.5\rfloor = 2$.
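To see concretely how much sharper (5.1) is than the crude bound $\lfloor 1/\epsilon\rfloor$, the following Python sketch (not part of the original notes) compares both with the threshold found by brute force.

```python
# Hedged illustration: for x_n = 1/(n + n^2), compare the crude bound floor(1/eps),
# the value from formula (5.1), and the threshold found by brute force.
import math

def sharp_N(eps):
    # formula (5.1): N(eps) = floor(-1/2 + sqrt((4 + eps) / (4 * eps)))
    return math.floor(-0.5 + math.sqrt((4 + eps) / (4 * eps)))

def brute_force_N(eps):
    n = 1
    while 1.0 / (n + n**2) >= eps:
        n += 1
    return n - 1   # since x_n is decreasing, every n above this value gives x_n < eps

for eps in (0.1, 0.01, 0.001):
    print(eps, math.floor(1 / eps), sharp_N(eps), brute_force_N(eps))
```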
4. Let $x_n = 1 + \dfrac{\cos n}{n}$. The candidate for the limit is $l = 1$. Then,
$$|x_n - 1| = \Big|\frac{\cos n}{n}\Big| = \frac{|\cos n|}{n} \le \frac{1}{n} < \epsilon,$$
and so we can choose $N(\epsilon) = 1/\epsilon$.
5. For a more complicated example, consider the sequence whose general term is
$$x_n = \frac{103n^2-8}{4n^2+99n-3}.$$
The sequence converges because it is a combination of standard convergent sequences. We have
$$\lim_{n\to\infty}x_n = \lim_{n\to\infty}\frac{103n^2-8}{4n^2+99n-3} = \lim_{n\to\infty}\frac{103-8/n^2}{4+99/n-3/n^2} = \frac{103}{4}.$$
To show, using the definition, that $l = \dfrac{103}{4}$ is indeed the limit, we estimate
$$\Big|\frac{103n^2-8}{4n^2+99n-3} - \frac{103}{4}\Big| = \frac{99\cdot103\,n - 277}{4(4n^2+99n-3)} \le \frac{100\cdot104\,n}{4(4n^2)} \le \frac{25\cdot26}{n} \le \frac{700}{n},$$
because $99n-3 \ge 0$ for $n \ge 1$. Therefore $|x_n - l| < \epsilon$ if $n > \dfrac{700}{\epsilon} = N(\epsilon)$.
Cautionary Tale 5.6. Note that this is highly not unique. Each of us will usually have
their own expression. But, some estimates are correct, some others are not because of
errors, not of choice.
105
5.1.2 Divergent Sequences
Sequences that do not converge satisfy the following result.
Proposition 5.7. A sequence {xn }∞ n=1 is not convergent if and only if for all l ∈ R, there
exists an ǫ(l) > 0 such that for all N ∈ N, there exists n̄(N) > N such that
|xn̄ − l| ≥ ǫ(l).
Proof. To show that a sequence is divergent, we need to state the contraposition of the defi-
nition of a limit. For that we need to look back at some propositional logic. The contra-
position of statements with the quantifiers ∃ or ∀ are as follows:
We use those two properties to state the contrary of the definition of a limit by exchanging ∀
and ∃. Now, in a compact notation, a sequence is convergent if
Example. Proposition 5.7 shows that the sequence defined by xn = (−1)n is divergent.
Indeed, let l ∈ R be any candidate for its limit. Take ǫ(l) = 1. Then, for all N, either
1. l ≥ 0, and |x2N +1 − l| = | − 1 − l| = 1 + l ≥ 1,
Using the same type of argument we can also show the following uniqueness result showing
that we can talk about THE LIMIT of a convergent sequence.
Proof. We use a proof by contradiction and thus we start by assuming that the converse is
true. That is, we assume that limn→∞ xn = x and limn→∞ xn = y with x 6= y. That is we
have |x − y| > 0. Now, by the triangle inequality, we have
106
Because x and y are limits of {xn }∞ n=1 , for every ǫ > 0 there exists N1 (ǫ) and N2 (ǫ) such that
|xn − x| < ǫ, for all n > N1 (ǫ), and |y − xn | < ǫ, for all n > N2 (ǫ). We want both of these to
be true and thus take n > N(ǫ) = max{N1 (ǫ), N2 (ǫ)} and we get
This is true for every ǫ > 0 and hence, it is true for ǫ = |x − y|/4, which immediately gives us
a contradiction since we cannot have
2 1
0 < |x − y| 6 |x − y| = |x − y|,
4 2
when |x − y| =
6 0.
107
5.2 Extra curricular material: Error Estimates from the
Tests
In this extra curricular material we study various error estimates of the sum of series we can
deduce from the work on the tests for convergence
Proof. Using the notation and calculations of Lemma 4.24, we can evaluate the difference
between s and sn when n ≥ N. We have
∞
X ρ1
|s − sn | ≤ |an+k | ≤ |an |.
k=1
1 − ρ1
For the second case, as before, comparing with the convergent geometric series we deduce that
the series converges and that the error s − sn is bounded by
∞ ∞
X X ρn+1
|s − sn | ≤ |an+k | ≤ ρ2n+k ≤ 2 .
k=1 k=1
1 − ρ2
108
2. In the situation of Theorem 4.19, when both I and s diverge, there exists γ ≤ f (1) such
that
lim (sn−1 − In ) = γ.
n→∞
That is, s and I both diverge at a rate comparable with a constant less or equal to f (1).
Proof. Following the proof of Theorem 4.19, we can relate the limits of {sn }∞ ∞
n=1 and {In }n=1
in the convergent case. We have that if sn → s and In → I as n → ∞ then
s − f (1) 6 I 6 s
and thus the sum of the series s satisfies s ∈ [I, I + f (1)]. We also have
n+j Z n+j
X
0 6 sn+j − sn = f (k) 6 f (x) dx.
k=n+1 n
Letting j → ∞ gives Z ∞
0 6 s − sn 6 f (x) dx.
n
This gives a bound on how rapidly the sequence converges to s. In general, recall that
Z k
f (k) 6 f (x) dx 6 f (k − 1).
k−1
And so, Z k
06 f (k − 1) − f (x) dx 6 f (k − 1) − f (k).
k−1
where Z 2 Z n
sn−1 − In = f (1) − f (x) dx + · · · + f (n − 1) − f (x) dx .
1 n−1
Since {sn−1 − In }∞
n=1 is increasing with n and bounded by f (1) we know that it converges with
Thus the difference between the two sequences converges even in the case when the sequences
themselves diverge.
Examples.
109
In this case it can be shown that $\gamma = 0.577\ldots$, which is known as the Euler(footnote 1) constant. Because
$$\lim_{n\to\infty}\big(\ln(n+1) - \ln n\big) = \lim_{n\to\infty}\ln\Big(1+\frac{1}{n}\Big) = \ln 1 = 0,$$
we can actually write $\gamma$ in a more convenient form:
$$\gamma = \lim_{n\to\infty}\Big(\sum_{k=1}^{n}\frac{1}{k} - \ln n\Big).$$
Thus $\sum_{n=1}^{\infty}1/n$ diverges as slowly as $\ln n$, that is, for all $\epsilon > 0$, there exists $N(\epsilon)$ such that
$$\ln(n) + \gamma - \epsilon \le \sum_{k=1}^{n}\frac{1}{k} \le \ln(n) + \gamma + \epsilon, \qquad n \ge N(\epsilon).$$
Hence to obtain the sum accurate to $10^{-3}$ we need a thousand ($10^3$) terms and to obtain the sum accurate to $10^{-6}$ we need a million ($10^6$) terms. The series converges rather slowly.
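This slow logarithmic growth is easy to observe directly; the following Python sketch (not part of the original notes) compares the harmonic partial sums with $\ln n + \gamma$, quoting the numerical value of Euler's constant only for comparison.

```python
# Hedged numerical illustration: harmonic partial sums grow like ln(n) + gamma.
import math

GAMMA = 0.5772156649015329   # Euler's constant, quoted for comparison only

for n in (10**3, 10**6):
    h_n = sum(1.0 / k for k in range(1, n + 1))
    print(n, h_n, math.log(n) + GAMMA, h_n - math.log(n))
```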
1
L. Euler (1707-1783) was a Swiss mathematician who spent most of his career between St-Petersburg
and Berlin. He was one of the towering lights of 18th century mathematics. He has been the most prolific
mathematician of his exceptional caliber and was nearly blind for many years still producing papers on all
areas of mathematics. Euler claimed that he made some of his greatest mathematical discoveries while holding
a baby in his arms with other children playing round his feet (he had 13 of them, although only 5 did not die
in infancy).
110
5.3 Lecture 17: Absolute Convergence of Series and the
Leibnitz Criterion of Convergence
Coming back to Lemma 4.14, we can introduce some terminology connected with this result.
Definition 5.11. The series $\sum_{n=1}^{\infty}a_n$ is said to be absolutely convergent if the series $\sum_{n=1}^{\infty}|a_n|$ converges. The series $\sum_{n=1}^{\infty}a_n$ is said to be conditionally convergent if $\sum_{n=1}^{\infty}a_n$ converges but $\sum_{n=1}^{\infty}|a_n|$ diverges.
Theorem 5.13 (Leibnitz Test). Let $\{a_n\}_{n=1}^{\infty}$ be an alternating sequence such that $\{|a_n|\}_{n=1}^{\infty}$ is strictly monotonically decreasing with $\lim_{n\to\infty}a_n = 0$. Then the series $\sum_{n=1}^{\infty}a_n$ converges. Moreover, the rate of convergence of the partial sum $s_n$ can be evaluated by
$$|s - s_n| \le |a_{n+1}|, \qquad n \ge 1.$$
Proof. We assume that the first term a1 of the series is positive. When it is negative
P∞ we
multiply everything by (−1) and follow the same procedure. So, the series
P2n−1 is n=1 n =
a
a1 − |a2 | + a3 − |a4 | + . .. Consider the odd partial sums s2n−1 = i=1 ai and the even
P. 2n
partial sums s2n = i=1 ai . We claim that s2n−1 is monotone decreasing because
111
The sequences of odd and even partial sums are monotone and bounded, hence they both
converge. Their limits must be equal because
Example. The Leibnitz result can be useful to study the behaviour of power series at the of
convergence because we get an alternating series at x = −R.
A series is determined by the sequence of its partial sums. So the order in which we sum its
terms is in principle important. Absolutely convergent series have the remarkable property
that their sum is independent of the order of summation, hence their name.
Definition 5.14. A series ∞
P P∞
n=1 bn is a re-arrangement of a series n=1 an if there exists
a bijection φ : N → N such that bn = aφ(n) .
Proof. We proceed in two steps. First we show that the result is true for the re-arrangement
of a series with positive terms. Let {an }∞ n=1 be a sequence with Ppositive terms andP {bn }∞
n=1
be a re-arrangement given by a bijection φ : N → N. Let sn = k=1 ak and tn = nk=1 bk .
n
112
negative, terms of the re-arranged series. Because the original series is absolutely convergent,
s+ −
n and sn converge and have positive terms. It is clear that any re-arrangement of the sequence
{an }∞
n=1 does not alter the sign of its terms and so induces re-arrangements of the sequences of
positive, respectively, negative, terms. From the first part we know that {s+ ∞ + ∞
n }n=1 and {tn }n=1 ,
respectively {s− ∞ − ∞
n }n=1 and {tn }n=1 , converge to the same limit. And so the conclusion because
Conditionally convergent sequences have a remarkable property: they are formed of entwined
divergent series of positive, resp. negative, terms.
Pn Pn
Proof. Let s+n =
−
k=1, ak >0 ak and sn = (−ak ) be the (positive) partial sums of the
k=1, ak <0P
series of positive, respectively negative, terms of ∞ + −
n=1 ak . Recall that sn = sn − sn .We are
going to show that if one of the sequences {s+ ∞ − ∞
n }n=1 or {sn }n=1 converges then the other must
also converge, hence the original series must have been absolutely converging. Suppose that
limn→∞ s+ n = s
+
< ∞. Define s− = s+ − s where s is the sum of the original series. We
evaluate
|s− − s− + − + − + + + +
n | ≤ |s − s − sn | ≤ | − s + sn − sn | + |s − sn | = |s − sn | + |s − sn |.
Proof. The basis of the proof is that the partial sums of the positive and negative numbers
must both diverge as was shown in Lemma 5.16. Then we use the trick shown in the lectures
to show the result for the alternating harmonic series. Suppose c ≥ 0 (the proof follows exactly
the same line if c < 0). We take positive terms of the series until we go over c for the first
time. Then we take negative terms to get for the first time under c. Then we retake positive
terms to go over c again, etc. At each step we go over or under c by a strictly smaller amount,
hence the final convergence to c of the re-arrangement of the series.
2
G.F.D. Riemann (1826-1866) was a German mathematician of great originality. He developed complex
analysis and the relation between geometry and analysis. In his exceptional PhD thesis, he already showed
his style: great insight but somewhat lacking of the necessary rigour. On the other hand, the work of other
mathematicians to fill in the gaps in his ideas proved very fertile in the development of mathematics. One of
the very important conjectures in mathematics is about the structure of the complex zeros of the Riemann’s
ζ function.
113
Example
Conditionally convergent series are difficult objects to manipulate. For instance, let
∞ ∞
(−1)i+1
X X 1 1 1
s = = − − (by re-arrangement)
i=1
i i=1
2i − 1 2(2i − 1) 4i
∞ ∞ ∞
!
(−1)i+1 1 X (−1)i−1
X
X 1 1 1
= − = = = s.
i=1
2(2i − 1) 4i i=1
2i 2 i=1 i 2
Hence s = 0. But s > 0 (in Exercise 5 of Sheet 4a we show that it is ln 2). This contradiction
occurs because we have kept the equal sign with the re-ordering of the series which is not
possible unless the series are absolutely convergent.
114
Summary of Chapter 4
We have seen:
• The (ǫ, N)-definition of a limit and to be able to evaluate N(ǫ) in simple cases;
• To know how to apply the Ratio, Root, Integral and Leibniz Tests for convergence of
series.
• that we must check carefully that it is appropriate to use either of the tests;
• To know the definitions and to understand the difference between absolute and condi-
tional convergence of series.
115
5.4 Exercise Sheet 4
(a) Determine, when it exists, a ‘candidate’ for the limit l for the following sequences
{xn }∞
n=1 .
(b) When such ‘candidate’ limit l exists. Determine an N(ǫ) such that the definition
of the limit of a sequence gives you that |l − xn | < ǫ, for all n > N(ǫ). Recall that
N(ǫ) does not need to be the smallest value (although you might try to have fun
to get a small one).
(c) When such ‘candidate’ limit l does not exists, show that the sequence has no limit
using the definition (more precisely its converse).
√ 5n3 + 3n + 1 sin(n2 + 1)
(a) xn = −n + n2 + 3n. (b) xn = . (c) x n = .
√ √ 15n3 + n2 + 2 n2 + 1
n+2− n+1 7n4 + n2 − 2 n3 + 3n2
(d) xn = √ √ . (e) xn = . (f) xn = − n2.
n+1− n 14n4 + 5n − 4 n+1
3
n+1
(g) xn = − n3.
n
1
2. (a) Consider the sequence {xn }∞
n=1 with xn = . We would like to estimate the
n + n2
exact value of N(ǫ) such that
1
|xn − l| = < ǫ. (5.4)
n + n2
Note that rewriting (5.4), we would like to find N(ǫ) such that, if n > N(ǫ),
1
n2 + n − > 0.
ǫ
Use the analysis of quadratic inequalities you did early in Level 1, to find that
$ r %
1 4+ǫ 2
N(ǫ) = − + = √ √ √ .
2 4ǫ ǫ( 4 + ǫ + ǫ)
a2 + a + a4 + a3 + · · · + a2n + a2n+1 + · · · .
Show that the Root Test applies but not the Ratio Test.
116
∗
Pn 1
5. Let cn = k=1 k − ln n. Show that (c ) is a decreasing sequence of positive numbers.
P2nn
Let γ be its limit. Show that if bn = k=1 (−1)k+1 k1 , then lim bn = ln 2.
n→∞
Similarly, if xn ≤ 0, then l ≤ 0.
7. Using the (ǫ, N)-definition of a limit, prove that the algebra of limit holds. Namely, let
{xn }∞ ∞
n=1 and {yn }n=1 be convergent sequences in R with limits x and y, respectively,
and let α ∈ R. Then,
xn 6 yn 6 zn ,
Using the (ǫ, N)-definition of a limit, show that the ‘intermediate’ sequence {yn }∞
n=1 is
convergent and
lim xn = lim yn = lim zn .
n→∞ n→∞ n→∞
117
(d) l = 1,
(e) l = 1/2,
(f) l = ∞ (divergent),
(g) l = −∞ (divergent).
3. Show that the limit condition means that for any ǫ there exists N(ǫ) such that
4. The coefficient ρ2 in the Root Test has a limit, but ρ in the Ratio Test has not.
This result follows from the following more sophisticated test, but whose proof must be delayed
until next term. Note that our results are valid if the tail of the series start to be alternating
for n ≥ N. Those series form a good source of conditionally convergent series if they are
not absolutely convergent. The Leibnitz Test is the consequence of the more sophisticated
Dirichlet Test.
Abel3 Test.
∞
Proposition 5.18 (Dirichlet Test). Let {bP
n }n=1 be a decreasing sequenceP
such that limn→∞ bn =
0 and {an }n=1 a sequence such that m ≤ i=1 ai ≤ M for all n. Then ∞
∞ n
k ak bk converges.
118
5.4.5 Feedback for Exercise Sheet 4a
Recall that for positive numbers
a a+p a a
≤ , ≤
b b b+p b
a2 − b2
and a − b = .
a+b
1. In that exercise, calculate the limit, when it exists, using the usual tools. Then, estimate
|xn − l| trying to get a simple expression that obviously tends to 0 as n is big. Then
we can find N(ǫ) by estimating how big n should be for the estimate of |xn − l| to be
smaller than ǫ. Note that this is highly not unique. Everybody will usually
have their own expression, but some estimates are correct, some others are
not, this is the difference between being correct or wrong.
The estimate
√
3 3n 3 2 + 3n − n
|xn − l| = − √ = · n √
2 n + n2 + 3n 2 n + n2 + 3n
3 3n 9n 9 2
≤ · √ ≤ 2
= ≤ .
2 (n + n2 + 3n)2 2(2n) 8n n
2
Clearly, |xn − l| < ǫ when n > .
ǫ
1
(b) As in (i), we show that limn→∞ xn = . To find an estimate for N(ǫ),
3
| − n2 + 9n + 1| |n2 − 9n − 1| 1
|xn − l| = 3 2
≤ 3
≤ ,
3(15n + n + 2) 3(15n ) 45n
1
because |n2 − 9n − 1| ≥ 1 when n ≥ 3. And so, |xn − l| < ǫ when n > .
45ǫ
(c) Here, we can estimate directly
sin(n2 + 1)
|xn | ≤ ≤ 1.
n + 1 n2
2
1
This provides that limn→∞ xn = 0 and an estimate for N(ǫ) = √ .
ǫ
(d) Applying the identity a−b = (a2 −b2 )/(a+b) to the numerator and the denominator
gives √ √
n+1+ n
lim xn = lim √ √ = 1.
n→∞ n→∞ n+2+ n+1
119
We then estimate
√ √
| n + 2 − n|
|xn − 1| = √ √
| n + 2 + n + 1|
2 2 1
= √ √ √ √ = √ 2 = ,
| n + 2 + n| · | n + 2 + n + 1| (2 n) 2n
1
and so, a possibility for N(ǫ) = .
2ǫ
(e) Clearly limn→∞ xn = 12 . And so, when n ≥ 2,
2n2 − 5n 2
= 2n ≤ 1 ≤ 1 ,
|xn − l| =
2(14n4 + 15n − 4) 28n4 14n2 9n2
1
and so we can choose N(ǫ) = √ .
3 ǫ
2n2 2n2 2n2
(f) Simplifying xn , we find that xn = . Clearly then, xn = ≥ = n,
n+1 n+1 2n
and so the sequence diverges as xn > M if n > M.
3
n+1
(g) The term is bounded below by 1 and above by 8. This follows from
n
n+1 1
1≤ ≤ 1 + ≤ 2.
n n
And so xn is unbounded, with limn→∞ xn = −∞. Given an arbitrary positive
number M, we can estimate for which n is xn < −M. We have
xn ≤ 8 − n3 < −M
√
3
when n > 8 + M.
2.
3.
an+1
4. The Ratio Test gives = a3 < 1, if n is even, or a−1 > 1, if n is odd. So we cannot
an
conclude.
On the other hand, note that if n is odd, an = an+1 , and if n is even, an = an−1 . The
Root Test gives (an )1/n = a1+1/n if n is even and a1−1/n if n is odd. So the limit of
1
the root (an )1/n as n tend to ∞ is a < 1 in both cases as lim = 0. Thus the series
n→∞ n
converges.
∗
5. That cn > 0 follows from the estimates in Theorem 4.19 and Example 2 after it. That
it is decreasing follows from the following estimate:
1 1 1
cn+1 − cn = − ln(n + 1) + ln n = + ln 1 − .
n+1 n+1 n+1
120
1
Let x = . So as n tends to ∞, x tends to 0+ . Now, in terms of x,
n+1
cn+1 − cn = x + ln(1 − x) = ln(ex (1 − x)).
bn = c2n − cn + ln 2
because
2n n
! !
X 1 X 1
c2n − cn + ln 2 = − ln(2n) − − ln n + ln 2
k k
k=1 k=1
2n n
X 1 1 X
= − − ln 2 − ln n + ln n + ln 2
k k
k=1 k=1
2n n 2n 2n
! !
X 1 X 1 X 1 X 1
= −2 = −2
k 2k k j=1,even
j
k=1 k=1 k=1
2n
X 1
= (−1)k+1
k=1
k
= bn .
Hence,
lim bn = lim (c2n − cn + ln 2) = γ − γ + ln 2 = ln 2.
n→∞ n→∞
6. (a)
(b) We use a proof by contradiction. That is, we start with the assumption that l < 0.
But the elements of the tail of the sequence can be arbitrarily close to l and in
particular this implies that for sufficiently large n, xn < 0. In terms of ǫ’s, we take
ǫ = −x/2 = |l|/2 > 0. The convergence means that there exists an N ∈ N such
that |xn − l| < ǫ, that is, l < xn − l < −l or 2l < xn < 0 for all n > N. This
contradicts xn > 0 for all n ∈ N.
7. We will only prove the first and third assertions. The others are proved in the same
spirit.
Let N1 (ǫ), N2 (ǫ), be such that |xn − x| < ǫ, |yn − y| < ǫ, for n > N1 (ǫ), n > N2 (ǫ),
respectively. For the first assertion we evaluate
121
Then by considering the absolute value, and using the triangle inequality, we have
xn − w 6 yn − w 6 zn − w
−ǫ < xn − w 6 yn − w 6 zn − w < ǫ,
122
Chapter 6
This chapter and the next diverge somewhat from the content of Lectures 18 and 19 by
providing an extensive set of worked examples. Our strategy is to return to the remainder
term for Taylor and Maclaurin polynomials that was introduced much earlier in Section 5. We
derive the precise form of this remainder and are then able to use it for two primary purposes.
• By terminating the Taylor or Maclaurin polynomial at some finite degree, we can esti-
mate the size of the remainder to determine a bound on the accuracy with which the
polynomial, Tna f (x) approximates f (x) at x. By inverting the question, we can then
determine how many terms we need to reach a specified accuracy.
• By examining the limiting behaviour of the remainder as the polynomial degree tends
to infinity we can ascertain whether or not f (x) = limn→∞ Tna f (x). If this limit holds
a
then we will have rigorously proven that f (x) = T∞ f (x).
The second item is the focus of Lectures 18 and 19. It is without a doubt harder to grasp and
so is better presented ‘live’. Once the structure of the remainder term is understood though,
the first item can provide ample opportunity for investigation. The first item is therefore
supported mainly by this document.
Section 6.2 discusses the remainder term and then in Section 6.3 we show how this remainder
can be estimated.
123
Concrete examples on estimating the error in replacing some familiar functions by their
Maclaurin series are then detailed in Subsection 6.3.1, for computing e; Subsection 6.3.2,
for the error in T4 cos(x); and, Subsection 6.3.3, for sin(x)/x.
Furthermore, Section 6.4 discusses the possibility of approximating integrals by replacing
integrands with their Taylor or Maclaurin polynomials.
Once again, though, note that the main points will be covered in the lectures and detailed on
the lecture slides. This document supports that activity but does not replace it.
There is one important lemma that we will make use of in the lectures. The proof is boardwork
for Lecture 19.
Lemma 6.1.
xn
lim =0 ∀x ∈ R.
n→∞ n!
2. Given Tna f and ǫ. On how large an interval I does Tna f achieve that tolerance?
3. Given f , a ∈ I and ǫ. Find how many terms n must be used for Tna f to approximate
f to within ǫ on I.
Having a polynomial approximation that works all along an interval is a much more sub-
stantive property than evaluation at a single point. The Taylor polynomial Tna f (x) is almost
never exactly equal to f (x), but often it is a good approximation, especially if |x − a| is small.
Remark 6.2. It must be noted that there are also other ways to approach the issue of best
approximation by a polynomial on an interval. And beyond worry over approximating
the values of the function, we might also want the values of one or more of the derivatives
to be close, as well. The theory of splines is one approach to approximation which is very
important in practical applications. Also the approximation in the mean can be useful:
given a function f , which polynomial p of degree n minimises the error
Z b
(f (x) − p(x))2 dx?
a
124
To see how good is the approximation Tna f of f , we defined the ‘error term’ or, ‘remainder
term’.
Example. If f (x) = sin x then we have found that T3 f (x) = x − 16 x3, so that
xn+1
Rn f (x) = f (x) − Tn f (x) = .
1−x
Our main theorem is as follows. It gives a formula for the remainder, formula we will be able
to use for estimates later.
Proof. Suppose a < x, otherwise interchange them. The full proof is not significantly harder,
but we are going to show the previous result for n = 1 only, that is,
f ′′ (ξ)
R2a f (x) = (x − a)2,
2!
125
for some a < ξ < x. Define the function in t, F : [a, x] → R by
Then G(a) = G(x) = 0. Applying Rolle’s Theorem, there exists a < ξ < x such that
(x − ξ)
0 = G′ (ξ) = F ′ (ξ) + 2F (a) .
(x − a)2
Hence,
(x − a)2 ′ f ′′ (ξ)
F (a) = − F (ξ) = (x − a)2.
2(x − ξ) 2
The formula (6.3) is called the Lagrange Remainder Formula. Even though you usually
cannot compute the mystery point ξ precisely, Lagrange’s formula allows you to estimate it.
A couple of example of a type already done.
√
Example. Approximate
√ x by a Taylor polynomial of degree 2 at x = 4. Use this polynomial
to approximate 4.1 and use Taylor’s Theorem to estimate the error in this approximation.
√ 1 1 3
Let f : (0, ∞) → R, f (x) = x. Then f ′ (x) = √ , f ′′ (x) = − 3/2 and f ′′′ (x) = .
2 x 4x 8x5/2
The Taylor polynomial, T24 f (x), is
1 1 1
T24 f (x) = f (4) + f ′ (4)(x − 4) + f ′′ (4)(x − 4)2 = 2 + (x − 4) − (x − 4)2.
2 4 64
On substituting x = 4.1,
√ 1 1
4.1 ≈ 2 + (0.1) − (0.1)2 = 2.02484375.
4 64
Taylor’s Theorem gives
M(4.1 − 4)3
|R24 f (4.1)| ≤ ,
3!
3
where M is the absolute maximum of |f (3) (x)| on [4, 4.1]. Now f (3) (x) = x−5/2 . This is
8
(3) 3 −5/2
a decreasing positive function so the absolute maximum of |f (x)| = x occurs when
8
3 3
x = 4. Hence, M = = and it follows that
8 · 45/2 256
(3/256)(0.1)3
|R24 f (4.1)| ≤ ≈ 0.000002.
3!
√
Thus, our approximation to 4.1 differs from the exact value by 0.000002 or less!
126
3.5
2.5
1.5
0.5
0
-2 0 2 4 6 8 10
-0.5
-1
√
The graph shows √ x in red and T24 f in green and shows clearly that the Taylor polynomial
T24 f approximates x extremely well but only for a certain range of x.
127
6.3 Estimates Using Taylor Polynomial
Here is the most common way to estimate the remainder.
M|x − a|n+1
|Rna f (x)| ≤ . (6.5)
(n + 1)!
Proof. We do not know what ξ is in Lagrange’s formula, but it doesn’t matter, for wherever
it is, it must lie between a and x so that our assumption (6.4) implies that |f (n+1) (ξ)| ≤ M.
Put that in Lagrange’s formula and you get the stated inequality.
1. Note, usually we will find M by finding the absolute max and min of f (n+1) (x) on the
interval I. Sometimes, however, we can find a value for M without calculating absolute
max’s and min’s. For example, if f (x) equals sin(x), then we can always take M = 1.
2. Note that this theorem gives some idea why Maclaurin approximations get better by
M
using more terms. As n gets bigger, the fraction (n+1)! will often get smaller. Why?
M
Because (n + 1)! gets really big. Is it true then that (n+1)! always get smaller for very
large n? Well, to be rigorous, M might also increase with n. But those cases are rare
for functions we deal with in general.
3. Note, we are often interested in bounding |Rna f (x)| on an interval. In such a case we
replace |x − a|n+1 by its absolute max on the interval. In other words, if the interval is
I = [c, d], we shall replace |x − a|n+1 by |c − a|n+1 or |d − a|n+1 , whichever is bigger.
f (9) (ξ) 9 eξ
R8 (1) = 1 = .
9! 9!
128
We do not really know where ξ is, but, since it lies between 0 and 1, we know that 1 < eξ < e.
So, the remainder term R8 (1) is positive and no more than e/9!. Estimating e < 3, we find
1 3
< R8 (1) < ≈ (120000)−1.
9! 9!
Note that we have been able to estimate a lower bound and upper bound for R8 f (1).
Thus we see that
1 1 1 1 1 1 1 1 1 1 1 3
1+ + + +···+ + + < e < 1+ + + +···+ + +
1! 2! 3! 7! 8! 9! 1! 2! 3! 7! 8! 9!
or, in decimals,
2.718 281 . . . < e < 2.718 287 . . .
129
6.3.3 Error in the Approximation sin x ≈ x
In many calculations involving sin x for small values of x one makes the simplifying approxi-
mation sin x ≈ x, justified by the known limit
sin x
lim = 1.
x→0 x
How big is the error in this approximation?
To answer this question, we use Lagrange’s formula for the remainder term. Let f : R → R,
f (x) = sin x. Its Maclaurin polynomial of order 1 is
T1 f (x) = x,
of remainder
f ′′ (ξ) 2
R1 f (x) = x = − 21 sin ξ · x2
2!
for some ξ between 0 and x. As always with Lagrange’s remainder term, we don’t know where
ξ is precisely, so we estimate the remainder term. As we did several time before, | sin ξ| ≤ 1,
∀ξ ∈ R. Hence, the remainder term is bounded by
Example. Now we could fix the maximum error we are seeking and ask how small must
we choose x to be sure that the approximation sin x ≈ x is not off by more than,
say, 1%? If we want the error to be less than 1% of the estimate (it is a relative error, then
we should require 12 x2 to be less than 1% of |x|, that is |x| < 0.02.
Recall that because T1 f = T2 f , the error term R1 f can be estimated using R2 f , that is
Now, is that better? Think about it. It is actually better for ‘small’ x, but worse for ‘large’
x.
130
6.4 Estimating Integrals
Thinking simultaneously about the difficulty (or impossibility) of ‘direct’ symbolic integration
of complicated expressions, by contrast to the ease of integration of polynomials, we might
hope to get some mileage out of integrating and differentiating Taylor polynomials.
For x < 1, let F (x) = − ln(1 − x). Putting the previous results together (and changing the
variable back to ‘x’), we get the ‘formula’:
x2 xn+1
Tn F (x) = x + + ...+ .
2 (n + 1)
For the moment we do not worry about what happens to the error term for the Taylor
polynomial. This little computation has several useful interpretations.
1. First, we obtained a Taylor polynomial for − ln(1 − T ) from that of the geometric series
in (6.10), without going to the trouble of recomputing derivatives.
Being a little more careful, we would like to keep track of the error term in the example so
we could be more sure of what we claim. We have
1 1 1
= 1 + x + x2 + . . . + xn + xn+1
1−x (n + 1) (1 − ξ)n+1
131
for some ξ between 0 and x (and also depending upon x and n). One way to avoid having
1
the term to ‘blow up’ on us, is to keep x itself in the range [0, 1) so that ξ is in the
(1 − ξ)n+1
range [0, x) ⊆ [0, 1), keeping ξ away from 1. To do this we might demand that 0 6 T < 1.
For simplicity, and to illustrate the point, we take 0 6 T 6 21 . Then, in the worst-case
scenario,
1 1
| |6 = 2n+1.
(1 − ξ) n+1 (1 − 21 )n+1
Thus, integrating the error term, we have
Z T Z T n+1 Z T
1 1 n+1 2 n+1 2n+1
| n+1
x dx| 6 x dx = xn+1 dx
0 n + 1 (1 − ξ) 0 n+1 n+1 0
n+1
n+2 T
2 x 2n+1 T n+2
= = .
n+1 n+2 0 (n + 1)(n + 2)
Since we have required 0 6 T 6 12 , we actually have
2n+1 ( 21 )n+2
Z T
1 1 n+1 2n+1 T n+2 1
| n+1
x dx| 6 6 = .
0 n + 1 (1 − c) (n + 1)(n + 2) (n + 1)(n + 2) 2(n + 1)(n + 2)
That is, we have
T2 Tn 1
| − log(1 − T ) − [T + + ...+ ]| 6
2 n 2(n + 1)(n + 2)
for all T in the interval [0, 12 ]. Actually, we had obtained
T2 Tn 2n+1 T n+2
| − log(1 − T ) − [T + + ...+ ]| 6
2 n (n + 1)(n + 2)
and the latter expression shrinks rapidly as T approaches 0.
Example. Let f : R → R given by f (x) = cos x. We use T3a f about a = π/2 to estimate
Z π
cos x dx,
π/3
and compare the estimation with the actual answer. The polynomial is
π/2 (x − π/2)3
T3 f (x) = −(x − π/2) + .
3!
We approximate the integral by
Z π Z π
(x − π/2)3 h (x − π/2)2 (x − π/2)4 iπ
cos x dx ≈ −(x − π/2) + dx = − + ≈ −0.846085.
π/3 π/3 3! 2 24 π/3
132
The approximate answer is quite close to the exact answer. But could we guarantee this? The
π/2 f (4) (ξ)
error could be evaluated from the integral of the remainder term R3 f = (x − π/2)4 .
4!
Because each derivative of cos is bounded by 1 (determine why?), we get
π π
|(x − π/2)|4 [(x − π/2)5]ππ/3 244π 5
Z Z
π/2
|R3 f (x)| dx ≤ dx ≤ ≤ ≈ 0.08.
π/3 π/3 4! 5! 933120
We see that our predicted error is about four times larger than the real value. This is not a
contraction, but we could have been smarter. Note that because cos an odd function about
π/2 π/2 π/2
x = π/2, T4 f (x) = T3 f (x), and so we could use the remainder R4 f (x). In that case,
we get the estimate
We now give an alternative Taylor estimate using an integral formulation for the error.
Theorem 6.6 (Taylor’s Theorem Revisited). Let f : [a, b] → R be n + 1 times differentiable
on [a, b]. Let Rna f (x) = f (x) − Tna f (x) for x ∈ [a, b], then,
Z x f (n+1) (t) M|x − a|n+1
a n
|Rn f (x)| = (x − t) dt ≤ , (6.11)
a n! (n + 1)!
We claim that
f (n+1) (t)
S ′ (t) = − (x − t)n . (6.12)
n!
We have
d f (k) (t) f (k+1) (t) f (k) (t)
(x − t)k = (x − t)k − k(x − t)k−1 .
dt k! k! k!
133
So,
n n
′
X f (k+1) (t) k
X f (k) (t)
S (t) = − (x − t) + k(x − t)k−1
k=0
k! k=0
k!
n n
X f (k+1) (t) k
X f (k) (t)
=− (x − t) + (x − t)k−1 .
k=0
k! k=1
(k − 1)!
Now we work with the second sum and manipulate it so that the index begins at 0. Set
j = k − 1. As k runs from 1 to n, j runs from 0 to n − 1. Substituting k = 1 + j gives
n n−1 (j+1)
′
X f (k+1) (t) k
X f (t)
S (t) = − (x − t) + (x − t)j.
k=0
k! j=0
j!
But now these two sums are the same except that the first one has an extra term. Consequently
most of the terms cancel and we are left with (6.12).
Because
n n
X f (k) (t) k (0)
X f (k) (t)
S(t) = f (x) − (x − t) = f (x) − f (t) − (x − t)k .
k=0
k! k=1
k!
Hence,
n
X f (k) (x)
S(x) = f (x) − f (x) − (x − x)k = 0,
k!
k=1
134
6.5 Extra curricular material: Estimating n for Taylor
Polynomials
In this section we look at the issue of determining n in our problems.
Example. Find n such that the Maclaurin polynomial Tn f (0.5) for f (x) = sin(x) would
approximate sin(0.5) with an error less than 10−5.
using the now familiar estimates for f , the error terms are bounded by
|x|n+1
|Rn f (x)| = .
(n + 1)!
Therefore we are looking to find n such that
1
(1/2)n+1 6 10−5.
(n + 1)!
Now, this is an inequality for n, that cannot be solved exactly. So, we have to guess and
check, more or less cleverly. We check only even values for n since there are no even terms in
sin(x)!
1
n=4: 5!
(.5)5 = 2.6 × 10−4,
1
n=6: 7!
(.5)7 = 1.55 × 10−6.
Thus T5 f (0.5) gives an error which is small enough.
Example. For example, let’s get a Taylor polynomial approximation to f (x) = ex which is
within 0.001 on the interval [− 12 , + 21 ]. Recall that
x2 x3 xn ec
Tn f (x) = 1 + x + + + ...+ + xn+1
2! 3! n! (n + 1)!
for some ξ between 0 and x, and where we do not yet know what we want n to be. It is very
convenient here that the n-th derivative of ex is still just ex ! We are wanting to choose n
large-enough to guarantee that
eξ
| xn+1 | 6 0.001
(n + 1)!
for all x in that interval.
The error term is estimated as follows, by thinking about the worst-case scenario for the
sizes of the parts of that term: we know that the exponential function is increasing along the
whole real line, so in any event ξ lies in [− 21 , + 12 ] and
|eξ | 6 e1/2 6 2
135
(where we’ve not been too fussy about being accurate about how big the square root of e is!).
And for x in that interval we know that
1
|xn+1 | 6 ( )n+1
2
So we are wanting to choose n large-enough to guarantee that
eξ 1
| ( )n+1 | 6 0.001.
(n + 1)! 2
Since
eξ 1 2 1
| ( )n+1 | 6 ( )n+1 .
(n + 1)! 2 (n + 1)! 2
we can be confident of the desired inequality if we can be sure that
2 1
( )n+1 6 0.001.
(n + 1)! 2
There is no genuine formulaic way to ‘solve’ for n to accomplish this. Rather, we just evaluate
the left-hand side of the desired inequality for larger and larger values of n until (hopefully!)
we get something smaller than 0.001. So, trying n = 3, the expression is
2 1 1
( )3+1 =
(3 + 1)! 2 12 · 16
x2 x3 x4
1+x+ + +
2 3 4
approximates ex to within 0.00052 for x in the interval [− 12 , 12 ].
Yes, such questions can easily become very difficult. And, as a reminder, there is no real or
genuine claim that this kind of approach to polynomial approximation is ‘the best’. It will
depend on what we want to do.
136
Summary of Chapter 5
We have seen:
• an example of a series where the terms tend to zero but the series diverges;
• that we must check carefully that it is appropriate to use either of the tests.
137
6.6 Exercise Sheet 5
2. (Computing the cube root of 9)√The cube root of 8√ is 2, and 9 is ‘only’ one more than 8.
So you could try to compute 3 9 by viewing it as 3 8 + 1.
√ √
(a) Let f (x) = 3 8 + x. Find T2 f , and estimate the error | 3 9 − T2 f (1)|.
(b) Repeat part (i) for T3 f and compare.
√
3. Follow the method of Question 2 to compute 10 within an error of 10−3 using a
polynomial approximation.
1 √
4. For what range of x > 25 does 5 + (x − 25) approximate x to within 0.001?
10
5. (a) Let f (t) = sin t. Compute T2 f and give an upper bound for R2 f for t ∈ [0, 0.5].
R 0.5
(b) Use part (i) to approximate 0 sin(x2 ) dx, and give an upper bound for the error
in your approximation.
138
6.6.2 Exercise Sheet 5b
x3
1. For what range of values of x is x − within 0.01 of sin x?
6
2. Only consider −1 ≤ x ≤ 1. For what range of values of x inside this interval is the
polynomial 1 + x + x2 /2 within .01 of ex ?
6. Determine how many terms are needed in order to have the corresponding Taylor poly-
nomial approximate f (x) = cos x to within the tolerance ǫ on the interval [a, b] for the
following cases:
9. Find the 4-th degree Maclaurin polynomial for the function f (x) = sin x. Estimate the
error for |x| < 1.
(a) Find the fourth degree Maclaurin polynomial for f and estimate the error for
|x| < 1.
(b) Find the eighth degree Maclaurin polynomial for f and estimate the error for
|x| < 1.
(c) Now, find the ninth degree Maclaurin polynomial for f . What is the error for
|x| ≤ 1?
139
6.6.3 Miscellaneous Exercises
Question
1. If
Tn f (x) = a0 + a1 x + a2 x2 + · · · + an xn
is the Taylor polynomial of a function y = f (x), then what is the Taylor polynomial of
its derivative f ′ (x)?
In other words, “the Taylor polynomial of the derivative is the derivative of the Taylor
polynomial.”
Proof. Let g(x) = f ′ (x). Then g (k) (0) = f (k+1) (0), so that
x2 xn−1
Tn−1 g(x) = g(0) + g ′ (0)x + g (2) (0) + · · · + g (n−1) (0)
2! (n − 1)!
2
x xn−1
= f ′ (0) + f (2) (0)x + f (3) (0) + · · · + f (n) (0) ($)
2! (n − 1)!
In other words,
f (3) (0)
1 · a1 = f ′ (0), 2a2 = f (2) (0), 3a3 = , etc.
2!
So, continuing from ($) you find that
as claimed.
140
6
A bound on the error is = 0.05. We see that the actual error is much smaller than
120
this.
(full) To compute the Taylor polynomial of degree 4 and a bound on the remainder, we
will need the first 5 derivatives of x sin x. Let f (x) = x sin x. Then
f (1) (x) = sin x + x cos x, f (2) (x) = 2 cos x − x sin x, f (3) (x) = −3 sin x − x cos x,
f (4) (x) = −4 cos x + x sin x, f (5) (x) = 5 sin x + x cos x.
141
M
The bound on the error given by Taylor’s Theorem is (0.5)4 where M is the absolute
4!
maximum of |f (4) (t)| for t ∈ [0, 0.5]. Since f (4) is increasing on [0, ∞), we have M =
32.101
f (4) (0.5) ≈ 32.101. So a bound on the error is (0.5)4 ≈ 0.084. Of course, the
24
actual error is no larger than this.
Exercise 6.8. For the following functions, find an approximation to f by a Taylor polyno-
mial of degree n about a number a. Use Taylor’s Theorem to estimate the accuracy of the
approximation and compare with the exact value.
Exercise 6.9. Determine the nth degree Taylor polynomial for ex about x = 0 and write down
an expression for the remainder.
1
Use this to determine n so that the nth degree Taylor polynomial for ex evaluated at x =
√ 2
gives an approximation to e for which the error is at most 0.0001.
Compute the approximation and check with the correct answer.
142
Short Feedback for Exercise Sheet 5a
4. 25 ≤ x ≤ 26.
t3
5. (a) T2 f (t) = t and |R2 f (t)| ≤ for t ∈ [0, 0.5].
6
1
(b) The approximation of the integral is with an error of at most 1/5376 < 2 · 10−4.
24
t2 √ (t − 0.5)2
1/2
6. (a) T2 f (t) = 1 + t + , T2 f (t) = e 1 + (t − 0.5) + and T21 f (t) =
2 2
(t − 1)2
e 1 + (t − 1) + .
2
(b) Integrating over the interval [0, 1] the polynomial approximations, we find 1.4333,
1.4701 and the worse one 1.631.
(c) With T5 f , the estimate is 1.4625 with error less than 3 · 10−4.
7. (a) T2 f (t) = t, the integral is approximated by 5 · 10−3 and the error is less than
3.5 · 10−4.
(b) Nothing to do, the previous calculation is OK.
143
6.6.4 Feedback for Exercise Sheet 5a
r8
1. The error on [−r, r] is bounded by .
8!
1
(a) Here, r = 1/10, and so the error is smaller than ≤ 2.5 10−13.
108 8!
1
(b) Here, r = 1, and so the error is smaller than ≤ 2.5 10−5.
8!
π8
(c) Here, r = π/2, and so the error is smaller than 8 ≤ 10−3.
2 8!
1 1
2. (a) The polynomial is T2 f (x) = 2 + x − x2. Thus, T2 f (1) ≈ 2.07986111. The
12 9 · 32
error is
√ 10 − 8 1
· 8 3 · < 2.5 · 10−4.
3
| 9 − T2 f (1)| ≤
27 3!
√3
(b) The 9 according to a computer is:
√
3
9 ≈ 2.08008382305.
√ √
3. (a) Use Taylor’s formula with f (x) = 9 + x, n = 1, to calculate 10 approximately.
The error is less than 1/216.
(b) Repeat with n = 2. The error is less than 0.0003.
4. One can easily check that the question ask for the larger
√ range 25 ≤ x ≤ c for which
T125 f approximate f within 0.001. Indeed, for f (x) = x, taking a = 25, we have
√ 1 1 1
T125 f (x) = f (a) + f ′ (a)(x − a) = 25 + √ (x − 25) = 5 + (x − 25).
2 25 10
f ′′ (ξ)
The corresponding remainder term is (x − a)2 for some ξ between a and x. Ex-
2!
plicitly we have
11 1 1 1
R125 f (x) = − 3/2
(x − 25)2 = − 3/2 (x − 25)2,
2! 4 (c) 8ξ
So, in the worst-case scenario, the value of ξ −3/2 is at most 25−3/2 = 1/125. Taking
the absolute value for the remainder term (in order to talk about error),
1 1 1 1
|R1 f 25 f (x)| = | 3/2
(x − 25)2| ≤ | (x − 25)2 |.
8ξ 8 125
144
To give a range of values of x for which we can be sure that the error is never larger
than 0.001 based upon our estimate, we solve the inequality
1 1
| (x − 25)2 | 6 0.001
8 125
(with x > 25). Multiplying out by the denominator of 8 · 125 gives
|x − 25|2 6 1
f (3) (ζ) 3
sin(t) − p(t) = t
3!
When f (t) = sin(t), |f (n) (ζ)| ≤ 1 for any n and ζ. Consequently,
t3
| sin(t) − p(t)| ≤
3!
for nonnegative t.
(b) Hence
Z 1 Z 1 Z 1
2 2
2
2 2
sin(x ) dx − p(x ) dx ≤ | sin(x2 ) − p(x2 )| dx
0 0 0
Z 1 6
2 x (1/2)7
≤ dx = =ǫ
0 3! 3! 7
1
(1/2)3
Z
2
Since p(x2 ) dx = = A (the approximate value) we have that
0 3
Z 1
2
A−ǫ≤ sin(x2 ) dx ≤ A + ǫ
0
t2 1/2
6. (a) Because all the derivatives of f are equal to f , T2 f (t) = 1 + t + , T2 f (t) =
2
√ (t − 0.5)2 (t − 1)2
1
e 1 + (t − 0.5) + and T2 f (t) = e 1 + (t − 1) + .
2 2
x4 √ 5 x2 x4
x2 2
(b) The respective polynomial estimates for e are 1+x + , e + + and
2 8 2 2
e
(1+x4 ), respectively. Integrating over the interval [0, 1] the polynomial estimates,
2
43
we find = 1.4333, 1.4701 and the worse one 1.631.
30
145
R1
(c) Note: You need not find p(t) or the integral 0
p(x2 ) dx. The error for T5 f is
3
smaller than .
6! · 13
7. (a)
(b)
8.
9.
1.
x2 x4
T4 f (x) = 1 − + .
2! 4!
The error is
f (5) (ξ) 5 (− sin ξ) 5
R4 f (x) = x = x
5! 5!
for some ξ between 0 and x. As | sin ξ| 6 1, we have
|x5 | 1
|R4 (x)| 6 <
5! 5!
for |x| < 1. Remark that since the fourth and fifth order Maclaurin polynomial for f are the
same, R4 (x) = R5 (x). It follows that 6!1 is also an upperbound.
146
Chapter 7
When a = 0, we denote the Taylor series by T∞ f and call it the Maclaurin series at a
point a of f .
147
Another function whose Maclaurin polynomial/series you should know is f (x) = (1 + x)α ,
where α ∈ R is a constant. We have already seen it at Level 1. You can compute Tn f (x)
directly from the definition, and, when you do this, you find
binomial coefficients are just the numbers in Pascal’s triangle. When α = −1, the binomial
formula is the Geometric Series.
a
1. What sense can we make of the series T∞ f?
a
2. What is the relation between the series T∞ f and the limit of Tna f as n → ∞?
3. When they exist, what are the properties of the series and the limit?
To approach those questions, we first remind you of the error estimates for Taylor polynomials.
Recall that, for x ∈ [a, b], we call Rna (x) = f (x) − Tna f (x), the remainder of order n at
x = a. We had the Lagrange formula for the remainder when f has n + 1 continuous
derivatives:
f (n+1) (ξ)
Rna f (x) = (x − a)n+1
(n + 1)!
for some ξ between a and x (ξ depending on x). We also derived an alternative formula:
x
f (n+1) (t)
Z
Rna (x) = (x − t)n dt.
a n!
When we work for many x simultaneously, the following estimate for the remainder will prove
useful:
a M|b − a|n+1
Rn (x) ≤ , ∀x ∈ [a, b],
(n + 1)!
where |f (n+1) (x)| ≤ M for x ∈ [a, b].
148
Convergence May or May Not Occur
We can now look at an explicit example to see that our initial questions are issues that are
not necessarily easy to ignore.
Example. ConsiderP f : nR\{1} → R, f (x) = 1/(1 − x). From its definition, the Maclaurin
expansion of f is ∞ n=0 x and, from the theory of convergence of series, such series converges
for |x| < 1. From the formula for sum of geometric progressions we have
1 1 − xn+1 + xn+1
f (x) = =
1−x 1−x
xn+1 xn+1
= 1 + x + x2 + · · · + xn + = Tn f (x) + .
1−x 1−x
The remainder term is
xn+1
Rn f (x) = ,
1−x
and, when |x| < 1, we have
1. the Maclaurin series of f converges for all −1 < x < 1, but not for |x| ≥ 1, even if the
function f is defined for any other point but x = 1,
So, for x ∈ (−1, 1), we can ignore the Taylor series notation and write with confidence
∞
1 X
= xn.
1 − x n=0
a
1. For which x ∈ I does T∞ f (x) exist (i.e. converge as a series in n, x fixed)?
a
2. For which x ∈ I does T∞ f (x) converge to f (x)?
149
Remarks 7.2. 1. Examples later on will show that we can get converging Taylor series
expansion whose sum is not equal to the original function. However this is a fairly rare
occurrence for our examples and exercises.
2. At x = a, the Taylor series is always equal to f (a).
3. For many functions, we shall see that their Taylor series converges to f (x) for x in some
interval a − R < x < a + R around a.
Definition 7.3. We shall then say that a function f has a valid power series expansion
about a if limn→∞ Rna (x) = 0 for x in a neighbourhood of a, that is, when T∞
a
(x) = f (x).
There is a simple example showing that everything we can hope for can indeed occur.
Proposition 7.4. The exponential function f (x) = ex has a valid Maclaurin expansion,
converging for all x ∈ R.
∞
x
X xn
Proof. First, remember that T∞ e = . To prove that this really is a power series
n=0
n!
expansion of ex , valid for all x, we need to prove that lim Rn (x) = 0. Our aim will be to use
n→∞
the Sandwich Theorem to show that
lim |Rn (x)| = 0.
n→∞
We know that 0 ≤ |Rn (x)| so all we need to do is find an upper bound for |Rn (x)|.
Taylor’s Theorem says that
Mxn+1
|Rn (x)| ≤ ,
(n + 1)!
where M is the absolute maximum of |f (n+1) (t)| on the interval [0, x] or [x, 0] (depending on
the sign of x). Now, f (n+1) (t) = et ≤ e|t| , so the absolute maximum of e|t| on the interval [0, x]
e|x| xn+1
or [x, 0] is e|x| . Therefore |Rn (x)| ≤ .
(n + 1)!
xn+1 n xn+1 o∞
We claim that lim = 0. The quotient of successive terms in the sequence
n→∞ (n + 1)! (n + 1)! n=0
x
is , which means that the sequence in n is initially increasing, then its tail is monoton-
n+1
ically decreasing for n > x. The sequence is bounded below, so it is a simple exercise to show
that its limit must be 0 (see Exercise 4 of Sheet 6b). Hence,
e|x| xn+1
lim = lim e|x| lim xn+1 (n + 1)! = e|x| · 0 = 0.
n→∞ (n + 1)! n→∞ n→∞
Because f = f ′ =
R
f on R when f is the exponential function, we get the following result.
Corollary 7.5. Let f : R → R, defined as f (x) = ex. Then T∞ f , T∞ f ′ and T∞ ( f ) are
R
all valid on R and we obtain each of them from the other by differentiating or integrating the
relevant series term per term.
150
7.2 Lecture 20: Power Series, Radius of Convergence
Taylor or MacLaurin series belong to a special type of series called power series. These are
incredibly important in mathematics, both theoretically and for doing numerical calculations.
where {cn }∞
n=0 is a sequence of real numbers.
We think of a and c0 , . . . P
, cn . . . as being constants and x as a variable. So, a power series is
a function rule f (x) = ∞ n=0 cn (x − a) .
n
Remark 7.7. Often we work with the special case where a = 0, i.e. we use power series of
the form
∞
X
cn xn.
n=0
To define a proper function, we need to define the domain of the power series. The natural
domain of a power series is the set of values of x for which f (x) converges. We can use
the Root or the Ratio Tests to check such convergence.
For the geometric series ∞ n
P
n=0 x , it was easy to find the domain and it had a simple answer:
(−1, 1). The following theorem shows that the domain of a power series, or equivalently the
values of x for which it converges, always has this form. Before, let start with a definition and
a lemma to indicate that the definition makes sense.
∞
X
Definition 7.8. Let cn (x − a)n be a power series and let
n=0
cn+1
ρ1 = lim , and ρ2 = lim |cn |1/n. (7.4)
n→∞ cn n→∞
2. R = 0 when ρ2 = ∞,
3. R = ∞ when ρ2 = 0.
The radius of convergence is defined from ρ2 that comes from the Root Test because it applies
to slightly more cases (see Exercise 10 of Sheet 6a), but they are in general equal.
151
Then, the domain of f is determined by its radius of convergence.
∞
X
Theorem 7.10 (Pointwise convergence of a power series). Given a power series cn (x − a)n,
n=0
either:
1. the series converges for all x when R = ∞;
2. the series converges just for x = a when R = 0;
3. the series converges for all x with |x − a| < R and we need to check what happens for
x = a + R and x = a − R.
Proof. Apply the Root Test to the power series. We need that ρ2 · |x| < 1 for convergence or
ρ2 · |x| > 1 for divergence. The proof is completed in Exercise 8 of Sheet 6a.
Examples.
∞
X (x − 1)n
1. To find the radius of convergence of the power series , we need to look at
n=1
n2n
1
cn = . We evaluate
n2n
1 1
ρ2 = lim |cn |1/n = lim n−1/n = ,
n→∞ 2 n→∞ 2
and so the radius of convergence is R = 1/ρ2 = 2. To test Lemma 7.9,
|cn+1 | n2n (n) 1
ρ1 = lim = lim n+1
= lim = = ρ2 .
n→∞ |cn | n→∞ (n + 1)2 n→∞ 2(n + 1) 2
So, the power series converges when |x − 1| < 2 and diverges when |x − 1| > 2. To
see what happens at the radius of convergence, we replace x = 3 and x = −1 in the
∞
X 1
series. In the first case, we get the divergent harmonic series , in the second
n=1
n
∞
X (−1)n
case alternating (convergent) harmonic series . Therefore the series is
n=1
n
well defined on [−1, 3).
∞
X xn
2. As we know, the power series converges for all x ∈ R. Indeed, its radius of
n=0
n!
convergence is R = ∞ because
|cn+1 | n! 1
ρ1 = lim = lim = lim = 0.
n→∞ |cn | n→∞ (n + 1)! n→∞ n + 1
The coefficient cn tends so fast to infinity that no small (but non zero) amount of x can
keep the series converging.
152
7.2.1 Behaviour at the Boundary of the Interval of Convergence
When 0 < R < ∞ exists, the power series converges when |x − a| < R and diverges when
|x − a| > R. When |x − a| = R, that is, x = a ± R, anything can happen. It needs to be looked
X∞
at individually. The power series becomes (−1)n cn Rn. In particular it is quite clear that
n=1
the theory of alternating series is important there. We shall see in the next example (7.5)
thereafter that any behaviour can be found at the boundaries of the interval of convergence.
|cn+1 | np 2n 1 1
ρ1 = lim = lim p n+1
= lim (1 + 1/n)p = ,
n→∞ |cn | n→∞ (n + 1) 2 2 n→∞ 2
and so R = 2 (independently of p). What happens at the boundary of the interval of conver-
gence? This will depend on p. We get the series
∞
X 1
n=1
np
at x = 3, and
∞
X (−1)n
n=1
np
at x = −1. And so, mostly using the Integral Test,
1. when p > 1, we have convergence on both sides of the interval, the power series converges
on [−1, 3] and diverges outside;
3. when p ≤ 0, convergence does not occur at the boundary. The power series converges
on (−1, 3) and diverges outside.
153
7.2.2 Elementary Calculus of Taylor Series
Inside the radius of convergence, we shall see that we can differentiate and integrate a power
series term per term. But first, we look at simpler properties of Taylor series.
Lemma 7.11.
P Let f : In → R be an infinitely differentiable function with Taylor series
a
T∞ f (x) = ∞
n=0 cn (x − a) about a ∈ I.
1. The sum and product of Taylor series about x = a are the Taylor series of the sum and
product functions.
a
2. For any k ∈ N, the Taylor series T∞ g of g(x) = (x − a)k f (x) about a ∈ I is given by
∞
X
a
T∞ g(x) = (x − a)k T∞
a
f (x) = cn (x − a)n+k.
n=0
4. The Taylor series of the derivative f ′ of f is given by the series of term per term
derivatives of the Taylor series of f and has the same radius of convergence.
1. This follows from the similar properties for the Taylor polynomials about x = a.
2. This is true by computing the derivatives of (x − a)k f (x) at x = a.
3. This is true by a simple substitution.
4. By definition of T∞ f ′ (x),
∞ ∞
f (n+1)
(k)
′
X
n
X d f k
T∞ f (x) = (x − a) = (x − a) .
n=0
n! k=0
dx k!
∞
X (−1)n
Examples. The Maclaurin series of cosine is T∞ cos(x) = x2n . The Maclaurin
n=0
(2n)!
series for f1 (x) = cos(2x), f2 (x) = x cos x and f3 (x) = cos(4x2 ), respectively, are
∞ ∞ ∞
X 22n 2n X (−1)n X 24n 4n
(−1)n x , x2n+1, (−1)n x .
n=0
(2n)! n=0
(2n)! n=0
(2n)!
154
7.3 Lecture 21: More on Power and Taylor Series
in this lecture we are concerned with the ‘inverse’ problem for Taylor series. Given a power
series, what sort of function does it define?
Remark 7.13. We shall see later that if f has a power series expansion about x = a then the
power series is unique and must be its Taylor series (7.1), or, in the special case where
a = 0, its MacLaurin series.
Earlier on we saw that if we started with a known power series, e.g. the geometric series, then
if we could compute the sum, we could write down a simpler function to which it is equal.
On the other hand, suppose we have a function and want to try to write it as a power series.
How do we find the right coefficients of the power series in order to do this?
X∞
Suppose we know that f (x) = cn (x − a)n but we do not know the numbers c0 , c1 , . . . cn , . . ..
n=0
Together, the following two theorems help us to compute them.
∞
X
Theorem 7.14 (Differentiating a power series). Suppose that f (x) = cn (x − a)n is a power
n=0
series with radius of convergence R. If |x − a| < R then
∞
X
′
f (x) = ncn (x − a)n−1.
n=0
So given a power series for f (x) we can compute a related power series for f ′ (x).
155
Perhaps this theorem looks like it is not saying very much. One way to look at it, is that it
says
∞ ∞
d X n
X d n
cn (x − a) = cn (x − a) .
dx n=0 n=0
dx
So we are interchanging the order of summation and differentiation. If a sum is finite, we
know from the basic rules of differentiation that, to differentiate the sum, we can differentiate
each term separately and add up the answers. So this switch would be valid for a finite sum,
i.e.
k k
d X n
X
cn (x − a) = ncn (x − a)n−1.
dx n=0 n=0
However, we have an infinite sum, and infinite sums are much more complicated than finite
sums. As we have already seen, results about finite sums e.g commutativity, do not necessarily
carry over to infinite sums. So, this theorem is really saying quite a lot.
Both an infinite sum and differentiation are defined in terms of limits. So the theorem is
saying that we can switch around the order of these limits without affecting the answer. This
sort of thing requires careful justification and proof.
∞
X
Theorem 7.15. Suppose that f (x) = cn (x − a)n is a power series with radius of conver-
n=0
gence R > 0. Then,
f (n) (a)
cn = .
n!
Proof. We will prove this by first proving by induction that, providing |x − a| < R, we have
∞
X
(k)
f (x) = cn+k (n + 1) · · · (n + k)(x − a)n .
n=0
Clearly this is true when k = 0, because the statement when k = 0 is just the definition of f .
Suppose that the result is true whenever k ≤ j. Then providing |x − a| < R, we have
∞
X
(j)
f (x) = cn+j (n + 1) · · · (n + j)(x − a)n .
n=0
We can apply the previous theorem to the power series f (j) to get
∞
X
(j+1)
f (x) = cn+j n(n + 1) · · · (n + j)(x − a)n−1 .
n=0
Hence the result is also true when k = j + 1. So our claim follows by induction.
So substituting x = a gives
∞
X
(k)
f (a) = cn+k (n + 1) · · · (n + k)(a − a)n = ck k!.
n=0
This is true because all of the terms in the sum disappear except for the term when n = 0.
This is exactly what we want.
156
Finally, as an immediate corollary, Theorem 7.14 shows that, in the domain (a − R, a + R), a
power series is infinitely differentiable and defines a unique function.
P∞ n
P∞ n
Corollary 7.16. Suppose that two power series n=0 an (x − x0 ) and n=0 bn (x − x0 )
converge on some interval (x0 − r, x0 + r) to the same function, then an = bn , n ≥ 0, and they
are both infinitesimally differentiable.
Theorem 7.17 (The Binomial Theorem). If α is any real number and |x| < 1, then
∞
α
X α(α − 1) · · · (α − (n − 1))
(1 + x) = xn. (7.6)
n=0
n!
Remark 7.18. If α is a positive integer then this just reduces to a special case of the more
familiar version of the Binomial Theorem for positive integer powers.
To see that this is a special case of the previous theorem just notice that if n − 1 ≥ m, that is,
n ≥ m + 1, the terms in the series are all zero, so we only have a finite sum. Also, we do
not need the condition on |x| because the sum is finite and so we do not have to
worry about convergence.
α(α − 1) · · · (α − (n − 1)) n
Proof. Let cn = x . Then,
n!
cn+1 (n + 1)!α(α − 1) · · · (α − (n − 1)) n+1
R = lim = = lim = 1.
n→∞ cn n!α(α − 1) · · · (α − n) n→∞ n − α
Hence the series converges if |x| < 1 and diverges if |x| > 1. We will not worry about showing
what happens when |x| = 1.
To calculate the convergence we will need to show that cn corresponds to the general term of
the Maclaurin series of f (x) = (1 + x)α and that the remainder tends to 0 for every |x| < 1.
157
It is a straightforward calculation to see that f (n) (x) = α(α − 1) · · · (α − (n − 1)) · (1 + x)α−n.
Hence, f (n) (0) = α(α − 1) · · · (α − (n − 1)). And so,
f (n+1) (ξ) n+1 α(α − 1) · · · (α − n)
Rn f (x) = x = · (1 + ξ)(α−n) · xn.
(n + 1)! (n + 1)!
A close look at this reveals that we can expect trouble if x < 0 since then 1 + ξ can become
arbitrarily close to zero and for large n, we’ll have α − n < 0.
It turns out that a reasonably straightforward proof of convergence can be given if we insist
that 0 6 x < 1 but that the proof requires a lot more effort if −1 < x 6 0. We do not give
the details here.
Examples.
1
1. To find the MacLaurin series for the function f (x) = √ and the values of x for
9−x
which it converges, we proceed as follows. We have
1 1 1 x −1/2
√ =√ p = 1− .
9−x 9 1 − x/9 3 9
We can compute this using
the Binomial expansion which will result in a valid MacLaurin
x
series expansion if − < 1, that is, if −9 < x < 9. We get
9
1 x −1/2 1
1 x 1 3 x 2
1− = 1+ − − + − − − +···
3 9 3 2 9 2 2 9
(−1/2)(−3/2) · · · (−1/2 − (n − 1)) x n
+ − +···
n! 9
∞ ∞
X 1 · 3 · · · · (2n − 1) n X (2n)!
= n
x = n 2
xn.
n=0
(18) n! n=0
(36) (n!)
1
2. To find the MacLaurin series for (2 + 3x)− 2, we use the Binomial Theorem noting that
p = 1/2. Rewrite the expression as
− 12
− 21 − 21 3
(2 + 3x) = 2 1+ x .
2
Then, from (7.6):
p(p − 1) 2 p(p − 1)(p − 2) 3
(1 + u)p = 1 + pu + u + u + ..., |u| < 1,
2! 3!
3x
so, on substituting u = ,
2
" 3 2 3 5 3 #
1 1 1
− − − − − −
1 1 3 3 3
(2 + 3x)− 2 = √ 1 + 2
x + 2 2
x + 2 2 2
x + ...
2 1 2 1 · 2 2 1 · 2 · 3 2
1 3 27 2 135 3
= √ 1− x+ x − x + ...
2 4 32 128
which is convergent for 3 x < 1 that is |x| < 2 .
2 3
158
7.4 Extra curricular material: General Theorem About
Taylor Series
In this lecture we collect together the information we have on power series and add some
additional properties we are going to prove next term in Analysis 2 (MA2731).
Theorem 7.20 (Power and Taylor’s series). Let {an }∞ n=1 be a sequence of real numbers such
1
that ρ = limn→∞ |an |1/n is defined. When ρ > 0, let R = and, when ρ = 0, let R = ∞. The
ρ
power series
∞
X
f (x) = ak (x − x0 )k
k=0
2. The function f is infinitely differentiable (in particular continuous) for x ∈ (x0 −R, x0 +
R).
3. The values f (x) are not defined for |x − x0 | > R. What happens to x = x0 ± R needs to
be investigated individually.
4. For any n ∈ N, the derivatives f (n) of f have also a representation as power series
with the same radius of convergence R. The coefficients of the power series of the n-th
derivative are obtained by differentiating n-times each of the terms of the power series
for f . The function f has the Taylor’s series representation
∞
X f (n) (x0 )
f (x) = (x − x0 )n, x ∈ (x0 − R, x0 + R).
n=0
n!
159
7.4.1 Examples of Power Series Revisited
Now we derive consequences of Theorem 7.20.
The results that have been covered say nothing about the convergence or divergence at
any point x with |x| = 1. In the case of the series for 1/(1 − x)2 the terms do not tend
to 0 when |x| = 1 and the series diverges for all |x| = 1. In the case of the series for
− ln(1 − x), the series is the harmonic series when x = 1 and hence diverges when x = 1.
Notice also that
lim− − ln(1 − x) = ∞.
x→1
When x = −1, the left hand side is − ln(2) and the right hand side is the alternating
harmonic series. We shall see in the next subsection that the series converges so
1 1 1
ln(2) = 1 − + − +··· .
2 3 4
dx
= sec2 y = 1 + tan2 y = 1 + x2,
dy
we get
dy 1
= .
dx 1 + x2
We can integrate term-by-term to obtain
Z x
dt x3 x5 x7
arctan x = 2 = x− + − +··· .
0 1+t 3 5 7
Note that arctan is defined for all x ∈ R, but this representation as a power series is
only true for |x| < 1.
160
3. Theorem 7.20 does not make any claims about the convergence of the power series, or
its derivatives, at |x − x0 | = R. Indeed, anything can happen,
Also, the power series may converge, but not its derivative etc. For instance,
the series ∞
X xn
n=1
n2
converges at both end-points x = ±1, but its derivative converges (conditionally) only
at x = −1.
x2 x3 x4
ln(1 + x) = x − + + −···
2 3 4
x3 x5 x7
arctan x = x − + + −···
3 5 7
This is only justified if you show that the series actually converge at x = 1, which we’ll do
here for the first of these two formulas. The proof for the second formula is similar. The
following is not Leibniz’ original proof. Begin with the geometric sum
1 (−1)n+1 xn+1
1 − x + x2 − x3 + · · · + (−1)n xn = +
1+x 1+x
Then you integrate both sides from x = 0 to x = 1 and get
Z 1 Z 1 n+1
1 1 1 n 1 dx n+1 x dx
− + − · · · + (−1) = + (−1)
1 2 3 n+1 0 1+x 0 1+x
Z 1 n+1
x dx
= ln 2 + (−1)n+1
0 1+x
1
G.W.L. Leibniz (1646-1716) was a German mathematician and philosopher who contributed to many
different areas of human knowledge. He is famous for his version of calculus and the dispute with Newton.
Many of the notation and ideas where R made explicit by Leibnitz. His formalism was to prove vital in latter
development. He used the notation , gave the product rule for differentiation and the derivative of xr for
integral and fractional r, but he never thought of the derivative as a limit. Another great achievements in
mathematics were the development of the binary system of arithmetic, his work on determinants which arose
from his developing methods to solve systems of linear equations.
161
Instead of computing the last integral, you estimate it by noting that
xn+1
0≤ ≤ xn+1
1+x
implying that
1 1
xn+1 dx 1
Z Z
0≤ ≤ xn+1 dx = .
0 1+x 0 n+2
Hence,
1 1
xn+1 dx xn+1 dx
Z Z
n+1
lim (−1) = (−1)n+1 lim = 0,
n→∞ 0 1+x n→∞ 0 1+x
and we get
1
1 1 1 1 xn+1 dx
Z
lim − + − · · · + (−1)n = ln 2 + lim (−1)n+1 = ln 2.
n→∞ 1 2 3 n+1 n→∞ 0 1+x
using directly Theorem 7.20 by manipulating the series to obtain derivatives or integrals of
known series. The radius of convergence is R = 1 and x0 = 0. For |x| < 1, we write
∞ ∞ ∞ ∞
X
k
X
k
X
k
X ′ x
kx = (k + 1)x − x = xk+1 −
k=1 k=1 k=1 k=1
x−1
∞
!′ ′
x2
X
k x x x
= x x − = − = .
k=1
x−1 1−x x−1 (1 − x)2
162
7.5 Extra curricular material: Taylor Series and Fi-
bonacci Numbers
In this lecture we shall look at examples of use of Taylor/Maclaurin series in different areas
of Mathematics.
Proof. Recall that e is irrational means that e cannot be written as a fraction of two integers.
Let f (x) = ex . We know that
n
X 1
ex = + Rn f (x).
k=0
k!
Take x > 0. Then, the revisited Taylor’s Theorem 6.6 tells us that the integral form of Rn f
is
Z x (n+1) Z x t
f (t) e ex x
Z
n n
Rn f (x) = (x − t) dt = (x − t) dt ≤ (x − t)n dt
0 n! 0 n! n! 0
ex h (x − t)n+1 ix ex xn+1
= − = .
n! n+1 0 (n + 1)!
Now take x = 1 and an integer N > e. Then,
n
X 1
e= + Rn f (1),
k!
k=0
where
N
0 < Rn f (1) < .
(n + 1)!
p
Suppose, for contradiction, that e is rational. Let e = where p and q are positive integers.
q
Choose n so that n > q and n > 3. Then,
p 1 1 1
= 1+1+ + +···+ + Rn f (1).
q 2! 3! n!
So,
pn! n! n!
= n! + n! + + + · · · + 1 + n!Rn f (1).
q 2! 3!
Every term in this equation other than possibly n!Rn f (1) is an integer, so n!Rn f (1) must be
an integer. However,
N
0 < n!Rn f (1) < < 1.
n+1
So, n!Rn f (1) is not an integer after all, giving a contradiction.
163
7.5.2 Taylor’s Formula and Fibonacci Numbers
Now, consider the function rule
1
f (x) = .
1 − x − x2
How can we calculate T∞ f ? Using the definition seems complicated because the derivatives
of f will involve complicated rational functions:
1 + 2x 2(2 + 3x + 3x2 )
f ′ (x) = , f ′′ (x) = ...
(1 − x − x2 )2 (1 − x − x2 )3
T∞ f (x) = c0 + c1 x + c2 x2 + c3 x3 + · · ·
be its Taylor series. Considering the Taylor polynomials Tn f for large n, due to Lagrange’s
Remainder Theorem, we have, for any n,
1
= c0 + c1 x + c2 x2 + c3 x3 + · · · + cn xn + Rn f (x).
1 − x − x2
Multiply both sides with 1 − x − x2 , ignoring the terms of degree larger than n, you get
1 = (1 − x − x2 ) · (c0 + c1 x + c2 x2 + · · · + cn xn )
= c0 + c1 x + c2 x2 + · · · + cn xn
− c0 x − c1 x2 − · · · − cn−1 xn
− c0 x2 − · · · − cn−2 xn
= c0 + (c1 − c0 )x + (c2 − c1 − c0 )x2 + · · · + (cn − cn−1 − cn−2 )xn.
Compare the coefficients of powers xk on both sides for k = 0, 1, . . . , n and you find
Therefore the coefficients of the Taylor series T∞ f (x) satisfy the (second order) recursion
relation
cn = cn−1 + cn−2 , n ≥ 2, (7.7)
with initial data
c0 = 1 = c1 = 1. (7.8)
The numbers satisfying (7.7) and (7.8) are known in other aspects mathematics, they are the
Fibonacci numbers Fn with F0 = F1 = 1. We can calculate them:
F2 = F1 + F0 = 1 + 1 = 2,
F3 = F2 + F1 = 2 + 1 = 3,
F4 = F3 + F2 = 3 + 2 = 5,
etc.
The first time they appeared was in the model problem of calculating the growth in a popu-
lation of ‘rabbits’ such that each pair has exactly one pair of offsprings each year but cannot
164
reproduce for one year (as juveniles). Since, they appeared in other area of biology (in a more
realistic way).
Since it is much easier to compute the Fibonacci numbers one by one than it is to compute
the derivatives of f (x) = 1/(1 − x − x2 ), this is a better way to compute the Taylor series of
f (x) than just directly from the definition. The Fibonacci numbers are encoded into
the Taylor series of f (x) = 1/(1 − x − x2 ). This appears in area of higher mathematics, for
instance in Probability and Statistics. The term is to say that f is the generating function
for the Fibonacci numbers.
We can also use f to find a closed formula for the Fibonacci numbers, by expanding f in partial fractions. The denominator 1 - x - x^2 vanishes exactly when

x^2 + x - 1 = 0.

Using the general formula for the roots of a quadratic equation, we get

x_\pm = \frac{-1 \pm \sqrt{5}}{2}.

Define the golden ratio φ as

φ = \frac{1 + \sqrt{5}}{2} \approx 1.618\,033\,988\,749\,89\ldots

Note that

\frac{1}{φ} = \frac{2}{\sqrt{5} + 1} = \frac{2(\sqrt{5} - 1)}{5 - 1} = \frac{\sqrt{5} - 1}{2}.

And so φ + 1/φ = \sqrt{5} and φ - 1/φ = 1. Then,

x_- = -φ, \qquad x_+ = \frac{1}{φ}.
So,

1 - x - x^2 = -\Big(x - \frac{1}{φ}\Big)(x + φ).

Hence, f(x) can be written as

f(x) = \frac{1}{1 - x - x^2} = \frac{-1}{\big(x - \frac{1}{φ}\big)(x + φ)} = \frac{A}{x - \frac{1}{φ}} + \frac{B}{x + φ},

where

A = \frac{-1}{\frac{1}{φ} + φ} = -\frac{1}{\sqrt{5}}, \qquad B = \frac{1}{\frac{1}{φ} + φ} = \frac{1}{\sqrt{5}}.

Therefore,

f(x) = \frac{1}{1 - x - x^2} = \frac{φ}{\sqrt{5}\,(1 - xφ)} + \frac{1/φ}{\sqrt{5}\,\big(1 + \frac{x}{φ}\big)}.
Using the Binomial Theorem, we find the expansions of (1 - xφ)^{-1} and (1 + x/φ)^{-1} as

\frac{1}{1 - xφ} = 1 + (xφ) + (xφ)^2 + \cdots + (xφ)^n + \cdots,

\frac{1}{1 + \frac{x}{φ}} = 1 - \frac{x}{φ} + \Big(\frac{x}{φ}\Big)^2 - \cdots + (-1)^n \Big(\frac{x}{φ}\Big)^n + \cdots.

Substituting these into the expression for f above and comparing the coefficient of x^n with c_n = F_n, we obtain

F_n = \frac{1}{\sqrt{5}} \Big( φ^{n+1} + \frac{(-1)^n}{φ^{n+1}} \Big).        (7.9)
Note. We can check that (7.9) gives F_0 and F_1 correctly. For that, recall that φ^2 = 1 + φ, so φ^2 - 1 = φ. And so,

F_0 = \frac{1}{\sqrt{5}} \Big( φ + \frac{1}{φ} \Big) = \frac{\sqrt{5}}{\sqrt{5}} = 1,

F_1 = \frac{1}{\sqrt{5}} \Big( φ^2 - \frac{1}{φ^2} \Big) = \frac{1}{\sqrt{5}} \cdot \frac{φ^4 - 1}{φ^2} = \frac{1}{\sqrt{5}} \cdot \frac{(φ^2 - 1)(φ^2 + 1)}{φ^2} = \frac{1}{\sqrt{5}} \cdot \frac{1 + φ^2}{φ} = \frac{1}{\sqrt{5}} \Big( \frac{1}{φ} + φ \Big) = 1.
Remark 7.22. As a final remark, there are some problems where the initial numbers are f_0 = 0 and f_1 = 1. In that case we can see that f_n = F_{n-1}, n ≥ 1, and their generating function is now

\frac{x}{1 - x - x^2}.
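To see formula (7.9) in action, here is a short Python check (a sketch only) that the closed formula reproduces the recursion for the first few Fibonacci numbers.

    from math import sqrt

    phi = (1 + sqrt(5)) / 2

    def fib_closed(n):
        """F_n from the closed formula (7.9), rounded to the nearest integer."""
        return round((phi ** (n + 1) + (-1) ** n / phi ** (n + 1)) / sqrt(5))

    # Compare with the recursion F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}.
    F = [1, 1]
    for n in range(2, 15):
        F.append(F[n - 1] + F[n - 2])
    print(all(fib_closed(n) == F[n] for n in range(15)))   # prints True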
7.6 Extra-curricular Christmas treat: Series of Functions Can Be Difficult Objects

In this appendix we look at two phenomena illustrating the difficult behaviour that can happen with series of functions.

1. In the first example, we show that a Taylor series for a function f about x = a may converge everywhere but never to f(x) outside of x = a.

2. In the second example, we show that Fourier series can converge to bizarre functions. We give an explicit example where the limit is a continuous function that is nowhere differentiable, hence impossible to draw.
Example. Consider the function f given by f(x) = e^{-1/x^2} for x ≠ 0 and f(0) = 0. We claim that f has derivatives of all orders at 0 and that f^{(n)}(0) = 0 for every n, so the Maclaurin series of f is identically zero even though f(x) > 0 for every x ≠ 0.

Proof. We are going to show that all derivatives of f are 0 at the origin. We proceed in a few steps.

1. First show, by induction on n, that for x ≠ 0 each derivative has the form f^{(n)}(x) = p(x) e^{-1/x^2} / x^k for some polynomial p and some k ∈ N.
2. Now show that f^{(n)}(0) = 0 using the rigorous definition of a derivative. To evaluate the limit, you will need to find \lim_{x\to 0} \frac{e^{-1/x^2}}{x^k} where k ∈ N. This can be done by taking logs and then using L'Hôpital's Rule.

We prove this by induction too. This time we will show that for all n ≥ 0, f^{(n)}(0) = 0. This is clearly true for n = 0. Suppose it is true for n = m. Then

f^{(m+1)}(0) = \lim_{x\to 0} \frac{f^{(m)}(x)}{x} = \lim_{x\to 0} \frac{p(x) e^{-1/x^2}}{x^{k+1}} = \lim_{x\to 0} p(x) \cdot \lim_{x\to 0} \frac{e^{-1/x^2}}{x^{k+1}}.

The first limit is finite and we will show that the second limit is 0 using L'Hôpital's Rule. This will show that the limit we want is equal to zero and establish the result by induction.

It will be enough to show that \lim_{x\to 0} \frac{e^{-1/x^2}}{|x|^{k+1}} = 0.

We will show that \lim_{x\to 0} x^2 \ln|x| = 0. This can be done with a standard application of L'Hôpital's Rule by writing

x^2 \ln|x| = \frac{\ln|x|}{1/x^2}.

Therefore \lim_{x\to 0} \big( 1 + (k+1) x^2 \ln|x| \big) = 1 and so

\lim_{x\to 0} \Big( -\frac{1}{x^2} - (k+1)\ln|x| \Big) = -\lim_{x\to 0} \frac{1 + (k+1) x^2 \ln|x|}{x^2} = -\infty.

Because exp is continuous, if g(x) → -∞ as x → 0 then exp(g(x)) → 0 as x → 0, so

\lim_{x\to 0} \frac{e^{-1/x^2}}{|x|^{k+1}} = \lim_{x\to 0} \exp\Big( -\frac{1}{x^2} - (k+1)\ln|x| \Big) = 0.

3. We are told that f(0) = 0 and have proved that f^{(n)}(0) = 0 for all n ∈ N. So, all the coefficients of the Maclaurin series are 0. Consequently the Maclaurin series is identically zero.
As an application, think of chemistry. Arrhenius' law says that the rate of a chemical reaction is proportional to

e^{-\Delta E / (kT)},

where ∆E is the amount of energy involved in each reaction, k is Boltzmann's constant and T is the temperature in degrees Kelvin. If you ignore the constants ∆E and k (i.e. if you set them equal to one by choosing the right units) then the reaction rate is proportional to

f(T) = e^{-1/T}.

If you have to deal with reactions at low temperatures you might be inclined to replace this function with its Taylor series at T = 0, or at least the first non-zero term in this series. If you were to do this, you would be in for a surprise. To see what happens, look at the following function,

f(x) = \begin{cases} e^{-1/x}, & x > 0, \\ 0, & x \le 0. \end{cases}

This function goes to zero very quickly as x → 0. In fact one has, setting t = 1/x,

\lim_{x \downarrow 0} \frac{f(x)}{x^n} = \lim_{x \downarrow 0} \frac{e^{-1/x}}{x^n} = \lim_{t \to \infty} t^n e^{-t} = 0.
[Figure: the graph of y = e^{-1/x} for x > 0. The Taylor series of f at this point does not converge to f.]
If you try to compute the Taylor series of f you need its derivatives at x = 0 of all orders. These can be computed (see last week's example), and the result turns out to be that all derivatives of f vanish at x = 0, so

T_\infty f(x) = 0 + 0 \cdot x + 0 \cdot \frac{x^2}{2!} + 0 \cdot \frac{x^3}{3!} + \cdots = 0.
Clearly this series converges (all terms are zero, after all), but instead of converging to the
function f we started with, it converges to the function g = 0.
What does this mean for the chemical reaction rates and Arrhenius’ law? We wanted to
‘simplify’ the Arrhenius law by computing the Taylor series of f (T ) at T = 0, but we have
just seen that all terms in this series are zero. Therefore replacing the Arrhenius reaction rate
by its Taylor series at T = 0 has the effect of setting all reaction rates equal to zero.
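A tiny numerical illustration of this (a sketch only): for small positive T the true rate f(T) = e^{-1/T} is positive but extremely small, while every Maclaurin polynomial of f predicts exactly 0.

    from math import exp

    def f(x):
        """The Arrhenius-type rate e^(-1/x) for x > 0, and 0 for x <= 0."""
        return exp(-1.0 / x) if x > 0 else 0.0

    # Every Maclaurin polynomial of f is the zero polynomial, so the 'error' of the
    # Taylor approximation at T is just f(T) itself: positive, but astronomically small.
    for T in (0.5, 0.2, 0.1, 0.05):
        print(f"T = {T}: f(T) = {f(T):.3e},  T_n f(T) = 0 for every n")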
7.6.3 Series Can Define Bizarre Functions: Continuous but Nowhere Differentiable Functions

As a final remark about series, Karl Weierstrass in 1872 considered the function

f : R → R, \qquad f(x) = \sum_{j=0}^{\infty} f_j(x), \quad \text{where} \quad f_j(x) = \frac{1}{2^j} \cos(3^j x).        (7.10)

For all x, |f_j(x)| ≤ 1/2^j and, since the geometric series \sum_{j=0}^{\infty} 2^{-j} converges, the series converges uniformly on R and consequently defines a continuous function. The particularly interesting property of f that Weierstrass proved (which will not be shown here) is that f is not differentiable at any point. The function f is an example of a continuous but nowhere differentiable function.
In Figure 7.2 we plot the finite sums \sum_{j=1}^{n} \frac{1}{2^j} \cos(3^j x) for several values of n; each graph is translated upwards by a different amount so that we can compare them. Note that as n increases the graphs become more and more jagged.
[Figure 7.2: partial sums of the Weierstrass series (7.10) on [-3, 3] for increasing n, shifted vertically for comparison.]
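For readers who would like to reproduce a picture like Figure 7.2, here is a minimal Python sketch; it assumes the numpy and matplotlib libraries are available (they are not part of these notes), and the vertical shifts and values of n are arbitrary choices.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-3, 3, 4000)

    def weierstrass_partial(x, n):
        """Partial sum sum_{j=0}^{n} cos(3^j x) / 2^j of the series (7.10)."""
        return sum(np.cos(3**j * x) / 2**j for j in range(n + 1))

    for n in (1, 3, 5, 7):
        # shift each partial sum downwards so the graphs do not overlap
        plt.plot(x, weierstrass_partial(x, n) - 3 * n, label=f"n = {n}")
    plt.legend()
    plt.show()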
Summary of Chapter 6

We have seen:

• how to calculate the radius of convergence of a power series, and how to investigate convergence at the boundary of the interval of convergence.
7.7 Exercise Sheet 6

7.7.1 Exercise Sheet 6a

2. Determine the values of x for which the following function rules have a valid power series expansion about 0.

(a) cos x,
(b) sin x,
(c) ln(1 + x).
7. Let \sum_{n=0}^{\infty} c_n (x - a)^n be a power series. Prove that, when they exist,

\rho_1 = \lim_{n\to\infty} \Big| \frac{c_{n+1}}{c_n} \Big| \qquad \text{and} \qquad \rho_2 = \lim_{n\to\infty} |c_n|^{1/n}

are equal.
10. Show that we cannot apply the Ratio Test to the power series \sum_{n=1}^{\infty} a_n x^n where a_n = 1 if n is odd and a_n = 2 if n is even, but that we can apply the Root Test. Find its radius of convergence.
Answers.

2. (a) x ∈ R,
(b) x ∈ R,
(c) x ∈ (−1, 1).
3. (a) R = 3.
(b) R = 0.
(c) R = ∞.
(d) R = 0.
(e) R = 5.
(f) R = 2.
(g) R = ∞.
4. Use the convergence of a monotone sequence to establish the existence of a limit, then
calculate it.
5. 3(i) You may need \sum_{k=1}^{\infty} k y^k = \frac{y}{(1-y)^2}.

3(iii) e^{2x}.

3(v) You may need to calculate \sum_{k=1}^{\infty} k^2 y^k.

3(vi) \frac{2x}{(2-x)^2}.

6. 1 - \frac{1}{3} + \frac{1}{5 \cdot 2!} - \frac{1}{7 \cdot 3!} + \frac{1}{9 \cdot 4!} - \cdots
7.
8.
9.
10. R = 1.
7.7.2 Feedback for Exercise Sheet 6a
1. (a) \sqrt[3]{1 + 2t + t^2} = \sqrt[3]{(1 + t)^2} = (1 + t)^{2/3}. The answer follows from Newton's binomial formula. The Maclaurin series of (1 + t)^{2/3} is

\sum_{n=0}^{\infty} \binom{2/3}{n} t^n.
(b) Let f(t) = e^{-t} sin(2t). To simplify the calculation we are going to use a 'neat trick': write sin(2t) in terms of complex exponentials:
(c) The derivative of arcsin t is \frac{1}{\sqrt{1 - t^2}}. So you need to integrate the result (7.12) of the next exercise. We find

T_\infty \arcsin(t) = \int \Big( \sum_{n=0}^{\infty} \frac{(2n)!}{4^n (n!)^2} t^{2n} \Big) dt = \sum_{n=0}^{\infty} \frac{(2n)!}{4^n (n!)^2} \frac{t^{2n+1}}{2n+1}.
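A quick numerical check of this expansion (a sketch only; the truncation at 40 terms is an arbitrary choice):

    from math import asin, factorial

    def arcsin_series(t, terms=40):
        """Partial sum of sum_n (2n)!/(4^n (n!)^2) * t^(2n+1)/(2n+1)."""
        return sum(factorial(2 * n) / (4**n * factorial(n) ** 2)
                   * t ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

    for t in (0.1, 0.5, 0.9):
        print(t, arcsin_series(t), asin(t))   # good agreement for |t| < 1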
(d) We use the Binomial Theorem or direct differentiation on g(z) = (1 - z)^{-1/2}, then replace z by t^2. We find that the Maclaurin series expansion of g is

\sum_{n=0}^{\infty} \frac{1 \cdot 3 \cdots (2n-1)}{2 \cdot 4 \cdots 2n} z^n = \sum_{n=0}^{\infty} \frac{(2n)!}{4^n (n!)^2} z^n,

multiplying top and bottom by the product of the first n even numbers: 2 \cdot 4 \cdots 2n = 2^n (n!).
(e) Because

T_\infty \sin(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!},

we have

\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n+1)!}.

Therefore we obtain

\sum_{n=0}^{\infty} (-1)^n \frac{x^n}{n+1}.

And so f(0) = 1, f'(0) = -1, f''(0) = 0, f'''(0) = 2 and f^{(4)}(0) = -4 f(0) = -4; hence, for n ≥ 0,

f^{(4n)}(0) = (-4)^n,
f^{(4n+1)}(0) = -(-4)^n,
f^{(4n+2)}(0) = 0,
f^{(4n+3)}(0) = 2(-4)^n.
So the Maclaurin series expansion is

T_\infty f(t) = \sum_{n=0}^{\infty} \Big[ f^{(4n)}(0) \frac{t^{4n}}{(4n)!} + f^{(4n+1)}(0) \frac{t^{4n+1}}{(4n+1)!} + f^{(4n+3)}(0) \frac{t^{4n+3}}{(4n+3)!} \Big]

= \sum_{n=0}^{\infty} \Big[ (-4)^n \frac{t^{4n}}{(4n)!} - (-4)^n \frac{t^{4n+1}}{(4n+1)!} + 2(-4)^n \frac{t^{4n+3}}{(4n+3)!} \Big]

= \sum_{n=0}^{\infty} (-4)^n \frac{t^{4n}}{(4n)!} \Big[ 1 - \frac{t}{4n+1} + \frac{2t^3}{(4n+1)(4n+2)(4n+3)} \Big]

= \sum_{n=0}^{\infty} (-4)^n \frac{t^{4n}}{(4n+1)!} \Big[ 4n + 1 - t + \frac{t^3}{(2n+1)(4n+3)} \Big].
2. (a) The remainder term R_n(x) is equal to \frac{f^{(n)}(\zeta_n)}{n!} x^n for some \zeta_n. For either the cosine or sine and any n and \zeta, we have |f^{(n)}(\zeta)| ≤ 1. So, |R_n(x)| ≤ \frac{|x|^n}{n!}. But we know \lim_{n\to\infty} \frac{|x|^n}{n!} = 0 and hence \lim_{n\to\infty} R_n(x) = 0.
(b) We know that the Maclaurin expansion for sin x is \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}. To test when this converges to sin x, we need to test whether the remainder |R_{n,0}(x)| converges to 0. We will use the Sandwich Theorem to show this. From Taylor's Theorem we know that

|R_{n,0}(x)| ≤ \frac{M}{(n+1)!} |x|^{n+1},

where M is the absolute maximum of |f^{(n+1)}(t)| on the interval [0, x] or [x, 0] (depending on the sign of x). But all the derivatives of sin x are either ±sin x or ±cos x, and we have |sin x| ≤ 1 and |cos x| ≤ 1 for all x. Hence M ≤ 1 and consequently

|R_{n,0}(x)| ≤ \frac{1}{(n+1)!} |x|^{n+1}.

From Exercise 4 we know that \lim_{n\to\infty} \frac{1}{(n+1)!} |x|^{n+1} = 0. Consequently, by the Sandwich Theorem, \lim_{n\to\infty} |R_{n,0}(x)| = 0.
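This bound can be watched numerically. The following short sketch (the choice x = 2 is arbitrary) compares the true remainder of the Maclaurin polynomial of sin with the bound |x|^{n+1}/(n+1)!.

    from math import sin, factorial

    def sin_taylor(x, n):
        """Maclaurin polynomial of sin of degree n (n odd; only odd powers contribute)."""
        return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
                   for k in range(n // 2 + 1))

    x = 2.0
    for n in (1, 3, 5, 7, 9, 11):
        actual = abs(sin(x) - sin_taylor(x, n))
        bound = abs(x) ** (n + 1) / factorial(n + 1)
        print(f"n = {n:2d}: |R_n(x)| = {actual:.2e} <= {bound:.2e}")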
(c) We know that if f(x) = ln(1 + x) has a power series expansion ln(1 + x) = \sum_{n=0}^{\infty} c_n x^n, then c_n = \frac{f^{(n)}(0)}{n!}. For n ≥ 1 we know that f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{(1+x)^n}, and so f^{(n)}(0) = (-1)^{n-1}(n-1)!. Hence, if ln(1 + x) has a power series expansion, then it must be

\ln(1 + x) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1} x^n}{n}.

First we will apply the Ratio Test to the series to see when this power series converges. Clearly it converges when x = 0 because all the terms are zero. So assume x ≠ 0 and apply the Ratio Test. Let a_n = \frac{(-1)^{n-1} x^n}{n}. Then

\frac{a_{n+1}}{a_n} = \frac{(-1)^n x^{n+1}/(n+1)}{(-1)^{n-1} x^n / n} = -\frac{n x}{n+1}.

So,

\lim_{n\to\infty} \Big| \frac{a_{n+1}}{a_n} \Big| = \lim_{n\to\infty} \frac{n}{n+1} |x| = |x|.

Hence the series converges if |x| < 1, diverges if |x| > 1 and may or may not converge if |x| = 1.
So ln(1 + x) cannot have a valid power series expansion when |x| > 1. We will not worry about what happens when |x| = 1, and will now prove that it has a valid power series expansion when |x| < 1. We need to show that if |x| < 1, then \lim_{n\to\infty} R_{n,0}(x) = 0.

Our aim will be to use the Sandwich Theorem to show this. We know that 0 ≤ |R_{n,0}(x)|, so all we need to do is find an upper bound for |R_{n,0}(x)|. Taylor's Theorem says that

|R_{n,0}(x)| ≤ \frac{M |x|^{n+1}}{(n+1)!},

where M is the absolute maximum of |f^{(n+1)}(t)| on the interval [0, x] or [x, 0] (depending on the sign of x). Now,

|f^{(n+1)}(t)| = \Big| \frac{(-1)^n n!}{(1+t)^{n+1}} \Big| ≤ \frac{n!}{(1 - |t|)^{n+1}}.

So the absolute maximum of |f^{(n+1)}(t)| on the interval [0, x] or [x, 0] is at most \frac{n!}{(1 - |x|)^{n+1}}. In particular, for 0 ≤ x < 1 the absolute maximum of |f^{(n+1)}(t)| on the interval [0, x] is n! (since 1 + t ≥ 1 there), and we have

|R_{n,0}(x)| ≤ \frac{n! \, x^{n+1}}{(n+1)!} = \frac{x^{n+1}}{n+1}.

So if 0 ≤ x < 1 then |R_{n,0}(x)| → 0 as n → ∞.
3. (a) The radius R is R = \lim_{k\to\infty} \frac{k^3}{3^k} \cdot \frac{3^{k+1}}{(k+1)^3} = 3 \lim_{k\to\infty} \Big( \frac{k}{k+1} \Big)^3 = 3.

(c) The radius R is R = \lim_{k\to\infty} \frac{2^k}{k!} \cdot \frac{(k+1)!}{2^{k+1}} = \lim_{k\to\infty} \frac{k+1}{2} = +\infty.

(d) The radius R is R = \lim_{k\to\infty} \frac{k!}{(k+1)!} = \lim_{k\to\infty} \frac{1}{k+1} = 0.

(e) The radius R is R = \lim_{k\to\infty} \frac{k^2}{5^k} \cdot \frac{5^{k+1}}{(k+1)^2} = 5 \lim_{k\to\infty} \Big( \frac{k}{k+1} \Big)^2 = 5.

(f) The radius R is R = \lim_{k\to\infty} \frac{k}{2^k} \cdot \frac{2^{k+1}}{k+1} = 2 \lim_{k\to\infty} \frac{k}{k+1} = 2.

(g) The radius R is R = \lim_{k\to\infty} \frac{(k+1)^{k+1}}{k^k} = \lim_{k\to\infty} (k+1) \Big( 1 + \frac{1}{k} \Big)^k = e \cdot \lim_{k\to\infty} (k+1) = +\infty.
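For instance, the limit in part (a) can be watched numerically (a sketch only, using the coefficients c_k = k^3/3^k that appear in that computation):

    def c(k):
        """Coefficients from part 3(a): c_k = k^3 / 3^k."""
        return k ** 3 / 3 ** k

    # The ratios |c_k / c_{k+1}| approach the radius of convergence R = 3.
    for k in (5, 10, 50, 100, 500):
        print(k, c(k) / c(k + 1))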
4.
5.
6. We start with the series for e^x, namely 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots. Replacing x with -x^2, we obtain

e^{-x^2} = 1 - x^2 + \frac{x^4}{2!} - \frac{x^6}{3!} + \frac{x^8}{4!} - \cdots

Now we integrate the series term by term:

\int_0^1 e^{-x^2} dx = \int_0^1 \Big( 1 - x^2 + \frac{x^4}{2!} - \frac{x^6}{3!} + \frac{x^8}{4!} - \cdots \Big) dx
= \Big[ x - \frac{x^3}{3} + \frac{x^5}{5 \cdot 2!} - \frac{x^7}{7 \cdot 3!} + \frac{x^9}{9 \cdot 4!} - \cdots \Big]_0^1
= 1 - \frac{1}{3} + \frac{1}{5 \cdot 2!} - \frac{1}{7 \cdot 3!} + \frac{1}{9 \cdot 4!} - \cdots

If we add the first three terms here we get 0.7667. As a rough idea of how accurate this is, suppose we added the next term: this would change the result to 0.7429. This isn't much of a change, and each further term changes the result even less; the convergence is very rapid indeed.
7.
7.7.3 Additional Exercise Sheet 6b
1.
1.
7.7.4 Exercise Sheet 0c
1. Compute the Maclaurin series T∞ f for the following rules defined on the maximal interval containing 0 where they are defined.
3. (a) T_5 f(t) = t + \frac{1}{3} t^3 + \frac{2}{15} t^5.
(b) There is no simple general formula for the nth term in the Taylor series for f .
7.7.5 Feedback for Exercise Sheet 0c
1. (a) T_\infty f(t) = \sum_{k=0}^{\infty} \frac{1}{(2k+1)!} t^{2k+1}.

(b) T_\infty f(t) = \sum_{k=0}^{\infty} \frac{1}{(2k)!} t^{2k}.

(c) T_\infty f(t) = \sum_{n=0}^{\infty} \frac{3(n+1)}{2^{n+2}} t^n.

(d) T_\infty f(t) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} t^n.

(e) \frac{t}{1 - t^2}:

T_\infty \Big[ \frac{t}{1 - t^2} \Big] = T_\infty \Big[ \frac{1/2}{1 - t} - \frac{1/2}{1 + t} \Big] = t + t^3 + t^5 + \cdots + t^{2k+1} + \cdots
(f) sin t + cos t:

The pattern for the nth derivative repeats every time you increase n by 4. So we indicate the general terms for n = 4m, 4m + 1, 4m + 2 and 4m + 3:

(g) 1 + t^2 - \frac{2}{3} t^7:

A: T_\infty \big( 1 + t^2 - \tfrac{2}{3} t^7 \big) = 1 + t^2 - \tfrac{2}{3} t^7.
(h) \sqrt[3]{1 + t}:

A: T_\infty \sqrt[3]{1 + t} = 1 + \frac{1/3}{1!} t + \frac{(1/3)(1/3 - 1)}{2!} t^2 + \cdots + \frac{(1/3)(1/3 - 1)(1/3 - 2) \cdots (1/3 - n + 1)}{n!} t^n + \cdots
2. f(x) = \frac{x^4}{1 + 4x^2}; what is f^{(10)}(0)?

A: f^{(10)}(0) = 10! \cdot (-4)^3 = -2^6 \cdot 10!.
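This value can be double-checked symbolically; a sketch, assuming the sympy library is available:

    import sympy as sp

    x = sp.symbols('x')
    f = x**4 / (1 + 4 * x**2)

    # The series x^4 * sum_m (-4)^m x^(2m) has x^10 coefficient (-4)^3 = -64,
    # so f^(10)(0) = -64 * 10!; both lines below should print -232243200.
    print(sp.diff(f, x, 10).subs(x, 0))
    print(-64 * sp.factorial(10))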
Index
Maclaurin polynomial, 12
Maclaurin series, 147
Mean Value Theorem, 11
polynomial
degree, 17
Maclaurin, 12
Taylor, 12
power series, 9