0% found this document useful (0 votes)
18 views

Perturbation Methods

This document outlines the contents of a course on perturbation methods. It covers various techniques for solving algebraic equations, asymptotic approximations, methods for evaluating integrals, summation of series, matched asymptotic expansions, and the method of multiple scales. Specific topics include regular and singular perturbations, rescaling, non-integral powers, logarithms, convergence criteria, parametric expansions, integration by parts, stationary phase, steepest descents, Stokes phenomena, Cesàro sums, Borel sums, Padé approximants, and strained coordinates. Examples are provided throughout to illustrate the various methods.

Uploaded by

Thomas Wong
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
18 views

Perturbation Methods

This document outlines the contents of a course on perturbation methods. It covers various techniques for solving algebraic equations, asymptotic approximations, methods for evaluating integrals, summation of series, matched asymptotic expansions, and the method of multiple scales. Specific topics include regular and singular perturbations, rescaling, non-integral powers, logarithms, convergence criteria, parametric expansions, integration by parts, stationary phase, steepest descents, Stokes phenomena, Cesàro sums, Borel sums, Padé approximants, and strained coordinates. Examples are provided throughout to illustrate the various methods.

Uploaded by

Thomas Wong
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 79

Mathematical Tripos: Part III PM

Contents
0 Perturbation Methods (16 Lectures) i
0.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

1 Algebraic Equations 1
1.1 Regular Expansions and Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Iterative method (liked by Pure Mathematicians) . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Expansion method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

This is a specific individual’s copy of the notes. Please do not copy and/or redistribute.
1.2 Singular Perturbations and Rescaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Iterative method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Expansion method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Rescaling before expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Non Integral Powers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Asymptotic Approximations 7
2.1 Convergence and Asymptoticness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Uniqueness and Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Parametric Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3 Integral Methods 11
3.1 Elementary Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Integration by Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Integrals with Algebraic Parameter Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.5 Integrals with Exponential Power Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.5.1 Watson’s Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.5.2 Intermediate maximum (Laplace’s method) . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.5.3 Stationary phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5.4 Steepest descents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.5.5 The Airy function and Stokes phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Stokes Phenomena in the Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.7 What Happens At Stokes Lines? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.7.1 The Airy function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Mathematical Tripos: Part III PM a © [email protected], Michaelmas 2022


4 Summation Of Series By ‘Magic’ 31
4.1 Cesàro Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2 Euler Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Borel Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3.1 An example: the Stieltjes series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3.2 Summation in the Borel plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.3 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Padé Approximants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.5 Continued Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.6 Shanks’ Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.7 Richardson Extrapolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.8 Other Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5 Matched Asymptotic Expansions (MAEs) 36


5.1 Regular Perturbation Problems: An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.1.1 Exact solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.1.2 Perturbation solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Singular Perturbation: Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.1 Exact solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.2 Expansion solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.3 Van Dyke’s Matching Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.4 The Choice of Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.5 Where is the ‘Inner Layer’ ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.6 Composite Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.7 Matching Involving Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.7.1 A model equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.7.2 The case n = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.7.3 The case n = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.7.4 A ‘terrible’ problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.8 Strained Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

6 A Little More on Asymptotics Beyond All Orders 52


6.1 More on What Happens at Stokes Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.1.1 The complementary error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.2 A Model Equation (With Wider Implications) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.3 A Model of Crystal Growth (Unlectured) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3.1 Regular perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3.2 Too many boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3.3 A well posed problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3.4 Analytical continuation into the complex plane . . . . . . . . . . . . . . . . . . . . . . . . . 59

Mathematical Tripos: Part III PM b © [email protected], Michaelmas 2022


7 Method of Multiple Scales 62
7.1 Van der Pol oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
7.1.1 Regular perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
7.1.2 Multiple scales expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.1.3 A simple example of super slow time scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.2 Mathieu Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.2.1 Floquet Theory (for second order ODEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.2.2 ω 6= ±n/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.2.3 ω2 − 1
4
1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2
7.2.4 ω −1 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.3 WKBJLG Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.3.1 Leading-order solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.3.2 Turning points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.4 Ray Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
7.4.1 Model example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.4.2 Conservation of wave action for sound waves (only outlined in lectures) . . . . . . . . . . . 72

Mathematical Tripos: Part III PM c © [email protected], Michaelmas 2022


0 Perturbation Methods (16 Lectures)

0.1 Introduction
• 24 lectures prised into 16.
• Any corrections and suggestions should be emailed to me at [email protected].
• Closed book examination. Likely rubric:

Attempt no more than TWO questions.


There are THREE questions in total.
The questions carry equal weight.
• Books

This is a specific individual’s copy of the notes. Please do not copy and/or redistribute.
– Hinch, Perturbation methods.
– Van Dyke, Perturbation methods in fluid mechanics.
– Kevorkian & Cole, Perturbation methods in applied mathematics.
– Bender & Orszag, Advanced mathematical methods for scientists and engineers.

• Philosophy
– Many physical processes are described by equations that cannot be solved analytically.
– One approach is to solve the equations numerically; however, often there exists a ‘small’ pa-
rameter, ε, e.g.
∗ in low Mach number flows ε = M = uc , where u is the fluid velocity and c is the speed of
sound;
∗ in fast flows ε = Re
1
, where Re is the Reynolds number.
– We can use the smallness of ε to simplify the equations, and then find analytic (or simpler
numerical) solutions.

• Primarily interested in differential equations, but a number of the ideas can be illustrated for
algebraic equations and/or integrals. We will use algebraic equations to motivate some of the ideas.
• The only pre-requisites are (a) a course in ‘Sums’ (i.e. a competency to perform moderately messy
calculations), and (b) an ability to solve simple differential equations and evaluate simple integrals
(e.g. using integration by parts).

Mathematical Tripos: Part III PM i © [email protected], Michaelmas 2022


Include Olver’s paradox as an example?

Mathematical Tripos: Part III PM ii © [email protected], Michaelmas 2022


1 Algebraic Equations

1.1 Regular Expansions and Iteration

Consider
x2 + εx − 1 = 0 . (1.1)

Exact solution:   21
1 1
x = − ε ± 1 + ε2 .
2 4
If |ε| < 2, then can expand in a convergent series:

1 1 2 1 4

This is a specific individual’s copy of the notes. Please do not copy and/or redistribute.
1 − 2 ε + 8 ε − 128 ε + · · ·



x=
 1 1 1 4
−1 − ε − ε2 +

 ε + ···
2 8 128
Since the series is convergent for |ε| < 2, for small ε we can increase the accuracy by taking more terms.
We have

solved the equation and then approximated the solution

However, we cannot always solve the equation exactly, so can we


approximate and then solve the equation?

1.1.1 Iterative method (liked by Pure Mathematicians)

Based on
xn+1 = g(xn ) .
∗ ∗ ∗
Suppose xn = x + δn where x = g(x ). Then by Taylor Series

δn+1 = g 0 (x∗ ) δn + O δn2 .




If we have a good guess, so that |δn | is small, this is convergent if

|g 0 (x∗ )| < 1 .

Rearrange (1.1):
x2 = 1 − εx .
For the root near x = 1 try
1
xn+1 = (1 − εxn ) 2
x0 = 1
1 1 1
x1 = (1 − ε) 2 = 1 − ε − ε2 + · · ·
2 8
 1
 21 1 1 1
x2 = 1 − ε (1 − ε) 2 = 1 − ε + ε2 + ε3 + · · ·
2 8 8
  1
 12  12
x3 = 1 − ε 1 − ε (1 − ε) 2

1 1
1 − ε + ε2 + 0 + O ε4 + · · ·

=
2 8
Hard work for the higher terms — also, how many terms are correct?

Mathematical Tripos: Part III PM 1 © [email protected], Michaelmas 2022


1.1.2 Expansion method

For ε = 0, the roots are x = ±1. For the root near x = 1 try

x(ε) = 1 + εx1 + ε2 x2 + ε3 x3 + · · ·

Substitute into equation (1.1):


1 + 2εx1 + 2ε2 x2 + ε2 x21 + 2ε3 x3 + 2ε3 x1 x2 + · · ·
+ε + ε2 x1 + ε3 x2 + ···
−1 =0
Equate powers of ε:
ε0 : 1−1 =0
1
ε : 2x1 + 1 =0 , x1 = − 21
1
ε2 : 2x2 + x21 + x1 =0 , x2 = 8
ε3 : 2x3 + 2x1 x2 + x2 =0 , x3 = 0
Easier than the iterative method for higher terms, but you need to guess the expansion correctly.

1.2 Singular Perturbations and Rescaling

Consider
εx2 + x − 1 = 0 . (1.2)
ε=0 : one solution
ε 6= 0 : two solutions
The limit process ε → 0 is said to be singular .
1
−1 ± (1 + 4ε) 2
Exact solution: .

Expansion for |ε| < 14 :
1 − ε + 2ε2 − 5ε3 + · · ·

x= (1.3)
− 1ε − 1 + ε − 2ε2 + · · ·

The singular (i.e. extra) root → ∓∞ as ε → 0±.

1.2.1 Iterative method

(a) For the non-singular root try


xn+1 = 1 − εx2n .

(b) For the singular root, we need to keep the ‘εx2 ’ term as a major player. The leading order approxi-
mation is
εx2 + x ≈ 0 ;
so try rearranging (1.2) to
1 1
xn+1 = − + .
ε εxn
Exercise. Confirm (1.3) by iteration.
Note that in (b)
1 1
xn+1 = g(xn ) , where g(x) = − + .
ε εx
Hence  
1 1
g 0 (x) = − , g0 − =ε<1 if 0<ε<1.
εx2 ε

Mathematical Tripos: Part III PM 2 © [email protected], Michaelmas 2022


1.2.2 Expansion method

For one root try


x = x0 + εx1 + ε2 x2 + · · · , (1.4a)
and for the other try
x−1
x= + x0 + εx1 + · · · . (1.4b)
ε
Substitute (1.4b) into (1.2):

x2−1 2

ε + 2x−1 x0 + ε x0 + 2x−1 x1 + · · ·
x
+ −1
ε + x0 + εx1 + · · ·
− 1 =0
Equate powers

ε−1 : x2−1 + x−1 =0 ; x−1 = 0 , −1


0
ε : (2x−1 + 1) x0 − 1 =0 ; x0 = 1 , −1
ε : x20 + 2x−1 x1 + x1 =0 ; x1 = −1 , 1
↑ ↑
(1.3a) (1.3b)

1.2.3 Rescaling before expansion

How do you decide on the expansion if you do not know the solution?
Seek rescaling[s] to convert the singular equation into a regular equation. Try

x = δ(ε) X
need to choose suitable δ % - strictly order ‘unity’; say X = ord(1).

(1.2) becomes
εδ 2 X 2 + δX − 1 = 0 .
Consider the possibilities for different choices of δ (|ε|  1):

δ  1: small + small − 1 = 0 >


δ = 1: small + X − 1 = 0 regular root
1  δ  1ε : LHS
δ = small + X + small = 0 >
(since X = ord(1))
δ = 1ε : LHS
δ = X2 + X + small = 0 singular root
δ  1ε : LHS
εδ 2 = X2 + small + small = 0 >

The distinguished choices are therefore:

δ=1: εX 2 + X − 1 = 0 ; X = 1 + εX1 + ε2 X2 + . . .
δ = 1ε : X 2 + X − ε = 0 ; X = −1 + εX1 + ε2 X2 + . . .
01/01

Mathematical Tripos: Part III PM 3 © [email protected], Michaelmas 2022


1.3 Non Integral Powers

Inter alia, double roots can cause problems. Consider, with ε > 0,

(1 − ε) x2 − 2x + 1 = 0 . (1.5)

When ε = 0, there is a double root at x = 1. Try an expansion:

x = 1 + εx1 + ε2 x2 + . . .

then 
1 + 2εx1 + ε2 2x2 + x21 + ···
− ε − ε2 (2x1 )
− 2 − 2εx1 − 2ε2 x2 + ···
+ 1 = 0
and equating powers of ε:
ε0 : 1−2+1 =0
ε1 : 2x1 − 1 − 2x1 =0 >
We need ‘εx1 ’ to be larger.
From the exact solution: 1
1 ± ε2
x= ,
1−ε
1
we see that we should have expanded in powers of ε 2 :
1 3
x = 1 + ε 2 x 12 + εx1 + ε 2 x 32 + · · ·
1
1 + 2ε 2 x 12 + 2εx1 + εx21
2

− ε
1
− 2 − 2ε 2 x 12 − 2εx1
+ 1 = 0
This time on equating powers of ε we see that

ε0 : 1−2+1 =0
1
ε2 : 2x 21 − 2x 12 =0 no information
ε1 : 2x1 + x21 − 1 − 2x1 =0 x 21 = ±1
2

1
We must work to O(ε) to obtain the solution to O(ε 2 ).
From the original equation
(x − 1)2 = εx2 ,
we see that, since the roots are near x = 1 when ε  1, a change in the ordinate by ord(ε) changes the
1
position of the root by ord(ε 2 ).

Mathematical Tripos: Part III PM 4 © [email protected], Michaelmas 2022


In general we must derive (guess) the expansion required, e.g. try

x(ε) = 1 + δ1 (ε) x1 + δ2 (ε) x2 + · · ·


1  δ1  δ2  · · ·
xj = ord(1).

Substitute into (1.5):

1 + 2δ1 x1 + 2δ2 x2 + · · · + δ12 x21 + ··· + 2δ1 δ2 x1 x2 + · · ·


− ε − 2εδ1 x1 + · · ·
− 2 − 2δ1 x1 − 2δ2 x2 + · · ·
+ 1 = 0

The leading order terms are δ12 x21 and −ε.


1
Hence take δ1 = ε 2 
allow x1 to absorb any multiple roots.

Exercise. Show that the choices δ12  ε, or δ12  ε, lead to a >.


Cancelling off these two terms, the leading-order terms become

2δ1 δ2 x1 x2 and − 2εδ1 x1 .

Repeating the argument ⇒ δ2 = ε (and x2 = 1).

1.4 Logarithms

Solve
xe−x = ε . (1.6)

One root is close to x = ε, the other root is between


1
x = ln (xe−x = ε ln 1ε > ε)
ε
and
1
x = 2 ln (xe−x = 2ε2 ln 1ε < ε, for ε small).
ε
Note: doubling x reduces the e−x factor by an order of magnitude.
The expansion method is unclear, so try the iteration scheme. Consider a rearrangement that emphasises
the e−x factor:
x
ex =
ε

Mathematical Tripos: Part III PM 5 © [email protected], Michaelmas 2022


so try
1
xn+1 = log + log xn .
ε
Then
1
x0 = log
ε
1 1
x1 = log + log log
| {z } | {z ε}
ε
L1 L2
x2 = L1 + log (L1 + L2 )
L2 L2 L3
= L1 + L2 + − 22 + 23 + · · ·
L1 2L1 3L1
L2 L3
 
L2
x3 = L1 + log L1 + L2 + − 22 + 23 + · · ·
L1 2L1 3L1

L2 − 1 L2 + L2 1 3
L − 3 L2
= L1 + L2 + + 2 22 + 3 2 3 2 2 + ···
L1 L1 L1

The iterative method can give more than one term per iteration.
Numerical disaster. Percentage errors for the truncated series:

ε L1 L2 L2 /L1 −L22 /2L21 L2 /L21


10−1 36% 12% 2% 4% 0.03%
10−3 24% 3% 0.02% 0.04% 0.04%
10−5 19% 1% 0.04% 0.1% 0.001%
| {z }
Do not separate terms
like −L22 /2L21 & L2 /L21 .
A very small ε is needed before this is tolerably accurate.

Check convergence.

xn+1 = g(xn )
1
g(x) = log + log x
ε
0 1
g (x) =
x
0 ∗ 1
g (x ) ≈
log 1ε
↑need ε very small for |g 0 |  1.
01/19
01/20
(long)

Mathematical Tripos: Part III PM 6 © [email protected], Michaelmas 2022


2 Asymptotic Approximations

2.1 Convergence and Asymptoticness


P∞
An expansion n=0 fn (z) converges for a fixed z if, given ε > 0, ∃ N (z, ε) s.t.
m
X
fn (z) < ε ∀ `, m > N .
`

Convergent series can be useful analytically, but hopeless in practice. For instance, consider
Z z
2 2
erf(z) = √ e−t dt .
π 0
We know that n

−t2
X −t2
e =
0
n!
is analytic in the entire complex plane. Hence we have uniform convergence on any bounded part of the
plane ⇒ we can integrate term by term:
P∞ n 2n+1
erf(z) = √2π 0 (−) z
(2n+1)n! .
↓ also has ∞ radius of convergence

To obtain an accuracy of 10−5 we need


8 terms up to z =1
16 terms up to z =2
31 terms up to z =3
75 terms up to z =5

However, intermediate terms can be large ⇒ problems due to round-off error on computers.
An alternative for large z is to proceed as follows. First rewrite the integral:
Z ∞
2 2
erf(z) = 1 − √ e−t dt .
π z
Then repeatedly integrate by parts:
Z ∞ Z ∞   
2 1 2

e−t dt = − d e−t
z z 2t
−z 2 Z ∞
e 1 −t2
= − 2
e dt
2z z 2t
.
.
.
.
.
! 2
1 1.3 1.3.5 e−z
= 1− 2 + 2 − 3 + R5
2z 2
(2z ) (2z 2 ) 2z

where 2
∞ Z ∞
105 e−t
Z
105  −t2 
R5 = dt = d −e
z 16 t8 z 32 t9
105
Z ∞   105 e−z2
−t2
6 d −e = .
32z 9 z 32 z 9

Mathematical Tripos: Part III PM 7 © [email protected], Michaelmas 2022


The series in z −1 is divergent (due to the odd factorial in the numerator), but the truncated series is
useful, e.g. 10−5 accuracy with 3 terms for z = 2.5
2 terms for z = 3.
“First term is essentially the answer, while subsequent terms are minor corrections.”
Problem: What if the leading term is not sufficiently accurate (e.g. in reality ε is not sufficiently small)?
Adding a few extra terms may help, but there is a limit to the number of useful extra terms if the series
diverges as N → ∞ at fixed ε. It is not sensible to include extra terms once they stop decreasing in
magnitude. By suitable truncation, one can obtain exponential accuracy (see §3.1 and the first example
01/04 sheet).

2.2 Definitions
PN
The expansion 0 fn (ε) is an asymptotic approximation of f (ε) as ε → 0, if ∀ m 6 N ,
Pm
0 fn (ε) − f (ε)
→0 as ε → 0
fm (ε)

i.e. the remainder is less than the last included term.


If we can let N → ∞ (in principle) then we have an asymptotic expansion.
If fn = an εn , then we have an asymptotic power series; however we frequently need more general expan-
−1
sions involving terms like εα , ln 1ε , etc. We write these as

N
X
an δn (ε) (2.1)
n=0

where the δn form an asymptotic sequence:


δn+1
→0 as ε → 0 .
δn
Note that sometimes we need to restrict to one sector of the complex ε plane to keep the δn single valued.
Often ε is real and positive. A useful set of asymptotic functions are then Hardy’s logarithm–exponential
functions obtained by a finite number of +, −, ∗, /, exp & log operations, with all intermediate quantities
real.
This class has the property that it can be ordered, i.e. either f (ε) = o (g(ε)), or g(ε) = o (f (ε)) or
02/01 f (ε) = ord (g(ε)).

2.3 Uniqueness and Manipulation

If f can be expanded asymptotically for a given asymptotic sequence, then the expansion is unique. For
if the expansion exists it has the form
X
f (ε) ∼ an δn (ε) ,
n

then by construction

f (ε)
a0 = lim
ε→0 δ0 (ε)
( Pn−1 )
f (ε) − 0 am δm
an = lim .
ε→0 δn

Mathematical Tripos: Part III PM 8 © [email protected], Michaelmas 2022


However , a single function can have different asymptotic expansions for different sequences:
1 2
tan(ε) ∼ ε + ε3 + ε5 + · · ·
3 15
1 3 3 5
∼ sin ε + (sin ε) + (sin ε) + · · ·
2 8
r r !5
2 31 2
∼ ε cosh ε+ ε cosh ε + ··· .
3 270 3
Part of the ‘art’ of obtaining an effective asymptotic solution is choosing the most appropriate asymptotic
sequence.
Worse: two functions can have the same asymptotic expansion:

X εn
exp ε ∼ as ε → 0
0
n!

εn
 
1 X
exp ε + exp − ∼ as ε & 0 .
ε 0
n!
2
Exercise. Does f = x2 + e−x (1−sin x)
have an asymptotic expansion as x → ∞?

• Asymptotic expansions can be added, multiplied and divided to produce asymptotic expansions for
the sum, product and quotient (if necessary one may need to enlarge the asymptotic sequence).
• If appropriate, one can try to substitute an asymptotic expansion into another – but care is needed,
e.g. if
2 1
f (z) = ez , z(ε) = + ε
ε
then
 
1
f (z(ε)) = exp 2 + 2 + ε2
ε
ε4
 
2
∼ e1/ε e2 1 + ε2 + + ··· ,
2
but if we just work to leading order
1
z ∼
ε
2
f (z) ∼
6 e1/ε
↑missing e2

The leading-order approximation in z is inadequate for the leading-order approximation in f (z).


• Integration w.r.t. ε of asymptotic expansions is allowed term-by-term producing the correct result.
• Differentiation is not allowed in principle because O and o estimates do not survive differentiation.
For instance:
(a)
2
f = eix = O(1) as x → ∞
df 2
= 2ixeix = O(x) as x → ∞
dx
(b)
2
 2

f 1 + e−1/x sin e1/x ∼ 1 + · · ·
= as x → 0
df 2  2
 2 2
 2

= − 3 cos e1/x + 3 e−1/x sin e1/x
dx | x {z } x
No asymptotic expansion as x → 0.

Mathematical Tripos: Part III PM 9 © [email protected], Michaelmas 2022


(c)
f = t2 + t sin t ∼ t2 , f 0 = (2 + cos t)t + sin t 6∼ 2t as t → ∞.
01/03
However:
PN
(i) If f 0 (x) exists and is integrable, and f (x) ∼ n=0 an xn as x → 0, then

X
0
f ∼ nan xn−1 as x → 0.
n=1

(ii) If f (z) is analytic in θ1 6 arg z 6 θ2 , 0 < |z| < R and



X
f ∼ an z n as z → 0 (θ1 6 arg z 6 θ2 )
n=0
then

X
f0 ∼ nan z n−1 as z → 0 (θ1 6 arg z 6 θ2 ).
n=1

(iii) There are lots more special cases. For instance, consider asymptotic expansions of solutions to
differential equations.
Suppose that y is the solution to
y 00 + qy = 0 (2.2)
where q has an asymptotic expansion as x → 0.
Assume y has an asymptotic expansion as x → 0;
then from (2.2) y 00 has an asymptotic expansion (multiplication OK)
thus y 0 has an asymptotic expansion (integration OK)
thus y has an asymptotic expansion (integration OK) .
Hence if y has an asymptotic expansion, the equation ensures that its differentials have asymp-
totic expansions (the proof that y has an asymptotic expansion in the first place is often tricky).

2.4 Parametric Expansions

For functions of two (or more) variables, e.g. f (x, ε) (as might arise in solutions to pdes, etc.), we make
the obvious generalisation of (2.1) to allow the an to be functions of x:
N
X
f (x, ε) ∼ an (x)δn (ε) as ε → 0. (2.3)
n=0

If the approximation is asymptotic as ε → 0 for each x, then it is called a Poincaré, or classical, asymptotic
approximation.
The above pointwise asymptoticness may not be uniform in x, e.g. it may require ε < x (restrictive as
x → 0). Such problems sometimes need a further extension:
P
f (x, ε) ∼ n an (x, ε)δn (ε) (2.4)
e.g. an (x, ε) = bn xε .


Uniqueness extends to (2.3), but not to (2.4), etc.

Mathematical Tripos: Part III PM 10 © [email protected], Michaelmas 2022


3 Integral Methods

3.1 Elementary Examples

Example 1. Rewrite an integral so that we can use a Taylor series. For instance:
Z ∞
4
I= e−t dt as x→0.
x

Then
Z ∞ Z x
−t4 4
I = e dt − e−t dt
0 0

xX
(−t4 )n
Z
= Γ (5/4) − dt
0 n=0
n!

X (−)n x4n+1
= Γ (5/4) − .
n=0
(4n + 1)n!

Example 2. Use a Taylor series even when we cannot! For instance:


Z ∞ −t
e
I= dt as x→∞.
0 x +t
Then
−1
1 ∞ −t
Z 
t
I = e 1+ dt
x 0 x
1 ∞ −t t2 t3
Z  
t
= e 1− + 2 − 3 + . . . dt
x 0 x x x
1

1! 2! 3!
 ↑dubious, since invalid for t > x.
= 1− + 2 − 3 + ... .
x x x x
↑Divergent
Estimate the remainder using
m−1 m
t2 1 − − xt

t t
1 − + 2 + ... + − = .
x x x 1 + xt
Then
m−1 Z n
1 X ∞

t
I = − e−t dt + Rm (x) ,
x n=0 0 x

where ∞
(−t)m e−t
Z
1
Rm (x) =  dt ,
xm+1 0 1 + xt

and Z ∞
1 m!
|Rm (x)| 6 tm e−t dt = .
|xm+1 | 0 xm+1
Hence   
1 1 2! m! (m + 1)!
I = 1 − + 2 + ... + +O
02/22 x x x (−x)m xm+1
Truncate the series when the remainder has the smallest bound, i.e. stop one before smallest term when
x ∼ m. The error when we truncate is then (after using Stirling’s formula)
03/01 x! (2π)1/2 e−x
02/04 |Rm | ∼ ∼ ,
xx+1 x1/2
02/19
02/20 i.e. the error is exponentially small for large x (so the ‘dubious’ step wasn’t too bad).
Mathematical Tripos: Part III PM 11 © [email protected], Michaelmas 2022
3.2 Integration by Parts
R
Integrals of the form f (t)g(t) dt can be integrated by parts and may so yield asymptotic expansions;
one automatically obtains the remainder.
Example 1. See §2.1 for erf(z).
Example 2. Consider the exponential integral
Z ∞ −t Z ∞ −t
e e dt
E1 (x) ≡ dt = e−x .
x t 0 x+t

Then integrating by parts


 −t ∞ Z ∞ −t
e e
E1 (x) = − − dt
t x x t2
e−x
 
1 2! m!
= 1 − + 2 + ... + + Rm (x) ,
x x x (−x)m

where ∞
e−t
Z
m+1
Rm (x) = (−) (m + 1)! dt .
x tm+2
Hence
(m + 1)!e−x
|Rm (x)| 6 ,
xm+2
and as in §3.1, the remainder is asymptotically smaller than the retained terms on truncation with m ∼ x.
Example 3. The sine and cosine integrals.
π  Z ∞ eit dt
−Ci(x) − i si(x) = −Ci(x) + i − Si(x) ≡
2 x t 
eix

1 2! m!
= − 1+ + + ... + + Rm (x) ,
ix ix (ix)2 (ix)m

where ∞
eit dt
Z
Rm (x) = i(m + 1)! .
x (it)m+2
If we proceed to estimate the remainder as before
Z ∞
dt m!
|Rm | 6 (m + 1)! m+2
= m+1 = O(last term) ,
x t x

so this does not demonstrate asymptoticness. We seek an improved error estimate by integrating by parts:
∞ Z ∞ it
(m + 1)! eit

e dt
Rm = + i(m + 2)! ,
(it)m+2 x x (it)m+3

and then we can demonstrate that the remainder is asymptotically


 smaller
 than the retained terms:
(m + 1)! (m + 1)! 1
|Rm | 6 + =O .
xm+2 xm+2 xm+2

3.3 Integrals with Algebraic Parameter Dependence

Example 1. Consider the integralZ


1 √ √ 
1
I(ε) = 1 dx = 2 1+ε− ε .
0 (x + ε) 2

Mathematical Tripos: Part III PM 12 © [email protected], Michaelmas 2022


The leading-order (ε → 0) estimate is just
Z 1
1
I(0) = 1 dx = 2 .
0 x2
| {z }
global contribution from
all of integration range
−1/2
In order to obtain an improved estimate one cannot expand (1 + ε/x) throughout the range as
−1/2
(1 + ε/x) = 1 − ε/2x + . . . ,
1
since for 0 6 x  ε the  expansion is not convergent. Further, we note that when x =−1/2 ord(ε),
 the
−1/2
integrand is ord ε ⇒ contribution to the integral for this range of x will be ord ε · ε , i.e.

ord ε1/2 .
To account for this correction, one could subtract the leading-order estimate exactly; then
Z 1 
1 1
I =2+ 1 − 1 dx
0 (x + ε) 2 x2
| {z }
x = ord(ε), integrand = ord ε−1/2 , contribution to R = ord ε1/2
 R 

x = ord(1), integrand = ord (ε) , contribution to = ord (ε)


The major contribution is from near x = 0 so, as in §0, try the scaling x = εξ (ξ = ord(1)); then
Z 1ε ≈∞ " #
1 1 1
I = 2+ε 1 − dξ
2
1
0 (1 + ξ) 2 ξ2
1
≈ 2 − 2ε 2
Further corrections can be obtained by now subtracting out this contribution, but this method is tedious
02/03 and difficult! There must be a better way.

Alternative 1: Solve a differential equation. Let


Z x
1
J(x) = 1 dq .
0 (q + ε) 2

Then we need to find J(1). This can be done by solving the differential equation
dJ 1
= 1
dx (x + ε) 2
subject to the initial condition J(0) = 0. We will discover how to do this in §5.

Alternative 2: Divide & Conquer. In this method we split the range of integration. Split [ 0, 1 ] at
x = δ where ε  δ  1, and then use Taylor series when we can use Taylor series:
Z δ Z 1
dx dx
I = 1 + 1
0 (x + ε) 2 δ (x + ε) 2
Z δ/ε Z 1
3ε2
 
1 dξ 1 ε
= ε 2
1 + 1 1− + + . . . dx
0 (1 + ξ) 2 δ x2 2x 8x2
  12 !
    2 
1 δ  1 ε ε 2
= 2ε 2 +1 − 1 + 2 − 2δ 2 + ε − 1 + O 3 , ε
ε δ2 δ2
 2 
1 ε 1 1 ε ε 2
= 2δ + 1 − 2ε + 2 − 2δ + ε − 1 + O
2 2 2
3 , ε
δ2 δ2 δ2
 2 
1 ε 2
= 2 − 2ε 2 + ε + O 3 , ε .
δ2
1 And in this case there is no exponentially small multiplier.

Mathematical Tripos: Part III PM 13 © [email protected], Michaelmas 2022


Remarks.
• Since δ is arbitrary, all terms containing a δ must cancel.
2
• The error term is definitely small if ε 3  δ  1.

• To organise the algebra it is sometimes helpful to tie δ to ε, e.g.


3
δ = Kε 4 ,

and then the answer must be independent of K.

Example 2. Suppose that we wish to estimate the integral


π
sin2 θ
Z 2
I(m, ε) = 2 dθ 0<m<∞,
0 (1 − m2 cos2 θ) sin2 θ + ε2

for 0 < ε  1. It turns out that there are three cases to consider: 0 < m < 1; |m − 1|  1; m > 1.
(a) 0<m<1
R
θ integrand contribution to
ord(1) ord(1) ord(1)
ord(ε) ord(1) ord(ε)
↑ 2
1 − m2 cos2 θ sin2 θ ∼ ε2

We will find the solution correct to O ε2 ; to this end let 0 < ε  δ  1. Then
δ π
sin2 (εu) sin2 θ
Z ε
Z 2
I = ε 2 du + 2 dθ
0 (1 − m2 cos2 (εu)) sin2 (εu) + ε2 δ (1 − m2 cos2 θ) sin2 θ + ε2
Z δε Z π2
u2 du 1 2

= ε
2 2 2 +
2 2 2 dθ + O ε
0 (1 − m ) u + 1 δ (1 − m cos θ)
"  # δ
(1 − m2 )u − tan−1 (1 − m2 )u
Z δ
(2 − m2 )π dθ 2

= ε + 3 − 2 +O ε
(1 − m2 )3 2
4(1 − m ) 2 2
0 (1 − m cos θ)
2
0
1
via a tan θ = t = (1 − m2 ) 2 tan ψ substitution

(2 − m2 )π 2
 
δ επ δ 2 2 ε
= − + 3 − + O ε , δ ,
(1 − m2 )2 2(1 − m2 )3 4(1 − m2 ) 2 (1 − m2 )2 δ
1
 π
since arctan ∆ ∼ 2 −∆

(2 − m2 )π επ
I = 3 − +... (3.1)
4(1 − m2 ) 2 2(1 − m2 )3
| {z } | {z }
global local

Note that this is a non-uniform


approximation as m → 1. There
is a loss of ordering of the series
solution when
1 ε
04/01 3 ∼
03/22 (1 − m2 ) 2 (1 − m2 )3

i.e. when
2 1
(1 − m2 ) ∼ ε 3 and I ∼ .
ε

Mathematical Tripos: Part III PM 14 © [email protected], Michaelmas 2022


(b) This suggests that when |m − 1|  1, we should introduce a scaled parameter: viz.
2
m = 1 − ε3 λ . (3.2a)

First let us examine the local contribution from near θ = 0 (since on the basis of the estimates above it
will be leading order). Put θ = εβ u, then
2  2
2
1 − m2 cos2 θ sin2 θ + ε2 = ε2β u2 + 2ε 3 λ ε2β u2 + ε2 + . . .

All leading order terms balance if β = 13 , i.e.


1
θ = ε3 u . (3.2b)

This is referred to as a distinguished scaling.


As a first guess, let us assume that this is the scaling in θ to consider. Then
1
 2 
θ = ord(ε 3 ); integrand = ord ε 3 /ε2 ; contribution to = ord (1/ε)
R
R
θ = ord(1) ; integrand = ord (1) ; contribution to = ord (1)
1
The ‘local’ contribution dominates. Hence introduce ε 3  δ  1, and split the integral:
Z δ Z π2
I = . . . dθ + . . . dθ
0 δ
1
δε− 3
u2 du
Z
1 1
∼ 2 ∼ f (λ)
ε 0 (u2 + 2λ) u2 +1 ε
where ∞
u2 du
Z
f (λ) = 2 .
0 (u2 + 2λ) u2 + 1
03/19
Hence for a given λ (or equivalently m), we have a leading order asymptotic estimate. However, we should
check that as λ → ∞, we obtain the same estimate as in (a). In particular, when λ  1
 
u = ord(1) , integrand = ord 1/λ2 , contribution to = ord 1/λ2 
R
1  3
u = ord(λ 2 ), integrand = ord 1/λ2 , contribution to = ord 1/λ 2
R

1
This suggests that the largest contribution will come from where v = λ− 2 u = ord(1). Hence estimate f
in this range:
Z ∞
1 dv π
f (λ) ∼ 3 2 = 3 ,
λ 2 0 (2 + v ) 2
4 (2λ) 2
and
π π
I ∼ 3 ∼ 3 (3.3)
4ε (2λ) 2
4 (1 − m2 ) 2
↓agrees with (3.1) for m ≈ 1
03/20
We might also be interested in the other limit, i.e. λ → −∞. This estimate is a little more tricky, since
u2 + 2λ can now have a zero (when |λ|  1, this term normally dominates the denominator). First we
test for a significant contribution from near this zero by introducing a scaled coordinate, say w:
1
u = (−2λ) 2 + (−λ)γ w .

Then
2  1
2
1 + u2 u2 + 2λ ∼ 1 + (−2λ) 2 (−2λ) 2 (−λ)γ w + . . .
∼ 1 + 16λ2 (−λ)2γ w2 + . . . .

Mathematical Tripos: Part III PM 15 © [email protected], Michaelmas 2022


There is a distinguished scaling (that ensures the scaled integral is convergent) for the choice γ = −1; in
that case the contribution to the integral from near the zero can be estimated as follows:
Z
1
u = (−2λ) + ord (1/ |λ|) ; integrand = ord (|λ| /1) ; contribution to
2
= ord(1) .

This is a much larger contribution than we found in (3.3) for λ  1.


In order to estimate the contribution set
1 w
u = (−2λ) 2 + (3.4)
(−λ)
then Z ∞
(−2λ + . . . ) dw π
f (λ) = ∼ .
−2 1/2
(−λ) 3/2
≈−∞ (−λ) [1 + 16w2 . . . ] 2

Hence as λ → −∞, the value of the integral tends to a large constant, viz.
π
I∼ . (3.5)

(c) Finally consider the case when m > 1.


The limit λ → −∞ (i.e. 0 < (m − 1)  1) suggests that the main contribution will be local, and will
come from the region close to the point where
m2 cos2 θ = 1 .
Define  
1  π
θm = cos−1 0 < θm < .
m 2
In order to deduce the coordinate scaling that is appropriate close to θm , we note from (3.2a) and (3.4)
that the ‘inner’ scaling for 0 < m − 1  1 can be written in the form
 
1 1 1 w 1 εw 2εw
θ=ε u=ε
3 3 (−2λ) +
2
= (2(m − 1)) 2 + ∼ θm + 2 .
(−λ) (m − 1) θm
This suggests that for (m − 1) = O(1) we might guess the scaling
θ = θm + εt ,

in which case 2
1 − m2 cos2 θ sin2 θ + ε2 ∼ 4ε2 m2 sin4 θm t2 + . . . + ε2

and
Z 1
ε ( π2 −θm )≈+∞ ε sin2 (θm + εt) dt
I ∼
4m2 t2 sin4 θm + 1 + . . .

− 1ε θm ≈−∞ ε2
1 π
· ∼ (3.6)
ε 2m
05/01 We note that (3.6) agrees with (3.5) in the limit m → 1.
03/04

3.4 Logarithms

As an illustrative example, consider integrals of the form



−α
Z a ord (ε ) x = ord(ε)

f (x, ε) dx with f (x, ε) = x−α εx1
0 
ord(1) x = ord(1).

e.g.
1 1
f= α
.
(x + ε) 1 + x
There are three possibilities for the leading-order contribution depending on the value of α:

Mathematical Tripos: Part III PM 16 © [email protected], Michaelmas 2022


(i) α < 1. Dominant contribution from x = ord(1), e.g. with α = 21 :
Z ∞ Z ∞
dx dx
1 ∼ 1 .
0 (x + ε) (1 + x)
2 0 x (1 + x)
2

(ii) α > 1. Dominant contribution from x = ord(ε), e.g. with α = 23 :


Z ∞ Z ∞
dx dξ
3 ∼ 1 3 (x = εξ) .
0 (x + ε) 2 (1 + x) 0 ε 2 (1 + ξ) 2

(iii) α = 1. Dominant contribution not from x = ord(ε) or x = ord(1) but from the interme-
diate region between. Easiest to see by using divide and conquer, and splitting
the integration region, e.g. with ε  δ  1:
Z ∞ Z δε Z ∞
dx dξ dx
= +
0 (x + ε)(1 + x) 0 (1 + ξ)(1 + εξ) δ (x + ε)(1 + x)
h i δε
= log(1 + ξ) − ε [ξ − log(1 + ξ)] + . . .
    0  ∞
x ε x+1
+ log + − ε log + ...
x+1 x x δ
ε
∼ (1 + ε) (log δ − log ε) + + . . .
δ
ε
− log δ − − ε log δ + . . .
 δ
1
∼ (1 + ε) log + ... .
ε
03/16 ↑
‘fortunate’ ord(1) cancellation
3.5 Integrals with Exponential Power Dependence
General case: limit as λ → ∞ of integrals of type
Z b
I(λ) = eλφ(z;λ) f (z; λ) dz .
a ↑
paths in C
‘weak’ algebraic
dependence on λ

Initially assume a, b, λ, φ, f , and the path of the integral are real. Then we estimate the integral by
assuming that the major contribution comes from close to the point where φ is largest (and the integrand
is exponentially largest).
There are different cases to consider depending on whether the maximum of φ is at an end point (Watson’s
Lemma), or in the interior of the integration range (Laplace’s Method).

3.5.1 Watson’s Lemma

In this section we assume the maximum is at an end point, say wlog z = a. We also assume that φ is
monotonic decreasing function of z (so φ0 < 0). Write
f (z; λ) λφ(a;λ)
x = φ(a; λ) − φ(z; λ) , F (x; λ) = − e , c = φ(a; λ) − φ(b; λ) > 0 ,
φ0 (z; λ)
then Z c
I(λ) = e−λx F (x; λ) dx .
0
Assume that F is analytic in some sector S of the complex plane, and that as x → 0,
N
X
F (x; λ) ∼ ak xαk − 1 < α0 < α1 < . . . . (3.7a)
k=0

Mathematical Tripos: Part III PM 17 © [email protected], Michaelmas 2022


Also assume that c is in S, that F is bounded in S, and that, for simplicity, F (x; λ) ≡ F (x); the changes
when this is not the case are straightforward, but somewhat messy, since the ak are now functions of λ
and themselves need to be expanded in λ when λ  1. Then
Z c N
−λx
X Γ(αk + 1)
e F (x) dx ∼ ak . (3.7b)
0 λαk +1
k=0

Unlectured Proof. For a given ε > 0, ∃ δ(ε) s.t.


N
X
F (x) − ak xαk < ε |xαN | ∀ x in S with |x| < δ. (3.8)
k=0

Split the range of the integral at λ−1  δ(ε)  1; then


Z δ Z c
I= e−λx F (x) dx + e−λx F (x) dx = I1 + I2 .
0 δ
First note that I2 is exponentially small as λ → ∞:
Z c
e−λδ
I2 = e−λx F (x) dx < Fmax , where Fmax = max|F (x)| .
δ λ x∈S

Further, consider the difference between I1 and the asymptotic series (3.7b), then from (3.8)
N
X Z δ N
X Z δ Z ∞ !
I1 − ak λ−αk −1 Γ(αk + 1) = e−λx F (x) dx − ak dx + dx xαk e−λx
k=0 0 k=0 0 δ
Z δ N
X Z ∞
αN
6 ε e−λx |x| dx + ak xαk e−λx dx
0 k=0 δ
Z N
∞X
Γ(αN + 1)
6ε + e−(λ−1)δ |ak xαk | e−x dx .
λαN +1 δ k=0

Hence as λ → ∞,  
ε
error = O , exp .
|λαN +1 |
This proves the result since ε can be arbitrarily small (and λ arbitrarily large).

This proof can be extended to the cases when


• |F (x)| < Kemx for K, m > 0;
• λ is complex (by deforming the integration contour so that xλ is real).
03/03
04/19 How to obtain a practical answer
04/22
The introduction of the coordinate x is not always simple. If all that is required is a few leading-order
terms, then it is possible to proceed as follows (for φ0 (a) < 0). Assume, for simplicity, f (z; λ) ≡ f (z) and
φ(z; λ) ≡ φ(z), where the more general case generally leads to more Taylor expansions. Then, for λ  1,
expand close to z = a:
Z b
I = eλφ(z) f (z) dz
a
Z λβ (b−a)     
t t dt t
= f a+ β exp λφ a + β β
, where z = a + β for some β > 0
0 λ λ λ λ
Z λβ (b−a) 
t2
  
t t dt
= f (a) + β f 0 (a) + . . . exp λφ(a) + β−1 φ0 (a) + 2β−1 φ00 (a) + . . . β
0 λ λ 2λ λ
Z λ(b−a)≈∞
t2 00
   
t 0 dt
= f (a) + f 0 (a) + . . . eλφ(a) etφ (a) 1 + φ (a) + . . . , choosing β = 1
0 λ 2λ λ
 0
eλφ(a) φ00 (a)f (a)
   
f (a) 1 f (a) 1
≈ − 0 + 0 2
− 0 3
+ O + exp .
λ φ (a) λ [φ (a)] [φ (a)] λ2

Mathematical Tripos: Part III PM 18 © [email protected], Michaelmas 2022


Summary. This approach works since the major asymptotic contribution comes from near the maximum
of φ, courtesy of the strong exponential decay of the integrand; the choice of β is made to pick out this
04/20 contribution.

3.5.2 Intermediate maximum (Laplace’s method)

Again consider
Z b
I= eλφ(x) f (x) dx ,
a

where the generalisation to φ ≡ φ(x; λ) and


f ≡ f (x; λ), for algebraic λ dependence, is
more messy than conceptual. Suppose that
(a) max φ = φ(c);
x∈[ a,b ]

(b) a < c < b, φ0 (c) = 0, φ00 (c) < 0;


Similar to above, assume that the major con-
tribution to the integral comes from when the
integrand is close to maximal, and introduce a
scaled co-ordinate of the form
t
x=c+ .
λβ
Then expanding λφ close to the maximum at x = c we obtain

λφ(x) ∼ λφ(c) + (x − c)λφ0 (c) + 12 (x − c)2 λφ00 (c) + 61 (x − c)3 λφ000 (c) + . . .
∼ λφ(c) + 12 t2 λ1−2β φ00 (c) + 16 t3 λ1−3β φ000 (c) + . . .
1
The choice β = 2 ensures that the decay of the exponential occurs over an ord(1) scaled distance t.
It follows that
1
Z (b−c)λ 2  
  
1 t t
I = 1 1
f c+
1 exp λφ c + 1 dt
λ2 (a−c)λ 2 λ2 λ2
Z λ 12 (b−c) 
t2 00 t3 000
  
1 t 0
= 1 f (c) + 1 f (c) + . . . exp λφ(c) + φ (c) + 1 φ (c) + . . . dt
λ2 1
λ 2 (a−c) λ2 2 6λ 2
Z ∞
1 1 2 00
  1 
≈ 1 f (c)eλφ(c) e 2 t φ (c) 1 + O λ− 2 dt + exp
λ2 −∞
  12

≈ f (c)eλφ(c) + . . . .
−λφ00 (c)
06/01
Example: Stirling’s Formula. Consider
Z ∞ ∞
e−x λ log x
Z
−x λ−1
Γ(λ) = e x dx = e dx as λ → ∞
0 0 x
−x
e
Then f (x) = , φ(x) = log x
x
max φ(x) = ∞ for 0 < x < ∞.

The method seems invalid! Instead, use the ‘generalisation’


Z ∞
1 
Γ(λ) = exp −x + λ log x dx
0 x | {z }
φ(x) = log x − x/λ
φ0 (x) = 1/x − 1/λ , φ0 = 0 at x = λ .

Mathematical Tripos: Part III PM 19 © [email protected], Michaelmas 2022


Let x = λs.
Z ∞
ds 
Γ(λ) = exp −λs + λ log λ + λ log s
0 s
Z ∞
ds
λλ

= exp −λ (s − log s)
0 s
f (s) = 1s , φ(s) = log s − s
φ0 = 1s − 1 , c=1
φ00 = − s12 , φ00 (c) = −1
  12

04/16 Γ(λ) ∼ λλ e−λ + . . . .
λ

3.5.3 Stationary phase

Let φ(x) = iψ(x), with ψ(x) real. Consider


Z b
I(x) = f (x)eiλψ(x) dx
a Generalised Fourier Integral
Rb
Riemann-Lebesgue Lemma. If a
|f (x)| dx exists, then
Z b
f (x)eiλx dx → 0 as λ → ∞.
a

Generalised Riemann-Lebesgue Lemma. If


(a) |f (x)| is integrable;
(b) ψ(x) is continuously differentiable (ψ 0 (x) = 0 is OK at isolated points);
(c) ψ(x) is not constant on any sub-interval,
then
I(x) → 0 as λ → ∞.

ψ 0 6= 0 on [ a, b ]. In this case integrate by parts (note problems if ψ 0 = 0):


 b Z b  0
f iλψ f
I(x) = e − eiλψ dx
iλψ 0 a a iλψ 0
Z  0
i b f
 
i f (a) iλψ(a) f (b) iλψ(b)
= e − 0 e + eiλψ dx .
λ ψ 0 (a) ψ (b) λ a ψ0
| {z }
J
J satisfies the conditions for the generalised Riemann-Lebesgue Lemma if f (x)/ψ 0 (x) is smooth;
hence to leading order  
i f (a) iλψ(a) f (b) iλψ(b)
I(x) ∼ e − e .
λ ψ 0 (a) ψ 0 (b)
04/04 Remark. If we can continue to integrate by parts, we can obtain higher-order terms.
ψ 0 = 0 on [ a, b ]. Assume a unique zero at x = c:
ψ 0 (c) = 0, ψ 00 (c) 6= 0 .

Since cancellation is much reduced near


x = c, try a local scaling
y
x=c+ .
λβ

Mathematical Tripos: Part III PM 20 © [email protected], Michaelmas 2022


1 1

0.5 0.5

cos(β 2 )
sin(β 2 )
0 0

-0.5 -0.5

-1 -1
-8 -6 -4 -2 0 2 4 6 8 -8 -6 -4 -2 0 2 4 6 8
β β

Z (b−c)λβ
 y    y  dy
I(x) = f c + β exp iλψ c + β
(a−c)λβ λ λ λβ
Z (b−c)λβ ≈∞ h i 00
y 2 1−2β
+ 6i ψ 000 (c)y 3 λ1−3β + . . . dy
i h i
iλψ(c)+ 2 ψ (c)y λ
= f (c) + β f 0 (c) + . . . e
(a−c)λβ ≈−∞ λ λβ

1
again choose β = 2 for the distinguished limit

iψ 00 (c)y 2
Z  
f (c)   1 
= 1 eiλψ(c) exp dy 1 + O λ− 2
λ2 −∞ 2
  12
2
substitute y = t , s = sgn [ψ 00 (c)]
|ψ 00 (c)|
  21 Z ∞
2 iλψ(c) 2
∼ 00
f (c)e eist dt
λ |ψ (c)| −∞
| {z }
1
π 2 eisπ/4 by contour deformation
  21
2π  π
∼ 00
f (c) exp iλψ(c) + i sgn [ψ 00 (c)] .
λ |ψ (c)| 4
↑ leading order; next order approximation can come from end points, etc.

One can tighten up the ‘proof’ by changing variables at the start:

ψ(x) = ψ(c) + 12 ψ 00 (c)Y 2 .


05/19

3.5.4 Steepest descents

This is a method for estimating integrals (for large |λ|) of the form
Z
I= f (z)eλφ(z) dz ,
C

where C is an integration path in the complex z-plane, f and φ are analytic functions of z. In principle
λ may be complex, but wlog φ can then be redefined so that we can take λ to be real. There is a
straightforward extension to f (z) ≡ f (z; λ) and φ(z) ≡ φ(z; λ).

(a) The idea is to deform the contour and then use Watson’s Lemma or Laplace’s Method.
First some notation. Let φ = u + iv.
Then (i) ux = vy , uy = −vx Cauchy Riemann
(ii) ∇2 u = 0 = ∇2 v.
05/22

Mathematical Tripos: Part III PM 21 © [email protected], Michaelmas 2022


Figure 3.1: Plot of the tomography of the surface u = Re φ(z; λ) near the saddle point z0 for a typical
function φ(z; λ). The heavy solid curves follow the centres of the ridges and valleys from the saddle point (i.e.
lines of constant v), and the dashed curves follow level contours, u = u(x0 , y0 ) = constant. The curve AA0
is the path of steepest descent. Source: Mathematical Methods of Physics, by Jon Mathews and Robert L.
Walker.

(b) From stationary phase we have seen that rapid oscillations can cause cancellation. This makes esti-
mation of the integral difficult and in particular means that the dominant contribution to I may
not come from the part of C where Re (λφ(z; λ)) = λu is largest. We eliminate such oscillations by
choosing an integration path with

Im (φ) = v = constant .

The Cauchy-Riemann equations imply that

∇u · ∇v = 0 . (3.9)

Thus the v = constant contours are k to ∇u. It follows that the v = constant contours are paths of
steepest ascent/descent of u. [Note that we need the steepest descent path to obtain ‘all’ terms of
04/03 the series.]
05/20
(c) The major contribution to the integral I then comes from close to the ‘highest’ point (w.r.t. u) on the
integration path.2 If the ‘highest’ point is at the end of the integration path, then Watson’s Lemma
is most likely to be appropriate (but see below for a case when Laplace’s method is needed), while
if the ‘highest’ point is in the middle of the integration path then Laplace’s Method is likely to be
needed.
(d) A constraint on interior maxima. For the case of an interior ‘highest’ point on the integration path,
i.e. a maximum, we will have at the maximum

ŝ.∇u = 0 ,

where ŝ is a unit vector in the direction of the integration path. Further, since the integration path
is a line of constant v, it is perpendicular to ∇v, i.e.

ŝ.∇v = 0 .

It follows from (3.9), and the fact the integration path is two-dimensional, that at a turning point

|∇u| = 0 .

2 Those of you who already know about the method of steepest descents need to remember this — do not just go for the

turning points!

Mathematical Tripos: Part III PM 22 © [email protected], Michaelmas 2022


Hence we conclude from the Cauchy-Riemann equations, that a maximum on an integration path
that is a steepest descent contour can only occur at points where
φ0 (z) = 0 .
Further, since ∇2 u = 0, from the maximum modulus principle, these points can only be saddles in
the surface u(x, y), i.e.
∆ ≡ uxx uyy − (uxy )2 = −(uxx )2 − (uxy )2 6 0 .
07/01
05/16 Example. Find an asymptotic expansion for
Z 1
2
I= eiλz dz as λ → ∞.
0

The leading-order approximation can be obtained by a stationary phase calculation near z = 0. To obtain
a full expansion try to use steepest descent contours. From above
φ = iz 2 = i(x2 − y 2 ) − 2xy ,
u = −2xy, v = x2 − y 2 .
Hence
steepest contours through z = 0: v = 0, x = ±y, u = ∓2y 2
S.D. contour through z = 0: x = +y, u = −2y 2
z = (1 + i)y, iz 2 = −2y 2
p p
steepest contours through z = 1: v = 1, x=p ± 1 + y2 , u = ∓2y p1 + y 2
S.D. contour through z = 1: x = p 1 + y2 , u = −2y 1p + y2
2
z = 1 + y 2 + iy, iz = i − 2y 1 + y 2

The contribution from C2 vanishes as ymax → ∞; thus


Z Z
2 2
I = eiλz dz + eiλz dz
C1 C3
Z ∞
i ∞ eiλ e−λs ds
Z
−2λy 2
= (1 + i) e dy −
0 2 0 (1 + is) 12
2
∞  substitute iz = i − s
 π  21 iπ ieiλ ∞ (−is) n
Γ n + 1
Z X
= e4 − ds e−λs 1
 2
4λ 2 0 n=0
n! Γ 2
 π  21 iπ ∞ 
iλ X
n+1 1

e −i Γ n+ 2
∼ e4 +  .
4λ 2 n=0 λ Γ 21

Mathematical Tripos: Part III PM 23 © [email protected], Michaelmas 2022


Changing Steepest Descent Contours
It is often necessary to change from one
steepest descent contour comprising part
of the integration path to another steep-
est descent contour. As the above example
illustrates, two such steepest descent con-
tours should be joined in regions of the
complex plane where the real part of the
exponent, i.e. u, is asymptotically smaller
than its maximum value.

The Local Contribution from a Saddle


We have adopted the approach that you choose steepest descent contours, and then look for maxima of u.
If there are maxima in the interior of the path, then we have seen that they occur at a saddle points of
u(x, y).
An alternative view is that you deform the integration path so that u is as small as possible. If there is
an interior maximum of u on the path, then it will occur at a saddle.
Either way we need to evaluate the contribution to the path in the neighbourhood of a saddle, which
wlog we take to be at z = zs . Close to this point
λφ(z) ∼ λφ(zs ) + λ(z − zs )φ0 (zs ) + 12 λ(z − zs )2 φ00 (zs ) + 16 λ(z − zs )3 φ000 (zs ) + . . . .
As in Laplace’s method introduce a rescaling such that λ(z − zs )2 = ord(1):
w
z = zs + 1 ,
λ2
where we have assumed λ > 0. Then
 1
λφ(z) ∼ λφ(zs ) + 12 φ00 (zs )w2 + O λ− 2
Z Z   1  dw
1 00 2
f (z)e λφ(z)
dz = f (zs )eλφ(zs )+ 2 φ (zs )w 1 + O λ− 2 1
C C λ2
  12
−2π 1
∼ f (zs )eλφ(zs ) + ... by using η = − 21 φ00 (zs ) 2 w,
λφ00 (zs )
where we have evaluated the integral using Laplace’s method on the steepest descent path (by a suitable
rotation of the contour C), and the choice of sign of the square root depends both on the rotation and
the direction of traversed along the contour. That there is a dominant local contribution from close to
the saddle is ‘confirmed’ by the fact that the integral is convergent as |w| → ∞.
Note also that while it is not strictly necessary to choose the steepest descent path at the final stage (we
just need a path that goes downhill), the steepest descent path is necessary to obtain ‘all’ terms of the
06/17 series.

3.5.5 The Airy function and Stokes phenomenon

The Airy function is defined as


Z
1 1 3
Ai(λ) = eλz− 3 z dz , (3.10)
2πi C

where C starts from ∞ with arg(z) = −2π/3 and ends at ∞ with arg(z) = +2π/3. Define, consistent
with our earlier notation,
λφ = Φ = λz − 31 z 3 .

Mathematical Tripos: Part III PM 24 © [email protected], Michaelmas 2022


Thus there are saddles at
2 3

and eΦ(zs ) = e± 3 λ .
1 2
zs = ±λ 2 ,
First consider λ → +∞. Then we see from the contours of Re(λz − 31 z 3 ) for arg λ = 0 in figures 3.2
and 3.3, that it is only necessary to pass over the [lower] left-hand saddle in order to traverse the ridge
separating the end points of integration.

arg(λ)=0 arg(λ)=π/3
2 2

1.5 1.5

1 1

0.5 0.5
Im(z)/|λ|1/2

Im(z)/|λ|1/2
0 0

−0.5 −0.5

−1 −1

−1.5 −1.5

−2 −2
−2 −1 0 1 2 −2 −1 0 1 2
Re(z)/|λ|1/2 Re(z)/|λ|1/2

arg(λ)=2π/3 arg(λ)=π
2 2

1.5 1.5

1 1

0.5 0.5
Im(z)/|λ|1/2

Im(z)/|λ|1/2

0 0

−0.5 −0.5

−1 −1

−1.5 −1.5

−2 −2
−2 −1 0 1 2 −2 −1 0 1 2
Re(z)/|λ|1/2 Re(z)/|λ|1/2

Figure 3.2: Contours of Re(3λz − z 3 ) (solid and dotted, where solid/dotted is higher/lower than the oper-
ational saddle), and Im(3λz − z 3 ) (dashed).
Seek a local contribution from near the saddle. Write:
1
z = −λ 2 + iλβ w
3 1
λz − 13 z 3 = − 32 λ 2 − λ 2 +2β w2 + i 13 λ3β w3 .
To apply Laplace’s method choose β = − 41 , then
iw3 w6
Z  
1 3
− 23 λ 2 −w2
Ai(λ) = 1 e e 1 + 3 − 3 + . . . dw
2πλ 4 C λ4 18λ 2
3 
2
e− 3 λ 2

5
∼ 1 1 1 − 3 + . . . . (3.11a)
2π 2 λ 4 48λ 2

Mathematical Tripos: Part III PM 25 © [email protected], Michaelmas 2022


𝑧3
Φ 𝑧 = 𝜆𝑧 − 𝜆 = 𝜆 𝑒 𝑖𝜃
3 A 𝜃=0 B 𝜃 = 0.042
Contours: 𝑣 = Im Φ 𝑧 = 𝑐𝑠𝑡.
𝑣 = Im(Φ 𝑧± )
others
steepest descent path

Saddle points: 𝑧− = −𝜆1/2


𝑧+ = +𝜆1/2

𝑢 = Re Φ 𝑧

C 𝜃 = 0.625 𝜋 D 𝜃 = 2𝜋/3 E 𝜃 = 0.708 𝜋

F 𝜃=𝜋

G 𝜃 = 1.292 𝜋 H 𝜃 = 4𝜋/3 I 𝜃 = 1.375 𝜋

Figure 3.3: Contours of Re(3λz − z 3 ) in colour

By brute force higher order terms can be obtained:


3
2 ∞
e− 3 λ 2 X Γ(r + 61 )Γ(r + 65 ) 3
05/04 Ai(λ) ∼ 1 1 Yr , where Yr = and ξ = − 43 λ 2 . (3.11b)
06/19 2π 2 λ 4 r=0
2πξ r Γ(r + 1)

Mathematical Tripos: Part III PM 26 © [email protected], Michaelmas 2022


Consider next complex values of λ, and in particular for what values of arg(λ) result (3.11a) remains valid.
From above we have that the positions of the saddles, zs , rotate anti-clockwise. Further, the saddles swap
dominance at
π 2
arg(λ) = + nπ .
3 3
However, to go from the valley at ∞ e−2πi/3 to the valley at ∞ e2πi/3 it is only necessary to go over the
left-hand saddle up to arg λ = π. Hence we deduce that (3.11a) remains valid for arg λ < π.
For arg λ = π we need to go over both saddles. Hence
3
2
e− 3 λ 2
Ai ∼ 1 1 + c.c.
2π 2 λ 4  
1 2 3 π
∼ 1 sin (−λ) 2 + , (3.11c)
(−λ) 4 π 2
1
3 4

where c.c. stands for complex conjugate. For π < arg λ < 5π/3 we need to go through the other saddle,
but (3.11c) is still an asymptotic approximation; in fact (3.11c) is correct for | arg λ − π| < 2π/3.
This is an example of Stokes phenomenon, since (3.11a) and (3.11c) are distinct expressions (note that
06/16 (3.11c) certainly is not valid for arg λ = 0).
06/18
06/20
3.6 Stokes Phenomena in the Complex Plane

Suppose that f (z) is analytic, with say an isolated singularity at z = z0 , where, say, z0 = 0. If z a f (z) is
regular for some a, then z a f (z) has a power series that converges. This suggests that if an asymptotic
power series is divergent, then the divergence must be associated with, say, an essential singularity, in
which case the asymptotic series could only be valid in a sector of angle < 2π. This suggests that a single
function may possess several asymptotic expansions, each restricted to a different sector; this is referred
to as the Stokes phenomenon, as illustrated by the Airy function.
As a further example consider
Z z Z ∞
2 −t2 2 2
erf z = √ e dt = 1 − √ e−t dt
π 0 π z
2
e−z
∼1− √ as z → ∞, z real .
πz
One can extend this approximation into the complex plane as long as the contour for
Z ∞
2 2
√ e−t dt
π z
2
is kept in the sector where e−z → 0 as z → ∞. Hence
2
e−z
erf z ∼ 1 − √ as z → ∞, |arg z| < π/4. (3.12a)
πz
But erf is an odd function, so
2
e−z
erf z ∼ −1 − √ as z → ∞, 3π/4 < |arg z| < 5π/4. (3.12b)
πz
Rz 2
For π/4 < arg z < 3π/4 we can integrate the defintion of the error function, i.e. erf z = √2
π 0
e−t dt, by
parts to show that
2
e−z
erf z ∼ − √ as z → ∞, π/4 < |arg z| < 3π/4. (3.12c)
πz
We now have three different asymptotic expansions for erf z. This is because, while erf is analytic every-
where in the finite complex plane, there is a non-analytic essential singularity at ∞.

Mathematical Tripos: Part III PM 27 © [email protected], Michaelmas 2022


3.6.1 Terminology

• The line where a term that is sub-dominant (i.e. much smaller) in one sector becomes comparable
with a term that is dominant in that sector, is called an anti-Stokes line by some (e.g. Stokes,
physicists and some mathematicians), and a Stokes lines by others (e.g. Bender & Orszag). For the
error function the anti-Stokes lines are at

arg z = (2n + 1)π/4 .

• The lines where the leading behaviours of the two terms are most unequal are called Stokes lines
by some (e.g. Stokes, physicists and some mathematicians), and a anti-Stokes lines by others (e.g.
Bender & Orszag). In the case above the Stokes lines are at

arg z = nπ/2 .

Stokes lines are important since the coefficient of the sub-dominant term can jump at them.
06/15

3.7 What Happens At Stokes Lines?

If we concentrate on the steepest descent paths, then in the case of the Airy function there is a change in
topology of the integration path when arg λ = 2π/3. The aim of this section is both to demonstrate that
07/17 the sub-dominant exponentially small term is ‘turned on’ here, and to understand the ‘turn on’ process.

3.7.1 The Airy function

Lemma. We first need a lemma. Consider the integral I(σ, n) defined for real integer n and Re(σ) > 0 by
Z ∞ n−1
t exp(σ(1 − t))
I(σ, n) = dt , (3.13)
0 1−t

where the contour of integration is chosen, based on the hindsight that we are doing a ‘turn-on’
problem, to pass just above the pole at t = 1.
First we note that, by expanding (1 − t)−1 as a binomial, (3.13) can be formally expressed as a
[divergent] series (see (4.4) below for ‘justification’ of this):
\[
I = \int_0^{\infty} dt\; t^{\,n-1}\exp(\sigma(1-t)) \sum_{p=0}^{\infty} t^{p}
= e^{\sigma} \sum_{p=0}^{\infty} \sigma^{-n-p} \int_0^{\infty} ds\; s^{\,n+p-1} e^{-s}
= e^{\sigma} \sum_{r=n}^{\infty} \frac{\Gamma(r)}{\sigma^{r}} . \tag{3.14}
\]

Next, we seek an asymptotic expansion of I in the limit as n → ∞, for the [inspired] choice
\[
\sigma \sim n + i\mu n^{1/2} + \nu + \dots , \tag{3.15}
\]

where µ = O(1) and ν = O(1).


One possible way forward is to note that the exponent in (3.13), φ = σ(1 − t) + (n − 1) log t, is stationary at
\[
t = \frac{n-1}{\sigma} \sim \frac{n-1}{n + i\mu n^{1/2} + \nu} \sim 1 - \frac{i\mu}{n^{1/2}} + \dots , \tag{3.16}
\]



and then to proceed using steepest descents. Alternatively, we note that
\[
\frac{\partial I}{\partial \sigma} = \int_0^{\infty} t^{\,n-1}\exp(\sigma(1-t))\,dt
= \frac{e^{\sigma}}{\sigma^{n}} \int_0^{\infty} u^{\,n-1} e^{-u}\,du
= \frac{e^{\sigma}\,\Gamma(n)}{\sigma^{n}} .
\]
Hence, from using Stirling's formula,
\[
\frac{\partial I}{\partial \mu} \sim i n^{1/2}\,
\frac{e^{n}\, e^{i\mu n^{1/2}}\, e^{\nu}\, (2\pi)^{1/2} n^{n} e^{-n}}
{\big(n + i\mu n^{1/2} + \nu\big)^{n}\, n^{1/2}}
\;\sim\; i(2\pi)^{1/2} \exp\big(-\tfrac12\mu^2\big) .
\]

We now wish to integrate this expression; for this we need a boundary condition. As noted in (3.16), the exponent in (3.13) has a stationary point at
\[
t \sim 1 - \frac{i\mu}{n^{1/2}} + \dots .
\]
Hence as µ → −∞ the stationary point moves further and further above the pole at t = 1, whereas as µ passes through 0 a contribution is picked up from the pole. We can show, say using a steepest-descents estimate, that I → 0 as µ → −∞ (exercise: do this); it follows that
\[
I(\sigma, n) \sim i(2\pi)^{1/2} \int_{-\infty}^{\mu} \exp\big(-\tfrac12 t^2\big)\,dt
\;\sim\; i\pi\Big(1 + \operatorname{erf}\big(\mu/\sqrt{2}\big)\Big) . \tag{3.17}
\]

With this lemma in our armoury, consider the full asymptotic series for the Airy function (see (3.10)),
\[
\mathrm{Ai}(\lambda) = \frac{1}{2\pi i}\int_C e^{\lambda z - \frac13 z^3}\,dz,
\]
when |λ| ≫ 1 and |arg(λ)| < π. Recall from (3.11b) that this is given by
\[
\mathrm{Ai}(\lambda) \sim \frac{1}{2\lambda^{1/4}\pi^{1/2}}\exp\big(\tfrac12\xi\big)\sum_{r=0}^{\infty} Y_r ,
\qquad\text{where}\qquad
Y_r = \frac{\Gamma\big(r+\tfrac16\big)\Gamma\big(r+\tfrac56\big)}{2\pi\,\xi^{r}\,\Gamma(r+1)}
\quad\text{and}\quad \xi = -\tfrac43\lambda^{3/2} .
\]
We aim to estimate this when the asymptotic expansion is truncated optimally. We note that |Y_r| is a minimum when
\[
(r+1)\,|\xi| \sim \big(r + \tfrac16\big)\big(r + \tfrac56\big) ,
\quad\text{i.e. when}\quad r \sim |\xi| .
\]

Let
\[
n = \operatorname{int}(|\xi|) + 1 , \tag{3.18}
\]
and write
\[
\mathrm{Ai}(\lambda) = \frac{1}{2\lambda^{1/4}\pi^{1/2}}\exp\big(\tfrac12\xi\big)\sum_{r=0}^{n-1} Y_r + R_n ,
\qquad\text{where}\qquad
R_n = \frac{1}{2\lambda^{1/4}\pi^{1/2}}\exp\big(\tfrac12\xi\big)\sum_{r=n}^{\infty} Y_r .
\]

Next we need an estimate for R_n. Using Stirling's formula we can show that
\[
Y_r \to \frac{\Gamma(r)}{2\pi\xi^{r}} \quad\text{as}\quad r \to \infty .
\]
Hence from the lemma, in particular (3.14), we deduce that
\[
R_n \sim \frac{1}{4\lambda^{1/4}\pi^{3/2}}\, e^{-\frac12\xi}\, I(\xi, n) . \tag{3.19}
\]

Finally, we consider values of ξ which have small argument. Specifically, write
\[
\arg(\xi) = \frac{\mu}{|\xi|^{1/2}} ,
\]
so that
\[
\xi = |\xi|\exp\!\Big(\frac{i\mu}{|\xi|^{1/2}}\Big) \sim |\xi| + i\mu|\xi|^{1/2} + O(1) . \tag{3.20}
\]

Thence from (3.15), (3.17), (3.18) and (3.19) it follows that^a
\[
R_n \sim \frac{i\exp\big(-\tfrac12\xi\big)}{4\lambda^{1/4}\pi^{1/2}}\Big(1 + \operatorname{erf}\big(\mu/\sqrt{2}\big)\Big) . \tag{3.21}
\]
Since ξ = −(4/3)λ^{3/2}, we can interpret this result as saying that within an O(|ξ|^{-1/2}) angle, i.e. an O(|λ|^{-3/4}) angle, of arg λ = ±2π/3, the sub-dominant exponentially small term is ‘turned on’ by an error function. This is why Stokes lines are more important than anti-Stokes lines. We note that as µ → ∞ then (3.21) is the contribution from the sub-dominant saddle point in figures 3.2 and 3.3.

Asymptotics beyond all orders. In order to see the sub-dominant exponentially small term ‘turn on’, it was not sufficient to consider just the algebraic asymptotic expansion. We needed a clever trick to look beyond the infinite number of algebraic terms; this is an example of asymptotics beyond all orders. We will return to this topic later.

^a The choice of the contour going above the pole at t = 1 means that the remainder, R_n, ‘turns on’ as µ increases; if the contour had been chosen beneath the pole then the remainder would have ‘turned off’ as µ increased.



4 Summation Of Series By ‘Magic’
How do we sum series? E.g. how do we find the value of
\[
S_n = \sum_{r=0}^{n} a_r \quad\text{as}\quad n \to \infty .
\]

For instance what are the sums of

(a) 1 − 1/2 + 1/3 − 1/4 + ... ,
(b) 1 − 1 + 1 − 1 + ... ,
(c) 1 + 2 + 4 + 8 + ... .

For starters note that in the case of example (b)
\[
\lim_{n\to\infty} S_n \equiv S = 1 - 1 + 1 - 1 + \dots = 1 - (1 - 1 + 1 - \dots) = 1 - S ;
\]
hence we might guess that S = 1/2.

More generally, we might expect that the value of the sum depends on the definition of the sum. We will
consider a number of different ‘magical methods’ (most of which are based on analytical continuation),
most of which, reassuringly, come up with the same answer.

4.1 Cesàro Sums


\[
S = \lim_{n\to\infty} \frac{S_0 + S_1 + \dots + S_n}{n+1} .
\]
For example (b):
\[
S_n = \tfrac12\big(1 + (-)^n\big) , \qquad
S = \lim_{n\to\infty} \frac{1 + 0 + 1 + 0 + \dots}{n+1} = \tfrac12 .
\]
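A quick numerical sketch (an addition, not from the notes) confirming that the Cesàro mean of the partial sums of 1 − 1 + 1 − 1 + ... tends to 1/2; the number of terms retained is an arbitrary choice.

```python
# Cesaro summation of 1 - 1 + 1 - 1 + ... : a sketch, not from the notes.
partial_sums = []
total = 0
for r in range(1999):            # number of terms is an arbitrary choice
    total += (-1) ** r           # a_r = (-1)^r
    partial_sums.append(total)   # S_r

print(sum(partial_sums) / len(partial_sums))   # ~ 0.50025, tending to 1/2
```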

4.2 Euler Sums

Define
\[
f(x) = \sum_{r=0}^{\infty} a_r x^r .
\]
Suppose that this series is convergent for |x| < 1; then, based on the idea of analytic continuation, define the Euler sum to be
\[
S = \lim_{x\to 1^-} f(x) .
\]



For instance:

(a) a_r = (−)^r:
\[
f(x) = \sum_{r=0}^{\infty} (-)^r x^r = \frac{1}{1+x} ,
\]
and so 1 − 1 + 1 − 1 + ... = lim_{x→1^-} f(x) = 1/2 (again).

(b) a_r = 2^r:
\[
f(x) = \sum_{r=0}^{\infty} (2x)^r = \frac{1}{1-2x} ,
\]
and so 1 + 2 + 4 + 8 + ... = f(1) = −1.

(c) a_r = r:
\[
f(x) = \sum_{r=0}^{\infty} r x^r = \frac{x}{(1-x)^2} ,
\]
and so the Euler sum of 1 + 2 + 3 + 4 + ... is not defined.

4.3 Borel Sums

If the coefficients a_r grow too fast, then Euler summation is not applicable. However, the power series may still have meaning as an asymptotic series. Define
\[
\phi(x) = \sum_{r=0}^{\infty} \frac{a_r x^r}{r!} ,
\]
and let
\[
B(x) = \int_0^{\infty} e^{-t}\,\phi(xt)\,dt
= \sum_{r=0}^{\infty} \frac{a_r}{r!} \int_0^{\infty} (xt)^r e^{-t}\,dt
= \sum_{r=0}^{\infty} a_r x^r ,
\]
by Watson's lemma (or by playing fast-and-loose with the interchange of the summation and integration). We define the Borel sum to be
\[
S = \sum_{r=0}^{\infty} a_r = \lim_{x\to 1^-} B(x) .
\]

4.3.1 An example: the Stieltjes series

The [divergent] Stieltjes series is given by
\[
f(x) = \sum_{r=0}^{\infty} (-)^r r!\, x^r , \qquad a_r = (-)^r r! ,
\]
with x = 1. Adopting the above method we write
\[
\phi(x) = \sum_{r=0}^{\infty} (-)^r x^r = \frac{1}{1+x} ,
\qquad\text{and}\qquad
B(x) = \int_0^{\infty} \frac{e^{-t}}{1+xt}\,dt .
\]
Hence
\[
0! - 1! + 2! - 3! + \dots = \int_0^{\infty} \frac{e^{-t}}{1+t}\,dt .
\]

4.3.2 Summation in the Borel plane

There is another way of looking at Borel sums. Suppose instead that we have a series
\[
\psi(x) = \sum_{r=1}^{\infty} \frac{a_r}{x^r} , \tag{4.1}
\]
with a_r ∝ r! as r → ∞. Let L be the Laplace operator, with inverse L^{−1}. We adopt a normalisation so that
\[
\mathcal{L}\{t^{r-1}\} = \int_0^{\infty} e^{-xt}\, t^{r-1}\,dt = \frac{(r-1)!}{x^r} . \tag{4.2}
\]
Then, analytically continuing in the Borel “t” plane,
\[
\psi(x) = \mathcal{L}\mathcal{L}^{-1}\sum_{r=1}^{\infty}\frac{a_r}{x^r}
= \mathcal{L}\left\{\sum_{r=1}^{\infty}\frac{a_r t^{r-1}}{(r-1)!}\right\}
= \int_0^{\infty} e^{-xt}\sum_{r=1}^{\infty}\frac{a_r t^{r-1}}{(r-1)!}\,dt
= \int_0^{\infty} e^{-xt}\,\varphi(t)\,dt , \tag{4.3a}
\]
where φ(t) is given by the series (which is convergent for small t)
\[
\varphi(t) = \sum_{r=1}^{\infty}\frac{a_r t^{r-1}}{(r-1)!} . \tag{4.3b}
\]

4.3.3 An example

Suppose that a_r = 0 for r = 1, ..., n − 1, and that a_r = (r − 1)! for r = n, n + 1, .... Then, using analytic continuation for |t| > 1,
\[
\varphi(t) = \sum_{r=n}^{\infty} t^{r-1} = \frac{t^{\,n-1}}{1-t} ,
\]
and hence
\[
\psi(x) = \sum_{r=n}^{\infty} \frac{(r-1)!}{x^r} = \int_0^{\infty} e^{-xt}\,\frac{t^{\,n-1}}{1-t}\,dt , \tag{4.4}
\]
where the integration contour is assumed to pass just above the pole at t = 1. The sum I(σ, n) = e^σ ψ(σ) in equation (3.14), in § 3.7.1 on Stokes lines of the Airy function, is thus a Borel sum (and relies on ideas of analytic continuation in the Laplace transform plane).



4.4 Padé Approximants

Suppose we only know partial sums. Let
\[
\sum_{r=0}^{N+M} a_r x^r = \frac{\sum_{n=0}^{N} A_n x^n}{\sum_{m=0}^{M} B_m x^m} = P^N_M(x) ,
\]
the equality holding to the order shown. Often if
\[
f(x) = \sum_{r=0}^{\infty} a_r x^r ,
\]
then
\[
P^N_M(x) \to f(x) \quad\text{as } N, M \to \infty ,
\]
even if \(\sum_{r=0}^{\infty} a_r x^r\) is divergent.

1. If a_r = 1, then
\[
P^N_N(x) = \frac{1}{1-x} , \quad\text{exactly!}
\]
2. Stieltjes series, a_r = (−)^r r!:
\[
P^5_5(1) = 0.59738\ldots \ (11 \text{ terms}), \qquad
P^{10}_{10}(1) = 0.59638\ldots \ (21 \text{ terms}), \qquad
B(1) = 0.59635\ldots .
\]
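As an illustration (an addition, not from the notes), the P^5_5 value can be reproduced from the first eleven Taylor coefficients using scipy's Padé construction; the coefficients grow like r!, so the underlying linear system is mildly ill-conditioned and the agreement is only to the accuracy quoted above.

```python
# Pade estimate P^5_5(1) for the Stieltjes series: a sketch, not from the notes.
from math import factorial
from scipy.interpolate import pade

coeffs = [(-1) ** r * factorial(r) for r in range(11)]   # a_0, ..., a_10
p, q = pade(coeffs, 5)             # numerator / denominator, each of degree 5
print(p(1.0) / q(1.0))             # ~ 0.597, cf. the tabulated P^5_5(1) = 0.59738...
```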

Padé Approximants work because they put

• poles near poles,


• a cluster of poles at essential singularities,
• sequences of poles and zeros along branch cuts.

4.5 Continued Fractions

A variation of the Padé method of summing power series. Define
\[
F_N(x) = \cfrac{c_0}{1 + \cfrac{c_1 x}{1 + \cfrac{c_2 x}{\ddots \ 1 + \cfrac{c_{N-1}x}{1 + c_N x}}}} \; .
\]
There are fast numerical methods for the evaluation of continued fractions.

4.6 Shanks’ Transformation

Suppose
\[
S_n = \sum_{r=0}^{n} a_r = A + BC^n ;
\]
then, from eliminating A, B and C,
\[
S(S_n) = S_n - \frac{(S_{n+1}-S_n)(S_n - S_{n-1})}{(S_{n+1}-S_n) - (S_n - S_{n-1})} .
\]



This can be applied repeatedly, e.g. S(S(S_n)), to remove higher transients. For instance, consider
\[
\ln 2 = 1 - \tfrac12 + \tfrac13 - \tfrac14 + \tfrac15 - \dots = 0.693147\ldots
\]
Partial Sums   1-Shanks   2-Shanks    3-Shanks
1
0.5
0.833          0.7000
0.583          0.6905
0.783          0.6944     0.693277
0.617          0.6924     0.693106
0.760          0.6936     0.693163    0.693149
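The table is easy to reproduce; the following sketch (an addition, not from the notes) applies the Shanks transformation repeatedly to the first seven partial sums.

```python
# Repeated Shanks transformation of the partial sums of ln 2: a sketch, not from the notes.
import numpy as np

def shanks(S):
    """Apply one Shanks transform to an array of partial sums."""
    S = np.asarray(S, dtype=float)
    dS1, dS0 = S[2:] - S[1:-1], S[1:-1] - S[:-2]
    return S[1:-1] - dS1 * dS0 / (dS1 - dS0)

S = np.cumsum([(-1) ** (r + 1) / r for r in range(1, 8)])   # seven partial sums
while True:
    print(S[-1])                   # last entry of each column of the table above
    if len(S) < 3:
        break
    S = shanks(S)
# final value ~ 0.693149, cf. ln 2 = 0.693147...
```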

4.7 Richardson Extrapolation

Suppose instead
\[
S_n \sim Q_0 + \frac{Q_1}{n} + \frac{Q_2}{n^2} + \frac{Q_3}{n^3} + \dots \quad\text{as } n \to \infty .
\]
Calculate the N + 1 partial sums S_n, S_{n+1}, ..., S_{n+N}. Then it is possible to show that
\[
Q_0 = \sum_{k=0}^{N} \frac{S_{n+k}\,(n+k)^{N}\,(-)^{k+N}}{k!\,(N-k)!} .
\]

4.8 Other Methods


• Neville tables;

• Domb-Sykes plots (to find the nearest singularity);


• Euler transformations;
• etc.
Unlectured


5 Matched Asymptotic Expansions (MAEs)
Matched asymptotic expansions are mainly used for solving singular perturbation problems that arise when
finding solutions to differential equations. MAEs are often needed when the highest-order derivative is
multiplied by a small parameter, say ε, where 0 < ε ≪ 1 henceforth. We will apply them primarily to
ODEs, but they are equally applicable to PDEs.

5.1 Regular Perturbation Problems: An Example


\[
y'' + 2\varepsilon y' + (1+\varepsilon^2)\,y = 1 , \qquad y(0) = 0, \quad y\big(\tfrac{\pi}{2}\big) = 0 ,
\]
where, as noted above, 0 < ε ≪ 1.

5.1.1 Exact solution


\[
y = \frac{1}{1+\varepsilon^2}\Big[1 - e^{-\varepsilon(x-\pi/2)}\sin x - e^{-\varepsilon x}\cos x\Big]
\]
\[
\phantom{y} = (1 - \sin x - \cos x)
+ \varepsilon\Big[\big(x - \tfrac{\pi}{2}\big)\sin x + x\cos x\Big]
- \varepsilon^2\Big[1 - \cos x - \sin x + \tfrac12\big(x - \tfrac{\pi}{2}\big)^2\sin x + \tfrac12 x^2\cos x\Big] + \dots .
\]

5.1.2 Perturbation solution

Try

y = y0 + εy1 + ε2 y2 + . . . .

Then
\[
\varepsilon^0:\quad y_0'' + y_0 = 1 , \qquad y_0(0) = 0, \quad y_0\big(\tfrac{\pi}{2}\big) = 0 ,
\]
\[
\varepsilon^1:\quad y_1'' + y_1 = -2y_0' , \qquad y_1(0) = 0, \quad y_1\big(\tfrac{\pi}{2}\big) = 0 ,
\]
\[
\varepsilon^2:\quad y_2'' + y_2 = -2y_1' - y_0 , \qquad y_2(0) = 0, \quad y_2\big(\tfrac{\pi}{2}\big) = 0 .
\]
Hence
\[
y_0 = 1 - \sin x - \cos x , \qquad
y_1 = \big(x - \tfrac{\pi}{2}\big)\sin x + x\cos x , \qquad
y_2 = -1 + \cos x + \sin x - \tfrac12\big(x - \tfrac{\pi}{2}\big)^2\sin x - \tfrac12 x^2\cos x .
\]

5.2 Singular Perturbation: Example

\[
\varepsilon y'' + y' = -e^{-x} , \qquad y(0) = 0, \quad y \to 0 \ \text{as } x \to \infty .
\]

5.2.1 Exact solution

\[
y = \frac{e^{-x} - e^{-x/\varepsilon}}{1-\varepsilon} .
\]
Limit ε → 0, x fixed:
\[
y \sim e^{-x}\big(1 + \varepsilon + \varepsilon^2 + \dots\big) . \tag{5.1}
\]
This expansion satisfies the boundary condition as x → ∞, but does not satisfy the boundary condition
y(0) = 0.



The limit ε → 0, with x fixed, is a non-uniform limit since
\[
e^{-x/\varepsilon} \ll \varepsilon^{m} \quad\text{only if}\quad |x| \gg m\varepsilon\log\frac{1}{\varepsilon} ;
\]
hence we cannot put x = 0 in (5.1).
For x small we obtain an asymptotic expansion by first setting x = εξ, and then expanding:
\[
y \sim \big(1 - e^{-\xi}\big) + \varepsilon\big(1 - e^{-\xi} - \xi\big) + \varepsilon^2\big(1 - e^{-\xi} - \xi + \tfrac12\xi^2\big) + \dots . \tag{5.2}
\]

Now

\[
y(0) = 0 + \varepsilon\,0 + \varepsilon^2\,0 + \dots ,
\]
while
\[
y \to 1 + \varepsilon(1-\xi) + \varepsilon^2\big(1 - \xi + \tfrac12\xi^2\big) + \dots \quad\text{as } \xi \to \infty .
\]

‘Outer’ (ε → 0, x fixed) expansion satisfies the x → ∞ boundary condition,


‘Inner’ (ε → 0, ξ fixed) expansion satisfies the ξ = 0 boundary condition.
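The non-uniformity is easy to see numerically; the following sketch (an addition, not from the notes) evaluates the exact solution together with two terms of the outer expansion (5.1) and of the inner expansion (5.2), for the arbitrary choice ε = 0.05.

```python
# Outer vs inner accuracy for y = (e^{-x} - e^{-x/eps})/(1 - eps): a sketch, not from the notes.
import numpy as np

eps = 0.05
x = np.array([0.01, 0.1, 1.0, 3.0])
xi = x / eps

exact = (np.exp(-x) - np.exp(-x / eps)) / (1 - eps)
outer = np.exp(-x) * (1 + eps)                                 # two terms of (5.1)
inner = (1 - np.exp(-xi)) + eps * (1 - np.exp(-xi) - xi)       # two terms of (5.2)

for row in zip(x, exact, outer, inner):
    print("x = %5.2f   exact = %9.5f   outer = %9.5f   inner = %9.5f" % row)
# the outer expansion fails for x = O(eps); the inner expansion fails for x = O(1)
```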
Exercise. Put x = ε^{1/2}η in (5.1) and expand to O(ε); put ξ = ε^{−1}x = ε^{−1/2}η in (5.2) and expand to O(ε). Compare the results.

5.2.2 Expansion solution

Outer Approximation. Pose a Poincaré expansion for x fixed (≠ 0) and ε → 0:
\[
y = \sum_{n=0}^{\infty} \varepsilon^n y_n(x) = y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \dots .
\]
Then, from substituting into the governing equation and equating terms with the same power of ε,
\[
O(\varepsilon^0):\ y_0' = -e^{-x}, \quad y_0 = A_0 + e^{-x} ; \qquad
O(\varepsilon^1):\ y_0'' + y_1' = 0, \quad y_1 = A_1 + e^{-x} ; \qquad
O(\varepsilon^n):\ y_{n-1}'' + y_n' = 0, \quad y_n = A_n + e^{-x} .
\]
We wish to apply two boundary conditions at each order, but have only one unknown constant. From comparison with the exact solution we choose not to satisfy the boundary condition at x = 0. From applying the boundary condition as x → ∞, it follows that A_n = 0, and
\[
y = e^{-x}\big(1 + \varepsilon + \varepsilon^2 + \dots\big) . \tag{5.3}
\]

This is in agreement with (5.1).


Inner Approximation. Since we wish to apply two boundary conditions, we need the εy'' term to be important somewhere at leading order. Note that, in a somewhat rough and ready sense,
\[
\varepsilon y'' \sim \frac{\varepsilon y}{(x-x_0)^2} , \qquad y' \sim \frac{y}{(x-x_0)} ,
\]
and these balance when (x − x₀) ∼ ε; this suggests the rescaling. Hence try
\[
x = x_0 + \varepsilon\xi , \qquad y(x) = Y(\xi) = \sum_{n=0}^{\infty} \varepsilon^n Y_n(\xi) ,
\]
where Y(ξ) satisfies
\[
\frac{1}{\varepsilon}\frac{d^2 Y}{d\xi^2} + \frac{1}{\varepsilon}\frac{dY}{d\xi} = -e^{-\varepsilon\xi} .
\]



From substituting into the governing equation it follows that
\[
O(\varepsilon^{-1}):\ Y_0'' + Y_0' = 0, \quad Y_0 = B_0 + C_0 e^{-\xi} ; \qquad
O(\varepsilon^{0}):\ Y_1'' + Y_1' = -e^{-x_0}, \quad Y_1 = B_1 + C_1 e^{-\xi} - \xi e^{-x_0} .
\]
Since we need to satisfy the boundary condition at x = 0, take x₀ = 0. Then
\[
Y_0 = B_0\big(1 - e^{-\xi}\big) , \qquad
Y_1 = B_1\big(1 - e^{-\xi}\big) - \xi , \qquad
Y_2 = B_2\big(1 - e^{-\xi}\big) - \xi + \tfrac12\xi^2 , \qquad \dots . \tag{5.4}
\]

Match. We have two asymptotic expansions, valid for x fixed, i.e. (5.3), and for ξ fixed, i.e. (5.4). They must represent the same function in the intermediate region
\[
\varepsilon \ll x \ll 1 , \quad\text{i.e.}\quad 1 \ll \xi \ll \varepsilon^{-1} .
\]
Forcing the two expansions to be identical determines the B_j. To this end introduce an ‘intermediate variable’, η, where
\[
\eta = \frac{x}{\varepsilon^{\alpha}} = \frac{\varepsilon\xi}{\varepsilon^{\alpha}} , \qquad 0 < \alpha < 1 , \ \text{e.g. } \alpha = \tfrac12 ,
\]
so that x = ε^α η and ξ = ε^{α−1} η.
When η = ord(1), then as required ε ≪ x ≪ 1. Expand both the outer and the inner asymptotic expansions in powers of η:
\[
\text{Outer:}\quad y \sim \big(1 - \varepsilon^{\alpha}\eta + \tfrac12\varepsilon^{2\alpha}\eta^{2} - \tfrac16\varepsilon^{3\alpha}\eta^{3} + \dots\big)
+ \big(\varepsilon - \varepsilon^{1+\alpha}\eta + \tfrac12\varepsilon^{1+2\alpha}\eta^{2} + \dots\big)
+ \big(\varepsilon^{2} - \varepsilon^{2+\alpha}\eta + \dots\big) + \dots ,
\]
\[
\text{Inner:}\quad y \sim \big(B_{0} + \text{e.s.t.}\big)
+ \big(\varepsilon B_{1} - \varepsilon^{\alpha}\eta + \text{e.s.t.}\big)
+ \big(\varepsilon^{2} B_{2} - \varepsilon^{1+\alpha}\eta + \tfrac12\varepsilon^{2\alpha}\eta^{2} + \text{e.s.t.}\big) + \dots ,
\]
where ‘e.s.t.’ denotes exponentially small terms; terms of equal size in the two expansions are then paired off in order.


After reordering, the expansions should be the same; hence
\[
B_0 = 1 , \qquad B_1 = 1 , \qquad B_2 = 1 .
\]
Terms jump order when matching. This indicates that there are terms in the governing equation that, although small in one region, are to be treated as dominant in the next region:
\[
x = O(1):\quad \underbrace{-\varepsilon\frac{d^2y}{dx^2}}_{\text{small}} = \underbrace{e^{-x} + \frac{dy}{dx}}_{\text{common terms}} ;
\qquad
\xi = O(1):\quad \underbrace{\frac{dY}{d\xi} + \frac{d^2Y}{d\xi^2}}_{\text{common terms}} = \underbrace{-\varepsilon e^{-\varepsilon\xi}}_{\text{small}} .
\]
Note that if the smallest retained terms, i.e. the O(ε²) terms in both expansions, are to be bigger than the largest [small] ignored terms, i.e. the O(ε^{3α}) terms in both expansions, then we require
\[
\varepsilon^{2} \ll \varepsilon^{3\alpha} , \quad\text{i.e.}\quad \tfrac23 < \alpha < 1 .
\]
If matching to higher order by, say, retaining the terms up to O(ε^{Q}), then for the O(ε^{(Q+1)α}) ignored terms to be formally smaller, we would require that
\[
\varepsilon^{Q} \ll \varepsilon^{(Q+1)\alpha} , \quad\text{i.e.}\quad \frac{Q}{Q+1} < \alpha < 1 .
\]

5.3 Van Dyke's Matching Rule

This can be simpler than using an intermediate variable, but sometimes fails (beware of logs).
Notation:
\[
E_n y = \text{outer limit } (x \text{ fixed},\ \varepsilon \downarrow 0) \text{ of } y \text{ retaining } n \text{ terms} = \sum_{r=0}^{n-1} \varepsilon^{r} y_r(x) ,
\]
\[
H_m y = \text{inner limit } (\xi \text{ fixed},\ \varepsilon \downarrow 0) \text{ of } y \text{ retaining } m \text{ terms} = \sum_{r=0}^{m-1} \varepsilon^{r} Y_r(\xi) .
\]
Van Dyke's rule is
\[
E_n H_m y = H_m E_n y ,
\]
where E_n H_m y means: take m terms of the inner expansion, re-express ξ in terms of x, and then take n terms of the resulting expansion (and similarly, with the roles reversed, for H_m E_n y).
Forcing equality determines the unknown constants. We illustrate this using our model problem:
\[
E_2 H_2 y = E_2\Big[B_0\big(1 - e^{-\xi}\big) + \varepsilon B_1\big(1 - e^{-\xi}\big) - \varepsilon\xi\Big]
= E_2\Big[B_0\big(1 - e^{-x/\varepsilon}\big) + \varepsilon B_1\big(1 - e^{-x/\varepsilon}\big) - x\Big]
= B_0 - x + \varepsilon B_1 ,
\]
\[
H_2 E_2 y = H_2\big[e^{-x} + \varepsilon e^{-x}\big] = H_2\big[e^{-\varepsilon\xi} + \varepsilon e^{-\varepsilon\xi}\big] = 1 - \varepsilon\xi + \varepsilon .
\]
Hence we require
\[
B_0 - x + \varepsilon B_1 = 1 - \underbrace{\varepsilon\xi}_{=\,x} + \varepsilon ,
\]
and so B₀ = 1 = B₁.
Exercise. Do this for general m and n.
5.4 The Choice of Scaling

There is no magic law that enables one to make the correct choice of scaling. However, there are tips.3

(a) First find ‘the’ regular solution:

y = y0 + εy1 + ε2 y2 + . . . .

If for some x it happens that, εy1 ∼ y0 or ε2 y2 ∼ εy1 or . . ., then the solution is no longer asymptotic.
This often suggests a rescaling for x. For instance suppose that the regular-perturbation solution
yields
\[
y = 1 + \frac{2\varepsilon}{(x-x_0)^2} + \frac{7\varepsilon^2}{(x-x_0)^4} + \dots .
\]
This breaks down when (x − x₀) ∼ ε^{1/2}, which suggests that an appropriate rescaling would be x = x₀ + ε^{1/2} ξ.
(b) Look at the equation and see if one can predict the scaling from there, i.e. seek distinguished limits.
For instance consider the problem
\[
(x + \varepsilon y)\frac{dy}{dx} + y = 1 , \qquad y(1) = 2 .
\]
This has the leading-order (i.e. ε = 0) solution
\[
x\frac{dy_0}{dx} + y_0 = 1 , \qquad y_0 = 1 + \frac{1}{x} . \tag{5.5}
\]
Now, using (5.5), compare the sizes of the terms in the equation:
\[
x\frac{dy}{dx} ; \qquad \varepsilon y\frac{dy}{dx} \sim \frac{\varepsilon y^2}{x} ; \qquad y ; \qquad 1 ;
\]
the neglected term εy\,dy/dx is comparable with the retained terms when y ∼ x/ε. Since y₀ ∼ 1/x as x → 0, this happens when 1/x ∼ x/ε, i.e. when x ∼ ε^{1/2}.

5.5 Where is the ‘Inner Layer’ ?

The ‘inner layer’ could be anywhere! One way to try and track it down is to look at regular solution and
see where it breaks down. However, this method does not always work, as illustrated by the following
examples.

Example 1. Consider the problem
\[
\varepsilon y'' - y = 0 , \qquad y(0) = y(1) = 1 .
\]
For ε > 0 this has solution
\[
y = \frac{1 - e^{-1/\varepsilon^{1/2}}}{1 - e^{-2/\varepsilon^{1/2}}}\Big(e^{-x/\varepsilon^{1/2}} + e^{(x-1)/\varepsilon^{1/2}}\Big) .
\]
3 In a forest, a fox bumps into a little rabbit, and inquires, ‘Hi, what are you up to?’. ‘I’m writing a dissertation on how

rabbits eat foxes’, says the rabbit. ‘Come now rabbit, you know that’s impossible’, replies the fox. ‘Well, follow me and I’ll
show you’, says the rabbit. They both go into the rabbit’s dwelling and after a while the rabbit emerges with a satisfied
expression on his face.
Along comes a wolf who asks, ‘Hello, what are you doing these days?’. ‘I’m writing the second chapter of my thesis, on
how rabbits devour wolves’, says the rabbit. ‘Are you crazy! Where is your academic honesty?’ explodes the wolf. ‘Come
with me and I’ll show you’, says the rabbit. As before the rabbit comes out of his dwelling with a satisfied expression on
his face, and with a diploma in his paw.
Switch to the rabbit’s dwelling to find a huge lion sitting next to some bloody and furry remnants of the fox and the wolf.
The moral: it’s your supervisor that really counts.



The asymptotic solution is
\[
y = 0 + \varepsilon\,0 + \varepsilon^2\,0 + \dots .
\]
There are inner layers at both x = 0 and x = 1.
Now consider the case ε < 0. This has solution
\[
y = \frac{\sin\big(x/|\varepsilon|^{1/2}\big) - \sin\big((x-1)/|\varepsilon|^{1/2}\big)}{\sin\big(1/|\varepsilon|^{1/2}\big)} .
\]
In this case there are ‘inner layers’ everywhere.
What happens if \(\sin\big(1/|\varepsilon|^{1/2}\big) = 0\)?

Example 2.
\[
\tfrac12\varepsilon^2 f'' - f\big(f^2 - 1\big) = 0 , \qquad\text{with}\quad f(\infty) = 1 , \quad f(-\infty) = -1 .
\]
ε = 0: f(f² − 1) = 0, hence f = −1 or 0 or +1.
ε ≠ 0: an exact solution is
\[
f = \tanh\frac{x}{\varepsilon} . \tag{5.6}
\]
There is an inner layer in the interior, of width O(ε). Within the ‘inner layer’
\[
\varepsilon^2 f'' \sim f\big(f^2 - 1\big) ,
\]
i.e. the inner layer is confined to a region where (x − x₀) ∼ ε.
Exercise: Is (5.6) unique?



5.6 Composite Expansions

The outer solution in (5.3) fails as x → 0 due to the missing e^{−x/ε} term. The inner solution in (5.4) fails as ξ → ∞ due to the missing ε^n ξ^n/n! terms. By correcting either one we can obtain a uniformly valid asymptotic expansion called a composite expansion — this is useful for real answers/comparison with experiment.
It takes little effort to obtain the composite when using Van Dyke's matching rule — just use the composite operator
\[
C_{nm} y = E_n y + H_m y - E_n H_m y .
\]
Note: E_n C_{nm} y = E_n y and H_m C_{nm} y = H_m y.
For the example we have been considering
\[
C_{22} y = \big(e^{-x} + \varepsilon e^{-x}\big) + \big(1 - e^{-x/\varepsilon}\big) + \varepsilon\big(1 - e^{-x/\varepsilon}\big) - x - \big(1 - x + \varepsilon\big)
= (1+\varepsilon)\big(e^{-x} - e^{-x/\varepsilon}\big) .
\]
(i) This is correct to O(ε). Such expansions tend to be accurate to O(ε^{min(m,n)}).
(ii) The expansion is not of Poincaré form — so it is not unique.
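A quick numerical check (an addition, not from the notes) that the additive composite is uniformly accurate, for the arbitrary choice ε = 0.05.

```python
# Uniform accuracy of the additive composite (1 + eps)(e^{-x} - e^{-x/eps}):
# a sketch, not from the notes.
import numpy as np

eps = 0.05
x = np.linspace(0.0, 4.0, 401)
exact = (np.exp(-x) - np.exp(-x / eps)) / (1 - eps)
composite = (1 + eps) * (np.exp(-x) - np.exp(-x / eps))
print(np.abs(exact - composite).max())   # O(eps^2), uniformly in x
```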

The above additive composition is not always [the most] effective, e.g. if there are exponents or singularities in the expansions. However, other rules exist, for instance the multiplicative composition:
\[
C_{nm} y = \frac{E_n y \; H_m y}{E_n H_m y} .
\]
Alternatively, suppose that F is a sufficiently smooth functional with an inverse; then a composite expansion can be defined by
\[
C_{nm} y = F^{-1}\big\{ F(E_n y) + F(H_m y) - F(E_n H_m y) \big\} .
\]
Hence, additive composition corresponds to F(x) = x, while multiplicative composition corresponds to F(x) = log(x).



5.7 Matching Involving Logarithms

5.7.1 A model equation

We consider a model equation which can be thought of as representing heat conduction outside a spherical cavity with a weak nonlinear heat source. The equation can be written in two forms. In the first form the small parameter ε occurs in the equation
\[
f_{rr} + \frac{n-1}{r} f_r + \varepsilon f f_r = 0 , \qquad f(1) = 0, \quad f \to 1 \ \text{as } r \to \infty , \tag{5.7}
\]
while in the second form, with ρ = εr, ε occurs in one of the boundary conditions
\[
f_{\rho\rho} + \frac{n-1}{\rho} f_\rho + f f_\rho = 0 ; \qquad f(\varepsilon) = 0, \quad f \to 1 \ \text{as } \rho \to \infty . \tag{5.8}
\]
5.7.2 The case n = 3

First seek a regular expansion (r fixed, ε ↓ 0):
\[
f(r,\varepsilon) \sim f_0(r) + \varepsilon f_2(r) + \dots .
\]
Then from substituting into (5.7) we find that
\[
\varepsilon^0:\quad f_0'' + \frac{2}{r} f_0' = 0 , \quad f_0(1) = 0, \quad f_0 \to 1 \ \text{as } r \to \infty ; \qquad\text{with solution}\quad f_0 = 1 - \frac{1}{r} ,
\]
\[
\varepsilon^1:\quad \frac{1}{r^2}\big(r^2 f_2'\big)' = -f_0 f_0' , \quad f_2(1) = 0, \quad f_2 \to 0 \ \text{as } r \to \infty .
\]
On integrating and applying f₂(1) = 0, we obtain
\[
f_2 = A_2\Big(1 - \frac{1}{r}\Big) - \ln r\,\Big(1 + \frac{1}{r}\Big) . \tag{5.9}
\]

The boundary condition at ∞, i.e. f₂ → 0 as r → ∞, cannot be satisfied for any choice of A₂. As a result the expansion cannot be uniformly asymptotic at large r. In fact for r ≫ 1
\[
f_0'' \sim -\frac{2}{r^3} , \qquad \varepsilon f_0 f_0' \sim \frac{\varepsilon}{r^2} ,
\]
and hence the O(ε) term is no longer a small correction to the equation when r = O(1/ε).
Remark. Unfortunately, trying to derive the scaling from balancing the first two terms of the series, i.e. ε ln r ∼ 1, does not work. Scalings are a black art.
Since εf₂ ∼ ε ln(1/ε) when r = O(ε^{−1}), we try the asymptotic sequence
\[
1, \quad \varepsilon\ln(1/\varepsilon), \quad \varepsilon, \quad \dots .
\]
Note that we can view the ln(1/ε) term as coming from the particular integral
\[
f_2 = -\int^{r} \frac{ds}{s^2}\ \underbrace{\int_0^{s} t^2 f_0(t) f_0'(t)\,dt}_{\sim\, s \ \text{as } s \to \infty} .
\]
Asymptotic expansion for r fixed and ε ↓ 0. Try the Poincaré expansion:
\[
f \sim f_0 + \varepsilon\ln(1/\varepsilon)\,f_1 + \varepsilon f_2 + \dots , \qquad f_j(1) = 0 . \tag{5.10}
\]
Substitute into (5.7) and solve.
O(ε⁰). At leading order, as before, f₀ = 1 − 1/r.
O(ε ln(1/ε)). At this order the same linear equation is obtained as for f₀, hence f₁ = A₁(1 − 1/r).
O(ε). This order is the same as (5.9), viz. f₂ = A₂(1 − 1/r) − ln r (1 + 1/r).
The constants A₁ & A₂ are to be determined by matching.


Asymptotic expansion for ρ fixed and ε ↓ 0. Try the Poincaré expansion:
\[
f \sim g_0 + \varepsilon\ln(1/\varepsilon)\,g_1(\rho) + \varepsilon\,g_2(\rho) + \dots , \tag{5.11a}
\]
where, from the outer boundary condition,
\[
g_0(\infty) = 1 , \qquad g_1(\infty) = g_2(\infty) = 0 . \tag{5.11b}
\]
g₀ satisfies the nonlinear equation (5.8), which we note is satisfied if g₀ is a constant. Since f₀ → 1 as r → ∞ and g₀(∞) = 1 we guess that g₀ = 1. Then on substitution of (5.11a) into (5.8) we obtain the same equation for both g₁ and g₂:
\[
g_j'' + \frac{2}{\rho} g_j' + g_j' = 0 , \quad\text{i.e.}\quad \big(\rho^2 e^{\rho} g_j'\big)' = 0 ,
\]
with solution
\[
g_j = B_j \int_\rho^{\infty} \frac{e^{-\tau}}{\tau^2}\,d\tau , \qquad g_j(\infty) = 0 .
\]
Match by intermediate variable to fix A₁, A₂, B₁ & B₂. First observe that (e.g. by integrating by parts)
\[
\int_\rho^{\infty} \frac{e^{-\tau}}{\tau^2}\,d\tau \sim \frac{1}{\rho} + (\ln\rho + \gamma - 1) - \tfrac12\rho + o(\rho) \quad\text{as } \rho \to 0 ,
\]
where \(\gamma = -\int_0^{\infty} e^{-\tau}\log\tau\,d\tau\) is the Euler[–Mascheroni] constant. Introduce η = ε^α r = ε^{α−1} ρ, with 0 < α < 1. Take the limit of η fixed, ε ↓ 0:
\[
\text{(5.10):}\quad f \sim 1 - \frac{\varepsilon^{\alpha}}{\eta}
+ \varepsilon\ln(1/\varepsilon)\,A_1 - \frac{\varepsilon^{1+\alpha}}{\eta}\ln(1/\varepsilon)\,A_1
+ \varepsilon A_2 - \frac{\varepsilon^{1+\alpha}}{\eta}A_2
- \alpha\varepsilon\ln(1/\varepsilon) - \varepsilon\ln\eta
- \frac{\varepsilon^{1+\alpha}}{\eta}\big(\alpha\ln(1/\varepsilon) + \ln\eta\big) + \dots ,
\]
\[
\text{(5.11a):}\quad f \sim 1 + \frac{\varepsilon^{\alpha}}{\eta}\ln(1/\varepsilon)\,B_1
+ \varepsilon\big(\ln(1/\varepsilon)\big)^{2} B_1(\alpha-1)
+ \varepsilon\ln(1/\varepsilon)\,B_1\big(\ln\eta + \gamma - 1\big)
+ \frac{\varepsilon^{\alpha}}{\eta}B_2
+ \varepsilon\ln(1/\varepsilon)\,B_2(\alpha-1)
+ \varepsilon B_2\big(\ln\eta + \gamma - 1\big) + \dots .
\]



Make the expansions agree:
\[
\begin{aligned}
\varepsilon^0 &: & 1 &= 1 ; \\
\varepsilon^{\alpha}\ln(1/\varepsilon) &: & 0 &= B_1 , & B_1 &= 0 ; \\
\varepsilon^{\alpha} &: & -1 &= B_2 , & B_2 &= -1 ; \\
\varepsilon(\ln(1/\varepsilon))^2 &: & 0 &= B_1 , & &\text{consistent} ; \\
\varepsilon\ln(1/\varepsilon) &: & A_1 - \alpha &= (\alpha-1)B_2 , & A_1 &= 1 ; \\
\varepsilon &: & A_2 - \ln\eta &= B_2\ln\eta + (\gamma-1)B_2 , & A_2 &= 1 - \gamma .
\end{aligned}
\]

Hence
\[
\text{r fixed:}\quad f \sim \Big(1 - \frac{1}{r}\Big) + \varepsilon\ln(1/\varepsilon)\Big(1 - \frac{1}{r}\Big)
+ \varepsilon\Big[(1-\gamma)\Big(1 - \frac{1}{r}\Big) - \ln r\Big(1 + \frac{1}{r}\Big)\Big] + \dots ,
\]
\[
\text{ρ fixed:}\quad f \sim 1 - \varepsilon\int_\rho^{\infty}\frac{e^{-\tau}}{\tau^2}\,d\tau + \dots .
\]

Match using Van Dyke’s rule. Identify E and H with the coordinates r and ρ respectively. Then
   
1 1
H2 E2 f = H2 1 − + ε ln(1/ε) A1 1 −
r r
= 1 + ε ln(1/ε) A1 ,
 Z ∞ 
−τ dτ
E2 H2 f = E2 1 + ε ln(1/ε) B1 e
ρ τ2
B1
=1+ ln(1/ε) − ε ln2 (1/ε) B1 + B1 ε ln(1/ε) (ln r + γ − 1) .
r
If these two expansions are to agree then B1 = 0 and A1 = 0, which is incorrect. The trouble is a ln ρ
in the O(ε) term when ρ = O(1) — this changes to a ε ln(1/ε) term in the intermediate scaling.
In general, terms like (ln r)p lead to failures near to the diagonal where |n − m| < p. However, in general
there is success sufficiently far from the diagonal, e.g.
ε
H3 E2 f = 1 + ε ln(1/ε) A1 − ,
ρ

e−τ
 Z 
E2 H3 f = E2 1 + (ε ln(1/ε) B1 + εB2 ) dτ
ρ τ2
1 
= 1 + (B1 ln(1/ε) + B2 ) − ε ln(1/ε) ln(1/ε) B1 + B2
r
+ ε ln(1/ε) B1 (ln r + γ − 1) ;

so B1 = 0, B2 = −1, and A1 = 1 as before.


It is best to apply Van Dyke’s rule (and composite expansions) only at changes in the power of ε:

\[
1 , \quad \varepsilon\ln(1/\varepsilon) , \quad \varepsilon , \quad \varepsilon^2\ln^2(1/\varepsilon) , \quad \varepsilon^2\ln(1/\varepsilon) , \quad \varepsilon^2 , \quad \dots ,
\]
i.e. only at the orders 1, ε and ε². Because of the way that logarithmic terms jump order, apply Van Dyke's rule only at those orders: do not split logs! The mindless application of rules can be dangerous.
5.7.3 The case n = 2
In this case the governing equation is
1
frr + fr + εf fr = 0 , f (1) = 0 , f →1 as r → ∞.
r



Try a regular expansion, f ∼ f0 + εf1 + . . .; then
1
ε0 : f000 + f00 = 0 , f0 = A0 ln r + C0 .
r
No choice of A0 or C0 will satisfy the boundary conditions both at r = 1 and as r → ∞. Choose to satisfy
the boundary condition at r = 1, i.e. set C0 = 0. At next order
1
ε1 : f100 + f10 = −f0 f00 , f1 = A1 ln r + C1 − A20 (r ln r − 2r + 2) .
r
Again satisfy the boundary condition at r = 1 (i.e. f1 (1) = 0) – this time by setting C1 = 0. Note that if
A0 6= 0, then f1 has even worse behaviour as r → ∞ than f0 . By comparing where the expansion for f
14/13 becomes non-asymptotic, it follows that we should introduce ρ = εr as the stretched variable.
Note that when r = O 1ε ,


f0 ∼ A0 ln(1/ε) .
1
Since f0 ∼ 1 as ρ → ∞, this suggests trying A0 = ln(1/ε) , and the asymptotic sequence

1 1
1 , , 2 , ... .
ln(1/ε) ln(1/ε)
15/06
Remark. This asymptotic sequence is likely to have non-wonderful convergence properties.

Asymptotic expansion for r fixed and ε ↓ 0. We propose the asymptotic expansion

f1 (r) f2
f (r, ε) ∼ 0 + + 2 + . . . . (5.12)
ln(1/ε) ln(1/ε)

Then
1
fn00 + fn0 = 0 , and fn = An ln r .
r
0
Note that the εf f term never enters into the expansion for r = O(1).

Asymptotic expansion for ρ fixed and ε ↓ 0. In this case we propose

g1 (ρ) g2 (ρ)
f ∼1+ + 2 + ... . (5.13)
ln(1/ε) (ln(1/ε))

Then
 
1
g100 + + 1 g10 = 0 ,
ρ
Z ∞ −τ
e
g1 = B1 dτ = B1 E1 (ρ) ;
ρ τ
 
1
g200 + + 1 g20 = −g1 g10 ,
ρ
g2 = B2 E1 (ρ) − B12 e−ρ E1 (ρ) − 2E1 (2ρ) .


Match using the intermediate variable

η = εα r = εα−1 ρ (0 < α < 1)

and the asymptotic expansion

E1 (ρ) → − ln ρ − γ + ρ + O ρ2

as ρ → 0 .



Then
1 1
(5.12) f∼ A1 (α ln(1/ε) + ln η) + 2 A2 (α ln(1/ε) + ln η) + . . . (5.14)
ln(1/ε) ln(1/ε)
B1
−(α − 1) ln(1/ε) − ln η − γ + ε1−α η + . . .

(5.13) f ∼1 +
ln(1/ε)
1 
+ 2 B2 [−(α − 1) ln(1/ε) − ln η − γ + . . . ]
ln(1/ε)

+ B12 [−(α − 1) ln(1/ε) − γ − ln η − ln 4 + . . . ] . (5.15)

On equating equal orders of ε we find that (again noting that terms jump order)

ln0 (1/ε) : αA1 =1 − (α − 1)B1 ,


— if this is true ∀α then B1 = −1, A1 = 1;
−1
ln (1/ε) : A1 ln η + αA2 = − B1 (ln η + γ) − B2 (α − 1) − B12 (α − 1) ,
— if this is true ∀α, η then B2 = −(1 + γ), A2 = γ.
11/17

Match by Van Dyke’s Rule (if you must). Put α = 1 and η = ρ in (5.14), and α = 0 and η = r in (5.15).
Then Van Dyke’s rule gives

E1 H1 = 1 , H1 E1 = 0 , Contradictory ;
E2 H1 = 1 , H1 E2 = A1 , A1 = 1 ;
E1 H2 = 1 + B1 , H2 E1 = 0 , B1 = −1 .

Similarly
B1

E2 H2 = 1 + B1 − (ln r + γ) 

ln(1/ε) 
  Contradictory .
ln ρ A1 ln r 
H2 E2 = A1 1 + = 

ln(1/ε) ln(1/ε)
However
B1
E3 H2 = 1 + B1 − (ln r + γ)
ln(1/ε)
1 A1 ln r A2
H2 E3 = A1 + (A1 ln ρ + A2 ) = + ,
ln(1/ε) ln(1/ε) ln(1/ε)
and hence
A1 = 1 , B1 = −1 , A2 = γ .
11/01 As before, Van Dyke’s rule works if n 6= m.
08/03
09/04 5.7.4 A ‘terrible’ problem
15/08
14/14
Consider the equation with n = 2 plus a new term:
1
frr + fr + fr2 + εf fr = 0 , f (1) = 0, f →1 as r → ∞ .
r
First compare the size of terms using the solution calculated in §5.7.3:
 2
1 1 1
r = ord(1), f∼ , fr2 ∼ , frr ∼ ,
ln(1/ε) ln(1/ε) ln(1/ε)
 2
1 2 1 1
ρ = ord(1), f ∼1 , fρ ∼ , fρ ∼ , fρρ ∼ .
ln(1/ε) ln(1/ε) ln(1/ε)
From this comparison of terms we might expect a small perturbation to the previous answer.



Asymptotic expansion for r fixed. As in §5.7.3 propose the asymptotic expansion
1 1 1
f∼ f1 + 2 f2 + 3 f3 + . . . . (5.16)
ln(1/ε) ln(1/ε) ln(1/ε)
Then from substituting into the equation we find:
1
ln−1 (1/ε): f100 + f10 = 0 , f1 = A1 ln r ,
r
1 0 2
ln−2 (1/ε): 00
f2 + f2 = −f10 , f2 = A2 ln r − 12 A21 ln2 r ,
r
1
ln−3 (1/ε): f300 + f30 = −2f10 f20 , f3 = A3 ln r + 13 A31 ln3 r − A1 A2 ln2 r .
r
By induction, one can show that as r → ∞,
 
1
fn ∼ (−)n − An1 lnn r + An−2
1 A 2 lnn−1
r ,
n
and hence by summation that
" ! #
A1 A2
f ∼ ln 1 + + 2 + . . . ln r as r → ∞ .
ln(1/ε) ln(1/ε)
15/07
11/16
Lemma (for future reference). Instead of adopting the above approach, ignore §5.7.3 and assume

f = f0 + . . . .

Then
1 2
f000 + f00 + f00 = 0 ⇒ f0 = ln (1 + A ln r) if f0 (1) = 0 .
r
If A = O ln−1 (1/ε) , this suggests that the natural variable is, say,


ln r
t= . (5.17)
ln(1/ε)

Asymptotic expansion for ρ fixed. In this variable ε does not appear in the equation:

fρρ + ρ1 fρ + fρ2 + f fρ = 0 .

We pose the Poincaré expansion:


g1 g2
f ∼1+ + 2 + . . . . (5.18)
ln(1/ε) ln(1/ε)
Substitute, equate, etc:
1 e−ρ
ln−1 (1/ε) : g100 + g10 + g10 = (ρeρ g10 )0 = 0 ,
ρ ρ
Z ∞ −τ
e
g1 = B1 dτ = B1 E1 (ρ) , setting g1 (∞) = 0 .
ρ τ

e−ρ
ln−2 (1/ε) : (ρeρ g20 )0 = −g102 − g1 g10 ,
ρ
g2 = B2 E1 (ρ) + B12 2E1 (2ρ) − 12 E12 (ρ) − e−ρ E1 (ρ) .


As ρ → 0 we have

g1 ∼ B1 (− ln ρ − γ) ,
∼ B2 (− ln ρ − γ) + B12 − 21 ln2 ρ − (γ + 1) ln ρ − 12 γ 2 − γ − ln 4 .

g2



15/13
The leading-order behaviour as ρ → 0, g2 ∼ − 12 B12 ln2 ρ, comes from the balance

1 2 B2
(ρg20 )0 ∼ −g10 ∼ 21 .
ρ ρ
Similarly we can show that for small ρ
1 ln ρ  1
g300 + g30 ∼ −2g10 g20 ∼ −2B13 2 − 2B13 (γ + 1) + 2B1 B2 2 ,
ρ ρ ρ
1 3 3 3
 2
g3 ∼ − 3 B1 ln ρ − B1 (γ + 1) + B1 B2 ln ρ .

By induction it is possible to conclude that as ρ → 0


1
− B1n lnn ρ − B1n (γ + 1) + B1n−2 B2 lnn−1 ρ .

gn ∼
n
11/15

Match using the intermediate variable


η = εα r = ρεα−1 .
Then
1
(5.16) : f∼ A1 (α ln(1/ε) + ln η)
ln(1/ε)
1 h
2
i
+ 2 − 12 A21 (α ln(1/ε) + ln η) + . . . + . . .
ln (1/ε)
" #
n+1
1 (−) n
+ n An1 (α ln(1/ε) + ln η) + . . . + . . . ;
ln (1/ε) n
B1
(5.18) : f ∼1 + [−(α − 1) ln(1/ε) − ln η − γ + . . . ]
ln(1/ε)
B2 
 
1 2

+ 2 − 1 ((α − 1) ln(1/ε) + ln η) + . . . + . . . + . . .
ln (1/ε) 2
Bn
 
1 n
+ n − 1 ((α − 1) ln(1/ε) + ln η) + . . . + . . . .
ln (1/ε) n

Equate these two expansions. At leading order

ln0 (1/ε) : αA1 − 12 α2 A21 + 13 α3 A31 + . . . = 1 − B1 (α − 1) − 21 B12 (α − 1)2 + . . . .

or from summing the series


ln (1 + αA1 ) = 1 + ln [1 − (α − 1)B1 ] .
This must be true ∀α, hence

e(1 + B1 ) − 1 = 0 , A1 + eB1 = 0 ,

i.e.  
e−1
B1 = − , A1 = (e − 1) .
e
Note that:

• in matching, an infinite number of terms jumped order — hence the need for general expressions
11/18 for fn & gn ;
11/19
11/20 • hence there is no hope for Van Dyke’s rule. §
11/22



Is there an easier way? Recall from earlier that a natural variable is

ln r
t =  . (5.19)
ln 1/ε

Note that
{r = 1} ≡ {t = 0} ,
and that
k
ρ = ord(1) when r = for k = ord(1) ,
ε
i.e. when
ln k
t=1+  for k = ord(1).
ln 1/ε

finite value

Let τ = 1 − t, so that ρ = ord(1) when τ = ord(ln−1 (1/ε)). Next substitute into the equation to obtain

fτ τ + fτ2 = − ln(1/ε) e−τ ln(1/ε) f fτ .

Seek a Poincaré expansion for τ > 0 (so that the r.h.s. is ‘exponentially’ small):
1
f = f0 + f1 + . . . , (5.20)
ln(1/ε)
then
2
f0τ τ + f0τ =0.
If we require f0 (1) = 0, then
f0 = log (1 + α0 (1 − τ )) . (5.21a)
We need to match with the outer solution that is valid for ρ = ord(1), i.e. we need to match with the
solution that is valid in the region where τ = ord(ln−1 (1/ε)). Since
ln 1/r ln 1/ρ
τ =1+ = ,
ln(1/ε) ln(1/ε)
introduce
s = ln ρ = −(ln(1/ε))τ
and seek an expansion
G1 (s) G2 (s)
f =1+ + 2 + . . . . (5.21b)
ln(1/ε) ln(1/ε)
As before

e−u
Z
G1 = B1 du ,
es u
G1 → B1 (−s − γ + . . . ) as s → −∞.

Now try matching by Van Dyke’s rule using s = −(ln(1/ε))τ :


  
α0 s α0 s
H2 E1 f = H2 log 1 + α0 + = ln(1 + α0 ) + ,
  ln(1/ε)  (1 + α0 ) ln(1/ε)
B1 B1 s
E1 H2 f = E1 1 + (ln(1/ε))τ − γ + . . . = 1 + B1 τ = 1 − .
ln(1/ε) ln(1/ε)

Hence, as before,
1−e
α0 = e − 1 , B1 = .
16/06
e
15/14



5.8 Strained Coordinates

The method of strained co-ordinates is a better, but less general way, of solving certain singular pertur-
bation problems. However, usually such problems can also be solved either by using MAEs, or by means
of the method of Multiple Scales.



6 A Little More on Asymptotics Beyond All Orders
As we have seen in the case of Stokes lines, sometimes it is not sufficient to consider just the algebraic
asymptotic expansion of a solution. This section is concerned with looking at further examples where
exponentially small terms can play a key role. For a more general overview I recommend The Devil’s
invention: asymptotic, super-asymptotic and hyper-asymptotic series by John P. Boyd (Acta Applicandae,
56, 1-98, 1999) which is available at

http://www-personal.engin.umich.edu/~jpboyd/boydactaapplicreview.pdf

6.1 More on What Happens at Stokes Lines

In §3.7 we looked at what happens near the Stokes line of the Airy function when arg λ = 2π/3. In this
section we return to the ‘turn-on’ of the sub-dominant exponentially small term at a Stokes line, but this
time for the complementary error function.

6.1.1 The complementary error function

There are a number of ways of getting a handle on what happens at Stokes lines. In §3.7 we used Borel
summation and an integral estimate obtained using steepest descents in the complex plane. Here we will use a differential-equations approach for the model problem of the complementary error function:
\[
\operatorname{erfc}(z) = \frac{2}{\sqrt{\pi}}\int_z^{\infty} e^{-t^2}\,dt . \tag{6.1}
\]
From the first part of the course (see also (3.12a), (3.12b) and (3.12c)),
\[
\operatorname{erfc}(z) \sim \frac{e^{-z^2}}{z\sqrt{\pi}}\sum_{s=0}^{\infty}\frac{(-)^s(2s)!}{s!\,(4z^2)^s}
\quad\text{for } |\arg(z)| < \tfrac34\pi , \tag{6.2a}
\]
\[
\operatorname{erfc}(z) \sim 2 + \frac{e^{-z^2}}{z\sqrt{\pi}}\sum_{s=0}^{\infty}\frac{(-)^s(2s)!}{s!\,(4z^2)^s}
\quad\text{for } |\arg(-z)| < \tfrac34\pi . \tag{6.2b}
\]

We note that erfc and ‘2’ are solutions to the differ-


ential equation

w00 + 2zw0 = 0 . (6.3)

Moreover, if we let
2 N −1
e−z X (−)s (2s)!
erfc(z) = √ + RN , (6.4a)
z π s=0 s!(4z 2 )s

then
2
00 0 e−z (−)N (2N )!
RN + 2zRN =− √ . (6.4b)
z π (N − 1)! 4N −1 z 2N
Write z = reiθ and consider the case of fixed r, so that

d −ie−iθ d
= . (6.5)
dz r dθ
12/17
Then (6.4b) becomes

e−2iθ e−2iθ exp −r2 e2iθ + iπN − (2N + 1)iθ (2N )!
 
− 2 (RN )θθ + i − 2 (RN )θ = − √ . (6.6)
r r2 π(N − 1)!4N −1 r2N +1



Now assume that |z| = r  1, then it is possible to show using Stirling’s formula (see § 3.5.2) that the
right-hand-side forcing is smallest when N ∼ r2 . Guessing that the remainder, RN , will be smallest then,
we let
N = int(r2 ) , r2 = N + α , (6.7a)

then
8r
RHS ∼ − √ exp (−iα(2θ − π) − iθ) exp −r2 e2iθ + 1 + i(2θ − π) .

(6.7b)

This has a local maximum when cos(2θ) = −1, i.e. when θ = ±π/2. Moreover we note that when
06/04 θ = ±π/2, then at leading order the RHS both stops oscillating and is independent of α.
12/16
On the basis of this try an asymptotic rescaling of the form
ρ π
r = , θ = + δφ , (6.8)
ε 2
where ε  1 and δ  1 (and for simplicity we have focused close to θ = + π2 ). Then, from (6.7a),
N = O(ε−2 ), and
ε2 e−2iδφ d2 i ε2 e−2iδφ
 
d
RN − +2 RN
δ 2 ρ2 dφ2 δ ρ2 dφ
 2 
8ρi (−2iδφα−iδφ) ρ 2iδφ

∼ − √ e exp e − 1 − 2iδφ
ε 2π ε2
δ2
 
8ρi
∼ − √ exp −2ρ2 φ2 2 . (6.9)
ε 2π ε
There is a distinguished scaling when δ = O(ε); for simplicity take δ = ε. Then
d 4ρ
RN ∼ √ exp −2ρ2 φ2 ,

(6.10)
dφ 2π
and thus √ 
RN ∼ A + erf 2ρφ , (6.11)
where A is a constant. We recall that
ρ  π 
z= exp i + εφ ,
ε 2
and hence for ‘matching’ with (6.2a) we deduce that that we require that RN → 0 as φ → −∞, i.e. we
require A = 1. Thus √ 
12/15
RN ∼ 1 + erf 2ρφ . (6.12)

We can interpret this result as saying that within an angle of O(|z|−1 ) of arg z = ± π2 , the sub-dominant



term is ‘turned on’ by an error function (as was the case for the Airy function). In fact the turn on by
12/18 an error function is generic.
This agrees with question 1 on Example Sheet 2. There you will find that
2 N −1
e−z X (−)s (2s)!
erfc(z) = √ + RN , (6.13)
z π s=0 s!(4z 2 )s

where 2

2 (−)N (2N )! e−t
Z
RN = √ dt . (6.14)
π 22N N ! z t2N
1
On the basis of the above scaling (i.e. ε = δ = O(N − 2 )), take
 
1 iπ iψ
z = N exp
2 + 1 . (6.15)
2 N2
For large N try applying the method of steepest descents to (6.14). The stationary point is found to occur
1
1 1
at t = iN 2 . Let t = iN 2 eiv/N 2 , then
1
1 1
−t2 − 2N log t = N e2iv/N 2 − 2N log(iN 2 ) − 2N 2 iv
∼ N (1 − log N − iπ) − 2v 2 + . . . . (6.16)

Hence
∞ 2 −∞
e−t (−)N eN −2v2
Z Z
dt ∼ − e dv
z t2N ψ NN
Z √2ψ
(−)N eN 2
∼ √ e−w dw
2N N
−∞
N N  1 
(−) e π 2 √ 
∼ N
1 + erf 2ψ , (6.17)
2N 2
18/07
18/08 and thus, as before,  √ 
12/19 RN ∼ 1 + erf 2ψ , (6.18)
12/20
12/22 since ρφ ∼ ψ.

6.2 A Model Equation (With Wider Implications)

Consider the asymptotic solution to

fyy + λ3 (1 + iy)f = −λ2 , f → 0 as |y| → ∞ , (6.19)

for large |λ|, and real y. Try



f1 f2 X fn
λf = f0 + 3
+ 6
+ · · · = . (6.20)
λ λ n=0
λ3n
Then
1 fn00
f0 = − , and for n = 0, 1, 2, . . . fn+1 = − .
1 + iy (1 + iy)
Hence
2
f1 = − , etc. .
(1 + iy)4
Thus an asymptotic expansion can be found to all orders, irrespective of the sign of λ. Further, the
expansion satisfies the boundary conditions as |y| → ∞. However the expansion (6.20) is only valid ∀y if
16/14 λ → −∞.



[Figure 6.4 comprises eight contour plots, one for each of arg(µ) = 0, π/24, π/6, π/3, π/2, 2π/3, 5π/6 and π, over the square −2 ≤ Re z, Im z ≤ 2.]

Figure 6.4: Contours of Re(3µz − z³) (blue: high; red: low), and Im(3µz − z³) (black).


To see this we start from the fact that the exact solution is
Z
exp λ(1 + iy)z − 13 z 3 dz ,

f (y, λ) = (6.21)
C

where C starts from z = 0 and extends to z = ∞ in the sector | arg(z)| < π/6. For large |λ|, we can
estimate the integral using steepest descents. In the figure 6.4 we plot contours of Re(3µz − z 3 ) and
Im(3µz − z 3 ), where µ = λ(1 + iy). There are two cases to consider.

λ → −∞. If λ → −∞, then |π − arg µ| < π/2, in which case we deduce from figure 6.4 that (6.20) is
recovered by Watson’s Lemma.
λ → +∞. However, if λ → +∞, then the asymptotic behaviour depends crucially on whether the Wat-
son’s lemma contribution from the end point at z = 0 is larger or smaller than the Laplace’s
method √ contribution from the saddle point. As indicated in figure 6.4, if π/3 < | arg µ| < π/2, i.e. if
|y| > 3, then the Watson’s √ lemma contribution dominates, and (6.20) is again recovered. However,
if | arg µ| < π/3, i.e. if |y| < 3, then the Laplace’s method contribution dominates and
1
π2 
2 23 3

f∼ 1 1 exp 3 λ (1 + iy) 2 ; (6.22)
λ (1 + iy)
4 4

this is exponentially large.

To understand this result, note that equation (6.19) has a turning point at

1 + iy = 0 .

Set
  13
i
y =i+ s,
λ3
then
2
fss − sf = −i 3 .

The complementary function solutions to this equation are Ai(s) and Bi(s), which have anti -Stokes lines
in the complex s-plane at
π π
arg s = − , , π .
3 3
We plot these anti -Stokes lines in the complex y-plane:

λ>0 λ<0



Hence when λ > 0, we see that since two anti -
Stokes lines cross the real y axis, the solution
that decays as |y| → ∞ can√be exponentially
small as λ → ∞ for √ |y| > 3, but exponen-
tially large for |y| < 3. This is not possible
when λ < 0, since only one anti -Stokes line
crosses the real y-axis. Note that in the case
when λ > 0, it is possible to get from y = −∞
to y = ∞ without seeing the exponentially
large solution, by deforming into the complex
λ>0 y-plane.

The idea of deforming into the complex plane to sidestep regions where the solution is exponentially large
10/04 has wider applications (e.g. eigenvalue problems in stability, nonlinear models of crystal growth).
17/13
13/16
13/17 6.3 A Model of Crystal Growth (Unlectured)

A simple geometric model of crystal growth is:

ε2 θ000 + θ0 = cos θ −∞<s<∞ ; (6.23)

ε represents surface tension;


s 00 arclength along the solid-liquid interface;
θ(s, ε) 00 the angle between the local normal and the direc-
tion of propagation of the crystal.
A ‘needle crystal’ is a monotonic solution satisfying
π
θ(s, ε) → ± as s → ±∞ . (6.24)
2

6.3.1 Regular perturbation

Try
θ = θ0 + ε2 θ1 + ε4 θ2 + . . . . (6.25)



We fix the apex at s = 0 by requiring that θj (0) = 0.
ε0 : θ00 − cos θ0 = 0 ,
π
θ0 = − + 2 tan−1 (es ) ,
2
π
θ0 → ± as s → ±∞ ,
2
θ0 increases monotonically.

ε2 : θ10 + sin θ0 θ1 = −θ0000 ,


θ1 = (2 tanh s − s) sech s ,
θ1 → 0 as s → ±∞ .

ε4 : θ2 = − 12 s2 tanh s + 5s − 4s sech2 s − 32 50
tanh s sech2 s sech s ,

3 tanh s + 3
θ2 → 0 as s → ±∞ .
It is possible to prove that: (a) θj (−s) = −θj (s) ⇒ θj00 (0) = 0,
PN 2n
(b) 0 ε θj (s) ∓ π/2 → 0 as s → ±∞,
(c) the solution is monotonic for small ε.
18/14 Hence we appear to have a solution correct to all orders!

6.3.2 Too many boundary conditions

How many boundary conditions are implied by (6.24)? Suppose we linearise about s = −∞ by setting
π
θ = − + αems .
2
We find that
ε2 m3 + m =1

1 − ε2 + . . . decays as s → −∞
m= i
± − 21 + . . . grow as s → −∞.
ε
Hence we have effectively imposed 2 boundary conditions as s → −∞. Similarly, we have imposed 2
boundary conditions as s → +∞.
Thus we have imposed 4 boundary conditions on a 3rd order ODE!

6.3.3 A well posed problem

Suppose that we just impose


π
→ 0 as s → −∞ .
θ+ (6.26)
2
Then a one-parameter family of solutions will exist. We fix the solution by requiring that
θ(0; ε) = 0 . (6.27)
The question is: ‘Does this solution satisfy (θ − π2 ) → 0 as s → +∞?’
Suppose that it does, then a second solution is
Θ(s; ε) = −θ(−s; ε) .
Θ and θ differ by at most a translation, hence θ is antisymmetric about some point. However, θ is
monotonic, analytic and vanishes at s = 0, thus
θ(s; ε) is antisymmetric about s = 0 .
We conclude that a needle crystal satisfies
θ00 (0; ε) = 0 . (6.28)
18/06
19/13



6.3.4 Analytical continuation into the complex plane

We analytically continue solution into the complex s-plane; the continued solution still satisfies

ε2 θ000 + θ0 = cos θ .

For future reference we note that if θ(s; ε) is antisymmetric, then



X
θ(s; ε) = an s2n+1 ,
0

and hence Re (θ) = 0 if s is pure imaginary.


Next we analytically extend the asymptotic expansion (6.25) into the complex s-plane. We note that this
asymptotic expansion breaks down near

s = ±(2n + 1) n = 0, 1, 2, . . . ,
2
because sech s = ∞ near such points. We seek an asymptotic expansion near to one of the points closest
to the real axis, i.e. s = iπ
2 . In particular, if we let


s= +σ ,
2
then
π
θ0 = − + 2i tanh−1 (eσ ) ,
2
and  
2 π
θ0 ∼ i ln − − + ... as σ → 0.
σ 2

Further, from HOT (i.e. higher order terms),


   ε 2 50  ε 4 
π 2
θ ∼ − + i ln − −2 + + ... as σ → 0 .
2 σ σ 3 σ

This expansion becomes disordered for σ = O(ε). Hence when σ is this small we rescale:


s= + εz ,
2  
2 π
θ = i ln − + iϕ(z, ε) .
ε 2

Then  ε 2
ϕ000 + ϕ0 = eϕ − e−ϕ , (6.29)
2
and from matching we require that
2
ϕ → − ln(−z) − + ... as Re (z) → −∞.
z2



We seek an asymptotic solution to (6.29):

ϕ = ϕ0 + ε2 ϕ1 + . . . ,

then
ϕ000 0
0 + ϕ0 = e
ϕ0
, (6.30)

and
2
ϕ0 → − ln(−z) − as Re (z) → −∞. (6.31)
z2
It is possible to prove that ∃ a unique solution for ϕ0 in Re (z) 6 0. The strategy is therefore to:

(a) integrate (6.30) from Re (z) = −∞ to Re (z) = 0 along a line on which Im (z) = constant < 0;
(b) continue this solution down Re (z) = 0 to s = 0 and compute θ00 (0, ε).

Write
2
ϕ0 = − ln(−z) − + . . . + ϕ̃ , (6.32)
z2
and linearise (6.30) for large |z|. We find that

ϕ̃ = αϕ̃1 + β ϕ̃2 + γ ϕ̃3 ,


where
1 4
ϕ̃1 ∼ − + 3 + . . . ,
z  z 
1
iz 3i
ϕ̃2 ∼ z e
2 1+ + ... ,
8z
 
1
−iz 3i
ϕ̃3 ∼ z e
2 1− + ... .
8z



The matching condition (6.31) implies that if we let Re (z) → −∞ along Im (z) = constant, then we
deduce that in this ‘direction’
α=β=γ=0.
This does not mean that α = β = γ = 0 in the direction specified by Im (z) → −∞ with Re (z) = 0
because ϕ3 (z) is exponentially small in that direction. Hence, while we might expect that

α=β=0 for Im (z) → −∞, Re (z) = 0 ,

it is possible that γ 6= 0 in that direction.


In order to get a handle on these terms, we note that when Re (z) = 0, the algebraic terms in (6.32) are
real valued, hence as Im (z) → −∞ with Re (z) = 0

π 1   
−1
Im (ϕ(z)) ∼ − + Γ|z| 2 e−|z| 1 + O |z| ,
2
where Γ = Im γe−iπ/4 . Moreover, numerical solutions to (6.30) subject to (6.31) show that


Γ ≈ 2.11 ;

a result that can also be obtained analytically using Borel summation. Hence
1   
−1
Re (θ(s, ε)) ∼ −Γ |z| 2 e−|z| 1 + O |z|

as Im (z) → −∞ with Re (z) = 0 = Re (s). With a little more effort one can conclude, by integrating
along Re (s) = 0 back to s = 0, that
5
θ00 (0, ε) ∼ 2Γε− 2 exp(−π/2ε) ,

which is exponentially small. This term is non-zero because of a Stokes-line effect.


11/04
19/07 Often exponentially small terms do not matter, but they do here. We conclude that θ(s, ε) is not antisym-
19/08 metric, and hence the well-posed problem does not represent a needle crystal. Indeed, no needle crystal
19/14 solutions exist for small ε.
13/18
13/19
13/20
13/22



7 Method of Multiple Scales
11/95
Multiple scales is a useful technique for a number of problems. For instance, it underlies much of the
theory of ‘ray-tracing’.
One of the simpler, if important, uses of multiple scales is to describe the evolution of linear waves through
slowly varying media (e.g. sound waves through the atmosphere). For such examples, the different scales
are often immediately apparent (e.g. the wavelength of sound, and the depth of the troposphere).
We will concentrate on nonlinear problems where the need for two (or more) scales is necessary, but may
not be immediately apparent.

MAE: Two or more processes with different scales; processes act separately in different regions.
13/15 MS: Two or more processes each with own scale; processes act simultaneously.

7.1 Van der Pol oscillator

The Van der Pol oscillator is described by the equation

ẍ + εẋ(x2 − 1) +x = 0 , t>0, (7.1)


| {z }
nonlinear friction
−ve : |x| < 1
+ve : |x| > 1

where 0 < ε  1. Typical initial conditions might be x = 1, ẋ = 0 at t = 0 (although the precise initial
conditions are not crucial for what follows).
Solutions are found to tend to a finite amplitude oscillation, during which energy losses when |x| > 1 are
balanced by energy gains when |x| < 1.

7.1.1 Regular perturbation

Try

x = x0 + εx1 + . . . . (7.2)

Then at leading order

ẍ0 + x0 = 0 ⇒ x0 = cos t . (7.3)

At the next order

\[
\ddot{x}_1 + x_1 = \dot{x}_0\big(1 - x_0^2\big) = -\sin^3 t = -\tfrac34\sin t + \tfrac14\sin 3t , \tag{7.4a}
\]
and
\[
x_1 = \tfrac38\big(t\cos t - \sin t\big) - \tfrac1{32}\big(\sin 3t - 3\sin t\big) . \tag{7.4b}
\]

Note that the expansion loses its asymptoticness when


1

εx1 = ord(x0 ) i.e. when t = ord ε . (7.5)

The ‘problem’ is that the ε-damping term slowly changes the oscillation amplitude on a time scale of
ord(ε−1 ) by the slow accumulation of small effects.



7.1.2 Multiple scales expansion

The oscillator has two processes:

Harmonic oscillation on time scale of ord(1). Slow drift in amplitude (and possible phase)
on time scale of ord(ε−1 ).
τ =t T = εt
The ‘fast’ time scale. The ‘slow’ time scale.

We treat τ and T as independent variables:

• the rapidly changing features are modelled by τ ,


• the slowly changing features are modelled by T .

Hence we seek a solution with the form

x(t; ε) = x(τ, T ; ε) , (7.6a)

where the two variables are introduced as an artifice in order to remove secular effects. We use the chain
rule to compute derivatives:
d ∂x ∂x
x(t; ε) = (τ, T ; ε) + ε (τ, T ; ε) , (7.6b)
dt ∂τ ∂T
ẍ = xτ τ + 2εxτ T + ε2 xT T . (7.6c)

We now seek an asymptotic expansion of the form

x = x0 (τ, T ) + εx1 (τ, T ) + . . . , (7.7)

and require the expansion to be valid for T = ord(1), i.e. t = ord(ε−1 ). Then at leading order

ε0 : x0τ τ + x0 = 0 , t>0, (7.8a)


x0 = 1 , x0τ = 0 , at t = 0 . (7.8b)

This has solution in terms of trigonometric functions (we could alternatively use complex notation, as we
shall see below),
x0 = R0 (T ) cos (τ + θ0 (T )) , (7.8c)
where, in order to satisfy the initial conditions,

R0 (0) = 1 , θ0 (0) = 0 . (7.8d)

The functions R0 and θ0 are not fixed at this stage — we need equations for them. At next order we have
that

\[
\varepsilon^1:\quad x_{1\tau\tau} + x_1 = -x_{0\tau}\big(x_0^2 - 1\big) - 2x_{0\tau T}
= 2R_0\theta_{0T}\cos(\tau+\theta_0) + \big(2R_{0T} + \tfrac14 R_0^3 - R_0\big)\sin(\tau+\theta_0) + \tfrac14 R_0^3\sin 3(\tau+\theta_0) , \tag{7.9a}
\]
together with the initial conditions
\[
x_1 = 0 , \qquad x_{1\tau} = -x_{0T} = -R_{0T} \qquad\text{at } t = 0 . \tag{7.9b}
\]
The solution is
\[
x_1 = R_0\theta_{0T}\,\tau\sin(\tau+\theta_0(T)) - \tfrac12\big(2R_{0T} + \tfrac14 R_0^3 - R_0\big)\,\tau\cos(\tau+\theta_0(T))
- \tfrac1{32}R_0^3\sin 3(\tau+\theta_0(T)) + R_1\sin(\tau+\theta_1(T)) . \tag{7.9c}
\]

However, the asymptotic expansion will not be valid for τ = ord(ε−1 ) unless

\[
R_0\theta_{0T} = 0 , \qquad 2R_{0T} + \tfrac14 R_0^3 - R_0 = 0 . \tag{7.10a}
\]

This is the ‘secularity’ or ‘integrability’ condition of Poincaré. Using the initial conditions we deduce that
\[
\theta_0 = 0 , \qquad R_0 = \frac{2}{\big(1 + 3e^{-T}\big)^{1/2}} . \tag{7.10b}
\]

In particular note that R₀ → 2 as T → ∞. It follows that the solution for x₁ becomes
\[
x_1 = R_1\sin(\tau+\theta_1(T)) - \tfrac1{32}R_0^3\sin 3(\tau+\theta_0(T)) , \tag{7.11a}
\]
while the initial conditions for R₁ and θ₁ become
\[
R_1(0)\sin\theta_1(0) = 0 , \qquad
R_1(0)\cos\theta_1(0) - \tfrac3{32}R_0^3(0)\cos 3\theta_0(0) = -R_{0T}(0) ,
\]
i.e.
\[
\theta_1(0) = 0 , \qquad R_1(0) = -\tfrac9{32} . \tag{7.11b}
\]
The equations governing R1 and θ1 are determined by the secularity condition for the x2 problem.
However, we then find that there is insufficient freedom in R1 and θ1 to avoid breaking the asymptoticness
when T = ord(1). This problem can be avoided by introducing a super slow time scale, T2 = ε2 t.
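The envelope (7.10b) is easy to verify numerically; the following sketch (an addition, not from the notes) integrates the Van der Pol equation with scipy and compares the maxima of |x| over roughly one period with R₀(T), for the arbitrary choice ε = 0.05.

```python
# Van der Pol amplitude vs the multiple-scales envelope R0(T) = 2/(1 + 3 e^{-T})^{1/2}:
# a sketch, not from the notes; eps = 0.05 is an arbitrary choice.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
rhs = lambda t, u: [u[1], -u[0] - eps * u[1] * (u[0] ** 2 - 1.0)]
sol = solve_ivp(rhs, [0.0, 80.0], [1.0, 0.0], max_step=0.01)

t, x = sol.t, sol.y[0]
for T in [0.5, 1.0, 2.0, 3.0]:
    window = np.abs(t - T / eps) < np.pi      # a window of about one oscillation period
    print(T, np.max(np.abs(x[window])), 2.0 / np.sqrt(1.0 + 3.0 * np.exp(-T)))
```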

Alternative approach to deriving (7.10a). Instead of solving explicitly for x1 , we could use a condition
based on requiring x1 to be periodic over the time scale τ . For instance, we could require that (cf.
inner products and Sturm-Liouville operators and integrating by parts twice)
Z 2π
(x1τ τ + x1 ) sin
cos (τ + θ0 ) dτ = 0 , (7.12a)
0

i.e.
Z 2π
x0τ x20 − 1 + 2x0τ T sin
 
0
cos (τ + θ0 ) dτ = 0 . (7.12b)

On performing the integrals, (7.10a) is again recovered. This is known as the Fredholm alternative.

7.1.3 A simple example of super slow time scale

Consider the exact solution to the equation

ẍ + 2εẋ + x = 0 ,

i.e.
 12 
x = e−εt cos (1 − ε2 t .

This has:

(a) an oscillation on the time scale t = ord(1),


(b) an amplitude drift on the time scale t = ord(ε−1 ), and

(c) a phase drift on the time scale t = ord(ε−2 ).


 
In general, when working to ord εk on a time scale ord εk−n , one must expect to have a hierarchy of
n slow time scales.



7.2 Mathieu Equation

As a further example of multiple-scales consider solutions to the Mathieu equation:


\[
\ddot{y} + \big(\omega^2 + \varepsilon\cos t\big)\,y = 0 . \tag{7.13}
\]
The coefficients are 2π-periodic. This equation describes the small amplitude oscillations of a pendulum
whose length changes slightly in time. If the natural oscillation frequency is near a multiple of half
the forcing frequency, then the amplitude of the pendulum will increase in time. This is an example of
12/95 parametric excitation.

7.2.1 Floquet Theory (for second order ODEs)

First note that, since the coefficients of the Mathieu equation are 2π periodic, if y(t) is a solution, then
y(t + 2π) is also a solution. Further since the equation is second order, we can write the general solution
as
y(t) = Ay1 (t) + By2 (t) . (7.14)
Combining these results we see that we can write
yj (t + 2π) = αj y1 (t) + βj y2 (t) , (7.15a)
and hence
y(t + 2π) = Ay1 (t + 2π) + By2 (t + 2π) (7.15b)
 
= Aα1 + Bα2 y1 (t) + Aβ1 + Bβ2 y2 (t)
= A0 y1 (t) + B 0 y2 (t) , (7.15c)
where, in matrix notation,
A0
    
α1 α2 A
= . (7.15d)
B0 β1 β2 B
| {z }
P

Suppose (A, B) is an eigenvector of P with eigenvalue λ; then


A0 = λA , B 0 = λB , (7.16a)

and
y(t + 2π) = λy(t) for all t . (7.16b)
Let µ = ln λ/2π and define
ϕ(t) = e−µt y(t) . (7.17a)
Then from (7.16b)
ϕ(t + 2π) = e−µ(t+2π) y(t + 2π) = e−µt y(t) = ϕ(t) for all t , (7.17b)
and hence
y(t) = eµt ϕ(t) , (7.17c)
14/17 where ϕ(t) is a 2π-periodic function.
Since the Mathieu equation is second order, there will be two eigenvalues λ, or equivalently two con-
stants µ, and two eigenvectors (we sidestep the degenerate case of one eigenvector). Then the system is
said to be
unstable if, for either eigenvalue, Re (µ) > 0 ,
stable if, for both eigenvalues, Re (µ) 6 0 .

In the case of the Mathieu equation, if y(t) is a solution, so is y(−t). Thus for stability we must have
Re(µ) = 0 for both eigenvalues.
It is possible to show that there are regions of the (ω², ε) plane where solutions are stable, and other regions where solutions are unstable. We will attempt to find the ‘stability boundaries’ when |ε| ≪ 1 by seeking small-amplitude periodic solutions, and identifying regions of parameter space where they do not exist.
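The Floquet multipliers can also be computed directly; the following sketch (an addition, not from the notes) builds the matrix P by integrating the two solutions with initial data (1, 0) and (0, 1) over one period, and checks for instability near the first resonance ω = 1/2 with the arbitrary choice ε = 0.1.

```python
# Floquet multipliers of y'' + (omega^2 + eps cos t) y = 0: a sketch, not from the notes.
# Instability corresponds to a multiplier with |lambda| > 1.
import numpy as np
from scipy.integrate import solve_ivp

def multipliers(omega2, eps):
    rhs = lambda t, u: [u[1], -(omega2 + eps * np.cos(t)) * u[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, [0.0, 2.0 * np.pi], y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])                 # (y, y') after one period
    return np.linalg.eigvals(np.array(cols).T)    # eigenvalues of the matrix P

for omega2 in [0.15, 0.25, 0.35]:                 # straddling the tongue at omega = 1/2
    print(omega2, np.max(np.abs(multipliers(omega2, eps=0.1))))
# only omega^2 = 0.25 should give a multiplier with modulus noticeably above 1
```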



7.2.2 ω 6= ±n/2

Try the Poincaré expansion

y = y0 (t) + εy1 (t) + ε2 y2 (t) + . . . . (7.18)

From substitution into the Mathieu equation we obtain:

O(ε0 ) : ÿ0 + ω 2 y0 = 0 , (7.19a)


1 2
O(ε ) : ÿ1 + ω y1 = −y0 cos t . (7.19b)

If we seek a real solution, then

y0 = A0 exp ıωt + A∗0 exp −ıωt ,


 
(7.20a)

= A0 exp ıωt + c.c. , (7.20b)

and
ÿ1 + ω 2 y1 = − 21 A0 exp ı(ω + 1)t − 21 A0 exp ı(ω − 1)t + c.c .
 
(7.20c)

It follows that there are ‘secular’ terms if ω ± 1 = −ω, i.e. if ω = ∓½. Further, it is possible to show that higher-order terms are secular only if ω = ±n/2. Thus if ω ≠ ±n/2, we can solve at all orders to show that
\[
y(t) = \exp(i\omega t)\,\varphi(t) + \text{c.c.} ,
\]
where φ is 2π-periodic. We conclude that for ε ≪ 1 and ω ≠ ±n/2, the solution is stable.

1
7.2.3 ω2 − 4 1

See Example Sheet 3.

7.2.4 ω2 − 1  1

Suppose that |ω 2 − 1|  1, and seek a solution of the form

ω 2 = 1 + εa1 + ε2 a2 + . . .

From §7.2.2 we anticipate that resonance will only occur at second order. Hence if a1 6= 0, we expect
there to be no instability; thus we set a1 = 0.

ε0 : 1st harmonic
ε1 : 0th & 2nd harmonics
ε2 : 1st & 3rd harmonics
↑can force resonance

This suggests that we should consider an ord(ε−2 ) slow time scale. Try

τ = t, T = ε2 t , (7.21a)
2
y = y0 (τ, T ) + εy1 (τ, T ) + ε y2 (τ, T ) + . . . . (7.21b)

At leading order the governing equations is

ε0 : y0τ τ + y0 = 0 , (7.22a)

with solution
y0 = A0 (T )eıτ + c.c. . (7.22b)

At next order



ε1 : y1τ τ + y1 = −y0 cos τ
= − 12 A0 (e2ıτ + 1) + c.c. , (7.22c)

with solution
y1 = − 12 (A0 + A∗0 ) + 1
A0 e2ıτ + A∗0 e−2ıτ ,

6 (7.22d)

where any homogeneous component can [usually] be absorbed by a suitable redefinition of A0 . At next
order

ε2 : y2τ τ + y2 = −2y0τ T − a2 y0 − 12 y1 eıτ + e−ıτ




= −2ıA0T + ( 16 − a2 )A0 + 14 A∗0 eıτ − 12


1
A0 e3ıτ + c.c. ,

(7.23a)

where ∗ denotes a complex conjugate. For asymptoticness not to be lost when T = ord(1), it follows from
the secularity condition that
5 1
 
2βT + 12 − a2 α = 0 , 2αT + 12 + a2 β = 0 , (7.23b)

where A0 = α + ıβ. Hence the oscillation is unstable on the slow time scale T if
5
 1 
12 − a2 12 + a2 > 0 , (7.23c)

i.e. if
1 5
− 12 < a2 < 12 . (7.23d)
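The bookkeeping leading to (7.23a) is easy to slip up on, so a symbolic check can be reassuring. The
sketch below is purely illustrative (not part of the notes): it writes z for e^{ıτ}, treats A0 and its conjugate
A0c as independent functions of T, and confirms the coefficient of e^{ıτ} quoted above.

    # Hedged symbolic check of the resonant e^{i tau} coefficient in (7.23a).
    import sympy as sp

    T, a2 = sp.symbols('T a2', real=True)
    z = sp.symbols('z')                       # stands for exp(I*tau)
    A0 = sp.Function('A0')(T)                 # A0 and its conjugate treated as independent
    A0c = sp.Function('A0c')(T)

    y0 = A0 * z + A0c / z                                         # cf. (7.22b)
    y1 = -sp.Rational(1, 2) * (A0 + A0c) \
         + sp.Rational(1, 6) * (A0 * z**2 + A0c / z**2)           # cf. (7.22d)
    y0_tauT = sp.I * sp.diff(A0, T) * z - sp.I * sp.diff(A0c, T) / z   # tau- then T-derivative

    rhs = -2 * y0_tauT - a2 * y0 - sp.Rational(1, 2) * y1 * (z + 1 / z)
    coeff = sp.expand(rhs).coeff(z, 1)
    target = -2 * sp.I * sp.diff(A0, T) + (sp.Rational(1, 6) - a2) * A0 \
             + sp.Rational(1, 4) * A0c
    print(sp.simplify(coeff - target))        # expect 0

Setting this coefficient to zero and splitting into real and imaginary parts with A0 = α + ıβ reproduces
(7.23b).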

Figure 7.5: Plot of the stability boundaries of solutions to the Mathieu equation. In the white regions of
the (ω 2 , ε) plane, all solutions of the Mathieu equation are stable, while in the cross-hatched regions there
is an unstable solution. When ε = 0, the cross-hatched regions meet the ω 2 axis at ω = n/2, n = 0, 1, 2 . . ..
Source: Advanced Mathematical Methods for Scientists and Engineers, by C.M. Bender and S.A. Orszag.

7.3 WKBJLG Theory

Terminology: omit the J if not in Cambridge, and omit the LG if a physicist.


This theory is concerned with asymptotic solutions to equations with slowly varying coefficients, e.g.

ẍ + f (εt) x = 0 . (7.24)

It has both linear and nonlinear variants. A generalisation to two or more independent variables is called
ray theory.

7.3.1 Leading-order solution

Initially assume that f = ω² > 0, and seek a multiple scales solution with

    τ = t ,    T = εt ,                                                  (7.25a)
    x ≡ x(τ, T) = x0(τ, T) + ε x1(τ, T) + . . . .                        (7.25b)

Then at leading order

    x0ττ + ω²(T) x0 = 0 ,                                                (7.26a)

with solution
    x0 = R0(T) cos(ω(T) τ + θ0(T)) .                                     (7.26b)
At next order

    x1ττ + ω² x1 = −2x0τT
                 = 2(ωR0)T sin(ωτ + θ0) + 2ωR0 (ωT τ + θ0T) cos(ωτ + θ0) .   (7.27a)

The secularity condition implies that

    θ0T(T) = −τ ωT(T) ,                                                  (7.27b)

but this is 'impossible', because the fast variable appears in the 'drift' equation for the slow dependence.
In some sense we want 'θ to be larger'. Instead replace the solution for x0 with

    x0(τ, T) = R0(T) cos(θ(T)) ,                                         (7.28a)

where
    θ = (1/ε) Θ0(T) + Θ1(T) + . . . ,                                    (7.28b)

so that small variations in Θ0 on the T timescale produce O(1) changes in θ.

Since
    θt = Θ0T + ε Θ1T + . . . ,                                           (7.29a)

it follows that

    ẋ0 = −R0 Θ0T sin θ + ε (R0T cos θ − R0 Θ1T sin θ) + . . . ,
    ẍ0 = −R0 Θ0T² cos θ − ε ((2R0T Θ0T + R0 Θ0TT) sin θ + 2R0 Θ1T Θ0T cos θ) + . . . .

On substituting these expansions into (7.24) we find that at leading order

    Θ0T² = ω² ,   i.e.   Θ0T = ω ,                                       (7.29b)

where ω > 0 wlog. On applying the secularity condition to the equation for x1 we obtain

    2R0 Θ1T Θ0T = 0          ⇒   Θ1 = const ,
                                                                         (7.29c)
    2R0T Θ0T + R0 Θ0TT = 0   ⇒   R0² ω = const .

Remark. While the local 'energy' E = ½ R0² ω² is not conserved, the 'action' E/ω is conserved (recall that
for a standard harmonic oscillator E = ½ (ẋ² + ω² x²)).

Hence the multiple scales solution has the form

    x ∼ [f(εt)]^{−1/4} (a cos θ + b sin θ) ,                             (7.30a)

where a and b are constants, and

    θ = ∫₀ᵗ [f(εq)]^{1/2} dq .                                           (7.30b)

A similar analysis is possible if f < 0, except that exponentially growing/decaying solutions are found
rather than harmonically oscillating ones. In particular

    x ∼ [−f(εt)]^{−1/4} (A e^{−ϕ} + B e^{ϕ}) ,                           (7.30c)

where A and B are constants, and

    ϕ = ∫₀ᵗ [−f(εq)]^{1/2} dq .                                          (7.30d)

Remark. In order to obtain higher-order approximations, at first sight it might appear that super-slow
time scales, Tn = εⁿt, are needed. However, with care, this is not necessary (see the last example sheet).
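To see how good the leading-order approximation (7.30a) is in practice, one can compare it with a direct
numerical solution of (7.24). The sketch below is illustrative only: the choice f(s) = 1 + s² and the initial
conditions are assumptions made for the example, not taken from the notes.

    # Hedged numerical comparison of (7.30a) with a direct solution of (7.24).
    import numpy as np
    from scipy.integrate import solve_ivp, cumulative_trapezoid

    eps = 0.02
    f = lambda s: 1.0 + s**2                       # assumed slowly varying coefficient

    t = np.linspace(0.0, 300.0, 20001)
    sol = solve_ivp(lambda t_, z: [z[1], -f(eps * t_) * z[0]],
                    (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

    theta = cumulative_trapezoid(np.sqrt(f(eps * t)), t, initial=0.0)    # phase, cf. (7.30b)
    x_wkb = (f(0.0) / f(eps * t))**0.25 * np.cos(theta)   # a = f(0)^{1/4}, b = 0 for x(0)=1, x'(0)=0

    print("max |x_num - x_wkb| =", np.max(np.abs(sol.y[0] - x_wkb)))     # expect an O(eps) error

The f^{−1/4} amplitude factor is precisely the statement R0²ω = const from (7.29c).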
7.3.2 Turning points

What if f = 0 at some point? The solutions (7.30a) and (7.30c) are then singular. In order to investigate
this case, we assume without loss of generality that f(0) = 0 and f′(0) < 0.

We recall that when εt = ord(1), we have (7.30a) as solution for t < 0 (since f > 0), and (7.30c) as
solution for t > 0 (since f < 0). In order to have a complete solution we need the relationship between
(a, b) and (A, B). To this end we observe that when |εt| ≪ 1,

    ẍ + εt f′(0) x ≈ 0 .                                                 (7.31a)

Therefore, the two terms are of a comparable size when

    x/t² ∼ εf′(0) t x    ⇒    t ∼ |εf′(0)|^{−1/3} .                      (7.31b)

Thus we introduce 'medium time', s, defined by

    s = t (−εf′(0))^{1/3} .                                              (7.31c)

Based on the magnitudes of (7.30a) and (7.30c) when t = ord(ε^{−1/3}), i.e. s = ord(1), we scale x by

    x = ε^{−1/6} X0 + . . . .                                            (7.31d)

The leading-order governing equation is then Airy's equation,

    X0ss − s X0 = 0 ,                                                    (7.32a)

with solution
    X0 = α Ai(s) + β Bi(s) ,                                             (7.32b)

where α and β are constants.
This solution must match with those valid when εt = ord(1). First we match (7.32b) as s → ∞ to (7.30c)
as εt → 0+. From the asymptotic expansions for the Airy functions, etc.,

    (7.32b) :   X0 ∼ (1/(s^{1/4} √π)) ( ½ α exp(−⅔ s^{3/2}) + β exp(⅔ s^{3/2}) ) ,        (7.33a)

    (7.30c) :   x0 ∼ (−εt f′(0))^{−1/4} ( A exp(−ϕ) + B exp(ϕ) ) ,                        (7.33b)

where
    ϕ ∼ ⅔ (−εf′(0))^{1/2} t^{3/2} = ⅔ s^{3/2} .                                           (7.33c)

Hence from matching

    α/(2√π) = A/(−f′(0))^{1/6} ,    β/√π = B/(−f′(0))^{1/6} .                             (7.33d)

Note that the determination of A this way is 'dangerous' since that part of the solution is exponentially
small in (7.30c).
We can similarly match (7.32b) as s → −∞ to (7.30a) as εt → 0−. From above

    (7.32b) :   X0 ∼ (1/(√π (−s)^{1/4})) (α sin Θ + β cos Θ) ,    Θ = ⅔ (−s)^{3/2} + ¼π ,             (7.34a)

    (7.30a) :   x0 ∼ (εt f′(0))^{−1/4} (a cos θ + b sin θ) ,    θ ∼ −⅔ (−εf′(0))^{1/2} (−t)^{3/2} = −⅔ (−s)^{3/2} .   (7.34b)

These two expansions match if

    a/(−f′(0))^{1/6} = (β + α)/(2π)^{1/2} ,    b/(−f′(0))^{1/6} = (β − α)/(2π)^{1/2} .                (7.34c)

We therefore have the connection formulae

    A = (a − b)/(2√2) ,    B = (a + b)/√2 .                                                           (7.35)
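The matching above rests on the standard large-|s| behaviour of Ai and Bi. A quick, purely illustrative
check of the asymptotic forms used in (7.33a) and (7.34a), using scipy.special.airy (which returns Ai, Ai′,
Bi, Bi′):

    # Hedged check of the Airy asymptotics used in the matching.
    import numpy as np
    from scipy.special import airy

    s = 8.0
    Ai, _, Bi, _ = airy(s)
    zeta = 2.0 / 3.0 * s**1.5
    print(Ai, np.exp(-zeta) / (2 * np.sqrt(np.pi) * s**0.25))    # agreement to ~1% expected
    print(Bi, np.exp(+zeta) / (np.sqrt(np.pi) * s**0.25))

    s = -8.0
    Ai, _, Bi, _ = airy(s)
    Theta = 2.0 / 3.0 * (-s)**1.5 + np.pi / 4.0
    print(Ai, np.sin(Theta) / (np.sqrt(np.pi) * (-s)**0.25))
    print(Bi, np.cos(Theta) / (np.sqrt(np.pi) * (-s)**0.25))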
7.4 Ray Theory

Consider waves propagating through a slowly varying medium. Assume that they are governed by

    L(∂t, ∂x; εx, εt) ϕ = ε N(∂t, ∂x, ϕ; εx, εt, ε) ,                    (7.36a)

where L is a linear operator, N is a nonlinear operator, and X = εx and T = εt represent the slowly
varying nature of the medium. For instance

    Lϕ ≡ ( ∂²/∂t² − (∂/∂x)( c²(X, T) ∂/∂x ) ) ϕ = 0 .                    (7.36b)

Seek a solution of the form

    ϕ = [A0(X, T) + εA1(X, T) + . . . ] exp( iθ(X, T)/ε ) + c.c. .       (7.37a)

Then
    ϕt = iθT [A0 + εA1 + . . . ] e^{iθ/ε} + ε[A0T + εA1T + . . . ] e^{iθ/ε} + c.c. ,   (7.37b)

and the leading-order approximation to (7.36a) becomes

    L(iθT, iθX; X, T) = 0 ,                                              (7.38a)

i.e. the dispersion relation

    L(−iω, ik; X, T) = 0 ,                                               (7.38b)

where ω = −θT is defined to be the [real] frequency, and k = θX is defined to be the [real] wavenumber.
(7.38b) is often rewritten in the form

    ω = Ω(k; X, T) .                                                     (7.38c)

Consider small variations about the values X0 and T0 by writing X = X0 + εδx and T = T0 + εδt, where
|εδx|, |εδt| ≪ 1. Then

    exp( iθ(X0 + εδx, T0 + εδt)/ε ) ≈ exp( iθ(X0, T0)/ε ) exp( iθX δx + iθT δt + . . . )
                                    ≈ exp( iθ(X0, T0)/ε ) exp( ik δx − iω δt + . . . ) .

Hence the definitions of ω and k are consistent with convention. Further, because

    θXT − θTX = 0 ,                                                      (7.39a)

it follows that
    kT + ωX = 0 ,                                                        (7.39b)

and hence from (7.38c) that

    kT + cg kX = −∂Ω/∂X ,                                                (7.39c)

where cg = ∂Ω/∂k is the group velocity. In characteristic form

    dk/dT = −∂Ω/∂X    on    dX/dT = cg .                                 (7.40a)

A ray is a path along the characteristic traversed with speed cg. In general rays are curved.

Exercise. Show that

    dω/dT = ∂Ω/∂T    on    dX/dT = cg .                                  (7.40b)

Hamilton's Equations. Consider the transformations:

    X → q ,
    k(X, T) → p ,                                                        (7.41a)
    Ω(k; X, T) → H(q, p, T) ,

then (7.40a) becomes

    dp/dT = −∂H/∂q ,    dq/dT = ∂H/∂p .                                  (7.41b)

These are just Hamilton's equations; hence waves move like particles with speed cg. Further, from
(7.38c),
    ∂θ/∂T + H(q, ∂θ/∂q, T) = 0 .                                         (7.41c)

This is the Hamilton-Jacobi equation with the phase, θ(q, T), as the action.
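As a small illustration of (7.40a) and (7.40b), one can integrate the ray equations numerically. The
dispersion relation Ω(k; X) = c(X) k with c(X) = 1 + ½ tanh X below is an assumption made purely for
the example; because this medium is time-independent, ω should be constant along each ray even though
k and the ray speed change.

    # Hedged sketch: integrating the ray equations for Omega(k; X) = c(X)*k.
    import numpy as np
    from scipy.integrate import solve_ivp

    c = lambda X: 1.0 + 0.5 * np.tanh(X)
    dc = lambda X: 0.5 / np.cosh(X)**2

    def rays(T, z):
        X, k = z
        return [c(X),            # dX/dT =  dOmega/dk = c(X)      (group velocity)
                -dc(X) * k]      # dk/dT = -dOmega/dX = -c'(X)*k,  cf. (7.40a)

    sol = solve_ivp(rays, (0.0, 10.0), [-5.0, 2.0], rtol=1e-10, atol=1e-12)
    X, k = sol.y
    print("spread of omega along the ray:", np.ptp(c(X) * k))    # expect ~0, cf. (7.40b)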

7.4.1 Model example

Consider the equation

    ( ∂²/∂t² − (∂/∂x)( c²(X, T) ∂/∂x ) ) ϕ = 0 .                         (7.42)

Substitute

    ϕ = (A0(X, T) + εA1(X, T) + . . . ) exp( iθ(X, T)/ε ) + c.c. ,       (7.43a)

then

    ε⁰ :  −ω² A0 + c² k² A0 = 0 ,                                        (7.43b)
    ε¹ :  −ω² A1 + c² k² A1 = i(ωT A0 + 2ω A0T) + 2c cX ik A0 + ic² (kX A0 + 2k A0X) .   (7.43c)

Hence at leading order we deduce the dispersion relation

    ω = ±ck ,                                                            (7.44a)

and at first order it follows that

    (ω A0²)T + 2c cX k A0² + c² (k A0²)X = 0 ,                           (7.44b)

or equivalently

    (ω A0²)T + (cg ω A0²)X = 0 ,                                         (7.44c)

where cg = ±c = ω/k. In this case no further information comes from the complex conjugate equation.
Write
    A0 = r0 e^{iψ0} ,                                                    (7.45a)

then, on taking real and imaginary parts,

    ψ0T + cg ψ0X = 0 ,                                                   (7.45b)

and
    (ω r0²)T + (cg ω r0²)X = 0 .                                         (7.45c)

Wave action. The local time and spatially averaged energy density of a wave satisfying (7.42) is given by

    E = (ωk/4π²) ∫₀^{2π/ω} ∫₀^{2π/k} ½ (ϕt² + c² ϕx²) dx dt
      = ½ ω² r0² + O(ε) .                                                (7.46)

Hence (7.45c) represents conservation of wave action E/ω.
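The average in (7.46) can be checked symbolically. The sketch below is illustrative only: it writes the
leading-order wave as a real cosine of amplitude r0 (one common normalisation, assumed here rather than
taken from the notes) and confirms that the space-time mean of ½(ϕt² + c²ϕx²) is ½ω²r0² once ω = ck.

    # Hedged symbolic check of the averaged energy density in (7.46).
    import sympy as sp

    x, t, k, omega, c, r0, psi0 = sp.symbols('x t k omega c r0 psi0', positive=True)
    phi = r0 * sp.cos(k * x - omega * t + psi0)
    e = sp.Rational(1, 2) * (sp.diff(phi, t)**2 + c**2 * sp.diff(phi, x)**2)

    avg = sp.integrate(sp.integrate(e, (x, 0, 2 * sp.pi / k)), (t, 0, 2 * sp.pi / omega))
    avg = sp.simplify(avg * omega * k / (4 * sp.pi**2))
    print(sp.simplify(avg.subs(c, omega / k)))     # expect omega**2*r0**2/2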

7.4.2 Conservation of wave action for sound waves (only outlined in lectures)

For 1D sound waves the governing equations are

    ρ(ut + uuz) = −pz ,
    ρt + (ρu)z = 0 ,
    St + uSz = 0 ,
    p ≡ p(ρ, S) .

Consider small perturbations from a basic, slowly varying, state of the form

    ρ = ρ0(Z) + ρ̃ ,
    p = p0(ρ0, S0) + p̃ ,
    S = S0(Z) + S̃ ,

where Z = εz. Assume that the basic state is at constant pressure, so that

    (∂p/∂ρ)|S ρ0Z + (∂p/∂S)|ρ S0Z = 0 ,

or equivalently
    c0²(Z) ρ0Z + p0S(Z) S0Z = 0 .

Linearised perturbations satisfy the equations

    ρ0 ut = −p̃z ,
    ρ̃t + (ρ0 u)z = 0 ,
    S̃t + εu S0Z = 0 ,
    p̃ = c0²(Z) ρ̃ + p0S S̃ .

Hence

    p̃t = c0² ρ̃t + p0S S̃t
        = −c0² (ρ0 u)z − ε p0S u S0Z
        = −c0² (ρ0 u)z + ε c0² ρ0Z u ,

and, using ρ0 ut = −p̃z,

    p̃tt = −c0² (ρ0 ut)z + ε c0² ρ0Z ut
         = c0² ρ0 ( p̃z / ρ0 )z .

The governing wave equation for pressure perturbations is thus (cf. (7.42))

    p̃tt − c0²(Z) ρ0(Z) ( p̃z / ρ0(Z) )z = 0 .                            (7.47)

Multiple-scales analysis. On the basis of the earlier analysis, seek a solution of the form

    p̃ = [a0(Z, T) + εa1(Z, T) + . . . ] exp( iθ(Z, T)/ε ) + c.c. ,

so that

    p̃z  =  ik a0 e^{iθ/ε} + ε(ik a1 + a0Z) e^{iθ/ε} + c.c. ,
    p̃zz = −k² a0 e^{iθ/ε} + ε(−k² a1 + ikZ a0 + 2ik a0Z) e^{iθ/ε} + c.c. ,
    p̃t  = −iω a0 e^{iθ/ε} + ε(−iω a1 + a0T) e^{iθ/ε} + c.c. ,
    p̃tt = −ω² a0 e^{iθ/ε} + ε(−ω² a1 − iωT a0 − 2iω a0T) e^{iθ/ε} + c.c. .

Substitute into the governing equation

    p̃tt − c0² p̃zz + (ε c0² ρ0Z / ρ0) p̃z = 0 ,

then on collecting terms of the same power of ε one obtains

    ε⁰ :  −ω² a0 + c0² k² a0 = 0 ,
    ε¹ :  −ω² a1 − iωT a0 − 2iω a0T + k² c0² a1 − ikZ c0² a0 − 2ik c0² a0Z + ik (c0² ρ0Z / ρ0) a0 = 0 .

These yield the dispersion relation

    ω² = k² c0² ,

and the amplitude equation

    (ω a0²)T + c0² (k a0²)Z − (ρ0Z / ρ0) c0² k a0² = 0 .

From the governing equations

    u  = (k a0 / (ω ρ0)) e^{iθ/ε} + . . . + c.c. ,
    S̃  = −ε (ik a0 / (ω² ρ0)) S0Z e^{iθ/ε} + . . . + c.c. ,
    ρ̃  = (a0 / c0²) e^{iθ/ε} + . . . + c.c. .

The mean local energy density is given by

    E = ⟨½ ρ0 u²⟩ + ⟨½ c0² ρ̃² / ρ0⟩
      = |a0|² / (2 ρ0 c0²) ,

and hence
    E/ω = |a0|² / (2 ρ0 c0² ω) ,

where ω = ±kc0 is the dispersion relation, and the [local] group velocity is given by cg = ±c0. Thus

    cg E / ω = k |a0|² / (2 ρ0 ω²) .
Henceforth assume a0 is real for simplicity. Then

    (E/ω)T + (cg E/ω)Z
        = a0 a0T/(ρ0 c0² ω) − ωT a0²/(2ρ0 c0² ω²) + k a0 a0Z/(ρ0 ω²) + kZ a0²/(2ρ0 ω²)
            − ρ0Z k a0²/(2ρ0² ω²) − k a0² ωZ/(ρ0 ω³)
        = (a0/(2ρ0 c0² ω²)) ( 2ω a0T − ωT a0 + 2k c0² a0Z + kZ c0² a0 − ρ0Z k c0² a0/ρ0 − 2k a0 c0² ωZ/ω )
        = −(a0²/(ρ0 c0² ω²)) ( ωT + k c0² ωZ/ω ) ,

on making use of the amplitude equation. Further, because the dispersion relation is independent of time,
from (7.40b)

    ωT + cg ωZ = 0 ,    i.e.    ωT + (k c0²/ω) ωZ = 0 .
Hence wave action is conserved:

    (E/ω)T + (cg E/ω)Z = 0 .                                             (7.48)
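The algebra above can also be verified symbolically. The sketch below is illustrative only: it takes the
+ branch k = ω/c0 (so cg = c0) and checks that the wave-action divergence is exactly a combination of the
amplitude equation and the frequency equation; when both vanish, (7.48) follows.

    # Hedged symbolic check of the wave-action identity behind (7.48).
    import sympy as sp

    Z, T = sp.symbols('Z T')
    a0 = sp.Function('a0')(Z, T)
    om = sp.Function('omega')(Z, T)
    rho0 = sp.Function('rho0')(Z)
    c0 = sp.Function('c0')(Z)

    k = om / c0                      # dispersion relation, + branch
    cg = k * c0**2 / om              # = c0

    AE = sp.diff(om * a0**2, T) + c0**2 * sp.diff(k * a0**2, Z) \
         - sp.diff(rho0, Z) / rho0 * c0**2 * k * a0**2            # amplitude equation
    FE = sp.diff(om, T) + cg * sp.diff(om, Z)                     # omega_T + cg*omega_Z

    W = a0**2 / (2 * rho0 * c0**2 * om)                           # wave action E/omega
    D = sp.diff(W, T) + sp.diff(cg * W, Z)

    print(sp.simplify(D - (AE / (2 * rho0 * c0**2 * om**2)
                           - a0**2 * FE / (rho0 * c0**2 * om**2))))   # expect 0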
