
Notes on Mathematical Methods in Physics

Jolien Creighton
December 15, 2020

[Cover illustration: a contour C in the complex t-plane, with axes Re t and Im t.]
Contents

I Infinite Series 1
1 Geometric Series 3
2 Convergence 5
3 Familiar Series 13
4 Transformation of Series 15
Problems 21

II Complex Analysis 22
5 Complex Variables 24
6 Complex Functions 27
7 Complex Integrals 34
8 Example: Gamma Function 50
Problems 57

III Evaluation of Integrals 59


9 Elementary Methods of Integration 61
10 Contour Integration 64
11 Approximate Expansions of Integrals 70
12 Saddle-Point Methods 75
Problems 82

IV Integral Transforms 84
13 Fourier Series 86
14 Fourier Transforms 92
15 Other Transform Pairs 100
16 Applications of the Fourier Transform 101
Problems 106


V Ordinary Differential Equations 107


17 First Order ODEs 109
18 Higher Order ODEs 119
19 Power Series Solutions 122
20 The WKB Method 137
Problems 146

VI Eigenvalue Problems 149


21 General Discussion of Eigenvalue Problems 151
22 Sturm-Liouville Problems 153
23 Degeneracy and Completeness 172
24 Inhomogeneous Problems — Green Functions 175
Problems 180

VII Matrices and Vectors 183


25 Linear Algebra 185
26 Vector Spaces 190
27 Vector Calculus 203
28 Curvilinear Coordinates 220
Problems 228

VIII Partial Differential Equations 229


29 Classification 231
30 Separation of Variables 235
31 Integral Transform Method 246
32 Green Functions 250
Problems 268

Appendix 270
A Series Expansions 270
B Special Functions 272
C Vector Identities 286

Index 289
List of Figures

2.1 Integral Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

5.1 Complex Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

6.1 Complex Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

7.1 Contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.2 Contour for Ex. 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.3 Contour for Cauchy integral formula . . . . . . . . . . . . . . . . . 37
7.4 Contour for Taylor’s theorem . . . . . . . . . . . . . . . . . . . . . 40
7.5 Contours for Laurent’s theorem. . . . . . . . . . . . . . . . . . . . 43
7.6 Intersecting Domains . . . . . . . . . . . . . . . . . . . . . . . . . . 48

8.1 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51


8.2 Contour for Integral in Euler Reflection Formula . . . . . . . . . . 54

10.1 Contour for Ex. 10.1 . . . . . . . . . . . . . . . . . . . . . . . . . . 65


10.2 Contour for Ex. 10.2 . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.3 Jordan’s Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

11.1 Error Function and Complementary Error Function . . . . . . . . . 70


11.2 Exponential Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 74

12.1 Integrand of the Gamma Function . . . . . . . . . . . . . . . . . . 75


12.2 Topography of Steepest Descent Surface . . . . . . . . . . . . . . 79

13.1 Step Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88


13.2 Gibbs’s Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . 88

14.1 Damped Oscillator Power Spectrum . . . . . . . . . . . . . . . . . 96

16.1 Contour for Damped Driven Harmonic Oscillator . . . . . . . . . . 104

17.1 Intersecting Adiabats . . . . . . . . . . . . . . . . . . . . . . . . . . 113


17.2 Non-intersecting Adiabats . . . . . . . . . . . . . . . . . . . . . . . 114


19.1 Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . 126


19.2 Legendre Functions of the Second Kind . . . . . . . . . . . . . . . 127
19.3 Hermite Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . 135
19.4 Complex Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

20.1 Solutions to Airy’s Equation . . . . . . . . . . . . . . . . . . . . . . 139


20.2 Airy Functions of the First and Second Kind . . . . . . . . . . . . . 140
20.3 Topography of Airy Function Integrand . . . . . . . . . . . . . . . . 143
20.4 Connection Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . 144
20.5 Potential for Bohr-Sommerfeld Quantization Rule . . . . . . . . . 145

22.1 Bessel Functions of the First and Second Kind . . . . . . . . . . . 157


22.2 Spherical Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . 163
22.3 Modified Bessel Functions of the First and Second Kind . . . . . . 164
22.4 Associated Legendre Functions . . . . . . . . . . . . . . . . . . . . 170

26.1 Passive and Active Rotations . . . . . . . . . . . . . . . . . . . . . 194


26.2 CO2 Molecule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
26.3 CO2 Molecule Vibration Modes . . . . . . . . . . . . . . . . . . . . 202

27.1 Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204


27.2 Vector Fields with Divergence and Curl . . . . . . . . . . . . . . . 205
27.3 Curve in 2-Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . 207
27.4 Double Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
27.5 Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
27.6 Green’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
27.7 Stokes’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
27.8 Gauss’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214

30.1 Drum 01 Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240


30.2 Drum 11 Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
30.3 Drum 21 Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
30.4 Drum 02 Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
30.5 Slab Heating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

31.1 Heat Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247


31.2 Point Source Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 248
31.3 Image Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249

32.1 Circular Drum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250


32.2 Green Function Integral . . . . . . . . . . . . . . . . . . . . . . . . 251
32.3 Slab Heating Redux . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
32.4 Contour Closed in Upper Half Plane . . . . . . . . . . . . . . . . . 259
32.5 Contour Closed in Lower Half Plane . . . . . . . . . . . . . . . . . 260
32.6 Light Cone and Retarded Time . . . . . . . . . . . . . . . . . . . . 263

B.1 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273



B.2 Bessel Functions of the First and Second Kinds . . . . . . . . . . . 276


B.3 Spherical Bessel Functions of the First and Second Kinds . . . . . 278
B.4 Modified Bessel Functions of the First and Second Kinds . . . . . 280
B.5 Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . 282
B.6 Legendre Functions of the Second Kind . . . . . . . . . . . . . . . 283
Preface

These lecture notes are designed for a one-semester introductory


graduate-level course in mathematical methods for Physics. The goal is to
cover mathematical topics that will be needed in other core graduate-level
Physics courses such as Classical Mechanics, Quantum Mechanics, and
Electrodynamics. It is assumed that the student will have had undergraduate
level courses in linear algebra, calculus, ordinary differential equations, partial
differential equations, and complex analysis. However, each module in these
notes begins at a point that is hopefully “too easy” — i.e., already covered in
the undergraduate courses — and progresses to more advanced material.
These notes are based heavily on the book Mathematical Methods of Physics
(2nd edition) by Jon Mathews and R. L. Walker (Addison-Wesley, 1970).
Additional material was drawn from Mathematical Methods for Physicists (3rd
edition) by George Arfken (Academic Press, 1985) and Complex Variables and
Applications (5th edition) by Ruel V. Churchill and James Ward Brown.

Module I

Infinite Series

1 Geometric Series 3

2 Convergence 5

3 Familiar Series 13

4 Transformation of Series 15

Problems 21


Motivation
In physics problems we often encounter infinite series. Sometimes we want to
expand functions in power series, e.g., when we want to evaluate complex
functions for small arguments. Sometimes we have solutions in the form of an
infinite series and we want to sum the series.
This module reviews techniques for determining whether a series converges, techniques for summing series, and recaps certain familiar series that are commonly encountered.
1 Geometric Series

The geometric series is


∑_{n=0}^∞ x^n = 1 + x + x^2 + x^3 + x^4 + · · · .   (1.1)

This series can be summed: consider

f(x) = 1 + x + x^2 + x^3 + x^4 + · · ·   (1.2)
x f(x) = x + x^2 + x^3 + x^4 + · · ·   (1.3)

and subtract the second equation from the first:

(1 − x) f(x) = 1 .   (1.4)

If x ≠ 1 then

f(x) = 1/(1 − x)   (1.5)
     = 1 + x + x^2 + x^3 + x^4 + · · · .   (1.6)

We'll see that the second equality holds only for |x| < 1.
We see geometric series in repeating fractions:

y = 0.345 345 345 · · ·   (1.7a)
  = 0.345 · ( 1 + 1/1000 + 1/1000^2 + · · · )   (1.7b)
  = 0.345 · 1/(1 − 1/1000)   (1.7c)
  = 0.345 · 1000/999   (1.7d)
  = 345/999 .   (1.7e)


The geometric series only converges for |x| < 1.


Consider, e.g., x = 2:

1/(1 − 2) = −1  ≟  1 + 2 + 4 + 8 + · · ·   (1.8)

where the left-hand side is a negative number while the right-hand side is a sum of ever-increasing positive numbers; therefore we see that

f(x) = 1 + x + x^2 + x^3 + x^4 + · · ·   (1.9)

is only valid for |x| < 1 (where it converges).
However, everywhere within this domain,

f(x) = 1/(1 − x),   |x| < 1,   (1.10)

but the expression (1 − x)^{−1} is actually valid everywhere except x = 1.
Therefore we say that

g(x) = 1/(1 − x)   (1.11)

is the analytic continuation of the function

f(x) = ∑_{n=0}^∞ x^n,   |x| < 1.   (1.12)

We will talk more about analytic continuation in the section on complex analysis.
We can easily derive other infinite series from the geometric series:
• Let x → −x:

  1/(1 + x) = 1 − x + x^2 − x^3 + · · ·   (1.13)

  which is an alternating series.
• Let x → x^2:

  1/(1 − x^2) = 1 + x^2 + x^4 + x^6 + · · · .   (1.14)
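As a quick numerical sketch (not part of the original notes), a few lines of Python show the partial sums approaching 1/(1 − x) inside the radius of convergence and running away outside it:

```python
def geometric_partial_sum(x, n_terms):
    """Partial sum 1 + x + x^2 + ... + x^(n_terms - 1) of the geometric series."""
    total = 0.0
    term = 1.0
    for _ in range(n_terms):
        total += term
        term *= x
    return total

# For |x| < 1 the partial sums approach 1/(1 - x) ...
inside = geometric_partial_sum(0.5, 60)    # close to 1/(1 - 0.5) = 2
# ... while for |x| > 1 they grow without bound, even though the
# analytic continuation 1/(1 - x) is finite there.
outside = geometric_partial_sum(2.0, 60)
print(inside, outside)
```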
2 Convergence

An infinite series

∑_{n=1}^∞ a_n = a_1 + a_2 + a_3 + · · ·   (2.1)

is said to converge to the sum S provided the sequence of partial sums has the limit S:

lim_{N→∞} ∑_{n=1}^N a_n = S .   (2.2)

The series is said to converge absolutely if the related series

∑_{n=1}^∞ |a_n|   (2.3)

converges.


Ex. 2.1. The geometric series has partial sums

S_N = ∑_{n=0}^N x^n = 1 + x + x^2 + · · · + x^N   (2.4a)
x S_N = x + x^2 + · · · + x^N + x^{N+1}   (2.4b)

subtract:

(1 − x) S_N = 1 − x^{N+1} .   (2.4c)

• If x = 1 then S_N = N + 1, which diverges in the limit N → ∞.
• If x ≠ 1 then

  S_N = (1 − x^{N+1})/(1 − x) .   (2.5)

  Then, in the limit N → ∞,

  lim_{N→∞} S_N = 1/(1 − x) − x/(1 − x) · lim_{N→∞} x^N   (2.6)

  and note that x^N → 0 as N → ∞ for −1 < x < 1, so

  ∴ lim_{N→∞} S_N = 1/(1 − x)   for |x| < 1   (2.7)

  otherwise the series diverges.

Ex. 2.2. The alternating series

1 − 1/2 + 1/3 − 1/4 + · · ·   (2.8)

converges. To see this, note that

S_{2N} = (1 − 1/2) + (1/3 − 1/4) + · · · + (1/(2N − 1) − 1/(2N)) > 0   (2.9)

since each term in parentheses is positive, but also

S_{2N} = 1 − (1/2 − 1/3) − (1/4 − 1/5) − · · · − (1/(2N − 2) − 1/(2N − 1)) − 1/(2N) < 1   (2.10)

since each term in parentheses is positive. Therefore

0 < lim_{N→∞} S_{2N} < 1 .   (2.11)

Also

lim_{N→∞} S_{2N+1} = lim_{N→∞} ( S_{2N} + 1/(2N + 1) ) = lim_{N→∞} S_{2N}   (2.12)

so the partial sums converge as N → ∞.
However this alternating series does not converge absolutely because the series

1 + 1/2 + 1/3 + 1/4 + · · ·   (2.13)

diverges:

S_N = ∑_{n=1}^N 1/n   (harmonic series)   (2.14a)
S_1 = 1   (2.14b)
S_2 = 1 + 1/2   (2.14c)
S_4 = 1 + 1/2 + (1/3 + 1/4)   (2.14d)
    > 1 + 1/2 + (1/4 + 1/4)   (2.14e)
    = 1 + 2/2   (2.14f)
S_8 = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8)   (2.14g)
    > 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8)   (2.14h)
    = 1 + 3/2   (2.14i)
∴ S_{2^N} > 1 + N/2 → ∞ as N → ∞ .   (2.14j)

The simplest way to tell if a series converges or diverges is to compare it to a series that is known to converge or diverge.
For example, the geometric series converges for |x| < 1 and diverges for |x| > 1, so compare

1/(1 − x) = ∑_{n=0}^∞ x^n = 1 + x + x^2 + x^3 + · · ·   (2.15)

with the series of interest

∑_{n=0}^∞ a_n = a_0 + a_1 + a_2 + a_3 + · · ·   (2.16)

and we see that if, as n → ∞, |a_{n+1}/a_n| < 1 then our series converges just as the geometric series does. Thus we obtain the ratio test:

Ratio Test
• If lim_{n→∞} |a_{n+1}/a_n| < 1 the series converges (absolutely).
• If lim_{n→∞} |a_{n+1}/a_n| > 1 the series diverges.
• If lim_{n→∞} |a_{n+1}/a_n| = 1 (or the limit doesn't exist) we must investigate further.
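The three cases of the ratio test can be illustrated numerically (a sketch of mine, not the notes' own code): evaluate the ratio at a large index as a stand-in for the limit.

```python
def ratio_at(a, n):
    """The ratio |a_(n+1)/a_n| at index n; for large n this approximates
    the limit used in the ratio test."""
    return abs(a(n + 1) / a(n))

# Geometric-type terms x^n with x = 0.5: the ratio is exactly 0.5 < 1 (converges).
r_geo = ratio_at(lambda k: 0.5**k, 50)
# Terms 1/n^2: the ratio tends to 1, so the test is inconclusive
# even though the series converges.
r_zeta = ratio_at(lambda k: 1.0 / k**2, 10**6)
print(r_geo, r_zeta)
```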

Figure 2.1: Riemann sums used in the integral test, where f(x) is a monotonically-decreasing function. Left: a_2 + a_3 + a_4 + a_5 < ∫_1^5 f(x) dx. Right: a_1 + a_2 + a_3 + a_4 > ∫_1^5 f(x) dx.

Another method: compare with an infinite integral.
The series

f(1) + f(2) + f(3) + · · ·   (2.17)

will converge or diverge depending on whether the integral

∫^∞ f(x) dx   (2.18)

converges or diverges, provided f(x) is monotonically decreasing.
Let a_n = f(n). Then, as shown in the left panel of Fig. 2.1,

∑_{n=2}^N a_n = a_2 + a_3 + · · · + a_N < ∫_1^N f(x) dx   (2.19)

so if the integral converges as N → ∞ then the series must converge.
Also, as shown in the right panel of Fig. 2.1,

∑_{n=1}^{N−1} a_n = a_1 + a_2 + · · · + a_{N−1} > ∫_1^N f(x) dx   (2.20)

so if the integral diverges as N → ∞ then the series must diverge.
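For f(x) = 1/x the two inequalities above bracket the harmonic partial sums between logarithms; a short numerical check (mine, for illustration):

```python
from math import log

def harmonic(N):
    """Partial sum H_N = 1 + 1/2 + ... + 1/N of the harmonic series."""
    return sum(1.0 / n for n in range(1, N + 1))

N = 1000
lower = harmonic(N) - 1.0   # sum from n = 2 to N, as in eq. (2.19)
upper = harmonic(N - 1)     # sum from n = 1 to N - 1, as in eq. (2.20)
integral = log(N)           # the integral of 1/x from 1 to N
print(lower, integral, upper)
```

Both partial sums grow like ln N, which is why the harmonic series diverges.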



Ex. 2.3. Consider the Riemann zeta function

ζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + · · · .   (2.21)

Try the ratio test:

a_{n+1}/a_n = ( n/(n + 1) )^s = ( 1 + 1/n )^{−s} ∼ 1 − s/n + · · ·  → 1 as n → ∞   (2.22)

so the ratio test is inconclusive. But note:

ζ(s) = f(1) + f(2) + f(3) + · · ·   for f(x) = 1/x^s   (2.23)

(a monotonically-decreasing function). Now,

∫ f(x) dx = ∫ dx/x^s = − 1/(s − 1) · 1/x^{s−1}   (s ≠ 1)   (2.24)

and this converges as x → ∞ if Re(s) > 1, so the Riemann zeta function converges for Re(s) > 1.

This suggests that we can sharpen the ratio test by comparison to the Riemann zeta function:

If a_{n+1}/a_n ∼ 1 − s/n as n → ∞ with s > 1, then the series converges absolutely.

In fact, consider the more slowly converging series:

∑_{n=2}^∞ 1/(n (ln n)^s) = 1/(2 (ln 2)^s) + 1/(3 (ln 3)^s) + · · · .   (2.25)

Note:

∫ dx/(x (ln x)^s) = − 1/(s − 1) · 1/(ln x)^{s−1}   (2.26)

so the series converges provided s > 1.
Apply the ratio test:

a_{n+1}/a_n = ( n/(n + 1) ) [ ln n / ln(n + 1) ]^s   (2.27a)
            = ( 1 − 1/n + · · · ) [ (ln n + ln(1 + 1/n)) / ln n ]^{−s}   (2.27b)
            = ( 1 − 1/n + · · · ) [ (ln n + 1/n + · · ·) / ln n ]^{−s}   (2.27c)
            ∼ 1 − 1/n − s/(n ln n)   as n → ∞ .   (2.27d)

A series converges absolutely if

a_{n+1}/a_n ∼ 1 − 1/n − s/(n ln n)  as n → ∞,   s > 1

(and it diverges if s < 1).

Ex. 2.4. The Legendre differential equation

(1 − x^2) y″ − 2x y′ + n(n + 1) y = 0   (2.28)

has a power series solution

y = 1 − n(n + 1) x^2/2! + n(n + 1)(n − 2)(n + 3) x^4/4! − · · ·   (2.29)

(see Ex. 19.2).
Try the ratio test: if the series is y = ∑_{m=1}^∞ a_m then

a_m/a_{m−1} = − (n − 2m + 4)(n + 2m − 3)/((2m − 3)(2m − 2)) · x^2 .   (2.30)

Check: take a_1 = 1 and then

a_2 = − (n − 4 + 4)(n + 4 − 3)/((4 − 3)(4 − 2)) x^2 a_1
    = − (1/2) n(n + 1) x^2  ✓
a_3 = − (n − 6 + 4)(n + 6 − 3)/((6 − 3)(6 − 2)) x^2 a_2
    = − (1/(3·4)) (n − 2)(n + 3) x^2 a_2
    = (1/4!) n(n + 1)(n − 2)(n + 3) x^4 .  ✓

For large m,

a_m/a_{m−1} ∼ ( 1 − 1/m + O(1/m^2) ) x^2 .   (2.31)

Note that there is no s/(m ln m) term, so s = 0. Therefore the series diverges if x^2 = 1 (unless n − 2m + 4 = 0 for some m, in which case this is actually a finite series).
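The termination of the series for integer n can be seen directly by generating successive terms from the ratio (2.30). A small sketch (my code, not from the notes):

```python
def legendre_series_terms(n, x, m_max=8):
    """Terms a_1, a_2, ... of the power-series solution, built from the ratio
    a_m/a_(m-1) = -(n - 2m + 4)(n + 2m - 3)/((2m - 3)(2m - 2)) x^2."""
    terms = [1.0]   # a_1 = 1
    for m in range(2, m_max + 1):
        ratio = -(n - 2*m + 4) * (n + 2*m - 3) / ((2*m - 3) * (2*m - 2)) * x**2
        terms.append(terms[-1] * ratio)
    return terms

# For integer n the factor (n - 2m + 4) vanishes at some m and all later
# terms are zero; e.g. n = 2 gives the polynomial y = 1 - 3x^2.
terms = legendre_series_terms(2, 0.5)
print(terms)
```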
3 Familiar Series

• Binomial series

  (1 + x)^α = 1 + αx + α(α − 1) x^2/2! + α(α − 1)(α − 2) x^3/3! + · · ·
            = ∑_{n=0}^∞ C(α, n) x^n   (3.1)

  where

  C(α, n) = α(α − 1)(α − 2) · · · (α − n + 1) / n!   (3.2)

  is the binomial coefficient.
  If α is a non-negative integer then this is a finite series and so obviously converges for any finite x (except the case when x = −1 and α = 0, which is undefined).
  The ratio test reveals that this series converges absolutely for |x| < 1. In addition, it converges absolutely for |x| = 1 and α > 0. It turns out that the series converges, but not absolutely, for x = 1 and −1 < α < 0.
• Exponential series

  e^x = 1 + x + x^2/2! + x^3/3! + · · ·
      = ∑_{n=0}^∞ x^n/n! .   (3.3)

  The ratio test shows that this series always converges.


Generate new series:
• Use Euler's relation (see later) e^{ix} = cos x + i sin x in the exponential series:

  cos x + i sin x = e^{ix} = 1 + (ix) + (ix)^2/2! + (ix)^3/3! + (ix)^4/4! + · · ·   (3.4)
                  = 1 + ix − x^2/2! − i x^3/3! + x^4/4! + · · ·   (3.5)
                  = ( 1 − x^2/2! + x^4/4! − · · · ) + i ( x − x^3/3! + · · · )   (3.6)

  and identify the real and imaginary parts:

  cos x = 1 − x^2/2! + x^4/4! − · · ·   (3.7)
  sin x = x − x^3/3! + x^5/5! − · · · .   (3.8)

• Integrate the series for (1 + x)^{−1} term-by-term:

  ∫ dx/(1 + x) = ∫ { 1 − x + x^2 − x^3 + · · · } dx   (3.9)

  where the left side is ln(1 + x) and the right side integrates to x − x^2/2 + x^3/3 − x^4/4 + · · ·, so

  ln(1 + x) = x − x^2/2 + x^3/3 − x^4/4 + · · · .   (3.10)

  Take half the difference of ln(1 + x) and ln(1 − x):

  (1/2) ln( (1 + x)/(1 − x) ) = x + x^3/3 + x^5/5 + x^7/7 + · · · .   (3.11)

• Integrate the series for (1 + x^2)^{−1} term-by-term:

  ∫ dx/(1 + x^2) = ∫ { 1 − x^2 + x^4 − x^6 + · · · } dx   (3.12)

  where the left side is arctan x and the right side integrates to x − x^3/3 + x^5/5 − x^7/7 + · · ·, so

  arctan x = x − x^3/3 + x^5/5 − x^7/7 + · · · .   (3.13)
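The term-by-term integrated series (3.10) and (3.13) are easy to verify numerically inside the radius of convergence; a quick sketch of mine:

```python
from math import log, atan

def ln1p_series(x, n_terms=200):
    """ln(1 + x) = x - x^2/2 + x^3/3 - ... , eq. (3.10), valid for |x| < 1."""
    return sum((-1)**(k + 1) * x**k / k for k in range(1, n_terms + 1))

def arctan_series(x, n_terms=200):
    """arctan x = x - x^3/3 + x^5/5 - ... , eq. (3.13)."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(n_terms))

err_ln = abs(ln1p_series(0.5) - log(1.5))
err_at = abs(arctan_series(0.5) - atan(0.5))
print(err_ln, err_at)
```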
4 Transformation of Series

Series of constants can be summed by introducing a variable.

Ex. 4.1. Sum this series:

S = 1/2! + 2/3! + 3/4! + · · · .   (4.1)

Let

f(x) = x^2/2! + 2x^3/3! + 3x^4/4! + · · · .   (4.2)

Note: f(1) = S and f(0) = 0.
Now,

f′(x) = x + x^2 + x^3/2! + x^4/3! + · · ·   (4.3a)
      = x { 1 + x + x^2/2! + x^3/3! + · · · }   (4.3b)
      = x e^x .   (4.3c)

Therefore

f(x) = ∫ x e^x dx = x e^x − e^x + C .   (4.4)

The constant of integration is determined by

0 = f(0) = 0·e^0 − e^0 + C = −1 + C  ⟹  C = 1   (4.5)

so

f(x) = x e^x − e^x + 1   (4.6)

and thus

S = f(1) = 1·e^1 − e^1 + 1 = 1 .   (4.7)

Ex. 4.2. Sum the alternating harmonic series:

S = 1 − 1/2 + 1/3 − 1/4 + · · ·   (4.8)

(recall this series converges, but not absolutely).
Let

f(x) = x − x^2/2 + x^3/3 − x^4/4 + · · · .   (4.9)

Note: S = f(1), and recall f(x) = ln(1 + x), so

S = ln 2 .   (4.10)

However, we can rearrange the series by putting two negative terms after each positive term:

S = 1 − 1/2 + 1/3 − 1/4 + · · ·   (4.11a)
  = 1 − 1/2 − 1/4 + 1/3 − 1/6 − 1/8 + · · ·   (4.11b)
  = (1 − 1/2) − 1/4 + (1/3 − 1/6) − 1/8 + · · ·   (4.11c)
  = 1/2 − 1/4 + 1/6 − 1/8 + · · ·   (4.11d)
  = (1/2) ( 1 − 1/2 + 1/3 − 1/4 + · · · )   (4.11e)
  = (1/2) ln 2 .   (4.11f)

The rearranged series converges to half the original sum: rearranging a conditionally convergent series can change its value.
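The effect of the rearrangement can be seen numerically (my sketch, not the notes'): the same terms, grouped in two different orders, converge to two different sums.

```python
from math import log

def alternating_partial(n_blocks):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..., taken in pairs."""
    return sum(1.0/(2*k + 1) - 1.0/(2*k + 2) for k in range(n_blocks))

def rearranged_partial(n_blocks):
    """Same terms rearranged as one positive followed by two negatives:
    1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ..., taken in triples."""
    return sum(1.0/(2*k + 1) - 1.0/(4*k + 2) - 1.0/(4*k + 4)
               for k in range(n_blocks))

s_orig = alternating_partial(200000)   # approaches ln 2
s_rear = rearranged_partial(200000)    # approaches (1/2) ln 2
print(s_orig, s_rear)
```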

Introduce the Bernoulli numbers by considering the series

x/(e^x − 1) = c_0 + c_1 x + c_2 x^2 + · · ·   |x| < 2π   (4.12)
⟹  x = ( c_0 + c_1 x + c_2 x^2 + · · · ) ( x + x^2/2! + x^3/3! + · · · ) .   (4.13)

Now divide both sides by x and define the Bernoulli numbers by c_n = B_n/n!:

1 = ( B_0 + B_1 x/1! + B_2 x^2/2! + · · · ) ( 1 + x/2! + x^2/3! + · · · ) .   (4.14)

Now equate powers of x:

1 = B_0   (4.15a)
0 = B_0/2! + B_1/1!  ⟹  B_1 = −1/2   (4.15b)
0 = B_0/3! + B_1/(1!2!) + B_2/(2!1!)  ⟹  B_2 = 1/6   (4.15c)

and so on. The first few Bernoulli numbers are

B_0 = 1,  B_1 = −1/2,  B_2 = 1/6,  B_4 = −1/30,  B_6 = 1/42, · · ·
B_3 = B_5 = B_7 = · · · = 0 .   (4.16)

The Bernoulli numbers appear in series expansions of other common functions.
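The power-matching recursion above is easy to mechanize; a short sketch of mine using exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def bernoulli(n_max):
    """Bernoulli numbers from matching powers of x in
    1 = (B_0 + B_1 x/1! + B_2 x^2/2! + ...)(1 + x/2! + x^2/3! + ...),
    i.e. sum over n = 0..k of B_n / (n! (k - n + 1)!) = 0 for k >= 1."""
    B = [Fraction(1)]   # B_0 = 1
    for k in range(1, n_max + 1):
        s = sum(B[n] / (factorial(n) * factorial(k - n + 1)) for n in range(k))
        B.append(-factorial(k) * s)
    return B

# B_0..B_6 should come out as 1, -1/2, 1/6, 0, -1/30, 0, 1/42, matching (4.16).
B = bernoulli(6)
print(B)
```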



Ex. 4.3. Consider

cot x = cos x / sin x = [ (e^{ix} + e^{−ix})/2 ] / [ (e^{ix} − e^{−ix})/(2i) ] = i (e^{ix} + e^{−ix})/(e^{ix} − e^{−ix}) .   (4.17)

Let ix = y/2:

cot x = i (e^{y/2} + e^{−y/2})/(e^{y/2} − e^{−y/2})   (4.18a)
      = i (e^y + 1)/(e^y − 1)   (4.18b)
      = i ( 1 + 2/(e^y − 1) )   (4.18c)
      = (2i/y) ( y/2 + y/(e^y − 1) )   (4.18d)
      = (2i/y) ( −B_1 y + ∑_{n=0}^∞ B_n y^n/n! )   (4.18e)

and note B_n = 0 for n odd except B_1, so

cot x = (2i/y) ∑_{n even} B_n y^n/n! .   (4.18f)

Now put back y = 2ix and let n = 2m, m = 0, 1, 2, . . .:

cot x = (1/x) ∑_{m=0}^∞ (−1)^m B_{2m} (2x)^{2m}/(2m)!
      = 1/x − x/3 − x^3/45 − 2x^5/945 − · · ·   0 < |x| < π .   (4.19)

Deduce the series for tan x using tan x = cot x − 2 cot 2x:

tan x = (1/x) ∑_{m=1}^∞ (−1)^{m−1} (2^{2m} − 1) B_{2m} (2x)^{2m}/(2m)!
      = x + x^3/3 + 2x^5/15 + 17x^7/315 + · · ·   |x| < π/2 .   (4.20)

Ex. 4.4. And, just for fun, use Hardy's method to sum the series

S = ∑_{n=1}^∞ 1/n^2 = 1 + 1/4 + 1/9 + 1/16 + · · · = ζ(2) .   (4.21)

Consider the Fourier series (see Ex. 13.2 later):

cos kx = a_0/2 + ∑_{n=1}^∞ (a_n cos nx + b_n sin nx)   (4.22a)
       = a_0/2 + ∑_{n=1}^∞ a_n cos nx   (4.22b)

where all the b_n coefficients are zero since cos kx is an even function, and where

a_n = (2/π) ∫_0^π cos nx cos kx dx   (4.22c)
    = (−1)^n 2k sin kπ/(π(k^2 − n^2))   (4.22d)

∴ cos kx = (2k sin kπ/π) [ 1/(2k^2) − cos x/(k^2 − 1) + cos 2x/(k^2 − 4) − cos 3x/(k^2 − 9) + · · · ] .   (4.23)

Now set x = π:

cos kπ = (2k sin kπ/π) [ 1/(2k^2) + 1/(k^2 − 1) + 1/(k^2 − 4) + 1/(k^2 − 9) + · · · ]   (4.24a)

and so

kπ cot kπ = 2k^2 [ 1/(2k^2) + 1/(k^2 − 1) + 1/(k^2 − 4) + 1/(k^2 − 9) + · · · ]   (4.24b)
          = 1 − 2k^2 [ 1/(1 − k^2) + (1/2^2)·1/(1 − k^2/2^2) + (1/3^2)·1/(1 − k^2/3^2) + · · · ]   (4.24c)
          = 1 − 2k^2 [ (1 + k^2 + k^4 + · · ·) + (1/2^2)(1 + k^2/2^2 + k^4/2^4 + · · ·)
                        + (1/3^2)(1 + k^2/3^2 + k^4/3^4 + · · ·) + · · · ]   (4.24d)
          = 1 − 2k^2 ( 1 + 1/2^2 + 1/3^2 + · · · ) − 2k^4 ( 1 + 1/2^4 + 1/3^4 + · · · ) − · · ·   (4.24e)
          = 1 − 2 ∑_{n=1}^∞ ζ(2n) k^{2n} .   (4.24f)
Now we have two series representations of cotangent: recall

cot x = (1/x) ∑_{m=0}^∞ (−1)^m B_{2m} (2x)^{2m}/(2m)!   (4.25)

so

kπ cot kπ = 1 + ∑_{m=1}^∞ (−1)^m B_{2m} (2π)^{2m} k^{2m}/(2m)!   (4.26)

and compare this to

kπ cot kπ = 1 − 2 ∑_{n=1}^∞ ζ(2n) k^{2n} .   (4.27)

These are two equivalent power series so we must have

−2ζ(2n) = (−1)^n B_{2n} (2π)^{2n}/(2n)!   (4.28)

or

ζ(2n) = (−1)^{n+1} B_{2n} (2π)^{2n}/(2(2n)!) .   (4.29)

Hence:

1 + 1/4 + 1/9 + · · · = ζ(2) = B_2 · 4π^2/4 = π^2/6   (4.30)
1 + 1/16 + 1/81 + · · · = ζ(4) = −B_4 · 16π^4/48 = π^4/90   (4.31)

etc.
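The closed forms (4.30) and (4.31) agree with direct partial sums; a quick numerical check of mine (the 1/n^2 tail is about 1/N, so only modest accuracy is expected for ζ(2)):

```python
from math import pi

N = 200000
zeta2 = sum(1.0 / n**2 for n in range(1, N + 1))   # approaches pi^2/6
zeta4 = sum(1.0 / n**4 for n in range(1, N + 1))   # approaches pi^4/90
print(zeta2, pi**2 / 6)
print(zeta4, pi**4 / 90)
```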
Problems

Problem 1.
a) For what values of x does the following series converge?

   f(x) = 1 + 4/x^2 + 16/x^4 + 64/x^6 + · · ·

b) Does the following series converge or diverge?

   (1·3)^2/(1·1·(1)^2) + (1·3·5)^2/(4·2·(1·2)^2) + (1·3·5·7)^2/(16·3·(1·2·3)^2) + (1·3·5·7·9)^2/(64·4·(1·2·3·4)^2) + · · ·

Problem 2.
a) Find the sum of the following series:

   1 + 1/4 − 1/16 − 1/64 + 1/256 + 1/1024 − − + + · · ·

b) Find the sum of the following series:

   1/0! + 2/1! + 3/2! + · · ·

Problem 3.
By repeatedly differentiating the geometric series

1/(1 − x) = ∑_{n=0}^∞ x^n

find a closed-form expression for the function

f(x) = ∑_{n=1}^∞ n^2 x^n .

For what values of x does the series converge?

Module II

Complex Analysis

5 Complex Variables 24

6 Complex Functions 27

7 Complex Integrals 34

8 Example: Gamma Function 50

Problems 57


Motivation
Complex numbers are encountered not only in quantum mechanics but are also a useful tool for many applications in physics. Complex analysis and contour integration provide powerful mathematical techniques that we will encounter over and over in later modules.
5 Complex Variables

Basics
A complex number can be written as

z = x + iy   (5.1)

where the real part and imaginary part are

Re z = x  and  Im z = y   (5.2)

respectively, and where the imaginary constant i satisfies i^2 = −1.
The complex inverse, z^{−1}, which satisfies z · z^{−1} = 1, is

z^{−1} = (x − iy)/(x^2 + y^2) .   (5.3)

A complex number can be represented as a point (x, y) on a two-dimensional plane known as the complex plane, as shown in Fig. 5.1.
The complex conjugate z* = (x, −y) is the reflection of the point z = (x, y) about the real axis.
In polar form, the point is (r, θ) where

r = |z| = √(x^2 + y^2)   (5.4)

is the complex modulus and

θ = arg z = arctan(y/x)   (5.5)

is the complex argument. Then

z = r(cos θ + i sin θ) .   (5.6)

Figure 5.1: Representation of a complex number as a point on a two-dimensional plane.

Note: arg z is multiple-valued. Define the principal value Arg z such that

arg z = Arg z + 2nπ,   n = 0, ±1, ±2, . . .   (5.7)

where −π < Arg z ≤ π.
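In practice eq. (5.5) must be read with the quadrant in mind: arctan(y/x) alone loses the signs of x and y. A small sketch of mine shows the standard fix, the two-argument arctangent (Python's `math.atan2`, equivalently `cmath.phase`), which returns the principal value Arg z in (−π, π]:

```python
from cmath import phase
from math import atan, atan2, pi

z = complex(-1.0, -1.0)
naive = atan(z.imag / z.real)       # arctan(1) = pi/4: the wrong quadrant
principal = atan2(z.imag, z.real)   # Arg z = -3*pi/4, in (-pi, pi]
print(naive, principal, phase(z))
```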


Identities

|z|^2 = z · z*        (z_1 + z_2)* = z_1* + z_2*        (z*)* = z
|z*| = |z|            (z_1 z_2)* = z_1* z_2*
|z_1 z_2| = |z_1||z_2|        Re z = (z + z*)/2        Im z = (z − z*)/(2i)   (5.8)

Also,

arg(z_1 z_2) = arg z_1 + arg z_2 .   (5.9)

Proof. Let z_1 = r_1(cos θ_1 + i sin θ_1) and z_2 = r_2(cos θ_2 + i sin θ_2); then

z_1 z_2 = r_1 r_2 [(cos θ_1 cos θ_2 − sin θ_1 sin θ_2) + i(sin θ_1 cos θ_2 + cos θ_1 sin θ_2)]   (5.10a)
        = r_1 r_2 [cos(θ_1 + θ_2) + i sin(θ_1 + θ_2)] .   (5.10b)

This motivates the exponential form: define

e^{iθ} = cos θ + i sin θ   (5.11)

which is Euler's formula; then

z = r(cos θ + i sin θ) = r e^{iθ} .   (5.12)

We have:

e^{iθ_1} e^{iθ_2} = e^{i(θ_1 + θ_2)}   (5.13a)
1/e^{iθ} = e^{−iθ}   (5.13b)
e^{iθ} = e^{i(θ + 2nπ)},   n = 0, ±1, ±2, . . . .   (5.13c)

Powers and Roots
Use induction to show:

z^{n+1} ≡ z · z^n = r^{n+1} e^{i(n+1)θ},   n = 1, 2, 3, . . .   (5.14)
z^0 ≡ 1,   z ≠ 0   (5.15)
z^n ≡ (z^{−1})^{−n},   n = −1, −2, −3, . . . ,   z ≠ 0   (5.16)

therefore

z^n = r^n e^{inθ},   n = 0, ±1, ±2, . . . .   (5.17)

Use these to compute roots. E.g., the roots of unity are

z^n = 1  ⟹  r^n e^{inθ} = 1·e^{i0}   (5.18a)
          ⟹  r^n = 1  and  nθ = 0 + 2kπ,   k = 0, ±1, ±2, . . .   (5.18b)

therefore

z = e^{2πik/n},   k = 0, ±1, ±2, . . . .   (5.19)

The distinct nth roots of unity are

1, ω_n, ω_n^2, . . . , ω_n^{n−1}   where ω_n = e^{2πi/n} .   (5.20)

Similarly, the roots of the equation z^n = z_0 are

c, cω_n, cω_n^2, . . . , cω_n^{n−1}   where c = r_0^{1/n} e^{iθ_0/n} .   (5.21)
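Equations (5.20)-(5.21) translate directly into code; a sketch of mine using the standard library's complex math:

```python
import cmath

def nth_roots(z0, n):
    """All n roots of z^n = z0: c * w_n^k with c = r0^(1/n) e^(i theta0 / n)
    and w_n = e^(2 pi i / n), following eqs. (5.20)-(5.21)."""
    r0, theta0 = abs(z0), cmath.phase(z0)
    c = r0**(1.0 / n) * cmath.exp(1j * theta0 / n)
    w = cmath.exp(2j * cmath.pi / n)
    return [c * w**k for k in range(n)]

roots4 = nth_roots(1.0 + 0j, 4)                  # fourth roots of unity: 1, i, -1, -i
cubes = [r**3 for r in nth_roots(-8.0 + 0j, 3)]  # cubing each root should recover -8
print(roots4)
print(cubes)
```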
6 Complex Functions

Consider

w = f(z) .   (6.1)

Suppose w = u + iv and z = x + iy; then

f(z) = u(x, y) + i v(x, y) .   (6.2)

E.g., if f(z) = z^2 then

f(x + iy) = x^2 − y^2 + 2ixy   (6.3)

where u(x, y) = x^2 − y^2 and v(x, y) = 2xy.
Think of this as a map from the x-y plane to the u-v plane as seen in Fig. 6.1.

Figure 6.1: The complex map w = z^2.

Limits
If f(z) is defined at all points z in some "deleted neighborhood" of z_0 (which does not include z_0 itself) then

lim_{z→z_0} f(z) = w_0   (6.4a)

if and only if

lim_{(x,y)→(x_0,y_0)} u(x, y) = u_0  and  lim_{(x,y)→(x_0,y_0)} v(x, y) = v_0   (6.4b)

where w_0 = u_0 + i v_0.

Continuity
f(z) is continuous at a point z_0 if

f(z_0) exists  and  lim_{z→z_0} f(z) = f(z_0) .   (6.5)

Derivatives

f′(z_0) = lim_{z→z_0} [f(z) − f(z_0)]/(z − z_0)
        = lim_{Δz→0} [f(z_0 + Δz) − f(z_0)]/Δz .   (6.6)

The derivative only exists if it doesn't matter how z → z_0, as illustrated in the following examples.
following examples.

Ex. 6.1. The derivative of f(z) = z^2:

f′(z) = lim_{Δz→0} [(z + Δz)^2 − z^2]/Δz   (6.7a)
      = lim_{Δz→0} (2z + Δz)   (6.7b)
      = 2z .   (6.7c)

Ex. 6.2. The derivative of f(z) = |z|^2 = z · z*:

f′(z) = lim_{Δz→0} [(z + Δz)(z* + (Δz)*) − z · z*]/Δz   (6.8a)
      = lim_{Δz→0} { z* + (Δz)* + z (Δz)*/Δz } .   (6.8b)

Here, Δz = Δx + iΔy. Consider two cases:

1. Approach the origin Δz = 0 along the real axis: Δz = Δx, Δy = 0:

   f′(z) = lim_{Δx→0} {z* + Δx + z} = z* + z .   (6.8c)

2. Approach the origin Δz = 0 along the imaginary axis: Δz = iΔy, Δx = 0:

   f′(z) = lim_{Δy→0} {z* − iΔy − z} = z* − z .   (6.8d)

These are different results if z ≠ 0, therefore the only place the derivative exists is at z = 0.
Note: f = |z|^2 is continuous since

u(x, y) = x^2 + y^2  and  v(x, y) = 0   (6.9)

are both continuous.
Thus continuous ⇏ differentiable (though differentiable ⟹ continuous).

Cauchy-Riemann Equations
If f(z) = u(x, y) + i v(x, y) then, if we approach z with y constant and Δz = Δx,

f′(z) = ∂u/∂x (x, y) + i ∂v/∂x (x, y)   (6.10a)

whereas if we approach z with x constant and Δz = iΔy,

f′(z) = ∂v/∂y (x, y) − i ∂u/∂y (x, y) .   (6.10b)

Therefore, a necessary condition for f′(z) to exist is

∂u/∂x = ∂v/∂y  and  ∂u/∂y = −∂v/∂x .   (6.11)

These are the Cauchy-Riemann equations.
The Cauchy-Riemann equations (together with continuity of the partial derivatives) are also sufficient conditions for the existence of the derivative.
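For f(z) = z^2, with u = x^2 − y^2 and v = 2xy from eq. (6.3), the Cauchy-Riemann equations can be checked numerically with central finite differences (a sketch of mine, at one arbitrary point):

```python
# u and v for f(z) = z^2.
def u(x, y): return x*x - y*y
def v(x, y): return 2*x*y

x0, y0, h = 0.7, -1.3, 1e-6
du_dx = (u(x0 + h, y0) - u(x0 - h, y0)) / (2*h)
du_dy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2*h)
dv_dx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2*h)
dv_dy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2*h)
# Both Cauchy-Riemann residuals should be ~0 (eq. 6.11).
print(du_dx - dv_dy, du_dy + dv_dx)
```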

Analytic Functions
A function is said to be analytic at a point z_0 if its derivative exists in a neighborhood of z_0.
Ex. 6.3. f(z) = 1/z is analytic everywhere except for z = 0. However, since f(z) is analytic at some point in every neighborhood of z = 0, we call z = 0 a singular point.

Ex. 6.4. f(z) = |z|^2 is not analytic at any point.

A function is entire if it is analytic everywhere in the finite plane. (Polynomials are entire.)

Harmonic Functions
A harmonic function h(x, y) satisfies Laplace's equation

∂²h/∂x² + ∂²h/∂y² = 0 .   (6.12)

If f(z) = u(x, y) + i v(x, y) is analytic in some domain then u and v are harmonic functions in that domain, and v is known as the harmonic conjugate of u.

Exponential Function
We seek something that behaves like e^x along the real axis, i.e.,

d/dx e^x = e^x  ∀x (real).   (6.13)

Define the exponential function, exp(z) = e^z, by:

e^z is entire  and  d/dz e^z = e^z  ∀z .   (6.14)

Consider the function

f(z) = e^x (cos y + i sin y)   (6.15)

so

u(x, y) = e^x cos y  and  v(x, y) = e^x sin y .   (6.16)

We see that

∂u/∂x = e^x cos y,  ∂v/∂y = e^x cos y  ⟹  ∂u/∂x = ∂v/∂y   (6.17a)
∂u/∂y = −e^x sin y,  ∂v/∂x = e^x sin y  ⟹  ∂u/∂y = −∂v/∂x   (6.17b)

so the Cauchy-Riemann equations are satisfied everywhere. Furthermore,

f′(z) = ∂u/∂x + i ∂v/∂x = e^x (cos y + i sin y) = f(z)   (6.18)

and therefore this is the exponential function:

e^z = e^x (cos y + i sin y) .   (6.19)

Note: this justifies our use of the symbol e^{iθ} = cos θ + i sin θ in the polar form of a complex number.
The exponential function has the familiar properties:

e^{z_1} e^{z_2} = e^{z_1 + z_2}        e^{z + 2πi} = e^z
|e^z| = e^x        arg e^z = y + 2nπ,   n = 0, ±1, ±2, . . .   (6.20)
e^z = ρ e^{iφ}  ⟹  z = ln ρ + i(φ + 2nπ),   n = 0, ±1, ±2, . . . .

Therefore w = e^z is a many-to-one mapping due to the periodicity of e^z.
Note: e^z ≠ 0, so the range of w = e^z is the entire w-plane except the origin w = 0.

Logarithm Function
The logarithm function is the inverse of the exponential function:

log z = ln |z| + i arg z,   z ≠ 0 .   (6.21)

Since the complex argument is multi-valued, so is the logarithm function.
The logarithm function can be made single-valued by restricting it to a branch

|z| > 0,   α < arg z < α + 2π   (6.22)

where |z| > 0, arg z = α is the branch cut.
The logarithm function is discontinuous across the branch cut.
The principal value of the logarithm is

Log z = ln |z| + i Arg z,   z ≠ 0 .   (6.23)

Note: the logarithm function is analytic with

d/dz log z = 1/z  for z ≠ 0 .   (6.24)

The logarithm function has the following properties:

exp(log z) = z   (6.25a)
log(exp z) = z + 2πin,   n = 0, ±1, ±2, . . .   (6.25b)
Log(exp z) = z   (6.25c)
log(z_1 z_2) = log z_1 + log z_2  (for some branch)   (6.25d)
z^n = exp(n log z),   n = 0, ±1, ±2, . . .   (6.25e)
z^{1/n} = exp( (1/n) log z ),   z ≠ 0,   n = ±1, ±2, . . .   (6.25f)

(the last equation has n distinct values corresponding to the n roots).
Use the logarithm function to define complex exponents:

z^c = exp(c log z) .   (6.26)

Find:

d/dz z^c = c z^{c−1},   |z| > 0,   α < arg z < α + 2π .   (6.27)

The principal value of z^c is

z^c = exp(c Log z)   (6.28)

and the principal branch is |z| > 0, −π < Arg z < π.

Trigonometric Functions
Define the trigonometric functions as

sin z = (e^{iz} − e^{−iz})/(2i)   and   cos z = (e^{iz} + e^{−iz})/2 .   (6.29)

(Also define tan z = sin z/ cos z, etc.)

Hyperbolic Functions
Define the hyperbolic functions as

sinh z = (e^z − e^{−z})/2   and   cosh z = (e^z + e^{−z})/2 .   (6.30)

Inverse Trigonometric Functions


Consider the arcsin function:

w = arcsin z when z = sin w . (6.31)

Therefore, solve z = sin w for w:

z = (e^{iw} − e^{−iw})/(2i)   (6.32a)
⟹ (e^{iw})² − 2iz(e^{iw}) − 1 = 0   (6.32b)
⟹ e^{iw} = iz + (1 − z²)^{1/2}   (6.32c)
⟹ w = arcsin z = −i log[iz + (1 − z²)^{1/2}] .   (6.32d)

Note: the square root is double-valued and the log is multiple-valued, so the
arcsin function is multiple-valued.
Similarly can compute the other inverse trigonometric functions:

arcsin z = −i log[iz + (1 − z²)^{1/2}]
arccos z = −i log[z + i(1 − z²)^{1/2}]
arctan z = (i/2) log[(i + z)/(i − z)] .   (6.33)

We can now compute the derivatives of these functions.


We can similarly find the inverse hyperbolic functions.
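The logarithmic form of arcsin above can be checked numerically; this sketch (an illustration, not part of the original notes) uses Python's `cmath`, whose `sqrt` and `log` are the principal branches:

```python
import cmath

def arcsin_via_log(z):
    # arcsin z = -i log[ iz + (1 - z^2)^{1/2} ], with principal sqrt and log
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

for z in (0.5, 0.3 + 0.4j, -0.2 - 0.7j):
    assert abs(cmath.sin(arcsin_via_log(z)) - z) < 1e-12   # it really inverts sin
    assert abs(arcsin_via_log(z) - cmath.asin(z)) < 1e-10  # matches the principal branch
```

The other branches of the multiple-valued arcsin differ from this one by the choices of square root and logarithm branch.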
7 Complex Integrals

A contour C is a set of points

C = {(x(t), y(t)) : a ≤ t ≤ b}   (7.1)

(see Fig. 7.1). The length of C is

L = ∫_a^b |z′(t)| dt   (7.2)

where z′(t) = x′(t) + i y′(t).
A simple contour does not self-intersect.
A simple closed contour does not self-intersect except at the end points, which
are the same.

Figure 7.1: Contour.

Contour Integral
A contour integral is

∫_C f(z) dz = ∫_a^b f[z(t)] z′(t) dt .   (7.3)

This integral is invariant under re-parameterization of the contour.
Properties of contour integrals:

• ∫_{−C} f(z) dz = −∫_C f(z) dz   (7.4a)

• ∫_{C=C₁+C₂} f(z) dz = ∫_{C₁} f(z) dz + ∫_{C₂} f(z) dz   (7.4b)

• |∫_C f(z) dz| ≤ ∫_a^b |f[z(t)] z′(t)| dt   (7.4c)

• If M is a non-negative constant such that |f(z)| ≤ M on C then

  |∫_C f(z) dz| ≤ M ∫_a^b |z′(t)| dt = ML .   (7.4d)

7. Complex Integrals 35

Ex. 7.1. Let C be the path (see Fig. 7.2)

z = 3e^{iθ} ,   0 ≤ θ ≤ π .   (7.5)

Let

f(z) = z^{1/2} = √r e^{iθ/2} ,   r > 0, 0 < θ < 2π .   (7.6)

Note: this branch of the square root is not defined at the initial point, but we can still
integrate f(z) because it only needs to be piecewise continuous.
Therefore

f[z(θ)] = √3 e^{iθ/2} = √3 cos(θ/2) + i√3 sin(θ/2) ,   0 < θ ≤ π .   (7.7)

As θ → 0, f[z(θ)] → √3 so just define this to be its value at θ = 0. Then

I = ∫_C f(z) dz = ∫_C z^{1/2} dz = ∫_0^π √3 e^{iθ/2} (3ie^{iθ}) dθ   (7.8a)
  = 3√3 i ∫_0^π e^{i3θ/2} dθ = 3√3 i [ (2/(3i)) e^{i3θ/2} ]_0^π = 2√3 (e^{i3π/2} − 1)   (7.8b)
  = −2√3 (1 + i) .   (7.8c)

If we had just wanted to bound the integral, we note that |z^{1/2}| = √3 and L = 3π,
therefore

|I| ≤ 3√3 π .   (7.9)

Figure 7.2: Contour for Ex. 7.1
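As an illustrative numerical check (an assumption-free midpoint-rule sketch, not part of the original notes), the parameterized integral above can be approximated directly:

```python
import cmath
import math

# Midpoint-rule approximation of I = ∫_C z^{1/2} dz along z(θ) = 3e^{iθ}, 0 ≤ θ ≤ π,
# using the branch z^{1/2} = √r e^{iθ/2}, 0 < θ < 2π
n = 100_000
total = 0j
for k in range(n):
    theta = math.pi * (k + 0.5) / n
    f = math.sqrt(3.0) * cmath.exp(1j * theta / 2)   # f[z(θ)]
    dz = 3j * cmath.exp(1j * theta) * (math.pi / n)  # z'(θ) dθ
    total += f * dz

assert abs(total - (-2 * math.sqrt(3) * (1 + 1j))) < 1e-6
```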

Cauchy-Goursat Theorem
Theorem 1 (Cauchy-Goursat). If a function f is analytic at all points interior to
and on a simple closed curve C then
∮_C f(z) dz = 0 .   (7.10)

Sketch of proof.

∮_C f(z) dz = ∫_a^b f[z(t)] z′(t) dt   (7.11a)

    (let f(z) = u(x, y) + i v(x, y) and z(t) = x(t) + i y(t))

  = ∫_a^b [(ux′ − vy′) + i(vx′ + uy′)] dt   (7.11b)

  = ∮_C (u dx − v dy) + i ∮_C (v dx + u dy)   (7.11c)

    (by Green’s theorem, where C is the boundary of region R)

  = ∬_R (−∂v/∂x − ∂u/∂y) dx dy + i ∬_R (∂u/∂x − ∂v/∂y) dx dy   (7.11d)

    (by the Cauchy-Riemann equations)

  = 0 .   (7.11e)

Cauchy Integral Formula


If f is analytic everywhere within and on a simple closed contour C, taken in a
positive (counterclockwise) sense, and if z₀ is any point interior to C, then

f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz .   (7.12)

This is the Cauchy integral formula.

Proof. Consider C_ε, which is a circle of radius ε about z₀: z(θ) = z₀ + εe^{iθ},
z′(θ) = εie^{iθ}:

∫_{C_ε} f(z)/(z − z₀) dz ≈ f(z₀) ∫_{C_ε} dz/(z − z₀) = f(z₀) ∫_0^{2π} (εie^{iθ})/(εe^{iθ}) dθ   (7.13a)
  = 2πi f(z₀) .   (7.13b)

Now divide C into the modified contour C + L − C_ε − L as shown in Fig. 7.3. The
integrand is analytic everywhere inside this contour so, by the Cauchy-Goursat
theorem,

0 = ∮_C f(z)/(z − z₀) dz + ∫_L f(z)/(z − z₀) dz − ∮_{C_ε} f(z)/(z − z₀) dz − ∫_L f(z)/(z − z₀) dz   (7.14)

⟹ ∮_C f(z)/(z − z₀) dz = ∮_{C_ε} f(z)/(z − z₀) dz = 2πi f(z₀) .   (7.15)

Figure 7.3: Contour for Cauchy integral formula

Derivatives of Analytic Functions


Assume f is analytic on and within a positively-oriented closed contour C
about z. Then:
f(z) = (1/2πi) ∮_C f(s)/(s − z) ds .   (7.16)

Now,

f′(z) = (1/2πi) ∮_C f(s)/(s − z)² ds   (7.17a)
f″(z) = (1/πi) ∮_C f(s)/(s − z)³ ds   (7.17b)

etc.
This establishes the existence of all derivatives of f at z and shows that all
derivatives are also analytic at z:

f⁽ⁿ⁾(z) = (n!/2πi) ∮_C f(s)/(s − z)ⁿ⁺¹ ds .   (7.18)

Ex. 7.2. Take f(z) = 1:

∮_C dz/(z − z₀)ⁿ⁺¹ = { 2πi ,   n = 0
                     { 0 ,     n = 1, 2, 3, . . . .   (7.19)

Maximum Moduli of Functions


Suppose |f(z)| ≤ |f(z₀)| everywhere in the disk |z − z₀| < ε and suppose f(z) is
analytic in this neighborhood.
Let C_ρ be the oriented circle |z − z₀| = ρ with 0 < ρ < ε so
C_ρ = {z₀ + ρe^{iθ} : 0 ≤ θ ≤ 2π}. Then

f(z₀) = (1/2πi) ∮_{C_ρ} f(z)/(z − z₀) dz = (1/2π) ∫_0^{2π} f(z₀ + ρe^{iθ}) dθ .   (7.20)

(This is Gauss’s mean value theorem.)

We have:

|f(z₀)| ≤ (1/2π) ∫_0^{2π} |f(z₀ + ρe^{iθ})| dθ .   (7.21a)

Also, by assumption, |f(z₀)| ≥ |f(z₀ + ρe^{iθ})| so

(1/2π) ∫_0^{2π} |f(z₀ + ρe^{iθ})| dθ ≤ (1/2π) ∫_0^{2π} |f(z₀)| dθ = |f(z₀)| .   (7.21b)

By Eq. (7.21a) and Eq. (7.21b) we see that

|f(z₀)| = |f(z₀ + ρe^{iθ})| .   (7.21c)

It turns out that when the modulus of a function is constant in a domain, the
function itself must be constant there.
Therefore we have the maximum modulus principle:
If a function f is analytic and not constant in a given domain then |f (z)| has no
maximum value in the domain.
Corollary. Suppose a function f is continuous in a closed bounded region R
and that it is analytic and not constant in the interior of R. Then the maximum
value of |f (z)|, which is always reached, occurs somewhere on the boundary of
R and never in the interior.

Taylor’s Theorem
Theorem 2 (Taylor’s Theorem). If f is analytic throughout an open disk
|z − z₀| < R₀ centered at z₀ with radius R₀ then at each point in the disk

f(z) = Σ_{n=0}^∞ aₙ(z − z₀)ⁿ   with   aₙ = (1/n!) f⁽ⁿ⁾(z₀)   (7.22)

(the infinite series converges).

Proof. We prove it for the Maclaurin series where z₀ = 0.

Let C₀ be a positively-oriented circle |s| = r₀ where r < r₀ < R₀ and |z| = r as
shown in Fig. 7.4.

f(z) = (1/2πi) ∮_{C₀} f(s)/(s − z) ds = (1/2πi) ∮_{C₀} (1/s) · f(s)/(1 − z/s) ds   (7.23a)

  = (1/2πi) ∮_{C₀} (1/s) { 1 + (z/s) + (z/s)² + · · · + (z/s)^{N−1} + (z/s)^N/(1 − z/s) } f(s) ds   (7.23b)

  = f(0) + f′(0)z + (1/2!) f″(0)z² + · · · + (1/(N−1)!) f⁽ᴺ⁻¹⁾(0)z^{N−1} + R_N(z)   (7.23c)

where the remainder term is

R_N(z) = (z^N/2πi) ∮_{C₀} f(s)/[(s − z)s^N] ds .   (7.23d)

Figure 7.4: Contour for Taylor’s theorem

Now |s − z| ≥ ||s| − |z|| = r₀ − r since r₀ > r, and let M be the maximum value of
|f(s)| on C₀. Then

|R_N(z)| ≤ (r^N/2π) · M/[(r₀ − r)r₀^N] · 2πr₀ = [Mr₀/(r₀ − r)] (r/r₀)^N   (7.24)
  → 0 as N → ∞ since r₀ > r .   (7.25)

Therefore the Maclaurin series

f(z) = f(0) + f′(0)z + (1/2!) f″(0)z² + · · · + (1/n!) f⁽ⁿ⁾(0)zⁿ + · · ·   (7.26)

converges in the open disk |z| < R₀ provided that f(z) is analytic in this disk.
(It is straightforward to shift the origin to obtain Taylor’s theorem.)

Ex. 7.3. For the exponential function,

f(z) = e^z ,  f′(z) = e^z ,  . . . ,  f⁽ⁿ⁾(z) = e^z   (7.27)

so

e^z = Σ_{n=0}^∞ zⁿ/n! .   (7.28)

Note: since e^z is entire, this series converges for all z.
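The convergence of the Maclaurin series for e^z is easy to demonstrate numerically; this sketch (illustrative, not from the original notes) compares partial sums against `cmath.exp`:

```python
import cmath

def exp_series(z, n_terms=40):
    # partial sum of Σ zⁿ/n!, built up term by term
    total, term = 0j, 1 + 0j
    for n in range(n_terms):
        total += term
        term *= z / (n + 1)
    return total

for z in (1.0, 2.0 - 3.0j, -5.0 + 0.5j):
    assert abs(exp_series(z) - cmath.exp(z)) < 1e-10
```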



Laurent’s Theorem
If f is not analytic at a point z0 , we cannot apply Taylor’s theorem there.
However, we can use Laurent’s theorem:
Theorem 3 (Laurent). Suppose a function f is analytic throughout an annular
domain R₁ < |z − z₀| < R₂ and let C denote any positively-oriented closed
contour around z₀ and lying in that domain. Then, at each point z in the
domain,

f(z) = Σ_{n=0}^∞ aₙ(z − z₀)ⁿ + Σ_{n=1}^∞ bₙ/(z − z₀)ⁿ ,   R₁ < |z − z₀| < R₂   (7.29)

where

aₙ = (1/2πi) ∮_C f(z)/(z − z₀)ⁿ⁺¹ dz ,   n = 0, 1, 2, . . .
bₙ = (1/2πi) ∮_C f(z)/(z − z₀)⁻ⁿ⁺¹ dz ,   n = 1, 2, . . .   (7.30)

or, more concisely,

f(z) = Σ_{n=−∞}^∞ cₙ(z − z₀)ⁿ ,   R₁ < |z − z₀| < R₂   (7.31)

where

cₙ = (1/2πi) ∮_C f(z)/(z − z₀)ⁿ⁺¹ dz ,   n = 0, ±1, ±2, . . . .   (7.32)

Sketch of proof. Take z₀ = 0 as before for simplicity. Refer to Fig. 7.5 for contours
C, C₁, C₂, and γ. First note:

∮_{C₂} f(s)/(s − z) ds − ∮_{C₁} f(s)/(s − z) ds − ∮_γ f(s)/(s − z) ds = 0   (7.33)

where ∮_γ f(s)/(s − z) ds = 2πi f(z), so

⟹ f(z) = (1/2πi) ∮_{C₂} f(s)/(s − z) ds − (1/2πi) ∮_{C₁} f(s)/(s − z) ds .   (7.34)

Figure 7.5: Contours for Laurent’s theorem.

In the integrand of the first integral, where |s| > |z|, expand

1/(s − z) = 1/s + z/s² + · · · + z^N/[(s − z)s^N]   (7.35a)

and in the integrand of the second integral, where |z| > |s|, expand

−1/(s − z) = 1/z + s/z² + · · · + s^N/[(z − s)z^N] .   (7.35b)

∴ f(z) = a₀ + a₁z + · · · + R_N(z) + b₁/z + b₂/z² + · · · + S_N(z)   (7.36a)

where

aₙ = (1/2πi) ∮_{C₂} f(s)/sⁿ⁺¹ ds = (1/2πi) ∮_C f(s)/sⁿ⁺¹ ds   (7.36b)
bₙ = (1/2πi) ∮_{C₁} f(s)/s⁻ⁿ⁺¹ ds = (1/2πi) ∮_C f(s)/s⁻ⁿ⁺¹ ds   (7.36c)
R_N(z) = (z^N/2πi) ∮_{C₂} f(s)/[(s − z)s^N] ds   (7.36d)
S_N(z) = [1/(2πi z^N)] ∮_{C₁} s^N f(s)/(z − s) ds .   (7.36e)

Now, if M₁ is the maximum value of |f(s)| on C₁ and M₂ is the maximum value of
|f(s)| on C₂,

|R_N(z)| ≤ [M₂r₂/(r₂ − r)] (r/r₂)^N   (7.36f)
  → 0 as N → ∞ since r₂ > r ,   (7.36g)
|S_N(z)| ≤ [M₁r₁/(r − r₁)] (r₁/r)^N   (7.36h)
  → 0 as N → ∞ since r₁ < r .   (7.36i)

A power series has the following properties:

• If a power series Σ_{n=0}^∞ aₙzⁿ converges when z = z₁ (z₁ ≠ 0) then it is
  absolutely convergent in the open disk |z| < |z₁|.
  Thus the series will converge only in a disk out to radius R₀ = |z₀| where z₀ is
  the nearest point for which the series diverges, i.e., where the function that
  the series corresponds to fails to be analytic.
  E.g.,

  f(z) = 1/(1 − z) is analytic for z ≠ 1
  ⟹ Σ_{n=0}^∞ zⁿ converges in the disk |z| < 1 but not beyond.

• The power series S(z) = Σ_{n=0}^∞ aₙzⁿ is analytic within its circle of
  convergence. It can be term-by-term integrated and differentiated.

• If a series Σ_{n=−∞}^∞ cₙ(z − z₀)ⁿ converges to f(z) at all points in some annular
  domain about z₀ then it is the unique Laurent series expansion for f in
  powers of z − z₀ for that domain.

Residues
If a function f is analytic throughout a deleted neighborhood 0 < |z − z₀| < ε of
a singular point z₀ then z₀ is an isolated singular point. E.g., 1/z has an
isolated singular point z₀ = 0 but the origin is not isolated for Log z.
If z0 is an isolated singular point of f then the function can be written as a
Laurent series:

¼ b1 b2
f (z) = an (z − z0 )n + + + ··· , 0 < |z − z0 | < R 2 (7.37)
z − z0 (z − z0 )2
n=0

where R 2 is some positive number. Here in particular

f (z) d z = 2ái b 1 (7.38)


C

where C is a positively-oriented simple closed contour around z0 lying in the


domain 0 < |z − z0 | < R 2 . Call b 1 the residue: b 1 = Resz=z0 f (z).
Tricks to find the residue:
• Suppose φ(z) is analytic at z = z₀ and φ(z₀) ≠ 0; then

  Res_{z=z₀} φ(z)/(z − z₀) = φ(z₀) .   (7.39)

• Suppose p(z) and q(z) are both analytic at z₀ and p(z₀) ≠ 0, q(z₀) = 0,
  q′(z₀) ≠ 0; then

  Res_{z=z₀} p(z)/q(z) = p(z₀)/q′(z₀) .   (7.40)

Ex. 7.4. For f(z) = (z + 1)/(z² + 9) find Res_{z=3i} f(z).
Write f(z) = φ(z)/(z − 3i) where φ(z) = (z + 1)/(z + 3i).
∴ Res_{z=3i} f(z) = φ(3i) = (3 − i)/6 .

Ex. 7.5. f(z) = cot z = cos z / sin z.
Let p(z) = cos z, q(z) = sin z, q′(z) = cos z. The zeros of q(z) are the points z = nπ,
n = 0, ±1, ±2, . . . .
∴ Res_{z=nπ} f(z) = p(nπ)/q′(nπ) = 1 .
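Residues can also be extracted numerically from the defining contour integral, which makes a convenient check on these tricks. The sketch below (illustrative; the helper name `residue_numeric` is my own) approximates (1/2πi)∮ f(z) dz on a small circle:

```python
import cmath
import math

def residue_numeric(f, z0, radius=1e-3, n=4096):
    # (1/2πi) ∮ f(z) dz around the circle |z - z0| = radius, midpoint rule
    total = 0j
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        w = radius * cmath.exp(1j * theta)        # z - z0 on the circle
        total += f(z0 + w) * 1j * w * (2 * math.pi / n)  # f(z) dz
    return total / (2j * math.pi)

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

def g(z):  # the integrand of Ex. 7.4
    return (z + 1) / (z * z + 9)

assert abs(residue_numeric(cot, math.pi) - 1) < 1e-6        # Res at z = π is 1
assert abs(residue_numeric(g, 3j) - (3 - 1j) / 6) < 1e-6    # Res at z = 3i is (3-i)/6
```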

• If

  f(z) = Σ_{n=0}^∞ aₙ(z − z₀)ⁿ + [ b₁/(z − z₀) + b₂/(z − z₀)² + · · · + b_m/(z − z₀)^m ]
                                  (the bracketed part is the principal part)   (7.41)

  for 0 < |z − z₀| < R₂ where b_m ≠ 0, then the isolated singular point z₀ is called
  a pole of order m.
  If m = 1 then it is a simple pole.

Ex. 7.6.

sinh z / z⁴ = (1/z⁴){ z + z³/3! + z⁵/5! + · · · } = 1/z³ + 1/(3! z) + z/5! + · · ·   (7.42)

has a pole of order 3 at z = 0 with residue 1/6.

• If the principal part has an infinite number of terms then the singular point is
an essential singular point.

Ex. 7.7.

e^{1/z} = Σ_{n=0}^∞ (1/n!) (1/zⁿ) ,   0 < |z| < ∞   (7.43)

has an essential singular point at z = 0 with residue 1.

• When all b m are zero at an isolated singular point z0 then z0 is a removable


singular point.

Ex. 7.8.

f(z) = (e^z − 1)/z = (1/z)( z + z²/2! + · · · ) = 1 + z/2! + z²/3! + · · · ,   0 < |z| < ∞   (7.44)

has a removable singular point at z = 0. If we write f(0) = 1 then the function is entire.

Residue Theorem
Theorem 4 (Residue). If C is a positively oriented simple closed contour within
and on which a function f is analytic except for a finite number of singular
points zk (k = 1, 2, . . . , n) interior to C, then
∮_C f(z) dz = 2πi Σ_{k=1}^n Res_{z=z_k} f(z) .   (7.45)

Ex. 7.9. Evaluate

∮_C (5z − 2)/[z(z − 1)] dz   (7.46)

for C the circle |z| = 2 described counterclockwise.

For the domain 0 < |z| < 1,

(5z − 2)/[z(z − 1)] = [(2 − 5z)/z] · 1/(1 − z) = (2/z − 5)(1 + z + z² + · · ·)   (7.47a)
  = 2/z − 3 − 3z − · · ·   (7.47b)

so the residue at z = 0 is 2. Also, for the domain 0 < |z − 1| < 1,

(5z − 2)/[z(z − 1)] = [5(z − 1) + 3]/(z − 1) · 1/[1 + (z − 1)]   (7.47c)
  = [5 + 3/(z − 1)][1 − (z − 1) + (z − 1)² − · · ·]   (7.47d)
  = 3/(z − 1) + 2 − 2(z − 1) + · · ·   (7.47e)

so the residue at z = 1 is 3.

∴ ∮_C (5z − 2)/[z(z − 1)] dz = 2πi(2 + 3) = 10πi .   (7.48)
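This result can be reproduced by brute-force numerical integration around the circle |z| = 2; the following is a quick illustrative check (not part of the original notes):

```python
import cmath
import math

# Midpoint-rule approximation of ∮_{|z|=2} (5z-2)/(z(z-1)) dz; expect 2πi(2+3) = 10πi
n = 20_000
total = 0j
for k in range(n):
    theta = 2 * math.pi * (k + 0.5) / n
    z = 2 * cmath.exp(1j * theta)
    total += (5 * z - 2) / (z * (z - 1)) * 1j * z * (2 * math.pi / n)  # dz = iz dθ

assert abs(total - 10j * math.pi) < 1e-6
```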

Theorem 5. If f is analytic throughout a domain D and f (z) = 0 at each point z


of a domain or arc interior to D then f (z) = 0 everywhere in D .

Proof. Since f (z) = 0 along some arc we know that the coefficients
an = f (n) (z0 )/n! must be zero since the derivatives must all be zero. This means
that f (z) = 0 for all z for which the Taylor series is valid.

Corollary. Suppose f (z) and g(z) are analytic in a domain D and f (z) = g(z)
along some arc or in some sub-domain. Then f (z) = g(z) everywhere in D .

Proof. Consider h(z) = f (z) − g(z) = 0 along the arc; Theorem 5 then requires
h(z) = 0 within D .

Analytic Continuation
Consider two intersecting domains D1 and D2 .
Suppose f1 is analytic in D1 . There may be a function f2 that is analytic in D2
such that

f2 (z) = f1 (z) ∀z ∈ D1 ∩ D2 . (7.49)

If such a function exists, then it is called the analytic continuation of f1 into


D2 .
When such a function exists, it is unique. The function

F(z) = { f₁(z) ,  z ∈ D₁
       { f₂(z) ,  z ∈ D₂   (7.50)

is analytic in D₁ ∪ D₂.
However, suppose there are three domains as shown in Fig. 7.6 and

f₁(z) = f₂(z)   ∀z ∈ D₁ ∩ D₂   (7.51)
f₁(z) = f₃(z)   ∀z ∈ D₁ ∩ D₃ ;   (7.52)

it is not necessarily true that

f₂(z) = f₃(z)   ∀z ∈ D₂ ∩ D₃ .   (7.53)

Figure 7.6: Intersecting Domains



Ex. 7.10. Consider

f₁(z) = Σ_{n=0}^∞ zⁿ ,   |z| < 1 .   (7.54)

The function

f₂(z) = 1/(1 − z) ,   z ≠ 1   (7.55)

satisfies f₂(z) = f₁(z) for |z| < 1. Therefore, f₂ is the analytic continuation of f₁ to the
entire complex plane except z = 1.

Ex. 7.11. Consider the branch of z^{1/2} with −π < arg z < π and define:

f₁(z) = √r e^{iθ/2} ,   r > 0, −π/2 < θ < π .   (7.56)

This is defined in Quadrants I, II, and IV of the complex plane.

Analytically continue this across the negative real axis into Quadrant III:

f₂(z) = √r e^{iθ/2} ,   r > 0, π/2 < θ < 3π/2 .   (7.57)

This is defined in Quadrants II and III of the complex plane. Note that f₂(z) = f₁(z) in the
overlapping domain of Quadrant II: r > 0, π/2 < θ < π.
Now analytically continue this across the negative imaginary axis:

f₃(z) = √r e^{iθ/2} ,   r > 0, π < θ < 5π/2 .   (7.58)

This is defined in Quadrants I, III, and IV of the complex plane. Note that f₃(z) = f₂(z) in
the overlapping domain of Quadrant III: r > 0, π < θ < 3π/2.
However, f₃(z) ≠ f₁(z) in their overlapping domains of Quadrants I and IV; in fact,
f₃(z) = −f₁(z). E.g.,

f₁(1) = 1·e^{i0/2} = 1   (7.59)

but

f₃(1) = 1·e^{i(2π)/2} = −1 .   (7.60)
8 Example: Gamma Function

The Euler representation of the gamma function (see Fig. 8.1) is

Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt .   (8.1)

Note: as t → 0 the integrand behaves like t^{z−1} and so the integral behaves like
t^z/z = z⁻¹e^{z ln t}, which converges as t → 0 only for Re z > 0; therefore this
definition of the gamma function is only valid for Re z > 0.
We can integrate by parts:

Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt   (8.2a)
     = ∫_0^∞ u dv      let u = e^{−t}, du = −e^{−t} dt;
                       dv = t^{z−1} dt, v = t^z/z (z ≠ 0)   (8.2b)
     = [uv]_0^∞ − ∫_0^∞ v du      v → 0 as t → 0 (Re z > 0);
                                  u → 0 as t → ∞   (8.2c)
     = ∫_0^∞ e^{−t} (t^z/z) dt   (8.2d)
     = Γ(z + 1)/z ,   Re z > 0 .   (8.2e)

Thus,

Γ(z + 1) = zΓ(z) ,   Re z > 0 .   (8.3)
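The recurrence can be spot-checked numerically with Python's `math.gamma` (an illustrative sketch, not part of the original notes):

```python
import math

# Check Γ(x+1) = xΓ(x) at a few real x > 0
for x in (0.5, 1.0, 2.5, 7.3):
    assert abs(math.gamma(x + 1) - x * math.gamma(x)) < 1e-9 * math.gamma(x + 1)

# Repeated application gives Γ(n+1) = n!
for n in range(8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)
```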

8. Example: Gamma Function 51

Figure 8.1: Gamma Function

Note: when z = n, n > 0,

Γ(n + 1) = nΓ(n)   (8.4a)

and

Γ(1) = ∫_0^∞ e^{−t} dt = 1 .   (8.4b)

Therefore, write

n! = Γ(n + 1) ,   n = 0, 1, 2, . . . .   (8.5)

Use the relation Γ(z + 1) = zΓ(z) to analytically continue into the left half of the
complex plane:

Γ(z) = Γ(z + 1)/z ,   Re z > −1, z ≠ 0 .   (8.6)

For example,

Γ(−½) = Γ(−½ + 1)/(−½) = −2Γ(½) .   (8.7)

With repeated applications, we can extend over (almost) all of the complex plane.
However, there is a singularity at z = 0 which prevents us from obtaining Γ(0),
Γ(−1), Γ(−2), . . . , but other than this, the gamma function has been extended
over the entire complex plane.

Weierstrass Representation of the Gamma Function

Begin with the Euler representation:

Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt   (8.8a)
     = ∫_0^α e^{−t} t^{z−1} dt + ∫_α^∞ e^{−t} t^{z−1} dt   (8.8b)
     = ∫_0^α ( Σ_{n=0}^∞ [(−1)ⁿ/n!] tⁿ ) t^{z−1} dt + ∫_α^∞ e^{−t} t^{z−1} dt   (8.8c)
     = Σ_{n=0}^∞ [(−1)ⁿ/n!] ∫_0^α t^{n+z−1} dt + ∫_α^∞ e^{−t} t^{z−1} dt   (8.8d)
     = Σ_{n=0}^∞ [(−1)ⁿ/n!] α^{n+z}/(z + n) + ∫_α^∞ e^{−t} t^{z−1} dt .   (8.8e)

The sum has simple poles at z = 0, −1, −2, . . . (provided α > 0), while the integral
is well-defined even when Re z < 0.
Therefore this form is valid everywhere on the complex plane, with simple poles
at z = 0, −1, −2, . . . .
Note: the choice of α > 0 does not matter; the Weierstrass representation of the
gamma function is when α = 1:

Γ(z) = Σ_{n=0}^∞ [(−1)ⁿ/n!] 1/(z + n) + ∫_1^∞ e^{−t} t^{z−1} dt .   (8.9)

Euler Reflection Formula

For 0 < x < 1,

Γ(x)Γ(1 − x) = ∫_0^∞ e^{−s} s^{x−1} ds ∫_0^∞ e^{−t} t^{(1−x)−1} dt   (8.10a)
  = ∫_{s=0}^∞ ∫_{t=0}^∞ e^{−(s+t)} s^{x−1} t^{−x} dt ds   (8.10b)
    let s = u − t
  = ∫_{u=0}^∞ ∫_{t=0}^u e^{−u} (u − t)^{x−1} t^{−x} dt du   (8.10c)
    let t = uv
  = ∫_{u=0}^∞ ∫_{v=0}^1 e^{−u} u^{x−1} (1 − v)^{x−1} u^{−x} v^{−x} u dv du   (8.10d)
  = ∫_0^∞ e^{−u} du ∫_0^1 (1 − v)^{x−1}/v^x dv   (8.10e)
  = ∫_0^1 (1 − v)^{x−1}/v^x dv   (8.10f)
    let v → 1 − v
  = ∫_0^1 v^{x−1}/(1 − v)^x dv   (8.10g)
    let v = t/(1 + t),  dv = dt/(1 + t)²
  = ∫_0^∞ [t/(1 + t)]^{x−1} [1 − t/(1 + t)]^{−x} dt/(1 + t)²   (8.10h)
  = ∫_0^∞ t^{x−1} (1 + t)^{−x+1} (1 + t)^x (1 + t)^{−2} dt   (8.10i)
  = ∫_0^∞ t^{x−1}/(1 + t) dt ,   0 < x < 1 .   (8.10j)

We need to evaluate this integral.



Figure 8.2: Contour for Integral in Euler Reflection Formula

Let

f(z) = z^{−a}/(z + 1) ,   |z| > 0, 0 < arg z < 2π   (8.11)

where a = 1 − x, 0 < a < 1. The function has a simple pole at z = −1 and a
branch cut along the positive real axis.
Consider the contour shown in Fig. 8.2. The function is piecewise continuous
(even though it is multivalued) so ∫_{C_ε} f(z) dz and ∫_{C_R} f(z) dz exist.

For the linear parts of the contour above and below the branch cut write

f(z) = e^{−a log z}/(z + 1) = e^{−a(ln r + iθ)}/(re^{iθ} + 1)   with z = re^{iθ}   (8.12a)

so

f(z) = { r^{−a}/(r + 1)                for z = re^{i0} (above the cut);
       { [r^{−a}/(r + 1)] e^{−i2aπ}    for z = re^{i2π} (below the cut).   (8.12b)

Now,

∫_ε^R r^{−a}/(r + 1) dr + ∫_{C_R} f(z) dz − ∫_ε^R [r^{−a}/(r + 1)] e^{−i2aπ} dr + ∫_{C_ε} f(z) dz
  = 2πi Res_{z=−1} f(z) = 2πi(−1)^{−a} = 2πi(e^{iπ})^{−a}   (8.13a)
  = 2πi e^{−iaπ} ,   (8.13b)

therefore

∫_{C_R} f(z) dz + ∫_{C_ε} f(z) dz = 2πi e^{−iaπ} + (e^{−i2aπ} − 1) ∫_ε^R r^{−a}/(r + 1) dr .   (8.14)

Since a < 1,

|∫_{C_ε} f(z) dz| ≤ [ε^{−a}/(1 − ε)] 2πε = [2π/(1 − ε)] ε^{1−a}   (8.15a)
  → 0 as ε → 0 .   (8.15b)

Also, since a > 0,

|∫_{C_R} f(z) dz| ≤ [R^{−a}/(R − 1)] 2πR = [2π/(1 − 1/R)] (1/R^a)   (8.16a)
  → 0 as R → ∞ .   (8.16b)

Therefore, taking ε → 0 and R → ∞,

∫_0^∞ r^{−a}/(r + 1) dr = 2πi e^{−iaπ}/(1 − e^{−i2aπ}) = 2πi/(e^{iaπ} − e^{−iaπ}) = π/sin aπ .   (8.17)

Thus (with a = 1 − x) we have

Γ(x)Γ(1 − x) = π/sin πx ,   0 < x < 1 .   (8.18)

Now use analytic continuation to extend to the entire complex plane; the result
is Euler’s reflection formula:

Γ(z)Γ(1 − z) = π/sin πz ,   z ≠ 0, ±1, ±2, . . . .   (8.19)

Note:
• Γ(z)Γ(1 − z) sin πz = π is clearly entire;
• Γ(z) has singularities at z = 0, −1, −2, . . . ;
• Γ(1 − z) has singularities at z = 1, 2, 3, . . . ;
• sin πz has zeros at z = 0, ±1, ±2, . . . that “cancel” the singularities;
thus we conclude

1/Γ(z) is entire.   (8.20)
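The reflection formula is easy to verify numerically, including at negative non-integer arguments; this check is an illustrative sketch, not part of the original notes:

```python
import math

# Spot-check Euler's reflection formula Γ(x)Γ(1-x) = π/sin(πx) at non-integer x
for x in (0.25, 0.5, 0.9, -1.3, 2.6):
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```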

Useful results:
• When z = ½,

  Γ(½)Γ(1 − ½) = [Γ(½)]² = π/sin(π/2) = π   (8.21)

  so

  Γ(½) = √π .   (8.22)

• Can then show:

  Γ(m + ½) = [1·3·5 · · · (2m − 1)/2^m] √π .   (8.23)

  Therefore, define

  (2m − 1)!! = 1·3·5 · · · (2m − 1) = 2^m Γ(m + ½)/√π   (8.24)

  and

  (2m)!! = 2·4·6 · · · (2m) = (2m)!/(2m − 1)!! = √π Γ(2m + 1)/[2^m Γ(m + ½)]   (8.25)

  but note also that (2m)!! = 2^m m! so

  Γ(2m + 1) = (2^{2m}/√π) Γ(m + ½)Γ(m + 1) .   (8.26)

• This last result can be generalized to give Legendre’s duplication formula

  Γ(2z) = (2^{2z−1}/√π) Γ(z)Γ(z + ½) .   (8.27)

• The binomial coefficient, Eq. (3.2), can be expressed in terms of the gamma
  function as

  (x choose y) = Γ(x + 1)/[Γ(y + 1)Γ(x − y + 1)] .   (8.28)
Problems

Problem 4.
Show that
a) (1 + i)^i = e^{−π/4} e^{2nπ} [cos(½ ln 2) + i sin(½ ln 2)] where n = 0, ±1, ±2, . . . .
b) (−1)^{1/π} = cos(2n + 1) + i sin(2n + 1) where n = 0, ±1, ±2, . . . .

Problem 5.
Derive the Cauchy-Riemann equations in polar coordinates

∂u/∂r = (1/r) ∂v/∂θ   and   (1/r) ∂u/∂θ = −∂v/∂r

and use these to show that if f(z) = u(r, θ) + i v(r, θ) is analytic in some domain
D that does not contain the origin then throughout D the function u(r, θ)
satisfies the polar form of Laplace’s equation:

r² ∂²u/∂r² + r ∂u/∂r + ∂²u/∂θ² = 0 .

Verify that u(r, θ) = ln r is harmonic in r > 0, 0 < θ < 2π and show that
v(r, θ) = θ is its harmonic conjugate.

Problem 6.
Use the Cauchy-Riemann equations to determine which of the following are
analytic functions of the complex variable z:
a) |z| ;
b) Re z ;
c) e^{sin z} .

Problems 58

Problem 7.
Let C denote the circle |z − z₀| = R taken counterclockwise. Use the parametric
representation z = z₀ + Re^{iθ}, −π ≤ θ ≤ π, for C to derive the following integral
formulas:
a) ∮_C dz/(z − z₀) = 2πi ;
b) ∮_C (z − z₀)^{n−1} dz = 0 where n = ±1, ±2, . . . ;
c) |∮_C Log(z − z₀)/(z − z₀)² dz| < 2π (π + ln R)/R → 0 as R → ∞.

Problem 8.
Represent the function (z + 1)/(z − 1) by
a) its Maclaurin series, and give the region of validity for the representation;
b) its Laurent series for the domain 1 < |z| < ∞.

Problem 9.
Use residues to evaluate these integrals where the contour C is the circle
|z| = 3 taken in the positive sense:
a) ∮_C exp(−z)/z² dz ;
b) ∮_C z² exp(1/z) dz ;
c) ∮_C (z + 1)/(z² − 2z) dz .
Module III

Evaluation of Integrals

9 Elementary Methods of Integration 61

10 Contour Integration 64

11 Approximate Expansions of Integrals 70

12 Saddle-Point Methods 75

Problems 82


Motivation
Let’s face it: integration can be a pain in the neck. Nowadays you can use
computer algebra packages such as Maple or Mathematica or WolframAlpha
to do integrals for you; more traditionally one would use tables of integrals.
But it is still useful to be able to do elementary integrals, and some useful
tricks are reviewed here. We also explore contour integration further and
touch on topics such as asymptotic series (useful for evaluating functions at
large arguments) and saddle-point methods (which can give approximate
solutions to integrals).
9 Elementary Methods of Integration

• Introduce a complex variable.

Ex. 9.1. Evaluate

I = ∫_0^∞ e^{−ax} cos bx dx   (9.1a)
  = Re ∫_0^∞ e^{−ax} e^{ibx} dx   (9.1b)
  = Re [1/(a − ib)]   (9.1c)
  = a/(a² + b²) .   (9.1d)

Similarly

I = ∫_0^∞ e^{−ax} sin bx dx   (9.2a)
  = Im ∫_0^∞ e^{−ax} e^{ibx} dx   (9.2b)
  = Im [1/(a − ib)]   (9.2c)
  = b/(a² + b²) .   (9.2d)
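Both closed forms can be confirmed by a direct (truncated) numerical quadrature; this midpoint-rule sketch is illustrative only, with the truncation point T chosen so e^{−aT} is negligible:

```python
import math

# Midpoint-rule check of ∫_0^∞ e^{-ax} cos(bx) dx = a/(a²+b²) and the sin analogue
a, b = 1.5, 2.0
n, T = 200_000, 40.0
h = T / n
cos_int = sum(math.exp(-a * (k + 0.5) * h) * math.cos(b * (k + 0.5) * h) for k in range(n)) * h
sin_int = sum(math.exp(-a * (k + 0.5) * h) * math.sin(b * (k + 0.5) * h) for k in range(n)) * h

assert abs(cos_int - a / (a * a + b * b)) < 1e-6
assert abs(sin_int - b / (a * a + b * b)) < 1e-6
```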

9. Elementary Methods of Integration 62

• Differentiation or integration with respect to a parameter.

Ex. 9.2. Evaluate

I = ∫_0^∞ x e^{−ax} cos bx dx .   (9.3)

Let

I(a) = ∫_0^∞ e^{−ax} cos bx dx = a/(a² + b²) ;   (9.4)

then

I = −(d/da) I(a) = −(d/da) [a/(a² + b²)] = (a² − b²)/(a² + b²)² .   (9.5)

Ex. 9.3. Evaluate

I = ∫_0^∞ (sin x)/x dx .   (9.6)

Let

I(a) = ∫_0^∞ e^{−ax} (sin x)/x dx   (9.7)

so I = I(0). Now,

(d/da) I(a) = −∫_0^∞ e^{−ax} sin x dx = −1/(a² + 1)   (9.8)

so we have

I(a) = −∫ da/(a² + 1) = C − arctan a   (9.9)

but since I(∞) = 0, we find C = π/2; therefore

I(a) = π/2 − arctan a   (9.10)

and finally

I = I(0) = π/2 .   (9.11)

• Be clever.

Ex. 9.4. Evaluate

I = ∫_{−∞}^∞ e^{−x²} dx .   (9.12)

Consider

I² = ∫_{−∞}^∞ e^{−x²} dx ∫_{−∞}^∞ e^{−y²} dy   (9.13a)
   = ∬ e^{−(x²+y²)} dx dy   (9.13b)
     change to polar coordinates: r² = x² + y², dx dy = r dθ dr
   = ∫_0^{2π} dθ ∫_0^∞ e^{−r²} r dr   (9.13c)
     let u = r², du = 2r dr
   = 2π · ½ ∫_0^∞ e^{−u} du   (9.13d)
   = π .   (9.13e)

Therefore,

I = √π .   (9.14)
10 Contour Integration

Improper Real Integrals


Types:

• ∫_0^∞ f(x) dx = lim_{R→∞} ∫_0^R f(x) dx .   (10.1)

• ∫_{−∞}^∞ f(x) dx = lim_{R₁→∞} ∫_{−R₁}^0 f(x) dx + lim_{R₂→∞} ∫_0^{R₂} f(x) dx .   (10.2)

• P∫_{−∞}^∞ f(x) dx = lim_{R→∞} ∫_{−R}^R f(x) dx .   (10.3)

The third is known as the Cauchy principal value.
If ∫_{−∞}^∞ f(x) dx converges then its value is the same as P∫_{−∞}^∞ f(x) dx.
However, note that P∫_{−∞}^∞ x dx = 0 while ∫_{−∞}^∞ x dx diverges.
If f(x) is an even function then

½ P∫_{−∞}^∞ f(x) dx = ½ ∫_{−∞}^∞ f(x) dx = ∫_0^∞ f(x) dx .   (10.4)

Evaluation of improper real integrals can often be done easily using the
Cauchy principal value and residues.

10. Contour Integration 65

Ex. 10.1. Evaluate:

∫_0^∞ (2x² − 1)/(x⁴ + 5x² + 4) dx = ½ P∫_{−∞}^∞ (2x² − 1)/(x⁴ + 5x² + 4) dx .   (10.5)

Let

f(z) = (2z² − 1)/(z⁴ + 5z² + 4) = (2z² − 1)/[(z² + 1)(z² + 4)] .   (10.6)

This function has isolated simple poles at z = ±i, z = ±2i.
Consider the contour C = L_R + C_R, R > 2, as shown in Fig. 10.1.
We have:

∫_{−R}^R f(x) dx + ∫_{C_R} f(z) dz = 2πi [ Res_{z=i} f(z) + Res_{z=2i} f(z) ] .   (10.7)

Note:

• f(z) = φ₁(z)/(z − i) where φ₁(z) = (2z² − 1)/[(z + i)(z² + 4)]
  ⟹ Res_{z=i} f(z) = φ₁(i) = −3/[(2i)(3)] = −1/(2i) .   (10.8a)

• f(z) = φ₂(z)/(z − 2i) where φ₂(z) = (2z² − 1)/[(z² + 1)(z + 2i)]
  ⟹ Res_{z=2i} f(z) = φ₂(2i) = −9/[(−3)(4i)] = 3/(4i) .   (10.8b)

Figure 10.1: Contour for Ex. 10.1

Therefore

∫_{−R}^R f(x) dx = 2πi [ Res_{z=i} f(z) + Res_{z=2i} f(z) ] − ∫_{C_R} f(z) dz   (10.9a)
  = 2πi [ −1/(2i) + 3/(4i) ] − ∫_{C_R} f(z) dz   (10.9b)
  = π/2 − ∫_{C_R} f(z) dz .   (10.9c)

We need to figure out what ∫_{C_R} f(z) dz is as R → ∞.

Note: when |z| = R,

|2z² − 1| ≤ 2|z|² + 1 = 2R² + 1   (10.10a)
|z⁴ + 5z² + 4| = |z² + 1||z² + 4| ≥ (|z|² − 1)(|z|² − 4) = (R² − 1)(R² − 4)   (10.10b)
⟹ |f(z)| ≤ M_R   where M_R = (2R² + 1)/[(R² − 1)(R² − 4)]   (10.10c)

and so,

|∫_{C_R} f(z) dz| ≤ M_R πR = πR(2R² + 1)/[(R² − 1)(R² − 4)]   (10.11a)
  → 0 as R → ∞ .   (10.11b)

Thus,

P∫_{−∞}^∞ (2x² − 1)/(x⁴ + 5x² + 4) dx = lim_{R→∞} ∫_{−R}^R (2x² − 1)/(x⁴ + 5x² + 4) dx = π/2   (10.12)

and therefore

∫_0^∞ (2x² − 1)/(x⁴ + 5x² + 4) dx = π/4 .   (10.13)
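A direct numerical quadrature of the real integral confirms π/4; the sketch below is illustrative only, with a truncation point T and a 2/T tail estimate (the integrand behaves like 2/x² for large x):

```python
import math

# Midpoint-rule check of ∫_0^∞ (2x²-1)/(x⁴+5x²+4) dx = π/4, truncated at x = T
def f(x):
    return (2 * x * x - 1) / (x ** 4 + 5 * x * x + 4)

T, n = 500.0, 500_000
h = T / n
approx = sum(f((k + 0.5) * h) for k in range(n)) * h

# the discarded tail ∫_T^∞ is approximately 2/T
assert abs(approx + 2 / T - math.pi / 4) < 1e-5
```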

To evaluate integrals of the form

∫_{−∞}^∞ f(x) sin ax dx   (a > 0)   or   ∫_{−∞}^∞ f(x) cos ax dx   (10.14)

try

∫_{−R}^R f(x) cos ax dx + i ∫_{−R}^R f(x) sin ax dx = ∫_{−R}^R f(x) e^{iax} dx   (10.15)

and use the fact that |e^{iaz}| = e^{−ay} is bounded in the upper-half plane y ≥ 0.
Ex. 10.2. Compute

P∫_{−∞}^∞ x sin x/(x² + 2x + 2) dx .   (10.16)

Let

f(z) = z/(z² + 2z + 2) = z/[(z − z₁)(z − z₁*)]   where z₁ = −1 + i .   (10.17)

Note: z₁ is a simple pole of f(z)e^{iz} in the upper-half plane with residue

b₁ = Res_{z=z₁} f(z)e^{iz} = z₁ e^{iz₁}/(z₁ − z₁*) .   (10.18)

Use the contour C = L_R + C_R shown in Fig. 10.2. We see

∫_{−R}^R x e^{ix}/(x² + 2x + 2) dx = 2πi b₁ − ∫_{C_R} f(z)e^{iz} dz ,   (10.19)

and we want to bound the last term.

Figure 10.2: Contour for Ex. 10.2

Note: |f(z)| ≤ M_R where M_R = R/(R − √2)², and |e^{iz}| = e^{−y} ≤ 1, so

|∫_{C_R} f(z)e^{iz} dz| ≤ M_R πR = πR²/(R − √2)²   (10.20)

but this does not go to zero as R → ∞.

We need to be more careful:

∫_{C_R} f(z)e^{iz} dz = ∫_0^π f(Re^{iθ}) e^{iRe^{iθ}} iRe^{iθ} dθ .   (10.21)

Now |f(Re^{iθ})| ≤ M_R and |e^{iRe^{iθ}}| = e^{−R sin θ} so

|∫_{C_R} f(z)e^{iz} dz| ≤ M_R R ∫_0^π e^{−R sin θ} dθ .   (10.22)

We use Jordan’s inequality to bound the integral: since sin θ ≥ 2θ/π for 0 ≤ θ ≤ π/2
(see Fig. 10.3),

∫_0^π e^{−R sin θ} dθ ≤ 2 ∫_0^{π/2} e^{−2Rθ/π} dθ = (π/R)(1 − e^{−R})   (10.23a)
  < π/R .   (10.23b)

Thus,

|∫_{C_R} f(z)e^{iz} dz| < M_R R (π/R) = πM_R   (10.24a)
  → 0 as R → ∞   (10.24b)

and therefore

P∫_{−∞}^∞ x sin x/(x² + 2x + 2) dx = Im(2πi b₁) = (π/e)(sin 1 + cos 1) .   (10.25)

Figure 10.3: Jordan’s Inequality (y = sin θ lies above y = 2θ/π on [0, π/2])

Definite Integrals Involving Sines and Cosines


For integrals of the form

I = ∫_0^{2π} F(sin θ, cos θ) dθ   (10.26)

use the following trick: let z = e^{iθ}, 0 ≤ θ ≤ 2π, and substitute:

sin θ = (z − z⁻¹)/(2i) ,   cos θ = (z + z⁻¹)/2 ,   dθ = dz/(iz) .   (10.27)

Then

I = ∮_C F( (z − z⁻¹)/(2i), (z + z⁻¹)/2 ) dz/(iz)   (10.28)

where C is the unit circle about the origin evaluated in the positive direction.
Ex. 10.3. Compute

I = ∫_0^{2π} dθ/(1 + a sin θ) ,   −1 < a < 1, a ≠ 0 .   (10.29)

Perform the suggested substitutions:

I = ∮_C [1/(1 + a(z − z⁻¹)/(2i))] dz/(iz) = ∮_C (2/a)/[z² + (2i/a)z − 1] dz .   (10.30)

The integrand is

f(z) = (2/a)/[(z − z₁)(z − z₂)]   (10.31a)

with

z₁ = [(−1 + √(1 − a²))/a] i   and   z₂ = [(−1 − √(1 − a²))/a] i .   (10.31b)

Note: because |a| < 1, |z₂| = (1 + √(1 − a²))/|a| > 1, and since |z₁z₂| = 1, |z₁| < 1.
Therefore only z₁ is contained within C, and its residue is

Res_{z=z₁} f(z) = (2/a)/(z₁ − z₂) = 1/(i√(1 − a²))   (10.32)

and thus

I = 2πi Res_{z=z₁} f(z) = 2π/√(1 − a²) ,   −1 < a < 1   (10.33)

(the case a = 0 is obvious).
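The closed form 2π/√(1 − a²) is easy to confirm numerically; this midpoint-rule sketch (illustrative, not part of the original notes) converges very rapidly because the integrand is smooth and periodic:

```python
import math

# Midpoint check of ∫_0^{2π} dθ/(1 + a sin θ) = 2π/√(1-a²) for |a| < 1
def periodic_integral(a, n=100_000):
    h = 2 * math.pi / n
    return sum(1.0 / (1 + a * math.sin((k + 0.5) * h)) for k in range(n)) * h

for a in (0.3, -0.7, 0.95):
    assert abs(periodic_integral(a) - 2 * math.pi / math.sqrt(1 - a * a)) < 1e-6
```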
11 Approximate Expansions of Integrals

The idea is to expand the integrand in a series.


Ex. 11.1 (Error function). The error function is (see Fig. 11.1)

erf x = (2/√π) ∫_0^x e^{−t²} dt .   (11.1)

Expand the integrand in a power series and integrate term-by-term:

erf x = (2/√π) ∫_0^x { 1 − t² + t⁴/2! − t⁶/3! + · · · } dt   (11.2a)
      = (2/√π) { x − x³/3 + x⁵/(5·2!) − x⁷/(7·3!) + · · · } .   (11.2b)

This converges for all x but it is only really useful for small x.
We would like a large-x expansion.

Figure 11.1: Error Function and Complementary Error Function

11. Approximate Expansions of Integrals 71

As x → ∞, erf x → 1, so compute the complementary error function (see Fig. 11.1)

erfc x = 1 − erf x = (2/√π) ∫_x^∞ e^{−t²} dt   (11.3a)
    integrate by parts: u = 1/t, du = −dt/t²;
    dv = te^{−t²} dt, v = −½e^{−t²}
  = (2/√π) { (1/2) e^{−x²}/x − (1/2) ∫_x^∞ e^{−t²}/t² dt }   (11.3b)
    by parts again: u = 1/t³, dv = te^{−t²} dt
  = (2/√π) { (1/2) e^{−x²}/x − (1/2²) e^{−x²}/x³ + (3/2²) ∫_x^∞ e^{−t²}/t⁴ dt }   (11.3c)

and so on. . . . After n times,

erfc x = (2/√π) e^{−x²} { 1/(2x) − 1/(2²x³) + (1·3)/(2³x⁵) − (1·3·5)/(2⁴x⁷) + · · ·
           + (−1)^{n−1} [1·3·5 · · · (2n − 3)]/(2ⁿx^{2n−1}) }
         + (−1)ⁿ [1·3·5 · · · (2n − 1)]/2ⁿ · (2/√π) ∫_x^∞ e^{−t²}/t^{2n} dt .   (11.3d)

Consider the series with terms

aₙ = (−1)^{n−1} (2n − 3)!!/(2ⁿx^{2n−1}) .   (11.4)

Apply the ratio test:

|aₙ₊₁/aₙ| = [(2n − 1)!!/(2n − 3)!!] [2ⁿx^{2n−1}/(2^{n+1}x^{2n+1})] = [(2n − 1)/2] (1/x²)   (11.5a)
  ∼ n/x² for large n.   (11.5b)

For large n, we can always find an n larger than x², and so the ratio test indicates this
series will not converge.
However, the terms are getting smaller until term n ≈ x².
The error is smallest if we truncate the series here.

Asymptotic Series
The series

S(z) = c₀ + c₁/z + c₂/z² + · · ·   (11.6)

is an asymptotic series expansion of some function f(z) provided that for any
n the error involved in terminating the series with the term cₙz⁻ⁿ goes to zero
faster than z⁻ⁿ as |z| → ∞ (for some range of arg z):

lim_{|z|→∞} zⁿ [f(z) − Sₙ(z)] = 0   (arg z in some range)   (11.7a)

where

Sₙ(z) = c₀ + c₁/z + c₂/z² + · · · + cₙ/zⁿ .   (11.7b)

Write: f(z) ∼ S(z) where “∼” means “asymptotically equal to.”

Ex. 11.1 (continued). Returning to the complementary error function,

    \mathrm{erfc}\, x - (\text{asymptotic series}) = (\text{remainder integral})    (11.8a)

where

    (\text{asymptotic series}) = \frac{2}{\sqrt{\pi}} e^{-x^2} \left[ \frac{1}{2x} - \frac{1}{2^2 x^3} + \cdots + (-1)^{n-1} \frac{(2n-3)!!}{2^n x^{2n-1}} \right]    (11.8b)

and

    (\text{remainder integral}) = (-1)^n \frac{(2n-1)!!}{2^n} \frac{2}{\sqrt{\pi}} \int_x^\infty \frac{e^{-t^2}}{t^{2n}} \, dt .    (11.8c)

Note: successive terms of the series go in steps of 1/x^2.

To show that the asymptotic series truly is an asymptotic series, consider

    x^{2n} \left[ \mathrm{erfc}\, x - (\text{asymptotic series}) \right] = (-1)^n \frac{(2n-1)!!}{2^n} \frac{2}{\sqrt{\pi}} \, x^{2n} \int_x^\infty \frac{e^{-t^2}}{t^{2n}} \, dt ;    (11.9a)

in magnitude, since t ≥ x in the integral,

    \left| x^{2n} \left[ \mathrm{erfc}\, x - (\text{asymptotic series}) \right] \right| < \frac{(2n-1)!!}{2^n} \frac{2}{\sqrt{\pi}} \, x^{2n} \int_x^\infty \frac{e^{-t^2}}{x^{2n}} \, dt    (11.9b)
    = \frac{(2n-1)!!}{2^n} \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} \, dt    (11.9c)
    \to 0 \text{ as } x \to \infty .    (11.9d)

Therefore we see that the asymptotic series is indeed an asymptotic series.



Ex. 11.2 (Exponential integral). The exponential integral (see Fig. 11.2) is

    \mathrm{Ei}\, x = \int_{-\infty}^x \frac{e^t}{t} \, dt .    (11.10)

We seek an asymptotic series for x → -∞. Consider

    \mathrm{Ei}(-x) = \int_{-\infty}^{-x} \frac{e^t}{t} \, dt    (11.11a)
                    = \int_\infty^x \frac{e^{-t}}{t} \, dt .    (11.11b)

Integrate by parts with u = 1/t, du = -dt/t^2 and dv = e^{-t} dt, v = -e^{-t}:

    \mathrm{Ei}(-x) = -\frac{e^{-x}}{x} - \int_\infty^x \frac{e^{-t}}{t^2} \, dt    (11.11c)

and by parts again:

    \mathrm{Ei}(-x) = -\frac{e^{-x}}{x} + \frac{e^{-x}}{x^2} + 2 \int_\infty^x \frac{e^{-t}}{t^3} \, dt    (11.11d)

and so on. After n times,

    -\mathrm{Ei}(-x) = \frac{e^{-x}}{x} \left[ 1 - \frac{1}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + \cdots + (-1)^n \frac{n!}{x^n} \right] + (-1)^n (n+1)! \int_\infty^x \frac{e^{-t}}{t^{n+2}} \, dt .    (11.12)

• Asymptotic series for Ei(-x):

    -\mathrm{Ei}(-x) \sim \frac{e^{-x}}{x} \left[ 1 - \frac{1}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + \cdots \right] .    (11.13)

• Identity for the remainder term:

    \int_x^\infty \frac{e^{-t}}{t^n} \, dt = \frac{(-1)^n}{(n-1)!} \left\{ \mathrm{Ei}(-x) + \frac{e^{-x}}{x} \left[ 1 - \frac{1}{x} + \frac{2!}{x^2} - \cdots + (-1)^n \frac{(n-2)!}{x^{n-2}} \right] \right\} .    (11.14)
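A numerical sketch of this example (added here as an aside; the quadrature scheme and names are my own choices) compares the truncated asymptotic series with a direct Simpson-rule evaluation of -Ei(-x) = ∫_x^∞ e^{-t}/t dt:

```python
import math

def e1_quad(x, length=60.0, n=20000):
    """-Ei(-x) = integral of e^{-t}/t from x to infinity, via composite
    Simpson's rule on [x, x+length]; the tail beyond is negligible here.
    n must be even."""
    a, b = x, x + length
    h = (b - a) / n
    s = math.exp(-a)/a + math.exp(-b)/b
    for i in range(1, n):
        t = a + i*h
        s += (4 if i % 2 else 2) * math.exp(-t)/t
    return s * h / 3.0

def e1_asymptotic(x, n_terms):
    """Partial sum of (11.13): (e^{-x}/x) * sum_k (-1)^k k! / x^k."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= -(k + 1) / x
    return math.exp(-x)/x * total
```

Truncating near the smallest term (n ≈ x, here n = 10 for x = 10) reproduces the quadrature value to a few parts in 10⁴.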

[Figure: plot of Ei x for -1 ≤ x ≤ 3]

Figure 11.2: Exponential Integral


12 Saddle-Point Methods

Method of Steepest Descent


For sharply peaked integrands, the integral is dominated by the region near the
peak of the integrand.
Ex. 12.1. Obtain an approximation of Γ(x+1) for x ≫ 1.
Recall the Euler representation of the gamma function:

    \Gamma(x+1) = \int_0^\infty t^x e^{-t} \, dt .    (12.1)

The integrand is shown in Fig. 12.1. It is peaked at the value t_0 where

    0 = \frac{d}{dt} \left( t^x e^{-t} \right) \Big|_{t=t_0}    (12.2a)
      = e^{-t_0} \left( -t_0^x + x \, t_0^{x-1} \right)    (12.2b)

and so

    t_0 = x .    (12.2c)

The integrand of the gamma function is sharply peaked for large x.

[Figure: t^x e^{-t} versus t, sharply peaked near t = x]

Figure 12.1: Integrand of the Gamma Function

Write the integrand as e^{f(t)} = e^{x \ln t - t} and expand f(t) in a Taylor series about t = t_0 = x:

    f(t) = x \ln t - t \implies f(x) = x \ln x - x    (12.3a)
    f'(t) = \frac{x}{t} - 1 \implies f'(x) = 0    (12.3b)
    f''(t) = -\frac{x}{t^2} \implies f''(x) = -\frac{1}{x}    (12.3c)

and so

    f(t) = f(x) + f'(x)(t-x) + \frac{1}{2!} f''(x)(t-x)^2 + \cdots    (12.3d)
         \approx x \ln x - x - \frac{1}{2x} (t-x)^2 .    (12.3e)

Therefore we have

    \Gamma(x+1) \approx \int_0^\infty \exp\left[ x \ln x - x - \frac{1}{2x}(t-x)^2 \right] dt    (12.4a)
                \approx \int_{-\infty}^\infty \exp\left[ x \ln x - x - \frac{1}{2x}(t-x)^2 \right] dt \quad \text{[extend the integration domain]}    (12.4b)
                = e^{x \ln x - x} \int_{-\infty}^\infty e^{-(t-x)^2/2x} \, dt    (12.4c)
                = \sqrt{2\pi x} \, x^x e^{-x} .    (12.4d)

This is the first term of Stirling's formula.
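The leading term is already quite accurate for moderately large x. A one-line Python check (an illustration, not part of the notes) against the standard-library gamma function:

```python
import math

def stirling_leading(x):
    """Leading saddle-point term (12.4d): Gamma(x+1) ≈ sqrt(2 pi x) x^x e^{-x}."""
    return math.sqrt(2*math.pi*x) * x**x * math.exp(-x)
```

The relative error is about 1/(12x), consistent with the correction terms derived below in the section.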



In general, the idea is to evaluate integrals of the form

    I(\alpha) = \int_C e^{\alpha f(z)} \, dz \quad (\alpha \text{ large and positive})    (12.5)

by deforming the contour so as to concentrate most of the integral near where Re f(z) is largest.
Let z = x + iy and f(z) = u(x,y) + i v(x,y). When f(z) is analytic (not at a singularity),

    \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \quad \text{(harmonic)}    (12.6)

so any flat spot \partial u/\partial x = \partial u/\partial y = 0 is neither a maximum nor a minimum, since \partial^2 u/\partial x^2 = -\partial^2 u/\partial y^2. Therefore all such points are saddle points, and, by the Cauchy-Riemann conditions, they are saddle points of v as well; at the saddle point f'(z_0) = 0.
Therefore

    f(z) \approx f(z_0) + \frac{1}{2} f''(z_0) (z - z_0)^2    (12.7)

where z_0 is a saddle point.
Let f''(z_0) = \rho e^{i\phi} and z - z_0 = s e^{i\psi}. Then,

    u \approx u(x_0, y_0) + \frac{1}{2} \rho s^2 \cos(\phi + 2\psi)    (12.8a)
    v \approx v(x_0, y_0) + \frac{1}{2} \rho s^2 \sin(\phi + 2\psi) .    (12.8b)

The path of steepest descent from the saddle point is the one with

    \cos(\phi + 2\psi) = -1 .    (12.9)

In this direction, \sin(\phi + 2\psi) = 0, so v is constant.
Deform the contour to go along the path of steepest descent:

    I(\alpha) \approx e^{\alpha f(z_0)} \int_{-\infty}^\infty e^{-\alpha \rho s^2 / 2} e^{i\psi} \, ds    (12.10a)

where

    \psi = -\frac{\phi}{2} \pm \frac{\pi}{2}    (12.10b)

and the sign depends on the direction in which we travel over the saddle.
Therefore

    I(\alpha) \approx \sqrt{\frac{2\pi}{\alpha \rho}} \, e^{\alpha f(z_0)} e^{i\psi} .    (12.11)

Ex. 12.2. Steepest descent approximation for Γ(z+1):

    \Gamma(z+1) = \int_0^\infty e^{-t + z \ln t} \, dt = \int_0^\infty e^{r f(t)} \, dt    (12.12a)

where r = |z| (assume r is large) and

    f(t) = \frac{1}{r} \left( -t + z \ln t \right) .    (12.12b)

Let z = r e^{i\theta}. Then

    f(t) = \left( \ln t - \frac{t}{z} \right) e^{i\theta}    (12.13a)
    f'(t) = \left( \frac{1}{t} - \frac{1}{z} \right) e^{i\theta} \implies f'(t_0) = 0 \text{ for } t_0 = z    (12.13b)
    f''(t) = -\frac{1}{t^2} e^{i\theta}    (12.13c)

so

    f(t_0) = (\ln z - 1) e^{i\theta}    (12.14a)
    f''(t_0) = \rho e^{i\phi} = -\frac{e^{i\theta}}{z^2} = -\frac{1}{r^2} e^{-i\theta} \implies \rho = \frac{1}{r^2} \text{ and } \phi = \pi - \theta .    (12.14b)

Deform the contour to go through t_0 = z at an angle \psi for which \cos(\phi + 2\psi) = -1, so

    \psi = \frac{\theta}{2} \quad \text{or} \quad \psi = \frac{\theta}{2} - \pi .    (12.15)

To figure out which one of these to choose, we need to look at the topography of the surface u(t) = Re f(t) for a particular choice of z. For example, when z = 3 e^{i\pi/4}, so r = 3 and \theta = \pi/4, we have

    \mathrm{Re}\, f(t) = \mathrm{Re} \left[ e^{i\pi/4} \left( \ln t - \frac{t}{3} \right) \right] .    (12.16)

In Fig. 12.2 this function is plotted, and it is seen that the correct direction to traverse the saddle is \psi = \theta/2 = \pi/8 rather than \psi = \theta/2 - \pi = -7\pi/8. Thus,

    \Gamma(z+1) = \int_C e^{r f(t)} \, dt    (12.17a)
                \approx \sqrt{\frac{2\pi}{r \rho}} \, e^{r f(z)} e^{i\psi} \quad [\rho = 1/r^2 \text{ and } \psi = \theta/2]    (12.17b)
                = \sqrt{2\pi r} \, e^{z \ln z - z} e^{i\theta/2}    (12.17c)
                = \sqrt{2\pi} \, z^{z+1/2} e^{-z} \quad [\sqrt{r}\, e^{i\theta/2} = z^{1/2}] .    (12.17d)

This is the first term in an asymptotic series.



[Figure: two contour-plot panels over the complex t plane, -8 ≤ Re t, Im t ≤ 8, showing the deformed contour C in each panel]

Figure 12.2: Topography of the surface Re(e^{i\pi/4} \ln t - t/3). The saddle point is at the intersection of the white contour lines. Top: the contour is deformed so that it correctly goes over the saddle point t_0 = 3 e^{i\pi/4}. Bottom: the contour is incorrectly deformed and goes over the ridge three times.

Since

    \Gamma(z) = \frac{1}{z} \Gamma(z+1) \approx \sqrt{2\pi} \, z^{z-1/2} e^{-z} ,    (12.18)

write an asymptotic series:

    \Gamma(z) \sim \sqrt{2\pi} \, z^{z-1/2} e^{-z} \left( 1 + \frac{A}{z} + \frac{B}{z^2} + \frac{C}{z^3} + \cdots \right)    (12.19)

and use the recurrence Γ(z+1) = z Γ(z) to find A, B, C, ... as follows:

    \Gamma(z+1) \sim \sqrt{2\pi} \, (z+1)^{(z+1)-1/2} e^{-(z+1)} \left\{ 1 + \frac{A}{z+1} + \frac{B}{(z+1)^2} + \frac{C}{(z+1)^3} + \cdots \right\} .    (12.20)

Consider the first factor:

    \exp\left[ \left(z + \tfrac{1}{2}\right) \ln(z+1) - z - 1 \right]
    = \exp\left[ \left(z + \tfrac{1}{2}\right) \ln z + \left(z + \tfrac{1}{2}\right) \ln\left(1 + \tfrac{1}{z}\right) - z - 1 \right]    (12.21a)
    = \exp\left[ \left(z + \tfrac{1}{2}\right) \ln z - z - 1 + \left(z + \tfrac{1}{2}\right) \left( \frac{1}{z} - \frac{1}{2z^2} + \frac{1}{3z^3} - \frac{1}{4z^4} + \cdots \right) \right]    (12.21b)
    = \exp\left[ \left(z + \tfrac{1}{2}\right) \ln z - z - 1 + 1 + \left( -\frac{1}{2z} + \frac{1}{3z^2} - \frac{1}{4z^3} + \cdots \right) + \left( \frac{1}{2z} - \frac{1}{4z^2} + \frac{1}{6z^3} - \cdots \right) \right]    (12.21c)
    = \exp\left[ \left(z + \tfrac{1}{2}\right) \ln z - z + \frac{1}{12z^2} - \frac{1}{12z^3} + \cdots \right]    (12.21d)
    = z^{z+1/2} e^{-z} \left( 1 + \frac{1}{12z^2} - \frac{1}{12z^3} + \cdots \right) .    (12.21e)

Then the second factor:

    1 + \frac{A}{z+1} + \frac{B}{(z+1)^2} + \frac{C}{(z+1)^3}
    = 1 + \frac{A}{z}\left(1 + \frac{1}{z}\right)^{-1} + \frac{B}{z^2}\left(1 + \frac{1}{z}\right)^{-2} + \frac{C}{z^3}\left(1 + \frac{1}{z}\right)^{-3}    (12.22a)
    = 1 + \frac{A}{z}\left(1 - \frac{1}{z} + \frac{1}{z^2} - \cdots\right) + \frac{B}{z^2}\left(1 - \frac{2}{z} + \cdots\right) + \frac{C}{z^3}\left(1 - \cdots\right)    (12.22b)
    = 1 + \frac{A}{z} + \frac{B - A}{z^2} + \frac{C - 2B + A}{z^3} + \cdots .    (12.22c)

Therefore,

    \Gamma(z+1) \sim \sqrt{2\pi} \, z^{z+1/2} e^{-z} \left[ 1 + \frac{A}{z} + \left( B - A + \frac{1}{12} \right) \frac{1}{z^2} + \left( C - 2B + A + \frac{A}{12} - \frac{1}{12} \right) \frac{1}{z^3} + \cdots \right]    (12.23)

and compare this to

    \Gamma(z+1) = z \Gamma(z) \sim \sqrt{2\pi} \, z^{z+1/2} e^{-z} \left( 1 + \frac{A}{z} + \frac{B}{z^2} + \frac{C}{z^3} + \cdots \right)    (12.24)

and equate like powers:

    A = A \quad \text{(not illuminating)}    (12.25a)
    B = B - A + \frac{1}{12} \implies A = \frac{1}{12}    (12.25b)
    C = C - 2B + A + \frac{A}{12} - \frac{1}{12} \implies B = \frac{1}{288} .    (12.25c)

Thus we have

    \Gamma(z) \sim \sqrt{2\pi} \, z^{z-1/2} e^{-z} \left( 1 + \frac{1}{12z} + \frac{1}{288z^2} + \cdots \right) .    (12.26)

Now recall

    n! = \Gamma(n+1) = n \Gamma(n) \sim \sqrt{2\pi} \, n^{n+1/2} e^{-n} \left( 1 + \frac{1}{12n} + \frac{1}{288n^2} + \cdots \right)    (12.27)

so

    n! \sim \sqrt{2\pi n} \left( \frac{n}{e} \right)^n \left( 1 + \frac{1}{12n} + \frac{1}{288n^2} + \cdots \right) .    (12.28)

This is Stirling's formula.
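The correction terms pay off quickly. A short Python check (illustrative, not part of the notes) of (12.28) against an exact factorial:

```python
import math

def stirling(n):
    """Stirling's formula (12.28) with the first two correction terms."""
    return (math.sqrt(2*math.pi*n) * (n/math.e)**n
            * (1 + 1/(12*n) + 1/(288*n*n)))
```

Already at n = 10 the two-correction formula matches 10! = 3628800 to a few parts per million, compared with roughly 0.8% for the leading term alone.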


Problems

Problem 10.
Establish the following integration formulae with the aid of residues:

a) \int_0^\infty \frac{dx}{x^2 + 1} = \frac{\pi}{2} ;

b) \int_0^\infty \frac{dx}{x^4 + 1} = \frac{\pi}{2\sqrt{2}} ;

c) \int_0^\infty \frac{\cos(ax)}{x^2 + 1} \, dx = \frac{\pi}{2} e^{-a} \quad (a \ge 0) .

Problem 11.

a) Use residues and the contour shown (a circular-sector contour in the x-y plane with vertices at 0 and at R on the real axis and at R e^{i 2\pi/3}) to establish the integral formula

    \int_0^\infty \frac{dx}{x^3 + 1} = \frac{2\pi}{3\sqrt{3}} .

b) Generalize your result in (a) to evaluate

    \int_0^\infty \frac{x^n}{x^m + 1} \, dx

where n = 0, 1, 2, \ldots and m > n + 1.

Problem 12.
Use residues to show:

a) \int_0^{2\pi} \frac{d\theta}{1 + a \cos\theta} = \frac{2\pi}{\sqrt{1 - a^2}} \quad (-1 < a < 1) ;

b) \int_0^{2\pi} \sin^{2n}\theta \, d\theta = \frac{(2n)!}{2^{2n} (n!)^2} \, 2\pi .

Problem 13.
By appropriate use of power series expansions, evaluate

a) I = \int_0^1 \ln\left( \frac{1+x}{1-x} \right) \frac{dx}{x} ;

b) I(n) = \int_0^1 \frac{\ln(1 - x^n)}{x} \, dx .

Problem 14.
Obtain two expansions of the sine integral

    \mathrm{Si}\, x = \int_0^x \frac{\sin t}{t} \, dt ,

one useful for small x and one useful for large x.

Problem 15.
Evaluate

    I(x) = \int_0^\infty e^{x t - e^t} \, dt

approximately for large positive x.


Module IV

Integral Transforms

13 Fourier Series 86

14 Fourier Transforms 92

15 Other Transform Pairs 100

16 Applications of the Fourier Transform 101

Problems 106


Motivation
Integral transforms — in particular the Fourier transform — are ubiquitous in
physics. Whether in quantum mechanics, or X-ray diffraction, or signal
analysis, we often use integral transforms to go from space or time variables
to wave-number or frequency variables. Integral transforms can be used to
change differential equations into algebraic equations which are often easier
to solve. We focus mostly on the Fourier series and Fourier transform, but we
also mention a few other transforms that are sometimes encountered. (The
Hilbert transform, for example, is encountered in the Kramers-Kronig
relations.)
13 Fourier Series

Consider a function f(\theta), -\pi < \theta \le \pi. We seek an expansion in the form:

    f(\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty \left( a_n \cos n\theta + b_n \sin n\theta \right) .    (13.1)

This series expansion for f(\theta) is known as a Fourier series.


To find the coefficients, multiply both sides by cos nθ or sin nθ and integrate from -π to π. For example, if n ≠ 0, then

    \int_{-\pi}^{\pi} f(\theta) \cos n\theta \, d\theta    (13.2a)
    = \int_{-\pi}^{\pi} \left[ \frac{a_0}{2} + \sum_{m=1}^\infty \left( a_m \cos m\theta + b_m \sin m\theta \right) \right] \cos n\theta \, d\theta    (13.2b)
    = \frac{a_0}{2} \int_{-\pi}^{\pi} \cos n\theta \, d\theta \;\; [= 0 \text{ for } n \ne 0]
      + \sum_{m \ne n} a_m \int_{-\pi}^{\pi} \cos m\theta \cos n\theta \, d\theta \;\; [= 0]
      + a_n \int_{-\pi}^{\pi} \cos^2 n\theta \, d\theta
      + \sum_{m=1}^\infty b_m \int_{-\pi}^{\pi} \sin m\theta \cos n\theta \, d\theta \;\; [= 0]    (13.2c)
    = a_n \int_{-\pi}^{\pi} \cos^2 n\theta \, d\theta    (13.2d)
    = a_n \int_{-\pi}^{\pi} \left( \tfrac{1}{2} + \tfrac{1}{2} \cos 2n\theta \right) d\theta    (13.2e)
    = \pi a_n .    (13.2f)

Therefore

    a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(\theta) \cos n\theta \, d\theta , \quad n = 1, 2, 3, \ldots .    (13.3)

Similarly

    a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(\theta) \, d\theta    (13.4)

    b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(\theta) \sin n\theta \, d\theta , \quad n = 1, 2, 3, \ldots .    (13.5)

The Fourier series converges at all points in -π < θ ≤ π to f(θ)¹ provided that f(θ) is sufficiently nice.
• The Fourier series is periodic: it repeats itself in π < θ ≤ 3π, etc. That is, f(θ + 2π) = f(θ).
• For even functions, f(-θ) = f(θ) or f(2π - θ) = f(θ), only cosine terms occur, i.e., b_n = 0 ∀n.
• For odd functions, f(-θ) = -f(θ), only sine terms occur, i.e., a_n = 0 ∀n.

¹ actually, to \tfrac{1}{2}[f(\theta^+) + f(\theta^-)]

Ex. 13.1. The step function (see Fig. 13.1):

    f(\theta) = \begin{cases} -1 & -\pi < \theta < 0 \\ +1 & 0 \le \theta \le \pi \end{cases}    (13.6)

Figure 13.1: Step Function

This is an odd function so there will be no cosine terms.

    b_n = \frac{1}{\pi} \int_{-\pi}^0 (-\sin n\theta) \, d\theta + \frac{1}{\pi} \int_0^{\pi} \sin n\theta \, d\theta    (13.7a)
        = \frac{2}{\pi} \int_0^{\pi} \sin n\theta \, d\theta    (13.7b)
        = -\frac{2}{n\pi} \left[ (-1)^n - 1 \right]    (13.7c)
        = \begin{cases} \dfrac{4}{n\pi} & n \text{ odd} \\ 0 & n \text{ even.} \end{cases}    (13.7d)

Therefore

    f(\theta) = \frac{4}{\pi} \left( \sin\theta + \frac{\sin 3\theta}{3} + \frac{\sin 5\theta}{5} + \cdots \right) .    (13.8)

Aside: set θ = π/2 to get

    \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots .    (13.9)

This is known as Gregory's series. (This can also be obtained from arctan x = x - x³/3 + x⁵/5 - ··· with x = 1.)
The series for f(θ) has non-uniform convergence, as seen in Fig. 13.2. The overshoot near θ = 0 and θ = ±π is known as Gibbs's phenomenon. Even in the limit of an infinite number of terms the overshoot remains finite, approximately 0.18.

[Figure: partial sums of the series oscillating about the step function near the jumps]

Figure 13.2: Gibbs's phenomenon: shown is the series solution truncated at n = 1, n = 3, and n = 18.
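The Gibbs overshoot can be seen numerically. The sketch below (added as an illustration; names are my own) evaluates partial sums of (13.8) just to the right of the jump at θ = 0:

```python
import math

def step_partial_sum(theta, n_max):
    """Partial sum of (13.8): (4/pi) * sum over odd n <= n_max of sin(n*theta)/n."""
    return 4/math.pi * sum(math.sin(n*theta)/n for n in range(1, n_max + 1, 2))

# Height of the first overshoot just past the jump at theta = 0
peak = max(step_partial_sum(0.001*k, 99) for k in range(1, 1001))
```

The peak sits near 1.18 rather than 1, i.e., an overshoot of about 0.18, and it does not shrink as more terms are added (it only moves closer to the jump).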
Ex. 13.2. Consider

    f(\theta) = \cos k\theta , \quad -\pi < \theta \le \pi .    (13.10)

This is an even function so only cosine terms are present.

    a_n = \frac{2}{\pi} \int_0^{\pi} \cos k\theta \cos n\theta \, d\theta    (13.11a)
        = \frac{1}{\pi} \int_0^{\pi} \left\{ \cos[(k-n)\theta] + \cos[(k+n)\theta] \right\} d\theta    (13.11b)
        = \frac{1}{\pi} \frac{\sin[(k-n)\pi]}{k-n} + \frac{1}{\pi} \frac{\sin[(k+n)\pi]}{k+n}    (13.11c)
        = \frac{1}{\pi} \frac{(-1)^n \sin k\pi}{k-n} + \frac{1}{\pi} \frac{(-1)^n \sin k\pi}{k+n}    (13.11d)
        = (-1)^n \frac{2k \sin k\pi}{\pi (k^2 - n^2)} .    (13.11e)

Therefore,

    \cos k\theta = \frac{2k \sin k\pi}{\pi} \left( \frac{1}{2k^2} - \frac{\cos\theta}{k^2 - 1} + \frac{\cos 2\theta}{k^2 - 4} - \cdots \right) .    (13.12)

(We used this result earlier in Ex. 4.4.)
Suppose f(x) is periodic with some period L rather than 2π. Let

    x = \frac{L}{2\pi} \theta .    (13.13)

Then we have

    f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \left( a_n \cos\frac{2\pi n x}{L} + b_n \sin\frac{2\pi n x}{L} \right)    (13.14a)

where

    a_n = \frac{2}{L} \int_{-L/2}^{L/2} f(x) \cos\frac{2\pi n x}{L} \, dx , \quad n = 0, 1, 2, \ldots    (13.14b)

    b_n = \frac{2}{L} \int_{-L/2}^{L/2} f(x) \sin\frac{2\pi n x}{L} \, dx , \quad n = 1, 2, 3, \ldots .    (13.14c)

We can also define the Fourier series in complex form:

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i 2\pi n x / L} .    (13.15)

Observe:

    \int_{-L/2}^{L/2} e^{i 2\pi m x / L} e^{-i 2\pi n x / L} \, dx = \int_{-L/2}^{L/2} e^{i 2\pi (m-n) x / L} \, dx    (13.16a)
    = \begin{cases} L & n = m \\ \left[ \dfrac{L}{i 2\pi (m-n)} e^{i 2\pi (m-n) x / L} \right]_{-L/2}^{L/2} & n \ne m \end{cases}    (13.16b)
    = \begin{cases} L & n = m \\ \dfrac{L}{i 2\pi (m-n)} \left[ e^{i\pi(m-n)} - e^{-i\pi(m-n)} \right] = 0 & n \ne m \end{cases}    (13.16c,d)
    = L \, \delta_{mn}    (13.16e)

where \delta_{mn} is the Kronecker delta.

Therefore

    \frac{1}{L} \int_{-L/2}^{L/2} f(x) e^{-i 2\pi n x / L} \, dx = \sum_{m=-\infty}^{\infty} c_m \frac{1}{L} \int_{-L/2}^{L/2} e^{i 2\pi m x / L} e^{-i 2\pi n x / L} \, dx    (13.17a)
    = \sum_{m=-\infty}^{\infty} c_m \delta_{mn}    (13.17b)
    = c_n    (13.17c)

and thus

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i 2\pi n x / L}    (13.18a)

where

    c_n = \frac{1}{L} \int_{-L/2}^{L/2} f(x) e^{-i 2\pi n x / L} \, dx .    (13.18b)

Parseval's Identity
Consider:

    \frac{1}{L} \int_{-L/2}^{L/2} |f(x)|^2 \, dx = \frac{1}{L} \int_{-L/2}^{L/2} \left[ \sum_{m=-\infty}^{\infty} c_m e^{i 2\pi m x / L} \right] \left[ \sum_{n=-\infty}^{\infty} c_n^* e^{-i 2\pi n x / L} \right] dx    (13.19a)
    = \frac{1}{L} \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} c_m c_n^* \int_{-L/2}^{L/2} e^{i 2\pi m x / L} e^{-i 2\pi n x / L} \, dx    (13.19b)
    = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} c_m c_n^* \delta_{mn}    (13.19c)
    = \sum_{n=-\infty}^{\infty} |c_n|^2 .    (13.19d)

Thus we have Parseval's identity:

    \frac{1}{L} \int_{-L/2}^{L/2} |f(x)|^2 \, dx = \sum_{n=-\infty}^{\infty} |c_n|^2 .    (13.20)
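Parseval's identity is easy to verify numerically for a simple trigonometric polynomial. A Python sketch (added as an illustration; names are my own, and the midpoint rule is my choice of quadrature):

```python
import math

def fourier_c(f, n, L=2*math.pi, samples=4096):
    """c_n = (1/L) * integral over [-L/2, L/2] of f(x) e^{-i 2 pi n x/L} dx,
    eq. (13.18b), computed by the midpoint rule."""
    h = L / samples
    total = 0j
    for j in range(samples):
        x = -L/2 + (j + 0.5)*h
        arg = 2*math.pi*n*x/L
        total += f(x) * complex(math.cos(arg), -math.sin(arg))
    return total * h / L

f = lambda x: math.cos(x) + 0.5*math.sin(3*x)

# Both sides of Parseval's identity (13.20); the exact value is 0.625
h = 2*math.pi/4096
mean_power = sum(abs(f(-math.pi + (j + 0.5)*h))**2 for j in range(4096)) * h / (2*math.pi)
coeff_power = sum(abs(fourier_c(f, n))**2 for n in range(-5, 6))
```

Only c_{±1} and c_{±3} are nonzero here, and the two sides agree to machine precision.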
14 Fourier Transforms

Recall the complex Fourier series:

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i 2\pi n x / L} \quad \text{where} \quad L c_n = \int_{-L/2}^{L/2} f(x) e^{-i 2\pi n x / L} \, dx .    (14.1)

Consider the case L → ∞. Define

    y_n = \frac{2\pi n}{L} \quad \text{and} \quad L c_n = g(y_n)    (14.2)

and note

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i 2\pi n x / L}    (14.3a)
         = \sum_{n=-\infty}^{\infty} \frac{g(y_n)}{L} e^{i x y_n}    (14.3b)
         = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} g(y_n) e^{i x y_n} \, \Delta y \quad [\text{let } \Delta y = 2\pi / L]    (14.3c)
         = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(y) e^{i x y} \, dy \quad [\text{as } L \to \infty, \ \Delta y \to 0; \text{ this is a Riemann sum}] .    (14.3d)

We thus have the Fourier transform pairs:

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(y) e^{i x y} \, dy \iff g(y) = \int_{-\infty}^{\infty} f(x) e^{-i x y} \, dx .    (14.4)

We say that g(y) is the Fourier transform of f(x) and f(x) is the inverse Fourier transform of g(y).
Note: the factor of \frac{1}{2\pi} is sometimes rearranged between these two equations.

Substitute g(y) into the f(x) equation:

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(x') e^{-i x' y} \, dx' \right] e^{i x y} \, dy    (14.5a)
         = \int_{-\infty}^{\infty} f(x') \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i (x - x') y} \, dy \right] dx'    (14.5b)

which holds for any function f. Thus,

    \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i (x - x') y} \, dy    (14.6)

is the continuous generalization of the Kronecker delta.

Define the Dirac delta function by

    \delta(x) = 0 \text{ for } x \ne 0 , \quad \int_{-a}^{+b} \delta(x) \, dx = 1 \text{ for } a, b > 0 .    (14.7)

Then

    \int f(x') \delta(x - x') \, dx' = f(x)    (14.8)

if the domain of integration contains x.

One representation of the Dirac delta function is therefore

    \delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i x y} \, dy .    (14.9)

Using a change of variables, one can show the following identity:

    \int_a^b f(x) \delta(g(x)) \, dx = \sum_n \frac{f(x_n)}{|g'(x_n)|} \quad \text{where } x_n \text{ are roots of } g(x) \text{ in } a < x_n < b .    (14.10)
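The composition rule (14.10) can be checked by replacing δ with a narrow normalized Gaussian. A Python sketch (an illustration only; the test function, g, and tolerances are my own choices):

```python
import math

def delta_eps(u, eps=1e-3):
    """Nascent delta function: a narrow normalized Gaussian of width eps."""
    return math.exp(-(u/eps)**2) / (eps * math.sqrt(math.pi))

f = lambda x: x + 2.0
g = lambda x: x*x - 1.0      # roots at x = +/-1 with |g'(+/-1)| = 2

# Left side of (14.10) by the midpoint rule on [-3, 3]
a, b, n = -3.0, 3.0, 600000
h = (b - a) / n
lhs = sum(f(a + (i + 0.5)*h) * delta_eps(g(a + (i + 0.5)*h)) for i in range(n)) * h

# Right side: f(1)/|g'(1)| + f(-1)/|g'(-1)| = 3/2 + 1/2 = 2
rhs = f(1.0)/2 + f(-1.0)/2
```

As eps shrinks (with a grid fine enough to resolve the peaks) the two sides converge.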



Theorem 6 (Parseval's). If f(x) and g(y) are Fourier transform pairs then Parseval's identity states

    \int_{-\infty}^{\infty} |f(x)|^2 \, dx = \frac{1}{2\pi} \int_{-\infty}^{\infty} |g(y)|^2 \, dy .    (14.11)

Proof.

    \int_{-\infty}^{\infty} |f(x)|^2 \, dx = \int_{-\infty}^{\infty} \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} g^*(y) e^{-i x y} \, dy \right] \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} g(y') e^{i x y'} \, dy' \right] dx    (14.12a)
    = \frac{1}{2\pi} \int_{y=-\infty}^{\infty} g^*(y) \int_{y'=-\infty}^{\infty} g(y') \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i (y' - y) x} \, dx \right] dy' \, dy \quad [\text{the inner integral is } \delta(y' - y)]    (14.12b)
    = \frac{1}{2\pi} \int_{-\infty}^{\infty} g^*(y) \left[ \int_{-\infty}^{\infty} g(y') \delta(y' - y) \, dy' \right] dy    (14.12c)
    = \frac{1}{2\pi} \int_{-\infty}^{\infty} g^*(y) g(y) \, dy .    (14.12d)

Fourier Cosine Transform

Suppose f(x) is an even function. Then,

    g(y) = \int_0^{\infty} f(x) e^{-i x y} \, dx + \int_{-\infty}^0 f(x) e^{-i x y} \, dx    (14.13a)
         = \int_0^{\infty} f(x) \left( e^{i x y} + e^{-i x y} \right) dx    (14.13b)
         = 2 \int_0^{\infty} f(x) \cos(x y) \, dx .    (14.13c)

Note: g(y) is also an even function, so

    f(x) = \frac{1}{\pi} \int_0^{\infty} g(y) \cos(x y) \, dy .    (14.14)

Therefore, f(x) and g(y) need only be defined for positive x and y. They are Fourier cosine transform pairs.
Similarly, if f(x) is an odd function,

    f(x) = \frac{1}{\pi} \int_0^{\infty} g(y) \sin(x y) \, dy \iff g(y) = 2 \int_0^{\infty} f(x) \sin(x y) \, dx    (14.15)

are Fourier sine transform pairs.

Ex. 14.1. Damped harmonic oscillator.

    f(t) = \begin{cases} 0 & t < 0 \\ e^{-t/T} \sin \omega_0 t & t > 0 . \end{cases}    (14.16)

This might describe, e.g., the current in a radiating antenna.

The Fourier transform of this function is

    g(\omega) = \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt    (14.17a)
    = \int_0^{\infty} e^{-t/T} \frac{e^{i\omega_0 t} - e^{-i\omega_0 t}}{2i} e^{-i\omega t} \, dt    (14.17b)
    = \frac{1}{2i} \int_0^{\infty} \exp\left\{ -\left[ \frac{1}{T} + i(\omega - \omega_0) \right] t \right\} dt - \frac{1}{2i} \int_0^{\infty} \exp\left\{ -\left[ \frac{1}{T} + i(\omega + \omega_0) \right] t \right\} dt    (14.17c)
    = \frac{1}{2i} \frac{1}{i(\omega - \omega_0) + 1/T} - \frac{1}{2i} \frac{1}{i(\omega + \omega_0) + 1/T}    (14.17d)
    = \frac{1}{2} \left[ \frac{1}{(\omega + \omega_0) - i/T} - \frac{1}{(\omega - \omega_0) - i/T} \right] .    (14.17e)

Note: if T ≫ 1/ω₀, g(ω) is sharply peaked around ω = ±ω₀. Near ω = ω₀,

    g(\omega) \approx -\frac{1}{2} \frac{1}{(\omega - \omega_0) - i/T} \implies |g(\omega)| \approx \frac{1}{2} \frac{1}{\sqrt{(\omega - \omega_0)^2 + 1/T^2}} .    (14.18)

The energy radiated by the antenna is proportional to

    \int_{-\infty}^{\infty} |f(t)|^2 \, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |g(\omega)|^2 \, d\omega    (14.19)

so we interpret |g(ω)|² as the radiated power spectrum. The power spectrum peaks at frequency ω₀ and the full width at half maximum of the frequency band is Γ = 2/T (see Fig. 14.1).
Note the uncertainty principle: the decay time T is inversely proportional to the width of the power spectrum.

[Figure: |g(ω)|² versus ω, peaked at ω₀, with full width at half maximum Γ = 2/T]

Figure 14.1: Damped Oscillator Power Spectrum
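The closed form (14.17e) can be checked against direct numerical evaluation of the transform integral. A Python sketch (illustrative only; the parameter values, quadrature, and names are my own):

```python
import math

def g_numeric(omega, T=5.0, omega0=3.0, t_max=80.0, n=160000):
    """g(omega) = integral from 0 to infinity of e^{-t/T} sin(omega0 t) e^{-i omega t} dt,
    truncated at t_max (the tail is ~e^{-t_max/T}) and done by the midpoint rule."""
    h = t_max / n
    total = 0j
    for i in range(n):
        t = (i + 0.5)*h
        total += math.exp(-t/T) * math.sin(omega0*t) * complex(math.cos(omega*t), -math.sin(omega*t))
    return total * h

def g_closed(omega, T=5.0, omega0=3.0):
    """Closed form (14.17e)."""
    return 0.5*(1/((omega + omega0) - 1j/T) - 1/((omega - omega0) - 1j/T))
```

The agreement holds both near and away from the resonance peaks at ω = ±ω₀.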



Generalization to Higher Dimensions

For example, in 3 dimensions:

    \phi(\mathbf{k}) = \iiint f(\mathbf{x}) e^{-i \mathbf{k} \cdot \mathbf{x}} \, dx \, dy \, dz    (14.20a)

and

    f(\mathbf{x}) = \frac{1}{(2\pi)^3} \iiint \phi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \, dk_x \, dk_y \, dk_z    (14.20b)

are Fourier transform pairs.

We can deduce the 3-dimensional delta function

    \delta(\mathbf{x}) = \frac{1}{(2\pi)^3} \iiint e^{i \mathbf{k} \cdot \mathbf{x}} \, dk_x \, dk_y \, dk_z    (14.21)

which has the properties

• \delta(\mathbf{x}) = 0 for \mathbf{x} \ne 0 ;    (14.22)

• \iiint \delta(\mathbf{x}) \, dx \, dy \, dz = 1    (14.23)
  provided the origin is in the domain of integration;

• \iiint f(\mathbf{x}) \delta(\mathbf{x} - \mathbf{x}_0) \, dx \, dy \, dz = f(\mathbf{x}_0)    (14.24)
  provided \mathbf{x}_0 is in the domain of integration.

Ex. 14.2. Wave function for a Gaussian wave packet.

    f(\mathbf{x}) = \left( \frac{2}{\pi a^2} \right)^{3/4} e^{-r^2/a^2} = N e^{-r^2/a^2}    (14.25)

where r = \|\mathbf{x}\|. Note the probability distribution |f(\mathbf{x})|^2 is normalized: \iiint |f(\mathbf{x})|^2 \, dx \, dy \, dz = 1.

    \phi(\mathbf{k}) = N \iiint e^{-r^2/a^2} e^{-i \mathbf{k} \cdot \mathbf{x}} \, dx \, dy \, dz .    (14.26a)

Introduce polar coordinates with the z-axis along \mathbf{k}; let \mu = \cos\theta and k = \|\mathbf{k}\|:

    = N \int_{\phi=0}^{2\pi} \int_{\mu=-1}^{1} \int_{r=0}^{\infty} r^2 e^{-r^2/a^2} e^{-i k r \mu} \, dr \, d\mu \, d\phi    (14.26b)
    = 2\pi N \int_{r=0}^{\infty} r^2 e^{-r^2/a^2} \int_{\mu=-1}^{1} e^{-i k r \mu} \, d\mu \, dr    (14.26c)
    = 2\pi N \int_0^{\infty} r^2 e^{-r^2/a^2} \left[ \frac{1}{-i k r} e^{-i k r \mu} \right]_{-1}^{1} dr    (14.26d)
    = 2\pi N \int_0^{\infty} r e^{-r^2/a^2} \frac{1}{i k} \left( e^{i k r} - e^{-i k r} \right) dr    (14.26e)
    = \frac{2\pi}{i k} N \int_{-\infty}^{\infty} r e^{-r^2/a^2} e^{i k r} \, dr \quad [\text{change the lower limit of integration}]    (14.26f)
    = \frac{2\pi}{i k} N \int_{-\infty}^{\infty} r e^{-(r^2/a^2 - i k r - k^2 a^2/4) - k^2 a^2/4} \, dr \quad [\text{complete the square}]    (14.26g)
    = \frac{2\pi}{i k} N e^{-k^2 a^2/4} \int_{-\infty}^{\infty} r e^{-(r - i k a^2/2)^2/a^2} \, dr    (14.26h)
    = \frac{2\pi}{i k} N e^{-k^2 a^2/4} \int_{-\infty}^{\infty} \left( y + \frac{i k a^2}{2} \right) e^{-y^2/a^2} \, dy \quad [\text{let } y = r - i k a^2/2; \ \textstyle\int_{-\infty}^{\infty} y e^{-y^2/a^2} dy = 0 \text{ (odd integrand)}]    (14.26i)
    = \frac{2\pi}{i k} N e^{-k^2 a^2/4} \frac{i k a^2}{2} a \sqrt{\pi}    (14.26j)
    = \pi^{3/2} a^3 \left( \frac{2}{\pi a^2} \right)^{3/4} e^{-k^2 a^2/4} \quad [\text{recall } N = (2/\pi a^2)^{3/4}]    (14.26k)
    = (2\pi a^2)^{3/4} e^{-k^2 a^2/4} .    (14.26l)
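After the angular integrals, the calculation reduces to a single radial integral, which offers a convenient numerical check of (14.26l). A Python sketch (illustrative; the quadrature and names are my own):

```python
import math

def phi_numeric(k, a=1.0, r_max=12.0, n=120000):
    """After the angular integrals in (14.26), the transform reduces to
    phi(k) = (4 pi N / k) * integral of r e^{-r^2/a^2} sin(kr) dr over [0, inf),
    with N = (2/(pi a^2))^{3/4}; midpoint rule, truncated at r_max."""
    N = (2/(math.pi*a*a))**0.75
    h = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5)*h
        total += r * math.exp(-r*r/(a*a)) * math.sin(k*r)
    return 4*math.pi*N/k * total * h

def phi_closed(k, a=1.0):
    """Final result (14.26l): (2 pi a^2)^{3/4} e^{-k^2 a^2/4}."""
    return (2*math.pi*a*a)**0.75 * math.exp(-k*k*a*a/4)
```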

We have seen that the Fourier transform of a Gaussian distribution is a Gaussian distribution.

The width of the Gaussian probability distribution |f(\mathbf{x})|^2 is \Delta x = a/2, while the width of the Gaussian probability distribution of the Fourier transform |\phi(\mathbf{k})|^2 is \Delta k = 1/a. Thus we have

    \Delta x \, \Delta k = \frac{1}{2} .    (14.27)

In quantum mechanics, \mathbf{p} = \hbar \mathbf{k}, so

    \Delta x \, \Delta p = \frac{\hbar}{2}    (14.28)

for Gaussian wave packets.
15 Other Transform Pairs

• Laplace transform: for f(t) with f(t) = 0 for t < 0,

    F(s) = \int_0^{\infty} f(t) e^{-s t} \, dt .    (15.1a)

The inverse Laplace transform is given by the Bromwich integral

    f(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s) e^{s t} \, ds , \quad t > 0    (15.1b)

where the integral is along the line Re s = c, c > 0, such that all singularities are to the left of the contour.

• Fourier-Bessel transform or Hankel transform:

    g(k) = \int_0^{\infty} f(x) J_m(k x) \, x \, dx \iff f(x) = \int_0^{\infty} g(k) J_m(k x) \, k \, dk    (15.2)

where J_m is a Bessel function (see later).

• Mellin transformation:

    \phi(z) = \int_0^{\infty} f(t) t^{z-1} \, dt \iff f(t) = \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \phi(z) t^{-z} \, dz .    (15.3)

• Hilbert transformation:

    g(y) = \frac{1}{\pi} \, \mathrm{P} \int_{-\infty}^{\infty} \frac{f(x)}{x - y} \, dx \iff f(x) = \frac{1}{\pi} \, \mathrm{P} \int_{-\infty}^{\infty} \frac{g(y)}{y - x} \, dy    (15.4)

where P denotes the Cauchy principal value.
16 Applications of the Fourier Transform

Properties of the Fourier Transform

We adopt the following notation for the Fourier transform and its inverse:

    F[f(x); y] = \int_{-\infty}^{\infty} f(x) e^{-i x y} \, dx    (16.1a)

    F^{-1}[g(y); x] = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(y) e^{i x y} \, dy .    (16.1b)

The Fourier transform has the following properties:

• Linearity.

    F[\alpha f(x) + \beta g(x); y] = \alpha F[f(x); y] + \beta F[g(x); y] .    (16.2)

• Derivatives.

    F[f'(x); y] = \int_{-\infty}^{\infty} f'(x) e^{-i x y} \, dx .    (16.3a)

Integrate by parts with u = e^{-i x y}, dv = f'(x) dx:

    = \left[ f(x) e^{-i x y} \right]_{-\infty}^{\infty} + i y \int_{-\infty}^{\infty} f(x) e^{-i x y} \, dx \quad [\text{assume } f(x) \to 0 \text{ for } x \to \pm\infty]    (16.3b)
    = i y \, F[f(x); y] .    (16.3c)

• Integrals. Similarly,

    F\left[ \textstyle\int f(x) \, dx; y \right] = \frac{F[f(x); y]}{i y} + C \delta(y)    (16.4)

where C is an arbitrary constant of integration; note F[C; y] = 2\pi C \delta(y).

• Translation.

    F[f(x + a); y] = \int_{-\infty}^{\infty} f(x + a) e^{-i x y} \, dx    (16.5a)
    = \int_{-\infty}^{\infty} f(x) e^{-i (x - a) y} \, dx    (16.5b)
    = e^{i a y} F[f(x); y] .    (16.5c)

• Multiplication by an exponential.

    F[e^{a x} f(x); y] = F[f(x); y + i a]    (16.6)

(cf. the translation property).

• Multiplication by a power of x.

    F[x f(x); y] = i \frac{d}{dy} F[f(x); y]    (16.7)

(cf. the derivative property).

• Convolution. Define the convolution of two functions, f(x) and g(x), as

    h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(t) g(x - t) \, dt .    (16.8)

Then the convolution theorem states

    F[h(x); y] = F[f(x); y] \cdot F[g(x); y] .    (16.9)
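The convolution theorem can be verified numerically for a pair of Gaussians. A Python sketch (added as an illustration; the midpoint-rule transforms and names are my own choices):

```python
import math

def ft(func, y, a=-8.0, b=8.0, n=400):
    """F[func; y] = integral of func(x) e^{-i x y} dx, eq. (16.1a), midpoint rule.
    The Gaussians below are negligible outside [-8, 8]."""
    h = (b - a) / n
    total = 0j
    for i in range(n):
        x = a + (i + 0.5)*h
        total += func(x) * complex(math.cos(x*y), -math.sin(x*y))
    return total * h

f = lambda x: math.exp(-x*x)
g = lambda x: math.exp(-2*x*x)

def f_conv_g(x, a=-8.0, b=8.0, n=400):
    """(f*g)(x) = integral of f(t) g(x - t) dt, eq. (16.8), midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) * g(x - (a + (i + 0.5)*h)) for i in range(n)) * h
```

Evaluating F[f*g] and F[f]·F[g] at the same y gives matching values to high accuracy.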



Ex. 16.1. Damped driven harmonic oscillator.

The equation of motion is

    \left[ \frac{d^2}{dt^2} + 2\zeta\omega_0 \frac{d}{dt} + \omega_0^2 \right] x(t) = s(t)    (16.10)

where \omega_0 is the natural frequency, \zeta is the damping ratio, and s(t) is the source driving function.
Let X(\omega) = F[x(t); \omega] and S(\omega) = F[s(t); \omega]. Then, using the derivative property,

    \left[ -\omega^2 + 2 i \zeta \omega_0 \omega + \omega_0^2 \right] X(\omega) = S(\omega)    (16.11)

and so

    X(\omega) = \frac{S(\omega)}{\omega_0^2 - \omega^2 + 2 i \zeta \omega_0 \omega} = G(\omega) S(\omega)    (16.12)

where G(\omega) is the transfer function. By the convolution theorem, x(t) = (g * s)(t) where g(t) = F^{-1}[G(\omega); t].
The power spectrum of the harmonic motion is

    |X(\omega)|^2 = \frac{|S(\omega)|^2}{(\omega_0^2 - \omega^2)^2 + 4 \zeta^2 \omega_0^2 \omega^2} .    (16.13)

Take the inverse Fourier transform of X(\omega) to find the motion x(t).
For example, suppose s(t) = a \delta(t) (an impulse). Then,

    S(\omega) = \int_{-\infty}^{\infty} a \delta(t) e^{-i \omega t} \, dt = a .    (16.14)

Therefore,

    X(\omega) = -\frac{a}{\omega^2 - 2 i \zeta \omega_0 \omega - \omega_0^2}    (16.15a)
    = -\frac{a}{(\omega - \omega_1 - i \zeta \omega_0)(\omega + \omega_1 - i \zeta \omega_0)} \quad \text{with } \omega_1 = \omega_0 \sqrt{1 - \zeta^2} .    (16.15b)

Now perform the inverse Fourier transform:

    x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) e^{i \omega t} \, d\omega    (16.16a)
    = -\frac{a}{2\pi} \int_{-\infty}^{\infty} \frac{e^{i \omega t} \, d\omega}{(\omega - \omega_1 - i \zeta \omega_0)(\omega + \omega_1 - i \zeta \omega_0)} .    (16.16b)

Do this integral using contour integration.

[Figure: the real axis from -R to R closed by the semicircle C_R in the upper-half ω plane; simple poles at ±ω₁ + iζω₀]

Figure 16.1: Contour for Damped Driven Harmonic Oscillator

We close the contour in the upper-half plane as shown in Fig. 16.1: C_R is the curve ω = R e^{iθ}, 0 ≤ θ ≤ π. Note that, on C_R,

    e^{i \omega t} = e^{i R t e^{i\theta}} = e^{i R t \cos\theta} e^{-R t \sin\theta} \to 0 \text{ as } R \to \infty \text{ for } t > 0    (16.17)

so

    \int_{C_R} \frac{e^{i \omega t} \, d\omega}{(\omega - \omega_1 - i \zeta \omega_0)(\omega + \omega_1 - i \zeta \omega_0)} \to 0 \text{ as } R \to \infty \text{ when } t > 0 .    (16.18)

Thus,

    \int_{-\infty}^{\infty} \frac{e^{i \omega t} \, d\omega}{(\omega - \omega_1 - i \zeta \omega_0)(\omega + \omega_1 - i \zeta \omega_0)} + \lim_{R \to \infty} \int_{C_R} \cdots = 2\pi i \sum \mathrm{Res} .    (16.19)

There are two simple poles in the upper-half plane with residues

    \mathrm{Res}_{\omega = \omega_1 + i \zeta \omega_0} = \frac{e^{i \omega_1 t} e^{-\zeta \omega_0 t}}{2 \omega_1} \quad \text{and} \quad \mathrm{Res}_{\omega = -\omega_1 + i \zeta \omega_0} = \frac{e^{-i \omega_1 t} e^{-\zeta \omega_0 t}}{-2 \omega_1} .    (16.20)

Therefore

    x(t) = -\frac{a}{2\pi} \, 2\pi i \left( \frac{e^{i \omega_1 t} e^{-\zeta \omega_0 t}}{2 \omega_1} - \frac{e^{-i \omega_1 t} e^{-\zeta \omega_0 t}}{2 \omega_1} \right) , \quad t > 0    (16.21a)
    = -\frac{i a}{2 \omega_1} \left( e^{i \omega_1 t} - e^{-i \omega_1 t} \right) e^{-\zeta \omega_0 t} , \quad t > 0    (16.21b)
    = -\frac{i a}{2 \omega_1} (2 i \sin \omega_1 t) e^{-\zeta \omega_0 t} , \quad t > 0    (16.21c)
    = \frac{a}{\omega_1} e^{-\zeta \omega_0 t} \sin \omega_1 t , \quad t > 0 .    (16.21d)

For t < 0 we need to close the contour in the lower-half plane instead so that \int_{C_R} \cdots \to 0 as R → ∞, but there are no poles in the lower-half plane, so we find

    x(t) = 0 \text{ for } t < 0 \quad \text{(causality!)}    (16.22)

and therefore

    x(t) = \begin{cases} 0 & t < 0 \\ \dfrac{a}{\omega_1} e^{-\zeta \omega_0 t} \sin \omega_1 t & t > 0 \end{cases}    (16.23a)

with

    \omega_1 = \omega_0 \sqrt{1 - \zeta^2} .    (16.23b)

This example shows that causality imposes the requirement that X(ω) has singularities only in the upper-half plane and is analytic everywhere in the lower-half plane.
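The impulse response (16.23a) can be cross-checked by integrating the equation of motion directly. For t > 0 the impulse s(t) = a δ(t) is equivalent to the free equation with initial conditions x(0) = 0, x'(0) = a (integrate (16.10) across t = 0). The sketch below (illustrative; the RK4 scheme and names are my own) compares the two:

```python
import math

def impulse_response_exact(t, zeta=0.1, omega0=2.0, a=1.0):
    """Eq. (16.23a): x(t) = (a/w1) e^{-zeta w0 t} sin(w1 t), w1 = w0 sqrt(1 - zeta^2)."""
    w1 = omega0*math.sqrt(1 - zeta*zeta)
    return a/w1 * math.exp(-zeta*omega0*t) * math.sin(w1*t)

def impulse_response_rk4(t_end, zeta=0.1, omega0=2.0, a=1.0, steps=4000):
    """Integrate x'' + 2 zeta w0 x' + w0^2 x = 0 with x(0) = 0, x'(0) = a by RK4."""
    def acc(x, v):
        return -2*zeta*omega0*v - omega0*omega0*x
    h = t_end/steps
    x, v = 0.0, a
    for _ in range(steps):
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + h/2*k1v, acc(x + h/2*k1x, v + h/2*k1v)
        k3x, k3v = v + h/2*k2v, acc(x + h/2*k2x, v + h/2*k2v)
        k4x, k4v = v + h*k3v, acc(x + h*k3x, v + h*k3v)
        x += h/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return x
```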
Problems

Problem 16.
Expand the following functions in a Fourier series of the form

    f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left( \frac{2\pi n x}{L} \right) + b_n \sin\left( \frac{2\pi n x}{L} \right) \right] ,

(i.e., determine the Fourier coefficients a_0, a_n, and b_n, n = 1, 2, 3, \ldots):

a) the triangular function

    f(x) = \begin{cases} 1 + 2x/L & -\tfrac{1}{2} L \le x \le 0 \\ 1 - 2x/L & 0 < x \le \tfrac{1}{2} L ; \end{cases}

b) the function f(x) = e^x for -\tfrac{1}{2} L \le x \le \tfrac{1}{2} L.

Problem 17.
Find the Fourier transform, \phi(\mathbf{k}), of the wave function for a 2p electron in hydrogen:

    f(\mathbf{x}) = \frac{1}{\sqrt{32 \pi a_0^5}} \, z \, e^{-r/2a_0}

where \mathbf{x} = (x, y, z), r^2 = x^2 + y^2 + z^2, and a_0 is the radius of the first Bohr orbit.
(Hint: let f(\mathbf{x}) = \mathbf{e}_z \cdot \mathbf{g}(\mathbf{x}) and use symmetry to argue that F[\mathbf{g}(\mathbf{x}); \mathbf{k}] \propto \mathbf{k}.)

Problem 18.
Prove the Wiener-Khinchin theorem, which relates the autocorrelation and the Fourier transform: Let F[f(x); y] = g(y); then:

    F^{-1}[|g(y)|^2; x] = \int_{-\infty}^{\infty} f^*(t) f(x + t) \, dt

where F^{-1} is the inverse Fourier transform.
Module V

Ordinary Differential
Equations

17 First Order ODEs 109

18 Higher Order ODEs 119

19 Power Series Solutions 122

20 The WKB Method 137

Problems 146


Motivation
Ordinary differential equations are even more of a pain in the neck to solve
than integrals. But, of course, physical laws are formulated in terms of
differential equations, and the solutions require integrating them, so it is
important to know how to do that. Here we present some common techniques
for solving ordinary differential equations. We will also encounter some
commonly occurring special functions.

Terminology
Consider:

    \frac{d^3 y}{dx^3} + x \sqrt{\frac{dy}{dx}} + x^2 y = 0 .

Rationalize this:

    x^2 \frac{dy}{dx} = \left( \frac{d^3 y}{dx^3} + x^2 y \right)^2
    = \left( \frac{d^3 y}{dx^3} \right)^2 + 2 x^2 y \left( \frac{d^3 y}{dx^3} \right) + x^4 y^2 ,

where (d^3 y / dx^3)^2 is the highest order derivative term.

We say this ordinary differential equation (ODE) is third order and second
degree.
17 First Order ODEs

Separable Equations
If we can write the equation in the form

A(x) d x + B(y) d y = 0 (17.1)

then the equation is separable and the solution is obtained by integration.


Ex. 17.1. Consider

    \frac{dy}{dx} + \sqrt{\frac{1 - y^2}{1 - x^2}} = 0 .    (17.2)

Then

    \underbrace{\frac{1}{\sqrt{1 - y^2}} \, dy}_{B(y)\,dy} + \underbrace{\frac{1}{\sqrt{1 - x^2}} \, dx}_{A(x)\,dx} = 0 .    (17.3)

Integrate:

    \arcsin y + \arcsin x = c    (17.4a)
    \implies \sin(\arcsin y + \arcsin x) = \sin c = C    (17.4b)
    \implies \sin(\arcsin y) \cos(\arcsin x) + \cos(\arcsin y) \sin(\arcsin x) = C    (17.4c)
    \implies y \sqrt{1 - x^2} + x \sqrt{1 - y^2} = C .    (17.4d)
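The implicit solution (17.4d) can be verified by checking that C is conserved along a numerically integrated solution curve. A Python sketch (illustrative; the RK4 scheme, starting point, and names are my own):

```python
import math

def invariant(x, y):
    """C = y sqrt(1 - x^2) + x sqrt(1 - y^2), the implicit solution (17.4d)."""
    return y*math.sqrt(1 - x*x) + x*math.sqrt(1 - y*y)

def solve(x0, y0, x1, steps=20000):
    """RK4 integration of dy/dx = -sqrt((1 - y^2)/(1 - x^2)), eq. (17.2)."""
    fp = lambda x, y: -math.sqrt((1 - y*y)/(1 - x*x))
    h = (x1 - x0)/steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = fp(x, y)
        k2 = fp(x + h/2, y + h/2*k1)
        k3 = fp(x + h/2, y + h/2*k2)
        k4 = fp(x + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y
```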


Exact Equations
More generally,

    A(x, y) \, dx + B(x, y) \, dy = 0 .    (17.5)

If the left hand side is the differential du of some function u(x, y), then integrate to get u(x, y) = c; in this case, the equation is an exact equation.
Note: for an exact equation,

    du = \frac{\partial u}{\partial x} \, dx + \frac{\partial u}{\partial y} \, dy    (17.6)

with A(x, y) = \partial u / \partial x and B(x, y) = \partial u / \partial y, but since \partial^2 u / \partial x \partial y = \partial^2 u / \partial y \partial x, a necessary condition is

    \frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} .    (17.7)

This is also a sufficient condition.

Ex. 17.2. Consider

    (x + y) \, dx + x \, dy = 0    (17.8)

with A(x, y) = x + y and B(x, y) = x.
Note: \partial A / \partial y = \partial B / \partial x = 1, so this equation is exact.
Therefore

    \frac{\partial u}{\partial x} = x + y \quad \text{and} \quad \frac{\partial u}{\partial y} = x    (17.9)

and so

    u(x, y) = \tfrac{1}{2} x^2 + x y + c .    (17.10)

Integrating Factors
If A dx + B dy is not exact, try to find a function \lambda(x, y) such that

    \lambda (A \, dx + B \, dy) = 0    (17.11)

is exact. Then we can integrate as before. Such a function is known as an integrating factor.
Such a factor always exists for a first-order equation, but there is not a general method for finding it.
However, for a linear first-order equation

    \frac{dy}{dx} + f(x) \, y = g(x)    (17.12)

we can obtain \lambda. Multiply by \lambda(x):

    \lambda(x) \left[ dy + f(x) \, y \, dx \right] = \lambda(x) g(x) \, dx .    (17.13)

The left hand side is exact if and only if d\lambda/dx = \lambda(x) f(x), in which case the right hand side is integrable, so the integrating factor we seek is

    \lambda(x) = \exp\left[ \int f(x) \, dx \right] .    (17.14)

Ex. 17.3. Consider

    x y' + (1 + x) y = e^x .    (17.15)

Write this in the form

    y' + \underbrace{\left( \frac{1 + x}{x} \right)}_{f(x)} y = \underbrace{\frac{e^x}{x}}_{g(x)}    (17.16)

so we see this is a linear, first-order equation.

The integrating factor is

    \lambda(x) = \exp\left[ \int f(x) \, dx \right] = \exp\left( \int \frac{1 + x}{x} \, dx \right) = \exp(x + \ln x)    (17.17a)
               = x e^x .    (17.17b)

Multiply the equation (in the form (17.16)) by the integrating factor:

    x e^x \left[ y' + \left( \frac{1 + x}{x} \right) y \right] = e^{2x} .    (17.18)

We see this equation is exact:

    \underbrace{x e^x}_{B(x)} \, dy + \underbrace{(1 + x) e^x y}_{A(x,y)} \, dx = e^{2x} \, dx    (17.19)

and we verify

    \frac{\partial B}{\partial x} = e^x + x e^x \quad \text{and} \quad \frac{\partial A}{\partial y} = (1 + x) e^x = \frac{\partial B}{\partial x} \checkmark    (17.20)

thus

    \frac{\partial u}{\partial x} = A(x, y) = (1 + x) e^x y \quad \text{and} \quad \frac{\partial u}{\partial y} = B(x) = x e^x    (17.21)

which implies

    u(x, y) = x e^x y .    (17.22)

Therefore, integrating du = e^{2x} dx, we find

    x e^x y = \int e^{2x} \, dx = \tfrac{1}{2} e^{2x} + c    (17.23)

or

    y = \frac{1}{2x} e^x + \frac{c}{x} e^{-x} .    (17.24)

Ex. 17.4. Thermodynamics.

The integrating factor plays a fundamental role in thermodynamics.
Suppose a system has state variables

    X_1, X_2, \ldots, X_n \quad \text{(extensive variables, i.e., displacements, e.g., volume)}

and

    Y_1, Y_2, \ldots, Y_n \quad \text{(intensive variables, i.e., forces, e.g., pressure)}

and an internal energy function U = U(X_1, \ldots, X_n, Y_1, \ldots, Y_n).

For a quasistatic process, the first law of thermodynamics (conservation of energy) is

    d̄Q = dU + Y_1 \, dX_1 + \cdots + Y_n \, dX_n    (17.25)

where d̄Q is the heat flow, dU is the change in internal energy, and the Y_i dX_i are work terms. The use of d̄ (rather than d) for the heat flow reminds us that the right hand side cannot generally be written as an exact differential, so the equation cannot generally be integrated. Therefore there is no 'heat' of the system, Q = Q(X_1, \ldots, X_n, Y_1, \ldots, Y_n).
If n = 1 we have claimed an integrating factor can always be found for an equation of this form, but for n > 1 this cannot be integrated in general with the aid of an integrating factor...
but....
Kelvin-Planck statement of the second law of thermodynamics:

    It is impossible to construct an engine which, operating in a cycle, will produce no other effect than the extraction of heat from a reservoir and the performance of an equivalent amount of work.

Reminder: an adiabatic process has d̄Q = 0.
Suppose that you can reach a point P in state-space by two different adiabatic processes, i.e., two adiabatic curves intersect at P as shown in Fig. 17.1.
Consider the cycle: P → Q → R → P.
• Work is done by the system in P → Q and R → P but no heat is gained or lost.
• No work is done in Q → R but heat is gained.
The net effect is conversion of heat into an equivalent amount of work.
Therefore, adiabatic processes cannot intersect.

[Figure: two adiabatic curves in the (X_1, Y_1) plane intersecting at P, with the cycle P → Q → R → P marked]

Figure 17.1: Intersecting Adiabats

Since adiabatic surfaces do not intersect, we can label them 1, 2, 3, ..., as seen in Fig. 17.2. Thus there exists a function of state variables,

    S = S(X_1, \ldots, X_n, Y_1, \ldots, Y_n) ,

which is constant for adiabatic processes:

    dS = 0 \quad \text{when} \quad d̄Q = 0 .

This implies that there must exist an integrating factor

    \lambda = \lambda(X_1, \ldots, X_n, Y_1, \ldots, Y_n)

so that the adiabatic surfaces are

    0 = dS = \lambda \, d̄Q = \lambda \left( dU + Y_1 \, dX_1 + \cdots + Y_n \, dX_n \right) \quad \text{(exact)} .    (17.26)

We recognize S as the entropy and \lambda = 1/T, where T is the temperature:

    d̄Q = T \, dS .    (17.27)

This is the mathematical restatement of the second law of thermodynamics.

[Figure: non-intersecting adiabats labeled S = 1, S = 2, S = 3 in the (X_1, Y_1) plane]

Figure 17.2: Non-intersecting Adiabats

Change of Variables
Changing variables can often help.
Ex. 17.5. Consider an equation of the form

y′ = f(ax + by + c) (17.28)

which can be re-expressed as

dy = f(ax + by + c) dx . (17.29)

Let

v = ax + by + c (17.30a)

so

dv = a dx + b dy or a dx = dv − b dy . (17.30b)

Then

a dy = f(v)(dv − b dy) (17.31a)
=⇒ [a + bf(v)] dy = f(v) dv (17.31b)
=⇒ dy = f(v)/[a + bf(v)] dv . (17.31c)

The equation is now separated and we can integrate directly.
Ex. 17.6. Bernoulli equation

y′ + f(x)y = g(x)yⁿ . (17.32)

Divide by yⁿ:

(1/yⁿ) dy/dx + f(x)y^{1−n} = g(x) , (17.33)

and note that (1/yⁿ) dy/dx = [1/(1−n)] d(y^{1−n})/dx. Thus we let v = y^{1−n} to obtain

dv/dx + (1 − n)f(x)v = (1 − n)g(x) . (17.34)

This is now a linear first-order equation that has an integrating factor:

μ(x) = exp[(1 − n) ∫^x f(x′) dx′] . (17.35)

Multiply by the integrating factor:

μ(x) dv/dx + (1 − n)f(x)v μ(x) = (1 − n)g(x)μ(x) (17.36)

where the left-hand side is d[μ(x)v]/dx, and therefore

μ(x)v = ∫ (1 − n)g(x)μ(x) dx . (17.37)
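As a concrete check of the substitution (an illustrative sketch, not from the notes): take f(x) = 1, g(x) = 1, n = 2, so y′ + y = y². With v = 1/y the linear equation is v′ − v = −1, whose solution is v = 1 + Ce^x, i.e., y = 1/(1 + Ce^x).

```python
import math

def y(x, C=1.0):
    # Solution of the Bernoulli equation y' + y = y^2 obtained via v = 1/y:
    # v' - v = -1  =>  v = 1 + C e^x  =>  y = 1/(1 + C e^x)
    return 1.0 / (1.0 + C * math.exp(x))

def residual(x, C=1.0, h=1e-5):
    # Central-difference check that y' + y - y^2 = 0
    dy = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return dy + y(x, C) - y(x, C)**2

assert all(abs(residual(x)) < 1e-8 for x in [-1.0, 0.0, 0.5, 2.0])
```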
Homogeneous Functions
A function f(x, y, . . .) is a homogeneous function of degree r in its arguments if

f(ax, ay, . . .) = aʳ f(x, y, . . .) . (17.38)

A first order ODE A(x, y) dx + B(x, y) dy = 0 is a homogeneous equation if A and B are homogeneous functions of the same degree.
Then the substitution y = vx makes the equation separable.
Ex. 17.7. Consider

y dx + (2√(xy) − x) dy = 0 (17.39)

where both coefficients are homogeneous of degree 1. Let y = vx, dy = v dx + x dv; then

vx dx + (2x√v − x)(v dx + x dv) = 0 (17.40a)
=⇒ [vx + vx(2√v − 1)] dx + (2√v − 1)x² dv = 0 (17.40b)
=⇒ 2v^{3/2} x dx + (2√v − 1)x² dv = 0 (17.40c)
=⇒ dx/x + (2√v − 1)/(2v^{3/2}) dv = 0 (17.40d)

which is now separated!
Why did this work?
Suppose x and y both had the same dimensions, say meters.
Homogeneity means that the ODE is dimensionally consistent.
The substitution y = vx introduces a dimensionless variable v.
We then have to be able to write the ODE in the form

f(v) dv + g(v) dx/x = 0 (17.41)
in order for the dimensions to work out.
(Obviously, dimensional consistency of equations of motion is an important
thing in physics, so this device occurs frequently.)
Generalization: suppose that

(dimensions of y) = (dimensions of x)^m (17.42)

for some power m, and that

A(ax, aᵐy) = aʳ A(x, y) and B(ax, aᵐy) = a^{r−m+1} B(x, y) (17.43)

so that the ODE A(x, y) dx + B(x, y) dy = 0 is dimensionally correct.
Then the substitution y = vxᵐ reduces the equation to a separable one.
Such an equation is called an isobaric equation.
Ex. 17.8. Consider

xy²(3y dx + x dy) − (2y dx − x dy) = 0 . (17.44)

Test if this is isobaric: suppose x has units of s and suppose y has units of sᵐ. Then the dimensions of the terms of the equation are

s·s^{2m}·(sᵐ·s & s·sᵐ) & (sᵐ·s & s·sᵐ) (17.45a)
=⇒ s²s^{3m} & s·sᵐ (17.45b)

so the equation is dimensionally consistent if 2 + 3m = 1 + m or m = −½.

We are thus led to introduce v by y = vx^{−1/2} or v = y√x, which is dimensionless.
Actually, it is more convenient to let v = y²x, so x = v/y² and dx = dv/y² − 2v dy/y³. Then

v[3y(dv/y² − 2v dy/y³) + (v/y²) dy] − [2y(dv/y² − 2v dy/y³) − (v/y²) dy] = 0 . (17.46a)

Multiply by y²:

v(3y dv − 6v dy + v dy) − (2y dv − 4v dy − v dy) = 0 (17.46b)
=⇒ (3v − 2)y dv + 5v(1 − v) dy = 0 (17.46c)

which is separable.
18 Higher Order ODEs

Linear Equations with Constant Coefficients
These are equations of the form

aₙy⁽ⁿ⁾ + aₙ₋₁y⁽ⁿ⁻¹⁾ + · · · + a₂y″ + a₁y′ + a₀y = f(x) . (18.1)

• If f(x) = 0, the equation is a homogeneous equation.
• Otherwise, the equation is an inhomogeneous equation.
The general solution to an inhomogeneous equation is the sum of the general solution to the homogeneous equation — the complementary function — and any solution of the inhomogeneous equation — the particular integral.
• To solve the homogeneous equation (where f(x) = 0), try y = e^{mx}. Then

aₙmⁿ + aₙ₋₁mⁿ⁻¹ + · · · + a₁m + a₀ = 0 . (18.2)

The n roots of this polynomial are m₁, m₂, . . . , mₙ; when they are distinct, the complementary function is

c₁e^{m₁x} + c₂e^{m₂x} + · · · + cₙe^{mₙx} (18.3)

where c₁, c₂, . . . , cₙ are arbitrary constants.
However, suppose that some of the roots are the same, e.g., suppose m₁ = m₂. Now there are only n − 1 solutions and we need another.
Imagine a procedure in which m₂ → m₁ (i.e., we perturb the coefficients a₀, . . . , aₙ to break the degeneracy). Then

(e^{m₂x} − e^{m₁x})/(m₂ − m₁) (18.4)

is a solution (since it is a linear combination of two solutions), and as m₂ → m₁ (by reducing the perturbation) it becomes

d/dm e^{mx} |_{m=m₁} = x e^{m₁x} (18.5)

and this is the additional solution we need.
If three roots are equal, m₁ = m₂ = m₃, then the solutions are e^{m₁x}, xe^{m₁x}, and x²e^{m₁x} (and so on).
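A quick numerical sanity check (an illustrative sketch, not part of the notes): for a double root m₁ = −1 of m² + 2m + 1 = 0, the ODE is y″ + 2y′ + y = 0 and xe^{−x} should solve it.

```python
import math

def y(x):
    # Candidate extra solution for the double root m1 = -1
    return x * math.exp(-x)

def residual(x, h=1e-5):
    # Central differences for y'' + 2y' + y
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + 2 * d1 + y(x)

assert all(abs(residual(x)) < 1e-5 for x in [-1.0, 0.0, 1.0, 3.0])
```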

• Finding a particular solution can be tricky. . . .
Try the method of undetermined coefficients: if f(x) has only a finite number of linearly independent derivatives, e.g.,

xⁿ , e^{αx} , sin kx , cos kx , xⁿe^{αx} cos kx , . . .

then take as a trial y(x) a linear combination of f(x) and its independent derivatives.
Ex. 18.1. Solve

y″ + 3y′ + 2y = e^x . (18.6)

– Complementary function. Letting y = e^{mx} results in the polynomial equation m² + 3m + 2 = 0 with roots m = −1 and m = −2. Thus

y = c₁e^{−x} + c₂e^{−2x} . (18.7)

– Particular integral. Try y = Ae^x and substitute into the ODE:

Ae^x + 3Ae^x + 2Ae^x = e^x =⇒ 6A = 1 =⇒ A = 1/6 . (18.8)

Therefore, the general solution is

y = (1/6)e^x + c₁e^{−x} + c₂e^{−2x} . (18.9)
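A numerical spot-check of the general solution (illustrative only; the choice c₁ = c₂ = 1 is arbitrary): the residual y″ + 3y′ + 2y − e^x should vanish for any values of the constants.

```python
import math

def y(x):
    # General solution with c1 = c2 = 1
    return math.exp(x) / 6 + math.exp(-x) + math.exp(-2 * x)

def residual(x, h=1e-5):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + 3 * d1 + 2 * y(x) - math.exp(x)

assert all(abs(residual(x)) < 1e-4 for x in [-1.0, 0.0, 1.0])
```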
Note: if f (x) or a term in f (x) is also part of the complementary function, the
particular integral may contain this term and its derivatives multiplied by
some power of x.
Ex. 18.2. Re-solve Ex. 18.1 with f(x) = e^{−x} rather than e^x.

– Try y = Ae^{−x}:

Ae^{−x} − 3Ae^{−x} + 2Ae^{−x} = 0 ≠ e^{−x} (18.10)

so this doesn't work (because e^{−x} is a solution to the homogeneous equation).
– Now try y = Axe^{−x}, y′ = Ae^{−x} − Axe^{−x}, y″ = −2Ae^{−x} + Axe^{−x}. Then

(−2Ae^{−x} + Axe^{−x}) + 3(Ae^{−x} − Axe^{−x}) + 2Axe^{−x} = e^{−x} (18.11a)
=⇒ Ae^{−x} = e^{−x} (18.11b)
=⇒ A = 1 . (18.11c)

Therefore the general solution is

y = xe^{−x} + c₁e^{−x} + c₂e^{−2x} . (18.12)
Tricks for More General Problems
• If the dependent variable y is absent, let y′ = p be the new dependent variable. This lowers the order by one.
• If the equation is homogeneous in y, let v = ln y be a new dependent variable. The resulting equation will not contain v and the substitution v′ = p will reduce the order by one.
• If the equation is isobaric when x is given weight 1 and y is given weight m, the change in dependent variable y = vxᵐ followed by the change in the independent variable u = ln x gives an equation in which the new independent variable u is absent.
• Watch for the possibility that the equation is exact and consider the possibility of finding an integrating factor. For example,

y″ = f(y) (18.13)

can be integrated immediately by multiplying both sides by y′.
19 Power Series Solutions

Illustrate the basic idea with an example:
Ex. 19.1. A simple non-linear equation is

y″ = x − y² . (19.1)

Try a power series solution: y = c₀ + c₁x + c₂x² + · · · . We find

2c₂ + 6c₃x + 12c₄x² + · · · = x − c₀² − 2c₀c₁x − (c₁² + 2c₀c₂)x² − · · · (19.2)

so equating like powers we have

2c₂ = −c₀² =⇒ c₂ = −½c₀² (19.3a)
6c₃ = 1 − 2c₀c₁ =⇒ c₃ = 1/6 − (1/3)c₀c₁ (19.3b)
12c₄ = −c₁² − 2c₀c₂ =⇒ c₄ = −(1/12)c₁² + (1/12)c₀³ (19.3c)

and so on. . . .
Note: the cₙ for n > 1 can all be expressed in terms of c₀ and c₁, which are the two free constants of integration.
If we want a solution with y = 0 and y′ = 1 at x = 0 then c₀ = 0, c₁ = 1, and c₂ = 0, c₃ = 1/6, c₄ = −1/12, . . . , so

y = x + (1/6)x³ − (1/12)x⁴ + · · · (19.4)

(but we don't know if this series converges).
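The recurrence behind Eqs. (19.3) can be automated (an illustrative sketch): the coefficient of xⁿ in y² is a convolution of the cᵢ, and a short RK4 integration cross-checks the partial sum at a small x.

```python
# Series coefficients of y'' = x - y^2 with y(0) = 0, y'(0) = 1.
N = 8
c = [0.0, 1.0] + [0.0] * N
for n in range(N):
    conv = sum(c[i] * c[n - i] for i in range(n + 1))  # coefficient of x^n in y^2
    c[n + 2] = ((1.0 if n == 1 else 0.0) - conv) / ((n + 1) * (n + 2))

assert c[2] == 0.0 and abs(c[3] - 1/6) < 1e-15 and abs(c[4] + 1/12) < 1e-15

# Cross-check the partial sum against RK4 at x = 0.2.
def deriv(x, y, yp):
    return yp, x - y * y

x, y, yp, h = 0.0, 0.0, 1.0, 1e-3
while x < 0.2 - 1e-12:
    k1 = deriv(x, y, yp)
    k2 = deriv(x + h/2, y + h/2 * k1[0], yp + h/2 * k1[1])
    k3 = deriv(x + h/2, y + h/2 * k2[0], yp + h/2 * k2[1])
    k4 = deriv(x + h, y + h * k3[0], yp + h * k3[1])
    y  += h/6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    yp += h/6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    x += h
series = sum(c[n] * 0.2**n for n in range(len(c)))
assert abs(series - y) < 1e-6
```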
Linear Differential Equations
These have the form

dⁿy/dxⁿ + fₙ₋₁(x) d^{n−1}y/dx^{n−1} + · · · + f₁(x) dy/dx + f₀(x)y = 0 . (19.5)

• If f₀(x), f₁(x), . . . , fₙ₋₁(x) are regular at a point x = x₀ we call x₀ an ordinary point of the differential equation. The general solution can be written as a Taylor series with radius of convergence out to the nearest singular point:

y = ∑_{m=0}^∞ cₘ(x − x₀)ᵐ (19.6)

The coefficients cₘ are obtained by substitution into the differential equation (as before).
• If x₀ is not an ordinary point but

(x − x₀)fₙ₋₁(x) , (x − x₀)²fₙ₋₂(x) , . . . , (x − x₀)ⁿf₀(x)

are all regular at x₀ then x₀ is a regular singular point.
Then we can always find at least one solution of the form

y = (x − x₀)ˢ ∑_{m=0}^∞ cₘ(x − x₀)ᵐ , c₀ ≠ 0 (19.7)

(where s is not necessarily an integer) which has a radius of convergence extending to the nearest singularity apart from x₀.
Explore these two cases in the next two examples.
Ex. 19.2. Legendre's equation (a non-singular case) is:

(1 − x²)y″ − 2xy′ + n(n + 1)y = 0 . (19.8)

This has regular singular points at x = ±1. Expand about x = 0:

y = c₀ + c₁x + c₂x² + · · · (19.9)

and insert this into the differential equation to obtain

(1 − x²) ∑_{m=2}^∞ m(m − 1)cₘxᵐ⁻² − 2x ∑_{m=1}^∞ m cₘxᵐ⁻¹ + n(n + 1) ∑_{m=0}^∞ cₘxᵐ = 0 . (19.10a)

Write out the m = 0 and m = 1 terms explicitly:

n(n + 1)(c₀ + c₁x) − 2xc₁ + ∑_{m=2}^∞ cₘ[m(m − 1)xᵐ⁻² − m(m − 1)xᵐ − 2mxᵐ + n(n + 1)xᵐ] = 0 (19.10b)

=⇒ n(n + 1)(c₀ + c₁x) − 2xc₁ + ∑_{m=2}^∞ cₘ{m(m − 1)xᵐ⁻² + [n(n + 1) − m(m + 1)]xᵐ} = 0 . (19.10c)

Consider the first piece of the sum. Note that

∑_{m=2}^∞ cₘ m(m − 1)xᵐ⁻² = 2c₂ + 6c₃x + ∑_{m=4}^∞ cₘ m(m − 1)xᵐ⁻² (19.11a)

and, letting m = m′ + 2 in the remaining sum,

= 2c₂ + 6c₃x + ∑_{m′=2}^∞ c_{m′+2}(m′ + 2)(m′ + 1)x^{m′} (19.11b)

so we have

[n(n + 1)c₀ + 2c₂] + [n(n + 1)c₁ − 2c₁ + 6c₃]x + ∑_{m=2}^∞ {c_{m+2}(m + 2)(m + 1) + cₘ[n(n + 1) − m(m + 1)]}xᵐ = 0 . (19.12)
Now equate powers of x to find

2c₂ = −n(n + 1)c₀ =⇒ c₂ = −[n(n + 1)/2] c₀ (19.13a)
6c₃ = 2c₁ − n(n + 1)c₁ =⇒ c₃ = {[2 − n(n + 1)]/6} c₁ (19.13b)

and the general recurrence relation

(m + 1)(m + 2)c_{m+2} = −[n(n + 1) − m(m + 1)]cₘ
=⇒ c_{m+2}/cₘ = [m(m + 1) − n(n + 1)]/[(m + 1)(m + 2)] = (m + n + 1)(m − n)/[(m + 1)(m + 2)] . (19.13c)

Hence our solution is

y = c₀[1 − n(n + 1) x²/2! + n(n + 1)(n − 2)(n + 3) x⁴/4! + · · ·]
  + c₁[x − (n − 1)(n + 2) x³/3! + (n − 1)(n + 2)(n − 3)(n + 4) x⁵/5! + · · ·] . (19.14)

Note that c_{m+2}/cₘ → 1 as m → ∞ so both series converge for x² < 1.
Write the general solution as

y = c₀Uₙ(x) + c₁Vₙ(x) (19.15a)

where

Uₙ(x) = 1 − n(n + 1) x²/2! + n(n + 1)(n − 2)(n + 3) x⁴/4! + · · · (19.15b)
Vₙ(x) = x − (n − 1)(n + 2) x³/3! + (n − 1)(n + 2)(n − 3)(n + 4) x⁵/5! + · · · (19.15c)

are the two independent solutions, and c₀ and c₁ are the two constants of integration.
Although the series converge for |x| < 1, we saw in Ex. 2.4 that they diverge for |x| = 1; however, we normally want solutions over the domain −1 ≤ x ≤ 1. This can be arranged in one of two ways:

1. Let c₁ = 0 and choose one of n = −1, −3, −5, . . . or n = 0, 2, 4, . . . .
   Then the first series Uₙ(x) terminates and the second series Vₙ(x) is absent.
2. Let c₀ = 0 and choose one of n = −2, −4, −6, . . . or n = 1, 3, 5, . . . .
   Then the second series Vₙ(x) terminates and the first series Uₙ(x) is absent.

Therefore, to have a finite solution on −1 ≤ x ≤ 1, n must be an integer. The resulting solution is a polynomial which, when normalized by the condition y(1) = 1, is called a Legendre polynomial:

Pₙ(x) = Uₙ(x)/Uₙ(1) for n = 0, 2, 4, . . . ; Pₙ(x) = Vₙ(x)/Vₙ(1) for n = 1, 3, 5, . . . . (19.16)

The first few Legendre polynomials are

P₀(x) = 1 , P₁(x) = x , P₂(x) = ½(3x² − 1) , P₃(x) = ½(5x³ − 3x) , etc. (19.17)
See Fig. 19.1.
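The terminating series is easy to generate by machine (an illustrative sketch): iterate the recurrence (19.13c) starting from the even or odd series, then normalize so that the polynomial equals 1 at x = 1; this reproduces (19.17).

```python
from fractions import Fraction

def legendre_coeffs(n):
    # Coefficients [a0, a1, ..., an] of P_n built from the recurrence
    # c_{m+2}/c_m = (m+n+1)(m-n)/((m+1)(m+2)), normalized so P_n(1) = 1.
    c = [Fraction(0)] * (n + 1)
    c[n % 2] = Fraction(1)                  # start the even or odd series
    for m in range(n % 2, n - 1, 2):
        c[m + 2] = c[m] * (m + n + 1) * (m - n) / ((m + 1) * (m + 2))
    norm = sum(c)                           # value of the polynomial at x = 1
    return [ck / norm for ck in c]

assert legendre_coeffs(2) == [Fraction(-1, 2), Fraction(0), Fraction(3, 2)]
assert legendre_coeffs(3) == [Fraction(0), Fraction(-3, 2), Fraction(0), Fraction(5, 2)]
```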

[Figure 19.1: Legendre Polynomials Pn(x), n = 0, 1, 2, 3, plotted on −1 ≤ x ≤ 1]
What about the non-terminating series for integer n? This series diverges at x = ±1. Consider, for example, the case n = 0 and c₀ = 0:

y = c₁[x − (−1)(2) x³/3! + (−1)(2)(−3)(4) x⁵/5! − · · ·] . (19.18)

Note:

c_{m+2}/cₘ = (m + n + 1)(m − n)/[(m + 1)(m + 2)] = m/(m + 2) since n = 0 (19.19a)
=⇒ (m + 2)c_{m+2} = m cₘ (19.19b)
=⇒ cₘ = c₁/m . (19.19c)

Thus

y = c₁[x + x³/3 + x⁵/5 + x⁷/7 + · · ·] . (19.20)

We've seen this series before in Eq. (3.11): it is (1/2) ln[(1 + x)/(1 − x)] and is singular at x = ±1.
We have Legendre functions of the second kind of order n:

Qₙ(x) = Uₙ(1)Vₙ(x) for n = 0, 2, 4, . . . ; Qₙ(x) = −Vₙ(1)Uₙ(x) for n = 1, 3, 5, . . . (19.21)

with

Q₀(x) = (1/2) ln[(1 + x)/(1 − x)] , Q₁(x) = (x/2) ln[(1 + x)/(1 − x)] − 1 , etc. (19.22)
See Fig. 19.2.


The general solution to Legendre’s equation with integer n is

y = APn (x) + B Qn (x) . (19.23)

[Figure 19.2: Legendre Functions of the Second Kind Qn(x), n = 0, 1, 2, 3]
Ex. 19.3. Bessel's equation (a singular case) is:

x²y″ + xy′ + (x² − ν²)y = 0 . (19.24)

This has a regular singular point at x = 0 so the solution has the form

y(x, s) = xˢ ∑_{n=0}^∞ cₙxⁿ , c₀ ≠ 0 . (19.25)

We have

xy′ = ∑_{n=0}^∞ (s + n)cₙx^{s+n} (19.26a)
x²y″ = ∑_{n=0}^∞ (s + n)(s + n − 1)cₙx^{s+n} (19.26b)

so, substituting into Bessel's equation we find

∑_{n=0}^∞ {[(s + n)(s + n − 1) + (s + n) − ν²]cₙx^{s+n} + cₙx^{s+n+2}} = 0 , (19.27)

where the bracketed coefficient is (s + n)² − ν² = (s + n + ν)(s + n − ν).
Write out the first two terms explicitly:

(s² − ν²)c₀xˢ + [(s + 1)² − ν²]c₁x^{s+1} + ∑_{n=2}^∞ [(s + ν + n)(s − ν + n)cₙ + cₙ₋₂]x^{s+n} = 0 . (19.28)

We see that Bessel's equation is solved if

• s² = ν² (19.29a)
  which is called the indicial equation;
• c₁[(s + 1)² − ν²] = 0 (19.29b)
  which is solved if c₁ = 0 or (s + 1) = ±ν;
• cₙ/cₙ₋₂ = −1/[(s + ν + n)(s − ν + n)] (19.29c)
  which is the recurrence relation.
We choose to solve the second of these by setting c₁ = 0. Then only the even-n terms survive and the recurrence formula gives all cₙ (n even) in terms of c₀. The solutions to the indicial equation are s = ±ν and the two independent solutions are

y(x, +ν) and y(x, −ν) . (19.30)

Aside: had we left c₁ free and instead set (s + 1) = ±ν then, with the indicial equation, we have the requirement s = −ν = −1/2. It turns out that the terms that appear from this are identical to those contained in the other solution s = +ν = 1/2 with c₁ = 0, so we can choose c₁ = 0 even for the ν = 1/2 case.
Set s² = ν² and c₁ = 0. Then

cₙ/cₙ₋₂ = −1/[(s + n)² − s²] = −1/(s² + 2sn + n² − s²) = −1/[n(2s + n)] . (19.31)

The non-vanishing coefficients are the c₂ₙ:

c₂ = −c₀/[2(2s + 2)] = −c₀/[4(s + 1)] (19.32a)
c₄ = −c₂/[4(2s + 4)] = −c₂/[8(s + 2)] = c₀/[4·8·(s + 1)(s + 2)] (19.32b)
c₆ = −c₄/[6(2s + 6)] = −c₄/[12(s + 3)] = −c₀/[4·8·12·(s + 1)(s + 2)(s + 3)] (19.32c)
...
c₂ₙ = −c₂ₙ₋₂/[2n(2s + 2n)] = −c₂ₙ₋₂/[4n(s + n)]
    = (−1)ⁿ c₀/[2²ⁿ n!(s + 1)(s + 2)(s + 3) · · · (s + n)] . (19.32d)
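For the index-zero case s = 0 with the conventional c₀ = 1, Eq. (19.32d) gives c₂ₙ = (−1)ⁿ/[2²ⁿ(n!)²], i.e., the Bessel function J₀(x) = 1 − x²/4 + x⁴/64 − · · · (this series also appears in Problem 23). A quick numerical check (illustrative; the ten-digit value of J₀(1) is assumed from standard tables):

```python
import math

def J0(x, terms=20):
    # Partial sum of the s = 0, c0 = 1 Bessel series:
    # c_{2n} = (-1)^n / (2^{2n} (n!)^2)
    total = 0.0
    for n in range(terms):
        total += (-1)**n * x**(2 * n) / (4**n * math.factorial(n)**2)
    return total

assert J0(0.0) == 1.0
assert abs(J0(1.0) - 0.7651976865579666) < 1e-9
```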
But there is a problem if ν is an integer: the procedure works fine for the s = +ν solution (assume ν is positive), but the second solution with s = −ν won't work because

cₙ/cₙ₋₂ = −1/[(s + ν + n)(s − ν + n)]|_{s=−ν} = −1/[n(n − 2ν)] (19.33)

so when n = 2ν the ratio is infinite, and c₂ᵥ and higher are infinite!
We need a way to get a second solution, so we try this trick: don't impose the indicial relation (i.e., leave s and ν unrelated), multiply y(x, s) by the factor (s + ν), then take the limit as s → −ν. The factor will cancel the infinities with this procedure.
It turns out this doesn't work. . . but let's try it and see why.
Before taking s → −ν, the solution will be

(s + ν)y(x, s) = c₀xˢ { (s + ν) − (s + ν)/[(s + ν + 2)(s − ν + 2)] x² + · · ·
    ± 1/[(s + ν + 2)(s − ν + 2) · · · (s + ν + 2ν)] x²ᵛ ∓ · · · } (19.34)

where in the x²ᵛ term the denominator factor (s − ν + 2ν) = (s + ν) has cancelled the prefactor, so the infinity is cancelled.
Now as s → −ν, the terms up to x²ᵛ vanish and we have

[(s + ν)y(x, s)]_{s=−ν} = c₀x⁻ᵛ { ± 1/[2·(2 − 2ν) · · · (2ν)] x²ᵛ ∓ · · · } (19.35a)
                       = c₀′ xᵛ { 1 + (c₂′/c₀′)x² + · · · } (19.35b)

where

c₀′ = ± c₀/[2·(2 − 2ν) · · · (2ν)] (19.35c)

and

cₙ′/cₙ₋₂′ = c₂ᵥ₊ₙ/c₂ᵥ₊ₙ₋₂ = −1/[(s + ν + 2ν + n)(s − ν + 2ν + n)] (19.35d)
         = −1/{[(s + 2ν) + ν + n][(s + 2ν) − ν + n]} . (19.35e)

But note: s + 2ν when s = −ν is the same as s when s = +ν, so this solution is actually the same as the y(x, +ν) solution (up to an overall factor).
Thus it is not an independent solution.
Instead, substitute (s + ν)y(x, s) into Bessel's equation. The result will not be zero since we have not yet imposed the indicial equation; it is proportional to the indicial polynomial:

[x² ∂²/∂x² + x ∂/∂x + (x² − ν²)] (s + ν)y(x, s) = (s + ν)(s² − ν²) c₀xˢ = (s + ν)²(s − ν) c₀xˢ . (19.36)

Now take the partial derivative with respect to s:

[x² ∂²/∂x² + x ∂/∂x + (x² − ν²)] ∂/∂s[(s + ν)y(x, s)] = [2(s + ν)(s − ν) + (s + ν)² + (s + ν)²(s − ν) ln x] c₀xˢ (19.37)

and the right-hand side vanishes as s → −ν.
Therefore our second solution is

lim_{s→−ν} ∂/∂s [(s + ν)y(x, s)] . (19.38)
To see how this works, consider the case ν = 2:

y(x, s) = c₀xˢ [1 − x²/(s(s + 4)) + x⁴/(s(s + 4)(s + 2)(s + 6)) − · · ·] (19.39)

where the factor (s + 2) in the denominator of the x⁴ term is what causes the problem when s = −ν = −2. So

(s + 2)y(x, s) = c₀xˢ [ (s + 2) − (s + 2)x²/(s(s + 4)) + x⁴/(s(s + 4)(s + 6))
    − x⁶/(s(s + 4)(s + 6)(s + 4)(s + 8)) + · · · ] . (19.40)

Now take the derivative with respect to s. Note: ∂xˢ/∂s = ∂e^{s ln x}/∂s = xˢ ln x. Then

∂/∂s[(s + 2)y(x, s)] = (s + 2)y(x, s) ln x
    + c₀xˢ ∂/∂s [ (s + 2) − (s + 2)x²/(s(s + 4)) + x⁴/(s(s + 4)(s + 6))
    − x⁶/(s(s + 4)(s + 6)(s + 4)(s + 8)) + · · · ] (19.41a)
  = (s + 2)y(x, s) ln x
    + c₀xˢ { 1 − (s + 2)/(s(s + 4)) [1/(s + 2) − 1/s − 1/(s + 4)] x²
    + 1/(s(s + 4)(s + 6)) [−1/s − 1/(s + 4) − 1/(s + 6)] x⁴ − · · · } . (19.41b)

Now we set s = −2. Note that [cf. Eq. (19.35c)]

[(s + 2)y(x, s)]_{s=−2} = [1/(s(s + 4)(s + 6))]_{s=−2} y(x, +2) = −(1/16) y(x, 2) (19.42)

and therefore

∂/∂s[(s + 2)y(x, s)]|_{s=−2} = −(1/16) y(x, 2) ln x + c₀ (1/x²) { 1 + x²/4 + x⁴/64 + · · · } . (19.43)

This is our second independent solution. Note that it is singular at x = 0.
Application: Quantum Harmonic Oscillator
The stationary states of a one-dimensional quantum harmonic oscillator satisfy the time-independent Schrödinger equation

d²ψ/dx² + (E − x²)ψ = 0 . (19.44)

Here, for convenience, we use dimensionless variables: to restore dimensions, x → √(mω/ℏ) x and E → E/(½ℏω) where ω is the angular frequency of the oscillator.
For large values of x we have

d²ψ/dx² − x²ψ ≈ 0 (19.45)

and so the solutions are ψ ∼ e^{±x²/2} as x → ∞: ψ′ ∼ ±xe^{±x²/2} and ψ″ ∼ x²e^{±x²/2} (where the omitted term is higher order in the asymptotic series) so ψ″ − x²ψ vanishes at leading order in the asymptotic series.
Physical solutions must not become infinite as x → ∞. This motivates the substitution

ψ = ye^{−x²/2} . (19.46)

(We must watch for the solutions y ∼ e^{x²} that generate the unwanted ψ ∼ e^{+x²/2} behavior.) We have:

ψ′ = y′e^{−x²/2} − xye^{−x²/2} (19.47a)
ψ″ = y″e^{−x²/2} − 2xy′e^{−x²/2} − ye^{−x²/2} + x²ye^{−x²/2} (19.47b)

so, substituting into the Schrödinger equation, we have

(y″ − 2xy′ − y + x²y) + Ey − x²y = 0 (19.47c)

where the x²y terms cancel. The resulting equation is the Hermite differential equation:

y″ − 2xy′ + (E − 1)y = 0 . (19.48)
We seek a power series solution of the form

y = ∑_{n=0}^∞ cₙxⁿ . (19.49)

Substitute this into the Hermite equation:

0 = ∑_{n=2}^∞ n(n − 1)cₙxⁿ⁻² − 2 ∑_{n=1}^∞ ncₙxⁿ + (E − 1) ∑_{n=0}^∞ cₙxⁿ (19.50a)
  = ∑_{n=2}^∞ n(n − 1)cₙxⁿ⁻² − 2c₁x − 2 ∑_{n=2}^∞ ncₙxⁿ + (E − 1)c₀ + (E − 1)c₁x + (E − 1) ∑_{n=2}^∞ cₙxⁿ (19.50b)
  = ∑_{n=0}^∞ (n + 2)(n + 1)cₙ₊₂xⁿ − 2c₁x + (E − 1)c₀ + (E − 1)c₁x + ∑_{n=2}^∞ (E − 1 − 2n)cₙxⁿ (19.50c)
  = 2c₂ + 6c₃x + ∑_{n=2}^∞ (n + 2)(n + 1)cₙ₊₂xⁿ − 2c₁x + (E − 1)c₀ + (E − 1)c₁x + ∑_{n=2}^∞ (E − 1 − 2n)cₙxⁿ (19.50d)
  = [(E − 1)c₀ + 2c₂] + [(E − 3)c₁ + 6c₃]x + ∑_{n=2}^∞ [(n + 2)(n + 1)cₙ₊₂ − (2n + 1 − E)cₙ]xⁿ . (19.50e)

Therefore

c₂ = [(1 − E)/2] c₀ , c₃ = [(3 − E)/6] c₁ , (19.51a)

and

cₙ₊₂/cₙ = [(2n + 1) − E]/[(n + 1)(n + 2)] , n = 2, 3, 4, . . . . (19.51b)
Therefore our power series solution is

y = c₀ { 1 + (1 − E) x²/2! + (1 − E)(5 − E) x⁴/4! + · · · }
  + c₁ { x + (3 − E) x³/3! + (3 − E)(7 − E) x⁵/5! + · · · } . (19.52)

In general, for large n, cₙ₊₂/cₙ ∼ 2/n as n → ∞, so c₂ₙ₊₂ ∼ (2/2n)c₂ₙ = c₂ₙ/n.
Therefore

c₂₍ₙ₊₁₎ ∼ c₂ₙ/n ∼ c₂₍ₙ₋₁₎/[n(n − 1)] ∼ · · · ∼ c₀/n! as n → ∞ (19.53)

and similarly with the odd-n coefficients.
Therefore the terms behave as ∼ (x²)ⁿ/n! as n → ∞, so y ∼ e^{x²} for large x as expected: this generates the ψ ∼ e^{x²/2} solutions.
The bounded (as x → ±∞) solutions arise when one of the series truncates (and the coefficient of the other series is chosen to be 0). This only happens when

(2n + 1) − E = 0 =⇒ E = 2n + 1 . (19.54)

Then one of the two series will truncate.
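A short check of the truncation condition (an illustrative sketch, not from the notes): with E = 5 (i.e., n = 2) the even series terminates after the x² term, reproducing H₂ up to normalization.

```python
def even_series_coeffs(E, nmax=10):
    # Even-power coefficients: c2 = (1-E)/2 c0 (c0 = 1), then the
    # recurrence c_{n+2}/c_n = ((2n+1)-E)/((n+1)(n+2)) for n = 2, 4, ...
    c = {0: 1.0, 2: (1 - E) / 2}
    for n in range(2, nmax, 2):
        c[n + 2] = c[n] * ((2 * n + 1) - E) / ((n + 1) * (n + 2))
    return c

c = even_series_coeffs(E=5)
assert c[2] == -2.0                              # y = 1 - 2x^2, proportional to H2
assert all(c[n] == 0.0 for n in range(4, 12, 2)) # series has truncated
```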
We see that the boundary conditions pose restrictions on the form of the differential equation. Acceptable values of E are

E = Eₙ = 2n + 1 , n = 0, 1, 2, . . . . (19.55)

These are eigenvalues. The corresponding solutions (that don't blow up) are the eigenfunctions

ψ(x) = ψₙ(x) = Hₙ(x)e^{−x²/2} (19.56)

where Hₙ(x) are Hermite polynomials of order n:

H₀(x) = 1 for E₀ = 1 (19.57a)
H₁(x) = 2x for E₁ = 3 (19.57b)
H₂(x) = −2(1 − 2x²) for E₂ = 5 (19.57c)
H₃(x) = −12(x − (2/3)x³) for E₃ = 7 (19.57d)

etc. See Fig. 19.3.
[Figure 19.3: Hermite Polynomials Hn(x), n = 0, 1, 2, 3]
We say that the ψₙ are the eigenfunctions of the differential operator −d²/dx² + x² belonging to the eigenvalues Eₙ:

(−d²/dx² + x²) ψₙ = Eₙψₙ . (19.58)

Restoring physical units, the Schrödinger equation is

(−(ℏ²/2m) d²/dx² + ½mω²x²) ψₙ = Eₙψₙ (19.59)

and we have the (suitably normalized) eigenstates

ψₙ(x) = [1/√(2ⁿn!)] (mω/(πℏ))^{1/4} e^{−mωx²/(2ℏ)} Hₙ(√(mω/ℏ) x) , n = 0, 1, 2, . . . (19.60a)

belonging to the eigenenergies

Eₙ = ℏω(n + ½) , n = 0, 1, 2, . . . . (19.60b)
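As a numerical sanity check of the normalization (illustrative; dimensionless units ℏ = m = ω = 1 are assumed): the ground state of (19.60a) reduces to ψ₀(x) = π^{−1/4}e^{−x²/2}, which should have unit norm.

```python
import math

def psi0(x):
    # Ground state in units hbar = m = omega = 1
    return math.pi**-0.25 * math.exp(-x * x / 2)

# Trapezoidal integral of |psi0|^2 over a wide interval
n, a = 20000, 10.0
h = 2 * a / n
norm = sum(psi0(-a + i * h)**2 for i in range(n + 1)) * h
norm -= h / 2 * (psi0(-a)**2 + psi0(a)**2)
assert abs(norm - 1.0) < 1e-9
```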
Consider a more generic quantum mechanics problem. The time-independent Schrödinger equation is

d²ψ/dx² = −(2m/ℏ²)[E − V(x)]ψ (19.61)

where V(x) is some arbitrary potential, e.g., like the potential shown in Fig. 19.4, which passes through the values V₁ < V₂ < V₃.

[Figure 19.4: A Potential]

• If E > V(x), ψ″/ψ < 0
  =⇒ ψ curves toward the x-axis
  =⇒ sinusoidal character.
• If E < V(x), ψ″/ψ > 0
  =⇒ ψ curves away from the x-axis
  =⇒ exponential character.
We require ψ to remain finite everywhere so unbounded exponential behavior is unacceptable. This boundary condition imposes restrictions on the solutions.
Consider these cases:
• E > V₃: Solutions are oscillatory everywhere
  =⇒ always acceptable.
• V₂ < E < V₃: Most solutions blow up as x → ∞, however we can find a unique solution (up to an overall factor) that falls off exponentially as x → +∞. This fixes the phase of the solution on the left-hand side.
• V₁ < E < V₂: Solutions behave exponentially at both ends x → ±∞. Adjusting the solution so that it does not blow up on the left-hand side almost certainly means it blows up on the right-hand side and vice versa. Only for certain values of E can satisfactory solutions be found
  =⇒ eigenvalues.
• E < V₁: No satisfactory solutions are possible.
20 The WKB Method

The Wentzel-Kramers-Brillouin (WKB) method obtains approximate solutions of differential equations of the form

−d²y/dx² + f(x)y = 0 (20.1)

where f(x) is slowly varying.
Note: for f ≈ const, the solution would be an exponential or a sinusoid depending on the sign of the constant. Therefore try

y = e^{S(x)} , y′ = S′(x)e^{S(x)} , y″ = S″(x)e^{S(x)} + [S′(x)]²e^{S(x)} (20.2)

which results in

−[S′(x)]² − S″(x) + f(x) = 0 . (20.3)

If S″(x) is small then

S′(x) ≈ ±√f(x) =⇒ S(x) ≈ ± ∫ √f(x) dx . (20.4)

By "small" we mean (see below)

|S″(x)| ≈ |f′(x)|/(2√f(x)) ≪ |f(x)| . (20.5)

The solution will be

y ≈ exp[± ∫ √f(x) dx] (20.6)

so we can regard 1/√f ≈ ℓ, where ℓ = (wavelength)/(2π) for f < 0 or the exponential scale length for f > 0.
The condition of validity of the approximation is

(fractional change in f over one length scale) = δf/f = ℓf′/f ≪ 1 =⇒ |f′(x)|/√f(x) ≪ |f(x)| . (20.7)
Improve the approximation by including the S″ term:

S″ ≈ ± f′(x)/(2√f(x)) (20.8a)
=⇒ [S′(x)]² = f(x) − S″(x) ≈ f(x) ∓ f′(x)/(2√f(x)) = f(x)[1 ∓ f′(x)/(2f^{3/2}(x))] (20.8b)
=⇒ S′(x) ≈ ±√f(x) {1 ∓ f′(x)/(4f^{3/2}(x)) + · · ·} ≈ ±√f(x) − f′(x)/(4f(x)) (20.8c)
=⇒ S(x) ≈ ± ∫ √f(x) dx − ¼ ln f(x) . (20.8d)

Our solution is:

y(x) ≈ [1/f^{1/4}(x)] { c₊ exp[+∫ √f(x) dx] + c₋ exp[−∫ √f(x) dx] } . (20.9)
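A rough numerical illustration (a sketch with an assumed slowly varying f(x) = 1 + εx, not from the notes): the WKB form (20.9) should satisfy (20.1) with a residual that is small compared with f·y when ε is small; here the residual is O(ε²) relative to f·y.

```python
import math

eps = 0.01                                   # slowly varying
f = lambda x: 1.0 + eps * x
S = lambda x: (2.0 / (3.0 * eps)) * f(x)**1.5  # integral of sqrt(f)

def y(x):
    # Decaying WKB branch of (20.9): f^{-1/4} exp(-S)
    return f(x)**-0.25 * math.exp(-S(x))

def rel_residual(x, h=1e-4):
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return abs(-d2 + f(x) * y(x)) / abs(f(x) * y(x))

# Residual is tiny relative to f*y, even though -y'' and f*y are each O(f*y)
assert all(rel_residual(x) < 1e-3 for x in [0.0, 1.0, 5.0])
```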

Note that there are two solutions corresponding either to exponentially growing or decaying solutions for f > 0 or to cosine or sine sinusoids for f < 0.
The method fails if f(x) varies rapidly or if f(x) goes through zero.
If f(x) goes through zero, we need to join an oscillatory solution where f(x) < 0 to an exponential solution where f(x) > 0. In doing so, c₊ and c₋ become related and the phase of oscillation is determined.
Ex. 20.1. Airy equation. Here we take f(x) = x so

d²y/dx² − xy = 0 . (20.10)

Figure 20.1 shows two different solutions to Airy's equation, one that is exponentially decreasing on the right-hand side and one that is exponentially increasing.

[Figure 20.1: Solutions to Airy's Equation]

• For x ≪ −1:

√f(x) = √x = −i√(−x) and f^{1/4}(x) = (−x)^{1/4}/e^{iπ/4} ; (20.11a)

also

∫₀ˣ √f(x) dx = ∫₀ˣ (−i)√(−x) dx = i ∫₀^{−x} √x dx = i (2/3)(−x)^{3/2} (20.11b)

so the two solutions will have the form

(−x)^{−1/4} exp[±i (2/3)(−x)^{3/2} + iπ/4] . (20.11c)

Therefore,

y ≈ A(−x)^{−1/4} cos[(2/3)(−x)^{3/2} + δ] , x ≪ −1 (20.12)

where A is a free amplitude constant and δ is an undetermined phase.
• For x ≫ 1, the two solutions have the form

x^{−1/4} exp[± ∫ √x dx] = x^{−1/4} exp[± (2/3)x^{3/2}] (20.13)

and we take the negative exponential solution which remains bounded as x → ∞:

y ≈ Bx^{−1/4} exp[−(2/3)x^{3/2}] , x ≫ 1 (20.14)

where B is a free amplitude constant.

We now want to connect these forms at x = 0. This will allow us to determine the phase δ on the left-hand side that results in the exponential decay on the right-hand side.
Deduce the connection formula using Fourier transform methods: Let

g(k) = ∫_{−∞}^∞ y(x)e^{−ikx} dx . (20.15)

Since

d²y/dx² − xy = 0 (20.16)

we find

−k²g(k) − i dg(k)/dk = 0 =⇒ g(k) = Ce^{ik³/3} (20.17)

where C is a constant of integration, so

y(x) = (C/2π) ∫_{−∞}^∞ e^{ik³/3} e^{ikx} dk . (20.18)

Convention: set C = 1; the result is the Airy function of the first kind, which can be written in these forms

Ai x = (1/2π) ∫_{−∞}^∞ exp[i(k³/3 + kx)] dk (20.19)
Ai x = (1/π) ∫₀^∞ cos(k³/3 + kx) dk . (20.20)

We will use the first form.
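The cosine form (20.20) can be checked by brute force (an illustrative sketch; the reference value Ai 0 = 3^{−2/3}/Γ(2/3) ≈ 0.35503 is assumed from standard tables, and the upper cutoff is an assumption justified by the increasingly rapid oscillation of the integrand):

```python
import math

def airy_cos_integral(x, K=10.0, n=200000):
    # Trapezoidal estimate of (1/pi) * integral_0^K cos(k^3/3 + k x) dk.
    # The tail beyond K contributes only O(1/K^2) because the integrand
    # oscillates ever faster there.
    h = K / n
    total = 0.5 * (math.cos(0.0) + math.cos(K**3 / 3 + K * x))
    for i in range(1, n):
        k = i * h
        total += math.cos(k**3 / 3 + k * x)
    return total * h / math.pi

assert abs(airy_cos_integral(0.0) - 0.3550280539) < 0.02
```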
Note: the second independent solution to the Airy equation is the Airy function of the second kind,

Bi x = (1/π) ∫₀^∞ { exp(−k³/3 + kx) + sin(k³/3 + kx) } dk . (20.21)

The functions Ai x and Bi x are shown in Fig. 20.2. We see that the function Bi x has the unwanted exponentially increasing behavior.

[Figure 20.2: Airy Functions of the First and Second Kind]
We now want to compute the asymptotic forms of the Airy integral for x → −∞ and x → ∞ and compare to our WKB results in order to identify the phase δ. We will use the saddle-point method.

• For x → −∞, write the integrand of Ai x as

e^{(−x)f(k)} with f(k) = i(k + k³/(3x)) (20.22)

since (−x) is large and positive. We have:

f′(k) = i(1 + k²/x) =⇒ f′(k₀) = 0 for k₀ = ±√(−x) (20.23a)
f″(k) = 2ik/x =⇒ f″(k₀) = ∓2i/√(−x) . (20.23b)

Note: there are two saddle points, k₀ = ±√(−x). Also

f(k₀) = ±i√(−x) ∓ (i/3)√(−x) = ±(2/3)i√(−x) . (20.23c)

Therefore,

f(k) ≈ f(k₀) + ½ f″(k₀)(k − k₀)² . (20.24)

Write f″(k₀) = ρe^{iφ} with ρ = 2/√(−x) and φ = ∓π/2, and k − k₀ = se^{iθ} with θ = −φ/2 ± π/2. Then

y = (1/2π) ∫_C e^{(−x)f(k)} dk ∼ (1/2π) √(2π/((−x)ρ)) e^{(−x)f(k₀)} e^{iθ} (20.25)

where C is a contour deformed to go over the saddle points.

To figure out how to deform the contour C to go over the saddle points appropriately we need to look at the topography of Re f(k). The top panel of Fig. 20.3 shows that θ = +π/4 for k₀ = −√(−x) and θ = −π/4 for k₀ = +√(−x). We need to go over both saddle points so we need to add the two contributions

y ∼ (1/2π) √(π/√(−x)) exp[±(2/3)i(−x)^{3/2}] e^{∓iπ/4} , x → −∞ (20.26)

together to get the asymptotic form of the Airy function for x → −∞:

Ai x ∼ [1/(√π (−x)^{1/4})] cos[(2/3)(−x)^{3/2} − π/4] , x → −∞ . (20.27)
• For x → +∞, write the integrand of Ai x as

e^{xf(k)} with f(k) = i(k + k³/(3x)) (20.28)

since x is large and positive. We have:

f′(k) = i(1 + k²/x) =⇒ f′(k₀) = 0 for k₀ = ±i√x (20.29a)
f″(k) = 2ik/x =⇒ f″(k₀) = ∓2/√x . (20.29b)

Note: there are two saddle points, k₀ = ±i√x, but now we will only go over one. Also

f(k₀) = ∓(2/3)√x . (20.29c)

Write f″(k₀) = ρe^{iφ} with ρ = 2/√x and φ = π or 0. From the topography of Re f(k) shown in the bottom panel of Fig. 20.3, we see we should go over only the saddle point k₀ = +i√x, with k − k₀ = se^{iθ} where θ = 0. Then

Ai x = (1/2π) ∫_C e^{xf(k)} dk ∼ (1/2π) √(2π/(xρ)) e^{xf(k₀)} e^{iθ} (20.30a)
     = (1/2π) √(π/√x) exp[−(2/3)x^{3/2}] (20.30b)

where C is a contour deformed to go over the desired saddle point. Thus

Ai x ∼ [1/(2√π x^{1/4})] exp[−(2/3)x^{3/2}] , x → +∞ . (20.31)

We therefore have:

Ai x ∼ [1/(√π (−x)^{1/4})] cos[(2/3)(−x)^{3/2} − π/4] , x → −∞ (20.32a)
Ai x ∼ [1/(2√π x^{1/4})] exp[−(2/3)x^{3/2}] , x → +∞ (20.32b)

while our WKB solution was

y ≈ A(−x)^{−1/4} cos[(2/3)(−x)^{3/2} + δ] , x ≪ −1 (20.33a)
y ≈ Bx^{−1/4} exp[−(2/3)x^{3/2}] , x ≫ 1 . (20.33b)

Comparison tells us A = 2B and δ = −π/4. The phase is now determined! Any other phase would have introduced an exponentially growing term as x → +∞.
[Figure 20.3: Topography of the surface Re[i(k + k³/(3x))] for x < 0 (top) and x > 0 (bottom). The saddle points are at the intersection of the white contour lines. Top: the contour is deformed so that it goes over both saddle points k₀ = ±√(−x). Bottom: the contour is deformed to go over the saddle point k₀ = i√x but not k₀ = −i√x.]
The WKB method can be used for more general f(x) but we will still need to connect an oscillatory solution for the f(x) < 0 region to an exponential solution for the f(x) > 0 region.
Note that f(x) is approximately linear as it passes through zero, so the undetermined phase is just that of the Airy function.
Therefore the rule is, when f(b) = 0 with f(x) > 0 for x > b,

[2/(−f(x))^{1/4}] cos[∫ₓᵇ √(−f(x)) dx − π/4] ⇌ [1/f(x)^{1/4}] exp[−∫ᵇˣ √f(x) dx] (20.34)

where the left form holds where f(x) < 0 (x < b) and the right form where f(x) > 0 (x > b).
This is a connection formula.

[Figure 20.4: Connection Formulas for −d²y/dx² + f(x)y = 0.]

For a turning point at x = b with f(x) < 0 for x < b and f(x) > 0 for x > b:

2(−f)^{−1/4} cos[∫ₓᵇ √(−f) dx − π/4] ⇌ f^{−1/4} exp[−∫ᵇˣ √f dx]
(−f)^{−1/4} sin[∫ₓᵇ √(−f) dx − π/4] ⇌ −f^{−1/4} exp[+∫ᵇˣ √f dx]

For a turning point at x = a with f(x) > 0 for x < a and f(x) < 0 for x > a:

f^{−1/4} exp[−∫ₓᵃ √f dx] ⇌ 2(−f)^{−1/4} cos[∫ₐˣ √(−f) dx − π/4]
−f^{−1/4} exp[+∫ₓᵃ √f dx] ⇌ (−f)^{−1/4} sin[∫ₐˣ √(−f) dx − π/4]
Ex. 20.2. Bohr-Sommerfeld quantization rule.

Time-independent Schrödinger equation:

$$\frac{d^2\psi}{dx^2} - \frac{2m}{\hbar^2}[V(x) - E]\,\psi = 0 \qquad (20.35)$$

The potential $V(x)$ shown in Fig. 20.5 has two turning points at $x = a$ and $x = b$.
Use the WKB method with

$$f(x) = \frac{2m}{\hbar^2}[V(x) - E]\,. \qquad (20.36)$$

[Figure 20.5: Potential for Bohr-Sommerfeld Quantization Rule — a potential well with $V(x) = E$ at the turning points $x = a$ and $x = b$.]

• For $x < a$, the solution is exponential. Boundedness as $x \to -\infty$ means that in $a < x < b$ we have

$$\psi(x) \approx \frac{A}{[E - V(x)]^{1/4}}\cos\left(\int_a^x \frac{\sqrt{2m[E - V(x)]}}{\hbar}\,dx - \frac{\pi}{4}\right) \qquad (20.37)$$

(from the connection formula).

• For $x > b$, the solution is again exponential. Boundedness as $x \to +\infty$ means that in $a < x < b$ we have

$$\psi(x) \approx \frac{B}{[E - V(x)]^{1/4}}\cos\left(\int_x^b \frac{\sqrt{2m[E - V(x)]}}{\hbar}\,dx - \frac{\pi}{4}\right)\,. \qquad (20.38)$$

These must be the same! Let

$$\theta = \int_a^b \frac{\sqrt{2m[E - V(x)]}}{\hbar}\,dx \qquad\text{and}\qquad \alpha = \int_x^b \frac{\sqrt{2m[E - V(x)]}}{\hbar}\,dx - \frac{\pi}{4} \qquad (20.39)$$

so that the phase in (20.37) is $\int_a^x(\cdots) - \pi/4 = \theta - \alpha - \pi/2$; then we see we must have $|A| = |B|$ and

$$\cos(\theta - \alpha - \pi/2) = \pm\cos(\alpha) = \pm\cos(-\alpha) \qquad (20.40)$$
$$\implies \theta = \frac{\pi}{2} + n\pi\,, \quad n = 0, 1, 2, \ldots\,. \qquad (20.41)$$

Therefore

$$\int_a^b \sqrt{2m[E - V(x)]}\,dx = (n + \tfrac{1}{2})\pi\hbar\,, \quad n = 0, 1, 2, \ldots\,. \qquad (20.42)$$

This is the Bohr-Sommerfeld quantization rule; the integral on the left-hand side
is one half of the classical action.
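As a quick sanity check, the rule can be applied to a harmonic oscillator. The following is a minimal sketch (an assumed test case, not from the notes) in units with $m = \hbar = \omega = 1$, where the action integral between the turning points $\pm\sqrt{2E}$ evaluates to $\pi E$, so (20.42) predicts $E_n = n + \frac{1}{2}$ — which happens to be the exact spectrum for this potential:

```python
import math

# Sketch: check the Bohr-Sommerfeld rule (20.42) for V(x) = x^2/2 in
# units with m = hbar = omega = 1.  Between the turning points
# a = -sqrt(2E) and b = +sqrt(2E) the action integral equals pi*E,
# so the rule gives E_n = n + 1/2.

def action(E, steps=100_000):
    """Midpoint-rule approximation of the integral of sqrt(2[E - V(x)])."""
    b = math.sqrt(2 * E)
    h = 2 * b / steps
    total = 0.0
    for i in range(steps):
        x = -b + (i + 0.5) * h
        total += math.sqrt(max(2 * E - x * x, 0.0)) * h
    return total

for n in range(4):
    E = n + 0.5
    print(n, action(E), (n + 0.5) * math.pi)   # the two columns agree
```

The agreement is exact (up to quadrature error) only because the WKB approximation happens to give the exact harmonic-oscillator spectrum; for other potentials (20.42) is approximate.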
Problems

Problem 19.
An ideal gas in a box has internal energy $U(V, P) = \frac{3}{2}PV$ where P is the
pressure of the gas and V is the volume of the box. The first law of
thermodynamics for a quasistatic process is

$$\bar{d}Q = dU + P\,dV$$

where $\bar{d}Q$ is the heat flow to the system. Although the right-hand side is not an
exact differential, so there is no function Q(V, P) for the "heat of the system"
(hence we write $\bar{d}Q$ rather than $dQ$), the right-hand side can be integrated by
means of an integrating factor $\lambda$. That is, $d\sigma = \lambda\cdot(dU + P\,dV)$ is exact and can
be integrated. Determine $\lambda(V, P)$ and the integral $\sigma(V, P)$ in terms of the state
variables V and P. What is the physical significance of these quantities?

Problem 20.
Find the general solution of
a) $y' + y\cos x = \frac{1}{2}\sin 2x$;
b) $2x^3y' = 1 + \sqrt{1 + 4x^2y}$.

Problem 21.
Find the general solution of
a) $y''' - 2y'' - y' + 2y = \sin x$;
b) $a^2y''^2 = (1 + y'^2)^3$.


Problem 22.
An object is dropped (from rest) from some distance $r_0$ from the center of the
Earth and it accelerates according to Newton's law of gravity,

$$\ddot{r} = -\frac{GM_\oplus}{r^2}\,.$$

Determine $t(r)$ for the fall, where $t = 0$ when $r = r_0$. Find the number of days it
would take an object to fall to the surface of the Earth, $r = R_\oplus$, if it were
dropped from the distance of the Moon, $r_0 = 60R_\oplus$.
Use: $GM_\oplus = 398\,600\ \mathrm{km^3\,s^{-2}}$ and $R_\oplus = 6371\ \mathrm{km}$.

Problem 23.
Bessel's equation for $\nu = 0$ is

$$x^2y'' + xy' + x^2y = 0\,.$$

We have found one solution

$$J_0(x) = 1 - \frac{x^2}{4} + \frac{x^4}{64} - \cdots\,.$$

Show that a second solution exists of the form

$$J_0(x)\ln x + Ax^2 + Bx^4 + Cx^6 + \cdots$$

and find the first three coefficients A, B, and C.

Problem 24.
Consider the equation

$$\frac{d^2y}{dx^2} + \frac{2}{x}\frac{dy}{dx} + \left[-k^2 + \frac{2}{x} - \frac{\ell(\ell+1)}{x^2}\right]y = 0\,, \qquad 0 \le x \le \infty$$

where $\ell = 0, 1, 2, \ldots$. Find all values of the constant $k$ that can give a solution
that is finite on the entire range of $x$ (including $x = \infty$). An equation like this
arises in solving the Schrödinger equation for the hydrogen atom [here $r = a_0x$,
$R(r) = a_0^2\,y(x)$, and $E = -k^2(e^2/2a_0)$ with $a_0 = \hbar^2/(m_ee^2)$].
(Hint: Let $y = v/x$, then "factor out" the behavior at infinity.)

Problem 25.
For what values of the constant K does the differential equation
$$y'' - \left(\frac{1}{4} + \frac{K}{x}\right)y = 0 \qquad (0 < x < \infty)$$

have a nontrivial solution vanishing at $x = 0$ and $x = \infty$?

Problem 26.

Use the WKB method to find approximate negative values of the constant E for which the equation

$$\frac{d^2y}{dx^2} + [E - V(x)]\,y = 0$$

has a solution that is finite for all x between $x = -\infty$ and $x = +\infty$ inclusive.
[Figure: $V(x)$ is a well of depth $V_0$, i.e., $V(x) = -V_0$ for $-a < x < a$ and $V(x) = 0$ otherwise.]

Problem 27.
Recall Bessel’s equation is:

$$x^2y'' + xy' + (x^2 - \nu^2)y = 0\,.$$

The first-derivative term can be eliminated by making the substitution
$y(x) = u(x)\,x^{-1/2}$. Use the WKB method to get an approximate solution for $u(x)$
for large $x$ and thus obtain an approximate solution for $y(x)$ for $x \gg \nu$. You may
assume that $\nu \gg 1/2$ and don't worry about the overall constant. Your solution
should be the one that is finite at the origin.
Module VI

Eigenvalue Problems

21 General Discussion of Eigenvalue Problems 151

22 Sturm-Liouville Problems 153

23 Degeneracy and Completeness 172

24 Inhomogeneous Problems — Green Functions 175

Problems 180


Motivation
We’ve seen that when solutions to differential equations are required to satisfy
specific boundary conditions then there can be restrictions on the form of the
differential equation in order for it to admit such solutions. Here we will explore
such eigenvalue problems in more detail as they commonly arise in physics
problems.
We will start with some general properties of linear differential operators,
eigenvalues, and eigenfunctions. We then turn to a rather general class of
eigenvalue problems called Sturm-Liouville problems. Such equations occur
frequently in physics applications, and we will encounter several important
special functions such as Bessel functions, Legendre polynomials, and
spherical harmonics. We will examine the case of degenerate eigenvalues and
show how complete bases of eigenfunctions can be used to form eigenfunction
expansions of other functions. Finally we’ll look at inhomogeneous equations
and introduce the concept of a Green function.
21 General Discussion of Eigenvalue Problems

The eigenvalue problem is

$$\mathcal{L}\,u(x) = \lambda u(x) \qquad (21.1)$$

where $\mathcal{L}$ is a linear differential operator and $\lambda$ is an eigenvalue. The solution
$u(x)$ is called an eigenfunction of $\mathcal{L}$ belonging to $\lambda$.
In addition to the equation we also need to specify a domain $\Omega$ and boundary
conditions.
$\mathcal{L}$ is a Hermitian differential operator if

$$\int_\Omega u^*(x)\,\mathcal{L}\,v(x)\,dx = \left[\int_\Omega v^*(x)\,\mathcal{L}\,u(x)\,dx\right]^* \qquad (21.2)$$

where u(x) and v(x) are functions that obey the boundary conditions.
Suppose $\mathcal{L}$ is Hermitian. Then, if $u_i(x)$ and $u_j(x)$ are eigenfunctions belonging
to eigenvalues $\lambda_i$ and $\lambda_j$,

$$\mathcal{L}\,u_i(x) = \lambda_iu_i(x) \qquad\text{and}\qquad \mathcal{L}\,u_j(x) = \lambda_ju_j(x)\,. \qquad (21.3)$$

Because $\mathcal{L}$ is Hermitian,

$$\int_\Omega u_j^*(x)\,\mathcal{L}\,u_i(x)\,dx = \left[\int_\Omega u_i^*(x)\,\mathcal{L}\,u_j(x)\,dx\right]^* \qquad (21.4a)$$
$$= \left[\lambda_j\int_\Omega u_i^*(x)u_j(x)\,dx\right]^* \qquad (21.4b)$$
$$= \lambda_j^*\int_\Omega u_i(x)u_j^*(x)\,dx \qquad (21.4c)$$

but we also have

$$\int_\Omega u_j^*(x)\,\mathcal{L}\,u_i(x)\,dx = \lambda_i\int_\Omega u_j^*(x)u_i(x)\,dx \qquad (21.4d)$$

so therefore

$$(\lambda_i - \lambda_j^*)\int_\Omega u_j^*(x)u_i(x)\,dx = 0\,. \qquad (21.5)$$


• Case $i = j$: the eigenvalues of Hermitian operators are real, since $\lambda_i = \lambda_i^*$.

• Case $i \ne j$: the eigenfunctions of Hermitian operators are orthogonal if the
eigenvalues are different, where

$$(u, v) = \int_\Omega u^*(x)v(x)\,dx = 0 \qquad (21.6)$$

for functions $u(x)$ and $v(x)$ that are orthogonal.


Ex. 21.1. A familiar set of orthogonal functions are the trigonometric functions
associated with $\mathcal{L} = -\dfrac{d^2}{dx^2}$:

$$\frac{d^2}{dx^2}u(x) + \lambda u(x) = 0\,, \qquad 0 \le x \le 2\pi \qquad (21.7)$$

along with the periodic boundary conditions

$$u(0) = u(2\pi) \qquad\text{and}\qquad u'(0) = u'(2\pi)\,. \qquad (21.8)$$

The eigenvalues are $\lambda_n = n^2$ for integer $n$ and the eigenfunctions are $u_n(x) \propto e^{inx}$.
$\mathcal{L}$ is Hermitian with the periodic boundary conditions: if $u(x)$ and $v(x)$ are two functions
that satisfy the boundary conditions then

$$-\int_0^{2\pi} u^*(x)\frac{d^2}{dx^2}v(x)\,dx = -\left[u^*\frac{dv}{dx}\right]_0^{2\pi} + \int_0^{2\pi}\frac{du^*}{dx}\frac{dv}{dx}\,dx \qquad (21.9a)$$
$$= \left[\frac{du^*}{dx}v\right]_0^{2\pi} - \int_0^{2\pi} v(x)\frac{d^2}{dx^2}u^*(x)\,dx \qquad (21.9b)$$
$$= \left[-\int_0^{2\pi} v^*(x)\frac{d^2}{dx^2}u(x)\,dx\right]^* \qquad (21.9c)$$

where the boundary terms vanish by periodicity.
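A numerical check (sketch) that the complex exponentials $e^{inx}$, which satisfy the periodic boundary conditions on $[0, 2\pi]$, are mutually orthogonal; the midpoint rule is essentially exact here because the integrand is smooth and periodic:

```python
import cmath
import math

# Sketch: orthogonality of e^{inx} on [0, 2*pi] under the inner product
# (u_m, u_n) = integral of conj(u_m) * u_n.

def inner(m, n, steps=10_000):
    """(u_m, u_n) = integral_0^{2 pi} e^{-imx} e^{inx} dx, midpoint rule."""
    h = 2 * math.pi / steps
    return sum(cmath.exp(1j * (n - m) * (i + 0.5) * h) for i in range(steps)) * h

print(abs(inner(2, 3)))                 # ~ 0:  orthogonal
print(abs(inner(2, 2) - 2 * math.pi))   # ~ 0:  (u_n, u_n) = 2 pi
```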

More generally, eigenvalue problems can include a weight function $\rho(x)$ with
$\rho(x) \ge 0$ in the domain so that

$$\mathcal{L}\,u(x) = \lambda\rho(x)u(x) \qquad (21.10)$$

in which case the orthogonality condition will be

$$(u, v) = \int_\Omega u^*(x)v(x)\rho(x)\,dx = 0\,. \qquad (21.11)$$
22 Sturm-Liouville Problems

The Sturm-Liouville differential equation is

$$\frac{d}{dx}\left[p(x)\frac{d}{dx}u(x)\right] - q(x)u(x) + \lambda\rho(x)u(x) = 0 \qquad (22.1)$$

for $a \le x \le b$ with $u(a) = u(b) = 0$ (other boundary conditions are possible).
Here, $p(x)$, $q(x)$, and $\rho(x)$ are all real-valued and $\rho(x) \ge 0$ on the domain.
We can verify that

$$\mathcal{L} = -p(x)\frac{d^2}{dx^2} - p'(x)\frac{d}{dx} + q(x) \qquad (22.2)$$

is Hermitian and that the orthogonality of eigenfunctions $u_i$ and $u_j$, $\lambda_i \ne \lambda_j$, is

$$(u_i, u_j) = \int_\Omega u_i^*(x)u_j(x)\rho(x)\,dx = 0\,. \qquad (22.3)$$

The eigenvalues of a Sturm-Liouville problem can be arranged in order
$\lambda_0 \le \lambda_1 \le \lambda_2 \le \cdots$, where $\lambda_0$ is the smallest eigenvalue and $\lambda_n \to \infty$ as $n \to \infty$
for finite domain $\Omega$.
The eigenfunctions of a Sturm-Liouville problem form a complete set of
functions in the domain with the boundary conditions.


Some examples of Sturm-Liouville problems are:

• Legendre's equation

$$(1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)y = 0\,. \qquad (22.4)$$

Here $p(x) = 1 - x^2$, $q(x) = 0$, $\rho(x) = 1$, where $-1 \le x \le 1$, and $\lambda_n = n(n+1)$.

• Hermite's equation

$$\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + 2ny = 0\,. \qquad (22.5)$$

Here $p(x) = e^{-x^2}$, $q(x) = 0$, $\rho(x) = e^{-x^2}$, where $-\infty \le x \le \infty$, and $\lambda_n = 2n$.

• Bessel's equation

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (k^2x^2 - \nu^2)y = 0 \qquad (22.6)$$

(note: we have introduced the factor $k^2$ now). Here $p(x) = x$, $q(x) = \nu^2/x$,
$\rho(x) = x$, the domain is $0 \le x \le \infty$, and $\lambda = k^2$.

The eigenfunctions and orthogonality relations for these equations are:

• Legendre polynomials, $P_n(x)$:

$$\int_{-1}^1 P_n(x)P_m(x)\,dx = 0 \quad\text{for } n \ne m. \qquad (22.7)$$

• Hermite polynomials, $H_n(x)$:

$$\int_{-\infty}^\infty H_n(x)H_m(x)\,e^{-x^2}\,dx = 0 \quad\text{for } n \ne m. \qquad (22.8)$$

• Bessel functions, $J_\nu(kx)$:

$$\int_a^b J_\nu(Ax)J_\nu(Bx)\,x\,dx = 0 \qquad (22.9)$$

provided $J_\nu(Ax)$ and $J_\nu(Bx)$ vanish at $x = a$ and $x = b$ respectively,
or if $J_\nu'(Ax)$ and $J_\nu'(Bx)$ vanish at $x = a$ and $x = b$ respectively
(or various other similar conditions).

Independence of Solutions
Recall Bessel's equation with $k = 1$ has solutions $J_\nu(x)$ and $J_{-\nu}(x)$, and these are
independent unless $\nu$ is an integer.
In general, two solutions, $u$ and $v$, are said to be linearly dependent if there
are values $\alpha$ and $\beta$ ($\alpha \ne 0$, $\beta \ne 0$) such that

$$\alpha u + \beta v = 0\,. \qquad (22.10a)$$

Take a derivative:

$$\alpha u' + \beta v' = 0 \qquad (22.10b)$$

and multiply Eq. (22.10a) by $v'$ and subtract Eq. (22.10b) times $v$:

$$\alpha\,\underbrace{(uv' - u'v)}_{\text{must vanish}} = 0\,. \qquad (22.10c)$$

Define the Wronskian as

$$W[u(x), v(x)] = u(x)v'(x) - u'(x)v(x) \qquad (22.11)$$

or sometimes just write $W$. Thus, linear dependence requires $W = 0$.
Furthermore, if $W \ne 0$, the solutions are linearly independent.
Suppose that $u$ and $v$ are solutions to the Sturm-Liouville equation:

$$pu'' + p'u' - qu + \lambda\rho u = 0 \qquad (22.12a)$$
$$pv'' + p'v' - qv + \lambda\rho v = 0 \qquad (22.12b)$$

The Wronskian is:

$$W = uv' - vu' \qquad (22.13a)$$
$$\implies pW = puv' - pvu' \qquad (22.13b)$$
$$\implies (pW)' = u\cdot[pv'' + p'v'] + pu'v' - v\cdot[pu'' + p'u'] - pu'v' \qquad (22.13c)$$
$$= u\cdot[(q - \lambda\rho)v] - v\cdot[(q - \lambda\rho)u] \qquad (22.13d)$$
$$= 0 \qquad (22.13e)$$

and therefore

$$W[u(x), v(x)] = \frac{C}{p(x)} \qquad (22.14)$$

for solutions to a Sturm-Liouville equation, where $C$ is some constant (which
can be zero). Note: $C$ depends on $u$ and $v$, i.e., on the pair of solutions chosen.

Ex. 22.1. Bessel functions.

Solutions to Bessel's equation with conventional normalization are

$$J_\nu(x) = \frac{1}{\Gamma(\nu+1)}\left(\frac{x}{2}\right)^\nu\left[1 - \frac{1}{\nu+1}\left(\frac{x}{2}\right)^2 + \frac{1}{(\nu+1)(\nu+2)}\frac{1}{2!}\left(\frac{x}{2}\right)^4 - \cdots\right]$$
$$= \sum_{k=0}^\infty \frac{(-1)^k}{k!\,\Gamma(\nu+k+1)}\left(\frac{x}{2}\right)^{\nu+2k}\,. \qquad (22.15)$$

Consider $W[J_\nu, J_{-\nu}]$. We know it must have the form

$$W = \frac{C}{p(x)} = \frac{C}{x} \qquad (22.16)$$

and we want to determine $C$. Note that as $x \to 0$,

$$J_\nu(x) \underset{x\to 0}{\sim} \frac{1}{\Gamma(\nu+1)}\left(\frac{x}{2}\right)^\nu \qquad\text{and}\qquad J_{-\nu}(x) \underset{x\to 0}{\sim} \frac{1}{\Gamma(1-\nu)}\left(\frac{x}{2}\right)^{-\nu} \qquad (22.17)$$

so, for $x \to 0$, we have

$$W = J_\nu(x)J_{-\nu}'(x) - J_\nu'(x)J_{-\nu}(x) \qquad (22.18a)$$
$$\underset{x\to 0}{\sim} \frac{1}{\Gamma(\nu+1)\Gamma(1-\nu)}\left[\left(\frac{x}{2}\right)^\nu\left(-\frac{\nu}{2}\right)\left(\frac{x}{2}\right)^{-\nu-1} - \frac{\nu}{2}\left(\frac{x}{2}\right)^{\nu-1}\left(\frac{x}{2}\right)^{-\nu}\right] \qquad (22.18b)$$
$$= \frac{1}{\nu\,\Gamma(\nu)\Gamma(1-\nu)}\left(-\frac{\nu}{x} - \frac{\nu}{x}\right) \qquad (22.18c)$$
$$= -\frac{2\sin\pi\nu}{\pi x} \qquad (22.18d)$$

where we recall Euler's reflection formula $\Gamma(\nu)\Gamma(1-\nu) = \dfrac{\pi}{\sin\pi\nu}$.
Thus the constant is determined. Therefore

$$W[J_\nu(x), J_{-\nu}(x)] = -\frac{2\sin\pi\nu}{\pi x}\,. \qquad (22.19)$$

Note: when $\nu = n$ is an integer, $W = 0$, so $J_n(x)$ and $J_{-n}(x)$ are linearly dependent.
In fact, the normalization has been chosen so that $J_{-n}(x) = (-1)^nJ_n(x)$.
Conversely, when $\nu$ is not an integer, $W \ne 0$, so $J_\nu(x)$ and $J_{-\nu}(x)$ are linearly independent.
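This identity is easy to spot-check numerically. A sketch using the (truncated) series (22.15) and the standard recurrence $J_\nu' = J_{\nu-1} - (\nu/x)J_\nu$, so the only error is series truncation:

```python
import math

# Sketch: spot-check the Wronskian identity (22.19),
# W[J_nu, J_{-nu}](x) = -2 sin(pi*nu)/(pi*x),
# using the series (22.15) and J_nu' = J_{nu-1} - (nu/x) J_nu.

def J(nu, x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.gamma(nu + k + 1))
               * (x / 2)**(nu + 2 * k) for k in range(terms))

def Jp(nu, x):
    return J(nu - 1, x) - nu / x * J(nu, x)

nu, x = 0.3, 2.0
W = J(nu, x) * Jp(-nu, x) - Jp(nu, x) * J(-nu, x)
print(W, -2 * math.sin(math.pi * nu) / (math.pi * x))   # the two agree
```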
We seek a second, linearly independent solution when $\nu = n$ is an integer.
One method to get a second solution is to use the Wronskian. Let

$$W = W[J_n, y_n] = J_ny_n' - J_n'y_n = J_n^2\cdot\left(\frac{y_n}{J_n}\right)' \qquad (22.20)$$

where $W = C/x$ and $y_n$ is the second solution we seek. Therefore,

$$y_n(x) = CJ_n(x)\int^x \frac{dx'}{x'\,J_n^2(x')}\,. \qquad (22.21)$$

For example, for $\nu = 0$,

$$J_0(x) = 1 - \frac{x^2}{4} + \frac{x^4}{64} - \cdots \qquad\text{and}\qquad J_0^{-2}(x) = 1 + \frac{x^2}{2} + \frac{5}{32}x^4 + \cdots \qquad (22.22)$$

so

$$y_0(x) = CJ_0(x)\int \frac{1}{x}\left\{1 + \frac{x^2}{2} + \frac{5}{32}x^4 + \cdots\right\}dx\,. \qquad (22.23)$$

For $C = 1$ we have

$$y_0(x) = J_0(x)\left\{\ln x + \frac{x^2}{4} + \frac{5}{128}x^4 + \cdots\right\} \qquad (22.24a)$$
$$= J_0(x)\ln x + \frac{1}{4}x^2 - \frac{3}{128}x^4 + \cdots\,. \qquad (22.24b)$$

More conventionally, define the second solution to be

$$Y_n(x) = \lim_{\nu\to n} \frac{J_\nu(x)\cos\nu\pi - J_{-\nu}(x)}{\sin\nu\pi}\,. \qquad (22.25)$$

This is the Bessel function of the second kind.
It is straightforward to show that $W[J_\nu, Y_\nu] \ne 0$ even for integer $\nu$.
For integer $\nu = n$, both the numerator and denominator vanish as $\nu \to n$, so $Y_n$ must be
evaluated by l'Hôpital's rule. . . but this requires derivatives of $J_\nu$ with respect to $\nu$,
which is a nuisance since $\nu$ appears in the $\Gamma$ functions in the series. . . .
It is easiest just to look up $Y_n(x)$. The Bessel functions of the first and second kind are
shown in Fig. 22.1.

Figure 22.1: Bessel Functions of the First and Second Kind ($J_n(x)$ and $Y_n(x)$ for $n = 0, 1, 2$)



Generating Functions
Consider a function of two variables, $g(x, t)$. We can use it to generate a set of
functions $A_n(x)$ by expanding it in powers of $t$:

$$g(x, t) = \sum_n A_n(x)\,t^n\,. \qquad (22.26)$$

This is a Laurent series in $t$. We call $g(x, t)$ a generating function.
The following example illustrates the use of generating functions.
The following example illustrates the use of generating functions.
Ex. 22.2. Consider

$$g(x, t) = \exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right]\,. \qquad (22.27)$$

We can obtain $A_n(x)$ from the Laurent series via the contour integral

$$A_n(x) = \frac{1}{2\pi i}\oint_C \frac{g(x, t)}{t^{n+1}}\,dt \qquad (22.28)$$

where $C$ is a positively-oriented simple closed contour about the origin.
Let $t = e^{i\theta}$, $-\pi \le \theta \le \pi$:

$$A_n(x) = \frac{1}{2\pi i}\int_{-\pi}^\pi \frac{g(x, e^{i\theta})}{e^{i(n+1)\theta}}\,ie^{i\theta}\,d\theta \qquad (22.29a)$$
$$= \frac{1}{2\pi}\int_{-\pi}^\pi \frac{e^{ix\sin\theta}}{e^{in\theta}}\,d\theta \qquad (22.29b)$$
$$= \frac{1}{2\pi}\int_{-\pi}^\pi \Big[\cos(x\sin\theta - n\theta) + \underbrace{i\sin(x\sin\theta - n\theta)}_{\text{odd: integrates to }0}\Big]\,d\theta \qquad (22.29c)$$

so

$$A_n(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin\theta - n\theta)\,d\theta\,. \qquad (22.30)$$
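The integral representation (22.30) can be compared against the Bessel series for $J_n$ (anticipating the result derived below that $A_n = J_n$). A numerical sketch:

```python
import math

# Sketch: compare the integral representation (22.30) for A_n(x) with
# the Bessel series for J_n(x); the derivation below shows A_n = J_n.

def A(n, x, steps=4000):
    """(1/pi) * integral_0^pi cos(x sin(th) - n*th) d(th), midpoint rule."""
    h = math.pi / steps
    return sum(math.cos(x * math.sin((i + 0.5) * h) - n * (i + 0.5) * h)
               for i in range(steps)) * h / math.pi

def J(n, x, terms=30):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(n + 2 * k) for k in range(terms))

print(A(0, 1.0), J(0, 1.0))   # both ~ 0.76519769
print(A(3, 2.5), J(3, 2.5))
```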

Recurrence relations can be obtained by taking derivatives with respect to $x$ or $t$:

$$\frac{\partial g}{\partial t} = \frac{x}{2}\left(1 + \frac{1}{t^2}\right)\exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] \qquad (22.31a)$$
$$= \frac{x}{2}\left(1 + \frac{1}{t^2}\right)g(x, t) \qquad (22.31b)$$
$$= \frac{x}{2}\left(1 + \frac{1}{t^2}\right)\sum_{n=-\infty}^\infty A_n(x)t^n \qquad (22.31c)$$
$$= \frac{x}{2}\sum_{n=-\infty}^\infty \left[A_n(x)t^n + A_n(x)t^{n-2}\right] \qquad (22.31d)$$
$$= \frac{x}{2}\sum_{n=-\infty}^\infty \left[A_{n-1}(x) + A_{n+1}(x)\right]t^{n-1} \qquad (22.31e)$$

but

$$\frac{\partial g}{\partial t} = \frac{\partial}{\partial t}\sum_{n=-\infty}^\infty A_n(x)t^n = \sum_{n=-\infty}^\infty nA_n(x)t^{n-1} \qquad (22.31f)$$

so we find

$$A_{n-1}(x) + A_{n+1}(x) = \frac{2n}{x}A_n(x)\,. \qquad (22.32)$$

Now take a derivative with respect to $x$:

$$\frac{\partial g}{\partial x} = \frac{1}{2}\left(t - \frac{1}{t}\right)\exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] \qquad (22.33a)$$
$$= \frac{1}{2}\left(t - \frac{1}{t}\right)g(x, t) \qquad (22.33b)$$
$$= \frac{1}{2}\left(t - \frac{1}{t}\right)\sum_{n=-\infty}^\infty A_n(x)t^n \qquad (22.33c)$$
$$= \frac{1}{2}\sum_{n=-\infty}^\infty \left[A_n(x)t^{n+1} - A_n(x)t^{n-1}\right] \qquad (22.33d)$$
$$= \frac{1}{2}\sum_{n=-\infty}^\infty \left[A_{n-1}(x) - A_{n+1}(x)\right]t^n \qquad (22.33e)$$

but

$$\frac{\partial g}{\partial x} = \frac{\partial}{\partial x}\sum_{n=-\infty}^\infty A_n(x)t^n = \sum_{n=-\infty}^\infty A_n'(x)t^n \qquad (22.33f)$$

so we find

$$A_{n-1}(x) - A_{n+1}(x) = 2A_n'(x)\,. \qquad (22.34)$$



Adding and subtracting this recurrence relation to the previous yields

$$A_n'(x) = A_{n-1}(x) - \frac{n}{x}A_n(x) \qquad\text{and}\qquad A_n'(x) = \frac{n}{x}A_n(x) - A_{n+1}(x)\,. \qquad (22.35)$$

Manipulate these:

$$xA_n'(x) = xA_{n-1}(x) - nA_n(x) \qquad (22.36a)$$
$$[xA_n'(x)]' = A_{n-1}(x) + xA_{n-1}'(x) - nA_n'(x) \qquad (22.36b)$$
$$= A_{n-1}(x) + x\left[\frac{n-1}{x}A_{n-1}(x) - A_n(x)\right] - n\left[A_{n-1}(x) - \frac{n}{x}A_n(x)\right] \qquad (22.36c)$$
$$= A_{n-1}(x) + (n-1)A_{n-1}(x) - xA_n(x) - nA_{n-1}(x) + \frac{n^2}{x}A_n(x) \qquad (22.36d)$$
$$= -xA_n(x) + \frac{n^2}{x}A_n(x) \qquad (22.36e)$$

so we have

$$xA_n''(x) + A_n'(x) = -xA_n(x) + \frac{n^2}{x}A_n(x) \qquad (22.36f)$$

or

$$x^2A_n''(x) + xA_n'(x) + (x^2 - n^2)A_n(x) = 0\,. \qquad (22.37)$$

But this is Bessel's equation(!) so the $A_n(x)$ are Bessel functions.


Now expand $g(x, t)$ in a series in $t$ explicitly:

$$g(x, t) = \exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] \qquad (22.38a)$$
$$= \sum_{r=0}^\infty \frac{1}{r!}\left(\frac{x}{2}\right)^rt^r\;\sum_{s=0}^\infty \frac{1}{s!}(-1)^s\left(\frac{x}{2}\right)^st^{-s} \qquad (22.38b)$$
$$= \sum_{r=0}^\infty\sum_{s=0}^\infty (-1)^s\frac{1}{r!}\frac{1}{s!}\left(\frac{x}{2}\right)^{r+s}t^{r-s} \qquad (22.38c)$$
$$= \sum_{n=-\infty}^\infty \underbrace{\left[\sum_{s=0}^\infty (-1)^s\frac{1}{(s+n)!}\frac{1}{s!}\left(\frac{x}{2}\right)^{n+2s}\right]}_{\text{this is }A_n(x)}t^n \qquad (22.38d)$$

where we let $n = r - s$ and note that, since $r - s$ can take any value, we are summing over all possible $n$. Therefore

$$A_n(x) = \sum_{s=0}^\infty (-1)^s\frac{1}{s!\,(s+n)!}\left(\frac{x}{2}\right)^{n+2s}\,. \qquad (22.39)$$

Bessel Functions of Integer Order

In summary:
• Generating function

$$\exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] = \sum_{n=-\infty}^\infty J_n(x)\,t^n \qquad (22.40)$$

• Integral form

$$J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta \qquad (22.41)$$

• Recurrence relations

$$J_{n-1}(x) + J_{n+1}(x) = \frac{2n}{x}J_n(x) \qquad (22.42)$$
$$J_{n-1}(x) - J_{n+1}(x) = 2J_n'(x) \qquad (22.43)$$
$$J_n'(x) = J_{n-1}(x) - \frac{n}{x}J_n(x) \qquad (22.44)$$
$$J_n'(x) = \frac{n}{x}J_n(x) - J_{n+1}(x) \qquad (22.45)$$

• Series expansion

$$J_n(x) = \sum_{k=0}^\infty (-1)^k\frac{1}{k!\,(k+n)!}\left(\frac{x}{2}\right)^{n+2k} \qquad (22.46)$$

• Hankel functions

$$H_n^{(1)}(x) = J_n(x) + iY_n(x) \qquad (22.47)$$
$$H_n^{(2)}(x) = J_n(x) - iY_n(x) \qquad (22.48)$$
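The recurrences above can be spot-checked numerically from the series expansion. A sketch, with $J_n'$ approximated by a central difference so the checks hold only to finite-difference accuracy:

```python
import math

# Sketch: numerically spot-check the recurrences (22.42) and (22.43)
# using the series expansion (22.46).

def J(n, x, terms=30):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(n + 2 * k) for k in range(terms))

def Jp(n, x, h=1e-6):
    """Central-difference approximation to J_n'(x)."""
    return (J(n, x + h) - J(n, x - h)) / (2 * h)

x = 3.0
for n in (1, 2, 3):
    print(J(n - 1, x) + J(n + 1, x) - 2 * n / x * J(n, x))   # ~ 0  (22.42)
    print(J(n - 1, x) - J(n + 1, x) - 2 * Jp(n, x))          # ~ 0  (22.43)
```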

Bessel Functions of Half-Integer Order

Consider $J_\nu(x)$ with $\nu = 1/2$. The series solution is

$$J_{1/2}(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\,\Gamma(\frac{1}{2} + k + 1)}\left(\frac{x}{2}\right)^{1/2+2k}\,. \qquad (22.49)$$

Recall Legendre's duplication formula, $2^{2z-1}\Gamma(z)\Gamma(z + \frac{1}{2}) = \sqrt{\pi}\,\Gamma(2z)$,
and set $z = k + 1$:

$$k!\,\Gamma(\tfrac{1}{2} + k + 1) = \sqrt{\pi}\,\Gamma(2k+2)\,2^{1-2(k+1)} = \sqrt{\pi}\,(2k+1)!\,2^{-2k-1}\,. \qquad (22.50)$$

Thus

$$J_{1/2}(x) = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}\frac{2}{\sqrt{\pi}}\left(\frac{x}{2}\right)^{1/2}x^{2k} \qquad (22.51a)$$
$$= \left(\frac{2}{\pi x}\right)^{1/2}\underbrace{\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}x^{2k+1}}_{\sin x} \qquad (22.51b)$$

and therefore

$$J_{1/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\sin x\,. \qquad (22.52)$$

Similarly

$$J_{-1/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\cos x\,. \qquad (22.53)$$

Use the recurrence formulas to get

$$J_{3/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left(\frac{1}{x}\sin x - \cos x\right)\,, \qquad (22.54)$$
$$J_{-3/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left(-\frac{1}{x}\cos x - \sin x\right)\,, \qquad (22.55)$$

etc.
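A quick numerical check (sketch) of the closed forms (22.52)-(22.53) against the defining series, truncated at 40 terms:

```python
import math

# Sketch: check J_{1/2} = sqrt(2/(pi x)) sin x and
# J_{-1/2} = sqrt(2/(pi x)) cos x against the series (22.49).

def J(nu, x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.gamma(nu + k + 1))
               * (x / 2)**(nu + 2 * k) for k in range(terms))

x = 2.3
print(J(0.5, x),  math.sqrt(2 / (math.pi * x)) * math.sin(x))   # agree
print(J(-0.5, x), math.sqrt(2 / (math.pi * x)) * math.cos(x))   # agree
```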

Conventionally define the spherical Bessel functions

$$j_\ell(x) = \sqrt{\frac{\pi}{2x}}\,J_{\ell+1/2}(x) \qquad (22.56)$$

and

$$y_\ell(x) = \sqrt{\frac{\pi}{2x}}\,Y_{\ell+1/2}(x) = (-1)^{\ell+1}\sqrt{\frac{\pi}{2x}}\,J_{-\ell-1/2}(x)\,. \qquad (22.57)$$

The first few spherical Bessel functions are

$$j_0(x) = \frac{\sin x}{x}\,, \qquad j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}\,, \qquad\text{etc.} \qquad (22.58)$$
$$y_0(x) = -\frac{\cos x}{x}\,, \qquad y_1(x) = -\frac{\cos x}{x^2} - \frac{\sin x}{x}\,, \qquad\text{etc.} \qquad (22.59)$$

These functions are shown in Fig. 22.2.
In addition, the spherical Hankel functions are

$$h_\ell^{(1)}(x) = j_\ell(x) + iy_\ell(x) \qquad (22.60)$$
$$h_\ell^{(2)}(x) = j_\ell(x) - iy_\ell(x)\,. \qquad (22.61)$$

The spherical Bessel functions are solutions to the differential equation

$$x^2y''(x) + 2xy'(x) + [x^2 - \ell(\ell+1)]y(x) = 0\,. \qquad (22.62)$$

Figure 22.2: Spherical Bessel Functions ($j_0$, $j_1$, $y_0$, $y_1$)



Modified Bessel Functions

The modified Bessel functions of the first and second kind are defined by

$$I_n(z) = \frac{J_n(iz)}{i^n} \qquad (22.63)$$

and

$$K_n(z) = \frac{\pi}{2}\,i^{n+1}H_n^{(1)}(iz) \qquad (22.64)$$

respectively. They are shown in Fig. 22.3.
These functions are solutions to the modified Bessel equation

$$x^2y''(x) + xy'(x) - (x^2 + n^2)y(x) = 0\,, \qquad n \ge 0\,. \qquad (22.65)$$

Figure 22.3: Modified Bessel Functions of the First and Second Kind ($I_n(x)$ and $K_n(x)$ for $n = 0, 1, 2$)

Legendre Polynomials
The generating function for the Legendre polynomials is

$$g(x, t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^\infty P_n(x)\,t^n\,. \qquad (22.66)$$

Consider:

$$\frac{\partial g}{\partial t} = -\frac{1}{2}\frac{-2x + 2t}{(1 - 2xt + t^2)^{3/2}} = \frac{x - t}{1 - 2xt + t^2}\,g(x, t) \qquad (22.67a)$$
$$\implies (1 - 2xt + t^2)\frac{\partial g}{\partial t} = (x - t)\,g(x, t) \qquad (22.67b)$$
$$\implies (1 - 2xt + t^2)\sum_{n=0}^\infty nP_n(x)t^{n-1} = (x - t)\sum_{n=0}^\infty P_n(x)t^n \qquad (22.67c)$$
$$\implies \sum_{n=0}^\infty \left\{nP_n(x)t^{n-1} - 2xnP_n(x)t^n + nP_n(x)t^{n+1}\right\} = \sum_{n=0}^\infty \left\{xP_n(x)t^n - P_n(x)t^{n+1}\right\} \qquad (22.67d)$$
$$\implies \sum_{n=0}^\infty \left\{(n+1)P_{n+1}(x) - 2xnP_n(x) + (n-1)P_{n-1}(x)\right\}t^n = \sum_{n=0}^\infty \left\{xP_n(x) - P_{n-1}(x)\right\}t^n \qquad (22.67e)$$

and so we obtain the recurrence relation

$$(n+1)P_{n+1}(x) - (2n+1)xP_n(x) + nP_{n-1}(x) = 0\,. \qquad (22.68)$$

Now consider

$$\frac{\partial g}{\partial x} = \frac{t}{(1 - 2xt + t^2)^{3/2}} = \frac{t}{1 - 2xt + t^2}\,g(x, t) \qquad (22.69)$$

but

$$\frac{\partial g}{\partial x} = \sum_{n=0}^\infty P_n'(x)t^n \qquad (22.70)$$
$$\implies (1 - 2xt + t^2)\sum_{n=0}^\infty P_n'(x)t^n = t\sum_{n=0}^\infty P_n(x)t^n \qquad (22.71)$$

and so we obtain another recurrence relation

$$P_{n+1}'(x) + P_{n-1}'(x) = 2xP_n'(x) + P_n(x)\,. \qquad (22.72)$$

By combining these two recurrence relations we obtain

$$P_{n+1}'(x) - P_{n-1}'(x) = (2n+1)P_n(x) \qquad (22.73)$$
$$P_{n+1}'(x) = (n+1)P_n(x) + xP_n'(x) \qquad (22.74)$$
$$P_{n-1}'(x) = -nP_n(x) + xP_n'(x) \qquad (22.75)$$

and further manipulations yield

$$(1 - x^2)P_n''(x) - 2xP_n'(x) + n(n+1)P_n(x) = 0 \qquad (22.76)$$

which is Legendre's equation, so the $P_n(x)$ are indeed Legendre functions.

Orthonormalization
Consider

$$[g(x, t)]^2 = \frac{1}{1 - 2xt + t^2} = \left[\sum_{n=0}^\infty P_n(x)t^n\right]^2 = \sum_{m=0}^\infty\sum_{n=0}^\infty P_m(x)P_n(x)t^{m+n}\,. \qquad (22.77)$$

Now integrate both sides $\int_{-1}^1 dx$:

$$\sum_{m=0}^\infty\sum_{n=0}^\infty t^{m+n}\int_{-1}^1 P_m(x)P_n(x)\,dx = \int_{-1}^1 \frac{dx}{1 - 2xt + t^2} \qquad (22.78a)$$
$$= \frac{1}{2t}\int_{(1-t)^2}^{(1+t)^2} \frac{dy}{y} \qquad (22.78b)$$
$$= \frac{1}{t}\ln\left(\frac{1+t}{1-t}\right) \qquad (22.78c)$$
$$= 2\sum_{n=0}^\infty \frac{t^{2n}}{2n+1} \qquad (22.78d)$$

where we substituted $y = 1 - 2xt + t^2$, $dx = -dy/2t$, and recalled
$\frac{1}{2}\ln\frac{1+x}{1-x} = x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots$. So

$$\sum_{n=0}^\infty \frac{2t^{2n}}{2n+1} = \sum_{n=0}^\infty\sum_{m=0}^\infty t^{m+n}\underbrace{\int_{-1}^1 P_m(x)P_n(x)\,dx}_{\text{must be }\propto\,\delta_{mn}}\,. \qquad (22.78e)$$

Therefore

$$\int_{-1}^1 P_n(x)P_m(x)\,dx = \frac{2}{2n+1}\delta_{mn}\,. \qquad (22.79)$$
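The orthogonality relation (22.79) is easy to confirm numerically, building $P_n$ from the recurrence (22.68). A sketch:

```python
import math

# Sketch: numerical check of integral_{-1}^{1} P_n P_m dx = 2/(2n+1) delta_nm,
# with P_n generated by the recurrence (22.68).

def P(n, x):
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def overlap(n, m, steps=20_000):
    """Midpoint-rule approximation of the overlap integral on [-1, 1]."""
    h = 2.0 / steps
    return sum(P(n, -1 + (i + 0.5) * h) * P(m, -1 + (i + 0.5) * h)
               for i in range(steps)) * h

print(overlap(2, 3))           # ~ 0
print(overlap(3, 3), 2 / 7)    # ~ 2/(2n+1) for n = 3
```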

Special values
Let $x = 1$:

$$g(1, t) = \frac{1}{\sqrt{1 - 2t + t^2}} = \frac{1}{1 - t} = \sum_{n=0}^\infty t^n \qquad (22.80a)$$

but

$$g(1, t) = \sum_{n=0}^\infty P_n(1)t^n \qquad (22.80b)$$

and therefore

$$P_n(1) = 1 \qquad (22.81)$$

(this is the conventional normalization for Legendre polynomials). Similarly,

$$P_n(-1) = (-1)^n\,. \qquad (22.82)$$

Let $x = 0$ and use the binomial series

$$g(0, t) = \frac{1}{\sqrt{1 + t^2}} = 1 - \frac{1}{2}t^2 + \frac{1}{2}\frac{3}{2}\frac{t^4}{2!} - \frac{1}{2}\frac{3}{2}\frac{5}{2}\frac{t^6}{3!} + \cdots \qquad (22.83a)$$
$$= \sum_{n=0}^\infty (-1)^n\frac{(2n-1)!!}{(2n)!!}t^{2n} \qquad (22.83b)$$

so we find

$$P_{2n}(0) = (-1)^n\frac{(2n-1)!!}{(2n)!!} \qquad\text{and}\qquad P_{2n+1}(0) = 0\,. \qquad (22.84)$$

Finally, note that $g(-x, -t) = g(x, t)$, which yields

$$P_n(-x) = (-1)^nP_n(x)\,. \qquad (22.85)$$



Useful identity
Let $x = \cos\theta$ and $t = r'/r$ in the generating function, with $r' < r$. Then:

$$g(\cos\theta, r'/r) = \sum_{\ell=0}^\infty P_\ell(\cos\theta)\left(\frac{r'}{r}\right)^\ell = \frac{1}{\sqrt{1 - 2(r'/r)\cos\theta + (r'/r)^2}} \qquad (22.86a)$$
$$= \frac{r}{\sqrt{r^2 + r'^2 - 2rr'\cos\theta}} \qquad (22.86b)$$
$$= \frac{r}{\|\mathbf{x} - \mathbf{x}'\|} \qquad (22.86c)$$

where $\mathbf{x}$ and $\mathbf{x}'$ are two vectors with $r = \|\mathbf{x}\|$, $r' = \|\mathbf{x}'\|$, and $\mathbf{x}\cdot\mathbf{x}' = rr'\cos\theta$. Thus

$$\frac{1}{\|\mathbf{x} - \mathbf{x}'\|} = \sum_{\ell=0}^\infty \frac{(r')^\ell}{r^{\ell+1}}P_\ell(\cos\theta)\,, \qquad r' < r\,. \qquad (22.87)$$

If $r' > r$, exchange $r'$ and $r$ or else the series will not converge. Therefore

$$\frac{1}{\|\mathbf{x} - \mathbf{x}'\|} = \sum_{\ell=0}^\infty \frac{r_<^\ell}{r_>^{\ell+1}}P_\ell(\cos\theta) \qquad (22.88)$$

where $r_< = \min(r', r)$ and $r_> = \max(r', r)$.
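The expansion converges quickly when $r_</r_>$ is small, which is easy to confirm numerically. A sketch for one made-up configuration, with $P_\ell$ built from the recurrence (22.68):

```python
import math

# Sketch: numerical check of the multipole expansion (22.88).

def P(l, x):
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for k in range(1, l):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

r, rp, theta = 2.0, 0.7, 0.9    # r = |x|, rp = |x'| < r, angle between them
exact = 1 / math.sqrt(r**2 + rp**2 - 2 * r * rp * math.cos(theta))
series = sum(rp**l / r**(l + 1) * P(l, math.cos(theta)) for l in range(40))
print(series, exact)            # agree; terms fall off like (rp/r)^l
```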



Second solution
The Wronskian can be used to find the second independent solution $Q_n(x)$:

$$W[P_n, Q_n] = P_nQ_n' - P_n'Q_n = P_n^2\left(\frac{Q_n}{P_n}\right)' \qquad (22.89a)$$

but

$$W[P_n, Q_n] \propto \frac{1}{1 - x^2} \qquad (22.89b)$$

so

$$Q_n(x) = P_n(x)\int \frac{dx}{(1 - x^2)[P_n(x)]^2} \qquad (22.90)$$

(with the conventional choice of normalization).
Explicitly:
• For $n = 0$, $P_0(x) = 1$ and

$$Q_0(x) = \int \frac{dx}{1 - x^2} = \frac{1}{2}\ln\left(\frac{1+x}{1-x}\right)\,. \qquad (22.91)$$

• For $n = 1$, $P_1(x) = x$ and

$$Q_1(x) = x\int \frac{dx}{x^2(1 - x^2)} = x\left[-\frac{1}{x} + \frac{1}{2}\ln\left(\frac{1+x}{1-x}\right)\right] = \frac{x}{2}\ln\left(\frac{1+x}{1-x}\right) - 1\,. \qquad (22.92)$$

Associated Legendre Differential Equation

The associated Legendre differential equation is

$$(1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + \left[n(n+1) - \frac{m^2}{1 - x^2}\right]y = 0\,. \qquad (22.93)$$

Note that this reduces to the Legendre equation when $m = 0$.
Non-singular solutions in the domain $-1 \le x \le 1$ exist only when $n$ and $m$ are
integers with $0 \le |m| \le n$. If $P_n(x)$ is a solution to Legendre's equation, then

$$P_n^m(x) = (-1)^m(1 - x^2)^{m/2}\frac{d^m}{dx^m}P_n(x) \qquad (22.94)$$

is a solution to the associated Legendre equation when $m$ is a positive integer.
These are called the associated Legendre functions.
For $m < 0$ and $m$ an integer, use

$$P_n^{-m}(x) = (-1)^m\frac{(n-m)!}{(n+m)!}P_n^m(x)\,. \qquad (22.95)$$

The first few associated Legendre functions are (recall $P_n^0(x) = P_n(x)$):

$$P_1^1(x) = -\sqrt{1 - x^2}\,, \qquad P_2^1(x) = -3x\sqrt{1 - x^2}\,, \qquad P_2^2(x) = 3(1 - x^2)\,. \qquad (22.96)$$

These are shown in Fig. 22.4.

Figure 22.4: Associated Legendre Functions ($P_1^1$, $P_2^1$, $P_2^2$)

Spherical Harmonics
The spherical harmonics are defined as

$$Y_\ell^m(\theta, \phi) = \sqrt{\frac{2\ell+1}{4\pi}\frac{(\ell-m)!}{(\ell+m)!}}\,P_\ell^m(\cos\theta)\,e^{im\phi}\,. \qquad (22.97)$$

The first few spherical harmonics are

$$Y_0^0(\theta, \phi) = \frac{1}{\sqrt{4\pi}} \qquad (22.98)$$
$$Y_1^0(\theta, \phi) = \frac{1}{2}\sqrt{\frac{3}{\pi}}\cos\theta \qquad (22.99)$$
$$Y_1^1(\theta, \phi) = -\frac{1}{2}\sqrt{\frac{3}{2\pi}}\sin\theta\,e^{i\phi} \qquad (22.100)$$
$$Y_2^0(\theta, \phi) = \frac{1}{4}\sqrt{\frac{5}{\pi}}(3\cos^2\theta - 1) \qquad (22.101)$$
$$Y_2^1(\theta, \phi) = -\frac{1}{2}\sqrt{\frac{15}{2\pi}}\sin\theta\cos\theta\,e^{i\phi} \qquad (22.102)$$
$$Y_2^2(\theta, \phi) = \frac{1}{4}\sqrt{\frac{15}{2\pi}}\sin^2\theta\,e^{2i\phi} \qquad (22.103)$$

and, for negative integer $m$, use

$$Y_\ell^{-m}(\theta, \phi) = (-1)^m[Y_\ell^m(\theta, \phi)]^*\,. \qquad (22.104)$$

A few useful identities are:

$$Y_\ell^{-\ell}(\theta, \phi) = \frac{1}{2^\ell\,\ell!}\sqrt{\frac{(2\ell+1)!}{4\pi}}\sin^\ell\theta\,e^{-i\ell\phi} \qquad (22.105)$$
$$Y_\ell^0(\theta, \phi) = \sqrt{\frac{2\ell+1}{4\pi}}P_\ell(\cos\theta) \qquad (22.106)$$
$$\sum_{m=-\ell}^\ell |Y_\ell^m(\theta, \phi)|^2 = \frac{2\ell+1}{4\pi} \qquad (22.107)$$
$$\int_{\phi=0}^{2\pi}\int_{\theta=0}^\pi Y_\ell^m(\theta, \phi)[Y_{\ell'}^{m'}(\theta, \phi)]^*\sin\theta\,d\theta\,d\phi = \delta_{\ell\ell'}\delta_{mm'}\,. \qquad (22.108)$$
23 Degeneracy and Completeness

When two or more eigenvalues are the same they are called degenerate.
A linear combination of eigenfunctions belonging to a degenerate set is again
an eigenfunction with the same eigenvalue.
Construct an orthogonal set of eigenfunctions by the Gram-Schmidt
procedure demonstrated in the next example:
Ex. 23.1. Suppose $u$, $v$, and $w$ all belong to eigenvalue $\lambda$.

• Take $u_1 = u$.
• Let $u_2 = v + \alpha u_1$ and choose $\alpha$ so

$$0 = \int u_1^*(x)u_2(x)\rho(x)\,dx = \int u^*(x)v(x)\rho(x)\,dx + \alpha\int u^*(x)u(x)\rho(x)\,dx \qquad (23.1)$$
$$\implies \alpha = -\frac{\int u^*(x)v(x)\rho(x)\,dx}{\int u^*(x)u(x)\rho(x)\,dx}\,. \qquad (23.2)$$

• Let $u_3 = w + \beta u_1 + \gamma u_2$. Choose $\beta$ so that

$$0 = \int u_1^*(x)u_3(x)\rho(x)\,dx = \int u_1^*(x)w(x)\rho(x)\,dx + \beta\int u_1^*(x)u_1(x)\rho(x)\,dx \qquad (23.3)$$
$$\implies \beta = -\frac{\int u_1^*(x)w(x)\rho(x)\,dx}{\int u_1^*(x)u_1(x)\rho(x)\,dx}\,. \qquad (23.4)$$

Similarly choose $\gamma$ so that

$$0 = \int u_2^*(x)u_3(x)\rho(x)\,dx = \int u_2^*(x)w(x)\rho(x)\,dx + \gamma\int u_2^*(x)u_2(x)\rho(x)\,dx \qquad (23.5)$$
$$\implies \gamma = -\frac{\int u_2^*(x)w(x)\rho(x)\,dx}{\int u_2^*(x)u_2(x)\rho(x)\,dx}\,. \qquad (23.6)$$

We now have $u_1$, $u_2$, and $u_3$, which are orthogonal eigenfunctions.
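The procedure is easy to carry out concretely. A sketch for the monomials $1$, $x$, $x^2$ on $-1 \le x \le 1$ with weight $\rho = 1$, which reproduces multiples of the Legendre polynomials (cf. Problem 31); the integrals are done numerically:

```python
# Sketch: Gram-Schmidt on 1, x, x^2 over [-1, 1] with weight rho = 1,
# following the steps of Ex. 23.1.

def inner(f, g, steps=20_000):
    """Midpoint-rule approximation of integral_{-1}^{1} f(x) g(x) dx."""
    h = 2.0 / steps
    return sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h)
               for i in range(steps)) * h

u1 = lambda x: 1.0
v  = lambda x: x
w  = lambda x: x * x

a  = -inner(u1, v) / inner(u1, u1)
u2 = lambda x: v(x) + a * u1(x)                      # -> x

b  = -inner(u1, w) / inner(u1, u1)
c  = -inner(u2, w) / inner(u2, u2)
u3 = lambda x: w(x) + b * u1(x) + c * u2(x)          # -> x^2 - 1/3

print(inner(u1, u2), inner(u1, u3), inner(u2, u3))   # all ~ 0
print(u3(1.0))                                       # ~ 2/3: u3 = (2/3) P_2
```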


Therefore, even when there is a degenerate set, it is possible to get a complete
set of orthogonal eigenfunctions:

$$(u_i, u_j) = \int_\Omega u_i^*(x)u_j(x)\rho(x)\,dx = \delta_{ij}\,. \qquad (23.7)$$

(Here we've assumed that the eigenfunctions are actually orthonormal.)
Functions over the domain $\Omega$ having the required boundary conditions can be
expanded in terms of this complete orthonormal set:

$$f(x) = \sum_n c_nu_n(x) \qquad\text{with}\qquad c_n = \int_\Omega u_n^*(x)f(x)\rho(x)\,dx\,. \qquad (23.8)$$

Substitute the expression for $c_n$ into the expansion:

$$f(x) = \sum_n u_n(x)\int_\Omega u_n^*(x')f(x')\rho(x')\,dx' \qquad (23.9a)$$
$$= \int_\Omega f(x')\underbrace{\left[\rho(x')\sum_n u_n(x)u_n^*(x')\right]}_{\text{must be }\delta(x - x')}\,dx' \qquad (23.9b)$$

and therefore we have the completeness relation

$$\rho(x')\sum_n u_n(x)u_n^*(x') = \delta(x - x')\,. \qquad (23.10)$$

Ex. 23.2. Fourier series, as we've seen in §13.

Ex. 23.3. Legendre polynomials: $\rho(x) = 1$.

• $f(x) = A_0P_0(x) + A_1P_1(x) + A_2P_2(x) + \cdots$ for $-1 \le x \le 1$ (23.11)

• $A_k = \dfrac{2k+1}{2}\displaystyle\int_{-1}^1 f(x)P_k(x)\,dx$ (23.12)

where the normalization comes from $\displaystyle\int_{-1}^1 [P_k(x)]^2\,dx = \dfrac{2}{2k+1}$

• $\displaystyle\sum_{n=0}^\infty \frac{2n+1}{2}P_n(x)P_n(x') = \delta(x - x')$ (23.13)

Ex. 23.4. Spherical harmonics:

• $f(\theta, \phi) = \displaystyle\sum_{\ell=0}^\infty\sum_{m=-\ell}^\ell c_{\ell m}Y_\ell^m(\theta, \phi)$ (23.14)

• $c_{\ell m} = \displaystyle\int_{\phi=0}^{2\pi}\int_{\theta=0}^\pi f(\theta, \phi)[Y_\ell^m(\theta, \phi)]^*\sin\theta\,d\theta\,d\phi$ (23.15)

• $\displaystyle\sum_{\ell=0}^\infty\sum_{m=-\ell}^\ell Y_\ell^m(\theta, \phi)[Y_\ell^m(\theta', \phi')]^* = \frac{1}{\sin\theta}\delta(\theta - \theta')\delta(\phi - \phi')$ (23.16)
24 Inhomogeneous Problems — Green Functions

Consider the inhomogeneous problem (take $\rho = 1$ for simplicity)

$$\mathcal{L}\,u(x) - \lambda u(x) = f(x) \qquad (24.1)$$

where $f(x)$ is a source, and seek a solution via eigenfunction expansion:

$$u(x) = \sum_n c_nu_n(x) \qquad\text{and}\qquad f(x) = \sum_n d_nu_n(x)\,. \qquad (24.2)$$

Then we have

$$\sum_n c_n(\lambda_n - \lambda)u_n(x) = \sum_n d_nu_n(x) \qquad (24.3a)$$

and since the eigenfunctions are linearly independent

$$c_n = \frac{d_n}{\lambda_n - \lambda} = \frac{(u_n, f)}{\lambda_n - \lambda}\,. \qquad (24.3b)$$

Therefore

$$u(x) = \sum_n \frac{u_n(x)}{\lambda_n - \lambda}\int_\Omega u_n^*(x')f(x')\,dx' \qquad (24.4a)$$
$$= \int_\Omega G(x, x')f(x')\,dx' \qquad (24.4b)$$

where

$$G(x, x') = \sum_n \frac{u_n(x)u_n^*(x')}{\lambda_n - \lambda} \qquad (24.4c)$$

is known as a Green function. It depends on the linear operator $\mathcal{L}$, the value $\lambda$,
the domain $\Omega$, and the boundary conditions.


Note that if $f(x) = \delta(x - x_0)$ where $x_0$ is in the domain then

$$u(x) = \int_\Omega G(x, x')\delta(x' - x_0)\,dx' \qquad (24.5a)$$
$$= G(x, x_0) \qquad (24.5b)$$

thus we have

$$\mathcal{L}\,G(x, x_0) - \lambda G(x, x_0) = \delta(x - x_0) \qquad (24.6)$$

(note that the differential operator $\mathcal{L}$ acts on the $x$ variable, not on $x_0$).
This is the differential equation for the Green function. Appropriate boundary
conditions are still required (and different boundary conditions result in
different Green functions).
Therefore, Green functions are solutions to the inhomogeneous problems with
unit point sources.
The solution for more general source distributions is obtained by linear
superposition of the solutions for many point sources, as seen in Eq. (24.4b).

Ex. 24.1. A string of length $\ell$ vibrating with angular frequency $\omega$ with fixed ends is
described by

$$\frac{d^2u}{dx^2} + k^2u = 0\,, \qquad \underbrace{u(0) = u(\ell) = 0}_{\text{fixed-ends boundary condition}} \qquad (24.7)$$

where $u(x)$ is the transverse displacement of the string from its equilibrium.
Here $k = \omega/c$ where $c$ is the speed of sound in the string.
Find the Green function for this differential equation and boundary conditions.

• Method 1.
Let $k^2 = -\lambda$ and solve the eigenvalue problem

$$\frac{d^2u}{dx^2} = \lambda u\,, \qquad u(0) = u(\ell) = 0\,. \qquad (24.8)$$

The eigenvalues are

$$\lambda_n = -\left(\frac{n\pi}{\ell}\right)^2\,, \qquad n = 1, 2, 3, \ldots \qquad (24.9)$$

and the normalized eigenfunctions are

$$u_n(x) = \sqrt{\frac{2}{\ell}}\sin\left(\frac{n\pi x}{\ell}\right)\,, \qquad n = 1, 2, 3, \ldots\,. \qquad (24.10)$$

Therefore

$$G(x, x') = \sum_n \frac{u_n(x)u_n^*(x')}{\lambda_n - \lambda} \qquad (24.11a)$$
$$= \frac{2}{\ell}\sum_{n=1}^\infty \frac{\sin(n\pi x/\ell)\sin(n\pi x'/\ell)}{k^2 - (n\pi/\ell)^2} \qquad (24.11b)$$

Note: when the string vibrates at an eigenfrequency, the Green function becomes
infinite.
Note also: $G(x, x') = G^*(x', x)$, so for real-valued Green functions

$$G(x, x') = G(x', x)\,. \qquad (24.12)$$

This is a reciprocity relation: the response at position $x$ to a disturbance at position
$x'$ is equal to the response at position $x'$ to a disturbance at position $x$.

• Method 2.
Solve

$$\frac{d^2G(x, x')}{dx^2} + k^2G(x, x') = \delta(x - x')\,, \qquad G(0, x') = G(\ell, x') = 0\,. \qquad (24.13)$$

Note: for $x \ne x'$, $\dfrac{d^2G}{dx^2} + k^2G = 0$, so

$$G(x, x') = \begin{cases} a\sin kx & x < x' \\ b\sin k(x - \ell) & x > x' \end{cases} \qquad (24.14)$$

where $a$ and $b$ are constants. This satisfies the boundary conditions at $x = 0$ and $x = \ell$
and the homogeneous equation for $x \ne x'$.
We need to match these two solutions at $x = x'$ to determine $a$ and $b$.
Integrate the differential equation over $x$ from $x' - \epsilon$ to $x' + \epsilon$:

$$\underbrace{\int_{x'-\epsilon}^{x'+\epsilon} \frac{d^2G(x, x')}{dx^2}\,dx}_{\to\ \text{jump in }dG/dx\ \text{as}\ \epsilon\to 0} + k^2\underbrace{\int_{x'-\epsilon}^{x'+\epsilon} G(x, x')\,dx}_{\text{vanishes as}\ \epsilon\to 0} = \underbrace{\int_{x'-\epsilon}^{x'+\epsilon} \delta(x - x')\,dx}_{1} \qquad (24.15)$$

so we have

$$\lim_{\epsilon\to 0}\left[\frac{dG(x, x')}{dx}\bigg|_{x=x'+\epsilon} - \frac{dG(x, x')}{dx}\bigg|_{x=x'-\epsilon}\right] = 1 \qquad (24.16)$$

i.e., the derivative of $G$ is discontinuous at $x'$ and jumps by 1.
Integrate again:

$$\lim_{\epsilon\to 0}\,G(x, x')\Big|_{x=x'-\epsilon}^{x=x'+\epsilon} = 0 \qquad (24.17)$$

i.e., $G$ is continuous at $x'$.
Matching the two solutions at $x = x'$ then yields

$$\text{continuous:}\qquad a\sin kx' = b\sin k(x' - \ell) \qquad (24.18a)$$
$$\text{unit jump in derivative:}\qquad ka\cos kx' + 1 = kb\cos k(x' - \ell) \qquad (24.18b)$$

and we find

$$a = \frac{\sin k(x' - \ell)}{k\sin k\ell} \qquad\text{and}\qquad b = \frac{\sin kx'}{k\sin k\ell}\,. \qquad (24.19)$$

Therefore

$$G(x, x') = \frac{1}{k\sin k\ell}\begin{cases} \sin kx\,\sin k(x' - \ell) & 0 \le x < x' \\ \sin kx'\,\sin k(x - \ell) & x' < x \le \ell\,. \end{cases} \qquad (24.20)$$

General method
Consider the linear operator

$$\mathcal{L} = p(x)\frac{d^2}{dx^2} + p'(x)\frac{d}{dx} + q(x) \qquad (24.21)$$

and the inhomogeneous equation

$$\mathcal{L}\,y(x) - f(x) = 0 \qquad\text{with}\qquad y(a) = y(b) = 0\,, \qquad a \le x \le b\,. \qquad (24.22)$$

Let $u(x)$ be a solution of $\mathcal{L}u = 0$ with $u(a) = 0$.
Let $v(x)$ be a solution of $\mathcal{L}v = 0$ with $v(b) = 0$.
Let

$$G(x, x') = \begin{cases} Au(x) & a \le x < x' \\ Bv(x) & x' < x \le b \end{cases} \qquad (24.23)$$

and enforce

$$\lim_{\epsilon\to 0}\left[G\big|_{x=x'-\epsilon} - G\big|_{x=x'+\epsilon}\right] = 0 \qquad (24.24a)$$

and

$$\lim_{\epsilon\to 0}\left[\frac{dG}{dx}\bigg|_{x=x'-\epsilon} - \frac{dG}{dx}\bigg|_{x=x'+\epsilon}\right] = -\frac{1}{p(x')}\,. \qquad (24.24b)$$

This determines

$$A = \frac{v(x')}{C} \qquad\text{and}\qquad B = \frac{u(x')}{C} \qquad (24.25a)$$

where

$$W[u(x'), v(x')] = \frac{C}{p(x')}\,. \qquad (24.25b)$$

Therefore

$$G(x, x') = \frac{1}{C}\begin{cases} u(x)v(x') & a \le x < x' \\ u(x')v(x) & x' < x \le b \end{cases} \qquad (24.26a)$$

with

$$C = p(x')[u(x')v'(x') - u'(x')v(x')]\,. \qquad (24.26b)$$

Then

$$y(x) = \int_a^b G(x, x')f(x')\,dx'\,. \qquad (24.27)$$
Problems

Problem 28.
The Sturm-Liouville differential equation is

$$\mathcal{L}\,u(x) + \lambda\rho(x)u(x) = 0$$

where

$$\mathcal{L} = p(x)\frac{d^2}{dx^2} + p'(x)\frac{d}{dx} - q(x)\,.$$

Show that $\mathcal{L}$ is Hermitian when the domain is chosen to be $a \le x \le b$ and the
boundary conditions are taken to be $u(a) = u(b) = 0$. Show that orthogonality
now means:

$$0 = (u, v) = \int_a^b u^*(x)v(x)\rho(x)\,dx\,.$$


The next two problems refer to Hermite's differential equation

$$y'' - 2xy' + 2ny = 0\,, \qquad -\infty < x < \infty\,.$$

The Hermite polynomials are solutions that can be obtained from the
generating function

$$g(x, t) = e^{2xt - t^2} = \sum_{n=0}^\infty \frac{H_n(x)t^n}{n!}\,.$$

Problem 29.
a) Use the generating function to prove the following identities:

$$H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x)\,,$$
$$H_n'(x) = 2nH_{n-1}(x)\,,$$
$$H_{2n}(0) = (-1)^n\frac{(2n)!}{n!}\,,$$
$$H_{2n+1}(0) = 0\,,$$

and

$$H_n(x) = (-1)^nH_n(-x)\,.$$

b) Using the identities proven in part (a), show that $H_n(x)$ is a solution to
Hermite's equation.
c) From the generating function, show that

$$H_n(x) = \sum_{s=0}^{\lfloor n/2\rfloor} (-1)^s\frac{n!}{(n-2s)!\,s!}(2x)^{n-2s}$$

where $\lfloor n/2\rfloor$ means the greatest integer less than or equal to $n/2$.

Problem 30.
a) Prove Rodrigues's formula:

    Hn(x) = (−1)ⁿ e^{x²} (dⁿ/dxⁿ) e^{−x²} .

b) By integrating the product

    e^{−x²} g(x, s) g(x, t)

over all x, show that

    ∫_{−∞}^{∞} e^{−x²} Hm(x) Hn(x) dx = 2ⁿ n! √π δmn .

Problem 31. Use Gram-Schmidt orthogonalization of the set of polynomials 1,
x, x², x³, . . . on the interval −1 ≤ x ≤ 1 to generate the orthogonal Legendre
polynomials P0(x), P1(x), P2(x), and P3(x). Note that Legendre polynomials are
normalized so that Pn(1) = 1.

Problem 32.
Consider the differential equation

    [ d²/dr² + (1/r) d/dr − n²/r² ] y(r) = 0 ,   0 < r < ∞

where n = 1, 2, 3, . . .. Find two independent solutions, one which vanishes as
r → 0, the other of which vanishes as r → ∞. (Hint: let x = ln r.)
Problem 33.
Given the result of problem 32, find the solution to the differential equation

    [ d²/dr² + (1/r) d/dr − n²/r² ] G(r, r′) = δ(r − r′)/r ,   0 < r < ∞

with the boundary conditions that the solution vanishes as r → 0 and r → ∞.


Module VII

Matrices and Vectors

25 Linear Algebra 185

26 Vector Spaces 190

27 Vector Calculus 203

28 Curvilinear Coordinates 220

Problems 228


Motivation
We now move our general discussion beyond one dimension.
We first address the solution of linear systems of equations, introducing
matrices and reviewing some of their properties. Next we discuss vector
spaces and linear operators, and we re-encounter the eigenvalue problems
that arise in quantum and classical mechanics. Then we review vector
calculus and the differential operators used to formulate fundamental
physical laws, e.g., electrodynamics. In the last section we provide formulae
for these differential operators in the cylindrical and spherical coordinate
systems that are commonly used to simplify problems.
25 Linear Algebra

A linear system of equations is a system of equations of the form

    a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2
                        ⋮                                                     (25.1)
    am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm

where ai j and b i , i = 1 . . . m, j = 1 . . . n are constants and x j , j = 1 . . . n are


unknowns. This is a system having m equations and n unknowns.
• If the number of equations is fewer than the number of unknowns, m < n,
then the system is underdetermined and in general has an infinite number
of solutions.
• If the number of equations is greater than the number of unknowns, m > n,
then the system is overdetermined, and generally has no solution.
• In general, there is a unique solution when the number of equations equals
the number of unknowns, m = n.
The use of the term “in general” above means that there are exceptions for
certain values of the coefficients. For example, if two equations are the same
up to an overall factor, e.g.,

2x + 3y = 4 and 6x + 9y = 12 (25.2)

(the second equation is 3 times the first) then they are not linearly
independent — they are the same equation, and we can drop one of them.
Another possibility is when two equations are inconsistent, e.g.,

2x + 3y = 4 and 2x + 3y = 5 . (25.3)

An inconsistent system of equations has no solutions.


To solve a system of n equations in n unknowns, solve the first equation for the
first unknown and substitute this into the remaining equations. Now there are
n − 1 equations in n − 1 unknowns. Repeating this reduction leaves a single
equation in a single unknown; solving it and back-substituting yields the
remaining unknowns.


Ex. 25.1. For a system of 3 equations in 3 unknowns,

    a11 x + a12 y + a13 z = b1
    a21 x + a22 y + a23 z = b2
    a31 x + a32 y + a33 z = b3                                                (25.4)

solve the first equation for x:

    x = b1/a11 − (a12/a11) y − (a13/a11) z                                    (25.5)
and substitute this into the next two equations. Actually, it is a little neater if we multiply
the other two equations by a11 . Then we have

(a11 a22 − a21 a12 )y + (a11 a23 − a21 a13 )z = a11 b 2 − a21 b1
(a11 a32 − a31 a12 )y + (a11 a33 − a31 a13 )z = a11 b 3 − a31 b2 . (25.6)

Solve the first of these for y:

    y = [a11 b2 − a21 b1 − (a11 a23 − a21 a13) z] / (a11 a22 − a21 a12) .     (25.7)

Now multiply the last equation by (a11 a22 − a21 a12 ) and substitute in for y:

[(a11 a22 − a21 a12 )(a11 a33 − a31 a13 ) − (a11 a32 − a31 a12 )(a11 a23 − a21 a13 )]z
= (a11 a22 − a21 a12 )(a11 b 3 − a31 b 1 ) − (a11 a32 − a31 a12 )(a11 b 2 − a21 b 1 ) .
(25.8)

Provided the coefficient in front of z is not zero, we can now solve for z. Then substitute
z into the equation for y to determine y and finally substitute the equations for z and y
into the equation for x to determine x.
This is straightforward but tedious. (Fortunately we have computers.)
The solution is

    x = [ (a22 a33 − a23 a32) b1 − (a12 a33 − a13 a32) b2 + (a12 a23 − a13 a22) b3 ] / D   (25.9a)
    y = [ −(a21 a33 − a23 a31) b1 + (a11 a33 − a13 a31) b2 − (a11 a23 − a13 a21) b3 ] / D  (25.9b)
    z = [ (a21 a32 − a22 a31) b1 − (a11 a32 − a12 a31) b2 + (a11 a22 − a12 a21) b3 ] / D   (25.9c)

with

    D = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
      − a13 a22 a31 − a12 a21 a33 − a11 a23 a32 .                             (25.9d)
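The closed-form solution (25.9) can be verified on a concrete system; a quick sketch (the numerical values are arbitrary and chosen so the solution is (1, 2, 3)):

```python
# Sketch: check the closed-form solution (25.9) on a concrete system whose
# solution is (x, y, z) = (1, 2, 3).
a = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 2.0]]
b = [7.0, 13.0, 7.0]

# D from (25.9d):
D = (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0] + a[0][2]*a[1][0]*a[2][1]
     - a[0][2]*a[1][1]*a[2][0] - a[0][1]*a[1][0]*a[2][2] - a[0][0]*a[1][2]*a[2][1])

# x, y, z from (25.9a)-(25.9c):
x = ((a[1][1]*a[2][2] - a[1][2]*a[2][1]) * b[0]
     - (a[0][1]*a[2][2] - a[0][2]*a[2][1]) * b[1]
     + (a[0][1]*a[1][2] - a[0][2]*a[1][1]) * b[2]) / D
y = (-(a[1][0]*a[2][2] - a[1][2]*a[2][0]) * b[0]
     + (a[0][0]*a[2][2] - a[0][2]*a[2][0]) * b[1]
     - (a[0][0]*a[1][2] - a[0][2]*a[1][0]) * b[2]) / D
z = ((a[1][0]*a[2][1] - a[1][1]*a[2][0]) * b[0]
     - (a[0][0]*a[2][1] - a[0][1]*a[2][0]) * b[1]
     + (a[0][0]*a[1][1] - a[0][1]*a[1][0]) * b[2]) / D

print(x, y, z)   # 1.0 2.0 3.0
# Substituting back reproduces b:
for i in range(3):
    assert abs(a[i][0]*x + a[i][1]*y + a[i][2]*z - b[i]) < 1e-12
```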

Matrices
To express the linear system of equations more succinctly, introduce the
matrix. First note that the linear system can be written
    Σ_{j=1}^{n} aij xj = bi ,   i = 1 . . . m .                               (25.10)

Let A = [aij] be an m × n matrix, b = [bi] be an m × 1 matrix or column vector,
and x = [xj] be an n × 1 matrix (column vector). Then our system of equations
can be written concisely as

    Ax = b                                                                    (25.11)

where

        [ a11  a12  a13  · · ·  a1n ]         [ x1 ]          [ b1 ]
        [ a21  a22  a23  · · ·  a2n ]         [ x2 ]          [ b2 ]
    A = [  ⋮    ⋮    ⋮     ⋱    ⋮  ] ,   x = [ x3 ] ,   b =  [  ⋮ ] .        (25.12)
        [ am1  am2  am3  · · ·  amn ]         [  ⋮ ]          [ bm ]
                                              [ xn ]

Matrix multiplication is defined as follows: if A = [ai j ] is an m × n matrix,


B = [b j k ] is an n × p matrix, and C = [ci k ] is an m × p matrix, i = 1 . . . m, j = 1 . . . n,
k = 1 . . . p, then

    C = AB  ⇐⇒  cik = Σ_{j=1}^{n} aij bjk ,   for i = 1 . . . m and k = 1 . . . p.   (25.13)

Note that matrix multiplication is only defined between an n × m matrix on the
left and a p × q matrix on the right if p = m, and the result is an n × q matrix.
Consequently, if A B is defined, it does not necessarily mean that B A is defined.
Even if A and B are both n × n matrices so that A B and B A both exist, it does
not necessarily follow that A B = B A.
For example,

    [ 0 1 ] [ 1 2 ]   [ 3 4 ]         [ 1 2 ] [ 0 1 ]   [ 2 3 ]
    [ 1 1 ] [ 3 4 ] = [ 4 6 ]   but   [ 3 4 ] [ 1 1 ] = [ 4 7 ] .

In other words, matrix multiplication does not commute.
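A minimal implementation of the product rule (25.13) reproduces the non-commuting example above (a sketch, not from the notes):

```python
def matmul(A, B):
    """Matrix product, eq. (25.13): C[i][k] = sum_j A[i][j] * B[j][k]."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1], [1, 1]]
B = [[1, 2], [3, 4]]
print(matmul(A, B))   # [[3, 4], [4, 6]]
print(matmul(B, A))   # [[2, 3], [4, 7]]
```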

In addition to matrix multiplication, matrices can be multiplied by a scalar to
form a matrix of the same shape:

    C = αA  ⇐⇒  cij = α aij                                                   (25.14)

and matrices of the same shape can be added:

    C = A + B  ⇐⇒  cij = aij + bij .                                          (25.15)

It can then be verified that matrix addition is commutative, A + B = B + A, and
associative, (A + B) + C = A + (B + C), and matrix multiplication is associative,
(A B)C = A(B C), and distributive, A(B + C) = A B + A C.
Some special matrices are the zero matrix 0, which has zero for all elements,
and the identity matrix 1 = [δij]. These have the properties A + 0 = A,
A 0 = 0 A = 0, and A 1 = 1 A = A.
Some other important matrix operations are as follows.
• Complex conjugation: if A = [ai j ] and C = [ci j ] have the same shape then

C = A∗ ⇐⇒ ci j = a∗i j . (25.16)

• Transpose: if A = [aij] is an m × n matrix and C = [ckℓ] is an n × m matrix then

    C = Aᵀ  ⇐⇒  ckℓ = aℓk .                                                   (25.17)

• Adjoint:

A † = (A T )∗ . (25.18)

• Trace: if A = [aij] is an n × n (square) matrix then

    Tr A = Σ_{i=1}^{n} aii .                                                  (25.19)

• Determinant: if A = [aij] is an n × n (square) matrix then

    det A = Σ_{i1=1}^{n} Σ_{i2=1}^{n} · · · Σ_{in=1}^{n} ε_{i1 i2 ... in} a1,i1 a2,i2 · · · an,in   (25.20)

where ε_{i1 i2 ... in} is the Levi-Civita symbol defined by

    ε_{1,2,...,n} = 1   and   ε_{i1,...,ip,...,iq,...,in} = −ε_{i1,...,iq,...,ip,...,in}   (25.21)

or ε_{i1,i2,...,in} = +1 if (i1, i2, . . . , in) is an even permutation of (1, 2, . . . , n);
ε_{i1,i2,...,in} = −1 if (i1, i2, . . . , in) is an odd permutation of (1, 2, . . . , n);
and ε_{i1,i2,...,in} = 0 otherwise.
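Definition (25.20) can be implemented directly by summing over permutations, with the Levi-Civita sign obtained from the inversion count; this is O(n · n!) and only practical for small matrices, but it makes the definition concrete:

```python
from itertools import permutations

def det(A):
    """det A directly from (25.20): a sum over permutations weighted by the
    Levi-Civita sign (computed here from the inversion count)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for row, col in enumerate(perm):
            prod *= A[row][col]
        total += sign * prod
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 1, 1], [1, 3, 2], [1, 0, 2]]))   # 9
```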

The minor mij of the square matrix A = [aij] is the determinant of the
submatrix obtained by deleting row i and column j of A, and the cofactor
matrix is C = [(−1)^{i+j} mij]. Then the matrix inverse of A is

    A⁻¹ = (1/det A) Cᵀ .                                                      (25.22)

The inverse matrix has the property A⁻¹A = A A⁻¹ = 1.


Some useful identities:

(A B)−1 = B −1 A −1 (25.23a)
T T T
(A B) = B A (25.23b)
Tr(A B) = Tr(B A) (25.23c)
det(A B) = (det A)(det B) = det(B A) . (25.23d)

A matrix A is a:
• real matrix if A∗ = A,                                                      (25.24a)
• symmetric matrix if Aᵀ = A,                                                 (25.24b)
• antisymmetric matrix if Aᵀ = −A,                                            (25.24c)
• Hermitian matrix if A† = A,                                                 (25.24d)
• orthogonal matrix if A⁻¹ = Aᵀ,                                              (25.24e)
• unitary matrix if A⁻¹ = A†,                                                 (25.24f)
• diagonal matrix if A = [aij] with aij = 0 for i ≠ j,                        (25.24g)
• idempotent matrix if A² = A,                                                (25.24h)
• nilpotent matrix if Aᵏ = 0 for some integer k.                              (25.24i)
26 Vector Spaces

An n-vector x is said to live in an n-dimensional vector space. Vectors in the


vector space have the following operations:
• Addition of vectors commutative and associative:
x+y = y+x and (x + y) + z = x + (y + z) . (26.1)

• Multiplication by a scalar is distributive and associative:

    a(x + y) = ax + ay   and   a(bx) = (ab)x .                                (26.2)

Multiplication by 1 leaves a vector unchanged: 1x = x.


Multiplication by 0 results in a null vector 0x = 0 for which x + 0 = x.
Multiplication by −1 results in a vector −x for which x + (−x) = 0.
A set of vectors x, y, . . . , z, are linearly independent if there are no values of
a, b, . . . , c for which
ax + by + · · · + cz = 0 (26.3)
except for a = b = · · · = c = 0.
In an n-dimensional vector space, there exist sets of n linearly independent
vectors, but there is no set of n + 1 linearly independent vectors.
Let e1 , e2 , . . . , en be n linearly independent vectors in a n-dimensional vector
space. These are known as basis vectors. Then, for any vector x, we can find
values x1 , x2 , . . . , xn for which
x 1 e1 + x 2 e2 + · · · + x n e n − x = 0 . (26.4)
Thus the basis vectors are complete and define a coordinate system. The
values x1 , x2 , . . . , xn that satisfy the above equation are the components of x.
That is, the vector x can be written in terms of its components xi , i = 1 . . . n, as
n
¼
x= x i ei . (26.5)
i=1

We will find it convenient to express the components of the vector x as a


column vector x = [xi ].


Linear Operators
A linear operator A is a map from one vector in a vector space to another

y = Ax (26.6)

having the property

    A(ax + by) = a A x + b A y .                                              (26.7)

Linear operators do not generally commute, A B ≠ B A. If an inverse operator
A⁻¹ exists then

    A A⁻¹ = A⁻¹ A = 1 .                                                       (26.8)

Consider the application of a linear operator A to a set of basis vectors:

aj = A ej , j = 1...n . (26.9)

We can write these vectors in terms of their components:


    aj = Σ_{i=1}^{n} aij ei ,   j = 1 . . . n                                 (26.10)

where ai j is the ith component of the vector a j in a particular basis.


These n² components are sufficient to define the operator A: for any vector x,

    y = A x = A Σ_{j=1}^{n} xj ej = Σ_{j=1}^{n} xj A ej = Σ_{j=1}^{n} xj Σ_{i=1}^{n} aij ei   (26.11a)

      = Σ_{i=1}^{n} ( Σ_{j=1}^{n} aij xj ) ei                                 (26.11b)

but

    y = Σ_{i=1}^{n} yi ei                                                     (26.11c)

thus

    yi = Σ_{j=1}^{n} aij xj ,   i = 1 . . . n .                               (26.11d)

Therefore if, in some basis, we have the components A = [ai j ] of a linear


operator A, then the components y = [yi ] of the vector y = A x are related to
the components x = [xi ] of the vector x by the matrix equation

y = Ax . (26.12)

Coordinate Transformations
Suppose that we change from one set of basis vectors to another set of basis
vectors by an invertible linear transformation P:

    e′j = P ej   or   e′j = Σ_{i=1}^{n} pij ei ,   j = 1 . . . n .            (26.13)

Here P = [pi j ] is called the transformation matrix.


In the new basis, a vector x is

    x = Σ_{j=1}^{n} x′j e′j = Σ_{j=1}^{n} x′j Σ_{i=1}^{n} pij ei = Σ_{i=1}^{n} ( Σ_{j=1}^{n} pij x′j ) ei   (26.14)

so

    xi = Σ_{j=1}^{n} pij x′j ,   i = 1 . . . n   or   x = P x′                (26.15)

where we express the components as column vectors x = [xi] and x′ = [x′i].
We can now determine the effect of the change of basis on the components of
other linear operators. Suppose

    y = Ax                                                                    (26.16)

then

    y = Ax   and   y′ = A′x′ .                                                (26.17)

Therefore

    P y′ = A(P x′)   or   y′ = P⁻¹ A P x′ .                                   (26.18)

We thus identify

    A′ = P⁻¹ A P .                                                            (26.19)

This is known as a similarity transformation.
We can apply similarity transforms to any matrix equation:

    AB = C  =⇒  P⁻¹A(P P⁻¹)B P = P⁻¹ C P  =⇒  A′B′ = C′ .                     (26.20)

Inner Product
A scalar product or inner product or dot product between two vectors
x·y (26.21)
is a scalar-valued function of the two vectors with the properties:
• Conjugate symmetry: x · y = (y · x)∗                                        (26.22a)
• Linearity: (ax + by) · z = a(x · z) + b(y · z)                              (26.22b)
• Positive definiteness: x · x > 0 for x ≠ 0                                  (26.22c)
The length of a vector x is ‖x‖ = (x · x)^{1/2}.
If x · y = 0 then the two vectors are orthogonal.
The dot product of two vectors is related to their lengths and the angle θ
between them by x · y = ‖x‖ ‖y‖ cos θ.
Suppose we define the inner product in some basis as

    x · y = Σ_{i=1}^{n} xi y∗i = y† x                                         (26.23)

where xi and yi are the components of the vectors x and y respectively in that
basis. It then follows that the basis vectors are orthonormal with respect to
our inner product:

    ei · ej = δij .                                                           (26.24)
If we wish to find a new orthonormal basis e′i = P ei , i = 1 . . . n, with respect
to the same inner product, then

    δij = e′i · e′j = ( Σ_{k=1}^{n} pki ek ) · ( Σ_{ℓ=1}^{n} pℓj eℓ )
        = Σ_{k=1}^{n} Σ_{ℓ=1}^{n} pki p∗ℓj (ek · eℓ)                          (26.25a)
        = Σ_{k=1}^{n} pki p∗kj                                                (26.25b)

(using ek · eℓ = δkℓ), or

    1 = P† P                                                                  (26.25c)
so the transformation matrix must be unitary. If the vector space is real then
the transformation matrix must be orthogonal.
Note that

    e′j · ei = ( Σ_{k=1}^{n} pkj ek ) · ei = Σ_{k=1}^{n} pkj (ek · ei) = pij   (26.26)

and since ei and e′j are both unit vectors, pij is the direction cosine between
the two different basis vectors, and P is the matrix of direction cosines.

Ex. 26.1. Passive and active rotations.
Consider a vector x in a 2-dimensional vector space. First suppose we rotate the basis
vectors as in the left panel of Fig. 26.1 so that the direction cosines are

    p11 = e′1 · e1 = cos θ          p12 = e′2 · e1 = cos(π/2 + θ)
    p21 = e′1 · e2 = cos(π/2 − θ)   p22 = e′2 · e2 = cos θ                    (26.27a)

or

    P = [ cos θ  −sin θ ]
        [ sin θ   cos θ ] .                                                   (26.27b)

Then, from Eq. (26.15), x′ = P⁻¹ x and since P⁻¹ = Pᵀ (it is orthogonal)

    [ x′1 ]   [  cos θ   sin θ ] [ x1 ]
    [ x′2 ] = [ −sin θ   cos θ ] [ x2 ] .                                     (26.28)

This is known as a passive or alias rotation.
Alternatively, one could apply the linear operator P to the vector x to obtain a new
vector x′ = P x as shown in the right panel of Fig. 26.1. The components of x′ in the
(unchanged) basis, according to Eq. (26.12), are x′ = P x or

    [ x′1 ]   [ cos θ  −sin θ ] [ x1 ]
    [ x′2 ] = [ sin θ   cos θ ] [ x2 ] .                                      (26.29)

This is known as an active or alibi rotation.

Figure 26.1: Passive or alias (left) and active or alibi (right) rotations.

Vector or Cross Product


In a 3-dimensional real vector space, a vector product or cross product

x×y (26.30)

is a vector-valued function of the two vectors with the properties:


• Linearity and distributivity (ax + by) × z = a(x × z) + b(y × z) (26.31a)
• Anticommutativity x × y = −y × x (26.31b)
• Jacobi identity x × (y × z) + z × (x × y) + y × (z × x) = 0 (26.31c)
In a particular basis it is conventional to define the cross product as

    z = x × y  ⇐⇒  zi = Σ_{j=1}^{3} Σ_{k=1}^{3} εijk xj yk                    (26.32a)

where εijk is the Levi-Civita symbol, or

    z1 = x2 y3 − x3 y2 ,   z2 = x3 y1 − x1 y3 ,   and   z3 = x1 y2 − x2 y1 .  (26.32b)

We see that

e1 × e2 = e3 , e2 × e3 = e1 , and e3 × e1 = e2 . (26.33)

The cross product of two vectors is orthogonal to both of those vectors:


x · (x × y) = y · (x × y) = 0.
The magnitude of the cross product is related to the lengths of the two vectors
and the angle θ between them by ‖x × y‖ = ‖x‖ ‖y‖ sin θ.
The cross product of two vectors x and y gives the (directed) area of the
parallelogram with sides defined by x and y.
The scalar triple product is

                                                  [ x1  x2  x3 ]
    x · (y × z) = y · (z × x) = z · (x × y) = det [ y1  y2  y3 ] .            (26.34)
                                                  [ z1  z2  z3 ]

This is the volume of a parallelepiped with sides defined by x, y, z.


The vector triple product is

x × (y × z) = (x · z)y − (x · y)z . (26.35)
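The component formula (26.32b) and the triple-product identities can be checked directly; a small sketch with arbitrary integer vectors:

```python
def cross(x, y):
    """Cross product, eq. (26.32b)."""
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# The cross product is orthogonal to both factors.
assert dot(x, cross(x, y)) == 0 and dot(y, cross(x, y)) == 0

# Vector triple product (26.35): x cross (y cross z) = (x.z)y - (x.y)z.
lhs = cross(x, cross(y, z))
rhs = tuple(dot(x, z)*yi - dot(x, y)*zi for yi, zi in zip(y, z))
print(lhs, rhs)   # (-12, 9, -2) (-12, 9, -2)
assert lhs == rhs
```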



Eigenvalue Problems
If a linear operator A acts on a vector x in such a manner that the result is
proportional to x,

    A x = λx ,                                                                (26.36)

then λ is known as an eigenvalue of the operator A and x is the eigenvector
belonging to λ.
The matrix version of the eigenvalue problem is

    A x = λx .                                                                (26.37)

This equation can be rearranged as follows:

    (A − λ1)x = 0 .                                                           (26.38)

Note that if (A − λ1) is invertible then the solution is the trivial solution x = 0.
Therefore, in order for there to be non-trivial solutions to the eigenvalue
equation, (A − λ1) must be non-invertible and so its determinant must vanish:

    det(A − λ1) = 0 .                                                         (26.39)

This is called the secular or characteristic equation. The determinant
produces a polynomial in λ, called the characteristic polynomial, which has
n roots (not necessarily all real, though). These roots are the eigenvalues.
Then, for a particular eigenvalue λp, the eigenvector xp that belongs to it can
be determined up to an overall constant by solving

    (A − λp 1)xp = 0                                                          (26.40)

for the components of xp. This is an underdetermined system of equations, so
there will be one (or more, if the eigenvalue is degenerate) degrees of freedom.
Normally we supplement the system of equations with one additional equation
requiring the eigenvector to be normalized:

    x†p xp = 1 .                                                              (26.41)

Ex. 26.2. Consider the (active) rotation Rz of a vector x through angle θ about the
z-axis, described by the rotation matrix

         [ cos θ  −sin θ  0 ]
    Rz = [ sin θ   cos θ  0 ] .                                               (26.42)
         [   0       0    1 ]

We want to solve the eigenvalue problem

    Rz x = λx                                                                 (26.43)

that is, we seek a vector that is left unchanged, apart from a possible scale, when
rotated by θ about the z-axis. (It should be obvious what this vector is.)
The secular equation is

    det[Rz − λ1] = (1 − λ)[(cos θ − λ)² + sin² θ]                             (26.44a)
                 = (1 − λ)(λ² − 2λ cos θ + 1)                                 (26.44b)
                 = (1 − λ)(λ − e^{iθ})(λ − e^{−iθ}) .                         (26.44c)

This has one real eigenvalue, λ = 1, unless θ = 0 or θ = π. We'll come back to those at
the end of the example.
To find the eigenvector for the λ = 1 eigenvalue we solve

    [ cos θ − 1   −sin θ     0 ] [ x ]   [ 0 ]
    [   sin θ    cos θ − 1   0 ] [ y ] = [ 0 ]                                (26.45)
    [     0          0       0 ] [ z ]   [ 0 ]

for which the solution is x = y = 0 and z is undetermined. Requiring the eigenvector to
be normalized we find z = 1 and thus the eigenvector is ez.
Now for the case θ = 0, λ = 1 is a triply-degenerate eigenvalue. We have Rz|θ=0 = 1
and it is obvious that any vector x will solve the equation 1 x = x. An orthonormal set
of eigenvectors is ex, ey, ez.
Finally for the case θ = π, we have the usual eigenvalue λ = 1 and the eigenvector ez
that belongs to it, but now we also have a doubly-degenerate eigenvalue λ = −1. The
eigenvalue equation is

    [ −1   0  0 ] [ x ]     [ x ]          −x = −x
    [  0  −1  0 ] [ y ] = − [ y ]   =⇒    −y = −y                             (26.46)
    [  0   0  1 ] [ z ]     [ z ]           z = −z

for which the solution is z = 0 and x and y are unspecified. An orthonormal set of
eigenvectors is ex and ey. We see that any vector in the x-y plane simply changes its
sign when rotated by an angle π about the z axis.
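The example above can be verified numerically; the sketch below (not from the notes) checks that ez is fixed by Rz and that the roots 1, e^{iθ}, e^{−iθ} annihilate the secular polynomial (26.44b):

```python
import cmath, math

def Rz(theta):
    """Rotation matrix, eq. (26.42)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

theta = math.pi / 3

# e_z is left unchanged: R_z e_z = e_z (eigenvalue 1).
assert matvec(Rz(theta), [0.0, 0.0, 1.0]) == [0.0, 0.0, 1.0]

# The roots 1, e^{i theta}, e^{-i theta} zero the secular polynomial (26.44b).
for lam in (1.0, cmath.exp(1j * theta), cmath.exp(-1j * theta)):
    assert abs((1 - lam) * (lam**2 - 2*lam*math.cos(theta) + 1)) < 1e-12
print("eigenvalues verified")
```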

If H = [hij] is a Hermitian matrix, H† = H, with two eigenvectors xp and xq
belonging to eigenvalues λp and λq respectively, then

    H xp = λp xp   and   H xq = λq xq .                                       (26.47)

Then we have

    x†q (λp xp) = x†q H xp                                                    (26.48a)
                = Σ_{i=1}^{n} Σ_{j=1}^{n} x∗qi hij xpj                        (26.48b)
                = Σ_{i=1}^{n} Σ_{j=1}^{n} ( xqi h∗ij x∗pj )∗                  (26.48c)
                = [ Σ_{j=1}^{n} x∗pj Σ_{i=1}^{n} h∗ij xqi ]∗                  (26.48d)
                = [ x†p (H† xq) ]∗                                            (26.48e)
                = [ x†p (H xq) ]∗                                             (26.48f)
                = [ x†p (λq xq) ]∗ .                                          (26.48g)

Since x†q xp = (x†p xq)∗ we find

    (λp − λ∗q) x†q xp = 0 .                                                   (26.49)

• If p = q and xp ≠ 0, so that x†p xp > 0, then we have λp = λ∗p.
The eigenvalues of a Hermitian matrix are real.
• If λp ≠ λq then x†p xq = 0, i.e., xp · xq = 0. The eigenvectors belonging to
different eigenvalues of a Hermitian matrix are orthogonal.
• If λp = λq are degenerate eigenvalues then the eigenvectors belonging to
them need not be orthogonal. However, a linear combination of them can be
made orthogonal. Let

    u = xp   and   v = xq + αu                                                (26.50)

where α is some constant. Then

    v · u = 0  =⇒  α = − (xq · xp) / (xp · xp) .                              (26.51)

This procedure can be generalized to multiply degenerate eigenvalues and it
is just the Gram-Schmidt orthogonalization described in §23.
We see that all n eigenvalues of a Hermitian operator are real, and that we can
construct an orthogonal set of n eigenvectors belonging to these eigenvalues.
This set of eigenvectors is also complete.

The eigenvectors x of a linear operator A do not depend on the choice of basis
vectors. Suppose that in one basis we have

    A x = λx                                                                  (26.52)

and we use a transformation matrix P to go to a different basis:

    P⁻¹ A P P⁻¹ x = λ P⁻¹ x   or   A′ x′ = λx′                                (26.53)

where A′ = P⁻¹ A P and x′ = P⁻¹ x. We see that the transformed column vector
x′ is an eigenvector of the transformed matrix A′ belonging to the same
eigenvalue λ.
Two other important invariants of a similarity transformation are the trace and
determinant of the matrix:

    Tr A′ = Tr(P⁻¹ A P) = Tr(P P⁻¹ A) = Tr A                                  (26.54)
    det A′ = det(P⁻¹ A P) = det(P P⁻¹ A) = det A .                            (26.55)

Suppose our linear operator has a complete set of orthonormal eigenvectors
and suppose that we make a coordinate transformation so that the new basis
vectors are these eigenvectors, so that

    A e′i = λi e′i  =⇒  (A e′i) · e′j = λi e′i · e′j = λi δij .                (26.56)

However, (A e′i) · e′j = a′ij where A′ = [a′ij], so

    a′ij = λi δij .                                                           (26.57)

Therefore, the coordinate transformation to the basis set by the orthonormal
set of eigenvectors has diagonalized the matrix, and the diagonal elements of
A′ are the eigenvalues.
Recall that the transformation matrix P = [pij] has elements pij = e′j · ei in our
original (unprimed) basis, i.e., the jth column contains the components of e′j
(the eigenvectors) in the original basis:

    P = [ e′1  e′2  · · ·  e′n ] .                                            (26.58)

Therefore, once a complete set of orthonormal eigenvectors of a matrix A is
found, we can use them to diagonalize the matrix.
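As a small illustration of this diagonalization recipe (an example not in the notes, using the symmetric matrix [[2, 1], [1, 2]], whose eigenvalues are 3 and 1 with orthonormal eigenvectors (1, 1)/√2 and (1, −1)/√2):

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]
r = 1.0 / math.sqrt(2.0)
P = [[r, r], [r, -r]]    # columns are the eigenvectors; P is orthogonal
Pinv = P                 # here P is also symmetric, so P^{-1} = P^T = P

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Similarity transform (26.19): A' = P^{-1} A P should be diagonal with the
# eigenvalues on the diagonal.
Aprime = matmul(Pinv, matmul(A, P))
print(Aprime)   # close to [[3, 0], [0, 1]] up to rounding
```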

Ex. 26.3. Vibrational modes of the linear triatomic carbon dioxide molecule.
We consider only the vibrational modes along the axis of the linear triatomic molecule.
Let s1, s2, and s3 be the displacements away from the equilibrium positions of the
leftmost oxygen atom, the carbon atom, and the rightmost oxygen atom respectively
(see Fig. 26.2). The two double bonds are represented by springs with spring constant k.
Newton's equations of motion are

    mO d²s1/dt² = −k(s1 − s2)                                                 (26.59a)
    mC d²s2/dt² = −k(s2 − s3) + k(s1 − s2)                                    (26.59b)
    mO d²s3/dt² = k(s2 − s3) .                                                (26.59c)
Assume the motion is oscillatory with angular frequency ω and let s1(t) = x1 e^{iωt},
s2(t) = x2 e^{iωt}, and s3(t) = x3 e^{iωt}. Then

    −mO ω² x1 = −k(x1 − x2)                                                   (26.60a)
    −mC ω² x2 = −k(−x3 + 2x2 − x1)                                            (26.60b)
    −mO ω² x3 = −k(x3 − x2) .                                                 (26.60c)

These equations can be expressed in matrix form as

    [  1  −1   0 ] [ x1 ]     [ x1 ]
    [ −q  2q  −q ] [ x2 ] = λ [ x2 ]   with  q = mO/mC  and  λ = ω²/(k/mO) .  (26.61)
    [  0  −1   1 ] [ x3 ]     [ x3 ]

This is now in the form of an eigenvalue problem where the eigenvalues λ determine
the eigenfrequencies ω = √(λk/mO) and the eigenvectors are the normal modes.

Figure 26.2: CO2 molecule: oxygen atoms of mass mO at the ends and a carbon atom of
mass mC in the middle, coupled by springs of spring constant k.

First we compute the eigenvalues from the secular equation

            [ 1 − λ    −1      0   ]
    0 = det [  −q    2q − λ   −q   ]                                          (26.62a)
            [   0      −1    1 − λ ]

      = (1 − λ)[(1 − λ)(2q − λ) − q] − (−1)[−q(1 − λ)]                        (26.62b)
      = λ(1 − λ)(λ − 2q − 1) .                                                (26.62c)

We see that the eigenvalues are λ = 0, λ = 1, and λ = 2q + 1.
Next we find the eigenvectors belonging to these eigenvalues.

• Case λ = 0: Solve

    [  1  −1   0 ] [ x1 ]   [ 0 ]          x1 − x2 = 0
    [ −q  2q  −q ] [ x2 ] = [ 0 ]   =⇒   −qx1 + 2qx2 − qx3 = 0                (26.63)
    [  0  −1   1 ] [ x3 ]   [ 0 ]         −x2 + x3 = 0

and so we have x1 = x2 = x3. With suitable normalization the eigenvector is
(1/√3)(1, 1, 1)ᵀ. This is a zero-frequency mode that corresponds to rigid
translation of the whole molecule along its axis as seen in the top panel of
Fig. 26.3.
• Case λ = 1: Solve

    [  1  −1   0 ] [ x1 ]   [ x1 ]         x1 − x2 = x1
    [ −q  2q  −q ] [ x2 ] = [ x2 ]  =⇒   −qx1 + 2qx2 − qx3 = x2               (26.64)
    [  0  −1   1 ] [ x3 ]   [ x3 ]        −x2 + x3 = x3

and so we have x2 = 0 and x1 = −x3. The normalized eigenvector is
(1/√2)(1, 0, −1)ᵀ. This is a symmetric mode of oscillation with frequency
ωs = √(k/mO) in which the carbon atom remains stationary and the two oxygen
atoms vibrate out of phase with each other along the axis as seen in the middle
panel of Fig. 26.3.
• Case λ = 2q + 1: Solve

    [  1  −1   0 ] [ x1 ]            [ x1 ]         x1 − x2 = (2q + 1)x1
    [ −q  2q  −q ] [ x2 ] = (2q + 1) [ x2 ]  =⇒   −qx1 + 2qx2 − qx3 = (2q + 1)x2   (26.65)
    [  0  −1   1 ] [ x3 ]            [ x3 ]        −x2 + x3 = (2q + 1)x3

and so we have x1 = x3 and x2 = −2qx1. The normalized eigenvector is
[1/√(4q² + 2)](1, −2q, 1)ᵀ. This is an asymmetric mode of oscillation with
frequency ωa = √(2k/mC + k/mO) in which the two oxygen atoms move in phase
while the carbon atom moves out of phase along the axis in such a way as to
preserve the center of mass, as seen in the bottom panel of Fig. 26.3.

The general solution to the longitudinal motion is

    [ s1 ]              [ 1 ]                    [  1 ]                    [    1     ]
    [ s2 ] = (s0 + vt)  [ 1 ] + a cos(ωs t + φs) [  0 ] + b cos(ωa t + φa) [ −2mO/mC ]   (26.66)
    [ s3 ]              [ 1 ]                    [ −1 ]                    [    1     ]

where a, b, s0, v, φs, and φa are constants determined by the initial conditions.
(Note: the solution to d²s/dt² = −ω²s for ω = 0 is s = s0 + vt.)
For the CO2 molecule, mO ≈ 16 amu and mC ≈ 12 amu, and we find
ωa/ωs = √(2mO/mC + 1) ≈ 1.9.

Figure 26.3: Vibration modes of the CO2 molecule along its axis. Top: a zero-frequency
rigid translation along the axis. Middle: symmetric stretching in which the oxygen
atoms move out of phase and the carbon atom remains at rest. Bottom: antisymmetric
stretching in which the oxygen atoms move in phase while the carbon atom moves out
of phase preserving the center of mass.
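The three modes found in the example can be checked by direct matrix-vector multiplication; a sketch (not from the notes) using q = mO/mC = 16/12 for CO2:

```python
import math

# Verify the three normal modes of eq. (26.61) for q = mO/mC = 16/12.
q = 16.0 / 12.0
M = [[1.0, -1.0, 0.0],
     [-q, 2.0 * q, -q],
     [0.0, -1.0, 1.0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

modes = [([1.0, 1.0, 1.0], 0.0),                  # rigid translation
         ([1.0, 0.0, -1.0], 1.0),                 # symmetric stretch
         ([1.0, -2.0 * q, 1.0], 2.0 * q + 1.0)]   # asymmetric stretch
for v, lam in modes:
    Mv = matvec(M, v)
    assert all(abs(Mv[i] - lam * v[i]) < 1e-12 for i in range(3))

print(math.sqrt(2.0 * q + 1.0))   # frequency ratio omega_a / omega_s, about 1.91
```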
27 Vector Calculus

Derivatives
Consider a scalar function of multiple variables, φ(x, y, z). The partial
derivative of this function with respect to x at (x, y, z) = (a, b, c) is the
derivative of the related univariate function f(x) constructed by holding the
other variables at fixed values, y = b and z = c, f(x) = φ(x, b, c):

    ∂φ(x, y, z)/∂x |_{x=a, y=b, z=c} = df(x)/dx |_{x=a} = lim_{h→0} [φ(a + h, b, c) − φ(a, b, c)]/h .   (27.1)

The antiderivative of a partial derivative results in a "constant" of integration
that is in fact a function of the remaining variables: if

    ψ(x, y, z) = ∂φ(x, y, z)/∂x                                               (27.2a)

then

    φ(x, y, z) = ∫ ψ(x, y, z) dx + χ(y, z) .                                  (27.2b)

Differentiating a function with respect to one variable and then with respect to
another results in a mixed partial derivative. If all mixed partial derivatives are
continuous at a point then the order in which they are taken does not matter:

    ∂²φ/∂x∂y = ∂²φ/∂y∂x .                                                     (27.3)

The gradient of a function φ(x, y, z) is a vector field whose components are
the partial derivatives of the function:

    ∇φ(x, y, z) = (∂φ/∂x) ex + (∂φ/∂y) ey + (∂φ/∂z) ez .                      (27.4)


Ex. 27.1. Compute the gradient of the function of two variables

    φ(x, y) = x e^{−(x² + y²)/2} .                                            (27.5)

The partial derivatives are

    ∂φ/∂x = e^{−(x² + y²)/2} − x² e^{−(x² + y²)/2}                            (27.6a)

and

    ∂φ/∂y = −xy e^{−(x² + y²)/2}                                              (27.6b)

so we have

    ∇φ(x, y) = (1 − x²) e^{−(x² + y²)/2} ex − xy e^{−(x² + y²)/2} ey .        (27.7)

Figure 27.1 shows a contour plot of φ(x, y) along with the vector field ∇φ(x, y). Notice
that the vectors are normal to the contours.

Figure 27.1: The function φ(x, y) = x e^{−(x²+y²)/2} and its gradient ∇φ(x, y). The color
density plot with contours shows φ(x, y) while the arrows (length and direction)
represent the vector field ∇φ(x, y).
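The analytic gradient (27.7) can be validated against central finite differences; a sketch (the step size h is a tuning choice):

```python
import math

def phi(x, y):
    """phi(x, y) = x exp(-(x^2 + y^2)/2), eq. (27.5)."""
    return x * math.exp(-(x*x + y*y) / 2.0)

def grad_analytic(x, y):
    """Gradient from eq. (27.7)."""
    e = math.exp(-(x*x + y*y) / 2.0)
    return ((1.0 - x*x) * e, -x * y * e)

def grad_numeric(x, y, h=1e-6):
    """Central-difference approximation to the partial derivatives."""
    return ((phi(x + h, y) - phi(x - h, y)) / (2*h),
            (phi(x, y + h) - phi(x, y - h)) / (2*h))

x, y = 0.8, -0.3
ga, gn = grad_analytic(x, y), grad_numeric(x, y)
assert all(abs(a - n) < 1e-8 for a, n in zip(ga, gn))
print(ga)
```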

The gradient of a scalar function is an example of a vector field. More
generally, a vector field is a vector-valued function over space of the form

    A(x, y, z) = Ax(x, y, z) ex + Ay(x, y, z) ey + Az(x, y, z) ez             (27.8)

where Ax(x, y, z), Ay(x, y, z), and Az(x, y, z) are scalar functions that give the x-,
y-, and z-components of a vector at each point in space.
The divergence of a vector field is given by

    ∇ · A = Σ_{i=1}^{3} ∂Ai/∂xi = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z .                  (27.9)

The curl of a vector field is given by

                                                          [  ex    ey    ez  ]
    ∇ × A = Σ_{i=1}^{3} Σ_{j=1}^{3} Σ_{k=1}^{3} εijk ei ∂Ak/∂xj = det [ ∂/∂x  ∂/∂y  ∂/∂z ]
                                                          [  Ax    Ay    Az  ]

          = (∂Az/∂y − ∂Ay/∂z) ex + (∂Ax/∂z − ∂Az/∂x) ey + (∂Ay/∂x − ∂Ax/∂y) ez   (27.10)

where εijk is the Levi-Civita symbol.


Figure 27.2 shows vector fields with non-zero divergence (left) and non-zero
curl (right).

y y

x x

Figure 27.2: Vector fields A = xex + ye y (left) and A = −yex + xe y (right). The former has
vanishing curl but non-vanishing divergence while the latter has vanishing divergence
but non-vanishing curl.
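The divergence and curl of the two fields in Fig. 27.2 can be checked numerically, here in two dimensions with central differences (a sketch, not from the notes):

```python
def div2d(Ax, Ay, x, y, h=1e-6):
    """Numerical divergence dAx/dx + dAy/dy (central differences)."""
    return ((Ax(x + h, y) - Ax(x - h, y)) / (2*h)
            + (Ay(x, y + h) - Ay(x, y - h)) / (2*h))

def curlz2d(Ax, Ay, x, y, h=1e-6):
    """Numerical z-component of the curl, dAy/dx - dAx/dy."""
    return ((Ay(x + h, y) - Ay(x - h, y)) / (2*h)
            - (Ax(x, y + h) - Ax(x, y - h)) / (2*h))

# Left panel, A = x ex + y ey: divergence 2, curl 0.
assert abs(div2d(lambda x, y: x, lambda x, y: y, 0.4, 0.7) - 2.0) < 1e-8
assert abs(curlz2d(lambda x, y: x, lambda x, y: y, 0.4, 0.7)) < 1e-8

# Right panel, A = -y ex + x ey: divergence 0, curl 2.
assert abs(div2d(lambda x, y: -y, lambda x, y: x, 0.4, 0.7)) < 1e-8
assert abs(curlz2d(lambda x, y: -y, lambda x, y: x, 0.4, 0.7) - 2.0) < 1e-8
print("checks passed")
```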

Some useful identities involving the gradient, divergence, and curl:


• Gradient.

    ∇(ψ + φ) = ∇ψ + ∇φ                                                        (27.11a)
    ∇(ψφ) = φ∇ψ + ψ∇φ                                                         (27.11b)
    ∇(A · B) = (A · ∇)B + (B · ∇)A + A × (∇ × B) + B × (∇ × A) .              (27.11c)

• Divergence.

    ∇ · (A + B) = ∇ · A + ∇ · B                                               (27.12a)
    ∇ · (ψA) = ψ∇ · A + (∇ψ) · A                                              (27.12b)
    ∇ · (A × B) = (∇ × A) · B − (∇ × B) · A .                                 (27.12c)

• Curl.

    ∇ × (A + B) = ∇ × A + ∇ × B                                               (27.13a)
    ∇ × (ψA) = ψ∇ × A + (∇ψ) × A                                              (27.13b)
    ∇ × (A × B) = A(∇ · B) − B(∇ · A) + (B · ∇)A − (A · ∇)B                   (27.13c)

• Second derivatives.

    ∇ · (∇ × A) = 0                                                           (27.14a)
    ∇ × (∇ψ) = 0                                                              (27.14b)
    ∇ · (∇ψ) = ∇²ψ                                                            (27.14c)
    ∇ × (∇ × A) = ∇(∇ · A) − ∇²A                                              (27.14d)

where we define the scalar and vector Laplacian ∇² by

    ∇²ψ = ∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z²                                         (27.15a)

and

    ∇²A = (∇²Ax) ex + (∇²Ay) ey + (∇²Az) ez .                                 (27.15b)

• Other miscellaneous results.

    ∇‖x‖ = x/‖x‖                                                              (27.16a)
    ∇ · x = 3                                                                 (27.16b)
    ∇ × x = 0                                                                 (27.16c)
    (A · ∇)x = A                                                              (27.16d)

Integrals
A curve C is a set of points

    C = { x(t) : a ≤ t ≤ b }                                                  (27.17)

(see Fig. 27.3). The directed length element along this curve is

    ds = x′(t) dt .                                                           (27.18)

Figure 27.3: Curve in 2-dimensions, running from (x(a), y(a)) at t = a to (x(b), y(b))
at t = b.

A line integral of a scalar field φ(x) is

    ∫_C φ(x) ds = ∫_a^b φ(x(t)) ‖x′(t)‖ dt                                    (27.19)

which is invariant under re-parameterization of the curve.


A scalar line integral of a vector field A(x) is

    ∫_C A(x) · ds = ∫_a^b A(x(t)) · x′(t) dt                                  (27.20)

and the vector line integral of the vector field is

    ∫_C A(x) × ds = ∫_a^b A(x(t)) × x′(t) dt .                                (27.21)

A double integral of a scalar field φ(x, y) over a domain D bounded by two
functions y = α(x) and y = β(x) with a ≤ x ≤ b, as shown in Fig. 27.4, is given by

    ∬_D φ(x) dA = ∫_{x=a}^{x=b} ∫_{y=α(x)}^{y=β(x)} φ(x, y) dy dx .           (27.22)

Figure 27.4: Double integral over a domain D bounded by y = α(x) and y = β(x)
for a ≤ x ≤ b.

This generalizes to volume integrals with α(x, y) ≤ z ≤ β(x, y) and (x, y) in D:

    ∭_V φ(x) dV = ∬_D ∫_{z=α(x,y)}^{β(x,y)} φ(x, y, z) dz dx dy               (27.23)
and so on for higher dimensional integrals.



A change of variables from x to q specified by x(q) can be performed. In doing so, the volume element dx dy dz must also be transformed:

∫_V φ(x) dV = ∫_V φ(x(q)) ρ(q) dq_1 dq_2 dq_3 (27.24)

where ρ(q) is a density that we now determine. Consider a volume element that is a parallelepiped formed by three vectors a, b, c, with displacements dq_1, dq_2, and dq_3 along the q_1-, q_2-, and q_3-axes respectively:

a = dq_1 (∂x/∂q_1 e_x + ∂y/∂q_1 e_y + ∂z/∂q_1 e_z) (27.25a)
b = dq_2 (∂x/∂q_2 e_x + ∂y/∂q_2 e_y + ∂z/∂q_2 e_z) (27.25b)

and

c = dq_3 (∂x/∂q_3 e_x + ∂y/∂q_3 e_y + ∂z/∂q_3 e_z) . (27.25c)

The volume of this parallelepiped is det[a_x a_y a_z ; b_x b_y b_z ; c_x c_y c_z]:

dV = det[∂x/∂q_1 dq_1  ∂y/∂q_1 dq_1  ∂z/∂q_1 dq_1 ; ∂x/∂q_2 dq_2  ∂y/∂q_2 dq_2  ∂z/∂q_2 dq_2 ; ∂x/∂q_3 dq_3  ∂y/∂q_3 dq_3  ∂z/∂q_3 dq_3] = det(J) dq_1 dq_2 dq_3 (27.26)

where we define the Jacobian matrix

J = [∂x/∂q_1  ∂y/∂q_1  ∂z/∂q_1 ; ∂x/∂q_2  ∂y/∂q_2  ∂z/∂q_2 ; ∂x/∂q_3  ∂y/∂q_3  ∂z/∂q_3] (27.27)

and then we have ρ(q) = det(J) and so

∫_V φ(x) dV = ∫_V φ(x(q)) det(J) dq_1 dq_2 dq_3 . (27.28)
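As a quick check of Eq. (27.27) (this sketch is not part of the notes), the following computes the Jacobian determinant symbolically with sympy for the spherical-polar map, recovering the familiar volume density r² sin θ:

```python
import sympy as sp

# Spherical-polar map x(q) with q = (r, theta, phi); the Jacobian
# determinant of Eq. (27.27) should reproduce the volume density r**2*sin(theta).
r, th, ph = sp.symbols('r theta phi', positive=True)
x = r * sp.sin(th) * sp.cos(ph)
y = r * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

# Row i of J is (dx/dq_i, dy/dq_i, dz/dq_i), as in Eq. (27.27).
J = sp.Matrix([[sp.diff(f, q) for f in (x, y, z)] for q in (r, th, ph)])
detJ = sp.simplify(J.det())
print(detJ)  # r**2*sin(theta)
```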
A surface S is a set of points

S = { x(s, t) : (s, t) ∈ D } (27.29)

for some domain D (see Fig. 27.5).

Figure 27.5: Surface

A directed surface area element of a parallelogram with sides given by vectors a and b with displacements ds and dt along the s- and t-directions is given by

dS = a × b = (∂x/∂s × ∂x/∂t) ds dt . (27.30)

The surface integral of a scalar field φ(x) is

∬_S φ(x) dS = ∬_D φ(x(s, t)) ‖∂x/∂s × ∂x/∂t‖ ds dt . (27.31)

A scalar surface integral of a vector field A(x) is

∬_S A(x) · dS = ∬_D A(x(s, t)) · (∂x/∂s × ∂x/∂t) ds dt . (27.32)

A vector surface integral of a vector field A(x) is

∬_S A(x) × dS = ∬_D A(x(s, t)) × (∂x/∂s × ∂x/∂t) ds dt . (27.33)

If we parameterize our surface as z = z(x, y) where (x, y) is in a domain D on the x-y plane then we have

∂x/∂x × ∂x/∂y = (e_x + ∂z/∂x e_z) × (e_y + ∂z/∂y e_z) = −(∂z/∂x) e_x − (∂z/∂y) e_y + e_z (27.34)

and then we find

∬_S A · dS = ∬_D [ −A_x ∂z/∂x − A_y ∂z/∂y + A_z ] dx dy . (27.35)

Ex. 27.2. Area of a unit sphere.

A unit sphere is parameterized by a polar angle θ and an azimuthal angle ϕ as

x(θ, ϕ) = sin θ cos ϕ e_x + sin θ sin ϕ e_y + cos θ e_z , 0 ≤ θ ≤ π , 0 ≤ ϕ ≤ 2π . (27.36)

We have

∂x/∂θ = cos θ cos ϕ e_x + cos θ sin ϕ e_y − sin θ e_z (27.37a)

and

∂x/∂ϕ = −sin θ sin ϕ e_x + sin θ cos ϕ e_y (27.37b)

so

∂x/∂θ × ∂x/∂ϕ = sin²θ cos ϕ e_x + sin²θ sin ϕ e_y + sin θ cos θ e_z (27.37c)

and

‖∂x/∂θ × ∂x/∂ϕ‖ = √(sin⁴θ cos²ϕ + sin⁴θ sin²ϕ + sin²θ cos²θ) = sin θ . (27.37d)

The area of the sphere is thus

A = ∬_S dS = ∫_{ϕ=0}^{2π} ∫_{θ=0}^{π} sin θ dθ dϕ = 4π . (27.38)
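The surface integral (27.38) can be checked numerically from the parameterization itself; this sketch (not part of the notes) evaluates ‖∂x/∂θ × ∂x/∂ϕ‖ on a midpoint grid rather than using the closed form sin θ:

```python
import numpy as np

# Midpoint-rule evaluation of Eq. (27.38): A = ∬ ‖∂x/∂θ × ∂x/∂ϕ‖ dθ dϕ
# over 0 ≤ θ ≤ π, 0 ≤ ϕ ≤ 2π for the unit-sphere parameterization (27.36).
n = 400
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(n) + 0.5) * 2 * np.pi / n
th, ph = np.meshgrid(theta, phi, indexing='ij')

# Tangent vectors ∂x/∂θ and ∂x/∂ϕ from Eqs. (27.37a) and (27.37b).
xu = np.stack([np.cos(th)*np.cos(ph), np.cos(th)*np.sin(ph), -np.sin(th)], -1)
xv = np.stack([-np.sin(th)*np.sin(ph), np.sin(th)*np.cos(ph),
               np.zeros_like(th)], -1)
dS = np.linalg.norm(np.cross(xu, xv), axis=-1)

area = dS.sum() * (np.pi / n) * (2 * np.pi / n)
print(area)  # ≈ 4π ≈ 12.566
```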

Green’s theorem

Consider two scalar fields in two dimensions, φ(x, y) and ψ(x, y), defined over a domain D with boundary given by the closed curve C. We write the boundary as C = ∂D. Then

∮_∂D (φ dx + ψ dy) = ∬_D ( ∂ψ/∂x − ∂φ/∂y ) dx dy (27.39)

where the line integral over the boundary is taken in a counter-clockwise sense.

Figure 27.6: Green’s Theorem

Proof. The domain D is given by

D = { (x, y) : a ≤ x ≤ b , α(x) ≤ y ≤ β(x) } (27.40)

and let the boundary of this domain be divided into two curves, ∂D = C = C_1 + C_2, where C_1 is given by α(x) and C_2 is given by β(x), and note that the second curve is traversed from x = b to x = a as shown in Fig. 27.6. We have

∮_C φ dx = ∫_{C_1} φ(x, y) dx + ∫_{C_2} φ(x, y) dx (27.41a)
        = ∫_a^b φ(x, α(x)) dx + ∫_b^a φ(x, β(x)) dx (27.41b)
        = ∫_a^b φ(x, α(x)) dx − ∫_a^b φ(x, β(x)) dx . (27.41c)

Also,

∬_D ∂φ/∂y dx dy = ∫_{x=a}^{b} ∫_{y=α(x)}^{β(x)} ∂φ(x, y)/∂y dy dx (27.42a)
               = ∫_a^b [ φ(x, β(x)) − φ(x, α(x)) ] dx . (27.42b)

We thus see that

∮_C φ dx = −∬_D ∂φ/∂y dx dy . (27.43)

Similarly, if D is taken to be bounded by two functions of y and the roles of x and y are interchanged in the above argument, we have

∮_C ψ dy = ∬_D ∂ψ/∂x dx dy . (27.44)

Combining this with the previous result proves Green’s theorem.
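A simple concrete instance of Eq. (27.39) can be checked numerically; this sketch (my own illustration, not from the notes) takes φ = −y and ψ = x on the unit disk, where the right-hand side is ∬_D (1 + 1) dx dy = 2π:

```python
import numpy as np

# Green's theorem check with phi = -y, psi = x on the unit disk:
# ∮ (phi dx + psi dy) should equal ∬ (∂psi/∂x − ∂phi/∂y) dx dy = 2π.
n = 1000
t = (np.arange(n) + 0.5) * 2 * np.pi / n      # parameterize C counter-clockwise
x, y = np.cos(t), np.sin(t)
dx, dy = -np.sin(t) * (2*np.pi/n), np.cos(t) * (2*np.pi/n)

line = np.sum(-y * dx + x * dy)               # discretized ∮ (phi dx + psi dy)
print(line, 2 * np.pi)                        # both ≈ 6.2832
```

This is also the familiar line-integral formula for the area of a plane region, since ∮(−y dx + x dy) = 2 × area.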



Stokes’s theorem

Green’s theorem is a special case of the more general Stokes’s theorem: if F(x) is a vector field and S is a surface with boundary ∂S then

∮_∂S F · ds = ∬_S (∇ × F) · dS . (27.45)

Figure 27.7: Stokes’s Theorem

Proof. Suppose the surface S is given by z = z(x, y) with (x, y) in the domain D as shown in Fig. 27.7. Then we have

∮_∂S F · ds = ∫_a^b ( F_x dx/dt + F_y dy/dt + F_z dz/dt ) dt (27.46)

but

dz/dt = ∂z/∂x dx/dt + ∂z/∂y dy/dt (27.47)

so

∮_∂S F · ds = ∫_a^b [ (F_x + F_z ∂z/∂x) dx/dt + (F_y + F_z ∂z/∂y) dy/dt ] dt (27.48a)
           = ∮_∂D [ (F_x + F_z ∂z/∂x) dx + (F_y + F_z ∂z/∂y) dy ] (27.48b)

where we define

φ(x, y) = F_x(x, y, z(x, y)) + F_z(x, y, z(x, y)) ∂z(x, y)/∂x (27.49a)

and

ψ(x, y) = F_y(x, y, z(x, y)) + F_z(x, y, z(x, y)) ∂z(x, y)/∂y (27.49b)

and now employ Green’s theorem:

∮_∂S F · ds = ∬_D ( ∂ψ/∂x − ∂φ/∂y ) dx dy . (27.50)

Now

∂ψ/∂x − ∂φ/∂y = ( ∂F_y/∂x + ∂F_y/∂z ∂z/∂x + ∂F_z/∂x ∂z/∂y + ∂F_z/∂z ∂z/∂x ∂z/∂y + F_z ∂²z/∂x∂y )
             − ( ∂F_x/∂y + ∂F_x/∂z ∂z/∂y + ∂F_z/∂y ∂z/∂x + ∂F_z/∂z ∂z/∂y ∂z/∂x + F_z ∂²z/∂y∂x ) (27.51)

so we have

∮_∂S F · ds = ∬_D [ −( ∂F_z/∂y − ∂F_y/∂z ) ∂z/∂x − ( ∂F_x/∂z − ∂F_z/∂x ) ∂z/∂y + ( ∂F_y/∂x − ∂F_x/∂y ) ] dx dy . (27.52)

Here the three parenthesized combinations are the components A_x, A_y, and A_z of the vector A = ∇ × F.
Comparing this with Eq. (27.35) we arrive at

∮_∂S F · ds = ∬_S (∇ × F) · dS . (27.53)

Ex. 27.3. Conservative fields.

Suppose F is a curl-free vector field, ∇ × F = 0. Suppose C is any closed curve and let S be a surface whose boundary is C. Then, by Stokes’s theorem,

∮_C F · ds = 0 . (27.54)

From this result it is easy to show that the line integral of F depends only on the endpoints.
We say that such a field F is a conservative vector field.
It can also be shown that if F is a conservative field then it is the gradient of some function. Suppose C is a curve from (0, 0, 0) to (x, y, z) and define

−φ(x, y, z) = ∫_C F · ds . (27.55)

Let C be three straight lines connecting the points (0, 0, 0), (x, 0, 0), (x, y, 0), and (x, y, z):

−φ(x, y, z) = ∫_0^x F_x(t, 0, 0) dt + ∫_0^y F_y(x, t, 0) dt + ∫_0^z F_z(x, y, t) dt . (27.56)

Clearly −∂φ/∂z = F_z. Permuting x, y, and z we see F = −∇φ.
Since ∇ × ∇φ = 0 it follows that ∇ × F = 0 ⟺ F = −∇φ.
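The three-segment construction (27.56) can be carried out symbolically for a concrete field; in this sketch the example potential φ = x²y + z³ is my own choice, not from the notes:

```python
import sympy as sp

# Illustrate Eq. (27.56): for the (assumed) example field F = -grad(phi) with
# phi = x**2*y + z**3, integrating F along the three straight segments from
# the origin recovers -phi (the constant phi(0,0,0) = 0 here).
x, y, z, t = sp.symbols('x y z t')
phi = x**2 * y + z**3
F = [-sp.diff(phi, v) for v in (x, y, z)]   # curl-free by construction

seg1 = sp.integrate(F[0].subs({x: t, y: 0, z: 0}), (t, 0, x))
seg2 = sp.integrate(F[1].subs({y: t, z: 0}), (t, 0, y))
seg3 = sp.integrate(F[2].subs({z: t}), (t, 0, z))
print(sp.simplify(seg1 + seg2 + seg3 + phi))  # 0, i.e. the path integral is -phi
```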

Gauss’s theorem

Consider a vector field F(x) defined in a volume V which has a boundary that is a closed surface S = ∂V. Then

∯_∂V F · dS = ∫_V ∇ · F dV . (27.57)

Here we assume that directed surface elements are directed outwards from the volume. This is known as Gauss’s theorem or the divergence theorem.

Figure 27.8: Gauss’s Theorem

Proof. Let F = φ e_x + χ e_y + ψ e_z. Then Gauss’s theorem becomes

∯_∂V φ e_x · dS + ∯_∂V χ e_y · dS + ∯_∂V ψ e_z · dS = ∫_V ∂φ/∂x dV + ∫_V ∂χ/∂y dV + ∫_V ∂ψ/∂z dV . (27.58)

Let the volume be (see Fig. 27.8)

V = { (x, y, z) : (x, y) ∈ D , α(x, y) ≤ z ≤ β(x, y) } (27.59)

which is bounded by a lower surface S_1 with z = α(x, y) for (x, y) in D and an upper surface S_2 with z = β(x, y) for (x, y) in D so that S_1 + S_2 = ∂V.
Consider

∯_∂V ψ e_z · dS = ∬_{S_1} ψ e_z · dS + ∬_{S_2} ψ e_z · dS (27.60a)
              = −∬_D ψ(x, y, α(x, y)) dx dy + ∬_D ψ(x, y, β(x, y)) dx dy (27.60b)

where the minus sign arises because e_z · dS is negative on the lower surface.
Now consider

∫_V ∂ψ/∂z dV = ∬_D ∫_{z=α(x,y)}^{β(x,y)} ∂ψ/∂z dz dx dy (27.61a)
            = ∬_D [ ψ(x, y, β(x, y)) − ψ(x, y, α(x, y)) ] dx dy . (27.61b)

Thus we have

∯_∂V ψ e_z · dS = ∫_V ∂ψ/∂z dV . (27.62)

A similar argument for the x- and y-components completes the proof.

Ex. 27.4. Gauss’s law can be expressed as follows: if V is some volume and x_0 is some vector then

∯_∂V (x − x_0)/‖x − x_0‖³ · dS = { 4π if x_0 ∈ V ; 0 otherwise } . (27.63)

To show this, use Gauss’s theorem:

∯_∂V (x − x_0)/‖x − x_0‖³ · dS = ∫_V ∇ · [ (x − x_0)/‖x − x_0‖³ ] dV . (27.64)

It is straightforward to show that

∇ · [ (x − x_0)/‖x − x_0‖³ ] = 0 for x ≠ x_0 (27.65)

which proves the case for x_0 ∉ V.
Now consider a spherical ball V_ε, ‖x − x_0‖ < ε, which is a ball of radius ε centered on x_0:

∫_{V_ε} ∇ · [ (x − x_0)/‖x − x_0‖³ ] dV = ∯_{∂V_ε} (x − x_0)/‖x − x_0‖³ · dS = ∯_{∂V_ε} (ε²/ε⁴) dS = 4π (27.66)

since the normal to ∂V_ε is (x − x_0)/ε and the area of the surface is 4πε².
Taking the limit ε → 0 we obtain the identity

∇ · [ (x − x_0)/‖x − x_0‖³ ] = 4π δ³(x − x_0) (27.67)

where the three-dimensional Dirac delta function is

δ³(x) = δ(x)δ(y)δ(z) . (27.68)

Also, since

(x − x_0)/‖x − x_0‖³ = −∇ (1/‖x − x_0‖) (27.69)

we have the identity

∇² (1/‖x − x_0‖) = −4π δ³(x − x_0) . (27.70)

Therefore, for the case x_0 ∈ V, let V′ = V − V_ε be the volume with an infinitesimal ball about x_0 removed and we have

∯_∂V (x − x_0)/‖x − x_0‖³ · dS = ∫_{V′} ∇ · [ (x − x_0)/‖x − x_0‖³ ] dV + ∫_{V_ε} ∇ · [ (x − x_0)/‖x − x_0‖³ ] dV (27.71a)
                             = 0 + 4π = 4π (27.71b)

where the first term vanishes since x_0 ∉ V′ and the second term equals 4π.
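Equation (27.63) is easy to test numerically for a unit sphere with the source point either inside or outside; this check is my own sketch, not part of the notes:

```python
import numpy as np

# Numerical check of Eq. (27.63): the flux of (x - x0)/‖x - x0‖**3 through
# the unit sphere is 4π when x0 is inside and 0 when it is outside.
def flux(x0, n=400):
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(n) + 0.5) * 2 * np.pi / n
    T, P = np.meshgrid(th, ph, indexing='ij')
    # Points on the unit sphere; the outward unit normal equals the point itself.
    pts = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], -1)
    d = pts - x0
    integrand = np.sum(d * pts, -1) / np.linalg.norm(d, axis=-1)**3  # F · n
    return np.sum(integrand * np.sin(T)) * (np.pi / n) * (2 * np.pi / n)

print(flux(np.array([0.3, 0.0, 0.0])))  # ≈ 4π (source inside)
print(flux(np.array([2.0, 0.0, 0.0])))  # ≈ 0  (source outside)
```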

In electrostatics, Coulomb’s law states that the force on a charge q at position x produced by another charge q_0 at position x_0 is

F = (q q_0/4πε_0) (x − x_0)/‖x − x_0‖³ (27.72)

where ε_0 is the permittivity of free space. Define the electric field E = F/q so

E(x) = (q_0/4πε_0) (x − x_0)/‖x − x_0‖³ . (27.73)

Then Gauss’s law has the more familiar form

∯_∂V E(x) · dS = { q_0/ε_0 if q_0 is contained in V ; 0 otherwise } . (27.74)

In addition we have

∇ · E(x) = (q_0/ε_0) δ³(x − x_0) . (27.75)

A continuous charge distribution ρ(x) can be thought of as a sum over point charges in the neighborhood of x. Since the Coulomb forces combine as a linear vector sum, we can write

E(x) = (1/4πε_0) Σ_{i=1}^{N} q_i (x − x_i)/‖x − x_i‖³ = (1/4πε_0) ∫ ρ(x′) (x − x′)/‖x − x′‖³ dV′ (27.76)

and

∇ · E(x) = (1/4πε_0) ∫ ρ(x′) ∇ · [ (x − x′)/‖x − x′‖³ ] dV′ (27.77a)
         = (1/4πε_0) ∫ ρ(x′) 4π δ³(x − x′) dV′ (27.77b)
         = ρ(x)/ε_0 (27.77c)

which is also known as Gauss’s law.
Now Gauss’s theorem results in the following form of Gauss’s law:

∯_∂V E(x) · dS = ∫_V ∇ · E(x) dV = (1/ε_0) ∫_V ρ(x) dV = Q/ε_0 (27.78)

where Q is the total charge contained in V.



Green’s identities and other useful identities

From the divergence theorem with F = ψ ∇φ we obtain Green’s first identity

∯_∂V ψ ∇φ · dS = ∫_V (ψ∇²φ + ∇φ · ∇ψ) dV (27.79a)

and from this we obtain Green’s second identity

∯_∂V (ψ ∇φ − φ ∇ψ) · dS = ∫_V (ψ∇²φ − φ∇²ψ) dV . (27.79b)

Other useful identities are

∯_∂V φ dS = ∫_V ∇φ dV (27.80a)
∯_∂V A × dS = −∫_V ∇ × A dV (27.80b)
∮_∂S φ ds = −∬_S ∇φ × dS . (27.80c)

Integration by parts for volumes gives the rule

∫_V A · ∇φ dV = ∯_∂V φ A · dS − ∫_V φ ∇ · A dV (27.81a)

or, written the other way,

∫_V φ ∇ · A dV = ∯_∂V φ A · dS − ∫_V A · ∇φ dV . (27.81b)

Helmholtz’s theorem

Any vector field F(x) defined in a volume V can be decomposed as

F(x) = −∇φ(x) + ∇ × A(x) (27.82a)

where

φ(x) = (1/4π) ∫_V [∇′ · F(x′)]/‖x − x′‖ dV′ − (1/4π) ∯_∂V F(x′)/‖x − x′‖ · dS′ (27.82b)
A(x) = (1/4π) ∫_V [∇′ × F(x′)]/‖x − x′‖ dV′ + (1/4π) ∯_∂V F(x′)/‖x − x′‖ × dS′ (27.82c)

and ∇′ is the gradient operator acting on x′. If V is all space and F vanishes faster than 1/‖x‖ as ‖x‖ → ∞ then the surface terms vanish.
Since ∇ × ∇φ = 0 and ∇ · (∇ × A) = 0, Helmholtz’s theorem implies any vector field can be decomposed into a longitudinal field F_L and a transverse field F_T:

F(x) = F_L(x) + F_T(x) where ∇ × F_L(x) = 0 and ∇ · F_T(x) = 0 . (27.83)

Proof. We now prove Helmholtz’s theorem:

F(x) = ∫_V F(x′) δ³(x − x′) dV′ (27.84a)
     = ∫_V F(x′) [ −(1/4π) ∇² (1/‖x − x′‖) ] dV′ (27.84b)
     = −(1/4π) ∇² ∫_V F(x′)/‖x − x′‖ dV′ (27.84c)
     = −∇[ (1/4π) ∇ · ∫_V F(x′)/‖x − x′‖ dV′ ] + ∇ × [ (1/4π) ∇ × ∫_V F(x′)/‖x − x′‖ dV′ ] (27.84d)

where in the last step we used ∇²G = ∇(∇ · G) − ∇ × (∇ × G); the first bracket is φ(x) and the second bracket is A(x). Now

φ(x) = (1/4π) ∇ · ∫_V F(x′)/‖x − x′‖ dV′ (27.85a)
     = (1/4π) ∫_V F(x′) · ∇(1/‖x − x′‖) dV′ (27.85b)
     = −(1/4π) ∫_V F(x′) · ∇′(1/‖x − x′‖) dV′ (27.85c)

since ∇(1/‖x − x′‖) = −∇′(1/‖x − x′‖), and now use the integration by parts rule:

φ(x) = −(1/4π) ∯_∂V F(x′)/‖x − x′‖ · dS′ + (1/4π) ∫_V [∇′ · F(x′)]/‖x − x′‖ dV′ . (27.85d)

A similar manipulation for A(x) completes the proof.

Uniqueness.
If both ∇ · F and ∇ × F are specified in V, as well as the normal component of F on ∂V, then F is uniquely determined. This is shown as follows: suppose G is a different vector field having the same divergence and curl in V and normal component on ∂V. Then

∇ · (F − G) = 0 and ∇ × (F − G) = 0 . (27.86)

The second implies we can write F − G = −∇φ and then the first implies ∇²φ = 0.
Now use Green’s first identity, Eq. (27.79a), with ψ = φ:

∯_∂V φ ∇φ · dS = ∫_V (φ ∇²φ + ∇φ · ∇φ) dV (27.87)

where the first term in the volume integrand vanishes since ∇²φ = 0. But ∇φ · dS = 0 on the surface ∂V since the normal components of F and G are the same on the surface, so the surface integral vanishes. Thus

∫_V ‖∇φ‖² dV = 0 . (27.88)

The integrand is non-negative, so this implies ∇φ = 0 and hence F = G.
We thus see that the Helmholtz decomposition is unique.
Ex. 27.5. Electrostatics and magnetostatics.
In electrostatics the electric field E(x) satisfies

∇ · E(x) = ρ(x)/ε_0 and ∇ × E(x) = 0 (27.89)

and in magnetostatics the magnetic field B(x) satisfies

∇ · B(x) = 0 and ∇ × B(x) = μ_0 j(x) (27.90)

where ρ(x) is a static electric charge density, j(x) is a steady electric current density, and μ_0 is the permeability of free space.
By Helmholtz’s theorem, the unique solutions to these equations are

E(x) = −(1/4πε_0) ∇ ∫ ρ(x′)/‖x − x′‖ dV′ = (1/4πε_0) ∫ ρ(x′) (x − x′)/‖x − x′‖³ dV′ (27.91)

and

B(x) = (μ_0/4π) ∇ × ∫ j(x′)/‖x − x′‖ dV′ = (μ_0/4π) ∫ j(x′) × (x − x′)/‖x − x′‖³ dV′ . (27.92)

These are the Coulomb law and the Biot-Savart law respectively.
28 Curvilinear Coordinates

General curvilinear coordinates q are specified by three functions x(q) or by their inverse q(x).
Basis vectors e_1, e_2, and e_3 are normal to surfaces of constant q_1, q_2, and q_3 respectively. In this basis the components of a vector A are A_1, A_2, and A_3 where

A = A_1 e_1 + A_2 e_2 + A_3 e_3 . (28.1)

Infinitesimal displacements are

dx = ∂x/∂q_1 dq_1 + ∂x/∂q_2 dq_2 + ∂x/∂q_3 dq_3 (28.2a)
dy = ∂y/∂q_1 dq_1 + ∂y/∂q_2 dq_2 + ∂y/∂q_3 dq_3 (28.2b)
dz = ∂z/∂q_1 dq_1 + ∂z/∂q_2 dq_2 + ∂z/∂q_3 dq_3 . (28.2c)

Pythagoras’s law requires that

(ds)² = ‖dx‖² = (dx)² + (dy)² + (dz)² (28.3)

is invariant. We thus have

(ds)² = ( ∂x/∂q_1 dq_1 + ∂x/∂q_2 dq_2 + ∂x/∂q_3 dq_3 )²
      + ( ∂y/∂q_1 dq_1 + ∂y/∂q_2 dq_2 + ∂y/∂q_3 dq_3 )²
      + ( ∂z/∂q_1 dq_1 + ∂z/∂q_2 dq_2 + ∂z/∂q_3 dq_3 )² (28.4a)
      = Σ_{i=1}^{3} Σ_{j=1}^{3} g_ij dq_i dq_j (28.4b)

28. Curvilinear Coordinates 221

where

g_ij = ∂x/∂q_i ∂x/∂q_j + ∂y/∂q_i ∂y/∂q_j + ∂z/∂q_i ∂z/∂q_j (28.4c)

are the components of the metric.
We restrict attention to orthogonal coordinate systems for which

g_ij = 0 for i ≠ j . (28.5)

Then it is conventional to define the scale factors h_i = √(g_ii) and then

(ds)² = (h_1 dq_1)² + (h_2 dq_2)² + (h_3 dq_3)² . (28.6)

We see that h_1 dq_1, h_2 dq_2, and h_3 dq_3 take the place of orthogonal rectilinear elements dx_1, dx_2, and dx_3, which can be oriented so that dx_1 = h_1 dq_1 is a displacement in the e_1 direction, dx_2 = h_2 dq_2 is a displacement in the e_2 direction, and dx_3 = h_3 dq_3 is a displacement in the e_3 direction. For these rectilinear coordinates aligned with the curvilinear coordinate surfaces

h_1 = ∂x_1/∂q_1 , h_2 = ∂x_2/∂q_2 , and h_3 = ∂x_3/∂q_3 . (28.7)

Note that the orientation of the basis vectors of the curvilinear coordinates e_1, e_2, and e_3 relative to a fixed rectilinear basis e_x, e_y, and e_z will change from point to point.
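The metric components (28.4c) and the scale factors can be computed directly with sympy; this sketch (not part of the notes) does so for spherical polar coordinates, where the off-diagonal g_ij vanish as required by Eq. (28.5):

```python
import sympy as sp

# Compute g_ij from Eq. (28.4c) for spherical polar coordinates and read off
# the scale factors h_i = sqrt(g_ii). The off-diagonal entries vanish, so the
# coordinates are orthogonal in the sense of Eq. (28.5).
r, th, ph = sp.symbols('r theta phi', positive=True)
q = (r, th, ph)
X = (r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th))

g = sp.Matrix(3, 3, lambda i, j: sp.simplify(
    sum(sp.diff(f, q[i]) * sp.diff(f, q[j]) for f in X)))
print(g)  # diagonal entries 1, r**2, r**2*sin(theta)**2
# so h_r = 1, h_theta = r, h_phi = r*sin(theta)
```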

Integrals

The line element is

ds = h_1 dq_1 e_1 + h_2 dq_2 e_2 + h_3 dq_3 e_3 (28.8)

and the line integral is therefore

∫_C A · ds = ∫_C (A_1 h_1 dq_1 + A_2 h_2 dq_2 + A_3 h_3 dq_3) . (28.9)

Similarly the area and volume elements are

dA = h_2 h_3 dq_2 dq_3 e_1 + h_3 h_1 dq_3 dq_1 e_2 + h_1 h_2 dq_1 dq_2 e_3 (28.10)

and

dV = h_1 h_2 h_3 dq_1 dq_2 dq_3 (28.11)

and so, for example, a double integral on a surface of constant q_3 with (q_1, q_2) ∈ D would be

∬_D φ dA = ∬_D φ h_1 h_2 dq_1 dq_2 (28.12)

while a volume integral would be

∫_V φ dV = ∫_V φ h_1 h_2 h_3 dq_1 dq_2 dq_3 . (28.13)

Derivatives

The gradient of a scalar field is

∇φ = ∂φ/∂x_1 e_1 + ∂φ/∂x_2 e_2 + ∂φ/∂x_3 e_3
   = (∂q_1/∂x_1)(∂φ/∂q_1) e_1 + (∂q_2/∂x_2)(∂φ/∂q_2) e_2 + (∂q_3/∂x_3)(∂φ/∂q_3) e_3 (28.14)

and so

∇φ = (1/h_1) ∂φ/∂q_1 e_1 + (1/h_2) ∂φ/∂q_2 e_2 + (1/h_3) ∂φ/∂q_3 e_3 . (28.15)

To obtain a formula for the divergence of a vector field, consider an infinitesimal volume of sides dx_1 = h_1 dq_1, dx_2 = h_2 dq_2, dx_3 = h_3 dq_3 at point (q_1, q_2, q_3) and use Gauss’s theorem ∫ ∇ · A dV = ∯ A · dS:

∇ · A h_1 h_2 h_3 dq_1 dq_2 dq_3 (28.16a)
  = [ (A_1 h_2 h_3)|_{(q_1+dq_1, q_2, q_3)} − (A_1 h_2 h_3)|_{(q_1, q_2, q_3)} ] dq_2 dq_3
  + [ (A_2 h_3 h_1)|_{(q_1, q_2+dq_2, q_3)} − (A_2 h_3 h_1)|_{(q_1, q_2, q_3)} ] dq_3 dq_1
  + [ (A_3 h_1 h_2)|_{(q_1, q_2, q_3+dq_3)} − (A_3 h_1 h_2)|_{(q_1, q_2, q_3)} ] dq_1 dq_2 (28.16b)
  ≈ ∂(A_1 h_2 h_3)/∂q_1 dq_1 dq_2 dq_3
  + ∂(A_2 h_3 h_1)/∂q_2 dq_2 dq_3 dq_1
  + ∂(A_3 h_1 h_2)/∂q_3 dq_3 dq_1 dq_2 . (28.16c)

The right hand side is the surface integral over all six faces.
Divide both sides by the volume element dV = h_1 h_2 h_3 dq_1 dq_2 dq_3:

∇ · A = (1/h_1 h_2 h_3) [ ∂(A_1 h_2 h_3)/∂q_1 + ∂(A_2 h_3 h_1)/∂q_2 + ∂(A_3 h_1 h_2)/∂q_3 ] . (28.17)

The Laplacian ∇²φ is obtained by setting A = ∇φ and computing ∇ · A:

∇²φ = (1/h_1 h_2 h_3) [ ∂/∂q_1( (h_2 h_3/h_1) ∂φ/∂q_1 ) + ∂/∂q_2( (h_3 h_1/h_2) ∂φ/∂q_2 ) + ∂/∂q_3( (h_1 h_2/h_3) ∂φ/∂q_3 ) ] . (28.18)

We derive the formula for the curl on a component-by-component basis.
Consider a square of sides dx_1 = h_1 dq_1 and dx_2 = h_2 dq_2 on the q_3 = const surface at point (q_1, q_2, q_3). By Stokes’s theorem, ∬ (∇ × A) · dS = ∮ A · ds,

(∇ × A) · e_3 h_1 h_2 dq_1 dq_2
  = [ (A_1 h_1)|_{(q_1, q_2, q_3)} − (A_1 h_1)|_{(q_1, q_2+dq_2, q_3)} ] dq_1
  + [ (A_2 h_2)|_{(q_1+dq_1, q_2, q_3)} − (A_2 h_2)|_{(q_1, q_2, q_3)} ] dq_2 (28.19a)
  ≈ −∂(A_1 h_1)/∂q_2 dq_2 dq_1 + ∂(A_2 h_2)/∂q_1 dq_1 dq_2 (28.19b)

and so

(∇ × A) · e_3 = (1/h_1 h_2) [ ∂(A_2 h_2)/∂q_1 − ∂(A_1 h_1)/∂q_2 ] . (28.19c)

A similar treatment for the other components of ∇ × A results in

∇ × A = Σ_{i=1}^{3} Σ_{j=1}^{3} Σ_{k=1}^{3} ε_ijk e_i (1/(h_j h_k)) ∂(A_k h_k)/∂q_j (28.20a)

      = (1/h_1 h_2 h_3) det[ e_1 h_1  e_2 h_2  e_3 h_3 ; ∂/∂q_1  ∂/∂q_2  ∂/∂q_3 ; A_1 h_1  A_2 h_2  A_3 h_3 ] (28.20b)

      = (1/h_2 h_3) [ ∂(A_3 h_3)/∂q_2 − ∂(A_2 h_2)/∂q_3 ] e_1
      + (1/h_3 h_1) [ ∂(A_1 h_1)/∂q_3 − ∂(A_3 h_3)/∂q_1 ] e_2
      + (1/h_1 h_2) [ ∂(A_2 h_2)/∂q_1 − ∂(A_1 h_1)/∂q_2 ] e_3 (28.20c)

where ε_ijk is the Levi-Civita symbol.
The vector Laplacian in general curvilinear coordinates is obtained from the above rules for the gradient, divergence, and curl via the formula

∇²A = ∇(∇ · A) − ∇ × (∇ × A) . (28.21)

In curvilinear coordinates it is not ∇²A_1 e_1 + ∇²A_2 e_2 + ∇²A_3 e_3, which is true only in rectilinear coordinates.

Cylindrical Coordinates

The cylindrical coordinates (ρ, ϕ, z) are defined by

x = ρ cos ϕ , y = ρ sin ϕ , and z = z (28.22)

or

ρ = √(x² + y²) , ϕ = arctan(y/x) , and z = z (28.23)

where 0 ≤ ρ < ∞, 0 ≤ ϕ ≤ 2π, and −∞ < z < ∞.
The scale factors are

h_ρ = 1 , h_ϕ = ρ , and h_z = 1 (28.24)

and the basis vectors are related to the Cartesian basis by

e_ρ = cos ϕ e_x + sin ϕ e_y and e_ϕ = −sin ϕ e_x + cos ϕ e_y (28.25)

or

e_x = cos ϕ e_ρ − sin ϕ e_ϕ and e_y = sin ϕ e_ρ + cos ϕ e_ϕ . (28.26)

The line, area, and volume elements are

ds = dρ e_ρ + ρ dϕ e_ϕ + dz e_z (28.27)
dA = ρ dϕ dz e_ρ + dz dρ e_ϕ + ρ dρ dϕ e_z (28.28)
dV = ρ dρ dϕ dz . (28.29)

The differential operators are

∇ψ = ∂ψ/∂ρ e_ρ + (1/ρ) ∂ψ/∂ϕ e_ϕ + ∂ψ/∂z e_z (28.30)

∇ · A = (1/ρ) ∂(ρA_ρ)/∂ρ + (1/ρ) ∂A_ϕ/∂ϕ + ∂A_z/∂z (28.31)

∇ × A = [ (1/ρ) ∂A_z/∂ϕ − ∂A_ϕ/∂z ] e_ρ + [ ∂A_ρ/∂z − ∂A_z/∂ρ ] e_ϕ + (1/ρ)[ ∂(ρA_ϕ)/∂ρ − ∂A_ρ/∂ϕ ] e_z (28.32)

∇²ψ = (1/ρ) ∂/∂ρ( ρ ∂ψ/∂ρ ) + (1/ρ²) ∂²ψ/∂ϕ² + ∂²ψ/∂z² (28.33)

∇²A = [ (1/ρ) ∂/∂ρ( ρ ∂A_ρ/∂ρ ) + (1/ρ²) ∂²A_ρ/∂ϕ² + ∂²A_ρ/∂z² − A_ρ/ρ² − (2/ρ²) ∂A_ϕ/∂ϕ ] e_ρ
    + [ (1/ρ) ∂/∂ρ( ρ ∂A_ϕ/∂ρ ) + (1/ρ²) ∂²A_ϕ/∂ϕ² + ∂²A_ϕ/∂z² − A_ϕ/ρ² + (2/ρ²) ∂A_ρ/∂ϕ ] e_ϕ
    + [ (1/ρ) ∂/∂ρ( ρ ∂A_z/∂ρ ) + (1/ρ²) ∂²A_z/∂ϕ² + ∂²A_z/∂z² ] e_z . (28.34)

Spherical Polar Coordinates

The spherical polar coordinates (r, θ, ϕ) are defined by

x = r sin θ cos ϕ , y = r sin θ sin ϕ , and z = r cos θ (28.35)

or

r = √(x² + y² + z²) , ϕ = arctan(y/x) , and θ = arccos( z/√(x² + y² + z²) ) (28.36)

where 0 ≤ r < ∞, 0 ≤ θ ≤ π, and 0 ≤ ϕ ≤ 2π.
The scale factors are

h_r = 1 , h_θ = r , and h_ϕ = r sin θ (28.37)

and the basis vectors are related to the Cartesian basis by

e_r = sin θ cos ϕ e_x + sin θ sin ϕ e_y + cos θ e_z (28.38a)
e_θ = cos θ cos ϕ e_x + cos θ sin ϕ e_y − sin θ e_z (28.38b)
e_ϕ = −sin ϕ e_x + cos ϕ e_y (28.38c)

or

e_x = sin θ cos ϕ e_r + cos θ cos ϕ e_θ − sin ϕ e_ϕ (28.39a)
e_y = sin θ sin ϕ e_r + cos θ sin ϕ e_θ + cos ϕ e_ϕ (28.39b)
e_z = cos θ e_r − sin θ e_θ . (28.39c)

The line, area, and volume elements are

ds = dr e_r + r dθ e_θ + r sin θ dϕ e_ϕ (28.40)
dA = r² sin θ dθ dϕ e_r + r sin θ dϕ dr e_θ + r dr dθ e_ϕ (28.41)
dV = r² sin θ dr dθ dϕ . (28.42)

The differential operators are

∇ψ = ∂ψ/∂r e_r + (1/r) ∂ψ/∂θ e_θ + [1/(r sin θ)] ∂ψ/∂ϕ e_ϕ (28.43)

∇ · A = (1/r²) ∂(r²A_r)/∂r + [1/(r sin θ)] ∂(sin θ A_θ)/∂θ + [1/(r sin θ)] ∂A_ϕ/∂ϕ (28.44)

∇ × A = [1/(r sin θ)] [ ∂(sin θ A_ϕ)/∂θ − ∂A_θ/∂ϕ ] e_r
      + [ (1/(r sin θ)) ∂A_r/∂ϕ − (1/r) ∂(rA_ϕ)/∂r ] e_θ
      + (1/r) [ ∂(rA_θ)/∂r − ∂A_r/∂θ ] e_ϕ (28.45)

∇²ψ = (1/r²) ∂/∂r( r² ∂ψ/∂r ) + [1/(r² sin θ)] ∂/∂θ( sin θ ∂ψ/∂θ ) + [1/(r² sin²θ)] ∂²ψ/∂ϕ² (28.46)

∇²A = [ ∇²A_r − (2/r²)A_r − (2/(r² sin θ)) ∂(sin θ A_θ)/∂θ − (2/(r² sin θ)) ∂A_ϕ/∂ϕ ] e_r
    + [ ∇²A_θ − (1/(r² sin²θ))A_θ + (2/r²) ∂A_r/∂θ − (2 cos θ/(r² sin²θ)) ∂A_ϕ/∂ϕ ] e_θ
    + [ ∇²A_ϕ − (1/(r² sin²θ))A_ϕ + (2/(r² sin θ)) ∂A_r/∂ϕ + (2 cos θ/(r² sin²θ)) ∂A_θ/∂ϕ ] e_ϕ . (28.47)
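The scalar Laplacian (28.46) is easy to exercise on test fields; this sketch (not part of the notes) checks that ψ = 1/r is harmonic away from the origin and that ∇²r² = 6:

```python
import sympy as sp

# Evaluate the spherical-polar scalar Laplacian of Eq. (28.46) on two test
# fields: psi = 1/r (harmonic for r > 0) and psi = r**2 (Laplacian equals 6).
r, th, ph = sp.symbols('r theta phi', positive=True)

def laplacian(psi):
    return sp.simplify(
        sp.diff(r**2 * sp.diff(psi, r), r) / r**2
        + sp.diff(sp.sin(th) * sp.diff(psi, th), th) / (r**2 * sp.sin(th))
        + sp.diff(psi, ph, 2) / (r**2 * sp.sin(th)**2))

print(laplacian(1/r), laplacian(r**2))  # 0 6
```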
Problems

Problem 34.
Find the eigenvalues and normalized eigenvectors of the matrix [1 2 3 ; 4 5 6 ; 7 8 9].
Keep 3 significant figures in your numerical answer.

Problem 35.
a) Let a and b be any two vectors in a linear vector space and let c = a + λb where λ is a scalar. By requiring c · c ≥ 0 for all λ, derive the Cauchy-Schwarz inequality

(a · a)(b · b) ≥ |a · b|² .

b) In an infinite-dimensional vector space with a set of n orthonormal vectors e_1, e_2, . . . , e_n satisfying e_i · e_j = δ_ij, i, j = 1 . . . n, use the results of part (a) to obtain Bessel’s inequality

Σ_{i=1}^{n} |x_i|² ≤ x · x where x_i = x · e_i , i = 1 . . . n .

Problem 36.
In 2 dimensions, show that if λ is the charge at the origin, then Gauss’s law is

φ = −(λ/2πε_0) ln ρ and E = −∇φ = (λ/2πε_0 ρ) e_ρ

where ρ is the radial distance from the charge.

Module VIII

Partial Differential Equations

29 Classification 231

30 Separation of Variables 235

31 Integral Transform Method 246

32 Green Functions 250

Problems 268


Motivation
Fundamental physical laws, from electrodynamics to quantum mechanics, are
formulated as partial differential equations. Here we examine methods to
solve these equations.
In this module we will solve several types of partial differential equations in a
series of examples. We will focus on second-order partial differential equations
involving the Laplacian operator ∇2 as these types of equations are the ones
most commonly encountered in basic physics problems.
29 Classification

Some commonly encountered partial differential equations:

• Vibrating string / 1-dimensional wave equation

∂²ψ/∂x² = (1/c²) ∂²ψ/∂t² with c² = (tension of string)/(linear density of string) . (29.1)

This is a hyperbolic equation.

• Laplace’s equation

∇²ψ = ∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z² = 0 . (29.2)

This is an elliptic equation.

• 3-dimensional wave equation

∇²ψ − (1/c²) ∂²ψ/∂t² = 0 . (29.3)

This is another hyperbolic equation.

• Diffusion equation

∇²ψ − (1/α) ∂ψ/∂t = 0 (29.4)

where α is the diffusion constant, e.g., if ψ is temperature then

α = (thermal conductivity)/[(specific heat capacity) · (density)] . (29.5)

This is a parabolic equation.

29. Classification 232

• Schrödinger equation

−(ℏ²/2m) ∇²ψ + V(x)ψ = iℏ ∂ψ/∂t (29.6)

where ψ(x) is the wavefunction of a particle, m is the mass of the particle, V(x) is the potential the particle moves in, and ℏ is the reduced Planck constant. This is again a parabolic equation.
If ψ ∝ e^{−iEt/ℏ} where E is the energy, the time-independent Schrödinger equation is

∇²ψ + (2m/ℏ²)[E − V(x)]ψ = 0 . (29.7)

This is an elliptic equation.

All of these are linear, second order, and homogeneous. The last implies that if ψ is a solution, any multiple of ψ is also a solution.
If a “force” or “source” is present, the equation is inhomogeneous, e.g.,

∂²ψ/∂x² − (1/c²) ∂²ψ/∂t² = −(1/tension) f(x, t) (29.8)

where f(x, t) is the force per unit length acting on the string.
An equation may be inhomogeneous due to a boundary condition, e.g., a vibrating string in which the end x = 0 is prescribed to move in a particular way:

ψ(0, t) = g(t) . (29.9)

The general solution is made up of any particular solution plus the general solution of the corresponding homogeneous problem.

Boundary Conditions
There are three commonly used types of boundary conditions:
• Dirichlet boundary conditions are ones in which è is specified at each
point on the boundary.
• Neumann boundary conditions are ones in which the normal derivative
n · ∇ è is specified at each point on the boundary where n is the unit normal
vector to the boundary surface.
• Cauchy boundary conditions are ones in which both è and n · ∇ è are
specified at each point on the boundary.
The goal is to choose appropriate boundary conditions so that a unique
solution is obtained.
Generally we use Dirichlet or Neumann boundary conditions for elliptic or
parabolic systems, and Cauchy boundary conditions for hyperbolic systems.

Ex. 29.1. Simplest hyperbolic equation.

∂²ψ/∂x² − (1/c²) ∂²ψ/∂t² = 0 . (29.10)

Change variables to

u = x − ct and v = x + ct . (29.11)

Lines of u = const and v = const are known as characteristics.
We have

∂/∂x = ∂u/∂x ∂/∂u + ∂v/∂x ∂/∂v = ∂/∂u + ∂/∂v (29.12a)
∂/∂t = ∂u/∂t ∂/∂u + ∂v/∂t ∂/∂v = −c ∂/∂u + c ∂/∂v (29.12b)

and so

( ∂/∂u + ∂/∂v )² ψ − ( −∂/∂u + ∂/∂v )² ψ = 0 (29.12c)

or

∂²ψ/∂u∂v = 0 . (29.12d)

This is the hyperbolic equation in its normal form.
The solution is immediate:

ψ(u, v) = f(u) + g(v) or ψ(x, t) = f(x − ct) + g(x + ct) (29.13)

where f and g are arbitrary functions, i.e., a superposition of a right-going wave and a left-going wave.
Suppose we specify the Cauchy boundary conditions ψ(t = 0, x) and ∂ψ/∂t(t = 0, x). Then

f(x) + g(x) = ψ(t = 0, x) (29.14a)
−f′(x) + g′(x) = (1/c) ∂ψ/∂t(t = 0, x) ⟹ −f(x) + g(x) = (1/c) ∫ ∂ψ/∂t(t = 0, x) dx . (29.14b)

Therefore

f(x) = ½ ψ(t = 0, x) − (1/2c) ∫ ∂ψ/∂t(t = 0, x) dx (29.15a)
g(x) = ½ ψ(t = 0, x) + (1/2c) ∫ ∂ψ/∂t(t = 0, x) dx . (29.15b)

Note: the arbitrary constant of integration is irrelevant as it cancels in the sum ψ = f + g.
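The d'Alembert form (29.13) can be verified symbolically for arbitrary f and g; this sketch (not part of the notes) substitutes ψ = f(x − ct) + g(x + ct) into the wave equation (29.10):

```python
import sympy as sp

# Symbolic check of Eq. (29.13): any psi = f(x - c*t) + g(x + c*t) satisfies
# the 1-dimensional wave equation (29.10) for arbitrary functions f and g.
x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')
psi = f(x - c*t) + g(x + c*t)

residual = sp.diff(psi, x, 2) - sp.diff(psi, t, 2) / c**2
print(sp.simplify(residual))  # 0
```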
30 Separation of Variables

Ex. 30.1. Wave equation in spherical-polar coordinates.

The 3-dimensional wave equation is

∇²ψ − (1/c²) ∂²ψ/∂t² = 0 . (30.1)

Look for a solution in which the t and x dependence factors:

ψ(t, x) = T(t)X(x) (30.2a)
⟹ T ∇²X − (X/c²) ∂²T/∂t² = 0 (30.2b)
⟹ ∇²X/X = (1/c²)(1/T) d²T/dt² (30.2c)

where the left-hand side is a function of x only and the right-hand side is a function of t only. In order for this to hold for all t and all x, each side must be constant.
Let −k² be the separation constant. Then

∇²X/X = −k² and (1/c²)(1/T) d²T/dt² = −k² . (30.3)

Note that the second is an ordinary differential equation which we now solve:

d²T/dt² + ω²T = 0 with ω = ck (30.4a)
⟹ T(t) = e^{±iωt} or T(t) = { sin ωt or cos ωt } (30.4b)

(the choice depends on the initial conditions).
The other equation, involving X(x), is

∇²X + k²X = 0 . (30.5)

This is the Helmholtz equation. We want to solve this in spherical-polar coordinates.
30. Separation of Variables 236

Express the Laplacian in spherical polar coordinates:

(1/r²) ∂/∂r( r² ∂X/∂r ) + [1/(r² sin θ)] ∂/∂θ( sin θ ∂X/∂θ ) + [1/(r² sin²θ)] ∂²X/∂ϕ² + k²X = 0 . (30.6)

Let X(r, θ, ϕ) = R(r)Θ(θ)Φ(ϕ) and divide by X:

(1/R)(1/r²) d/dr( r² dR/dr ) + (1/Θ)[1/(r² sin θ)] d/dθ( sin θ dΘ/dθ ) + (1/Φ)[1/(r² sin²θ)] d²Φ/dϕ² + k² = 0 (30.7)

where the last derivative term is the only term that depends on ϕ. Multiply by r² sin²θ:

(sin²θ/R) d/dr( r² dR/dr ) + (sin θ/Θ) d/dθ( sin θ dΘ/dθ ) + (1/Φ) d²Φ/dϕ² + k²r² sin²θ = 0 . (30.8)

The term (1/Φ) d²Φ/dϕ² depends only on ϕ ⟹ the equation separates!
Let the separation constant be −m². Then

(1/Φ) d²Φ/dϕ² = −m² ⟹ Φ(ϕ) = e^{±imϕ} (30.9)

and

(sin²θ/R) d/dr( r² dR/dr ) + (sin θ/Θ) d/dθ( sin θ dΘ/dθ ) − m² + k²r² sin²θ = 0 . (30.10)

Divide by sin²θ:

[ (1/R) d/dr( r² dR/dr ) + k²r² ] + [ (1/(Θ sin θ)) d/dθ( sin θ dΘ/dθ ) − m²/sin²θ ] = 0 (30.11)

where the first bracket depends only on r and the second only on θ. This equation again separates. Let the separation constant be ℓ(ℓ + 1). We then arrive at an angular equation

(1/(Θ sin θ)) d/dθ( sin θ dΘ/dθ ) − m²/sin²θ = −ℓ(ℓ + 1) (30.12)

and a radial equation

(1/r²) d/dr( r² dR/dr ) + [ k² − ℓ(ℓ + 1)/r² ] R = 0 . (30.13)

Solve the angular equation first. Let x = cos θ:

d/dθ = dx/dθ d/dx = −sin θ d/dx (30.14)

and so

(1/sin θ) d/dθ( sin θ dΘ/dθ ) = d/dx[ sin²θ dΘ/dx ] (30.15a)
                             = d/dx[ (1 − x²) dΘ/dx ] (30.15b)
                             = (1 − x²) d²Θ/dx² − 2x dΘ/dx . (30.15c)

The angular equation is thus

(1 − x²) d²Θ/dx² − 2x dΘ/dx + [ ℓ(ℓ + 1) − m²/(1 − x²) ] Θ = 0 . (30.16)

This is the associated Legendre equation so the solutions are

Θ(x) = { P_ℓ^m(x) or Q_ℓ^m(x) } . (30.17)

Note that when we choose the associated Legendre functions of the first kind, P_ℓ^m(x), which are the ones that are defined in −1 ≤ x ≤ 1 or 0 ≤ θ ≤ π, we have

Θ(θ)Φ(ϕ) = P_ℓ^m(cos θ) e^{imϕ} ∝ Y_ℓ^m(θ, ϕ) (30.18)

so the ℓ and m separation constants separate the solution into terms in which the angular parts are spherical harmonics.
Now solve the radial equation

(1/r²) d/dr( r² dR/dr ) + [ k² − ℓ(ℓ + 1)/r² ] R = 0 (30.19a)
⟹ r² d²R/dr² + 2r dR/dr + [ k²r² − ℓ(ℓ + 1) ]R = 0 . (30.19b)

Solutions to this equation are the spherical Bessel functions

R(r) = { j_ℓ(kr) or y_ℓ(kr) } . (30.20)
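That j_ℓ(kr) satisfies the radial equation (30.19b) can be checked numerically with scipy; this sketch (not part of the notes) uses the built-in first derivative and a central finite difference for the second derivative:

```python
import numpy as np
from scipy.special import spherical_jn

# Check that R(r) = j_ell(k*r) satisfies Eq. (30.19b):
#   r**2 R'' + 2 r R' + [k**2 r**2 - ell*(ell+1)] R = 0.
ell, k = 2, 1.5
r = np.linspace(0.5, 10.0, 200)
h = 1e-4  # step for the finite-difference second derivative

R = spherical_jn(ell, k * r)
R1 = k * spherical_jn(ell, k * r, derivative=True)       # dR/dr by chain rule
R2 = (spherical_jn(ell, k*(r + h)) - 2*R
      + spherical_jn(ell, k*(r - h))) / h**2             # d²R/dr², central FD

residual = r**2 * R2 + 2*r*R1 + (k**2 * r**2 - ell*(ell + 1)) * R
print(np.max(np.abs(residual)))  # close to zero (finite-difference error only)
```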


However, if k = 0 (corresponding to ∂ψ/∂t = 0, so solving Laplace’s equation rather than the wave equation), we have instead

d²R/dr² + (2/r) dR/dr − [ ℓ(ℓ + 1)/r² ] R = 0 (30.21)

and the solutions to this equation are

R(r) = { r^ℓ or r^{−(ℓ+1)} } . (30.22)

Therefore the solutions have the form

∇²ψ − (1/c²) ∂²ψ/∂t² = 0 :

ψ(t, r, θ, ϕ) = { e^{ikct} or e^{−ikct} } · { e^{imϕ} or e^{−imϕ} } · { P_ℓ^m(cos θ) or Q_ℓ^m(cos θ) } · { j_ℓ(kr) or y_ℓ(kr) } (30.23)

∇²ψ = 0 :

ψ(r, θ, ϕ) = { e^{imϕ} or e^{−imϕ} } · { P_ℓ^m(cos θ) or Q_ℓ^m(cos θ) } · { r^ℓ or r^{−(ℓ+1)} } . (30.24)

Any linear combination is a solution, but boundary conditions limit the allowed types of solutions.

Ex. 30.2. Vibrations of a round drum head.

We now solve the 2-dimensional wave equation

∇²u = (1/c²) ∂²u/∂t² (30.25)

in polar coordinates.
The normal modes are periodic solutions u(t, x) = u(x)e^{iωt}

⟹ ∇²u + k²u = 0 (30.26)

where k = ω/c is the wave number.
In 2-dimensional polar coordinates, this is

(1/r) ∂/∂r( r ∂u/∂r ) + (1/r²) ∂²u/∂ϕ² + k²u = 0 . (30.27)

Let u = R(r)Φ(ϕ) and separate:

d²Φ/dϕ² + m²Φ = 0 ⟹ Φ(ϕ) = e^{±imϕ} (30.28a)
d²R/dr² + (1/r) dR/dr + ( k² − m²/r² )R = 0 ⟹ R(r) = { J_m(kr) or Y_m(kr) } (30.28b)

(the second is Bessel’s equation) and so our solutions are of the form

u(r, ϕ) = { e^{imϕ} or e^{−imϕ} } · { J_m(kr) or Y_m(kr) } . (30.29)

Boundary conditions:

• Require solutions to be periodic in ϕ so that u(r, ϕ = 0) = u(r, ϕ = 2π)
  ⟹ m is an integer.
• Dirichlet boundary conditions on the edge of the drum require u(r = a, ϕ) = 0
  ⟹ J_m(ka) = 0 .

Note: the Y_m(kr) solutions are unacceptable because they are not regular at r = 0.
Thus, only certain values of k are allowed:

k_mn = x_mn/a (30.30)

where x_mn is the nth zero of J_m(x):

J_0(x) = 0 for x_01 ≈ 2.40 , x_02 ≈ 5.52 , x_03 ≈ 8.65 , . . . (30.31a)
J_1(x) = 0 for x_11 ≈ 3.83 , x_12 ≈ 7.02 , x_13 ≈ 10.17 , . . . (30.31b)
J_2(x) = 0 for x_21 ≈ 5.14 , x_22 ≈ 8.42 , x_23 ≈ 11.62 , . . . (30.31c)
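The quoted zeros x_mn can be reproduced with scipy's Bessel-zero routine; this sketch is my own check, not part of the notes:

```python
import numpy as np
from scipy.special import jn_zeros

# Reproduce the Bessel-function zeros x_mn quoted in Eqs. (30.31a-c);
# jn_zeros(m, n) returns the first n positive zeros of J_m(x).
for m in (0, 1, 2):
    print(m, np.round(jn_zeros(m, 3), 2))
# m = 0: 2.40, 5.52, 8.65
# m = 1: 3.83, 7.02, 10.17
# m = 2: 5.14, 8.42, 11.62
```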

The lowest-frequency modes have

• k_01 = 2.40/a , ω_01 = 2.40 c/a , u ∝ J_0(2.40 r/a) .
  (Figure 30.1: Drum 01 Mode.) There are no nodes inside the rim.

• k_11 = 3.83/a , ω_11 = 3.83 c/a , u ∝ J_1(3.83 r/a) · { cos ϕ or sin ϕ } .
  (Figure 30.2: Drum 11 Modes; the white dashed lines are the nodes.)
  Note: there are two degenerate modes belonging to the same eigenfrequency.

• k_21 = 5.14/a , ω_21 = 5.14 c/a , u ∝ J_2(5.14 r/a) · { cos 2ϕ or sin 2ϕ } .
  (Figure 30.3: Drum 21 Modes; the white dashed lines are the nodes.)
  Note: there are two degenerate modes belonging to the same eigenfrequency.

• k_02 = 5.52/a , ω_02 = 5.52 c/a , u ∝ J_0(5.52 r/a) .
  (Figure 30.4: Drum 02 Mode; the white dashed line is the node.)


The generalization to a cylinder is straightforward: first separate out the z-dependence (with separation constant α) then proceed as in the 2-dimensional example.
The Laplacian in cylindrical coordinates (ρ, φ, z) is

∇² = ∂²/∂ρ² + (1/ρ) ∂/∂ρ + (1/ρ²) ∂²/∂φ² + ∂²/∂z² .        (30.32)

• Laplace's equation, ∇²ψ = 0 :

  ψ(ρ, φ, z) = { J_m(αρ) , Y_m(αρ) } · { e^{αz} , e^{−αz} } · { e^{imφ} , e^{−imφ} } .        (30.33)

• Helmholtz equation, ∇²ψ + k²ψ = 0 :

  ψ(ρ, φ, z) = { J_m(√(k²−α²) ρ) , Y_m(√(k²−α²) ρ) } · { e^{iαz} , e^{−iαz} } · { e^{imφ} , e^{−imφ} } .        (30.34)


Ex. 30.3. Cube in a hot bath.

A cube with sides L is immersed in a heat bath at temperature T = T0. The initial temperature of the cube is T = 0. The warming of the cube is described by the heat equation

∇²T = (1/α) ∂T/∂t   with   α = k/(cρ)        (30.35)

where k is the thermal conductivity, c is the specific heat capacity, and ρ is the density of the cube.
Let T ∝ e^{−λt}. Then

∇²T + (λ/α) T = 0        (30.36a)
=⇒ ∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z² = −(λ/α) T .        (30.36b)

Now separate the spatial variables: T ∝ e^{iax} e^{iby} e^{icz}

=⇒ a² + b² + c² = λ/α .        (30.37)

Boundary conditions: all six faces must be at T = T0.
This is an inhomogeneous boundary condition. A particular solution is Tp = T0.
Now we need to find the complementary function Tc which must satisfy the homogeneous boundary conditions:

T = 0   for   x = 0, L ;  y = 0, L ;  z = 0, L .        (30.38)

We find

Tc ∝ sin(ℓπx/L) sin(mπy/L) sin(nπz/L)        (30.39a)

with

(ℓπ/L)² + (mπ/L)² + (nπ/L)² = λ/α .        (30.39b)

Therefore, T = Tp + Tc:

T = T0 + Σ_{ℓ=1}^∞ Σ_{m=1}^∞ Σ_{n=1}^∞ c_ℓmn sin(ℓπx/L) sin(mπy/L) sin(nπz/L) e^{−λ_ℓmn t}        (30.40a)

where

λ_ℓmn = α (π²/L²) (ℓ² + m² + n²) .        (30.40b)
L

To determine the coefficients c_ℓmn use the condition T = 0 at t = 0:

Σ_{ℓ=1}^∞ Σ_{m=1}^∞ Σ_{n=1}^∞ c_ℓmn sin(ℓπx/L) sin(mπy/L) sin(nπz/L) = −T0 .        (30.41)

Multiply by sin(ℓ′πx/L) sin(m′πy/L) sin(n′πz/L) and integrate ∫₀^L dx, ∫₀^L dy, ∫₀^L dz (i.e., over the whole cube). Then we obtain

c_ℓmn = { −64/(π³ℓmn)   ℓ, m, n all odd
        { 0              otherwise.        (30.42)

We have finally

T(t, x, y, z) = T0 − (64/π³) T0 Σ_{ℓ,m,n = 1,3,5,...} [1/(ℓmn)] sin(ℓπx/L) sin(mπy/L) sin(nπz/L) exp[ −(ℓ² + m² + n²)π²αt/L² ] .        (30.43)

This series solution works well at late times, when the exponential kills all but the lowest modes, but at early times we will need to keep a large number of terms in the sums to get an accurate result.
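The truncation behavior is easy to see numerically. The sketch below is my own illustration (unit T0, L, and α are arbitrary choices): summing Eq. (30.43) at the center of the cube, the truncated series starts near T = 0 and relaxes to the bath temperature T0.

```python
import math

def cube_T(t, x, y, z, T0=1.0, L=1.0, alpha=1.0, nmax=31):
    """Eq. (30.43) truncated at nmax (odd indices only)."""
    s = 0.0
    for l in range(1, nmax + 1, 2):
        for m in range(1, nmax + 1, 2):
            for n in range(1, nmax + 1, 2):
                s += (math.sin(l * math.pi * x / L)
                      * math.sin(m * math.pi * y / L)
                      * math.sin(n * math.pi * z / L) / (l * m * n)
                      * math.exp(-(l*l + m*m + n*n) * math.pi**2 * alpha * t / L**2))
    return T0 - 64.0 / math.pi**3 * T0 * s

early = cube_T(0.001, 0.5, 0.5, 0.5)   # center is still essentially cold
late = cube_T(0.5, 0.5, 0.5, 0.5)      # center has equilibrated to T0
```

At t = 0 the triple sum reproduces (π/4)³ per factor, cancelling T0 exactly; by αt/L² ≈ 0.5 only the (1,1,1) mode survives and it is damped by e^{−3π²/2} ≈ 4×10⁻⁷.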

Ex. 30.4. Heating of a slab.

Consider a slab of thickness d in the x-direction that is infinite in the y- and z-directions as shown in Fig. 30.5. The face at x = d is insulated while the face at x = 0 is heated at a constant rate q. Initially the slab is at T = 0.
We must solve the 1-dimensional diffusion equation

∂²T/∂x² − (1/α) ∂T/∂t = 0 ,   α = k/(cρ)        (30.44)

Figure 30.5: Slab Heating

with inhomogeneous boundary conditions.
As before, we seek a particular solution Tp to which we will add a complementary function Tc,

T = Tp + Tc        (30.45)

where Tc is a solution to the problem with homogeneous boundary conditions.

• Particular solution.
  Eventually we expect the temperature to rise linearly with time as heat is added. Try

  Tp(t, x) = u(x) + ηt .        (30.46)

  This results in a separation of variables:

  d²u/dx² = η/α        (30.47a)
  =⇒ u(x) = (1/2)(η/α) x² + ax + b .        (30.47b)

  To determine a and b, we employ the boundary conditions.
  From Fourier's law of conduction, q = −k∇T, where q is the heat flux density, so the temperature gradient satisfies

  u′(0) = −q/k   and   u′(d) = 0  (insulated)        (30.48)

  and we find

  u(x) = (1/2)(q/kd)(x − d)²   and   η = qα/(kd) = q/(cρd) .        (30.49)

  Therefore

  Tp(t, x) = (1/2)(q/kd)(x − d)² + (q/kd)αt .        (30.50)
2 kd kd

To this we need to add a complementary function (that satisfies the homogeneous boundary conditions) in order to satisfy the initial condition

T(t = 0, x) = Tp(t = 0, x) + Tc(t = 0, x) = 0 .        (30.51)

• Complementary function.
  Write Tc(t, x) ∝ e^{−λt} e^{iax}  =⇒  a² = λ/α .
  The homogeneous boundary conditions (Neumann) are:

  ∂Tc/∂x |_{x=0} = ∂Tc/∂x |_{x=d} = 0        (30.52)

  and so e^{iax} becomes cos(ax) with a = nπ/d, so

  Tc(t, x) = A0/2 + Σ_{n=1}^∞ A_n cos(nπx/d) e^{−λ_n t} ,   λ_n = α π²n²/d² .        (30.53)

  At t = 0, Tc = −Tp so

  A0/2 + Σ_{n=1}^∞ A_n cos(nπx/d) = −(1/2)(q/kd)(x − d)²        (30.54a)

  and we solve for A0 and A_n, n = 1, 2, . . .:

  A0 = −(2/d) ∫₀^d (1/2)(q/kd)(x − d)² dx = −(1/3)(qd/k)        (30.54b)
  A_n = −(2/d) ∫₀^d (1/2)(q/kd)(x − d)² cos(nπx/d) dx = −2(qd/k) 1/(nπ)² .        (30.54c)

The complete solution is

T(t, x) = (1/2)(q/kd)(x − d)² + (q/kd)αt
          − (qd/k) [ 1/6 + (2/π²) Σ_{n=1}^∞ (1/n²) cos(nπx/d) e^{−αn²π²t/d²} ]        (30.55)

where the constant term in the brackets is A0/2 = −qd/(6k); one can check that T(0, 0) = 0 since Σ 1/n² = π²/6.
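A numerical check of the slab solution (an illustrative sketch with unit q, k, d, and α; the constant term A0/2 = −qd/(6k) follows from Eq. (30.54b)): the truncated series should vanish at t = 0 across the slab and grow with time afterwards.

```python
import math

def slab_T(t, x, q=1.0, k=1.0, d=1.0, alpha=1.0, nmax=2000):
    """Truncated series solution for the heated slab (cf. Eq. 30.55),
    using the constant term A0/2 = -qd/(6k) from Eq. (30.54b)."""
    series = sum(math.cos(n * math.pi * x / d) / n**2
                 * math.exp(-alpha * n**2 * math.pi**2 * t / d**2)
                 for n in range(1, nmax + 1))
    return (0.5 * q / (k * d) * (x - d)**2 + q * alpha * t / (k * d)
            - q * d / k * (1.0 / 6.0 + 2.0 / math.pi**2 * series))

# Initial condition: T(0, x) = 0 across the slab (up to truncation error).
residual = max(abs(slab_T(0.0, 0.01 * i)) for i in range(101))
```

The residual is limited only by the 1/n² tail of the truncated cosine series, and the temperature at any interior point rises monotonically once the transient decays.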
31 Integral Transform Method

Ex. 31.1. Find the temperature distribution T(t, x) of an infinite solid if we are given an initial distribution T(t = 0, x) = f(x).
Note: there is no y- or z-dependence so this is a 1-dimensional problem:

∂²T/∂x² = (1/α) ∂T/∂t .        (31.1)

Let

T(t, x) = (1/2π) ∫_{−∞}^{∞} F(t, k) e^{ikx} dk   ⇐⇒   F(t, k) = ∫_{−∞}^{∞} T(t, x) e^{−ikx} dx .        (31.2)

Then

−k² F(t, k) = (1/α) ∂F(t, k)/∂t   =⇒   F(t, k) = g(k) e^{−k²αt}        (31.3)

where we must determine g(k) from the initial conditions.
At t = 0,

F(t = 0, k) = ∫_{−∞}^{∞} T(0, x) e^{−ikx} dx = ∫_{−∞}^{∞} f(x) e^{−ikx} dx        (31.4)

but F(t = 0, k) = g(k) so

g(k) = ∫_{−∞}^{∞} f(x) e^{−ikx} dx .        (31.5)

Thus

F(t, k) = e^{−k²αt} ∫_{−∞}^{∞} f(x) e^{−ikx} dx .        (31.6)


Therefore

T(t, x) = (1/2π) ∫_{k=−∞}^{∞} ∫_{x′=−∞}^{∞} e^{−k²αt} f(x′) e^{−ikx′} e^{ikx} dx′ dk        (31.7a)
        = ∫_{x′=−∞}^{∞} f(x′) [ (1/2π) ∫_{k=−∞}^{∞} e^{ik(x−x′)} e^{−k²αt} dk ] dx′        (31.7b)

where the bracketed integral evaluates to √(1/4παt) e^{−(x−x′)²/4αt}, and so we have

T(t, x) = ∫_{−∞}^{∞} f(x′) √(1/4παt) e^{−(x−x′)²/4αt} dx′ .        (31.8)

Note:

G(t, x; x′) = √(1/4παt) e^{−(x−x′)²/4αt}        (31.9)

is a Green function for this problem.
Suppose the initial source is the plane source f(x) = δ(x). Then

T(t, x) = √(1/4παt) e^{−x²/4αt} = G(t, x; 0) ,   t > 0 .        (31.10)

This is a Gaussian of width √(2αt). We see that an initial delta-like distribution spatially diffuses with time as shown in Fig. 31.1.

Figure 31.1: Heat Diffusion (the distribution T(x) at t = 0⁺, at small t, and at large t)
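A quick check of Eq. (31.10) (my own sketch, plain Python): integrating the kernel numerically shows its area stays 1 while its variance grows as 2αt, which is the diffusive spreading of Fig. 31.1.

```python
import math

def heat_kernel(t, x, alpha=1.0):
    """Plane-source Green function, Eq. (31.9) with x' = 0."""
    return (math.sqrt(1.0 / (4.0 * math.pi * alpha * t))
            * math.exp(-x * x / (4.0 * alpha * t)))

def moments(t, alpha=1.0, L=50.0, n=20001):
    """Trapezoidal area and variance of the kernel on [-L, L]."""
    h = 2.0 * L / (n - 1)
    xs = [-L + i * h for i in range(n)]
    w = [heat_kernel(t, x, alpha) for x in xs]
    area = h * (sum(w) - 0.5 * (w[0] + w[-1]))
    var = h * sum(x * x * wi for x, wi in zip(xs, w))
    return area, var

area1, var1 = moments(t=1.0)   # expect area 1, variance 2*alpha*t = 2
area4, var4 = moments(t=4.0)   # expect area 1, variance 8
```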



We can use this solution to find the distribution from a point source δ³(x).
Let G(t, x; 0) be the response to the plane source δ(x) at t = 0.
Let g(t, r) be the response to the point source δ³(x) at t = 0.
Then we must have (see Fig. 31.2)

G(t, x; 0) = 2π ∫₀^∞ g(t, r) ρ dρ        (31.11)

(a superposition of points lying on the x = 0 plane) and r² = ρ² + x²  =⇒  r dr = ρ dρ so

G(t, x; 0) = 2π ∫_x^∞ g(t, r) r dr .        (31.12)

Figure 31.2: Point Source Integral

=⇒ ∂G(t, x; 0)/∂x = −2πx g(t, x)        (31.13a)
=⇒ g(t, r) = −(1/2πr) ∂G(t, x; 0)/∂x |_{x=r}        (31.13b)

We find

g(t, r) = (1/4παt)^{3/2} e^{−r²/4αt} ,   t > 0 .        (31.14)

Thus the Green function for an infinite solid is

G(t, x; x₀) = (1/4παt)^{3/2} e^{−‖x−x₀‖²/4αt} ,   t > 0 .        (31.15)
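Equation (31.12) can be verified directly: integrating the point-source response (31.14) over the plane reproduces the plane-source kernel. An illustrative check (my own sketch) with α = t = 1:

```python
import math

def g_point(t, r, alpha=1.0):
    """Point-source response, Eq. (31.14)."""
    return (4.0 * math.pi * alpha * t) ** -1.5 * math.exp(-r * r / (4.0 * alpha * t))

def plane_from_points(t, x, alpha=1.0, R=60.0, n=60001):
    """Right-hand side of Eq. (31.12): 2*pi * integral from x to R of g(t,r) r dr."""
    h = (R - x) / (n - 1)
    rs = [x + i * h for i in range(n)]
    vals = [2.0 * math.pi * g_point(t, r, alpha) * r for r in rs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule

# Compare with the plane-source kernel G(t, x; 0) of Eq. (31.10) at t = 1, x = 1.
lhs = math.sqrt(1.0 / (4.0 * math.pi)) * math.exp(-0.25)
rhs = plane_from_points(t=1.0, x=1.0)
```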

Ex. 31.2. Consider the response of a semi-infinite solid x > 0 to a point initial temperature distribution at x = a, y = z = 0, δ(x − a)δ(y)δ(z), if the entire solid is initially at T = 0 (except at the point) and the boundary x = 0 is maintained at T = 0, as shown in Fig. 31.3.
We will solve this using the method of images.
In the previous example we saw that the Green function for a point source in an infinite solid is

G(t, x; x₀) = (1/4παt)^{3/2} e^{−‖x−x₀‖²/4αt} ,   t > 0 .        (31.16)

To enforce the boundary condition T = 0 at x = 0 for the semi-infinite solid, superimpose a source function at x = a, y = z = 0, with a negative source function at x = −a, y = z = 0; the second is a fictitious image source required to maintain T = 0 at x = 0:

T(t, x, y, z) = (1/4παt)^{3/2} { exp[ −((x − a)² + y² + z²)/4αt ] − exp[ −((x + a)² + y² + z²)/4αt ] } ,   t > 0 , x > 0 .        (31.17)

Figure 31.3: Image Source
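The image construction is easy to test numerically (an illustrative sketch; a, α, and the sample points are my own choices): the boundary plane x = 0 stays exactly at T = 0 while interior points heat up.

```python
import math

def semi_infinite_T(t, x, y, z, a=1.0, alpha=1.0):
    """Image-source solution, Eq. (31.17)."""
    pref = (4.0 * math.pi * alpha * t) ** -1.5
    s_direct = (x - a)**2 + y*y + z*z    # distance^2 to the real source
    s_image = (x + a)**2 + y*y + z*z     # distance^2 to the image source
    return pref * (math.exp(-s_direct / (4*alpha*t))
                   - math.exp(-s_image / (4*alpha*t)))

# On the plane x = 0 the two exponentials are identical, so T vanishes exactly.
boundary = max(abs(semi_infinite_T(0.7, 0.0, y, z))
               for y in (0.0, 0.5, 2.0) for z in (0.0, 1.0))
interior = semi_infinite_T(0.7, 1.0, 0.0, 0.0)   # strictly positive
```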


32 Green Functions

We have the following methods for finding Green functions:

• Sum over eigenfunctions (discussed previously).
• Use solutions to the homogeneous equation and boundary conditions on either side of a surface containing the source point that are matched on that surface with the required jump.
• Take the sum of a singular fundamental solution and a smooth solution of the homogeneous problem which fixes the boundary conditions.

We explore the latter two methods in the following example.

Ex. 32.1. Circular drum.

∇²u + k²u = 0        (32.1)

with u = 0 when r = a.
Clearly G(x, x′) depends only on r, r′, and θ. We have

∇²G + k²G = δ²(x − x′) .        (32.2)

Figure 32.1: Circular Drum

For x ≠ x′, ∇²G + k²G = 0, so the solution that satisfies the boundary conditions is

G = { Σ_{m=0}^∞ A_m J_m(kr) cos mθ                                     r < r′
    { Σ_{m=0}^∞ B_m [J_m(kr)Y_m(ka) − Y_m(kr)J_m(ka)] cos mθ           r > r′ .        (32.3)

Note that the factor in square brackets vanishes automatically when r = a.
Note also that G is an even function of θ periodic in 2π.
To determine A_m and B_m we must match the solutions along the circle r = r′.
G is continuous but its gradient is discontinuous at r = r′.
We need to determine the jump in the gradient across this surface.


Recall Gauss's theorem, ∫ ∇·F dV = ∮ F·dS. In 2 dimensions, with F = ∇G, we have

∬ ∇²G dA = ∮ n·∇G ds .        (32.4)

Integrate the inhomogeneous equation over the area element shown in Fig. 32.2:

∬ ∇²G dA + k² ∬ G dA = ∬ δ²(x − x′) dA        (32.5a)

where the first term becomes ∮ n·∇G ds, the second vanishes as ε → 0, and the right-hand side is 1. So, as ε → 0, only the arcs above and below r = r′ contribute to the line integral:

∫_{r′+ε} (∂G/∂r) ds − ∫_{r′−ε} (∂G/∂r) ds = 1        (32.5b)

and, since ds = r′ dθ for the arcs,

∫ ( ∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} ) dθ = 1/r′        (32.5c)

provided θ = 0 is in the domain of integration (the integral vanishes for any arc excluding θ = 0)

=⇒ ∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = (1/r′) δ(θ) .        (32.5d)

Let

∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = Σ_{m=0}^∞ c_m cos mθ        (32.5e)

=⇒ Σ_{m=0}^∞ c_m cos mθ = (1/r′) δ(θ) .        (32.5f)

Multiply both sides by cos m′θ and integrate ∫_{−π}^{π} dθ to get

2πc₀ = 1/r′   and   πc_m = 1/r′ ,   m = 1, 2, . . . .        (32.5g)

Therefore

∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = 1/(2πr′) + (1/πr′) Σ_{m=1}^∞ cos mθ .        (32.5h)

This is the requirement for the discontinuity of the gradient of G at r = r′.

Figure 32.2: Green Function Integral

Thus, at r′ = r, we require

A_m J_m(kr′) = B_m [J_m(kr′)Y_m(ka) − Y_m(kr′)J_m(ka)] ,   m = 0, 1, 2, . . .        (32.6a)
B₀ [J₀′(kr′)Y₀(ka) − Y₀′(kr′)J₀(ka)] − A₀ J₀′(kr′) = 1/(2πkr′)        (32.6b)
B_m [J_m′(kr′)Y_m(ka) − Y_m′(kr′)J_m(ka)] − A_m J_m′(kr′) = 1/(πkr′) ,   m = 1, 2, . . . .        (32.6c)

The solution is

A₀ = [J₀(ka)Y₀(kr′) − J₀(kr′)Y₀(ka)] / [4J₀(ka)]        (32.7a)
B₀ = −J₀(kr′) / [4J₀(ka)]        (32.7b)
A_m = [J_m(ka)Y_m(kr′) − J_m(kr′)Y_m(ka)] / [2J_m(ka)] ,   m = 1, 2, . . .        (32.7c)
B_m = −J_m(kr′) / [2J_m(ka)] ,   m = 1, 2, . . .        (32.7d)

where we have used the Wronskian identity J_m(x)Y_m′(x) − J_m′(x)Y_m(x) = 2/(πx).
Thus the Green function is

G(x, x′) = J₀(kr<)[J₀(ka)Y₀(kr>) − J₀(kr>)Y₀(ka)] / [4J₀(ka)]
         + Σ_{m=1}^∞ { J_m(kr<)[J_m(ka)Y_m(kr>) − J_m(kr>)Y_m(ka)] / [2J_m(ka)] } cos mθ        (32.8)

with

r< = { r   r < r′ ;  r′   r > r′ }   and   r> = { r′   r < r′ ;  r   r > r′ }        (32.9)

where r = ‖x‖, r′ = ‖x′‖, and cos θ = x·x′/(rr′).



Ex. 32.1 (continued). Alternative approach for the circular drum.

Note that we want to solve ∇²G + k²G = δ²(x − x′) so G must
(i) have the proper singular behavior at x = x′, and
(ii) satisfy the boundary conditions.
Therefore we seek a solution of the form

G(x, x′) = u(x, x′) + v(x, x′)        (32.10)

where u(x, x′), known as the fundamental solution, is singular at x = x′ but does not satisfy the boundary conditions, and v(x, x′) is a smooth solution of the homogeneous problem that fixes the boundary conditions.
To find u(x, x′), let ρ = ‖x − x′‖ and write u = u(ρ). Integrate over a small circular disk about ρ = 0:

2π ∫₀^ρ (∇²u) ρ′ dρ′ + 2π ∫₀^ρ (k²u) ρ′ dρ′ = ∬ δ²(x − x′) dA .        (32.11)

The first term equals 2πρ du/dρ, the second vanishes as ρ → 0, and the right-hand side is 1.
As ρ → 0, we have 2πρ du/dρ = 1 so

u(ρ) ∼ (1/2π) ln ρ + const   as ρ → 0.        (32.12)

Recall the singular solution to ∇²u + k²u = 0 is

Y₀(kρ) ∼ (2/π) ln ρ + const   as ρ → 0        (32.13)

so take u(ρ) = (1/4) Y₀(kρ) and so

G = (1/4) Y₀(kρ) + v(x, x′) .        (32.14)

We now find v(x, x′) by fixing the boundary conditions.



Since v is a solution to the homogeneous equation, it can be written as

v = Σ_{n=0}^∞ A_n J_n(kr) cos nθ .        (32.15)

Thus, at r = a, we have

G(r = a) = 0 = (1/4) Y₀( k√(a² + r′² − 2ar′ cos θ) ) + Σ_{n=0}^∞ A_n J_n(ka) cos nθ        (32.16a)

(the argument of Y₀ is ρ evaluated at r = a), so

A₀ = −[1/(8πJ₀(ka))] ∫₀^{2π} Y₀( k√(a² + r′² − 2ar′ cos θ) ) dθ        (32.16b)

and

A_n = −[1/(4πJ_n(ka))] ∫₀^{2π} Y₀( k√(a² + r′² − 2ar′ cos θ) ) cos nθ dθ        (32.16c)

for n = 1, 2, . . . .
Therefore, another form of the Green function is

G(x, x′) = (1/4) Y₀( k‖x − x′‖ )
         − [J₀(kr)/(4πJ₀(ka))] ∫₀^{π} Y₀( k√(a² + r′² − 2ar′ cos θ′) ) dθ′
         − Σ_{n=1}^∞ [J_n(kr) cos nθ / (2πJ_n(ka))] ∫₀^{π} Y₀( k√(a² + r′² − 2ar′ cos θ′) ) cos nθ′ dθ′ .        (32.17)

Ex. 32.2. Heating of a slab (redux).

We've seen that problems that are inhomogeneous due to the boundary conditions rather than the differential equation may still be written in terms of a Green function.
Alternatively, a homogeneous equation with inhomogeneous boundary conditions can be transformed into an inhomogeneous equation with homogeneous boundary conditions (and vice versa).
Recall from Ex. 30.4 our infinite slab of thickness d, initially at zero temperature, heated at constant rate q at x = 0 and insulated at x = d as shown in Fig. 32.3:

∂²u(t, x)/∂x² − (1/α) ∂u(t, x)/∂t = 0 ,   α = k/(cρ)        (32.18a)

with inhomogeneous boundary conditions

u(t = 0, x) = 0 ,   ∂u/∂x |_{x=d} = 0 ,   and   ∂u/∂x |_{x=0} = −q/k .        (32.18b)

Figure 32.3: Slab Heating Redux

Transform to a problem with homogeneous boundary conditions with a change of variables:

v(t, x) = u(t, x) − w(x)        (32.19)

where w(x) satisfies

dw/dx |_{x=d} = 0   and   dw/dx |_{x=0} = −q/k        (32.20)

and also choose it so that d²w/dx² gives a simple result. The simplest choice is

w(x) = (1/2)(q/kd)(x − d)²        (32.21)

which satisfies the boundary conditions, and now

∂²v/∂x² − (1/α) ∂v/∂t = −d²w/dx² = −q/(kd)        (32.22a)

where v must now satisfy the boundary conditions

∂v/∂x |_{x=d} = ∂v/∂x |_{x=0} = 0 .        (32.22b)

We have achieved our goal of transforming to an inhomogeneous equation for v with homogeneous boundary conditions.

An almost trivial particular solution is

v_p = (qα/kd) t        (32.23a)

and so

u_p(t, x) = (q/kd)αt + (1/2)(q/kd)(x − d)² .        (32.23b)

This is the same particular solution we saw in Ex. 30.4.
We proceed as we did before in Ex. 30.4 to find the complementary function u_c that satisfies the initial conditions.

Ex. 32.3. Laplace's equation

∇²ϕ = 0        (32.24)

in an infinite region with ϕ → 0 as r → ∞.
The Green function is a solution to

∇²ϕ(x) = δ³(x − x′) .        (32.25)

Note: ϕ can only depend on r = ‖x − x′‖ so we take the origin of spherical coordinates to be the point x′.
For x ≠ x′, ∇²ϕ(x) = 0 so solutions have the form

ϕ(r, θ, φ) = { r^ℓ , r^{−(ℓ+1)} } · P_ℓ^m(cos θ) e^{±imφ} .        (32.26)

Spherical symmetry implies ℓ = m = 0.
The boundary condition ϕ → 0 as r → ∞ then implies

ϕ(r) = A/r        (32.27)

where we now must determine A.
Integrate the inhomogeneous equation over a spherical ball of radius a about the origin:

∫_{r<a} ∇²ϕ dV = ∫_{r<a} δ³(x − x′) dV = 1        (32.28a)

but, using Gauss's theorem,

∫_{r<a} ∇²ϕ dV = ∮_{r=a} (∂ϕ/∂r) dS = 4πa² ( −A/r² )|_{r=a} = −4πA        (32.28b)

so we find A = −1/(4π) and therefore

G(x, x′) = −(1/4π) · 1/‖x − x′‖ .        (32.29)

Ex. 32.4. Wave equation

∇²ψ(t, x) − (1/c²) ∂²ψ(t, x)/∂t² = 0        (32.30)

over an infinite domain.
The Green function is a solution to the inhomogeneous equation

∇²ψ(t, x) − (1/c²) ∂²ψ(t, x)/∂t² = δ(t − t′) δ³(x − x′) .        (32.31)

Note: the solution only depends on t − t′ and x − x′, so we have translational invariance in t and x. Therefore, without loss of generality, set t′ = 0 and x′ = 0.
Let

ψ(t, x) = (1/(2π)⁴) ∫ Ψ(ω, k) e^{i(k·x−ωt)} dω dk_x dk_y dk_z        (32.32a)
Ψ(ω, k) = ∫ ψ(t, x) e^{−i(k·x−ωt)} dt dx dy dz .        (32.32b)

The Fourier transform of the inhomogeneous equation is

( −k² + ω²/c² ) Ψ = 1        (32.33a)
=⇒ Ψ(ω, k) = c² / (ω² − c²k²)        (32.33b)

where k = ‖k‖, and we want

ψ(t, x) = (c²/(2π)⁴) ∫ e^{i(k·x−ωt)} / (ω² − c²k²) dω dk_x dk_y dk_z .        (32.34)

To do this integral, choose the axis of spherical polar coordinates in k space along x. Then k·x = kr cos θ. Also let μ = cos θ so dμ = −sin θ dθ (the sign reverses the limits of integration). Then

ψ(t, x) = (c²/(2π)⁴) ∫_{φ=0}^{2π} ∫_{μ=−1}^{1} ∫_{k=0}^{∞} ∫_{ω=−∞}^{∞} e^{i(krμ−ωt)} / (ω² − c²k²) k² dω dk dμ dφ        (32.35a)
        = (c²/(2π)³) ∫_{k=0}^{∞} ∫_{ω=−∞}^{∞} [ ∫_{μ=−1}^{1} e^{ikrμ} dμ ] e^{−iωt} / (ω² − c²k²) k² dω dk        (32.35b)

where the bracketed integral equals (1/ikr)(e^{ikr} − e^{−ikr}), so

        = (c²/(2π)³) (1/ir) ∫_{k=0}^{∞} ∫_{ω=−∞}^{∞} (e^{ikr} − e^{−ikr}) k e^{−iωt} / (ω² − c²k²) dω dk        (32.35c)
        = (c²/(2π)³) (1/ir) ∫_{k=−∞}^{∞} [ ∫_{ω=−∞}^{∞} e^{−iωt} / (ω² − c²k²) dω ] e^{ikr} k dk .        (32.35d)

We evaluate the integral over ω,

∫_{−∞}^{∞} e^{−iωt} / (ω² − c²k²) dω .        (32.36)

Note that the integrand has two poles on the real axis, ω = −|ck| and ω = +|ck|.
We therefore modify the integral to be

∫_{−∞+iε}^{∞+iε} e^{−iωt} / (ω² − c²k²) dω        (32.37)

where we will eventually take the limit ε → 0.

• When t < 0, close the contour in the upper half plane as in Fig. 32.4.
  The arc C_R is ω = Re^{iθ}, 0 ≤ θ ≤ π, so

  e^{−iωt} = e^{−iRt cos θ} e^{Rt sin θ} → 0   as R → ∞ for t < 0.        (32.38)

  Therefore the integral is zero since the contour encloses no poles.

Figure 32.4: Contour Closed in Upper Half Plane



• When t > 0, close the contour in the lower half plane as in Fig. 32.5.
  The arc C_R is ω = Re^{−iθ}, 0 ≤ θ ≤ π, so

  e^{−iωt} = e^{−iRt cos θ} e^{−Rt sin θ} → 0   as R → ∞ for t > 0.        (32.39)

  The poles are now enclosed!

Figure 32.5: Contour Closed in Lower Half Plane

Now note that

∮_C e^{−iωt} [ 1/(ω − ck) − 1/(ω + ck) ] dω = 2πi (e^{−ickt} − e^{ickt})        (32.40)

(the difference in brackets equals 2ck/(ω² − c²k²)) for C enclosing the poles, so we find

∫_{−∞}^{∞} e^{−iωt} / (ω² − c²k²) dω = −(πi/ck)(e^{−ickt} − e^{ickt})        (32.41)

where the negative sign arises because the contour in Fig. 32.5 is traversed clockwise rather than counterclockwise.
Therefore, for t > 0, we have

ψ(t, x) = −(c/8π²r) ∫_{−∞}^{∞} e^{ikr} (e^{−ikct} − e^{ikct}) dk        (32.42a)
        = −(c/4πr) [δ(r − ct) − δ(r + ct)] .        (32.42b)

The second term will never contribute because r and t are both positive.

Thus, the Green function for the wave equation is

G(t − t′, x − x′) = { 0                                                      t < t′
                   { −[c/(4π‖x − x′‖)] δ( ‖x − x′‖ − c(t − t′) )             t > t′ .        (32.43)

This is the causal or retarded Green function.
Had we shifted the contour below the poles we would have found the advanced Green function

G(t − t′, x − x′) = { −[c/(4π‖x − x′‖)] δ( ‖x − x′‖ + c(t − t′) )            t < t′
                   { 0                                                       t > t′ .        (32.44)

These different flavors of Green functions correspond to different kinds of boundary conditions. For example, if nothing happens before a disturbance, we use the causal Green function.

Ex. 32.5. Liénard-Wiechert potential.

Consider

∇²ϕ − (1/c²) ∂²ϕ/∂t² = f(t, x) .        (32.45)

The solution is

ϕ(t, x) = −(c/4π) ∬ [ δ( ‖x − x′‖ − c(t − t′) ) / ‖x − x′‖ ] f(t′, x′) dt′ dV′        (32.46a)
        = −(1/4π) ∫ f( t − (1/c)‖x − x′‖, x′ ) / ‖x − x′‖ dV′ .        (32.46b)

This is the retarded potential because the source function is evaluated at the retarded time t − (1/c)‖x − x′‖.
For example, consider a point source moving on a prescribed path x₀(t) so that

f(t, x) = δ³( x − x₀(t) ) .        (32.47)

Therefore

ϕ(t, x) = −(1/4π) ∬ δ³( x′ − x₀(t′) ) δ( t − t′ − (1/c)‖x − x′‖ ) / ‖x − x′‖ dt′ dV′ .        (32.48)

First do the integral ∫ dV′:

ϕ(t, x) = −(1/4π) ∫ δ( t − t′ − (1/c)‖x − x₀(t′)‖ ) / ‖x − x₀(t′)‖ dt′ .        (32.49)

Note: the integrand contains δ(g(t′)) with

g(t′) = t − t′ − (1/c)‖x − x₀(t′)‖        (32.50a)

which has a single root at the retarded time t_r where the worldline of the particle passes through the past light cone, see Fig. 32.6,

g(t_r) = 0   for   t_r = t − (1/c)‖x − x₀(t_r)‖        (32.50b)

(note that x₀ is evaluated at time t_r in the definition of t_r) and

dg/dt′ |_{t′=t_r} = −1 + (1/c) v₀(t_r)·(x − x₀(t_r)) / ‖x − x₀(t_r)‖        (32.50c)

where v₀(t) = dx₀(t)/dt is the velocity of the point source.



Figure 32.6: Light Cone and Retarded Time

We use the identity of Eq. (14.10) to perform the integral ∫ dt′:

ϕ(t, x) = −(1/4π) (1/‖x − x₀(t_r)‖) · 1 / [ 1 − (1/c) v₀(t_r)·(x − x₀(t_r)) / ‖x − x₀(t_r)‖ ]        (32.51)

and thus we obtain the Liénard-Wiechert potential

ϕ(t, x) = −(1/4π) · 1 / [ ‖x − x₀(t_r)‖ − (1/c) v₀(t_r)·(x − x₀(t_r)) ]
with t_r = t − (1/c)‖x − x₀(t_r)‖ .        (32.52)
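The implicit retarded-time condition in Eq. (32.52) can be solved by fixed-point iteration. The sketch below uses an illustrative prescribed path (uniform motion at c/2, my own choice rather than anything in the notes) and then evaluates the potential:

```python
import math

c = 1.0

def x0(t):
    """Illustrative prescribed path: uniform motion at half the wave speed."""
    return (0.5 * c * t, 0.0, 0.0)

def v0(t):
    return (0.5 * c, 0.0, 0.0)

def norm(a):
    return math.sqrt(sum(ai * ai for ai in a))

def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def retarded_time(t, x, tol=1e-14):
    """Fixed-point iteration on Eq. (32.50b): tr = t - |x - x0(tr)|/c."""
    tr = t
    for _ in range(200):
        tr_new = t - norm(sub(x, x0(tr))) / c
        if abs(tr_new - tr) < tol:
            break
        tr = tr_new
    return tr

def lienard_wiechert(t, x):
    """Eq. (32.52) for the path above."""
    tr = retarded_time(t, x)
    R = sub(x, x0(tr))
    denom = norm(R) - sum(vi * Ri for vi, Ri in zip(v0(tr), R)) / c
    return -1.0 / (4.0 * math.pi * denom), tr

phi, tr = lienard_wiechert(t=2.0, x=(3.0, 0.0, 0.0))
# Consistency: tr must satisfy the implicit retarded-time condition.
residual = abs(tr - (2.0 - norm(sub((3.0, 0.0, 0.0), x0(tr))) / c))
```

For this configuration the iteration is a contraction (the source is subluminal), and one can check by hand that t_r = −2 here: the signal seen at t = 2, x = 3 left the source when it was at x₀ = −1, a distance 4 away.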

Integral Equations
Green functions can be used to convert a partial differential equation with boundary conditions into an integral equation.
Consider

∇²ϕ(x) = ρ(x) ϕ(x)        (32.53)

in some region with some suitable boundary conditions.
Suppose that G(x, x′) is the Green function for the Laplace equation in the region with the boundary conditions so that

∇²G(x, x′) = δ³(x − x′)        (32.54)

and so the solution of

∇²ϕ(x) = f(x)        (32.55)

is

ϕ(x) = ∫ G(x, x′) f(x′) dV′ .        (32.56)

Then, with f(x) = ρ(x)ϕ(x) we have

ϕ(x) = ∫ G(x, x′) ρ(x′) ϕ(x′) dV′ .        (32.57)

This integral equation for ϕ(x) is equivalent to the differential equation of Eq. (32.53) with the boundary conditions built into it.
The above integral equation is an example of a homogeneous linear integral equation. One example of an inhomogeneous integral equation is the Fredholm integral equation of the second kind,

ϕ(x) = f(x) + λ ∫ K(x, x′) ϕ(x′) dV′        (32.58)

where K(x, x′) is called the kernel.
We will only briefly touch on solving integral equations.

Neumann series
Consider the Fredholm integral equation of the second kind

ϕ(x) = f(x) + λ ∫ K(x, x′) ϕ(x′) dV′        (32.59)

and solve this by iteration: begin with the approximation

ϕ(x) ≈ f(x) .        (32.60)

Now substitute the integral equation into itself to build up successive refinements to this approximation:

ϕ(x) = f(x) + λ ∫ K(x, x′) [ f(x′) + λ ∫ K(x′, x″) ϕ(x″) dV″ ] dV′        (32.61a)
     = f(x) + λ ∫ K(x, x′) f(x′) dV′ + λ² ∬ K(x, x′) K(x′, x″) ϕ(x″) dV′ dV″        (32.61b)

so our next level of approximation is

ϕ(x) ≈ f(x) + λ ∫ K(x, x′) f(x′) dV′ .        (32.62)

Repeat. . .

ϕ(x) = f(x) + λ ∫ K(x, x′) f(x′) dV′ + λ² ∬ K(x, x′) K(x′, x″) f(x″) dV′ dV″ + ··· .        (32.63)

This is known as the Neumann series and it converges for small λ provided K(x, x′) is bounded.
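A minimal numerical illustration of the Neumann series (the kernel, f, and λ below are my own choices, not from the notes): discretize a 1-dimensional Fredholm equation on [0, 1] and sum iterated applications of the integral operator; the partial sum should satisfy the original equation to high accuracy.

```python
import math

# Midpoint discretization of phi(x) = f(x) + lam * ∫ K(x,y) phi(y) dy on [0,1].
n = 100
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]
lam = 0.5                                               # small enough to converge
K = [[math.exp(-abs(x - y)) for y in xs] for x in xs]   # bounded kernel
f = [math.sin(math.pi * x) for x in xs]

def apply_L(v):
    """The operator L of the iteration: v -> lam * ∫ K(x,y) v(y) dy."""
    return [lam * h * sum(Ki[j] * v[j] for j in range(n)) for Ki in K]

# Truncated Neumann series: phi = f + L f + L^2 f + ...
phi, term = list(f), list(f)
for _ in range(60):
    term = apply_L(term)
    phi = [p + t for p, t in zip(phi, term)]

# The partial sum should satisfy phi = f + L phi to high accuracy.
Lphi = apply_L(phi)
residual = max(abs(phi[i] - f[i] - Lphi[i]) for i in range(n))
```

Here the operator norm is well below 1, so the geometric-series structure of the iteration guarantees rapid convergence.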

If we let L be a linear integral operator defined by

L f(x) = λ ∫ K(x, x′) f(x′) dV′        (32.64)

then our integral equation can be written as

(1 − L) ϕ(x) = f(x)        (32.65)

and the formal solution would be

ϕ(x) = (1 − L)⁻¹ f(x)        (32.66)

where (1 − L)⁻¹ is some operator to be determined.
We now write the Neumann series solution as

ϕ(x) = Σ_{n=0}^∞ Lⁿ f(x)        (32.67)

where L⁰f(x) = f(x) and Lⁿf(x) = L[Lⁿ⁻¹f(x)]. Therefore

(1 − L)⁻¹ = Σ_{n=0}^∞ Lⁿ .        (32.68)

The Neumann series thus generalizes the geometric series and brings us by a commodius vicus of recirculation back to §1.

Ex. 32.6. Scattering in quantum mechanics.

Consider the equation

∇²ψ(x) − (2m/ℏ²) V(x) ψ(x) + k²ψ(x) = 0        (32.69)

with boundary conditions that ψ(x)e^{−iEt/ℏ} is an incident plane wave with wave vector k₀ plus outgoing waves as ‖x‖ → ∞, and k² = k₀² = 2mE/ℏ².
The Helmholtz equation

∇²ψ(x) + k²ψ(x) = f(x)        (32.70)

with outgoing wave boundary condition has the Green function

G(x, x′) = −(1/4π) e^{ik‖x−x′‖} / ‖x − x′‖        (32.71)

so the differential equation can be transformed into the integral equation

ψ(x) = e^{ik₀·x} − (2m/4πℏ²) ∫ [ e^{ik‖x−x′‖} / ‖x − x′‖ ] V(x′) ψ(x′) dV′        (32.72)

where the first term is the incident wave and the integral is the outgoing wave.
The first iteration in the Neumann series gives the Born approximation

ψ(x) ≈ e^{ik₀·x} − (m/2πℏ²) ∫ [ e^{ik‖x−x′‖} / ‖x − x′‖ ] V(x′) e^{ik₀·x′} dV′ .        (32.73)
Problems

Problem 37.
Find the lowest frequency of oscillation of acoustic waves in a hollow sphere of radius a. The boundary condition is ψ = 0 at r = a and ψ obeys the differential equation

∇²ψ = (1/c²) ∂²ψ/∂t² .

Problem 38.
A sphere of radius a is at temperature T = 0 throughout. At time t = 0 it is immersed in a liquid bath at temperature T0. Find the subsequent temperature distribution T(r, t) inside the sphere. This distribution satisfies:

∇²T − (1/α) ∂T/∂t = 0 .

Problem 39.
Find the three lowest eigenvalues of the Schrödinger equation

−(ℏ²/2m) ∇²ψ = Eψ

for a particle confined in a cylindrical box of radius a and height b where ψ = 0 on the walls and a ≈ b.
Zeros of the Bessel functions:
J0(x) = 0 for x = 2.404, 5.520, 8.654, . . .
J1(x) = 0 for x = 3.832, 7.016, 10.173, . . .
J2(x) = 0 for x = 5.135, 8.417, 11.619, . . . .

Appendix

A Series Expansions 270

B Special Functions 272

C Vector Identities 286

A Series Expansions

Binomial series

(1 + x)^α = 1 + αx + [α(α−1)/2!] x² + [α(α−1)(α−2)/3!] x³ + ···
          = 1 + C(α,1) x + C(α,2) x² + C(α,3) x³ + ···        (A.1)

where C(α,k) is the binomial coefficient.

Special cases:

(1 + x)² = 1 + 2x + x²        (A.2)
(1 + x)³ = 1 + 3x + 3x² + x³        (A.3)
(1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − ···   −1 < x < 1        (A.4)
(1 + x)⁻² = 1 − 2x + 3x² − 4x³ + 5x⁴ − ···   −1 < x < 1        (A.5)
(1 + x)⁻³ = 1 − 3x + 6x² − 10x³ + 15x⁴ − ···   −1 < x < 1        (A.6)
(1 + x)^{1/2} = 1 + (1/2)x − [1/(2·4)]x² + [(1·3)/(2·4·6)]x³ − ···   −1 < x ≤ 1        (A.7)
(1 + x)^{−1/2} = 1 − (1/2)x + [(1·3)/(2·4)]x² − [(1·3·5)/(2·4·6)]x³ + ···   −1 < x ≤ 1        (A.8)
(1 + x)^{1/3} = 1 + (1/3)x − [2/(3·6)]x² + [(2·5)/(3·6·9)]x³ − ···   −1 < x ≤ 1        (A.9)
(1 + x)^{−1/3} = 1 − (1/3)x + [(1·4)/(3·6)]x² − [(1·4·7)/(3·6·9)]x³ + ···   −1 < x ≤ 1        (A.10)

Series for exponential and logarithmic functions

e^x = 1 + x + x²/2! + x³/3! + ···   −∞ < x < ∞        (A.11)
ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ···   −1 < x ≤ 1        (A.12)
(1/2) ln[(1 + x)/(1 − x)] = x + x³/3 + x⁵/5 + x⁷/7 + ···   −1 < x < 1        (A.13)
ln x = 2 { (x−1)/(x+1) + (1/3)[(x−1)/(x+1)]³ + (1/5)[(x−1)/(x+1)]⁵ + ··· }   x > 0        (A.14)


Series for trigonometric functions

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ···   −∞ < x < ∞        (A.15)
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ···   −∞ < x < ∞        (A.16)
tan x = x + x³/3 + 2x⁵/15 + ··· + [2^{2n}(2^{2n} − 1) B_n x^{2n−1}]/(2n)! + ···   |x| < π/2        (A.17)
cot x = 1/x − x/3 − x³/45 − ··· − [2^{2n} B_n x^{2n−1}]/(2n)! − ···   0 < |x| < π        (A.18)
arcsin x = x + (1/2)(x³/3) + [(1·3)/(2·4)](x⁵/5) + [(1·3·5)/(2·4·6)](x⁷/7) + ···   |x| < 1        (A.19)
arccos x = π/2 − arcsin x        (A.20)
arctan x = x − x³/3 + x⁵/5 − x⁷/7 + ···   |x| < 1        (A.21)
arctan x = ±π/2 − 1/x + 1/(3x³) − 1/(5x⁵) + ···   x ≷ 0, |x| ≥ 1        (A.22)
arccot x = π/2 − arctan x        (A.23)

Series for hyperbolic functions

sinh x = x + x³/3! + x⁵/5! + x⁷/7! + ···   −∞ < x < ∞        (A.24)
cosh x = 1 + x²/2! + x⁴/4! + x⁶/6! + ···   −∞ < x < ∞        (A.25)
tanh x = x − x³/3 + ··· + [(−1)^{n−1} 2^{2n}(2^{2n} − 1) B_n x^{2n−1}]/(2n)! + ···   |x| < π/2        (A.26)
coth x = 1/x + x/3 − ··· + [(−1)^{n−1} 2^{2n} B_n x^{2n−1}]/(2n)! − ···   0 < |x| < π        (A.27)
arcsinh x = x − (1/2)(x³/3) + [(1·3)/(2·4)](x⁵/5) − [(1·3·5)/(2·4·6)](x⁷/7) + ···   |x| < 1        (A.28)
arcsinh x = ln(2x) + (1/2)(1/(2x²)) − [(1·3)/(2·4)](1/(4x⁴)) + [(1·3·5)/(2·4·6)](1/(6x⁶)) − ···   x > 1        (A.29)
arccosh x = ln(2x) − (1/2)(1/(2x²)) − [(1·3)/(2·4)](1/(4x⁴)) − [(1·3·5)/(2·4·6)](1/(6x⁶)) − ···   x > 1        (A.30)
arctanh x = x + x³/3 + x⁵/5 + x⁷/7 + ···   |x| < 1        (A.31)
arccoth x = 1/x + 1/(3x³) + 1/(5x⁵) + 1/(7x⁷) + ···   |x| > 1        (A.32)
B Special Functions

Gamma Function
Definition (positive arguments)

Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt   x > 0        (B.1)

Recursion formula

Γ(x + 1) = xΓ(x)        (B.2)
Γ(n + 1) = n!   for n = 0, 1, 2, ···        (B.3)

Negative arguments
Use repeated application of the recursion formula

Γ(x) = Γ(x + 1)/x        (B.4)

Special values

Γ(1/2) = √π        (B.5)
Γ(n + 1/2) = [1·3·5···(2n−1)/2ⁿ] √π   n = 1, 2, 3, . . .        (B.6)
Γ(−n + 1/2) = [(−1)ⁿ 2ⁿ / (1·3·5···(2n−1))] √π   n = 1, 2, 3, . . .        (B.7)

Relationships

Γ(x)Γ(1 − x) = π/sin πx   Euler's reflection formula        (B.8)
2^{2x−1} Γ(x)Γ(x + 1/2) = √π Γ(2x)   Legendre's duplication formula        (B.9)


Asymptotic expansions

Γ(x) ∼ √(2π) x^{x−1/2} e^{−x} [ 1 + 1/(12x) + 1/(288x²) − 139/(51840x³) + ··· ]        (B.10)
ln Γ(x) ∼ x ln x − x − (1/2) ln(x/2π) + 1/(12x) − 1/(360x³) + ···        (B.11)
n! ∼ √(2πn) nⁿ e^{−n}   Stirling's formula        (B.12)

Figure B.1: Gamma Function
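Stirling's formula (B.12) is easily compared with exact factorials (a quick stdlib sketch; the ratio approaches 1 from above, with the familiar 1/(12n) leading correction):

```python
import math

def stirling(n):
    """Leading-order Stirling approximation, Eq. (B.12)."""
    return math.sqrt(2.0 * math.pi * n) * n**n * math.exp(-n)

# Ratio of exact n! to the approximation shrinks toward 1 as n grows.
ratios = [math.factorial(n) / stirling(n) for n in (5, 10, 20)]
```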

Bessel Functions
Bessel differential equation

x²y″ + xy′ + (x² − ν²)y = 0   ν ≥ 0        (B.13)

Solutions are called Bessel functions of order ν.

Bessel functions of the first kind

J_ν(x) = [x^ν / (2^ν Γ(ν+1))] { 1 − x²/[2(2ν+2)] + x⁴/[2·4(2ν+2)(2ν+4)] − ··· }        (B.14)
       = Σ_{k=0}^∞ (−1)^k (x/2)^{ν+2k} / [k! Γ(ν+k+1)]        (B.15)
J_{−ν}(x) = [x^{−ν} / (2^{−ν} Γ(1−ν))] { 1 − x²/[2(2−2ν)] + x⁴/[2·4(2−2ν)(4−2ν)] − ··· }        (B.16)
          = Σ_{k=0}^∞ (−1)^k (x/2)^{2k−ν} / [k! Γ(k+1−ν)]        (B.17)
J_{−n}(x) = (−1)ⁿ J_n(x)   n = 0, 1, 2, . . .        (B.18)

If ν ≠ 0, 1, 2, . . ., J_ν(x) and J_{−ν}(x) are linearly independent.
For ν = 0, 1:

J₀(x) = 1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + ···        (B.19)
J₁(x) = x/2 − x³/(2²·4) + x⁵/(2²·4²·6) − x⁷/(2²·4²·6²·8) + ···        (B.20)

Bessel functions of the second kind

Y_ν(x) = [J_ν(x) cos νπ − J_{−ν}(x)] / sin νπ   ν ≠ 0, 1, 2, . . .        (B.21)
Y_n(x) = lim_{ν→n} Y_ν(x)   n = 0, 1, 2, . . .        (B.22)
Y_{−n}(x) = (−1)ⁿ Y_n(x)   n = 0, 1, 2, . . .        (B.23)

Hankel functions

H_ν^(1)(x) = J_ν(x) + i Y_ν(x)        (B.24)
H_ν^(2)(x) = J_ν(x) − i Y_ν(x)        (B.25)

Limiting forms
As x → 0,

J₀(x) → 1        (B.26)
J_ν(x) ∼ [1/Γ(ν+1)] (x/2)^ν   ν ≠ −1, −2, −3, . . .        (B.27)
Y₀(x) ∼ (2/π) ln x        (B.28)
Y_ν(x) ∼ −[Γ(ν)/π] (x/2)^{−ν}   ν > 0 or ν = −1/2, −3/2, −5/2, . . .        (B.29)
Y_{−ν}(x) ∼ −[Γ(ν)/π] (x/2)^{−ν} cos νπ   ν > 0 , ν ≠ 1/2, 3/2, 5/2, . . .        (B.30)
H_ν^(1)(x) ∼ −H_ν^(2)(x) ∼ −i [Γ(ν)/π] (x/2)^{−ν}   ν > 0        (B.31)

As x → ∞,

J_ν(x) ∼ √(2/πx) cos( x − νπ/2 − π/4 )        (B.32)
Y_ν(x) ∼ √(2/πx) sin( x − νπ/2 − π/4 )        (B.33)
H_ν^(1)(x) ∼ √(2/πx) exp[ i( x − νπ/2 − π/4 ) ]        (B.34)
H_ν^(2)(x) ∼ √(2/πx) exp[ −i( x − νπ/2 − π/4 ) ]        (B.35)

Recurrence relations
For C_ν denoting J_ν, Y_ν, H_ν^(1), or H_ν^(2):

C_{ν−1}(x) + C_{ν+1}(x) = (2ν/x) C_ν(x)        (B.36)
C_{ν−1}(x) − C_{ν+1}(x) = 2C_ν′(x)        (B.37)
C_ν′(x) = C_{ν−1}(x) − (ν/x) C_ν(x)        (B.38)
C_ν′(x) = (ν/x) C_ν(x) − C_{ν+1}(x)        (B.39)

Bessel Functions of Integer Order

Generating function

exp[ (x/2)(t − 1/t) ] = Σ_{n=−∞}^∞ J_n(x) tⁿ        (B.40)

Integral forms

J₀(x) = (1/π) ∫₀^π cos(x sin θ) dθ        (B.41)
J_n(x) = (1/π) ∫₀^π cos(nθ − x sin θ) dθ        (B.42)
Y₀(x) = −(2/π) ∫₀^∞ cos(x cosh t) dt        (B.43)
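The integral form (B.42) and the power series (B.15) can be checked against each other numerically (an illustrative sketch; for integer n the gamma functions reduce to factorials):

```python
import math

def Jn_series(n, x, kmax=40):
    """Power series of Eq. (B.15); Gamma(n+k+1) = (n+k)! for integer n."""
    return sum((-1)**k * (x / 2.0)**(n + 2 * k)
               / (math.factorial(k) * math.factorial(n + k))
               for k in range(kmax))

def Jn_integral(n, x, m=20000):
    """Integral form of Eq. (B.42), midpoint rule on [0, pi]."""
    h = math.pi / m
    s = sum(math.cos(n * (i + 0.5) * h - x * math.sin((i + 0.5) * h))
            for i in range(m))
    return s * h / math.pi

diff = max(abs(Jn_series(n, x) - Jn_integral(n, x))
           for n in (0, 1, 2) for x in (0.5, 2.0, 5.0))
```

The midpoint rule is especially accurate here because the integrand has vanishing derivative at both endpoints.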

Figure B.2: Bessel Functions of the First and Second Kinds

Definite integrals

∫₀¹ [J_n(αt)]² t dt = (1/2)[J_n′(α)]² + (1/2)(1 − n²/α²)[J_n(α)]²        (B.44)
∫₀¹ J_n(αt) J_n(βt) t dt = [α J_n(β) J_n′(α) − β J_n(α) J_n′(β)] / (β² − α²)   α ≠ β        (B.45)
∫₀^∞ J_n(xt) J_n(x′t) t dt = δ(x − x′)/x        (B.46)

Note in Eq. (B.45) that if α and β are zeros of the Bessel function J_n or of the derivative of the Bessel function J_n′ then we have

∫₀¹ J_n(x_np t) J_n(x_nq t) t dt = 0   p ≠ q        (B.47)
∫₀¹ J_n(y_np t) J_n(y_nq t) t dt = 0   p ≠ q        (B.48)

where x_np is the pth zero of J_n and y_np is the pth zero of J_n′.

Zeros of Bessel functions


If J_n(x_{np}) = 0 and J_n′(y_{np}) = 0 for p = 1, 2, 3, …, then

x_{0p} = 2.4048, 5.5201, 8.6537, …    (B.49)
x_{1p} = 3.8317, 7.0156, 10.1735, …    (B.50)
x_{2p} = 5.1356, 8.4172, 11.6198, …    (B.51)
y_{0p} = 3.8317, 7.0156, 10.1735, …    (B.52)
y_{1p} = 1.8412, 5.3314, 8.5363, …    (B.53)
y_{2p} = 3.0542, 6.7061, 9.9695, …    (B.54)
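The tabulated zeros x_{0p} can be reproduced by bracketing sign changes of J_0 and refining each bracket by bisection. This is an added sketch (not in the original notes); the series truncation, scan start, and step size are arbitrary choices.

```python
import math

def j0(x, terms=60):
    """J_0(x) from its power series."""
    return sum((-1)**k / math.factorial(k)**2 * (x / 2)**(2*k) for k in range(terms))

def bisect_zero(f, a, b, tol=1e-10):
    """Refine a bracketed sign change of f to a root by bisection."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Scan for sign changes of J_0 and refine each bracket; compare with Eq. (B.49).
zeros, x, step = [], 0.5, 0.1
while len(zeros) < 3:
    if j0(x) * j0(x + step) < 0:
        zeros.append(bisect_zero(j0, x, x + step))
    x += step

for found, tabulated in zip(zeros, (2.4048, 5.5201, 8.6537)):
    assert abs(found - tabulated) < 5e-4
```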

Bessel Functions of Half-Integer Order

J_{1/2}(x) = (2/(πx))^{1/2} sin x    (B.55)

J_{−1/2}(x) = (2/(πx))^{1/2} cos x    (B.56)

J_{3/2}(x) = (2/(πx))^{1/2} (sin x / x − cos x)    (B.57)

J_{−3/2}(x) = (2/(πx))^{1/2} (−cos x / x − sin x)    (B.58)
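A numerical spot check (added, not part of the original notes) of Eqs. (B.55)–(B.57): summing the general power series for J_ν at ν = ±1/2 and 3/2 reproduces the closed forms above.

```python
import math

def jv_series(v, x, terms=40):
    """J_v(x) from the series sum_k (-1)^k / (k! Gamma(v+k+1)) (x/2)^(2k+v)."""
    return sum((-1)**k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2)**(2*k + v) for k in range(terms))

for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(jv_series( 0.5, x) - math.sqrt(2/(math.pi*x)) * math.sin(x)) < 1e-12
    assert abs(jv_series(-0.5, x) - math.sqrt(2/(math.pi*x)) * math.cos(x)) < 1e-12
    # Eq. (B.57): J_{3/2}(x) = sqrt(2/(pi x)) (sin x / x - cos x)
    assert abs(jv_series( 1.5, x)
               - math.sqrt(2/(math.pi*x)) * (math.sin(x)/x - math.cos(x))) < 1e-12
```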

Spherical Bessel Functions


Spherical Bessel differential equation

x² y″(x) + 2x y′(x) + [x² − ℓ(ℓ+1)] y(x) = 0    (B.59)

Spherical Bessel functions of the first, second, and third kind

j_ℓ(x) = √(π/(2x)) J_{ℓ+1/2}(x)    (B.60)

y_ℓ(x) = √(π/(2x)) Y_{ℓ+1/2}(x) = (−1)^{ℓ+1} √(π/(2x)) J_{−ℓ−1/2}(x)    (B.61)

h_ℓ^{(1)}(x) = j_ℓ(x) + i y_ℓ(x)    (B.62)

h_ℓ^{(2)}(x) = [h_ℓ^{(1)}(x)]* = j_ℓ(x) − i y_ℓ(x)    (B.63)

For ℓ = 0, 1:

j_0(x) = sin x / x,   y_0(x) = −cos x / x,   h_0^{(1)}(x) = −i e^{ix}/x    (B.64)

j_1(x) = sin x / x² − cos x / x,   y_1(x) = −cos x / x² − sin x / x,   h_1^{(1)}(x) = −(e^{ix}/x²)(x + i)    (B.65)
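Eq. (B.60) together with the half-integer-order formulas reproduces the closed forms for ℓ = 0, 1. The check below is an added sketch that computes J_{ℓ+1/2} from its power series rather than any library routine.

```python
import math

def jv_series(v, x, terms=40):
    """J_v(x) from its standard power series."""
    return sum((-1)**k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2)**(2*k + v) for k in range(terms))

def sph_jl(l, x):
    """j_l(x) from Eq. (B.60): sqrt(pi/(2x)) J_{l+1/2}(x)."""
    return math.sqrt(math.pi / (2 * x)) * jv_series(l + 0.5, x)

for x in (0.3, 1.0, 4.0):
    assert abs(sph_jl(0, x) - math.sin(x) / x) < 1e-12
    assert abs(sph_jl(1, x) - (math.sin(x) / x**2 - math.cos(x) / x)) < 1e-12
```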

Figure B.3: Spherical Bessel Functions of the First and Second Kinds
[Plots of j_n(x) and y_n(x) for n = 0 and n = 1 over 0 ≤ x ≤ 16.]

Modified Bessel Functions


Modified Bessel differential equation

x² y″(x) + x y′(x) − (x² + ν²) y(x) = 0    (B.66)

Modified Bessel functions of the first and second kind

I_ν(x) = J_ν(ix)/i^ν    (B.67)

K_ν(x) = (π/2) i^{ν+1} H_ν^{(1)}(ix)    (B.68)

For ß = 0, 1

I_0(x) = 1 + x²/2² + x⁴/(2²·4²) + x⁶/(2²·4²·6²) + ⋯    (B.69)

I_1(x) = x/2 + x³/(2²·4) + x⁵/(2²·4²·6) + x⁷/(2²·4²·6²·8) + ⋯    (B.70)

Limiting forms

I_ν(x) ∼ (1/Γ(ν+1)) (x/2)^ν   as x → 0,   ν ≠ −1, −2, −3, …    (B.71)

K_ν(x) ∼ (Γ(ν)/2) (x/2)^{−ν}   as x → 0,   ν > 0    (B.72)

K_0(x) ∼ −ln x   as x → 0    (B.73)

I_ν(x) ∼ e^x/√(2πx)   as x → ∞    (B.74)

K_ν(x) ∼ √(π/(2x)) e^{−x}   as x → ∞    (B.75)

Modified Bessel Functions of Integer Order


Generating function

exp[(x/2)(t + 1/t)] = Σ_{n=−∞}^{∞} I_n(x) t^n    (B.76)

Integral forms

I_0(x) = (1/π) ∫_0^π cosh(x sin θ) dθ = (1/π) ∫_0^π e^{x cos θ} dθ    (B.77)

K_0(x) = ∫_0^∞ cos(x sinh t) dt    (B.78)

I_n(x) = (1/π) ∫_0^π e^{x cos θ} cos(nθ) dθ,   n = 0, 1, 2, …    (B.79)
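The series (B.69) and the integral form (B.77) can be checked against each other. This is an added sketch using only the Python standard library; the truncation and step counts are arbitrary.

```python
import math

def i0_series(x, terms=40):
    """I_0(x) from Eq. (B.69); each step multiplies in the next (x/(2k))^2 factor."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / (2 * k))**2
        total += term
    return total

def i0_integral(x, steps=400):
    """I_0(x) = (1/pi) * integral_0^pi e^{x cos(theta)} dtheta, trapezoidal rule."""
    h = math.pi / steps
    total = 0.5 * (math.exp(x) + math.exp(-x)) \
          + sum(math.exp(x * math.cos(i * h)) for i in range(1, steps))
    return total * h / math.pi

for x in (0.5, 1.0, 3.0):
    assert abs(i0_series(x) - i0_integral(x)) < 1e-9
```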

Figure B.4: Modified Bessel Functions of the First and Second Kinds
[Plots of I_n(x) on 0 ≤ x ≤ 4 and K_n(x) on 0 ≤ x ≤ 2, for n = 0 and n = 1.]

Legendre Functions
Legendre differential equation

(1 − x²) y″ − 2x y′ + ℓ(ℓ+1) y = 0    (B.80)

Legendre polynomials

P_ℓ(x) = [1/(2^ℓ ℓ!)] d^ℓ/dx^ℓ (x² − 1)^ℓ    Rodrigues's formula    (B.81)

For ℓ = 0, 1, 2, 3:

P_0(x) = 1    (B.82)
P_1(x) = x    (B.83)
P_2(x) = ½ (3x² − 1)    (B.84)
P_3(x) = ½ (5x³ − 3x)    (B.85)

Generating function

1/√(1 − 2tx + t²) = Σ_{ℓ=0}^{∞} P_ℓ(x) t^ℓ    (B.86)

Recurrence formulas

P′_{ℓ+1}(x) + P′_{ℓ−1}(x) = 2x P′_ℓ(x) + P_ℓ(x)    (B.87)

P′_{ℓ+1}(x) − P′_{ℓ−1}(x) = (2ℓ + 1) P_ℓ(x)    (B.88)

P′_{ℓ+1}(x) = (ℓ + 1) P_ℓ(x) + x P′_ℓ(x)    (B.89)

P′_{ℓ−1}(x) = −ℓ P_ℓ(x) + x P′_ℓ(x)    (B.90)

Orthogonality and completeness

∫_{−1}^{1} P_ℓ(x) P_{ℓ′}(x) dx = [2/(2ℓ+1)] δ_{ℓℓ′}    (B.91)

Σ_{ℓ=0}^{∞} [(2ℓ+1)/2] P_ℓ(x) P_ℓ(x′) = δ(x − x′)    (B.92)
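The orthogonality relation (B.91) is easy to verify for the low-order polynomials (B.82)–(B.85). The quadrature below is an added illustration, with composite Simpson's rule standing in for exact integration.

```python
# P_0 ... P_3 from Eqs. (B.82)-(B.85)
P = [lambda x: 1.0,
     lambda x: x,
     lambda x: 0.5 * (3 * x**2 - 1),
     lambda x: 0.5 * (5 * x**3 - 3 * x)]

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

for l in range(4):
    for lp in range(4):
        val = simpson(lambda x: P[l](x) * P[lp](x), -1.0, 1.0)
        expected = 2.0 / (2 * l + 1) if l == lp else 0.0
        assert abs(val - expected) < 1e-9
```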

Special values

P_ℓ(1) = 1    (B.93)

P_ℓ(−1) = (−1)^ℓ    (B.94)

P_ℓ(−x) = (−1)^ℓ P_ℓ(x)    (B.95)

P_ℓ(0) = 0 for ℓ odd;   P_ℓ(0) = (−1)^{ℓ/2} [1·3·5⋯(ℓ−1)]/[2·4·6⋯ℓ] for ℓ even    (B.96)

Figure B.5: Legendre Polynomials
[Plots of P_ℓ(x) for ℓ = 0, 1, 2, 3 on −1 ≤ x ≤ 1.]

Legendre functions of the second kind



Q_ℓ(x) = U_ℓ(1) V_ℓ(x)   for ℓ = 0, 2, 4, …
Q_ℓ(x) = −V_ℓ(1) U_ℓ(x)   for ℓ = 1, 3, 5, …    (B.97)

U_ℓ(x) = 1 − [ℓ(ℓ+1)/2!] x² + [ℓ(ℓ−2)(ℓ+1)(ℓ+3)/4!] x⁴ − ⋯    (B.98)

V_ℓ(x) = x − [(ℓ−1)(ℓ+2)/3!] x³ + [(ℓ−1)(ℓ−3)(ℓ+2)(ℓ+4)/5!] x⁵ − ⋯    (B.99)

U_ℓ(1) = (−1)^{ℓ/2} 2^ℓ {(ℓ/2)!}² / ℓ!,   ℓ = 0, 2, 4, …    (B.100)

V_ℓ(1) = (−1)^{(ℓ−1)/2} 2^{ℓ−1} {[(ℓ−1)/2]!}² / ℓ!,   ℓ = 1, 3, 5, …    (B.101)

For ℓ = 0, 1:

Q_0(x) = ½ ln[(1 + x)/(1 − x)]    (B.102)

Q_1(x) = (x/2) ln[(1 + x)/(1 − x)] − 1    (B.103)

Figure B.6: Legendre Functions of the Second Kind
[Plots of Q_n(x) for n = 0, 1, 2, 3 on −1 < x < 1.]

Associated Legendre Functions


Associated Legendre differential equation

(1 − x²) y″ − 2x y′ + [ℓ(ℓ+1) − m²/(1 − x²)] y = 0    (B.104)

Associated Legendre functions of the first kind

P_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} d^m/dx^m P_ℓ(x)    (B.105)
        = [(−1)^m/(2^ℓ ℓ!)] (1 − x²)^{m/2} d^{ℓ+m}/dx^{ℓ+m} (x² − 1)^ℓ    (B.106)

P_ℓ^0(x) = P_ℓ(x)    (B.107)

P_ℓ^{−m}(x) = (−1)^m [(ℓ − m)!/(ℓ + m)!] P_ℓ^m(x)    (B.108)

P_ℓ^m(x) = 0   if m > ℓ    (B.109)

For ℓ = 1, 2:

P_1^1(x) = −√(1 − x²)    (B.110)
P_2^1(x) = −3x √(1 − x²)    (B.111)
P_2^2(x) = 3(1 − x²)    (B.112)

Orthogonality

∫_{−1}^{1} P_ℓ^m(x) P_{ℓ′}^m(x) dx = [2/(2ℓ+1)] [(ℓ + m)!/(ℓ − m)!] δ_{ℓℓ′}    (B.113)

Associated Legendre functions of the second kind

Q_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} d^m/dx^m Q_ℓ(x)    (B.114)

Spherical Harmonics
Y_ℓ^m(θ, φ) = √[ (2ℓ+1)/(4π) · (ℓ − m)!/(ℓ + m)! ] P_ℓ^m(cos θ) e^{imφ}    (B.115)

Y_ℓ^{−m}(θ, φ) = (−1)^m [Y_ℓ^m(θ, φ)]*    (B.116)

Y_ℓ^0(θ, φ) = √[(2ℓ+1)/(4π)] P_ℓ(cos θ)    (B.117)

For ℓ = 0, 1, 2:

Y_0^0(θ, φ) = 1/√(4π)    (B.118)

Y_1^0(θ, φ) = ½ √(3/π) cos θ    (B.119)

Y_1^1(θ, φ) = −½ √(3/(2π)) sin θ e^{iφ}    (B.120)

Y_2^0(θ, φ) = ¼ √(5/π) (3 cos²θ − 1)    (B.121)

Y_2^1(θ, φ) = −½ √(15/(2π)) sin θ cos θ e^{iφ}    (B.122)

Y_2^2(θ, φ) = ¼ √(15/(2π)) sin²θ e^{2iφ}    (B.123)

Orthogonality and completeness

∫_0^{2π} ∫_0^π Y_ℓ^m(θ, φ) [Y_{ℓ′}^{m′}(θ, φ)]* sin θ dθ dφ = δ_{ℓℓ′} δ_{mm′}    (B.124)

Σ_{ℓ=0}^{∞} Σ_{m=−ℓ}^{ℓ} Y_ℓ^m(θ, φ) [Y_ℓ^m(θ′, φ′)]* = (1/sin θ) δ(θ − θ′) δ(φ − φ′)    (B.125)

Addition theorem
Σ_{m=−ℓ}^{ℓ} Y_ℓ^m(θ, φ) [Y_ℓ^m(θ′, φ′)]* = [(2ℓ+1)/(4π)] P_ℓ(cos γ)    (B.126)

where

cos γ = cos θ cos θ′ + sin θ sin θ′ cos(φ − φ′)    (B.127)

With θ = θ′ and φ = φ′, cos γ = 1, so

Σ_{m=−ℓ}^{ℓ} |Y_ℓ^m(θ, φ)|² = (2ℓ+1)/(4π)    (B.128)
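For ℓ = 1 the sum rule (B.128) can be confirmed directly from the explicit spherical-harmonic forms for Y_1^0 and Y_1^{±1}; the short check below is an added illustration, not part of the original notes.

```python
import math

def sum_l1(theta):
    """|Y_1^{-1}|^2 + |Y_1^0|^2 + |Y_1^1|^2 from the explicit l = 1 forms."""
    y10_sq = (3 / (4 * math.pi)) * math.cos(theta)**2
    y11_sq = (3 / (8 * math.pi)) * math.sin(theta)**2   # |Y_1^{+1}|^2 = |Y_1^{-1}|^2
    return y10_sq + 2 * y11_sq

# The sum is (2*1+1)/(4 pi), independent of theta and phi:
for theta in (0.0, 0.4, 1.1, math.pi / 2, 2.7):
    assert abs(sum_l1(theta) - 3 / (4 * math.pi)) < 1e-14
```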
C Vector Identities

 
a · (b × c) = b · (c × a) = c · (a × b) = det[ a₁ a₂ a₃ ; b₁ b₂ b₃ ; c₁ c₂ c₃ ]    (C.1)
a × (b × c) = (a · c) b − (a · b) c    (C.2)
(a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c)    (C.3)
∇ · (ψa) = ψ ∇ · a + (∇ψ) · a    (C.4)
∇ × (ψa) = ψ ∇ × a + (∇ψ) × a    (C.5)
∇(a · b) = (a · ∇)b + (b · ∇)a + a × (∇ × b) + b × (∇ × a)    (C.6)
∇ · (a × b) = (∇ × a) · b − (∇ × b) · a    (C.7)
∇ × (a × b) = a(∇ · b) − b(∇ · a) + (b · ∇)a − (a · ∇)b    (C.8)
∇ · (∇ × a) = 0    (C.9)
∇ × (∇ψ) = 0    (C.10)
∇ × (∇ × a) = ∇(∇ · a) − ∇²a    (C.11)
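Algebraic identities such as the "BAC-CAB" rule a × (b × c) = (a·c)b − (a·b)c are easy to spot-check on concrete vectors; the snippet below (added, with arbitrary example vectors) verifies it componentwise.

```python
def dot(u, v):
    """Euclidean dot product of two 3-vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a, b, c = (1.0, -2.0, 3.0), (0.5, 4.0, -1.0), (2.0, 1.0, 2.0)
lhs = cross(a, cross(b, c))
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```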

∮_{∂S} A · ds = ∫_S (∇ × A) · dS    Stokes's theorem    (C.12)

∮_{∂V} A · dS = ∫_V ∇ · A dV    Gauss's theorem    (C.13)

∮_{∂V} φ ∇ψ · dS = ∫_V (φ ∇²ψ + ∇ψ · ∇φ) dV    Green's 1st identity    (C.14)

∮_{∂V} (φ ∇ψ − ψ ∇φ) · dS = ∫_V (φ ∇²ψ − ψ ∇²φ) dV    Green's 2nd identity    (C.15)

∮_{∂V} ψ dS = ∫_V ∇ψ dV    (C.16)

∮_{∂V} A × dS = −∫_V ∇ × A dV    (C.17)

∮_{∂S} ψ ds = −∫_S ∇ψ × dS    (C.18)

∫_V A · ∇ψ dV = ∮_{∂V} ψ A · dS − ∫_V ψ ∇ · A dV    (C.19)


Helmholtz’s theorem
A(x) = ∇ × [ (1/(4π)) ∫ (∇′ × A(x′))/‖x − x′‖ dV′ ] − ∇ [ (1/(4π)) ∫ (∇′ · A(x′))/‖x − x′‖ dV′ ]    (C.20)

If x is a position vector, r = ‖x‖, and n = x/r, then

∇ · x = 3    (C.21)
∇ × x = 0    (C.22)
∇ · [n f(r)] = ∂f/∂r + (2/r) f(r)    (C.23)
∇ × [n f(r)] = 0    (C.24)
(a · ∇)[n f(r)] = [f(r)/r][a − n(a · n)] + n(a · n) ∂f/∂r    (C.25)
∇(x · a) = a + x(∇ · a) + (x × ∇) × a    (C.26)
∇²(1/r) = −4π δ³(x)    (C.27)
(∂²/∂x_i ∂x_j)(1/r) = (3x_i x_j − r² δ_ij)/r⁵ − (4π/3) δ_ij δ³(x)    (C.28)

Unit vector relations


e_ρ = cos φ e_x + sin φ e_y    (C.29)
e_φ = −sin φ e_x + cos φ e_y    (C.30)

e_x = cos φ e_ρ − sin φ e_φ    (C.31)
e_y = sin φ e_ρ + cos φ e_φ    (C.32)

e_r = sin θ cos φ e_x + sin θ sin φ e_y + cos θ e_z    (C.33)
e_θ = cos θ cos φ e_x + cos θ sin φ e_y − sin θ e_z    (C.34)
e_φ = −sin φ e_x + cos φ e_y    (C.35)

e_x = sin θ cos φ e_r + cos θ cos φ e_θ − sin φ e_φ    (C.36)
e_y = sin θ sin φ e_r + cos θ sin φ e_θ + cos φ e_φ    (C.37)
e_z = cos θ e_r − sin θ e_θ    (C.38)

Line, area, and volume elements


ds = dx e_x + dy e_y + dz e_z    (C.39)
   = dρ e_ρ + ρ dφ e_φ + dz e_z    (C.40)
   = dr e_r + r dθ e_θ + r sin θ dφ e_φ    (C.41)
dA = dy dz e_x + dz dx e_y + dx dy e_z    (C.42)
   = ρ dφ dz e_ρ + dz dρ e_φ + ρ dρ dφ e_z    (C.43)
   = r² sin θ dθ dφ e_r + r sin θ dφ dr e_θ + r dr dθ e_φ    (C.44)
dV = dx dy dz = ρ dρ dφ dz = r² sin θ dr dθ dφ    (C.45)

Rectilinear Coordinates (x, y, z)

è è è
∇è = e + e + e (C.46)
x x y y z z
A x A y A z
∇· A = + + (C.47)
x y z
! ! !
A z A y A x A z A y A x
∇× A = − ex + − ey + − ez (C.48)
y z z x x y
2 A x 2 A y 2 A z
∇2 è = + + (C.49)
x 2 y 2 z 2
∇2 A = ∇2 A x ex + ∇2 A y e y + ∇2 A z ez (C.50)

Cylindrical Coordinates (â, æ, z)

è 1 è è
∇è = e + e + e (C.51)
â â â æ æ z z
1  1 Aæ A z
∇· A = (âAâ ) + + (C.52)
⠁â â æ z
A A Aâ
! ! !
1 A z æ ⠁A z 1 
∇× A = − eâ + − eæ + (âAæ ) − ez (C.53)
â æ z z â ⠁⠁æ
1 2 è 2 è
!
1  è
∇2 è = â + 2 + (C.54)
⠁⠁â ⠁æ2 z 2
2 Aæ 2 Aâ
! !
1 1
∇2 A = ∇2 A â − 2 A â − 2 eâ + ∇2 Aæ − 2 Aæ − 2 eæ + ∇2 A z ez (C.55)
â ⠁æ â ⠁æ

Spherical Polar Coordinates (r, Ú, æ)

è 1 è 1 è
∇è = e + e + e (C.56)
r r r Ú Ú r sin Ú æ æ
1  2 1  1 Aæ
∇· A = 2 (r A r ) + (sin ÚAÚ ) + (C.57)
r r r sin Ú Ú r sin Ú æ
! ! !
1  AÚ 1 1 A r  1  A r
∇× A = (sin ÚAæ ) − er + − (rAæ ) eÚ + (rAÚ ) − eæ (C.58)
r sin Ú Ú æ r sin Ú æ r r r Ú
2 è
! !
1  2 è 1  è 1
∇2 è = 2 r + 2 sin Ú + (C.59)
r r r r sin Ú Ú Ú r 2 sin2 Ú æ2
Aæ 2 cos Ú Aæ
" # " #
2 2  2 1 2 A r
∇2 A = ∇2 A r − 2 A r − 2 (sin ÚAÚ ) − 2 e r + ∇2 A Ú − AÚ + 2 − eÚ
r r sin Ú Ú r sin Ú æ r 2 sin2 Ú r Ú r 2 sin2 Ú æ
" #
1 2 A r 2 cos Ú AÚ
+ ∇2 Aæ − Aæ + + eæ . (C.60)
r 2 sin2 Ú r 2 sin2 Ú æ r 2 sin2 Ú æ
Index

absolute convergence, 5 Cauchy principal value, 64


active rotation, 194 Cauchy-Goursat theorem, 36
adjoint, 188 Cauchy-Riemann equations, 30
Airy differential equation, 139 Cauchy-Schwarz inequality, 228
Airy function of the first kind, 140 characteristic equation, 196
Airy function of the second kind, 140 characteristic polynomial, 196
alibi rotation, 194 characteristics, 234
alias rotation, 194 cofactor matrix, 189
alternating series, 4 column vector, 187
analytic continuation, 4, 48 commutative, 187, 188
analytic function, 30 complementary error function, 71
antisymmetric matrix, 189 complementary function, 119
associated Legendre function, 284 completeness relation, 173
associated Legendre differential equation, 170, 284 complex argument, 24
associated Legendre function, 170 complex modulus, 24
associated Legendre functions of the complex number, 24
second kind, 284 components, 190
associative, 188 connection formula, 144
asymptotic series, 72 conservative vector field, 213
autocorrelation, 106 contour, 34
contour integral, 34
basis vectors, 190 convergence, 4, 5
Bernoulli equation, 116 convolution, 102
Bernoulli numbers, 17 convolution theorem, 102
Bessel differential equation, 128, 154, 274 coordinate system, 190
Bessel function, 154, 161, 274 cross product, 195
Bessel function of the second kind, 157 curl, 205
Bessel functions of half-integer order, 162, 277
Bessel’s inequality, 228 degenerate eigenvalues, 172
binomial coefficient, 13, 56 determinant, 188
binomial series, 13, 270 diagonal matrix, 189
Bohr-Sommerfeld quantization rule, 145 diffusion equation, 231
Born approximation, 267 Dirac delta function, 93
branch cut, 32 direction cosine, 193
Bromwich integral, 100 distributive, 188
distributive, 188
Cauchy boundary conditions, 233 divergence, 205
Cauchy integral formula, 37 divergence theorem, 214


dot product, 193 Helmholtz’s theorem, 218, 287


double integral, 207 Hermite differential equation, 132, 154
Hermite polynomial, 134, 154
eigenfunction, 134, 151 Hermitian matrix, 189
eigenvalue, 134, 151, 196 Hermitian operator, 151
eigenvalue problem, 150 Hilbert transformation, 100
eigenvector, 196 homogeneous equation, 117, 119, 232
elliptic equation, 231 homogeneous function, 117
entire function, 30 hyperbolic equation, 231
error function, 70 hyperbolic functions, 33
essential singular point, 46
Euler’s formula, 25 idempotent matrix, 189
Euler’s reflection formula, 55, 272 identity matrix, 188
exact equation, 110 imaginary constant, 24
exponential form, 25 imaginary part, 24
exponential function, 31 inconsistent system of equations, 185
exponential integral, 74 indicial equation, 128
exponential series, 13 inhomogeneous equation, 119, 232
inner product, 193
Fourier cosine transform, 95 integral equation, 264
Fourier series, 19, 86 integrating factor, 111
Fourier sine transform, 95 inverse hyperbolic functions, 33
Fourier transform, 92 inverse trigonometric functions, 33
Fourier’s law of conduction, 244 isobaric equation, 118
Fourier-Bessel transform, 100 isolated singular point, 45
Fredholm integral equation, 264
fundamental solution, 253 Jacobi identity, 195
Jacobian matrix, 208
gamma function, 50, 272 Jordan’s inequality, 68
Gauss’s law, 215
Gauss’s mean value theorem, 39 kernel, 264
Gauss’s theorem, 214, 286 Kronecker delta, 90
generating function, 158
geometric series, 3 Laplace transform, 100
Gibbs’s phenomenon, 88 Laplace’s equation, 30, 231
gradient, 203 Laplacian, 206
Gram-Schmidt orthogonalization, 172, Laurent’s theorem, 42
198 Legendre differential equation, 12, 124,
Green function, 175, 247 154, 281
Green’s first identity, 217, 286 Legendre functions of the second kind,
Green’s second identity, 217, 286 127, 283
Green’s theorem, 211 Legendre polynomial, 126, 154, 165, 281
Gregory’s series, 88 Legendre’s duplication formula, 56, 272
length, 193
Hankel function, 161, 274 Levi-Civita symbol, 188
Hankel transform, 100 Liénard-Wiechert potential, 263
harmonic conjugate, 30 line integral, 207
harmonic function, 30 linear equation, 232
heat equation, 242 linear operator, 191
Helmholtz equation, 235 linear system of equations, 185

linearly dependent solutions, 155 rectilinear coordinates, 288


linearly independent, 190 regular singular point, 123
linearly independent equations, 185 removable singular point, 46
linearly independent solutions, 155 residue, 45
logarithm function, 32 residue theorem, 47
longitudinal vector field, 218 retarded potential, 262
retarded time, 262
Maclaurin series, 40 Riemann zeta function, 10
matrix, 187
matrix inverse, 189 scalar product, 193
matrix minor, 189 scalar triple product, 195, 286
maximum modulus principle, 39 Schrödinger equation, 232
Mellin transformation, 100 second order equation, 232
method of images, 249 secular equation, 196
method of steepest descent, 75 separable equation, 109
method of undetermined coefficients, 120 similarity transformation, 192
metric, 221 simple closed contour, 34
modified Bessel differential equation, 164, simple contour, 34
279 simple pole, 46
modified Bessel function of the first kind, singular point, 30, 123
164, 279 spherical Bessel function, 163, 278
modified Bessel function of the second spherical Hankel function, 163, 278
kind, 164, 279 spherical harmonics, 171, 285
spherical polar coordinates, 226, 288
Neumann boundary conditions, 233 Stirling’s formula, 76, 81, 273
Neumann series, 265 Stokes’s theorem, 212, 286
nilpotent matrix, 189 Sturm-Liouville differential equation, 153
normal form, 234 surface integral, 209
normal modes, 239 symmetric matrix, 189

ordinary point, 123 Taylor’s theorem, 40


orthogonal functions, 152 trace, 188
orthogonal matrix, 189 transfer function, 103
orthogonal vectors, 193 transformation matrix, 192
orthonormal vectors, 193 transpose, 188
overdetermined system of equations, 185 transverse vector field, 218
trigonometric functions, 33
parabolic equation, 231
Parseval’s identity, 91, 94 underdetermined system of equations,
partial derivative, 203 185
particular integral, 119 unitary matrix, 189
passive rotation, 194
pole, 46 vector field, 203
principal part, 46 vector product, 195
principal value, 24 vector space, 190
vector triple product, 195, 286
ratio test, 8 volume integral, 207
real matrix, 189
wave equation, 231
real part, 24
reciprocity relation, 177 Wentzel-Kramers-Brillouin (WKB) method, 137

Wiener-Khinchin theorem, 106


Wronskian, 155

zero matrix, 188
