Best Mathematical Methods Notes
Jolien Creighton
December 15, 2020
[Cover figure: contour-plot topography in the complex t plane with a deformed integration contour C.]
Contents
I Infinite Series 1
1 Geometric Series 3
2 Convergence 5
3 Familiar Series 13
4 Transformation of Series 15
Problems 21
II Complex Analysis 22
5 Complex Variables 24
6 Complex Functions 27
7 Complex Integrals 34
8 Example: Gamma Function 50
Problems 57
IV Integral Transforms 84
13 Fourier Series 86
14 Fourier Transforms 92
15 Other Transform Pairs 100
16 Applications of the Fourier Transform 101
Problems 106
Appendix 270
A Series Expansions 270
B Special Functions 272
C Vector Identities 286
Index 289
List of Figures
7.1 Contour 34
7.2 Contour for Ex. 7.1 35
7.3 Contour for Cauchy integral formula 37
7.4 Contour for Taylor's theorem 40
7.5 Contours for Laurent's theorem 43
7.6 Intersecting Domains 48
Module I
Infinite Series
1 Geometric Series 3
2 Convergence 5
3 Familiar Series 13
4 Transformation of Series 15
Problems 21
Motivation
In physics problems we often encounter infinite series. Sometimes we want to
expand functions in power series, e.g., when we want to evaluate complex
functions for small arguments. Sometimes we have solutions in the form of an
infinite series and we want to sum the series.
This module reviews techniques for determining if a series will converge, for
summing series, and recaps certain familiar series that are commonly
encountered.
1 Geometric Series
The geometric series is
\sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + x^4 + \cdots .   (1.1)
If x \neq 1 then
f(x) = \frac{1}{1-x}   (1.5)
     = 1 + x + x^2 + x^3 + x^4 + \cdots .   (1.6)
We’ll see that the second equality holds only for |x| < 1.
We see geometric series in repeating fractions:
f(x) = 1 + x + x^2 + x^3 + x^4 + \cdots   (1.9)

2 Convergence

An infinite series
\sum_{n=1}^{\infty} a_n = a_1 + a_2 + a_3 + \cdots   (2.1)
is said to converge to the sum S provided the sequence of partial sums converges, i.e., has the limit S:
\lim_{N\to\infty} \sum_{n=1}^{N} a_n = S .   (2.2)
subtract:
S_N = \frac{1 - x^{N+1}}{1 - x} .   (2.5)
Then, in the limit N \to \infty,
\lim_{N\to\infty} S_N = \frac{1}{1-x} - \frac{x}{1-x}\lim_{N\to\infty} x^N .   (2.6)
Also
\lim_{N\to\infty} S_{2N+1} = \lim_{N\to\infty}\left( S_{2N} + \frac{1}{2N+1} \right) = \lim_{N\to\infty} S_{2N}   (2.12)
and we see that if, as n → ∞, |an+1 /an | < 1 then our series converges just as
the geometric series does. Thus we obtain the ratio test:
Ratio Test
• If \lim_{n\to\infty} |a_{n+1}/a_n| < 1 the series converges (absolutely).
• If \lim_{n\to\infty} |a_{n+1}/a_n| > 1 the series diverges.
• If \lim_{n\to\infty} |a_{n+1}/a_n| = 1 (or the limit does not exist) we must investigate further.
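As an added numerical illustration (a minimal sketch, not part of the original notes), the limiting ratio |a_{n+1}/a_n| can be estimated directly from the terms of a series; the helper below applies this to the geometric series, to \sum 1/n^2 (for which the limit is 1 and further investigation is needed), and to the exponential series.

```python
import math

# Crude numerical check of the ratio test: estimate |a_(n+1)/a_n| at large n.
def ratio_estimate(a, n=100):
    return abs(a(n + 1) / a(n))

geometric = lambda n: 0.5**n                       # ratio -> 0.5 < 1: converges
p_series = lambda n: 1.0 / n**2                    # ratio -> 1: test inconclusive
exp_series = lambda n: 3.0**n / math.factorial(n)  # ratio -> 0: converges for any x

for name, a in [("geometric", geometric), ("1/n^2", p_series), ("x^n/n!", exp_series)]:
    print(f"{name:10s}  |a_(n+1)/a_n| ~ {ratio_estimate(a):.4f}")
```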
Figure 2.1: Riemann sums used in the integral test, where f(x) is a monotonically decreasing function. Left: a_2 + a_3 + a_4 + a_5 < \int_1^5 f(x)\,dx. Right: a_1 + a_2 + a_3 + a_4 > \int_1^5 f(x)\,dx.
and this converges as x → ∞ if Re(s) > 1 so the Riemann zeta function converges for
Re(s) > 1.
This suggests that we can sharpen the ratio test by comparison to the
Riemann zeta function:
If a_{n+1}/a_n \sim 1 - s/n as n \to \infty with s > 1 then the series converges absolutely.
Note:
\int \frac{dx}{x(\ln x)^s} = -\frac{1}{s-1}\frac{1}{(\ln x)^{s-1}}   (2.26)
y = 1 - n(n+1)\frac{x^2}{2!} + n(n+1)(n-2)(n+3)\frac{x^4}{4!} - \cdots   (2.29)
(see Ex. 19.2).
Try the ratio test: if the series is y = \sum_{m=1}^{\infty} a_m then
\frac{a_m}{a_{m-1}} = -\frac{(n - 2m + 4)(n + 2m - 3)}{(2m - 3)(2m - 2)}\, x^2 .   (2.30)
Check: take a_1 = 1 and then
a_2 = -\frac{(n - 4 + 4)(n + 4 - 3)}{(4 - 3)(4 - 2)} x^2 a_1 = -\frac{1}{2} n(n+1) x^2  ✓
a_3 = -\frac{(n - 6 + 4)(n + 6 - 3)}{(6 - 3)(6 - 2)} x^2 a_2 = -\frac{1}{3\cdot 4}(n-2)(n+3) x^2 a_2 = \frac{1}{4!} n(n+1)(n-2)(n+3) x^4 .  ✓
For large m,
\frac{a_m}{a_{m-1}} \sim \left( 1 - \frac{1}{m} + O\!\left(\frac{1}{m^2}\right) \right) x^2 \quad \text{as } m \to \infty .   (2.31)
3 Familiar Series

• Binomial series
(1 + x)^\alpha = 1 + \alpha x + \alpha(\alpha - 1)\frac{x^2}{2!} + \alpha(\alpha - 1)(\alpha - 2)\frac{x^3}{3!} + \cdots
= \sum_{n=0}^{\infty} \binom{\alpha}{n} x^n   (3.1)
where
\binom{\alpha}{n} = \frac{\alpha(\alpha - 1)(\alpha - 2)\cdots(\alpha - n + 1)}{n!}   (3.2)
• Exponential series
e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
= \sum_{n=0}^{\infty} \frac{x^n}{n!} .   (3.3)
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots   (3.7)
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots .   (3.8)
so
\ln(1 + x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \frac{1}{4}x^4 + \cdots .   (3.10)
\frac{1}{2}\ln\frac{1+x}{1-x} = x + \frac{1}{3}x^3 + \frac{1}{5}x^5 + \frac{1}{7}x^7 + \cdots .   (3.11)
so
\arctan x = x - \frac{1}{3}x^3 + \frac{1}{5}x^5 - \frac{1}{7}x^7 + \cdots .   (3.13)
4 Transformation of Series
f(x) = \frac{x^2}{2!} + \frac{2x^3}{3!} + \frac{3x^4}{4!} + \cdots .   (4.2)
Note: f(1) = S and f(0) = 0.
Now,
f'(x) = x + x^2 + \frac{x^3}{2!} + \frac{x^4}{3!} + \cdots   (4.3a)
= x\left( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \right)   (4.3b)
= x e^x .   (4.3c)
Therefore
f(x) = \int x e^x\,dx = x e^x - e^x + C .   (4.4)
0 = f(0) = 0\cdot e^0 - e^0 + C = -1 + C \implies C = 1   (4.5)
so
f(x) = x e^x - e^x + 1   (4.6)
and thus
S = f(1) = 1\cdot e^1 - e^1 + 1 = 1 .   (4.7)
f(x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots .   (4.9)
Note: S = f(1) and recall f(x) = \ln(1 + x) so
S = \ln 2 .   (4.10)
However, we can rearrange the series by putting two negative terms after each positive term:
S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots   (4.11a)
= 1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \cdots   (4.11b)
= \left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \cdots   (4.11c)
= \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \cdots   (4.11d)
= \frac{1}{2}\left( 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots \right)   (4.11e)
= \frac{1}{2}\ln 2 .   (4.11f)
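The rearrangement is easy to see numerically; the short sketch below (an addition, not part of the original notes) sums the natural ordering and the "one positive, two negative" ordering of Eq. (4.11).

```python
import math

def natural(n_terms):
    # 1 - 1/2 + 1/3 - 1/4 + ...
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_groups):
    # group g contributes  1/(2g-1) - 1/(4g-2) - 1/(4g)
    return sum(1 / (2 * g - 1) - 1 / (4 * g - 2) - 1 / (4 * g)
               for g in range(1, n_groups + 1))

print(natural(100000), math.log(2))            # -> ln 2
print(rearranged(100000), 0.5 * math.log(2))   # -> (1/2) ln 2
```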
Now divide both sides by x and define the Bernoulli numbers by c_n = B_n/n!:
1 = \left( B_0 + B_1\frac{x}{1!} + B_2\frac{x^2}{2!} + \cdots \right)\left( 1 + \frac{x}{2!} + \frac{x^2}{3!} + \cdots \right) .   (4.14)
1 = B_0   (4.15a)
0 = \frac{B_0}{2!} + \frac{B_1}{1!} \implies B_1 = -\frac{1}{2}   (4.15b)
0 = \frac{B_0}{3!} + \frac{B_1}{1!\,2!} + \frac{B_2}{2!\,1!} \implies B_2 = \frac{1}{6}   (4.15c)
and so on. The first few Bernoulli numbers are
B_0 = 1, \quad B_1 = -\frac{1}{2}, \quad B_2 = \frac{1}{6}, \quad B_4 = -\frac{1}{30}, \quad B_6 = \frac{1}{42}, \quad \ldots, \quad B_3 = B_5 = B_7 = \cdots = 0 .   (4.16)
Let i x = y/2:
\cot x = i\,\frac{e^{y/2} + e^{-y/2}}{e^{y/2} - e^{-y/2}}   (4.18a)
= i\,\frac{e^y + 1}{e^y - 1}   (4.18b)
= i\left( 1 + \frac{2}{e^y - 1} \right)   (4.18c)
= \frac{2i}{y}\left( \frac{y}{2} + \frac{y}{e^y - 1} \right)   (4.18d)
= \frac{2i}{y}\left( -B_1 y + \sum_{n=0}^{\infty} B_n\frac{y^n}{n!} \right)   (4.18e)
\cot x = \frac{1}{x}\sum_{m=0}^{\infty} (-1)^m B_{2m}\frac{(2x)^{2m}}{(2m)!}
= \frac{1}{x} - \frac{1}{3}x - \frac{1}{45}x^3 - \frac{2}{945}x^5 - \cdots \qquad 0 < |x| < \pi .   (4.19)
Deduce the series for \tan x using \tan x = \cot x - 2\cot 2x:
\tan x = \frac{1}{x}\sum_{m=1}^{\infty} (-1)^{m-1}(2^{2m} - 1) B_{2m}\frac{(2x)^{2m}}{(2m)!}
= x + \frac{1}{3}x^3 + \frac{2}{15}x^5 + \frac{17}{315}x^7 + \cdots \qquad |x| < \frac{\pi}{2} .   (4.20)
Ex. 4.4. And, just for fun, use Hardy's method to sum the series
S = \sum_{n=1}^{\infty}\frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \zeta(2) .   (4.21)
where all the b_n coefficients are zero since \cos kx is an even function, and where
a_n = \frac{2}{\pi}\int_0^{\pi}\cos nx\cos kx\,dx   (4.22c)
= (-1)^n\frac{2k\sin k\pi}{\pi(k^2 - n^2)}   (4.22d)
-2\zeta(2n) = (-1)^n\frac{B_{2n}(2\pi)^{2n}}{(2n)!}   (4.28)
or
\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2\,(2n)!} .   (4.29)
Hence:
1 + \frac{1}{4} + \frac{1}{9} + \cdots = \zeta(2) = \frac{B_2(2\pi)^2}{4} = \frac{\pi^2}{6}   (4.30)
1 + \frac{1}{16} + \frac{1}{81} + \cdots = \zeta(4) = -\frac{B_4(2\pi)^4}{48} = \frac{\pi^4}{90}   (4.31)
etc.
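These closed forms are easy to sanity-check numerically; the short sketch below (an addition, not part of the original notes) compares truncated sums against \pi^2/6 and \pi^4/90.

```python
import math

def zeta_partial(s, n_terms=200000):
    # Truncated sum of the Riemann zeta series 1/1^s + 1/2^s + ...
    return sum(1.0 / k**s for k in range(1, n_terms + 1))

print(zeta_partial(2), math.pi**2 / 6)    # ~ 1.644934
print(zeta_partial(4), math.pi**4 / 90)   # ~ 1.082323
```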
Problems
Problem 1.
a) For what values of x does the following series converge?
f(x) = 1 + \frac{4}{x^2} + \frac{16}{x^4} + \frac{64}{x^6} + \cdots
Problem 2.
a) Find the sum of the following series:
1 + \frac{1}{4} - \frac{1}{16} - \frac{1}{64} + \frac{1}{256} + \frac{1}{1024} - - + + \cdots
Problem 3.
By repeatedly differentiating the geometric series
\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n
Module II
Complex Analysis
5 Complex Variables 24
6 Complex Functions 27
7 Complex Integrals 34
Problems 57
Motivation
Complex numbers are encountered not only in quantum mechanics but are
also a useful tool for many applications in physics. Complex analysis and
contour integration give powerful mathematical techniques which we will
encounter over and over in later modules.
5 Complex Variables
Basics
A complex number can be written as
z = x + i y   (5.1)
where the real part and imaginary part are
\mathrm{Re}\,z = x \quad \text{and} \quad \mathrm{Im}\,z = y   (5.2)
respectively, and where the imaginary constant i satisfies i^2 = -1.
The complex inverse, z^{-1}, which satisfies z\cdot z^{-1} = 1, is
z^{-1} = \frac{x - i y}{x^2 + y^2} .   (5.3)
r = |z| = \sqrt{x^2 + y^2}   (5.4)
is the complex modulus and
\theta = \arg z = \arctan(y/x)   (5.5)
is the complex argument. Then
z = r(\cos\theta + i\sin\theta) .   (5.6)
Figure 5.1: Representation of a complex number z = (x, y) as a point on a two-dimensional plane.
Note: \arg z is multiple valued. Define the principal value \mathrm{Arg}\,z such that
\arg z = \mathrm{Arg}\,z + 2n\pi, \quad n = 0, \pm 1, \pm 2, \ldots   (5.7)
where -\pi < \mathrm{Arg}\,z \le \pi.
Identities
Also,
We have:
therefore
z^n = 1 \implies r^n e^{in\theta} = 1\cdot e^{i0}   (5.18a)
\implies r = 1 \ \text{and} \ n\theta = 0 + 2k\pi, \quad k = 0, \pm 1, \pm 2, \ldots   (5.18b)
therefore the n distinct roots of unity are
1, \ \omega_n, \ \omega_n^2, \ \ldots, \ \omega_n^{n-1} \quad \text{where} \ \omega_n = e^{2\pi i/n} .   (5.20)
6 Complex Functions

Consider
w = f(z) .   (6.1)
f(x + i y) = \underbrace{x^2 - y^2}_{u(x,y)} + i\,\underbrace{2xy}_{v(x,y)} .   (6.3)
Think of this as a map from the x-y plane to the u-v plane as seen in Fig. 6.1.
Figure 6.1: The map w = f(z) takes the region ABCD of the x-y plane to the region A'B'C'D' of the u-v plane.
Limits
If f (z) is defined at all points z in some “deleted neighborhood” of z0 (does not
include z0 ) then
if and only if
and w0 = u0 + i v0 .
Continuity
f (z) is continuous at a point z0 if
Derivatives
f'(z_0) = \lim_{z\to z_0}\frac{f(z) - f(z_0)}{z - z_0}
= \lim_{\Delta z\to 0}\frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} .   (6.6)
f'(z) = \lim_{\Delta z\to 0}\frac{(z + \Delta z)^2 - z^2}{\Delta z}   (6.7a)
= \lim_{\Delta z\to 0}(2z + \Delta z)   (6.7b)
= 2z .   (6.7c)
f'(z) = \lim_{\Delta z\to 0}\frac{(z + \Delta z)(z^* + (\Delta z)^*) - z\cdot z^*}{\Delta z}   (6.8a)
= \lim_{\Delta z\to 0}\left\{ z^* + (\Delta z)^* + z\,\frac{(\Delta z)^*}{\Delta z} \right\} .   (6.8b)
These are different results if z \neq 0, therefore the only place the derivative exists is at z = 0.
Note: f = |z|^2 is continuous since
Cauchy-Riemann Equations
If f(z) = u(x, y) + i v(x, y) then, if we approach z with y constant and \Delta z = \Delta x,
f'(z) = \frac{\partial u}{\partial x}(x, y) + i\frac{\partial v}{\partial x}(x, y)   (6.10a)
whereas if we approach z with x constant and \Delta z = i\Delta y,
f'(z) = \frac{\partial v}{\partial y}(x, y) - i\frac{\partial u}{\partial y}(x, y) .   (6.10b)
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \quad \text{and} \quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} .   (6.11)
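As an added illustration (not in the original notes), the Cauchy-Riemann equations are easy to check symbolically; the sketch below verifies them for f(z) = z^2 and shows they fail for f(z) = |z|^2 except at the origin.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

def cauchy_riemann(u, v):
    """Return the residuals u_x - v_y and u_y + v_x (both zero iff CR holds)."""
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)))

# f(z) = z^2 = (x^2 - y^2) + i(2xy): analytic everywhere
print(cauchy_riemann(x**2 - y**2, 2*x*y))   # (0, 0)

# f(z) = |z|^2 = (x^2 + y^2) + i*0: CR holds only at x = y = 0
print(cauchy_riemann(x**2 + y**2, 0*x))     # (2*x, 2*y)
```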
Analytic Functions
A function is said to be analytic at a point z0 if its derivative exists in a
neighborhood of z0 .
Ex. 6.3. f (z) = 1/z is analytic everywhere except for z = 0. However, since f (z) is
analytic at some point in every neighborhood of z = 0, we call z = 0 a singular point.
Harmonic Functions
A harmonic function h(x, y) satisfies Laplace's equation
\frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} = 0 .   (6.12)
If f(z) = u(x, y) + i v(x, y) is analytic in some domain then u and v are harmonic functions in that domain and v is known as the harmonic conjugate of u.
Exponential Function
We seek something that behaves like e^x along the real axis, i.e.,
\frac{d}{dx}e^x = e^x \quad \forall x \ \text{(real)}.   (6.13)
e^z is entire and \frac{d}{dz}e^z = e^z \quad \forall z .   (6.14)
Note: this justifies our use of the symbol e^{i\theta} = \cos\theta + i\sin\theta in the polar form of a complex number.
The exponential function has the familiar properties:
e^{z_1}e^{z_2} = e^{z_1 + z_2}, \qquad e^{z + 2\pi i} = e^z,
|e^z| = e^x, \qquad \arg e^z = y + 2n\pi, \quad n = 0, \pm 1, \pm 2, \ldots,   (6.20)
e^z = \rho e^{i\phi} \implies z = \ln\rho + i(\phi + 2n\pi), \quad n = 0, \pm 1, \pm 2, \ldots .
Logarithm Function
The logarithm function is the inverse of the exponential function:
\exp(\log z) = z   (6.25a)
\log(\exp z) = z + 2\pi i n, \quad n = 0, \pm 1, \pm 2, \ldots   (6.25b)
\mathrm{Log}(\exp z) = z   (6.25c)
\log(z_1 z_2) = \log z_1 + \log z_2 \quad \text{(for some branch)}   (6.25d)
z^n = \exp(n\log z), \quad n = 0, \pm 1, \pm 2, \ldots   (6.25e)
z^{1/n} = \exp\left( \frac{1}{n}\log z \right), \quad z \neq 0, \ n = \pm 1, \pm 2, \ldots   (6.25f)
(the last equation has n distinct values corresponding to the n roots.)
Use the logarithm function to define complex exponents:
Find:
\frac{d}{dz}z^c = c z^{c-1}, \quad |z| > 0, \ \alpha < \arg z < \alpha + 2\pi .   (6.27)
The principal value of z^c is
Trigonometric Functions
Define the trigonometric functions as
\sin z = \frac{e^{iz} - e^{-iz}}{2i} \quad \text{and} \quad \cos z = \frac{e^{iz} + e^{-iz}}{2} .   (6.29)
Hyperbolic Functions
Define the hyperbolic functions as
\sinh z = \frac{e^z - e^{-z}}{2} \quad \text{and} \quad \cosh z = \frac{e^z + e^{-z}}{2} .   (6.30)
z = \frac{e^{iw} - e^{-iw}}{2i}   (6.32a)
\implies (e^{iw})^2 - 2iz(e^{iw}) - 1 = 0   (6.32b)
\implies e^{iw} = iz + (1 - z^2)^{1/2}   (6.32c)
\implies w = \arcsin z = -i\log[\,iz + (1 - z^2)^{1/2}\,] .   (6.32d)
Note: the square root is double-valued and the log is multiple-valued, so the arcsin function is multiple-valued.
Similarly one can compute the other inverse trigonometric functions.
7 Complex Integrals

A contour C is a set of points
C = \{ (x(t), y(t)) : a \le t \le b \}   (7.1)
(see Fig. 7.1). The length of C is
L = \int_a^b |z'(t)|\,dt   (7.2)
where z'(t) = x'(t) + i y'(t).
A simple contour does not self-intersect. A simple closed contour does not self-intersect except at the end points, which are the same.
Figure 7.1: Contour from (x(a), y(a)) at t = a to (x(b), y(b)) at t = b.
Contour Integral
A contour integral is
\int_C f(z)\,dz = \int_a^b f[z(t)]\,z'(t)\,dt .   (7.3)
This integral is invariant under re-parameterization of the contour.
Properties of contour integrals:
• \int_{-C} f(z)\,dz = -\int_C f(z)\,dz   (7.4a)
• \int_{C = C_1 + C_2} f(z)\,dz = \int_{C_1} f(z)\,dz + \int_{C_2} f(z)\,dz   (7.4b)
• \left| \int_C f(z)\,dz \right| \le \int_a^b |f[z(t)]\,z'(t)|\,dt   (7.4c)
• If M is a non-negative constant such that |f(z)| \le M on C then
\left| \int_C f(z)\,dz \right| \le M\int_a^b |z'(t)|\,dt = ML .   (7.4d)
Ex. 7.1. Integrate f(z) = z^{1/2} along the semicircular contour
z = 3e^{i\theta}, \quad 0 \le \theta \le \pi .   (7.5)
Let
f(z) = z^{1/2} = \sqrt{r}\,e^{i\theta/2}, \quad r > 0, \ 0 < \theta < 2\pi .   (7.6)
Note: this branch of the square root is not defined at the initial point, but we can still integrate f(z) because it only needs to be piecewise continuous.
Therefore
f[z(\theta)] = \sqrt{3}\,e^{i\theta/2} = \sqrt{3}\cos\frac{\theta}{2} + i\sqrt{3}\sin\frac{\theta}{2}, \quad 0 < \theta \le \pi .   (7.7)
As \theta \to 0, f[z(\theta)] \to \sqrt{3}, so just define this to be its value at \theta = 0. Then
I = \int_C f(z)\,dz = \int_C z^{1/2}\,dz = \int_0^{\pi}\sqrt{3}\,e^{i\theta/2}(3i e^{i\theta})\,d\theta   (7.8a)
= 3\sqrt{3}\,i\int_0^{\pi} e^{i3\theta/2}\,d\theta = 3\sqrt{3}\,i\left[ \frac{2}{3i}e^{i3\theta/2} \right]_0^{\pi} = 3\sqrt{3}\,i\cdot\frac{2}{3i}\,[-(1 + i)]   (7.8b)
= -2\sqrt{3}\,(1 + i) .   (7.8c)
If we had just wanted to bound the integral, we note that |z^{1/2}| = \sqrt{3} and L = 3\pi, therefore
|I| \le 3\sqrt{3}\,\pi .   (7.9)
Figure 7.2: Contour for Ex. 7.1: the semicircle |z| = 3, 0 \le \arg z \le \pi.
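The value in Eq. (7.8c) can be checked by brute force; the sketch below (an addition, not from the notes) evaluates the parameterized integral (7.8a) numerically with a trapezoid rule.

```python
import numpy as np

# Evaluate I = int_C z^(1/2) dz along z = 3 e^{i*theta}, 0 <= theta <= pi,
# using the branch z^(1/2) = sqrt(r) e^{i*theta/2}, 0 < theta < 2*pi.
theta = np.linspace(0.0, np.pi, 200001)
integrand = np.sqrt(3.0) * np.exp(1j * theta / 2) * (3j * np.exp(1j * theta))

dtheta = theta[1] - theta[0]
I = dtheta * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)

print(I)                               # ~ -3.4641 - 3.4641j
print(-2 * np.sqrt(3) * (1 + 1j))      # exact: -2*sqrt(3)*(1 + i)
```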
Cauchy-Goursat Theorem
Theorem 1 (Cauchy-Goursat). If a function f is analytic at all points interior to
and on a simple closed curve C then
\oint_C f(z)\,dz = 0 .   (7.10)
Sketch of proof.
\oint_C f(z)\,dz = \int_a^b f[z(t)]\,z'(t)\,dt   (7.11a)
\quad [\text{let } f(z) = u(x, y) + i v(x, y) \ \text{and} \ z(t) = x(t) + i y(t)]
= \int_a^b [(u x' - v y') + i(v x' + u y')]\,dt   (7.11b)
= \oint_C (u\,dx - v\,dy) + i\oint_C (v\,dx + u\,dy)   (7.11c)
\quad [\text{by Green's theorem, where } C \text{ is the boundary of region } R]
= \iint_R\left( -\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right)dx\,dy + i\iint_R\left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right)dx\,dy   (7.11d)
\quad [\text{by the Cauchy-Riemann equations}]
= 0 .   (7.11e)
f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz .   (7.12)
\int_{C_\epsilon}\frac{f(z)}{z - z_0}\,dz \approx f(z_0)\int_{C_\epsilon}\frac{dz}{z - z_0} = f(z_0)\int_0^{2\pi}\frac{i\epsilon e^{i\theta}}{\epsilon e^{i\theta}}\,d\theta   (7.13a)
= 2\pi i f(z_0) .   (7.13b)
Now divide C into the modified contour C + L - C_\epsilon - L as shown in Fig. 7.3. The integrand is analytic everywhere inside this contour so, by the Cauchy-Goursat theorem,
0 = \int_C\frac{f(z)}{z - z_0}\,dz + \int_L\frac{f(z)}{z - z_0}\,dz - \int_{C_\epsilon}\frac{f(z)}{z - z_0}\,dz - \int_L\frac{f(z)}{z - z_0}\,dz   (7.14)
\implies \oint_C\frac{f(z)}{z - z_0}\,dz = \oint_{C_\epsilon}\frac{f(z)}{z - z_0}\,dz = 2\pi i f(z_0)   (7.15)
Figure 7.3: Contour for the Cauchy integral formula: a small circle C_\epsilon about z_0 joined to the outer contour C by the cut L.
Now,
f'(z) = \frac{1}{2\pi i}\oint_C\frac{f(s)}{(s - z)^2}\,ds   (7.17a)
f''(z) = \frac{1}{\pi i}\oint_C\frac{f(s)}{(s - z)^3}\,ds   (7.17b)
etc.
This establishes the existence of all derivatives of f at z and shows that all derivatives are also analytic at z:
f^{(n)}(z) = \frac{n!}{2\pi i}\oint_C\frac{f(s)}{(s - z)^{n+1}}\,ds .   (7.18)
It turns out that when the modulus of a function is constant in a domain, the function itself must be constant there. Therefore we have the maximum modulus principle:
If a function f is analytic and not constant in a given domain then |f(z)| has no maximum value in the domain.
Corollary. Suppose a function f is continuous in a closed bounded region R and that it is analytic and not constant in the interior of R. Then the maximum value of |f(z)|, which is always reached, occurs somewhere on the boundary of R and never in the interior.
Taylor's Theorem
Theorem 2 (Taylor's Theorem). If f is analytic throughout an open disk |z - z_0| < R_0 centered at z_0 with radius R_0 then at each point in the disk
f(z) = \sum_{n=0}^{\infty} a_n(z - z_0)^n \quad \text{with} \quad a_n = \frac{1}{n!}f^{(n)}(z_0)   (7.22)
f(z) = \frac{1}{2\pi i}\oint_{C_0}\frac{f(s)}{s - z}\,ds = \frac{1}{2\pi i}\oint_{C_0}\frac{1}{s}\frac{1}{1 - z/s}f(s)\,ds   (7.23a)
= \frac{1}{2\pi i}\oint_{C_0}\frac{1}{s}\left\{ 1 + \frac{z}{s} + \left(\frac{z}{s}\right)^2 + \cdots + \left(\frac{z}{s}\right)^{N-1} + \frac{(z/s)^N}{1 - z/s} \right\}f(s)\,ds   (7.23b)
= f(0) + f'(0)z + \frac{1}{2!}f''(0)z^2 + \cdots + \frac{1}{(N-1)!}f^{(N-1)}(0)z^{N-1} + R_N(z)   (7.23c)
R_N(z) = \frac{z^N}{2\pi i}\oint_{C_0}\frac{f(s)}{(s - z)s^N}\,ds .   (7.23d)
Figure 7.4: Contour for Taylor's theorem: the circle C_0 of radius r_0 < R_0 about the origin, with |z| = r < r_0 and s on C_0.
Now |s - z| \ge ||s| - |z|| = r_0 - r since r_0 > r, and let M be the maximum value of |f(s)| on C_0. Then
|R_N(z)| \le \frac{r^N}{2\pi}\frac{M}{(r_0 - r)r_0^N}\,2\pi r_0 = \frac{M r_0}{r_0 - r}\left( \frac{r}{r_0} \right)^N   (7.24)
\to 0 \ \text{as} \ N \to \infty \ \text{since} \ r_0 > r .   (7.25)
so
e^z = \sum_{n=0}^{\infty}\frac{z^n}{n!} .   (7.28)
Laurent's Theorem
If f is not analytic at a point z_0, we cannot apply Taylor's theorem there. However, we can use Laurent's theorem:
Theorem 3 (Laurent). Suppose a function f is analytic throughout an annular domain R_1 < |z - z_0| < R_2 and let C denote any positively-oriented closed contour around z_0 and lying in that domain. Then, at each point z in the domain,
f(z) = \sum_{n=0}^{\infty} a_n(z - z_0)^n + \sum_{n=1}^{\infty}\frac{b_n}{(z - z_0)^n}, \quad R_1 < |z - z_0| < R_2   (7.29)
where
a_n = \frac{1}{2\pi i}\oint_C\frac{f(z)}{(z - z_0)^{n+1}}\,dz, \quad n = 0, 1, 2, \ldots
b_n = \frac{1}{2\pi i}\oint_C\frac{f(z)}{(z - z_0)^{-n+1}}\,dz, \quad n = 1, 2, \ldots   (7.30)
f(z) = \sum_{n=-\infty}^{\infty} c_n(z - z_0)^n, \quad R_1 < |z - z_0| < R_2   (7.31)
where
c_n = \frac{1}{2\pi i}\oint_C\frac{f(z)}{(z - z_0)^{n+1}}\,dz, \quad n = 0, \pm 1, \pm 2, \ldots .   (7.32)
Sketch of proof. Take z_0 = 0 as before for simplicity. Refer to Fig. 7.5 for contours C, C_1, C_2, and \Gamma. First note:
\implies f(z) = \frac{1}{2\pi i}\oint_{C_2}\frac{f(s)}{s - z}\,ds - \frac{1}{2\pi i}\oint_{C_1}\frac{f(s)}{s - z}\,ds .   (7.34)
Figure 7.5: Contours for Laurent's theorem: circles C_1 (radius r_1) and C_2 (radius r_2) with R_1 < r_1 < |z| < r_2 < R_2, together with the contour C between them.
In the integrand of the first integral, where |s| > |z|, expand
\frac{1}{s - z} = \frac{1}{s} + \frac{z}{s^2} + \cdots + \frac{z^{N-1}}{s^N} + \frac{z^N}{(s - z)s^N}   (7.35a)
and in the integrand of the second integral, where |z| > |s|, expand
-\frac{1}{s - z} = \frac{1}{z} + \frac{s}{z^2} + \cdots + \frac{s^{N-1}}{z^N} + \frac{s^N}{(z - s)z^N} .   (7.35b)
\therefore f(z) = a_0 + a_1 z + \cdots + R_N(z) + \frac{b_1}{z} + \frac{b_2}{z^2} + \cdots + S_N(z)   (7.36a)
where
a_n = \frac{1}{2\pi i}\oint_{C_2}\frac{f(s)}{s^{n+1}}\,ds = \frac{1}{2\pi i}\oint_C\frac{f(s)}{s^{n+1}}\,ds   (7.36b)
b_n = \frac{1}{2\pi i}\oint_{C_1}\frac{f(s)}{s^{-n+1}}\,ds = \frac{1}{2\pi i}\oint_C\frac{f(s)}{s^{-n+1}}\,ds   (7.36c)
R_N(z) = \frac{z^N}{2\pi i}\oint_{C_2}\frac{f(s)}{(s - z)s^N}\,ds   (7.36d)
S_N(z) = \frac{1}{2\pi i z^N}\oint_{C_1}\frac{s^N f(s)}{z - s}\,ds .   (7.36e)
Residues
If a function f is analytic throughout a deleted neighborhood 0 < |z - z_0| < \epsilon of a singular point z_0 then z_0 is an isolated singular point. E.g., 1/z has an isolated singular point z_0 = 0, but the origin is not an isolated singular point of \mathrm{Log}\,z.
If z_0 is an isolated singular point of f then the function can be written as a Laurent series:
f(z) = \sum_{n=0}^{\infty} a_n(z - z_0)^n + \frac{b_1}{z - z_0} + \frac{b_2}{(z - z_0)^2} + \cdots, \quad 0 < |z - z_0| < R_2   (7.37)
\mathop{\mathrm{Res}}_{z=z_0}\frac{\phi(z)}{z - z_0} = \phi(z_0) .   (7.39)
• Suppose p(z) and q(z) are both analytic at z_0 and p(z_0) \neq 0, q(z_0) = 0, q'(z_0) \neq 0; then
\mathop{\mathrm{Res}}_{z=z_0}\frac{p(z)}{q(z)} = \frac{p(z_0)}{q'(z_0)} .   (7.40)
Ex. 7.4. For f(z) = \frac{z + 1}{z^2 + 9} find \mathop{\mathrm{Res}}_{z=3i} f(z).
Write f(z) = \frac{\phi(z)}{z - 3i} where \phi(z) = \frac{z + 1}{z + 3i}
\therefore \mathop{\mathrm{Res}}_{z=3i} f(z) = \phi(3i) = \frac{3 - i}{6} .
Ex. 7.5. f(z) = \cot z = \frac{\cos z}{\sin z}.
Let p(z) = \cos z, q(z) = \sin z, q'(z) = \cos z. The zeros of q(z) are the points z = n\pi, n = 0, \pm 1, \pm 2, \ldots.
\therefore \mathop{\mathrm{Res}}_{z=n\pi} f(z) = \frac{p(n\pi)}{q'(n\pi)} = 1 .
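As a quick cross-check (an added sketch, not from the notes), these residues can also be computed symbolically; the snippet below reproduces Ex. 7.4 and Ex. 7.5 with sympy.

```python
import sympy as sp

z = sp.symbols("z")

# Ex. 7.4: residue of (z+1)/(z^2+9) at z = 3i
print(sp.residue((z + 1) / (z**2 + 9), z, 3 * sp.I))   # 1/2 - I/6 == (3 - i)/6

# Ex. 7.5: residue of cot z at z = 0 and z = pi
print(sp.residue(sp.cot(z), z, 0))      # 1
print(sp.residue(sp.cot(z), z, sp.pi))  # 1
```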
• If
f(z) = \sum_{n=0}^{\infty} a_n(z - z_0)^n + \underbrace{\frac{b_1}{z - z_0} + \frac{b_2}{(z - z_0)^2} + \cdots + \frac{b_m}{(z - z_0)^m}}_{\text{principal part}}   (7.41)
for 0 < |z - z_0| < R_2, where b_m \neq 0, then the isolated singular point z_0 is called a pole of order m. If m = 1 then it is a simple pole.
Ex. 7.6.
\frac{\sinh z}{z^4} = \frac{1}{z^4}\left( z + \frac{z^3}{3!} + \frac{z^5}{5!} + \cdots \right) = \frac{1}{z^3} + \frac{1}{3!}\frac{1}{z} + \frac{z}{5!} + \cdots   (7.42)
• If the principal part has an infinite number of terms then the singular point is an essential singular point.
Ex. 7.7.
e^{1/z} = \sum_{n=0}^{\infty}\frac{1}{n!}\frac{1}{z^n}, \quad 0 < |z| < \infty   (7.43)
Ex. 7.8.
f(z) = \frac{e^z - 1}{z} = \frac{1}{z}\left( z + \frac{z^2}{2!} + \cdots \right) = 1 + \frac{z}{2!} + \frac{z^2}{3!} + \cdots, \quad 0 < |z| < \infty   (7.44)
has a removable singular point at z = 0. If we write f(0) = 1 then the function is entire.
Residue Theorem
Theorem 4 (Residue). If C is a positively oriented simple closed contour within and on which a function f is analytic except for a finite number of singular points z_k (k = 1, 2, \ldots, n) interior to C, then
\oint_C f(z)\,dz = 2\pi i\sum_{k=1}^{n}\mathop{\mathrm{Res}}_{z=z_k} f(z) .   (7.45)
Proof. Since f (z) = 0 along some arc we know that the coefficients
an = f (n) (z0 )/n! must be zero since the derivatives must all be zero. This means
that f (z) = 0 for all z for which the Taylor series is valid.
Corollary. Suppose f (z) and g(z) are analytic in a domain D and f (z) = g(z)
along some arc or in some sub-domain. Then f (z) = g(z) everywhere in D .
Proof. Consider h(z) = f (z) − g(z) = 0 along the arc; Theorem 5 then requires
h(z) = 0 within D .
Analytic Continuation
Consider two intersecting domains D1 and D2 .
Suppose f1 is analytic in D1 . There may be a function f2 that is analytic in D2
such that
is analytic in D1 ∪ D2 .
However, suppose there are three domains as shown in Fig. 7.6 and
Figure 7.6: Intersecting domains D_1, D_2, and D_3 in the complex plane.
The function
f_2(z) = \frac{1}{1 - z}, \quad z \neq 1   (7.55)
satisfies f_2(z) = f_1(z) for |z| < 1. Therefore, f_2 is the analytic continuation of f_1 to the entire complex plane except z = 1.
Ex. 7.11. Consider the branch of z^{1/2} with -\pi < \arg z < \pi and define:
f_1(z) = \sqrt{r}\,e^{i\theta/2}, \quad r > 0, \ -\pi/2 < \theta < \pi .   (7.56)
This is defined in Quadrants II and III of the complex plane. Note that f_2(z) = f_1(z) in the overlapping domain of Quadrant II: r > 0, \pi/2 < \theta < \pi.
Now analytically continue this across the negative imaginary axis:
f_3(z) = \sqrt{r}\,e^{i\theta/2}, \quad r > 0, \ \pi < \theta < 5\pi/2 .   (7.58)
This is defined in Quadrants I, III, and IV of the complex plane. Note that f_3(z) = f_2(z) in the overlapping domain of Quadrant III: r > 0, \pi < \theta < 3\pi/2.
However, f_3(z) \neq f_1(z) in their overlapping domains of Quadrants I and IV; in fact, f_3(z) = -f_1(z). E.g.,
f_1(1) = \sqrt{1}\,e^{i0/2} = 1   (7.59)
but
f_3(1) = \sqrt{1}\,e^{i(2\pi)/2} = -1 .   (7.60)
8 Example: Gamma Function
Note: as t → 0 the integrand behaves like t z−1 and so the integral behaves like
t z /z = z −1 e z ln t ; therefore this definition of the gamma function is only valid for
Re z > 0.
We can integrate by parts:
\Gamma(z) = \int_0^{\infty} e^{-t}t^{z-1}\,dt   (8.2a)
= \int_0^{\infty} u\,dv \qquad [\text{let } u = e^{-t},\ du = -e^{-t}dt;\ dv = t^{z-1}dt,\ v = t^z/z \ (z \neq 0)]   (8.2b)
= \left[ uv \right]_0^{\infty} - \int_0^{\infty} v\,du \qquad [v \to 0 \text{ as } t \to 0 \ (\mathrm{Re}\,z > 0);\ u \to 0 \text{ as } t \to \infty]   (8.2c)
= \int_0^{\infty} e^{-t}\frac{t^z}{z}\,dt   (8.2d)
= \frac{\Gamma(z + 1)}{z}, \quad \mathrm{Re}\,z > 0 .   (8.2e)
Thus,
[Figure: the Gamma function \Gamma(x) for real x.]
\Gamma(n + 1) = n\Gamma(n)   (8.4a)
and
\Gamma(1) = \int_0^{\infty} e^{-t}\,dt = 1 .   (8.4b)
Therefore, write
n! = \Gamma(n + 1), \quad n = 0, 1, 2, \ldots .   (8.5)
Use the relation \Gamma(z + 1) = z\Gamma(z) to analytically continue into the left half of the complex plane:
\Gamma(z) = \frac{\Gamma(z + 1)}{z}, \quad \mathrm{Re}\,z > -1, \ z \neq 0 .   (8.6)
For example,
\Gamma(-\tfrac{1}{2}) = \frac{\Gamma(-\tfrac{1}{2} + 1)}{-\tfrac{1}{2}} = -2\Gamma(\tfrac{1}{2}) .   (8.7)
With repeated applications, we can extend over (almost) all of the complex plane. However, there is a singularity at z = 0 which prevents us from obtaining \Gamma(0), \Gamma(-1), \Gamma(-2), \ldots, but other than this, the Gamma function has been extended over the entire complex plane.
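As an added numerical aside (not part of the notes), scipy's gamma function can be used to confirm the recursion and the continued value \Gamma(-1/2) = -2\Gamma(1/2) = -2\sqrt{\pi}.

```python
import math
from scipy.special import gamma

print(gamma(5), math.factorial(4))        # Gamma(n+1) = n!
print(gamma(4.5), 3.5 * gamma(3.5))       # Gamma(z+1) = z*Gamma(z)
print(gamma(-0.5), -2 * gamma(0.5))       # continuation: Gamma(-1/2) = -2*Gamma(1/2)
print(gamma(0.5), math.sqrt(math.pi))     # Gamma(1/2) = sqrt(pi)
```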
Therefore this form is valid everywhere on the complex plane, with simple poles at z = 0, -1, -2, \ldots.
Note: the choice of \alpha > 0 does not matter; the Weierstrass representation of the gamma function is the case \alpha = 1:
\Gamma(z) = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{1}{z + n} + \int_1^{\infty} e^{-t}t^{z-1}\,dt .   (8.9)
= \int_{s=0}^{\infty}\int_{t=0}^{\infty} e^{-(s+t)}s^{x-1}t^{-x}\,dt\,ds   (8.10b)
\quad [\text{let } s = u - t]
= \int_{u=0}^{\infty}\int_{t=0}^{u} e^{-u}(u - t)^{x-1}t^{-x}\,dt\,du   (8.10c)
\quad [\text{let } t = uv]
= \int_{u=0}^{\infty}\int_{v=0}^{1} e^{-u}u^{x-1}(1 - v)^{x-1}u^{-x}v^{-x}u\,dv\,du   (8.10d)
= \int_0^{\infty} e^{-u}\,du\int_0^1\frac{(1 - v)^{x-1}}{v^x}\,dv   (8.10e)
= \int_0^1\frac{(1 - v)^{x-1}}{v^x}\,dv   (8.10f)
\quad [\text{let } v \to 1 - v]
= \int_0^1\frac{v^{x-1}}{(1 - v)^x}\,dv   (8.10g)
\quad [\text{let } v = t/(1 + t),\ dv = dt/(1 + t)^2]
= \int_0^{\infty} t^{x-1}(1 + t)^{-x+1}\left( 1 - \frac{t}{1 + t} \right)^{-x}\frac{dt}{(1 + t)^2}   (8.10h)
= \int_0^{\infty} t^{x-1}(1 + t)^{-x+1}(1 + t)^x(1 + t)^{-2}\,dt   (8.10i)
= \int_0^{\infty}\frac{t^{x-1}}{1 + t}\,dt, \quad 0 < x < 1 .   (8.10j)
Figure 8.2: Keyhole contour for the integral (8.10j): a large circle C_R, a small circle C_\epsilon about the origin, and two segments above and below the branch cut along the positive real axis, avoiding the pole at z = -1.
Let
f(z) = \frac{z^{-a}}{z + 1}, \quad |z| > 0, \ 0 < \arg z < 2\pi   (8.11)
where a = 1 - x, 0 < a < 1. The function has a simple pole at z = -1 and a branch cut along the positive real axis.
Consider the contour shown in Fig. 8.2. The function is piecewise continuous (even though it is multivalued) so \int_{C_\epsilon} f(z)\,dz and \int_{C_R} f(z)\,dz exist.
For the linear parts of the contour above and below the branch cut write
Now,
\int_{\epsilon}^{R}\frac{r^{-a}}{r + 1}\,dr + \int_{C_R} f(z)\,dz - \int_{\epsilon}^{R}\frac{r^{-a}}{r + 1}e^{-i2\pi a}\,dr + \int_{C_\epsilon} f(z)\,dz
= 2\pi i\mathop{\mathrm{Res}}_{z=-1} f(z) = 2\pi i(-1)^{-a} = 2\pi i(e^{i\pi})^{-a}   (8.13a)
= 2\pi i e^{-ia\pi},   (8.13b)
therefore
\int_{C_R} f(z)\,dz + \int_{C_\epsilon} f(z)\,dz = 2\pi i e^{-ia\pi} + (e^{-i2a\pi} - 1)\int_{\epsilon}^{R}\frac{r^{-a}}{r + 1}\,dr .   (8.14)
Since a < 1,
\left| \int_{C_\epsilon} f(z)\,dz \right| \le \frac{\epsilon^{-a}}{1 - \epsilon}\,2\pi\epsilon = \frac{2\pi}{1 - \epsilon}\epsilon^{1-a}   (8.15a)
\to 0 \ \text{as} \ \epsilon \to 0 .   (8.15b)
\left| \int_{C_R} f(z)\,dz \right| \le \frac{R^{-a}}{R - 1}\,2\pi R = \frac{2\pi}{1 - 1/R}\frac{1}{R^a}   (8.16a)
\to 0 \ \text{as} \ R \to \infty .   (8.16b)
Note:
• \Gamma(z)\Gamma(1 - z)\sin\pi z = \pi is clearly entire;
• \Gamma(z) has singularities at z = 0, -1, -2, \ldots;
• \Gamma(1 - z) has singularities at z = 1, 2, 3, \ldots;
• \sin\pi z has zeros at z = 0, \pm 1, \pm 2, \ldots that "cancel" the singularities;
thus we conclude
\frac{1}{\Gamma(z)} \ \text{is entire.}   (8.20)
Useful results:
• When z = \tfrac{1}{2},
\underbrace{\Gamma(\tfrac{1}{2})\Gamma(1 - \tfrac{1}{2})}_{[\Gamma(1/2)]^2} = \frac{\pi}{\sin\pi/2} = \pi   (8.21)
so
\Gamma(\tfrac{1}{2}) = \sqrt{\pi} .   (8.22)
\Gamma(m + \tfrac{1}{2}) = \frac{1\cdot 3\cdot 5\cdots(2m - 1)}{2^m}\sqrt{\pi} .   (8.23)
Therefore, define
(2m - 1)!! = 1\cdot 3\cdot 5\cdots(2m - 1) = \frac{2^m\Gamma(m + \tfrac{1}{2})}{\sqrt{\pi}}   (8.24)
and
(2m)!! = 2\cdot 4\cdot 6\cdots(2m) = \frac{(2m)!}{(2m - 1)!!} = \frac{\sqrt{\pi}\,\Gamma(2m + 1)}{2^m\Gamma(m + \tfrac{1}{2})}   (8.25)
\Gamma(2m + 1) = \frac{2^{2m}}{\sqrt{\pi}}\Gamma(m + \tfrac{1}{2})\Gamma(m + 1) .   (8.26)
\Gamma(2z) = \frac{2^{2z-1}}{\sqrt{\pi}}\Gamma(z)\Gamma(z + \tfrac{1}{2}) .   (8.27)
• The binomial coefficient, Eq. (3.2), can be expressed in terms of the Gamma function as
\binom{x}{y} = \frac{\Gamma(x + 1)}{\Gamma(y + 1)\Gamma(x - y + 1)} .   (8.28)
Problems
Problem 4.
Show that
a) (1 + i)^i = e^{-\pi/4}e^{2n\pi}\left[ \cos(\tfrac{1}{2}\ln 2) + i\sin(\tfrac{1}{2}\ln 2) \right] where n = 0, \pm 1, \pm 2, \ldots .
Problem 5.
Derive the Cauchy-Riemann equations in polar coordinates
\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial\theta} \quad \text{and} \quad \frac{1}{r}\frac{\partial u}{\partial\theta} = -\frac{\partial v}{\partial r}
and use these to show that if f(z) = u(r, \theta) + i v(r, \theta) is analytic in some domain D that does not contain the origin then throughout D the function u(r, \theta) satisfies the polar form of Laplace's equation:
r^2\frac{\partial^2 u}{\partial r^2} + r\frac{\partial u}{\partial r} + \frac{\partial^2 u}{\partial\theta^2} = 0 .
Verify that u(r, \theta) = \ln r is harmonic in r > 0, 0 < \theta < 2\pi and show that v(r, \theta) = \theta is its harmonic conjugate.
Problem 6.
Use the Cauchy-Riemann equations to determine which of the following are analytic functions of the complex variable z:
a) |z| ;
b) \mathrm{Re}\,z ;
c) e^{\sin z} .
Problem 7.
Let C denote the circle |z - z_0| = R taken counterclockwise. Use the parametric representation z = z_0 + Re^{i\theta}, -\pi \le \theta \le \pi, for C to derive the following integral formulas:
a) \oint_C\frac{dz}{z - z_0} = 2\pi i ;
Problem 8.
Represent the function (z + 1)/(z - 1) by
a) its Maclaurin series, and give the region of validity for the representation;
b) its Laurent series for the domain 1 < |z| < \infty.
Problem 9.
Use residues to evaluate these integrals where the contour C is the circle |z| = 3 taken in the positive sense:
a) \oint_C\frac{\exp(-z)}{z^2}\,dz ;
b) \oint_C z^2\exp\!\left(\frac{1}{z}\right)dz ;
c) \oint_C\frac{z + 1}{z^2 - 2z}\,dz .
Module III
Evaluation of Integrals
10 Contour Integration 64
12 Saddle-Point Methods 75
Problems 82
Motivation
Let’s face it: integration can be a pain in the neck. Nowadays you can use
computer algebra packages such as Maple or Mathematica or WolframAlpha
to do integrals for you; more traditionally one would use tables of integrals.
But it is still useful to be able to do elementary integrals, and some useful
tricks are reviewed here. We also explore contour integration further and
touch on topics such as asymptotic series (useful for evaluating functions at
large arguments) and saddle-point methods (which can give approximate
solutions to integrals).
9 Elementary Methods of Integration
Let
I(a) = \int_0^{\infty} e^{-ax}\cos bx\,dx = \frac{a}{a^2 + b^2} ;   (9.4)
then
I = -\frac{d}{da}I(a) = -\frac{d}{da}\frac{a}{a^2 + b^2} = \frac{a^2 - b^2}{(a^2 + b^2)^2} .   (9.5)
Let
I(a) = \int_0^{\infty}\frac{e^{-ax}\sin x}{x}\,dx   (9.7)
so I = I(0). Now,
\frac{d}{da}I(a) = -\int_0^{\infty} e^{-ax}\sin x\,dx = -\frac{1}{a^2 + 1}   (9.8)
so we have
I(a) = -\int\frac{da}{a^2 + 1} = C - \arctan a   (9.9)
but since I(\infty) = 0, we find C = \pi/2; therefore
I(a) = \frac{\pi}{2} - \arctan a   (9.10)
and finally
I = I(0) = \frac{\pi}{2} .   (9.11)
• Be clever.
Consider
I^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx\int_{-\infty}^{\infty} e^{-y^2}\,dy   (9.13a)
= \iint_{-\infty}^{\infty} e^{-(x^2 + y^2)}\,dx\,dy   (9.13b)
\quad [\text{change to polar coordinates: } r^2 = x^2 + y^2,\ dx\,dy = r\,d\theta\,dr]
= \int_0^{2\pi}d\theta\int_0^{\infty} e^{-r^2}r\,dr   (9.13c)
\quad [\text{let } u = r^2,\ du = 2r\,dr]
= 2\pi\cdot\frac{1}{2}\int_0^{\infty} e^{-u}\,du   (9.13d)
= \pi .   (9.13e)
Therefore,
I = \sqrt{\pi} .   (9.14)
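A direct numerical check of Eq. (9.14) (an added sketch, not from the notes):

```python
import math
from scipy.integrate import quad

I, err = quad(lambda x: math.exp(-x**2), -math.inf, math.inf)
print(I, math.sqrt(math.pi))   # both ~ 1.7724538509
```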
10 Contour Integration
Evaluation of improper real integrals can often be done easily using the
Cauchy principal value and residues.
Ex. 10.1. Let
f(z) = \frac{2z^2 - 1}{z^4 + 5z^2 + 4} = \frac{2z^2 - 1}{(z^2 + 1)(z^2 + 4)} .   (10.6)
This function has isolated simple poles at z = \pm i, z = \pm 2i.
Consider the contour C = L_R + C_R, R > 2, as shown in Fig. 10.1.
We have:
\int_{-R}^{R} f(x)\,dx + \int_{C_R} f(z)\,dz = 2\pi i\left[ \mathop{\mathrm{Res}}_{z=i} f(z) + \mathop{\mathrm{Res}}_{z=2i} f(z) \right] .   (10.7)
Note:
• f(z) = \frac{\phi_1(z)}{z - i} where \phi_1(z) = \frac{2z^2 - 1}{(z + i)(z^2 + 4)}
\implies \mathop{\mathrm{Res}}_{z=i} f(z) = \phi_1(i) = \frac{-3}{(2i)(3)} = -\frac{1}{2i} .   (10.8a)
• f(z) = \frac{\phi_2(z)}{z - 2i} where \phi_2(z) = \frac{2z^2 - 1}{(z^2 + 1)(z + 2i)}
\implies \mathop{\mathrm{Res}}_{z=2i} f(z) = \phi_2(2i) = \frac{-9}{(-3)(4i)} = \frac{3}{4i} .   (10.8b)
Figure 10.1: Contour for Ex. 10.1: the segment L_R from -R to R on the real axis closed by the semicircle C_R in the upper half plane, enclosing the poles at i and 2i.
Therefore
\int_{-R}^{R} f(x)\,dx = 2\pi i\left[ \mathop{\mathrm{Res}}_{z=i} f(z) + \mathop{\mathrm{Res}}_{z=2i} f(z) \right] - \int_{C_R} f(z)\,dz   (10.9a)
= 2\pi i\left( -\frac{1}{2i} + \frac{3}{4i} \right) - \int_{C_R} f(z)\,dz   (10.9b)
= \frac{\pi}{2} - \int_{C_R} f(z)\,dz .   (10.9c)
We need to figure out what \int_{C_R} f(z)\,dz is as R \to \infty.
\left| \int_{C_R} f(z)\,dz \right| \le M_R\,\pi R = \frac{\pi R(2R^2 + 1)}{(R^2 - 1)(R^2 - 4)}   (10.11a)
\to 0 \ \text{as} \ R \to \infty .   (10.11b)
Thus,
\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{2x^2 - 1}{x^4 + 5x^2 + 4}\,dx = \lim_{R\to\infty}\int_{-R}^{R}\frac{2x^2 - 1}{x^4 + 5x^2 + 4}\,dx = \frac{\pi}{2}   (10.12)
and therefore
\int_0^{\infty}\frac{2x^2 - 1}{x^4 + 5x^2 + 4}\,dx = \frac{\pi}{4} .   (10.13)
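The result in Eq. (10.13) is easy to confirm numerically (an added sketch, not from the notes):

```python
import math
from scipy.integrate import quad

f = lambda x: (2 * x**2 - 1) / (x**4 + 5 * x**2 + 4)
I, err = quad(f, 0, math.inf)
print(I, math.pi / 4)   # both ~ 0.7853981634
```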
try
\int_{-R}^{R} f(x)\cos ax\,dx + i\int_{-R}^{R} f(x)\sin ax\,dx = \int_{-R}^{R} f(x)e^{iax}\,dx   (10.15)
and use the fact that |e^{iaz}| = e^{-ay} is bounded in the upper half plane y \ge 0.
Ex. 10.2. Compute
\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{x\sin x}{x^2 + 2x + 2}\,dx .   (10.16)
Let
f(z) = \frac{z}{z^2 + 2z + 2} = \frac{z}{(z - z_1)(z - z_1^*)} \quad \text{where} \ z_1 = -1 + i .   (10.17)
b_1 = \mathop{\mathrm{Res}}_{z=z_1} f(z)e^{iz} = \frac{z_1 e^{iz_1}}{z_1 - z_1^*} .   (10.18)
Figure 10.2: Contour for Ex. 10.2: the segment L_R from -R to R closed by the semicircle C_R in the upper half plane, enclosing the pole at z_1 = -1 + i.
Note: |f(z)| \le M_R where M_R = R/(R - \sqrt{2})^2 and |e^{iz}| = e^{-y} \le 1, so
\left| \int_{C_R} f(z)e^{iz}\,dz \right| \le M_R\,\pi R = \frac{\pi R^2}{(R - \sqrt{2})^2}   (10.20)
Now |f(Re^{i\theta})| \le M_R and |e^{iRe^{i\theta}}| \le e^{-R\sin\theta} so
\left| \int_{C_R} f(z)e^{iz}\,dz \right| \le M_R R\int_0^{\pi} e^{-R\sin\theta}\,d\theta .   (10.22)
We use Jordan's inequality to bound the integral: since \sin\theta \ge 2\theta/\pi for 0 \le \theta \le \pi/2 (see Fig. 10.3),
\int_0^{\pi} e^{-R\sin\theta}\,d\theta \le 2\int_0^{\pi/2} e^{-2R\theta/\pi}\,d\theta = \frac{\pi}{R}(1 - e^{-R})   (10.23a)
< \frac{\pi}{R} .   (10.23b)
Thus,
\left| \int_{C_R} f(z)e^{iz}\,dz \right| < M_R R\,\frac{\pi}{R} = \pi M_R   (10.24a)
\to 0 \ \text{as} \ R \to \infty   (10.24b)
and therefore
\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{x\sin x}{x^2 + 2x + 2}\,dx = \mathrm{Im}(2\pi i b_1) = \frac{\pi}{e}(\sin 1 + \cos 1) .   (10.25)
Figure 10.3: Jordan's inequality: y = \sin\theta lies above y = 2\theta/\pi on 0 \le \theta \le \pi/2.
\sin\theta = \frac{z - z^{-1}}{2i}, \qquad \cos\theta = \frac{z + z^{-1}}{2}, \qquad d\theta = \frac{dz}{iz} .   (10.27)
Then
I = \oint_C F\!\left( \frac{z - z^{-1}}{2i}, \frac{z + z^{-1}}{2} \right)\frac{dz}{iz},   (10.28)
where C is the unit circle about the origin evaluated in the positive direction.
Ex. 10.3. Compute
I = \int_0^{2\pi}\frac{d\theta}{1 + a\sin\theta}, \quad -1 < a < 1, \ a \neq 0 .   (10.29)
and thus
I = 2\pi i\mathop{\mathrm{Res}}_{z=z_1} f(z) = \frac{2\pi}{\sqrt{1 - a^2}}, \quad -1 < a < 1   (10.33)
(the case a = 0 is obvious).
11 Approximate Expansions of Integrals
This converges for all x but it is only really useful for small x.
We would like a large-x expansion.
[Figure: the error function y = \mathrm{erf}\,x and complementary error function y = \mathrm{erfc}\,x.]
Asymptotic Series
The series
S(z) = c_0 + \frac{c_1}{z} + \frac{c_2}{z^2} + \cdots   (11.6)
is an asymptotic series expansion of some function f(z) provided that for any n the error involved in terminating the series with the term c_n z^{-n} goes to zero faster than z^{-n} as |z| \to \infty (for some range of \arg z):
where
S_n(z) = c_0 + \frac{c_1}{z} + \frac{c_2}{z^2} + \cdots + \frac{c_n}{z^n} .   (11.7b)
where
\text{(asymptotic series)} = \frac{2}{\sqrt{\pi}}e^{-x^2}\left\{ \frac{1}{2x} - \frac{1}{2^2 x^3} + \cdots + (-1)^n\frac{(2n - 3)!!}{2^n x^{2n-1}} \right\}   (11.8b)
and
\text{(remainder integral)} = (-1)^n\frac{(2n - 1)!!}{2^n}\frac{2}{\sqrt{\pi}}\int_x^{\infty}\frac{e^{-t^2}}{t^{2n}}\,dt   (11.8c)
Ex. 11.2 (Exponential integral). The exponential integral (see figure) is
\mathrm{Ei}\,x = \int_{-\infty}^{x}\frac{e^t}{t}\,dt .   (11.10)
-\mathrm{Ei}(-x) = \frac{e^{-x}}{x}\left[ 1 - \frac{1}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + \cdots \right] .   (11.13)
• Identity for the remainder term:
\int_x^{\infty}\frac{e^{-t}}{t^n}\,dt = \frac{(-1)^n}{(n - 1)!}\left\{ \mathrm{Ei}(-x) + \frac{e^{-x}}{x}\left[ 1 - \frac{1}{x} + \frac{2!}{x^2} - \cdots + (-1)^n\frac{(n - 2)!}{x^{n-2}} \right] \right\} .   (11.14)
[Figure 11.2: the exponential integral \mathrm{Ei}\,x.]
12 Saddle-Point Methods

and so
t_0 = x .   (12.2c)
[Figure 12.1: the integrand t^x e^{-t} peaks at t = x.]
Write the integrand as e^{f(t)} = e^{x\ln t - t} and expand f(t) in a Taylor series about t = t_0 = x.
Let z = re^{i\theta}. Then
f(t) = \left( \log t - \frac{t}{z} \right)e^{i\theta}   (12.13a)
f'(t) = \left( \frac{1}{t} - \frac{1}{z} \right)e^{i\theta} \implies f'(t_0) = 0 \ \text{for} \ t_0 = z   (12.13b)
f''(t) = -\frac{1}{t^2}e^{i\theta}   (12.13c)
so
Deform the contour to go through t_0 = z at an angle \psi for which \cos(\phi + 2\psi) = -1, so
\psi = \frac{\theta}{2} \quad \text{or} \quad \psi = \frac{\theta}{2} - \pi .   (12.15)
To figure out which one of these to choose, we need to look at the topography of the surface u(t) = \mathrm{Re}\,f(t) for a particular choice of z.
For example, when z = 3e^{i\pi/4}, so r = 3 and \theta = \pi/4, we have
\mathrm{Re}\,f(t) = \mathrm{Re}\left[ e^{i\pi/4}\left( \log t - \frac{t}{3} \right) \right] .   (12.16)
In Fig. 12.2 this function is plotted and it is seen that the correct direction to traverse the saddle is \psi = \theta/2 = \pi/8 rather than \psi = \theta/2 - \pi = -7\pi/8. Thus,
\Gamma(z + 1) = \int_C e^{rf(t)}\,dt   (12.17a)
\approx \sqrt{\frac{2\pi}{r\rho}}\,e^{rf(z)}e^{i\psi}   (12.17b)
\quad [\rho = 1/r^2 \ \text{and} \ \psi = \theta/2]
= \sqrt{2\pi r}\,e^{z\log z - z}e^{i\theta/2} \quad [\sqrt{r}\,e^{i\theta/2} = z^{1/2}]   (12.17c)
= \sqrt{2\pi}\,z^{z + 1/2}e^{-z}   (12.17d)
Figure 12.2: Topography of the surface \mathrm{Re}[e^{i\pi/4}(\log t - t/3)] in the complex t plane. The saddle point is at the intersection of the white contour lines. Top: the contour is deformed so that it correctly goes over the saddle point t_0 = 3e^{i\pi/4}. Bottom: the contour is incorrectly deformed and goes over the ridge three times.
Since
\Gamma(z) = \frac{1}{z}\Gamma(z + 1) \approx \sqrt{2\pi}\,z^{z - 1/2}e^{-z}   (12.18)
write an asymptotic series:
\Gamma(z) \sim \sqrt{2\pi}\,z^{z - 1/2}e^{-z}\left( 1 + \frac{A}{z} + \frac{B}{z^2} + \frac{C}{z^3} + \cdots \right)   (12.19)
and use the recurrence \Gamma(z + 1) = z\Gamma(z) to find A, B, C, \ldots as follows:
\Gamma(z + 1) \sim \sqrt{2\pi}\,\underbrace{(z + 1)^{(z+1) - 1/2}e^{-(z+1)}}_{\text{consider this first}}\underbrace{\left( 1 + \frac{A}{z + 1} + \frac{B}{(z + 1)^2} + \frac{C}{(z + 1)^3} + \cdots \right)}_{\text{and this second}} .   (12.20)
First:
\exp\left[ \left( z + \tfrac{1}{2} \right)\log(z + 1) - z - 1 \right]
= \exp\left[ \left( z + \tfrac{1}{2} \right)\log z + \left( z + \tfrac{1}{2} \right)\log\left( 1 + \frac{1}{z} \right) - z - 1 \right]   (12.21a)
= \exp\left[ \left( z + \tfrac{1}{2} \right)\log z - z - 1 + \left( z + \tfrac{1}{2} \right)\left( \frac{1}{z} - \frac{1}{2z^2} + \frac{1}{3z^3} - \frac{1}{4z^4} + \cdots \right) \right]   (12.21b)
= \exp\left[ \left( z + \tfrac{1}{2} \right)\log z - z - 1 + \left( 1 - \frac{1}{2z} + \frac{1}{3z^2} - \frac{1}{4z^3} + \cdots \right) + \left( \frac{1}{2z} - \frac{1}{4z^2} + \frac{1}{6z^3} - \cdots \right) \right]   (12.21c)
= \exp\left[ \left( z + \tfrac{1}{2} \right)\log z - z + \frac{1}{12z^2} - \frac{1}{12z^3} + \cdots \right]   (12.21d)
= z^{z + 1/2}e^{-z}\left( 1 + \frac{1}{12z^2} - \frac{1}{12z^3} + \cdots \right)   (12.21e)
Second:
1 + \frac{A}{z + 1} + \frac{B}{(z + 1)^2} + \frac{C}{(z + 1)^3}
= 1 + \frac{A}{z}(1 + 1/z)^{-1} + \frac{B}{z^2}(1 + 1/z)^{-2} + \frac{C}{z^3}(1 + 1/z)^{-3}   (12.22a)
= 1 + \frac{A}{z}\left( 1 - \frac{1}{z} + \frac{1}{z^2} - \cdots \right) + \frac{B}{z^2}\left( 1 - \frac{2}{z} + \cdots \right) + \frac{C}{z^3}(1 - \cdots)   (12.22b)
= 1 + \frac{A}{z} + \frac{B - A}{z^2} + \frac{C - 2B + A}{z^3} + \cdots .   (12.22c)
Therefore,
\Gamma(z + 1) \sim \sqrt{2\pi}\,z^{z + 1/2}e^{-z}\left[ 1 + \frac{A}{z} + \left( B - A + \frac{1}{12} \right)\frac{1}{z^2} + \left( C - 2B + A + \frac{A}{12} - \frac{1}{12} \right)\frac{1}{z^3} - \cdots \right]   (12.23)
and compare this to
\Gamma(z + 1) = z\Gamma(z) \sim \sqrt{2\pi}\,z^{z + 1/2}e^{-z}\left( 1 + \frac{A}{z} + \frac{B}{z^2} + \frac{C}{z^3} + \cdots \right)   (12.24)
and equate like powers:
Now recall
n! = \Gamma(n + 1) = n\Gamma(n) \sim \sqrt{2\pi}\,n^{n + 1/2}e^{-n}\left( 1 + \frac{1}{12n} + \frac{1}{288n^2} + \cdots \right)   (12.27)
so
n! \sim \sqrt{2\pi n}\left( \frac{n}{e} \right)^n\left( 1 + \frac{1}{12n} + \frac{1}{288n^2} + \cdots \right) .   (12.28)
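A numerical check of the first Stirling correction terms (an added sketch, not from the notes):

```python
import math

# n! ~ sqrt(2*pi*n) (n/e)^n (1 + 1/(12n) + 1/(288 n^2) + ...)
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n * (1 + 1/(12*n) + 1/(288*n**2))

for n in (5, 10, 20):
    print(n, math.factorial(n), stirling(n), stirling(n) / math.factorial(n))
```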
Problems
Problem 10.
Establish the following integration formulae with the aid of residues:
a) \int_0^{\infty}\frac{dx}{x^2 + 1} = \frac{\pi}{2} ;
b) \int_0^{\infty}\frac{dx}{x^4 + 1} = \frac{\pi}{2\sqrt{2}} ;
c) \int_0^{\infty}\frac{\cos(ax)}{x^2 + 1}\,dx = \frac{\pi}{2}e^{-a} \quad (a \ge 0).
Problem 11.
Problem 12.
Use residues to show:
a) \int_0^{2\pi}\frac{d\theta}{1 + a\cos\theta} = \frac{2\pi}{\sqrt{1 - a^2}} \quad (-1 < a < 1) ;
b) \int_0^{\pi}\sin^{2n}\theta\,d\theta = \frac{(2n)!}{2^{2n}(n!)^2}\pi .
Problem 13.
By appropriate use of power series expansions, evaluate
a) I = \int_0^1\ln\frac{1 + x}{1 - x}\,\frac{dx}{x} ;
b) I(n) = \int_0^1\frac{\ln(1 - x^n)}{x}\,dx .
Problem 14.
Obtain two expansions of the sine integral
\mathrm{Si}\,x = \int_0^x\frac{\sin t}{t}\,dt ,
Problem 15.
Evaluate
I(x) = \int_0^{\infty} e^{xt - e^t}\,dt
Module IV
Integral Transforms
13 Fourier Series 86
14 Fourier Transforms 92
Problems 106
Motivation
Integral transforms — in particular the Fourier transform — are ubiquitous in
physics. Whether in quantum mechanics, or X-ray diffraction, or signal
analysis, we often use integral transforms to go from space or time variables
to wave-number or frequency variables. Integral transforms can be used to
change differential equations into algebraic equations which are often easier
to solve. We focus mostly on the Fourier series and Fourier transform, but we
also mention a few other transforms that are sometimes encountered. (The
Hilbert transform, for example, is encountered in the Kramers-Kronig
relations.)
13 Fourier Series
f(\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty}(a_n\cos n\theta + b_n\sin n\theta) .   (13.1)
Therefore
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\cos n\theta\,d\theta, \quad n = 1, 2, 3, \ldots .   (13.3)
Similarly
a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\,d\theta   (13.4)
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\sin n\theta\,d\theta, \quad n = 1, 2, 3, \ldots .   (13.5)
The Fourier series converges at all points in -\pi < \theta \le \pi to f(\theta) provided that f(\theta) is sufficiently nice.
• The Fourier series is periodic: it repeats itself in \pi < \theta \le 3\pi, etc. That is, f(\theta + 2\pi) = f(\theta).
• For even functions, f(-\theta) = f(\theta) or f(2\pi - \theta) = f(\theta), only cosine terms occur, i.e., b_n = 0 \ \forall n.
• For odd functions, f(-\theta) = -f(\theta), only sine terms occur, i.e., a_n = 0 \ \forall n.
[Figure: the square wave f(\theta).]
f(\theta) = \begin{cases} -1 & -\pi < \theta < 0 \\ +1 & 0 \le \theta \le \pi \end{cases}   (13.6)
b_n = -\frac{1}{\pi}\int_{-\pi}^{0}\sin n\theta\,d\theta + \frac{1}{\pi}\int_0^{\pi}\sin n\theta\,d\theta   (13.7a)
= \frac{2}{\pi}\int_0^{\pi}\sin n\theta\,d\theta   (13.7b)
= -\frac{2}{n\pi}\left[ (-1)^n - 1 \right]   (13.7c)
= \begin{cases} \dfrac{4}{n\pi} & n \ \text{odd} \\ 0 & n \ \text{even.} \end{cases}   (13.7d)
Therefore
f(\theta) = \frac{4}{\pi}\left( \sin\theta + \frac{\sin 3\theta}{3} + \frac{\sin 5\theta}{5} + \cdots \right) .   (13.8)
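The convergence of Eq. (13.8) toward the square wave can be seen numerically; the sketch below (an addition, not from the notes) evaluates a truncated series at a few points.

```python
import numpy as np

def square_wave_series(theta, n_max):
    """Partial sum of f(theta) = (4/pi) * sum over odd n of sin(n*theta)/n."""
    n = np.arange(1, n_max + 1, 2)   # odd n only
    return (4 / np.pi) * np.sum(np.sin(np.outer(theta, n)) / n, axis=1)

theta = np.array([np.pi / 2, 1.0, -1.0])
print(square_wave_series(theta, 101))   # approaches +1, +1, -1
```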
a_n = \frac{2}{\pi}\int_0^{\pi}\cos k\theta\cos n\theta\,d\theta   (13.11a)
= \frac{1}{\pi}\int_0^{\pi}\left\{ \cos[(k - n)\theta] + \cos[(k + n)\theta] \right\}d\theta   (13.11b)
= \frac{1}{\pi}\frac{\sin[(k - n)\pi]}{k - n} + \frac{1}{\pi}\frac{\sin[(k + n)\pi]}{k + n}   (13.11c)
= \frac{1}{\pi}\frac{(-1)^n\sin k\pi}{k - n} + \frac{1}{\pi}\frac{(-1)^n\sin k\pi}{k + n}   (13.11d)
= (-1)^n\frac{2k\sin k\pi}{\pi(k^2 - n^2)} .   (13.11e)
Therefore,
\cos k\theta = \frac{2k\sin k\pi}{\pi}\left( \frac{1}{2k^2} - \frac{\cos\theta}{k^2 - 1} + \frac{\cos 2\theta}{k^2 - 4} - \cdots \right) .   (13.12)
(We used this result earlier in Ex. 4.4.)
Suppose f(x) is periodic with some period L rather than 2\pi. Let
x = \frac{L}{2\pi}\theta .   (13.13)
Then we have
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left( a_n\cos\frac{2\pi nx}{L} + b_n\sin\frac{2\pi nx}{L} \right)   (13.14a)
where
a_n = \frac{2}{L}\int_{-L/2}^{L/2} f(x)\cos\frac{2\pi nx}{L}\,dx, \quad n = 0, 1, 2, \ldots   (13.14b)
b_n = \frac{2}{L}\int_{-L/2}^{L/2} f(x)\sin\frac{2\pi nx}{L}\,dx, \quad n = 1, 2, 3, \ldots .   (13.14c)
Observe:
\int_{-L/2}^{L/2} e^{i2\pi mx/L}e^{-i2\pi nx/L}\,dx = \int_{-L/2}^{L/2} e^{i2\pi(m - n)x/L}\,dx   (13.16a)
= \begin{cases} L & n = m \\ \dfrac{L}{i2\pi(m - n)}\left[ e^{i2\pi(m - n)x/L} \right]_{-L/2}^{L/2} & n \neq m \end{cases}   (13.16b)
= \begin{cases} L & n = m \\ \dfrac{L}{i2\pi(m - n)}\left[ e^{i\pi(m - n)} - e^{-i\pi(m - n)} \right] & n \neq m \end{cases}   (13.16c)
= L\begin{cases} 1 & n = m \\ 0 & n \neq m \end{cases}   (13.16d)
= L\delta_{mn}   (13.16e)
Therefore
\frac{1}{L}\int_{-L/2}^{L/2} f(x)e^{-i2\pi nx/L}\,dx = \frac{1}{L}\sum_{m=-\infty}^{\infty} c_m\int_{-L/2}^{L/2} e^{i2\pi mx/L}e^{-i2\pi nx/L}\,dx   (13.17a)
= \sum_{m=-\infty}^{\infty} c_m\delta_{mn}   (13.17b)
= c_n   (13.17c)
and thus
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i2\pi nx/L}   (13.18a)
where
c_n = \frac{1}{L}\int_{-L/2}^{L/2} f(x)e^{-i2\pi nx/L}\,dx .   (13.18b)
Parseval's Identity
Consider:
\frac{1}{L}\int_{-L/2}^{L/2} |f(x)|^2\,dx = \frac{1}{L}\int_{-L/2}^{L/2}\sum_{m=-\infty}^{\infty} c_m^* e^{-i2\pi mx/L}\sum_{n=-\infty}^{\infty} c_n e^{i2\pi nx/L}\,dx   (13.19a)
= \frac{1}{L}\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} c_m^* c_n\int_{-L/2}^{L/2} e^{-i2\pi mx/L}e^{i2\pi nx/L}\,dx   (13.19b)
= \sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} c_m^* c_n\delta_{mn}   (13.19c)
= \sum_{n=-\infty}^{\infty} |c_n|^2 .   (13.19d)

14 Fourier Transforms
Then
\int f(x')\delta(x - x')\,dx' = f(x)   (14.8)
Theorem 6 (Parseval's). If f(x) and g(y) are Fourier transform pairs then Parseval's identity states
\int_{-\infty}^{\infty} |f(x)|^2\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty} |g(y)|^2\,dy   (14.11)
Proof.
\int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty}\left[ \frac{1}{2\pi}\int_{-\infty}^{\infty} g^*(y)e^{-ixy}\,dy \right]\left[ \frac{1}{2\pi}\int_{-\infty}^{\infty} g(y')e^{ixy'}\,dy' \right]dx   (14.12a)
= \frac{1}{2\pi}\int_{y=-\infty}^{\infty}\int_{y'=-\infty}^{\infty} g^*(y)\,g(y')\underbrace{\left[ \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i(y' - y)x}\,dx \right]}_{\delta(y' - y)}dy'\,dy   (14.12b)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} g^*(y)\left[ \int_{-\infty}^{\infty} g(y')\delta(y' - y)\,dy' \right]dy   (14.12c)
= \frac{1}{2\pi}\int_{-\infty}^{\infty} g^*(y)g(y)\,dy .   (14.12d)
f(x) = \frac{1}{\pi}\int_0^{\infty} g(y)\cos(xy)\,dy .   (14.14)
Therefore, f(x) and g(y) need only be defined for positive x and y. They are Fourier cosine transform pairs.
Similarly, if f(x) is an odd function,
f(x) = \frac{1}{\pi}\int_0^{\infty} g(y)\sin(xy)\,dy \iff g(y) = 2\int_0^{\infty} f(x)\sin(xy)\,dx   (14.15)
so we interpret |g(\omega)|^2 as the radiated power spectrum. The power spectrum peaks at frequency \omega_0 and the full width at half maximum frequency band is \Gamma = 2/T (see Fig. 14.1).
Note the uncertainty principle: the decay time T is inversely proportional to the width of the power spectrum.
Figure 14.1: The power spectrum |g(\omega)|^2, with full width at half maximum \Gamma = 2/T.
and
f(\mathbf{x}) = \frac{1}{(2\pi)^3}\iiint\phi(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}\,dk_x\,dk_y\,dk_z   (14.20b)
f(\mathbf{x}) = \left( \frac{2}{\pi a^2} \right)^{3/4}e^{-r^2/a^2} = Ne^{-r^2/a^2}   (14.25)
where r = \|\mathbf{x}\|. Note the probability distribution |f(\mathbf{x})|^2 is normalized: \iiint |f(\mathbf{x})|^2\,dx\,dy\,dz = 1.
\phi(\mathbf{k}) = N\iiint e^{-r^2/a^2}e^{-i\mathbf{k}\cdot\mathbf{x}}\,dx\,dy\,dz   (14.26a)
\quad [\text{introduce polar coordinates with the } z\text{-axis along } \mathbf{k}; \ \text{let } \mu = \cos\theta, \ k = \|\mathbf{k}\|]
= N\int_{\phi=0}^{2\pi}\int_{\mu=-1}^{1}\int_{r=0}^{\infty} r^2 e^{-r^2/a^2}e^{-ikr\mu}\,dr\,d\mu\,d\phi   (14.26b)
= 2\pi N\int_{r=0}^{\infty} r^2 e^{-r^2/a^2}\int_{\mu=-1}^{1} e^{-ikr\mu}\,d\mu\,dr   (14.26c)
= 2\pi N\int_0^{\infty} r^2 e^{-r^2/a^2}\left[ \frac{1}{-ikr}e^{-ikr\mu} \right]_{-1}^{1}dr   (14.26d)
= 2\pi N\int_0^{\infty} r e^{-r^2/a^2}\frac{1}{ik}\left( e^{ikr} - e^{-ikr} \right)dr   (14.26e)
\quad [\text{change the lower limit of integration}]
= \frac{2\pi}{ik}N\int_{-\infty}^{\infty} r e^{-r^2/a^2}e^{ikr}\,dr   (14.26f)
\quad [\text{complete the square}]
= \frac{2\pi}{ik}N\int_{-\infty}^{\infty} r e^{-(r^2/a^2 - ikr - k^2a^2/4) - k^2a^2/4}\,dr   (14.26g)
= \frac{2\pi}{ik}Ne^{-k^2a^2/4}\int_{-\infty}^{\infty} r e^{-(r - ika^2/2)^2/a^2}\,dr   (14.26h)
\quad [\text{let } y = r - ika^2/2; \ \int_{-\infty}^{\infty} y e^{-y^2/a^2}\,dy = 0 \ \text{(odd integrand)}]
= \frac{2\pi}{ik}Ne^{-k^2a^2/4}\int_{-\infty}^{\infty}\left( y + \frac{ika^2}{2} \right)e^{-y^2/a^2}\,dy   (14.26i)
= \frac{2\pi}{ik}Ne^{-k^2a^2/4}\,\frac{ika^2}{2}\,a\sqrt{\pi}   (14.26j)
\quad [\text{recall } N = (2/\pi a^2)^{3/4}]
= \pi\left( \frac{2}{\pi a^2} \right)^{3/4}a^3\sqrt{\pi}\,e^{-k^2a^2/4}   (14.26k)
= (2\pi a^2)^{3/4}e^{-k^2a^2/4} .   (14.26l)
where the integral is along the line \mathrm{Re}\,s = c, c > 0, such that all singularities are to the left of the contour.
• Fourier-Bessel transform or Hankel transform:
g(k) = \int_0^{\infty} f(x)J_m(kx)\,x\,dx \iff f(x) = \int_0^{\infty} g(k)J_m(kx)\,k\,dk   (15.2)
• Hilbert transform:
g(y) = \frac{1}{\pi}\,\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{f(x)}{x - y}\,dx \iff f(x) = \frac{1}{\pi}\,\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{g(y)}{y - x}\,dy .   (15.4)
16 Applications of the Fourier Transform
\mathcal{F}^{-1}[g(y); x] = \frac{1}{2\pi}\int_{-\infty}^{\infty} g(y)e^{ixy}\,dy .   (16.1b)
• Derivatives.
\mathcal{F}[f'(x); y] = \int_{-\infty}^{\infty} f'(x)e^{-ixy}\,dx   (16.3a)
\quad [\text{integrate by parts with } u = e^{-ixy},\ dv = f'(x)\,dx]
= \left[ f(x)e^{-ixy} \right]_{-\infty}^{\infty} + iy\int_{-\infty}^{\infty} f(x)e^{-ixy}\,dx \quad [\text{assume } f(x) \to 0 \ \text{for} \ x \to \pm\infty]   (16.3b)
= iy\,\mathcal{F}[f(x); y] .   (16.3c)
• Integrals. Similarly,
\mathcal{F}\left[ \int f(x)\,dx; y \right] = \frac{\mathcal{F}[f(x); y]}{iy} + C\delta(y)   (16.4)
• Translation.
\mathcal{F}[f(x + a); y] = \int_{-\infty}^{\infty} f(x + a)e^{-ixy}\,dx   (16.5a)
= \int_{-\infty}^{\infty} f(x)e^{-i(x - a)y}\,dx   (16.5b)
= e^{iay}\,\mathcal{F}[f(x); y] .   (16.5c)
• Multiplication by an exponential.
where \omega_0 is the natural frequency, \zeta is the damping ratio, and s(t) is the source driving function.
Let X(\omega) = \mathcal{F}[x(t); \omega] and S(\omega) = \mathcal{F}[s(t); \omega]. Then, using the derivative property,
and so
X(\omega) = \frac{S(\omega)}{\omega_0^2 - \omega^2 + 2i\zeta\omega_0\omega} = G(\omega)S(\omega)   (16.12)
where G(\omega) is the transfer function. By the convolution theorem, x(t) = (g * s)(t) where g(t) = \mathcal{F}^{-1}[G(\omega); t].
The power spectrum of the harmonic motion is
|X(\omega)|^2 = \frac{|S(\omega)|^2}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} .   (16.13)
Take the inverse Fourier transform of X(\omega) to find the motion x(t).
For example, suppose s(t) = a\delta(t) (an impulse). Then,
S(\omega) = \int_{-\infty}^{\infty} a\delta(t)e^{-i\omega t}\,dt = a .   (16.14)
Therefore,
X(\omega) = -\frac{a}{\omega^2 - 2i\zeta\omega_0\omega - \omega_0^2}   (16.15a)
= -\frac{a}{(\omega - \omega_1 - i\zeta\omega_0)(\omega + \omega_1 - i\zeta\omega_0)} \quad \text{with} \ \omega_1 = \omega_0\sqrt{1 - \zeta^2} .   (16.15b)
Figure 16.1: Contour for the damped driven harmonic oscillator: the real axis from -R to R closed by the semicircle C_R in the upper half plane, enclosing the poles at \pm\omega_1 + i\zeta\omega_0.
We close the contour in the upper half plane as shown in Fig. 16.1: C_R is the curve \omega = Re^{i\theta}, 0 \le \theta \le \pi. Note that, on C_R,
e^{i\omega t} = e^{iRte^{i\theta}} = e^{iRt\cos\theta}e^{-Rt\sin\theta} \to 0 \ \text{as} \ R \to \infty \ \text{for} \ t > 0   (16.17)
so
\int_{C_R}\frac{e^{i\omega t}\,d\omega}{(\omega - \omega_1 - i\zeta\omega_0)(\omega + \omega_1 - i\zeta\omega_0)} \to 0 \ \text{as} \ R \to \infty \ \text{when} \ t > 0 .   (16.18)
Thus,
\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{e^{i\omega t}\,d\omega}{(\omega - \omega_1 - i\zeta\omega_0)(\omega + \omega_1 - i\zeta\omega_0)} + \underbrace{\lim_{R\to\infty}\int_{C_R}\frac{e^{i\omega t}\,d\omega}{(\omega - \omega_1 - i\zeta\omega_0)(\omega + \omega_1 - i\zeta\omega_0)}}_{\to\,0} = 2\pi i\sum\mathrm{Res} .   (16.19)
There are two simple poles in the upper half plane with residues
Therefore
For t < 0 we need to close the contour in the lower half plane instead so that \int_{C_R}\cdots \to 0 as R \to \infty, but there are no poles in the lower half plane, so we find
and therefore
x(t) = \begin{cases} 0 & t < 0 \\ \dfrac{a}{\omega_1}e^{-\zeta\omega_0 t}\sin\omega_1 t & t > 0 \end{cases}   (16.23a)
with
\omega_1 = \omega_0\sqrt{1 - \zeta^2} .   (16.23b)
This example shows that causality imposes the requirement that X(\omega) has singularities only in the upper half plane and is analytic everywhere in the lower half plane.
Problems
Problem 16.
Expand the following functions in a Fourier series of the form
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left( a_n\cos\frac{2\pi nx}{L} + b_n\sin\frac{2\pi nx}{L} \right),
Problem 17.
Find the Fourier transform, \phi(\mathbf{k}), of the wave function for a 2p electron in hydrogen:
f(\mathbf{x}) = \frac{1}{\sqrt{32\pi a_0^5}}\,z e^{-r/2a_0}
where \mathbf{x} = (x, y, z), r^2 = x^2 + y^2 + z^2, and a_0 is the radius of the first Bohr orbit. (Hint: let f(\mathbf{x}) = \mathbf{e}_z\cdot\mathbf{g}(\mathbf{x}) and use symmetry to argue that \mathcal{F}[\mathbf{g}(\mathbf{x}); \mathbf{k}] \propto \mathbf{k}.)
Problem 18.
Prove the Wiener-Khinchin theorem, which relates the autocorrelation and the Fourier transform: let \mathcal{F}[f(x); y] = g(y); then
\mathcal{F}^{-1}[\,|g(y)|^2; x\,] = \int_{-\infty}^{\infty} f^*(t)f(x + t)\,dt
Module V
Ordinary Differential
Equations
Problems 146
Motivation
Ordinary differential equations are even more of a pain in the neck to solve
than integrals. But, of course, physical laws are formulated in terms of
differential equations, and the solutions require integrating them, so it is
important to know how to do that. Here we present some common techniques
for solving ordinary differential equations. We will also encounter some
commonly occurring special functions.
Terminology
Consider:
\frac{d^3y}{dx^3} + x\sqrt{\frac{dy}{dx}} + x^2 y = 0 .
Rationalize this:
x^2\frac{dy}{dx} = \left( \frac{d^3y}{dx^3} + x^2 y \right)^2
= \underbrace{\left( \frac{d^3y}{dx^3} \right)^2}_{\text{this is the highest order derivative term}} + 2x^2 y\left( \frac{d^3y}{dx^3} \right) + x^4 y^2 .
We say this ordinary differential equation (ODE) is third order and second degree.
17 First Order ODEs
Separable Equations
If we can write the equation in the form
Then
\underbrace{\frac{1}{\sqrt{1 - y^2}}}_{B(y)}\,dy + \underbrace{\frac{1}{\sqrt{1 - x^2}}}_{A(x)}\,dx = 0 .   (17.3)
Integrate:
Exact Equations
More generally,
du = \underbrace{\frac{\partial u}{\partial x}}_{A(x,y)}\,dx + \underbrace{\frac{\partial u}{\partial y}}_{B(x,y)}\,dy   (17.6)
but since \dfrac{\partial^2 u}{\partial x\,\partial y} = \dfrac{\partial^2 u}{\partial y\,\partial x}, a necessary condition is
\frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} .   (17.7)
\underbrace{(x + y)}_{A(x,y)}\,dx + \underbrace{x}_{B(x,y)}\,dy = 0 .   (17.8)
Note: \dfrac{\partial A}{\partial y} = \dfrac{\partial B}{\partial x} = 1 so this equation is exact.
Therefore
\frac{\partial u}{\partial x} = x + y \quad \text{and} \quad \frac{\partial u}{\partial y} = x   (17.9)
and so
u(x, y) = \tfrac{1}{2}x^2 + xy + c .   (17.10)
Integrating Factors
If A\,dx + B\,dy is not exact, try to find a function \mu(x, y) such that
\mu(A\,dx + B\,dy) = 0   (17.11)
x y' + (1 + x)y = e^x .   (17.15)
\underbrace{x e^x}_{B(x)}\,dy + \underbrace{(1 + x)e^x y}_{A(x,y)}\,dx = e^{2x}\,dx   (17.19)
and we verify
\frac{\partial B}{\partial x} = e^x + x e^x \quad \text{and} \quad \frac{\partial A}{\partial y} = (1 + x)e^x \neq \frac{\partial B}{\partial x}   (17.20)
thus
\frac{\partial u}{\partial x} = A(x, y) = (1 + x)e^x y \quad \text{and} \quad \frac{\partial u}{\partial y} = B(x) = x e^x   (17.21)
which implies
u(x, y) = x e^x y .   (17.22)
or
y = \frac{1}{2x}e^x + \frac{c}{x}e^{-x} .   (17.24)
\underbrace{X_1, X_2, \ldots, X_n}_{\text{extensive variables, i.e., displacements, e.g., volume}} \quad \text{and} \quad \underbrace{Y_1, Y_2, \ldots, Y_n}_{\text{intensive variables, i.e., forces, e.g., pressure}}
\underbrace{đQ}_{\text{heat flow}} = \underbrace{dU}_{\text{change in internal energy}} + \underbrace{Y_1\,dX_1 + \cdots + Y_n\,dX_n}_{\text{work terms}} .   (17.25)
The use of đ (rather than d) for the heat flow reminds us that the right hand side cannot generally be written as an exact differential, so the equation cannot generally be integrated. Therefore there is no 'heat' of the system, Q = Q(X_1, \ldots, X_n, Y_1, \ldots, Y_n).
If n = 1 we have claimed an integrating factor can always be found for an equation of this form, but for n > 1 this cannot be integrated in general with the aid of an integrating factor...
but...
Kelvin-Planck statement of the second law of thermodynamics:
S = S(X_1, \ldots, X_n, Y_1, \ldots, Y_n)
dS = 0 \ \text{when} \ đQ = 0 .
This implies that there must exist an integrating factor
\mu = \mu(X_1, \ldots, X_n, Y_1, \ldots, Y_n)
so that the adiabatic surfaces are
đQ = T\,dS .   (17.27)
Figure 17.2: Non-intersecting adiabats S = 1, S = 2, S = 3.
Change of Variables
Changing variables can often help.
Ex. 17.5. Consider an equation of the form
y' = f(ax + by + c)   (17.28)
dy = f(ax + by + c)\,dx .   (17.29)
Let
v = ax + by + c   (17.30a)
so
dv = a\,dx + b\,dy \quad \text{or} \quad a\,dx = dv - b\,dy .   (17.30b)
Then
a\,dy = f(v)(dv - b\,dy)   (17.31a)
\implies [a + bf(v)]\,dy = f(v)\,dv   (17.31b)
\implies dy = \frac{f(v)}{a + bf(v)}\,dv .   (17.31c)
Divide by y^n:
\underbrace{\frac{1}{y^n}\frac{dy}{dx}}_{\frac{1}{1-n}\frac{d}{dx}y^{1-n}} + f(x)\,y^{1-n} = g(x) .   (17.33)
Homogeneous Functions
A function f(x, y, \ldots) is a homogeneous function of degree r in the arguments if
Test if this is isobaric: suppose x has units of s and suppose y has units of s^m. Then the dimensions of the terms of the equation are
Multiply by y^2:
which is separable.
18 Higher Order ODEs
y 00 + 3y 0 + 2y = e x . (18.6)
y = c1 e −x + c2 e −2x . (18.7)
y = 16 e x + c1 e −x + c2 e −2x . (18.9)
Note: if f (x) or a term in f (x) is also part of the complementary function, the
particular integral may contain this term and its derivatives multiplied by
some power of x.
y = xe −x + c1 e −x + c2 e −2x . (18.12)
y 00 = f (y) (18.13)
19 Power Series Solutions

y'' = x - y^2 .   (19.1)
and so on.
Note: c_n, n > 1 can all be expressed in terms of c_0 and c_1, which are the two free constants of integration.
If we want a solution with y = 0 and y' = 1 at x = 0 then c_0 = 0, c_1 = 1, and c_2 = 0, c_3 = \tfrac{1}{6}, c_4 = -\tfrac{1}{12}, \ldots, so
y = x + \tfrac{1}{6}x^3 - \tfrac{1}{12}x^4 + \cdots   (19.4)
y = c_0 + c_1 x + c_2 x^2 + \cdots   (19.9)
(1 - x^2)\sum_{m=2}^{\infty} m(m - 1)c_m x^{m-2} - 2x\sum_{m=1}^{\infty} m c_m x^{m-1} + n(n + 1)\sum_{m=0}^{\infty} c_m x^m = 0 .   (19.10a)
Note that
\sum_{m=2}^{\infty} c_m m(m - 1)x^{m-2} = 2c_2 + 6c_3 x + \sum_{m=4}^{\infty} c_m m(m - 1)x^{m-2}   (19.11a)
\quad [\text{let } m = m' + 2]
= 2c_2 + 6c_3 x + \sum_{m'=2}^{\infty} c_{m'+2}(m' + 2)(m' + 1)x^{m'}   (19.11b)
so we have
y = c_0\left[ 1 - n(n + 1)\frac{x^2}{2!} + n(n + 1)(n - 2)(n + 3)\frac{x^4}{4!} + \cdots \right]
+ c_1\left[ x - (n - 1)(n + 2)\frac{x^3}{3!} + (n - 1)(n + 2)(n - 3)(n + 4)\frac{x^5}{5!} + \cdots \right] .   (19.14)
Note that \left| \dfrac{c_{m+2}}{c_m} \right| \to 1 as m \to \infty so both series converge for x^2 < 1.
Write the general solution as
where
U_n(x) = 1 - n(n + 1)\frac{x^2}{2!} + n(n + 1)(n - 2)(n + 3)\frac{x^4}{4!} + \cdots   (19.15b)
V_n(x) = x - (n - 1)(n + 2)\frac{x^3}{3!} + (n - 1)(n + 2)(n - 3)(n + 4)\frac{x^5}{5!} + \cdots   (19.15c)
are the two independent solutions, and c_0 and c_1 are the two constants of integration.
Although the series converge for |x| < 1, we saw in Ex. 2.4 that they diverge for |x| = 1;
however, we normally want solutions over the domain −1 ≤ x ≤ 1. This can be arranged
in one of two ways:
[Figure: the Legendre polynomials P_n(x) for n = 0, 1, 2, 3 on -1 \le x \le 1.]
y = c_1\left[ x - (-1)(2)\frac{x^3}{3!} + (-1)(2)(-3)(4)\frac{x^5}{5!} - \cdots \right] .   (19.18)
Note:
\frac{c_{m+2}}{c_m} = \frac{(m + n + 1)(m - n)}{(m + 1)(m + 2)} = \frac{m}{m + 2} \quad \text{since} \ n = 0   (19.19a)
\implies (m + 2)c_{m+2} = m c_m   (19.19b)
\implies c_m = \frac{c_1}{m} .   (19.19c)
Thus
y = c_1\left[ x + \frac{x^3}{3} + \frac{x^5}{5} + \frac{x^7}{7} + \cdots \right] .   (19.20)
We've seen this series before in Eq. (3.11): it is \frac{1}{2}\ln\frac{1+x}{1-x} and is singular at x = \pm 1.
We have Legendre functions of the second kind of order n:
Q_n(x) = \begin{cases} U_n(1)V_n(x) & n = 0, 2, 4, \ldots \\ -V_n(1)U_n(x) & n = 1, 3, 5, \ldots \end{cases}   (19.21)
with
Q_0(x) = \frac{1}{2}\ln\frac{1+x}{1-x}, \quad Q_1(x) = \frac{x}{2}\ln\frac{1+x}{1-x} - 1, \quad \text{etc.}   (19.22)
[Figure: the Legendre functions of the second kind Q_n(x) for n = 0, 1, 2, 3.]
x^2 y'' + x y' + (x^2 - \nu^2)y = 0 .   (19.24)
This has a regular singular point at x = 0 so the solution has the form
y(x, s) = x^s\sum_{n=0}^{\infty} c_n x^n, \quad c_0 \neq 0 .   (19.25)
We have
x y' = \sum_{n=0}^{\infty}(s + n)c_n x^{s+n}   (19.26a)
x^2 y'' = \sum_{n=0}^{\infty}(s + n)(s + n - 1)c_n x^{s+n}   (19.26b)
• s^2 = \nu^2   (19.29a)
which is called the indicial equation;
• c_1[(s + 1)^2 - \nu^2] = 0   (19.29b)
which is solved if c_1 = 0 or (s + 1) = \pm\nu;
• \frac{c_n}{c_{n-2}} = -\frac{1}{(s + \nu + n)(s - \nu + n)}   (19.29c)
which is the recurrence relation.
We choose to solve the second of these by setting c_1 = 0. Then only the even-n terms survive and the recurrence formula gives all c_n (n even) in terms of c_0. The solutions to the indicial equation are s = \pm\nu and the two independent solutions are
Aside: had we left c_1 free and instead set (s + 1) = \pm\nu then, with the indicial equation, we have the requirement s = -\nu = -1/2. It turns out that the terms that appear from this are identical to those contained in the other solution s = +\nu = 1/2 with c_1 = 0, so we can choose c_1 = 0 even for the \nu = 1/2 case.
Set s^2 = \nu^2 and c_1 = 0. Then
\frac{c_n}{c_{n-2}} = -\frac{1}{(s + n)^2 - s^2} = -\frac{1}{s^2 + 2sn + n^2 - s^2} = -\frac{1}{n(2s + n)} .   (19.31)
But there is a problem if \nu is an integer: the procedure works fine for the s = +\nu solution (assume \nu is positive), but the second solution with s = -\nu won't work because
\left.\frac{c_n}{c_{n-2}}\right|_{s=-\nu} = -\frac{1}{(s + \nu + n)(s - \nu + n)}\bigg|_{s=-\nu} = -\frac{1}{n(n - 2\nu)}   (19.33)
so when n = 2\nu the ratio is infinite and c_{2\nu} and higher are infinite!
We need a way to get a second solution, so we try this trick: don't impose the indicial relation (i.e., leave s and \nu unrelated), multiply y(x, s) by the factor (s + \nu), then take the limit as s \to -\nu. The factor will cancel the infinities with this procedure.
It turns out this doesn't work... but let's try it and see why.
where
c_0' = \pm\frac{c_0}{2\cdot(2 - 2\nu)\cdots(2\nu)}   (19.35c)
and
\frac{c_n'}{c_{n-2}'} = \frac{c_{2\nu+n}}{c_{2\nu+n-2}} = -\frac{1}{(s + \nu + 2\nu + n)(s - \nu + 2\nu + n)}   (19.35d)
= -\frac{1}{[(s + 2\nu) + \nu + n][(s + 2\nu) - \nu + n]} .   (19.35e)
But note: s + 2\nu when s = -\nu is the same as s when s = +\nu, so this solution is actually the same as the y(x, +\nu) solution (up to an overall factor). Thus it is not an independent solution.
Instead, substitute [(s + \nu)y(x, s)] into Bessel's equation. The result will not be zero since we have not yet imposed the indicial equation:
\left[ x^2\frac{\partial^2}{\partial x^2} + x\frac{\partial}{\partial x} + (x^2 - \nu^2) \right](s + \nu)\,y(x, s) = (s + \nu)\underbrace{(s^2 - \nu^2)}_{\text{proportional to the indicial equation}} = (s + \nu)^2(s - \nu) .   (19.36)
\left[ x^2\frac{\partial^2}{\partial x^2} + x\frac{\partial}{\partial x} + (x^2 - \nu^2) \right]\frac{\partial}{\partial s}[(s + \nu)y(x, s)] = \underbrace{2(s + \nu)(s - \nu) + (s + \nu)^2}_{\text{vanishes as } s \to -\nu} .   (19.37)
y(x, s) = c_0 x^s\left[ 1 - \underbrace{\frac{x^2}{s(s + 4)}}_{\text{this is what causes the problem when } s = -\nu} + \frac{x^4}{s(s + 4)(s + 2)(s + 6)} - \cdots \right]   (19.39)
so
(s + 2)y(x, s) = c_0 x^s\left[ (s + 2) - \frac{(s + 2)}{s(s + 4)}x^2 + \frac{x^4}{s(s + 4)(s + 6)} - \frac{x^6}{s(s + 4)(s + 6)(s + 4)(s + 8)} + \cdots \right] .   (19.40)
Now take the derivative with respect to s. Note: \frac{\partial}{\partial s}x^s = \frac{\partial}{\partial s}e^{s\ln x} = x^s\ln x.
\frac{\partial}{\partial s}[(s + 2)y(x, s)] = (s + 2)y(x, s)\ln x
+ c_0 x^s\frac{\partial}{\partial s}\left[ (s + 2) - \frac{(s + 2)}{s(s + 4)}x^2 + \frac{x^4}{s(s + 4)(s + 6)} - \frac{x^6}{s(s + 4)(s + 6)(s + 4)(s + 8)} + \cdots \right]   (19.41a)
= (s + 2)y(x, s)\ln x
+ c_0 x^s\left\{ 1 - \frac{(s + 2)}{s(s + 4)}\left[ \frac{1}{s + 2} - \frac{1}{s} - \frac{1}{s + 4} \right]x^2 + \frac{1}{s(s + 4)(s + 6)}\left[ -\frac{1}{s} - \frac{1}{s + 4} - \frac{1}{s + 6} \right]x^4 - \cdots \right\} .   (19.41b)
and therefore
\frac{\partial}{\partial s}[(s + 2)y(x, s)]\bigg|_{s=-2} = -\frac{1}{16}y(x, 2)\ln x + c_0\frac{1}{x^2}\left\{ 1 + \frac{x^2}{4} + \frac{x^4}{64} + \cdots \right\} .   (19.43)
\frac{d^2\psi}{dx^2} + (E - x^2)\psi = 0 .   (19.44)
Here, for convenience, we use dimensionless variables: to restore dimensions, x \to \sqrt{m\omega/\hbar}\,x and E \to E/(\tfrac{1}{2}\hbar\omega) where \omega is the angular frequency of the oscillator.
For large values of x we have
\frac{d^2\psi}{dx^2} - x^2\psi \approx 0   (19.45)
and so the solutions are \psi \sim e^{\pm x^2/2} as x \to \infty: \psi' \sim \pm x e^{\pm x^2/2} and \psi'' \sim x^2 e^{\pm x^2/2} (where the omitted term is higher order in the asymptotic series) so \psi'' - x^2\psi vanishes at leading order in the asymptotic series.
Physical solutions must not become infinite as x \to \infty. This motivates the substitution
\psi = y e^{-x^2/2} .   (19.46)
(We must watch for the solutions y \sim e^{x^2} that generate the unwanted \psi \sim e^{+x^2/2} behavior.) We have:
\psi' = y'e^{-x^2/2} - x y e^{-x^2/2}   (19.47a)
\psi'' = y''e^{-x^2/2} - 2x y'e^{-x^2/2} - y e^{-x^2/2} + x^2 y e^{-x^2/2}   (19.47b)
(y'' - 2x y' - y + x^2 y) + E y - x^2 y = 0 .   (19.47c)
Therefore
c_2 = \frac{1 - E}{2}c_0, \qquad c_3 = \frac{3 - E}{6}c_1,   (19.51a)
and
\frac{c_{n+2}}{c_n} = \frac{(2n + 1) - E}{(n + 1)(n + 2)}, \quad n = 2, 3, 4, \ldots .   (19.51b)
y = c_0\left\{ 1 + (1 - E)\frac{x^2}{2!} + (1 - E)(5 - E)\frac{x^4}{4!} + \cdots \right\}
+ c_1\left\{ x + (3 - E)\frac{x^3}{3!} + (3 - E)(7 - E)\frac{x^5}{5!} + \cdots \right\} .   (19.52)
In general, for large n, \dfrac{c_{n+2}}{c_n} \sim \dfrac{2}{n} as n \to \infty so c_{2n+2} \sim \dfrac{2}{2n}c_{2n} = \dfrac{c_{2n}}{n}.
Therefore
c_{2(n+1)} \sim \frac{c_{2n}}{n} \sim \frac{c_{2(n-1)}}{n(n - 1)} \sim \cdots \sim \frac{c_0}{n!} \quad \text{as} \ n \to \infty   (19.53)
and similarly with the odd-n coefficients.
Therefore the terms are \sim \dfrac{(x^2)^n}{n!} as n \to \infty so y \sim e^{x^2} for large x as expected: this generates the \psi \sim e^{x^2/2} solutions.
The bounded (as x \to \pm\infty) solutions are when one of the series truncates (and the coefficient of the other series is chosen to be 0). This only happens when
(2n + 1) - E = 0 \implies E = 2n + 1 .   (19.54)
E = E_n = 2n + 1, \quad n = 0, 1, 2, \ldots .   (19.55)
These are eigenvalues. The corresponding solutions (that don't blow up) are the eigenfunctions
\psi(x) = \psi_n(x) = H_n(x)e^{-x^2/2}   (19.56)
[Figure: the Hermite polynomials H_n(x) for n = 0, 1, 2, 3.]
\left( -\frac{d^2}{dx^2} + x^2 \right)\psi_n = E_n\psi_n .   (19.58)
\left( -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2 \right)\psi_n = E_n\psi_n   (19.59)
E_n = \hbar\omega(n + \tfrac{1}{2}), \quad n = 0, 1, 2, \ldots .   (19.60b)
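An added numerical cross-check (not from the notes): discretizing -d^2/dx^2 + x^2 on a grid and diagonalizing reproduces the dimensionless eigenvalues E_n = 2n + 1 of Eq. (19.55).

```python
import numpy as np

# Finite-difference Hamiltonian H = -d^2/dx^2 + x^2 on a uniform grid.
N, L = 1000, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
main = 2.0 / h**2 + x**2
off = -1.0 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(E[:5])   # ~ [1, 3, 5, 7, 9]
```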
20 The WKB Method

|S''(x)| \approx \frac{1}{2}\frac{|f'(x)|}{\sqrt{f(x)}} \ll |f(x)| .   (20.5)
S'' \approx \pm\frac{1}{2}\frac{f'(x)}{\sqrt{f(x)}}   (20.8a)
\implies [S'(x)]^2 = f(x) - S''(x) \approx f(x) \mp \frac{1}{2}\frac{f'(x)}{\sqrt{f(x)}} = f(x)\left[ 1 \mp \frac{1}{2}\frac{f'(x)}{f^{3/2}(x)} \right]   (20.8b)
\implies S'(x) \approx \pm\sqrt{f(x)}\left\{ 1 \mp \frac{1}{4}\frac{f'(x)}{f^{3/2}(x)} + \cdots \right\} \approx \pm\sqrt{f(x)} - \frac{1}{4}\frac{f'(x)}{f(x)}   (20.8c)
\implies S(x) \approx \pm\int\sqrt{f(x)}\,dx - \frac{1}{4}\ln f(x) .   (20.8d)
Our solution is:
y(x) \approx \frac{1}{\sqrt[4]{f(x)}}\left\{ c_+\exp\left[ +\int\sqrt{f(x)}\,dx \right] + c_-\exp\left[ -\int\sqrt{f(x)}\,dx \right] \right\} .   (20.9)
• For x \ll -1:
\sqrt{f(x)} = \sqrt{x} = -i\sqrt{-x} \quad \text{and} \quad \sqrt[4]{f(x)} = \frac{\sqrt[4]{-x}}{\sqrt{i}} = \frac{(-x)^{1/4}}{e^{i\pi/4}} ;   (20.11a)
also
\int_0^x\sqrt{f(x)}\,dx = \int_0^x(-i\sqrt{-x})\,dx = i\int_0^{-x}\sqrt{x}\,dx = i\frac{2}{3}(-x)^{3/2}   (20.11b)
so the two solutions will have the form
(-x)^{-1/4}\exp\left[ \pm i\left( \frac{2}{3}(-x)^{3/2} + \frac{\pi}{4} \right) \right] .   (20.11c)
Therefore,
y \approx A(-x)^{-1/4}\cos\left[ \frac{2}{3}(-x)^{3/2} + \delta \right], \quad x \ll -1   (20.12)
where A is a free amplitude constant and \delta is an undetermined phase.
• For x \gg 1, the two solutions have the form
x^{-1/4}\exp\left[ \pm\int\sqrt{x}\,dx \right] = x^{-1/4}\exp\left[ \pm\frac{2}{3}x^{3/2} \right]   (20.13)
and we take the negative exponential solution which remains bounded as x \to \infty:
y \approx B x^{-1/4}\exp\left[ -\frac{2}{3}x^{3/2} \right], \quad x \gg 1   (20.14)
where B is a free amplitude constant.
We now want to connect these forms at x = 0. This will allow us to determine the phase \delta in the left-hand side that results in the exponential decay in the right-hand side.
Since
    d²y/dx² − xy = 0                                            (20.16)
we find
    −k² g(k) − i dg(k)/dk = 0   =⇒   g(k) = C e^ik³/3           (20.17)
where C is a constant of integration, so
    y(x) = (1/2á) C ∫_−∞^∞ e^ik³/3 e^ikx dk .                   (20.18)
Convention: set C = 1; the result is the Airy function of the first kind, which can be
written in these forms:
    Ai x = (1/2á) ∫_−∞^∞ exp[i(k³/3 + kx)] dk                   (20.19)
    Ai x = (1/á) ∫₀^∞ cos(k³/3 + kx) dk .                       (20.20)
A second, independent solution is the Airy function of the second kind,
    Bi x = (1/á) ∫₀^∞ [exp(−k³/3 + kx) + sin(k³/3 + kx)] dk .   (20.21)
The functions Ai x and Bi x are shown in Fig. 20.2. We see that the function Bi x has the
unwanted exponentially increasing behavior.
[Figure 20.2: The Airy functions Ai x and Bi x plotted over −10 ≲ x ≲ 2.]
We now want to compute the asymptotic forms of the Airy integral for x → −∞ and
x → ∞ and compare to our WKB results in order to identify the phase Ö. We will use the
saddle-point method.
    e^(−x)f(k)   with   f(k) = i (k + k³/3x)                    (20.22)
    f′(k) = i (1 + k²/x)   =⇒   f′(k₀) = 0   for   k₀ = ±√(−x)  (20.23a)
    f″(k) = 2i k/x        =⇒   f″(k₀) = ∓2i/√(−x) .             (20.23b)
Note: there are two saddle points, k₀ = ±√(−x). Also
    f(k₀) = ±i√(−x) ∓ (1/3) i√(−x) = ±(2/3) i√(−x) .            (20.23c)
Therefore,
    f(k) ≈ f(k₀) + ½ f″(k₀)(k − k₀)² .                          (20.24)
√
Write f 00 (k0 ) = âe iæ with â = 2/ −x and æ = ∓á/2, and k − k0 = se iè with
è = −æ/2 ± á/2. Then
Z s
1 (−x)f (k) 1 2á (−x)f (k0 ) iè
y= e dk ∼ e e (20.25)
2á C x→−∞ 2á (−x)â
Adding the contributions of the two saddle points together gives the asymptotic form
of the Airy function for x → −∞:
    Ai x ∼ [1/(√á (−x)^1/4)] cos[(2/3)(−x)^3/2 − á/4] ,   x → −∞ .   (20.27)
For x → +∞ we instead write the integrand as
    e^xf(k)   with   f(k) = i (k + k³/3x)                       (20.28)
    f′(k) = i (1 + k²/x)   =⇒   f′(k₀) = 0   for   k₀ = ±i√x    (20.29a)
    f″(k) = 2i k/x        =⇒   f″(k₀) = ∓2/√x .                 (20.29b)
Note: there are two saddle points, k₀ = ±i√x, but now we will only go over one. Also
    f(k₀) = ∓√x ± (1/3)√x = ∓(2/3)√x .                          (20.29c)
Write f″(k₀) = â e^iæ with â = 2/√x and æ = 0 or á. From the topography of Re f(k)
shown in the bottom panel of Fig. 20.3, we see we should go over only the saddle point
k₀ = +i√x, with k − k₀ = s e^iè where è = 0. Then
    Ai x = (1/2á) ∫_C e^xf(k) dk  ∼  (1/2á) √(2á/xâ) e^xf(k₀) e^iè   as x → +∞   (20.30a)
         = (1/2á) √(2á√x/2x) exp[−(2/3) x^3/2]                  (20.30b)
where C is a contour deformed to go over the desired saddle point. Thus
    Ai x ∼ [1/(2√á x^1/4)] exp[−(2/3) x^3/2] ,   x → +∞ .       (20.31)
We therefore have:
    Ai x ∼ [1/(√á (−x)^1/4)] cos[(2/3)(−x)^3/2 − á/4] ,   x → −∞    (20.32a)
    Ai x ∼ [1/(2√á x^1/4)] exp[−(2/3) x^3/2] ,            x → +∞ .  (20.32b)
Figure 20.3: Topography of the surface Re[i(k + k³/(3x))] for x < 0 (top) and x > 0
(bottom). The saddle points are at the intersections of the white contour lines. Top: the
contour is deformed so that it goes over both saddle points k₀ = ±√(−x). Bottom: the
contour is deformed to go over the saddle point k₀ = i√x but not k₀ = −i√x.
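A hedged numerical comparison of the asymptotic forms (20.32a,b) against a library Airy function; this sketch assumes SciPy is available and is illustrative only, not part of the notes.

# Compare the saddle-point asymptotics (20.32a,b) with scipy's Airy function.
import numpy as np
from scipy.special import airy

def ai_asym(x):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    neg, pos = x < 0, x >= 0
    xm = -x[neg]
    out[neg] = np.cos(2.0 / 3.0 * xm**1.5 - np.pi / 4) / (np.sqrt(np.pi) * xm**0.25)
    xp = x[pos]
    out[pos] = np.exp(-2.0 / 3.0 * xp**1.5) / (2.0 * np.sqrt(np.pi) * xp**0.25)
    return out

for x in (-20.0, -5.0, 5.0, 10.0):
    exact = airy(x)[0]          # airy() returns (Ai, Ai', Bi, Bi')
    print(f"x={x:6.1f}  Ai={exact: .6e}  asymptotic={ai_asym([x])[0]: .6e}")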
The WKB method can be used for more general f (x) but we will still need to
connect an oscillatory solution for the f (x) < 0 region to an exponential
solution for the f (x) > 0 region.
Note that f (x) is approximately linear as it passes through zero, so the
undetermined phase is just that of the Airy function.
Therefore the rule is, when f (b) = 0 and f (x) > 0 for x > b,
    [2/(−f (x))^1/4] cos( ∫ₓᵇ √(−f (x′)) dx′ − á/4 )  ⇋  [1/(f (x))^1/4] exp( −∫ᵇˣ √f (x′) dx′ )   (20.34)
    [left side: f (x) < 0 for x < b]                    [right side: f (x) > 0 for x > b]
For a turning point at x = a with f (x) > 0 for x < a and f (x) < 0 for x > a, the
corresponding connection formulas are
    f^−1/4 exp( +∫ₓᵃ √f dx′ )  ⇋  2(−f )^−1/4 cos( ∫ₐˣ √(−f ) dx′ − á/4 )
    f^−1/4 exp( −∫ₓᵃ √f dx′ )  ⇋  (−f )^−1/4 sin( ∫ₐˣ √(−f ) dx′ − á/4 )
Figure 20.4: Connection formulas for −d²y/dx² + f (x)y = 0, with exponential solutions
for x < a and oscillatory solutions for x > a.
Application: Bohr-Sommerfeld quantization. The time-independent Schrödinger equation is
    d²è/dx² − (2m/~²)[V(x) − E]è = 0 .                          (20.35)
The potential V(x) shown in Fig. 20.5 has two turning points at x = a and x = b.
Use the WKB method with
    f (x) = (2m/~²)[V(x) − E] .                                 (20.36)
[Figure 20.5: Potential for the Bohr-Sommerfeld quantization rule: the energy E crosses
V(x) at the turning points x = a and x = b.]
Requiring the WKB solutions to connect consistently through both turning points gives
    ∫ₐᵇ √( (2m/~²)[E − V(x)] ) dx = (n + ½) á ,   n = 0, 1, 2, . . . .
This is the Bohr-Sommerfeld quantization rule and the integral on the left hand side
is one half of the classical action.
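As an illustrative check — assuming Python with SciPy, and using the dimensionless oscillator of Ch. 19 where f(x) = x² − E — the quantization integral can be evaluated numerically and solved for E; for this potential it reproduces Eₙ = 2n + 1 exactly.

# Bohr-Sommerfeld rule for the dimensionless oscillator: turning points at +-sqrt(E),
#   int sqrt(E - x^2) dx = (n + 1/2) pi   should give E_n = 2n + 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def action(E):
    b = np.sqrt(E)
    val, _ = quad(lambda x: np.sqrt(E - x**2), -b, b)
    return val

for n in range(4):
    target = (n + 0.5) * np.pi
    E = brentq(lambda E: action(E) - target, 1e-6, 100.0)
    print(f"n={n}  E_WKB={E:.6f}  exact={2*n + 1}")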
Problems
Problem 19.
An ideal gas in a box has internal energy U(V, P) = 32 P V where P is the
pressure of the gas and V is the volume of the box. The first law of
thermodynamics for a quasistatic process is
d¯Q = d U + P d V
where d¯Q is the heat flow to the system. Although the right-hand-side is not an
exact differential, so there is no function Q(V, P) for the “heat of the system”
(hence we write d¯Q rather than d Q), the right-hand-side can be integrated by
means of an integrating factor Ý. That is, dã = Ý · (d U + P d V) is exact and can
be integrated. Determine Ý(V, P) and the integral ã(V, P) in terms of the state
variables V and P. What is the physical significance of these quantities?
Problem 20.
Find the general solution of
a) y′ + y cos x = ½ sin 2x ;
b) 2x³ y′ = 1 + √(1 + 4x² y) .
Problem 21.
Find the general solution of
a) y 000 − 2y 00 − y 0 + 2y = sin x ;
b) a2 y 002 = (1 + y 02 )3 .
Problem 22.
An object is dropped (from rest) from some distance r0 from the center of the
Earth and it accelerates according to Newton’s law of gravity,
    r̈ = −G M⊕ / r² .
Determine t(r) for the fall, where t = 0 when r = r0 . Find the number of days it
would take an object to fall to the surface of the Earth, r = R ⊕ , if it were
dropped from the distance of the Moon, r0 = 60R⊕ .
Use: G M⊕ = 398 600 km3 s−2 and R⊕ = 6371 km.
Problem 23.
Bessel’s equation for ß = 0 is
x 2 y 00 + xy 0 + x 2 y = 0.
Show that a second, independent solution has the form J₀(x) ln x + Ax² + Bx⁴ + Cx⁶ + · · · .
Problem 24.
Consider the equation
    d²y/dx² + (2/x) dy/dx + [ −k² + 2/x − ℓ(ℓ + 1)/x² ] y = 0 ,   0 ≤ x ≤ ∞
where ℓ = 0, 1, 2, . . .. Find all values of the constant k that can give a solution
that is finite on the entire range of x (including x = ∞). An equation like this
arises in solving the Schrödinger equation for the hydrogen atom [here r = a0 x,
R(r) = a20 y(x), and E = −k 2 (e 2 /2a0 ) with a0 = ~2 /(me e 2 )].
(Hint: Let y = v/x, then “factor out” the behavior at infinity.)
Problem 25.
For what values of the constant K does the differential equation
    y″ + ( −¼ + K/x ) y = 0   (0 < x < ∞)
have a nontrivial solution vanishing at x = 0 and x = ∞?
Problem 26.
Problem 27.
Recall Bessel’s equation is:
x 2 y 00 + xy 0 + (x 2 − ß2 )y = 0.
Eigenvalue Problems
Problems 180
Motivation
We’ve seen that when solutions to differential equations are required to satisfy
specific boundary conditions then there can be restrictions on the form of the
differential equation in order for it to admit such solutions. Here we will explore
such eigenvalue problems in more detail as they commonly arise in physics
problems.
We will start with some general properties of linear differential operators,
eigenvalues, and eigenfunctions. We then turn to a rather general class of
eigenvalue problems called Sturm-Liouville problems. Such equations occur
frequently in physics applications, and we will encounter several important
special functions such as Bessel functions, Legendre polynomials, and
spherical harmonics. We will examine the case of degenerate eigenvalues and
show how complete bases of eigenfunctions can be used to form eigenfunction
expansions of other functions. Finally we’ll look at inhomogeneous equations
and introduce the concept of a Green function.
21 General Discussion of Eigenvalue Problems
where u(x) and v(x) are functions that obey the boundary conditions.
Suppose L is Hermitian. Then, if ui (x) and uj (x) are eigenfunctions belonging
to eigenvalues Ýi and Ý j ,
L ui (x) = Ýi ui (x) and L uj (x) = Ý j uj (x) . (21.3)
Because L is Hermitian,
Z "Z #∗
∗ ∗
uj (x) L ui (x) d x = ui (x) L uj (x) d x (21.4a)
Ò Ò
" Z #∗
= Ýj ui∗ (x)uj (x) d x (21.4b)
ZÒ
∗
= Ýj ui (x)uj∗ (x) d x (21.4c)
Ò
so therefore
Z
(Ýi − Ý∗j ) uj∗ (x)ui (x) d x = 0 . (21.5)
Ò
The eigenvalues are Ýn = (ná)2 for integer n and the eigenfunctions are un (x) ∝ e i náx .
L is Hermitian with the periodic boundary conditions: if u(x) and v(x) are two functions
that satisfy the boundary conditions then
    −∫₀²á u*(x) [d²v/dx²] dx = −[u* dv/dx]₀²á + ∫₀²á (du*/dx)(dv/dx) dx   (21.9a)
                             = [v du*/dx]₀²á − ∫₀²á v(x) [d²u*/dx²] dx    (21.9b)
                             = [ −∫₀²á v*(x) [d²u/dx²] dx ]* ,            (21.9c)
where the boundary terms vanish because u and v obey the periodic boundary conditions.
More generally, eigenvalue problems can include a weight function â(x) with
â(x) ≥ 0 in the domain so that
    d/dx [ p(x) du/dx ] − q(x)u(x) + Ý â(x)u(x) = 0 .           (22.1)
One can show that the operator
    L = −p(x) d²/dx² − p′(x) d/dx + q(x)                        (22.2)
is Hermitian and that the orthogonality of eigenfunctions ui and uj , Ýi , Ý j , is
Z
(ui , uj ) = ui∗ (x)uj (x)â(x) d x = 0 . (22.3)
Ò
• Legendre's equation
    (1 − x²) d²y/dx² − 2x dy/dx + n(n + 1)y = 0 .               (22.4)
• Hermite's equation
    d²y/dx² − 2x dy/dx + 2ny = 0 .                              (22.5)
Here p(x) = e^−x², q(x) = 0, â(x) = e^−x², where −∞ ≤ x ≤ ∞, and Ýₙ = 2n.
• Bessel’s equation
    x² d²y/dx² + x dy/dx + (k²x² − ß²)y = 0                     (22.6)
dx dx
(note: we have introduced the factor k 2 now). Here p(x) = x, q(x) = ß2 /x,
â(x) = x, the domain is 0 ≤ x ≤ ∞, and Ý = k 2 .
The eigenfunctions and orthogonality relations for these equations are:
• Legendre polynomials, Pn (x):
    ∫₋₁¹ Pₙ(x) Pₘ(x) dx = 0   for n ≠ m .                       (22.7)
−1
Independence of Solutions
Recall Bessel’s equation with k = 1 has solutions Jß (x) and J−ß (x) and these are
independent unless ß is an integer.
In general, two solutions, u and v, are said to be linearly dependent if there
are constants Ó and Ô (Ó ≠ 0, Ô ≠ 0) such that
    Óu + Ôv = 0 .                                               (22.10a)
Take a derivative:
    Óu′ + Ôv′ = 0 .                                             (22.10b)
Eliminating Ô between these two equations gives
    Ó (uv′ − u′v) = 0 ,                                         (22.10c)
so the combination uv′ − u′v must vanish if the solutions are linearly dependent.
    pu″ + p′u′ − qu + Ýâu = 0                                   (22.12a)
    pv″ + p′v′ − qv + Ýâv = 0                                   (22.12b)
    W = uv′ − vu′                                               (22.13a)
    =⇒ pW = puv′ − pvu′                                         (22.13b)
    =⇒ (pW)′ = u·[pv″ + p′v′] + pu′v′ − v·[pu″ + p′u′] − pu′v′  (22.13c)
             = u·[(q − Ýâ)v] − v·[(q − Ýâ)u]                    (22.13d)
             = 0                                                (22.13e)
and therefore
    W[u(x), v(x)] = C / p(x)                                    (22.14)
    yₙ(x) = C Jₙ(x) ∫ₓ₀ˣ dx′ / [x′ Jₙ²(x′)] .                   (22.21)
    J₀(x) = 1 − x²/4 + x⁴/64 − · · ·   and   J₀⁻²(x) = 1 + x²/2 + (5/32)x⁴ + · · ·   (22.22)
so
    y₀(x) = C J₀(x) ∫ (1/x) {1 + x²/2 + (5/32)x⁴ + · · ·} dx .  (22.23)
For C = 1 we have
    y₀(x) = J₀(x) {ln x + x²/4 + (5/128)x⁴ + · · ·}             (22.24a)
          = J₀(x) ln x + x²/4 − (3/128)x⁴ + · · · .             (22.24b)
[Figure: Bessel functions of the first kind Jₙ(x) (left) and of the second kind Yₙ(x)
(right) for n = 0, 1, 2, plotted over 0 ≤ x ≤ 20.]
Generating Functions
Consider a function of two variables, g(x, t). We can use it to generate a set of
functions A n (x) by expanding it in powers of t:
    g(x, t) = Σₙ Aₙ(x) tⁿ .                                     (22.26)
For example, consider
    g(x, t) = exp[ (x/2)(t − 1/t) ] .                           (22.27)
We can obtain Aₙ(x) from the Laurent series via the contour integral
    Aₙ(x) = (1/2ái) ∮_C g(x, t) / t^(n+1) dt .                  (22.28)
Taking the contour C to be the unit circle, t = e^iÚ, gives
    Aₙ(x) = (1/2á) ∫₋á^á [cos(x sin Ú − nÚ) + i sin(x sin Ú − nÚ)] dÚ   (22.29c)
(the sine term integrates to zero since it is odd in Ú), so
    Aₙ(x) = (1/á) ∫₀^á cos(x sin Ú − nÚ) dÚ .                   (22.30)
g x 1 x 1
= 1 + 2 exp t− (22.31a)
t 2 t 2 t
x 1
= 1 + 2 g(x, t) (22.31b)
2 t
∞
x 1 ¼
= 1+ 2 A n (x)t n (22.31c)
2 t n=−∞
∞
x
¼
= [A (x)t n + A n (x)t n−2 ] (22.31d)
2 n=−∞ n
∞
x
¼
= [A (x) + A n+1 (x)]t n−1 (22.31e)
2 n=−∞ n−1
but
∞ ∞
g ¼ ¼
= A n (x)t n = nA n (x)t n−1 (22.31f)
t t n=−∞ n=−∞
so we find
    Aₙ₋₁(x) + Aₙ₊₁(x) = (2n/x) Aₙ(x) .                          (22.32)
g 1 1 x 1
= t− exp t− (22.33a)
x 2 t 2 t
1 1
= t− g(x, t) (22.33b)
2 t
∞
1 1 ¼
= t− A (x)t n (22.33c)
2 t n=−∞ n
∞
1 ¼
= [A (x)t n+1 − A n (x)t n−1 ] (22.33d)
2 n=−∞ n
∞
1 ¼
= [A (x) − A n+1 (x)]t n (22.33e)
2 n=−∞ n−1
but
∞ ∞
g ¼ ¼
= A n (x)t n = A0n (x)t n (22.33f)
x x n=−∞ n=−∞
so we find
    A′ₙ(x) = Aₙ₋₁(x) − (n/x) Aₙ(x)   and   A′ₙ(x) = (n/x) Aₙ(x) − Aₙ₊₁(x) .   (22.35)
Manipulate these:
    x A″ₙ(x) + A′ₙ(x) = −x Aₙ(x) + (n²/x) Aₙ(x)                 (22.36f)
or
    x² A″ₙ(x) + x A′ₙ(x) + (x² − n²) Aₙ(x) = 0 ,
which is Bessel's equation, and therefore
    Aₙ(x) = Σ_s=0^∞ (−1)ˢ [1/(s! (s + n)!)] (x/2)^(n+2s) .      (22.39)
• Integral form
    Jₙ(x) = (1/á) ∫₀^á cos(nÚ − x sin Ú) dÚ                     (22.41)
• Recurrence relations
    Jₙ₋₁(x) + Jₙ₊₁(x) = (2n/x) Jₙ(x)                            (22.42)
    Jₙ₋₁(x) − Jₙ₊₁(x) = 2 J′ₙ(x)                                (22.43)
    J′ₙ(x) = Jₙ₋₁(x) − (n/x) Jₙ(x)                              (22.44)
    J′ₙ(x) = (n/x) Jₙ(x) − Jₙ₊₁(x)                              (22.45)
• Series expansion
    Jₙ(x) = Σ_k=0^∞ (−1)ᵏ [1/(k! (k + n)!)] (x/2)^(n+2k)        (22.46)
• Hankel functions
    Hₙ⁽¹⁾(x) = Jₙ(x) + i Yₙ(x)                                  (22.47)
    Hₙ⁽²⁾(x) = Jₙ(x) − i Yₙ(x)                                  (22.48)
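The series (22.46) and the integral form (22.41) can be cross-checked numerically; the sketch below (assuming SciPy is available; the sample points are arbitrary) compares both against scipy.special.jv.

# Cross-check of the series (22.46) and integral form (22.41) against scipy.special.jv.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import jv

def j_series(n, x, kmax=30):
    return sum((-1)**k / (factorial(k) * factorial(k + n)) * (x / 2)**(n + 2*k)
               for k in range(kmax))

def j_integral(n, x):
    val, _ = quad(lambda th: np.cos(n * th - x * np.sin(th)), 0.0, np.pi)
    return val / np.pi

for n, x in [(0, 1.0), (1, 2.5), (2, 7.0)]:
    print(n, x, j_series(n, x), j_integral(n, x), jv(n, x))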
Thus
    J₁/₂(x) = Σ_k=0^∞ [(−1)ᵏ/(2k + 1)!] √(2/á) x^(2k+1/2)       (22.51a)
            = (2/áx)^1/2 Σ_k=0^∞ [(−1)ᵏ/(2k + 1)!] x^(2k+1)     (22.51b)
and the remaining sum is just sin x, and therefore
    J₁/₂(x) = (2/áx)^1/2 sin x .                                (22.52)
Similarly,
    J₋₁/₂(x) = (2/áx)^1/2 cos x .                               (22.53)
The recurrence relations then give
    J₃/₂(x) = (2/áx)^1/2 [ (1/x) sin x − cos x ] ,              (22.54)
    J₋₃/₂(x) = (2/áx)^1/2 [ −(1/x) cos x − sin x ] ,            (22.55)
etc.
The spherical Bessel functions of the first kind are
    jℓ(x) = √(á/2x) Jℓ₊₁/₂(x)                                   (22.56)
and
    yℓ(x) = √(á/2x) Yℓ₊₁/₂(x) = (−1)^(ℓ+1) √(á/2x) J₋ℓ₋₁/₂(x) . (22.57)
The spherical Hankel functions are
    hℓ⁽¹⁾(x) = jℓ(x) + i yℓ(x)                                  (22.60)
    hℓ⁽²⁾(x) = jℓ(x) − i yℓ(x) .                                (22.61)
[Figure: the spherical Bessel functions j₀(x), j₁(x), y₀(x), and y₁(x) plotted over 0 ≤ x ≤ 20.]
The modified Bessel functions are
    Iₙ(z) = Jₙ(iz) / iⁿ                                         (22.63)
and
    Kₙ(z) = (á/2) i^(n+1) Hₙ⁽¹⁾(iz) .                           (22.64)
Figure 22.3: Modified Bessel functions of the first kind Iₙ(x) and second kind Kₙ(x)
for n = 0, 1, 2, plotted over 0 ≤ x ≤ 3.
Legendre Polynomials
The generating function for the Legendre polynomials is
    g(x, t) = 1/√(1 − 2xt + t²) = Σ_n=0^∞ Pₙ(x) tⁿ .            (22.66)
Consider:
    ∂g/∂t = −½ (−2x + 2t)/(1 − 2xt + t²)^3/2 = [(x − t)/(1 − 2xt + t²)] g(x, t)   (22.67a)
    =⇒ (1 − 2xt + t²) ∂g/∂t = (x − t) g(x, t)                   (22.67b)
    =⇒ (1 − 2xt + t²) Σₙ n Pₙ(x) t^(n−1) = (x − t) Σₙ Pₙ(x) tⁿ  (22.67c)
    =⇒ Σₙ { n Pₙ(x) t^(n−1) − 2xn Pₙ(x) tⁿ + n Pₙ(x) t^(n+1) }
          = Σₙ { x Pₙ(x) tⁿ − Pₙ(x) t^(n+1) }                   (22.67d)
    =⇒ Σₙ { (n + 1) Pₙ₊₁(x) − 2xn Pₙ(x) + (n − 1) Pₙ₋₁(x) } tⁿ
          = Σₙ { x Pₙ(x) − Pₙ₋₁(x) } tⁿ .                       (22.67e)
Equating coefficients of tⁿ gives the recurrence relation
    (n + 1) Pₙ₊₁(x) = (2n + 1) x Pₙ(x) − n Pₙ₋₁(x) .
Now consider
g t t
= = g(x, t) (22.69)
2
x (1 − 2xt + t ) 3/2 1 − 2xt + t 2
but
∞
g ¼ 0
= Pn (x)t n (22.70)
x
n=0
∞
¼ ∞
¼
=⇒ (1 − 2xt + t 2 ) Pn0 (x)t n = t Pn (x)t n (22.71)
n=0 n=0
0 0
Pn+1 (x) − Pn−1 (x) = (2n + 1)Pn (x) (22.73)
0
Pn+1 (x) = (n + 1)Pn (x) + xPn0 (x) (22.74)
0
Pn−1 (x) = −nPn (x) + xPn0 (x) (22.75)
Orthonormalization
Consider
    [g(x, t)]² = 1/(1 − 2xt + t²) = [ Σₙ Pₙ(x) tⁿ ]² = Σₘ Σₙ Pₘ(x) Pₙ(x) t^(m+n) .   (22.77)
Now integrate both sides ∫₋₁¹ dx:
    Σₘ Σₙ t^(m+n) ∫₋₁¹ Pₘ(x) Pₙ(x) dx = ∫₋₁¹ dx/(1 − 2xt + t²)  (22.78a)
        [substitute y = 1 − 2xt + t², dx = −dy/2t]
      = (1/2t) ∫_(1−t)²^(1+t)² dy/y                             (22.78b)
      = (1/t) ln[(1 + t)/(1 − t)]                               (22.78c)
        [recall ½ ln((1 + x)/(1 − x)) = x + x³/3 + x⁵/5 + · · ·]
      = 2 Σₙ t^2n/(2n + 1)                                      (22.78d)
so
    Σₙ 2t^2n/(2n + 1) = Σₙ Σₘ t^(m+n) ∫₋₁¹ Pₘ(x) Pₙ(x) dx       (22.78e)
and the integral on the right must be ∝ Öₘₙ. Therefore
    ∫₋₁¹ Pₙ(x) Pₘ(x) dx = [2/(2n + 1)] Öₘₙ .                    (22.79)
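A short numerical check of the orthogonality relation (22.79), assuming NumPy; Pₙ is generated from the recurrence (n + 1)Pₙ₊₁(x) = (2n + 1)xPₙ(x) − nPₙ₋₁(x) obtained above.

# Verify (22.79) using Gauss-Legendre quadrature (exact for these polynomial integrands).
import numpy as np

def legendre(n, x):
    p_prev, p = np.ones_like(x), x.copy()
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

xg, wg = np.polynomial.legendre.leggauss(20)
for n in range(4):
    for m in range(4):
        val = np.sum(wg * legendre(n, xg) * legendre(m, xg))
        expect = 2.0 / (2*n + 1) if n == m else 0.0
        assert abs(val - expect) < 1e-12
print("orthogonality relation (22.79) verified for n, m < 4")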
Special values
Let x = 1:
    g(1, t) = 1/√(1 − 2t + t²) = 1/(1 − t) = Σ_n=0^∞ tⁿ         (22.80a)
but
∞
¼
g(1, t) = Pn (1)t n (22.80b)
n=0
and therefore
Pn (1) = 1 (22.81)
Similarly, setting x = 0 in the generating function, we find
    P₂ₙ(0) = (−1)ⁿ (2n − 1)!!/(2n)!!   and   P₂ₙ₊₁(0) = 0 .     (22.84)
Useful identity
Let x = cos Ú and t = r′/r in the generating function, with r′ < r. Then:
    g(cos Ú, r′/r) = Σ_ℓ=0^∞ (r′/r)^ℓ Pℓ(cos Ú) = 1/√(1 − 2(r′/r) cos Ú + (r′/r)²)   (22.86a)
                   = r/√(r² + r′² − 2rr′ cos Ú)                 (22.86b)
                   = r/‖x − x′‖                                 (22.86c)
where x and x′ are two vectors with r = ‖x‖, r′ = ‖x′‖, and x · x′ = rr′ cos Ú. Thus
    1/‖x − x′‖ = Σ_ℓ=0^∞ [(r′)^ℓ / r^(ℓ+1)] Pℓ(cos Ú) ,   r′ < r .   (22.87)
If r′ > r, exchange r′ and r or else the series will not converge. Therefore
    1/‖x − x′‖ = Σ_ℓ=0^∞ [r<^ℓ / r>^(ℓ+1)] Pℓ(cos Ú)            (22.88)
Second solution
The Wronskian can be used to find the second independent solution Qn (x):
    W[Pₙ, Qₙ] = Pₙ Q′ₙ − P′ₙ Qₙ = Pₙ² (Qₙ/Pₙ)′                  (22.89a)
but
    W[Pₙ, Qₙ] ∝ 1/(1 − x²)                                      (22.89b)
so
    Qₙ(x) = Pₙ(x) ∫ dx / [ (1 − x²)[Pₙ(x)]² ]                   (22.90)
The associated Legendre equation is
    (1 − x²) d²y/dx² − 2x dy/dx + [ n(n + 1) − m²/(1 − x²) ] y = 0 .   (22.93)
Its regular solutions are the associated Legendre functions
    Pₙᵐ(x) = (−1)ᵐ (1 − x²)^(m/2) dᵐPₙ(x)/dxᵐ                   (22.94)
    Pₙ⁻ᵐ(x) = (−1)ᵐ [(n − m)!/(n + m)!] Pₙᵐ(x) .                (22.95)
The first few associated Legendre functions are (recall Pₙ⁰(x) = Pₙ(x)):
    P₁¹(x) = −√(1 − x²) ,   P₂¹(x) = −3x√(1 − x²) ,   P₂²(x) = 3(1 − x²) .   (22.96)
Figure 22.4: The associated Legendre functions P₁¹(x), P₂¹(x), and P₂²(x) on −1 ≤ x ≤ 1.
Spherical Harmonics
The spherical harmonics are defined as
    Yℓm(Ú, æ) = √[ (2ℓ + 1)/4á · (ℓ − m)!/(ℓ + m)! ] Pℓᵐ(cos Ú) e^imæ .   (22.97)
When two or more eigenvalues are the same they are called degenerate.
A linear combination of eigenfunctions belonging to a degenerate set is again
an eigenfunction with the same eigenvalue.
Construct an orthogonal set of eigenfunctions by the Gram-Schmidt
procedure demonstrated in the next example:
Ex. 23.1. Suppose u, v, and w all belong to eigenvalue Ý.
• Take u₁ = u.
• Let u₂ = v + Óu₁ and choose Ó so that
    0 = ∫ u₁*(x) u₂(x) â(x) dx = ∫ u*(x) v(x) â(x) dx + Ó ∫ u*(x) u(x) â(x) dx   (23.1)
    =⇒ Ó = − ∫ u*(x) v(x) â(x) dx / ∫ u*(x) u(x) â(x) dx .      (23.2)
• Let u₃ = w + Ôu₁ + Õu₂ and choose Ô and Õ so that u₃ is orthogonal to both u₁ and u₂:
    =⇒ Ô = − ∫ u₁*(x) w(x) â(x) dx / ∫ u₁*(x) u₁(x) â(x) dx     (23.4)
    =⇒ Õ = − ∫ u₂*(x) w(x) â(x) dx / ∫ u₂*(x) u₂(x) â(x) dx .   (23.6)
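A discrete sketch of this Gram-Schmidt procedure, assuming NumPy; the grid, the weight â(x) = 1, and the three sample functions below are made-up illustrations, not taken from the notes.

# Orthogonalize three "degenerate" functions with respect to (f, g) = int f* g rho dx.
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
rho = np.ones_like(x)                     # weight function rho(x); here simply 1

def inner(f, g):
    return np.trapz(np.conj(f) * g * rho, x)

u, v, w = np.ones_like(x), x, x**2        # three functions sharing one eigenvalue
u1 = u
u2 = v - inner(u1, v) / inner(u1, u1) * u1
u3 = w - inner(u1, w) / inner(u1, u1) * u1 - inner(u2, w) / inner(u2, u2) * u2

for a, b in [(u1, u2), (u1, u3), (u2, u3)]:
    print(f"inner product = {inner(a, b): .2e}")   # all ~ 0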
For a complete set of spherical harmonics:
    • f(Ú, æ) = Σ_ℓ=0^∞ Σ_m=−ℓ^ℓ cℓm Yℓm(Ú, æ)                  (23.14)
    • cℓm = ∫₀²á ∫₀^á f(Ú, æ) [Yℓm(Ú, æ)]* sin Ú dÚ dæ          (23.15)
    • Σ_ℓ=0^∞ Σ_m=−ℓ^ℓ Yℓm(Ú, æ) [Yℓm(Ú′, æ′)]* = (1/sin Ú) Ö(Ú − Ú′) Ö(æ − æ′)   (23.16)
24 Inhomogeneous Problems — Green Functions
Then we have
    Σₙ cₙ (Ýₙ − Ý) uₙ(x) = Σₙ dₙ uₙ(x)                          (24.3a)
    cₙ = dₙ/(Ýₙ − Ý) = (uₙ, f)/(Ýₙ − Ý) .                       (24.3b)
Therefore
    u(x) = Σₙ [uₙ(x)/(Ýₙ − Ý)] ∫_Ò uₙ*(x′) f(x′) dx′            (24.4a)
         = ∫_Ò G(x, x′) f(x′) dx′                               (24.4b)
where
    G(x, x′) = Σₙ uₙ(x) uₙ*(x′)/(Ýₙ − Ý)                        (24.4c)
thus we have
Ex. 24.1. A string of length ℓ vibrating with angular frequency é with fixed ends is
described by
    d²u/dx² + k²u = 0 ,   u(0) = u(ℓ) = 0   (fixed-ends boundary condition)   (24.7)
where u(x) is the transverse displacement of the string from its equilibrium.
Here k = é/c where c is the speed of sound in the string.
Find the Green function for this differential equation and boundary conditions.
• Method 1.
Let k² = −Ý and solve the eigenvalue problem
    d²u/dx² = Ýu ,   u(0) = u(ℓ) = 0 .                          (24.8)
The eigenvalues are
    Ýₙ = −(ná/ℓ)² ,   n = 1, 2, 3, . . .                        (24.9)
and the normalized eigenfunctions are
    uₙ(x) = √(2/ℓ) sin(náx/ℓ) ,   n = 1, 2, 3, . . . .          (24.10)
Therefore
    G(x, x′) = Σₙ uₙ(x) uₙ*(x′)/(Ýₙ − Ý)                        (24.11a)
             = (2/ℓ) Σ_n=1^∞ sin(náx/ℓ) sin(náx′/ℓ) / [k² − (ná/ℓ)²] .   (24.11b)
Note: when the string vibrates at an eigenfrequency, the Green function becomes
infinite.
Note also: G (x, x 0 ) = G ∗ (x 0 , x) so for real-valued Green functions
G (x, x 0 ) = G (x 0 , x) . (24.12)
• Method 2.
Solve
    d²G(x, x′)/dx² + k²G(x, x′) = Ö(x − x′) ,   G(0, x′) = G(ℓ, x′) = 0 .   (24.13)
Note: for x ≠ x′, d²G/dx² + k²G = 0, so
    G(x, x′) = { a sin kx          x < x′
               { b sin k(x − ℓ)    x > x′                       (24.14)
where a and b are constants. This satisfies the boundary conditions at x = 0 and x = ℓ
and the homogeneous equation for x ≠ x′.
We need to match these two solutions at x = x′ to determine a and b.
Integrate the differential equation over x from x′ − × to x′ + ×:
    ∫_{x′−×}^{x′+×} [d²G(x, x′)/dx²] dx + k² ∫_{x′−×}^{x′+×} G(x, x′) dx = ∫_{x′−×}^{x′+×} Ö(x − x′) dx   (24.15)
(as × → 0 the first term becomes the jump in dG/dx, the second term vanishes, and the
right-hand side is 1), so we have
    lim_{×→0} [ dG(x, x′)/dx |_{x=x′+×} − dG(x, x′)/dx |_{x=x′−×} ] = 1 .   (24.16)
Integrating once more shows that the jump in G itself vanishes, i.e., G is continuous at x′.
Matching the two solutions at x = x′ then yields
    a sin kx′ = b sin k(x′ − ℓ)   and   bk cos k(x′ − ℓ) − ak cos kx′ = 1
and we find
    a = sin k(x′ − ℓ)/(k sin kℓ)   and   b = sin kx′/(k sin kℓ) .   (24.19)
Therefore
    G(x, x′) = (1/(k sin kℓ)) { sin kx sin k(x′ − ℓ)    0 ≤ x < x′
                              { sin kx′ sin k(x − ℓ)    x′ < x ≤ ℓ .   (24.20)
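The two constructions of G(x, x′) can be compared numerically; the following sketch (assuming NumPy; the values of ℓ, k, and x′ are arbitrary choices) evaluates the eigenfunction sum (24.11b) with a finite cutoff and the closed form (24.20).

# Compare the eigenfunction-sum and closed-form Green functions of Ex. 24.1.
import numpy as np

L, k, xp = 1.0, 3.0, 0.37          # string length, k = omega/c, source point x'
x = np.linspace(0.0, L, 401)

def G_sum(x, xp, nmax=2000):
    n = np.arange(1, nmax + 1)
    terms = (np.sin(n[:, None] * np.pi * x[None, :] / L)
             * np.sin(n[:, None] * np.pi * xp / L)
             / (k**2 - (n[:, None] * np.pi / L)**2))
    return (2.0 / L) * terms.sum(axis=0)

def G_closed(x, xp):
    xl, xg = np.minimum(x, xp), np.maximum(x, xp)   # x_< and x_>
    return np.sin(k * xl) * np.sin(k * (xg - L)) / (k * np.sin(k * L))

print("max difference:", np.max(np.abs(G_sum(x, xp) - G_closed(x, xp))))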
General method
Consider the linear operator
d2 d
L = p(x) + p0 (x) + q(x) (24.21)
d x2 dx
and the inhomogeneous equation
and enforce
h i
lim G x=x 0 −× − G x=x 0 +×
=0 (24.24a)
×→0
and
" #
dG dG 1
lim − =− . (24.24b)
×→0 d x x=x 0 −× dx x=x 0 −× p(x 0 )
This determines
v(x 0 ) u(x 0 )
A= and B= (24.25a)
C C
where
C
W[u(x 0 ), v(x 0 )] = . (24.25b)
p(x 0 )
Therefore
1 u(x)v(x 0 ) a ≤ x < x0
0
G (x, x ) = (24.26a)
C u(x 0 )v(x) x0 < x ≤ b
with
Then
Z b
y(x) = G (x, x 0 )f (x 0 ) d x 0 . (24.27)
a
Problems
Problem 28.
The Sturm-Liouville differential equation is
L u(x) + Ýâ(x)u(x) = 0
where
d2 d
L = p(x) + p0 (x) − q(x).
d x2 dx
Show that L is Hermitian when the domain is chosen to be a ≤ x ≤ b and the
boundary conditions are taken to be u(a) = u(b) = 0. Show that orthogonality
now means:
Zb
0 = (u, v) = u ∗ (x)v(x)â(x) d x.
a
Problem 29.
a) Use the generating function to prove the following identities:
Hn+1 (x) = 2xHn (x) − 2nHn−1 (x),
Hn0 (x) = 2nHn−1 (x),
(2n)!
H2n (0) = (−1)n ,
n!
H2n+1 (0) = 0,
and
Hn (x) = (−1)n Hn (−x).
b) Using the identities proven in part (a), show that Hn (x) is a solution to
Hermite’s equation.
c) From the generating function, show that
bn/2c
¼ n!
Hn (x) = (−1)s (2x)n−2s
(n − 2s)!s!
s=0
where bn/2c means the greatest integer less than or equal to n/2.
Problem 30.
a) Prove Rodrigues’s formula:
2 d n −x 2
Hn (x) = (−1)n e x e .
d xn
Problem 32.
Consider the differential equation
" 2
n2
#
d 1 d
+ − y(r) = 0 0<r<∞
d r2 r d r r2
Problems 228
Motivation
We now move our general discussion beyond one dimension.
We first address solving linear systems of equations and introduce matrices
and review some of their properties. Next we talk about vector spaces, linear
operators, and we re-encounter eigenvalue problems which arise in quantum
and classical mechanics. Then we review vector calculus and differential
operators that are used to formulate fundamental physical laws, e.g.,
electrodynamics. In the last section we provide formulae for these differential
operators in cylindrical and spherical coordinate systems which are commonly
used to simplify problems.
25 Linear Algebra
2x + 3y = 4 and 6x + 9y = 12 (25.2)
(the second equation is 3 times the first) then they are not linearly
independent — they are the same equation, and we can drop one of them.
Another possibility is when two equations are inconsistent, e.g.,
2x + 3y = 4 and 2x + 3y = 5 . (25.3)
(a11 a22 − a21 a12 )y + (a11 a23 − a21 a13 )z = a11 b 2 − a21 b1
(a11 a32 − a31 a12 )y + (a11 a33 − a31 a13 )z = a11 b 3 − a31 b2 . (25.6)
Now multiply the last equation by (a11 a22 − a21 a12 ) and substitute in for y:
[(a11 a22 − a21 a12 )(a11 a33 − a31 a13 ) − (a11 a32 − a31 a12 )(a11 a23 − a21 a13 )]z
= (a11 a22 − a21 a12 )(a11 b 3 − a31 b 1 ) − (a11 a32 − a31 a12 )(a11 b 2 − a21 b 1 ) .
(25.8)
Provided the coefficient in front of z is not zero, we can now solve for z. Then substitute
z into the equation for y to determine y and finally substitute the equations for z and y
into the equation for x to determine x.
This is straightforward but tedious. (Fortunately we have computers.)
The solution is
(a22 a33 − a23 a32 )b 1 − (a12 a33 − a13 a32 )b 2 + (a12 a23 − a13 a22 )b 3
x= (25.9a)
D
−(a21 a33 − a23 a31 )b 1 + (a11 a33 − a13 a31 )b 2 − (a11 a23 − a13 a21 )b 3
y=
D
(25.9b)
(a21 a32 − a22 a31 )b 1 − (a11 a32 − a12 a31 )b 2 + (a11 a22 − a12 a21 )b 3
z= (25.9c)
D
with
Matrices
To express the linear system of equations more succinctly, introduce the
matrix. First note that the linear system can be written
n
¼
ai j x j = b i , i = 1...m . (25.10)
j=1
Ax = b (25.11)
where
x1
a11 a12 a13 ··· a1n b 1
x2
a a22 a23 ··· a2n b
21
x3
2
A = . .. .. .. .. , x = , and b = . .
.. . . . .
..
..
.
am1 am2 am3 ··· amn bm
xn
(25.12)
n
¼
C = AB ⇐⇒ ci k = ai j b j k , for i = 1 . . . m and k = 1 . . . p. (25.13)
j=1
C = A∗ ⇐⇒ ci j = a∗i j . (25.16)
C = Aᵀ ⇐⇒ cᵢⱼ = aⱼᵢ . (25.17)
• Adjoint:
A † = (A T )∗ . (25.18)
×1,2,...,n = 1 and ×i1 ,...,ip ,...,iq ,...,in = −×i1 ,...,iq ,...ip ,...,in (25.21)
The minor mᵢⱼ of the square matrix A = [aᵢⱼ] is the determinant of the matrix obtained
by deleting row i and column j of A, and the cofactor matrix is C = [(−1)^(i+j) mᵢⱼ].
Then the matrix inverse of A is
    A⁻¹ = (1/det A) Cᵀ .                                        (25.22)
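A minimal sketch of Eq. (25.22), assuming NumPy, with the result checked against numpy.linalg.inv for an arbitrary example matrix.

# Inverse from the cofactor matrix, Eq. (25.22).
import numpy as np

def cofactor_inverse(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)   # cofactor
    return C.T / np.linalg.det(A)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(np.allclose(cofactor_inverse(A), np.linalg.inv(A)))   # True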
(A B)−1 = B −1 A −1 (25.23a)
T T T
(A B) = B A (25.23b)
Tr(A B) = Tr(B A) (25.23c)
det(A B) = (det A)(det B) = det(B A) . (25.23d)
A matrix A is a:
• real matrix if A ∗ = A, (25.24a)
• symmetric matrix if AT = A, (25.24b)
• antisymmetric matrix if AT = −A, (25.24c)
• Hermitian matrix if A † = A, (25.24d)
• orthogonal matrix if A −1 = A T , (25.24e)
• unitary matrix if A −1 = A†, (25.24f)
• diagonal matrix if A = [ai j ] with ai j = 0 for i , j, (25.24g)
• idempotent matrix if A2 = A, (25.24h)
• nilpotent matrix if Ak = 0 for some integer k. (25.24i)
26 Vector Spaces
Linear Operators
A linear operator A is a map from one vector in a vector space to another
y = Ax (26.6)
aj = A ej , j = 1...n . (26.9)
but
n
¼
y= yi ei (26.11c)
i=1
thus
n
¼
yi = ai j x j , i = 1...n . (26.11d)
j=1
y = Ax . (26.12)
Coordinate Transformations
Suppose that we change from one set of basis vectors to another set of basis
vectors by an invertible linear transformation P:
    e′ⱼ = P eⱼ   or   e′ⱼ = Σ_i=1^n pᵢⱼ eᵢ ,   j = 1 . . . n .  (26.13)
so
n
¼
xi = pi j x j0 , i = 1...n or x = P x0 (26.15)
j=1
y = Ax (26.16)
then
y = Ax and y0 = A0 x0 . (26.17)
Therefore
P y 0 = A(P x 0 ) or y 0 = P −1 A P x 0 . (26.18)
We thus identify
A 0 = P −1 A P . (26.19)
AB = C =⇒ P −1 A(P P −1 )B P = P −1 C P =⇒ A0 B0 = C0 . (26.20)
Inner Product
A scalar product or inner product or dot product between two vectors
x·y (26.21)
is a scalar-valued function of the two vectors with the properties:
• Conjugate symmetry x · y = (y · x)∗ (26.22a)
• Linearity (ax + by) · z = a(x · z) + b(y · z) (26.22b)
• Positive definite x · x > 0 for x , 0 (26.22c)
The length of a vector x is kxk = (x · x)1/2 .
If x · y = 0 then the two vectors are orthogonal.
The dot product of two vectors is related to their lengths and the angle Ú
between them by x · y = kxk kyk cos Ú.
Suppose we define the inner product in some basis as
n
¼
x·y = xi yi∗ = y † x (26.23)
i=1
where xi and yi are the components of the vectors x and y respectively in that
basis. It then follows that the basis vectors are orthonormal with respect to
our inner product:
ei · e j = Öi j . (26.24)
If we wish to find a new orthonormal basis e′ᵢ = P eᵢ, i = 1 . . . n, with respect to
the same inner product, then
n n n ¼
n
¼ ¼ ¼
0 0
Öi j = ei · e j = pki ek ·
pj e = pki p∗j ek · e (26.25a)
k=1 =1 k=1 =1
|{z}
Ök
n
¼
= pki p∗kj (26.25b)
k=1
or
1 = P† P (26.25c)
so the transformation matrix must be unitary. If the vector space is real then
the transformation matrix must be orthogonal.
Note that
n n
¼ ¼
e0j · ei = pkj ek · ei = pkj ek · ei = pi j (26.26)
k=1 k=1
|{z}
Öki
and since ei and e0j are both unit vectors, pi j is the direction cosine between
the two different basis vectors, and P is the matrix of direction cosines.
Figure 26.1: Passive or alias (left) and active or alibi (right) rotations.
x×y (26.30)
z1 = x2 y3 − x3 y2 , z2 = x3 y1 − x1 y3 , and z3 = x1 y2 − x2 y1 . (26.32b)
We see that
e1 × e2 = e3 , e2 × e3 = e1 , and e3 × e1 = e2 . (26.33)
Eigenvalue Problems
If a linear operator A acts on a vector x in such a manner that the result is
proportional to x,
A x = Ýx , (26.36)
A x = Ýx . (26.37)
(A − Ý1)x = 0 . (26.38)
Note that if (A − Ý1) is invertible then the solution is the trivial solution x = 0.
Therefore, in order for there to be non-trivial solutions to the eigenvalue
equation, (A − Ý1) must be non-invertible and so its determinant must vanish:
(A − Ýp 1)x p = 0 (26.40)
x †p x p = 1 . (26.41)
Ex. 26.2. Consider the (active) rotation of a vector x through angle Ú about the z-axis
Rz described by the rotation matrix
Rz x = Ýx or R z x = Ýx (26.43)
that is, we seek a vector that is left unchanged, apart from a possible scale, when
rotated by Ú about the z-axis. (It should be obvious what this vector is.)
The secular equation is
This has one real eigenvalue, Ý = 1, unless Ú = 0 or Ú = á. We’ll come back to those at
the end of the example.
To find the eigenvector for the Ý = 1 eigenvalue we solve
cos Ú − 1 − sin Ú
0 x 0
sin Ú cos Ú − 1 0 · y = 0 (26.45)
0 0 0 z 0
−1 0 0 x −x = −x
x
0 −1 0 y −y = −y
· = − y =⇒ (26.46)
z = −z
0 0 1 z z
for which the solution is z = 0 and x and y are unspecified. An orthonormal set of
eigenvectors is ex and e y . We see that any vector on the x-y plane simply changes its
sign when rotated by an angle á about the z axis.
H x p = Ýp x p and H x q = Ýq x q . (26.47)
Then we have
x †q (Ýp x p ) = x †q H x p (26.48a)
¼n n
¼
∗
= xqi hi j xpj (26.48b)
i=1 j=1
¼n ¼n
= ∗ ∗
(xqi hi∗j xpj ) (26.48c)
i=1 j=1
n n
∗
¼ ¼
∗ ∗
= xpj hi j xqi (26.48d)
j=1 i=1
h i∗
† †
= x p (H x q ) (26.48e)
h i∗
= x †p (H x q ) (26.48f)
h i∗
= x †p (Ýq x q ) . (26.48g)
Since x †q x p = (x †p x q )∗ we find
A x = Ýx (26.52)
P −1 A P P −1 x = ÝP −1 x or A 0 x 0 = Ýx 0 (26.53)
a0i j = Ýi Öi j . (26.57)
Ex. 26.3. Vibrational modes of the linear triatomic carbon dioxide molecule.
We consider only the vibrational modes along the axis of the linear triatomic molecule.
Let s1 , s2 , and s3 be the displacements away from the equilibrium positions of the
leftmost oxygen atom, the carbon atom, and the rightmost oxygen atom respectively
(see Fig. 26.2). The two double bonds are represented by springs with spring constant k.
Newton’s equations of motion are
d 2 s1
mO = −k(s1 − s2 ) (26.59a)
d t2
d 2 s2
mC = −k(s2 − s3 ) + k(s1 − s2 ) (26.59b)
d t2
d 2 s3
mO = k(s2 − s3 ) . (26.59c)
d t2
Assume the motion is oscillatory with angular frequency é and let s1 (t) = x1 e iét ,
s2 (t) = x2 e iét , and s3 (t) = x3 e iét . Then
    [  1  −1   0 ] [x₁]     [x₁]
    [ −q  2q  −q ] [x₂] = Ý [x₂]     with  q = mO/mC  and  Ý = é²/(k/mO) .   (26.61)
    [  0  −1   1 ] [x₃]     [x₃]
Figure 26.2: The CO₂ molecule: two oxygen atoms of mass mO bonded to a central carbon
atom of mass mC, with each double bond modelled as a spring of spring constant k.
    0 = det [ 1 − Ý    −1      0   ]
            [  −q    2q − Ý   −q   ]                            (26.62a)
            [   0      −1    1 − Ý ]
      = (1 − Ý)[(1 − Ý)(2q − Ý) − q] − (−1)[−q(1 − Ý)]          (26.62b)
      = Ý(1 − Ý)(Ý − 2q − 1) .                                  (26.62c)
• Case Ý = 0: Solve
    [  1  −1   0 ] [x₁]   [0]          x₁ − x₂ = 0
    [ −q  2q  −q ] [x₂] = [0]   =⇒    −qx₁ + 2qx₂ − qx₃ = 0     (26.63)
    [  0  −1   1 ] [x₃]   [0]         −x₂ + x₃ = 0
and so we have x₁ = x₂ = x₃. With suitable normalization the eigenvector is (1/√3)(1, 1, 1)ᵀ.
This is a zero-frequency mode that corresponds to rigid translation of the whole
molecule along its axis as seen in the top panel of Fig. 26.3.
• Case Ý = 1: Solve
    [  1  −1   0 ] [x₁]   [x₁]          x₁ − x₂ = x₁
    [ −q  2q  −q ] [x₂] = [x₂]   =⇒    −qx₁ + 2qx₂ − qx₃ = x₂   (26.64)
    [  0  −1   1 ] [x₃]   [x₃]         −x₂ + x₃ = x₃
and so we have x₂ = 0 and x₁ = −x₃. The normalized eigenvector is (1/√2)(1, 0, −1)ᵀ.
This is a symmetric mode of oscillation with frequency é_s = √(k/mO) in which the
carbon atom remains stationary and the two oxygen atoms vibrate out-of-phase with
each other along the axis as seen in the middle panel of Fig. 26.3.
• Case Ý = 2q + 1: Solve
    [  1  −1   0 ] [x₁]            [x₁]          x₁ − x₂ = (2q + 1)x₁
    [ −q  2q  −q ] [x₂] = (2q + 1) [x₂]   =⇒    −qx₁ + 2qx₂ − qx₃ = (2q + 1)x₂   (26.65)
    [  0  −1   1 ] [x₃]            [x₃]         −x₂ + x₃ = (2q + 1)x₃
and so we have x₁ = x₃ and x₂ = −2qx₁. The eigenvector is [1/√(4q² + 2)](1, −2q, 1)ᵀ.
This is an asymmetric mode of oscillation with frequency é_a = √(2k/mC + k/mO) in
which the two oxygen atoms move in phase while the carbon atom moves out of
phase along the axis in such a way as to preserve the center of mass as seen in the
bottom panel of Fig. 26.3.
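The same eigenvalue problem can be solved numerically; this sketch (assuming NumPy; the atomic masses are approximate values in amu) recovers Ý = 0, 1, and 2q + 1 together with the corresponding mode shapes.

# Numerical version of Ex. 26.3: eigenvalues of the matrix in Eq. (26.61).
import numpy as np

m_O, m_C = 16.0, 12.0            # approximate atomic masses in amu
q = m_O / m_C
M = np.array([[1.0, -1.0, 0.0],
              [-q, 2.0 * q, -q],
              [0.0, -1.0, 1.0]])

vals, vecs = np.linalg.eig(M)
order = np.argsort(vals)
for lam, vec in zip(vals[order], vecs[:, order].T):
    print(f"lambda = {lam:8.4f}   eigenvector = {np.round(vec / np.max(np.abs(vec)), 3)}")
# Expect lambda = 0, 1, and 2q+1 = 11/3; the frequencies are omega^2 = lambda * k / m_O.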
Figure 26.3: Vibration modes of the CO2 molecule along its axis. Top: a zero-frequency
rigid translation along the axis. Middle: symmetric stretching in which the oxygen
atoms move out of phase and the carbon atom remains at rest. Bottom: antisymmetric
stretching in which the oxygen atoms move in phase while the carbon atom moves out
of phase preserving the center of mass.
27 Vector Calculus
Derivatives
Consider a scalar function of multiple variables, ï(x, y, z). The partial
derivative of this function with respect to x at (x, y, z) = (a, b, c) is the
derivative of the related univariate function f (x) constructed by holding the
other variables at fixed values, y = b and z = c, f (x) = ï(x, b, c):
    è(x, y, z) = ∂ï(x, y, z)/∂x                                 (27.2a)
then
    ï(x, y, z) = ∫ è(x, y, z) dx + ç(y, z) .                    (27.2b)
Differentiating a function with respect to one variable and then with respect to
another results in a mixed partial derivative. If all mixed partial derivatives are
continuous at a point then the order with which the procedure is done does not
matter:
    ∂²ï/∂x∂y = ∂²ï/∂y∂x .                                       (27.3)
    ∂ï(x, y)/∂x = e^−(x²+y²)/2 − x² e^−(x²+y²)/2                (27.6a)
and
    ∂ï(x, y)/∂y = −xy e^−(x²+y²)/2                              (27.6b)
so we have
    ∇ï(x, y) = (1 − x²) e^−(x²+y²)/2 e_x − xy e^−(x²+y²)/2 e_y .   (27.7)
Figure 27.1 shows a contour plot of ï(x, y) along with the vector field ∇ ï(x, y). Notice
that the vectors are normal to the contours.
Figure 27.1: The function ï(x, y) = x e^−(x²+y²)/2 and its gradient ∇ï(x, y). The color
density plot with contours shows ï(x, y) while the arrows (length and direction)
represent the vector field ∇ï(x, y).
where A x (x, y, z), A y (x, y, z), and A z (x, y, z) are scalar functions that give the x-,
y-, and z-components of a vector at each point in space.
The divergence of a vector field is given by
    ∇ · A = Σ_i=1^3 ∂Aᵢ/∂xᵢ = ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z .     (27.9)
The curl of a vector field is given by
    ∇ × A = Σᵢ Σⱼ Σₖ ×ᵢⱼₖ eᵢ ∂Aₖ/∂xⱼ = det [ e_x    e_y    e_z  ]
                                          [ ∂/∂x   ∂/∂y   ∂/∂z ]
                                          [ A_x    A_y    A_z  ]
          = (∂A_z/∂y − ∂A_y/∂z) e_x + (∂A_x/∂z − ∂A_z/∂x) e_y + (∂A_y/∂x − ∂A_x/∂y) e_z .   (27.10)
Figure 27.2: Vector fields A = x e_x + y e_y (left) and A = −y e_x + x e_y (right). The former
has vanishing curl but non-vanishing divergence while the latter has vanishing divergence
but non-vanishing curl.
∇(è + ï) = ∇ è + ∇ ï (27.11a)
∇(èï) = ï ∇ è + è ∇ ï (27.11b)
∇(A · B) = (A · ∇)B + (B · ∇)A + A × (∇ × B) + B × (∇ × A) . (27.11c)
• Divergence.
∇ · (A + B) = ∇ · A + ∇ · B (27.12a)
∇ · (èA) = è ∇ · A + (∇ è) · A (27.12b)
∇ · (A × B) = (∇ × A) · B − (∇ × B) · A . (27.12c)
• Curl.
∇ × (A + B) = ∇ × A + ∇ × B (27.13a)
∇ × (èA) = è ∇ × A + (∇ è) × A (27.13b)
∇ × (A × B) = A(∇ · B) − B(∇ · A) + (B · ∇)A − (A · ∇)B (27.13c)
• Second derivatives.
∇ · (∇ × A) = 0 (27.14a)
∇ × (∇ è) = 0 (27.14b)
∇ · (∇ è) = ∇2 è (27.14c)
2
∇ × (∇ × A) = ∇(∇ · A) − ∇ A (27.14d)
    ∇²è = ∂²è/∂x² + ∂²è/∂y² + ∂²è/∂z²                           (27.15a)
and
Integrals
y
A curve C is a set of points
(x(b), y(b))
n o
C = x(t) : a ≤ t ≤ b (27.17)
t=b
(see Fig. 27.3). C
The directed length element along this curve is
t=a
ds = x0 (t) d t . (27.18) (x(a), y(a))
x
A line integral of a scalar field ï(x) is Figure 27.3: Curve in
2-Dimensions.
Z Z b
ï(x) d s = ï(x(t)) kx0 (t)k d t (27.19)
C a
y
A double integral of a scalar field
ï(x, y) over a domain D bounded by two functions y = Ó(x)
and y = Ô(x) with a ≤ x ≤ b as shown in Fig. 27.4 is given by (x)
Z x=b Z y=Ô(x)
ï(x) dA = ï(x, y) d y d x . (27.22) D
D x=a y=Ó(x)
where â(q) is a density that we now determine. Consider a volume element that
is a parallelepiped formed by three vectors a, b, c, with displacements d q1 ,
d q2 , and d q3 along the q1 -, q2 -, and q3 -axes respectively:
!
x y z
a = d q1 e + e + e (27.25a)
q1 x q1 y q1 z
!
x y z
b = d q2 e + e + e (27.25b)
q2 x q2 y q2 z
and
!
x y z
c = d q3 e + e + e . (27.25c)
q3 x q3 y q3 z
"a a y az
#
x
The volume of this parallelepiped is det b x b y bz :
cx c y cz
x d q y z
dq d q1
q1 1 q1 1 q1
x y z
d q2 = det(J) d q1 d q2 d q3 .
d V = det d q2 dq (27.26)
q2 q2 2 q2
x y z
dq dq d q3
q3 3 q3 3 q3
where we define the Jacobian matrix
x y z
q1 q1 q1
x y z
J = (27.27)
q2 q2 q2
x y z
q3 q3 q3
dS = a × b =
x x
× ds dt . (27.30) x
s t
Figure 27.5: Surface
The surface integral of a scalar field ï(x) is
x x
ï(x) d S = ï(x(s, t)) × ds dt . (27.31)
S S s t
We have
x
= cos Ú cos æex + cos Ú sin æe y − sin Ú ez (27.37a)
Ú
and
x
= − sin Ú sin æex + sin Ú cos æe y (27.37b)
æ
so
x x
× = sin2 Ú cos æex + sin2 Ú sin æe y + sin Ú cos Ú ez (27.37c)
Ú æ
and
x x
q
× = sin4 Ú cos2 æ + sin4 Ú sin2 æ + sin2 Ú cos2 Ú = sin Ú . (27.37d)
Ú æ
Green’s theorem
y
Consider two scalar fields in two dimensions, ï(x, y)
and è(x, y) defined over a domain D with boundary given by
the closed curve C. We write the boundary as C = D . Then (x)
!
è ï
(ï d x + è d y) = − dx dy (27.39) D C2
D D x y
C1
where the line integral over the boundary (x)
is taken in a counter-clockwise sense. x
a b
Proof. The domain D is given by
Figure 27.6: Green’s Theorem
n o
D = (x, y) : a ≤ x ≤ b , Ó(x) ≤ y ≤ Ô(x) (27.40)
and let the boundary of this domain be divided into two curves,
D = C = C1 + C2 where C1 is given by Ó(x) and C2 is given by Ô(x) and note
that the second curve is traversed from x = b to x = a as shown in Fig. 27.6. We
have
Z Z
ïdx = ï(x, y) d x + ï(x, y) d x (27.41a)
C C1 C2
Z b Z a
= ï(x, Ó(x)) d x + ï(x, Ô(x)) d x (27.41b)
a b
Z b Z b
= ï(x, Ó(x)) d x − ï(x, Ô(x)) d x . (27.41c)
a a
Also,
Z b Z Ô(x)
ï ï(x, y)
dx dy = dx dy (27.42a)
D y x=a y=Ó(x) y
Zb
= [ï(x, Ô(x)) − ï(x, Ó(x))] d x . (27.42b)
a
Stokes’s theorem
Green’s theorem is a special case
z
of the more general Stokes’s theorem:
if F(x) is a vector field and S is a surface
with boundary S then
F · ds = (∇ × F) · dS . (27.45)
S S
S y
D
Proof. Suppose the surface S is
given by z = z(x, y) with (x, y) in the domain
x
D as shown in Fig. 27.7. Then we have Figure 27.7: Stokes’s Theorem
Z b
dx dy dz
F · ds = Fx + Fy + Fz dt
S a dt dt dt
(27.46)
but
d z z d x z d y
= + (27.47)
d t x d t y d t
so
Z b" ! ! #
z d x z d y
F · ds = Fx + Fz + Fy + Fz (27.48a)
S a x d t y d t
Z " ! ! #
z z
= Fx + Fz d x + Fy + Fz dy (27.48b)
D x y
| {z } | {z }
ï(x,y) è(x,y)
where we define
z(x, y)
ï(x, y) = Fx (x, y, z(x, y)) + Fz (x, y, z(x, y)) (27.49a)
x
and
z(x, y)
è(x, y) = Fy (x, y, z(x, y)) + Fz (x, y, z(x, y)) (27.49b)
y
and now employ Green’s theorem
!
è ï
F · ds = − dx dy . (27.50)
S D x y
Now
è ï Fy Fy z Fz z Fz z z
2
z
− = + + + + Fz
x y x z x x y z x y xy
Fx Fx z Fz z Fz z z
2
z
− + + + + Fz (27.51)
y z y y x z y x yx
so we have
F · ds
S
" ! ! !#
Fz Fy z Fx Fz z Fy Fx
= − − − − + − d x d y . (27.52)
D y z x z y y x y
| {z } | {z } | {z }
Ax Ay Az
From this result it is easy to show that line integral of F depends only on the endpoints.
We say that such a field F is a conservative vector field.
It can also be shown that if F is a conservative field then it is the gradient of some
function. Suppose C is a curve from (0, 0, 0) to (x, y, z) and define
Z
−ï(x, y, z) = F · ds . (27.55)
C
Let C be three straight lines connecting the points (0, 0, 0), (x, 0, 0), (x, y, 0), and (x, y, z):
Zx Z y Zz
−ï(x, y, z) = Fx (t, 0, 0) d t + Fy (x, t, 0) d t + Fz (x, y, t) d t . (27.56)
0 0 0
Gauss’s theorem z
Consider a vector field F(x) defined (x, y)
in a volume V which has a boundary
that is a closed surface S = V. Then S2
m
F · dS = ∇· FdV . (27.57)
V
S1
V
Here we assume that directed
surface elements are directed outwards (x, y)
from the volume. This is known as Gauss’s y
theorem or the divergence theorem.
x D
Proof. Let F = ï ex + ç e y + è ez .
Figure 27.8: Gauss’s Theorem
Then Gauss’s theorem becomes
m m m
ï ex · dS + ç e y · dS + è ez · dS
V V V
ï ç è
= dV + dV + d V . (27.58)
V x V y V z
Thus we have
m
è
è ez · dS = dV . (27.62)
V V z
A similar argument for the x- and y-components completes the proof.
Ex. 27.4. Gauss’s law can be expressed as follows: if V is some volume and x0 is some
vector then
m
x − x0 4á if x0 ∈ V
3
· dS = (27.63)
V kx − x 0 k 0
otherwise.
since the normal to the V× is (x − x0 )/× and the area of the surface is 4á×2 .
Taking the limit × → 0 we obtain the identity
!
x − x0
∇· = 4áÖ3 (x − x0 ) (27.67)
kx − x0 k3
where the three-dimensional Dirac delta function is
Also, since
x − x0 1
3
= −∇ (27.69)
kx − x0 k kx − x0 k
Therefore, for the case x0 ∈ V, let V 0 = V − V× be the volume with an infinitesimal ball
about x0 removed and we have
m ! !
x − x0 x − x0 x − x0
3
· dS = ∇ · d V + ∇ · d V (27.71a)
V kx − x0 k V0 kx − x0 k3 V× kx − x0 k3
| {z } | {z }
0 since x0 <V 0 4á
= 4á . (27.71b)
where ×0 is the permittivity of free space. Define the electric field E = F/q so
q0 x − x0
E(x) = . (27.73)
4á×0 kx − x0 k3
In addition we have
q
∇ · E(x) = 0 Ö3 (x − x0 ) . (27.75)
×0
A continuous charge distribution â(x) can be thought of as a sum over point charges in
the neighborhood of x. Since the Coulomb forces combine as a linear vector sum, we
can write
N
1 ¼ x − xi 1 x − x0
E(x) = qi = â(x0 ) dV0 (27.76)
4á×0 kx − xi k 3 4á×0 kx − x0 k3
n=1
and
x − x0
!
1
∇ · E(x) = â(x0 ) ∇ · dV0 (27.77a)
4á×0 kx − x0 k3
1
= â(x0 )4áÖ3 (x − x0 )d V 0 (27.77b)
4á×0
â(x)
= (27.77c)
×0
which is also known as Gauss’s law.
Now Gauss’s theorem results in the following form of Gauss’s law:
m
E(x) · dS = ∇ · E(x) d V = â(x) d V = Q (27.78)
V V V
A × dS = − ∇× AdV (27.80b)
V
I V
ï ds = − ∇ ï × dS . (27.80c)
S S
Helmholtz’s theorem
Any vector F(x) defined in a volume V can be decomposed as
F(x) = − ∇ ï(x) + ∇ × A(x) (27.82a)
where
m
1 ∇’ · F(x0 ) 0 1 F(x0 )
ï(x) = d V − · dS0 (27.82b)
4á kx − x0 k 4á V kx − x0 k
V m
1 ∇’ × F(x0 ) 0 1 F(x0 )
A(x) = 0
d V + × dS0 (27.82c)
4á V kx − x k 4á V kx − x0 k
and ∇’ is the gradient operator acting on x0 . If V is all space and F vanishes
faster than 1/kxk as kxk → ∞ then the surface terms vanish.
Since ∇ × ∇ ï = 0 and ∇ · (∇ × A) = 0, Helmholtz’s theorem implies any vector
field can be decomposed into a longitudinal field FL and a transverse field FT
F(x) = FL (x) + FT (x) where ∇ × FL (x) = 0 and ∇ · FT (x) = 0 . (27.83)
Uniqueness.
If both ∇ · F and ∇ × F are specified in V as well as the normal component of F
on V, then F is uniquely determined. This is shown as follows: suppose G is a
different vector having the same divergence and curl in V and normal
component on V. Then
∇ · (F − G) = 0 and ∇ × (F − G) = 0 . (27.86)
The second implies we can write F − G = − ∇ ï and then the first implies ∇2 ï = 0.
Now use Green’s first identity, Eq. (27.79a), with è = ï:
m 0
∇2
>
ï ∇ ï · dS = (ï ï + ∇ ï · ∇ ï) d V . (27.87)
V V
â(x)
∇ · E(x) = and ∇ × E(x) = 0 (27.89)
×0
where â(x) is a static electric charge density, j(x) is a steady electric current density, and
Þ0 is the permeability of free space.
By Helmholtz’s theorem, the unique solutions to these equations are
1 â(x0 ) 0= 1 0) x − x
0
E(x) = − ∇ d V â(x dV0 (27.91)
4á×0 kx − x0 k 4á×0 kx − x0 k3
and
Þ0 j(x0 ) Þ0 x − x0
B(x) = ∇ × dV0 = j(x0 ) × dV0 . (27.92)
4á kx − x0 k 4á kx − x0 k3
These are the Coulomb law and the Biot-Savart law respectively.
28 Curvilinear Coordinates
A = A 1 e1 + A 2 e2 + A 3 e3 . (28.1)
x x x
dx = d q1 + d q2 + d q3 (28.2a)
q1 q2 q3
y y y
dy = d q1 + d q2 + d q3 (28.2b)
q1 q2 q3
z z z
dz = d q1 + d q2 + d q3 . (28.2c)
q1 q2 q3
where
x x y y z z
gi j = + + (28.4c)
q i q j q i q j q i q j
Integrals
The line element is
ds = h1 d q1 e1 + h2 d q2 e2 + h3 d q3 e3 (28.8)
dA = h2 h3 d q2 d q3 e1 + h3 h1 d q3 d q1 e2 + h1 h2 d q1 d q2 e3 (28.10)
and
d V = h1 h2 h3 d q1 d q2 d q3 (28.11)
Derivatives
The gradient of a scalar field is
ï ï ï
∇ï = e1 + e2 + e
x1 x2 x3 3
q1 ï q ï q ï
= e + 2 e + 3 e (28.14)
x1 q1 1 x2 q2 2 x3 q3 3
and so
1 ï 1 ï 1 ï
∇ï = e + e + e (28.15)
h1 q1 1 h2 q2 2 h3 q3 3
∇ · A h1 h2 h3 d q1 d q2 d q3 (28.16a)
= (A1 h2 h3 ) (q +h d q ,q ,q ) − (A1 h2 h3 ) (q ,q ,q ) d q 2 d q3
1 1 1 2 3 1 2 3
+ (A2 h3 h1 ) (q ,q +h d q ,q ) − (A2 h3 h1 ) (q ,q ,q ) d q3 d q1
1 2 2 2 3 1 2 3
+ (A3 h1 h2 ) (q ,q ,q +h d q ) − (A3 h1 h2 ) (q ,q ,q ) d q1 d q2 .
1 2 3 3 3 1 2 3
(28.16b)
(A1 h2 h3 )
≈ h1 d q1 d q 2 d q3
h1 q1
(A2 h3 h1 )
+ h2 d q2 d q3 d q1
h2 q2
(A3 h1 h2 )
+ h3 d q3 d q1 d q2 (28.16c)
h3 q3
The right hand side is the surface integral over all six faces.
Divide both sides by the volume element d V = h1 h2 h3 d q1 d q2 d q3 :
" #
1
∇· A = (A h h ) + (A h h ) + (A h h ) . (28.17)
h1 h2 h3 q1 1 2 3 q2 2 3 1 q3 3 1 2
(∇ × A) · e3 h1 h2 d q1 d q2
= (A1 h1 ) (q ,q ,q ) − (A1 h1 ) (q ,q +h d q ,q ,q ) d q 1
1 2 3 1 2 2 2 2 3
+ (A2 h2 ) (q +h d q ,q ,q ) − (A2 h2 ) (q ,q ,q ,q ) d q2 (28.19a)
1 1 1 2 3 1 2 2 3
(A1 h1 ) (A2 h2 )
≈− h dq dq + h dq dq (28.19b)
h2 q2 2 2 1 h1 q1 1 1 2
and so
" #
1 (A2 h2 ) (A1 h1 )
(∇ × A) · e3 = − . (28.19c)
h1 h2 q1 q2
e1 h1 e2 h2 e3 h3
1
= det (28.20b)
q1 q2 q3
h1 h2 h3
A1 h1 A2 h2 A3 h3
" #
1 (A3 h3 ) (A2 h2 )
= − e1
h2 h3 q2 q3
" #
1 (A1 h1 ) (A3 h3 )
+ − e2
h3 h1 q3 q1
" #
1 (A2 h2 ) (A1 h1 )
+ − e3 (28.20c)
h1 h2 q1 q2
∇2 A = ∇(∇ · A) − ∇ × (∇ × A) . (28.21)
Cylindrical Coordinates
The cylindrical coordinates (â, æ, z) are defined by
x = â cos æ , y = â sin æ , and z=z (28.22)
or
    â = √(x² + y²) ,   æ = arctan(y/x) ,   and   z = z          (28.23)
where 0 ≤ â < ∞, 0 ≤ æ ≤ 2á, and −∞ < z < ∞.
The scale factors are
hâ = 1 , hæ = â , and hz = 1 (28.24)
and the basis vectors are related to the Cartesian basis by
eâ = cos æex + sin æe y and eæ = − sin æex + cos æe y (28.25)
or
ex = cos æeâ − sin æeæ and e y = sin æeâ + cos æeæ . (28.26)
or
p y z
r= x2 + y2 + z2 , æ = arctan , and z = arccos p
x x2 + y2 + z2
(28.36)
or
è 1 è 1 è
∇è = er + eÚ + e (28.43)
r r Ú r sin Ú æ æ
1 2 1 1 Aæ
∇· A = 2
(r A r ) + (sin ÚAÚ ) + (28.44)
r r r sin Ú Ú r sin Ú æ
!
1 AÚ
∇× A = (sin ÚAæ) − er
r sin Ú Ú æ
!
1 1 A r
+ − (rAæ) eÚ
r sin Ú æ r
!
1 A r
+ (rAÚ ) − eæ (28.45)
r r Ú
2 è
! !
2 1 2 è 1 è 1
∇ è= 2 r + 2 sin Ú + 2 2 (28.46)
r r r r sin Ú Ú Ú r sin Ú æ2
2 Aæ
" #
2 2
∇2 A = ∇2 A r − 2 A r − 2 (sin ÚAÚ ) − 2 er
r r sin Ú Ú r sin Ú æ
2 cos Ú Aæ
" #
2 1 2 A r
+ ∇ AÚ − 2 2 AÚ + 2 − eÚ
r sin Ú r Ú r 2 sin2 Ú æ
" #
2 1 2 A r 2 cos Ú AÚ
+ ∇ Aæ − 2 2 Aæ + 2 2 + eæ .
r sin Ú r sin Ú æ r 2 sin2 Ú æ
(28.47)
Problems
Problem 34.
Find the eigenvalues and normalized eigenvectors of the matrix
    [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]
Keep 3 significant figures in your numerical answer.
Problem 35.
a) Let a and b be any two vectors in a linear vector space and let c = a + Ýb
where Ý is a scalar. By requiring c · c ≥ 0 for all Ý, derive the
Cauchy-Schwarz inequality
(a · a)(b · b) ≥ |a · b|2 .
Problem 36.
In 2-dimensions, show that if Ý is the charge at the origin, then Gauss's law gives
    ï = −(Ý/2á×₀) ln â   and   E = −∇ï = (Ý/2á×₀) (1/â) e_â
Module VIII
29 Classification 231
Problems 268
Motivation
Fundamental physical laws, from electrodynamics to quantum mechanics, are
formulated as partial differential equations. Here we examine methods to
solve these equations.
In this module we will solve several types of partial differential equations in a
series of examples. We will focus on second-order partial differential equations
involving the Laplacian operator ∇2 as these types of equations are the ones
most commonly encountered in basic physics problems.
29 Classification
    ∂²è/∂x² = (1/c²) ∂²è/∂t²   with   c² = (tension of string)/(linear density of string) .   (29.1)
This is a hyperbolic equation.
• Laplace's equation
    ∇²è = ∂²è/∂x² + ∂²è/∂y² + ∂²è/∂z² = 0 .                     (29.2)
This is an elliptic equation.
• Wave equation
    ∇²è − (1/c²) ∂²è/∂t² = 0 .                                  (29.3)
This is another hyperbolic equation.
• Diffusion equation
    ∇²è − (1/Ó) ∂è/∂t = 0                                       (29.4)
where Ó is the diffusion constant, e.g., if è is temperature then
    Ó = (thermal conductivity)/[(specific heat capacity) · (density)] .   (29.5)
This is a parabolic equation.
• Schrödinger equation
    −(~²/2m) ∇²è + V(x)è = i~ ∂è/∂t                             (29.6)
where è(x) is the wavefunction of a particle, m is the mass of the particle,
V(x) is the potential the particle moves in, and ~ is the reduced Planck
constant. This is again a parabolic equation.
If è ∝ e −i E t/~ where E is the energy, the time-independent Schrödinger
equation is
2m
∇2 è + [E − V(x)]è = 0 . (29.7)
~2
This is an elliptic equation.
All of these are linear, second order, and homogeneous. The last implies
that if è is a solution, any multiple of è is also a solution.
If a “force” or “source” is present, the equation is inhomogeneous, e.g.,
    ∂²è/∂x² − (1/c²) ∂²è/∂t² = −(1/tension) f(x, t)             (29.8)
where f(x, t) is the force per unit length acting on the string.
An equation may also be inhomogeneous due to a boundary condition, e.g., a
vibrating string in which the end x = 0 is prescribed to move in a particular way.
The general solution is made up of any particular solution plus the general
solution of the corresponding homogeneous problem.
Boundary Conditions
There are three commonly used types of boundary conditions:
• Dirichlet boundary conditions are ones in which è is specified at each
point on the boundary.
• Neumann boundary conditions are ones in which the normal derivative
n · ∇ è is specified at each point on the boundary where n is the unit normal
vector to the boundary surface.
• Cauchy boundary conditions are ones in which both è and n · ∇ è are
specified at each point on the boundary.
The goal is to choose appropriate boundary conditions so that a unique
solution is obtained.
Generally we use Dirichlet or Neumann boundary conditions for elliptic or
parabolic systems, and Cauchy boundary conditions for hyperbolic systems.
2 è 1 2 è
− = 0. (29.10)
x 2 c 2 t 2
Change variables to
u = x − ct and v = x + ct . (29.11)
2 è
= 0. (29.12d)
uv
This is the hyperbolic equation in its normal form.
The solution is immediate:
where f and g are arbitrary functions, i.e., a superposition of a left-going wave and a
right-going wave.
è
Suppose we specify the Cauchy boundary conditions è(t = 0, x) and (t = 0, x). Then
t
f (x) + g(x) = è(t = 0, x) (29.14a)
Z
1 è 1 è
−f 0 (x) + g 0 (x) = (t = 0, x) =⇒ −f (x) + g(x) = (t = 0, x) d x .
c t c t
(29.14b)
Therefore
Z
1 1 è
f (x) =è(t = 0, x) − (t = 0, x) d x (29.15a)
2 2c t
Z
1 1 è
g(x) = è(t = 0, x) + (t = 0, x) d x . (29.15b)
2 2c t
Note: the arbitrary constant of integration is irrelevant as it cancels in the sum è = f + g.
30 Separation of Variables
Consider the wave equation
    ∇²è − (1/c²) ∂²è/∂t² = 0 .                                  (30.1)
Look for a solution in which the t and x dependence factors:
    è(t, x) = T(t) X(x)   =⇒   (∇²X)/X = (1/c²) (1/T) d²T/dt² . (30.2)
In order for this to hold for all t and all x, each side must be constant.
Let −k² be the separation constant. Then
    (∇²X)/X = −k²   and   (1/c²) (1/T) d²T/dt² = −k² .          (30.3)
Note that the second is an ordinary differential equation which we now solve:
    =⇒ d²T/dt² + é²T = 0   with   é = ck                        (30.4a)
    =⇒ T(t) = e^±iét   or   T(t) = { sin ét
                                   { cos ét                     (30.4b)
The first equation is the Helmholtz equation
    ∇²X + k²X = 0 .                                             (30.5)
2 X
! !
1 2 X 1 X 1
2
r + 2
sin Ú + + k2 X = 0 . (30.6)
r r r r sin Ú Ú Ú r sin Ú æ2
2 2
d 2Ð 2
!
1 1 d 2 dR 1 1 d dÊ 1 1
2
r + 2
sin Ú + +k = 0 . (30.7)
R r dr dr Ê r sin Ú dÚ dÚ Ð r sin Ú dæ2
2 2
| {z }
only term that
depends on æ
Multiply by r 2 sin2 Ú:
sin2 Ú d 2 d R 1 d 2Ð 2 2 2
!
sin Ú d dÊ
r + sin Ú + +k r sin Ú = 0 . (30.8)
R dr dr Ê dÚ dÚ Ð dæ2
| {z }
depends only on æ
=⇒ separates!
1 d 2Ð
= −m2 =⇒ Ð(æ) = e ±i mæ (30.9)
Ð dæ2
and
sin2 Ú d 2 d R
!
sin Ú d dÊ
r + sin Ú − m2 + k 2 r 2 sin2 Ú = 0 . (30.10)
R dr dr Ê dÚ dÚ
Divide by sin2 Ú:
m2
" ! #
1 d 2 dR 1 1 d dÊ
r 2 2
+k r + sin Ú − = 0. (30.11)
R dr dr Ê sin Ú dÚ dÚ sin2 Ú
| {z } | {z }
depends only on r depends only on Ú
This equation again separates. Let the separation constant be ( + 1). We then arrive
at an angular equation
m2
!
1 1 d dÊ
sin Ú − = −( + 1) (30.12)
Ê sin Ú dÚ dÚ sin2 Ú
and a radial equation
" #
1 d 2 dR 2 − ( + 1) R = 0 .
r + k (30.13)
r2 d r dr r2
d 2Ê m2
" #
dÊ
(1 − x 2 ) 2 − 2x . + ( + 1) − Ê = 0. (30.16)
dx dx 1 − x2
This is the associated Legendre equation so the solutions are
m
P (x)
Ê(x) = (30.17)
Q m (x) .
Note that when we choose the associated Legendre functions of the first kind, Pℓᵐ(x),
which are the ones that are defined in −1 ≤ x ≤ 1 or 0 ≤ Ú ≤ á, we have
Ê(Ú)Ð(æ) ∝ Yℓm(Ú, æ), so the ℓ and m separation constants separate the solution into
terms in which the angular part is a spherical harmonic.
Now solve the radial equation
" #
1 d 2 dR 2 − ( + 1) = 0
r + k (30.19a)
r2 d r dr r2
d2R dR
=⇒ r2 + 2r + [k 2 r 2 − ( + 1)]R = 0 . (30.19b)
d r2 dr
Solutions to this equation are the spherical Bessel functions
j (kr)
R(r) = (30.20)
y (kr) .
d 2 R 2 d R ( + 1)
+ − R =0 (30.21)
d r2 r d r r2
and the solutions to this equation are
r
R(r) = (30.22)
r −(+1) .
1 2 è
∇2 è + 2 2 = 0 :
c t
( i kct ) ( i mæ ) ( m ) ( )
e e P (cos Ú) j (kr)
è(t, r, Ú, æ) = · · · (30.23)
e −i kct e −i mæ Qm (cos Ú) y (kr)
∇2 è = 0 :
) ( m
e i mæ r
( ) ( )
P (cos Ú)
è(r, Ú, æ) = · · . (30.24)
e −i mæ Qm (cos Ú) r −(+1)
Any linear combination is a solution, but boundary conditions limit allowed types of
solutions.
    ∇²u = (1/c²) ∂²u/∂t²                                        (30.25)
in polar coordinates.
The normal modes are periodic solutions u(t, x) = u(x) e^iét
    =⇒ ∇²u + k²u = 0                                            (30.26)
where k = é/c is the wave number.
In 2-dimensional polar coordinates, this is
    (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂æ² + k²u = 0 .           (30.27)
Separating variables, u(r, æ) = R(r) Ð(æ), gives
    d²Ð/dæ² + m²Ð = 0   =⇒   Ð(æ) = e^±imæ                      (30.28a)
    d²R/dr² + (1/r) dR/dr + (k² − m²/r²) R = 0   =⇒   R(r) = { Jₘ(kr)
                                                             { Yₘ(kr)   (30.28b)
(the second is Bessel's equation) and so our solutions are of the form
    u(r, æ) = { e^imæ  } · { Jₘ(kr) }
              { e^−imæ }   { Yₘ(kr) } .                         (30.29)
Boundary conditions: u must be regular at r = 0 (which excludes Yₘ) and u(a) = 0 at the
rim, which requires Jₘ(ka) = 0. The lowest mode is
    • k₀₁ = 2.40/a ,   é₀₁ = 2.40 c/a ,   u ∝ J₀(2.40 r/a)
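Higher modes follow from the other zeros of Jₘ; the sketch below (assuming SciPy; a and c are arbitrary example values) lists the first few drum frequencies éₘₙ = c jₘ,ₙ/a.

# Normal-mode frequencies of the circular drum: u = 0 at r = a requires J_m(k a) = 0.
import numpy as np
from scipy.special import jn_zeros

a, c = 1.0, 1.0                  # drum radius and wave speed (example values)
for m in range(3):
    zeros = jn_zeros(m, 3)       # first three zeros of J_m
    for n, j_mn in enumerate(zeros, start=1):
        print(f"m={m} n={n}:  k = {j_mn/a:.4f}/a   omega = {c*j_mn/a:.4f} c/a")
# The lowest mode is m=0, n=1 with j_{0,1} ~ 2.405, matching k_01 = 2.40/a.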
2 1 1 2 2
∇2 = + + + . (30.32)
â2 â â â2 æ2 z 2
• Laplace’s equation
∇2 è = 0 :
( ) ( Óz ) ( i mæ )
Jm (Óâ) e e
è(â, æ, z) = · · . (30.33)
Ym (Óâ) e −Óz e −i mæ
• Helmholtz equation
∇2 è + k 2 è = 0 :
√
2 − Ó2 â e iÓz
i mæ
J m k
e
è(â, æ, z) = · · . (30.34)
e −iÓz
√
Ym k 2 − Ó2 â
e −i mæ
    ∇²T = (1/Ó) ∂T/∂t   with   Ó = k/(câ)                       (30.35)
where k is the thermal conductivity, c is the specific heat capacity, and â is the density
of the cube.
Let T ∝ e −Ýt . Then
Ý
∇2 T + T =0 (30.36a)
Ó
2 T 2 T 2 T Ý
=⇒ + + =− T. (30.36b)
x 2 y 2 z 2 Ó
T =0 for x = 0, L y = 0, L z = 0, L . (30.38)
We find
! ! !
áx máy náz
Tc ∝ sin sin sin (30.39a)
L L L
with
!2 !2 !2
á má ná Ý
+ + = . (30.39b)
L L L Ó
Therefore, T = Tp + Tc :
∞ ¼
∞ ¼
∞ ! ! !
¼ áx máy náz −Ý t
T = T0 + cmn sin sin sin e mn (30.40a)
L L L
=1 m=1 n=1
where
á2
Ýmn = Ó 2 (2 + m2 + n 2 ) . (30.40b)
L
30. Separation of Variables 243
Multiply by sin(ℓ′áx/L) sin(m′áy/L) sin(n′áz/L) and integrate ∫₀ᴸ dx ∫₀ᴸ dy ∫₀ᴸ dz
(i.e., over the whole cube). Then we obtain
    cℓmn = { −64/(á³ ℓmn)    ℓ, m, n all odd
           { 0               otherwise.                          (30.42)
We have finally
    T(t, x, y, z) = T₀ − (64/á³) T₀ Σ_ℓ Σ_m Σ_n (1/ℓmn) sin(ℓáx/L) sin(máy/L) sin(náz/L)
                                 (ℓ, m, n all odd)
                    × exp[ −(ℓ² + m² + n²) á² Ót / L² ] .        (30.43)
This series solution works well at late times when the exponential kills all but the lowest
modes, but at early times we will need to keep a large number of terms in the sums to
get an accurate result.
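The series (30.43) is easy to evaluate numerically; this sketch (assuming NumPy; T₀, L, Ó, and the cutoff are illustrative choices) computes the temperature at the centre of the cube at a few times.

# Evaluate the series (30.43) at the centre of the cube, T(t, L/2, L/2, L/2).
import numpy as np

T0, L, alpha = 1.0, 1.0, 1.0
nmax = 41                                    # keep odd indices 1, 3, ..., nmax

def T_centre(t, nmax=nmax):
    idx = np.arange(1, nmax + 1, 2)          # l, m, n all odd
    l, m, n = np.meshgrid(idx, idx, idx, indexing="ij")
    s = (np.sin(l * np.pi / 2) * np.sin(m * np.pi / 2) * np.sin(n * np.pi / 2)
         / (l * m * n)
         * np.exp(-(l**2 + m**2 + n**2) * np.pi**2 * alpha * t / L**2))
    return T0 - 64.0 / np.pi**3 * T0 * s.sum()

for t in (1e-3, 1e-2, 1e-1):
    print(f"t = {t:6.3g}   T(centre) = {T_centre(t):.6f}")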
2 T 1 T k
− = 0, Ó= (30.44)
x 2 Ó t câ
Figure 30.5: Slab Heating
with inhomogeneous boundary conditions.
As before, we seek a particular solution Tp to which we will add a complementary
function Tc ,
T = Tp + Tc (30.45)
• Particular solution.
Eventually we expect the temperature to rise linearly with time as heat is added. Try
d2u Ü
= (30.47a)
d x2 Ó
1Ü 2
=⇒ u(x) = x + ax + b . (30.47b)
2Ó
To determine a and b, we employ the boundary conditions.
From Fourier’s law of conduction, q = −k ∇ T where q is the heat flux density, the
temperature gradient is
q
u 0 (0) = − and u 0 (d) = 0 (insulated) (30.48)
Ó
so we find
1 q qÓ q
u(x) = (x − d)2 and Ü= = . (30.49)
2 kd kd câd
Therefore
1 q q
Tp (t, x) = (x − d)2 + Ót . (30.50)
2 kd kd
T(t = 0, x) = Tp (t = 0, x) + Tc (t = 0, x) = 0 . (30.51)
• Characteristic function.
Ý
Write Tc (t, x) ∝ e −Ýt e i ax =⇒ . a2 =
Ó
The homogeneous boundary conditions (Neumann) are:
Tc Tc
= =0 (30.52)
x x=0 x x=d
At t = 0, Tc = −Tp so
∞
A0 ¼ náx 1 q
+ A n cos =− (x − d)2 (30.54a)
2 d 2 kd
n=1
1 q q
T(t, x) = (x − d)2 + Ót
2 kd kd
∞
qd
1 2 ¼ 1 náx −Ón2 á2 t/d 2
− + 2 cos e . (30.55)
k
3 á n 2 d
n=1
31 Integral Transform Method
Ex. 31.1. Find the temperature distribution T(t, x) of an infinite solid if we are given an
initial distribution T(t = 0, x) = f (x).
Note: there is no y- or z-dependence so this is a 1-dimensional problem:
    ∂²T/∂x² = (1/Ó) ∂T/∂t .                                     (31.1)
Let
    T(t, x) = (1/2á) ∫_−∞^∞ F(t, k) e^ikx dk   ⇐⇒   F(t, k) = ∫_−∞^∞ T(t, x) e^−ikx dx .   (31.2)
Then
    −k² F(t, k) = (1/Ó) ∂F(t, k)/∂t   =⇒   F(t, k) = g(k) e^−k²Ót   (31.3)
where we must determine g(k) from the initial conditions.
At t = 0,
Z∞ Z∞
F (t = 0, k) = T(0, x)e −i kx d x = f (x)e −i kx d x (31.4)
−∞ −∞
but F (t = 0, k) = g(k) so
Z∞
g(k) = f (x)e −i kx d x . (31.5)
−∞
Thus
Z∞
2
F (t, k) = e −k Ót f (x)e −i kx d x . (31.6)
−∞
Therefore
    T(t, x) = (1/2á) ∫_−∞^∞ ∫_−∞^∞ e^−k²Ót f(x′) e^−ikx′ e^ikx dx′ dk   (31.7a)
            = ∫_−∞^∞ f(x′) [ (1/2á) ∫_−∞^∞ e^ik(x−x′) e^−k²Ót dk ] dx′   (31.7b)
and the inner integral is √(1/4áÓt) e^−(x−x′)²/4Ót, and so we have
    T(t, x) = ∫_−∞^∞ f(x′) √(1/4áÓt) e^−(x−x′)²/4Ót dx′ .       (31.8)
Note:
    G(t, x; x′) = √(1/4áÓt) e^−(x−x′)²/4Ót                      (31.9)
is a Green function for this problem.
Suppose the initial source is the plane source f(x) = Ö(x). Then
    T(t, x) = √(1/4áÓt) e^−x²/4Ót = G(t, x; 0) ,   t > 0 .      (31.10)
This is a Gaussian of width √(2Ót). We see that an initial delta-like distribution spatially
diffuses with time as shown in Fig. 31.1.
[Figure 31.1: Temperature profiles T(t, x) at t = 0⁺, at a small time t, and at a large
time t, showing the spreading of the initial delta-like distribution.]
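A numerical check of the convolution (31.8) with the Green function (31.9), assuming NumPy; the initial profile below is a narrow Gaussian standing in for the delta function, so the exact result is again a Gaussian, now with variance s₀² + 2Ót.

# Convolve an initial narrow Gaussian with the heat kernel and compare to the exact result.
import numpy as np

alpha, t = 1.0, 0.5
x = np.linspace(-15.0, 15.0, 1501)

def kernel(x, xp, t):
    return np.exp(-(x - xp)**2 / (4 * alpha * t)) / np.sqrt(4 * np.pi * alpha * t)

s0 = 0.1                                   # width of the initial "delta-like" Gaussian
f = np.exp(-x**2 / (2 * s0**2)) / np.sqrt(2 * np.pi * s0**2)

T = np.trapz(kernel(x[:, None], x[None, :], t) * f[None, :], x, axis=1)

s2 = s0**2 + 2 * alpha * t                 # exact variance after diffusing for time t
T_exact = np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
print("max |T - T_exact| =", np.max(np.abs(T - T_exact)))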
We can use this solution to find the distribution from a point source δ³(x).
Let G(t, x; 0) be the response to the plane source δ(x) at t = 0.
Let g(t, r) be the response to the point source δ³(x) at t = 0.
Then we must have (see Fig. 31.2)

  G(t, x; 0) = 2π ∫₀^∞ g(t, r) ρ dρ   (31.11)

(a superposition of points lying on the x = 0 plane) and r² = ρ² + x² ⟹ r dr = ρ dρ so

  G(t, x; 0) = 2π ∫_x^∞ g(t, r) r dr.   (31.12)

Figure 31.2: Point Source Integral
  ⟹  ∂G(t, x; 0)/∂x = −2πx g(t, x)   (31.13a)
  ⟹  g(t, r) = −(1/2πr) ∂G(t, x; 0)/∂x |_(x=r)   (31.13b)

We find

  g(t, r) = (1/4παt)^{3/2} e^{−r²/4αt},   t > 0.   (31.14)
Thus the Green function for an infinite solid is

  G(t, x; x′) = (1/4παt)^{3/2} e^{−‖x − x′‖²/4αt},   t > 0.
Ex. 31.2. Consider the response of a semi-infinite solid x > 0 to a point initial
temperature distribution at x = a, y = z = 0, δ(x − a)δ(y)δ(z), if the entire solid is initially
at T = 0 (except at the point) and the boundary x = 0 is maintained at T = 0, as shown in
Fig. 31.3.
We will solve this using the method of images.
In the previous example we saw that the Green function for a point source in an infinite
solid is (1/4παt)^{3/2} e^{−‖x − x′‖²/4αt}.
To enforce the boundary condition T = 0 at x = 0 for the semi-infinite solid, superimpose
a source function at x = a, y = z = 0, with a negative source function at x = −a,
y = z = 0:
  T(t, x, y, z) = (1/4παt)^{3/2} { exp[−((x − a)² + y² + z²)/4αt]
          − exp[−((x + a)² + y² + z²)/4αt] },   t > 0, x > 0,   (31.17)

where the second exponential is the fictitious image source required to maintain T = 0 at x = 0.
Figure 31.3: A point source at x = a and its negative image source at x = −a maintain T = 0 on the plane x = 0.
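A short numerical illustration of (31.17), written by us with assumed parameter values (a = 1, α = 1) and an assumed function name: the image term cancels the direct term identically on the boundary plane.

```python
import numpy as np

def semi_infinite_T(t, x, y, z, a=1.0, alpha=1.0):
    """Image-source solution (31.17) for the half-space x > 0 with T = 0 at x = 0."""
    pref = (1.0 / (4.0 * np.pi * alpha * t))**1.5
    direct = np.exp(-((x - a)**2 + y**2 + z**2) / (4.0 * alpha * t))
    image = np.exp(-((x + a)**2 + y**2 + z**2) / (4.0 * alpha * t))
    return pref * (direct - image)

print(semi_infinite_T(0.5, 0.0, 0.3, -0.2))   # exactly 0 on the boundary x = 0
print(semi_infinite_T(0.5, 1.0, 0.0, 0.0))    # hottest near the original source point
```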
32 Green Functions

  ∇²u + k²u = 0   (32.1)

with u = 0 when r = a (see Fig. 32.1: Circular Drum). Clearly G(x, x′) depends only on r, r′, and θ. We
have

  ∇²G + k²G = δ²(x − x′).   (32.2)

Figure 32.1: Circular Drum
Recall Gauss's theorem, ∫ ∇·F dV = ∮ F·dS. In 2 dimensions, with F = ∇G, we have

  ∫ ∇²G dA = ∮ n·∇G ds.   (32.4)

Integrate the inhomogeneous equation over the area element shown in Fig. 32.2:

  ∫ ∇²G dA + k² ∫ G dA = ∫ δ²(x − x′) dA   (32.5a)

where the first term is ∮ n·∇G ds, the second term → 0 as ε → 0, and the right hand side is 1,
so, as ε → 0, only the arcs above and below r = r′ contribute to the line integral:

  ∫_{r′+ε} (∂G/∂r) ds − ∫_{r′−ε} (∂G/∂r) ds = 1   (32.5b)
  ⟹  ∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = (1/r′) δ(θ).   (32.5d)
Let

  ∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = ∑_{m=0}^{∞} c_m cos mθ   (32.5e)
  ⟹  ∑_{m=0}^{∞} c_m cos mθ = (1/r′) δ(θ).   (32.5f)
Multiply both sides by cos m′θ and integrate ∫_{−π}^{π} dθ to get

  2πc₀ = 1/r′   and   πc_m = 1/r′,   m = 1, 2, . . . .   (32.5g)

Therefore

  ∂G/∂r |_{r′+ε} − ∂G/∂r |_{r′−ε} = 1/(2πr′) + (1/πr′) ∑_{m=1}^{∞} cos mθ.   (32.5h)
Figure 32.2: Green Function Integral (a thin element between r = r′ − ε and r = r′ + ε about the source point x′)
Thus, at r = r′, we require the jump conditions above to hold, with

  r_< = { r,  r < r′;  r′,  r > r′ }   and   r_> = { r′,  r < r′;  r,  r > r′ }   (32.9)
where u(x, x0 ), known as the fundamental solution, is singular at x = x0 but does not
satisfy the boundary conditions, and v(x, x0 ) is a smooth solution of the homogeneous
problem that fixes the boundary conditions.
To find u(x, x′), let ρ = ‖x − x′‖ and write u = u(ρ). Integrate over a small circular disk
about ρ = 0:

  2π ∫₀^ρ (∇²u) ρ̃ dρ̃ + 2π ∫₀^ρ k²u ρ̃ dρ̃ = ∫ δ²(x − x′) dA.   (32.11)

The first term equals 2πρ du/dρ, the second vanishes as ρ → 0, and the right hand side is 1.
As ρ → 0 we therefore have 2πρ du/dρ = 1, so

  u(ρ) ∼ (1/2π) ln ρ + const   as ρ → 0.   (32.12)
  G = (1/4) Y₀(kρ) + v(x, x′).   (32.14)
Thus, at r = a, we have

  G(r = a) = 0 = (1/4) Y₀(k√(a² + r′² − 2ar′ cos θ)) + ∑_{n=0}^{∞} A_n J_n(ka) cos nθ,   (32.16a)

where the argument of Y₀ is ρ evaluated at r = a,
so

  A₀ = −[1/(8πJ₀(ka))] ∫₀^{2π} Y₀(k√(a² + r′² − 2ar′ cos θ)) dθ   (32.16b)

and

  A_n = −[1/(4πJ_n(ka))] ∫₀^{2π} Y₀(k√(a² + r′² − 2ar′ cos θ)) cos nθ dθ   (32.16c)

for n = 1, 2, . . . .
Therefore, another form of the Green function is

  G(x, x′) = (1/4) Y₀(k‖x − x′‖)
       − [J₀(kr)/(4πJ₀(ka))] ∫₀^{π} Y₀(k√(a² + r′² − 2ar′ cos θ′)) dθ′
       − ∑_{n=1}^{∞} [J_n(kr) cos nθ/(2πJ_n(ka))] ∫₀^{π} Y₀(k√(a² + r′² − 2ar′ cos θ′)) cos nθ′ dθ′.   (32.17)
Consider again the heated slab of §30:

  ∂²u(t, x)/∂x² − (1/α) ∂u(t, x)/∂t = 0,   α = k/(cρ)   (32.18a)

with inhomogeneous boundary conditions

  u(t = 0, x) = 0,   ∂u/∂x |_{x=d} = 0,   and   ∂u/∂x |_{x=0} = −q/k.   (32.18b)
Write u = v + w, choosing w(x) to carry the inhomogeneous boundary conditions, and also
choose it so that d²w/dx² gives a simple result. The simplest choice is

  w(x) = (1/2)(q/kd)(x − d)²   (32.21)

which satisfies the boundary conditions, and now

  ∂²v/∂x² − (1/α) ∂v/∂t = −d²w/dx² = −q/(kd)   (32.22a)

where v must now satisfy the homogeneous boundary conditions

  ∂v/∂x |_{x=d} = ∂v/∂x |_{x=0} = 0.   (32.22b)
We have achieved our goal of transforming to an inhomogeneous equation for v with
homogeneous boundary conditions.
  ∇²φ = 0   (32.24)
  ∇²φ(x) = δ³(x − x′).   (32.25)

The solutions of Laplace's equation in spherical coordinates have the form

  φ(r, θ, φ) = { r^ℓ  or  r^{−(ℓ+1)} } × P_ℓ^m(cos θ) e^{±imφ}.   (32.26)

The spherically-symmetric solution that is regular at infinity is

  φ(r) = A/r   (32.27)

where we now must determine A.
Integrate the inhomogeneous equation over a spherical ball of radius a about the origin (taking x′ = 0):

  ∫_{r<a} ∇²φ dV = ∫_{r<a} δ³(x) dV = 1   (32.28a)
  ∇²ψ(t, x) − (1/c²) ∂²ψ(t, x)/∂t² = 0   (32.30)

over an infinite domain.
The Green function is a solution to the inhomogeneous equation

  ∇²ψ(t, x) − (1/c²) ∂²ψ(t, x)/∂t² = δ(t − t′) δ³(x − x′).   (32.31)
Note: the solution only depends on t − t 0 and x − x0 , so we have translational invariance
in t and x. Therefore, without loss of generality, set t 0 = 0 and x0 = 0.
Let

  ψ(t, x) = [1/(2π)⁴] ∫ Ψ(ω, k) e^{i(k·x−ωt)} dω dk_x dk_y dk_z   (32.32a)
  Ψ(ω, k) = ∫ ψ(t, x) e^{−i(k·x−ωt)} dt dx dy dz.   (32.32b)

Then

  (−k² + ω²/c²) Ψ = 1   (32.33a)
  ⟹  Ψ(ω, k) = c²/(ω² − c²k²)   (32.33b)

where k = ‖k‖, and we want

  ψ(t, x) = [c²/(2π)⁴] ∫ e^{i(k·x−ωt)}/(ω² − c²k²) dω dk_x dk_y dk_z.   (32.34)
To do this integral, choose the axis of spherical polar coordinates in k space along x.
Then k·x = kr cos θ. Also let μ = cos θ so dμ = −sin θ dθ. Then

  ψ(t, x) = [c²/(2π)⁴] ∫₀^{2π} ∫_{−1}^{1} ∫₀^{∞} ∫_{−∞}^{∞} [e^{i(kr cos θ − ωt)}/(ω² − c²k²)] k² dω dk dμ dφ   (32.35a)
      = [c²/(2π)³] ∫₀^{∞} ∫_{−∞}^{∞} [∫_{−1}^{1} e^{ikrμ} dμ] [e^{−iωt}/(ω² − c²k²)] k² dω dk   (32.35b)

where ∫_{−1}^{1} e^{ikrμ} dμ = (1/ikr)(e^{ikr} − e^{−ikr}), so

      = [c²/(2π)³] (1/ir) ∫₀^{∞} ∫_{−∞}^{∞} (e^{ikr} − e^{−ikr}) k [e^{−iωt}/(ω² − c²k²)] dω dk   (32.35c)
      = [c²/(2π)³] (1/ir) ∫_{−∞}^{∞} [∫_{−∞}^{∞} e^{−iωt}/(ω² − c²k²) dω] e^{ikr} k dk.   (32.35d)
Note that the integrand has two poles on the real axis, ω = −|ck| and ω = +|ck|.
We therefore modify the integral to be

  ∫_{ω=−∞+iε}^{∞+iε} e^{−iωt}/(ω² − c²k²) dω   (32.37)
[Contour figures: the ω integration path runs from −R + iε to +R + iε and is closed by a large semicircle C_R; the poles lie at ω = ±|ck| on the real axis.]
  ∇²φ − (1/c²) ∂²φ/∂t² = f(t, x).   (32.45)

The solution is

  φ(t, x) = −(c/4π) ∫ [δ(‖x − x′‖ − c(t − t′))/‖x − x′‖] f(t′, x′) dt′ dV′   (32.46a)
      = −(1/4π) ∫ f(t − ‖x − x′‖/c, x′)/‖x − x′‖ dV′.   (32.46b)

This is the retarded potential because the source function is evaluated at the retarded
time t − (1/c)‖x − x′‖.
For example, consider a point source moving on a prescribed path x₀(t) so that

  f(t, x) = δ³(x − x₀(t)).   (32.47)

Therefore

  φ(t, x) = −(1/4π) ∫ [δ³(x′ − x₀(t′)) δ(t − t′ − ‖x − x′‖/c)/‖x − x′‖] dt′ dV′.   (32.48)

First do the integral over dV′:

  φ(t, x) = −(1/4π) ∫ δ(t − t′ − ‖x − x₀(t′)‖/c)/‖x − x₀(t′)‖ dt′.   (32.49)
Now do the t′ integral using δ(g(t′)) = δ(t′ − t_r)/|dg/dt′|_{t′=t_r}, where
g(t′) = t − t′ − ‖x − x₀(t′)‖/c vanishes at the retarded time t_r and

  dg/dt′ |_{t′=t_r} = −1 + (1/c) v₀(t_r)·(x − x₀(t_r))/‖x − x₀(t_r)‖.   (32.50c)
Figure 32.6: Light Cone and Retarded Time (the worldline x₀(t) intersects the past light cone of the field point at t = t_r).
  φ(t, x) = −(1/4π) [1/‖x − x₀(t_r)‖] × 1/[1 − (1/c) v₀(t_r)·(x − x₀(t_r))/‖x − x₀(t_r)‖]   (32.51)

and thus we obtain the Liénard-Wiechert potential

  φ(t, x) = −(1/4π) × 1/[‖x − x₀(t_r)‖ − (1/c) v₀(t_r)·(x − x₀(t_r))]

  with t_r = t − (1/c)‖x − x₀(t_r)‖.   (32.52)
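The implicit condition (32.52) for the retarded time is easily solved by fixed-point iteration, which converges for subluminal motion |v₀| < c. The sketch below is ours; the circular orbit, units (c = 1), and all names are illustrative assumptions.

```python
import numpy as np

c = 1.0

def x0(t):
    """Assumed source path: a circular orbit of radius 0.3 at unit angular frequency."""
    return np.array([0.3 * np.cos(t), 0.3 * np.sin(t), 0.0])

def v0(t):
    """Velocity of the assumed path (|v0| = 0.3 < c)."""
    return np.array([-0.3 * np.sin(t), 0.3 * np.cos(t), 0.0])

def retarded_time(t, x, tol=1e-12):
    """Solve t_r = t - |x - x0(t_r)|/c by fixed-point iteration."""
    tr = t
    for _ in range(200):
        tr_new = t - np.linalg.norm(x - x0(tr)) / c
        if abs(tr_new - tr) < tol:
            break
        tr = tr_new
    return tr

x = np.array([5.0, 0.0, 0.0])
tr = retarded_time(2.0, x)
sep = x - x0(tr)
phi = -1.0 / (4.0 * np.pi * (np.linalg.norm(sep) - np.dot(v0(tr), sep) / c))  # (32.52)
print(tr, phi)
```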
Integral Equations
Green functions can be used to convert a partial differential equation with
boundary conditions into an integral equation.
Consider Poisson's equation ∇²φ(x) = f(x). Its Green function satisfies

  ∇²G(x, x′) = δ³(x − x′)   (32.54)

and the solution is

  φ(x) = ∫ G(x, x′) f(x′) dV′.   (32.56)
Neumann series
Consider the Fredholm integral equation of the second kind

  φ(x) = f(x) + λ ∫ K(x, x′) φ(x′) dV′.   (32.59)

Substitute the equation into itself, replacing φ(x′) under the integral by
f(x′) + λ ∫ K(x′, x″) φ(x″) dV″. Repeat. . .

  φ(x) = f(x) + λ ∫ K(x, x′) f(x′) dV′
      + λ² ∫∫ K(x, x′) K(x′, x″) f(x″) dV′ dV″
      + · · · .   (32.63)

This is known as the Neumann series and it converges for small λ provided
K(x, x′) is bounded.
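Numerically, discretizing the integral turns (32.59) into a linear system, and the Neumann series becomes the iteration φ ← f + λKφ. The following sketch is ours, on a 1-D interval with an assumed smooth kernel and assumed f and λ, compared against a direct linear solve.

```python
import numpy as np

# grid and quadrature weight on [0, 1] (midpoint rule)
n = 200
x = (np.arange(n) + 0.5) / n
w = 1.0 / n

K = np.exp(-np.abs(x[:, None] - x[None, :]))   # assumed bounded kernel K(x, x')
f = np.sin(np.pi * x)                          # assumed inhomogeneous term
lam = 0.3                                      # small enough for convergence

phi = f.copy()
for _ in range(50):                            # successive terms of the Neumann series
    phi = f + lam * (K * w) @ phi

# compare with the direct solution of (I - lam K w) phi = f
phi_direct = np.linalg.solve(np.eye(n) - lam * K * w, f)
print(np.max(np.abs(phi - phi_direct)))        # tiny: the series has converged
```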
The Neumann series thus generalizes the geometric series and brings us by a
commodius vicus of recirculation back to §1.
with boundary conditions that ψ(x)e^{−iEt/ℏ} is an incident plane wave with wave vector k₀
plus outgoing waves as ‖x‖ → ∞, and k² = k₀² = 2mE/ℏ².
The first iteration of the Neumann series for the resulting Helmholtz-type integral
equation gives the Born approximation

  ψ(x) ≈ e^{ik₀·x} − (m/2πℏ²) ∫ [e^{ik‖x−x′‖}/‖x − x′‖] V(x′) e^{ik₀·x′} dV′.   (32.73)
Problems
Problem 37.
Find the lowest frequency of oscillation of acoustic waves in a hollow sphere of
radius a. The boundary condition is è = 0 at r = a and è obeys the differential
equation
  ∇²ψ = (1/c²) ∂²ψ/∂t².
Problem 38.
A sphere of radius a is at temperature T = 0 throughout. At time t = 0 it is
immersed in a liquid bath at temperature T0 . Find the subsequent temperature
distribution T(r, t) inside the sphere. This distribution satisfies:
  ∇²T − (1/α) ∂T/∂t = 0.
Problem 39.
Find the three lowest eigenvalues of the Schrödinger equation
  −(ℏ²/2m) ∇²ψ = Eψ
for a particle confined in a cylindrical box of radius a and height b where è = 0
on the walls and a ≈ b.
Zeros of the Bessel functions:
J0 (x) = 0 for x = 2.404, 5.520, 8.654, . . .
J1 (x) = 0 for x = 3.832, 7.016, 10.173, . . .
J2 (x) = 0 for x = 5.135, 8.417, 11.619, . . . .
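The tabulated zeros can be reproduced (and extended) with scipy; this short check is ours, not part of the problem statement.

```python
from scipy.special import jn_zeros

# first three positive zeros of J0, J1, J2 -- matches the table above
for n in range(3):
    print(n, jn_zeros(n, 3))
```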
Appendix
A Series Expansions
Binomial series

  (1 + x)^α = 1 + αx + [α(α − 1)/2!] x² + [α(α − 1)(α − 2)/3!] x³ + · · ·
       = 1 + C(α, 1) x + C(α, 2) x² + C(α, 3) x³ + · · · ,   (A.1)

where C(α, n) is the binomial coefficient.
Special cases:
  (1 + x)² = 1 + 2x + x²   (A.2)
  (1 + x)³ = 1 + 3x + 3x² + x³   (A.3)
  (1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − · · ·   −1 < x < 1   (A.4)
  (1 + x)⁻² = 1 − 2x + 3x² − 4x³ + 5x⁴ − · · ·   −1 < x < 1   (A.5)
  (1 + x)⁻³ = 1 − 3x + 6x² − 10x³ + 15x⁴ − · · ·   −1 < x < 1   (A.6)
  (1 + x)^{1/2} = 1 + (1/2)x − [1/(2·4)]x² + [(1·3)/(2·4·6)]x³ − · · ·   −1 < x ≤ 1   (A.7)
  (1 + x)^{−1/2} = 1 − (1/2)x + [(1·3)/(2·4)]x² − [(1·3·5)/(2·4·6)]x³ + · · ·   −1 < x ≤ 1   (A.8)
  (1 + x)^{1/3} = 1 + (1/3)x − [2/(3·6)]x² + [(2·5)/(3·6·9)]x³ − · · ·   −1 < x ≤ 1   (A.9)
  (1 + x)^{−1/3} = 1 − (1/3)x + [(1·4)/(3·6)]x² − [(1·4·7)/(3·6·9)]x³ + · · ·   −1 < x ≤ 1   (A.10)
  eˣ = 1 + x + x²/2! + x³/3! + · · ·   −∞ < x < ∞   (A.11)
  ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + · · ·   −1 < x ≤ 1   (A.12)
  (1/2) ln[(1 + x)/(1 − x)] = x + x³/3 + x⁵/5 + x⁷/7 + · · ·   −1 < x < 1   (A.13)
  ln x = 2[ (x − 1)/(x + 1) + (1/3)((x − 1)/(x + 1))³ + (1/5)((x − 1)/(x + 1))⁵ + · · · ]   x > 0   (A.14)
  sin x = x − x³/3! + x⁵/5! − x⁷/7! + · · ·   −∞ < x < ∞   (A.15)
  cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + · · ·   −∞ < x < ∞   (A.16)
  tan x = x + x³/3 + 2x⁵/15 + · · · + [2²ⁿ(2²ⁿ − 1)Bₙ x²ⁿ⁻¹]/(2n)! + · · ·   |x| < π/2   (A.17)
  cot x = 1/x − x/3 − x³/45 − · · · − [2²ⁿBₙ x²ⁿ⁻¹]/(2n)! − · · ·   0 < |x| < π   (A.18)
  arcsin x = x + (1/2)(x³/3) + [(1·3)/(2·4)](x⁵/5) + [(1·3·5)/(2·4·6)](x⁷/7) + · · ·   |x| < 1   (A.19)
  arccos x = π/2 − arcsin x   (A.20)
  arctan x = x − x³/3 + x⁵/5 − x⁷/7 + · · ·   |x| < 1   (A.21)
  arctan x = ±π/2 − 1/x + 1/(3x³) − 1/(5x⁵) + · · ·   x ≷ 0, |x| ≥ 1   (A.22)
  arccot x = π/2 − arctan x   (A.23)
  sinh x = x + x³/3! + x⁵/5! + x⁷/7! + · · ·   −∞ < x < ∞   (A.24)
  cosh x = 1 + x²/2! + x⁴/4! + x⁶/6! + · · ·   −∞ < x < ∞   (A.25)
  tanh x = x − x³/3 + · · · + [(−1)ⁿ⁻¹ 2²ⁿ(2²ⁿ − 1)Bₙ x²ⁿ⁻¹]/(2n)! + · · ·   |x| < π/2   (A.26)
  coth x = 1/x + x/3 − · · · + [(−1)ⁿ⁻¹ 2²ⁿBₙ x²ⁿ⁻¹]/(2n)! − · · ·   0 < |x| < π   (A.27)
  arcsinh x = x − (1/2)(x³/3) + [(1·3)/(2·4)](x⁵/5) − [(1·3·5)/(2·4·6)](x⁷/7) + · · ·   |x| < 1   (A.28)
  arcsinh x = ln(2x) + (1/2)[1/(2x²)] − [(1·3)/(2·4)][1/(4x⁴)] + [(1·3·5)/(2·4·6)][1/(6x⁶)] − · · ·   x > 1   (A.29)
  arccosh x = ln(2x) − (1/2)[1/(2x²)] − [(1·3)/(2·4)][1/(4x⁴)] − [(1·3·5)/(2·4·6)][1/(6x⁶)] − · · ·   x > 1   (A.30)
  arctanh x = x + x³/3 + x⁵/5 + x⁷/7 + · · ·   |x| < 1   (A.31)
  arccoth x = 1/x + 1/(3x³) + 1/(5x⁵) + 1/(7x⁷) + · · ·   |x| > 1   (A.32)
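Any of these expansions is easy to check numerically inside its region of convergence; the snippet below, ours and purely illustrative, tracks the partial sums of the logarithmic series (A.12) at x = 0.5.

```python
import math

# partial sums of the logarithmic series (A.12) at x = 0.5, compared with math.log
x = 0.5
partial = 0.0
for n in range(1, 9):
    partial += (-1)**(n + 1) * x**n / n
    print(n, partial, math.log(1.0 + x))
```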
B Special Functions
Gamma Function
Definition (positive arguments)
  Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt   x > 0   (B.1)
Recursion formula
  Γ(x + 1) = xΓ(x)   (B.2)
  Γ(n + 1) = n!   for n = 0, 1, 2, · · ·   (B.3)
Negative arguments
Use repeated application of the recursion formula
  Γ(x) = Γ(x + 1)/x   (B.4)
Special values
  Γ(1/2) = √π   (B.5)
  Γ(n + 1/2) = [1·3·5···(2n − 1)/2ⁿ] √π   n = 1, 2, 3, . . .   (B.6)
  Γ(−n + 1/2) = [(−1)ⁿ 2ⁿ/(1·3·5···(2n − 1))] √π   n = 1, 2, 3, . . .   (B.7)
Relationships
  Γ(x)Γ(1 − x) = π/sin πx   Euler's reflection formula   (B.8)
  2^{2x−1} Γ(x)Γ(x + 1/2) = √π Γ(2x)   Legendre's duplication formula   (B.9)
Asymptotic expansions
  Γ(x) ∼ √(2π) x^{x−1/2} e^{−x} [1 + 1/(12x) + 1/(288x²) − 139/(51 840x³) + · · ·]   (B.10)
  ln Γ(x) ∼ x ln x − x − (1/2) ln(x/2π) + 1/(12x) − 1/(360x³) + · · ·   (B.11)
  n! ∼ √(2πn) nⁿ e^{−n}   Stirling's formula   (B.12)
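As a quick illustration of how good these asymptotic forms are, the snippet below (ours, using only the standard library) compares ln n! with Stirling's formula (B.12) and with the first correction term from (B.10).

```python
import math

# compare ln n! with Stirling's formula (B.12) and the 1/(12n) correction from (B.10)
for n in (5, 10, 50):
    exact = math.lgamma(n + 1)
    stirling = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n
    corrected = stirling + math.log(1 + 1.0 / (12 * n))
    print(n, exact, stirling, corrected)
```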
Figure B.1: Gamma Function (Γ(x) for −5 < x < 5)
Bessel Functions
Bessel differential equation
  x²y″ + xy′ + (x² − ν²)y = 0   ν ≥ 0   (B.13)

Solutions are called Bessel functions of order ν.
  J_ν(x) = [x^ν/(2^ν Γ(ν + 1))] {1 − x²/[2(2ν + 2)] + x⁴/[2·4(2ν + 2)(2ν + 4)] − · · ·}   (B.14)
      = ∑_{k=0}^{∞} (−1)ᵏ (x/2)^{ν+2k} / [k! Γ(ν + k + 1)]   (B.15)
  J_{−ν}(x) = [x^{−ν}/(2^{−ν} Γ(1 − ν))] {1 − x²/[2(2 − 2ν)] + x⁴/[2·4(2 − 2ν)(4 − 2ν)] − · · ·}   (B.16)
      = ∑_{k=0}^{∞} (−1)ᵏ (x/2)^{2k−ν} / [k! Γ(k + 1 − ν)]   (B.17)
  J_{−n}(x) = (−1)ⁿ J_n(x)   n = 0, 1, 2, . . .   (B.18)
  J₀(x) = 1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + · · ·   (B.19)
  J₁(x) = x/2 − x³/(2²·4) + x⁵/(2²·4²·6) − x⁷/(2²·4²·6²·8) + · · ·   (B.20)
Hankel functions
  H_ν^(1)(x) = J_ν(x) + iY_ν(x)   (B.24)
  H_ν^(2)(x) = J_ν(x) − iY_ν(x)   (B.25)
Limiting forms
As x → 0,
  J₀(x) → 1   (B.26)
  J_ν(x) ∼ [1/Γ(ν + 1)] (x/2)^ν   ν ≠ −1, −2, −3, . . .   (B.27)
  Y₀(x) ∼ (2/π) ln x   (B.28)
  Y_ν(x) ∼ −[Γ(ν)/π] (x/2)^{−ν}   ν > 0 or ν = −1/2, −3/2, −5/2, . . .   (B.29)
  Y_{−ν}(x) ∼ −[Γ(ν)/π] (x/2)^{−ν} cos νπ   ν > 0, ν ≠ 1/2, 3/2, 5/2, . . .   (B.30)
  H_ν^(1)(x) ∼ −H_ν^(2)(x) ∼ −i[Γ(ν)/π] (x/2)^{−ν}   ν > 0   (B.31)
As x → ∞,
  J_ν(x) ∼ √(2/πx) cos(x − νπ/2 − π/4)   (B.32)
  Y_ν(x) ∼ √(2/πx) sin(x − νπ/2 − π/4)   (B.33)
  H_ν^(1)(x) ∼ √(2/πx) exp[i(x − νπ/2 − π/4)]   (B.34)
  H_ν^(2)(x) ∼ √(2/πx) exp[−i(x − νπ/2 − π/4)]   (B.35)
Recurrence relations
For C_ν denoting J_ν, Y_ν, H_ν^(1), or H_ν^(2):

  C_{ν−1}(x) + C_{ν+1}(x) = (2ν/x) C_ν(x)   (B.36)
  C_{ν−1}(x) − C_{ν+1}(x) = 2C′_ν(x)   (B.37)
  C′_ν(x) = C_{ν−1}(x) − (ν/x) C_ν(x)   (B.38)
  C′_ν(x) = (ν/x) C_ν(x) − C_{ν+1}(x)   (B.39)
Integral forms
  J₀(x) = (1/π) ∫₀^π cos(x sin θ) dθ   (B.41)
  J_n(x) = (1/π) ∫₀^π cos(nθ − x sin θ) dθ   (B.42)
  Y₀(x) = −(2/π) ∫₀^∞ cos(x cosh t) dt   (B.43)
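The integral representation (B.42) can be verified directly against scipy's Bessel routines; the check below is ours, with an arbitrary choice of orders and arguments.

```python
import numpy as np
from scipy.special import jv

# numerical check of the integral representation (B.42) against scipy's J_n(x)
theta = np.linspace(0.0, np.pi, 20001)
for n in (0, 1, 2):
    for x in (0.5, 3.0, 7.0):
        y = np.cos(n * theta - x * np.sin(theta))
        integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)) / np.pi  # trapezoid rule
        print(n, x, integral, jv(n, x))
```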
Figure B.2: Bessel Functions of the First and Second Kinds (J_n(x) and Y_n(x) for n = 0, 1)
Definite integrals
  ∫₀¹ [J_n(αt)]² t dt = (1/2)[J′_n(α)]² + (1/2)(1 − n²/α²)[J_n(α)]²   (B.44)
  ∫₀¹ J_n(αt) J_n(βt) t dt = [αJ_n(β)J′_n(α) − βJ_n(α)J′_n(β)]/(β² − α²)   α ≠ β   (B.45)
  ∫₀^∞ J_n(xt) J_n(x′t) t dt = δ(x − x′)/x   (B.46)

Note in Eq. (B.45) that if α and β are zeros of the Bessel function J_n or of the derivative
of the Bessel function J′_n then we have

  ∫₀¹ J_n(x_{np}t) J_n(x_{nq}t) t dt = 0   p ≠ q   (B.47)
  ∫₀¹ J_n(y_{np}t) J_n(y_{nq}t) t dt = 0   p ≠ q   (B.48)

where x_{np} is the pth zero of J_n and y_{np} is the pth zero of J′_n.
  J_{1/2}(x) = (2/πx)^{1/2} sin x   (B.55)
  J_{−1/2}(x) = (2/πx)^{1/2} cos x   (B.56)
  J_{3/2}(x) = (2/πx)^{1/2} [(1/x) sin x − cos x]   (B.57)
  J_{−3/2}(x) = (2/πx)^{1/2} [−(1/x) cos x − sin x]   (B.58)
For ℓ = 0, 1:

  j₀(x) = sin x/x,   y₀(x) = −cos x/x,   h₀^(1)(x) = −i e^{ix}/x   (B.64)
  j₁(x) = sin x/x² − cos x/x,   y₁(x) = −cos x/x² − sin x/x,   h₁^(1)(x) = −(e^{ix}/x²)(i + x)   (B.65)
Figure B.3: Spherical Bessel Functions of the First and Second Kinds (j_n(x) and y_n(x) for n = 0, 1)
  I_ν(x) = J_ν(ix)/i^ν   (B.67)
  K_ν(x) = (π/2) i^{ν+1} H_ν^(1)(ix)   (B.68)
For ν = 0, 1:

  I₀(x) = 1 + x²/2² + x⁴/(2²·4²) + x⁶/(2²·4²·6²) + · · ·   (B.69)
  I₁(x) = x/2 + x³/(2²·4) + x⁵/(2²·4²·6) + x⁷/(2²·4²·6²·8) + · · ·   (B.70)
Limiting forms
  I_ν(x) ∼ [1/Γ(ν + 1)] (x/2)^ν   as x → 0,   ν ≠ −1, −2, −3, . . .   (B.71)
  K_ν(x) ∼ [Γ(ν)/2] (x/2)^{−ν}   as x → 0,   ν > 0   (B.72)
  K₀(x) ∼ −ln x   as x → 0   (B.73)
  I_ν(x) ∼ eˣ/√(2πx)   as x → ∞   (B.74)
  K_ν(x) ∼ √(π/2x) e^{−x}   as x → ∞   (B.75)
Integral forms
  I₀(x) = (1/π) ∫₀^π cosh(x sin θ) dθ = (1/π) ∫₀^π e^{x cos θ} dθ   (B.77)
  K₀(x) = ∫₀^∞ cos(x sinh t) dt   (B.78)
  I_n(x) = (1/π) ∫₀^π e^{x cos θ} cos(nθ) dθ   n = 0, 1, 2, . . .   (B.79)
Figure B.4: Modified Bessel Functions of the First and Second Kinds (I_n(x) and K_n(x) for n = 0, 1)
Legendre Functions
Legendre differential equation
  (1 − x²)y″ − 2xy′ + ℓ(ℓ + 1)y = 0   (B.80)

Legendre polynomials

  P_n(x) = [1/(2ⁿ n!)] dⁿ/dxⁿ (x² − 1)ⁿ   Rodrigues's formula   (B.81)

For ℓ = 0, 1, 2, 3:

  P₀(x) = 1   (B.82)
  P₁(x) = x   (B.83)
  P₂(x) = (1/2)(3x² − 1)   (B.84)
  P₃(x) = (1/2)(5x³ − 3x)   (B.85)

Generating function

  1/√(1 − 2tx + t²) = ∑_{ℓ=0}^{∞} P_ℓ(x) t^ℓ   (B.86)
Recurrence formulas
  ∫_{−1}^{1} P_ℓ(x) P_{ℓ′}(x) dx = [2/(2ℓ + 1)] δ_{ℓℓ′}   (B.91)
  ∑_{ℓ=0}^{∞} [(2ℓ + 1)/2] P_ℓ(x) P_ℓ(x′) = δ(x − x′)   (B.92)
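The orthogonality relation (B.91) is easy to confirm with Gauss-Legendre quadrature; this short check is ours and the order cutoff is arbitrary.

```python
import numpy as np
from scipy.special import eval_legendre

# verify the orthogonality relation (B.91) with Gauss-Legendre quadrature
x, w = np.polynomial.legendre.leggauss(50)
for l in range(4):
    for lp in range(4):
        val = np.sum(w * eval_legendre(l, x) * eval_legendre(lp, x))
        expected = 2.0 / (2 * l + 1) if l == lp else 0.0
        print(l, lp, round(val, 12), expected)
```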
Special values
  P_ℓ(1) = 1   (B.93)
  P_ℓ(−1) = (−1)^ℓ   (B.94)
  P_ℓ(−x) = (−1)^ℓ P_ℓ(x)   (B.95)
  P_ℓ(0) = 0 for ℓ odd;   P_ℓ(0) = (−1)^{ℓ/2} [1·3·5···(ℓ − 1)]/[2·4·6···ℓ] for ℓ even   (B.96)
Figure B.5: Legendre Polynomials (P_ℓ(x) for ℓ = 0, 1, 2, 3)
Legendre functions of the second kind, for ℓ = 0, 1:

  Q₀(x) = (1/2) ln[(1 + x)/(1 − x)]   (B.102)
  Q₁(x) = (x/2) ln[(1 + x)/(1 − x)] − 1   (B.103)
Figure B.6: Legendre Functions of the Second Kind (Q_ℓ(x) for ℓ = 0, 1, 2, 3)
Associated Legendre functions

  P_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} (d^m/dx^m) P_ℓ(x)   (B.105)
      = [(−1)^m/(2^ℓ ℓ!)] (1 − x²)^{m/2} (d^{ℓ+m}/dx^{ℓ+m}) (x² − 1)^ℓ   (B.106)
  P_ℓ^0(x) = P_ℓ(x)   (B.107)
  P_ℓ^{−m}(x) = (−1)^m [(ℓ − m)!/(ℓ + m)!] P_ℓ^m(x)   (B.108)
  P_ℓ^m(x) = 0   if m > ℓ   (B.109)
For ℓ = 1, 2:

  P₁¹(x) = −√(1 − x²)   (B.110)
  P₂¹(x) = −3x√(1 − x²)   (B.111)
  P₂²(x) = 3(1 − x²)   (B.112)
Orthogonality
  ∫_{−1}^{1} P_ℓ^m(x) P_{ℓ′}^m(x) dx = [2/(2ℓ + 1)] [(ℓ + m)!/(ℓ − m)!] δ_{ℓℓ′}   (B.113)
Spherical Harmonics
  Y_ℓ^m(θ, φ) = √{[(2ℓ + 1)/4π] [(ℓ − m)!/(ℓ + m)!]} P_ℓ^m(cos θ) e^{imφ}   (B.115)
  Y_ℓ^{−m}(θ, φ) = (−1)^m [Y_ℓ^m(θ, φ)]*   (B.116)
  Y_ℓ^0(θ, φ) = √[(2ℓ + 1)/4π] P_ℓ(cos θ)   (B.117)
For ℓ = 0, 1, 2:

  Y₀⁰(θ, φ) = 1/√(4π)   (B.118)
  Y₁⁰(θ, φ) = (1/2)√(3/π) cos θ   (B.119)
  Y₁¹(θ, φ) = −(1/2)√(3/2π) sin θ e^{iφ}   (B.120)
  Y₂⁰(θ, φ) = (1/4)√(5/π) (3cos²θ − 1)   (B.121)
  Y₂¹(θ, φ) = −(1/2)√(15/2π) sin θ cos θ e^{iφ}   (B.122)
  Y₂²(θ, φ) = (1/4)√(15/2π) sin²θ e^{2iφ}   (B.123)
  ∫₀^{2π} ∫₀^{π} Y_ℓ^m(θ, φ) [Y_{ℓ′}^{m′}(θ, φ)]* sin θ dθ dφ = δ_{ℓℓ′} δ_{mm′}   (B.124)
  ∑_{ℓ=0}^{∞} ∑_{m=−ℓ}^{ℓ} Y_ℓ^m(θ, φ) [Y_ℓ^m(θ′, φ′)]* = [1/sin θ] δ(θ − θ′) δ(φ − φ′)   (B.125)
Addition theorem
  ∑_{m=−ℓ}^{ℓ} Y_ℓ^m(θ, φ) [Y_ℓ^m(θ′, φ′)]* = [(2ℓ + 1)/4π] P_ℓ(cos γ)   (B.126)

where

  cos γ = cos θ cos θ′ + sin θ sin θ′ cos(φ − φ′)   (B.127)
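The addition theorem (B.126) can be checked numerically with scipy's spherical harmonics; the directions chosen below are arbitrary, and note that scipy's sph_harm takes its angular arguments in the order (m, ℓ, azimuthal, polar).

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

# numerical check of the addition theorem (B.126) for two arbitrary directions
l = 3
theta1, phi1 = 0.7, 1.1          # polar, azimuthal angles of direction 1
theta2, phi2 = 2.1, 4.0          # polar, azimuthal angles of direction 2

lhs = sum(sph_harm(m, l, phi1, theta1) * np.conj(sph_harm(m, l, phi2, theta2))
          for m in range(-l, l + 1))

cos_gamma = (np.cos(theta1) * np.cos(theta2)
             + np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2))
rhs = (2 * l + 1) / (4 * np.pi) * eval_legendre(l, cos_gamma)
print(lhs.real, rhs)             # the imaginary part of lhs vanishes to rounding error
```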
C Vector Identities

  a·(b × c) = b·(c × a) = c·(a × b) = det[ a₁ a₂ a₃ ; b₁ b₂ b₃ ; c₁ c₂ c₃ ]   (C.1)
  a × (b × c) = (a·c)b − (a·b)c   (C.2)
  (a × b)·(c × d) = (a·c)(b·d) − (a·d)(b·c)   (C.3)
  ∇·(ψa) = ψ ∇·a + (∇ψ)·a   (C.4)
  ∇×(ψa) = ψ ∇×a + (∇ψ)×a   (C.5)
  ∇(a·b) = (a·∇)b + (b·∇)a + a×(∇×b) + b×(∇×a)   (C.6)
  ∇·(a × b) = (∇×a)·b − (∇×b)·a   (C.7)
  ∇×(a × b) = a(∇·b) − b(∇·a) + (b·∇)a − (a·∇)b   (C.8)
  ∇·(∇×a) = 0   (C.9)
  ∇×(∇ψ) = 0   (C.10)
  ∇×(∇×a) = ∇(∇·a) − ∇²a   (C.11)
  ∮_C A·ds = ∫_S (∇×A)·dS   Stokes's theorem   (C.12)
  ∮_S A·dS = ∫_V ∇·A dV   Gauss's theorem   (C.13)
  ∮_S ψ ∇φ·dS = ∫_V (ψ∇²φ + ∇φ·∇ψ) dV   Green's 1st identity   (C.14)
  ∮_S (ψ ∇φ − φ ∇ψ)·dS = ∫_V (ψ∇²φ − φ∇²ψ) dV   Green's 2nd identity   (C.15)
  ∮_S φ dS = ∫_V ∇φ dV   (C.16)
  ∮_S A × dS = −∫_V ∇×A dV   (C.17)
  ∮_C φ ds = −∫_S ∇φ × dS   (C.18)
  ∫_V A·∇φ dV = ∮_S φA·dS − ∫_V φ ∇·A dV   (C.19)
Helmholtz’s theorem
  A(x) = ∇ × [ (1/4π) ∫ ∇′×A(x′)/‖x − x′‖ dV′ ] − ∇ [ (1/4π) ∫ ∇′·A(x′)/‖x − x′‖ dV′ ]   (C.20)
In Cartesian coordinates (x, y, z):

  ∇ψ = (∂ψ/∂x) eₓ + (∂ψ/∂y) e_y + (∂ψ/∂z) e_z   (C.46)
  ∇·A = ∂Aₓ/∂x + ∂A_y/∂y + ∂A_z/∂z   (C.47)
  ∇×A = (∂A_z/∂y − ∂A_y/∂z) eₓ + (∂Aₓ/∂z − ∂A_z/∂x) e_y + (∂A_y/∂x − ∂Aₓ/∂y) e_z   (C.48)
  ∇²ψ = ∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z²   (C.49)
  ∇²A = ∇²Aₓ eₓ + ∇²A_y e_y + ∇²A_z e_z   (C.50)
In cylindrical coordinates (ρ, φ, z):

  ∇ψ = (∂ψ/∂ρ) e_ρ + (1/ρ)(∂ψ/∂φ) e_φ + (∂ψ/∂z) e_z   (C.51)
  ∇·A = (1/ρ) ∂(ρA_ρ)/∂ρ + (1/ρ) ∂A_φ/∂φ + ∂A_z/∂z   (C.52)
  ∇×A = [(1/ρ) ∂A_z/∂φ − ∂A_φ/∂z] e_ρ + [∂A_ρ/∂z − ∂A_z/∂ρ] e_φ + (1/ρ)[∂(ρA_φ)/∂ρ − ∂A_ρ/∂φ] e_z   (C.53)
  ∇²ψ = (1/ρ) ∂/∂ρ(ρ ∂ψ/∂ρ) + (1/ρ²) ∂²ψ/∂φ² + ∂²ψ/∂z²   (C.54)
  ∇²A = [∇²A_ρ − A_ρ/ρ² − (2/ρ²) ∂A_φ/∂φ] e_ρ + [∇²A_φ − A_φ/ρ² + (2/ρ²) ∂A_ρ/∂φ] e_φ + ∇²A_z e_z   (C.55)
In spherical coordinates (r, θ, φ):

  ∇ψ = (∂ψ/∂r) e_r + (1/r)(∂ψ/∂θ) e_θ + [1/(r sin θ)](∂ψ/∂φ) e_φ   (C.56)
  ∇·A = (1/r²) ∂(r²A_r)/∂r + [1/(r sin θ)] ∂(sin θ A_θ)/∂θ + [1/(r sin θ)] ∂A_φ/∂φ   (C.57)
  ∇×A = [1/(r sin θ)][∂(sin θ A_φ)/∂θ − ∂A_θ/∂φ] e_r + (1/r)[(1/sin θ) ∂A_r/∂φ − ∂(rA_φ)/∂r] e_θ
     + (1/r)[∂(rA_θ)/∂r − ∂A_r/∂θ] e_φ   (C.58)
  ∇²ψ = (1/r²) ∂/∂r(r² ∂ψ/∂r) + [1/(r² sin θ)] ∂/∂θ(sin θ ∂ψ/∂θ) + [1/(r² sin²θ)] ∂²ψ/∂φ²   (C.59)
  ∇²A = [∇²A_r − (2/r²)A_r − (2/(r² sin θ)) ∂(sin θ A_θ)/∂θ − (2/(r² sin θ)) ∂A_φ/∂φ] e_r
     + [∇²A_θ − A_θ/(r² sin²θ) + (2/r²) ∂A_r/∂θ − (2 cos θ/(r² sin²θ)) ∂A_φ/∂φ] e_θ
     + [∇²A_φ − A_φ/(r² sin²θ) + (2/(r² sin θ)) ∂A_r/∂φ + (2 cos θ/(r² sin²θ)) ∂A_θ/∂φ] e_φ.   (C.60)
Index