QUANTUM MECHANICS II

Copyright © 2015 Peeter Joot. All Rights Reserved.
This book may be reproduced and distributed in whole or in part, without fee, subject to the
following conditions:
• The copyright notice above and this permission notice must be preserved complete on all
complete or partial copies.
• Any translation or derived work must be approved by the author in writing before distri-
bution.
• If you distribute this work in part, instructions for obtaining the complete version of this
document must be included, and a means for obtaining a complete version provided.
• Small portions may be reproduced as illustrations for reviews or quotes in other works
without this permission notice if proper citation is given.
Exceptions to these rules may be granted for academic purposes: Write to the author and ask.
Disclaimer: I confess to violating somebody’s copyright when I copied this copyright state-
ment.
DOCUMENT VERSION
Sources for this notes compilation can be found in the github repository
https://github.com/peeterjoot/physicsplay
The last commit (Nov 4, 2015) associated with this PDF was
834e136b8e3635cc7ab824f3c74fcfdf8c6f909f
Dedicated to:
Aurora and Lance, my awesome kids, and
Sofia, who not only tolerates and encourages my studies, but is also awesome enough to think
that math is sexy.
PREFACE
These are my personal lecture notes for the Fall 2011 University of Toronto Quantum Mechanics II course (PHY456H1F), taught by Prof. John E. Sipe.
The official description of this course was:
Quantum dynamics in Heisenberg and Schrödinger pictures; WKB approximation; Variational Method; Time-Independent Perturbation Theory; Spin; Addition of Angular Momentum; Time-Dependent Perturbation Theory; Scattering.
This document contains a few things:
• My lecture notes.
Typos, if any, are probably mine (Peeter), and I make no claim of, nor attempt at, spelling or grammatical correctness.
• Notes from reading of the text [4]. This may include observations, notes on what seem
like errors, and some solved problems.
• Different ways of tackling some of the assigned problems than the solution sets.
• Some personal notes exploring details that were not clear to me from the lectures.
CONTENTS

Preface

v appendices
a harmonic oscillator review
b verifying the helmholtz green's function
c evaluating the squared sinc integral
d derivative recurrence relation for hermite polynomials
e mathematica notebooks

vi bibliography
APPROXIMATE METHODS AND PERTURBATION

APPROXIMATE METHODS
1
1.1 approximate methods for finding energy eigenvalues and eigenkets
Why?
• Simplifies dynamics
take

|\Psi(0)\rangle = \sum_{n\alpha} |\Psi_{n\alpha}\rangle \langle \Psi_{n\alpha} | \Psi(0) \rangle = \sum_{n\alpha} c_{n\alpha} |\Psi_{n\alpha}\rangle    (1.2)
Then
• "Applied fields" can often be thought of as driving the system from one eigenstate to another.
• Stat mech.
In thermal equilibrium
\langle O \rangle = \frac{\sum_{n\alpha} \langle \Psi_{n\alpha} | O | \Psi_{n\alpha} \rangle e^{-\beta E_n}}{Z}    (1.4)
where

\beta = \frac{1}{k_B T},    (1.5)

and

Z = \sum_{n\alpha} e^{-\beta E_n}.    (1.6)

For a general ket

|\Psi\rangle = \sum_{n\alpha} c_{n\alpha} |\Psi_{n\alpha}\rangle,    (1.7)

with

\langle \Psi | \Psi \rangle = \sum_{n\alpha} |c_{n\alpha}|^2,    (1.9)
we have

\frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} = \frac{\sum_{n\alpha} |c_{n\alpha}|^2 E_n}{\sum_{m\beta} |c_{m\beta}|^2}
\ge \frac{\sum_{n\alpha} |c_{n\alpha}|^2 E_0}{\sum_{m\beta} |c_{m\beta}|^2}    (1.10)
= E_0
So for any ket we can form an upper bound for the ground state energy

\frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} \ge E_0.    (1.11)
There is a whole set of strategies based on estimating the ground state energy. This is called
the Variational principle for ground state. See §24.2 in the text [4].
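The bound eq. (1.11) is easy to demonstrate numerically. The following is a minimal sketch of my own (not from the lectures), using a hypothetical 2×2 real symmetric Hamiltonian: the Rayleigh quotient of every random trial vector sits above the exact ground state eigenvalue.

```python
import math
import random

# A hypothetical 2x2 real symmetric Hamiltonian (units suppressed).
H = [[1.0, 0.3],
     [0.3, 2.0]]

# Exact ground state energy of a symmetric matrix [[a, b], [b, d]].
a, b, d = H[0][0], H[0][1], H[1][1]
E0 = (a + d) / 2 - math.sqrt(((a - d) / 2) ** 2 + b ** 2)

def rayleigh(psi):
    """<psi|H|psi> / <psi|psi> for a real, nonzero trial vector psi."""
    Hpsi = [H[0][0] * psi[0] + H[0][1] * psi[1],
            H[1][0] * psi[0] + H[1][1] * psi[1]]
    num = psi[0] * Hpsi[0] + psi[1] * Hpsi[1]
    den = psi[0] ** 2 + psi[1] ** 2
    return num / den

# Every trial ket bounds the ground state energy from above.
random.seed(0)
for _ in range(1000):
    psi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    assert rayleigh(psi) >= E0 - 1e-12
```

The same check works for any Hermitian matrix; only the closed-form `E0` above is special to the 2×2 case.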
We define the functional

E[\Psi] = \frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} \ge E_0.    (1.12)

If |\Psi\rangle = c |\Psi_0\rangle, where |\Psi_0\rangle is the normalized ground state, then

E[c \Psi_0] = E_0.    (1.13)
For the hydrogen atom we have

\langle r | H | r' \rangle = H \delta^3(r - r'),    (1.14)

where

H = -\frac{\hbar^2}{2\mu} \nabla^2 - \frac{e^2}{r}.    (1.15)

Here \mu is the reduced mass.
The ground state satisfies

H |\Psi_0\rangle = E_0 |\Psi_0\rangle,    (1.16)

with

E_0 = -\text{Ry},    (1.17)

where

\text{Ry} = \frac{\mu e^4}{2 \hbar^2} \approx 13.6 \, \text{eV}    (1.18)

and

a_0 = \frac{\hbar^2}{\mu e^2} \approx 0.53 \, \text{Å}.    (1.20)
For any trial wavefunction we can estimate

\langle \Psi | H | \Psi \rangle = \int d^3 r \, \Psi^*(r) \left( -\frac{\hbar^2}{2\mu} \nabla^2 - \frac{e^2}{r} \right) \Psi(r)
    (1.21)
\langle \Psi | \Psi \rangle = \int d^3 r \, |\Psi(r)|^2
Or guess the shape. Using the trial wave function e^{-\alpha r^2}

E(\alpha) = \frac{\int d^3 r \, e^{-\alpha r^2} \left( -\frac{\hbar^2}{2\mu} \nabla^2 - \frac{e^2}{r} \right) e^{-\alpha r^2}}{\int d^3 r \, e^{-2 \alpha r^2}},    (1.23)

we find E(\alpha) = A \alpha - B \sqrt{\alpha}, with

A = \frac{3 \hbar^2}{2 \mu}
    (1.25)
B = 2 e^2 \left( \frac{2}{\pi} \right)^{1/2}
Minimum at

\alpha_0 = \left( \frac{\mu e^2}{\hbar^2} \right)^2 \frac{8}{9\pi}.    (1.26)

So

E(\alpha_0) = -\frac{\mu e^4}{2 \hbar^2} \frac{8}{3\pi} = -0.85 \, \text{Ry}.    (1.27)

maybe not too bad...
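As a numerical sanity check of my own (not part of the lecture), the Gaussian trial energy E(\alpha) = A\alpha - B\sqrt{\alpha} can be minimized in atomic units (\hbar = \mu = e = 1, so 1 Ry = 1/2 hartree), reproducing the -0.85 Ry estimate.

```python
import math

# Atomic units: hbar = mu = e = 1, so Ry = 1/2 hartree.
A = 3.0 / 2.0                        # kinetic coefficient, 3 hbar^2/(2 mu)
B = 2.0 * math.sqrt(2.0 / math.pi)   # potential coefficient, 2 e^2 (2/pi)^(1/2)

def E(alpha):
    """Variational energy for the trial function exp(-alpha r^2)."""
    return A * alpha - B * math.sqrt(alpha)

# Stationary point: dE/dalpha = A - B/(2 sqrt(alpha)) = 0.
alpha0 = (B / (2.0 * A)) ** 2
assert abs(alpha0 - 8.0 / (9.0 * math.pi)) < 1e-12  # matches (1.26) in these units

E_min_Ry = E(alpha0) / 0.5           # convert hartree -> Rydberg
# E_min_Ry is -8/(3 pi), roughly -0.8488 Ry, consistent with (1.27)
assert abs(E_min_Ry + 8.0 / (3.0 * math.pi)) < 1e-12
```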
\Psi_0(r_1, r_2)    (1.28)

\left( -\frac{\hbar^2}{2m} \nabla_1^2 - \frac{\hbar^2}{2m} \nabla_2^2 - \frac{2 e^2}{r_1} - \frac{2 e^2}{r_2} + \frac{e^2}{|r_1 - r_2|} \right) \Psi_0(r_1, r_2) = E_0 \Psi_0(r_1, r_2)    (1.29)
Nobody can solve this problem. It is one of the simplest real problems in QM that cannot
be solved exactly.
Suppose that we neglected the electron-electron repulsion. Then
where

\left( -\frac{\hbar^2}{2m} \nabla^2 - \frac{2 e^2}{r} \right) \Phi_{100}(r) = \epsilon \Phi_{100}(r),    (1.31)

with

\epsilon = -4 \, \text{Ry},    (1.32)

\text{Ry} = \frac{m e^4}{2 \hbar^2}.    (1.33)

This is the solution to the repulsion-free problem. Now we want to put the electron-electron repulsion back in, and make an estimate.
Trial wavefunction: a product of hydrogenic 1s states, with an effective nuclear charge Z as the variational parameter,

\Psi(r_1, r_2) = \frac{Z^3}{\pi a_0^3} e^{-Z(r_1 + r_2)/a_0}.

This gives

E(Z) = 2 \, \text{Ry} \left( Z^2 - 4 Z + \frac{5}{8} Z \right).    (1.37)

The Z^2 comes from the kinetic energy, the -4Z is the electron-nuclear attraction, and the final term is from the electron-electron repulsion.
The minimizing charge is

Z = 2 - \frac{5}{16}.    (1.38)
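A quick numeric check of this minimization (my own sketch; energies in Rydbergs):

```python
# Helium variational energy in Rydbergs: E(Z) = 2 (Z^2 - 4Z + (5/8) Z).
def E_He(Z):
    return 2.0 * (Z * Z - 4.0 * Z + (5.0 / 8.0) * Z)

# Stationary condition dE/dZ = 2 (2Z - 4 + 5/8) = 0.
Z_min = 2.0 - 5.0 / 16.0             # = 27/16
E_min = E_He(Z_min)                   # = -2 (27/16)^2 Ry, about -5.70 Ry

assert abs(Z_min - 27.0 / 16.0) < 1e-12
assert abs(E_min + 2.0 * (27.0 / 16.0) ** 2) < 1e-12
```

At 13.6 eV per Rydberg this is roughly -77.5 eV, close to the measured helium ground state energy.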
\vdots

E_1 \sim |\psi_1\rangle    (2.1)
E_0 \sim |\psi_0\rangle

We form projections onto the |\psi_n\rangle "direction". The difference from this projection will be written |\psi_{n\perp}\rangle, as depicted in fig. 2.1. This illustration cannot be interpreted literally, but it conveys the idea nicely.
For the amount along the projection onto |\psi_n\rangle we write
This gives

E[\psi] = \frac{\langle \psi | H | \psi \rangle}{\langle \psi | \psi \rangle}
= \frac{E_n |1 + \delta\alpha|^2 + \langle \delta\psi_{n\perp} | H | \delta\psi_{n\perp} \rangle}{|1 + \delta\alpha|^2 + \langle \delta\psi_{n\perp} | \delta\psi_{n\perp} \rangle}
= \frac{E_n + \frac{\langle \delta\psi_{n\perp} | H | \delta\psi_{n\perp} \rangle}{|1 + \delta\alpha|^2}}{1 + \frac{\langle \delta\psi_{n\perp} | \delta\psi_{n\perp} \rangle}{|1 + \delta\alpha|^2}}    (2.11)
= E_n \left( 1 - \frac{\langle \delta\psi_{n\perp} | \delta\psi_{n\perp} \rangle}{|1 + \delta\alpha|^2} + \cdots \right) + \cdots
= E_n \left[ 1 + O\!\left( (\delta\psi_{n\perp})^2 \right) \right]
\vdots

E_2 \sim |\psi_2\rangle
E_1 \sim |\psi_1\rangle    (2.13)
E_0 \sim |\psi_0\rangle
Suppose we wanted an estimate of E_1, and that we knew the ground state |\psi_0\rangle. For any trial |\psi\rangle form

|\psi'\rangle = |\psi\rangle - |\psi_0\rangle \langle \psi_0 | \psi \rangle.    (2.14)

We are taking out the projection of the ground state from an arbitrary trial function.
For a state written in terms of the basis states, allowing for an \alpha degeneracy

|\psi\rangle = c_0 |\psi_0\rangle + \sum_{n>0, \alpha} c_{n\alpha} |\psi_{n\alpha}\rangle    (2.15)

and

|\psi'\rangle = \sum_{n>0, \alpha} c_{n\alpha} |\psi_{n\alpha}\rangle    (2.17)
(note that there are some theorems that tell us that the ground state is generally non-degenerate). The energy functional for this state is

E[\psi'] = \frac{\langle \psi' | H | \psi' \rangle}{\langle \psi' | \psi' \rangle} = \frac{\sum_{n>0, \alpha} |c_{n\alpha}|^2 E_n}{\sum_{m>0, \beta} |c_{m\beta}|^2} \ge E_1.    (2.18)
Often we do not know the exact ground state, although we might have a guess |\tilde{\psi}_0\rangle. For

|\psi''\rangle = |\psi\rangle - |\tilde{\psi}_0\rangle \langle \tilde{\psi}_0 | \psi \rangle,    (2.19)

can we still claim

\frac{\langle \psi'' | H | \psi'' \rangle}{\langle \psi'' | \psi'' \rangle} \ge E_1 ?    (2.21)
Somewhat remarkably, this is often possible. We talked last time about the hydrogen atom. In that case, you can guess that the excited state is in the 2s orbital, and therefore orthogonal to the 1s (?) orbital.
2.3 problems
H_o = \frac{P^2}{2m} + \frac{1}{2} m \omega^2 X^2    (2.22)
and denote the energy eigenstates by |ni, where n is the eigenvalue of the number operator.
1. Find \langle n | X^4 | n \rangle.

2. Quadratic perturbation
Find the ground state energy of the Hamiltonian H = H_o + \gamma X^2. You may assume \gamma > 0. [Hint: This is not a trick question.]

3. Linear perturbation
Find the ground state energy of the Hamiltonian H = H_o - \alpha X. [Hint: This is a bit harder than part 2, but not much. Try "completing the square."]
Answer for Exercise 2.1
Part 1. X^4   Working through appendix A we have now got enough context to attempt the first part of the question, the calculation of \langle n | X^4 | n \rangle.
a |n\rangle = c_n |n-1\rangle,    (2.25)

or

n \langle n | n \rangle = \langle n | a^\dagger a | n \rangle = |c_n|^2 \langle n-1 | n-1 \rangle = |c_n|^2,    (2.26)

so that

a |n\rangle = \sqrt{n} \, |n-1\rangle.    (2.27)

Similarly let

a^\dagger |n\rangle = b_n |n+1\rangle,    (2.28)

or

(1 + n) \langle n | n \rangle = \langle n | a a^\dagger | n \rangle = |b_n|^2 \langle n+1 | n+1 \rangle = |b_n|^2,    (2.29)

so that

a^\dagger |n\rangle = \sqrt{n+1} \, |n+1\rangle.    (2.30)
\langle n | X^4 | n \rangle = \frac{\hbar^2}{4 m^2 \omega^2} \langle n | (a + a^\dagger)^4 | n \rangle    (2.31)
(a + a^\dagger)^2 |n\rangle = \left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) |n\rangle
= \left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) |n\rangle
= \left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) |n\rangle    (2.32)
= \sqrt{n} \sqrt{n-1} \, |n-2\rangle + \sqrt{n+1} \sqrt{n+2} \, |n+2\rangle + |n\rangle + 2n \, |n\rangle

Squaring, utilizing the Hermitian nature of the X operator,

\langle n | X^4 | n \rangle = \frac{\hbar^2}{4 m^2 \omega^2} \left( n(n-1) + (n+1)(n+2) + (1 + 2n)^2 \right) = \frac{\hbar^2}{4 m^2 \omega^2} \left( 6 n^2 + 6 n + 3 \right)    (2.33)
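This diagonal matrix element is easy to verify numerically by truncating the number basis. A sketch of my own (not part of the original notes), in units \hbar = m = \omega = 1:

```python
import math

N = 12  # truncated number-basis size; exact for the small n checked below

# Matrix elements a[m][n] = <m|a|n> = sqrt(n) delta_{m, n-1}
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(N)] for m in range(N)]
ad = [[a[n][m] for n in range(N)] for m in range(N)]  # adjoint (real transpose)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# X = sqrt(hbar/(2 m omega)) (a + a^dagger); here hbar = m = omega = 1.
X = [[(a[i][j] + ad[i][j]) / math.sqrt(2.0) for j in range(N)] for i in range(N)]
X2 = matmul(X, X)
X4 = matmul(X2, X2)

# <n|X^4|n> should be (1/4)(6 n^2 + 6 n + 3); truncation is harmless for small n.
for n in range(4):
    assert abs(X4[n][n] - (6 * n * n + 6 * n + 3) / 4.0) < 1e-9
```

Only states up to n + 2 enter the diagonal element, so a modest truncation already gives the exact values for low n.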
Part 2. Quadratic ground state Find the ground state energy of the Hamiltonian H = H0 +
γX 2 for γ > 0.
The new Hamiltonian has the form

H = \frac{P^2}{2m} + \frac{1}{2} m \left( \omega^2 + \frac{2 \gamma}{m} \right) X^2 = \frac{P^2}{2m} + \frac{1}{2} m \omega'^2 X^2,    (2.34)

where

\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }.    (2.35)
The energy states of the Hamiltonian are thus

E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{2} \right),    (2.36)

and the ground state of the modified Hamiltonian H is thus

E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }.    (2.37)
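Since the first order perturbative shift for this problem is \gamma \langle 0 | X^2 | 0 \rangle = \gamma \hbar/(2 m \omega), the exact result eq. (2.37) should agree with it to O(\gamma^2). A small check of my own (units \hbar = m = 1):

```python
import math

# Units hbar = m = 1. Exact ground state energy of H = H_o + gamma X^2.
def E0_exact(omega, gamma):
    return 0.5 * math.sqrt(omega**2 + 2.0 * gamma)

# First order estimate: E0 ~ omega/2 + gamma <0|X^2|0> = omega/2 + gamma/(2 omega).
def E0_pert(omega, gamma):
    return 0.5 * omega + gamma / (2.0 * omega)

omega = 1.0
for gamma in (1e-2, 1e-3, 1e-4):
    # The difference shrinks like gamma^2, as expected for a first order estimate.
    assert abs(E0_exact(omega, gamma) - E0_pert(omega, gamma)) < gamma**2
```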
Part 3. Linear ground state Find the ground state energy of the Hamiltonian H = H0 − αX.
With a bit of play, this new Hamiltonian can be factored into

H = \hbar \omega \left( b^\dagger b + \frac{1}{2} \right) - \frac{\alpha^2}{2 m \omega^2} = \hbar \omega \left( b b^\dagger - \frac{1}{2} \right) - \frac{\alpha^2}{2 m \omega^2},    (2.38)
where

b = \sqrt{\frac{m \omega}{2 \hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{2 m \hbar \omega}}
    (2.39)
b^\dagger = \sqrt{\frac{m \omega}{2 \hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{2 m \hbar \omega}}.
From eq. (2.38) we see that we have the same sort of commutator relationship as in the original Hamiltonian

\left[ b, b^\dagger \right] = 1,    (2.40)

and because of this, all the preceding arguments follow unchanged, with the exception that the energy eigenstates of this Hamiltonian are shifted by a constant

H |n\rangle = \left( \hbar \omega \left( n + \frac{1}{2} \right) - \frac{\alpha^2}{2 m \omega^2} \right) |n\rangle,    (2.41)

so that

E_0 = \frac{\hbar \omega}{2} - \frac{\alpha^2}{2 m \omega^2}.    (2.43)

This makes sense. A translation of the entire position of the system should not affect the energy level distribution of the system, but we have set our reference potential differently, and have this constant energy adjustment to the entire system.
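The constant shift in eq. (2.43) is just the depth of the completed square. Here is a small sketch of my own (hypothetical numeric values) checking that the shifted potential is an unmodified parabola displaced by -\alpha^2/(2 m \omega^2):

```python
# Completing the square:
# (1/2) m w^2 x^2 - a x = (1/2) m w^2 (x - a/(m w^2))^2 - a^2/(2 m w^2).
m, w, a = 2.0, 1.5, 0.7  # arbitrary hypothetical values

def V(x):
    return 0.5 * m * w * w * x * x - a * x

x0 = a / (m * w * w)                  # location of the shifted minimum
shift = -a * a / (2.0 * m * w * w)    # the constant appearing in E_0

assert abs(V(x0) - shift) < 1e-12
# The quadratic term about x0 is unchanged, so every level moves down by |shift|:
for dx in (0.1, -0.2, 0.5):
    assert abs(V(x0 + dx) - (0.5 * m * w * w * dx * dx + shift)) < 1e-12
```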
Exercise 2.2 Expectation values for position operators for spinless hydrogen (2011 ps1/p2)
Show that for all energy eigenstates |Φnlm i of the (spinless) hydrogen atom, where as usual
n, l, and m are respectively the principal, azimuthal, and magnetic quantum numbers, we have
[Hint: Take note of the parity of the spherical harmonics (see "quick summary" notes on the
spherical harmonics).]
where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

\langle \Phi_{nlm} | X | \Phi_{nlm} \rangle = \int \langle \Phi_{nlm} | r' \rangle \langle r' | X | r \rangle \langle r | \Phi_{nlm} \rangle \, d^3 r \, d^3 r'
= \int \langle \Phi_{nlm} | r' \rangle \, \delta(r - r') \, r \sin\theta \cos\phi \, \langle r | \Phi_{nlm} \rangle \, d^3 r \, d^3 r'
= \int \Phi_{nlm}^*(r) \, r \sin\theta \cos\phi \, \Phi_{nlm}(r) \, d^3 r    (2.46)
\sim \int r^2 \, dr \, F_{nl}^2\!\left( \frac{2r}{n a_0} \right) r \int \sin\theta \, d\theta \, d\phi \, Y_l^{m*}(\theta, \phi) \sin\theta \cos\phi \, Y_l^m(\theta, \phi)
Recalling that the only \phi dependence in Y_l^m is e^{i m \phi}, we can perform the d\phi integration directly, which is

\int_0^{2\pi} \cos\phi \, d\phi \, e^{-i m \phi} e^{i m \phi} = 0.    (2.47)
The Y expectation differs only by a \sin\phi factor

\langle \Phi_{nlm} | Y | \Phi_{nlm} \rangle \sim \int r^2 \, dr \, F_{nl}^2\!\left( \frac{2r}{n a_0} \right) r \int \sin\theta \, d\theta \, d\phi \, Y_l^{m*}(\theta, \phi) \sin\theta \sin\phi \, Y_l^m(\theta, \phi).    (2.48)

Our \phi integral is then just

\int_0^{2\pi} \sin\phi \, d\phi \, e^{-i m \phi} e^{i m \phi} = 0,    (2.49)
For the Z expectation

\langle \Phi_{nlm} | Z | \Phi_{nlm} \rangle \sim \int dr \, F_{nl}^2\!\left( \frac{2r}{n a_0} \right) r^3 \int_0^{2\pi} d\phi \int_0^{\pi} \sin\theta \, d\theta \left( \frac{d^{l-m}}{d(\cos\theta)^{l-m}} (\sin\theta)^{2l} \right)^2 (\sin\theta)^{-2m} \sin\theta \cos\theta.    (2.50)
u = \cos\theta
\sin\theta \, d\theta = -d(\cos\theta) = -du    (2.51)
u \in [1, -1],
Here we have the product of two even functions, times one odd function (u), over a symmetric
interval, so the end result is zero, completing the problem.
I was not able to see how to exploit the parity result suggested in the problem, but it was not
so bad to show these directly.
\langle r | L_i | \psi \rangle = \int d^3 r' \, \langle r | L_i | r' \rangle \langle r' | \psi \rangle
= \int d^3 r' \, \langle r | \epsilon_{iab} X_a P_b | r' \rangle \langle r' | \psi \rangle
= -i \hbar \epsilon_{iab} \int d^3 r' \, x_a \langle r | r' \rangle \frac{\partial \psi(r')}{\partial x_b'}    (2.53)
= -i \hbar \epsilon_{iab} \int d^3 r' \, x_a \delta^3(r - r') \frac{\partial \psi(r')}{\partial x_b'}
= -i \hbar \epsilon_{iab} x_a \frac{\partial \psi(r)}{\partial x_b}
\langle r | L_i | \psi \rangle = -i \hbar \epsilon_{iab} x_a \frac{\partial \psi(r)}{\partial x_b}
= -i \hbar \epsilon_{iab} x_a \frac{\partial r}{\partial x_b} \frac{d \psi(r)}{d r}    (2.54)
= -i \hbar \epsilon_{iab} x_a \frac{1}{2} \frac{2 x_b}{r} \frac{d \psi(r)}{d r}

We are left with a sum of the symmetric product x_a x_b with the antisymmetric tensor \epsilon_{iab}, so this is zero for all i \in [1, 3].
With the substitutions

x = \frac{Z r_1}{a_0}
    (2.56)
y = \frac{Z r_2}{a_0}
we have

\left\langle -\frac{\hbar^2}{2m} \left( \nabla_1^2 + \nabla_2^2 \right) \right\rangle
= -\frac{8 \hbar^2 Z^2}{m a_0^2} \int dx \, dy \, x^2 y^2 e^{-x-y} \left( \frac{2}{x} \frac{\partial}{\partial x} + \frac{\partial^2}{\partial x^2} + \frac{2}{y} \frac{\partial}{\partial y} + \frac{\partial^2}{\partial y^2} \right) e^{-x-y}
= -\frac{8 \hbar^2 Z^2}{m a_0^2} \int dx \, dy \, x^2 y^2 e^{-x-y} \left( -\frac{2}{x} + 1 - \frac{2}{y} + 1 \right) e^{-x-y}    (2.57)
= \frac{16 \hbar^2 Z^2}{m a_0^2} \int dx \, dy \, x^2 y^2 e^{-2x-2y} \left( \frac{1}{x} + \frac{1}{y} - 1 \right)
= \frac{\hbar^2 Z^2}{m a_0^2}
With

a_0 = \frac{\hbar^2}{m e^2},    (2.58)

we have the result from the text

\left\langle -\frac{\hbar^2}{2m} \left( \nabla_1^2 + \nabla_2^2 \right) \right\rangle = \frac{Z^2 e^2}{a_0}.    (2.59)
\left\langle 2 e^2 \left( \frac{1}{r_1} + \frac{1}{r_2} \right) \right\rangle = 2 e^2 (4\pi)^2 \frac{Z^6}{\pi^2 a_0^6} \int e^{-2(r_1 + r_2) Z / a_0} r_1^2 r_2^2 \, dr_1 \, dr_2 \left( \frac{1}{r_1} + \frac{1}{r_2} \right)
= 32 e^2 \frac{Z}{a_0} \int e^{-2x-2y} x^2 y^2 \, dx \, dy \left( \frac{1}{x} + \frac{1}{y} \right)    (2.60)
= 4 e^2 \frac{Z}{a_0}
We start with

\left\langle \frac{e^2}{|r_1 - r_2|} \right\rangle = \left( \frac{Z^3}{\pi a_0^3} \right)^2 e^2 \int d^3 k \, d^3 r_1 \, d^3 r_2 \, \frac{1}{2 \pi^2 k^2} e^{i k \cdot (r_1 - r_2)} e^{-2 Z (r_1 + r_2)/a_0}
= \left( \frac{Z^3}{\pi a_0^3} \right)^2 e^2 \frac{1}{2 \pi^2} \int d^3 k \, \frac{1}{k^2} \int d^3 r_1 \, e^{i k \cdot r_1} e^{-2 Z r_1/a_0} \int d^3 r_2 \, e^{-i k \cdot r_2} e^{-2 Z r_2/a_0}    (2.61)
To evaluate the last two integrals, I figure the author has aligned the axes for the d^3 r_1 volume elements to make the integrals easier. Specifically, for the first, so that k \cdot r_1 = k r_1 \cos\theta, the integral takes the form

\int d^3 r_1 \, e^{i k \cdot r_1} e^{-2 Z r_1/a_0} = -\int r_1^2 \, dr_1 \, d\phi \, d(\cos\theta) \, e^{i k r_1 \cos\theta} e^{-2 Z r_1/a_0}
= -2\pi \int_{r=0}^{\infty} \int_{u=1}^{-1} r^2 \, dr \, du \, e^{i k r u} e^{-2 Z r/a_0}
= -2\pi \int_{r=0}^{\infty} r^2 \, dr \, \frac{1}{i k r} \left( e^{-i k r} - e^{i k r} \right) e^{-2 Z r/a_0}    (2.62)
= \frac{4\pi}{k} \int_{r=0}^{\infty} r \, dr \, \frac{1}{2i} \left( e^{i k r} - e^{-i k r} \right) e^{-2 Z r/a_0}
= \frac{4\pi}{k} \int_{r=0}^{\infty} r \, dr \, \sin(k r) \, e^{-2 Z r/a_0}
For this last, Mathematica gives me (24.75) from the text

\int d^3 r_1 \, e^{i k \cdot r_1} e^{-2 Z r_1/a_0} = \frac{16 \pi Z a_0^3}{(k^2 a_0^2 + 4 Z^2)^2}    (2.63)
For the second integral, if we align the axes so that -k \cdot r_2 = k r_2 \cos\theta and repeat, then we have

\left\langle \frac{e^2}{|r_1 - r_2|} \right\rangle = \left( \frac{Z^3}{\pi a_0^3} \right)^2 e^2 \frac{1}{2 \pi^2} \left( 16 \pi Z a_0^3 \right)^2 \int d^3 k \, \frac{1}{k^2} \frac{1}{(k^2 a_0^2 + 4 Z^2)^4}
= \frac{128 Z^8}{\pi^2} e^2 \int dk \, d\Omega \, \frac{1}{(k^2 a_0^2 + 4 Z^2)^4}    (2.64)
= \frac{512 Z^8}{\pi} e^2 \int dk \, \frac{1}{(k^2 a_0^2 + 4 Z^2)^4}
With the substitution k a_0 = 2 Z \kappa this is

\left\langle \frac{e^2}{|r_1 - r_2|} \right\rangle = \frac{512 Z^8}{\pi} e^2 \frac{2 Z}{a_0 (2Z)^8} \int d\kappa \, \frac{1}{(\kappa^2 + 1)^4}
= \frac{4 Z}{\pi a_0} e^2 \int d\kappa \, \frac{1}{(\kappa^2 + 1)^4}    (2.65)
Here I initially evaluated this definite integral as 1/3, using the antiderivative

\frac{d}{dx} \frac{-1}{3 (1 + x)^3} = \frac{1}{(1 + x)^4},    (2.66)

but that treats (\kappa^2 + 1)^4 as if it were (\kappa + 1)^4. The correct value is

\int_0^{\infty} \frac{d\kappa}{(\kappa^2 + 1)^4} = \frac{5 \pi}{32}.    (2.67)

This gives us

\left\langle \frac{e^2}{|r_1 - r_2|} \right\rangle = \frac{4 Z e^2}{\pi a_0} \int_0^{\infty} \frac{d\kappa}{(\kappa^2 + 1)^4} = \frac{4 Z e^2}{\pi a_0} \frac{5 \pi}{32} = \frac{5}{8} \frac{Z e^2}{a_0},    (2.68)

which is exactly the 5/8 result used in the text (and in problem set III), so no compensating error is required after all.
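A quick numerical check of this definite integral (pure stdlib sketch of my own; the substitution \kappa = \tan t turns the integrand into \cos^6 t on [0, \pi/2]):

```python
import math

# kappa = tan(t) maps [0, inf) to [0, pi/2); d kappa = sec^2 t dt and
# 1/(1 + kappa^2)^4 = cos^8 t, so the transformed integrand is cos^6 t.
def integrand(t):
    return math.cos(t) ** 6

# Composite Simpson's rule on [0, pi/2]
n, a, b = 10000, 0.0, math.pi / 2
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
integral = s * h / 3

assert abs(integral - 5 * math.pi / 32) < 1e-10
```

Multiplying by the 4Z e^2/(\pi a_0) prefactor of (2.65) then reproduces the 5/8 coefficient.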
2.3.2 Curious problem using the variational method to find the ground state energy of the harmonic oscillator
|\psi\rangle = \sum_m c_m |\psi_m\rangle,    (2.69)

where H |\psi_m\rangle = E_m |\psi_m\rangle, we have

\langle \psi | H | \psi \rangle = \sum_m c_m^* \langle \psi_m | H \sum_n c_n |\psi_n\rangle
= \sum_m c_m^* \langle \psi_m | \sum_n c_n E_n |\psi_n\rangle
= \sum_{m,n} c_m^* c_n E_n \langle \psi_m | \psi_n \rangle
= \sum_m |c_m|^2 E_m    (2.72)
\ge \sum_m |c_m|^2 E_0
= E_0 \langle \psi | \psi \rangle
This allows us to form an estimate of the ground state energy for the system, by using any
state vector formed from a superposition of energy eigenstates, by simply calculating
E_0 \le \frac{\langle \psi | H | \psi \rangle}{\langle \psi | \psi \rangle}.    (2.73)
One of the examples in the text is to use this to find an approximation of the ground state
energy for the Helium atom Hamiltonian
H = -\frac{\hbar^2}{2m} \left( \nabla_1^2 + \nabla_2^2 \right) - 2 e^2 \left( \frac{1}{r_1} + \frac{1}{r_2} \right) + \frac{e^2}{|r_1 - r_2|}.    (2.74)
This calculation is performed using a trial function that was a solution of the interaction free Hamiltonian, despite the fact that it is not a solution of the interacting Hamiltonian. The end result ends up being pretty close to the measured value (although there is a pesky error in the book that appears to require a compensating error somewhere else).
Part of the variational technique used in that problem is to let Z vary, and then, once the normalized expectation is computed, to set its derivative with respect to Z equal to zero, picking out the trial wavefunction of that functional form with the lowest energy expectation. Considering the harmonic oscillator, we find that this final variation does not necessarily produce meaningful results.
We use the trial function

\phi = e^{-\beta |x|},    (2.76)

to perform the variational calculation above for the harmonic oscillator Hamiltonian, which has the one dimensional position space representation

H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} m \omega^2 x^2.    (2.77)
We can find the normalization easily

\langle \phi | \phi \rangle = \int_{-\infty}^{\infty} e^{-2 \beta |x|} \, dx
= 2 \frac{1}{2 \beta} \int_0^{\infty} e^{-2 \beta x} \, 2 \beta \, dx
= 2 \frac{1}{2 \beta} \int_0^{\infty} e^{-u} \, du    (2.78)
= \frac{1}{\beta}
\langle \phi | H | \phi \rangle = \int_{-\infty}^{\infty} dx \, e^{-\beta |x|} \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} m \omega^2 x^2 \right) e^{-\beta |x|}
= \lim_{\epsilon \to 0} \left( \int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{\infty} \right) dx \, e^{-\beta |x|} \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{2} m \omega^2 x^2 \right) e^{-\beta |x|}    (2.79)
= 2 \int_0^{\infty} dx \, e^{-2 \beta x} \left( -\frac{\hbar^2 \beta^2}{2m} + \frac{1}{2} m \omega^2 x^2 \right) - \lim_{\epsilon \to 0} \int_{-\epsilon}^{\epsilon} dx \, e^{-\beta |x|} \frac{\hbar^2}{2m} \frac{d^2}{dx^2} e^{-\beta |x|}

The first integral is

2 \int_0^{\infty} dx \, e^{-2 \beta x} \left( -\frac{\hbar^2 \beta^2}{2m} + \frac{1}{2} m \omega^2 x^2 \right) = -\frac{\hbar^2 \beta^2}{m} \int_0^{\infty} dx \, e^{-2 \beta x} + m \omega^2 \int_0^{\infty} dx \, x^2 e^{-2 \beta x}
= -\frac{\hbar^2 \beta}{2m} \int_0^{\infty} du \, e^{-u} + \frac{m \omega^2}{8 \beta^3} \int_0^{\infty} du \, u^2 e^{-u}    (2.80)
= -\frac{\beta \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^3}
A naive evaluation of this integral requires the origin to be avoided, where the derivative of |x| becomes undefined. This also provides a nice way to evaluate the integral, because we can double the integral and halve the range, eliminating the absolute value.
However, can we assume that the remaining integral is zero?
I thought that we could, but the end result is curious. I also verified my calculation symbolically in 24.4.3_attempt_with_mathematica.nb, but found that Mathematica required some special hand holding to deal with the origin. Initially I coded this by avoiding the origin as above, but later switched to |x| = \sqrt{x^2}, which Mathematica treats more gracefully.
Without that last integral, involving our singular |x|' and |x|'' terms, our ground state energy estimation, parameterized by \beta, is

E[\beta] = -\frac{\beta^2 \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^2}.    (2.81)
Observe that if we set the derivative of this equal to zero to find the "best" \beta associated with this trial function

0 = \frac{\partial E}{\partial \beta} = -\frac{\beta \hbar^2}{m} - \frac{m \omega^2}{2 \beta^3},    (2.82)

we find that the parameter \beta that best minimizes this ground state energy function is complex, with value

\beta^2 = \pm \frac{i m \omega}{\sqrt{2}\, \hbar}.    (2.83)
It appears at first glance that we cannot minimize eq. (2.81) to find a best ground state energy estimate associated with the trial function eq. (2.76). We do, however, know the exact ground state energy \hbar\omega/2 for the harmonic oscillator. Is it possible to show that for all \beta we have E[\beta] \ge \hbar\omega/2? Suppose the trial function has a component outside the space spanned by the energy eigenkets

|\phi\rangle = \sum_n c_n |\psi_n\rangle + c_\perp |\psi_\perp\rangle,    (2.85)
where |\psi_\perp\rangle is unknown, and presumed not orthogonal to any of the energy eigenkets. We can still calculate the norm of the trial function

\langle \phi | \phi \rangle = \sum_{n,m} \langle c_n \psi_n + c_\perp \psi_\perp | c_m \psi_m + c_\perp \psi_\perp \rangle
= \sum_n |c_n|^2 + c_n^* c_\perp \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | \psi_\perp \rangle    (2.86)
= \langle \psi_\perp | \psi_\perp \rangle + \sum_n |c_n|^2 + 2 \, \text{Re} \left( c_n^* c_\perp \langle \psi_n | \psi_\perp \rangle \right).
Similarly we can calculate the energy expectation for this unnormalized state, and find

\langle \phi | H | \phi \rangle = \sum_{n,m} \langle c_n \psi_n + c_\perp \psi_\perp | H | c_m \psi_m + c_\perp \psi_\perp \rangle
= \sum_n |c_n|^2 E_n + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle    (2.87)
Our normalized energy expectation is therefore the considerably messier

\frac{\langle \phi | H | \phi \rangle}{\langle \phi | \phi \rangle} = \frac{\sum_n |c_n|^2 E_n + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle}{\langle \psi_\perp | \psi_\perp \rangle + \sum_m |c_m|^2 + 2 \, \text{Re} \left( c_m^* c_\perp \langle \psi_m | \psi_\perp \rangle \right)}    (2.88)
\ge \frac{\sum_n |c_n|^2 E_0 + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle}{\langle \psi_\perp | \psi_\perp \rangle + \sum_m |c_m|^2 + 2 \, \text{Re} \left( c_m^* c_\perp \langle \psi_m | \psi_\perp \rangle \right)}
With a requirement to include the perpendicular cross terms the norm does not just cancel
out, leaving us with a clean estimation of the ground state energy. In order to utilize this vari-
ational method, we implicitly have an assumption that the hψ⊥ |ψ⊥ i and hψm |ψ⊥ i terms in the
denominator are sufficiently small that they can be neglected.
u_0 = \sqrt{\frac{\alpha}{\sqrt{\pi}}} \, e^{-\alpha^2 x^2/2}

u_1 = \sqrt{\frac{\alpha}{2 \sqrt{\pi}}} \, (2 \alpha x) \, e^{-\alpha^2 x^2/2}    (2.89)

u_2 = \sqrt{\frac{\alpha}{8 \sqrt{\pi}}} \, (4 \alpha^2 x^2 - 2) \, e^{-\alpha^2 x^2/2}
u_n = N_n H_n(\alpha x) \, e^{-\alpha^2 x^2/2}

N_n = \sqrt{\frac{\alpha}{\sqrt{\pi}\, 2^n n!}}    (2.90)

H_n(\eta) = (-1)^n e^{\eta^2} \frac{d^n}{d\eta^n} e^{-\eta^2}

From which we find

\psi(x) = \sum_n N_n^2 H_n(\alpha x) \, e^{-\alpha^2 x^2/2} \int_{-\infty}^{\infty} H_n(\alpha x') \, e^{-\alpha^2 x'^2/2} \psi(x') \, dx'    (2.91)
Figure 2.3: Exponential trial function with absolute exponential die off
\psi_0(x) = 2 \beta \, \text{erfc}\!\left( \frac{\beta}{\sqrt{2}\, \alpha} \right) e^{-\alpha^2 x^2/2 + \beta^2/(2 \alpha^2)}    (2.92)

With \alpha = \beta = 1, this is plotted in fig. 2.4 and can be seen to match fairly well.
The higher order terms get small fast, but we can see in fig. 2.5, where a tenth order fitting is depicted, that it would take a number of them to get anything close to the sharp peak that we have in our exponential trial function.
Note that all the brakets of odd order in n with the trial function are zero, since the trial function is even, which is why the tenth order approximation is only a sum of six terms.
Details for this harmonic oscillator wavefunction fitting can be found separately in gaussian_fitting_for_abs_function.nb, calculated using a Mathematica worksheet.
Figure 2.4: First ten orders, fitting harmonic oscillator wavefunctions to this trial function
The question of interest is this: we can approximate the trial function quite nicely (except at the origin) even with just a first order approximation (polynomials times Gaussians, where the polynomials are Hermite polynomials), and we can get an exact value for the lowest energy state using the first order approximation of our trial function, so why do we get garbage from the variational method, where enough terms are implicitly included that the peak should be sharp?
It must therefore be important to consider the origin, but how do we give some meaning to the derivative of the absolute value function? The key (supplied when asking Professor Sipe in office hours for the course) is to express the absolute value function in terms of Heaviside step functions, for which the derivative can be identified as the delta function

|x| = x \theta(x) - x \theta(-x),

where the step function is zero for x < 0 and one for x > 0, as plotted in fig. 2.6.
Expressed this way, with the identification \theta'(x) = \delta(x), we have for the derivative of the absolute value function

|x|' = \theta(x) - \theta(-x) + 2 x \delta(x).

Observe that we have our expected unit derivative for x > 0, and -1 derivative for x < 0. At the origin our \theta contributions vanish, and we are left with

|x|' \big|_{x=0} = 2 x \delta(x) \big|_{x=0}    (2.95)
We have got zero times infinity here, so how do we give meaning to this? As with any delta
functional, we have got to apply it to a well behaved (square integrable) test function f (x) and
integrate. Doing so we have
\int_{-\infty}^{\infty} dx \, |x|' f(x) = 2 \int_{-\infty}^{\infty} dx \, x \delta(x) f(x)
= 2 (0) f(0)    (2.96)
This equals zero for any well behaved test function f (x). Since the delta function only picks
up the contribution at the origin, we can therefore identify |x|0 as zero at the origin.
Using the same technique, we can express our trial function in terms of steps

\psi = \theta(x) e^{-\beta x} + \theta(-x) e^{\beta x}.

This we can now take derivatives of, even at the origin, and find

\psi' = \beta \left( -\theta(x) e^{-\beta x} + \theta(-x) e^{\beta x} \right),

so that

H \psi = -\frac{\hbar^2}{2m} \left( \beta^2 \psi - 2 \beta \delta(x) \right) + \frac{1}{2} m \omega^2 x^2 \psi,    (2.100)
so

\langle \psi | H | \psi \rangle = \int_{-\infty}^{\infty} \left( -\frac{\hbar^2 \beta^2}{2m} + \frac{1}{2} m \omega^2 x^2 \right) e^{-2 \beta |x|} \, dx + \frac{\hbar^2 \beta}{m} \int_{-\infty}^{\infty} \delta(x) e^{-\beta |x|} \, dx
= -\frac{\beta \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^3} + \frac{\hbar^2 \beta}{m}    (2.101)
= \frac{\beta \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^3}.
Normalized we have

E[\beta] = \beta \langle \psi | H | \psi \rangle = \frac{\beta^2 \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^2},    (2.102)

and setting the derivative equal to zero

0 = \frac{\partial E}{\partial \beta} = \frac{\beta \hbar^2}{m} - \frac{m \omega^2}{2 \beta^3},    (2.103)

gives

\beta^4 = \frac{m^2 \omega^2}{2 \hbar^2}.    (2.104)
Plugging this back in, we find that our trial function associated with the minimum energy (unnormalized still) is

\psi = e^{-\sqrt{\frac{m \omega}{\sqrt{2}\, \hbar}}\, |x|},    (2.105)

with energy estimate

E[\beta_{\min}] = \sqrt{2}\, \frac{\hbar \omega}{2}.    (2.106)
We have something that is \sqrt{2} \approx 1.4 times the true ground state energy, but it is at least a ballpark value. However, to get this result we have to be very careful to treat our point of singularity. A derivative that we would call undefined in first year calculus is not only defined, but required, for this treatment to work!
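A final numeric check (my own sketch, units \hbar = m = \omega = 1): the normalized energy \beta^2/2 + 1/(4\beta^2) really does bottom out at \sqrt{2}/2, consistent with eq. (2.104).

```python
import math

# Units hbar = m = omega = 1: the normalized variational energy for exp(-beta|x|).
def E(beta):
    return beta * beta / 2.0 + 1.0 / (4.0 * beta * beta)

beta_min = 0.5 ** 0.25               # beta^4 = 1/2, eq. (2.104)
assert abs(E(beta_min) - math.sqrt(2.0) / 2.0) < 1e-12  # sqrt(2) hbar omega / 2

# A crude scan confirms this really is the minimum over beta > 0.
assert all(E(beta_min) <= E(0.2 + 0.01 * k) + 1e-12 for k in range(300))
```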
TIME INDEPENDENT PERTURBATION THEORY
3
See §16.1 of the text [4].
We can sometimes use this sort of physical insight to help construct a good approximation.
This is provided that we have some of this physical insight, or that it is good insight in the first
place.
This is the no-think (turn the crank) approach.
Here we split our Hamiltonian into two parts

H = H_0 + H',    (3.1)

where H_0 is a Hamiltonian for which we know the energy eigenvalues and eigenkets. The H' is the "perturbation" that is supposed to be small "in some sense".
Prof Sipe will provide some references later that provide a more specific meaning to this
“smallness”. From some ad-hoc discussion in the class it sounds like one has to consider se-
quences of operators, and look at the convergence of those sequences (is this L2 measure the-
ory?)
We would like to consider a range of problems of the form

H = H_0 + \lambda H',    (3.2)

where

\lambda \in [0, 1].    (3.3)

When \lambda \to 0

H \to H_0,    (3.4)

and \lambda = 1 recovers the problem of interest

H = H_0 + H'.    (3.5)
H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle    (3.6)
We seek

(H_0 + H') |\psi_s\rangle = E_s |\psi_s\rangle    (3.7)

(this is the \lambda = 1 case). Once (if) found, when \lambda \to 0 we will have

E_s \to E_s^{(0)}
    (3.8)
|\psi_s\rangle \to |\psi_s^{(0)}\rangle
|\psi_s\rangle = \sum_n c_{ns} |\psi_n^{(0)}\rangle    (3.10)
This we know we can do because we are assumed to have a complete set of states. With

c_{ns} = c_{ns}^{(0)} + \lambda c_{ns}^{(1)} + \lambda^2 c_{ns}^{(2)} + \cdots,

where

c_{ns}^{(0)} = \delta_{ns}.    (3.12)
There is a subtlety here that will be treated differently from the text. We write

|\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots
= \left( 1 + \lambda c_{ss}^{(1)} + \cdots \right) |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \cdots    (3.13)
Take

|\bar{\psi}_s\rangle = |\psi_s^{(0)}\rangle + \lambda \frac{\sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle}{1 + \lambda c_{ss}^{(1)}} + \cdots
= |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} \bar{c}_{ns}^{(1)} |\psi_n^{(0)}\rangle + \cdots    (3.14)

where

\bar{c}_{ns}^{(1)} = \frac{c_{ns}^{(1)}}{1 + \lambda c_{ss}^{(1)}}.    (3.15)

We have

\bar{c}_{ns}^{(1)} = c_{ns}^{(1)}
    (3.16)
\bar{c}_{ns}^{(2)} \ne c_{ns}^{(2)}

and

\langle \bar{\psi}_s | \bar{\psi}_s \rangle \ne 1.    (3.17)
The setup To recap, we were covering the time independent perturbation methods from §16.1
of the text [4]. We start with a known Hamiltonian H0 , and alter it with the addition of a “small”
perturbation
H = H_0 + \lambda H', \qquad \lambda \in [0, 1]    (3.18)
For the original operator, we assume that a complete set of eigenvalues and eigenkets is known

H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle,    (3.19)

and seek solutions of

H |\psi_s\rangle = E_s |\psi_s\rangle,    (3.20)
and assumed a perturbative series representation for the energy eigenvalues in the new system

E_s = E_s^{(0)} + \lambda E_s^{(1)} + \lambda^2 E_s^{(2)} + \cdots    (3.21)

Given an assumed representation for the new eigenkets in terms of the known basis

|\psi_s\rangle = \sum_n c_{ns} |\psi_n^{(0)}\rangle    (3.22)

and a series expansion of the coefficients

c_{ns} = c_{ns}^{(0)} + \lambda c_{ns}^{(1)} + \lambda^2 c_{ns}^{(2)} + \cdots,    (3.23)

so that

|\psi_s\rangle = \sum_n c_{ns}^{(0)} |\psi_n^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots    (3.24)
Setting \lambda = 0 requires

c_{ns}^{(0)} = \delta_{ns},    (3.25)

for

|\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots
= \left( 1 + \lambda c_{ss}^{(1)} + \lambda^2 c_{ss}^{(2)} + \cdots \right) |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_{n \ne s} c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots    (3.26)

As before, we rescale

|\bar{\psi}_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} \bar{c}_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_{n \ne s} \bar{c}_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots    (3.27)
where

\bar{c}_{ns}^{(j)} = \frac{c_{ns}^{(j)}}{1 + \lambda c_{ss}^{(1)} + \lambda^2 c_{ss}^{(2)} + \cdots}.    (3.28)
The normalization of the rescaled kets is then

\langle \bar{\psi}_s | \bar{\psi}_s \rangle = 1 + \lambda^2 \sum_{n \ne s} \left| \bar{c}_{ns}^{(1)} \right|^2 + \cdots \equiv \frac{1}{Z_s},    (3.29)

motivating the definition

|\psi_s\rangle_R = Z_s^{1/2} |\bar{\psi}_s\rangle,    (3.30)

so that

{}_R\langle \psi_s | \psi_s \rangle_R = Z_s \langle \bar{\psi}_s | \bar{\psi}_s \rangle = 1.    (3.31)
The meat   That is as far as we got last time. We continue by renaming terms in eq. (3.27)

|\bar{\psi}_s\rangle = |\psi_s^{(0)}\rangle + \lambda |\psi_s^{(1)}\rangle + \lambda^2 |\psi_s^{(2)}\rangle + \cdots    (3.32)

where

|\psi_s^{(j)}\rangle = \sum_{n \ne s} \bar{c}_{ns}^{(j)} |\psi_n^{(0)}\rangle.    (3.33)
H |\bar{\psi}_s\rangle = E_s |\bar{\psi}_s\rangle,    (3.34)

or

H |\bar{\psi}_s\rangle - E_s |\bar{\psi}_s\rangle = 0.    (3.35)
This is

0 = \lambda^0 (H_0 - E_s^{(0)}) |\psi_s^{(0)}\rangle
+ \lambda \left( (H_0 - E_s^{(0)}) |\psi_s^{(1)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(0)}\rangle \right)
+ \lambda^2 \left( (H_0 - E_s^{(0)}) |\psi_s^{(2)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(1)}\rangle - E_s^{(2)} |\psi_s^{(0)}\rangle \right)    (3.38)
+ \cdots
So we form

|A\rangle = (H_0 - E_s^{(0)}) |\psi_s^{(0)}\rangle

|B\rangle = (H_0 - E_s^{(0)}) |\psi_s^{(1)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(0)}\rangle    (3.39)

|C\rangle = (H_0 - E_s^{(0)}) |\psi_s^{(2)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(1)}\rangle - E_s^{(2)} |\psi_s^{(0)}\rangle,

and so forth.
Zeroth order in \lambda   Since H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle, this first condition on |A\rangle is not much more than a statement that 0 - 0 = 0.
First order in \lambda   How about |B\rangle = 0? For this to be zero we require that both of the following are simultaneously zero

\langle \psi_s^{(0)} | B \rangle = 0
    (3.40)
\langle \psi_m^{(0)} | B \rangle = 0, \qquad m \ne s

Since \langle \psi_s^{(0)} | (H_0 - E_s^{(0)}) = 0, the first condition is

\langle \psi_s^{(0)} | (H' - E_s^{(1)}) | \psi_s^{(0)} \rangle = 0.

With

\langle \psi_m^{(0)} | H' | \psi_s^{(0)} \rangle \equiv H'_{ms},    (3.42)

this is

H'_{ss} = E_s^{(1)}.    (3.43)
From the second condition, and using

\langle \psi_m^{(0)} | H_0 = E_m^{(0)} \langle \psi_m^{(0)} |,    (3.45)
we note that \langle \psi_m^{(0)} | \psi_s^{(0)} \rangle = 0 for m \ne s. We can also expand the \langle \psi_m^{(0)} | \psi_s^{(1)} \rangle, which is

\langle \psi_m^{(0)} | \psi_s^{(1)} \rangle = \langle \psi_m^{(0)} | \sum_{n \ne s} \bar{c}_{ns}^{(1)} | \psi_n^{(0)} \rangle    (3.46)
I found that reducing this sum was not obvious until some actual integers were plugged in. Suppose that s = 3, and m = 5; then this is

\langle \psi_5^{(0)} | \psi_3^{(1)} \rangle = \langle \psi_5^{(0)} | \sum_{n=0,1,2,4,5,\cdots} \bar{c}_{n3}^{(1)} | \psi_n^{(0)} \rangle
= \bar{c}_{53}^{(1)} \langle \psi_5^{(0)} | \psi_5^{(0)} \rangle    (3.47)
= \bar{c}_{53}^{(1)}.
Observe that we can also replace the superscript (1) with (j) in the above manipulation without impacting anything else. That, and putting back in the abstract indices, gives the general result

\langle \psi_m^{(0)} | \psi_s^{(j)} \rangle = \bar{c}_{ms}^{(j)}.    (3.48)

The first order results are thus

E_s^{(1)} = H'_{ss}

\bar{c}_{ms}^{(1)} = \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}}    (3.50)
Second order in \lambda   Doing the same thing for |C\rangle = 0 we form (or assume)

\langle \psi_s^{(0)} | C \rangle = 0    (3.51)

0 = \langle \psi_s^{(0)} | C \rangle
= \langle \psi_s^{(0)} | \left( (H_0 - E_s^{(0)}) | \psi_s^{(2)} \rangle + (H' - E_s^{(1)}) | \psi_s^{(1)} \rangle - E_s^{(2)} | \psi_s^{(0)} \rangle \right)    (3.52)
= (E_s^{(0)} - E_s^{(0)}) \langle \psi_s^{(0)} | \psi_s^{(2)} \rangle + \langle \psi_s^{(0)} | (H' - E_s^{(1)}) | \psi_s^{(1)} \rangle - E_s^{(2)} \langle \psi_s^{(0)} | \psi_s^{(0)} \rangle
We need to know what \langle \psi_s^{(0)} | \psi_s^{(1)} \rangle is, and find that it is zero

\langle \psi_s^{(0)} | \psi_s^{(1)} \rangle = \langle \psi_s^{(0)} | \sum_{n \ne s} \bar{c}_{ns}^{(1)} | \psi_n^{(0)} \rangle    (3.53)

Again, suppose that s = 3. Our sum ranges over all n \ne 3, so all the brakets are zero. Utilizing that we have

E_s^{(2)} = \langle \psi_s^{(0)} | H' | \psi_s^{(1)} \rangle
= \langle \psi_s^{(0)} | H' \sum_{m \ne s} \bar{c}_{ms}^{(1)} | \psi_m^{(0)} \rangle    (3.54)
= \sum_{m \ne s} \bar{c}_{ms}^{(1)} H'_{sm}
so that

E_s^{(2)} = \sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}} H'_{sm} = \sum_{m \ne s} \frac{|H'_{ms}|^2}{E_s^{(0)} - E_m^{(0)}}.    (3.55)
We can now summarize by forming the first order terms of the perturbed energy and the corresponding kets

E_s = E_s^{(0)} + \lambda H'_{ss} + \lambda^2 \sum_{m \ne s} \frac{|H'_{ms}|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots
    (3.56)
|\bar{\psi}_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}} |\psi_m^{(0)}\rangle + \cdots
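These formulas are easy to test on a two-level toy system, where the exact ground eigenvalue of the 2×2 matrix is available in closed form. A sketch of my own (not from the lectures), with H'_{ss} = 0 and a small hypothetical coupling v:

```python
import math

# Two-level toy system: H0 = diag(E0, E1), H' purely off-diagonal with element v.
E0, E1, v = 0.0, 1.0, 0.01
# H = [[E0, v], [v, E1]]; exact ground eigenvalue of the 2x2 symmetric matrix:
exact = (E0 + E1) / 2 - math.sqrt(((E0 - E1) / 2) ** 2 + v * v)

# Perturbation series through second order, eq. (3.56): here H'_ss = 0, so
# E_s ~ E0 + |v|^2 / (E0 - E1).
second_order = E0 + v * v / (E0 - E1)

# The residual is O(v^4), so it should be far below |v|^3.
assert abs(exact - second_order) < abs(v) ** 3
```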
We can continue calculating, but are hopeful that we can stop the calculation without doing more work, even if \lambda = 1. If one supposes that the

\sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}}    (3.57)

term is "small", then we can hope that truncating the sum will be reasonable for \lambda = 1. This would be the case if

\left| H'_{ms} \right| \ll \left| E_s^{(0)} - E_m^{(0)} \right|,    (3.58)
however, to put some mathematical rigor into making a statement of such smallness takes a
lot of work. We are referred to [10]. Incidentally, these are loosely referred to as the first and
second testaments, because of the author’s name, and the fact that they came as two volumes
historically.
When the perturbed state is non-degenerate Suppose the state of interest is non-degenerate
but others are
FIXME: diagram. states designated by dashes labeled n1, n2, n3 degeneracy α = 3 for energy
En(0) .
This is no problem except for notation, and if the analysis is repeated we find
E_s = E_s^{(0)} + \lambda H'_{ss} + \lambda^2 \sum_{m \ne s, \alpha} \frac{\left| H'_{m\alpha;s} \right|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots    (3.59)

|\bar{\psi}_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{m \ne s, \alpha} \frac{H'_{m\alpha;s}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\alpha}^{(0)}\rangle + \cdots,    (3.60)
where

H'_{m\alpha;s} = \langle \psi_{m\alpha}^{(0)} | H' | \psi_s^{(0)} \rangle    (3.61)
When the perturbed state is also degenerate   FIXME: diagram. States designated by dashes labeled n1, n2, n3, degeneracy \alpha = 3, for energy E_n^{(0)}, and states designated by dashes labeled s1, s2, s3, degeneracy \alpha = 3, for energy E_s^{(0)}.
If we just blindly repeat the derivation for the non-degenerate case we would obtain

E_{s1} = E_s^{(0)} + \lambda H'_{s1;s1} + \lambda^2 \sum_{m \ne s, \alpha} \frac{\left| H'_{m\alpha;s1} \right|^2}{E_s^{(0)} - E_m^{(0)}} + \lambda^2 \sum_{\alpha \ne 1} \frac{\left| H'_{s\alpha;s1} \right|^2}{E_s^{(0)} - E_s^{(0)}} + \cdots    (3.62)

|\bar{\psi}_{s1}\rangle = |\psi_{s1}^{(0)}\rangle + \lambda \sum_{m \ne s, \alpha} \frac{H'_{m\alpha;s1}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\alpha}^{(0)}\rangle + \lambda \sum_{\alpha \ne 1} \frac{H'_{s\alpha;s1}}{E_s^{(0)} - E_s^{(0)}} |\psi_{s\alpha}^{(0)}\rangle + \cdots,    (3.63)

where

H'_{m\alpha;s1} = \langle \psi_{m\alpha}^{(0)} | H' | \psi_{s1}^{(0)} \rangle    (3.64)
\langle \psi_{m\alpha}^{(0)} | H' | \psi_{s1}^{(0)} \rangle = 0.    (3.65)
That may not be obvious, but if one returns to the original derivation, the right terms cancel
so that one will not end up with the 0/0 problem.
FIXME: performing this derivation outside of class (below), it was found that we do not need the matrix elements of H' to be diagonal, but just need

\langle \psi_{s\alpha}^{(0)} | H' | \psi_{s\beta}^{(0)} \rangle = 0, \qquad \text{for } \beta \ne \alpha.    (3.66)
That is consistent with problem set III where we did not diagonalize H 0 , but just the subset
of it associated with the degenerate states. I am unsure now if eq. (3.65) was copied in error or
provided in error in class, but it definitely appears to be a more severe requirement than actually
needed to deal with perturbation of a state found in a degenerate energy level.
H = H_0 + \lambda H', \qquad \lambda \in [0, 1]    (3.67)
For the original operator, we assume that a complete set of eigenvalues and eigenkets is known

H_0 |\psi_{s\alpha}^{(0)}\rangle = E_s^{(0)} |\psi_{s\alpha}^{(0)}\rangle,    (3.68)

and seek solutions of

H |\psi_{s\alpha}\rangle = E_{s\alpha} |\psi_{s\alpha}\rangle,    (3.69)
and assumed a perturbative series representation for the energy eigenvalues in the new system

E_{s\alpha} = E_s^{(0)} + \lambda E_{s\alpha}^{(1)} + \lambda^2 E_{s\alpha}^{(2)} + \cdots    (3.70)

Note that we do not assume that the perturbed energy states, if degenerate in the original system, are still degenerate after perturbation. Given an assumed representation for the new eigenkets in terms of the known basis

|\psi_{s\alpha}\rangle = \sum_{n,\beta} c_{ns;\beta\alpha} |\psi_{n\beta}^{(0)}\rangle    (3.71)
so that
X E X E X E
|ψ sα i = cns;βα (0) ψnβ (0) + λ cns;βα (1) ψnβ (0) + λ2 cns;βα (2) ψnβ (0) + · · · (3.73)
n,β n,β n,β
Setting λ = 0 requires c_{ns;βα}^{(0)} = δ_{ns} δ_{βα}, for

|ψ_{sα}⟩ = |ψ_{sα}^{(0)}⟩ + λ Σ_{n,β} c_{ns;βα}^{(1)} |ψ_{nβ}^{(0)}⟩ + λ² Σ_{n,β} c_{ns;βα}^{(2)} |ψ_{nβ}^{(0)}⟩ + ···

  = (1 + λ c_{ss;αα}^{(1)} + λ² c_{ss;αα}^{(2)} + ···) |ψ_{sα}^{(0)}⟩
    + λ Σ_{nβ≠sα} c_{ns;βα}^{(1)} |ψ_{nβ}^{(0)}⟩
    + λ² Σ_{nβ≠sα} c_{ns;βα}^{(2)} |ψ_{nβ}^{(0)}⟩ + ···   (3.75)

This can be rescaled as

|ψ̄_{sα}⟩ = |ψ_{sα}^{(0)}⟩ + λ Σ_{nβ≠sα} c̄_{ns;βα}^{(1)} |ψ_{nβ}^{(0)}⟩ + λ² Σ_{nβ≠sα} c̄_{ns;βα}^{(2)} |ψ_{nβ}^{(0)}⟩ + ···   (3.76)
where
c̄_{ns;βα}^{(j)} = c_{ns;βα}^{(j)} / (1 + λ c_{ss;αα}^{(1)} + λ² c_{ss;αα}^{(2)} + ···).   (3.77)

The norm of the rescaled ket is

⟨ψ̄_{sα}|ψ̄_{sα}⟩ = 1 + λ² Σ_{nβ≠sα} |c̄_{ns;βα}^{(1)}|² + ··· ≡ 1/Z_{sα},

and we define the normalized ket

|ψ_{sα}⟩_R = Z_{sα}^{1/2} |ψ̄_{sα}⟩,   (3.79)
3.2 issues concerning degeneracy 49
so that
_R⟨ψ_{sα}|ψ_{sα}⟩_R = Z_{sα} ⟨ψ̄_{sα}|ψ̄_{sα}⟩ = 1.   (3.80)
|ψ̄_{sα}⟩ = |ψ_{sα}^{(0)}⟩ + λ |ψ_{sα}^{(1)}⟩ + λ² |ψ_{sα}^{(2)}⟩ + ···   (3.81)

where

|ψ_{sα}^{(j)}⟩ = Σ_{nβ≠sα} c̄_{ns;βα}^{(j)} |ψ_{nβ}^{(0)}⟩.   (3.82)
nβ,sα
H |ψ̄_{sα}⟩ = E_{sα} |ψ̄_{sα}⟩,   (3.83)

or

H |ψ̄_{sα}⟩ − E_{sα} |ψ̄_{sα}⟩ = 0.   (3.84)
This is

0 = λ⁰ (H_0 − E_s^{(0)}) |ψ_{sα}^{(0)}⟩
  + λ ( (H_0 − E_s^{(0)}) |ψ_{sα}^{(1)}⟩ + (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(0)}⟩ )
  + λ² ( (H_0 − E_s^{(0)}) |ψ_{sα}^{(2)}⟩ + (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(1)}⟩ − E_{sα}^{(2)} |ψ_{sα}^{(0)}⟩ )
  + ···   (3.87)
So we form

|A⟩ = (H_0 − E_s^{(0)}) |ψ_{sα}^{(0)}⟩
|B⟩ = (H_0 − E_s^{(0)}) |ψ_{sα}^{(1)}⟩ + (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(0)}⟩   (3.88)
|C⟩ = (H_0 − E_s^{(0)}) |ψ_{sα}^{(2)}⟩ + (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(1)}⟩ − E_{sα}^{(2)} |ψ_{sα}^{(0)}⟩,

and so forth.
Zeroth order in λ   Since H_0 |ψ_{sα}^{(0)}⟩ = E_s^{(0)} |ψ_{sα}^{(0)}⟩, this first condition, |A⟩ = 0, is not much more than a statement that 0 − 0 = 0.
First order in λ How about |Bi = 0? For this to be zero we require that both of the following
are simultaneously zero
⟨ψ_{sα}^{(0)}|B⟩ = 0
⟨ψ_{mβ}^{(0)}|B⟩ = 0,  mβ ≠ sα   (3.89)

This first condition is

⟨ψ_{sα}^{(0)}| (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(0)}⟩ = 0.   (3.90)

With

⟨ψ_{mβ}^{(0)}| H′ |ψ_{sα}^{(0)}⟩ ≡ H′_{ms;βα},   (3.91)

this is

H′_{ss;αα} = E_{sα}^{(1)}.   (3.92)
For the second condition we will need

⟨ψ_{mβ}^{(0)}| H_0 = E_m^{(0)} ⟨ψ_{mβ}^{(0)}|.   (3.94)

We note that ⟨ψ_{mβ}^{(0)}|ψ_{sα}^{(0)}⟩ = 0 for mβ ≠ sα. We can also expand the ⟨ψ_{mβ}^{(0)}|ψ_{sα}^{(1)}⟩, which is

⟨ψ_{mβ}^{(0)}|ψ_{sα}^{(1)}⟩ = ⟨ψ_{mβ}^{(0)}| Σ_{nδ≠sα} c̄_{ns;δα}^{(1)} |ψ_{nδ}^{(0)}⟩.   (3.95)
I found that reducing this sum was not obvious until some actual integers were plugged in. Suppose that sα = 31 and mβ = 22; then this is

⟨ψ_{22}^{(0)}|ψ_{31}^{(1)}⟩ = ⟨ψ_{22}^{(0)}| Σ_{nδ∈{11,12,···,21,22,23,···,32,33,···}} c̄_{n3;δ1}^{(1)} |ψ_{nδ}^{(0)}⟩
  = c̄_{23;21}^{(1)} ⟨ψ_{22}^{(0)}|ψ_{22}^{(0)}⟩
  = c̄_{23;21}^{(1)}.   (3.96)
Observe that we can also replace the superscript (1) with ( j) in the above manipulation with-
out impacting anything else. That and putting back in the abstract indices, we have the general
result
⟨ψ_{mβ}^{(0)}|ψ_{sα}^{(j)}⟩ = c̄_{ms;βα}^{(j)}.   (3.97)

Utilizing this gives us, for mβ ≠ sα,

0 = (E_m^{(0)} − E_s^{(0)}) c̄_{ms;βα}^{(1)} + H′_{ms;βα}.   (3.98)
Here we see our first sign of the trouble hinted at in lecture 5. Just because mβ ≠ sα does not mean that m ≠ s. For example, with mβ = 11 and sα = 12 we would have

E_{12}^{(1)} = H′_{11;22}
c̄_{11;12}^{(1)} = H′_{11;12} / (E_1^{(0)} − E_1^{(0)}).   (3.99)
We have a divide by zero unless additional restrictions are imposed! If we return to eq. (3.98), we see that, for the result to be valid when m = s and there exists degeneracy for the s state, we require, for β ≠ α,

H′_{ss;βα} = 0   (3.100)

(then eq. (3.98) becomes a 0 = 0 equality, and all is still okay).
And summarizing what we learn from our |B⟩ = 0 conditions, we have

E_{sα}^{(1)} = H′_{ss;αα}
c̄_{ms;βα}^{(1)} = H′_{ms;βα} / (E_s^{(0)} − E_m^{(0)}),  m ≠ s   (3.101)
H′_{ss;βα} = 0,  β ≠ α.
Second order in λ   Doing the same thing for |C⟩ = 0 we form (or assume)

⟨ψ_{sα}^{(0)}|C⟩ = 0   (3.102)

0 = ⟨ψ_{sα}^{(0)}|C⟩
  = ⟨ψ_{sα}^{(0)}| ( (H_0 − E_s^{(0)}) |ψ_{sα}^{(2)}⟩ + (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(1)}⟩ − E_{sα}^{(2)} |ψ_{sα}^{(0)}⟩ )
  = (E_s^{(0)} − E_s^{(0)}) ⟨ψ_{sα}^{(0)}|ψ_{sα}^{(2)}⟩ + ⟨ψ_{sα}^{(0)}| (H′ − E_{sα}^{(1)}) |ψ_{sα}^{(1)}⟩ − E_{sα}^{(2)} ⟨ψ_{sα}^{(0)}|ψ_{sα}^{(0)}⟩   (3.103)
We need to know what ⟨ψ_{sα}^{(0)}|ψ_{sα}^{(1)}⟩ is, and find that it is zero

⟨ψ_{sα}^{(0)}|ψ_{sα}^{(1)}⟩ = ⟨ψ_{sα}^{(0)}| Σ_{nβ≠sα} c̄_{ns;βα}^{(1)} |ψ_{nβ}^{(0)}⟩ = 0.   (3.104)
E_{sα}^{(2)} = ⟨ψ_{sα}^{(0)}| H′ |ψ_{sα}^{(1)}⟩
  = ⟨ψ_{sα}^{(0)}| H′ Σ_{mβ≠sα} c̄_{ms;βα}^{(1)} |ψ_{mβ}^{(0)}⟩
  = Σ_{mβ≠sα} c̄_{ms;βα}^{(1)} H′_{sm;αβ}   (3.105)

E_{sα}^{(2)} = Σ_{β≠α} c̄_{ss;βα}^{(1)} H′_{ss;αβ} + Σ_{mβ≠sα, m≠s} (H′_{ms;βα} / (E_s^{(0)} − E_m^{(0)})) H′_{sm;αβ}   (3.106)
Again, only if H′_{ss;αβ} = 0 for β ≠ α do we have a result we can use. If that is the case, the first sum is killed without a divide by zero, leaving

E_{sα}^{(2)} = Σ_{mβ≠sα, m≠s} |H′_{ms;βα}|² / (E_s^{(0)} − E_m^{(0)}).   (3.107)
We can now summarize by forming the low order terms of the perturbed energy and the corresponding kets

E_{sα} = E_s^{(0)} + λ H′_{ss;αα} + λ² Σ_{m≠s, mβ≠sα} |H′_{ms;βα}|² / (E_s^{(0)} − E_m^{(0)}) + ···
|ψ̄_{sα}⟩ = |ψ_{sα}^{(0)}⟩ + λ Σ_{m≠s, mβ≠sα} (H′_{ms;βα} / (E_s^{(0)} − E_m^{(0)})) |ψ_{mβ}^{(0)}⟩ + ···   (3.108)
H′_{ss;βα} = 0,  β ≠ α.
Notational discrepancy: OOPS. It looks like I used different notation than in class for the placement of the indices on the matrix elements.
FIXME: it looks like the c_{ss;αα′}^{(1)}, for α ≠ α′, coefficients have been lost track of here. Do we have to assume those are zero too? Professor Sipe did not include those in his lecture eq. (3.114), but I do not see the motivation here for dropping them in this derivation.
Diagonalizing the perturbation Hamiltonian   Suppose that we do not have this special zero condition that allows the perturbation treatment to remain valid. What can we do? It turns out that we can make use of the fact that the perturbation Hamiltonian is Hermitian, and diagonalize the matrix

⟨ψ_{sα}^{(0)}| H′ |ψ_{sβ}^{(0)}⟩.   (3.109)

In the example of a two fold degeneracy, this amounts to us choosing not to work with the states

|ψ_{s1}^{(0)}⟩, |ψ_{s2}^{(0)}⟩,   (3.110)

but instead with linear combinations of them for which

⟨ψ_{sα}^{(0)}| H′ |ψ_{sβ}^{(0)}⟩ = H_α δ_{αβ}.   (3.113)
Utilizing this to fix the previous, one would get, if the analysis was repeated correctly,

E_{sα} = E_s^{(0)} + λ H′_{sα;sα} + λ² Σ_{m≠s, β} |H′_{mβ;sα}|² / (E_s^{(0)} − E_m^{(0)}) + ···   (3.114)

|ψ̄_{sα}⟩ = |ψ_{sα}^{(0)}⟩ + λ Σ_{m≠s, β} (H′_{mβ;sα} / (E_s^{(0)} − E_m^{(0)})) |ψ_{mβ}^{(0)}⟩ + ···   (3.115)
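This second order form is easy to spot check numerically. The sketch below (my own, not from the lecture) uses a hypothetical 3 × 3 system with a doubly degenerate level: it first rotates the degenerate block so that H′ is diagonal within it, as in eq. (3.109), then compares eq. (3.114), truncated at second order, against exact diagonalization. All matrix values and λ are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical 3 level system: E = 1 twice degenerate, E = 2 nondegenerate.
H0 = np.diag([1.0, 1.0, 2.0])
Hp = np.array([[0.3, 0.1, 0.2],
               [0.1, -0.3, 0.4],
               [0.2, 0.4, 0.5]])   # Hermitian perturbation H'

# Diagonalize H' restricted to the degenerate subspace (states 0, 1), so that
# H'_{s alpha; s beta} = 0 for alpha != beta, as eq. (3.109) prescribes.
w, v = np.linalg.eigh(Hp[:2, :2])
U = np.eye(3)
U[:2, :2] = v
Hp_rot = U.T @ Hp @ U               # H0 is unchanged by this block rotation

lam = 0.01
exact = np.linalg.eigvalsh(H0 + lam * Hp_rot)   # ascending order

for alpha in range(2):
    # eq. (3.114): first order within the level, second order from outside it
    second = Hp_rot[2, alpha] ** 2 / (1.0 - 2.0)
    approx = 1.0 + lam * Hp_rot[alpha, alpha] + lam ** 2 * second
    assert abs(approx - exact[alpha]) < 1e-6
```

The agreement is to O(λ³), consistent with the truncation of the series.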
FIXME: why do we have second order in λ terms for the energy when we found those energies exactly by diagonalization? We found there that the perturbed energy eigenvalues were multivalued, with values E_{sα} = E_s^{(0)} + λ H′_{sβ;sβ} for all degeneracy indices β. I will have to repeat the derivation to see where these λ² terms come from. I will bet that this is the origin of the spectral line splitting, especially given that an atom like hydrogen has degenerate states.
3.3 examples
H = H_0 + λH′   (3.116)

H′ = e E_z Ẑ   (3.117)

|ψ_α^{(1)}⟩ = |ψ_α^{(0)}⟩ + Σ_{β≠α} |ψ_β^{(0)}⟩ ⟨ψ_β^{(0)}| H′ |ψ_α^{(0)}⟩ / (E_α^{(0)} − E_β^{(0)})   (3.118)
and

E_α^{(1)} = ⟨ψ_α^{(0)}| H′ |ψ_α^{(0)}⟩.   (3.119)

With the default basis {|ψ_β^{(0)}⟩}, and n = 2, we have a 4 fold degeneracy:

(l, m) = (0, 0), (1, −1), (1, 0), (1, +1)   (3.120)
  nlm    200  210  211  21−1
  200     0    ∆    0    0
  210     ∆    0    0    0     (3.121)
  211     0    0    0    0
  21−1    0    0    0    0
FIXME: show.
where

∆ = −3 e E_z a_0.   (3.122)

Observe the embedded Pauli matrix (FIXME: missed the point of this?)

σ_x =
  [0 1]
  [1 0]   (3.123)

Diagonalizing gives the states

{ (1/√2)(|2,0,0⟩ ± |2,1,0⟩), |2,1,±1⟩ },   (3.124)

and our result is

|ψ_{α,n=2}^{(1)}⟩ = |ψ_α^{(0)}⟩ + Σ_{β ∉ degenerate subspace} |ψ_β^{(0)}⟩ ⟨ψ_β^{(0)}| H′ |ψ_α^{(0)}⟩ / (E_α^{(0)} − E_β^{(0)}).   (3.125)
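The lifting of the degeneracy can be seen directly by diagonalizing eq. (3.121). The following sketch (mine, not from the lecture; ∆ set to 1 in arbitrary units) confirms the eigenvalues ±∆, 0, 0 and that the shifted eigenstates are the (|200⟩ ± |210⟩)/√2 combinations of eq. (3.124).

```python
import numpy as np

Delta = 1.0   # stands in for -3 e E_z a_0, arbitrary units
# eq. (3.121), in the basis |200>, |210>, |211>, |21-1>
Hp = np.array([[0, Delta, 0, 0],
               [Delta, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]], dtype=float)

evals, evecs = np.linalg.eigh(Hp)
assert np.allclose(evals, [-Delta, 0, 0, Delta])   # linear Stark splitting

# the +/-Delta eigenvectors live entirely in the |200>, |210> block
plus = evecs[:, 3]    # eigenvalue +Delta
assert np.allclose(np.abs(plus[:2]), 1 / np.sqrt(2))
assert np.allclose(plus[2:], 0)
```

The m = ±1 states are untouched at first order, which is the degenerate-subspace statement of eq. (3.125).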
4 time dependent perturbation
4.1 review of dynamics
We want to move on to time dependent problems. In general for a time dependent problem, the
answer follows provided one has solved for all the perturbed energy eigenvalues. This can be
laborious (or not feasible due to infinite sums).
Before doing this, let us review our dynamics as covered in §3 of the text [4].
Schrödinger and Heisenberg pictures   Our operator equation in the Schrödinger picture is the familiar

iħ (d/dt) |ψ_s(t)⟩ = H |ψ_s(t)⟩,   (4.1)

and most of our operators X, P, ··· are time independent; an operator O_s in the Schrödinger picture carries no explicit time dependence. Formally, the time evolution of any state is given by

|ψ_s(t)⟩ = e^{−iHt/ħ} |ψ_s(0)⟩.
Note that because the Hamiltonian commutes with its exponential (it commutes with itself
and any power series of itself), the Hamiltonian in the Heisenberg picture is the same as in the
Schrödinger picture
Time evolution and the Commutator   Taking the derivative of eq. (4.6) provides us with the time evolution of any operator in the Heisenberg picture

iħ (d/dt) O_H(t) = iħ (d/dt) ( e^{iHt/ħ} O_s e^{−iHt/ħ} )
  = iħ ( (iH/ħ) e^{iHt/ħ} O_s e^{−iHt/ħ} + e^{iHt/ħ} O_s (−iH/ħ) e^{−iHt/ħ} )
  = −H O_H + O_H H,   (4.9)

so

iħ (d/dt) O_H(t) = [O_H, H].   (4.10)
|ψ_s(0)⟩ = |ψ_H⟩
O_S = O_H(0)

4.2 interaction picture
We consider the interaction of a nucleus with a neutral atom, the nucleus heavy enough that it can be considered classically. From the atom's point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

H′(t) = Σ_i Z e q_i / |r_N(t) − R_i|.   (4.13)

Here r_N is the position vector for the heavy nucleus, and R_i is the position of each charge within the atom, where i ranges over all the internal charges, positive and negative, within the atom.
Placing the origin close to the atom, we can write this interaction Hamiltonian as

H′(t) = Σ_i Z e q_i / |r_N(t)| + Σ_i Z e q_i R_i · ( ∂/∂r (1/|r_N(t) − r|) )|_{r=0}.   (4.14)
The first term vanishes because the total charge in our neutral atom is zero. This leaves us with

H′(t) = −Σ_i q_i R_i · ( −Z e ∂/∂r (1/|r_N(t) − r|) )|_{r=0}
      = −Σ_i q_i R_i · E(t),   (4.15)
where E(t) is the electric field at the origin due to the nucleus.
Introducing a dipole moment operator for the atom

µ = Σ_i q_i R_i,   (4.16)

the interaction Hamiltonian takes the form H′(t) = −µ · E(t). Here we have a quantum mechanical operator dotted with a classical field. This sort of dipole interaction also occurs when we treat an atom placed into an electromagnetic field, treated classically, as depicted in fig. 4.2.
In the figure, we can use the dipole interaction provided λ ≫ a, where a is the "width" of the atom.
Because it is great for examples, we will see this dipole interaction a lot.
The interaction picture   Having talked about both the Schrödinger and Heisenberg pictures, we can now move on to describe a hybrid, one where our Hamiltonian has been split into static and time dependent parts

H(t) = H_0 + H′(t).
We will formulate an approach for dealing with problems of this sort called the interaction
picture.
This is also covered in §3.3 of the text, albeit in a much harder to understand fashion (the text
appears to try to not pull the result from a magic hat, but the steps to get to the end result are
messy). It would probably have been nicer to see it this way instead.
In the Schrödinger picture our dynamics have the form

iħ (d/dt) |ψ_s(t)⟩ = H |ψ_s(t)⟩.   (4.19)
How about the Heisenberg picture? We look for a solution

|ψ_s(t)⟩ = U(t, t_0) |ψ_s(t_0)⟩.

We want to find this operator that evolves the state from some initial time t_0 to the arbitrary later state found at time t. Plugging in we have

iħ (d/dt) U(t, t_0) |ψ_s(t_0)⟩ = H(t) U(t, t_0) |ψ_s(t_0)⟩.   (4.21)
This has to hold for all |ψ s (t0 )i, and we can equivalently seek a solution of the operator
equation
iħ (d/dt) U(t, t_0) = H(t) U(t, t_0),   (4.22)
where
U(t0 , t0 ) = I, (4.23)
Can we assume that

U(t, t_0) = e^{−(i/ħ) ∫_{t_0}^t H(τ) dτ}   (4.25)

holds? No. This may be true when H(t) is a number, but when it is an operator, the Hamiltonian does not necessarily commute with itself at different times.
So this is wrong in general. As an aside, for numbers, eq. (4.25) can be verified easily. We have

iħ (d/dt) e^{−(i/ħ) ∫_{t_0}^t H(τ) dτ} = iħ (−i/ħ) ( (d/dt) ∫_{t_0}^t H(τ) dτ ) e^{−(i/ħ) ∫_{t_0}^t H(τ) dτ}
  = ( H(t) (dt/dt) − H(t_0) (dt_0/dt) ) e^{−(i/ħ) ∫_{t_0}^t H(τ) dτ}
  = H(t) U(t, t_0).   (4.27)
Expectations   Suppose that we do find U(t, t_0). Then our expectation takes the form

⟨ψ_s(t)| O_s |ψ_s(t)⟩ = ⟨ψ_s(t_0)| U†(t, t_0) O_s U(t, t_0) |ψ_s(t_0)⟩.

Put H(t) = H_0 + H′(t), and form
U_I(t, t_0) = e^{(i/ħ) H_0 (t−t_0)} U(t, t_0),   (4.32)

or

U(t, t_0) = e^{−(i/ħ) H_0 (t−t_0)} U_I(t, t_0).   (4.33)

Taking derivatives,

iħ (dU_I/dt) = iħ (d/dt) ( e^{(i/ħ) H_0 (t−t_0)} U(t, t_0) )
  = −H_0 e^{(i/ħ) H_0 (t−t_0)} U(t, t_0) + e^{(i/ħ) H_0 (t−t_0)} ( iħ (d/dt) U(t, t_0) )
  = −H_0 e^{(i/ħ) H_0 (t−t_0)} U(t, t_0) + e^{(i/ħ) H_0 (t−t_0)} ( (H_0 + H′(t)) U(t, t_0) )
  = e^{(i/ħ) H_0 (t−t_0)} H′(t) U(t, t_0)
  = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)} U_I(t, t_0).   (4.34)

Define

H̄′(t) = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)},   (4.35)
iħ (d/dt) U_I(t, t_0) = H̄′(t) U_I(t, t_0).   (4.36)
Note that we also have the required identity at the initial time
U I (t0 , t0 ) = I. (4.37)
Without requiring us to actually find U(t, t_0), all of the dynamics of the time dependent interaction are now embedded in our operator equation for H̄′(t), with all of the simple time evolution related to the static portion of the Hamiltonian factored out separately.
|ψ_s(t)⟩ = e^{−(i/ħ) H_0 (t−t_0)} |ψ_I⟩.   (4.40)

Also, by multiplying eq. (4.36) by our Schrödinger ket, we remove the last vestiges of U_I and U from the dynamical equation for our time dependent interaction

iħ (d/dt) |ψ_I⟩ = H̄′(t) |ψ_I⟩.   (4.41)
Interaction picture expectation   Inverting eq. (4.40), we can form an operator expectation, and relate it to the interaction and Schrödinger pictures

⟨ψ_s(t)| O_s |ψ_s(t)⟩ = ⟨ψ_I| e^{(i/ħ) H_0 (t−t_0)} O_s e^{−(i/ħ) H_0 (t−t_0)} |ψ_I⟩.   (4.42)
With a definition

O_I = e^{(i/ħ) H_0 (t−t_0)} O_s e^{−(i/ħ) H_0 (t−t_0)},   (4.43)

we have

⟨ψ_s(t)| O_s |ψ_s(t)⟩ = ⟨ψ_I| O_I |ψ_I⟩.

As before, the time evolution of our interaction picture operator can be found by taking derivatives of eq. (4.43), for which we find

iħ (dO_I(t)/dt) = [O_I(t), H_0].   (4.45)
Summarizing, in the interaction picture we have

iħ (d/dt) |ψ_I⟩ = H̄′(t) |ψ_I⟩,   (4.50)

or

iħ (d/dt) U_I(t, t_0) = H̄′(t) U_I(t, t_0),  U_I(t_0, t_0) = I,   (4.51)

where

H̄′(t) = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)},   (4.52)

and for Schrödinger operators, independent of time, we have the dynamical equation

iħ (dO_I(t)/dt) = [O_I(t), H_0].   (4.53)
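The consistency of eqs. (4.33) and (4.51) can be checked numerically. This sketch (mine, not from the lecture; ħ = 1, t₀ = 0, an arbitrary two level H₀ and static H′) steps U_I with a midpoint matrix exponential and verifies that e^{−iH₀t} U_I reproduces the exact propagator.

```python
import numpy as np

def expm_herm(A, s):
    """e^{i s A} for Hermitian A, via eigendecomposition."""
    w, v = np.linalg.eigh(A)
    return (v * np.exp(1j * s * w)) @ v.conj().T

H0 = np.diag([0.0, 1.5])                  # arbitrary two level H_0
Hp = np.array([[0.0, 0.2], [0.2, 0.0]])   # arbitrary static H'

t, N = 1.0, 2000
dt = t / N
UI = np.eye(2, dtype=complex)
for k in range(N):
    tm = (k + 0.5) * dt
    # Hbar(t) = e^{i H0 t} H' e^{-i H0 t}, eq. (4.52) with t0 = 0
    Hbar = expm_herm(H0, tm) @ Hp @ expm_herm(H0, -tm)
    UI = expm_herm(Hbar, -dt) @ UI        # step eq. (4.51): e^{-i Hbar dt} U_I

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_via_UI = expm_herm(H0, -t) @ UI @ psi0    # eq. (4.33) applied to the ket
psi_exact = expm_herm(H0 + Hp, -t) @ psi0     # full propagator e^{-i H t}
assert np.allclose(psi_via_UI, psi_exact, atol=1e-5)
```

Note that H̄′(t) here fails to commute with itself at different times, so the stepping cannot be replaced by a single naive exponential of the integral, exactly the issue raised around eq. (4.25).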
4.3 justifying the taylor expansion above (not class notes)

Multivariable Taylor series   As outlined in §2.8 (8.10) of [7], we want to derive the multivariable Taylor expansion for a scalar valued function of some number of variables

f(a + x) = f(a + t x)|_{t=1}.   (4.55)

We can Taylor expand a single variable function without any trouble, so introduce g(t) = f(a + t x).
We have
We have

g(t) = g(0) + t (∂g/∂t)|_{t=0} + (t²/2!) (∂²g/∂t²)|_{t=0} + ···,   (4.58)

so that

g(1) = g(0) + (∂g/∂t)|_{t=0} + (1/2!) (∂²g/∂t²)|_{t=0} + ···.   (4.59)
The multivariable Taylor series now becomes a plain old application of the chain rule, where we have to evaluate

dg/dt = (d/dt) f(a¹ + t x¹, a² + t x², ···)
      = Σ_i ( ∂f(a + t x)/∂(aⁱ + t xⁱ) ) ( ∂(aⁱ + t xⁱ)/∂t ),   (4.60)

so that

(dg/dt)|_{t=0} = Σ_i xⁱ ( ∂f/∂xⁱ )|_{xⁱ = aⁱ}.   (4.61)
Assuming a Euclidean space we can write this in the notationally more pleasant fashion using a gradient operator for the space

(dg/dt)|_{t=0} = x · ∇_u f(u)|_{u=a}.   (4.62)
To handle the higher order terms, we repeat the chain rule application, yielding for example

(d²f(a + t x)/dt²)|_{t=0} = (d/dt) Σ_i xⁱ ( ∂f(a + t x)/∂(aⁱ + t xⁱ) )|_{t=0}
  = Σ_i xⁱ ( ∂/∂(aⁱ + t xⁱ) ) ( df(a + t x)/dt )|_{t=0}
  = (x · ∇_u)² f(u)|_{u=a}.   (4.63)

Thus the Taylor series associated with a vector displacement takes the tidy form

f(a + x) = Σ_{k=0}^∞ (1/k!) (x · ∇_u)^k f(u)|_{u=a},   (4.64)

f(a + x) = e^{x · ∇_u} f(u)|_{u=a}.   (4.65)
Here a dummy variable u has been retained as an instruction not to differentiate the x part of
the directional derivative in any repeated applications of the x · ∇ operator.
68 time dependent pertubation
Equivalently, expanding about x with displacement a,

f(a + x) = Σ_{k=0}^∞ (1/k!) (a · ∇)^k f(x) = e^{a·∇} f(x).   (4.66)
1/|r − R| ≈ 1/|r| + R · ( ∂/∂R (1/|r − R|) )|_{R=0},   (4.67)
which we can see has the same structure as above with some variable substitutions. Evaluating it we have

∂/∂R (1/|r − R|) = e_i (∂/∂Rⁱ) ( (x_j − R_j)² )^{−1/2}
  = e_i ( −(1/2) 2 (x_j − R_j) ( ∂(x_j − R_j)/∂Rⁱ ) ) (1/|r − R|³)
  = (r − R)/|r − R|³,   (4.68)

and at R = 0 we have

1/|r − R| ≈ 1/|r| + R · r/|r|³.   (4.69)
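A quick numerical sanity check of eq. (4.69), using arbitrary vectors of my own choosing:

```python
import numpy as np

r = np.array([1.0, 2.0, 2.0])          # |r| = 3
R = np.array([0.01, 0.02, -0.015])     # small displacement, R . r != 0

exact = 1.0 / np.linalg.norm(r - R)
approx = 1.0 / np.linalg.norm(r) + np.dot(R, r) / np.linalg.norm(r) ** 3

# the neglected terms are O(R^2 / |r|^3), so the error is far below |R|^2
assert abs(exact - approx) < np.dot(R, R)
```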
We see that this directional derivative produces the classical electric Coulomb field expression for an electrostatic distribution, once we take the r/|r|³ and multiply it with the −Ze factor.
With algebra   A different way to justify the expansion of eq. (4.14) is to consider a Clifford algebra factorization (following notation from [5]) of the absolute vector difference, where R is considered small.

|r − R| = √( (r − R)(r − R) )
  = √( ⟨ r (1 − (1/r) R)(1 − R (1/r)) r ⟩ )
  = √( r² ⟨ (1 − (1/r) R)(1 − R (1/r)) ⟩ )
  = |r| √( 1 − 2 (1/r) · R + (1/r) R R (1/r) )
  = |r| √( 1 − 2 (1/r) · R + R²/r² )   (4.70)

Neglecting the R² term, we can then Taylor series expand this scalar expression

1/|r − R| ≈ (1/|r|) (1 + (1/r) · R) = 1/|r| + r̂ · R/r² = 1/|r| + r · R/|r|³.   (4.71)
Observe this is what was found with the multivariable Taylor series expansion too.
4.4 recap: interaction picture

We will use the interaction picture to examine time dependent perturbations. We wrote our Schrödinger ket in terms of the interaction ket

|ψ_s(t)⟩ = e^{−(i/ħ) H_0 (t−t_0)} |ψ_I(t)⟩,

where the interaction ket evolves as |ψ_I(t)⟩ = U_I(t, t_0) |ψ_I(t_0)⟩, with

H̄′(t) = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)}.   (4.75)

The operator U_I satisfies the integral equation

U_I(t, t_0) = I − (i/ħ) ∫_{t_0}^t dt′ H̄′(t′) U_I(t′, t_0).   (4.76)
This is easily verified by differentiation

iħ (d/dt) U_I = iħ (−i/ħ) (d/dt) ∫_{t_0}^t dt′ H̄′(t′) U_I(t′, t_0)
  = H̄′(t) U_I(t, t_0) (dt/dt) − H̄′(t_0) U_I(t_0, t_0) (dt_0/dt)
  = H̄′(t) U_I(t, t_0).   (4.77)
This is a bit of a chicken and egg expression, since it is cyclic, with a dependency on the unknown U_I(t′, t_0) factors.
We start with an initial estimate of the operator to be determined, and iterate. This can seem
like an odd thing to do, but one can find books on just this integral kernel iteration method (like
the nice little Dover book [13] that has sat on my (Peeter’s) shelf all lonely so many years).
Suppose that, for t near t_0, we try

U_I(t, t_0) ≈ I − (i/ħ) ∫_{t_0}^t dt′ H̄′(t′).   (4.78)

A second iteration gives

U_I(t, t_0) ≈ I − (i/ħ) ∫_{t_0}^t dt′ H̄′(t′) ( I − (i/ħ) ∫_{t_0}^{t′} dt″ H̄′(t″) )
  = I − (i/ħ) ∫_{t_0}^t dt′ H̄′(t′) + (−i/ħ)² ∫_{t_0}^t dt′ H̄′(t′) ∫_{t_0}^{t′} dt″ H̄′(t″).   (4.79)
It is possible to continue this iteration, and this approach is considered in some detail in §3.3
of the text [4], and is apparently also the basis for Feynman diagrams.
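The iteration of eq. (4.79) is easy to check numerically. This sketch (mine; ħ = 1, t₀ = 0, an arbitrary weakly coupled two level system) builds the first and second order terms as Riemann sums, and compares against the exact propagator pulled back into the interaction picture.

```python
import numpy as np

def expm_herm(A, s):
    """e^{i s A} for Hermitian A."""
    w, v = np.linalg.eigh(A)
    return (v * np.exp(1j * s * w)) @ v.conj().T

H0 = np.diag([0.0, 1.0])
V = 0.05                                  # weak coupling: fast convergence
Hp = np.array([[0.0, V], [V, 0.0]])

t, N = 1.0, 400
dt = t / N
Hbar = [expm_herm(H0, (k + 0.5) * dt) @ Hp @ expm_herm(H0, -(k + 0.5) * dt)
        for k in range(N)]                # Hbar(t) sampled at midpoints

# eq. (4.79): I - i ∫ Hbar + (-i)^2 ∫ dt' Hbar(t') ∫_0^{t'} dt'' Hbar(t'')
first = (-1j) * dt * sum(Hbar)
second = np.zeros((2, 2), dtype=complex)
inner = np.zeros((2, 2), dtype=complex)   # running inner integral
for j in range(N):
    second += (-1j * dt) * Hbar[j] @ ((-1j) * inner)
    inner += dt * Hbar[j]
UI2 = np.eye(2) + first + second

# exact propagator pulled back to the interaction picture: U_I = e^{i H0 t} U
UI_exact = expm_herm(H0, t) @ expm_herm(H0 + Hp, -t)
assert np.abs(UI2 - UI_exact).max() < 1e-3
```

The residual error is dominated by the dropped third order Dyson term, of size roughly (Vt)³/6 here.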
4.5 time dependent perturbation theory
As covered in §17 of the text, we will split the Hamiltonian into time independent and time dependent terms, H = H_0 + H′(t), and expand the interaction picture ket in the unperturbed basis

|ψ_I(t)⟩ = Σ_n c̃_n(t) |ψ_n^{(0)}⟩.   (4.81)

The Schrödinger ket is then

|ψ(t)⟩ = e^{−i H_0 (t−t_0)/ħ} |ψ_I(t)⟩
       = Σ_n c̃_n(t) e^{−i E_n^{(0)} (t−t_0)/ħ} |ψ_n^{(0)}⟩.   (4.82)

With a definition

c_n(t) = c̃_n(t) e^{i E_n t_0/ħ}

(where we leave off the zero superscript for the unperturbed state energies), our time evolved ket becomes

|ψ(t)⟩ = Σ_n c_n(t) e^{−i E_n t/ħ} |ψ_n^{(0)}⟩.   (4.84)

We have

iħ (d/dt) |ψ_I(t)⟩ = H̄′(t) |ψ_I(t)⟩
  = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)} |ψ_I(t)⟩,   (4.85)
which gives us
iħ Σ_p (∂c̃_p(t)/∂t) |ψ_p^{(0)}⟩ = e^{(i/ħ) H_0 (t−t_0)} H′(t) e^{−(i/ħ) H_0 (t−t_0)} Σ_n c̃_n(t) |ψ_n^{(0)}⟩.   (4.86)
72 time dependent pertubation
We can apply the bra ⟨ψ_m^{(0)}| to this equation, yielding

iħ (∂c̃_m(t)/∂t) = Σ_n c̃_n(t) e^{(i/ħ) E_m (t−t_0)} ⟨ψ_m^{(0)}| H′(t) |ψ_n^{(0)}⟩ e^{−(i/ħ) E_n (t−t_0)}.   (4.87)

With

ω_m = E_m/ħ
ω_mn = ω_m − ω_n   (4.88)
H′_mn(t) = ⟨ψ_m^{(0)}| H′(t) |ψ_n^{(0)}⟩,

this is

iħ (∂c̃_m(t)/∂t) = Σ_n c̃_n(t) e^{iω_mn (t−t_0)} H′_mn(t).   (4.89)
We are now left with all of our time dependence nicely separated out, with the coefficients c_n(t) encoding all the non-oscillatory time evolution information

H = H_0 + H′(t)
|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n t} |ψ_n^{(0)}⟩   (4.93)
iħ ċ_m = Σ_n H′_mn(t) e^{iω_mn t} c_n(t)
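The coupled equations (4.93) can be integrated directly. The following sketch (mine, not from the lecture; ħ = 1, arbitrary constant off diagonal matrix element V and level spacing ω₂₁) uses classic RK4 and compares |c₂(t)|² against the known Rabi result (V/Ω)² sin²(Ωt), with Ω = √(V² + ω²₂₁/4), which follows from eliminating c₁ from the pair of equations.

```python
import numpy as np

# Two-level instance of eq. (4.93), hbar = 1:
#   i c1' = V e^{-i w21 t} c2,   i c2' = V e^{+i w21 t} c1
V, w21 = 0.3, 2.0    # arbitrary coupling and level spacing

def rhs(t, c):
    c1, c2 = c
    return np.array([-1j * V * np.exp(-1j * w21 * t) * c2,
                     -1j * V * np.exp(+1j * w21 * t) * c1])

c = np.array([1.0, 0.0], dtype=complex)   # start in the lower state
t, dt = 0.0, 1e-3
for _ in range(5000):                     # integrate to t = 5 with RK4
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

Omega = np.hypot(V, w21 / 2)              # Rabi frequency
assert abs(abs(c[1])**2 - (V / Omega)**2 * np.sin(Omega * t)**2) < 1e-6
assert abs(abs(c[0])**2 + abs(c[1])**2 - 1.0) < 1e-9   # norm preserved
```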
4.6 perturbation expansion
We now introduce a smallness parameter λ, and hope for convergence, or at least something that has well defined asymptotic behavior. We have

iħ ċ_m = λ Σ_n H′_mn(t) e^{iω_mn t} c_n(t),   (4.95)

and try

c_n(t) = Σ_p λ^p c_n^{(p)}(t),

which gives

iħ Σ_k λ^k ċ_m^{(k)}(t) = Σ_{n,p} H′_mn(t) e^{iω_mn t} λ^{p+1} c_n^{(p)}(t).   (4.97)
As before, for equality, we treat this as an equation for each power of λ. Expanding explicitly for the first few powers gives us

0 = λ⁰ ( iħ ċ_m^{(0)}(t) − 0 )
  + λ¹ ( iħ ċ_m^{(1)}(t) − Σ_n H′_mn(t) e^{iω_mn t} c_n^{(0)}(t) )
  + λ² ( iħ ċ_m^{(2)}(t) − Σ_n H′_mn(t) e^{iω_mn t} c_n^{(1)}(t) )
  + ···   (4.98)
Suppose we have a set of energy levels as depicted in fig. 4.3. With c_n^{(i)} = 0 before the perturbation for all i ≥ 1, n, and c_m^{(0)} = δ_ms, we can proceed iteratively, solving each equation, starting with

iħ ċ_m^{(1)} = H′_ms(t) e^{iω_ms t},   (4.99)
with

H′_ms = −µ_ms · E(t),   (4.101)

where

µ_ms = ⟨ψ_m^{(0)}| µ |ψ_s^{(0)}⟩.   (4.102)
Using our previous nucleus passing an atom example, as depicted in fig. 4.4
We have

µ = Σ_i q_i R_i,   (4.103)
the dipole moment for each of the charges in the atom. We will have fields as depicted
in fig. 4.5
Consider a EM wave pulse, perhaps Gaussian, of the form depicted in fig. 4.6
E_y(t) = e^{−t²/T²} cos(ω_0 t).   (4.104)

As we learned very early, perhaps sitting on our mother's knee, we can solve the differential equation eq. (4.99) for the first order perturbation by direct integration

c_m^{(1)}(t) = (1/iħ) ∫_{−∞}^t H′_ms(t′) e^{iω_ms t′} dt′.   (4.105)

Here the perturbation is assumed equal to zero at −∞. Suppose our electric field is specified in terms of a Fourier transform

E(t) = ∫_{−∞}^∞ (dω/2π) E(ω) e^{−iωt},   (4.106)
so

c_m^{(1)}(t) = (1/(2πiħ)) ∫_{−∞}^∞ ∫_{−∞}^t µ_ms · E(ω) e^{i(ω_ms − ω)t′} dt′ dω.   (4.107)

As t → ∞ this is

c_m^{(1)}(∞) = (1/(2πiħ)) ∫_{−∞}^∞ ∫_{−∞}^∞ µ_ms · E(ω) e^{i(ω_ms − ω)t′} dt′ dω
            = (1/iħ) ∫_{−∞}^∞ µ_ms · E(ω) δ(ω_ms − ω) dω,   (4.108)

since we identify

(1/2π) ∫_{−∞}^∞ e^{i(ω_ms − ω)t′} dt′ ≡ δ(ω_ms − ω),   (4.109)

so that

c_m^{(1)}(∞) = (1/iħ) µ_ms · E(ω_ms).   (4.110)
Frequency symmetry for the Fourier spectrum of a real field   We will look further at this next week, but we first require an intermediate result from transform theory. Because our field is real, we have E(t) = E*(t), so

E*(t) = ∫ (dω/2π) E*(ω) e^{iωt}
      = ∫ (dω/2π) E*(−ω) e^{−iωt},   (4.112)
4.7 time dependent perturbation
and thus

E*(−ω) = E(ω),

and

c_m^{(1)}(∞) = (1/iħ) µ_ms · E(ω_ms),   (4.115)

where

E(t) = ∫ (dω/2π) E(ω) e^{−iωt},   (4.116)

and

ω_ms = (E_m − E_s)/ħ.   (4.117)
Graphically, these frequencies are illustrated in fig. 4.7
The probability for a transition from m to s is therefore

ρ_{m→s} = |c_m^{(1)}(∞)|² = (1/ħ²) |µ_ms · E(ω_ms)|².   (4.118)

Recall that because the electric field is real we had

E*(−ω) = E(ω).

Suppose that we have a wave pulse, where our field magnitude is perhaps of the form

E(t) = e^{−t²/T²} cos(ω_0 t),   (4.120)
(Negative frequencies, ω_ns < 0, correspond to stimulated emission.) The Fourier spectrum of this pulse is

E(ω) = (√π T/2) ( e^{−(1/4) T² (ω − ω_0)²} + e^{−(1/4) T² (ω + ω_0)²} ),   (4.121)
where we see the expected Gaussian result, since the Fourier transform of a Gaussian is a
Gaussian.
FIXME: not sure what the point of this was?
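The claimed Gaussian spectrum is easy to verify numerically. In this sketch (mine; T and ω₀ arbitrary, using the convention E(ω) = ∫E(t)e^{iωt}dt that inverts eq. (4.106)) the peak value at ω = ω₀ comes out to √π T/2, and the real-field symmetry E(−ω) = E*(ω) holds.

```python
import numpy as np

T, w0 = 1.0, 10.0
t = np.linspace(-8 * T, 8 * T, 20001)
dt = t[1] - t[0]
E_t = np.exp(-t**2 / T**2) * np.cos(w0 * t)

def E_w(w):
    # E(w) = ∫ E(t) e^{i w t} dt; the integrand is ~e^{-64} at the endpoints,
    # so a plain Riemann sum is spectrally accurate here
    return np.sum(E_t * np.exp(1j * w * t)) * dt

# peak value (sqrt(pi) T / 2), plus a negligible e^{-T^2 w0^2} cross term
assert abs(E_w(w0) - np.sqrt(np.pi) * T / 2) < 1e-6
# real field => E(-w) = E(w)^*
assert abs(E_w(-w0) - np.conj(E_w(w0))) < 1e-9
```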
4.8 sudden perturbations

Consider a system

iħ (d/dt) |ψ(t)⟩ = H(t) |ψ(t)⟩,   (4.122)
and a sudden perturbation in the Hamiltonian, as illustrated in fig. 4.10
Consider H_0 and H_F fixed, and decrease ∆t → 0. We can formally integrate eq. (4.122): writing

(d/dt) |ψ(t)⟩ = (1/iħ) H(t) |ψ(t)⟩,   (4.123)

we have, for t > t_0,

|ψ(t)⟩ − |ψ(t_0)⟩ = (1/iħ) ∫_{t_0}^t H(t′) |ψ(t′)⟩ dt′.   (4.124)

While this is an exact solution, it is also not terribly useful since we do not know |ψ(t)⟩. However, we can select the small interval ∆t, and write

|ψ(∆t/2)⟩ = |ψ(−∆t/2)⟩ + (1/iħ) ∫_{−∆t/2}^{∆t/2} H(t′) |ψ(t′)⟩ dt′.   (4.125)

Note that we could use the integral kernel iteration technique here, substituting |ψ(t′)⟩ = |ψ(−∆t/2)⟩ and then developing this to generate a power series with (∆t/2)^k dependence. However, we note that eq. (4.125) is still an exact relation, and if ∆t → 0, with the integration limits narrowing (provided H(t′) is well behaved), we are left with just

|ψ(∆t/2)⟩ = |ψ(−∆t/2)⟩,

or

|ψ_after⟩ = |ψ_before⟩,

provided that we change the Hamiltonian fast enough. On the surface there appear to be no consequences, but there are some very serious ones!
H_0 = P²/2m + (1/2) m ω_0² X²
H_F = P²/2m + (1/2) m ω_F² X²   (4.128)
Here ω0 → ωF continuously, but very quickly. In effect, we have tightened the spring
constant. Note that there are cases in linear optics when you can actually do exactly that.
Imagine that |ψbefore i is in the ground state of the harmonic oscillator as in fig. 4.11
H_0 |ψ_0^{(0)}⟩ = (1/2) ħω_0 |ψ_0^{(0)}⟩,   (4.129)

but we also have

|ψ_after⟩ = |ψ_before⟩ = |ψ_0^{(0)}⟩,   (4.130)

and

H_F |ψ_n^{(f)}⟩ = ħω_F (n + 1/2) |ψ_n^{(f)}⟩.   (4.131)
So

|ψ_after⟩ = |ψ_0^{(0)}⟩ = Σ_n |ψ_n^{(f)}⟩ ⟨ψ_n^{(f)}|ψ_0^{(0)}⟩ = Σ_n c_n |ψ_n^{(f)}⟩,   (4.132)

with c_n = ⟨ψ_n^{(f)}|ψ_0^{(0)}⟩. The subsequent time evolution in the new system is then

|ψ(t)^{(f)}⟩ = Σ_n c_n e^{iω_n^{(f)} t} |ψ_n^{(f)}⟩, with |ψ(0)^{(f)}⟩ = |ψ_0^{(0)}⟩,   (4.133)

whereas

|ψ(t)^{(o)}⟩ = e^{iω_0 t} |ψ_0^{(0)}⟩.   (4.134)
So, while the wave functions may be exactly the same after such a sudden change in
Hamiltonian, the dynamics of the situation change for all future times, since we now have
a wavefunction that has a different set of components in the basis for the new Hamiltonian.
In particular, the evolution of the wave function is now significantly more complex.
FIXME: plot an example of this.
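The change of basis can be made concrete. Here is a sketch (mine, not from the lecture; m = ħ = 1, with arbitrary frequencies ω₀ = 1 and ω_F = 2) that expands the old ground state in the new oscillator basis on a grid. The analytic ground state overlap of the two Gaussians, c₀ = √2(ω₀ω_F)^{1/4}/√(ω₀ + ω_F), is reproduced, the odd n coefficients vanish by parity, and the |c_n|² sum to one.

```python
import numpy as np
from math import factorial

w0, wF = 1.0, 2.0                  # m = hbar = 1 throughout
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

def sho_state(n, w):
    """n-th harmonic oscillator eigenfunction for frequency w."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    Hn = np.polynomial.hermite.hermval(np.sqrt(w) * x, coef)
    return ((w / np.pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
            * Hn * np.exp(-w * x ** 2 / 2))

psi_before = sho_state(0, w0)      # old ground state
c = np.array([np.sum(sho_state(n, wF) * psi_before) * dx for n in range(31)])

c0_exact = np.sqrt(2) * (w0 * wF) ** 0.25 / np.sqrt(w0 + wF)
assert abs(c[0] - c0_exact) < 1e-8
assert np.all(np.abs(c[1::2]) < 1e-10)     # parity: odd n do not contribute
assert abs(np.sum(c ** 2) - 1.0) < 1e-10   # completeness
```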
4.9 adiabatic perturbations

Consider a system

(d/dt) |ψ(t)⟩ = (1/iħ) H(t) |ψ(t)⟩,   (4.135)

where at the initial time

H(0) = H_0,   (4.136)
and

H_0 |ψ_s^{(0)}⟩ = E_s^{(0)} |ψ_s^{(0)}⟩.   (4.137)

Imagine that at each time t we can find the "instantaneous" energy eigenstates

H(t) |ψ̂_s(t)⟩ = E_s(t) |ψ̂_s(t)⟩.   (4.138)

These states do not satisfy Schrödinger's equation, but are simply solutions to the eigen problem. Our standard strategy in perturbation is based on analysis of

|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n^{(0)} t} |ψ_n^{(0)}⟩.   (4.139)

Here instead

|ψ(t)⟩ = Σ_n b_n(t) |ψ̂_n(t)⟩;   (4.140)

we will expand, not using our initial basis, but instead using the instantaneous kets. Plugging into Schrödinger's equation we have

H(t) |ψ(t)⟩ = H(t) Σ_n b_n(t) |ψ̂_n(t)⟩ = Σ_n b_n(t) E_n(t) |ψ̂_n(t)⟩.   (4.141)

This was complicated before with matrix elements all over the place. Now it is easy; however, the time derivative becomes harder. Doing that we find

iħ (d/dt) |ψ(t)⟩ = iħ (d/dt) Σ_n b_n(t) |ψ̂_n(t)⟩
  = iħ Σ_n ( (db_n(t)/dt) |ψ̂_n(t)⟩ + b_n(t) (d/dt) |ψ̂_n(t)⟩ ).   (4.142)
We apply the bra ⟨ψ̂_m(t)| to this,

Σ_n ( iħ (db_n(t)/dt) ⟨ψ̂_m(t)|ψ̂_n(t)⟩ + iħ b_n(t) ⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩ ) = Σ_n b_n(t) E_n(t) ⟨ψ̂_m(t)|ψ̂_n(t)⟩,   (4.143)

and find

iħ (db_m(t)/dt) + iħ Σ_n b_n(t) ⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩ = b_m(t) E_m(t).   (4.144)

If the Hamiltonian is changed very, very slowly in time, we can imagine that |ψ̂_n(t)⟩ is also changing very, very slowly, but we are not quite there yet. Let us first split our sum of bra and ket products
Σ_n b_n(t) ⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩   (4.145)

into its n = m and n ≠ m pieces. For the n = m term

⟨ψ̂_m(t)| (d/dt) |ψ̂_m(t)⟩,   (4.146)

we note

0 = (d/dt) ⟨ψ̂_m(t)|ψ̂_m(t)⟩
  = ( (d/dt) ⟨ψ̂_m(t)| ) |ψ̂_m(t)⟩ + ⟨ψ̂_m(t)| ( (d/dt) |ψ̂_m(t)⟩ ).   (4.147)

Something plus its complex conjugate equals 0,

a + ib + (a + ib)* = 2a = 0  ⟹  a = 0,   (4.148)

so ⟨ψ̂_m(t)| (d/dt) |ψ̂_m(t)⟩ must be purely imaginary. We write

⟨ψ̂_s(t)| (d/dt) |ψ̂_s(t)⟩ = −iΓ_s(t),   (4.149)

where Γ_s is real.
4.10 adiabatic perturbation theory (cont.)
We were working through Adiabatic time dependent perturbation (as also covered in §17.5.2 of
the text [4].)
Utilizing an expansion

|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n^{(0)} t} |ψ_n^{(0)}⟩ = Σ_n b_n(t) |ψ̂_n(t)⟩,   (4.150)

where

H(t) |ψ̂_s(t)⟩ = E_s(t) |ψ̂_s(t)⟩,   (4.151)

we found

db_s(t)/dt = −i (ω_s(t) − Γ_s(t)) b_s(t) − Σ_{n≠s} b_n(t) ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩,   (4.152)

where

Γ_s(t) = i ⟨ψ̂_s(t)| (d/dt) |ψ̂_s(t)⟩.   (4.153)

Look for a solution of the form

b_s(t) = b̄_s(t) e^{−i ∫_0^t dt′ (ω_s(t′) − Γ_s(t′))} = b̄_s(t) e^{−iγ_s(t)},   (4.154)

where

γ_s(t) = ∫_0^t dt′ (ω_s(t′) − Γ_s(t′)).   (4.155)

Taking derivatives of b̄_s, and after a bit of manipulation, we find that things conveniently cancel

db̄_s(t)/dt = (d/dt) ( b_s(t) e^{iγ_s(t)} )
  = (db_s(t)/dt) e^{iγ_s(t)} + b_s(t) (d/dt) e^{iγ_s(t)}
  = (db_s(t)/dt) e^{iγ_s(t)} + b_s(t) i (ω_s(t) − Γ_s(t)) e^{iγ_s(t)}.   (4.156)
88 time dependent pertubation
We find, substituting eq. (4.152),

db̄_s(t)/dt = −Σ_{n≠s} b_n(t) e^{iγ_s(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩
  = −Σ_{n≠s} b̄_n(t) e^{i(γ_s(t) − γ_n(t))} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩.   (4.158)

With γ_sn(t) = γ_s(t) − γ_n(t), this is

db̄_s(t)/dt = −Σ_{n≠s} b̄_n(t) e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩.   (4.160)
Suppose the system starts in state m, so that to lowest order b̄_n(t) ≈ δ_nm; note that the sum excludes n = s, since

Σ_{n≠s} δ_ns (···) = 0.   (4.162)

Then

db̄_s(t)/dt = −Σ_{n≠s} δ_nm e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩
  = −e^{iγ_sm(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩.   (4.163)

But

γ_sm(t) = ∫_0^t dt′ ( (1/ħ)(E_s(t′) − E_m(t′)) − Γ_s(t′) + Γ_m(t′) ).   (4.164)
FIXME: I think we argued in class that the Γ contributions are negligible. Why was that?
Now, our energy levels will have variation with time, as illustrated in fig. 4.12 (levels E_0(t), E_1(t), E_2(t), E_3(t)). Perhaps unrealistically, suppose that our energy levels have some "typical" energy difference ∆E, so that

γ_sm(t) ≈ (∆E/ħ) t ≡ t/τ,   (4.165)

or

τ = ħ/∆E.   (4.166)

Suppose that τ is much less than a typical time T over which instantaneous quantities (wavefunctions and brakets) change. After a large time T, the phase factor e^{iγ_sm(t)} is whipping around really fast, as illustrated in fig. 4.13. So, while ⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩ is changing really slowly, the phase portion is changing really fast. The key to the approximate solution is factoring out this quickly changing phase term.
Note: Γ_s(t) gives rise to what is called the "Berry" phase [15], which can be shown to have a geometric interpretation, whereas the E_s(t′)/ħ part is the dynamical phase.
To proceed we can introduce λ terms, treating the coupling as a perturbation,

−Σ_{n≠s} e^{iγ_sn(t)} λ (···).   (4.169)
This λ approximation and a similar Taylor series expansion in time have been explored further
in 19.
Degeneracy   Suppose we have some branching of energy levels that were initially degenerate, as illustrated in fig. 4.14. We have to choose states properly so there is a continuous evolution in the instantaneous eigenvalues as H(t) changes.
4.11 examples
|ψ(t)⟩ = Σ_α b_α(t) |ψ̂_α(t)⟩,   (4.170)

where

H(t) |ψ̂_α(t)⟩ = E_α(t) |ψ̂_α(t)⟩.   (4.171)

We found

b_α(t) = b̄_α(t) e^{−(i/ħ) ∫_0^t (E_α(t′) − ħΓ_α(t′)) dt′},   (4.172)

where

Γ_α = i ⟨ψ̂_α(t)| (d/dt) |ψ̂_α(t)⟩,   (4.173)

and

(d/dt) b̄_α(t) = −Σ_{β≠α} b̄_β(t) e^{−(i/ħ) ∫_0^t (E_βα(t′) − ħΓ_βα(t′)) dt′} ⟨ψ̂_α(t)| (d/dt) |ψ̂_β(t)⟩.   (4.174)
{ (1/√2)(|2,0,0⟩ ± |2,1,0⟩), |2,1,±1⟩ }   (4.175)

Now expand the bra derivative kets.
A different way to this end result   A result of this form is also derived in [2] §20.1, but with a different approach. There he takes derivatives of

H(t) |ψ̂_β(t)⟩ = E_β(t) |ψ̂_β(t)⟩,   (4.179)

or

⟨ψ̂_α(t)| (d/dt) |ψ̂_β(t)⟩ = ⟨ψ̂_α(t)| (dH(t)/dt) |ψ̂_β(t)⟩ / (E_β(t) − E_α(t)),   (4.182)

so without the implied λ perturbation of |ψ̂_α(t)⟩ we can from eq. (4.174) write the exact generalization of eq. (4.178) as

(d/dt) b̄_α(t) = −Σ_{β≠α} b̄_β(t) e^{−(i/ħ) ∫_0^t (E_βα(t′) − ħΓ_βα(t′)) dt′} ⟨ψ̂_α(t)| (dH(t)/dt) |ψ̂_β(t)⟩ / (E_β(t) − E_α(t)).   (4.183)
5 fermi's golden rule
See §17.2 of the text [4].
Fermi originally had two golden rules, but his first one has mostly been forgotten. This refers
to his second.
This is really important, and probably the single most important thing to learn in this course.
You will find this falls out of many complex calculations.
Returning to general time dependent equations with

H = H_0 + H′(t)   (5.1)

|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n t} |ψ_n⟩   (5.2)

and

iħ ċ_m = Σ_n H′_mn e^{iω_mn t} c_n(t),   (5.3)

where

H′_mn(t) = ⟨ψ_m| H′(t) |ψ_n⟩
ω_n = E_n/ħ   (5.4)
ω_mn = ω_m − ω_n.
If c_m^{(0)} = δ_mi, then to first order

iħ ċ_m^{(1)} = H′_mi(t) e^{iω_mi t},

and

c_m^{(1)}(t) = (1/iħ) ∫_{t_0}^t H′_mi(t′) e^{iω_mi t′} dt′.   (5.7)
Reminder   We have considered this, using eq. (5.7), for a pulse as in fig. 5.1. Now we want to consider instead a non-terminating signal that was zero before some initial time, as illustrated in fig. 5.2, where the separation between two peaks is ∆t = 2π/ω_0.
H′_mi(t) = −⟨ψ_m| µ |ψ_i⟩ · E(t) = { 2A_mi sin(ω_0 t) if t > 0;  0 if t < 0 }.   (5.8)
Here the factor of 2 has been included for consistency with the text.
0
Hmi (t) = iAmi e−iω0 t − eiω0 t (5.9)
Z t
Ami
dt0 ei(ωmi −ω0 t) − ei(ωmi +ω0 t)
c(1)
m (t) = (5.10)
h̄ t0
ωmi
Suppose that
ω0 ≈ ωmi , (5.11)
then
Z t
Ami
c(1)
m (t) ≈ dt0 1 − e2iω0 t , (5.12)
h̄ t0
where

∫_0^t e^{2iω_0 t′} dt′ = (e^{2iω_0 t} − 1)/(2iω_0) = e^{iω_0 t} sin(ω_0 t)/ω_0 ∼ 1/ω_0,   (5.13)

so for t ≫ 1/ω_0 and ω_0 ≈ ω_mi we have

c_m^{(1)}(t) ≈ (A_mi/ħ) t.   (5.14)

Similarly for ω_0 ≈ ω_im, as in fig. 5.4, we have

c_m^{(1)}(t) ≈ (A_mi/ħ) ∫_{t_0}^t dt′ ( e^{−2iω_0 t′} − 1 ),   (5.15)

and

c_m^{(1)}(t) ≈ −(A_mi/ħ) t.   (5.16)

5.1 recap. where we got to on fermi's golden rule
We are continuing on the topic of Fermi's golden rule, as also covered in §17.2 of the text [4], utilizing a wave train with peak separation ∆t = 2π/ω_0, zero before some initial time (fig. 5.5). We perturb a state in the ith energy level, and look at the states for the mth energy level, as illustrated in fig. 5.6.
H′_mi(t) = 2A_mi sin(ω_0 t) θ(t)
         = iA_mi ( e^{−iω_0 t} − e^{iω_0 t} ) θ(t),   (5.17)
and we found
Z t
Ami
dt0 ei(ωmi −ω0 )t − ei(ωmi +ω0 )t ,
0 0
c(1)
m (t) = (5.18)
h̄ 0
2 A 2
c(1) mi
m (t) ∼
t2 + · · · (5.19)
h̄
where ω₀ t ≫ 1 for ω_{mi} ∼ ±ω₀. We can also just integrate eq. (5.18) directly.
5.2 fermi's golden rule

Fermi's golden rule applies to a continuum of states (there are other forms of Fermi's golden rule, but this is the one we will talk about, and is the one in the book). One example is the ionized states of an atom, where the energy level separation becomes so small that we can consider it continuous.
Another example is the unbound states in a semiconductor well, as illustrated in fig. 5.9.
Note that we can have reflection from the well even in the continuum states where we would
have no such reflection classically. However, with enough energy, states are approximately plane
waves. In one dimension
⟨x|ψ_p⟩ ≈ e^{ipx/h̄}/√(2π h̄)
⟨ψ_p|ψ_{p′}⟩ = δ(p − p′) (5.23)

or in 3d

⟨r|ψ_p⟩ ≈ e^{ip·r/h̄}/(2π h̄)^{3/2}
⟨ψ_p|ψ_{p′}⟩ = δ³(p − p′) (5.24)
Let us consider the 1d model for the quantum well in more detail. Including both discrete and continuous states we have

|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n t} |ψ_n⟩ + ∫ dp c_p(t) e^{−iω_p t} |ψ_p⟩ (5.25)
Imagine at t = 0 that the wave function started in some discrete state, and look at the probability that we “kick the electron out of the well”. Calculate

P = ∫ dp |c_p^{(1)}(t)|². (5.26)
Now, we assume that our matrix element has the following form

H′_{pi}(t) = (A_{pi} e^{−iω₀ t} + B_{pi} e^{iω₀ t}) θ(t), (5.27)

generalizing the earlier

H′_{mi}(t) = iA_{mi} (e^{−iω₀ t} − e^{iω₀ t}) θ(t). (5.28)
This gives

P = ∫ dp |A_{pi}(ω₀, t) + B_{pi}(−ω₀, t)|², (5.29)

where A_{pi}(ω₀, t) and B_{pi}(−ω₀, t) denote the time integrals of the respective exponential terms (fig. 5.10).
Our probability to find the particle in the continuum range is now approximately
P = ∫ dp |A_{pi}(ω₀, t)|². (5.32)
With

ω_{pi} − ω₀ = (1/h̄)(p²/2m − E_i) − ω₀, (5.33)

define p̄ so that

0 = (1/h̄)(p̄²/2m − E_i) − ω₀. (5.34)
In momentum space, we now have the sinc functions peaked at ±p̄, as in fig. 5.11, giving

P₊ = ∫₀^∞ dp |c_p^{(1)}(t)|² = (4/h̄²) ∫₀^∞ dp |A_{pi}|² sin²((ω_{pi} − ω₀)t/2)/(ω_{pi} − ω₀)², (5.35)

with

ω_{pi} = (1/h̄)(p²/2m − E_i). (5.36)
Changing variables to ω_{pi},

P₊ = (4/h̄²) ∫_{−E_i/h̄}^{∞} dω_{pi} (dp/dω_{pi}) |A_{pi}|² sin²((ω_{pi} − ω₀)t/2)/(ω_{pi} − ω₀)². (5.37)
Now suppose we have t small enough so that P₊ ≪ 1, and t large enough so that

(dp/dω_{pi}) |A_{pi}|² (5.38)

is roughly constant over the width ∆ω of the sinc peak. This is a sort of “Goldilocks condition”, a time that can not be too small, and can not be too large, but instead has to be “just right”. Given such a condition, the remaining frequency integral evaluates to πt/2, and

P₊ = (4/h̄²) |A_{pi}|² (dp/dω_{pi})|_{p̄} (πt/2), (5.41)

with |A_{pi}|² the matrix element factor and dp/dω_{pi} the density of states factor.
The d p/dω pi is something like “how many continuous states are associated with a transition
from a discrete frequency interval.”
We can also get this formally from eq. (5.39) with

sin²((ω_{pi} − ω₀)t/2)/(ω_{pi} − ω₀)² → (πt/2) δ(ω_{pi} − ω₀),

so
|c_p^{(1)}(t)|² → (2πt/h̄²) |A_{pi}|² δ(ω_{pi} − ω₀)
              = (2πt/h̄) |A_{pi}|² δ(E_{pi} − h̄ω₀), (5.43)

where δ(ax) = δ(x)/|a| has been used to pull a factor of h̄ into the delta function.
The ratio of the coefficient to time is then

|c_p^{(1)}(t)|²/t = (2π/h̄) |A_{pi}|² δ(E_{pi} − h̄ω₀), (5.44)
or “between friends”

“d|c_p^{(1)}(t)|²/dt” = (2π/h̄) |A_{pi}|² δ(E_{pi} − h̄ω₀). (5.45)

Roughly speaking, we have a “rate” of transitions from the discrete state into the continuum. Here “rate” is in quotes since the result does not hold for small t.
This has been worked out for P+ . This can also be done for P− , the probability that the
electron will end up in a left trending continuum state.
While the above is not a formal derivation, it illustrates the form of what is called Fermi's golden rule. Namely, such a rate has the structure
(2π/h̄) × (matrix element)² × (energy conservation δ). (5.46)
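The energy conserving delta comes from the large-t limit of the sinc-squared factor. A small numeric sketch (not from the lecture; grid and cutoff values are arbitrary) checks the identity used above, that ∫ dω sin²(ωt/2)/ω² = πt/2, so that for large t the factor acts like (πt/2)δ(ω):

```python
import numpy as np

# Check: integral of sin^2(w t / 2) / w^2 over w equals pi t / 2, so the
# factor acts like (pi t / 2) delta(w) for large t. A midpoint grid avoids
# the removable singularity at w = 0.
def sinc_sq_integral(t, w_max=200.0, n=2_000_000):
    dw = 2.0 * w_max / n
    w = (np.arange(n) - n / 2 + 0.5) * dw
    return np.sum(np.sin(w * t / 2.0) ** 2 / w ** 2) * dw

for t in (10.0, 100.0):
    ratio = sinc_sq_integral(t) / (np.pi * t / 2.0)
    print(t, ratio)  # ratio close to 1
```

The peak also narrows like 1/t while its height grows like t², which is the delta-function limiting behavior invoked in eq. (5.43).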
WKB METHOD
6
6.1 wkb (wentzel-kramers-brillouin) method
−(h̄²/2m) d²U/dx² + V(x)U(x) = EU(x), (6.1)

which we can write as

d²U/dx² + (2m/h̄²)(E − V(x))U(x) = 0. (6.2)
Consider a finite well potential as in fig. 6.1
With

k² = 2m(E − V)/h̄², E > V
κ² = 2m(V − E)/h̄², V > E, (6.3)

we have for a bound state within the well

U ∝ e^{±ikx}, (6.4)

and in the classically forbidden regions

U ∝ e^{±κx}. (6.5)
In general we can hope for something similar. Let us look for that something, but allow the constants k and κ to be functions of position

k²(x) = 2m(E − V(x))/h̄², E > V
κ²(x) = 2m(V(x) − E)/h̄², V > E. (6.6)
In terms of k, Schrödinger's equation is just

d²U(x)/dx² + k²(x)U(x) = 0. (6.7)
We use the trial solution

U(x) = e^{iφ(x)},

and obtain

iφ″(x) − (φ′(x))² + k²(x) = 0,

or

φ′(x) = ±√(k²(x) + iφ″(x)).

Things get a little confusing here with the ± variation, since we have to take a second set of square roots, so let us consider these separately. Taking φ′ ≈ +k to lowest order, so that φ″ ≈ +k′, we have
φ′(x) = ±√(ik′(x) + k²(x)) = ±k(x) √(1 + i k′(x)/k²(x)). (6.18)
If k′ is small compared to k²,

|k′(x)/k²(x)| ≪ 1, (6.19)
then we have

φ′(x) ≈ ±k(x)(1 + i k′(x)/(2k²(x))) = ±(k(x) + i k′(x)/(2k(x))). (6.20)
Since we'd picked φ′ ≈ +k in this case, we pick the positive sign, and can now integrate

φ(x) = ∫ dx k(x) + i ∫ dx k′(x)/(2k(x)) + ln const = ∫ dx k(x) + (i/2) ln k(x) + ln const. (6.21)
Going back to our wavefunction, for this E > V(x) case we have

U(x) ∼ e^{iφ(x)}
     = exp(i(∫ dx k(x) + (i/2) ln k(x) + const))
     ∼ exp(i ∫ dx k(x) − (1/2) ln k(x))
     = e^{i ∫ dx k(x)} e^{−(1/2) ln k(x)}, (6.22)
or

U(x) ∝ (1/√k(x)) e^{i ∫ dx k(x)}. (6.23)
For the other root, φ′ ≈ −k with φ″ ≈ −k′, we have instead

φ′(x) ≈ ±k(x)(1 − i k′(x)/(2k²(x))) = ±(k(x) − i k′(x)/(2k(x))). (6.24)
This time we want the negative root to match φ′ ≈ −k. Integrating, we have

iφ(x) = −i ∫ dx (k(x) − i k′(x)/(2k(x)))
      = −i ∫ k(x) dx − (1/2) ∫ dx k′/k
      = −i ∫ k(x) dx − (1/2) ln k + ln const. (6.25)
This gives us

U(x) ∝ (1/√k(x)) e^{−i ∫ dx k(x)}. (6.26)
6.2 turning points

In both cases we have

U(x) ∝ (1/√k(x)) e^{±i ∫ dx k(x)}. (6.27)
It's not hard to show that for the E < V(x) case we find

U(x) ∝ (1/√κ(x)) e^{± ∫ dx κ(x)}, (6.28)

this time provided that our potential satisfies

|κ′(x)/κ²(x)| ≪ 1. (6.29)
Validity:

1. V(x) changes very slowly, so that k′(x) is small, where k(x) = √(2m(E − V(x)))/h̄.

WKB will not work at the turning points, since our main assumption was that

|k′(x)/k²(x)| ≪ 1, (6.30)
so we get into trouble where k(x) ∼ 0. There are some methods for dealing with this. Our text
as well as Griffiths give some examples, but they require Bessel functions and more complex
mathematics.
The idea is that one finds the WKB solution in the regions of validity, and then looks for a
polynomial solution in the patching region where we are closer to the turning point, probably
requiring lookup of various special functions.
This power series method is also outlined in [19], where solutions to connect the regions are
expressed in terms of Airy functions.
6.3 examples
V(x) =
  v(x)  if x ∈ [0, a]
  ∞     otherwise (6.31)
The WKB solution in the well is

ψ(x) = (1/√k(x)) (C₊ e^{i ∫₀^x k(x′)dx′} + C₋ e^{−i ∫₀^x k(x′)dx′}), (6.32)
where

k(x) = (1/h̄) √(2m(E − v(x))). (6.33)
With

φ(x) = ∫₀^x k(x′) dx′, (6.34)
we have

ψ(x) = (1/√k(x)) (C₊(cos φ + i sin φ) + C₋(cos φ − i sin φ))
     = (1/√k(x)) ((C₊ + C₋) cos φ + i(C₊ − C₋) sin φ)
     ≡ (1/√k(x)) (C₂ cos φ + C₁ sin φ), (6.35)
where

C₂ = C₊ + C₋
C₁ = i(C₊ − C₋). (6.36)
Since

φ(0) = 0, (6.37)

the boundary condition ψ(0) = 0 requires

(1/√k(0)) C₂ = 0, (6.38)

so C₂ = 0 and

ψ(x) ∼ (1/√k(x)) sin φ. (6.39)
At the other boundary
ψ(a) = 0 (6.40)
So we require sin φ(a) = 0, i.e. φ(a) = nπ, or

(1/h̄) ∫₀^a √(2m(E − v(x′))) dx′ = nπ. (6.42)
For the v(x) = 0 case this is

(1/h̄) √(2mE) a = nπ, (6.43)

or

E = (1/2m)(nπ h̄/a)². (6.44)
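The quantization condition can also be solved numerically; a sketch (with the arbitrary choice h̄ = m = a = 1, not from the lecture) solves eq. (6.42) by bisection and reproduces eq. (6.44) for v(x) = 0:

```python
import numpy as np

# WKB quantization: (1/hbar) * integral_0^a sqrt(2 m (E - v(x))) dx = n pi.
# Solve for E by bisection, and compare with E_n = (1/2m)(n pi hbar / a)^2.
hbar = m = a = 1.0
v = lambda x: np.zeros_like(x)      # flat well bottom, v(x) = 0

def phase(E, n_pts=10_000):
    x = (np.arange(n_pts) + 0.5) * (a / n_pts)   # midpoint rule
    return np.sum(np.sqrt(2.0 * m * np.maximum(E - v(x), 0.0))) * (a / n_pts) / hbar

def wkb_energy(n):
    lo, hi = 0.0, 1e4
    for _ in range(100):                          # bisection (phase is monotone in E)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phase(mid) < n * np.pi else (lo, mid)
    return 0.5 * (lo + hi)

for n in (1, 2, 3):
    exact = (n * np.pi * hbar / a) ** 2 / (2.0 * m)
    print(n, wkb_energy(n), exact)
```

For this potential WKB happens to be exact; for a nonzero bump v(x) the same routine gives the approximate WKB levels with no further changes.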
Part II
SPIN, ANGULAR MOMENTUM, AND TWO PARTICLE SYSTEMS
COMPOSITE SYSTEMS
7
7.1 hilbert spaces
READING: §30 of the text [4] covers entangled states. The rest of the composite state back-
ground is buried somewhere in some of the advanced material sections. FIXME: what section?
Example: one spin one-half particle and one spin one particle. We can describe these quantum mechanically with a pair of Hilbert spaces
H1 , (7.1)
of dimension D1
H2 , (7.2)
of dimension D2
Recall that a Hilbert space (finite or infinite dimensional) is the set of states that describe the
system. There were some additional details (completeness, normalizable, L2 integrable, ...) not
really covered in the physics curriculum, but available in mathematical descriptions.
We form the composite (Hilbert) space

H = H₁ ⊗ H₂. (7.3)

For H₁, with basis kets |φ₁^{(i)}⟩, (7.4) a state is

|I⟩ = Σ_{i=1}^{D₁} c_i |φ₁^{(i)}⟩, (7.5)

where

⟨φ₁^{(i)}|φ₁^{(j)}⟩ = δ_{ij}. (7.6)
Similarly for H₂, with basis kets |φ₂^{(i)}⟩, (7.7)

|II⟩ = Σ_{i=1}^{D₂} d_i |φ₂^{(i)}⟩, (7.8)

where

⟨φ₂^{(i)}|φ₂^{(j)}⟩ = δ_{ij}. (7.9)
The composite space basis kets are

|φ₁^{(i)}⟩ ⊗ |φ₂^{(j)}⟩ = |φ^{(ij)}⟩, (7.10)

where

⟨φ^{(ij)}|φ^{(kl)}⟩ = δ_{ik} δ_{jl}. (7.11)

A general state is

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} |φ₁^{(i)}⟩ ⊗ |φ₂^{(j)}⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} |φ^{(ij)}⟩. (7.12)
7.2 operators
With operators O₁ and O₂ on the respective Hilbert spaces, we would now like to build

O₁ ⊗ O₂. (7.14)

One defines

(O₁ ⊗ O₂)|ψ⟩ ≡ Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} (O₁|φ₁^{(i)}⟩) ⊗ (O₂|φ₂^{(j)}⟩). (7.15)
Q:Can every operator that can be defined on the composite space have a representation of this
form? No.
Special cases: the identity operators. Suppose that

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} |φ₁^{(i)}⟩ ⊗ |φ₂^{(j)}⟩, (7.16)

then

(O₁ ⊗ I₂)|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} (O₁|φ₁^{(i)}⟩) ⊗ |φ₂^{(j)}⟩. (7.17)
These satisfy

[O₁ ⊗ I₂, I₁ ⊗ O₂] = 0. (7.18)
Let us verify this one. Suppose that our state has the representation

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_{ij} |φ₁^{(i)}⟩ ⊗ |φ₂^{(j)}⟩, (7.19)
so that the action on this ket from the composite operations are

(O₁ ⊗ I₂)|ψ⟩ = Σ_{ij} f_{ij} (O₁|φ₁^{(i)}⟩) ⊗ |φ₂^{(j)}⟩
(I₁ ⊗ O₂)|ψ⟩ = Σ_{ij} f_{ij} |φ₁^{(i)}⟩ ⊗ (O₂|φ₂^{(j)}⟩). (7.20)
Applying these in either order produces the same state, so our commutator is

[O₁ ⊗ I₂, I₁ ⊗ O₂]|ψ⟩ = Σ_{ij} f_{ij} (O₁|φ₁^{(i)}⟩) ⊗ (O₂|φ₂^{(j)}⟩) − Σ_{ij} f_{ij} (O₁|φ₁^{(i)}⟩) ⊗ (O₂|φ₂^{(j)}⟩) = 0.
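The same commutator vanishing can be checked with finite matrix representations, where the direct product becomes the Kronecker product; a small numpy sketch (the random operators and dimensions are arbitrary choices):

```python
import numpy as np

# Verify [O1 (x) I2, I1 (x) O2] = 0 using matrix representations, where (x)
# denotes the Kronecker (tensor) product. O1 and O2 are arbitrary complex matrices.
rng = np.random.default_rng(0)
D1, D2 = 2, 3
O1 = rng.standard_normal((D1, D1)) + 1j * rng.standard_normal((D1, D1))
O2 = rng.standard_normal((D2, D2)) + 1j * rng.standard_normal((D2, D2))

A = np.kron(O1, np.eye(D2))   # O1 (x) I2
B = np.kron(np.eye(D1), O2)   # I1 (x) O2
print(np.max(np.abs(A @ B - B @ A)))  # -> 0 (up to floating point)
```

Both products equal np.kron(O1, O2), which is the matrix-representation statement of the calculation above.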
7.3 generalizations
Can generalize to
H1 ⊗ H2 ⊗ H3 ⊗ · · · (7.22)
Can also start with H and seek factor spaces. If H is not prime there are, in general, many
ways to find factor spaces
7.4 recalling the stern-gerlach system from phy354

We had one example of a composite system in phy356 that I recall. It was related to states of the silver atoms in a Stern-Gerlach apparatus, where we had one state from the Hamiltonian that governs position and momentum, and another from the Hamiltonian for the spin, where each of these states was considered separately.
This makes me wonder what would the Hamiltonian for a system (say a single electron) that
includes both spin and position/momentum would look like, and how is it that one can solve
this taking spin and non-spin states separately?
Professor Sipe, when asked said of this
“It is complicated because not only would the spin of the electron interact with the magnetic
field, but its translational motion would respond to the magnetic field too. A simpler case is a
neutral atom with an electron with an unpaired spin. Then there is no Lorentz force on the atom
itself. The Hamiltonian is just the sum of a free particle Hamiltonian and a Zeeman term due to
the spin interacting with the magnetic field. This is precisely the Stern-Gerlach problem”
I did not remember what the Zeeman term looked like, but wikipedia does [20], and it is the
magnetic field interaction
−µ · B (7.24)
that we get when we gauge transform the Dirac equation for the electron as covered in §36.4
of the text (also introduced in chapter 6, which was not covered in class). That does not look
too much like how we studied the Stern-Gerlach problem? I thought that for that problem we
had a Hamiltonian of the form
H = Σ_{ij} a_{ij} |i⟩⟨j|. (7.25)
It is not clear to me how this ket-bra Hamiltonian and the Zeeman Hamiltonian are related
(ie: the spin Hamiltonians that we used in 356 and were on old 356 exams were all pulled out
of magic hats and it was not obvious where these came from).
FIXME: incorporate what I got out of the email thread with the TA and prof on this question.
SPIN AND SPINORS
8
8.1 generators
[ Pi , P j ] = 0, (8.4)
In general

e^{A+B} ≠ e^A e^B, (8.5)

since

e^{A+B} = 1 + A + B + (1/2)(A + B)² + ··· = 1 + A + B + (1/2)(A² + AB + BA + B²) + ··· (8.6)

and

e^A e^B = (1 + A + (1/2)A² + ···)(1 + B + (1/2)B² + ···) = 1 + A + B + (1/2)(A² + 2AB + B²) + ··· (8.7)
Comparing the second order (for example) we see that we must have for equality
AB + BA = 2AB, (8.8)
or
BA = AB, (8.9)
or
[ A, B] = 0 (8.10)
Now consider

e^{−ia·P/h̄} |ψ⟩ = |ψ′⟩. (8.11)

Does this ket, “translated” by a, make any sense? The vector a lives in a 3D space, while our ket |ψ⟩ lives in Hilbert space. A quantity like this deserves some careful thought, and is the subject of some such thought in the Interpretations of Quantum Mechanics course. For now, we can think of the operator and ket as a “gadget” that prepares a state.
A student in class pointed out that |ψi can be dependent on many degrees of freedom,
for example, the positions of eight different particles. This translation gadget in such a case
acts on the whole kit and caboodle.
Now consider the matrix element

⟨r|ψ′⟩ = ⟨r| e^{−ia·P/h̄} |ψ⟩. (8.12)

Note that

⟨r| e^{−ia·P/h̄} = (e^{ia·P/h̄} |r⟩)† = (|r − a⟩)†, (8.13)
so

⟨r|ψ′⟩ = ⟨r − a|ψ⟩, (8.14)

or

ψ′(r) = ψ(r − a). (8.15)
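This translation property, ψ′(r) = ψ(r − a), is easy to verify numerically in 1D by realizing e^{−iaP/h̄} in momentum space as multiplication by e^{−ipa/h̄} (a sketch with h̄ = 1; the Gaussian test state and grid are arbitrary choices):

```python
import numpy as np

# Apply exp(-i a P / hbar) to a wavefunction via the FFT: multiply each momentum
# component by exp(-i p a / hbar), transform back, and compare with psi(x - a).
hbar = 1.0
N, L_box = 1024, 40.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
p = 2.0 * np.pi * hbar * np.fft.fftfreq(N, d=L_box / N)

psi = np.exp(-(x + 5.0) ** 2)      # Gaussian centered at x = -5
a = 3.0                            # translation distance
psi_translated = np.fft.ifft(np.exp(-1j * p * a / hbar) * np.fft.fft(psi))

err = np.max(np.abs(psi_translated - np.exp(-((x - a) + 5.0) ** 2)))
print(err)  # small (spectral accuracy)
```

The periodic FFT makes this exact up to the (negligible) Gaussian tails wrapping around the box edges.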
Similarly, for the angular momentum operator

L = R × P, (8.16)

where

L_x = Y P_z − Z P_y
L_y = Z P_x − X P_z (8.17)
L_z = X P_y − Y P_x,

we have

[L_i, L_j] = i h̄ Σ_k ε_{ijk} L_k. (8.18)

These non-zero commutators show that the components of angular momentum do not commute.
Define the rotated vector

r′ = R(r). (8.19)

This is the vector that we get by actively rotating the vector r by an angle θ counterclockwise about n̂, as in fig. 8.3.
An active rotation rotates the vector, leaving the coordinate system fixed, whereas a
passive rotation is one for which the coordinate system is rotated, and the vector is left
fixed.
Note that rotations do not commute. Suppose that we have a pair of rotations as in fig. 8.4
Again, we get the graphic demo, with Professor Sipe rotating the big wooden cat sculp-
ture. Did he bring that in to class just to make this point (too bad I missed the first couple
minutes of the lecture).
Rather amusingly, he points out that most things in life do not commute. We get much
different results if we apply the operations of putting water into the teapot and turning on
the stove in different orders.
For a rotated ket we have

|ψ′⟩ = e^{−iθn̂·L/h̄} |ψ⟩. (8.20)

In this we have

⟨r| e^{−iθn̂·L/h̄} = (e^{iθn̂·L/h̄} |r⟩)† = (|R⁻¹(r)⟩)†, (8.22)

so

⟨r|ψ′⟩ = ⟨R⁻¹(r)|ψ⟩, (8.23)

or

ψ′(r) = ψ(R⁻¹(r)). (8.24)
8.2 generalizations
Recall what you did last year, where H, P, and L were defined mechanically. We found
[ Pi , P j ] = 0, (8.25)
and
[L_i, L_j] = i h̄ Σ_k ε_{ijk} L_k. (8.26)
These are the relations that show us the way translations and rotations combine. We want to
move up to a higher plane, a new level of abstraction. To do so we define H as the operator that
generates time evolution. If we have a theory that covers the behavior of how anything evolves
in time, H encodes the rules for this time evolution.
Define P as the operator that generates translations in space.
Define J as the operator that generates rotations in space.
In order that these match expectations, we require
[ Pi , P j ] = 0, (8.27)
and
[J_i, J_j] = i h̄ Σ_k ε_{ijk} J_k. (8.28)
J ≡ L = R × P. (8.29)
We actually need a generalization of this since this is, in fact, not good enough, even for low
energy physics.
Many component wave functions. We are free to construct tuples of spatial vector functions like

[Ψ_I(r, t), Ψ_II(r, t)]ᵀ, (8.30)

or

[Ψ_I(r, t), Ψ_II(r, t), Ψ_III(r, t)]ᵀ, (8.31)

etc.
We will see that these behave qualitatively differently than one component wave functions. We also do not have to be considering multiple particle wave functions; just one particle may require three functions in R³ to describe it (ie: we are moving in on spin).
A classical analogy. “There are only bad analogies, since if they were good they would be describing the same thing. We can, however, produce some useful bad analogies.”
1. A temperature field

T(r) (8.32)

2. An electric field

[E_x(r), E_y(r), E_z(r)]ᵀ (8.33)
These behave in a much different way. If we rotate a scalar field like T (r) as in fig. 8.5
Suppose we have a temperature field generated by, say, a match. Rotating the match above,
we have
Compare this to the rotation of an electric field, perhaps one produced by a capacitor, as in
fig. 8.6
No, because the components get mixed, as well as the positions at which those components are evaluated.

8.3 multiple wavefunction spaces

We will work with many component wave functions, some of which will behave like vectors, and we will have to develop the methods and language to tackle this.
Alongside the position kets |r⟩, (8.37) now introduce many function spaces

[ψ₁(r), ψ₂(r), …, ψ_γ(r)]ᵀ. (8.38)
H = Ho ⊗ H s (8.41)
Where Ho is the Hilbert space of "scalar" QM, “o” orbital and translational motion, associated
with kets |ri and H s is the Hilbert space associated with the γ components |αi. This latter space
we will label the “spin” or “internal physics” (class suggestion: or perhaps intrinsic). This is
“unconnected” with translational motion.
We build up the basis kets for H by direct products

|rα⟩ = |r⟩ ⊗ |α⟩. (8.42)
Now, for a rotated ket we seek a general angular momentum operator J such that

|ψ′⟩ = e^{−iθn̂·J/h̄} |ψ⟩, (8.43)
where
J = L + S, (8.44)
where L acts over kets in Ho , “orbital angular momentum”, and S is the “spin angular mo-
mentum”, acting on kets in H s .
Strictly speaking this would be written as direct products involving the respective identities
J = L ⊗ I s + Io ⊗ S. (8.45)
We require
[J_i, J_j] = i h̄ Σ_k ε_{ijk} J_k. (8.46)
Since L and S “act over separate Hilbert spaces”, and these come from legacy operators, we have

[L_i, S_j] = 0, (8.47)

so the requirement eq. (8.46) reduces to

[S_i, S_j] = i h̄ Σ_k ε_{ijk} S_k, (8.49)
as expected. We could, in principle, have more complicated operators, where this would not
be true. This is a proposal of sorts. Given such a definition of operators, let us see where we can
go with it.
For matrix elements of L we have

⟨r| L_x |r′⟩ = −i h̄ (y ∂/∂z − z ∂/∂y) δ(r − r′). (8.50)
What are the matrix elements ⟨α| S_i |α′⟩? From the commutation relationships we know that

Σ_{α″=1}^{γ} ⟨α|S_i|α″⟩⟨α″|S_j|α′⟩ − Σ_{α″=1}^{γ} ⟨α|S_j|α″⟩⟨α″|S_i|α′⟩ = i h̄ Σ_k ε_{ijk} ⟨α|S_k|α′⟩. (8.51)
We see that our matrix elements are tightly constrained by our choice of commutator relationships. We have γ² such matrix elements, and it turns out that it is possible to choose (or find) matrix elements that satisfy these constraints.
The ⟨α| S_i |α′⟩ matrix elements that satisfy these constraints are found by imposing the commutation relations

[S_i, S_j] = i h̄ Σ_k ε_{ijk} S_k, (8.52)
and with

S² = Σ_j S_j², (8.53)
[S², S_i] = 0, (8.54)

and seeking eigenkets

S² |s m_s⟩ = s(s + 1) h̄² |s m_s⟩
S_z |s m_s⟩ = m_s h̄ |s m_s⟩,

for which one finds γ = 2s + 1:

s = 1/2 ⇒ γ = 2
s = 1 ⇒ γ = 3 (8.56)
s = 3/2 ⇒ γ = 4
We start with the algebra (mathematically the Lie algebra), and one can compute the Hilbert
spaces that are consistent with these algebraic constraints.
We assume that for any type of given particle S is fixed, where this has to do with the nature
of the particle.
s = 1/2: a spin 1/2 particle
s = 1: a spin 1 particle (8.57)
s = 3/2: a spin 3/2 particle
S is fixed once we decide that we are talking about a specific type of particle.
A non-relativistic particle in this framework has two nondynamical quantities. One is the
mass m and we now introduce a new invariant, the spin s of the particle.
This has been introduced as a kind of strategy. It is something that we are going to try, and it turns out that it works: this agrees well with experiment.

In 1939 Wigner asked, “what constraints do I get if I combine the constraints of quantum mechanics with special relativity?” It turns out that in the non-relativistic limit, we get just this.
There is a subtlety here, because we get into some logical trouble with the photon with a rest
mass of zero (m = 0 is certainly allowed as a value of our invariant m above). We can not stop or
slow down a photon, so orbital angular momentum is only a conceptual idea. Really, the orbital
angular momentum and the spin angular momentum cannot be separated out for a photon, so
talking of a spin 1 particle really means spin as in J, and not spin as in L.
Spin one half particles Reading: See §26.6 in the text [4].
Let us start talking about the simplest case, s = 1/2. This includes electrons, all the other leptons, and quarks (in contrast to integer spin particles like the photon and the weakly interacting W and Z bosons).
s = 1/2
m_s = ±1/2 (8.58)

with states

|s m_s⟩ = |1/2, 1/2⟩, |1/2, −1/2⟩. (8.59)
Note there is a convention: writing a bar over m_s for a negative value,

|½ ½̄⟩ = |½, −½⟩
|½ ½⟩ = |½, ½⟩. (8.60)
These satisfy

S² |½ m_s⟩ = (½)(½ + 1) h̄² |½ m_s⟩ = (3/4) h̄² |½ m_s⟩ (8.61)
S_z |½ m_s⟩ = m_s h̄ |½ m_s⟩. (8.62)
For shorthand

|½ ½⟩ = |+⟩
|½ ½̄⟩ = |−⟩. (8.63)
In this basis

S² → (3/4) h̄² [[1, 0], [0, 1]] (8.64)
S_z → (h̄/2) [[1, 0], [0, −1]] (8.65)
One can easily work out from the commutation relationships that

S_x → (h̄/2) [[0, 1], [1, 0]] (8.66)
S_y → (h̄/2) [[0, −i], [i, 0]] (8.67)
REPRESENTATION OF TWO STATE KETS AND PAULI SPIN MATRICES
9
9.1 representation of kets

We found the representations

S_x → (h̄/2) [[0, 1], [1, 0]] (9.1)
S_y → (h̄/2) [[0, −i], [i, 0]] (9.2)
S_z → (h̄/2) [[1, 0], [0, −1]] (9.3)
A general spin ket has the representation

|χ⟩ → [⟨+|χ⟩, ⟨−|χ⟩]ᵀ, (9.4)

and

|+⟩ → [1, 0]ᵀ (9.5)
|−⟩ → [0, 1]ᵀ, (9.6)

so that, for example,

S_y |+⟩ → (h̄/2) [[0, −i], [i, 0]] [1, 0]ᵀ = (i h̄/2) [0, 1]ᵀ. (9.7)
Kets in Ho ⊗ Hs:

|ψ⟩ → [⟨r+|ψ⟩, ⟨r−|ψ⟩]ᵀ = [ψ₊(r), ψ₋(r)]ᵀ. (9.8)

This is a “spinor”. Put

⟨r±|ψ⟩ = ψ±(r), so that the spinor is ψ₊ [1, 0]ᵀ + ψ₋ [0, 1]ᵀ, (9.9)
hψ|ψi = 1 (9.10)
Use
I = Io ⊗ I s
Z
= d3 r |ri hr| ⊗ (|+i h+| + |−i h−|)
Z
(9.11)
X
= d3 r |ri hr| ⊗ |σi hσ|
σ=±
XZ
= d3 r |rσi hrσ|
σ=±
So
XZ
hψ| I |ψi = d3 r hψ|rσi hrσ|ψi
σ=± (9.12)
Z
= d3 r |ψ+ (r)|2 + |ψ− (r)|2
Alternatively

|ψ⟩ = I |ψ⟩
    = Σ_{σ=±} ∫ d³r |rσ⟩⟨rσ|ψ⟩
    = Σ_{σ=±} (∫ d³r ψ_σ(r) |rσ⟩)
    = Σ_{σ=±} (∫ d³r ψ_σ(r) |r⟩) ⊗ |σ⟩. (9.13)

Defining

|ψ_σ⟩ = ∫ d³r ψ_σ(r) |r⟩, (9.14)
then

|ψ⟩ = |ψ₊⟩ ⊗ |+⟩ + |ψ₋⟩ ⊗ |−⟩.

Suppose we want to rotate a ket. We do this with a full angular momentum operator e^{−iθn̂·J/h̄}.

A simple example: suppose |ψ⟩ = |ψ_o⟩ ⊗ |χ⟩, where |χ⟩ = α|+⟩ + β|−⟩. Then for

⟨ψ|ψ⟩ = 1, (9.28)

we require |α|² + |β|² = 1, with the spin up and spin down special cases

|α|² = 1, β = 0 (9.31)
|β|² = 1, α = 0 (9.32)
FIXME: F1: standard spherical projection picture, with n̂ projected down onto the x, y plane
at angle φ and at an angle θ from the z axis.
The eigenvalues will still be ± h̄/2 since there is nothing special about the z direction.
n̂·S = n_x S_x + n_y S_y + n_z S_z
    → (h̄/2) [[n_z, n_x − i n_y], [n_x + i n_y, −n_z]]
    = (h̄/2) [[cos θ, sin θ e^{−iφ}], [sin θ e^{iφ}, −cos θ]]. (9.33)
To find the eigenkets we diagonalize this, and we find representations of the eigenkets are

|n̂+⟩ → cos(θ/2) e^{−iφ/2} |+⟩ + sin(θ/2) e^{iφ/2} |−⟩ (9.36)
|n̂−⟩ → −sin(θ/2) e^{−iφ/2} |+⟩ + cos(θ/2) e^{iφ/2} |−⟩ (9.37)
Every ket

|χ⟩ → [α, β]ᵀ (9.38)

for which

|α|² + |β|² = 1 (9.39)

can be written in the form eq. (9.34) for some θ and φ, neglecting an overall phase factor.
For any ket in H s , that ket is “spin up” in some direction.
FIXME: show this.
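At least the eigenket claim of eq. (9.36) is easy to verify numerically: the stated ket is an eigenvector of n̂·S with eigenvalue +h̄/2 (a sketch with h̄ = 1; the direction angles are arbitrary):

```python
import numpy as np

# Check that cos(t/2) e^{-i phi/2} |+> + sin(t/2) e^{+i phi/2} |-> is an
# eigenket of n.S (in the 2x2 representation, hbar = 1) with eigenvalue +1/2.
theta, phi = 0.7, 1.9   # arbitrary spherical angles for the direction n
nS = 0.5 * np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                     [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])

ket = np.array([np.cos(theta / 2) * np.exp(-1j * phi / 2),
                np.sin(theta / 2) * np.exp(1j * phi / 2)])
print(np.max(np.abs(nS @ ket - 0.5 * ket)))  # -> ~0
```

Running the same check with the |n̂−⟩ components confirms the −h̄/2 eigenvalue as well.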
9.3 pauli spin matrices

It is useful to write

S_x = (h̄/2) [[0, 1], [1, 0]] ≡ (h̄/2) σ_x (9.40)
S_y = (h̄/2) [[0, −i], [i, 0]] ≡ (h̄/2) σ_y (9.41)
S_z = (h̄/2) [[1, 0], [0, −1]] ≡ (h̄/2) σ_z (9.42)

where

σ_x = [[0, 1], [1, 0]] (9.43)
σ_y = [[0, −i], [i, 0]] (9.44)
σ_z = [[1, 0], [0, −1]] (9.45)
These are the Pauli spin matrices.
Interesting properties:

• σ_x σ_y = iσ_z (and cyclic permutations) (9.47)

• tr(σ_i) = 0 (9.48)

• (n̂·σ)² = σ₀, (9.49)

where

n̂·σ ≡ n_x σ_x + n_y σ_y + n_z σ_z, (9.50)

and

σ₀ = [[1, 0], [0, 1]] (9.51)

(note tr(σ₀) ≠ 0).
A useful multiplication identity is

(A·σ)(B·σ) = (A·B) σ₀ + i (A×B)·σ,

where A and B are vectors (or more generally operators that commute with the σ matrices). The σ_α, with α, β = 0, x, y, z, also form a basis for 2×2 matrices:

M = Σ_α m_α σ_α = [[m₀ + m_z, m_x − i m_y], [m_x + i m_y, m₀ − m_z]], (9.57)

with coefficients

m_β = (1/2) tr(M σ_β). (9.58)
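These properties, and the trace-based recovery of the coefficients m_β, can be spot checked numerically (a sketch; the random matrix M is an arbitrary test case, not from the notes):

```python
import numpy as np

# Spot check the Pauli matrix properties and the expansion M = sum m_a sigma_a
# with m_b recovered as tr(M sigma_b) / 2.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s0, sx, sy, sz]

assert np.allclose(sx @ sy, 1j * sz)                      # sigma_x sigma_y = i sigma_z
assert all(np.isclose(np.trace(s), 0) for s in (sx, sy, sz))

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
m = [0.5 * np.trace(M @ s) for s in sigma]                # coefficient extraction
M_rebuilt = sum(c * s for c, s in zip(m, sigma))
print(np.allclose(M_rebuilt, M))  # -> True
```

The extraction works because tr(σ_α σ_β) = 2δ_{αβ}, so the σ_α are orthogonal under the trace inner product.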
ROTATION OPERATOR IN SPIN SPACE
10
10.1 formal taylor series expansion
e^{−iθn̂·S/h̄} = I + (−iθn̂·S/h̄) + (1/2!)(−iθn̂·S/h̄)² + (1/3!)(−iθn̂·S/h̄)³ + ··· (10.1)

or

e^{−iθn̂·σ/2} = σ₀ + (−iθ/2)(n̂·σ) + (1/2!)(−iθ/2)²(n̂·σ)² + (1/3!)(−iθ/2)³(n̂·σ)³ + ···
            = σ₀ + (−iθ/2)(n̂·σ) + (1/2!)(−iθ/2)²σ₀ + (1/3!)(−iθ/2)³(n̂·σ) + ···
            = σ₀ (1 − (1/2!)(θ/2)² + ···) − i(n̂·σ)(θ/2 − (1/3!)(θ/2)³ + ···) (10.2)
            = cos(θ/2) σ₀ − i sin(θ/2)(n̂·σ),

where we have used the fact that (n̂·σ)² = σ₀.
So our representation of the rotation operator in spin space is

e^{−iθn̂·S/h̄} → cos(θ/2) σ₀ − i sin(θ/2)(n̂·σ).
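A numeric sketch (h̄ = 1; the axis and angle are arbitrary, and a hand-rolled Taylor series stands in for a library matrix exponential) confirms the closed form, and that a full θ = 2π turn gives −1 rather than the identity:

```python
import numpy as np

# Compare cos(theta/2) sigma_0 - i sin(theta/2) (n.sigma) with the Taylor series
# for exp(-i theta (n.sigma) / 2), then evaluate a full 2 pi rotation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta, n):
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

def expm_series(A, terms=60):
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k       # accumulate A^k / k!
        out = out + term
    return out

n = np.array([1.0, 2.0, 2.0]) / 3.0   # arbitrary unit vector
theta = 1.234
ns = n[0] * sx + n[1] * sy + n[2] * sz
print(np.allclose(rotation(theta, n), expm_series(-1j * theta * ns / 2)))  # -> True
print(np.allclose(rotation(2 * np.pi, n), -np.eye(2)))                     # -> True
```

The second check is the famous spinor sign flip discussed later in this chapter: only a 4π rotation returns the identity.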
Unfortunate interjection by me I mentioned the half angle rotation operator that requires a
half angle operator sandwich. Prof. Sipe thought I might be talking about a Heisenberg picture
representation, where we have something like this in expectation values
|ψ′⟩ = e^{−iθn̂·J/h̄} |ψ⟩, (10.5)

so that

⟨ψ′| O |ψ′⟩ = ⟨ψ| e^{iθn̂·J/h̄} O e^{−iθn̂·J/h̄} |ψ⟩. (10.6)
However, what I was referring to, was that a general rotation of a vector in a Pauli matrix
basis
R(Σ_k a_k σ_k) = R(a·σ) (10.7)

can be expressed by sandwiching the Pauli vector representation by two half angle rotation operators, like our spin 1/2 operators from class today,

R(a·σ) = e^{−θ(û·σ)(v̂·σ)/2} (a·σ) e^{θ(û·σ)(v̂·σ)/2},

where û and v̂ are two non-colinear orthogonal unit vectors that define the oriented plane that we are rotating in.
For example, rotating in the x − y plane, with û = x̂ and v̂ = ŷ, expanding the exponentials and collecting terms yields our usual coordinate rotation matrix. Expressed in terms of a unit normal to that plane, we form the normal by multiplication with the unit spatial volume element I = σ₁σ₂σ₃. For example:

σ₁σ₂σ₃(σ₃) = σ₁σ₂, (10.11)
and we can in general write a spatial rotation in a Pauli basis representation as a sandwich of half angle rotation matrix exponentials. When n̂·a = 0 we get the complex-number like single sided exponential rotation

R(a·σ) = (a·σ) e^{iθ(n̂·σ)}

(since a·σ anticommutes with n̂·σ in that case).
I believe it was pointed out in one of [5] or [7] that rotations expressed in terms of half angle Pauli matrices have caused some confusion to students of quantum mechanics, because this 2π “rotation” only generates half of the full spatial rotation. It was argued that this sort of confusion can be avoided if one observes that these half angle rotation exponentials are exactly what we require for general spatial rotations, and that a pair of half angle operators is required to produce a full spatial rotation.
The book [5] takes this a lot further, and produces a formulation of spin operators that is devoid of the normal scalar imaginary i (using the Clifford algebra spatial unit volume element instead), and also does not assume a specific matrix representation of the spin operators. They argue that this leads to some subtleties associated with interpretation, but at the time I was attempting to read that text I did not know enough QM to appreciate what they were doing, and have not had time to attempt a new study of that content.
Asked about this offline, our Professor says, “Yes.... but I think this kind of result is essentially
what I was saying about the ’rotation of operators’ in lecture. As to ’interpreting’ the −1, there
are a number of different strategies and ways of thinking about things. But I think the fact
remains that a 2π rotation of a spinor replaces the spinor by −1 times itself, no matter how you
formulate things.”
That this double sided half angle construction to rotate a vector falls out of the Heisenberg
picture is interesting. Even in a purely geometric Clifford algebra context, I suppose that a
vector can be viewed as an operator (acting on another vector it produces a scalar and a bivector,
acting on higher grade algebraic elements one gets +1, −1 grade elements as a result). Yet that
is something that is true, independent of any quantum mechanics. In the books I mentioned, this
was not derived, but instead stated, and then proved. That is something that I think deserves a bit
of exploration. Perhaps there is a more natural derivation possible using infinitesimal arguments
... I would guess that scalar or grade selection would take the place of an expectation value in such a geometric argument.
At least classically, the angular momentum of charged objects is associated with a magnetic
moment as illustrated in fig. 10.1
µ = I A e⊥. (10.14)

In our scheme, following the (cgs?) text conventions of [4], where E and B have the same units, we write

µ = (I A/c) e⊥. (10.15)
For a charge moving in a circle as in fig. 10.2,

I = charge/time = (distance/time)(charge/distance) = qv/(2πr), (10.16)

so the magnetic moment is

µ = (qv/(2πr)) (πr²/c) = (q/(2mc)) (mvr) = γL. (10.17)
In an external field the torque is

T = µ × B, (10.18)

with interaction energy

−µ · B. (10.19)
Also recall that this torque leads to precession as shown in fig. 10.4
dL/dt = T = γ L × B, (10.20)

with precession frequency

ω = −γB. (10.21)
For the electron,

γ = −e/(2mc) < 0, (10.22)

where we are, here, writing −e for the charge of the electron.
Question: steady state currents only? . Yes, this is only true for steady state currents.
For the translational motion of an electron, even if it is not moving in a steady way, regardless
of its dynamics
µ₀ = −(e/(2mc)) L. (10.23)
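The classical precession claim can be integrated directly; an RK4 sketch (γ, B, L(0), and the step size are arbitrary illustration values, with B along ẑ) recovers rotation of the transverse components at ω = −γB:

```python
import numpy as np

# Integrate dL/dt = gamma * (L x B) and compare with analytic precession of the
# transverse components about the z axis at omega = -gamma * |B|.
gamma = -1.5
B = np.array([0.0, 0.0, 2.0])

def deriv(L):
    return gamma * np.cross(L, B)

L = np.array([1.0, 0.0, 0.5])
dt, steps = 1e-3, 1000                 # integrate out to t = 1
for _ in range(steps):                 # classic RK4 step
    k1 = deriv(L)
    k2 = deriv(L + 0.5 * dt * k1)
    k3 = deriv(L + 0.5 * dt * k2)
    k4 = deriv(L + dt * k3)
    L = L + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

t = dt * steps
w = -gamma * np.linalg.norm(B)         # omega = -gamma B
expected = np.array([np.cos(w * t), np.sin(w * t), 0.5])
print(np.max(np.abs(L - expected)))    # small
```

Note that L_z and |L| are conserved: the motion is a pure rotation of the transverse part, exactly the precession cone of fig. 10.4.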
Now, back to quantum mechanics: we turn µ₀ into a dipole moment operator, and L is “promoted” to an angular momentum operator.
For spin there is an analogous magnetic moment

µ_s = γ_s S. (10.25)

We write this as

µ_s = −g (e/(2mc)) S, (10.26)

so that

γ_s = −g e/(2mc). (10.27)
Experimentally, one finds to very good approximation
g=2 (10.28)
There was a lot of trouble with this in early quantum mechanics where people got things
wrong, and canceled the wrong factors of 2.
In fact, Dirac’s relativistic theory for the electron predicts g = 2.
When this is measured experimentally, one does not get exactly g = 2; matching the measurement requires a theory that also incorporates photon creation and destruction, and the interaction of the electron with such (virtual) photons. We get
gtheory = 2 (1.001159652140(±28))
(10.29)
gexperimental = 2 (1.0011596521884(±43))
Richard Feynman compared the precision of quantum mechanics, referring to this measure-
ment, “to predicting a distance as great as the width of North America to an accuracy of one
human hair’s breadth”.
10.3 the hydrogen atom with spin

For a hydrogen atom without spin,

H = H_CM + H_rel,

where we have independent Hamiltonians for the motion of the center of mass and the relative motion of the electron to the proton. The basis kets for these could be designated |p_CM⟩ and |p_rel⟩ respectively.
Adding spin, the Hamiltonian becomes

H = H_CM + H_rel + H_s,

where H_s is the Hamiltonian for the spin of the electron. We are neglecting the spin of the proton, but that could also be included (this turns out to be a lesser effect). We will introduce a Hamiltonian including the dynamics of the relative motion and the electron spin, acting on the Hilbert space

H_rel ⊗ H_s. (10.32)
Covering the Hilbert space for this system we will use basis kets

|nlm±⟩, (10.33)

with

|nlm+⟩ → [⟨r+|nlm+⟩, ⟨r−|nlm+⟩]ᵀ = [Φ_{nlm}(r), 0]ᵀ
|nlm−⟩ → [⟨r+|nlm−⟩, ⟨r−|nlm−⟩]ᵀ = [0, Φ_{nlm}(r)]ᵀ. (10.34)
Here r should be understood to really mean r_rel. Our full Hamiltonian, after introducing a magnetic perturbation, is

H = P²_CM/(2M) + P²_rel/(2µ) − e²/R_rel − µ₀·B − µ_s·B, (10.35)

where

M = m_proton + m_electron (10.36)

and

1/µ = 1/m_proton + 1/m_electron. (10.37)

Here

µ₀ = −(e/(2m_e c)) L (10.38)
µ_s = −g (e/(2m_e c)) S. (10.39)
We also have higher order terms (higher order multipoles) and relativistic corrections (like
spin orbit coupling [17]).
TWO SPIN SYSTEMS, ANGULAR MOMENTUM, AND
CLEBSCH-GORDON CONVENTION
11
11.1 two spins
H = H₁ ⊗ H₂, (11.1)

where H₁ and H₂ are the 2D Hilbert spaces of two spin one-half particles. Our complete Hilbert space is thus a 4D space.
We will write the basis kets for this space as

|±±⟩ = |±⟩ ⊗ |±⟩. (11.2)

We can introduce spin operators acting on each particle,

S₁ = S^{(1)} ⊗ I^{(2)}
S₂ = I^{(1)} ⊗ S^{(2)}. (11.3)
Here we “promote” each of the individual spin operators to spin operators in the complete
Hilbert space.
We write, for example,

S_{1z} |++⟩ = (h̄/2) |++⟩
S_{1z} |+−⟩ = (h̄/2) |+−⟩. (11.4)
Write
S = S1 + S2 , (11.5)
for the full spin angular momentum operator. The z component of this operator is
S_z = S_{1z} + S_{2z}, (11.6)

with

S_z |++⟩ = (S_{1z} + S_{2z}) |++⟩ = (h̄/2 + h̄/2) |++⟩ = h̄ |++⟩
S_z |+−⟩ = (S_{1z} + S_{2z}) |+−⟩ = (h̄/2 − h̄/2) |+−⟩ = 0
S_z |−+⟩ = (S_{1z} + S_{2z}) |−+⟩ = (−h̄/2 + h̄/2) |−+⟩ = 0
S_z |−−⟩ = (S_{1z} + S_{2z}) |−−⟩ = (−h̄/2 − h̄/2) |−−⟩ = −h̄ |−−⟩. (11.7)
So, we find that the |xx⟩ are all eigenkets of S_z. These will also all be eigenkets of S₁² = S_{1x}² + S_{1y}² + S_{1z}², since we have

S₁² |xx⟩ = h̄² (1/2)(1/2 + 1) |xx⟩ = (3/4) h̄² |xx⟩
S₂² |xx⟩ = h̄² (1/2)(1/2 + 1) |xx⟩ = (3/4) h̄² |xx⟩. (11.8)
For S², expand

S² = (S₁ + S₂)·(S₁ + S₂) = S₁² + S₂² + 2S₁·S₂. (11.9)

For example,

2S_{1z}S_{2z} |+−⟩ = 2 (h̄/2)(−h̄/2) |+−⟩. (11.11)
For the x and y cross terms we use the representation

|+⟩ → [1, 0]ᵀ
|−⟩ → [0, 1]ᵀ. (11.13)
We have

S_x |+⟩ → (h̄/2) [[0, 1], [1, 0]] [1, 0]ᵀ = (h̄/2) [0, 1]ᵀ = (h̄/2) |−⟩
S_x |−⟩ → (h̄/2) [[0, 1], [1, 0]] [0, 1]ᵀ = (h̄/2) [1, 0]ᵀ = (h̄/2) |+⟩
S_y |+⟩ → (h̄/2) [[0, −i], [i, 0]] [1, 0]ᵀ = (i h̄/2) [0, 1]ᵀ = (i h̄/2) |−⟩
S_y |−⟩ → (h̄/2) [[0, −i], [i, 0]] [0, 1]ᵀ = (−i h̄/2) [1, 0]ᵀ = −(i h̄/2) |+⟩. (11.14)
And we are able to arrive at the action of S² on our composite states. For example

S² |++⟩ = ((3/4) h̄² + (3/4) h̄²) |++⟩ + 2 (h̄²/4) |−−⟩ + 2 i² (h̄²/4) |−−⟩ + 2 (h̄/2)(h̄/2) |++⟩
        = 2 h̄² |++⟩ (11.16)

S² |−−⟩ = ((3/4) h̄² + (3/4) h̄²) |−−⟩ + 2 (h̄²/4) |++⟩ + 2 i² (h̄²/4) |++⟩ + 2 (−h̄/2)(−h̄/2) |−−⟩
        = 2 h̄² |−−⟩. (11.17)
In the {|++⟩, |+−⟩, |−+⟩, |−−⟩} basis,
S² → ħ² [ 2 0 0 0
          0 1 1 0
          0 1 1 0 (11.18)
          0 0 0 2 ],
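As a quick numerical check of eq. (11.18) (not part of the lecture), we can build S = S₁ + S₂ from Pauli matrices via Kronecker products and square it. This is a sketch assuming ħ = 1 and the basis ordering |++⟩, |+−⟩, |−+⟩, |−−⟩; the helper names are hypothetical.

```python
def kron(A, B):
    """Kronecker product of square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I2 = [[1, 0], [0, 1]]
sx = [[0, 0.5], [0.5, 0]]        # S_x = (hbar/2) sigma_x, with hbar = 1
sy = [[0, -0.5j], [0.5j, 0]]
sz = [[0.5, 0], [0, -0.5]]

S2_op = [[0] * 4 for _ in range(4)]
for s in (sx, sy, sz):
    Si = madd(kron(s, I2), kron(I2, s))  # promoted S_1i + S_2i
    S2_op = madd(S2_op, matmul(Si, Si))  # accumulate (S_1i + S_2i)^2
```

The resulting matrix should match hbar² [[2,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,2]] of eq. (11.18).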
However,
[S², S_z] = 0
[S_i, S_j] = iħ Σ_k ε_ijk S_k (11.20)
(Also, [S², S_i] = 0.)
It should be possible to find eigenkets of S 2 and S z
|++⟩                    s = 1 and m_s = 1
(1/√2)(|+−⟩ + |−+⟩)     s = 1 and m_s = 0
|−−⟩                    s = 1 and m_s = −1 (11.22)
(1/√2)(|+−⟩ − |−+⟩)     s = 0 and m_s = 0
11.2 more on two spin systems

The first three kets here can be grouped into a triplet in a 3D Hilbert space, whereas the last is treated as a singlet in a 1D Hilbert space.
Form a grouping
H = H1 ⊗ H2 (11.23)
We can write
½ ⊗ ½ = 1 ⊕ 0 (11.24)
where the 1 and 0 here refer to the spin index s.
J₁² |j₁m₁⟩ = j₁(j₁ + 1) ħ² |j₁m₁⟩
J₁z |j₁m₁⟩ = ħ m₁ |j₁m₁⟩ (11.25)
J₂² |j₂m₂⟩ = j₂(j₂ + 1) ħ² |j₂m₂⟩
J₂z |j₂m₂⟩ = ħ m₂ |j₂m₂⟩ (11.26)
Consider the Hilbert space spanned by | j1 m1 i ⊗ | j2 m2 i, a (2 j1 + 1)(2 j2 + 1) dimensional space.
How to find the eigenkets of J 2 and Jz ?
½ ⊗ ½ = 1 ⊕ 0 (11.27)
where 1 is a triplet state for s = 1 and 0 the “singlet” state with s = 0. We want to consider
the angular momentum of the entire system
j1 ⊗ j2 =? (11.28)
Why bother? Often it is true that
[ H, J] = 0, (11.29)
so, in that case, the eigenstates of the total angular momentum are also energy eigenstates, so
considering the angular momentum problem can help in finding these energy eigenstates.
Rotation operator
e−iθn̂·J/ h̄ (11.30)
n̂ · J = n x J x + ny Jy + nz Jz (11.31)
J± = J x ± iJy , (11.32)
or
J_x = (1/2)(J₊ + J₋)
J_y = (1/2i)(J₊ − J₋) (11.33)
We have
n̂ · J = (n_x/2)(J₊ + J₋) + (n_y/2i)(J₊ − J₋) + n_z J_z, (11.34)
and
J± |jm⟩ = ħ ((j ∓ m)(j ± m + 1))^{1/2} |j, m ± 1⟩ (11.35)
So
⟨j′m′| e^{−iθ n̂·J/ħ} |jm⟩ = 0 (11.36)
unless j = j′, and
⟨jm′| e^{−iθ n̂·J/ħ} |jm⟩ (11.37)
is a (2j + 1) × (2j + 1) matrix.
Combining rotations,
⟨jm′| e^{−iθ_b n̂_b·J/ħ} e^{−iθ_a n̂_a·J/ħ} |jm⟩ = Σ_{m″} ⟨jm′| e^{−iθ_b n̂_b·J/ħ} |jm″⟩ ⟨jm″| e^{−iθ_a n̂_a·J/ħ} |jm⟩ (11.38)
If e^{−iθ n̂·J/ħ} is the combined rotation, then
⟨jm′| e^{−iθ n̂·J/ħ} |jm⟩ = Σ_{m″} ⟨jm′| e^{−iθ_b n̂_b·J/ħ} |jm″⟩ ⟨jm″| e^{−iθ_a n̂_a·J/ħ} |jm⟩ (11.40)
For fixed j, the matrices h jm0 | e−iθn̂·J/ h̄ | jmi form a representation of the rotation group. The
(2 j + 1) representations are irreducible. (This will not be proven).
It may be that some of the matrices contain big blocks of zeros, but they cannot be simplified any further.
Back to the two particle system
j1 ⊗ j2 =? (11.41)
If we use
| j1 m 1 i ⊗ | j2 m 2 i (11.42)
⟨j₁m₁′; j₂m₂′| e^{−iθ n̂·J/ħ} |j₁m₁; j₂m₂⟩ (11.43)
is also a representation of the rotation group, but these sort of matrices can be simplified a lot.
This basis of dimensionality (2 j1 + 1)(2 j2 + 1) is reducible.
A lot of this is motivation, and we still want a representation of j1 ⊗ j2 .
Recall that
½ ⊗ ½ = 1 ⊕ 0 = (½ + ½) ⊕ (½ − ½) (11.44)
Might guess that, for j1 ≥ j2
j1 ⊗ j2 = ( j1 + j2 ) ⊕ ( j1 + j2 − 1) ⊕ · · · ( j1 − j2 ) (11.45)
5 ⊗ ½ = (11/2) ⊕ (9/2) (11.46)
1 ⊗ 1 = 2 ⊕ 1 ⊕ 0
(11.47)
3 × 3 = 5 + 3 + 1
Checking dimensions,
(2j₁ + 1)(2j₂ + 1) =? (11.48)
We find
Σ_{j = j₁−j₂}^{j₁+j₂} (2j + 1) = Σ_{j=0}^{j₁+j₂} (2j + 1) − Σ_{j=0}^{j₁−j₂−1} (2j + 1)
= (2j₁ + 1)(2j₂ + 1) (11.49)
Using
Σ_{n=0}^{N} n = N(N + 1)/2 (11.50)
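The dimension count of eq. (11.49) is easy to confirm numerically. This is a small sketch (my addition, not from the lecture), using `Fraction` so that half-integer j values are handled exactly:

```python
from fractions import Fraction

def dims_match(j1, j2):
    """Check sum_{j=j1-j2}^{j1+j2} (2j+1) == (2j1+1)(2j2+1), assuming j1 >= j2."""
    j, total = j1 - j2, 0
    while j <= j1 + j2:
        total += 2 * j + 1
        j += 1
    return total == (2 * j1 + 1) * (2 * j2 + 1)

half = Fraction(1, 2)
cases = [(half, half), (1, half), (5, half), (3, 1), (Fraction(7, 2), Fraction(3, 2))]
assert all(dims_match(j1, j2) for j1, j2 in cases)
```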
j1 ⊗ j2 = ( j1 + j2 ) ⊕ ( j1 + j2 − 1) ⊕ · · · ( j1 − j2 ) (11.51)
|j₁m₁⟩ ⊗ |j₂m₂⟩ (11.52)
The eigenkets of the total J² and J_z that we seek will also be denoted by
|jm; j₁j₂⟩, (11.54)
Organized by total j, these kets form the table
j = j₁ + j₂:     |j₁+j₂, j₁+j₂⟩, |j₁+j₂, j₁+j₂−1⟩, ..., |j₁+j₂, −(j₁+j₂)⟩
j = j₁ + j₂ − 1: |j₁+j₂−1, j₁+j₂−1⟩, |j₁+j₂−1, j₁+j₂−2⟩, ..., |j₁+j₂−1, −(j₁+j₂−1)⟩
...
j = j₁ − j₂:     |j₁−j₂, j₁−j₂⟩, ..., |j₁−j₂, −(j₁−j₂)⟩ (11.55)
Look at
| j1 + j2 , j1 + j2 i (11.56)
Jz | j1 + j2 , j1 + j2 i = ( j1 + j2 ) h̄ | j1 + j2 , j1 + j2 i (11.57)
we must have, for some phase φ,
|j₁+j₂, j₁+j₂⟩ = e^{iφ} (|j₁j₁⟩ ⊗ |j₂j₂⟩). (11.59)
Choosing the convention φ = 0,
|j₁+j₂, j₁+j₂⟩ = |j₁j₁⟩ ⊗ |j₂j₂⟩ (11.60)
J₋ |j₁+j₂, j₁+j₂⟩ = ħ (2(j₁+j₂))^{1/2} |j₁+j₂, j₁+j₂−1⟩ (11.61)
So
|j₁+j₂, j₁+j₂−1⟩ = J₋ |j₁+j₂, j₁+j₂⟩ / (ħ (2(j₁+j₂))^{1/2})
= (J₁₋ + J₂₋) |j₁j₁⟩ ⊗ |j₂j₂⟩ / (ħ (2(j₁+j₂))^{1/2}) (11.62)
11.3 recap: table of two spin angular momenta

j = j₁ + j₂:     |j₁+j₂, j₁+j₂⟩, |j₁+j₂, j₁+j₂−1⟩, ..., |j₁+j₂, −(j₁+j₂)⟩
j = j₁ + j₂ − 1: |j₁+j₂−1, j₁+j₂−1⟩, |j₁+j₂−1, j₁+j₂−2⟩, ..., |j₁+j₂−1, −(j₁+j₂−1)⟩
...
j = j₁ − j₂:     |j₁−j₂, j₁−j₂⟩, ..., |j₁−j₂, −(j₁−j₂)⟩ (11.63)
First column Let us start with computation of the kets in the lowest position of the first
column, which we will obtain by successive application of the lowering operator to the state
| j1 + j2 , j1 + j2 i = | j1 j1 i ⊗ | j2 j2 i . (11.64)
Recall that our lowering operator was found to be (or defined as)
J₋ |j, m⟩ = √((j + m)(j − m + 1)) ħ |j, m − 1⟩, (11.65)
so that
|j₁+j₂, j₁+j₂−1⟩ = J₋ |j₁j₁⟩ ⊗ |j₂j₂⟩ / ((2(j₁+j₂))^{1/2} ħ)
= (J₁₋ + J₂₋) |j₁j₁⟩ ⊗ |j₂j₂⟩ / ((2(j₁+j₂))^{1/2} ħ)
= √((j₁+j₁)(j₁−j₁+1)) ħ |j₁(j₁−1)⟩ ⊗ |j₂j₂⟩ / ((2(j₁+j₂))^{1/2} ħ)
  + |j₁j₁⟩ ⊗ √((j₂+j₂)(j₂−j₂+1)) ħ |j₂(j₂−1)⟩ / ((2(j₁+j₂))^{1/2} ħ)
= (j₁/(j₁+j₂))^{1/2} |j₁(j₁−1)⟩ ⊗ |j₂j₂⟩ + (j₂/(j₁+j₂))^{1/2} |j₁j₁⟩ ⊗ |j₂(j₂−1)⟩ (11.66)
Second column Moving on to the second column, the topmost element in the table,
|j₁+j₂−1, j₁+j₂−1⟩, (11.67)
can only be built from the two product kets with
m₁ = j₁,     m₂ = j₂ − 1
m₁ = j₁ − 1, m₂ = j₂ (11.68)
So for some A and B to be determined we must have
|j₁+j₂−1, j₁+j₂−1⟩ = A |j₁j₁⟩ ⊗ |j₂(j₂−1)⟩ + B |j₁(j₁−1)⟩ ⊗ |j₂j₂⟩. (11.69)
Observe that these are the same kets that we ended up with by application of the lowering
operator on the topmost element of the first column in our table. Since | j1 + j2 , j1 + j2 − 1i
and | j1 + j2 − 1, j1 + j2 − 1i are orthogonal, we can construct our ket for the top of the second
column by just seeking such an orthonormal superposition. Consider for example the ket a |b⟩ + c |d⟩; a ket orthogonal to it is
c |b⟩ − a |d⟩ (11.71)
(up to normalization), for any orthonormal pair of kets |b⟩ and |d⟩, with a and c real. Using this we find
|j₁+j₂−1, j₁+j₂−1⟩ = (j₂/(j₁+j₂))^{1/2} |j₁j₁⟩ ⊗ |j₂(j₂−1)⟩ − (j₁/(j₁+j₂))^{1/2} |j₁(j₁−1)⟩ ⊗ |j₂j₂⟩ (11.73)
This will work, although we could also multiply by any phase factor if desired. Such a choice
of phase factors is essentially just a convention.
This gives us the first state in the second column, and we can proceed to iterate using the
lowering operators to get all those values.
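This lowering-operator construction is easy to try numerically for the two spin-1/2 case. A sketch (my addition, assuming ħ = 1, basis order |++⟩, |+−⟩, |−+⟩, |−−⟩, and hypothetical helper names): applying J₋ = J₁₋ + J₂₋ to |1,1⟩ = |++⟩ should land on √2 |1,0⟩ with |1,0⟩ = (|+−⟩ + |−+⟩)/√2, as in eq. (11.61).

```python
import math

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def apply(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

I2 = [[1, 0], [0, 1]]
sm = [[0, 0], [1, 0]]  # single spin-1/2 lowering operator, hbar = 1

# J- = J1- + J2-, promoted to the 4D space
Jm = [[a + b for a, b in zip(ra, rb)]
      for ra, rb in zip(kron(sm, I2), kron(I2, sm))]

ket_pp = [1, 0, 0, 0]            # |++> = |1,1>
lowered = apply(Jm, ket_pp)      # should be sqrt(2) |1,0>
norm = math.sqrt(sum(abs(x) ** 2 for x in lowered))
ket_10 = [x / norm for x in lowered]
```

Here `ket_10` comes out as [0, 1/√2, 1/√2, 0], i.e. the m_s = 0 triplet state.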
Moving on to the third column
| j1 + j2 − 2, j1 + j2 − 2i (11.74)
m1 = j1 m2 = j2 − 2
m1 = j1 − 2 m2 = j2 (11.75)
m1 = j1 − 1 m2 = j2 − 1
and two orthogonality conditions, plus conventions. This is enough to determine the kets in the third column.
We can formally write
|jm; j₁j₂⟩ = Σ_{m₁,m₂} |j₁m₁, j₂m₂⟩ ⟨j₁m₁, j₂m₂|jm; j₁j₂⟩ (11.76)
where
|j₁m₁, j₂m₂⟩ = |j₁m₁⟩ ⊗ |j₂m₂⟩, (11.77)
and the coefficients
⟨j₁m₁, j₂m₂|jm; j₁j₂⟩ (11.78)
are the Clebsch-Gordan coefficients, abbreviated as
⟨j₁m₁, j₂m₂|jm⟩. (11.79)
Properties
1. ⟨j₁m₁, j₂m₂|jm⟩ ≠ 0 only if |j₁ − j₂| ≤ j ≤ j₁ + j₂.
This is sometimes called the triangle inequality, depicted in fig. 11.1.
2. ⟨j₁m₁, j₂m₂|jm⟩ ≠ 0 only if m = m₁ + m₂.
3. Real (convention).
Note that the ⟨j₁m₁, j₂m₂|jm⟩ are all real. So, they can be assembled into an orthogonal matrix. Example:
[ |11⟩   ]   [ 1    0     0    0 ] [ |++⟩ ]
[ |10⟩   ] = [ 0  1/√2  1/√2   0 ] [ |+−⟩ ]
[ |1,−1⟩ ]   [ 0    0     0    1 ] [ |−+⟩ ] (11.81)
[ |00⟩   ]   [ 0  1/√2 −1/√2   0 ] [ |−−⟩ ]
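That this matrix of Clebsch-Gordan coefficients is orthogonal is quick to verify numerically. A minimal sketch (my addition; `C` is just eq. (11.81) entered by hand):

```python
import math

r = 1 / math.sqrt(2)
# rows: |11>, |10>, |1,-1>, |00>; columns: |++>, |+->, |-+>, |-->
C = [[1, 0, 0, 0],
     [0, r, r, 0],
     [0, 0, 0, 1],
     [0, r, -r, 0]]

# C C^T should be the 4x4 identity if the CG matrix is orthogonal
CCt = [[sum(C[i][k] * C[j][k] for k in range(4)) for j in range(4)]
       for i in range(4)]
```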
Example. Electrons Consider the special case of an electron, a spin 1/2 particle with s = 1/2
and m s = ±1/2 where we have
J = L + S (11.82)
with product kets
|lm⟩ ⊗ |½, m_s⟩ (11.83)
and decomposition
l ⊗ ½ = (l + ½) ⊕ (l − ½) (11.84)
j = l + ½:  |l+½, l+½⟩, |l+½, l+½−1⟩, ..., |l+½, −(l+½)⟩
j = l − ½:  |l−½, l−½⟩, ..., |l−½, −(l−½)⟩ (11.85)
Here |l+½, m⟩ can only have contributions from
|l, m−½⟩ ⊗ |½, ½⟩
|l, m+½⟩ ⊗ |½, −½⟩ (11.86)
and |l−½, m⟩ from the same two. So using this and conventions we can work out (as in §28, page 524, of our text [4])
|l ± ½, m⟩ = ± (1/√(2l+1)) (l + ½ ± m)^{1/2} |l, m − ½⟩ ⊗ |½, ½⟩
          + (1/√(2l+1)) (l + ½ ∓ m)^{1/2} |l, m + ½⟩ ⊗ |½, −½⟩ (11.87)
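The coefficients of eq. (11.87) should produce normalized kets, and the j = l + ½ and j = l − ½ kets for the same m should be orthogonal. A quick numeric check (my addition; `coeffs` is a hypothetical helper returning the two amplitudes in eq. (11.87)):

```python
import math

def coeffs(l, m, sign):
    """Amplitudes of |l, m-1/2>|up> and |l, m+1/2>|down> in eq. (11.87).
    sign = +1 for j = l + 1/2, -1 for j = l - 1/2."""
    a = sign * math.sqrt(l + 0.5 + sign * m) / math.sqrt(2 * l + 1)
    b = math.sqrt(l + 0.5 - sign * m) / math.sqrt(2 * l + 1)
    return a, b

samples = [(1, 0.5), (1, -0.5), (2, 1.5), (3, -2.5), (5, 0.5)]
for l, m in samples:
    ap, bp = coeffs(l, m, +1)
    am, bm = coeffs(l, m, -1)
    assert abs(ap * ap + bp * bp - 1) < 1e-12   # each ket normalized
    assert abs(am * am + bm * bm - 1) < 1e-12
    assert abs(ap * am + bp * bm) < 1e-12       # the two kets orthogonal
```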
11.4 tensor operators

Consider the rotation of a position vector
r → R(r). (11.88)
Here we are using an active rotation as depicted in fig. 11.2. Suppose that the rotation has the matrix representation
[R(r)]_i = Σ_j M_ij r_j (11.89)
Associated with this rotation is the operator
U = e^{−iθ n̂·J/ħ}. (11.90)
Rotating a ket
|ψ⟩ (11.91)
we have
|ψ′⟩ = e^{−iθ n̂·J/ħ} |ψ⟩ (11.92)
and write
|ψ′⟩ = U[M] |ψ⟩. (11.93)
Now look at the expectation of an operator O in the rotated state
⟨ψ′| O |ψ′⟩ = ⟨ψ| U†[M] O U[M] |ψ⟩ (11.95)
ROTATIONS OF OPERATORS AND SPHERICAL TENSORS
12
Rotating expectation values, with r_i = ⟨ψ| R_i |ψ⟩, the rotated coordinates are
r̃_i = Σ_j M_ij r_j (12.1)
and
⟨ψ| U†[M] R_i U[M] |ψ⟩ = r̃_i = Σ_j M_ij r_j = Σ_j M_ij ⟨ψ| R_j |ψ⟩. (12.3)
So
U†[M] R_i U[M] = Σ_j M_ij R_j (12.4)
We call V a vector operator if it transforms as
U†[M] V_i U[M] = Σ_j M_ij V_j (12.5)
Consider infinitesimal rotations, where we can show (problem set 11, problem 1) that
[V_i, J_j] = iħ Σ_k ε_ijk V_k (12.6)
Note that for Vi = Ji we recover the familiar commutator rules for angular momentum, but
this also holds for operators R, P, J, ...
Note that U†[M] = U[M⁻¹] = U[Mᵀ], so
U[M] V_i U†[M] = U†[Mᵀ] V_i U[Mᵀ] = Σ_j M_ji V_j (12.8)
Similarly, suppose we have a set of nine operators
τ_ij, i, j = x, y, z (12.10)
that transform as
U[M] τ_ij U†[M] = Σ_{lm} M_li M_mj τ_lm (12.11)
Then we will call these the components of a (Cartesian) second rank tensor operator. A scalar operator S, by contrast, is one that transforms with no change at all, U[M] S U†[M] = S.
12.3 a problem
This all looks good, but it is really not satisfactory. There is a problem. Suppose that we have a Cartesian tensor operator like this, and look at the trace
Σ_i U[M] τ_ii U†[M] = Σ_i Σ_{lm} M_li M_mi τ_lm
= Σ_{lm} (Σ_i M_li Mᵀ_im) τ_lm
= Σ_{lm} δ_lm τ_lm (12.13)
= Σ_l τ_ll
We see buried inside these Cartesian tensors of higher rank there is some simplicity embedded
(in this case trace invariance). Who knows what other relationships are also there? We want
to work with and extract the buried simplicities, and we will find that the Cartesian way of
expressing these tensors is horribly inefficient. What is a representation that does not have any
excess information, and is in some sense minimal?
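The trace invariance above is easy to confirm numerically. A minimal sketch (my addition; the particular tensor and rotation angles are arbitrary choices):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def trace(A):
    return sum(A[i][i] for i in range(3))

tau = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]  # arbitrary rank-2 tensor
M = matmul(rot_z(0.7), rot_x(-1.2))                        # a generic rotation

# tau'_{lm} = M_{li} M_{mj} tau_{ij}, i.e. tau' = M tau M^T
tau_rot = matmul(matmul(M, tau), transpose(M))
```

The trace of `tau_rot` equals the trace of `tau` (here 15) to rounding error, exactly as eq. (12.13) predicts.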
Recall
U[M] |jm″⟩ (12.14)
which we can expand as
U[M] |jm″⟩ = Σ_{m′} |jm′⟩ ⟨jm′| U[M] |jm″⟩
= Σ_{m′} |jm′⟩ D^{(j)}_{m′m″}[M] (12.15)
We have talked before about how these D^{(j)}_{m′m″}[M] form a representation of the rotation group. These are in fact (not proved here) an irreducible representation.
Look at each element of D^{(j)}_{m′m″}[M]. These are matrices and will be different according to which rotation M is chosen. For each element there is some M for which that element is nonzero; no element is zero for all possible M. There are more formal ways to think about this in a group theory context, but this is a physical way to think about it.
Think of these as the basis vectors for some eigenket of J 2 .
|ψ⟩ = Σ_{m″} |jm″⟩ ⟨jm″|ψ⟩
= Σ_{m″} a_{m″} |jm″⟩ (12.16)
where
a_{m″} = ⟨jm″|ψ⟩ (12.17)
So
U[M] |ψ⟩ = Σ_{m′} U[M] |jm′⟩ ⟨jm′|ψ⟩
= Σ_{m′} U[M] |jm′⟩ a_{m′}
= Σ_{m′,m″} |jm″⟩ ⟨jm″| U[M] |jm′⟩ a_{m′}
= Σ_{m′,m″} |jm″⟩ D^{(j)}_{m″m′} a_{m′} (12.18)
= Σ_{m″} ã_{m″} |jm″⟩
where
ã_{m″} = Σ_{m′} D^{(j)}_{m″m′} a_{m′} (12.19)
12.5 motivating spherical tensors
Recall that
r̃_i = Σ_j M_ij r_j (12.20)
For the spherical tensor components T(k, q) we will instead have
U[M] T(k, q) U†[M] = Σ_{q′} D^{(k)}_{q′q} T(k, q′) (12.21)
Here we are looking for a better way to organize things, and it will turn out (not to be proved)
that this will be an irreducible way to represent things.
We want to work through some examples of spherical tensors, and how they relate to Cartesian tensors. To do this, a motivating story needs to be told.
Let us suppose that |ψ⟩ is a ket for a single particle. Perhaps we are talking about an electron without spin, and write
⟨r| U[M] |ψ⟩ = Σ_{m″} Σ_{m′} D^{(l)}_{m″m′} a_{m′} Y_{lm″}(θ, φ) (12.23)
We are writing this in this particular way to make a point. Now also assume that |ψ⟩ = |lm⟩, so that a_{m′} = δ_{m′m}, and we find
⟨r| U[M] |ψ⟩ = Σ_{m″} Y_{lm″}(θ, φ) D^{(l)}_{m″m} ≡ Y′_{lm}(θ, φ) (12.25)
so
Y′_{lm}(x, y, z) = Σ_{m″} Y_{lm″}(x, y, z) D^{(l)}_{m″m} (12.27)
and, for the corresponding operator,
U[M] Y_{lm}(X, Y, Z) U†[M] = Σ_{m″} Y_{lm″}(X, Y, Z) D^{(l)}_{m″m} (12.28)
definition. Any 2k + 1 operators T(k, q), q = −k, ..., k, are the elements of a spherical tensor of rank k if
U[M] T(k, q) U⁻¹[M] = Σ_{q′} T(k, q′) D^{(k)}_{q′q} (12.29)
where D^{(k)}_{q′q} is the matrix element of the rotation operator
D^{(k)}_{q′q} = ⟨kq′| U[M] |kq⟩. (12.30)
So, if we have a Cartesian vector operator with components V_x, V_y, V_z, then we can construct a corresponding spherical vector operator
T(1, 1) = −(V_x + iV_y)/√2 ≡ V_{+1}
T(1, 0) = V_z ≡ V_0 (12.31)
T(1, −1) = (V_x − iV_y)/√2 ≡ V_{−1}
12.6 spherical tensors (cont)
By considering infinitesimal rotations we can come up with the commutation relations between the angular momentum operators and T(k, q), and find the correspondence
T(k, q) ↔ |kq⟩
[J±, T(k, q)] ↔ J± |kq⟩ (12.34)
[J_z, T(k, q)] ↔ J_z |kq⟩
We have a correspondence between the spherical tensors and angular momentum kets
T(k, q) = Σ_{q₁,q₂} T₁(k₁, q₁) T₂(k₂, q₂) ⟨k₁q₁k₂q₂|kq⟩
T₁(k₁, q₁) T₂(k₂, q₂) = Σ_{k,q′} T(k, q′) ⟨kq′|k₁q₁k₂q₂⟩ (12.37)
We can form eigenstates |kq⟩ of (total angular momentum)² and the z-component of the total angular momentum. FIXME: this will not be proven, but we are strongly suggested to try this ourselves.
|22⟩
|21⟩ |11⟩
|20⟩ |10⟩ |00⟩ (12.41)
|2,−1⟩ |1,−1⟩
|2,−2⟩
Example. How about a Cartesian tensor of rank 3?
A_ijk (12.42)
1 ⊗ 1 ⊗ 1 = 1 ⊗ (0 ⊕ 1 ⊕ 2)
= (1 ⊗ 0) ⊕ (1 ⊗ 1) ⊕ (1 ⊗ 2)
= 1 ⊕ (0 ⊕ 1 ⊕ 2) ⊕ (3 ⊕ 2 ⊕ 1) (12.43)
with dimension check 3 + 1 + 3 + 5 + 7 + 5 + 3 = 27 = 3³.
Why bother? Consider a tensor operator T (k, q) and an eigenket of angular momentum |α jmi,
where α is a degeneracy index.
Look at the rotation of T(k, q) |αjm⟩:
U[M] T(k, q) |αjm⟩ = U[M] T(k, q) U†[M] U[M] |αjm⟩
= Σ_{q′,m′} D^{(k)}_{q′q} D^{(j)}_{m′m} T(k, q′) |αjm′⟩ (12.44)
This transforms like a product ket |kq⟩ ⊗ |jm⟩, so
⟨α′j′m′| T(k, q) |αjm⟩ = 0 (12.45)
unless
|k − j| ≤ j′ ≤ k + j
m′ = m + q (12.46)
• Scalar T(0, 0):
⟨α′j′m′| T(0, 0) |αjm⟩ = 0, (12.47)
unless j = j′ and m = m′.
• Vector: with
V_x = (V_{−1} − V_{+1})/√2, ··· (12.48)
we have
⟨α′j′m′| V_{x,y} |αjm⟩ = 0, (12.49)
unless
|j − 1| ≤ j′ ≤ j + 1
m′ = m ± 1 (12.50)
⟨α′j′m′| V_z |αjm⟩ = 0, (12.51)
unless
|j − 1| ≤ j′ ≤ j + 1
m′ = m (12.52)
Very generally one can prove (the Wigner-Eckart theorem, in the text §29.3) that
⟨α′j′m′| T(k, q) |αjm⟩ = ⟨α′j′‖T(k)‖αj⟩ ⟨jm; kq|j′m′⟩,
where we split into a “reduced matrix element” describing the “physics”, and the CG coefficient for “geometry”, respectively.
Part III

SCATTERING THEORY

SCATTERING THEORY
13

13.1 setup
We will focus on point particle elastic collisions (no energy lost in the collision). With parti-
cles of mass m1 and m2 we write for the total and reduced mass respectively
M = m1 + m2 (13.1)
1/µ = 1/m₁ + 1/m₂, (13.2)
so that the interaction due to a potential V(r₁ − r₂) that depends on the difference in position r = r₁ − r₂ has, in the center of mass frame, the Hamiltonian
H = p²/2µ + V(r) (13.3)
In the classical picture we would investigate the scattering radius r0 associated with the im-
pact parameter ρ as depicted in fig. 13.2
Now lets move to the QM picture where we assume that we have a particle that can be repre-
sented as a wave packet as in fig. 13.3
First without any potential V(x) = 0, lets consider the evolution. Our position and momentum
space representations are related by
∫ |ψ(x, t)|² dx = 1 = ∫ |ψ(p, t)|² dp, (13.4)
ψ(x, t) = ∫ (dp/√(2πħ)) ψ(p, t) e^{ipx/ħ}. (13.5)
Schrödinger’s equation in momentum space takes the form
iħ ∂ψ(p, t)/∂t = (p²/2µ) ψ(p, t). (13.7)
Rearranging to integrate we have
∂ψ/∂t = −(ip²/2µħ) ψ, (13.8)
and integrating
ln ψ = −ip²t/2µħ + ln C, (13.9)
or
ψ = C e^{−ip²t/2µħ} = ψ(p, 0) e^{−ip²t/2µħ}. (13.10)
Time evolution in momentum space for the free particle changes only the phase of the wavefunction, leaving the momentum probability density of the particle unchanged.
Fourier transforming, we find our position space wavefunction to be
ψ(x, t) = ∫ (dp/√(2πħ)) ψ(p, 0) e^{ipx/ħ} e^{−ip²t/2µħ}. (13.11)
To clean things up, write
p = h̄k, (13.12)
for
for
ψ(x, t) = ∫ (dk/√(2π)) a(k, 0) e^{ikx} e^{−iħk²t/2µ}, (13.13)
where
√
a(k, 0) = h̄ψ(p, 0). (13.14)
Putting
a(k, t) = a(k, 0) e^{−iħk²t/2µ}, (13.15)
we have
ψ(x, t) = ∫ (dk/√(2π)) a(k, t) e^{ikx} (13.16)
Observe that we have
∫ dk |a(k, t)|² = ∫ dp |ψ(p, t)|² = 1. (13.17)
As an example, consider an initial time Gaussian wave packet
ψ(x, 0) = (e^{ik₀x}/(πΔ²)^{1/4}) e^{−x²/2Δ²}. (13.18)
This is actually a minimum uncertainty packet with
Δx = Δ/√2
Δp = ħ/(Δ√2). (13.19)
In k space we have
a(k, 0) = (Δ²/π)^{1/4} e^{−(k−k₀)²Δ²/2}
a(k, t) = (Δ²/π)^{1/4} e^{−(k−k₀)²Δ²/2} e^{−iħk²t/2µ} ≡ α(k, t) (13.20)
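The minimum uncertainty claim of eq. (13.19) is easy to verify by direct numeric integration of the densities of eq. (13.18) and eq. (13.20). This is my own sketch (assuming ħ = 1 and an arbitrary choice of Δ and k₀; `moment` is a hypothetical midpoint-rule helper):

```python
import math

delta, k0 = 1.7, 3.0

def psi2(x):
    """|psi(x,0)|^2 for the packet of eq. (13.18)."""
    return math.exp(-x * x / delta**2) / math.sqrt(math.pi * delta**2)

def a2(k):
    """|a(k,0)|^2 for eq. (13.20)."""
    return math.sqrt(delta**2 / math.pi) * math.exp(-(k - k0)**2 * delta**2)

def moment(f, c, n, lo, hi, steps=20000):
    """Midpoint-rule integral of f(x) (x - c)^n over [lo, hi]."""
    h = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * h) * (lo + (i + 0.5) * h - c)**n
               for i in range(steps)) * h

dx = math.sqrt(moment(psi2, 0.0, 2, -20, 20))
dk = math.sqrt(moment(a2, k0, 2, k0 - 15, k0 + 15))
# with hbar = 1: dx = delta/sqrt(2), dk = 1/(delta sqrt(2)), so dx*dk = 1/2
```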
For t > 0 our wave packet will start moving and spreading as in fig. 13.5
13.4 with a potential

Now “switch on” a potential, still assuming a wave packet representation for the particle. With a positive (repulsive) potential as in fig. 13.6, at a time long before the interaction of the wave packet with the potential we can visualize the packet as heading towards the barrier.
After some time long after the interaction, classically for this sort of potential where the
particle kinetic energy is less than the barrier “height”, we would have total reflection. In the
QM case, we have seen before that we will have a reflected and a transmitted portion of the
wave packet as depicted in fig. 13.7
Figure 13.7: QM wave packet long after interaction with repulsive potential
Even if the particle kinetic energy is greater than the barrier height, as in fig. 13.8, we can
still have a reflected component.
1 = ∫ |ψ_r + ψ_t|²
= ∫ |ψ_r|² + ∫ |ψ_t|² + 2 Re ∫ ψ_r* ψ_t (13.21)
Observe that long after the interaction the cross terms in the probabilities will vanish because the packets are non-overlapping, leaving just the probability densities for the transmitted and reflected portions independently.
We define
T = ∫ |ψ_t(x, t)|² dx
R = ∫ |ψ_r(x, t)|² dx. (13.22)
The objective of most of our scattering problems will be the calculation of these probabilities
and the comparisons of their ratios.
Question. Can we have more than one wave packet reflect off? Yes, we could have multiple
wave packets for both the reflected and the transmitted portions. For example, if the potential
has some internal structure there could be internal reflections before anything emerges on either
side and things could get quite messy.
13.5 considering the time independent case temporarily

We are going to work through something that will at first seem completely unrelated. We will (eventually) see that it can be applied to this problem, so a bit of patience will be required.
We will be using the time independent Schrödinger equation
−(ħ²/2µ) ψ_k″(x) + V(x) ψ_k(x) = E ψ_k(x), (13.23)
where we have added a subscript k to our wave function with the intention (later) of allowing
this to vary. For “future use” we define for k > 0
h̄2 k2
E= . (13.24)
2µ
Consider a potential as in fig. 13.10, where V(x) = 0 for x > x₂ and x < x₁. We will not have bound states here (repulsive potential). There will be many possible solutions, but we want to look for a solution that is of the form
ψ_k(x) = C e^{ikx}, for x > x₂. (13.25)
Evaluating at a point x₃ > x₂ we have
ψ_k(x₃) = C e^{ikx₃} (13.26)
dψ_k/dx |_{x=x₃} = ikC e^{ikx₃} ≡ φ_k(x₃) (13.27)
d²ψ_k/dx² |_{x=x₃} = −k²C e^{ikx₃} (13.28)
Defining
dψk
φk (x) = , (13.29)
dx
we have the coupled first order equations
dψ_k/dx = φ_k(x)
−(ħ²/2µ) dφ_k(x)/dx = −V(x) ψ_k(x) + (ħ²k²/2µ) ψ_k(x). (13.30)
At this x = x₃ specifically, we “know” both φ_k(x₃) and ψ_k(x₃) and have
dψ_k/dx |_{x₃} = φ_k(x₃)
−(ħ²/2µ) dφ_k(x)/dx |_{x₃} = −V(x₃) ψ_k(x₃) + (ħ²k²/2µ) ψ_k(x₃). (13.31)
This allows us to find both
dψ_k/dx |_{x₃}
dφ_k(x)/dx |_{x₃} (13.32)
and then proceed to numerically calculate φ_k(x) and ψ_k(x) at neighboring points x = x₃ ± ε. Essentially, this allows us to numerically integrate backwards from x₃ to find the wave function at previous points for any sort of potential.
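The backwards integration described here can be sketched directly. Below is my own minimal implementation (assuming ħ = µ = 1, RK4 stepping, and a hypothetical helper name), starting from the purely outgoing form ψ = e^{ikx} at x₃ and stepping down toward x₁; as a sanity check, with V = 0 the integration should reproduce e^{ikx} everywhere:

```python
import cmath

def integrate_back(V, k, x3, x1, n=4000):
    """RK4-integrate psi'' = (2 V(x) - k^2) psi from x3 down to x1,
    starting from psi(x3) = exp(i k x3), phi(x3) = i k exp(i k x3)."""
    h = (x1 - x3) / n                       # negative step: going backwards
    psi = cmath.exp(1j * k * x3)
    phi = 1j * k * psi
    x = x3
    f = lambda x, psi, phi: (phi, (2 * V(x) - k * k) * psi)
    for _ in range(n):
        k1 = f(x, psi, phi)
        k2 = f(x + h / 2, psi + h / 2 * k1[0], phi + h / 2 * k1[1])
        k3 = f(x + h / 2, psi + h / 2 * k2[0], phi + h / 2 * k2[1])
        k4 = f(x + h, psi + h * k3[0], phi + h * k3[1])
        psi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        phi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return psi, phi

# sanity check: with V = 0 we should recover exp(i k x) at x1
psi1, _ = integrate_back(lambda x: 0.0, 2.0, 5.0, -5.0)
```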
13.6 recap
φ_k(x) = dψ_k(x)/dx (13.35)
for x ≥ x₃, and then integrate the equations
dψ_k(x)/dx = φ_k(x)
−(ħ²/2µ) dφ_k(x)/dx = −V(x) ψ_k(x) + (ħ²k²/2µ) ψ_k(x) (13.37)
back to x₁.
For x ≤ x₁,
ψ(x, t_initial) = ∫ (dk/√(2π)) α(k, t_initial) e^{ikx} (13.40)
Figure 13.12: Wave packet in free space and with positive potential
Returning to the same coefficients, the solution of the Schrödinger equation for the problem with the potential of eq. (13.39) is, for x ≤ x₁,
ψ_i(x, t) = ∫ (dk/√(2π)) α(k, t_initial) e^{ikx}
ψ_r(x, t) = ∫ (dk/√(2π)) α(k, t_initial) β_k e^{−ikx}. (13.42)
For x > x2
and
ψ_t(x, t) = ∫ (dk/√(2π)) α(k, t_initial) γ_k e^{ikx} (13.44)
Look at
χ(x, t) = ∫ (dk/√(2π)) α(k, t_initial) β_k e^{ikx}
≈ β_{k₀} ∫ (dk/√(2π)) α(k, t_initial) e^{ikx} (13.46)
For t = t_initial, this is nonzero for x < x₁.
So for t = t_initial,
ψ(x, t_initial) = { ∫ (dk/√(2π)) α(k, t_initial) e^{ikx}   for x < x₁
                  { 0   for x > x₂ (and actually also for x > x₁ (unproven)) (13.49)
For t = t_final,
ψ(x, t_final) → { ∫ (dk/√(2π)) β_k α(k, t_final) e^{−ikx}   for x < x₁
                { 0                                          for x ∈ [x₁, x₂] (13.50)
                { ∫ (dk/√(2π)) γ_k α(k, t_final) e^{ikx}     for x > x₂
Probability of reflection is
∫ |ψ_r(x, t_final)|² dx (13.51)
If we have a sufficiently localized packet, we can form a first order approximation around the peak of β_k (FIXME: or is this a sufficiently localized response to the potential on reflection?)
ψ_r(x, t_final) ≈ β_{k₀} ∫ (dk/√(2π)) α(k, t_final) e^{−ikx}, (13.52)
so
∫ |ψ_r(x, t_final)|² dx ≈ |β_{k₀}|² ≡ R (13.53)
Probability of transmission is
∫ |ψ_t(x, t_final)|² dx (13.54)
With
ψ_t(x, t_final) ≈ γ_{k₀} ∫ (dk/√(2π)) α(k, t_final) e^{ikx}, (13.55)
we have, for x > x₂,
∫ |ψ_t(x, t_final)|² dx ≈ |γ_{k₀}|² ≡ T. (13.56)
By constructing the wave packets in this fashion we get as a side effect the solution of the
scattering problem.
The stationary states ψ_k(x) used in this construction are called asymptotic “in” states. Their physical applicability comes only once we have built wave packets out of them.
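The scheme above (integrate the time independent equation backwards from the transmitted side, then match to e^{ikx} and e^{−ikx} on the left) can be carried out end to end for a square barrier. This is my own self-contained sketch (ħ = µ = 1, RK4 stepping, hypothetical function name); since the potential is real, the extracted coefficients should satisfy R + T = 1:

```python
import cmath

def rt_for_barrier(k, V0, a, x_left=-1.0, n=8000):
    """Extract R, T for a square barrier V = V0 on [0, a], by integrating
    psi'' = (2 V(x) - k^2) psi backwards from a pure transmitted wave."""
    V = lambda x: V0 if 0.0 <= x <= a else 0.0
    f = lambda x, psi, phi: (phi, (2 * V(x) - k * k) * psi)
    x3 = a + 0.5
    psi = cmath.exp(1j * k * x3)
    phi = 1j * k * psi
    h, x = (x_left - x3) / n, x3
    for _ in range(n):
        k1 = f(x, psi, phi)
        k2 = f(x + h / 2, psi + h / 2 * k1[0], phi + h / 2 * k1[1])
        k3 = f(x + h / 2, psi + h / 2 * k2[0], phi + h / 2 * k2[1])
        k4 = f(x + h, psi + h * k3[0], phi + h * k3[1])
        psi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        phi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    # on the left psi = A e^{ikx} + B e^{-ikx}; solve for A, B from psi, psi'
    A = 0.5 * (psi + phi / (1j * k)) * cmath.exp(-1j * k * x_left)
    B = 0.5 * (psi - phi / (1j * k)) * cmath.exp(1j * k * x_left)
    beta, gamma = B / A, 1.0 / A   # beta_k, gamma_k relative to unit incident wave
    return abs(beta) ** 2, abs(gamma) ** 2

R, T = rt_for_barrier(k=1.0, V0=2.0, a=1.0)
```

For these parameters the particle energy (k²/2 = 0.5) is below the barrier height, so T is a small tunneling probability, with R + T = 1 to integration accuracy.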
3D SCATTERING
14

14.1 setup
From 1D we have learned to build up solutions from time independent solutions (non normal-
izable). Consider an incident wave
−(ħ²/2µ) ∇² e^{ik·r} = E e^{ik·r}, (14.2)
where
E = ħ²k²/2µ. (14.3)
In the presence of a potential expect scattered waves.
Consider scattering off of a positive potential as depicted in fig. 14.2
eikn̂·r (14.4)
What other solutions can be found for r > r0 , where our potential V(r) = 0? We are looking for
Φ(r) such that
−(ħ²/2µ) ∇² Φ(r) = (ħ²k²/2µ) Φ(r) (14.6)
What can we find?
14.2 seeking a post scattering solution away from the potential

We split our Laplacian into radial and angular components as we did for the hydrogen atom
−(ħ²/2µ) ∂²/∂r² (rΦ(r)) + (L²/2µr²) Φ(r) = E Φ(r), (14.7)
where
L² = −ħ² (∂²/∂θ² + (1/tan θ) ∂/∂θ + (1/sin²θ) ∂²/∂φ²) (14.8)
Assuming a solution of
R(r) = u(r)/r, (14.12)
we have
(d²/dr² + k² − l(l+1)/r²) u(r) = 0 (14.14)
Writing ρ = kr, we have
(d²/dρ² + 1 − l(l+1)/ρ²) u = 0 (14.15)
With a last substitution of u(r) = U(kr) = U(ρ), and introducing an explicit l suffix on our
eigenfunction U(ρ) we have
(−d²/dρ² + l(l+1)/ρ²) U_l(ρ) = U_l(ρ). (14.16)
We would not have done this before with the hydrogen atom since we had only finite E =
h̄2 k2 /2µ. Now this can be anything.
Making one final substitution, U_l(ρ) = ρ f_l(ρ), we can rewrite eq. (14.16) as
(ρ² d²/dρ² + 2ρ d/dρ + (ρ² − l(l+1))) f_l = 0. (14.17)
This is the spherical Bessel equation of order l and has solutions called the Bessel and Neu-
mann functions of order l, which are
j_l(ρ) = (−ρ)^l ((1/ρ) d/dρ)^l (sin ρ/ρ) (14.18a)
n_l(ρ) = −(−ρ)^l ((1/ρ) d/dρ)^l (cos ρ/ρ). (14.18b)
We can easily calculate
((1/ρ) d/dρ)² (sin ρ/ρ) = (1/ρ)(−sin ρ/ρ² − 2 cos ρ/ρ³ − cos ρ/ρ³ + 3 sin ρ/ρ⁴)
= sin ρ (−1/ρ³ + 3/ρ⁵) − 3 cos ρ/ρ⁴ (14.21)
and
((1/ρ) d/dρ) (−cos ρ/ρ) = (1/ρ)(sin ρ/ρ + cos ρ/ρ²)
= sin ρ/ρ² + cos ρ/ρ³ (14.22)
((1/ρ) d/dρ)² (−cos ρ/ρ) = (1/ρ)(cos ρ/ρ² − 2 sin ρ/ρ³ − sin ρ/ρ³ − 3 cos ρ/ρ⁴)
= cos ρ (1/ρ³ − 3/ρ⁵) − 3 sin ρ/ρ⁴ (14.23)
so we find
j₀(ρ) = sin ρ/ρ                               n₀(ρ) = −cos ρ/ρ
j₁(ρ) = sin ρ/ρ² − cos ρ/ρ                    n₁(ρ) = −cos ρ/ρ² − sin ρ/ρ (14.24)
j₂(ρ) = sin ρ (−1/ρ + 3/ρ³) + cos ρ (−3/ρ²)   n₂(ρ) = cos ρ (1/ρ − 3/ρ³) + sin ρ (−3/ρ²)
Observe that our radial functions R(r) are proportional to these Bessel and Neumann functions
R(r) = u(r)/r = U(kr)/r = { j_l(ρ) ρ/r = k j_l(kr)
                          { n_l(ρ) ρ/r = k n_l(kr) (14.25)
For small ρ these have the limiting forms, with n!! denoting the double factorial (like the factorial, but skipping every other term),
j_l(ρ) → ρ^l/(2l + 1)!! (14.28a)
n_l(ρ) → −(2l − 1)!!/ρ^{l+1} (14.28b)
(for the l = 0 case, note that (−1)!! = 1 by definition).
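The small-ρ limits of eq. (14.28a) can be checked against the explicit forms of eq. (14.24). A sketch of my own (using a moderately small ρ to avoid the catastrophic cancellation in j₂ at very small arguments):

```python
import math

def j0(r): return math.sin(r) / r
def j1(r): return math.sin(r) / r**2 - math.cos(r) / r
def j2(r): return math.sin(r) * (-1 / r + 3 / r**3) + math.cos(r) * (-3 / r**2)

def dfact(n):
    """Double factorial, with (-1)!! = 1 by convention."""
    return 1 if n <= 0 else n * dfact(n - 2)

r = 0.05
for l, jl in enumerate((j0, j1, j2)):
    approx = r**l / dfact(2 * l + 1)   # eq. (14.28a)
    assert abs(jl(r) / approx - 1) < 1e-3
```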
Comparing this to our explicit expansion for j1 (ρ) in eq. (14.24) where we appear to have a
1/ρ dependence for small ρ it is not obvious that this would be the case. To compute this we
need to start with a power series expansion for sin ρ/ρ, which is well behaved at ρ = 0 and then
the result follows (done later).
It is apparently also possible to show that as ρ → ∞ we have
j_l(ρ) → (1/ρ) sin(ρ − lπ/2) (14.29a)
n_l(ρ) → −(1/ρ) cos(ρ − lπ/2). (14.29b)
For r > r₀ we can construct (for fixed k) a superposition of the spherical functions
Σ_l Σ_m (A_l j_l(kr) + B_l n_l(kr)) Y_l^m(θ, φ) (14.30)

14.5 back to our problem

As kr → ∞,
j_l(kr) → sin(kr − lπ/2)/kr (14.31a)
n_l(kr) → −cos(kr − lπ/2)/kr (14.31b)
Putting A_l/B_l = −i for a given l, we have
−i sin(kr − lπ/2)/kr − cos(kr − lπ/2)/kr ∼ (1/kr) e^{i(kr−πl/2)} (14.32)
for
Σ_l Σ_m B_l (1/kr) e^{i(kr−πl/2)} Y_l^m(θ, φ). (14.33)
Making this choice to achieve outgoing waves (and factoring a (−i)^l out of B_l for some reason), we have another wave function that satisfies our Hamiltonian equation
(e^{ikr}/kr) Σ_l Σ_m (−1)^l B_l Y_l^m(θ, φ). (14.34)
The Bl coefficients will depend on V(r) for the incident wave eik·r . Suppose we encapsulate
that dependence in a helper function fk (θ, φ) and write
f_k(θ, φ) e^{ikr}/r (14.35)
We seek a solution ψ_k(r) of
(−(ħ²/2µ) ∇² + V(r)) ψ_k(r) = (ħ²k²/2µ) ψ_k(r), (14.36)
where as r → ∞
ψ_k(r) → e^{ik·r} + f_k(θ, φ) e^{ikr}/r. (14.37)
Note that for r < r₀, and in general for finite r, ψ_k(r) is much more complicated. Equation (14.37) is the 3D analogue of the 1D asymptotic result ψ_k(x) = e^{ikx} + β_k e^{−ikx}.
We can think classically first, and imagine a scattering of a stream of particles barraging a target
as in fig. 14.3
Here we assume that dΩ is far enough away that it includes no non-scattering particles.
Write P for the number density
P = (number of particles)/(unit volume), (14.39)
and
dN = J (dσ(Ω)/dΩ) dΩ. (14.41)
The factor
dσ(Ω)/dΩ, (14.42)
has units of
area/steradian (14.43)
(recalling that steradians are radian like measures of solid angle [18]).
The total number of particles through the volume per unit time is then
∫ J (dσ(Ω)/dΩ) dΩ = J ∫ (dσ(Ω)/dΩ) dΩ = Jσ (14.44)
where σ is the total cross section and has units of area. The cross section σ is the effective size of the area required to collect all particles, and characterizes the scattering, but is not necessarily entirely geometrical. For example, in photon scattering we may have frequency matching with atomic resonance, finding σ ∼ λ², something that can be much bigger than the actual total area involved.
14.7 appendix

Question: Is there an orthogonality relation for the spherical Bessel functions?
Answer: There is an orthogonality relation, but it is not one of plain old multiplication. Curious about this, I find an orthogonality condition in [16]
∫₀^∞ (dz/z) J_α(z) J_β(z) = (2/π) sin((π/2)(α − β))/(α² − β²), (14.45)
from which we find for the spherical Bessel functions
∫₀^∞ j_l(ρ) j_m(ρ) dρ = sin((π/2)(l − m))/((l + 1/2)² − (m + 1/2)²). (14.46)
Is this a satisfactory orthogonality integral? At a glance it does not appear to be well behaved
for l = m, but perhaps the limit can be taken?
Deriving the large limit Bessel and Neumann function approximations For eq. (14.29) we are referred to any “good book on electromagnetism” for details. I thought that perhaps the weighty [8] would be such a book, but it also leaves out the details. In §16.1 the spherical Bessel and Neumann functions are related to the plain old Bessel functions with
j_l(x) = √(π/2x) J_{l+1/2}(x) (14.47a)
n_l(x) = √(π/2x) N_{l+1/2}(x) (14.47b)
Referring back to §3.7 of that text where the limiting forms of the Bessel functions are given,
J_ν(x) → √(2/πx) cos(x − νπ/2 − π/4) (14.48a)
N_ν(x) → √(2/πx) sin(x − νπ/2 − π/4) (14.48b)
This does give us our desired identities, but there is no hint in the text how one would derive
eq. (14.48) from the power series that was computed by solving the Bessel equation.
Deriving the small limit Bessel and Neumann function approximations Writing the sinc function in series form
sin x/x = Σ_{k=0}^∞ (−1)^k x^{2k}/(2k + 1)!, (14.49)
we have
(1/x)(d/dx)(sin x/x) = Σ_{k=1}^∞ (−1)^k (2k) x^{2k−2}/(2k + 1)!
= (−1) Σ_{k=0}^∞ (−1)^k (2k + 2) x^{2k}/(2k + 3)! (14.50)
= (−1) Σ_{k=0}^∞ (−1)^k (1/(2k + 3)) x^{2k}/(2k + 1)!
and
((1/x)(d/dx))² (sin x/x) = (−1) Σ_{k=1}^∞ (−1)^k (2k)(1/(2k + 3)) x^{2k−2}/(2k + 1)!
= Σ_{k=0}^∞ (−1)^k (1/(2k + 5))(1/(2k + 3)) x^{2k}/(2k + 1)! (14.51)
It appears reasonable to form the inductive hypothesis
((1/x)(d/dx))^l (sin x/x) = (−1)^l Σ_{k=0}^∞ (−1)^k ((2k + 1)!!/(2(k + l) + 1)!!) x^{2k}/(2k + 1)!, (14.52)
and this proves to be correct. We find then that the spherical Bessel function has the power series expansion
j_l(x) = Σ_{k=0}^∞ (−1)^k ((2k + 1)!!/(2(k + l) + 1)!!) x^{2k+l}/(2k + 1)! (14.53)
and from this the Bessel function limit of eq. (14.28a) follows immediately.
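The series of eq. (14.53) can be checked directly against the closed forms for j₀ and j₁. A small sketch of my own (truncating the rapidly converging series at 30 terms):

```python
import math

def dfact(n):
    """Double factorial, with (-1)!! = 1 by convention."""
    return 1 if n <= 0 else n * dfact(n - 2)

def jl_series(l, x, kmax=30):
    """Partial sum of the power series eq. (14.53) for j_l(x)."""
    return sum((-1)**k * dfact(2 * k + 1) / dfact(2 * (k + l) + 1)
               * x**(2 * k + l) / math.factorial(2 * k + 1)
               for k in range(kmax))

x = 2.3
j0_exact = math.sin(x) / x
j1_exact = math.sin(x) / x**2 - math.cos(x) / x
```

Both `jl_series(0, x)` and `jl_series(1, x)` agree with the exact values to double precision for moderate x.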
Finding the matching induction series for the Neumann functions is a bit harder. It is not really any more difficult to write it, but it is harder to put it in a tidy form.
We find
−cos x/x = −Σ_{k=0}^∞ (−1)^k x^{2k−1}/(2k)!
(1/x)(d/dx)(−cos x/x) = −Σ_{k=0}^∞ (−1)^k (2k − 1) x^{2k−3}/(2k)! (14.54)
((1/x)(d/dx))² (−cos x/x) = −Σ_{k=0}^∞ (−1)^k (2k − 1)(2k − 3) x^{2k−5}/(2k)!
The general expression, after a bit of messing around (and I got it wrong the first time), can be found to be
((1/x)(d/dx))^l (−cos x/x) = (−1)^{l+1} Σ_{k=0}^{l−1} (Π_{j=0}^{l−1} |2(k − j) − 1|) x^{2(k−l)−1}/(2k)!
+ (−1)^{l+1} Σ_{k=0}^∞ (−1)^k ((2(k + l) − 1)!!/(2k − 1)!!) x^{2k−1}/(2(k + l))!. (14.55)
We really only need the lowest order term (which dominates for small x) to confirm the small limit eq. (14.28b) of the Neumann function, and this follows immediately.
For completeness, we note that the series expansion of the Neumann function is
n_l(x) = −Σ_{k=0}^{l−1} (Π_{j=0}^{l−1} |2(k − j) − 1|) x^{2k−l−1}/(2k)!
− Σ_{k=0}^∞ (−1)^k ((2k + 3l − 1)!!/(2k − 1)!!) x^{2k−1}/(2(k + l))!. (14.56)
14.8 verifying the solution to the spherical bessel equation

One way to verify that eq. (14.18a) is a solution to the Bessel equation eq. (14.17) as claimed would be to substitute the series expression and verify that we get zero. Another way is to solve this equation directly. We have a regular singular point at the origin, so we look for solutions of the form
f = x^r Σ_{k=0}^∞ a_k x^k (14.57)
Writing the spherical Bessel operator as
L = x² d²/dx² + 2x d/dx + x² − l(l + 1), (14.58)
we get
0 = L f
= Σ_{k=0}^∞ (a_k ((k + r)(k + r − 1) + 2(k + r) − l(l + 1)) x^{k+r} + a_k x^{k+r+2})
= a₀ (r(r + 1) − l(l + 1)) x^r
+ a₁ ((r + 1)(r + 2) − l(l + 1)) x^{r+1} (14.59)
+ Σ_{k=2}^∞ (a_k ((k + r)(k + r − 1) + 2(k + r) − l(l + 1)) + a_{k−2}) x^{k+r}
Since we require this to be zero for all x, including non-zero values, we must have constraints on r. Assuming first that a₀ is non-zero we must then have
r(r + 1) − l(l + 1) = 0. (14.60)
One solution is obviously r = l. Factoring the indicial polynomial as (r − l)(r + l + 1), we see that r = −l − 1 is also a solution. Restricting attention first to r = l, we must have a₁ = 0, since for non-negative l we have (l + 1)(l + 2) − l(l + 1) = 2(l + 1) ≠ 0. Thus for non-zero a₀ we find that our function is of the form
f = Σ_k a_{2k} x^{2k+l}. (14.61)
It does not matter that we started with a0 , 0. If we instead start with a1 , 0 we find that we
must have r = l − 1, −l − 2, so end up with exactly the same functional form as eq. (14.61). It
ends up slightly simpler if we start with eq. (14.61) instead, since we now know that we do not
have any odd powered ak ’s to deal with. Doing so we find
0 = L f
= Σ_{k=0}^∞ (a_{2k} ((2k + l)(2k + l − 1) + 2(2k + l) − l(l + 1)) x^{2k+l} + a_{2k} x^{2k+l+2}) (14.62)
= Σ_{k=1}^∞ (a_{2k} 2k(2(k + l) + 1) + a_{2(k−1)}) x^{2k+l}
We find
a_{2k}/a_{2(k−1)} = −1/(2k(2(k + l) + 1)). (14.63)
Proceeding recursively, we find
f = a₀ (2l + 1)!! Σ_{k=0}^∞ ((−1)^k/((2k)!! (2(k + l) + 1)!!)) x^{2k+l}. (14.64)
Using
1/(2k)!! = (2k + 1)!!/(2k + 1)!, (14.65)
this recovers the series eq. (14.53), up to the normalization constant a₀. For the other root, r = −l − 1, the same procedure gives the recurrence
a_{2k}/a_{2(k−1)} = −1/(2k(2(k − l) − 1)), (14.66)
and we find
a_{2k}/a₀ = (−1)^k/((2k)!! (2(k − l) − 1)(2(k − l) − 3) ··· (−2l + 1)). (14.67)
Flipping signs around, we can rewrite this as
a_{2k}/a₀ = 1/((2k)!! (2(l − k) + 1)(2(l − k) + 3) ··· (2l − 1)). (14.68)
For those values of l ≥ k we can write this as
a_{2k}/a₀ = (2(l − k) − 1)!!/((2k)!! (2l − 1)!!). (14.69)
Picking the normalization
a₀/(2l − 1)!! = −1, (14.70)
After some play we find
(2(l−k)−1)!!
− (2k)!! if l ≥ k
a2k =
(−1)k−l+1
(14.71)
if l ≤ k
(2k)!!(2(k−l)−1)!!
FIXME: check that this matches the series calculated earlier eq. (14.56).
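As a quick sanity check of the recursion of eq. (14.63), here is a small sketch (my own, not from the lecture): for l = 0 the recursion gives a_{2k} = (-1)^k/(2k+1)!, so with a_0 = 1 the series should sum to j_0(x) = \sin x / x.

```python
from math import sin

# Build the series f(x) = sum_k a_{2k} x^{2k+l} using the recursion
# a_{2k}/a_{2(k-1)} = -1/(2k(2(k+l)+1))  (eq. 14.63), with a_0 = 1.
# For l = 0 this should reproduce j_0(x) = sin(x)/x.
def f_series(x, l=0, terms=30):
    a = 1.0  # a_0 = 1
    total = a * x**l
    for k in range(1, terms):
        a *= -1.0 / (2 * k * (2 * (k + l) + 1))
        total += a * x**(2 * k + l)
    return total

x = 1.3
print(abs(f_series(x) - sin(x) / x))  # ~1e-16
```

The agreement confirms that the recursion generates the spherical Bessel series up to the overall normalization discussed above.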
14.9 scattering cross sections
The equation we are studying is

-\frac{\hbar^2}{2\mu} \nabla^2 \psi_k(\mathbf{r}) + V(\mathbf{r}) \psi_k(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \psi_k(\mathbf{r}), \qquad (14.73)

in regions of space where r > r_0 is very large. We found

\psi_k(\mathbf{r}) \sim e^{i\mathbf{k}\cdot\mathbf{r}} + f_k(\theta, \phi) \frac{e^{ikr}}{r}. \qquad (14.74)
For r \le r_0 this will be something much more complicated.
To study scattering we will use the concept of probability flux, as in electromagnetism

\nabla \cdot \mathbf{j} + \dot{\rho} = 0. \qquad (14.75)
Using the probability density

\rho(\mathbf{r}, t) = \psi_k^*(\mathbf{r}) \psi_k(\mathbf{r}), \qquad (14.76)

we find

\mathbf{j}(\mathbf{r}, t) = \frac{\hbar}{2\mu i} \left( \psi_k^*(\mathbf{r}) \nabla \psi_k(\mathbf{r}) - \left( \nabla \psi_k^*(\mathbf{r}) \right) \psi_k(\mathbf{r}) \right) \qquad (14.77)

when

\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3 k\, \alpha(\mathbf{k}, t_{\text{initial}}) \psi_k(\mathbf{r}), \qquad (14.79)
and treat the scattering as the scattering of a plane wave front (idealizing a set of wave packets) off of the object of interest as depicted in fig. 14.5.
We assume that our incoming particles are sufficiently localized in k space as depicted in the
idealized representation of fig. 14.6
we assume that α(k, tinitial ) is localized.
\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3 k \left( \alpha(\mathbf{k}, t_{\text{initial}}) e^{i k_z z} + \alpha(\mathbf{k}, t_{\text{initial}}) f_k(\theta, \phi) \frac{e^{ikr}}{r} \right) \qquad (14.80)
We suppose that

\alpha(\mathbf{k}, t_{\text{initial}}) = \alpha(\mathbf{k}) e^{-i \hbar k^2 t_{\text{initial}}/2\mu}, \qquad (14.81)

where this is chosen (\alpha(\mathbf{k}, t_{\text{initial}}) is built in this fashion) so that this is non-zero only for z large in
magnitude and negative.
This last integral can be approximated

\int d^3 k\, \alpha(\mathbf{k}, t_{\text{initial}}) f_k(\theta, \phi) \frac{e^{ikr}}{r} \approx \frac{f_{k_0}(\theta, \phi)}{r} \int d^3 k\, \alpha(\mathbf{k}, t_{\text{initial}}) e^{ikr}
\rightarrow 0. \qquad (14.82)
This is very much like the 1D case where we found no reflected component for our initial
time.
We will normally look in a locality well away from the wave front, as indicated in fig. 14.7.
There are situations where we do look in the locality of the wave front that has been scattered.
\psi_i = A e^{ikz} e^{-i \hbar k^2 t/2\mu} \qquad (14.83)

Here we have made the approximation that k = |\mathbf{k}| \sim k_z. We can calculate the probability
current

\mathbf{j} = \hat{\mathbf{z}} \frac{\hbar k}{\mu} |A|^2. \qquad (14.84)
For the scattered portion of the wavefunction, the radial flux is

\hat{\mathbf{r}} \cdot \mathbf{j} = \frac{\hbar}{2\mu i} |f|^2 \frac{2ik}{r^2}
= \frac{\hbar k}{\mu} \frac{1}{r^2} |f|^2, \qquad (14.86)
so the flux through a surface element dA = r^2 d\Omega is

\hat{\mathbf{r}}\, dA \cdot \mathbf{j} = \frac{\text{probability}}{\text{unit area per time}} \times \text{area}
= \frac{\text{probability}}{\text{unit time}}
= \frac{\hbar k}{\mu} \frac{|f_k(\theta, \phi)|^2}{r^2} r^2 d\Omega
= \frac{\hbar k}{\mu} |f_k(\theta, \phi)|^2 d\Omega. \qquad (14.87)

This motivates the definition of the differential cross section d\sigma/d\Omega,

\frac{d\sigma}{d\Omega} = |f_k(\theta, \phi)|^2, \qquad (14.88)

and the total cross section

\sigma = \int |f_k(\theta, \phi)|^2 d\Omega. \qquad (14.89)
We have been somewhat unrealistic here since we have used a plane wave approximation,
but a more realistic wave packet treatment, as in fig. 14.8,
will actually produce the same answer. For details we are referred to [10] and [12].
Working towards a solution We have done a bunch of stuff here but are not much closer to a
real solution because we do not actually know what f_k is.
Let us write Schrödinger's equation

-\frac{\hbar^2}{2\mu} \nabla^2 \psi_k(\mathbf{r}) + V(\mathbf{r}) \psi_k(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \psi_k(\mathbf{r}), \qquad (14.90)

instead as

\left( \nabla^2 + k^2 \right) \psi_k(\mathbf{r}) = s(\mathbf{r}), \qquad (14.91)

where

s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(\mathbf{r}) \psi_k(\mathbf{r}) \qquad (14.92)

acts as the source term for this differential problem. We want a split into homogeneous and particular parts

\psi_k(\mathbf{r}) = \psi_k^{\text{homogeneous}}(\mathbf{r}) + \psi_k^{\text{particular}}(\mathbf{r}), \qquad (14.93)

with

\psi_k^{\text{homogeneous}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}. \qquad (14.94)
B O R N A P P R O X I M AT I O N
15
READING: §20 [4]
We have been arguing that we can write the stationary equation

\left( \nabla^2 + k^2 \right) \psi_k(\mathbf{r}) = s(\mathbf{r}) \qquad (15.1)

with

s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(\mathbf{r}) \psi_k(\mathbf{r}), \qquad (15.2)

\psi_k(\mathbf{r}) = \psi_k^{\text{homogeneous}}(\mathbf{r}) + \psi_k^{\text{particular}}(\mathbf{r}). \qquad (15.3)

Seeking a Green's function satisfying

\left( \nabla^2 + k^2 \right) G^0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}'), \qquad (15.4)

the particular solution is

\psi_k^{\text{particular}}(\mathbf{r}) = \int G^0(\mathbf{r}, \mathbf{r}') s(\mathbf{r}') d^3 r'. \qquad (15.5)
It turns out that finding the Green's function G^0(\mathbf{r}, \mathbf{r}') is not so hard. Note the following: for
k = 0, we have

\nabla^2 G_0^0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}') \qquad (15.6)

(where a zero subscript is used to mark the k = 0 case). We know this Green's function from
electrostatics, and conclude that

G_0^0(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi} \frac{1}{|\mathbf{r} - \mathbf{r}'|}. \qquad (15.7)
For k \ne 0 the corresponding Green's function is

G^0(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi} \frac{e^{ik|\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|}. \qquad (15.8)
This is correct for all r because it also gives the right limit as \mathbf{r} \rightarrow \mathbf{r}'. This argument was
first given by Lorentz. An outline for a derivation, utilizing the usual Fourier transform and
contour integration arguments for these Green's function derivations, can be found in §7.4 of [3]. A
direct verification, not quite as easy as claimed, can be found in B.
We can now write our particular solution

\psi_k(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{1}{4\pi} \int \frac{e^{ik|\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 r'. \qquad (15.9)
This is of no immediate help since we do not know ψk (r) and that is embedded in s(r).
Written out in full, this is

\psi_k(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{2\mu}{4\pi\hbar^2} \int \frac{e^{ik|\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} V(\mathbf{r}') \psi_k(\mathbf{r}') d^3 r'. \qquad (15.10)
Now look at this for r \gg r'. We have

|\mathbf{r} - \mathbf{r}'| = \left( r^2 + (r')^2 - 2 \mathbf{r} \cdot \mathbf{r}' \right)^{1/2}
= r \left( 1 + \frac{(r')^2}{r^2} - \frac{2}{r^2} \mathbf{r} \cdot \mathbf{r}' \right)^{1/2}
= r \left( 1 - \frac{1}{r^2} \mathbf{r} \cdot \mathbf{r}' + O\!\left( \left( \frac{r'}{r} \right)^2 \right) \right)
= r - \hat{\mathbf{r}} \cdot \mathbf{r}' + O\!\left( \frac{(r')^2}{r} \right). \qquad (15.11)
We get

\psi_k(\mathbf{r}) \rightarrow e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{2\mu}{4\pi\hbar^2} \frac{e^{ikr}}{r} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') \psi_k(\mathbf{r}') d^3 r'
= e^{i\mathbf{k}\cdot\mathbf{r}} + f_k(\theta, \phi) \frac{e^{ikr}}{r}, \qquad (15.12)

where

f_k(\theta, \phi) = -\frac{\mu}{2\pi\hbar^2} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') \psi_k(\mathbf{r}') d^3 r'. \qquad (15.13)
The Born approximation is to replace the unknown \psi_k(\mathbf{r}') in the integrand with the incident plane wave e^{i\mathbf{k}\cdot\mathbf{r}'}, giving

f_k(\theta, \phi) = -\frac{\mu}{2\pi\hbar^2} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') e^{i\mathbf{k}\cdot\mathbf{r}'} d^3 r', \qquad (15.14)

or

\psi_k(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{\mu}{2\pi\hbar^2} \frac{e^{ikr}}{r} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') e^{i\mathbf{k}\cdot\mathbf{r}'} d^3 r'. \qquad (15.15)
Should we wish to make a further approximation, we can take the wave function resulting
from application of the Born approximation, and use that a second time. This gives us the “Born
again” approximation of

\psi_k(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{\mu}{2\pi\hbar^2} \frac{e^{ikr}}{r} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') \left( e^{i\mathbf{k}\cdot\mathbf{r}'} - \frac{\mu}{2\pi\hbar^2} \frac{e^{ikr'}}{r'} \int e^{-ik\hat{\mathbf{r}}'\cdot\mathbf{r}''} V(\mathbf{r}'') e^{i\mathbf{k}\cdot\mathbf{r}''} d^3 r'' \right) d^3 r'
= e^{i\mathbf{k}\cdot\mathbf{r}} - \frac{\mu}{2\pi\hbar^2} \frac{e^{ikr}}{r} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') e^{i\mathbf{k}\cdot\mathbf{r}'} d^3 r'
+ \frac{\mu^2}{(2\pi)^2 \hbar^4} \frac{e^{ikr}}{r} \int e^{-ik\hat{\mathbf{r}}\cdot\mathbf{r}'} V(\mathbf{r}') \frac{e^{ikr'}}{r'} \int e^{-ik\hat{\mathbf{r}}'\cdot\mathbf{r}''} V(\mathbf{r}'') e^{i\mathbf{k}\cdot\mathbf{r}''} d^3 r'' d^3 r'. \qquad (15.16)
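As a numerical illustration (my own sketch, not from the lecture): for a central potential the Born integral of eq. (15.14) reduces, by the standard angular integration, to f(q) = -(2\mu/\hbar^2 q) \int_0^\infty r V(r) \sin(qr) dr with q the momentum transfer. For a Yukawa potential this has a closed form we can check against; the values of V_0, a, q below are made-up illustrative parameters, with \hbar = \mu = 1.

```python
from math import exp, sin

# Born amplitude for a central potential (standard angular reduction):
# f(q) = -(2 mu / hbar^2) * (1/q) * integral_0^inf r V(r) sin(q r) dr.
# Illustrative Yukawa potential V(r) = V0 exp(-a r)/r with hbar = mu = 1.
V0, a, q = -1.0, 0.5, 1.2

def born_amplitude(V, q, rmax=60.0, n=200000):
    # midpoint-rule evaluation of the radial integral
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += r * V(r) * sin(q * r) * dr
    return -2.0 / q * total

f_numeric = born_amplitude(lambda r: V0 * exp(-a * r) / r, q)
f_exact = -2.0 * V0 / (q * q + a * a)  # closed form for the Yukawa potential
print(f_numeric, f_exact)
```

The two values agree, which is a useful check that the angular reduction and the Born formula are being applied consistently.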
Part IV
N OT E S A N D P RO B L E M S
S I M P L E E N TA N G L E M E N T E X A M P L E
16
On the quiz we were given a three state system |1i , |2i and |3i, and a two state system |ai , |bi,
and were asked to show that the composite system can be entangled. I had trouble with this,
having not seen any examples of this and subsequently filing away entanglement in the “abstract
stuff that has no current known application” bit bucket, and then forgetting about it. Let us
generate a concrete example of entanglement, and consider the very simplest direct product
spaces.
What is the simplest composite state that we can create? Suppose we have a pair of two state
systems, say,

|1\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \in H_1
|2\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in H_1, \qquad (16.1)

and

\langle x|+\rangle = \frac{e^{ikx}}{\sqrt{2\pi}}, \text{ where } |+\rangle \in H_2
\langle x|-\rangle = \frac{e^{-ikx}}{\sqrt{2\pi}}, \text{ where } |-\rangle \in H_2. \qquad (16.2)
We can now enumerate the space of possible operators
A ∈ a11++ |1i h1| ⊗ |+i h+| + a11+− |1i h1| ⊗ |+i h−|
+ a11−+ |1i h1| ⊗ |−i h+| + a11−− |1i h1| ⊗ |−i h−|
+ a12++ |1i h2| ⊗ |+i h+| + a12+− |1i h2| ⊗ |+i h−|
+ a12−+ |1i h2| ⊗ |−i h+| + a12−− |1i h2| ⊗ |−i h−|
(16.3)
+ a21++ |2i h1| ⊗ |+i h+| + a21+− |2i h1| ⊗ |+i h−|
+ a21−+ |2i h1| ⊗ |−i h+| + a21−− |2i h1| ⊗ |−i h−|
+ a22++ |2i h2| ⊗ |+i h+| + a22+− |2i h2| ⊗ |+i h−|
+ a22−+ |2i h2| ⊗ |−i h+| + a22−− |2i h2| ⊗ |−i h−|
We can also enumerate all the possible states, some of these can be entangled
|ψi ∈ h1+ |1i ⊗ |+i + h1− |1i ⊗ |−i + h2+ |2i ⊗ |+i + h2− |2i ⊗ |−i . (16.4)
|ψi ∈ (ai |ii) ⊗ (bβ |βi) = a1 b+ |1i ⊗ |+i + a1 b− |1i ⊗ |−i + a2 b+ |2i ⊗ |+i + a2 b− |2i ⊗ |−i (16.5)
In this simpler example, we have the same dimensionality for both the set of direct product
kets and the set formed by arbitrary superposition of the composite ket basis elements, but
that does not mean that this rules out entanglement.
Suppose that, as the product of some operator, we end up with a ket

|\psi\rangle = |1\rangle \otimes |+\rangle + |2\rangle \otimes |-\rangle. \qquad (16.6)

For this to be a direct product state of the form eq. (16.5) we would require

a_1 b_+ = 1
a_2 b_- = 1
a_1 b_- = 0
a_2 b_+ = 0. \qquad (16.8)
However, we can not find a solution to this set of equations. We require one of a1 = 0 or
b− = 0 for the third equality, but such zeros generate contradictions for one of the first pair of
equations.
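The same contradiction can be seen mechanically (a sketch of my own, not from the quiz): arranging the coefficients of |\psi\rangle into a matrix c_{i\beta}, the state factors as a product exactly when that matrix has rank 1, which for a 2 \times 2 matrix means a zero determinant.

```python
# For |psi> = sum_{i,b} c[i][b] |i> (x) |b>, the state is a product
# (a_i)(b_b) exactly when the coefficient matrix has rank 1; for a
# 2x2 matrix that is the condition det(c) = 0.
def is_entangled(c):
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return abs(det) > 1e-12

product_state = [[1, 0], [0, 0]]  # |1> (x) |+>
bell_like = [[1, 0], [0, 1]]      # |1> (x) |+> + |2> (x) |->, as in eq. (16.8)
print(is_entangled(product_state), is_entangled(bell_like))  # False True
```

The nonzero determinant for the second state is exactly the statement that the four equations above have no simultaneous solution.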
P RO B L E M S E T 4 , P RO B L E M 2 N OT E S
17
I was deceived by an incorrect result in Mathematica, which led me to believe that the second
order energy perturbation was zero (whereas part (c) of the problem asked if it was greater
or lesser than zero). I started writing this up to show my reasoning, but our Professor
quickly provided an example after class showing how this zero must be wrong, and I did not
have to show him any of this.
Setup Recall first the one dimensional particle in a box. Within the box we have to solve

\frac{P^2}{2m} \psi = E \psi, \qquad (17.1)

and find

\psi \sim e^{\frac{i}{\hbar} \sqrt{2mE}\, x}. \qquad (17.2)

With

k = \frac{\sqrt{2mE}}{\hbar}, \qquad (17.3)

our general state, involving terms of each sign, takes the form

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad (17.4)

with boundary conditions

0 = \begin{bmatrix} \psi(-L/2) \\ \psi(L/2) \end{bmatrix}
= \begin{bmatrix} e^{-ik L/2} & e^{ik L/2} \\ e^{ik L/2} & e^{-ik L/2} \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix}. \qquad (17.5)
For a non-trivial solution we require a zero determinant, which gives

e^{2ikL} = 1, \qquad (17.7)

or

k = \frac{\pi n}{L}. \qquad (17.8)

This quantizes the energy, and inverting eq. (17.3) gives us

E = \frac{1}{2m} \left( \frac{\hbar \pi n}{L} \right)^2. \qquad (17.9)
To complete the task of matching boundary value conditions, we cheat and recall that the
particular linear combinations that we need to match the boundary constraint of zero at \pm L/2
were sums and differences, yielding cosines and sines respectively. Since

\sin \frac{\pi n x}{L} \Big|_{x = \pm L/2} = \pm \sin \frac{\pi n}{2}, \qquad (17.10)

the sines are the wave functions for n = 2, 4, \ldots, since \sin(\pi n/2) = 0 for even n. Similarly

\cos \frac{\pi n x}{L} \Big|_{x = \pm L/2} = \cos \frac{\pi n}{2}. \qquad (17.11)

Cosine becomes zero at \pi/2, 3\pi/2, \cdots, so our wave function is the cosine for n = 1, 3, 5, \cdots.
Normalizing gives us

\psi_n(x) = \sqrt{\frac{2}{L}}
\begin{cases}
\cos \frac{\pi n x}{L} & n = 1, 3, 5, \cdots \\
\sin \frac{\pi n x}{L} & n = 2, 4, 6, \cdots
\end{cases} \qquad (17.12)
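As a small check of the \sqrt{2/L} normalization (my own numeric sketch):

```python
from math import cos, sin, pi, sqrt

# Numerically verify that psi_n of eq. (17.12) is normalized on [-L/2, L/2].
L = 2.0

def psi(n, x):
    f = cos if n % 2 == 1 else sin
    return sqrt(2.0 / L) * f(pi * n * x / L)

def norm_sq(n, steps=100000):
    # midpoint-rule integral of psi_n^2 over the box
    dx = L / steps
    return sum(psi(n, -L / 2 + (i + 0.5) * dx) ** 2 * dx for i in range(steps))

print([round(norm_sq(n), 6) for n in (1, 2, 3)])  # ~[1.0, 1.0, 1.0]
```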
Two non-interacting particles. Three lowest energy levels and degeneracies Forming the
Hamiltonian for two particles in the box without interaction, we have within the box

H = \frac{P_1^2}{2m} + \frac{P_2^2}{2m}. \qquad (17.13)
We can apply separation of variables, and it becomes clear that our wave functions have the
form

\psi_{nm}(x_1, x_2) = \psi_n(x_1) \psi_m(x_2), \qquad (17.14)

with

H \psi_{nm} = \frac{\hbar^2}{2m} \left( \left( \frac{\pi n}{L} \right)^2 + \left( \frac{\pi m}{L} \right)^2 \right) \psi_{nm}
= \frac{1}{2m} \left( \frac{\hbar \pi}{L} \right)^2 (n^2 + m^2) \psi_{nm}. \qquad (17.16)
Letting n, m each range over [1, 3] for example we find

    n  m  n^2 + m^2
    1  1  2
    1  2  5
    1  3  10
    2  1  5
    2  2  8
    2  3  13
    3  1  10
    3  2  13
    3  3  18          (17.17)
It is clear that our lowest energy levels are

\frac{1}{m} \left( \frac{\hbar \pi}{L} \right)^2, \quad
\frac{5}{2m} \left( \frac{\hbar \pi}{L} \right)^2, \quad
\frac{4}{m} \left( \frac{\hbar \pi}{L} \right)^2, \qquad (17.18)

with degeneracies 1, 2, 1 respectively.
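The level counting above can be automated (my own sketch; the cutoff N is an arbitrary illustrative choice):

```python
from collections import Counter

# Enumerate two-particle box levels E proportional to n^2 + m^2 and count
# degeneracies; the three lowest should be 2, 5, 8 with counts 1, 2, 1.
N = 6  # enumerate n, m in 1..N
levels = Counter(n * n + m * m for n in range(1, N + 1) for m in range(1, N + 1))
lowest = sorted(levels.items())[:3]
print(lowest)  # [(2, 1), (5, 2), (8, 1)]
```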
Ground state energy with interaction perturbation to first order With c_0 positive and an interaction potential of the form

H' = -c_0 \delta(X_1 - X_2), \qquad (17.19)

the perturbed ground state energy is

E = E_{11}^{(0)} + H'_{11;11} + \sum_{nm \ne 11} \frac{\left| H'_{nm;11} \right|^2}{E_{11} - E_{nm}} + \cdots, \qquad (17.20)

where

E_{11}^{(0)} = \frac{1}{m} \left( \frac{\hbar \pi}{L} \right)^2, \qquad (17.21)
and
H'_{nm;ab} = -c_0 \langle \psi_{nm} | \delta(X_1 - X_2) | \psi_{ab} \rangle \qquad (17.22)

\langle \psi_{nm} | \delta(X_1 - X_2) | \psi_{ab} \rangle
= \int dx_1 dx_2 dy_1 dy_2 \langle \psi_{nm} | x_1 x_2 \rangle \langle x_1 x_2 | \delta(X_1 - X_2) | y_1 y_2 \rangle \langle y_1 y_2 | \psi_{ab} \rangle
= \int dx_1 dx_2 dy_1 dy_2 \langle \psi_{nm} | x_1 x_2 \rangle \delta(x_1 - x_2) \delta^2(\mathbf{x} - \mathbf{y}) \langle y_1 y_2 | \psi_{ab} \rangle
= \int dx_1 dx_2 \langle \psi_{nm} | x_1 x_2 \rangle \delta(x_1 - x_2) \langle x_1 x_2 | \psi_{ab} \rangle
= \int_{-L/2}^{L/2} dx\, \psi_{nm}(x, x) \psi_{ab}(x, x) \qquad (17.23)
H'_{11;11} = -c_0 \int_{-L/2}^{L/2} dx\, \psi_{11}(x, x) \psi_{11}(x, x)
= -c_0 \frac{4}{L^2} \int_{-L/2}^{L/2} dx \cos^4(\pi x/L) \qquad (17.24)
= -\frac{3 c_0}{2L}
For the second order perturbation of the energy, it is clear that this will reduce the first order
approximation for each matrix element that is non-zero.
Attempting that calculation with Mathematica however, is deceiving, since Mathematica reports these all as zero after FullSimplify. It appears, that as used, it does not allow for the m = n and
m = n \pm 1 constraints properly, where the denominators of the unsimplified integrals go to zero.
This worksheet can be seen to be giving misleading results, by evaluating

\int_{-L/2}^{L/2} \left( \frac{2}{L} \right)^2 \cos^2\left( \frac{\pi x}{L} \right) \cos^2\left( \frac{3\pi x}{L} \right) dx = \frac{1}{L}, \qquad (17.25)

whereas

\text{FullSimplify}\left[ \int_{-L/2}^{L/2} \left( \frac{2}{L} \right)^2 \cos^2\left( \frac{\pi x}{L} \right) \cos\left( \frac{(2n+1)\pi x}{L} \right) \cos\left( \frac{(2m+1)\pi x}{L} \right) dx, \{m, n\} \in \text{Integers} \right] = 0. \qquad (17.26)
I am hoping that asking about this on stackoverflow will clarify how to use Mathematica
correctly for this calculation.
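A quick numerical evaluation (my own sketch) confirms that the integral of eq. (17.25) is indeed 1/L and not zero:

```python
from math import cos, pi

# Numerically evaluate (2/L)^2 * integral of cos^2(pi x/L) cos^2(3 pi x/L)
# over [-L/2, L/2]; eq. (17.25) says this is 1/L, so the symbolic 0 is wrong.
L = 3.0
n_steps = 200000
dx = L / n_steps
total = 0.0
for i in range(n_steps):
    x = -L / 2 + (i + 0.5) * dx
    total += (2.0 / L) ** 2 * cos(pi * x / L) ** 2 * cos(3 * pi * x / L) ** 2 * dx
print(total, 1 / L)  # both ~0.3333
```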
A D I F F E R E N T D E R I VAT I O N O F T H E A D I A B AT I C P E R T U R B AT I O N
C O E F F I C I E N T E Q U AT I O N
18
Professor Sipe’s adiabatic perturbation and that of the text [4] in §17.5.1 and §17.5.2 use differ-
ent notation for γm and take a slightly different approach. We can find Prof Sipe’s final result
with a bit less work, if a hybrid of the two methods is used.
Our starting point is the same, we have a time dependent slowly varying Hamiltonian
H = H(t), (18.1)
where our perturbation starts at some specific time from a given initial state
H(t) = H0 , t ≤ 0. (18.2)
The state is written as a superposition of instantaneous eigenkets, with a phase factor

\alpha_n(t) = \frac{1}{\hbar} \int_0^t dt' E_n(t'). \qquad (18.5)

Here I have used \beta_n instead of \gamma_n (as in the text) to avoid conflicting with the lecture notes;
this \beta_n is a factor to be determined.
For this state, we have at the time just before the perturbation

|\psi(0)\rangle = \sum_n b_n(0) e^{-i\alpha_n(0) + i\beta_n(0)} |n(0)\rangle. \qquad (18.6)
The question to answer is: How does this particular state evolve?
Another question, for those that do not like sneaky bastard derivations, is where did that
magic factor of e−iαn come from in our superposition state? We will see after we start taking
derivatives that this is what we need to cancel the H(t) |ni in Schrödinger’s equation.
Proceeding to plug into the evolution identity we have

0 = \langle m | \left( i\hbar \frac{d}{dt} - H(t) \right) |\psi\rangle
= \langle m | \sum_n e^{-i\alpha_n + i\beta_n} \left( i\hbar \frac{db_n}{dt} |n\rangle + i\hbar b_n \left( -i\frac{E_n}{\hbar} + i\dot{\beta}_n \right) |n\rangle + i\hbar b_n \frac{d}{dt} |n\rangle - E_n b_n |n\rangle \right)
= e^{-i\alpha_m + i\beta_m} (i\hbar) \frac{db_m}{dt} + e^{-i\alpha_m + i\beta_m} (i\hbar) i\dot{\beta}_m b_m + i\hbar \sum_n b_n \langle m | \frac{d}{dt} |n\rangle e^{-i\alpha_n + i\beta_n}
\sim \frac{db_m}{dt} + i\dot{\beta}_m b_m + \sum_n e^{-i\alpha_n + i\beta_n} e^{i\alpha_m - i\beta_m} b_n \langle m | \frac{d}{dt} |n\rangle
= \frac{db_m}{dt} + i\dot{\beta}_m b_m + b_m \langle m | \frac{d}{dt} |m\rangle + \sum_{n \ne m} e^{-i\alpha_n + i\beta_n} e^{i\alpha_m - i\beta_m} b_n \langle m | \frac{d}{dt} |n\rangle \qquad (18.7)

We are free to choose \beta_m to kill the diagonal terms

0 = i\dot{\beta}_m b_m + b_m \langle m | \frac{d}{dt} |m\rangle, \qquad (18.8)
or
\dot{\beta}_m = i \langle m | \frac{d}{dt} |m\rangle, \qquad (18.9)
which after integration is
Z t d
βm (t) = i dt0 m(t0 ) 0 |m(t)i .
(18.10)
0 dt
As in the lecture notes, we write

\Gamma_m(t) = i \left\langle m(t) \middle| \frac{d}{dt} \middle| m(t) \right\rangle, \qquad (18.11)

so that

\beta_m(t) = \int_0^t dt' \Gamma_m(t'). \qquad (18.12)
As in class we can observe that this is a purely real function. We are left with

\frac{db_m}{dt} = -\sum_{n \ne m} b_n e^{-i\alpha_{nm} + i\beta_{nm}} \langle m | \frac{d}{dt} |n\rangle, \qquad (18.13)

where

\alpha_{nm} = \alpha_n - \alpha_m
\beta_{nm} = \beta_n - \beta_m. \qquad (18.14)
The task is now to find solutions for these bm coefficients, and we can refer to the class notes
for that without change.
S E C O N D O R D E R T I M E E VO L U T I O N F O R T H E C O E F F I C I E N T S O F
A N I N I T I A L LY P U R E K E T W I T H A N A D I A B AT I C A L LY C H A N G I N G
H A M I LT O N I A N
19
Motivation In lecture 9, Prof Sipe developed the equations governing the evolution of the
coefficients of a given state for an adiabatically changing Hamiltonian. He also indicated that
we could do an approximation, finding the evolution of an initially pure state in powers of λ
(like we did for the solutions of a non-time dependent perturbed Hamiltonian H = H0 + λH 0 ).
I tried doing that a couple of times and always ended up going in circles. I will show that here
and also develop an expansion in time up to second order as an alternative, which appears to
work out nicely.
Review We assumed that an adiabatically changing Hamiltonian was known with instantaneous eigenkets governed by

H(t) |\hat{\psi}_n(t)\rangle = \hbar \omega_n |\hat{\psi}_n(t)\rangle \qquad (19.1)

The problem was to determine the time evolutions of the coefficients b_n(t) of some state |\psi(t)\rangle,
and this was found to be

|\psi(t)\rangle = \sum_n b_n(t) e^{-i\gamma_n(t)} |\hat{\psi}_n(t)\rangle
\gamma_s(t) = \int_0^t dt' \left( \omega_s(t') - \Gamma_s(t') \right) \qquad (19.2)
\Gamma_s(t) = i \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_s(t) \right\rangle

where the b_s(t) coefficients must satisfy the set of LDEs

\frac{db_s(t)}{dt} = -\sum_{n \ne s} b_n(t) e^{i\gamma_{sn}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle, \qquad (19.3)

where

\gamma_{sn}(t) = \gamma_s(t) - \gamma_n(t). \qquad (19.4)
Solving these in general does not look terribly fun, but perhaps we can find an explicit solution for all the b_s's, if we simplify the problem somewhat. Suppose that our initial state is found
to be in the mth energy level at the time before we start switching on the changing Hamiltonian

|\psi(0)\rangle = b_m(0) |\hat{\psi}_m(0)\rangle. \qquad (19.5)

We therefore require (up to a phase factor)

b_m(0) = 1
b_s(0) = 0 \quad \text{if } s \ne m. \qquad (19.6)

Equivalently we can write

b_s(0) = \delta_{sm}. \qquad (19.7)
Going in circles with a λ expansion In class it was hinted that we could try a λ expansion of
the form

b_s(t) = \delta_{sm} + \lambda b_s^{(1)}(t) + \cdots \qquad (19.8)

to determine a solution for the b_s coefficients at later times. Substitution gives

\lambda \frac{d}{dt} b_s^{(1)}(t) = -\sum_{n \ne s} \left( \delta_{mn} + \lambda b_n^{(1)}(t) \right) e^{i\gamma_{sn}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle, \qquad (19.9)

and matching powers of λ,

\frac{d}{dt} b_s^{(1)}(t) = -\sum_{n \ne s} b_n^{(1)}(t) e^{i\gamma_{sn}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle
0 = -\sum_{n \ne s} \delta_{mn} e^{i\gamma_{sn}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle. \qquad (19.10)
Observe that the first identity is exactly what we started with in eq. (19.3), but has just replaced the b_n's with b_n^{(1)}'s. Worse is that the second equation is only satisfied for s = m, and for
s \ne m we have

0 = -e^{i\gamma_{sm}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_m(t) \right\rangle. \qquad (19.11)
So this λ power series only appears to work if we somehow had |\hat{\psi}_s(t)\rangle always orthogonal
to the derivative of |\hat{\psi}_m(t)\rangle. Perhaps this could be done if the Hamiltonian were also expanded
in powers of λ, but such a beastie seems foreign to the problem. Note that we do not even have
any explicit dependence on the Hamiltonian in the final b_n differential equations, as we would
probably need for such an expansion to work out.
A Taylor series expansion in time What we can do is to expand the b_n's in a power series
parametrized by time. That is, again, assuming we started with energy equal to \hbar\omega_m, form

b_s(t) = \delta_{sm} + \frac{t}{1!} \left( \frac{d}{dt} b_s(t) \right)\Big|_{t=0} + \frac{t^2}{2!} \left( \frac{d^2}{dt^2} b_s(t) \right)\Big|_{t=0} + \cdots \qquad (19.12)
The first order term we can grab right from eq. (19.3) and find

\frac{db_s(t)}{dt}\Big|_{t=0} = -\sum_{n \ne s} b_n(0) \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle\Big|_{t=0}
= -\sum_{n \ne s} \delta_{nm} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle\Big|_{t=0}
= \begin{cases} 0 & s = m \\ -\left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_m(t) \right\rangle\big|_{t=0} & s \ne m \end{cases} \qquad (19.13)

Let us write

|n\rangle = |\hat{\psi}_n(0)\rangle
|n'\rangle = \frac{d}{dt} |\hat{\psi}_n(t)\rangle\Big|_{t=0} \qquad (19.14)

So we can write

\frac{db_s(t)}{dt}\Big|_{t=0} = -(1 - \delta_{sm}) \langle s | m' \rangle, \qquad (19.15)

and form, to first order in time, our approximation for the coefficient

b_s(t) = \delta_{sm} - t (1 - \delta_{sm}) \langle s | m' \rangle. \qquad (19.16)
For the second order term we will also need the derivative of the phase at t = 0,

\frac{d}{dt} \gamma_s(t)\Big|_{t=0} = \omega_s(0) - i \langle s | s' \rangle. \qquad (19.18)
So we have

\frac{d^2}{dt^2} b_s(t)\Big|_{t=0} = -\sum_{n \ne s} \left( \left( -(1 - \delta_{nm}) \langle n | m' \rangle + \delta_{nm}\, i \left( \omega_{sn}(0) - i \langle s | s' \rangle + i \langle n | n' \rangle \right) \right) \langle s | n' \rangle + \delta_{nm} \left( \langle s' | n' \rangle + \langle s | n'' \rangle \right) \right) \qquad (19.19)

Again for s = m, all terms are killed. That is somewhat surprising, but suggests that we will
need to normalize the coefficients after the perturbation calculation, since we have unity for one
of them.
For s \ne m we have

\frac{d^2}{dt^2} b_s(t)\Big|_{t=0} = -i \left( \omega_{sm}(0) - i \langle s | s' \rangle + i \langle m | m' \rangle \right) \langle s | m' \rangle - \left( \langle s' | m' \rangle + \langle s | m'' \rangle \right) + \sum_{n \ne s, m} \langle n | m' \rangle \langle s | n' \rangle. \qquad (19.20)

So we have, for s \ne m,

\frac{d^2}{dt^2} b_s(t)\Big|_{t=0} = \left( \langle m | m' \rangle - \langle s | s' \rangle \right) \langle s | m' \rangle - i \omega_{sm}(0) \langle s | m' \rangle - \langle s' | m' \rangle - \langle s | m'' \rangle + \sum_{n \ne s, m} \langle n | m' \rangle \langle s | n' \rangle. \qquad (19.21)

It is not particularly illuminating looking, but possible to compute, and we can use it to form
a second order approximate solution for our perturbed state

b_s(t) = \delta_{sm} - t (1 - \delta_{sm}) \langle s | m' \rangle
+ \frac{t^2}{2} (1 - \delta_{sm}) \left( \left( \langle m | m' \rangle - \langle s | s' \rangle \right) \langle s | m' \rangle - i \omega_{sm}(0) \langle s | m' \rangle - \langle s' | m' \rangle - \langle s | m'' \rangle + \sum_{n \ne s, m} \langle n | m' \rangle \langle s | n' \rangle \right) \qquad (19.22)
New info. How to do the λ expansion Asking about this, Federico nicely explained. “The
reason why you are going in circles when trying the lambda expansion is because you are not
assuming the term \langle \psi(t) | (d/dt) | \psi(t) \rangle to be of order lambda. This has to be assumed, otherwise
it does not make sense at all trying a perturbative approach. This assumption means that the cou-
pling between the level s and the other levels is assumed to be small because the time dependent
part of the Hamiltonian is small or changes slowly with time. Making a Taylor expansion in time
would be sensible only if you are interested in a short interval of time. The lambda-expansion
approach would work for any time as long as the time dependent piece of the Hamiltonian does
not change wildly or is too big.”
In the tutorial he outlined another way to justify this. We have written so far

H = \begin{cases} H(t) & t > 0 \\ H_0 & t < 0 \end{cases} \qquad (19.23)

where H(0) = H_0. We can make this explicit, and introduce a λ factor into the picture, if we
write

H(t) = H_0 + \lambda H'(t), \qquad (19.24)

where H_0 has no time dependence, so that our Hamiltonian is then just the “steady-state”
system for \lambda = 0.
Now recall the method from [2] that we can use to relate our bra-derivative-ket to the Hamiltonian. Taking derivatives of the energy identity, braketed between two independent kets (m \ne n),
we have

0 = \left\langle \hat{\psi}_m(t) \right| \frac{d}{dt} \left( H(t) |\hat{\psi}_n(t)\rangle - \hbar\omega_n |\hat{\psi}_n(t)\rangle \right)
= \left\langle \hat{\psi}_m(t) \right| \left( \frac{dH(t)}{dt} |\hat{\psi}_n(t)\rangle + H(t) \frac{d}{dt} |\hat{\psi}_n(t)\rangle - \hbar \frac{d\omega_n}{dt} |\hat{\psi}_n(t)\rangle - \hbar\omega_n \frac{d}{dt} |\hat{\psi}_n(t)\rangle \right) \qquad (19.25)
= \hbar(\omega_m - \omega_n) \left\langle \hat{\psi}_m(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle - \hbar \frac{d\omega_n}{dt} \delta_{mn} + \left\langle \hat{\psi}_m(t) \middle| \frac{dH(t)}{dt} \middle| \hat{\psi}_n(t) \right\rangle

So for m \ne n we find a dependence between the bra-derivative-ket and the time derivative of
the Hamiltonian

\left\langle \hat{\psi}_m(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle = \frac{\left\langle \hat{\psi}_m(t) \middle| \frac{dH(t)}{dt} \middle| \hat{\psi}_n(t) \right\rangle}{\hbar(\omega_n - \omega_m)}. \qquad (19.26)
Referring back to eq. (19.24) we see the λ dependence in this quantity, coming directly from
the λ dependence imposed on the time dependent part of the Hamiltonian

\left\langle \hat{\psi}_m(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle = \lambda \frac{\left\langle \hat{\psi}_m(t) \middle| \frac{dH'(t)}{dt} \middle| \hat{\psi}_n(t) \right\rangle}{\hbar(\omega_n - \omega_m)}. \qquad (19.27)

Given this λ dependence, let us revisit the perturbation attempt of eq. (19.9). Our first order
factors of λ are now

\frac{d}{dt} b_s^{(1)}(t) = -\sum_{n \ne s} \delta_{mn} e^{i\gamma_{sn}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_n(t) \right\rangle
= \begin{cases} 0 & \text{if } m = s \\ -e^{i\gamma_{sm}(t)} \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_m(t) \right\rangle & \text{if } m \ne s \end{cases} \qquad (19.28)

so that

b_s(t) = \delta_{ms}(1 + \lambda\, \text{constant}) - (1 - \delta_{ms}) \lambda \int_0^t dt' e^{i\gamma_{sm}(t')} \left\langle \hat{\psi}_s(t') \middle| \frac{d}{dt'} \hat{\psi}_m(t') \right\rangle \qquad (19.29)
A couple observations of this result. One is that the constant factor in the m = s case makes
sense. This would likely be a negative contribution since we have to decrease the probability
coefficient for finding our wavefunction in the m = s state after perturbation, since we are
increasing the probability for finding it elsewhere by changing the Hamiltonian.
Also observe that since \gamma_{sm} \sim 0 for small t (so e^{i\gamma_{sm}} \sim 1), this is consistent with the first order Taylor series
expansion where we found our first order contribution was

-(1 - \delta_{ms})\, t \left\langle \hat{\psi}_s(t) \middle| \frac{d}{dt} \hat{\psi}_m(t) \right\rangle. \qquad (19.30)

Also note that this -e^{i\gamma_{sm}(t')} \left\langle \hat{\psi}_s(t') \middle| \frac{d}{dt'} \hat{\psi}_m(t') \right\rangle is exactly the difference from 0 that was mentioned in class when the trial solution b_s = \delta_{sm} was tested by plugging it into eq. (19.3), so
it is not too surprising that we should have a factor of exactly this form when we refine our
approximation.
A question to consider should we wish to refine the λ perturbation to higher than first order in
λ: is there any sort of λ dependence in the eiγsm coming from the Γ sm term in that exponential?
D E G E N E R A C Y A N D D I A G O N A L I Z AT I O N
20
20.1 motivation
In class it was mentioned that to deal with perturbation around a degenerate energy eigenvalue,
we needed to diagonalize the perturbing Hamiltonian. I did not follow those arguments completely, and I would like to revisit those here.
Problem set 3, problem 1, was to calculate the energy eigenvalues for the following Hamiltonian

H = H_0 + \lambda H'

H_0 = \begin{bmatrix} a & 0 & 0 & 0 \\ 0 & b & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & 0 & 0 & c \end{bmatrix}
\qquad
H' = \begin{bmatrix} \alpha & 0 & \nu & \eta \\ 0 & \beta & 0 & \mu \\ \nu^* & 0 & \gamma & 0 \\ \eta^* & \mu^* & 0 & \delta \end{bmatrix} \qquad (20.1)
This is more complicated than the two state problems that are solved exactly in §13.1.1 in the
text [4], but differs from the (possibly) infinite dimensional problem that was covered in class.
Unfortunately, the solution provided to this problem did not provide the illumination I expected,
so let us do it again, calculating the perturbed energy eigenvalues for the degenerate levels, from
scratch.
Can we follow the approach used in the text for the two (only) state problem? For the two
state problem, it was assumed that the perturbed solution could be expressed as a superposition
of the two states that formed the basis for the unperturbed Hilbert space. That is

|\psi\rangle = m |1\rangle + n |2\rangle. \qquad (20.2)
For the two state problem, assuming that the perturbed energy eigenvalue is E, and the unperturbed energy eigenvalue is E^0, we find

0 = (H - E) |\psi\rangle
= (H_0 + \lambda H') |\psi\rangle - E |\psi\rangle
= (H_0 + \lambda H')(m |1\rangle + n |2\rangle) - E (m |1\rangle + n |2\rangle)
= \lambda H' (m |1\rangle + n |2\rangle) + (E^0 - E)(m |1\rangle + n |2\rangle). \qquad (20.3)

Braketing with \langle 1| and \langle 2| gives

0 = \begin{bmatrix} \langle 1| \\ \langle 2| \end{bmatrix} (H - E) |\psi\rangle
= \left( (E^0 - E) I + \lambda \begin{bmatrix} \langle 1| H' |1\rangle & \langle 1| H' |2\rangle \\ \langle 2| H' |1\rangle & \langle 2| H' |2\rangle \end{bmatrix} \right) \begin{bmatrix} m \\ n \end{bmatrix} \qquad (20.4)

or

\left( (E^0 - E) I + \lambda \left[ H'_{ij} \right] \right) \begin{bmatrix} m \\ n \end{bmatrix} = 0. \qquad (20.5)
Observe that there was no assumption about the dimensionality of H_0 and H' here, just that
the two degenerate energy levels had eigenvalue E^0 and a pair of eigenkets |1\rangle and |2\rangle such
that H_0 |i\rangle = E^0 |i\rangle, i \in [1, 2]. It is clear that we can use a similar argument for any degeneracy
degree. It is also clear how to proceed, since we have what almost amounts to a characteristic
equation for the degenerate subspace of Hilbert space for the problem.
Because H' is Hermitian, a diagonalization

H' = U^\dagger D U
D = \left[ H'_i \delta_{ij} \right] \qquad (20.6)

can be found. To solve for E we can take the determinant of the matrix factor of eq. (20.5),
and because I = U^\dagger U we have

0 = \left| (E^0 - E) U^\dagger I U + \lambda U^\dagger D U \right|
= \left| U^\dagger \left( (E^0 - E) I + \lambda D \right) U \right|
= \begin{vmatrix} E^0 - E + \lambda H'_1 & 0 \\ 0 & E^0 - E + \lambda H'_2 \end{vmatrix} \qquad (20.7)
= (E^0 - E + \lambda H'_1)(E^0 - E + \lambda H'_2).

So our energy eigenvalues associated with the perturbed state are (exactly)

E = E^0 + \lambda H'_1, \quad E^0 + \lambda H'_2. \qquad (20.8)
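This exactness is easy to check numerically (my own sketch; the block entries are arbitrary illustrative numbers): the eigenvalues of E^0 I + \lambda H' are exactly E^0 + \lambda h for h an eigenvalue of the degenerate-subspace block H'.

```python
from cmath import sqrt

# 2x2 Hermitian degenerate block Hp = [[p, v], [conj(v), q]]; check that the
# eigenvalues of E0*I + lam*Hp are exactly E0 + lam*(eigenvalues of Hp).
E0, lam = 3.0, 0.1
p, q, v = 1.0, 2.0, 0.5 + 0.25j  # illustrative numbers

# eigenvalues of Hp via the quadratic formula
disc = sqrt((p - q) ** 2 + 4 * abs(v) ** 2)
h1, h2 = (p + q + disc.real) / 2, (p + q - disc.real) / 2

# eigenvalues of E0*I + lam*Hp, computed the same way
a, b = E0 + lam * p, E0 + lam * q
disc2 = sqrt((a - b) ** 2 + 4 * abs(lam * v) ** 2)
e1, e2 = (a + b + disc2.real) / 2, (a + b - disc2.real) / 2

print(abs(e1 - (E0 + lam * h1)), abs(e2 - (E0 + lam * h2)))  # ~0 ~0
```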
It is a bit curious seeming that only the energy eigenvalues associated with the degeneracy
play any part in this result, but there is some intuitive comfort in this idea. Without the pertur-
bation, we can not do an energy measurement that would distinguish one or the other of the
eigenkets for the degenerate energy level, so it does not seem unreasonable that a perturbed en-
ergy level close to the original can be formed by superposition of these two states, and thus the
perturbed energy eigenvalue for the new system would then be related to only those degenerate
levels.
Observe that in the problem set three problem we had a diagonal initial Hamiltonian H0 , that
does not have an impact on the argument above, since that portion of the Hamiltonian only has
a diagonal contribution to the result found in eq. (20.5), since the identity H0 |ii = c |ii , i ∈ [3, 4]
removes any requirement to know the specifics of that portion of the matrix element of H0 .
20.3 generalizing slightly

Let us work with a system that has kets using an explicit degeneracy index
H_0 |m\alpha_m\rangle = E_m^0 |m\alpha_m\rangle, \quad \alpha_m = 1, \cdots, \gamma_m, \quad m \in [1, N] \qquad (20.9)
Example:
|mαm i ∈ |11i
|21i , |22i
(20.10)
|31i
|41i , |42i , |43i .
We seek the energy eigenvalues of the perturbed Hamiltonian

H = H_0 + \lambda H'. \qquad (20.11)

For any m associated with a degeneracy (\gamma_m > 1) we can calculate the subspace diagonalization

\left[ \langle mi | H' | mj \rangle \right] = U_m D_m U_m^\dagger, \qquad (20.12)

where

U_m U_m^\dagger = 1, \qquad (20.13)

and D_m is diagonal

D_m = \left[ \delta_{ij} H'_{m,i} \right]. \qquad (20.14)
This is not a diagonalizing transformation in the usual sense. Putting it together into block
matrix form, we can write

U = \begin{bmatrix} U_1 & & & \\ & U_2 & & \\ & & \ddots & \\ & & & U_N \end{bmatrix} \qquad (20.15)

and find that a similarity transformation using this change of basis matrix puts all the block
matrices along the diagonal into diagonal form, but leaves the rest possibly non-zero

U^\dagger \left[ \langle m\alpha_{m_i} | H' | m\alpha_{m_j} \rangle \right] U = \begin{bmatrix} D_1 & x & x & x \\ x & D_2 & x & x \\ x & x & \ddots & x \\ x & x & x & D_N \end{bmatrix} \qquad (20.16)
A five level system with two pairs of degenerate levels Let us do this explicitly using a specific
degeneracy example, supposing that we have a non-degenerate ground state, and two doubly
degenerate higher energy levels. That is

|m\alpha_m\rangle \in |11\rangle
|21\rangle, |22\rangle \qquad (20.17)
|31\rangle, |32\rangle
For this system we wish to compute the similarity transformation

U^\dagger H' U. \qquad (20.19)

Let us write this putting row and column range subscripts on our matrices to explicitly block
them into multiplication compatible sized pieces

U = \begin{bmatrix} I_{11,11} & 0_{11,23} & 0_{11,45} \\ 0_{23,11} & U_{23,23} & 0_{23,45} \\ 0_{45,11} & 0_{45,23} & U_{45,45} \end{bmatrix}
\qquad
H' = \begin{bmatrix} H'_{11,11} & H'_{11,23} & H'_{11,45} \\ H'_{23,11} & H'_{23,23} & H'_{23,45} \\ H'_{45,11} & H'_{45,23} & H'_{45,45} \end{bmatrix} \qquad (20.20)
We see that we end up with explicitly diagonal matrices along the diagonal blocks, but generally non-zero products everywhere else.
In the new basis our kets become

|m\alpha_m'\rangle = U^\dagger |m\alpha_m\rangle \qquad (20.22)
Suppose we calculate this change of basis representation for |21\rangle (we have implicitly assumed
above that our original basis had the ordering \{|11\rangle, |21\rangle, |22\rangle, |31\rangle, |32\rangle\}). We find

|21'\rangle = U^\dagger |21\rangle
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & U_2^\dagger & 0 \\ 0 & 0 & U_3^\dagger \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad (20.23)

With

U_2 = \begin{bmatrix} U_{2,11} & U_{2,12} \\ U_{2,21} & U_{2,22} \end{bmatrix}
\qquad
U_2^\dagger = \begin{bmatrix} U_{2,11}^* & U_{2,21}^* \\ U_{2,12}^* & U_{2,22}^* \end{bmatrix} \qquad (20.24)

we find

|21'\rangle = \begin{bmatrix} 0 \\ U_{2,11}^* \\ U_{2,12}^* \\ 0 \\ 0 \end{bmatrix} = U_{2,11}^* |21\rangle + U_{2,12}^* |22\rangle \qquad (20.25)
Energy eigenvalues of the unperturbed Hamiltonian in the new basis Generalizing this, it is
clear that for a given degeneracy level, the transformed kets in the new basis are superpositions
of only the kets associated with that degenerate level (and the kets for the non-degenerate levels
are left as is).
Even better, all of the |m\alpha_m'\rangle = U^\dagger |m\alpha_m\rangle remain eigenkets of the unperturbed Hamiltonian. We see that by computing the matrix element of our Hamiltonian in the
full basis.
Writing
F = U † H 0 U, (20.26)
or
H 0 = UFU † , (20.27)
where F has been shown to have diagonal block diagonals, we can write
H = H0 + λUFU †
= UU † H0 UU † + λUFU † (20.28)
= U (U † H0 U + λF )U †
So in the mα0m basis, our Hamiltonian’s matrix element is
H → U † H0 U + λF (20.29)
In this basis the unperturbed Hamiltonian acts on the transformed kets as

(U^\dagger H_0 U) |m\alpha'\rangle = U^\dagger H_0 U U^\dagger |m\alpha\rangle
= U^\dagger H_0 |m\alpha\rangle
= U^\dagger E_m^0 |m\alpha\rangle \qquad (20.30)
= E_m^0 U^\dagger |m\alpha\rangle,

or

(U^\dagger H_0 U) |m\alpha'\rangle = E_m^0 |m\alpha'\rangle, \qquad (20.31)
a statement that the |mα0 i are still the energy eigenkets for the unperturbed system. This
matches our expectations since we have seen that these differ from the original basis elements
only for degenerate energy levels, and that these new basis elements are superpositions of only
the kets for their respective degeneracy levels.
R E V I E W O F A P P R O X I M AT I O N R E S U LT S
21
21.1 motivation
Here I will summarize what I had put on a cheat sheet for the tests or exam, if one would be
allowed. While I can derive these results, memorization unfortunately appears required for good
test performance in this class, and this will give me a good reference of what to memorize.
This set of review notes covers all the approximation methods we covered except for Fermi’s
golden rule.
\frac{\langle \Psi | H | \Psi \rangle}{\langle \Psi | \Psi \rangle} \ge E_0 \qquad (21.1)
Given a perturbed Hamiltonian and an associated solution for the unperturbed state

H = H_0 + \lambda H', \quad \lambda \in [0, 1]
H_0 |\psi_{m\alpha}^{(0)}\rangle = E_m^{(0)} |\psi_{m\alpha}^{(0)}\rangle, \qquad (21.2)
For a non-degenerate state |\psi_m\rangle = |\psi_{m1}\rangle, with an unperturbed value of |\psi_m^{(0)}\rangle = |\psi_{m1}^{(0)}\rangle, we
seek a power series expansion of this ket in the perturbed system

|\psi_m\rangle = \sum_{n,\alpha} c_{n\alpha;m}^{(0)} |\psi_{n\alpha}^{(0)}\rangle + \lambda \sum_{n,\alpha} c_{n\alpha;m}^{(1)} |\psi_{n\alpha}^{(0)}\rangle + \lambda^2 \sum_{n,\alpha} c_{n\alpha;m}^{(2)} |\psi_{n\alpha}^{(0)}\rangle + \cdots
\propto |\psi_m^{(0)}\rangle + \lambda \sum_{n \ne m, \alpha} c_{n\alpha;m}^{(1)} |\psi_{n\alpha}^{(0)}\rangle + \lambda^2 \sum_{n \ne m, \alpha} c_{n\alpha;m}^{(2)} |\psi_{n\alpha}^{(0)}\rangle + \cdots \qquad (21.4)

Any states n \ne m are allowed to have degeneracy. For this case, we found to second order in
energy and first order in the kets

E_m = E_m^{(0)} + \lambda H'_{m1;m1} + \lambda^2 \sum_{n \ne m, \alpha} \frac{\left| H'_{n\alpha;m1} \right|^2}{E_m^{(0)} - E_n^{(0)}} + \cdots
|\psi_m\rangle \propto |\psi_m^{(0)}\rangle + \lambda \sum_{n \ne m, \alpha} \frac{H'_{n\alpha;m1}}{E_m^{(0)} - E_n^{(0)}} |\psi_{n\alpha}^{(0)}\rangle + \cdots \qquad (21.5)

H'_{n\alpha;s\beta} = \langle \psi_{n\alpha}^{(0)} | H' | \psi_{s\beta}^{(0)} \rangle.
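The second order energy formula is easy to verify numerically on a 2 \times 2 example (my own sketch; v and \lambda are arbitrary illustrative numbers, with the diagonal of H' taken to be zero):

```python
from math import sqrt

# Compare E_m = E0 + lam*H'_{00} + lam^2 |H'_{10}|^2/(E0 - E1) against the
# exact lower eigenvalue of H = diag(E0, E1) + lam * [[0, v], [v, 0]].
E0, E1, lam = 0.0, 1.0, 0.05
v = 0.3
E_pt = E0 + lam ** 2 * v ** 2 / (E0 - E1)

# exact lower eigenvalue of [[E0, lam*v], [lam*v, E1]]
tr, det = E0 + E1, E0 * E1 - (lam * v) ** 2
E_exact = (tr - sqrt(tr * tr - 4 * det)) / 2

print(E_pt, E_exact)  # agree to O(lam^4)
```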
21.4 degeneracy
When the initial energy eigenvalue E_m has a degeneracy \gamma_m > 1 we use a different approach
to compute the perturbed energy eigenkets and perturbed energy eigenvalues. Writing the kets
as |m\alpha\rangle, then we assume that the perturbed ket is a superposition of the kets in the degenerate
energy level

|m\alpha\rangle' = \sum_i c_i |mi\rangle. \qquad (21.6)

We find that we must have

\left( (E^0 - E) I + \lambda \left[ H'_{mi;mj} \right] \right) \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{\gamma_m} \end{bmatrix} = 0. \qquad (21.7)

Diagonalizing this matrix H'_{mi;mj} (a subset of the complete H' matrix element)

\left[ \langle mi | H' | mj \rangle \right] = U_m \left[ \delta_{ij} H'_{m,i} \right] U_m^\dagger, \qquad (21.8)
we find, by taking the determinant, that the perturbed energy eigenvalues are in the set

E = E_m^0 + \lambda H'_{m,i}, \quad i \in [1, \gamma_m] \qquad (21.9)

To compute the perturbed kets we must work in a basis for which the block diagonal matrix
elements are diagonal for all m, as in

\left[ \langle mi | H' | mj \rangle \right] = \delta_{ij} H'_{m,i}. \qquad (21.10)

If that is not the case, then the unitary matrices of eq. (21.8) can be computed, and the matrix

U = \begin{bmatrix} U_1 & & & \\ & U_2 & & \\ & & \ddots & \\ & & & U_N \end{bmatrix}, \qquad (21.11)

applied to the basis kets. The resulting basis kets still satisfy

H_0 |m\alpha\rangle = E_m^0 |m\alpha\rangle, \qquad (21.13)

but also ensure that the partial diagonalization condition of eq. (21.8) is satisfied. In this basis,
dropping overbars, the first order perturbation results found previously for perturbation about a
non-degenerate state also hold, allowing us to write

|s\alpha\rangle' = |s\alpha\rangle + \lambda \sum_{m \ne s, \beta} \frac{H'_{m\beta;s\alpha}}{E_s^{(0)} - E_m^{(0)}} |m\beta\rangle + \cdots \qquad (21.14)
We split the Hamiltonian into time independent and time dependent parts, and also factor the
time evolution operator
\[
\begin{aligned}
H &= H_0 + H_I(t) \\
|\alpha_S\rangle &= e^{-i H_0 t/\hbar} |\alpha_I(t)\rangle = e^{-i H_0 t/\hbar} U_I(t) |\alpha_I(0)\rangle.
\end{aligned}
\tag{21.15}
\]
The interaction picture ket and evolution operator then obey
\[
\begin{aligned}
i\hbar \frac{d}{dt} |\alpha_I(t)\rangle &= H_I'(t) |\alpha_I(t)\rangle \\
i\hbar \frac{dU_I}{dt} &= H_I' U_I \\
H_I'(t) &= e^{i H_0 t/\hbar} H_I(t) e^{-i H_0 t/\hbar}.
\end{aligned}
\tag{21.16}
\]
Next consider a time dependent perturbation
\[
\begin{aligned}
H(t) &= H_0 + H'(t) \\
H_0 |\psi_n^{(0)}\rangle &= \hbar \omega_n |\psi_n^{(0)}\rangle,
\end{aligned}
\tag{21.17}
\]
where ħω_n are the energy eigenvalues, and |ψ_n^{(0)}⟩ the energy eigenstates, of the unperturbed
Hamiltonian.
Use of the interaction picture led quickly to the problem of seeking the coefficients describing
the perturbed state
\[
|\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n t} |\psi_n^{(0)}\rangle,
\tag{21.18}
\]
and plugging in we found
\[
\begin{aligned}
i\hbar \dot{c}_s &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n(t) \\
\omega_{sn} &= \omega_s - \omega_n \\
H'_{sn}(t) &= \langle \psi_s^{(0)} | H'(t) | \psi_n^{(0)} \rangle.
\end{aligned}
\tag{21.19}
\]
\[
\begin{aligned}
H'(t) &\rightarrow \lambda H'(t) \\
c_s(t) &= c_s^{(0)}(t) + \lambda c_s^{(1)}(t) + \lambda^2 c_s^{(2)}(t) + \cdots
\end{aligned}
\tag{21.20}
\]
\[
\begin{aligned}
i\hbar \dot{c}_s^{(0)}(t) &= 0 \\
i\hbar \dot{c}_s^{(1)}(t) &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n^{(0)}(t) \\
i\hbar \dot{c}_s^{(2)}(t) &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n^{(1)}(t) \\
&\;\;\vdots
\end{aligned}
\tag{21.21}
\]
Of particular value was the expansion, assuming that we started with an initial state in energy
level m before the perturbation was “turned on” (i.e., λ = 0),
\[
|\psi(t)\rangle = e^{-i\omega_m t} |\psi_m^{(0)}\rangle,
\tag{21.22}
\]
so that c_n^{(0)}(t) = δ_{nm}. We then found a first order approximation for the transition
amplitude
\[
i\hbar \dot{c}_s^{(1)} = H'_{sm}(t) e^{i\omega_{sm} t}.
\tag{21.23}
\]
21.7 sudden perturbations

The idea here is that we integrate Schrödinger’s equation over the small interval containing the
changing Hamiltonian
\[
|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar} \int_{t_0}^t H(t') |\psi(t')\rangle dt',
\tag{21.24}
\]
and find
\[
|\psi_{\text{after}}\rangle = |\psi_{\text{before}}\rangle.
\tag{21.25}
\]
An implication is that if we start with a system measured in a given energy, that same
system, after the change to the Hamiltonian, will be in a state that is a superposition of
eigenkets of the new Hamiltonian.
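A small numerical sketch of this idea (the 4-state Hamiltonians are random, purely illustrative): the state vector does not change across the sudden jump, but re-expressing it in the new eigenbasis spreads it over several eigenkets.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

H_before = random_hermitian(4)
H_after = random_hermitian(4)

_, V_before = np.linalg.eigh(H_before)
psi = V_before[:, 0]                  # measured in an eigenstate of the old H

_, V_after = np.linalg.eigh(H_after)
amplitudes = V_after.conj().T @ psi   # same state, expanded in the new basis
probs = np.abs(amplitudes) ** 2       # generally spread over several kets
print(probs, probs.sum())
```

The probabilities sum to one, as they must, since the sudden approximation leaves the ket itself untouched.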
21.8 adiabatic perturbations

Given a Hamiltonian that turns on slowly at t = 0, a set of instantaneous eigenkets for the dura-
tion of the time dependent interval, and a representation in terms of those instantaneous eigenkets,
\[
\begin{aligned}
H(t) &= H_0, \qquad t \le 0 \\
H(t) |\hat{\psi}_n(t)\rangle &= E_n(t) |\hat{\psi}_n(t)\rangle \\
|\psi\rangle &= \sum_n b_n(t) e^{-i\alpha_n + i\beta_n} |\hat{\psi}_n\rangle,
\end{aligned}
\tag{21.26}
\]
we found
\[
\begin{aligned}
\alpha_n(t) &= \frac{1}{\hbar} \int_0^t dt' E_n(t') \\
\frac{db_m}{dt} &= -\sum_{n \ne m} b_n e^{-i\gamma_{nm}} \Big\langle \hat{\psi}_m(t) \Big| \frac{d}{dt} \hat{\psi}_n(t) \Big\rangle \\
\gamma_{nm}(t) &= \alpha_n(t) - \alpha_m(t) - (\beta_n(t) - \beta_m(t)) \\
\beta_n(t) &= \int_0^t dt' \Gamma_n(t') \\
\Gamma_n(t) &= i \Big\langle \hat{\psi}_n(t) \Big| \frac{d}{dt} \hat{\psi}_n(t) \Big\rangle.
\end{aligned}
\tag{21.27}
\]
Here Γ_n(t) is called the Berry phase.
Evolution of a given state. Given a system initially measured with energy E_m(0) before the
time dependence is “turned on”,
\[
|\psi(0)\rangle = |\hat{\psi}_m(0)\rangle,
\tag{21.28}
\]
we find that the first order Taylor series expansion for the transition amplitudes is
\[
b_s(t) = \delta_{sm} - t (1 - \delta_{sm}) \Big\langle \hat{\psi}_s(0) \Big| \frac{d}{dt} \hat{\psi}_m(t) \Big\rangle \Big|_{t=0}.
\tag{21.29}
\]
If we introduce a λ perturbation, separating all the (slowly changing) time dependent parts of
the Hamiltonian H′ from the time independent parts H_0, we find
\[
b_s(t) = \delta_{ms} (1 + \lambda \,\text{const}) - (1 - \delta_{ms}) \lambda \int_0^t dt' \, e^{i\gamma_{sm}(t')} \Big\langle \hat{\psi}_s(t') \Big| \frac{d}{dt'} \hat{\psi}_m(t') \Big\rangle.
\tag{21.31}
\]
21.9 wkb
In the WKB approximation we write Schrödinger’s equation as
\[
\begin{aligned}
0 &= \frac{d^2 U}{dx^2} + k^2 U \\
k^2 &= -\kappa^2 = \frac{2m(E - V)}{\hbar^2},
\end{aligned}
\tag{21.32}
\]
and seek solutions of the form U ∝ e^{iφ}. To lowest order this leads to the WKB solutions
\[
U(x) \propto \frac{1}{\sqrt{k(x)}} e^{\pm i \int dx \, k(x)}.
\tag{21.35}
\]
What we did not cover in class, but required in the problems, was the Bohr-Sommerfeld
condition described in §24.1.2 of the text [4],
\[
\int_{x_1}^{x_2} dx \sqrt{2m(E - V(x))} = \left( n + \frac{1}{2} \right) \pi \hbar.
\tag{21.36}
\]
This was found from the WKB connection formulas, themselves found by some Bessel func-
tion arguments that I have to admit I did not understand.
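Eq. (21.36) is easy to spot check numerically. For the harmonic oscillator it happens to reproduce the exact spectrum; this sketch (units ħ = m = ω = 1, chosen purely for convenience) solves the quantization condition for the first few levels:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Bohr-Sommerfeld check for V(x) = (1/2) m omega^2 x^2.
m = hbar = omega = 1.0

def action(E):
    # integral of sqrt(2 m (E - V(x))) between the classical turning points
    xt = np.sqrt(2.0 * E / (m * omega**2))
    f = lambda x: np.sqrt(max(2.0 * m * (E - 0.5 * m * omega**2 * x**2), 0.0))
    val, _ = quad(f, -xt, xt, limit=200)
    return val

levels = []
for n in range(4):
    target = (n + 0.5) * np.pi * hbar
    levels.append(brentq(lambda E: action(E) - target, 1e-6, 50.0))
print(levels)   # close to 0.5, 1.5, 2.5, 3.5
```

For the oscillator the left hand side works out analytically to πE/ω, so the quantized energies are exactly (n + 1/2)ħω; the numeric solve just confirms the bookkeeping.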
22 ON CONDITIONS FOR CLEBSCH-GORDAN COEFFICIENTS TO BE ZERO
22.1 motivation

In the text [4] the Clebsch-Gordan coefficients are stated to be zero
unless m = m_1 + m_2. It appeared that this was related to the operation of J_z, but how exactly
was not obvious to me. In tutorial today we hashed through this. Here are the details lying behind
this statement.

We take an arbitrary two particle ket and decompose it utilizing an insertion of a com-
plete set of states
\[
|jm\rangle = \sum_{m_1' m_2'} \left( |j_1 m_1' j_2 m_2'\rangle \langle j_1 m_1' j_2 m_2'| \right) |jm\rangle.
\tag{22.2}
\]
With the shorthand
\[
\begin{aligned}
|j_1 m_1\rangle |j_2 m_2\rangle &= |m_1 m_2\rangle \\
\langle j_1 m_1 | \langle j_2 m_2 | \, |jm\rangle &= \langle m_1 m_2 | jm \rangle,
\end{aligned}
\tag{22.3}
\]
we write
\[
|jm\rangle = \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle.
\tag{22.4}
\]
We have two ways that we can apply the operator J_z to |jm⟩. One is using the sum above, for
which we find
\[
\begin{aligned}
J_z |jm\rangle &= \sum_{m_1' m_2'} J_z |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle \\
&= \hbar \sum_{m_1' m_2'} (m_1' + m_2') |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle.
\end{aligned}
\tag{22.5}
\]
We can also act directly on |jm⟩ and then insert a complete set of states
\[
\begin{aligned}
J_z |jm\rangle &= \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2'| J_z |jm\rangle \\
&= \hbar m \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle.
\end{aligned}
\tag{22.6}
\]
Equating the two,
\[
m \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle
= \sum_{m_1' m_2'} (m_1' + m_2') |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle.
\tag{22.7}
\]
This equality must be valid for any |jm⟩, and since all the kets |m_1' m_2'⟩ are linearly indepen-
dent, we must have, for any m_1', m_2',
\[
(m - m_1' - m_2') \langle m_1' m_2' | jm \rangle = 0.
\tag{22.8}
\]
We have two ways to get this zero. One of them is the m = m_1' + m_2' condition, and the other is
for the CG coefficient ⟨m_1' m_2' | jm⟩ to be zero whenever m ≠ m_1' + m_2'.
It is not a difficult argument, but one that was not clear from a read of the text (at least to me).
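The selection rule is easy to spot check with a computer algebra system. A small sketch using sympy's Clebsch-Gordan support (not part of the course material — the specific j, m values are arbitrary):

```python
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

half = S(1) / 2

# <m1 m2 | j m> vanishes whenever m != m1 + m2 ...
zero = CG(half, half, half, half, 1, 0).doit()      # m1 + m2 = 1, but m = 0
print(zero)                                          # 0

# ... and is generally nonzero when m = m1 + m2.
nonzero = CG(half, half, half, -half, 1, 0).doit()   # m1 + m2 = 0 = m
print(nonzero)                                       # sqrt(2)/2
```

The CG arguments here are ordered (j1, m1, j2, m2, j, m), matching the ⟨j1 m1; j2 m2 | j m⟩ convention.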
23 ONE MORE ADIABATIC PERTURBATION DERIVATION
23.1 motivation
I liked one of the adiabatic perturbation derivations that I did to review the material, and am
recording it for reference.
23.2 build up
In time dependent perturbation theory we started by noting that our ket in the interaction picture, for
a Hamiltonian H = H_0 + H'(t), took the form
\[
|\alpha_S(t)\rangle = e^{-i H_0 t/\hbar} |\alpha_I(t)\rangle = e^{-i H_0 t/\hbar} U_I(t) |\alpha_I(0)\rangle.
\tag{23.1}
\]
Here we have basically assumed that the time evolution can be factored into a portion depen-
dent on only the static portion of the Hamiltonian, with some other operator U_I(t) providing
the remainder of the time evolution. From eq. (23.1) that operator U_I(t) is found to behave
according to
\[
i\hbar \frac{dU_I}{dt} = e^{i H_0 t/\hbar} H'(t) e^{-i H_0 t/\hbar} U_I,
\tag{23.2}
\]
but for our purposes we just assumed it existed, and used this for motivation. With the as-
sumption that the interaction picture kets can be written in terms of the basis kets for the system
at t = 0, we write our Schrödinger ket as
\[
|\psi\rangle = e^{-i H_0 t/\hbar} \sum_k a_k(t) |k\rangle = \sum_k e^{-i \omega_k t} a_k(t) |k\rangle,
\tag{23.3}
\]
where |k⟩ are the energy eigenkets for the initial time problem
\[
H_0 |k\rangle = E_k |k\rangle = \hbar \omega_k |k\rangle.
\]
23.3 adiabatic case

For the adiabatic problem, we assume the system is changing very slowly, as described by the
instantaneous energy eigenkets
\[
H(t) |k(t)\rangle = E_k(t) |k(t)\rangle.
\]
Can we assume a similar representation to eq. (23.3) above, but allow |k⟩ to vary in time?
This does not quite work, since |k(t)⟩ are no longer eigenkets of H_0:
\[
|\psi\rangle = e^{-i H_0 t/\hbar} \sum_k a_k(t) |k(t)\rangle \ne \sum_k e^{-i \omega_k t} a_k(t) |k(t)\rangle.
\tag{23.6}
\]
Operating with e^{i H_0 t/ħ} does not give the proper time evolution of |k(t)⟩, and we will in general
have a more complex functional dependence in our evolution operator for each |k(t)⟩. Instead of
an ω_k t dependence in this time evolution operator, let us assume we have some function α_k(t) to
be determined, and write our ket as
\[
|\psi\rangle = \sum_k e^{-i \alpha_k(t)} a_k(t) |k(t)\rangle.
\tag{23.7}
\]
Application of the energy operator identity gives
\[
\begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) |\psi\rangle \\
&= \left( H - i\hbar \frac{d}{dt} \right) \sum_k e^{-i\alpha_k} a_k |k\rangle \\
&= \sum_k e^{-i\alpha_k(t)} \left( \left( E_k a_k - i\hbar(-i\alpha_k' a_k + a_k') \right) |k\rangle - i\hbar a_k |k'\rangle \right).
\end{aligned}
\tag{23.8}
\]
k
Here I have written |k0 i = d |ki /dt. In our original time dependent perturbation the −iα0k term
was −iωk , so this killed off the Ek . If we assume this still kills off the Ek , we must have
Z t
1
αk = Ek (t0 )dt0 , (23.9)
h̄ 0
X
0= e−iαk (t) a0k |ki + ak k0 . (23.10)
k
Operating with ⟨m| gives
\[
0 = e^{-i\alpha_m(t)} a_m' + e^{-i\alpha_m(t)} a_m \langle m | m' \rangle + \sum_{k \ne m} e^{-i\alpha_k(t)} a_k \langle m | k' \rangle,
\tag{23.11}
\]
or
\[
a_m' + a_m \langle m | m' \rangle = -\sum_{k \ne m} e^{-i\alpha_k(t)} e^{i\alpha_m(t)} a_k \langle m | k' \rangle.
\tag{23.12}
\]
The LHS is a perfect differential if we introduce an integrating factor e^{\int_0^t \langle m|m'\rangle dt'}, so we can
write
\[
e^{-\int_0^t \langle m|m'\rangle dt'} \left( a_m e^{\int_0^t \langle m|m'\rangle dt'} \right)' = -\sum_{k \ne m} e^{-i\alpha_k(t)} e^{i\alpha_m(t)} a_k \langle m | k' \rangle.
\tag{23.13}
\]
This motivates the definition
\[
b_m = a_m e^{\int_0^t \langle m|m'\rangle dt'},
\tag{23.14}
\]
or
\[
a_m = b_m e^{-\int_0^t \langle m|m'\rangle dt'}.
\tag{23.15}
\]
Plugging this into our assumed representation, we have a more concrete form
\[
|\psi\rangle = \sum_k e^{-\int_0^t dt' \left( i\omega_k + \langle k | k' \rangle \right)} b_k(t) |k(t)\rangle.
\tag{23.16}
\]
Writing
\[
\Gamma_k = i \langle k | k' \rangle,
\tag{23.17}
\]
this becomes
\[
|\psi\rangle = \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k(t) |k(t)\rangle.
\tag{23.18}
\]
A final pass. Now that we have what appears to be a good representation for any given state,
if we wish to examine the time evolution, let us start over, reapplying our instantaneous energy
operator equality
\[
\begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) |\psi\rangle \\
&= \left( H - i\hbar \frac{d}{dt} \right) \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k |k\rangle \\
&= -i\hbar \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} \left( i\Gamma_k b_k |k\rangle + b_k' |k\rangle + b_k |k'\rangle \right).
\end{aligned}
\tag{23.19}
\]
Operating with ⟨m| gives
\[
\begin{aligned}
0 = \;& e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} i\Gamma_m b_m + e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} b_m' \\
& + e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} b_m \langle m | m' \rangle + \sum_{k \ne m} e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k \langle m | k' \rangle.
\end{aligned}
\tag{23.20}
\]
Since iΓ_m = −⟨m|m'⟩, the first and third terms cancel, leaving just
\[
b_m' = -\sum_{k \ne m} e^{-i \int_0^t dt' (\omega_{km} - \Gamma_{km})} b_k \langle m | k' \rangle,
\tag{23.21}
\]
where ω_{km} = ω_k − ω_m and Γ_{km} = Γ_k − Γ_m.
23.4 summary

We assumed that a ket for the system has a representation in the form
\[
|\psi\rangle = \sum_k e^{-i\alpha_k(t)} a_k(t) |k(t)\rangle,
\tag{23.22}
\]
where a_k(t) and α_k(t) are given or to be determined. Application of our energy operator iden-
tity provides us with an alternate representation that simplifies the results
\[
|\psi\rangle = \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k(t) |k(t)\rangle.
\tag{23.23}
\]
With
\[
\begin{aligned}
|m'\rangle &= \frac{d}{dt} |m\rangle \\
\Gamma_k &= i \langle k | k' \rangle \\
\omega_{km} &= \omega_k - \omega_m \\
\Gamma_{km} &= \Gamma_k - \Gamma_m,
\end{aligned}
\tag{23.24}
\]
we found
\[
b_m' = -\sum_{k \ne m} e^{-i \int_0^t dt' (\omega_{km} - \Gamma_{km})} b_k \langle m | k' \rangle.
\tag{23.25}
\]
24 A SUPER SHORT DERIVATION OF THE TIME DEPENDENT PERTURBATION RESULT
With
\[
|\psi(t)\rangle = \sum_k c_k(t) e^{-i\omega_k t} |k\rangle,
\tag{24.1}
\]
application of the energy operator identity gives
\[
\begin{aligned}
0 &= \left( H_0 + H' - i\hbar \frac{d}{dt} \right) |\psi(t)\rangle \\
&= \left( H_0 + H' - i\hbar \frac{d}{dt} \right) \sum_k c_k e^{-i\omega_k t} |k\rangle \\
&= \sum_k e^{-i\omega_k t} \left( E_k c_k + H' c_k - \hbar \omega_k c_k - i\hbar c_k' \right) |k\rangle \\
&= \sum_k e^{-i\omega_k t} \left( H' c_k - i\hbar c_k' \right) |k\rangle,
\end{aligned}
\tag{24.2}
\]
where the E_k c_k and −ħω_k c_k terms cancel. Operating with ⟨m| gives
\[
\sum_k e^{-i\omega_k t} H'_{mk} c_k = i\hbar e^{-i\omega_m t} c_m',
\tag{24.3}
\]
or
\[
c_m' = \frac{1}{i\hbar} \sum_k e^{-i\omega_{km} t} H'_{mk} c_k.
\tag{24.4}
\]
Now we can make the assumptions about the initial state and away we go.
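Eq. (24.4) is exact, so it is also a convenient way to test the first order result of eq. (21.23) numerically. A sketch for a hypothetical two level system with constant coupling (all parameter values here are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
w = np.array([0.0, 1.0])                 # unperturbed frequencies omega_k
V = 0.02                                 # weak constant off-diagonal H'
Hp = np.array([[0.0, V], [V, 0.0]])

def rhs(t, y):
    # c_m' = (1/i hbar) sum_k exp(-i w_km t) H'_mk c_k  (eq. 24.4)
    c = y[:2] + 1j * y[2:]
    dc = np.zeros(2, dtype=complex)
    for m in range(2):
        for k in range(2):
            dc[m] += np.exp(-1j * (w[k] - w[m]) * t) * Hp[m, k] * c[k] / (1j * hbar)
    return np.concatenate([dc.real, dc.imag])

t_final = 5.0
sol = solve_ivp(rhs, [0.0, t_final], [1.0, 0.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)
c1_exact = sol.y[1, -1] + 1j * sol.y[3, -1]

# First order: c_1^(1)(t) = (1/i hbar) V (e^{i w10 t} - 1)/(i w10)
w10 = w[1] - w[0]
c1_first = (V / (1j * hbar)) * (np.exp(1j * w10 * t_final) - 1) / (1j * w10)
print(abs(c1_exact - c1_first))          # small: higher order in V
```

The residual difference shrinks as the coupling V is made smaller, consistent with it being a higher order effect.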
25 SECOND FORM OF ADIABATIC APPROXIMATION
Motivation. In class we were shown an adiabatic approximation where we started with (or
worked our way towards) a representation of the form
\[
|\psi\rangle = \sum_k c_k(t) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt'} |\psi_k(t)\rangle,
\tag{25.1}
\]
where |ψ_k(t)⟩ are the normalized instantaneous energy eigenkets for the (slowly) evolving
Hamiltonian. Here we start instead with the plain superposition
\[
|\psi(t)\rangle = \sum_k c_k(t) |\psi_k(t)\rangle.
\tag{25.3}
\]
For completeness, here is a walk through of the general amplitude derivation that has been
used. Application of the energy operator identity gives
\[
\begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) \sum_k c_k |k\rangle \\
&= \sum_k \left( c_k E_k |k\rangle - i\hbar c_k' |k\rangle - i\hbar c_k |k'\rangle \right),
\end{aligned}
\tag{25.4}
\]
where
\[
|k'\rangle = \frac{d}{dt} |k\rangle.
\tag{25.5}
\]
Bra’ing with ⟨m|, and splitting the sum into k = m and k ≠ m parts,
\[
0 = c_m E_m - i\hbar c_m' - i\hbar c_m \langle m | m' \rangle - i\hbar \sum_{k \ne m} c_k \langle m | k' \rangle.
\tag{25.6}
\]
Again writing
\[
\Gamma_m = i \langle m | m' \rangle,
\tag{25.7}
\]
we have
\[
c_m' = \frac{1}{i\hbar} c_m \left( E_m - \hbar \Gamma_m \right) - \sum_{k \ne m} c_k \langle m | k' \rangle.
\tag{25.8}
\]
In this form we can make an “adiabatic” approximation, dropping the k ≠ m terms, and
integrate
\[
\int \frac{dc_m}{c_m} = \frac{1}{i\hbar} \int_0^t \left( E_m(t') - \hbar \Gamma_m(t') \right) dt',
\tag{25.9}
\]
or
\[
c_m(t) = A \exp\left( \frac{1}{i\hbar} \int_0^t \left( E_m(t') - \hbar \Gamma_m(t') \right) dt' \right).
\tag{25.10}
\]
Evaluating at t = 0 fixes A = c_m(0), so
\[
c_m(t) = c_m(0) \exp\left( \frac{1}{i\hbar} \int_0^t \left( E_m(t') - \hbar \Gamma_m(t') \right) dt' \right).
\tag{25.11}
\]
Observe that this is very close to the starting point of the adiabatic approximation we per-
formed in class, since we end up with
\[
|\psi\rangle = \sum_k c_k(0) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt'} |k(t)\rangle.
\tag{25.12}
\]
So, to perform the more detailed approximation that started with eq. (25.1), where we ended
up with all the cross terms that had both ω_k and Berry phase Γ_k dependence, we have only to
generalize by replacing c_k(0) with c_k(t).
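The content of the adiabatic approximation — that a slowly driven system stays in its instantaneous eigenstate up to phases — can be checked by brute-force integration of the Schrödinger equation. A sketch for a hypothetical two level sweep (the Hamiltonian and sweep time here are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Slow two-level sweep: H(t) = [[-d(t), g], [g, d(t)]], d ramping linearly.
hbar, g, T = 1.0, 1.0, 200.0              # large T => adiabatic regime

def H(t):
    d = -2.0 + 4.0 * t / T                # slow sweep of the detuning
    return np.array([[-d, g], [g, d]])

def rhs(t, y):
    psi = y[:2] + 1j * y[2:]
    dpsi = -1j / hbar * (H(t) @ psi)
    return np.concatenate([dpsi.real, dpsi.imag])

_, V0 = np.linalg.eigh(H(0.0))
psi0 = V0[:, 0]                           # start in the instantaneous ground state
sol = solve_ivp(rhs, [0.0, T], np.concatenate([psi0, np.zeros(2)]),
                rtol=1e-9, atol=1e-11)
psi = sol.y[:2, -1] + 1j * sol.y[2:, -1]

_, VT = np.linalg.eigh(H(T))
p_ground = abs(VT[:, 0].conj() @ psi) ** 2
print(p_ground)                           # stays near 1 for a slow sweep
```

Shrinking T (a fast sweep) makes the ground state population drop, which is exactly the regime where the dropped k ≠ m terms of eq. (25.8) matter.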
Part V
APPENDICES
A HARMONIC OSCILLATOR REVIEW
Consider
\[
H_0 = \frac{P^2}{2m} + \frac{1}{2} m \omega^2 X^2.
\tag{A.1}
\]
Since it has been a while, let us compute the raising and lowering factorization that was used
so extensively for this problem.
It was of the form
\[
H_0 \sim (a X - i b P)(a X + i b P).
\tag{A.2}
\]
Why this factorization has an imaginary in it is a good question. It is not one that is given any
sort of rationale in the text [4].
It is clear that we want \( a = \sqrt{m/2}\,\omega \) and \( b = 1/\sqrt{2m} \). The difference is then
\[
H_0 - (a X - i b P)(a X + i b P) = -i a b \, [X, P] = -i \frac{\omega}{2} [X, P].
\tag{A.3}
\]
That commutator is an iħ value, but what was the sign? Let us compute so we do not get it
wrong
\[
\begin{aligned}
[x, p] \psi &= -i\hbar [x, \partial_x] \psi \\
&= -i\hbar \left( x \partial_x \psi - \partial_x (x \psi) \right) \\
&= -i\hbar (-\psi) \\
&= i\hbar \psi.
\end{aligned}
\tag{A.4}
\]
So we have
\[
H_0 = \left( \sqrt{\frac{m}{2}} \omega X - i \sqrt{\frac{1}{2m}} P \right) \left( \sqrt{\frac{m}{2}} \omega X + i \sqrt{\frac{1}{2m}} P \right) + \frac{\hbar \omega}{2}.
\tag{A.5}
\]
Factoring out an ħω produces the form of the Hamiltonian that we used before
\[
H_0 = \hbar \omega \left( \left( \sqrt{\frac{m\omega}{2\hbar}} X - i \sqrt{\frac{1}{2 m \hbar \omega}} P \right) \left( \sqrt{\frac{m\omega}{2\hbar}} X + i \sqrt{\frac{1}{2 m \hbar \omega}} P \right) + \frac{1}{2} \right).
\tag{A.6}
\]
The factors were labeled the raising (a†) and lowering (a) operators respectively, and writ-
ten
\[
\begin{aligned}
H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{2} \right) \\
a &= \sqrt{\frac{m\omega}{2\hbar}} X + i \sqrt{\frac{1}{2 m \hbar \omega}} P \\
a^\dagger &= \sqrt{\frac{m\omega}{2\hbar}} X - i \sqrt{\frac{1}{2 m \hbar \omega}} P.
\end{aligned}
\tag{A.7}
\]
Observe that we can find the inverse relations
\[
\begin{aligned}
X &= \sqrt{\frac{\hbar}{2 m \omega}} \left( a + a^\dagger \right) \\
P &= i \sqrt{\frac{m \hbar \omega}{2}} \left( a^\dagger - a \right).
\end{aligned}
\tag{A.8}
\]
Question: What is a good reason that we chose this particular factorization? For example, a
quick computation shows that we could have also picked
\[
H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{2} \right).
\tag{A.9}
\]
I do not know the answer. That said, this second factorization is useful in that it provides
the commutator relation between the raising and lowering operators, since subtracting eq. (A.7)
from eq. (A.9) yields
\[
\left[ a, a^\dagger \right] = 1.
\tag{A.10}
\]
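These relations are easy to check numerically in a truncated number basis. A sketch (the truncation size N is arbitrary; the last diagonal entry of the commutator is a truncation artifact, not physics):

```python
import numpy as np

N = 30                                          # truncation size (arbitrary)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)    # lowering: a|n> = sqrt(n)|n-1>
ad = a.T                                        # raising operator a^dagger

H = ad @ a + 0.5 * np.eye(N)                    # H0 in units of hbar*omega
print(np.diag(H)[:5])                           # 0.5, 1.5, 2.5, 3.5, 4.5

comm = a @ ad - ad @ a                          # [a, a^dagger]
print(np.allclose(np.diag(comm)[:-1], 1.0))     # identity, except last entry
```

Truncating at N necessarily spoils [a, a†] = 1 in the last row, since a† has nowhere to raise the top state to.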
The problem of finding the eigensolution of H_0 then reduces to finding the eigenvalues λ_n of a†a. Because
a†a commutes with the constant 1/2, an eigenstate of a†a is also an eigenstate of H_0. Utilizing
eq. (A.10), repeated application of a lowers λ_n by one each time, so there must be a lowest state |0⟩
for which
\[
a |0\rangle = 0.
\tag{A.14}
\]
Thus
\[
a^\dagger a |0\rangle = 0,
\tag{A.15}
\]
and
\[
\lambda_0 = 0.
\tag{A.16}
\]
This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to λ_0,
where up to this point 0 was just a label.
If the eigenvalue equation we are trying to solve for the Hamiltonian is H_0|n⟩ = E_n|n⟩, then the
energy eigenvalues are
\[
E_n = \hbar \omega \left( \lambda_n + \frac{1}{2} \right) = \hbar \omega \left( n + \frac{1}{2} \right).
\tag{A.18}
\]
B VERIFYING THE HELMHOLTZ GREEN’S FUNCTION
Motivation. In class this week we looked at an instance of the Helmholtz equation
\[
\left( \boldsymbol{\nabla}^2 + k^2 \right) \psi_k(\mathbf{r}) = s(\mathbf{r}).
\tag{B.1}
\]
We were told that the Green’s function satisfying
\[
\left( \boldsymbol{\nabla}^2 + k^2 \right) G^0(\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}'),
\tag{B.2}
\]
which can be used to solve for a particular solution of this differential equation via convolution
\[
\psi_k(\mathbf{r}) = \int G^0(\mathbf{r}, \mathbf{r}') s(\mathbf{r}') d^3 \mathbf{r}',
\tag{B.3}
\]
is
\[
G^0(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi} \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|}.
\tag{B.4}
\]
Let us try to verify this.
Application of the Helmholtz differential operator ∇² + k² on the presumed solution gives
\[
\left( \boldsymbol{\nabla}^2 + k^2 \right) \psi_k(\mathbf{r}) = -\frac{1}{4\pi} \int \left( \boldsymbol{\nabla}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}'.
\tag{B.5}
\]
This requires evaluation of
\[
\boldsymbol{\nabla}^2 \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|}.
\tag{B.6}
\]
Writing μ = |r − r′|, we start with the computation of
\[
\begin{aligned}
\frac{\partial}{\partial x} \frac{e^{i k \mu}}{\mu}
&= \frac{\partial \mu}{\partial x} \left( \frac{ik}{\mu} - \frac{1}{\mu^2} \right) e^{ik\mu} \\
&= \frac{\partial \mu}{\partial x} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu},
\end{aligned}
\tag{B.7}
\]
so
\[
\boldsymbol{\nabla} \frac{e^{ik\mu}}{\mu} = \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} \boldsymbol{\nabla} \mu.
\tag{B.8}
\]
Taking second derivatives with respect to x we find
\[
\begin{aligned}
\frac{\partial^2}{\partial x^2} \frac{e^{ik\mu}}{\mu}
&= \frac{\partial^2 \mu}{\partial x^2} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu}
+ \left( \frac{\partial \mu}{\partial x} \right)^2 \frac{1}{\mu^2} \frac{e^{ik\mu}}{\mu}
+ \left( \frac{\partial \mu}{\partial x} \right)^2 \left( ik - \frac{1}{\mu} \right)^2 \frac{e^{ik\mu}}{\mu} \\
&= \frac{\partial^2 \mu}{\partial x^2} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu}
+ \left( \frac{\partial \mu}{\partial x} \right)^2 \left( -k^2 - \frac{2ik}{\mu} + \frac{2}{\mu^2} \right) \frac{e^{ik\mu}}{\mu}.
\end{aligned}
\tag{B.9}
\]
Our Laplacian is then
\[
\boldsymbol{\nabla}^2 \frac{e^{ik\mu}}{\mu}
= \left( \boldsymbol{\nabla}^2 \mu \right) \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu}
+ \left( \boldsymbol{\nabla} \mu \right)^2 \left( -k^2 - \frac{2ik}{\mu} + \frac{2}{\mu^2} \right) \frac{e^{ik\mu}}{\mu}.
\]
For the derivatives of μ we have
\[
\begin{aligned}
\frac{\partial \mu}{\partial x} &= \frac{\partial}{\partial x} \sqrt{(x - x')^2 + (y - y')^2 + (z - z')^2} \\
&= \frac{1}{2} \, 2 (x - x') \frac{1}{\sqrt{(x - x')^2 + (y - y')^2 + (z - z')^2}} \\
&= \frac{x - x'}{\mu}.
\end{aligned}
\tag{B.11}
\]
So we have
\[
\begin{aligned}
\boldsymbol{\nabla} \mu &= \frac{\mathbf{r} - \mathbf{r}'}{\mu} \\
\left( \boldsymbol{\nabla} \mu \right)^2 &= 1,
\end{aligned}
\tag{B.12}
\]
and
\[
\begin{aligned}
\frac{\partial^2 \mu}{\partial x^2} &= \frac{\partial}{\partial x} \frac{x - x'}{\mu} \\
&= \frac{1}{\mu} - (x - x') \frac{1}{\mu^2} \frac{\partial \mu}{\partial x} \\
&= \frac{1}{\mu} - (x - x') \frac{x - x'}{\mu} \frac{1}{\mu^2} \\
&= \frac{1}{\mu} - (x - x')^2 \frac{1}{\mu^3}.
\end{aligned}
\tag{B.13}
\]
So we find
\[
\boldsymbol{\nabla}^2 \mu = \frac{3}{\mu} - \frac{1}{\mu},
\tag{B.14}
\]
or
\[
\boldsymbol{\nabla}^2 \mu = \frac{2}{\mu}.
\tag{B.15}
\]
Inserting eq. (B.12) and eq. (B.15) into the Laplacian above, the 1/μ terms cancel, leaving, for μ ≠ 0,
\[
\boldsymbol{\nabla}^2 \frac{e^{ik\mu}}{\mu} = -k^2 \frac{e^{ik\mu}}{\mu},
\]
so that away from the point r = r′,
\[
\left( \boldsymbol{\nabla}^2 + k^2 \right) G^0(\mathbf{r}, \mathbf{r}') = 0.
\tag{B.17}
\]
In the neighborhood of |r − r′| < ε. Having shown that we end up with zero everywhere that
r ≠ r′, we are left to consider a neighborhood of the volume surrounding the point r in our
integral. Following the Coulomb treatment in §2.2 of [11], we use a spherical volume element
centered around r of radius ε, and then convert a divergence to a surface area to evaluate the
integral away from the problematic point
\[
-\frac{1}{4\pi} \int_{\text{all space}} \left( \boldsymbol{\nabla}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}'
= -\frac{1}{4\pi} \int_{|\mathbf{r} - \mathbf{r}'| < \epsilon} \left( \boldsymbol{\nabla}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}'.
\tag{B.18}
\]
With a change of variables a = r′ − r, and assuming s is slowly varying over the small sphere,
\[
\begin{aligned}
-\frac{1}{4\pi} \int_{|\mathbf{r} - \mathbf{r}'| < \epsilon} \left( \boldsymbol{\nabla}_{\mathbf{r}}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}'
&= -\frac{1}{4\pi} \int_{|\mathbf{a}| < \epsilon} \left( \boldsymbol{\nabla}_{\mathbf{r}}^2 + k^2 \right) \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} s(\mathbf{r} + \mathbf{a}) d^3 \mathbf{a} \\
&\approx -\frac{s(\mathbf{r})}{4\pi} \int_{|\mathbf{a}| < \epsilon} \left( \boldsymbol{\nabla}_{\mathbf{r}}^2 + k^2 \right) \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} d^3 \mathbf{a}.
\end{aligned}
\tag{B.19}
\]
Since
\[
\begin{aligned}
\boldsymbol{\nabla}_{\mathbf{r}} |\mathbf{r} - \mathbf{r}'| &= -\boldsymbol{\nabla}_{\mathbf{a}} |\mathbf{a}| \\
\left( \boldsymbol{\nabla}_{\mathbf{r}} |\mathbf{r} - \mathbf{r}'| \right)^2 &= \left( \boldsymbol{\nabla}_{\mathbf{a}} |\mathbf{a}| \right)^2 \\
\boldsymbol{\nabla}_{\mathbf{r}}^2 |\mathbf{r} - \mathbf{r}'| &= \boldsymbol{\nabla}_{\mathbf{a}}^2 |\mathbf{a}|,
\end{aligned}
\tag{B.20}
\]
we have
\[
\boldsymbol{\nabla}_{\mathbf{r}}^2 \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|}
= \boldsymbol{\nabla}_{\mathbf{a}}^2 \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|}
= \boldsymbol{\nabla}_{\mathbf{a}} \cdot \left( \boldsymbol{\nabla}_{\mathbf{a}} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} \right).
\tag{B.21}
\]
This gives us, after applying the divergence theorem to the Laplacian term,
\[
-\frac{s(\mathbf{r})}{4\pi} \left( \oint_{|\mathbf{a}| = \epsilon} \left( \boldsymbol{\nabla}_{\mathbf{a}} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} \right) \cdot \hat{\mathbf{a}} \, d^2 \mathbf{a}
+ k^2 \int_{|\mathbf{a}| < \epsilon} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} d^3 \mathbf{a} \right).
\]
To complete these evaluations, we can now employ a spherical coordinate change of variables.
Let us do the k² volume integral first. We have
\[
\begin{aligned}
k^2 \int_{dV} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} d^3 \mathbf{a}
&= k^2 \int_{a=0}^\epsilon \int_{\theta=0}^\pi \int_{\phi=0}^{2\pi} \frac{e^{i k a}}{a} a^2 \, da \sin\theta \, d\theta \, d\phi \\
&= 4\pi k^2 \int_{a=0}^\epsilon a e^{i k a} \, da \\
&= 4\pi \int_{u=0}^{k\epsilon} u e^{i u} \, du \\
&= 4\pi \left. (-i u + 1) e^{i u} \right|_0^{k\epsilon} \\
&= 4\pi \left( (-i k \epsilon + 1) e^{i k \epsilon} - 1 \right).
\end{aligned}
\tag{B.23}
\]
To evaluate the surface integral we note that we will require only the radial portion of the
gradient, so have
\[
\begin{aligned}
\left( \boldsymbol{\nabla}_{\mathbf{a}} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} \right) \cdot \hat{\mathbf{a}}
&= \left( \hat{\mathbf{a}} \frac{\partial}{\partial a} \frac{e^{i k a}}{a} \right) \cdot \hat{\mathbf{a}} \\
&= \frac{\partial}{\partial a} \frac{e^{i k a}}{a} \\
&= \left( \frac{ik}{a} - \frac{1}{a^2} \right) e^{i k a} \\
&= (i k a - 1) \frac{e^{i k a}}{a^2}.
\end{aligned}
\tag{B.24}
\]
Our area element is a² sin θ dθ dφ, so we are left with
\[
\begin{aligned}
\oint_{dA} \left( \boldsymbol{\nabla}_{\mathbf{a}} \frac{e^{i k |\mathbf{a}|}}{|\mathbf{a}|} \right) \cdot \hat{\mathbf{a}} \, d^2 \mathbf{a}
&= \int_{\theta=0}^\pi \int_{\phi=0}^{2\pi} \left. (i k a - 1) \frac{e^{i k a}}{a^2} a^2 \right|_{a=\epsilon} \sin\theta \, d\theta \, d\phi \\
&= 4\pi (i k \epsilon - 1) e^{i k \epsilon}.
\end{aligned}
\tag{B.25}
\]
Putting the pieces together we have
\[
\begin{aligned}
-\frac{1}{4\pi} \int_{\text{all space}} \left( \boldsymbol{\nabla}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}'
&= -s(\mathbf{r}) \left( (-i k \epsilon + 1) e^{i k \epsilon} - 1 + (i k \epsilon - 1) e^{i k \epsilon} \right) \\
&= -s(\mathbf{r}) \left( (-i k \epsilon + 1 + i k \epsilon - 1) e^{i k \epsilon} - 1 \right),
\end{aligned}
\tag{B.26}
\]
or
\[
-\frac{1}{4\pi} \int_{\text{all space}} \left( \boldsymbol{\nabla}^2 + k^2 \right) \frac{e^{i k |\mathbf{r} - \mathbf{r}'|}}{|\mathbf{r} - \mathbf{r}'|} s(\mathbf{r}') d^3 \mathbf{r}' = s(\mathbf{r}).
\tag{B.27}
\]
This completes the desired verification of the Green’s function for the Helmholtz operator.
Observe the perfect cancellation here, so the limit of ε → 0 is independent of how large k
is made. You have to complete the integrals for both the Laplacian and the k² portions and
add them before taking any limits, or else you will get into trouble (as I did in my
first attempt).
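The pointwise part of this verification — that (∇² + k²) annihilates G⁰ away from r = r′ — can also be checked symbolically. A sketch with sympy (placing r′ at the origin for convenience, and evaluating at an arbitrarily chosen point away from the singularity):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k = sp.symbols('k', positive=True)

mu = sp.sqrt(x**2 + y**2 + z**2)             # mu = |r - r'|, with r' = 0
G = -sp.exp(sp.I * k * mu) / (4 * sp.pi * mu)

# (del^2 + k^2) G, computed term by term
helmholtz_G = sum(sp.diff(G, v, 2) for v in (x, y, z)) + k**2 * G

# Should vanish at any point with mu != 0.
val = complex(helmholtz_G.subs({x: 0.3, y: -0.7, z: 1.1, k: 2.0}).evalf())
print(abs(val))   # numerically zero
```

This of course says nothing about the delta function at the origin, which is exactly why the ε-sphere treatment above is needed.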
C EVALUATING THE SQUARED SINC INTEGRAL
In the Fermi’s golden rule lecture we used the result for the integral of the squared sinc function.
Here is a reminder of the contours required to perform this integral.
We want to evaluate
\[
\int_{-\infty}^\infty \frac{\sin^2(x |\mu|)}{x^2} dx.
\tag{C.1}
\]
We make a few changes of variables
\[
\begin{aligned}
\int_{-\infty}^\infty \frac{\sin^2(x |\mu|)}{x^2} dx
&= |\mu| \int_{-\infty}^\infty \frac{\sin^2(y)}{y^2} dy \\
&= -i |\mu| \int_{-\infty}^\infty \frac{(e^{iy} - e^{-iy})^2}{(2iy)^2} i \, dy \\
&= -\frac{i |\mu|}{4} \int_{-i\infty}^{i\infty} \frac{e^{2z} + e^{-2z} - 2}{z^2} dz.
\end{aligned}
\tag{C.2}
\]
Now we pick a contour that is distorted to one side of the origin, as in fig. C.1.

Figure C.1: Contour distorted to one side of the double pole at the origin
We employ Jordan’s theorem (§8.12 [9]) now to pick the contours for each of the integrals,
since we need to ensure the e^{±2z} terms converge as R → ∞ for the z = Re^{iθ} part of the contour.
We can write
\[
\int_{-\infty}^\infty \frac{\sin^2(x |\mu|)}{x^2} dx
= -\frac{i |\mu|}{4} \left( \int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz + \int_{C_0 + C_1} \frac{e^{-2z}}{z^2} dz - 2 \int_{C_0 + C_1} \frac{dz}{z^2} \right).
\tag{C.3}
\]
The second two integrals both surround no poles, so we have only the first to deal with
\[
\begin{aligned}
\int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz
&= 2\pi i \frac{1}{1!} \left. \frac{d}{dz} e^{2z} \right|_{z=0} \\
&= 4\pi i,
\end{aligned}
\tag{C.4}
\]
so
\[
\int_{-\infty}^\infty \frac{\sin^2(x |\mu|)}{x^2} dx = -\frac{i |\mu|}{4} 4\pi i = \pi |\mu|.
\tag{C.5}
\]
On the cavalier choice of contours. The choice of which contours to pick above may seem
pretty arbitrary, but it is made for good reason. Suppose you picked C_0 + C_1 for the first integral.
On the big C_1 arc, with a z = Re^{iθ} substitution, the e^{2z} factor grows without bound, so the
arc contribution cannot be neglected and Jordan’s theorem does not apply.
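The contour result π|µ| is also easy to confirm by direct numerical quadrature; a sketch (the value of |µ| is an arbitrary choice for the check):

```python
import numpy as np
from scipy.integrate import quad

mu = 2.5   # arbitrary |mu| for the check

def f(x):
    # sin^2(mu x)/x^2, patching the removable singularity at x = 0
    return mu**2 if x == 0.0 else (np.sin(mu * x) / x) ** 2

half, _ = quad(f, 0, np.inf, limit=500)   # integrand is even in x
result = 2 * half
print(result, np.pi * mu)                 # both close to pi*|mu|
```

The integrand decays like 1/x², so the improper integral converges absolutely and the quadrature is well behaved.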
D HERMITE POLYNOMIALS

The Hermite polynomials are given by the Rodrigues’ formula
\[
H_n = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}.
\tag{D.1}
\]
Let us write D = d/dx, and take the derivative of H_n
\[
\begin{aligned}
(-1)^n D H_n
&= D \left( e^{x^2} D^n e^{-x^2} \right) \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} D^n \left( -2x e^{-x^2} \right) \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} \sum_{k=0}^n \binom{n}{k} \left( D^k (-2x) \right) D^{n-k} e^{-x^2} \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} \left( -2x D^n e^{-x^2} - 2n D^{n-1} e^{-x^2} \right) \\
&= -2n e^{x^2} D^{n-1} e^{-x^2},
\end{aligned}
\tag{D.2}
\]
where only the k = 0, 1 terms of the Leibniz sum survive, since D^k(−2x) = 0 for k ≥ 2. Dividing
out the (−1)^n gives the derivative recurrence
\[
\frac{d}{dx} H_n(x) = 2n H_{n-1}(x).
\tag{D.3}
\]
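The recurrence can be spot checked symbolically (a trivial sketch using sympy's built-in physicists' Hermite polynomials):

```python
import sympy as sp

x = sp.symbols('x')

# Check d/dx H_n = 2 n H_{n-1} for the first few Hermite polynomials.
ok = all(
    sp.expand(sp.diff(sp.hermite(n, x), x) - 2 * n * sp.hermite(n - 1, x)) == 0
    for n in range(1, 7)
)
print(ok)   # True
```

sympy's `hermite` uses the same physicists' convention as eq. (D.1), so the comparison is direct.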
E MATHEMATICA NOTEBOOKS

These Mathematica notebooks, some just trivial ones used to generate figures, others more
elaborate, and perhaps some even polished, can be found in
https://raw.github.com/peeterjoot/mathematica/master/.
The free Wolfram CDF player is capable of read-only viewing of these notebooks to some
extent.
Files saved explicitly as CDF have interactive content that can be explored with the CDF
player.
BIBLIOGRAPHY
[1] M. Abramowitz and I.A. Stegun. Handbook of mathematical functions with formulas,
graphs, and mathematical tables, volume 55. Dover publications, 1964. (Cited on
page 287.)
[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989. (Cited on pages 92
and 241.)
[3] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover
Publications, 1992. (Cited on page 220.)
[4] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.
(Cited on pages xi, 5, 23, 24, 26, 37, 40, 47, 54, 57, 59, 70, 84, 87, 95, 99, 107, 119, 125,
133, 137, 139, 147, 150, 153, 157, 161, 171, 173, 178, 185, 193, 199, 213, 219, 233, 243,
257, 259, and 273.)
[5] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University
Press New York, Cambridge, UK, 1st edition, 2003. (Cited on pages 69 and 149.)
[6] D.J. Griffiths. Introduction to quantum mechanics, volume 1. Pearson Prentice Hall, 2005.
(Cited on page 107.)
[7] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers,
1999. (Cited on pages 66 and 149.)
[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975. (Cited
on page 208.)
[9] W.R. Le Page and W.R. LePage. Complex Variables and the Laplace Transform for Engi-
neers. Courier Dover Publications, 1980. (Cited on page 284.)
[10] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one.
Dover Publications New York, 1999. (Cited on pages 45 and 217.)
[12] JR Taylor. Scattering Theory: the Quantum Theory of Nonrelativistic Scattering, volume 1.
1972. (Cited on page 217.)
[13] F.G. Tricomi. Integral equations. Dover Pubns, 1985. (Cited on page 70.)
[14] Wikipedia. Adiabatic theorem — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Adiabatic_theorem&oldid=447547448.
[Online; accessed 9-October-2011]. (Cited on page 84.)
[15] Wikipedia. Geometric phase — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Geometric_phase&oldid=430834614. [On-
line; accessed 9-October-2011]. (Cited on page 90.)
[16] Wikipedia. Bessel function — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Bessel_function&oldid=461096228. [On-
line; accessed 4-December-2011]. (Cited on page 207.)
[18] Wikipedia. Steradian — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Steradian&oldid=462086182. [Online; ac-
cessed 4-December-2011]. (Cited on page 207.)
[19] Wikipedia. WKB approximation — Wikipedia, The Free Encyclopedia, 2011. URL
http://en.wikipedia.org/w/index.php?title=WKB_approximation&oldid=
453833635. [Online; accessed 19-October-2011]. (Cited on page 112.)
[20] Wikipedia. Zeeman effect — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Zeeman_effect&oldid=450367887. [Online;
accessed 15-September-2011]. (Cited on page 123.)