
peeter joot peeterjoot@protonmail.com

QUANTUM MECHANICS II

peeter joot peeterjoot@protonmail.com

Notes and problems from UofT PHY456H1F 2012


November 2015 – version v.6
Peeter Joot peeterjoot@protonmail.com: Quantum Mechanics II, Notes and problems from
UofT PHY456H1F 2012, © November 2015
COPYRIGHT

Copyright © 2015 Peeter Joot. All Rights Reserved.
This book may be reproduced and distributed in whole or in part, without fee, subject to the
following conditions:

• The copyright notice above and this permission notice must be preserved complete on all
complete or partial copies.

• Any translation or derived work must be approved by the author in writing before distribution.

• If you distribute this work in part, instructions for obtaining the complete version of this
document must be included, and a means for obtaining a complete version provided.

• Small portions may be reproduced as illustrations for reviews or quotes in other works
without this permission notice if proper citation is given.

Exceptions to these rules may be granted for academic purposes: Write to the author and ask.

Disclaimer: I confess to violating somebody's copyright when I copied this copyright statement.

DOCUMENT VERSION

Sources for this notes compilation can be found in the github repository
https://github.com/peeterjoot/physicsplay
The last commit (Nov/4/2015), associated with this pdf, was
834e136b8e3635cc7ab824f3c74fcfdf8c6f909f

Dedicated to:
Aurora and Lance, my awesome kids, and
Sofia, who not only tolerates and encourages my studies, but is also awesome enough to think
that math is sexy.
PREFACE

These are my personal lecture notes for the Fall 2011, University of Toronto Quantum Mechanics II course (PHY456H1F), taught by Prof. John E Sipe.
The official description of this course was:
Quantum dynamics in Heisenberg and Schrödinger Pictures; WKB approximation; Variational Method; Time-Independent Perturbation Theory; Spin; Addition of Angular Momentum; Time-Dependent Perturbation Theory; Scattering.
This document contains a few things

• My lecture notes.
Any typos are probably mine (Peeter); no claim of spelling or grammar correctness is made.

• Notes from reading of the text [4]. This may include observations, notes on what seem
like errors, and some solved problems.

• Approaches to some of the assigned problems that differ from the solution sets.

• Some personal notes exploring details that were not clear to me from the lectures.

• Some worked problems.

Peeter Joot peeterjoot@protonmail.com

CONTENTS

Preface

i approximate methods and perturbation

1 approximate methods
1.1 Approximate methods for finding energy eigenvalues and eigenkets
1.2 Variational principle
2 perturbation methods
2.1 States and wave functions
2.2 Excited states
2.3 Problems
2.3.1 Helium atom ground state energy estimation
2.3.2 Curious problem using the variational method to find the ground state energy of the Harmonic oscillator
3 time independent perturbation theory
3.1 Time independent perturbation
3.2 Issues concerning degeneracy
3.3 Examples
4 time dependent perturbation
4.1 Review of dynamics
4.2 Interaction picture
4.3 Justifying the Taylor expansion above (not class notes)
4.4 Recap: Interaction picture
4.5 Time dependent perturbation theory
4.6 Perturbation expansion
4.7 Time dependent perturbation
4.8 Sudden perturbations
4.9 Adiabatic perturbations
4.10 Adiabatic perturbation theory (cont.)
4.11 Examples
5 fermi's golden rule
5.1 Recap. Where we got to on Fermi's golden rule
5.2 Fermi's Golden rule
6 wkb method
6.1 WKB (Wentzel-Kramers-Brillouin) Method
6.2 Turning points
6.3 Examples

ii spin, angular momentum, and two particle systems

7 composite systems
7.1 Hilbert Spaces
7.2 Operators
7.3 Generalizations
7.4 Recalling the Stern-Gerlach system from PHY354
8 spin and spinors
8.1 Generators
8.2 Generalizations
8.3 Multiple wavefunction spaces
9 representation of two state kets and pauli spin matrices
9.1 Representation of kets
9.2 Representation of two state kets
9.3 Pauli spin matrices
10 rotation operator in spin space
10.1 Formal Taylor series expansion
10.2 Spin dynamics
10.3 The hydrogen atom with spin
11 two spin systems, angular momentum, and clebsch-gordan convention
11.1 Two spins
11.2 More on two spin systems
11.3 Recap: table of two spin angular momenta
11.4 Tensor operators
12 rotations of operators and spherical tensors
12.1 Setup
12.2 Infinitesimal rotations
12.3 A problem
12.4 How do we extract these buried simplicities?
12.5 Motivating spherical tensors
12.6 Spherical tensors (cont)

iii scattering theory

13 scattering theory
13.1 Setup
13.2 1D QM scattering. No potential wave packet time evolution
13.3 A Gaussian wave packet
13.4 With a potential
13.5 Considering the time independent case temporarily
13.6 Recap
14 3d scattering
14.1 Setup
14.2 Seeking a post scattering solution away from the potential
14.3 The radial equation and its solution
14.4 Limits of spherical Bessel and Neumann functions
14.5 Back to our problem
14.6 Scattering geometry and nomenclature
14.7 Appendix
14.8 Verifying the solution to the spherical Bessel equation
14.9 Scattering cross sections
15 born approximation

iv notes and problems

16 simple entanglement example
17 problem set 4, problem 2 notes
18 a different derivation of the adiabatic perturbation coefficient equation
19 second order time evolution for the coefficients of an initially pure ket with an adiabatically changing hamiltonian
20 degeneracy and diagonalization
20.1 Motivation
20.2 A four state Hamiltonian
20.3 Generalizing slightly
21 review of approximation results
21.1 Motivation
21.2 Variational method
21.3 Time independent perturbation
21.4 Degeneracy
21.5 Interaction picture
21.6 Time dependent perturbation
21.7 Sudden perturbations
21.8 Adiabatic perturbations
21.9 WKB
22 on conditions for clebsch-gordan coefficients to be zero
22.1 Motivation
22.2 Recap on notation
22.3 The Jz action
23 one more adiabatic perturbation derivation
23.1 Motivation
23.2 Build up
23.3 Adiabatic case
23.4 Summary
24 a super short derivation of the time dependent perturbation result
25 second form of adiabatic approximation

v appendices

a harmonic oscillator review
b verifying the helmholtz green's function
c evaluating the squared sinc integral
d derivative recurrence relation for hermite polynomials
e mathematica notebooks

vi bibliography

bibliography
LIST OF FIGURES

Figure 1.1 qmTwoL2fig1
Figure 1.2 qmTwoL2fig2
Figure 1.3 qmTwoL2fig3
Figure 1.4 qmTwoL2fig4
Figure 1.5 qmTwoL2fig5
Figure 1.6 qmTwoL2fig6
Figure 2.1 Pictorial illustration of ket projections
Figure 2.2 Illustration of variation of energy with variation of Hamiltonian
Figure 2.3 Exponential trial function with absolute exponential die off
Figure 2.4 First ten orders, fitting harmonic oscillator wavefunctions to this trial function
Figure 2.5 Tenth order harmonic oscillator wavefunction fitting
Figure 2.6 stepFunction
Figure 3.1 Example of small perturbation from known Hamiltonian
Figure 3.2 Energy level splitting
Figure 4.1 Coulomb interaction of a nucleus and heavy atom
Figure 4.2 atom in a field
Figure 4.3 Perturbation around energy level s
Figure 4.4 Slow nucleus passing an atom
Figure 4.5 Fields for nucleus atom example
Figure 4.6 Atom interacting with an EM pulse
Figure 4.7 Positive and negative frequencies
Figure 4.8 Gaussian wave packet
Figure 4.9 FTgaussianWavePacket
Figure 4.10 Sudden step Hamiltonian
Figure 4.11 Harmonic oscillator sudden Hamiltonian perturbation
Figure 4.12 Energy level variation with time
Figure 4.13 Phase whipping around
Figure 4.14 Degenerate energy level splitting
Figure 5.1 Gaussian wave packet
Figure 5.2 Sine only after an initial time
Figure 5.3 ωmi illustrated
Figure 5.4 FIXME: qmTwoL9fig7
Figure 5.5 Sine only after an initial time
Figure 5.6 Perturbation from i to mth energy levels
Figure 5.7 Two sinc lobes
Figure 5.8 Continuum of energy levels for ionized states of an atom
Figure 5.9 Semi-conductor well
Figure 5.10
Figure 5.11 Momentum space view
Figure 6.1 Finite well potential
Figure 6.2 Example of a general potential
Figure 6.3 Turning points where WKB will not work
Figure 6.4 Diagram for patching method discussion
Figure 6.5 Arbitrary potential in an infinite well
Figure 8.1 Vector translation
Figure 8.2 Active spatial translation
Figure 8.3 Active vector rotations
Figure 8.4 An example pair of non-commuting rotations
Figure 8.5 Rotated temperature (scalar) field
Figure 8.6 Rotating a capacitance electric field
Figure 10.1 Magnetic moment due to steady state current
Figure 10.2 Charge moving in circle
Figure 10.3 Induced torque in the presence of a magnetic field
Figure 10.4 Precession due to torque
Figure 11.1 Angular momentum triangle inequality
Figure 11.2 active rotation
Figure 11.3 Rotating a wavefunction
Figure 12.1 Rotating a state centered at F
Figure 13.1 classical collision of particles
Figure 13.2 Classical scattering radius and impact parameter
Figure 13.3 Wave packet for a particle wavefunction ℜ(ψ(x, 0))
Figure 13.4 Gaussian wave packet
Figure 13.5 moving spreading Gaussian packet
Figure 13.6 QM wave packet prior to interaction with repulsive potential
Figure 13.7 QM wave packet long after interaction with repulsive potential
Figure 13.8 Kinetic energy greater than potential energy
Figure 13.9 qmTwoL21Fig9
Figure 13.10 potential zero outside of a specific region
Figure 13.11 A bounded positive potential
Figure 13.12 Wave packet in free space and with positive potential
Figure 13.13 Reflection and transmission of wave packet
Figure 14.1 Radially bounded spherical potential
Figure 14.2 Radially bounded potential
Figure 14.3 Scattering cross section
Figure 14.4 Bounded potential
Figure 14.5 plane wave front incident on particle
Figure 14.6 k space localized wave packet
Figure 14.7 point of measurement of scattering cross section
Figure 14.8 Plane wave vs packet wave front
Figure C.1 Contour distorted to one side of the double pole at the origin
Part I

APPROXIMATE METHODS AND PERTURBATION

1 APPROXIMATE METHODS
1.1 approximate methods for finding energy eigenvalues and eigenkets

In many situations one has a Hamiltonian H

H |Ψnα⟩ = En |Ψnα⟩ (1.1)

Here α is a "degeneracy index" (for example, as in the Hydrogen atom).

Why?

• Simplifies dynamics. Take

|Ψ(0)⟩ = Σ_{nα} |Ψnα⟩ ⟨Ψnα|Ψ(0)⟩ = Σ_{nα} cnα |Ψnα⟩ (1.2)

Then

|Ψ(t)⟩ = e^{−iHt/ℏ} |Ψ(0)⟩
       = Σ_{nα} cnα e^{−iHt/ℏ} |Ψnα⟩    (1.3)
       = Σ_{nα} cnα e^{−iEn t/ℏ} |Ψnα⟩

• An "applied field" can often be thought of as driving the system from one eigenstate to another.

• Stat mech. In thermal equilibrium

⟨O⟩ = Σ_{nα} ⟨Ψnα| O |Ψnα⟩ e^{−βEn} / Z (1.4)


Figure 1.1: qmTwoL2fig1

where

β = 1/(kB T), (1.5)

and

Z = Σ_{nα} e^{−βEn} (1.6)
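As a small numeric illustration of the first point (a sketch, not from the lecture: the 3×3 Hermitian matrix standing in for H is hypothetical example data; numpy and scipy are assumed available), propagating through the eigenbasis reproduces direct exponentiation of H:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Arbitrary Hermitian matrix standing in for H (hypothetical example data).
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.1, 3.5]])
E, V = np.linalg.eigh(H)            # energies E_n, eigenkets as columns of V

psi0 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
t = 0.7

# c_n = <Psi_n|Psi(0)>; each coefficient then just picks up a phase exp(-i E_n t/hbar).
c = V.conj().T @ psi0
psi_t = V @ (np.exp(-1j * E * t / hbar) * c)

# Cross-check against |Psi(t)> = exp(-i H t/hbar) |Psi(0)>.
assert np.allclose(psi_t, expm(-1j * H * t / hbar) @ psi0)
```

Once H is diagonalized, the dynamics reduce to one phase per eigenstate, which is the point of eq. (1.3).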

1.2 variational principle

Consider any ket

|Ψ⟩ = Σ_{nα} cnα |Ψnα⟩ (1.7)

(perhaps not even normalized), where

cnα = ⟨Ψnα|Ψ⟩ (1.8)



but we do not know these.

⟨Ψ|Ψ⟩ = Σ_{nα} |cnα|² (1.9)

⟨Ψ| H |Ψ⟩ / ⟨Ψ|Ψ⟩ = Σ_{nα} |cnα|² En / Σ_{mβ} |cmβ|²
                 ≥ Σ_{nα} |cnα|² E0 / Σ_{mβ} |cmβ|²    (1.10)
                 = E0

So for any ket we can form the upper bound for the ground state energy

⟨Ψ| H |Ψ⟩ / ⟨Ψ|Ψ⟩ ≥ E0 (1.11)
There is a whole set of strategies based on estimating the ground state energy. This is called
the Variational principle for ground state. See §24.2 in the text [4].
We define the functional

E[Ψ] = ⟨Ψ| H |Ψ⟩ / ⟨Ψ|Ψ⟩ ≥ E0 (1.12)

If |Ψ⟩ = c |Ψ0⟩ where |Ψ0⟩ is the normalized ground state, then

E[cΨ0] = E0 (1.13)

Example 1.1: Hydrogen atom

⟨r| H |r′⟩ = H δ³(r − r′) (1.14)

where

H = −(ℏ²/2µ) ∇² − e²/r (1.15)

Here µ is the reduced mass.

We know the exact solution:

H |Ψ0⟩ = E0 |Ψ0⟩ (1.16)

E0 = −Ry (1.17)

Ry = µe⁴/(2ℏ²) ≈ 13.6 eV (1.18)

⟨r|Ψ0⟩ = Φ100(r) = (1/(πa0³))^{1/2} e^{−r/a0} (1.19)

a0 = ℏ²/(µe²) ≈ 0.53 Å (1.20)
µe2

Figure 1.2: qmTwoL2fig2



Figure 1.3: qmTwoL2fig3

estimate

⟨Ψ| H |Ψ⟩ = ∫ d³r Ψ*(r) (−(ℏ²/2µ) ∇² − e²/r) Ψ(r)
⟨Ψ|Ψ⟩ = ∫ d³r |Ψ(r)|²    (1.21)

Or guess shape

Figure 1.4: qmTwoL2fig4

Using the trial wave function e^{−αr²}

E[Ψ] → E(α) (1.22)

E(α) = ∫ d³r e^{−αr²} (−(ℏ²/2µ) ∇² − e²/r) e^{−αr²} / ∫ d³r e^{−2αr²} (1.23)

find

E(α) = Aα − Bα^{1/2} (1.24)

A = 3ℏ²/2µ
B = 2e² (2/π)^{1/2}    (1.25)

Figure 1.5: qmTwoL2fig5

Minimum at

α0 = (µe²/ℏ²)² (8/9π) (1.26)

So

E(α0) = −(µe⁴/2ℏ²)(8/3π) = −0.85 Ry (1.27)
maybe not too bad...
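As a numeric check of eq. (1.26) and eq. (1.27) (a sketch, not part of the lecture: atomic units ℏ = µ = e = 1 are assumed, so A = 3/2, B = 2√(2/π), and 1 Ry = 1/2 hartree; scipy is assumed available):

```python
import math
from scipy.optimize import minimize_scalar

# E(alpha) = A alpha - B sqrt(alpha) in atomic units (hbar = mu = e = 1).
A = 3.0 / 2.0
B = 2.0 * math.sqrt(2.0 / math.pi)

def E(alpha):
    return A * alpha - B * math.sqrt(alpha)

res = minimize_scalar(E, bounds=(1e-6, 10.0), method="bounded")

# Closed forms: alpha_0 = 8/(9 pi), E(alpha_0) = -4/(3 pi) hartree = -(8/(3 pi)) Ry.
assert abs(res.x - 8.0 / (9.0 * math.pi)) < 1e-4
assert abs(E(res.x) + 4.0 / (3.0 * math.pi)) < 1e-8

print(E(res.x) / 0.5)   # energy in Ry, about -0.85
```

The minimizer lands on α0 = 8/(9π) and an energy of about −0.85 Ry, about 15% above the exact −1 Ry ground state.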

Example 1.2: Helium atom

Assume an infinite nuclear mass with nucleus charge 2e


Figure 1.6: qmTwoL2fig6

ground state wavefunction

Ψ0(r1, r2) (1.28)

The problem that we want to solve is

(−(ℏ²/2m) ∇1² − (ℏ²/2m) ∇2² − 2e²(1/r1 + 1/r2) + e²/|r1 − r2|) Ψ0(r1, r2) = E0 Ψ0(r1, r2) (1.29)

Nobody can solve this problem. It is one of the simplest real problems in QM that cannot be solved exactly.
Suppose that we neglected the electron-electron repulsion. Then

Ψ0(r1, r2) = Φ100(r1)Φ100(r2) (1.30)


where

(−(ℏ²/2m) ∇² − 2e²/r) Φ100(r) = ε Φ100(r) (1.31)

with

ε = −4 Ry (1.32)

Ry = me⁴/(2ℏ²) (1.33)

This is the solution to

(−(ℏ²/2m) ∇1² − (ℏ²/2m) ∇2² − 2e²(1/r1 + 1/r2)) Ψ0^(0)(r1, r2) = E0^(0) Ψ0^(0)(r1, r2) (1.34)

E0^(0) = −8 Ry. (1.35)

Now we want to put back in the electron-electron repulsion, and make an estimate.
Trial wavefunction

Ψ(r1, r2, Z) = (Z³/πa0³)^{1/2} e^{−Zr1/a0} (Z³/πa0³)^{1/2} e^{−Zr2/a0} (1.36)

We expect that the best estimate is for Z ∈ [1, 2].

This can be calculated numerically, and we find

E(Z) = 2 Ry (Z² − 4Z + (5/8) Z) (1.37)

The Z² comes from the kinetic energy. The −4Z is the electron-nuclear attraction, and the final term is from the electron-electron repulsion.

The actual minimum is at

Z = 2 − 5/16 (1.38)

E(2 − 5/16) = −77.5 eV (1.39)

Whereas the measured value is −78.6 eV.
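A quick numeric check of eq. (1.37)-(1.39) (a sketch; the Rydberg-to-eV conversion value is mine, not from the notes):

```python
RY_EV = 13.6057   # 1 Ry in eV (assumed conversion value)

def E(Z):
    # eq. (1.37): E(Z) = 2 Ry (Z^2 - 4 Z + (5/8) Z), returned in Ry
    return 2.0 * (Z * Z - 4.0 * Z + (5.0 / 8.0) * Z)

# dE/dZ = 2 (2 Z - 4 + 5/8) = 0 at Z = 2 - 5/16
Z0 = 2.0 - 5.0 / 16.0
assert all(E(Z0) <= E(Z0 + d) for d in (-0.1, -0.01, 0.01, 0.1))
assert abs(E(Z0) * RY_EV - (-77.5)) < 0.1

print(Z0, E(Z0) * RY_EV)   # about -77.5 eV, vs the measured -78.6 eV
```

The minimizing Z = 27/16 ≈ 1.69 is indeed between 1 and 2: each electron partially screens the nucleus for the other.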


2 PERTURBATION METHODS
2.1 states and wave functions

Suppose we have the following non-degenerate energy eigenstates

⋮
E1 ∼ |ψ1⟩    (2.1)
E0 ∼ |ψ0⟩

and consider a state that is "very close" to |ψn⟩.

|ψ⟩ = |ψn⟩ + |δψn⟩ (2.2)

We form projections onto the |ψn⟩ "direction". The difference from this projection will be written |δψn⊥⟩, as depicted in fig. 2.1. This illustration cannot be interpreted literally, but illustrates the idea nicely.
For the amount along the projection onto |ψn⟩ we write

⟨ψn|δψn⟩ = δα (2.3)

so that the total deviation from the original state is

|δψn⟩ = δα |ψn⟩ + |δψn⊥⟩. (2.4)

The varied ket is then

|ψ⟩ = (1 + δα) |ψn⟩ + |δψn⊥⟩ (2.5)

where

(δα)², ⟨δψn⊥|δψn⊥⟩ ≪ 1 (2.6)


Figure 2.1: Pictorial illustration of ket projections

In terms of these projections our ket's magnitude is

⟨ψ|ψ⟩ = ((1 + δα*) ⟨ψn| + ⟨δψn⊥|)((1 + δα) |ψn⟩ + |δψn⊥⟩)
      = |1 + δα|² ⟨ψn|ψn⟩ + ⟨δψn⊥|δψn⊥⟩    (2.7)
      + (1 + δα*) ⟨ψn|δψn⊥⟩ + (1 + δα) ⟨δψn⊥|ψn⟩

Because ⟨ψn|δψn⊥⟩ = 0 this is

⟨ψ|ψ⟩ = |1 + δα|² + ⟨δψn⊥|δψn⊥⟩. (2.8)

Similarly for the energy expectation we have

⟨ψ| H |ψ⟩ = ((1 + δα*) ⟨ψn| + ⟨δψn⊥|) H ((1 + δα) |ψn⟩ + |δψn⊥⟩)
          = |1 + δα|² En ⟨ψn|ψn⟩ + ⟨δψn⊥| H |δψn⊥⟩    (2.9)
          + (1 + δα*) En ⟨ψn|δψn⊥⟩ + (1 + δα) En ⟨δψn⊥|ψn⟩

Or

⟨ψ| H |ψ⟩ = En |1 + δα|² + ⟨δψn⊥| H |δψn⊥⟩. (2.10)


This gives

E[ψ] = ⟨ψ| H |ψ⟩ / ⟨ψ|ψ⟩
     = (En |1 + δα|² + ⟨δψn⊥| H |δψn⊥⟩) / (|1 + δα|² + ⟨δψn⊥|δψn⊥⟩)
     = (En + ⟨δψn⊥| H |δψn⊥⟩/|1 + δα|²) / (1 + ⟨δψn⊥|δψn⊥⟩/|1 + δα|²)    (2.11)
     = En (1 − ⟨δψn⊥|δψn⊥⟩/|1 + δα|² + ⋯) + ⋯
     = En (1 + O((δψn⊥)²))

where

(δψn⊥)² ∼ ⟨δψn⊥|δψn⊥⟩ (2.12)

Figure 2.2: Illustration of variation of energy with variation of Hamiltonian

"Small errors" in |ψ⟩ do not lead to large errors in E[ψ].

It is reasonably easy to get a good estimate of E0, although it is reasonably hard to get a good estimate of |ψ0⟩. Both are for the same reason: E[ψ] is not terribly sensitive to errors in the ket.
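This second-order insensitivity is easy to see numerically; a two-level sketch (with hypothetical energies E0 = 1 and E1 = 3, not from the notes):

```python
import numpy as np

H = np.diag([1.0, 3.0])   # exact ground state is (1, 0), with E0 = 1

def energy_functional(psi):
    return (psi.conj() @ H @ psi).real / (psi.conj() @ psi).real

# Contaminate the ground state by an amount eps: the energy error is exactly
# (E1 - E0) eps^2 / (1 + eps^2), i.e. quadratic (not linear) in eps.
for eps in (1e-1, 1e-2, 1e-3):
    err = energy_functional(np.array([1.0, eps])) - 1.0
    assert abs(err - 2.0 * eps**2 / (1.0 + eps**2)) < 1e-12
```

A 10% error in the ket gives only a ~2% error in the energy; a 1% error in the ket gives a ~0.02% error in the energy.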

2.2 excited states

⋮
E2 ∼ |ψ2⟩
E1 ∼ |ψ1⟩    (2.13)
E0 ∼ |ψ0⟩

Suppose we wanted an estimate of E1, and suppose we knew the ground state |ψ0⟩. For any trial |ψ⟩ form

|ψ′⟩ = |ψ⟩ − |ψ0⟩ ⟨ψ0|ψ⟩ (2.14)

We are taking out the projection of the ground state from an arbitrary trial function.
For a state written in terms of the basis states, allowing for an α degeneracy

|ψ⟩ = c0 |ψ0⟩ + Σ_{n>0,α} cnα |ψnα⟩ (2.15)

⟨ψ0|ψ⟩ = c0 (2.16)

and

|ψ′⟩ = Σ_{n>0,α} cnα |ψnα⟩ (2.17)

(note that there are some theorems that tell us that the ground state is generally non-degenerate).

E[ψ′] = ⟨ψ′| H |ψ′⟩ / ⟨ψ′|ψ′⟩
      = Σ_{n>0,α} |cnα|² En / Σ_{m>0,β} |cmβ|² ≥ E1    (2.18)

Often we do not know the exact ground state, although we might have a guess |ψ̃0⟩. For

|ψ′′⟩ = |ψ⟩ − |ψ̃0⟩ ⟨ψ̃0|ψ⟩ (2.19)

we cannot prove that

⟨ψ′′| H |ψ′′⟩ / ⟨ψ′′|ψ′′⟩ ≥ E1 (2.20)

Then
FIXME: missed something here.

⟨ψ′′′| H |ψ′′′⟩ / ⟨ψ′′′|ψ′′′⟩ ≥ E1 (2.21)
Somewhat remarkably, this is often possible. We talked last time about the Hydrogen atom. In that case, you can guess that the excited state is in the 2s orbital and therefore orthogonal to the 1s (?) orbital.

2.3 problems

Exercise 2.1 Harmonic oscillator (2011 ps1/p1)

Let Ho indicate the Hamiltonian of a 1D harmonic oscillator with mass m and frequency ω

Ho = P²/2m + (1/2) mω² X² (2.22)

and denote the energy eigenstates by |n⟩, where n is the eigenvalue of the number operator.

1. Find ⟨n| X⁴ |n⟩

2. Quadratic perturbation
Find the ground state energy of the Hamiltonian H = Ho + γX². You may assume γ > 0.
[Hint: This is not a trick question.]

3. Linear perturbation
Find the ground state energy of the Hamiltonian H = Ho − αX. [Hint: This is a bit harder than part 2 but not much. Try "completing the square."]
Answer for Exercise 2.1

Part 1. X⁴ Working through appendix A we now have enough context to attempt the first part of the question, calculation of

⟨n| X⁴ |n⟩ (2.23)

We have calculated things like this before, such as

⟨n| X² |n⟩ = (ℏ/2mω) ⟨n| (a + a†)² |n⟩ (2.24)
To continue we need an exact relation between |n⟩ and |n ± 1⟩. Recall that a |n⟩ is an eigenstate of a†a with eigenvalue n − 1. This implies that the kets a |n⟩ and |n − 1⟩ are proportional

a |n⟩ = cn |n − 1⟩, (2.25)

or

n ⟨n|n⟩ = ⟨n| a†a |n⟩ = |cn|² ⟨n − 1|n − 1⟩ = |cn|²
n = |cn|²    (2.26)

so that

a |n⟩ = √n |n − 1⟩. (2.27)

Similarly let

a† |n⟩ = bn |n + 1⟩, (2.28)

or

(1 + n) ⟨n|n⟩ = ⟨n| aa† |n⟩ = ⟨n| (1 + a†a) |n⟩ = |bn|² ⟨n + 1|n + 1⟩ = |bn|²
1 + n = |bn|²    (2.29)

so that

a† |n⟩ = √(n + 1) |n + 1⟩. (2.30)

We can now return to eq. (2.23), and find

⟨n| X⁴ |n⟩ = (ℏ²/4m²ω²) ⟨n| (a + a†)⁴ |n⟩ (2.31)

Consider half of this braket

(a + a†)² |n⟩ = (a² + (a†)² + a†a + aa†) |n⟩
             = (a² + (a†)² + a†a + (1 + a†a)) |n⟩
             = (a² + (a†)² + 1 + 2a†a) |n⟩    (2.32)
             = √n √(n − 1) |n − 2⟩ + √(n + 1) √(n + 2) |n + 2⟩ + |n⟩ + 2n |n⟩

Squaring, utilizing the Hermitian nature of the X operator

⟨n| X⁴ |n⟩ = (ℏ²/4m²ω²) (n(n − 1) + (n + 1)(n + 2) + (1 + 2n)²) = (ℏ²/4m²ω²) (6n² + 6n + 3) (2.33)

Part 2. Quadratic ground state Find the ground state energy of the Hamiltonian H = H0 + γX² for γ > 0.
The new Hamiltonian has the form

H = P²/2m + (1/2) m (ω² + 2γ/m) X² = P²/2m + (1/2) m ω′² X², (2.34)

where

ω′ = √(ω² + 2γ/m) (2.35)

The energy states of the Hamiltonian are thus

En = ℏ √(ω² + 2γ/m) (n + 1/2) (2.36)

and the ground state of the modified Hamiltonian H is thus

E0 = (ℏ/2) √(ω² + 2γ/m) (2.37)

Part 3. Linear ground state Find the ground state energy of the Hamiltonian H = H0 − αX.
With a bit of play, this new Hamiltonian can be factored into

H = ℏω (b†b + 1/2) − α²/(2mω²) = ℏω (bb† − 1/2) − α²/(2mω²), (2.38)

where

b = √(mω/2ℏ) X + iP/√(2mℏω) − α/(ω √(2mℏω))
b† = √(mω/2ℏ) X − iP/√(2mℏω) − α/(ω √(2mℏω)).    (2.39)

From eq. (2.38) we see that we have the same sort of commutator relationship as in the original Hamiltonian

[b, b†] = 1, (2.40)

and because of this, all the preceding arguments follow unchanged, with the exception that the energy eigenstates of this Hamiltonian are shifted by a constant

H |n⟩ = (ℏω (n + 1/2) − α²/(2mω²)) |n⟩, (2.41)

where the |n⟩ states are simultaneous eigenstates of the b†b operator

b†b |n⟩ = n |n⟩. (2.42)

The ground state energy is then

E0 = ℏω/2 − α²/(2mω²). (2.43)

This makes sense. A translation of the entire position of the system should not affect the energy level distribution of the system, but we have set our reference potential differently, and have this constant energy adjustment to the entire system.
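Both of these closed forms can be sanity-checked by diagonalizing the perturbed Hamiltonians in a truncated number basis (a numpy sketch with arbitrary, hypothetical parameter values; ℏ = m = ω = 1):

```python
import numpy as np

hbar = m = omega = 1.0
N = 80
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)                  # a|n> = sqrt(n)|n-1>
X = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)
H0 = hbar * omega * np.diag(n + 0.5)

gamma, alpha = 0.2, 0.3   # hypothetical perturbation strengths

# Part 2: H0 + gamma X^2 has ground energy (hbar/2) sqrt(omega^2 + 2 gamma/m).
E0_quad = np.linalg.eigvalsh(H0 + gamma * X @ X)[0]
assert abs(E0_quad - 0.5 * hbar * np.sqrt(omega**2 + 2 * gamma / m)) < 1e-6

# Part 3: H0 - alpha X has ground energy hbar omega/2 - alpha^2/(2 m omega^2).
E0_lin = np.linalg.eigvalsh(H0 - alpha * X)[0]
assert abs(E0_lin - (0.5 * hbar * omega - alpha**2 / (2 * m * omega**2))) < 1e-8
```

The truncation error decays geometrically with basis size, so N = 80 is far more than enough for these perturbation strengths.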

Exercise 2.2 Expectation values for position operators for spinless hydrogen (2011 ps1/p2)
Show that for all energy eigenstates |Φnlm i of the (spinless) hydrogen atom, where as usual
n, l, and m are respectively the principal, azimuthal, and magnetic quantum numbers, we have

hΦnlm | X |Φnlm i = hΦnlm | Y |Φnlm i = hΦnlm | Z |Φnlm i = 0 (2.44)

[Hint: Take note of the parity of the spherical harmonics (see "quick summary" notes on the
spherical harmonics).]

Answer for Exercise 2.2

The summary sheet provides us with the wavefunction

⟨r|Φnlm⟩ = (2/(n² a0^{3/2})) √((n − l − 1)!/((n + l)!)³) Fnl(2r/na0) Yl^m(θ, φ), (2.45)

where Fnl is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

⟨Φnlm| X |Φnlm⟩ = ∫ ⟨Φnlm|r′⟩ ⟨r′| X |r⟩ ⟨r|Φnlm⟩ d³r d³r′
               = ∫ ⟨Φnlm|r′⟩ δ³(r − r′) r sin θ cos φ ⟨r|Φnlm⟩ d³r d³r′
               = ∫ Φ*nlm(r) r sin θ cos φ Φnlm(r) d³r    (2.46)
               ∼ ∫ r² dr Fnl²(2r/na0) r ∫ sin θ dθ dφ Yl^{m*}(θ, φ) sin θ cos φ Yl^m(θ, φ)

Recalling that the only φ dependence in Yl^m is e^{imφ}, we can perform the dφ integration directly, which is

∫₀^{2π} cos φ dφ e^{−imφ} e^{imφ} = 0. (2.47)

We have the same story for the Y expectation, which is

⟨Φnlm| Y |Φnlm⟩ ∼ ∫ r² dr Fnl²(2r/na0) r ∫ sin θ dθ dφ Yl^{m*}(θ, φ) sin θ sin φ Yl^m(θ, φ). (2.48)

Our φ integral is then just

∫₀^{2π} sin φ dφ e^{−imφ} e^{imφ} = 0, (2.49)

also zero. The Z expectation is a slightly different story. There we have

⟨Φnlm| Z |Φnlm⟩ ∼ ∫ dr Fnl²(2r/na0) r³
               ∫₀^{2π} dφ ∫₀^π sin θ dθ (sin θ)^{−2m} (d^{l−m}/d(cos θ)^{l−m} (sin θ)^{2l})² cos θ.    (2.50)

Within this last integral we can make the substitution

u = cos θ
sin θ dθ = −d(cos θ) = −du    (2.51)
u ∈ [1, −1],

and the integral takes the form

−∫₁^{−1} (−du) (1/(1 − u²)^m) (d^{l−m}/du^{l−m} (1 − u²)^l)² u. (2.52)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.
I was not able to see how to exploit the parity result suggested in the problem, but it was not so bad to show these directly.

Exercise 2.3 Angular momentum operator (2011 ps1/p3)
Working with the appropriate expressions in Cartesian components, confirm that Li |ψi = 0
for each component of angular momentum Li , if hr|ψi = ψ(r) is in fact only a function of r = |r|.
Answer for Exercise 2.3
In order to proceed, we will have to consider a matrix element, so that we can operate on |ψ⟩ in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

⟨r| Li |ψ⟩ = ∫ d³r′ ⟨r| Li |r′⟩ ⟨r′|ψ⟩
           = ∫ d³r′ ε_{iab} ⟨r| Xa Pb |r′⟩ ⟨r′|ψ⟩
           = −iℏ ε_{iab} ∫ d³r′ xa ⟨r|r′⟩ ∂ψ(r′)/∂x′b    (2.53)
           = −iℏ ε_{iab} ∫ d³r′ xa δ³(r − r′) ∂ψ(r′)/∂x′b
           = −iℏ ε_{iab} xa ∂ψ(r)/∂xb

With ψ(r) = ψ(r) we have

⟨r| Li |ψ⟩ = −iℏ ε_{iab} xa ∂ψ(r)/∂xb
           = −iℏ ε_{iab} xa (∂r/∂xb) dψ(r)/dr    (2.54)
           = −iℏ ε_{iab} xa (1/2)(2xb/r) dψ(r)/dr

We are left with a sum of the symmetric product xa xb with the antisymmetric tensor ε_{iab}, so this is zero for all i ∈ [1, 3].
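The same statement can be checked symbolically in Cartesian components (a sympy sketch, not part of the original solution; ℏ and the −i factor are dropped since we only test for zero):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.Function('psi')(r)      # any wavefunction depending on r = |r| alone

# L_i ~ x_a d/dx_b - x_b d/dx_a for the cyclic pairs; each must annihilate psi(r).
Lx = y * sp.diff(psi, z) - z * sp.diff(psi, y)
Ly = z * sp.diff(psi, x) - x * sp.diff(psi, z)
Lz = x * sp.diff(psi, y) - y * sp.diff(psi, x)

assert sp.simplify(Lx) == 0
assert sp.simplify(Ly) == 0
assert sp.simplify(Lz) == 0
```

The chain rule produces xa xb (1/r) ψ′(r) terms that cancel pairwise, exactly as in the symmetric-antisymmetric argument above.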

2.3.1 Helium atom ground state energy estimation

2.3.1.1 Helium atom, variational method, first steps


To verify (24.69) from [4], a six fold integral is required

⟨−(ℏ²/2m)(∇1² + ∇2²)⟩
  = −(ℏ²/2m) (Z⁶/π²a0⁶) ∫ dr1 dΩ1 r1² dr2 dΩ2 r2² e^{−(r1+r2)Z/a0} ((2/r1) ∂/∂r1 + ∂²/∂r1² + (2/r2) ∂/∂r2 + ∂²/∂r2²) e^{−(r1+r2)Z/a0}    (2.55)
  = −(ℏ²/2m) (Z⁶/π²a0⁶) (4π)² ∫ dr1 dr2 r1² r2² e^{−(r1+r2)Z/a0} ((2/r1) ∂/∂r1 + ∂²/∂r1² + (2/r2) ∂/∂r2 + ∂²/∂r2²) e^{−(r1+r2)Z/a0}

Making a change of variables

x = Zr1/a0
y = Zr2/a0    (2.56)
we have

⟨−(ℏ²/2m)(∇1² + ∇2²)⟩
  = −(8ℏ²Z²/(m a0²)) ∫ dx dy x²y² e^{−x−y} ((2/x) ∂/∂x + ∂²/∂x² + (2/y) ∂/∂y + ∂²/∂y²) e^{−x−y}
  = −(8ℏ²Z²/(m a0²)) ∫ dx dy x²y² e^{−2x−2y} (−2/x + 1 − 2/y + 1)    (2.57)
  = (16ℏ²Z²/(m a0²)) ∫ dx dy x²y² e^{−2x−2y} (1/x + 1/y − 1)
  = ℏ²Z²/(m a0²)

With

a0 = ℏ²/(me²), (2.58)

we have the result from the text

⟨−(ℏ²/2m)(∇1² + ∇2²)⟩ = Z²e²/a0 (2.59)

Verification of (24.70) follows in a similar fashion. We have

⟨2e² (1/r1 + 1/r2)⟩ = 2e² (Z⁶/π²a0⁶) (4π)² ∫ e^{−2(r1+r2)Z/a0} r1²r2² dr1 dr2 (1/r1 + 1/r2)
                   = 32e² (Z/a0) ∫ e^{−2x−2y} x²y² dx dy (1/x + 1/y)    (2.60)
                   = 4e² Z/a0

2.3.1.2 Problem with subsequent derivation


In §24.2.1 of the text [4] is an expectation value calculation associated with the Helium atom.
One of the equations (24.76) seems wrong (according to hand calculation as shown below and
according to Mathematica). Is there another compensating error somewhere? Here I work the
entire calculation in detail to attempt to find this.

We start with

⟨e²/|r1 − r2|⟩ = (Z³/πa0³)² e² ∫ d³k d³r1 d³r2 (1/2π²)(1/k²) e^{ik·(r1−r2)} e^{−2Z(r1+r2)/a0}
             = (Z³/πa0³)² e² (1/2π²) ∫ d³k (1/k²) ∫ d³r1 e^{ik·r1} e^{−2Zr1/a0} ∫ d³r2 e^{−ik·r2} e^{−2Zr2/a0}    (2.61)

To evaluate the two last integrals, I figure the author has aligned the axis for the d³r1 volume elements to make the integrals easier. Specifically, for the first, choose the axis so that k · r1 = kr1 cos θ, so the integral takes the form

∫ d³r1 e^{ik·r1} e^{−2Zr1/a0} = −∫ r1² dr1 dφ d(cos θ) e^{ikr1 cos θ} e^{−2Zr1/a0}
  = −2π ∫_{r=0}^∞ r² dr ∫_{u=1}^{−1} du e^{ikru} e^{−2Zr/a0}
  = −2π ∫_{r=0}^∞ r² dr (1/ikr)(e^{−ikr} − e^{ikr}) e^{−2Zr/a0}    (2.62)
  = (4π/k) ∫_{r=0}^∞ r dr (1/2i)(e^{ikr} − e^{−ikr}) e^{−2Zr/a0}
  = (4π/k) ∫_{r=0}^∞ r dr sin(kr) e^{−2Zr/a0}

For this last, Mathematica gives me (24.75) from the text

∫ d³r1 e^{ik·r1} e^{−2Zr1/a0} = 16πZa0³ / (k²a0² + 4Z²)² (2.63)
For the second integral, if we align the axis so that −k · r2 = kr cos θ and repeat, then we have

⟨e²/|r1 − r2|⟩ = (Z³/πa0³)² e² (1/2π²) (16πZa0³)² ∫ d³k (1/k²) 1/(k²a0² + 4Z²)⁴
             = (128Z⁸/π²) e² ∫ dk dΩ 1/(k²a0² + 4Z²)⁴    (2.64)
             = (512Z⁸/π) e² ∫ dk 1/(k²a0² + 4Z²)⁴
26 perturbation methods

With ka0 = 2Zκ this is

⟨e²/|r1 − r2|⟩ = (512Z⁸/π) e² (2Z/a0) (1/(2Z)⁸) ∫ dκ 1/(κ² + 1)⁴
             = (4Ze²/πa0) ∫ dκ 1/(κ² + 1)⁴    (2.65)
Here I note that the antiderivative I first reached for,

d/dκ (−1/(3(1 + κ)³)) = 1/(1 + κ)⁴, (2.66)

is for the integrand 1/(1 + κ)⁴, not for the actual integrand 1/(κ² + 1)⁴; the value 1/3 that it produces is therefore not applicable here. The correct definite integral is

∫₀^∞ dκ 1/(κ² + 1)⁴ = 5π/32 (2.67)

(the text's value, which I had misread, and then "confirmed" with the wrong antiderivative). This gives us

⟨e²/|r1 − r2|⟩ = (4Ze²/πa0) ∫₀^∞ dκ 1/(κ² + 1)⁴
             = (4Ze²/πa0)(5π/32)    (2.68)
             = (5/8)(Ze²/a0).

So there is no error in the text after all: the 5/8 factor follows directly, consistent with the end result of the calculation and with the results obtained other ways (as in problem set III). The error was in my evaluation of the κ integral, not in the text.
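Since everything here hinges on the value of this definite integral, a direct numeric evaluation (scipy assumed available) is a useful referee:

```python
import math
from scipy.integrate import quad

val, _ = quad(lambda k: 1.0 / (1.0 + k * k)**4, 0.0, math.inf)

# The quadrature agrees with 5 pi/32, which turns the prefactor
# (4 Z e^2/(pi a0)) into the text's (5/8) Z e^2/a0.
assert abs(val - 5.0 * math.pi / 32.0) < 1e-10
print(val, 5.0 * math.pi / 32.0)
```

Note that the antiderivative −1/(3(1 + κ)³) is for 1/(1 + κ)⁴, not 1/(1 + κ²)⁴, which is why a hand evaluation using it gives 1/3 instead.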

2.3.2 Curious problem using the variational method to find the ground state energy of the
Harmonic oscillator

2.3.2.1 Recap. Variational method to find the ground state energy


Problem 3 of §24.4 in the text [4] is an interesting one. It asks us to use the variational method to find the ground state energy of a one dimensional harmonic oscillator Hamiltonian.
Somewhat unexpectedly, once I take derivatives and equate to zero, I find that the variational parameter β becomes imaginary.
I tried this twice with paper and pencil, both times getting the same thing. This seems like a noteworthy problem, and one worth reflecting on a bit.

2.3.2.2 Recap. The variational method


Given any, not necessarily normalized, wavefunction with a series representation specified using the energy eigenvectors for the space

|ψ⟩ = Σ_m cm |ψm⟩, (2.69)

where

H |ψm⟩ = Em |ψm⟩, (2.70)

and

⟨ψm|ψn⟩ = δmn. (2.71)

We can perform an energy expectation calculation with respect to this more general state

⟨ψ| H |ψ⟩ = Σ_m c*m ⟨ψm| H Σ_n cn |ψn⟩
          = Σ_m c*m ⟨ψm| Σ_n cn En |ψn⟩
          = Σ_{m,n} c*m cn En ⟨ψm|ψn⟩
          = Σ_m |cm|² Em    (2.72)
          ≥ Σ_m |cm|² E0
          = E0 ⟨ψ|ψ⟩

This allows us to form an estimate of the ground state energy for the system, by using any state vector formed from a superposition of energy eigenstates, by simply calculating

E0 ≤ ⟨ψ| H |ψ⟩ / ⟨ψ|ψ⟩. (2.73)
One of the examples in the text is to use this to find an approximation of the ground state energy for the Helium atom Hamiltonian

H = −(ℏ²/2m)(∇1² + ∇2²) − 2e²(1/r1 + 1/r2) + e²/|r1 − r2|. (2.74)

This calculation is performed using a trial function that was a solution of the interaction free
Hamiltonian

Z 3 −Z(r1 +r2 )/a0


φ= e . (2.75)
πa30

This is despite the fact that this is not a solution to the interaction Hamiltonian. The end result
ends up being pretty close to the measured value (although there is a pesky error in the book
that appears to require a compensating error somewhere else).
Part of the variational technique used in that problem is to allow Z to vary, and then, once the normalized expectation is computed, set its derivative equal to zero to find the trial wavefunction, parameterized by Z, that has the lowest energy expectation for a function of that form. Considering the Harmonic oscillator below, we find that this final variation does not necessarily produce meaningful results.
2.3.2.3 The Harmonic oscillator variational problem


The problem asks for the use of the trial wavefunction

φ = e^{−β|x|}, (2.76)

to perform the variational calculation above for the Harmonic oscillator Hamiltonian, which has the one dimensional position space representation

H = −(ℏ²/2m) d²/dx² + (1/2) mω² x². (2.77)
We can find the normalization easily

⟨φ|φ⟩ = ∫_{−∞}^∞ e^{−2β|x|} dx
      = 2 (1/2β) ∫₀^∞ e^{−2βx} 2β dx    (2.78)
      = 2 (1/2β) ∫₀^∞ e^{−u} du
      = 1/β
Using integration by parts, we find for the energy expectation

⟨φ| H |φ⟩ = ∫_{−∞}^∞ dx e^{−β|x|} (−(ℏ²/2m) d²/dx² + (1/2) mω² x²) e^{−β|x|}
  = lim_{ε→0} (∫_{−∞}^{−ε} + ∫_ε^∞) dx e^{−β|x|} (−(ℏ²/2m) d²/dx² + (1/2) mω² x²) e^{−β|x|}    (2.79)
  = 2 ∫₀^∞ dx e^{−2βx} (−(ℏ²β²/2m) + (1/2) mω² x²) − (ℏ²/2m) lim_{ε→0} ∫_{−ε}^ε dx e^{−β|x|} (d²/dx²) e^{−β|x|}

The first integral we can do

2 ∫₀^∞ dx e^{−2βx} (−(ℏ²β²/2m) + (1/2) mω² x²) = −(ℏ²β²/m) ∫₀^∞ dx e^{−2βx} + mω² ∫₀^∞ dx x² e^{−2βx}
  = −(ℏ²β/2m) ∫₀^∞ du e^{−u} + (mω²/8β³) ∫₀^∞ du u² e^{−u}    (2.80)
  = −βℏ²/2m + mω²/4β³
A naive evaluation of this integral requires the origin to be avoided, where the derivative of |x| becomes undefined. This also provides a nice way to evaluate the integral, because we can double it and halve the range, eliminating the absolute value.
However, can we assume that the remaining integral is zero?
I thought that we could, but the end result is curious. I also verified my calculation symbolically in 24.4.3_attempt_with_mathematica.nb, but found that Mathematica required some special hand holding to deal with the origin. Initially I coded this by avoiding the origin as above, but later switched to |x| = \sqrt{x^2}, which Mathematica treats more gracefully.
Without that last integral, involving our singular |x|' and |x|'' terms, our ground state energy estimation, parameterized by \beta, is

E[\beta] = -\frac{\beta^2 \hbar^2}{2m} + \frac{m\omega^2}{4\beta^2}. (2.81)
Observe that if we set the derivative of this equal to zero to find the “best” beta associated
with this trial function

0 = \frac{\partial E}{\partial \beta} = -\frac{\beta \hbar^2}{m} - \frac{m\omega^2}{2\beta^3} (2.82)
∂β 2m 2β3
30 perturbation methods

we find that the parameter beta that best minimizes this ground state energy function is com-
plex with value

\beta^2 = \pm \frac{i m \omega}{\sqrt{2}\, \hbar}. (2.83)
2 h̄
It appears at first glance that we can not minimize eq. (2.81) to find a best ground state energy estimate associated with the trial function eq. (2.76). We do, however, know the exact ground state energy \hbar\omega/2 for the Harmonic oscillator. Is it possible to show that for all \beta^2 we have

\frac{\hbar \omega}{2} \le -\frac{\beta^2 \hbar^2}{2m} + \frac{m\omega^2}{4\beta^2}? (2.84)

This inequality would be expected if we can assume that the trial wavefunction has a Fourier series representation utilizing the actual energy eigenfunctions for the system.
The resolution to this question is avoided once we include the singularity. This is explored in
the last part of these notes.

2.3.2.4 Is our trial function representable?


I thought perhaps that the trial wave function for this problem lies outside the span of the Hilbert space that describes the solutions to the Harmonic oscillator. Another thing of possible interest is the trouble near the origin for this wave function when operated on by P^2/2m, which has been (incorrectly) assumed to have zero contribution above.
I had initially thought that part of the value of this variational method was that we can use
it despite not even knowing what the exact solution is (and in the case of the Helium atom, I
believe it was stated in class that an exact closed form solution is not even known). This makes
me wonder what restrictions must be imposed on the trial solutions to get a meaningful answer
from the variational calculation?
Suppose that the trial wavefunction is not representable in the solution space. If that is the
case, we need to adjust the treatment to account for that. Suppose we have

|\phi\rangle = \sum_n c_n |\psi_n\rangle + c_\perp |\psi_\perp\rangle. (2.85)

where |ψ⊥ i is unknown, and presumed not orthogonal to any of the energy eigenkets. We can
still calculate the norm of the trial function
\langle \phi | \phi \rangle
= \langle \textstyle\sum_n c_n \psi_n + c_\perp \psi_\perp | \textstyle\sum_m c_m \psi_m + c_\perp \psi_\perp \rangle
= \sum_n \left( |c_n|^2 + c_n^* c_\perp \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* \langle \psi_\perp | \psi_n \rangle \right) + |c_\perp|^2 \langle \psi_\perp | \psi_\perp \rangle
= |c_\perp|^2 \langle \psi_\perp | \psi_\perp \rangle + \sum_n \left( |c_n|^2 + 2 \Re \left( c_n^* c_\perp \langle \psi_n | \psi_\perp \rangle \right) \right). (2.86)
n
Similarly we can calculate the energy expectation for this unnormalized state and find

\langle \phi | H | \phi \rangle
= \langle \textstyle\sum_n c_n \psi_n + c_\perp \psi_\perp | H | \textstyle\sum_m c_m \psi_m + c_\perp \psi_\perp \rangle
= \sum_n \left( |c_n|^2 E_n + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle \right) + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle (2.87)
Our normalized energy expectation is therefore the considerably messier

\frac{\langle \phi | H | \phi \rangle}{\langle \phi | \phi \rangle}
= \frac{\sum_n |c_n|^2 E_n + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle}{|c_\perp|^2 \langle \psi_\perp | \psi_\perp \rangle + \sum_m |c_m|^2 + 2 \Re \left( c_m^* c_\perp \langle \psi_m | \psi_\perp \rangle \right)}
\ge \frac{\sum_n |c_n|^2 E_0 + c_n^* c_\perp E_n \langle \psi_n | \psi_\perp \rangle + c_n c_\perp^* E_n \langle \psi_\perp | \psi_n \rangle + |c_\perp|^2 \langle \psi_\perp | H | \psi_\perp \rangle}{|c_\perp|^2 \langle \psi_\perp | \psi_\perp \rangle + \sum_m |c_m|^2 + 2 \Re \left( c_m^* c_\perp \langle \psi_m | \psi_\perp \rangle \right)} (2.88)

With the requirement to include the perpendicular cross terms, the norm no longer just cancels out to leave us with a clean estimation of the ground state energy. In order to utilize this variational method, we implicitly assume that the \langle \psi_\perp | \psi_\perp \rangle and \langle \psi_m | \psi_\perp \rangle terms in the denominator are sufficiently small that they can be neglected.

2.3.2.5 Calculating the Fourier terms


In order to see how much of a problem representing this trial function in the Harmonic oscillator wavefunction solution space is, we can just calculate the Fourier fit.

Our first few basis functions, with α = mω/ h̄ are

u_0 = \sqrt{\frac{\alpha}{\sqrt{\pi}}}\, e^{-\alpha^2 x^2/2}
u_1 = \sqrt{\frac{\alpha}{2\sqrt{\pi}}}\, (2\alpha x)\, e^{-\alpha^2 x^2/2}
u_2 = \sqrt{\frac{\alpha}{8\sqrt{\pi}}}\, (4\alpha^2 x^2 - 2)\, e^{-\alpha^2 x^2/2} (2.89)

In general our wavefunctions are

u_n = N_n H_n(\alpha x) e^{-\alpha^2 x^2/2}
N_n = \sqrt{\frac{\alpha}{\sqrt{\pi}\, 2^n n!}}
H_n(\eta) = (-1)^n e^{\eta^2} \frac{d^n}{d\eta^n} e^{-\eta^2} (2.90)


From which we find

\psi(x) = \sum_n (N_n)^2 H_n(\alpha x) e^{-\alpha^2 x^2/2} \int_{-\infty}^\infty H_n(\alpha x') e^{-\alpha^2 x'^2/2} \psi(x') dx' (2.91)

Our wave function, with β = 1 is plotted in fig. 2.3

Figure 2.3: Exponential trial function with absolute exponential die off

The zeroth order fitting using the Gaussian exponential is found to be

\psi_0(x) = \sqrt{2\beta}\, \mathrm{erfc}\!\left( \frac{\beta}{\sqrt{2}\,\alpha} \right) e^{-\alpha^2 x^2/2 + \beta^2/(2\alpha^2)} (2.92)
With α = β = 1, this is plotted in fig. 2.4 and can be seen to match fairly well
The higher order terms get small fast, but we can see in fig. 2.5, where a tenth order fitting
is depicted that it would take a number of them to get anything close to the sharp peak that we
have in our exponential trial function.
Note that all the brakets of odd order in n with the trial function are zero, which is why the tenth order approximation is only a sum of six terms.
Details for this harmonic oscillator wavefunction fitting can be found separately in gaussian_fitting_for_abs_function.nb, calculated using a Mathematica worksheet.
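The zeroth order coefficient behind eq. (2.92) can be cross checked without Mathematica. This sketch is my own, not part of the original worksheet: the closed form below is my reading of the fitting coefficient c_0 = \langle u_0 | \psi \rangle for the normalized trial function \sqrt{\beta} e^{-\beta|x|}, compared against a direct numeric overlap integral:

```python
import math

def c0_numeric(alpha, beta, L=20.0, n=200001):
    """Trapezoid rule overlap of the oscillator ground state u0 with the
    normalized trial function sqrt(beta) exp(-beta |x|)."""
    h = 2 * L / (n - 1)
    total = 0.0
    for i in range(n):
        x = -L + i * h
        u0 = math.sqrt(alpha / math.sqrt(math.pi)) * math.exp(-(alpha * x) ** 2 / 2)
        psi = math.sqrt(beta) * math.exp(-beta * abs(x))
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * u0 * psi
    return total * h

def c0_closed(alpha, beta):
    # assumed closed form: sqrt(2 beta/alpha) pi^(1/4) e^(beta^2/(2 alpha^2)) erfc(beta/(sqrt(2) alpha))
    return (math.sqrt(2 * beta / alpha) * math.pi ** 0.25
            * math.exp(beta ** 2 / (2 * alpha ** 2))
            * math.erfc(beta / (math.sqrt(2) * alpha)))

print(c0_numeric(1.0, 1.0), c0_closed(1.0, 1.0))  # both near 0.985
```

With \alpha = \beta = 1 the coefficient is close to one, consistent with the good zeroth order match away from the origin.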

Figure 2.4: First ten orders, fitting harmonic oscillator wavefunctions to this trial function

Figure 2.5: Tenth order harmonic oscillator wavefunction fitting

The question of interest is why, even though we can approximate the trial function so nicely (except at the origin) with just a first order approximation (polynomials times Gaussians, where the polynomials are Hermite polynomials), and can get an exact value for the lowest energy state using the first order approximation of our trial function, we get garbage from the variational method, where enough terms are implicitly included that the peak should be sharp.
It must therefore be important to consider the origin, but how do we give some meaning to the derivative of the absolute value function? The key (supplied when asking Professor Sipe in office hours for the course) is to express the absolute value function in terms of Heaviside step functions, for which the derivative can be identified as the delta function.

2.3.2.6 Correcting, treating the origin this way


Here is how we can express the absolute value function using the Heaviside step

|x| = x\theta(x) - x\theta(-x), (2.93)

where the step function is zero for x < 0 and one for x > 0 as plotted in fig. 2.6.

Figure 2.6: Step function

Expressed this way, with the identification \theta'(x) = \delta(x), we have for the derivative of the absolute value function

|x|' = x'\theta(x) - x'\theta(-x) + x\theta'(x) - x\theta'(-x)
= \theta(x) - \theta(-x) + x\delta(x) + x\delta(-x)
= \theta(x) - \theta(-x) + 2x\delta(x) (2.94)

Observe that we have our expected unit derivative for x > 0, and −1 derivative for x < 0. At the origin our \theta contributions vanish, and we are left with

|x|' \big|_{x=0} = 2\, x\delta(x) \big|_{x=0} (2.95)

We have zero times infinity here, so how do we give this meaning? As with any delta functional, we have to apply it to a well behaved (square integrable) test function f(x) and integrate. Doing so for the singular part we have

\int_{-\infty}^\infty dx\, 2x\delta(x) f(x) = 2(0) f(0) = 0. (2.96)

This equals zero for any well behaved test function f(x). Since the delta function only picks up the contribution at the origin, we can therefore identify |x|' as zero at the origin.
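A trivial numeric aside (not in the original notes): the symmetric finite difference, which mimics the symmetric treatment of the origin above, also assigns slope zero to |x| at the kink:

```python
def central_difference(f, x, h=1e-6):
    """Symmetric first derivative estimate (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_difference(abs, 0.0))   # 0.0 at the kink
print(central_difference(abs, 2.0))   # close to +1 for x > 0
print(central_difference(abs, -2.0))  # close to -1 for x < 0
```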
Using the same technique, we can express our trial function in terms of steps

\psi = e^{-\beta|x|} = \theta(x) e^{-\beta x} + \theta(-x) e^{\beta x}. (2.97)


2.3 problems 35

This we can now take derivatives of, even at the origin, and find

\psi' = \theta'(x) e^{-\beta x} - \theta'(-x) e^{\beta x} - \beta\theta(x) e^{-\beta x} + \beta\theta(-x) e^{\beta x}
= \delta(x) e^{-\beta x} - \delta(x) e^{\beta x} - \beta\theta(x) e^{-\beta x} + \beta\theta(-x) e^{\beta x}
= \beta \left( -\theta(x) e^{-\beta x} + \theta(-x) e^{\beta x} \right), (2.98)

since the \delta(x) e^{\mp\beta x} = \delta(x) terms cancel at the origin.

Taking second derivatives we find

\psi'' = \beta \left( -\theta'(x) e^{-\beta x} - \theta'(-x) e^{\beta x} + \beta\theta(x) e^{-\beta x} + \beta\theta(-x) e^{\beta x} \right)
= \beta \left( -\delta(x) e^{-\beta x} - \delta(-x) e^{\beta x} + \beta\theta(x) e^{-\beta x} + \beta\theta(-x) e^{\beta x} \right)
= \beta^2 \psi - 2\beta\delta(x) (2.99)

Now application of the Hamiltonian operator on our trial function gives us

H\psi = -\frac{\hbar^2}{2m} \left( \beta^2 \psi - 2\beta\delta(x) \right) + \frac{1}{2} m\omega^2 x^2 \psi, (2.100)
so

\langle \psi | H | \psi \rangle
= \int_{-\infty}^\infty dx\, e^{-2\beta|x|} \left( -\frac{\hbar^2\beta^2}{2m} + \frac{1}{2} m\omega^2 x^2 \right) + \frac{\hbar^2\beta}{m} \int_{-\infty}^\infty dx\, \delta(x) e^{-2\beta|x|}
= -\frac{\beta\hbar^2}{2m} + \frac{m\omega^2}{4\beta^3} + \frac{\hbar^2\beta}{m}
= \frac{\beta\hbar^2}{2m} + \frac{m\omega^2}{4\beta^3}. (2.101)
Normalized we have

E[\beta] = \frac{\langle\psi| H |\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\beta^2\hbar^2}{2m} + \frac{m\omega^2}{4\beta^2}. (2.102)
This is looking much more promising. We will have the sign alternation that we require to find a positive, non-complex value for \beta when E[\beta] is minimized. That is

0 = \frac{\partial E}{\partial \beta} = \frac{\beta\hbar^2}{m} - \frac{m\omega^2}{2\beta^3}, (2.103)

so the extremum is found at

\beta^4 = \frac{m^2 \omega^2}{2 \hbar^2}. (2.104)
Plugging this back in, we find that our trial function associated with the minimum energy (still unnormalized) is

\psi = e^{-\sqrt{\frac{m\omega x^2}{\sqrt{2}\,\hbar}}}, (2.105)

and that energy, after substitution, is

E[\beta_{\min}] = \sqrt{2}\, \frac{\hbar\omega}{2} (2.106)
We have something that is 1.4× the true ground state energy, but is at least a ball park value.
However, to get this result, we have to be very careful to treat our point of singularity. A deriva-
tive that we would call undefined in first year calculus, is not only defined, but required, for this
treatment to work!
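In units where \hbar = m = \omega = 1 (my choice, for a quick check that is not part of the original notes), the corrected functional E[\beta] = \beta^2/2 + 1/(4\beta^2) of eq. (2.102) can be minimized by brute force, confirming \beta_{\min} = 2^{-1/4} and E_{\min} = \sqrt{2}/2 \approx 0.707, i.e. \sqrt{2} times \hbar\omega/2:

```python
import math

def energy(beta, hbar=1.0, m=1.0, omega=1.0):
    """Corrected variational energy functional, eq. (2.102)."""
    return beta ** 2 * hbar ** 2 / (2 * m) + m * omega ** 2 / (4 * beta ** 2)

# crude grid scan for the minimizing beta
betas = [0.01 * i for i in range(1, 500)]
beta_min = min(betas, key=energy)

print(beta_min, 2 ** -0.25)                # both near 0.841
print(energy(beta_min), math.sqrt(2) / 2)  # both near 0.7071
```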
3 TIME INDEPENDENT PERTURBATION THEORY
See §16.1 of the text [4].
We can sometimes use this sort of physical insight to help construct a good approximation.
This is provided that we have some of this physical insight, or that it is good insight in the first
place.
This is the no-think (turn the crank) approach.
Here we split our Hamiltonian into two parts

H = H_0 + H' (3.1)

where H_0 is a Hamiltonian for which we know the energy eigenvalues and eigenkets. The H' is the "perturbation" that is supposed to be small "in some sense".
Prof Sipe will provide some references later that provide a more specific meaning to this
“smallness”. From some ad-hoc discussion in the class it sounds like one has to consider se-
quences of operators, and look at the convergence of those sequences (is this L2 measure the-
ory?)
We would like to consider a range of problems of the form

H = H_0 + \lambda H' (3.2)

where

λ ∈ [0, 1] (3.3)

So that when λ → 0 we have

H → H0 (3.4)

the problem that we already know, but for λ → 1 we have

H = H_0 + H' (3.5)


Figure 3.1: Example of small perturbation from known Hamiltonian

the problem that we would like to solve.


We are assuming that we know the eigenstates and eigenvalues for H_0. Assuming no degeneracy,

H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle (3.6)
We seek

(H_0 + H') |\psi_s\rangle = E_s |\psi_s\rangle (3.7)

(this is the \lambda = 1 case).
Once (if) found, when \lambda \to 0 we will have

E_s \to E_s^{(0)}
|\psi_s\rangle \to |\psi_s^{(0)}\rangle (3.8)

E_s = E_s^{(0)} + \lambda E_s^{(1)} + \lambda^2 E_s^{(2)} + \cdots (3.9)

|\psi_s\rangle = \sum_n c_{ns} |\psi_n^{(0)}\rangle (3.10)

This we know we can do because we are assumed to have a complete set of states, with

c_{ns} = c_{ns}^{(0)} + \lambda c_{ns}^{(1)} + \lambda^2 c_{ns}^{(2)} + \cdots (3.11)

where

c_{ns}^{(0)} = \delta_{ns} (3.12)

There is a subtlety here that will be treated differently from the text. We write

|\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots
= \left( 1 + \lambda c_{ss}^{(1)} + \cdots \right) |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \cdots (3.13)

Take

|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \frac{\sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle}{1 + \lambda c_{ss}^{(1)}} + \cdots
= |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} \bar c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \cdots (3.14)

where

\bar c_{ns}^{(1)} = \frac{c_{ns}^{(1)}}{1 + \lambda c_{ss}^{(1)}} (3.15)

We have:

\bar c_{ns}^{(1)} = c_{ns}^{(1)}
\bar c_{ns}^{(2)} \ne c_{ns}^{(2)} (3.16)

FIXME: I missed something here.


Note that this is no longer normalized,

\langle \bar\psi_s | \bar\psi_s \rangle \ne 1 (3.17)

3.1 time independent perturbation

The setup To recap, we were covering the time independent perturbation methods from §16.1
of the text [4]. We start with a known Hamiltonian H0 , and alter it with the addition of a “small”
perturbation

H = H_0 + \lambda H', \quad \lambda \in [0, 1] (3.18)

For the original operator, we assume that a complete set of eigenvectors and eigenkets is known

H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle (3.19)

We seek the perturbed eigensolution

H |ψ s i = E s |ψ s i (3.20)

and assumed a perturbative series representation for the energy eigenvalues in the new system

E_s = E_s^{(0)} + \lambda E_s^{(1)} + \lambda^2 E_s^{(2)} + \cdots (3.21)

Given an assumed representation for the new eigenkets in terms of the known basis

|\psi_s\rangle = \sum_n c_{ns} |\psi_n^{(0)}\rangle (3.22)

and a perturbative series representation for the probability coefficients

c_{ns} = c_{ns}^{(0)} + \lambda c_{ns}^{(1)} + \lambda^2 c_{ns}^{(2)}, (3.23)

so that

|\psi_s\rangle = \sum_n c_{ns}^{(0)} |\psi_n^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots (3.24)

Setting λ = 0 requires

c_{ns}^{(0)} = \delta_{ns}, (3.25)



for

|\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_n c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_n c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots
= \left( 1 + \lambda c_{ss}^{(1)} + \lambda^2 c_{ss}^{(2)} + \cdots \right) |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_{n \ne s} c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots (3.26)

We rescaled our kets

|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{n \ne s} \bar c_{ns}^{(1)} |\psi_n^{(0)}\rangle + \lambda^2 \sum_{n \ne s} \bar c_{ns}^{(2)} |\psi_n^{(0)}\rangle + \cdots (3.27)

where

\bar c_{ns}^{(j)} = \frac{c_{ns}^{(j)}}{1 + \lambda c_{ss}^{(1)} + \lambda^2 c_{ss}^{(2)} + \cdots} (3.28)
The normalization of the rescaled kets is then

\langle \bar\psi_s | \bar\psi_s \rangle = 1 + \lambda^2 \sum_{n \ne s} \left| \bar c_{ns}^{(1)} \right|^2 + \cdots \equiv \frac{1}{Z_s}, (3.29)

One can then construct a renormalized ket if desired

|\psi_s\rangle_R = Z_s^{1/2} |\bar\psi_s\rangle, (3.30)

so that

\left( |\psi_s\rangle_R \right)^\dagger |\psi_s\rangle_R = Z_s \langle \bar\psi_s | \bar\psi_s \rangle = 1. (3.31)

The meat That is as far as we got last time. We continue by renaming terms in eq. (3.27)

|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda |\bar\psi_s^{(1)}\rangle + \lambda^2 |\bar\psi_s^{(2)}\rangle + \cdots (3.32)

where

|\bar\psi_s^{(j)}\rangle = \sum_{n \ne s} \bar c_{ns}^{(j)} |\psi_n^{(0)}\rangle. (3.33)

Now we act on this with the Hamiltonian

H |\bar\psi_s\rangle = E_s |\bar\psi_s\rangle, (3.34)

or

H |\bar\psi_s\rangle - E_s |\bar\psi_s\rangle = 0. (3.35)

Expanding this, we have

(H_0 + \lambda H') \left( |\psi_s^{(0)}\rangle + \lambda |\bar\psi_s^{(1)}\rangle + \lambda^2 |\bar\psi_s^{(2)}\rangle + \cdots \right)
- \left( E_s^{(0)} + \lambda E_s^{(1)} + \lambda^2 E_s^{(2)} + \cdots \right) \left( |\psi_s^{(0)}\rangle + \lambda |\bar\psi_s^{(1)}\rangle + \lambda^2 |\bar\psi_s^{(2)}\rangle + \cdots \right) = 0. (3.36)

We want to write this as

|A\rangle + \lambda |B\rangle + \lambda^2 |C\rangle + \cdots = 0. (3.37)

This is

0 = \lambda^0 (H_0 - E_s^{(0)}) |\psi_s^{(0)}\rangle
+ \lambda \left( (H_0 - E_s^{(0)}) |\bar\psi_s^{(1)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(0)}\rangle \right)
+ \lambda^2 \left( (H_0 - E_s^{(0)}) |\bar\psi_s^{(2)}\rangle + (H' - E_s^{(1)}) |\bar\psi_s^{(1)}\rangle - E_s^{(2)} |\psi_s^{(0)}\rangle \right)
+ \cdots (3.38)

So we form

|A\rangle = (H_0 - E_s^{(0)}) |\psi_s^{(0)}\rangle
|B\rangle = (H_0 - E_s^{(0)}) |\bar\psi_s^{(1)}\rangle + (H' - E_s^{(1)}) |\psi_s^{(0)}\rangle
|C\rangle = (H_0 - E_s^{(0)}) |\bar\psi_s^{(2)}\rangle + (H' - E_s^{(1)}) |\bar\psi_s^{(1)}\rangle - E_s^{(2)} |\psi_s^{(0)}\rangle, (3.39)

and so forth.

Zeroth order in λ Since H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle, this first condition on |A\rangle is not much more than a statement that 0 − 0 = 0.

First order in λ How about |B\rangle = 0? For this to be zero we require that both of the following are simultaneously zero

\langle \psi_s^{(0)} | B \rangle = 0
\langle \psi_m^{(0)} | B \rangle = 0, \quad m \ne s (3.40)

This first condition is

\langle \psi_s^{(0)} | (H' - E_s^{(1)}) | \psi_s^{(0)} \rangle = 0. (3.41)

With

\langle \psi_m^{(0)} | H' | \psi_s^{(0)} \rangle \equiv H'_{ms}, (3.42)

or

H'_{ss} = E_s^{(1)}. (3.43)

From the second condition we have

0 = \langle \psi_m^{(0)} | (H_0 - E_s^{(0)}) | \bar\psi_s^{(1)} \rangle + \langle \psi_m^{(0)} | (H' - E_s^{(1)}) | \psi_s^{(0)} \rangle (3.44)

Utilizing the Hermitian nature of H_0 we can act backwards on \langle \psi_m^{(0)} |

\langle \psi_m^{(0)} | H_0 = E_m^{(0)} \langle \psi_m^{(0)} |. (3.45)

We note that \langle \psi_m^{(0)} | \psi_s^{(0)} \rangle = 0 for m \ne s. We can also expand \langle \psi_m^{(0)} | \bar\psi_s^{(1)} \rangle, which is

\langle \psi_m^{(0)} | \bar\psi_s^{(1)} \rangle = \langle \psi_m^{(0)} | \left( \sum_{n \ne s} \bar c_{ns}^{(1)} |\psi_n^{(0)}\rangle \right) (3.46)

I found that reducing this sum was not obvious until some actual integers were plugged in. Suppose that s = 3, and m = 5, then this is

\langle \psi_5^{(0)} | \bar\psi_3^{(1)} \rangle = \langle \psi_5^{(0)} | \left( \sum_{n = 0,1,2,4,5,\cdots} \bar c_{n3}^{(1)} |\psi_n^{(0)}\rangle \right)
= \bar c_{53}^{(1)} \langle \psi_5^{(0)} | \psi_5^{(0)} \rangle
= \bar c_{53}^{(1)}. (3.47)

Observe that we can also replace the superscript (1) with ( j) in the above manipulation with-
out impacting anything else. That and putting back in the abstract indices, we have the general
result

D E
ψm (0) ψ s ( j) = cms ( j) . (3.48)

Utilizing this gives us


(0)
0 = (Em − E (0)
s )cms
(1)
+ H 0 ms (3.49)

And summarizing what we learn from our |Bi = 0 conditions we have

E (1)
s = H ss
0

H 0 ms (3.50)
cms (1) = (0) (0)
E s − Em

Second order in λ Doing the same thing for |C\rangle = 0 we form (or assume)

\langle \psi_s^{(0)} | C \rangle = 0 (3.51)

0 = \langle \psi_s^{(0)} | C \rangle
= \langle \psi_s^{(0)} | \left( (H_0 - E_s^{(0)}) |\bar\psi_s^{(2)}\rangle + (H' - E_s^{(1)}) |\bar\psi_s^{(1)}\rangle - E_s^{(2)} |\psi_s^{(0)}\rangle \right)
= (E_s^{(0)} - E_s^{(0)}) \langle \psi_s^{(0)} | \bar\psi_s^{(2)} \rangle + \langle \psi_s^{(0)} | (H' - E_s^{(1)}) | \bar\psi_s^{(1)} \rangle - E_s^{(2)} \langle \psi_s^{(0)} | \psi_s^{(0)} \rangle (3.52)

We need to know what \langle \psi_s^{(0)} | \bar\psi_s^{(1)} \rangle is, and find that it is zero

\langle \psi_s^{(0)} | \bar\psi_s^{(1)} \rangle = \langle \psi_s^{(0)} | \sum_{n \ne s} \bar c_{ns}^{(1)} |\psi_n^{(0)}\rangle = 0 (3.53)

Again, suppose that s = 3. Our sum ranges over all n \ne 3, so all the brakets are zero. Utilizing that we have

E_s^{(2)} = \langle \psi_s^{(0)} | H' | \bar\psi_s^{(1)} \rangle
= \langle \psi_s^{(0)} | H' \sum_{m \ne s} \bar c_{ms}^{(1)} |\psi_m^{(0)}\rangle
= \sum_{m \ne s} \bar c_{ms}^{(1)} H'_{sm} (3.54)

From eq. (3.50) we have

E_s^{(2)} = \sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}} H'_{sm} = \sum_{m \ne s} \frac{\left| H'_{ms} \right|^2}{E_s^{(0)} - E_m^{(0)}} (3.55)

We can now summarize by forming the first order terms of the perturbed energy and the corresponding kets

E_s = E_s^{(0)} + \lambda H'_{ss} + \lambda^2 \sum_{m \ne s} \frac{|H'_{ms}|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots
|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}} |\psi_m^{(0)}\rangle + \cdots (3.56)

We can continue calculating, but are hopeful that we can stop the calculation without doing more work, even if \lambda = 1. If one supposes that the

\sum_{m \ne s} \frac{H'_{ms}}{E_s^{(0)} - E_m^{(0)}} (3.57)

term is "small", then we can hope that truncating the sum will be reasonable for \lambda = 1. This would be the case if

\left| H'_{ms} \right| \ll \left| E_s^{(0)} - E_m^{(0)} \right|, (3.58)

however, to put some mathematical rigor into making a statement of such smallness takes a
lot of work. We are referred to [10]. Incidentally, these are loosely referred to as the first and
second testaments, because of the author’s name, and the fact that they came as two volumes
historically.
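These formulas are easy to exercise on a toy system. The following sketch (my own example, not from the lecture) compares the second order estimate of eq. (3.56), at \lambda = 1, against exact diagonalization of a two level Hamiltonian with unperturbed energies 0 and 1 and off-diagonal coupling v:

```python
import math

E0, E1 = 0.0, 1.0  # unperturbed energies
v = 0.05           # H'_{01} = H'_{10} = v, with H'_{00} = H'_{11} = 0

# second order perturbative ground state energy: E0 + H'_{00} + |H'_{10}|^2 / (E0 - E1)
E_pert = E0 + 0.0 + v ** 2 / (E0 - E1)

# exact ground state eigenvalue of H = [[E0, v], [v, E1]]
E_exact = (E0 + E1 - math.sqrt((E1 - E0) ** 2 + 4 * v ** 2)) / 2

print(E_pert, E_exact)  # differ only at O(v^4)
```

The agreement to O(v^4) is exactly the "smallness" criterion of eq. (3.58): here |H'_{ms}| = 0.05 while the level spacing is 1.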

3.2 issues concerning degeneracy

When the perturbed state is non-degenerate Suppose the state of interest is non-degenerate
but others are
FIXME: diagram. states designated by dashes labeled n1, n2, n3 degeneracy α = 3 for energy
En(0) .
This is no problem except for notation, and if the analysis is repeated we find

E_s = E_s^{(0)} + \lambda H'_{ss} + \lambda^2 \sum_{m \ne s,\, \alpha} \frac{\left| H'_{m\alpha;s} \right|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots (3.59)

|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{m \ne s,\, \alpha} \frac{H'_{m\alpha;s}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\alpha}^{(0)}\rangle + \cdots, (3.60)

where

H'_{m\alpha;s} = \langle \psi_{m\alpha}^{(0)} | H' | \psi_s^{(0)} \rangle (3.61)

When the perturbed state is also degenerate FIXME: diagram. states designated by dashes labeled n1, n2, n3, degeneracy \alpha = 3 for energy E_n^{(0)}, and states designated by dashes labeled s1, s2, s3, degeneracy \alpha = 3 for energy E_s^{(0)}.
If we just blindly repeat the derivation for the non-degenerate case we would obtain

E_s = E_s^{(0)} + \lambda H'_{s1;s1} + \lambda^2 \sum_{m \ne s,\, \alpha} \frac{\left| H'_{m\alpha;s1} \right|^2}{E_s^{(0)} - E_m^{(0)}} + \lambda^2 \sum_{\alpha \ne 1} \frac{\left| H'_{s\alpha;s1} \right|^2}{E_s^{(0)} - E_s^{(0)}} + \cdots (3.62)

|\bar\psi_s\rangle = |\psi_s^{(0)}\rangle + \lambda \sum_{m \ne s,\, \alpha} \frac{H'_{m\alpha;s}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\alpha}^{(0)}\rangle + \lambda \sum_{\alpha \ne 1} \frac{H'_{s\alpha;s1}}{E_s^{(0)} - E_s^{(0)}} |\psi_{s\alpha}^{(0)}\rangle + \cdots, (3.63)

where

H'_{m\alpha;s1} = \langle \psi_{m\alpha}^{(0)} | H' | \psi_{s1}^{(0)} \rangle (3.64)

Note that the E_s^{(0)} − E_s^{(0)} is NOT a typo, and is why we run into trouble. There is one case where a perturbation approach is still possible. That case is if we happen to have

\langle \psi_{m\alpha}^{(0)} | H' | \psi_{s1}^{(0)} \rangle = 0. (3.65)

That may not be obvious, but if one returns to the original derivation, the right terms cancel so that one will not end up with the 0/0 problem.
FIXME: performing this derivation outside of class (below), it was found that we do not need the matrix elements of H' to be diagonal, but just need

\langle \psi_{s\alpha}^{(0)} | H' | \psi_{s\beta}^{(0)} \rangle = 0, \quad \text{for } \beta \ne \alpha. (3.66)
3.2 issues concerning degeneracy 47

That is consistent with problem set III, where we did not diagonalize H', but just the subset of it associated with the degenerate states. I am unsure now if eq. (3.65) was copied in error or provided in error in class, but it definitely appears to be a more severe requirement than actually needed to deal with perturbation of a state found in a degenerate energy level.

3.2.0.7 Time independent perturbation with degeneracy


Now we repeat the derivation of the first order perturbation with degenerate states from lecture 4.
We see explicitly how we would get into (divide by zero) trouble if the state we were perturbing
had degeneracy. Here I alter the previous derivation to show this explicitly.
Like the non-degenerate case, we are covering the time independent perturbation methods
from §16.1 of the text [4].
We start with a known Hamiltonian H0 , and alter it with the addition of a “small” perturbation

H = H_0 + \lambda H', \quad \lambda \in [0, 1] (3.67)

For the original operator, we assume that a complete set of eigenvectors and eigenkets is known

H_0 |\psi_{s\alpha}^{(0)}\rangle = E_s^{(0)} |\psi_{s\alpha}^{(0)}\rangle (3.68)

We seek the perturbed eigensolution

H |ψ sα i = E sα |ψ sα i (3.69)

and assumed a perturbative series representation for the energy eigenvalues in the new system

E_{s\alpha} = E_s^{(0)} + \lambda E_{s\alpha}^{(1)} + \lambda^2 E_{s\alpha}^{(2)} + \cdots (3.70)

Note that we do not assume that the perturbed energy states, if degenerate in the original
system, are still degenerate after perturbation.
Given an assumed representation for the new eigenkets in terms of the known basis

|\psi_{s\alpha}\rangle = \sum_{n,\beta} c_{ns;\beta\alpha} |\psi_{n\beta}^{(0)}\rangle (3.71)

and a perturbative series representation for the probability coefficients

c_{ns;\beta\alpha} = c_{ns;\beta\alpha}^{(0)} + \lambda c_{ns;\beta\alpha}^{(1)} + \lambda^2 c_{ns;\beta\alpha}^{(2)}, (3.72)



so that

|\psi_{s\alpha}\rangle = \sum_{n,\beta} c_{ns;\beta\alpha}^{(0)} |\psi_{n\beta}^{(0)}\rangle + \lambda \sum_{n,\beta} c_{ns;\beta\alpha}^{(1)} |\psi_{n\beta}^{(0)}\rangle + \lambda^2 \sum_{n,\beta} c_{ns;\beta\alpha}^{(2)} |\psi_{n\beta}^{(0)}\rangle + \cdots (3.73)

Setting \lambda = 0 requires

c_{ns;\beta\alpha}^{(0)} = \delta_{ns;\beta\alpha}, (3.74)

for

|\psi_{s\alpha}\rangle = |\psi_{s\alpha}^{(0)}\rangle + \lambda \sum_{n,\beta} c_{ns;\beta\alpha}^{(1)} |\psi_{n\beta}^{(0)}\rangle + \lambda^2 \sum_{n,\beta} c_{ns;\beta\alpha}^{(2)} |\psi_{n\beta}^{(0)}\rangle + \cdots
= \left( 1 + \lambda c_{ss;\alpha\alpha}^{(1)} + \lambda^2 c_{ss;\alpha\alpha}^{(2)} + \cdots \right) |\psi_{s\alpha}^{(0)}\rangle
+ \lambda \sum_{n\beta \ne s\alpha} c_{ns;\beta\alpha}^{(1)} |\psi_{n\beta}^{(0)}\rangle
+ \lambda^2 \sum_{n\beta \ne s\alpha} c_{ns;\beta\alpha}^{(2)} |\psi_{n\beta}^{(0)}\rangle + \cdots (3.75)

We rescale our kets

|\bar\psi_{s\alpha}\rangle = |\psi_{s\alpha}^{(0)}\rangle + \lambda \sum_{n\beta \ne s\alpha} \bar c_{ns;\beta\alpha}^{(1)} |\psi_{n\beta}^{(0)}\rangle + \lambda^2 \sum_{n\beta \ne s\alpha} \bar c_{ns;\beta\alpha}^{(2)} |\psi_{n\beta}^{(0)}\rangle + \cdots (3.76)

where

\bar c_{ns;\beta\alpha}^{(j)} = \frac{c_{ns;\beta\alpha}^{(j)}}{1 + \lambda c_{ss;\alpha\alpha}^{(1)} + \lambda^2 c_{ss;\alpha\alpha}^{(2)} + \cdots} (3.77)

The normalization of the rescaled kets is then

\langle \bar\psi_{s\alpha} | \bar\psi_{s\alpha} \rangle = 1 + \lambda^2 \sum_{n\beta \ne s\alpha} \left| \bar c_{ns;\beta\alpha}^{(1)} \right|^2 + \cdots \equiv \frac{1}{Z_{s\alpha}}, (3.78)

One can then construct a renormalized ket if desired

|\psi_{s\alpha}\rangle_R = Z_{s\alpha}^{1/2} |\bar\psi_{s\alpha}\rangle, (3.79)

so that

\left( |\psi_{s\alpha}\rangle_R \right)^\dagger |\psi_{s\alpha}\rangle_R = Z_{s\alpha} \langle \bar\psi_{s\alpha} | \bar\psi_{s\alpha} \rangle = 1. (3.80)

We continue by renaming terms in eq. (3.76)

|\bar\psi_{s\alpha}\rangle = |\psi_{s\alpha}^{(0)}\rangle + \lambda |\bar\psi_{s\alpha}^{(1)}\rangle + \lambda^2 |\bar\psi_{s\alpha}^{(2)}\rangle + \cdots (3.81)

where

|\bar\psi_{s\alpha}^{(j)}\rangle = \sum_{n\beta \ne s\alpha} \bar c_{ns;\beta\alpha}^{(j)} |\psi_{n\beta}^{(0)}\rangle. (3.82)

Now we act on this with the Hamiltonian

H |\bar\psi_{s\alpha}\rangle = E_{s\alpha} |\bar\psi_{s\alpha}\rangle, (3.83)

or

H |\bar\psi_{s\alpha}\rangle - E_{s\alpha} |\bar\psi_{s\alpha}\rangle = 0. (3.84)

Expanding this, we have

(H_0 + \lambda H') \left( |\psi_{s\alpha}^{(0)}\rangle + \lambda |\bar\psi_{s\alpha}^{(1)}\rangle + \lambda^2 |\bar\psi_{s\alpha}^{(2)}\rangle + \cdots \right)
- \left( E_s^{(0)} + \lambda E_{s\alpha}^{(1)} + \lambda^2 E_{s\alpha}^{(2)} + \cdots \right) \left( |\psi_{s\alpha}^{(0)}\rangle + \lambda |\bar\psi_{s\alpha}^{(1)}\rangle + \lambda^2 |\bar\psi_{s\alpha}^{(2)}\rangle + \cdots \right) = 0. (3.85)

We want to write this as

|A\rangle + \lambda |B\rangle + \lambda^2 |C\rangle + \cdots = 0. (3.86)

This is

0 = \lambda^0 (H_0 - E_s^{(0)}) |\psi_{s\alpha}^{(0)}\rangle
+ \lambda \left( (H_0 - E_s^{(0)}) |\bar\psi_{s\alpha}^{(1)}\rangle + (H' - E_{s\alpha}^{(1)}) |\psi_{s\alpha}^{(0)}\rangle \right)
+ \lambda^2 \left( (H_0 - E_s^{(0)}) |\bar\psi_{s\alpha}^{(2)}\rangle + (H' - E_{s\alpha}^{(1)}) |\bar\psi_{s\alpha}^{(1)}\rangle - E_{s\alpha}^{(2)} |\psi_{s\alpha}^{(0)}\rangle \right)
+ \cdots (3.87)

So we form

|A\rangle = (H_0 - E_s^{(0)}) |\psi_{s\alpha}^{(0)}\rangle
|B\rangle = (H_0 - E_s^{(0)}) |\bar\psi_{s\alpha}^{(1)}\rangle + (H' - E_{s\alpha}^{(1)}) |\psi_{s\alpha}^{(0)}\rangle
|C\rangle = (H_0 - E_s^{(0)}) |\bar\psi_{s\alpha}^{(2)}\rangle + (H' - E_{s\alpha}^{(1)}) |\bar\psi_{s\alpha}^{(1)}\rangle - E_{s\alpha}^{(2)} |\psi_{s\alpha}^{(0)}\rangle, (3.88)

and so forth.

Zeroth order in λ Since H_0 |\psi_{s\alpha}^{(0)}\rangle = E_s^{(0)} |\psi_{s\alpha}^{(0)}\rangle, this first condition on |A\rangle is not much more than a statement that 0 − 0 = 0.

First order in λ How about |B\rangle = 0? For this to be zero we require that both of the following are simultaneously zero

\langle \psi_{s\alpha}^{(0)} | B \rangle = 0
\langle \psi_{m\beta}^{(0)} | B \rangle = 0, \quad m\beta \ne s\alpha (3.89)

This first condition is

\langle \psi_{s\alpha}^{(0)} | (H' - E_{s\alpha}^{(1)}) | \psi_{s\alpha}^{(0)} \rangle = 0. (3.90)

With

\langle \psi_{m\beta}^{(0)} | H' | \psi_{s\alpha}^{(0)} \rangle \equiv H'_{ms;\beta\alpha}, (3.91)

or

H'_{ss;\alpha\alpha} = E_{s\alpha}^{(1)}. (3.92)

From the second condition we have

0 = \langle \psi_{m\beta}^{(0)} | (H_0 - E_s^{(0)}) | \bar\psi_{s\alpha}^{(1)} \rangle + \langle \psi_{m\beta}^{(0)} | (H' - E_{s\alpha}^{(1)}) | \psi_{s\alpha}^{(0)} \rangle (3.93)

Utilizing the Hermitian nature of H_0 we can act backwards on \langle \psi_{m\beta}^{(0)} |

\langle \psi_{m\beta}^{(0)} | H_0 = E_m^{(0)} \langle \psi_{m\beta}^{(0)} |. (3.94)

We note that \langle \psi_{m\beta}^{(0)} | \psi_{s\alpha}^{(0)} \rangle = 0 for m\beta \ne s\alpha. We can also expand \langle \psi_{m\beta}^{(0)} | \bar\psi_{s\alpha}^{(1)} \rangle, which is

\langle \psi_{m\beta}^{(0)} | \bar\psi_{s\alpha}^{(1)} \rangle = \langle \psi_{m\beta}^{(0)} | \left( \sum_{n\delta \ne s\alpha} \bar c_{ns;\delta\alpha}^{(1)} |\psi_{n\delta}^{(0)}\rangle \right) (3.95)

I found that reducing this sum was not obvious until some actual integers were plugged in. Suppose that s\alpha = 3\,1, and m\beta = 2\,2, then this is

\langle \psi_{2\,2}^{(0)} | \bar\psi_{3\,1}^{(1)} \rangle = \langle \psi_{2\,2}^{(0)} | \left( \sum_{n\delta \in \{1\,1, 1\,2, \cdots, 2\,1, 2\,2, 2\,3, \cdots, 3\,2, 3\,3, \cdots\}} \bar c_{n3;\delta 1}^{(1)} |\psi_{n\delta}^{(0)}\rangle \right)
= \bar c_{23;21}^{(1)} \langle \psi_{2\,2}^{(0)} | \psi_{2\,2}^{(0)} \rangle
= \bar c_{23;21}^{(1)}. (3.96)
Observe that we can also replace the superscript (1) with (j) in the above manipulation without impacting anything else. That and putting back in the abstract indices, we have the general result

\langle \psi_{m\beta}^{(0)} | \bar\psi_{s\alpha}^{(j)} \rangle = \bar c_{ms;\beta\alpha}^{(j)}. (3.97)
Utilizing this gives us, for m\beta \ne s\alpha

0 = (E_m^{(0)} - E_s^{(0)}) \bar c_{ms;\beta\alpha}^{(1)} + H'_{ms;\beta\alpha} (3.98)
Here we see our first sign of the trouble hinted at in lecture 5. Just because m\beta \ne s\alpha does not mean that m \ne s. For example, with m\beta = 1\,1 and s\alpha = 1\,2 we would have

E_{1\,2}^{(1)} = H'_{11;22}
\bar c_{11;12}^{(1)} = \frac{H'_{11;12}}{E_1^{(0)} - E_1^{(0)}} (3.99)
We have got a divide by zero unless additional restrictions are imposed!
If we return to eq. (3.98), we see that, for the result to be valid, when m = s and there exists degeneracy for the s state, we require for \beta \ne \alpha

H'_{ss;\beta\alpha} = 0 (3.100)

(then eq. (3.98) becomes a 0 = 0 equality, and all is still okay).
And summarizing what we learn from our |B\rangle = 0 conditions we have

E_{s\alpha}^{(1)} = H'_{ss;\alpha\alpha}
\bar c_{ms;\beta\alpha}^{(1)} = \frac{H'_{ms;\beta\alpha}}{E_s^{(0)} - E_m^{(0)}}, \quad m \ne s
H'_{ss;\beta\alpha} = 0, \quad \beta \ne \alpha (3.101)

Second order in λ Doing the same thing for |C\rangle = 0 we form (or assume)

\langle \psi_{s\alpha}^{(0)} | C \rangle = 0 (3.102)

0 = \langle \psi_{s\alpha}^{(0)} | C \rangle
= \langle \psi_{s\alpha}^{(0)} | \left( (H_0 - E_s^{(0)}) |\bar\psi_{s\alpha}^{(2)}\rangle + (H' - E_{s\alpha}^{(1)}) |\bar\psi_{s\alpha}^{(1)}\rangle - E_{s\alpha}^{(2)} |\psi_{s\alpha}^{(0)}\rangle \right)
= (E_s^{(0)} - E_s^{(0)}) \langle \psi_{s\alpha}^{(0)} | \bar\psi_{s\alpha}^{(2)} \rangle + \langle \psi_{s\alpha}^{(0)} | (H' - E_{s\alpha}^{(1)}) | \bar\psi_{s\alpha}^{(1)} \rangle - E_{s\alpha}^{(2)} \langle \psi_{s\alpha}^{(0)} | \psi_{s\alpha}^{(0)} \rangle (3.103)

We need to know what \langle \psi_{s\alpha}^{(0)} | \bar\psi_{s\alpha}^{(1)} \rangle is, and find that it is zero

\langle \psi_{s\alpha}^{(0)} | \bar\psi_{s\alpha}^{(1)} \rangle = \langle \psi_{s\alpha}^{(0)} | \sum_{n\beta \ne s\alpha} \bar c_{ns;\beta\alpha}^{(1)} |\psi_{n\beta}^{(0)}\rangle = 0 (3.104)

Utilizing that we have

E_{s\alpha}^{(2)} = \langle \psi_{s\alpha}^{(0)} | H' | \bar\psi_{s\alpha}^{(1)} \rangle
= \langle \psi_{s\alpha}^{(0)} | H' \sum_{m\beta \ne s\alpha} \bar c_{ms;\beta\alpha}^{(1)} |\psi_{m\beta}^{(0)}\rangle
= \sum_{m\beta \ne s\alpha} \bar c_{ms;\beta\alpha}^{(1)} H'_{sm;\alpha\beta} (3.105)

From eq. (3.101), treating the m \ne s case carefully, we have

E_{s\alpha}^{(2)} = \sum_{\beta \ne \alpha} \bar c_{ss;\beta\alpha}^{(1)} H'_{ss;\alpha\beta} + \sum_{m\beta \ne s\alpha,\; m \ne s} \frac{H'_{ms;\beta\alpha}}{E_s^{(0)} - E_m^{(0)}} H'_{sm;\alpha\beta} (3.106)

Again, only if H'_{ss;\alpha\beta} = 0 for \beta \ne \alpha do we have a result we can use. If that is the case, the first sum is killed without a divide by zero, leaving

E_{s\alpha}^{(2)} = \sum_{m\beta \ne s\alpha,\; m \ne s} \frac{\left| H'_{ms;\beta\alpha} \right|^2}{E_s^{(0)} - E_m^{(0)}}. (3.107)

We can now summarize by forming the first order terms of the perturbed energy and the corresponding kets

E_{s\alpha} = E_s^{(0)} + \lambda H'_{ss;\alpha\alpha} + \lambda^2 \sum_{m \ne s,\; m\beta \ne s\alpha} \frac{|H'_{ms;\beta\alpha}|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots
|\bar\psi_{s\alpha}\rangle = |\psi_{s\alpha}^{(0)}\rangle + \lambda \sum_{m \ne s,\; m\beta \ne s\alpha} \frac{H'_{ms;\beta\alpha}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\beta}^{(0)}\rangle + \cdots
H'_{ss;\beta\alpha} = 0, \quad \beta \ne \alpha (3.108)

Notational discrepancy: OOPS. It looks like I used different notation than in class for the placement of the indices on our matrix elements.
FIXME: it looks like the \bar c_{ss;\alpha\alpha'}^{(1)}, for \alpha \ne \alpha', coefficients have been lost track of here? Do we have to assume those are zero too? Professor Sipe did not include those in his lecture eq. (3.114), but I do not see the motivation here for dropping them in this derivation.

Diagonalizing the perturbation Hamiltonian Suppose that we do not have this special zero condition that allows the perturbation treatment to remain valid. What can we do? It turns out that we can make use of the fact that the perturbation Hamiltonian is Hermitian, and diagonalize the matrix

\langle \psi_{s\alpha}^{(0)} | H' | \psi_{s\beta}^{(0)} \rangle (3.109)

In the example of a two fold degeneracy, this amounts to us choosing not to work with the states

|\psi_{s1}^{(0)}\rangle, |\psi_{s2}^{(0)}\rangle, (3.110)

but with some linear combinations of the two

|\psi_{sI}^{(0)}\rangle = a_1 |\psi_{s1}^{(0)}\rangle + b_1 |\psi_{s2}^{(0)}\rangle (3.111)
|\psi_{sII}^{(0)}\rangle = a_2 |\psi_{s1}^{(0)}\rangle + b_2 |\psi_{s2}^{(0)}\rangle (3.112)

In this new basis, once found, we have

\langle \psi_{s\alpha}^{(0)} | H' | \psi_{s\beta}^{(0)} \rangle = H_\alpha \delta_{\alpha\beta} (3.113)

Utilizing this to fix the previous, one would get, if the analysis was repeated correctly,

E_{s\alpha} = E_s^{(0)} + \lambda H'_{s\alpha;s\alpha} + \lambda^2 \sum_{m \ne s,\, \beta} \frac{\left| H'_{m\beta;s\alpha} \right|^2}{E_s^{(0)} - E_m^{(0)}} + \cdots (3.114)

|\bar\psi_{s\alpha}\rangle = |\psi_{s\alpha}^{(0)}\rangle + \lambda \sum_{m \ne s,\, \beta} \frac{H'_{m\beta;s\alpha}}{E_s^{(0)} - E_m^{(0)}} |\psi_{m\beta}^{(0)}\rangle + \cdots (3.115)

FIXME: why do we have second order in λ terms for the energy when we found those exactly by diagonalization? We found there that the perturbed energy eigenvalues were multivalued, with values E_{s\alpha} = E_s^{(0)} + \lambda H'_{s\beta;s\beta} for all degeneracy indices \beta. I will have to repeat the derivation for these more carefully to understand this apparent discrepancy.


We see that a degenerate state can be split by applying a perturbation.
FIXME: diagram. E_s^{(0)} as one energy level without perturbation, and as two distinct levels with perturbation.
I will bet that this is the origin of the spectral line splitting, especially given that an atom like hydrogen has degenerate states.
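A minimal numeric illustration of that splitting (my own sketch, not from the lecture): within a doubly degenerate level, diagonalizing the 2×2 block of H' restricted to the degenerate subspace gives the two first order energies E_s^{(0)} + λ × (eigenvalues), so one level becomes two:

```python
import math

def eig2_sym(a, b, c):
    """Eigenvalues of the real symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2
    r = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mean - r, mean + r

E_s = 1.0  # doubly degenerate unperturbed energy (arbitrary value)
# H' restricted to the degenerate subspace (arbitrary Hermitian block)
lo, hi = eig2_sym(0.1, 0.2, -0.1)

print(E_s + lo, E_s + hi)  # one level split into two distinct levels
```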

3.3 examples

Example 3.1: Stark Shift

Reading: §16.5 of [4].

H = H_0 + \lambda H' (3.116)

H' = e E_z \hat{Z} (3.117)

where Ez is the electric field.


To first order we have

|\psi_\alpha^{(1)}\rangle = |\psi_\alpha^{(0)}\rangle + \sum_{\beta \ne \alpha} \frac{\langle \psi_\beta^{(0)} | H' | \psi_\alpha^{(0)} \rangle}{E_\alpha^{(0)} - E_\beta^{(0)}} |\psi_\beta^{(0)}\rangle (3.118)

and

E_\alpha^{(1)} = \langle \psi_\alpha^{(0)} | H' | \psi_\alpha^{(0)} \rangle (3.119)

With the default basis \{ |\psi_\beta^{(0)}\rangle \}, and n = 2, we have a 4 fold degeneracy

(l, m) = (0, 0), (1, -1), (1, 0), (1, +1) (3.120)

but can diagonalize as follows

nlm    200  210  211  21-1
200     0    ∆    0    0
210     ∆    0    0    0
211     0    0    0    0
21-1    0    0    0    0    (3.121)

FIXME: show.
where

\Delta = -3 e E_z a_0 (3.122)

We have a split of energy levels as illustrated in fig. 3.2



Figure 3.2: Energy level splitting

Observe the embedded Pauli matrix (FIXME: missed the point of this?)

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} (3.123)

Proper basis for perturbation (FIXME: check) is then

\left\{ \frac{1}{\sqrt{2}} \left( |2,0,0\rangle \pm |2,1,0\rangle \right), |2,1,\pm 1\rangle \right\} (3.124)

and our result is

|\psi_{\alpha,\, n=2}^{(1)}\rangle = |\psi_\alpha^{(0)}\rangle + \sum_{\beta \,\notin\, \text{degenerate subspace}} \frac{\langle \psi_\beta^{(0)} | H' | \psi_\alpha^{(0)} \rangle}{E_\alpha^{(0)} - E_\beta^{(0)}} |\psi_\beta^{(0)}\rangle (3.125)
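As a check on the diagonalization claimed above (a sketch of my own, with Δ given an arbitrary numeric value standing in for −3eE_z a_0): in the {|2,0,0⟩, |2,1,0⟩} subspace the matrix is [[0, Δ], [Δ, 0]], and (|2,0,0⟩ ± |2,1,0⟩)/√2 are its eigenvectors with eigenvalues ±Δ, while |2,1,±1⟩ are untouched:

```python
import math

Delta = -0.5  # stands in for -3 e E_z a_0
H = [[0.0, Delta], [Delta, 0.0]]  # H' in the {|200>, |210>} subspace

for sign in (+1, -1):
    v = [1 / math.sqrt(2), sign / math.sqrt(2)]  # (|200> + sign |210>)/sqrt(2)
    Hv = [H[0][0] * v[0] + H[0][1] * v[1],
          H[1][0] * v[0] + H[1][1] * v[1]]
    lam = sign * Delta  # expected eigenvalue
    residual = max(abs(Hv[0] - lam * v[0]), abs(Hv[1] - lam * v[1]))
    print(sign, lam, residual)  # residual at rounding level
```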
4 TIME DEPENDENT PERTURBATION
4.1 review of dynamics

We want to move on to time dependent problems. In general for a time dependent problem, the
answer follows provided one has solved for all the perturbed energy eigenvalues. This can be
laborious (or not feasible due to infinite sums).
Before doing this, let us review our dynamics as covered in §3 of the text [4].

Schrödinger and Heisenberg pictures Our operator equation in the Schrödinger picture is the familiar

i\hbar \frac{d}{dt} |\psi_s(t)\rangle = H |\psi_s(t)\rangle (4.1)

and most of our operators X, P, \cdots are time independent.

\langle O \rangle(t) = \langle \psi_s(t) | O_s | \psi_s(t) \rangle (4.2)

where O_s is the operator in the Schrödinger picture, and is not time dependent.
Formally, the time evolution of any state is given by

|\psi_s(t)\rangle = e^{-iHt/\hbar} |\psi_s(0)\rangle = U(t, 0) |\psi_s(0)\rangle (4.3)

so the expectation of an operator can be written

\langle O \rangle(t) = \langle \psi_s(0) | e^{iHt/\hbar} O_s e^{-iHt/\hbar} | \psi_s(0) \rangle. (4.4)

With the introduction of the Heisenberg ket

|\psi_H\rangle = |\psi_s(0)\rangle, (4.5)

and Heisenberg operators

O_H = e^{iHt/\hbar} O_s e^{-iHt/\hbar}, (4.6)


the expectation evolution takes the form

\langle O \rangle(t) = \langle \psi_H | O_H | \psi_H \rangle. (4.7)

Note that because the Hamiltonian commutes with its exponential (it commutes with itself and any power series of itself), the Hamiltonian in the Heisenberg picture is the same as in the Schrödinger picture

H_H = e^{iHt/\hbar} H e^{-iHt/\hbar} = H. (4.8)

Time evolution and the Commutator Taking the derivative of eq. (4.6) provides us with the
time evolution of any operator in the Heisenberg picture

i\hbar \frac{d}{dt} O_H(t) = i\hbar \frac{d}{dt} \left( e^{iHt/\hbar} O_s e^{-iHt/\hbar} \right)
= i\hbar \left( \frac{iH}{\hbar} e^{iHt/\hbar} O_s e^{-iHt/\hbar} + e^{iHt/\hbar} O_s e^{-iHt/\hbar} \frac{-iH}{\hbar} \right)    (4.9)
= -H O_H + O_H H.

We can write this as a commutator

i\hbar \frac{d}{dt} O_H(t) = [O_H, H].    (4.10)

Summarizing the two pictures:

Schrödinger picture: \quad i\hbar \frac{d}{dt}|\psi_s(t)\rangle = H|\psi_s(t)\rangle
Heisenberg picture: \quad i\hbar \frac{d}{dt} O_H(t) = [O_H, H]
\langle\psi_s(t)| O_S |\psi_s(t)\rangle = \langle\psi_H| O_H |\psi_H\rangle    (4.11)
|\psi_s(0)\rangle = |\psi_H\rangle
O_S = O_H(0)
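As a quick numerical sanity check (not from the notes: an arbitrary 2×2 toy Hamiltonian and operator, with ħ = 1), we can verify both the commutator evolution eq. (4.10) and the picture independence of the Hamiltonian eq. (4.8):

```python
import numpy as np

def expm_herm(H, s):
    # exp(s * H) for a Hermitian matrix H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(s * w)) @ V.conj().T

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # arbitrary toy Hamiltonian
O = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # Schrodinger picture operator

def O_H(t):
    # Heisenberg picture operator e^{iHt} O e^{-iHt} (hbar = 1)
    return expm_herm(H, 1j * t) @ O @ expm_herm(H, -1j * t)

t, h = 0.7, 1e-5
dOdt = (O_H(t + h) - O_H(t - h)) / (2 * h)  # central difference approximation
# i dO_H/dt = [O_H, H]
assert np.allclose(1j * dOdt, O_H(t) @ H - H @ O_H(t), atol=1e-6)
# the Hamiltonian is the same in both pictures: H_H = H
assert np.allclose(expm_herm(H, 1j * t) @ H @ expm_herm(H, -1j * t), H)
```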

4.2 interaction picture

Recap Recall our table comparing the two pictures:

Schrödinger picture: \quad i\hbar \frac{d}{dt}|\psi_s(t)\rangle = H|\psi_s(t)\rangle
Heisenberg picture: \quad i\hbar \frac{d}{dt} O_H(t) = [O_H, H]
\langle\psi_s(t)| O_S |\psi_s(t)\rangle = \langle\psi_H| O_H |\psi_H\rangle    (4.12)
|\psi_s(0)\rangle = |\psi_H\rangle
O_S = O_H(0)

A motivating example While fundamental Hamiltonians are independent of time, in a number of common cases we can form approximate Hamiltonians that are time dependent. One such example is that of Coulomb excitations of an atom, as covered in §18.3 of the text [4], and shown in fig. 4.1.

Figure 4.1: Coulomb interaction of a nucleus and heavy atom

We consider the interaction of a nucleus with a neutral atom, heavy enough that it can be considered classically. From the atom's point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

H'(t) = \sum_i \frac{Z e q_i}{|\mathbf{r}_N(t) - \mathbf{R}_i|}.    (4.13)

Here rN is the position vector for the heavy nucleus, and Ri is the position of each charge within the atom, where i ranges over all the internal charges, positive and negative, within the atom.
Placing the origin close to the atom, we can write this interaction Hamiltonian as

H'(t) = \sum_i \frac{Z e q_i}{|\mathbf{r}_N(t)|} + \sum_i Z e q_i \, \mathbf{R}_i \cdot \left( \frac{\partial}{\partial \mathbf{r}} \frac{1}{|\mathbf{r}_N(t) - \mathbf{r}|} \right)_{\mathbf{r} = 0}.    (4.14)
The first term vanishes because the total charge in our neutral atom is zero. This leaves us
with

H'(t) = -\sum_i q_i \mathbf{R}_i \cdot \left( -Ze \frac{\partial}{\partial \mathbf{r}} \frac{1}{|\mathbf{r}_N(t) - \mathbf{r}|} \right)_{\mathbf{r} = 0} = -\sum_i q_i \mathbf{R}_i \cdot \mathbf{E}(t),    (4.15)

where E(t) is the electric field at the origin due to the nucleus.
Introducing a dipole moment operator for the atom

\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,    (4.16)

the interaction takes the form

H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).    (4.17)

Here we have a quantum mechanical operator and a classical field taken together. This sort of dipole interaction also occurs when we treat an atom placed into an electromagnetic field, treated classically as depicted in fig. 4.2. In the figure, we can use the dipole interaction provided λ ≫ a, where a is the "width" of the atom. Because it is great for examples, we will see this dipole interaction a lot.
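A small numerical sketch makes the quality of the dipole approximation concrete. Everything here is a made-up toy configuration (two point charges forming a neutral "atom", Gaussian-style units, nucleus field evaluated at the origin), not data from the notes:

```python
import numpy as np

Ze = 2.0
rN = np.array([10.0, 0.0, 0.0])                  # distant nucleus position r_N
q = np.array([1.0, -1.0])                        # neutral "atom": charges sum to zero
R = np.array([[0.05, 0.02, 0.0],                 # charge positions R_i near the origin
              [-0.03, 0.0, 0.01]])

# exact interaction sum_i Ze q_i / |r_N - R_i|, eq. (4.13)
H_exact = np.sum(Ze * q / np.linalg.norm(rN - R, axis=1))

mu = (q[:, None] * R).sum(axis=0)                # dipole moment mu = sum_i q_i R_i
E0 = -Ze * rN / np.linalg.norm(rN) ** 3          # nucleus field at the origin
H_dip = -np.dot(mu, E0)                          # dipole approximation -mu . E

assert abs(H_exact - H_dip) < 0.02 * abs(H_dip)  # agreement to O(|R| / |r_N|)
```

The monopole term drops because the charges sum to zero, and the residual is the quadrupole correction, suppressed by another factor of |R|/|r_N|.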

Figure 4.2: atom in a field

The interaction picture Having talked about both the Schrödinger and Heisenberg pictures,
we can now move on to describe a hybrid, one where our Hamiltonian has been split into static
and time dependent parts

H(t) = H_0 + H'(t).    (4.18)

We will formulate an approach for dealing with problems of this sort called the interaction
picture.
This is also covered in §3.3 of the text, albeit in a much harder to understand fashion (the text
appears to try to not pull the result from a magic hat, but the steps to get to the end result are
messy). It would probably have been nicer to see it this way instead.
In the Schrödinger picture our dynamics have the form

i\hbar \frac{d}{dt} |\psi_s(t)\rangle = H |\psi_s(t)\rangle.    (4.19)
How about the Heisenberg picture? We look for a solution

|\psi_s(t)\rangle = U(t, t_0) |\psi_s(t_0)\rangle.    (4.20)



We want to find the operator that evolves the state from its state at some initial time t_0 to the arbitrary later state found at time t. Plugging in we have

i\hbar \frac{d}{dt} U(t, t_0) |\psi_s(t_0)\rangle = H(t) U(t, t_0) |\psi_s(t_0)\rangle.    (4.21)
This has to hold for all |ψ s (t0 )i, and we can equivalently seek a solution of the operator
equation

i\hbar \frac{d}{dt} U(t, t_0) = H(t) U(t, t_0),    (4.22)
where

U(t0 , t0 ) = I, (4.23)

the identity for the Hilbert space.


Suppose that H(t) was independent of time. We could find that

U(t, t_0) = e^{-iH(t - t_0)/\hbar}.    (4.24)

If H(t) depends on time could you guess that

U(t, t_0) = e^{-\frac{i}{\hbar} \int_{t_0}^{t} H(\tau)\, d\tau}    (4.25)

holds? No. This may be true when H(t) is a number, but when it is an operator, the Hamilto-
nian does not necessarily commute with itself at different times

[H(t'), H(t'')] \neq 0.    (4.26)

So this is wrong in general. As an aside, for numbers, eq. (4.25) can be verified easily. We have

i\hbar \frac{d}{dt} e^{-\frac{i}{\hbar}\int_{t_0}^{t} H(\tau) d\tau} = i\hbar \left( -\frac{i}{\hbar} \frac{d}{dt} \int_{t_0}^{t} H(\tau) d\tau \right) e^{-\frac{i}{\hbar}\int_{t_0}^{t} H(\tau) d\tau} = H(t)\, e^{-\frac{i}{\hbar}\int_{t_0}^{t} H(\tau) d\tau} = H(t) U(t, t_0).    (4.27)
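For a concrete scalar example, the verification in eq. (4.27) can also be checked numerically. The choice H(t) = cos t is a hypothetical stand-in (ħ = 1), for which ∫₀ᵗ H(τ)dτ = sin t:

```python
import numpy as np

H = np.cos                               # number valued "Hamiltonian" H(t)
U = lambda t: np.exp(-1j * np.sin(t))    # exp(-i * integral_0^t H(tau) dtau), hbar = 1

t, h = 1.3, 1e-6
dU = (U(t + h) - U(t - h)) / (2 * h)     # central difference derivative
assert np.isclose(1j * dU, H(t) * U(t), atol=1e-7)   # i dU/dt = H(t) U(t)
```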

Expectations Suppose that we do find U(t, t0 ). Then our expectation takes the form

\langle\psi_s(t)| O_s |\psi_s(t)\rangle = \langle\psi_s(t_0)| U^\dagger(t, t_0) O_s U(t, t_0) |\psi_s(t_0)\rangle.    (4.28)

Put

|\psi_H\rangle = |\psi_s(t_0)\rangle,    (4.29)

and form

O_H = U^\dagger(t, t_0) O_s U(t, t_0),    (4.30)

so that our expectation has the familiar representations

\langle\psi_s(t)| O_s |\psi_s(t)\rangle = \langle\psi_H| O_H |\psi_H\rangle.    (4.31)

New strategy. Interaction picture Let us define

U_I(t, t_0) = e^{\frac{i}{\hbar} H_0 (t - t_0)} U(t, t_0),    (4.32)

or
U(t, t_0) = e^{-\frac{i}{\hbar} H_0 (t - t_0)} U_I(t, t_0).    (4.33)

Let us see how this works. We have

i\hbar \frac{dU_I}{dt} = i\hbar \frac{d}{dt} \left( e^{\frac{i}{\hbar}H_0(t-t_0)} U(t, t_0) \right)
= -H_0 e^{\frac{i}{\hbar}H_0(t-t_0)} U(t, t_0) + e^{\frac{i}{\hbar}H_0(t-t_0)} \left( i\hbar \frac{d}{dt} U(t, t_0) \right)
= -H_0 e^{\frac{i}{\hbar}H_0(t-t_0)} U(t, t_0) + e^{\frac{i}{\hbar}H_0(t-t_0)} (H_0 + H'(t)) U(t, t_0)    (4.34)
= e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) U(t, t_0)
= e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) e^{-\frac{i}{\hbar}H_0(t-t_0)} U_I(t, t_0).
Define the interaction picture Hamiltonian

\bar{H}'(t) = e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) e^{-\frac{i}{\hbar}H_0(t-t_0)},    (4.35)

so that our operator equation takes the form

i\hbar \frac{d}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0).    (4.36)
Note that we also have the required identity at the initial time

U_I(t_0, t_0) = I,    (4.37)

Without requiring us to actually find U(t, t_0), all of the dynamics of the time dependent interaction are now embedded in our operator equation for \bar{H}'(t), with the simple evolution generated by the time independent portion of the Hamiltonian factored out.

Connection with the Schrödinger picture In the Schrödinger picture we have

|\psi_s(t)\rangle = U(t, t_0) |\psi_s(t_0)\rangle = e^{-\frac{i}{\hbar}H_0(t-t_0)} U_I(t, t_0) |\psi_s(t_0)\rangle.    (4.38)

With a definition of the interaction picture ket as

|\psi_I\rangle = U_I(t, t_0) |\psi_s(t_0)\rangle = U_I(t, t_0) |\psi_H\rangle,    (4.39)

the Schrödinger picture is then related to the interaction picture by

|\psi_s(t)\rangle = e^{-\frac{i}{\hbar}H_0(t-t_0)} |\psi_I\rangle.    (4.40)

Also, by multiplying eq. (4.36) by our Schrödinger ket, we remove the last vestiges of U I and
U from the dynamical equation for our time dependent interaction

i\hbar \frac{d}{dt} |\psi_I\rangle = \bar{H}'(t) |\psi_I\rangle.    (4.41)

Interaction picture expectation Inverting eq. (4.40), we can form an operator expectation, and
relate it the interaction and Schrödinger pictures

\langle\psi_s(t)| O_s |\psi_s(t)\rangle = \langle\psi_I| e^{\frac{i}{\hbar}H_0(t-t_0)} O_s e^{-\frac{i}{\hbar}H_0(t-t_0)} |\psi_I\rangle.    (4.42)

With a definition

O_I = e^{\frac{i}{\hbar}H_0(t-t_0)} O_s e^{-\frac{i}{\hbar}H_0(t-t_0)},    (4.43)

we have

\langle\psi_s(t)| O_s |\psi_s(t)\rangle = \langle\psi_I| O_I |\psi_I\rangle.    (4.44)

As before, the time evolution of our interaction picture operator, can be found by taking
derivatives of eq. (4.43), for which we find

i\hbar \frac{dO_I(t)}{dt} = [O_I(t), H_0].    (4.45)

Summarizing the interaction picture Given

H(t) = H_0 + H'(t),    (4.46)

and initial time states

|\psi_I(t_0)\rangle = |\psi_s(t_0)\rangle = |\psi_H\rangle,    (4.47)

we have

\langle\psi_s(t)| O_s |\psi_s(t)\rangle = \langle\psi_I| O_I |\psi_I\rangle,    (4.48)

where

|\psi_I\rangle = U_I(t, t_0) |\psi_s(t_0)\rangle,    (4.49)

and

i\hbar \frac{d}{dt} |\psi_I\rangle = \bar{H}'(t) |\psi_I\rangle,    (4.50)

or

i\hbar \frac{d}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0), \qquad U_I(t_0, t_0) = I.    (4.51)

Our interaction picture Hamiltonian is

\bar{H}'(t) = e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) e^{-\frac{i}{\hbar}H_0(t-t_0)},    (4.52)

and for Schrödinger operators, independent of time, we have the dynamical equation

i\hbar \frac{dO_I(t)}{dt} = [O_I(t), H_0].    (4.53)

4.3 justifying the taylor expansion above (not class notes)

Multivariable Taylor series As outlined in §2.8 (8.10) of [7], we want to derive the multivariable Taylor expansion for a scalar valued function of some number of variables

f(\mathbf{u}) = f(u^1, u^2, \cdots),    (4.54)

consider the displacement operation applied to the vector argument

f (a + x) = f (a + tx)|t=1 . (4.55)

We can Taylor expand a single variable function without any trouble, so introduce

g(t) = f (a + tx), (4.56)

where

g(1) = f (a + x). (4.57)

We have

g(t) = g(0) + t \frac{dg}{dt}\bigg|_{t=0} + \frac{t^2}{2!} \frac{d^2 g}{dt^2}\bigg|_{t=0} + \cdots,    (4.58)

so that

g(1) = g(0) + \frac{dg}{dt}\bigg|_{t=0} + \frac{1}{2!} \frac{d^2 g}{dt^2}\bigg|_{t=0} + \cdots.    (4.59)

The multivariable Taylor series now becomes a plain old application of the chain rule, where
we have to evaluate

\frac{dg}{dt} = \frac{d}{dt} f(a^1 + t x^1, a^2 + t x^2, \cdots) = \sum_i \frac{\partial f(\mathbf{a} + t\mathbf{x})}{\partial (a^i + t x^i)} \frac{\partial (a^i + t x^i)}{\partial t},    (4.60)

so that

\frac{dg}{dt}\bigg|_{t=0} = \sum_i x^i \left( \frac{\partial f}{\partial x^i} \right)_{x^i = a^i}.    (4.61)

Assuming an Euclidean space we can write this in the notationally more pleasant fashion
using a gradient operator for the space


\frac{dg}{dt}\bigg|_{t=0} = \mathbf{x} \cdot \nabla_u f(\mathbf{u})\big|_{\mathbf{u}=\mathbf{a}}.    (4.62)
To handle the higher order terms, we repeat the chain rule application, yielding for example


\frac{d^2 f(\mathbf{a} + t\mathbf{x})}{dt^2}\bigg|_{t=0} = \frac{d}{dt} \sum_i x^i \frac{\partial f(\mathbf{a} + t\mathbf{x})}{\partial (a^i + t x^i)}\bigg|_{t=0} = \sum_i x^i \frac{\partial}{\partial (a^i + t x^i)} \frac{d f(\mathbf{a} + t\mathbf{x})}{dt}\bigg|_{t=0} = (\mathbf{x} \cdot \nabla_u)^2 f(\mathbf{u})\big|_{\mathbf{u}=\mathbf{a}}.    (4.63)

Thus the Taylor series associated with a vector displacement takes the tidy form


f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^{\infty} \frac{1}{k!} (\mathbf{x} \cdot \nabla_u)^k f(\mathbf{u})\big|_{\mathbf{u}=\mathbf{a}}.    (4.64)

Even more fancy, we can form the operator equation


f(\mathbf{a} + \mathbf{x}) = e^{\mathbf{x} \cdot \nabla_u} f(\mathbf{u})\big|_{\mathbf{u}=\mathbf{a}}.    (4.65)

Here a dummy variable u has been retained as an instruction not to differentiate the x part of
the directional derivative in any repeated applications of the x · ∇ operator.

That notational cludge can be removed by swapping a and x


f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^{\infty} \frac{1}{k!} (\mathbf{a} \cdot \nabla)^k f(\mathbf{x}) = e^{\mathbf{a} \cdot \nabla} f(\mathbf{x}),    (4.66)

where ∇ = ∇x = (∂/∂x1 , ∂/∂x2 , ...).


Having derived this (or for those with lesser degrees of amnesia, recall it), we can see that
eq. (4.14) was a direct application of this, retaining no second order or higher terms.
Our expression used in the interaction Hamiltonian discussion was


\frac{1}{|\mathbf{r} - \mathbf{R}|} \approx \frac{1}{|\mathbf{r}|} + \mathbf{R} \cdot \left( \frac{\partial}{\partial \mathbf{R}} \frac{1}{|\mathbf{r} - \mathbf{R}|} \right)_{\mathbf{R} = 0},    (4.67)

which we can see has the same structure as above with some variable substitutions. Evaluating it we have

\frac{\partial}{\partial \mathbf{R}} \frac{1}{|\mathbf{r} - \mathbf{R}|} = \mathbf{e}_i \frac{\partial}{\partial R^i} \left( (x^j - R^j)^2 \right)^{-1/2} = \mathbf{e}_i \left( -\frac{1}{2} \right) \left( -2 (x^i - R^i) \right) \frac{1}{|\mathbf{r} - \mathbf{R}|^3} = \frac{\mathbf{r} - \mathbf{R}}{|\mathbf{r} - \mathbf{R}|^3},    (4.68)
and at R = 0 we have

\frac{1}{|\mathbf{r} - \mathbf{R}|} \approx \frac{1}{|\mathbf{r}|} + \mathbf{R} \cdot \frac{\mathbf{r}}{|\mathbf{r}|^3}.    (4.69)
We see that this directional derivative produces the classical Coulomb field expression for an electrostatic distribution, once we take the r/|r|³ factor and multiply it with −Ze.

With algebra A different way to justify the expansion of eq. (4.14) is to consider a Clifford
algebra factorization (following notation from [5]) of the absolute vector difference, where R is
considered small.

|\mathbf{r} - \mathbf{R}| = \sqrt{(\mathbf{r} - \mathbf{R})(\mathbf{r} - \mathbf{R})}
= \sqrt{\left\langle \mathbf{r}\left(1 - \frac{1}{\mathbf{r}}\mathbf{R}\right)\left(1 - \mathbf{R}\frac{1}{\mathbf{r}}\right)\mathbf{r} \right\rangle}
= \sqrt{\mathbf{r}^2 \left\langle \left(1 - \frac{1}{\mathbf{r}}\mathbf{R}\right)\left(1 - \mathbf{R}\frac{1}{\mathbf{r}}\right) \right\rangle}    (4.70)
= |\mathbf{r}| \sqrt{1 - 2\frac{1}{\mathbf{r}}\cdot\mathbf{R} + \frac{1}{\mathbf{r}}\mathbf{R}\mathbf{R}\frac{1}{\mathbf{r}}}
= |\mathbf{r}| \sqrt{1 - 2\frac{1}{\mathbf{r}}\cdot\mathbf{R} + \frac{\mathbf{R}^2}{\mathbf{r}^2}}.
Neglecting the R2 term, we can then Taylor series expand this scalar expression

\frac{1}{|\mathbf{r} - \mathbf{R}|} \approx \frac{1}{|\mathbf{r}|}\left(1 + \frac{1}{\mathbf{r}}\cdot\mathbf{R}\right) = \frac{1}{|\mathbf{r}|} + \frac{\hat{\mathbf{r}}}{r^2}\cdot\mathbf{R} = \frac{1}{|\mathbf{r}|} + \frac{\mathbf{r}}{|\mathbf{r}|^3}\cdot\mathbf{R}.    (4.71)

Observe this is what was found with the multivariable Taylor series expansion too.
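A one line numerical check of the first order expansion eq. (4.69)/(4.71), with arbitrary sample vectors satisfying |R| ≪ |r|:

```python
import numpy as np

r = np.array([1.0, 2.0, -0.5])              # arbitrary field point
R = np.array([1e-3, -2e-3, 0.5e-3])         # small displacement, |R| << |r|

exact = 1.0 / np.linalg.norm(r - R)
nr = np.linalg.norm(r)
approx = 1.0 / nr + np.dot(R, r) / nr**3    # 1/|r| + R . r / |r|^3

# the residual is O(|R|^2 / |r|^3), far smaller than the first order term
assert abs(exact - approx) < 1e-5
assert abs(exact - 1.0 / nr) > 100 * abs(exact - approx)
```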

4.4 recap: interaction picture

We will use the interaction picture to examine time dependent perturbations. We wrote our
Schrödinger ket in terms of the interaction ket

|\psi\rangle = e^{-iH_0(t-t_0)/\hbar} |\psi_I(t)\rangle,    (4.72)

where

|\psi_I\rangle = U_I(t, t_0) |\psi_I(t_0)\rangle.    (4.73)

Our dynamics is given by the operator equation


i\hbar \frac{d}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0),    (4.74)

where

\bar{H}'(t) = e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) e^{-\frac{i}{\hbar}H_0(t-t_0)}.    (4.75)

We can formally solve eq. (4.74) by writing

U_I(t, t_0) = I - \frac{i}{\hbar} \int_{t_0}^{t} dt' \, \bar{H}'(t') U_I(t', t_0).    (4.76)

This is easy enough to verify by direct differentiation

i\hbar \frac{d}{dt} U_I = i\hbar \frac{d}{dt} \left( -\frac{i}{\hbar} \int_{t_0}^{t} dt' \, \bar{H}'(t') U_I(t', t_0) \right) = \bar{H}'(t) U_I(t, t_0).    (4.77)

This is a bit of a chicken and egg expression, since it is cyclic with a dependency on the unknown U_I(t', t_0) factors. We start with an initial estimate of the operator to be determined, and iterate. This can seem like an odd thing to do, but one can find books on just this integral kernel iteration method (like the nice little Dover book [13] that has sat on my (Peeter's) shelf all lonely so many years).
Suppose for t near t0 , try

U_I(t, t_0) \approx I - \frac{i}{\hbar} \int_{t_0}^{t} dt' \, \bar{H}'(t').    (4.78)

A second order iteration is now possible

Z  t Z t0 
i  i 0 0 0
00 0 00 
U I (t, t0 ) ≈ I − dt H (t ) I − dt H (t ).
 

t0 h̄ t0
Z t Z t0 (4.79)
−i 2 t 0 0 0
Z
i 
=I− dt H (t ) +
0 0 0
dt H (t ) dt00 H 0 (t00 )
h̄ t0 h̄ t0 t0

It is possible to continue this iteration, and this approach is considered in some detail in §3.3
of the text [4], and is apparently also the basis for Feynman diagrams.
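The second order iterate can be compared against a numerically "exact" time ordered propagator. This sketch uses an arbitrary toy H̄′(t) built from Pauli matrices, so that it fails to commute with itself at different times (ħ = 1); the leftover residual should be third order in the small coupling ε:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
eps = 0.05
Hp = lambda t: eps * (np.cos(t) * sx + np.sin(t) * sz)   # [Hp(t'), Hp(t'')] != 0

T, N = 1.0, 4000
dt = T / N
tm = (np.arange(N) + 0.5) * dt                           # midpoint time grid

# "exact" U_I: compose many short time propagators (a time ordered product)
U_exact = np.eye(2, dtype=complex)
for t in tm:
    w, V = np.linalg.eigh(Hp(t))
    U_exact = (V * np.exp(-1j * w * dt / hbar)) @ V.conj().T @ U_exact

# second order iterate: I - (i/hbar) int Hp + (-i/hbar)^2 int Hp(t') int^{t'} Hp(t'')
I1 = np.zeros((2, 2), dtype=complex)
I2 = np.zeros((2, 2), dtype=complex)
inner = np.zeros((2, 2), dtype=complex)                  # running int_0^{t'} Hp dt''
for t in tm:
    I1 += Hp(t) * dt
    I2 += Hp(t) @ inner * dt
    inner += Hp(t) * dt
U2 = np.eye(2) - (1j / hbar) * I1 + ((-1j / hbar) ** 2) * I2

assert np.linalg.norm(U2 - U_exact) < 1e-3               # residual is O(eps^3)
assert np.allclose(U_exact.conj().T @ U_exact, np.eye(2))  # exact propagator is unitary
```

Note the ordering inside the double integral: H̄′(t′) multiplies from the left of the inner integral, mirroring the time ordering of eq. (4.79).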

4.5 time dependent perturbation theory

As covered in §17 of the text, we will split the interaction into time independent and time
dependent terms

H(t) = H_0 + H'(t),    (4.80)

and work in the interaction picture with

|\psi_I(t)\rangle = \sum_n \tilde{c}_n(t) |\psi_n^{(0)}\rangle.    (4.81)

Our Schrödinger ket is then

|\psi(t)\rangle = e^{-iH_0(t-t_0)/\hbar} |\psi_I(t)\rangle = \sum_n \tilde{c}_n(t) e^{-iE_n^{(0)}(t-t_0)/\hbar} |\psi_n^{(0)}\rangle.    (4.82)

With a definition

c_n(t) = \tilde{c}_n(t) e^{iE_n t_0/\hbar},    (4.83)

(where we leave off the zero superscript for the unperturbed state), our time evolved ket
becomes

|\psi(t)\rangle = \sum_n c_n(t) e^{-iE_n t/\hbar} |\psi_n^{(0)}\rangle.    (4.84)

We can now plug eq. (4.81) into our evolution equation

d
i h̄ |ψI (t)i = H 0 (t) |ψI (t)i
dt (4.85)
i i
= e h̄ H0 (t−t0 ) H 0 (t)e− h̄ H0 (t−t0 ) |ψI (t)i ,

which gives us

i\hbar \sum_p \frac{\partial \tilde{c}_p(t)}{\partial t} |\psi_p^{(0)}\rangle = e^{\frac{i}{\hbar}H_0(t-t_0)} H'(t) e^{-\frac{i}{\hbar}H_0(t-t_0)} \sum_n \tilde{c}_n(t) |\psi_n^{(0)}\rangle.    (4.86)

We can apply the bra \langle\psi_m^{(0)}| to this equation, yielding

i\hbar \frac{\partial \tilde{c}_m(t)}{\partial t} = \sum_n \tilde{c}_n(t) e^{\frac{i}{\hbar}E_m(t-t_0)} \langle\psi_m^{(0)}| H'(t) |\psi_n^{(0)}\rangle e^{-\frac{i}{\hbar}E_n(t-t_0)}.    (4.87)

With

\omega_m = \frac{E_m}{\hbar}, \qquad \omega_{mn} = \omega_m - \omega_n, \qquad H'_{mn}(t) = \langle\psi_m^{(0)}| H'(t) |\psi_n^{(0)}\rangle,    (4.88)

this is

i\hbar \frac{\partial \tilde{c}_m(t)}{\partial t} = \sum_n \tilde{c}_n(t) e^{i\omega_{mn}(t-t_0)} H'_{mn}(t).    (4.89)

Inverting eq. (4.83) and plugging in

\tilde{c}_n(t) = c_n(t) e^{-i\omega_n t_0},    (4.90)

yields

i\hbar \frac{\partial c_m(t)}{\partial t} e^{-i\omega_m t_0} = \sum_n c_n(t) e^{-i\omega_n t_0} e^{i\omega_{mn} t} e^{-i(\omega_m - \omega_n) t_0} H'_{mn}(t),    (4.91)

from which we can cancel the exponentials on both sides yielding


i\hbar \frac{\partial c_m(t)}{\partial t} = \sum_n c_n(t) e^{i\omega_{mn} t} H'_{mn}(t).    (4.92)

We are now left with all of our time dependence nicely separated out, with the coefficients
cn (t) encoding all the non-oscillatory time evolution information

H = H_0 + H'(t)
|\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n t} |\psi_n^{(0)}\rangle    (4.93)
i\hbar \dot{c}_m = \sum_n H'_{mn}(t) e^{i\omega_{mn} t} c_n(t)
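The coefficient equation eq. (4.92) can be integrated numerically and compared against direct integration of the Schrödinger equation, confirming the representation eq. (4.84). The two level system and the sinusoidal H′(t) here are arbitrary toy choices (ħ = 1):

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.5])                               # unperturbed energies E_n
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Hp = lambda t: 0.2 * np.sin(2.0 * t) * sx              # toy H'(t) in the H0 eigenbasis
wmn = (E[:, None] - E[None, :]) / hbar                 # omega_mn = omega_m - omega_n

def rk2(f, y, t, dt):
    # Heun (second order Runge-Kutta) step
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

T, N = 5.0, 20000
dt = T / N
c = np.array([1, 0], dtype=complex)                    # coefficients c_n(t) of eq. (4.92)
psi = c.copy()                                         # full ket components in the H0 basis

fc = lambda t, c: (Hp(t) * np.exp(1j * wmn * t)) @ c / (1j * hbar)
fpsi = lambda t, p: (np.diag(E) + Hp(t)) @ p / (1j * hbar)
for k in range(N):
    t = k * dt
    c = rk2(fc, c, t, dt)
    psi = rk2(fpsi, psi, t, dt)

# eq. (4.84): psi_n(t) = c_n(t) e^{-i omega_n t}
assert np.allclose(psi, c * np.exp(-1j * E * T / hbar), atol=1e-5)
assert abs(np.linalg.norm(c) - 1) < 1e-5               # probability conserved
```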

4.6 perturbation expansion

We now introduce our λ parametrization

H'(t) \rightarrow \lambda H'(t),    (4.94)

and hope for convergence, or at least something that at least has well defined asymptotic
behavior. We have

i\hbar \dot{c}_m = \lambda \sum_n H'_{mn}(t) e^{i\omega_{mn} t} c_n(t),    (4.95)

and try

c_m(t) = c_m^{(0)}(t) + \lambda c_m^{(1)}(t) + \lambda^2 c_m^{(2)}(t) + \cdots    (4.96)

Plugging in, we have

i\hbar \sum_k \lambda^k \dot{c}_m^{(k)}(t) = \sum_{n, p} H'_{mn}(t) e^{i\omega_{mn} t} \lambda^{p+1} c_n^{(p)}(t).    (4.97)

As before, for equality, we treat this as an equation for each λk . Expanding explicitly for the
first few powers, gives us

 
0 = \lambda^0 \left( i\hbar \dot{c}_m^{(0)}(t) - 0 \right)
+ \lambda^1 \left( i\hbar \dot{c}_m^{(1)}(t) - \sum_n H'_{mn}(t) e^{i\omega_{mn} t} c_n^{(0)}(t) \right)
+ \lambda^2 \left( i\hbar \dot{c}_m^{(2)}(t) - \sum_n H'_{mn}(t) e^{i\omega_{mn} t} c_n^{(1)}(t) \right)    (4.98)
+ \cdots
Suppose we have a set of energy levels as depicted in fig. 4.3
With c_n^{(i)} = 0 before the perturbation for all i ≥ 1 and c_m^{(0)} = \delta_{ms}, we can proceed iteratively, solving each equation in turn, starting with

i\hbar \dot{c}_m^{(1)} = H'_{ms}(t) e^{i\omega_{ms} t}.    (4.99)

Figure 4.3: Perturbation around energy level s

Example 4.1: Slow nucleus passing an atom

H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t),    (4.100)

with
H'_{ms} = -\boldsymbol{\mu}_{ms} \cdot \mathbf{E}(t),    (4.101)

where
\boldsymbol{\mu}_{ms} = \langle\psi_m^{(0)}| \boldsymbol{\mu} |\psi_s^{(0)}\rangle.    (4.102)

Using our previous nucleus passing an atom example, as depicted in fig. 4.4

Figure 4.4: Slow nucleus passing an atom

We have

\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,    (4.103)

the dipole moment for each of the charges in the atom. We will have fields as depicted
in fig. 4.5

Figure 4.5: Fields for nucleus atom example

FIXME: think through.

Example 4.2: Electromagnetic wave pulse interacting with an atom

Consider a EM wave pulse, perhaps Gaussian, of the form depicted in fig. 4.6

Figure 4.6: Atom interacting with an EM pulse

E_y(t) = e^{-t^2/T^2} \cos(\omega_0 t).    (4.104)

As we learned very early, perhaps sitting on our mother’s knee, we can solve the differ-
ential equation eq. (4.99) for the first order perturbation, by direct integration

c_m^{(1)}(t) = \frac{1}{i\hbar} \int_{-\infty}^{t} H'_{ms}(t') e^{i\omega_{ms} t'} \, dt'.    (4.105)

Here the perturbation is assumed equal to zero at −∞. Suppose our electric field is
specified in terms of a Fourier transform

\mathbf{E}(t) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \mathbf{E}(\omega) e^{-i\omega t},    (4.106)

so

c_m^{(1)}(t) = \frac{1}{2\pi i\hbar} \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{t} dt' \, \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega) e^{i(\omega_{ms} - \omega) t'}.    (4.107)

From this, “after the perturbation”, as t → ∞ we find

c_m^{(1)}(\infty) = \frac{1}{2\pi i\hbar} \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty} dt' \, \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega) e^{i(\omega_{ms} - \omega) t'} = \frac{1}{i\hbar} \int_{-\infty}^{\infty} d\omega \, \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega) \, \delta(\omega_{ms} - \omega),    (4.108)

since we identify

\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(\omega_{ms} - \omega) t'} \, dt' \equiv \delta(\omega_{ms} - \omega).    (4.109)

Thus the steady state first order perturbation coefficient is

c_m^{(1)}(\infty) = \frac{\boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms})}{i\hbar}.    (4.110)

Frequency symmetry for the Fourier spectrum of a real field We will look further at this
next week, but we first require an intermediate result from transform theory. Because our
field is real, we have

\mathbf{E}^*(t) = \mathbf{E}(t),    (4.111)

so

\mathbf{E}^*(t) = \int \frac{d\omega}{2\pi} \mathbf{E}^*(\omega) e^{i\omega t} = \int \frac{d\omega}{2\pi} \mathbf{E}^*(-\omega) e^{-i\omega t},    (4.112)

and thus

\mathbf{E}(\omega) = \mathbf{E}^*(-\omega),    (4.113)

and

|\mathbf{E}(\omega)|^2 = |\mathbf{E}(-\omega)|^2.    (4.114)

We will see shortly what the point of this aside is.

4.7 time dependent perturbation

We had gotten as far as calculating

c_m^{(1)}(\infty) = \frac{1}{i\hbar} \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms}),    (4.115)
where

\mathbf{E}(t) = \int \frac{d\omega}{2\pi} \mathbf{E}(\omega) e^{-i\omega t},    (4.116)

and

\omega_{ms} = \frac{E_m - E_s}{\hbar}.    (4.117)

Graphically, these frequencies are illustrated in fig. 4.7
The probability for a transition from s to m is therefore

\rho_{s \to m} = \left| c_m^{(1)}(\infty) \right|^2 = \frac{1}{\hbar^2} \left| \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms}) \right|^2.    (4.118)
Recall that because the electric field is real we had

|\mathbf{E}(\omega)|^2 = |\mathbf{E}(-\omega)|^2.    (4.119)

Suppose that we have a wave pulse, where our field magnitude is perhaps of the form

\mathbf{E}(t) = e^{-t^2/T^2} \cos(\omega_0 t),    (4.120)

Figure 4.7: Positive and negative frequencies (ω_ms > 0: absorption; ω_ns < 0: stimulated emission)

Figure 4.8: Gaussian wave packet



as illustrated with ω_0 = 10, T = 1 in fig. 4.8.


We expect this to have a two lobe Fourier spectrum, with the lobes centered at ω = ±10, and
width proportional to 1/T .
For reference, as calculated using qmTwoL8figures.nb this Fourier transform is

E(\omega) = \frac{T}{2\sqrt{2}} \left( e^{-\frac{1}{4} T^2 (\omega - \omega_0)^2} + e^{-\frac{1}{4} T^2 (\omega + \omega_0)^2} \right).    (4.121)

This is illustrated, again for ω0 = 10, and T = 1, in fig. 4.9

Figure 4.9: Fourier transform of the Gaussian wave packet

where we see the expected Gaussian result, since the Fourier transform of a Gaussian is a
Gaussian.
FIXME: not sure what the point of this was?
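The two lobe structure and the reality symmetry |E(ω)|² = |E(−ω)|² can be confirmed with a discrete Fourier transform. The normalization below is chosen to approximate ∫E(t)e^{−iωt}dt; the grid parameters are arbitrary:

```python
import numpy as np

T, w0 = 1.0, 10.0
N, L = 4096, 40.0                                  # samples and time window length
t = (np.arange(N) - N // 2) * (L / N)              # grid on [-L/2, L/2)
Et = np.exp(-t**2 / T**2) * np.cos(w0 * t)

dtv = L / N
w = 2 * np.pi * np.fft.fftfreq(N, d=dtv)           # angular frequency grid
Ew = np.fft.fft(np.fft.ifftshift(Et)) * dtv        # approximates int E(t) e^{-i w t} dt

# real field: |E(w)| = |E(-w)|  (index k <-> N - k in FFT ordering)
mirror = np.concatenate(([0], np.arange(N - 1, 0, -1)))
assert np.allclose(np.abs(Ew), np.abs(Ew[mirror]), atol=1e-10)

# a lobe peaked near w = +w0, with width ~ 1/T
pos = np.arange(1, N // 2)
kpk = pos[np.argmax(np.abs(Ew[pos]))]
assert abs(w[kpk] - w0) < 2 * np.pi / L + 1e-9
```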

4.8 sudden perturbations

Given our wave equation

i\hbar \frac{d}{dt} |\psi(t)\rangle = H(t) |\psi(t)\rangle,    (4.122)
and a sudden perturbation in the Hamiltonian, as illustrated in fig. 4.10
Consider H0 and HF fixed, and decrease ∆t → 0. We can formally integrate eq. (4.122)

\frac{d}{dt} |\psi(t)\rangle = \frac{1}{i\hbar} H(t) |\psi(t)\rangle.    (4.123)
Integrating over [t_0, t],

|\psi(t)\rangle - |\psi(t_0)\rangle = \frac{1}{i\hbar} \int_{t_0}^{t} H(t') |\psi(t')\rangle \, dt'.    (4.124)

Figure 4.10: Sudden step Hamiltonian

While this is an exact solution, it is also not terribly useful since we do not know |ψ(t)i.
However, we can select the small interval ∆t, and write

|\psi(\Delta t/2)\rangle = |\psi(-\Delta t/2)\rangle + \frac{1}{i\hbar} \int_{-\Delta t/2}^{\Delta t/2} H(t') |\psi(t')\rangle \, dt'.    (4.125)

Note that we could use the integral kernel iteration technique here and substitute |ψ(t0 )i =
|ψ(−∆t/2)i and then develop this, to generate a power series with (∆t/2)k dependence. However,
we note that eq. (4.125) is still an exact relation, and if ∆t → 0, with the integration limits
narrowing (provided H(t0 ) is well behaved) we are left with just

|\psi(\Delta t/2)\rangle = |\psi(-\Delta t/2)\rangle.    (4.126)

Or

|\psi_{\text{after}}\rangle = |\psi_{\text{before}}\rangle,    (4.127)

provided that we change the Hamiltonian fast enough. On the surface there appears to be no
consequences, but there are some very serious ones!

Example 4.3: Harmonic oscillator

Consider our harmonic oscillator Hamiltonian, with

H_0 = \frac{P^2}{2m} + \frac{1}{2} m \omega_0^2 X^2, \qquad H_F = \frac{P^2}{2m} + \frac{1}{2} m \omega_F^2 X^2.    (4.128)

Here ω0 → ωF continuously, but very quickly. In effect, we have tightened the spring
constant. Note that there are cases in linear optics when you can actually do exactly that.
Imagine that |ψbefore i is in the ground state of the harmonic oscillator as in fig. 4.11

Figure 4.11: Harmonic oscillator sudden Hamiltonian perturbation

and we suddenly change the Hamiltonian with potential V0 → VF (weakening the


“spring”). Professor Sipe gives us a graphical demo of this, by impersonating a constrained
wavefunction with his arms, doing weak chicken-flapping of them. Now with the potential
weakened, he wiggles and flaps his arms with more freedom and somewhat chaotically.
His “wave function” arms are now bouncing around in the new limiting potential (initially
over doing it and then bouncing back).
We had in this case the exact relation

H_0 |\psi_0^{(0)}\rangle = \frac{1}{2} \hbar \omega_0 |\psi_0^{(0)}\rangle,    (4.129)
but we also have
|\psi_{\text{after}}\rangle = |\psi_{\text{before}}\rangle = |\psi_0^{(0)}\rangle,    (4.130)

and
H_F |\psi_n^{(f)}\rangle = \hbar \omega_F \left( n + \frac{1}{2} \right) |\psi_n^{(f)}\rangle.    (4.131)

So
|\psi_{\text{after}}\rangle = |\psi_0^{(0)}\rangle = \sum_n |\psi_n^{(f)}\rangle \langle\psi_n^{(f)}|\psi_0^{(0)}\rangle = \sum_n c_n |\psi_n^{(f)}\rangle,    (4.132)

where c_n = \langle\psi_n^{(f)}|\psi_0^{(0)}\rangle,

and at later times

|\psi(t)^{(f)}\rangle = \sum_n c_n e^{-i\omega_n^{(f)} t} |\psi_n^{(f)}\rangle,    (4.133)

whereas

|\psi(t)^{(o)}\rangle = e^{-i\omega_0^{(0)} t} |\psi_0^{(0)}\rangle.    (4.134)

So, while the wave functions may be exactly the same after such a sudden change in
Hamiltonian, the dynamics of the situation change for all future times, since we now have
a wavefunction that has a different set of components in the basis for the new Hamiltonian.
In particular, the evolution of the wave function is now significantly more complex.
FIXME: plot an example of this.
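The expansion coefficients c_n for this sudden quench can be computed numerically. This sketch (m = ħ = 1, with a hypothetical quench ω₀ = 1 → ω_F = 4) builds the oscillator eigenfunctions from the standard Hermite-function recurrence, and checks the closed form overlap of the two ground state Gaussians, the parity selection of even n, and completeness:

```python
import numpy as np

def ho_states(omega, x, nmax):
    # normalized harmonic oscillator eigenfunctions (m = hbar = 1),
    # via the Hermite-function recurrence
    xi = np.sqrt(omega) * x
    psi = np.zeros((nmax + 1, x.size))
    psi[0] = (omega / np.pi) ** 0.25 * np.exp(-xi**2 / 2)
    if nmax >= 1:
        psi[1] = np.sqrt(2.0) * xi * psi[0]
    for n in range(1, nmax):
        psi[n + 1] = np.sqrt(2.0 / (n + 1)) * xi * psi[n] - np.sqrt(n / (n + 1)) * psi[n - 1]
    return psi

w0, wF = 1.0, 4.0                        # spring suddenly tightened
x = np.linspace(-12, 12, 20001)
dx = x[1] - x[0]
old0 = ho_states(w0, x, 0)[0]            # |psi_before> = old ground state
new = ho_states(wF, x, 40)               # final Hamiltonian basis

c = new @ old0 * dx                      # c_n = <psi_n^(f) | psi_0^(0)>
# known overlap of two HO ground states: sqrt(2 sqrt(w0 wF) / (w0 + wF))
assert abs(c[0] - np.sqrt(2 * np.sqrt(w0 * wF) / (w0 + wF))) < 1e-8
assert np.allclose(c[1::2], 0, atol=1e-10)   # parity: odd n never appear
assert abs(np.sum(c**2) - 1) < 1e-6          # the unchanged ket is fully captured
```

The wavefunction is unchanged at the instant of the quench, but its decomposition over the new basis (many even n components) is what makes the subsequent dynamics more complex.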

4.9 adiabatic perturbations

This is treated in §17.5.2 of the text [4].


I wondered what Adiabatic meant in this context. The usage in class sounds like it was just
“really slow and gradual”, yet this has a definition “Of, relating to, or being a reversible ther-
modynamic process that occurs without gain or loss of heat and without a change in entropy”.
Wikipedia [14] appears to confirm that the QM meaning of this term is just “slow” changing.
This is the reverse case, and we now vary the Hamiltonian H(t) very slowly.

\frac{d}{dt} |\psi(t)\rangle = \frac{1}{i\hbar} H(t) |\psi(t)\rangle.    (4.135)

We first consider only non-degenerate states, and at t = 0 write

H(0) = H0 , (4.136)

and
H_0 |\psi_s^{(0)}\rangle = E_s^{(0)} |\psi_s^{(0)}\rangle.    (4.137)

Imagine that at each time t we can find the “instantaneous” energy eigenstates

H(t) |\hat{\psi}_s(t)\rangle = E_s(t) |\hat{\psi}_s(t)\rangle.    (4.138)

These states do not satisfy Schrödinger’s equation, but are simply solutions to the eigen
problem. Our standard strategy in perturbation is based on analysis of

|\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n^{(0)} t} |\psi_n^{(0)}\rangle.    (4.139)

Here instead

|\psi(t)\rangle = \sum_n b_n(t) |\hat{\psi}_n(t)\rangle,    (4.140)

we will expand, not using our initial basis, but instead using the instantaneous kets. Plugging
into Schrödinger’s equation we have

H(t) |\psi(t)\rangle = H(t) \sum_n b_n(t) |\hat{\psi}_n(t)\rangle = \sum_n b_n(t) E_n(t) |\hat{\psi}_n(t)\rangle.    (4.141)

This was complicated before with matrix elements all over the place. Now it is easy, however,
the time derivative becomes harder. Doing that we find

i\hbar \frac{d}{dt} |\psi(t)\rangle = i\hbar \frac{d}{dt} \sum_n b_n(t) |\hat{\psi}_n(t)\rangle = i\hbar \sum_n \left( \frac{db_n(t)}{dt} |\hat{\psi}_n(t)\rangle + b_n(t) \frac{d}{dt} |\hat{\psi}_n(t)\rangle \right) = \sum_n b_n(t) E_n(t) |\hat{\psi}_n(t)\rangle.    (4.142)

We bra \langle\hat{\psi}_m(t)| into this,

i\hbar \sum_n \frac{db_n(t)}{dt} \langle\hat{\psi}_m(t)|\hat{\psi}_n(t)\rangle + i\hbar \sum_n b_n(t) \langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle = \sum_n b_n(t) E_n(t) \langle\hat{\psi}_m(t)|\hat{\psi}_n(t)\rangle,    (4.143)

and find

i\hbar \frac{db_m(t)}{dt} + i\hbar \sum_n b_n(t) \langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle = b_m(t) E_m(t).    (4.144)
If the Hamiltonian is changed very, very slowly in time, we can imagine that \frac{d}{dt}|\hat{\psi}_n(t)\rangle is also very small, but we are not quite there yet. Let us first split our sum of bra and ket products

\sum_n b_n(t) \langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle    (4.145)

into n ≠ m and n = m terms. Looking at just the n = m term,

\langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_m(t)\rangle,    (4.146)
we note

0 = \frac{d}{dt} \langle\hat{\psi}_m(t)|\hat{\psi}_m(t)\rangle = \left( \frac{d}{dt} \langle\hat{\psi}_m(t)| \right) |\hat{\psi}_m(t)\rangle + \langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_m(t)\rangle.    (4.147)
Something plus its complex conjugate equals 0

a + ib + (a + ib)^* = 2a = 0 \implies a = 0,    (4.148)
so \langle\hat{\psi}_m(t)| \frac{d}{dt} |\hat{\psi}_m(t)\rangle must be purely imaginary. We write

\langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_s(t)\rangle = -i\Gamma_s(t),    (4.149)
where Γ s is real.

4.10 adiabatic perturbation theory (cont.)

We were working through Adiabatic time dependent perturbation (as also covered in §17.5.2 of
the text [4].)
Utilizing an expansion

|\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n^{(0)} t} |\psi_n^{(0)}\rangle = \sum_n b_n(t) |\hat{\psi}_n(t)\rangle,    (4.150)
where

H(t) |\hat{\psi}_s(t)\rangle = E_s(t) |\hat{\psi}_s(t)\rangle,    (4.151)
and found

\frac{db_s(t)}{dt} = -i(\omega_s(t) - \Gamma_s(t)) b_s(t) - \sum_{n \neq s} b_n(t) \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle,    (4.152)
where

\Gamma_s(t) = i \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_s(t)\rangle.    (4.153)
Look for a solution of the form

b_s(t) = \bar{b}_s(t) e^{-i \int_0^t dt' (\omega_s(t') - \Gamma_s(t'))} = \bar{b}_s(t) e^{-i\gamma_s(t)},    (4.154)
where
\gamma_s(t) = \int_0^t dt' \, (\omega_s(t') - \Gamma_s(t')).    (4.155)

Taking derivatives of \bar{b}_s, and after a bit of manipulation, we find that things conveniently cancel

\frac{db_s(t)}{dt} = \frac{d}{dt} \left( \bar{b}_s(t) e^{-i\gamma_s(t)} \right) = \frac{d\bar{b}_s(t)}{dt} e^{-i\gamma_s(t)} - i(\omega_s(t) - \Gamma_s(t)) \bar{b}_s(t) e^{-i\gamma_s(t)}.    (4.156)

We find

\frac{d\bar{b}_s(t)}{dt} e^{-i\gamma_s(t)} = \frac{db_s(t)}{dt} + i(\omega_s(t) - \Gamma_s(t)) b_s(t) = -\sum_{n \neq s} b_n(t) \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle,    (4.157)

where the \pm i(\omega_s(t) - \Gamma_s(t)) b_s(t) terms have cancelled,

so

\frac{d\bar{b}_s(t)}{dt} = -\sum_{n \neq s} b_n(t) e^{i\gamma_s(t)} \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle = -\sum_{n \neq s} \bar{b}_n(t) e^{i(\gamma_s(t) - \gamma_n(t))} \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle.    (4.158)

With a last bit of notation

\gamma_{sn}(t) = \gamma_s(t) - \gamma_n(t),    (4.159)

the problem is reduced to one involving only the sums over the n ≠ s terms, where all of the dependence on \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_s(t)\rangle has been nicely isolated in a phase term

\frac{d\bar{b}_s(t)}{dt} = -\sum_{n \neq s} \bar{b}_n(t) e^{i\gamma_{sn}(t)} \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle.    (4.160)

Looking for an approximate solution

Try : An approximate solution

\bar{b}_n(t) = \delta_{nm}.    (4.161)

For s = m this is okay, since we have d\delta_{mm}/dt = 0, which is consistent with the vanishing sum

\sum_{n \neq m} \delta_{nm} (\cdots) = 0.    (4.162)

However, for s ≠ m we get

\frac{d\bar{b}_s(t)}{dt} = -\sum_{n \neq s} \delta_{nm} e^{i\gamma_{sn}(t)} \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_n(t)\rangle = -e^{i\gamma_{sm}(t)} \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_m(t)\rangle.    (4.163)
But

\gamma_{sm}(t) = \int_0^t dt' \left( \frac{1}{\hbar} (E_s(t') - E_m(t')) - \Gamma_s(t') + \Gamma_m(t') \right).    (4.164)
FIXME: I think we argued in class that the Γ contributions are negligible. Why was that?
Now, our energy levels will vary with time, as illustrated in fig. 4.12.

Figure 4.12: Energy level variation with time (levels E_0(t) through E_3(t))

Perhaps unrealistically, suppose that our energy levels have some “typical” energy difference
∆E, so that

\gamma_{sm}(t) \approx \frac{\Delta E}{\hbar} t \equiv \frac{t}{\tau},    (4.165)
or


\tau = \frac{\hbar}{\Delta E}.    (4.166)
Suppose that τ is much less than a typical time T over which instantaneous quantities (wavefunctions and brakets) change. After a large time T,

e^{i\gamma_{sm}(t)} \approx e^{iT/\tau},    (4.167)



Figure 4.13: Phase whipping around

so we have our phase term whipping around really fast, as illustrated in fig. 4.13. So, while \langle\hat{\psi}_s(t)| \frac{d}{dt} |\hat{\psi}_m(t)\rangle is moving really slowly, the phase portion is changing really fast. The key to the approximate solution is factoring out this quickly changing phase term.

Note: the Γ_s(t) contribution is called the "Berry", or geometric, phase [15], and can be shown to have a geometric interpretation, whereas the E_s(t')/ħ part is the dynamical phase.
To proceed we can introduce λ terms, perhaps

\bar{b}_s(t) = \delta_{ms} + \lambda \bar{b}_s^{(1)}(t) + \cdots,    (4.168)

and
-\sum_{n \neq s} e^{i\gamma_{sn}(t)} \lambda (\cdots).    (4.169)

This λ approximation and a similar Taylor series expansion in time have been explored further
in 19.
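The adiabatic claim, that for slow enough variation the state simply tracks the instantaneous eigenstate, can be illustrated numerically with a two level avoided crossing (a Landau–Zener style sweep; the model and parameters are arbitrary toy choices, ħ = 1):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_population(v, T, N=20000):
    # sweep H(t) = sx + v t sz from -T/2 to T/2 and return the final
    # population of the instantaneous ground state (hbar = 1)
    dt = T / N
    ts = -T / 2 + dt * (np.arange(N) + 0.5)          # midpoint times
    w, V = np.linalg.eigh(sx + (-T / 2) * v * sz)
    psi = V[:, 0].astype(complex)                    # start in the ground state
    for t in ts:
        w, V = np.linalg.eigh(sx + v * t * sz)
        psi = (V * np.exp(-1j * w * dt)) @ (V.conj().T @ psi)
    w, V = np.linalg.eigh(sx + v * (T / 2) * sz)
    return abs(np.vdot(V[:, 0], psi)) ** 2

p_slow = ground_population(v=0.05, T=40.0)  # gap 2 at the crossing, very slow sweep
p_fast = ground_population(v=5.0, T=8.0)    # same gap, fast sweep through it
assert p_slow > 0.99    # adiabatic: state tracks the instantaneous eigenstate
assert p_fast < 0.9     # fast sweep: substantial leakage out of the ground state
```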

Degeneracy Suppose we have some branching of energy levels that were initially degenerate,
as illustrated in fig. 4.14
We have a necessity to choose states properly so there is a continuous evolution in the instan-
taneous eigenvalues as H(t) changes.

Question: A physical example? FIXME: Prof Sipe to ponder and revisit.

4.11 examples

Figure 4.14: Degenerate energy level splitting (levels E_0(t); E_{11}(t), E_{12}(t), E_{13}(t); E_{21}(t); E_{31}(t), E_{32}(t))

Example 4.4: Adiabatic perturbation theory

Utilizing instantaneous eigenstates

|\psi(t)\rangle = \sum_\alpha b_\alpha(t) |\hat{\psi}_\alpha(t)\rangle,    (4.170)

where
H(t) |\hat{\psi}_\alpha(t)\rangle = E_\alpha(t) |\hat{\psi}_\alpha(t)\rangle.    (4.171)

We found

b_\alpha(t) = \bar{b}_\alpha(t) e^{-\frac{i}{\hbar} \int_0^t (E_\alpha(t') - \hbar \Gamma_\alpha(t')) dt'},    (4.172)

where
\Gamma_\alpha = i \langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\alpha(t)\rangle,    (4.173)

and

\frac{d\bar{b}_\alpha(t)}{dt} = -\sum_{\beta \neq \alpha} \bar{b}_\beta(t) e^{-\frac{i}{\hbar} \int_0^t (E_{\beta\alpha}(t') - \hbar \Gamma_{\beta\alpha}(t')) dt'} \langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle.    (4.174)

Suppose we start in a subspace

\mathrm{span}\left\{ \frac{1}{\sqrt{2}} (|2,0,0\rangle \pm |2,1,0\rangle),\ |2,1,\pm 1\rangle \right\}.    (4.175)
Now expand the bra derivative kets

\langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle = \left( \langle\psi_\alpha^{(0)}| + \sum_{\gamma} \frac{\langle\psi_\gamma^{(0)}| H' |\psi_\alpha^{(0)}\rangle^*}{E_\alpha^{(0)} - E_\gamma^{(0)}} \langle\psi_\gamma^{(0)}| \right) \frac{d}{dt} \left( |\psi_\beta^{(0)}\rangle + \sum_{\gamma'} |\psi_{\gamma'}^{(0)}\rangle \frac{\langle\psi_{\gamma'}^{(0)}| H' |\psi_\beta^{(0)}\rangle}{E_\beta^{(0)} - E_{\gamma'}^{(0)}} \right).    (4.176)

To first order we can drop the quadratic terms in γ, γ′, leaving

\langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle \sim \sum_{\gamma'} \langle\psi_\alpha^{(0)}|\psi_{\gamma'}^{(0)}\rangle \frac{\langle\psi_{\gamma'}^{(0)}| \frac{dH'(t)}{dt} |\psi_\beta^{(0)}\rangle}{E_\beta^{(0)} - E_{\gamma'}^{(0)}} = \frac{\langle\psi_\alpha^{(0)}| \frac{dH'(t)}{dt} |\psi_\beta^{(0)}\rangle}{E_\beta^{(0)} - E_\alpha^{(0)}},    (4.177)

so

\frac{d\bar{b}_\alpha(t)}{dt} = -\sum_{\beta \neq \alpha} \bar{b}_\beta(t) e^{-\frac{i}{\hbar} \int_0^t (E_{\beta\alpha}(t') - \hbar \Gamma_{\beta\alpha}(t')) dt'} \frac{\langle\psi_\alpha^{(0)}| \frac{dH'(t)}{dt} |\psi_\beta^{(0)}\rangle}{E_\beta^{(0)} - E_\alpha^{(0)}}.    (4.178)

A different way to this end result A result of this form is also derived in [2] §20.1, but
with a different approach. There he takes derivatives of

H(t) |\hat{\psi}_\beta(t)\rangle = E_\beta(t) |\hat{\psi}_\beta(t)\rangle,    (4.179)

\frac{dH(t)}{dt} |\hat{\psi}_\beta(t)\rangle + H(t) \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle = \frac{dE_\beta(t)}{dt} |\hat{\psi}_\beta(t)\rangle + E_\beta(t) \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle.    (4.180)
Bra'ing \langle\hat{\psi}_\alpha(t)| into this we have, for α ≠ β (the dE_\beta/dt term drops since \langle\hat{\psi}_\alpha(t)|\hat{\psi}_\beta(t)\rangle = 0),

\langle\hat{\psi}_\alpha(t)| \frac{dH(t)}{dt} |\hat{\psi}_\beta(t)\rangle + E_\alpha(t) \langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle = E_\beta(t) \langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle,    (4.181)
or
\langle\hat{\psi}_\alpha(t)| \frac{d}{dt} |\hat{\psi}_\beta(t)\rangle = \frac{\langle\hat{\psi}_\alpha(t)| \frac{dH(t)}{dt} |\hat{\psi}_\beta(t)\rangle}{E_\beta(t) - E_\alpha(t)},    (4.182)
so without the implied λ perturbation of |\hat{\psi}_\alpha(t)\rangle we can, from eq. (4.174), write the exact generalization of eq. (4.178) as

\frac{d\bar{b}_\alpha(t)}{dt} = -\sum_{\beta \neq \alpha} \bar{b}_\beta(t) e^{-\frac{i}{\hbar} \int_0^t (E_{\beta\alpha}(t') - \hbar \Gamma_{\beta\alpha}(t')) dt'} \frac{\langle\hat{\psi}_\alpha(t)| \frac{dH(t)}{dt} |\hat{\psi}_\beta(t)\rangle}{E_\beta(t) - E_\alpha(t)}.    (4.183)
5 FERMI'S GOLDEN RULE
See §17.2 of the text [4].
Fermi originally had two golden rules, but his first one has mostly been forgotten. This refers
to his second.
This is really important, and probably the single most important thing to learn in this course.
You will find this falls out of many complex calculations.
Returning to general time dependent equations with

H = H_0 + H'(t),    (5.1)

|\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n t} |\psi_n\rangle,    (5.2)
and

i\hbar \dot{c}_m = \sum_n H'_{mn} e^{i\omega_{mn} t} c_n(t),    (5.3)
where

H'_{mn}(t) = \langle\psi_m| H'(t) |\psi_n\rangle, \qquad \omega_n = \frac{E_n}{\hbar}, \qquad \omega_{mn} = \omega_m - \omega_n.    (5.4)

Example 5.1: Electric field potential

H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).    (5.5)

If c_m^{(0)} = \delta_{mi}, then to first order

i\hbar \dot{c}_m^{(1)}(t) = H'_{mi}(t) e^{i\omega_{mi} t},    (5.6)


and

c_m^{(1)}(t) = \frac{1}{i\hbar} \int_{t_0}^{t} H'_{mi}(t') e^{i\omega_{mi} t'} \, dt'.    (5.7)

Assume the perturbation vanishes before time t0 .

Reminder We have considered this, using eq. (5.7), for a pulse as in fig. 5.1.

Figure 5.1: Gaussian wave packet

Now we want to consider instead a non-terminating signal, that was zero before some
initial time as illustrated in fig. 5.2, where the separation between two peaks is ∆t = 2π/ω0 .

Figure 5.2: Sine only after an initial time

Our matrix element is

H'_{mi}(t) = -\langle\psi_m| \boldsymbol{\mu} |\psi_i\rangle \cdot \mathbf{E}(t) = \begin{cases} 2A_{mi} \sin(\omega_0 t) & t > 0 \\ 0 & t < 0 \end{cases}    (5.8)

fermi’s golden rule 97

Here the factor of 2 has been included for consistency with the text.

 
0
Hmi (t) = iAmi e−iω0 t − eiω0 t (5.9)

Plug this into the perturbation

c_m^(1)(t) = (A_mi/ħ) ∫_{t₀}^t dt′ (e^{i(ω_mi − ω₀)t′} − e^{i(ω_mi + ω₀)t′}).   (5.10)

Figure 5.3: ωmi illustrated

Suppose that

ω₀ ≈ ω_mi,   (5.11)

then

c_m^(1)(t) ≈ (A_mi/ħ) ∫_{t₀}^t dt′ (1 − e^{2iω₀t′}),   (5.12)

but the exponential has essentially no contribution

|∫_0^t e^{2iω₀t′} dt′| = |(e^{2iω₀t} − 1)/(2iω₀)| = |sin(ω₀t)/ω₀| ≤ 1/ω₀,   (5.13)

so for t ≫ 1/ω₀ and ω₀ ≈ ω_mi we have

c_m^(1)(t) ≈ (A_mi/ħ) t.   (5.14)

Similarly for ω₀ ≈ ω_im as in fig. 5.4

Figure 5.4: FIXME: qmTwoL9fig7

then

c_m^(1)(t) ≈ (A_mi/ħ) ∫_{t₀}^t dt′ (e^{−2iω₀t′} − 1),   (5.15)

and we have

c_m^(1)(t) ≈ −(A_mi/ħ) t.   (5.16)

5.1 recap. where we got to on fermi’s golden rule

We are continuing on the topic of Fermi golden rule, as also covered in §17.2 of the text [4].
Utilizing a wave train with peak separation ∆t = 2π/ω₀, zero before some initial time, as in fig. 5.5.

Figure 5.5: Sine only after an initial time

Perturbing a state in the ith energy level, and looking at the states for the mth energy level as
illustrated in fig. 5.6

Figure 5.6: Perturbation from i to mth energy levels



Our matrix element was

H′_mi(t) = 2A_mi sin(ω₀t) θ(t) = iA_mi (e^{−iω₀t} − e^{iω₀t}) θ(t),   (5.17)

and we found

c_m^(1)(t) = (A_mi/ħ) ∫_0^t dt′ (e^{i(ω_mi − ω₀)t′} − e^{i(ω_mi + ω₀)t′}),   (5.18)

and argued that

|c_m^(1)(t)|² ∼ (A_mi/ħ)² t² + · · ·   (5.19)

where ω₀t ≫ 1 for ω_mi ∼ ±ω₀.
We can also just integrate eq. (5.18) directly

c_m^(1)(t) = (A_mi/ħ) [ (e^{i(ω_mi − ω₀)t} − 1)/(i(ω_mi − ω₀)) − (e^{i(ω_mi + ω₀)t} − 1)/(i(ω_mi + ω₀)) ]
           ≡ A_mi(ω₀, t) − A_mi(−ω₀, t),   (5.20)

where

A_mi(ω₀, t) = (A_mi/ħ) (e^{i(ω_mi − ω₀)t} − 1)/(i(ω_mi − ω₀)).   (5.21)

Factoring out the phase term, we have

A_mi(ω₀, t) = (2A_mi/ħ) e^{i(ω_mi − ω₀)t/2} sin((ω_mi − ω₀)t/2)/(ω_mi − ω₀).   (5.22)

We will have two lobes, centered on ω_mi = ±ω₀, as illustrated in fig. 5.7.
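The two-lobe structure can be seen directly from eq. (5.22): |A_mi(±ω₀, t)|² is a sinc² lobe of height (A_mi t/ħ)² and width ∼ 2π/t. A minimal numerical sketch (Python/numpy; ħ = 1 and the particular A_mi, ω₀, t values are arbitrary assumptions):

```python
import numpy as np

hbar, A, w0, t = 1.0, 1.0, 5.0, 20.0
w_mi = np.linspace(-15.0, 15.0, 100_001)

def lobe(center):
    """|A_mi(w0, t)|^2 from eq. (5.22): a sinc^2 lobe at w_mi = center."""
    x = w_mi - center
    # sin^2(x t/2)/x^2 == (t/2)^2 sinc(x t / 2pi)^2, with numpy's normalized sinc
    return (2 * A / hbar)**2 * (t / 2)**2 * np.sinc(x * t / (2 * np.pi))**2

total = lobe(+w0) + lobe(-w0)          # the two lobes
peak_w = w_mi[np.argmax(total)]
print(peak_w, total.max())             # peak near +-w0, height near (A t / hbar)^2
```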

Figure 5.7: Two sinc lobes

5.2 fermi's golden rule

Fermi's Golden rule applies to a continuum of states (there are other forms of Fermi's golden rule, but this is the one we will talk about, and is the one in the book). One example is the ionized states of an atom, where the energy level separation becomes so small that we can consider it continuous.

Figure 5.8: Continuum of energy levels for ionized states of an atom

Another example is the unbound states in a semiconductor well, as illustrated in fig. 5.9.

Figure 5.9: Semi-conductor well
Note that we can have reflection from the well even in the continuum states where we would
have no such reflection classically. However, with enough energy, states are approximately plane
waves. In one dimension

⟨x|ψ_p⟩ ≈ e^{ipx/ħ}/√(2πħ),
⟨ψ_p|ψ_p′⟩ = δ(p − p′),   (5.23)

or in 3d

⟨r|ψ_p⟩ ≈ e^{ip·r/ħ}/(2πħ)^{3/2},
⟨ψ_p|ψ_p′⟩ = δ³(p − p′).   (5.24)

Let us consider the 1d model for the quantum well in more detail. Including both discrete and
continuous states we have

|ψ(t)⟩ = Σ_n c_n(t) e^{−iω_n t} |ψ_n⟩ + ∫ dp c_p(t) e^{−iω_p t} |ψ_p⟩   (5.25)

Imagine at t = 0 that the wave function started in some discrete state, and look at the proba-
bility that we “kick the electron out of the well”. Calculate

P = ∫ dp |c_p^(1)(t)|²   (5.26)

Now, we assume that our matrix element has the following form

H′_pi(t) = (A_pi e^{−iω₀t} + B_pi e^{iω₀t}) θ(t),   (5.27)

generalizing the wave train matrix element that we had previously

H′_mi(t) = iA_mi (e^{−iω₀t} − e^{iω₀t}) θ(t).   (5.28)

Doing the perturbation we have

P = ∫ dp |A_pi(ω₀, t) + B_pi(−ω₀, t)|²   (5.29)

where

A_pi(ω₀, t) = (2A_pi/iħ) e^{i(ω_pi − ω₀)t/2} sin((ω_pi − ω₀)t/2)/(ω_pi − ω₀),   (5.30)

which is peaked at ω_pi = ω₀, and

B_pi(ω₀, t) = (2B_pi/iħ) e^{i(ω_pi + ω₀)t/2} sin((ω_pi + ω₀)t/2)/(ω_pi + ω₀),   (5.31)

which is peaked at ω_pi = −ω₀.


FIXME: show that this is the perturbation result.
In eq. (5.29) for t ≫ 0 the only significant contribution is from the A portion, as illustrated in fig. 5.10, where we are down in the wiggles of A_pi.

Figure 5.10

Our probability to find the particle in the continuum range is now approximately

P = ∫ dp |A_pi(ω₀, t)|²   (5.32)

With

ω_pi − ω₀ = (1/ħ)(p²/2m − E_i) − ω₀,   (5.33)

define p̄ so that

0 = (1/ħ)(p̄²/2m − E_i) − ω₀.   (5.34)

In momentum space, we now have the sinc functions peaked at ±p̄ as in fig. 5.11

Figure 5.11: Momentum space view

The probability that the electron goes to the right is then

P₊ = ∫_0^∞ dp |c_p^(1)(t)|² = (4/ħ²) ∫_0^∞ dp |A_pi|² sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)²,   (5.35)

with

ω_pi = (1/ħ)(p²/2m − E_i).   (5.36)

we have with a change of variables

P₊ = (4/ħ²) ∫_{−E_i/ħ}^∞ dω_pi (dp/dω_pi) |A_pi|² sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)².   (5.37)
Now suppose we have t small enough so that P₊ ≪ 1, and t large enough so that

|A_pi|² dp/dω_pi   (5.38)

is roughly constant over ∆ω. This is a sort of "Goldilocks condition", a time that can not be too small, and can not be too large, but instead has to be "just right". Given such a condition

P₊ = (4/ħ²) |A_pi|² (dp/dω_pi) ∫_{−E_i/ħ}^∞ dω_pi sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)²,   (5.39)

where we can pull things out of the integral since the main contribution is at the peak. Provided p̄ is large enough, using eq. (C.5), then

∫_{−E_i/ħ}^∞ dω_pi sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)² ≈ ∫_{−∞}^∞ dω_pi sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)² = πt/2,   (5.40)
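The sinc² integral used in eq. (5.40) is easy to verify numerically (Python/numpy sketch; the integration window and grid are arbitrary choices, and the removable singularity at ω = ω₀ is handled with np.sinc):

```python
import numpy as np

t = 10.0
w = np.linspace(-400.0, 400.0, 4_000_001)
dw = w[1] - w[0]

# sin^2(w t/2)/w^2 written via numpy's normalized sinc, which is finite at w = 0.
integrand = (t / 2)**2 * np.sinc(w * t / (2 * np.pi))**2

val = integrand.sum() * dw   # Riemann sum over a wide window
print(val, np.pi * t / 2)    # both approximately 15.7
```

The 1/ω² tails mean the truncated window only misses O(1/ω_max) of the area, which is why a finite window suffices.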
leaving the probability of the electron ending up in a right-going continuum state as

P₊ = (4/ħ²) |A_pi|² (dp/dω_pi)|_{p̄} (πt/2).   (5.41)

Here the |A_pi|² factor is the matrix element (squared), and dp/dω_pi is the density of states: it is something like "how many continuous states are associated with a transition from a discrete frequency interval."
We can also get this formally from eq. (5.39) with

sin²((ω_pi − ω₀)t/2)/(ω_pi − ω₀)² → (πt/2) δ(ω_pi − ω₀),   (5.42)

so

|c_p^(1)(t)|² → (2πt/ħ²) |A_pi|² δ(ω_pi − ω₀) = (2πt/ħ) |A_pi|² δ(E_pi − ħω₀),   (5.43)

where δ(ax) = δ(x)/|a| has been used to pull a factor of ħ into the delta.

The ratio of the coefficient to time is then

|c_p^(1)(t)|²/t = (2π/ħ) |A_pi|² δ(E_pi − ħω₀),   (5.44)

or "between friends"

"d|c_p^(1)(t)|²/dt" = (2π/ħ) |A_pi|² δ(E_pi − ħω₀),   (5.45)
roughly speaking, we have a "rate" of transitions from the discrete into the continuous. Here "rate" is in quotes since it does not hold for small t.

This has been worked out for P₊. This can also be done for P₋, the probability that the electron will end up in a left-trending continuum state.

While the above is not a formal derivation, it illustrates the form of what is called Fermi's golden rule, namely that such a rate has the structure

rate = (2π/ħ) × (matrix element)² × (energy conservation δ-function).   (5.46)
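The "rate" interpretation can be checked numerically against eq. (5.35): for intermediate times, P₊(t)/t settles onto the golden-rule value. A sketch (Python/numpy; ħ = m = 1, and the values of A, E_i, ω₀ are assumptions chosen so the resonance sits at p̄ = 1, where dp/dω_pi = m/p̄ = 1):

```python
import numpy as np

hbar = m = 1.0
A, E_i, w0 = 0.01, -0.5, 1.0            # assumed constants
p = np.linspace(1e-6, 6.0, 400_001)
dp = p[1] - p[0]
w_pi = (p**2 / (2 * m) - E_i) / hbar    # eq. (5.36)

def P_plus(t):
    """P+(t) from eq. (5.35), with a constant matrix element A_pi = A."""
    x = w_pi - w0
    integrand = (4 * A**2 / hbar**2) * (t / 2)**2 * np.sinc(x * t / (2 * np.pi))**2
    return integrand.sum() * dp         # Riemann sum

# eq. (5.41): rate = (4/hbar^2)|A|^2 (dp/dw)|_pbar (pi/2), with dp/dw = 1 here
rate_fgr = 2 * np.pi * A**2 / hbar**2
ratios = [P_plus(t) / (t * rate_fgr) for t in (20.0, 50.0, 100.0)]
print(ratios)                           # all close to 1 in the "Goldilocks" regime
```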

WKB METHOD
6
6.1 wkb (wentzel-kramers-brillouin) method

This is covered in §24 in the text [4]. Also §8 of [6].


We start with the 1D time independent Schrödinger equation

−(ħ²/2m) d²U/dx² + V(x)U(x) = EU(x),   (6.1)

which we can write as

d²U/dx² + (2m/ħ²)(E − V(x))U(x) = 0.   (6.2)
Consider a finite well potential as in fig. 6.1

Figure 6.1: Finite well potential

With

k² = 2m(E − V)/ħ², for E > V,
κ² = 2m(V − E)/ħ², for V > E,   (6.3)

we have for a bound state within the well

U ∝ e±ikx (6.4)


and for that state outside the well

U ∝ e±κx (6.5)

In general we can hope for something similar. Let us look for that something, but allow the
constants k and κ to be functions of position

k²(x) = 2m(E − V(x))/ħ², for E > V,
κ²(x) = 2m(V(x) − E)/ħ², for V > E.   (6.6)

In terms of k Schrödinger’s equation is just

d²U(x)/dx² + k²(x)U(x) = 0.   (6.7)
We use the trial solution

U(x) = Aeiφ(x) , (6.8)

allowing φ(x) to be complex

φ(x) = φR (x) + iφI (x). (6.9)

We need second derivatives

(e^{iφ})″ = (iφ′ e^{iφ})′ = (iφ′)² e^{iφ} + iφ″ e^{iφ},   (6.10)

and plug back into our Schrödinger equation to obtain

−(φ′(x))² + iφ″(x) + k²(x) = 0.   (6.11)

For the first round of approximation we assume

φ″(x) ≈ 0,   (6.12)

and obtain

(φ′(x))² = k²(x),   (6.13)

or

φ′(x) = ±k(x).   (6.14)

For a second round of approximation we use eq. (6.14) and obtain

φ″(x) = ±k′(x).   (6.15)

Plugging back into eq. (6.11) we have

−(φ′(x))² ± ik′(x) + k²(x) = 0.   (6.16)

Things get a little confusing here with the ± variation since we have to take a second set of square roots, so let's consider these separately.

Case I: positive root. With φ′ ≈ +k, we have

−(φ′(x))² + ik′(x) + k²(x) = 0,   (6.17)

or

φ′(x) = ±√(ik′(x) + k²(x)) = ±k(x) √(1 + i k′(x)/k²(x)).   (6.18)

If k′ is small compared to k²,

|k′(x)/k²(x)| ≪ 1,   (6.19)

then we have

φ′(x) ≈ ±k(x) (1 + i k′(x)/(2k²(x))) = ±(k(x) + i k′(x)/(2k(x))).   (6.20)

Since we'd picked φ′ ≈ +k in this case, we pick the positive sign, and can now integrate

φ(x) = ∫ dx k(x) + i ∫ dx k′(x)/(2k(x)) + ln const = ∫ dx k(x) + (i/2) ln k(x) + ln const.   (6.21)

Going back to our wavefunction, for this E > V(x) case we have

U(x) ∼ e^{iφ(x)} = exp(i(∫ dx k(x) + (i/2) ln k(x) + const)) ∼ e^{i∫dx k(x)} e^{−(1/2) ln k(x)},   (6.22)

or

U(x) ∝ (1/√k(x)) e^{i∫dx k(x)}.   (6.23)

Case II: negative root. Now treat φ′ ≈ −k. This gives us

φ′(x) ≈ ±k(x) (1 − i k′(x)/(2k²(x))) = ±(k(x) − i k′(x)/(2k(x))).   (6.24)

This time we want the negative root to match φ′ ≈ −k. Integrating, we have

iφ(x) = −i ∫ dx (k(x) − i k′(x)/(2k(x))) = −i ∫ k(x) dx − (1/2) ∫ dx k′/k = −i ∫ k(x) dx − (1/2) ln k + ln const.   (6.25)

This gives us

U(x) ∝ (1/√k(x)) e^{−i∫dx k(x)}.   (6.26)

Provided we have eq. (6.19), we can summarize these as

U(x) ∝ (1/√k(x)) e^{±i∫dx k(x)}.   (6.27)

It's not hard to show that for the E < V(x) case we find

U(x) ∝ (1/√κ(x)) e^{±∫dx κ(x)},   (6.28)

this time provided that our potential satisfies

|κ′(x)/κ²(x)| ≪ 1.   (6.29)
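The 1/√k(x) amplitude in eq. (6.27) can be checked against a direct numerical integration of eq. (6.7). A sketch (Python with scipy; ħ = m = 1 and a slowly varying ramp V(x) = 0.2x with E = 10 are assumptions, giving k′/k² ∼ 10⁻³ ≪ 1): the oscillation peaks of the exact solution should scale as k^{−1/2}.

```python
import numpy as np
from scipy.integrate import solve_ivp

E, slope = 10.0, 0.2
k = lambda x: np.sqrt(2 * (E - slope * x))   # hbar = m = 1

# Integrate U'' + k^2(x) U = 0 numerically with generic initial conditions.
rhs = lambda x, y: [y[1], -k(x)**2 * y[0]]
xs = np.linspace(0, 8, 8001)
sol = solve_ivp(rhs, (0, 8), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)
u = sol.y[0]

# Locate the interior local maxima of |U| and test |U_peak| * sqrt(k) = const.
a = np.abs(u)
peaks = np.where((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1
prod = a[peaks] * np.sqrt(k(xs[peaks]))
print(prod.std() / prod.mean())              # small: the WKB envelope holds
```

This envelope test is insensitive to the slow phase drift between the WKB and exact solutions, so it isolates the 1/√k prediction.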

Validity

1. V(x) changes very slowly =⇒ k′(x) small, and k(x) = √(2m(E − V(x)))/ħ.

2. E very far away from the potential, |(E − V(x))/V(x)| ≫ 1.

6.2 turning points.

Figure 6.2: Example of a general potential

WKB will not work at the turning points in this figure since our main assumption was that

|k′(x)/k²(x)| ≪ 1,   (6.30)

Figure 6.3: Turning points where WKB will not work

Figure 6.4: Diagram for patching method discussion

so we get into trouble where k(x) ∼ 0. There are some methods for dealing with this. Our text
as well as Griffiths give some examples, but they require Bessel functions and more complex
mathematics.
The idea is that one finds the WKB solution in the regions of validity, and then looks for a
polynomial solution in the patching region where we are closer to the turning point, probably
requiring lookup of various special functions.
This power series method is also outlined in [19], where solutions to connect the regions are
expressed in terms of Airy functions.

6.3 examples

Example 6.1: Infinite well potential



Consider the potential

V(x) = { v(x) if x ∈ [0, a], ∞ otherwise },   (6.31)

as illustrated in fig. 6.5

Figure 6.5: Arbitrary potential in an infinite well

Inside the well, we have

ψ(x) = (1/√k(x)) (C₊ e^{i∫₀ˣ k(x′)dx′} + C₋ e^{−i∫₀ˣ k(x′)dx′}),   (6.32)

where

k(x) = (1/ħ) √(2m(E − v(x))).   (6.33)

With

φ(x) = ∫₀ˣ k(x′) dx′,   (6.34)

We have

ψ(x) = (1/√k(x)) (C₊(cos φ + i sin φ) + C₋(cos φ − i sin φ))
     = (1/√k(x)) ((C₊ + C₋) cos φ + i(C₊ − C₋) sin φ)
     ≡ (1/√k(x)) (C₂ cos φ + C₁ sin φ),   (6.35)

where

C₂ = C₊ + C₋,
C₁ = i(C₊ − C₋).   (6.36)

Setting boundary conditions we have

ψ(0) = 0.   (6.37)

Noting that φ(0) = 0, we have

(1/√k(0)) C₂ = 0,   (6.38)

so

ψ(x) ∼ (1/√k(x)) sin φ.   (6.39)
At the other boundary

ψ(a) = 0.   (6.40)

So we require

sin φ(a) = sin(nπ),   (6.41)

or

(1/ħ) ∫₀ᵃ √(2m(E − v(x′))) dx′ = nπ.   (6.42)

This is called the Bohr-Sommerfeld condition.

Check with v(x) = 0.


We have

(1/ħ) √(2mE) a = nπ,   (6.43)

or

E = (1/2m) (nπħ/a)².   (6.44)
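The Bohr-Sommerfeld condition eq. (6.42) reproduces the v = 0 result exactly, and for a smooth v(x) it can be checked against direct diagonalization. A sketch (Python with numpy/scipy; ħ = m = a = 1 and the linear ramp v(x) = αx inside the well are assumptions):

```python
import numpy as np
from scipy.optimize import brentq

a, alpha = 1.0, 5.0   # well width and ramp slope (assumed), hbar = m = 1

def bs_energy(n):
    """Solve (1/hbar) * integral of sqrt(2m(E - alpha x)) dx = n pi (eq. 6.42), E > alpha a."""
    phase = lambda E: (2 * np.sqrt(2) / (3 * alpha)) * (E**1.5 - (E - alpha * a)**1.5) - n * np.pi
    return brentq(phase, alpha * a + 1e-9, 1e4)

# "Exact" levels: finite differences with Dirichlet (infinite wall) boundaries.
N = 1500
x = np.linspace(0, a, N + 2)[1:-1]
h = x[1] - x[0]
H = (np.diag(np.full(N, 1.0 / h**2) + alpha * x)
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1))
E_exact = np.linalg.eigvalsh(H)

for n in (5, 6, 7, 8):
    print(n, bs_energy(n), E_exact[n - 1])   # agree closely at these n
```

For v = 0 the closed-form phase integral reduces to eq. (6.43), so bs_energy reproduces eq. (6.44) exactly in that limit.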
Part II

SPIN, ANGULAR MOMENTUM, AND TWO PARTICLE SYSTEMS
COMPOSITE SYSTEMS
7
7.1 hilbert spaces

READING: §30 of the text [4] covers entangled states. The rest of the composite state back-
ground is buried somewhere in some of the advanced material sections. FIXME: what section?
As an example, consider one spin one-half particle and one spin one particle. We can describe either quantum mechanically, by a pair of Hilbert spaces

H1 , (7.1)

of dimension D1

H2 , (7.2)

of dimension D2
Recall that a Hilbert space (finite or infinite dimensional) is the set of states that describe the
system. There were some additional details (completeness, normalizable, L2 integrable, ...) not
really covered in the physics curriculum, but available in mathematical descriptions.
We form the composite (Hilbert) space

H = H1 ⊗ H2 (7.3)

H₁: |φ₁^(i)⟩   (7.4)

for any ket in H₁

|I⟩ = Σ_{i=1}^{D₁} c_i |φ₁^(i)⟩,   (7.5)

where

⟨φ₁^(i)|φ₁^(j)⟩ = δ_ij.   (7.6)


Similarly

H₂: |φ₂^(i)⟩   (7.7)

for any ket in H₂

|II⟩ = Σ_{i=1}^{D₂} d_i |φ₂^(i)⟩,   (7.8)

where

⟨φ₂^(i)|φ₂^(j)⟩ = δ_ij.   (7.9)

The composite Hilbert space has dimension D₁D₂.

Basis kets:

|φ₁^(i)⟩ ⊗ |φ₂^(j)⟩ = |φ^(ij)⟩,   (7.10)

where

⟨φ^(ij)|φ^(kl)⟩ = δ_ik δ_jl.   (7.11)

Any ket in H can be written

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij |φ₁^(i)⟩ ⊗ |φ₂^(j)⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij |φ^(ij)⟩.   (7.12)

Direct product of kets:

|I⟩ ⊗ |II⟩ ≡ Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} c_i d_j |φ₁^(i)⟩ ⊗ |φ₂^(j)⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} c_i d_j |φ^(ij)⟩.   (7.13)

If |ψ⟩ in H cannot be written as |I⟩ ⊗ |II⟩, then |ψ⟩ is said to be "entangled".


FIXME: insert a concrete example of this, with some low dimension.
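As a concrete low-dimensional illustration (the FIXME above), take D₁ = D₂ = 2. A product state |I⟩ ⊗ |II⟩ has f_ij = c_i d_j, so the matrix f has rank one, while a Bell state does not factor. A sketch (Python/numpy; the singular values of f are the Schmidt coefficients):

```python
import numpy as np

# Product state: |I> (x) |II> with c = (1,1)/sqrt(2), d = (1,0).
c = np.array([1.0, 1.0]) / np.sqrt(2)
d = np.array([1.0, 0.0])
f_product = np.outer(c, d)             # f_ij = c_i d_j, rank one

# Bell state (|00> + |11>)/sqrt(2), written as the same kind of f_ij matrix.
f_bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

schmidt_rank = lambda f: int(np.sum(np.linalg.svd(f, compute_uv=False) > 1e-12))
print(schmidt_rank(f_product), schmidt_rank(f_bell))   # 1 (unentangled), 2 (entangled)
```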

7.2 operators

With operators O₁ and O₂ on the respective Hilbert spaces, we would now like to build

O₁ ⊗ O₂.   (7.14)

If one defines

(O₁ ⊗ O₂) |ψ⟩ ≡ Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij (O₁ |φ₁^(i)⟩) ⊗ (O₂ |φ₂^(j)⟩).   (7.15)

Q: Can every operator that can be defined on the composite space have a representation of this form? No.
Special cases: the identity operators. Suppose that

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij |φ₁^(i)⟩ ⊗ |φ₂^(j)⟩,   (7.16)

then

(O₁ ⊗ I₂) |ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij (O₁ |φ₁^(i)⟩) ⊗ |φ₂^(j)⟩.   (7.17)

Example 7.1: A commutator

Can do other operations. Example:

[O₁ ⊗ I₂, I₁ ⊗ O₂] = 0   (7.18)

Let us verify this one. Suppose that our state has the representation

|ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij |φ₁^(i)⟩ ⊗ |φ₂^(j)⟩,   (7.19)

so that the action on this ket from the composite operations are

(O₁ ⊗ I₂) |ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij (O₁ |φ₁^(i)⟩) ⊗ |φ₂^(j)⟩,
(I₁ ⊗ O₂) |ψ⟩ = Σ_{i=1}^{D₁} Σ_{j=1}^{D₂} f_ij |φ₁^(i)⟩ ⊗ (O₂ |φ₂^(j)⟩).   (7.20)

Our commutator is

[(O₁ ⊗ I₂), (I₁ ⊗ O₂)] |ψ⟩
= (O₁ ⊗ I₂)(I₁ ⊗ O₂) |ψ⟩ − (I₁ ⊗ O₂)(O₁ ⊗ I₂) |ψ⟩
= (O₁ ⊗ I₂) Σ_{ij} f_ij |φ₁^(i)⟩ ⊗ (O₂ |φ₂^(j)⟩) − (I₁ ⊗ O₂) Σ_{ij} f_ij (O₁ |φ₁^(i)⟩) ⊗ |φ₂^(j)⟩   (7.21)
= Σ_{ij} f_ij (O₁ |φ₁^(i)⟩) ⊗ (O₂ |φ₂^(j)⟩) − Σ_{ij} f_ij (O₁ |φ₁^(i)⟩) ⊗ (O₂ |φ₂^(j)⟩)
= 0. □
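The same verification can be done in matrix form, where ⊗ becomes the Kronecker product (Python/numpy sketch with arbitrary random operators):

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2 = 2, 3
O1 = rng.standard_normal((D1, D1)) + 1j * rng.standard_normal((D1, D1))
O2 = rng.standard_normal((D2, D2)) + 1j * rng.standard_normal((D2, D2))
I1, I2 = np.eye(D1), np.eye(D2)

A = np.kron(O1, I2)           # O1 (x) I2
B = np.kron(I1, O2)           # I1 (x) O2
comm = A @ B - B @ A
print(np.linalg.norm(comm))   # zero up to floating point
```

Both products equal np.kron(O1, O2), which is the matrix statement of the cancellation in eq. (7.21).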

7.3 generalizations

Can generalize to

H₁ ⊗ H₂ ⊗ H₃ ⊗ · · ·   (7.22)

Can also start with H and seek factor spaces. If H is not prime there are, in general, many ways to find factor spaces

H = H₁ ⊗ H₂ = H₁′ ⊗ H₂′.   (7.23)

A ket |ψ⟩ that is unentangled in the first factorization will, in general, be entangled in a second one. Thus ket entanglement is not a property of the ket itself, but instead is intrinsically related to the space in which it is represented.

7.4 recalling the stern-gerlach system from phy354

We had one example of a composite system in phy356 that I recall. It was related to states of the
silver atoms in a Stern Gerlach apparatus, where we had one state from the Hamiltonian that

governs position and momentum and another from the Hamiltonian for the spin, where each of
these states was considered separately.
This makes me wonder what would the Hamiltonian for a system (say a single electron) that
includes both spin and position/momentum would look like, and how is it that one can solve
this taking spin and non-spin states separately?
Professor Sipe, when asked said of this
“It is complicated because not only would the spin of the electron interact with the magnetic
field, but its translational motion would respond to the magnetic field too. A simpler case is a
neutral atom with an electron with an unpaired spin. Then there is no Lorentz force on the atom
itself. The Hamiltonian is just the sum of a free particle Hamiltonian and a Zeeman term due to
the spin interacting with the magnetic field. This is precisely the Stern-Gerlach problem”
I did not remember what the Zeeman term looked like, but wikipedia does [20], and it is the
magnetic field interaction

−µ · B (7.24)

that we get when we gauge transform the Dirac equation for the electron as covered in §36.4
of the text (also introduced in chapter 6, which was not covered in class). That does not look
too much like how we studied the Stern-Gerlach problem? I thought that for that problem we
had a Hamiltonian of the form

H = Σ_ij a_ij |i⟩⟨j|.   (7.25)

It is not clear to me how this ket-bra Hamiltonian and the Zeeman Hamiltonian are related
(ie: the spin Hamiltonians that we used in 356 and were on old 356 exams were all pulled out
of magic hats and it was not obvious where these came from).
FIXME: incorporate what I got out of the email thread with the TA and prof on this question.
SPIN AND SPINORS
8
8.1 generators

Covered in §26 of the text [4].

Example 8.1: Time translation

|ψ(t)i = e−iHt/ h̄ |ψ(0)i . (8.1)

The Hamiltonian “generates” evolution (or translation) in time.

Example 8.2: Spatial translation

|r + ai = e−ia·P/ h̄ |ri . (8.2)

Figure 8.1: Vector translation

P is the operator that generates translations. Written out, we have

e−ia·P/ h̄ = e−i(ax Px +ay Py +az Pz )/ h̄


(8.3)
= e−iax Px / h̄ e−iay Py / h̄ e−iaz Pz / h̄ ,


where the factorization was possible because P x , Py , and Pz commute

[ Pi , P j ] = 0, (8.4)

for any i, j (including i = j, as I dumbly questioned in class ... for i = j this is the commutator [P_i, P_i] = P_i P_i − P_i P_i = 0).
The fact that the Pi commute means that successive translations can be done in any order
and have the same result.
In class we were rewarded with a graphic demo of translation component commutation
as Professor Sipe pulled a giant wood carving of a cat (or tiger?) out from beside the desk
and proceeded to translate it around on the desk in two different orders, with the cat ending
up in the same place each time.

Exponential commutation. Note that in general

e^{A+B} ≠ e^A e^B,   (8.5)

unless [A, B] = 0. To show this one can compare

e^{A+B} = 1 + A + B + (1/2)(A + B)² + · · · = 1 + A + B + (1/2)(A² + AB + BA + B²) + · · ·   (8.6)

and

e^A e^B = (1 + A + (1/2)A² + · · ·)(1 + B + (1/2)B² + · · ·) = 1 + A + B + (1/2)(A² + 2AB + B²) + · · ·   (8.7)

Comparing the second order (for example) we see that we must have for equality

AB + BA = 2AB,   (8.8)

or

BA = AB, (8.9)

or

[ A, B] = 0 (8.10)
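This is easy to see concretely with matrix exponentials (Python/scipy sketch; using two Pauli-type matrices as a convenient assumed non-commuting pair, and a matrix with its own multiple as a commuting pair):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Non-commuting pair: [sx, sz] != 0, so exp(A+B) != exp(A) exp(B).
A, B = 1j * sx, 1j * sz
diff_noncomm = np.linalg.norm(expm(A + B) - expm(A) @ expm(B))

# Commuting pair: B = 2A, so equality holds.
diff_comm = np.linalg.norm(expm(A + 2 * A) - expm(A) @ expm(2 * A))
print(diff_noncomm, diff_comm)   # first is O(1), second essentially zero
```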

Translating a ket. If we consider the quantity

e^{−ia·P/ħ} |ψ⟩ = |ψ′⟩,   (8.11)

does this ket “translated” by a make any sense? The vector a lives in a 3D space and
our ket |ψi lives in Hilbert space. A quantity like this deserves some careful thought and is
the subject of some such thought in the Interpretations of Quantum mechanics course. For
now, we can think of the operator and ket as a “gadget” that prepares a state.
A student in class pointed out that |ψi can be dependent on many degrees of freedom,
for example, the positions of eight different particles. This translation gadget in such a case
acts on the whole kit and caboodle.
Now consider the matrix element

⟨r|ψ′⟩ = ⟨r| e^{−ia·P/ħ} |ψ⟩.   (8.12)

Note that

⟨r| e^{−ia·P/ħ} = (e^{ia·P/ħ} |r⟩)† = (|r − a⟩)†,   (8.13)

so

⟨r|ψ′⟩ = ⟨r − a|ψ⟩,   (8.14)

or

ψ′(r) = ψ(r − a)   (8.15)

This is what we expect of a translated function, as illustrated in fig. 8.2

Figure 8.2: Active spatial translation
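The translation result eq. (8.15) can be demonstrated by applying e^{−iaP/ħ} in the momentum representation, where P is diagonal (Python/numpy FFT sketch; ħ = 1, a periodic grid, and a Gaussian ψ are assumptions):

```python
import numpy as np

N, L, a = 1024, 40.0, 3.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # momentum grid, hbar = 1

psi = np.exp(-(x + 5.0)**2)                  # Gaussian, well away from the edges

# psi'(x) = (e^{-i a P / hbar} psi)(x): multiply by e^{-i p a} in momentum space.
psi_shifted = np.fft.ifft(np.exp(-1j * p * a) * np.fft.fft(psi))

err = np.max(np.abs(psi_shifted - np.exp(-((x - a) + 5.0)**2)))
print(err)   # tiny: psi'(x) = psi(x - a)
```

Note that the shift a need not be a multiple of the grid spacing; the spectral representation translates by arbitrary amounts.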

Example 8.3: Spatial rotation

We have been introduced to the angular momentum operator

L = R × P, (8.16)

where

L x = Y Pz − ZPy
Ly = ZP x − XPz (8.17)
Lz = XPy − Y P x .

We also found that

[L_i, L_j] = iħ Σ_k ε_ijk L_k.   (8.18)

These non-zero commutators show that the components of angular momentum do not
commute.

Define

|R(r)i = e−iθn̂·L/ h̄ |ri . (8.19)

This is the vector that we get by actively rotating the vector r by an angle θ counter-
clockwise about n̂, as in fig. 8.3

Figure 8.3: Active vector rotations

An active rotation rotates the vector, leaving the coordinate system fixed, whereas a
passive rotation is one for which the coordinate system is rotated, and the vector is left
fixed.
Note that rotations do not commute. Suppose that we have a pair of rotations as in fig. 8.4

Figure 8.4: A example pair of non-commuting rotations

Again, we get the graphic demo, with Professor Sipe rotating the big wooden cat sculp-
ture. Did he bring that in to class just to make this point (too bad I missed the first couple
minutes of the lecture).
Rather amusingly, he points out that most things in life do not commute. We get much
different results if we apply the operations of putting water into the teapot and turning on
the stove in different orders.

Rotating a ket. With a rotation gadget

|ψ′⟩ = e^{−iθn̂·L/ħ} |ψ⟩,   (8.20)

we can form the matrix element

⟨r|ψ′⟩ = ⟨r| e^{−iθn̂·L/ħ} |ψ⟩.   (8.21)

In this we have

⟨r| e^{−iθn̂·L/ħ} = (e^{iθn̂·L/ħ} |r⟩)† = (|R⁻¹(r)⟩)†,   (8.22)

so

⟨r|ψ′⟩ = ⟨R⁻¹(r)|ψ⟩,   (8.23)

or

ψ′(r) = ψ(R⁻¹(r)).   (8.24)

8.2 generalizations

Recall what you did last year, where H, P, and L were defined mechanically. We found

• H generates time evolution (or translation in time).

• P generates spatial translation.

• L generates spatial rotation.

For our mechanical definitions we have

[ Pi , P j ] = 0, (8.25)

and

[L_i, L_j] = iħ Σ_k ε_ijk L_k.   (8.26)

These are the relations that show us the way translations and rotations combine. We want to
move up to a higher plane, a new level of abstraction. To do so we define H as the operator that
generates time evolution. If we have a theory that covers the behavior of how anything evolves
in time, H encodes the rules for this time evolution.
Define P as the operator that generates translations in space.
Define J as the operator that generates rotations in space.
In order that these match expectations, we require

[ Pi , P j ] = 0, (8.27)

and

[J_i, J_j] = iħ Σ_k ε_ijk J_k.   (8.28)

In the simple theory of a spinless particle we have

J ≡ L = R × P. (8.29)

We actually need a generalization of this since this is, in fact, not good enough, even for low
energy physics.

Many component wave functions. We are free to construct tuples of spatial vector functions like

[ Ψ_I(r, t) ]
[ Ψ_II(r, t) ],   (8.30)

or

[ Ψ_I(r, t) ]
[ Ψ_II(r, t) ]
[ Ψ_III(r, t) ],   (8.31)

etc.
We will see that these behave qualitatively different than one component wave functions. We
also do not have to be considering multiple particle wave functions, but just one particle that
requires three functions in R3 to describe it (ie: we are moving in on spin).

Question: Do these live in the same vector space?

Answer: We will get to this.

A classical analogy. "There are only bad analogies, since if they were good they would be describing the same thing. We can, however, produce some useful bad analogies."

1. A temperature field

T (r) (8.32)

2. Electric field

[ E_x(r) ]
[ E_y(r) ]
[ E_z(r) ]   (8.33)

These behave in a much different way. If we rotate a scalar field like T (r) as in fig. 8.5

Figure 8.5: Rotated temperature (scalar) field

Suppose we have a temperature field generated by, say, a match. Rotating the match above,
we have

T 0 (r) = T (R−1 (r)). (8.34)

Compare this to the rotation of an electric field, perhaps one produced by a capacitor, as in
fig. 8.6

Figure 8.6: Rotating a capacitance electric field

Is it true that we have

[ E_x(r) ]   ?   [ E_x(R⁻¹(r)) ]
[ E_y(r) ]   =   [ E_y(R⁻¹(r)) ]   (8.35)
[ E_z(r) ]       [ E_z(R⁻¹(r)) ]

No, because the components get mixed as well as the positions at which those components are evaluated.
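A sketch of the correct vector-field transformation, E′(r) = R E(R⁻¹r), contrasted with the scalar-style rule E(R⁻¹r) (Python/numpy; the azimuthal test field E = (−y, x, 0), which is invariant under rotations about ẑ, is an assumed example):

```python
import numpy as np

E = lambda r: np.array([-r[1], r[0], 0.0])   # azimuthal field about z-hat

th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
Rinv = R.T

r = np.array([1.0, 2.0, 0.5])
vector_rule = R @ E(Rinv @ r)    # rotate positions AND components
scalar_rule = E(Rinv @ r)        # rotate positions only (wrong for vectors)

print(np.allclose(vector_rule, E(r)), np.allclose(scalar_rule, E(r)))
```

Because this field is symmetric about ẑ, the full vector rule returns the field unchanged, while the component-by-component rule does not.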
We will work with many component wave functions, some of which will behave like vectors,
and will have to develop the methods and language to tackle this.

8.3 multiple wavefunction spaces

Reading: See §26.5 in the text [4].


We identified

ψ(r) = hr|ψi (8.36)


with improper basis kets

|ri (8.37)
Now introduce many function spaces

[ ψ₁(r) ]
[ ψ₂(r) ]
[   ⋮    ]
[ ψ_γ(r) ]   (8.38)

with improper (unnormalizable) basis kets

|rαi , α ∈ 1, 2, ...γ (8.39)

ψα (r) = hrα|ψi (8.40)

for an abstract ket |ψi


We will try taking this Hilbert space

H = Ho ⊗ H s (8.41)

Where Ho is the Hilbert space of "scalar" QM, “o” orbital and translational motion, associated
with kets |ri and H s is the Hilbert space associated with the γ components |αi. This latter space
we will label the “spin” or “internal physics” (class suggestion: or perhaps intrinsic). This is
“unconnected” with translational motion.
We build up the basis kets for H by direct products

|rαi = |ri ⊗ |αi (8.42)

Now, for a rotated ket we seek a general angular momentum operator J such that

0
ψ = e−iθn̂·J/ h̄ |ψi (8.43)

where

J = L + S, (8.44)

where L acts over kets in Ho , “orbital angular momentum”, and S is the “spin angular mo-
mentum”, acting on kets in H s .
Strictly speaking this would be written as direct products involving the respective identities

J = L ⊗ I s + Io ⊗ S. (8.45)

We require

[J_i, J_j] = iħ Σ_k ε_ijk J_k   (8.46)

Since L and S "act over separate Hilbert spaces", and since these come from legacy operators,

[L_i, S_j] = 0.   (8.47)

We also know that

[L_i, L_j] = iħ Σ_k ε_ijk L_k,   (8.48)

so

[S_i, S_j] = iħ Σ_k ε_ijk S_k,   (8.49)

as expected. We could, in principle, have more complicated operators, where this would not
be true. This is a proposal of sorts. Given such a definition of operators, let us see where we can
go with it.
For matrix elements of L we have

⟨r| L_x |r′⟩ = −iħ (y ∂/∂z − z ∂/∂y) δ(r − r′)   (8.50)

What are the matrix elements of ⟨α| S_i |α′⟩? From the commutation relationships we know

Σ_{α″=1}^γ ⟨α| S_i |α″⟩ ⟨α″| S_j |α′⟩ − Σ_{α″=1}^γ ⟨α| S_j |α″⟩ ⟨α″| S_i |α′⟩ = iħ Σ_k ε_ijk ⟨α| S_k |α′⟩.   (8.51)

We see that our matrix element is tightly constrained by our choice of commutator relationships. We have γ² such matrix elements, and it turns out that it is possible to choose (or find) matrix elements that satisfy these constraints.
The ⟨α| S_i |α′⟩ matrix elements that satisfy these constraints are found by imposing the commutation relations

[S_i, S_j] = iħ Σ_k ε_ijk S_k,   (8.52)

and with

S² = Σ_j S_j²,   (8.53)

(this is just a definition). We find

[S², S_i] = 0,   (8.54)

and seeking eigenkets

S² |s m_s⟩ = s(s + 1) ħ² |s m_s⟩,
S_z |s m_s⟩ = ħ m_s |s m_s⟩,   (8.55)

we find solutions for s = 1/2, 1, 3/2, 2, · · ·, where m_s ∈ {−s, · · ·, s}, i.e. 2s + 1 possible vectors |s m_s⟩ for a given s.

s = 1/2 =⇒ γ = 2,
s = 1 =⇒ γ = 3,   (8.56)
s = 3/2 =⇒ γ = 4.
We start with the algebra (mathematically the Lie algebra), and one can compute the Hilbert
spaces that are consistent with these algebraic constraints.
We assume that for any type of given particle S is fixed, where this has to do with the nature
of the particle.

s = 1/2: a spin 1/2 particle,
s = 1: a spin 1 particle,   (8.57)
s = 3/2: a spin 3/2 particle.
S is fixed once we decide that we are talking about a specific type of particle.
A non-relativistic particle in this framework has two nondynamical quantities. One is the
mass m and we now introduce a new invariant, the spin s of the particle.
This has been introduced as a kind of strategy. It is something that we are going to try, and it
turns out that it does. This agrees well with experiment.
In 1939 Wigner asked, “what constraints do I get if I constrain the constraints of quantum
mechanics with special relativity.” It turns out that in the non-relativistic limit, we get just this.
There is a subtlety here, because we get into some logical trouble with the photon with a rest
mass of zero (m = 0 is certainly allowed as a value of our invariant m above). We can not stop or
slow down a photon, so orbital angular momentum is only a conceptual idea. Really, the orbital
angular momentum and the spin angular momentum cannot be separated out for a photon, so
talking of a spin 1 particle really means spin as in J, and not spin as in L.

Spin one half particles. Reading: See §26.6 in the text [4].

Let us start talking about the simplest case. This includes electrons, all the other leptons, and quarks (as opposed to integer spin particles like the photon and the weakly interacting W and Z bosons).

s = 1/2,
m_s = ±1/2.   (8.58)

The states are

|s m_s⟩ = |1/2, 1/2⟩, |1/2, −1/2⟩.   (8.59)
Note there is a convention, writing m̄_s for −m_s,

|1/2 1̄/2⟩ = |1/2, −1/2⟩,
|1/2 1/2⟩ = |1/2, 1/2⟩.   (8.60)

S² |1/2 m_s⟩ = (1/2)(1/2 + 1) ħ² |1/2 m_s⟩ = (3/4) ħ² |1/2 m_s⟩.   (8.61)

S_z |1/2 m_s⟩ = m_s ħ |1/2 m_s⟩.   (8.62)
For shorthand

|1/2, 1/2⟩ = |+⟩,
|1/2, −1/2⟩ = |−⟩.   (8.63)

S² → (3/4) ħ² [ 1 0 ; 0 1 ].   (8.64)
S_z → (ħ/2) [ 1 0 ; 0 −1 ].   (8.65)

One can easily work out from the commutation relationships that

S_x → (ħ/2) [ 0 1 ; 1 0 ],   (8.66)

S_y → (ħ/2) [ 0 −i ; i 0 ].   (8.67)

We will start with adding L into the mix on Wednesday.
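These matrix representations can be checked mechanically: they satisfy [S_i, S_j] = iħ Σ_k ε_ijk S_k, and S² = (3/4)ħ² times the identity, as in eq. (8.61) (Python/numpy sketch, ħ = 1):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
S = [Sx, Sy, Sz]

# Levi-Civita symbol for the three nonzero index patterns and their swaps.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        rhs = 1j * hbar * sum(eps[i, j, k] * S[k] for k in range(3))
        assert np.allclose(comm, rhs)          # [S_i, S_j] = i hbar eps_ijk S_k

S2 = sum(Si @ Si for Si in S)
print(np.allclose(S2, 0.75 * hbar**2 * np.eye(2)))   # s(s+1) hbar^2 with s = 1/2
```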


REPRESENTATION OF TWO STATE KETS AND PAULI SPIN MATRICES
9
9.1 representation of kets

Reading: §5.1 - §5.9 and §26 in [4].


We found the representations of the spin operators

S_x → (ħ/2) [ 0 1 ; 1 0 ],   (9.1)

S_y → (ħ/2) [ 0 −i ; i 0 ],   (9.2)

S_z → (ħ/2) [ 1 0 ; 0 −1 ].   (9.3)

How about kets? For example, for |χ⟩ ∈ H_s,

|χ⟩ → [ ⟨+|χ⟩ ; ⟨−|χ⟩ ],   (9.4)

and

|+⟩ → [ 1 ; 0 ],   (9.5)

|−⟩ → [ 0 ; 1 ].   (9.6)

So, for example,

S_y |+⟩ → (ħ/2) [ 0 −i ; i 0 ] [ 1 ; 0 ] = (iħ/2) [ 0 ; 1 ].   (9.7)


Kets in H_o ⊗ H_s:

|ψ⟩ → [ ⟨r+|ψ⟩ ; ⟨r−|ψ⟩ ] = [ ψ₊(r) ; ψ₋(r) ].   (9.8)

This is a "spinor".
Put

⟨r±|ψ⟩ = ψ±(r),

so that

[ ψ₊ ; ψ₋ ] = ψ₊ [ 1 ; 0 ] + ψ₋ [ 0 ; 1 ],   (9.9)

with

⟨ψ|ψ⟩ = 1.   (9.10)

Use

I = I_o ⊗ I_s
  = ∫ d³r |r⟩⟨r| ⊗ (|+⟩⟨+| + |−⟩⟨−|)
  = ∫ d³r |r⟩⟨r| ⊗ Σ_{σ=±} |σ⟩⟨σ|   (9.11)
  = Σ_{σ=±} ∫ d³r |rσ⟩⟨rσ|.

So

⟨ψ| I |ψ⟩ = Σ_{σ=±} ∫ d³r ⟨ψ|rσ⟩⟨rσ|ψ⟩ = ∫ d³r (|ψ₊(r)|² + |ψ₋(r)|²).   (9.12)

Alternatively

|ψ⟩ = I |ψ⟩
    = ∫ d³r Σ_{σ=±} |rσ⟩⟨rσ|ψ⟩
    = Σ_{σ=±} (∫ d³r ψ_σ(r) |rσ⟩)   (9.13)
    = Σ_{σ=±} (∫ d³r ψ_σ(r) |r⟩) ⊗ |σ⟩.

In braces we have a ket in H_o; let us call it

|ψ_σ⟩ = ∫ d³r ψ_σ(r) |r⟩;   (9.14)

then

|ψ⟩ = |ψ₊⟩ |+⟩ + |ψ₋⟩ |−⟩,   (9.15)

where the direct product ⊗ is implied.

We can form a ket in H_s as

⟨r|ψ⟩ = ψ₊(r) |+⟩ + ψ₋(r) |−⟩.   (9.16)

An operator O_o which acts on H_o alone can be promoted to O_o ⊗ I_s, which is now an operator that acts on H_o ⊗ H_s. We are sometimes a little cavalier in notation and leave this off, but we should remember this.

O_o |ψ⟩ = (O_o |ψ₊⟩) |+⟩ + (O_o |ψ₋⟩) |−⟩,   (9.17)

and likewise

O_s |ψ⟩ = |ψ₊⟩ (O_s |+⟩) + |ψ₋⟩ (O_s |−⟩),   (9.18)

and

O_o O_s |ψ⟩ = (O_o |ψ₊⟩)(O_s |+⟩) + (O_o |ψ₋⟩)(O_s |−⟩).   (9.19)



Suppose we want to rotate a ket; we do this with a full angular momentum operator

e^{−iθn̂·J/ħ} |ψ⟩ = e^{−iθn̂·L/ħ} e^{−iθn̂·S/ħ} |ψ⟩   (9.20)

(recalling that L and S commute). So

e^{−iθn̂·J/ħ} |ψ⟩ = (e^{−iθn̂·L/ħ} |ψ₊⟩)(e^{−iθn̂·S/ħ} |+⟩) + (e^{−iθn̂·L/ħ} |ψ₋⟩)(e^{−iθn̂·S/ħ} |−⟩).   (9.21)

A simple example

|ψi = |ψ+ i |+i + |ψ− i |−i (9.22)

Suppose

|ψ+ i = α |ψ0 i (9.23)


|ψ− i = β |ψ0 i (9.24)

where

|α|2 + |β|2 = 1 (9.25)

Then

|ψi = |ψ0 i |χi (9.26)

where

|χi = α |+i + β |−i (9.27)

for

hψ|ψi = 1, (9.28)

hψ0 |ψ0 i hχ|χi = 1 (9.29)



so

hψ0 |ψ0 i = 1 (9.30)

We are going to concentrate on the unentangled state of eq. (9.26).

• How about with

|α|2 = 1, β = 0 (9.31)

|χi is an eigenket of S z with eigenvalue h̄/2.

|β|2 = 1, α = 0 (9.32)

|χi is an eigenket of S z with eigenvalue − h̄/2.

• What is |χi if it is an eigenket of n̂ · S?

FIXME: F1: standard spherical projection picture, with n̂ projected down onto the x, y plane
at angle φ and at an angle θ from the z axis.
The eigenvalues will still be ± h̄/2 since there is nothing special about the z direction.

\[
\hat{n} \cdot \mathbf{S} = n_x S_x + n_y S_y + n_z S_z
\rightarrow \frac{\hbar}{2} \begin{pmatrix} n_z & n_x - i n_y \\ n_x + i n_y & -n_z \end{pmatrix}
= \frac{\hbar}{2} \begin{pmatrix} \cos\theta & \sin\theta\, e^{-i\phi} \\ \sin\theta\, e^{i\phi} & -\cos\theta \end{pmatrix} \qquad (9.33)
\]

To find the eigenkets we diagonalize this, and we find representations of the eigenkets are

\[
|\hat{n}+\rangle \rightarrow \begin{pmatrix} \cos\frac{\theta}{2}\, e^{-i\phi/2} \\ \sin\frac{\theta}{2}\, e^{i\phi/2} \end{pmatrix} \qquad (9.34)
\]

\[
|\hat{n}-\rangle \rightarrow \begin{pmatrix} -\sin\frac{\theta}{2}\, e^{-i\phi/2} \\ \cos\frac{\theta}{2}\, e^{i\phi/2} \end{pmatrix}, \qquad (9.35)
\]

with eigenvalues ħ/2 and −ħ/2 respectively.


So in the abstract notation, tossing the specific representation, we have

|n̂+⟩ = cos(θ/2) e^{−iφ/2} |+⟩ + sin(θ/2) e^{iφ/2} |−⟩   (9.36)
|n̂−⟩ = −sin(θ/2) e^{−iφ/2} |+⟩ + cos(θ/2) e^{iφ/2} |−⟩   (9.37)
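The eigenket representations eq. (9.34) and eq. (9.35) can be spot checked numerically for sample angles. A small sketch (not from the lecture), assuming numpy, checking n̂·σ, which has eigenvalues ±1 (so n̂·S has ±ħ/2):

```python
import numpy as np

theta, phi = 0.7, 1.2  # arbitrary sample angles
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
n_dot_sigma = sum(nk * sk for nk, sk in zip(n, sigma))

# the claimed eigenkets, eqs. (9.34), (9.35)
ket_up = np.array([np.cos(theta / 2) * np.exp(-1j * phi / 2),
                   np.sin(theta / 2) * np.exp(1j * phi / 2)])
ket_down = np.array([-np.sin(theta / 2) * np.exp(-1j * phi / 2),
                     np.cos(theta / 2) * np.exp(1j * phi / 2)])

assert np.allclose(n_dot_sigma @ ket_up, ket_up)       # eigenvalue +1
assert np.allclose(n_dot_sigma @ ket_down, -ket_down)  # eigenvalue -1
```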

9.2 representation of two state kets

Every ket

\[
|\chi\rangle \rightarrow \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \qquad (9.38)
\]

for which

|α|² + |β|² = 1   (9.39)

can be written in the form eq. (9.34) for some θ and φ, neglecting an overall phase factor.
For any ket in H s , that ket is “spin up” in some direction.
FIXME: show this.

9.3 pauli spin matrices

It is useful to write

\[
S_x = \frac{\hbar}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \equiv \frac{\hbar}{2} \sigma_x \qquad (9.40)
\]

\[
S_y = \frac{\hbar}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \equiv \frac{\hbar}{2} \sigma_y \qquad (9.41)
\]

\[
S_z = \frac{\hbar}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \equiv \frac{\hbar}{2} \sigma_z \qquad (9.42)
\]

where

\[
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad (9.43)
\]

\[
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad (9.44)
\]

\[
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (9.45)
\]
These are the Pauli spin matrices.

Interesting properties:

{σ_i, σ_j} ≡ σ_i σ_j + σ_j σ_i = 0, if i ≠ j   (9.46)

σ_x σ_y = iσ_z   (9.47)

(and cyclic permutations)

tr(σ_i) = 0   (9.48)

(n̂ · σ)² = σ₀   (9.49)

where

n̂ · σ ≡ n_x σ_x + n_y σ_y + n_z σ_z,   (9.50)

and

\[
\sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad (9.51)
\]

(note tr(σ₀) = 2 ≠ 0)

{σ_i, σ_j} = 2δ_ij σ₀   (9.52)

[σ_x, σ_y] = 2iσ_z   (9.53)

(and cyclic permutations of the latter).


Can combine these to show that

(A · σ)(B · σ) = (A · B)σ0 + i(A × B) · σ (9.54)

where A and B are vectors (or more generally operators that commute with the σ matri-
ces).

tr(σi σ j ) = 2δi j (9.55)

tr(σα σβ ) = 2δαβ , (9.56)

where α, β = 0, x, y, z
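All of these Pauli matrix identities are mechanical to verify. A quick numerical check (illustrative, not from the lecture), assuming numpy:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_0 = np.eye(2, dtype=complex)
sigma = np.array([sigma_x, sigma_y, sigma_z])

# anticommutation: {sigma_i, sigma_j} = 2 delta_ij sigma_0, eq. (9.52)
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anti, 2 * (i == j) * sigma_0)

# sigma_x sigma_y = i sigma_z, and tracelessness
assert np.allclose(sigma_x @ sigma_y, 1j * sigma_z)
assert all(abs(np.trace(s)) < 1e-12 for s in sigma)

# (A . sigma)(B . sigma) = (A . B) sigma_0 + i (A x B) . sigma, eq. (9.54)
rng = np.random.default_rng(0)
A, B = rng.standard_normal(3), rng.standard_normal(3)
lhs = np.einsum('i,ijk->jk', A, sigma) @ np.einsum('i,ijk->jk', B, sigma)
rhs = np.dot(A, B) * sigma_0 + 1j * np.einsum('i,ijk->jk', np.cross(A, B), sigma)
assert np.allclose(lhs, rhs)
```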

Note that any complex 2×2 matrix M can be written as

\[
M = \sum_\alpha m_\alpha \sigma_\alpha
= \begin{pmatrix} m_0 + m_z & m_x - i m_y \\ m_x + i m_y & m_0 - m_z \end{pmatrix} \qquad (9.57)
\]

for any four complex numbers m₀, m_x, m_y, m_z, where

m_β = (1/2) tr(Mσ_β).   (9.58)
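The expansion eq. (9.57) and its inversion eq. (9.58) can be checked for a random complex matrix. A minimal sketch, assuming numpy:

```python
import numpy as np

sigmas = {
    0: np.eye(2, dtype=complex),
    'x': np.array([[0, 1], [1, 0]], dtype=complex),
    'y': np.array([[0, -1j], [1j, 0]]),
    'z': np.array([[1, 0], [0, -1]], dtype=complex),
}

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# m_beta = (1/2) tr(M sigma_beta) recovers the expansion coefficients,
# since tr(sigma_alpha sigma_beta) = 2 delta_{alpha beta}
m = {beta: 0.5 * np.trace(M @ s) for beta, s in sigmas.items()}
M_rebuilt = sum(m[beta] * s for beta, s in sigmas.items())
assert np.allclose(M_rebuilt, M)
```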
ROTATION OPERATOR IN SPIN SPACE
10
10.1 formal taylor series expansion

READING: §27.5 in the text [4].


We can formally expand our rotation operator in Taylor series

\[
e^{-i\theta \hat{n} \cdot \mathbf{S}/\hbar}
= I + (-i\theta \hat{n} \cdot \mathbf{S}/\hbar)
+ \frac{1}{2!} (-i\theta \hat{n} \cdot \mathbf{S}/\hbar)^2
+ \frac{1}{3!} (-i\theta \hat{n} \cdot \mathbf{S}/\hbar)^3 + \cdots \qquad (10.1)
\]

or

\[
\begin{aligned}
e^{-i\theta \hat{n} \cdot \boldsymbol{\sigma}/2}
&= \sigma_0 + \left(\frac{-i\theta}{2}\right) (\hat{n} \cdot \boldsymbol{\sigma})
+ \frac{1}{2!} \left(\frac{-i\theta}{2}\right)^2 (\hat{n} \cdot \boldsymbol{\sigma})^2
+ \frac{1}{3!} \left(\frac{-i\theta}{2}\right)^3 (\hat{n} \cdot \boldsymbol{\sigma})^3 + \cdots \\
&= \sigma_0 + \left(\frac{-i\theta}{2}\right) (\hat{n} \cdot \boldsymbol{\sigma})
+ \frac{1}{2!} \left(\frac{-i\theta}{2}\right)^2 \sigma_0
+ \frac{1}{3!} \left(\frac{-i\theta}{2}\right)^3 (\hat{n} \cdot \boldsymbol{\sigma}) + \cdots \\
&= \sigma_0 \left( 1 - \frac{1}{2!} \left(\frac{\theta}{2}\right)^2 + \cdots \right)
- i (\hat{n} \cdot \boldsymbol{\sigma}) \left( \frac{\theta}{2} - \frac{1}{3!} \left(\frac{\theta}{2}\right)^3 + \cdots \right) \\
&= \cos(\theta/2) \sigma_0 - i \sin(\theta/2) (\hat{n} \cdot \boldsymbol{\sigma})
\end{aligned} \qquad (10.2)
\]

where we have used the fact that (n̂ · σ)² = σ₀.
So our representation of the spin rotation operator is

\[
\begin{aligned}
e^{-i\theta \hat{n} \cdot \mathbf{S}/\hbar}
&\rightarrow \cos(\theta/2) \sigma_0 - i \sin(\theta/2) (\hat{n} \cdot \boldsymbol{\sigma}) \\
&= \begin{pmatrix}
\cos(\theta/2) - i n_z \sin(\theta/2) & -i(n_x - i n_y)\sin(\theta/2) \\
-i(n_x + i n_y)\sin(\theta/2) & \cos(\theta/2) + i n_z \sin(\theta/2)
\end{pmatrix}
\end{aligned} \qquad (10.3)
\]

Note that, in particular,

e^{−2πin̂·S/ħ} → cos(π) σ₀ = −σ₀   (10.4)


This “rotates” the ket, but introduces a phase factor.
Can do this in general for other degrees of spin, for s = 1/2, 3/2, 5/2, · · ·.
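The closed form cos(θ/2)σ₀ − i sin(θ/2)(n̂·σ) can be compared against partial sums of the defining Taylor series, and the −σ₀ result for a 2π rotation checked. A numerical sketch, assuming numpy:

```python
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
sigma_0 = np.eye(2, dtype=complex)

theta = 2.1
n = np.array([1.0, 2.0, -0.5])
n /= np.linalg.norm(n)
n_dot_sigma = np.einsum('i,ijk->jk', n, sigma)

# partial sums of the Taylor series for exp(-i theta n.sigma / 2), eq. (10.1)
X = -1j * (theta / 2) * n_dot_sigma
series = sigma_0.copy()
term = sigma_0.copy()
for k in range(1, 30):
    term = term @ X / k
    series = series + term

closed_form = np.cos(theta / 2) * sigma_0 - 1j * np.sin(theta / 2) * n_dot_sigma
assert np.allclose(series, closed_form)

# a 2 pi rotation gives minus the identity, eq. (10.4)
full_turn = np.cos(np.pi) * sigma_0 - 1j * np.sin(np.pi) * n_dot_sigma
assert np.allclose(full_turn, -sigma_0)
```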


Unfortunate interjection by me: I mentioned the half angle rotation operator that requires a
half angle operator sandwich. Prof. Sipe thought I might be talking about a Heisenberg picture
representation, where we have something like this in expectation values

|ψ′⟩ = e^{−iθn̂·J/ħ} |ψ⟩   (10.5)

so that

⟨ψ′| O |ψ′⟩ = ⟨ψ| e^{iθn̂·J/ħ} O e^{−iθn̂·J/ħ} |ψ⟩   (10.6)

However, what I was referring to was that a general rotation of a vector in a Pauli matrix basis

R(Σ_k a_k σ_k) = R(a · σ)   (10.7)

can be expressed by sandwiching the Pauli vector representation by two half angle rotation
operators like our spin 1/2 operators from class today

R(a · σ) = e^{−θ û·σ v̂·σ/2} (a · σ) e^{θ û·σ v̂·σ/2}   (10.8)

where û and v̂ are two non-colinear orthogonal unit vectors that define the oriented plane that
we are rotating in.
For example, rotating in the x − y plane, with û = x̂ and v̂ = ŷ, we have

R(a · σ) = e^{−θσ₁σ₂/2} (a₁σ₁ + a₂σ₂ + a₃σ₃) e^{θσ₁σ₂/2}   (10.9)

Observe that these exponentials commute with σ₃, leaving

R(a · σ) = (a₁σ₁ + a₂σ₂) e^{θσ₁σ₂} + a₃σ₃
         = (a₁σ₁ + a₂σ₂)(cos θ + σ₁σ₂ sin θ) + a₃σ₃   (10.10)
         = σ₁(a₁ cos θ − a₂ sin θ) + σ₂(a₂ cos θ + a₁ sin θ) + σ₃ a₃

yielding our usual coordinate rotation matrix. Expressed in terms of a unit normal to that
plane, we form the normal by multiplication with the unit spatial volume element I = σ1 σ2 σ3 .
For example:

σ1 σ2 σ3 (σ3 ) = σ1 σ2 (10.11)

and can in general write a spatial rotation in a Pauli basis representation as a sandwich of half
angle rotation matrix exponentials

R(a · σ) = e−Iθ(n̂·σ)/2 (a · σ)eIθ(n̂·σ)/2 (10.12)

when n̂ · a = 0 we get the complex-number like single sided exponential rotation exponentials
(since a · σ commutes with n · σ in that case)

R(a · σ) = (a · σ)eIθ(n̂·σ) (10.13)
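The half angle sandwich of eq. (10.9) and the resulting coordinate rotation eq. (10.10) can be verified directly. A sketch for the x−y plane (not from the lecture), using σ₁σ₂ = iσ₃ to evaluate the exponentials, assuming numpy:

```python
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
I2 = np.eye(2, dtype=complex)

theta = 0.9
a = np.array([1.0, -2.0, 3.0])
a_dot_sigma = np.einsum('i,ijk->jk', a, sigma)

# sigma_1 sigma_2 = i sigma_3, so exp(-theta sigma_1 sigma_2 / 2) = cos(theta/2) I - i sin(theta/2) sigma_3
half = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma[2]
half_inv = np.cos(theta / 2) * I2 + 1j * np.sin(theta / 2) * sigma[2]

rotated = half @ a_dot_sigma @ half_inv

# expected coefficients from eq. (10.10): a rotation in the x-y plane
a_rot = np.array([a[0] * np.cos(theta) - a[1] * np.sin(theta),
                  a[1] * np.cos(theta) + a[0] * np.sin(theta),
                  a[2]])
assert np.allclose(rotated, np.einsum('i,ijk->jk', a_rot, sigma))
```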

I believe it was pointed out in one of [5] or [7] that rotations expressed in terms of half
angle Pauli matrices has caused some confusion to students of quantum mechanics, because
this 2π “rotation” only generates half of the full spatial rotation. It was argued that this sort of
confusion can be avoided if one observes that these half angle rotations exponentials are exactly
what we require for general spatial rotations, and that a pair of half angle operators are required
to produce a full spatial rotation.
The book [5] takes this a lot further, and produces a formulation of spin operators that is
devoid of the normal scalar imaginary i (using the Clifford algebra spatial unit volume element
instead), and also does not assume a specific matrix representation of the spin operators. They
argue that this leads to some subtleties associated with interpretation, but at the time I was
attempting to read that text I did not know enough QM to appreciate what they were doing, and
have not had time to attempt a new study of that content.
Asked about this offline, our Professor says, “Yes.... but I think this kind of result is essentially
what I was saying about the ’rotation of operators’ in lecture. As to ’interpreting’ the −1, there
are a number of different strategies and ways of thinking about things. But I think the fact
remains that a 2π rotation of a spinor replaces the spinor by −1 times itself, no matter how you
formulate things.”
That this double sided half angle construction to rotate a vector falls out of the Heisenberg
picture is interesting. Even in a purely geometric Clifford algebra context, I suppose that a
vector can be viewed as an operator (acting on another vector it produces a scalar and a bivector,
acting on higher grade algebraic elements one gets +1, −1 grade elements as a result). Yet that
is something that is true, independent of any quantum mechanics. In the books I mentioned, this
was not derived, but instead stated, and then proved. That is something that I think deserves a bit
of exploration. Perhaps there is a more natural derivation possible using infinitesimal arguments
... I would guess that scalar or grade selection would take the place of an expectation value in such
a geometric argument.

10.2 spin dynamics

At least classically, the angular momentum of charged objects is associated with a magnetic
moment as illustrated in fig. 10.1

Figure 10.1: Magnetic moment due to steady state current

µ = IAe⊥ (10.14)

In our scheme, following the (cgs?) text conventions of [4], where the E and B have the same
units, we write

IA
µ= e⊥ (10.15)
c
For a charge moving in a circle as in fig. 10.2

Figure 10.2: Charge moving in circle


\[
I = \frac{\text{charge}}{\text{time}}
= \frac{\text{distance}}{\text{time}} \frac{\text{charge}}{\text{distance}}
= \frac{q v}{2 \pi r} \qquad (10.16)
\]

so the magnetic moment is

\[
\mu = \frac{q v}{2 \pi r} \frac{\pi r^2}{c}
= \frac{q}{2 m c} (m v r)
= \gamma L \qquad (10.17)
\]

Here γ is the gyromagnetic ratio


Recall that we have a torque, as shown in fig. 10.3

Figure 10.3: Induced torque in the presence of a magnetic field

T = µ×B (10.18)

tending to line up µ with B. The energy is then

−µ · B (10.19)

Also recall that this torque leads to precession as shown in fig. 10.4

dL
= T = γL × B, (10.20)
dt

Figure 10.4: Precession due to torque

with precession frequency

ω = −γB. (10.21)

For a current due to a moving electron

γ = −e/(2mc) < 0   (10.22)
where we are, here, writing for charge on the electron −e.

Question: steady state currents only? Yes, this is only true for steady state currents.
For the translational motion of an electron, even if it is not moving in a steady way, regardless
of its dynamics

μ₀ = −(e/2mc) L   (10.23)
Now, back to quantum mechanics, we turn µ0 into a dipole moment operator and L is “pro-
moted” to an angular momentum operator.

Hint = −µ0 · B (10.24)

What about the “spin”?


Perhaps

μ_s = γ_s S   (10.25)

we write this as

μ_s = g (−e/2mc) S   (10.26)

so that

γ_s = −ge/(2mc)   (10.27)
Experimentally, one finds to very good approximation

g=2 (10.28)

There was a lot of trouble with this in early quantum mechanics where people got things
wrong, and canceled the wrong factors of 2.
In fact, Dirac’s relativistic theory for the electron predicts g = 2.
When this is measured experimentally, one does not get exactly g = 2; to account for the difference
one needs a theory that also incorporates photon creation and destruction, and the interaction of
the electron with such (virtual) photons. We get

g_theory = 2 (1.001159652140(±28))
g_experimental = 2 (1.0011596521884(±43))   (10.29)

Richard Feynman compared the precision of quantum mechanics, referring to this measure-
ment, “to predicting a distance as great as the width of North America to an accuracy of one
human hair’s breadth”.

10.3 the hydrogen atom with spin

READING: what chapter of [4] ?


For a spinless hydrogen atom, the Hilbert space was

H = H_CM ⊗ H_rel   (10.30)

where we have independent Hamiltonians for the motion of the center of mass and the relative
motion of the electron to the proton.
The basis kets for these could be designated |p_CM⟩ and |p_rel⟩ respectively.

Now we want to augment this, treating

H = H_CM ⊗ H_rel ⊗ H_s   (10.31)

where H_s is the Hilbert space for the spin of the electron. We are neglecting the spin of the
proton, but that could also be included (this turns out to be a lesser effect).
We will introduce a Hamiltonian including the dynamics of the relative motion and the electron
spin, acting on

H_rel ⊗ H_s   (10.32)

Covering the Hilbert space for this system we will use basis kets

|nlm±⟩   (10.33)

\[
|nlm+\rangle \rightarrow \begin{pmatrix} \langle r+|nlm+\rangle \\ \langle r-|nlm+\rangle \end{pmatrix}
= \begin{pmatrix} \Phi_{nlm}(\mathbf{r}) \\ 0 \end{pmatrix},
\qquad
|nlm-\rangle \rightarrow \begin{pmatrix} \langle r+|nlm-\rangle \\ \langle r-|nlm-\rangle \end{pmatrix}
= \begin{pmatrix} 0 \\ \Phi_{nlm}(\mathbf{r}) \end{pmatrix}. \qquad (10.34)
\]

Here r should be understood to really mean r_rel. Our full Hamiltonian, after introducing a
magnetic perturbation, is

\[
H = \frac{P_{CM}^2}{2M} + \left( \frac{P_{rel}^2}{2\mu} - \frac{e^2}{R_{rel}} \right) - \boldsymbol{\mu}_0 \cdot \mathbf{B} - \boldsymbol{\mu}_s \cdot \mathbf{B} \qquad (10.35)
\]

where

M = m_proton + m_electron,   (10.36)

and

1/μ = 1/m_proton + 1/m_electron.   (10.37)

For a uniform magnetic field

μ₀ = −(e/2mc) L   (10.38)
μ_s = g (−e/2mc) S   (10.39)
We also have higher order terms (higher order multipoles) and relativistic corrections (like
spin orbit coupling [17]).
TWO SPIN SYSTEMS, ANGULAR MOMENTUM, AND CLEBSCH-GORDAN CONVENTION
11
11.1 two spins

READING: §28 of [4].

Example: Consider two electrons, one in each of two quantum dots.

H = H₁ ⊗ H₂   (11.1)

where H₁ and H₂ are the respective 2D spin Hilbert spaces. Our complete Hilbert space is thus
a 4D space.
We will write

|+i1 ⊗ |+i2 = |++i


|+i1 ⊗ |−i2 = |+−i
(11.2)
|−i1 ⊗ |+i2 = |−+i
|−i1 ⊗ |−i2 = |−−i

Can introduce

S₁ = S⁽¹⁾ ⊗ I⁽²⁾
S₂ = I⁽¹⁾ ⊗ S⁽²⁾   (11.3)

Here we “promote” each of the individual spin operators to spin operators in the complete
Hilbert space.
We write


S 1z |++i = |++i
2 (11.4)

S 1z |+−i = |+−i
2


Write

S = S₁ + S₂,   (11.5)

for the full spin angular momentum operator. The z component of this operator is

S_z = S_1z + S_2z   (11.6)

S_z |++⟩ = (S_1z + S_2z) |++⟩ = (ħ/2 + ħ/2) |++⟩ = ħ |++⟩
S_z |+−⟩ = (S_1z + S_2z) |+−⟩ = (ħ/2 − ħ/2) |+−⟩ = 0
S_z |−+⟩ = (S_1z + S_2z) |−+⟩ = (−ħ/2 + ħ/2) |−+⟩ = 0
S_z |−−⟩ = (S_1z + S_2z) |−−⟩ = (−ħ/2 − ħ/2) |−−⟩ = −ħ |−−⟩   (11.7)

So, we find that the product kets |xx⟩ are all eigenkets of S_z. These will also all be eigenkets
of S₁² = S²_1x + S²_1y + S²_1z and S₂², since we have

S₁² |xx⟩ = ħ² (1/2)(1/2 + 1) |xx⟩ = (3/4) ħ² |xx⟩
S₂² |xx⟩ = ħ² (1/2)(1/2 + 1) |xx⟩ = (3/4) ħ² |xx⟩   (11.8)

S² = (S₁ + S₂) · (S₁ + S₂)
   = S₁² + S₂² + 2 S₁ · S₂   (11.9)

Note that we have a commutation assumption here, [S_1i, S_2i] = 0, since we have written
2 S₁ · S₂ instead of Σ_i (S_1i S_2i + S_2i S_1i). The justification for this appears to be the promotion of
the spin operators in eq. (11.3) to operators in the complete Hilbert space, since each of these
spin operators acts only on the kets associated with their index.
Are all the product kets also eigenkets of S²? Calculate

S² |+−⟩ = (S₁² + S₂² + 2 S₁ · S₂) |+−⟩
        = ((3/4) ħ² + (3/4) ħ²) |+−⟩ + 2 S_1x S_2x |+−⟩ + 2 S_1y S_2y |+−⟩ + 2 S_1z S_2z |+−⟩   (11.10)

For the z mixed terms, we have

2 S_1z S_2z |+−⟩ = 2 (ħ/2)(−ħ/2) |+−⟩ = −(ħ²/2) |+−⟩   (11.11)

So

S² |+−⟩ = ħ² |+−⟩ + 2 S_1x S_2x |+−⟩ + 2 S_1y S_2y |+−⟩   (11.12)

Since we have set our spin direction in the z direction with

\[
|+\rangle \rightarrow \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad
|-\rangle \rightarrow \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (11.13)
\]

We have

\[
\begin{aligned}
S_x |+\rangle &\rightarrow \frac{\hbar}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{\hbar}{2} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{\hbar}{2} |-\rangle \\
S_x |-\rangle &\rightarrow \frac{\hbar}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{\hbar}{2} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{\hbar}{2} |+\rangle \\
S_y |+\rangle &\rightarrow \frac{\hbar}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{i\hbar}{2} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{i\hbar}{2} |-\rangle \\
S_y |-\rangle &\rightarrow \frac{\hbar}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{-i\hbar}{2} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -\frac{i\hbar}{2} |+\rangle
\end{aligned} \qquad (11.14)
\]

And we are able to arrive at the action of S² on our mixed composite state

S² |+−⟩ = ħ² (|+−⟩ + |−+⟩).   (11.15)

For the action on the |++⟩ state we have

\[
\begin{aligned}
S^2 |++\rangle &= \left( \frac{3}{4}\hbar^2 + \frac{3}{4}\hbar^2 \right) |++\rangle
+ 2 \frac{\hbar^2}{4} |--\rangle + 2 i^2 \frac{\hbar^2}{4} |--\rangle
+ 2 \left(\frac{\hbar}{2}\right)\left(\frac{\hbar}{2}\right) |++\rangle \\
&= 2 \hbar^2 |++\rangle
\end{aligned} \qquad (11.16)
\]

and on the |−−⟩ state we have

\[
\begin{aligned}
S^2 |--\rangle &= \left( \frac{3}{4}\hbar^2 + \frac{3}{4}\hbar^2 \right) |--\rangle
+ 2 \frac{\hbar^2}{4} |++\rangle + 2 i^2 \frac{\hbar^2}{4} |++\rangle
+ 2 \left(-\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right) |--\rangle \\
&= 2 \hbar^2 |--\rangle
\end{aligned} \qquad (11.17)
\]

All of this can be assembled into a tidier matrix form

\[
S^2 \rightarrow \hbar^2 \begin{pmatrix}
2 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 2
\end{pmatrix}, \qquad (11.18)
\]

where the matrix is taken with respect to the (ordered) basis

{|++⟩, |+−⟩, |−+⟩, |−−⟩}.   (11.19)
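The matrix eq. (11.18) can be built directly by promoting the single particle spin matrices to the product space with Kronecker products, as in eq. (11.3). A sketch (not from the lecture), assuming numpy and ħ = 1:

```python
import numpy as np

hbar = 1.0
s = [hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex),
     hbar / 2 * np.array([[0, -1j], [1j, 0]]),
     hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# promote the one-particle operators to the 4D product space
S1 = [np.kron(si, I2) for si in s]
S2 = [np.kron(I2, si) for si in s]

S_total_sq = sum((a + b) @ (a + b) for a, b in zip(S1, S2))

expected = hbar**2 * np.array([[2, 0, 0, 0],
                               [0, 1, 1, 0],
                               [0, 1, 1, 0],
                               [0, 0, 0, 2]])
assert np.allclose(S_total_sq, expected)
```

Note that `np.kron(si, I2)` acts on the first particle in the ordered basis {|++⟩, |+−⟩, |−+⟩, |−−⟩}, matching the ordering of eq. (11.19).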

However,

[S², S_z] = 0
[S_i, S_j] = iħ Σ_k ε_ijk S_k   (11.20)

(Also, [S², S_i] = 0.)
It should be possible to find eigenkets of S² and S_z:

S² |s m_s⟩ = s(s+1) ħ² |s m_s⟩
S_z |s m_s⟩ = ħ m_s |s m_s⟩   (11.21)

An orthonormal set of eigenkets of S² and S_z is found to be

|++⟩                      s = 1 and m_s = 1
(1/√2)(|+−⟩ + |−+⟩)       s = 1 and m_s = 0
|−−⟩                      s = 1 and m_s = −1   (11.22)
(1/√2)(|+−⟩ − |−+⟩)       s = 0 and m_s = 0
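That these kets diagonalize S² with the triplet/singlet eigenvalue split is quick to confirm numerically, using the matrix of eq. (11.18). A sketch, assuming numpy and ħ = 1:

```python
import numpy as np

hbar = 1.0
# S^2 in the ordered basis {|++>, |+->, |-+>, |-->}, from eq. (11.18)
S_sq = hbar**2 * np.array([[2, 0, 0, 0],
                           [0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [0, 0, 0, 2]])

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
triplet0 = np.array([0, 1, 1, 0]) / np.sqrt(2)

assert np.allclose(S_sq @ singlet, 0 * singlet)              # s = 0: eigenvalue 0
assert np.allclose(S_sq @ triplet0, 2 * hbar**2 * triplet0)  # s = 1: s(s+1) hbar^2 = 2
assert np.allclose(np.linalg.eigvalsh(S_sq), [0, 2, 2, 2])   # one singlet, three triplet states
```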

The first three kets here can be grouped into a triplet in a 3D Hilbert space, whereas the last is
treated as a singlet in a 1D Hilbert space.
Form a grouping

H = H₁ ⊗ H₂   (11.23)

Can write

(1/2) ⊗ (1/2) = 1 ⊕ 0   (11.24)

where the 1 and 0 here refer to the spin index s.

Other examples: Consider, perhaps, the l = 5 state of the hydrogen atom

J₁² |j₁m₁⟩ = j₁(j₁+1) ħ² |j₁m₁⟩
J_1z |j₁m₁⟩ = ħ m₁ |j₁m₁⟩   (11.25)

J₂² |j₂m₂⟩ = j₂(j₂+1) ħ² |j₂m₂⟩
J_2z |j₂m₂⟩ = ħ m₂ |j₂m₂⟩   (11.26)
Consider the Hilbert space spanned by | j1 m1 i ⊗ | j2 m2 i, a (2 j1 + 1)(2 j2 + 1) dimensional space.
How to find the eigenkets of J 2 and Jz ?

11.2 more on two spin systems

READING: Covering §26.5 of the text [4].

(1/2) ⊗ (1/2) = 1 ⊕ 0   (11.27)

where 1 is a triplet state for s = 1 and 0 the "singlet" state with s = 0. We want to consider
the angular momentum of the entire system

j1 ⊗ j2 =? (11.28)
Why bother? Often it is true that

[ H, J] = 0, (11.29)
so, in that case, the eigenstates of the total angular momentum are also energy eigenstates, so
considering the angular momentum problem can help in finding these energy eigenstates.

Rotation operator

e−iθn̂·J/ h̄ (11.30)

n̂ · J = n x J x + ny Jy + nz Jz (11.31)

Recall the definitions of the raising or lowering operators

J± = J_x ± iJ_y,   (11.32)

or

J_x = (1/2)(J₊ + J₋)
J_y = (1/2i)(J₊ − J₋)   (11.33)

We have

n̂ · J = (n_x/2)(J₊ + J₋) + (n_y/2i)(J₊ − J₋) + n_z J_z,   (11.34)
and

1/2
J± | jmi = h̄(( j ∓ m)( j ± m1 )) | j, m ± 1i (11.35)

So


j0 m0 e−iθn̂·J/ h̄ | jmi = 0

(11.36)

unless j = j0 .


jm0 e−iθn̂·J/ h̄ | jmi

(11.37)

is a (2 j + 1) × (2 j + 1) matrix.
Combining rotations

⟨jm′| e^{−iθ_b n̂_a·J/ħ} e^{−iθ_a n̂_b·J/ħ} |jm⟩ = Σ_{m″} ⟨jm′| e^{−iθ_b n̂_a·J/ħ} |jm″⟩ ⟨jm″| e^{−iθ_a n̂_b·J/ħ} |jm⟩   (11.38)

If

e^{−iθn̂·J/ħ} = e^{−iθ_b n̂_a·J/ħ} e^{−iθ_a n̂_b·J/ħ}   (11.39)

(something that may be hard to compute but possible), then

⟨jm′| e^{−iθn̂·J/ħ} |jm⟩ = Σ_{m″} ⟨jm′| e^{−iθ_b n̂_a·J/ħ} |jm″⟩ ⟨jm″| e^{−iθ_a n̂_b·J/ħ} |jm⟩   (11.40)

For fixed j, the matrices ⟨jm′| e^{−iθn̂·J/ħ} |jm⟩ form a representation of the rotation group. These
(2j + 1) dimensional representations are irreducible (this will not be proven). There may be big
blocks of zeros in some of the matrices, but they cannot be simplified any further.
Back to the two particle system

j1 ⊗ j2 =? (11.41)

If we use

|j₁m₁⟩ ⊗ |j₂m₂⟩   (11.42)

and particular values of j₁ and j₂ are picked, then

⟨j₁m₁′; j₂m₂′| e^{−iθn̂·J/ħ} |j₁m₁; j₂m₂⟩   (11.43)

is also a representation of the rotation group, but these sorts of matrices can be simplified a lot.
This basis of dimensionality (2j₁ + 1)(2j₂ + 1) is reducible.
A lot of this is motivation, and we still want a representation of j₁ ⊗ j₂.
Recall that

(1/2) ⊗ (1/2) = 1 ⊕ 0 = (1/2 + 1/2) ⊕ (1/2 − 1/2)   (11.44)

Might guess that, for j₁ ≥ j₂

j₁ ⊗ j₂ = (j₁ + j₂) ⊕ (j₁ + j₂ − 1) ⊕ ··· ⊕ (j₁ − j₂)   (11.45)

Suppose that this is right. Then

5 ⊗ (1/2) = (11/2) ⊕ (9/2)   (11.46)

Check for dimensions:

1 ⊗ 1 = 2 ⊕ 1 ⊕ 0
3 × 3 = 5 + 3 + 1   (11.47)

Q: What was this ⊕?

It was just made up. We are creating a shorthand to say that we have a number of different
basis states for each of the groupings. (Need an example!)
Check for dimensions in general:

(2j₁ + 1)(2j₂ + 1) =?   (11.48)

We find

\[
\sum_{j = j_1 - j_2}^{j_1 + j_2} (2j + 1)
= \sum_{j=0}^{j_1 + j_2} (2j + 1) - \sum_{j=0}^{j_1 - j_2 - 1} (2j + 1)
= (2 j_1 + 1)(2 j_2 + 1) \qquad (11.49)
\]

Using

\[
\sum_{n=0}^{N} n = \frac{N(N + 1)}{2} \qquad (11.50)
\]

j₁ ⊗ j₂ = (j₁ + j₂) ⊕ (j₁ + j₂ − 1) ⊕ ··· ⊕ (j₁ − j₂)   (11.51)

In fact, this is correct. Proof "by construction" to follow.
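The dimension count behind eq. (11.49) can be spot checked for integer and half integer spins. A small sketch (illustrative only), using exact fractions:

```python
from fractions import Fraction

def dim_check(j1, j2):
    """Sum (2j+1) over j = |j1-j2| .. j1+j2 and compare with (2j1+1)(2j2+1)."""
    j1, j2 = Fraction(j1), Fraction(j2)
    lo, hi = abs(j1 - j2), j1 + j2
    total = sum(2 * (lo + k) + 1 for k in range(int(hi - lo) + 1))
    return total == (2 * j1 + 1) * (2 * j2 + 1)

# the 5 x 1/2 example of eq. (11.46), plus an integer and a half-integer pair
assert dim_check(5, Fraction(1, 2))
assert dim_check(1, 1)
assert dim_check(Fraction(3, 2), Fraction(1, 2))
```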

| j1 m1 i ⊗ | j2 m2 i (11.52)

J² |jm⟩ = j(j+1) ħ² |jm⟩
J_z |jm⟩ = m ħ |jm⟩   (11.53)

denote also by

| jm; j1 j2 i , (11.54)

but will often omit the ; j1 j2 portion.


With columns indexed by j = j₁+j₂, j₁+j₂−1, ..., j₁−j₂, the column for each j holding the 2j+1 kets |j, m⟩, m = j, ..., −j:

j = j₁+j₂:     |j₁+j₂, j₁+j₂⟩, |j₁+j₂, j₁+j₂−1⟩, ..., |j₁+j₂, −(j₁+j₂)⟩
j = j₁+j₂−1:   |j₁+j₂−1, j₁+j₂−1⟩, ..., |j₁+j₂−1, −(j₁+j₂−1)⟩
...
j = j₁−j₂:     |j₁−j₂, j₁−j₂⟩, ..., |j₁−j₂, −(j₁−j₂)⟩
   (11.55)

Look at

| j1 + j2 , j1 + j2 i (11.56)

Jz | j1 + j2 , j1 + j2 i = ( j1 + j2 ) h̄ | j1 + j2 , j1 + j2 i (11.57)

Jz (| j1 m1 i ⊗ | j2 m2 i) = (m1 + m2 ) h̄(| j1 m1 i ⊗ | j2 m2 i) (11.58)

so |j₁+j₂, j₁+j₂⟩ must be a superposition of states |j₁m₁⟩ ⊗ |j₂m₂⟩ with m₁ + m₂ = j₁ + j₂.
The only such product state is the one with m₁ = j₁ and m₂ = j₂, so we must have

|j₁+j₂, j₁+j₂⟩ = e^{iφ} (|j₁j₁⟩ ⊗ |j₂j₂⟩)   (11.59)


Choosing eiφ = 1 is called the Clebsch-Gordan convention.

| j1 + j2 , j1 + j2 i = | j1 j1 i ⊗ | j2 j2 i (11.60)

We now move down column.

1/2
J− | j1 + j2 , j1 + j2 i = h̄(2( j1 + j2 )) | j1 + j2 , j1 + j2 − 1i (11.61)
So
J− | j1 + j2 , j1 + j2 i
| j1 + j2 , j1 + j2 − 1i = 1/2
h̄(2( j1 + j2 ))
(11.62)
(J1− + J2− ) | j1 j1 i ⊗ | j2 j2 i
= 1/2
h̄(2( j1 + j2 ))

11.3 recap: table of two spin angular momenta

Recall our table, with columns indexed by j = j₁+j₂, j₁+j₂−1, ..., j₁−j₂, the column for each j holding |j, m⟩ for m = j, ..., −j:

j = j₁+j₂:     |j₁+j₂, j₁+j₂⟩, ..., |j₁+j₂, −(j₁+j₂)⟩
j = j₁+j₂−1:   |j₁+j₂−1, j₁+j₂−1⟩, ..., |j₁+j₂−1, −(j₁+j₂−1)⟩
...
j = j₁−j₂:     |j₁−j₂, j₁−j₂⟩, ..., |j₁−j₂, −(j₁−j₂)⟩
   (11.63)

First column Let us start with computation of the kets in the lowest position of the first
column, which we will obtain by successive application of the lowering operator to the state

| j1 + j2 , j1 + j2 i = | j1 j1 i ⊗ | j2 j2 i . (11.64)
Recall that our lowering operator was found to be (or defined as)

J₋ |j, m⟩ = √((j + m)(j − m + 1)) ħ |j, m − 1⟩,   (11.65)

so that application of the lowering operator gives us

\[
\begin{aligned}
| j_1 + j_2, j_1 + j_2 - 1 \rangle
&= \frac{J_- | j_1 j_1 \rangle \otimes | j_2 j_2 \rangle}{(2(j_1 + j_2))^{1/2} \hbar} \\
&= \frac{(J_{1-} + J_{2-}) | j_1 j_1 \rangle \otimes | j_2 j_2 \rangle}{(2(j_1 + j_2))^{1/2} \hbar} \\
&= \frac{ \sqrt{2 j_1}\, \hbar \, | j_1 (j_1 - 1) \rangle \otimes | j_2 j_2 \rangle
        + \sqrt{2 j_2}\, \hbar \, | j_1 j_1 \rangle \otimes | j_2 (j_2 - 1) \rangle }{(2(j_1 + j_2))^{1/2} \hbar} \\
&= \left( \frac{j_1}{j_1 + j_2} \right)^{1/2} | j_1 (j_1 - 1) \rangle \otimes | j_2 j_2 \rangle
 + \left( \frac{j_2}{j_1 + j_2} \right)^{1/2} | j_1 j_1 \rangle \otimes | j_2 (j_2 - 1) \rangle
\end{aligned} \qquad (11.66)
\]

Proceeding iteratively would allow us to finish off this column.
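The coefficients of eq. (11.66) can be compared against a direct Clebsch-Gordan computation. A sketch using sympy's CG class (sample values j₁ = 3/2, j₂ = 1; any j₁ ≥ j₂ works):

```python
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

j1, j2 = S(3) / 2, S(1)  # sample values
j = j1 + j2

# coefficients predicted by the lowering-operator construction, eq. (11.66)
c1 = sqrt(j1 / (j1 + j2))  # on |j1 (j1-1)> x |j2 j2>
c2 = sqrt(j2 / (j1 + j2))  # on |j1 j1> x |j2 (j2-1)>

# CG(j1, m1, j2, m2, j, m) is <j1 m1; j2 m2 | j m>
v1 = float(CG(j1, j1 - 1, j2, j2, j, j - 1).doit())
v2 = float(CG(j1, j1, j2, j2 - 1, j, j - 1).doit())

assert abs(v1 - float(c1)) < 1e-12
assert abs(v2 - float(c2)) < 1e-12
```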

Second column Moving on to the second column, the top most element in the table

| j1 + j2 − 1, j1 + j2 − 1i , (11.67)

can only be made up of | j1 m1 i ⊗ | j2 m2 i with m1 + m2 = j1 + j2 − 1. There are two possibilities

m 1 = j1 m2 = j2 − 1
(11.68)
m1 = j1 − 1 m2 = j2
So for some A and B to be determined we must have

| j1 + j2 − 1, j1 + j2 − 1i = A | j1 j1 i ⊗ | j2 ( j2 − 1)i + B | j1 ( j1 − 1)i ⊗ | j2 j2 i (11.69)

Observe that these are the same kets that we ended up with by application of the lowering
operator on the topmost element of the first column in our table. Since | j1 + j2 , j1 + j2 − 1i
and | j1 + j2 − 1, j1 + j2 − 1i are orthogonal, we can construct our ket for the top of the second
column by just seeking such an orthonormal superposition. Consider for example

0 = (a hb| + c hd|)(A |bi + C |di)


(11.70)
= aA + cC

With A = 1 we find that C = −a/c, so we have

a
A |bi + C |di = |bi − |di
c (11.71)
∼ c |bi − a |di

So we find, for real a and c, that

0 = (a⟨b| + c⟨d|)(c|b⟩ − a|d⟩),   (11.72)

for any orthonormal pair of kets |b⟩ and |d⟩. Using this we find

\[
| j_1 + j_2 - 1, j_1 + j_2 - 1 \rangle
= \left( \frac{j_2}{j_1 + j_2} \right)^{1/2} | j_1 j_1 \rangle \otimes | j_2 (j_2 - 1) \rangle
- \left( \frac{j_1}{j_1 + j_2} \right)^{1/2} | j_1 (j_1 - 1) \rangle \otimes | j_2 j_2 \rangle \qquad (11.73)
\]

This will work, although we could also multiply by any phase factor if desired. Such a choice
of phase factors is essentially just a convention.

The Clebsch-Gordan convention This is the convention we will use, where we

• choose the coefficients to be real.

• require the coefficient of the m₁ = j₁ term to be ≥ 0

This gives us the first state in the second column, and we can proceed to iterate using the
lowering operators to get all those values.
Moving on to the third column

| j1 + j2 − 2, j1 + j2 − 2i (11.74)

can only be made up of | j1 m1 i ⊗ | j2 m2 i with m1 + m2 = j1 + j2 − 2. There are now three


possibilities

m1 = j1 m2 = j2 − 2
m1 = j1 − 2 m2 = j2 (11.75)
m1 = j1 − 1 m2 = j2 − 1

and 2 orthogonality conditions, plus conventions. This is enough to determine the ket in the
third column.
We can formally write

|jm; j₁j₂⟩ = Σ_{m₁,m₂} |j₁m₁, j₂m₂⟩ ⟨j₁m₁, j₂m₂|jm; j₁j₂⟩   (11.76)

where

| j1 m1 , j2 m2 i = | j1 m1 i ⊗ | j2 m2 i , (11.77)

and

h j1 m1 , j2 m2 | jm; j1 j2 i (11.78)

are the Clebsch-Gordan coefficients, sometimes written as

⟨j₁m₁, j₂m₂|jm⟩   (11.79)

Properties

1. ⟨j₁m₁, j₂m₂|jm⟩ ≠ 0 only if |j₁ − j₂| ≤ j ≤ j₁ + j₂.
   This is sometimes called the triangle inequality, depicted in fig. 11.1

Figure 11.1: Angular momentum triangle inequality

2. ⟨j₁m₁, j₂m₂|jm⟩ ≠ 0 only if m = m₁ + m₂.

3. Real (convention).

4. ⟨j₁j₁, j₂(j − j₁)|jj⟩ positive (convention again).



5. Proved in the text. It follows that

⟨j₁m₁, j₂m₂|jm⟩ = (−1)^{j₁+j₂−j} ⟨j₁(−m₁), j₂(−m₂)|j(−m)⟩   (11.80)

Note that the ⟨j₁m₁, j₂m₂|jm⟩ are all real, so they can be assembled into an orthogonal
matrix. Example:

\[
\begin{pmatrix} |11\rangle \\ |10\rangle \\ |1(-1)\rangle \\ |00\rangle \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\
0 & 0 & 0 & 1 \\
0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0
\end{pmatrix}
\begin{pmatrix} |++\rangle \\ |+-\rangle \\ |-+\rangle \\ |--\rangle \end{pmatrix} \qquad (11.81)
\]

Example: Electrons. Consider the special case of an electron, a spin 1/2 particle with s = 1/2
and m_s = ±1/2, where we have

J = L + S   (11.82)

|lm⟩ ⊗ |(1/2) m_s⟩   (11.83)

The possible values of j are l ± 1/2:

l ⊗ (1/2) = (l + 1/2) ⊕ (l − 1/2)   (11.84)

Our table representation is then, with one column for each of j = l + 1/2 and j = l − 1/2:

j = l + 1/2:   |l+1/2, l+1/2⟩, |l+1/2, l+1/2−1⟩, ..., |l+1/2, −(l+1/2)⟩
j = l − 1/2:   |l−1/2, l−1/2⟩, ..., |l−1/2, −(l−1/2)⟩
   (11.85)

Here |l+1/2, m⟩ can only have contributions from

|l, m−1/2⟩ ⊗ |(1/2)(1/2)⟩
|l, m+1/2⟩ ⊗ |(1/2)(−1/2)⟩   (11.86)

and |l−1/2, m⟩ from the same two. So using this and conventions we can work out (in §28 page
524, of our text [4])

\[
\left| l \pm \frac{1}{2}, m \right\rangle
= \pm \frac{1}{\sqrt{2l + 1}} \left( l + \frac{1}{2} \pm m \right)^{1/2} \left| l, m - \frac{1}{2} \right\rangle \otimes \left| \frac{1}{2}, \frac{1}{2} \right\rangle
+ \frac{1}{\sqrt{2l + 1}} \left( l + \frac{1}{2} \mp m \right)^{1/2} \left| l, m + \frac{1}{2} \right\rangle \otimes \left| \frac{1}{2}, -\frac{1}{2} \right\rangle \qquad (11.87)
\]
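The j = l + 1/2 coefficients of eq. (11.87) can likewise be spot checked against sympy's Clebsch-Gordan class. A sketch for sample values l = 2, m = 1/2:

```python
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

l, m = 2, S(1) / 2  # sample values

# j = l + 1/2 case of eq. (11.87)
c_up = sqrt((l + S(1) / 2 + m) / (2 * l + 1))    # coefficient of |l, m-1/2> x |1/2, +1/2>
c_down = sqrt((l + S(1) / 2 - m) / (2 * l + 1))  # coefficient of |l, m+1/2> x |1/2, -1/2>

cg_up = CG(l, m - S(1) / 2, S(1) / 2, S(1) / 2, l + S(1) / 2, m).doit()
cg_down = CG(l, m + S(1) / 2, S(1) / 2, -S(1) / 2, l + S(1) / 2, m).doit()

assert abs(float(cg_up) - float(c_up)) < 1e-12
assert abs(float(cg_down) - float(c_down)) < 1e-12
```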

11.4 tensor operators

§29 of the text.


Recall how we characterized a rotation

r → R(r). (11.88)
Here we are using an active rotation as depicted in fig. 11.2

Figure 11.2: active rotation

Suppose that

[R(r)]_i = Σ_j M_ij r_j   (11.89)

so that

U = e−iθn̂·J/ h̄ (11.90)

rotates in the same way. Rotating a ket as in fig. 11.3

Figure 11.3: Rotating a wavefunction

Rotating a ket

|ψ⟩   (11.91)

using the prescription

|ψ′⟩ = e^{−iθn̂·J/ħ} |ψ⟩   (11.92)

and write

|ψ′⟩ = U[M] |ψ⟩   (11.93)

Now look at

⟨ψ| O |ψ⟩   (11.94)

and compare with

⟨ψ′| O |ψ′⟩ = ⟨ψ| U†[M] O U[M] |ψ⟩   (∗)   (11.95)

We will be looking in more detail at (∗).


ROTATIONS OF OPERATORS AND SPHERICAL TENSORS
12
12.1 setup

READING: §28 [4].


Rotating with U[M] as in fig. 12.1

Figure 12.1: Rotating a state centered at F

r̃_i = Σ_j M_ij r_j   (12.1)

⟨ψ| R_i |ψ⟩ = r_i   (12.2)

⟨ψ| U†[M] R_i U[M] |ψ⟩ = r̃_i = Σ_j M_ij r_j = ⟨ψ| (U†[M] R_i U[M]) |ψ⟩   (12.3)

So

U†[M] R_i U[M] = Σ_j M_ij R_j   (12.4)


Any three operators V_x, V_y, V_z that transform according to

U†[M] V_i U[M] = Σ_j M_ij V_j   (12.5)

form the components of a vector operator.

12.2 infinitesimal rotations

Consider infinitesimal rotations, where we can show (problem set 11, problem 1) that

[V_i, J_j] = iħ Σ_k ε_ijk V_k   (12.6)

Note that for V_i = J_i we recover the familiar commutator rules for angular momentum, but
this also holds for operators R, P, J, ...
Note that

U†[M] = U[M⁻¹] = U[Mᵀ],   (12.7)

so

U[M] V_i U†[M] = U†[Mᵀ] V_i U[Mᵀ] = Σ_j M_ji V_j   (12.8)

so

⟨ψ| V_i |ψ⟩ = ⟨ψ| U†[M] (U[M] V_i U†[M]) U[M] |ψ⟩   (12.9)

In the same way, suppose we have nine operators

τ_ij, i, j = x, y, z   (12.10)

that transform according to

U[M] τ_ij U†[M] = Σ_lm M_li M_mj τ_lm   (12.11)

then we will call these the components of a (Cartesian) second rank tensor operator. Suppose
that we have an operator S that transforms as

U[M] S U†[M] = S   (12.12)

Then we will call S a scalar operator.

12.3 a problem

This all looks good, but it is really not satisfactory. There is a problem.
Suppose that we have a Cartesian tensor operator like this; let us look at the quantity

\[
\begin{aligned}
\sum_i \tau_{ii} &= \sum_i U[M] \tau_{ii} U^\dagger[M] \\
&= \sum_i \sum_{lm} M_{li} M_{mi} \tau_{lm} \\
&= \sum_{lm} \sum_i M_{li} M^T_{im} \tau_{lm} \\
&= \sum_{lm} \delta_{lm} \tau_{lm} \\
&= \sum_l \tau_{ll}
\end{aligned} \qquad (12.13)
\]

We see buried inside these Cartesian tensors of higher rank there is some simplicity embedded
(in this case trace invariance). Who knows what other relationships are also there? We want
to work with and extract the buried simplicities, and we will find that the Cartesian way of
expressing these tensors is horribly inefficient. What is a representation that does not have any
excess information, and is in some sense minimal?
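The trace invariance extracted in eq. (12.13) can be checked numerically for a random Cartesian tensor and a random rotation. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
tau = rng.standard_normal((3, 3))

# a random rotation matrix via QR (force det = +1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

# transformed components, as in eq. (12.11): tau'_ij = sum_lm M_li M_mj tau_lm
tau_rot = np.einsum('li,mj,lm->ij', Q, Q, tau)

# the trace survives the rotation
assert np.isclose(np.trace(tau_rot), np.trace(tau))
```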

12.4 how do we extract these buried simplicities?

Recall that

U[M] |jm″⟩   (12.14)

gives a linear combination of the |jm′⟩:

U[M] |jm″⟩ = Σ_{m′} |jm′⟩ ⟨jm′| U[M] |jm″⟩ = Σ_{m′} |jm′⟩ D^{(j)}_{m′m″}[M]   (12.15)
We have talked about before how these D^{(j)}_{m′m″}[M] form a representation of the rotation group.
These are in fact (not proved here) an irreducible representation.
Look at each element of D^{(j)}_{m′m″}[M]. These are matrices and will be different according to
which rotation M is chosen. For any given element there is some M for which it is nonzero; no
element is zero for all possible M. There are more formal ways to think about this in a group
theory context, but this is a physical way to think about it.
Think of these as the basis vectors for some eigenket of J²:

|ψ⟩ = Σ_{m″} |jm″⟩ ⟨jm″|ψ⟩ = Σ_{m″} a_{m″} |jm″⟩   (12.16)

where

a_{m″} = ⟨jm″|ψ⟩   (12.17)

So

\[
\begin{aligned}
U[M] |\psi\rangle &= \sum_{m'} U[M] |jm'\rangle \langle jm'|\psi\rangle \\
&= \sum_{m'} U[M] |jm'\rangle a_{m'} \\
&= \sum_{m', m''} |jm''\rangle \langle jm''| U[M] |jm'\rangle a_{m'} \\
&= \sum_{m', m''} |jm''\rangle D^{(j)}_{m'' m'} a_{m'} \\
&= \sum_{m''} \tilde{a}_{m''} |jm''\rangle
\end{aligned} \qquad (12.18)
\]

where

ã_{m″} = Σ_{m′} D^{(j)}_{m″m′} a_{m′}   (12.19)

Recall that

r̃_i = Σ_j M_ij r_j   (12.20)

Define (2k + 1) operators T_k^q, q = k, k − 1, ··· , −k as the elements of a spherical tensor of rank
k if

U[M] T_k^q U†[M] = Σ_{q′} D^{(k)}_{q′q} T_k^{q′}   (12.21)

Here we are looking for a better way to organize things, and it will turn out (not to be proved)
that this will be an irreducible way to represent things.

12.5 motivating spherical tensors

We want to work through some examples of spherical tensors, and how they relate to Cartesian
tensors. To do this, a motivating story needs to be told.
Let us suppose that |ψ⟩ is a ket for a single particle. Perhaps we are talking about an electron
without spin, and write

⟨r|ψ⟩ = Y_lm(θ, φ) f(r) = Σ_{m″} a_{m″} Y_{lm″}(θ, φ)   (12.22)

for a_{m″} = δ_{m″m}, and after dropping f(r). So

⟨r| U[M] |ψ⟩ = Σ_{m″,m′} D^{(l)}_{m″m′} a_{m′} Y_{lm″}(θ, φ)   (12.23)

We are writing this in this particular way to make a point. Now also assume that

⟨r|ψ⟩ = Y_lm(θ, φ)   (12.24)

so we find, for the rotated wavefunction,

⟨r| U[M] |ψ⟩ = Σ_{m″} Y_{lm″}(θ, φ) D^{(l)}_{m″m} ≡ Y′_lm(θ, φ)   (12.25)
178 rotations of operators and spherical tensors

Ylm (θ, φ) = Ylm (x, y, z) (12.26)

so

X
( j)
0
Ylm (x, y, z) = Ylm00 (x, y, z)Dm00 m (12.27)
m00

Now consider the spherical harmonic as an operator, $Y_l^m(X, Y, Z)$

\[
U[M] Y_l^m(X, Y, Z) U^\dagger[M] = \sum_{m''} Y_l^{m''}(X, Y, Z) D^{(l)}_{m''m}.
\tag{12.28}
\]

So this is a way to generate spherical tensor operators of rank 0, 1, 2, · · ·.

12.6 spherical tensors (cont)

READING: §29 of [4].

definition. Any $2k+1$ operators $T(k,q)$, $q = -k, \cdots, k$, are the elements of a spherical tensor of rank $k$ if

\[
U[M] T(k, q) U^{-1}[M] = \sum_{q'} T(k, q') D^{(k)}_{q'q},
\tag{12.29}
\]

where $D^{(k)}_{q'q''}$ is the matrix element of the rotation operator

\[
D^{(k)}_{q'q''} = \left\langle kq' \right| U[M] \left| kq'' \right\rangle.
\tag{12.30}
\]

So, if we have a Cartesian vector operator with components $V_x, V_y, V_z$, then we can construct a corresponding spherical vector operator

\[
\begin{aligned}
T(1, 1) &= -\frac{V_x + i V_y}{\sqrt{2}} \equiv V_{+1} \\
T(1, 0) &= V_z \equiv V_0 \\
T(1, -1) &= \frac{V_x - i V_y}{\sqrt{2}} \equiv V_{-1}.
\end{aligned}
\tag{12.31}
\]
2

By considering infinitesimal rotations we can come up with the commutation relations between the angular momentum operators and the tensor components

\[
\begin{aligned}
[J_\pm, T(k, q)] &= \hbar \sqrt{(k \mp q)(k \pm q + 1)}\, T(k, q \pm 1) \\
[J_z, T(k, q)] &= \hbar q\, T(k, q).
\end{aligned}
\tag{12.32}
\]

Note that the text in (29.15) takes these as the definition, whereas in class they were derived as consequences of eq. (12.29), once infinitesimal rotations were used.

Recall that these match our angular momentum raising and lowering identities

\[
\begin{aligned}
J_\pm |kq\rangle &= \hbar \sqrt{(k \mp q)(k \pm q + 1)}\, |k, q \pm 1\rangle \\
J_z |kq\rangle &= \hbar q\, |k, q\rangle.
\end{aligned}
\tag{12.33}
\]
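These commutation relations can be spot checked numerically for the simplest nontrivial case: take the spin-1 angular momentum matrices themselves as the Cartesian vector operator and form the spherical components of eq. (12.31). This is a small sketch (with $\hbar = 1$; not part of the lecture itself):

```python
import numpy as np

s2 = np.sqrt(2.0)

# Spin-1 matrices in the |1,m> basis, m = 1, 0, -1 (hbar = 1).
Jp = s2 * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)  # J+
Jm = Jp.conj().T                                                      # J-
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)

# Spherical components of the vector operator J, as in eq. (12.31).
T = {
    +1: -(Jx + 1j * Jy) / s2,
    0: Jz,
    -1: (Jx - 1j * Jy) / s2,
}

def comm(A, B):
    return A @ B - B @ A

# [Jz, T(1,q)] = q T(1,q), eq. (12.32) with hbar = 1.
for q in (-1, 0, 1):
    assert np.allclose(comm(Jz, T[q]), q * T[q])

# [J+, T(1,q)] = sqrt((1-q)(1+q+1)) T(1,q+1).
for q in (-1, 0):
    c = np.sqrt((1 - q) * (1 + q + 1))
    assert np.allclose(comm(Jp, T[q]), c * T[q + 1])

print("commutation relations verified")
```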
Consider the parallel between the two problems

\[
\begin{aligned}
T(k, q) &\leftrightarrow |kq\rangle \\
[J_\pm, T(k, q)] &\leftrightarrow J_\pm |kq\rangle \\
[J_z, T(k, q)] &\leftrightarrow J_z |kq\rangle.
\end{aligned}
\tag{12.34}
\]

We have a correspondence between the spherical tensors and angular momentum kets

\[
\begin{array}{ll}
T_1(k_1, q_1), & q_1 = -k_1, \cdots, k_1 \\
T_2(k_2, q_2), & q_2 = -k_2, \cdots, k_2
\end{array}
\quad \leftrightarrow \quad
\begin{array}{l}
|k_1 q_1\rangle |k_2 q_2\rangle \\
q_1 = -k_1, \cdots, k_1, \quad q_2 = -k_2, \cdots, k_2.
\end{array}
\tag{12.35}
\]
So, as we can write for angular momentum (where the brackets $\langle k_1 q_1 k_2 q_2 | kq \rangle$ are the C.G. coefficients)

\[
\begin{aligned}
|kq\rangle &= \sum_{q_1, q_2} |k_1, q_1\rangle |k_2, q_2\rangle \langle k_1 q_1 k_2 q_2 | kq \rangle \\
|k_1 q_1; k_2 q_2\rangle &= \sum_{k, q'} |kq'\rangle \langle kq' | k_1 q_1 k_2 q_2 \rangle.
\end{aligned}
\tag{12.36}
\]

We also have for spherical tensors

\[
\begin{aligned}
T(k, q) &= \sum_{q_1, q_2} T_1(k_1, q_1) T_2(k_2, q_2) \langle k_1 q_1 k_2 q_2 | kq \rangle \\
T_1(k_1, q_1) T_2(k_2, q_2) &= \sum_{k, q'} T(k, q') \langle kq' | k_1 q_1 k_2 q_2 \rangle.
\end{aligned}
\tag{12.37}
\]

We can form eigenstates $|kq\rangle$ of (total angular momentum)$^2$ and the $z$-component of the total angular momentum. FIXME: this will not be proven, but we are strongly encouraged to try this ourselves.

\[
\begin{aligned}
\text{spherical tensor (3)} &\leftrightarrow \text{Cartesian vector (3)} \\
\text{(spherical vector)(spherical vector)} &\leftrightarrow \text{Cartesian tensor}
\end{aligned}
\tag{12.38}
\]

We can check the dimensions for a spherical tensor decomposition into rank 0, rank 1 and rank 2 tensors.

\[
\begin{array}{lll}
\text{spherical tensor rank 0} & (1) & \text{(Cartesian vector)(Cartesian vector)} \\
\text{spherical tensor rank 1} & (3) & (3)(3) \\
\text{spherical tensor rank 2} & (5) & 9 \\
\text{dimension check sum} & 9 &
\end{array}
\tag{12.39}
\]
Or in the direct product and sum shorthand

1⊗1 = 0⊕1⊕2 (12.40)


Note that this is just like problem 4 in problem set 10, where we calculated the CG kets for the $1 \otimes 1 = 0 \oplus 1 \oplus 2$ decomposition starting from kets $|1m\rangle |1m'\rangle$ (with $\overline{m}$ denoting $-m$):

\[
\begin{array}{lll}
|22\rangle & & \\
|21\rangle & |11\rangle & \\
|20\rangle & |10\rangle & |00\rangle \\
|2\overline{1}\rangle & |1\overline{1}\rangle & \\
|2\overline{2}\rangle & &
\end{array}
\tag{12.41}
\]

Example. How about a Cartesian tensor of rank 3?

\[
A_{ijk}
\tag{12.42}
\]

\[
\begin{aligned}
1 \otimes 1 \otimes 1
&= 1 \otimes (0 \oplus 1 \oplus 2) \\
&= (1 \otimes 0) \oplus (1 \otimes 1) \oplus (1 \otimes 2) \\
&= 1 \oplus (0 \oplus 1 \oplus 2) \oplus (3 \oplus 2 \oplus 1),
\end{aligned}
\tag{12.43}
\]

with the dimension check $3 + 1 + 3 + 5 + 7 + 5 + 3 = 27$.
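The bookkeeping in decompositions like this is easy to automate. Here is a small sketch (not from the lecture) that couples angular momenta pairwise and checks the dimension count for $1 \otimes 1 \otimes 1$:

```python
def couple(j1, j2):
    """Angular momenta appearing in the product j1 (x) j2."""
    return list(range(abs(j1 - j2), j1 + j2 + 1))

def couple_many(js):
    """Multiplicity-preserving decomposition of j1 (x) j2 (x) ... (x) jn."""
    result = [js[0]]
    for j in js[1:]:
        result = [jnew for jold in result for jnew in couple(jold, j)]
    return sorted(result)

def dim(j):
    return 2 * j + 1

# 1 (x) 1 = 0 (+) 1 (+) 2, eq. (12.40)
assert couple_many([1, 1]) == [0, 1, 2]

# 1 (x) 1 (x) 1 = 1 (+) (0 (+) 1 (+) 2) (+) (3 (+) 2 (+) 1), eq. (12.43)
js = couple_many([1, 1, 1])
assert js == [0, 1, 1, 1, 2, 2, 3]
assert sum(dim(j) for j in js) == 27  # 3**3 Cartesian components
print(js, sum(dim(j) for j in js))
```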

Why bother? Consider a tensor operator $T(k, q)$ and an eigenket of angular momentum $|\alpha j m\rangle$, where $\alpha$ is a degeneracy index. Look at

\[
\begin{aligned}
T(k, q) |\alpha jm\rangle \rightarrow U[M] T(k, q) |\alpha jm\rangle
&= U[M] T(k, q) U^\dagger[M]\, U[M] |\alpha jm\rangle \\
&= \sum_{q' m'} D^{(k)}_{q'q} D^{(j)}_{m'm}\, T(k, q') |\alpha j m'\rangle.
\end{aligned}
\tag{12.44}
\]

This transforms like $|kq\rangle \otimes |jm\rangle$. We can say immediately


\[
\left\langle \alpha' j' m' \right| T(k, q) \left| \alpha jm \right\rangle = 0,
\tag{12.45}
\]

unless

\[
\begin{aligned}
|k - j| &\le j' \le k + j \\
m' &= m + q.
\end{aligned}
\tag{12.46}
\]

This is the “selection rule”.


Examples.

• Scalar $T(0, 0)$:

\[
\left\langle \alpha' j' m' \right| T(0, 0) \left| \alpha jm \right\rangle = 0,
\tag{12.47}
\]

unless $j = j'$ and $m = m'$.

• $V_x, V_y, V_z$. What are the non-vanishing matrix elements? With

\[
V_x = \frac{V_{-1} - V_{+1}}{\sqrt{2}}, \cdots
\tag{12.48}
\]

we have

\[
\left\langle \alpha' j' m' \right| V_{x,y} \left| \alpha jm \right\rangle = 0,
\tag{12.49}
\]

unless

\[
\begin{aligned}
|j - 1| &\le j' \le j + 1 \\
m' &= m \pm 1,
\end{aligned}
\tag{12.50}
\]

and

\[
\left\langle \alpha' j' m' \right| V_z \left| \alpha jm \right\rangle = 0,
\tag{12.51}
\]

unless

\[
\begin{aligned}
|j - 1| &\le j' \le j + 1 \\
m' &= m.
\end{aligned}
\tag{12.52}
\]

Very generally one can prove (the Wigner-Eckart theorem of the text §29.3)

\[
\left\langle \alpha_2 j_2 m_2 \right| T(k, q) \left| \alpha_1 j_1 m_1 \right\rangle = \left\langle \alpha_2 j_2 \right| T(k) \left| \alpha_1 j_1 \right\rangle \cdot \left\langle j_2 m_2 \middle| kq; j_1 m_1 \right\rangle,
\tag{12.53}
\]

where the first factor is a “reduced matrix element” describing the “physics”, and the second is the CG coefficient, describing the “geometry”.
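The selection rules of eq. (12.46) lend themselves to a trivial computational statement. This hypothetical helper (not part of the notes) returns whether a matrix element $\langle \alpha' j' m' | T(k,q) | \alpha j m \rangle$ can be nonvanishing:

```python
def can_be_nonzero(jp, mp, k, q, j, m):
    """Wigner-Eckart selection rules for <a' j' m'| T(k,q) |a j m>,
    eq. (12.46): triangle inequality plus m' = m + q."""
    return abs(k - j) <= jp <= k + j and mp == m + q

# Scalar operator T(0,0): only j' = j, m' = m survives, eq. (12.47).
assert can_be_nonzero(1, 0, 0, 0, 1, 0)
assert not can_be_nonzero(2, 0, 0, 0, 1, 0)

# Vector operator (k = 1): V_z (q = 0) requires m' = m, while
# V_{x,y} (built from q = +/-1) require m' = m +/- 1.
assert can_be_nonzero(1, 1, 1, 1, 1, 0)
assert not can_be_nonzero(1, 1, 1, 0, 1, 0)
print("selection rules behave as expected")
```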
Part III

S C AT T E R I N G T H E O RY
S C AT T E R I N G T H E O RY
13
13.1 setup

READING: §19, §20 of the text [4].


Figure 13.1 shows a simple classical picture of a two particle scattering collision

Figure 13.1: classical collision of particles

We will focus on point particle elastic collisions (no energy lost in the collision). With parti-
cles of mass m1 and m2 we write for the total and reduced mass respectively

\[
M = m_1 + m_2
\tag{13.1}
\]

\[
\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2},
\tag{13.2}
\]

so that interaction due to a potential $V(\mathbf{r}_1 - \mathbf{r}_2)$ that depends on the difference in position $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$ has, in the center of mass frame, the Hamiltonian

\[
H = \frac{\mathbf{p}^2}{2\mu} + V(\mathbf{r}).
\tag{13.3}
\]

In the classical picture we would investigate the scattering radius $r_0$ associated with the impact parameter $\rho$ as depicted in fig. 13.2


Figure 13.2: Classical scattering radius and impact parameter

13.2 1d qm scattering. no potential wave packet time evolution

Now let us move to the QM picture, where we assume that we have a particle that can be represented as a wave packet as in fig. 13.3

Figure 13.3: Wave packet for a particle wavefunction $\Re(\psi(x, 0))$

First, without any potential ($V(x) = 0$), let us consider the evolution. Our position and momentum
space representations are related by

\[
\int |\psi(x, t)|^2 dx = 1 = \int |\psi(p, t)|^2 dp,
\tag{13.4}
\]

and by Fourier transform

\[
\psi(x, t) = \int \frac{dp}{\sqrt{2\pi\hbar}} \psi(p, t) e^{ipx/\hbar}.
\tag{13.5}
\]
Schrödinger’s equation takes the form

\[
i\hbar \frac{\partial \psi(x, t)}{\partial t} = -\frac{\hbar^2}{2\mu} \frac{\partial^2 \psi(x, t)}{\partial x^2},
\tag{13.6}
\]

or more simply in momentum space

\[
i\hbar \frac{\partial \psi(p, t)}{\partial t} = \frac{p^2}{2\mu} \psi(p, t).
\tag{13.7}
\]
Rearranging to integrate we have

\[
\frac{\partial \psi}{\partial t} = -\frac{i p^2}{2\mu\hbar} \psi,
\tag{13.8}
\]

and integrating

\[
\ln \psi = -\frac{i p^2 t}{2\mu\hbar} + \ln C,
\tag{13.9}
\]

or

\[
\psi = C e^{-i p^2 t/2\mu\hbar} = \psi(p, 0) e^{-i p^2 t/2\mu\hbar}.
\tag{13.10}
\]

Time evolution in momentum space for the free particle changes only the phase of the wavefunction, leaving the momentum probability density of the particle unchanged.
Fourier transforming, we find our position space wavefunction to be

\[
\psi(x, t) = \int \frac{dp}{\sqrt{2\pi\hbar}} \psi(p, 0) e^{ipx/\hbar} e^{-i p^2 t/2\mu\hbar}.
\tag{13.11}
\]

To clean things up, write

\[
p = \hbar k,
\tag{13.12}
\]

for

\[
\psi(x, t) = \int \frac{dk}{\sqrt{2\pi}} a(k, 0) e^{ikx} e^{-i\hbar k^2 t/2\mu},
\tag{13.13}
\]

where

\[
a(k, 0) = \sqrt{\hbar}\, \psi(p, 0).
\tag{13.14}
\]

Putting

\[
a(k, t) = a(k, 0) e^{-i\hbar k^2 t/2\mu},
\tag{13.15}
\]

we have

\[
\psi(x, t) = \int \frac{dk}{\sqrt{2\pi}} a(k, t) e^{ikx}.
\tag{13.16}
\]

Observe that we have

\[
\int dk\, |a(k, t)|^2 = \int dp\, |\psi(p, t)|^2 = 1.
\tag{13.17}
\]

13.3 a gaussian wave packet

Suppose that we have, as depicted in fig. 13.4

Figure 13.4: Gaussian wave packet

a Gaussian wave packet of the form

\[
\psi(x, 0) = \frac{e^{i k_0 x}}{(\pi \Delta^2)^{1/4}} e^{-x^2/2\Delta^2}.
\tag{13.18}
\]

This is actually a minimum uncertainty packet with

\[
\begin{aligned}
\Delta x &= \frac{\Delta}{\sqrt{2}} \\
\Delta p &= \frac{\hbar}{\sqrt{2}\,\Delta}.
\end{aligned}
\tag{13.19}
\]

Taking Fourier transforms we have

\[
\begin{aligned}
a(k, 0) &= \left( \frac{\Delta^2}{\pi} \right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} \\
a(k, t) &= \left( \frac{\Delta^2}{\pi} \right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} e^{-i\hbar k^2 t/2\mu} \equiv \alpha(k, t).
\end{aligned}
\tag{13.20}
\]

For t > 0 our wave packet will start moving and spreading as in fig. 13.5

Figure 13.5: moving spreading Gaussian packet
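This free particle motion and spreading can be demonstrated numerically. Here is a sketch (assuming $\hbar = \mu = 1$ units and illustrative values for $\Delta$ and $k_0$) that evolves the packet of eq. (13.18) by applying the phase of eq. (13.15) in $k$ space, then checks that the norm is conserved while the packet moves at the group velocity and spreads:

```python
import numpy as np

hbar = mu = 1.0
Delta, k0 = 1.0, 5.0       # illustrative packet width and mean wavenumber
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi0 = (np.pi * Delta**2) ** -0.25 * np.exp(1j * k0 * x - x**2 / (2 * Delta**2))

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
a0 = np.fft.fft(psi0)

def evolve(t):
    # a(k,t) = a(k,0) exp(-i hbar k^2 t / 2 mu), eq. (13.15)
    return np.fft.ifft(a0 * np.exp(-1j * hbar * k**2 * t / (2 * mu)))

def norm(psi):
    return np.sum(np.abs(psi) ** 2) * dx

def mean_and_width(psi):
    p = np.abs(psi) ** 2 * dx
    m = np.sum(x * p)
    return m, np.sqrt(np.sum((x - m) ** 2 * p))

t = 2.0
psit = evolve(t)
assert abs(norm(psit) - 1.0) < 1e-9          # phase-only evolution preserves norm
m0, w0 = mean_and_width(psi0)
mt, wt = mean_and_width(psit)
assert abs(mt - hbar * k0 * t / mu) < 1e-3   # packet moves at group velocity
assert wt > w0                                # and spreads
print(f"center {mt:.3f}, width {w0:.3f} -> {wt:.3f}")
```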

13.4 with a potential

Now “switch on” a potential, still assuming a wave packet representation for the particle. With
a positive (repulsive) potential as in fig. 13.6, at a time long before the interaction of the wave
packet with the potential we can visualize the packet as heading towards the barrier.

Figure 13.6: QM wave packet prior to interaction with repulsive potential

After some time long after the interaction, classically for this sort of potential where the particle kinetic energy is less than the barrier “height”, we would have total reflection. In the QM case, we have seen before that we will have a reflected and a transmitted portion of the wave packet, as depicted in fig. 13.7

Figure 13.7: QM wave packet long after interaction with repulsive potential

Even if the particle kinetic energy is greater than the barrier height, as in fig. 13.8, we can
still have a reflected component.

Figure 13.8: Kinetic energy greater than potential energy

This is even true for a negative potential as depicted in fig. 13.9!


Consider the probability for the particle to be found anywhere long after the interaction. Summing over the transmitted and reflected wave functions, we have

\[
\begin{aligned}
1 &= \int |\psi_r + \psi_t|^2 \\
&= \int |\psi_r|^2 + \int |\psi_t|^2 + 2 \Re \int \psi_r^* \psi_t.
\end{aligned}
\tag{13.21}
\]

Observe that long after the interaction the cross terms will vanish, because the reflected and transmitted wavefunctions are non-overlapping, leaving just the probability densities for the transmitted and reflected portions independently.

Figure 13.9: QM wave packet interacting with a negative (attractive) potential

We define

\[
\begin{aligned}
T &= \int |\psi_t(x, t)|^2 dx \\
R &= \int |\psi_r(x, t)|^2 dx.
\end{aligned}
\tag{13.22}
\]

The objective of most of our scattering problems will be the calculation of these probabilities
and the comparisons of their ratios.

Question. Can we have more than one wave packet reflect off? Yes, we could have multiple wave packets for both the reflected and the transmitted portions. For example, if the potential has some internal structure there could be internal reflections before anything emerges on either side, and things could get quite messy.

13.5 considering the time independent case temporarily

We are going to work through something that is going to seem at first to be completely unrelated.
We will (eventually) see that this can be applied to this problem, so a bit of patience will be
required.
We will be using the time independent Schrödinger equation

\[
-\frac{\hbar^2}{2\mu} \psi_k''(x) + V(x) \psi_k(x) = E \psi_k(x),
\tag{13.23}
\]

where we have added a subscript k to our wave function with the intention (later) of allowing
this to vary. For “future use” we define for k > 0

\[
E = \frac{\hbar^2 k^2}{2\mu}.
\tag{13.24}
\]

Consider a potential as in fig. 13.10, where V(x) = 0 for x > x2 and x < x1 .

Figure 13.10: potential zero outside of a specific region

We will not have bound states here (repulsive potential). There will be many possible solu-
tions, but we want to look for a solution that is of the form

\[
\psi_k(x) = C e^{ikx}, \quad x > x_2.
\tag{13.25}
\]

Suppose $x = x_3 > x_2$; we have

\[
\psi_k(x_3) = C e^{ikx_3}
\tag{13.26}
\]

\[
\left. \frac{d\psi_k}{dx} \right|_{x = x_3} = ik C e^{ikx_3} \equiv \phi_k(x_3)
\tag{13.27}
\]

\[
\left. \frac{d^2\psi_k}{dx^2} \right|_{x = x_3} = -k^2 C e^{ikx_3}.
\tag{13.28}
\]

Defining

\[
\phi_k(x) = \frac{d\psi_k}{dx},
\tag{13.29}
\]

we write Schrödinger’s equation as a pair of coupled first order equations

\[
\begin{aligned}
\frac{d\psi_k}{dx} &= \phi_k(x) \\
-\frac{\hbar^2}{2\mu} \frac{d\phi_k(x)}{dx} &= -V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x).
\end{aligned}
\tag{13.30}
\]

At $x = x_3$ specifically, we “know” both $\phi_k(x_3)$ and $\psi_k(x_3)$ and have

\[
\begin{aligned}
\left. \frac{d\psi_k}{dx} \right|_{x_3} &= \phi_k(x_3) \\
-\frac{\hbar^2}{2\mu} \left. \frac{d\phi_k(x)}{dx} \right|_{x_3} &= -V(x_3) \psi_k(x_3) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x_3).
\end{aligned}
\tag{13.31}
\]
This allows us to find both

\[
\left. \frac{d\psi_k}{dx} \right|_{x_3}, \qquad \left. \frac{d\phi_k(x)}{dx} \right|_{x_3},
\tag{13.32}
\]

then proceed to numerically calculate $\phi_k(x)$ and $\psi_k(x)$ at neighboring points $x = x_3 + \epsilon$. Essentially, this allows us to numerically integrate backwards from $x_3$ to find the wave function at previous points for any sort of potential.
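A sketch of that backwards-integration procedure (not from the notes; units chosen so $\hbar = 1$, $2\mu = 1$, and an arbitrary square barrier picked for illustration): start from the outgoing wave $C e^{ikx}$ at $x_3$, RK4-integrate the coupled pair back through the potential, read off the $A e^{ikx} + B e^{-ikx}$ coefficients on the left, and check flux conservation $R + T = 1$:

```python
import cmath

k = 1.0                       # hbar = 1, 2*mu = 1, so E = k^2
V0, x1, x2 = 0.5, 0.0, 1.0    # illustrative square barrier on [x1, x2]

def V(x):
    return V0 if x1 <= x <= x2 else 0.0

def deriv(x, psi, phi):
    # psi' = phi, phi' = (V - k^2) psi, from eq. (13.30) in these units
    return phi, (V(x) - k * k) * psi

def rk4_step(x, psi, phi, h):
    k1 = deriv(x, psi, phi)
    k2 = deriv(x + h / 2, psi + h / 2 * k1[0], phi + h / 2 * k1[1])
    k3 = deriv(x + h / 2, psi + h / 2 * k2[0], phi + h / 2 * k2[1])
    k4 = deriv(x + h, psi + h * k3[0], phi + h * k3[1])
    psi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    phi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return psi, phi

# Outgoing wave C e^{ikx} at x3 > x2 (take C = 1), integrated backwards.
x3, xl, n = 2.0, -1.0, 30000
h = (xl - x3) / n             # negative step
psi, phi = cmath.exp(1j * k * x3), 1j * k * cmath.exp(1j * k * x3)
x = x3
for _ in range(n):
    psi, phi = rk4_step(x, psi, phi, h)
    x += h

# In x < x1: psi = A e^{ikx} + B e^{-ikx}, phi = ik(A e^{ikx} - B e^{-ikx}).
A = (psi + phi / (1j * k)) / 2 * cmath.exp(-1j * k * x)
B = (psi - phi / (1j * k)) / 2 * cmath.exp(1j * k * x)
R, T = abs(B / A) ** 2, 1.0 / abs(A) ** 2
print(f"R = {R:.6f}, T = {T:.6f}, R + T = {R + T:.6f}")
assert abs(R + T - 1.0) < 1e-6
assert 0 < R < 1
```

Since the wavenumber is the same on both sides of the barrier, flux conservation here reduces to $|A|^2 - |B|^2 = 1$, which the asserted $R + T = 1$ is equivalent to.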

13.6 recap

READING: §19, §20 of the text [4].


We used a positive potential of the form of fig. 13.11

\[
-\frac{\hbar^2}{2\mu} \frac{\partial^2 \psi_k(x)}{\partial x^2} + V(x) \psi_k(x) = \frac{\hbar^2 k^2}{2\mu} \psi_k(x).
\tag{13.33}
\]

For $x \ge x_3$

\[
\psi_k(x) = C e^{ikx}
\tag{13.34}
\]

\[
\phi_k(x) = \frac{d\psi_k(x)}{dx}
\tag{13.35}
\]

Figure 13.11: A bounded positive potential

For $x \ge x_3$

\[
\phi_k(x) = ik C e^{ikx},
\tag{13.36}
\]

and we integrate the pair

\[
\begin{aligned}
\frac{d\psi_k(x)}{dx} &= \phi_k(x) \\
-\frac{\hbar^2}{2\mu} \frac{d\phi_k(x)}{dx} &= -V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x)
\end{aligned}
\tag{13.37}
\]

back to $x_1$.
For $x \le x_1$

\[
\psi_k(x) = A e^{ikx} + B e^{-ikx},
\tag{13.38}
\]

where both $A$ and $B$ are proportional to $C$, and dependent on $k$. There are cases where we can solve this analytically (one of these is on our problem set). Alternatively, write (so long as $A \ne 0$)

\[
\begin{aligned}
\psi_k(x) &\rightarrow e^{ikx} + \beta_k e^{-ikx} \quad \text{for } x < x_1 \\
&\rightarrow \gamma_k e^{ikx} \quad \text{for } x > x_2.
\end{aligned}
\tag{13.39}
\]
Now we want to compare the problem with no potential in the interval of interest to the problem with our window bounded potential, as in fig. 13.12, where we model our particle as a wave packet with the Fourier transform description, for $t_{\text{initial}} < 0$,

\[
\psi(x, t_{\text{initial}}) = \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) e^{ikx}.
\tag{13.40}
\]


Figure 13.12: Wave packet in free space and with positive potential

Returning to the same coefficients as in the solution eq. (13.39) of the Schrödinger equation for the problem with the potential, we have for $x \le x_1$

\[
\psi(x, t) = \psi_i(x, t) + \psi_r(x, t),
\tag{13.41}
\]

where as illustrated in fig. 13.13

Figure 13.13: Reflection and transmission of wave packet

\[
\begin{aligned}
\psi_i(x, t) &= \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) e^{ikx} \\
\psi_r(x, t) &= \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) \beta_k e^{-ikx}.
\end{aligned}
\tag{13.42}
\]

For $x > x_2$

\[
\psi(x, t) = \psi_t(x, t),
\tag{13.43}
\]


and

\[
\psi_t(x, t) = \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) \gamma_k e^{ikx}.
\tag{13.44}
\]

Look at

\[
\psi_r(x, t) = \chi(-x, t),
\tag{13.45}
\]

where

\[
\begin{aligned}
\chi(x, t) &= \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) \beta_k e^{ikx} \\
&\approx \beta_{k_0} \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) e^{ikx}.
\end{aligned}
\tag{13.46}
\]
For $t = t_{\text{initial}}$, $\chi(x, t_{\text{initial}})$ is nonzero only for $x < x_1$, so $\psi_r(x, t_{\text{initial}}) = \chi(-x, t_{\text{initial}})$ vanishes there: for $x < x_1$

\[
\psi_r(x, t_{\text{initial}}) = 0.
\tag{13.47}
\]

In the same way, for $x > x_2$

\[
\psi_t(x, t_{\text{initial}}) = 0.
\tag{13.48}
\]


What has not been proved is that the wavefunction is also zero in the [x1 , x2 ] interval.

Summarizing, for $t = t_{\text{initial}}$

\[
\psi(x, t_{\text{initial}}) =
\begin{cases}
\int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{initial}}) e^{ikx} & \text{for } x < x_1 \\
0 & \text{for } x > x_2 \text{ (and actually also for } x > x_1 \text{ (unproven))},
\end{cases}
\tag{13.49}
\]

and for $t = t_{\text{final}}$

\[
\psi(x, t_{\text{final}}) \rightarrow
\begin{cases}
\int \frac{dk}{\sqrt{2\pi}} \beta_k \alpha(k, t_{\text{final}}) e^{-ikx} & \text{for } x < x_1 \\
0 & \text{for } x \in [x_1, x_2] \\
\int \frac{dk}{\sqrt{2\pi}} \gamma_k \alpha(k, t_{\text{final}}) e^{ikx} & \text{for } x > x_2.
\end{cases}
\tag{13.50}
\]

The probability of reflection is

\[
\int |\psi_r(x, t_{\text{final}})|^2 dx.
\tag{13.51}
\]

If we have a sufficiently localized packet, we can form a first order approximation around the peak of $\beta_k$ (FIXME: or is this a sufficiently localized response to the potential on reflection?)

\[
\psi_r(x, t_{\text{final}}) \approx \beta_{k_0} \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{final}}) e^{-ikx},
\tag{13.52}
\]

so

\[
\int |\psi_r(x, t_{\text{final}})|^2 dx \approx |\beta_{k_0}|^2 \equiv R.
\tag{13.53}
\]

The probability of transmission is

\[
\int |\psi_t(x, t_{\text{final}})|^2 dx.
\tag{13.54}
\]

Again, assuming a small spread in $\gamma_k$, with $\gamma_k \approx \gamma_{k_0}$ for some $k_0$

\[
\psi_t(x, t_{\text{final}}) \approx \gamma_{k_0} \int \frac{dk}{\sqrt{2\pi}} \alpha(k, t_{\text{final}}) e^{ikx},
\tag{13.55}
\]

we have for $x > x_2$

\[
\int |\psi_t(x, t_{\text{final}})|^2 dx \approx |\gamma_{k_0}|^2 \equiv T.
\tag{13.56}
\]

By constructing the wave packets in this fashion we get as a side effect the solution of the
scattering problem.
The solutions

\[
\begin{aligned}
\psi_k(x) &\rightarrow e^{ikx} + \beta_k e^{-ikx} \\
&\rightarrow \gamma_k e^{ikx}
\end{aligned}
\tag{13.57}
\]

are called asymptotic in states. They only have physical applicability once we have built wave packets out of them.
3 D S C AT T E R I N G
14
14.1 setup

READING: §20, and §4.8 of our text [4].


For a potential V(r) ≈ 0 for r > r0 as in fig. 14.1

Figure 14.1: Radially bounded spherical potential

From 1D we have learned to build up solutions from time independent solutions (non normalizable). Consider an incident wave

\[
e^{i\mathbf{k} \cdot \mathbf{r}} = e^{ik \hat{\mathbf{n}} \cdot \mathbf{r}}.
\tag{14.1}
\]

This is a solution of the time independent Schrödinger equation

\[
-\frac{\hbar^2}{2\mu} \nabla^2 e^{i\mathbf{k} \cdot \mathbf{r}} = E e^{i\mathbf{k} \cdot \mathbf{r}},
\tag{14.2}
\]

where

\[
E = \frac{\hbar^2 k^2}{2\mu}.
\tag{14.3}
\]
In the presence of a potential we expect scattered waves.
Consider scattering off of a positive potential as depicted in fig. 14.2


Figure 14.2: Radially bounded potential

Here we have $V(r) = 0$ for $r > r_0$. The wave function

\[
e^{ik \hat{\mathbf{n}} \cdot \mathbf{r}}
\tag{14.4}
\]

is found to be a solution of the free particle Schrödinger equation

\[
-\frac{\hbar^2}{2\mu} \nabla^2 e^{ik \hat{\mathbf{n}} \cdot \mathbf{r}} = \frac{\hbar^2 k^2}{2\mu} e^{ik \hat{\mathbf{n}} \cdot \mathbf{r}}.
\tag{14.5}
\]

14.2 seeking a post scattering solution away from the potential

What other solutions can be found for r > r0 , where our potential V(r) = 0? We are looking for
Φ(r) such that

\[
-\frac{\hbar^2}{2\mu} \nabla^2 \Phi(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \Phi(\mathbf{r}).
\tag{14.6}
\]
What can we find?

We split our Laplacian into radial and angular components as we did for the hydrogen atom

\[
-\frac{\hbar^2}{2\mu} \frac{1}{r} \frac{\partial^2}{\partial r^2} \left( r \Phi(\mathbf{r}) \right) + \frac{\mathbf{L}^2}{2\mu r^2} \Phi(\mathbf{r}) = E \Phi(\mathbf{r}),
\tag{14.7}
\]

where

\[
\mathbf{L}^2 = -\hbar^2 \left( \frac{\partial^2}{\partial \theta^2} + \frac{1}{\tan\theta} \frac{\partial}{\partial \theta} + \frac{1}{\sin^2\theta} \frac{\partial^2}{\partial \phi^2} \right).
\tag{14.8}
\]
Assuming a solution of the form

\[
\Phi(\mathbf{r}) = R(r) Y_l^m(\theta, \phi),
\tag{14.9}
\]

and noting that

\[
\mathbf{L}^2 Y_l^m(\theta, \phi) = \hbar^2 l(l+1) Y_l^m(\theta, \phi),
\tag{14.10}
\]

we find that our radial equation becomes

\[
-\frac{\hbar^2}{2\mu} \frac{1}{r} \frac{\partial^2}{\partial r^2} \left( r R(r) \right) + \frac{\hbar^2 l(l+1)}{2\mu r^2} R(r) = E R(r) = \frac{\hbar^2 k^2}{2\mu} R(r).
\tag{14.11}
\]
Writing

\[
R(r) = \frac{u(r)}{r},
\tag{14.12}
\]

we have

\[
-\frac{\hbar^2}{2\mu} \frac{1}{r} \frac{\partial^2 u(r)}{\partial r^2} + \frac{\hbar^2 l(l+1)}{2\mu r^2} \frac{u(r)}{r} = \frac{\hbar^2 k^2}{2\mu} \frac{u(r)}{r},
\tag{14.13}
\]

or

\[
\left( \frac{d^2}{dr^2} + k^2 - \frac{l(l+1)}{r^2} \right) u(r) = 0.
\tag{14.14}
\]

Writing $\rho = kr$, we have

\[
\left( \frac{d^2}{d\rho^2} + 1 - \frac{l(l+1)}{\rho^2} \right) u = 0.
\tag{14.15}
\]

14.3 the radial equation and its solution

With a last substitution of $u(r) = U(kr) = U(\rho)$, and introducing an explicit $l$ suffix on our eigenfunction $U(\rho)$, we have

\[
\left( -\frac{d^2}{d\rho^2} + \frac{l(l+1)}{\rho^2} \right) U_l(\rho) = U_l(\rho).
\tag{14.16}
\]

We would not have done this before with the hydrogen atom, since there we had only fixed values of $E = \hbar^2 k^2/2\mu$. Now this can be anything.
Making one final substitution, $U_l(\rho) = \rho f_l(\rho)$, we can rewrite eq. (14.16) as

\[
\left( \rho^2 \frac{d^2}{d\rho^2} + 2\rho \frac{d}{d\rho} + (\rho^2 - l(l+1)) \right) f_l = 0.
\tag{14.17}
\]

This is the spherical Bessel equation of order $l$, and has solutions called the spherical Bessel and Neumann functions of order $l$, which are

\[
j_l(\rho) = (-\rho)^l \left( \frac{1}{\rho} \frac{d}{d\rho} \right)^l \frac{\sin\rho}{\rho}
\tag{14.18a}
\]

\[
n_l(\rho) = (-\rho)^l \left( \frac{1}{\rho} \frac{d}{d\rho} \right)^l \left( -\frac{\cos\rho}{\rho} \right).
\tag{14.18b}
\]
We can easily calculate

\[
\begin{aligned}
U_0(\rho) &= \rho j_0(\rho) = \sin\rho \\
U_1(\rho) &= \rho j_1(\rho) = -\cos\rho + \frac{\sin\rho}{\rho},
\end{aligned}
\tag{14.19}
\]

and can plug these into eq. (14.16) to verify that they are solutions. A more general proof looks a bit trickier.
Observe that the Neumann functions are less well behaved at the origin. To calculate the first few Bessel and Neumann functions we first compute

\[
\begin{aligned}
\frac{1}{\rho} \frac{d}{d\rho} \frac{\sin\rho}{\rho}
&= \frac{1}{\rho} \left( \frac{\cos\rho}{\rho} - \frac{\sin\rho}{\rho^2} \right) \\
&= \frac{\cos\rho}{\rho^2} - \frac{\sin\rho}{\rho^3}
\end{aligned}
\tag{14.20}
\]

\[
\begin{aligned}
\left( \frac{1}{\rho} \frac{d}{d\rho} \right)^2 \frac{\sin\rho}{\rho}
&= \frac{1}{\rho} \left( -\frac{\sin\rho}{\rho^2} - 2\frac{\cos\rho}{\rho^3} - \frac{\cos\rho}{\rho^3} + 3\frac{\sin\rho}{\rho^4} \right) \\
&= \sin\rho \left( -\frac{1}{\rho^3} + \frac{3}{\rho^5} \right) - 3\frac{\cos\rho}{\rho^4},
\end{aligned}
\tag{14.21}
\]

and

\[
\begin{aligned}
\frac{1}{\rho} \frac{d}{d\rho} \left( -\frac{\cos\rho}{\rho} \right)
&= \frac{1}{\rho} \left( \frac{\sin\rho}{\rho} + \frac{\cos\rho}{\rho^2} \right) \\
&= \frac{\sin\rho}{\rho^2} + \frac{\cos\rho}{\rho^3}
\end{aligned}
\tag{14.22}
\]

\[
\begin{aligned}
\left( \frac{1}{\rho} \frac{d}{d\rho} \right)^2 \left( -\frac{\cos\rho}{\rho} \right)
&= \frac{1}{\rho} \left( \frac{\cos\rho}{\rho^2} - 2\frac{\sin\rho}{\rho^3} - \frac{\sin\rho}{\rho^3} - 3\frac{\cos\rho}{\rho^4} \right) \\
&= \cos\rho \left( \frac{1}{\rho^3} - \frac{3}{\rho^5} \right) - 3\frac{\sin\rho}{\rho^4},
\end{aligned}
\tag{14.23}
\]
so we find

\[
\begin{array}{ll}
j_0(\rho) = \dfrac{\sin\rho}{\rho} & n_0(\rho) = -\dfrac{\cos\rho}{\rho} \\[1ex]
j_1(\rho) = \dfrac{\sin\rho}{\rho^2} - \dfrac{\cos\rho}{\rho} & n_1(\rho) = -\dfrac{\cos\rho}{\rho^2} - \dfrac{\sin\rho}{\rho} \\[1ex]
j_2(\rho) = \sin\rho \left( -\dfrac{1}{\rho} + \dfrac{3}{\rho^3} \right) - \dfrac{3\cos\rho}{\rho^2} & n_2(\rho) = \cos\rho \left( \dfrac{1}{\rho} - \dfrac{3}{\rho^3} \right) - \dfrac{3\sin\rho}{\rho^2}
\end{array}
\tag{14.24}
\]

Observe that our radial functions $R(r)$ are proportional to these Bessel and Neumann functions

\[
\begin{aligned}
R(r) &= \frac{u(r)}{r} \\
&= \frac{U(kr)}{r} \\
&=
\begin{cases}
\dfrac{j_l(\rho)\,\rho}{r} \\[1ex]
\dfrac{n_l(\rho)\,\rho}{r}
\end{cases} \\
&=
\begin{cases}
j_l(\rho)\, k \\
n_l(\rho)\, k.
\end{cases}
\end{aligned}
\tag{14.25}
\]

Or

\[
R(r) \sim j_l(\rho),\ n_l(\rho).
\tag{14.26}
\]



14.4 limits of spherical bessel and neumann functions

With $n!!$ denoting the double factorial, like the factorial but skipping every other term

\[
n!! = n(n-2)(n-4) \cdots,
\tag{14.27}
\]

we can show that in the limit as $\rho \rightarrow 0$ we have

\[
j_l(\rho) \rightarrow \frac{\rho^l}{(2l+1)!!}
\tag{14.28a}
\]

\[
n_l(\rho) \rightarrow -\frac{(2l-1)!!}{\rho^{l+1}}
\tag{14.28b}
\]

(for the $l = 0$ case, note that $(-1)!! = 1$ by definition).

Comparing this to our explicit expansion for $j_1(\rho)$ in eq. (14.24), where we appear to have a $1/\rho$ dependence for small $\rho$, it is not obvious that this would be the case. To compute this limit we need to start with a power series expansion for $\sin\rho/\rho$, which is well behaved at $\rho = 0$, and then the result follows (done later).

It is apparently also possible to show that as $\rho \rightarrow \infty$ we have

\[
j_l(\rho) \rightarrow \frac{1}{\rho} \sin\left( \rho - \frac{l\pi}{2} \right)
\tag{14.29a}
\]

\[
n_l(\rho) \rightarrow -\frac{1}{\rho} \cos\left( \rho - \frac{l\pi}{2} \right).
\tag{14.29b}
\]
ρ 2
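Both sets of limits are easy to spot check numerically against the closed forms of eq. (14.24). A small sketch, standard library only:

```python
import math

def j_l(l, rho):
    s, c = math.sin(rho), math.cos(rho)
    return [s / rho,
            s / rho**2 - c / rho,
            s * (-1 / rho + 3 / rho**3) - 3 * c / rho**2][l]

def n_l(l, rho):
    s, c = math.sin(rho), math.cos(rho)
    return [-c / rho,
            -c / rho**2 - s / rho,
            c * (1 / rho - 3 / rho**3) - 3 * s / rho**2][l]

def dfact(n):  # double factorial, with (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

# Small rho: j_l -> rho^l/(2l+1)!!, n_l -> -(2l-1)!!/rho^(l+1), eq. (14.28)
rho = 0.01
for l in range(3):
    assert abs(j_l(l, rho) / (rho**l / dfact(2 * l + 1)) - 1) < 1e-3
    assert abs(n_l(l, rho) / (-dfact(2 * l - 1) / rho**(l + 1)) - 1) < 1e-3

# Large rho: j_l -> sin(rho - l pi/2)/rho, n_l -> -cos(rho - l pi/2)/rho, eq. (14.29)
rho = 1e4 + 0.5
for l in range(3):
    assert abs(j_l(l, rho) - math.sin(rho - l * math.pi / 2) / rho) < 1e-7
    assert abs(n_l(l, rho) + math.cos(rho - l * math.pi / 2) / rho) < 1e-7
print("limits check out")
```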

14.5 back to our problem

For $r > r_0$ we can construct (for fixed $k$) a superposition of the spherical functions

\[
\sum_l \sum_m \left( A_l j_l(kr) + B_l n_l(kr) \right) Y_l^m(\theta, \phi).
\tag{14.30}
\]

We want outgoing waves, and as $r \rightarrow \infty$ we have

\[
j_l(kr) \rightarrow \frac{\sin\left( kr - \frac{l\pi}{2} \right)}{kr}
\tag{14.31a}
\]

\[
n_l(kr) \rightarrow -\frac{\cos\left( kr - \frac{l\pi}{2} \right)}{kr}.
\tag{14.31b}
\]
Putting $A_l/B_l = -i$ for a given $l$, we have

\[
\frac{1}{kr} \left( \sin\left( kr - \frac{l\pi}{2} \right) - i \cos\left( kr - \frac{l\pi}{2} \right) \right) \sim \frac{1}{kr} e^{i(kr - \pi l/2)},
\tag{14.32}
\]

for

\[
\sum_l \sum_m B_l \frac{1}{kr} e^{i(kr - \pi l/2)} Y_l^m(\theta, \phi).
\tag{14.33}
\]

Making this choice to achieve outgoing waves (and factoring a $(-i)^l = e^{-i\pi l/2}$ out of the exponential for each $l$), we have another wave function that satisfies our Hamiltonian equation

\[
\frac{e^{ikr}}{kr} \sum_l \sum_m (-i)^l B_l Y_l^m(\theta, \phi).
\tag{14.34}
\]

The $B_l$ coefficients will depend on $V(r)$ for the incident wave $e^{i\mathbf{k} \cdot \mathbf{r}}$. Suppose we encapsulate that dependence in a helper function $f_k(\theta, \phi)$ and write

\[
f_k(\theta, \phi) \frac{e^{ikr}}{r}.
\tag{14.35}
\]
We seek a solution $\psi_k(\mathbf{r})$

\[
\left( -\frac{\hbar^2}{2\mu} \nabla^2 + V(r) \right) \psi_k(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \psi_k(\mathbf{r}),
\tag{14.36}
\]

where as $r \rightarrow \infty$

\[
\psi_k(\mathbf{r}) \rightarrow e^{i\mathbf{k} \cdot \mathbf{r}} + f_k(\theta, \phi) \frac{e^{ikr}}{r}.
\tag{14.37}
\]

Note that in general, for finite $r$ (in particular $r < r_0$), $\psi_k(\mathbf{r})$ is much more complicated. This is the analogue of the plane wave result

\[
\psi(x) = e^{ikx} + \beta_k e^{-ikx}.
\tag{14.38}
\]



14.6 scattering geometry and nomenclature

We can think classically first, and imagine a scattering of a stream of particles barraging a target
as in fig. 14.3

Figure 14.3: Scattering cross section

Here we assume that dΩ is far enough away that it includes no non-scattering particles.
Write $P$ for the number density

\[
P = \frac{\text{number of particles}}{\text{unit volume}},
\tag{14.39}
\]

and

\[
J = P v_0 = \text{number of particles flowing through a unit area in unit time}.
\tag{14.40}
\]

We want to count the rate of particles per unit time $dN$ through the solid angle $d\Omega$ and write

\[
dN = J \left( \frac{d\sigma(\Omega)}{d\Omega} \right) d\Omega.
\tag{14.41}
\]

The factor

\[
\frac{d\sigma(\Omega)}{d\Omega},
\tag{14.42}
\]

is called the differential cross section, and has “units” of

\[
\frac{\text{area}}{\text{steradian}}
\tag{14.43}
\]

(recalling that steradians are radian-like measures of solid angle [18]).
The total number of particles through the volume per unit time is then

\[
\int J \frac{d\sigma(\Omega)}{d\Omega} d\Omega = J \int \frac{d\sigma(\Omega)}{d\Omega} d\Omega = J \sigma,
\tag{14.44}
\]

where $\sigma$ is the total cross section and has units of area. The cross section $\sigma$ is the effective size of the area required to collect all particles, and characterizes the scattering, but is not necessarily entirely geometrical. For example, in photon scattering we may have frequency matching with atomic resonance, finding $\sigma \sim \lambda^2$, something that can be much bigger than the actual total area involved.

14.7 appendix

Q: Are Bessel and Neumann functions orthogonal?

Answer: There is an orthogonality relation, but it is not one of plain old multiplication.
Curious about this, I find an orthogonality condition in [16]

\[
\int_0^\infty \frac{dz}{z} J_\alpha(z) J_\beta(z) = \frac{2}{\pi} \frac{\sin\left( \frac{\pi}{2}(\alpha - \beta) \right)}{\alpha^2 - \beta^2},
\tag{14.45}
\]

from which we find for the spherical Bessel functions

\[
\int_0^\infty j_l(\rho) j_m(\rho) d\rho = \frac{\sin\left( \frac{\pi}{2}(l - m) \right)}{(l + 1/2)^2 - (m + 1/2)^2}.
\tag{14.46}
\]

Is this a satisfactory orthogonality integral? At a glance it does not appear to be well behaved for $l = m$, but perhaps the limit can be taken?

Deriving the large limit Bessel and Neumann function approximations. For eq. (14.29) we are referred to any “good book on electromagnetism” for details. I thought that perhaps the weighty [8] would be such a book, but it also leaves out the details. In §16.1 the spherical Bessel and Neumann functions are related to the plain old Bessel functions with

\[
j_l(x) = \sqrt{\frac{\pi}{2x}} J_{l+1/2}(x)
\tag{14.47a}
\]

\[
n_l(x) = \sqrt{\frac{\pi}{2x}} N_{l+1/2}(x).
\tag{14.47b}
\]

Referring back to §3.7 of that text, where the limiting forms of the Bessel functions are given,

\[
J_\nu(x) \rightarrow \sqrt{\frac{2}{\pi x}} \cos\left( x - \frac{\nu\pi}{2} - \frac{\pi}{4} \right)
\tag{14.48a}
\]

\[
N_\nu(x) \rightarrow \sqrt{\frac{2}{\pi x}} \sin\left( x - \frac{\nu\pi}{2} - \frac{\pi}{4} \right).
\tag{14.48b}
\]

This does give us our desired identities, but there is no hint in the text how one would derive eq. (14.48) from the power series that was computed by solving the Bessel equation.

Deriving the small limit Bessel and Neumann function approximations. Writing the sinc function in series form

\[
\frac{\sin x}{x} = \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k+1)!},
\tag{14.49}
\]

we can differentiate easily

\[
\begin{aligned}
\frac{1}{x} \frac{d}{dx} \frac{\sin x}{x}
&= \sum_{k=1}^\infty (-1)^k (2k) \frac{x^{2k-2}}{(2k+1)!} \\
&= (-1) \sum_{k=0}^\infty (-1)^k (2k+2) \frac{x^{2k}}{(2k+3)!} \\
&= (-1) \sum_{k=0}^\infty (-1)^k \frac{1}{2k+3} \frac{x^{2k}}{(2k+1)!}.
\end{aligned}
\tag{14.50}
\]

Performing the derivative operation a second time we find

\[
\begin{aligned}
\left( \frac{1}{x} \frac{d}{dx} \right)^2 \frac{\sin x}{x}
&= (-1) \sum_{k=1}^\infty (-1)^k \frac{1}{2k+3} (2k) \frac{x^{2k-2}}{(2k+1)!} \\
&= \sum_{k=0}^\infty (-1)^k \frac{1}{2k+5} \frac{1}{2k+3} \frac{x^{2k}}{(2k+1)!}.
\end{aligned}
\tag{14.51}
\]
It appears reasonable to form the inductive hypothesis

\[
\left( \frac{1}{x} \frac{d}{dx} \right)^l \frac{\sin x}{x} = (-1)^l \sum_{k=0}^\infty (-1)^k \frac{(2k+1)!!}{(2(k+l)+1)!!} \frac{x^{2k}}{(2k+1)!},
\tag{14.52}
\]

and this proves to be correct. We find then that the spherical Bessel function has the power series expansion

\[
j_l(x) = \sum_{k=0}^\infty (-1)^k \frac{(2k+1)!!}{(2(k+l)+1)!!} \frac{x^{2k+l}}{(2k+1)!},
\tag{14.53}
\]

and from this the Bessel function limit of eq. (14.28a) follows immediately.
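The series eq. (14.53) can be checked directly against the closed forms of eq. (14.24). A quick sketch:

```python
import math

def dfact(n):  # double factorial, with (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def j_series(l, x, terms=30):
    # Power series of eq. (14.53)
    return sum((-1) ** k * dfact(2 * k + 1)
               / (dfact(2 * (k + l) + 1) * math.factorial(2 * k + 1))
               * x ** (2 * k + l)
               for k in range(terms))

def j_closed(l, x):
    # Closed forms from eq. (14.24)
    s, c = math.sin(x), math.cos(x)
    return [s / x,
            s / x**2 - c / x,
            s * (-1 / x + 3 / x**3) - 3 * c / x**2][l]

for l in range(3):
    for x in (0.1, 1.0, 5.0):
        assert abs(j_series(l, x) - j_closed(l, x)) < 1e-12
print("series matches closed forms")
```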
Finding the matching induction series for the Neumann functions is a bit harder. It is not really any more difficult to write it, but it is harder to put it in a tidy form. We find

\[
\begin{aligned}
-\frac{\cos x}{x} &= -\sum_{k=0}^\infty (-1)^k \frac{x^{2k-1}}{(2k)!} \\
\frac{1}{x} \frac{d}{dx} \left( -\frac{\cos x}{x} \right) &= -\sum_{k=0}^\infty (-1)^k (2k-1) \frac{x^{2k-3}}{(2k)!} \\
\left( \frac{1}{x} \frac{d}{dx} \right)^2 \left( -\frac{\cos x}{x} \right) &= -\sum_{k=0}^\infty (-1)^k (2k-1)(2k-3) \frac{x^{2k-5}}{(2k)!}.
\end{aligned}
\tag{14.54}
\]

The general expression, after a bit of messing around (and I got it wrong the first time), can be found to be

\[
\begin{aligned}
\left( \frac{1}{x} \frac{d}{dx} \right)^l \left( -\frac{\cos x}{x} \right)
&= (-1)^{l+1} \sum_{k=0}^{l-1} \prod_{j=0}^{l-1} |2(k-j)-1| \frac{x^{2(k-l)-1}}{(2k)!} \\
&\quad + (-1)^{l+1} \sum_{k=0}^\infty (-1)^k \frac{(2(k+l)-1)!!}{(2k-1)!!} \frac{x^{2k-1}}{(2(k+l))!}.
\end{aligned}
\tag{14.55}
\]

We really only need the lowest order term (which dominates for small $x$) to confirm the small limit eq. (14.28b) of the Neumann function, and this follows immediately. For completeness, we note that the series expansion of the Neumann function is

\[
\begin{aligned}
n_l(x) &= -\sum_{k=0}^{l-1} \prod_{j=0}^{l-1} |2(k-j)-1| \frac{x^{2k-l-1}}{(2k)!} \\
&\quad - \sum_{k=0}^\infty (-1)^k \frac{(2(k+l)-1)!!}{(2k-1)!!} \frac{x^{2k+l-1}}{(2(k+l))!}.
\end{aligned}
\tag{14.56}
\]

14.8 verifying the solution to the spherical bessel equation

One way to verify that eq. (14.18a) is a solution to the Bessel equation eq. (14.17), as claimed, would be to substitute the series expression and verify that we get zero. Another way is to solve this equation directly. We have a regular singular point at the origin, so we look for solutions of the form

\[
f = x^r \sum_{k=0}^\infty a_k x^k.
\tag{14.57}
\]

Writing our differential operator as

\[
L = x^2 \frac{d^2}{dx^2} + 2x \frac{d}{dx} + x^2 - l(l+1),
\tag{14.58}
\]

we get

\[
\begin{aligned}
0 &= L f \\
&= \sum_{k=0}^\infty a_k \left( (k+r)(k+r-1) + 2(k+r) - l(l+1) \right) x^{k+r} + a_k x^{k+r+2} \\
&= a_0 \left( r(r+1) - l(l+1) \right) x^r \\
&\quad + a_1 \left( (r+1)(r+2) - l(l+1) \right) x^{r+1} \\
&\quad + \sum_{k=2}^\infty \left( a_k \left( (k+r)(k+r-1) + 2(k+r) - l(l+1) \right) + a_{k-2} \right) x^{k+r}.
\end{aligned}
\tag{14.59}
\]

Since we require this to be zero for all $x$, including non-zero values, we must have constraints on $r$. Assuming first that $a_0$ is non-zero, we must then have

\[
0 = r(r+1) - l(l+1).
\tag{14.60}
\]

One solution is obviously $r = l$. Assuming we have another solution $r = l + k$ for some integer $k$, we find that $r = -l-1$ is also a solution. Restricting attention first to $r = l$, we must have $a_1 = 0$, since for non-negative $l$ we have $(l+1)(l+2) - l(l+1) = 2(l+1) \ne 0$. Thus for non-zero $a_0$ we find that our function is of the form

\[
f = \sum_k a_{2k} x^{2k+l}.
\tag{14.61}
\]

It does not matter that we started with $a_0 \ne 0$. If we instead start with $a_1 \ne 0$ we find that we must have $r = l-1, -l-2$, so end up with exactly the same functional form as eq. (14.61). It ends up slightly simpler if we start with eq. (14.61) instead, since we now know that we do not have any odd powered $a_k$'s to deal with. Doing so we find

\[
\begin{aligned}
0 &= L f \\
&= \sum_{k=0}^\infty a_{2k} \left( (2k+l)(2k+l-1) + 2(2k+l) - l(l+1) \right) x^{2k+l} + a_{2k} x^{2k+l+2} \\
&= \sum_{k=1}^\infty \left( a_{2k}\, 2k (2(k+l)+1) + a_{2(k-1)} \right) x^{2k+l}.
\end{aligned}
\tag{14.62}
\]

We find

\[
\frac{a_{2k}}{a_{2(k-1)}} = \frac{-1}{2k(2(k+l)+1)}.
\tag{14.63}
\]

Proceeding recursively, we find

\[
f = a_0 (2l+1)!! \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!!\, (2(k+l)+1)!!} x^{2k+l}.
\tag{14.64}
\]

With $a_0 = 1/(2l+1)!!$ and the observation that

\[
\frac{1}{(2k)!!} = \frac{(2k+1)!!}{(2k+1)!},
\tag{14.65}
\]

we have f = jl (x) as given in eq. (14.53).


If we do the same for the $r = -l-1$ case, we find

\[
\frac{a_{2k}}{a_{2(k-1)}} = \frac{-1}{2k(2(k-l)-1)},
\tag{14.66}
\]

and find

\[
\frac{a_{2k}}{a_0} = \frac{(-1)^k}{(2k)!!\, (2(k-l)-1)(2(k-l)-3) \cdots (-2l+1)}.
\tag{14.67}
\]

Flipping signs around, we can rewrite this as

\[
\frac{a_{2k}}{a_0} = \frac{1}{(2k)!!\, (2(l-k)+1)(2(l-k)+3) \cdots (2l-1)}.
\tag{14.68}
\]

For those values of $l > k$ we can write this as

\[
\frac{a_{2k}}{a_0} = \frac{(2(l-k)-1)!!}{(2k)!!\, (2l-1)!!}.
\tag{14.69}
\]

Comparing to the small limit eq. (14.28b), the $k = 0$ term, we find that we must have

\[
\frac{a_0}{(2l-1)!!} = -1.
\tag{14.70}
\]

After some play we find

\[
a_{2k} =
\begin{cases}
-\dfrac{(2(l-k)-1)!!}{(2k)!!} & \text{if } l \ge k \\[1ex]
\dfrac{(-1)^{k-l+1}}{(2k)!!\, (2(k-l)-1)!!} & \text{if } l \le k.
\end{cases}
\tag{14.71}
\]

Putting this all together we have

\[
n_l(x) = -\sum_{0 \le k \le l} (2(l-k)-1)!! \frac{x^{2k-l-1}}{(2k)!!} - \sum_{l < k} \frac{(-1)^{k-l}}{(2(k-l)-1)!!} \frac{x^{2k-l-1}}{(2k)!!}.
\tag{14.72}
\]

FIXME: check that this matches the series calculated earlier eq. (14.56).

Figure 14.4: Bounded potential

14.9 scattering cross sections

READING: §20 [4]


Recall that we are studying the case of a potential that is zero outside of a fixed bound, $V(r) = 0$ for $r > r_0$, as in fig. 14.4, and were looking for solutions to Schrödinger’s equation

\[
-\frac{\hbar^2}{2\mu} \nabla^2 \psi_k(\mathbf{r}) + V(r) \psi_k(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \psi_k(\mathbf{r}),
\tag{14.73}
\]

in regions of space where $r > r_0$ is very large. We found

\[
\psi_k(\mathbf{r}) \sim e^{i\mathbf{k} \cdot \mathbf{r}} + f_k(\theta, \phi) \frac{e^{ikr}}{r}.
\tag{14.74}
\]
For r ≤ r0 this will be something much more complicated.
To study scattering we will use the concept of probability flux as in electromagnetism

\[
\boldsymbol{\nabla} \cdot \mathbf{j} + \dot{\rho} = 0.
\tag{14.75}
\]

Using

\[
\rho(\mathbf{r}, t) = \psi_k^*(\mathbf{r}) \psi_k(\mathbf{r}),
\tag{14.76}
\]

we find

\[
\mathbf{j}(\mathbf{r}, t) = \frac{\hbar}{2\mu i} \left( \psi_k^*(\mathbf{r}) \boldsymbol{\nabla} \psi_k(\mathbf{r}) - \left( \boldsymbol{\nabla} \psi_k^*(\mathbf{r}) \right) \psi_k(\mathbf{r}) \right),
\tag{14.77}
\]

when

\[
-\frac{\hbar^2}{2\mu} \nabla^2 \psi_k(\mathbf{r}) + V(r) \psi_k(\mathbf{r}) = i\hbar \frac{\partial \psi_k(\mathbf{r})}{\partial t}.
\tag{14.78}
\]
In a fashion similar to what we did in the 1D case, let us suppose that we can write our wave function

\[
\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3 \mathbf{k}\, \alpha(\mathbf{k}, t_{\text{initial}}) \psi_{\mathbf{k}}(\mathbf{r}),
\tag{14.79}
\]

and treat the scattering as the scattering of a plane wave front (idealizing a set of wave packets) off of the object of interest, as depicted in fig. 14.5.

Figure 14.5: plane wave front incident on particle

We assume that our incoming particles are sufficiently localized in $\mathbf{k}$ space, as depicted in the idealized representation of fig. 14.6; that is, we assume that $\alpha(\mathbf{k}, t_{\text{initial}})$ is localized.

\[
\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3 \mathbf{k} \left( \alpha(\mathbf{k}, t_{\text{initial}}) e^{i k_z z} + \alpha(\mathbf{k}, t_{\text{initial}}) f_{\mathbf{k}}(\theta, \phi) \frac{e^{ikr}}{r} \right).
\tag{14.80}
\]

Figure 14.6: k space localized wave packet

We suppose that

\[
\alpha(\mathbf{k}, t_{\text{initial}}) = \alpha(\mathbf{k}) e^{-i\hbar k^2 t_{\text{initial}}/2\mu},
\tag{14.81}
\]

where this is chosen ($\alpha(\mathbf{k}, t_{\text{initial}})$ is built in this fashion) so that the initial wave packet is non-zero only for $z$ large in magnitude and negative.

The last integral in eq. (14.80) can be approximated

\[
\begin{aligned}
\int d^3 \mathbf{k}\, \alpha(\mathbf{k}, t_{\text{initial}}) f_{\mathbf{k}}(\theta, \phi) \frac{e^{ikr}}{r}
&\approx \frac{f_{\mathbf{k}_0}(\theta, \phi)}{r} \int d^3 \mathbf{k}\, \alpha(\mathbf{k}, t_{\text{initial}}) e^{ikr} \\
&\rightarrow 0.
\end{aligned}
\tag{14.82}
\]

This is very much like the 1D case, where we found no reflected component at our initial time.

We will normally look in a locality well away from the wave front, as indicated in fig. 14.7. There are situations where we do look in the locality of the wave front that has been scattered.

Incoming wave. Our incoming wave is of the form

\[
\psi_i = A e^{ikz} e^{-i\hbar k^2 t/2\mu}.
\tag{14.83}
\]

Here we have made the approximation that $k = |\mathbf{k}| \sim k_z$. We can calculate the probability current

\[
\mathbf{j} = \hat{\mathbf{z}} \frac{\hbar k}{\mu} |A|^2
\tag{14.84}
\]

Figure 14.7: point of measurement of scattering cross section

(notice the v = p/m like term above, with p = h̄k).


For the scattered wave (dropping the $A$ factor)

\[
\begin{aligned}
\mathbf{j} &= \frac{\hbar}{2\mu i} \left( f_k^*(\theta, \phi) \frac{e^{-ikr}}{r} \boldsymbol{\nabla} \left( f_k(\theta, \phi) \frac{e^{ikr}}{r} \right) - \boldsymbol{\nabla} \left( f_k^*(\theta, \phi) \frac{e^{-ikr}}{r} \right) f_k(\theta, \phi) \frac{e^{ikr}}{r} \right) \\
&\approx \frac{\hbar}{2\mu i} \left( f_k^*(\theta, \phi) \frac{e^{-ikr}}{r} (ik\hat{\mathbf{r}}) f_k(\theta, \phi) \frac{e^{ikr}}{r} - (-ik\hat{\mathbf{r}}) f_k^*(\theta, \phi) \frac{e^{-ikr}}{r} f_k(\theta, \phi) \frac{e^{ikr}}{r} \right).
\end{aligned}
\tag{14.85}
\]

We find that the radial portion of the current density is

\[
\begin{aligned}
\hat{\mathbf{r}} \cdot \mathbf{j} &= \frac{\hbar}{2\mu i} |f|^2 \frac{2ik}{r^2} \\
&= \frac{\hbar k}{\mu} \frac{1}{r^2} |f|^2,
\end{aligned}
\tag{14.86}
\]

and the flux through our element of solid angle is

\[
\begin{aligned}
\hat{\mathbf{r}} \cdot \mathbf{j}\, dA
&= \frac{\text{probability}}{\text{unit area per time}} \times \text{area} \\
&= \frac{\text{probability}}{\text{unit time}} \\
&= \frac{\hbar k}{\mu} \frac{|f_k(\theta, \phi)|^2}{r^2} r^2 d\Omega \\
&= \frac{\hbar k}{\mu} |f_k(\theta, \phi)|^2 d\Omega \\
&= j_{\text{incoming}}\, \underbrace{|f_k(\theta, \phi)|^2}_{d\sigma/d\Omega}\, d\Omega.
\end{aligned}
\tag{14.87}
\]

We identify the differential scattering cross section above

\[
\frac{d\sigma}{d\Omega} = |f_k(\theta, \phi)|^2,
\tag{14.88}
\]

with total cross section

\[
\sigma = \int |f_k(\theta, \phi)|^2 d\Omega.
\tag{14.89}
\]
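As a toy example of using eq. (14.89) (with a purely hypothetical amplitude, not derived from any potential): for $f_k(\theta, \phi) = a\cos\theta$ the total cross section is $\sigma = \int |a\cos\theta|^2 d\Omega = 4\pi a^2/3$, which a numerical quadrature reproduces:

```python
import math

a = 2.0  # hypothetical amplitude scale

def f(theta):
    return a * math.cos(theta)  # hypothetical scattering amplitude

# sigma = int |f|^2 dOmega = 2 pi int_0^pi |f(theta)|^2 sin(theta) dtheta
n = 200000
dtheta = math.pi / n
sigma = 2 * math.pi * sum(
    abs(f((i + 0.5) * dtheta)) ** 2 * math.sin((i + 0.5) * dtheta) * dtheta
    for i in range(n))

exact = 4 * math.pi * a**2 / 3
print(sigma, exact)
assert abs(sigma - exact) < 1e-6
```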

We have been somewhat unrealistic here, since we have used a plane wave approximation; a more realistic wave packet treatment, as in fig. 14.8, will actually produce the same answer. For details we are referred to [10] and [12].

Working towards a solution. We have done a bunch of stuff here, but are not much closer to a real solution because we do not actually know what $f_k$ is. Let us write Schrödinger’s equation

\[
-\frac{\hbar^2}{2\mu} \nabla^2 \psi_k(\mathbf{r}) + V(r) \psi_k(\mathbf{r}) = \frac{\hbar^2 k^2}{2\mu} \psi_k(\mathbf{r}),
\tag{14.90}
\]

instead as

\[
\left( \nabla^2 + k^2 \right) \psi_k(\mathbf{r}) = s(\mathbf{r}),
\tag{14.91}
\]



Figure 14.8: Plane wave vs packet wave front

where

\[
s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(r) \psi_k(\mathbf{r}),
\tag{14.92}
\]

and $s(\mathbf{r})$ plays the role of a source term for this differential problem. We want

\[
\psi_k(\mathbf{r}) = \psi_k^{\text{homogeneous}}(\mathbf{r}) + \psi_k^{\text{particular}}(\mathbf{r}),
\tag{14.93}
\]

with

\[
\psi_k^{\text{homogeneous}}(\mathbf{r}) = e^{i\mathbf{k} \cdot \mathbf{r}}.
\tag{14.94}
\]
15  BORN APPROXIMATION

READING: §20 [4]
We have been arguing that we can write the stationary equation

(∇² + k²) ψ_k(r) = s(r)                                        (15.1)

with

s(r) = (2μ/ħ²) V(r) ψ_k(r)                                     (15.2)

ψ_k(r) = ψ_k^homogeneous(r) + ψ_k^particular(r)                (15.3)

Introduce the Green's function

(∇² + k²) G⁰(r, r′) = δ(r − r′)                                (15.4)

Suppose that I can find G⁰(r, r′), then

ψ_k^particular(r) = ∫ G⁰(r, r′) s(r′) d³r′                     (15.5)

It turns out that finding the Green's function G⁰(r, r′) is not so hard. Note the following: for
k = 0, we have

∇² G⁰₀(r, r′) = δ(r − r′)                                      (15.6)

(where a zero subscript is used to mark the k = 0 case). We know this Green's function from
electrostatics, and conclude that

G⁰₀(r, r′) = −(1/4π) (1/|r − r′|)                              (15.7)


For r ≠ r′ we can easily show that

G⁰(r, r′) = −(1/4π) e^{ik|r−r′|}/|r − r′|                      (15.8)

This is correct for all r because it also gives the right limit as r → r′. This argument was
first given by Lorentz. An outline for a derivation, utilizing the usual Fourier transform and
contour integration arguments for these Green's derivations, can be found in §7.4 of [3]. A
direct verification, not quite as easy as claimed, can be found in B.
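As a quick symbolic sanity check (my addition, not part of the original notes), we can verify that this G⁰ satisfies the homogeneous Helmholtz equation away from the source point, using the spherically symmetric form of the Laplacian, ∇²f = (1/r)(rf)′′, with r standing in for the separation |r − r′|:

```python
from sympy import symbols, exp, I, diff, simplify, pi

r, k = symbols('r k', positive=True)

# G0 depends only on the separation; check (del^2 + k^2) G0 = 0 for r > 0,
# using del^2 f(r) = (1/r) d^2(r f)/dr^2 for a spherically symmetric f.
G0 = -exp(I * k * r) / (4 * pi * r)
residual = simplify(diff(r * G0, r, 2) / r + k**2 * G0)
print(residual)  # 0
```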
We can now write our particular solution

ψ_k(r) = e^{ik·r} − (1/4π) ∫ (e^{ik|r−r′|}/|r − r′|) s(r′) d³r′          (15.9)

This is of no immediate help since we do not know ψ_k(r), and that is embedded in s(r).

ψ_k(r) = e^{ik·r} − (2μ/4πħ²) ∫ (e^{ik|r−r′|}/|r − r′|) V(r′) ψ_k(r′) d³r′   (15.10)
Now look at this for r ≫ r′

|r − r′| = (r² + (r′)² − 2 r·r′)^{1/2}
         = r (1 + (r′)²/r² − (2/r²) r·r′)^{1/2}
         = r (1 − (1/r²) r·r′ + O((r′/r)²))                    (15.11)
         = r − r̂·r′ + O((r′)²/r)
We get

ψ_k(r) → e^{ik·r} − (2μ/4πħ²) (e^{ikr}/r) ∫ e^{−ikr̂·r′} V(r′) ψ_k(r′) d³r′
       = e^{ik·r} + f_k(θ, φ) e^{ikr}/r,                       (15.12)

where

f_k(θ, φ) = −(μ/2πħ²) ∫ e^{−ikr̂·r′} V(r′) ψ_k(r′) d³r′        (15.13)

If the scattering is weak we have the Born approximation

f_k(θ, φ) = −(μ/2πħ²) ∫ e^{−ikr̂·r′} V(r′) e^{ik·r′} d³r′,     (15.14)

or

ψ_k(r) = e^{ik·r} − (μ/2πħ²) (e^{ikr}/r) ∫ e^{−ikr̂·r′} V(r′) e^{ik·r′} d³r′.   (15.15)
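As an illustration (my own example, not from the lecture), the Born integral can be evaluated in closed form for a screened Coulomb (Yukawa) potential V(r) = −V₀e^{−αr}/r, in terms of the momentum transfer q = |k − kr̂|:

```python
from sympy import symbols, integrate, exp, sin, oo, pi, simplify

r, q, alpha, V0, mu, hbar = symbols('r q alpha V0 mu hbar', positive=True)

# For a central potential the angular integration in the Born amplitude gives
#   int e^{-i q . r'} V(r') d^3 r' = (4 pi / q) int_0^oo r' V(r') sin(q r') dr'.
V = -V0 * exp(-alpha * r) / r                 # assumed Yukawa form
radial = integrate(r * V * sin(q * r), (r, 0, oo))
f_born = simplify(-mu / (2 * pi * hbar**2) * (4 * pi / q) * radial)
print(f_born)  # equals 2*V0*mu/(hbar**2*(alpha**2 + q**2))
```

In the α → 0 limit this recovers the familiar 1/q² (Rutherford-like) dependence.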
Should we wish to make a further approximation, we can take the wave function resulting
from application of the Born approximation, and use that a second time. This gives us the “Born
again” approximation of

ψ_k(r) = e^{ik·r} − (μ/2πħ²) (e^{ikr}/r) ∫ e^{−ikr̂·r′} V(r′) ( e^{ik·r′} − (μ/2πħ²) (e^{ikr′}/r′) ∫ e^{−ikr̂′·r′′} V(r′′) e^{ik·r′′} d³r′′ ) d³r′
       = e^{ik·r} − (μ/2πħ²) (e^{ikr}/r) ∫ e^{−ikr̂·r′} V(r′) e^{ik·r′} d³r′
         + (μ²/(2π)²ħ⁴) (e^{ikr}/r) ∫ e^{−ikr̂·r′} V(r′) (e^{ikr′}/r′) ∫ e^{−ikr̂′·r′′} V(r′′) e^{ik·r′′} d³r′′ d³r′.
                                                               (15.16)
Part IV

NOTES AND PROBLEMS
16  SIMPLE ENTANGLEMENT EXAMPLE
On the quiz we were given a three state system |1⟩, |2⟩ and |3⟩, and a two state system |a⟩, |b⟩,
and were asked to show that the composite system can be entangled. I had trouble with this,
having not seen any examples of this and subsequently filing away entanglement in the "abstract
stuff that has no current known application" bit bucket, and then forgetting about it. Let us
generate a concrete example of entanglement, and consider the very simplest direct product
spaces.
What is the simplest composite state that we can create? Suppose we have a pair of two state
systems, say,

|1⟩ = (1, 0)ᵀ ∈ H₁
|2⟩ = (0, 1)ᵀ ∈ H₁,                                            (16.1)

and

⟨x|+⟩ = e^{ikx}/√(2π), where |+⟩ ∈ H₂
⟨x|−⟩ = e^{−ikx}/√(2π), where |−⟩ ∈ H₂.                        (16.2)

We can now enumerate the space of possible operators

A ∈ a_{11++} |1⟩⟨1| ⊗ |+⟩⟨+| + a_{11+−} |1⟩⟨1| ⊗ |+⟩⟨−|
  + a_{11−+} |1⟩⟨1| ⊗ |−⟩⟨+| + a_{11−−} |1⟩⟨1| ⊗ |−⟩⟨−|
  + a_{12++} |1⟩⟨2| ⊗ |+⟩⟨+| + a_{12+−} |1⟩⟨2| ⊗ |+⟩⟨−|
  + a_{12−+} |1⟩⟨2| ⊗ |−⟩⟨+| + a_{12−−} |1⟩⟨2| ⊗ |−⟩⟨−|
  + a_{21++} |2⟩⟨1| ⊗ |+⟩⟨+| + a_{21+−} |2⟩⟨1| ⊗ |+⟩⟨−|        (16.3)
  + a_{21−+} |2⟩⟨1| ⊗ |−⟩⟨+| + a_{21−−} |2⟩⟨1| ⊗ |−⟩⟨−|
  + a_{22++} |2⟩⟨2| ⊗ |+⟩⟨+| + a_{22+−} |2⟩⟨2| ⊗ |+⟩⟨−|
  + a_{22−+} |2⟩⟨2| ⊗ |−⟩⟨+| + a_{22−−} |2⟩⟨2| ⊗ |−⟩⟨−|


We can also enumerate all the possible states; some of these can be entangled

|ψ⟩ ∈ h_{1+} |1⟩ ⊗ |+⟩ + h_{1−} |1⟩ ⊗ |−⟩ + h_{2+} |2⟩ ⊗ |+⟩ + h_{2−} |2⟩ ⊗ |−⟩.   (16.4)

And finally, we can enumerate all the possible product states

|ψ⟩ ∈ (a_i |i⟩) ⊗ (b_β |β⟩) = a₁b₊ |1⟩ ⊗ |+⟩ + a₁b₋ |1⟩ ⊗ |−⟩ + a₂b₊ |2⟩ ⊗ |+⟩ + a₂b₋ |2⟩ ⊗ |−⟩   (16.5)

In this simpler example, we have the same dimensionality for both the set of direct product
kets and the set formed by arbitrary superposition of the composite ket basis elements, but
that does not rule out entanglement.
Suppose that, as the product of some operator, we end up with a ket

|ψ⟩ = |1⟩ ⊗ |+⟩ + |2⟩ ⊗ |−⟩                                    (16.6)

Does this have a product representation, of the following form

|ψ⟩ = (a_i |i⟩) ⊗ (b_β |β⟩) = a_i b_β |i⟩ ⊗ |β⟩ ?              (16.7)

For this to be true we would require

a₁b₊ = 1
a₂b₋ = 1                                                       (16.8)
a₁b₋ = 0
a₂b₊ = 0.

However, we can not find a solution to this set of equations. We require one of a₁ = 0 or
b₋ = 0 for the third equality, but such zeros generate contradictions for one of the first pair of
equations.
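The same no-product-representation conclusion can be checked numerically (a sketch of the standard Schmidt-rank test, not part of the original argument): arranging the coefficients of |ψ⟩ into the matrix c_{iβ}, the state factors as a product state iff that matrix has rank one.

```python
import numpy as np

ket1, ket2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # |1>, |2> in H1
ketp, ketm = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # |+>, |-> in H2

# |psi> = |1> x |+> + |2> x |->, as a coefficient vector in the product basis.
psi = np.kron(ket1, ketp) + np.kron(ket2, ketm)

# Reshape into the matrix c[i, beta]; |psi> = (a_i|i>) x (b_beta|beta>) would
# make c an outer product, i.e. a rank one matrix.
c = psi.reshape(2, 2)
schmidt_rank = np.linalg.matrix_rank(c)
print(schmidt_rank)  # 2, so no product representation exists: entangled
```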
17  PROBLEM SET 4, PROBLEM 2 NOTES
I was deceived by an incorrect result in Mathematica, which led me to believe that the second
order energy perturbation was zero (whereas part (c) of the problem asked if it was greater
or lesser than zero). I started writing this up to show my reasoning, but our Professor
quickly provided an example after class showing how this zero must be wrong, and I did not
have to show him any of this.

Setup Recall first the one dimensional particle in a box. Within the box we have to solve

(P²/2m) ψ = Eψ                                                 (17.1)

and find

ψ ∼ e^{(i/ħ)√(2mE) x}                                          (17.2)

With

k = √(2mE)/ħ                                                   (17.3)

our general state, involving terms of each sign, takes the form

ψ = A e^{ikx} + B e^{−ikx}                                     (17.4)

Inserting boundary conditions gives us

ψ(−L/2) = e^{−ikL/2} A + e^{ikL/2} B = 0
ψ(L/2)  = e^{ikL/2} A + e^{−ikL/2} B = 0                       (17.5)

The determinant is zero

e^{−ikL} − e^{ikL} = 0,                                        (17.6)


which provides our constraint on k

e^{2ikL} = 1.                                                  (17.7)

We require 2kL = 2πn for any integer n, or

k = πn/L.                                                      (17.8)

This quantizes the energy, and inverting eq. (17.3) gives us

E = (1/2m) (ħπn/L)².                                           (17.9)

To complete the task of matching boundary value conditions we cheat and recall that the
particular linear combinations that we need to match the boundary constraint of zero at ±L/2
were sums and differences yielding cosines and sines respectively. Since

sin(πnx/L)|_{x=±L/2} = ±sin(πn/2),                             (17.10)

sines are the wave functions for n = 2, 4, ..., since sin(πn/2) = 0 for even n. Similarly

cos(πnx/L)|_{x=±L/2} = cos(πn/2).                              (17.11)

Cosine becomes zero at π/2, 3π/2, ···, so our wave function is the cosine for n = 1, 3, 5, ···.
Normalizing gives us

ψ_n(x) = √(2/L) cos(πnx/L)   n = 1, 3, 5, ···                  (17.12)
       = √(2/L) sin(πnx/L)   n = 2, 4, 6, ···
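A quick symbolic check of these wavefunctions (normalization, and vanishing at the walls) for the first two states:

```python
from sympy import symbols, integrate, sqrt, cos, sin, pi

x, L = symbols('x L', positive=True)

psi1 = sqrt(2 / L) * cos(pi * x / L)       # n = 1 (odd n: cosine)
psi2 = sqrt(2 / L) * sin(2 * pi * x / L)   # n = 2 (even n: sine)

norms = [integrate(p**2, (x, -L / 2, L / 2)) for p in (psi1, psi2)]
walls = [p.subs(x, L / 2) for p in (psi1, psi2)]
print(norms, walls)  # [1, 1] [0, 0]
```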

Two non-interacting particles. Three lowest energy levels and degeneracies Forming the
Hamiltonian for two particles in the box without interaction, we have within the box

H = P₁²/2m + P₂²/2m                                            (17.13)

We can apply separation of variables, and it becomes clear that our wave functions have the
form

ψ_{nm}(x₁, x₂) = ψ_n(x₁) ψ_m(x₂)                               (17.14)

Plugging in

Hψ = Eψ,                                                       (17.15)
supplies the energy levels for the two particle wavefunction, giving

Hψ_{nm} = (ħ²/2m) ((πn/L)² + (πm/L)²) ψ_{nm}
        = (1/2m) (ħπ/L)² (n² + m²) ψ_{nm}                      (17.16)
Letting n, m each range over [1, 3] for example we find

    n   m   n² + m²
    1   1   2
    1   2   5
    1   3   10
    2   1   5
    2   2   8                                                  (17.17)
    2   3   13
    3   1   10
    3   2   13
    3   3   18
It is clear that our lowest energy levels are

(1/m) (ħπ/L)²
(5/2m) (ħπ/L)²                                                 (17.18)
(4/m) (ħπ/L)²

with degeneracies 1, 2, 1 respectively.
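The table and the level counting above can be reproduced in a couple of lines, with the energies in units of (1/2m)(ħπ/L)²:

```python
from collections import Counter

# Two-particle box energies in units of (1/(2m)) (hbar*pi/L)^2: E = n^2 + m^2.
levels = Counter(n**2 + m**2 for n in range(1, 4) for m in range(1, 4))
lowest = sorted(levels.items())[:3]
print(lowest)  # [(2, 1), (5, 2), (8, 1)] -> degeneracies 1, 2, 1
```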

Ground state energy with interaction perturbation to first order With c₀ positive and an
interaction potential of the form

U(X₁, X₂) = −c₀ δ(X₁ − X₂)                                     (17.19)

The second order perturbation of the ground state energy is

E = E₁₁⁽⁰⁾ + H′₁₁;₁₁ + Σ_{nm≠11} |H′_{nm;11}|² / (E₁₁ − E_{nm})   (17.20)

where

E₁₁⁽⁰⁾ = (1/m) (ħπ/L)²,                                        (17.21)

and

H′_{nm;ab} = −c₀ ⟨ψ_{nm}| δ(X₁ − X₂) |ψ_{ab}⟩                  (17.22)

To proceed, we need to expand the matrix element

⟨ψ_{nm}| δ(X₁ − X₂) |ψ_{ab}⟩
  = ∫ dx₁ dx₂ dy₁ dy₂ ⟨ψ_{nm}|x₁x₂⟩ ⟨x₁x₂| δ(X₁ − X₂) |y₁y₂⟩ ⟨y₁y₂|ψ_{ab}⟩
  = ∫ dx₁ dx₂ dy₁ dy₂ ⟨ψ_{nm}|x₁x₂⟩ δ(x₁ − x₂) δ²(x − y) ⟨y₁y₂|ψ_{ab}⟩
  = ∫ dx₁ dx₂ ⟨ψ_{nm}|x₁x₂⟩ δ(x₁ − x₂) ⟨x₁x₂|ψ_{ab}⟩
  = ∫_{−L/2}^{L/2} dx ψ_{nm}(x, x) ψ_{ab}(x, x)                (17.23)

So, for our first order calculation we need

H′₁₁;₁₁ = −c₀ ∫_{−L/2}^{L/2} dx ψ₁₁(x, x) ψ₁₁(x, x)
        = −c₀ (4/L²) ∫_{−L/2}^{L/2} dx cos⁴(πx/L)              (17.24)
        = −3c₀/2L

For the second order perturbation of the energy, it is clear that this will reduce the first order
approximation for each matrix element that is non-zero.
Attempting that calculation with Mathematica however, is deceiving, since Mathematica
reports these all as zero after FullSimplify. It appears that, as used, it does not properly handle
the m = n and m = n ± 1 cases, where the denominators of the unsimplified integrals go to
zero. This worksheet can be seen to be giving misleading results, by evaluating

∫_{−L/2}^{L/2} (2/L) cos²(πx/L) (2/L) cos²(3πx/L) dx = 1/L     (17.25)

Yet, the FullSimplify gives

FullSimplify[ ∫_{−L/2}^{L/2} (2/L) Cos[(2n+1)πx/L]² (2/L) Cos[(2m+1)πx/L]² dx, {m, n} ∈ Integers ] = 0   (17.26)

I am hoping that asking about this on stackoverflow will clarify how to use Mathematica
correctly for this calculation.
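The suspect integral is easy to double check with an independent CAS; sympy agrees with eq. (17.25) rather than with the FullSimplify result:

```python
from sympy import symbols, integrate, cos, pi, simplify

x, L = symbols('x L', positive=True)

# Eq. (17.25): the overlap integral that FullSimplify incorrectly reported as 0.
val = integrate((2 / L) * cos(pi * x / L)**2 * (2 / L) * cos(3 * pi * x / L)**2,
                (x, -L / 2, L / 2))
print(simplify(val))  # 1/L, nonzero
```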
18  A DIFFERENT DERIVATION OF THE ADIABATIC PERTURBATION COEFFICIENT EQUATION
Professor Sipe's adiabatic perturbation and that of the text [4] in §17.5.1 and §17.5.2 use
different notation for γ_m and take a slightly different approach. We can find Prof Sipe's final
result with a bit less work, if a hybrid of the two methods is used.
Our starting point is the same, we have a time dependent slowly varying Hamiltonian

H = H(t), (18.1)

where our perturbation starts at some specific time from a given initial state

H(t) = H0 , t ≤ 0. (18.2)

We assume that instantaneous eigenkets can be found, satisfying

H(t) |n(t)i = En (t) |n(t)i (18.3)


Here I will use |n⟩ ≡ |n(t)⟩ instead of the |ψ̂_n(t)⟩ that we used in class because it's easier to
write.
Now suppose that we have some arbitrary state, expressed in terms of the instantaneous basis
kets |n⟩

|ψ⟩ = Σ_n b_n(t) e^{−iα_n + iβ_n} |n⟩,                         (18.4)

where

α_n(t) = (1/ħ) ∫₀ᵗ dt′ E_n(t′).                                (18.5)

Here I have used βn instead of γn (as in the text) to avoid conflicting with the lecture notes,
where this βn is a factor to be determined.


For this state, we have at the time just before the perturbation

|ψ(0)⟩ = Σ_n b_n(0) e^{−iα_n(0) + iβ_n(0)} |n(0)⟩.             (18.6)

The question to answer is: How does this particular state evolve?
Another question, for those that do not like sneaky bastard derivations, is where did that
magic factor of e−iαn come from in our superposition state? We will see after we start taking
derivatives that this is what we need to cancel the H(t) |ni in Schrödinger’s equation.
Proceeding to plug into the evolution identity we have

0 = ⟨m| (iħ d/dt − H(t)) |ψ⟩
  = ⟨m| Σ_n e^{−iα_n + iβ_n} ( (iħ) (db_n/dt + b_n (−iE_n/ħ + iβ̇_n)) |n⟩ + iħ b_n (d/dt)|n⟩ − E_n b_n |n⟩ )
  = e^{−iα_m + iβ_m} (iħ) db_m/dt + e^{−iα_m + iβ_m} (iħ) iβ̇_m b_m + iħ Σ_n b_n ⟨m| (d/dt) |n⟩ e^{−iα_n + iβ_n}
  ∼ db_m/dt + iβ̇_m b_m + Σ_n e^{−iα_n + iβ_n} e^{iα_m − iβ_m} b_n ⟨m| (d/dt) |n⟩        (18.7)
  = db_m/dt + iβ̇_m b_m + b_m ⟨m| (d/dt) |m⟩ + Σ_{n≠m} e^{−iα_n + iβ_n} e^{iα_m − iβ_m} b_n ⟨m| (d/dt) |n⟩

We are free to pick β_m to kill the second and third terms

0 = iβ̇_m b_m + b_m ⟨m| (d/dt) |m⟩,                             (18.8)

or

β̇_m = i ⟨m| (d/dt) |m⟩,                                        (18.9)

which after integration is

β_m(t) = i ∫₀ᵗ dt′ ⟨m(t′)| (d/dt′) |m(t′)⟩.                    (18.10)

In the lecture notes this was written as

Γ_m(t) = i ⟨m(t)| (d/dt) |m(t)⟩                                (18.11)

so that

β_m(t) = ∫₀ᵗ dt′ Γ_m(t′).                                      (18.12)
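Since the instantaneous kets stay normalized, ⟨m(t)|m(t)⟩ = 1, this phase is necessarily real; filling in the quick argument:

```latex
0 = \frac{d}{dt}\langle m | m \rangle
  = \left\langle \frac{dm}{dt} \Big| m \right\rangle + \left\langle m \Big| \frac{dm}{dt} \right\rangle
  = \left\langle m \Big| \frac{dm}{dt} \right\rangle^{*} + \left\langle m \Big| \frac{dm}{dt} \right\rangle
  = 2 \,\mathrm{Re} \left\langle m \Big| \frac{dm}{dt} \right\rangle ,
```

so ⟨m|(d/dt)|m⟩ is purely imaginary, and Γ_m = i⟨m|(d/dt)|m⟩ (and hence β_m) is real.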

As in class we can observe that this is a purely real function. We are left with

db_m/dt = − Σ_{n≠m} b_n e^{−iα_nm + iβ_nm} ⟨m| (d/dt) |n⟩,     (18.13)

where

α_nm = α_n − α_m
β_nm = β_n − β_m                                               (18.14)

The task is now to find solutions for these bm coefficients, and we can refer to the class notes
for that without change.
19  SECOND ORDER TIME EVOLUTION FOR THE COEFFICIENTS OF AN INITIALLY PURE KET WITH AN ADIABATICALLY CHANGING HAMILTONIAN

Motivation In lecture 9, Prof Sipe developed the equations governing the evolution of the
coefficients of a given state for an adiabatically changing Hamiltonian. He also indicated that
we could do an approximation, finding the evolution of an initially pure state in powers of λ
(like we did for the solutions of a non-time dependent perturbed Hamiltonian H = H0 + λH 0 ).
I tried doing that a couple of times and always ended up going in circles. I will show that here
and also develop an expansion in time up to second order as an alternative, which appears to
work out nicely.

Review We assumed that an adiabatically changing Hamiltonian was known with
instantaneous eigenkets governed by

H(t) |ψ̂_n(t)⟩ = ħω_n |ψ̂_n(t)⟩                                  (19.1)

The problem was to determine the time evolutions of the coefficients b_n(t) of some state |ψ(t)⟩,
and this was found to be

|ψ(t)⟩ = Σ_n b_n(t) e^{−iγ_n(t)} |ψ̂_n(t)⟩
γ_s(t) = ∫₀ᵗ dt′ (ω_s(t′) − Γ_s(t′))                           (19.2)
Γ_s(t) = i ⟨ψ̂_s(t)| (d/dt) |ψ̂_s(t)⟩
where the b_s(t) coefficients must satisfy the set of LDEs

db_s(t)/dt = − Σ_{n≠s} b_n(t) e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩,   (19.3)

where

γ_sn(t) = γ_s(t) − γ_n(t).                                     (19.4)


Solving these in general does not look terribly fun, but perhaps we can find an explicit
solution for all the b_s's, if we simplify the problem somewhat. Suppose that our initial state is
found to be in the mth energy level at the time before we start switching on the changing
Hamiltonian.

|ψ(0)⟩ = b_m(0) |ψ̂_m(0)⟩.                                      (19.5)

We therefore require (up to a phase factor)

b_m(0) = 1
b_s(0) = 0 if s ≠ m.                                           (19.6)

Equivalently we can write

b_s(0) = δ_ms                                                  (19.7)

Going in circles with a λ expansion In class it was hinted that we could try a λ expansion of
the following form to determine a solution for the b_s coefficients at later times

b_s(t) = δ_ms + λ b_s⁽¹⁾(t) + ···                              (19.8)

I was not able to figure out how to make that work. Trying this first to first order, and plugging
in, we find

λ (d/dt) b_s⁽¹⁾(t) = − Σ_{n≠s} (δ_mn + λ b_n⁽¹⁾(t)) e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩,   (19.9)

equating powers of λ yields two equations

(d/dt) b_s⁽¹⁾(t) = − Σ_{n≠s} b_n⁽¹⁾(t) e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩
0 = − Σ_{n≠s} δ_mn e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩.     (19.10)

Observe that the first identity is exactly what we started with in eq. (19.3), but has just
replaced the b_n's with b_n⁽¹⁾'s. Worse is that the second equation is only satisfied for s = m, and
for s ≠ m we have

0 = −e^{iγ_sm(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩.                   (19.11)

So this λ power series only appears to work if we somehow had |ψ̂_s(t)⟩ always orthogonal
to the derivative of |ψ̂_m(t)⟩. Perhaps this could be done if the Hamiltonian was also expanded
in powers of λ, but such a beastie seems foreign to the problem. Note that we do not even have
any explicit dependence on the Hamiltonian in the final b_n differential equations, as we would
probably need for such an expansion to work out.

A Taylor series expansion in time What we can do is to expand the b_n's in a power series
parametrized by time. That is, again, assuming we started with energy equal to ħω_m, form

b_s(t) = δ_sm + (t/1!) (db_s/dt)|_{t=0} + (t²/2!) (d²b_s/dt²)|_{t=0} + ···   (19.12)

The first order term we can grab right from eq. (19.3) and find

(db_s/dt)|_{t=0} = − Σ_{n≠s} b_n(0) ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩|_{t=0}
                = − Σ_{n≠s} δ_nm ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩|_{t=0}     (19.13)
                = { 0                                          s = m
                  { −⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩|_{t=0}          s ≠ m

Let us write

|n⟩ = |ψ̂_n(0)⟩
|n′⟩ = (d/dt)|ψ̂_n(t)⟩|_{t=0}                                   (19.14)

So we can write

(db_s/dt)|_{t=0} = −(1 − δ_sm) ⟨s|m′⟩,                          (19.15)

and form, to first order in time, our approximation for the coefficient

b_s(t) = δ_sm − t (1 − δ_sm) ⟨s|m′⟩.                            (19.16)

Let us do the second order term too. For that we have

(d²b_s/dt²)|_{t=0} = − Σ_{n≠s} ( (db_n/dt + δ_nm i (dγ_sn/dt)) ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩
                               + δ_nm (d/dt)( ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩ ) )|_{t=0}   (19.17)

For the γ_sn derivative we note that

(dγ_s/dt)|_{t=0} = ω_s(0) − i ⟨s|s′⟩,                           (19.18)
So we have

(d²b_s/dt²)|_{t=0} = − Σ_{n≠s} ( ( −(1 − δ_nm) ⟨n|m′⟩ + δ_nm i(ω_sn(0) − i⟨s|s′⟩ + i⟨n|n′⟩) ) ⟨s|n′⟩ + δ_nm ( ⟨s′|n′⟩ + ⟨s|n′′⟩ ) )
                                                               (19.19)
Again for s = m, all terms are killed. That is somewhat surprising, but suggests that we will
need to normalize the coefficients after the perturbation calculation, since we have unity for one
of them.
For s ≠ m we have

(d²b_s/dt²)|_{t=0} = Σ_{n≠s} ( (1 − δ_nm) ⟨n|m′⟩ ⟨s|n′⟩ − δ_nm i(ω_sn(0) − i⟨s|s′⟩ + i⟨n|n′⟩) ⟨s|n′⟩ − δ_nm ( ⟨s′|n′⟩ + ⟨s|n′′⟩ ) )
                   = −i(ω_sm(0) − i⟨s|s′⟩ + i⟨m|m′⟩) ⟨s|m′⟩ − ( ⟨s′|m′⟩ + ⟨s|m′′⟩ ) + Σ_{n≠s,m} ⟨n|m′⟩ ⟨s|n′⟩.
                                                               (19.20)
So we have, for s ≠ m

(d²b_s/dt²)|_{t=0} = ( ⟨m|m′⟩ − ⟨s|s′⟩ ) ⟨s|m′⟩ − iω_sm(0) ⟨s|m′⟩ − ⟨s′|m′⟩ − ⟨s|m′′⟩ + Σ_{n≠s,m} ⟨n|m′⟩ ⟨s|n′⟩.
                                                               (19.21)
It is not particularly illuminating looking, but possible to compute, and we can use it to form
a second order approximate solution for our perturbed state.



b_s(t) = δ_sm − t (1 − δ_sm) ⟨s|m′⟩
         + (1 − δ_sm) ( ( ⟨m|m′⟩ − ⟨s|s′⟩ ) ⟨s|m′⟩ − iω_sm(0) ⟨s|m′⟩ − ⟨s′|m′⟩ − ⟨s|m′′⟩ + Σ_{n≠s,m} ⟨n|m′⟩ ⟨s|n′⟩ ) t²/2
                                                               (19.22)

New info. How to do the λ expansion Asking about this, Federico nicely explained. "The
reason why you are going in circles when trying the lambda expansion is because you are not
assuming the term ⟨ψ(t)| (d/dt) |ψ(t)⟩ to be of order lambda. This has to be assumed, otherwise
it does not make sense at all trying a perturbative approach. This assumption means that the
coupling between the level s and the other levels is assumed to be small because the time
dependent part of the Hamiltonian is small or changes slowly with time. Making a Taylor
expansion in time would be sensible only if you are interested in a short interval of time. The
lambda-expansion approach would work for any time as long as the time dependent piece of the
Hamiltonian does not change wildly or is too big."
In the tutorial he outlined another way to justify this. We have written so far

H = { H(t)   t > 0
    { H₀     t < 0                                             (19.23)

where H(0) = H₀. We can make this explicit, and introduce a λ factor into the picture if we
write

H(t) = H₀ + λH′(t),                                            (19.24)

where H₀ has no time dependence, so that our Hamiltonian is then just the "steady-state"
system for λ = 0.
Now recall the method from [2] that we can use to relate our bra-derivative-ket to the
Hamiltonian. Taking derivatives of the energy identity, braketed between two independent kets
(m ≠ n), we have

0 = ⟨ψ̂_m(t)| (d/dt) ( H(t) |ψ̂_n(t)⟩ − ħω_n |ψ̂_n(t)⟩ )
  = ⟨ψ̂_m(t)| ( (dH(t)/dt) |ψ̂_n(t)⟩ + H(t) (d/dt)|ψ̂_n(t)⟩ − ħ (dω_n/dt) |ψ̂_n(t)⟩ − ħω_n (d/dt)|ψ̂_n(t)⟩ )   (19.25)
  = ħ(ω_m − ω_n) ⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩ − ħ (dω_n/dt) δ_mn + ⟨ψ̂_m(t)| (dH(t)/dt) |ψ̂_n(t)⟩

So for m ≠ n we find a dependence between the bra-derivative-ket and the time derivative of
the Hamiltonian

⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩ = ⟨ψ̂_m(t)| (dH(t)/dt) |ψ̂_n(t)⟩ / ( ħ(ω_n − ω_m) )   (19.26)

Referring back to eq. (19.24) we see the λ dependence in this quantity, coming directly from
the λ dependence imposed on the time dependent part of the Hamiltonian

⟨ψ̂_m(t)| (d/dt) |ψ̂_n(t)⟩ = λ ⟨ψ̂_m(t)| (dH′(t)/dt) |ψ̂_n(t)⟩ / ( ħ(ω_n − ω_m) )   (19.27)
Given this λ dependence, let us revisit the perturbation attempt of eq. (19.9). Our first order
factors of λ are now

(d/dt) b_s⁽¹⁾(t) = − Σ_{n≠s} δ_mn e^{iγ_sn(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_n(t)⟩
                = { 0                                              if m = s    (19.28)
                  { −e^{iγ_sm(t)} ⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩        if m ≠ s

So we find to first order

b_s(t) = δ_ms (1 + λ · constant) − (1 − δ_ms) λ ∫₀ᵗ dt′ e^{iγ_sm(t′)} ⟨ψ̂_s(t′)| (d/dt′) |ψ̂_m(t′)⟩   (19.29)
A couple observations of this result. One is that the constant factor in the m = s case makes
sense. This would likely be a negative contribution, since we have to decrease the probability
coefficient for finding our wavefunction in the m = s state after perturbation, since we are
increasing the probability for finding it elsewhere by changing the Hamiltonian.
Also observe that since γ_sm ∼ 0 for small t, this is consistent with the first order Taylor series
expansion, where we found our first order contribution was

−(1 − δ_ms) t ⟨ψ̂_s(t)| (d/dt) |ψ̂_m(t)⟩.                        (19.30)

Also note that this −e^{iγ_sm(t′)} ⟨ψ̂_s(t′)| (d/dt′) |ψ̂_m(t′)⟩ is exactly the difference from 0 that
was mentioned in class when the trial solution of b_s = δ_sm was tested by plugging it into
eq. (19.3), so it is not too surprising that we should have a factor of exactly this form when we
refine our approximation.
A question to consider should we wish to refine the λ perturbation to higher than first order in
λ: is there any sort of λ dependence in the e^{iγ_sm} coming from the Γ_sm term in that exponential?
20  DEGENERACY AND DIAGONALIZATION
20.1 motivation

In class it was mentioned that to deal with perturbation around a degenerate energy eigenvalue,
we needed to diagonalize the perturbing Hamiltonian. I did not follow those arguments
completely, and I would like to revisit them here.

20.2 a four state hamiltonian

Problem set 3, problem 1, was to calculate the energy eigenvalues for the following Hamiltonian

H = H₀ + λH′

H₀ = [ a  0  0  0 ]
     [ 0  b  0  0 ]
     [ 0  0  c  0 ]
     [ 0  0  0  c ]
                                                               (20.1)
H′ = [ α   0   ν   η ]
     [ 0   β   0   μ ]
     [ ν*  0   γ   0 ]
     [ η*  μ*  0   δ ]

This is more complicated than the two state problem that is solved exactly in §13.1.1 in the
text [4], but differs from the (possibly) infinite dimensional problem that was covered in class.
Unfortunately, the solution provided to this problem did not provide the illumination I expected,
so let us do it again, calculating the perturbed energy eigenvalues for the degenerate levels, from
scratch.
Can we follow the approach used in the text for the two (only) state problem? For the two
state problem, it was assumed that the perturbed solution could be expressed as a superposition
of the two states that formed the basis for the unperturbed Hilbert space. That is

|ψ⟩ = m |1⟩ + n |2⟩                                            (20.2)


For the two state problem, assuming that the perturbed energy eigenvalue is E, and the
unperturbed energy eigenvalue is E⁰, we find

0 = (H − E) |ψ⟩
  = (H₀ + λH′) |ψ⟩ − E |ψ⟩
  = (H₀ + λH′)(m |1⟩ + n |2⟩) − E(m |1⟩ + n |2⟩)                (20.3)
  = λH′(m |1⟩ + n |2⟩) + (E⁰ − E)(m |1⟩ + n |2⟩)
  = ( (E⁰ − E) + λH′ ) [ |1⟩  |2⟩ ] (m, n)ᵀ

Left multiplying by the brakets we find

0 = ( ⟨1| ; ⟨2| ) (H − E) |ψ⟩
  = ( (E⁰ − E) I + λ [ ⟨1| H′ |1⟩  ⟨1| H′ |2⟩ ] ) (m, n)ᵀ      (20.4)
                     [ ⟨2| H′ |1⟩  ⟨2| H′ |2⟩ ]

Or

( (E⁰ − E) I + λ [H′_ij] ) (m, n)ᵀ = 0.                        (20.5)

Observe that there was no assumption about the dimensionality of H₀ and H′ here, just that
the two degenerate energy levels had eigenvalue E⁰ and a pair of eigenkets |1⟩ and |2⟩ such
that H₀ |i⟩ = E⁰ |i⟩, i ∈ [1, 2]. It is clear that we can use a similar argument for any degeneracy
degree. It is also clear how to proceed, since we have what almost amounts to a characteristic
equation for the degenerate subspace of Hilbert space for the problem.
Because H′ is Hermitian, a diagonalization

H′ = U† D U
D = [H′_i δ_ij]                                                (20.6)

can be found. To solve for E we can take the determinant of the matrix factor of eq. (20.5),
and because I = U† U we have

0 = det( (E⁰ − E) U† I U + λ U† D U )
  = det( U† ( (E⁰ − E) I + λD ) U )
  = det [ E⁰ − E + λH′₁        0        ]                      (20.7)
        [ 0              E⁰ − E + λH′₂ ]
  = (E⁰ − E + λH′₁)(E⁰ − E + λH′₂)

So our energy eigenvalues associated with the perturbed state are (exactly)

E = E⁰ + λH′₁, E⁰ + λH′₂.                                      (20.8)
It is a bit curious seeming that only the energy eigenvalues associated with the degeneracy
play any part in this result, but there is some intuitive comfort in this idea. Without the pertur-
bation, we can not do an energy measurement that would distinguish one or the other of the
eigenkets for the degenerate energy level, so it does not seem unreasonable that a perturbed en-
ergy level close to the original can be formed by superposition of these two states, and thus the
perturbed energy eigenvalue for the new system would then be related to only those degenerate
levels.
Observe that in the problem set three problem we had a diagonal initial Hamiltonian H₀; that
does not have an impact on the argument above, since that portion of the Hamiltonian only has
a diagonal contribution to the result found in eq. (20.5), since the identity H₀ |i⟩ = c |i⟩, i ∈ [3, 4]
removes any requirement to know the specifics of that portion of the matrix element of H₀.
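A numeric spot check of eq. (20.8), using the structure of eq. (20.1) with arbitrary illustrative numbers of my own choosing:

```python
import numpy as np

a, b, c, lam = 1.0, 2.0, 5.0, 0.01
H0 = np.diag([a, b, c, c])
# H' with the structure of eq. (20.1); alpha, beta, nu, eta, mu, gamma, delta
# chosen arbitrarily (real entries, so Hermitian conjugation is transposition).
Hp = np.array([[0.3, 0.0, 0.2, 0.1],
               [0.0, 0.4, 0.0, 0.2],
               [0.2, 0.0, 0.6, 0.0],
               [0.1, 0.2, 0.0, 0.8]])

# Degenerate-subspace prediction: E = c + lam * (eigenvalues of the 2x2 block).
predicted = c + lam * np.linalg.eigvalsh(Hp[2:, 2:])
exact = np.linalg.eigvalsh(H0 + lam * Hp)[-2:]   # the two levels near c
print(np.abs(exact - predicted).max() < lam**2)  # True: agreement to O(lam^2)
```

The residual difference comes from second order coupling to the non-degenerate levels, which is why the agreement is only to O(λ²).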

20.3 generalizing slightly

Let us work with a system that has kets using an explicit degeneracy index

H₀ |mα_m⟩ = E_m⁰ |mα_m⟩,   α_m = 1, ···, γ_m,   m ∈ [1, N]     (20.9)

Example:

|mα_m⟩ ∈ |11⟩
         |21⟩, |22⟩
         |31⟩                                                  (20.10)
         |41⟩, |42⟩, |43⟩.

Again we seek to find the energy eigenvalues of the new system

H = H₀ + λH′.                                                  (20.11)

For any m associated with a degeneracy (γ_m > 1) we can calculate the subspace
diagonalization

[⟨mi| H′ |mj⟩] = U_m† D_m U_m,                                 (20.12)

where

U_m U_m† = 1,                                                  (20.13)

and D_m is diagonal

D_m = [δ_ij H′_{m,i}].                                         (20.14)

This is not a diagonalizing transformation in the usual sense. Putting it together into block
matrix form, we can write

U = [ U₁              ]
    [     U₂          ]                                        (20.15)
    [         ⋱       ]
    [             U_N ]

and find that a similarity transformation using this change of basis matrix puts all the block
matrices along the diagonal into diagonal form, but leaves the rest possibly non-zero

U† [⟨mα_m| H′ |m′α_{m′}⟩] U = [ D₁  x   x   x  ]
                              [ x   D₂  x   x  ]               (20.16)
                              [ x   x   ⋱   x  ]
                              [ x   x   x   D_N ]

A five level system with two pairs of degenerate levels Let us do this explicitly using a specific
degeneracy example, supposing that we have a non-degenerate ground state, and two doubly
degenerate higher energy levels. That is

|mα_m⟩ ∈ |11⟩
         |21⟩, |22⟩                                            (20.17)
         |31⟩, |32⟩

Our change of basis matrix is

U = [ 1  0   0  ]
    [ 0  U₂  0  ]                                              (20.18)
    [ 0  0   U₃ ]

(in block form, with U₂ and U₃ each 2 × 2).
We would like to calculate

U† H′ U                                                        (20.19)

Let us write this putting row and column range subscripts on our matrices to explicitly block
them into multiplication compatible sized pieces

U = [ I_{11,11}  0_{11,23}  0_{11,45} ]
    [ 0_{23,11}  U_{23,23}  0_{23,45} ]
    [ 0_{45,11}  0_{45,23}  U_{45,45} ]
                                                               (20.20)
H′ = [ H′_{11,11}  H′_{11,23}  H′_{11,45} ]
     [ H′_{23,11}  H′_{23,23}  H′_{23,45} ]
     [ H′_{45,11}  H′_{45,23}  H′_{45,45} ]

The change of basis calculation then becomes

U† H′ U = [ I_{11,11}  0           0          ] [ H′_{11,11}  H′_{11,23}  H′_{11,45} ] [ I_{11,11}  0          0         ]
          [ 0          U†_{23,23}  0          ] [ H′_{23,11}  H′_{23,23}  H′_{23,45} ] [ 0          U_{23,23}  0         ]
          [ 0          0           U†_{45,45} ] [ H′_{45,11}  H′_{45,23}  H′_{45,45} ] [ 0          0          U_{45,45} ]

        = [ H′_{11,11}             H′_{11,23} U_{23,23}             H′_{11,45} U_{45,45}            ]
          [ U†_{23,23} H′_{23,11}  U†_{23,23} H′_{23,23} U_{23,23}  U†_{23,23} H′_{23,45} U_{45,45} ]   (20.21)
          [ U†_{45,45} H′_{45,11}  U†_{45,45} H′_{45,23} U_{23,23}  U†_{45,45} H′_{45,45} U_{45,45} ]

We see that we end up with explicitly diagonal matrices along the diagonal blocks, but
unreduced products everywhere else.
In the new basis our kets become

|mα_m⟩′ = U† |mα_m⟩                                            (20.22)

Suppose we calculate this change of basis representation for |21⟩ (we have implicitly assumed
above that our original basis had the ordering {|11⟩, |21⟩, |22⟩, |31⟩, |32⟩}). We find

|21⟩′ = U† |21⟩
      = [ 1  0    0   ] (0, 1, 0, 0, 0)ᵀ                       (20.23)
        [ 0  U₂†  0   ]
        [ 0  0    U₃† ]

With

U₂ = [ U_{2,11}  U_{2,12} ]
     [ U_{2,21}  U_{2,22} ]
                                                               (20.24)
U₂† = [ U*_{2,11}  U*_{2,21} ]
      [ U*_{2,12}  U*_{2,22} ]

We find

|21⟩′ = U† |21⟩ = ( 0, U*_{2,11}, U*_{2,12}, 0, 0 )ᵀ = U*_{2,11} |21⟩ + U*_{2,12} |22⟩   (20.25)

Energy eigenvalues of the unperturbed Hamiltonian in the new basis Generalizing this, it is
clear that for a given degeneracy level, the transformed kets in the new basis are superpositions
of only the kets associated with that degenerate level (and the kets for the non-degenerate levels
are left as is).
Even better, all of the |mα_m⟩′ = U† |mα_m⟩ remain eigenkets of the unperturbed Hamiltonian.
We see that by computing the matrix element of our Hamiltonian in the full basis.
Writing

F = U† H′ U,                                                   (20.26)

or

H′ = U F U†,                                                   (20.27)

where F has been shown to have diagonal block diagonals, we can write

H = H₀ + λ U F U†
  = U U† H₀ U U† + λ U F U†                                    (20.28)
  = U ( U† H₀ U + λF ) U†

So in the |mα_m⟩′ basis, our Hamiltonian's matrix element is

H → U† H₀ U + λF                                               (20.29)

When λ = 0, application of this Hamiltonian to the new basis kets gives

H₀ |mα⟩′ = U† H₀ U U† |mα⟩
         = U† H₀ |mα⟩
         = U† E_m⁰ |mα⟩                                        (20.30)
         = E_m⁰ ( U† |mα⟩ )

But this is just

H₀ |mα⟩′ = E_m⁰ |mα⟩′,                                         (20.31)

a statement that the |mα⟩′ are still the energy eigenkets for the unperturbed system. This
matches our expectations since we have seen that these differ from the original basis elements
only for degenerate energy levels, and that these new basis elements are superpositions of only
the kets for their respective degeneracy levels.
21  REVIEW OF APPROXIMATION RESULTS

21.1 motivation

Here I will summarize what I had put on a cheat sheet for the tests or exam, if one would be
allowed. While I can derive these results, memorization unfortunately appears required for good
test performance in this class, and this will give me a good reference of what to memorize.
This set of review notes covers all the approximation methods we covered except for Fermi’s
golden rule.

21.2 variational method

We can find an estimate of our ground state energy using

⟨Ψ| H |Ψ⟩ / ⟨Ψ|Ψ⟩ ≥ E₀                                         (21.1)
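As an example of how this gets used (my own illustration, not from the notes): a parabolic trial function for the particle in a box on [−L/2, L/2] bounds the true ground state energy π²ħ²/2mL² from above, and comes within about 1.3% of it:

```python
from sympy import symbols, integrate, diff, pi, simplify

x, L, hbar, m = symbols('x L hbar m', positive=True)

# Trial wavefunction (my choice): a parabola vanishing at the box walls.
psi = (L / 2)**2 - x**2

num = integrate(psi * (-hbar**2 / (2 * m)) * diff(psi, x, 2), (x, -L / 2, L / 2))
den = integrate(psi**2, (x, -L / 2, L / 2))
E_var = simplify(num / den)                    # = 5*hbar**2/(m*L**2)

E_exact = pi**2 * hbar**2 / (2 * m * L**2)     # true ground state energy
print(simplify(E_var / E_exact))               # 10/pi**2, about 1.3% high
```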

21.3 time independent perturbation

Given a perturbed Hamiltonian and an associated solution for the unperturbed state

H = H₀ + λH′,   λ ∈ [0, 1]
H₀ |ψ_{mα}⁽⁰⁾⟩ = E_m⁽⁰⁾ |ψ_{mα}⁽⁰⁾⟩,                           (21.2)

we assume a power series solution for the energy

E_m = E_m⁽⁰⁾ + λE_m⁽¹⁾ + λ²E_m⁽²⁾ + ···                        (21.3)


E E
For a non-degenerate state |ψm i = |ψm1 i, with an unperturbed value of ψ(0)
m = ψ(0) , we
m1
seek a power series expansion of this ket in the perturbed system
X E X E X E
|ψm i = cnα;m (0) ψnα (0) + λ cnα;m (1) ψnα (0) + λ2 cnα;m (2) ψnα (0) + · · ·
n,α n,α n,α
E X E X E (21.4)
∝ ψm (0) + λ cnα;m (1) ψnα (0) + λ2 cnα;m (2) ψnα (0) + · · ·
n,m,α n,m,α
Any states n , m are allowed to have degeneracy. For this case, we found to second order in
energy and first order in the kets

X Hnα;m1 0 2

(0)
Em = Em + λHm1;m1 0 + λ2 (0) (0)
+···
n,m,α E m − E n
(0) E X Hnα;m1 0 (21.5)
|ψm i ∝ ψm +λ ψ (0) E + · · ·
(0) (0) nα
n,m,α E m − E n
D E
0
Hnα;sβ = ψnα (0) H 0 ψ sβ (0) .
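A quick numeric sanity check of the second order energy sum (a sketch of mine): compare eq. (21.5) against exact diagonalization of a small non-degenerate matrix, where the leftover error should be \( O(\lambda^3) \).

```python
import numpy as np

# Non-degenerate perturbation check: H = diag(E0) + lam * Hp, four levels.
rng = np.random.default_rng(0)
E0 = np.array([0.0, 1.0, 2.5, 4.0])          # unperturbed, non-degenerate energies
A = rng.normal(size=(4, 4))
Hp = (A + A.T) / 2                           # real symmetric perturbation H'
lam = 1e-3

exact = np.linalg.eigvalsh(np.diag(E0) + lam * Hp)

m = 0                                        # perturb about the lowest level
second = sum(Hp[n, m]**2 / (E0[m] - E0[n]) for n in range(4) if n != m)
approx = E0[m] + lam * Hp[m, m] + lam**2 * second
err = abs(approx - exact[m])                 # residual error is O(lam^3)
```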

21.4 degeneracy

When the initial energy eigenvalue \( E_m \) has a degeneracy \( \gamma_m > 1 \) we use a different approach to compute the perturbed energy eigenkets and perturbed energy eigenvalues. Writing the kets as \( |m\alpha\rangle \), we assume that the perturbed ket is a superposition of the kets in the degenerate energy level

\[ |m\alpha\rangle' = \sum_i c_i |m i\rangle. \tag{21.6} \]

We find that we must have

\[ \left( (E_m^{(0)} - E) I + \lambda \left[ H'_{mi;mj} \right] \right)
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_{\gamma_m} \end{pmatrix} = 0. \tag{21.7} \]

Diagonalizing this matrix \( H'_{mi;mj} \) (a subset of the complete \( H' \) matrix)

\[ \left[ \langle m i | H' | m j \rangle \right] = U_m \left[ \delta_{ij} H'_{m,i} \right] U_m^\dagger, \tag{21.8} \]

we find, by taking the determinant, that the perturbed energy eigenvalues are in the set

\[ E = E_m^{(0)} + \lambda H'_{m,i}, \qquad i \in [1, \gamma_m] \tag{21.9} \]

To compute the perturbed kets we must work in a basis for which the block diagonal matrix elements are diagonal for all \( m \), as in

\[ \left[ \langle m i | H' | m j \rangle \right] = \left[ \delta_{ij} H'_{m,i} \right]. \tag{21.10} \]

If that is not the case, then the unitary matrices of eq. (21.8) can be computed, and the matrix

\[ U = \begin{pmatrix} U_1 & & & \\ & U_2 & & \\ & & \ddots & \\ & & & U_N \end{pmatrix}, \tag{21.11} \]

can be formed. The kets

\[ \overline{|m\alpha\rangle} = U^\dagger |m\alpha\rangle, \tag{21.12} \]

will still be energy eigenkets of the unperturbed Hamiltonian

\[ H_0 \overline{|m\alpha\rangle} = E_m^{(0)} \overline{|m\alpha\rangle}, \tag{21.13} \]

but also ensure that the partial diagonalization condition of eq. (21.8) is satisfied. In this basis, dropping overbars, the first order perturbation results found previously for perturbation about a non-degenerate state also hold, allowing us to write

\[ |s\alpha\rangle' = |s\alpha\rangle + \lambda \sum_{m \ne s,\, \beta} \frac{ H'_{m\beta;s\alpha} }{ E_s^{(0)} - E_m^{(0)} } |m\beta\rangle + \cdots \tag{21.14} \]
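A small numeric sketch of this recipe (my own construction): for a two-fold degenerate level, the first order energies come from diagonalizing the degenerate block of \( H' \) as in eq. (21.8) and eq. (21.9).

```python
import numpy as np

# Two-fold degenerate level at E = 1, plus one non-degenerate level at E = 3.
E0 = np.diag([1.0, 1.0, 3.0])
Hp = np.array([[0.2, 0.5, 0.1],
               [0.5, -0.3, 0.0],
               [0.1, 0.0, 0.4]])            # symmetric perturbation H'
lam = 1e-3

# Diagonalize the 2x2 degenerate block; first order energies are
# E_m^0 + lam * (block eigenvalues), as in (21.9).
block_eigs = np.linalg.eigvalsh(Hp[:2, :2])
first_order = 1.0 + lam * block_eigs

exact = np.linalg.eigvalsh(E0 + lam * Hp)[:2]
err = np.max(np.abs(first_order - exact))   # residual is O(lam^2)
```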

21.5 interaction picture

We split the Hamiltonian into time independent and time dependent parts, and also factor the time evolution operator

\[ \begin{aligned}
H &= H_0 + H_I(t) \\
|\alpha_S(t)\rangle &= e^{-i H_0 t/\hbar} |\alpha_I(t)\rangle = e^{-i H_0 t/\hbar} U_I(t) |\alpha_I(0)\rangle.
\end{aligned} \tag{21.15} \]

Plugging into Schrödinger's equation we find

\[ \begin{aligned}
i\hbar \frac{d}{dt} |\alpha_I(t)\rangle &= H_I'(t) |\alpha_I(t)\rangle \\
i\hbar \frac{dU_I}{dt} &= H_I' U_I \\
H_I'(t) &= e^{i H_0 t/\hbar} H_I(t) e^{-i H_0 t/\hbar}
\end{aligned} \tag{21.16} \]

21.6 time dependent perturbation

We moved on to time dependent perturbations of the form

\[ \begin{aligned}
H(t) &= H_0 + H'(t) \\
H_0 |\psi_n^{(0)}\rangle &= \hbar \omega_n |\psi_n^{(0)}\rangle,
\end{aligned} \tag{21.17} \]

where \( \hbar \omega_n \) are the energy eigenvalues, and \( |\psi_n^{(0)}\rangle \) the energy eigenstates, of the unperturbed Hamiltonian.
Use of the interaction picture led quickly to the problem of seeking the coefficients describing the perturbed state

\[ |\psi(t)\rangle = \sum_n c_n(t) e^{-i\omega_n t} |\psi_n^{(0)}\rangle, \tag{21.18} \]

and plugging in we found

\[ \begin{aligned}
i\hbar \dot{c}_s &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n(t) \\
\omega_{sn} &= \omega_s - \omega_n \\
H'_{sn}(t) &= \langle \psi_s^{(0)} | H'(t) | \psi_n^{(0)} \rangle.
\end{aligned} \tag{21.19} \]

Perturbation expansion in series. Introducing a \( \lambda \) parametrized dependence in the perturbation above, and assuming a power series expansion of our coefficients,

\[ \begin{aligned}
H'(t) &\rightarrow \lambda H'(t) \\
c_s(t) &= c_s^{(0)}(t) + \lambda c_s^{(1)}(t) + \lambda^2 c_s^{(2)}(t) + \cdots
\end{aligned} \tag{21.20} \]

we found, after equating powers of \( \lambda \), a set of coupled differential equations

\[ \begin{aligned}
i\hbar \dot{c}_s^{(0)}(t) &= 0 \\
i\hbar \dot{c}_s^{(1)}(t) &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n^{(0)}(t) \\
i\hbar \dot{c}_s^{(2)}(t) &= \sum_n H'_{sn}(t) e^{i\omega_{sn} t} c_n^{(1)}(t) \\
&\ \vdots
\end{aligned} \tag{21.21} \]
Of particular value was the expansion, assuming that we started with an initial state in energy level \( m \) before the perturbation was "turned on" (i.e., \( \lambda = 0 \)),

\[ |\psi(t)\rangle = e^{-i\omega_m t} |\psi_m^{(0)}\rangle, \tag{21.22} \]

so that \( c_n^{(0)}(t) = \delta_{nm} \). We then found a first order approximation for the transition probability coefficient of

\[ i\hbar \dot{c}_s^{(1)} = H'_{sm}(t) e^{i\omega_{sm} t}. \tag{21.23} \]
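To see eq. (21.23) in action numerically (an illustrative sketch of mine, with \( \hbar = 1 \)): for a two-level system with a small constant perturbation switched on at \( t = 0 \), integrating (21.23) gives \( c_s^{(1)}(t) = H'_{sm}(1 - e^{i\omega_{sm} t})/(\hbar\omega_{sm}) \), which reproduces the exact transition probability to leading order in the coupling.

```python
import numpy as np

# Two-level check of the first order result (21.23), hbar = 1.
w, V, t = 1.0, 0.01, 2.0                     # splitting, coupling, evolution time
H = np.array([[0.0, V], [V, w]])

# Exact evolution of |0> under the full Hamiltonian
evals, evecs = np.linalg.eigh(H)
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ np.array([1.0, 0.0])))
p_exact = abs(psi_t[1])**2

# Integrated first order amplitude: c1(t) = V (1 - e^{i w t}) / w
c1 = V * (1 - np.exp(1j * w * t)) / w
p_first = abs(c1)**2
```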

21.7 sudden perturbations

The idea here is that we integrate Schrödinger's equation over the small interval containing the changing Hamiltonian

\[ |\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar} \int_{t_0}^t H(t') |\psi(t')\rangle \, dt' \tag{21.24} \]

and find

\[ |\psi_{\text{after}}\rangle = |\psi_{\text{before}}\rangle. \tag{21.25} \]
An implication is that if we start with a system measured to have a given energy, then after the sudden change to the Hamiltonian that same system will be in a state that is generally a superposition of eigenkets of the new Hamiltonian.
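A minimal numeric illustration (my own sketch): the ket is continuous across the sudden change, so re-expanding it in the new Hamiltonian's eigenbasis spreads the probability over several of the new eigenkets.

```python
import numpy as np

# The old ground state, re-expanded after a sudden change of the Hamiltonian.
H_before = np.array([[1.0, 0.0], [0.0, 2.0]])
H_after = np.array([[1.0, 0.3], [0.3, 2.0]])

psi = np.linalg.eigh(H_before)[1][:, 0]      # ground state before the jump
amps = np.linalg.eigh(H_after)[1].T @ psi    # coefficients in the new eigenbasis
probs = amps**2                              # weight now spread over both new kets
```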

21.8 adiabatic perturbations

Given a Hamiltonian that turns on slowly at \( t = 0 \), a set of instantaneous eigenkets for the duration of the time dependent interval, and a representation in terms of those instantaneous eigenkets,

\[ \begin{aligned}
H(t) &= H_0, \qquad t \le 0 \\
H(t) |\hat\psi_n(t)\rangle &= E_n(t) |\hat\psi_n(t)\rangle \\
|\psi\rangle &= \sum_n b_n(t) e^{-i\alpha_n + i\beta_n} |\hat\psi_n\rangle \\
\alpha_n(t) &= \frac{1}{\hbar} \int_0^t dt' \, E_n(t'),
\end{aligned} \tag{21.26} \]

plugging into Schrödinger's equation we find

\[ \begin{aligned}
\frac{db_m}{dt} &= -\sum_{n \ne m} b_n e^{-i\gamma_{nm}} \langle \hat\psi_m(t) | \frac{d}{dt} |\hat\psi_n(t)\rangle \\
\gamma_{nm}(t) &= \alpha_n(t) - \alpha_m(t) - (\beta_n(t) - \beta_m(t)) \\
\beta_n(t) &= \int_0^t dt' \, \Gamma_n(t') \\
\Gamma_n(t) &= i \langle \hat\psi_n(t) | \frac{d}{dt} |\hat\psi_n(t)\rangle
\end{aligned} \tag{21.27} \]
Here Γn (t) is called the Berry phase.

Evolution of a given state. Given a system initially measured with energy \( E_m(0) \) before the time dependence is "turned on",

\[ |\psi(0)\rangle = |\hat\psi_m(0)\rangle, \tag{21.28} \]

we find that the first order Taylor series expansion for the transition probability coefficients is

\[ b_s(t) = \delta_{sm} - t (1 - \delta_{sm}) \left. \langle \hat\psi_s(0) | \frac{d}{dt} |\hat\psi_m(t)\rangle \right|_{t=0}. \tag{21.29} \]

If we introduce a \( \lambda \) perturbation, separating all the (slowly changing) time dependent parts of the Hamiltonian \( H' \) from the non time dependent parts \( H_0 \), as in

\[ H(t) = H_0 + \lambda H'(t), \tag{21.30} \]



then we find our perturbed coefficients are

\[ b_s(t) = \delta_{ms} (1 + \lambda\, \text{constant}) - (1 - \delta_{ms}) \lambda \int_0^t dt' \, e^{i\gamma_{sm}(t')} \langle \hat\psi_s(t') | \frac{d}{dt'} |\hat\psi_m(t')\rangle \tag{21.31} \]

21.9 wkb

We write Schrödinger's equation as

\[ \begin{aligned}
0 &= \frac{d^2 U}{dx^2} + k^2 U \\
k^2 &= -\kappa^2 = \frac{2m(E - V)}{\hbar^2},
\end{aligned} \tag{21.32} \]

and seek solutions of the form \( U \propto e^{i\phi} \). Schrödinger's equation takes the form

\[ -(\phi'(x))^2 + i \phi''(x) + k^2(x) = 0. \tag{21.33} \]

Initially setting \( \phi'' = 0 \), we refine our approximation to find

\[ \phi'(x) = k(x) \sqrt{ 1 + i \frac{k'(x)}{k^2(x)} }. \tag{21.34} \]

To first order, this gives us

\[ U(x) \propto \frac{1}{\sqrt{k(x)}} e^{\pm i \int dx\, k(x)} \tag{21.35} \]

What we did not cover in class, but required in the problems, was the Bohr-Sommerfeld condition described in §24.1.2 of the text [4],

\[ \int_{x_1}^{x_2} dx \, \sqrt{2m(E - V(x))} = \left( n + \frac{1}{2} \right) \pi \hbar. \tag{21.36} \]

This was found from the WKB connection formulas, themselves found from some Bessel function arguments that I have to admit I did not understand.
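As a sketch of how eq. (21.36) is used (my own numeric check, with \( m = \omega = \hbar = 1 \)): for \( V(x) = x^2/2 \) the quantization integral evaluates to \( \pi E \), so it reproduces \( E_n = n + 1/2 \) exactly.

```python
import numpy as np

# Bohr-Sommerfeld integral for the harmonic oscillator V(x) = x^2 / 2.
def action_integral(E, npts=200001):
    xt = np.sqrt(2 * E)                       # classical turning points at +-xt
    x = np.linspace(-xt, xt, npts)
    p = np.sqrt(np.clip(2 * (E - x**2 / 2), 0.0, None))
    return np.sum(p) * (x[1] - x[0])

# For E = n + 1/2 the integral should equal (n + 1/2) * pi, matching (21.36).
errs = [abs(action_integral(n + 0.5) - (n + 0.5) * np.pi) for n in range(4)]
```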
22 ON CONDITIONS FOR CLEBSCH-GORDAN COEFFICIENTS TO BE ZERO
22.1 motivation

In §28.2 of the text [4] is a statement that the Clebsch-Gordan coefficient

\[ \langle m_1 m_2 | jm \rangle \tag{22.1} \]

vanishes unless \( m = m_1 + m_2 \). It appeared that it was related to the operation of \( J_z \), but how exactly was not obvious to me. In tutorial today we hashed through this. Here are the details lying behind this statement.

22.2 recap on notation

We are taking an arbitrary two particle ket and decomposing it utilizing an insertion of a complete set of states

\[ |jm\rangle = \sum_{m_1' m_2'} \left( |j_1 m_1'\rangle |j_2 m_2'\rangle \langle j_1 m_1'| \langle j_2 m_2'| \right) |jm\rangle \tag{22.2} \]

With \( j_1 \) and \( j_2 \) fixed, this is written with the shorthand

\[ \begin{aligned}
|j_1 m_1\rangle |j_2 m_2\rangle &= |m_1 m_2\rangle \\
\langle j_1 m_1| \langle j_2 m_2| \, |jm\rangle &= \langle m_1 m_2 | jm \rangle,
\end{aligned} \tag{22.3} \]

so that we write

\[ |jm\rangle = \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle \tag{22.4} \]


22.3 the J_z action

We have two ways that we can apply the operator \( J_z \) to \( |jm\rangle \). One is using the sum above, for which we find

\[ \begin{aligned}
J_z |jm\rangle &= \sum_{m_1' m_2'} J_z |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle \\
&= \hbar \sum_{m_1' m_2'} (m_1' + m_2') |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle
\end{aligned} \tag{22.5} \]

We can also act directly on \( |jm\rangle \) and then insert a complete set of states

\[ \begin{aligned}
J_z |jm\rangle &= \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | J_z |jm\rangle \\
&= \hbar m \sum_{m_1' m_2'} |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle
\end{aligned} \tag{22.6} \]

This provides us with the identity

\[ \sum_{m_1' m_2'} m |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle = \sum_{m_1' m_2'} (m_1' + m_2') |m_1' m_2'\rangle \langle m_1' m_2' | jm \rangle \tag{22.7} \]

This equality must be valid for any \( |jm\rangle \), and since all the kets \( |m_1' m_2'\rangle \) are linearly independent, we must have for any \( m_1', m_2' \)

\[ (m - m_1' - m_2') \langle m_1' m_2' | jm \rangle \, |m_1' m_2'\rangle = 0 \tag{22.8} \]

We have two ways to get this zero. One of them is the \( m = m_1' + m_2' \) condition, and the other is for the CG coefficient \( \langle m_1' m_2' | jm \rangle \) to be zero whenever \( m \ne m_1' + m_2' \).
It is not a difficult argument, but one that was not clear from a read of the text (at least to me).
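A brute force check of this selection rule (my own sketch, with \( \hbar = 1 \)) for two spin-1/2 particles: diagonalize \( J^2 \) and \( J_z \) in the product basis and confirm that each coupled ket \( |jm\rangle \) has support only on product states with \( m_1 + m_2 = m \).

```python
import numpy as np

# Product basis order: |++>, |+->, |-+>, |-->  with m = +-1/2 per particle.
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # single particle raising operator
I2 = np.eye(2)

Jz = np.kron(sz, I2) + np.kron(I2, sz)
Jp = np.kron(sp, I2) + np.kron(I2, sp)
J2 = Jp.T @ Jp + Jz @ Jz + Jz                # J^2 = J- J+ + Jz^2 + Jz

_, vecs = np.linalg.eigh(J2 + 0.1 * Jz)      # small Jz term splits the m degeneracy
m_product = np.diag(Jz)                      # m1 + m2 for each product state

worst = 0.0
for v in vecs.T:                             # each column is a coupled ket |j m>
    m = v @ Jz @ v                           # its Jz eigenvalue
    mask = np.abs(m_product - m) > 1e-6      # product states with m1 + m2 != m
    if mask.any():
        worst = max(worst, np.abs(v[mask]).max())
```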
23 ONE MORE ADIABATIC PERTURBATION DERIVATION
23.1 motivation

I liked one of the adiabatic perturbation derivations that I did to review the material, and am
recording it for reference.

23.2 build up

In time dependent perturbation we started after noting that our ket in the interaction picture, for a Hamiltonian \( H = H_0 + H'(t) \), took the form

\[ |\alpha_S(t)\rangle = e^{-i H_0 t/\hbar} |\alpha_I(t)\rangle = e^{-i H_0 t/\hbar} U_I(t) |\alpha_I(0)\rangle. \tag{23.1} \]

Here we have basically assumed that the time evolution can be factored into a portion dependent on only the static portion of the Hamiltonian, with some other operator \( U_I(t) \) providing the remainder of the time evolution. From eq. (23.1) that operator \( U_I(t) \) is found to behave according to

\[ i\hbar \frac{dU_I}{dt} = e^{i H_0 t/\hbar} H'(t) e^{-i H_0 t/\hbar} U_I, \tag{23.2} \]

but for our purposes we just assumed it existed, and used this for motivation. With the assumption that the interaction picture kets can be written in terms of the basis kets for the system at \( t = 0 \), we write our Schrödinger ket as

\[ |\psi\rangle = e^{-i H_0 t/\hbar} \sum_k a_k(t) |k\rangle = \sum_k e^{-i\omega_k t} a_k(t) |k\rangle, \tag{23.3} \]

where \( |k\rangle \) are the energy eigenkets for the initial time problem

\[ H_0 |k\rangle = E_k^0 |k\rangle = \hbar \omega_k |k\rangle. \tag{23.4} \]


23.3 adiabatic case

For the adiabatic problem, we assume the system is changing very slowly, as described by the instantaneous energy eigenkets

\[ H(t) |k(t)\rangle = E_k(t) |k(t)\rangle. \tag{23.5} \]

Can we assume a similar representation to eq. (23.3) above, but allow \( |k\rangle \) to vary in time? This does not quite work, since the \( |k(t)\rangle \) are no longer eigenkets of \( H_0 \)

\[ |\psi\rangle = e^{-i H_0 t/\hbar} \sum_k a_k(t) |k(t)\rangle \ne \sum_k e^{-i\omega_k t} a_k(t) |k(t)\rangle. \tag{23.6} \]

Operating with \( e^{i H_0 t/\hbar} \) does not give the proper time evolution of \( |k(t)\rangle \), and we will in general have a more complex functional dependence in our evolution operator for each \( |k(t)\rangle \). Instead of an \( \omega_k t \) dependence in this time evolution operator, let us assume we have some function \( \alpha_k(t) \) to be determined, and write our ket as

\[ |\psi\rangle = \sum_k e^{-i\alpha_k(t)} a_k(t) |k(t)\rangle. \tag{23.7} \]

Operating on this with our energy operator equation we have

\[ \begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) |\psi\rangle \\
&= \left( H - i\hbar \frac{d}{dt} \right) \sum_k e^{-i\alpha_k} a_k |k\rangle \\
&= \sum_k e^{-i\alpha_k(t)} \left( \left( E_k a_k - i\hbar(-i\alpha_k' a_k + a_k') \right) |k\rangle - i\hbar a_k |k'\rangle \right)
\end{aligned} \tag{23.8} \]

Here I have written \( |k'\rangle = d|k\rangle/dt \). In our original time dependent perturbation problem the \( -i\alpha_k' \) term was \( -i\omega_k \), so this killed off the \( E_k \). If we assume this still kills off the \( E_k \), we must have

\[ \alpha_k = \frac{1}{\hbar} \int_0^t E_k(t') dt', \tag{23.9} \]

and are left with

\[ 0 = \sum_k e^{-i\alpha_k(t)} \left( a_k' |k\rangle + a_k |k'\rangle \right). \tag{23.10} \]

Bra'ing with \( \langle m| \) we have

\[ 0 = e^{-i\alpha_m(t)} a_m' + e^{-i\alpha_m(t)} a_m \langle m | m' \rangle + \sum_{k \ne m} e^{-i\alpha_k(t)} a_k \langle m | k' \rangle, \tag{23.11} \]

or

\[ a_m' + a_m \langle m | m' \rangle = -\sum_{k \ne m} e^{-i\alpha_k(t)} e^{i\alpha_m(t)} a_k \langle m | k' \rangle. \tag{23.12} \]

The LHS is a perfect differential if we introduce an integrating factor \( e^{\int_0^t \langle m | m' \rangle} \), so we can write

\[ e^{-\int_0^t \langle m | m' \rangle} \left( a_m e^{\int_0^t \langle m | m' \rangle} \right)' = -\sum_{k \ne m} e^{-i\alpha_k(t)} e^{i\alpha_m(t)} a_k \langle m | k' \rangle. \tag{23.13} \]

This suggests that we want to form a new function

\[ b_m = a_m e^{\int_0^t \langle m | m' \rangle} \tag{23.14} \]

or

\[ a_m = b_m e^{-\int_0^t \langle m | m' \rangle}. \tag{23.15} \]

Plugging this into our assumed representation we have a more concrete form

\[ |\psi\rangle = \sum_k e^{-\int_0^t dt' \left( i\omega_k + \langle k | k' \rangle \right)} b_k(t) |k(t)\rangle. \tag{23.16} \]

Writing

\[ \Gamma_k = i \langle k | k' \rangle, \tag{23.17} \]

this becomes

\[ |\psi\rangle = \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k(t) |k(t)\rangle. \tag{23.18} \]

A final pass. Now that we have what appears to be a good representation for any given state, if we wish to examine the time evolution, let us start over, reapplying our instantaneous energy operator equality

\[ \begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) |\psi\rangle \\
&= \left( H - i\hbar \frac{d}{dt} \right) \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k |k\rangle \\
&= -i\hbar \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} \left( i\Gamma_k b_k |k\rangle + b_k' |k\rangle + b_k |k'\rangle \right).
\end{aligned} \tag{23.19} \]

Bra'ing with \( \langle m| \) we find

\[ \begin{aligned}
0 &= e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} i\Gamma_m b_m + e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} b_m' \\
&\quad + e^{-i \int_0^t dt' (\omega_m - \Gamma_m)} b_m \langle m | m' \rangle + \sum_{k \ne m} e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k \langle m | k' \rangle
\end{aligned} \tag{23.20} \]

Since \( i\Gamma_m = -\langle m | m' \rangle \), the first and third terms cancel, leaving us just

\[ b_m' = -\sum_{k \ne m} e^{-i \int_0^t dt' (\omega_{km} - \Gamma_{km})} b_k \langle m | k' \rangle, \tag{23.21} \]

where \( \omega_{km} = \omega_k - \omega_m \) and \( \Gamma_{km} = \Gamma_k - \Gamma_m \).

23.4 summary

We assumed that a ket for the system has a representation in the form

\[ |\psi\rangle = \sum_k e^{-i\alpha_k(t)} a_k(t) |k(t)\rangle, \tag{23.22} \]

where \( a_k(t) \) and \( \alpha_k(t) \) are given or to be determined. Application of our energy operator identity provides us with an alternate representation that simplifies the results

\[ |\psi\rangle = \sum_k e^{-i \int_0^t dt' (\omega_k - \Gamma_k)} b_k(t) |k(t)\rangle. \tag{23.23} \]

With

\[ \begin{aligned}
|m'\rangle &= \frac{d}{dt} |m\rangle \\
\Gamma_k &= i \langle k | k' \rangle \\
\omega_{km} &= \omega_k - \omega_m \\
\Gamma_{km} &= \Gamma_k - \Gamma_m
\end{aligned} \tag{23.24} \]

we find that the dynamics of the coefficients are related by

\[ b_m' = -\sum_{k \ne m} e^{-i \int_0^t dt' (\omega_{km} - \Gamma_{km})} b_k \langle m | k' \rangle. \tag{23.25} \]
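A numeric illustration of where this leads (my own sketch, with \( \hbar = 1 \)): integrate the Schrödinger equation for a slowly swept, gapped two-level Hamiltonian and watch the state track the instantaneous ground state, as the \( b_m \) equation predicts when the \( \langle m | k' \rangle \) couplings stay small.

```python
import numpy as np

# Slow sweep of a gapped two-level Hamiltonian over total time T.
T, dt = 200.0, 0.01

def H(t):
    th = (np.pi / 2) * (t / T)
    return np.array([[-np.cos(th), np.sin(th)],
                     [np.sin(th), np.cos(th)]])

psi = np.array([1.0 + 0j, 0.0])              # ground state of H(0)
t = 0.0
while t < T - 1e-9:
    # midpoint (RK2) step of i dpsi/dt = H psi
    k1 = -1j * (H(t) @ psi)
    psi = psi + dt * (-1j * (H(t + dt / 2) @ (psi + 0.5 * dt * k1)))
    t += dt

ground = np.linalg.eigh(H(T))[1][:, 0]       # instantaneous ground state at t = T
overlap = abs(np.vdot(ground, psi))**2
```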
24 A SUPER SHORT DERIVATION OF THE TIME DEPENDENT PERTURBATION RESULT
With

\[ |\psi(t)\rangle = \sum_k c_k(t) e^{-i\omega_k t} |k\rangle \tag{24.1} \]

apply the energy eigenvalue operator identity

\[ \begin{aligned}
0 &= \left( H_0 + H' - i\hbar \frac{d}{dt} \right) |\psi(t)\rangle \\
&= \left( H_0 + H' - i\hbar \frac{d}{dt} \right) \sum_k c_k e^{-i\omega_k t} |k\rangle \\
&= \sum_k e^{-i\omega_k t} \left( E_k c_k + H' c_k - i\hbar(-i\omega_k) c_k - i\hbar c_k' \right) |k\rangle,
\end{aligned} \tag{24.2} \]

where the \( E_k c_k \) term cancels against \( -i\hbar(-i\omega_k) c_k = -E_k c_k \). Bra with \( \langle m| \)

\[ \sum_k e^{-i\omega_k t} H'_{mk} c_k = i\hbar e^{-i\omega_m t} c_m', \tag{24.3} \]

or

\[ c_m' = \frac{1}{i\hbar} \sum_k e^{-i\omega_{km} t} H'_{mk} c_k \tag{24.4} \]

Now we can make the assumptions about the initial state and away we go.

25 SECOND FORM OF ADIABATIC APPROXIMATION
Motivation. In class we were shown an adiabatic approximation where we started with (or worked our way towards) a representation of the form

\[ |\psi\rangle = \sum_k c_k(t) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt'} |\psi_k(t)\rangle \tag{25.1} \]

where the \( |\psi_k(t)\rangle \) were normalized energy eigenkets for the (slowly) evolving Hamiltonian

\[ H(t) |\psi_k(t)\rangle = E_k(t) |\psi_k(t)\rangle \tag{25.2} \]

In the problem sets we were shown a different adiabatic approximation, where our starting point is

\[ |\psi(t)\rangle = \sum_k c_k(t) |\psi_k(t)\rangle. \tag{25.3} \]

For completeness, here is a walk through of the general amplitude derivation that has been used.

Guts. We operate with our energy identity once again

\[ \begin{aligned}
0 &= \left( H - i\hbar \frac{d}{dt} \right) \sum_k c_k |k\rangle \\
&= \sum_k \left( c_k E_k |k\rangle - i\hbar c_k' |k\rangle - i\hbar c_k |k'\rangle \right),
\end{aligned} \tag{25.4} \]

where

\[ |k'\rangle = \frac{d}{dt} |k\rangle. \tag{25.5} \]

Bra'ing with \( \langle m| \), and splitting the sum into \( k = m \) and \( k \ne m \) parts,

\[ 0 = c_m E_m - i\hbar c_m' - i\hbar c_m \langle m | m' \rangle - i\hbar \sum_{k \ne m} c_k \langle m | k' \rangle \tag{25.6} \]


Again writing

\[ \Gamma_m = i \langle m | m' \rangle \tag{25.7} \]

we have

\[ c_m' = \frac{1}{i\hbar} c_m (E_m - \hbar \Gamma_m) - \sum_{k \ne m} c_k \langle m | k' \rangle. \tag{25.8} \]

In this form we can make an "adiabatic" approximation, dropping the \( k \ne m \) terms, and integrate

\[ \int \frac{dc_m}{c_m} = \frac{1}{i\hbar} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \tag{25.9} \]

or

\[ c_m(t) = A \exp\left( \frac{1}{i\hbar} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \right). \tag{25.10} \]

Evaluating at \( t = 0 \) fixes the integration constant, for

\[ c_m(t) = c_m(0) \exp\left( \frac{1}{i\hbar} \int_0^t (E_m(t') - \hbar \Gamma_m(t')) dt' \right). \tag{25.11} \]

Observe that this is very close to the starting point of the adiabatic approximation we performed in class, since we end up with

\[ |\psi\rangle = \sum_k c_k(0) e^{-i \int_0^t (\omega_k(t') - \Gamma_k(t')) dt'} |k(t)\rangle. \tag{25.12} \]

So, to perform the more detailed approximation that started with eq. (25.1), where we ended up with all the cross terms that had both \( \omega_k \) and Berry phase \( \Gamma_k \) dependence, we have only to generalize by replacing \( c_k(0) \) with \( c_k(t) \).
Part V

APPENDICES
A HARMONIC OSCILLATOR REVIEW
Consider

\[ H_0 = \frac{P^2}{2m} + \frac{1}{2} m\omega^2 X^2 \tag{A.1} \]

Since it has been a while, let us compute the raising and lowering factorization that was used so extensively for this problem. It was of the form

\[ H_0 = (aX - ibP)(aX + ibP) + \cdots \tag{A.2} \]

Why this factorization has an imaginary in it is a good question. It is not one that is given any sort of rationale in the text [4].

It is clear that we want \( a = \sqrt{m/2}\,\omega \) and \( b = 1/\sqrt{2m} \). The difference is then

\[ H_0 - (aX - ibP)(aX + ibP) = -iab \, [X, P] = -i \frac{\omega}{2} [X, P] \tag{A.3} \]

That commutator is an \( i\hbar \) value, but what was the sign? Let us compute, so we do not get it wrong

\[ \begin{aligned}
[x, p] \psi &= -i\hbar [x, \partial_x] \psi \\
&= -i\hbar (x \partial_x \psi - \partial_x (x\psi)) \\
&= -i\hbar (-\psi) \\
&= i\hbar \psi
\end{aligned} \tag{A.4} \]

So we have

\[ H_0 = \left( \sqrt{\frac{m}{2}}\,\omega X - i \sqrt{\frac{1}{2m}}\,P \right) \left( \sqrt{\frac{m}{2}}\,\omega X + i \sqrt{\frac{1}{2m}}\,P \right) + \frac{\hbar\omega}{2} \tag{A.5} \]

Factoring out an \( \hbar\omega \) produces the form of the Hamiltonian that we used before

\[ H_0 = \hbar\omega \left( \left( \sqrt{\frac{m\omega}{2\hbar}}\,X - i \sqrt{\frac{1}{2m\hbar\omega}}\,P \right) \left( \sqrt{\frac{m\omega}{2\hbar}}\,X + i \sqrt{\frac{1}{2m\hbar\omega}}\,P \right) + \frac{1}{2} \right). \tag{A.6} \]


The factors were labeled the raising (\( a^\dagger \)) and lowering (\( a \)) operators respectively, and written

\[ \begin{aligned}
H_0 &= \hbar\omega \left( a^\dagger a + \frac{1}{2} \right) \\
a &= \sqrt{\frac{m\omega}{2\hbar}}\,X + i \sqrt{\frac{1}{2m\hbar\omega}}\,P \\
a^\dagger &= \sqrt{\frac{m\omega}{2\hbar}}\,X - i \sqrt{\frac{1}{2m\hbar\omega}}\,P.
\end{aligned} \tag{A.7} \]

Observe that we can find the inverse relations

\[ \begin{aligned}
X &= \sqrt{\frac{\hbar}{2m\omega}} \left( a + a^\dagger \right) \\
P &= i \sqrt{\frac{m\hbar\omega}{2}} \left( a^\dagger - a \right)
\end{aligned} \tag{A.8} \]

Question: What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\[ H_0 = \hbar\omega \left( a a^\dagger - \frac{1}{2} \right). \tag{A.9} \]

I do not know the answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since subtracting eq. (A.9) from eq. (A.7) yields

\[ \left[ a, a^\dagger \right] = 1. \tag{A.10} \]

If we suppose that we have eigenstates for the operator \( a^\dagger a \) of the form

\[ a^\dagger a |n\rangle = \lambda_n |n\rangle, \tag{A.11} \]

then the problem of finding the eigensolution of \( H_0 \) reduces to solving this problem. Because \( a^\dagger a \) commutes with \( 1/2 \), an eigenstate of \( a^\dagger a \) is also an eigenstate of \( H_0 \). Utilizing eq. (A.10) we then have

\[ \begin{aligned}
a^\dagger a (a |n\rangle) &= (a a^\dagger - 1) a |n\rangle \\
&= a (a^\dagger a - 1) |n\rangle \\
&= a (\lambda_n - 1) |n\rangle \\
&= (\lambda_n - 1) a |n\rangle,
\end{aligned} \tag{A.12} \]

so we see that \( a |n\rangle \) is an eigenstate of \( a^\dagger a \) with eigenvalue \( \lambda_n - 1 \).


Similarly for the raising operator

\[ \begin{aligned}
a^\dagger a (a^\dagger |n\rangle) &= a^\dagger (a a^\dagger) |n\rangle \\
&= a^\dagger (a^\dagger a + 1) |n\rangle \\
&= a^\dagger (\lambda_n + 1) |n\rangle,
\end{aligned} \tag{A.13} \]

and we find that \( a^\dagger |n\rangle \) is also an eigenstate of \( a^\dagger a \), with eigenvalue \( \lambda_n + 1 \).


Supposing that there is a lowest energy level (because the potential \( V(x) = m\omega^2 x^2/2 \) has a lower bound of zero), then for the state \( |0\rangle \) of lowest energy, operating with \( a \) gives

\[ a |0\rangle = 0 \tag{A.14} \]

Thus

\[ a^\dagger a |0\rangle = 0, \tag{A.15} \]

and

\[ \lambda_0 = 0. \tag{A.16} \]

This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to \( \lambda_0 \), where up to this point 0 was just a label.

If the eigenvalue equation we are trying to solve for the Hamiltonian is

\[ H_0 |n\rangle = E_n |n\rangle, \tag{A.17} \]

then we must have

\[ E_n = \hbar\omega \left( \lambda_n + \frac{1}{2} \right) = \hbar\omega \left( n + \frac{1}{2} \right) \tag{A.18} \]
B VERIFYING THE HELMHOLTZ GREEN'S FUNCTION
Motivation. In class this week, looking at an instance of the Helmholtz equation

\[ \left( \nabla^2 + k^2 \right) \psi_k(r) = s(r), \tag{B.1} \]

we were told that the Green's function satisfying

\[ \left( \nabla^2 + k^2 \right) G^0(r, r') = \delta(r - r'), \tag{B.2} \]

which can be used to find a particular solution of this differential equation via the convolution

\[ \psi_k(r) = \int G^0(r, r') s(r') d^3 r', \tag{B.3} \]

had the value

\[ G^0(r, r') = -\frac{1}{4\pi} \frac{ e^{ik|r - r'|} }{ |r - r'| }. \tag{B.4} \]

Let us try to verify this. Application of the Helmholtz differential operator \( \nabla^2 + k^2 \) to the presumed solution gives

\[ \left( \nabla^2 + k^2 \right) \psi_k(r) = -\frac{1}{4\pi} \int \left( \nabla^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r'. \tag{B.5} \]

When \( r \ne r' \). To proceed we will need to evaluate

\[ \nabla^2 \frac{ e^{ik|r - r'|} }{ |r - r'| }. \tag{B.6} \]

Writing \( \mu = |r - r'| \) we start with the computation of

\[ \begin{aligned}
\frac{\partial}{\partial x} \frac{e^{ik\mu}}{\mu} &= \frac{\partial \mu}{\partial x} \left( \frac{ik}{\mu} - \frac{1}{\mu^2} \right) e^{ik\mu} \\
&= \frac{\partial \mu}{\partial x} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu}
\end{aligned} \tag{B.7} \]


We see that we will have

\[ \nabla \frac{e^{ik\mu}}{\mu} = \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} \nabla \mu. \tag{B.8} \]

Taking second derivatives with respect to \( x \) we find

\[ \begin{aligned}
\frac{\partial^2}{\partial x^2} \frac{e^{ik\mu}}{\mu} &= \frac{\partial^2 \mu}{\partial x^2} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} + \left( \frac{\partial \mu}{\partial x} \right)^2 \frac{1}{\mu^2} \frac{e^{ik\mu}}{\mu} + \left( \frac{\partial \mu}{\partial x} \right)^2 \left( ik - \frac{1}{\mu} \right)^2 \frac{e^{ik\mu}}{\mu} \\
&= \frac{\partial^2 \mu}{\partial x^2} \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} + \left( \frac{\partial \mu}{\partial x} \right)^2 \left( -k^2 - \frac{2ik}{\mu} + \frac{2}{\mu^2} \right) \frac{e^{ik\mu}}{\mu}.
\end{aligned} \tag{B.9} \]

Our Laplacian is then

\[ \nabla^2 \frac{e^{ik\mu}}{\mu} = \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} \nabla^2 \mu + \left( -k^2 - \frac{2ik}{\mu} + \frac{2}{\mu^2} \right) \frac{e^{ik\mu}}{\mu} (\nabla \mu)^2. \tag{B.10} \]
Now let us calculate the derivatives of \( \mu \). Working on \( x \) again, we have

\[ \begin{aligned}
\frac{\partial \mu}{\partial x} &= \frac{\partial}{\partial x} \sqrt{ (x - x')^2 + (y - y')^2 + (z - z')^2 } \\
&= \frac{1}{2} \, 2(x - x') \frac{1}{ \sqrt{ (x - x')^2 + (y - y')^2 + (z - z')^2 } } \\
&= \frac{x - x'}{\mu}.
\end{aligned} \tag{B.11} \]

So we have

\[ \begin{aligned}
\nabla \mu &= \frac{r - r'}{\mu} \\
(\nabla \mu)^2 &= 1
\end{aligned} \tag{B.12} \]

Taking second derivatives with respect to \( x \) we find

\[ \begin{aligned}
\frac{\partial^2 \mu}{\partial x^2} &= \frac{\partial}{\partial x} \frac{x - x'}{\mu} \\
&= \frac{1}{\mu} - (x - x') \frac{1}{\mu^2} \frac{\partial \mu}{\partial x} \\
&= \frac{1}{\mu} - (x - x') \frac{x - x'}{\mu} \frac{1}{\mu^2} \\
&= \frac{1}{\mu} - (x - x')^2 \frac{1}{\mu^3}.
\end{aligned} \tag{B.13} \]

So we find

\[ \nabla^2 \mu = \frac{3}{\mu} - \frac{1}{\mu}, \tag{B.14} \]

or

\[ \nabla^2 \mu = \frac{2}{\mu}. \tag{B.15} \]

Inserting this and \( (\nabla \mu)^2 \) into eq. (B.10) we find

\[ \nabla^2 \frac{e^{ik\mu}}{\mu} = \left( ik - \frac{1}{\mu} \right) \frac{e^{ik\mu}}{\mu} \frac{2}{\mu} + \left( -k^2 - \frac{2ik}{\mu} + \frac{2}{\mu^2} \right) \frac{e^{ik\mu}}{\mu} = -k^2 \frac{e^{ik\mu}}{\mu} \tag{B.16} \]

This shows us that provided \( r \ne r' \) we have

\[ \left( \nabla^2 + k^2 \right) G^0(r, r') = 0. \tag{B.17} \]

In the neighborhood of \( |r - r'| < \epsilon \). Having shown that we end up with zero everywhere that \( r \ne r' \), we are left to consider a neighborhood of the volume surrounding the point \( r \) in our integral. Following the Coulomb treatment in §2.2 of [11], we use a spherical volume element centered around \( r \) of radius \( \epsilon \), and then convert a divergence to a surface integral to evaluate the integral away from the problematic point

\[ -\frac{1}{4\pi} \int_{\text{all space}} \left( \nabla^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r' = -\frac{1}{4\pi} \int_{|r - r'| < \epsilon} \left( \nabla^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r' \tag{B.18} \]

We make the change of variables \( r' = r + a \). We add an explicit \( r \) suffix to our Laplacian at the same time, to remind us that it is taking derivatives with respect to the coordinates of \( r = (x, y, z) \), and not the coordinates of our integration variable \( a = (a_x, a_y, a_z) \). Assuming sufficient continuity and "well behavedness" of \( s(r') \), we will be able to pull it out of the integral, giving

\[ \begin{aligned}
-\frac{1}{4\pi} \int_{|r - r'| < \epsilon} \left( \nabla_r^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r' &= -\frac{1}{4\pi} \int_{|a| < \epsilon} \left( \nabla_r^2 + k^2 \right) \frac{ e^{ik|a|} }{ |a| } s(r + a) d^3 a \\
&= -\frac{s(r)}{4\pi} \int_{|a| < \epsilon} \left( \nabla_r^2 + k^2 \right) \frac{ e^{ik|a|} }{ |a| } d^3 a
\end{aligned} \tag{B.19} \]

Recalling the dependencies on the derivatives of \( |r - r'| \) in our previous gradient evaluations, we note that we have

\[ \begin{aligned}
\nabla_r |r - r'| &= -\nabla_a |a| \\
\left( \nabla_r |r - r'| \right)^2 &= \left( \nabla_a |a| \right)^2 \\
\nabla_r^2 |r - r'| &= \nabla_a^2 |a|,
\end{aligned} \tag{B.20} \]

so with \( a = r - r' \), we can rewrite our Laplacian as

\[ \nabla_r^2 \frac{ e^{ik|r - r'|} }{ |r - r'| } = \nabla_a^2 \frac{ e^{ik|a|} }{ |a| } = \nabla_a \cdot \left( \nabla_a \frac{ e^{ik|a|} }{ |a| } \right) \tag{B.21} \]

This gives us

\[ \begin{aligned}
-\frac{s(r)}{4\pi} \int_{|a| < \epsilon} \left( \nabla_a^2 + k^2 \right) \frac{ e^{ik|a|} }{ |a| } d^3 a &= -\frac{s(r)}{4\pi} \int_{dV} \nabla_a \cdot \left( \nabla_a \frac{ e^{ik|a|} }{ |a| } \right) d^3 a - \frac{s(r)}{4\pi} k^2 \int_{dV} \frac{ e^{ik|a|} }{ |a| } d^3 a \\
&= -\frac{s(r)}{4\pi} \int_{dA} \left( \nabla_a \frac{ e^{ik|a|} }{ |a| } \right) \cdot \hat{a} \, d^2 a - \frac{s(r)}{4\pi} k^2 \int_{dV} \frac{ e^{ik|a|} }{ |a| } d^3 a
\end{aligned} \tag{B.22} \]

To complete these evaluations, we can now employ a spherical coordinate change of variables. Let us do the \( k^2 \) volume integral first. We have

\[ \begin{aligned}
k^2 \int_{dV} \frac{ e^{ik|a|} }{ |a| } d^3 a &= k^2 \int_{a=0}^{\epsilon} \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \frac{e^{ika}}{a} a^2 \, da \sin\theta \, d\theta \, d\phi \\
&= 4\pi k^2 \int_{a=0}^{\epsilon} a e^{ika} da \\
&= 4\pi \int_{u=0}^{k\epsilon} u e^{iu} du \\
&= 4\pi \left. (-iu + 1) e^{iu} \right|_0^{k\epsilon} \\
&= 4\pi \left( (-ik\epsilon + 1) e^{ik\epsilon} - 1 \right)
\end{aligned} \tag{B.23} \]

To evaluate the surface integral we note that we will require only the radial portion of the gradient, so we have

\[ \begin{aligned}
\left( \nabla_a \frac{ e^{ik|a|} }{ |a| } \right) \cdot \hat{a} &= \left( \hat{a} \frac{\partial}{\partial a} \frac{e^{ika}}{a} \right) \cdot \hat{a} \\
&= \frac{\partial}{\partial a} \frac{e^{ika}}{a} \\
&= \left( \frac{ik}{a} - \frac{1}{a^2} \right) e^{ika} \\
&= (ika - 1) \frac{e^{ika}}{a^2}
\end{aligned} \tag{B.24} \]

Our area element is \( a^2 \sin\theta \, d\theta \, d\phi \), so we are left with

\[ \begin{aligned}
\int_{dA} \left( \nabla_a \frac{ e^{ik|a|} }{ |a| } \right) \cdot \hat{a} \, d^2 a &= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \left. (ika - 1) \frac{e^{ika}}{a^2} a^2 \sin\theta \, d\theta \, d\phi \right|_{a=\epsilon} \\
&= 4\pi (ik\epsilon - 1) e^{ik\epsilon}
\end{aligned} \tag{B.25} \]

Putting everything back together we have

\[ \begin{aligned}
-\frac{1}{4\pi} \int_{\text{all space}} \left( \nabla^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r' &= -s(r) \left( (-ik\epsilon + 1) e^{ik\epsilon} - 1 + (ik\epsilon - 1) e^{ik\epsilon} \right) \\
&= -s(r) \left( (-ik\epsilon + 1 + ik\epsilon - 1) e^{ik\epsilon} - 1 \right)
\end{aligned} \tag{B.26} \]

But this is just

\[ -\frac{1}{4\pi} \int_{\text{all space}} \left( \nabla^2 + k^2 \right) \frac{ e^{ik|r - r'|} }{ |r - r'| } s(r') d^3 r' = s(r). \tag{B.27} \]

This completes the desired verification of the Green's function for the Helmholtz operator. Observe the perfect cancellation here, so the limit of \( \epsilon \rightarrow 0 \) can be independent of how large \( k \) is made. You have to complete the integrals for both the Laplacian and the \( k^2 \) portions of the integrals and add them before taking any limits, or else you will get into trouble (as I did in my first attempt).
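The homogeneous result (B.17) also admits a quick finite difference spot check away from the origin (a numeric sketch of mine): for the radially symmetric \( f(r) = e^{ikr}/r \), the radial Laplacian \( (1/r)\, d^2(rf)/dr^2 \) should equal \( -k^2 f \).

```python
import numpy as np

# Radial check that (del^2 + k^2) e^{ikr}/r = 0 for r > 0.
k = 3.0
r = np.linspace(0.5, 5.0, 20001)
h = r[1] - r[0]
f = np.exp(1j * k * r) / r

g = r * f                                      # g(r) = e^{ikr}, so g'' = -k^2 g
lap = (g[2:] - 2 * g[1:-1] + g[:-2]) / h**2 / r[1:-1]
residual = np.max(np.abs(lap + k**2 * f[1:-1]))
```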
C EVALUATING THE SQUARED SINC INTEGRAL
In the Fermi’s golden rule lecture we used the result for the integral of the squared sinc function.
Here is a reminder of the contours required to perform this integral.
We want to evaluate

\[ \int_{-\infty}^{\infty} \frac{ \sin^2(x|\mu|) }{ x^2 } dx \tag{C.1} \]

We make a few changes of variables

\[ \begin{aligned}
\int_{-\infty}^{\infty} \frac{ \sin^2(x|\mu|) }{ x^2 } dx &= |\mu| \int_{-\infty}^{\infty} \frac{ \sin^2(y) }{ y^2 } dy \\
&= -i|\mu| \int_{-\infty}^{\infty} \frac{ (e^{iy} - e^{-iy})^2 }{ (2iy)^2 } i \, dy \\
&= -\frac{i|\mu|}{4} \int_{-i\infty}^{i\infty} \frac{ e^{2z} + e^{-2z} - 2 }{ z^2 } dz
\end{aligned} \tag{C.2} \]
Now we pick a contour that is distorted to one side of the origin as in fig. C.1

Figure C.1: Contour distorted to one side of the double pole at the origin


We employ Jordan's theorem (§8.12 [9]) now to pick the contours for each of the integrals, since we need to ensure the \( e^{\pm 2z} \) terms converge as \( R \rightarrow \infty \) on the \( z = Re^{i\theta} \) part of the contour. We can write

\[ \int_{-\infty}^{\infty} \frac{ \sin^2(x|\mu|) }{ x^2 } dx = -\frac{i|\mu|}{4} \left( \int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz + \int_{C_0 + C_1} \frac{e^{-2z}}{z^2} dz - 2 \int_{C_0 + C_1} \frac{dz}{z^2} \right) \tag{C.3} \]

The second two integrals both surround no poles, so we have only the first to deal with

\[ \int_{C_0 + C_2} \frac{e^{2z}}{z^2} dz = 2\pi i \, \frac{1}{1!} \left. \frac{d}{dz} e^{2z} \right|_{z=0} = 4\pi i \tag{C.4} \]

Putting everything back together we have

\[ \int_{-\infty}^{\infty} \frac{ \sin^2(x|\mu|) }{ x^2 } dx = -\frac{i|\mu|}{4} 4\pi i = \pi|\mu| \tag{C.5} \]
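This result is easy to confirm numerically (a quick sketch of mine); on a finite interval the only appreciable error comes from the slowly decaying \( 1/x^2 \) tails.

```python
import numpy as np

# Numeric check that the squared sinc integral equals pi * |mu|.
mu = 1.5
x = np.linspace(-2000.0, 2000.0, 2000001)
dx = x[1] - x[0]
# sin^2(mu x)/x^2 has the finite limit mu^2 at x = 0
integrand = np.where(x == 0.0, mu**2, np.sin(mu * x)**2 / np.maximum(x**2, 1e-300))
value = np.sum(integrand) * dx                 # truncated tails cost about 1/2000
```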

On the cavalier choice of contours. The choice of which contours to pick above may seem pretty arbitrary, but it is for good reason. Suppose you picked \( C_0 + C_1 \) for the first integral. On the big \( C_1 \) arc, with a \( z = Re^{i\theta} \) substitution, we have

\[ \begin{aligned}
\left| \int_{C_1} \frac{e^{2z}}{z^2} dz \right| &= \left| \int_{\theta=\pi/2}^{-\pi/2} \frac{ e^{2R(\cos\theta + i\sin\theta)} }{ R^2 e^{2i\theta} } R i e^{i\theta} d\theta \right| \\
&= \frac{1}{R} \left| \int_{\theta=\pi/2}^{-\pi/2} e^{2R(\cos\theta + i\sin\theta)} e^{-i\theta} d\theta \right| \\
&\le \frac{1}{R} \int_{\theta=-\pi/2}^{\pi/2} e^{2R\cos\theta} d\theta \\
&\approx \frac{\pi e^{2R}}{R}
\end{aligned} \tag{C.6} \]

This clearly does not have the zero convergence property that we desire. We need to pick the \( C_2 \) contour for the first (positive exponent) integral, since in that \( [\pi/2, 3\pi/2] \) range \( \cos\theta \) is always negative. We can however use the \( C_1 \) contour for the second (negative exponent) integral. Explicitly, again by example, using the \( C_2 \) contour for the first integral, over that portion of the arc we have

\[ \begin{aligned}
\left| \int_{C_2} \frac{e^{2z}}{z^2} dz \right| &= \left| \int_{\theta=\pi/2}^{3\pi/2} \frac{ e^{2R(\cos\theta + i\sin\theta)} }{ R^2 e^{2i\theta} } R i e^{i\theta} d\theta \right| \\
&= \frac{1}{R} \left| \int_{\theta=\pi/2}^{3\pi/2} e^{2R(\cos\theta + i\sin\theta)} e^{-i\theta} d\theta \right| \\
&\le \frac{1}{R} \int_{\theta=\pi/2}^{3\pi/2} e^{2R\cos\theta} d\theta \\
&\approx \frac{1}{R} \int_{\theta=\pi/2}^{3\pi/2} e^{-2R} d\theta \\
&= \frac{\pi e^{-2R}}{R}
\end{aligned} \tag{C.7} \]
D DERIVATIVE RECURRENCE RELATION FOR HERMITE POLYNOMIALS
For a QM problem I had need of a recurrence relation for Hermite polynomials. I found it in [1], but thought I would try to derive the relation myself.
The starting point I will use is the Rodrigues' formula definition of the Hermite polynomials

\[ H_n = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2} \tag{D.1} \]

Let us write \( D = d/dx \), and take the derivative of \( H_n \)

\[ \begin{aligned}
(-1)^n D H_n &= D \left( e^{x^2} D^n e^{-x^2} \right) \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} D^n \left( -2x e^{-x^2} \right) \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} \sum_{k=0}^{n} \binom{n}{k} \left( D^k (-2x) \right) D^{n-k} e^{-x^2} \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} \sum_{k=0}^{1} \binom{n}{k} \left( D^k (-2x) \right) D^{n-k} e^{-x^2} \\
&= 2x e^{x^2} D^n e^{-x^2} + e^{x^2} \left( -2x D^n e^{-x^2} - 2n D^{n-1} e^{-x^2} \right) \\
&= -2n e^{x^2} D^{n-1} e^{-x^2}
\end{aligned} \tag{D.2} \]

So we have the rather simple end result

\[ \frac{d}{dx} H_n(x) = 2n H_{n-1}(x). \tag{D.3} \]
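This checks out against numpy's physicists' Hermite routines (a quick numeric sketch of mine):

```python
import numpy as np
from numpy.polynomial import hermite as herm

# Verify H_n'(x) = 2 n H_{n-1}(x) on a few sample points.
x = np.linspace(-2.0, 2.0, 9)
errs = []
for n in range(1, 6):
    cn = np.zeros(n + 1)
    cn[n] = 1.0                                 # coefficient vector of H_n
    lhs = herm.hermval(x, herm.hermder(cn))     # d/dx H_n(x)
    cm = np.zeros(n)
    cm[n - 1] = 1.0                             # coefficient vector of H_{n-1}
    rhs = 2 * n * herm.hermval(x, cm)
    errs.append(np.max(np.abs(lhs - rhs)))
```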

E MATHEMATICA NOTEBOOKS
These Mathematica notebooks, some just trivial ones used to generate figures, others more
elaborate, and perhaps some even polished, can be found in
https://raw.github.com/peeterjoot/mathematica/master/.
The free Wolfram CDF player is capable of read-only viewing of these notebooks to some extent.
Files saved explicitly as CDF have interactive content that can be explored with the CDF
player.

• Sep 15, 2011 phy456/desai_S_24_2_1_verify.nb


Some integrals related to QM hydrogen atom energy expectation values.

• Sep 24, 2011 phy456/problem_set_2,_problem_2,_verify_wavefunction_normalization.nb


Some trig integrals that I didn’t feel like doing manually.

• Sep 24, 2011 phy456/exponential_integrals.nb


More gaussian integrals and some that Mathematica didn’t know how to do.

• Sep 28, 2011 phy456/problem_set_3_integrals.nb


Some Gaussian integrals.

• Oct 2, 2011 phy456/24.4.3_attempt_with_mathematica.nb


Some variational method calculations for QM energy estimation.

• Oct 5, 2011 phy456/gaussian_fitting_for_abs_function.nb


Hankel function fitting for e^{-b|x|} and related plots.

• Oct 6, 2011 phy456/stack_overflow_question_mathematica_exponential_Nth_derivative_treated_as_an_unknown_


Stripped down example notebook for stackoverflow question about Derivative[2] not be-
having well.

• Oct 6, 2011 phy456/stackoverflow_question_about_listable.nb


Stripped down example notebook for stackoverflow question about Listable attribute de-
faults.


• Oct 8, 2011 phy456/qmTwoL8figures.nb


Plot of gaussian weighted cosine, its Fourier transform, and figure for perturbation of
Harmonic oscillator system.

• Oct 9, 2011 phy456/qmTwoL9figures.nb


Sinusoid plot turned on at t_0 and ongoing from there.

• Oct 10, 2011 phy456/problem_set_4,_problem_2.nb


Some trig integrals that Mathematica didn’t evaluate correctly. Don’t trust a tool without
thinking whether the results are good!

• Oct 15, 2011 phy456/desai_24_4_4.nb


Another worked variational method problem.

• Oct 15, 2011 phy456/desai_24_4_5.nb


Another worked variational method problem.

• Oct 15, 2011 phy456/desai_24_4_6.nb


Another worked variational method problem. Looks like I’ve learned about the /. operator
for evaluating variables with values.

• Oct 15, 2011 phy456/qmTwoL10figures.nb


Some sinc function plots. Learned how to use Manipulate to make sliders.

• Oct 16, 2011 phy456/desai_attempt_to_verify_section_16.3.nb


Some energy expectation value calculations.

• Oct 17, 2011 phy456/qmTwoL11figures.nb


Some vector addition and function translation figures.

• Oct 18, 2011 phy456/problem_set_5_integrals.nb


Some integrals of first order linear polynomials.

• Oct 19, 2011 phy456/qmTwoL12_figures.nb


Some step and rect function plots.

• Oct 31, 2011 phy456/plot_question.nb


Another stackoverflow mathematica question. Why no output in my plot. Learned about
Mathematica local and global variables as a result.

• Oct 31, 2011 phy456/problem_set_7_verify_rotation_matrix_orthonormal.nb


A sanity check on a rotation matrix calculated as part of a problem set.

• Dec 17, 2011 phy456/qmTwoExamReflection.cdf


Exam problem 2a. Calculate the matrix of a Perturbation Hamiltonian −µd · E with re-
spect to the n = 2 hydrogen atom wave functions.

• March 28, 2013 phy456/24.4.3.newAttempt.nb


A new attempt at Desai 24.4.3 from scratch. This one has an error, as did the original. The
original is now fixed.
Part VI

BIBLIOGRAPHY

[1] M. Abramowitz and I.A. Stegun. Handbook of mathematical functions with formulas,
graphs, and mathematical tables, volume 55. Dover publications, 1964. (Cited on
page 287.)

[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989. (Cited on pages 92
and 241.)

[3] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover
Publications, 1992. (Cited on page 220.)

[4] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.
(Cited on pages xi, 5, 23, 24, 26, 37, 40, 47, 54, 57, 59, 70, 84, 87, 95, 99, 107, 119, 125,
133, 137, 139, 147, 150, 153, 157, 161, 171, 173, 178, 185, 193, 199, 213, 219, 233, 243,
257, 259, and 273.)

[5] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University
Press New York, Cambridge, UK, 1st edition, 2003. (Cited on pages 69 and 149.)

[6] D.J. Griffiths. Introduction to quantum mechanics, volume 1. Pearson Prentice Hall, 2005.
(Cited on page 107.)

[7] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers,
1999. (Cited on pages 66 and 149.)

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975. (Cited
on page 208.)

[9] W.R. Le Page and W.R. LePage. Complex Variables and the Laplace Transform for Engi-
neers. Courier Dover Publications, 1980. (Cited on page 284.)

[10] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one.
Dover Publications New York, 1999. (Cited on pages 45 and 217.)

[11] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987. (Cited on


page 279.)

[12] JR Taylor. Scattering Theory: the Quantum Theory of Nonrelativistic Scattering, volume 1.
1972. (Cited on page 217.)


[13] F.G. Tricomi. Integral equations. Dover Pubns, 1985. (Cited on page 70.)

[14] Wikipedia. Adiabatic theorem — Wikipedia, The Free Encyclopedia, 2011. URL http://en.wikipedia.org/w/index.php?title=Adiabatic_theorem&oldid=447547448. [Online; accessed 9-October-2011]. (Cited on page 84.)

[15] Wikipedia. Geometric phase — Wikipedia, The Free Encyclopedia, 2011. URL http://en.wikipedia.org/w/index.php?title=Geometric_phase&oldid=430834614. [Online; accessed 9-October-2011]. (Cited on page 90.)

[16] Wikipedia. Bessel function — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Bessel_function&oldid=461096228. [On-
line; accessed 4-December-2011]. (Cited on page 207.)

[17] Wikipedia. Spin-orbit interaction — Wikipedia, The Free Encyclopedia, 2011. URL http://en.wikipedia.org/w/index.php?title=Spin%E2%80%93orbit_interaction&oldid=451606718. [Online; accessed 2-November-2011]. (Cited on page 155.)

[18] Wikipedia. Steradian — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Steradian&oldid=462086182. [Online; ac-
cessed 4-December-2011]. (Cited on page 207.)

[19] Wikipedia. WKB approximation — Wikipedia, The Free Encyclopedia, 2011. URL
http://en.wikipedia.org/w/index.php?title=WKB_approximation&oldid=
453833635. [Online; accessed 19-October-2011]. (Cited on page 112.)

[20] Wikipedia. Zeeman effect — Wikipedia, The Free Encyclopedia, 2011. URL http://en.
wikipedia.org/w/index.php?title=Zeeman_effect&oldid=450367887. [Online;
accessed 15-September-2011]. (Cited on page 123.)
