Quantum
Notes by: Dave Kaplan, transcribed to LaTeX by: Matthew S. Norton
5 Fifth Class: What Would Be a Valid State Function For a Free Particle?  30
5.1 Some Experimentation  30
5.2 Partially Localized States  32
5.3 A Single Moving Group − “Wave Packet”  34
5.4 Some Problems  36
5.5 Toward a Governing Equation  37
7 Seventh Class: Time Dependence Through the Governing Equation  49
7.1 A Review of the Governing Equation  49
7.2 Persistence of Normalization  52
7.3 The Reason for the Persistence of the Normalization  53
7.4 Determinism in Quantum Mechanics − A Comment  55
12.4 Reality of ⟨p⟩  103
12.5 Hermitian Operators  104
12.6 Normalization of Free-Particle Definite Momentum States  105
12.6.1 “Box Normalization”  106
12.6.2 “Dirac Normalization” of Free-Particle Definite Momentum State  106
12.7 Normalization of a State of Definite Position  107
12.8 Completeness of Free-Particle Definite Momentum States  107
19.2.1 “High and Wide Barrier”  165
19.2.2 “Thin Barrier”  165
19.3 “Arbitrary” Shaped Barrier (E < V0)  166
25.3 The Classical Limit in the Harmonic Potential − A First Look  215
25.4 Behavior of Harmonic Oscillator Superposition States  216
25.4.1 Motion of Gaussian Wave Packet in Harmonic Potential  217
25.5 The Time Dependence of Expectation Values  220
25.6 When Does Classical Mechanics Apply?  225
25.6.1 Exceptions  225
25.7 Another Example of an Operator − Angular Momentum  226
Chapter 1
1.1 Introduction
hot enough? Recall that, for independent particles in thermal equilibrium,
(1/2)mv² = (3/2)kB T    (1.4)
where kB ≡ “Boltzmann’s Constant” = 1.381×10⁻²³ J/K = 8.617×10⁻⁵ eV/K
(recall that 1 eV = 1.6 × 10⁻¹⁹ J).
Now, the plasma at the center of the sun is roughly, we shall say, “like”
an ideal gas of protons, neutrons, and electrons. Consider a proton in this
plasma. For it,
(1/2)mv² ≈ (8.7×10⁻⁵ eV/K)(1.6×10⁷ K) ∼ 1.4×10³ eV ≪ 1 MeV    (1.5)
Conclusion: According to classical physics, even in the core of the sun, it
is too cold for nuclear fusion to occur; therefore, the sun should not shine!
The protons should not penetrate the Coulomb barrier! Yet, according to
quantum mechanics, particles can “appear” in forbidden regions − in fact,
we use the theory to predict exactly the rate of this “forbidden appearance”,
and from this, the rate of the p-p cycle. From this, we can predict the mean
power output of the sun (∼ 3.8 × 1026 W ), and from this, the mean expected
temperature of the earth − and it works well!
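To see the scale of the mismatch numerically, here is a quick check of the estimate above (a minimal sketch; the core temperature and the MeV-scale Coulomb barrier are the rough values quoted in the text):

```python
# Thermal energy per particle at the assumed solar-core temperature.
k_B = 8.617e-5          # Boltzmann constant, eV/K
T   = 1.6e7             # assumed core temperature, K

print(f"k_B T        ~ {k_B*T:.2e} eV")        # ~1.4e3 eV, as quoted above
print(f"(3/2) k_B T  ~ {1.5*k_B*T:.2e} eV")    # same order of magnitude
# Either way this is ~keV, while the proton-proton Coulomb barrier is ~1 MeV,
# so classically the core is far "too cold" for fusion.
```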
Here is another example. Suppose we throw an eraser; it goes from “here”
to “there” on a parabolic path.
fig2
You say that this follows Newton’s law, but where is this from? Suppose
a particle is observed to be at x1 at time t1 , and later at x2 at time t2 . How
does it get from (x1 , t1 ) to (x2 , t2 )?
According to orthodox quantum mechanics: Between the observa-
tions, the particle does not possess all the attributes of reality. In particular,
it does not possess the attribute of position. Thus, there is no definite tra-
jectory. Rather, “in potentia”, all possible trajectories, even very wild ones,
contribute to the result of the second observation.
fig3
As we will see, these trajectories interfere with each other in potentia. As
we will also see, in the limit of macroscopic mass, the interference is destruc-
tive for all paths except those in a narrow band around the classical parabolic
path. In this band, the “superposition of possibilities” is constructive.
fig4
Other examples
1. Light incident on glass
2. Photon counting in the lab
Clearly, an understanding of interference is important before embarking on
quantum mechanics. We turn to a quick review of this next. For
interest, we place this in a historical context.
The next ingenious idea was to attempt to effectively isolate only “two”
spherical waves that would then be allowed to interfere with each other. It is,
presumably, much easier to keep track of the interference of two waves than
of an infinite number of waves. The idea then is to force the two waves to
combine out of phase with each other by different amounts at different points
on a viewing screen, thus causing the brightness of illumination along the
screen to vary. To accomplish this, he used a setup that forced the two waves
to start out in phase, but to meet out of phase due to an imposed difference
in the distance traveled.
fig6
Let’s now fix these ideas semi-quantitatively
fig7
For P to be a max, the path-length difference s1b must be a whole number of wavelengths:
d sin(θ) = mλ,   or   sin(θ) = mλ/d,   m = 1, 2, . . .    (1.6)
For P to be a min: d sin(θ) = (m + 1/2)λ,   m = 0, 1, 2, . . .    (1.7)
So Young, knowing d and θ, could find λ, thus proving that light is a wave!
But no one believed him.
Roughly how big is the wavelength of light? Example: green light
• Slit spacing: d ≈ 0.1mm
• Distance to screen: D ≈ 20cm
• First minimum found at 0.16◦
sin(θ) = (λ/2)/d = λ/(2d)    (1.8)
Therefore, λ = 5460Å. World’s first (of course) determination of wavelength
of light − now you see how we know it! (micro-world property − amazing
for 1802!)
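A quick numerical check of Young’s determination, using the numbers quoted above (a sketch; the small difference from 5460 Å comes from rounding the angle):

```python
import math

d     = 0.1e-3                 # slit spacing, m
theta = math.radians(0.16)     # angle of the first minimum

lam = 2 * d * math.sin(theta)  # from sin(theta) = lambda/(2d)
print(f"lambda ~ {lam*1e10:.0f} Angstrom")   # a few thousand Angstrom: green light
```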
1.3 Two-slit Interference: Quantitative Calculation of the Intensity Distribution at the Screen
S⃗ = (1/µ₀) E⃗ × B⃗    (1.12)
For an electromagnetic wave in free space, B⃗ ⊥ E⃗ and B = E/c, so
S = E²/(µ₀c)    (1.13)
Now S varies in time through its factor sin²(ωt + α), which oscillates
at the frequency ω − some 10¹⁵ Hz − far too fast for the eye to follow. So,
what we need is the time average over the resolution time of the eye. Since
the variation is simple-harmonic, all cycles are the same, so it suffices to
average over one cycle. Thus, the intensity at point P on the screen is
IP = (1/(µ₀c)) · 4E0² cos²(φ/2) ⟨sin²(ωt + φ)⟩_t    (1.14)
We drop the 1/(µ₀c) since we will be taking ratios for which it cancels. Now
⟨sin²(ωt + φ)⟩_t = 1/2, so
IP = 4E0² · (1/2) cos²(φ/2)    (1.15)
Now again, φ = φ(θ). How? To find out, set up the ratio
φ/(2π) = p.l.d./λ  ⇒  φ/(2π) = d sin(θ)/λ  ⇒  φ(θ) = 2πd sin(θ)/λ    (1.16)
thus, in its full glory (except for the suppressed factor 1/(µ₀c)),
IP = (1/2) · 4E0² cos²(πd sin(θ)/λ).    (1.17)
It is often of interest to compare the result for IP when both slits
are open to the intensity at P when only one slit is open and the other is blocked.
For this purpose, define I0(θ) to be the intensity at P(θ) with only one slit
open and the other blocked. Then
I0(θ) = E0² ⟨sin²(ωt)⟩_t = (1/2)E0²,    (1.18)
thus, the intensity with both slits open is
IP = 4I0 cos²(φ/2)    (1.19)
This is interesting and very important. Let’s plot it:
fig8
As we said, this is very important: if I0 is the intensity with just one slit
open, note that we do not get 2I0 = I0 + I0 (as you might expect)1 by
opening both slits. Rather, I(θ) varies between 0 and 4I0 as θ changes. It’s
as if 1 + 1 = 4 or 1 + 1 = 0!
1
You do get this if you use a flashlight instead of a laser as a source. Why is this?
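Here is a minimal numerical sketch of equation (1.19), showing that the two-slit intensity really does swing between 0 and 4I0 rather than sitting at 2I0 (the wavelength and slit spacing below are illustrative assumptions, not values from the text):

```python
import numpy as np

lam = 550e-9      # assumed wavelength, m
d   = 0.1e-3      # assumed slit spacing, m

theta = np.linspace(-0.01, 0.01, 2001)                  # small angles, radians
I_over_I0 = 4 * np.cos(np.pi * d * np.sin(theta) / lam)**2

print(f"I/I0 ranges from {I_over_I0.min():.2f} to {I_over_I0.max():.2f}")  # 0 to 4
```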
Chapter 2
2.1 Introduction
The wave picture of light, coupled with the interference idea, already leads
us to conclusions like the possibilities of 1 + 1 = 4 and 1 + 1 = 0, which
may seem natural in the wave picture, but which become very strange in the
photon picture: it is the job of quantum mechanics to explain this.
There are other really amazing things. These occur with nearly monochro-
matic light (which is freed from the mixings and jumblings of everyday light).
The laser is a good source of such light.
Suppose, for example, you illuminate a ball bearing with a laser.
With “true light”, right at the center of the shadow, where you expect things
to be the darkest (the umbra of the shadow with ordinary light), there
is a bright spot! This is evidence that true light does not “travel in straight
lines”, as we are led to believe! Some must have curved to get to the center!
fig1-5
These results are rather amazing! We do not expect them from geometrical
optics. Maybe you don’t believe them. That’s o.k., you’ll do it in lab
sometime. Return to the shadow of the ball bearing, and note the bright spot
at the center − certainly weird. In 1818, Fresnel, a young Frenchman (29
years old), presented an essay to the Paris academy. The judging committee
included many of the prominent physicists of the day, including Poisson, Arago,
Laplace, and Biot. Poisson, an advocate of the corpuscular theory, worked
out Fresnel’s results and found the bright spot. This resulted in the corpuscular
theory dying in one day; it was not revived again until the 20th
century.
Now back to the “expanding” behavior of light. We can get a first inkling
from the following example. As a definite case, consider monochromatic
plane waves incident on a single aperture of width a in one dimension (and
width a in the perpendicular direction, which is in and out of the page in
figure 2.6). This figure shows what we call diffraction, which is really just
the interference of many circular waves according to Huygens’s principle.
fig6
Semi-quantitative Consideration
We could calculate the diffraction pattern quantitatively by summing (or
integrating) over the infinite number of infinitesimally spaced points of emission
along the slit. However, before we get involved in detailed mathematics,
let us see if we can figure out what we expect the answer to look like (roughly
at least) in advance. Where possible, this procedure is always a good idea.
fig7
The point Q at the center will clearly be bright (Why?). Now consider a
point P such that the following happens
fig8
Each point on the top half is canceled by a point in the bottom half; therefore,
we have found a minimum. So, a sin(θ) = mλ gives minima; something
special happens at these angles. Now consider an angle θ such that the following is true
fig9
I cancels II, and the light reaching the screen at P is effectively 1/3 of the
total light from the slit. Therefore, we expect a maximum there1, but not as
bright as the central maximum. By this kind of preliminary logic, we expect
an intensity pattern that looks something like this:
fig10
1
At least roughly midway between the first and second minima. It turns out that, in fact, the maxima are not
exactly given by a sin(θ) = (m + 1/2)λ. This will be addressed in the homework.
We now attempt to consider this more quantitatively. The paradigm problem:
A plane wave is incident on a small2 aperture.
fig11
We want to calculate the result at P. To model the problem, try breaking
the aperture into an infinite number of infinitesimal “subapertures”; each
contributes a phasor of infinitesimal amplitude at P. So we must “add up”
(or integrate) the contributions from all of the little “subslits”. This means
that we have to “add up” a very large number of harmonically varying electric
fields (each with a different phase) at P. Although technically this is an
integral, E(P) ∝ ∫ dy sin(ωt + φ(y)), we will develop a neat picture method
instead
fig12
so,
ER = 2E0 cos(φ/2)    (2.1)
So
EP = E0 sin(ωt) + E0 sin(ωt + φ) = 2E0 cos(φ/2) sin(ωt + φ/2)
just as we had figured earlier by trigonometric identity. The great advantage
of this technique is that it is easily extended to the addition of many har-
monic oscillations of the same frequency − this would be very painful by trig
identity, but if all else fails, with this method, you could do it graphically. It
was invented by M Co?????, circa 1880.
fig 15
For our “single slit” problem, we will have, essentially, to add together an
infinite number of phasors, which is tantamount to doing an integral “by
pictures”. In the following, we’ll sketch how this works.
Now let us look at our “single slit” problem at the next level. We use the
phasor method. An outline of this follows:
• Divide slit up into many “elements”
• Adjacent elements have phase difference ∆φ
where,
∆φ/(2π) = p.l.d./λ  ⇒  ∆φ = (2π/λ) ∆x sin(θ)    (2.2)
fig16
Now let’s look at the situation in the θ = 0 direction. There, all path lengths
are the same, so all the phasors point the same way and add up to something
big − a very bright max of intensity.
fig17
Now we move P slightly up (or down) the screen. Then each successive ray
has a slight path length difference from the previous one, so each successive
phasor at P is inclined at a slightly greater angle. Thus, the resultant (Eθ in
figure 18) is not as big as when all the phasors point in the same direction.
Figure 2.1 (fig. 21): The intensity as a function of θ at point P.
fig18
And if θ increases further, so does ∆φ − until we reach a point where the
phasors close in on themselves!
fig19
Then we have a minimum. And if we increase θ further,
fig20,
we reach a secondary maximum, with clearly less resultant than the straight-ahead
max. And so on: a pattern of maxima and minima, with the maxima
getting dimmer and dimmer. Our expectations from this again predict a curve
that looks something like what is shown in figure 2.22.
But we still need to be more quantitative. Now we make a fully quantita-
tive calculation − we “integrate by picture”. To see how this goes, consider
the phasor situation at arbitrary θ
fig22
Eθ = 2R sin(Φ/2)    (2.3)
but s = rθ for an arc, so Em = RΦ and R = Em/Φ; so
Eθ = (2Em/Φ) sin(Φ/2)  ⇒  Eθ = Em [sin(Φ/2)/(Φ/2)]    (2.4)
But, what is Φ?
Φ/(2π) = p.l.d./λ  ⇒  Φ = (2π/λ) a sin(θ)    (2.5)
So our final result for single-slit diffraction is
" #2
sin Φ2 Φ πa
I(θ) = I0◦ Φ
, where = sin(θ) (2.6)
2
2 λ
The function sin(x)
x is called sinc(x), so this is
2 πa
h i
I(θ) = I0 sinc
◦ sin(θ) (2.7)
λ
fig23
But what is meant by “I(θ)”? Of course, the energy sent off at exactly
angle θ is zero. What we mean is I(α)dα = energy diffracted into the
infinitesimal angular range dα centered on the value α, where
α ≡ (πa/λ) sin(θ).    (2.8)
We see that this function has the expected maxima and minima. Where
are the minima? We wish sin(Φ/2) = 0, so Φ/2 = π, 2π, 3π, · · ·3, i.e.
(πa/λ) sin(θ) = mπ,   m = 1, 2, 3, · · ·    (2.9)
Therefore, the condition for minima in single slit diffraction is given by
a sin(θ) = mλ    (2.10)
which is as we anticipated.
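As a numerical sketch of equations (2.6) and (2.10) (the wavelength and slit width below are assumed for illustration): numpy’s sinc already includes the factor of π, i.e. np.sinc(u) = sin(πu)/(πu), so I(θ)/I(0) = sinc²(a sin θ/λ), and it indeed vanishes at a sin θ = mλ:

```python
import numpy as np

lam = 550e-9        # assumed wavelength, m
a   = 5 * lam       # assumed slit width, so minima fall at sin(theta) = m/5

for m in (1, 2, 3):
    theta = np.arcsin(m * lam / a)               # angle of the m-th minimum
    I_rel = np.sinc(a * np.sin(theta) / lam)**2  # relative intensity there
    print(f"m = {m}: theta = {np.degrees(theta):5.2f} deg, I/I(0) = {I_rel:.1e}")
```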
Most Important 1 As is obvious from the plot, most (almost all) of
the light falls within θ < θ_first min. So, the angle of the first minimum is
especially important. It is given by
θ_first min = sin⁻¹(λ/a)    (2.11)
3
Not Φ/2 = 0, since Φ/2 is also in the denominator.
Most Important 2 Note how this depends on the ratio of λ to a. Three
cases are plotted in figure 2.24.
fig24
Most Important 3 Remember the figure from earlier where it showed the
waves diverging from apertures of widths a = λ, a = 3λ, and a = 5λ? Make
sure that you understand the correlation between the situations shown in that
figure and those shown in figure 24. This shows the effects of increasing and
decreasing λ and a relative to each other. As you will see, that understanding
is extremely important. A good rule of thumb to remember is that the half-width
of the central maximum is ∼ λ/a.
If we look at a very small object with light (or other electromagnetic radia-
tion), the spatial resolution, ∆x is approximately λ. This limiting resolution
is due to diffraction! Let’s understand this. We saw that, for monochromatic
plane wave radiation incident on a long aperture of width a (single slit), the
energy spreads in angle (diffracts), but is mostly contained within an angular
range ∆θ = 2 sin⁻¹(λ/a) (the range between the first zeros). If the aperture is
circular, the details of the results are different, but the main result is very
similar − you get a diffraction pattern consisting of bright and dark bands
(in this case, concentric rings). If the screen is far enough from the aperture,
“Fraunhofer diffraction” results − in this case, for a circular
aperture, you get a very strong and large central maximum surrounded
by alternating light and dark rings. This is shown in figure 25.
fig25
For a circular aperture, under Fraunhofer conditions, the first minimum
occurs not exactly at θ = sin⁻¹(λ/a), but rather at
sin(θ) = 1.22 λ/d    (2.12)
where d is the diameter of the aperture. Now suppose you look at something
with a lens − say you look at a point-like source (i.e. star at great distance)
with a telescope. Say we concentrate on one wavelength component. Then,
because of its very great distance, the light from the star is essentially a
plane wave at the telescope’s objective lens. This lens effectively “cuts out”
a circular part of the plane wave fronts, passes them on, and rejects the rest.
Thus, in addition to its properties related to geometric optics, the lens acts
like an aperture. And because its diameter is not infinite, the image formed
on the “screen” is not a point, but rather is a somewhat spread out diffraction
pattern.
fig26
Limit of Resolution This raises an interesting issue in resolution: suppose
we look at two distant point sources. If they are too closely
spaced, they will not be individually resolved when we look at both.
We will take as a resolution criterion “Rayleigh’s criterion” (actually, one
can do a bit better) − two objects are barely resolved if the central maxi-
mum of the diffraction pattern of one source falls on the first minimum of
the diffraction pattern of the other. Thus, by this criterion, the minimum
resolvable angular separation, or the angular limit of resolution is
∆θ_min = 1.22 λ/d    (2.13)
for aperture size d.
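As a quick illustration of equation (2.13) (the wavelength and lens diameter are assumptions for the example, not values from the text):

```python
import math

lam = 550e-9    # assumed wavelength (green light), m
d   = 0.10      # assumed objective diameter, m

dtheta = 1.22 * lam / d      # Rayleigh limit, small-angle form, radians
print(f"Delta theta_min ~ {dtheta:.2e} rad = {math.degrees(dtheta)*3600:.2f} arcsec")
```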
fig27
Let us look at this in a bit more detail. Suppose we have two objects (O1
and O2 ) to be imaged by a lens. Let the separation between them be δ; let
the central maxima of their image blurs be centered at I1 and I2 , respectively,
as in figure 2.28. So, how far apart do O1 and O2 have to be for their distinct
images I1 and I2 to be resolvable? We use Rayleigh’s criterion − that the
maximum of I1 coincide with the first minimum of I2 . This means that O2 AI1
differs in path length from O2BI2 by distance λ. To see where this condition
comes from, recall that at the first minimum of a diffraction pattern, say from
a rectangular slit, the rays coming from the extreme ends of the slit differ in
path length by λ (with this, dividing the slit into two zones ensured that for
each ray in zone I, there is a corresponding ray in zone II that is π radians
out of phase with it).
fig28
Now, from figure 2.27, O2A is shorter than O1A by length δ sin(α/2), and
O1A = O2B, and O1B is shorter than O2B by δ sin(α/2). Thus, O2B − O2A =
δ sin(α/2) ≈ O2BI1 − O2AI1. Thus, for O1 and O2 to be resolvable,
2δ sin(α/2) ≥ λ  ⇒  δ ≥ λ/(2 sin(α/2))    (2.14)
Thus, the limit of resolution is
∆x = λ/(2 sin(α/2))    (2.15)
Of course, we can make this as small as we want by either using λ as small as
possible (gamma rays) or by making α large (big diameter lens). However, if
we insist on using long-wavelength light, we have a price to pay in resolution.
Since the largest α/2 can be is π/2, we see that
∆x ∼ λ    (2.16)
as claimed.
Chapter 3
Chapter 4
Let us recall some features of the double slit experiment with electrons:
fig1
Recall salient features
1. Hear clicks from a detector: no half clicks
2. Move detector around: rate of clicking changes, but still no half clicks
3. Lower gun temperature: clicking slows down, still no half clicks
4. Use two detectors at once: never hear both go off simultaneously
Conclusion: Upon detection, the electron shows up like a particle − at one
point in space, not spread out in space.
fig2
As we will see, just as in the case with photons, the “final picture” is not dimly
present during the intermediate stages (as it would be for a wave-caused buildup).
Rather, each electron is detected at one place according to a probability rule.
Yet, P12 results! P12 is a statistical pattern − it results from the accumu-
lation of data over time − even if (and especially for the purposes of this
experiment) the gun only sends one electron at a time.
Therefore, if we send in a single electron, we can’t predict definitely where
on the screen it will wind up. From P12 (x) we can only predict the relative
probability of winding up at different points. This illustrates the central fea-
ture of quantum mechanics − we cannot, even in principle, predict exactly
where a given electron will wind up on the screen. We can only talk about
relative probabilities. This is in marked contrast to classical physics where
in principle, we could have followed the trajectory of a given bullet and then
used Newton’s laws to predict the subsequent trajectory.
Between creation at the electron gun and detection at a point at the screen,
the electron seems to not exist as a particle, but propagates in accordance
with the mathematics of wave motion. That is, it propagates as if it is
statistically guided by a wave. (We will see later that the “wave” does not
really exist as a spread out physical entity in space − it is a mathematical
construct).
It is important to realize that the overall pattern P12, though collected over many
trials, reflects the interference of the wave function of a single electron with
itself. Let us denote this “fictitious” or “proxy” wave function “guiding”
the electron as Ψ(x, y, z, t). That is, we will hypothesize that, before the
position measurement at the screen, the electron is “represented” by the
”state function” (or “wave function”) Ψ(x, y, z, t). After doing the experiment
with a very, very large number of electrons and collecting the data, we will
then hypothesize the following interpretation:
Let dx be a very small distance along the screen centered at position
x. Then we will say that the probability that any electron in a single
trial of the experiment will materialize in dx, P (x)dx, is proportional
to the intensity in that small region. In particular, we will take as a
working postulate that:
P (x)dx = |Ψ(x)|2 dx (4.1)
where P (x) is the probability for a single electron to materialize in dx centered
on x.
This postulate (the “Born probability rule”) is clearly suggested by the results
of the double-slit experiment for electrons.
4.2 Orthodox (“Copenhagen”) Interpretation
as [the] position [of a particle] to have definite values at all times. However,
the modern consensus, known as the Copenhagen interpretation, is that until
experiment actually localizes it, a particle simply does not have a location.”
Again, it is not that the electron is “somewhere” or even really phys-
ically spread out in space. Rather, we say that before the detection
the electron generally does not possess even the concept of
position (“partial reality”). In the orthodox quantum mechanics
theory it is believed that the act of measurement forces the previ-
ously only “non-objectively real” electron to materialize in positional
reality, selecting a position in the real world according to a certain
“preassigned” probability function, |Ψ(x, y, z, t)|2 .
It’s as if, before the detection measurement, the electron is like a ghost holding
a deck of cards, each card labeled with a position in the real world. Consider
an interval of positions dx starting at x = xa, say; then the number of cards
holding positions in this interval is weighted, compared to the number
of cards holding positions in other intervals of equal width dx, according
to the electron’s preassigned function ψ ∗ (x)ψ(x) ≡ |ψ|2 . The measurement
apparatus then says to the ghostly electron − “Hurry and pick one of the
cards at random − then appear in the world of humans at that point!”
Example: Suppose that, before measurement, the state function of an
electron in some region, say 0 ≤ x ≤ L, at time t is
for some constants k and ω 2 . Where can the electron represented by this state
function be forced into materialization, and with what relative probabilities,
say at t = 0? Answer: Probability of materialization in a small interval, dx
centered on x at time t is:
4.3 Probability Rule (Born, 1927)
Let us now look at some experimental results with light3 . As you know, in
some experiments (e.g. double-slit interference, diffraction, etc.) light shows
wavelike properties.
3
By “light”, I mean monochromatic, coherent light.
From a reading of Einstein’s 1905 paper, it appears (though it is not
explicitly so stated) that by then he viewed a quantum of definite energy as
being localized in space and used this to explain the photoelectric effect. By
1909, he seems to have arrived at a view that the localized energy packets
(“photons”) are guided statistically by the “associated” classical EM wave.
In this picture, which preceded the Copenhagen view, the intensity of
radiation striking a surface is proportional to the number of photons incident
per unit area per second on the surface. (Recall that I = energy/(area·sec) and the
energy of each photon of frequency f is fixed at the value hf.)
In this view, at any instant of time, the number of photons present in
an (infinitesimal) volume, dV of space is proportional to the square of the
electric field value there. Part of the reasoning leading to this: recall that,
classically, the energy in dV is nhf (we assume a single frequency f ), where
n is the number of photons. Thus, n(x, y, z, t) ∝ E 2 (x, y, z, t).
Thus, if a region of large E 2 moves through space (as around say a crest
of the E field in a traveling sinusoidal EM wave), then the photons tend
to follow along. Thus, this pre-Copenhagen view of Einstein’s is a sort of
“pilot-wave”/particle viewpoint − in it, the photon really exists all along
as a localized bundle (“particle”) and is statistically guided by the “really
existing” associated wave.
Superficial support for this view comes from noting that detections in the
double-slit experiment with the photons essentially come at “random” places
one at a time, as is shown in figure 2, rather than from a dim version of
the entire final pattern that gets progressively brighter, as it would if the
classical EM wave theory were correct. Further support at this level seems to
come from the development of images built up progressively: again, the initial
stages show seemingly randomly placed dots, rather than a dim replica of the
final image.
fig2
Further evidence for the particle-like nature of light seems to come from
further consideration of the double-slit experiment. If we line the screen with
separate photon detectors, sending a very low level of light in, we find that
two detectors never fire at once.
fig3
This is further evidence that, in interaction, light is not a wave. However,
if we let the experiment accumulate data, we find a predictable statistical
pattern of strikes. In particular, the relative number of strikes in any detector
(compared to the others), when accumulated, follows the distribution expected
from interfering waves − e.g., there are detectors that never fire (located at
positions of “destructive interference”).
fig4
However, most4 physicists now believe, in accordance with the Copenhagen
view: although the pictures in figure 4 show that very dim light is detected
“like a particle” (“photons”), the simple picture of a particle existing all along,
guided by an existing “pilot wave”, is incorrect. Nowadays, this pilot-wave picture
would be called a “semiclassical picture”.
Although the orthodox Copenhagen interpretation has been mainstream
since 1927-1928, it is only quite recently that we are approaching direct ex-
perimental evidence for it! Consider the experiments shown in figures 4.5
through 4.8.
First a “wide angle version of the double-slit experiment”:
1. “Single-Photon, Wide-Angle Experiment5 ”
fig5
2. Anti-coincidence Experiment: Consider the following experiment per-
formed in 1986.
fig6
This is a “space-separated” analog of our “multiple detectors on
the screen” thought experiment with the double-slit apparatus. Again,
one photon is sent in at a time. Result: for any given event, either D1
fires or D2 fires, never both. This experiment shows the “particle nature
of light”.
3. Interferometer Experiment6
a Showing wavelike behavior
4
But not all
5
Lie and Diels, Journal of Optical Society of America B9, page 2290 (1992). The brief discussion here is from
Fundamentals of Physics, 8th Ed. by Halliday, Resnick, & Walker, pp. 1067-8.
6
Discussion taken from Physics, 5th Ed., by Halliday, Resnick, and Krane, p.p. 1024-1025
fig7
b Showing Particle-Like Behavior
fig8
c Delayed Choice Experiment: Experiments bearing on this very ques-
tion have recently been done!
fig9
What can we conclude from this? Quoting7 another author.
fig10
Thus, the probability wave has no physical existence. It is, so to speak, an
artificial calculation device. Nor can the “photons” have a physical existence,
like that of a little ball, while they are propagating (“in transit”). We will
return to these issues later in the course.
7
Amit Goswami in Quantum Mechanics, 2nd Ed., p. 114 (Custom Publishing)
Chapter 5
Let us recall deBroglie’s proposal − it is that, associated with a free1
particle of (definite) momentum, p~, and (definite) energy, E is a wavelength
and frequency given by
λ = h/p    (5.1)
and
f = E/h.    (5.2)
Since ω ≡ 2πf and k ≡ 2π/λ, these are p = ħk and E = ħω, where ħ ≡ h/(2π).
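For a feel for the numbers in the deBroglie relations, here is a small sketch (the 100 eV electron is an assumed example, not one from the text):

```python
import math

h  = 6.626e-34          # Planck's constant, J s
m  = 9.109e-31          # electron mass, kg
KE = 100 * 1.602e-19    # assumed kinetic energy: 100 eV, in J

p   = math.sqrt(2 * m * KE)   # nonrelativistic momentum
lam = h / p                   # deBroglie wavelength
print(f"lambda ~ {lam*1e10:.2f} Angstrom")   # about 1.2 Angstrom - atomic scale
```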
These relations argue that a plane wave is to be associated with a free
particle of definite energy E and definite momentum p~. Of course, from
the results of our interference experiments, we now know (at least in the
orthodox Copenhagen interpretation) that the associated wave does not have
a physical existence in real physical space, but rather is a probability wave
( “state function”). Thus, we tentatively assume that the state function
associated with a free particle of definite (E, p~ = px̂) is
Ψ(x, t) = A cos(px/ħ ± Et/ħ),    (5.3)
the choice of the sign depending on the direction of the momentum. Notice
something interesting about this “wave function”, it fills all space. (Its wave-
fronts are planes perpendicular to x̂). This means that the position associated
with such a state is “completely unknown” (or better, “completely does not
exist”). On the other hand, for such a state (of nonobjective reality), the
momentum, and hence the speed, is precisely known (partial reality state).
Now, if this plane wave (even though it’s not a real wave that actually
exists in physical space − perhaps we should call it the “proxy wave”), representing
the state of a free electron of definite energy E, is to be a reasonable
representation of the state, then its velocity should match the “velocity of
the particle” calculated from vp = p/m. However, as you know, the
(proxy) wave velocity is
vφ = fλ = (E/h)·(h/p) = E/p = √(p²c² + m²c⁴)/p = c√(1 + (mc/p)²) > c.    (5.4)
1
By “free” particle, we mean one that is not subject to any forces; of course, this is an idealization.
We seem to have a problem − the particle somehow “can’t keep up with”
the state function that is supposed to represent it. We must deal with this
problem.
Another issue (which, as we’ll see, will turn out to be related) also presents
itself: while our deBroglie plane wave is useful as an idealization, it is “very
rare” that the position of the particle it is to represent is completely nonexistent.
Even in the most extreme case − if the universe is finite − the region
of unknown position is not infinite. Thus, the plane state function is an
idealization, never really occurring in practice. More typically, while the position
is not defined within a certain range, the probability of materialization is known
to be zero (or very close to zero) outside this range. That is to say, upon measurement, in most
situations, the particle will not materialize “just anywhere”, but only at a
point within a certain range2.
What would be a valid proxy wave representing a “moving, partially localized
state”? Consider, for example, the possibility of a proxy wave that is a
moving pulse, say
Ψ(x, t) = A e^(−(x−vt)²/(2σ²))    (5.5)
Now, the function
f(x) = A e^(−x²/(2σ²))    (5.6)
is a “Gaussian”, or “bell-shaped” function of standard deviation, σ.
fig4
Then, by the rule for making rigidly moving pulses,
Ψ(x, t) = A e^(−(x−vt)²/(2σ²))    (5.7)
would represent this Gaussian shape moving rigidly down the x-axis at speed
v.
Perhaps this is the sort of probability state function we should use in
the theory to represent a free particle? It has the nice feature that adjusting σ
2
e.g. the electron is somewhere within the resolution of a measuring instrument, etc.
adjusts the range of “unknownness” in the position. Interestingly, the answer
to this question is no!
As we will see, not all functions are valid state functions in quantum
mechanics − and this one is not. (As we will see, valid state functions must
be solutions of a partial differential equation analogous to, but not the same
as, the classical wave equation.) Since we do not yet (in this course) know
this differential equation, we must look for another route to localized state
functions for moving particles.
For this, we invoke the principle of superposition, which says that,
under the right conditions, the sum of two valid state functions should also
be a valid state function − evidence for this is the interference pattern we get
at the screen in a double-slit electron experiment. (ψt = ψ1 + ψ2 is a solution;
|ψt |2 = |ψ1 + ψ2 |2 , which shows the interference “cross term”.)
So, to build more complicated solutions, which will hopefully lead to a
degree of localization, we begin by superposing two of our basic deBroglie
plane-wave solutions, each corresponding to a definite, but different, value of
momentum:
Ψ(x, t) = A cos(p1x/ħ − E1t/ħ) + A cos(p2x/ħ − E2t/ħ)    (5.8)
Ψ(x, t) = A cos(k1x − ω1t) + A cos(k2x − ω2t)    (5.9)
We note the trigonometric identity
cos(α) + cos(β) = 2 cos((β − α)/2) cos((β + α)/2)    (5.10)
which tells us that
Ψ(x, t) = 2A cos([(k2 − k1)x − (ω2 − ω1)t]/2) cos([(k2 + k1)x − (ω2 + ω1)t]/2)    (5.11)
Ψ(x, t) = 2A cos((∆k/2)x − (∆ω/2)t) cos(k̄x − ω̄t)    (5.12)
where ∆k ≡ k2 − k1 , ∆ω ≡ ω2 − ω1 , k̄ = 12 (k1 + k2 ), and ω̄ = 12 (ω1 + ω2 ).
Let us consider the case where ω2 is only slightly greater than ω1 and k2 is
only slightly greater than k1 . Then:
∆k ≪ k2 or k1,   ∆k ≪ k̄    (5.13)
∆ω ≪ ω2 or ω1,   ∆ω ≪ ω̄    (5.14)
Then:
Ψ(x, t) = 2A cos((∆k/2)x − (∆ω/2)t) cos(k̄x − ω̄t)    (5.15)
The 2A cos((∆k/2)x − (∆ω/2)t) term is a sinusoidal, relatively long-wavelength traveling
“envelope”, and the cos(k̄x − ω̄t) term gives the higher-frequency oscillations
“inside the envelope”.
fig5,6
These are “groups” of waves. What is the speed of the groups? It is
(∆ω/2)/(∆k/2) = ∆ω/∆k = venv    (5.16)
This is not the same as the speed of the motion of the “inside oscillations”,
which is
vinside = ω̄/k̄.    (5.17)
This is still not very localizing, though − the groups repeat in space. To deal
with this problem requires a bit more work, which we will do next.
It turns out3 that if we superpose more (an infinite number of) sinusoidal traveling
waves, we can arrange that there is basically constructive interference over a
small ∆x and essentially total destructive interference everywhere else. That
is, we get one moving group only. Such a moving group is called a “wave
packet”.
fig7
This happens if you superpose an infinite number of different sine waves
that span only a finite interval of wave numbers. That is, if δk is very small,
3
see or hear a course on wave physics − e.g., late in the semester in PHY 301. We will also probably have a
derivation of this a bit later in this course.
(likewise δω is small),
E = hf = ħω
p = h/λ = ħk
so,
ħ²k²/(2m) = ħω    (5.20)
This leads to the “dispersion relation” for deBroglie waves for a free particle,
ω = ħk²/(2m)    (5.21)
4
Parts of the more general Fourier integral will be developed later in the course; for now, we are drawing on
background from your previous exposure to the general concept from a “modern physics” course.
As is shown in a waves course (e.g. PHY 301), the group (region of construc-
tive interference) moves at velocity
vgroup = lim(δk→0) δω(k)/δk = dω(k)/dk    (5.22)
This is called the group velocity (vg):
vg = dω(k)/dk    (5.23)
Let us, following deBroglie, evaluate this “group velocity”
vg = dω/dk = 2ħk/(2m) = 2p/(2m)    (5.24)
since p = h/λ = ħk. Thus
vg = p/m = m·vparticle/m = vparticle !!    (5.25)
m m
I believe it was Einstein who called this, “the most amazing result of all
time”!
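Here is a small numerical sketch of this result: superpose a narrow band of deBroglie plane waves obeying ω = ħk²/(2m) and track the peak of |Ψ|²; it moves at ħk̄/m, i.e. at the particle velocity (units with ħ = m = 1, and the band parameters are arbitrary choices for the demo):

```python
import numpy as np

hbar = m = 1.0
kbar, dk = 5.0, 0.5                               # band center and half-width
ks = np.linspace(kbar - dk, kbar + dk, 200)       # narrow band of wave numbers
x  = np.linspace(-50.0, 250.0, 4000)

def density(t):
    # equal-amplitude superposition of plane waves with omega(k) = hbar k^2 / (2m)
    omegas = hbar * ks**2 / (2 * m)
    psi = np.exp(1j * (ks[:, None] * x[None, :] - omegas[:, None] * t)).sum(axis=0)
    return np.abs(psi)**2

t1, t2 = 0.0, 20.0
x1, x2 = x[np.argmax(density(t1))], x[np.argmax(density(t2))]
print(f"measured envelope speed ~ {(x2 - x1)/(t2 - t1):.2f}; hbar*kbar/m = {hbar*kbar/m:.2f}")
```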
Let us now return to considering the idealized basic plane-wave state function
cos(px/ħ − Et/ħ + φ). Several problems occur with it; among them are:
1. A free particle represented by this state function ought to have equal
probability to materialize anywhere along the x-axis. But, as we’ve seen,
this plane wave state function, at any time, represents bumps and valleys
in spatial distribution of materialization probability.
2. Suppose we attempt to describe a free particle with momentum in the
+x direction by, say
Ψ→(x, t) = A sin(kx − ωt),   k = p/ħ.    (5.26)
Now, invoking the superposition principle,
Ψ(x, t) = A sin(kx − ωt) + A sin(kx + ωt)    (5.27)
must then be a possible state. By expanding out the right-hand side,
you can easily show that this is
Ψ(x, t) = 2A sin(kx) cos(ωt)    (5.28)
(which is a standing wave). But this is clearly not satisfactory, for at
t = π/(2ω), this Ψ(x, t) vanishes for all x, thus violating the normalization
condition! (A similar problem exists for Ψ← = A cos(kx + ωt).)
Thus, we can only conclude that
f (x, t) = A cos(kx ± ωt + φ) (5.29)
is not a valid free particle state function!
So, we now have a catalog with zero valid state functions. Perhaps finding
an equation that governs all state functions will help us find some that are
valid.
Here is another reason why we need to have a governing equation for a
state function:
As we will discuss in a bit more detail later on, a measurement
changes a state function. For example, a “perfectly accurate” (i.e.
very small resolution) position-materializing measurement suddenly
changes the state function to a tall, very narrow spike centered on
the central measured value; this is shown in figure 5.9
fig9
This is called the “collapse of the state function”; it is usually
taken as a postulate in the orthodox Copenhagen interpretation.
The figure also shows that, after the measurement, while “on its
own”, that state function changes in time. It is very important
to understand how state functions change in time between
measurements (“time evolution of state function”).
If the governing equation is a partial differential equation in the variables x
and t, then presumably (if we can solve it), it will tell us exactly how state
functions evolve in time. So we seek a governing equation. How can we find
out if one exists, and if so, what it is? Let us think, we need:
1. A linear equation. This allows for the possibility that some solutions
may be superpositions of other solutions. We saw that the double-slit
experiment for electrons requires this.
2. We know that the energy of a particle is (nonrelativistically),
E = (1/2)mv² + V(x, t)    (5.30)
or
E = p²/(2m) + V(x, t)    (5.31)
Now, in deBroglie language, this is
ħω = ħ²k²/(2m) + V(x, t)    (5.32)
This uses the dispersion relationship we found earlier. Our governing
equation must be consistent with this.
3. Something that allows plane and spherical wave solutions
But, for the same reason, the equation must be linear; hence, n = 1.
Thus, we expect a term
V (x, t)Ψ(x, t). (5.34)
• The h̄ω in the dispersion relation (with a plane wave) suggests one deriva-
tive in time
• The ħ²k²/(2m) suggests (again, with a plane wave) a second derivative in x.
Putting these ideas together, we try
a ∂²Ψ(x, t)/∂x² + V(x, t)Ψ(x, t) = b ∂Ψ(x, t)/∂t    (5.35)
For a free particle, we would have V = V0 constant, which we could take as
zero. Then, trying cos(kx−ωt) or sin(kx−ωt) in our trial governing equation,
it doesn’t work (as you can easily show). This is a good sign, since we already
know from other arguments that these can’t be valid state functions. For a
free particle, let’s try the superposition state Ψ(x, t) = cos(kx − ωt) + γ sin(kx − ωt). Substituting it in gives
−ak² cos(kx − ωt) − aγk² sin(kx − ωt) + V0 cos(kx − ωt) + γV0 sin(kx − ωt) =
bω sin(kx − ωt) − bγω cos(kx − ωt),
which, on matching the coefficients of the cosine and sine terms separately, simplifies to the requirement that γ² = −1, i.e., γ = ±i
is required. However, for the moment, let us continue: we plug γ = ±i
back into our trial governing equation, getting
−ak² + V0 = ∓ibω.
Now this looks like a relation between ω and k, and so it must agree (if this
form of the wave equation is to be consistent) with our original dispersion
relation.
ħ²k²/(2m) + V0 = ħω    (5.42)
Comparing these, we find
a = −ħ²/(2m)    (5.43)
b = ±iħ    (5.44)
We will choose the positive sign for b; then our governing equation is
−(ħ²/(2m)) ∂²Ψ(x, t)/∂x² + V0 Ψ(x, t) = iħ ∂Ψ(x, t)/∂t    (5.45)
We now assume that this promising-looking equation is valid whether V is
constant or not. Then we have
−(ħ²/(2m)) ∂²Ψ(x, t)/∂x² + V(x, t)Ψ(x, t) = iħ ∂Ψ(x, t)/∂t    (5.46)
This is the famous Schrödinger equation. We will be dealing with it for the
rest of the semester.
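As a quick symbolic check of the construction above (a sketch using sympy): the complex plane wave e^(i(kx−ωt)) satisfies the free (V = 0) Schrödinger equation exactly when ω obeys the deBroglie dispersion relation, while a purely real cosine does not:

```python
import sympy as sp

x, t, hbar, m, k = sp.symbols('x t hbar m k', positive=True)
omega = hbar * k**2 / (2 * m)              # deBroglie dispersion relation

def residual(psi):
    # left side minus right side of  -(hbar^2/2m) psi_xx = i hbar psi_t
    return sp.simplify(-hbar**2/(2*m) * sp.diff(psi, x, 2) - sp.I*hbar*sp.diff(psi, t))

print(residual(sp.exp(sp.I*(k*x - omega*t))))   # 0 -> a valid free-particle state
print(residual(sp.cos(k*x - omega*t)))          # nonzero -> the cosine alone fails
```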
Chapter 6
A very interesting point has come up. In the process of our “derivation” we
have seen that the original deBroglie form of the state function for a free
particle of definite energy E and definite momentum p is not valid
Ψ(x, t) ≠ A cos(px/ħ − Et/ħ)    (6.1)
for this case! Rather, the appropriate proxy wave function for this case is
apparently
Ψ(x, t) = A [cos(px/ħ − Et/ħ) + i sin(px/ħ − Et/ħ)]    (6.2)
6.2 What is the Meaning of a Complex Wave Function?
It makes no sense to say “the length of this table is 27i inches” or “the
voltage difference was 3i volts.” Likewise, it makes no sense to say “the
wave amplitude at this point of this stretched string is now 5i meters.” So,
how can we deal with a complex wave function in quantum mechanics? To
quote another author who puts it well:
fig1
Thus, the state function (“wave function”) is a (generally) complex (in
mathematical sense) function that represents (mathematically) the particle
while it is in partial reality. We would like to write the equation in a more
compact form; to do this, we recall Euler’s formula:
e^x = 1 + x + x²/2! + x³/3! + · · ·    (6.4)
so,
e^(iθ) = 1 + iθ + (iθ)²/2! + (iθ)³/3! + (iθ)⁴/4! + · · · ,    (6.5)
which is
e^(iθ) = 1 + iθ − θ²/2! − iθ³/3! + θ⁴/4! + iθ⁵/5! − · · ·
       = (1 − θ²/2! + θ⁴/4! − · · ·) + i(θ − θ³/3! + θ⁵/5! − · · ·)
       = cos θ + i sin θ
Thus, the valid deBroglie plane wave for definite momentum and energy is
Aei(kx−ωt) (6.6)
6.3 Probabilities and Normalization Revisited
and note that this is always ≥ 0 since it is the sum of squares of real
functions!
Consider also the polar form of the representation:
f(x, t) = R e^(iθ(x,t))
This means that our previously stated normalization condition must now
be accordingly modified to
∫_{−∞}^{∞} Ψ*(x, t)Ψ(x, t) dx = 1    (6.11)
at all times t.
Now we note another bonus of our complex plane-wave states. With the real
plane waves, e.g. cos(kx − ωt), the probability density was “bumpy”
fig2
This is a clear violation of the homogeneity of empty space2, as we remarked
some time ago. However, with the complex plane wave and the new
probability rule, the probability density Ψ*Ψ = |A|² is uniform in x.
Then, inside the well, V = 0, so the Schrödinger equation is
−(ħ²/(2m)) ∂²Ψ(x, t)/∂x² = iħ ∂Ψ(x, t)/∂t    (6.14)
inside the well.
How do we solve this? For now, instead of using formal mathematical
methods, we note that inside the well the particle is free, so, if we assume
a fixed, definite energy, E, the general solution should be a superposition of
right and left moving quantum plane waves
Ψ_E(x, t) = A e^(i√(2mE) x/ħ − iEt/ħ) + B e^(−i√(2mE) x/ħ − iEt/ħ)    (6.15)
We emphasize that this is the solution inside the well for only one energy.
Solutions of this form for different energies can, by principle of superposition,
be superposed; we will discuss that later on.
We are not done, however − recall that we have two boundary conditions:
1. Ψ(x = 0, t) = 0 for all t
2. Ψ(x = L, t) = 0 for all t
As you can show, application of the first boundary condition to our quantum
plane waves leads to
Ψ_E = A sin(√(2mE) x/ħ) e^(−iEt/ħ)    (6.16)
Then application of the second boundary condition to this equation leads
to
√(2mE) L/ħ = nπ,   n = 1, 2, 3, · · ·    (6.17)
which gives the familiar quantization of energy
En = n²π²ħ²/(2mL²),   n = 1, 2, 3, · · ·    (6.18)
Thus, the possible state functions for definite energy are
Ψn(x, t) = An sin(nπx/L) e^(−i n²π²ħ t/(2mL²))  for 0 ≤ x ≤ L,   and  Ψn(x, t) = 0  for x < 0 or x > L    (6.19)
Since the spatial parts of these state functions happen (in this case) to be
purely real, we can plot them − they are plotted in figure 3, along with
the associated probability density function Ψ*(x, t)Ψ(x, t) for each.
fig3
Note that, for the state functions Ψn(x, t) given by our possible state functions,
the probability densities, while dependent on position (x), are all independent
of time, since
|e^(−ict)|² = e^(+ict) e^(−ict) = 1    (6.20)
for c real, as in our possible state functions.
The coefficients {An} can be found through the application of the normalization
condition; the requirement 1 = ∫ Ψ*(x, t)Ψ(x, t) dx here is
1 = |An|² ∫₀^L sin²(nπx/L) dx    (6.21)
since Ψ is zero outside the well and since the time dependence from the expo-
nential factor goes away in the probability density. Thus, the normalization
condition is independent of time − it can be applied at any time to determine
the normalization constant An , and once An is determined, it remains con-
stant in time. (We will see later that there is a very beautiful theorem that
ensures this time independence of the normalization is true for all state func-
tions valid for single particles states in non-relativistic quantum mechanics.)
Now, you can show,
∫₀^L sin²(nπx/L) dx = L/2    (6.22)
for all integer n − note that it is independent of n. Thus, equation (6.21)
tells us that for the infinite square well, An = √(2/L) for all n. Thus, the state
functions for definite energy are
functions for definite energy are
Ψn(x, t) = √(2/L) sin(nπx/L) e^(−i n²π²ħ t/(2mL²))  for 0 ≤ x ≤ L,   and  Ψn(x, t) = 0  for x < 0 or x > L    (6.23)
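A small numerical sketch of these results (the particle mass and well width are assumed for illustration): the energies come from equation (6.18), and the spatial normalization integral is 1 as equation (6.22) promises:

```python
import numpy as np

hbar = 1.054571817e-34    # J s
m    = 9.109e-31          # assumed particle: an electron, kg
L    = 1e-9               # assumed well width, m

for n in (1, 2, 3):       # lowest energy levels, in eV
    E_n = n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)
    print(f"E_{n} = {E_n/1.602e-19:.3f} eV")

# check: (2/L) * integral_0^L sin^2(n pi x/L) dx = 1
n  = 3
x  = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
norm = np.sum((2/L) * np.sin(n*np.pi*x/L)**2) * dx
print(f"normalization integral = {norm:.4f}")     # -> 1.0000
```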
Since the Schrödinger equation is linear and homogeneous, the sum of solu-
tions − in fact, an arbitrary linear combination of solutions
Ψ(x, t) = Σ_{n=1}^∞ cn Ψn(x, t) = Σ_{n=1}^∞ cn sin(nπx/L) e^(−iEn t/ħ)    (6.24)
is a solution inside the well. (Of course, for “superposition solutions” like
this, the cn’s must be scaled so that the overall Ψ(x, t) is correctly normalized.)
Of course, for this sort of superposition state function there is no
single value of definite energy, since each Ψn in the sum corresponds to a
distinct energy. This raises an important interpretational question about the
meaning of superposition states like this one. We will have much
to say about this later in the course. In fact, we’ll also have much more
to say about the definite-energy infinite square-well states and their physical
meaning later. Here our purpose was to give you a first illustration of the
use of the Schrödinger equation to find valid state functions.
Chapter 7
e^(+iEn t/ħ) e^(−iEn t/ħ) = 1. Thus, in this case, at all x, the probability density
is constant in time. As another example, in the homework you show that
the functions Ae^(i(±kx−ωt)) are valid free-particle state functions provided a certain
dispersion relationship is obeyed. As yet another example, some time ago we
claimed that a rigidly moving “bell-shaped” curve,
f(x, t) = A e^(−(x−vt)²/(2σ²))    (7.2)
with v and σ real and constant, is not a valid state function − and it is now clear
why not. Suppose that f(x, t) is purely real for all x and all t. Then the
application of the Schrödinger equation to f(x, t) produces an imaginary
function on one side and a real function on the other − and it is impossible
for these to be equal to each other (unless they are both zero). Since f(x, t) =
A e^(−(x−vt)²/(2σ²)) with v and σ constant and real is purely real for all x and t, it is
therefore not a valid quantum mechanical state function.
A situation that sometimes arises is one in which you know the state
function at a specific time only (either by knowledge or by an assumption)
and you want to find out how it changes in time. This is called the quantum
initial value problem. Here is a quick example (without details) that will
give you a first idea of how this sort of thing works. Suppose we have a free
particle (V (x, t) = 0 all x and t) and we somehow know that at the specific
time t = 0, the state function Ψ(x, t), is Gaussian (bell-shaped).
Ψ(x, t = 0) = A e^(−x²/(2σ²))    (7.3)
At a given instant, this can be a valid state function for a free particle.
(In fact, as we will see in a few classes hence, any normalizable1 continuous
function f (x) is a possible state function for a free particle at single instant
of time2 ).
In fact, this is an example of the sort of state function that can result at
the instant of position measurement, when the resolution of the measurement
is 2σ. If σ is small, this looks like a tall, narrow spike of “width” 2σ, as shown
in figure 1. Suppose Ψ(x, t = 0) is this. The question is − what happens to
1
“Normalizable” means ∫_{−∞}^{+∞} Ψ*(x, t = 0)Ψ(x, t = 0) dx < ∞
2
Fourier theory will show this
Ψ as time goes by in the absence of or before another measurement? Must
it change? (yes)
Fig1
Could it change in time by moving rigidly? As we’ve already seen, no.
To find out how Ψ evolves in time, we need to find the solution Ψ(x, t) of the
Schrödinger equation that reduces to equation (7.3) at t = 0.
Since the Schrödinger equation is first order in time, there is at most one
linearly independent function that will do this. (see later if you are not
familiar with this fact about first-order differential equations), In the near
future, we’ll see3 (also from the theory of Fourier analysis) how to find this
function, but that is not our goal here. Therefore, for now, I’ll just tell you
the answer, so we can work with it.
Ψ(x, t) = C e^(−ax²/(1 + 2iħat/m)) / √(1 + 2iħat/m)    (7.4)
where a ≡ 1/(2σ²) and C is a constant. If you have the patience, you can verify
that this function satisfies the free-particle Schrödinger equation, and by setting
t equal to zero, we see that it does reduce to our previous function at time
equal to zero. So, this equation is correct. The question now is − how do we
interpret it? A plot would help, but since Ψ(x, t) is complex, we can’t plot it.
Anyway, what is more important than Ψ is the probability density function.
By letting θ ≡ 2ħat/m, we have
Ψ = C e^(−ax²/(1+iθ)) / √(1 + iθ)   ⇒   Ψ* = C* e^(−ax²/(1−iθ)) / √(1 − iθ)    (7.5)
so,
P(x, t) = Ψ*Ψ = (|C|²/√(1 + θ²)) e^(−ax²[1/(1+iθ) + 1/(1−iθ)]) = (|C|²/√(1 + θ²)) e^(−2ax²/(1+θ²))    (7.6)
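To get a feel for how fast this spreading is, here is a small numerical sketch of the broadening factor √(1 + θ²) with θ = 2ħat/m = ħt/(mσ²) (the electron mass and σ = 1 nm are assumed for illustration):

```python
import numpy as np

hbar  = 1.054571817e-34   # J s
m     = 9.109e-31         # assumed particle: an electron, kg
sigma = 1e-9              # assumed initial width parameter of Psi(x,0), m

a = 1 / (2 * sigma**2)
for t in (1e-15, 1e-14, 1e-13):                  # femtosecond time scales
    theta = 2 * hbar * a * t / m                 # = hbar t / (m sigma^2)
    print(f"t = {t:.0e} s: width grows by a factor {np.sqrt(1 + theta**2):.2f}")
```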
10% accuracy as long as t < √(m²/(20ħ²a²)).) Then
P(x, t) ≈ |C|² e^(−2ax²/(1 + 4ħ²a²t²/m²))    (7.7)
Then
dP(t)/dt = d/dt ∫_{−∞}^{+∞} Ψ*(x, t)Ψ(x, t) dx
         = ∫_{−∞}^{+∞} ∂/∂t [Ψ*(x, t)Ψ] dx
         = ∫_{−∞}^{+∞} [ (∂Ψ*(x, t)/∂t) Ψ + Ψ*(x, t) (∂Ψ/∂t) ] dx
By plugging these in, we get
dP(t)/dt = (iħ/(2m)) ∫_{−∞}^{+∞} [ Ψ* ∂²Ψ/∂x² − (∂²Ψ*/∂x²) Ψ ] dx
         = (iħ/(2m)) ∫_{−∞}^{+∞} ∂/∂x [ Ψ* ∂Ψ/∂x − (∂Ψ*/∂x) Ψ ] dx
         = (iħ/(2m)) [ Ψ*(x, t) ∂Ψ/∂x − (∂Ψ*/∂x) Ψ ]_{−∞}^{+∞}
and, since a normalizable Ψ must vanish as x → ±∞, the boundary term is zero; then
dP(t)/dt = d/dt ∫_{−∞}^{+∞} Ψ*(x, t)Ψ(x, t) dx = 0.    (7.10)
Key to this proof was the use of the Schrödinger equation
iħ ∂Ψ(x, t)/∂t = −(ħ²/(2m)) ∂²Ψ(x, t)/∂x² + V(x, t)Ψ(x, t).    (7.11)
As you can convince yourself by going through the proof again, what “made
it work” is that the Schrödinger equation is first order in time. As
you can convince yourself (try it), if the Schrödinger equation
were second order in time, dP/dt would not be zero, and then we would have
problems. Only a first order (in time) equation is consistent with
conservation of probability. Actually, this is pretty obvious (from the
theory of differential equations) without going through the proof. A first
order differential equation has only one “arbitrary constant” in its general
solution, and this is usually “f(t = 0)”. For example, consider, say,
df(t)/dt = 2t
f(t) = ∫ 2t dt = t² + const.
f(t) = t² + f(t = 0)
or say,
df/dt = −3f(t)  ⇒  f(t) = Ce^(−3t)  ⇒  C = f(t = 0).
So, for a first order equation, only f (t = 0) can be arbitrarily specified as an
initial condition. However, a second order equation, say
d²f(t)/dt² = 2t
has the general solution
f(t) = (1/3)t³ + at + b,
with two arbitrary constants, a and b. Or consider, say,
d²f(t)/dt² = −ω²f(t)  ⇒  f(t) = C sin(ωt) + D cos(ωt).
Then, f(0) = D and f′(0) = ωC. We see that, with second order differential
equations, we can independently and arbitrarily specify both f(0) and f′(0)
− and both are needed to specify a unique time development of f (t). Thus, if
the Schrödinger equation were second order in time, we could independently
and arbitrarily specify Ψ(x, t = 0) and ∂Ψ(x, t = 0)/∂t. This would mean that we
could set ∂/∂t [Ψ*(x, t = 0)Ψ(x, t = 0)] and d/dt ∫_{−∞}^{+∞} Ψ*(x, t = 0)Ψ(x, t = 0) dx
to be nonzero. This would destroy conservation of probability; hence the
Schrödinger equation must be first order in time!
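A numerical sketch of this conservation of probability (not from the text): evolve a free Gaussian packet with the Crank–Nicolson scheme, which treats the first-order-in-time Schrödinger equation unitarily, and watch the total probability stay at 1. Units with ħ = m = 1; the grid and packet parameters are arbitrary choices for the demo.

```python
import numpy as np

hbar = m = 1.0
N, Lbox, dt = 1000, 200.0, 0.05
x  = np.linspace(-Lbox/2, Lbox/2, N)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 4.0) * np.exp(5j * x)        # Gaussian times plane wave (k = 5)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize: integral |psi|^2 dx = 1

# free Hamiltonian with a simple 3-point Laplacian (V = 0, hard walls at the edges)
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N-1), 1) + np.diag(np.ones(N-1), -1)) / dx**2
H = -hbar**2 / (2*m) * lap

A = np.eye(N) + 0.5j * dt / hbar * H              # Crank-Nicolson: A psi_new = B psi_old
B = np.eye(N) - 0.5j * dt / hbar * H
U = np.linalg.solve(A, B)                         # one-step propagator

for _ in range(200):                              # evolve to t = 10
    psi = U @ psi

print("total probability after evolution:", round(np.sum(np.abs(psi)**2) * dx, 6))  # ~1.0
```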
7.4 Determinism in Quantum Mechanics − A Comment
where the function K, called the propagator, represents the influence of
“old values” of Ψ at x0 (i.e. Ψ(x0, 0)) at the point x at the present time t.
[In this sense, K(x0, x, t) is like the contribution at point x at time t made
by the state function at x0 at the earlier time t = 0.] It turns out that the
explicit form of the propagator is known for the case of a free particle:
K_free(x0, x, t) = √(m/(2πiħt)) e^(i(x−x0)²m/(2ħt))    (7.14)
This technique5, while conceptually important, is uncommon in elementary
courses, largely because the propagator is known in closed analytical form as a
function for only a few potential energy profiles (of course, including the free
particle case V (x, t) = 0) and even in those cases, the mathematics involved
can be quite formidable. For this reason, we will mostly or exclusively stay
with the standard differential equation approach.
5
The propagator approach to quantum mechanics, largely pioneered by R. P. Feynman, is very important in
modern quantum field theory. Nonrelativistic quantum mechanics is developed in his book Quantum Mechanics
and Path Integrals.
Chapter 8
In the last class, we looked into the rate of change of the total probability for
materialization (or position measurement) anywhere on the x-axis and found
that it was zero:
dP_tot/dt = (iħ/(2m)) [ Ψ* ∂Ψ/∂x − (∂Ψ*/∂x) Ψ ]_{−∞}^{+∞} = 0    (8.1)
We now return to look more closely at an issue we glossed over in the past.
For a free particle, we’ve been using, as our basic “right-moving” plane-wave
state,
Ψ→^(1)(x, t) = A e^(i(kx−ωt)).    (8.6)
However, perhaps we should use for this purpose
Ψ→^(2)(x, t) = A e^(−i(kx−ωt)) = A e^(−ikx) e^(+iωt)    (8.7)
or can we use these “interchangeably”, or even mix them in a calculation? We
also have different choices for “left-moving” plane-wave states:
Ψ←^(1)(x, t) = A e^(−i(kx+ωt))   or   Ψ←^(2)(x, t) = A e^(i(kx+ωt)).    (8.8)
Let’s look into this: say we have made the choice Ψ→^(1) = A e^(i(kx−ωt)). With this, we
could, for the left-moving state, a priori still use either Ψ←^(1) or Ψ←^(2) above, or both.
Say we pick Ψ←^(1) and Ψ→^(1). Then, according to the principle of superposition,
which avoids the problem! (Why?) Likewise, Ψ→^(2)(x, t) = A e^(−i(kx−ωt)) combined
with Ψ←^(2) avoids this problem (show this).
What if we sometimes use Ψ→^(1)(x, t) = A e^(i(kx−ωt)) and sometimes use Ψ→^(2)(x, t) =
A e^(−i(kx−ωt))? This leads to trouble, as you can show by considering their superposition.
So we need to make a choice. In this, we follow the quantum-mechanics
convention for free-particle plane-wave states: the time dependence is always taken
to be e^(−iωt) (equivalently e^(−iEt/ħ)), so a right-moving plane wave is A e^(i(kx−ωt))
and a left-moving one is A e^(−i(kx+ωt)).
Consider the case of a free (no force field acting on it) electron of definite
energy E incident on a “pin-hole” aperture. On the downstream side of the
aperture, the state function should be (in three dimensions) a spherical wave
Ψ(r⃗, t) = A(r) e^(i(pr−Et)/ħ)    (8.14)
where r is the distance from the pin-hole to the point ~r (origin taken as the
pinhole). Note that A, the amplitude, depends on r − it must decrease with
r, in fact. Why? Because the probability spreads out in space. For example,
for the three-dimensional case (spherical wave)
A(r) ∝ 1/r    (8.15)
while, for the two-dimensional case (circular wave),
A(r) ∝ 1/√r.    (8.16)
Thus, if we consider the electron double slit experiment with the slits as
pin-holes, we have, downstream of the pin-hole plane
Ψ(r⃗, t) = Ψ1(r⃗, t) + Ψ2(r⃗, t)    (8.17)
where Ψ1 is a spherical wave centered on hole 1, and Ψ2 is a spherical wave
centered on hole 2. If we put a measurement device (“proximity device”)
near hole 1 to determine if the electron “went through” hole 1, then if the
device registers a signal, after the measurement, Ψ(~r, t) is “collapsed” to
Ψ(~r, t) = Ψ1 (~r, t).
For the case of a plane-wave state function incident on a long, infinitesi-
mally narrow slit, the result downstream is a cylindrical wave state:
Ψ(r⃗, t) = (A/√r) e^{i(kr−ωt)}    (8.18)
where r is measured in cylindrical coordinates. An example of cylindrical
waves is shown in figures 8.1 and 8.2.
fig1
fig2
As we see, for this case, r is just the direct perpendicular distance from
the slit line to the nearest point on the wavefront.
How likely is each result? In this case, the probability of getting result p1 is

|A1|² / (|A1|² + |A2|²)    (8.20)
60
the probability of getting result p2 is:

|A2|² / (|A1|² + |A2|²)    (8.21)

and the probability of getting either p1 or p2 is:

|A1|²/(|A1|² + |A2|²) + |A2|²/(|A1|² + |A2|²) = (|A1|² + |A2|²)/(|A1|² + |A2|²) = 1 = 100%    (8.22)
A similar sort of rule applies to superposition states having components cor-
responding to definite values of energy. As an example, consider the case of
the infinite square-well. We found that the states of definite energy are
Ψn(x, t) = √(2/L) sin(nπx/L) e^{−iEn t/h̄}    (8.23)
However, since the Schrödinger equation is linear and homogeneous, superpo-
sitions of these are possible valid quantum states. Such superposition states
will then not correspond to definite energy. Consider, for example, the su-
perposition state
Ψ(x, t) = c1 Ψ1 (x, t) + c2 Ψ2 (x, t). (8.24)
(Here, c1 and c2 must be chosen such that Ψ(x, t) is normalized; we will worry
about the details of how to choose c1 and c2 to ensure this, later.) Thus
Ψ(x, t) = c1 √(2/L) sin(πx/L) e^{−iE1 t/h̄} + c2 √(2/L) sin(2πx/L) e^{−iE2 t/h̄}    (8.25)
If a measurement of energy is made on a particle represented by this state
function, then we definitely obtain energy equal to either E1 or E2 . Recall
En = n²π²h̄²/(2mL²)    (8.26)
The probability of getting the definite energy value E1 is

P(E1) = |c1|² / (|c1|² + |c2|²)    (8.27)

and the probability of getting the definite energy value E2 is

P(E2) = |c2|² / (|c1|² + |c2|²)    (8.28)
61
and the probability of getting either E1 or E2 is

|c1|²/(|c1|² + |c2|²) + |c2|²/(|c1|² + |c2|²) = 1 = 100%.    (8.29)
We'll work out the constant of proportionality later. Even without this, we can, of course, work out the relative probabilities (i.e., ratios of probabilities, for which the constant of proportionality cancels).
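As a small illustration (not from the notes): the sketch below evaluates the measurement probabilities (8.27)-(8.28) for the two-term superposition (8.25) and shows that, unlike a stationary state, its probability density changes in time while the total probability stays 1. The choices c1 = c2 = 1/√2 and units h̄ = m = L = 1 are arbitrary.

import numpy as np

hbar = m = L = 1.0
x = np.linspace(0, L, 500)

def psi_n(n, x, t):
    E_n = n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L) * np.exp(-1j * E_n * t / hbar)

c1, c2 = 1 / np.sqrt(2), 1 / np.sqrt(2)        # arbitrary normalized choice

P1 = abs(c1)**2 / (abs(c1)**2 + abs(c2)**2)    # equation (8.27)
P2 = abs(c2)**2 / (abs(c1)**2 + abs(c2)**2)    # equation (8.28)
print(P1, P2, P1 + P2)                          # 0.5  0.5  1.0

for t in (0.0, 0.5, 1.0):
    psi = c1 * psi_n(1, x, t) + c2 * psi_n(2, x, t)
    print(t, np.trapz(np.abs(psi)**2, x))       # total probability stays 1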
62
Of course, for this to be really useful, we need to know how to calculate A(k) for a given Ψ(x, t). For that, we need Fourier analysis; we will see how that works in the next class.
Comment: "Materialization of momentum p" does not imply materialization at a definite position. In fact, there exists no state which simultaneously allows materialization with precise p and precise x. (We will prove this later.)
In the meantime, let us look at an example of something conceptually
simpler − the evaluation of an integral like that in equation (8.33) to construct
Ψ(x, t).
Example:
Suppose we want to evaluate the functional form (in x, say at t = 0) of an equal-amplitude mix of plane waves with wave numbers in some range [k1, k2] and zero amplitude for k outside this range. Thus, the "bandwidth in wave number" for this mix is ∆k = k2 − k1. We suppose that the band of wave numbers is centered on k = 0, thus k1 = −k2. Then k2 = +∆k/2 and k1 = −∆k/2. A plot of A(k) vs. k would then be flat from −∆k/2 to +∆k/2. Figure 8.3 shows this plot.
fig3
We chose the amplitude of A(k) to be 1/√∆k so that the area under the plot of |A(k)|² is 1, for "neatness". Thus, this mix has half the plane waves propagating to the left (negative values of k) and, with equal
63
amplitudes, half to the right. We evaluate it at t = 0:

Ψ(x, t = 0) = (1/√∆k) ∫_{−∆k/2}^{+∆k/2} e^{ikx} dk    (8.35)
            = (1/√∆k) (1/ix) [e^{ikx}]_{−∆k/2}^{+∆k/2}    (8.36)
            = (2/(√∆k x)) [ (e^{i(∆k/2)x} − e^{−i(∆k/2)x}) / 2i ]    (8.37)
            = (2/(√∆k x)) sin(∆k x/2)    (8.38)
            = √∆k sin(∆k x/2) / ((∆k/2) x)    (8.39)
            = √∆k sinc(∆k x/2)    (8.40)
Schematically,
fig 4
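A quick numerical check (not from the notes) of the result (8.40): the sketch below synthesizes Ψ(x, 0) directly from the flat band of plane waves, equation (8.35), and compares it with the sinc form. The bandwidth ∆k = 4 is an arbitrary choice.

import numpy as np

dk = 4.0
k = np.linspace(-dk / 2, dk / 2, 2001)
x = np.linspace(-15, 15, 601)

# direct synthesis, equation (8.35), at t = 0
A = 1 / np.sqrt(dk)
psi_numeric = np.trapz(A * np.exp(1j * np.outer(x, k)), k, axis=1)

# closed form, equation (8.40); note np.sinc(u) = sin(pi u)/(pi u)
psi_exact = np.sqrt(dk) * np.sinc(dk * x / 2 / np.pi)

print(np.max(np.abs(psi_numeric - psi_exact)))   # small numerical error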
Question for now. (Qualitatively), what do you expect will happen to the
shape of Ψ(x, t) as t increases from zero? Why?
64
Chapter 9
9.1.1 Meaning of ∆x
Imagine preparing an ensemble of very many³ identical particles, all with the same state function, Ψ(x, t). A position materialization measurement is performed on each system. Each particle materializes at a different position; the distribution of positions follows that of Ψ*(x, t)Ψ(x, t). ∆x represents the standard deviation of this distribution of positions. Applied to the state function for a single particle before materialization on position measurement,
1
Of course, the procedure described is "in principle", or in the nature of a "thought experiment". Still, it defines
the meaning of ∆x.
2
The only exception would be if the state function is an infinitely narrow spike function at the time of measurement,
which never occurs in practice, but which represents an idealization we will further discuss later on.
3
Technically, an infinite number
65
2∆x represents the “uncertainty” in our knowledge of where we expect the
particle to materialize. A schematic “cartoon” of this is shown in figure 1.
fig1
Note, then, that in theory ∆x is equal to the standard deviation of Ψ*(x, t)Ψ, not that of Ψ.
It is also true, as we've seen, that for a particle in partial reality and represented by state function Ψ(x, t), generally, if a momentum-producing measurement is made, the value of momentum materialized is unpredictable or "uncertain" over a range called 2∆p. The "in-principle" experimental procedure for determining ∆p is as follows.
9.1.2 Meaning of ∆p
Imagine preparing an ensemble of very many identical particles, all with the same state function, Ψ(x, t). A momentum materializing measurement is performed for each system. Generally, each particle will materialize with a different value of momentum. The symbol ∆p represents the standard deviation of this distribution of momenta. Applied to a single particle with state function Ψ before momentum materialization, 2∆p represents the "uncertainty" in our knowledge of what value of momentum we expect the particle to materialize with. Special cases are those of the plane-wave states

Ψ(x, t) = Ae^{i(px−Et)/h̄}  and  Ae^{−i(px+Et)/h̄}    (9.1)

For each of these states the momentum is definite: +p x̂ and −p x̂, respectively.
66
9.2 Example: Plane-Wave States
Consider the plane-wave states Ψ(x, t) = Ae^{i(px−Et)/h̄} and Ae^{−i(px+Et)/h̄}. As
67
i.e. ∆x∆k ≈ π. Thus,

∆px ∆x = (h̄∆k)(∆x) ≈ πh̄ ≥ h̄/2,    (9.8)

consistent with the uncertainty principle.
Notice an interesting point: since ∆x ∝ 1/∆p, the narrower we make the wave-number bandwidth ∆k, the broader we make the positional uncertainty ∆x. Conversely, the broader we make ∆k, the narrower we make ∆x. The Heisenberg Uncertainty Principle says that this is general − for any class of state functions,

∆x ≥ h̄/(2∆p) ∝ 1/∆px    (9.9)
∆px ≥ h̄/(2∆x) ∝ 1/∆x    (9.10)
More examples will be shown in the near future. Actually, the Heisenberg Uncertainty Principle holds in three dimensions − so there are really three relations:

∆x∆px ≥ h̄/2    (9.11)
∆y∆py ≥ h̄/2    (9.12)
∆z∆pz ≥ h̄/2    (9.13)

We illustrate this next with a physical example.
68
where k = p/h̄ and ω = E/h̄. In this state, px = h̄k (definitely) and py is definitely zero. However, ∆x = ∞ and ∆y = ∞ (the plane wave extends over all x and y). So ∆y∆py = "∞ · 0", which is not inconsistent with the Heisenberg Uncertainty Principle ∆y∆py ≥ h̄/2. Now suppose we attempt to measure y with an aperture:
fig2
Immediately after "passing through the aperture", ∆y = a, the aperture size. Thus, we have reduced ∆y. This means that the particle now has a new state function − this one with ∆y ≈ a. (This is general − a measurement or a disturbance on the particle causes a change in the state function. More on this later.) In fact, we can reduce ∆y as much as we like (by making a as small as we like).
However, in the new state function, ∆py is no longer zero! This is because the state function has diffracted through the aperture!
fig3
The arrows show "possible trajectories" to points on the screen. (Remember, though, a single trajectory is not actually taken unless we force one by measurement − recall the Feynman double-slit experiment − the situation is similar here.) But p = h/λ, so, since ∆py = p∆θ = p(λ/a), we have ∆py = (h/λ)(λ/a) = h/a. Now, at any time after passage through the aperture, ∆y > a, so on the far side of the aperture the new state function obeys

∆y∆py ≥ h ≥ h̄/2    (9.15)

Thus, our attempt to reduce ∆y in the new state function has been compensated by an increase in ∆py, in accordance with the Heisenberg Uncertainty Principle.
We've established that if you prepare an ensemble of very many identical particles all with the same state function Ψ(x, t), position-producing measurements for each will generally produce different materialization places, spread over a "range" 2∆x. It is often important to know what average would be
69
obtained if all these position measurements were (hypothetically) to be per-
formed. This average is called the expectation value of x, denoted by hxi
or x̄.
It is easy to find a general formula for ⟨x⟩. As your text shows (pp. 5-10), if P(x) is the probability density function, then

⟨x⟩ = ∫_{−∞}^{+∞} x P(x) dx    (9.16)

similarly,

⟨x²⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) x² Ψ(x, t) dx    (9.18)

and, for an analytic function f(x),

⟨f(x)⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) f(x) Ψ(x, t) dx    (9.19)
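For concreteness (not from the notes), the sketch below evaluates ⟨x⟩, ⟨x²⟩ and ∆x on a grid for an example normalized Gaussian state; the width σ is an arbitrary choice.

import numpy as np

x = np.linspace(-10, 10, 2001)
sigma = 1.5
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

P = np.abs(psi)**2                       # probability density Psi* Psi
mean_x  = np.trapz(x * P, x)             # <x>,   equation (9.16)
mean_x2 = np.trapz(x**2 * P, x)          # <x^2>, equation (9.18)
delta_x = np.sqrt(mean_x2 - mean_x**2)

print(mean_x, delta_x)                   # ~0 and ~sigma/sqrt(2)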
70
so

d⟨x⟩/dt = (ih̄/2m) ∫_{−∞}^{+∞} x (∂/∂x)[Ψ* ∂Ψ/∂x − (∂Ψ*/∂x) Ψ] dx    (9.24)

Integrating this by parts and using Ψ → 0 as x → ±∞ (and also ∂x/∂x = 1) yields

d⟨x⟩/dt = −(ih̄/2m) ∫_{−∞}^{+∞} [Ψ* ∂Ψ/∂x − (∂Ψ*/∂x) Ψ] dx    (9.25)

The second term can be integrated by parts again; again throwing away the boundary term, it is equal to the first term, so

d⟨x⟩/dt = −(ih̄/m) ∫_{−∞}^{+∞} Ψ* ∂Ψ/∂x dx    (9.26)

so, since ⟨p⟩ = m d⟨x⟩/dt,

⟨p⟩ = (h̄/i) ∫_{−∞}^{+∞} Ψ*(x, t) ∂Ψ(x, t)/∂x dx    (9.27)
We can write this as

⟨p⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) · (h̄/i)(∂/∂x) Ψ(x, t) dx    (9.28)

along with

⟨x⟩ = ∫_{−∞}^{+∞} x |Ψ(x, t)|² dx = ∫_{−∞}^{+∞} Ψ*(x, t) x Ψ(x, t) dx    (9.29)

From these we say that, in the "position basis" (what we have been working in so far), the (multiplicative) "operator" x̂ = x "represents" position, and the operator p̂ = (h̄/i) ∂/∂x "represents" momentum. (We will see in the future that the utility of this operator concept, seemingly superfluous here, has significance far beyond that for these simple expectation values.)
Similarly, as you can show, if f(p) is any polynomial in p (e.g. KE = p²/2m), then

⟨f(p)⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) f((h̄/i) ∂/∂x) Ψ(x, t) dx    (9.30)

Note also the action of p̂ (or p_op or p̃) on a plane-wave state:

p̂ e^{i(px−Et)/h̄} = (h̄/i)(∂/∂x) e^{i(px−Et)/h̄} = (h̄/i)(ip/h̄) e^{i(px−Et)/h̄}    (9.31)
                = p e^{i(px−Et)/h̄}    (9.32)
71
We see that, for a plane-wave state, p̂ on it is equivalent to multiplication by
the number p.
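A small numerical illustration (not from the notes): for a Gaussian packet modulated by a plane wave e^{ik0x}, the formula (9.27) should give ⟨p⟩ ≈ h̄k0. The grid and the values of k0 and σ below are arbitrary; units h̄ = 1.

import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4001)
k0, sigma = 2.0, 1.0

psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)

dpsi_dx = np.gradient(psi, x)                           # numerical d(psi)/dx
p_expect = np.trapz(np.conj(psi) * (hbar / 1j) * dpsi_dx, x)   # equation (9.27)

print(p_expect.real, hbar * k0)                         # imaginary part ~ 0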
Now, any function of p̂ is also an operator. For example, consider the kinetic energy K = p²/2m. Then

K̂ = p̂²/2m = (1/2m) ((h̄/i)(∂/∂x)) ((h̄/i)(∂/∂x)) = −(h̄²/2m) ∂²/∂x²    (9.33)
Thus (in one dimension),

⟨K⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) K̂ Ψ(x, t) dx    (9.34)
    = ∫_{−∞}^{+∞} Ψ*(x, t) [−(h̄²/2m) ∂²Ψ(x, t)/∂x²] dx    (9.35)

which is

⟨K⟩ = −(h̄²/2m) ∫_{−∞}^{+∞} Ψ*(x, t) ∂²Ψ(x, t)/∂x² dx    (9.36)
Now consider the total energy E = K + V. Then

Ê = −(h̄²/2m) ∂²/∂x² + V(x, t)    (9.37)

And consider the Schrödinger equation

ih̄ ∂Ψ(x, t)/∂t = −(h̄²/2m) ∂²Ψ(x, t)/∂x² + V(x, t)Ψ(x, t)    (9.38)

which is then

ÊΨ(x, t) = ih̄ ∂Ψ(x, t)/∂t    (9.39)

Thus, the energy operator is also represented by

Ê = ih̄ ∂/∂t    (9.40)
so

⟨E⟩ = ∫_{−∞}^{+∞} Ψ*(x, t) [−(h̄²/2m) ∂²/∂x² + V(x, t)] Ψ(x, t) dx    (9.41)
    = −(h̄²/2m) ∫_{−∞}^{+∞} Ψ*(x, t) ∂²Ψ(x, t)/∂x² dx    (9.42)
      + ∫_{−∞}^{+∞} Ψ*(x, t) V(x, t) Ψ(x, t) dx    (9.43)

and also

⟨E⟩ = ih̄ ∫_{−∞}^{+∞} Ψ*(x, t) ∂Ψ(x, t)/∂t dx    (9.44)
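A rough numerical check of (9.36) (not from the notes): for an infinite-well eigenfunction, V = 0 inside the well, so ⟨K⟩ should equal En. The sketch uses a crude finite-difference second derivative and the arbitrary choices n = 2, h̄ = m = L = 1.

import numpy as np

hbar = m = L = 1.0
n = 2
x = np.linspace(0, L, 2001)
psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

d2psi = np.gradient(np.gradient(psi, x), x)      # crude second derivative
K_expect = np.trapz(psi * (-hbar**2 / (2 * m)) * d2psi, x)   # equation (9.36)

E_n = n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)
print(K_expect, E_n)                             # close agreement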
73
Chapter 10
74
δ > λ / (2 sin(φ/2))    (10.1)

Thus, if the limit of resolution on where the electron materializes is

∆x = λ / (2 sin(φ/2))    (10.2)
75
(e.g., to the electron being forced to materialize at a point) is indivisible packets, each of momentum p = h/λ. If it were possible that pγ < h/λ for wavelength λ, then we could scatter photons from electrons with a small enough unknowable x-momentum transfer to violate the Heisenberg Uncertainty Principle.
Many examples of this sort have been investigated; a real possibility of violating the Heisenberg Uncertainty Principle has never been found.
We will begin by recalling that any function f (x) with period 2π can be
expanded as
a0
f (x) = + a1 cos x + a2 cos(2x) + a3 cos(3x) + a4 cos(4x) + · · ·
2
+ b1 sin x + b2 sin(2x) + b3 sin(3x) + b4 sin(4x) + · · ·
or

f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nx) + Σ_{n=1}^{∞} bn sin(nx).    (10.8)
1
Of course, we need also to be assured that, if given an arbitrary reasonable state function Ψ(x, t), that A(k)
exists for it. Fourier analysis answers this question, as we will see.
76
If the function is periodic with a different period (call it 2L), then the expansion becomes

f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/L) + Σ_{n=1}^{∞} bn sin(nπx/L)    (10.9)

We recall that the coefficients are found from

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx    (10.10)
bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx    (10.11)
where the integrals could be over any full period of f(x); I've arbitrarily chosen −L → L. Recall also that key to establishing the formulas for the coefficients are the orthogonality relations

∫_{−L}^{L} cos(mπx/L) cos(nπx/L) dx = 0   if m ≠ n    (10.12)
∫_{−L}^{L} sin(mπx/L) cos(nπx/L) dx = 0   if m ≠ n    (10.13)
∫_{−L}^{L} sin(mπx/L) sin(nπx/L) dx = 0   if m ≠ n    (10.14)
∫_{−L}^{L} sin²(nπx/L) dx = ∫_{−L}^{L} cos²(nπx/L) dx = L    (10.15)

Thus, the set of functions

1/√(2L),  (1/√L) sin(nπx/L),  (1/√L) cos(nπx/L)    (10.16)

forms an orthonormal set on any interval of length 2L.
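As a sanity check (not from the notes), the orthonormality of the set (10.16) can be verified numerically on [−L, L]; the value of L and the grid size below are arbitrary.

import numpy as np
from itertools import product

L = 1.0
x = np.linspace(-L, L, 20001)

def basis(n, kind):
    if kind == "const":
        return np.full_like(x, 1 / np.sqrt(2 * L))
    if kind == "sin":
        return np.sin(n * np.pi * x / L) / np.sqrt(L)
    return np.cos(n * np.pi * x / L) / np.sqrt(L)

funcs = [(0, "const")] + [(n, k) for n in range(1, 4) for k in ("sin", "cos")]
for (n1, k1), (n2, k2) in product(funcs, funcs):
    overlap = np.trapz(basis(n1, k1) * basis(n2, k2), x)
    expected = 1.0 if (n1, k1) == (n2, k2) else 0.0
    assert abs(overlap - expected) < 1e-6
print("orthonormality of the set (10.16) verified on [-L, L]")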
77
where u(x), v(x), w(x), s(x) are real. f (x) and g(x) are said to be orthogonal
on the interval [a, b] if
∫_a^b f*(x) g(x) dx = ∫_a^b g*(x) f(x) dx = 0.    (10.19)

then,

f(x) = Σ_{n=−∞}^{+∞} cn e^{inx} ?    (10.26)
78
Let us see if we can find coefficients cn consistent with this. Assuming that equation (10.25) is true, by integrating both sides we get

(1/2π) ∫_0^{2π} f(x) dx = (1/2π) ∫_0^{2π} c0 dx + (1/2π) ∫_0^{2π} c1 e^{ix} dx + (1/2π) ∫_0^{2π} c−1 e^{−ix} dx + · · ·

However, all the integrals on the right except the first vanish; for example

∫_0^{2π} e^{ix} dx = (1/i) [e^{ix}]_0^{2π} = (1/i)(e^{2πi} − e^0) = (1/i)(1 − 1) = 0    (10.27)

Thus,

c0 = (1/2π) ∫_0^{2π} f(x) dx    (10.28)
To find cn, multiply equation (10.25) by e^{−inx} and integrate:

(1/2π) ∫_0^{2π} f(x) e^{−inx} dx = (c0/2π) ∫_0^{2π} e^{−inx} dx + (c1/2π) ∫_0^{2π} e^{−inx} e^{ix} dx + (c−1/2π) ∫_0^{2π} e^{−inx} e^{−ix} dx + · · ·
                                 = Σ_{m≠n} (cm/2π) ∫_0^{2π} e^{i(m−n)x} dx + (cn/2π) ∫_0^{2π} e^{i(n−n)x} dx

These integrals are all zero except the last one, which is 2π; thus

cn = (1/2π) ∫_0^{2π} f(x) e^{−inx} dx    (10.29)
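A short numerical illustration of (10.29) (not from the notes): compute the complex Fourier coefficients of a smooth 2π-periodic example function and reconstruct it from a finite number of them. The choice f(x) = e^{sin x} and the truncation N are arbitrary.

import numpy as np

x = np.linspace(0, 2 * np.pi, 4001)
f = np.exp(np.sin(x))                          # example smooth 2*pi-periodic function

def c(n):
    # complex Fourier coefficient, equation (10.29)
    return np.trapz(f * np.exp(-1j * n * x), x) / (2 * np.pi)

N = 20
f_rec = sum(c(n) * np.exp(1j * n * x) for n in range(-N, N + 1))
print(np.max(np.abs(f_rec - f)))               # tiny: the series converges rapidly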
79
with

cn = (1/2ℓ) ∫_{−ℓ}^{ℓ} f(x) e^{−inπx/ℓ} dx    (10.31)

From this, if the period is L (i.e. we are considering a "reasonable" complex-valued function of real variable x such that f(x + L) = f(x)), then the series can be gotten by the replacement in our above work of 2ℓ by L (2ℓ → L):

f(x) = Σ_{n=−∞}^{+∞} cn e^{2inπx/L}    (10.32)

with

cn = (1/L) ∫_0^L f(x) e^{−2inπx/L} dx    (10.33)

It can be shown that these series converge for any "reasonable" f(x). Thus, the set {e^{inx}}, n = −∞, · · · , ∞, is a complete set.
becomes

f(x) = Σ_{n=0}^{∞} [cn e^{2inπx/L} + cn* e^{−2inπx/L}]    (10.36)

Noting that, in this case,

cn = (1/L) ∫ f(x) e^{−2inπx/L} dx
   = (1/L) ∫ f(x) cos(2nπx/L) dx − (i/L) ∫ f(x) sin(2nπx/L) dx

where (1/L) ∫ f(x) cos(2nπx/L) dx is an and (1/L) ∫ f(x) sin(2nπx/L) dx is bn. Then

cn = an − ibn    (10.37)

and

cn* = an + ibn    (10.38)
Then, equation (10.36) becomes

f(x) = Σ_{n=0}^{∞} (an − ibn)[cos(2nπx/L) + i sin(2nπx/L)] + (an + ibn)[cos(2nπx/L) − i sin(2nπx/L)]

which is

f(x) = Σ_{n=0}^{∞} 2[an cos(2nπx/L) + bn sin(2nπx/L)]    (10.39)

which (up to the conventional factors of 2 in the definitions of the coefficients) is the familiar Fourier sine and cosine series for a real function f(x).
For a complex function with period L, we saw that the Fourier expansion

f(x) = Σ_{n=−∞}^{+∞} cn e^{2inπx/L}    (10.40)

with

cn = (1/L) ∫_0^L f(x) e^{−2inπx/L} dx    (10.41)

is valid as long as f(x) is "reasonable".
What if our expansion function f(x) is not periodic? "Not periodic" means the same as "period → ∞". Therefore, we let L → ∞ in our equations above, taking the interval (length L) as (−L/2, L/2). Then, defining k ≡ 2nπ/L and ∆k ≡ 2π/L,

f(x) = Σ_k cn e^{ikx} = Σ_k (L/2π) cn e^{ikx} ∆k    (10.42)
81
We now take the limit L → ∞; the sum then becomes an integral:

f(x) = ∫_{−∞}^{+∞} (1/√2π) (L cn/√2π) e^{ikx} dk    (10.43)

Since

cn = (1/L) ∫_{−L/2}^{L/2} f(x′) e^{−2inπx′/L} dx′,    (10.44)

f(x) = (1/√2π) ∫_{−∞}^{+∞} [ (1/√2π) ∫_{−∞}^{+∞} f(x′) e^{−ikx′} dx′ ] e^{ikx} dk    (10.45)

Let us define the quantity

g(k) ≡ (1/√2π) ∫_{−∞}^{+∞} f(x) e^{−ikx} dx    (10.46)
Then we see that we have established a very interesting and very important
theorem: the Fourier Transform Theorem:
Theorem:
Suppose that f(x) is a square-integrable function, i.e.,

∫_{−∞}^{+∞} |f(x)|² dx < ∞    (10.47)
82
Chapter 11
In the last class, we established the very important Fourier Transform theorem:
Theorem: Suppose that f(x) is a complex-valued square-integrable function of real variable x − i.e. ∫_{−∞}^{+∞} |f(x)|² dx < ∞. Then there exists a function g(k) such that

f(x) = (1/√2π) ∫_{−∞}^{+∞} g(k) e^{ikx} dk    (11.1)

where

g(k) = (1/√2π) ∫_{−∞}^{+∞} f(x) e^{−ikx} dx    (11.2)

Since quantum mechanical state functions are always square-integrable, we see that this theorem addresses our concerns about the existence of "A(k)" for any state function Ψ(x, t0) at any particular time t0 − e.g. "A(k)" at time t = 0 is just (1/√2π) g(k) above. (As we will see, for a free particle, A(k) does not change in time.) Of course, both f(x) and g(k) can be complex-valued functions. We'll see later that if f(x) is square-integrable, so is g(k).
83
Example: Suppose that, at t = 0,

Ψ(x, t = 0) = f(x) = 1/√a for |x| ≤ a/2,  and 0 for |x| > a/2    (11.3)

fig 1
Then

g(k) = (1/√2π) ∫_{−∞}^{+∞} f(x) e^{−ikx} dx
     = (1/√(2πa)) ∫_{−a/2}^{a/2} e^{−ikx} dx
     = (1/√(2πa)) [−(1/ik) e^{−ikx}]_{−a/2}^{a/2}
     = (1/√(2πa)) sin(ka/2)/(k/2)

g(k) = √(a/2π) sinc(ka/2)
fig 2
Now you might think − O.K. − let me check this − if I were to take this g(k) and Fourier transform it back, I should get f(x) again − that is,

(1/√2π) ∫_{−∞}^{+∞} √(a/2π) [sin(ka/2)/(ka/2)] e^{ikx} dk = 1/√a if |x| ≤ a/2, and 0 if |x| > a/2    (11.4)

This is true, but unfortunately, actually showing this by evaluating the integral on the left is not easy and requires an advanced technique called "contour integration". We won't get involved in that. Consider, however, an example
(that we really did some time ago) that will serve as a kind of "complementary example" to the previous − say we now start with a flat distribution of wave numbers and ask what f(x) this makes ("Fourier synthesis"):
fig 3

g(k) = 1/√∆k if |k| < ∆k/2,  and 0 if |k| > ∆k/2    (11.5)
84
Then

f(x) = (1/√2π) ∫_{−∆k/2}^{∆k/2} (1/√∆k) e^{ikx} dk    (11.6)
     = √(∆k/2π) sinc(x∆k/2)    (11.7)
fig 4
Let us look at an aspect of these examples: In the first, we have for the "full width" of g(k) (spread between the first zeros on either side of the center)

∆k = 4π/a    (11.8)

If we take the full width in x (call it ∆x) as a, we then have, for this example,

∆k∆x = 4π    (11.9)

For the second example, as you can easily show, we also have, for the full widths, ∆k∆x = 4π. Thus in both examples,

∆k ∝ 1/∆x    (11.10)

and

∆k∆x ≥ 1    (11.11)
In order to see how general these are, let’s look at another example.
Example: Suppose we take a Gaussian

f(x) = N e^{−x²/(2σx²)}    (11.12)

fig 5
Here N is a normalization constant that we won't bother with yet, and σx is the standard deviation. Then

g(k) = (N/√2π) ∫_{−∞}^{+∞} e^{−x²/(2σx²)} e^{−ikx} dx    (11.13)

This integral is doable (we won't bother); the result is

g(k) = (N/√2π) √(2πσx²) e^{−σx²k²/2}    (11.14)
85
or

g(k) = N σx e^{−σx²k²/2}    (11.15)

This result is very interesting − we see that g(k) is also a Gaussian! Let's find its standard-deviation width σk. To do this, we put the result g(k) into standard Gaussian form:

g(k) = N′ e^{−k²/(2σk²)} = N σx e^{−σx²k²/2}    (11.16)

Thus

k²/(2σk²) = k²σx²/2    (11.17)
σk² = 1/σx²    (11.18)
σk = 1/σx    (11.19)
Again the widths in x and k space are reciprocally related:
fig 6
We will have you work out a third example for homework: again you will find that ∆x ∝ 1/∆k. This should make it plausible that this is a general property. The statement σx σk ∼ 1 is called the "Fourier Bandwidth Theorem". Next, we will indicate a more general line of reasoning leading to this important mathematical result.
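A quick numerical check of the bandwidth theorem for the Gaussian case (not from the notes): compute g(k) from (11.2) by direct quadrature and extract its width. The width σx and the grids are arbitrary choices.

import numpy as np

sigma_x = 0.7
x = np.linspace(-30, 30, 3001)
k = np.linspace(-10, 10, 1001)
f = np.exp(-x**2 / (2 * sigma_x**2))          # N = 1; the prefactor is irrelevant here

# g(k) = (1/sqrt(2*pi)) * integral f(x) exp(-i k x) dx, equation (11.2)
g = np.trapz(f[None, :] * np.exp(-1j * np.outer(k, x)), x, axis=1) / np.sqrt(2 * np.pi)

# standard-deviation width of the (Gaussian-shaped) g(k)
sigma_k = np.sqrt(np.trapz(k**2 * np.abs(g), k) / np.trapz(np.abs(g), k))
print(sigma_k, 1 / sigma_x)                   # both ~ 1.43: sigma_k = 1/sigma_x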
86
peaked at x = 0. Suppose we seek to understand this qualitatively. We ask: Why is f(x) biggest at x = 0? It's biggest at x = 0 because the contributions e^{ikx} for all the different k's in the wave-number spectrum are in phase at x = 0 (all e^{ik·0} = 1), so the resultant is big there. However, at any other x ≠ 0, f(x) must be smaller than f(0). Why? Consider the synthesis at x = x0 ≠ 0. The integral

f(x0) = ∫_{−∞}^{+∞} g(k) e^{ikx0} dk    (11.21)
87
11.3 Heisenberg Uncertainty Principle
Let us now apply the foregoing to quantum mechanics. Let φ(k) be the Fourier transform of ψ(x) ≡ Ψ(x, t0) for some particular time t0. Then we see that, in general, for state functions,

∆xψ ∝ 1/∆kφ    (11.25)

where ∆xψ is the x-width of ψ(x) and ∆kφ is the k-width of φ(k). Now, for the Gaussian case, we saw that, if we use ∆xψ = σx and ∆kφ = σk, then ∆xψ ∆kφ = 1 (Gaussian). It turns out that the Gaussian case achieves the minimum possible value of σx σk. (We will prove this later in the course.) However, in quantum mechanics, the Heisenberg Uncertainty Principle refers to probabilities, not to state functions themselves. Now, the probability of materialization per unit x as a function of x is

dP(x)/dx = |ψ(x)|²    (11.26)

Likewise, the probability on momentum measurement of materializing with momentum p, per unit interval of k, is

dP(k)/dk = |φ(k)|²,   φ(k) ≡ g(k)    (11.27)
Now, for the Gaussian

ψ(x) = N e^{−x²/(2σx²)}  ⇒  |ψ(x)|² = |N|² e^{−x²/σx²}    (11.28)

Thus,

σ^x_{|ψ|²} = (1/√2) σ^x_ψ    (11.29)

Likewise

σ^k_{|φ|²} = (1/√2) σ^k_φ    (11.30)

Thus, letting

∆x ≡ σ^x_{|ψ|²}    (11.31)
∆k ≡ σ^k_{|φ|²}    (11.32)
88
we have

∆x∆k = 1/2    (11.33)

Since p = h̄k, we have, for the Gaussian case,

∆x∆px = h̄/2    (11.34)

As I remarked above, this product is the minimum over all cases, achieved for the Gaussian. Thus, in general,

∆x∆px ≥ h̄/2    (11.35)

This, again, is the Heisenberg Uncertainty Principle. We have already looked at physical examples (single-aperture, Bohr microscope, etc.) that explicate or illustrate its physical meaning.
Note: ∆x and ∆p in the Heisenberg Uncertainty Principle refer to standard deviations (σ's); strictly, these are defined by ∆x = ⟨(x − x̄)²⟩^{1/2}, where

⟨x⟩ = x̄ = ∫ ψ*(x) x ψ(x) dx    (11.36)

∆x = [∫ ψ*(x) (x − x̄)² ψ(x) dx]^{1/2}    (11.37)
90
In quantum mechanics there is, however, one class of very important ex-
ceptions to this warning: As we will see shortly (in the coming section) “Gen-
eral Solution to Schrödinger equation for a free particle”’, for a free particle
(only) in quantum mechanics, g(k, t) is easily predictable from g(k, t = 0),
which (as we’ll see) allows a plane wave expansion for any free particle state
function Ψfree (x, t). That simplifies things considerably for the case of a free
particle.
91
We see that the object δ(x0 − x) is the Dirac delta function, since equation
(11.50) (“sifting property”) defines the delta function.
For graduate students2 : There is “another” way to see that equation
(11.49) is a representation of the Dirac delta function. Consider the function
gb(x) = 1/b for |x| < b/2,  and 0 for |x| > b/2    (11.51)
fig 9 R +∞
For this function −∞ gb (x) dx = 1. If we take the limit of this as b → 0,
we get δ(x):
δ(x) = lim gb (x) (11.52)
b→0
To check that this reproduces equation (11.50), note that, for "any" arbitrary function f(x),

lim_{b→0} ∫_{−∞}^{+∞} gb(x) f(x) dx = lim_{b→0} (1/b) ∫_{−b/2}^{b/2} f(x) dx = f(0)    (11.53)

where we take the limit after doing the integral, instead of (as we "should") before doing the integral. (That we can be cavalier like this about interchanging the order of integrals and limits was "creatively intuited" by Paul Dirac in his famous 1930 book on quantum mechanics; it was finally proved logically rigorous by Laurent Schwartz in the late 1940s.)
Now consider the Fourier transform of gb(x):

g̃b(k) = (1/√2π) ∫_{−∞}^{+∞} gb(x) e^{−ikx} dx
      = (1/√2π) (1/b) ∫_{−b/2}^{b/2} e^{−ikx} dx
      = (1/√2π) sin(kb/2)/(kb/2)

thus

lim_{b→0} g̃b(k) = 1/√2π    (11.54)
2
and also for interested undergrads
92
Now, if we take the Fourier transform of lim_{b→0} g̃b(k), we must get back lim_{b→0} gb(x) = δ(x). Doing this:

δ(x) = F[lim_{b→0} g̃b(k)] = F[1/√2π] = (1/√2π) ∫_{−∞}^{+∞} (1/√2π) e^{ikx} dk    (11.55)

which corroborates our previous result that

δ(x) = (1/2π) ∫_{−∞}^{+∞} e^{ikx} dk    (11.56)

Similarly,

δ(x − x0) = (1/2π) ∫_{−∞}^{+∞} e^{ik(x−x0)} dk    (11.57)

δ(k − k0) = (1/2π) ∫_{−∞}^{+∞} e^{iu(k−k0)} du    (11.58)
(end of section “for graduate students”.)
As we know, all free-particle state functions must solve and obey the free-particle Schrödinger equation

−(h̄²/2m) ∂²Ψ(x, t)/∂x² = ih̄ ∂Ψ(x, t)/∂t    (11.59)

In the past, we put together a fairly general solution of this as a superposition of plane waves,

Ψ(x, t) = ∫_{k1}^{k2} A(k) e^{i(kx − h̄k²t/2m)} dk    (11.60)

We now ask if all free-particle solutions are of this form, or if there are additional solutions. The following very nice treatment (from Ohanian³) answers this question.
3
Ohanian, Hans. Principles of Quantum Mechanics. Benjamin Cummings, 1989.
93
We will now find the general solution of the Schrödinger wave equation for a free particle. By Fourier's theorem, at one given time t, any normalizable wave function can be written as

Ψ(x, t) = (1/√2π) ∫_{−∞}^{+∞} g(k, t) e^{ikx} dk    (11.61)

We have indicated a time dependence in the Fourier transform g(k, t), because although Ψ(x, t) can always be expressed in the form given in the previous equation, the Fourier transforms at different times will be different. We call g(k, t) the amplitude in momentum space. Next, we have to find g(k, t). For this we substitute the above equation into the Schrödinger equation:

−(h̄²/2m) (1/√2π) ∫_{−∞}^{+∞} (−k²) g(k, t) e^{ikx} dk = ih̄ (1/√2π) ∫_{−∞}^{+∞} [∂g(k, t)/∂t] e^{ikx} dk

If we compare the Fourier transforms, or the coefficients of e^{ikx}, on both sides of this equation, we obtain

(h̄²k²/2m) g(k, t) = ih̄ ∂g(k, t)/∂t    (11.62)

This has the obvious solution

g(k, t) = g(k, 0) e^{−ih̄k²t/2m}    (11.63)

or

g(k, t) = g(k, 0) e^{−iEt/h̄}    (11.64)

where

E = h̄²k²/2m    (11.65)

is the energy that corresponds to the momentum p = h̄k. Equation (11.61) then becomes

Ψ(x, t) = (1/√2π) ∫_{−∞}^{+∞} g(k, 0) e^{i(kx − h̄k²t/2m)} dk    (11.66)

This is the general solution of Schrödinger's wave equation for a free particle. The amplitude g(k, 0) is arbitrary, but must satisfy the normalization condition

∫_{−∞}^{+∞} |g(k, 0)|² dk = 1    (11.67)
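The result (11.63)/(11.66) lends itself directly to computation. The sketch below (not from the notes) uses numpy's FFT as the Fourier transform: the initial amplitude g(k, 0) is multiplied by e^{−ih̄k²t/2m} and transformed back. The packet parameters and units h̄ = m = 1 are arbitrary choices.

import numpy as np

hbar = m = 1.0
N, Lbox = 4096, 200.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # wave-number grid

sigma, k0 = 2.0, 1.5                             # arbitrary initial Gaussian packet
psi0 = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)

def evolve(psi, t):
    # g(k, t) = g(k, 0) exp(-i hbar k^2 t / 2m), equation (11.63)
    g0 = np.fft.fft(psi)
    return np.fft.ifft(g0 * np.exp(-1j * hbar * k**2 * t / (2 * m)))

psi_t = evolve(psi0, 20.0)
print(np.trapz(np.abs(psi_t)**2, x))             # norm stays ~ 1
print(np.trapz(x * np.abs(psi_t)**2, x))         # <x> moves to ~ hbar*k0*t/m = 30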
95
Chapter 12
We showed that, for a free particle (only), the most general state function (the most general solution to the free-particle Schrödinger equation) is

Ψ(x, t) = (1/√2π) ∫_{−∞}^{+∞} g(k, 0) e^{i(kx − h̄k²t/2m)} dk    (12.1)

where g(k, 0) is the Fourier transform of Ψ(x, t = 0). As we will illustrate by example shortly, this is clearly the formal solution to the free-particle initial-value problem (find Ψ(x, t) given Ψ(x, t = 0)) in quantum mechanics. Prior to doing an example of this, we first note an interesting fact about it: Evaluating equation (12.1) at t = 0 yields

Ψ(x, t = 0) = (1/√2π) ∫_{−∞}^{+∞} g(k, 0) e^{ikx} dk    (12.2)

But, by Fourier's theorem, any continuous square-normalizable function of x can be represented in this form with the proper g(k, 0). Therefore, we have now shown something we assumed in the past − that, for a free particle, Ψ(x, t = 0) can be (depending on the physical/experimental situation) any continuous square-normalizable function of x. (Of course, given Ψ(x, t = 0), Ψ(x, t) is not arbitrary − it is controlled by the Schrödinger equation.) Now we discuss examples of the initial-value problem:
96
Example: Suppose Ψ(x, t = 0) = cos(k0x) and the represented particle is free. What is the subsequent Ψ(x, t)?
Solution: Following our prescription, we Fourier analyze Ψ(x, t = 0) over the set {e^{ikx}}. This Fourier analysis is

cos(k0x) = (1/2) e^{ik0x} + (1/2) e^{−ik0x}    (12.3)

Note: This also comes out of formal Fourier series: setting up

f(x) = cos(k0x) = Σ_{n=−∞}^{+∞} cn e^{ikn x}
97
Suppose Ψ(x, t = 0) is a "flat-top function" of full width 2a (perhaps the result of a measurement):
fig 1
Find Ψ(x, t).
Solution: First, normalize: A = 1/√2a. Then find g(k, 0):

g(k, 0) = (1/√2π)(1/√2a) ∫_{−a}^{a} e^{−ikx} dx = (1/√(πa)) sin(ka)/k = √(a/π) sinc(ka)    (12.6)

then

Ψ(x, t) = (1/√2π) ∫_{−∞}^{+∞} √(a/π) sinc(ka) e^{i(kx − h̄k²t/2m)} dk    (12.7)

The integral can be evaluated numerically; the result looks like we would expect:
fig 2
Looking at limiting cases is instructive:
a) lim_{a→0}: Ψ(x, t = 0) is a spike (∆x → 0), Ψ(x, t = 0) → δ(x). Then sin(ka) ≈ ka, so

g(k) ≈ √(a/π)

and this is flat (∆k → ∞). Note that

F[δ(x − x0)] = (1/√2π) ∫_{−∞}^{+∞} δ(x − x0) e^{−ikx} dx = (1/√2π) e^{−ikx0}    (12.8)

and this is flat (in magnitude).
b) lim_{a→∞}: then

g(k, 0) = √(a/π) sinc(ka)

has its maximum at k = 0; the first zero is at ka = π ⇒ k1 = π/a → 0 ⇒ g(k, 0) ∝ δ(k) (∆x → ∞, ∆k → 0).
Example: In the past we had Ψ(x, t = 0) = Ae^{−ax²}; then I told you that Ψ(x, t) was also a Gaussian, but with a standard deviation σ that gets wider in time. (This is how I get it; in fact, it's Griffiths 2.22.)
98
12.1.1 State Function of Definite Momentum
This should be

Ψ(x, t = 0) = A δ(x − x0)    (12.12)

(a spike in position at x0); again we don't yet worry about the normalization. Then

g(k) = (1/√2π) ∫_{−∞}^{+∞} A δ(x − x0) e^{−ikx} dx    (12.13)
     = (A/√2π) ∫_{−∞}^{+∞} δ(x − x0) e^{−ikx} dx    (12.14)
     = (A/√2π) e^{−ikx0}    (12.15)

Therefore,

|g(k)|² = |A|²/2π    (12.16)
99
this is flat in k, as we would expect by the bandwidth theorem. Further, we should have

A δ(x − x0) = Ψ(x, t = 0)
            = (1/√2π) ∫_{−∞}^{+∞} g(k, 0) e^{ikx} dk
            = (1/√2π) ∫_{−∞}^{+∞} (A/√2π) e^{−ikx0} e^{ikx} dk
            = (A/2π) ∫_{−∞}^{+∞} e^{ik(x−x0)} dk
            = A δ(x − x0)
100
that |A(k)|² is indeed proportional to it but not equal − we were merely missing a factor (A(k) = (1/√2π) g(k), and it is g(k) that is normalized).
Now, in quantum mechanics, it is customary to recast equations (12.18) and (12.19) as integrals over a measurable quantity (momentum) rather than over k. If we do this, equations (12.18) and (12.19) become

Ψ(x, t) = (1/√2π)(1/h̄) ∫_{−∞}^{+∞} g[p(k), t] e^{ipx/h̄} dp    (12.21)

g[p(k), t] = (1/√2π) ∫_{−∞}^{+∞} Ψ(x, t) e^{−ipx/h̄} dx    (12.22)
While correct, part of the symmetry between equations (12.21) and (12.22) is lost in this form. Therefore, it is conventional to rescale g(k, t) by defining a function φ(p, t) as

φ(p, t) ≡ (1/√h̄) g(k, t)    (12.23)

In terms of φ, we then have

Ψ(x, t) = (1/√(2πh̄)) ∫_{−∞}^{+∞} φ(p, t) e^{ipx/h̄} dp    (12.24)

φ(p, t) = (1/√(2πh̄)) ∫_{−∞}^{+∞} Ψ(x, t) e^{−ipx/h̄} dx    (12.25)

which restores the symmetry. Since

∫_{−∞}^{+∞} |g(k, t = 0)|² dk = 1    (12.26)

it follows that

∫_{−∞}^{+∞} |φ(p, t = 0)|² dp = 1    (12.27)

(show this), and also

∫_{−∞}^{+∞} |φ(p, t)|² dp = 1    (12.28)

Thus, φ*(p, t)φ(p, t) dp is the probability that the momentum will materialize in the range dp centered on p.
101
The function φ(p, t) is called the “momentum-space state function” or “the
state function in momentum-space”. Since, by equation (12.24), we can re-
construct Ψ(x, t) from knowledge of φ(p, t), φ(p, t) provides for any system,
a description completely equivalent to that of Ψ. Thus, it is possible to com-
pletely do quantum mechanics in momentum space without ever mentioning
Ψ. We do not pursue this in this course.
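As a concrete illustration (not from the notes), the sketch below computes φ(p, 0) from Ψ(x, 0) by direct quadrature of (12.25) for a moving Gaussian packet, and checks the normalization (12.27). The packet parameters and units h̄ = 1 are arbitrary.

import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 2001)
p = np.linspace(-8, 8, 401)

sigma, p0 = 1.0, 2.0
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * p0 * x / hbar)

# phi(p) = (1/sqrt(2*pi*hbar)) * integral Psi(x) exp(-i p x / hbar) dx, equation (12.25)
phase = np.exp(-1j * np.outer(p, x) / hbar)
phi = np.trapz(phase * psi[None, :], x, axis=1) / np.sqrt(2 * np.pi * hbar)

print(np.trapz(np.abs(phi)**2, p))       # ~ 1, equation (12.27)
print(p[np.argmax(np.abs(phi))])         # |phi|^2 peaks near p0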
If a system is in state Ψ(x, t) [φ(p, t)] and if a measurement is performed to
produce momentum at time t, the probability that the momentum produced
is in the infinitesimal interval dp centered on p is
P (p, t) dp = φ∗ (p, t)φ(p, t) dp (12.29)
We will take this as an axiom of quantum mechanics for the special case
of free particle. We note that, since
iE
φfree (p, t) = φfree (p, 0)e− h̄ t (12.30)
φ∗free (p, t)φfree (p, t) = φ∗free (p, 0)φfree (p, 0) (12.31)
Let us check this, say at t = 0. Then if φ(p) ≡ φ(p, t = 0), we must have
Z +∞
hpit=0 = φ∗ (p)pφ(p) dp (12.33)
−∞
103
By integrating by parts (the boundary term [Ψ*Ψ]_{−∞}^{+∞} vanishes),

⟨p⟩* = ∫_{−∞}^{+∞} Ψ (h̄/(−i)) ∂Ψ*/∂x dx = −(h̄/(−i)) ∫_{−∞}^{+∞} Ψ* ∂Ψ/∂x dx    (12.40)
     = ∫_{−∞}^{+∞} Ψ* (h̄/i) ∂Ψ/∂x dx    (12.41)

⟨p⟩* = ⟨p⟩    (12.42)

So, in spite of the i in the operator, ⟨p⟩ is real, as it must be.
In fact, the expectation value of any operator that represents a physical quantity (position, momentum, energy, angular momentum, etc.) must, of course, be real. Operators that have real expectation values for any permissible state function are called Hermitian operators.
Theorem: Let Ω̂ be a Hermitian operator (this is true, in particular, for Ω̂ = Ĥ or p̂). Then, for any admissible state function Ψ,

∫_{−∞}^{+∞} Ψ* Ω̂Ψ dx = ∫_{−∞}^{+∞} (Ω̂Ψ)* Ψ dx    (12.43)

Proof: If Ω̂ is Hermitian,

⟨Ω̂⟩ = ⟨Ω̂⟩*
∫ Ψ* Ω̂Ψ dx = [∫ Ψ* Ω̂Ψ dx]*
            = ∫ Ψ (Ω̂Ψ)* dx
            = ∫_{−∞}^{+∞} (Ω̂Ψ)* Ψ dx
104
Proof: We sketch the proof; you can carry out the details. Consider Ψ(x, t) = Ψ1 + cΨ2 for an arbitrary complex constant c. Then

∫ (Ψ1* + c*Ψ2*) Ω̂ (Ψ1 + cΨ2) dx = ∫ Ψ1* Ω̂Ψ1 dx + c*c ∫ Ψ2* Ω̂Ψ2 dx + c ∫ Ψ1* (Ω̂Ψ2) dx + c* ∫ Ψ2* (Ω̂Ψ1) dx

must be real. The first two terms on the right we know are real by the last theorem; consequently, the sum of the third and fourth terms must also be real and hence equal to its complex conjugate:

c ∫ Ψ1* (Ω̂Ψ2) dx + c* ∫ Ψ2* (Ω̂Ψ1) dx = c* ∫ Ψ1 (Ω̂*Ψ2*) dx + c ∫ Ψ2 (Ω̂*Ψ1*) dx

Applying this equation twice, once for real c = b (a real constant) and once for c = ib, and adding, yields

∫ Ψ1* Ω̂Ψ2 dx = ∫ (Ω̂Ψ1)* Ψ2 dx    (12.45)
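The Hermiticity condition (12.45) is easy to check numerically once an operator is represented as a matrix on a grid. The sketch below (not from the notes) builds a finite-difference momentum operator with periodic boundary conditions and verifies (12.45) for two arbitrary test states; all parameters are illustrative.

import numpy as np

hbar = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# p = (hbar/i) d/dx as a central-difference matrix with periodic boundary conditions
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)
P = (hbar / 1j) * D

psi1 = np.exp(-x**2) * np.exp(2j * np.pi * x / L)   # arbitrary test states
psi2 = np.exp(-(x - 1)**2 / 2)

lhs = np.sum(np.conj(psi1) * (P @ psi2)) * dx       # integral psi1* (P psi2) dx
rhs = np.sum(np.conj(P @ psi1) * psi2) * dx         # integral (P psi1)* psi2 dx
print(lhs, rhs)                                     # equal, as equation (12.45) requires
print(np.allclose(P, P.conj().T))                   # the matrix itself is Hermitian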
105
12.6.1 “Box Normalization”
= |A|² ∫_{−∞}^{+∞} e^{i(p−p0)x/h̄} dx    (12.52)
= |A|² · 2πh̄ δ(p − p0)    (12.53)

We pick A = 1/√(2πh̄); then

ψp(x) = (1/√(2πh̄)) e^{ipx/h̄}    (12.54)
Then, as you can easily show, the normalization of ψp (x) is
106
12.7 Normalization of a State of Definite Position
where c(x0) is the expansion coefficient and δ(x − x0) is the basis function. Therefore, {ψx0(x)} is a complete set. That the last expansion is valid can be seen by working with the right-hand side:

ψ(x) = ∫_{−∞}^{+∞} c(x0) δ(x − x0) dx0    (12.59)

comparing this to the known truth

ψ(x) = ∫_{−∞}^{+∞} ψ(x0) δ(x − x0) dx0    (12.60)

The set

ψp(x) = (1/√(2πh̄)) e^{ipx/h̄}    (12.61)
107
is also a complete set − as we know from Fourier's theorem, any state function ψ(x) can be expanded as

ψ(x) = ∫_{−∞}^{+∞} c(p) e^{ipx/h̄} dp    (12.62)

with coefficient

c(p) = (1/√(2πh̄)) φ(p)    (12.63)

so the completeness statement is just

ψ(x) = (1/√(2πh̄)) ∫_{−∞}^{+∞} φ(p) e^{ipx/h̄} dp    (12.64)

with φ(p) the expansion coefficient and e^{ipx/h̄} the basis function. And we know this to be true!
These expansions are analogous to those of an ordinary vector in 3-space:

A⃗ = Ax ı̂ + Ay ̂ + Az k̂    (12.65)
108
Chapter 13
Thirteenth Class
109
Chapter 14
Fourteenth Class:
110
Chapter 15
111
not conserved (classically, dp/dt = F = −∂V/∂x, so V = V(x) ⇒ dp/dt ≠ 0). However, total energy still is conserved. This motivates us to see if we can identify or construct, from the maze of all solutions to the non-free Schrödinger equation, some general properties of solutions that represent definite energy.
As we will now show, the definite-energy states obey the equation

ĤΨE(x, t) = EΨE(x, t)    (15.2)

where E is the ("sharp" or definite) value of the energy associated with the state function ΨE(x, t) that satisfies this equation. An equation of the general form

Q̂ΨQ(x, t) = CΨQ(x, t)    (15.3)

where Q̂ is any operator and C is any constant (real or complex), is called an "eigenvalue equation" in mathematical parlance. The constant C is called the "eigenvalue", and the states ΨQ(x, t) are called the "eigenstates" (of the operator Q̂). So, solutions to equation (15.2) are the eigenstates of the energy operator (the Hamiltonian). In fact, we will show that a state function is sharp in energy if and only if it obeys equation (15.2).
To show that definite-energy state functions satisfy equation (15.2), we note that, for such a state, σE², the variance of the results of energy-producing measurements, must be zero. Thus, if ΨE(x, t) is sharp in energy,

σE² = ⟨(E − ⟨E⟩)²⟩ = ∫_{−∞}^{+∞} ΨE*(x, t) [Ĥ − ⟨E⟩]² ΨE(x, t) dx    (15.4)

Now, since Ĥ is a Hermitian operator and ⟨E⟩ is a constant, Ĥ − ⟨E⟩ is also a Hermitian operator. Recall that if Q̂ is a Hermitian operator, then

⟨Q⟩* = ⟨Q⟩    (15.5)

(reality of the expectation value − required if the operator corresponds to a physical observable), which implies also that

∫_{−∞}^{+∞} [Q̂Ψ(x, t)]* Ψ(x, t) dx = ∫_{−∞}^{+∞} Ψ*(x, t) Q̂Ψ(x, t) dx    (15.6)

for any valid state function Ψ(x, t) (not just for eigenstates of Q̂), and also

∫_{−∞}^{+∞} Ψ1*(x, t) Q̂Ψ2(x, t) dx = ∫_{−∞}^{+∞} [Q̂Ψ1(x, t)]* Ψ2(x, t) dx    (15.7)
112
for any pair of valid quantum mechanical state functions Ψ1(x, t) and Ψ2(x, t). Thus, the condition in equation (15.4) is

0 = ∫_{−∞}^{+∞} ΨE* [Ĥ − ⟨E⟩] [Ĥ − ⟨E⟩] ΨE dx    (15.8)

where ΨE plays the role of Ψ1 in equation (15.7) and [Ĥ − ⟨E⟩]ΨE plays the role of Ψ2 in equation (15.7). Thus

0 = ∫_{−∞}^{+∞} {[Ĥ − ⟨E⟩]ΨE}* {[Ĥ − ⟨E⟩]ΨE} dx    (15.9)

0 = ∫_{−∞}^{+∞} |[Ĥ − ⟨E⟩]ΨE|² dx    (15.10)

Now, the integrand cannot be negative for any x, so the only way that the whole integral can vanish is if the integrand is zero for all x, which implies that

[Ĥ − ⟨E⟩]ΨE = 0  ⇒  ĤΨE = ⟨E⟩ΨE    (15.11)

Since, for this state, ⟨E⟩ = E (sharp), we have established that if ΨE is a state of sharp E, then it satisfies the eigenvalue equation.
thus,

C = ⟨E⟩    (15.15)

Consider also the variance of measurements of the energy in such a state. Recall, from chapter one of Griffiths,

σE² = σĤ² = ⟨H²⟩ − ⟨H⟩²    (15.16)
113
Now,

Ĥ²Ψ = Ĥ(ĤΨ) = Ĥ(CΨ) = C(ĤΨ) = C(CΨ) = C²Ψ    (15.17)

Therefore

⟨E²⟩ = ∫ Ψ* Ĥ²Ψ dx = C² ∫ Ψ*Ψ dx = C²    (15.18)

so

σE² = ⟨E²⟩ − ⟨E⟩² = C² − C² = 0    (15.19)

Thus, in such a state, the energy is sharp (variance zero) and C = E. So

energy is sharp ⟺ ĤΨE(x, t) = EΨE(x, t)    (15.20)
We see that Ψ obeys

ĤΨE(x, t) = EΨE(x, t)    (15.21)

if and only if the energy is sharp. We note that the same sort of thing is true for momentum: p̂ = (h̄/i) d/dx is the momentum operator. Then Ψ(x, t) obeys the eigenvalue equation

p̂Ψ(x, t) = pΨ(x, t)    (15.22)

if and only if Ψ(x, t) is a state of sharp momentum.
Proof: In our previous proofs for the operator Ĥ, replace Ĥ by p̂ and everything works analogously − in fact, the proof works for any Hermitian operator Q̂. We will have more to say about this general property of Hermitian operators and their eigenstates later in the course. For now, except for the one example immediately following, we will worry about eigenstates of energy.
Example
For a free particle, the plane-wave state

Ψ(x, t) = Ae^{i(px−Et)/h̄}    (15.23)
which is

ih̄ ∂ΨE(x, t)/∂t = EΨE(x, t)    (15.29)

which is

∂ΨE(x, t)/∂t = −(i/h̄) E ΨE(x, t)    (15.30)

or

ΨE(x, t) = f(x) e^{−iEt/h̄}    (15.31)

where f(x) is a function of x only. Notation: We call f(x) "ψ(x)". Comment: We see that ψ(x) = Ψ(x, t = 0).
Terminology: ψ(x) is called the eigenfunction (or, more correctly, the energy eigenfunction) associated with Ψ(x, t). We will see the reason for this terminology very soon. Thus,

ΨE(x, t) = ψE(x) e^{−iEt/h̄}    (15.32)
115
15.3 Time Independent Schrödinger Equation
116
depends on time, even though the integral

∫_{−∞}^{+∞} Ψ*(x, t)Ψ(x, t) dx = 1    (15.39)

does not depend on time.
However, for the eigenstates, P(x) does not depend on time. (We say it is "stationary"; hence eigenstates are also called "stationary states".)
Proof: If Ψ(x, t) is an eigenstate, Ψ(x, t) = ψ(x)e^{−iEt/h̄}. Thus,

P(x) = Ψ*(x, t)Ψ(x, t)    (15.40)
     = [ψ(x)e^{−iEt/h̄}]* ψ(x)e^{−iEt/h̄}    (15.41)
     = ψ*(x)e^{+iEt/h̄} ψ(x)e^{−iEt/h̄}    (15.42)
     = ψ*(x)ψ(x) e^{+iEt/h̄} e^{−iEt/h̄}    (15.43)
     = ψ*(x)ψ(x)    (15.44)

which clearly has no time dependence.
Thus, in an eigenstate, there can be no "left-right" motion of probability.
We still only have certain solutions (i.e. energy eigenstates) of the Schrödinger equation. We found that these solutions are factored in their space and time dependence − i.e.

ΨE(x, t) = ψE(x) e^{−iEt/h̄}    (15.46)

Perhaps there are other factorizable solutions,

Ψ(x, t) = f(x)g(t)    (15.47)

To find out, we put this form into the Schrödinger equation and see what happens. It makes sense to try this separation-of-variables method, as it is a common mathematical technique for generating solutions of differential equations. Putting this form into the Schrödinger equation yields

ih̄ f(x) dg(t)/dt = −(h̄²/2m) g(t) d²f(x)/dx² + V(x, t) f(x)g(t)    (15.48)

which is (dividing through by f(x)g(t))

ih̄ (1/g(t)) dg(t)/dt = −(h̄²/2m) (1/f(x)) d²f(x)/dx² + V(x, t)    (15.49)

To make progress with this technique, we must again assume that V depends only on x and not on t. In that case, a very interesting thing happens − the left side of the equation depends only on t while the right side depends only on x. Thus, each side can only equal something that depends on neither x nor t − a constant − the same constant, so the equation can be split into two ordinary differential equations:

ih̄ (1/g(t)) dg(t)/dt = C    (15.50)

and

−(h̄²/2m) d²f(x)/dx² + V(x)f(x) = Cf(x)    (15.51)

Now, equation (15.51) is

Ĥf(x) = Cf(x)    (15.52)

which we recognize as an energy eigenvalue equation; therefore C = E, and apparently f(x) is nothing but our energy eigenfunction ψ(x) (or at least proportional to it). Furthermore, the solution to equation (15.50) is

g(t) = g(t = 0) e^{−iCt/h̄} = g(t = 0) e^{−iEt/h̄}    (15.53)
118
Now, g(t = 0) is just a constant, so we may as well absorb it into ψ(x) (which has to be normalized anyway). So, interestingly, we have found nothing new − all time/space factorizable ("separable") solutions to the Schrödinger equation are energy eigenstates!
Ψ2(x, t) = ψ2(x) e^{−iE2t/h̄}
Ψ3(x, t) = ψ3(x) e^{−iE3t/h̄}
...
119
15.7 General Solution to the Schrödinger Equation
is also a solution.
Thus, if for a given V(x) there are N eigenstates, a quite general solution of the time-dependent Schrödinger equation for this potential is

Ψ(x, t) = Σ_{n=1}^{N} cn Ψn(x, t) = Σ_{n=1}^{N} cn ψn(x) e^{−iEn t/h̄}    (15.54)

This is really "quite general" because every time we change at least one of the cn's we get a different solution. Generally, N = ∞, and in fact we assert (without proof for now) that the most general solution to the time-dependent Schrödinger equation is

Ψ(x, t) = Σ_{n=1}^{∞} cn ψn(x) e^{−iEn t/h̄}    (15.55)

Thus, if you find all the eigenfunctions of the time-independent Schrödinger equation, you have completely solved the time-dependent Schrödinger equation!
Unfortunately, we only know how to do that analytically for relatively few cases!
The reader who has familiarity with wave physics will note the strong analogy to classical normal-mode theory − the energy eigenstates are really the normal modes of the Schrödinger equation, and the time-independent Schrödinger equation is nothing but its Helmholtz equation. One difference is that the mode time dependence in quantum mechanics is e^{−iωt} (ω ≡ E/h̄) and not cos(ωt) as in classical wave physics. Another difference, of course, is the probability interpretation of the quantum mechanical state functions.
15.7.1 Completeness
Let us look at the assertion of equation (15.55) in "another" way. If any valid state function Ψ(x, t)² can be written in the "eigenstate expansion form" (equation (15.55)), then at t = 0 any valid state function can be expressed as

Ψ(x, t = 0) = Σ_{n=1}^{∞} cn ψn(x)    (15.56)

Thus, in principle, one way of solving the initial-value problem for a non-free particle is the following: Given Ψ(x, t = 0), solve the time-independent Schrödinger equation for the relevant V(x) − i.e., find all the eigenfunctions {ψi(x)}; then expand Ψ(x, t = 0) over these eigenfunctions as in equation (15.56); then tack a factor e^{−iEi t/h̄} onto each ψi(x), and the result is the expansion in equation (15.55). Both equation (15.55) and (15.56), then, are statements of completeness. It may seem miraculous that one can always find the set {cn} for any f(x) = Ψ(x, t = 0), but this completeness property can be proven³ from the theory of "Sturm-Liouville" equations.
Fig 1
Here V (x) = ∞ unless 0 ≤ x ≤ L, in which case V (x) = constant (which
we will take to be zero). You already found the eigenstates for this problem
by combining traveling wave solutions of the time dependent Schrödinger
equation; here we look at this through the time independent Schrödinger
equation formalism. Outside the well (and at its boundaries x = 0 and
x = L) clearly ψ(x) = 0, otherwise there would be considerable probability
2
For a particular potential energy profile V (x), of course.
3
We won’t do this, we’ll just use the result. More on this next class.
121
for the particle to materialize at "±∞", which is not valid. Inside the well, the time-independent Schrödinger equation is

−(h̄²/2m) d²ψ(x)/dx² = Eψ(x)    (15.57)

(since V = 0), or

d²ψ(x)/dx² = −(2mE/h̄²) ψ(x)    (15.58)

This is exactly the equation for harmonic oscillation in space, so

ψ(x) = C cos(√(2mE/h̄²) x + φ)    (15.59)

where φ is a phase angle and C is a constant. We rewrite this in the more convenient form

ψ(x) = A sin(√(2mE/h̄²) x) + B cos(√(2mE/h̄²) x)    (15.60)
122
the original Schrödinger equation). We note, in particular, that the energy cannot be zero. Let us try to get some insight into this. The Heisenberg Uncertainty Principle tells us that, for any state function,

∆x∆px ≥ h̄/2    (15.64)

Consider now the lowest-energy state or "ground state". For it

∆x ≈ L    (15.65)

Thus

∆px ≳ h̄/(2L)    (15.66)

But, inside the well, E = p²/2m; then, if E = 0, p is definitely zero. But, if p is sharp at zero, then ∆p = 0, in violation of the Heisenberg Uncertainty Principle. Hence, E cannot be sharp at value zero. The minimum energy,

E1 = π²h̄²/(2mL²)    (15.67)

is called the "zero-point" energy. In general, the ground state of any bound system has a zero-point energy that is not zero. As you can show, the normalization condition requires that, unlike the classical case of standing waves on a stretched string, the amplitude is not arbitrary; rather

A = √(2/L)    (15.68)

Thus

ψn(x) = √(2/L) sin(nπx/L)    (15.69)

Hence

Ψn(x, t) = √(2/L) sin(nπx/L) e^{−iEn t/h̄}    (15.70)

or

Ψn(x, t) = √(2/L) sin(nπx/L) e^{−i n²π²h̄ t/(2mL²)}    (15.71)
inside the well. Now, the time-dependent (i.e. "original") Schrödinger equation is linear; hence a more general solution of it for the infinite square well is, as we already mentioned some time ago,

Ψ(x, t) = Σ_{n=1}^{∞} cn √(2/L) sin(nπx/L) e^{−i n²π²h̄ t/(2mL²)}    (15.72)

if 0 ≤ x ≤ L, and zero otherwise, where the cn are arbitrary complex constants.
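As a numerical aside (not from the notes): the energy eigenvalue problem for the infinite well can also be solved by diagonalizing a finite-difference Hamiltonian on a grid, and the lowest eigenvalues should reproduce En = n²π²h̄²/(2mL²). Units h̄ = m = L = 1 and the grid size are arbitrary.

import numpy as np

hbar = m = L = 1.0
N = 1000
x = np.linspace(0, L, N + 2)[1:-1]         # interior points; psi = 0 at the walls
dx = x[1] - x[0]

# H = -(hbar^2/2m) d^2/dx^2 with V = 0 inside the well, as a tridiagonal matrix
main = np.full(N, hbar**2 / (m * dx**2))
off  = np.full(N - 1, -hbar**2 / (2 * m * dx**2))
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_numeric = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([n**2 * np.pi**2 * hbar**2 / (2 * m * L**2) for n in (1, 2, 3, 4)])
print(E_numeric)
print(E_exact)          # ~4.93, 19.74, 44.41, 78.96 -- close agreement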
124
Chapter 16
Recall that a definite-energy state function can be written in factorized form as

ΨE(x, t) = ψ(x) e^{−iEt/h̄}    (16.1)

where ψ(x), the (energy) eigenfunction, obeys the time-independent Schrödinger equation

−(h̄²/2m) d²ψ(x)/dx² + V(x)ψ(x) = Eψ(x)    (16.2)

if V is a function only of x (and not of t). The simplest example to try the formalism out on is the infinite square well − recall that, for it, V(x) = ∞ unless 0 ≤ x ≤ L, in which case V(x) = a constant (which for convenience we take to be zero). Working from the time-dependent Schrödinger equation, we started with the assumption that the general definite-energy state function inside the well is

Ψ(x, t) = Ae^{i(px−Et)/h̄} + Be^{−i(px+Et)/h̄}    (16.3)
1
This includes the “make-up” of the missed 14th class
125
In the eigenfunction approach, we start with the time-independent Schrödinger equation. Inside the well, the time-independent Schrödinger equation is

−(h̄²/2m) d²ψ(x)/dx² = Eψ(x)    (16.4)

(since V = 0), or

d²ψ(x)/dx² = −(2mE/h̄²) ψ(x)    (16.5)

This is exactly the equation for harmonic oscillation in space, so

ψ(x) = C cos(√(2mE/h̄²) x + φ)    (16.6)

Note that, a priori, ψ(x) could be complex, and a priori, A, B, and C can all be complex. Note also that an alternative, and (as you can easily show) completely equivalent, way to write the general solution to this free time-independent Schrödinger equation is

ψ(x) = De^{i√(2mE) x/h̄} + Fe^{−i√(2mE) x/h̄}    (16.8)

For convenience, we will work with equation (16.7). Applying the boundary condition ψ(x = 0) = 0 gives B = 0 (why?), so

ψ(x) = A sin(√(2mE/h̄²) x)    (16.9)
126
(Why not n = 0?) Solving for E, we obtain

En = n²π²h̄²/(2mL²)    (16.11)

Thus, the energy eigenstates are

Ψn(x, t) = An sin(√(2mEn) x/h̄) e^{−i n²π²h̄ t/(2mL²)}    (16.12)

where the normalization condition determines An to be √(2/L) for all n. Figure 1 reminds us of the shapes of the eigenfunctions and the associated probability density profiles; since the states are stationary, the probability density doesn't change in time. So,

Ψn(x, t) = √(2/L) sin(nπx/L) e^{−i n²π²h̄ t/(2mL²)}    (16.13)
fig 1
The analogous classical system is the stretched string which is tied down at both its ends (x = 0, L). The energy eigenstates here (normal modes of the Schrödinger equation) are almost exactly mathematically analogous to the normal modes of the classical string − the "only" mathematical differences are the mode amplitudes (fixed at √(2/L) in quantum mechanics by probability normalization, arbitrary in the classical stretched-string normal modes) and the time dependences: cos(ωn t + φn) in the classical case, which means that Ψn = 0 for all x at certain times (at those times the energy is all kinetic), and e^{−iωn t} in the quantum case, which prevents this from ever happening since it is never zero (as it must be − otherwise conservation of total probability would be violated). That the mode amplitudes An are arbitrary in the classical case is a reflection that a given mode can possess arbitrary energy; in the quantum case, the amplitude is fixed, and the energy in the mode is independent of it and is quantized, since it is proportional to the quantized frequency. In the quantum case, the fixed amplitude of each mode fixes the associated probability density profile.
127
16.2 The "Reason" for Energy Quantization
We see that the application of the boundary conditions has led to quantization of energy − only certain values of energy are allowed. (Of course, we reached the same conclusion in the past when we "solved" this problem using the original Schrödinger equation.) It is easy to see the essence of the mathematical reason for the energy quantization − the time-independent Schrödinger equation, being a second-order ordinary differential equation, has two arbitrary constants in its general solution: "A" and "B" in

ψ(x) = A sin(√(2mE) x/h̄) + B cos(√(2mE) x/h̄)    (16.14)

1. Boundary Condition I
ψ(x = 0) = 0 ⇒ B = 0
2. Overall normalization
A = √(2/L)
3. Boundary Condition II
ψ(x = L) = 0
128
Thus, it should be plausible to you at this stage that, in general, for a "bound state" scenario (V(x) < 0 in a region, where, by convention, V(x) → 0 as x → ±∞) there are solutions to the time-independent Schrödinger equation (i.e. eigenstates exist) only for certain values of the energy − figure 2 schematically shows an example of this.
fig 2
In order to try to get some preliminary insight into this, let's discuss the general situation just briefly now. Consider a general potential energy profile that would lead to bound states. (The phrase "bound state" means that the particle is permanently essentially³ confined to a localized region of finite extent.)
fig 3
In figure 3, V(x) is plotted, along with a possible energy value E on the same plot. First, let us review what would happen classically to a particle with the shown energy E moving in this potential energy profile. Whenever the potential energy changes with position, there is a force given by the negative of its slope:

F(x) = −dV(x)/dx    (16.15)

Thus, wherever the potential energy curve is rising, the particle is pushed back toward the region of lower V(x), and the acceleration is greatest where the slope of V(x) is greatest in magnitude. From the energy point of view, since KE = E − V(x), the greater the difference E − V, the greater the kinetic energy. Thus, the points xa and xb, where the particle has zero kinetic energy, are classical turning points − approaching them, the particle slows, stops, and turns around. Thus, classically, the particle oscillates between xa and xb and remains absolutely confined between these positions. Regions I and III in figure 3 are therefore classically forbidden (in them KE < 0, which classically is nonsense), while region II is classically allowed. In quantum mechanics, the situation is somewhat different. The time-independent Schrödinger equation

−(h̄²/2m) d²ψ(x)/dx² + V(x)ψ(x) = Eψ(x)    (16.16)
3
the reason for the qualifying “essentially” will become apparent shortly and clear later
129
must be obeyed in all regions, and hence acceptable state functions are subject to certain conditions⁴:
1. ψ(x) must everywhere be single-valued
2. ψ(x) must everywhere be continuous
3. dψ(x)/dx must be single-valued everywhere
4. dψ(x)/dx must be continuous everywhere
The reason for condition (1) is that there must be a well-defined probability density ψ*(x)ψ(x) at all x. The reasons for conditions (2)-(4) can be seen by rewriting the time-independent Schrödinger equation as

d²ψ(x)/dx² = (2m/h̄²) [V(x) − E] ψ(x)    (16.17)

Thus, for any finite and continuous V(x), ψ″(x) must be defined and finite as long as ψ(x) is. For ψ″(x) to be defined and finite at all places, ψ′(x) must be single-valued, finite, and continuous; this establishes conditions (3) and (4). For ψ′(x) to be finite and defined everywhere, ψ(x) must be continuous, thus establishing condition (2). Generally, nonzero solutions to the time-independent Schrödinger equation exist in all three regions, and these must join smoothly to each other to make a solution valid for all x that accords with conditions (1) through (4).
Example: Consider the finite square well. We consider the possibility of a bound state with E < V0.
fig 4
In region II, the time-independent Schrödinger equation is

d²ψ(x)/dx² = −(2mE/h̄²) ψ(x)    (16.18)

and the solution is sinusoidal:

ψII(x) = A sin(√(2mE) x/h̄) + B cos(√(2mE) x/h̄)    (16.19)

4
in addition to the normalization conditions mentioned earlier
130
while in regions I and III, the time-independent Schrödinger equation is

d²ψ(x)/dx² = (2m(V0 − E)/h̄²) ψ(x)    (16.20)

Since V0 > E, the coefficient of ψ(x) on the right-hand side is a positive number, and thus the solutions to this equation are exponentials:

ψI,III(x) = De^{+κx} + Fe^{−κx}    (16.21)

where κ = √(2m(V0 − E))/h̄ > 0. In region I, F must be zero; in region III, D must be zero; otherwise ψ(x) blows up as x → ±∞. Thus

ψI(x) = De^{+κx}    (16.22)
ψII(x) = A sin(√(2mE) x/h̄) + B cos(√(2mE) x/h̄)    (16.23)
ψIII(x) = Fe^{−κx}    (16.24)
We will discuss how to match these pieces up to make one continuous ψ(x) extending from x = −∞ to x = +∞, but for now our main points are that ψ(x) is not zero in the classically forbidden regions I and III and that the matching-up can be done. Thus, for example, the ground-state eigenfunction for the finite square well looks like
fig 5
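How the matching can actually be carried out is worth seeing once. The sketch below (not from the notes) is a crude "shooting" calculation for a finite square well: starting deep in the forbidden region, the time-independent Schrödinger equation is integrated outward for a trial energy E; only for special values of E does the far tail avoid blowing up, and those values bracket the bound-state energies. The well depth, width, grid, and units h̄ = m = 1 are arbitrary illustrative choices.

import numpy as np

hbar = m = 1.0
V0, a = 10.0, 1.0                        # well depth and half-width (arbitrary)

def V(x):
    return 0.0 if abs(x) < a else V0

def shoot(E, x_max=6.0, n=3000):
    # integrate psi'' = (2m/hbar^2)(V - E) psi from deep inside region I outward
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    psi = np.zeros(n)
    psi[0], psi[1] = 0.0, 1e-6           # nearly zero deep in the forbidden region
    for i in range(1, n - 1):
        psi[i + 1] = 2 * psi[i] - psi[i - 1] + dx**2 * (2 * m / hbar**2) * (V(x[i]) - E) * psi[i]
    return psi[-1]                       # ~0 only for an allowed (bound-state) energy

Es = np.linspace(0.05, V0 - 0.05, 300)
tails = np.array([shoot(E) for E in Es])
crossings = Es[:-1][np.sign(tails[:-1]) != np.sign(tails[1:])]
print(crossings)                         # approximate bound-state energies of this well

The sign changes of the divergent tail play exactly the role of the "trial energy" argument discussed a little further on: energies between eigenvalues give solutions that run away to ±∞.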
We will discuss the physical implications of the penetration of ψ(x) into the nonclassical region later. Now return to consideration of the "general" potential V(x). From elementary calculus, d²ψ(x)/dx² represents the curvature of ψ(x). Where d²ψ(x)/dx² > 0, ψ(x) is concave up; where d²ψ(x)/dx² < 0, ψ(x) is concave down. Thus, according to the time-independent Schrödinger equation

d²ψ(x)/dx² = (2m/h̄²) [V(x) − E] ψ(x)    (16.25)

in a classically allowed region (E > V), ψ″(x) has the opposite sign to ψ(x) and hence always curves ψ(x) toward the ψ(x) = 0 axis, while in a classically forbidden region (E < V) ψ″(x) has the same sign as ψ(x) and hence always curves it away from the axis; these possibilities are shown in figure 6.
131
fig 6
Thus, for example, for the finite square well, the concavities of the ground-state eigenfunction segments are as marked in figure 7.
fig 7
How do we know that this represents the ground state? In the classically allowed region the curvature is

d²ψ(x)/dx² = −(2mE/h̄²) ψ(x)    (16.26)

and its magnitude is least when E is least (taking V = 0 to be the bottom of the well). Other ways to do the matching of segments are shown in figure 8. Clearly the curve marked ψ1(x) has the least magnitude of curvature and is hence the ground state. (The other curves are ψ2(x) with E2 > E1 and ψ3(x) with E3 > E2 (> E1).)
fig 8
Also, it should be clear from the above considerations that, for the V(x) profile shown in figure 9, the sketch must be at least qualitatively correct in depicting the shape of the ground state.
fig 9
Armed with these insights, we can now understand the reason for the
quantization of energy for cases other than the infinite square well. Con-
sider again then, the symmetric potential energy profile shown below and its
ground state eigenfunction.
fig 10
Suppose we attempt to solve the time-independent Schrödinger equation for an energy E just slightly lower than Eground. Then, according to our discussion above, the curvature of the solution in the classically allowed region −x1 → x1 is shallower than it was for Eground. Consequently, the new solution (labelled "ψ(x) for Et < Eground" in the figure, and which we call ψt(x), t for "trial"), which we assume has the same amplitude at x = 0 as did the ground-state eigenfunction ψ1(x), arrives at the turning point x1 with a higher amplitude than does ψ1(x). Now in region III, ψt must also be concave up, but, since according to the time-independent Schrödinger equation

d²ψ(x)/dx² = (2m/h̄²) [V(x) − E] ψ(x)    (16.27)
132
the curvature magnitude |d²ψ(x)/dx²| is ∝ ψ(x) and ∝ [V(x) − E], the region III curvature magnitude is greater than that of ψ1(x) on two counts. Thus, as shown in figure 10, the new trial solution diverges as x → +∞, and, by symmetry, also as x → −∞. This is unacceptable behavior. Now, since d²ψ(x)/dx² ∝ ψ(x), we could cure the divergent behavior by simply rescaling ψ(x) − i.e., reducing the amplitude everywhere by the same factor f. It should be apparent that there does exist an f such that, for the new E < Eground, ψrescaled = fψt does go smoothly to zero as x → ±∞. This is perfectly acceptable behavior, but now there is a new problem − ψrescaled does not obey the probability normalization condition! By this logic it is apparent that no energy near Eground, other than Eground itself, is possible. Question to think about: What would have gone wrong if we had tried raising the energy a bit instead of lowering it? If you look back now at our original statement (some time ago)
generally, there are two linearly independent solutions (hence two
adjustable constants) and three conditions
1. ψ(x) → 0 as x → +∞ (or some other point)
2. ψ(x) → 0 as x → −∞ (or some other point)
3. \(\int_{-\infty}^{+\infty}\psi^*(x)\,\psi(x)\,dx = 1\)
Thus, it should be plausible to you· · ·
as an intuitive explanation for energy quantization, you will see that the more
detailed "numerical integration" argument given above, while increasing our
insight, is essentially the same argument. The situation "from here on"
is well described by another author⁵:
quote
Example: The potential energy profile shown below has the spectrum
of energies shown on the same plot − discretely spaced bound state
energies and a continuum of possible energies for E > 0.
fig 11
⁵ Robert Scherrer, Quantum Mechanics: An Accessible Introduction, pp. 57-58
16.3 Zero-Point Energy
Now back to the infinite square well: We note, in particular, that the energy
cannot be zero. Let us try to get some insight into this. The Heisenberg
Uncertainty Principle tells us that, for any state function,
\[ \Delta x\,\Delta p \ge \frac{\hbar}{2} \tag{16.28} \]
Consider now the lowest energy or "ground" state. For it,⁶
\[ \Delta x \le \frac{L}{\sqrt{12}} = \frac{L}{2\sqrt{3}} \tag{16.29} \]
thus
\[ \Delta p \ge \frac{\sqrt{3}\,\hbar}{L} \tag{16.30} \]
But, inside the well, E = p²/2m; if E = 0, then p is definitely zero. But if p is
sharp at zero, then ∆p = 0, in violation of our limitation on ∆p in equation
(16.30). Hence E cannot be sharp at value zero. The minimum energy
\[ E_1 = \frac{\pi^2\hbar^2}{2mL^2} \tag{16.31} \]
is called the "zero-point" energy. In general, the ground state for any bound
system has a zero-point energy that is not zero. We can, in fact, use the
Heisenberg Uncertainty Principle to estimate the ground state energy for a
particle in a box. We have, for the ground state,
\[ E = \langle E_K\rangle = \frac{\langle p^2\rangle}{2m} \tag{16.32} \]
where E ≡ ⟨E⟩. Now, from its definition,
\[ \Delta p = \sqrt{\langle p^2\rangle - \langle p\rangle^2} \tag{16.33} \]
⟨p⟩ is zero, since the momentum would be equally likely to be positive as negative (homo-
geneity of space inside the well − V = 0 everywhere inside). Thus
\[ \Delta p = \sqrt{\langle p^2\rangle} \tag{16.34} \]
⁶ ∆x_ground = L/(2√3) if ψ*ψ_ground were flat inside the well; since it is peaked, we expect ∆x_ground < L/(2√3); in actuality it
works out to be ∆x ≈ 0.18L. Here we are only "estimating."
or
\[ \frac{\langle p^2\rangle}{2m} = E = \frac{(\Delta p)^2}{2m} \tag{16.35} \]
Using ∆p ≥ √3 ℏ/L,
\[ E_{\rm ground} \ge \frac{3\hbar^2}{2mL^2} \tag{16.36} \]
which compares favorably with the actual value
\[ E_{\rm ground} = \frac{\pi^2\hbar^2}{2mL^2} \tag{16.37} \]
If one wants to turn the logic around and use the ground state as an illustration
of the Heisenberg Uncertainty Principle, one finds, after calculation,
\[ \Delta x = \sqrt{\langle x^2\rangle - \langle x\rangle^2} \approx 0.181\,L \]
\[ \Delta p = \sqrt{\langle p^2\rangle - \langle p\rangle^2} = \frac{\pi\hbar}{L} \]
so
\[ \Delta x\,\Delta p = (0.181\,L)\,\frac{\pi\hbar}{L} \approx 0.569\,\hbar \ge \frac{\hbar}{2}. \tag{16.38} \]
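As a quick numerical check (not part of the original notes; a short Python sketch with ℏ = 1 and an arbitrary well width L = 1 assumed), the ground-state values of ∆x and the product ∆x∆p can be reproduced directly:

import numpy as np

L = 1.0                                        # well width (arbitrary units); hbar = 1
x = np.linspace(0.0, L, 20001)
psi = np.sqrt(2.0/L) * np.sin(np.pi * x / L)   # normalized ground-state eigenfunction

ex  = np.trapz(x * psi**2, x)                  # <x> by numerical quadrature
ex2 = np.trapz(x**2 * psi**2, x)               # <x^2>
dx  = np.sqrt(ex2 - ex**2)

dp = np.pi / L                                 # <p> = 0 by symmetry; <p^2> = (pi hbar / L)^2
print(dx / L, dx * dp)                         # ~0.181 and ~0.569 (in units of hbar)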
As we have already mentioned, the (time dependent) Schrödinger equation is
linear and homogeneous; thus, an arbitrary linear combination of eigenstates
is also a solution. In fact, as we have already mentioned on more than one occa-
sion, the general solution to the time dependent Schrödinger equation is an
arbitrary linear combination of the eigenstates; hence the general solution for
the infinite square well potential is
\[ \Psi(x,t) = \sum_{n=1}^{\infty} c_n\,\sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi}{L}x\right)\,e^{-i\frac{n^2\pi^2\hbar}{2mL^2}t} \tag{16.39} \]
for 0 ≤ x ≤ L and 0 otherwise, and the c_n's are determined (at least in ratio)
by the overall normalization requirement.
16.4 Orthogonality and Completeness of Eigenfunctions
From our general solution to the infinite square well potential, we can see
that
\[ \Psi(x, t=0) = \sum_{n=1}^{\infty} b_n\,\sin\!\left(\frac{n\pi}{L}x\right) \tag{16.40} \]
From the theory of Fourier series, we know that the Fourier series on the
right-hand side can, with appropriate b_n's, represent any square-normalizable
function that vanishes at x = 0 and x = L. Hence, as in the free parti-
cle case, Ψ(x, t = 0) can be any square-normalizable function obeying the
boundary conditions. As we will see, a similar statement is true for any
reasonable V(x). We can get a first idea of why the last sentence is true
from the orthogonality property of eigenfunctions corresponding to different
eigenvalues, which we now demonstrate for "general" V(x).
As in the Fourier case, a key to the completeness is the orthogonality property:
Theorem: For any continuous V(x), eigenfunctions of Ĥ belonging
to different eigenvalues are orthogonal − i.e.
\[ \int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx = 0 \quad\text{if } m \ne n. \tag{16.42} \]
Proof: ψ_n(x) and ψ_m(x) are respectively solutions to the time independent
Schrödinger equation:
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi_n(x)}{dx^2} + V(x)\,\psi_n(x) = E_n\,\psi_n(x) \]
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi_m(x)}{dx^2} + V(x)\,\psi_m(x) = E_m\,\psi_m(x) \]
Taking the complex conjugate of the lower equation,
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi_m^*(x)}{dx^2} + V(x)\,\psi_m^*(x) = E_m\,\psi_m^*(x) \]
We then multiply the top equation by ψ_m^*(x) and the above equation by
ψ_n(x), subtract them from each other, and integrate:
\[ \frac{2m}{\hbar^2}(E_m - E_n)\int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx = \int_{-\infty}^{+\infty}\left[\psi_m^*(x)\frac{d^2\psi_n(x)}{dx^2} - \psi_n(x)\frac{d^2\psi_m^*(x)}{dx^2}\right]dx \]
This is
\[ \frac{2m}{\hbar^2}(E_m - E_n)\int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx = \int_{-\infty}^{+\infty}\frac{d}{dx}\left[\psi_m^*(x)\frac{d\psi_n(x)}{dx} - \psi_n(x)\frac{d\psi_m^*(x)}{dx}\right]dx \]
The right hand side is
\[ \left[\psi_m^*(x)\frac{d\psi_n(x)}{dx} - \psi_n(x)\frac{d\psi_m^*(x)}{dx}\right]_{-\infty}^{+\infty} = 0 \tag{16.43} \]
(the bound-state eigenfunctions vanish at ±∞); thus, since E_m ≠ E_n,
\[ \int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx = 0 \quad\text{when } m \ne n \tag{16.44} \]
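As a quick numerical illustration (my addition, not from the notes), the orthogonality can be checked directly for the infinite-square-well eigenfunctions, which are a special case of the theorem; the well width L = 1 below is an arbitrary choice:

import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)

def psi(n, x):
    # normalized infinite-square-well eigenfunction
    return np.sqrt(2.0/L) * np.sin(n*np.pi*x/L)

# overlap integrals: off-diagonal entries come out ~0, diagonal ~1
for m in range(1, 4):
    for n in range(1, 4):
        overlap = np.trapz(psi(m, x)*psi(n, x), x)
        print(m, n, round(overlap, 6))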
Example: Consider the case V(x) = 0 everywhere (free particle). Then
the eigenfunctions of Ĥ are the set {e^{ikx}}.
Proof:
\[ \hat{H}e^{ikx} = \frac{\hat{p}^2}{2m}e^{ikx} = \frac{1}{2m}\left(\frac{\hbar}{i}\frac{d}{dx}\right)^2 e^{ikx} = \frac{k^2\hbar^2}{2m}\,e^{ikx} \tag{16.47} \]
or
\[ \hat{H}e^{ikx} = E\,e^{ikx}. \tag{16.48} \]
The completeness of these eigenfunctions is just the statement
\[ f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\phi(k)\,e^{ikx}\,dk \tag{16.49} \]
which we know already to be true.
Comments
1. The completeness postulate stated here (for any V) can actually be
proved ("Sturm-Liouville Theory" in differential equations).
2. The completeness postulate stated here is actually a special case of a
more general completeness postulate concerning eigenfunctions of any
Hermitian operator.
The above considerations relate to the quantum initial value problem: de-
termining Ψ(x, t) from Ψ(x, t = 0) for general V(x). Now, as you recall,
we already discussed and solved⁷ this problem for the case of a free parti-
cle. There, the key was to expand Ψ(x, t = 0) over a complete set of basis
functions (i.e. the set {e^{ikx}}) and then evolve each basis function forward
in time − this works if the basis functions do evolve independently in time
and if we know how they evolve. For the case of "general" V(x), it is now
clear that we can attempt to proceed analogously: we begin, with license
from the completeness postulate, by expanding the given Ψ(x, t = 0) over
the complete set of eigenfunctions of Ĥ(V):
\[ \Psi(x, t=0) = \sum_{n=1}^{\infty} c_n\,\psi_n(x) \tag{16.50} \]
i.e.
\[ c_m = \int_{-\infty}^{+\infty}\psi_m^*(x)\,\Psi(x, t=0)\,dx \tag{16.53} \]
⁷ at least in principle
Note that our expansion for Ψ(x, t = 0) and its derivation are exactly analogous
to how we find Fourier coefficients − the Fourier case is nothing but a special case of this!
Thus, in general (i.e. for any continuous V(x)), we have
\[ \Psi(x,t) = \sum_{n=1}^{\infty}\left[\int_{-\infty}^{+\infty}\psi_n^*(x')\,\Psi(x',0)\,dx'\right]\psi_n(x)\,e^{-\frac{iE_n t}{\hbar}} \tag{16.54} \]
This is the formal solution to our problem. Note how it explicitly shows that
knowledge of Ψ(x, t = 0) at all x determines Ψ(x, t).
Example (Griffiths example 2.2, infinite square well):
Say Ψ(x, t = 0) = A x(a − x) for 0 ≤ x ≤ a (where a is the extent of the
well).
Fig 2
What is Ψ(x, t) for subsequent time t?
Normalizing, you find
\[ A = \sqrt{\frac{30}{a^5}} \tag{16.55} \]
From Fourier,
\[ c_n = \int_0^a \Psi(x, t=0)\,\sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi}{a}x\right)dx \tag{16.56} \]
\[ = \sqrt{\frac{2}{a}}\sqrt{\frac{30}{a^5}}\int_0^a x(a-x)\,\sin\!\left(\frac{n\pi}{a}x\right)dx \tag{16.57} \]
which works out to
\[ c_n = \frac{4\sqrt{15}}{(n\pi)^3}\left[\cos(0) - \cos(n\pi)\right] \tag{16.58} \]
which is
\[ c_n = \begin{cases} 0 & n \text{ is even}\\[4pt] \dfrac{8\sqrt{15}}{n^3\pi^3} & n \text{ is odd} \end{cases} \tag{16.59} \]
So
\[ \Psi(x,t) = \sqrt{\frac{30}{a}}\left(\frac{2}{\pi}\right)^3\sum_{n=1,3,5,\cdots}\frac{1}{n^3}\,\sin\!\left(\frac{n\pi}{a}x\right)\,e^{-i\frac{n^2\pi^2\hbar}{2ma^2}t} \tag{16.60} \]
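As a quick numerical check of this example (my sketch, not from the notes; a = 1 is an arbitrary choice and ℏ = m = 1 is assumed), the c_n's can be computed by direct integration and compared with the closed form of equation (16.59):

import numpy as np

a = 1.0
x = np.linspace(0.0, a, 20001)
A = np.sqrt(30.0/a**5)
Psi0 = A * x * (a - x)                          # initial state, normalized per eq (16.55)

def psi(n, x):
    return np.sqrt(2.0/a) * np.sin(n*np.pi*x/a)

for n in range(1, 6):
    cn_num = np.trapz(psi(n, x) * Psi0, x)                              # eq (16.56)
    cn_formula = 0.0 if n % 2 == 0 else 8*np.sqrt(15)/(n*np.pi)**3      # eq (16.59)
    print(n, round(cn_num, 6), round(cn_formula, 6))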
Let us turn now to another important issue. Suppose that, for some potential
energy function V(x), the system is in some state Ψ(x, t). Suppose the energy
is measured. What values are possible and what are their probabilities?
Expectation Value
We start by calculating ⟨E⟩ for the general superposition state
\[ \Psi(x,t) = \sum_{n=1}^{\infty} c_n\,\psi_n(x)\,e^{-\frac{iE_n t}{\hbar}} \tag{16.61} \]
We have
\[ \langle E\rangle = \langle\hat{H}\rangle = \int_{-\infty}^{+\infty}\Psi^*(x,t)\,\hat{H}\,\Psi(x,t)\,dx \]
\[ = \int_{-\infty}^{+\infty}\left[\sum_m c_m^*\,\psi_m^*(x)\,e^{+\frac{iE_m t}{\hbar}}\right]\hat{H}\left[\sum_n c_n\,\psi_n(x)\,e^{-\frac{iE_n t}{\hbar}}\right]dx \]
which is
\[ \langle E\rangle = \int_{-\infty}^{+\infty}\sum_m\sum_n c_m^* c_n\,\psi_m^*\,e^{\frac{iE_m t}{\hbar}}e^{-\frac{iE_n t}{\hbar}}\,\hat{H}\psi_n\,dx \tag{16.62} \]
Now Ĥψ_n = E_nψ_n, so
\[ \langle E\rangle = \sum_m\sum_n c_m^* c_n\,E_n\,e^{\frac{i(E_m-E_n)t}{\hbar}}\int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx \tag{16.63} \]
\[ = \sum_m\sum_n c_m^* c_n\,E_n\,e^{\frac{i(E_m-E_n)t}{\hbar}}\,\delta_{m,n} \tag{16.64} \]
\[ \langle E\rangle = \sum_n |c_n|^2\,E_n \tag{16.65} \]
This suggests that, if you measure the energy, |c_n|² is at least proportional to
the probability of materializing result E_n⁸. In fact, we can find the proportionality
constant by finding the sum
\[ \sum_{n=1}^{\infty}|c_n|^2 \tag{16.66} \]
We know that
\[ \int_{-\infty}^{+\infty}\Psi^*(x,t)\,\Psi(x,t)\,dx = 1 \tag{16.67} \]
which is
\[ 1 = \int_{-\infty}^{+\infty}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} c_m^*\,\psi_m^*(x)\,e^{\frac{iE_m t}{\hbar}}\,c_n\,\psi_n(x)\,e^{-\frac{iE_n t}{\hbar}}\,dx \tag{16.68} \]
\[ = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} c_m^* c_n\,e^{\frac{i(E_m-E_n)t}{\hbar}}\int_{-\infty}^{+\infty}\psi_m^*(x)\,\psi_n(x)\,dx \tag{16.69} \]
\[ = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} c_m^* c_n\,e^{\frac{i(E_m-E_n)t}{\hbar}}\,\delta_{m,n} \tag{16.70} \]
\[ 1 = \sum_{n=1}^{\infty}|c_n|^2 \tag{16.71} \]
Since these coefficients sum to one, |c_n|² is itself the probability. Thus, whenever a measurement of
energy is made, one and only one of the eigenvalues E_n (for the
given potential) materializes.
The probability of obtaining the result E_n is |c_n|² = c_n^* c_n, where
\[ \Psi(x,t) = \sum_n c_n\,\psi_n(x)\,e^{-\frac{iE_n t}{\hbar}} \tag{16.72} \]
where
\[ \phi(p,t) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty}\Psi(x,t)\,e^{-\frac{ipx}{\hbar}}\,dx \tag{16.74} \]
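As a numerical illustration of equations (16.65) and (16.71) for the Griffiths example above (my sketch, not from the notes; units with ℏ = m = a = 1 are assumed), the probabilities |c_n|² sum to one and give the expected energy of the x(a − x) state:

import numpy as np

# units: hbar = m = a = 1, so E_n = n^2 pi^2 / 2
E_mean = 0.0
total_prob = 0.0
for n in range(1, 200, 2):                 # only odd n contribute (eq 16.59)
    cn = 8*np.sqrt(15) / (n*np.pi)**3
    En = n**2 * np.pi**2 / 2.0
    total_prob += cn**2
    E_mean += cn**2 * En

print(total_prob)   # -> 1.0, eq (16.71)
print(E_mean)       # -> 5.0, i.e. 5 hbar^2 / (m a^2) in these units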
Chapter 17
This is a good question that one should continually ask − what is the con-
nection between the quantum mechanics we’ve learned and the everyday real
world of large objects? After all, classical physics must result as some limit
of quantum mechanics; otherwise quantum mechanics is wrong! This is an
issue we will return to in stages. Let us begin to think about this in the
context of our infinite square well example. The stationary states and their
resulting stationary probability densities are shown in figure 1
fig 1
We now ask: What is the connection of this with the behavior we expect?
Classically, we expect the particle to bounce back and forth between the walls,
moving at constant speed (since V = 0 inside the well).
fig 2
But, the behavior we’ve found doesn’t look anything like this at all! In our
normal modes, nothing moves right or left − the probability to materialize
in any small region is constant in time (“stationary states”). We can make
some progress on this problem as follows: We have
\[ \Psi_n(x,t) = \sqrt{\frac{2}{a}}\,\sin(k_n x)\,e^{-i\omega_n t} \tag{17.1} \]
where ω_n = E_n/ℏ. Now,
\[ \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} \tag{17.2} \]
So,
\[ \Psi_n(x,t) = \frac{1}{2i}\sqrt{\frac{2}{a}}\left[e^{i(k_n x - \omega_n t)} - e^{-i(k_n x + \omega_n t)}\right] \tag{17.3} \]
This is a combination of oppositely directed traveling waves. (In fact, starting
with this form was our first approach.) While this is somewhat comforting,
it is still true that the probability density for this state is stationary. What
we would really like to see is a moving “lump” of probability.
fig 3
Let us look at this issue from another point of view. Our state
functions give us information on the probability of materializing the particle
in any small region dx in the box [ψ*(x)ψ(x) dx]. Classically, this probability is
proportional to the time spent by the bouncing back-and-forth particle in
dx; call this dt. Then
\[ dt = \frac{dx}{v_{\rm particle}} = \text{constant}\times dx \tag{17.4} \]
call the constant C′. So
\[ P_{\rm classical}(x) \propto C' \tag{17.5} \]
call this
\[ P_{\rm classical} = C \tag{17.6} \]
Now
\[ \int_0^a P_{\rm classical}(x)\,dx = 1 \;\Rightarrow\; \int_0^a C\,dx = 1 \;\Rightarrow\; C = \frac{1}{a} \tag{17.7} \]
so
\[ P_{\rm classical} = \frac{1}{a} \tag{17.8} \]
Now, quantum and classical physics are supposed to agree in the large n
limit. This is known as the “correspondence limit”. Let us think on this:
Example: An electron confined to a thin metal wire segment
fig4
mimics a quantum “particle in a box”. If the segment is of macroscopic
length (say 1cm), and the system is at equilibrium at room temperature, we
expect to have a “classical limit” (if one is possible for trapped electrons).
Here is an interesting example on this:
fig5
Conclusion: In any classical-like situation, n is huge! Figure 6 shows the
plot of ψ ∗ (x)ψ(x) for n = 10
fig 6
For n larger, the peaks get closer together, and if n is large enough, we
can’t resolve them. So, as n → ∞, we see an average “flat” ψ ∗ (x)ψ(x):
\[ \langle P(x)\rangle_{\rm ave} = \langle\psi^*(x)\psi(x)\rangle = \frac{2}{a}\,\langle\sin^2(k_n x)\rangle = \frac{2}{a}\cdot\frac{1}{2} = \frac{1}{a} \]
which agrees with the classical prediction. "So far, so good", but this is not yet really satisfying
− we would still like to see a "moving lump" of probability. Can we somehow obtain this? We know that we are allowed to
have superposition (of normal mode) states. We now look into this. We know
we really should do this in the classical limit (n large), but to keep things
simple, we consider only an equal amplitude superposition of the two lowest
states:
\[ \Psi(x,t) = A\sin\!\left(\frac{\pi}{L}x\right)e^{-i\omega_1 t} + A\sin\!\left(\frac{2\pi}{L}x\right)e^{-i\omega_2 t} \tag{17.9} \]
This superposition is shown for t = 0 along with its P(x) in figure 7.
fig 7
Now, E₂ = 4E₁ (E_n ∝ n²), so ω₂ = 4ω₁. Also
\[ T_1 = \frac{2\pi}{\omega_1} = \frac{2\pi}{E_1/\hbar} = \frac{2\pi\hbar}{E_1} = \frac{h}{E_1} \]
Thus at t = \tfrac{1}{2}T_1 = \tfrac{h}{2E_1}, the relative signs of the two modes reverse, as shown
in figure 8.
fig 8
We thus see that, even for this low-n superposition, we do have a “lump
of probability density” sloshing back and forth.
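The sloshing can be seen numerically (a short sketch of mine, not from the notes; ℏ = m = 1 and L = 1 are assumed, and A = 1/√L normalizes the equal-amplitude superposition): the peak of P(x, t) sits on one side at t = 0 and on the other side half a period later.

import numpy as np

L = 1.0
x = np.linspace(0.0, L, 401)
E1 = np.pi**2/(2*L**2)
w1, w2 = E1, 4*E1                       # omega_n = E_n / hbar; E_2 = 4 E_1
A = 1.0/np.sqrt(L)

def P(x, t):
    Psi = A*np.sin(np.pi*x/L)*np.exp(-1j*w1*t) + A*np.sin(2*np.pi*x/L)*np.exp(-1j*w2*t)
    return np.abs(Psi)**2

for t in (0.0, np.pi/w1):               # t = 0 and t = T1/2
    print(t, x[np.argmax(P(x, t))])     # peak of the probability "lump" hops across the well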
fig 9
However, we expect to be able to construct something that moves back
and forth at the classical particle velocity. For this, we need to put a narrow
wave packet state in the well, and it has to be a narrow wave packet that is
carefully chosen: How can we do this? The method (in outline) should be as
follows:
1. We choose an appropriate Ψ(x, t = 0)
2. We calculate Ψ(x, t) from Ψ(x, t = 0) via our “propagation formula”
X Z +∞
iEn t
Ψ(x, t) = ψn (x )Ψ(x , 0) dx ψn (x)e− h̄
∗ 0 0 0
(17.10)
n −∞
Here q
2 nπ
a sin a x 0≤x≤a
Ψn (x) = (17.11)
0 otherwise
and choose the cn ’s first so as to satisfy our second condition. This in-
volves a fairly complicated procedure that we do not describe here. The re-
sults of such a procedure for a moderately tight “packet” initial state are
shown on the web at site http://www.optics.rochester.edu/~stroud/
animations/, click on Decay of High Momentum Wave Packet under
section “infinite square well wave packets”. It is fascinating to contemplate
that such classical limit behavior results from a superposition of eigenstates
(correctly chosen, of course) none of which individually involves any left or
rightward motion.
It is further profound that what causes "bouncing off the wall" in the
classical limit is not anything "put in by hand", but rather just the time
evolution of superpositions of eigenstates, each involving its own phase factor
e^{-iE_n t/\hbar}, and obeying the boundary conditions. From this, much of our
"classical experience" results!
As you know, any really successful theory of quantum mechanics must deal
with the effects of forces on state functions. So far, we have only investigated
this in one case − that where the change in potential energy is infinitely
abrupt and also infinite in magnitude − the infinite square well. Of course, both
of these conditions represent idealizations. Consider, for example, a neutron
approaching an atomic nucleus in the atmosphere of the sun. The potential
energy profile it experiences may look like
fig 11
where the "change" occurs over a distance of order 1 fm = 10⁻¹⁵ m − very
abrupt on a macroscopic scale, not so abrupt compared to the size of the
neutron, but still abrupt compared to the neutron's DeBroglie wavelength if
its energy is low enough. To get a first handle on these sorts of practically
important situations, we consider possible state functions for the case of
abrupt, but finite, changes in V. The simplest case is that of a sudden step-
like change in the potential energy profile in x:
fig 12
\[ V(x) = \begin{cases} 0 & x < 0 \quad (\text{"region I"})\\ V_0 & x \ge 0 \quad (\text{"region II"}) \end{cases} \tag{17.13} \]
Now, we would like to deal with the realistic situation of a narrow packet
state incident on the step, say from negative-x. (You will deal with incidence
from the right (“potential-cliff” problem) in the homework). Since the packet
state, at any given time, is a superposition of energy eigenstates (recall the
completeness postulates), we first consider only the energy eigenstates. We
then have two cases, E < V0 and E > V0 . We treat the first case first: E < V0
In region I, the time independent Schrödinger equation is
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi_I(x)}{dx^2} = E\,\psi_I(x) \tag{17.14} \]
since V = 0 in region I. This is
\[ \frac{d^2\psi_I(x)}{dx^2} = -\frac{2mE}{\hbar^2}\,\psi_I(x) \tag{17.15} \]
which has the general solution
\[ \psi_I(x) = A e^{ik_1 x} + B e^{-ik_1 x} \tag{17.16} \]
where k_1 = \sqrt{2mE}/\hbar (any E is allowed − we see here explicitly, also, that this is not
a bound state). Note: this is the same as
\[ \psi_I(x) = C\cos(k_1 x + \phi) \tag{17.17} \]
Thus, the traveling wave form of Ψ(x, t) in region I is
ΨI (x, t) = Aei(k1 x−ωt) + Be−i(k1 x+ωt) (17.18)
where ω = E/ℏ. We see what "physical situation" this corresponds to: we
would like to say that this is the situation of a plane wave state "incident
from negative x" and undergoing a degree of "reflection" at x = 0¹. In region
II, the time independent Schrödinger equation is
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V_0\,\psi(x) = E\,\psi(x) \tag{17.19} \]
¹ More strictly speaking, however, the temporal ordering of "reflection following incidence" occurs with a
superposition-state traveling wave packet incident from the left, but as I say, first we'll just deal with the eigen-
states − for them, the math is simpler.
which is
\[ \frac{d^2\psi(x)}{dx^2} = \frac{2m}{\hbar^2}(V_0 - E)\,\psi(x) \tag{17.20} \]
Since V_0 > E, the general solution to this is
\[ \psi_{II} = C e^{\kappa_2 x} + D e^{-\kappa_2 x} \tag{17.21} \]
where κ_2 = \sqrt{2m(V_0 - E)}/\hbar. Now consider the behavior of this as x → ∞. The
first term blows up, which is not possible for the resulting probability density.
Hence, C = 0. Thus
\[ \psi_{II} = D e^{-\kappa_2 x} \tag{17.22} \]
and
\[ \Psi_{II}(x,t) = D e^{-\kappa_2 x}\,e^{-\frac{iEt}{\hbar}} \tag{17.23} \]
The issue now is how to join ψ_I and ψ_II at the boundary x = 0. For this, we
call on the following general theorem:
Theorem: To be acceptable, any eigenfunction ψ(x) must obey
1. ψ(x) must be finite for all x
2. dψ(x)/dx must be finite for all x
3. ψ(x) must be continuous
4. dψ(x)/dx must be continuous
Proof: For the time being, we offer the following simple "proof". This
proof actually has a loophole, and there is a case in which that loophole is
realized, but we'll worry about that later. If ψ(x) and dψ(x)/dx were not finite
and continuous everywhere, then
\[ S(x,t) = -\frac{i\hbar}{2m}\left[\Psi^*(x,t)\frac{\partial\Psi(x,t)}{\partial x} - \Psi(x,t)\frac{\partial\Psi^*(x,t)}{\partial x}\right] \tag{17.24} \]
would not be continuous everywhere. But a discontinuity in S(x, t) would
require a source or "sink" of probability, which, in this theory, makes no sense.
We now return to our step potential problem. We have
\[ \psi(x) = \begin{cases} A e^{ik_1 x} + B e^{-ik_1 x} & x < 0\\ D e^{-\kappa_2 x} & x > 0 \end{cases} \tag{17.25} \]
Continuity of ψ(x) across x = 0 requires
\[ A + B = D \tag{17.26} \]
Continuity of dψ(x)/dx requires
\[ -\kappa_2 D\,e^{-\kappa_2 x}\big|_{x=0} = ik_1 A\,e^{ik_1 x}\big|_{x=0} - ik_1 B\,e^{-ik_1 x}\big|_{x=0} \tag{17.27} \]
or
\[ \frac{i\kappa_2}{k_1}\,D = A - B \tag{17.28} \]
Combining equations (17.26) and (17.28) yields
\[ A = \frac{D}{2}\left(1 + \frac{i\kappa_2}{k_1}\right),\qquad B = \frac{D}{2}\left(1 - \frac{i\kappa_2}{k_1}\right) \]
\[ B = \frac{k_1 - i\kappa_2}{k_1 + i\kappa_2}\,A,\qquad D = \frac{2k_1}{k_1 + i\kappa_2}\,A \]
We choose to use the last two equations and let A be set by normalization.
Thus, the eigenfunction for energy E is
\[ \psi(x) = \begin{cases} A e^{ik_1 x} + \dfrac{k_1 - i\kappa_2}{k_1 + i\kappa_2}\,A e^{-ik_1 x} & x \le 0\\[8pt] \dfrac{2k_1}{k_1 + i\kappa_2}\,A e^{-\kappa_2 x} & x \ge 0 \end{cases} \tag{17.29} \]
Thus
\[ \Psi_E(x,t) = \begin{cases} A e^{i\left(\frac{\sqrt{2mE}}{\hbar}x - \frac{E}{\hbar}t\right)} + \dfrac{k_1 - i\kappa_2}{k_1 + i\kappa_2}\,A e^{-i\left(\frac{\sqrt{2mE}}{\hbar}x + \frac{E}{\hbar}t\right)} & x \le 0\\[8pt] \dfrac{2k_1}{k_1 + i\kappa_2}\,A e^{-\frac{\sqrt{2m(V_0 - E)}}{\hbar}x}\,e^{-\frac{iEt}{\hbar}} & x \ge 0 \end{cases} \tag{17.30} \]
We see that if Ψ(x, t) has a component which is incident from the left, there is
necessarily a reflected wave. This leads us to expect that Ψ(x, t) is a standing
wave in region I. Indeed it is; as you can show, the eigenfunction ψ_I can be
rewritten as
\[ \psi_I(x) = D\cos(k_1 x) - \frac{\kappa_2}{k_1}\,D\sin(k_1 x) \tag{17.31} \]
which is
\[ \psi_I(x) = E\cos(k_1 x + \phi) \tag{17.32} \]
(find E and φ in terms of D, k₁, and κ₂). Finally, we show a plot of the
eigenfunction for the case D real.
fig 13
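The amplitude relations can be checked numerically (a small sketch of mine, not part of the notes; ℏ = m = 1 and the arbitrary values E = 1, V₀ = 2 are assumed): the reflected amplitude has unit modulus, and ψ and ψ′ are continuous at x = 0.

import numpy as np

E, V0 = 1.0, 2.0                               # E < V0; units hbar = m = 1
k1 = np.sqrt(2*E)
kappa2 = np.sqrt(2*(V0 - E))

A = 1.0
B = (k1 - 1j*kappa2)/(k1 + 1j*kappa2) * A      # reflected amplitude
D = 2*k1/(k1 + 1j*kappa2) * A                  # amplitude in the forbidden region

print(abs(B/A))                                # -> 1.0: total reflection for E < V0
print(abs((A + B) - D))                        # continuity of psi at x = 0, ~0
print(abs((1j*k1*A - 1j*k1*B) - (-kappa2*D)))  # continuity of psi' at x = 0, ~0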
17.4 Classically Forbidden Region
In the classical limit (often but not always), m(V₀ − E) is enormous on the scale set by ℏ, so at least
in that limit, this effect goes away. Suppose, however, that the situation is
not "in the classical limit". Suppose measurement does force the particle to
materialize in the forbidden region. Then, after the measurement, there
is a new state function associated with the "particle", and for this new
state function
\[ \Delta x \le \frac{\hbar}{2\sqrt{2m(V_0 - E)}} \tag{17.40} \]
This means that, for the new state function,
\[ \Delta p \ge \frac{\hbar}{2\Delta x} \ge \sqrt{2m(V_0 - E_{\rm original})} \tag{17.41} \]
Thus, for the new state function,
\[ \Delta E_{\rm new} \sim \frac{(\Delta p)^2}{2m} \approx V_0 - E_{\rm original} \tag{17.42} \]
Thus, it is no longer possible to say with certainty that E_new < V₀. In this
way, the paradox "disappears". Upon contemplation, examples like this show
how thin the edge is on which the consistency of the theory hangs together. But
the universe depends on it.
Chapter 18
For our first approximation, we ignore all but the first two terms of this
expansion. That this is not an unreasonable approximation is justified by the
assumed sharpness of g(k) around k₀. Putting this into equation (18.1) we
have
\[ \Psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} g(k)\,e^{ikx}\,e^{-i\omega_0 t}\,e^{-i\omega_0'(k-k_0)t}\,dk \tag{18.3} \]
2π −∞
It is convenient to change variables from k to s ≡ k − k0 in this integral
e−iω0 t +∞
Z
0
Ψ(x, t) = √ g(k0 + s)ei(k0 +s)x e−iω0 st ds (18.4)
2π −∞
0 0
Multiplying by 1 = eik0 ω0 t e−ik0 ω0 t , and this is
0
e−i(ω0 −k0 ω0 )t +∞
Z
0
Ψ(x, t) = √ g(k0 + s)ei(k0 +s)x e−iω0 (k0 +s)t ds (18.5)
2π −∞
or
1 −i(ω0 t+k0 ω00 t) +∞
Z
0
Ψ(x, t) = √ e g(k0 + s)ei(k0 +s)(x−ω0 t) ds (18.6)
2π −∞
which tells us that
Z +∞
1
Ψ(x, t = 0) = √ g(k0 + s)e−(k0 +s)x ds (18.7)
2π −∞
comparing equations (18.6) and (18.7), we see that
0
Ψ(x, t) = e−i(ω0 −k0 ω0 )t Ψ(x − ω00 t, 0) (18.8)
except for an uninteresting phase factor that accumulates (and which cancels
out in the probability density Ψ∗ Ψ anyway), Ψ(x, t) is the same as Ψ(x −
ω00 t, 0)). Thus, in the approximation in which we ignore the quadratic term
in the expansion of ω(k) around k = k0 , the packet moves rigidly without
changing shape at velocity
0 dω
vg = ω0 = (18.9)
dk k=k0
The effect of the quadratic term is then the spreading of the packet while it
moves. As shown in the homework (set 6G), the effects of the quadratic term
are ignorable as long as
\[ t \ll \frac{m\,(\Delta x)_0^2}{\hbar} \tag{18.10} \]
where (∆x)₀ is the spatial extent of the packet at t = 0. This is the "coherence
time" for the packet.
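For an order-of-magnitude feel (my numbers, not from the notes; an electron packet with an assumed initial width of 1 nm), the coherence time of equation (18.10) can be evaluated directly:

import numpy as np

hbar = 1.055e-34      # J s
m_e  = 9.109e-31      # kg

dx0 = 1e-9            # assumed initial packet width, 1 nm
t_coh = m_e * dx0**2 / hbar
print(t_coh)          # ~ 8.6e-15 s: spreading matters on femtosecond scales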
Now we are ready to consider the case of a fairly tight packet incident on
a potential step with mean energy less than the step height
Now let's put together a superposition (of energy eigenstates) wave packet.
At time t = 0, this can be expressed as
\[ \Psi_I(x, t=0) = \frac{1}{\sqrt{2\pi}}\int_0^{K_0} g(k)\left[e^{ikx} + e^{-2i\theta(k)}\,e^{-ikx}\right]dk \tag{18.17} \]
\[ \Psi_I(x, t=0) = \frac{1}{\sqrt{2\pi}}\int_0^{K_0} g(k)\,e^{ikx}\,dk + \frac{1}{\sqrt{2\pi}}\int_0^{K_0} g(k)\,e^{-i[kx + 2\theta(k)]}\,dk \tag{18.18} \]
2π 0 2π 0
we choose g(k) to have a narrow peak centered on k = k0 < K0 . We also
choose g(k) to be real (e.g. narrow Gaussian in k). Thus, by completing each
“eigenfunction seed”, using
h̄k 2
ω(k) = (18.19)
2m
Z K0
1
Ψ(x, t) = √ g(k)ei[kx−ω(k)t] dk
2π 0
Z K0
1
+ √ g(k)e−[kx+ω(k)t+2θ(k)] dk
2π 0
Of these, the first is the incident packet; the second is the reflected packet.
The incident packet moves in from x = −∞ with speed dω/dk|_{k=k₀} = ℏk₀/m; thus its
center x_inc is x_inc = (ℏk₀/m)t. This packet is thus "on course" to arrive at x = 0 at
t = 0. Now consider the phase function θ(k). Since g(k) is narrowly peaked
around k₀, over the important range of k,
\[ \theta(k) \approx \theta(k_0) + \left.\frac{d\theta}{dk}\right|_{k=k_0}(k - k_0) \tag{18.20} \]
Thus, the reflected packet is
\[ \Psi_{\rm refl}(x,t) = \frac{1}{\sqrt{2\pi}}\int_0^{K_0} g(k)\,e^{-i[kx + \omega(k)t + 2\theta_0 + 2\theta_0'(k-k_0)]}\,dk \tag{18.21} \]
where θ₀ ≡ θ(k₀) and θ₀′ ≡ dθ/dk|_{k=k₀}. Defining ω₀ ≡ ω(k₀),
\[ \Psi_{\rm refl}(x,t) = \frac{1}{\sqrt{2\pi}}\,e^{-2i\theta_0}\,e^{-i\omega_0 t}\int_0^{K_0} g(k)\,e^{-i[kx + (\omega-\omega_0)t + 2\theta_0'(k-k_0)]}\,dk \tag{18.22} \]
Since the important range of k is narrow, for all important k,
\[ k - k_0 \equiv \delta k \approx \left.\frac{dk}{d\omega}\right|_{k=k_0}\delta\omega = \left.\frac{dk}{d\omega}\right|_{k=k_0}(\omega - \omega_0) \tag{18.23} \]
So
\[ \Psi_{\rm refl}(x,t) \approx \frac{1}{\sqrt{2\pi}}\,e^{i(\omega_0 t - 2\theta_0)}\int_0^{K_0} g(k)\,e^{-i\left[kx + (\omega-\omega_0)\left(t - \frac{2\theta_0'}{\left.d\omega/dk\right|_{k=k_0}}\right)\right]}\,dk \tag{18.24} \]
or
\[ \Psi_{\rm refl}(x,t) \approx \frac{1}{\sqrt{2\pi}}\,e^{2i(\omega_0 t - \theta_0)}\,e^{-\frac{2i\omega_0\theta_0'}{v_g}}\int_0^{K_0} g(k)\,e^{-i\left[kx + \omega\left(t - \frac{2\theta_0'}{v_g}\right)\right]}\,dk \tag{18.25} \]
where
\[ v_g \equiv \left.\frac{d\omega}{dk}\right|_{k=k_0} \tag{18.26} \]
The phase factors in front do not affect the probability density |Ψ|², so
\[ \Psi_{\rm refl}(x,t) \sim \int_0^{K_0} g(k)\,e^{-i\left[kx + \omega\left(t - \frac{2\theta_0'}{v_g}\right)\right]}\,dk \tag{18.27} \]
Comparing equations (18.27) and (18.28), we see that the reflected packet
moves in the −x direction at the classical velocity v_g = dω/dk|_{k=k₀}, but is
delayed in leaving x = 0 by an amount of time
\[ \tau = \frac{2\theta_0'}{v_g} \tag{18.29} \]
(to see this, note that t in equation (18.28) is replaced in equation (18.27) by
t − τ). A little algebra shows that
\[ \tau = \frac{2m}{\hbar k_0\sqrt{K_0^2 - k_0^2}} \tag{18.30} \]
Thus, in disagreement with classical physics, the particle is not reflected
immediately − it is said that “the particle spends time τ in the classically
forbidden region (x > 0) before heading back.” The following sequence of
figures illustrates this.
fig 2
[end of graduate student section]
Note the relatively long interval of time during which the packet interacts
with the step − this is a reflection (no pun intended) of the delay time τ .
The sharp interference “fringes” occur as a result of the extreme sharpness
of the step − they are the result of interference between “incident” and
“reflected” components of the packet. With a smoother change from V = 0
to V = V0 , the interference fringes are much less pronounced.
fig 3
The eigenfunction for the energy E is of the form (as you should show)
\[ \psi_I = A e^{ik_1 x} + B e^{-ik_1 x},\qquad k_1 = \frac{\sqrt{2mE}}{\hbar} \tag{18.31} \]
\[ \psi_{II} = C e^{ik_2 x} + D e^{-ik_2 x},\qquad k_2 = \frac{\sqrt{2m(E - V_0)}}{\hbar} \tag{18.32} \]
For definiteness, we specify that a particle is "incident from the left"; thus we
set D = 0 ("there is nothing out at x = +∞ to cause back reflections"). Thus we
have three unknowns (A, B, C). Continuity of ψ and ψ′ give two equations,
thus one unknown is unspecified. We take this one to be A (arbitrary incident
amplitude). The algebra then yields
\[ B = \frac{k_1 - k_2}{k_1 + k_2}\,A \tag{18.33} \]
\[ C = \frac{2k_1}{k_1 + k_2}\,A \tag{18.34} \]
We define a "reflection coefficient" and a "transmission coefficient" as
\[ R = \frac{S_{\rm reflected}}{S_{\rm incident}} = \frac{|B|^2}{|A|^2} \tag{18.35} \]
\[ T = \frac{S_{\rm transmitted}}{S_{\rm incident}} = \frac{k_2}{k_1}\frac{|C|^2}{|A|^2} = \frac{v_2}{v_1}\frac{|C|^2}{|A|^2}\quad(\text{since } k \propto v) \tag{18.36} \]
\[ R = \left(\frac{k_1 - k_2}{k_1 + k_2}\right)^2 \tag{18.37} \]
\[ T = \frac{4k_1 k_2}{(k_1 + k_2)^2} \tag{18.38} \]
As it must be, the sum R + T = 1 (a given particle is either transmitted or it
is reflected). Note the definite break with classical physics here − as we say,
a given incident particle is either reflected or transmitted − it never splits.
The probability of reflection is given by R, and R decreases with increasing
incident energy. The probability of transmission T increases with incident
energy (as you can see in figure 18.5). Note that, in the case E < V₀, R = 1
and T = 0. This makes good sense − R = 1 since the reflected amplitude
differs from the incident amplitude only by a phase factor, as we saw last
class. T = 0 for E < V₀ since then ψ_II is not a traveling wave, so S_II = 0.
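The E > V₀ formulas are easy to check numerically (a sketch of mine, not from the notes; ℏ = m = 1 and an arbitrary V₀ = 1 are assumed): R falls and T rises with energy, and R + T = 1 at every energy.

import numpy as np

V0 = 1.0
for E in (1.2, 2.0, 5.0, 20.0):
    k1 = np.sqrt(2*E)
    k2 = np.sqrt(2*(E - V0))
    R = ((k1 - k2)/(k1 + k2))**2          # eq (18.37)
    T = 4*k1*k2/(k1 + k2)**2              # eq (18.38)
    print(E, round(R, 4), round(T, 4), round(R + T, 4))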
Forming an incident wave packet for the case E_inc > V₀ leads to splitting of
the packet − part is transmitted and part is reflected. Remember, however,
that the reflected packet (or its modulus squared) represents the probability
of reflection in a given case. If I send in a beam of identical particles all in
the same packet state,
fig 4 2
R is the fraction of particles reflected and T is the fraction transmitted. A
given particle either reflects or transmits. How does a given particle “know”
if it must reflect or transmit? Good question − this is quantum mechanics!
fig 5
² Figure from Cohen-Tannoudji et al.
Chapter 19
We last dealt with the “step potential”; now we terminate the step before
x → ∞ and thus consider the rectangular “barrier potential”
Fig 1
We consider today the case E < V0 and treat the eigenfunction. Of course,
a classical particle with energy E < V0 and incident from the left would simply
bounce back from the first wall with 100% probability. In quantum mechan-
ics, as with the step potential, there is probability (but not 100%, as with
the step) of reflection at the left barrier, but here there is, as we will see,
finite probability that a traveling probability current will be excited in the
region to the right of the barrier (region III in figure 19.1) in spite of the
intervening classically forbidden region. As I mentioned to you, without the
existence of this usually very, very small effect, called “quantum tunneling”,
the sun would not shine, and therefore, we would not live. Other applications
of the effect abound: conduction of electrons in solids (tunneling of electrons
through lattice ions), ammonia masers, radioactive decay, field emission pro-
cesses, new semiconductor devices, the "scanning tunneling microscope", etc.
Let us find the eigenfunction: the time independent Schrödinger equation is
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V(x)\,\psi(x) = E\,\psi(x) \tag{19.1} \]
So, in region I, the Schrödinger equation becomes
\[ \psi_I''(x) = -\frac{2mE}{\hbar^2}\,\psi_I(x) \tag{19.2} \]
with solution
\[ \psi_I(x) = A e^{ikx} + B e^{-ikx} \tag{19.3} \]
where k = \sqrt{2mE}/\hbar. In region II, the Schrödinger equation is
\[ \psi_{II}''(x) = +\frac{2m(V_0 - E)}{\hbar^2}\,\psi_{II}(x) \tag{19.4} \]
with solution
\[ \psi_{II}(x) = C e^{-\kappa x} + D e^{+\kappa x} \tag{19.5} \]
with κ = \sqrt{2m(V_0 - E)}/\hbar; here we must keep both C and D, since x does not go to infinity in
region II. In region III,
\[ \psi_{III}(x) = F e^{ikx} + G e^{-ikx} \tag{19.6} \]
where k = \sqrt{2mE}/\hbar. The second part of this is a reflection; since there is nothing
out at infinity to cause this, we must have G = 0.
out at infinity to cause this, we must have G = 0. Now we must use the
boundary conditions to stitch together the three parts of the eigenfunction.
The boundary conditions are, of course
• continuity of ψ at x = 0
• continuity of ψ 0 at x = 0
• continuity of ψ at x = a
• continuity of ψ 0 at x = a
As you can easily show, applications of these four conditions leads to four
equations
1. A + B = C + D
2. ikA − ikB = −κC + κD
3. Ce−κa + Deκa = Feika
4. −κCe−κa + κDeκa = ikFeika
We have five unknowns and only four equations, so we could express B, C,
D, and F in terms of A, which can later be set by the overall normalization
condition¹. However, since the algebraic situation is a little complicated
here, it is best to focus on a more specific goal − of the greatest interest is
determining the fraction of the probability current that leaks into region III,
since that is a measure of the "tunneling". That fraction is the "transmission
coefficient":
\[ T = \frac{\text{current in region III}}{\text{current in region I}} = \frac{v_{III}\,|F|^2}{v_I\,|A|^2} = \frac{|F|^2}{|A|^2} \tag{19.7} \]
since v_{III} = v_I. Here is an outline² of a relatively quick route to this goal:
1. Eliminate B: multiply our first boundary condition by ik and add the
result to the second boundary condition, obtaining
\[ 2ikA = (ik - \kappa)\,C + (ik + \kappa)\,D \tag{19.8} \]
2. Now multiply the third boundary condition by −κ (and then by +κ) and add the result to
the fourth boundary condition; we obtain
\[ C = \frac{\kappa - ik}{2\kappa}\,e^{\kappa a}\,e^{ika}\,F \tag{19.9} \]
\[ D = \frac{\kappa + ik}{2\kappa}\,e^{-\kappa a}\,e^{ika}\,F \tag{19.10} \]
Now, putting both of these into equation (19.8), the result is
\[ \frac{F}{A} = \frac{4i\kappa k}{\left[(\kappa + ik)^2 e^{-\kappa a} - (\kappa - ik)^2 e^{\kappa a}\right]}\,e^{-ika} \tag{19.11} \]
thus
\[ T(E) = \left|\frac{F}{A}\right|^2 = \frac{16\kappa^2 k^2\,e^{-2\kappa a}}{\left|(\kappa - ik)^2 - (\kappa + ik)^2 e^{-2\kappa a}\right|^2} \tag{19.12} \]
Although equation (19.12) is in fairly unwieldy form due to the | |² in the
denominator, it will be useful to us. Further algebra leads to the form most
commonly seen in textbooks:
\[ T(E) = \left[1 + \frac{1}{4}\left(\frac{k}{\kappa} + \frac{\kappa}{k}\right)^2\sinh^2(\kappa a)\right]^{-1} \tag{19.13} \]
As you can easily show, an alternate form of equation (19.13) that shows the
dependence on the energy explicitly is
\[ T(E) = \left[1 + \frac{\sinh^2(\kappa a)}{4\frac{E}{V_0}\left(1 - \frac{E}{V_0}\right)}\right]^{-1} \tag{19.14} \]
\[ = \left[1 + \frac{\sinh^2\!\left(\sqrt{\frac{2mV_0 a^2}{\hbar^2}}\sqrt{1 - \frac{E}{V_0}}\right)}{4\frac{E}{V_0}\left(1 - \frac{E}{V_0}\right)}\right]^{-1} \tag{19.15} \]
We'll look at a plot of this later, but for now, the main point is that T > 0
for E < V₀! The math reason "why" this occurs is that the combination of
rising and decaying exponentials in region II is not dead at the door to region
III.
The reflection coefficient, R = |B|²/|A|², is not unity, as it is with the potential
step, since the incoming current splits at the first wall − some goes on to
region II, and some heads back. Thus, a given incident particle will definitely
either reflect or tunnel; quantum mechanics can only tell us the probabilities.
Of course, this is highly nonclassical behavior for a particle, but it is
familiar behavior with classical waves − in fact, the same phenomenon occurs
in optics, as predicted by the wave theory (of optics) − Newton discovered
that light can “tunnel” from one prism to another even with an air-gap in
between − the wave is not traveling, but only “evanescent” on the air-gap −
this interesting phenomenon is called “frustrated total internal reflection”.
fig 2
In quantum mechanics, the reflection probability is
R(E) = 1 − T (E) (19.16)
Let us now look at the probability density for this eigenfunction.
fig 3
P (x, t) is constant in region III, since the eigenstate is a pure traveling
wave there. Note how the amplitude is diminished there compared to region
I − because of the exponential drop (essentially) in region II. In any case, we
see clear possibility of “tunneling through the classically forbidden region”.
The “surviving” state function in region III is a pure traveling wave. Note
that, in region I, P (x, t) has peaks and valleys. This is because it is a mixture
of right and left traveling waves. If the reflection coefficient were 1, as it is
for the simple step potential, then this would be a pure standing wave and
the valleys of P (x, t) would go down to zero. But, here, R 6= 1 (R = 1 − T ),
so we don’t get a pure standing wave − we get a mixed standing-traveling
wave (in region I).
Typical cases occur in atomic physics; there, energies and barrier heights are
of eV order, and barrier widths are, say, of molecular size (∼ 0.5 nm).
Example³: Say electrons of kinetic energy 1 eV encounter a barrier of height
2 eV and width 0.5 nm. Then the parameter K ≡ \sqrt{\frac{2mV_0}{\hbar^2}}\,a which appears in
T has value
\[ K = \frac{\sqrt{2\,(0.511\times 10^6\,{\rm eV}/c^2)\,(2\,{\rm eV})}}{197.3\,{\rm eV\cdot nm}/c}\cdot(0.5\,{\rm nm}) \approx 3.6 \tag{19.17} \]
Then for this case
\[ T = \frac{1}{1 + \dfrac{\sinh^2\!\left(3.62\sqrt{1 - 0.5}\right)}{4\,(0.5)(0.5)}} \approx 0.024 \tag{19.18} \]
So, about 1 electron in 40 will penetrate the barrier.
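This example is easy to reproduce numerically from equation (19.15) (a sketch of mine, not from the notes; the electron mass and ℏc values are in eV-nm units):

import numpy as np

mc2   = 0.511e6        # electron rest energy, eV
hbarc = 197.3          # hbar*c in eV nm

def T_barrier(E, V0, a_nm):
    # transmission through a rectangular barrier with E < V0, eq (19.15)
    kappa_a = np.sqrt(2*mc2*(V0 - E)) / hbarc * a_nm
    return 1.0 / (1.0 + np.sinh(kappa_a)**2 / (4*(E/V0)*(1 - E/V0)))

print(T_barrier(1.0, 2.0, 0.5))        # ~0.024, i.e. about 1 electron in 40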
Of course, one can always calculate T(E) from equations (19.12), (19.13), or (19.15)
and get a numerical answer, but for insight in applications⁴ it is
useful to have simple-form approximations in each of two limiting cases.
These are
³ From Professor R. Hill's book manuscript Basic Quantum Mechanics
⁴ one or two of which we'll discuss in the next class
However, we want a better approximation than this − T should be a bit less
than 1, and we want an easy-to-use approximation that will tell us how much
less than 1. For this, we use the following⁵ method:
\[ T = \frac{16k^2\kappa^2\,e^{-2\kappa a}}{\left|(\kappa - ik)^2 - (\kappa + ik)^2 e^{-2\kappa a}\right|^2} \tag{19.24} \]
We replace the e^{−2κa} inside the denominator by 1 (the barrier is thin, so κa is small), but keep the overall factor of e^{−2κa}.
Then
\[ T \approx \frac{16k^2\kappa^2}{16k^2\kappa^2}\,e^{-2\kappa a} \approx e^{-2\kappa a} \tag{19.25} \]
Therefore, the approximation of the transmission coefficient for the thin,
finite square barrier is
\[ T \approx e^{-2\kappa a} \tag{19.26} \]
Thus, in the case of the thin barrier, the exponential e^{−2κa} not only
dominates T, but is actually approximately equal to it.
Now consider the case of an arbitrarily shaped barrier⁶. This can be approxi-
mated as a sequence of thin barriers:
fig 3
The points x1 and x2 delineate the boundaries of the classically forbidden
region. (Thus, the “barrier” extends from x1 to x2 ). In approximation7 the
overall transmission factor is, then
T ≈ T1 T2 T3 · · · TN (19.27)
thus
\[ T \approx \prod_{i=1}^{N} e^{-2\kappa_i\Delta x_i} = e^{-2[\kappa_1\Delta x_1 + \kappa_2\Delta x_2 + \cdots + \kappa_N\Delta x_N]} \tag{19.29} \]
Taking the limit as ∆x_i → 0, N → ∞, we have, for our approximation,
\[ T \approx e^{-2\int_{x_1}^{x_2}\kappa(x)\,dx} \tag{19.30} \]
or
\[ T \approx e^{-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m[V(x) - E]}\,dx} \tag{19.31} \]
The formula above, Gamow's approximation, has many applications. Our
derivation of it has been nonrigorous, but the same result is obtained us-
ing a somewhat more sophisticated approximation scheme called the "WKB
Approximation", which is treated in an elementary way in chapter 8 of the
Griffiths text.
Note: Many books derive Gamow's result for a high and wide but arbi-
trarily shaped barrier in a less rigorous way − by approximating the barrier
as a sequence of "moderately wide" rectangular barriers, dropping the pref-
actor for the transmission for each, then multiplying the T_i's, and then
pretending that the moderately wide sub-barriers are thin enough to pass
over to the integral in a "limit".
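As an illustration of how equation (19.31) is used in practice (my sketch, not from the notes; ℏ = m = 1 and an inverted-parabola barrier with arbitrary parameters are assumed), the exponent is just a numerical integral of κ(x) over the classically forbidden region:

import numpy as np

V0, L = 10.0, 2.0
E = 4.0                                        # incident energy, E < V0
x = np.linspace(-L, L, 200001)
V = V0 * (1 - (x/L)**2)                        # illustrative smooth barrier

# Gamow / WKB estimate, eq (19.31): kappa is zero outside the turning points
kappa = np.sqrt(2.0 * np.clip(V - E, 0.0, None))
T_gamow = np.exp(-2.0 * np.trapz(kappa, x))
print(T_gamow)                                 # a very small transmission probability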
Chapter 20
are the places where V (r) = E, so the kinetic energy is zero. Frequently in
applications one “starts” with the particle bound inside a finite classically
allowed region (as mentioned above) in which the particle is “free”, and one
is interested in the probability per second of tunneling across the barrier to
the "outside world of freedom" (the "escape rate"). In these applications, one
often adopts a simple semiclassical picture in which the particle, in the initial
confined region (r = 0 to r = r1 , as in figure 20.2) travels repeatedly back
and forth due to “bounces” (reflection probability) off the “wall” of the well.
In this semiclassical picture, the escape rate is given by
escape rate = (# of collisions with the “wall” per second) × (probability
of tunneling per collision)
The "probability of tunneling per collision" is just the transmission coef-
ficient T(E) supplied, in approximation, by Gamow's formula.
Of course, the semiclassical picture of a “bouncing back-and-forth particle
in the well” is not rigorous − it is merely a semiclassical picture. The really
correct way to approach the problem, however, is quite mathematical1 . We
use the semiclassical picture because it is simpler and because, to the estima-
tion accuracy we are interested in, it works (e.g. the following discussion).
The half life (τ) for this is 1622 years! (This is incredibly long on what we now
know as the typical time scale for nuclear interactions − the time it would
take a light-speed impulse to cross a nucleus is ∼ 10⁻²³ seconds!) But some
alpha emitters have much shorter half-lives, e.g. ₈₄Po²¹² has τ ≈ 3 × 10⁻⁷
¹ see, e.g., Quantum Theory by David Bohm (Dover), chapter 12.
² Our discussion here is based on that of E. Wichmann (class handout); the original work was done by George
Gamow around 1928.
seconds! At the other extreme, ₉₂U²³⁸ has τ ≈ 4.2 × 10⁹ years! Any good
theory will have to explain this enormous range of lifetimes!
Another fact: Each decaying isotope emits (usually) its alpha particle
with a unique energy, usually in the range 4-10 MeV (K.E.) It was found
that there is a rough correlation between the lifetime and the energy of the
emitted alpha particle.
\[ \log(\tau) \sim \frac{1}{\sqrt{E}}\quad\text{(roughly)} \tag{20.3} \]
A good theory should explain this also.
So, we have some incredibly daunting demands for any theory that seeks
to "explain" alpha decay. This was the challenge that Gamow and Condon
took up with the then fledgling theory of quantum mechanics.
1. Pretend that the alpha particle exists intact inside the nucleus. It is held
in by the very strong nuclear force, which presents a formidable barrier
potential energy profile against the alpha particle leaving the nucleus.
2. The short-range strong nuclear force (range ≈ 10−15 m), while essentially
zero outside the parent nucleus, must be very strong inside and must vary
strongly with position inside the nucleus. Nevertheless, for simplicity, the
model assumes that the V (r) that the “alpha particle inside the nucleus”
is subject to is flat (constant) − in other words, replace the actual V (r)
inside the nucleus with its average value for 0 < r ≤ R (R is the radius
of the nucleus).
3. Since the strong nuclear force on an alpha particle is essentially zero if
it is outside the nucleus, the force on the alpha-particle outside is purely
electromagnetic. Since Z_α = 2, if we let Z′ be the charge of the daughter
nucleus (Z′ = Z − 2), then outside the nucleus,
\[ V(r) = \frac{2e^2 Z'}{4\pi\epsilon_0 r}\qquad r > R \tag{20.4} \]
Thus, the picture in this simple model is the following: The alpha-particle is
caught in a deep square well, inside of which it feels no force. Thus, in a
semiclassical picture, it bounces back and forth between the boundaries of
the parent nucleus. V(r) looks like the following (in the model):
fig 3
fig 4
Each time the alpha particle collides with the “nuclear wall”, since it
is “merely” very, but not infinitely high (in V ) there is some small (very
small) probability to leak out (transmission coefficient for penetration of the
Coulomb barrier from inside the square well). With enough collisions with
the wall, the accumulated chance to leak out should be “easily” relatable to
the observed half-life for emission of the alpha-particle. Calculating this was
Gamow's (and Condon's) program − it was a means, essentially, of testing
quantum mechanics.
Let’s take a moment to check the scale of figure 20.4 for V (r). Let Rc
be the classical turning point for a hypothetical particle of energy E (same
as the emitted alpha particle energy) trying to get into the well from the
outside. Then
\[ E = \frac{1}{4\pi\epsilon_0}\,\frac{2Z'e^2}{R_c} \tag{20.5} \]
This can be solved for R_c:
\[ R_c = \frac{1}{4\pi\epsilon_0}\,\frac{2Z'e^2}{E} \tag{20.6} \]
Let's see what this is for the case of the alpha decay of ₈₈Ra²²⁶:
\[ A = 226 \;\Rightarrow\; R \approx 7\,{\rm f} < 50\,{\rm f} \approx R_c \tag{20.7} \]
So, qualitatively, the picture is correct; however, the barrier should have been
drawn much thicker and much higher!
20.2.2 Summary of Goals for the Calculation
Before getting involved in the details of the calculation, let us summarize the
main goals: We want to calculate, via the model, half-lives for alpha decay
for radioactive nuclei and compare the predictions to the experimental data.
To do this we need to
1. Calculate the approximate transmission coefficient T (E) using Gamow’s
approximation
2. Multiply T (E) by the factor f , the frequency of collisions with the barrier
wall. (Of course, to do this, we first have to calculate, or estimate, f ).
The result of this is the decay rate R = T · f
escape probability per second = probability per “knock”×# of knocks per second
3. We must then convert R to the half life. For this, we note that the
mean lifetime τ = 1/R. The half life, τ_{1/2}, is then related to τ by an
arithmetical factor: since radioactive decay follows the exponential decay
law, N(t) = N(t = 0)e^{−t/τ}, as you can easily show, τ_{1/2} and τ are related
by a factor of ln(2).
the nuclear radius, the "back and forth time" (τ₀) is
\[ \tau_0 = \frac{2R}{v} \tag{20.8} \]
where v = \sqrt{2E/m_\alpha}, so
\[ \tau_0 = 2R\,\sqrt{\frac{m_\alpha}{2E}} \tag{20.9} \]
The collision frequency f is 1/τ₀ in this model:
\[ f = \frac{1}{\tau_0} = \frac{1}{2R}\,\sqrt{\frac{2E}{m_\alpha}} \tag{20.10} \]
It is helpful to get an idea of the magnitude of f for a typical case. We take
the case of ₈₈Ra²²⁶:
\[ f \approx \frac{c}{2R}\,\sqrt{\frac{2E}{m_\alpha c^2}} \tag{20.11} \]
With the numbers for this case, f comes out to be of order 10²¹ collisions per second.
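For a rough numerical check (my numbers, not from the notes; an alpha kinetic energy of about 4.9 MeV and R ≈ 7.3 f for Ra-226 are assumed):

import numpy as np

E       = 4.87e6 * 1.602e-19     # assumed alpha kinetic energy, ~4.9 MeV, in joules
m_alpha = 6.64e-27               # kg
R       = 7.3e-15                # nuclear radius, ~7.3 fermi

v = np.sqrt(2*E/m_alpha)         # ~1.5e7 m/s
f = v / (2*R)                    # collisions with the wall per second
print(v, f)                      # f ~ 1e21 per second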
(Gamow, 1928)
Recall that
\[ \ln(T) \approx -\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\left[V(x) - E\right]}\,dx \tag{20.13} \]
Here, V(r) = \frac{2e^2 Z'}{4\pi\epsilon_0 r} for the barrier region, so
\[ \ln(T) \approx -\frac{2}{\hbar}\int_{r_1}^{r_2}\sqrt{2m_\alpha\left(\frac{2e^2 Z'}{4\pi\epsilon_0 r} - E\right)}\,dr \tag{20.14} \]
Here, r₁ and r₂ are the two turning points. Now, E = \frac{1}{4\pi\epsilon_0}\frac{2Z'e^2}{r_2} (conservation
of energy), so
\[ \ln(T) \approx -\frac{2\sqrt{2mE}}{\hbar}\int_{r_1}^{r_2}\sqrt{\frac{r_2}{r} - 1}\,dr \tag{20.15} \]
This integral is doable; the result is
\[ \ln(T) \approx -\frac{2\sqrt{2mE}}{\hbar}\left[r_2\left(\frac{\pi}{2} - \sin^{-1}\sqrt{\frac{r_1}{r_2}}\right) - \sqrt{(r_2 - r_1)\,r_1}\right] \tag{20.16} \]
In our case, r₁ ≪ r₂, so
\[ \ln(T) \approx -\frac{2\sqrt{2mE}}{\hbar}\left[\frac{\pi}{2}\,r_2 - 2\sqrt{r_1 r_2}\right] \tag{20.17} \]
Again using E = \frac{1}{4\pi\epsilon_0}\frac{2Z'e^2}{r_2} \Rightarrow r_2 = \frac{1}{4\pi\epsilon_0}\frac{2Z'e^2}{E}, and evaluating the constants, gives
\[ \ln(T) \approx -\frac{e^2}{4\pi\epsilon_0}\,\frac{2\pi\sqrt{2m}}{\hbar}\,\frac{Z'}{\sqrt{E}} + \sqrt{\frac{e^2}{4\pi\epsilon_0}}\,\frac{8\sqrt{m}}{\hbar}\,\sqrt{Z' r_1} \tag{20.18} \]
or
\[ \ln(T) \approx -3.96\,\frac{Z'}{\sqrt{E\ (\text{in MeV})}} + 2.97\,{\rm f}^{-1/2}\sqrt{Z' r_1} \tag{20.19} \]
where f = Fermi = 10⁻¹⁵ meters. As a rough approximation to the above, we
take Z′ ≈ 86, R ≈ 7.3 f for all alpha-decaying nuclei. (Recall that R ∼
1.1A^{1/3} f for all nuclei, and for the archetypal nucleus ₈₈Ra²²⁶, A = 226 ⇒ R ≈
7.3 f.) Converting, then, to base 10 logarithms (customary for plotting later),
one winds up with, as a rough approximation,
\[ \log_{10}(T) \approx -\frac{148}{\sqrt{E\ (\text{in MeV})}} + 32.5 \tag{20.20} \]
20.2.5 Comparison with Experiment
We now put the pieces together to compare with experimental data. We have
τ = mean lifetime = 1/R = τ₀/T (since f = 1/τ₀). As we remarked, the values of τ
for different nuclear species subject to alpha decay vary over many orders of
magnitude; therefore, for a global comparison it is good to deal with log(τ)
rather than τ. In fact, it is traditional to deal with log₁₀(τ). Thus, from the
above,
\[ \log_{10}(\tau) = \log_{10}(\tau_0) - \log_{10}(T) \tag{20.21} \]
From equation (20.20), this is
\[ \log_{10}(\tau) = \log_{10}(\tau_0) + \frac{148}{\sqrt{E\ (\text{in MeV})}} - 32.5 \tag{20.22} \]
Now, as we saw, for the case of ₈₈Ra²²⁶, f = 1/τ₀ ∼ 10²¹, so, for this case,
log₁₀(τ₀) = −21. As you can convince yourself by looking at values of E for
different parent nuclei, log₁₀(τ₀) varies at most ±1 or so from case to case;
this variation is very minor compared to the other two terms of equation
(20.22). Therefore, since we are only interested in an approximate answer,
we may as well approximate log₁₀(τ₀) as −21 in all cases. Thus, the theory
predicts
\[ \log_{10}(\tau) \approx \frac{148}{\sqrt{E\ (\text{in MeV})}} - 53.5 \tag{20.23} \]
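The enormous spread of predicted lifetimes is easy to see by evaluating equation (20.23) over the typical alpha-energy range (a sketch of mine, not from the notes; individual cases can still be off by a few orders of magnitude, as remarked below, but the trend with E is the point):

import numpy as np

def log10_tau_seconds(E_MeV):
    # model prediction, eq (20.23); tau comes out in seconds
    return 148.0/np.sqrt(E_MeV) - 53.5

for E in (4.0, 5.0, 6.0, 8.0, 9.0):
    print(E, round(log10_tau_seconds(E), 1))   # spans roughly 25 orders of magnitude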
In spite of some rough approximations on the way to equation (20.23), we⁴
compare it to the data for known alpha emitters in figure 20.6.
fig 6
The results are a spectacular confirmation of quantum mechanics: notice
that the trend⁵ of the theory is correct over 23 orders of magnitude!
The dotted line (theory prediction) is actually for the mean lifetime, whereas
the experimental points are half-lives. As we remarked,
\[ \tau_{1/2} = (\ln 2)\,\tau_{\rm mean} \approx (0.69)\,\tau_{\rm mean} \tag{20.24} \]
However, given the approximations of our discussion and the latitude in the
plot from the dotted line, the factor ln(2) is not worth bothering with here.
⁴ The plot in figure (20.6) is from Quantum Mechanics (Berkeley Physics Series, Volume IV) by Eyvind Wich-
mann (McGraw-Hill)
⁵ of course, for individual cases (see plot) the errors can be of order a factor of 100 or even 1000, but the important
thing is the trend
20.3 Another Application − Fusion in the Sun’s Core
For fusion, protons have to get together to fuse, but, as we remarked in the
beginning of the course, there is a ∼ 1 MeV Coulomb barrier in the way, and,
in the core of the Sun, the mean proton energy is only ∼ 1/1000 of this.
fig 7
Here we have the "opposite" of the alpha decay problem we just treated:
a proton tries to get into the deep square well of attraction presented by the
other proton from the outside, but in the way is a very high Coulomb barrier!
Now, even with quantum mechanics, a simultaneous fusion of the 4 protons
necessary is very unlikely − we must look for a chain. We believe that the
correct chain for the sun is (Bethe)
₁H¹ + ₁H¹ → ₁H² + e⁺ + ν
₁H² + ₁H¹ → ₂He³ + γ
₂He³ + ₂He³ → ₂He⁴ + ₁H¹ + ₁H¹
The potential energy profile is shown in figure 20.9
fig 9
As we will see in the near future, a state with the N-ion “on one side” is
not an eigenstate of the Hamiltonian for this potential energy profile. Conse-
quently, if a measurement materializes the N on one side, the state function
will change over the course of time. As we will see, the initial state function
has a certain probability per unit time to "diffuse" through the forbidden
region, and, given enough time, can end up almost completely on the other
side − in fact, analysis shows that the N-ion will tunnel back and forth across
the barrier at a fixed frequency! A calculation of this frequency (which we
do not do here) shows it to be
This forms the basis of the Ammonia MASER. A maser of this sort was
used in the discovery of the cosmic microwave background radiation that
permeates the universe. Now, the N nucleus is charged, and as you know,
classically, a charge moving back and forth in space emits radiation. This
radiation, at this specific frequency, has been detected by radio telescopes
as originating in interstellar space, thus signaling the presence of Ammonia
molecules in the interstellar medium.
• “scanning-tunneling microscope”, etc.
fig 11
Finally, the following sequence of figures shows the situation when a wave
packet with E = V₀/2 is incident on a barrier. The strong interference fringes
are an artifact of the sharpness of the barrier.
fig 12
Chapter 21
fig 1
For this case we have traveling wave eigenfunctions in all three regions:
\[ \psi_I(x) = A e^{ik_1 x} + B e^{-ik_1 x},\qquad k_1 = \frac{\sqrt{2mE}}{\hbar} \tag{21.1} \]
\[ \psi_{II}(x) = C e^{ik_2 x} + D e^{-ik_2 x},\qquad k_2 = \frac{\sqrt{2m(E - V_0)}}{\hbar} \tag{21.2} \]
\[ \psi_{III}(x) = F e^{ik_3 x} + G e^{-ik_3 x},\qquad k_3 = k_1 \tag{21.3} \]
This is still valid if E > V₀! ("The math doesn't know the difference").
However, if E > V₀, κ₂ is now imaginary:
\[ \kappa_2 = \frac{\sqrt{2m(V_0 - E)}}{\hbar} = \frac{\sqrt{(-1)\,2m(E - V_0)}}{\hbar} = i\,\frac{\sqrt{2m(E - V_0)}}{\hbar} \]
and now (E − V₀) > 0, so κ₂ is imaginary. Therefore, if we define ik₂ ≡ κ₂
with k₂ real, then indeed
21.2 Resonance Transmission
Notice that it has maxima (“resonances”) when sin2 (k2 a) = nπ− then T =
100% (perfect transmission − no reflection!)! Let us investigate this, these
transmission resonances occur when
k2 a = nπ n = 1, 2, 3, · · · (21.11)
2π
a = nπ (21.12)
λ2
2a
λ2 = (21.13)
n
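The resonances are easy to exhibit numerically (my sketch, not from the notes; ℏ = m = 1 and arbitrary V₀ = 1, a = 3 are assumed, and the E > V₀ transmission formula used below is the sinh → sin continuation of equation (19.14)):

import numpy as np

V0, a = 1.0, 3.0

def T_over_barrier(E):
    # transmission for E > V0: eq (19.14) with kappa -> i k2, sinh -> sin
    k2a = np.sqrt(2*(E - V0)) * a
    return 1.0/(1.0 + np.sin(k2a)**2/(4*(E/V0)*(E/V0 - 1.0)))

# perfect transmission where k2*a = n*pi, i.e. E = V0 + (n*pi/a)^2/2
for n in (1, 2, 3):
    E_res = V0 + (n*np.pi/a)**2/2.0
    print(n, round(E_res, 3), round(T_over_barrier(E_res), 6))   # -> 1.0 each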
How can we understand this physically? Consider a DeBroglie plane wave
incident from the left − part of it reflects at the “front wall” (call that part
r1 ) and part reflects at the back wall (call that part r2 ).
fig 2
r1 and r2 combine and interfere with each other in region I − and their
resultant is the net reflection off the barrier 1 / our condition for resonant
transmission, λ2 = 2a n , means that the two reflections r1 and r2 are in phase,
and hence add, in region I. And so are higher order reflections (“back and
forth”) r3 , r4 , · · · But wait − that means that a resonance the reflections are
max, which means that the transmission should be min, not max! So what is
wrong? What is wrong is that we have neglected the fact that there is a 180◦
phase shift for r2 when it is formed at the bounce off wall 2. Why is there
a 180◦ phase shift there? To see why, let’s look at a situation in classical
wave physics. Say a wave pulse is incident from a less dense string to a more
dense string. Then, at the junction, there is partial transmission and partial
reflection − and, as you know, the reflection is inverted.
fig 3
1
ignoring, for the moment, back and forth multiple reflections resulting in an r1 r2 r1
The signal for inversion is an increase in wave number in going from
medium 1 to medium 2. You can check that k increases by noting that the
phase velocity v_φ = \sqrt{F/\mu} = λf = ω/k decreases in going to medium 2, and since
the frequency is the same in both media, k increases (λ decreases). (The same
sort of thing happens when light waves travel from a medium with index of
refraction n₁ to a medium with index n₂.) Now look at deBroglie waves. We
have, for any region with constant V,
\[ k = \frac{1}{\hbar}\sqrt{2m(E - V)} \tag{21.14} \]
Thus, the smaller V, the bigger k. So, for our barrier, k_III > k_II, and therefore,
the reflection r₂ suffers "inversion" (180° phase shift). However, r₁ does not.
(Why?) So at resonance, r_net = 0 ⇒ T = 1. Some plots showing the
behavior of T are shown for typical cases in figure 21.4. Of course, at all
values of E/V₀, R = 1 − T.
fig 4
to pass from one to the other, simply note that
\[ E \to E + V_0 \tag{21.15} \]
On the left, E is positive and less than V₀; on the right, E + V₀ is positive
and less than V₀ (since E < 0).
21.3.1 Finite Square Well, Incident Plane Wave State With E > 0
We consider
fig 7
We note that we choose to center the well at x = 0 (so it runs from
x = −a to x = a). This is done to simplify the search for solutions by taking
advantage of “parity” − see later. And, consider the case of an incident plane
wave with E > 0. Then, for the eigenfunction,
\[ \psi_I(x) = A e^{ik_1 x} + B e^{-ik_1 x},\qquad k_1 = \frac{\sqrt{2mE}}{\hbar} \tag{21.16} \]
\[ \psi_{II}(x) = C e^{ik_2 x} + D e^{-ik_2 x},\qquad k_2 = \frac{\sqrt{2m(E + V_0)}}{\hbar} \tag{21.17} \]
\[ \psi_{III}(x) = F e^{ik_1 x} \tag{21.18} \]
The easiest way to derive the transmission coefficient T ≡ |F|²/|A|² is to start with
\[ R = 1 - T \tag{21.20} \]
We see that, as with the repulsive barrier, there are situations in which the
well is "transparent" − i.e., T → 1 (and R → 0), namely, when
Naturally, this is the same condition as for the square barrier, namely nλ₂ =
4a = back and forth distance in region II. As you can show, this says that
for incident energies
\[ E = -V_0 + \frac{n^2\pi^2\hbar^2}{8ma^2} \tag{21.22} \]
there is perfect transmission. This perfect transmission was in fact first observed
experimentally, in 1921 (before quantum mechanics was discovered!), by
Ramsauer & Townsend when they scattered low energy electrons off of noble
gas atoms (like xenon) − at the right values of energy (given by the last
equation), the electrons essentially pass right through the atoms as if they weren't
there! Experiments showed transparency at an electron energy of ≈ 1 eV, and working backward
through the formula showed that the effective well depth presented by the
xenon atom is ≈ 10 eV. (We use ∼ 2 Å as the diameter of the atom.) For
an attractive square well, depending on the ratio of well width to depth, the
resonances in T can be very sharp, as in figure 21.9 and figure 21.10.
fig 9
fig 10
We conclude this section with a look at three cases (different mean en-
ergies) of Gaussian wave packets approaching and "scattering" from finite
square wells. (In each case, E > 0.)
fig 11
fig 12
fig 13
We take the well to "run from −a to a" for bound states E < 0 (where the
bottom of the well is at −V₀).
fig 14
As we remarked some time ago, since E < 0, regions I and III are for-
bidden; we expect most of the state function amplitude to be in region II
with (for eigenfunctions) only a bit of leakage into regions I and III. Thus,
for example, we might expect an eigenfunction to look, say, something like
this
fig 15
These states are bound states − they hang around in the well region.
Firstly, we note that the solution to the time independent Schrödinger
equation (i.e., the eigenfunction) in region II can be written equivalently in
any of the following ways:
These are all equivalent. The first form is convenient for considering proba-
bility currents (since it is composed of traveling wave pieces). However, for the
bound states we are interested in, the net current must be zero (no left-right
probability flux) and so the sine-cosine form is more convenient. Another rea-
son the sine-cosine form is more convenient is that these (sin and cos) are
functions of definite parity (odd and even). As in the case of the infinite well,
if the ordinate axis is centered, the eigenfunctions must have definite parity.
Mathematical proof of this assertion is in Professor Hill's book (section 4.11);
however, a "physics proof" of it is simply the following: As you know, in an
eigenstate, the probability density at any x is independent of time. Since the po-
tential energy is left-right symmetric around x = 0, then so must be ψ*(x)ψ(x)
− i.e., ψ*(x)ψ(x) must be an even function. There are only two ways for
this to happen − either ψ(x) is even or ψ(x) is odd. Which possibility corresponds
to the ground state? As noted above, in region II, k = √(2m(E + V₀)/ℏ²). Low E
means low k, which means long wavelength. We note, for example, that
In either case, though, these have to map onto the region I and region III
pieces of the eigenfunction.
21.3.4 Parity of Eigenfunctions
Recall that inside the well (region II), we chose to write the general form of
the eigenfunctions as
\[ \psi_{II}(x) = A\sin(kx) + B\cos(kx) \tag{21.26} \]
This form is especially convenient in light of our recent result that, if
V(x) is symmetric around some point (call it x = 0, so the well runs from −a to a),
then the eigenfunctions must have definite (plus or minus) parity around this
point. Sine and cosine both have definite parity. So in region II, either
\[ \psi(x) = A\sin(kx) \tag{21.27} \]
or
\[ \psi(x) = B\cos(kx) \tag{21.28} \]
but not any linear combination of both.
So, with the well centered on x = 0, the ground state must be of even
parity, and hence, inside the well, must be of the form cos(kx) with k =
√(2m(E + V₀)/ℏ²). In regions I and III, the time independent Schrödinger equation
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi \;\Rightarrow\; \frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi \tag{21.29} \]
has solutions that are exponential (since E < 0), namely
\[ \psi_I(x) = A e^{+\kappa x} \tag{21.30} \]
\[ \psi_{III}(x) = D e^{-\kappa x} \tag{21.31} \]
where κ = \sqrt{-2mE}/\hbar > 0; e^{−κx} blows up as x → −∞ and e^{+κx} blows up
as x → +∞, which is why only one exponential is kept in each region. So the tails in the "forbidden" regions are simply exponentials;
since the ground state is even, D = A. We choose the even cosine in region II
(since it has the longest wavelength) and match it to the external exponential
fall-offs. So, the ground state looks like
fig 17
The first excited state has next shorter wavelength in the well, hence, by
similar logic, it must look like
fig 18
Why does this state have higher (but still negative) energy than the ground
state? Answer: The wavelength is shorter in region II, and
\[ \lambda = \frac{2\pi}{k_2},\qquad k_2 = \frac{\sqrt{2m(E + V_0)}}{\hbar} \tag{21.32} \]
so λ ∝ 1/√(E + V₀). Thus, E₂ is greater than E₁. Similar logic shows
that successively higher bound states will have successively higher energies
(E₃, E₄, · · ·), all of which are still negative, and eigenfunctions that alternate
in parity. Of course, only certain values of energy are "allowed".
fig 19
Based on our previous discussion⁴,
\[ \psi_{II}(x) = \begin{cases} A\cos(kx) & \text{even}\\ A\sin(kx) & \text{odd} \end{cases} \tag{21.33} \]
\[ \psi_{III}(x) = B e^{-\kappa x} \tag{21.34} \]
\[ \psi_I(x) = \begin{cases} \psi_{III}(-x) & \text{even}\\ -\psi_{III}(-x) & \text{odd} \end{cases} \tag{21.35} \]
where k = √(2m(E + V₀)/ℏ²) and κ = √(−2mE/ℏ²). Since we have definite parity for
. Since we have definite parity for
each eigenfunction, we need only apply the boundary conditions at x = a
and they are automatically taken care of at x = −a (even or odd). We do
the even case.
The continuity of ψ(x):
A cos(ka) = Be−κa (21.36)
The continuity of ψ 0 (x):
−kA sin(ka) = −κBe−κa (21.37)
Dividing these,
k tan(ka) = κ (even case) (21.38)
4
Reference: Basic Quantum Mechanics by Professor R. Hill, Sections 7.8, 7.9, howver, we do not use his
potential energy convention (as noted earlier)
187
which is really an (transcendental) equation for the energy:
r r ! r
2m(E + V0 ) 2m(E + V0 ) −2mE
tan a = even case (21.39)
h̄2 h̄2 h̄2
2 E0
ν ≡ inf (21.41)
Egnd
inf
where Egnd is the amount by which the ground-state energy in the equivalent
width (2a) infinite square well is above the floor of that well.
inf π 2h̄2
Egnd = (21.42)
2m(2a)2
thus
2
E0
ν = (21.43)
π 2h̄2
8ma2
We also define another dimensionless less variable ν by
V0
ν2 ≡ inf
(21.44)
Egnd
Thus
8m
µ2 = 2 V 0 a 2
(21.45)
π 2h̄
√
so µ is a measure of the “strength” of the well − i.e., µ ∝ V0 a. µ is then also
dimensionless. Equations (21.38) and (21.40) can then be written in terms
188
of µ and ν as
π p
ν tan ν = µ2 − ν 2 even solutions (21.46)
2
π p
−ν cot ν = µ2 − ν 2 odd solutions (21.47)
2
These can be solved graphically (see figure 21.21) by plotting the left-hand
sides of equations (21.46) and (21.47) and also their right hand side versus
ν − solutions for each type (odd or even) are defined by where these curves √
intersect.
p One nice aspect of this is that , since µ is a constant (µ ∝ V0 a,
µ2 − ν 2 is a quarter circle of radius µ on such a plot.
fig 21
2 2 2
Figure 21.21 shows the situation for µ = 2.5 so V0 = (2.5) π h̄
8ma2 . So
189
Chapter 22
Let’s recall some features of the path to our solution for the case E < V0 1 .
fig 1
The eigenfunctions must have definite parity (even or odd), thus
(
A cos(kx) (even)
ψII (x) = (22.1)
A sin(kx) (odd)
ψIII (x) = Be−κx (22.2)
(
ψIII (−x) (even)
ψI (x) = (22.3)
−ψIII (−x) (odd)
Since higher energies implies more curvature of ψ, and since the odd solutions
have ψ = 0 at x = 0,the ground state must be of even parity (cosine in
the well) and hence the first excited state is of odd parity, etc. Since we
have definite parity for each eigenfunction, we need only apply the boundary
conditions at x = a and they are automatically taken care of at x = −a (even
or odd). We do the even case.
Continuity of ψ: A cos(ka) = Be−κa
Continuity of ψ 0 : −kA sin(ka) = −κBe−κa
1
reference: Basic Quantum Mechanics by R. Hill, sections 7.8 and 7.9
190
or dividing
k tan(ka) = κ (even) (22.4)
−k cot(ka) = κ (odd) (22.5)
We solved these transcendental equations graphically − we defined dimen-
sionless variables
2 E0
ν ≡ inf (22.6)
Egnd
V0
µ2 ≡ inf (22.7)
Egnd
where E 0 = E − (−V0 ) = E + V0 and Egnd inf
is the ground state energy of an
√
infinite square well of same width (2a). (µ ∝ V0 a). In terms of which the
transcendental equations become
π p
ν tan ν = µ2 − ν 2 (even) (22.8)
2
π p
−ν cot ν = µ2 − ν 2 (odd) (22.9)
2
and the plotted left-hand sides and the right-hand side on one plot versus ν;
from the intersection points you can easily deduce the energies.
fig 2
From knowing the energies you can then deduce the full form of each of
the eigenstates ψn (x, t); probability you will do an example of this in the
homework.
Let’s now seek some intuition about why only certain energies are allowed.
Of course, we already understand the basic reason for this2 , but here, we can
provide a bit of additional insight for the finite square well. We consider the
ground state. Inside the well the solution is a cosine; outside it is exponential
. We must have both ψ(x) and ψ 0 (x) continuous across the two boundaries
(x = ±a).
2
section B of class notes for class of Thursday, October 16, especially the discussion of curvature of ψ and its
implications therein.
191
Now, there is no problem in making ψ continuous at x = ±a for any
wavelength (i.e., for any value of energy) in region II − both ψI and ψIII
have their own (same) constant multiplier that can be arbitrarily adjusted
to match any amplitude at x = a (the amplitude at x = −a is the same as
that at x = a since ψ is even. However, in general, continuity of ψ 0 is not
simultaneously achieved. To see this: If E is chosen too low3 , the slope of the
exponential is too steep at the edge of the well (or, if you prefer, the slope of
the cosine is not steep enough):
r
2m(E + V0 )
k= E to neg. ⇒ low k ⇒ low cosine slope (22.10)
h̄2
r
−2mE
κ= E too neg. ⇒ high κ ⇒ high exponential slope (22.11)
h̄2
So, if E is too negative, the situation is shown in figure 22.3.
fig 3
Which has discontinuous slope at x = ±a. If E is too high, the situation
is shown in figure 22.4/
fig 4
which also has discontinuous slope. Only for a particular value of E do we
get a ground state that is continuous in both ψ and ψ 0
fig 5
Now let us think about the energies of the states. How do they compare
to the energies of an infinite well of the same width? We expect they be a
bit lower than those of the infinite square well. Why? Because in the ground
state we don’t fit half a wavelength in the well, whereas in the infinite well
we do. So, for the finite well, the wavelength in the interior to the well region
is a little longer than that for the infinite well, so the energy is a little lower.
This is true for each eigenstate.
22.3 Applications
192
many of the properties of the conduction of electricity in metals (quantum
theory of conduction) can be understood on the basis of the “quasi-free elec-
tron model”’ in which “free” electrons are moving in a (three-dimensional)
finite square well.
fig 6
Most of these applications involve three-dimensional wells and considera-
tion of states of many particles in the well at once. We’re not quite ready for
these, so we defer detailed consideration of them to the next course. There is,
however, one important application that we are about ready to briefly treat
− that of a “free” neutron inside an atomic nucleus.
193
which is identical to the one-dimensional time independent Schrödinger equa-
tionwe are used to. Here µ is the “reduced mass”, which should be familiar
from classical mechanics. (If it is not, just think of µ as the mass). So, for
our problem of a “free” neutron inside a nucleus,
h̄2 d2 u(r)
− − V0 u = −Bu r ≤ b (22.15)
2µ dr2
h̄2 d2 u(r)
− = −Bu r > b (22.16)
2µ dr2
where B is the binding energy (−B < 0 is the actual energy of the neutron
in the nucleus). Note that our finite well is now, of course, “one-sided”:
fig 8
Now, for r ≤ b, the solution is
u(r) = A sin(kr) + B cos(kr) (22.17)
√ √
2µ[−B−(−V0 )] 2µ(V0 −B)
where k = h̄ = h̄ . Note that the first boundary condition
is
u(r) = 0atr = 0 (22.18)
so the ground state cannot be cosine − nor can cosine be present in any of
the bound states4 − B = 0, so
u(r) = A sin(kr) (22.19)
since u(r) = rψ(r),
A
ψin (r) = sin(kr) r ≤ b (22.20)
r
Now, for r > b, the solution is
u(r) = Ce−κr + Fe+κr (22.21)
q
where κ = 2µB h̄2
. Since ψ(r) must goto zero as r goes to infinity, F = 0.
Now, we have to match the solutions at r = b
1. Continuity of ψ(r = b):
A C
sin(kb) = e−κb ⇒ A sin(kB) = Ce−κb (22.22)
b b
4
There is not problem with our “parity theorem” here the well is not symmetric around r = 0
194
2. Continuity of ψ 0 (r = b):
dividing,
k cot(kb) = −κ (22.24)
This is exactly the same as the condition for the odd solutions of the one-
dimensional finite square well. (as you will see by reading section 7.10 of
Hill5 , this is not accident). It follows from this that a three dimensional well
can be too shallow to have any bound states! If there is a bound state, the
interior sine wave function u(r) must pass π/2 in phase at r = b so that it is
falling and can connect on to the decreasing exponential outside as in figure
22.9.
fig 9
As a concrete example, we consider the deuteron (bound state of neutron
and proton, nucleus of deuterium). We take b ≈ 1.4 × 10−13 cm (recall that
R ∼ 1.2A1/3 from Rutherford) and B ≈ 2.225 MeV (measured as energy of
m
the photon (γ) in n + p → d + γ). (note: Here, the reduced mass, µ = 2p
since mp ≈ mn ). Then our condition
k cot(kb = −κ (22.25)
becomes p p ! √
2µ(V0 − B) 2µ(V0 − B) 2µB
cot b =− (22.26)
h̄ h̄ h̄
As we will see in a moment, V0 B (binding energy small compared to well
depth − deuteron is “barely bound”). Therefore, to ger an idea of what is
going on, we begin by setting V0 − B ≈ V0 . Then, our condition is
√ √ √
2µV0 2µV0 2µB
cot b ≈− (22.27)
h̄ h̄ h̄
or √ r
2µV0 B
cot b ≈− (22.28)
h̄ V0
5
Please consider this a reading assignment
195
Since, B V0 , as a first approximation
√
2µV0
cot b ≈0 (22.29)
h̄
The bound state then corresponds to
√
2µV0 π
b≈ (22.30)
h̄ 2
or
π 2h̄2 π 2h̄2
V0 ≈ ≈ (22.31)
8µb2 4mp b2
Let’s put some numbers into this: We have
2
h̄2 ≈ 6.6 × 10−16 eV·s
b ≈ 1.4 × 10−15 m
0.5 GeV 5 × 108 eV
µ ≈ =
c2 c2
(µ ≈ mp /2). So,
(10) 44 × 10−32 eV2 · s2 9 × 1016 m2 /s2
V0 ≈ (22.32)
4 (109 eV) (2 × 10−30 m2 )
V0 ≈ 50MeV (22.33)
(believedto
√
be about
right) This confirms our approximation
√
V0 B. Actu-
2µV0 2µV0
ally, cot h̄ b is a little less than zero. Therefore, h̄ b is a little greater
√
2µ(V0 −B)
than zero, so h̄ b = kB is a little greater than π/2. As it must be to
map onto the decaying exponential outside the well.
196
As a concrete example, consider the “simple” problem of a particle in an
uniform force field − e.g., a particle in the gravitational field new earth’s
surface. Then V (y) = mgy. If the particle is confined to a region (y1 , y2
boundaries), then the potential energy profile is shown in figure 22.10.
fig 10
Another example: consider the “free” electron in a metal wire segment
running from x1 to x2 . As we’ve already remarked, in the “quasi-free electron
model”, the electron in the metal are considered to be able to more freely in
a finite potential well.
fig 11
Now suppose we apply a constant electric field − e.g., from a battery
hooked up to the ends of the wire. Them the potential energy varies linearly
along the length of the wire (V = −kx ⇒ E = − dV dx = k = constant). The
electrons are now in a linearly ramped well.
fig 12
Of course, this leads to the conduction of electric current. An under-
standing of the quantum mechanics of this problem leads to the first steps
in understanding the quantum theory of conduction! While obviously very
important, gaining a full understanding of this not simple, and today, we
limit ourselves to just a few semi-quantitative first remarks about the E < 0
eigenfunctions in such a well. As a paradigm for this, take V = gx, where g
is a constant. Then, the Schrödinger equation is
h̄2 d2 ψ(x)
− + gxψ(x) = Eψ(x) (22.34)
2m dx2
We can simplify the appearance of this by defining
r
E 3 2mg
ξ ≡ x− (22.35)
g h̄2
the the Schrödinger equation is
d2 ψ(ξ)
= −ξψ(ξ) (22.36)
dξ 2
This equation looks very harmless, but, in fact, the solution to it cannot be
written in terms of a finite number of elementary functions! The solution to
197
equation (22.34) turns out to be the so-called “Airy Integral”
Z ∞ 3
1 −εu u
Ai(ξ) ≡ lim e cos + ξu du (22.37)
π ε→0 0 3
In this course, we won’t get involved with this function. however, it would
be nice to know if we could intuit anything about the result without going
through the mathematics. As I hope to show you, indeed we can, and our
thought son this will apply much more generally than to just this problem.
Suppose first that we have a “split-level” well and we want the bound-state
eigenfunction.
fig 13
Now, we could solve this exactly, but let’s see what we can get by just
“thinking”.
1. In the internal regions (I and II), ψ is sinusoidal and
2m(E − V ) 2m 2m p2 p2 2π
k2 = = (K.E.) = = = (22.38)
h̄2 h̄2 h̄2 2m h̄2 λ2
Thus, large E − V ⇒ small λ. So, λI < λII .
2. In the internal regions we can write, for some phases ϕi ,
ψi (x) = Ai sin (ki x + ϕi ) i = I,II (22.39)
Now consider the slope
dψi (x)
Si = = ki Ai cos (k1 x + ϕi ) (22.40)
dx
so
Si
= Ai cos (ki x + ϕi ) (22.41)
k1
so
Si
+ ψi2 = A2i cos2 (ki x + ϕi ) + Ai 62 sin2 (ki x + ϕi ) = A2i (22.42)
ki
now since kII < kI , and since both S and ψ are continuous across the
step in the well bottom, it follows that AII > AI !
198
3. Number of nodes: We know that from our previous experience that ψn
should have n − 1 nodes inside the well. So suppose someone asks for the
fifth lowest energy level in our split level well. We see in advance that it
must look pretty muck like what is shown in figure 22.14.
fig 14
Now, any shape well can be approximated by a succession of narrow step
wells. Consider for example, the “ramp-bottom” well
fig 15
(the dotted line shows an approximation to it). Then, from our consider-
ations, the say, fifth and seventh levels in this well have eigenfunctions that
must look like figure 22.16.
fig 16
Here’s a way of remembering that the amplitude is bigger in the shallower
part of the well. Consider the classical limit in the sense of large n. The
eigenfunction would then have a large number of zig zags. On which side of
the well, shallow or deep, must ψ have greater amplitude? A classical particle
spends more time where it is moving slowly. In the well in figure 22.16, the
classical acceleration would be to the left (F = − dVdx ). Therefore, the classical
particle spends more time (and is thus more likely to be found, in a randomly
timed snapshot) on the right side. Hence P (x) = ψ ∗ (x)ψ(x) must be greater
on the right side, therefore, the amplitude of ψ is greater on the right side.
Hence, shallower side of well ↔ greater amplitude for ψ. Something of a
reversal of this logic gives us insight into where classical physics comes from:
a narrow wave packet centered on very large n (“classical’) accelerates on
moving from the shallow to the deep end of the well − this is because that
ω
“local phase velocity” of any component of the packet is vϕ (k, x) = k(x) , and
k(x) increases for each component as the packet moves to the deep end of
the well. (If the phase velocity of every component increases as the packet
moves, then the center of the packet is accelerating.) Now, if V = V (x),
classically there is a force F = − dVdx(x) toward the deeper part of the well, so
this looks like the center of the packet is obeying a ∝ F − i.e., this looks like
F
Newton’s second law a = m . Of course, we haven’t shown that our “a” (we
2
really mean ddthxi 1
2 ) for that packet is proportional to m and othr details, but,
a bit later in our course we hope to show that, indeed, for this situation, as
199
well as for many others
d2 hxi d hV i
m = − ∼ “F ” (22.43)
dt2 dx
(The general result is called Ehrenfest’s theorem). Now let us return to our
constant force problem. If this is true in a limited region (x = −a → x = a,
say) as it always would be, then V (x) looks like
fig 17
This is exactly what we just looked at! Now, let’s look at the exact Airy
function. It looks like figure 22.18.
fig 18
E
Note that we are plotting against ξ = constant× x − g . ψ(x) = ψ (ξ(x))
must be unravelled from this. Since we’ve using this problem only as a
thought spring board, we won’t worry about that.
200
Chapter 23
Twenty-third Class
201
Chapter 24
202
24.1.1 Intuition
203
it is now geometrically possible for the amplitude to be greater in the
shallow parts of the well (see figure 23.4)
5. Further Excited States: In summary, we expect that the eigenfunc-
tions should qualitatively look like those shown in figure 23.4
fig 4
The vertical lines indicate that limits of the classically allowed region in
each case. Note that the spacing increases as n does, since E increases
as n does, so the intersection points of E with V move further apart (see
figure 23.5).
fig 5
Figure 23.5 shows, of course, the actual eigenfunctions. It is clear that
these look qualitatively as we expect (amplitudes for n = 2, n = 3 bigger in
shallower parts of well, etc.). The figure 23.6 shows |ψ100 (x)|2 , and in it you
can see both the expected amplitude increase and the expected “wavelength”
decrease as you move to the shallower parts of the well for the fixed value of
energy.
fig 6
Now that we have some intuition, we are ready for the math. We return to
the Schrödinger equation
h̄2 d2 ψ(x) 1 2
− + kx ψ(x) = Eψ(x) (24.5)
2m dx2 2
Simplify the algebra, we define two dimensionless variables
r
mω
ξ ≡ x (24.6)
h̄
2E
K ≡ (24.7)
h̄ω
this leads to energy in units of h̄ω
2 . In terms of these, the Schrödinger equation
is
d2 ψ(ξ) 2
= ξ − K ψ(ξ) (24.8)
dξ 2
204
which is another of these simple-looking differential equations that is quite
hard to solve directly. To approach this, we look at some limits to get some
insight. Consider the limit of very large ξ (very large x). Then equation
(23.8) becomes
d2 ψ(ξ)
2
≈ ξ 2 ψ(ξ) (24.9)
dξ
Even this equation cannot be solved exactly, but an approximate solution is
ξ2 2
− ξ2
ψ(ξ) = Ae 2 + Be (24.10)
To check this, note
d2 ψ(ξ) ξ2 ξ2
= A 1 + ξ 2
e 2 − B 1 − ξ 2 e− 2 (24.11)
dξ 2
so, for ξ 1
d2 ψ(ξ) 2
2 ξ2
2
2 − ξ2
2
≈ Aξ e + Bξ e = ξ 2ψ (24.12)
dξ
Now, we want ψ(ξ) to be well-behaved as ξ → ∞, therefore, A = 0, so for
large ξ
ξ2
ψ(ξ) → Be− 2 (24.13)
Now we really want a solution to equation (23.8). Spurred by our insight in
ξ2
equation (23.13), let’s factor e− 2 out of ψ, so we write
2
− ξ2
ψ(ξ) ≡ h(ξ)e (24.14)
Of course we sill don’t know h(ξ). Therefore, let us find the differential
2
equation it obeys. Now ψ(ξ) must obey equation (23.8). To get ddξψ2 we work
through h(ξ)
2
dψ dh − ξ2 − ξ2 dh(ξ) ξ
= e 2 − ξh(ξ)e 2 = − ξh(ξ) e− 2 (24.15)
dξ dξ dξ
d2 ψ
2 2
dh dh 2
− ξ2
= − 2ξ + ξ − 1 h e (24.16)
dξ 2 dξ 2 dξ
so equation (23.8) is
d2 h(ξ) dh(ξ)
− 2ξ + (K − 1)h(ξ) = 0 (24.17)
dξ 2 dξ
205
Now this is an equation that is known to mathematicians and physicists.
It was first studied by Charles Hermite in the 19th century, and is therefore,
called “Hermite’s equation”. To solve it, we use the method of power series:
We try a power series solution.
∞
X
2 3
h(ξ) = a0 + a1 ξ + a2 ξ + a3 ξ + · · · = aj ξ j (24.18)
j=0
since this must be true for all ξ, the coefficient of each power of ξ is separately
zero, so
(j + 1)(j + 2)aj+2 − 2jaj + (K − 1)aj = 0 (24.23)
for all j. This leads to
2j + 1 − K
aj+2 = aj (24.24)
(j + 1)(j + 2)
A relation like this is called a recursion relation. From it, we can express all
even-numbered a’s in terms of a0 and all odd-numbered a’s in terms of a1 :
1−K 5−K (5 − K)(1 − K)
a2 = a0 , a4 = a2 = a0 , · · · (24.25)
2 12 24
3−K 7−K (7 − K)(3 − K)
a3 = a1 , a5 = a3 = a1 , · · · (24.26)
6 20 120
206
Thus, we have two independent series solution of equation (23.17); therefore,
the general solution of (23.17) is a linear combination of them
heven = a0 + a2 ξ 2 + a4 ξ 4 + · · · (24.28)
hodd = a + 0ξ + a3 ξ 3 + a5 ξ 5 + · · · (24.29)
These are called Hermite functions. However, notice a problem: consider the
recursion relation from equation (23.24) for large j
2
aj+2 ≈ aj (24.30)
j
This is an algebraic equation with solution
C
aj ≈ j
(24.31)
2 !
for any constant C.Now consider the behavior of the Hermite functions at
large ξ. There, the higher powers of ξ dominate, so
X 1 X1 2
j
h(ξ) → C j
ξ ≈ C ξ 2j ≈ Ceξ (24.32)
2 ! j!
Then
ξ2 ξ2
ψ(ξ) = e− 2 h(ξ) lim ⇒ Ce 2 (24.33)
ξ→∞
which diverges. Hence, neither Hermite function is a useful solution for us!
How do we wiggle out of this problem? This leads to a loophole worthy of
any attorney: the power series must terminate as polynomials!
There is some highest j for which aj 6= 0. Then the recursion relation
must lead to aj+2 = 0! Now, the recursion relation was
2j + 1 − K
aj+2 = aj (24.34)
(j + 1)(j + 2)
so we require
2j + 1 − K = 0 (24.35)
207
for j = n (highest nonzero j). This leads to
K = 2n + 1 (24.36)
2E
but, K ≡ h̄ω , so
2E
= 2n + 1 (24.37)
h̄ω
and by solving for the energy leads to
1
En = n + h̄ω n = 1, 2, 3, · · · (24.38)
2
so, the allowed energies (energies of the eigenstates) are given by this formula!
We saw that for the potential energy profile V (x) = 21 kx2 . the energy values
associated with the eigenstates are given by
1
En = n + h̄ω (24.39)
2
This is very basic and very important result in quantum mechanics. We see
that the energy levels are equally spaced in units of h̄ times the classical
frequency, and that the ground state does not have zero energy, but rather
energy 21 h̄ω ( “zero-point” energy) (n=0).
fig 7
Since so many potentials have minima, which are therefore locally quadratic
(by Taylor expansion), “nothing is still” in quantum mechanics is a oft-
repeated rule of thumb. Now let us look at our eigenfunctions. The recursion
relation is
2j + 1 − K
aj+2 = aj (24.40)
(j + 1)(j + 2)
and the series termination condition is
2E
K= = 2n + 1 (24.41)
h̄ω
leading to
2j + 1 − 2n − 1 2(n − j)
aj+2 = aj = − aj (24.42)
(j + 1)(j + 2) (j + 1)(j + 2)
We consider successive values of n in turn:
208
• n = 0: Then a2 = 0. To kill the series hodd , we choose a1 = 0, so
h0 (ξ) = a0 (24.43)
and
ξ2
ψ0 (ξ) = a0 e− 2 (24.44)
which means
mω 2
ψ0 (x) = a0 e− 2h̄ x (24.45)
We see that the ground state is Gaussian in x (or in ξ). The normalized
eigenfunction is
mω 14 mω 2
ψ0 (x) = e− 2h̄ x (24.46)
πh̄
since Z +∞ r
mω 2 πh̄
e h̄ x dx = (24.47)
−∞ mω
Note that, as we expect, the ground state eigenfunction has no nodes.
Note also there is significant penetration into the classically forbidden
region.
fig 8
• n = 1: We choose a0 = 0 to kill off heven . Then the recursion relation
with n = 1, leads to a3 = 0, so ξ,
h1 (ξ) = a1 ξ (24.48)
ξ2
ψ1 (ξ) = a1 ξe− 2 (24.49)
h2 (ξ) = a0 1 − 2ξ 2
(24.50)
ξ2
ψ2 (ξ) = a0 1 − 2ξ 2 e− 2
(24.51)
209
We see that, for any n, hn (ξ) is an nth order polynomial in ξ with only even
or odd powers of ξ, according to whether n is even or odd. These polynomials
are well-known in mathematics and in physics, and when their normalization
is chosen to be such that the coefficient of the highest poser is 2n , they are
called Hermite polynomials. Some of these famous polynomials are listed
below
H0 (ξ) = 1
H1 (ξ) = 2ξ
H2 (ξ) = 4ξ 2 − 2
H3 (ξ) = 8ξ 3 − 12ξ
H4 (ξ) = 16ξ 4 − 48ξ 2 + 12
H5 (ξ) = 32ξ 5 − 160ξ 3 + 120ξ
210
Chapter 25
211
25.1 Expansion Postulate for Harmonic Oscillator
then
iEn
X
Ψ(x, t) = an ψn (x)e− h̄ t (25.6)
n
The question is whether a similar result is true for the harmonic oscillator
potential. You will also recall that we stated, some time ago, the expansion
postulate of quantum mechanics, or at least, a special (but still very general)
case of it − namely that:
Postulate: For any continuous potential energy function V (x), the set of
eigenfunctions is a complete set, i.e., and “reasonable” function f (x) obeying
the same boundary conditions as the eigenfunctions can be expanded as
∞
X
f (x) = an ψn (x) (25.7)
n=1
212
for this set. Assuming this, we can find the expansion coefficients: Multiply
both sides of the expansion by ψk∗ and inegrate
Z +∞ ∞
X Z +∞
∗
ψk (x)f (x) dx = an ψk∗ (x)ψn (x) dx
−∞ n=1 −∞
Z +∞ X∞
ψk∗ f (x) dx = an δk,n = ak
−∞ n=1
Z +∞
ak = ψk∗ (x)f (x) dx (25.10)
−∞
which is to say
Z +∞ r
− mω 2 mω
ak = C n f (x)e 2h̄ x Hk x dx (25.11)
−∞ h̄
The condition that f (x) be “reasonable” is the condition that the integral
from of this equation for ak exists. If F (x) does not diverge more rapidly
mω 2
then e± 4h̄ x as x → ±∞, we do not need to worry about this.
213
25.2 Further Important Results
with Z +∞
an = ψn∗ (x)Ψ(x, t = 0) dx (25.16)
−∞
Since this is formally the same as what we had for the infinite square well,
the same results follow:
1. ∞
X
|an |2 = 1 (25.17)
n=0
(proof same as on page 37 of Griffiths and in our previous notes)
2. ∞
X
hHi = |an |2 En (25.18)
n=0
3. ∞
D√ E X
|an |2 En
p
E = (25.19)
n=0
and
4. ∞
iEn
X
Ψ(x, t) = an ψn (x)e− h̄ t (25.20)
n=0
where
1
En = n + h̄ω (25.21)
2
with Z +∞
an = ψn∗ (x)Ψ(x, t = 0) dx (25.22)
−∞
This formally solves the initial value problem.
214
Example: Suppose that V (x) = 12 kx2 and that
s r r
β − β2 (x−b)2 mω k
Ψ(x, t = 0) = √ e 2 , β≡ , ω≡ (25.23)
π h̄ m
(this function is just like the harmonic oscillator ground state, except that it
is centered at x = b rather than at x = 0)
fig 1
If the energy is measured, what is the probability of finding h̄ω 3h̄ω
2 ? 2
What is Ψ(x, t)?
This may be a homework problem.
We can make a first pass at trying understand the classical limit by looking,
still at eigenfunctions only, but those for very large n. Let us first ask what
we expect in the classical limit. Classically, the oscillator should spend more
time where it moves more slowly − near the edges of the classically allowed
region. Also, the “leakage trail” into the forbidden region should become
negligible.
fig 2
We note that even for very low n (n = 4), the amplitude of ψn is smaller
near the center x = 0, and indeed, this is what we expect from our observa-
tions last week in class. At large n, this behavior persists − the situation for
“low intermediate” n (n = 100) is shown in figure 25.3.
fig 3
Let us calculate the classical probability distribution. Let the classical
period τ = 2π ω . Then
∆t ω 2∆x
Pclassical (x)∆x = = (25.24)
τ 2π v(x)
where v is the classical speed. Now,
215
q
2E
where we ignore the phase, and where A = mω 2 where E is the energy.
Thus,
v(x) = −ωA sin(ωt)
hp i
= −ω 2
1 − cos (ωt)
12
x2
v(x) = ωA 1 − 2 (25.26)
A
which gives us v as a function of x. Thus,
1 1
Pclassical ∆x = ∆x (25.27)
πA 1 − x2 12
A2
1 1
Pclassical = (25.28)
πA 1 − x2 12
A2
and this is plotted as the dashed line in figure 25.3. So, in the limit of
large n, the agreements of the probability distributions is very good. Now,
what about leakage into the forbidden region? We note that the exponential
ξ2
factor e− 2 is the same for all n; however, the classically allowed region get
wider (why?) as n increases, so the probability leakage fraction does get less.
Of course, we would like to see a probability lump moving back and forth
harmonically in the classical limit, and, of course, we cannot get that just
considering solely eigenfunctions (why not?). So, we need to understand the
motion of a peaked wave packet in a harmonic oscillator potential.
As a start toward this, we can consider the behavior over time of an equal
superposition o the ground and first excited states
1
Ψ(x, t = 0) = √ [ψ0 (x) + ψ1 (x)] (25.29)
2
Notice that these are the two must non-classical states! However, as you will
show in the homework, a plot of the probability density as a function of time
oscillates as shown in figure 25.4.
216
fig 4
We note, in fact, that any packet or superposition state in a harmonic
oscillator potential (only) is periodic (and will therefore be in the same place
and look the same after one or an integral number of periods). To see this,
note that, by the completeness theorem, any initial state Ψ(x, t = 0) = f (x)
can be expanded as
X∞
Ψ(x, t = 0) = cn ψn (x) (25.30)
n=0
so ∞
iEn
X
Ψ(x, t) = cn ψn (x)e− h̄ t (25.31)
n=0
Then the probability density is
∞ X
X ∞
2
P (x, t) = |Ψ(x, t)| = c∗k cn ψk∗ (x)ψn (x)e−(k−n)ωt (25.32)
k=0 n=0
q
k
where ω ≡ m . Each term in this sum oscillates in time at an integral
multiple (k − n) of the classical frequency ω. Hence the whole series is
periodic in time at the classical frequency. Of course, this doesn’t mean
that the superposition state moves in simple harmonic motion. So, we must
look into this more deeply.
217
This packet has hpi = p0 . If it were in free space (i.e. V = 0), such as
packet would move to positive x continuously, always spreading. but, the
packet is not in free space. To find out what happens over the course of time,
we use the completeness theorem:
∞
iEn
X
Ψ(x, t) = cn ψn (x)e− h̄ t (25.35)
n=0
where Z +∞
cn = ψn∗ (x0 )Ψ(x0 , 0) dx0 (25.36)
−∞
Recall that we rewrote this expansion in the form
Z +∞
Ψ(x, t) = Ψ(x0 , t = 0)K(x0 , x, t) dx0 (25.37)
−∞
p
Aclassical = (25.45)
mω
therefore,
pmax
x(t) = sin(ωt) (25.46)
mω
In the quantum expression, we recall that p0 = hpi, so
hpi
xclass,quantum = sin(ωt) (25.47)
mω
We see that close correspondence. Note also that the width of the packet
∆(t) also oscillates in time. As yo can easily show, the frequency for this is
twice the classical frequency. Thus, the packet itself moves back and forth, it
breathes!. An interesting
q special case occurs when σx is chosen for the initial
h̄
packet as σx = mω ; then, the packet width, ∆(t), is independent of time.
This special packet, called a “coherent state” oscillates rigidly back and forth
at frequency ω much like a classical particle.
219
25.5 The Time Dependence of Expectation Values
d +∞ ∗
Z
d D bE
Q = Ψ (x, t)QΨ
b dx
dt dt −∞
Z +∞ ∗
∂Ψ b ∗ b ∂Ψ
= QΨ + Ψ Q dx
−∞ ∂t ∂t
Now, the Schrödinger equation is
b = ih̄ ∂Ψ ⇒ ∂Ψ = 1 HΨ
HΨ b = − i HΨ
b (25.49)
∂t ∂t ih̄ h̄
and its conjugate is
∗
b ∗ = −ih̄ ∂Ψ ∂Ψ∗ i b ∗
HΨ ⇒ = HΨ (25.50)
∂t ∂t h̄
so
i +∞ b ∗ b
Z
d D bE
Q = HΨ QΨ dx (25.51)
dt h̄ −∞
Now, Hb is a Hermitian operator. Recall that, for any Hermitian operator, O,
b
by definition, for any two allowed state functions Ψ1 and Ψ2 ,
Z +∞ h i Z +∞ h i∗
∗ b
Ψ1 OΨ2 dx = OΨ1 Ψ2 dx
b (25.52)
−∞ −∞
220
Thus, Z +∞ ∗ Z +∞
∗
HΨ
b QΨ
b dx = Ψ H QΨ dx
b b (25.53)
−∞ −∞
Thus, equation (25.51) is
i +∞ ∗ h b b b b i
Z
d D bE
Q = Ψ H Q − QH Ψ dx (25.54)
dt h̄ −∞
Now the combination of operators A b−B
bB bA
b is very common in quantum
mechanics; it is denoted as
h i
A, B ≡ A
b b b−B
bB bAb (25.55)
i +∞ ∗ h b bi
Z
d D bE i Dh b biE
Q = Ψ H, Q Ψ dx = H, Q (25.56)
dt h̄ −∞ h̄
d D bE i Dh b biE
Q = H, Q (25.57)
dt h̄
As we will shortly get at least a first idea of this is a very important relation
in quantum mechanics.
Note: In rare cases, an operator may also have an explicit time depen-
dence. An example might be a harmonic potential energy in which the “spring
constant” (and hence the classical frequency) changes in time − then H b for
this system changes in time (H b = pb2 + 1 k(t)x2 = H(t)).
b In such cases, the
2m 2
time derivative of the expectation value is modified to
d ∂Qb Dh iE
= + H, Q b b (25.58)
dt ∂t
In this course, we do not deal with such cases. We now reap our first result
from this theorem: Conservation of Energy Let Q b be the Hamiltonian H.b
Then h i h i
H, Q = H, H = H
b b b b bHb −H bH
b =0 (25.59)
Then
d D bE d
H =0 ⇒ hEi = 0 (25.60)
dt dt
221
Thus, we have derived conservation of energy of energy as a statistical result.
Relation between x̂ and p̂x We now consider the very fundamental
commutator:
[x̂, p̂x ] (25.61)
To find out what this is, we operate with it is a function f (x):
[x̂, p̂x ] f (x) =
= x̂p̂x f (x) − p̂x x̂f (x)
h̄ ∂f (x) h̄ ∂
= x − [xf (x)]
i ∂x i ∂x
h̄ ∂f (x) h̄ df
= x − f (x) + x
i ∂x i dx
h̄ ∂f h̄ h̄ df
= x − f (x) − x
i ∂x i i dx
= ih̄f (x)
since this is true for any f (x),
[x̂, p̂x ] = ih̄ (25.62)
As we will see later, this result is very fundamental in quantum mechanics.
Relation between hxi and hpx i: We had, for the time development of
expectation values,
i +∞ ∗ h b bi
Z
d i Dh b biE
hQi = Ψ H, Q Ψ dx ≡ H, Q (25.63)
dt h̄ −∞ h̄
Suppose Q
b = x̂. Now
" ! #
h i px
b2
H,
b x̂ = + V (x) , x̂
2m
1 h b2 i
= p , x̂
2m x
1
= [pbx pbx , x̂]
2m
Now, as you will show in the homework, for any three operators Â, B̂, and
Ĉ, h i h i h i
ÂB̂, Ĉ = Â B̂, Ĉ + Â, Ĉ B̂
222
also, h i h i h i
Â, B̂ Ĉ = Â, B̂ Ĉ + B̂ Â, Ĉ
so h i
b2
px , x̂ = pbx [pbx , x̂] + [pbx , x̂] pbx (25.64)
now h i h i
Â, B̂ = − B̂, Â
h
2
i 2h̄
px , x̂ = −ih̄pbx − ih̄pbx = pbx
b (25.65)
i
so
d
hxi = hpbx i
m (25.66)
dt
which shows that the classical connection between momentum and position
is true, on average, even though both can’t be known exactly together!
Newton’s Second Law In classical mechanics, this is
dpx ∂V
=− (25.67)
dt ∂x
d
So we consider dt hpbx i. This is
d i Dh b iE
hpbx i = H, pbx (25.68)
dt h̄
Now, " #
h i pb2 1 h b2 i h b i
H, pbx =
b + V, pbx = p , pbx + V , pbx (25.69)
2m 2m x
now h i
b2
px , pbx = [pbx pbx , pbx ] = 0 (25.70)
To find [V (x), pbx ], consider
h̄ ∂Ψ h̄ ∂V Ψ
[V (x), pbx ] ψ(x) = V (x) −
i ∂x i ∂x
h̄ ∂Ψ ∂V ∂Ψ ∂V
= V − Ψ−V = ih̄ Ψ
i ∂x ∂x ∂x ∂x
so
∂V
[V (x), pbx ] = ih̄ (25.71)
∂x
223
Thus,
h i ∂V
H, pbx = ih̄
b (25.72)
∂x
Hence
d dV (x)
hpbx i = − (25.73)
dt dx
Equations (25.66) and (25.73) together are called Ehrenfest’s theorem (1927).
Note that they are true for any continuous V (x) and for any superposition
(or eigen-) state. We note, of course, that equation (25.73) looks a lot like
Newton’s second law for expectation values. Thus, it looks like our long
sought general connection between quantum and classical physics. It is not
quite that. For “Newton’s Second Law for the expectation values”, we would
like
d d
hpbx i = − V (hxi) (25.74)
dt dx
since
d
− V (hxi) = F (hxi) (25.75)
dx
but, equations (25.73) and (25.74) are generally not the same since, in general,
dV (x) d
6= V (hxi) (25.76)
dx d hxi
or
hF (x)i =
6 F (hxi) (25.77)
Proof:
Z +∞
dV (x) ∂V
= Ψ∗ (x, t) Ψ(x, t) dx (25.78)
dx −∞ ∂x
Z +∞
d d ∗
V (hxi) = V Ψ (x, t)xΨ(x, t) dx (25.79)
dx dx −∞
and these two are not equal in general. Of course, this somewhat deepens (or
muddies, depending on your point of view) to gap (or connection) between
quantum and classical physics.
224
25.6 When Does Classical Mechanics Apply?
When, then does classical mechanics apply? Must the system be macro-
scopic? The answer is no − for example, motion of electrons in an accelerator
beam are well described by the classical law
d2~x ~ + q V~ × B
~
me 2 = q E (25.80)
dt
in accelerator guide fields. One key is that V (x) is a “slowly varying” (com-
pared with the mean deBroglie wavelength in the packet under consideration)
function of x. To see this, we expand f (x) = − dVdx(x) around x = hxi:
dF (hxi) (x − hxi)2 d2 F (hxi)
F (x) = F (hxi) + (x − hxi) + + ··· (25.81)
dx 2! dx2
If (x − hxi) dFdx
(hxi)
F (hxi) for all (x − hxi) occupied by the packet at all
times t under consideration, then we can drop all terms but the first and
dV dV (hxi)
F (x) ≈ F (hxi) ⇒ ≈ (25.82)
dx d hxi
Under these conditions,
d2 hxi ∂V (hxi)
m → − (25.83)
dt2 ∂ hxi
25.6.1 Exceptions
225
Note that this is true for any state for any value of k (which controls how
fast V varies with x). Thus, for the harmonic potential, hxi always obeys
equation (25.85), which leads to “classical behavior”.
Question: What about for an eigenstate in this potential? is equation
(25.86) obeyed? Is equation (25.85) obeyed?
As you can show, hxi always obeys a classical equation of motion of V (x) =
Cxn as long as n is a positive integer greater than 2.
Question: Why doesn’t this work if n = 1?
226
classical quantities (“momentum”, “position”, “angular momentum”, etc.)
do not exist as quantities except in a certain limit. Rather, there are only
operators and states, and in the “classical limit” certain superposition states
behave as if certain quantities are conserved.
227
Chapter 26
26.1 An Introduction
228
This led to “results”:
1. That the ground state orbit has radius
h̄2
a0 = 4πε0 2
≈ 0.529 × 10−10 m (26.2)
me e
(“Bohr radius”)
2. That the energy of the ground state is
2
me e4
1
E1 = − = −13.6 eV (26.3)
4πε0 2h̄2
in agreement with the experimental value of the ionization energy.
3. That the energy of state n is
E1
En = (26.4)
n2
so, En ∝ n12 , which is the assumption of photon emission accompanying
“transitions” between energy levels, reproduces the Balmer series for-
mula. Still, due to the ad-hoc nature of its assumption, the Bohr model
is unsatisfactory.
Now we see what the Schrödinger equation quantum mechanics has to say.
We consider an electron, still not in fully objective reality, but in the Coulomb
field of the proton − the latter considered as a classical object that is infinitely
heavy and sets stationary at the origin. figure 26.1 shows one possible point
that the electron could materialize at upon position measurement.
fig 1
Thus, the electron has a state function that depends on all three of its
coordinates and the time
where
h̄2 2
∇ ψ(~r) + V (~r)ψ(~r) = Eψ(~r)
− (26.8)
2m
Here, V (x, y, z) is the Coulomb potential due to the proton at each “potential
position” (x, y, x) of the electron
e2e2
V (~r) = V (x, y, z) = − p =− (26.9)
4πε0 x2 + y 2 + z 2 4πε0 r
Clearly, spherical coordinates are more convenient for us here than the Carte-
sian coordinates. So, we re-express
230
26.2 Spherically Symmetric Solutions
231
simple, then this same function,ψ(r) = Ae−br , would also have the solve the
rest of equation (26.14), namely,
d2 ψ(r) 2m
= − Eψ(r) (26.19)
dr2 h̄2
This looks a bit familiar. Is it an oscillator equation? No − since E < 0
(why?) the right hand side is positive, not negative. That means that tis
solution is also a decaying exponential
1/2
−( 2mE ) r
ψ(r) = Be h̄2 (26.20)
so q q
−2mE −2mE
+ r − r
ψ(r) = Ae h̄2 + Be h̄2 (26.21)
but we must have A = 0 (why?). Of course, now we have two different
functions ψ(r), each solving an additive piece of the differential equation.
Let’s just try to equate them − is this possible? It is if
1/2 2
−( −2mE ) r − keh̄2m r
e h̄2 = e (26.22)
1/2
ke2 m
2mE
− 2 = (26.23)
h̄ h̄2
2
2mE ke2 m
− 2 = (26.24)
h̄ h̄4
k 2 me4
E = − (26.25)
2h̄2
which, amazingly, is exactly the ground state Bohr model energy (and it is
equal to −13.6 eV)! So let us look at our solution:
232
which is nothing but the Bohr radius a0 !
h̄2
a0 = 4πε0 (26.28)
me e2
So our solution is r
ψ(r) = Be− a0 (26.29)
which we expect top be the ground-state eigenfunction. This turns out to be
correct.
As you can show for yourself, another spherically symmetric solution to equa-
tion (26.15) is
r r
ψ2 (r) = 2 − e− 2a0 (26.30)
a0
fig 3
If you go through the algebra, you will find tat his is a solution only if
E1 13.6 eV
E = E2 = =− (26.31)
4 4
again agreeing with the Bohr theory. In fact, there is an entire series of
solutions to equation (26.15)
ψ1 (r), ψ2 (r), ψ3 (r), ψ4 (r), · · ·
2r2
2r − 3ar
ψ3 (r) = 1 − + e 0 (26.32)
3a0 27a20
fig 4
with energies
−13.6 eV
En = (26.33)
n2
This is a fantastic result! It shows that, although the Bohr model is really
wrong, the Bohr formula for the energies
2 4
mk e 1
En = − 2 (26.34)
2h̄ n2
233
is apparently correct in Schrödinger quantum mechanics! From this point of
view, it is interesting to contemplate the far from obvious fact that equation
(26.15) is apparently equivalent to the Balmer formula.
Of course, there is one aspect of the procedure we used above to find
solutions that is, perhaps, a bit unsatisfying − we guessed we guessed the
solutions beyond ψ1 . This raises the question − is there a systematic method
of finding these and also non-spherical symmetric eigenfunctions? The answer
is yes − and in the following we use methods of differential equations to show
this and to find them.
The hydrogen atom still holds more secrets. Thus, we return to equation
(26.13)
1 ∂ 2 ∂ψ(r, θ, φ) 1 ∂ ∂ψ(r, θ, φ)
r + 2 sin θ +
r2 ∂r ∂r r sin θ ∂θ ∂θ
2
1 ∂ψ(r, θ, φ) 2m e
+ 2 − E ψ(r, θ, φ) = 0 (26.35)
r2 sin2 θ ∂φ h̄ 4πε0 r
We try separation of variables: Search for solutions of the form
ψ(r, θ, φ) = R(r)Θ(θ)Φ(φ) (26.36)
we plug this into the Schrödinger equation. This yields
h̄2 1 ∂
∂ [RΘΦ] 1 ∂ ∂ [RΘΦ]
− 2
r2 + 2 sin θ +
2m r ∂r ∂r r sin θ ∂θ ∂θ
∂ 2 [RΘΦ]
1
+ V (r)RΘΦ = ERΘΦ (26.37)
r2 sin2 θ ∂φ2
Now,
∂ [RΘΦ] dR
= ΘΦ (26.38)
∂r dr
∂ [RΘΦ] dΘ
= RΦ (26.39)
∂θ dθ
2
∂ [RΘΦ] d2 Φ
= RΘ (26.40)
∂φ2 dφ2
234
so the Schrödinger equation becomes
h̄2 ΘΦ d
2 dR RΦ d dΘ
− r + 2 sin θ +
2m r2 dr dr r sin θ dθ dθ
RΘ d2 Φ
+ V (r)RΘΦ = ERΘΦ (26.41)
r2 sin2 θ dφ2
Now we divide by RΘΦ and multiply by −2µr2 sinθ (from now on, we let µ
stand for me )
1 d2 Φ sin2 θ d
dR sin θ d dΘ
2
= r2 − sin θ −
Φ dφ R dr dr Θ dθ dθ
2µ 2 2
r sin θ [E − V (r)] (26.42)
h̄2
In spite of the complication, the above echoes a by now familiar situation −
the left hand side depends only on one variable (φ while the right hand side
does not depend on this variable at all. Therefore, each side must equal a
constant, which for later connivence, we denote as “−m2` ”3 . (At this stage,
this is completely general − so far “−m2` ” can be any complex number). So
we now have two simpler equations.
1 d2 Φ
2
= −m2` (26.43)
Φ(φ) dφ
and
1 d 2 dR 1 d dΘ 2µ
− r = sin θ − 2 r2 [E − V (r)] =
R dr dr Θ sin θ dθ dθ h̄
m2`
− 2 (26.44)
sin θ
which is
2 2
1 d dR 2µr m 1 d dΘ
r2 + 2 [E − V (r)] = `
2 − sin θ (26.45)
R dr dr h̄ sin θ Θ sin θ dθ dθ
Again, the same sort of thing has happened − one side depends only on one
variable (θ) and the other only on another (r). So again, each side is equal to
3
The notation for the constant is conventional; the reason for the “`” subscript will become apparent in the
subsequent lectures.
235
a constant, call it “α”. So now we have three ordinary differential equations,
each of which is much simpler than the original partial differential equation:
1 d 2 dR 2µ R
r + [E − V (r)] = α (26.46)
r2 dr dr h̄2 r2
m2 Θ
1 d dΘ
− sin θ + `2 = αΘ (26.47)
sin θ dθ dθ sin θ
d2 Φ
2
= −m2` Φ (26.48)
dφ
In these equations lie locked the secrets of the hydrogen atom!
236
To remove any lingering doubt as to the equivalence of these solutions (bear
in mind that A, B, C, D are all allowed to range over all complex numbers),
we expand that last form by Euler’s theorem:
Φ(φ) = C [cos (m` φ) + i sin (m` φ)] + D [cos (m` φ) − i sin (m` φ)]
= (C + D) cos (m` φ) + i (C − D) sin (m` φ)
This is not all with this equation, however. Since changing an azimuthal
angle by 2π brings one back to the same point in space, we should require
This means
237
26.6 The Θ Equation
We have
m2` Θ
1 d dΘ
− sin θ + = αΘ (26.56)
sin θ dθ dθ sin2 θ
This is an equation that is well known to physicists and mathematicians. It
cvan be solved by standard power series methods4 ; as with the harmonic os-
cillator, the solutions that do blow up at θ = 0 and/or θ = π are polynomials
times well behaved functions of θ; these exist for integer values of of two
indices (`, m` ) where α = `(` + 1)
Θ`,m` (θ) = AP`m (cos θ) (26.57)
where P`m (cos θ) is the “associated Legendre function”, is
|m` |
|m |/2 d
P`m (cos θ) ≡ 1 − cos2 θ
`
P` (cos θ) (26.58)
dx
where P` (cos θ) is the `th Legendre polynomial, given by
`
1 d `
P` (x) = ` x2 − 1 (26.59)
2 `! dx
where the first few Legendre polynomials are
P0 (x) = 1
P1 (x) = x
1
3x2 − 1
P2 (x) =
2
1
5x3 − 3x
P3 (x) =
2
1
35x4 − 30x2 + 3
P4 (x) =
8
..
.
Some mathematical properties they obey: The Legendre polynomials are
orthogonal on the interval [−1, 1]:
Z 1 (
0 m 6= n
Pm (x)Pn (x) dx = 2
(26.60)
−1 2n+1 m = n
4
see any decent differential equations book.
238
They also form a complete set on this interval: Any “arbitrary” function f (x)
on [−1, 1] can be expressed as
∞
X
f (x) = an Pn (x) (26.61)
n=0
where, from the orthogonality relation above,
Z 1
1
an = n + f (x)Pn (x) dx (26.62)
2 −1
You can verify that equation (26.57)is a solution of equation (26.56) by simply
“plugging it in”. Note that for equation (26.59) to exist, ` has to be positive
integer. Further, if |m` | > `, |m` | ≤ `. Thus acceptable solutions exist only
for
` = 0, 1, 2, 3, 4, · · · (26.63)
m` = −`, −` + 1, −` + 2, · · · , −1, 0, 1, · · · , ` − 1, ` (26.64)
Now, as a second order differential equation (26.56) has two linearly indepen-
dent solutions for any ` and any m` , integer or not; the point is that these
diverge at θ = 0 and/or θ = π unless equation (26.63) and equation (26.64)
are satisfied.
239
where = (−1)m for m ≥ 0, = 1 for m ≤ 0. These are orthogonal; they
are listed (for low `, m) in the following tables. The orthogonality of the
spherical harmonics requires that
Z 2π Z π h 0 i
m ∗ m
[Y` (θ, φ)] Y`0 (θ, φ) sin θ dθ dφ = δ`,`0 δm,m0 (26.68)
0 0
1/2
1
Y00 =
4π
1/2
3
Y01 = cos θ
4π
1/2
3
Y1±1 = ∓ sin θe±iφ
8π
1/2
5
Y20 = 3 cos2 θ − 1
16π
1/2
15
Y2±1 = ∓ sin θ cos θe±iθ
8π
1/2
±2 15
Y2 = sin2 θe±2iφ
32π
1/2
7
Y03 = 5 cos3 θ − 3 cos θ
16π
1/2
±1 21
sin θ 5 cos2 θ − 1 e±iπ
Y3 = ∓
64π
1/2
105
Y3±2 = sin2 θ cos θe±2iφ
32π
1/2
±3 35
Y3 = ∓ sin3 θe±3iφ
64π
Essentially due to their orthogonality,the spherical harmonics are also com-
plete − any behaved function of θ and φ, f (θ, φ) can be expanded as
∞ X
X `
f (θ, φ) = a`,m Y`m (θ, φ) (26.69)
`=0 m=−`
240
while we do not prove this here, it is not terribly surprising since
241
Chapter 27
potential energy; for this reason it is often called the “centrifugal potential”1 .
This makes us suspect that our term in the Schrödinger equation
h̄2 `(` + 1)
(27.13)
2m r2
2
is somehow the value of 2mr L
2 (i.e., L
2
= `(` + 1)h̄2 ) for the eigenstates we
will find with “quantum numbers” ` and m` . As we will see next week, this
conjecture is exactly correct. Notice that the centrifugal potential acts like a
classical “force” that is repulsive from the origin and is very strong at small r
− this is like the centrifugal force in a rotating frame of reference in classical
mechanics
∂Vcent h̄2 `(` + 1)
− =+ (27.14)
∂r 2m r3
the + sign shows that this force is repulsive to + r and the r13 term grows
strongly with decreasing r.
With u(r) ≡ rR(r), our radial equation (from any spherically symmetric
potential V (r)) is
h̄2 d2 u h̄2 `(` + 1)
− + V (r) + u = Eu (27.15)
2m dr2 2m r2
For the hydrogen atom this becomes
h̄2 d2 u e2 1 h̄2 `(` + 1)
− + − + u = Eu (27.16)
2m dr2 4πε0 r 2m r2
To handle this, first we tidy it up (following Griffith’s notation) by putting
it in terms of a dimensionless dependent variable. We define
√
−2mE me2
κ= , ρ ≡ κr, ρ0 ≡ (27.17)
h̄ 2πε0h̄2 κ
1
In classical physics, the “force” − ∂V∂r
cent
associated with Vcent is called the “inertial” or “fictitious” force.
244
then the equation is
d2 u(ρ)
ρ0 `(` + 1)
= 1− + u(ρ) (27.18)
dρ2 ρ ρ2
It is tempting to try a direct power series solution of this,
∞
X
u(ρ) = an ρ n
n=0
but this does not work. From your reading for today, you know that a method
around this is to extract the behavior of u(ρ) in the limits ρ → 0 and ρ → ∞,
factoring these, then, we define a function v(ρ) by
u(ρ) ≡ ρ`+1 e−ρ v(ρ) (27.19)
then
d2 v dv
ρ + 2(` + 1 − ρ) + [ρ0 − 2(` + 1)] v = 0 (27.20)
dρ2 dρ
A simple power series trial
∞
X
v(ρ) = c j ρj
j=0
as a solution of equation (27.20) does not obviously fail, it leads to the re-
cursion relation
2(j + ` + 1) − ρ0
cj+1 = cj (27.21)
(j + 1)(j + 2` + 2)
As you know from your reading, here we again encounter the same kind of
problem we had for the harmonic oscillator problem − namely, the series
solution represented by equation (27.21) diverges as ρ → ∞ − and it badly
enough to make u(ρ) = ρ`+1 e−ρ v(ρ) also diverge as ρ → ∞. Interestingly,
“the same problem has the same solution” − our loophole (for the the atom
to exist) is the terminate the series into polynomials. Then, equation (27.21)
says for a given ρ0 , there must be a “jmax ” such that
2 (jmax + ` + 1) − ρ = 0 (27.22)
Letting n ≡ jmax + ` + 1, then the termination condition is
2n = ρ0 (27.23)
245
which is
me2 h̄
2n = √ (27.24)
2πεoh̄2 −2mE
or
√ me e2
−2mE = (27.25)
4πε0 nh̄
or 2
me4
1
En = − (27.26)
4πε0 2n2h̄2
therefore, 2
me e4
E1 1
En = 2 , E1 = − (27.27)
n 4πε0 2h̄2
these are exactly the Bohr energies! Further,
√
1 m2e 1
−2mE 1
κ= = = (27.28)
h̄ 4πε0 h̄2 n a0 n
2
where a0 = 4πε0 mh̄e e2 is the Bohr radius.
27.3 A check
Let us pause before going on to state our result in generality and check that
what we have makes sense. We consider the ground state. the question is −
how do we identify it? Recall our termination condition
n = jmax + ` + 1 (27.29)
The lowest energy is the minimum n, n = 1, and this means that jmax = ` = 0.
Recalling that this only allows only m` = 0 (m` ranges from −` to `), we see
that our set of quantum numbers for the ground state is n = 1, ` = 0, m` = 0.
So, we are looking at
ψ100 (r, θ, φ) = R1,0 (r)Y00 (θφ) (27.30)
Since Y00 is a constant, all we need is R1,0 (r). Now equation (27.21) with
j=0
2(0 + 0 + 1 − 1)
c1 = =0
(0 + 1)(0 + 0 + 2)
246
so for R1,0 only c0 is not zero. Then from equation (27.16),
u1,0 ρ0+1 e−ρ v1,0 (ρ)
R1,0 (r) = = (27.31)
r r
with v1,0 (ρ) = c0 (a constant). Now, ρ ≡ κr, and recalling that κ = a01n , here,
n=1
r − ar c0 r r
R1,0 (r) = c0 e 0 = e− a0 = constant × e− a0 (27.32)
a0 r a0
which is the same as we had by our original “easier method” early last week.
Properly normalized (we’ll see how to do this shortly),
1 r
ψ1,0,0 (rθ, φ) = p 3 e− a0 (27.33)
πa0
Now consider the next energy level − n = 2. Referring to our definition of
n, n ≡ jmax + ` + 1, we see that there two possible values of `
` = 0 ⇒ jmax = 1 (27.34)
and
` = 1 ⇒ jmax = 0 (27.35)
This means that there are actually four different states with this energy: for
` = 1, m` can be −`, 0, or +1! If ` = 0 equation (27.21) says c1 = −c0 ,
c2 = 0, so all higher cn = 0. Then, v(ρ) = c0 + c1 ρ = c0 (1 − ρ), so
c0 r r
R2,0 (r) = 1− e− 2a0 (27.36)
2a0 2a0
If ` = 1, jmax = 0 ⇒ v(ρ) = c0
c0 − 2ar
R2,1 (r) = re 0 (27.37)
4a20
so, you see how this thing works: for any n, n = jmax + ` + 1 tell us that the
possible values of ` are
` = 0, 1, 2, 3, · · · , n − 1 (27.38)
and we already know that for any `,
m` = −`, −` + 1, −` + 2, · · · , −1, 0, 1, · · · , ` − 1, ` (27.39)
247
(there are 2` + 1 values). Thus, level n has degeneracy
n−1
X
(2` + 1) = n2 (27.40)
`=0
In general,
where p
d
Lpq−p (x) ≡ (−1)p Lq (x) (27.48)
dx
where q
d
x
e−x xq
Lq (x) ≡ e (27.49)
dx
Lq (x) is called the “q th Laguerre polynomial”, and Lpq−p (x) is called the “as-
sociated Laguerre polynomial”. Some of the Laguerre polynomials (Lq ) are
248
listed below.
L0 = 1
L1 = −x + 1
L2 = x2 − 4x + 2
L3 = −x3 + 9x2 − 18x + 6
L4 = x64 − 16x3 + 72x2 − 96x + 24
L5 = −x5 + 25x4 − 200x3 + 600x2 − 600x + 120
L6 = x6 − 36x5 + 450x4 − 2400x3 + 5400x2 − 4320x + 720
L00 = 1
L01 = −x + 1
L02 = x2 − 4x + 2
L10 = 1
L11 = −2x + 4
L12 = 3x2 − 18x + 18
L20 = 2
L21 = −6x + 18
L22 = 12x2 − 96x + 144
L30 = 6
L31 = −24x + 96
L32 = 60x2 − 600x + 1200
ψ( n, `, m` ) =
s 3 `
2 (n − ` − 1)! −r/(na0 ) 2r 2r
e L2`+1
n−`−1 Y`m (θ, φ) (27.50)
na0 2n [(n + `)!]3 na0 na0
which is very complicated looking. Combining the orthonormality of the
Ye llm ’s with that of the associated Laguerre polynomials yields (we don’t
249
show this) the result that the {ψn,`,m } is a complete orthonormal set; the
orthonormality relation is
Z
∗
ψn,`,m (r, θ, φ)ψn0 ,`0 ,m0 (r, θ, φ)r2 sin θ dr dθ dφ = δn,n0 δ`,`0 δm,m0 (27.51)
The completeness property means that we can be assured that we have not
“left out” any solution of the Schrödinger equation by looking for product
solutions; any behaved function f (r, θ, φ) can be expanded as
∞ X
X ∞ X
`
f (r, θ, φ) = an,`,m` ψn,`,m` (r, θ, φ) (27.52)
n=1 `=0 m` =−`
In the next classes, we will try to make physical sense of our complicated
looking results.
250
Chapter 28
We have our eigenstates for the Schrödinger hydrogen atom, but let’s now
look again at the most basic solution − the ground state. According to
the Bohr model, the electron orbits at definite radius a0 ≈ 0.5 × 10−11 m.
However, in the Schrödinger theory
ψ1,0,0 = C1,0,0 e−r/a0 (28.1)
This then means that the probability density of ψ ∗ ψ,
ψ ∗ ψ = |C|2 e−2r/a0 (28.2)
peaks at r = 0 (inside the nucleus!)
fig 1
This seems to make no sense! Let us then, recall the volume element in
spherical coordinates.
fig 2
Thus, probability of materializing in dV is
ProbindV (r,θ,φ) = ψ ∗ (r, θ, φ)ψ(r, θ, φ)r2 sin θ dr dθ dφ (28.3)
Let us ask, then for the total probability that the electron will materialize
“between r and r + dr.” Since we are dealing here with a spherically sym-
metric state (ψ ∗ ψ is independent of θ and φ depends only on r), all we have
251
to do in this case is multiply ψ ∗ ψ by differential volume between r and r + dr
dV = 4πr2 dr. (hatched volume in figure 28.3.
fig 3
Thus
P (r) dr = ψ ∗ (r)ψ(r) · 4πr2 dr (28.4)
this is for s-states only1 . We now see the resolution of our paradox:
The factor r2 vanishes at the nucleus; thus the plot of P1,0,0) (r) dr is shown
in figure 28.4.
fig 4
Let us ash where this function peaks. To do this, we set
dP (r)
= 0
dr
d 2 −2r/a0
r e = 0
dr
2
r2 − · e−2r/a0 + 2re−2r/a0 = 0
a0
r2
= r
a0
therefore,
r = a0 (28.6)
Thus, for the ground state (ψ1,0,0 ), the most likely small range of r that the
electron can materialize in is centered on r = a0 !
Now, unlike the ground state, generally the excited state functions depend
on θ and φ (unless, of course, ` = 0, in which cases the procedure is similar
to that for the ground state.
∗ ∗
ψn,`,m`
(r, θ, φ)ψn,`,m` (r, θ, φ) = Rn,` Θ∗`,m` Φ∗m` Rn,` Θ`,m` Φm (28.7)
1
“s-states” are eigenstates with ` = 0. The notation “s” is historical, but still widely used.
252
subject to the normalization condition
Z ∞ Z π Z 2π
∗
ψn,`,m `
(r, θ, φ)ψn,`,m` (r, θ, φ)r2 sin θ dr dθ dφ = 1 (28.8)
0 0 0
Now the Θ and Φ functions are separately normalized so that the integrals
above involving them are each equal to one. Thus,
∗
Pn,` (r) dr = r2 Rn,` (r)Rn,` (r) dr (28.9)
for any n and `. Thus, for example, in state n, `, m hri is found from
Z ∞
hrn,` i = rPn,` (r) dr (28.11)
0
Here is a way of remembering the r2 factor in Pn,` (r) : P (r) is the probability
of finding the electron anywhere in the spherical shell of thickness dr centered
on r − this is ψ ∗ ψ × shell volume.
253
The 4π is absorbed in the conversion from ψ to R and the angular integra-
tions. To see this, consider, e.g., the 1 − s state
1 1 1
ψ1,0,0 (r) = √ R(r) = √ 3/2 e−r/a0 (28.12)
4π πa
0
with probability
P (r) dr = |ψ|2 · 4πr2 dr (28.13)
1
= |R(r)|2 · 4πr2 dr (28.14)
4π
= r2 |R(r)|2 dr (28.15)
In any case, equation (28.9) on page 253 is correct for any eigenstate ψn,`,m` .
The factor r2 in r2 |Rn,` |2 is important for any radial probability calculation
and must not be omitted. Figure 28.5 shows plots for the n = 1, 2 and 3
states.
fig 5
The small vertical arrows indicate the position of hri in units of a0 ; note
that, in general hri 6= rpeak of dP/dr . The expectation value is, of course, cal-
culated from
Z ∞
hrn,` i = rPn,` (r) dr
0
Z ∞
= r · r2 |Rn,` |2 dr
Z0 ∞ Z π Z 2π
∗
= ψn,`,m rψn,`,m r2 sin θ dr dθ dφ
0 0 0
254
exact for n = 3, ` = 2), etc. This is roughly in agreement with the Bohr
model (Rn = n2 a0 ) and provides support for a shell model of the atom. In
order order to make much sense of the angular probability distributions, it is
good to first discuss the connection with the angular momentum in quantum
mechanics, and that we turn to next.
255
We will also be interested in Lc2 = Lc2 + Lc2 + Lc2 ; in spherical coordinates this
x y z
works out to be
2
c2 = −h̄ 1 ∂ ∂ 1 ∂
L sin θ + (28.26)
sin θ ∂θ ∂θ sin2 θ ∂φ2
Let us consider finding the eigenfunctions of L̂z and L c2 . Recall that, for any
operator Q̂, the function f (~r) is an eigenfunction of Q̂ if and only if
Q̂f (~r) = constant × f (~r) (28.27)
the constant is then the eigenvalue corresponding to the eigenfunction f (~r)
of Q̂. Consider, then, the eigenfunction problem for L̂z :
∂ df (φ) ic
−ih̄ f (φ) = cf (φ) ⇒ = f (φ)
∂φ dφ h̄
therefore,
ic
f (φ) = Ae h̄ φ (28.28)
any function f (φ) of this form is an eigenfunction of L̂z . Now consider the
functions e−m` φ from the hydrogen atom. Clearly
L̂z eim` φ = m`h̄eim` φ (28.29)
since
Y`m (θ, φ) ∝ P`m (cos θ) eim` φ (28.30)
L̂z Y`m (θ, φ) = m`h̄Y`m (θ, φ) (28.31)
Further since
∂
L̂z ψn,`,m` (r, θ, φ) = −ih̄
∂φ
∂ m
= R(r) −ih̄ Y` (θ, φ)
∂φ
= m`h̄ [R(r)Y`m (θ, φ)]
= m`h̄ψn,`,m (r, θ, φ)
thus, ψn,`,m (R, θ, φ) is an eigenfunction of L̂z with eigenvalue m`h̄. Now
consider equation (28.3)
2
c2 = −h̄ 1 ∂ ∂ 1 ∂
L sin θ +
sin θ ∂θ ∂θ sin2 θ ∂φ2
256
The right hand side of this looks kind of familiar. Recall the “Θ-equation”
for the hydrogen atom eigenstates; it is
m2`
1 ∂ ∂
− sin θ + = `(` + 1)Θ(θ) (28.32)
sin θ ∂θ ∂θ sin2 θ
This is solved by Θ`,m` (θ). Consider, then
h̄2 ∂ 2
2 2 1 ∂ ∂
L ψm,`,m` = −h̄
c sin θ ψn,`,m` − ψn,`,m` (28.33)
sin θ ∂θ ∂θ sin2 θ ∂φ2
since
∂2 ∂ 2 im` φ
2
ψn,`.m` = R(r)Θ(θ) 2
e = (im` )2 R(r)Θ(θ)Φ(φ)
∂φ ∂φ
m2`
2 2 1 ∂ ∂
L ψn,`,m` = −h̄
c sin θ ψn,`,m` − ψn,`,m` (28.34)
sin θ ∂θ ∂θ sin2 θ
therefore,
2
c2 ψn,`,m = −h̄2 Rn,` (r)Φm (φ) 1 ∂ ∂ m `
L ` `
sin θ − 2 Θ(θ) (28.35)
sin θ ∂θ ∂θ sin θ
by using equation (28.32), this is
c2 ψn,`,m = h̄2 `(ell + 1)ψn,`,m
L (28.36)
` `
257
28.4 Compatible and Incompatible Quantities
We have established3 that the following three operators commute with each
other: h i h i h i
b c2
H, L = H, L̂z = H, L̂z
b b (28.38)
Further, we have seen that the ψn,`,m` are simultaneously, eigenfunctions of
all three of these operators. This means that the values of of E, L2 , and Lz
are sharp for any state ψn,`,m` . These results illustrate a general theorem in
quantum mechanics h i
Theorem: If observable operators  and B̂ commute ( Â, B̂ = 0), then
there exists a complete set of functions {ψA,B } that are simultaneously eigen-
functions of  and eigenfunctions of B̂ − i.e.,
ÂψA,B = AψA,B
B̂ψA,B = BψA,B
thus, in such a state, the quantities corresponding to both  and B̂ are sharp.
We will not have time to prove this theorem in general, but we note that the operators Ĥ, L̂², and L̂_z for the hydrogen atom provide an example.
On the other hand, the inverse of the theorem is also true, i.e., if Â and B̂ do not commute ([Â, B̂] ≠ 0), then, other than the zero function, there exists no function that is simultaneously an eigenfunction of Â and also an eigenfunction of B̂. Thus, if [Â, B̂] ≠ 0, there exists no state (other than ψ = 0) for which A and B are both sharp.
Example: Consider x̂ and p̂_x.
We know that [x̂, p̂_x] = iℏ ≠ 0. We also already know that there exists no state for which x and p_x are simultaneously sharp; the Heisenberg Uncertainty Principle forbids this. Now, we begin to see why.
In fact, in any state, the product of the statistical uncertainties σ_A σ_B of A and B is determined by the extent of the noncommutation:
\[
\sigma_A^2\,\sigma_B^2 \ge \left(\frac{1}{2i}\left\langle\left[\hat{A},\hat{B}\right]\right\rangle\right)^2 \tag{28.39}
\]
³You can easily show these.
where ⟨[ ]⟩ is the expectation value of the commutator [Â, B̂]. Equation (28.39) is the generalized uncertainty principle. It is proved in the appendix to this lecture.
Example: [x̂, p̂_x] according to equation (28.39)
\[
\sigma_x^2\,\sigma_p^2 \ge \left(\frac{1}{2i}\cdot i\hbar\right)^2 = \left(\frac{\hbar}{2}\right)^2 \quad\Rightarrow\quad \sigma_x\,\sigma_p \ge \frac{\hbar}{2} \tag{28.40}
\]
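Here is a minimal numerical sketch (Python with numpy, not part of these notes) of this statement: it evaluates σ_x and σ_p on a grid for a Gaussian wave packet, which should saturate the bound. The units (ℏ = 1) and the width parameter a are assumptions of the sketch.
\begin{verbatim}
# Grid-based check of sigma_x * sigma_p >= hbar/2 for a Gaussian wave packet
# (which should saturate the bound).  Units with hbar = 1; the width "a" is
# an arbitrary choice for the sketch.
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
a = 1.3
psi = (2.0 * np.pi * a**2) ** -0.25 * np.exp(-x**2 / (4.0 * a**2))

prob = np.abs(psi) ** 2
norm = prob.sum() * dx                      # ~ 1
x_avg = (x * prob).sum() * dx / norm
x2_avg = (x**2 * prob).sum() * dx / norm
sigma_x = np.sqrt(x2_avg - x_avg**2)

dpsi = np.gradient(psi, dx)                 # d(psi)/dx by central differences
p_avg = ((np.conj(psi) * (-1j * hbar) * dpsi).sum() * dx / norm).real
p2_avg = (hbar**2 * np.abs(dpsi) ** 2).sum() * dx / norm
sigma_p = np.sqrt(p2_avg - p_avg**2)

print(sigma_x * sigma_p, ">= hbar/2 =", hbar / 2.0)   # ~ 0.500: equality for a Gaussian
\end{verbatim}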
Now, one can show that
\[
\left[\hat{L}_x, \hat{L}_y\right] = i\hbar\,\hat{L}_z
\]
\[
\left[\hat{L}_y, \hat{L}_z\right] = i\hbar\,\hat{L}_x \tag{28.41}
\]
\[
\left[\hat{L}_z, \hat{L}_x\right] = i\hbar\,\hat{L}_y
\]
This means that there is no state (other than ψ = 0) that is sharp in more than one Cartesian component of L.
Example: ψ_{n,ℓ,m} is sharp in L_z (L_z = m_ℓℏ). Therefore, it is not sharp in L_x and not sharp in L_y.
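For ℓ = 1 the commutation relations (28.41) can be checked directly with 3×3 matrices. The following short sketch (Python with numpy, not part of these notes) assumes the standard matrix representation of L̂_x, L̂_y, L̂_z in the basis of L_z eigenstates, with ℏ set to 1.
\begin{verbatim}
# Matrix spot-check of [L_x, L_y] = i hbar L_z in the l = 1 subspace, where
# the operators are 3x3 matrices in the basis of L_z eigenstates
# (m_l = +1, 0, -1); hbar = 1 here.
import numpy as np

hbar = 1.0
s = hbar / np.sqrt(2.0)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

comm = Lx @ Ly - Ly @ Lx
print(np.allclose(comm, 1j * hbar * Lz))    # True
\end{verbatim}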
Question: Suppose, then, that the state function is ψ_{n,ℓ,m} and we measure L_x. What possible results can you obtain? What is the probability of each result?
Answer:
To answer this, one must expand the state function ψ_{n,ℓ,m} over the complete set of eigenfunctions of the operator L̂_x. (Since L̂_x is a Hermitian operator, like all other Hermitian operators it possesses a complete set of eigenfunctions.) Let us call these functions {f_{L_x}}. Then our expansion is
\[
\psi_{n,\ell,m} = \sum_k c_k\, f_{L_x,k} \tag{28.42}
\]
where
\[
\hat{L}_x\, f_{L_x,k} = L_{x,k}\, f_{L_x,k}
\]
The probability of obtaining the result L_{x,k} is |c_k|².
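This recipe can be carried out concretely for ℓ = 1 by diagonalizing the 3×3 matrix for L̂_x and projecting the starting state onto its eigenvectors. The sketch below (Python with numpy, not part of these notes) assumes ℏ = 1 and the basis ordering m_ℓ = +1, 0, −1.
\begin{verbatim}
# Concrete (matrix) version of the expansion (28.42) for l = 1: start in the
# L_z eigenstate with m_l = +1, diagonalize L_x, and read off |c_k|^2.
# Basis order is m_l = +1, 0, -1; hbar = 1.
import numpy as np

hbar = 1.0
Lx = (hbar / np.sqrt(2.0)) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)

eigvals, eigvecs = np.linalg.eigh(Lx)              # columns are the f_{L_x,k}
psi = np.array([1.0, 0.0, 0.0], dtype=complex)     # the L_z eigenstate m_l = +1

c = eigvecs.conj().T @ psi                         # expansion coefficients c_k
for Lxk, ck in zip(eigvals, c):
    print("L_x =", round(Lxk.real, 3), "hbar   probability", round(abs(ck)**2, 3))

# Expected: L_x = -1, 0, +1 (in units of hbar) with probabilities 1/4, 1/2, 1/4.
\end{verbatim}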
We will not inquire here into the nature of the eigenfunctions of L̂_x. We note, however (without proof), that the eigenvalues of L̂_x are integer multiples of ℏ, as are the eigenvalues of L̂_z (and L̂_y). For this reason, ℏ is the fundamental unit of angular momentum.
but
\[
\Phi^*_{m_\ell}\,\Phi_{m_\ell} = e^{-im_\ell\phi}\, e^{im_\ell\phi} = 1 \tag{28.44}
\]
consequently, for state ψ_{n,ℓ,m_ℓ}, the probability density does not depend on φ. Thus, to study the angular dependence of probabilities for the eigenstates, we need only study Θ*_{ℓ,m_ℓ}Θ_{ℓ,m_ℓ}. A simplified way of studying this involves a polar diagram (see figure 28.6); the origin is at the nucleus, and the z-axis is taken along the direction from which θ is measured. The distance from the origin to the curve, at angle θ, is proportional to Θ*_{ℓ,m_ℓ}(θ)Θ_{ℓ,m_ℓ}(θ).
fig 6
Let us look at a few of the states in more detail. The 1s and 2s states are, of course, spherically symmetric. However, consider the 2p states.
\[
\psi_{2,1,0} = C_{2,1,0}\,\frac{r}{a_0}\, e^{-r/2a_0}\cos\theta \tag{28.45}
\]
\[
\psi_{2,1,\pm 1} = C_{2,1,1}\,\frac{r}{a_0}\, e^{-r/2a_0}\sin\theta\, e^{\pm i\phi} \tag{28.46}
\]
The angular probability densities Θ*Θ therefore behave as cos²θ for m_ℓ = 0 and as sin²θ for m_ℓ = ±1
(where θ is measured from the z-axis). Computer plots in which the density
of the dots gives an indication of the r-dependence of the probability density
are shown in figures 28.7 through 28.12.
fig 7-12
Appendix
Before we prove this, first let's get some intuition on it in ordinary three-dimensional space. Suppose that, in ordinary three-dimensional space, we have two vectors, A⃗ and B⃗. Then
fig 13
\[
\langle A \mid B \rangle \equiv \vec{A}\cdot\vec{B} = AB\cos\theta \le AB \tag{28.51}
\]
therefore,
\[
|\langle A \mid B \rangle|^2 \le |A|^2\,|B|^2 \tag{28.52}
\]
The Schwarz inequality says "the same thing" on the infinite-dimensional function space.
Proof
1. Lemma: For any normalizable function f(x), ⟨f | f⟩ ≥ 0.
Proof: Using completeness, expand in the complete set of eigenfunctions of any Hermitian observable
\[
f(x) = \sum_n c_n\,\psi_n \tag{28.53}
\]
Then,
\begin{align*}
\langle f \mid f \rangle &= \left\langle \sum_{i=1}^{N} c_i\,\psi_i \;\Big|\; \sum_{j=1}^{N} c_j\,\psi_j \right\rangle \\
&= \sum_{i=1}^{N}\sum_{j=1}^{N} c_i^*\, c_j\,\langle \psi_i \mid \psi_j \rangle \\
&= \sum_{i=1}^{N}\sum_{j=1}^{N} c_i^*\, c_j\,\delta_{i,j} \\
&= \sum_{i=1}^{N} |c_i|^2 \ge 0
\end{align*}
2. Now apply the lemma to the combination f(x) + b g(x), where b is any complex number; expanding the inner product gives
\[
0 \le \langle f \mid f \rangle + b\,\langle f \mid g \rangle + b^*\langle g \mid f \rangle + b\,b^*\langle g \mid g \rangle \tag{28.55}
\]
Choosing b = −⟨g | f⟩/⟨g | g⟩ (the value that makes the bound tightest) and substituting, we have
\[
0 \le \langle f \mid f \rangle - 2\,\frac{|\langle f \mid g \rangle|^2}{\langle g \mid g \rangle} + \frac{|\langle f \mid g \rangle|^2}{\langle g \mid g \rangle} = \langle f \mid f \rangle - \frac{|\langle f \mid g \rangle|^2}{\langle g \mid g \rangle} \tag{28.60}
\]
Therefore,
\[
\langle f \mid f \rangle\,\langle g \mid g \rangle \ge |\langle f \mid g \rangle|^2 \tag{28.61}
\]
Now, for the state Ψ and the observables Â and B̂, one has σ_A² = ⟨f | f⟩, where f(x) ≡ (Â − ⟨A⟩)Ψ, and likewise
\[
\sigma_B^2 = \langle g \mid g \rangle \tag{28.63}
\]
where g(x) ≡ (B̂ − ⟨B⟩)Ψ. Consider now
\[
\sigma_A^2\,\sigma_B^2 = \langle f \mid f \rangle\,\langle g \mid g \rangle \tag{28.64}
\]
By the Schwarz inequality, the right-hand side of this is greater than or equal to |⟨f | g⟩|², hence
\[
\sigma_A^2\,\sigma_B^2 \ge |\langle f \mid g \rangle|^2 \tag{28.65}
\]
Now ⟨f | g⟩ is some complex number; call it z. Since, for any complex number,
\[
|z|^2 = z^* z = \left[\mathrm{Re}(z) - i\,\mathrm{Im}(z)\right]\left[\mathrm{Re}(z) + i\,\mathrm{Im}(z)\right] = \left[\mathrm{Re}(z)\right]^2 + \left[\mathrm{Im}(z)\right]^2 \ge \left[\mathrm{Im}(z)\right]^2 = \left(\frac{z - z^*}{2i}\right)^2
\]
we have
\[
\sigma_A^2\,\sigma_B^2 \ge \left(\frac{\langle f \mid g \rangle - \langle g \mid f \rangle}{2i}\right)^2 \tag{28.66}
\]
Let us work this out in terms of Â and B̂:
\begin{align*}
\langle f \mid g \rangle &= \left\langle \left(\hat{A} - \langle A\rangle\right)\Psi \;\Big|\; \left(\hat{B} - \langle B\rangle\right)\Psi \right\rangle \\
&= \left\langle \Psi \;\Big|\; \left(\hat{A} - \langle A\rangle\right)\left(\hat{B} - \langle B\rangle\right)\Psi \right\rangle \\
&= \left\langle \Psi \;\Big|\; \left\{\hat{A}\hat{B} - \langle A\rangle\hat{B} - \hat{A}\langle B\rangle + \langle A\rangle\langle B\rangle\right\}\Psi \right\rangle \\
&= \left\langle \Psi \mid \hat{A}\hat{B}\,\Psi \right\rangle - \langle A\rangle\left\langle \Psi \mid \hat{B}\Psi \right\rangle - \langle B\rangle\left\langle \Psi \mid \hat{A}\Psi \right\rangle + \langle A\rangle\langle B\rangle\langle \Psi \mid \Psi \rangle \\
&= \langle AB\rangle - \langle A\rangle\langle B\rangle - \langle B\rangle\langle A\rangle + \langle A\rangle\langle B\rangle \\
&= \langle AB\rangle - \langle A\rangle\langle B\rangle
\end{align*}
Likewise,
\[
\langle g \mid f \rangle = \langle f \mid g \rangle^* = \langle BA\rangle - \langle A\rangle\langle B\rangle
\]
Therefore, equation (28.66) says
\[
\sigma_A^2\,\sigma_B^2 \ge \left(\frac{\langle AB\rangle - \langle BA\rangle}{2i}\right)^2 = \left(\frac{\langle AB - BA\rangle}{2i}\right)^2 \equiv \left(\frac{\left\langle\left[\hat{A},\hat{B}\right]\right\rangle}{2i}\right)^2 \tag{28.67}
\]
In particular, applying (28.67) to Â = x̂ and B̂ = p̂_x, with [x̂, p̂_x] = iℏ, gives σ_x²σ_{p_x}² ≥ (ℏ/2)², or
\[
\sigma_x\,\sigma_{p_x} \ge \frac{\hbar}{2} \tag{28.70}
\]
which you recognize as the original Heisenberg Uncertainty Principle, now proved. However, we now see that this is just one case of a much more general (and deeper) principle. We will discuss one consequence of this in the next class.
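As a final illustration of (28.67), here is a short numerical sketch (Python with numpy, not part of these notes) that checks the inequality for A = L_x and B = L_y in a randomly chosen ℓ = 1 state, using the same 3×3 matrices as in the earlier sketches; ℏ = 1 is an assumption of the sketch.
\begin{verbatim}
# Numerical check of (28.67) for A = L_x, B = L_y in a randomly chosen
# normalized l = 1 state, using the same 3x3 matrices as in the earlier
# sketches; hbar = 1.
import numpy as np

hbar = 1.0
s = hbar / np.sqrt(2.0)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

def expval(Op, state):
    return (state.conj() @ Op @ state).real

var_A = expval(Lx @ Lx, psi) - expval(Lx, psi) ** 2        # sigma_A^2
var_B = expval(Ly @ Ly, psi) - expval(Ly, psi) ** 2        # sigma_B^2
comm_avg = psi.conj() @ (Lx @ Ly - Ly @ Lx) @ psi          # <[L_x, L_y]>, purely imaginary
bound = abs(comm_avg / 2j) ** 2

print(var_A * var_B, ">=", bound)                          # the inequality always holds
\end{verbatim}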
Chapter 29
Look back at our polar plots of the angular dependence of the hydrogen atom eigenstate probability densities. We see in these plots that, for a given state with ℓ ≠ 0, there are conical "nodal surfaces" on which the electron can never materialize. This is very strange, you might think. You might say: prepare a million different hydrogen atoms in the same state, then measure the position of the electron for each. This would then locate the nodal surfaces, from which one could locate the z-axis. But this should be impossible; since V(r) is spherically symmetric, this would amount to finding a preferred direction in space, which is a violation of a very basic physical principle. Thus, it appears that we have a serious paradox on our hands, one that undermines the entire Schrödinger theory of the hydrogen atom. Let us think on this.
Consider any one of the n = 2 states, for example, ψ_{2,1,1}. How could one set a system into this state? We need to perform three measurements: energy (to segregate, for further measurements later, all atoms materializing in the n = 2 energy level), magnitude of angular momentum (to segregate all atoms with ℓ = 1 in addition to n = 2), and L_z (to finally define L_z = 1ℏ). Consider, for example, the last measurement. To accomplish it requires an external magnetic field set up to point in the direction we wish to call "z". Thus, to
make the measurement requires the deliberate setting up of a special direction in space, thus destroying the isotropy. Without destroying the isotropy of space by forcing a "special direction," it is not possible to measure L_z. Thus, without destroying the isotropy of space, all we can say is that an atom could be in any of ψ_{2,1,−1}, ψ_{2,1,0}, ψ_{2,1,1}, or any superposition of these. Thus, on average, a large ensemble would correspond to an equal-weight combination of these states; thus
\[
\left(\psi^*\psi\right)_{\text{effective}} = \frac{1}{3}\left[\psi^*_{2,1,-1}\,\psi_{2,1,-1} + \psi^*_{2,1,1}\,\psi_{2,1,1} + \psi^*_{2,1,0}\,\psi_{2,1,0}\right] \tag{29.1}
\]
Two of these terms are proportional to ½ sin²θ, and one is proportional to cos²θ. Therefore, the sum is spherically symmetric (sin²θ + cos²θ = 1, so there is no θ dependence).
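This cancellation can be checked directly on the angular factors. The short symbolic sketch below (Python with sympy, not part of these notes) sums |Y₁^m|² over m = −1, 0, +1 and confirms that the result is a constant, with no θ or φ dependence.
\begin{verbatim}
# Symbolic check (sympy) that the equal-weight sum (29.1) has no angular
# dependence: |Y_1^{-1}|^2 + |Y_1^0|^2 + |Y_1^1|^2 reduces to a constant.
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

total = 0
for m in (-1, 0, 1):
    Y = sp.Ynm(1, m, theta, phi).expand(func=True)
    total += sp.expand(Y * sp.conjugate(Y))

print(sp.simplify(total))    # 3/(4*pi): no theta or phi dependence left
\end{verbatim}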
This logic shows that, at least, we cannot "force" the paradox to appear experimentally. To further resolve the paradox conceptually, we must realize again that the "wave function" ψ is not a real thing that exists in real space. (In fact, this paradox demonstrates one of the major problems with such a naive "realism" interpretation of ψ.)
and hence is "diluted". With this in mind, we investigate the possibility of forming new eigenstates that have a more "centered" probability density than do ψ_{2,1,1} and ψ_{2,1,−1}. To begin to understand how this works, we consider the following statement: Any linear combination of eigenstates corresponding to the same energy for a given Hamiltonian Ĥ is also an eigenstate of Ĥ for that same energy.
Proof: Suppose Ĥψ₁ = Eψ₁ and Ĥψ₂ = Eψ₂. Let c₁ and c₂ be any complex numbers. Then
\[
c_1\,\hat{H}\psi_1 = \hat{H}(c_1\psi_1) = c_1 E\psi_1 = E\,c_1\psi_1 \tag{29.5}
\]
\[
c_2\,\hat{H}\psi_2 = \hat{H}(c_2\psi_2) = c_2 E\psi_2 = E\,c_2\psi_2 \tag{29.6}
\]
(since Ĥ is linear). Adding these,
\[
\hat{H}(c_1\psi_1 + c_2\psi_2) = E(c_1\psi_1 + c_2\psi_2) \tag{29.8}
\]
(again using the linearity of Ĥ). Therefore, c₁ψ₁ + c₂ψ₂ is also an eigenfunction of Ĥ with eigenvalue E.
With this in mind, let us consider the following three linear combinations of ψ_{n,1,0}, ψ_{n,1,1}, and ψ_{n,1,−1}:
\[
p_z = \psi_{n,1,0}(\vec{r}) = \sqrt{\frac{3}{4\pi}}\,\frac{z}{r}\,R_{n,1}(r) \tag{29.9}
\]
\[
p_x = -\frac{1}{\sqrt{2}}\left[\psi_{n,1,1}(\vec{r}) - \psi_{n,1,-1}(\vec{r})\right] = \sqrt{\frac{3}{4\pi}}\,\frac{x}{r}\,R_{n,1}(r) \tag{29.10}
\]
\[
p_y = \frac{i}{\sqrt{2}}\left[\psi_{n,1,1}(\vec{r}) + \psi_{n,1,-1}(\vec{r})\right] = \sqrt{\frac{3}{4\pi}}\,\frac{y}{r}\,R_{n,1}(r) \tag{29.11}
\]
These new eigenfunctions (“orbitals” to chemists) are shown in figure 29.1.
fig 1
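The angular identities behind (29.10) and (29.11) can be verified symbolically. The sketch below (Python with sympy, not part of these notes) checks that the stated combinations of Y₁^{±1} reduce to √(3/4π) x/r and √(3/4π) y/r.
\begin{verbatim}
# Symbolic check (sympy) of (29.10) and (29.11): the combinations of Y_1^{+1}
# and Y_1^{-1} used for p_x and p_y equal sqrt(3/4pi) x/r and sqrt(3/4pi) y/r
# (x/r = sin(theta)cos(phi), y/r = sin(theta)sin(phi)).
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
Y1p1 = sp.Ynm(1, 1, theta, phi).expand(func=True)
Y1m1 = sp.Ynm(1, -1, theta, phi).expand(func=True)

px_ang = -(Y1p1 - Y1m1) / sp.sqrt(2)
py_ang = sp.I * (Y1p1 + Y1m1) / sp.sqrt(2)
target_x = sp.sqrt(3 / (4 * sp.pi)) * sp.sin(theta) * sp.cos(phi)
target_y = sp.sqrt(3 / (4 * sp.pi)) * sp.sin(theta) * sp.sin(phi)

print(sp.simplify((px_ang - target_x).rewrite(sp.cos)))   # 0
print(sp.simplify((py_ang - target_y).rewrite(sp.cos)))   # 0
\end{verbatim}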
In each of these we have two regions of relatively high electron probability density, which is good for chemical bonding. An example wherein nature "makes use of this" is the crucially important H₂O molecule. In the oxygen atom (at the "center"; 8 electrons in total), the 1s state is filled (uses two electrons, spin up and spin down). The 2s state is also filled (uses 2 more electrons), one 2p orbital is fully occupied (uses 2 more electrons), and the remaining two 2p orbitals are each only half full (1 electron each), so each can join with a 1s orbital from a hydrogen atom in its ground state to form what are called "spσ" bonding orbitals (shown in figure 29.2). In this, essentially the 2p_x and 2p_y orbitals of the oxygen are used.
fig 2
29.4 If an Eigenstate is Forever, Why Do Atoms Radiate?
Note that the coefficients a_i and a_f are time dependent, where a_i(t) = 1 before the transition and zero after the transition, and a_f(t) = 0 before the transition and one after the transition. Now consider ⟨r⃗⟩ for the electron
during the transition:
\begin{align*}
\langle \vec{r}\,\rangle_t &= \int \Psi^*(\vec{r},t)\,\vec{r}\,\Psi(\vec{r},t)\,d^3\vec{r} \\
&= a_i^* a_i \int \vec{r}\,|\psi_i|^2\,d^3\vec{r} + a_f^* a_f \int \vec{r}\,|\psi_f|^2\,d^3\vec{r} \\
&\quad + a_i^* a_f \left[\int \psi_i^*\,\vec{r}\,\psi_f\,d^3\vec{r}\right] e^{+i(E_i - E_f)t/\hbar} \\
&\quad + a_f^* a_i \left[\int \psi_f^*\,\vec{r}\,\psi_i\,d^3\vec{r}\right] e^{-i(E_i - E_f)t/\hbar}
\end{align*}
Now, since ψ_i and ψ_f are eigenstates ψ_{n,ℓ,m_ℓ}, the first two integrals vanish. The third and fourth terms are complex conjugates, so their sum is
\[
\langle \vec{r}\,\rangle_t = 2\,\mathrm{Re}\left\{a_f^* a_i \left[\int \psi_f^*\,\vec{r}\,\psi_i\,d^3\vec{r}\right] e^{-i(E_i - E_f)t/\hbar}\right\} \tag{29.13}
\]
This expression has three factors. The middle factor is called "the matrix element of r⃗"; it is an overlap integral involving the final and initial states, and it is a measure of the amplitude of the oscillation defined by the real part of the last factor, cos[(E_i − E_f)t/ℏ]. Thus, in this picture, during the transition the expectation value of the electron's position undergoes simple harmonic motion of frequency (E_i − E_f)/ℏ = (E_2 − E_1)/ℏ; semi-classically, this would require the emission of electromagnetic radiation at just this frequency.
dependence is not e^{−iE₂t/ℏ}, where E₂ is the energy of 2p. Rather, it is best to just write it as Ψ(r⃗, t).
So, what is Ψ(r⃗, t)? According to the completeness theorem (again!), whatever Ψ is, the eigenstates ψ_{n,ℓ,m_ℓ} are a complete set, so
\[
\Psi(\vec{r}, t) = \sum_{n,\ell,m_\ell} a_{n,\ell,m_\ell}(t)\,\psi_{n,\ell,m_\ell}\,e^{-iE_n t/\hbar} \tag{29.16}
\]
where the expansion coefficients a_{n,ℓ,m_ℓ}(t) depend on time. We note that equation (29.16) is of the form of equation (29.12). Detailed analysis shows that only the initial 2p state and the ground 1s state contribute to equation (29.16), so equation (29.12) does indeed result.
We have seen that the transition rate (probability of decay from the excited state to the ground state, with emission of electromagnetic radiation, per second) should be proportional to the square of the "matrix element" between the initial and final electron states, i.e.,
\[
R \propto |P_{fi}|^2, \qquad P_{fi} \equiv \int \psi_f^*\, e\vec{r}\,\psi_i\, d\tau \tag{29.17}
\]
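To get a feel for the size of such a matrix element, here is a minimal numerical sketch (Python with numpy/scipy, not part of these notes) of the radial factor of P_fi for the 2p → 1s transition; the standard hydrogen radial functions with a₀ = 1 are an assumption of the sketch, and the angular factor and charge e are omitted.
\begin{verbatim}
# Numerical sketch of the radial factor of P_fi for the 2p -> 1s transition,
# Int_0^inf R_10(r) r R_21(r) r^2 dr, using the standard hydrogen radial
# functions with a0 = 1 (the angular factor and the charge e are left out).
import numpy as np
from scipy.integrate import quad

a0 = 1.0
R10 = lambda r: 2.0 * a0**-1.5 * np.exp(-r / a0)
R21 = lambda r: (1.0 / np.sqrt(24.0)) * a0**-1.5 * (r / a0) * np.exp(-r / (2.0 * a0))

val, _ = quad(lambda r: R10(r) * r * R21(r) * r**2, 0.0, np.inf)
print(val)   # ~ 1.29 a0: a nonzero overlap, so the 2p -> 1s transition is allowed
\end{verbatim}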
Under reflection, r⃗ → −r⃗. Thus, if ψ_f*ψ_i → +ψ_f*ψ_i, the integrand is odd and P_fi = 0. But ψ_f*ψ_i → (−1)^{ℓ_f}(−1)^{ℓ_i} ψ_f*ψ_i, so ψ_f*ψ_i is even if Δℓ = 0, 2 and ψ_f*ψ_i is odd if Δℓ = ±1. If Δℓ = ±1, it is fair to ask what happens to the "disappeared" angular momentum. The theory of relativity plus quantum mechanics shows that the photon carries a "spin," or intrinsic angular momentum, of either +1ℏ or −ℏ along the propagation direction; these are "right circularly" and "left circularly" polarized photons.
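This Δℓ selection rule can be illustrated numerically by doing the angular part of the integral for a few pairs of states. The sketch below (Python with scipy's sph_harm, not part of these notes) uses m = 0 states and the z-component of r⃗ only, so it is a simplified stand-in for the full P_fi.
\begin{verbatim}
# Numerical illustration of the Delta l selection rule: the angular integral
# Int Y_{l_f,0}* cos(theta) Y_{l_i,0} dOmega (the angular part of the matrix
# element of z) is nonzero for Delta l = 1 and vanishes for Delta l = 0, 2.
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import dblquad

def angular_integral(l_f, l_i):
    def integrand(theta, phi):                        # theta = polar, phi = azimuth
        Yf = sph_harm(0, l_f, phi, theta)             # sph_harm(m, l, azimuth, polar)
        Yi = sph_harm(0, l_i, phi, theta)
        return np.real(np.conj(Yf) * np.cos(theta) * Yi * np.sin(theta))
    val, _ = dblquad(integrand, 0.0, 2.0 * np.pi, lambda x: 0.0, lambda x: np.pi)
    return val

for l_f, l_i in [(0, 1), (1, 2), (0, 0), (0, 2)]:
    print("l_i =", l_i, "-> l_f =", l_f, ":", round(angular_integral(l_f, l_i), 4))
# The Delta l = 1 cases come out nonzero; the Delta l = 0 and 2 cases come out ~0.
\end{verbatim}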
A slightly more detailed discussion follows.¹
¹From R. Eisberg and R. Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (Wiley), 2nd edition.
Index
Gamow's approximation, 167
Gaussian, 32, 50, 52
generalized uncertainty principle, 259, 264
ground state, 123
Hamiltonian, 112, 114, 116, 119
hamiltonian, 111
Heisenberg Uncertainty Principle, 66–69, 74, 76
Hermite functions, 207
Hermite polynomials, 210
Hermite's equation, 206
Hermitian operator, 112, 114
Hermitian operators, 104
Huygens' principle, 8, 13
infinite square well, 125
Laguerre polynomial, 248
Laurent Schwartz, 92
Legendre polynomial, 238
normalization, 26, 43, 45, 47, 123, 127
orthogonality property, 136
orthonormal set, 77, 78
parity, 203
Parseval's theorem, 89, 100
Paul Dirac, 92
phasors, 14–16
potential
    barrier potential, 160
    finite square well, 130, 132, 182, 190
    hydrogen atom, 228, 242
    infinite square well, 45, 47, 121, 123, 132, 143, 147
    simple harmonic oscillator, 202, 208, 211
    step-potential, 155, 160
Poynting vector, 10
Probability Current, 57
probability flux, 58
propagator, 218
quantization of energy, 46, 122, 128
quantum initial value problem, 50
Quantum Mechanics Convention, 59
quantum tunneling, 160
recursion relation, 206
reflection coefficient, 151, 159
Schrödinger, 47–52, 54, 55, 72
Schrödinger equation, 40, 43, 45, 46
Schwarz inequality, 261
selection rules, 272
sinc(x), 17
state function, 23, 25, 31–33, 36, 37, 41, 42, 49, 50, 112, 125
state functions, 43
stationary states, 117, 143
Thomas Young, 8
Time Dependent Schrödinger Equation, 119–121
time dependent Schrödinger equation, 135
Time Independent Schrödinger Equation, 116, 119, 121
time independent Schrödinger equation, 125
transmission coefficient, 159, 162
wave function, 23, 42
WKB Approximation, 167
zero-point energy, 123, 134, 208