Quantum Dynamics Applications in Biological and Materials Systems (Eric R - Bittner)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
Preface
Much of this was committed to text over the course of my sabbatical at Cambridge
in 2007, and I wish to thank all the students, postdocs, and colleagues who helped
track down typos, clarify sections, provide figures, and so on. I thank the editors at
CRC/Taylor & Francis for keeping me on target to complete this. I also thank the
postdocs and graduate students in my group for contributing figures, proofreading,
and working problems.
Eric R. Bittner
Cambridge, U.K. & Houston, Texas
About the Author
Eric Bittner is currently the John and Rebecca Moores Distinguished Professor
of chemical physics at the University of Houston. He received his PhD from the
University of Chicago in 1994 and was a National Science Foundation Postdoctoral
Fellow at the University of Texas at Austin and at Stanford University before moving
to the University of Houston in 1997. His accolades include an NSF Career Award and
a Guggenheim Fellowship. He has also held visiting appointments at the University
of Cambridge, the École Normale Supérieure, Paris, and at Los Alamos National
Lab. His research is focused in the areas of quantum dynamics as applied to organic
polymer semiconductors, organic light-emitting diodes (OLEDs), solar cells, and
energy transport in biological systems.
1 Survey of Classical Mechanics
Quantum mechanics is in many ways the culmination of many hundreds of years
of work and thought about how mechanical things move and behave. Since ancient
times, scientists have wondered about the structure of matter and have tried to develop
a generalized and underlying theory that governs how matter moves at all length scales.
For ordinary objects, the rules of motion are very simple. By ordinary, I mean
objects that are more or less on the same length and mass scale as you and I, say
(conservatively) 10−7 m to 106 m and 10−25 g to 108 g moving at less than 20% of the
speed of light. In other words, almost everything you can see and touch and hold
obeys what are called classical laws of motion. The term classical means that the
basic principles of this class of motion have their foundation in antiquity. Classical
mechanics is an extremely well-developed area of physics. While you may think that
because classical mechanics has been studied extensively for hundreds of years there
really is little new development in this field, it remains a vital and extremely active area
of research. Why? Because the majority of the universe “lives” in a dimensional realm
where classical mechanics is an extremely good description. Classical mechanics is the workhorse
for atomistic simulations of fluids, proteins, and polymers. It provides the basis for
understanding chaotic systems. It also provides a useful foundation for many of the
concepts in quantum mechanics.
Quantum mechanics provides a description of how matter behaves at very small
length and mass scales, that is, the realm of atoms, molecules, and below. It has been
developed over the past century to explain a series of experiments on atomic systems
that could not be explained using purely classical treatments. The advent of quantum
mechanics forced us to look beyond the classical theories. However, it was not a drastic
and complete departure. At some point, the two theories must correspond so that
classical mechanics is the limiting behavior of quantum mechanics for macroscopic
objects. Consequently, many of the concepts we will study in quantum mechanics
have direct analogs to classical mechanics: momentum, angular momentum, time,
potential energy, kinetic energy, and action.
Much as classical music is cast in a particular style, classical mechanics is based
upon the principle that the motion of a body can be reduced to the motion of a point
particle with a given mass m, position x, and velocity v. In this chapter, we will
review some of the concepts of classical mechanics which are necessary for studying
quantum mechanics. We will cast these in forms whereby we can move easily back
and forth between classical and quantum mechanics. We will first discuss Newtonian
motion and cast this into the Lagrangian form. We will then discuss the principle of
least action and Hamiltonian dynamics and the concept of phase space.
Postulate 1.1
Law of Inertia: A free particle always moves without acceleration.
That is, a particle that is not under the influence of an outside force moves along a
straight line at constant speed, or remains at rest.
Postulate 1.2
Law of Motion: The rate of change of an object’s momentum is equal to the force
acting upon it.
dp/dt = F    (1.1)
This is equivalent to F = ma where a = dv/dt is the acceleration. Note that in
writing F = ma, we assume that the mass does not change with time.
Postulate 1.3
Law of Action: For every action, there is an equal and opposite reaction.
F₁₂ = −F₂₁    (1.2)
This is to say that if particle 1 pushes on particle 2 with force F, then particle 2
pushes on particle 1 with a force −F. In SI units, the unit of force is the newton,
1 N = 1 kg·m·s⁻².
Newton’s Principia set the theoretical basis of mathematical mechanics and anal-
ysis of physical bodies. The equation that force equals mass times acceleration is the
fundamental equation of classical mechanics. Stated mathematically,
m ẍ = f (x) (1.3)
The dots refer to differentiation with respect to time; we will use this notation for time
derivatives throughout. We may also write ẋ = dx/dt. So,
ẍ = d²x/dt²    (1.4)
For now we are limiting ourselves to one particle moving in one dimension. For
motion in more dimensions, we need to introduce vector components. In Cartesian
m ÿ = −mg (1.18)
where g is the acceleration due to gravity and −mg is the attractive gravitational
force. In x, we have
m ẍ = 0 (1.19)
since there are no net forces acting in the x direction. Hence, we can solve the x
equation immediately since v̇ x = 0 and thus, x(t) = vx (0)t + xo = vo t cos(φ). For
the y equation, denote v y = ẏ,
m dv_y/dt = −mg    (1.20)
Integrating, v_y = −gt + const. Evaluating this at t = 0, v_y(0) = v_o sin(φ) = const.
Thus, v_y(t) = v_o sin(φ) − gt; that is, integrating once more with respect to time,

y = v_o sin(φ) t − (g/2) t²    (1.23)
So the trajectory in y is parabolic. To determine the point of impact, we seek the roots
of the equation

v_o sin(φ) t − (g/2) t² = 0    (1.24)
[Figure: a projectile launched with speed v₀ at angle φ toward a target a distance X away.]
Either t = 0 or

t_I = (2v_o/g) sin(φ)    (1.25)
We can now ask this question: What angle do we need to point our cannon to hit a
target X meters away? In time t I the cannon ball will travel a distance x = vo cos(φ)t I .
Substituting our expression for the impact time:
X = (2v_o²/g) cos(φ) sin(φ) = v_o² sin(2φ)/g    (1.26)
Thus,
sin(2φ) = gX/v_o²    (1.27)
One can also see that the maximum range is obtained when φ = π/4.
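As a quick numerical check of Eqs. (1.25)–(1.27), the following sketch (our own construction, with illustrative values for v_o and g; the function names are not from the text) computes the impact time, the range, and the aiming angle:

```python
import math

g = 9.81   # acceleration due to gravity, m/s^2
v0 = 50.0  # launch speed, m/s (arbitrary illustrative value)

def impact_time(v0, phi):
    """Nonzero root of Eq. (1.24): t_I = (2 v0 / g) sin(phi)."""
    return 2.0 * v0 * math.sin(phi) / g

def launch_range(v0, phi):
    """Eq. (1.26): X = v0^2 sin(2 phi) / g."""
    return v0 ** 2 * math.sin(2.0 * phi) / g

def aiming_angle(v0, X):
    """Invert Eq. (1.27), sin(2 phi) = g X / v0^2 (smaller of the two roots)."""
    return 0.5 * math.asin(g * X / v0 ** 2)

# Maximum range occurs at phi = pi/4; aiming back at a computed range
# recovers the original launch angle.
X_max = launch_range(v0, math.pi / 4)
phi_back = aiming_angle(v0, launch_range(v0, 0.3))
```

Note that for X below the maximum range there are two solutions of Eq. (1.27); the sketch returns the flatter trajectory.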
takes the least possible value given a path that starts at xo at the initial time and ends
at x f at the final time.
Let us take x(t) to be a function for which S is minimized. This means that S
must increase for any variation about this path, x(t) + δx(t). Since the endpoints are
specified, δx(t_o) = δx(t_f) = 0, and the change in S upon replacement of x(t) with
x(t) + δx(t) is
δS = ∫_{t_o}^{t_f} L(x + δx, ẋ + δẋ, t) dt − ∫_{t_o}^{t_f} L(x, ẋ, t) dt = 0    (1.29)
This is zero because S is a minimum. Now, we can expand the integrand in the first
term
L(x + δx, ẋ + δẋ, t) = L(x, ẋ, t) + (∂L/∂x) δx + (∂L/∂ẋ) δẋ    (1.30)
Thus, we have
∫_{t_o}^{t_f} [ (∂L/∂x) δx + (∂L/∂ẋ) δẋ ] dt = 0    (1.31)
Since δ ẋ = dδx/dt and integrating the second term by parts
δS = [ (∂L/∂ẋ) δx ]_{t_o}^{t_f} + ∫_{t_o}^{t_f} [ ∂L/∂x − (d/dt)(∂L/∂ẋ) ] δx dt = 0    (1.32)
The surface term vanishes because of the condition imposed above. This leaves the
integral. It too must vanish and the only way for this to happen is if the integrand
itself vanishes. Thus we have
∂L/∂x − (d/dt)(∂L/∂ẋ) = 0    (1.33)
L is known as the Lagrangian. Before moving on, we consider the case of a free
particle. The Lagrangian in this case must be independent of the position of the particle
since a freely moving particle defines an inertial frame. Since space is isotropic, L
must depend upon only the magnitude of v and not its direction. Hence,
L = L(v²)    (1.34)
L =T −V (1.38)
L has units of energy and gives the difference between the energy of motion and the
energy of location.
This leads to the equations of motion:
(d/dt)(∂L/∂v) = ∂L/∂x    (1.39)
Substituting L = T − V yields
m v̇ = −∂V/∂x    (1.40)
which is identical to Newton’s equations given above once we identify the force as
the minus of the derivative of the potential. For the free particle, v = const. Thus,
S = ∫_{t_o}^{t_f} (m/2) v² dt = (m/2) v² (t_f − t_o)    (1.41)
You may be wondering at this point why we needed a new function and derived
all this from some minimization principle. The reason is that for some systems we
have constraints on the type of motion they can undertake. For example, there may be
bonds, hinges, and other mechanical hindrances that limit the range of motion a given
particle can take. The Lagrangian formalism provides a mechanism for incorporating
these extra effects in a consistent and correct way. In fact we will use this principle
later in deriving a variational solution to the Schrödinger equation by constraining
the wave function solutions to be orthonormal.
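The variational statement above can also be checked numerically. The sketch below (our own construction, not from the text) discretizes the free-particle action S = ∫ (m/2) ẋ² dt on a grid and compares the straight-line path with a perturbed path sharing the same endpoints; the straight line gives the smaller action:

```python
import math

def action_free(path, m=1.0, dt=0.01):
    """Discretized free-particle action: S = sum over steps of (m/2) v_i^2 dt."""
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(len(path) - 1))

N = 200
dt = 0.01
# Straight-line (classical) path from x = 0 at t = 0 to x = 1 at t = N*dt:
straight = [i / N for i in range(N + 1)]
# Perturbed path with the same endpoints (delta x vanishes at t_o and t_f):
wiggle = [straight[i] + 0.05 * math.sin(math.pi * i / N) for i in range(N + 1)]

S0 = action_free(straight, dt=dt)  # matches Eq. (1.41): (m/2) v^2 (t_f - t_o)
S1 = action_free(wiggle, dt=dt)    # strictly larger than S0
```

Here v = 0.5 and t_f − t_o = 2, so Eq. (1.41) predicts S = 0.25 for the straight line; any admissible variation increases S.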
Lastly, it is interesting to note that v² = (dl/dt)² = (dl)²/(dt)², where dl is the
element of arc length in a given coordinate system. Thus, within the Lagrangian
formalism it is easy to convert from one coordinate system to another. For example, in
Cartesian coordinates dl² = dx² + dy² + dz², and thus v² = ẋ² + ẏ² + ż². In cylindrical
coordinates, dl² = dr² + r² dφ² + dz², and we have the Lagrangian
L = (m/2)(ṙ² + r² φ̇² + ż²)    (1.42)
and for spherical coordinates, dl² = dr² + r² dθ² + r² sin²θ dφ²; hence,

L = (m/2)(ṙ² + r² θ̇² + r² sin²θ φ̇²)    (1.43)
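The Euler–Lagrange machinery can be turned over to a computer algebra system. A minimal sketch using SymPy (assuming it is available; the helper euler_lagrange is ours) derives the equations of motion for a free particle in plane polar coordinates, i.e., Eq. (1.42) with ż suppressed:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
r = sp.Function('r')(t)
phi = sp.Function('phi')(t)

# Free-particle Lagrangian in plane polar coordinates:
L = sp.Rational(1, 2) * m * (r.diff(t) ** 2 + r ** 2 * phi.diff(t) ** 2)

def euler_lagrange(L, q):
    """d/dt (dL/d q_dot) - dL/dq, i.e., the left side of Eq. (1.33)."""
    return sp.diff(sp.diff(L, q.diff(t)), t) - sp.diff(L, q)

eq_r = sp.simplify(euler_lagrange(L, r))      # m r'' - m r phi'^2 = 0
eq_phi = sp.simplify(euler_lagrange(L, phi))  # d/dt (m r^2 phi') = 0
```

The φ equation is exactly the conservation of m r² φ̇ used later in the text.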
FIGURE 1.1 Vector diagram for motion in central forces. The particle’s motion is along the
Z axis, which lies in the plane of the page.
(d/dt)(∂L/∂φ̇) − ∂L/∂φ = (d/dt)(m r² sin²θ φ̇) = 0    (1.52)

(d/dt)(∂L/∂θ̇) − ∂L/∂θ = (d/dt)(m r² θ̇) − m r² sinθ cosθ φ̇² = 0    (1.53)

(d/dt)(∂L/∂ṙ) − ∂L/∂r = (d/dt)(m ṙ) − m r θ̇² − m r sin²θ φ̇² + kr = 0    (1.54)
We now prove that the motion of a particle in a central force field lies in a plane
containing the origin. The force acting on the particle at any given time is in a direction
toward the origin. Now, place an arbitrary Cartesian frame centered about the particle
with the z axis parallel to the direction of motion as sketched in Figure 1.1. Note
that the y axis is perpendicular to the plane of the page, and hence, there is no force
component in that direction. Consequently, the motion of the particle is constrained
to lie in the zx plane, that is the plane of the page, and there is no force component
that will take the particle out of this plane.
Let us make a change of coordinates by rotating the original frame to a new one
whereby the new z is perpendicular to the plane containing the initial position and
velocity vectors. In Figure 1.1, this new z axis would be perpendicular to the page
and would contain the y axis we placed on the moving particle. In terms of these new
coordinates, the Lagrangian will have the same form as previously since our initial
choice of axis was arbitrary. However, now we have some additional constraints.
Because the motion is now constrained to lie in the x y plane, θ = π/2 is a constant,
and θ˙ = 0. Thus cos(π/2) = 0 and sin(π/2) = 1 in the previous equations. From the
equations for φ we find
(d/dt)(m r² φ̇) = 0    (1.55)
or
m r² φ̇ = const = p_φ    (1.56)
ṙ² = −p_φ²/(m² r²) − kr² + b    (1.59)

that is,

ṙ = [ −p_φ²/(m² r²) − kr² + b ]^{1/2}    (1.60)
Integrating once again with respect to time,

t − t_o = ∫ dr/ṙ    (1.61)

       = ∫ r dr / [ −p_φ²/m² − kr⁴ + br² ]^{1/2}    (1.62)

       = (1/2) ∫ dx / [ a + bx + cx² ]^{1/2}    (1.63)

where in the last line we have substituted x = r², with a = −p_φ²/m² and c = −k.
[Figure: rotation of the original (X, Y, Z) axes into the primed (X′, Y′, Z′) frame, in which Z′ is perpendicular to the plane of motion.]
where

A = [ b² − ω² p_φ²/m² ]^{1/2}    (1.65)
What we see then is that r follows an elliptical path in a plane determined by the
initial velocity.
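Conservation of p_φ = m r² φ̇ is easy to verify numerically. The sketch below (our own, with arbitrary parameters) integrates the isotropic harmonic central force F = −k r with the velocity-Verlet method and checks that the angular momentum stays constant while the particle traces the expected ellipse:

```python
import math

# Isotropic harmonic central force F = -k r, matching the kr term of Eq. (1.54).
m, k = 1.0, 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 0.5   # initial velocity transverse to the radial direction
dt = 1e-3

def accel(x, y):
    return -k * x / m, -k * y / m

Lz0 = m * (x * vy - y * vx)   # angular momentum, the p_phi of Eq. (1.56)
ax, ay = accel(x, y)
for _ in range(20000):        # integrate to t = 20 with velocity Verlet
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2

Lz = m * (x * vy - y * vx)
# For these initial conditions the exact orbit is the ellipse
# x(t) = cos(t), y(t) = 0.5 sin(t), and Lz stays equal to Lz0.
```

Velocity Verlet respects the rotational symmetry of a central force, so the angular momentum is conserved to machine accuracy, not just to truncation order.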
This example also illustrates another important point that has tremendous impact
on molecular quantum mechanics, namely, that the angular momentum about the axis
of rotation is conserved. We can choose any axis we want. In order to avoid confusion,
let us define χ as the angular rotation about the body-fixed Z axis and φ as angular
rotation about the original Z axis. So our conservation equations are
m r² χ̇ = p_χ    (1.66)

m r² sin²θ φ̇ = p_φ    (1.67)
for some arbitrary fixed Z axis. The angle θ will also have an angular momentum
associated with it, p_θ = m r² θ̇, but we do not have an associated conservation principle
for this term since it varies with φ. We can connect p_χ with p_θ and p_φ about the other
axis via
pχ dχ = pθ dθ + pφ dφ (1.68)
Consequently,
Here we see that the angular momentum vector remains fixed in space in the
absence of any external forces. Once an object starts spinning, its axis of rotation re-
mains pointing in a given direction unless something acts upon it (torque); in essence,
in classical mechanics we can fully specify L x , L y , and L z as constants of the motion
since d L/dt = 0. In a later chapter, we will cover the quantum mechanics of rotations
in much more detail. In the quantum case, we will find that one cannot make such a
precise specification of the angular momentum vector for systems with low angular
momentum. We will, however, recover the classical limit in the end as we consider
the limit of large angular momenta.
E =T +V (1.76)
which says that the energy of the system can be written as the sum of two different
terms: the kinetic energy or energy of motion and the potential energy or the energy
of location.
One can also prove that linear momentum is conserved when space is homoge-
neous, that is, when we can translate our system by some arbitrary amount ε and our
dynamical quantities must remain unchanged. We will prove this in the problem sets.
where a and b are the beginning and end of the path. In multiple dimensions, we have to
extend this concept so that the integral is taken along some arbitrary path.
W_i = Δs_i F(x_i, y_i, z_i)    (1.78)

W ≈ Σ_{i}^{N} Δs_i F(x_i, y_i, z_i)    (1.79)

Taking Δs → 0 and N → ∞, we can write the work performed in moving along
path C as

W = lim_{Δs_i → 0} Σ_{i}^{N} Δs_i F(x_i, y_i, z_i) = ∫_C F(s) ds    (1.80)
Now, suppose the force can be written as the gradient of some scalar potential
function

F = ∇G    (1.81)
and that our curve C can be parametrized via a single variable t. For example, t could
be the length traveled along C or the time. Thus,
dG/dt = ∇G · (ds/dt) = F(s(t)) · (ds/dt)    (1.82)
Inserting this into the work integral,

W = ∫_C F(s) · ds = ∫_C F(s(t)) · (ds/dt) dt = ∫_C (dG/dt) dt = G(b) − G(a)    (1.83)

where a and b are the two endpoints. As you can see, the integral now depends only
upon the two endpoints and does not depend upon the particular details of path C.
Suppose an object starts at point A and moves about some arbitrary closed path
P such that after some time it is again at point A. It may still be moving, but the net
work done on the object is exactly zero. That is, for a conservative force
W = ∮ F(s) · ds = 0    (1.84)
Moving on to C2 , it is easier to break this into two segments. Along the segment from
(0,0) to (1,0), x = s and y = 0. Thus,
W₂⁽¹⁾ = ∫₀¹ s ds = 1/2    (1.86)
Along the next segment from (1, 0) to (1, 1), x = 1 and y = s, so we integrate
W₂⁽²⁾ = ∫₀¹ (1 + s) ds = 3/2    (1.87)
then add W₂ = W₂⁽¹⁾ + W₂⁽²⁾ = 2. Finally, along the parabolic path, let x = s and
y = s² and we integrate
W₃ = ∫₀¹ (s + s²) ds = 5/6    (1.88)
Clearly, we are not dealing with a conservative force in this case! In fact, in most
cases, line integrals depend upon the path taken.
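The path (in)dependence of a line integral is easy to probe numerically. Because the force field used for paths C₁–C₃ above is not fully reproduced in this excerpt, the sketch below (our own) uses two stand-in fields: the nonconservative F = (−y, x) and the conservative F = ∇(xy) = (y, x); only the latter gives the same work along both paths from (0, 0) to (1, 1):

```python
def line_integral(Fx, Fy, path, n=20000):
    """Approximate W = integral of F . ds along a path r(t), t in [0, 1]."""
    W = 0.0
    for i in range(n):
        x0, y0 = path(i / n)
        x1, y1 = path((i + 1) / n)
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)   # midpoint rule
        W += Fx(xm, ym) * (x1 - x0) + Fy(xm, ym) * (y1 - y0)
    return W

straight = lambda t: (t, t)       # straight line from (0,0) to (1,1)
parabola = lambda t: (t, t * t)   # parabolic path, same endpoints

# Nonconservative field (hypothetical example): F = (-y, x)
W1 = line_integral(lambda x, y: -y, lambda x, y: x, straight)   # 0
W2 = line_integral(lambda x, y: -y, lambda x, y: x, parabola)   # 1/3
# Conservative field F = grad(xy) = (y, x): W = G(b) - G(a) = 1 on any path
W3 = line_integral(lambda x, y: y, lambda x, y: x, straight)
W4 = line_integral(lambda x, y: y, lambda x, y: x, parabola)
```

The first field gives different work on the two paths; the gradient field reproduces G(b) − G(a) = 1 regardless of the route, as Eq. (1.83) requires.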
∂H/∂p = q̇    (1.91)
These last two equations give the conservation conditions in the Hamiltonian for-
malism. If H is independent of the position of the particle, then the generalized
momentum, p, is constant in time. If the potential energy is independent of time, the
Hamiltonian gives the total energy of the system,
H =T +V (1.92)
where T is the kinetic energy that depends upon the velocities of the N particles in
the system and V is the potential energy describing the interaction between all the
particles and any external forces. V is the energy of position whereas T is the energy
of motion. For a single particle moving in three dimensions,
T = (m/2)(v_x² + v_y² + v_z²)    (1.94)
If we write the momentum as px = mvx , then
T = (1/2m)(p_x² + p_y² + p_z²)    (1.95)
Notice that we can also define the momentum as the velocity derivative of T :
p_x = ∂T/∂v_x    (1.96)
This defines a generalized momentum such that qx is the conjugate coordinate to
px and (qx , px ) are a pair of conjugate variables. This relation between T and px is
important since we can define the canonical momentum in any coordinate frame. In
the Cartesian frame, px = mvx . However, in other frames, this will not be so simple.
We can also define the following relations:
∂H/∂p_i = ∂T/∂p_i = p_i/m = ∂q_i/∂t    (1.97)

∂H/∂q_i = ∂V/∂q_i = −F_i = −∂(m v_i)/∂t    (1.98)
where i now denotes a general coordinate (not necessarily x, y, z). In short, we can
write the following equations of motion:
∂H/∂p_i = ∂q_i/∂t    (1.99)

−∂H/∂q_i = ∂p_i/∂t    (1.100)
These hold in any coordinate frame and are termed Hamilton’s equations.
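Hamilton's equations lend themselves directly to numerical integration. The sketch below (our own example) propagates a harmonic oscillator, H = p²/2m + kq²/2, with the symplectic Euler method, updating p from −∂H/∂q and then q from ∂H/∂p; the energy error stays bounded rather than drifting, which is the hallmark of symplectic schemes:

```python
# Symplectic Euler integration of Hamilton's equations, Eqs. (1.99)-(1.100),
# for H = p^2/(2m) + k q^2/2 with m = k = 1 (arbitrary choice).
m, k = 1.0, 1.0
q, p = 1.0, 0.0
dt = 1e-3

def H(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

E0 = H(q, p)
for _ in range(100000):       # integrate to t = 100
    p += -k * q * dt          # dp/dt = -dH/dq, using the current q
    q += (p / m) * dt         # dq/dt = +dH/dp, using the updated p
E = H(q, p)
# |E - E0| remains small (of order dt) even after many periods.
```

A naive (non-symplectic) Euler update, with both variables advanced from the old values, would show a steady energy drift instead.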
Note that p_θ is the angular momentum of the system. Thus, we can write

H = (1/2m)( p_r² + p_θ²/r² ) + V(r, θ)    (1.109)
Next, consider the case where V (r, θ ) has no angular dependence. Thus,
∂p_r/∂t = −∂H/∂r = p_θ²/(m r³) − ∂V/∂r    (1.110)

∂p_θ/∂t = −∂H/∂θ = −∂V/∂θ = 0    (1.111)

∂r/∂t = ∂H/∂p_r = p_r/m    (1.112)

∂θ/∂t = ∂H/∂p_θ = p_θ/(m r²)    (1.113)
Notice that pθ does not change in time; that is, the angular momentum is a constant
of the motion. The radial force we obtain from ∂ pr /∂t = Fr is
F_r = p_θ²/(m r³) − ∂V/∂r    (1.114)
The first term is constant (since pθ = const) and represents the radial force produced
by the angular momentum. It always points outward toward larger values of r and
is termed the centrifugal force. The second term is the force due to the attraction
between the moving object and the origin. It could be the gravitational forces, the
Coulombic force between charged particles, and so forth. Using the expression for
pθ (Equation 1.111), we can also write the force equation as
(mvθ r 2 )2 ∂V ∂V
Fr = − = mvθ2r − (1.115)
mr 3 ∂r ∂r
If the two forces counterbalance each other, then the net force is Fr = 0 and we have
m v_θ² r = ∂V/∂r    (1.116)
Since v_θ = θ̇ = const, θ = ωt + const, where ω is the angular velocity; using
v_θ = ω, we can write
m ω² r = ∂V/∂r    (1.117)
Finally, we note that the angular velocity is related to the linear velocity by ω = v/r, so

m v²/r = ∂V/∂r    (1.118)
Hence we have a relation between the kinetic energy T and the potential energy V
for a centro-symmetric system:
m v² = 2T = r ∂V/∂r    (1.119)
This relation is extremely useful in deriving the classical orbital motion for Coulomb-
bound charges as in the hydrogen atom or for planetary motion.
−ω² sin(x) = −ω² ( x − x³/6 + ··· )    (1.120)

v̇ = −ω² x = −(k/m) x    (1.121)
which is the equation for harmonic motion. So, for small initial displacements, we see
that the pendulum oscillates back and forth with an angular frequency ω. For large
initial displacements, x_o = π, or if we impart some sufficiently large initial velocity
v_o, the pendulum does not oscillate back and forth but instead undergoes rotational
motion (spinning!) in one direction or the other.
FIGURE 1.2 Tangent field for simple pendulum with ω = 1. The superimposed curve is a
linear approximation to the pendulum motion.
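The two regimes can be seen in a direct simulation. The sketch below (our own, using velocity Verlet with ω = 1 as in Figure 1.2) evolves v̇ = −ω² sin(x) from a small initial displacement and from a large initial velocity:

```python
import math

# Pendulum x'' = -omega^2 sin(x), omega = 1 as in Figure 1.2. With unit mass
# the separatrix energy is E = v^2/2 + omega^2 (1 - cos x) = 2 omega^2:
# below it the pendulum oscillates, above it the pendulum keeps rotating.
omega = 1.0
dt = 1e-3

def evolve(x, v, steps=40000):
    """Velocity-Verlet integration for `steps` time steps of size dt."""
    a = -omega ** 2 * math.sin(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a2 = -omega ** 2 * math.sin(x)
        v += 0.5 * (a + a2) * dt
        a = a2
    return x, v

# Small initial displacement: oscillation, x stays within [-0.3, 0.3]
x_osc, _ = evolve(0.3, 0.0)
# Large initial velocity (E > 2 omega^2): rotation, x winds past 2 pi
x_rot, _ = evolve(0.0, 2.5)
```

The closed curves of Figure 1.2 correspond to the first case; the open, wave-like curves above and below them correspond to the second.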
We consider here a free particle with mass m and charge e in an electromagnetic field.
The Hamiltonian is
H = p_x ẋ + p_y ẏ + p_z ż − L    (1.122)

  = ẋ ∂L/∂ẋ + ẏ ∂L/∂ẏ + ż ∂L/∂ż − L    (1.123)
Our goal is to write this Hamiltonian in terms of momenta and coordinates.
For a charged particle in a field, the force acting on the particle is the Lorentz
force. Here it is useful to introduce a vector potential A and a scalar potential φ and
to work in centimeter-gram-second (cgs) units:

F = (e/c) v × (∇ × A) − (e/c) ∂A/∂t − e∇φ    (1.124)
The force in the x direction is given by

F_x = m (d/dt)ẋ = (e/c)[ ẏ ∂A_y/∂x + ż ∂A_z/∂x ]
      − (e/c)[ ẏ ∂A_x/∂y + ż ∂A_x/∂z + ∂A_x/∂t ] − e ∂φ/∂x    (1.125)
with the remaining components given by cyclic permutation. Since
dA_x/dt = ∂A_x/∂t + ẋ ∂A_x/∂x + ẏ ∂A_x/∂y + ż ∂A_x/∂z    (1.126)
the force in x can be written as

F_x = −(e/c) dA_x/dt + (∂/∂x)[ (e/c) v · A − eφ ]    (1.127)
and we find that the Lagrangian is

L = (m/2) ẋ² + (m/2) ẏ² + (m/2) ż² + (e/c) v · A − eφ    (1.128)
where φ is a velocity independent and static potential.
Continuing on, the Hamiltonian is

H = (m/2)(ẋ² + ẏ² + ż²) + eφ    (1.129)

  = (1/2m)[ (m ẋ)² + (m ẏ)² + (m ż)² ] + eφ    (1.130)
The kinetic momenta, m ẋ, are related to the canonical momenta derived from the
Lagrangian via the relation

p = ∂L/∂ẋ    (1.131)
We see here an important concept relating the velocity and the momentum. In the
absence of a vector potential, the velocity and the momentum are parallel. However,
when a vector potential is included, the actual velocity of a particle is no longer
parallel to its momentum and is in fact deflected by the vector potential.
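This distinction can be made explicit with a computer algebra system. A minimal SymPy sketch (assuming SymPy is available; the symbols are generic placeholders) differentiates the Lagrangian of Eq. (1.128) and exhibits p_x = mẋ + (e/c)A_x, so that mẋ = p_x − (e/c)A_x:

```python
import sympy as sp

# Generic placeholder symbols; A_x, A_y, A_z are the vector-potential components.
m, e, c = sp.symbols('m e c', positive=True)
xdot, ydot, zdot = sp.symbols('xdot ydot zdot')
Ax, Ay, Az, phi = sp.symbols('A_x A_y A_z varphi')

# Lagrangian of Eq. (1.128) in cgs units:
L = (sp.Rational(1, 2) * m * (xdot ** 2 + ydot ** 2 + zdot ** 2)
     + (e / c) * (xdot * Ax + ydot * Ay + zdot * Az) - e * phi)

px = sp.diff(L, xdot)                 # canonical momentum: m xdot + (e/c) A_x
mv_x = sp.expand(px - (e / c) * Ax)   # kinetic momentum m xdot recovered
```

The canonical momentum thus differs from mẋ by the field term (e/c)A_x, which is exactly the deflection described above.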
q̇ = ∂H/∂p,    ṗ = −∂H/∂q    (1.137)
Thus,
dG/dt = ∂G/∂t + (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q)    (1.138)

dG/dt = ∂G/∂t + {G, H}    (1.139)
where {A, B} is called the Poisson bracket of two dynamical quantities, G and H :
{G, H} = (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q)    (1.140)
We can also define a linear operator L as generating the Poisson bracket with the
Hamiltonian:
L G = (1/i) {H, G}    (1.141)
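The Poisson bracket of Eq. (1.140) is straightforward to implement symbolically. The SymPy sketch below (our own helper, assuming SymPy is available) verifies that the angular momentum M_z = x p_y − y p_x has a vanishing bracket with a central-potential Hamiltonian (here an isotropic oscillator as a concrete example) and is therefore a constant of the motion:

```python
import sympy as sp

x, y, px, py, m, k = sp.symbols('x y p_x p_y m k', positive=True)

def poisson(G, H, coords, moms):
    """Poisson bracket {G, H}, Eq. (1.140), summed over conjugate pairs."""
    return sum(sp.diff(G, q) * sp.diff(H, p) - sp.diff(G, p) * sp.diff(H, q)
               for q, p in zip(coords, moms))

# Central-potential Hamiltonian (isotropic harmonic oscillator):
H = (px ** 2 + py ** 2) / (2 * m) + sp.Rational(1, 2) * k * (x ** 2 + y ** 2)
Mz = x * py - y * px              # angular momentum about z

bracket = sp.simplify(poisson(Mz, H, (x, y), (px, py)))
# bracket == 0, so dMz/dt = {Mz, H} = 0 by Eq. (1.139).
```

The same helper reproduces Hamilton's equations themselves: {x, H} = p_x/m is just ẋ.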
G = pq (1.143)
If the trajectories of the system are bounded, both p and q are periodic in time and
are therefore finite. Thus, the average must vanish as T → ∞ giving
⟨p q̇ + q ṗ⟩ = 0    (1.148)

2⟨T⟩ = −⟨q F⟩    (1.149)

and, for a potential of the form V ∝ qⁿ,

2⟨T⟩ = n⟨V⟩    (1.151)
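The virial relation can be checked by direct time averaging. The sketch below (our own) averages T and V over one period of the exact harmonic solution x(t) = cos t with m = k = 1; since V ∝ x², here n = 2 and the relation reduces to ⟨T⟩ = ⟨V⟩:

```python
import math

# Time averages over one period of x(t) = cos(t), v(t) = -sin(t) (m = k = 1).
n_samples = 100000
T_avg = V_avg = 0.0
period = 2.0 * math.pi
for i in range(n_samples):
    t = period * i / n_samples
    x = math.cos(t)
    v = -math.sin(t)
    T_avg += 0.5 * v * v      # kinetic energy
    V_avg += 0.5 * x * x      # potential energy
T_avg /= n_samples
V_avg /= n_samples
# Both averages equal 1/4, so 2<T> = 2<V>, the n = 2 case of Eq. (1.151).
```

For a Coulomb potential (n = −1) the same averaging would instead give 2⟨T⟩ = −⟨V⟩, the familiar virial theorem for bound orbits.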
where î, ĵ, and k̂ are the unit vectors along the x, y, z axes. Evaluating the determinant
gives
M = î(y p_z − z p_y) − ĵ(x p_z − z p_x) + k̂(x p_y − y p_x)    (1.155)

  = î M_x + ĵ M_y + k̂ M_z    (1.156)
For motion in the x–y plane, the only term that remains is the Mz term, indicating
that the angular momentum vector points perpendicular to the plane of rotation,
M_z = (x p_y − y p_x) = m(x v_y − y v_x)    (1.157)
Since we have noted that the angular momentum is a constant of the motion, we must
have d Mz /dt = 0. Let us check:
dM_z/dt = m(v_x v_y − v_y v_x + x a_y − y a_x)    (1.158)
where ax = v̇ x is the acceleration in x. Thus,
dM_z/dt = (x F_y − y F_x)    (1.159)
If the force is radial, Fx = Fr cos(θ) and Fy = Fr sin(θ). Likewise, x = r cos(θ ) and
y = r sin(θ). Putting this into the equations, we have
dM_z/dt = r F_r (sin(θ) cos(θ) − sin(θ) cos(θ)) = 0    (1.160)
Taking θ = ωt as above, where ω is the angular frequency, and using v_x =
−rω sin(ωt) and v_y = +rω cos(ωt), we can also write

M_z = m(x v_y − y v_x) = m v r (sin²(ωt) + cos²(ωt)) = m v r    (1.161)
ν_quantum = ν_classical

2 R_H c/n³ = [ e²/(4π² m r³) ]^{1/2}    (1.176)
Now we start canceling terms. To eliminate n we use the expression for the energy
levels,

n = [ R_H h c / |W_n| ]^{1/2}    (1.177)

together with

W = −e²/(2r)    (1.178)
which gives, r = e2 /(2|W |). Turning the crank and eliminating variables where
possible,
R_H = 2π² m e⁴/(c h³)    (1.179)
which gives the Rydberg constant entirely in terms of fundamental physical constants:
m, mass of electron; c, speed of light; and h, Planck’s constant. Thus, we can write
the energy levels in terms of no adjustable parameters:
W_n = −R_H h c/n² = −(2π² m e⁴/h²)(1/n²)    (1.180)
Furthermore, we can go on to show that since
W = −e²/(2r) = −R_H h c/n²    (1.181)
we can solve for the orbital radius r ,
r = (e²/2) n²/(R_H h c)    (1.182)
giving fixed circular orbits for the electrons:

r_n = [ h²/(4π² m e²) ] n² = (ħ²/(m e²)) n²    (1.183)
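Equations (1.179) and (1.183) can be evaluated directly. The sketch below (our own, using rounded CODATA values in cgs-Gaussian units) reproduces the Rydberg constant and the n = 1 orbit radius:

```python
import math

# Fundamental constants in cgs-Gaussian units (rounded CODATA values):
m_e = 9.10938e-28   # electron mass, g
e = 4.80320e-10     # elementary charge, esu
c = 2.99792e10      # speed of light, cm/s
h = 6.62607e-27     # Planck's constant, erg s
hbar = h / (2.0 * math.pi)

# Eq. (1.179): Rydberg constant in wavenumbers (cm^-1)
R_H = 2.0 * math.pi ** 2 * m_e * e ** 4 / (c * h ** 3)
# Eq. (1.183) with n = 1: the Bohr radius, in cm
a_0 = hbar ** 2 / (m_e * e ** 2)
# R_H comes out near 1.0974e5 cm^-1 and a_0 near 5.29e-9 cm (0.529 angstrom).
```

That a handful of fundamental constants reproduces the empirical Rydberg constant to five figures was one of the great early triumphs of the Bohr model.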
W_quantum = W_classical

−(2π² m e⁴/h²)(1/n²) = −e²/(2r)    (1.184)
from the energy expression. Now, take

m ω² r = e²/r²    (1.185)

so that

e² = m r³ ω²    (1.186)
Equating the quantum and classical energies once more,

−(2π² m e⁴/h²)(1/n²) = −(m r³ ω²)/(2r) = −M²/(2m r²)    (1.187)
where M is the angular momentum we derived above. Thus,
M² = (2π² m e⁴/h²) · (2m r²/n²)    (1.188)
Taking our expression for the quantized radii from above,

M² = (h/2π)² n² = ħ² n²    (1.189)
Thus, M = h̄n is the angular momentum. This, too, is quantized in units of Planck’s
constant over 2π.
some area encompassed by a closed path as shown in Figure 1.3. The closed dashed
loop encompasses an area equal to
area = ∮ p(x) dx    (1.190)
If we assume that energy is conserved, then the energy is a constant along the dashed
loop, H ( p, x) = E = const.
Planck required that E = nhν for the quantized harmonic oscillator levels, but
that equation is applicable only to harmonic systems. If the quantization condition is
general, then we should see it appear in a more general context. Let us take a harmonic
oscillator system as an example:
H(p, x) = p²/(2m) + (k/2) x² = E    (1.191)
This is the equation for an ellipse:

1 = p²/(2mE) + (k/(2E)) x²    (1.192)
with major and minor axes a = √(2mE) and b = √(2E/k). The area of an ellipse is
A = πab (1.193)
A = hn (1.196)
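The quantization condition can be checked against the ellipse area. The sketch below (our own) evaluates ∮ p dx numerically for the harmonic oscillator at an arbitrary energy E and compares it with πab = E/ν:

```python
import math

# Phase-space area of H = p^2/2m + k x^2/2 at energy E (arbitrary values).
m, k, E = 1.0, 1.0, 1.5
a = math.sqrt(2.0 * E / k)                 # turning point in x
nu = math.sqrt(k / m) / (2.0 * math.pi)    # oscillator frequency

N = 200000
area = 0.0
for i in range(N):
    x0 = -a + 2.0 * a * i / N
    x1 = -a + 2.0 * a * (i + 1) / N
    xm = 0.5 * (x0 + x1)
    # p(x) on the upper branch of the energy contour:
    p = math.sqrt(max(0.0, 2.0 * m * (E - 0.5 * k * xm * xm)))
    area += p * (x1 - x0)
area *= 2.0   # upper and lower branches together close the loop
# area approaches pi * sqrt(2mE) * sqrt(2E/k) = E / nu
```

Setting this area equal to hn reproduces E = nhν, Planck's condition, as a special case of the general loop-integral quantization.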
[Figure: Compton scattering: an incident photon hν strikes an electron e⁻ at rest; a scattered photon hν′ emerges at angle θ, and the electron recoils with momentum mv at angle φ.]
Hence, the electrons need not move in strictly circular orbits; they only need to move
in closed paths such that
nh = ∮ p(x) dx    (1.197)
hν = hν′ + (m v²)/2    (1.198)
Furthermore, the x and y components of the momentum must also be conserved.
Here, p and p′ are the incident and final momenta of the photon, v is the final velocity
of the electron, and we take the incident momentum to be along the x axis.
m² v² = p² + p′² − 2 p p′ cos θ    (1.203)

m² v² = 2m(hν − hν′)    (1.204)
Compton then postulated that the momentum of a photon is given by h/λ = hν/c.
Thus, replacing the frequency with c/λ,

2mch(1/λ − 1/λ′) = h²( 1/λ² + 1/λ′² − (2 cos θ)/(λλ′) )    (1.206)
λ′ = λ + (h/mc)(1 − cos θ)    (1.208)
which is the Compton scattering formula. The factor h/mc is the Compton wavelength
(λ_c = 0.024263 Å). Since x-ray wavelengths are on the order of 1–3 Å, the assumption
that λ′ ≈ λ is pretty accurate. In fact, if we do a relativistic treatment, we do not need
to make that assumption at all.
If we rewrite this as an energy equation,

E′ = E mc² / ( mc² + E(1 − cos θ) )    (1.209)
Since m is the mass of the electron, mc2 = 0.5 MeV is the rest-energy of the electron.
We can also write this as
1 1 1
= + (1 − cos θ) (1.210)
E E mc2
Plotting 1/E′ vs 1 − cos θ should give a straight line with the slope being 1/(m_e c²).
In Table 1.1 and Figure 1.5 we show some data measured by the author as part of
an Experimental Physics course. Using the data, we can calculate the rest-mass and
incident energy of the x-ray photon emitted by the 137 Cs source.
TABLE 1.1
Compton Scattering Data

θ (degrees)    E′ (MeV)
30             0.562
45             0.464
60             0.3955
75             0.339
90             0.29
105            0.244
120            0.223
135            0.205
Note: These data were measured by the author back in 1987 in an Experimental
Physics course at Valparaiso University, by recording the energy of x-rays
scattered from a target. From this, you can calculate the rest mass of the
electron and the incident energy of the x-ray emitted by the source (¹³⁷Cs).
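Equation (1.210) can be fit to the data of Table 1.1. The least-squares sketch below (our own) extracts the electron rest energy from the slope and the incident photon energy from the intercept; the results come out near 0.5 MeV and 0.65 MeV, respectively:

```python
import math

# Scattering data from Table 1.1: angle (degrees) and scattered energy E' (MeV).
theta_deg = [30, 45, 60, 75, 90, 105, 120, 135]
E_scat = [0.562, 0.464, 0.3955, 0.339, 0.29, 0.244, 0.223, 0.205]

# Linear fit of 1/E' vs (1 - cos theta), per Eq. (1.210).
xs = [1.0 - math.cos(math.radians(t)) for t in theta_deg]
ys = [1.0 / E for E in E_scat]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

mc2 = 1.0 / slope        # electron rest energy: close to 0.511 MeV
E_in = 1.0 / intercept   # incident photon energy: close to the 0.662 MeV Cs-137 line
```

Even this eight-point student data set reproduces the electron rest energy to within a few percent.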
FIGURE 1.5 Experimental Compton scattering data taken by the author in an Experimental
Physics course at Valparaiso University (1987).
E = hν = pc (1.211)
force is given by

F(z) = −(G M m/R²)( 1 − 2z/R )    (1.218)

Using values for G, M, and R, what is the gravitational force on an object 1 km above
the surface of the Earth?
Problem 1.3 Calculate the work necessary to move a unit of mass m = 1 along the
indicated paths in the x y plane from (0, 1) to (1, 0).
Path 1: Counterclockwise along a circle of radius 1.
Path 2: First from (0, 1) to (1, 1), then from (1, 1) to (1, 0). Use the following force
fields normal to the x y plane for your calculations:
1. F(x, y) = A x y
2. F(x, y) = B log(y)
3. F(x, y) = A/√(x² + y²)
4. F(x, y) = A(x² + y²)²
5. F(x, y) = A exp(−β(x² + y²)²)
Problem 1.7 Compute the flux due to the vector F = 4xy î + 3 ĵ + z³ k̂ through the
surface of a sphere of radius a centered at the origin.
Problem 1.8 A particle moves between two points on the x axis, from A = (−a, 0)
to B = (+2a, 0) under the influence of a radial force f = k/(x 2 + y 2 ) directed toward
the origin. Calculate by direct integration the work done along the following paths:
Problem 1.9 Consider the paths in the previous problem. Assume that no force is
present but that the particle moves in a viscous medium that slows the motion with a
velocity-dependent force F = −γ v. Compute the work required to move the particle
along each of the paths at constant speed.
Problem 1.10 Find the force field associated with the potential V(x, y, z) = xy +
4z/x − 2z²y²/x⁴.
Problem 1.11 Given the force vector F = 3x²y î − (4z² − 6y) ĵ + (cos(x)/2 − y e⁻ᶻ) k̂,
find the potential V from which this force is derived.
Problem 1.12 Show that if the energy is independent of one or more coordinates,
then the momentum associated with those coordinates is constant. Use this result to
show that a classical electron moving in a Coulomb potential has constant angular
velocity.
Problem 1.13 Derive the Euler-Lagrange equations of motion for a pendulum con-
sisting of a mass suspended by a (massless) rigid rod attached to a ball and socket.
Neglect any effects of the Earth’s rotation.
Problem 1.14 Show that the Hamiltonian for an electron moving in a centro-symmetric
potential can be written as

H = p_r²/(2m) + p_θ²/(2m r²) + V(r)

where p_r and p_θ are the respective radial and angular momenta and r is the radial
coordinate.
Problem 1.16 In the previous problem, you showed that if a vector field is conserva-
tive, it is also irrotational. However, is the converse also true? Is an irrotational field
also conservative? Consider the field
v = (−y/(x² + y²), x/(x² + y²), 0)
Bohr’s model of the hydrogen atom was successful in that it gave us a radically new
way to look at atoms. However, it has serious shortcomings. It could not be used to
explain the spectra of He or any multielectron atom. It could not predict the intensities
of the H absorption and emission lines. With de Broglie's hypothesis that matter was
also wavelike,1 there arose a question at the 1925 Solvay conference: What is the
wave equation? De Broglie could not answer this; however, over the next year Erwin
Schrödinger, working in Vienna, published a series of papers in which he deduced
the general form of the equation that bears his name and applied it successfully to the
hydrogen atom.2,3 What emerged was a new set of postulates, much like Newton’s,
that laid the foundations of quantum theory.
The physical basis of quantum mechanics is
1. That matter, such as electrons, always arrives at a point as a discrete chunk,
but that the probability of finding a chunk at a specified position is like the
intensity distribution of a wave.
2. The "quantum state" of a system is described by a mathematical object
called a "wave function" or state vector and is denoted |ψ⟩.
3. The state |ψ⟩ can be expanded in terms of the basis states of a given vector
space, {|φi⟩}, as

|ψ⟩ = Σi |φi⟩⟨φi|ψ⟩   (2.1)

where ⟨φi|ψ⟩ denotes an inner product of the two vectors.
4. Observable quantities are associated with the expectation values of Hermitian
operators, and the eigenvalues of such operators are always real.
5. If two operators commute, one can measure the two associated physical
quantities simultaneously to arbitrary precision.
6. The result of a physical measurement projects |ψ⟩ onto an eigenstate |φn⟩
of the associated operator, yielding the measured value an with probability
|⟨φn|ψ⟩|².
x̂|ψ⟩ = |x⟩⟨x|ψ⟩   (2.2)
     = |x⟩ψ(x)   (2.3)

We shall call ψ(x) the wave function of the system since it is the amplitude of |ψ⟩ at
point x. Here we can see that ψ(x) is an eigenstate of the position operator. We also
define the momentum operator p̂ as a derivative operator:

p̂ = (h̄/i) ∂/∂x   (2.4)

Thus, p̂ψ(x) = (h̄/i)ψ′(x).
Note that ψ′(x) ≠ ψ(x); thus, an eigenstate of the position operator is not also an
eigenstate of the momentum operator.
We can deduce this also from the fact that x̂ and p̂ do not commute. To see this,
first consider
∂/∂x (x f(x)) = f(x) + x ∂x f(x)   (2.6)
Thus (using the shorthand ∂x as partial derivative with respect to x),
p̂|φ(k)⟩ = k|φ(k)⟩   (2.10)
This type of integral is called a “Fourier transform.” There are a number of ways to
define the normalization constant C when using this transform; for our purposes at the
moment, we will set C = 1/√(2πh̄) so that

ψ(x) = (1/√(2πh̄)) ∫₋∞^{+∞} dk ψ(k) exp(−ikx/h̄)   (2.18)

and

ψ(k) = (1/√(2πh̄)) ∫₋∞^{+∞} dx ψ(x) exp(ikx/h̄)   (2.19)
Using this choice of normalization, the transform and the inverse transform have
symmetric forms and we only need to remember the sign in the exponential.
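As a quick numerical sanity check, the symmetric normalization can be verified by transforming a Gaussian forward and back. This sketch is not from the text; the Gaussian test function, the grids, and h̄ = a = 1 are assumptions:

```python
import numpy as np

# Check that with C = 1/sqrt(2*pi*hbar) the transform pair (2.18)/(2.19)
# is self-inverse. Grids and the Gaussian test function are assumptions.
hbar = 1.0
a = 1.0
x = np.linspace(-10, 10, 801)
k = np.linspace(-10, 10, 801)
dx = x[1] - x[0]
dk = k[1] - k[0]
psi_x = (2.0/(np.pi*a**2))**0.25 * np.exp(-x**2/a**2)   # normalized Gaussian

# Forward transform (2.19): psi(k) = C * Int dx psi(x) exp(+i k x / hbar)
phase = np.exp(1j*np.outer(k, x)/hbar)
psi_k = phase @ psi_x * dx / np.sqrt(2*np.pi*hbar)

# Inverse transform (2.18): psi(x) = C * Int dk psi(k) exp(-i k x / hbar)
psi_back = np.conj(phase).T @ psi_k * dk / np.sqrt(2*np.pi*hbar)

err = np.max(np.abs(psi_back - psi_x))
```

With this choice of C, the round trip reproduces ψ(x) and the k-space function stays normalized (Parseval), which is exactly the symmetry advertised above.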
Postulate 2.1
The quantum state of the system is a solution of the Schrödinger equation
ih̄ ∂t|ψ(t)⟩ = H|ψ(t)⟩   (2.20)
From classical mechanics, H is the sum of the kinetic and potential energy of a
particle,
H = p²/(2m) + V(x)   (2.21)
Thus, using the quantum analogs of the classical x and p, the quantum H is
H = p̂²/(2m) + V(x̂)   (2.22)
To evaluate V (x̂) we need a theorem that a function of an operator is the function
evaluated at the eigenvalue of the operator. The proof is straightforward.
If

V(x) = V(0) + xV′(0) + ½V″(0)x² + · · ·   (2.23)

then

V(x̂) = V(0) + x̂V′(0) + ½V″(0)x̂² + · · ·   (2.24)

since for any operator

[f̂, f̂^p] = 0 for all p   (2.25)

Thus, we have

⟨x|V(x̂)|ψ⟩ = V(x)ψ(x)   (2.26)
FIGURE 2.1 Real, imaginary, and absolute value of Gaussian wave packet ψ(x).
So, for f(x) = exp(−x²/b²), Δx = b/√2. Thus, when x varies from 0 to ±Δx,
f(x) is diminished by a factor of 1/√e. [Δx is the root-mean-square deviation of
f(x).]
For the Gaussian wave packet,

Δx = a/2   (2.31)
Δk = 1/a   (2.32)

or

Δp = h̄/a   (2.33)

Thus, ΔxΔp = h̄/2 for the initial wave function.
[Plot of ψ(k) versus k for the Gaussian wave packet.]
This equation is actually easier to solve in k-space. Taking the Fourier transform (FT),

ih̄ ∂t ψ(k, t) = (k²/2m) ψ(k, t)   (2.35)

Thus, the temporal solution of the equation is

ψ(k, t) = exp(−i k²/(2m) t/h̄) ψ(k, 0)   (2.36)

This is subject to some initial function ψ(k, 0). To get the coordinate x-representation
of the solution, we can use the FT relations above:

ψ(x, t) = (1/√(2πh̄)) ∫ dk ψ(k, t) exp(−ikx/h̄)   (2.37)

= ∫ dx′ ⟨x|exp(−i p̂²/(2m) t/h̄)|x′⟩ ψ(x′, 0)   (2.38)

= √(m/(2πih̄t)) ∫ dx′ exp(im(x − x′)²/(2h̄t)) ψ(x′, 0)   (2.39)

= ∫ dx′ Go(x, x′) ψ(x′, 0)   (2.40)
The function Go(x, x′) is called the free particle propagator or Green's function. It
gives the amplitude for a particle at x′ to be found at x some time t later. A plot of
Go(x, x′) is shown in Figure 2.3 for a particle starting at the origin. Notice that as
|x| increases, the oscillation period decreases rapidly. Since momentum is inversely
proportional to wavelength through the de Broglie relationship p = h/λ, in order
for a particle starting at the origin to move a distance x away in time t, it must have
sufficient momentum.
The sketch tells us that in order to get far away from the initial point in time t,
we need to have a lot of energy (the wiggles get closer together, implying a higher
Fourier component). Consequently, the probability of finding the particle at the initial
point decreases with time. Since the period of oscillation (T) is the time required to
increase the phase by 2π,

2π = mx²/(2h̄t) − mx²/(2h̄(t + T))   (2.41)

= (mx²/(2h̄t²)) T/(1 + T/t)   (2.42)
Waves and Wave Functions 39
[Figure 2.3: the free-particle propagator Go(x, x′) for a particle starting at the origin.]
Let ω = 2π/T and take the long time limit t ≫ T; we can estimate

ω ≈ (m/2h̄)(x/t)²   (2.43)

Since the classical kinetic energy is given by E = mv²/2, we obtain

E = h̄ω   (2.44)

Thus, the energy of the wave is proportional to the frequency of oscillation.
We can evaluate the evolution in x using either the G o we derived above, or by
taking the FT of the wave function evolving in k-space. Recall that the solution in
k-space was
ψ(k, t) = exp(−ik 2 /(2m)t/h̄)ψ(k, 0) (2.45)
Assuming a Gaussian form for ψ(k) as above,
ψ(x, t) = (√a/(2π)^{3/4}) ∫ dk e^{−a²(k−ko)²/4} e^{i(kx−ω(k)t)}   (2.46)
where ω(k) is the dispersion relation for a free particle:
ω(k) = h̄k²/(2m)   (2.47)
Cranking through the integral,

ψ(x, t) = (2a²/π)^{1/4} (e^{iφ}/(a⁴ + 4h̄²t²/m²)^{1/4}) e^{iko x} exp[−(x − h̄ko t/m)²/(a² + 2ih̄t/m)]   (2.48)
where we define the time-dependent root-mean-square (rms) width of the wave,

Δx(t) = (a/2)√(1 + 4h̄²t²/(m²a⁴))   (2.50)

and the group velocity,

vo = h̄ko/m   (2.51)
Now, since Δp = h̄Δk = h̄/a is a constant for all time, the uncertainty relation
becomes

Δx(t)Δp ≥ h̄/2   (2.52)

corresponding to the particle's wave function becoming more and more diffuse as it
evolves in time.
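The spreading law (2.50) is easy to check numerically by evolving the packet on a grid with an FFT. This sketch assumes m = h̄ = 1 and a = 2; note that in the code k denotes a wave number (p = h̄k), rather than the momentum convention used in the text:

```python
import numpy as np

# Evolve a Gaussian packet with the free-particle kernel and compare the
# numerical rms width with (2.50). Grid sizes and a = 2 are assumptions.
hbar = m = 1.0
a = 2.0
N = 4096
x = np.linspace(-200.0, 200.0, N, endpoint=False)
dx = x[1] - x[0]
psi0 = (2.0/(np.pi*a**2))**0.25 * np.exp(-x**2/a**2)   # width dx(0) = a/2

k = 2*np.pi*np.fft.fftfreq(N, d=dx)    # wave number; p = hbar*k
t = 50.0
psi_t = np.fft.ifft(np.fft.fft(psi0)*np.exp(-1j*hbar*k**2*t/(2*m)))

prob = np.abs(psi_t)**2
prob /= prob.sum()*dx                  # renormalize on the grid
xbar = (x*prob).sum()*dx
width_num = np.sqrt(((x - xbar)**2 * prob).sum()*dx)
width_exact = (a/2)*np.sqrt(1 + 4*hbar**2*t**2/(m**2*a**4))
```

At t = 50 the packet has spread from Δx(0) = 1 to roughly 25 box units, in agreement with the closed-form width.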
The solutions of this are plane waves traveling to the left and to the right. Imposing
the boundary conditions at the walls,

A + B = 0   (2.56)
A exp(ika) + B exp(−ika) = 0   (2.57)

We can see immediately that A = −B and that the solutions must correspond to a
family of sine functions:

ψ(x) = A sin(nπx/a)   (2.58)

Just as a check,

ψ(a) = A sin(nπ) = 0   (2.59)

To obtain the coefficient, we simply require that the wave functions be normalized
over the range x = [0, a]:

∫₀ᵃ sin²(nπx/a) dx = a/2   (2.60)

Thus, the normalized solutions are

ψn(x) = √(2/a) sin(nπx/a)   (2.61)
The eigenenergies are obtained by applying the Hamiltonian to the wave-function
solution

En ψn(x) = −(h̄²/2m) ∂x² ψn(x)   (2.62)
         = (h̄²n²π²/(2ma²)) ψn(x)   (2.63)

Thus we can write En as a function of n:

En = (h̄²π²/(2ma²)) n²   (2.64)
for n = 0, 1, 2, . . . . What about the case where n = 0? Clearly it is an allowed
solution of the Schrödinger equation. However, we also required that the probability
to find the particle anywhere must be 1. Thus, the n = 0 solution cannot be permitted.
Note that the cosine functions are also allowed solutions. However, the restriction
of ψ(0) = 0 and ψ(a) = 0 discounts these solutions.
In Figure 2.4 we show the first few eigenstates for an electron trapped in a well
of length a = π. Notice that the number of nodes increases as the energy increases.
In fact, we can determine the state of the system by simply counting nodes.
What about orthonormality? We stated that the solution of the eigenvalue problem
forms an orthonormal basis. In Dirac notation we can write
⟨ψn|ψm⟩ = ∫ dx ⟨ψn|x⟩⟨x|ψm⟩   (2.65)

= ∫₀ᵃ dx ψn*(x) ψm(x)   (2.66)

= (2/a) ∫₀ᵃ dx sin(nπx/a) sin(mπx/a)   (2.67)

= δnm   (2.68)
[Figure 2.4: the first few eigenstates for an electron trapped in the well.]
Thus, we can see in fact that these solutions do form a complete set of orthogonal
states on the range x = [0, a]. Note that it is important to specify "on the range . . ."
since clearly the sine functions are not a set of orthogonal functions over the entire
x axis.
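The orthonormality integral (2.67) can be verified numerically. A minimal sketch, assuming a box of length a = 1 and a simple grid quadrature:

```python
import numpy as np

# Overlap matrix of the first few box states; should be the identity.
a = 1.0
x = np.linspace(0.0, a, 2001)
dx = x[1] - x[0]

def psi(n):
    # normalized box eigenfunction sqrt(2/a) sin(n pi x / a)
    return np.sqrt(2.0/a)*np.sin(n*np.pi*x/a)

S = np.array([[(psi(n)*psi(m)).sum()*dx for m in range(1, 5)]
              for n in range(1, 5)])
err = np.abs(S - np.eye(4)).max()
```

Repeating the sum over, say, [0, 2a] instead of [0, a] destroys the orthogonality, which is the point made in the text about specifying the range.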
Let us consider the case for E < 0. The case E > 0 will correspond to scattering
solutions. Inside the well, the wave function oscillates, much as in the previous case,
where ki comes from the equation for the momentum inside the well,

h̄ki = √(2m(E + Vo))   (2.71)

We will choose the coefficients c1 and c2 to create two cases, ψL and ψR, on the left-
and right-hand sides of the well. Also,

h̄ρ = √(−2mE)   (2.73)
Thus, we have three pieces of the full solution that we must connect together. Matching
them at the well boundaries yields the transcendental equations

√(−E/(E + Vo)) = tan(a√(2m(E + Vo))/h̄)   (2.84)

√(−E/(Vo + E)) = −cot(a√(2m(Vo + E))/h̄)   (2.86)
FIGURE 2.5 (a) Graphical solution to the transcendental equations for an electron in a truncated
hard well of depth Vo = 10 and width a = 2. (b) Wave functions corresponding to stationary
states for the finite well.
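The graphical solution in Figure 2.5(a) can be reproduced numerically by scanning Equation 2.84 for sign changes (skipping the poles of the tangent) and refining each bracket by bisection. A sketch assuming m = h̄ = 1, with the figure's parameters Vo = 10 and a = 2:

```python
import numpy as np

# Locate the symmetric-state roots of sqrt(-E/(E+Vo)) = tan(a*sqrt(2m(E+Vo))/hbar).
hbar = m = 1.0
Vo, a = 10.0, 2.0

def f(E):
    return np.sqrt(-E/(E + Vo)) - np.tan(a*np.sqrt(2*m*(E + Vo))/hbar)

Es = np.linspace(-9.99, -0.01, 20000)
vals = f(Es)
roots = []
for i in range(len(Es) - 1):
    # genuine crossings have moderate |f|; jumps across tan poles are huge
    if vals[i]*vals[i+1] < 0 and abs(vals[i]) < 50 and abs(vals[i+1]) < 50:
        lo, hi = Es[i], Es[i+1]
        for _ in range(60):          # bisection refinement
            mid = 0.5*(lo + hi)
            if f(lo)*f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5*(lo + hi))
```

For these parameters the scan finds three symmetric bound states, consistent with the alternating S/A levels shown in Figure 2.5.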
ψL(−a) − ψW(−a) = 0
ψ′L(−a) − ψ′W(−a) = 0
ψR(a) − ψW(a) = 0
ψ′R(a) − ψ′W(a) = 0
This can be solved by hand; however, Mathematica keeps the bookkeeping easy. The
result is a series of rules that we can use to determine the transmission and reflection
coefficients:
T → −4e^{−2iak1+2iak2} k1k2 / (−k1² + e^{4iak2}k1² − 2k1k2 − 2e^{4iak2}k1k2 − k2² + e^{4iak2}k2²)

R → (−1 + e^{4iak2})(k1² − k2²) / (e^{2iak1}(−k1² + e^{4iak2}k1² − 2k1k2 − 2e^{4iak2}k1k2 − k2² + e^{4iak2}k2²))
The R and T coefficients are related to the ratios of the reflected and transmitted
flux to the incoming flux. The current operator is given by

j(x) = (h̄/2mi)(ψ*∇ψ − ψ∇ψ*)   (2.87)

Inserting the wave functions above yields

jin = h̄k1/m
jref = −h̄k1|R|²/m
jtrans = h̄k1|T|²/m

Thus, |R|² = −jref/jin and |T|² = jtrans/jin. In Figure 2.6 we show the transmission and
reflection coefficients for an electron passing over a well of depth V = −40 and a = 1
as a function of incident energy E.
Notice that the transmission and reflection coefficients undergo a series of os-
cillations as the incident energy is increased. These are due to resonance states that
lie in the continuum. The condition for these states is that an integer number of
half de Broglie wavelengths of the wave in the well matches the total length of the
well:

nλ/2 = a
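The bookkeeping that the text delegates to Mathematica can equally be done by solving the four matching conditions as a small linear system. This sketch assumes m = h̄ = 1 and places the well of depth V = −40 on [0, a] with a = 1 (the Figure 2.6 parameters); the unknowns (r, A, B, t) are the reflected, interior, and transmitted amplitudes:

```python
import numpy as np

# Match e^{ik1 x} + r e^{-ik1 x} (left), A e^{ik2 x} + B e^{-ik2 x} (inside),
# and t e^{ik1 x} (right) at x = 0 and x = a.
hbar = m = 1.0
V, a = -40.0, 1.0

def coefficients(E):
    k1 = np.sqrt(2*m*E)/hbar            # wave vector outside the well
    k2 = np.sqrt(2*m*(E - V))/hbar      # wave vector inside the well
    e1, e2 = np.exp(1j*k1*a), np.exp(1j*k2*a)
    M = np.array([[-1.0,    1.0,    1.0,    0.0   ],
                  [ k1,     k2,    -k2,     0.0   ],
                  [ 0.0,    e2,   1/e2,   -e1    ],
                  [ 0.0, k2*e2, -k2/e2, -k1*e1  ]], dtype=complex)
    rhs = np.array([1.0, k1, 0.0, 0.0], dtype=complex)
    r, A, B, t = np.linalg.solve(M, rhs)
    return abs(r)**2, abs(t)**2         # reflection, transmission

R10, T10 = coefficients(10.0)
E_res = (3*np.pi/a)**2*hbar**2/(2*m) + V    # k2*a = 3*pi, i.e. a = 3*lambda/2
R_res, T_res = coefficients(E_res)
```

At the resonance energies, where an integer number of half wavelengths fits in the well, the transmission returns to unity, reproducing the oscillations in Figure 2.6(a).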
FIGURE 2.6 (a) Transmission and reflection coefficients for an electron scattering over a
square well (V = −40 and a = 1). (b) Scattering waves for a particle passing over a well. In
the top graphic, the particle is partially reflected from the well (V < 0); in the bottom graphic,
the particle passes over the well with a slightly different energy than above, this time with little
reflection.
FIGURE 2.7 Transmission coefficient for a particle passing over a bump. Here we have plotted
T as a function of V and incident energy En. The oscillations correspond to resonance states
that occur as the particle passes over the well (V < 0) or bump (V > 0).
Figure 2.7 shows the transmission coefficient as a function of both incident energy
and the well depth (or height) over a wide range, indicating that resonances can
occur for both wells and bumps.
energy levels are simply those of an n-dimensional particle in a box. For example,
for a three-dimensional (3D) system,
E_{nx,ny,nz} = (h̄²π²/2m)[(nx/Lx)² + (ny/Ly)² + (nz/Lz)²]   (2.88)
where L x , L y , and L z are the lengths of the sides of the box and m is the mass of an
electron.
The density of states is the number of energy levels per unit energy. If we take the
box to be a cube L x = L y = L z , we can relate n to a radius of a sphere and write the
density of states as
ρ(n) = 4πn² (dn/dE) = 4πn² (dE/dn)⁻¹

Thus, for a 3D cube, the density of states is

ρ(n) = (4mL²/(πh̄²)) n
that is, for a three-dimensional cube, the density of states increases as n and hence as
E 1/2 (Figure 2.8).
Note that the scaling of the density of states with energy depends strongly upon
the dimensionality of the system. For example, in one dimension,
ρ(n) = (2mL²/(h̄²π²)) (1/n)
and in two dimensions
ρ(n) = const
The reason for this lies in the way the volume element for linear, circular, and spherical
integration scales with radius n. Thus, measuring the density of states tells us not only
the size of the system but also its dimensionality.
FIGURE 2.8 Density of states versus dimensionality of the system. For D > 3, these are
effective dimensions reflecting the number of free-particle degrees of freedom carried by a
given particle.
We can generalize the results here by realizing that the volume of a d-dimensional
sphere in k space is given by
Vd = k^d π^{d/2} / Γ(1 + d/2)
where (x) is the gamma function. The total number of states per unit volume in a
d-dimensional space is then
nk = 2Vd/(2π)^d
and the density is then the number of states per unit of energy. The relation between
energy and k is
Ek = h̄²k²/(2m)

that is,

k = √(2mEk)/h̄
which gives
ρd(E) = (d/(2^d π^{d/2})) (√(2mE)/h̄)^d / (E Γ(1 + d/2))
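The general ρd(E) collapses to the E^{d/2−1} scaling quoted above for d = 1, 2, 3. A short sketch (m = h̄ = 1 assumed; the factor of 2 for spin is included, as in the counting above):

```python
from math import gamma, pi, sqrt

# Free-particle density of states per unit volume in d dimensions,
# rho_d(E) = d * k^d / (2^d * pi^{d/2} * Gamma(1 + d/2) * E), k = sqrt(2mE)/hbar.
hbar = m = 1.0

def dos(E, d):
    k = sqrt(2*m*E)/hbar
    return d * k**d / (2**d * pi**(d/2) * gamma(1 + d/2) * E)
```

Quadrupling the energy halves the 1D density of states, leaves the 2D one unchanged, and doubles the 3D one, so measuring ρ(E) indeed reads out the dimensionality.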
A quantum well is typically constructed so that the system is confined in one
dimension and unconfined in the other two. Thus, a quantum well will typically have
a discrete state only in the confined direction. The density of states for this system will
be identical to that of the three-dimensional system at energies where the k vectors
coincide. If we take the thickness to be s, then the density of states for the quantum
well is
L ρ3 (E)
ρ = ρ2 (E) L
s Lρ2 (E)/s
where x is the “floor” function, which means it takes the largest integer less than x.
This is plotted in Figure 2.9a and the stair-step density of states (DOS) is indicative
of the embedded confined structure.
FIGURE 2.9 Density of states for a quantum well and quantum wire compared to a 3D space.
Here L = 5 and s = 2 for comparison.
Next, we consider a quantum wire of thickness s along each of its two confined
directions (Figure 2.9b). The DOS along the unconfined direction is one dimensional.
As above, the total DOS will be identical to the 3D case when the wave vectors
coincide. Increasing the radius of the wire eventually leads to the case where the steps
decrease and merge into the 3D curve,
ρ = (ρ1(E)/s²) ⌊s ρ2(E)/ρ1(E)⌋
For a spherical dot, we consider the case in which the radius of the quantum dot
is small enough to support discrete rather than continuous energy levels. In a later
chapter we will derive this result in more detail; for now, we consider just the results.
First, an electron in a spherical dot obeys the Schrödinger equation:
−(h̄²/2m) ∇²ψ = Eψ   (2.89)

where

∇² = (1/r) ∂²/∂r² r + (1/(r² sin θ)) ∂/∂θ (sin θ ∂/∂θ) + (1/(r² sin²θ)) ∂²/∂φ²
The solution of the Schrödinger equation is subject to the boundary condition that for
r ≥ R, ψ(r ) = 0, where R is the radius of the sphere and is given in terms of the
spherical Bessel function, jl (r ), and spherical harmonic functions, Ylm ,
ψnlm = (2^{1/2}/R^{3/2}) (jl(αr/R)/jl+1(α)) Ylm(Ω)   (2.90)
with energy
E = h̄²α²/(2mR²)   (2.91)
Note that the spherical Bessel functions (of the first kind) are related to the Bessel
functions via
jl(x) = √(π/2x) Jl+1/2(x)   (2.92)

j0(x) = sin x/x   (2.93)

j1(x) = sin x/x² − cos x/x   (2.94)
[Plot of the spherical Bessel function j1(x) for 0 ≤ x ≤ 20.]
j2(x) = (3/x³ − 1/x) sin x − (3/x²) cos x   (2.95)

jn(x) = (−1)ⁿ xⁿ ((1/x)(d/dx))ⁿ j0(x)   (2.96)

The boundary condition at r = R requires jl(α) = 0; for the lowest two values of l,

j0(α) = 0,   (2.97)

j1(α) = sin α/α² − cos α/α = 0.   (2.98)
This can be solved to give α = 4.4934. These correspond to where the spherical Bessel
functions pass through zero. The first six of these are 3.14159, 4.49341, 5.76346,
6.98793, 8.18256, and 9.35581. These correspond to where the first zeros occur and
give the condition for the radial quantization, n = 1, with angular momentum l =
0, 1, 2, 3, 4, 5. There are more zeros, and these correspond to the case where n > 1.
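The quoted zeros, which fix the dot energies through E = h̄²α²/(2mR²), can be recovered by bisecting SciPy's spherical Bessel routine. This check is a sketch, not part of the text; it assumes scipy.special.spherical_jn:

```python
import numpy as np
from scipy.special import spherical_jn

def first_zero(l, x_max=12.0):
    """First positive zero of the spherical Bessel function j_l."""
    x = np.linspace(0.05, x_max, 4000)   # start past the trivial zero at x = 0
    y = spherical_jn(l, x)
    i = int(np.argmax(y[:-1]*y[1:] < 0)) # first sign change brackets the zero
    lo, hi = x[i], x[i+1]
    for _ in range(60):                  # bisect down to machine precision
        mid = 0.5*(lo + hi)
        if spherical_jn(l, lo)*spherical_jn(l, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

alphas = [first_zero(l) for l in range(6)]
```

The six values reproduce the n = 1 quantization condition for l = 0 through 5; scanning past the first sign change would pick up the n > 1 zeros as well.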
In the next set of figures (Figure 2.11), we look at the radial wave functions
for an electron in a 0.5 Å quantum dot. First, consider the cases (n, l) = (1, 0) and
(n, l) = (1, 1). In both cases, the wave functions vanish at the radius of the dot. The
radial probability distribution function (PDF) is given by P = r²|ψnl(r)|². Note that
increasing the angular momentum l from 0 to 1 causes the electron's most probable
position to shift outwards. This is due to the centrifugal force arising from the angular
motion of the electron. For the (n, l) = (2, 0) and (2, 1) states, we have one node in the
system and two peaks in the PDF functions.
FIGURE 2.11 Radial wave functions (left column) and corresponding PDFs (right column)
for an electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0)
(solid) and (n, l) = (1, 1) (dashed) while the lower correspond to (n, l) = (2, 0) (solid) and
(n, l) = (2, 1) (dashed).
Problem 2.3 Show that the normalization of a wave function is independent of time.
Problem 2.4 Compute the bound-state solutions (E < 0) for a square well of depth
Vo where
V(x) = −Vo for −a/2 ≤ x ≤ a/2, and 0 otherwise   (2.102)
E ≈ mVo²a²/(2h̄²)   (2.103)
4. Show that as

ρ = √(−2mE/h̄²) → 0   (2.104)
the probability of finding the particle inside the well vanishes.
1. Let φ(x) be a stationary state. Show that φ(x) can be extended to give an
odd wave function corresponding to a stationary state of the symmetric
well of width 2a (that is, the one studied above) and depth Vo .
2. Discuss with respect to a and Vo the number of bound states and argue that
there is always at least one such state.
3. Now turn your attention toward the E > 0 states of the well. Show that
the transmission of the particle into the well region vanishes as E → 0
and that the wave function is perfectly reflected off the sudden change in
potential at x = a.
Problem 2.6 Which of the following are eigenfunctions of the kinetic energy
operator:
T̂ = −(h̄²/2m) ∂²/∂x²   (2.106)

eˣ, x², xⁿ, 3 cos(2x), sin(x) + cos(x), e^{−ikx}
f(x − x′) = ∫₋∞^∞ dk e^{−ik(x−x′)} e^{−ik²/(2m)}   (2.107)
f(x) = xe^{−x²}, or

f(x) = { e^{−x²}, x ≥ 0;  2e^{−x²}, x < 0 }   (2.108)
Problem 2.8 For a one-dimensional problem, consider a particle with wave function
ψ(x) = N exp(i po x/h̄)/√(x² + a²)   (2.109)

where a and po are real constants and N the normalization.
1. Determine N so that ψ(x) is normalized:

∫₋∞^∞ dx |ψ(x)|² = N² ∫₋∞^∞ dx/(x² + a²)   (2.110)
= N² π/a   (2.111)

Thus ψ(x) is normalized when

N = √(a/π)   (2.112)
2. The position of the particle is measured. What is the probability of finding
a result between −a/√3 and +a/√3?

∫_{−a/√3}^{+a/√3} dx |ψ(x)|² = (a/π) ∫_{−a/√3}^{+a/√3} dx/(x² + a²)   (2.113)

= (1/π) [tan⁻¹(x/a)]_{−a/√3}^{+a/√3}   (2.114)

= 1/3   (2.115)
3. Compute the mean position of a particle that has ψ(x) as its wave function.

⟨x⟩ = (a/π) ∫₋∞^∞ dx x/(x² + a²)   (2.116)
= 0   (2.117)
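The results of parts 1 and 2 are easy to confirm numerically. A sketch with a = 1 assumed (the plane-wave phase exp(ipo x/h̄) drops out of |ψ|²):

```python
import numpy as np

# |psi(x)|^2 = (a/pi)/(x^2 + a^2); total probability ~1, and ~1/3 on
# the symmetric interval [-a/sqrt(3), +a/sqrt(3)].
a = 1.0
N2 = a/np.pi                                  # N^2 = a/pi from (2.112)

def density(x):
    return N2/(x**2 + a**2)

x = np.linspace(-2000*a, 2000*a, 2_000_001)
total = density(x).sum()*(x[1] - x[0])        # tails ~ 2/(pi*2000) are cut off

xi = np.linspace(-a/np.sqrt(3), a/np.sqrt(3), 200_001)
p_inner = density(xi).sum()*(xi[1] - xi[0])
```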
H|φn⟩ = En|φn⟩   (2.119)

⟨φn|p̂|φm⟩ = αnm ⟨φn|x̂|φm⟩   (2.120)
where αnm is a coefficient depending upon En − Em. Compute αnm. (Hint: You will
need to use the commutation relations [x̂, H] and [p̂, H] to get this.) Finally, from
all this, deduce that

Σm (En − Em)² |⟨φn|x̂|φm⟩|² = (2h̄²/m) ⟨φn| p̂²/(2m) |φn⟩   (2.121)
Problem 2.10 The state space of a certain physical system is three dimensional. Let
|u1⟩, |u2⟩, and |u3⟩ be an orthonormal basis of the space in which the kets |ψ1⟩ and |ψ2⟩
are defined by

|ψ1⟩ = (1/√2)|u1⟩ + (i/2)|u2⟩ + (1/2)|u3⟩   (2.122)

|ψ2⟩ = (1/√3)|u1⟩ + (i/√3)|u3⟩   (2.123)

1. Are the states normalized?
2. Determine the matrices ρ1 and ρ2, the projection operators onto |ψ1⟩
and |ψ2⟩, as represented in the {|ui⟩} basis. Verify that these matrices are
Hermitian.
Problem 2.11 Let ψ(r) = ψ(x, y, z) be the normalized wave function of a particle.
Express in terms of ψ(r) the probability for:
1. A measurement along the x axis to yield a result between x1 and x2.
2. A measurement of momentum component px to yield a result between p1
and p2.
3. Simultaneous measurements of x and pz to yield x1 ≤ x ≤ x2 and pz > 0.
4. Simultaneous measurements of px, py, and pz to yield

p1 ≤ px ≤ p2   (2.124)
p3 ≤ py ≤ p4   (2.125)
p5 ≤ pz ≤ p6   (2.126)
Show that this result is equal to the result of part 2 when p3 , p5 → −∞
and p4 , p6 → +∞.
(a) Ground State. Show that the ground state is even about the origin
and that its energy E s is less than the bound state of a particle in
a single δ-function potential −E L . Interpret this physically. Plot the
corresponding wave function.
(b) Excited State. Show that when l is greater than some value (which you
need to determine), there exists an odd excited state of energy E A with
energy greater than −E L . Determine and plot the corresponding wave
function.
(c) Explain how the preceding calculations enable us to construct a model
for an ionized diatomic molecule, for example, H2+ , whose nuclei are
separated by l. Plot the energies of the two states as functions of l.
What happens as l → ∞ and l → 0?
(d) If we take Coulombic repulsion of the nuclei into account, what is the
total energy of the system? Show that a curve that gives the variation
with respect to l of the energies thus obtained enables us to predict in
certain cases the existence of bound states of H2+ and to determine the
equilibrium bond length.
2. Calculate the reflection and transmission coefficients for this system. Plot
R and T as functions of l. Show that resonances occur when l is an integer
multiple of the de Broglie wavelength of the particle. Why?
Problem 2.13 Write down the Schrödinger equation for an oscillator in the momen-
tum representation and determine the momentum wave functions.
3 Semiclassical Quantum
Mechanics
Good actions ennoble us, and we are the sons of our own deeds.
Miguel de Cervantes
The use of classical mechanical analogs for quantum behavior holds a long and proud
tradition in the development and application of quantum theory. In Bohr’s original
formulation of quantum mechanics to explain the spectra of the hydrogen atom,
Bohr used purely classical mechanical notions of angular momentum and rotation for
the basic theory and imposed a quantization condition that the angular momentum
should come in integer multiples of h̄. Bohr worked under the assumption that at
some point the laws of quantum mechanics that govern atoms and molecules should
correspond to the classical mechanical laws of ordinary objects like rocks and stones.
Bohr’s Principle of Correspondence states that quantum mechanics is not completely
separate from classical mechanics; rather, it incorporates classical theory.
From a computational viewpoint, this is an extremely powerful notion since per-
forming a classical trajectory calculation (even running thousands of them) is simpler
than a single quantum calculation of a similar dimension. Consequently, the devel-
opment of semiclassical methods has been and remains an important part of the
development and utilization of quantum theory. In fact even in the most recent issues
of leading physics and chemical physics journals, one finds new developments and
applications of this very old idea.
In this chapter we will explore this idea in some detail. The field of semiclassical
mechanics is vast and I would recommend the following for more information:
There are many others, of course. These are just the ones on my bookshelf.
−(me⁴/2h̄²)(1/n²) = −e²/(2r)   (3.2)
Now we need to figure out how angular momentum gets pulled into this. For an orbiting
body the centrifugal force, which pulls the body outward, is counterbalanced by the
inward tugs of the centripetal force coming from the attractive Coulomb potential.
Thus,
mrω² = e²/r²   (3.3)
where ω is the angular frequency of the rotation. Rearranging this a bit, we can plug
this into the right-hand side (rhs) of Equation 3.2 and write
−(me⁴/2h̄²)(1/n²) = −mr³ω²/(2r)   (3.4)
The numerator now looks almost like the classical definition of angular momentum:
L = mr²ω. So we can write the last equation as

−(me⁴/2h̄²)(1/n²) = −L²/(2mr²)   (3.5)
Solving for L²:

L² = (me⁴/(2h̄²n²)) 2mr²   (3.6)
Now, we need to pull in another one of Bohr’s results for the orbital radius of the H
atom:
r = (h̄²/me²) n²   (3.7)
Plug this into Equation 3.6 and after the dust settles, we find
L = h̄n (3.8)
But, why should electrons be confined to circular orbits? Equation 3.8 should be
applicable to any closed path the electron should choose to take. If the quantization
condition only holds for circular orbits, then the theory itself is in deep trouble. At
least that is what Sommerfeld thought.
The units of h̄ are energy times time; that is the unit of action in classical
mechanics. In classical mechanics, the action of a mechanical system is given by the
integral of the classical momentum along a classical path:

S = ∫_{x1}^{x2} p dx   (3.9)

For an orbit, the initial point and the final point must coincide, x1 = x2, so the action
integral must describe the area circumscribed by a closed loop on the p–x
plane, called phase space:

S = ∮ p dx   (3.10)

So, Bohr and Sommerfeld's idea was that the circumscribed area in phase space was
quantized as well.
As a check, let us consider the harmonic oscillator. The classical energy is given by

E(p, q) = p²/2m + (k/2)q²

This is the equation for an ellipse in phase space since we can rearrange this to read

1 = p²/(2mE) + (k/2E)q²
  = p²/a² + q²/b²   (3.11)

where a = √(2mE) and b = √(2E/k) describe the major and minor axes of the ellipse.
The area of an ellipse is A = πab, so the area circumscribed by a classical trajectory
with energy E is

S(E) = 2πE√(m/k)   (3.12)

Since √(k/m) = ω, S = 2πE/ω = E/ν. Finally, since E/ν must be an integer
multiple of h, the Bohr–Sommerfeld condition for quantization becomes

∮ p dx = nh   (3.13)

where p is the classical momentum for a path of energy E, p = √(2m(E − V(x))).
Taking this a bit further, the de Broglie wavelength is h/p, so the Bohr–Sommerfeld
rule basically states that stationary energies correspond to classical paths for which
there are an integer number of de Broglie wavelengths.
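The phase-space area for the harmonic oscillator can be checked by direct quadrature of ∮ p dx. A sketch assuming m = k = 1 (the spring constant is named kf below to avoid clashing with other symbols):

```python
import numpy as np

# The closed harmonic orbit at energy E encloses area 2*pi*E/omega.
m, kf, hbar = 1.0, 1.0, 1.0
omega = np.sqrt(kf/m)

def action(E, n=200001):
    xt = np.sqrt(2*E/kf)                       # classical turning points +/- xt
    x = np.linspace(-xt, xt, n)
    p = np.sqrt(np.maximum(2*m*(E - 0.5*kf*x**2), 0.0))
    return 2*p.sum()*(x[1] - x[0])             # outgoing plus returning halves

E = 3.0
S_num = action(E)
S_exact = 2*np.pi*E/omega
```

Dividing S_exact by h = 2πh̄ gives E/(h̄ω) = 3 here, i.e., the Bohr–Sommerfeld integer for this orbit (before the semiclassical 1/2 correction discussed later).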
Now, perhaps you can anticipate a problem with the quantum description of a
classically chaotic system. In classical chaos, chaotic trajectories never return to their
exact starting point in phase space. They may come close, but there are no closed
orbits. For 1D systems, this does not occur since the trajectories are the contours of
the energy function. For higher dimensions, the dimensionality of the system makes
it possible to have extremely complex trajectories that never return to their starting
point.
We write the Schrödinger equation,

ψ″ + (2m/h̄²)(E − V(x))ψ = 0

and seek a solution of the form

ψ(x) = exp((i/h̄) ∫ χ dx)   (3.14)
We will soon discover that χ is the classical momentum of the system, but for now,
let us consider it to be a function of the energy of the system. Substituting this into
the Schrödinger equation and collecting terms order by order in h̄ yields the recursion
relation

(d/dx) χn−1 = −Σ_{m=0}^{n} χn−m χm   (3.19)
χ2 = −(χ1² + χ1′)/(2χo)

   = −(1/2χo)[V′²/(16(E − V)²) + V′²/(4(E − V)²) + V″/(4(E − V))]

   = −5V′²/(32(2m)^{1/2}(E − V)^{5/2}) − V″/(8(2m)^{1/2}(E − V)^{3/2})   (3.21)
and so forth.
Problem 3.2 Verify Equation 3.19 and derive the first-order correction in Equa-
tion 3.20.
Note: An analytic function is such that it can be expanded in a polynomial series about some local point.
62 Quantum Dynamics: Applications in Biological and Materials Systems
n = (1/2πi) ∮_C (ψn′(z)/ψn(z)) dz   (3.22)
where ψn is the nth discrete stationary solution to the Schrödinger equation and C is
some contour of integration on the z plane. If there is a discrete spectrum, we know
that the number of zeros, n, in the wave function is related to the quantum number
corresponding to a given energy level. So if ψ has no real zeros, this is the ground-
state wave function with energy E o ; one real zero corresponds to energy level E 1 and
so forth.
Suppose the contour of integration, C, is taken such that it includes only these
zeros and no others, then we can write
n = (1/h) ∮_C χo dz + (1/2πi) ∮_C χ1 dz − (h̄/2π) ∮_C χ2 dz + · · ·   (3.23)
We can make a change of variables Z = V(z) and dZ = V′dz and write the integral as
(1/2πi) ∮_c χ1 dz = −(1/2πi)(1/4) ∮ dZ/(E − Z) = −1/2   (3.25)
∮_C (V″/(E − V(z))^{3/2}) dz = −(3/2) ∮_C (V′²/(E − V(z))^{5/2}) dz   (3.26)
As an example in using this approach, let us consider the simple case of a harmonic
oscillator. Recall in the discussion of the Bohr–Sommerfeld approach, we noted that
the momentum integral over a closed trajectory was equal to the area in phase space
enclosed by an ellipse,

nh = ∮ p(x) dx = πab = 2πE/ω

where a and b are the major and minor axes along the x and p directions and E
is the energy. The semiclassical treatment adds an additional factor of 1/2 to the
Bohr–Sommerfeld expression so that

n + 1/2 = E/(h̄ω)

This agrees with the exact result for the harmonic oscillator energies: En = h̄ω(n +
1/2) with n = 0, 1, 2, . . . .
ψ = e^{iS/h̄}
If we neglect the term involving h̄, we recover the classical Hamilton–Jacobi equation
for the action S,
(1/2m)(∂S/∂x)² + V(x) = E   (3.30)
and can identify ∂ S/∂ x = χo = p with the classical momentum. Again, as above,
we can seek a series expansion of S in powers of h̄. The result is simply the integral
of Equation 3.17:

S = So + (h̄/i) S1 + · · ·   (3.31)
Looking at Equation 3.29, it is clear that the classical approximation is valid when
the second term is very small compared to the first. That is,

h̄ |S″/(S′)²| ≪ 1

or, writing dS/dx = p,

h̄ |(d/dx)(1/p)| ≪ 1   (3.32)

Since p is related to the de Broglie wavelength of the particle by λ = h/p, the same
condition implies that

(1/2π) |dλ/dx| ≪ 1   (3.33)
Thus the semiclassical approximation is only valid when the wavelength of the particle,
as determined by λ(x) = h/p(x), varies only slightly over distances on the order of the
wavelength itself. Noting the gradient of the momentum,

dp/dx = (d/dx)√(2m(E − V(x))) = −(m/p)(dV/dx)

we can write the classical condition as

mh̄|F|/p³ ≪ 1   (3.34)

Thus, the semiclassical condition is met when the potential changes slowly over a
length scale comparable to the local de Broglie wavelength λ(x) = h/p(x).
Going back to the expansion for χ,

χ1 = −(1/2)(χo′/χo) = (1/4) V′/(E − V)   (3.35)

or equivalently for S1,

S1 = −So″/(2So′) = −p′/(2p)   (3.36)

So,

S1(x) = −(1/2) log p(x)
If we stick to regions where the semiclassical condition is met, then the wave function
becomes

ψ(x) ≈ (C1/√p(x)) e^{(i/h̄)∫p(x)dx} + (C2/√p(x)) e^{−(i/h̄)∫p(x)dx}   (3.37)
The 1/√p prefactor has a remarkably simple interpretation. The probability of finding
the particle in some region between x and x + dx is given by |ψ|², so the classical
probability is essentially proportional to 1/p. So, the faster the particle is moving, the
less likely it is to be found in some small region of space. Conversely, the slower a
particle moves, the more likely it is to be found in that region. So the time spent in a
small d x is inversely proportional to the momentum of the particle. We will return to
this concept in a bit when we consider the idea of time in quantum mechanics.
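The dwell-time argument can be checked numerically. In this sketch (assumptions, not from the text: a classical harmonic oscillator x(t) = A cos t with A = 1), the fraction of a period spent in each spatial bin is compared with the 1/p classical density integrated over the bin:

```python
import numpy as np

# Sketch (not from the text; classical oscillator x(t) = A*cos(t), A = 1 assumed):
# the time spent in [x, x + dx] is dt = dx/|v|, i.e. proportional to 1/p.
# Integrating the density 1/(pi*sqrt(A^2 - x^2)) over a bin gives
#   P(bin) = (arccos(x_lo/A) - arccos(x_hi/A)) / pi
# over one half period; compare with equal-time sampling of the trajectory.
A = 1.0
t = np.linspace(0.0, np.pi, 400001)
x = A * np.cos(t)

edges = np.linspace(-A, A, 21)
counts, _ = np.histogram(x, bins=edges)
empirical = counts / counts.sum()              # fraction of time per bin
exact = (np.arccos(edges[:-1] / A) - np.arccos(edges[1:] / A)) / np.pi

max_abs_err = np.max(np.abs(empirical - exact))
print(max_abs_err)    # sampling error only
```

The agreement shows the slow-moving turning-point regions accumulating the most probability, exactly as the 1/√p prefactor predicts.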
The C1 and C2 coefficients are yet to be determined. If we take x = a to be
one classical turning point so that x > a corresponds to the classically inaccessible
region where E < V (x), then the wave function in that region must be exponentially
damped:
ψ(x) ≈ (C/√|p|) exp(−(1/h̄)∫_a^x |p(x′)| dx′)    (3.38)
To the left of x = a, we have a combination of incoming and reflected components:
ψ(x) = (C_1/√p) exp((i/h̄)∫_x^a p dx′) + (C_2/√p) exp(−(i/h̄)∫_x^a p dx′)    (3.39)
Inside we have
ψ_B(x) = (C/√|p(x)|) e^{+(1/h̄)∫|p|dx} + (D/√|p(x)|) e^{−(1/h̄)∫|p|dx}    (3.41)
If F is the transmitted amplitude, then the tunneling probability is the ratio of the
transmitted probability to the incident probability: T = |F|2 /|A|2 . If we assume that
the barrier is high or broad, then C = 0 and we obtain the semiclassical estimate for
the tunneling probability:
T ≈ exp(−(2/h̄)∫_a^b |p(x)| dx)    (3.43)
where a and b are the turning points on either side of the barrier.
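A minimal numerical sketch of this estimate (not from the text; the symmetric Eckart barrier V(x) = V_0/cosh²(x/a), the parameter values, and h̄ = m = 1 are all assumptions) evaluates the barrier integral by quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Sketch (not from the text; hbar = m = 1 and a symmetric Eckart barrier
# V(x) = V0/cosh(x/a)^2 with assumed parameters): evaluate
#   T ~ exp(-(2/hbar) * int |p(x)| dx),  |p| = sqrt(2m(V - E)),
# between the classical turning points +/- xt = a*arccosh(sqrt(V0/E)).
hbar = m = 1.0

def wkb_transmission(E, V0=5.0, a=1.0):
    V = lambda x: V0 / np.cosh(x / a) ** 2
    xt = a * np.arccosh(np.sqrt(V0 / E))      # classical turning points at +/- xt
    absp = lambda x: np.sqrt(np.maximum(2.0 * m * (V(x) - E), 0.0))
    action, _ = quad(absp, -xt, xt)
    return np.exp(-2.0 * action / hbar)

print(wkb_transmission(1.0))    # deep tunneling, T << 1
print(wkb_transmission(4.5))    # near the barrier top, T approaches 1
```

For this barrier the turning-point integral happens to have the closed form πa√(2m)(√V_0 − √E), which the quadrature reproduces, so the sketch doubles as a check on the numerics.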
Mathematically, we can “flip the potential upside down” and work in imaginary
time. In this case the action integral becomes
S = ∫_a^b √(2m(V(x) − E)) dx    (3.44)
For convenience, set the zero in energy to be the barrier height Vo so that any trans-
mission for E < 0 corresponds to tunneling.
At sufficiently large distances from the turning point, the motion is purely quasi-
classical and we can write the momentum as
p = √(2m(E + kx²/2)) ≈ x√(mk) + (E/x)√(m/k)    (3.45)
The analysis is from Kemble (1935) as discussed in Landau and Lifshitz, Quantum Mechanics (Non-relativistic
Theory), 3rd ed. (Oxford: Pergamon Press, 1977).
Semiclassical Quantum Mechanics 67
FIGURE 3.1 Eckart barrier and parabolic approximation of the transition state.
ψ = A e^{+iξ²/2} ξ^{+iε−1/2} + B e^{−iξ²/2} ξ^{−iε−1/2}    (3.46)
where A and B are the coefficients we need to determine by the matching condition
and ξ and ε are dimensionless lengths and energies given by ξ = x(mk/h̄²)^{1/4} and
ε = (E/h̄)√(m/k).
The particular case we are interested in is for a particle coming from the left and
passing to the right with the barrier in between. So, the wave functions in each of
these regions must be
ψ_R = B e^{+iξ²/2} ξ^{iε−1/2}    (3.47)
and
ψ_L = e^{−iξ²/2} (−ξ)^{−iε−1/2} + A e^{+iξ²/2} (−ξ)^{iε−1/2}    (3.48)
where the first term is the incident wave and the second term is the reflected com-
ponent. So, |A|2 is the reflection coefficient and |B|2 is the transmission coefficient
normalized so that
|A|2 + |B|2 = 1
Let us move to the complex plane, write a new coordinate ξ = ρe^{iφ}, and consider what
happens as we rotate around in φ and take ρ to be large. Since iξ² = ρ²(i cos 2φ −
sin 2φ), we have
ψ_R(φ = 0) = B e^{iρ²/2} ρ^{iε−1/2}
ψ_L(φ = 0) = A e^{iρ²/2} (−ρ)^{iε−1/2}    (3.49)
and at φ = π
ψ_R(φ = π) = B e^{iρ²/2} (−ρ)^{iε−1/2}
ψ_L(φ = π) = A e^{iρ²/2} ρ^{iε−1/2}    (3.50)
Comparing these expressions, we find
A = B(e^{iπ})^{iε−1/2}
So, we have the relation A = −iBe^{−πε}. Finally, after we normalize this we get the
transmission coefficient:
T = |B|² = 1/(1 + e^{−2πε})
which must hold for any energy. If the energy is large and negative (ε ≪ −1), then
T ≈ e^{−2π|ε|}
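A short numerical comparison (a sketch, not from the text; the unit choice m = k = h̄ = 1 is an assumption) of the exact parabolic-barrier transmission with the deep-tunneling WKB estimate, using the dimensionless energy ε = (E/h̄)√(m/k) defined above:

```python
import numpy as np

# Sketch (not from the text; m = k = hbar = 1 assumed): compare the exact
# parabolic-barrier transmission T = 1/(1 + exp(-2*pi*eps)), with
# eps = (E/hbar)*sqrt(m/k) and the energy zero at the barrier top, against the
# deep-tunneling WKB estimate T ~ exp(-2*pi*|eps|), valid for eps << -1.
def T_exact(eps):
    return 1.0 / (1.0 + np.exp(-2.0 * np.pi * eps))

def T_wkb(eps):
    return np.exp(-2.0 * np.pi * np.abs(eps))

for eps in (-3.0, -1.0, 0.0):
    print(eps, T_exact(eps), T_wkb(eps))
# At eps = 0 (incidence exactly at the barrier top) the exact result is T = 1/2,
# a regime where the simple exponential estimate fails.
```

The two expressions agree to high accuracy for ε ≪ −1 and diverge from one another as the energy approaches the barrier top.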
Near a classical turning point x = a, the potential may be linearized as
E − V(x) ≈ F_o(x − a)    (3.53)
Problem 3.3 Verify the relations for the transmission and reflection coefficients for
the Eckart barrier problem.
But, we can do better than that. We can actually solve the Schrödinger equation for
the linear potential and use the linearized solutions as our patch. The Mathematica
Notebook for this chapter (Chapter3.nb) determines the solution of the linearized
Schrödinger equation
−(h̄²/2m) d²ψ/dx² + (V − E)ψ = 0    (3.55)
which can be rewritten as
ψ″ = α³xψ    (3.56)
with
α = ((2m/h̄²) V′(0))^{1/3}
where we have linearized the potential about the turning point at the origin, V − E ≈ V′(0)x.
Absorbing the coefficient into a new variable y = αx, we get Airy's equation
ψ″(y) = yψ
The solutions of Airy’s equation are Airy functions, Ai(y) and Bi(y) for the regular
and irregular cases. The integral representation of the Ai and Bi are
Ai(y) = (1/π)∫_0^∞ cos(s³/3 + sy) ds    (3.57)
and
Bi(y) = (1/π)∫_0^∞ [e^{−s³/3+sy} + sin(s³/3 + sy)] ds    (3.58)
Plots of these functions are shown in Figure 3.2.
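As a sanity check (a sketch, not from the text; SciPy's `airy` routine is an assumption), one can verify numerically that Ai satisfies ψ″ = yψ by differencing the derivative values that SciPy returns:

```python
import numpy as np
from scipy.special import airy

# Sketch (not from the text): check that Ai satisfies Airy's equation
# psi'' = y*psi by taking a centered finite difference of the derivative
# values Ai'(y) returned by scipy.special.airy.
y = np.linspace(-5.0, 2.0, 701)
h = y[1] - y[0]
Ai, Aip, Bi, Bip = airy(y)              # airy returns (Ai, Ai', Bi, Bi')

Aipp = (Aip[2:] - Aip[:-2]) / (2.0 * h)  # numerical second derivative of Ai
residual = np.max(np.abs(Aipp - y[1:-1] * Ai[1:-1]))
print(residual)    # finite-difference error only, roughly 1e-4 here
```

The residual is set entirely by the finite-difference step, confirming that the special functions plotted in Figure 3.2 solve the equation above.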
Since both Ai and Bi are acceptable solutions, we will take a linear combination
of the two as our patching function and figure out the coefficients later:
ψ_patch(x) = a Ai(αx) + b Bi(αx)
FIGURE 3.2 The Airy functions Ai(x) and Bi(x).
This patching assumes (1) that the linearized
potential is reasonable and (2) that the overlap zone is far enough from the turning
point (at the origin) that the WKB approximation is accurate and reliable. You can
certainly cook up some potential for which this will not work, but we will assume it
is reasonable. In the linearized region, the momentum is |p(x)| = h̄α^{3/2}|x|^{1/2}.
So for +x,
∫_0^x |p(x′)| dx′ = 2h̄(αx)^{3/2}/3    (3.61)
so that the damped WKB solution varies as e^{−2y^{3/2}/3}, with y = αx, matching the
decaying asymptotic form of Ai for y ≫ 0. That is the WKB part. To connect with the
patching part, we again use the asymptotic forms, this time for y ≪ 0, and take only
the regular solution,
Ai(y) ≈ (1/(√π(−y)^{1/4})) sin(2(−y)^{3/2}/3 + π/4)
     ≈ (1/(2i√π(−y)^{1/4})) [e^{iπ/4} e^{i2(−y)^{3/2}/3} − e^{−iπ/4} e^{−i2(−y)^{3/2}/3}]    (3.68)
Comparing the WKB wave and the patching wave, we can match term by term:
(a/(2i√π)) e^{iπ/4} = B/√(h̄α)    (3.69)
(−a/(2i√π)) e^{−iπ/4} = C/√(h̄α)    (3.70)
Since we know a in terms of the normalization constant D, B = ie^{iπ/4}D and C =
−ie^{−iπ/4}D. This is the connection! We can write the WKB function across the turning
point as
ψ_WKB(x) = { (2D/√p(x)) sin[(1/h̄)∫_x^0 p dx′ + π/4],   x < 0
           { (D/√|p(x)|) exp[−(1/h̄)∫_0^x |p| dx′],      x > 0    (3.71)
TABLE 3.1
Location of Nodes for the Airy Function Ai(x)
node    x_n
1       −2.33811
2       −4.08795
3       −5.52056
4       −6.78671
5       −7.94413
6       −9.02265
7       −10.0402
and so on. Of course, we still have to normalize the wave function to get the correct
energy.
We can make life a bit easier by using the quantization condition derived from the
WKB approximation. Since we require the wave function to vanish exactly at x = 0,
we have
(1/h̄)∫_0^{x_t} p(x) dx + π/4 = nπ    (3.75)
This assures us that the wave vanishes at x = 0. In this case xt is the turning point
E = mgxt (see Figure 3.3). As a consequence,
∫_0^{x_t} p(x) dx = (n − 1/4)πh̄
Since p(x) = √(2m(E_n − mgx)), the integral can be evaluated:
∫_0^{x_t} √(2m(E_n − mgx)) dx = (2√2 E_n √(E_n m))/(3gm) + (√2 √m (E_n − gmx_t) √(−E_n + gmx_t))/(3gm)
Since x_t = E_n/mg for the classical turning point, the phase integral becomes
(2√2 E_n²)/(3g√(E_n m)) = (n − 1/4)πh̄.
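This quantization rule is easy to test numerically. The sketch below (not from the text; the units m = g = h̄ = 1 and SciPy's `ai_zeros` are assumptions) compares the WKB energies against the exact levels, which follow from requiring the Airy-function solution to vanish at x = 0 (the nodes of Table 3.1):

```python
import numpy as np
from scipy.special import ai_zeros

# Sketch (not from the text; units m = g = hbar = 1 assumed): the exact
# bouncing-ball levels follow from requiring the Airy solution to vanish at
# the floor, E_n = -x_n * (m*g^2*hbar^2/2)^(1/3), with x_n the n-th (negative)
# zero of Ai. The WKB rule above gives E_n = [3*pi*(n - 1/4)/(2*sqrt(2))]^(2/3).
n_levels = 5
x_n = ai_zeros(n_levels)[0]                  # first zeros of Ai (all negative)
E_exact = -x_n * 0.5 ** (1.0 / 3.0)

n = np.arange(1, n_levels + 1)
E_wkb = (3.0 * np.pi * (n - 0.25) / (2.0 * np.sqrt(2.0))) ** (2.0 / 3.0)

rel_err = np.abs(E_wkb - E_exact) / E_exact
print(E_exact)
print(rel_err)     # below 1% already for the ground state
```

The (n − 1/4) phase correction is what brings the WKB levels this close to the exact Airy-zero energies even for the lowest states.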
FIGURE 3.3 The linear potential V(x) with classical turning point at E = mgx_t.
3.4 SCATTERING
The collision between two particles plays an important role in the dynamics of reactive
molecules. We consider here the collision between two particles interacting via a
central force V (r ). Working in the center of mass frame, we consider the motion of
a point particle with mass μ and position vector r. We will first examine the process
in a purely classical context since it is intuitive and then apply what we know to the
quantum and semiclassical case.
x = r cos θ    (3.80)
y = r sin θ    (3.81)
ẋ = ṙ cos θ − rθ̇ sin θ    (3.82)
ẏ = ṙ sin θ + rθ̇ cos θ    (3.83)
Thus,
E = (μ/2)ṙ² + V(r) + L²/(2μr²)    (3.84)
where we use the fact that
L = μr²θ̇    (3.85)
where L is the angular momentum. What we see here is that we have two poten-
tial contributions. The first is the physical attraction (or repulsion) between the two
scattering bodies. The second is a purely repulsive centrifugal potential that depends
upon the angular momentum and ultimately upon the impact parameters. For cases
of large impact parameters, this can be the dominant force. The effective radial force
is given by
L2 ∂V
μr̈ = − (3.86)
2r 3 μ ∂r
Again, we note that the centrifugal contribution is always repulsive while the physical
interaction V(r) is typically attractive at long ranges and repulsive at short ranges.
We can derive the solutions to the scattering motion by integrating the velocity
equations for r and θ
ṙ = ±[(2/μ)(E − V(r) − L²/(2μr²))]^{1/2}    (3.87)
θ̇ = L/(μr²)    (3.88)
and taking into account the starting conditions for r and θ. In general, we could solve
the equations numerically and obtain the complete scattering path. However, really
what we are interested in is the deflection angle χ since this is what is ultimately
observed. So, we integrate the last two equations and derive θ in terms of r :
θ(r) = ∫_0^θ dθ = −∫_∞^r (dθ/dr) dr    (3.89)
     = −∫_∞^r (L/(μr²)) [1/√((2/μ)(E − V − L²/(2μr²)))] dr    (3.90)
where the collision starts at t = −∞ with r = ∞ and θ = 0. What we want to
do is derive this in terms of an impact parameter, b, and scattering angle χ . These
are illustrated in Figure 3.4 and can be derived from basic kinematic considerations.
First, energy is conserved throughout, so if we know the asymptotic velocity v, then
E = μv 2 /2. Secondly, angular momentum is conserved, so L = μ|r × v| = μvb.
Thus the integral above becomes
θ(r) = −∫_∞^r (vb/r²) [1/(v√(1 − V/E − b²/r²))] dr    (3.91)
     = −b ∫_∞^r dr/(r² √(1 − V/E − b²/r²))    (3.92)
FIGURE 3.4 Scattering geometry: impact parameter b, deflection angle χ, angle of closest
approach θ_c, and distance of closest approach r_c.
Finally, the angle of deflection is related to the angle of closest approach by 2θ_c +
χ = π; hence,
χ = π − 2b ∫_{r_c}^∞ dr/(r² √(1 − V/E − b²/r²))    (3.93)
The radial distance of closest approach is determined by
E = L²/(2μr_c²) + V(r_c)    (3.94)
which can be restated as
b² = r_c²(1 − V(r_c)/E)    (3.95)
Once we have specified the potential, we can compute the deflection angle using
Equation 3.95. If V (rc ) < 0 , then rc < b and we have an attractive potential; if
V (rc ) > 0, then rc > b and the potential is repulsive at the turning point.
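Equation 3.93 can be evaluated directly by quadrature. In this sketch (not from the text), a hard sphere of diameter d is an assumed test case, for which r_c = d when b < d and the analytic result χ = π − 2 arcsin(b/d) is available for comparison:

```python
import numpy as np
from scipy.integrate import quad

# Sketch (not from the text): evaluate the deflection-angle integral
#   chi = pi - 2*b * int_{rc}^inf dr / (r^2 * sqrt(1 - V/E - b^2/r^2))
# for an assumed hard sphere of diameter d (V = 0 outside, impenetrable core),
# where the distance of closest approach is rc = d for b < d.
def chi_hard_sphere(b, d=1.0):
    integrand = lambda r: 1.0 / (r**2 * np.sqrt(1.0 - b**2 / r**2))
    val, _ = quad(integrand, d, np.inf)
    return np.pi - 2.0 * b * val

b, d = 0.5, 1.0
print(chi_hard_sphere(b, d), np.pi - 2.0 * np.arcsin(b / d))  # should agree
```

For a realistic potential, only the integrand and the turning-point condition of Equation 3.94 change; the same quadrature then yields χ(b) numerically.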
If we have a beam of particles incident on some scattering center, then collisions
will occur with all possible impact parameters (hence angular momenta) and will give
rise to a distribution in the scattering angles. We can describe this by a differential
cross-section. If we have some incident intensity of particles in our beam I_o, which
is the incident flux or the number of particles passing a unit area normal to the beam
direction per unit time, then the differential cross-section I(χ) is defined so that
I(χ)dΩ is the number of particles per unit time scattered into some solid angle dΩ
divided by the incident flux.
The deflection pattern will be axially symmetric about the incident beam direction
due to the spherical symmetry of the interaction potential; thus, I (χ ) depends only
upon the scattering angle. Thus, dΩ can be constructed by the cones defining χ and
χ + dχ; that is, dΩ = 2π sin χ dχ. Even if the interaction potential is not spherically
symmetric, since most molecules are not spherical, the scattering would be axially
symmetric since we would be scattering from a homogeneous distribution of all
possible orientations of the colliding molecules. Hence any azimuthal dependency
must vanish unless we can orient on the colliding species.
Given an initial velocity v, the fraction of the incoming flux with impact parameters
between b and b+db is 2πbdb. These particles will be deflected between χ and χ +dχ
if dχ /db > 0 or between χ and χ − dχ if dχ /db < 0. Thus, I (χ)d = 2πbdb and
it follows then that
I(χ) = b/(sin χ |dχ/db|)    (3.96)
Thus, once we know χ (b) for a given v, we can get the differential cross-section. The
total cross-section is obtained by integrating
σ = 2π ∫_0^π I(χ) sin χ dχ    (3.97)
This is a measure of the attenuation of the incident beam by the scattering target and
has the units of area.
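For the hard sphere, χ(b) = 2 arccos(b/d) makes Equations 3.96 and 3.97 fully analytic, which gives a convenient numerical check (a sketch, not from the text; the hard-sphere model is an assumed test case):

```python
import numpy as np
from scipy.integrate import quad

# Sketch (not from the text; hard sphere of diameter d assumed): with
# chi(b) = 2*arccos(b/d), b = d*cos(chi/2) and |db/dchi| = (d/2)*sin(chi/2), so
#   I(chi) = b*|db/dchi| / sin(chi) = d^2/4  (isotropic),
# and sigma = 2*pi * int_0^pi I(chi)*sin(chi) dchi = pi*d^2.
d = 1.0

def I_of_chi(chi):
    b = d * np.cos(chi / 2.0)
    dbdchi = 0.5 * d * np.sin(chi / 2.0)
    return b * dbdchi / np.sin(chi)

sigma, _ = quad(lambda c: 2.0 * np.pi * I_of_chi(c) * np.sin(c), 0.0, np.pi)
print(I_of_chi(1.0))   # d^2/4 = 0.25, independent of chi
print(sigma)           # pi*d^2
```

The total cross-section comes out to the geometric area πd², as expected for classical hard-sphere scattering.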
χ ≈ p_y/p = (momentum transfer)/(momentum)    (3.98)
Since the time derivative of momentum is the force, the momentum transferred per-
pendicular to the incident beam is obtained by integrating the perpendicular force
F_y = −∂V/∂y = −(∂V/∂r)(∂r/∂y) = −(∂V/∂r)(b/r)    (3.99)
I (χ ) = | f (χ )|2 (3.107)
What we have is then that the asymptotic form of the wave function carries within it
information about the scattering process. As a result, we do not need to solve the wave
equation for all of space, we just need to be able to connect the scattering amplitude
to the interaction potential. We do so by expanding the wave as a superposition of
Legendre polynomials
ψ(r, χ) = Σ_{l=0}^∞ R_l(r) P_l(cos χ)    (3.108)
= (1/2i) Σ_{l=0}^∞ (2l + 1) i^l P_l(cos χ) [e^{i(kr−lπ/2)}/(kr) − e^{−i(kr−lπ/2)}/(kr)]    (3.110)
We can interpret this equation in the following intuitive way: The incident plane wave
is equivalent to an infinite superposition of incoming and outgoing spherical waves
in which each term corresponds to a particular angular momentum state with
L = h̄√(l(l + 1)) ≈ h̄(l + 1/2)    (3.111)
b = L/(μv) ≈ (l + 1/2)/k = (l + 1/2)λ    (3.112)
In essence the incoming beam is divided into cylindrical zones in which the lth zone
contains particles with impact parameters (and hence angular momenta) between
lλ and (l + 1)λ.
Problem 3.4 In the collision between hard spheres as described on p. 78, the impact
parameter b is treated as continuous; however, in quantum mechanics we allow only
discrete values of the angular momentum l. How will this affect our results, since
b = (l + 1/2)λ?
If V(r) is short ranged (that is, it falls off more rapidly than 1/r for large r), we
can derive a general solution for the asymptotic form
ψ(r, χ) −→ Σ_{l=0}^∞ (2l + 1) exp[i(lπ/2 + η_l)] [sin(kr − lπ/2 + η_l)/(kr)] P_l(cos χ)    (3.113)
The significant difference between Equation 3.113 and Equation 3.110 for the V (r ) =
0 case is the addition of a phase shift ηl . This shift only occurs in the outgoing part
of the wave function and so we conclude that the primary effect of a potential in
quantum scattering is to introduce a phase in the asymptotic form of the scattering
wave. This phase must be a real number and has the physical interpretation illustrated
in Figure 3.5. A repulsive potential will cause a decrease in the relative velocity of
the particles at small r resulting in a longer de Broglie wavelength. This causes the
wave to be “pushed out” relative to that for V = 0 and the phase shift is negative. An
attractive potential produces a positive phase shift and “pulls” the wave function in a
bit. Furthermore, the centrifugal part produces a negative shift of −lπ/2.
FIGURE 3.5 Form of the radial wave for repulsive (short dashed) and attractive (long dashed)
potentials. The form for V = 0 is the solid curve for comparison.
Comparing the various forms for the asymptotic waves, we can deduce that the
scattering amplitude is given by
f(χ) = (1/2ik) Σ_{l=0}^∞ (2l + 1)(e^{2iη_l} − 1) P_l(cos χ)    (3.114)
What we see here is the possibility for interference between different angular mo-
mentum components.
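To see the partial-wave machinery in action, the sketch below (not from the text) uses the standard hard-sphere phase shifts, tan η_l = j_l(ka)/y_l(ka), together with the standard partial-wave total cross-section σ = (4π/k²)Σ_l(2l + 1) sin²η_l, and checks the low-energy limit σ → 4πa²; both the model and the formula for σ are assumptions here, not results derived in this section:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Sketch (not from the text): hard-sphere partial-wave scattering. The standard
# phase shifts are tan(eta_l) = j_l(ka)/y_l(ka); the total cross-section
#   sigma = (4*pi/k^2) * sum_l (2l + 1) * sin^2(eta_l)
# approaches 4*pi*a^2 in the low-energy limit ka << 1, where only the s-wave
# survives (eta_0 = -ka). The formula for sigma is a standard assumed result.
def sigma_hard_sphere(k, a=1.0, lmax=20):
    l = np.arange(lmax + 1)
    eta = np.arctan2(spherical_jn(l, k * a), spherical_yn(l, k * a))
    return (4.0 * np.pi / k**2) * np.sum((2 * l + 1) * np.sin(eta) ** 2)

k, a = 0.01, 1.0
print(sigma_hard_sphere(k, a) / (4.0 * np.pi * a**2))   # -> 1 as ka -> 0
```

Because each partial wave enters f(χ) coherently through e^{2iη_l} − 1, changing even one phase shift redistributes the angular pattern, which is the interference effect described above.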
Moving forward at this point requires some rather sophisticated treatments. How-
ever, we can use the semiclassical methods developed in this chapter to estimate the
phase shifts.
R is the radius of a sphere about the scattering center and λ(r) is the de Broglie wavelength
λ(r) = h̄/p = 1/k(r) = h̄/(μv(1 − V(r)/E − b²/r²)^{1/2})    (3.117)
Now, to clean things up a bit, we add and subtract an integral over k (we do this to get
rid of the R dependence, which will cause problems when we take the limit R → ∞):
η_l^{SC} = lim_{R→∞} [∫_{r_c}^R k(r) dr − ∫_{r_c}^R k dr + ∫_{r_c}^R k dr − kR + kbπ/2]    (3.121)
        = ∫_{r_c}^∞ (k(r) − k) dr − k(r_c − bπ/2)    (3.122)
        = ∫_{r_c}^∞ (k(r) − k) dr − kr_c + π(l + 1/2)/2    (3.123)
which yields the classical result we obtained previously. So, why did we bother? From
this we can derive a simple and useful connection between the classical deflection
angle and the rate of change of the semiclassical phase shift with angular momentum,
dη_l^{SC}/dl. First, recall the Leibniz rule for taking derivatives of integrals:
(d/dx) ∫_{a(x)}^{b(x)} f(x, y) dy = f(x, b(x)) (db/dx) − f(x, a(x)) (da/dx) + ∫_{a(x)}^{b(x)} (∂f(x, y)/∂x) dy    (3.127)
Taking the derivative of η_l^{SC} with respect to l, using the last equation and the relation
(∂b/∂l)_E = 1/k, we find that
dη_l^{SC}/dl = χ/2    (3.128)
Next, we examine the differential cross-section, I(χ). The scattering amplitude is
f(χ) = (λ/2i) Σ_{l=0}^∞ (2l + 1) e^{2iη_l} P_l(cos χ)    (3.129)
where we use λ = 1/k and exclude the singular point where χ = 0 since this
contributes nothing to the total flux.
Now, we need a mathematical identity to take this to the semiclassical limit where
the potential varies slowly with wavelength. What we do is to first relate the Legendre
polynomial, P_l(cos θ), to a zeroth-order Bessel function for small values of θ (θ ≪ 1),
P_l(cos θ) = J_0((l + 1/2)θ)    (3.130)
Now, when x = (l + 1/2)θ ≫ 1 (that is, large angular momentum), we can use the
asymptotic expansion of J0 (x)
J_0(x) → √(2/(πx)) sin(x + π/4)    (3.131)
Pulling this together,
P_l(cos θ) → [2/(π(l + 1/2)θ)]^{1/2} sin((l + 1/2)θ + π/4)
          ≈ [2/(π(l + 1/2))]^{1/2} sin((l + 1/2)θ + π/4)/(sin θ)^{1/2}    (3.132)
for θ(l + 1/2) ≫ 1. Thus, we can write the semiclassical scattering amplitude as
f(χ) = −λ Σ_{l=0}^∞ [(l + 1/2)/(2π sin χ)]^{1/2} (e^{iφ⁺} − e^{iφ⁻})    (3.133)
where
φ^± = 2η_l ± (l + 1/2)χ ± π/4    (3.134)
The phases are rapidly oscillating functions of l. Consequently, the majority of the
terms must cancel and the sum is determined by the ranges of l for which either φ +
or φ − is extremized. This implies that the scattering amplitude is determined almost
exclusively by phase shifts that satisfy
2(dη_l/dl) ± χ = 0    (3.135)
where the + is for dφ + /dl = 0 and the − is for dφ − /dl = 0. This demonstrates that
only the phase shifts corresponding to impact parameter b can contribute significantly
to the differential cross-section in the semiclassical limit. Thus, the classical condition
for scattering at a given deflection angle χ is that l be large enough for Equation 3.135
to apply.
Let ψ_o be the semiclassical wave function describing the motion in one well with
energy E_o. Assume that ψ_o is exponentially damped on both sides of the well and that
the wave function is normalized so that the integral over ψ_o² is unity. When tunneling
is taken into account, the wave functions corresponding to the new energy levels, E_1
and E_2, are the symmetric and antisymmetric combinations of ψ_o(x) and ψ_o(−x):
ψ_1 = (ψ_o(x) + ψ_o(−x))/√2
ψ_2 = (ψ_o(x) − ψ_o(−x))/√2
where ψo (−x) can be thought of as the contribution from the zeroth-order wave
function in the other well. In well 1, ψo (−x) is very small, in well 2, ψo (+x) is
very small, and the product ψo (x)ψo (−x) is vanishingly small everywhere. Also, by
construction, ψ1 and ψ2 are normalized.
1. Assume that ψ_o and ψ_1 are solutions of the Schrödinger equations
   ψ_o″ + (2m/h̄²)(E_o − V)ψ_o = 0
   and
   ψ_1″ + (2m/h̄²)(E_1 − V)ψ_1 = 0
   Multiply the former by ψ_1 and the latter by ψ_o, combine and subtract
   equivalent terms, and integrate over x from 0 to ∞ to show that
   E_1 − E_o = −(h̄²/m) ψ_o(0)ψ_o′(0)
   Perform a similar analysis to show that
   E_2 − E_o = +(h̄²/m) ψ_o(0)ψ_o′(0)
2. Show that the unperturbed semiclassical wave function is
   ψ_o(0) = √(ω/(2πv_o)) exp(−(1/h̄)∫_0^a |p| dx)
   and
   ψ_o′(0) = (mv_o/h̄) ψ_o(0)
   where v_o = √(2|E_o − V(0)|/m) and a is the classical turning point at
   E_o = V(a).
3. Combining your results, show that the tunneling splitting is
   ΔE = (h̄ω/π) exp(−(1/h̄)∫_{−a}^{+a} |p| dx)
   where the integral is taken between classical turning points on either side
   of the barrier.
4. Assuming that the potential in the barrier is an upside-down parabola,
   V(x) ≈ V_o − kx²/2, what is the tunneling splitting?
5. Now, taking α = 0.1, expand the potential about the barrier and deter-
mine the harmonic force constant for the upside-down parabola. Use the
equations you derived and compute the tunneling splitting for a proton in
this well.
Problem 3.7 Use the semiclassical approximation to determine the average kinetic
energy of a particle in a stationary state.
Problem 3.8 Use the result of the previous problem to determine the average kinetic
energy of a particle in the following potentials:
1. V = mω²x²/2
2. V = Vo cot2 (π x/a) for 0 < x < a
Problem 3.9 Use the semiclassical approximation to determine the form of the po-
tential V (x) for a given energy spectrum E n . Assume V (x) to be an even function
V (x) = V (−x) increasing monotonically for x > 0.
Problem 3.10 Use the Ritz variational principle to show that any purely attractive
one-dimensional potential well has at least one bound state.
Problem 3.11 Consider a particle of mass m moving in a potential λV (x) that satisfies
the following conditions: For x < 0 and x > a, V (x) = 0 and for 0 ≤ x ≤ a,
λ ∫_0^a V(x) dx < 0
Problem 3.12 Let us revisit the double well tunneling problem by making the fol-
lowing approximation to the tunneling doublet.
1
ψ± = √ (φ0 (x) ± φ0 (−x))
2
where φ_0(x) is a quasi-classical wave function describing motion in the right-hand
well.
φ_0″ − (2m/h̄²)(V(x) − E_0)φ_0 = 0.
E_− − E_+ = (4h̄²/m) φ_0(0)φ_0′(0)
4 Quantum Dynamics
(and Other Un-American
Activities)
Dr. Condon, it says here that you have been at the forefront of a rev-
olutionary movement in physics called quantum mechanics. It strikes
this hearing that if you could be at the forefront of one revolutionary
movement . . . you could be at the forefront of another.
—House Committee on Un-American Activities to
Dr. Edward Condon, 1948
4.1 INTRODUCTION
This chapter is really the heart and soul of this text—not only in a physical sense but
also in a scientific sense. In the early days of quantum mechanics and especially chem-
ical physics, we were mostly interested in discerning the energy states or predicting
equilibrium structures of a given atomic or molecular system. This provided a good
test of quantum theory and deepened our understanding of the nature of the bonding
and intermolecular interactions that define a chemical system. With the introduction
of time-resolved laser techniques in the 1980s, modern investigations have focused
upon pulling apart how an atomic or molecular system undergoes transitions from
one state to the next and how the quantum interferences between different pathways
influence these transitions. Typically, in a molecular system we treat the electronic de-
grees of freedom using rigorous quantum theory and allow their energies and states to
be parametrized by the instantaneous positions of the nuclei. This is justified through
the Born–Oppenheimer approximation, which allows us to separate the fast motion
of the electrons from the far slower motions of the nuclei by virtue of their disparity in
mass. As we shall see in this chapter, things become interesting when the separation
of time scales is no longer valid.
We shall begin with a brief review of the bound states of a coupled two-level
system. This is a model problem that captures the essential physics of a wide range of
situations. We shall discuss this within first a time-independent perspective and then
a time-dependent perspective. Finally, at the end of the chapter we shall discuss what
happens when we allow the two-level system to have an additional harmonic degree
of freedom that couples the transitions between the two states.
85
where
tan(2θ) = V/Δ    (4.3)
defines a "mixing angle." It is straightforward to determine the energy levels of the
coupled system in terms of the mixing angle:
ε_± = E_m ∓ Δ sec(2θ)    (4.4)
where θ is the mixing angle defined above. What we do is to rotate the initial two
orthogonal state vectors, |1⟩ and |2⟩, which lie on a two-dimensional plane, by an
angle θ to form new state vectors, |+⟩ and |−⟩,
( |+⟩ )   (  cos(θ)   sin(θ) ) ( |1⟩ )
( |−⟩ ) = ( −sin(θ)   cos(θ) ) ( |2⟩ )    (4.7)
FIGURE 4.1 Energy levels of the two-state system. The superimposed circles are representa-
tive of the initial (localized) and final (delocalized) states of the system.
These new states, which we shall term the "delocalized" basis, are linear combinations
of the original localized states as illustrated in Figure 4.1. When the energy gap
Δ ≫ V, tan(2θ) = V/Δ becomes small and θ → 0. In this limit, the eigenstates
become more and more like the initial localized states. In the other limit, as Δ → 0
and the initial states become degenerate, tan(2θ) diverges and θ = π/4. In this case,
the true eigenstates of the system are the totally delocalized states:
|±⟩ = (1/√2)(|1⟩ ± |2⟩)    (4.8)
2
Let us briefly examine the impact of these two limits on the final energies of the
system. From above, the exact energy levels are given by
ε_± = E_m ∓ √(Δ² + V²)    (4.9)
We can use the binomial theorem to expand the exact energies either in terms of the
initial energy gap or in terms of the coupling. When V/Δ ≪ 1, we can expand
ε_± = E_m ∓ Δ√(1 + (V/Δ)²)
    = E_m ∓ Δ(1 + (1/2)(V²/Δ²) + · · ·)    (4.10)
to obtain a lowest-order correction to the energy levels:
ε_+ ≈ E_1 − V²/(2Δ)    (4.11)
ε_− ≈ E_2 + V²/(2Δ)    (4.12)
(Note, we have assumed that E_1 < E_2.)
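A two-line numerical check of this expansion (a sketch, not from the text; the numbers are arbitrary test values): diagonalize the 2×2 Hamiltonian exactly and compare with the perturbative energies above.

```python
import numpy as np

# Sketch (not from the text; E1, E2, V are arbitrary test values): compare the
# exact two-level energies eps_(+/-) = E_m -/+ sqrt(Delta^2 + V^2) with the
# weak-coupling estimates eps_+ ~ E1 - V^2/(2*Delta), eps_- ~ E2 + V^2/(2*Delta),
# where E_m = (E1 + E2)/2 and Delta = (E2 - E1)/2.
E1, E2, V = 0.0, 2.0, 0.1
Delta = 0.5 * (E2 - E1)

H = np.array([[E1, V], [V, E2]])
eps_exact = np.linalg.eigvalsh(H)          # ascending: [eps_+, eps_-]

eps_plus_pt = E1 - V**2 / (2.0 * Delta)
eps_minus_pt = E2 + V**2 / (2.0 * Delta)
print(eps_exact, eps_plus_pt, eps_minus_pt)
```

With V/Δ = 0.1 the perturbative levels agree with exact diagonalization to order (V/Δ)⁴, which is the next term in the binomial series.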
In the opposite limit, where Δ/V ≪ 1, we pull V out from under the square root
and perform the binomial expansion
ε_± = E_m ∓ V(1 + (1/2)(Δ²/V²) + · · ·)    (4.13)
ε_± = E_1 ∓ |V|    (4.14)
We can perform a similar analysis on the wave functions. In the weak coupling
limit, V/Δ ≪ 1 and hence θ ≈ 0. Thus, we can expand the coefficients of the |±⟩ about
θ = 0 to obtain the lowest-order corrections to the states:
H = Ho + V (4.19)
where Ho represents that part of the problem we can solve exactly and V some extra
part that we cannot. This we take as a correction or perturbation to the exact problem.
Perturbation theory can be formulated in a variety of ways, but we begin with
what is typically termed Rayleigh–Schrödinger perturbation theory, the most
commonly used approach. Let H_o|φ_n⟩ = W_n|φ_n⟩ and (H_o + λV)|ψ⟩ =
E_n|ψ⟩ be the Schrödinger equations for the uncoupled and perturbed systems.
In what follows, we take λ as a small parameter and expand the exact energy and
state in powers of λ:
E_n = E_n^(0) + λE_n^(1) + λ²E_n^(2) + · · ·
|ψ⟩ = |ψ_n^(0)⟩ + λ|ψ_n^(1)⟩ + λ²|ψ_n^(2)⟩ + · · ·
Since we require that |ψ⟩ be a solution of the exact Hamiltonian with energy E_n,
collecting powers of λ gives
H_o|ψ_n^(0)⟩ = E_n^(0)|ψ_n^(0)⟩
H_o|ψ_n^(1)⟩ + V|ψ_n^(0)⟩ = E_n^(0)|ψ_n^(1)⟩ + E_n^(1)|ψ_n^(0)⟩
and so on.
The λ⁰ problem is just the unperturbed problem we can solve. Taking the λ¹ terms
and multiplying by ⟨ψ_n^(0)| we obtain
⟨ψ_n^(0)|H_o|ψ_n^(1)⟩ + ⟨ψ_n^(0)|V|ψ_n^(0)⟩ = E_n^(0)⟨ψ_n^(0)|ψ_n^(1)⟩ + E_n^(1)⟨ψ_n^(0)|ψ_n^(0)⟩    (4.24)
In other words, we obtain the first-order correction for the nth eigenstate:
E_n^(1) = ⟨ψ_n^(0)|V|ψ_n^(0)⟩    (4.25)
Rearranging things a bit, one obtains an expression for the overlap between the un-
perturbed and perturbed states:
⟨ψ_m^(0)|ψ_n^(1)⟩ = ⟨ψ_m^(0)|V|ψ_n^(0)⟩/(E_n^(0) − E_m^(0))    (4.27)
Now, we use the resolution of the identity to project the perturbed state onto the
unperturbed states:
|ψ_n^(1)⟩ = Σ_m |ψ_m^(0)⟩⟨ψ_m^(0)|ψ_n^(1)⟩
         = Σ_{m≠n} [⟨ψ_m^(0)|V|ψ_n^(0)⟩/(E_n^(0) − E_m^(0))] |ψ_m^(0)⟩    (4.28)
where we explicitly exclude the n = m term to avoid the singularity. Thus, the first-
order correction to the wave function is
|ψ_n⟩ ≈ |ψ_n^(0)⟩ + Σ_{m≠n} [⟨ψ_m^(0)|V|ψ_n^(0)⟩/(E_n^(0) − E_m^(0))] |ψ_m^(0)⟩    (4.29)
H = E o σo − Aσx (4.31)
Now we apply an electric field. When the dipole moment of the molecule is aligned
parallel with the field, the molecule is in a lower energy configuration, whereas for the
antiparallel case, the system is in a higher energy configuration. This lifts the degen-
eracy between the two otherwise equivalent states. We can denote the contribution to
the Hamiltonian from the electric field as
H′ = μ_e E σ_z    (4.32)
|H − λI | = 0 (4.34)
These are the exact eigenvalues. In Figure 4.2 we show the variation of the energy
levels as a function of the field strength.
FIGURE 4.2 Variation of energy levels (λ± ) as a function of the applied field for a polar
molecule in an electric field.
to write
√(A² + μ_e²E²) = A[1 + (μ_e E/A)²]^{1/2}
              ≈ A[1 + (1/2)(μ_e E/A)²]    (4.37)
Thus in the weak field limit, the system can still tunnel between configurations, and
the energy splittings are given by
E_± ≈ (E_o ∓ A) ∓ μ_e²E²/(2A)    (4.38)
To understand this a bit further, let us use perturbation theory in which the tun-
neling dominates and treat the external field as a perturbing force. The unperturbed
Hamiltonian can be diagonalized by taking symmetric and antisymmetric combina-
tions of the |1 and |2 basis functions. This is exactly what we did above with the
time-dependent coefficients. Here the stationary states are
|±⟩ = (1/√2)(|1⟩ ± |2⟩)    (4.39)
with energies E_± = E_o ∓ A, so in the |±⟩ basis, the unperturbed Hamiltonian becomes
H = ( E_o − A      0
      0            E_o + A )    (4.40)
To compute ⟨+|H′|+⟩ we need to transform H′ from the {|1⟩, |2⟩} uncoupled basis
to the new |±⟩ coupled basis. This is accomplished by inserting the identity on either
side of H′ and collecting terms:
⟨+|H′|+⟩ = ⟨+|(|1⟩⟨1| + |2⟩⟨2|) H′ (|1⟩⟨1| + |2⟩⟨2|)|+⟩    (4.42)
         = (1/2)(⟨1| + ⟨2|) H′ (|1⟩ + |2⟩)    (4.43)
         = 0    (4.44)
W_+^(2) = Σ_{m≠i} H′_{im}H′_{mi}/(E_i − E_m)    (4.45)
        = ⟨+|H′|−⟩⟨−|H′|+⟩/(E_+^(0) − E_−^(0))    (4.46)
        = (μ_e E)²/(E_o − A − E_o − A)    (4.47)
        = −μ_e²E²/(2A)    (4.48)
This also applies to W_−^(2) = +μ_e²E²/(2A). So we get the same variation as we estimated
above by expanding the exact energy levels when the field was weak.
Now let us examine the wave functions. Remember, the first-order correction to
the eigenstates is given by
|+⟩^(1) = [⟨−|H′|+⟩/(E_+ − E_−)] |−⟩    (4.49)
        = −(μE/(2A)) |−⟩    (4.50)
Thus,
|+⟩ = |+⟩^(0) − (μE/(2A))|−⟩    (4.51)
|−⟩ = |−⟩^(0) + (μE/(2A))|+⟩    (4.52)
So we see that by turning on the field, we begin to mix the two tunneling states.
However, since we have assumed that μE/A ≪ 1, the final state is not too unlike our
initial tunneling states.
√(A² + μ_e²E²) = Eμ_e[1 + (A/(μ_e E))²]^{1/2}
              = Eμ_e[1 + (1/2)(A/(μ_e E))² + · · ·]
              ≈ Eμ_e + A²/(2μ_e E)    (4.53)
For very strong fields, the first term dominates and the energy splitting becomes linear
in the field strength. In this limit, the tunneling has been effectively suppressed.
Let us analyze this limit using perturbation theory. Here we will work in the {|1⟩, |2⟩}
basis and treat the tunneling as a perturbation. Since the electric field part of the
Hamiltonian is diagonal in the 1, 2 basis, our unperturbed strong-field Hamiltonian is
simply
H = ( E_o − μ_e E      0
      0                E_o + μ_e E )    (4.54)
and the perturbation is the tunneling component. As stated previously, the first-order
corrections to the energy vanish and we are forced to resort to second-order pertur-
bation theory to get the lowest-order energy correction. The result is
A2
W (2) = ± (4.55)
2μE
which is exactly what we obtained by expanding the exact eigenenergies above.
Likewise, the lowest-order corrections to the state vectors are
|1⟩ = |1⟩^0 − (A/(2μE)) |2⟩^0    (4.56)
|2⟩ = |2⟩^0 + (A/(2μE)) |1⟩^0    (4.57)
So, for large E the second-order correction to the energy vanishes, the correction to
the wave function vanishes, and we are left with the unperturbed (that is, localized)
states. We also find that the perturbative results exactly agree with the series expansion
results we obtained above. Thus, perturbative approaches work in the limit that the
coupling remains small compared with the energy gap of the unperturbed system.
where we define |φ⟩ and W to be the eigenvectors and eigenvalues of part of the full
problem. We shall call this the "uncoupled" problem and assume it is something we
can easily solve:
We want to write the solution of the fully coupled problem in terms of the solution
of the uncoupled problem. First we note that
|ψ⟩ = |φ⟩ + [1/(H_o − E)] V|ψ⟩    (4.61)
This may seem a bit circular. But we can iterate the solution:
$$|\psi\rangle = |\phi\rangle + \frac{1}{H_o - E}\,V|\phi\rangle + \frac{1}{H_o - E}\,V\,\frac{1}{H_o - E}\,V|\psi\rangle \qquad (4.62)$$
Taking this out to all orders, one obtains:
$$|\psi\rangle = |\phi\rangle + \sum_{n=1}^{\infty}\left(\frac{1}{H_o - E}\,V\right)^n |\phi\rangle \qquad (4.63)$$
Assuming that the series converges rapidly (true in the weak-coupling case, $V \ll H_o$),
we can truncate the series at various orders and write
and so on. Let us look at $|\psi^{(1)}\rangle$ for a moment. We can insert unity in the form of
$\sum_n |\phi_n\rangle\langle\phi_n|$:
$$|\psi_n^{(1)}\rangle = |\phi_n\rangle + \sum_{m\neq n}\frac{1}{W_n - H_o}\,|\phi_m\rangle\langle\phi_m|V|\phi_n\rangle \qquad (4.67)$$
that is,
$$|\psi_n^{(1)}\rangle = |\phi_n\rangle + \sum_{m\neq n}\frac{1}{W_n - W_m}\,|\phi_m\rangle\langle\phi_m|V|\phi_n\rangle \qquad (4.68)$$
Likewise,
$$|\psi_n^{(2)}\rangle = |\psi_n^{(1)}\rangle + \sum_{l,m}\frac{V_{lm}V_{mn}}{(W_m - W_l)(W_n - W_m)}\,|\phi_l\rangle \qquad (4.69)$$
where
$$V_{lm} = \langle\phi_l|V|\phi_m\rangle \qquad (4.70)$$
is the matrix element of the coupling in the uncoupled basis. These last two expressions
are the first- and second-order corrections to the wave function.
Note that we can actually sum the perturbation series exactly by noting that it has the form of a geometric progression, which for $|x| < 1$ converges uniformly:
$$\frac{1}{1-x} = 1 + x + x^2 + \cdots = \sum_{n=0}^{\infty} x^n \qquad (4.71)$$
Applying this to the operator series gives
$$|\psi\rangle = \frac{1}{1 - G_o V}\,|\phi\rangle \qquad (4.74)$$
where G o = (Ho − E)−1 (this is the “time-independent” form of the propagator for
the uncoupled system). This particular analysis is particularly powerful in deriving
the propagator for the fully coupled problem.
We now calculate the first-order and second-order corrections to the energy of the
system. To do so, we make use of the wave functions we just derived and write
$$E_n^{(1)} = \langle\psi_n^{(0)}|H|\psi_n^{(0)}\rangle = W_n + \langle\phi_n|V|\phi_n\rangle = W_n + V_{nn} \qquad (4.75)$$
So the lowest-order correction to the energy is simply the matrix element of the per-
turbation in the uncoupled or unperturbed basis. That was easy. What about the next
order corrections? Using the same procedure as previously (assuming the states are
normalized)
$$E_n^{(2)} = \langle\psi_n^{(1)}|H|\psi_n^{(1)}\rangle = \langle\phi_n|H|\phi_n\rangle + \sum_{m\neq n}\frac{1}{W_n - W_m}\,\langle\phi_n|H|\phi_m\rangle\langle\phi_m|V|\phi_n\rangle + O[V^3]$$
$$= W_n + V_{nn} + \sum_{m\neq n}\frac{|V_{nm}|^2}{W_n - W_m} \qquad (4.76)$$
Notice that I am avoiding the case where m = n, as that would cause the denominator
to vanish and the term to diverge. This must be avoided. The degenerate case
must be handled via explicit matrix diagonalization, although closed forms can be obtained
easily for the doubly degenerate case. Also note that the successive approximations
to the energy require one less level of approximation to the wave function. Thus,
second-order energy corrections are obtained from first-order wave functions.
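The statement that second-order energies follow from first-order wave functions can be verified numerically. The sketch below (illustrative; a randomly generated Hermitian perturbation on evenly spaced levels) compares Eq. 4.76 with exact diagonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
W = np.arange(1.0, N + 1)                  # nondegenerate unperturbed energies
V = rng.normal(scale=0.01, size=(N, N))
V = (V + V.T) / 2                          # Hermitian perturbation, |V| << level spacing

exact = np.linalg.eigvalsh(np.diag(W) + V)

# Eq. 4.76: E_n ≈ W_n + V_nn + sum_{m != n} |V_nm|^2 / (W_n - W_m)
E2 = np.array([W[n] + V[n, n]
               + sum(V[n, m]**2 / (W[n] - W[m]) for m in range(N) if m != n)
               for n in range(N)])

print(np.max(np.abs(np.sort(E2) - exact)))   # residual is O(V^3)
```

With the coupling three orders of magnitude smaller than the level spacing, the second-order energies agree with the exact ones to roughly the cube of that ratio.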
$$\vec\mu_a = q\,\vec r_a \qquad (4.77)$$
$$\vec\mu_b = q\,\vec r_b \qquad (4.78)$$
We will assume that $R \gg r_a, r_b$ so that the electronic orbitals on each atom do not
come into contact.
Atom A creates an electrostatic potential U for atom B in which the charges in
B can interact. This creates an interaction energy W . Since both atoms are neutral,
the most important source for the interactions will come from the dipole–dipole
interactions. Thus, the dipole of A interacts with an electric field E = −∇U generated
by the dipole field about B and vice versa. To calculate the dipole–dipole interaction,
we start with the expression for the electrostatic potential created by μa at B,
$$U(R) = \frac{1}{4\pi\varepsilon_o}\,\frac{\vec\mu_a\cdot\vec R}{R^3} \qquad (4.79)$$
Thus,
$$\vec E = -\nabla U = -\frac{q}{4\pi\varepsilon_o R^3}\left[\vec r_a - 3(\vec r_a\cdot\hat n)\hat n\right] \qquad (4.80)$$
$$W = -\vec\mu_b\cdot\vec E = \frac{e^2}{R^3}\left[\vec r_a\cdot\vec r_b - 3(\vec r_a\cdot\hat n)(\vec r_b\cdot\hat n)\right] \qquad (4.81)$$
where we restrict the summation to exclude the $|1s_a; 1s_b\rangle$ state. Since $W \propto 1/R^3$ and
the energy denominator, $-2E_I - E_n - E_{n'}$, is negative, we can write
$$E^{(2)} = -\frac{C}{R^6} \qquad (4.86)$$
which explains the origin of the 1/R 6 attraction.
Now we evaluate the proportionality constant C. Written explicitly,
$$C = e^4 \sum_{nlm}\sum_{n'l'm'} \frac{\left|\langle nlm; n'l'm'|(x_a x_b + y_a y_b - 2z_a z_b)|1s_a; 1s_b\rangle\right|^2}{2E_I + E_n + E_{n'}} \qquad (4.87)$$
Since $n, n' \geq 2$ and $|E_n| = E_I/n^2 < E_I$, we can replace $E_n$ and $E_{n'}$ with 0
without appreciable error. Now, we can use the resolution of the identity
$$1 = \sum_{nlm}\sum_{n'l'm'} |nlm; n'l'm'\rangle\langle nlm; n'l'm'| \qquad (4.88)$$
where the sign change reflects the sign difference in the image charges. So we get
$$W = -\frac{e^2}{8d^3}\left(x_a^2 + y_a^2 + 2z_a^2\right) \qquad (4.99)$$
as the interaction between a dipole and its image. Taking the atom to be in the 1s
ground state, the first-order term is nonzero:
$$E^{(1)} = -\frac{e^2}{8d^3}\cdot\frac{4}{3}\langle 1s|r^2|1s\rangle = -\frac{e^2 a_o^2}{2d^3} \qquad (4.101)$$
Thus an atom is attracted to the wall with an interaction energy that varies as 1/d 3 .
This is a first-order effect since there is perfect correlation between the two dipoles.
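The ingredient in Eq. 4.101 is the hydrogenic expectation value $\langle 1s|r^2|1s\rangle = 3a_o^2$, which is easy to confirm by direct integration (a sketch in atomic units, $a_o = 1$; not from the text):

```python
import numpy as np

# Radial check of <1s| r^2 |1s> = 3 a0^2 for hydrogen (atomic units, a0 = 1),
# with psi_1s(r) = e^{-r} / sqrt(pi)
r = np.linspace(0.0, 60.0, 600_001)
dr = r[1] - r[0]
density = (np.exp(-2 * r) / np.pi) * 4 * np.pi * r**2   # |psi|^2 * 4 pi r^2
expect_r2 = np.sum(density * r**2) * dr

print(expect_r2)   # ≈ 3.0, so E(1) = -(e^2/8d^3)(4/3)<r^2> = -e^2 a0^2/(2 d^3)
```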
then
$$i\hbar\,\frac{1}{f(t)}\frac{\partial f(t)}{\partial t} = \frac{1}{\phi(r)}\,H\phi(r) \qquad (4.104)$$
Since the left-hand side is a function of only t and the right-hand side is a function of
only r, both sides must be equal to the same constant, E. Hence,
This is stationary because the probability density $P(r) = |\psi(r,t)|^2$ is independent of
time. More generally, since the complete set of eigenfunctions forms a suitable basis
for representing any arbitrary function $\Psi$,
$$\Psi(r) = \sum_m \langle\phi_m|\Psi\rangle\,\phi_m(r) \qquad (4.109)$$
where ψ(0) is the state at time t = 0 and ψ(t) is the state evolved forward in time.
The operator U (t, 0) is the time-evolution operator. It is formally defined via the
expansion
$$U(t,0) = 1 - \frac{it}{\hbar}H + \frac{1}{2!}\left(\frac{it}{\hbar}\right)^2 H^2 - \cdots \qquad (4.112)$$
The time-evolution operator has a number of useful properties:
1. It is unitary: $I = U^\dagger U$ and $U(t',t) = U^*(t,t') = U^\dagger(t,t')$.
2. U obeys the semigroup property $U(t,t_o) = U(t,t')U(t',t_o)$ for $t \geq t' \geq t_o$.
3. U itself is a solution of the time-dependent Schrödinger equation,
$$i\hbar\frac{\partial}{\partial t}U(t,t') = HU(t,t') \qquad (4.113)$$
4. $U(t,t') = U(t-t')$ when H is time independent.
5. $U(0) = 1$.
Notice that U is a polynomial function of the operator H. Hence, if we know the
eigenvectors and eigenvalues of H, we know that $f(H)|\phi_n\rangle = f(a_n)|\phi_n\rangle$. Thus, we can
write U in an eigenbasis representation as
$$U = \sum_n e^{-i\omega_n t}\,|\phi_n\rangle\langle\phi_n| \qquad (4.114)$$
where $\omega_n = E_n/\hbar$.
This form is especially convenient when we have at hand the eigenvalues and eigen-
vectors of the system.
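The eigenbasis form of U is also the easiest way to propagate states numerically. The sketch below (illustrative; a random Hermitian matrix stands in for H) builds U from Eq. 4.114 and checks the unitarity and semigroup properties listed above:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2                   # model Hermitian Hamiltonian

E, phi = np.linalg.eigh(H)          # eigenvalues E_n and eigenvectors |phi_n>

def U_op(t):
    # U(t) = sum_n exp(-i E_n t / hbar) |phi_n><phi_n|   (Eq. 4.114, omega_n = E_n/hbar)
    return sum(np.exp(-1j * E[n] * t / hbar) * np.outer(phi[:, n], phi[:, n])
               for n in range(len(E)))

U = U_op(0.7)
print(np.allclose(U.conj().T @ U, np.eye(4)))       # unitarity: True
print(np.allclose(U_op(0.4) @ U_op(0.3), U))        # semigroup property: True
```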
and this time we consider the solutions of the time-dependent Schrödinger equation:
$$i\hbar\frac{\partial}{\partial t}|\psi\rangle = H|\psi\rangle \qquad (4.116)$$
We can write |ψ in terms of either the |± eigenstates of H or in terms of the localized
basis states
where the c’s are time-dependent coefficients. Either representation will work and we
can transform between the two easily enough.
From our discussion above, the time evolution of |ψ is generated by
where ω± = ε± /h̄. If our initial state, however, is one of the states of the uncoupled
system, say |ψ(0) = |1, then we need to write this in terms of the |±. Using the
rotation matrix above,
$$|\psi(t)\rangle = U(t,0)|1\rangle = e^{-i\omega_+ t}\cos\theta\,|+\rangle - e^{-i\omega_- t}\sin\theta\,|-\rangle \qquad (4.121)$$
We now ask, what is the probability that at time t > 0 the system will be found in the
other state? It is straightforward to show that
$$P_{12}(t) = \frac{V^2}{V^2 + \Delta^2}\,\sin^2(\omega_R t) \qquad (4.122)$$
where $\omega_R = \sqrt{\Delta^2 + V^2}/\hbar$ is the Rabi frequency, which gives the frequency at which
the system oscillates between the two states.
FIGURE 4.3 Rabi oscillation between two degenerate states starting off in |1.
In Figure 4.3 we show $P_{12}(t)$ and $P_{11}(t)$ for the case of a degenerate system,
$\Delta = 0$. Here $\omega_R = V/\hbar$ and the system oscillates between the two localized states
with a period $\tau = \pi/\omega_R$. In other words, if we prepare the system in state $|1\rangle$, then for
every $t = n\tau$ we have a 100% likelihood of finding the system in state 1, and for every
odd multiple of $\tau/2$ we have a 100% likelihood of finding the system in state 2. The amount of
amplitude transferred depends upon both the coupling V and the energy gap $\Delta$. For
the degenerate case, $\Delta = 0$, 100% of the initial population in 1 is transferred to
2 and back every $\pi/\omega_R$. For nondegenerate cases, a maximum of $V^2/(V^2 + \Delta^2)$ is
transferred every Rabi period. Ultimately, in the weak-coupling limit, the population
remains localized in the initial state.
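A minimal numerical check of Eq. 4.122 (illustrative parameters, with the convention that the two localized states sit at energies $\mp\Delta$, so the gap is $2\Delta$) compares exact propagation against the Rabi formula:

```python
import numpy as np

hbar = 1.0
Delta, V = 0.5, 0.3         # illustrative values; 2*Delta is the gap between
H = np.array([[-Delta, V],   # the localized states |1> and |2>
              [V, Delta]])

E, C = np.linalg.eigh(H)
t = np.linspace(0.0, 20.0, 2001)

# Exact propagation of |psi(0)> = |1> through the eigenbasis
psi0 = np.array([1.0, 0.0])
coeff = C.T @ psi0
psi_t = C @ (np.exp(-1j * np.outer(E, t) / hbar) * coeff[:, None])
P12_exact = np.abs(psi_t[1])**2

# Rabi formula, Eq. 4.122
omega_R = np.sqrt(Delta**2 + V**2) / hbar
P12_rabi = V**2 / (V**2 + Delta**2) * np.sin(omega_R * t)**2

print(np.max(np.abs(P12_exact - P12_rabi)))
```

The two curves coincide to machine precision, and the maximum transfer is $V^2/(V^2+\Delta^2) \approx 0.265$ for these values.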
H = Ho + λV (t) (4.123)
where Ho represents the Hamiltonian for the uncoupled system and λV is some
coupling. We begin by writing the state in the basis of unperturbed states as
$$\psi(t) = \sum_n c_n(t)\,|\phi_n\rangle \qquad (4.124)$$
where the expansion coefficients $c_n(t) = \langle\phi_n|\psi(t)\rangle$ are simply the projection of the
evolving state onto the unperturbed basis. In this representation, the time-dependent
Schrödinger equation becomes
$$i\hbar\,\dot c_n(t) = \sum_m \left[\varepsilon_n\delta_{nm} + \lambda V_{nm}(t)\right]c_m(t) \qquad (4.125)$$
where $V_{nm}(t) = \langle\phi_n|V(t)|\phi_m\rangle$ is the matrix element of the coupling in the $\phi_n$ basis.
As such, this is a set of coupled linear differential equations to first order in time,
and, in principle at least, we can determine the coefficients for the time-evolved state.
The coupling between the equations comes from the fact that the operator V (t) is
nondiagonal in this basis representation. When λV = 0, our system of equations
becomes totally decoupled and the solutions are simply
$$c_n(t) = b_n\,e^{-i\varepsilon_n t/\hbar} \qquad (4.126)$$
where bn depends entirely upon the choice of initial condition. We can also make a
simple change of variables by writing the general solution for the coefficients as
$$c_n(t) = b_n(t)\,e^{-i\varepsilon_n t/\hbar} \qquad (4.127)$$
and determine the equations that govern the evolution of the new coefficients. The
advantage here is that this will eliminate any rapidly evolving phase terms e−iεn t/h̄ ,
and we expect the bn (t) to be slowly varying functions of time. Upon substitution into
the TDSE and introducing the Bohr frequency ωnm = (εn − εm )/h̄,
$$i\hbar\,\dot b_n(t) = \lambda\sum_m V_{nm}(t)\,e^{i\omega_{nm}t}\,b_m(t) \qquad (4.128)$$
Again, this is a system of linear equations first order in time, but the bn (t) coefficients
are now slowly varying in time.
Next, let us expand $b_n(t) = b_n^{(0)}(t) + \lambda b_n^{(1)}(t) + \lambda^2 b_n^{(2)}(t) + \cdots$ and substitute
this into Equation 4.128. Equating terms on each side with equal
powers of $\lambda$, one finds for $\lambda^0$
$$i\hbar\,\dot b_n^{(0)}(t) = 0 \qquad (4.129)$$
However, for $\alpha \geq 0$,
$$i\hbar\,\dot b_n^{(\alpha+1)}(t) = \sum_m e^{i\omega_{nm}t}\,V_{nm}(t)\,b_m^{(\alpha)}(t) \qquad (4.130)$$
one obtains a recursive solution whereby lower-order solutions serve as input to the
next higher-order term.
Suppose at time t = 0 our initial state was prepared in the state $|\phi_i\rangle$. Hence,
$b_i(0) = 1$ and all other $b_{n\neq i}(0) = 0$ (that is, $b_n(0) = \delta_{ni}$). Prior to t = 0,
we assume that the interaction is turned off, $V(t < 0) = 0$, and at time t = 0 it is
instantly switched on (but remains finite). To first order in the perturbation series,
$$i\hbar\,\dot b_n^{(1)}(t) = e^{i\omega_{ni}t}\,V_{ni}(t)\,b_i^{(0)}(t) \qquad (4.131)$$
This can be easily integrated
$$b_n^{(1)}(t) = \frac{1}{i\hbar}\int_0^t dt'\,V_{ni}(t')\,e^{i\omega_{ni}t'} \qquad (4.132)$$
to give the first-order probability amplitude for starting in state i and finding the
system in state n some time t later. Notice that this is the partial Fourier transform of
the coupling operator. (Partial in the sense that we integrate only out to intermediate
times.) The transition probability is then found by Pni (t) = |bn (t)|2 .
In the limit of a slowly varying perturbation, ω can be set to zero and we find
that is,
$$P_{ni}(t) = \frac{4|V_{ni}|^2}{\hbar^2}\,f(t,\omega_{ni}) \qquad (4.135)$$
where f (t, ωni ) is shown in Figure 4.4 as a function of the transition frequency ωni for
fixed t. Notice that f (t, ωni ) has a sharp peak at ωni = 0 with a height proportional to
t 2 /4 while its width (at half maximum) is given by 2π/t. A straightforward application
of the residue theorem indicates that
$$\int_{-\infty}^{\infty} d\omega\, f(t,\omega) = \frac{\pi t}{2} \qquad (4.136)$$
FIGURE 4.4 Plot of $f(t,\omega_{ni})$ versus $\omega_{ni}$ at fixed t, showing the central peak of height $t^2/4$ and width $2\pi/t$, with nodes at $\omega_{ni} = 2\pi n/t$.
so that the total area under the curve is proportional to t. Also, one has
$$\lim_{t\to\infty} f(t,\omega_{ni}) = \frac{\pi t}{2}\,\delta(\omega_{ni}) \qquad (4.137)$$
This tells us that transitions occur mainly between states whose final energies E n
do not differ from the initial energy E i by more than
δ E = 2πh̄/t (4.138)
hence, energy is approximately conserved with the spread in energy given by 2πh̄/t.
We can relate this result with the so-called time-energy uncertainty relationship
δ Eδt ≥ h̄. In a sense, the perturbation is akin to making a measurement of the energy
of the system by inducing a transition from the initial state i to the final state n. Since
the time associated with making this observation is t, the associated uncertainty with
the observation should be approximately $\hbar/t$, which is in good agreement with the
estimate above.
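The properties quoted above are easy to verify numerically. The explicit form of $f(t,\omega)$ is not shown in this excerpt; the sketch below assumes the standard first-order result $f(t,\omega) = \sin^2(\omega t/2)/\omega^2$, which reproduces the stated peak height $t^2/4$, width $2\pi/t$, and area $\pi t/2$:

```python
import numpy as np

# Assumed first-order transition kernel: f(t, w) = sin^2(w t / 2) / w^2
def f(t, w):
    w = np.asarray(w, dtype=float)
    out = np.empty_like(w)
    small = np.abs(w) < 1e-12
    out[small] = t**2 / 4                               # limiting peak value
    out[~small] = np.sin(w[~small] * t / 2)**2 / w[~small]**2
    return out

t = 5.0
w = np.linspace(-200.0, 200.0, 2_000_001)
dw = w[1] - w[0]
area = f(t, w).sum() * dw

peak = f(t, np.array([0.0]))[0]
print(peak, t**2 / 4)          # peak height t^2/4
print(area, np.pi * t / 2)     # area ≈ pi t / 2 (Eq. 4.136), up to the 1/w^2 tails
```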
For processes that occur to a continuum of final states whose energies lie within a
given interval (E f − ε, E f + ε) about some central energy E f , it should be apparent
from this discussion that we need to consider transitions from the initial state to
groups of states. Let us denote by ρ f (E f ) the density of levels so that ρ f (E f )d E f is
the number of states with energy levels in the interval (E f , E f + d E f ). Integrating
our result for a single state over a continuum of final states yields
$$P_{fi}(t) = \int_{E_f-\varepsilon}^{E_f+\varepsilon} dE_f\, P_{fi}(t,E_f)\,\rho_f(E_f) \qquad (4.139)$$
If we assume that both |V f i | and ρ f (E f ) are slowly varying over a narrow integration
range,
$$P_{fi}(t) = \frac{4}{\hbar}\,|V_{fi}|^2\rho_f(E_f)\int_{\omega_{fi}-\varepsilon}^{\omega_{fi}+\varepsilon} f(t,\omega_{fi})\,d\omega_{fi} \qquad (4.140)$$
that is,
$$k_{fi} = \frac{2\pi}{\hbar}\,|V_{fi}|^2\rho_f(E) \qquad (4.143)$$
This is often referred to as Fermi’s Golden Rule4 (even though it was first obtained
by Paul Dirac)5 since it plays an important role in many physical processes.
$$k_{fi} = \frac{2\pi}{\hbar}\,|\langle f|V|i\rangle|^2\,\delta(E_f - E_i) \qquad (4.144)$$
This is the expression for the transition rate that we previously derived for the lim-
iting case where the external field varied slowly with time. It is applicable when the
initial and final energies are the same. For nondegenerate systems, we obtain for the
transition between states 1 and 2
$$k_{21} = \frac{2\pi}{\hbar}\,|\langle 2|V|1\rangle|^2\,\delta(E_2 - E_1 - \hbar\omega) \qquad (4.145)$$
corresponding to a transition from the initial to final state that involves the absorption
of a quantum of energy h̄ω. We can also write the transition rate for the reverse 2 → 1
process as
$$k_{12} = \frac{2\pi}{\hbar}\,|\langle 1|V|2\rangle|^2\,\delta(E_1 - E_2 + \hbar\omega) \qquad (4.146)$$
Because we are dealing with Hermitian operators, $|\langle 1|V|2\rangle|^2 = |\langle 2|V|1\rangle|^2$, and we can
conclude that
k12 = k21 (4.147)
This is an example of microscopic reversibility and stems from the fact that our
equations of motion are symmetric in time.
In general, however, we rarely encounter isolated systems. Typically in chemical
dynamical systems we deal with an ensemble of identically prepared systems. Thus,
to compute a statistical transition rate for the ensemble we need to sum over all initial
conditions, weighted by their respective Boltzmann probability, and sum over all
possible final states. Let us write this as
$$P(\omega) = \sum_{f,i} k_{fi}(\omega)\,w_i \qquad (4.148)$$
where
$$w_i = \frac{e^{-\beta E_i}}{Z} \qquad (4.149)$$
$$P(-\omega) = \frac{2\pi}{\hbar}\,|F(\omega)|^2\sum_{f,i} w_i\,|\langle f|B|i\rangle|^2\,\delta(E_f - E_i + \hbar\omega) \qquad (4.152)$$
for the emission process where E f = E i −h̄ω. In order to relate these two expressions,
let us now assume that E f > E i so that i and f now serve as state indices rather
than simply referring to the initial and final states. Thus, the sum over initial states in
P(−ω) is really a sum over f ’s, so we need to write
In essence, the rate for stimulated emission is statistically lower than that for
stimulated absorption. It makes sense because at thermal equilibrium, we are less
likely to find the system in a higher energy state than in a lower energy state. The
relation also assumes that the transition occurs from a thermally populated distribution
of initial states at time t = 0. Since P(ω) > P(−ω), we have lost microscopic
reversibility. This is the principle of detailed balance. Reversibility is lost the moment
we place the system in contact with a thermal bath.
Let us consider the part of P(ω) that only involves the summations:
$$C_>(\omega) = \sum_{f,i} w_i\,|\langle f|B|i\rangle|^2\,\delta(E_f - E_i - \hbar\omega) \qquad (4.155)$$
and
$$C_<(\omega) = \sum_{f,i} w_i\,|\langle f|B|i\rangle|^2\,\delta(E_f - E_i + \hbar\omega) \qquad (4.156)$$
so that
$$P(\omega) = \frac{2\pi}{\hbar}\,|F(\omega)|^2\,C_>(\omega) \qquad (4.157)$$
and
$$P(-\omega) = \frac{2\pi}{\hbar}\,|F(\omega)|^2\,C_<(\omega) \qquad (4.158)$$
Clearly from the discussion above, C< (ω) = exp(−βh̄ω)C> (ω).
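The detailed-balance relation can be verified directly from the delta-function sums in Eqs. 4.155 and 4.156. The sketch below (illustrative; a small model spectrum with a random Hermitian B, $\hbar = 1$) collects the weight each transition deposits at its frequency and checks the Boltzmann ratio:

```python
import numpy as np
from collections import defaultdict

E = np.array([0.0, 0.7, 1.3, 2.1])          # model energy levels (hbar = 1)
rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
B = (B + B.T) / 2                            # Hermitian coupling operator
beta = 1.5
w = np.exp(-beta * E); w /= w.sum()          # Boltzmann weights w_i

# C>(omega) carries w_i |B_fi|^2 at omega = E_f - E_i,
# while C<(omega) carries the same weight at omega = E_i - E_f.
Cgt, Clt = defaultdict(float), defaultdict(float)
for i in range(4):
    for f in range(4):
        om = round(E[f] - E[i], 9)
        Cgt[om] += w[i] * B[f, i]**2
        Clt[-om] += w[i] * B[f, i]**2

# Detailed balance: C<(omega) = exp(-beta * omega) * C>(omega)
dev = max(abs(Clt[om] - np.exp(-beta * om) * Cgt[om]) for om in Cgt)
print(dev)
```

The deviation is at floating-point precision: each upward transition's weight is suppressed relative to its reverse by exactly the Boltzmann factor.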
For the moment, consider just C> (ω) and recast this using the integral form of
δ(E):
$$\delta(E) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{-iEt/\hbar} \qquad (4.159)$$
Using this we can write
$$C_>(\omega) = \int_{-\infty}^{\infty} dt \sum_{i,f} w_i\,|B_{if}|^2\, e^{-i(E_f - E_i - \hbar\omega)t/\hbar} \qquad (4.160)$$
and use the fact that $e^{\pm iHt/\hbar}|i\rangle = e^{\pm iE_i t/\hbar}|i\rangle$ to write this as
$$C_>(\omega) = \int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_{i,f} w_i\,\langle i|e^{+iHt/\hbar}Be^{-iHt/\hbar}|f\rangle\langle f|B|i\rangle \qquad (4.163)$$
We can now eliminate the sum over the final states since this is simply a resolution
of the identity
$$C_>(\omega) = \int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_i w_i\,\langle i|B(t)B(0)|i\rangle \qquad (4.164)$$
Lastly, we can condense our notation by letting the sum over the initial conditions be
written as the trace over the thermal density
$$\sum_i w_i\,\langle i|B(t)B(0)|i\rangle = \mathrm{Tr}\left[\rho\, e^{+iHt/\hbar}Be^{-iHt/\hbar}B(0)\right] = \langle B(t)B(0)\rangle \qquad (4.165)$$
Thus, we can write the C> (ω) and C< (ω) in terms of Fourier transform of correlation
functions:
$$C_>(\omega) = \int_{-\infty}^{\infty} dt\, e^{i\omega t}\,\langle B(t)B(0)\rangle \qquad (4.166)$$
and
$$C_<(\omega) = \int_{-\infty}^{\infty} dt\, e^{i\omega t}\,\langle B(0)B(t)\rangle \qquad (4.167)$$
It is vitally important to note that $C_>(\omega) \neq C_<(\omega)$. This is because B(t) and B(0)
are quantum mechanical operators and do not necessarily commute. Also, while B(t)
and B(0) are each Hermitian operators, their product is not Hermitian.
Symmetry properties: It is important to take a close look at the properties of
time-correlation functions. Consider the time-correlation function $C(t) = \langle B(t)B(0)\rangle$. Stationarity implies that $C(t) = \langle B(0)B(-t)\rangle$, and hence that $C(-t) = C^*(t)$. In other words, the real part of C(t) is an even function of time and the imaginary
part must be an odd function of time. Thus, we can write $C(t) = C'(t) + iC''(t)$, with $C'$ even and $C''$ odd,
and consider
$$I(\omega) = \int_{-\infty}^{\infty} e^{i\omega t}\,C(t)\,dt \qquad (4.173)$$
where tg is a Gaussian decay time and ωo is a characteristic frequency. One can also
encounter correlation functions that decay exponentially with time:
While formally incorrect, this can occur whenever there is a loss of time reversibility
within the system being probed. This can occur either through coarse-graining over
some intermediate time scale or through the presence of a dissipative (that is, velocity-
dependent) force. The line shapes corresponding to these two correlation functions
are easy to obtain:
$$I_g(\omega) = \frac{C(0)\sqrt{\pi}\,\tau}{2}\left(e^{-\tau^2(\omega-\omega_o)^2/4} + e^{-\tau^2(\omega+\omega_o)^2/4}\right) \qquad (4.177)$$
for the Gaussian case and
$$I_l(\omega) = \frac{\tau}{\tau^2(\omega+\omega_o)^2 + 1} + \frac{\tau}{\tau^2(\omega-\omega_o)^2 + 1} \qquad (4.178)$$
for the exponential decaying case. Here, the line shape is the characteristic Lorentzian.
In the limit that ωo = 0 we have
$$I_g(\omega) = C(0)\sqrt{\pi}\,\tau\, e^{-\tau^2\omega^2/4} \qquad (4.179)$$
and
$$I_l(\omega) = \frac{2\tau}{\tau^2\omega^2 + 1} \qquad (4.180)$$
Finally, we can define the correlation time as
$$\tau_c = \int_0^{\infty} dt\,\frac{C(t)}{C(0)} \qquad (4.181)$$
For the Gaussian decay, $\tau_c = \sqrt{\pi}\,\tau/2$, while for the exponential decay, $\tau_c = \tau$.
Lastly, we note that for a classical system, the time-correlation function is sym-
metric in time with C(t) = C(−t). This is the result of the time-reversal symmetry
that arises in Newtonian mechanics.
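The lineshape formulas above are just Fourier transforms and are easy to confirm numerically. The sketch below assumes the Gaussian correlation function $C(t) = C(0)e^{-(t/\tau)^2}$ (the form consistent with Eq. 4.179; the correlation function itself is not shown in this excerpt) and compares a brute-force transform against the analytic result:

```python
import numpy as np

C0, tau = 2.0, 1.5
t = np.linspace(-40.0, 40.0, 200_001)
dt = t[1] - t[0]
C = C0 * np.exp(-(t / tau)**2)      # assumed Gaussian form, consistent with Eq. 4.179

omegas = np.array([0.0, 0.5, 2.0])
# I(omega) = integral of exp(i omega t) C(t) dt; C is real and even, so cos suffices
I_num = np.array([np.sum(np.cos(w * t) * C) * dt for w in omegas])
I_ana = C0 * np.sqrt(np.pi) * tau * np.exp(-tau**2 * omegas**2 / 4)

print(I_num)
print(I_ana)
```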
Example: A Brownian particle. To gain some understanding and practice in
computing correlation functions, we consider the time-correlation function of the
position for a particle with unit mass (m = 1) undergoing Brownian motion. This can
be described via the Langevin equation,
$$\ddot x(t) = -\gamma\,\dot x(t) + R(t) \qquad (4.182)$$
where R(t) is a random force with $\langle R(t)\rangle = 0$ and $\langle R(t)R(0)\rangle = 2\gamma k_B T\,\delta(t)$. In
essence, each random kick is uncorrelated with the one that occurred an
instant earlier in time. Multiplying both sides on the right by x(0) and taking the thermal average,
$$\langle \ddot x(t)x(0)\rangle = -\gamma\,\langle \dot x(t)x(0)\rangle + \langle R(t)x(0)\rangle$$
The last term vanishes since R(t) = 0. For the other terms, the time derivative can
be pulled out in front of the thermal average:
$$\frac{d^2}{dt^2}\langle x(t)x(0)\rangle = -\gamma\,\frac{d}{dt}\langle x(t)x(0)\rangle \qquad (4.185)$$
This gives us a simple ordinary differential equation for our correlation function:
$$\frac{d^2}{dt^2}C(t) = -\gamma\,\frac{d}{dt}C(t) \qquad (4.186)$$
This we can easily solve to find
$$C(t) = C(0)\,e^{-\gamma|t|} \qquad (4.187)$$
In other words, the line-shape function for a randomly kicked particle is a Lorentzian,
$$I(\omega) = C(0)\,\frac{2\gamma}{\omega^2 + \gamma^2} \qquad (4.188)$$
the magnitude of the wave vector that points in the direction of propagation and c is
the speed of light. For such a wave, we can always set the scalar part of its potential to
zero with a suitable choice of gauge and describe the fields associated with the wave
in terms of a vector potential $\vec A$. Here, the wave vector points in the +y direction, the electric field $\vec E$ is polarized in
the yz plane, and the magnetic field $\vec B$ is in the xy plane. Using Maxwell's relations,
$$\vec B(\vec r, t) = \nabla\times\vec A = ik\,\vec e_x\left(A_o e^{i(ky-\omega t)} - A_o^* e^{-i(ky-\omega t)}\right) \qquad (4.191)$$
We are free to choose the time origin, so we will choose it so as to make $A_o$ purely
imaginary, and set
$$i\omega A_o = E/2 \qquad (4.192)$$
$$ik A_o = B/2 \qquad (4.193)$$
$$\frac{E}{B} = \frac{\omega}{k} = c \qquad (4.194)$$
Thus,
$$\vec E(\vec r,t) = E\,\vec e_z\cos(ky - \omega t), \qquad \vec B(\vec r,t) = B\,\vec e_x\cos(ky - \omega t)$$
where E and B are the magnitudes of the electric and magnetic field components of
the plane wave.
Lastly, we define what is known as the Poynting vector, which is parallel to the
direction of propagation:
$$\vec S = \varepsilon_o c^2\,\vec E\times\vec B \qquad (4.197)$$
Using the expressions for E and B above and averaging over several oscillation
periods:
$$\langle\vec S\rangle = \varepsilon_o c\,\frac{E^2}{2}\,\vec e_y \qquad (4.198)$$
$$H = H_o + W \qquad (4.200)$$
where
$$H_o = \frac{P^2}{2m} + V(r) \qquad (4.201)$$
is the unperturbed (atomic) Hamiltonian and
$$W = -\frac{q}{m}\,\vec P\cdot\vec A - \frac{q}{m}\,\vec S\cdot\vec B + \frac{q^2}{2m}A^2 \qquad (4.202)$$
The first two terms depend linearly upon A and the third is quadratic in A. So, for
low intensity we can take
$$W = -\frac{q}{m}\,\vec P\cdot\vec A - \frac{q}{m}\,\vec S\cdot\vec B = W_E + W_B \qquad (4.203)$$
Before moving on, we need to evaluate the relative importance of each term by orders
of magnitude for transitions between bound states. In the second term, the contribution
of the spin operator is on the order of h̄ and the contribution from B is on the order
of k A. Thus,
$$\frac{W_B}{W_E} = \frac{\left|\frac{q}{m}\,\vec S\cdot\vec B\right|}{\left|\frac{q}{m}\,\vec P\cdot\vec A\right|} \approx \frac{\hbar k}{p} \qquad (4.204)$$
Using the expressions we derived previously, the coupling to the electric field
component of the light wave is given by
$$W_E = \frac{qE}{m\omega}\,p_z\sin(\omega t)$$
since we are, after all, talking about a dipole moment associated with the motion of
the electron about the nucleus. Actually, the two expressions are equivalent because
we can always choose a different gauge to represent the physical problem without
changing the physical result. In electrodynamics, the electric and magnetic fields are
described in terms of a vector potential A and a scalar potential U . To get the present
result, we used
$$\vec A = \frac{E}{\omega}\,\vec e_z\sin(ky - \omega t) \qquad (4.210)$$
and set the scalar potential
U (r ) = 0 (4.211)
But this is completely arbitrary. We can always choose another vector potential and
scalar potential to describe the fields and require that, in the end, the physics be
invariant to how we choose to describe these potentials. Formally, when we choose
the potential we make a particular choice of gauge. We can transform from one gauge
to another by taking a function f and defining a new vector potential and a new scalar
potential as
$$\vec A' = \vec A + \nabla f \qquad (4.212)$$
$$U' = U - \frac{\partial f}{\partial t} \qquad (4.213)$$
We are free to choose f. Let us take $f = zE\sin(\omega t)/\omega$ so that
$$\vec A' = \frac{E}{\omega}\,\vec e_z\left(\sin(ky - \omega t) + \sin(\omega t)\right) \qquad (4.214)$$
$$U' = -zE\cos(\omega t) \qquad (4.215)$$
The Hamiltonian is then
$$H = H_o + qU'(r,t) \qquad (4.216)$$
with perturbation
$$W_D = -qzE\cos(\omega t) \qquad (4.217)$$
Now our perturbation depends upon the displacement operator rather than the mo-
mentum operator. This is the usual form of the dipole coupling operator.
Next, let us consider the matrix elements of the coupling between two
stationary states of $H_o$, $|\psi_i\rangle$ and $|\psi_f\rangle$, with eigenenergies $E_i$ and $E_f$, respectively. In the original gauge, the
matrix elements are given by
$$W_{fi}(t) = \frac{qE}{m\omega}\,\sin(\omega t)\,\langle\psi_f|p_z|\psi_i\rangle \qquad (4.218)$$
We can evaluate this by noting that
$$[z, H_o] = i\hbar\frac{\partial H_o}{\partial p_z} = i\hbar\frac{p_z}{m} \qquad (4.219)$$
Thus,
$$\langle\psi_f|p_z|\psi_i\rangle = \frac{m}{i\hbar}\,\langle\psi_f|[z,H_o]|\psi_i\rangle = im\,\omega_{fi}\,\langle\psi_f|z|\psi_i\rangle \qquad (4.220)$$
Consequently,
$$W_{fi}(t) = iqE\,\omega_{fi}\,z_{fi}\,\frac{\sin(\omega t)}{\omega} \qquad (4.221)$$
Thus, the matrix elements of the dipole operator are those of the position operator.
This determines the selection rules for the transition.
Before going through any specific details, let us consider what happens if the
frequency ω does not coincide with ω f i . Specifically, we limit ourselves to transitions
originating from the ground state of the system, |ψo . We will assume that the field
is weak and that in the field the atom acquires a time-dependent dipole moment
that oscillates at the same frequency as the field via a forced oscillation. To simplify
matters, assume that the electron is harmonically bound to the nucleus with a classical
Hooke’s law force,
$$V(r) = \frac{1}{2}m\omega_o^2 r^2 \qquad (4.222)$$
where ωo is the natural frequency of the electron.
The classical motion of the electron is given by the equations of motion (via the
Ehrenfest theorem)
$$\ddot z + \omega_o^2 z = \frac{qE}{m}\cos(\omega t) \qquad (4.223)$$
This is the equation of motion for a harmonic oscillator subject to a periodic force.
This inhomogeneous differential equation can be solved (using Fourier transform
methods) and the result is
$$z(t) = A\cos(\omega_o t - \phi) + \frac{qE}{m\left(\omega_o^2 - \omega^2\right)}\cos(\omega t) \qquad (4.224)$$
where the first term represents the harmonic motion of the electron in the absence
of the driving force. The two coefficients, A and φ, are determined by the initial
condition. If we have a very slight damping of the natural motion, the first term
disappears after a while, leaving only the second, forced oscillation, so we write
$$z = \frac{qE}{m\left(\omega_o^2 - \omega^2\right)}\cos(\omega t) \qquad (4.225)$$
Thus, we can write the classical induced electric dipole moment of the atom in the
field as
$$D = qz = \frac{q^2 E}{m\left(\omega_o^2 - \omega^2\right)}\cos(\omega t) \qquad (4.226)$$
Typically this is written in terms of a susceptibility, χ , where
$$\chi = \frac{q^2}{m\left(\omega_o^2 - \omega^2\right)} \qquad (4.227)$$
Now we look at this from a quantum mechanical point of view. Again, take the
initial state to be the ground state and H = Ho + W D as the Hamiltonian. Since the
time-evolved state can be written as a superposition of eigenstates of Ho ,
$$|\psi(t)\rangle = \sum_n c_n(t)\,|\phi_n\rangle \qquad (4.228)$$
To evaluate this we can use the results derived previously in our derivation of the
golden rule,
$$|\psi(t)\rangle = |\phi_o\rangle + \sum_{n\neq 0}\frac{qE}{2im\hbar\omega}\,\langle n|p_z|\phi_o\rangle\left[\frac{e^{-i\omega_{no}t} - e^{i\omega t}}{\omega_{no} + \omega} - \frac{e^{-i\omega_{no}t} - e^{-i\omega t}}{\omega_{no} - \omega}\right]|\phi_n\rangle \qquad (4.229)$$
where we have removed a common phase factor. We can then calculate the dipole
moment expectation value, D(t), as
$$\langle D(t)\rangle = \frac{2q^2}{\hbar}\sum_n \frac{\omega_{no}\,|\langle\phi_n|z|\phi_o\rangle|^2}{\omega_{no}^2 - \omega^2}\,E\cos(\omega t) \qquad (4.230)$$
From this we can begin to clearly appreciate the physics behind the absorption or
emission of light by an atom or molecule. When an oscillating dipole field is applied to
an atom or molecule, the electrons in the atom respond by oscillating with the applied
field. Ordinarily, this oscillation is not very significant when $\omega \neq \omega_{no}$. However, at
the resonance condition, the induced dipole moment of the atom or molecule (due to its
interaction with the field) oscillates readily at the transition frequency and the atom
or molecule readily absorbs or emits energy in the form of electromagnetic radiation.
We can now notice the similarity between a driven harmonic oscillator and the expec-
tation value of the dipole moment of an atom in an electric field. We can define the
oscillator strength as a dimensionless and real number characterizing the transition
between |φo and |φn ,
$$f_{no} = \frac{2m\omega_{no}}{\hbar}\,|\langle\phi_n|z|\phi_o\rangle|^2 \qquad (4.231)$$
The term oscillator strength comes from the analysis of a harmonically bound electron.
In a sense, such an electron is a perfect absorber since its motion is perfectly harmonic
and as such it can maintain a perfect phase relationship with the external driving field.
Summing over all possible transitions from the original state yields the Thomas–
Reiche–Kuhn (TRK) sum rule:
$$\sum_n f_{no} = 1 \qquad (4.232)$$
In the absence of an explicitly applied field, an excited system can spontaneously emit a photon and
relax to a lower energy state. Since we have all done spectroscopy experiments at
one point or another in our education, we all know that the transitions are between
discrete energy levels. In fact, it was in the examination of light passing through glass
and light emitted from flames that people in the nineteenth century began to speculate
that atoms can absorb and emit light only at specific wavelengths.
We will use the golden rule to deduce the probability of a transition under the
influence of an applied light field (laser or otherwise). We will argue that the system
is in equilibrium with the electromagnetic field and that the laser drives the system
out of equilibrium. From this we can deduce the rate of spontaneous emission in the
absence of the field.
The electric field associated with a monochromatic light wave of average intensity I is found from
$$I = c\rho \qquad (4.234)$$
$$= \frac{c}{2}\left(\varepsilon_o\frac{E_o^2}{2} + \frac{B_o^2}{2\mu_o}\right) \qquad (4.235)$$
$$= \left(\frac{\varepsilon_o}{\mu_o}\right)^{1/2}\frac{E_o^2}{2} \qquad (4.236)$$
$$= c\varepsilon_o\frac{E_o^2}{2} \qquad (4.237)$$
where ρ is the energy density of the field, $E_o$ and $B_o = E_o/c$ are the maximum
amplitudes of the $\vec E$ and $\vec B$ fields of the wave, and we are using meter-kilogram-second
(mks) units. The electromagnetic wave in reality contains a spread of frequencies, so
we must also specify the intensity density over a definite frequency interval:
$$\frac{dI}{d\omega}\,d\omega = c\,u(\omega)\,d\omega \qquad (4.238)$$
where u(ω) is the energy density per unit frequency at ω.
Within the “semiclassical” dipole approximation, the coupling between a molecule
and the light wave is
$$W = -\vec\mu\cdot\vec E(t) = -\vec\mu\cdot\hat\varepsilon\,E_o\cos(\omega t) \qquad (4.239)$$
where μ is the dipole moment vector and ε is the polarization vector of the wave.
Using this result, we can plug directly into the golden rule and deduce that
$$P_{fi}(\omega,t) = 4\,|\langle f|\vec\mu\cdot\hat\varepsilon|i\rangle|^2\,\frac{E_o^2}{4}\,\frac{\sin^2\left((E_f - E_i - \hbar\omega)t/2\hbar\right)}{(E_f - E_i - \hbar\omega)^2} \qquad (4.240)$$
Now, we can take into account the spread of frequencies of the electromagnetic wave
around the resonant value of ωo = (E f − E i )/h̄. To do this we note
$$E_o^2 = 2\,\frac{I}{c\varepsilon_o} \qquad (4.241)$$
To get this we assume that $dI/d\omega$ and the matrix element of the coupling vary slowly
with frequency as compared to the $\sin^2(x)/x^2$ term. Thus, as far as doing the integrals
is concerned, they are both constants. With $\omega_o$ so fixed, we can do the integral over
$d\omega$ to get $\pi t/(2\hbar^2)$, and we obtain the golden rule transition rate:
$$k_{fi} = \frac{\pi}{c\varepsilon_o\hbar^2}\,|\langle f|\vec\mu\cdot\hat\varepsilon|i\rangle|^2\left.\frac{dI}{d\omega}\right|_{\omega_o} \qquad (4.244)$$
Notice also that this equation predicts that the rate for excitation is identical to the
rate for de-excitation. This is because the radiation field contains both +ω and −ω
terms (unless the field is circularly polarized), and the transition rate from a state of
lower energy to a higher energy is the same as that of the transition from a higher energy
state to a lower energy state. However, we know that systems can emit spontaneously
in which a state of higher energy can go to a state of lower energy in the absence of
an external field. This is difficult to explain in the present framework since we have
assumed that |i is stationary. Let us assume that we have an ensemble of atoms in
a cavity containing electromagnetic radiation and the system is in thermodynamic
equilibrium. (Thought you could escape thermodynamics, eh?) Let E 1 and E 2 be
the energies of two states of the atom with E 2 > E 1 . When equilibrium has been
established, the number of atoms in the two states is determined by the Boltzmann
equation:
$$\frac{N_2}{N_1} = \frac{N e^{-\beta E_2}}{N e^{-\beta E_1}} = e^{-\beta(E_2 - E_1)} \qquad (4.245)$$
where β = 1/kT. The number of atoms (per unit time) undergoing the transition
from 1 to 2 is proportional to the rate $k_{21}$ induced by the radiation and to the number of
atoms in the initial state $N_1$:
$$\frac{dN}{dt}(1\to 2) = N_1\,k_{21} \qquad (4.246)$$
The number of atoms going from 2 to 1 is proportional to N2 and to k21 + A where
A is the spontaneous transition rate
$$\frac{dN}{dt}(2\to 1) = N_2\,(k_{21} + A) \qquad (4.247)$$
At equilibrium, these two rates must be equal. Thus,
$$\frac{k_{21} + A}{k_{21}} = \frac{N_1}{N_2} = e^{\hbar\omega\beta} \qquad (4.248)$$
Now, let us refer to the result for the induced rate k21 and express it in terms of the
energy density per unit frequency of the cavity, u(ω),
$$k_{21} = \frac{\pi}{\varepsilon_o\hbar^2}\,|\langle 2|\vec\mu\cdot\hat\varepsilon|1\rangle|^2\,u(\omega) = B_{21}\,u(\omega) \qquad (4.249)$$
where
$$B_{21} = \frac{\pi}{\varepsilon_o\hbar^2}\,|\langle 2|\vec\mu\cdot\hat\varepsilon|1\rangle|^2 \qquad (4.250)$$
For electromagnetic radiation in equilibrium at temperature T , the energy density per
unit frequency is given by Planck’s law:
$$u(\omega) = \frac{\hbar\omega^3}{\pi^2 c^3}\,\frac{1}{e^{\hbar\omega\beta} - 1} \qquad (4.251)$$
Combining the results we obtain
$$\frac{B_{12}}{B_{21}} + \frac{A}{B_{21}\,u(\omega)} = e^{\hbar\omega\beta} \qquad (4.252)$$
$$\frac{B_{12}}{B_{21}} + \frac{A\,\pi^2 c^3}{B_{21}\,\hbar\omega^3}\left(e^{\hbar\omega\beta} - 1\right) = e^{\hbar\omega\beta} \qquad (4.253)$$
Since $B_{12} = B_{21}$, solving for the spontaneous rate gives
$$A = \frac{\hbar\omega^3}{\pi^2 c^3}\,B_{21} = \frac{\omega^3}{\varepsilon_o\pi\hbar c^3}\,|\langle 2|\vec\mu\cdot\hat\varepsilon|1\rangle|^2 \qquad (4.258)$$
This is a key result in that it determines the probability for the emission of light by
atomic and molecular systems. We can use it to compute the intensity of spectral lines
in terms of the electric dipole moment operator. The lifetime of the excited state is
then inversely proportional to the spontaneous decay rate,
$$\tau = \frac{1}{A} \qquad (4.259)$$
To compute the matrix elements, we can make a rough approximation that μ ≈ xe,
where e is the charge of an electron and x is on the order of atomic dimensions.
We must also include a factor of 1/3 for averaging over all orientations of (μ · ε),
since at any given time the moments are not all aligned. Thus,
1/τ = A = (4/3)(ω³/(h̄c³))(e²/(4π εo)) |x|² (4.260)
The factor
e²/(4π εo h̄c) = α ≈ 1/137 (4.261)
is the fine structure constant. Also, ω/c = 2π/λ. So, setting x ≈ 1 Å,
A = (4/3)(1/137)(2π/λ)³ c (1 Å)² ≈ (6 × 10¹⁸/[λ(Å)]³) sec⁻¹ (4.262)
It is instructive to compare this with the classical picture, in which an accelerating charge radiates power at the rate
P = (2/3) (e²/(4π εo c³)) (v̇)² (4.264)
where v̇ is the acceleration of the charge. Assuming the particle moves in a circular
orbit of radius r with angular velocity ω, the acceleration is v̇ = ω²r. Thus, the time
required to radiate energy h̄ω/2 is equivalent to the lifetime τ
1/τclass = 2P/(h̄ω) (4.265)
= (1/(h̄ω))(4/3)(e² ω⁴ r²/(4π εo c³)) (4.266)
= (4/3)(ω³/(h̄c³))(e²/(4π εo)) r² (4.267)
This qualitative agreement between the classical and quantum results is a manifes-
tation of the correspondence principle. However, it must be emphasized that the
MECHANISM for radiation is entirely different. The classical result will never pre-
dict a discrete spectrum. This was in fact a very early indication that something
was certainly amiss with the classical electromagnetic field theories of Maxwell and
others.
where jl (kr ) is the spherical Bessel function and Pl (x) is a Legendre polynomial,
which we can also write as a spherical harmonic function,
Pl(cos(θ)) = √(4π/(2l + 1)) Yl0(θ, φ) (4.273)
Thus, the integral we need to perform is
⟨1s|k⟩ = (1/√π) Σl i^l √(4π(2l + 1)) ∫ Y00* Yl0 dΩ ∫0^∞ r² e^{−r} jl(kr) dr (4.274)
The angular integral we do by orthogonality, and this produces a delta function δl0 that
restricts the sum to l = 0 only, leaving
⟨1s|k⟩ = (1/√π) √(4π) ∫0^∞ r² e^{−r} j0(kr) dr (4.275)
The radial integral can be easily performed using
j0(kr) = sin(kr)/(kr) (4.276)
leaving
⟨1s|k⟩ = (4/k^{1/2}) (1/(1 + k²)²) (4.277)
Thus, the matrix element is given by
⟨1s|V|k⟩ = (qE h̄/(mω)) (2/(1 + k²)²) (4.278)
This we can insert directly into the golden rule formula to get the photoionization rate
to a given k-state:
R0k = 2πh̄ (qE/(mω))² (4/(1 + k²)⁴) δ(Eo − Ek + h̄ω) (4.279)
where we write K² = 2m(EI + h̄ω)/h̄² to make our notation a bit more compact.
Eventually, we want to know the rate as a function of the photon frequency, so let us
put everything except the frequency and the volume element into a single constant
I, which is related to the intensity of the incident photon,
R0k = (I/ω²) δ(k² − K²)/(1 + k²)⁴ (4.281)
Now, we sum over all possible final states to get the total photoionization rate. To do
this, we need to turn the sum over final states into an integral, and this is done by
Σk → (1/(2π)³) ∫0^∞ 4π k² dk (4.282)
Thus,
R = (I/ω²) (1/(2π)³) 4π ∫0^∞ k² δ(k² − K²)/(1 + k²)⁴ dk
= (I/ω²) (1/(2π²)) ∫0^∞ k² δ(k² − K²)/(1 + k²)⁴ dk
FIGURE 4.5 Photoionization spectrum for hydrogen atom. Note that the vertical axis is scaled
by the incident photon flux.
Pulling everything together, we see that the total photoionization rate is given by
R = (I/ω²) (1/(2π²)) K/(1 + K²)⁴
= (I/(2π²ω²)) √(2m(h̄ω − εo)/h̄²) / [1 + 2m(h̄ω − εo)/h̄²]⁴
= I √(2ω − 1)/(32π²ω⁶) (4.284)
where in the last line we have converted to atomic units to clean things up a bit. This
expression is clearly valid only when h̄ω > E I = 1/2 hartree (13.6 eV); a plot of the
photoionization rate is given in Figure 4.5.
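The closed-form rate of Eq. (4.284) is easy to evaluate; the sketch below (in atomic units, with the intensity prefactor I set to 1 as an assumption) reproduces the threshold behavior and falloff seen in Figure 4.5:

```python
# Evaluate the hydrogen photoionization rate of Eq. (4.284),
# R(w) = I * sqrt(2w - 1) / (32 pi^2 w^6), in atomic units.
# Valid only above threshold, hbar*w > E_I = 1/2 hartree.
import math

def photoionization_rate(w: float, intensity: float = 1.0) -> float:
    """Total ionization rate for photon energy w (hartree)."""
    if w <= 0.5:                       # below the ionization threshold
        return 0.0
    return intensity * math.sqrt(2 * w - 1) / (32 * math.pi**2 * w**6)

# Rises sharply at threshold, peaks near w = 6/11 hartree, then falls rapidly.
rates = [photoionization_rate(w) for w in (0.4, 0.6, 1.0, 2.0)]
```

Setting dR/dω = 0 gives the maximum at ω = 6/11 ≈ 0.55 hartree, just above threshold, matching the peak location in the figure.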
H = He(r(t)) + p²/(2m) (4.285)
where the first term is the electronic part that depends parametrically upon the nuclear
coordinate r and the second is the nuclear kinetic energy. If ψ(t) is the electronic state
then
ih̄ dcn/dt = εn cn + ih̄ ṙ Σm cm ⟨φn|∂/∂r|φm⟩ (4.288)
First, in the limit of slow nuclear motion, ṙ ≈ 0 or if the electronic wave function
varies slowly along r , then the second term gives no contribution to the dynamics.
If our initial electronic state is prepared in an eigenstate of He , then under these
conditions it will remain in the same eigenstate even as the nuclei move. This is the
adiabatic approximation whereby the nuclear motion is slow enough such that the
electronic state instantly responds to any small change. Within this approximation,
we can use the Hellmann–Feynman theorem to compute the forces exerted on the
nuclei. The resulting equations of motion for the nuclei then read:
m r̈ = −∂εn(r)/∂r = −⟨φn(r)| ∂He(r)/∂r |φn(r)⟩ (4.289)
In general, we do not need to assume that ψ is initially an eigenstate of He ; it can be
a superposition of eigenstates, in which case we need to take a weighted average over
forces
m r̈ = −⟨ψ| ∂He(r)/∂r |ψ⟩
= −Σn |cn|² ∂εn(r)/∂r (4.290)
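A minimal numerical sketch of this mean-field force, for an assumed two-state electronic Hamiltonian (the surfaces and coupling below are illustrative, not taken from the text):

```python
# Mean-field (Ehrenfest) force of Eq. (4.290) for a two-state model: the
# nuclear coordinate feels the population-weighted average of the adiabatic
# gradients. Surfaces and coupling here are illustrative.
import numpy as np

LAM = 0.1   # constant electronic coupling (assumed)

def adiabatic_energies(r: float) -> np.ndarray:
    """Eigenvalues of the 2x2 electronic Hamiltonian at nuclear position r."""
    Ea, Eb = 0.5 * r**2, 0.5 * (r - 1.0)**2
    H = np.array([[Ea, LAM], [LAM, Eb]])
    return np.linalg.eigvalsh(H)

def mean_field_force(r: float, populations, dr: float = 1e-4) -> float:
    """F = -sum_n |c_n|^2 d(eps_n)/dr via a centered finite difference."""
    grads = (adiabatic_energies(r + dr) - adiabatic_energies(r - dr)) / (2 * dr)
    return -float(np.dot(populations, grads))

# A 1:1 superposition at the symmetric point feels zero net force: the two
# gradients cancel -- the averaging pathology discussed in the text.
F_avg = mean_field_force(0.5, [0.5, 0.5])
F_lower = mean_field_force(0.3, [1.0, 0.0])   # pure lower-state force
```

The 1:1 mixture illustrates the problem discussed below: the averaged force corresponds to neither surface, so the trajectory follows an unphysical mean path.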
FIGURE 4.6 Problem with Hellmann–Feynman forces: (a) We have two possible electron
transfer states. One (|1⟩) has the electron localized on site A with B being neutral and the other
(|2⟩) has the electron on site B with A being neutral. The arrows indicate the dipole moments
of surrounding solvent molecules. Since |1⟩ and |2⟩ are coupled, the state will naturally evolve
into a linear combination of the two possible outcomes. Consequently, the Hellmann–Feynman
forces will see an average of the two. (b) We have a potential well representing the two states
with Q being an order parameter. For Q = −1, the dipoles are oriented about A and for
Q = +1 the dipoles are oriented about B. Q = 0 corresponds to the unstable case of neither
A nor B being fully solvated.
localized on the left-hand molecule and the other corresponding to where the electron
is localized on the right-hand molecule. If the final populations are such that there is
a 1:1 mixture between the left and right configurations, the solvent molecules follow-
ing the transfer of the electron from the left to the right will “see” an averaged case
and will not fully solvate either side. The problem stems from the fact that when we
partition the full system into interacting subsystems and then make the mean-field
assumption, we essentially “trap” quantum coherence within the two separate sub-
spaces and do not allow for the mixing of phase coherence. Energy can flow between
the two subspaces, but phase information cannot. Consequently, the system is forced
to remain too coherent and never resolves itself into either state. A number of “fixes”
have been proposed 6–14 to kill off coherence and force the system to localize in one
state or the other. We shall pick up with this discussion of coherence and decoherence
in detail in a later chapter.
In effect, we are really solving the two-level system problem posed earlier in this
chapter. For the sake of discussion, we limit ourselves to two electronic states, labeled
a and b, and write our He (r ) as
He(r) = | Ea(r)   λ   |
        |   λ   Eb(r) |   (4.292)
where E a (r ) and E b (r ) define two potential energy surfaces and λ is some constant
coupling. When λ = 0, the two potential energy curves will cross at some point.
However, when λ ≠ 0, the two curves avoid each other as seen in Figure 4.6. Let us
center our frame of reference at the point of intersection where E a (0) = E b (0). We
know from our previous analysis that if the coupling is much weaker than the energy
difference (in the uncoupled representation), then the probability to make a transition
between the two states will be vanishingly small. Also for the sake of discussion, let
us consider this as a scattering problem whereby the nuclear motion is from r → −∞
to r → +∞ and appears at the crossing point at t = 0. Also at t → −∞, we presume
the system is prepared in one of the two states φa or φb , which are eigenstates of the
uncoupled system (that is, with λ = 0). We shall refer to this representation as the
diabatic representation. As the system progresses from left to right, the diabatic states
will mix, leading to a superposition of states
ψ = ca φa + cb φb (4.293)
At t → ∞ the coefficients |ca|² and |cb|² give the probability for either remaining in
the initial state or making a transition from state a to state b.
We can equally well picture a representation where He (r ) is diagonal at each point
along r . Although the physics (that is, what we eventually compute or observe) will
not depend upon our choice of representation, our description of the physics may be
quite different. In this adiabatic representation, the electronic coupling is described
by Equation 4.288.
⟨φ1|∇r|φ2⟩ = ⟨φ1|∂He/∂r|φ2⟩/(E1 − E2) (4.296)
Again, when we are far from the point of intersection, ṙ ⟨φ1|∂He/∂r|φ2⟩ ≪ E1 − E2
and the coupling can be ignored. In this limit, both the adiabatic states φ1 and φ2 and
the diabatic states φa and φb are equivalent. To the left, the lower state φ1 = φa and
the upper state φ2 = φb . However, to the right as r → +∞, the lower state becomes
φ1 = φb while the upper adiabatic state becomes φ2 = φa . In other words, as our
state evolves, it becomes a superposition of adiabatic states:
ψ = a1 φ1 + a2 φ2 (4.297)
with |a1 |2 being the probability for the system to be found in the lower adiabatic
state. Again starting off at t in the distant past with the system prepared to the left
in the lower adiabatic state |a1(r → −∞)|² = 1, we find at long time and when the
nuclear coordinate has progressed through the intersection |a1(r → ∞)|² = |cb|² and
|a2(r → ∞)|² = |ca|². Thus, the probability to remain on the lower adiabatic surface
is given by P1→1 = |cb(r → ∞)|² = |a1(r → ∞)|² = Pa→b and the probability
for making a transition to the other surface is P1→2 = |ca (r → ∞)|2 = |a2 (r →
∞)|2 = Pa→a .
We can approximate the probability for making the transition using the Landau–
Zener approach 15–17 :
P1→1 = Pa→b = 1 − exp[ − 2π|Vab|² / ( h̄ ∂t(Ea(r) − Eb(r)) ) ]_{r=rc} (4.298)
where the time dependence of the energy gap is due to the motion along r . Taking the
derivative,
d/dt (Ea(r) − Eb(r)) = ṙ (Fb − Fa) (4.299)
where Fa = −∇r Ea and Fb = −∇r Eb are the forces acting on the nuclear coordinate
at r from the two surfaces. Note that all of these quantities are to
be evaluated at the point of crossing rc, and ṙ is the velocity at the point of crossing.
Hence, Fa and Fb are the slopes of the diabatic curves at the point of crossing.
Again, in the limit of weak coupling or high velocity through the coupling region,
2π|Vab|² ≪ h̄ ṙ (Fb − Fa) and
P1→1 ≈ [ 2π|Vab|² / (h̄ ṙ |Fb − Fa|) ]_{r=rc} (4.300)
Thus, the probability to remain in the original adiabatic state is very small. This is
referred to as the nonadiabatic limit. On the other hand, in the limit of large coupling
or slow motion, the exponential term in the Landau–Zener equation (Equation 4.298)
vanishes and the system remains on the original adiabatic surface throughout the
scattering process.
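The two limits are easy to see numerically. The sketch below evaluates Eq. (4.298) with h̄ = 1 and illustrative parameters (none of the numbers come from the text):

```python
# Landau-Zener transition probability, Eq. (4.298), in units with hbar = 1:
# P(1->1) = P(a->b) = 1 - exp(-2*pi*Vab^2 / (rdot * |Fb - Fa|)).
# Parameter values below are illustrative.
import math

def landau_zener(Vab: float, rdot: float, dF: float) -> float:
    """Probability of remaining on the adiabatic surface through the crossing."""
    return 1.0 - math.exp(-2.0 * math.pi * Vab**2 / (rdot * abs(dF)))

p_adiabatic = landau_zener(Vab=0.1, rdot=0.01, dF=1.0)     # slow passage
p_nonadiabatic = landau_zener(Vab=0.01, rdot=10.0, dF=1.0)  # fast passage
# Slow passage / strong coupling -> P ~ 1 (adiabatic limit);
# fast passage / weak coupling -> P ~ 2*pi*Vab^2/(rdot*|dF|) << 1.
```

In the fast/weak limit the exponential linearizes and the probability reduces to the perturbative expression of Eq. (4.300).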
Thus, |a⟩ corresponds to the reactant state and |b⟩ is the product state. We can also
have a more general cross-electron transfer if the species are different. If, for example,
the reaction is carried out in a polar medium, the dipoles within the medium would respond by
reorganizing themselves to minimize the electrostatic interactions. As suggested by
Figure 4.6 and Figure 4.7, we have two minima corresponding to the cases where
the solvent polarization fields are organized to minimize these interactions. If we
take Q to be some collective polarization coordinate, then we can easily arrive at the
parabolic curves:
Va(Q) = Ea + (k/2)(Q − Qa)² (4.301)
Vb(Q) = Eb + (k/2)(Q − Qb)² (4.302)
Furthermore, since the electronic coupling is only significant close to the crossing
point, we shall assume it is independent of Q and equal to Vab . These curves cross
at Q c where Va (Q c ) = Vb (Q c ). Simple algebra yields
Qc = [ (Ea − Eb) + k(Qa² − Qb²)/2 ] / [ k(Qa − Qb) ] (4.303)
For the forward reaction, the activation energy is the energy difference between E 1
and the energy at the crossing:
EA = [ (Ea − Eb) − λ ]² / (4λ) (4.304)
where
λ = (k/2)(Qa − Qb)² (4.305)
This last term carries an important physical meaning. It is the energy required to
reorganize the polarization following the transfer of a charge from A to B. These
terms are shown in Figure 4.7 along with a simple sketch of the parabolic potentials.
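The bookkeeping of Eqs. (4.301)–(4.305) can be collected in a few lines; the parameters below are illustrative choices for a symmetric, thermoneutral transfer:

```python
# Marcus parabola bookkeeping from Eqs. (4.301)-(4.305): crossing point Qc,
# reorganization energy lam, and activation energy EA. Inputs illustrative.
def marcus_parameters(Ea, Eb, Qa, Qb, k):
    lam = 0.5 * k * (Qa - Qb)**2                                    # Eq. (4.305)
    Qc = ((Ea - Eb) + 0.5 * k * (Qa**2 - Qb**2)) / (k * (Qa - Qb))  # Eq. (4.303)
    EA = ((Ea - Eb) - lam)**2 / (4.0 * lam)                         # Eq. (4.304)
    return Qc, lam, EA

# Thermoneutral symmetric case: crossing midway, barrier = lam/4.
Qc, lam, EA = marcus_parameters(Ea=0.0, Eb=0.0, Qa=-1.0, Qb=1.0, k=1.0)

# Consistency check: the two parabolas indeed intersect at Qc.
Va = 0.0 + 0.5 * 1.0 * (Qc - (-1.0))**2
Vb = 0.0 + 0.5 * 1.0 * (Qc - 1.0)**2
```

For the symmetric case the familiar Marcus result EA = λ/4 drops out directly, and the crossing sits exactly midway between the two minima.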
To get to a transition rate, we need to compute the expectation value that our
system will arrive at the crossing at an appropriate velocity,
ka→b = ∫0^∞ dQ̇ P(Qc, Q̇) Pa→b(Q̇) (4.306)
Pa→b ( Q̇) we get from the Landau–Zener expression above. P(Q c , Q̇) we get by
taking the Boltzmann probability that the system will have the appropriate velocity
at the crossing point
P(Qc, Q̇) = √(βm/(2π)) e^{−βm Q̇²/2} e^{−β(Va(Qc) − Ea)} = √(βm/(2π)) e^{−βm Q̇²/2} e^{−β EA} (4.307)
In keeping with our notation above, Va and Vb will represent the uncoupled (diabatic) potentials and V1
and V2 will denote the adiabatic potentials.
FIGURE 4.7 Sketch of parabolic free-energy potentials arising from Marcus’ treatment of
electron transfer. In this figure, EA is the activation energy, ΔE is the driving force taken as
the (free) energy difference between the initial and final states, and λ is the reorganization energy.
Electronic transitions occur between energy eigenstates of H , not between the dia-
batic states. It is important that we make this distinction. Recall our discussion of
light absorption earlier in this chapter. We assumed initially that the system was at
equilibrium and perturbed only by the electromagnetic field of the incident photon.
Consequently, the initial state for optical absorption must be an eigenstate of H . For-
tunately, far from the crossing region, ψa ≈ ψ1 close to Q a and ψb ≈ ψ1 close to Q b .
To find an expression for the coupling, begin by writing the transition moment
between ψ1 and ψ2 as
μ12 = e⟨ψ1|r|ψ2⟩
and then expand the eigenstates in terms of the diabatic states
where θ is the mixing angle. If we assume that the transition moment between the
diabatic states vanishes, μab = e⟨ψa|r|ψb⟩ = 0, and let μa and μb be the static dipole
moments of the donor and acceptor species, then we can write
log k ∝ −(ΔE + λ)²/λ
As we increase the driving force and keep the reorganization energy roughly constant,
at some point we reach a maximum rate where |ΔE| = λ. Increasing |ΔE| beyond
this will actually lead to a decrease in the electron transfer rate. Experimental verifica-
tion of this turnover in the rate did not occur until nearly 30 years after Marcus made
FIGURE 4.8 Comparison between predicted and experimental electron transfer rates between
a series of donor/acceptor species. Here, k is in units of 1/sec. Note that −ΔG in this figure is
equivalent to ΔE in our discussion. (Adapted from Refs. 19 and 20.)
this prediction. In a true tour de force of synthesis and spectroscopy, Gerhard Closs’
group produced a series of donor-acceptor molecules in the form of a bi-phenyl radi-
cal anion separated from an organic acceptor held a fixed distance away by a linking
chain.20 A plot of the observed rates vs. the driving force is shown in Figure 4.8. Here
we see that the rate constant increases up to a maximum value of about 2 × 109 s−1
with increasing |ΔG◦|. Note that the ΔE used in our discussion should be taken
as the free energy change between final and initial states, with the activation free energy
ΔG‡ = (λ + ΔG◦)²/4λ.
From the rate expression, ΔG‡ decreases as ΔG◦ becomes increasingly negative
and the reaction becomes more and more exothermic. When ΔG◦ = −λ, the acti-
vation energy vanishes and any further increase in the exothermicity causes ΔG‡ to
increase, leading to a decrease in the rate constant. Looking at Figure 4.8, we can
identify these three regimes. First, the normal regime where −ΔG◦ < λ. Here, in-
creasing |ΔG◦| leads to an increase in the rate since the barrier for the reaction steadily
decreases. At the point −ΔG◦ = λ ≈ 1 eV, the barrier vanishes and we achieve the
maximum electron transfer rate. Making the reaction increasingly exothermic only
serves to increase the energetic barrier for the reaction and hence leads to a decrease in
the rate. This regime is termed the “inverted regime.” Sketches of the energy parabolas
corresponding to each of these regimes are given in Figure 4.9.
FIGURE 4.9 Potential energy curves corresponding to the normal (a), barrierless (b), and
inverted (c) regimes for electron transfer. The vertical arrow denotes the driving force ΔE and
the gap between the dashed lines is the activation energy.
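The turnover through these three regimes follows directly from the activation free energy ΔG‡ = (λ + ΔG◦)²/4λ quoted above. A minimal sketch, with illustrative values for λ and kT (assumptions, not from the text):

```python
# Illustration of the three Marcus regimes via log k proportional to
# -dG_act/kT with dG_act = (lam + dG0)^2 / (4*lam). Values in eV, illustrative.
def relative_log_rate(dG0: float, lam: float = 1.0, kT: float = 0.025) -> float:
    """Relative log-rate from the Marcus activation free energy."""
    return -((lam + dG0)**2) / (4.0 * lam * kT)

normal = relative_log_rate(-0.2)       # -dG0 < lam : barrier still present
barrierless = relative_log_rate(-1.0)  # -dG0 = lam : maximum rate
inverted = relative_log_rate(-1.8)     # -dG0 > lam : rate falls again
```

The middle (barrierless) case has the largest log-rate; pushing the reaction further downhill decreases the rate, which is the inverted-regime behavior confirmed by the Closs experiments.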
Clearly, one of the problems with the Landau–Zener approach is that we have ne-
glected the quantum motion of the nuclear degree of freedom. As such, we consider
here a semiclassical approach developed by Neria and Nitzan.21 The idea here is that
the nuclear vibrational motion on the potential energy surface of the initial electronic
state drives transitions to vibrational states on the potential energy surface of the final
electronic state. We begin by writing the golden rule expression for the transition
between electronic states 1 and 2 as
k12 = (2π/h̄) Σi Σf (e^{−β E1i}/Z1) |⟨1i|V|2f⟩|² δ(E1i − E2f) (4.315)
where β = 1/kT , Z is the vibrational partition function for the initial state, and
|i and | f are nuclear states associated with the initial and final electronic states.
Integrating over electronic degrees of freedom, we can write V12 = ⟨1|V|2⟩ as it is
still an operator acting on the nuclear degrees of freedom. Moreover, following our
discussion above, the rate constant can be expressed as a correlation function as
k12 = ∫_{−∞}^{∞} dt e^{iΔEt/h̄} C(t) (4.316)
where ΔE is the difference between the energy origin of the two potential energy
surfaces. The correlation function is given by
C(t) = (1/h̄²) Σi (e^{−β Ei}/Z) ⟨i|V12 e^{i H2 t/h̄} V21 e^{−i H1 t/h̄}|i⟩ (4.317)
= (1/h̄²) ⟨V12 e^{i H2 t/h̄} V21 e^{−i H1 t/h̄}⟩T (4.318)
where the T subscript denotes a thermal averaging over initial conditions. H1 and H2
denote the Hamiltonians for nuclear motion on the adiabatic potential energy curves
and V12 is the nonadiabatic coupling as given above.
As a model, we consider the crossing of two diabatic potential curves: one, a
harmonic well describing a bound molecular state, and the other a linear potential
representing an unbound or dissociative state:
Va (x) = x 2 /2 (4.319)
Vb (x) = αx + E o (4.320)
FIGURE 4.10 Schematic view of Gaussian wave packet propagation scheme in computing the
correlation function in Equation 4.318. The shaded Gaussians denote the initial wavepackets
with the arrow indicating evolution on the upper potential energy curve. (Figure adapted from
Ref. 22.)
C(t) = (|Vab|²/(h̄² Z1)) J(t) (4.321)
choosing the initial width of the upper state to match that of the initial harmonic wave
function. (See Figure 4.10.) The resulting integral can be tediously worked out by
hand; however, numerical evaluation can be readily done. The resulting J (t) decay
curve for the problem at hand is shown in Figure 4.11c.
The rate of decay of J (t) depends upon how rapidly the wave packet on the linear
potential moves away from the initial state. The steeper the potential, the faster the
upper wave packet loses overlap with the lower wave function. This is an indication
of how long it takes the system to lose memory of its initial condition. Once this
memory has been lost, the correlation between the initial and final states is zero, and
the Fourier integral required to compute the golden rule transfer rate will converge.
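This convergence is easy to demonstrate numerically. The sketch below models the decayed correlation function as a simple Gaussian (an assumption made for illustration; the actual J(t) for the model above must be computed as described in the text) and evaluates the Fourier integral of Eq. (4.316) with h̄ = 1:

```python
# Numerical version of Eq. (4.316): once the correlation function C(t) has
# decayed, the Fourier integral k = Int dt exp(i*dE*t) C(t) converges.
# C(t) is modeled here as a Gaussian decay (hbar = 1); parameters illustrative.
import numpy as np

def golden_rule_rate(dE: float, tau: float = 1.0) -> float:
    """Fourier transform of C(t) = exp(-(t/tau)^2) at frequency dE."""
    t = np.linspace(-20.0 * tau, 20.0 * tau, 40001)
    dt = t[1] - t[0]
    integrand = np.exp(1j * dE * t) * np.exp(-(t / tau)**2)
    return float(np.real(np.sum(integrand) * dt))

k_num = golden_rule_rate(dE=2.0)
k_exact = np.sqrt(np.pi) * np.exp(-(2.0 * 1.0)**2 / 4.0)  # analytic transform
# The faster C(t) decays (steeper final-state potential), the broader and
# flatter the resulting rate is as a function of the energy gap dE.
```

The numerical transform matches the analytic result √π τ e^{−(ΔE τ)²/4}, confirming that only the short-time behavior of C(t) matters once memory of the initial state is lost.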
The final rate constant will depend upon two factors. The transfer will be more
efficient if in fact the vibrational spectrum of the final states has a significant overlap
with the initial state. For example, if we write
J(t) = ⟨ψf(t)|ψi(t)⟩
as the overlap of two vibrational wave packets evolving on electronic potentials i and f,
where ⟨ψf(0)|ψi(0)⟩ = 1 and Hf is the Hamiltonian for nuclear motion on the final
surface, then by inserting a complete set of vibrational states
J(t) = Σ_{nf} exp[+i Enf t/h̄] |⟨nf|ψi(0)⟩|² (4.329)
we see that J (t) involves the projection of the initial state onto all possible vibrational
eigenstates on the final electronic potential surface and E f is relative to a common
energy origin. Since we are dealing with a continuum of final energy states, the sum
must be converted to an integral
Σ_{nf} → ∫ dE g(E) (4.330)
Thus, upon taking the Fourier transform we can write the transition rate from the nth
vibrational eigenstate on the initial electronic state as
ki f = (2π/h̄) |V|² ∫ dE g(E) |⟨E|ψni⟩|² δ(E + Eo − Ei) (4.332)
The overlap integral we can evaluate exactly since we know the energy eigenstates
for a particle under the influence of a constant force V = xα
⟨x|E⟩ = (1/(2^{1/3} α^{1/6})) Ai[(2α)^{1/3}(x − E/α)] (4.333)
= (1/√(2πα)) ∫_{−∞}^{∞} dk exp[i k³/(6α) + i k(x − E/α)] (4.334)
where Ai(x) is the Airy function chosen to be regular at the origin. The overlap
integral is then computed using
⟨E|ψni⟩ = ∫_{−∞}^{∞} dx ⟨E|x⟩⟨x|ψni⟩ (4.335)
FIGURE 4.12 (a) Overlap integral Equation 4.336 vs. quantum number. (b) Comparison be-
tween continuum wavefunction with E = E o and eigenstate #19.
Problem 4.1 A simple analysis of the two-well model can be performed using
the golden rule techniques developed thus far. Consider the case of two identical
wells, one displaced from the other by xo. We can also add an energy shift Eb to the problem:
Va(x) = mΩ²x²/2 (4.338)
Vb(x) = mΩ²(x − xo)²/2 + Eb (4.339)
Show that the time-dependent overlap between the harmonic oscillator ground-state
wave function in state a and an initially identical Gaussian evolving in state b is given
FIGURE 4.13 Coulomb potential for H atom including a cutoff approximating the finite radius
of the proton.
by
J(t) = exp[ −(Δ²/2)(1 − e^{−iΩt}) − iΩt/2 ] (4.340)
where Δ = xo √(mΩ/h̄) is a dimensionless displacement (Huang–Rhys parameter). Show,
by taking the Fourier transform of J(t), that the spectral function is given by
σ(ω) = e^{−Δ²/2} Σ_{n=0} (Δ^{2n}/(2^n n!)) δ(h̄ω − (n + 1/2)h̄Ω) (4.341)
Finally, evaluate and plot the transition rate from state a to state b as a function of
temperature.
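The spectral weights appearing in Eq. (4.341) are Poisson-distributed Franck–Condon factors. A short check, with an illustrative displacement Δ, that they are normalized and peak near n ≈ Δ²/2:

```python
# Franck-Condon weights from Eq. (4.341): w_n = exp(-D^2/2) D^(2n) / (2^n n!),
# a Poisson distribution in n with mean S = D^2/2 (the Huang-Rhys factor).
import math

def fc_weights(D: float, nmax: int = 60):
    S = D**2 / 2.0
    return [math.exp(-S) * S**n / math.factorial(n) for n in range(nmax)]

weights = fc_weights(D=1.5)     # D chosen for illustration
total = sum(weights)            # should be ~1 (normalization)
peak_n = weights.index(max(weights))
```

The larger the displacement between the two wells, the more the intensity shifts away from the 0-0 line toward higher vibronic quanta, which is the familiar Franck–Condon progression.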
Problem 4.4 Because of the finite size of the nucleus, the actual potential seen by
the electron is more like what is seen in Figure 4.13.
1. Calculate this effect on the ground-state energy of the H atom using first-
order perturbation theory with
H′ = { e²/r − e²/R   for r ≤ R
    { 0              otherwise   (4.344)
V(r) = −Ze²/r (4.345)
when r > R and
V(r) = −Ze²/r + (Ze²/2R)[(r/R)² + 2(R/r) − 3] = −(Ze²/2R)[3 − (r/R)²] (4.346)
for r ≤ R. What is the perturbation in this case? Calculate the energy shift
for the H (1s) energy level for R = 1 fm and compare to the result you
obtained above.
Note that this effect is the “isotope shift” and can be observed in the spectral lines of
the heavy elements.
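A hedged numerical check of part 1 of this problem: in atomic units, and since R is vastly smaller than the Bohr radius, the first-order shift of the 1s level reduces to ΔE ≈ (2/3)R². The midpoint integration below is an illustrative verification, not part of the text:

```python
# Numerical check of Problem 4.4(1): first-order shift of the H(1s) level from
# the finite nuclear size, dE = <1s| e^2/r - e^2/R |1s> for r <= R, in atomic
# units. Since R << a0, the analytic estimate is dE ~ (2/3) R^2 hartree.
import math

A0_FM = 52917.7              # Bohr radius in femtometers

def finite_size_shift(R_fm: float, npts: int = 20000) -> float:
    """Midpoint-rule integral of 4 r^2 e^(-2r) (1/r - 1/R) dr over [0, R]."""
    R = R_fm / A0_FM
    dr = R / npts
    total = 0.0
    for i in range(npts):
        r = (i + 0.5) * dr       # integrand stays finite as r -> 0
        total += 4.0 * r**2 * math.exp(-2.0 * r) * (1.0 / r - 1.0 / R) * dr
    return total

dE = finite_size_shift(1.0)      # ~2.4e-10 hartree for R = 1 fm
estimate = (2.0 / 3.0) * (1.0 / A0_FM)**2
```

The shift is positive (the potential inside the nucleus is less attractive than the point-charge result) but only ~10⁻¹⁰ hartree for hydrogen, which is why the effect is observed mainly in heavy elements.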
Problem 4.6 Adiabatic vs. Sudden Approximations. There are two essential limits
for time-dependent problems: first, where the perturbation or coupling varies slowly
in time and the other when the coupling is suddenly switched on. In the chapter we dis-
cussed the case where the reference system was coupled to some time-dependent field.
In this problem, we consider the case where the boundary conditions are changed.
Consider the case of an electron trapped in an infinite well of length L. The energy
levels are discrete and we shall assume that the electron is initially prepared in the
lowest energy level. The twist here is that we shall allow L to change with time,
something like a piston that can compress and expand the electron’s box.
1. What outside pressure must be exerted on the electron in order for L to be
fixed at some length L eq ?
2. Show that by expanding the electron’s wave function in terms of the time-
dependent states
ψ(t) = Σj uj(t) |j(t)⟩
where
|j(t)⟩ = √(2/L(t)) cos(π(2j + 1)x/L(t))
are the particle-in-a-box states with x measured from the center of the well
and with energy
εj(t) = (h̄²/2m)(π(2j + 1)/L(t))² = h̄ωj(t)
and substituting this into the time-dependent Schrödinger equation,
ih̄ (∂uj/∂t) e^{−iωj(t)t} = ih̄ ⟨j(t)|∂/∂t|0(t)⟩ e^{−iω0(t)t}
so that
∂uj/∂t = −λj π (∂ log L(t)/∂t) exp[i(ωj(t) − ω0(t))t]
where
λj = 2 ∫_{−1/2}^{1/2} cos[π(2j + 1)u] u sin[πu] du
E − Eo = 3c(h̄/2mω)²
Problem 4.8 Consider a particle of mass m in a harmonic well with force constant
k. A small perturbation is applied that changes the force constant by δk. Show that
the first- and second-order corrections to the ground-state energy are given by
E(1) = (1/4) h̄ω (δk/k)
and
E(2) = −(1/16) h̄ω (δk/k)²
How do these expressions relate to the exact expression for the energy?
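The answer to that last question can be checked numerically: the exact ground-state energy is (h̄ω/2)√(1 + δk/k), and the perturbative series reproduces its Taylor expansion through second order. A quick verification:

```python
# Check of Problem 4.8: exact ground-state energy (hbar*w/2)*sqrt(1 + dk/k)
# versus the perturbative series through second order,
# E ~ E0 + (1/4)(dk/k) hw - (1/16)(dk/k)^2 hw.
import math

def exact_shift(x: float) -> float:
    """(E - E0) in units of hbar*omega, for fractional change x = dk/k."""
    return 0.5 * (math.sqrt(1.0 + x) - 1.0)

def perturbative_shift(x: float) -> float:
    return x / 4.0 - x**2 / 16.0

# The two agree through O(x^2); the leading residual is the O(x^3) term x^3/32.
x = 0.01
residual = exact_shift(x) - perturbative_shift(x)
```

For a 1% change in force constant, the second-order result is accurate to a few parts in 10⁸.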
Problem 4.9 Taking a trial wave function of the form φ = exp(−βr 2 ) where β is
an adjustable parameter, use the variational procedure to obtain an estimate of the
ground-state energy for the hydrogen atom in terms of atomic constants. How does
this compare to the exact answer? Also, use your optimized wave function to compute
⟨r⟩, ⟨1/r⟩, and ⟨p²⟩. Compare your results with the exact values for a hydrogenic
system.
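A sketch of the expected answer, using the standard result for the Gaussian trial energy in atomic units (the closed-form E(β) below is the textbook variational expression, stated here as a worked illustration of the problem, not its full solution):

```python
# Sketch of Problem 4.9: variational treatment of hydrogen with a Gaussian
# trial function phi = exp(-beta r^2). In atomic units the standard result is
# E(beta) = (3/2) beta - 2 sqrt(2 beta / pi); this block locates the minimum.
import math

def gaussian_energy(beta: float) -> float:
    return 1.5 * beta - 2.0 * math.sqrt(2.0 * beta / math.pi)

beta_opt = 8.0 / (9.0 * math.pi)          # analytic minimizer
E_min = gaussian_energy(beta_opt)         # = -4/(3*pi) ~ -0.4244 hartree

# The variational bound lies above the exact -0.5 hartree, as it must; the
# Gaussian cannot reproduce the cusp of the true 1s wave function at r = 0.
```

The ~15% error relative to the exact −0.5 hartree is a useful reminder that the variational principle guarantees an upper bound, not accuracy.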
Problem 4.10 For an attractive 1D square well potential, it is possible to show that
there is always at least one bound state. Does this hold true for any one-dimensional
attractive potential of arbitrary shape?
where |n⟩ is an eigenstate of the Hamiltonian with energy En = h̄ω(n + 1/2). The
summation runs from n = N − s to n = N + s with N ≫ s.
1. Show that the expectation value of x(t) is oscillatory with amplitude (2h̄N/mω)^{1/2}. How does
this compare to the time variation of the displacement for a classical oscillator?
2. f(t) = fo/((t/τ)² + 1)
Problem 4.13 The S states for an electron in a spherical cavity of radius R are given by
ψn (r ) = An sin(nπr/R)/r
Problem 4.14 Consider the case of the vibrational motion of a linear triatomic
molecule such as C≡N−H where the harmonic stretching frequency of one
bond is much higher than the stretching frequency of the other bond, so that the
low-frequency mode can be treated essentially classically. A suitable Hamiltonian for
this is
H = h̄ω a†a + λ(a† + a)x + (1/2)(p² + Ω²x²) (4.347)
where a and a† are the annihilation and creation operators for a quantum harmonic well
with frequency ω, p and x are the classical momentum and position for an oscillator
with frequency Ω, and λ is the coupling between the two systems. If the low-frequency
mode is described by a classical harmonic oscillator, what is the golden rule transition
rate for the high-frequency part to relax from its first excited state to the ground state?
What happens if Ω ≪ ω?
Problem 4.15 Along a reaction coordinate, R(t), the harmonic frequency of a molecule
can change. For the sake of building a model, let us consider the case where the harmonic
frequency of a diatomic molecule is increased then decreased,
ω(t) = ωo + Δω exp(−t²/τ²)
with time scale τ due to a collision with an atom. Derive an expression for finding
the molecule in its lowest vibrational state at t → ∞ given that it was in its ground
vibrational state at t → −∞. Since τ is related to the collision energy (that is, the
speed of the colliding atom), what happens if the collision is very slow or very fast on
the time scale of 1/ω? What collisional time scale is needed for the survival probability
to be exactly 50%?
5 Representations
and Dynamics
In this chapter we shall examine different ways in which one can represent the evolu-
tion of a quantum state. While mathematically different, the various representations
are equivalent in that, at the end of the day, one obtains the same physical prediction.
This is good because physics and physical measurements should not depend upon
how one chooses to represent the quantum state. Depending upon the problem at
hand, each representation has its unique advantages and disadvantages.
ih̄ ∂ψ(t)/∂t = Ĥ ψ(t) (5.1)
As we know from the postulates of quantum mechanics, the state of the system at time
t is described by ψ(t), which is a solution of the TDSE where Ĥ is the Hamiltonian
operator derived from the classical Hamiltonian function by the substitution of
r → r̂ (5.2)
V (r ) → V (r̂ ) (5.3)
p → −ih̄ ∂/∂r (5.4)
in the coordinate representation or equivalently in the momentum representation
p → p̂ (5.5)
r → ih̄ ∂/∂p (5.6)
V(r) → V(ih̄ ∂/∂p) (5.7)
with
V(ih̄ ∂/∂p) = Σ_{n=0}^{∞} ((ih̄)ⁿ/n!) (∂ⁿV/∂rⁿ)|_{r=0} (∂ⁿ/∂pⁿ) (5.8)
⟨A⟩S(t) = ⟨ψS(t)|AS|ψS(t)⟩ (5.9)
ih̄ (d/dt)⟨A⟩S = −⟨ψS(t)|[H, AS]|ψS(t)⟩ + ih̄ ⟨ψS(t)|ȦS|ψS(t)⟩ (5.10)
If the operator itself carries no explicit time dependency, then the time evolution of
the expectation value of an observable is specified by
ih̄ (d/dt)⟨A⟩S = −⟨ψS(t)|[H, AS]|ψS(t)⟩ (5.11)
If, in fact, [A, H ] = 0, then the observable associated with A is a constant of the
motion.
For completeness, the time evolution of the Schrödinger state is given by
ψS(t) = ψ(0) + t ψ̇(0) + (t²/2) ψ̈(0) + · · · (5.13)
= [1 + (t/(ih̄))H + (t²/(2(ih̄)²))H² + · · ·] ψ(0) (5.14)
= Σ_{n=0}^{∞} (tⁿ/(n!(ih̄)ⁿ)) Hⁿ ψ(0) (5.15)
The fact that U S can be expressed as a polynomial in time has a number of advantages.
We can choose any one of a number of polynomial bases for this expansion since
we can take advantage of various recurrence relations. Later on, when dealing with
numerical solutions, we will compare different ways of approximating the evolution
operator over short periods of time.
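A minimal numerical illustration of such a polynomial expansion (a plain Taylor truncation of Eq. (5.15) for an illustrative two-level Hamiltonian, with h̄ = 1; Chebyshev recurrences would replace the Taylor terms in a production scheme):

```python
# Taylor-polynomial approximation to the evolution operator, Eq. (5.15):
# U(t) ~ sum_n (t^n / (n! (i hbar)^n)) H^n, here for an illustrative 2x2
# Hamiltonian with hbar = 1, compared against exact diagonalization.
import numpy as np

H = np.array([[1.0, 0.3], [0.3, -1.0]])

def taylor_propagator(H: np.ndarray, t: float, nterms: int = 20) -> np.ndarray:
    U = np.zeros_like(H, dtype=complex)
    term = np.eye(len(H), dtype=complex)       # n = 0 term
    for n in range(nterms):
        U += term
        term = term @ H * (-1j * t) / (n + 1)  # build the (n+1)-th term
    return U

def exact_propagator(H: np.ndarray, t: float) -> np.ndarray:
    evals, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

t = 0.5
U = taylor_propagator(H, t)
err = np.abs(U - exact_propagator(H, t)).max()
# err is at machine precision for short t; the truncation degrades for long
# times, which motivates the recurrence-based (e.g., Chebyshev) expansions.
```

For short steps the truncated series is essentially exact and unitary; long-time propagation is instead built by composing many short steps.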
Representations and Dynamics 147
⟨a|U|b⟩ = ⟨b|U†|a⟩* (5.18)
A|an⟩ = αn|an⟩ (5.25)
any new picture or representation of quantum mechanics must satisfy the following
two criteria:
1. The eigenspectrum of an operator must not change upon moving to the
new representation.
2. The probability amplitude for a given observation, ⟨an|ψ⟩, must not change.
Both of these criteria are satisfied by unitary transformations
A|x⟩ = |y⟩ (5.26)
A′|x′⟩ = |y′⟩ (5.27)
|x′⟩ = U|x⟩,  ⟨x′| = ⟨x|U† (5.28)
and
|y′⟩ = U|y⟩,  ⟨y′| = ⟨y|U† (5.29)
Thus,
A′U|x⟩ = A′|x′⟩ = U|y⟩ = UA|x⟩ (5.30)
U†A′U = A (5.31)
UAU†U|an⟩ = αn U|an⟩ = αn|an′⟩ (5.32)
that is,
A′|an′⟩ = αn|an′⟩ (5.33)
In other words, the eigenvalues of the transformed operator A are the same as the
original operator A. Likewise,
⟨an′|ψ′⟩ = ⟨an|U†U|ψ⟩ = ⟨an|ψ⟩ (5.34)
What we conclude from this is that there are an infinite number of ways we can formulate dynamical representations of quantum mechanics based upon unitary transformations.
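Both criteria can be verified numerically for an arbitrary unitary. The operator, state, and unitary below are randomly generated illustrations:

```python
# Numerical check of the two representation criteria: a unitary change of
# representation preserves both the eigenspectrum and probability amplitudes.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A = A + A.T                                   # Hermitian observable

# Build a unitary from the QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A_prime = Q @ A @ Q.conj().T                  # A' = U A U^dagger

spectrum_preserved = np.allclose(np.linalg.eigvalsh(A),
                                 np.linalg.eigvalsh(A_prime))

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
a_n = np.linalg.eigh(A)[1][:, 0]              # an eigenvector |a_n> of A
amplitude_preserved = np.isclose(np.vdot(a_n, psi),
                                 np.vdot(Q @ a_n, Q @ psi))
```

Since ⟨a_n|U†U|ψ⟩ = ⟨a_n|ψ⟩ identically, the amplitude check holds to machine precision for any unitary Q.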
The Heisenberg picture is based upon the transformation that returns the time-evolved Schrödinger state back to its initial condition,
|ψH⟩ = US†(t, to)|ψS(t)⟩ = |ψS(to)⟩ (5.35)
Because we never directly observe the state, time evolution is carried by the operators
themselves:
AH(t) = US†(t, to) AS(t) US(t, to) (5.36)
This is immediately verified if we take Ĥ (x̂, p̂) = p̂ 2 /(2m) + V (x̂) and write the
Heisenberg equations of motion for the momentum and position operators,
dx̂/dt = {Ĥ, x̂} = p̂/m (5.44)
dp̂/dt = {Ĥ, p̂} = −∂V(x̂)/∂x̂ (5.45)
Again, we must emphasize that the difference between these equations of motion and
their classical counterparts is that here we are dealing with operators rather than with
ordinary numbers or functions.
We can extend this idea to any pair of canonical variables. For example, for the
case of Boson operators, [â, â † ] = 1, we can write a similar relation for operators
composed of products of â and â † :
where the constraints are very small, φi ≈ 0, and the coefficients are functions of p
and q. Typically we arrive at the Hamiltonian equations by taking the variation of H ,
δH = (∂H/∂q)δq + (∂H/∂p)δp = −ṗ δq + q̇ δp    (5.50)

so that

(∂H/∂q + ṗ)δq + (∂H/∂p − q̇)δp = 0    (5.51)
Since the coefficients u_j are functions of the canonical variables, we cannot separately
set δp and δq to zero. The variations must be tangent to the constraints. This can be
done by setting

Σ_n A_n δq_n + Σ_n B_n δp_n = 0    (5.52)

with

A_n = Σ_j u_j ∂φ_j/∂q_n  and  B_n = Σ_j u_j ∂φ_j/∂p_n    (5.53)
More generally, the equation of motion for a function of canonical variables becomes

ḟ = {f, H*} = {f, H} + Σ_k u_k {f, φ_k}    (5.56)

and the constraints themselves must be conserved,

φ̇_k = 0 = {φ_k, H*}    (5.57)

 · B̂ − B̂ · Â = iħ {A, B}_DB    (5.58)

where

{A, B}_DB = {A, B}_PB − Σ_nm {A, φ_n} M⁻¹_nm {B, φ_m}    (5.59)

defines the Dirac bracket in terms of the Poisson bracket. The constraint matrix M
is formed by taking the Poisson brackets of the constraints, M_nm = {φ_n, φ_m}.
⟨ψ_f|ψ(t_f)⟩ = ⟨ψ_f|U(t_f, t_i)|ψ_i⟩ = ⟨ψ_+(t)|ψ_−(t)⟩    (5.64)

⟨ψ_+(t)|ψ_−(t)⟩ = ⟨ψ_f|ψ_−(t_f)⟩ = ⟨ψ_+(t_i)|ψ_i⟩    (5.67)

is independent of the time t.
Now consider the quantity

S = ∫_{t_i}^{t_f} dt ⟨φ_+(t)|iħ∂_t − H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩    (5.68)

which we shall refer to as an action taken as a functional of both the bra and the ket.
These we shall take as initial trial vectors for the variation of S. The bra and ket we
are using are subject to the boundary conditions

|φ_−(t_i)⟩ = |φ_i⟩    (5.69)

⟨φ_+(t_f)| = ⟨φ_f|    (5.70)

Varying with respect to the ket,

δS = ∫_{t_i}^{t_f} dt ⟨φ_+(t)|iħ∂_t − H − λ|δφ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩
     − iħ ⟨φ_f|δφ_−(t_f)⟩ / ⟨φ_+(t_f)|φ_−(t_f)⟩    (5.71)

where

λ(t) = ⟨φ_+(t)|iħ∂_t − H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩    (5.72)

Integrating by parts so that the time derivative acts on the bra,

δS = −∫_{t_i}^{t_f} dt ⟨φ_+(t)|iħ←∂_t − H − λ|δφ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩    (5.73)

where the arrows over the partial derivative operator indicate that the operator acts
either to the left or to the right. Clearly, the variation vanishes if both ⟨φ_+(t)| and
|φ_−(t)⟩ obey the resulting stationarity conditions, with the analogous multiplier

λ′(t) = ⟨φ_+(t)|iħ←∂_t − H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩    (5.76)
⟨φ_f|φ_−(t)⟩ = e^{iS_c/ħ} ⟨φ_f|ψ_−(t)⟩    (5.78)

Thus, the quantum transition amplitude is given by the stationary value of the action,

⟨φ_f|ψ(t_f)⟩ = e^{iS_c/ħ}    (5.80)
Unfortunately, the phases of the two trial vectors are undetermined. For example, we
can add on an additional phase so that

|φ′_−(t)⟩ = e^{iξ(t)/ħ}|φ_−(t)⟩    (5.81)

where

ξ(t) = ∫_{t_i}^{t} dt′ z(t′)    (5.82)

Fortunately, the phase indeterminacy does not change the transition amplitude since
the final bra state is modified as well. We can eliminate this indeterminacy by imposing
an additional constraint on the system: that S is a functional of |ψ(t)⟩ and its Hermitian
conjugate ⟨ψ(t)|,

S = ∫_{t_1}^{t_2} dt ⟨ψ(t)|iħ∂_t − H|ψ(t)⟩ / ⟨ψ(t)|ψ(t)⟩    (5.85)
We also assume (as in classical mechanics) that the variations vanish at the boundaries:

⟨δψ(t_f)| = |δψ(t_f)⟩ = ⟨δψ(t_i)| = |δψ(t_i)⟩ = 0    (5.86)

What this means is that we are taking S to be a functional of the state vector and its
Hermitian conjugate at all times t_i < t < t_f other than the initial and final times. In
doing so, our final equations will not depend upon the choice of boundary conditions.
We can also set ⟨ψ(t)|ψ(t)⟩ = 1 for all time to enforce normalization. One can easily
verify that δS = 0 when

(iħ∂_t − H)|ψ(t)⟩ = 0    (5.87)

In other words, the action S is stationary with respect to all variations of |ψ(t)⟩ and
its Hermitian conjugate provided they are solutions of the Schrödinger equation.
Let us take, for instance, the case where the Hamiltonian is driven by some set of
external time-dependent variables (perhaps a set of nuclear coordinates), q(t), such
that

t_{if} = ⟨ψ(t_f)|ψ(t_i)⟩ = e^{iS[q]/ħ}    (5.89)

is the transition amplitude between the initial and final states, S_c is the classical action

S_c = ∫_{t_i}^{t_f} [ (m/2) q̇²(t) − V(q(t)) ] dt    (5.90)

and S[q] is the contribution to the action due to the quantum transition. Again, we
use the trick we used above and write this as

⟨ψ(t_f)|ψ(t_i)⟩ = ⟨ψ_+(t)|ψ_−(t)⟩    (5.91)
where t_f > t > t_i is some intermediate time. Setting V(q) = 0 and taking the varia-
tion of S with respect to q(t) results in the classical equations of motion for q(t):28,29

m q̈(t) = − ⟨ψ_+(t)|∂H(q(t))/∂q|ψ_−(t)⟩ / ⟨ψ_+(t)|ψ_−(t)⟩    (5.92)

The fact that the transition matrix element also depends upon the path between q(t_i)
and q(t_f) means that the resulting force is path dependent. Consequently, in order to
determine the path, one must iterate this last equation self-consistently.
The equations of motion (Equation 5.92) were first derived by Phil Pechukas
in 1969 starting from a path-integral formulation for the fully quantum mechanical
problem of atomic scattering and then making a stationary phase approximation for
the nuclear trajectory. Although the approach is very appealing and gives the correct
semiclassical path, it is often impossible to converge a unique path if the time interval
t f − ti is too long and the states are strongly coupled.30–32
How do we interpret this last result? Imagine that q(t) represents the scattering
trajectory of an atom and the quantum states are internal degrees of freedom (say, the
atom's electronic states). At the initial time t_i we prepare the system at q(t_i) in some
well-determined quantum state that we will take to be an eigenstate of H(q(t_i)). As
time evolves, the quantum state |ψ(t)⟩ may no longer be an eigenstate of H(q(t))
since the Hamiltonian is changing in time as well. As a result, |ψ(t)⟩ will evolve into
a superposition of eigenstates

|ψ(t)⟩ = Σ_n c_n(t)|φ_n(q(t))⟩    (5.93)

where the c_n(t) are the transition amplitudes for starting in the initial state and evolving
into the nth eigenstate of H(q(t)) at some intermediate time t_i < t < t_f. Suppose at
|ψ_1(t_1 + δt)⟩ ≈ [ 1 − i (δt/ħ) ε_1(q_1) + ··· ] |ψ_1(t_1)⟩    (5.95)
At time t2 hardly any quantum evolution will have occurred and we would determine
that the atom is still in its original quantum state. In fact, the quantum state will remain
in the original eigenstate of H over the course of the trajectory. If our scattering atom
was in an electronic excited state and we frequently inquire about its current state,
the atom will forever remain in that excited state. An alternative history may be
hist2 = {ψ0 , ψ1 , ψ2 , . . . ψ N } (5.96)
where hist1 and hist2 are the same until between t1 and t2 the system makes a switch
from state 1 to state 2 . Up until time t2 , we would have some degree of quantum
coherence between the two histories and information can be passed from one world-
line to the other. After t2 there is no common phase relation between the two paths.
This is illustrated in Figure 5.1.
This may be starting to sound a bit like the plot line for a Star Trek episode.
[Figure 5.1: branching of worldline histories. Starting from x(0), the trajectory
branches at successive times into paths x_2α, x_2β and then x_3α, x_3β, each carrying
quantum states |φ_1α⟩, |φ_1β⟩, |φ_2α⟩, |φ_2β⟩ and amplitudes Δ_1(t), Δ_2Δ_1(t),
Δ′_1(t), Δ′_2Δ′_1(t), plotted against time.]
with V being some additional coupling that induces some sort of dynamical evolution
of the system. With this in mind, we define the interaction wave function as

ψ_I(t) = e^{+iH_o t/ħ} ψ_S(t) = e^{+iH_o t/ħ} e^{−iHt/ħ} ψ_S(0)    (5.98)

Taking the time derivative, we find

iħ (d/dt) ψ_I(t) = V(t) ψ_I(t)    (5.99)

where

V(t) = e^{+iH_o t/ħ} V e^{−iH_o t/ħ}    (5.100)

is the interaction operator written in the Heisenberg representation of the reference
system. Since unitary transformations can be visualized as rotations in some N-
dimensional space, both the interaction wave function and coupling operator are
simultaneously rotated along with the reference system so that any departure from
the initial state is entirely due to the interaction. This has the distinct advantage of
eliminating the rapidly oscillating terms that appear in the evolution of the Schrödinger
state.
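A quick numerical sketch (not from the text) of Equations 5.98–5.100 for a two-level system: with H_o = εσ_z and V = vσ_x (toy values ε = 1, v = 0.2, ħ = 1), the interaction wave function built from the exact propagator satisfies iħ dψ_I/dt = V(t)ψ_I to finite-difference accuracy.

```python
import cmath, math

hbar = 1.0
eps, v = 1.0, 0.2            # H0 = diag(eps, -eps), V = v*sigma_x (toy values)

def U_full(t):
    # exp(-iHt/hbar) for traceless H = eps*sz + v*sx:
    # cos(Et/hbar) I - i sin(Et/hbar) H/E, with E = sqrt(eps^2 + v^2)
    E = math.sqrt(eps**2 + v**2)
    c, s = math.cos(E * t / hbar), math.sin(E * t / hbar)
    return [[c - 1j * s * eps / E, -1j * s * v / E],
            [-1j * s * v / E, c + 1j * s * eps / E]]

def U0_inv(t):                # exp(+iH0 t/hbar)
    return [[cmath.exp(1j * eps * t / hbar), 0],
            [0, cmath.exp(-1j * eps * t / hbar)]]

def mv(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1]]

def psi_I(t, psi0):           # Eq. (5.98)
    return mv(U0_inv(t), mv(U_full(t), psi0))

def V_I(t):                   # Eq. (5.100): off-diagonal picks up e^{2i eps t}
    ph = cmath.exp(2j * eps * t / hbar)
    return [[0, v * ph], [v * ph.conjugate(), 0]]

psi0 = [1.0 + 0j, 0j]
t, dt = 0.7, 1e-5
# central difference for i hbar d(psi_I)/dt, compared against V(t) psi_I(t)
lhs = [1j * hbar * (a - b) / (2 * dt)
       for a, b in zip(psi_I(t + dt, psi0), psi_I(t - dt, psi0))]
rhs = mv(V_I(t), psi_I(t, psi0))
assert max(abs(l - r) for l, r in zip(lhs, rhs)) < 1e-6
```

With v = 0, ψ_I is constant in time even though ψ_S oscillates rapidly, which is exactly the advantage described above.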
Let us now consider the time-evolution operator in the interaction representation.
As previously,
ψ_I(t) = U_I(t, t_o) ψ_o    (5.101)
Expanding U_I in time,

U_I(t, t_o) = 1 + (1/iħ) ∫_{t_o}^{t} dt_1 V(t_1) U(t_1, t_o)
            = 1 + (1/iħ) ∫_{t_o}^{t} dt_1 V(t_1)
              + (1/(iħ)²) ∫_{t_o}^{t} dt_1 ∫_{t_o}^{t_1} dt_2 V(t_1) V(t_2) + ···    (5.102)

This is often referred to as the Dyson series, and it often serves as the starting point
for perturbative theories since each term involves successive interactions with the
coupling operator. The series can be taken to infinite order,

U(t, t_o) = Σ_{n=0}^{∞} U_n(t)    (5.103)
that has the effect of rearranging a series of operators into chronological order. For
example,
with t_i ≥ t_j ≥ · · · ≥ t_n.
Consider the second term in the expansion of the time-evolution operator:

U_2(t, 0) = (1/(iħ)²) ∫_0^t dt_1 ∫_0^{t_1} dt_2 V(t_1) V(t_2)    (5.107)

The implied area of integration on the (t_1, t_2) plane is the shaded area above the t_2 = t_1
line. On the other hand, in

U_2(t, 0) = (1/(iħ)²) ∫_0^t dt_2 ∫_0^{t_2} dt_1 V(t_1) V(t_2)    (5.108)

the implied area of integration is below the t_1 = t_2 line. Since both integrals should
give the same result, we can write

U_2(t, 0) = (1/2) (1/(iħ)²) ∫_0^t dt_1 ∫_0^t dt_2 P[V(t_1) V(t_2)]    (5.109)
This is the polynomial expansion for the exponential function, so we can immediately
write the interaction evolution operator as

U(t) = P exp[ −(i/ħ) ∫_0^t V(t′) dt′ ]    (5.111)
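When the coupling commutes with itself at different times, the time-ordering operator P is harmless and Equation 5.111 reduces to an ordinary exponential. The sketch below (an illustration, not from the text, with an arbitrary envelope f(t) = e^{−t}) builds the time-ordered product step by step and compares it with exp(−iφσ_x), φ = (1/ħ)∫f dt.

```python
import math

hbar = 1.0

def f(t):                     # scalar envelope; V(t) = f(t)*sigma_x commutes
    return math.exp(-t)       # with itself at different times

T, n = 1.5, 20000
dt = T / n

# Time-ordered product prod_k [1 - i V(t_k) dt/hbar], later times to the LEFT
U = [[1 + 0j, 0j], [0j, 1 + 0j]]
for k in range(n):
    t = (k + 0.5) * dt
    g = -1j * f(t) * dt / hbar
    step = [[1 + 0j, g], [g, 1 + 0j]]           # 1 - i f(t) sigma_x dt/hbar
    U = [[step[0][0]*U[0][0] + step[0][1]*U[1][0],
          step[0][0]*U[0][1] + step[0][1]*U[1][1]],
         [step[1][0]*U[0][0] + step[1][1]*U[1][0],
          step[1][0]*U[0][1] + step[1][1]*U[1][1]]]

# Since all V(t) commute here, P exp(-i/hbar int V dt) = exp(-i phi sigma_x)
# = cos(phi) I - i sin(phi) sigma_x, with phi = int_0^T e^{-t} dt / hbar.
phi = (1.0 - math.exp(-T)) / hbar
exact = [[complex(math.cos(phi)), -1j * math.sin(phi)],
         [-1j * math.sin(phi), complex(math.cos(phi))]]

err = max(abs(U[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 1e-3
```

For non-commuting V(t), the step-by-step product above is still correct; only the closed-form comparison would change.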
Problem 5.2 Using the mixing angle and rotation matrix given in Equation 4.6 show
that T H T † is diagonal with eigenvalues ε± .
Consider a state evolving as ψ(t) = Ŝ(t)ψ(0), with expectation value

⟨B̂(t)⟩ = ⟨ψ(t)|B̂|ψ(t)⟩

1. Show that the Heisenberg operator B̂(t) = Ŝ⁻¹(t) B̂ Ŝ(t) satisfies

⟨B̂(t)⟩ = ⟨ψ(0)|Ŝ⁻¹(t) B̂ Ŝ(t)|ψ(0)⟩

2. Show that the time derivative of the Heisenberg operator B̂(t) is given by
d B̂(t)/dt = (i/ħ)[Ĥ, B̂(t)].

3. Show that if [Â, B̂] = Ĉ, then the corresponding Heisenberg operators
satisfy [Â(t), B̂(t)] = Ĉ(t).
SUGGESTED READING
There are any number of excellent textbooks on quantum mechanics. Listed below
are various texts I have found to be particularly useful in preparing this chapter.
1. Chemical Dynamics in Condensed Phases Relaxation, Transfer and Reac-
tions in Condensed Molecular Systems, Abraham Nitzan (Oxford Graduate
Texts, 2007). This is one of the best interdisciplinary accountings of dy-
namical processes in the condensed phase.
2. Quantum Mechanics, Claude Cohen-Tannoudji, Bernard Diu, and Frank
Laloë (Wiley Interscience, 1973)
3. Quantum Mechanics, A Modern Introduction, A. Das and A. C. Melissinos
(Gordon and Breach, 1986)
4. Quantum Mechanics, E. Merzbacher (Wiley, 1961).
6 Quantum Density Matrix
⟨A(t)⟩ = ⟨ψ(t)|Â|ψ(t)⟩    (6.1)

ρ̂(t) = |ψ(t)⟩⟨ψ(t)|    (6.2)

taken as the outer product of the state vector with itself. From this definition we can
write

ρ̂ = Σ_{mn} c_n* c_m |m⟩⟨n|    (6.3)
  = Σ_{mn} ρ_mn |m⟩⟨n|    (6.4)

where the ρ_mn are the density matrix elements. Expectation values of an operator are
then given by the trace

⟨A(t)⟩ = Σ_{mn} A_nm ρ_mn = Tr[Âρ(t)]    (6.5)

If ρ is diagonal, then

Tr[Âρ(t)] = Σ_n A_nn ρ_nn = Σ_n ⟨n|A|n⟩ P_n

where P_n = ρ_nn is the statistical probability of finding the system in state n. These
statistical weights must be such that P_n ≤ 1 and

Σ_n P_n = 1
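Equations 6.1–6.5 translate directly into code. A minimal sketch (not from the text), with an arbitrary normalized three-state vector and a real symmetric (hence Hermitian) operator:

```python
import math

# |psi> = sum_n c_n |n> in a 3-state basis; rho_mn = c_m c_n*
c = [0.5 + 0j, 0.5j, complex(math.sqrt(0.5))]
rho = [[c[m] * c[n].conjugate() for n in range(3)] for m in range(3)]

A = [[1.0, 0.2, 0.0], [0.2, -1.0, 0.3], [0.0, 0.3, 0.5]]   # Hermitian (arbitrary)

# Tr[A rho] = sum_mn A_mn rho_nm
tr_Arho = sum(A[m][n] * rho[n][m] for m in range(3) for n in range(3))

# direct expectation <psi|A|psi>
direct = sum(c[m].conjugate() * A[m][n] * c[n] for m in range(3) for n in range(3))
assert abs(tr_Arho - direct) < 1e-12
assert abs(sum(rho[n][n] for n in range(3)) - 1) < 1e-12   # Tr[rho] = 1
```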
Taking each term in the parentheses as a Heisenberg operator evolving under L_o, we
can write this as

U = e^{−iL_o t} [ 1 − i ∫_0^t dt_1 L_VI(t_1) − ∫_0^t dt_1 ∫_0^{t_1} dt_2 L_VI(t_1) L_VI(t_2) + ··· ]    (6.15)
Say, for example, we are interested in only one aspect of the system or in some
particular property, such as the spin or energy. We can then retain only the relevant
indices and define in such a way a reduced density matrix. Most commonly, the
reduced density matrix is used in cases where the total state space is partitioned into
interacting (or noninteracting) subsystems, such as the internal states of a molecule
coupled to the normal modes of a surrounding environment. In such cases we can
factor the total density matrix into a tensor product
ρ AB = ρ A ⊗ ρ B
If we take states |i⟩ as spanning space A and states |j⟩ as spanning B, so that the
products |ij⟩ span the total space, the reduced density matrix for A is obtained by
summing over the indices of B:

(ρ_A)_{ii′} = Σ_j ⟨ij|ρ_AB|i′j⟩

This is a "partial" trace since it involves summing only over states belonging to
space B.
The expectation value of any operator acting in space A can be computed using ρ_A:

⟨A⟩ = Tr[ρ_A A]
|−⟩ = (1/√2)(|00⟩ − |11⟩)

where |0⟩ and |1⟩ label, say, the ground and excited states of each particle. The density
matrix for this system is then

ρ̂ = |−⟩⟨−|
  = (1/√2)(|00⟩ − |11⟩) (1/√2)(⟨00| − ⟨11|)
  = (1/2)[ |00⟩⟨00| − |00⟩⟨11| − |11⟩⟨00| + |11⟩⟨11| ]    (6.17)
Say, for example, you want to find the reduced density matrix for the second particle.
For this you would take the partial trace over the first:

ρ̂_2 = tr_1(ρ̂)
    = (1/2)[ tr(|0⟩⟨0|)|0⟩⟨0| − tr(|0⟩⟨1|)|0⟩⟨1| − tr(|1⟩⟨0|)|1⟩⟨0| + tr(|1⟩⟨1|)|1⟩⟨1| ]
    = (1/2)[ ⟨0|0⟩|0⟩⟨0| − ⟨0|1⟩|0⟩⟨1| − ⟨1|0⟩|1⟩⟨0| + ⟨1|1⟩|1⟩⟨1| ]
    = (1/2)( |0⟩⟨0| + |1⟩⟨1| )
    = (1/2) Î    (6.18)
Notice the reduced density matrix for the second particle tells us the probabilities of
that particular particle being in either state |0⟩ or |1⟩ with no regard to the state of
the other particle. Notice also that ρ_2 represents a mixed state for particle #2. We
cannot determine with complete certainty which state particle #2 is in because it is
not in one. Because of its entanglement with particle #1, particle #2 is not in a single
definable state, and likewise for particle #1.
If one were to make a measurement of the state of #1 (for example, |1⟩ emits a
photon to decay to |0⟩), the total system would be forced to be in state |00⟩. If we
were to then calculate the reduced density matrix for #2, we would find ρ_2 = |0⟩⟨0|,
indicating that #2 has a 100% chance of being in state |0⟩ and a 0% chance of being
in state |1⟩. This is the "spooky" nature of quantum mechanics. Measuring the state
of one part of an entangled pair determines the state of the other.
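The partial-trace computation in Equation 6.18 can be reproduced numerically. In the sketch below (an illustration, not the author's code), the two-particle basis states |ij⟩ are indexed as 2i + j:

```python
import math

# rho for |-> = (|00> - |11>)/sqrt(2); basis index (i, j) -> 2*i + j
amp = {0b00: 1 / math.sqrt(2), 0b11: -1 / math.sqrt(2)}
rho = [[amp.get(r, 0.0) * amp.get(c, 0.0) for c in range(4)] for r in range(4)]

# partial trace over particle 1: (rho_2)_{j j'} = sum_i rho_{ij, ij'}
rho2 = [[sum(rho[2 * i + j][2 * i + jp] for i in range(2)) for jp in range(2)]
        for j in range(2)]
assert abs(rho2[0][0] - 0.5) < 1e-12 and abs(rho2[1][1] - 0.5) < 1e-12
assert abs(rho2[0][1]) < 1e-12 and abs(rho2[1][0]) < 1e-12

# purity Tr[rho2^2] = 1/2 < 1 marks a mixed state
purity = sum(rho2[j][k] * rho2[k][j] for j in range(2) for k in range(2))
assert abs(purity - 0.5) < 1e-12
```

The purity Tr[ρ₂²] = 1/2 confirms that particle #2 is left in a maximally mixed state, exactly as Equation 6.18 states.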
• S[ρ] is additive. That is, given two independent systems each with density
matrix ρ_A and ρ_B, S[ρ_A ⊗ ρ_B] = S[ρ_A] + S[ρ_B]. If instead ρ_A and ρ_B are
the reduced density matrices of some general system ρ_AB, then the entropy is
subadditive: S[ρ_AB] ≤ S[ρ_A] + S[ρ_B].
The equations of motion for the density matrix elements in this basis are thus

ρ̇_11 = iβ(ρ_12 − ρ_21)
ρ̇_22 = −iβ(ρ_12 − ρ_21)
ρ̇_12 = iβ(ρ_11 − ρ_22)
ρ̇_21 = −iβ(ρ_11 − ρ_22)    (6.19)

We can solve these equations in a number of ways, the easiest being to take the time
derivative of ρ̇_11:

ρ̈_11 = iβ(ρ̇_12 − ρ̇_21) = −2β²(ρ_11 − ρ_22)    (6.20)
and
ρ22 (t) = sin2 (βt)
for the populations. Obtaining the coherences
ρ̇ 12 = iβ(ρ11 − ρ22 )
FIGURE 6.1 Time evolution of various components of the density matrix for a degenerate
two-state system with coupling h̄β.
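Equation 6.19 integrates readily. The sketch below (not from the text; β = 1 and an RK4 stepper are arbitrary choices) starts the system in state 1 and recovers ρ₂₂(t) = sin²(βt):

```python
import math

beta = 1.0

def deriv(r):                          # Eq. (6.19): r = [r11, r22, r12, r21]
    r11, r22, r12, r21 = r
    return [1j * beta * (r12 - r21),
            -1j * beta * (r12 - r21),
            1j * beta * (r11 - r22),
            -1j * beta * (r11 - r22)]

r = [1 + 0j, 0j, 0j, 0j]               # start entirely in state 1
t, dt = 0.0, 1e-3
while t < 2.0 - 1e-9:                  # RK4 integration to t = 2
    k1 = deriv(r)
    k2 = deriv([x + 0.5 * dt * k for x, k in zip(r, k1)])
    k3 = deriv([x + 0.5 * dt * k for x, k in zip(r, k2)])
    k4 = deriv([x + dt * k for x, k in zip(r, k3)])
    r = [x + dt * (a + 2 * b + 2 * c + d) / 6
         for x, a, b, c, d in zip(r, k1, k2, k3, k4)]
    t += dt

assert abs(r[1].real - math.sin(beta * t) ** 2) < 1e-6   # rho22 = sin^2(beta t)
assert abs(r[0].real + r[1].real - 1) < 1e-9             # population conserved
```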
single photon, with frequency Ω. This system is shown schematically in Figure 6.2a.
Whereas in our previous discussion of this system we assumed that the driving field
remained on essentially forever, here we shall consider the case where the field is
switched on at time t = 0 and then switched off at some later time. For the sake of
discussion, let the two states in question be two electronic states of some system and
the external field be the electric field. The Hamiltonian we consider is given by
where μ̂ = ex̂ is the x component of the electric dipole operator and Ω is the laser
frequency with intensity E. Let us assume that x̂ only couples 1 and 2 so that its matrix
elements are

μ = ⟨1|ex̂|2⟩
FIGURE 6.2 (a) States 1 and 2 with energies ±ħω_o/2 are coupled to an external driving field
with frequency Ω. (b) Energy level diagram showing how state |1⟩ is dressed by its interaction
with the photon field so that it becomes nearly degenerate with state |2⟩. Note: The horizontal
offset between the wells is strictly for clarity.
ω1 = μE
and write the equations of motion for the density matrix elements as
Thus far, our analysis is exact. However, to proceed we need to make a judicious
approximation in order to simplify our analysis. First, note that when we combine the
cosine and exponential, we arrive at terms that are of the form
If the laser frequency Ω is such that we are nearly resonant with ħω_o, then only the terms
with ω_o − Ω ≈ 0 will give a contribution to the transition rate between the two states.
The other terms with ω_o + Ω ≈ 2ω_o are off resonance and will contribute vanishingly little
to the transition rate. Thus we define the rotating wave approximation or RWA by
neglecting all off-resonant terms. This allows us to simplify the above equations as
ρ̇_11 = i (ω_1/2) [ e^{i(ω_o−Ω)t} ρ_12 − e^{−i(ω_o−Ω)t} ρ_21 ]    (6.27)

ρ̇_22 = −i (ω_1/2) [ e^{i(ω_o−Ω)t} ρ_12 − e^{−i(ω_o−Ω)t} ρ_21 ]    (6.28)

ρ̇_12 = i (ω_1/2) e^{i(ω_o−Ω)t} (ρ_11 − ρ_22)    (6.29)

ρ̇_21 = ρ̇_12*    (6.30)
If we are exactly on resonance, then all the exponential terms become unity and
the equations reduce to

ρ̇_11 = i (ω_1/2)(ρ_12 − ρ_21)    (6.31)

ρ̇_22 = −i (ω_1/2)(ρ_12 − ρ_21)    (6.32)

ρ̇_12 = i (ω_1/2)(ρ_11 − ρ_22)    (6.33)

ρ̇_21 = ρ̇_12*    (6.34)
but

ρ̇_12 = +iω_o ρ_12 = ρ̇_21*
If the field is switched off at some later time t1 > 0, then for all time after t1 the
populations will remain constant with ρ11 = ρ11 (t1 ), ρ22 = ρ22 (t1 ). However, the
coherences will continue to evolve as
This is free precession in that the system continues to evolve even though no further
population is being transferred. State 1 is effectively locked into a superposition with
state 2.
If the duration of the pulse is such that ω1 t1 = π , then for times t > π/ω1 , ρ11 = 0
and ρ22 = 1. In other words, we have achieved a perfect population inversion. If we
are discussing a spin 1/2 system, we can imagine that all the spins in the system have
been flipped from up to down (or vice versa). Such pulses are termed “pi” pulses.
We can also define a "pi-over-two" pulse by setting the pulse duration to be such that
ω_1 t_1 = π/2. In that case, the magnitudes of the two coherences ρ_12 and ρ_21 are at
their maximum.
To appreciate the effect of all this on the system, consider what happens to the
evolution of an observable, such as the average polarization ⟨μ⟩, following preparation
by a pulse of area θ = ω_1 t_1. The x component of the polarization is given by

⟨μ_x(t)⟩ = Tr[ρ(t)μ̂] = e Tr[ρ(t)x̂]

For the two-state system at hand, the polarization operator in the {|1⟩, |2⟩} basis is
given by

μ̂_x = μ ⎛ 0 1 ⎞
        ⎝ 1 0 ⎠

Thus, for t > t_1,

⟨μ_x⟩ = −μ sin(θ) sin(ω_o t)
In other words, the x component of the polarization vector of the sample oscillates
between ±μ sin(θ) at the Bohr transition frequency. According to classical elec-
trodynamics, an oscillating electric field must radiate at its oscillation frequency.
Consequently, this polarization will eventually lead to radiative decay in any real-
istic physical situation. We can introduce a phenomenological radiative decay as an
afterthought to the dynamics by requiring the population of state 2 to be exponentially
damped (corresponding to radiative decay to state 1). As a result, we expect
⟨μ_x(t)⟩_rad = −μ sin(θ) sin(ω_o t) × e^{−γt}
with

Σ_k P_k = 1
Note that if we were to write out the full density matrix in matrix form, it would be
in block-diagonal form such as:

ρ = ⎛ ρ_1  0   ···  0  ⎞
    ⎜ 0    ρ_2 ···  0  ⎟
    ⎜ ⋮        ⋱    ⋮  ⎟
    ⎝ 0    0   ···  ρ_n ⎠    (6.43)
and Pk would be the fractional number of subsystems with energy gap h̄ωok . Since
there is no coupling between subsystems, the off-diagonal blocks will remain 0 for
all times. In such a case, the density matrix of the entire system is a statistical mixture
of its component subsystems.
To calculate the polarization of the ensemble as a function of time, we need to
evaluate

⟨μ_x(t)⟩ = Tr[ρ μ̂_x] = Σ_k P_k Tr[ρ_k μ̂_x] = −μ sin θ Σ_k P_k sin(ω_ok t)    (6.44)
To make things concrete, let us assume that the energy gaps are normally distributed
about some average energy gap such that the probability of a subsystem having energy
gap ħω_ok is given by

P(ω_ok) = (1/√(2πσ²)) e^{−(ω_ok − ω)²/2σ²}
Taking the sum over subsystems to a continuous integral and integrating over all
frequencies results in

⟨μ_x(t)⟩ = −μ sin θ sin(ωt) e^{−σ²t²/2}    (6.45)
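The Gaussian decay in Equation 6.45 can be confirmed by brute-force averaging. The sketch below (toy values for the mean gap, width, and pulse area; midpoint quadrature) averages sin(ω_ok t) over the normal distribution and compares with the closed form:

```python
import math

mu, theta = 1.0, math.pi / 2          # dipole and pulse area (toy values)
wbar, sigma = 5.0, 0.4                # mean gap and Gaussian width (toy values)
t = 1.7

# ensemble average of sin(w t) over P(w) ~ N(wbar, sigma^2), by quadrature
n, lo, hi = 20000, wbar - 8 * sigma, wbar + 8 * sigma
dw = (hi - lo) / n
avg = sum(math.sin((lo + (k + 0.5) * dw) * t)
          * math.exp(-((lo + (k + 0.5) * dw) - wbar) ** 2 / (2 * sigma ** 2))
          for k in range(n)) * dw / math.sqrt(2 * math.pi * sigma ** 2)

numeric = -mu * math.sin(theta) * avg
# Eq. (6.45): free induction decay with Gaussian envelope
analytic = -mu * math.sin(theta) * math.sin(wbar * t) * math.exp(-sigma**2 * t**2 / 2)
assert abs(numeric - analytic) < 1e-6
```

The e^{−σ²t²/2} envelope is the free-induction-decay signature of inhomogeneous broadening discussed in the text.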
FIGURE 6.3 Photon echo time sequence. Left: The two input pulses with areas π/2 and π .
Right: On same time scale are the resulting output signals for the macroscopic polarization
P(t) in the form of two free induction decay signals, each with nearly identical lifetimes.
where V is the matrix element coupling the two states and E_A and E_B are their
energies. The coefficients, a(t) and b(t), are the probability amplitudes for observing
the excitation on A and B, respectively, at time t. Take, for example, the case where
the two species are identical so that E_A = E_B and that at time t = 0, A is photoexcited
so that a(0) = 1 and b(0) = 0. We can immediately write the solution as
|a(t)|² = cos²(Ωt) and |b(t)|² = sin²(Ωt), where we see that the exciton is passed back
and forth between A and B at the Rabi frequency, Ω = V/ħ. This oscillation is due
to the fact that the initial state is not a stationary state.
We know, however, that in a thermal system, energy transfer can be irreversible
due to contact and mixing between the donor and acceptor species with the solvent
media or matrix in which the two molecules are embedded. To consider the case of
irreversible transfer, we need to use a density matrix approach, ρ = |ψ⟩⟨ψ|, where
the diagonal elements of the density matrix, ρ_11 = |a(t)|² and ρ_22 = |b(t)|², are
the populations of each state and the off-diagonal elements, ρ_12 = a*(t)b(t) and
ρ_21 = b*(t)a(t), are the coherences between the two states. For an isolated system,
the time evolution of the density matrix is given by the Liouville–von Neumann
equation, iħ ρ̇ = [H, ρ].
relaxation time T2 as the time scale for coherence relaxation. For a statistical mixture,
ρ12 = ρ21 = 0 and we can write the equations of motion for ρ as
∂ρ_ii/∂t = (1/iħ)[H, ρ]_ii − (1/T_1)(ρ_ii − ρ_ii^(eq))    (6.49)

for the diagonal terms and

∂ρ_ij/∂t = (1/iħ)[H, ρ]_ij − (1/T_2) ρ_ij    (6.50)
for the coherences. Since we are dealing with the transfer of an electronic excitation
from one species to the next, we can assume that at thermal equilibrium, all of the
population is in the ground electronic state |0⟩, which we have not explicitly included.
If we take E_A, E_B ≫ kT, the thermal populations of the exciton states are vanish-
ingly small and we can include the ground state only as a "sink" such that the total
population ρ_11 + ρ_22 + ρ_00 = 1. Thus, our equations of motion for the two excited
states read
states read
i 1
ρ̇ 11 = − V (ρ21 − ρ12 ) − ρ11 (6.51)
h̄ τA
i 1
ρ̇ 22 = + V (ρ21 − ρ12 ) − ρ22 (6.52)
h̄ τB
i 1
E
ρ̇ 12 = − V (ρ22 − ρ11 ) − ρ12 − ρ12 (6.53)
h̄ T2 ih̄
i 1
E
ρ̇ 21 = + V (ρ22 − ρ11 ) − ρ21 + ρ21 (6.54)
h̄ T2 ih̄
Here, τ_A and τ_B are the radiative lifetimes of the A and B excitons and ΔE is the
energy difference. We now consider various limits to understand the various physical
regimes described by these equations.
In the case of strong coupling, we take the Rabi frequency to be much greater than
the radiative rates, Ω = 2V/ħ ≫ 1/τ_A, 1/τ_B, as well as the dephasing rate, 1/T_2. In
this case, the exchange between A and B is far more rapid than any other process in
the system. Our equations of motion reduce to those of the isolated system, and we
have the same oscillatory behavior as previously. A comparison between the
numerically exact solution and the approximate solution is shown in Figure 6.4(a) for
the case of ΔE = 0.1, V = 1/2, and τ = T_2 = 10⁴.
Identical systems: For the case of exciton exchange between two identical sys-
tems, we have τ_A = τ_B = τ and ΔE = 0. In this limit, we can arrive at an equation
for the population in state 1:

ρ̈_11 + (1/τ + 1/T_2) ρ̇_11 + (1/(T_2 τ) + Ω²) ρ_11 = Ω²/2    (6.56)
FIGURE 6.4 Comparison between numerically exact evaluation and various approximations.
(a) Strong coupling with no decay of population or dephasing, 1/T_2, 1/τ → 0. (b) Identical
systems (ΔE = 0) with 1/T_2 ≫ 1/T_1. (c) Rapid dephasing with 1/T_2 ≫ Ω.
This is the equation of motion for a damped oscillating system. Taking the initial
condition to be ρ_11(0) = 1 and ignoring the oscillatory part, we obtain the solution
for the overdamped decay of the initial population

ρ_11,decay(t) = (1/2) e^{−t/τ} (1 − e^{−t/T_2})    (6.57)

The results of this case are shown in Figure 6.4(b) for the case of ΔE = 0, V = 1/2,
τ = 500, and T_2 = 10. It is interesting to note that the approximate solution given by
Equation 6.57 does not decay at long times.
Rapid dephasing: As a final limit, we take the case where the dephasing time
is short compared with the radiative lifetime. This gives us the case of a damped
oscillator:

ρ̇_11 + [ 1/τ_A + (Ω² T_2/2)/(1 + (ΔE T_2/ħ)²) ] ρ_11 = 0    (6.58)

The solution we immediately obtain is

ρ_11(t) = exp[ −(1/τ_A + W) t ]    (6.59)

with

W = (2|V|²/ħ²) · T_2/(1 + (T_2 ΔE/ħ)²)    (6.60)
In this limit, we see energy transfer as a truly irreversible process with rate constant
W, which gives the probability of transfer from A to B per unit time. This estimate
works well if the dephasing time is the shortest time scale in the system. In Figure
6.4(c) we show the case for T_2 = 0.05, V = 0.1, ΔE = 1, and τ = 100. The dashed
curve is the approximate solution, with the solid curve being the numerically exact
solution.
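The comparison in Figure 6.4(c) can be reproduced by integrating Equations 6.51–6.54 directly and testing the rate-constant solution of Equations 6.59–6.60. A sketch (not the author's code; RK4 with dt = 0.005 is an arbitrary choice), using the quoted parameters T₂ = 0.05, V = 0.1, ΔE = 1, τ = 100 and ħ = 1:

```python
import math

hbar = 1.0
V, dE, T2, tauA, tauB = 0.1, 1.0, 0.05, 100.0, 100.0   # rapid-dephasing case

def deriv(r):                         # Eqs. (6.51)-(6.54)
    r11, r22, r12, r21 = r
    return [-1j * V / hbar * (r21 - r12) - r11 / tauA,
            +1j * V / hbar * (r21 - r12) - r22 / tauB,
            -1j * V / hbar * (r22 - r11) - r12 / T2 + 1j * dE / hbar * r12,
            +1j * V / hbar * (r22 - r11) - r21 / T2 - 1j * dE / hbar * r21]

r = [1 + 0j, 0j, 0j, 0j]
t, dt, tmax = 0.0, 0.005, 100.0
while t < tmax - 1e-9:                # RK4 integration
    k1 = deriv(r)
    k2 = deriv([x + 0.5 * dt * k for x, k in zip(r, k1)])
    k3 = deriv([x + 0.5 * dt * k for x, k in zip(r, k2)])
    k4 = deriv([x + dt * k for x, k in zip(r, k3)])
    r = [x + dt * (a + 2 * b + 2 * c + d) / 6
         for x, a, b, c, d in zip(r, k1, k2, k3, k4)]
    t += dt

W = (2 * V**2 / hbar**2) * T2 / (1 + (T2 * dE / hbar) ** 2)   # Eq. (6.60)
approx = math.exp(-(1 / tauA + W) * t)                        # Eq. (6.59)
assert abs(r[0].real - approx) < 0.05
```

Because T₂ is the shortest time scale here, the golden-rule-like rate W tracks the numerically exact population closely, as the text asserts.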
Note that if we take the dephasing time to be extremely short, T2 → 0, then the
transfer rate vanishes. This underscores the importance of the buildup of quantum
coherence between the two coupled states. In many regards, this is much like the old
phrase that a watched pot never boils. If we take T2 as a time scale by which the envi-
ronment queries the system as to which state it happens to be in at a particular instant
in time, the system must be found in one state or the other and hence immediately
after that instant in time one or the other of the populations must be exactly 1 and
the other exactly 0 and all the coherences between the two must exactly vanish. As a
result, even if the states are strongly coupled, if T_2 ≪ ħ/V, the exciton is effectively
localized on the initial state forever.
General solution: We now seek a general solution to the equations of motion in
the form of a damped oscillator. Notice that if we integrate Equation 6.59 over all
time, we obtain an equation of the form

∫_0^∞ dt ρ_11(t) = (1/τ_A + W)⁻¹    (6.61)
For now, let us differentiate the approximate rate W from Eq. (6.60) from the rate W̄,
which we will obtain from the exact solution, and look for the case in which the two
are identical.
At this point it is best to work with the Laplace transformed versions of the
equations of motion,

Lρ = ρ̃(s) = ∫_0^∞ e^{−st} ρ(t) dt    (6.62)
First, the transformed equations must be true for all values of s, so we take the case
of s = 0. Secondly, the initial conditions are such that only ρ11 (0) = 1 with all other
elements equal to zero. Thus, our Laplace transformed equations of motion reduce to
a set of algebraic equations [where we take ρ̃ = ρ̃(0)]:
−1 = (V/iħ)(ρ̃_21 − ρ̃_12) − ρ̃_11/τ_A    (6.64)

0 = −(V/iħ)(ρ̃_21 − ρ̃_12) − ρ̃_22/τ_B    (6.65)

0 = (V/iħ)(ρ̃_22 − ρ̃_11) − ρ̃_12/T_2 − (ΔE/iħ) ρ̃_12    (6.66)

0 = −(V/iħ)(ρ̃_22 − ρ̃_11) − ρ̃_21/T_2 + (ΔE/iħ) ρ̃_21    (6.67)
After quite a bit of tedious algebra, we obtain the final result in the desired form

ρ̃_11^{−1} = 1/τ_A + W̄    (6.68)
6.5 DECOHERENCE
Certainly one of the hallmarks of quantum dynamics is the fact that over the course
of the evolution of a system, it evolves from some initially prepared state to a su-
perposition of states. This leads to one of the effects that make quantum mechanics
interesting, namely, interference. The most common thought experiment (and very
colorfully described in Feynman’s book) is where electrons are shot from some source
toward a blocking screen with two small parallel slits. If the slits are close enough
together, then there is equal likelihood for an electron to go through either slit. For
classical electrons, this would result in the accumulation of two distributions of elec-
trons behind the screen: those that went through slit #1 and those that went through
slit #2. However, what is observed is an interference pattern consistent with a plane
wave passing through both slits and interfering constructively and destructively on the
other side—much like water waves. Since electrons are not divisible into chunks of
partial electrons, we have to assume that each electron passing through the observing
screen went through one slit or the other and not through both.
By its very nature, quantum mechanics likes to explore all equivalent (or nearly
equivalent) alternatives. Yogi Berra put it best in saying, “When you come to a fork in
the road, you take it.” Perhaps the “Yogi Berra” rule of quantum mechanics is “When
you come to a fork in the road, you take both paths.” The story behind this quote is
that Yogi lived at the end of a cul-de-sac. So, if you were going to Yogi’s house, you
would eventually have to take the left or right turn . . . both of which would land you
at Yogi’s house. I wonder how many Brooklyn Dodgers were lost through destructive
interference this way.
Returning to the double-slit thought experiment, if we try to monitor the flux of
electrons through either hole, then we force the electron’s wave function to localize
every time we observe an electron passing by. Say we observe the flux using a laser
beam so that the scatter of a photon by the electron indicates the electron’s passage.
If we turn the light intensity down low so that some electrons go by undetected,
we partially recover the interference pattern such that the resulting distribution of
electrons on the final detector represents the weighted sum of electrons that got
caught (with no interference) and those that slipped by undetected.
|x⟩|X⟩ → |x⟩|X_x⟩ = |x⟩ S_x|X⟩

where the x subscript denotes that the surroundings have interacted with the quantum
particle and are thus entangled. S_x is the scattering matrix describing the process. If
instead we start with a superposition state,

∫ dx φ(x)|x⟩|X⟩ → ∫ dx φ(x)|x⟩ S_x|X⟩
where

Γ = k² N v σ_eff / V

with k as the wave vector, Nv/V the incoming flux that we can relate to the collision
frequency, and σ_eff the effective cross-section. Γ is the decoherence rate, given as the
number of scattering events per unit time per unit area. It is also (formally) equivalent
to the dephasing rate 1/T_2 introduced in the last chapter. Here, at least, we have some
inkling of how the dephasing process may actually occur.
If we extend this analogy to a quantum state in a solvent environment, such as a
solvated electron or an excited state of a molecule, then we can relate the effective
cross-section to the molecular radius, σ_eff = πr²; k is given by the de Broglie
wavelength of the scattering particle, k = 2π/λ_th, and v is replaced by the mean
thermal velocity, v = √(kT/m). Pulling this together, we arrive at a simple estimate
for the decoherence rate for condensed-phase states:

Γ = √2 π² m (kT)^{3/2} r² N / (h² V)
Still working within the impulsive approximation, the resulting equation of motion
for the reduced density matrix of the quantum particle is

iħ ∂ρ/∂t = [H, ρ] − iħΓ [x, [x, ρ]]
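The double-commutator term acts as pure decoherence: in a basis where x is diagonal, it damps ρ_ab at the rate Γ(x_a − x_b)² while leaving the populations alone. A sketch (not from the text), taking the H = 0 limit with arbitrary grid positions and Γ:

```python
import math

xs = [-1.0, 0.0, 1.0]                 # positions of a 3-site "grid" (arbitrary)
Gamma = 0.5
rho0 = [[1 / 3 for _ in range(3)] for _ in range(3)]   # pure superposition state

# With H = 0 and x diagonal, d rho_ab/dt = -Gamma (x_a - x_b)^2 rho_ab,
# the elementwise form of -Gamma [x, [x, rho]].
def rhs(rho):
    return [[-Gamma * (xs[a] - xs[b]) ** 2 * rho[a][b] for b in range(3)]
            for a in range(3)]

rho = [row[:] for row in rho0]        # Euler integration
dt, n = 1e-3, 2000
for _ in range(n):
    d = rhs(rho)
    rho = [[rho[a][b] + dt * d[a][b] for b in range(3)] for a in range(3)]

t = dt * n
for a in range(3):
    assert abs(rho[a][a] - 1 / 3) < 1e-12        # populations untouched
expected = (1 / 3) * math.exp(-Gamma * (xs[0] - xs[2]) ** 2 * t)
assert abs(rho[0][2] - expected) < 1e-3          # coherence decays exponentially
```

The farther apart two positions are, the faster their mutual coherence dies, which is the essential content of environment-induced localization.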
It is interesting to note that this model also results if we consider the equations
of motion for a particle randomly kicked by a Gaussian noise term. The equivalent
Hamiltonian for this reads

H = H_0 + Σ_i λ_i(t) V_i    (6.70)

where H_0 and V_i are arbitrary Hermitian operators and the λ_i(t) are Gaussian stochastic
coefficients with vanishing mean, ⟨λ_i(t)⟩ = λ̄_i = 0, and second moments given by

⟨λ_i(t) λ_j(t′)⟩ = g_ij δ(t − t′)    (6.71)
Hamiltonians such as this can model a wide variety of physical situations where
the motion or transport is driven by an external field. One such example is the case
Here the time-evolution superoperator U(t) is given by the following infinite series:

U(t) = e^{−iL_0 t} − i ∫_0^t dτ e^{−iL_0(t−τ)} L_V(τ) e^{−iL_0 τ}
       − ∫_0^t dτ ∫_0^τ dτ′ e^{−iL_0(t−τ)} L_V(τ) e^{−iL_0(τ−τ′)} L_V(τ′) e^{−iL_0 τ′} + ···    (6.74)
⟨O(t)⟩ = Tr( O(t) U(t) ρ(0) )    (6.75)
It is assumed here that operator O can have an explicit time dependence. When per-
forming averages as in Equation 6.75, we need to distinguish two types of operators:
those with and those without stochastic coefficients λi (t). In the latter case, averaging
over noise as in Equation 6.75 reduces to an averaging of the evolution superoperator
U (t) and such expectation values can be calculated with the noise-averaged density
matrix.
Noise averaging of U (t) can be performed by taking averages for each term in
the series and then resumming the series. This involves averaging products of the
stochastic coefficients. Since λi (t) is sampled from a Gaussian deviate, all terms
involving an odd number of coefficients necessarily vanish. Furthermore, any term
with an even number of coefficients can be written as a sum of all possible products of
second moments. However, due to the order of integrations over time in Equation 6.74
and the fact that second moments in Equation 6.71 involve delta functions in time,
only one product from the sum contributes to the average after all the time integrations
are performed. For example, one can consider the fourth-order average.
It follows from Equation 6.77 that the noise-averaged density matrix satisfies the
following equation:
i ∂ρ/∂t = (L_0 − iM) ρ    (6.79)
This allows one to see an interesting connection between the noise-averaged time
evolution of the density matrix for the noisy system and the time evolution of the
reduced density matrix for an open quantum system.
Consider the case when the correlation matrix g_ij in Equation 6.78 is diagonal, that is, g_ij = g_ii δ_ij. In this case, Equation 6.79 can be rewritten as

i ∂ρ/∂t = (1/h̄)[H_0, ρ] − (i/2h̄^2) Σ_i g_ii [V_i, [V_i, ρ]]    (6.80)
Replacing g_ii = 2Λ and V_i = x, we arrive at the equation we had above for the impulsively monitored particle.
Even though we have stated that there is no recoil between the quantum particle
and the scattering particle, the energy of the quantum particle must increase with time.
In general, the Ehrenfest equations of motion for the expectation values of operators are given by

d⟨O⟩/dt = (d/dt) Tr(Oρ) = Tr(O dρ/dt)
For example, if we consider the Ehrenfest equations of motion for position, momentum, and energy, we find

d⟨x⟩/dt = ⟨p⟩/m    (6.81)

d⟨p⟩/dt = −⟨dV/dx⟩    (6.82)

d⟨H⟩/dt = 0    (6.83)
In other words, when we eliminate the interaction with the scattering particles, energy
is conserved as we expect. However, including the recoilless interaction,
ih̄ ∂ρ/∂t = [H_0, ρ] − (iΛ/h̄)[x, [x, ρ]]    (6.84)

the Ehrenfest equations become

d⟨x⟩/dt = ⟨p⟩/m    (6.85)

d⟨p⟩/dt = −⟨dV/dx⟩    (6.86)

d⟨H⟩/dt = +Λ/m    (6.87)
where m is the mass of the quantum particle. Thus, the average energy must increase
due to the noisy interaction. One can also conclude that the necessary condition for
energy conservation even in this case is that [Ho , Vi ] = 0 for all operators describing
the coupling between the quantum particle and the scatterer.
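This energy growth is easy to see numerically. The sketch below propagates the recoilless master equation for a truncated harmonic oscillator (the parameters and the units h̄ = m = ω = 1 are assumptions for illustration) and compares ⟨H⟩(t) against the predicted linear growth rate Λ/m:

```python
import numpy as np

# Propagate rho' = -i[H, rho] - Lam*[x, [x, rho]] for a truncated harmonic
# oscillator (hbar = m = omega = 1 assumed) and compare the energy growth
# against the predicted rate d<H>/dt = Lam/m of Eq. 6.87.
N, Lam, dt, steps = 40, 0.05, 0.002, 1000
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)            # lowering operator
xop = (a + a.T) / np.sqrt(2.0)
H = np.diag(n + 0.5)

def drho(r):
    comm = xop @ r - r @ xop              # [x, rho]
    return -1j * (H @ r - r @ H) - Lam * (xop @ comm - comm @ xop)

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                           # start in the ground state
for _ in range(steps):                    # fourth-order Runge-Kutta
    k1 = drho(rho); k2 = drho(rho + dt * k1 / 2)
    k3 = drho(rho + dt * k2 / 2); k4 = drho(rho + dt * k3)
    rho += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t = steps * dt
E = np.trace(H @ rho).real
print(f"<H>(t={t}) = {E:.4f},  predicted 0.5 + Lam*t = {0.5 + Lam * t:.4f}")
```

The trace is preserved by the double-commutator form, but the energy ratchets upward at the rate Λ/m, exactly as the Ehrenfest result predicts.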
We can construct equations of motion that do lead to d⟨H⟩/dt = 0 at long times by including a frictional term. For example, the equation of motion for a classical Brownian particle can be written as

m ẍ + η ẋ + V′(x) = f(t)

where η is the relaxation rate and f(t) is a noise source with the properties ⟨f(t)⟩ = 0 and ⟨f(t) f(t′)⟩ = 2ηkT δ(t − t′) = 2Λ δ(t − t′). In other words, the relaxation rate
η is directly proportional to the frequency (and strength) of the interaction with the
noisy field. Analyzing the classical equations results in the following noise-averaged
quantities:
d⟨x⟩/dt = ⟨p⟩/m    (6.88)

d⟨p⟩/dt = −⟨V′(x)⟩ − η⟨p⟩/m    (6.89)

d⟨E⟩/dt = (2η/m)(kT/2 − ⟨p^2⟩/2m)    (6.90)
As the system relaxes, the average kinetic energy becomes equal to kT/2 even though the average momentum ⟨p⟩ relaxes to zero. Also, it is important to notice that ⟨p^2⟩ ≠ ⟨p⟩^2. To actually solve these equations, we also need to work out the equations for the higher-order averages, ⟨p^2⟩, ⟨x^2⟩, and so on.
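A minimal stochastic simulation illustrates this: integrating the free-particle (V = 0) Langevin equation with the Euler–Maruyama scheme, the noise-averaged kinetic energy relaxes to kT/2 while ⟨p⟩ relaxes to zero. The units m = kT = 1 are assumed:

```python
import numpy as np

# Euler-Maruyama integration of the free (V = 0) Langevin equation
#   dp = -(eta/m) p dt + sqrt(2*eta*kT) dW
# checking that <p^2>/2m -> kT/2 while <p> -> 0.  Units m = kT = 1 assumed.
rng = np.random.default_rng(0)
m, eta, kT = 1.0, 1.0, 1.0
dt, nsteps, ntraj = 2e-3, 10_000, 4_000
p = np.zeros(ntraj)
for _ in range(nsteps):
    kick = rng.standard_normal(ntraj) * np.sqrt(2 * eta * kT * dt)
    p += -(eta / m) * p * dt + kick
ke = np.mean(p**2) / (2 * m)
print(f"<p>  = {np.mean(p):+.3f}")
print(f"<KE> = {ke:.3f}   (kT/2 = {kT / 2})")
```

The ensemble average over many trajectories is what plays the role of the noise average in the equations above.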
Similarly, for quantum systems, we can insert a frictional term into the equations
of motion for the density matrix:
i ∂ρ/∂t = (1/h̄)[H_0, ρ] + (η/2mh̄)[x, {p, ρ}] − i(ηkT/h̄^2)[x, [x, ρ]]    (6.91)
The resulting Ehrenfest equations of motion read

d⟨x⟩/dt = ⟨p⟩/m    (6.92)

d⟨p⟩/dt = −⟨dV/dx⟩ − (η/m)⟨p⟩    (6.93)

d⟨H⟩/dt = (2η/m)(kT/2 − ⟨p^2⟩/2m)    (6.94)
As in the classical case, the system relaxes to some thermal distribution such that its
final kinetic energy is identical to the thermal energy. Also, one needs to be aware
that these equations of motion are not closed. In fact, for the general case, this is the
beginning of a hierarchy of equations.43–48
Suppose at some time t such that Ωt ≪ 1 we measure the state of the system; then P_1 ≈ 1 and P_2 ≈ Ω^2 t^2/4 ≪ 1. Likewise, if we had prepared the system in (2), we would have the reverse situation, where P_2 ≈ 1 and P_1 ≈ 0. If level (3) can only decay to (1), say, due to a selection rule, then we can perform a measurement on the system by driving the (1) → (3) transition with an optical pulse.
The proposed experiment goes as follows. First, we prepare the system in state (2) by driving the (1) → (2) transition with a π pulse of duration T = π/Ω while simultaneously applying a series of short measurement pulses. The duration of the measurement pulse is assumed to be much less than T. Suppose the system is in (1) at t = 0 and a π pulse is applied. In the absence of the probe pulse, P_2(T) = 1. For a
FIGURE 6.5 (a) Three-level scheme used in Cook’s proposed experiment for testing the quan-
tum Zeno effect. (b) Predicted transient populations of state (2) following n impulsive probe
pulses.
The equations of motion for the polarization vector R = (R_1, R_2, R_3) are given by

dR/dt = ω × R

with R(0) = (0, 0, −1) and ω = Ω(1, 0, 0). Following preparation and in the absence of any further interactions, the polarization vector precesses about ω at the Rabi frequency. (Note that we have ignored the counter-rotating terms.)

Now assume that n probe pulses are applied at times τ_k = kπ/(nΩ) where k = 1, . . . , n. Just before the first probe pulse at t = π/(nΩ), the polarization vector is

R = (0, sin(π/n), −cos(π/n))
The probe pulse collapses the wave function (that is, eliminates the coherences) while leaving the populations unchanged. In other words, after the pulse, R_1(t_+) = R_2(t_+) = 0 and R(t_+) = (0, 0, −cos(π/n)). For all intents and purposes, R(t_+) is identical to R(0) except that its magnitude is now |R| = |cos(π/n)|. Consequently, after a sequence of n pulses, R(T) = (0, 0, −cos^n(π/n)). Since R_3 is the difference between the two populations, R_3 = P_2 − P_1 and P_1 + P_2 = 1, it is easy to see that

P_2(T) = (1 + R_3(T))/2
       = (1 − cos^n(π/n))/2    (6.97)
Expanding cos(π/n) as a power series and using

lim_{n→∞} (1 − x/n)^n = e^{−x}
The predicted transient populations of state (2) are shown in Figure 6.5b, where we have assumed the probe pulse to be impulsive. As the frequency of the probe pulses increases, the population in (2) does not decay to state (1).
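Equation 6.97 is easy to evaluate directly; the suppression of the transfer out of the initial state improves as the number of probe pulses n grows:

```python
import math

# P2(T) = (1 - cos(pi/n)**n)/2, Equation 6.97: repeated impulsive
# measurement suppresses the transfer out of the initial state.
p2 = {n: (1 - math.cos(math.pi / n) ** n) / 2 for n in (1, 4, 16, 64, 256)}
for n, val in p2.items():
    print(f"n = {n:3d}   P2(T) = {val:.4f}")
```

For n = 1 the full π pulse transfers all the population, while for large n the population is frozen in place — the quantum Zeno effect.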
This scheme was used by Itano et al.50 in 1990 to examine the effect of measurement on a quantum superposition of states. In this experiment, approximately 5000 9Be+ ions were held in a Penning trap and laser cooled to below 250 mK. In a magnetic field, the 2s 2S_{1/2} ground state of Be+ is split into hyperfine levels similar to what is shown in Figure 6.5(a). Radio-frequency (rf) transitions can occur between the (m_I, m_J) = (3/2, 1/2) and (1/2, 1/2) sublevels. A resonant rf pulse would place nearly all the atoms in the upper (3/2, 1/2) state (2), depopulating the lower state. A second UV pulse was resonant with the transition between the lower 2s 2S_{1/2} state (1) and one of the 2p 2P_{3/2} states (3) with quantum numbers (m_I, m_J) = (3/2, 1/2), which only decays to state (1).
only decays to state (1). The results of this experiment are shown in Figure 6.6 where
we have plotted both the predicted and experimental transition probabilities between
states (1) and (2). The agreement is well within the 0.02% statistical uncertainty of
the experiment due to photon counting.
6.6 SUMMARY
In this chapter we have described the relaxation of a quantum system through a
rather phenomenological approach. We have not thus far described the process for
connecting the various relaxation time scales to a molecular-level description of the
interaction between an individual molecule and an environment. This we reserve for
later discussion and refer the interested reader to other texts and sources.
FIGURE 6.6 Comparison between predicted and experimental transition probabilities following n probe pulses for the (a) 1 → 2 and (b) 2 → 1 transitions. Experimental data from Ref. 50.
where N is the number of particles in the ensemble. The time development of the phase-space distribution ρ(x, p) is given by the Liouville equation:

dρ/dt = ∂ρ/∂t + (∂ρ/∂q)(∂H/∂p) − (∂ρ/∂p)(∂H/∂q) = 0    (6.104)

where H is the Hamiltonian governing the motion of the particles. Thus, the equation of motion governing the classical phase-space density is

∂ρ/∂t = −(∂ρ/∂q)(∂H/∂p) + (∂ρ/∂p)(∂H/∂q) = −{ρ, H}    (6.105)
the Wigner distribution can and normally does go negative for states that have no classical analog—and is a convenient indicator of quantum mechanical interference. This can be seen in Figure 6.7, where we have plotted the Wigner function for the first three eigenstates of the harmonic oscillator. For every state other than the n = 0 ground state, one can clearly see regions where W is positive and regions where W is negative. Hence, W(x, p; t) dx dp cannot be interpreted as the probability of finding a particle in an infinitesimal phase-space volume dx dp about the point (x, p).
The function itself has a number of useful properties. First, W(x, p) is real. Second, the x and p distributions are given by the marginals: integrating over momentum,

∫ dp W(x, p) = ρ(x, x)    (6.108)

yields the position distribution, while integrating over position,

∫ dx W(x, p) = ρ(p, p)    (6.109)

yields the momentum distribution. Again, for a pure state, ρ(p, p) = |ψ̃(p)|^2. Finally,

∫ dx dp W = Tr(ρ) = 1    (6.110)

That W is real and gives both the momentum and position distributions implies that W can be negative somewhere.
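These marginal and normalization properties can be checked numerically for the harmonic oscillator ground state, whose Wigner function is W(x, p) = e^{−x²−p²}/π in units with h̄ = m = ω = 1 (an assumed choice of units for illustration):

```python
import numpy as np

# W(x,p) = exp(-x**2 - p**2)/pi for the n = 0 state; verify Eqs 6.108-6.110:
# the p-integral gives |psi(x)|^2 and the full integral gives Tr(rho) = 1.
x = np.linspace(-6, 6, 401)
p = np.linspace(-6, 6, 401)
X, P = np.meshgrid(x, p, indexing="ij")
W = np.exp(-X**2 - P**2) / np.pi
dx, dp = x[1] - x[0], p[1] - p[0]
pos_marginal = W.sum(axis=1) * dp              # integrate over p
norm = W.sum() * dx * dp                       # integrate over x and p
psi2 = np.exp(-x**2) / np.sqrt(np.pi)          # exact |psi(x)|^2
print("normalization      :", round(norm, 6))
print("max marginal error :", np.abs(pos_marginal - psi2).max())
```

For this state W is everywhere positive; the negativity discussed above first appears for the excited states.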
In order to compute physical quantities, we need to first transform the quantum mechanical operators into the Wigner representation,

B_W(x, p) = ∫ dy ⟨x + y/2| B̂ |x − y/2⟩ e^{ipy/h̄}    (6.111)

where the W subscript denotes the "Wigner-ized" operator. This may sound grand; however, in practice it is actually quite simple for operators involving only position and momentum variables. For example, if operator A is a function of the position operator q̂, then

A_W(q) = ∫ dy ⟨q + y/2| A(q̂) |q − y/2⟩ e^{ipy/h̄} = A(q)    (6.112)
where Λ̂ is the Poisson bracket operator defined as

Λ̂ = (←∂/∂x)(→∂/∂p) − (←∂/∂p)(→∂/∂x)    (6.114)
FIGURE 6.7 Wigner distribution for the first three states of the harmonic oscillator.
We need to pay attention to the direction of the arrows in this last expression since they indicate the direction of operation for the partial derivative. For example,

A_W Λ̂ B_W = (∂A/∂x)(∂B/∂p) − (∂A/∂p)(∂B/∂x)    (6.115)

We can also easily see that

(A · B)_W = A_W exp(ih̄Λ̂/2) B_W = B_W exp(−ih̄Λ̂/2) A_W    (6.116)
This allows us to construct the Wigner transform of a commutator as

([A, B])_W = A_W (e^{ih̄Λ̂/2} − e^{−ih̄Λ̂/2}) B_W = 2i A_W sin(h̄Λ̂/2) B_W

Expanding the sine as a power series,

(2/h̄) sin(h̄Λ̂/2) = Λ̂ − (h̄^2/24) Λ̂^3 + · · ·    (6.119)
where we notice that the lowest-order term involving h̄ enters only in the second term and beyond, and the leading-order term is simply the classical Poisson bracket operator. Such expansions are extremely useful in evaluating the time evolution of the Wigner distribution:

∂W/∂t = −(i/h̄) ([H, ρ])_W
      = (2/h̄) H_c sin(h̄Λ̂/2) W    (6.120)
      = −iL_W W    (6.121)
where in the first equation it is assumed that the ∂/∂x operates only on the V(x) term and the ∂/∂p acts only on the W(x, p) term. Finally, there is an equivalent form given by Groenewold that reads58

∂W/∂t = −(p/m)(∂W/∂x) + (1/ih̄)[V(x + (ih̄/2)(∂/∂p)) − V(x − (ih̄/2)(∂/∂p))] W(x, p)    (6.123)
This last form is especially useful when V(x) can be expressed as a polynomial in x. Operationally, we perform the Taylor series expansion, then replace the x operator with x ± (ih̄/2) d/dp. For example, for the harmonic potential, the potential term yields mω^2 x ∂W/∂p, which is precisely what we get for the classical Liouville equation. Only at order V(x) ∝ a_3 x^3/3 do we begin to see quantum terms appearing in the equations of motion. For example, for the cubic potential,

∂W/∂t = −(p/m)(∂W/∂x) + a_3 x^2 (∂W/∂p) − (h̄^2/12) a_3 (∂^3W/∂p^3)
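The cubic result can be verified symbolically. The sketch below expands (x ± (ih̄/2)∂/∂p)³ acting on W term by term, using the fact that x and ∂/∂p commute:

```python
import sympy as sp

# Check the cubic-potential Wigner equation: for V(x) = a3*x**3/3 the
# Groenewold term (1/(i*hbar))*(V(x + i*hbar/2*d/dp) - V(x - i*hbar/2*d/dp))W
# should equal a3*x**2*dW/dp - (hbar**2/12)*a3*d3W/dp3.
x, p, hbar, a3 = sp.symbols("x p hbar a3", positive=True)
W = sp.Function("W")(p)
c = sp.I * hbar / 2

def shifted_cubed(sign):
    # (x + sign*c*d/dp)**3 W, expanded since x commutes with d/dp
    cc = sign * c
    return (x**3 * W + 3 * x**2 * cc * W.diff(p)
            + 3 * x * cc**2 * W.diff(p, 2) + cc**3 * W.diff(p, 3))

term = sp.expand((a3 / 3) * (shifted_cubed(+1) - shifted_cubed(-1)) / (sp.I * hbar))
target = a3 * x**2 * W.diff(p) - hbar**2 / 12 * a3 * W.diff(p, 3)
print(sp.simplify(term - target))   # 0
```

The even powers of the shift cancel in the difference, which is why only the classical term and the h̄² correction survive.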
Let us examine the various terms in Equations 6.122 and 6.123. The first term
on the right-hand side of the two equations comes from the kinetic energy operator
and, depending upon the context, is termed the “drift,” “streaming,” or “advection”
term. This term is also present in the classical Liouville equation (Equation 6.107).
In fact, simply expanding the potential term in powers of h̄ and then setting h̄ → 0
produces the classical Liouville equation. Thus, all quantum effects enter in through
the Wignerized potential, VW (x, p− p ), which is nonlocal in momentum and basically
redistributes the Wigner function along all possible momenta p for a given position
x. The rough picture here is that particles that have been scattered by the potential at
point x ± y/2 interfere with particles scattering at different points. In this way, we
sample over all possible pathways the particle can take.59
where the B_n and B_n† destroy and create an exciton on site n. We can write this in a basis as

H = Σ_n h̄ε_n |n⟩⟨n| + Σ_{n≠m} J(n − m) |n⟩⟨m|    (6.125)

where we have written the interaction operator J_nm as depending upon the distance between sites n and m. In this representation, the density matrix is given by ρ_nm = ⟨n|ρ|m⟩. At this point we change variables to relative and center of mass variables by writing
n = r − s/2   and   m = r + s/2

with ρ(r, s) = ⟨r − s/2|ρ|r + s/2⟩. ρ(r, 0) is a diagonal element of the density matrix and gives the probability for the exciton's being located at lattice position r at a given time. Consequently, ρ(r, s ≠ 0) carries the phase coherence information between
two sites separated by distance s on the lattice. In this representation the time evolution of the density matrix is given by

i ρ̇(r, s) = Σ_a J(a) (ρ(r + a/2, s − a) − ρ(r + a/2, s + a)) + (ε_{r−s/2} − ε_{r+s/2}) ρ(r, s)    (6.126)
where the a summation index runs over all displacements in the lattice. The second
term will vanish if all the sites have the same energy. We can now apply the Wigner
transformation to both sides of the equation.
ρ(r, s) = (1/√N) Σ_p W(r, p) e^{ips}    (6.127)
As above, the resulting Liouville equation has both kinetic and potential energy contributions:

Ẇ = T_W + V_W    (6.128)
For the potential term, we derive this by first taking the discrete sine transform of the site energy differences,

V(r, p) = (2/√N) Σ_s sin(ps) (E_{r−s/2} − E_{r+s/2})    (6.129)

and then taking the convolution with the Wigner-transformed density matrix,

V_W = (1/h̄) Σ_{p′} V(r, p − p′) W(r, p′)    (6.130)
Taking the continuum limit, we can write this in the Groenewold form as59

(1/h̄) Σ_{p′} V(r, p − p′) W(r, p′) = (i/h̄)[E(r + (ih̄/2)(∂/∂p)) − E(r − (ih̄/2)(∂/∂p))] W(r, p)    (6.131)
Here, again, we see that dynamics can be interpreted as that of a particle that scatters
onto some site where it receives a random momentum kick according to the momen-
tum distribution at that site. Quantum effects (that is, constructive and destructive
interference) occur when we sum over all scattering events.
The kinetic term arises from the hopping terms in our original Hamiltonian and
can be directly evaluated by inserting Equation 6.127 into Equation 6.126
T_W = 2 Σ_a J(a) sin(pa) W(r + a/2, p)    (6.132)
Taking the continuum limit for the first term requires us to write J(a) as a symmetric function, J(a) = J(−a). If it is sufficiently short ranged, then the first term becomes simply

2 Σ_a J(a) sin(pa) W(r + a/2, p) = (p/m*) (∂/∂r) W(r, p)

where m* is the effective mass of the exciton, m* = h̄^2/(2Jl^2), with l the lattice spacing.61
Furthermore, the Heisenberg equations of motion for the binary product B_n†B_m are given by

i (d/dt) B_n†B_m = (ε_n − ε_m) B_n†B_m + Σ_k (J_mk B_n†B_k − J_nk B_k†B_m) + Γ_nm^collision    (6.135)
This term represents all possible bimolecular collisions between excitons. Ignoring
this term, which is allowable in the limit of few excitations in the system, brings us
back to the expressions above.
Notice that this expression involves the product of B_m and B_m† with the W_n number operator. Consequently (and alas), we are again faced with a hierarchy of equations since we need to deduce the equations of motion for operator products and their expectation values. Various approximations are possible, the simplest being to factor all many-operator terms into single-operator terms, viz.,

⟨W_k B_n†B_m⟩ = ⟨B_k†B_k⟩ ⟨B_n†B_m⟩
where the first term represents scattering to site r from all other sites and the second represents scattering from site r to any other site, each with a momentum change of p − p′.
We can also make a couple of simplifications to the collisional term. First, we can invoke Boltzmann's Stosszahlansatz and assume that prior to a collision at time t, the two excitons were uncorrelated. If we take the hopping term as a momentum transfer, then the collision operator can be written as

Γ_boltz(r, p) = Σ J(p − p′, q) (W(r, p + q) W(r, p − q) · · ·
where Weq (r, p) is the equilibrium (stationary) distribution. Here, γ is the collision
frequency and it is assumed that the resulting momentum distribution after each
It is also possible to introduce the Pauli exclusion principle as a dynamical constraint on the system using the Dirac bracket method discussed earlier. Here, one appends to the Hamiltonian a series of constraints on the canonical variables such that the dynamics occurs on a surface defined by φ_n = B_n†B_n + B_nB_n† − 1 = 0. Thus, the constraint generates the collision term in Eq. (6.136).
ρ(k, Δ) = Tr[ρ e^{i(k x̂ + Δ p̂)}] = Tr[Dρ] = ⟨D⟩_ρ    (6.140)

where we introduce the last term to simplify our notation and to distinguish ρ(k, Δ) from ρ(x, x′). Since the density matrix is Hermitian, ρ(k, Δ) has the following symmetry: ρ*(k, Δ) = ρ(−k, −Δ).

The transformation is similar to the Wigner transform in that it involves the Fourier transform of the density matrix:

ρ(k, Δ) = ∫ dx e^{ikx} ρ(x + Δ/2, x − Δ/2)    (6.141)
For example, the first two moments of the normal distribution read

m = {μ, μ^2 + σ^2}    (6.146)

For a Gaussian distribution, all subsequent moments can be related to these first two moments.
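These moments can be generated directly by differentiating the characteristic function of the normal distribution, G(t) = exp(iμt − σ²t²/2):

```python
import sympy as sp

# Moments of the normal distribution from its characteristic function
#   G(t) = exp(i*mu*t - sigma**2*t**2/2),  m_n = (-i)**n * d^n G/dt^n |_{t=0}
t, mu, sigma = sp.symbols("t mu sigma", real=True)
G = sp.exp(sp.I * mu * t - sigma**2 * t**2 / 2)
m1 = sp.simplify((-sp.I) * G.diff(t).subs(t, 0))
m2 = sp.simplify((-sp.I) ** 2 * G.diff(t, 2).subs(t, 0))
print(m1, "|", m2)
```

This is exactly the operation that Equation 6.147 performs on ρ(k, Δ) to extract ⟨x^n p^m⟩.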
Likewise, for the Wigner function we can obtain

⟨x^n p^m⟩ = lim_{k,Δ→0} (−i)^{n+m} (∂^{n+m}/∂k^n ∂Δ^m) ρ(k, Δ)    (6.147)
which are the expectation values for the operator x^n p^m. Moreover, if we know the time derivative of ρ(k, Δ), we can derive the Heisenberg equations of motion for operators composed of x and p:

∂⟨x^n p^m⟩/∂t = lim_{k,Δ→0} (−i)^{n+m} (∂^{n+m}/∂k^n ∂Δ^m) (∂ρ(k, Δ)/∂t)    (6.148)
We can also use the characteristic functions to derive the cumulants of a distribution. These are related to the moments but include only the "connected" parts; in other words, they cannot be reduced into a sum of other moments. For example, the cumulants of the Gaussian distribution are simply the μ and σ specifying the center and width of the Gaussian function. Cumulants are given by the log-derivative of the characteristic function:

c_n = (−i)^n (1/G(t)) (∂^n G/∂t^n) |_{t=0}    (6.149)

Thus, the cumulants of the Wigner function are the log-derivatives of the ⟨D⟩_ρ characteristic function:

c_{n,m} = (−i)^{n+m} (1/⟨D⟩_ρ) (∂^{n+m} ⟨D⟩_ρ / ∂k^n ∂Δ^m) |_{k,Δ=0}    (6.150)
The advantage of the (k, Δ) representation is that one can make considerable use of commutation relations to simplify the equations of motion. For example, the D operator can be written as

D = e^{i(kx + Δp)} = e^{ikΔ/2} e^{ikx} e^{iΔp}    (6.151)
Differentiating with respect to k and Δ gives

∂D/∂k = i(Δ/2 + x̂) D    (6.152)

and

∂D/∂Δ = i(p̂ − k/2) D    (6.153)

which can be rearranged to

x̂ D = −(Δ/2 + i ∂/∂k) D    (6.154)

p̂ D = (k/2 − i ∂/∂Δ) D    (6.155)
For the harmonic oscillator, the equations of motion for the first moments then follow directly:

∂⟨x⟩/∂t = (1/m) lim_{k,Δ→0} (−i) ∂⟨D⟩_ρ/∂Δ = ⟨p⟩/m    (6.160)

∂⟨p⟩/∂t = −mω^2 lim_{k,Δ→0} (−i) ∂⟨D⟩_ρ/∂k = −mω^2 ⟨x⟩    (6.161)
which are what we expect to find for the Ehrenfest equations of motion for a harmonic
oscillator. In Problem 6.3 we derive more general equations of motion for a particle
in a potential.
Problem 6.2 Show that the Wigner distribution for the harmonic oscillator is given by

W(x, p) = ((−1)^n/π) e^{−2H(p,x)/ω} L_n(4H/ω)

where L_n is a Laguerre polynomial and H is the Hamiltonian for a classical oscillator.
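A numerical check of this formula (with h̄ = m = ω = 1 assumed) confirms both that each W_n is normalized and that the excited states go negative, as in Figure 6.7:

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

# W_n(x,p) = (-1)**n/pi * exp(-2H) * L_n(4H), H = (x**2 + p**2)/2
# (hbar = m = omega = 1 assumed).  Each state integrates to one;
# excited states go negative, signaling quantum interference.
def wigner_ho(n, x, p):
    H = 0.5 * (x**2 + p**2)
    return (-1) ** n / np.pi * np.exp(-2 * H) * Laguerre.basis(n)(4 * H)

x = np.linspace(-6, 6, 301)
X, P = np.meshgrid(x, x, indexing="ij")
dA = (x[1] - x[0]) ** 2
norms, mins = {}, {}
for n in range(3):
    Wn = wigner_ho(n, X, P)
    norms[n], mins[n] = Wn.sum() * dA, Wn.min()
    print(f"n = {n}:  norm = {norms[n]:+.4f},  min W = {mins[n]:+.4f}")
```

The n = 1 state reaches W = −1/π at the phase-space origin, the largest negativity allowed for any Wigner function in these units.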
Problem 6.3 Show that the relations given in Equations 6.154 to 6.157 are correct. Using these, derive the equations of motion for a particle with mass m in a general polynomial potential

V = Σ_{n=0}^∞ (x^n/n!) (d^n V/dx^n)|_{x=0}    (6.162)
As a hint, consider the following two commutators. The first is

[p̂, D] = e^{ikΔ/2} [p̂, e^{ikx} e^{iΔp̂}]    (6.163)
       = e^{ikΔ/2} [p̂, e^{ikx}] e^{iΔp̂}    (6.164)

Working in the coordinate representation with p̂ = −i d/dx,

[p̂, e^{ikx}] f = −i ((d/dx)(e^{ikx} f) − e^{ikx} (df/dx))    (6.165)
             = −i(ik) e^{ikx} f    (6.166)

thus, [p̂, D] = kD.
The second is a bit trickier since it involves moving x̂ past the p̂ in the exponent when evaluating [x̂, e^{iΔp̂}]. For this, expand the exponential:

[x̂, e^{iΔp̂}] = Σ_{n=0}^∞ ((iΔ)^n/n!) [x̂, p̂^n]    (6.167)

Using [x̂, p̂^n] = i n p̂^{n−1},

[x̂, e^{iΔp̂}] = −Δ Σ_{n=1}^∞ ((iΔ)^{n−1}/(n − 1)!) p̂^{n−1}    (6.169)
             = −Δ e^{iΔp̂}    (6.170)
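Both identities can be verified numerically on a grid, representing p̂ = −i d/dx spectrally and e^{iΔp̂} as a translation by Δ. The grid size and the Gaussian test function below are arbitrary choices:

```python
import numpy as np

# Verify [p, e^{i*k*x}] = k e^{i*k*x} and [x, e^{i*Delta*p}] = -Delta e^{i*Delta*p}
# (hbar = 1) on a periodic grid: p = -i d/dx acts spectrally, and
# e^{i*Delta*p} translates a function by Delta.
N, L = 256, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
kgrid = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi = np.exp(-x**2 / 2)                      # well-localized test function

def p_op(f):                                 # (-i d/dx) f via FFT
    return np.fft.ifft(kgrid * np.fft.fft(f))

k = 8 * 2 * np.pi / L                        # commensurate with the grid
phase = np.exp(1j * k * x)
err1 = np.abs(p_op(phase * psi) - phase * p_op(psi) - k * phase * psi).max()

s = 16                                       # translate by Delta = s*dx
Delta = s * dx
def translate(f):                            # e^{i*Delta*p} f = f(x + Delta)
    return np.roll(f, -s)
err2 = np.abs(x * translate(psi) - translate(x * psi) + Delta * translate(psi)).max()
print(f"commutator errors: {err1:.2e}, {err2:.2e}")
```

Choosing k commensurate with the grid keeps e^{ikx} periodic, so both commutators hold to machine precision.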
Problem 6.4 When a dissipative bath is included under a certain set of assumptions, the equations of motion for the density matrix can be written as

i ρ̇ = (1/2m)[p^2, ρ] + (mω^2/2)[x^2, ρ] − iΛ[x, [x, ρ]] + γ[x, {p, ρ}]    (6.171)

where Λ and γ are constants and {A, B} denotes the anticommutation relation: {A, B} = AB + BA.
ρ(k, Δ) = exp(−(c_1 k^2 + c_2 kΔ + c_3 Δ^2 + ic_4 k + ic_5 Δ + c_6))    (6.172)

ρ(k, Δ) = exp(−(c_1 k^2 + c_2 kΔ + c_3 Δ^2 + ic_4 k + ic_5 Δ + c_6))    (6.173)
derive the corresponding Wigner function and make either contour or three-dimensional plots of W(x, p) at the same time steps as in your plot of ρ(k, Δ). How does this compare with how you would expect a classical system to behave under dissipative conditions?
Partial Solutions:
2. You should arrive at the following equations of motion:

ċ_1 = c_2/m    (6.174)

ċ_2 = 2c_3/m − 2mω^2 c_1 − γ c_2    (6.175)

ċ_3 = Λ − mω^2 c_2 − 2γ c_3    (6.176)

ċ_4 = c_5/m    (6.177)

ċ_5 = −mω^2 c_4 − γ c_5    (6.178)

ċ_6 = 0    (6.179)
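A quick RK4 integration of Equations 6.174 to 6.179 (the constants below, with m = ω = 1, are assumed for illustration) confirms that the coefficients relax to the stationary point ċ_n = 0:

```python
import numpy as np

# Integrate Eqs 6.174-6.179 and check relaxation to the fixed point
# c2 = 0, c3 = Lam/(2*gamma), c1 = c3/(m**2 * w**2), c4 = c5 = 0.
m, w, gam, Lam = 1.0, 1.0, 0.5, 0.25      # assumed constants

def rhs(c):
    c1, c2, c3, c4, c5, c6 = c
    return np.array([
        c2 / m,                                     # Eq 6.174
        2 * c3 / m - 2 * m * w**2 * c1 - gam * c2,  # Eq 6.175
        Lam - m * w**2 * c2 - 2 * gam * c3,         # Eq 6.176
        c5 / m,                                     # Eq 6.177
        -m * w**2 * c4 - gam * c5,                  # Eq 6.178
        0.0,                                        # Eq 6.179
    ])

c = np.array([1.0, 0.0, 0.0, 0.3, 0.0, 0.0])        # arbitrary initial values
dt = 0.01
for _ in range(10_000):                             # fourth-order Runge-Kutta
    k1 = rhs(c); k2 = rhs(c + dt * k1 / 2)
    k3 = rhs(c + dt * k2 / 2); k4 = rhs(c + dt * k3)
    c += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
print("c(t=100) =", np.round(c, 4))
```

The Gaussian widths (c_1, c_3) approach finite thermal values while the means, encoded in c_4 and c_5, decay to zero, in line with the classical Brownian results earlier in the chapter.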
Problem 6.5 Consider the time evolution of the Wigner function for a free particle,

∂W/∂t + (p/m) ∇_r W = 0

Using the (k, Δ) representation, derive and solve the equations of motion for ρ(k, Δ) assuming that at time t = 0 the initial Wigner function is given by W(x, p; 0). Using ρ(k, Δ; t), derive expressions for ⟨(x − ⟨x⟩)^2⟩(t) and ⟨(p − ⟨p⟩)^2⟩(t). Are these the same as you would expect for the time evolution of a free particle using the Schrödinger equation?
Problem 6.6 Let ρ be the density operator for an arbitrary system where |χ_l⟩ and π_l are its eigenvectors and eigenvalues. Write ρ and ρ^2 in terms of the |χ_l⟩ and π_l. What do the matrices representing these operators look like in the {|χ_l⟩} basis—first in the case where ρ describes a pure state and second where ρ describes a mixed state? Begin by showing that in the pure case, ρ has a single nonzero diagonal element equal to 1, while for a statistical mixture, it has several diagonal elements between 0 and 1. Show that ρ corresponds to a pure case if and only if tr[ρ^2] = 1.
Problem 6.7 Consider a system with density matrix ρ evolving under Hamiltonian H(t). Show that tr[ρ^2(t)] does not change in time. Can the system evolve to be successively a pure state and a statistical mixture of states?
Problem 6.8 Consider a global system whose state space E(1) ⊗ E(2) consists of two subspaces (1) and (2). A and B denote operators acting in the state space E(1) ⊗ E(2). Show that the partial traces tr_1(AB) and tr_1(BA) are equal only if A or B acts only in space E(1); that is, A or B can be written as A = A(1) ⊗ I(2) or B = B(1) ⊗ I(2).

Note: tr_1[·] means that you take the trace ONLY over space (2). For example, take the case where we have states |a, i⟩ spanning E(1) ⊗ E(2). Then

(tr_1[A])_{aa′} = Σ_i A_{ai,a′i}
A photoexcited molecule is rarely a stable species. The fact that we have just pumped
in excess of 1 to 3 eV of energy into a small molecule through the interaction with a
visible or UV photon means that this energy is likely to be rapidly dissipated to other
degrees of freedom, to phonons in the form of heat, to other electronic states of the
system via intersystem crossing or nonradiative decay, or through emission of photons.
Typically we think of photoemission as leading to some observable spectroscopic
signal. However, if in fact there are neighboring molecules that can absorb the emitted
photon, the excitation that started off localized on one molecule may be transferred
reversibly or irreversibly to the next. Typically, this is an irreversible process since
the time scale for emission is roughly a thousandfold slower than the time scale for
intramolecular vibrational relaxation and reorganization of the surrounding media.
Thus, at each energy transfer event, some energy is lost to heat.
Figure 7.1 shows the various energy transfer and relaxation events that can occur
following photo-excitation.
In this chapter, we explore the basis for excitation energy transfer between
molecules. We begin with a discussion of irreversibility in a quantum mechanical
system. We shall leave the molecular-level details of what causes this irreversibility
for later, focusing our attention upon a phenomenological treatment in which we
introduce in a rather ad hoc way the requisite decay times. Following this, we will
consider how to compute the exciton coupling matrix element between molecular
species using modern quantum chemical approaches.
FIGURE 7.1 Possible photochemical pathways following excitation. The dashed lines indicate non-radiative processes whereas the solid lines indicate radiative ones. A = excitation of the donor molecule from its singlet ground state to one of its singlet excited states, S1 and S2 (IC = internal conversion, F = fluorescence, P = phosphorescence). The FRET process corresponds to the transfer of the excitation from one molecule to the next.
where R is a vector extending from the charge center of A to the charge center of B.
Setting this to be the z axis, we can write M as a function of the angles (see Figure 7.2)
For randomly oriented transition dipoles, the orientational average of the angular factor is

⟨χ^2⟩ = 2/3    (7.4)

so that the orientationally averaged coupling is

⟨M_ab^2⟩ = (2/3) |p_A|^2 |p_B|^2 / R^6    (7.5)
The transition dipoles can be replaced by their oscillator strengths, viz.,

p_A^2 = (h̄e^2/2mω) f_A    (7.6)

Assuming only radiative transitions are allowed, the lifetime of A is given by

1/τ_A = f_A/τ_cl    (7.7)

where, as we discussed previously, τ_cl is the decay time for a classical electronic oscillator, τ_cl = 3mc^3/(2e^2ω^2).
Now, consider the ratio of the transfer rate W to the radiative rate:

W τ_A = (2/h̄^2) |M_ab|^2 T_2 τ_A    (7.8)

Inserting our expression for M_ab from above,

W τ_A = (3/8π) (λ/R)^6 (T_2/τ_cl) f_B    (7.9)
It is an important practice to learn to insert numbers into equations such as this in order to determine their range of validity for molecular-scale systems. For example, if the radiative linewidth is on the order of 100 cm−1, then T_2 ≈ 50 fs. Taking τ_cl ≈ 10 ns, then for f_B = 1 the characteristic distance for which Wτ_A = 1 is R_o ≈ 0.02λ. For typical molecules with electronic transitions in the UV/visible region, λ ≈ 300 nm, so R_o ≈ 6 nm or 60 Å. This is consistent with experimental values of around 50 Å for most molecular systems. For small aromatic rings, a ≈ 10 Å, so Wτ_A ≈ 1 for molecules separated by about 10 molecular radii. Notice that this estimate is independent of the oscillator strength of A.
At what distance can the interaction be considered strong? Consider the distance for which

(2/h̄^2) |V_ab|^2 T_2 τ_B ≈ 1    (7.10)

From what we have just seen above, it is the distance for which

R ≈ R_o (τ_B/τ_A)^{1/6}    (7.11)

If the two molecules are identical or even similar, τ_B ≈ τ_A, and only when R ≈ R_o does the interaction become sufficiently strong.
FIGURE 7.3 Resonant energy transfer between donor (a) and acceptor (b) energy levels.
From the discussion above, it should be clear that if we can measure the radiative transfer rate between two distinct species, we have the means of measuring the instantaneous distance of separation between the two, provided the transfer rate is fast compared to the time scale for the relative motion between A and B. Because of this and the advent of single-molecule spectroscopic techniques that can selectively excite and collect photons from what amounts to single chromophores, we can effectively monitor experimentally the dynamics of complex molecular reactions through a technique termed "Förster resonant energy transfer," or FRET. A schematic of this process is shown in Figure 7.3.

The golden-rule transfer rate is

dW = (2π/h̄) |V_ab|^2 δ(ΔE_a − ΔE_b)    (7.12)

where ΔE_a − ΔE_b = (E_a′ − E_a) − (E_b′ − E_b) is the difference in energies as shown in Figure 7.3. This ensures that energy is conserved upon performing the final integration over energies.
Following Förster's approach, we write the initial and final wave functions as products of electronic and vibrational factors, where the φ's are the ground and excited electronic states of the system and the χ's are vibrational wave functions. We assume here that the Born–Oppenheimer approximation is valid so that the χ's represent vibrational motion on a given potential energy surface associated with either the ground or excited state of either molecule. We denote the energy origin of each surface with E_a, E_a′, and so forth. Taking these as our states, we can write the coupling matrix element as

V_12 = ⟨1|V|2⟩    (7.15)
The terms in square brackets are directly related to experimentally observable quantities: the first is the normalized emission spectrum of the donor (A) and the second is the normalized absorption spectrum of the acceptor (B):

(1/h̄) ∫ p_a^2 g(E_a) S_a(E_a, E_a − E) dE_a = p_a^2 G_a(ω)    (7.21)

and

(1/h̄) ∫ p_b^2 g(E_b) S_b(E_b, E_b − E) dE_b = p_b^2 G_b(ω)    (7.22)

where ∫ G_{a,b}(ω) dω = 1.
There is a well-known relation between the lifetime τ, the absorption index μ(ω), and the transition dipole moment p_a:

F_a(ω) = (4ω^3/3h̄c^3) p_a^2 τ_a G_a(ω)    (7.23)

μ_b(ω) = (4π^2ω/3h̄c) N_b p_b^2 G_b(ω)    (7.24)

where F_a(ω) is the normalized radiation spectrum of A given as the number of quanta per unit frequency range. The second, μ_b(ω), is the absorption coefficient as per the Beer–Lambert relation,

I(z) = I_0 e^{−μ_b(ω) z}    (7.25)

where N_b is the number of acceptor molecules per cm^3 and z is the thickness of the sample. Finally, we arrive at the well-known result that

W = (9χ^2 c^4 / 8π N_b τ_a R^6) ∫ F_a(ω) μ_b(ω) ω^{−4} dω    (7.26)
8π Nb τa R 6
whereby the rate is obtained by taking the overlap integral between the emission
spectrum of A and the absorption spectrum of B, multipled by the appropriate scal-
ing factors. The advantage of this formula is that both spectra can be determined
independently by simple spectroscopic techniques.
The first general requirement for efficient energy transfer is a good degree of spec-
tral overlap between the emission spectrum of the donor species and the absorption
spectrum of the acceptor species. This is determined by the integral in Equation 7.26,
which is often written as J :
J= Fa (ω)μb (ω)ω−4 dω
Herein, though, lies one of the experimental paradoxes of FRET. The spectral profiles
of the FRET pair cannot be so separated that they have poor overlap, yet we want to
avoid “cross-talk” between the two imaging channels—that is, ideally the donor emis-
sion filter set must collect only the light from the donor and none from the acceptor,
and vice versa for the acceptor. In practice, this can be somewhat realized by employ-
ing short bandpass filters that collect light from only the shorter-wavelength side of
the donor emission and the longer-wavelength side of the acceptor emission. This can
limit somewhat the photon flux from both donor and acceptor during a typical expo-
sure, especially when we bear in mind that these measurements are best performed
under conditions of reduced excitation power, such that we do not accelerate the rates
of bleaching.
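Given digitized donor-emission and acceptor-absorption spectra, J is a single numerical quadrature. A minimal sketch — the Gaussian line shapes, peak positions, and frequency grid below are invented stand-ins for measured data, not values from the text:

```python
import numpy as np

# Hypothetical line shapes standing in for measured spectra (assumptions).
omega = np.linspace(1.5, 3.5, 2000)          # frequency grid (arbitrary units)
domega = omega[1] - omega[0]
F_a  = np.exp(-((omega - 2.2) / 0.15)**2)    # donor emission profile
mu_b = np.exp(-((omega - 2.4) / 0.15)**2)    # acceptor absorption profile

# Normalize the donor emission to unit area, as in Eq. 7.21
F_a /= np.sum(F_a) * domega

# Spectral overlap integral J = integral of F_a(w) mu_b(w) w^-4 dw
J = np.sum(F_a * mu_b * omega**-4) * domega
print(J)
```

Shifting the acceptor absorption away from the donor emission shrinks J, and with it the transfer rate.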
Secondly, the rate scales as 1/R⁶ due to the dipole–dipole nature of the coupling matrix element. Consequently, we can define the distance at which the transfer rate W equals the radiative rate 1/τ_a by

R_o⁶ = (9χ²c⁴/8π N_b) J   (7.27)

Thus,

W = (1/τ_a)(R_o/R)⁶

where R_o is the “Förster radius.” At this distance, energy transfer is 50% efficient.
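The distance dependence is easy to explore numerically. The sketch below assumes illustrative values of R_o and τ_a (not from the text) and uses the standard efficiency expression E = W/(W + 1/τ_a), which indeed gives E = 1/2 at R = R_o:

```python
# Transfer rate and efficiency versus donor-acceptor distance,
# W = (1/tau_a)(R0/R)^6; the numbers are illustrative assumptions.
tau_a = 4.0e-9   # donor radiative lifetime, s (assumed)
R0    = 5.0      # Foerster radius, nm (assumed)

def transfer_rate(R):
    """Foerster transfer rate W for separation R (same units as R0)."""
    return (1.0 / tau_a) * (R0 / R)**6

def efficiency(R):
    """Fraction of excitations transferred: W / (W + 1/tau_a)."""
    W = transfer_rate(R)
    return W / (W + 1.0 / tau_a)

print(efficiency(5.0))   # 0.5 at R = R0
```

The steep R⁻⁶ falloff is what makes FRET useful as a “spectroscopic ruler” on the 1–10 nm scale.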
Often the FRET technique is combined with imaging microscopy techniques to
monitor the proximity of two fluorophores. Since fluorophores can be employed to
specifically label biomolecules and the distance condition for FRET is of the order
of the diameter of most biomolecules, FRET is often used to determine when and
where two or more biomolecules, often proteins, interact within their physiological
surroundings. Since energy transfer occurs over distances of 1–10 nm, a FRET signal corresponding to a particular location within a microscope image provides an additional distance accuracy surpassing the optical resolution (≈ 0.25 μm) of the light microscope.
Furthermore, the transfer rate depends critically upon the relative orientation of
the two transition dipoles. Above, we have expressed χ in terms of the relative dihe-
dral angles between the two dipoles. Furthermore, assuming the donor and acceptor
species are randomly oriented, χ 2 = 2/3. However, if the two molecules are tethered
to a common backbone, the instantaneous orientation factor χ 2 will reflect the instan-
taneous relative orientation of the two dipoles and as such may provide a sensitive
probe of the dynamics of the backbone provided the time scale of the motion is long
compared with the experimental time scale.
Finally, we can express all these factors in a general equation in spectroscopic units (per mol):

W = 8.785 × 10⁻²³ (χ²/n⁴)(J/τ_o R⁶)

where χ² is the orientation factor, n the refractive index of the medium, τ_o the radiative lifetime of the donor, R the distance (in cm) between the donor and acceptor, and J the spectral overlap (in coherent units of cm⁶ mol⁻¹) between the donor fluorescence spectrum and the acceptor absorbance spectrum. We can also write the Förster radius (in cm) as

R_o⁶ = 8.785 × 10⁻⁵ (χ² Φ_D J)/n⁴

where Φ_D is the quantum efficiency of the donor. The efficiency of the transfer may be evaluated by comparing the fluorescence lifetime of the donor in the presence (τ_a) and absence (τ_a°) of the acceptor, or by comparing the quantum yield in the presence (Φ_D) and absence (Φ_D°) of the acceptor:

E = 1 − τ_a/τ_a° = 1 − Φ_D/Φ_D°
FIGURE 7.4 (a) Model open/closed loop structures for DNA hairpin. (b) Equilibrium thermal
melting curves for the DNA hairpin loop. The closed-to-open transition was monitored by the
ratio of fluorescence intensity of TMR in a double-labeled sample to that of a TMR-only-
labeled sample. (a) 10 mM Tris/1 mM EDTA (pH 7.5) and 100 mM NaCl; (b) 10 mM Tris and
20 mM MgCl2 . Solid lines are fits to a two-state model (from Ref. 69).
Because the energy transfer rate is highly sensitive to the distance of separation
between the donor and acceptor pairs, we can use resonant energy transfer to accurately measure inter- and intramolecular distances. Over 30 years ago, Haas et al.
showed that FRET techniques could be used to monitor the end-to-end chain diffu-
sion of a tagged biopolymer over a range of 2–10 nm.68 In Figure 7.4 we show an
example of FRET measurements that can be used to monitor the melting of a DNA
hairpin loop where fluorescence from the donor is quenched by the proximity of a
tagged acceptor. Here, in the work by Wallace et al.,69 FRET techniques were used to
determine the thermodynamic parameters of the closed-to-open transition of a model
DNA oligomer in which the ends were terminated by the dye molecules carboxyte-
tramethylrhodamine (TMR), which is a fluorescence donor, and indodicarbocyanine
(Cy5), here the fluorescence acceptor. DNA hairpin-loop structures fluctuate between
different conformations and are involved in various biological functions including
gene expression and regulation.70,71 Loops have also been used in biotechnology as
biosensors and molecular beacons.72
In Figure 7.4b are the equilibrium melting curves for a model DNA hairpin as
determined by comparing the FRET intensities for the donor-acceptor labeled sam-
ple to that of the donor-only labeled sample. The high-temperature/high-fluorescence
intensity limit corresponds to the case where the two ends are farthest apart. Conse-
quently, the fluorescence from the TMR is not quenched or transferred to the Cy5 as
efficiently as in the low-temperature case.
In many systems, however, the separation between the donor and acceptor units is oftentimes comparable to the actual size of the molecule. In such cases, the Förster approach is incapable of providing an accurate estimate of the energy transfer rate. The problem stems from the fact that at sufficiently short ranges, the donor molecule “feels” some segments of the acceptor species more strongly than others. Hence,
one needs to account for the inhomogeneities in the transition densities about the
donor and acceptor.
One improvement on the Förster scheme was proposed by Beenken and Pullerits73 and given much more rigorous justification by Barford74 whereby the total transition dipole moment for a polymer chain is projected onto individual monomeric units and the total interaction is summed as a line-dipole:

M = Σ_ij (1/R_ij³)[ p_Ai · p_Bj − (3/R_ij²)(p_Ai · R_ij)(p_Bj · R_ij) ]   (7.28)

where the p_Ai and p_Bj are fractional transition dipoles that obey the sum rule

p_A = Σ_i p_Ai
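A small numerical experiment shows why the line-dipole sum matters. The chain length, monomer count, and unit transition dipole below are arbitrary assumptions for illustration, not the polyfluorene parameters of Ref. 73:

```python
import numpy as np

def dipole_coupling(p1, r1, p2, r2):
    """Dipole-dipole interaction (1/R^3)[p1.p2 - (3/R^2)(p1.R)(p2.R)], Eq. 7.28."""
    R = r2 - r1
    d = np.linalg.norm(R)
    return (p1 @ p2) / d**3 - 3.0 * (p1 @ R) * (p2 @ R) / d**5

def line_dipole(sites_a, sites_b, p_a, p_b):
    """Sum fractional-dipole couplings; p_Ai = p_A/n satisfies the sum rule."""
    n_a, n_b = len(sites_a), len(sites_b)
    return sum(dipole_coupling(p_a / n_a, ra, p_b / n_b, rb)
               for ra in sites_a for rb in sites_b)

L, n = 60.0, 16                               # chain length (Angstrom), monomers
x = np.linspace(-L / 2, L / 2, n)
p = np.array([1.0, 0.0, 0.0])                 # total transition dipole along chain

ratios = {}
for R in (10.0, 500.0):                       # face-to-face separations (Angstrom)
    sites_a = np.column_stack([x, np.zeros(n), np.zeros(n)])
    sites_b = np.column_stack([x, np.full(n, R), np.zeros(n)])
    V_line = line_dipole(sites_a, sites_b, p, p)
    V_point = dipole_coupling(p, np.zeros(3), p, np.array([0.0, R, 0.0]))
    ratios[R] = V_point / V_line
print(ratios)
```

For chains much longer than their separation, the point-dipole value overshoots the line-dipole sum by more than an order of magnitude, while the two converge once the separation greatly exceeds the chain length.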
As seen in Figure 7.5, the line-dipole approach does a far better job of approaching the
Coulomb coupling limit than the point-dipole approximation for linear polymers. Only
when the distance of separation between the charge centers of the two chains is slightly
larger than the actual chain lengths (in this case of 64 Å) do the three approaches agree.
Notice, also, that the point-dipole approach consistently overestimates the coupling.
For typical packing distances of R ≈ 4–10 Å, the point-dipole approach can be as much as 2 to 4 orders of magnitude too large. For parallel polyene chains, it can be shown analytically within a plane-wave approximation of the excitonic wave functions that the Coulomb coupling integral between donor and acceptor species V_DA scales as the chain length L when L is smaller than the separation distance and as V_DA ∝ 1/L when L is larger than their separation.74
The scaling of VD A ∝ L for short chain lengths is a reflection of the fact that
at these length scales the point-dipole approximation may be applied to the entire
chain, implying that VD A ∝ L. Similarly, the scaling of VD A ∝ L −1 for collinear
chains in the plane-wave approximation is easy to understand for chain lengths that
are large compared to their separation. In this case, the exciton dipoles are uniformly
distributed along both chains of length L. As a result, the double line integral of r −3
yields the L −1 scaling.74
The scaling of VD A with L for long parallel chains is somewhat less intuitive
since it implies that the probability for exciton transfer between neighboring chains is
FIGURE 7.5 (a) Line-dipole approach for computing couplings based upon a local transition dipole approximation between two polyfluorene oligomers with slightly different conformations. Bold arrows are the total transition dipole moment for each polymer chain while the small arrows indicate the projection of the total moment onto the individual repeating units. (b) Excitonic coupling V_DA for the lowest singlet excited states between two transoid sedecithiophene oligomers in a skew-line arrangement comparing the point-dipole (dotted), the line-dipole (dashed), and Coulomb integral (solid) approaches. The squares indicate the half-splitting energies from ZINDO calculations of the dimer. The arrow at 64 Å indicates the length of a single 16-ring sedecithiophene chain (figure from Ref. 73).
a decreasing function of the chain length. The scaling can be understood in one of two
ways. First, if the distance of separation between the two polyenes, R, is on the order
of a monomer length, as in the case of π stacked polyenes, VD A becomes a periodic
function of the relative alignment (or shift) of the two chains. For polyene-polyene
dimers, one has “in-phase” or “out of phase” stacking configurations depending upon
whether or not the C=C bonds on one chain are aligned with the C=C bonds on the
other polyacetylene chain. This results in a modulation of VD A as the two chains
are displaced relative to each other. As the chains become farther and farther apart,
this periodic variation vanishes due to interference effects from increasingly longer-
ranged local transition densities. Ultimately, the coupling integral will vanish in the
asymptotic limit even for chain separations greater than a few monomers. Alternatively, one can imagine that as two finite-length dipoles slide longitudinally relative to each other, the sign of the dipole–dipole coupling changes once cos θ = 1/√3. Consequently, any small variation in the alignment results in a vanishing of V_DA as L becomes large.
The orientation factor is given by

κ = μ̂_D · μ̂_A − 3(μ̂_D · n̂)(μ̂_A · n̂)

where μ̂_{D,A} is the unit vector giving the direction of the transition dipole moment for the donor or acceptor species and n̂ is a unit vector pointing from the charge center of the donor to the charge center of the acceptor.
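The isotropic average ⟨κ²⟩ = 2/3 quoted earlier is easy to verify by Monte Carlo sampling of the three unit vectors; the sample size and random seed below are arbitrary:

```python
import numpy as np

# Monte Carlo check of <kappa^2> = 2/3 for randomly oriented dipoles,
# with kappa = mu_D.mu_A - 3(mu_D.n)(mu_A.n).
rng = np.random.default_rng(1)

def random_unit(n):
    """n uniformly distributed unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_samp = 200_000
mu_D, mu_A, n_hat = random_unit(n_samp), random_unit(n_samp), random_unit(n_samp)

kappa = (np.sum(mu_D * mu_A, axis=1)
         - 3.0 * np.sum(mu_D * n_hat, axis=1) * np.sum(mu_A * n_hat, axis=1))
kappa_sq = np.mean(kappa**2)
print(kappa_sq)   # close to 2/3
```

For tethered fluorophores the sampled orientations are not isotropic, and the distribution of κ² (which ranges from 0 to 4) becomes a probe of the backbone dynamics, as discussed above.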
As noted, this approximation is no longer valid when R ≈ a, that is, when
the donor and acceptors are within a few molecular radii. At short range, higher-
order multipoles must be taken into account to properly describe the charge density
associated with the transition. Secondly, we have ignored the direct overlap between
the wave functions on each molecule. Consequently, the exchange interaction must
also be included once the two molecules become very close. In fact, if the two species
are too close, the assumption that the two molecules are “independent” is simply too
severe and one should really use a full quantum chemical treatment for the whole
system.
The most robust approach aside from a full quantum chemical treatment is to compute the Coulomb matrix element directly from the donor and acceptor wave functions:75

V_DA ≈ ⟨D*A| e²/|r_a − r_d| |DA*⟩   (7.29)
where ra denotes the coordinates of electrons associated with the acceptor molecule
and rd the coordinates for the electrons associated with the donor molecule. Under
this assumption, the above integral can be recast as an integral over two densities,
M_D(r) = ⟨r|D⟩⟨D*|r⟩   (7.30)

and

M_A(r) = ⟨r|A⟩⟨A*|r⟩   (7.31)

where |D⟩⟨D*| is the excitation operator (or projection operator) constructed by taking the outer product between the ground- and excited-state wave functions of the donor molecule (integrating over the spin coordinate) and likewise for the acceptor
molecule. Both of these quantities can be computed using separate excited-state quan-
tum chemical calculations involving the donor and acceptor species. The advantage,
then, is that the accuracy of the exciton-exciton coupling is determined entirely by the
accuracy of the quantum chemical approach used in determining the excited states of
the donor and acceptor molecules.
Numerically, this is implemented by approximating the transition densities as

M_D(ijk) = ∫ ds ∫_{x_i}^{x_i+δx} dx ∫_{y_j}^{y_j+δy} dy ∫_{z_k}^{z_k+δz} dz ⟨r|D⟩⟨D*|r⟩   (7.32)

where the spatial integration is only over a small “voxel” or volume cell of dimension {δx, δy, δz} and ∫ds denotes integration over the spin coordinate. Such volume renderings are very useful for analysis by external programs since they are essentially independent of the choice of basis functions used in the quantum chemical routines used to generate the data. The final integral is constructed by taking

V_DA = ∫ dr_D ∫ dr_A M_D(r_D) M_A(r_A)/|r_A − r_D|   (7.33)

where the integrals are over the full three-dimensional volume. The de facto data
where the integrals are over the full three-dimensional volume. The de facto data
format for the transition densities is the format used by the Gaussian quantum chemical
code.76 This format is also used by the Orca code77 and Qchem.
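The discretized form of Equation 7.33 is a double sum over voxel pairs. The sketch below uses made-up spherical “densities” on a coarse grid purely to exercise that machinery — real transition density cubes come from a quantum chemical code and integrate to approximately zero, unlike these all-positive stand-ins:

```python
import numpy as np

# Toy voxel-pair evaluation of V_DA = sum_ij M_D(i) M_A(j) / |r_i - r_j|.
def cube_density(points, center, width):
    """Normalized Gaussian 'density' sampled on voxel centers (a stand-in)."""
    r2 = np.sum((points - center)**2, axis=1)
    M = np.exp(-r2 / width**2)
    return M / M.sum()                      # cube elements sum to 1 here

x = np.linspace(-2.0, 2.0, 9)               # coarse cubic grid (Angstrom)
grid = np.array([[xi, yi, zi] for xi in x for yi in x for zi in x])

r_D = grid                                  # donor voxels at the origin
r_A = grid + np.array([8.0, 0.0, 0.0])      # acceptor voxels 8 A away
M_D = cube_density(r_D, np.zeros(3), 1.0)
M_A = cube_density(r_A, np.array([8.0, 0.0, 0.0]), 1.0)

dist = np.linalg.norm(r_D[:, None, :] - r_A[None, :, :], axis=-1)
V_DA = np.sum(M_D[:, None] * M_A[None, :] / dist)
print(V_DA)   # near 1/8 for these spherical unit-sum densities
```

Because the two toy densities are spherical, non-overlapping, and sum to one, the double sum lands very close to 1/R = 1/8, which is a convenient correctness check on the voxel bookkeeping.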
FIGURE 7.6 Transition densities for the S_o → S_1 excited states of the four DNA bases: adenine, thymine, guanine, and cytosine (from Ref. 78).
FIGURE 7.7 Comparison between point-dipole approximation (left) and exact (numerical)
evaluation (right) of coupling between two DNA bases (from Ref. 78).
For the stacking and pairing distances corresponding to the idealized B-DNA geometry, the coupling elements calculated with the point-dipole approximation result in several-fold larger absolute values compared with the corresponding values calculated using the transition density cube method. The largest differences
between the two methods are obtained for the couplings between the π-stacked
adenines. For the idealized B-DNA geometry, the coupling between two adenines
located on the same strand calculated using point-dipole approximation, 872 cm−1 ,
is more than five fold larger compared with the value obtained using transition den-
sity cube, 161 cm−1 . The differences in the calculated couplings using the same
two methods for two stacked thymines are much smaller. For this base pair, the
Coulombic coupling calculated using point-dipole approximation is equal to ap-
proximately 230 cm−1 , more than twice the value of 101 cm−1 obtained with tran-
sition density cube. Bouvier et al.79 reported the magnitudes of Coulombic cou-
pling calculated using atomic transition charges model.80 The corresponding values
for the intrastrand nearest neighbors in a standard B-DNA geometry are 170 and
217 cm−1 for the lowest energy π π ∗ transitions of adenine and thymine, respec-
tively. The absolute values of the coupling elements between the second-nearest
neighbors located on the same strand are much smaller. At the point-dipole level
of approximation, the coupling between the two adenines is only 57 cm−1 com-
pared with 9 cm−1 calculated for the same base pairs using transition density cubes.
The coupling between the two thymine bases on the same strand is even smaller—
approximately 3 and 1 cm−1 for point-dipole approximation and transition density
cube methods, respectively. We conclude then that while the point-dipole approximation provides a simple and robust way of estimating the electronic coupling between chromophores that are well separated, the simple approximation decidedly breaks down once the donor and acceptor species are brought into close proximity.
SUGGESTED READING
1. Quantum Mechanics in Chemistry, G. C. Schatz and M. A. Ratner (Mineola,
NY: Dover Books, 2002).
2. Principles of Nonlinear Optics and Spectroscopy, S. Mukamel (Oxford:
Oxford University Press, 1995).
3. Optical Resonance and Two-level Atoms, L. Allen and J. H. Eberly (Mineola,
NY: Dover Books, 1974).
4. Principles of Nuclear Magnetism, A. Abragam (Oxford: Oxford University
Press, 1961).
5. The Quantum Theory of Light, R. Loudon (Oxford: Oxford University
Press, 1973).
6. Laser Theory, H. Haken, Handbuch der Physik, Vol. XXV/2 (Springer-
Verlag, Berlin, 1970).
7. Fundamentals of Quantum Electronics, R. H. Pantell and H. E. Puthoff
(New York: Wiley, 1969).
8 Electronic Structure of Conjugated Systems

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be solvable

Paul Dirac

. . . that is solvable by a person armed only with pencil and paper.
FIGURE 8.1 (Top) Chemical structure of betacarotene. The conjugated domain is indicated
in bold. LUMO (middle) and HOMO (bottom) orbitals of betacarotene superimposed on its
3D structure.
For the orbitals energetically nearest the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), as well as the first few excited electronic states, this is a fairly good approximation.
In this chapter we will develop a description of the π electronic structure of conjugated organic systems. We will start with a simple free-electron model and finish with a brief description of modern quantum chemical techniques. A simple model for understanding this trend is the “free-electron model,” where we assume that the electrons within the π bonding network are more or less free particles and we ignore any electron–electron interaction. If the average C--C bond length is a from carbon center to carbon center, then an electron within the π orbital of the C_N H_{N+2} polyene is confined to a “box” of length L. From elementary quantum mechanics,
FIGURE 8.2 Variation of the optical energy gap with the number of repeat units for polythiophene and polyacetylene oligomers as computed using various semiempirical models (SSH, Hückel).
E_n = h̄²π²n²/(2m_e L²)   (8.2)

since each C atom contributes one electron to the π system. So for a system with k C=C double bonds, we have a total of 2k electrons, with the n = k level being the highest occupied and the n = k + 1 level being the lowest unoccupied. Assuming the optical transitions are between these two levels,

ΔE = (2k + 1) h̄²π²/(2m_e L²)   (8.3)

The length of the “box” is actually somewhat arbitrary since the π orbital extends a bit beyond the terminal C atoms, say, 1/2 a C--C bond length (a = 1.4 Å), in which case we get a length of L = (2k + 1)a. Thus, the transition energy in wavenumbers becomes

ν̃ = ΔE/hc = h/(8m_e c a²(2k + 1))   (8.4)
TABLE 8.1
Electronic Spectra of Linear Polyenes versus Number of C=C Double Bonds

k     ν_free (cm⁻¹)   ν_huck (cm⁻¹)   ν_obs (cm⁻¹)
1     51333.3         63009.7         62000
2     30800.0         45119.1         46000
3     22000.0         37016.5         38000
4     17111.1         32438.3         33000
5     14000.0         29503.1         30000
6     11846.2         27463.0         27500
7     10266.7         25963.4         26000
8     9058.82         24814.9         24000
10    7333.33         23172.0         22000
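The ν_free column of Table 8.1 follows directly from Equations 8.2–8.3 with L = (2k + 1)a. A quick check (CODATA constants, a = 1.4 Å; small differences from the table reflect rounding of the constants used there):

```python
import math

# Free-electron ("particle in a box") estimate of the polyene transition energy.
h, m_e, c = 6.62607015e-34, 9.1093837015e-31, 2.99792458e8   # SI units
a = 1.4e-10                                                   # C-C bond length, m

def nu_free(k):
    """Transition energy in wavenumbers (cm^-1) for k C=C double bonds."""
    L = (2 * k + 1) * a
    dE = (2 * k + 1) * h**2 / (8 * m_e * L**2)   # = h^2 / (8 m_e a^2 (2k+1))
    return dE / (h * c) / 100.0                  # m^-1 -> cm^-1

for k in (1, 2, 3, 10):
    print(k, nu_free(k))
```

Note the characteristic 1/(2k + 1) red shift with increasing conjugation length.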
In reality, however, the C--C bonds in polyacetylene are not evenly spaced and in fact alternate between double (C=C) and single (C--C) bonds. As we shall show later in this chapter, this gives rise to a nonzero energy gap for the infinitely long chain. Finally, we have
neglected the fact that realistic polyene molecules are not straight chains. They can
have twists and kinks and other contortions that can limit the extent of π conjugation.
However, the fact that the energy gap does follow the predicted ΔE ∝ 1/L behavior
indicates that the electrons in the π orbitals are moving ballistically as free particles
more or less unaware of the geometry of the molecule. Consequently, even the π
electronic structure for a molecule such as betacarotene shown in Figure 8.1 can be
well understood within a free-electron model.
⟨φ_i|H|φ_j⟩ = β and ⟨φ_i|H|φ_i⟩ = α   (8.5)

We shall also assume that the overlap integral between neighboring C 2p_z orbitals is exactly zero, ⟨φ_i|φ_j⟩ = δ_ij, and that the orbitals provide a sufficient basis to expand the electronic wave functions:

|ψ⟩ = Σ_{j=1}^{N} c_j |φ_j⟩   (8.6)
Finally, we also will neglect the Coulombic interaction between the electrons. Our
task, then, is to determine the coefficients and the energy eigenvalues. Since we
have ignored interactions between the electrons, we need to solve the one-electron Schrödinger equation:

H|ψ⟩ = E|ψ⟩   (8.7)
In general, H will have nondiagonal elements wherever there is a π bond linking adjacent C atoms. For a linear chain, only nearest neighbors are linked, so H becomes a tridiagonal matrix and the Schrödinger equation in matrix form reads

⎡ α β 0 0 ··· 0 ⎤ ⎡ c_1 ⎤       ⎡ c_1 ⎤
⎢ β α β 0 ··· 0 ⎥ ⎢ c_2 ⎥       ⎢ c_2 ⎥
⎢ 0 β α β ··· 0 ⎥ ⎢ c_3 ⎥       ⎢ c_3 ⎥
⎢ ⋮  ⋱  ⋱  ⋱  ⋮ ⎥ ⎢  ⋮  ⎥ = E ⎢  ⋮  ⎥   (8.8)
⎢ 0 ··· 0 β α β ⎥ ⎢c_{N−1}⎥     ⎢c_{N−1}⎥
⎣ 0 ··· 0 0 β α ⎦ ⎣ c_N ⎦       ⎣ c_N ⎦
Introducing the dimensionless energy E′ = (E − α)/β, the equations for the sites away from the ends (1 < j < N) become

c_{j−1} + c_{j+1} = E′ c_j   (8.9)

If we write

c_j = A e^{ikj} + B e^{−ikj}   (8.10)

then

E′ = e^{ik} + e^{−ik} = 2 cos(k)   (8.11)

where A and B are constants and k is a parameter. Imposing the boundary conditions at the ends of the chain,

c_2 = E′ c_1 and c_{N−1} = E′ c_N   (8.12)

we arrive at two simple equations:

A + B = 0   (8.13)

A e^{ik(N+1)} + B e^{−ik(N+1)} = 0   (8.14)

which allow us to deduce the allowed values of k:

e^{2ik(N+1)} = 1   (8.15)

which leads to

k = πn/(N + 1),  n = 1, 2, . . . , N   (8.16)

We must disallow the case n = 0 since it leads to the trivial solution c_j = 0 for all values of j. The constant A can be determined by the normalization condition

Σ_j c_j² = 1   (8.17)
Thus, the allowed energy levels for the discretized polymer lattice are given by

E_n = α + 2β cos(πn/(N + 1))   (8.18)

As N increases, the width of the energy spectrum tends to 4|β|. Moreover, in the infinite limit, the energy spacing between successive energy levels shrinks to zero and k becomes a continuous variable. The coefficients for the eigenstates are given by (including normalization)

c_nj = √(2/(N + 1)) sin(njπ/(N + 1))   (8.19)
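Equation 8.18 is easily checked by diagonalizing the tridiagonal matrix of Equation 8.8 directly; the α, β, and N values below are arbitrary:

```python
import numpy as np

# Compare numerical eigenvalues of the Hueckel chain with Eq. 8.18.
alpha, beta, N = 0.0, -1.0, 8     # beta < 0: delocalization lowers the energy

H = alpha * np.eye(N) + beta * (np.eye(N, k=1) + np.eye(N, k=-1))
E_num = np.sort(np.linalg.eigvalsh(H))

n = np.arange(1, N + 1)
E_analytic = np.sort(alpha + 2.0 * beta * np.cos(np.pi * n / (N + 1)))

print(np.max(np.abs(E_num - E_analytic)))   # agreement to machine precision
```

The same three-line matrix construction handles branched or cyclic topologies simply by placing β elements wherever atoms are bonded.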
with the normalization constant of the Slater-type orbital (STO) given by

N = (2ζ)^n √(2ζ/(2n)!)   (8.24)
FIGURE 8.3 Overlap integral S(2pπ, 2pπ) as a function of the C--C separation.
The angular terms are given by the real form of the spherical harmonics. Clearly, given
the fact that Rn decays exponentially with radial distance r , the resonance integrals
will be nonvanishing only between C atoms that are close to each other. Hence, for
adjacent C atoms we set h_ab = β. Since orbital overlap varies with bond length, one expects some systematic variation in β with bond length. Mulliken suggested that
β(r ) should vary as the overlap integral between two 2 p STOs,
S(2pπ, 2pπ) = e^{−p}(1 + p + (2/5)p² + p³/15)   (8.25)

where p = 1.625 R/a_o for C atoms separated by a distance R (in Bohr radii). This function is plotted in Figure 8.3, where we note that for R > 3 Å, the overlap integral
is nearly vanishing. Taking the typical C--C bond length to be 1.39 Å and expanding
about this point gives
β(r ) = βo (1 − 1.72(R − 1.39 Å)) (8.26)
as an approximate variation of β with C--C bond distance where βo is the resonance in-
tegral at 1.39 Å. We shall later use this approximately linear variation in the resonance
integral as a means for including electron–phonon coupling in these systems.
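Equations 8.25 and 8.26 can be verified numerically: evaluating S at R = 1.39 Å and taking its logarithmic derivative recovers both S ≈ 0.25 and a slope close to the −1.72 Å⁻¹ of Equation 8.26 (to within the rounding used in the text):

```python
import math

# Mulliken 2p-pi overlap (Eq. 8.25) and the linear beta(R) variation it implies.
a0 = 0.529177  # Bohr radius in Angstrom

def S_2ppi(R):
    """Overlap of two 2p-pi STOs at C-C separation R (Angstrom)."""
    p = 1.625 * R / a0
    return math.exp(-p) * (1.0 + p + (2.0 / 5.0) * p**2 + p**3 / 15.0)

R0 = 1.39                      # typical C-C bond length, Angstrom
S0 = S_2ppi(R0)

# logarithmic derivative (1/S) dS/dR by central difference
dR = 1e-4
slope = (S_2ppi(R0 + dR) - S_2ppi(R0 - dR)) / (2.0 * dR) / S0

print(S0)      # about 0.25
print(slope)   # about -1.7 per Angstrom, cf. Eq. 8.26
```

This is the same linear bond-length dependence that will later supply the electron–phonon coupling.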
Finally, the remaining assumption that the orbitals localized on one C atom are orthogonal to orbitals localized on different C atoms requires us to write

S_ab = ∫ φ_a φ_b dr = 0   (8.27)

unless a = b, in which case S_aa = 1. In the simplest case, this is a rather extreme approximation, as we can see in Figure 8.3, where at the typical C--C bond length S ≈ 0.25. We can improve upon the simple Hückel model by solving the generalized eigenvalue equation

(h − εS)ψ = 0   (8.28)
where S is the overlap matrix. For example, Roald Hoffmann's extended Hückel approach includes the overlap integral

S_ij = ⟨i|j⟩   (8.29)
between basis functions on different atomic centers as a way to account for bond
bends and torsions using STOs on each atom. The remaining parameters are given by
the ionization potentials, electron affinities, and core charges of the atomic sites.
1 -- 2 -- 3 -- 4

where each C atom is labeled and the adjacency is indicated by a solid line. We then write the Hamiltonian for the π electrons as

    ⎡ α β 0 0 ⎤
H = ⎢ β α β 0 ⎥   (8.30)
    ⎢ 0 β α β ⎥
    ⎣ 0 0 β α ⎦
The eigenvalues and eigenvectors can be readily determined either numerically or algebraically by solving the secular determinant equation

⎢ x 1 0 0 ⎥
⎢ 1 x 1 0 ⎥ = 0   (8.31)
⎢ 0 1 x 1 ⎥
⎢ 0 0 1 x ⎥

with x = (α − E)/β, which expands to the polynomial

x⁴ − 3x² + 1 = 0   (8.32)
which has four roots corresponding to the orbital energies as shown in Figure 8.4.
The total energy is then

E_π = Σ_j n_j E_j   (8.33)

where n_j is the occupancy of the jth energy level. For the case of 1,3-butadiene,
each C atom contributes one electron to the π system. Taking the Pauli principle into
account, only the lowest two levels are fully occupied and the total π energy is
E π = 4α + 4.472β (8.34)
If the π electrons in butadiene were not delocalized but rather formed two isolated
double bonds, the total energy would be simply 4α + 4β, that is, twice the energy of
ethylene. By delocalizing the electrons, the total energy is lowered by 0.472β.
Note that the resonance integral β is a negative quantity since delocalization should lower the energy of the system.
FIGURE 8.4 (a) Energy levels (α ± 1.618β and α ± 0.618β, relative to the C 2p_z level) and (b) orbitals for 1,3-butadiene from the Hückel model.
An alternative way to derive the eigenvalues for the linear chain is to consider the roots of the determinant equation

         ⎢ x 1 0 0 ··· 0 ⎥
         ⎢ 1 x 1 0 ··· 0 ⎥
D_N(x) = ⎢ 0 1 x 1 ··· 0 ⎥ = 0   (8.35)
         ⎢ ⋮  ⋱  ⋱  ⋱  ⋮ ⎥
For a closed N-membered ring (periodic boundary conditions), the analogous result is

E_n = α + 2β cos(2nπ/N)   (8.45)
TABLE 8.2
Experimental Resonance Energies and Hückel Delocalization Energies

Molecule       Experimental Resonance    Hückel Delocalization   Apparent β
               Energy (kcal/mol)         Energy                  (kcal/mol)
Benzene        36                        2β                      18
Naphthalene    75                        3.7β                    20
Anthracene     105                       5.3β                    20
Phenanthrene   111                       5.45β                   20
Biphenyl       65                        4.4β                    15
chemical compounds lends credence to its use as an empirical parameter for predicting
the stability and other electronic properties for related compounds.
H = αI + βM (8.47)
where I is the N × N identity matrix and M is the topology matrix with elements
Mi j = 1 if atoms i and j are neighbors and Mi j = 0 otherwise. Clearly, since H and
M commute, they share the same set of eigenvectors, ck . Likewise, their eigenvalues
are related
εk = α + βλk (8.48)
λ_k^(s) = 1   (8.51)

λ_{k±}^(s) = ½[1 ± √(9 + 8 cos(kπ/(N + 1)))]   (8.52)

λ_k^(a) = −1   (8.53)

λ_{k±}^(a) = −½[1 ± √(9 + 8 cos(kπ/(N + 1)))]   (8.54)

where the (s) and (a) superscripts denote symmetric and antisymmetric states and k = 1, . . . , N.
4. Radialenes (CCH₂)_n (n-membered rings with exocyclic =CH₂ groups)86

λ_{k±} = 2 cos(2kπ/n) ± √(cos²(2kπ/n) + 1),  k = 1, 2, . . . , n   (8.55)
There are also a number of closed expressions for the delocalization (or resonance) energy, defined as the difference between the total π electron energy and the energy corresponding to a set of localized bonds:
1. Polyenes (even number of C atoms)
2. Polymethines (n is odd)
3. Hückel annulenes (n = 4N + 2)
4. Anti-Hückel annulenes (n = 4N)
In all cases, as n → ∞,

lim_{n→∞} E_res/β = 4/π − 1 = 0.2732   (8.60)
FIGURE 8.6 Examples of alternant (a) and nonalternant (b) hydrocarbon systems. The aster-
isks labeling different carbons indicate C atoms belonging to one of the alternant sets.
THEOREM 8.1
For every Hückel molecular orbital energy α + βx in an alternant hydrocarbon system, there exists another orbital with energy α − βx. In other words, the roots of |H_π − λI| = 0 appear in pairs. In addition, for linear or branched-linear chains with an odd number of C atoms, there will be one root with x = 0.

The proof follows from the parity properties of the Tchebychev polynomials as shown in Figure 8.5. Under the parity operation x → −x, we see that T_n(x) = (−1)^n T_n(−x). Thus, systems with an even number of sites will have roots corresponding to the even polynomials while systems with an odd number of sites will have odd polynomial roots.
Before moving on, we introduce the bond-charge density matrix ρ,

ρ = Σ_{i=1}^{N} n_i |φ_i⟩⟨φ_i|   (8.64)

where {φ_i} are molecular orbitals and n_i is the occupancy. In the basis of local C 2p_z orbitals, the diagonal elements q_m = ρ_mm represent the number of π electrons localized about the mth carbon atom while the off-diagonal terms, p_mn = ρ_mn, indicate the number of π electrons shared between the mth and nth atoms, that is, the bond order. In terms of the orbital coefficients, we can define

q_r = ρ_rr = Σ_i n_i |c_ri|²   (8.65)

p_rs = ρ_rs = Σ_i n_i c_ri c*_si   (8.66)
THEOREM 8.2
Within the Hückel molecular orbital approximation for alternant systems, the total
electron density on each site for an N -electron N -site system is 1.
The proof of the first part of the theorem can be seen by examining Equation 8.65. For an N-electron/N-site system, the sum in Equation 8.65 is identical to taking

q_r = 2 Σ_{i=1}^{N/2} ⟨r|ψ_i⟩⟨ψ_i|r⟩   (8.72)

Here the |ψ_i⟩⟨ψ_i| acts like a projection operator and we are summing over exactly half the total states in the system. Hence,

q_r = 2⟨r|r⟩ × ½ = 1   (8.73)
In other words, for a neutral alternant system, no charge transfer or charge localization
occurs within the molecule.
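Theorem 8.2 is easy to confirm numerically for a small alternant system. A sketch for the four-site (butadiene) chain, which also yields the familiar Hückel bond orders p_12 ≈ 0.894 and p_23 ≈ 0.447:

```python
import numpy as np

# Bond-charge density matrix (Eqs. 8.64-8.66) for Hueckel butadiene,
# verifying Theorem 8.2: q_r = 1 on every site of the neutral alternant.
alpha, beta, N = 0.0, -1.0, 4
H = alpha * np.eye(N) + beta * (np.eye(N, k=1) + np.eye(N, k=-1))
E, C = np.linalg.eigh(H)                 # columns of C are MOs, E ascending

occ = np.zeros(N)
occ[: N // 2] = 2.0                      # 4 pi electrons fill the lowest 2 MOs
rho = (C * occ) @ C.T                    # rho_rs = sum_i n_i c_ri c_si

q = np.diag(rho)                         # pi-electron population on each site
p12, p23 = rho[0, 1], rho[1, 2]          # bond orders
print(q)                                 # each entry equals 1
print(p12, p23)
```

The uniform site populations reflect the pairing theorem; the alternating bond orders already hint at the bond-length alternation discussed earlier.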
THEOREM 8.3
The orbital coefficients of paired molecular orbitals are the same for “starred” atoms
and have opposite sign for “unstarred” atoms.
FIGURE 8.7 Nonbonding Hückel molecular orbitals with energy E = α for chains of length
N = 3, 5, and 7.
where Q in this equation reminds us that the electronic energy levels and associated
wave functions depend parametrically upon the nuclear positions. Within a fixed
frame, we can equivalently write this as
H_ele = Σ_σ Σ_lm h_lm a†_lσ a_mσ + ½ Σ_klmn Σ_σρ ⟨kl|v|mn⟩ a†_kσ a†_lρ a_mρ a_nσ   (8.75)

where the {a†_iσ, a_jρ} = δ_ij δ_σρ are fermion operators that add or remove electrons from single-electron basis functions,

a†_kσ|0⟩ = |kσ⟩ and a_kσ|kσ⟩ = |0⟩   (8.76)

Also, since we can put only one electron with a given spin into a given basis function,

a†_kσ a†_kσ |0⟩ = 0   (8.77)
The first term in H_ele represents the single-electron terms and includes the kinetic energy and the electron-nuclear interactions. These are independent of spin,

h_lm = ⟨l| −½∇² − e² Σ_{n=1}^{N_a} Z_n/|q − Q_n| |m⟩   (8.78)
The electron-electron repulsion is introduced by the second term. Here the ⟨ij|v|kl⟩ = ⟨ij|kl⟩ bracket denotes the Coulomb integral

⟨ij|kl⟩ = ∫d³r_1 ∫d³r_2 φ*_i(1) φ*_j(2) (e²/r_12) φ_k(1) φ_l(2)   (8.79)

Be careful! A number of authors use different conventions for this. Here we adopt the notation more prevalent in the many-body physics literature, which is consistent with the creation/annihilation operator formalism we are following. In the quantum chemical literature, the bracket (ij|kl) is taken to mean

(ij|kl) = ∫d³r_1 ∫d³r_2 φ_i(1) φ_j(1) (e²/r_12) φ_k(2) φ_l(2)   (8.80)

and is sometimes denoted with a square bracket [ij|kl] when working with spin orbitals. The connection is that

⟨ij|kl⟩ ≡ [ik|jl]   (8.81)

for real orbitals. The reason the two notations have evolved is historical, with one camp adopting one notation and another camp adopting the other. The appendix explains the difference (but not the cause).
We can interpret each term in the electronic Hamiltonian as follows:
†
• h 22 a2μ a2μ = energy to place an electron with spin μ into the φ2 basis function
†
• h 23 a2μ a3μ = energy to remove an electron from φ3 and place it into φ2
† †
• 22|22
a2α a2β a2β a2α = Coulomb repulsion between a spin-up α electron and
a spin-down β electron when both are in the φ2 basis function
Notice that, in general, if we have N basis functions, we would need to perform N 4
six-dimensional integrals to completely account for all the electron-electron interac-
tions.
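The scale of this bookkeeping is easy to check by direct counting. For real orbitals, the eight-fold permutational symmetry of the integrals reduces the N⁴ total to roughly N⁴/8 unique values; the short sketch below (our own illustration, not code from the text) counts them by canonicalizing each index quadruple:

```python
from itertools import product

def unique_integrals(n_basis):
    """Count unique two-electron integrals for real orbitals using the
    8-fold permutational symmetry (ij|kl) = (ji|kl) = (ij|lk) = (kl|ij) = ...
    Each integral is keyed by its canonical index tuple."""
    seen = set()
    for i, j, k, l in product(range(n_basis), repeat=4):
        # canonical form: sort within each index pair, then sort the pairs
        p1, p2 = tuple(sorted((i, j))), tuple(sorted((k, l)))
        seen.add(tuple(sorted((p1, p2))))
    return len(seen)

for n in (4, 10):
    print(n ** 4, unique_integrals(n))   # 256 vs 55, then 10000 vs 1540
```

The unique count is M(M+1)/2 with M = N(N+1)/2, so even with symmetry the storage and integral work still grows as N⁴.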
The Hartree–Fock equations can be obtained from the Heisenberg equations of motion,92 where we write the time derivative of each electron operator as
i\hbar\,\frac{d}{dt}a_{s\mu} = [a_{s\mu}, H]     (8.82)
This can be approximated by
i\hbar\,\frac{d}{dt}a_{s\mu} \approx \sum_u f^\mu_{su}\, a_{u\mu}     (8.83)
Now, multiplying on the left and on the right by a^\dagger_{t\mu} and adding the two together,
\{a^\dagger_{t\mu}, [a_{s\mu}, H]\} = \sum_u f^\mu_{su}\,\{a^\dagger_{t\mu}, a_{u\mu}\}     (8.85)
where we have assumed we are working in an orthonormal basis. Taking the anticommutator on the right-hand side and averaging over the electronic ground state of the system,
\left\langle\{a^\dagger_{t\mu}, [a_{s\mu}, H]\}\right\rangle = f^\mu_{st}     (8.86)
f^\mu_{st} is called the Fock operator. It is Hermitian, and its eigenvalues and eigenvectors correspond to the energies and single-particle orbitals,
f^\mu_{st} = \{a^\dagger_{t\mu}, [a_{s\mu}, H]\}
= -\{a^\dagger_{t\mu}, [H, a_{s\mu}]\}
= -\left\{a^\dagger_{t\mu}, \left[\sum_{\sigma}\sum_{lm} h_{lm}\, a^\dagger_{l\sigma} a_{m\sigma},\; a_{s\mu}\right]\right\} - \left\{a^\dagger_{t\mu}, \left[\frac{1}{2}\sum_{klmn}\sum_{\sigma\rho}\langle kl|nm\rangle\, a^\dagger_{k\sigma} a^\dagger_{l\rho} a_{m\rho} a_{n\sigma},\; a_{s\mu}\right]\right\}     (8.87)
The diagonal elements are electrons in a given basis function of a given spin, and the
off-diagonal elements are how those electrons are shared between the different basis
functions.
For a closed shell system in which all electrons are paired, we can equate the
spin-up (α) densities with the spin-down (β) densities
\langle a^\dagger_{l\alpha} a_{m\alpha}\rangle = \langle a^\dagger_{l\beta} a_{m\beta}\rangle     (8.90)
Using this and the symmetry relations of the two-body matrix elements, we arrive at
f^\mu_{st} = h_{st} + \sum_{lm}\left(2\langle sl|tm\rangle - \langle sl|mt\rangle\right)\langle a^\dagger_{l\mu} a_{m\mu}\rangle     (8.91)
Notice that the Fock operator depends upon its own eigenstates, since we can expand the density matrix in terms of the eigenvectors of the Fock operator,
\psi^\mu_k = \sum_m c^\mu_{km}\,\phi^\mu_m     (8.92)
as
\gamma^\mu_{ml} = \sum_k c^{\mu *}_{kl}\, c^\mu_{km}\, n^\mu_k     (8.93)
Operationally, we “guess” at \gamma^\mu_{ml}, either by doing a low-level calculation or simply by using Hückel theory, construct the Fock operator as a matrix in the basis of choice, and diagonalize this to obtain an improved set of orbital energies and single-particle orbitals. We then reconstruct the bond-charge matrix using these new orbitals and repeat this process until neither the energies, orbitals, nor bond-charge matrix changes to within a suitable tolerance. In performing either ab initio or semiempirical calculations, this part of the calculation usually takes the most time, and one has
no real guarantee on how many iterations are required to converge the Hartree–Fock
equations. Depending upon the size of the system, the complexity of the basis set, and
the speed of our computer, this can take anywhere from a few seconds to months or
years. A good strategy is to start off with a low-level basis set, get a good guess at the
bond-charge matrix, then repeat the procedure with increasingly more accurate basis
sets. Most modern quantum chemical codes allow us to import the results (checkpoint
file) from a previous calculation as a starting point.
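The iterative procedure described above can be sketched in a few lines. This is a minimal illustration of the self-consistent loop, not the code distributed with the text; the orthonormal-basis assumption and the `scf_loop` helper name are ours:

```python
import numpy as np

def scf_loop(h, eri, n_occ, max_iter=200, tol=1e-8):
    """Minimal closed-shell SCF sketch in an orthonormal basis.
    h     : (n, n) one-electron matrix h_st
    eri   : (n, n, n, n) two-electron integrals <sl|tm>, physics notation
    n_occ : number of doubly occupied orbitals."""
    n = h.shape[0]
    P = np.zeros((n, n))              # initial (empty) bond-charge matrix
    E_old = 0.0
    for _ in range(max_iter):
        # Fock matrix, Eq. (8.91): F_st = h_st + sum_lm (2<sl|tm> - <sl|mt>) P_lm
        F = h + 2 * np.einsum('sltm,lm->st', eri, P) \
              - np.einsum('slmt,lm->st', eri, P)
        eps, C = np.linalg.eigh(F)    # improved orbital energies / orbitals
        P = C[:, :n_occ] @ C[:, :n_occ].T   # rebuild bond-charge matrix
        E = np.sum((h + F) * P)       # closed-shell Hartree-Fock energy
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E, eps, C

# Two-site check: hopping only, no e-e term; converged E should be 2*eps - 2|t|
h = np.array([[0.0, -1.0], [-1.0, 0.0]])
E, eps, C = scf_loop(h, np.zeros((2, 2, 2, 2)), n_occ=1)
print(E)
```

In practice one would add a better initial guess (Hückel, or a checkpoint from a smaller basis) and convergence acceleration, as described above.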
E[\rho] = \langle\phi|H|\phi\rangle     (8.98)
= \sum_{ij}\langle i|h|j\rangle\,\rho_{ij} + \frac{1}{2}\sum_{ijkl}\langle ij||kl\rangle\,\rho_{ki}\rho_{lj}     (8.99)
where \rho is the single-particle density matrix with elements \rho_{ij} = \langle\phi|a^\dagger_i a_j|\phi\rangle. The density matrix satisfies the conditions \rho^2 = \rho and \mathrm{Tr}[\rho] = N. With this in mind, we can minimize E[\rho] with respect to the density under the constraint that \rho^2 = \rho remain satisfied,
\delta\left(E[\rho] - \mathrm{Tr}\,\Lambda(\rho^2 - \rho)\right) = 0
where \Lambda is a matrix of Lagrange multipliers. Expanding the variation,
\frac{\delta E[\rho]}{\delta\rho} - \Lambda\rho - \rho\Lambda + \Lambda = 0
This must hold for all \delta\rho, so we require
F - \Lambda\rho - \rho\Lambda + \Lambda = 0     (8.100)
The \Lambda's can be eliminated by multiplying Eq. (8.100) on the right and left by \rho and subtracting the two (taking \rho^2 = \rho into account). One thus obtains
[F, \rho] = 0
In other words, the density matrix that minimizes the energy commutes with the Hartree–Fock Hamiltonian F.
We can generalize this by asking what happens to \rho if [F, \rho] \neq 0 or if H is a function of time. Let us consider the case where the system at all times is described by a single Slater determinant |\psi(t)\rangle composed of orthonormal orbitals \{|\phi_i\rangle\} such that the time-dependent density matrix is given by
\rho(t) = \sum_{i=1}^N |\phi_i(t)\rangle\langle\phi_i(t)|     (8.101)
The equations of motion follow from making the action
S = \int dt\left[i\hbar\sum_i \langle\phi_i|\dot\phi_i\rangle - E[\rho] - \sum_{ij}\lambda_{ij}\left(\langle\phi_i|\phi_j\rangle - \delta_{ij}\right)\right]     (8.102)
stationary. Here, the dot denotes the time derivative, E[\rho] = \langle\psi|H(t)|\psi\rangle is the Hartree–Fock energy functional, and the \lambda_{ij}'s are Lagrange multipliers introduced to ensure orthogonality of the orbitals.
We now minimize S with respect to variations in the orbitals:
\frac{\delta S}{\delta|\phi_i\rangle} = -i\hbar\langle\dot\phi_i| - \langle\phi_i|F - \sum_j \lambda_{ij}\langle\phi_j| = 0     (8.103)
\frac{\delta S}{\delta\langle\phi_i|} = i\hbar|\dot\phi_i\rangle - F|\phi_i\rangle - \sum_j \lambda_{ij}|\phi_j\rangle = 0     (8.104)
The two-electron integrals may be viewed as the Coulomb interaction between two electron densities:
\langle ik|jl\rangle = \int d^3r_1\, d^3r_2\, [u_i(1)u_j(1)]\,\frac{e^2}{r_{12}}\,[u_l(2)u_k(2)]     (8.106)
The integral will be negligibly small unless u_i and u_j are the same basis function. Thus, we can retain only those integrals for which \delta_{c_i c_j} \neq 0, where \delta_{c_i c_j} = 0 unless the two orbitals are on the same atomic center. Such intermediate neglect is used in the MNDO, PM3, and AM1 methods, while methods such as the INDO, MINDO, ZINDO, and SINDO approaches do not apply the rule when all the orbitals in the integral are on the same atomic site.
The notion of neglect of differential overlap also plays a key role in the devel-
opment of tight-binding treatments used in solid-state physics. In many ways, we
can use the terms tight-binding and semiempirical interchangeably (although certain
purists will argue otherwise). For solid-state systems we can often take advantage of
the periodic nature of the lattice to develop recursion relations or treat the problem in
reciprocal space to compute the band structure of a given system. For more complete
treatments, any number of excellent texts on solid-state physics may be consulted.
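As a minimal illustration of the reciprocal-space treatment just mentioned, the sketch below (our own example, with illustrative parameter values) diagonalizes a one-orbital-per-site tight-binding ring and compares the spectrum with the analytic Bloch band E(k) = α + 2β cos(ka):

```python
import numpy as np

# One-orbital-per-site tight-binding ring with periodic boundary conditions.
# alpha (on-site) and beta (hopping) are illustrative parameter values.
alpha, beta, N = 0.0, -1.0, 64
H = alpha * np.eye(N)
for n in range(N):
    H[n, (n + 1) % N] = beta
    H[(n + 1) % N, n] = beta

evals = np.sort(np.linalg.eigvalsh(H))
# Bloch's theorem gives the band E(k) = alpha + 2*beta*cos(ka), ka = 2*pi*m/N
ka = 2 * np.pi * np.arange(N) / N
analytic = np.sort(alpha + 2 * beta * np.cos(ka))
print(np.allclose(evals, analytic))   # True
```

The real-space and reciprocal-space spectra coincide exactly, which is the content of the periodic-lattice trick used throughout solid-state treatments.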
If we further assume zero differential overlap between orbitals on different sites, we arrive at the Pariser–Parr–Pople (PPP) Hamiltonian
H_{PPP} = \sum_{\sigma}\sum_i h_{ii}\, a^\dagger_{i\sigma} a_{i\sigma} + \sum_{\sigma}\sum'_{ij} t_{ij}\, a^\dagger_{i\sigma} a_{j\sigma} + \sum_i U_i\, a^\dagger_{i\uparrow} a_{i\uparrow} a^\dagger_{i\downarrow} a_{i\downarrow}
= \sum_{\sigma}\sum_i h_{ii}\, n_{i\sigma} + \sum_{\sigma}\sum'_{ij} t_{ij}\, a^\dagger_{i\sigma} a_{j\sigma} + \sum_i U_i\left(n_{i\uparrow} - \frac{1}{2}\right)\left(n_{i\downarrow} - \frac{1}{2}\right)     (8.110)
where Ui = ii|ii
, the prime on the second summation reminds us that we sum only
atoms participating in the π-electron network, and n i↑ or n i↓ indicate the number of
spin-up or spin-down electrons in a given basis orbital. The original purpose of the
approach was to predict the electronic properties of organic dye molecules. In fact,
when combined with a Hartree–Fock treatment of the electronic ground state, the PPP
model does a remarkably good job of predicting the position and oscillator strengths
of the lowest singlet transitions in many π -conjugated systems.
Unlike most semiempirical treatments, π-electron theories have a rigorous ab
initio underpinning. HPPP is in fact an approximate effective operator acting on the
π-electronic subspace. Likewise, its parameters include effective electron correlation
effects between the π system and the core. The connection between the PPP model and
more rigorous approaches was explored by Freed and coworkers using diagrammatic
techniques for solving multireference perturbation theory.93–95
Now, if all the sites are equivalent, we can write ti j = t, h ii = ε, and Ui = U and
arrive at the Hubbard Hamiltonian:96
H = \varepsilon\sum_{\sigma}\sum_i a^\dagger_{i\sigma} a_{i\sigma} + t\sum_{\sigma}\sum'_{ij} a^\dagger_{i\sigma} a_{j\sigma} + U\sum_i a^\dagger_{i\uparrow} a_{i\uparrow} a^\dagger_{i\downarrow} a_{i\downarrow}     (8.111)
This can be further reduced by noticing that the first term is simply the sum over the
number operators and as such is equal to εN . This can be removed by choosing our
energy origin so that ε = 0. Next, the interaction term can be also written in terms of
the electron number operators and we arrive at
H = t\sum_{\sigma}\sum'_{ij} a^\dagger_{i\sigma} a_{j\sigma} + U\sum_i\left(n_{i\uparrow} - \frac{1}{2}\right)\left(n_{i\downarrow} - \frac{1}{2}\right)     (8.112)
This particular model is seldom used in the chemical field but widely used in the
solid-state physics community for describing electrons in narrow band-gap materials
and has been applied to problems such as high-Tc superconductivity, band magnetism,
and the metal-insulator transition. For small systems, one can perform numerically
exact calculations, and the code for doing so is included on the disk accompanying this
text with some representative results shown in Figure 8.8 for the case of a spin-paired
four-electron/four-site model of 1,3-butadiene.
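The numerically exact approach can be sketched for the smallest nontrivial case, the two-site Hubbard dimer at half filling (the four-site code from the disk is not reproduced here). Working in the S_z = 0 sector with basis |↑↓, 0⟩, |0, ↑↓⟩, |↑, ↓⟩, |↓, ↑⟩:

```python
import numpy as np

# Exact diagonalization of the two-site, two-electron Hubbard model in the
# S_z = 0 sector, with H = -t sum_s (c1s+ c2s + h.c.) + U (n1u n1d + n2u n2d).
# Basis ordering: |ud,0>, |0,ud>, |u,d>, |d,u>; fermion signs included.
def hubbard_dimer(t, U):
    return np.array([[U,   0, -t,  t],
                     [0,   U, -t,  t],
                     [-t, -t,  0,  0],
                     [t,   t,  0,  0]], dtype=float)

t, U = 1.0, 4.0
E0 = np.linalg.eigvalsh(hubbard_dimer(t, U)).min()
# Analytic singlet ground state: E0 = U/2 - sqrt((U/2)**2 + 4*t**2)
print(E0, U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2))
```

The numerical ground state reproduces the analytic two-site result derived later in this chapter, and the same construction scales (exponentially) to larger site counts.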
The Hubbard model can be solved exactly in one dimension as shown by Lieb
and Wu97 in 1968 using the Bethe Ansatz technique. This solution was later shown
to be the complete solution by Essler et al. in 1991.98 One of the most significant predictions of the model is the absence of a “Mott transition” between conducting and insulating states as the strength of the interaction U is increased.
FIGURE 8.8 Exact solution for Hubbard model for closed-shell four-electron/four-site sys-
tem. (a) Energy eigenvalues for the 36 × 36 configuration system. (b) Total ground-state energy
per site.
\phi_1 = a^\dagger_1 a^\dagger_{\bar 2}|0\rangle
\phi_2 = a^\dagger_2 a^\dagger_{\bar 1}|0\rangle
\phi_3 = a^\dagger_1 a^\dagger_{\bar 1}|0\rangle
\phi_4 = a^\dagger_2 a^\dagger_{\bar 2}|0\rangle
\phi_5 = a^\dagger_1 a^\dagger_2|0\rangle
\phi_6 = a^\dagger_{\bar 1} a^\dagger_{\bar 2}|0\rangle     (8.113)
where φ1 is the case where we have a spin-up electron on atom 1 and a spin-down
electron on atom 2 (as denoted by the overbar), φ2 is the reverse where the spin-down
electron is on atom 1 and the spin-up electron is on atom 2. φ3 and φ4 are the cases
where we have both spin-paired electrons on either atom 1 or atom 2. Lastly, φ5 and
φ6 are the cases where the two spins are parallel but on different atoms as indicated
in the diagram below.
where we have included ε as the local site energy. Since we are working with fermion
operators already, antisymmetry is already enforced within our basis states so we
do not need to construct separate exchange and Coulomb contributions to the total
energy.
Notice that only in the case where there is spin pairing does the electron/electron
Coulomb coupling term enter into H . Furthermore, notice that the two parallel spin
configurations are completely decoupled from the other antiparallel configurations
and H can be block-diagonalized along these lines with φ5 and φ6 being eigenstates of
H each with energy 2ε. The remaining eigenstates are symmetric and antisymmetric
linear combinations of the remaining four basis states. Again, we notice that φ1 and
φ2 are decoupled, as are φ3 and φ4 . Thus, we can form a suitable basis by taking even
and odd linear combinations of the two types of states,
\phi_{ag} = (\phi_1 + \phi_2)/\sqrt{2}     (8.115)
\phi_{bg} = (\phi_3 + \phi_4)/\sqrt{2}     (8.116)
\phi_{au} = (\phi_1 - \phi_2)/\sqrt{2}     (8.117)
\phi_{bu} = (\phi_3 - \phi_4)/\sqrt{2}     (8.118)
where u and g indicate odd (ungerade) and even (gerade) linear combinations. Within
the gerade basis, H is again block-diagonal, with eigenvalues
E^s_\pm = 2\varepsilon + U/2 \pm \sqrt{U^2 + 16t^2}/2     (8.119)
The ground state is the gerade state
\psi_{gs} = \cos\theta\,\phi_{ag} + \sin\theta\,\phi_{bg}     (8.120)
where \theta is the mixing angle that mixes delocalized (\phi_{ag}) and localized (\phi_{bg}) electronic configurations. The ground-state energy is then
E_{gs} = 2\varepsilon + U/2 - \sqrt{(U/2)^2 + 4t^2}     (8.121)
Recalling Chapter 2, we can write the mixing angle as
\tan 2\theta = \frac{4t}{U}     (8.122)
For the case of small U, the Coulomb repulsion between electrons is very small relative to the hopping, and the delocalized configuration \phi_{ag} dominates. In the limit of no interaction, E_{gs} = 2\varepsilon - 2|t|, which is what we expect from a Hückel treatment. As U becomes large compared to t, hopping is insignificant and the ground state is dominated by the localized configurations \phi_{bg} with an asymptotic energy of 2\varepsilon + U,
which is the energy necessary to have two electrons on the same atom with Coulomb
interaction U . In the exact case, we have perfect correlation between the two asymp-
totic limits. If you imagine that t is a function of the bond distance, then we have a
smooth potential energy surface connecting the bound molecular system to the dis-
sociated atom limit. This is not always the case when we make approximations to the
system.
with energies
\varepsilon_\pm = \alpha \mp |t|     (8.129)
Thus, we form the trial solution to the Hartree–Fock equations by writing the ground-state wave function as
\psi_{gs} = a^\dagger_{+\beta}\, a^\dagger_{+\alpha}|0\rangle     (8.130)
E_{gs} = 2h_{++} + \langle ++|++\rangle = 2\varepsilon - 2|t| + \langle ++|++\rangle     (8.135)
Evaluating the electron-electron (e-e) interaction, we find that the only terms that survive are those involving the Coulomb interactions in the ionic configurations:
\langle ++|++\rangle = \frac{1}{4}\left(\langle 11|11\rangle + \langle 22|22\rangle\right) = \frac{1}{2}U     (8.136)
which we take to be the same for both atoms:
U = \int d^3r_1\, d^3r_2\, |\phi_1(r_1)|^2\,\frac{e^2}{r_{12}}\,|\phi_1(r_2)|^2     (8.137)
Hartree–Fock limit. We previously derived this result:
E_{hf} = 2\varepsilon - 2|t| + \frac{U}{2}     (8.138)
Here we see a linear variation in the ground-state energy with increasing e-e interaction. When U/t is small (large t or small U), the Hartree–Fock energy approaches the exact energy from above. While the HF energy does asymptotically approach the
FIGURE 8.9 (a) Comparison between Hartree–Fock, unrestricted Hartree–Fock, and the exact
ground-state energy for a diatomic model using the Hubbard model. For U/t < 2, the UHF
and HF results are identical. (b) UHF mixing angle η vs. coupling. At U/t = 2 the derivative
of the UHF energy with respect to the coupling is discontinuous, indicating a sudden transition
from localized to delocalized states.
exact value in the limit of U → 0, it fails to reproduce the dissociated atom limit as
plotted in Figure 8.9.
We can extend this a bit further by adopting an unrestricted Hartree–Fock (UHF) approach, constructing a variational wave function from mixed spin configurations:
\phi_\alpha = \sin\eta\,|1\uparrow\rangle + \cos\eta\,|2\uparrow\rangle     (8.139)
\phi_\beta = \cos\eta\,|1\downarrow\rangle + \sin\eta\,|2\downarrow\rangle     (8.140)
The coefficients are chosen so that the segregation of the electrons can be controlled by varying \eta. Using these one-electron states as a trial wave function, the UHF energy is
E_{uhf}(\eta) = 2\varepsilon - 2|t|\sin 2\eta + \frac{U}{2}\sin^2 2\eta
which reduces to the HF value for \eta = \pi/4. The UHF ground-state energy is the minimum of E_{uhf}(\eta). Solving dE_{uhf}(\eta)/d\eta = 0 for various values of U yields the variation of the UHF energy with U. The resulting E_{uhf} energy is plotted in Figure 8.9 along with a plot of the optimal mixing angle. Here we notice that for U/t \le 2, the HF and UHF energies are identical, with \eta_{opt} = \pi/4, indicating that only delocalized configurations contribute to the ground state. At U/t = 2, the first derivative of E_{uhf} with respect to the coupling is discontinuous. This is indicative of a sudden and discontinuous change in the ground state to include localized configurations. Asymptotically, as \eta \to 0, the UHF energy converges to the exact ground-state energy. While the UHF does incorporate the physically correct behavior of charge segregation, it does so in a discontinuous way.
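The discontinuous onset of localization can be seen by scanning the mixing angle numerically. The closed form below, E(η) = 2ε − 2|t| sin 2η + (U/2) sin² 2η, is our assumed expression for the two-site UHF energy; it reproduces the limits quoted above (the HF value at η = π/4 and 2ε as η → 0):

```python
import numpy as np

# Two-site UHF energy surface (assumed closed form, with eps = 0):
#   E(eta) = -2|t| sin(2 eta) + (U/2) sin^2(2 eta)
def e_uhf(eta, t=1.0, U=0.0):
    s = np.sin(2 * eta)
    return -2 * abs(t) * s + 0.5 * U * s ** 2

etas = np.linspace(0.0, np.pi / 2, 100001)
eta_opt = {U: etas[np.argmin(e_uhf(etas, U=U))] for U in (1.0, 4.0)}
print(eta_opt[1.0], np.sin(2 * eta_opt[4.0]))
# For U/t <= 2 the minimum sits at eta = pi/4 (the RHF solution); for
# U/t = 4 it moves to sin(2 eta) = 2t/U = 0.5, signalling localization.
```

Minimizing over s = sin 2η shows why the transition is sharp: the stationary point s = 2|t|/U only enters the physical range s ≤ 1 once U/t ≥ 2.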
as given above and are generally considered to be a more accurate basis for a given number of basis functions. However, because the radial parts are based upon exponential forms, it becomes computationally costly to compute the various multicentered integrals required to evaluate H_{ele}.
In the early 1950s Boys100 introduced the use of Gaussian-type orbitals—although
there is some evidence that Roy McWeeny was using them as early as 1946. Gaussian-
type orbitals have a radial part
R_n(r) = N r^{n-1} e^{-\zeta r^2}     (8.143)
Although the Gaussian-type orbitals (GTO) are not as closely matched to the actual
atomic orbitals as the STOs and generally more GTOs are needed to achieve com-
parable accuracy, the advantage gained is a tremendous speedup in the evaluation of
multicentered integrals. This is gained by the use of the “Gaussian product theorem”
whereby the product of two Gaussian orbitals centered on different atoms is a finite sum of Gaussians centered on a point along the axis connecting them. Hence,
any four-centered integral reduces to a sum of two-centered integrals and any two-
centered integrals become a finite number of single-centered integrals. This facilitates
a speedup on the order of 4–5 orders of magnitude.
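The Gaussian product theorem for two s-type primitives can be verified directly. In one dimension, the product of Gaussians at A and B is a single Gaussian at the exponent-weighted point P; the exponents and centers below are illustrative values:

```python
import numpy as np

# Product of two 1D Gaussians centered at A and B is a single Gaussian
# centered at P = (a*A + b*B)/(a + b), scaled by K = exp(-a*b/(a+b)*(A-B)^2).
a, b = 0.7, 1.3          # exponents (illustrative values)
A, B = -0.5, 1.2         # centers (illustrative values)
x = np.linspace(-5, 5, 1001)

product = np.exp(-a * (x - A) ** 2) * np.exp(-b * (x - B) ** 2)

p = a + b
P = (a * A + b * B) / p
K = np.exp(-a * b / p * (A - B) ** 2)
single = K * np.exp(-p * (x - P) ** 2)

print(np.allclose(product, single))   # True
```

Because every product of primitives collapses onto a single center this way, four-center integrals reduce to sums of simpler ones, which is the source of the speedup quoted above.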
Currently there are literally hundreds of basis sets composed of Gaussian-type
orbitals. A minimum basis set is one in which, on each atom in the molecule, a single
basis function is used for each orbital in a Hartree–Fock calculation on the free atom.
However, often additional basis functions are added. For example for the Li atom,
which would require the 1s and 2s orbitals, one adds a complete set of 2p orbitals.
Thus, for the first row of the periodic table, one has a basis of two s functions and three p functions, for a total of five basis functions per atom. The most common minimal basis is the STO-nG basis, where the integer n represents the number of Gaussian primitive functions comprising a single basis function. It is usually a good idea to start off
using a minimal basis and then improve the calculation using a more accurate (hence
larger) basis. Commonly used minimal basis sets in most ab initio codes are: STO-3G,
STO-4G, STO-6G, and STO-3G*, the latter being a polarized version of the STO-3G
basis.101
One of the problems with the minimal bases is that they are quite inflexible
and are unable to adjust to the different electron densities encountered in forming
chemical bonds. In chemical bonding, it is typically the outer or valence electrons
that take part in forming bonds. Consequently, split-valence basis sets were developed
to account for this fact by allowing each valence orbital to be composed of a fixed
linear combination of primitive Gaussian functions. These are termed valence double,
triple, or quadruple-zeta basis sets according to the number of Gaussians used. These
are denoted using Pople’s scheme via X-YZg where X is the number of primitive
Gaussians composing each atomic core basis function and Y and Z denote how
the valence orbitals are split. In this case, we would have a split-valence double-zeta
basis. Split-valence triple- and split-valence quadruple-zeta basis sets are likewise
termed X-YZWg and X-YZWVg, respectively.
Additionally, these basis sets may include polarization functions as denoted by an
asterisk (*). Two asterisks (**) indicate that polarization functions are also added to
the light atoms (H and He). In a minimal basis, only the 1s orbital would be used for
these two atoms. In this case, adding polarization functions would involve including
a p function about the light atom. The addition of polarization functions allows the
electron density about these atoms to be more asymmetric about the atom center.
Similarly d- and even f-type orbitals can be added. The more precise notation is to
indicate exactly how many polarization functions were added such as (p,d).
One can also include diffuse functions on the light atoms, as denoted by a + sign. Here one adds additional Gaussian functions that are very broad so as to more accurately represent the “tail” of the atomic orbitals as one moves farther away from the atomic
center. Such functions are necessary to include when performing calculations on
anions or Rydberg states.
Finally, one can include correlation consistent basis functions. These were devel-
oped by Thom Dunning and co-workers and are designed to converge to the complete
basis set limit using extrapolation techniques. These are the “cc-pVNZ” basis func-
tions (for correlation consistent polarized) where N = D,T, Q, 5, 6, . . . for doubles,
triples, and so on, and V denotes that they are valence-only basis sets. Such basis sets
are currently considered state of the art and are widely used for post-Hartree–Fock
calculations.
Needless to say, ab initio treatments are appealing since one strictly deals with
the basic physical interactions between the electron and the nuclei. There is some art
in designing basis functions, and certainly a lot of thoughtful computational design
is required to both set up and solve the many-body problem. A number of standard
implementations and codes are available, some at no cost and some at low cost for
academic users. As better codes, better basis sets, and better theoretical techniques
are developed, we also have nearly parallel progress in the amount of computational
horsepower available. Consequently, armed with a modest modern workstation, we can perform quite accurate ab initio calculations with rather large basis sets on systems of up to about 100 to 300 carbon atoms.
\langle r_1, \ldots, r_N|\psi\rangle = \psi(r_1, \ldots, r_N)     (8.144)
and
|\psi(r_1, \ldots, r_N)|^2
gives the joint probability of finding electron 1 at r_1, electron 2 at r_2, and so on. Since the physics implied by this cannot depend upon how we arbitrarily assign the labels to the electrons (being identical particles), swapping labels should not affect the physics. Thus,
|\psi(\ldots, r_i, \ldots, r_j, \ldots)|^2 = |\psi(\ldots, r_j, \ldots, r_i, \ldots)|^2
This introduces an ambiguity into the wave function, since swapping particles can then only change the total phase of the wave function,
\psi(\ldots, r_j, \ldots, r_i, \ldots) = e^{i\delta}\,\psi(\ldots, r_i, \ldots, r_j, \ldots)
Repeated operations of the permutation operator will then introduce a phase change n\delta:
\hat P^n\,\psi = e^{in\delta}\,\psi
Consequently, one can readily conclude that \delta can take one of two possible values: \delta = \pi or \delta = 0. For the case of \delta = \pi, each binary permutation of indices changes the sign of the wave function, while for \delta = 0, swapping particle indices has no effect on the sign of the wave function. From relativistic considerations, it can be shown that particles with half-integer spin (fermions) must have \delta = \pi while particles with integer spin (bosons) must have \delta = 0. Thus, for electrons,
\psi(r_{p1}, \ldots, r_{pN}) = (-1)^P\,\psi(r_1, \ldots, r_N)
where P is the number of binary transpositions required to return the permutation \{p1, p2, p3, \ldots, pN\} to its original form. For a two-electron system,
\psi(r_1, r_2) = -\psi(r_2, r_1)
Since all of these wave functions are for all intents and purposes identical in terms of
their physical content, we need to sum over all possible permutations in constructing
the final fermionic state. Let us define \hat P_F as the antisymmetrization operator that acts upon an N-fermion state to produce an antisymmetric state
\hat P_F\,\psi(1, \ldots, N) = \frac{1}{N!}\sum_p (-1)^p\,\psi(p1, \ldots, pN)     (8.153)
\mathcal{H}_N = \mathcal{H} \otimes \mathcal{H} \otimes \mathcal{H} \otimes \cdots     (8.159)
|\phi_1 \cdots \phi_N) = |\phi_1\rangle \otimes \cdots \otimes |\phi_N\rangle     (8.160)
For notation, we use the rounded ket, |\,), to denote a nonsymmetrized direct product basis ket. Similarly, we use a curly bracket |\,\} to denote a properly antisymmetrized and normalized ket
|\phi_1\phi_2 \cdots \phi_N\} = \sqrt{N!}\,\hat P_F\,|\phi_1\phi_2 \cdots \phi_N)     (8.161)
= \frac{1}{\sqrt{N!}}\sum_p (-1)^p\,|\phi_{p1}\phi_{p2} \cdots \phi_{pN})     (8.162)
Within the orbital basis, we can write the antisymmetrized state as a Slater determinant
|\phi_1 \cdots \phi_N\} = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\phi_1(1) & \phi_2(1) & \cdots & \phi_N(1) \\
\phi_1(2) & \phi_2(2) & \cdots & \phi_N(2) \\
\vdots & \vdots & & \vdots \\
\phi_1(N) & \phi_2(N) & \cdots & \phi_N(N)
\end{vmatrix}     (8.168)
add an electron to basis orbital λ. Normalizing this (as per the harmonic oscillator
operators)
a^\dagger_\lambda\,|\lambda_1 \cdots \lambda_N\rangle = \sqrt{1 + n_\lambda}\,|\lambda\lambda_1 \cdots \lambda_N\rangle     (8.170)
and
|\lambda_1 \cdots \lambda_N\rangle = \frac{1}{\sqrt{n_1! \cdots n_N!}}\, a^\dagger_{\lambda_1} a^\dagger_{\lambda_2} \cdots a^\dagger_{\lambda_N}|0\rangle     (8.173)
One must be careful in using these operators since
a^\dagger_\lambda a^\dagger_\mu|0\rangle = |\lambda\mu\} = -|\mu\lambda\} = -a^\dagger_\mu a^\dagger_\lambda|0\rangle     (8.174)
or
\{a^\dagger_\mu, a^\dagger_\lambda\} = 0     (8.176)
a_\lambda|0\rangle = 0     (8.177)
\langle 0|a^\dagger_\lambda = 0     (8.178)
a_\lambda|\lambda\rangle = \sqrt{n_\lambda}\,|\lambda - 1\rangle     (8.179)
In other words, a_\lambda removes a fermion from orbital \lambda if this state is occupied to begin with. To see how this works on the antisymmetrized many-body state, consider
a_\lambda|\beta_1 \cdots \beta_N\} = \sum_P^\infty \frac{1}{P!}\sum_{\alpha_1 \cdots \alpha_P} \{\alpha_1 \cdots \alpha_P|a_\lambda|\beta_1 \cdots \beta_N\}\,|\alpha_1 \cdots \alpha_P\}     (8.180)
= \sum_P^\infty \frac{1}{P!}\sum_{\alpha_1 \cdots \alpha_P} \{\lambda\alpha_1 \cdots \alpha_P|\beta_1 \cdots \beta_N\}\,|\alpha_1 \cdots \alpha_P\}     (8.181)
where P denotes the number of particles in a given state and the inner sums are over all P-particle basis states. Clearly, only terms with P = N - 1 and (\lambda\alpha_1 \cdots \alpha_P) equal to some permutation of (\beta_1 \cdots \beta_N) will contribute to the sum. Thus, we can write
a_\lambda|\beta_1 \cdots \beta_N\} = \sum_{i=1}^N (-1)^{i-1}\,\delta_{\lambda\beta_i}\,|\beta_1 \cdots \beta_{i-1}\beta_{i+1} \cdots \beta_N\}     (8.182)
= \sum_{i=1}^N (-1)^{i-1}\,\delta_{\lambda\beta_i}\,|\beta_1 \cdots \hat\beta_i \cdots \beta_N\}     (8.183)
where the hat denotes that an electron has been removed from orbital \beta_i. Thus, for fermions,
a_\lambda|\beta_1 \cdots \beta_N\} = \begin{cases} (-1)^{i-1}\,|\beta_1 \cdots \hat\beta_\lambda \cdots \beta_N\} & \text{if } |\lambda\rangle \text{ is occupied} \\ 0 & \text{otherwise} \end{cases}     (8.184)
Now to close the algebra. Consider the action of two operators on the vacuum
state:
a_\lambda a^\dagger_\mu|0\rangle = \delta_{\lambda\mu}|0\rangle = \left(\delta_{\lambda\mu} - a^\dagger_\mu a_\lambda\right)|0\rangle     (8.185)
which gives the fermion anticommutation bracket. If we act upon an already occupied state, then
a^\dagger_\mu a_\lambda|\alpha_1 \cdots \alpha_N\} = \sum_{i=1}^N (-1)^{i-1}\,\delta_{\lambda\alpha_i}\,|\mu\alpha_1 \cdots \hat\alpha_i \cdots \alpha_N\}     (8.187)
Thus, we conclude
a_\lambda a^\dagger_\mu|\alpha_1 \cdots \alpha_N\} = \left(\delta_{\lambda\mu} - a^\dagger_\mu a_\lambda\right)|\alpha_1 \cdots \alpha_N\}     (8.188)
This last expression is extremely useful since typically we write operators in normal
ordered form in which all creation operators are to the left of the annihilation operators.
This then (typically) reduces the operator into terms involving occupation numbers
†
n λ = aλ aλ and integrals evaluated over the basis functions.
General rules: Let us summarize the general operational rules for the fermion
operators:
• Vacuum: |0\rangle is defined as a state with no particles. It is normalized so that \langle 0|0\rangle = 1. Antisymmetrized states are generated by acting on the vacuum ket with the creation operators,
a^\dagger_i a^\dagger_j a^\dagger_k|0\rangle = |ijk\rangle     (8.189)
|0\rangle = a_k a_j a_i|ijk\rangle = a_k a_j a_i|A\rangle     (8.191)
and
\langle 0| = \langle A|(a_k a_j a_i)^\dagger = \langle A|a^\dagger_i a^\dagger_j a^\dagger_k     (8.192)
and
a_i|ijkl \cdots\rangle = |\bar\imath jkl \cdots\rangle     (8.194)
where the overline indicates an electron has been removed from orbital i. If orbital i was occupied, the result is
a_i|ijkl \cdots\rangle = |jkl \cdots\rangle     (8.195)
otherwise
a_i|jkl \cdots\rangle = 0     (8.196)
• Order is important:
a^\dagger_k a^\dagger_l|0\rangle = |kl\rangle     (8.197)
while
a^\dagger_l a^\dagger_k|0\rangle = |lk\rangle = -|kl\rangle     (8.198)
• a^\dagger_i a^\dagger_i = a_i a_i = 0
• Anticommutation: \{a_i, a^\dagger_j\} = \delta_{ij}. This allows us to swap creation and annihilation operators for different orbitals (i \neq j) provided we change the sign: a_i a^\dagger_j = -a^\dagger_j a_i. If we are dealing with the same orbital, then a_i a^\dagger_i = 1 - a^\dagger_i a_i.
• Overlap between two states, \langle A|B\rangle, is evaluated by writing out the creation/annihilation operators and rearranging to normal ordering. For example, if |A\rangle = |ij\rangle and |B\rangle = |kl\rangle, then \langle A|B\rangle = \langle 0|(a^\dagger_i a^\dagger_j)^\dagger a^\dagger_k a^\dagger_l|0\rangle. This we evaluate by using the symmetry and commutation rules as follows:
\langle A|B\rangle = \langle 0|a_j a_i a^\dagger_k a^\dagger_l|0\rangle     (8.199)
= \langle 0|a_j(\delta_{ik} - a^\dagger_k a_i)a^\dagger_l|0\rangle
= \delta_{ik}\langle 0|a_j a^\dagger_l|0\rangle - \langle 0|a_j a^\dagger_k a_i a^\dagger_l|0\rangle     (8.200)
= \delta_{ik}\langle 0|(\delta_{jl} - a^\dagger_l a_j)|0\rangle - \langle 0|(\delta_{jk} - a^\dagger_k a_j)(\delta_{il} - a^\dagger_l a_i)|0\rangle     (8.201)
= \delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il}     (8.202)
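These rules can be realized concretely by representing kets as occupation-number bit strings with an attached phase; all the helper names below are our own illustration, not code from the text:

```python
# Kets as (phase, bits): orbital i occupied <=> bit i set. The sign rule
# (-1)^(number of occupied orbitals below i) matches the ordering
# convention |ij...> = a_i^+ a_j^+ ... |0>.

def _sign(bits, i):
    return (-1) ** bin(bits & ((1 << i) - 1)).count("1")

def create(state, i):
    """a_i^+ acting on (phase, bits); None if orbital i already occupied."""
    if state is None or state[1] & (1 << i):
        return None
    return (state[0] * _sign(state[1], i), state[1] | (1 << i))

def annihilate(state, i):
    """a_i acting on (phase, bits); None if orbital i is empty."""
    if state is None or not state[1] & (1 << i):
        return None
    return (state[0] * _sign(state[1], i), state[1] & ~(1 << i))

def ket(*orbs):
    """|orbs> = a_{orbs[0]}^+ a_{orbs[1]}^+ ... |0> (rightmost acts first)."""
    s = (1, 0)
    for i in reversed(orbs):
        s = create(s, i)
    return s

def overlap(x, y):
    return 0 if x is None or y is None or x[1] != y[1] else x[0] * y[0]

# <ij|kl> = d_ik d_jl - d_jk d_il, cf. the overlap rule above, with i=0, j=1:
print(overlap(ket(0, 1), ket(0, 1)), overlap(ket(0, 1), ket(1, 0)))   # 1 -1
```

The phase bookkeeping automatically reproduces the antisymmetry rules: the same occupation reached in the opposite operator order carries the opposite sign.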
The normal-ordered form of an operator O is denoted N(O) = \,:O:\,. For example, a one-body operator takes the second-quantized form
U = \sum_{\lambda\mu} \langle\lambda|U|\mu\rangle\, a^\dagger_\lambda a_\mu
where \langle\lambda|U|\mu\rangle is the integral
\int dx\, \phi^*_\lambda(x)\, U(\hat x, \hat p)\, \phi_\mu(x)     (8.204)
A final example, this time involving a creation operator in the sequence, results in
[a_k,\; a^\dagger_l a^\dagger_m a_n a_p] = \delta_{kl}\, a^\dagger_m a_n a_p - \delta_{km}\, a^\dagger_l a_n a_p     (8.211)
In all cases, we manipulate the sequence of the fermion operators to bring it into the normal ordered form.
(assuming real orbitals) where the quantum numbers for particle #1 are in the left
bracket and the quantum numbers for particle #2 are in the right. One also sees this
written using the square brackets
[kn|lm] = dr1 dr2 φk (r1 )φn (r1 )v(12)φl (r2 )φm (r2 ) (8.217)
where φn (r ) is a spatial orbital (as opposed to spin orbital) and the integral is only
over the spatial variables (as opposed to both spin and space). The rationale for this
notation is convenient since for real orbitals
[ij|kl] = [ji|kl] = [ij|lk] = [ji|lk]     (8.218)
For purposes of clarity, we shall denote integrals in the physics notation using round (\,|\,) or angular \langle\,|\,\rangle brackets, and integrals in the quantum chemistry notation using square [\,|\,] brackets. The translation between physics and quantum chemistry is
\langle ij|kl\rangle_{\text{physics}} = [ik|jl]_{\text{q.chem}}     (8.219)
When in doubt, double-check the notation the author is using. Also, it is a good idea
to clearly specify which convention you are using.
\phi_\alpha = \sin\eta\,|1\uparrow\rangle + \cos\eta\,|2\uparrow\rangle
\phi_\beta = \cos\eta\,|1\downarrow\rangle + \sin\eta\,|2\downarrow\rangle     (8.222)
Also show that in the limit of U/t < 2, E uh f reduces to the Hartree–Fock limit.
Problem 8.3 If |\psi\rangle is a single Slater determinant state, show that the following factorization holds:
\langle a^\dagger_i a^\dagger_j a_k a_l\rangle = \rho_{kj}\rho_{li} - \rho_{lj}\rho_{ki}
SUGGESTED READING
Below is a list of review articles and books concerning the electronic structure of
π-conjugated systems.
E_k = \alpha - 2\beta\cos(ka)     (9.1)
where a is the bond distance between neighboring C atoms. For a \pi-electron system that is half-filled (that is, each C atom contributes one electron to the \pi system), the band gap at the Fermi energy is exactly 0. This is the case if all the C--C bonds are the same length, as in aromatic rings.
However, as we just noted in olefinic chains, the C=C and C--C bonds alternate,
so one expects the hopping term β to reflect this alternation. As the bond length
increases, β should decrease in magnitude; while as the bond length is compressed, β
should increase in magnitude. We can thus assign an order parameter, u, that reflects
the expansion and compression of the bonds along the olefinic chain. Assuming all
the sites are equivalent, we can write
H(u) = -\sum_{n\sigma}\left(\beta + (-1)^n\, 2\alpha u\right)\left(a^\dagger_{n\sigma} a_{n+1\,\sigma} + a^\dagger_{n+1\,\sigma} a_{n\sigma}\right)     (9.2)
FIGURE 9.1 A polyacetylene chain with alternating double and single C--C bonds; the repeat distance of the dimerized lattice is 2a.
\chi^v_k = \frac{1}{\sqrt N}\sum_n e^{ikan}\, u_n     (9.5)
\chi^c_k = \frac{1}{\sqrt N}\sum_n e^{ikan}(-1)^n\, u_n     (9.6)
where the u_n are the localized basis functions. For a chain of length L = Na, we can define new operators for the valence and conduction bands as
c^v_{k\sigma} = \frac{1}{\sqrt N}\sum_n e^{ikan}\, a_{n\sigma}     (9.7)
c^c_{k\sigma} = \frac{1}{\sqrt N}\sum_n e^{ikan}(-1)^n\, a_{n\sigma}     (9.8)
Inverting these and reintroducing them back into the modulated Hückel Hamiltonian above, we have for the dimerized lattice
H(u) = \sum_{k\sigma}\left[\varepsilon_k\left(c^{c\dagger}_{k\sigma} c^c_{k\sigma} - c^{v\dagger}_{k\sigma} c^v_{k\sigma}\right) + 4\alpha u\sin(ka)\left(c^{c\dagger}_{k\sigma} c^v_{k\sigma} + c^{v\dagger}_{k\sigma} c^c_{k\sigma}\right)\right]     (9.9)
We now introduce the Bogolyubov transformation to bring H(u) into diagonal form by mixing the valence and conduction bands,
\begin{pmatrix} a^v_{k\sigma} \\ a^c_{k\sigma} \end{pmatrix} = \begin{pmatrix} \alpha_k & \beta_k \\ \beta^*_k & -\alpha^*_k \end{pmatrix} \begin{pmatrix} c^v_{k\sigma} \\ c^c_{k\sigma} \end{pmatrix}     (9.10)
Since |\alpha_k|^2 + |\beta_k|^2 = 1, this is a unitary transformation. Now, inverting the transformation and requiring H(u) to be diagonal in this new representation, we find
H(u) = \sum_{k\sigma} E_k\left(n^c_{k\sigma} - n^v_{k\sigma}\right)     (9.11)
where
E_k = \left(\varepsilon_k^2 + \Delta_k^2\right)^{1/2}     (9.12)
and
\Delta_k = 4\alpha u\sin(ka)     (9.13)
The operators n^c_{k\sigma} and n^v_{k\sigma} are the occupation numbers of the conduction and valence bands. If we restrict \alpha_k to be real and positive, then
\alpha_k = \sqrt{\frac{1}{2}\left(1 + \frac{\varepsilon_k}{E_k}\right)}     (9.14)
\beta_k = \sqrt{\frac{1}{2}\left(1 - \frac{\varepsilon_k}{E_k}\right)}     (9.15)
which immediately gives us the conduction and valence band eigenstates. A plot of
the valence and conduction bands for an olefinic chain is given in Figure 9.2a using
the parameters suitable for polyacetylene. Notice at ka = π/2, the gap between the
FIGURE 9.2 (a) Valence and conduction bands for dimerized (u = 1) and undimerized
(u = 0) polyacetylene chains close to ka = π/2. (b) Variation of ground-state energy for
chain of length N a versus the dimerization order parameter u.
valence bands (E < 0) and conduction bands (E > 0) opens as the order parameter
varies from 0 to 1.
The ground-state energy (see Figure 9.2b) is given by summing over all occupied energy levels,
E_{gs}(u) = -2\sum_k E_k     (9.16)
where the sum over k is over the entire Brillouin zone, -\pi/2 \le ka \le \pi/2. Taking the sum to an integral,
E_{gs}(u) = -\frac{2L}{\pi}\int_0^{\pi/2a}\left[(2\beta\cos(ka))^2 + (4\alpha u\sin(ka))^2\right]^{1/2} dk     (9.17)
= -\frac{4N\beta}{\pi}\int_0^{\pi/2}\left(1 - (1 - z^2)\sin^2(ka)\right)^{1/2} d(ka)     (9.18)
= -\frac{4N\beta}{\pi}\,E(1 - z^2)     (9.19)
where z = 2\alpha u/\beta and E(m) is the complete elliptic integral of the second kind.
We can arrive at this conclusion by considering the fact that the period for the dimerized lattice has, in
fact, doubled from a to 2a upon dimerization and as such the Brillouin zone now extends from −π/(2a)
to π/(2a). If we now take the two atoms in the mth unit cell as having states u m1 and u m2 , then we can
write the wave function for the lattice as
+∞
ψk = ei2kma (c1 (k)u m1 + c2 (k)u m2 ) (9.21)
m=−∞
where t1 = β − αu and t2 = β + αu. Solving for the energy produces the result given above.
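The ground-state energy (9.19), supplemented by a harmonic strain term, can be evaluated numerically. The elastic constant K below is an assumed, illustrative value; because the electronic energy varies as u² ln|u| near u = 0, the total energy develops minima at nonzero dimerization for any K > 0 (the Peierls instability):

```python
import numpy as np

# Per-site energy: electronic term of Eq. (9.19) by direct quadrature of the
# elliptic integral, plus an assumed harmonic strain energy K*u^2.
# beta, alpha, and K are illustrative values, not parameters from the text.
beta, alpha, K = 1.0, 1.0, 10.0
theta = np.linspace(0.0, np.pi / 2, 20001)

def e_total(u):
    z = 2 * alpha * u / beta
    f = np.sqrt(1.0 - (1.0 - z * z) * np.sin(theta) ** 2)
    elliptic = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))   # E(1 - z^2)
    return -(4.0 * beta / np.pi) * elliptic + K * u ** 2

u = np.linspace(-0.3, 0.3, 601)
e = np.array([e_total(ui) for ui in u])
u_min = u[np.argmin(e)]
print(u_min)   # the minimum sits at a nonzero dimerization u
```

This reproduces the double-well structure of Figure 9.2b: the undimerized chain (u = 0) is a local maximum, so the lattice spontaneously dimerizes and a gap opens at the Fermi level.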
where x_n denotes the displacements from the uniform lattice, with a strain energy given by
E_{strain} = \frac{\omega^2}{2}\sum_n (x_n - x_{n+1})^2     (9.24)
Working within the adiabatic approximation, we can minimize the total energy of the system by requiring
-\langle\phi_o|\frac{\partial H_{el}}{\partial x_n}|\phi_o\rangle - \frac{\partial E_{strain}}{\partial x_n} = 0     (9.25)
where φo is the lowest energy eigenstate of Hel for a given lattice configuration. The
first term is the Hellmann–Feynman force on the nth lattice site when the exciton
is in the lowest energy eigenstate. The second is the strain force, which increases
as the lattice is displaced from its uniform position. If β and λ are both negative,
then increasing |φo |2 in a given region gives an attractive interaction between the
lattice atoms. However, displacing the atoms from their uniform position increases the
strain energy. The final equilibrium state is where the strain force and the Hellmann–
Feynman forces are in balance.104–109 In Figure 9.3 we show the effect of exciton
self-trapping on a model 11-site lattice. The first (Figure 9.3a) is an order parameter
showing the displacement of the lattice sites from their original position in the final
relaxed state. The second (Figure 9.3b) shows how the lattice relaxes from its original
uniform state (top) to the final relaxed or self-trapped state (bottom). In Figure 9.3c
we show the exciton’s probability density for the initial and final (relaxed) state.
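The force balance of Eq. 9.25 can be illustrated with a small self-consistent calculation. The sketch below uses a simplified Holstein-type local coupling (site energy −χx_n and strain (K/2)Σx_n²) rather than the exact model of the text, and made-up parameters, but it shows the same physics: iterating between the lowest eigenstate and the Hellmann–Feynman displacements localizes the exciton on a finite lattice.

```python
import numpy as np

N, J, chi, K = 11, -0.5, 1.0, 0.5     # illustrative parameters
x = np.zeros(N)                       # lattice displacements
for _ in range(500):                  # self-consistent field loop
    # electronic Hamiltonian: hopping J plus exciton-lattice site energies
    H = np.diag(-chi * x) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
    w, v = np.linalg.eigh(H)
    phi = v[:, 0]                     # lowest-energy eigenstate
    # Hellmann-Feynman force chi*|phi_n|^2 balanced against strain force K*x_n,
    # applied with 50/50 mixing for stable convergence
    x = 0.5 * x + 0.5 * chi * phi**2 / K
print(np.max(phi**2))   # far above the delocalized value ~2/(N+1)
```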
We can extend the model for a continuum lattice by writing the Hamiltonian as
H = \sum_k \varepsilon(k)\, a_k^\dagger a_k + \sum_q \omega(q)\, B_q^\dagger B_q + \sum_{nq} g(n,q)\, a_n^\dagger a_n \left(B_q + B_q^\dagger\right) \qquad (9.26)
where {a_k, a_k^†} are exciton operators in k-space and {a_n, a_n^†} denote exciton operators
in the lattice representation; the two are related by a Fourier transformation. {B_q, B_q^†}
are phonon operators with dispersion relation ω(q). The electron–phonon coupling,
g(n, q), depends on the type of phonon in question. For longitudinal acoustic modes,
g(n,q) = \frac{1}{\sqrt{N}}\, e^{iqR_n}\, \frac{Cq}{\sqrt{2\rho\,\omega(q)\,a_o^3}} \qquad (9.27)
where ρ is the density of the medium, ao is the lattice constant, C is the deformation
parameter, and ω(q) = vq with v as the velocity of sound. For optical (transverse)
phonons,
g(n,q) = \frac{1}{\sqrt{N}}\, e^{iqR_n}\, \gamma_o \qquad (9.28)
N
where γo and ω are assumed to be constant.
FIGURE 9.3 (a, b) Lattice distortion for an exciton self-trapping on a finite-sized lattice.
(c) Comparing the unrelaxed to trapped exciton probability density for finite lattice.
and

\tilde{B}_q^\dagger = B_q^\dagger + \sum_n \hat{n}_n\, g(n,-q)/\omega(q) \qquad (9.31)
Irrespective of the phonon model, if we take the phonons to be normal coordinates, then

\sum_q \frac{g(n,q)\,g(m,q)}{\omega(q)} = \delta_{nm}\, C_o \qquad (9.33)
where
C_o = \begin{cases} C^2/(2\rho v^2 a_o^3) & \text{for acoustic modes} \\ \gamma_o^2/\omega & \text{for optical modes} \end{cases} \qquad (9.34)
Thus,
H_{\rm ad} = \sum_k \varepsilon_k\, a_k^\dagger a_k - C_o \sum_n \hat{n}_n^2 \qquad (9.35)
So, as discussed above, increasing the exciton probability density at a given lattice
site results in a net lowering of the exciton’s energy.
We can push this analysis farther by assuming a Gaussian form for the exciton’s
wave function
a(r) = \left(\frac{2a_o^2}{\pi\sigma^2}\right)^{1/4} e^{-(r/\sigma)^2} \qquad (9.36)
Taking its Fourier transformation,
a(k) = \left(\frac{2\sigma^2}{\pi a_o^2}\right)^{1/4} e^{-k^2\sigma^2} \qquad (9.37)
These are both normalized so that
\sum_n a_n^2 = \frac{N}{L}\int dr\, a(r)^2 = 1 \qquad (9.38)
and
\sum_k a_k^2 = \frac{L}{N}\int dk\, a(k)^2 = 1 \qquad (9.39)
For a free particle on an infinite lattice, the energy dispersion is given by ε(k) = 4βk²a_o². Thus,

\sum_k \varepsilon(k)\, a_k^2 = \beta\,\frac{a_o^2}{\sigma^2} \qquad (9.41)
Putting this all together, one finds that the adiabatic energy is given by
E_{\rm ad} = \beta\,\frac{a_o^2}{\sigma^2} - \frac{C_o}{\sqrt{\pi}}\,\frac{a_o}{\sigma} \qquad (9.42)
Furthermore, extending this to the d-dimensional isotropic lattice, we find:
E_{\rm ad} = \beta\,\frac{a_o^2}{\sigma^2} - \frac{C_o}{\pi^{d/2}}\left(\frac{a_o}{\sigma}\right)^d \qquad (9.43)
Minimizing this with respect to the sole remaining parameter, σ , requires
\frac{a\,C_o\, d\,\pi^{-d/2}}{\sigma^2}\left(\frac{a}{\sigma}\right)^{d-1} - \frac{2a^2\beta}{\sigma^3} = 0 \qquad (9.44)
Solving for σ , we find for the one-dimensional lattice
\sigma = \frac{2aJ\sqrt{\pi}}{C_o} \qquad (9.45)
However, for the 2D isotropic lattice, we find that d E ad /dσ = 0 occurs for the special
case of Co = 2β/π for a continuum lattice. Sumi and Sumi reach a similar conclusion
for finite-sized lattices110 in which they conclude there is a phase boundary between
the free and self-trapped (small-radius) exciton that depends upon the size of the
system and the strength of the electron/phonon coupling.
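The one-dimensional variational result, Eq. 9.45, can be verified by minimizing E_ad(σ) from Eq. 9.42 on a grid (taking β → J > 0, with arbitrary illustrative values of a, J, and C_o):

```python
import numpy as np

a, J, Co = 1.0, 1.0, 0.5                       # illustrative units
sigma = np.linspace(0.5, 50.0, 200001)
E_ad = J * (a / sigma) ** 2 - Co * a / (np.sqrt(np.pi) * sigma)
sigma_min = sigma[np.argmin(E_ad)]
sigma_analytic = 2 * a * J * np.sqrt(np.pi) / Co   # Eq. 9.45
print(sigma_min, sigma_analytic)                   # both ~7.09
```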
More recent quantum/classical dynamical simulations by Kobrak et al.107–109 and
by Tretiak et al.111–115 have examined the interplay between different types of vi-
brational motion in the trapping and relaxation of an exciton on a polymer chain. In
particular for poly-phenylene vinylene (PPV), self-trapping of excitations on about
six repeat units in the course of photoexcitation relaxation identifies specific slow
(torsion) and fast (bond stretch) nuclear motions strongly coupled to the electronic
degrees of freedom. Similar conclusions were drawn using semiempirical excited-
state techniques by Karabunarliev and Bittner.116,117
These chains form three quasi-linear spines along the α helix. These spines are not
exactly linear and slowly wrap about the helical axis as shown in Figure 9.4b. Each
C=O group carries a substantial permanent electric dipole moment μ directed from the
Oδ− to the Cδ+ . These dipoles run parallel to the spines. Thus, to a first approximation,
neighboring C=O’s along each spine are coupled to each other via dipole–dipole
interactions
J_{i,i+1} = \frac{2|\mu|^2}{R^3}
From this we can build a simple vibrational exciton model within a local basis where
|φ_i⟩ represents a single stretching quantum on the ith C=O along a given spine.
That is,
H_{\rm ex} = \hbar\omega\, I + J\sum_i^N |\phi_i\rangle\langle\phi_{i+1}| \qquad (9.46)
where ε = h̄ω is the excitation energy for a C=O amide-I vibration and I is the
N × N identity matrix. Hex is the familiar tridiagonal Hamiltonian matrix we have
seen previously. Thus, its eigenvalues and eigenvectors can be immediately deduced.
For a sufficiently long helix, the eigenfunctions are plane waves with energy E(k) = ε + 2J cos(ka).
Thus, for a perfect chain, a C=O vibrational exciton will be delocalized over the
entire chain.
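The spectrum of the familiar tridiagonal matrix in Eq. 9.46 can be checked directly. In the sketch below, eps ≈ 0.205 eV corresponds to the ~1660 cm⁻¹ amide-I quantum, while the coupling J is an illustrative nearest-neighbor value, not one taken from the text:

```python
import numpy as np

N, eps, J = 20, 0.205, -0.002
H = eps * np.eye(N) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
w = np.sort(np.linalg.eigh(H)[0])
# Known spectrum of the open tridiagonal chain: E_n = eps + 2J cos(n*pi/(N+1))
analytic = np.sort(eps + 2 * J * np.cos(np.arange(1, N + 1) * np.pi / (N + 1)))
print(np.max(np.abs(w - analytic)))   # agreement to machine precision
```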
The α helix itself is free to undergo a variety of motions. A compression of the
α-helix chain would change the local electrostatic environment about the C=O group.
We use the “physics convention” where dipoles point from source (−) to sink (+) rather than the “chemistry convention.”
FIGURE 9.4 (a) Peptide group showing amide-I vibrational mode. The amide-I mode gives a
strong peak in the IR and Raman spectra of proteins with little variation: 1665–1660 cm−1 for
α helices, 1660–1665 cm−1 for a β sheet, and 1665–1680 cm−1 for a random coil. (b) Three-
dimensional model of an alpha-helix coil with hydrogen bonds between linked C=O· · ·H-N
groups.
Combining these contributions, the total Hamiltonian takes the familiar form:
H = \sum_i \left[\varepsilon + \frac{\partial\varepsilon(\{x_i\})}{\partial R}\left(\hat{x}_{i+1} - \hat{x}_{i-1}\right)\right] |\phi_i\rangle\langle\phi_i| + J\sum_i^N |\phi_i\rangle\langle\phi_{i+1}| + \frac{1}{2m}\sum_i \hat{p}_i^2 + \frac{\kappa}{2}\sum_i \left(\hat{x}_i - \hat{x}_{i+1}\right)^2 \qquad (9.47)
where pi is the momentum conjugate to xi . Of course, the x̂i and p̂i are quantum
mechanical operators and a rigorous solution demands that we take this into account.
If we allow that the mass of an amide-I group is much larger than the effective mass
of a C=O vibrational exciton m ∗ = h̄ 2 /2J , then we may safely invoke a classical
treatment for the u i degrees of freedom. Davydov introduces this by making an ansatz
for the state vector for the coupled exciton/lattice wave function:
|\psi(t)\rangle = \sum_i \phi_i(t)\, B_i^\dagger\, \exp\left[-\frac{i}{\hbar}\sum_j \left(u_j(t)\,\hat{p}_j - \pi_j(t)\,\hat{x}_j\right)\right] |0\rangle \qquad (9.48)

where |0⟩ is the ground-state vector and B_i^† = |φ_i⟩⟨0| creates an excitation on site i.
Davydov then makes what is essentially the Ehrenfest approximation by writing u_j(t) = ⟨ψ(t)|x̂_j|ψ(t)⟩ and π_j(t) = ⟨ψ(t)|p̂_j|ψ(t)⟩.
where φi is the quantum mechanical amplitude giving the probability |φi |2 for finding
the ith C=O in its first vibrational excited state. Also, we have introduced χ as the
linear coupling between adjacent C=O’s. These last two equations are the main results
of Davydov’s original paper.118–121 Let us next assume that the de Broglie wavelength
of the exciton is large compared to R. Thus, we can rewrite these last two equations
in continuum form as
i\hbar\frac{\partial\phi}{\partial t} = \left[\varepsilon_o + 2\chi\frac{\partial u}{\partial x}\right]\phi - J\frac{\partial^2\phi}{\partial x^2} \qquad (9.52)
and
\frac{\partial^2 u}{\partial t^2} - \frac{\kappa}{m}\frac{\partial^2 u}{\partial x^2} = \frac{2\chi}{m}\frac{\partial|\phi|^2}{\partial x} \qquad (9.53)
where x now represents a location along the helical spine and εo = ε − 2J . The left-
hand side of the second equation is that of the wave equation. The inhomogeneous
term on the right-hand side is a source. What we see, then, is that the quantum
mechanical motion of the C=O vibrational exciton acts as a source for the generation
of longitudinal sound waves in the α helix.
With this in mind, let us seek traveling wave solutions to Equations 9.52 and 9.53
of the form
u(x, t) = u(x − vt)
where v is the velocity of propagation. Substituting this into Equation 9.53 yields
\frac{\partial u}{\partial x} = -\frac{2\chi}{\kappa(1 - s^2)}\,|\phi(x)|^2
where s = v/vs < 1 is the ratio of the propagation velocity to the velocity of sound.
Introducing this back into Equation 9.52 yields a nonlinear Schrödinger equation
i\hbar\frac{\partial\phi}{\partial t} = -J\frac{\partial^2\phi}{\partial x^2} + \varepsilon_o\phi - \frac{4\chi^2}{\kappa(1 - s^2)}\,|\phi|^2\phi = H[\phi]\,\phi \qquad (9.54)
This is the source of the nonlinear interactions in which the solution of the wave
function depends upon the wave function itself! Similar nonlinear Schrödinger equa-
tions arise in various contexts, typically within the self-consistent field or Hartree
approximation to the many-body problem. Here, however, the nonlinear interaction
arises because we have a feedback mechanism between the vibrational motion of the
helix and the quantum motion of the C=O exciton.
Let us take a normalized Gaussian trial wave function

\phi(x) = \frac{1}{(\pi a^2)^{1/4}}\, e^{-x^2/(2a^2)}
and use the variational principle to minimize the total energy with respect to the
width a
\frac{dE}{da} = \frac{d}{da}\langle\phi|H[\phi]|\phi\rangle = \frac{d}{da}\left[\varepsilon_o + \frac{J}{2a^2} - \frac{1}{2}\,\frac{2\sqrt{2}\,\chi^2}{a\sqrt{\pi}\,\kappa(1 - s^2)}\right] = 0 \qquad (9.55)
Note, the 1/2 appearing in front of the term arising from the exciton/lattice interaction
is included to avoid the interaction between the exciton and itself. This produces the
variational estimate of the width of the exciton wave function
a = \frac{J^2\kappa^2\pi\,(s^4 - 2s^2 + 1)}{8\chi^4}
Since 1 ≥ 1 − 2s 2 + s 4 ≥ 0 for s < 1, a > 0 for all traveling wave solutions. Let us
take the case where s = 0 to find the energy of a trapped exciton using
a = \frac{J^2\kappa^2\pi}{2\chi^4}

and

E_{\rm ste} = \varepsilon_o - \frac{3\chi^4}{\pi|J|\kappa^2}
The wave function as written here is normalized and gives a self-trapping energy of
E_{\rm ste} = \varepsilon_o - \frac{\chi^4}{\kappa^2|J|}
For a completely rigid helix, κ → ∞ and the self-trapping vanishes and we have
a band of delocalized excitations. It can also be seen from the sign of the trapping
energy that the energy of the trapped exciton will always be below that of a delocalized
exciton, εo , when the coupling between the exciton and the lattice is taken into account.
J ψ + κ|ψ|2 ψ = Eψ
TABLE 9.1
Parameter Values for Davydov Model from Lomdahl and Kerr

             κ (N/m)   J (cm−1)   χ (pN)   χ²/(κJ)   t_o (ps)
Discrete       5.0       20.0      75.0      2.83      19.5
Continuum     13.0       31.2      40.0      0.29      12.1
α helix       13.0        7.8      62.0      1.91      12.1
F_i = -m\gamma\,\dot{u}_i + \eta_i(t)

where γ is the vibrational relaxation rate and η_i(t) is a Gaussian white-noise term with ⟨η_i(t)η_j(t′)⟩ = 2mγk_BT δ_{ij} δ(t − t′) and ⟨η_i(t)⟩ = 0. In other words, each site is subject to thermal noise and we assume
there is no correlation between the thermal noise from site to site. This is certainly a
reasonable assumption since it gives the average kinetic energy (per site) as
\frac{m}{2}\left\langle\dot{u}_i^2(t)\right\rangle = \frac{1}{2}k_B T
as expected for a classical oscillator. They conclude that at physiological temper-
atures, random fluctuations in the lattice are too strong and prevent self-trapping.
Similar conclusions were reached by Schweitzer based upon quantum perturbation
theory.124 On the other hand, Förner concludes that Davydov solitons are stable at
300 K in reasonable parameter ranges, but only for special initial conditions close
to the terminal sides of the chain.125 Moreover, it is well known in the biophysi-
cal literature that polyamide in water undergoes a helix-to-coil transition at around
T = 280 K. Consequently, Davydov solitons do not exist in free polyamide chains
at physiologic temperatures; however, they may exist in other peptide chains and in
proteins. Their existence remains a theoretical question.
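The equipartition statement above is easy to check with a one-site Langevin simulation. Assuming white noise with ⟨η(t)η(t′)⟩ = 2mγk_BT δ(t − t′), the standard fluctuation–dissipation choice, a simple Euler–Maruyama integration of a harmonically bound site thermalizes to ⟨mu̇²/2⟩ ≈ k_BT/2 (all parameters in made-up reduced units):

```python
import numpy as np

rng = np.random.default_rng(1)
m, gamma, ks, kBT = 1.0, 1.0, 1.0, 1.0    # reduced units, illustrative
dt, nsteps = 1e-3, 500_000
u, vel, ke_sum = 0.0, 0.0, 0.0
for _ in range(nsteps):
    noise = np.sqrt(2 * gamma * kBT * dt / m) * rng.standard_normal()
    vel += (-gamma * vel - (ks / m) * u) * dt + noise
    u += vel * dt
    ke_sum += 0.5 * m * vel * vel
avg_ke = ke_sum / nsteps
print(avg_ke)    # approaches kBT/2 = 0.5
```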
where xn represents the displacement of the nth atom with mass m n from its equi-
librium position. We can eliminate the explicit mass dependency by adopting mass-
scaled coordinates, x̃_n = m_n^{1/2} x_n, and momenta. The phonons (or harmonic motions)
of the molecule are obtained by diagonalizing the second derivative matrix (Hessian)
h_{nm} = \left.\frac{\partial^2 E_{\rm gs}}{\partial\tilde{x}_m\,\partial\tilde{x}_n}\right|_{\rm eq}
where the b_m^† and b_m operators obey the familiar commutation relation for boson
operators: [b_n, b_m^†] = δ_{nm}. The zero-point energy can be folded into the ground-state
energy by defining the energy origin as
E_0 = E_{\rm gs}(x = 0) + \frac{1}{2}\sum_n \hbar\omega_n
The electronic energy curve and corresponding harmonic levels for the ground state
correspond to the lower parabola shown in Figure 9.5a [116].
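As a concrete instance of the Hessian prescription, take three equal masses joined by two springs; diagonalizing the mass-scaled second-derivative matrix returns ω_n² for the translational zero mode and the two genuine vibrations (masses and force constant are arbitrary choices):

```python
import numpy as np

m, kf = 2.0, 3.0     # mass and spring constant, arbitrary values
# V = (kf/2)[(x1 - x2)^2 + (x2 - x3)^2]; with equal masses the mass-scaled
# Hessian is h_nm = (1/m) d^2 V / dx_n dx_m
h = (kf / m) * np.array([[ 1.0, -1.0,  0.0],
                         [-1.0,  2.0, -1.0],
                         [ 0.0, -1.0,  1.0]])
w2 = np.linalg.eigvalsh(h)                  # eigenvalues are omega_n^2
freqs = np.sqrt(np.clip(w2, 0.0, None))     # clip guards the zero mode
print(freqs)   # 0 (translation), sqrt(kf/m), sqrt(3*kf/m)
```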
Upon electronic excitation, the electronic density about the nuclei is changed.
Consequently, a molecule in its ground-state equilibrium geometry will be subject
to a potential force causing it to distort toward some new equilibrium geometry. Let
E k (q) be the energy of the kth excited state as computed at nuclear coordinate q and
expand this about the ground-state equilibrium geometry (q = 0):
E_k(q) = E_k(0) + \sum_n \frac{\partial E_k(q)}{\partial q_n}\, q_n + \frac{1}{2}\sum_{nm} \frac{\partial^2 E_k(q)}{\partial q_n\,\partial q_m}\, q_n q_m + \cdots \qquad (9.59)
FIGURE 9.5 (a) Franck–Condon model in one normal dimension: ω_00 and ω_vert are the adiabatic and
vertical transition frequencies, ω_p the vibrational frequency, and d_p the interstate distortion in the normal
coordinate. Several of the v–0 and 0–v vibrational transitions in absorption and emission are
given. (b) Theoretical 1Bu←1Ag absorption bands of all-trans polyenes with n double bonds.
Vertical lines are the positions and intensities of the dominant vibrational features in absorp-
tion of tert-butyl-capped polyenes. Lines give the positions of the absorption and emission
peaks from Ref. 126. (From S. Karabunarliev, M. Baumgarten, E. R. Bittner, and K. Müllen,
J. Chem. Phys. 113, 11372, 2000. Copyright (2000) American Institute of Physics.)
E k (0) is the excitation energy taken at the ground-state equilibrium geometry. We now
make the assumption that the second derivative term is diagonal so that the ground-
state normal modes qn are also normal modes in the excited state. This assumption
does not always hold and one can have the case where one needs to define new
normal modes qn(k) for each electronic state with frequencies ωn(k) . Generally, the
approximation is robust for larger conjugated polymer systems; however, for small
molecules one should be careful in following this prescription. We also shall ignore at
this point any coupling between the electronic states brought about by the geometric
change in the molecule:
E_k(q) \approx E_k(0) + \sum_n \left[\frac{1}{2}\omega_n^2 q_n^2 + f_n^{(k)} q_n\right]

\approx E_k(0) + \frac{1}{2}\sum_n \omega_n^2\left(q_n - d_n^{(k)}\right)^2 - \frac{1}{2}\sum_n \omega_n^2\, d_n^{(k)2} \qquad (9.60)
Thus, within the assumptions here, a molecule in its kth excited state feels a linear
force distorting it away from the ground-state geometry toward some new geometry
n=2
n=4
n=5
FIGURE 9.6 Theoretical absorption (solid) and emission (dotted) band shapes for the low-
est electronic transitions in oligo(para-phenylenevinylenes) with n benzene rings. Computed
transition probabilities without line-shape broadening are given for the tetramer as illustrated
(from Ref. 116).
The line shape for transitions between initial and final electronic states (k → k′)
is given by the general expression
F(E) = \frac{1}{Z_k}\sum_{mn} e^{-E_m^{(k)}\beta}\, \left|\langle k\, m_k|\mu|k'\, n_{k'}\rangle\right|^2\, \delta\!\left(E_n^{(k')} - E_m^{(k)} - E\right) \qquad (9.62)
where the sum is over the Boltzmann-weighted initial vibrational states of electronic
state k and all possible final vibrational states of electronic state k′. Z_k is the canon-
ical partition function for the vibrations on state k. E m(k) is the vibronic energy of
the mth vibrational level in the kth electronic state. Implicit in this expression is
that we are summing over all the vibrational levels of all the phonon modes. In or-
der to simplify our notation considerably, we shall consider the case where there is
only a single dominant phonon mode. The generalization to a multimode system is
straightforward.116,127 Notice that the line-shape function reduces to the Fermi golden
rule rate in the limit
W = \frac{2\pi}{\hbar}\lim_{E\to 0} F(E)
To proceed, we make the Condon approximation that electronic transitions occur
within a fixed nuclear framework. This allows us to approximate the transition matrix
element as
\langle k\, m_k|\mu|k'\, n_{k'}\rangle = \langle k|\mu|k'\rangle\,\langle m_k|n_{k'}\rangle
The first factor is simply the matrix element of the dipole operator between the initial
and final electronic states. This determines the electronic selection rule and overall
intensity of the transition. We take this to be independent of the vibrational state and
thus pull it out of the summation. The second factor represents the overlap matrix
element between two harmonic oscillator wave functions in displaced harmonic wells.
For clarity and distinction, we label each vibrational state by the electronic state with
which it is associated. Finally, since most spectra are taken from (or to) the ground
electronic state, we assume from here on that one of our electronic states is the ground
state (k = 0). The modulus squared of the vibrational overlap between the m_k and
n_{k′} levels is given by

\left|\langle m_k|n_{k'}\rangle\right|^2 = e^{-X}\,\frac{m_k!}{n_{k'}!}\, X^{n_{k'}-m_k}\left[L_{m_k}^{n_{k'}-m_k}(X)\right]^2 \qquad (9.63)
where X = d²/2, d is the dimensionless distortion between the minima of state k
and the minima of state k′, and L_a^b(x) is an associated Laguerre polynomial.128 At
low temperatures—that is, where kT < h̄ωn —the vibrational population of the initial
state is concentrated in the lowest lying vibrational level; thus, we can simplify the
Franck–Condon factor to
\left|\langle 0_k|n_{k'}\rangle\right|^2 = e^{-X}\,\frac{X^{n_{k'}}}{n_{k'}!} \qquad (9.64)
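The low-temperature Franck–Condon factors of Eq. 9.64 form a Poisson distribution in the final-state quantum number with mean X (the Huang–Rhys factor). A quick numerical check, with an illustrative displacement d:

```python
import numpy as np
from math import factorial

d = 1.8                     # dimensionless distortion, illustrative
X = d**2 / 2                # here X = 1.62
fc = np.array([np.exp(-X) * X**n / factorial(n) for n in range(40)])
print(fc.sum())             # -> 1: the 0_k level decays to *some* n_k' level
print(fc.argmax())          # most intense line sits near n ~ X (here n = 1)
```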
Finally, we recall the fact that the delta function can be represented as the limiting
case of a Lorentzian
\delta(x - x_o) = \lim_{\varepsilon\to 0}\frac{1}{\pi}\,\frac{\varepsilon}{(x - x_o)^2 + \varepsilon^2} \qquad (9.65)
FIGURE 9.7 Vibrational transitions in absorption and emission within the harmonic Condon
approximation. Models of (a) coupling in a double-bond stretching mode (0.2 eV), (b) cou-
pling in a ring-torsional mode (0.01 eV), and (c) coupling in both modes. The curves are the
convolutions for a Lorentzian linewidth of 0.04 eV for the vibrational transitions. The asym-
metry between absorption and emission is a consequence of a minor stiffening of the accepting
vibrational modes for absorption (from Ref. 127).
which is, of course, the line shape for a damped oscillator. Consequently, we can as-
sociate a lifetime τ = 1/γ to each vibrational mode to account for the fact that the
molecule is embedded in a continuum and, thus, write the absorption (or emission)
line shape as
F(\omega) = \left|\langle k|\mu|k'\rangle\right|^2 e^{-X}\sum_{n_{k'}=0}^{\infty} \frac{X^{n_{k'}}}{n_{k'}!}\,\frac{\hbar\gamma}{\pi}\, \frac{1}{\left(E_0^{(k')} - E_0^{(k)} + n_{k'}\hbar\omega_{k'} - \hbar\omega\right)^2 + (\hbar\gamma)^2} \qquad (9.66)
Figure 9.7 shows the vibronic line shapes for a model conjugated polymer. In
Figure 9.7a, only high frequency modes are included in the model. This produces the
familiar symmetric absorption and emission line structure. In Figure 9.7b, only low
frequency phonon modes were included in our model. The absorption and emission
line shapes are nearly symmetric; however, the absorption band is slightly broader than
the emission. Finally, in Figure 9.7c, we include both high and low frequency modes
in our model. Here, the absorption band is significantly broader than the emission
band and the high frequency vibronic fine structure is somewhat washed out. The
emission, on the other hand, exhibits well resolved vibronic fine structure.
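Line shapes of this kind are straightforward to generate from Eqs. 9.64 and 9.66: weight a Lorentzian at each vibronic line by its Franck–Condon factor. All numbers below (origin, phonon quantum, Huang–Rhys factor X, and width) are illustrative rather than taken from the calculations shown in the figures:

```python
import numpy as np
from math import factorial

E00, hw, X, hg = 3.0, 0.2, 1.0, 0.04        # eV; illustrative values
E = np.linspace(2.5, 4.5, 4001)
F = np.zeros_like(E)
for n in range(20):
    wn = np.exp(-X) * X**n / factorial(n)   # Franck-Condon weight, Eq. 9.64
    F += wn * (hg / np.pi) / ((E - E00 - n * hw) ** 2 + hg**2)
peak = E[np.argmax(F)]
print(peak)   # for X = 1 the 0-1 line at 3.2 eV narrowly beats the 0-0 line
```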
n=2
n=4
n=5
n=6
FIGURE 9.8 Computed S1←S0 absorption and emission bands of p-polyphenyls with n
phenyl/phenylene rings. Arrows mark the theoretical electronic origins (from Ref. 127).
Figure 9.8 shows the theoretical absorption and emission spectra for various
oligomers of polyphenylenevinylene (OPV) as computed using a semiempirical ap-
proach implemented within the MOPAC code.116 Here we can see a very clear vibronic
progression indicative of the strong vibronic coupling between the π -electronic sys-
tem and the skeletal vibrations. Notice that in the case for n = 4, OPV with four
phenylene rings, we include the contribution from all the modes without imposing any
additional vibronic line broadening. We can see that the main vibronic fine-structure
features are composed of nearly a continuum of finer-grained lines corresponding to
the contributions from the low-frequency modes of the molecule. In this case, the
prominent vibronic progression in the emission and absorption spectra is due to the
C=C stretching modes.
In the absorption spectra, the lines correspond to excitations from the lowest vi-
brational state to all possible vibrational states in the first electronic excited state.
Consequently, all the fine structure is to the blue of the absorption origin (correspond-
ing to the 0-0 vibronic transition). Thus, we can assign the main absorption peaks
as 0-0, 0-1, 0-2, and so forth. The emission spectra, on the other hand, correspond
to transitions from the 0 vibrational level in the upper electronic state to the nth
vibrational level of the lower state. As n increases, the energy gap decreases so all
emission peaks are to the red of the 0-0 line.
9.5 SUMMARY
Electron-phonon interactions play a significant role in the electronic states of con-
jugated systems. In this chapter, we have gone from an analytic treatment of elec-
tron/phonon coupling via the SSH model, used this approach to study exciton self-
trapping, and finally presented a linear-coupling model that when combined in con-
cert with a semiempirical technique (such as MOPAC) can reproduce the vibronic fine
structure of a wide range of conjugated polymer systems.
H = Hel + Hnuc
where
H_{\rm nuc} = \frac{p_{\rm nuc}^2}{2M} + \frac{1}{2}kx^2
describes the nuclear motion and
H_{\rm el} = \beta\left(a_1^\dagger a_2 + a_2^\dagger a_1\right) + g\hbar\omega\left(a_2^\dagger a_2 - a_1^\dagger a_1\right)\left(\frac{2M\omega}{\hbar}\right)^{1/2} X
V_{BO}(X) = V_{\rm nucl}(X) + \langle\psi|H_{\rm el}|\psi\rangle
Problem 9.3 Consider the time evolution of a spin state of an electron in a magnetic
field. Take the unperturbed Hamiltonian to be the Zeeman Hamiltonian with static
magnetic field Bo taken in the z direction:
H_o = \gamma B_o \hat{S}_z
V(t) = \gamma B_x \hat{S}_x \cos(\omega t)
ψ(t) = C_α(t)|α⟩ + C_β(t)|β⟩
and write the equations of motion for the coefficients C_α and C_β. Next, take
the limit that ω = γB_o. Carefully consider the time evolution of each term
and neglect all terms that vary as exp(±2iωt) (rotating wave approximation).
Show that these equations are the same as you derived in the first part.
3. Solve the equations of motion numerically for the case where the driving
term is both on and off resonance with the Zeeman splitting. Make a plot
of the survival probability of the initial spin-up state as a function of time
for the resonant and nonresonant cases. How do your numerical results
compare with the results you derived above?
10 Lattice Models for
Transport and Structure
In Chapter 8 we largely cast our discussion of the electronic structure of π-conjugated
systems upon the idea that the C 2p_z orbitals provided a good basis for the molecular
orbitals. We also made the simplifying assumption that the atom-centered orbitals
were orthogonal to each other. This approximation goes under the moniker of “neglect
of differential overlap” (NDO) in the quantum chemical literature and “tight-binding
approximation” in the solid-state physics literature. In this chapter, we shall continue
along these lines in order to discuss transport and dynamics in extended systems.
10.1 REPRESENTATIONS
10.1.1 BLOCH FUNCTIONS
Consider the Schrödinger equation for a system with a periodic potential (and with
Hamiltonian Ho ) and some external potential U . For U = 0, the system corresponds
to a perfectly periodic system while U represents the contributions from point defects
or impurities within the lattice. Alternatively, U could represent an entirely external
potential, say, from an electric or magnetic field, or the radiation field. While the
physics is different in each case, the underlying mathematical technique used will be
the same. We can expand the wave function in terms of a complete set of plane waves
defined by the periodicity of the underlying system.
For an unbound, free electron in one dimension, V(x) = 0 at all points and we
have the general solution of the Schrödinger equation, ψ(k, r) = e^{±ikr},
with energy
E(k) = \frac{\hbar^2 k^2}{2m}
If we force the system to be confined to x ∈ [0, L] as in the case of a particle in a
box, then k can take only discrete values and the eigenstates read
\psi(k, r) = \sqrt{\frac{2}{L}}\,\sin(kr)
with k = nπ/L and n = 1, 2, 3, . . .. Likewise, for the periodic system
\psi(k, r) = \frac{1}{\sqrt{L}}\, e^{\pm ikr}
with k = n2π/L. In Figure 10.1 we show E(k) for the free particle, the bound particle,
and the particle on a periodic lattice. For the free particle, all values of k give rise to
283
FIGURE 10.1 (a) Energy E(k) versus k for a free particle (solid line), a particle on a periodic
lattice (⊕) and a particle in a box ⊗. (b) Energy band for particle on a discrete lattice. The points
indicated by ⊕ represent the eigenenergies for a linear chain (⊗) and a ring (⊕) of 10 atoms.
The shaded region between −π and π defines the first Brillouin zone.
Next, we take advantage of the fact that the system is periodic and write φ_{j+n} =
e^{iknL} φ_j, where n is an integer and L is the spacing between adjacent atoms.
This allows us to eliminate φ_j from both sides and obtain the energy band
E(k) = ε + 2t cos(k L)
For a finite lattice of atoms, we have the Hückel results we have seen before. In
Figure 10.1b we show the energies for both a chain and a ring of 10 atoms superim-
posed upon the energy curve for the infinite system. The expressions for the energies
were given in Equations 8.18 and 8.45. For the case of a ring of N atoms, the boundary
conditions give stationary solutions only when k L takes integer multiples of 2π/N .
Lattice Models for Transport and Structure 285
Extending this to an infinite ring, one can easily see that all points k L ∈ [−π, π ]
would give rise to stationary solutions. Going beyond this range, the energies re-
peat themselves and thus we can limit our discussion to energies with wave vectors
k ∈ [−π/L , π/L]. This defines the first Brillouin zone (BZ).
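The ⊕ points of Figure 10.1b can be reproduced by diagonalizing the 10-site ring Hamiltonian directly; its eigenvalues sample the band E(k) = ε + 2t cos(kL) at kL = 2πn/N:

```python
import numpy as np

N, eps, t = 10, 0.0, -1.0
H = eps * np.eye(N) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, N - 1] = H[N - 1, 0] = t             # ring (periodic) boundary
w = np.sort(np.linalg.eigvalsh(H))
band = np.sort(eps + 2 * t * np.cos(2 * np.pi * np.arange(N) / N))
print(np.max(np.abs(w - band)))           # agreement to machine precision
```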
We can generalize this idea considerably by defining all basis functions in terms
of complete sets of functions over the first BZ.
Bloch functions are eigenfunctions of Ho
Ho ψn (k, r ) = E n (k)ψn (k, r ) (10.3)
where n is a band index and k is the wave vector or crystal momentum. This represen-
tation is termed the crystal momentum representation (CMR) since it is based upon
states with definite k. These functions are orthonormal:
\int \psi_n^*(k, r)\,\psi_m(k', r)\,dr = \delta_{nm}\,\delta(k - k') \qquad (10.4)
Note that throughout our discussion here, the integral over r is taken over all space.
One can also show that the Bloch functions are complete:
\sum_n \int \psi_n^*(k, r')\,\psi_n(k, r)\,dk = \delta(r - r') \qquad (10.5)
where the integral is over a single BZ. Since the Bloch functions are a complete set,
we can expand any general wave function in terms of the Bloch functions
\psi(r) = \sum_n \int \phi_n(k)\,\psi_n(k, r)\,dk \qquad (10.6)
where the expansion coefficients describe the wave function in the CMR. Hence,
the Bloch functions can be thought of as the transformation coefficients ψ_n(k, r) = ⟨r|nk⟩ between the CMR and the position representation, with

|nk\rangle = \frac{1}{\sqrt{\Omega}}\, e^{ikr}

where Ω is the volume of a unit cell.
The Wannier functions are defined as lattice Fourier transforms of the Bloch functions,

a_n(r - R_\mu) = \sqrt{\frac{\Omega}{(2\pi)^d}}\int e^{-ikR_\mu}\,\psi_n(k, r)\,dk

The integral is over the Brillouin zone with volume V = (2π)^d/Ω, and Ω is the
volume of the unit cell (with d dimensions). A given Wannier function is defined for
each band and for each unit cell. If the unit cell happens to contain multiple atoms,
the Wannier function may be delocalized over multiple atoms. The functions are
orthogonal and complete.
The Wannier functions are not energy eigenfunctions of the Hamiltonian. They
are, however, linear combinations of the Bloch functions with different wave vectors
and therefore different energies. For a perfect crystal, the matrix elements of H in
terms of the Wannier functions are given by
\int a_l^*(r - R_\nu)\, H_o\, a_n(r - R_\mu)\,dr = \frac{\Omega}{(2\pi)^d}\int\!\!\int e^{i(qR_\nu - kR_\mu)}\left[\int \psi_{lq}^*(r)\, H_o\, \psi_{nk}(r)\,dr\right] dq\,dk = \mathcal{E}_n(R_\nu - R_\mu)\,\delta_{nl} \qquad (10.8)
where
\mathcal{E}_n(R_\nu - R_\mu) = \frac{\Omega}{(2\pi)^d}\int e^{ik(R_\nu - R_\mu)}\, E_n(k)\,dk
Consequently, the Hamiltonian matrix elements in the Wannier representation are
related to the Fourier components of the band structure, E n (k). Therefore, given a
band structure, we can derive the Wannier functions and the single-particle matrix
elements, F_{mn}^\circ.
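A one-dimensional check of Eq. 10.8: the Wannier matrix elements are the Fourier coefficients of the band. For E(k) = ε + 2t cos(ka), the on-site element is ε, the nearest-neighbor element is t, and everything beyond vanishes (the values of ε and t are arbitrary here):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

a, eps, t = 1.0, 0.3, -1.2
k = np.linspace(-np.pi / a, np.pi / a, 20001)
Ek = eps + 2 * t * np.cos(k * a)
# E_n(R) = (Omega/(2 pi)^d) * BZ integral of e^{ikR} E(k); in 1D Omega = a,
# and only the real (cosine) part survives by symmetry of E(k)
En = [(a / (2 * np.pi)) * trapezoid(np.cos(k * R) * Ek, k)
      for R in (0.0, a, 2 * a)]
print(En)   # ~[0.3, -1.2, 0.0]
```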
where we see that the kinetic energy term is diagonal in the CMR while the potential
term couples components of the wave function with different values of the crystal
momentum k.
Now, let us assume that the potential can be written as a superposition of core
potentials centered about each atomic site. In other words, we expand
V(r) = \sum_j v(r - r_j)
where v(r) is a weak pseudopotential specific to a given atom. Using this approxi-
mation, we can evaluate ⟨k|V|k′⟩ by taking advantage of the properties of the Fourier
transformation
\langle k|V|k'\rangle = \frac{1}{\Omega}\sum_j \int dr\; e^{i(k - k')r}\, v(r - r_j) \qquad (10.12)
Now we change the variable of integration from r to r − r_j and factor the volume
into Ω = NΩ_o, where N is the number of atoms and Ω_o is the atomic volume:

\langle k|V|k'\rangle = \frac{1}{N}\sum_j e^{i(k - k')r_j}\;\frac{1}{\Omega_o}\int dr\; e^{i(k - k')r}\, v(r)
In this last step we factor the interaction into two terms: a structure factor S(q) and a
form factor v(q), where q = k′ − k. The structure factor is given as the sum over the
atomic positions

S(q) = \frac{1}{N}\sum_j e^{-iq\cdot r_j} \qquad (10.15)
and is equivalent to the structure factor obtained from diffraction theory. The second
factor, termed the form factor, is given by
v(q) = \frac{1}{\Omega_o}\int d^3r\; e^{-iq\cdot r}\, v(r) \qquad (10.16)
Here we have explicitly indicated that the integration is over a three-dimensional
volume d V = d 3r . Since v(r ) is centered about an atomic site, we can expand it in
terms of the spherical harmonics
v(r) = \sum_{l=0}^{\infty}\sum_{m=-l}^{+l} v_{lm}(r)\, Y_{lm}(\theta, \phi) \qquad (10.17)
where (θ_q, φ_q) give the direction of q relative to some z axis and j_l(qr) is a spherical
Bessel function. If we choose the z axis to be along the direction of q, then qx = q y = 0
and qz = q and we can write
eiq·r = eiqr cos θ
For the case of a spherically symmetric pseudopotential, v_l has only one component
and we arrive at

v(q) = \frac{4\pi}{\Omega_o}\int_0^{\infty} r^2\, v(r)\, j_0(qr)\,dr = \frac{4\pi}{\Omega_o}\int_0^{\infty} r^2\, v(r)\,\frac{\sin(qr)}{qr}\,dr \qquad (10.21)
Within the pseudopotential approximation, we simply need to know the pseudopo-
tential for a given atom and the structure factor of the material to determine the band
structure or other properties related to the electronic structure.
One form of the pseudopotential is the so-called empty-core potential, which takes
the form129
v(r) = \begin{cases} 0 & \text{for } r < r_c \\ v_{\rm free}(r) & \text{otherwise} \end{cases} \qquad (10.22)
where v_free is the free-atom potential that may take the form of a screened Coulomb
potential

v_{\rm free}(r) = -\frac{Z_{\rm eff}\, e^2}{r}\, e^{-\kappa r} \qquad (10.23)
where Z eff is the effective core charge and κ is a screening length. Hence, we can
evaluate the integral for the form factor as
v(q) = -\frac{4\pi Z_{\rm eff} e^2}{\Omega_o\, q}\int_{r_c}^{\infty} r^2\,\frac{e^{-\kappa r}}{r}\,\frac{\sin(qr)}{r}\,dr \qquad (10.24)

= -\frac{4\pi Z_{\rm eff} e^2}{\Omega_o\,(q^2 + \kappa^2)}\,\cos(qr_c) \qquad (10.25)
In order to determine the form factor for a given atomic species within the empty-
core approximation, we need to adjust the cutoff radius r_c until the eigenenergy of
the core potential matches the energy of the actual atomic species. For example, if
we consider the core potential for the 3s state of the Na atom, we can use Z_eff = 1
and adjust r_c until the lowest energy radial eigenstate of the pseudopotential is equal
to E(3s) = −4.96 eV. Doing so, we find that r_c = 1.037 Å. Formally, we may
FIGURE 10.2 (a) Empty cone pseudo potential and radical wavefunction for sodium (Na)
atom (b) Na pseudo potential from factor.
anticipate that the real purpose of κ is to ensure that the integral has a finite radius
of convergence and that we should take κ → 0. However, within the Thomas–Fermi
statistical approximation, we find that
κ² = 4m e² k_F/(π ħ²)   (10.26)
where k F is the Fermi momentum. The empty-core pseudopotential, approximate 3s
radial wave function, and pseudopotential form factor for a Na atom are given in
Figure 10.2. For Na, we use k F = 0.92Å−1 .
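As a numerical sketch of Eq. (10.25), the snippet below evaluates the screened empty-core form factor for Na in Hartree atomic units. The atomic volume Ω₀ and the unit conversions are illustrative assumptions, not values quoted in the text; only r_c and k_F come from the discussion above.

```python
import numpy as np

# Hartree atomic units (e = hbar = m_e = 1). The atomic volume Omega0 is an
# assumed value for bcc Na, not a parameter quoted in the text.
bohr_per_ang = 1.0 / 0.529177
Z_eff = 1.0
r_c = 1.037 * bohr_per_ang       # empty-core radius fitted to E(3s), in bohr
k_F = 0.92 / bohr_per_ang        # Fermi wave vector for Na, in bohr^-1
kappa2 = 4.0 * k_F / np.pi       # Thomas-Fermi screening, Eq. (10.26), a.u.
Omega0 = 255.0                   # volume per atom in bohr^3 (assumed)

def v_q(q):
    """Empty-core pseudopotential form factor, Eq. (10.25), in hartree."""
    return -4.0 * np.pi * Z_eff / (Omega0 * (q**2 + kappa2)) * np.cos(q * r_c)

# v(q) is attractive at small q and changes sign where cos(q r_c) = 0
for q in (0.5, 1.0, 2.0):
    print(f"v({q}) = {v_q(q):+.5f} hartree")
```

The zero crossing at q = π/(2r_c) mirrors the sign change of the form factor visible in Figure 10.2b.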
The structure factor S(q) can be obtained from scattering data or from simulation.
In general, the structure factor is related to the Fourier transform of the pair distribution
function.130 This gives us an independent way to incorporate the structure of a system
into the calculation of the electronic band structure. The pair distribution is given by
summing over all pairs of atoms in the sample. For an isotropic single-component
system
g(r) = (V/(N(N−1))) Σ_{i≠j} ∫ d³r′ δ(r′ − r_i) δ(r′ + r − r_j)
The quantity ρg(r )dr gives the “probability” of finding a second atom (or molecule)
at some distance r away, given that there is an atom or molecule at the origin and
ρ = N /V is the density. It is not really a probability since
4π ∫₀^∞ ρ g(r) r² dr = N − 1 ≈ N
To be precise, ρg(r)dr gives the number of atoms between r and r + dr about a
central atom. The relation between g(r) and S(q) is [130]

S(q) = ρ ∫ e^{iq·r} (g(r) − 1) d³r
For a single chain with interatomic spacing d, the atoms on the chain are located at
distances rn = nd from an atom located at the origin. Thus, it is easy to see that for
a chain of N atoms
S(q) = (1/N) Σ_{n=0}^{N−1} e^{−iqnd}   (10.27)
     = (1/N)(1 + x + x² + ⋯ + x^{N−1})   (10.28)
     = (1/N) (x^N − 1)/(x − 1)   (10.29)

with x = exp(−iqd). For a periodic system, q takes integer multiples of 2π/(N d).
Consequently, the numerator in this last expression is always equal to zero. The
denominator is zero when n/N is an integer. Thus, S(q) = 1 at values of q = n2π/d.
These are termed the lattice wave numbers or reciprocal lattice vectors.
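A quick numerical check of Eq. (10.27): for a finite chain the structure factor vanishes at generic allowed wave numbers q = 2πm/(Nd), but equals unity at the reciprocal lattice vectors q = 2πn/d. The chain length below is an arbitrary illustrative choice.

```python
import numpy as np

def S(q, N=50, d=1.0):
    """Structure factor of an N-atom chain, Eq. (10.27)."""
    n = np.arange(N)
    return np.sum(np.exp(-1j * q * n * d)) / N

N, d = 50, 1.0
# scan the allowed wave numbers q = 2*pi*m/(N*d), m = 1 ... 2N-1
q_allowed = 2 * np.pi / (N * d) * np.arange(1, 2 * N)
mags = np.abs([S(q, N, d) for q in q_allowed])

# |S| = 1 only where q is a reciprocal lattice vector (here m = N, i.e. q = 2*pi/d)
print(np.where(mags > 0.5)[0] + 1)   # -> [50]
```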
E_k^± = (1/2)[ħ²k²/(2m) + ħ²(k − q)²/(2m)] ± √{ (1/4)[ħ²k²/(2m) − ħ²(k − q)²/(2m)]² + v(q)² }
(10.32)
FIGURE 10.3 Bands from Eq. (10.32) for three 1D free-electron states coupled by a pseu-
dopotential with form factor v(q). (a) Three-state model. (b) Approximate two-state model.
For the left side (k < 0), we would write a similar 2 × 2 Hamiltonian except we
would couple k to k + q. The band for the full BZ would then be approximated by
joining these two cases, as shown by the curves in Figure 10.3b. The approximation
is robust across both respective half zones. However, once we move too far into the
other half of the zone (that is, where k < 0 for one branch or k > 0 for the other), the
approximation breaks down considerably and we do not recover the second avoided
crossings. Also, at k = 0 the curves cross, whereas in the three-state approximation
we have a small gap.
ψ₁ = A₁e^{iαx} + B₁e^{−iαx}

In region 2, ψ₂″ = −β²ψ₂, with solution

ψ₂ = A₂e^{iβx} + B₂e^{−iβx}

To find u(x) we need to do some manipulation of the wave function in each region:

ψ₁(x) = e^{ikx}u₁(x) = e^{ikx}[A₁e^{i(α−k)x} + B₁e^{−i(α+k)x}]

and

ψ₂(x) = e^{ikx}u₂(x) = e^{ikx}[A₂e^{i(β−k)x} + B₂e^{−i(β+k)x}]
Now we are in a position to determine the coefficients and u_i(x) in each region. At the
potential steps, the two solutions and their derivatives must match. Thus, at x = 0,
continuity of ψ and ψ′ gives

A₁ + B₁ = A₂ + B₂   and   α(A₁ − B₁) = β(A₂ − B₂)

Likewise, matching u₁ and u₂ (and their derivatives) across the unit cell gives two
more conditions. These last two conditions enforce the periodicity of the lattice. This leads to the
following matrix equation:
\begin{pmatrix}
1 & 1 & -1 & -1 \\
\alpha & -\alpha & -\beta & \beta \\
e^{ia(\alpha-k)} & e^{-ia(k+\alpha)} & -e^{ib(\beta-k)} & e^{-ib(k+\beta)} \\
e^{ia(\alpha-k)}(\alpha-k) & -e^{-ia(k+\alpha)}(k+\alpha) & -e^{ib(\beta-k)}(\beta-k) & e^{-ib(k+\beta)}(k+\beta)
\end{pmatrix}
\begin{pmatrix} A_1 \\ B_1 \\ A_2 \\ B_2 \end{pmatrix} = 0   (10.37)
cos(kd) = cosh(βb) cos(αa) − [(α² − β²)/(2αβ)] sinh(βb) sin(αa)
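This dispersion relation only admits a real Bloch wave number k where its right-hand side lies between −1 and +1; scanning it over energy exposes the allowed bands and gaps. The well width a, barrier width b, and barrier height V0 below are arbitrary illustrative choices in atomic units (ħ = m = 1), not parameters from the text.

```python
import numpy as np

# Illustrative Kronig-Penney parameters (atomic units, hbar = m = 1); assumed values
V0, a, b = 5.0, 2.0, 0.5

def cos_kd(E):
    """Right-hand side of the dispersion relation, valid for 0 < E < V0."""
    alpha = np.sqrt(2.0 * E)
    beta = np.sqrt(2.0 * (V0 - E))
    return (np.cosh(beta * b) * np.cos(alpha * a)
            - (alpha**2 - beta**2) / (2.0 * alpha * beta)
            * np.sinh(beta * b) * np.sin(alpha * a))

E = np.linspace(0.01, V0 - 0.01, 5000)
allowed = np.abs(cos_kd(E)) <= 1.0   # a real Bloch wave number k exists here
print(f"{allowed.mean():.0%} of sampled energies lie in allowed bands")
```

The boolean mask alternates between runs of allowed and forbidden energies, i.e., the familiar band/gap structure.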
where E is the energy of the scattering particle. Whereas in the bound-state problem,
E is an eigenvalue of the Hamiltonian operator, here E can take any value. The
basic idea is to consider how an incident wave function starting in the distant past is
transformed into an outgoing wave function in the distant future. For convenience,
we place the interaction close to the origin so that in regions to the left and right of the
interaction region, the wave function behaves as a free particle. For a wave function
moving in one dimension, we can write the wave function in the region to the left of
the interaction as
ψ_L = A_L e^{ikx} + B_L e^{−ikx}

where A_L and B_L are the incident and reflected amplitudes and

k = √(2m(E − V(x)))/ħ
is the wave vector. For regions to the right of the interaction we have
ψ_R = A_R e^{ikx} + B_R e^{−ikx}
where A R is the amplitude traveling away from the interaction and B R is the amplitude
moving toward the interaction region. In most cases, we consider the case where the
particle starts to the left and moves to the right. So, eventually we shall set B R = 0.
We shall keep it for now. The left and right amplitudes are related via the transfer
matrix T :
\begin{pmatrix} A_R \\ B_R \end{pmatrix} = \begin{pmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{pmatrix} \begin{pmatrix} A_L \\ B_L \end{pmatrix}   (10.39)
We can also write the scattering matrix S as the transformation between the two
incoming and the two outgoing amplitudes
\begin{pmatrix} B_L \\ B_R \end{pmatrix} = \begin{pmatrix} r & t \\ t^* & -r \end{pmatrix} \begin{pmatrix} A_L \\ A_R \end{pmatrix}   (10.40)
FIGURE 10.4 Transmission and reflection probabilities for (a) a single-barrier and (b) a
double-barrier problem.
and V(x ≥ x_s) = V_o. The wave function and its first derivative must be continuous,
so we need to join the left and right solutions at x = x_s, ψ_L(x_s) = ψ_R(x_s) and
ψ_L′(x_s) = ψ_R′(x_s), or in matrix form

\begin{pmatrix} e^{ik_L x_s} & e^{-ik_L x_s} \\ ik_L e^{ik_L x_s} & -ik_L e^{-ik_L x_s} \end{pmatrix}
\begin{pmatrix} A_L \\ B_L \end{pmatrix} =
\begin{pmatrix} e^{ik_R x_s} & e^{-ik_R x_s} \\ ik_R e^{ik_R x_s} & -ik_R e^{-ik_R x_s} \end{pmatrix}
\begin{pmatrix} A_R \\ B_R \end{pmatrix}   (10.45)
where k L = (2m E)1/2 /h̄ and k R = (2m(E − Vo ))1/2 /h̄ are the wave vectors on either
side of the step. Thus, we can arrive at the transmission matrix as
To generalize this, let us assume that V (x) can be represented as a series of discrete
steps at x = {x1 , . . . , x N } spanning the interaction region. As we move from left to
right, we can relate the wave-function amplitudes such that after N steps,

\begin{pmatrix} A_R \\ B_R \end{pmatrix} = T[x_N, k_{N-1}, k_N] · T[x_{N-1}, k_{N-2}, k_{N-1}] ⋯ T[x_1, k_L, k_1] \begin{pmatrix} A_L \\ B_L \end{pmatrix}
= T_tot \begin{pmatrix} A_L \\ B_L \end{pmatrix}
= \begin{pmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{pmatrix} \begin{pmatrix} A_L \\ B_L \end{pmatrix}   (10.48)
where we can construct the total transfer matrix as the product of the intermediate
transfer matrices representing a series of transmitted and reflected amplitudes due to
each change in the potential. Once we have the total transfer matrix, we can relate
the transmitted and reflected wave-function coefficients (A R and B L ) to the incident
amplitude, A L :
B_L = −(t_{21}/t_{22}) A_L   (10.49)

A_R = (t_{11} − t_{12}t_{21}/t_{22}) A_L   (10.50)
In Figure 10.4a and 10.4b we show the transmission and reflection probabilities
for scattering past a single barrier (a) and a double barrier (b). In the single-barrier
case, the potential “bump” of Vo = 1 a.u. extends from x = −π/2 to x = +π/2,
whereas in the double barrier, each bump of Vo = 1 a.u. is of width π/2 with a space
of π in between. These are purely toy problems and we have chosen the parameters
to be as simple as possible to illustrate the problem. Moreover, we can easily do this
problem by hand. First, notice that the transmission probability is not a step function
at E = 1 a.u. as one would expect for a classically described particle scattering from
left to right. This is due to quantum tunneling contributions. Second, notice that for
E > 1 a.u. the transmission probability has a slight dip at E ≈ 2 a.u. This is a quasi
resonance since it corresponds to the case where the width of the barrier is an integer
number of de Broglie wavelengths of the scattering wave. Consequently, the particle
is partially reflected.
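The single-barrier curve in Figure 10.4a can be reproduced with a two-step transfer-matrix product. The sketch below assumes atomic units (ħ = m = 1) and the V_o = 1 barrier on x ∈ [−π/2, π/2] described above; it is checked against the standard closed-form square-barrier result rather than being the book's own code.

```python
import numpy as np

V0 = 1.0
x1, x2 = -np.pi / 2, np.pi / 2   # barrier edges from the text

def M(k, x):
    """Matrix matching wave-function value and derivative, cf. Eq. (10.45)."""
    return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def transmission(E):
    kL = np.sqrt(2.0 * complex(E))        # wave vector outside the barrier
    kB = np.sqrt(2.0 * complex(E - V0))   # inside; imaginary when E < V0
    # step up at x1, step down at x2; the total transfer matrix is the product
    t = np.linalg.solve(M(kL, x2), M(kB, x2)) @ np.linalg.solve(M(kB, x1), M(kL, x1))
    # with B_R = 0, Eq. (10.50) gives the transmitted amplitude
    A_R = t[0, 0] - t[0, 1] * t[1, 0] / t[1, 1]
    return abs(A_R) ** 2

print(f"T(E=0.5) = {transmission(0.5):.4f}, T(E=2.0) = {transmission(2.0):.4f}")
```

Because both leads have the same wave vector, no extra flux factor is needed in |A_R/A_L|².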
The double-barrier case can also be handled analytically, again providing a con-
venient check of the transfer matrix calculation. Here we see much more complex
features in the transmission/reflection spectra. First, we have two sharp resonance
features, one at E = 0.21 a.u. and a second at E = 0.84 a.u.
FIGURE 10.5 Argand diagram for the transmission and reflection coefficients for the model
double-barrier problem. The loops indicate the presence of resonances.
We have too many unknowns: T, I, R. So, we can only specify the solution up to
normalization: T /I and R/I . For example, we can derive
T/I = 2it sin(kd) / (2it sin(kd) + δε)   (10.53)
We define the reflection and transmission probabilities as
R = |R/I|² = δε² / (4t² sin²(kd) + δε²)   (10.54)
and
T = 1 − R = 4t² sin²(kd) / (4t² sin²(kd) + δε²)   (10.55)
In the limit that the barrier height is small, δε ≈ 0, the reflectivity R → 0 and the
particle is transmitted through the system. We can define the velocity of the particle
by taking the derivative of the band energy with respect to the momentum, k:

v = (1/ħ) dE/dk = −(2td/ħ) sin(kd)   (10.56)
FIGURE 10.6 Transmission probability |T(k)|² and reflection probability |R(k)|² for a chain
with a single defect at the origin (t = 1, δε = 3).
Thus,

R = δε² / ((ħv)²/d² + δε²)   (10.57)

As δε² → 0, we can ignore the δε² in the denominator and write

R = δε²d² / (ħ²v²)   (10.58)
For this case, the particle is more or less delocalized over the entire chain and we can
say that the probability of finding the particle on any given site is 1/N d where N is
the number of sites in the chain. So, the probability of the particle striking the barrier
per unit time is v/(N d). In this limit the transmission rate is given by

k_T = (v/(N d)) (d²δε²/(ħv)²) = (d/(vN)) (δε²/ħ²)   (10.59)
For larger barriers or smaller energies, we have the particle tunneling through the
barrier and the transmission is given by
T = 1 − R = 4t² sin²(kd) / (4t² sin²(kd) + δε²)   (10.60)
Substituting the definition of velocity from above and taking δε to be the dominant
term in the denominator,
T = ħ²v² / (δε²d²)   (10.61)
When the barrier is large compared to the energy, then the wave function is more or
less a standing wave to the left of the barrier and the particle will strike the barrier at
a rate v/2L. So, we can define the transmission rate as
k_T = (v/2L) T = ħ²v³ / (2Lδε²d²)   (10.62)
If we have a particle transmitted to the right, then u j = T eikd j . Since we need not
specify the normalization, we can take T = 1 and iteratively determine u j for the
rest of the chain. This is a complex number and we can write it as u j = x j + i y j . We
are going to assume that the site energy to the left and right of the barrier is the same,
so E = ε_s + 2t cos(kd) gives the dispersion relation between k and E for the two
asymptotic regions. If this is the case, we can compute the transmission probability
by comparing u j and u j+1 on the opposite side of the chain (where we have incident
and reflected components). The result is
T = 4 sin²(kd) / [(x_{j+1} − x_j cos(kd) + y_j sin(kd))² + (y_{j+1} − x_j sin(kd) − y_j cos(kd))²]
(10.64)
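The iterative scheme just described is easy to sketch: set the transmitted wave u_j = e^{ikdj} on the right, run the tight-binding recursion leftward through the defect region, then fit incident and reflected amplitudes on the far left. This is a minimal illustration (lead lengths and helper names are my own choices); for a single defect the result should reproduce Eq. (10.55).

```python
import numpy as np

def transmission(defects, k, t=-1.0, n_lead=20):
    """Transmission through a 1D tight-binding chain (d = 1, eps_s = 0).

    defects: on-site energies of the defect region; k: Bloch wave number,
    so E = eps_s + 2 t cos(k) in the asymptotic regions.
    """
    E = 2.0 * t * np.cos(k)
    eps = np.zeros(2 * n_lead + len(defects))
    eps[n_lead:n_lead + len(defects)] = defects
    N = len(eps)
    u = np.zeros(N, dtype=complex)
    # pure transmitted wave on the right, with amplitude T = 1
    u[-1], u[-2] = np.exp(1j * k * (N - 1)), np.exp(1j * k * (N - 2))
    # Schrodinger recursion E u_j = eps_j u_j + t (u_{j+1} + u_{j-1}),
    # iterated from right to left through the defects:
    for j in range(N - 2, 0, -1):
        u[j - 1] = ((E - eps[j]) * u[j] - t * u[j + 1]) / t
    # far left: u_j = I e^{ikj} + R e^{-ikj}; solve the 2x2 system for I
    M = np.array([[1.0, 1.0], [np.exp(1j * k), np.exp(-1j * k)]])
    I, R = np.linalg.solve(M, u[:2])
    return 1.0 / abs(I) ** 2   # transmitted flux relative to incident

# single defect with delta_eps = 3 at k = pi/2: Eq. (10.55) gives 4/13
print(transmission([3.0], np.pi / 2))
```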
Consider the case in which we have two defect sites, one at j = 0 and another at
j = 3 with ε j = εs + 3 eV and t = −1 eV. We can set εs = 0 since it is an arbitrary
zero of energy. The defects are spaced so that there are two nearly bound states in the
[Figure: transmission probability versus energy E (eV) for the chain with two defect sites.]
TABLE A.1
Physical Constants

Constant                 Symbol   SI Value
Speed of light           c        299792458 m/s (exact)
Charge of proton         e        1.6021764 ×10⁻¹⁹ C
Permittivity of vacuum   ε₀       8.8541878 ×10⁻¹² J⁻¹ C² m⁻¹
Avogadro's number        N_A      6.022142 ×10²³ mol⁻¹
Rest mass of electron    m_e      9.109382 ×10⁻³¹ kg
TABLE A.2
Atomic Units: In Atomic Units, the Following Quantities Are Unitary: ħ, e, m_e, a₀

Quantity                  Symbol or Expression      CGS or SI Equivalent
Mass                      m_e                       9.109382 ×10⁻³¹ kg
Charge                    e                         1.6021764 ×10⁻¹⁹ C
Angular momentum          ħ                         1.05457 ×10⁻³⁴ J s
Length (bohr)             a₀ = ħ²/(m_e e²)          0.5291772 ×10⁻¹⁰ m
Energy (hartree)          E_h = e²/a₀               4.35974 ×10⁻¹⁸ J
Time                      t₀ = ħ³/(m_e e⁴)          2.41888 ×10⁻¹⁷ s
Velocity                  e²/ħ                      2.18770 ×10⁶ m/s
Force                     e²/a₀²                    8.23872 ×10⁻⁸ N
Electric field            e/a₀²                     5.14221 ×10¹¹ V/m
Electric potential        e/a₀                      27.2114 V
Fine structure constant   α = e²/(ħc)               1/137.036
Magnetic moment           β_e = eħ/(2m_e)           9.27399 ×10⁻²⁴ J/T
Permittivity of vacuum    ε₀ = 1/4π                 8.8541878 ×10⁻¹² J⁻¹ C² m⁻¹
Hydrogen atom IP          −α² m_e c²/2 = −E_h/2     −13.60580 eV
TABLE A.3
Useful Orders of Magnitude

Quantity                     Approximate Value    Exact Value
Electron rest mass           m_e c² ≈ 0.5 MeV     0.511003 MeV
Proton rest mass             m_p c² ≈ 1000 MeV    938.280 MeV
Neutron rest mass            m_n c² ≈ 1000 MeV    939.573 MeV
Proton/electron mass ratio   m_p/m_e ≈ 2000       1836.1515
The integral picks out the first term in the Taylor expansion of f(x) about the point x_o,
and this relation must hold for any function of x. For example, let us take a function
that vanishes at some arbitrary point x_o. Then the integral becomes

∫_{−∞}^{+∞} dx δ(x − x_o) f(x) = 0   (A.2)

For this to be true for any arbitrary function, we have to conclude that
δ(0) = ∞ (A.5)
This is a very odd function: it is zero everywhere except at one point, at which it is
infinite. So it is not a function in the regular sense. In fact, it is more like a distribution
function that is infinitely narrow. If we set f(x) = 1, then we can see that the δ function
is normalized to unity
∫_{−∞}^{+∞} dx δ(x − x_o) = 1   (A.6)
A.2.2 PROPERTIES
Some useful properties of the δ function are as follows:
1. It is real: δ*(x) = δ(x).
2. It is even: δ(x) = δ(−x).
3. δ(ax) = δ(x)/a for a > 0.
4. ∫ δ(x) f(x) dx = f(0).
5. δ′(−x) = −δ′(x).
6. x δ(x) = 0.
7. δ(x² − a²) = (1/2a)[δ(x + a) + δ(x − a)].
8. f(x) δ(x − a) = f(a) δ(x − a).
9. ∫ δ(x − a) δ(x − b) dx = δ(a − b).
Exercise
Prove the above relations.
δ(x) = lim_{a→0} (1/π) a/(x² + a²)
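This limit can be checked numerically: integrating the Lorentzian against a smooth test function should approach f(0) as a → 0. The test function, grid, and cutoff below are arbitrary illustrative choices.

```python
import numpy as np

def lorentzian(x, a):
    """Nascent delta function: (1/pi) a / (x^2 + a^2)."""
    return a / (np.pi * (x ** 2 + a ** 2))

x = np.linspace(-50.0, 50.0, 400001)
dx = x[1] - x[0]
f = np.cos(x)          # smooth test function with f(0) = 1

for a in (1.0, 0.1, 0.01):
    integral = np.sum(lorentzian(x, a) * f) * dx
    print(f"a = {a:5}: integral = {integral:.4f}")   # approaches f(0) = 1 as a -> 0
```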
• Probability density:

P(x, t) = |ψ(x, t)|² = ψ*(x, t)ψ(x, t)
• Schrödinger equation:

(iħ ∂/∂t − Ĥ)ψ = 0   (A.14)

• Normalization:

∫ ψ*ψ dx = 1   (A.15)
• Expansion in a complete set of eigenfunctions ψ_n:

φ = Σ_n c_n ψ_n   (A.16)

where

c_n = ∫ ψ_n* φ dx   (A.17)
A.3.3 OPERATORS
If  is a quantum mechanical operator and φ and ψ are normalizable functions,
• Hermitian conjugate:

∫ (Âφ)* ψ dx = ∫ φ* Â†ψ dx   (A.19)
• Position operator:

x̂ⁿ = xⁿ   (A.20)

• Momentum operator:

p̂ⁿ = (ħ/i)ⁿ ∂ⁿ/∂xⁿ   (A.21)
• Kinetic energy operator:
T̂ = −(ħ²/2m) ∇²   (A.22)
• Potential energy operator:
V̂ = V (x̂) (A.23)
• Hamiltonian operator:
Ĥ = T̂ + V̂ (A.24)
Ĥ = −(ħ²/2m) ∂²/∂x² + V(x)   (A.25)
• Parity operator:
• Expectation value:

⟨Â⟩ = ⟨ψ|Â|ψ⟩ = ∫ ψ*Âψ dx   (A.27)

• Matrix element:

⟨φ|Â|ψ⟩ = ∫ φ*Âψ dx   (A.28)
• Ehrenfest's theorem:

m d⟨x⟩/dt = ⟨p⟩,   d⟨p⟩/dt = −⟨∇V⟩   (A.30)
• Expansion of expectation values in terms of eigenfunctions: If ψn is an eigen-
function of  such that Âψn = an ψn , then
⟨φ|Â|φ⟩ = Σ_n |c_n|² a_n   (A.31)
We interpret |cn |2 as the probability of finding the system in the nth eigenstate.
More precisely, |cn |2 is the likelihood that making a physical observation
described by the quantum mechanical operator  will result in a measurement
of an .
• Dirac notation (bra-ket notation):
– Matrix element:

a_nm = ⟨ψ_m|Â|ψ_n⟩   (A.33)
– Scalar product:

⟨ψ_n|ψ_m⟩ = ∫ dx ψ_n*(x) ψ_m(x)   (A.34)
• Pauli matrices:

σ_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}   (A.37)

σ_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}   (A.38)

σ_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}   (A.39)

σ₀ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}   (A.40)

– Anticommutation:

σ_i σ_j + σ_j σ_i = 2δ_ij σ₀   (A.41)

– Commutation (i, j, k cyclic):

σ_i σ_j − σ_j σ_i = 2iσ_k   (A.42)

σ_i σ_j = iσ_k   (A.43)

σ_i σ_i = σ₀   (A.44)
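These algebraic relations are straightforward to verify directly; the sketch below checks the anticommutation relation (A.41) and the cyclic products (A.43) numerically.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)
paulis = (sx, sy, sz)

# Anticommutation, Eq. (A.41): sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij sigma_0
for i, a in enumerate(paulis):
    for j, b in enumerate(paulis):
        target = 2.0 * s0 if i == j else np.zeros((2, 2))
        assert np.allclose(a @ b + b @ a, target)

# Cyclic products, Eq. (A.43): sigma_i sigma_j = i sigma_k for (i, j, k) cyclic
assert np.allclose(sx @ sy, 1j * sz)
assert np.allclose(sy @ sz, 1j * sx)
assert np.allclose(sz @ sx, 1j * sy)
print("Pauli identities verified")
```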
This last relation indicates that the harmonic oscillator ground state carries the
minimal amount of uncertainty in accordance with the Heisenberg uncertainty
principle.
with
a_n = (1/L) ∫_{−L}^{+L} f(x) cos(nπx/L) dx   (A.74)
and
b_n = (1/L) ∫_{−L}^{+L} f(x) sin(nπx/L) dx   (A.75)
• Complex form:

f(x) = Σ_{n=−∞}^{+∞} c_n exp(inπx/L)   (A.76)

with

c_n = (1/2L) ∫_{−L}^{+L} dx f(x) e^{−inπx/L}   (A.77)
• Parseval’s theorem:
+L ∞
1
d x| f (x)|2 = |cn |2 (A.78)
2L −L n=−∞
– Definition #1:

F(k) = ∫_{−∞}^{∞} f(x) e^{−2πikx} dx   (A.79)

f(x) = ∫_{−∞}^{∞} F(k) e^{+2πikx} dk   (A.80)

– Definition #2:

F(k) = ∫_{−∞}^{∞} f(x) e^{−ikx} dx   (A.81)

f(x) = (1/2π) ∫_{−∞}^{∞} F(k) e^{+ikx} dk   (A.82)

– Definition #3:

F(k) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−ikx} dx   (A.83)

f(x) = (1/√(2π)) ∫_{−∞}^{∞} F(k) e^{+ikx} dk   (A.84)
– Convolution rules:

f ∗ g = g ∗ f   (A.86)

f ∗ (g ∗ h) = (f ∗ g) ∗ h   (A.87)

– Convolution theorem: the transform of a convolution is the product of the transforms,

f(x) ∗ g(x) ⇌ F(k) G(k)

– Wiener–Khintchine theorem: the transform of the autocorrelation function gives the power spectrum |F(k)|².

– Cross-correlation:

f(x) ⋆ g(x) = ∫_{−∞}^{∞} du f*(u − x) g(u)   (A.91)

– Correlation theorem:

f(x) ⋆ g(x) ⇌ F*(k) G(k)

– Derivatives:

df(x)/dx ⇌ 2πik F(k)   (A.95)

(d/dx)(f(x) ∗ g(x)) = (df/dx) ∗ g(x) = f(x) ∗ (dg/dx)   (A.96)
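The convolution theorem has a discrete analog that is easy to verify with the FFT: circular convolution in real space equals pointwise multiplication in Fourier space. The array length and random data below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution computed directly from the definition
direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])

# convolution theorem: FT(f * g) = FT(f) FT(g)
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(direct, via_fft))   # -> True
```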
• Symmetry relations:
f (x) F(s)
even even
odd odd
real, even real, even
real, odd imaginary, odd
imaginary, even imaginary, even
complex, even complex, even
complex, odd complex, odd
real, asymmetric complex, Hermitian
imaginary, asymmetric complex, anti-Hermitian
real : f (x) = f ∗ (x)
imaginary : f (x) = − f ∗ (x)
even : f (x) = f (−x)
odd : f (x) = − f (−x)
Hermitian : f (x) = f ∗ (−x)
anti-Hermitian: f (x) = − f ∗ (−x)
CHAPTER 4
4. E. Fermi. Nuclear Physics. University of Chicago Press, Chicago, 1950.
5. P. A. M. Dirac. The quantum theory of emission and absorption of radiation. Proc. R.
Soc. London, Ser. A, 114:243–265, 1927.
6. Eric R. Bittner and Peter J. Rossky. Quantum decoherence in mixed quantum-classical
systems: Nonadiabatic processes. J. Chem. Phys., 103(18):8130–8143, Nov 1995.
7. Eric R. Bittner and Peter J. Rossky. Decoherent histories and nonadiabatic quantum
molecular dynamics simulations. J. Chem. Phys., 107(20):8611–8618, Nov 1997.
8. Robbie Grunwald and Raymond Kapral. Decoherence and quantum-classical master
equation dynamics. J. Chem. Phys., 126(11):114109, 2007.
9. Klaus Hornberger. Master equation for a quantum particle in a gas. Phys. Rev. Lett.,
97(6):060601, 2006.
10. Ahren W. Jasper and Donald G. Truhlar. Electronic decoherence time for non-Born–Oppenheimer trajectories. J. Chem. Phys., 123(6):064103, 2005.
11. Gunter Kab. Fewest switches adiabatic surface hopping as applied to vibrational energy
relaxation. J. Phys. Chem. A, 110(9):3197–3215, 2006.
12. Gil Katz, Mark A Ratner, and Ronnie Kosloff. Decoherence control by tracking a
Hamiltonian reference molecule. Phys. Rev. Lett., 98(20):203006, 2007.
13. M. Merkli, I. M. Sigal, and G. P. Berman. Decoherence and thermalization. Phys Rev
Lett, 98(13):130401, 2007.
14. Maximilian A. Schlosshauer. Decoherence and the quantum-to-classical transition.
Springer, Berlin, 2007.
15. L. D. Landau. Phy. Z. Sowjetunion, 1:89, 1932.
16. C. Zener. Proc. R. Soc. London, Ser. A, 137:696, 1933.
17. A. Nitzan. Chemical Dynamics in Condensed Phases. Oxford University Press, 2007.
18. R. A. Marcus. On the theory of electron-transfer reactions. Part VI. Unified treatment
for homogeneous and electrode reactions. J. Chem. Phys., 43(2):679–701, Jul 1965.
19. Rudolph A. Marcus. Nobel lecture: Electron transfer reactions in chemistry. Theory
and experiment. Rev. Mod. Phys., 65(3):599–610, 1993.
20. J. R. Miller, L. Calcaterra, and G. L. Closs. Intramolecular long-distance electron transfer in radical anions. The effects of free energy and solvent on the reaction rates. J. Am. Chem. Soc., 106:3047, 1984.
21. Eyal Neria, Abraham Nitzan, R. N. Barnett, and Uzi Landman. Quantum dynamical
simulations of nonadiabatic processes: Solvation dynamics of the hydrated electron.
Phys. Rev. Lett., 67(8):1011–1014, Aug 1991.
22. Eyal Neria and Abraham Nitzan. Semiclassical evaluation of nonadiabatic rates in
condensed phases. J. Chem. Phys., 99(2):1109–1123, 1993.
23. Louis de Broglie. Ondes et mouvements. Gauthiers-Villars, Paris, 1926.
24. Louis de Broglie. La mécanique ondulatoire. Gauthiers-Villars, Paris, 1928.
25. Peter Holland. The Quantum Theory of Motion: An Account of the de Broglie-Bohm
Causal Interpretation of Quantum Mechanics. Cambridge, UK: Cambridge University
Press, 1993.
CHAPTER 5
26. P. A. M. Dirac. The Principles of Quantum Mechanics. Oxford University Press,
Oxford, NY, 4th edition, 1958.
27. Julian Schwinger. Quantum Mechanics—Symbolism of Atomic Measurements. Physics
and Astronomy. Springer, Berlin, 2001.
28. Philip Pechukas. Time-dependent semiclassical scattering theory. Part I potential scat-
tering. Phys. Rev., 181(1):166–174, May 1969.
29. Philip Pechukas. Time-dependent semiclassical scattering theory. Part II atomic colli-
sions. Phys. Rev., 181(1):174–185, May 1969.
30. John C. Tully. Molecular dynamics with electronic transitions. J. Chem. Phys.,
93(2):1061–1071, 1990.
31. Frank J. Webster, Jurgen Schnitker, Mark S. Friedrichs, Richard A. Friesner, and
Peter J. Rossky. Solvation dynamics of the hydrated electron: A nonadiabatic quantum
simulation. Phys. Rev. Lett., 66(24):3172–3175, Jun 1991.
32. B. Space and D. F. Coker. Nonadiabatic dynamics of excited excess electrons in simple
fluids. J. Chem. Phys., 94(3):1976–1984, 1991.
CHAPTER 6
33. L. Allen and J. H. Eberly. Optical Resonance and Two-Level Atoms. Dover Publications,
1987.
34. E. L. Hahn. Spin echoes. Phys. Rev., 80(4):580–594, Nov 1950.
35. E. L. Hahn and D. E. Maxwell. Chemical shift and field independent frequency mod-
ulation of the spin echo envelope. Phys. Rev., 84(6):1246–1247, Dec 1951.
36. N. A. Kurnit, I. D. Abella, and S. R. Hartmann. Observation of a photon echo. Phys.
Rev. Lett., 13(19):567–568, Nov 1964.
37. I. D. Abella, N. A. Kurnit, and S. R. Hartmann. Photon echoes. Phys. Rev., 141(1):
391–406, Jan 1966.
38. A.A. Ovchinnikov and N.S. Erikhman. Sov. Phys. JETP, 40:733, 1975.
39. A. Madhukar and W. Post. Phys. Rev. Lett., 39:1424, 1977.
40. A. M. Jayannavar and N. Kumar. Phys. Rev. Lett, 48:553, 1982.
41. S. M. Girvin and G. D. Mahan. Phys. Rev. B, 20:4896, 1979.
42. W. Fischer, H. Leschke, and P. Müller. Phys. Rev. Lett., 73:1578, 1994.
43. I. Burghardt and L. S. Cederbaum. Hydrodynamic equations for mixed quantum states.
I. General formulation. J. Chem. Phys., 115(22):10303, December 2001.
44. I. Burghardt and L. S. Cederbaum. Hydrodynamic equations for mixed quantum states.
II. Coupled electronic states. J. Chem. Phys., 115(22):10312, December 2001.
45. Yasuteru Shigeta, Hideaki Miyachi, and Kimihiko Hirao. Quantal cumulant dynamics:
General theory. J. Chem. Phys., 125(24):244102, 2006.
46. Jeremy B. Maddox and Eric R. Bittner. Quantum relaxation dynamics using Bohmian
trajectories. J. Chem. Phys., 115(14):6309–6316, 2001.
47. Jeremy B. Maddox and Eric R. Bittner. Quantum dissipation in unbounded systems.
Phys. Rev. E, 65(2):026143, Jan 2002.
48. J. B. Maddox and E. R. Bittner. Quantum dissipation in the hydrodynamic moment
hierarchy: A semiclassical truncation strategy. J. Phys. Chem. B, 106(33):7981–7990,
2002.
49. L. A. Khalfin. Sov. Phys. JETP, 6:1053, 1958.
50. Wayne M. Itano, D. J. Heinzen, J. J. Bollinger, and D. J. Wineland. Quantum zeno
effect. Phys. Rev. A, 41(5):2295–2300, Mar 1990.
51. R. J. Cook. What are quantum jumps? Phys. Scr., T21:49, 1988.
52. E. P. Wigner. On the quantum correction for thermodynamic equilibrium. Phys. Rev.,
40:749–759, 1932.
53. H. Weyl. The Theory of Groups and Quantum Mechanics. Dover, New York, 1931.
54. H. Weyl. Gruppentheorie und Quantenmechanik. Hirzel, Leipzig, 1928.
55. H. Weyl. Z. Phys., 46:1, 1927.
56. J. Ville. Théorie et applications de la notion de signal analytique. Cables et Transmis-
sion, 2A:61–74, 1948.
57. J. E. Moyal. Quantum mechanics as a statistical theory. Proc. Cambridge Philos. Soc.,
45:737–740, 1949.
58. H. J. Groenewold. Physica, 12(405), 1946.
59. William Frensley. Boundary conditions for open quantum systems driven far from
equilibrium. Rev. Mod. Phys., 62:745, 1990.
60. Jasper Koester and Shaul Mukamel. Transient gratings, four wave mixing, and polariton
effects in non-linear optics. Phys. Rep., 205(1):1–58, 1991.
61. Roger F. Loring, Daniel S. Franchi, and Shaul Mukamel. Anderson localization in
Liouville space: The effective dephasing approximation. Phys. Rev. B, 37(4):1874–
1883, Feb 1988.
62. Shaul Mukamel. Principles of Nonlinear Optics and Spectroscopy. Oxford University
Press, 1995.
63. P. L. Bhatnagar, E. P. Gross, and M. Krook. A model for collision processes in gases. i.
small amplitude processes in charged and neutral one-component systems. Phys. Rev.,
94(3):511–525, May 1954.
CHAPTER 7
64. T. Förster. Transfer mechanisms of electronic excitation. Discuss. Faraday Soc. Aberdeen, (7–18), 1960.
65. T. Förster. Zwischenmolekulare Energiewanderung und Fluoreszenz. Ann. Phys.
(Leipzig), 2(55–75), 1948.
66. T. Förster. Energiewanderung und Fluoreszenz. Naturwissenschaften, 6(166–175),
1946.
67. T. Förster. Transfer mechanisms of electronic excitation. Discuss. Faraday Soc., 27:17,
1959.
68. E. Hass, E. Katchalski-Katzir, and I. Z. Steinberg. First demo of FRET in DNA loops.
Biopolymers, 17(11–31), 1978.
69. Mark I. Wallace, Liming Ying, Shankar Balasubramanian, and David Klenerman.
Non-arrhenius kinetics for the loop closure of a DNA hairpin. Proc. Nat. Acad. Sci.,
98(10):5584–5589, 2001.
70. E. Zazopoulos, E. Lalli, D. M. Stocco, and P. Sassone-Corsi. Nature, 390:311–315,
1997.
71. X. Dai, M. B. Greizerstein, K. Nadas-Chinni, and L. B. Rothman-Denes. Proc. Nat.
Acad. Sci., 94:2174, 1997.
72. S. Tyagi and F. R. Kramer. Molecular beacons. Nature Biotechnol., 14:303–308, 1996.
73. Wichard J. D. Beenken and Tõnu Pullerits. Excitonic coupling in polythiophenes: Com-
parison of different calculation methods. J. Chem. Phys., 120(5):2490–2495, 2004.
74. William Barford. Exciton transfer integrals between polymer chains. J. Chem. Phys.,
126(13):134905, 2007.
75. B. P. Krueger, G. D. Scholes, and G. R. Fleming. Calculation of couplings and energy-
transfer pathways between the pigments of lh2 by the ab initio transition density cube
method. J. Phys. Chem. B, 102(27):5378–5386, Jul 1998.
76. M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman,
J. A. Montgomery, Jr. et al. Gaussian 03, Revision C.02. Gaussian, Inc., Wallingford,
CT, 2004.
77. Frank Neese. ORCA—An ab initio Density Functional and Semiempirical Program
Package, Version 2.4, Revision 45, Jan 2006. Max Planck Institute for Bioinorganic
Chemistry, Muelheim, Germany.
78. Arkadiusz Czader and Eric R. Bittner. Calculations of the exciton coupling elements
between the DNA bases using the transition density cube method. J. Chem. Phys.,
128(3):035101, 2008.
79. B. Bouvier, T. Gustavsson, D. Markovitsi, and P. Millie. Dipolar coupling between
electronic transitions of the DNA bases and its relevance to exciton states in double
helices. Chem. Phys., 275:75–92, 2002.
80. P. Claverie. Intermolecular Interactions—From Diatomic to Biopolymers., Chapter
Elaboration of Approximate Formulas for the Interaction between Large Molecules:
Application in Organic Chemistry, pages 69–306. Wiley, 1978.
CHAPTER 8
81. J. C. Light, I. P. Hamilton, and J. V. Lill. Generalized discrete variable approximation
in quantum mechanics. J. Chem. Phys., 82(3):1400–1409, 1985.
82. J. C. Light. Discrete variable representations in quantum dynamics. In Time-Dependent
Quantum Molecular Dynamics. Plenum-Press, 1992.
83. E. Hückel. Zeitschrift für Physik, 76:628, 1932.
84. J. E. Lennard-Jones. Proc. R. Soc. London, 158:280, 1937.
85. C. A. Coulson and A. Streitwieser. Dictionary of π Electron Calculations. Pergammon,
New York, 1965.
86. W. Kutzelnigg. Einführung in die Theoretische Cheme, Vol 2: Die Chemische Bindung.
Wiley-VCH, 1978.
87. J. Barriol and J. J. Metzger. J. Chem. Phys., 47:433, 1950.
88. C. C. J. Roothaan. Self-consistent field theory for open shells of electronic systems.
Rev. Mod. Phys., 32(2):179–185, 1960.
89. C. C. J. Roothaan. New developments in molecular orbital theory. Rev. Mod. Phys.,
23(2):69–89, 1951.
90. Michael C. Zerner. Perspective on “New developments in molecular orbital theory.”
Theor. Chim. Acta, 103(3):217–218, 2000.
91. G. G. Hall. Proc. R. Soc. London, Ser. A, 205:541, 1951.
92. G. C. Schatz and M. A. Ratner. Quantum Mechanics in Chemistry. Prentice Hall, 1993.
93. Karl F. Freed. Is there a bridge between ab initio and semiempirical theories of valence?
Acc. Chem. Res., 16(4):137–144, 1983.
94. Charles H. Martin and Karl F. Freed. Ab initio computation of semiempirical π-electron methods. Part I, constrained, transferable valence spaces in Hν calculations. J. Chem. Phys., 100(10):7454–7470, 1994.
95. Charles. H. Martin, R. L. Graham, and Karl. F. Freed. Ab initio study of cyclobutadiene
using the effective valence shell Hamiltonian method. J. Chem. Phys., 99(10):7833–
7844, 1993.
96. J. Hubbard. Electron correlations in narrow energy bands. Proc. R. Soc. London, Ser.
A, 276(1365):238–257, Nov 1963.
97. Elliott H. Lieb and F. Y. Wu. Absence of Mott transition in an exact solution of the
short-range, one-band model in one dimension. Phys. Rev. Lett., 20(25):1445–1448,
Jun 1968.
98. Fabian H. L. Essler, Vladimir E. Korepin, and Kareljan Schoutens. Complete solution
of the one-dimensional Hubbard model. Phys. Rev. Lett., 67(27):3848–3851, Dec 1991.
99. J. C. Slater. Atomic shielding constants. Phys. Rev., 36(1):57–64, Jul 1930.
100. S. F. Boys. Proc. R. Soc. London, Ser. A, 200:542, 1950.
101. Ira Levine. Quantum Chemistry. Prentice Hall, 4th edition, 1991.
CHAPTER 9
102. W. P. Su, J. R. Schrieffer, and A. J. Heeger. Soliton excitations in polyacetylene. Phy.
Rev. B, 22(4):2099–2111, Aug 1980.
103. W. P. Su, J. R. Schrieffer, and A. J. Heeger. Solitons in polyacetylene. Phys. Rev. Lett.,
42(25):1698–1701, Jun 1979.
104. L. D. Landau. Phys. Z. Sowjetunion, 3:664, 1933.
105. E. I. Rashba and M. D. Sturge. Excitons. North-Holland, Amsterdam, Netherlands,
1982.
106. E. I. Rashba. Opt. Spektrosk., 2:568, 1957.
107. Mark N. Kobrak and Eric R. Bittner. A dynamic model for exciton self-trapping in
conjugated polymers. Part I. Theory. J. Chem. Phys., 112(12):5399–5409, 2000.
108. Mark N. Kobrak and Eric R. Bittner. A dynamic model for exciton self-trapping
in conjugated polymers. Part II. Implementation. J. Chem. Phys., 112(12):5410–5419,
2000.
109. Mark N. Kobrak and Eric R. Bittner. A quantum molecular dynamics study of exciton
self-trapping in conjugated polymers: Temperature dependence and spectroscopy. J.
Chem. Phys., 112(17):7684–7692, 2000.
110. Hitoshi Sumi and Atsuko Sumi. Dimensionality dependence in self-trapping of exci-
tons. J. Phys. Soc. Jpn., 63(2):637–657, 1994.
111. I. Franco and S. Tretiak. Electron-vibrational dynamics of photoexcited polyfluorenes.
J. Am. Chem. Soc., 126(38):12130–12140, Sep 2004.
112. Kirill I. Igumenshchev, Sergei Tretiak, and Vladimir Y. Chernyak. Excitonic effects in
a time-dependent density functional theory. J. Chem. Phys., 127(11):114902, 2007.
113. S. Tretiak, A. Saxena, R. L. Martin, and A. R. Bishop. Photoexcited breathers in
conjugated polyenes: An excited-state molecular dynamics study. Proc. Nat. Acad.
Sci., 100(5):2185–2190, Mar 2003.
114. S. Tretiak, A. Saxena, R. L. Martin, and A. R. Bishop. Conformational dynamics of
photoexcited conjugated molecules. Phys. Rev. Lett., 89(9):097402, 2002.
115. Sergei Tretiak, Kirill Igumenshchev, and Vladimir Chernyak. Exciton sizes of con-
ducting polymers predicted by time-dependent density functional theory. Phys. Rev.
B, 71(3):033201–4, 2005.
116. Stoyan Karabunarliev, Martin Baumgarten, Eric R. Bittner, and Klaus Müllen. Rig-
orous Franck–Condon absorption and emission spectra of conjugated oligomers from
quantum chemistry. J. Chem. Phys., 113(24):11372–11381, 2000.
117. Stoyan Karabunarliev and Eric R. Bittner. Polaron–excitons and electron–vibrational
band shapes in conjugated polymers. J. Chem. Phys., 118(9):4291–4296, Mar 2003.
118. A. S. Davydov. Excitons and solitons in molecular systems. Int. Rev. Cytology, 106:183,
1987.
119. A. S. Davydov. Solitons and energy transfer along protein molecules. J. Theor. Biol.,
66(2):379–387, 1977.
120. A. S. Davydov. The theory of contraction of proteins under their excitation. J. Theor.
Biol., 38:559–569, 1973.
121. Alwyn Scott. Davydov’s soliton. Phys. Rep., 217(1):1–67, 1992.
122. V. E. Zakharov and A. B. Shabat. Exact theory of two-dimensional self-focusing and
one-dimensional modulations of waves in nonlinear media. Sov. Phys. JETP, 34:62–69,
1972.
123. P. S. Lomdahl and W. C. Kerr. Do Davydov solitons exist at 300 K? Phys. Rev. Lett.,
55(11):1235–1238, Sep 1985.
124. J. W. Schweitzer. Lifetime of the Davydov soliton. Phys. Rev. A, 45(12):8914–8923,
Jun 1992.
125. W. Förner. Davydov solitons in proteins. Int. J. Quantum Chem., 64(3):351, 1997.
126. Konrad Knoll and Richard R. Schrock. Preparation of tert-butyl-capped polyenes con-
taining up to 15 double bonds. J. Am. Chem. Soc., 111(20):7989–8004, 1989.
127. Stoyan Karabunarliev, Eric R. Bittner, and Martin Baumgarten. Franck–Condon spectra
and electron-libration coupling in para-polyphenyls. J. Chem. Phys., 114(13):5863–
5870, 2001.
128. W. M. Gelbart, K. G. Speers, K. F. Freed, J. Jortner, and S. A. Rice. Boltzmann
statistics and radiationless decay of large molecules: Optical selection rules. Chem. Phys.
Lett., 6(4):354, 1970.
CHAPTER 10
129. Walter Harrison. Applied Quantum Mechanics. World Scientific Publishing Company,
Singapore, 2000.
130. Donald A. McQuarrie. Statistical Mechanics. Harper and Row, New York, 1976.
Index
α helix, 268
π-electron theories, 242

A
ab initio treatments, 248
acoustic modes, 265
adiabatic approximation, 125
alternant hydrocarbons, 232
Argand diagram, 296

B
B-DNA, 216
betacarotene, 220, 222
Bethe Ansatz, 242
Bloch functions, 283–285
Bloch states, 290
Bohr frequency, 103
bond-charge matrix, 240
Born–Oppenheimer approximation, 207, 235
Brillouin zone, 285

C
CNDO, 241
Condon, 207
Condon approximation, 207
conjugated molecules, 219
correlation functions, use of, 106
Coulomb matrix element, 214
Coulson–Rushbrooke pairing theorem, 232, 233
crystal momentum representation, 285

D
Davydov's soliton, 268
delocalization, 226
density matrix, time-evolution of, 163
diabatic representation, 127
dipole approximation, 114
dipole–dipole approximation, 209
Dirac quantum condition, 149
discrete variable representation, 228
DNA, 215

E
electromagnetic field, 111
electron/phonon coupling, 265
empty-core potential, 288
energy transfer, 206
excitation energy transfer, 203
exciton self-trapping, 264–268

F
Fermi's golden rule, 106
Fock operator, 237–239
free-electron model, 219, 220
Förster radius, 209
Förster theory, 206

G
golden rule, 205

H
harmonic perturbation, 104
Hartree–Fock approximation, 236
Hellmann–Feynman theorem, 125
Hooke's law, atom described by, 115
Hubbard model, 243
Hückel model, 219, 222
Hückel model, justification of, 224
Hückel theory, extended, 226

I
interactions at low light intensity, 113
irreversibility, quantum mechanical system, 203

K
Kronig–Penney model, 291
Kubo identity, 159

L
Landau–Zener approximation, 128
LH1, 213
line–dipole approximation, 211
Liouville superoperator, 163
Liouville–von Neumann equation, 174
longitudinal modes, 265
longitudinal relaxation time, 174

M
magnetic field, comparison to electric field component, 112
Maxwell's relations, 112
mixing angle, 86
motion under linear force, 136
Mott transition, 242

Q

R
Rashba model, 265
reciprocal lattice vectors, 290
residue theorem, 104
rotating wave approximation, 169

S
scattering matrix, 293
Schrödinger representation, 146
self-consistent field approach, 236
Slater-type orbitals, STO, 224–226
statistical mixture, 175

U

V
variation of energy gap, 221
vector potential, 112
von Neumann entropy, 166

W
Wannier functions, 283, 285, 286
Wigner representation, 189

Z
zero-differential overlap approximation, 241