Quantum Dynamics: Applications in Biological and Materials Systems (Eric R. Bittner)

CRC Press

Taylor & Francis Group


6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2010 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20110715

International Standard Book Number-13: 978-1-4398-8214-6 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
Contents
Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
About the Author. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1 Survey of Classical Mechanics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1


1.1 Newton’s Equations of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Newton’s Postulates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Lagrangian Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 The Principle of Least Action . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Example: Three-Dimensional Harmonic Oscillator
in Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Conservation Laws. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
1.3.1 Conservative Forces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
1.4 Hamiltonian Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.1 Phase Plane Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
1.4.2 Interaction between a Charged Particle and an
Electromagnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.3 Time Dependence of a Dynamical Variable . . . . . . . . . 19
1.4.4 Virial Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4.5 Angular Momentum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.4.6 Classical Motion of an Electron about a Positive
Charge (Nucleus) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4.7 Birth of Quantum Theory . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4.8 Do the Electron’s Orbitals Need to Be Circular? . . . . 25
1.4.9 Wave–Particle Duality. . . . . . . . . . . . . . . . . . . . . . . . . . . .27
1.4.10 De Broglie’s Matter Waves . . . . . . . . . . . . . . . . . . . . . . . 29
1.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

Chapter 2 Waves and Wave Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33


2.1 Position and Momentum Representation of |ψ⟩ . . . . . . . . . . . . 34
2.2 The Schrödinger Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.1 Gaussian Wave Functions. . . . . . . . . . . . . . . . . . . . . . . . .36
2.2.2 Evolution of ψ(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3 Particle in a Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.1 Infinite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.2 Particle in a Finite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3.3 Scattering States and Resonances . . . . . . . . . . . . . . . . . . 44
2.3.4 Application: Quantum Dots . . . . . . . . . . . . . . . . . . . . . . . 47
2.4 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Chapter 3 Semiclassical Quantum Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57


3.1 Bohr–Sommerfeld Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 The Wentzel, Kramers, and Brillouin Approximation . . . . . . . . 60

v

3.2.1 Asymptotic Expansion for Eigenvalue Spectrum . . . . 60


3.2.2 Example: Semiclassical Estimate of Spectrum
for Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.3 The Wentzel, Kramers, and Brillouin
Wave Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.4 Semiclassical Tunneling and Barrier Penetration . . . . 65
3.3 Connection Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.4 Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4.1 Classical Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4.2 Scattering at Small Deflection Angles . . . . . . . . . . . . . . 76
3.4.3 Quantum Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.4.4 Semiclassical Evaluation of Phase Shifts . . . . . . . . . . . 79
3.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

Chapter 4 Quantum Dynamics (and Other Un-American Activities) . . . . . . . . . . 85


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2 The Two-State System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.3 Perturbative Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.3.1 Dipole Molecule in Homogeneous Electric Field . . . . 90
4.3.1.1 Weak Field Limit. . . . . . . . . . . . . . . . . . . . . . . .91
4.3.1.2 Strong Field Limit . . . . . . . . . . . . . . . . . . . . . . . 93
4.4 Dyson Expansion of the Schrödinger Equation . . . . . . . . . . . . . . 93
4.4.1 van der Waals Forces: Origin of Long-Range
Attractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.4.2 Attraction between an Atom and a Conducting
Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.5 Time-Dependent Schrödinger Equation . . . . . . . . . . . . . . . . . . . . 99
4.6 Time Evolution of a Two-Level System . . . . . . . . . . . . . . . . . . . 101
4.7 Time-Dependent Perturbations . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.7.1 Harmonic Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.7.2 Correlation Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.8 Interaction between Matter and Radiation . . . . . . . . . . . . . . . . . 111
4.8.1 Fields and Potentials of a Light Wave . . . . . . . . . . . . . 111
4.8.2 Interactions at Low Light Intensity . . . . . . . . . . . . . . . 113
4.8.2.1 Oscillator Strength . . . . . . . . . . . . . . . . . . . . . 117
4.8.3 Spontaneous Emission of Light . . . . . . . . . . . . . . . . . . 117
4.9 Application of Golden Rule: Photoionization
of Hydrogen 1s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.10 Coupled Electronic/Nuclear Dynamics . . . . . . . . . . . . . . . . . . . . 124
4.10.1 Electronic Transition Rates . . . . . . . . . . . . . . . . . . . . . 128
4.10.2 Marcus’ Treatment of Electron Transfer . . . . . . . . . 131
4.10.3 Including Vibrational Dynamics. . . . . . . . . . . . . . . . .133
4.11 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

Chapter 5 Representations and Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145


5.1 Schrödinger Picture: Evolution of the State Function . . . . . . . 145
5.1.1 Properties of the Time-Evolution Operator . . . . . . . . 146

5.2 Heisenberg Picture: Evolution of Observables . . . . . . . . . . . . . 147


5.3 Quantum Principle of Stationary Action . . . . . . . . . . . . . . . . . . . 152
5.4 Interaction Picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Suggested Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Chapter 6 Quantum Density Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


6.1 Introduction: Mixed vs. Pure States . . . . . . . . . . . . . . . . . . . . . . . 161
6.2 Time Evolution of the Density Matrix. . . . . . . . . . . . . . . . . . . . .163
6.3 Reduced Density Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.3.1 von Neumann Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.4 The Density Matrix for a Two-State System . . . . . . . . . . . . . . . 166
6.4.1 Two-Level System under Resonance
Coupling—Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.4.2 Photon Echo Experiment . . . . . . . . . . . . . . . . . . . . . . . . 173
6.4.3 Relaxation Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.5 Decoherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.5.1 Decoherence by Scattering . . . . . . . . . . . . . . . . . . . . . . 179
6.5.2 The Quantum Zeno Effect . . . . . . . . . . . . . . . . . . . . . . . 184
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.7 Appendix: Wigner Quasi-Probability Distribution . . . . . . . . . . 187
6.7.1 Wigner Representation on a Lattice:
Exciton Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.7.2 Enforcing Fermi–Dirac Statistics . . . . . . . . . . . . . . . . . 194
6.7.3 The k,  Representation . . . . . . . . . . . . . . . . . . . . . . . . 196
6.8 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

Chapter 7 Excitation Energy Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203


7.1 Dipole–Dipole Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
7.2 Förster’s Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
7.3 Beyond Förster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7.4 Transition Density Cube Approach . . . . . . . . . . . . . . . . . . . . . . . 213
Suggested Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

Chapter 8 Electronic Structure of Conjugated Systems . . . . . . . . . . . . . . . . . . . . 219


8.1 π Conjugation in Organic Systems . . . . . . . . . . . . . . . . . . . . . . . 219
8.2 Hückel Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.2.1 Justification for the Hückel Model . . . . . . . . . . . . . . . . 224
8.2.2 Example: 1,3 Butadiene . . . . . . . . . . . . . . . . . . . . . . . . . 226
8.2.3 Cyclic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
8.2.4 Summary of Results from the Hückel Model . . . . . . 230
8.2.5 Alternant Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.2.6 Why Bother with the Hückel Theory?. . . . . . . . . . . . .235
8.3 Electronic Structure Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
8.3.1 Hartree–Fock Approximation . . . . . . . . . . . . . . . . . . . . 236
8.3.2 Variational Derivation of the Hartree–Fock
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

8.4 Neglect of Differential Overlap . . . . . . . . . . . . . . . . . . . . . . . . . . 240


8.5 An Exact Solution: INDO Treatment of Ethylene . . . . . . . . . . 242
8.5.1 HF Treatment of Ethylene . . . . . . . . . . . . . . . . . . . . . . . 245
8.6 Ab Initio Treatments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
8.7 Creation/Annihilation Operator Formalism
for Fermion Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
8.7.1 Evaluating Fermion Operators . . . . . . . . . . . . . . . . . . . 256
8.7.2 Notation for Two-Body Integrals . . . . . . . . . . . . . . . . . 257
8.8 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Suggested Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

Chapter 9 Electron–Phonon Coupling in Conjugated Systems . . . . . . . . . . . . . . 261


9.1 Su–Schrieffer–Heeger Model for Polyacetylene . . . . . . . . . . . . 261
9.2 Exciton Self-Trapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
9.3 Davydov’s Soliton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
9.3.1 Approximate Gaussian Solution . . . . . . . . . . . . . . . . . . 272
9.3.2 Exact Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
9.3.3 Discussion: Do Solitons Exist in Real
α-Helix Systems? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
9.4 Vibronic Relaxation in Conjugated Polymers . . . . . . . . . . . . . . 275
9.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
9.6 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

Chapter 10 Lattice Models for Transport and Structure . . . . . . . . . . . . . . . . . . . . 283


10.1 Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10.1.1 Bloch Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10.1.2 Wannier Functions. . . . . . . . . . . . . . . . . . . . . . . . . . .285
10.2 Stationary States on a Lattice . . . . . . . . . . . . . . . . . . . . . . . . . . 286
10.2.1 A Simple Band-Structure Calculation . . . . . . . . . . 290
10.3 Kronig–Penney Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
10.4 Quantum Scattering and Transport . . . . . . . . . . . . . . . . . . . . . 293
10.5 Defects on Lattices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
10.6 Multiple Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

Appendix Miscellaneous Results and Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . 301


References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Preface
Why do we need another book on quantum mechanics? Go to any university library
and you’re bound to find hundreds of textbooks on this subject. A number are truly
outstanding and nearly everyone has a favorite. In my case, I very much like Cohen-
Tannoudji's two-volume text, although Merzbacher, Feynman and Hibbs, and Landau
and Lifshitz hold their own places of honor on my bookshelf. So, why go through the
bother and effort of trying to say something new? The majority of leading texts focus
upon the solution of Schrödinger’s equation for a handful of solvable problems. Some
will venture into the realm of scattering theory and most will have a good presentation
of second quantization. However, the discussion of time-dependent quantum dynam-
ics is typically limited to the spread of a Gaussian wave packet, the dispersionless
evolution of a Gaussian in a parabolic potential, and time-dependent perturbation
theory leading to Fermi’s golden rule.
This book grew out of the need to fill a glaring gap between the standard quantum
mechanics textbooks and more specialized texts. It has evolved out of a series of
lecture notes for a course on this topic that I have presented intermittently over the
past decade, and it has grown out of my own attempts to study the underlying physics
of quantum relaxation dynamics as applied to chemical systems. For certain, this book
draws from a variety of deep wells.
One significant focus of modern chemical physics is the experimental detection
of quantum dynamical processes that occur in chemical systems, typically in a con-
densed phase environment. With the rapid advance of multiphoton spectroscopies,
we are beginning to probe some of nature’s most important processes, such as the
light-harvesting mechanism in photosynthetic systems or the mechanism of photo-
damage to DNA. We have also turned these tools to study similar ultrafast processes
in nanoscale materials that may eventually be used for artificial photosynthetic sys-
tems, electronic switches, or light sources. Understanding these systems requires an
in-depth knowledge of time-dependent quantum mechanics beyond what is presented
in a typical graduate-level course.
Regarding scope and level, I deliberately chose not to include much detail on
solving the standard models for the harmonic oscillator, hydrogen atom, quantized
angular momentum, and so forth. These appear in all standard textbooks and I saw
little need to rework these models here. A truly comprehensive text would fill at
least two complete bookshelves. However, I do rely upon such models for bases and
approximations, and I summarize the essential features (eigenstates, spectrum, and
so on) as needed. I assume that the reader is familiar with the essential theory of
quantum mechanics as presented in a typical undergraduate-level physical chemistry
course, and we have used this material in our first-year graduate quantum chem-
istry course at the University of Houston. My assumption is that students are ac-
quainted with the notion of quantization and its role in molecular spectroscopy.
Applications and codes for further illustration can be found on the accompanying Web
site (http://k2.chem.uh.edu/quantum dynamics). A solutions manual is also available
for download.

Much of this was committed to text over the course of my sabbatical at Cambridge
in 2007, and I wish to thank all the students, postdocs, and colleagues who helped
track down typos, clarify sections, provide figures, and so on. I thank the editors at
CRC/Taylor & Francis for keeping me on target to complete this. I also thank the
postdocs and graduate students in my group for contributing figures, proofreading,
and working problems.

Eric R. Bittner
Cambridge, U.K. & Houston, Texas
About the Author
Eric Bittner is currently the John and Rebecca Moores Distinguished Professor
of chemical physics at the University of Houston. He received his PhD from the
University of Chicago in 1994 and was a National Science Foundation Postdoctoral
Fellow at the University of Texas at Austin and at Stanford University before moving
to the University of Houston in 1997. His accolades include an NSF Career Award and
a Guggenheim Fellowship. He has also held visiting appointments at the University
of Cambridge, the École Normale Supérieure, Paris, and at Los Alamos National
Lab. His research is focused in the areas of quantum dynamics as applied to organic
polymer semiconductors, organic light-emitting diodes (OLEDs), solar cells, and
energy transport in biological systems.

1 Survey of Classical Mechanics
Quantum mechanics is in many ways the culmination of hundreds of years of work
and thought about how mechanical things move and behave. Since ancient
times, scientists have wondered about the structure of matter and have tried to develop
a generalized and underlying theory that governs how matter moves at all length scales.
For ordinary objects, the rules of motion are very simple. By ordinary, I mean
objects that are more or less on the same length and mass scale as you and I, say
(conservatively) 10⁻⁷ m to 10⁶ m and 10⁻²⁵ g to 10⁸ g, moving at less than 20% of the
speed of light. In other words, almost everything you can see and touch and hold
obeys what are called classical laws of motion. The term classical means that the
basic principles of this class of motion have their foundation in antiquity. Classical
mechanics is an extremely well-developed area of physics. While you may think that
because classical mechanics has been studied extensively for hundreds of years there
really is little new development in this field, it remains a vital and extremely active area
of research. Why? Because the majority of the universe “lives” in a dimensional realm
where classical mechanics is extremely valid. Classical mechanics is the workhorse
for atomistic simulations of fluids, proteins, and polymers. It provides the basis for
understanding chaotic systems. It also provides a useful foundation of many of the
concepts in quantum mechanics.
Quantum mechanics provides a description of how matter behaves at very small
length and mass scales, that is, the realm of atoms, molecules, and below. It has been
developed over the past century to explain a series of experiments on atomic systems
that could not be explained using purely classical treatments. The advent of quantum
mechanics forced us to look beyond the classical theories. However, it was not a drastic
and complete departure. At some point, the two theories must correspond so that
classical mechanics is the limiting behavior of quantum mechanics for macroscopic
objects. Consequently, many of the concepts we will study in quantum mechanics
have direct analogs to classical mechanics: momentum, angular momentum, time,
potential energy, kinetic energy, and action.
Much as classical music is cast in a particular style, classical mechanics is based
upon the principle that the motion of a body can be reduced to the motion of a point
particle with a given mass m, position x, and velocity v. In this chapter, we will
review some of the concepts of classical mechanics which are necessary for studying
quantum mechanics. We will cast these in forms whereby we can move easily back
and forth between classical and quantum mechanics. We will first discuss Newtonian
motion and cast this into the Lagrangian form. We will then discuss the principle of
least action and Hamiltonian dynamics and the concept of phase space.


1.1 NEWTON’S EQUATIONS OF MOTION


1.1.1 NEWTON’S POSTULATES
Why do things move? Why does an apple fall from a tree? This is usually the first
sort of problem we face in trying to study the motion and dynamics of particles and
develop laws of nature that are independent of a particular situation.
We understand the concept of force. We all have pushed, pulled, or thrown some-
thing. Those actions require an action or force from the muscles in our body. Newton
proposed a set of basic rules or postulates which he thought could describe the rules
that all objects obey under the influence of any kind of force.

Postulate 1.1
Law of Inertia: A free particle always moves without acceleration.

That is, a particle that is not under the influence of an outside force moves along a
straight line at constant speed, or remains at rest.

Postulate 1.2
Law of Motion: The rate of change of an object’s momentum is equal to the force
acting upon it.
dp/dt = F  (1.1)
This is equivalent to F = ma, where a = dv/dt is the acceleration. Note that in
Newton’s first postulate, we assume that the mass does not change with time.

Postulate 1.3
Law of Action: For every action, there is an equal and opposite reaction.

F12 = −F21  (1.2)

This is to say that if particle 1 pushes on particle 2 with force F, then particle 2
pushes on particle 1 with a force −F. In SI units, the unit of force is the Newton,
1 N = 1 kg · m · s⁻².
Newton’s Principia set the theoretical basis of mathematical mechanics and anal-
ysis of physical bodies. The equation that force equals mass times acceleration is the
fundamental equation of classical mechanics. Stated mathematically,

m ẍ = f (x) (1.3)

The dots refer to differentiation with respect to time. We will use this notation for time
derivatives. We may also use x′(t) or dx/dt as well. So,

ẍ = d²x/dt²  (1.4)
For now we are limiting ourselves to one particle moving in one dimension. For
motion in more dimensions, we need to introduce vector components. In Cartesian
coordinates, Newton’s equations are


m ẍ = f x (x, y, z) (1.5)
m ÿ = f y (x, y, z) (1.6)
m z̈ = f z (x, y, z) (1.7)
where the force vector f(x, y, z) has components in all three dimensions and varies
with location. We can also define a position vector x = (x, y, z) and velocity vector
v = (ẋ, ẏ, ż). We can also replace the second-order differential equation with two
first-order equations
ẋ = vx (1.8)
v̇ x = f x /m (1.9)
These, along with the initial conditions x(0) and v(0), are all that are needed to solve
for the motion of a particle with mass m given a force f . We could have chosen two
endpoints as well and asked, What path must the particle take to get from one point
to the next? Let us consider some elementary solutions.
First, the case in which f = 0 and ẍ = 0. Thus, v = ẋ = const. So, unless there
is an applied force, the velocity of a particle will remain unchanged.
Second, we consider the case of a linear force, f = −kx. This is the restoring force
for a spring; such force laws are termed Hooke's law, and k is termed the force
constant. Our equations are
ẋ = vx (1.10)
v̇ x = −(k/m)x (1.11)
or ẍ = −(k/m)x. So we want some function which is its own second derivative
multiplied by some number. The cosine and sine functions have this property, so let
us try
x(t) = A cos(at) + B sin(bt) (1.12)
Taking time derivatives,
ẋ(t) = −aA sin(at) + bB cos(bt)  (1.13)
ẍ(t) = −a²A cos(at) − b²B sin(bt)  (1.14)

So we get the required result if a = b = √(k/m), leaving A and B undetermined. Thus,
we need two initial conditions to specify these coefficients. Let us pick x(0) = xo and
v(0) = 0. Thus, x(0) = A = xo and B = 0. Notice that the term √(k/m) has units of
angular frequency,

ω = √(k/m)  (1.15)
So, our equations of motion are
x(t) = xo cos(ωt) (1.16)
v(t) = −xo ω sin(ωt) (1.17)
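As a quick numerical check of Equations (1.10)–(1.11) and the solution (1.16), one can integrate the first-order system with a velocity-Verlet step and compare against xo cos(ωt). This is an illustrative sketch only; the mass, force constant, and step size below are arbitrary choices, not values from the text.

```python
import math

def harmonic_trajectory(x0, v0, k, m, dt, nsteps):
    """Integrate x' = v, v' = -(k/m) x (Eqs. 1.10-1.11) with velocity Verlet."""
    x, v = x0, v0
    traj = [(0.0, x)]
    for i in range(1, nsteps + 1):
        a = -(k / m) * x
        x += v * dt + 0.5 * a * dt * dt          # position update
        v += 0.5 * (a + (-(k / m) * x)) * dt     # velocity update with new force
        traj.append((i * dt, x))
    return traj

k, m, x0 = 2.0, 0.5, 1.0              # illustrative parameters
omega = math.sqrt(k / m)              # Eq. (1.15)
t, x = harmonic_trajectory(x0, 0.0, k, m, dt=1e-3, nsteps=5000)[-1]
print(x, x0 * math.cos(omega * t))    # numerical vs. analytic solution, Eq. (1.16)
```

With the initial conditions x(0) = xo and v(0) = 0, the integrated trajectory tracks the analytic cosine solution to within the integrator's O(dt²) accuracy.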

Let us now consider a two-dimensional example where we have a particle launched


upwards at some initial velocity and we wish to predict where it will land. We shall
neglect frictional forces.
The equations of motion in each direction are as follows. In the vertical direction,

m ÿ = −mg (1.18)

where g is the gravitational constant and the force −mg is the attractive force due to
gravity. In x, we have

m ẍ = 0 (1.19)

since there are no net forces acting in the x direction. Hence, we can solve the x
equation immediately since v̇ x = 0 and thus, x(t) = vx (0)t + xo = vo t cos(φ). For
the y equation, denote v y = ẏ,
m dv_y/dt = −mg  (1.20)
Integrating, v y = −gt + const. Evaluating this at t = 0, v y (0) = vo sin(φ) = const.
Thus,

v y (t) = −gt + vo sin(φ) (1.21)

This we can integrate as


∫ dy = ∫ (−gt + vo sin(φ)) dt  (1.22)

that is,
y = vo sin(φ) t − (g/2) t²  (1.23)
So the trajectory in y is parabolic. To determine the point of impact, we seek the roots
of the equation
vo sin(φ) t − (g/2) t² = 0  (1.24)

[Figure: a projectile launched with initial speed vo at angle φ above the horizontal, landing a distance X downrange.]

Either t = 0 or
tI = (2vo/g) sin(φ)  (1.25)

We can now ask this question: What angle do we need to point our cannon to hit a
target X meters away? In time t I the cannon ball will travel a distance x = vo cos(φ)t I .
Substituting our expression for the impact time:
X = (2vo²/g) cos(φ) sin(φ) = (vo²/g) sin(2φ)  (1.26)
Thus,
sin(2φ) = (g/vo²) X  (1.27)
One can also see that the maximum range is obtained when φ = π/4.
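Equations (1.26)–(1.27) amount to a small targeting routine: given the range X and launch speed vo, solve for φ. The function name and the numbers below are our own illustrative choices.

```python
import math

def launch_angle(X, v0, g=9.81):
    """Launch angle (radians) to hit a target X meters downrange, from Eq. (1.27).

    Returns the low-trajectory root; (pi/2 - phi) is the high-trajectory root.
    """
    s = g * X / v0**2
    if s > 1.0:
        raise ValueError("target lies beyond the maximum range v0**2 / g")
    return 0.5 * math.asin(s)

v0, X = 30.0, 60.0                                # illustrative speed (m/s), range (m)
phi = launch_angle(X, v0)
range_back = (v0**2 / 9.81) * math.sin(2 * phi)   # plug back into Eq. (1.26)
print(math.degrees(phi), range_back)
```

Substituting the recovered angle back into Equation (1.26) returns the requested range, and the maximum range vo²/g corresponds to sin(2φ) = 1, i.e., φ = π/4.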

1.2 LAGRANGIAN MECHANICS


1.2.1 THE PRINCIPLE OF LEAST ACTION
The most general form of the law governing the motion of a mass is the principle of
least action or Hamilton’s principle. The basic idea is that every mechanical system
is described by a single function of coordinate, velocity, and time: L(x, ẋ, t) and that
the motion of the particle is such that certain conditions are satisfied. That condition
is that the time integral of this function
S = ∫_{to}^{tf} L(x, ẋ, t) dt  (1.28)

takes the least possible value given a path that starts at xo at the initial time and ends
at x f at the final time.
Let us take x(t) to be a function for which S is minimized. This means that S
must increase for any variation about this path, x(t) + δx(t). Since the endpoints are
specified, δx(to) = δx(tf) = 0 and the change in S upon replacement of x(t) with
x(t) + δx(t) is
δS = ∫_{to}^{tf} L(x + δx, ẋ + δẋ, t) dt − ∫_{to}^{tf} L(x, ẋ, t) dt = 0  (1.29)

This is zero because S is a minimum. Now, we can expand the integrand in the first
term
L(x + δx, ẋ + δẋ, t) = L(x, ẋ, t) + (∂L/∂x) δx + (∂L/∂ẋ) δẋ  (1.30)
Thus, we have

∫_{to}^{tf} [(∂L/∂x) δx + (∂L/∂ẋ) δẋ] dt = 0  (1.31)
Since δ ẋ = dδx/dt and integrating the second term by parts
δS = [(∂L/∂ẋ) δx]_{to}^{tf} + ∫_{to}^{tf} [∂L/∂x − (d/dt)(∂L/∂ẋ)] δx dt = 0  (1.32)

The surface term vanishes because of the condition imposed above. This leaves the
integral. It too must vanish and the only way for this to happen is if the integrand
itself vanishes. Thus we have
∂L/∂x − (d/dt)(∂L/∂ẋ) = 0  (1.33)
L is known as the Lagrangian. Before moving on, we consider the case of a free
particle. The Lagrangian in this case must be independent of the position of the particle
since a freely moving particle defines an inertial frame. Since space is isotropic, L
must depend upon only the magnitude of v and not its direction. Hence,

L = L(v 2 ) (1.34)

Since L is independent of x, ∂ L/∂ x = 0, so the Lagrange equation is


(d/dt)(∂L/∂v) = 0  (1.35)
So, ∂L/∂v = const, which leads us to conclude that L is quadratic in v. In fact,

L = (m/2) v²    (1.36)

which is the kinetic energy for a particle,

T = (1/2) m v² = (1/2) m ẋ²    (1.37)
For a particle moving in a potential field V , the Lagrangian is given by

L = T − V    (1.38)

L has units of energy and gives the difference between the energy of motion and the
energy of location.
This leads to the equations of motion:
(d/dt)(∂L/∂v) = ∂L/∂x    (1.39)
Substituting L = T − V yields
m v̇ = −∂V/∂x    (1.40)
which is identical to Newton’s equations given above once we identify the force as
the minus of the derivative of the potential. For the free particle, v = const. Thus,
S = ∫_{to}^{tf} (m/2) v² dt = (m/2) v² (tf − to)    (1.41)
You may be wondering at this point why we needed a new function and derived
all this from some minimization principle. The reason is that for some systems we
have constraints on the type of motion they can undertake. For example, there may be
bonds, hinges, and other mechanical hindrances that limit the range of motion a given
particle can take. The Lagrangian formalism provides a mechanism for incorporating
these extra effects in a consistent and correct way. In fact we will use this principle
later in deriving a variational solution to the Schrödinger equation by constraining
the wave function solutions to be orthonormal.
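The least-action statement above lends itself to a direct numerical check. The sketch below is illustrative only (unit mass and force constant are assumed): it discretizes the action integral of Equation 1.28 for a harmonic oscillator and confirms that the classical path x(t) = sin t has a smaller action than paths displaced by a variation that vanishes at the endpoints.

```python
import math

def action(path, dt, m=1.0, k=1.0):
    """Discretized action S ~ sum L(x, xdot) dt, with L = T - V for an oscillator."""
    S = 0.0
    for i in range(len(path) - 1):
        xdot = (path[i + 1] - path[i]) / dt        # finite-difference velocity
        xmid = 0.5 * (path[i] + path[i + 1])       # midpoint position
        S += (0.5 * m * xdot ** 2 - 0.5 * k * xmid ** 2) * dt
    return S

# Classical path for m = k = 1 (omega = 1) with x(0) = 0, x(1) = sin(1): x(t) = sin t
T, N = 1.0, 400
dt = T / N
t = [i * dt for i in range(N + 1)]
classical = [math.sin(ti) for ti in t]

# Variations that vanish at both endpoints, as required: delta x = eps * sin(pi t / T)
S0 = action(classical, dt)
S_up = action([x + 0.05 * math.sin(math.pi * ti / T) for x, ti in zip(classical, t)], dt)
S_dn = action([x - 0.05 * math.sin(math.pi * ti / T) for x, ti in zip(classical, t)], dt)
```

Either sign of the variation raises the action, as the minimum principle requires.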
Lastly, it is interesting to note that v² = (dl/dt)² = (dl)²/(dt)², where dl is the
element of an arc in a given coordinate system. Thus, within the Lagrangian
formalism it is easy to convert from one coordinate system to another. For example, in
Cartesian coordinates, dl² = dx² + dy² + dz², and thus v² = ẋ² + ẏ² + ż². In cylindrical
coordinates, dl² = dr² + r²dφ² + dz², and we have the Lagrangian

L = (m/2)(ṙ² + r²φ̇² + ż²)    (1.42)

and for spherical coordinates, dl² = dr² + r²dθ² + r²sin²θ dφ²; hence,

L = (m/2)(ṙ² + r²θ̇² + r²sin²θ φ̇²)    (1.43)

1.2.2 EXAMPLE: THREE-DIMENSIONAL HARMONIC OSCILLATOR IN SPHERICAL COORDINATES

Here we take the potential energy to be a function of r alone (isotropic),

V(r) = kr²/2    (1.44)
Thus, the Lagrangian in Cartesian coordinates is

L = (m/2)(ẋ² + ẏ² + ż²) − (k/2) r²    (1.45)

Since r² = x² + y² + z², we could easily solve this problem in Cartesian space since

L = (m/2)(ẋ² + ẏ² + ż²) − (k/2)(x² + y² + z²)    (1.46)

  = [ (m/2)ẋ² − (k/2)x² ] + [ (m/2)ẏ² − (k/2)y² ] + [ (m/2)ż² − (k/2)z² ]    (1.47)
and we see that the system is separable into three independent oscillators. To convert
to spherical polar coordinates, we use

x = r sin θ cos φ    (1.48)
y = r sin θ sin φ    (1.49)
z = r cos θ    (1.50)

and the arc length given above:

L = (m/2)(ṙ² + r²θ̇² + r²sin²θ φ̇²) − (k/2) r²    (1.51)
FIGURE 1.1 Vector diagram for motion in central forces. The particle’s motion is along the
Z axis, which lies in the plane of the page.

The equations of motion are

(d/dt)(∂L/∂φ̇) − ∂L/∂φ = (d/dt)(m r² sin²θ φ̇) = 0    (1.52)

(d/dt)(∂L/∂θ̇) − ∂L/∂θ = (d/dt)(m r² θ̇) − m r² sin θ cos θ φ̇² = 0    (1.53)

(d/dt)(∂L/∂ṙ) − ∂L/∂r = (d/dt)(m ṙ) − m r θ̇² − m r sin²θ φ̇² + k r = 0    (1.54)

We now prove that the motion of a particle in a central force field lies in a plane
containing the origin. The force acting on the particle at any given time is in a direction
toward the origin. Now, place an arbitrary Cartesian frame centered about the particle
with the z axis parallel to the direction of motion as sketched in Figure 1.1. Note
that the y axis is perpendicular to the plane of the page, and hence, there is no force
component in that direction. Consequently, the motion of the particle is constrained
to lie in the zx plane, that is, the plane of the page, and there is no force component
that will take the particle out of this plane.
Let us make a change of coordinates by rotating the original frame to a new one
whereby the new z  is perpendicular to the plane containing the initial position and
velocity vectors. In Figure 1.1, this new z  axis would be perpendicular to the page
and would contain the y axis we placed on the moving particle. In terms of these new
coordinates, the Lagrangian will have the same form as previously since our initial
choice of axis was arbitrary. However, now we have some additional constraints.
Because the motion is now constrained to lie in the x  y  plane, θ  = π/2 is a constant,
and θ˙ = 0. Thus cos(π/2) = 0 and sin(π/2) = 1 in the previous equations. From the
equations for φ we find

(d/dt)(m r² φ̇) = 0    (1.55)

or

m r² φ̇ = const = pφ    (1.56)
This we can put into the equation for r:

(d/dt)(m ṙ) − m r φ̇² + k r = 0    (1.57)

(d/dt)(m ṙ) − pφ²/(m r³) + k r = 0    (1.58)
where we notice that −pφ²/(m r³) is the centrifugal force. Taking the last equation,
multiplying by ṙ, and then integrating with respect to time gives

ṙ² = −pφ²/(m²r²) − kr² + b    (1.59)

that is,

ṙ = √( −pφ²/(m²r²) − kr² + b )    (1.60)
Integrating once again with respect to time,

t − to = ∫ dr / √( −pφ²/(m²r²) − kr² + b )    (1.61)

      = ∫ r dr / √( −pφ²/m² − kr⁴ + br² )    (1.62)

      = (1/2) ∫ dx / √( a + bx + cx² )    (1.63)

where x = r², a = −pφ²/m², b is the constant of integration, and c = −k. This is a
standard integral and we can evaluate it to find
r² = (1/2k) ( b + A sin(ω(t − to)) )    (1.64)
Z

Y Y´
10 Quantum Dynamics: Applications in Biological and Materials Systems

where

A = √( b² − ω²pφ²/m² )    (1.65)
What we see then is that r follows an elliptical path in a plane determined by the
initial velocity.
This example also illustrates another important point that has tremendous impact
on molecular quantum mechanics, namely, that the angular momentum about the axis
of rotation is conserved. We can choose any axis we want. In order to avoid confusion,
let us define χ as the angular rotation about the body-fixed Z  axis and φ as angular
rotation about the original Z axis. So our conservation equations are

m r² χ̇ = pχ    (1.66)

about the Z′ axis and

m r² sin²θ φ̇ = pφ    (1.67)

for some arbitrary fixed Z axis. The angle θ will also have an angular momentum
associated with it, pθ = m r²θ̇, but we do not have an associated conservation principle
for this term since it varies with φ. We can connect pχ with pθ and pφ about the other
axis via

pχ dχ = pθ dθ + pφ dφ    (1.68)

Consequently,

m r² χ̇ dχ = m r² ( φ̇ sin²θ dφ + θ̇ dθ )    (1.69)

Here we see that the angular momentum vector remains fixed in space in the
absence of any external forces. Once an object starts spinning, its axis of rotation
remains pointing in a given direction unless something acts upon it (a torque); in essence,
in classical mechanics we can fully specify Lx, Ly, and Lz as constants of the motion
since dL/dt = 0. In a later chapter, we will cover the quantum mechanics of rotations
in much more detail. In the quantum case, we will find that one cannot make such a
precise specification of the angular momentum vector for systems with low angular
momentum. We will, however, recover the classical limit in the end as we consider
the limit of large angular momenta.
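The statement that the angular momentum vector stays fixed for a central force is easy to verify numerically. The sketch below is illustrative only (unit mass and the isotropic force F = −kr are assumed): it integrates the motion with a velocity-Verlet stepper and checks that r × v is unchanged and that the trajectory stays in the plane perpendicular to it.

```python
def accel(r, k=1.0, m=1.0):
    """Isotropic harmonic force F = -k r (per unit mass)."""
    return [-(k / m) * x for x in r]

def verlet(r, v, dt):
    """One velocity-Verlet step."""
    a = accel(r)
    r = [x + vx * dt + 0.5 * ax * dt * dt for x, vx, ax in zip(r, v, a)]
    a2 = accel(r)
    v = [vx + 0.5 * (ax + bx) * dt for vx, ax, bx in zip(v, a, a2)]
    return r, v

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

r, v = [1.0, 0.0, 0.5], [0.0, 1.0, -0.2]
M0 = cross(r, v)                      # angular momentum per unit mass at t = 0
for _ in range(20000):
    r, v = verlet(r, v, 1e-3)
M1 = cross(r, v)

drift = max(abs(a - b) for a, b in zip(M0, M1))
planar = abs(sum(ri * Mi for ri, Mi in zip(r, M0)))   # r remains perpendicular to M
```

Both the drift in M and the out-of-plane component of r stay at the level of roundoff.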

1.3 CONSERVATION LAWS


We just encountered one extremely important concept in mechanics, namely, that
some quantities are conserved if there is an underlying symmetry. Next, we consider
a conservation law arising from the homogeneity of time. For a closed dynamical
system, the Lagrangian does not explicitly depend upon time. Thus we can write
dL/dt = (∂L/∂x) ẋ + (∂L/∂ẋ) ẍ    (1.70)
Replacing ∂L/∂x with Lagrange's equation, we obtain

dL/dt = ẋ (d/dt)(∂L/∂ẋ) + (∂L/∂ẋ) ẍ    (1.71)

     = (d/dt) [ ẋ (∂L/∂ẋ) ]    (1.72)
Now, rearranging this a bit,

(d/dt) [ ẋ (∂L/∂ẋ) − L ] = 0    (1.73)

So, we can take the quantity in the parentheses to be a constant, and

E = ẋ (∂L/∂ẋ) − L = const    (1.74)

is an integral of the motion. This is the energy of the system. L can be written in the
form L = T − V where T is a quadratic function of the velocities, and using Euler's
theorem on homogeneous functions:

ẋ (∂L/∂ẋ) = ẋ (∂T/∂ẋ) = 2T    (1.75)
This gives

E = T + V    (1.76)

which says that the energy of the system can be written as the sum of two different
terms: the kinetic energy or energy of motion and the potential energy or the energy
of location.
One can also prove that linear momentum is conserved when space is homoge-
neous. That is, when we translate our system by some arbitrary amount ε, our
dynamical quantities must remain unchanged. We will prove this in the problem sets.

1.3.1 CONSERVATIVE FORCES


A conservative force has nothing to do with a particular political bent. In a loose
sense, it is a force for which the total energy is conserved. More precisely, a conservative
force acts in such a way that the potential energy of an object does not depend upon
the path taken by the object. Recall that work is force times the distance moved.
More precisely, work is an integral of the force along a given line or trajectory. In one
dimension,

W = ∫_a^b F(x) dx    (1.77)

where a and b are the beginning and end of the path. In multiple dimensions, we have to
extend this concept so that the integral is taken along some arbitrary path.
12 Quantum Dynamics: Applications in Biological and Materials Systems

Suppose we have a curve, C, connecting two points either on a plane or in a
volume. This curve may twist and bend, but it is fixed at the two endpoints and our
integral must be taken along C from one endpoint to the other. First, let us cut C into
N short straight segments of length Δsi so that the segments {Δs1 ⋯ ΔsN} make up
a piecewise continuous approximation for C. The work performed along any one of
the segments can be approximated as

Wi = Δsi F(xi, yi, zi)    (1.78)

Consequently, the total work in moving along C is approximately

W ≈ Σ_i^N Δsi F(xi, yi, zi)    (1.79)

Taking Δs → 0 and N → ∞, we can write the work performed in moving along
path C as

W = lim_{Δsi → 0} Σ_i^N Δsi F(xi, yi, zi) = ∫_C F(s) ds    (1.80)

Now, suppose the force can be written as the gradient of some scalar potential
function,

F = ∇G    (1.81)

and that our curve C can be parametrized via a single variable t. For example, t could
be the length traveled along C or the time. Thus,

dG/dt = ∇G · (ds/dt) = F(s(t)) · (ds/dt)    (1.82)

Inserting this into the work integral,

W = ∫_C F(s) · ds = ∫_C F(s(t)) · (ds/dt) dt = ∫_C (dG/dt) dt = G(b) − G(a)    (1.83)

where a and b are the two endpoints. As you can see, the integral now depends only
upon the two endpoints and does not depend upon the particular details of path C.
Suppose an object starts at point A and moves about some arbitrary closed path
P such that after some time it is again at point A. It may still be moving, but the net
work done on the object is exactly zero. That is, for a conservative force,

W = ∮ F(s) · ds = 0    (1.84)

Although most forces encountered in molecular systems are conservative, many
are not, particularly those that depend upon velocity. For such forces, the three criteria
are not mathematically equivalent. For example, a magnetic force will satisfy the first
requirement, but its curl is not defined and it cannot be written as the gradient of a
potential. However, the magnetic force F = q v × B can be counted as conservative
since the force acts perpendicular to the velocity vector v and as such the work is
always zero. Nonconservative forces often arise when we neglect or exclude various
degrees of freedom. For example, for Brownian motion, the Brownian particle feels
a random kick and a viscous drag. These forces arise from the microscopic motion of
the surrounding atoms and molecules in the liquid. If we were to treat their motions
explicitly, the force acting on the Brownian particle would be conservative. Treating
the forces and interactions statistically makes for a far simpler description at the cost
of introducing a nonconservative force.
Example: Let us take for an example a force given by F(x, y) = (x + y) and let
us compute the work along three different paths. First, a path C1 from the origin to
(1, 1); second, along a path C2 from (0, 0) to (1, 0) then to (1, 1); and finally along
a curved parabolic path C3 given by y = x² from the origin to (1, 1). Along C1, we
take s as the distance traveled along C1 so that x = s/√2 and y = s/√2. Thus,

W1 = ∫_{C1} (x + y) ds = √2 ∫_0^{√2} s ds = √2    (1.85)

Moving on to C2, it is easier to break this into two segments. Along the segment from
(0, 0) to (1, 0), x = s and y = 0. Thus,

W2^(1) = ∫_0^1 s ds = 1/2    (1.86)
Along the next segment from (1, 0) to (1, 1), x = 1 and y = s, so we integrate

W2^(2) = ∫_0^1 (1 + s) ds = 3/2    (1.87)
then add W2 = W2^(1) + W2^(2) = 2. Finally, along the parabolic path, let x = s and
y = s² and we integrate

W3 = ∫_0^1 (s + s²) ds = 5/6    (1.88)
Clearly, we are not dealing with a conservative force in this case! In fact, in most
cases, line integrals depend upon the path taken.
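The three work integrals above can be reproduced numerically. In the sketch below (illustrative only; the integrands and parametrizations are exactly those used in the example), a simple trapezoid rule recovers W1 = √2, W2 = 2, and W3 = 5/6.

```python
import math

def integrate(f, a, b, n=20000):
    """Composite trapezoid rule for the parametrized work integral of f(s) over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# C1: arc length s runs 0..sqrt(2), with x = y = s/sqrt(2), so x + y = sqrt(2) s
W1 = integrate(lambda s: math.sqrt(2.0) * s, 0.0, math.sqrt(2.0))
# C2: (0,0) -> (1,0) with x = s, y = 0; then (1,0) -> (1,1) with x = 1, y = s
W2 = integrate(lambda s: s, 0.0, 1.0) + integrate(lambda s: 1.0 + s, 0.0, 1.0)
# C3: the parabolic path parametrized as x = s, y = s^2
W3 = integrate(lambda s: s + s * s, 0.0, 1.0)
```

The three values differ, confirming that this force is not conservative.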

1.4 HAMILTONIAN DYNAMICS


Hamiltonian dynamics is a further generalization of classical dynamics and provides
a crucial link with quantum mechanics. Hamilton’s function, H , is written in terms of
the particle’s position and momentum, H = H ( p, q). It is related to the Lagrangian via
H = ẋ p − L(x, ẋ) (1.89)
Taking the derivative of H with respect to x,

∂H/∂x = −∂L/∂x = −ṗ    (1.90)
Differentiation with respect to p gives

∂H/∂p = q̇    (1.91)

These last two equations give the conservation conditions in the Hamiltonian for-
malism. If H is independent of the position of the particle, then the generalized
momentum, p, is constant in time. If the potential energy is independent of time, the
Hamiltonian gives the total energy of the system,

H = T + V    (1.92)

It is often easier and more convenient to express Newton’s equations of motion


as two first-order differential equations rather than a single second-order differential
equation. Both are equally valid. However, it is far easier to obtain equations of motion
in other coordinate systems than the x, y, z Cartesian coordinates we work with as
a more general set of equations. For this, we define a more general quantity for the
energy of a system,

H = T (v1 , v2 , . . . , v N ) + V (q1 , q2 , . . . q N ) (1.93)

where T is the kinetic energy that depends upon the velocities of the N particles in
the system and V is the potential energy describing the interaction between all the
particles and any external forces. V is the energy of position whereas T is the energy
of motion. For a single particle moving in three dimensions,

T = (m/2)(vx² + vy² + vz²)    (1.94)

If we write the momentum as px = m vx, then

T = (1/2m)(px² + py² + pz²)    (1.95)
Notice that we can also define the momentum as the velocity derivative of T:

px = ∂T/∂vx    (1.96)
This defines a generalized momentum such that qx is the conjugate coordinate to
px and (qx , px ) are a pair of conjugate variables. This relation between T and px is
important since we can define the canonical momentum in any coordinate frame. In
the Cartesian frame, px = mvx . However, in other frames, this will not be so simple.
We can also define the following relations:

∂H/∂pi = ∂T/∂pi = pi/m = ∂qi/∂t    (1.97)

∂H/∂qi = ∂V/∂qi = −Fi = −∂(m vi)/∂t    (1.98)
where i now denotes a general coordinate (not necessarily x, y, z). In short, we can
write the following equations of motion:
∂H/∂pi = ∂qi/∂t    (1.99)

−∂H/∂qi = ∂pi/∂t    (1.100)
These hold in any coordinate frame and are termed Hamilton’s equations.

Example: Hamilton’s Equations in Polar Coordinates


Let us consider the transformation between polar and two-dimensional Cartesian
coordinates, x and y.
x = r cos θ and y = r sin θ (1.101)
Our Hamiltonian in x, y coordinates is

H = (m/2)(vx² + vy²) + V(x, y)    (1.102)
Thus,

vx = dx/dt = vr cos θ − vθ r sin θ    (1.103)

vy = dy/dt = vr sin θ + vθ r cos θ    (1.104)

v² = vx² + vy² = vr² + vθ² r²    (1.105)

Thus,

H = (m/2)(vr² + vθ² r²) + V(r, θ)    (1.106)
We can now proceed to write this in terms of the conjugate variables,

pr = ∂T/∂vr = m vr    (1.107)

pθ = ∂T/∂vθ = m vθ r²    (1.108)
Note that pθ is the angular momentum of the system. Thus, we can write

H = (1/2m) ( pr² + pθ²/r² ) + V(r, θ)    (1.109)
Next, consider the case where V(r, θ) has no angular dependence. Thus,

∂pr/∂t = −∂H/∂r = pθ²/(m r³) − ∂V/∂r    (1.110)

∂pθ/∂t = −∂H/∂θ = −∂V/∂θ = 0    (1.111)

∂r/∂t = ∂H/∂pr = pr/m    (1.112)

∂θ/∂t = ∂H/∂pθ = pθ/(m r²)    (1.113)
Notice that pθ does not change in time; that is, the angular momentum is a constant
of the motion. The radial force we obtain from ∂pr/∂t = Fr is

Fr = pθ²/(m r³) − ∂V/∂r    (1.114)
The first term is constant (since pθ = const) and represents the radial force produced
by the angular momentum. It always points outward toward larger values of r and
is termed the centrifugal force. The second term is the force due to the attraction
between the moving object and the origin. It could be the gravitational forces, the
Coulombic force between charged particles, and so forth. Using the expression for
pθ (Equation 1.111), we can also write the force equation as

Fr = (m vθ r²)²/(m r³) − ∂V/∂r = m vθ² r − ∂V/∂r    (1.115)
If the two forces counterbalance each other, then the net force is Fr = 0 and we have

m vθ² r = ∂V/∂r    (1.116)

Since vθ = θ̇ = const, θ = ωt + const, where ω is the angular velocity. Using
vθ = ω, we can write

m ω² r = ∂V/∂r    (1.117)
Finally, we note that the linear velocity is related to the angular velocity by v = ωr, so

m v²/r = ∂V/∂r    (1.118)

Hence we have a relation between the kinetic energy T and the potential energy V
for a centro-symmetric system:

m v² = 2T = r ∂V/∂r    (1.119)
This relation is extremely useful in deriving the classical orbital motion for Coulomb-
bound charges as in the hydrogen atom or for planetary motion.
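Hamilton's equations 1.110 through 1.113 can be integrated directly. The sketch below is illustrative only (unit mass, V = kr²/2 with k = 1, and a fourth-order Runge-Kutta stepper are assumed): it confirms numerically that pθ and the total energy are constants of the motion.

```python
def derivs(s, m=1.0, k=1.0):
    """Hamilton's equations for H = (p_r^2 + p_theta^2 / r^2)/(2m) + k r^2 / 2."""
    r, theta, pr, ptheta = s
    return (pr / m,                                 # dr/dt       =  dH/dp_r
            ptheta / (m * r * r),                   # dtheta/dt   =  dH/dp_theta
            ptheta ** 2 / (m * r ** 3) - k * r,     # dp_r/dt     = -dH/dr
            0.0)                                    # dp_theta/dt = -dH/dtheta = 0

def rk4(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(s)
    k2 = derivs(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
    k3 = derivs(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
    k4 = derivs(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6.0 * (w + 2 * x + 2 * y + z)
                 for a, w, x, y, z in zip(s, k1, k2, k3, k4))

def energy(s, m=1.0, k=1.0):
    r, theta, pr, ptheta = s
    return (pr ** 2 + (ptheta / r) ** 2) / (2 * m) + 0.5 * k * r ** 2

s = (1.0, 0.0, 0.2, 0.8)          # (r, theta, p_r, p_theta)
E0 = energy(s)
for _ in range(5000):
    s = rk4(s, 1e-3)
```

Because ∂H/∂θ = 0, the stepper never changes pθ at all, while the energy drifts only at the integrator's truncation level.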
1.4.1 PHASE PLANE ANALYSIS


Often we cannot determine the closed-form solution to a given problem and we need
to turn to more approximate methods or even graphical methods. Here, we will look
at an extremely useful way to analyze a system of equations by plotting their time
derivatives.
First, let us look at the oscillator we just studied. We can define a vector s =
(ẋ, v̇) = (v, −(k/m)x) and plot the vector field. Figure 1.2 shows how to do this in
Mathematica. The superimposed curve is one trajectory and the arrows give the “flow”
of trajectories on the phase plane.
We can examine more complex behavior using this procedure. For example, the
simple pendulum obeys the equation ẍ = −ω² sin x. This can be reduced to two
first-order equations: ẋ = v and v̇ = −ω² sin(x).
We can approximate the motion of the pendulum for small displacements by
expanding the pendulum's force about x = 0:

−ω² sin(x) = −ω² ( x − x³/6 + ⋯ )    (1.120)

For small x, the cubic term is very small, and we have

v̇ = −ω² x = −(k/m) x    (1.121)

which is the equation for harmonic motion. So, for small initial displacements, we see
that the pendulum oscillates back and forth with an angular frequency ω. For large
initial displacements, xo = π, or if we impart a large enough initial velocity on the
system, the pendulum does not oscillate back and forth but instead undergoes rotational
motion (spinning!) in one direction or the other.
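Both regimes can be seen numerically. The sketch below is illustrative only (ω = 1 and a leapfrog stepper are assumed): it integrates ẍ = −ω² sin x, recovering a period near 2π/ω for a small displacement and monotonic circulation once the energy exceeds the separatrix.

```python
import math

def leapfrog(x, v, dt, omega=1.0):
    """One kick-drift-kick step for the pendulum equation x'' = -omega^2 sin x."""
    v += 0.5 * dt * (-omega ** 2 * math.sin(x))
    x += dt * v
    v += 0.5 * dt * (-omega ** 2 * math.sin(x))
    return x, v

# Small displacement: period should be close to the harmonic value 2*pi/omega
x, v, dt, t, crossings = 0.01, 0.0, 1e-3, 0.0, []
while len(crossings) < 2:
    xprev = x
    x, v = leapfrog(x, v, dt)
    t += dt
    if xprev > 0.0 >= x:          # downward zero crossing occurs once per period
        crossings.append(t)
period = crossings[1] - crossings[0]

# Large energy: above the separatrix the pendulum circulates instead of oscillating
x2, v2 = 0.0, 2.5                 # separatrix at this x is v = 2*omega
for _ in range(10000):
    x2, v2 = leapfrog(x2, v2, dt)
```

The small-amplitude trajectory closes with period ≈ 2π, while the energetic one winds past x = 2π without ever turning back.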


FIGURE 1.2 Tangent field for simple pendulum with ω = 1. The superimposed curve is a
linear approximation to the pendulum motion.
1.4.2 INTERACTION BETWEEN A CHARGED PARTICLE AND AN ELECTROMAGNETIC FIELD

We consider here a free particle with mass m and charge e in an electromagnetic field.
The Hamiltonian is

H = px ẋ + py ẏ + pz ż − L    (1.122)

  = ẋ (∂L/∂ẋ) + ẏ (∂L/∂ẏ) + ż (∂L/∂ż) − L    (1.123)
Our goal is to write this Hamiltonian in terms of momenta and coordinates.
For a charged particle in a field, the force acting on the particle is the Lorentz
force. Here it is useful to introduce a vector and a scalar potential and to work in
centimeter-gram-second (cgs) units:

F = (e/c) v × (∇ × A) − (e/c) ∂A/∂t − e∇φ    (1.124)
The force in the x direction is given by

Fx = (d/dt)(m ẋ) = (e/c) ( ẏ ∂Ay/∂x + ż ∂Az/∂x )
     − (e/c) ( ẏ ∂Ax/∂y + ż ∂Ax/∂z + ∂Ax/∂t ) − e ∂φ/∂x    (1.125)
with the remaining components given by cyclic permutation. Since
dAx/dt = ∂Ax/∂t + ẋ ∂Ax/∂x + ẏ ∂Ax/∂y + ż ∂Ax/∂z    (1.126)
with the force in x given by

Fx = (e/c) ( ẋ ∂Ax/∂x + ẏ ∂Ay/∂x + ż ∂Az/∂x ) − (e/c) dAx/dt − e ∂φ/∂x
   = ∂/∂x [ (e/c) v·A − eφ ] − (e/c) dAx/dt    (1.127)
and we find that the Lagrangian is

L = (m/2) ẋ² + (m/2) ẏ² + (m/2) ż² + (e/c) v·A − eφ    (1.128)
where φ is a velocity-independent, static potential.
Continuing on, the Hamiltonian is

H = (m/2)(ẋ² + ẏ² + ż²) + eφ    (1.129)

  = (1/2m) [ (m ẋ)² + (m ẏ)² + (m ż)² ] + eφ    (1.130)
The quantities m ẋ are derived from the Lagrangian via the canonical relation

p = ∂L/∂ẋ    (1.131)
From this we find

m ẋ = px − (e/c) Ax    (1.132)

m ẏ = py − (e/c) Ay    (1.133)

m ż = pz − (e/c) Az    (1.134)
and the resulting Hamiltonian is

H = (1/2m) [ (px − (e/c)Ax)² + (py − (e/c)Ay)² + (pz − (e/c)Az)² ] + eφ    (1.135)

We see here an important concept relating the velocity and the momentum. In the
absence of a vector potential, the velocity and the momentum are parallel. However,
when a vector potential is included, the actual velocity of a particle is no longer
parallel to its momentum and is in fact deflected by the vector potential.
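This distinction between velocity and canonical momentum can be made concrete. The sketch below is illustrative only (the Landau gauge A = (−By, 0, 0) and units with m = e = c = B = 1 are assumed): it integrates Hamilton's equations derived from Equation 1.135 for a uniform magnetic field. The orbit closes after one cyclotron period, px is conserved, and a quarter period in the particle moves at unit speed even though its canonical momentum is nearly zero.

```python
import math

# Landau gauge A = (-B y, 0, 0); units with m = e = c = B = 1 give omega_c = 1.
# State s = (x, y, p_x, p_y); from m v = p - (e/c) A we get v_x = p_x + y, v_y = p_y.
def derivs(s):
    x, y, px, py = s
    vx, vy = px + y, py
    return (vx, vy, 0.0, -vx)     # dp_x/dt = 0, dp_y/dt = -dH/dy = -(p_x + y)

def rk4(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(s)
    k2 = derivs(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
    k3 = derivs(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
    k4 = derivs(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6.0 * (p + 2 * q + 2 * r + u)
                 for a, p, q, r, u in zip(s, k1, k2, k3, k4))

s = (0.0, 0.0, 0.0, 1.0)          # start at the origin with p = (0, 1)
N = 10000
dt = 2.0 * math.pi / N            # integrate exactly one cyclotron period
quarter = None
for i in range(N):
    s = rk4(s, dt)
    if i + 1 == N // 4:
        quarter = s               # snapshot a quarter period into the orbit
```

At the quarter-period snapshot the momentum (px, py) is essentially (0, 0) while the x velocity px + y is 1: velocity and momentum are clearly not parallel.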

1.4.3 TIME DEPENDENCE OF A DYNAMICAL VARIABLE


One of the important applications of Hamiltonian mechanics is in the dynamical
evolution of a variable that depends upon p and q, G( p, q). The total derivative
of G is
dG/dt = ∂G/∂t + (∂G/∂q) q̇ + (∂G/∂p) ṗ    (1.136)

From Hamilton’s equations, we have the canonical definitions

q̇ = ∂H/∂p,  ṗ = −∂H/∂q    (1.137)

Thus,

dG/dt = ∂G/∂t + (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q)    (1.138)

dG/dt = ∂G/∂t + {G, H}    (1.139)

where {A, B} is called the Poisson bracket of two dynamical quantities; for G and H:

{G, H} = (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q)    (1.140)

We can also define a linear operator L as generating the Poisson bracket with the
Hamiltonian:

LG = (1/i) {H, G}    (1.141)
so that if G does not depend explicitly upon time,

G(t) = exp(iLt)G(0) (1.142)

where exp(iLt) is the propagator that carried G(0) to G(t).


Also, note that if {G, H } = 0, then dG/dt = 0 so that G is a constant of the
motion. This too, along with the construction of the Poisson bracket, has considerable
importance in the realm of quantum mechanics.
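The Poisson bracket of Equation 1.140 can be evaluated numerically by central differences. The sketch below is illustrative only (a harmonic-oscillator Hamiltonian with m = k = 1 and the test function G = pq are assumed): it checks that {pq, H} = p² − q² and that {H, H} = 0, so H itself is a constant of the motion.

```python
def pb(G, H, q, p, h=1e-5):
    """Poisson bracket {G, H} = dG/dq dH/dp - dG/dp dH/dq via central differences."""
    dGdq = (G(q + h, p) - G(q - h, p)) / (2 * h)
    dGdp = (G(q, p + h) - G(q, p - h)) / (2 * h)
    dHdq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dHdp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dGdq * dHdp - dGdp * dHdq

ham = lambda q, p: 0.5 * p * p + 0.5 * q * q   # harmonic oscillator, m = k = 1
G = lambda q, p: p * q                         # a simple dynamical variable

q0, p0 = 0.7, -0.3
bracket = pb(G, ham, q0, p0)        # analytically {pq, H} = p^2 - q^2
conserved = pb(ham, ham, q0, p0)    # {H, H} = 0 for any H
```

Since every function involved is quadratic, the central differences are essentially exact here.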

1.4.4 VIRIAL THEOREM


Finally, we turn our attention to a concept that has played an important role in both
quantum and classical mechanics. Consider a function G that is a product of linear
momenta and coordinate,

G = pq (1.143)

The time derivative is simply

dG/dt = q ṗ + p q̇    (1.144)
Now, let us take a time average of both sides of this last equation:

⟨ d(pq)/dt ⟩ = lim_{T→∞} (1/T) ∫_0^T [ d(pq)/dt ] dt    (1.145)

           = lim_{T→∞} (1/T) ∫_0^T d(pq)    (1.146)

           = lim_{T→∞} (1/T) [ (pq)_T − (pq)_0 ]    (1.147)

If the trajectories of the system are bounded, both p and q are periodic in time and
are therefore finite. Thus, the average must vanish as T → ∞, giving

⟨p q̇⟩ + ⟨q ṗ⟩ = 0    (1.148)

Since ⟨p q̇⟩ = 2⟨T⟩ and ṗ = F, we have

2⟨T⟩ = −⟨q F⟩    (1.149)

In Cartesian coordinates this leads to

2⟨T⟩ = −⟨ Σ_i xi Fi ⟩    (1.150)

For a conservative system, F = −∇V. Thus, if we have a centro-symmetric
potential given by V = C rⁿ, it is easy to show that

2⟨T⟩ = n⟨V⟩    (1.151)
For the case of the harmonic oscillator, n = 2 and ⟨T⟩ = ⟨V⟩. So, for example,
if we have a total energy equal to kT in this mode, then ⟨T⟩ + ⟨V⟩ = kT and
⟨T⟩ = ⟨V⟩ = kT/2. Moreover, for the interaction between two opposite charges
separated by r, n = −1 and

2⟨T⟩ = −⟨V⟩    (1.152)

1.4.5 ANGULAR MOMENTUM


We noted above that if we have a radial force, then the angular velocity and angular
momentum are constants of the motion. In general, the angular momentum is de-
fined as the cross product between a radial vector locating the particle and its linear
momentum:

M = r × p    (1.153)

Cross products are equivalent to taking the determinant of a matrix:

    | î   ĵ   k̂  |
M = | x   y   z  |    (1.154)
    | px  py  pz |

where î, ĵ, and k̂ are the unit vectors along the x, y, z axes. Evaluating the determinant
gives

M = î(y pz − z py) − ĵ(x pz − z px) + k̂(x py − y px)    (1.155)

  = î Mx + ĵ My + k̂ Mz    (1.156)
For motion in the x–y plane, the only term that remains is the Mz term, indicating
that the angular momentum vector points perpendicular to the plane of rotation,
Mz = (x p y − ypx ) = m(xv y − yvx ) (1.157)
Since we have noted that the angular momentum is a constant of the motion, we must
have d Mz /dt = 0. Let us check:
dMz/dt = m(vx vy − vy vx + x ay − y ax)    (1.158)
where ax = v̇ x is the acceleration in x. Thus,
dMz/dt = (x Fy − y Fx)    (1.159)
If the force is radial, Fx = Fr cos(θ) and Fy = Fr sin(θ). Likewise, x = r cos(θ ) and
y = r sin(θ). Putting this into the equations, we have
dMz/dt = r Fr ( sin(θ) cos(θ) − sin(θ) cos(θ) ) = 0    (1.160)
Taking θ = ωt as above, where ω is the angular frequency, and using vx =
−rω sin(ωt) and vy = +rω cos(ωt), we can also write

Mz = m(x vy − y vx) = m v r ( sin²(ωt) + cos²(ωt) ) = m v r    (1.161)
1.4.6 CLASSICAL MOTION OF AN ELECTRON ABOUT A POSITIVE CHARGE (NUCLEUS)
Now we are ready to describe the motion and energy of a charged particle about
a nucleus. This will provide us with a classical description of the hydrogen atom
and any hydrogenic (hydrogen-like) species. We shall need these results to begin our
discussion of quantum theory.
For an electron bound to a nucleus of charge +Z e, the Coulomb force holding
the electron to its orbit is counterbalanced by the centrifugal force as in the equations
we derived above. The Coulomb force is
F = (1/4πεo) (Z e²/r²)    (1.162)
Note that we shall eventually use units such that 4π εo = 1; for now, we keep the SI
units. Since the Coulomb force counterbalances the centrifugal force,

(1/4πεo)(Z e²/r²) = m r ω² = m v²/r    (1.163)
As above, we can see the virial relation between the kinetic and potential energies:

T = (1/2) m v² = (1/4πεo)(Z e²/2r) = −(1/2) V    (1.164)

where

V = −(1/4πεo)(Z e²/r)    (1.165)
is the Coulomb potential. Since the classical energy is E = T + V, we can write

E = (Z e²/4πεo) ( 1/2r − 1/r ) = −(1/4πεo)(Z e²/2r)    (1.166)

1.4.7 BIRTH OF QUANTUM THEORY


In spite of the elegance of classical mechanics, there is something clearly missing. Ac-
cording to electrodynamics, an accelerating charge radiates electromagnetic energy.
The ramification of this is that if matter is composed of negatively charged electrons
moving about positively charged nuclear cores, all matter would be unstable since
every classically bound electron would spiral inwards towards the nucleus giving off
a burst of x-ray and gamma ray radiation. This is clearly not observed! Moreover,
there is no restriction about which radius we choose for the electron . . . we can have
any energy we want. However, we also know that hydrogen emits and absorbs light
only at specific wavelengths. This fact was demonstrated by Balmer back in 1885
when he showed that all of the absorption (and emission) lines of H could be fit to a
single empirical equation,

λ⁻¹ = 109677 cm⁻¹ ( 1/n² − 1/m² )    (1.167)
where n and m are integers, n = 1, 2, 3, 4, . . . and m = n + 1, n + 2, n + 3, . . . . The
numerical constant 109,677 cm⁻¹ is the Rydberg constant (denoted R_H).
About the turn of the twentieth century, results such as this, along with results demon-
strating the photoelectric effect and Planck's theory of blackbody radiation, indicated
that there was something wrong with the way we viewed the physical world when it
came to atoms and molecules. Niels Bohr attempted to explain the spectral observa-
tions and combine Planck's notion of quantized energies with a radiation-less orbit of
the electron about the nucleus. In doing so, he made a number of remarkable leaps of
faith. Bohr postulated the following:
1. A discrete spectrum implies discrete energy levels and that the energy
absorbed or emitted is the energy difference between these levels,

ΔE = hν = W_f − W_i    (1.168)
2. Electrons undergo transition between these levels via absorbing or emitting
light.
3. Electrons are bound to the nuclei via Coulombic forces and obey classical
mechanics.
4. The Rydberg equation gives us ΔE; hence, since λν = c,

ΔE = hc/λ = hc · 109677 cm⁻¹ ( 1/n² − 1/m² )    (1.169)
5. The energy levels are then

W_n = −hc R_H (1/n²)    (1.170)
6. Correspondence principle: At large values of n, the classical emission
frequency must be equal to the electron’s orbital frequency, as required by
classical electrodynamics.
Now Bohr makes a dramatic leap of faith. The W_n are the quantum energies. Bohr
assumes that

quantum = classical

−hc R_H (1/n²) = −(1/4πεo)(Z e²/2r)    (1.171)
According to Bohr’s correspondence principle, the angular frequency of an elec-
tron for large values of n must be equal to the classical radiation frequency (I now
drop the 4πεo ):

e2
ν=τ (1.172)
4πmr 3
where τ = 1, 2, 3, . . . . For large values of n, the quantum frequency is

ν = R_H c ( −1/n² + 1/(n − τ)² )    (1.173)

  = 2R_H c τ (1 − τ/(2n)) / [ n³ (1 − 2τ/n + τ²/n²) ]    (1.174)
where τ = 1, 2, 3, . . . is the change in quantum number as the electron goes from a
higher energy orbit to a lower energy orbit. For large values of n,

ν = 2R_H c τ / n³    (1.175)
Thus, the frequencies are integer multiples of some fundamental frequency νo. Again,
we use

quantum = classical

2R_H c τ / n³ = τ √( e²/(4π² m r³) )    (1.176)
Now we start canceling terms. To eliminate n we use the expression for the energy
levels,

n = √( R_H hc / |W_n| )    (1.177)

and for r, the classical radius,

W = −e²/2r    (1.178)
which gives r = e²/(2|W|). Turning the crank and eliminating variables where
possible,

R_H = 2π² m e⁴ / (c h³)    (1.179)
which gives the Rydberg constant entirely in terms of fundamental physical constants:
m, mass of electron; c, speed of light; and h, Planck's constant. Thus, we can write
the energy levels in terms of no adjustable parameters:

W_n = −R_H hc / n² = −(2π² m e⁴/h²)(1/n²)    (1.180)
Furthermore, we can go on to show that since

W = −e²/2r = −R_H hc / n²    (1.181)

we can solve for the orbital radius r,

r = (e²/2) n² / (R_H hc)    (1.182)
giving fixed circular orbitals for the electrons:

r_n = n² h²/(4π² m e²) = n² h̄²/(m e²)    (1.183)
where n = 1, 2, 3, . . . . The innermost orbital, with n = 1, is the Bohr radius ao.
Hence, r_n = n² ao.
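These closed-form results can be evaluated numerically. The sketch below is illustrative only (modern CODATA-style constant values in cgs units are assumed): it computes R_H = 2π²me⁴/(ch³), the Bohr radius, and the ground-state energy. Note that with the bare electron mass the formula gives the infinite-nuclear-mass value, about 109,737 cm⁻¹, slightly above the measured 109,677 cm⁻¹, which reflects the finite proton mass.

```python
import math

# Fundamental constants in cgs (Gaussian) units; the numerical values are assumed
m = 9.1093837e-28        # electron mass, g
e = 4.8032047e-10        # electron charge, esu
c = 2.99792458e10        # speed of light, cm/s
h = 6.6260702e-27        # Planck's constant, erg s

R = 2 * math.pi ** 2 * m * e ** 4 / (c * h ** 3)   # Rydberg constant, cm^-1 (Eq. 1.179)
a0 = h ** 2 / (4 * math.pi ** 2 * m * e ** 2)      # Bohr radius r_1, cm (Eq. 1.183)
E1_eV = R * h * c / 1.6021766e-12                  # |W_1| converted from erg to eV
```

The computed ground-state binding energy comes out near the familiar 13.6 eV, and a0 near 0.529 Å.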
Finally, since the electron is orbiting about the proton, it must have angular
momentum. From classical mechanics, M = m r² ω = m v r. Again, following our
prescription,

quantum = classical

−(2π² m e⁴/h²)(1/n²) = −e²/2r    (1.184)
from the energy expression. Now, take

m ω² r = e²/r²    (1.185)

and solve for e²,

e² = m r³ ω²    (1.186)

and plug this back into the energy equation above:

quantum = classical

−(2π² m e⁴/h²)(1/n²) = −m r³ ω²/2r = −M²/(2m r²)    (1.187)
where M is the angular momentum we derived above. Thus,

M² = (2π² m e⁴/h²) · (2m r²/n²)    (1.188)

Taking our expression for the quantized radii from above,

M² = (h/2π)² n² = h̄² n²    (1.189)

Thus, M = h̄ n is the angular momentum. This, too, is quantized in units of Planck's
constant over 2π.

1.4.8 DO THE ELECTRON’S ORBITALS NEED TO BE CIRCULAR ?


Here we pose an interesting question. Is there any reason to believe that the electron’s
orbits need to be strictly circular (or rather elliptical if we account for motion about
the center of mass of the electron-proton system)? It seems strange that only circular
motion would be allowed. Is there a deeper underlying reason for the quantization?
A bit of dimensional analysis indicates that the units of h are energy × time. That is
also equivalent to momentum × length. If we imagine plotting the position versus the
momentum of a particle on the p − x plane, then momentum × length corresponds to
FIGURE 1.3 Closed-loop phase space trajectory corresponding to a bound particle.

some area encompassed by a closed path as shown in Figure 1.3. The closed dashed loop encompasses an area equal to
$$\text{area} = \oint p(x)\,dx \tag{1.190}$$

If we assume that energy is conserved, then the energy is a constant along the dashed loop, $H(p,x) = E = \text{const}$.
Planck required that $E = h\nu n$ for the quantized harmonic oscillator levels, but that equation is applicable only to harmonic systems. If the quantization condition is general, then we should see it appear in a more general context. Let us take a harmonic oscillator system as an example:
$$H(p,x) = \frac{p^2}{2m} + \frac{k}{2}x^2 = E \tag{1.191}$$
This is the equation for an ellipse:
$$1 = \frac{p^2}{2mE} + \frac{k}{2E}x^2 \tag{1.192}$$
with major and minor axes $a = \sqrt{2mE}$ and $b = \sqrt{2E/k}$. The area of an ellipse is
$$A = \pi ab \tag{1.193}$$
Hence, the area enclosed on the px plane (called phase space) is
$$A = \pi\sqrt{2mE}\,\sqrt{2E/k} = 2\pi E\sqrt{\frac{m}{k}} = \frac{2\pi E}{\omega} = \frac{E}{\nu} \tag{1.194}$$
Thus,
$$A = \frac{E}{\nu} \tag{1.195}$$
From Planck, $E = h\nu n$; thus,
$$A = hn \tag{1.196}$$

FIGURE 1.4 Compton scattering experiment: an incident photon hν scatters from an electron e⁻ into a photon hν′ at angle θ, while the electron recoils with momentum mv at angle φ.

Hence, the electrons need not move in strictly circular orbits; they only need to move in closed paths such that
$$nh = \oint p(x)\,dx \tag{1.197}$$
This is termed the Bohr–Sommerfeld quantization condition. In Chapter 3, we shall return to the use of classical trajectories to analyze quantum mechanical states within a semiclassical theory.
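The phase-space argument can be checked directly. The sketch below (my own, in assumed units with m = k = 1 and ħ = 1, so h = 2π) evaluates the loop integral $\oint p(x)\,dx$ for the harmonic oscillator at the Planck energies E = hνn and recovers the area nh:

```python
import numpy as np
from scipy.integrate import quad

m, k = 1.0, 1.0
h = 2.0*np.pi                      # Planck's constant in units with hbar = 1
omega = np.sqrt(k/m)
nu = omega/(2.0*np.pi)

def loop_area(E):
    """Phase-space area enclosed by the orbit at energy E, eq (1.190)."""
    xt = np.sqrt(2.0*E/k)          # classical turning point
    # the closed loop is twice the sweep from -xt to +xt
    I, _ = quad(lambda x: np.sqrt(2.0*m*(E - 0.5*k*x**2)), -xt, xt)
    return 2.0*I

for n in (1, 2, 3):
    E = n*h*nu                     # Planck quantization: E = h*nu*n
    print(n, loop_area(E)/h)       # the ratio comes out to n, i.e., area = nh
```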

1.4.9 WAVE–PARTICLE DUALITY


In 1922, Arthur Holly Compton at the University of Chicago performed a series of remarkable experiments by scattering x-rays from atoms. He observed that the scattered wavelength λ′ was always greater than the incident wavelength. To explain this, he assumed that the energy lost was due to the particle-like collision between a photon and an electron and that some of the incident energy was transferred to the electron. Figure 1.4 gives the geometry of this experiment.
From energy conservation:
$$h\nu = h\nu' + \frac{mv^2}{2} \tag{1.198}$$
Furthermore, the x and y components of the momentum must also be conserved:
$$p_x = p'\cos(\theta) + mv\cos(\phi) = p \tag{1.199}$$
$$p_y = p'\sin(\theta) + mv\sin(\phi) = 0 \tag{1.200}$$
Here, p and p′ are the incident and final momenta of the photon, v is the final velocity of the electron, and we take the incident momentum to be along the x axis,
$$mv\cos(\phi) = p - p'\cos(\theta) \tag{1.201}$$
$$mv\sin(\phi) = -p'\sin(\theta) \tag{1.202}$$

Squaring both and adding them together produces
$$m^2v^2 = p'^2 + p^2 - 2pp'\cos\theta \tag{1.203}$$
Going back to the energy equation,
$$m^2v^2 = 2m(h\nu - h\nu') \tag{1.204}$$
and equating the last two equations:
$$2m(h\nu - h\nu') = p^2 + p'^2 - 2pp'\cos\theta \tag{1.205}$$
Compton then postulated that the momentum of a photon is given by h/λ = hν/c. Thus, replacing the frequency with c/λ,
$$2mch\left(\frac{1}{\lambda} - \frac{1}{\lambda'}\right) = h^2\left(\frac{1}{\lambda^2} + \frac{1}{\lambda'^2} - \frac{2}{\lambda\lambda'}\cos\theta\right) \tag{1.206}$$
Taking common denominators,
$$\lambda' - \lambda = \frac{\lambda\lambda' h}{2mc}\left(\frac{1}{\lambda^2} + \frac{1}{\lambda'^2}\right) - \frac{h}{mc}\cos\theta \tag{1.207}$$
Finally, if the change in wavelength is small, λ′ ≈ λ and λλ′ ≈ λ² ≈ λ′², we have
$$\lambda' = \lambda + \frac{h}{mc}(1 - \cos\theta) \tag{1.208}$$
which is the Compton scattering formula. The factor h/mc is the Compton wavelength (λ_c = 0.024263 Å). Since x-ray wavelengths are on the order of 1–3 Å, the assumption that λ′ ≈ λ is pretty accurate. In fact, if we do a relativistic treatment, we do not need to even make that assumption.
If we rewrite this as an energy equation,
$$E' = \frac{Emc^2}{mc^2 + E(1-\cos\theta)} \tag{1.209}$$
Since m is the mass of the electron, mc² ≈ 0.511 MeV is the rest energy of the electron. We can also write this as
$$\frac{1}{E'} = \frac{1}{E} + \frac{1}{mc^2}(1 - \cos\theta) \tag{1.210}$$
Plotting 1/E′ vs 1 − cos θ should give a straight line with the slope being 1/m_ec². In Table 1.1 and Figure 1.5 we show some data measured by the author as part of an Experimental Physics course. Using the data, we can calculate the rest mass and incident energy of the x-ray photon emitted by the ¹³⁷Cs source.

TABLE 1.1
Compton Scattering Data

θ (degrees)    E′ (MeV)
30             0.562
45             0.464
60             0.3955
75             0.339
90             0.29
105            0.244
120            0.223
135            0.205

Note: This data was measured by the author back in 1987 in an Experimental Physics course at Valparaiso University. Here we measured the energy of the x-ray scattered from a target. From this, you can calculate the rest mass of the electron and the incident energy of the x-ray emitted by the source (¹³⁷Cs).
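Following the suggested plot of 1/E′ versus 1 − cos θ, a short least-squares sketch (my own, not from the text) over the tabulated data recovers both quantities; the exact numbers depend on the fit:

```python
import numpy as np

# linear fit of 1/E' vs (1 - cos theta), eq (1.210): slope = 1/mc^2, intercept = 1/E
theta_deg = np.array([30, 45, 60, 75, 90, 105, 120, 135], dtype=float)
E_scat = np.array([0.562, 0.464, 0.3955, 0.339, 0.29, 0.244, 0.223, 0.205])  # MeV

x = 1.0 - np.cos(np.radians(theta_deg))
slope, intercept = np.polyfit(x, 1.0/E_scat, 1)

mc2 = 1.0/slope        # electron rest energy, MeV
E0 = 1.0/intercept     # incident photon energy, MeV
print(mc2, E0)         # near 0.511 MeV and near the 0.662 MeV 137Cs line
```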

FIGURE 1.5 Experimental Compton scattering data (scattered photon energy in MeV versus scattering angle in degrees, with the theoretical Compton curve) taken by the author in an Experimental Physics course at Valparaiso University (1987).

1.4.10 DE BROGLIE’S MATTER WAVES


Louis de Broglie rationalized that if light could behave as both a particle (as in the Compton experiment) and a wave (as in diffraction experiments), then so should ordinary matter such as electrons, H atoms, neutrons, and so on. For light:
$$E = h\nu = pc \tag{1.211}$$

where pc = E is from Einstein's relativity. Thus, λ = c/ν, which is what Compton also used. Consequently, λ = h/p. For particles with mass, $p = \sqrt{2mE}$; thus, the wavelength of a particle with mass is
$$\lambda = \frac{h}{\sqrt{2mE}} \tag{1.212}$$
This served as the basis of de Broglie's PhD thesis in 1923. When asked during his PhD examination just how one may observe such "matter waves," he replied that one should be able to diffract very light particles such as electrons or He atoms from a surface, just as one can do x-ray diffraction. This suggestion was tested by Davisson and Germer and eventually led to the development of a number of powerful analytical techniques for analyzing the structure of crystal surfaces (most commonly low-energy electron diffraction, LEED). For this, both de Broglie (in 1929) and Davisson (with Thomson in 1937) were awarded Nobel Prizes.
To come full circle, we ask, "How many de Broglie wavelengths does an electron have in the H atom?"
$$\lambda = \frac{h}{\sqrt{2mE}} \tag{1.213}$$
$$= h\left(2m\frac{e^2}{2r}\right)^{-1/2} \tag{1.214}$$
$$= h\left(2m\,\frac{me^4}{2\hbar^2n^2}\right)^{-1/2} \tag{1.215}$$
$$= \frac{h\hbar n}{me^2} \tag{1.216}$$
$$= 2\pi n\,\frac{\hbar^2}{me^2} = 2\pi n a_o \tag{1.217}$$
Thus, for n = 1, λ corresponds to the circumference of the first Bohr orbit. Hence, we can imagine the electron as a standing wave on a ring of radius $a_o$.
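A numerical aside (my own check, with standard SI constants rather than anything from the text): an electron whose kinetic energy equals the 13.6 eV Bohr binding energy has λ = h/√(2mE) equal to the circumference 2πa_o, exactly the n = 1 standing-wave picture:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
me = 9.1093837015e-31     # electron mass, kg
eV = 1.602176634e-19      # J per eV
a0 = 0.529177210903e-10   # Bohr radius, m

def de_broglie(E_eV):
    """Matter wavelength lambda = h / sqrt(2 m E), eq (1.212), in meters."""
    return h / math.sqrt(2.0*me*E_eV*eV)

lam = de_broglie(13.6057)
print(lam, 2.0*math.pi*a0)   # the two agree: one wavelength fits the n = 1 orbit
```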

1.5 PROBLEMS AND EXERCISES


Problem 1.1 Find the force that each of the following potentials implies.
1. V(x) = ax²
2. V(x) = a log sin(x)
3. V(x, y) = a cos(by) + c sin(dx)
4. V(x, y, z) = e^{ax}(tan z + b sin(x/y))
5. V(x) = −a/x + b/x³
6. V(r, θ) = e^{−bθ}/r in polar coordinates
7. V(r, θ, φ) = r² cos(φ) sin(2θ) in spherical polar coordinates

Problem 1.2 The gravitational potential of the Earth is V(r) = −GM/r, where G is the gravitational constant, M is the mass of the Earth, and r is the distance from the center of the Earth to some point. If we set r = R + z, where R is the radius of the Earth and z is the altitude above the surface, show that for z ≪ R the resulting

force is given by
$$F(z) = -\frac{GM}{R^2}\left(1 - 2\frac{z}{R}\right) \tag{1.218}$$
Using values for G, M, and R, what is the gravitational force on an object 1 km above the surface of the Earth?

Problem 1.3 Calculate the work necessary to move a unit of mass m = 1 along the
indicated paths in the x y plane from (0, 1) to (1, 0).
Path 1: Counterclockwise along a circle of radius 1.
Path 2: First from (0, 1) to (1, 1), then from (1, 1) to (1, 0). Use the following force
fields normal to the x y plane for your calculations:

1. F(x, y) = Axy
2. F(x, y) = B log(y)
3. F(x, y) = A/√(x² + y²)
4. F(x, y) = A(x² + y²)²
5. F(x, y) = A exp(−β(x² + y²)²)

Which of these are conservative fields?

Problem 1.4 Given the two vectors μ⃗_A = 3î + 2ĵ − 6k̂ and μ⃗_B = −5î + 7ĵ + 10k̂, find μ⃗_A · μ⃗_B and μ⃗_A × μ⃗_B.

Problem 1.5 Given the vector fields A⃗ = 3x î + 2xy ĵ + z² k̂ and B⃗ = −4x î + 3xz ĵ − 0.25 cos(z/y) k̂, find ∇·(A⃗ + B⃗), ∇·A⃗, and ∇·B⃗ at the point (3, 1, 6).

Problem 1.6 Given the vector field A⃗ = (x + y)î + (x/z)ĵ + e^{−x²−y²} k̂, find ∇·A⃗.

Problem 1.7 Compute the flux due to the vector F⃗ = 4xy î + 3ĵ + z³ k̂ through the surface of a sphere of radius a centered at the origin.

Problem 1.8 A particle moves between two points on the x axis, from A = (−a, 0)
to B = (+2a, 0) under the influence of a radial force f = k/(x 2 + y 2 ) directed toward
the origin. Calculate by direct integration the work done along the following paths:

1. Over a rectangular path (−a, 0) → (−a, a) → (+2a, a) → (+2a, 0)


2. Along the straight line from A to B
3. Along a circular arc from (0, 2a) to (2a, 0)

Problem 1.9 Consider the paths in the previous problem. Assume that no force is
present but that the particle moves in a viscous medium that slows the motion with a
velocity-dependent force F = −γ v. Compute the work required to move the particle
along each of the paths at constant speed.

Problem 1.10 Find the force field associated with the potential V (x, y, z) = x y +
4z/x − 2z 2 y 2 /x 4 .

Problem 1.11 Given the force vector F⃗ = 3x²y î − (4z² − 6y)ĵ + (cos(x)/2 − ye^{−z})k̂, find the potential V from which this force is derived.

Problem 1.12 Show that if the energy is independent of one or more coordinates, then the momentum associated with those coordinates is constant. Use this result to show that a classical electron moving in a Coulomb potential has constant angular momentum.

Problem 1.13 Derive the Euler-Lagrange equations of motion for a pendulum con-
sisting of a mass suspended by a (massless) rigid rod attached to a ball and socket.
Neglect any effects of the Earth’s rotation.

Problem 1.14 Show that the Hamiltonian for an electron moving in a centrosymmetric potential can be written as
$$H = \frac{p_r^2}{2m} + \frac{p_\theta^2}{2mr^2} + V(r)$$
where $p_r$ and $p_\theta$ are the respective radial and angular momenta and r is the radial coordinate.

Problem 1.15 Prove that if F⃗ = ∇Φ, where Φ is a potential function, then F⃗ is a conservative vector field. Show that if F⃗ is a conservative field, then ∇ × F⃗ = 0.

Problem 1.16 In the previous problem, you showed that if a vector field is conservative, it is also irrotational. However, is the converse also true? Is an irrotational field also conservative? Consider the field
$$\vec{v} = \left(\frac{-y}{x^2+y^2},\ \frac{x}{x^2+y^2},\ 0\right)$$
1. Show that v⃗ is irrotational at every point on the x, y plane (away from the origin).
2. Compute the integral
$$\oint_C \vec{v}\cdot d\vec{r}$$
where C is the unit circle on the x, y plane.
3. Is v⃗ conservative?
2 Waves and Wave Functions

In the world of quantum physics, no phenomenon is a phenomenon until it is a recorded phenomenon.
John Archibald Wheeler

Bohr's model of the hydrogen atom was successful in that it gave us a radically new way to look at atoms. However, it has serious shortcomings. It could not be used to explain the spectra of He or any multielectron atom. It could not predict the intensities of the H absorption and emission lines. With de Broglie's hypothesis that matter was also wavelike,1 there arose a question at the 1925 Solvay conference: What is the wave equation? De Broglie could not answer this; however, over the next year Erwin Schrödinger, working in Zurich, published a series of papers in which he deduced the general form of the equation that bears his name and applied it successfully to the hydrogen atom.2,3 What emerged was a new set of postulates, much like Newton's, that laid the foundations of quantum theory.
The physical basis of quantum mechanics is
1. That matter, such as electrons, always arrives at a point as a discrete chunk, but that the probability of finding a chunk at a specified position is like the intensity distribution of a wave.
2. The "quantum state" of a system is described by a mathematical object called a "wave function" or state vector and is denoted |ψ⟩.
3. The state |ψ⟩ can be expanded in terms of the basis states of a given vector space, {|φᵢ⟩}, as
$$|\psi\rangle = \sum_i |\phi_i\rangle\langle\phi_i|\psi\rangle \tag{2.1}$$
where ⟨φᵢ|ψ⟩ denotes an inner product of the two vectors.
4. Observable quantities are associated with the expectation value of Hermitian operators, and the eigenvalues of such operators are always real.
5. If two operators commute, one can measure the two associated physical quantities simultaneously to arbitrary precision.
6. The result of a physical measurement projects |ψ⟩ onto an eigenstate of the associated operator |φₙ⟩, yielding a measured value of aₙ with probability |⟨φₙ|ψ⟩|².


2.1 POSITION AND MOMENTUM REPRESENTATION OF |ψ⟩


Two common operators that we shall use extensively are the position and momentum
operators.
The position operator acts on the state |ψ
to give the amplitude of the system to
be at a given position:

x̂|ψ
= |x
x|ψ
(2.2)
= |x
ψ(x) (2.3)

We shall call ψ(x) the wave function of the system since it is the amplitude of |ψ
at
point x. Here we can see that ψ(x) is an eigenstate of the position operator. We also
define the momentum operator p̂ as a derivative operator:
h̄ ∂
p̂ = (2.4)
i ∂x
Thus,

p̂ψ(x) = −ih̄ψ  (x) (2.5)

Note that ψ  (x) = ψ(x); thus, an eigenstate of the position operator is not also an
eigenstate of the momentum operator.
We can deduce this also from the fact that x̂ and p̂ do not commute. To see this, first consider
$$\frac{\partial}{\partial x}\bigl(x f(x)\bigr) = f(x) + x f'(x) \tag{2.6}$$
Thus (using the shorthand $\partial_x$ as the partial derivative with respect to x),
$$[\hat{x},\hat{p}]f(x) = -i\hbar\bigl(x\partial_x f(x) - \partial_x(x f(x))\bigr) \tag{2.7}$$
$$= \frac{\hbar}{i}\bigl(x f'(x) - f(x) - x f'(x)\bigr) \tag{2.8}$$
$$= -\frac{\hbar}{i}f(x) \tag{2.9}$$
What are the eigenstates of the p̂ operator? To find them, consider the following eigenvalue equation:
$$\hat{p}|\phi(k)\rangle = k|\phi(k)\rangle \tag{2.10}$$
Inserting a complete set of position states using the idempotent operator
$$I = \int |x\rangle\langle x|\,dx \tag{2.11}$$
and using the "coordinate" representation of the momentum operator, we get
$$-i\hbar\,\partial_x\phi(k,x) = k\,\phi(k,x) \tag{2.12}$$
Thus, the solution of this is (subject to normalization)
$$\phi(k,x) = C\exp(ikx/\hbar) = \langle x|\phi(k)\rangle \tag{2.13}$$

We can also use the |φ(k)⟩ = |k⟩ states as a basis for the state |ψ⟩ by writing
$$|\psi\rangle = \int dk\,|k\rangle\langle k|\psi\rangle \tag{2.14}$$
$$= \int dk\,|k\rangle\psi(k) \tag{2.15}$$
where ψ(k) is related to ψ(x) via
$$\psi(k) = \langle k|\psi\rangle = \int dx\,\langle k|x\rangle\langle x|\psi\rangle \tag{2.16}$$
$$= C\int dx\,\exp(-ikx/\hbar)\,\psi(x) \tag{2.17}$$
This type of integral is called a "Fourier transform." There are a number of ways to define the normalization C when using this transform; for our purposes at the moment, we will set $C = 1/\sqrt{2\pi\hbar}$ so that
$$\psi(x) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty} dk\,\psi(k)\exp(ikx/\hbar) \tag{2.18}$$
and
$$\psi(k) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty} dx\,\psi(x)\exp(-ikx/\hbar) \tag{2.19}$$
Using this choice of normalization, the transform and the inverse transform have symmetric forms and we only need to remember the sign in the exponential.
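As a sanity check on this symmetric convention, here is a small sketch of mine (ħ = 1, simple quadrature on a grid, not anything from the text): transforming a Gaussian with eq (2.19) and transforming back with eq (2.18) reproduces the original function:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10.0, 10.0, 1001); dx = x[1] - x[0]
k = np.linspace(-8.0, 8.0, 801);    dk = k[1] - k[0]
psi_x = np.pi**-0.25 * np.exp(-x**2/2.0)      # normalized test Gaussian

# forward transform, eq (2.19)
psi_k = (np.exp(-1j*np.outer(k, x)/hbar)*psi_x).sum(axis=1)*dx/np.sqrt(2.0*np.pi*hbar)
# inverse transform, eq (2.18)
psi_back = (np.exp(1j*np.outer(x, k)/hbar)*psi_k).sum(axis=1)*dk/np.sqrt(2.0*np.pi*hbar)

err = np.max(np.abs(psi_back - psi_x))
print(err)      # the round-trip error is tiny
```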

2.2 THE SCHRÖDINGER EQUATION

Postulate 2.1
The quantum state of the system is a solution of the Schrödinger equation
$$i\hbar\,\partial_t|\psi(t)\rangle = H|\psi(t)\rangle \tag{2.20}$$
where H is the quantum mechanical analog of the classical Hamiltonian.

From classical mechanics, H is the sum of the kinetic and potential energy of a particle,
$$H = \frac{1}{2m}p^2 + V(x) \tag{2.21}$$

Thus, using the quantum analogs of the classical x and p, the quantum H is
$$H = \frac{1}{2m}\hat{p}^2 + V(\hat{x}) \tag{2.22}$$
To evaluate V(x̂) we need a theorem that a function of an operator is the function evaluated at the eigenvalue of the operator. The proof is straightforward. If
$$V(x) = V(0) + xV'(0) + \frac{1}{2}V''(0)x^2 + \cdots \tag{2.23}$$
then
$$V(\hat{x}) = V(0) + \hat{x}V'(0) + \frac{1}{2}V''(0)\hat{x}^2 + \cdots \tag{2.24}$$
since for any operator
$$[\hat{f}, \hat{f}^p] = 0 \quad \forall p \tag{2.25}$$
Thus, we have
$$\langle x|V(\hat{x})|\psi\rangle = V(x)\psi(x) \tag{2.26}$$
So, in coordinate form, the Schrödinger equation is written as
$$i\hbar\frac{\partial}{\partial t}\psi(x,t) = \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right]\psi(x,t) \tag{2.27}$$

2.2.1 GAUSSIAN WAVE FUNCTIONS

Let us assume that our initial state is a Gaussian in x with some initial momentum $k_o$:
$$\psi(x,0) = \left(\frac{2}{\pi a^2}\right)^{1/4}\exp(ik_ox)\exp(-x^2/a^2) \tag{2.28}$$
The momentum representation of this is
$$\psi(k,0) = \frac{1}{\sqrt{2\pi}}\int dx\,e^{-ikx}\,\psi(x,0) \tag{2.29}$$
$$= \left(\frac{a^2}{2\pi}\right)^{1/4}e^{-(k-k_o)^2a^2/4} \tag{2.30}$$

In Figure 2.1, we see a Gaussian wave packet centered about x = 0 with $k_o = 10$ and a = 1. For now we will use dimensionless units. The gray curves correspond to the real and imaginary components of ψ and the black curve is |ψ(x)|². Notice that the wave function is well localized along the x axis.
In the next figure (Figure 2.2), we have the momentum distribution of the wave function, ψ(k, 0). Again, we have chosen $k_o = 10$. Notice that the center of the distribution is shifted to $k_o$.

FIGURE 2.1 Real, imaginary, and absolute value of Gaussian wave packet ψ(x).


So, for $f(x) = \exp(-x^2/b^2)$, $\Delta x = b/\sqrt{2}$. Thus, when x varies from 0 to ±Δx, f(x) is diminished by a factor of $1/\sqrt{e}$. [Δx is the root-mean-square deviation of f(x).]
For the Gaussian wave packet,
$$\Delta x = a/2 \tag{2.31}$$
$$\Delta k = 1/a \tag{2.32}$$
or
$$\Delta p = \hbar/a \tag{2.33}$$
Thus, $\Delta x\,\Delta p = \hbar/2$ for the initial wave function.

FIGURE 2.2 Momentum-space distribution of ψ(k).
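These moments are easy to confirm numerically. The sketch below (mine, in the same dimensionless units with ħ = 1, a = 1, and k_o = 10) computes the norm and rms width of the packet on a grid:

```python
import numpy as np

# moments of the Gaussian packet, eq (2.28), with a = 1 and k_o = 10
a, k0, hbar = 1.0, 10.0, 1.0
x = np.linspace(-8.0, 8.0, 4001); dx_grid = x[1] - x[0]
psi = (2.0/(np.pi*a**2))**0.25 * np.exp(1j*k0*x) * np.exp(-x**2/a**2)
rho = np.abs(psi)**2

norm = rho.sum()*dx_grid                           # should be 1
xbar = (x*rho).sum()*dx_grid
dx = np.sqrt(((x - xbar)**2*rho).sum()*dx_grid)    # rms width: a/2, eq (2.31)
dp = hbar/a                                        # eq (2.33)
print(norm, dx, dx*dp)       # ≈ 1, 0.5, and 0.5 (= hbar/2)
```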



2.2.2 EVOLUTION OF ψ(x)


Now, let us consider the evolution of a free particle. By a "free" particle, we mean a particle whose potential energy does not change; that is, we set V(x) = 0 for all x and solve
$$i\hbar\frac{\partial}{\partial t}\psi(x,t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t) \tag{2.34}$$
This equation is actually easier to solve in k-space. Taking the Fourier transform (FT),
$$i\hbar\,\partial_t\psi(k,t) = \frac{k^2}{2m}\psi(k,t) \tag{2.35}$$
Thus, the temporal solution of the equation is
$$\psi(k,t) = \exp(-ik^2t/(2m\hbar))\,\psi(k,0) \tag{2.36}$$
This is subject to some initial function ψ(k, 0). To get the coordinate x-representation of the solution, we can use the FT relations above:
$$\psi(x,t) = \frac{1}{\sqrt{2\pi\hbar}}\int dk\,\psi(k,t)\exp(ikx/\hbar) \tag{2.37}$$
$$= \int dx'\,\langle x|\exp(-i\hat{p}^2t/(2m\hbar))|x'\rangle\,\psi(x',0) \tag{2.38}$$
$$= \sqrt{\frac{m}{2\pi i\hbar t}}\int dx'\exp\left[\frac{im(x-x')^2}{2\hbar t}\right]\psi(x',0) \tag{2.39}$$
$$= \int dx'\,G_o(x,x')\,\psi(x',0) \tag{2.40}$$

The function $G_o(x,x')$ is called the free-particle propagator or Green's function. It gives the amplitude for a particle at x′ to be found at x some time, t, later. A plot of $G_o(x,x')$ is shown in Figure 2.3 for a particle starting at the origin. Notice that as |x| increases, the oscillation period decreases rapidly. Since momentum is inversely proportional to wavelength (p = h/λ through the de Broglie relationship), in order for a particle starting at the origin to move a distance x away in time t, it must have sufficient momentum.
The sketch tells us that in order to get far away from the initial point in time t, we need to have a lot of energy (the wiggles get closer together, implying a higher Fourier component).
Consequently, the amplitude to find the particle at the initial point decreases with time. Since the period of oscillation (T) is the time required to increase the phase by 2π,

$$2\pi = \frac{mx^2}{2\hbar t} - \frac{mx^2}{2\hbar(t+T)} \tag{2.41}$$
$$= \frac{mx^2}{2\hbar t^2}\left(\frac{T}{1+T/t}\right) \tag{2.42}$$

FIGURE 2.3 $G_o$ for fixed t as a function of x.

Let ω = 2π/T and take the long time limit t ≫ T; we can estimate
$$\omega \approx \frac{m}{2\hbar}\left(\frac{x}{t}\right)^2 \tag{2.43}$$
Since the classical kinetic energy is given by $E = (m/2)v^2$, we obtain
$$E = \hbar\omega \tag{2.44}$$
Thus, the energy of the wave is proportional to the frequency of oscillation.
We can evaluate the evolution in x using either the $G_o$ we derived above or by taking the FT of the wave function evolving in k-space. Recall that the solution in k-space was
$$\psi(k,t) = \exp(-ik^2t/(2m\hbar))\,\psi(k,0) \tag{2.45}$$
Assuming a Gaussian form for ψ(k) as above,
$$\psi(x,t) = \frac{\sqrt{a}}{(2\pi)^{3/4}}\int dk\,e^{-a^2(k-k_o)^2/4}\,e^{i(kx-\omega(k)t)} \tag{2.46}$$
where ω(k) is the dispersion relation for a free particle:
$$\omega(k) = \frac{\hbar k^2}{2m} \tag{2.47}$$
Cranking through the integral,
$$\psi(x,t) = \left(\frac{2a^2}{\pi}\right)^{1/4}\frac{e^{i\phi}\,e^{ik_ox}}{\left(a^4 + \frac{4\hbar^2t^2}{m^2}\right)^{1/4}}\exp\left[-\frac{(x-\hbar k_ot/m)^2}{a^2 + 2i\hbar t/m}\right] \tag{2.48}$$
where $\phi = -\theta - \hbar k_o^2t/(2m)$ and $\tan 2\theta = 2\hbar t/(ma^2)$.



Likewise, for the amplitude,
$$|\psi(x,t)|^2 = \frac{1}{\sqrt{2\pi\,\Delta x(t)^2}}\exp\left[-\frac{(x-v_ot)^2}{2\,\Delta x(t)^2}\right] \tag{2.49}$$
where we define
$$\Delta x(t) = \frac{a}{2}\sqrt{1 + \frac{4\hbar^2t^2}{m^2a^4}} \tag{2.50}$$
as the time-dependent root-mean-square (rms) width of the wave, and the group velocity as
$$v_o = \frac{\hbar k_o}{m} \tag{2.51}$$
Now, since $\Delta p = \hbar\Delta k = \hbar/a$ is a constant for all time, the uncertainty relation becomes
$$\Delta x(t)\,\Delta p \ge \hbar/2 \tag{2.52}$$
corresponding to the particle's wave function becoming more and more diffuse as it evolves in time.
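The spreading law (2.50) can be checked against a direct numerical evolution. The sketch below (mine, ħ = m = 1, a = 1, k_o = 10) applies the exact free-particle phase of eq (2.45) on an FFT grid and compares the measured rms width with the analytic formula:

```python
import numpy as np

hbar, m, a, k0 = 1.0, 1.0, 1.0, 10.0
N, L = 2048, 200.0
dxg = L/N
x = (np.arange(N) - N//2)*dxg
k = 2.0*np.pi*np.fft.fftfreq(N, d=dxg)       # with hbar = 1, k is the momentum

psi0 = (2.0/(np.pi*a**2))**0.25 * np.exp(1j*k0*x - x**2/a**2)   # eq (2.28)
t = 3.0
# multiply each momentum component by the free-particle phase, eq (2.45)
psi_t = np.fft.ifft(np.fft.fft(psi0)*np.exp(-1j*hbar*k**2*t/(2.0*m)))

rho = np.abs(psi_t)**2
rho /= rho.sum()*dxg
xb = (x*rho).sum()*dxg
dx_num = np.sqrt(((x - xb)**2*rho).sum()*dxg)
dx_eq = (a/2.0)*np.sqrt(1.0 + 4.0*hbar**2*t**2/(m**2*a**4))     # eq (2.50)
print(xb, dx_num, dx_eq)    # packet center at v_o t = 30; the two widths agree
```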

2.3 PARTICLE IN A BOX

2.3.1 INFINITE BOX

The potential we will work with for this example consists of two infinitely steep walls placed at x = 0 and x = ℓ such that between the two walls, V(x) = 0. Within this region, we seek solutions to the differential equation
$$\partial_x^2\psi(x) = -\frac{2mE}{\hbar^2}\psi(x) \tag{2.53}$$
The solutions of this are plane waves traveling to the left and to the right:
$$\psi(x) = A\exp(-ikx) + B\exp(+ikx) \tag{2.54}$$
The coefficients A and B we will have to determine; k is determined by substitution back into the differential equation
$$\psi''(x) = -k^2\psi(x) \tag{2.55}$$
Thus, $k^2 = 2mE/\hbar^2$, or $\hbar k = \sqrt{2mE}$. Let us work in units in which ħ = 1 and $m_e = 1$. Energy in these units is the hartree (≈ 27.2 eV).
Since ψ(x) must vanish at x = 0 and x = ℓ,
$$A + B = 0 \tag{2.56}$$
$$A\exp(ik\ell) + B\exp(-ik\ell) = 0 \tag{2.57}$$

We can see immediately that A = −B and that the solutions must correspond to a
family of sine functions:
ψ(x) = A sin(nπ/x) (2.58)
Just a check,
ψ() = A sin(nπ/) = A sin(nπ ) = 0 (2.59)
To obtain the coefficient, we simply require that the wave functions be normalized
over the range x = [0, ]:
 

sin(nπ x/)2 d x = (2.60)
0 2
Thus, the normalized solutions are

2
ψn (x) = sin(nπ/x) (2.61)

The eigenenergies are obtained by applying the Hamiltonian to the wave-function
solution
h̄ 2 2
E n ψn (x) = − ∂ ψn (x) (2.62)
2m x
h̄ 2 n 2 π 2
= ψn (x) (2.63)
2a 2 m
Thus we can write E n as a function of n:
h̄ 2 π 2 2
En = n (2.64)
2a 2 m
for n = 0, 1, 2, . . . . What about the case where n = 0? Clearly it is an allowed solution of the Schrödinger equation. However, we also required that the probability to find the particle anywhere must be 1. Thus, the n = 0 solution cannot be permitted.
Note that the cosine functions are also allowed solutions. However, the restriction ψ(0) = 0 and ψ(ℓ) = 0 discounts these solutions.
In Figure 2.4 we show the first few eigenstates for an electron trapped in a well of length ℓ = π. Notice that the number of nodes increases as the energy increases. In fact, we can determine the state of the system by simply counting nodes.
What about orthonormality? We stated that the solutions of the eigenvalue problem form an orthonormal basis. In Dirac notation we can write
$$\langle\psi_n|\psi_m\rangle = \int dx\,\langle\psi_n|x\rangle\langle x|\psi_m\rangle \tag{2.65}$$
$$= \int_0^\ell dx\,\psi_n^*(x)\,\psi_m(x) \tag{2.66}$$
$$= \frac{2}{\ell}\int_0^\ell dx\,\sin(n\pi x/\ell)\sin(m\pi x/\ell) \tag{2.67}$$
$$= \delta_{nm} \tag{2.68}$$

FIGURE 2.4 Particle in a box states.

Thus, we can see that these solutions do in fact form a complete set of orthogonal states on the range x = [0, ℓ]. Note that it is important to specify "on the range . . ." since clearly the sine functions are not a set of orthogonal functions over the entire x axis.
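A quick numerical confirmation of (2.65)–(2.68) on the range [0, ℓ] (my own sketch, with ℓ = π):

```python
import numpy as np

# orthonormality of the box eigenstates, eq (2.61)
l = np.pi
x = np.linspace(0.0, l, 2001); dx = x[1] - x[0]

def psi(n):
    """Normalized particle-in-a-box eigenstate."""
    return np.sqrt(2.0/l)*np.sin(n*np.pi*x/l)

# overlap matrix <psi_n|psi_m> for n, m = 1..4
S = np.array([[(psi(n)*psi(m)).sum()*dx for m in range(1, 5)]
              for n in range(1, 5)])
print(np.round(S, 6))     # → the 4x4 identity matrix
```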

2.3.2 PARTICLE IN A FINITE BOX

Now, suppose our box is finite. That is,
$$V(x) = \begin{cases} -V_o & \text{if } -a < x < a \\ 0 & \text{otherwise} \end{cases} \tag{2.69}$$
Let us consider the case for E < 0. The case E > 0 will correspond to scattering solutions. Inside the well, the wave function oscillates, much as in the previous case,
$$\psi_W(x) = A\sin(k_ix) + B\cos(k_ix) \tag{2.70}$$
where $k_i$ comes from the equation for the momentum inside the well,
$$\hbar k_i = \sqrt{2m(E_n + V_o)} \tag{2.71}$$
We actually have two classes of solution: a symmetric solution when A = 0 and an antisymmetric solution when B = 0. Outside the well the potential is 0 and we have the solutions
$$\psi_O(x) = c_1e^{\rho x} \quad\text{and}\quad c_2e^{-\rho x} \tag{2.72}$$
We will choose the coefficients $c_1$ and $c_2$ to create two cases, $\psi_L$ and $\psi_R$, on the left- and right-hand sides of the well. Also,
$$\hbar\rho = \sqrt{-2mE} \tag{2.73}$$

Thus, we have three pieces of the full solution that we must connect together:
$$\psi_L(x) = Ce^{\rho x} \quad\text{for } x < -a \tag{2.74}$$
$$\psi_R(x) = De^{-\rho x} \quad\text{for } x > +a \tag{2.75}$$
$$\psi_W(x) = A\sin(k_ix) + B\cos(k_ix) \quad\text{inside the well} \tag{2.76}$$
To find the coefficients, we need to set up a series of simultaneous equations by applying the conditions that (a) the wave function is a continuous function of x and that (b) it has continuous first derivatives with respect to x. Thus, applying the two conditions at the boundaries, we have
$$\psi_L(-a) = \psi_W(-a) \tag{2.77}$$
$$\psi_R(a) = \psi_W(a) \tag{2.78}$$
$$\psi_L'(-a) = \psi_W'(-a) \tag{2.79}$$
$$\psi_R'(a) = \psi_W'(a) \tag{2.80}$$

Since the well is symmetric about x = 0, we have either symmetric or antisymmetric solutions. For the symmetric case, A = 0 and C = D. Thus we have
$$B = C\sec(ak_i)e^{-a\rho} \tag{2.81}$$
$$-k_iB\sin(ak_i) = -\rho\,Ce^{-a\rho} \tag{2.82}$$
Eliminating B/C, we have the condition
$$\frac{\rho}{k_i} = \tan(ak_i) \tag{2.83}$$
Both $k_i$ and ρ are functions of the energy:
$$\sqrt{\frac{-E}{E+V_o}} = \tan\left(a\sqrt{2m(E+V_o)}/\hbar\right) \tag{2.84}$$
Similarly, for the antisymmetric case, we have
$$\frac{\rho}{k_i} = -\cot(ak_i) \tag{2.85}$$
Again, $k_i$ and ρ are functions of energy E:
$$\sqrt{\frac{-E}{V_o+E}} = -\cot\left(a\sqrt{2m(V_o+E)}/\hbar\right) \tag{2.86}$$

These equalities can only be satisfied by stationary solutions of the Schrödinger equation. However, try as we may, we cannot obtain a closed-form equation for the stationary energies. Equations such as these are called "transcendental" equations, and closed-form solutions are generally impossible to obtain. Consequently,

FIGURE 2.5 (a) Graphical solution to the transcendental equations for an electron in a truncated hard well of depth $V_o = 10$ and width a = 2. (b) Wave functions corresponding to stationary states for the finite well.

we need to perform a numerical root search or use graphical techniques. One of the tricks to performing an efficient root search is knowing where to start. For transcendental functions such as cot(x) and tan(x), we need to start a root search on a given branch. In Figure 2.5 we show the graphical solution to the transcendental equations for an electron in a well of depth $V_o = 10$ and width a = 2. The intersections indicate the presence of six bound states. The symmetric states are located at $E_n = -9.75, -7.77, -3.94$ and the antisymmetric states at $E_n = -9.01, -6.07$, and $-1.49$.
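The root search itself can be scripted. Below is one possible sketch (mine, not the book's Mathematica session; ħ = m = 1, V_o = 10, a = 2) that scans eqs (2.84) and (2.86) for sign changes, rejects the spurious flips at the tan/cot poles, and polishes each bracket with Brent's method:

```python
import numpy as np
from scipy.optimize import brentq

Vo, a = 10.0, 2.0

def ki(E):
    return np.sqrt(2.0*(E + Vo))          # hbar = m = 1

def f_sym(E):                              # eq (2.84): zero at symmetric states
    return np.sqrt(-E/(E + Vo)) - np.tan(a*ki(E))

def f_anti(E):                             # eq (2.86): zero at antisymmetric states
    return np.sqrt(-E/(E + Vo)) + 1.0/np.tan(a*ki(E))

def bound_states(f):
    E = np.linspace(-Vo + 1e-6, -1e-6, 200001)
    y = f(E)
    roots = []
    for i in np.where(np.sign(y[:-1]) != np.sign(y[1:]))[0]:
        # keep only genuine zero crossings, not the jumps at the poles
        if abs(y[i]) < 10.0 and abs(y[i+1]) < 10.0:
            roots.append(brentq(f, E[i], E[i+1]))
    return roots

print([round(E, 2) for E in bound_states(f_sym)])    # symmetric states
print([round(E, 2) for E in bound_states(f_anti)])   # antisymmetric states
```

The six roots reproduce the graphical values quoted above.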

2.3.3 SCATTERING STATES AND RESONANCES

Now let us take the same example as above, except look at states for which E > 0. In this case, we have to consider where the particles are coming from and where they are going. We will assume that the particles are emitted with precise energy E toward the well from −∞ and travel from left to right. As in the case above, we have three distinct regions:

1. x < −a, where $\psi(x) = e^{ik_1x} + Re^{-ik_1x} = \psi_L(x)$
2. −a ≤ x ≤ +a, where $\psi(x) = Ae^{-ik_2x} + Be^{+ik_2x} = \psi_W(x)$
3. x > +a, where $\psi(x) = Te^{+ik_1x} = \psi_R(x)$

where $k_1 = \sqrt{2mE}/\hbar$ is the momentum outside the well, $k_2 = \sqrt{2m(E-V)}/\hbar$ is the momentum inside the well, and A, B, T, and R are coefficients we need to determine.

We also have the matching conditions:
$$\psi_L(-a) - \psi_W(-a) = 0$$
$$\psi_L'(-a) - \psi_W'(-a) = 0$$
$$\psi_R(a) - \psi_W(a) = 0$$
$$\psi_R'(a) - \psi_W'(a) = 0$$
This can be solved by hand; however, Mathematica keeps the bookkeeping easy. The result is a series of rules that we can use to determine the transmission and reflection coefficients. Writing the common denominator as
$$D = -k_1^2 + e^{4iak_2}k_1^2 - 2k_1k_2 - 2e^{4iak_2}k_1k_2 - k_2^2 + e^{4iak_2}k_2^2$$
the rules are
$$T \to \frac{-4e^{-2iak_1+2iak_2}\,k_1k_2}{D}$$
$$A \to \frac{2e^{-iak_1+3iak_2}\,k_1(k_1-k_2)}{D}$$
$$B \to \frac{-2e^{-iak_1+iak_2}\,k_1(k_1+k_2)}{D}$$
$$R \to \frac{\left(-1+e^{4iak_2}\right)\left(k_1^2-k_2^2\right)}{e^{2iak_1}\,D}$$
The R and T coefficients are related to the ratios of the reflected and transmitted flux to the incoming flux. The current operator is given by
$$j(x) = \frac{\hbar}{2mi}\left(\psi^*\nabla\psi - \psi\nabla\psi^*\right) \tag{2.87}$$
Inserting the wave functions above yields
$$j_{in} = \frac{\hbar k_1}{m}$$
$$j_{ref} = -\frac{\hbar k_1|R|^2}{m}$$
$$j_{trans} = \frac{\hbar k_1|T|^2}{m}$$
Thus, $|R|^2 = -j_{ref}/j_{in}$ and $|T|^2 = j_{trans}/j_{in}$. In Figure 2.6 we show the transmission and reflection coefficients for an electron passing over a well of depth V = −40 and a = 1 as a function of incident energy E.
Notice that the transmission and reflection coefficients undergo a series of oscillations as the incident energy is increased. These are due to resonance states that lie in the continuum. The condition for these states is that an integer number of half de Broglie wavelengths of the wave in the well matches the total length of the well:
$$n\lambda/2 = 2a$$
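The rules above can be evaluated directly. This sketch (mine; ħ = m = 1, V = −40, a = 1) codes the T and R amplitudes, confirms flux conservation |T|² + |R|² = 1, and hits a resonance where $e^{4iak_2} = 1$ and the well becomes perfectly transparent:

```python
import numpy as np

V, a = -40.0, 1.0     # well depth and half-width

def amplitudes(E):
    k1 = np.sqrt(2.0*E)            # momentum outside the well
    k2 = np.sqrt(2.0*(E - V))      # momentum inside the well
    ph = np.exp(4j*a*k2)
    D = -k1**2 + ph*k1**2 - 2.0*k1*k2 - 2.0*ph*k1*k2 - k2**2 + ph*k2**2
    T = -4.0*np.exp(-2j*a*k1 + 2j*a*k2)*k1*k2/D
    R = (-1.0 + ph)*(k1**2 - k2**2)/(np.exp(2j*a*k1)*D)
    return T, R

for E in (1.0, 5.0, 9.3):
    T, R = amplitudes(E)
    print(E, abs(T)**2 + abs(R)**2)      # flux conservation: each sum is 1

Eres = 4.5*np.pi**2 - 40.0    # makes k2 = 3*pi/a, so exp(4j*a*k2) = 1
T, R = amplitudes(Eres)
print(abs(T)**2)              # → 1.0 (resonant, reflectionless transmission)
```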

FIGURE 2.6 (a) Transmission and reflection coefficients for an electron scattering over a square well (V = −40 and a = 1). (b) Scattering waves for a particle passing over a well. In the top graphic, the particle is partially reflected from the well (V < 0); in the bottom graphic, the particle passes over the well with a slightly different energy than above, this time with little reflection.

FIGURE 2.7 Transmission coefficient for a particle passing over a bump. Here we have plotted T as a function of V and incident energy $E_n$. The oscillations correspond to resonance states that occur as the particle passes over the well (V < 0) or bump (V > 0).

Figure 2.7 shows the transmission coefficient as a function of both incident energy and the well depth (or height) over a wide range, indicating that resonances can occur for both wells and bumps.

2.3.4 APPLICATION: QUANTUM DOTS


One of the most active areas of research in soft condensed matter is that of designing
physical systems that can confine a quantum state in some controllable way. The idea
of engineering a quantum state is extremely appealing and has numerous technological
applications from small logic gates in computers to optically active materials for
biomedical applications. The basic physics of these materials is relatively simple, and
we can use the basic ideas presented in this chapter. The basic idea is to layer a series
of materials such that electrons can be trapped in a geometrically confined region.
This can be accomplished by insulator–metal–insulator layers and etching, creating
disclinations in semiconductors, growing semiconductor or metal clusters, and so on.
A quantum dot can even be a defect site.
We will assume throughout that our quantum well contains a single electron so
that we can treat the system as simply as possible. For a square or cubic quantum well,

energy levels are simply those of an n-dimensional particle in a box. For example, for a three-dimensional (3D) system,
$$E_{n_x,n_y,n_z} = \frac{\hbar^2\pi^2}{2m}\left[\left(\frac{n_x}{L_x}\right)^2 + \left(\frac{n_y}{L_y}\right)^2 + \left(\frac{n_z}{L_z}\right)^2\right] \tag{2.88}$$
where $L_x$, $L_y$, and $L_z$ are the lengths of the sides of the box and m is the mass of an electron.
The density of states is the number of energy levels per unit energy. If we take the box to be a cube, $L_x = L_y = L_z = L$, we can relate n to the radius of a sphere and write the density of states as
$$\rho(n) = 4\pi n^2\frac{dn}{dE} = 4\pi n^2\left(\frac{dE}{dn}\right)^{-1}$$
Thus, for a 3D cube, the density of states is
$$\rho(n) = \left(\frac{4mL^2}{\pi\hbar^2}\right)n$$
that is, for a three-dimensional cube, the density of states increases as n and hence as $E^{1/2}$ (Figure 2.8).
Note that the scaling of the density of states with energy depends strongly upon the dimensionality of the system. For example, in one dimension,
$$\rho(n) = \frac{2mL^2}{\hbar^2\pi^2}\,\frac{1}{n}$$
and in two dimensions,
$$\rho(n) = \text{const}$$
The reason for this lies in the way the volume element for linear, circular, and spherical
integration scales with radius n. Thus, measuring the density of states tells us not only
the size of the system but also its dimensionality.


FIGURE 2.8 Density of states versus dimensionality of the system. For D > 3, these are
effective dimensions reflecting the number of free-particle degrees of freedom carried by a
given particle.
Waves and Wave Functions 49

We can generalize the results here by realizing that the volume of a d-dimensional
sphere in k space is given by

V_d = \frac{k^d\,\pi^{d/2}}{\Gamma(1 + d/2)}

where \Gamma(x) is the gamma function. The total number of states per unit volume in a
d-dimensional space is then
n_k = \frac{2}{(2\pi)^d}\,V_d

and the density is then the number of states per unit of energy. The relation between
energy and k is
E_k = \frac{\hbar^2 k^2}{2m}
that is,

k = \frac{\sqrt{2E_k m}}{\hbar}
which gives
\rho_d(E) = \frac{d}{E\,\Gamma(1 + d/2)}\left(\frac{mE}{2\pi\hbar^2}\right)^{d/2}
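As a sanity check on the dimensional scaling, the d-dimensional density of states can be evaluated directly. The sketch below is not from the text; it assumes atomic units and groups the constants as ρ_d(E) = (d/E)(mE/2πh̄²)^{d/2}/Γ(1 + d/2). The printed ratios confirm that ρ_d ∝ E^{d/2−1}.

```python
# Sketch: evaluate the d-dimensional density of states and check its
# E**(d/2 - 1) scaling. Atomic units (hbar = m = 1) are assumed, and the
# constants are grouped as (d/E) * (m E / (2 pi hbar^2))**(d/2) / Gamma(1 + d/2).
import math

hbar, m = 1.0, 1.0

def rho_d(E, d):
    return ((d / E) * (m * E / (2.0 * math.pi * hbar**2))**(d / 2.0)
            / math.gamma(1.0 + d / 2.0))

# rho_d ~ E**(d/2 - 1): 1/sqrt(E) in 1D, constant in 2D, sqrt(E) in 3D,
# so rho_d(4E)/rho_d(E) should equal 4**(d/2 - 1)
for d in (1, 2, 3):
    print(d, rho_d(4.0, d) / rho_d(1.0, d))
```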
A quantum well is typically constructed so that the system is confined in one
dimension and unconfined in the other two. Thus, a quantum well will typically have
a discrete state only in the confined direction. The density of states for this system will
be identical to that of the three-dimensional system at energies where the k vectors
coincide. If we take the thickness to be s, then the density of states for the quantum
well is  
L ρ3 (E)
ρ = ρ2 (E) L
s Lρ2 (E)/s
where x is the “floor” function, which means it takes the largest integer less than x.
This is plotted in Figure 2.9a and the stair-step density of states (DOS) is indicative
of the embedded confined structure.


FIGURE 2.9 Density of states for a quantum well and quantum wire compared to a 3D space.
Here L = 5 and s = 2 for comparison.

Next, we consider a quantum wire of thickness s along each of its two confined
directions (Figure 2.9b). The DOS along the unconfined direction is one dimensional.
As above, the total DOS will be identical to the 3D case when the wave vectors
coincide. Increasing the radius of the wire eventually leads to the case where the steps
decrease and merge into the 3D curve,
\rho = \left(\frac{L}{s}\right)^2 \rho_1(E)\left\lfloor \frac{L^2\,\rho_2(E)}{L^2\,\rho_1(E)/s} \right\rfloor

For a spherical dot, we consider the case in which the radius of the quantum dot
is small enough to support discrete rather than continuous energy levels. In a later
chapter, we will derive this result in more detail; for now, we consider just the results.
First, an electron in a spherical dot obeys the Schrödinger equation:

-\frac{\hbar^2}{2m}\nabla^2\psi = E\psi   (2.89)

where ∇ 2 is the Laplacian operator in spherical coordinates

\nabla^2 = \frac{1}{r}\frac{\partial^2}{\partial r^2}r + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}

The solution of the Schrödinger equation is subject to the boundary condition that for
r ≥ R, ψ(r ) = 0, where R is the radius of the sphere and is given in terms of the
spherical Bessel function, jl (r ), and spherical harmonic functions, Ylm ,

\psi_{nlm} = \frac{2^{1/2}}{R^{3/2}}\,\frac{j_l(\alpha r/R)}{j_{l+1}(\alpha)}\,Y_{lm}(\Omega)   (2.90)

with energy
E = \frac{\hbar^2\alpha^2}{2mR^2}   (2.91)
Note that the spherical Bessel functions (of the first kind) are related to the Bessel
functions via

j_l(x) = \sqrt{\frac{\pi}{2x}}\,J_{l+1/2}(x)   (2.92)

The first few of these are shown in Figure 2.10,

j_0(x) = \frac{\sin x}{x}   (2.93)

j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}   (2.94)


FIGURE 2.10 Spherical Bessel functions, j0 , j1 , and j2 .

 
j_2(x) = \left(\frac{3}{x^3} - \frac{1}{x}\right)\sin x - \frac{3}{x^2}\cos x   (2.95)

j_n(x) = (-1)^n x^n\left(\frac{1}{x}\frac{d}{dx}\right)^n j_0(x)   (2.96)

where Equation 2.96 provides a way to generate j_n from j_0.


The α’s appearing in the wave function and in the energy expression are deter-
mined by the boundary condition that ψ(R) = 0. Thus, for the lowest energy state
we require

j0 (α) = 0, (2.97)

that is, α = π . For the next state (l = 1),

j_1(\alpha) = \frac{\sin\alpha}{\alpha^2} - \frac{\cos\alpha}{\alpha} = 0.   (2.98)

This can be solved numerically to give α = 4.49341. These values are the points where the
spherical Bessel functions first pass through zero. The first six are 3.14159, 4.49341,
5.76346, 6.98793, 8.18256, and 9.35581; they give the condition for radial quantization
with n = 1 and angular momentum l = 0, 1, 2, 3, 4, 5. There are further zeros, and these
correspond to the cases where n > 1.
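The quoted roots are easy to reproduce with a simple root finder applied to the closed forms of Equations 2.93 and 2.94. The sketch below is not from the text and uses bisection with ad hoc bracketing intervals.

```python
# Sketch: solve j_0(alpha) = 0 and j_1(alpha) = 0 numerically by bisection,
# using the closed forms of Eqs. 2.93-2.94. Bracket guesses are ad hoc.
import math

def j0(x):
    return math.sin(x) / x

def j1(x):
    return math.sin(x) / x**2 - math.cos(x) / x

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * f(c) <= 0.0:
            b = c
        else:
            a, fa = c, f(c)
    return 0.5 * (a + b)

print(bisect(j0, 2.0, 4.0))  # -> 3.14159... = pi
print(bisect(j1, 3.0, 6.0))  # -> 4.49341...
```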
In the next set of figures (Figure 2.11), we look at the radial wave functions
for an electron in a 0.5 Å quantum dot. First, consider the cases where n = 1, l = 0 and
n = 1, l = 1. In both cases, the wave functions vanish at the radius of the dot. The
radial probability distribution function (PDF) is given by P = r 2 |ψnl (r )|2 . Note that
increasing the angular momentum l from 0 to 1 causes the electron's most probable
position to shift outward; this is a consequence of the centrifugal force arising from the
angular motion of the electron. For the n, l = (2, 0) and (2, 1) states, we have one node in the
system and two peaks in the PDF functions.
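The outward shift of the most probable radius with l can also be seen numerically. The sketch below is not from the text: it scans P(r) = r²|j_l(αr/R)|² on a grid for the (1, 0) and (1, 1) states of an R = 0.5 dot, with normalization constants omitted since they do not move the peak.

```python
# Sketch: locate the maximum of the radial PDF P(r) = r^2 j_l(alpha r / R)^2
# for the (n,l) = (1,0) and (1,1) states of an R = 0.5 dot. Normalization
# is omitted (it does not affect the location of the peak).
import math

R = 0.5
alpha = {0: math.pi, 1: 4.49341}  # first zeros of j_0 and j_1

def j(l, x):
    return math.sin(x) / x if l == 0 else math.sin(x) / x**2 - math.cos(x) / x

def peak_radius(l, npts=100000):
    """Brute-force grid scan for the maximum of r^2 j_l(alpha r / R)^2."""
    best_r, best_p = 0.0, 0.0
    for i in range(1, npts):
        r = R * i / npts
        p = r**2 * j(l, alpha[l] * r / R)**2
        if p > best_p:
            best_r, best_p = r, p
    return best_r

print(peak_radius(0), peak_radius(1))  # the l = 1 peak lies farther out
```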


FIGURE 2.11 Radial wave functions (left column) and corresponding PDFs (right column)
for an electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0)
(solid) and (n, l) = (1, 1) (dashed) while the lower correspond to (n, l) = (2, 0) (solid) and
(n, l) = (2, 1) (dashed).

2.4 PROBLEMS AND EXERCISES


Problem 2.1 Derive the expression for
G_o(x, x') = \langle x|\exp(-i h_o t/\hbar)|x'\rangle   (2.99)

where h_o is the free particle Hamiltonian,

h_o = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}   (2.100)
Problem 2.2 Show that G o is a solution of the free particle Schrödinger equation
i\hbar\,\partial_t G_o(t) = h_o G_o(t)   (2.101)

Problem 2.3 Show that the normalization of a wave function is independent of time.

Problem 2.4 Compute the bound-state solutions (E < 0) for a square well of depth
Vo where

V(x) = \begin{cases} -V_o & -a/2 \le x \le a/2 \\ 0 & \text{otherwise} \end{cases}   (2.102)

1. How many energy levels are supported by a well of width a?


2. Show that a very narrow well can support only one bound state, and that
this state is an even function of x.
3. Show that the energy of the lowest bound state is

E \approx -\frac{mV_o^2 a^2}{2\hbar^2}   (2.103)
4. Show that as

\rho = \sqrt{-\frac{2mE}{\hbar^2}} \to 0   (2.104)
the probability of finding the particle inside the well vanishes.

Problem 2.5 Consider a particle with the potential




V(x) = \begin{cases} 0 & \text{for } x > a \\ -V_o & \text{for } 0 \le x \le a \\ \infty & \text{for } x < 0 \end{cases}   (2.105)

1. Let φ(x) be a stationary state. Show that φ(x) can be extended to give an
odd wave function corresponding to a stationary state of the symmetric
well of width 2a (that is, the one studied above) and depth Vo .
2. Discuss with respect to a and Vo the number of bound states and argue that
there is always at least one such state.
3. Now turn your attention toward the E > 0 states of the well. Show that
the transmission of the particle into the well region vanishes as E → 0
and that the wave function is perfectly reflected off the sudden change in
potential at x = a.

Problem 2.6 Which of the following are eigenfunctions of the kinetic energy
operator:

\hat T = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}   (2.106)
e^x,\quad x^2,\quad x^n,\quad 3\cos(2x),\quad \sin(x)+\cos(x),\quad e^{-ikx},

f(x - x') = \int_{-\infty}^{\infty} dk\, e^{-ik(x-x')}\, e^{-ik^2/(2m)}   (2.107)

Problem 2.7 Which of the following would be acceptable one-dimensional wave
functions for a bound particle (upon normalization): f(x) = e^{-x}, f(x) = e^{-x^2},
f(x) = xe^{-x^2}, or

f(x) = \begin{cases} e^{-x^2} & x \ge 0 \\ 2e^{-x^2} & x < 0 \end{cases}   (2.108)

Problem 2.8 For a one-dimensional problem, consider a particle with wave function
\psi(x) = N\,\frac{\exp(i p_o x/\hbar)}{\sqrt{x^2 + a^2}}   (2.109)
where a and po are real constants and N the normalization.
1. Determine N so that ψ(x) is normalized:

\int_{-\infty}^{\infty} dx\,|\psi(x)|^2 = N^2\int_{-\infty}^{\infty}\frac{dx}{x^2 + a^2}   (2.110)

= N^2\,\frac{\pi}{a}   (2.111)

Thus ψ(x) is normalized when

N = \sqrt{\frac{a}{\pi}}   (2.112)
2. The position of the particle is measured. What is the probability of finding
a result between -a/\sqrt{3} and +a/\sqrt{3}?

\int_{-a/\sqrt{3}}^{+a/\sqrt{3}} dx\,|\psi(x)|^2 = \frac{a}{\pi}\int_{-a/\sqrt{3}}^{+a/\sqrt{3}}\frac{dx}{x^2 + a^2}   (2.113)

= \frac{1}{\pi}\Big[\tan^{-1}(x/a)\Big]_{-a/\sqrt{3}}^{+a/\sqrt{3}}   (2.114)

= \frac{1}{3}   (2.115)
3. Compute the mean value of the position of a particle that has ψ(x) as its wave function:

\langle x\rangle = \frac{a}{\pi}\int_{-\infty}^{\infty} dx\,\frac{x}{x^2 + a^2}   (2.116)

= 0   (2.117)

Problem 2.9 Consider the Hamiltonian of a particle in a one-dimensional well given


by
H = \frac{1}{2m}\hat p^2 + \hat x^2   (2.118)

where \hat x and \hat p are position and momentum operators. Let |\phi_n\rangle be a solution of

H|\phi_n\rangle = E_n|\phi_n\rangle   (2.119)

for n = 0, 1, 2, \ldots. Show that

\langle\phi_n|\hat p|\phi_m\rangle = \alpha_{nm}\langle\phi_n|\hat x|\phi_m\rangle   (2.120)

where αnm is a coefficient depending upon E n − E m . Compute αnm . (Hint: You will
need to use the commutation relations of [x̂, H ] and [ p̂, H ] to get this.) Finally, from
all this, deduce that
\sum_m (E_n - E_m)^2\,|\langle\phi_n|\hat x|\phi_m\rangle|^2 = \frac{2\hbar^2}{m}\left\langle\phi_n\left|\frac{\hat p^2}{2m}\right|\phi_n\right\rangle   (2.121)

Problem 2.10 The state space of a certain physical system is three dimensional. Let
|u_1\rangle, |u_2\rangle, and |u_3\rangle be an orthonormal basis of the space in which the kets |\psi_1\rangle and |\psi_2\rangle
are defined by

|\psi_1\rangle = \frac{1}{\sqrt{2}}|u_1\rangle + \frac{i}{2}|u_2\rangle + \frac{1}{2}|u_3\rangle   (2.122)

|\psi_2\rangle = \frac{1}{\sqrt{3}}|u_1\rangle + \frac{i}{\sqrt{3}}|u_3\rangle   (2.123)
1. Are the states normalized?
2. Determine the matrices \rho_1 and \rho_2, the projection operators onto |\psi_1\rangle and |\psi_2\rangle, as represented in the \{|u_i\rangle\} basis. Verify that these matrices are Hermitian.

Problem 2.11 Let ψ(r ) = ψ(x, y, z) be the normalized wave function of a particle.
Express in terms of ψ(r) the probability for:
1. A measurement along the x axis to yield a result between x_1 and x_2.
2. A measurement of momentum component px to yield a result between p1
and p2 .
3. Simultaneous measurements of x and pz to yield x1 ≤ x ≤ x2 and pz > 0.
4. Simultaneous measurements of px , p y , and pz , to yield
p1 ≤ p x ≤ p2 (2.124)
p3 ≤ p y ≤ p4 (2.125)
p5 ≤ p z ≤ p6 (2.126)
Show that this result is equal to the result of part 2 when p3 , p5 → −∞
and p4 , p6 → +∞.

Problem 2.12 Consider a particle of mass m whose potential energy is


V (x) = −α(δ(x + l/2) + δ(x − l/2)) (2.127)
1. Calculate the bound states of the particle, setting
E = -\frac{\hbar^2\rho^2}{2m}   (2.128)
Show that the possible energies are given by
 

e^{-\rho l} = \pm\left(1 - \frac{2\rho}{\mu}\right)   (2.129)
where μ = 2mα/ h̄ 2 . Give a graphic solution of this equation.

(a) Ground State. Show that the ground state is even about the origin
and that its energy E s is less than the bound state of a particle in
a single δ-function potential −E L . Interpret this physically. Plot the
corresponding wave function.
(b) Excited State. Show that when l is greater than some value (which you
need to determine), there exists an odd excited state of energy E A with
energy greater than −E L . Determine and plot the corresponding wave
function.
(c) Explain how the preceding calculations enable us to construct a model
for an ionized diatomic molecule, for example, H2+ , whose nuclei are
separated by l. Plot the energies of the two states as functions of l.
What happens as l → ∞ and l → 0?
(d) If we take Coulombic repulsion of the nuclei into account, what is the
total energy of the system? Show that a curve that gives the variation
with respect to l of the energies thus obtained enables us to predict in
certain cases the existence of bound states of H2+ and to determine the
equilibrium bond length.
2. Calculate the reflection and transmission coefficients for this system. Plot
R and T as functions of l. Show that resonances occur when l is an integer
multiple of the de Broglie wavelength of the particle. Why?

Problem 2.13 Write down the Schrödinger equation for an oscillator in the momen-
tum representation and determine the momentum wave functions.
3 Semiclassical Quantum Mechanics

Good actions ennoble us, and we are the sons of our own deeds.
Miguel de Cervantes

The use of classical mechanical analogs for quantum behavior holds a long and proud
tradition in the development and application of quantum theory. In Bohr’s original
formulation of quantum mechanics to explain the spectra of the hydrogen atom,
Bohr used purely classical mechanical notions of angular momentum and rotation for
the basic theory and imposed a quantization condition that the angular momentum
should come in integer multiples of h̄. Bohr worked under the assumption that at
some point the laws of quantum mechanics that govern atoms and molecules should
correspond to the classical mechanical laws of ordinary objects like rocks and stones.
Bohr’s Principle of Correspondence states that quantum mechanics is not completely
separate from classical mechanics; rather, it incorporates classical theory.
From a computational viewpoint, this is an extremely powerful notion since per-
forming a classical trajectory calculation (even running thousands of them) is simpler
than a single quantum calculation of a similar dimension. Consequently, the devel-
opment of semiclassical methods has been and remains an important part of the
development and utilization of quantum theory. In fact even in the most recent issues
of leading physics and chemical physics journals, one finds new developments and
applications of this very old idea.
In this chapter we will explore this idea in some detail. The field of semiclassical
mechanics is vast and I would recommend the following for more information:

1. Chaos in Classical and Quantum Mechanics, Martin Gutzwiller (New


York: Springer-Verlag, 1990). Chaos in quantum mechanics is a touchy
subject and really has no clear-cut definition that anyone seems to agree
upon. Gutzwiller is one of the key figures in sorting all this out. This is a
very nice and not-too-technical monograph on quantum and classical
correspondence.
2. Semiclassical Physics, M. Brack and R. Bhaduri (Reading, MA: Addison-
Wesley, 1997). Very interesting book, mostly focusing upon many-body
applications and Thomas-Fermi approximations.
3. Computer Simulations of Liquids, M. P. Allen and D. J. Tildesley (New
York: Oxford, 1994). This book mostly focuses upon classical molecular
dynamics (MD) methods, but has a nice chapter on the quantum methods
that were state of the art in 1994. Methods come and methods go.

There are many others, of course. These are just the ones on my bookshelf.

57

3.1 BOHR–SOMMERFELD QUANTIZATION


Let us first review Bohr’s original derivation of the hydrogen atom. We will go through
this a bit differently than Bohr since we already know part of the answer. In the chapter
on the hydrogen atom we derived the energy levels in terms of the principal quantum
number n:
E_n = -\frac{me^4}{2\hbar^2}\frac{1}{n^2}   (3.1)
In Bohr’s correspondence principle, the quantum energy must equal the classical
energy. So for an electron moving about a proton, that energy is inversely proportional
to the distance of separation. So, we can write

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{e^2}{2r}   (3.2)
Now we need to figure out how angular momentum gets pulled into this. For an orbiting
body the centrifugal force, which pulls the body outward, is counterbalanced by the
inward tugs of the centripetal force coming from the attractive Coulomb potential.
Thus,

mr\omega^2 = \frac{e^2}{r^2}   (3.3)
where ω is the angular frequency of the rotation. Rearranging this a bit, we can plug
this into the right-hand side (rhs) of Equation 3.2 and write

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{mr^3\omega^2}{2r}   (3.4)
The numerator now looks almost like the classical definition of angular momentum:
L = mr 2 ω. So we can write the last equation as

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{L^2}{2mr^2}   (3.5)

Solving for L 2 :

L^2 = \frac{me^4}{2\hbar^2 n^2}\,2mr^2   (3.6)
Now, we need to pull in another one of Bohr’s results for the orbital radius of the H
atom:
r = \frac{\hbar^2}{me^2}\,n^2   (3.7)
Plug this into Equation 3.6 and after the dust settles, we find

L = h̄n (3.8)

But, why should electrons be confined to circular orbits? Equation 3.8 should be
applicable to any closed path the electron should choose to take. If the quantization
condition only holds for circular orbits, then the theory itself is in deep trouble. At
least that is what Sommerfeld thought.
The units of h̄ are energy times time. That is the unit of action in classical
mechanics. In classical mechanics, the action of a mechanical system is given by the
integral of the classical momentum along a classical path:
S = \int_{x_1}^{x_2} p\,dx   (3.9)

For an orbit, the initial point and the final point must coincide, x1 = x2 , so the action
integral must describe some of the area circumscribed by a closed loop on the p − x
plane called phase space

S = \oint p\,dx   (3.10)

So, Bohr and Sommerfeld's idea was that the circumscribed area in phase space was
quantized as well.
As a check, let us consider the harmonic oscillator. The classical energy is given by

E(p, q) = \frac{p^2}{2m} + \frac{k}{2}q^2
This is the equation for an ellipse in phase space since we can rearrange this to read

1 = \frac{p^2}{2mE} + \frac{k}{2E}q^2 = \frac{p^2}{a^2} + \frac{q^2}{b^2}   (3.11)
where a = \sqrt{2mE} and b = \sqrt{2E/k} describe the major and minor axes of the ellipse.
The area of an ellipse is A = πab, so the area circumscribed by a classical trajectory
with energy E is

S(E) = 2E\pi\sqrt{m/k}   (3.12)

Since \omega = \sqrt{k/m}, S = 2\pi E/\omega = E/\nu. Finally, since E/\nu must be an integer
multiple of h, the Bohr–Sommerfeld condition for quantization becomes

\oint p\,dx = nh   (3.13)


where p is the classical momentum for a path of energy E, p = \sqrt{2m(E - V(x))}.
Taking this a bit further, the de Broglie wavelength is h/p, so the Bohr–Sommerfeld
rule basically states that stationary energies correspond to classical paths for which
there are an integer number of de Broglie wavelengths.
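The ellipse-area result can be confirmed by direct quadrature of the loop integral. A minimal sketch (not from the text; m = k = E = 1 are purely illustrative choices):

```python
# Sketch: check S(E) = 2 pi E / omega for the harmonic oscillator by
# integrating the closed loop integral of p dx numerically (m = k = 1 assumed).
import math

m, k, E = 1.0, 1.0, 1.0
omega = math.sqrt(k / m)
xt = math.sqrt(2.0 * E / k)  # classical turning point

def p(x):
    return math.sqrt(max(2.0 * m * (E - 0.5 * k * x * x), 0.0))

# midpoint rule; the closed loop is twice the pass between the turning points
N = 200000
h = 2.0 * xt / N
S = 2.0 * sum(p(-xt + (i + 0.5) * h) for i in range(N)) * h

print(S, 2.0 * math.pi * E / omega)
```

The two printed numbers agree, which is just the ellipse area πab computed two ways.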

Now, perhaps you can anticipate a problem with the quantum description of a
classically chaotic system. In classical chaos, chaotic trajectories never return to their
exact starting point in phase space. They may come close, but there are no closed
orbits. For 1D systems, this does not occur since the trajectories are the contours of
the energy function. For higher dimensions, the dimensionality of the system makes
it possible to have extremely complex trajectories that never return to their starting
point.

Problem 3.1 Apply the Bohr–Sommerfeld procedure to determine the stationary


energies for a particle in a box of length l.

3.2 THE WENTZEL, KRAMERS, AND BRILLOUIN


APPROXIMATION
The original Bohr–Sommerfeld idea can be improved upon considerably to produce
an asymptotic (h̄ → 0) approximation to the Schrödinger wave function. The idea
was put forward at about the same time by three different theoreticians: Brillouin
(in Belgium), Kramers (in the Netherlands), and Wentzel (in Germany). Depending
upon your point of origin, this method is the WKB (US & Germany), BWK (France,
Belgium), JWKB (UK)—you get the idea. The original references are
1. “La mécanique ondulatoire de Schrödinger; une méthode générale de
résolution par approximations successives,” L. Brillouin, Comptes rendus
(Paris) 183, 24 (1926).
2. “Wellenmechanik und halbzahlige Quantisierung,” H. A. Kramers,
Zeitschrift für Physik 39, 828 (1926).
3. “Eine Verallgemeinerung der Quantenbedingungen für die Zwecke der
Wellenmechanik,” G. Wentzel Zeitschrift für Physik 38, 518 (1926).
We will first go through how one can use the approach to determine the eigenvalues
of the Schrödinger equation via semiclassical methods, and then show how one can
approximate the actual wave functions themselves.

3.2.1 ASYMPTOTIC EXPANSION FOR EIGENVALUE SPECTRUM


The WKB procedure is initiated by writing the solution to the Schrödinger equation

\psi'' + \frac{2m}{\hbar^2}(E - V(x))\psi = 0
as
  
\psi(x) = \exp\left(\frac{i}{\hbar}\int\chi\,dx\right)   (3.14)

We will soon discover that χ is the classical momentum of the system, but for now,
let us consider it to be a function of the energy of the system. Substituting this into

the Schrödinger equation produces a new differential equation for χ :


\frac{\hbar}{i}\frac{d\chi}{dx} = 2m(E - V) - \chi^2   (3.15)
If we take h̄ → 0, it follows then that

\chi = \chi_o = \sqrt{2m(E - V)} = |p|   (3.16)
which is the magnitude of the classical momentum of a particle. So, if we assume
that this is simply the leading order term in a series expansion in h̄, we would have
\chi = \chi_o + \frac{\hbar}{i}\chi_1 + \left(\frac{\hbar}{i}\right)^2\chi_2 + \cdots   (3.17)
Substituting Equation 3.17 into
\chi = \frac{\hbar}{i}\,\frac{1}{\psi}\frac{\partial\psi}{\partial x}   (3.18)
and equating to zero coefficients with different powers of h̄, we obtain equations that
determine the χn corrections in succession:

\frac{d\chi_{n-1}}{dx} = -\sum_{m=0}^{n}\chi_{n-m}\,\chi_m   (3.19)

for n = 1, 2, 3, . . . . For example,


\chi_1 = -\frac{1}{2}\frac{\chi_o'}{\chi_o} = \frac{1}{4}\frac{V'}{E - V}   (3.20)

\chi_2 = -\frac{\chi_1^2 + \chi_1'}{2\chi_o}
       = -\frac{1}{2\chi_o}\left[\frac{V'^2}{16(E-V)^2} + \frac{V'^2}{4(E-V)^2} + \frac{V''}{4(E-V)}\right]
       = -\frac{5V'^2}{32(2m)^{1/2}(E-V)^{5/2}} - \frac{V''}{8(2m)^{1/2}(E-V)^{3/2}}   (3.21)
and so forth.

Problem 3.2 Verify Equation 3.19 and derive the first-order correction in Equa-
tion 3.20.

Now, to use these equations to determine the spectrum, we replace x everywhere


by a complex coordinate z and suppose that V (z) is a regular and analytic function
of z in any physically relevant region. Consequently, we can then say that ψ(z) is an

 Note: An analytic function is such that it can be expanded in a polynomial series about some local point.

analytic function of z. So, we can write the phase integral as



n = \frac{1}{h}\oint_C \chi(z)\,dz = \frac{1}{2\pi i}\oint_C \frac{\psi_n'(z)}{\psi_n(z)}\,dz   (3.22)

where ψn is the nth discrete stationary solution to the Schrödinger equation and C is
some contour of integration on the z plane. If there is a discrete spectrum, we know
that the number of zeros, n, in the wave function is related to the quantum number
corresponding to a given energy level. So if ψ has no real zeros, this is the ground-
state wave function with energy E o ; one real zero corresponds to energy level E 1 and
so forth.
Suppose the contour of integration, C, is taken such that it includes only these
zeros and no others, then we can write
  
n = \frac{1}{h}\oint_C \chi_o\,dz + \frac{1}{2\pi i}\oint_C\left(\chi_1 - i\hbar\,\chi_2\right)dz + \cdots   (3.23)

Each of these terms involves E − V in the denominator and so has poles at the classical
turning points, where V(z) = E; we can use the Cauchy integral theorem to evaluate the
integrals.
For example, from above, the integral

\frac{1}{2\pi i}\oint_c \chi_1\,dz = \frac{1}{2\pi i}\,\frac{1}{4}\oint_c \frac{V'}{E - V}\,dz   (3.24)

We can make a change of variables Z = V (z) and d Z = V  dz and write the integral as

\frac{1}{2\pi i}\oint_c \chi_1\,dz = \frac{1}{2\pi i}\,\frac{1}{4}\oint_c \frac{dZ}{E - Z} = -\frac{1}{2}   (3.25)

since each classical turning point contributes −1/4.


The next term we evaluate by integration by parts

 
\oint_C \frac{V''}{(E - V(z))^{3/2}}\,dz = -\frac{3}{2}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\,dz   (3.26)

Hence, we can write


 
\oint_C \chi_2(z)\,dz = \frac{1}{32(2m)^{1/2}}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\,dz   (3.27)

Putting it all together


 
n + \frac{1}{2} = \frac{1}{h}\oint_c \sqrt{2m(E - V(z))}\,dz - \frac{h}{128\pi^2(2m)^{1/2}}\oint_c \frac{V'^2}{(E - V(z))^{5/2}}\,dz + \cdots   (3.28)
The above analysis is pretty formal. But what we have is something new. Notice that
we have an extra 1/2 added here that we did not have in the original Bohr–Sommerfeld
(BS) theory. What we have is something even more general. The original BS idea
came from the notion that energies and frequencies were related by integer multiples
of h. But this is really only valid for transitions between states. If we go back and
ask what happens at n = 0 in the Bohr–Sommerfeld theory, this corresponds to a
phase-space ellipse with major and minor axes both of length 0, which violates the
Heisenberg uncertainty principle. This new quantization condition forces the system to
have some lowest energy state with a phase-space area h/2.
Where did this extra 1/2 come from? It originates from the classical turning points
where V (x) = E. Recall that for a 1D system bound by a potential, there are at least
two such points. Each contributes a π/4 to the phase for a total contribution of π/2.
We will see this more explicitly in the next section when evaluating the matching
conditions at the turning points.

3.2.2 EXAMPLE: SEMICLASSICAL ESTIMATE OF SPECTRUM


FOR HARMONIC OSCILLATOR

As an example in using this approach, let us consider the simple case of a harmonic
oscillator. Recall in the discussion of the Bohr–Sommerfeld approach, we noted that
the momentum integral over a closed trajectory was equal to the area in phase space
enclosed by an ellipse

nh = p(x)d x = πab = 2π E/ω

where a and b are the major and minor axes along the x and p directions and E
is the energy. The semiclassical treatment adds an additional factor of 1/2 to the
Bohr–Sommerfeld expression so that

n + 1/2 = E/h̄ω

This agrees with the exact result for the harmonic oscillator energies: E n = h̄ω(n +
1/2) with n = 0, 1, 2, . . . .

3.2.3 THE WENTZEL, KRAMERS, AND BRILLOUIN WAVE FUNCTION


Going back to our original wave function in Equation 3.14 and writing

ψ = ei S/h̄

where S is the integral of χ , we can derive equations for S:


 
\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^2 - \frac{i\hbar}{2m}\frac{\partial^2 S}{\partial x^2} + V(x) = E   (3.29)

If we neglect the term involving h̄, we recover the classical Hamilton–Jacobi equation
for the action S,
 
\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^2 + V(x) = E   (3.30)

and can identify ∂ S/∂ x = χo = p with the classical momentum. Again, as above,
we can seek a series expansion of S in powers of h̄. The result is simply the integral
of Equation 3.17:

S = S_o + \frac{\hbar}{i}S_1 + \cdots   (3.31)
Looking at Equation 3.29: it is clear that the classical approximation is valid when
the second term is very small compared to the first. That is,

\left|\frac{S''}{S'^2}\right| \ll 1

\hbar\left|\frac{d}{dx}\left(\frac{dS}{dx}\right)^{-1}\right| \ll 1

\hbar\left|\frac{d}{dx}\frac{1}{p}\right| \ll 1   (3.32)
where we equate d S/d x = p. Since p is related to the de Broglie wavelength of the
particle λ = h/ p, the same condition implies that
 
\left|\frac{1}{2\pi}\frac{d\lambda}{dx}\right| \ll 1   (3.33)

Thus the semiclassical approximation is only valid when the wavelength of the particle,
as determined by λ(x) = h/p(x), varies only slightly over distances on the order of the
wavelength itself. Noting the gradient of the momentum, this can be written another way:

\frac{dp}{dx} = \frac{d}{dx}\sqrt{2m(E - V(x))} = -\frac{m}{p}\frac{dV}{dx}
Thus, we can write the classical condition as

m\hbar|F|/p^3 \ll 1   (3.34)

Alternatively, in terms of the de Broglie wavelength λ = h/ p,


\frac{d}{dx}\sqrt{2m(E - V(x))} = -\frac{m}{p}\frac{dV}{dx} = -\frac{m\lambda}{h}\frac{dV}{dx}

Thus, the semiclassical condition is met when the potential changes slowly over a
length-scale comparable to the local de Broglie wavelength λ(x) = h/ p(x).
Going back to the expansion for χ
\chi_1 = -\frac{1}{2}\frac{\chi_o'}{\chi_o} = \frac{1}{4}\frac{V'}{E - V}   (3.35)
or equivalently for S1
S_1 = -\frac{S_o''}{2S_o'} = -\frac{p'}{2p}   (3.36)
So,
S_1(x) = -\frac{1}{2}\log p(x)
If we stick to regions where the semiclassical condition is met, then the wave function
becomes

\psi(x) \approx \frac{C_1}{\sqrt{p(x)}}\,e^{\frac{i}{\hbar}\int p(x)\,dx} + \frac{C_2}{\sqrt{p(x)}}\,e^{-\frac{i}{\hbar}\int p(x)\,dx}   (3.37)

The 1/√p prefactor has a remarkably simple interpretation. The probability of finding
the particle in some region between x and x + d x is given by |ψ|2 so that the classical
probability is essentially proportional to 1/p. So, the faster the particle is moving, the
less likely it is to be found in some small region of space. Conversely, the slower a
particle moves, the more likely it is to be found in that region. So the time spent in a
small d x is inversely proportional to the momentum of the particle. We will return to
this concept in a bit when we consider the idea of time in quantum mechanics.
The C1 and C2 coefficients are yet to be determined. If we take x = a to be
one classical turning point so that x > a corresponds to the classically inaccessible
region where E < V (x), then the wave function in that region must be exponentially
damped:
  
\psi(x) \approx \frac{C}{\sqrt{|p|}}\exp\left(-\frac{1}{\hbar}\int_a^x |p(x')|\,dx'\right)   (3.38)
To the left of x = a, we have a combination of incoming and reflected components:
  a    
\psi(x) = \frac{C_1}{\sqrt{p}}\exp\left(\frac{i}{\hbar}\int_x^a p\,dx'\right) + \frac{C_2}{\sqrt{p}}\exp\left(-\frac{i}{\hbar}\int_x^a p\,dx'\right)   (3.39)

3.2.4 SEMICLASSICAL TUNNELING AND BARRIER PENETRATION


Before solving the general problem of how to use this in an arbitrary well, let us
consider the case for tunneling through a potential barrier that has some bumpy top or
corresponds to some simple potential. So, to the left of the barrier the wave function
has incoming and reflected components:

\psi_L(x) = Ae^{ikx} + Be^{-ikx}   (3.40)



Inside we have
\psi_B(x) = \frac{C}{\sqrt{|p(x)|}}\,e^{+\frac{1}{\hbar}\int|p|\,dx} + \frac{D}{\sqrt{|p(x)|}}\,e^{-\frac{1}{\hbar}\int|p|\,dx}   (3.41)

and to the right of the barrier:

\psi_R(x) = Fe^{+ikx}   (3.42)

If F is the transmitted amplitude, then the tunneling probability is the ratio of the
transmitted probability to the incident probability: T = |F|2 /|A|2 . If we assume that
the barrier is high or broad, then C = 0 and we obtain the semiclassical estimate for
the tunneling probability:
  
T \approx \exp\left(-\frac{2}{\hbar}\int_a^b |p(x)|\,dx\right)   (3.43)

where a and b are the turning points on either side of the barrier.
Mathematically, we can “flip the potential upside down” and work in imaginary
time. In this case the action integral becomes
S = \int_a^b \sqrt{2m(V(x) - E)}\,dx   (3.44)

So we can think of tunneling as motion under the barrier in imaginary time.
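Equation 3.43 is straightforward to evaluate numerically for a smooth barrier. The sketch below is not from the text: it assumes h̄ = m = 1 with an arbitrary sech²-shaped barrier, and computes the barrier-penetration integral by direct quadrature between the turning points.

```python
# Sketch: semiclassical tunneling probability, Eq. 3.43, for a barrier
# V(x) = V0 / cosh(x/a)^2 by direct quadrature. hbar = m = 1 and the
# parameter values are illustrative, not from the text.
import math

hbar, m = 1.0, 1.0
V0, a = 1.0, 1.0

def V(x):
    return V0 / math.cosh(x / a)**2

def T_wkb(E, N=20000):
    xt = a * math.acosh(math.sqrt(V0 / E))  # turning points at +/- xt
    h = 2.0 * xt / N
    S = sum(math.sqrt(max(2.0 * m * (V(-xt + (i + 0.5) * h) - E), 0.0))
            for i in range(N)) * h
    return math.exp(-2.0 * S / hbar)

for E in (0.2, 0.5, 0.8):
    print(E, T_wkb(E))  # transmission grows toward 1 as E approaches V0
```

For this barrier the integral is known in closed form, ∫√(2m(V−E))dx = πa√(2m)(√V₀ − √E), which provides a direct check on the quadrature.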


There are a number of useful applications of this formula. Gamow’s theory of
alpha decay is a common example. Another useful application is in the theory of
reaction rates where we want to determine tunneling corrections to the rate constant
for a particular reaction. Close to the top of the barrier, where tunneling may be
important, we can expand the potential and approximate the peak as an upside-down
parabola:
V(x) \approx V_o - \frac{k}{2}x^2
where +x represents the product side and −x represents the reactant side. The Eckart
potential (Figure 3.1) is usually used to approximate the potential energy along a
one-dimensional reaction path:

V_{eck}(x) = V_o\,\mathrm{sech}^2(x/a) \approx V_o\left(1 - (x/a)^2 + \cdots\right)

For convenience, set the zero in energy to be the barrier height Vo so that any trans-
mission for E < 0 corresponds to tunneling.
At sufficiently large distances from the turning point, the motion is purely quasi
classical and we can write the momentum as
p = \sqrt{2m(E + kx^2/2)} \approx x\sqrt{mk} + E\sqrt{m/k}/x   (3.45)

 The analysis is from Kemble (1935) as discussed in Landau and Lifshitz, Quantum Mechanics (nonrelativistic theory), third edition. (New York: Oxford, Pergamon Press, 1977.)

FIGURE 3.1 Eckart barrier and parabolic approximation of the transition state.

and the asymptotic form of the Schrödinger wave function is

\psi = Ae^{+i\xi^2/2}\,\xi^{+i\varepsilon - 1/2} + Be^{-i\xi^2/2}\,\xi^{-i\varepsilon - 1/2}   (3.46)

where A and B are the coefficients we need to determine by the matching condition,
and \xi and \varepsilon are dimensionless lengths and energies given by \xi = x(mk/\hbar^2)^{1/4} and
\varepsilon = (E/\hbar)\sqrt{m/k}.
The particular case we are interested in is for a particle coming from the left and
passing to the right with the barrier in between. So, the wave functions in each of
these regions must be

\psi_R = Be^{+i\xi^2/2}\,\xi^{i\varepsilon - 1/2}   (3.47)

and

\psi_L = e^{-i\xi^2/2}(-\xi)^{-i\varepsilon - 1/2} + Ae^{+i\xi^2/2}(-\xi)^{i\varepsilon - 1/2}   (3.48)

where the first term is the incident wave and the second term is the reflected com-
ponent. So, |A|2 is the reflection coefficient and |B|2 is the transmission coefficient
normalized so that
|A|^2 + |B|^2 = 1
Let us move to the complex plane, write a new coordinate, \xi = \rho e^{i\phi}, and consider what
happens as we rotate around in \phi and take \rho to be large. Since i\xi^2 = \rho^2(i\cos 2\phi - \sin 2\phi),
we have

\psi_R(\phi = 0) = Be^{i\rho^2/2}\,\rho^{+i\varepsilon - 1/2}

\psi_L(\phi = 0) = Ae^{i\rho^2/2}\,(-\rho)^{+i\varepsilon - 1/2}   (3.49)

and at φ = π

\psi_R(\phi = \pi) = Be^{i\rho^2/2}\,(-\rho)^{+i\varepsilon - 1/2}

\psi_L(\phi = \pi) = Ae^{i\rho^2/2}\,\rho^{+i\varepsilon - 1/2}   (3.50)

So, in other words, ψ R (φ = π ) looks like ψ L (φ = 0) when

A = B\left(e^{i\pi}\right)^{i\varepsilon - 1/2}

So, we have the relation A = -iBe^{-\pi\varepsilon}. Finally, after we normalize this we get the
transmission coefficient:

T = |B|^2 = \frac{1}{1 + e^{-2\pi\varepsilon}}

which must hold for any energy. If the energy is large and negative, then

T \approx e^{-2\pi|\varepsilon|}

Also, we can compute the reflection coefficient for E > 0 as 1 - T,

R = \frac{1}{1 + e^{+2\pi\varepsilon}}
This gives us the transmission probability as a function of incident energy. But normal chemical reactions are not done at constant energy; they are done at constant temperature. To get the thermal transmission coefficient, we need to take a Boltzmann-weighted average of transmission coefficients,

T_th(β) = (1/Z) ∫ dE e^{−βE} T(E)    (3.51)

where β = 1/kT and Z is the partition function. If E represents a continuum of energy states, then

T_th(β) = (βωħ/4π) [ψ^(0)(1/2 + βωħ/4π) − ψ^(0)(βωħ/4π)]    (3.52)

where ψ^(n)(z) is the polygamma function, the nth derivative of the digamma function ψ^(0)(z), which is the logarithmic derivative of Euler's gamma function: ψ^(0)(z) = Γ′(z)/Γ(z).
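As a sanity check, the Boltzmann average of Eq. (3.51) can be evaluated by direct quadrature and compared against the closed polygamma form. A minimal SciPy sketch (the unit choice ħω = β = 1 and Z = 1/β for a continuum are our assumptions for this check):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

# Arbitrary unit choice for this check: hbar*omega (the barrier frequency)
# and the inverse temperature are both set to 1.
hw, beta = 1.0, 1.0

# Parabolic-barrier transmission, T(E) = 1/(1 + exp(-2*pi*eps)), eps = E/(hbar*omega)
def T(E):
    return 1.0 / (1.0 + np.exp(-2.0 * np.pi * E / hw))

# Direct Boltzmann average, Eq. (3.51), taking Z = 1/beta for a continuum of states
num, _ = quad(lambda E: np.exp(-beta * E) * T(E), 0.0, np.inf)
T_quad = beta * num

# Closed polygamma form: (beta*hw/4pi)[psi0(1/2 + beta*hw/4pi) - psi0(beta*hw/4pi)]
z = beta * hw / (4.0 * np.pi)
T_poly = z * (digamma(z + 0.5) - digamma(z))

print(T_quad, T_poly)   # the two results agree to quadrature accuracy
```

The agreement comes from expanding 1/(1 + e^{−2πε}) as an alternating geometric series and summing it term by term into digamma functions.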

3.3 CONNECTION FORMULAS


In what we have considered thus far, we have assumed that up until the turning point
the wave function was well behaved and smooth. We can think of the problem as
having two domains: an exterior and an interior. The exterior part we assumed to be
simple and the boundary conditions trivial to impose. The next task is to figure out
the matching condition at the turning point for an arbitrary system. So far what we
have are two pieces, ψ L and ψ R , in the notation above. What we need is a patch. To
do so, we make a linearizing assumption for the force at the classical turning point:

E − V (x) ≈ Fo (x − a) (3.53)

where F_o = −dV/dx evaluated at x = a. Thus, the phase integral is easy:

(1/ħ) ∫_a^x p dx′ = (2/3ħ) √(2mF_o) (x − a)^{3/2}    (3.54)

Problem 3.3 Verify the relations for the transmission and reflection coefficients for
the Eckart barrier problem.

But we can do better than that. We can actually solve the Schrödinger equation for the linear potential and use the linearized solutions as our patch. The Mathematica notebook for this chapter (Chapter3.nb) determines the solution of the linearized Schrödinger equation

−(ħ²/2m)(d²ψ/dx²) + (V′(0)x − E)ψ = 0    (3.55)

which, placing the turning point at the origin so that E = 0 there, can be rewritten as

ψ″ = α³xψ    (3.56)

with

α = (2mV′(0)/ħ²)^{1/3}

Absorbing the coefficient into a new variable y = αx, we get Airy's equation,

ψ″(y) = yψ
The solutions of Airy's equation are the Airy functions, Ai(y) and Bi(y), for the regular and irregular cases. The integral representations of Ai and Bi are

Ai(y) = (1/π) ∫₀^∞ cos(s³/3 + sy) ds    (3.57)

and

Bi(y) = (1/π) ∫₀^∞ [e^{−s³/3 + sy} + sin(s³/3 + sy)] ds    (3.58)
Plots of these functions are shown in Figure 3.2.
Since both Ai and Bi are acceptable solutions, we will take a linear combination of the two as our patching function and figure out the coefficients later:

ψ_P = a Ai(αx) + b Bi(αx)    (3.59)

We now have to determine those coefficients. We need to make two assumptions: (1) that the overlap zones are sufficiently close to the turning point that a linearized potential is reasonable and (2) that the overlap zone is far enough from the turning point (at the origin) that the WKB approximation is accurate and reliable. You can certainly cook up some potential for which this will not work, but we will assume it is reasonable.

FIGURE 3.2 Airy functions, Ai(x) and Bi(x).

In the linearized region, the momentum is

p(x) = ħα^{3/2} (−x)^{1/2}    (3.60)

So, for x > 0,

∫₀^x |p(x′)| dx′ = 2ħ(αx)^{3/2}/3    (3.61)

and the WKB wave function becomes

ψ_R(x) = (D/(√ħ α^{3/4} x^{1/4})) e^{−2(αx)^{3/2}/3}    (3.62)
In order to extend into this region, we will use the asymptotic forms of the Ai and Bi functions for y ≫ 0:

Ai(y) ≈ e^{−2y^{3/2}/3} / (2√π y^{1/4})    (3.63)

Bi(y) ≈ e^{+2y^{3/2}/3} / (√π y^{1/4})    (3.64)
Clearly, the Bi(y) term will not contribute, so b = 0 and

a = √(4π/(αħ)) D
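The asymptotic forms (3.63) and (3.64) used in this matching are easy to check against SciPy's Airy functions; a quick sketch (the test point y = 5 is an arbitrary choice well into the y ≫ 0 regime):

```python
import numpy as np
from scipy.special import airy

y = 5.0                      # arbitrary test point well into y >> 0
Ai, Aip, Bi, Bip = airy(y)   # SciPy returns (Ai, Ai', Bi, Bi')

# Asymptotic forms, Eqs. (3.63)-(3.64)
Ai_asym = np.exp(-2.0 * y**1.5 / 3.0) / (2.0 * np.sqrt(np.pi) * y**0.25)
Bi_asym = np.exp(+2.0 * y**1.5 / 3.0) / (np.sqrt(np.pi) * y**0.25)

print(Ai / Ai_asym, Bi / Bi_asym)   # both ratios approach 1 as y grows
```

Already at y = 5 the ratios deviate from unity by only a few percent, which is why the patching works so well away from the turning point.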
Now, for the other side, we do the same procedure, except this time x < 0, so the phase integral is

∫_x^0 p dx′ = 2ħ(−αx)^{3/2}/3    (3.65)

Thus the WKB wave function on the left-hand side is

ψ_L(x) = (1/√p) [B e^{2i(−αx)^{3/2}/3} + C e^{−2i(−αx)^{3/2}/3}]    (3.66)

       = (1/(√ħ α^{3/4} (−x)^{1/4})) [B e^{2i(−αx)^{3/2}/3} + C e^{−2i(−αx)^{3/2}/3}]    (3.67)

That is the WKB part. To connect with the patching part, we again use the asymptotic forms, this time for y ≪ 0, and take only the regular solution:

Ai(y) ≈ (1/(√π (−y)^{1/4})) sin(2(−y)^{3/2}/3 + π/4)
      ≈ (1/(2i√π (−y)^{1/4})) [e^{iπ/4} e^{i2(−y)^{3/2}/3} − e^{−iπ/4} e^{−i2(−y)^{3/2}/3}]    (3.68)

Comparing the WKB wave and the patching wave, we can match term by term:

(a/(2i√π)) e^{iπ/4} = B/√(ħα)    (3.69)

(−a/(2i√π)) e^{−iπ/4} = C/√(ħα)    (3.70)

Since we know a in terms of the normalization constant D, B = −i e^{iπ/4} D and C = i e^{−iπ/4} D. This is the connection! We can write the WKB function across the turning point as
ψ_WKB(x) = (2D/√p(x)) sin[(1/ħ) ∫_x^0 p dx′ + π/4]      for x < 0
ψ_WKB(x) = (D/√|p(x)|) exp[−(1/ħ) ∫₀^x |p| dx′]          for x > 0    (3.71)

Example: Bound States in the Linear Potential


Since we worked so hard, we have to use the results. So, consider a model problem for a particle in a gravitational field. Actually, this problem is not so far-fetched, since one can prepare trapped atoms above a parabolic reflector and make a quantum bouncing ball. Here the potential is V(x) = mgx, where m is the particle mass and g is the gravitational acceleration (g = 9.80 m/s²). We will take the case where the reflector is infinite so that the particle cannot penetrate into it. The Schrödinger equation for this potential is

−(ħ²/2m) ψ″ + (mgx − E) ψ = 0    (3.72)

The solutions are the Airy Ai(x) functions. Setting β = mg and c = ħ²/2m, the solutions are

ψ = C Ai((β/c)^{1/3} (x − E/β))    (3.73)
c
However, there is one caveat: ψ(0) = 0; thus, the Airy functions must have their nodes at x = 0. So we have to systematically shift the Ai(x) function in x until a node lines up at x = 0. The nodes of the Ai(x) function can be determined, and the first seven of them are listed in Table 3.1. To find the energy levels, we systematically solve the equation

−(β/c)^{1/3} (E_n/β) = x_n

So the ground state is where the first node lands at x = 0,

E_1 = 2.33811 β/(β/c)^{1/3} = 2.33811 mg/(2m²g/ħ²)^{1/3}    (3.74)

TABLE 3.1
Location of Nodes for the Airy Ai(x) Function

node    x_n
1       −2.33811
2       −4.08795
3       −5.52056
4       −6.78671
5       −7.94413
6       −9.02265
7       −10.0402

and so on. Of course, we still have to normalize the wave function to get the correct
energy.
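The node condition can be evaluated directly with SciPy's tabulated Airy zeros; a sketch in units where ħ = m = g = 1 (so β = 1 and c = 1/2 are assumptions of this unit choice):

```python
import numpy as np
from scipy.special import ai_zeros

# Units with hbar = m = g = 1, so beta = m*g = 1 and c = hbar^2/(2m) = 1/2
beta_, c_ = 1.0, 0.5

# First seven zeros x_n of Ai(x); ai_zeros returns (zeros of Ai, zeros of Ai', ...)
xn = ai_zeros(7)[0]

# Solve -(beta/c)^(1/3) E_n / beta = x_n for the energy levels
En = -xn * beta_ / (beta_ / c_) ** (1.0 / 3.0)

print(En)   # E_1 = 2.33811/2^(1/3), etc., in these units
```

This reproduces the entries of Table 3.1 and turns them directly into the bound-state spectrum.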
We can make life a bit easier by using the quantization condition derived from the
WKB approximation. Since we require the wave function to vanish exactly at x = 0,
we have

(1/ħ) ∫₀^{x_t} p(x) dx + π/4 = nπ    (3.75)
This assures us that the wave vanishes at x = 0. In this case x_t is the turning point, E = mg x_t (see Figure 3.3). As a consequence,

∫₀^{x_t} p(x) dx = (n − 1/4)πħ
Since p(x) = √(2m(E_n − mgx)), the integral can be evaluated; substituting the classical turning point x_t = E_n/mg, the phase integral becomes

(2E_n/3g) √(2E_n/m) = (n − 1/4)πħ

FIGURE 3.3 Quantum bound states in a gravitational well.



Solving for E_n yields the semiclassical approximation for the eigenvalues:

E_n = g^{2/3} m^{1/3} (4n − 1)^{2/3} (3π)^{2/3} ħ^{2/3} / (4 · 2^{1/3})    (3.76)
In atomic units, the gravitational acceleration is g = 1.08563 × 10⁻²² bohr/a.u.². For n = 0, we get for an electron E_osc = 2.014 × 10⁻¹⁵ hartree, or about 12.6 Hz. So, gravitational effects on electrons are extremely tiny compared with the typical electronic energy for an atom or molecule. However, quantized gravitational states have been observed in atomic fountains.
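To gauge the quality of the semiclassical result, Eq. (3.76), we can compare it against the exact Airy-zero energies in the same ħ = m = g = 1 units (the unit choice and the range n = 1–7 are our assumptions here):

```python
import numpy as np
from scipy.special import ai_zeros

# Units with hbar = m = g = 1 (beta = 1, c = 1/2): exact levels from Airy zeros
E_exact = -ai_zeros(7)[0] / 2.0 ** (1.0 / 3.0)

# Semiclassical levels from Eq. (3.76) in the same units
n = np.arange(1, 8)
E_wkb = (4 * n - 1) ** (2.0 / 3.0) * (3.0 * np.pi) ** (2.0 / 3.0) / (4.0 * 2.0 ** (1.0 / 3.0))

frac_err = np.abs(E_wkb / E_exact - 1.0)
print(frac_err)   # below 1% already for n = 1 and shrinking with n
```

As usual with WKB estimates, the accuracy improves rapidly with quantum number.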

3.4 SCATTERING
The collision between two particles plays an important role in the dynamics of reactive
molecules. We consider here the collision between two particles interacting via a
central force V (r ). Working in the center of mass frame, we consider the motion of
a point particle with mass μ and position vector r. We will first examine the process
in a purely classical context since it is intuitive and then apply what we know to the
quantum and semiclassical case.

3.4.1 CLASSICAL SCATTERING


The angular momentum of the particle about the origin is given by

L = r × p = μ(r × ṙ)    (3.77)

We know that angular momentum is a conserved quantity, and it is easy to show that L̇ = 0, viz.,

L̇ = (d/dt)(r × p) = (ṙ × p) + (r × ṗ)    (3.78)

Since ṙ = p/μ, the first term vanishes; likewise, the force vector, ṗ = −(dV/dr) r̂, is along r, so the second term vanishes. Thus L = const, meaning that angular momentum is conserved during the course of the collision.
In Cartesian coordinates, the total energy of the collision is given by
E = (μ/2)(ẋ² + ẏ²) + V    (3.79)
To convert from Cartesian to polar coordinates, we use

x = r cos θ (3.80)
y = r sin θ (3.81)
ẋ = ṙ cos θ − r θ˙ sin θ (3.82)
ẏ = ṙ sin θ + r θ˙ cos θ (3.83)

Thus,

E = (μ/2) ṙ² + V(r) + L²/(2μr²)    (3.84)

where we have used the fact that

L = μr²θ̇    (3.85)

What we see here is that we have two potential contributions. The first is the physical attraction (or repulsion) between the two
scattering bodies. The second is a purely repulsive centrifugal potential that depends
upon the angular momentum and ultimately upon the impact parameters. For cases
of large impact parameters, this can be the dominant force. The effective radial force
is given by
μr̈ = L²/(μr³) − ∂V/∂r    (3.86)
Again, we note that the centrifugal contribution is always repulsive while the physical
interaction V(r) is typically attractive at long ranges and repulsive at short ranges.
We can derive the solutions to the scattering motion by integrating the velocity
equations for r and θ
ṙ = ± [(2/μ)(E − V(r) − L²/(2μr²))]^{1/2}    (3.87)

θ̇ = L/(μr²)    (3.88)
and taking into account the starting conditions for r and θ. In general, we could solve
the equations numerically and obtain the complete scattering path. However, really
what we are interested in is the deflection angle χ since this is what is ultimately
observed. So, we integrate the last two equations and derive θ in terms of r :
θ(r) = ∫₀^θ dθ′ = −∫_∞^r (dθ/dr′) dr′    (3.89)

     = −∫_∞^r (L/(μr′²)) [(2/μ)(E − V − L²/(2μr′²))]^{−1/2} dr′    (3.90)
where the collision starts at t = −∞ with r = ∞ and θ = 0. What we want to
do is derive this in terms of an impact parameter, b, and scattering angle χ . These
are illustrated in Figure 3.4 and can be derived from basic kinematic considerations.
First, energy is conserved throughout, so if we know the asymptotic velocity v, then
E = μv 2 /2. Secondly, angular momentum is conserved, so L = μ|r × v| = μvb.
Thus the integral above becomes

θ(r) = −∫_∞^r (dθ/dr′) dr′    (3.91)

     = −b ∫_∞^r dr′ / (r′² √(1 − V/E − b²/r′²))    (3.92)

FIGURE 3.4 Elastic scattering trajectory for classical collision.

Finally, the angle of deflection is related to the angle of closest approach by 2θ_c + χ = π; hence,

χ = π − 2b ∫_{r_c}^∞ dr / (r² √(1 − V/E − b²/r²))    (3.93)
The radial distance of closest approach is determined by

E = L²/(2μr_c²) + V(r_c)    (3.94)

which can be restated as

b² = r_c² (1 − V(r_c)/E)    (3.95)

Once we have specified the potential, we can compute the deflection angle using Equations 3.93 and 3.95. If V(r_c) < 0, then r_c < b and we have an attractive potential; if V(r_c) > 0, then r_c > b and the potential is repulsive at the turning point.
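The deflection integral, Eq. (3.93), is straightforward to evaluate numerically. As a check, a sketch for the repulsive Coulomb potential V = C/r, where the result can be compared with the Rutherford formula tan(χ/2) = C/(2Eb); the substitutions used to tame the square-root singularity are our own choices, not from the text:

```python
import numpy as np
from scipy.integrate import quad

# Repulsive Coulomb potential V(r) = C/r; the parameter values are arbitrary
C, E, b = 1.0, 1.0, 0.5

# Closest approach from b^2 = rc^2 (1 - V(rc)/E), Eq. (3.95):
# rc^2 - (C/E) rc - b^2 = 0
rc = 0.5 * (C / E + np.sqrt((C / E) ** 2 + 4.0 * b ** 2))

# Substitute u = b/r in Eq. (3.93); the integrand becomes 1/sqrt(f(u)) with
# f(u) = 1 - (C/(E b)) u - u^2 = (u1 - u)(u - u2), where u1 = b/rc
u1 = b / rc
u2 = -C / (E * b) - u1          # the second (negative) root of f

# A further substitution u = u1 - t^2 removes the square-root endpoint
# singularity, leaving a smooth integrand
I, _ = quad(lambda t: 2.0 / np.sqrt(u1 - t ** 2 - u2), 0.0, np.sqrt(u1))

chi_num = np.pi - 2.0 * I
chi_ruth = 2.0 * np.arctan(C / (2.0 * E * b))   # Rutherford: tan(chi/2) = C/(2Eb)
print(chi_num, chi_ruth)                        # both give pi/2 here
```

For these parameters the two agree to quadrature accuracy; the same quadrature works for any central potential once r_c is found from Eq. (3.95).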
If we have a beam of particles incident on some scattering center, then collisions will occur with all possible impact parameters (hence angular momenta) and will give rise to a distribution in the scattering angles. We can describe this by a differential cross-section. If we have some incident intensity of particles in our beam I_o, which is the incident flux or the number of particles passing a unit area normal to the beam direction per unit time, then the differential cross-section I(χ) is defined so that I(χ)dΩ is the number of particles per unit time scattered into some solid angle dΩ divided by the incident flux.

The deflection pattern will be axially symmetric about the incident beam direction due to the spherical symmetry of the interaction potential; thus, I(χ) depends only upon the scattering angle. Thus, dΩ can be constructed by the cones defining χ and χ + dχ, that is, dΩ = 2π sin χ dχ. Even if the interaction potential is not spherically

symmetric, since most molecules are not spherical, the scattering would be axially
symmetric since we would be scattering from a homogeneous distribution of all
possible orientations of the colliding molecules. Hence any azimuthal dependency
must vanish unless we can orient on the colliding species.
Given an initial velocity v, the fraction of the incoming flux with impact parameters
between b and b+db is 2πbdb. These particles will be deflected between χ and χ +dχ
if dχ /db > 0 or between χ and χ − dχ if dχ /db < 0. Thus, I (χ)d = 2πbdb and
it follows then that

I(χ) = b / (sin χ |dχ/db|)    (3.96)
Thus, once we know χ (b) for a given v, we can get the differential cross-section. The
total cross-section is obtained by integrating
σ = 2π ∫₀^π I(χ) sin χ dχ    (3.97)

This is a measure of the attenuation of the incident beam by the scattering target and
has the units of area.

3.4.2 SCATTERING AT SMALL DEFLECTION ANGLES


Our calculations will be greatly simplified if we consider collisions that result in small deflections in the forward direction. If we let the initial beam be along the x axis with momentum p, then the scattered momentum p′ will be related to the scattering angle by p′ sin χ = Δp_y. Taking χ to be small,

χ ≈ Δp_y/p = momentum transfer/momentum    (3.98)
Since the time derivative of momentum is the force, the momentum transferred perpendicular to the incident beam is obtained by integrating the perpendicular force

F_y = −∂V/∂y = −(∂V/∂r)(∂r/∂y) = −(∂V/∂r)(b/r)    (3.99)

where we used r² = x² + y² and y ≈ b. Thus we find

χ = Δp_y / (μ(2E/μ)^{1/2})    (3.100)

  = −b(2μE)^{−1/2} ∫_{−∞}^{+∞} (∂V/∂r)(dt/r)    (3.101)

  = −b(2μE)^{−1/2} (2E/μ)^{−1/2} ∫_{−∞}^{+∞} (∂V/∂r)(dx/r)    (3.102)

  = −(b/E) ∫_b^∞ (∂V/∂r)(r² − b²)^{−1/2} dr    (3.103)

where we used x = (2E/μ)^{1/2} t, and x varies from −∞ to +∞ as r goes from ∞ to b and back.
Let us use this in a simple example of the V = C/r^s potential for s > 0. Substituting V into the integral above and solving yields

χ = (sCπ^{1/2}/(2b^s E)) Γ((s + 1)/2)/Γ(s/2 + 1)    (3.104)

This indicates that χE ∝ b^{−s} and |dχ/db| = χs/b. Thus, we can conclude by deriving the differential cross-section

I(χ) = (1/s) χ^{−(2+2/s)} [(sCπ^{1/2}/(2E)) Γ((s + 1)/2)/Γ(s/2 + 1)]^{2/s}    (3.105)

for small values of the scattering angle. Consequently, a log–log plot of the center-of-mass differential cross-section as a function of the scattering angle at fixed energy should give a straight line with a slope of −(2 + 2/s), from which one can determine the value of s. For the van der Waals potential, s = 6 and I(χ) ∝ E^{−1/3} χ^{−7/3}.
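The gamma-function result can be verified by carrying out the impulse integral of Eq. (3.103) numerically for a specific case; with the substitution r = b/sin φ the integral reduces to ∫₀^{π/2} sin^s φ dφ. A sketch for s = 6 (the parameter values are arbitrary, with E chosen large so that the small-angle assumption genuinely holds):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# V = C/r^s; E chosen large so the deflection is genuinely small
C, E, b, s = 1.0, 100.0, 2.0, 6

# With r = b/sin(phi), Eq. (3.103) becomes
# chi = (s C/(E b^s)) * Int_0^{pi/2} sin^s(phi) dphi
chi_num = (s * C / (E * b ** s)) * quad(lambda phi: np.sin(phi) ** s, 0.0, np.pi / 2.0)[0]

# Closed form, Eq. (3.104)
chi_formula = (s * C * np.sqrt(np.pi) / (2.0 * b ** s * E)) * gamma((s + 1) / 2.0) / gamma(s / 2.0 + 1.0)

print(chi_num, chi_formula)
```

The two expressions agree because ∫₀^{π/2} sin^s φ dφ = (√π/2) Γ((s+1)/2)/Γ(s/2+1), which is exactly the gamma-function ratio appearing in Eq. (3.104).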

3.4.3 QUANTUM TREATMENT


The quantum mechanical case is a bit more complex. Here we will develop a brief
overview of quantum scattering and move on to the semiclassical evaluation. The
quantum scattering is determined by the asymptotic form of the wave function,

ψ(r, χ) → A [e^{ikz} + (f(χ)/r) e^{ikr}]  as r → ∞    (3.106)
where A is some normalization constant and k = 1/λ = μv/h̄ is the initial wave
vector along the incident beam direction (χ = 0). The first term represents a plane
wave incident upon the scatterer and the second represents an outgoing spherical
wave. Notice that the outgoing amplitude is reduced as r increases. This is because
the wave function spreads as r increases. If we can collimate the incoming and
outgoing components, then the scattering amplitude f (χ ) is related to the differential
cross-section by

I (χ ) = | f (χ )|2 (3.107)

What we have is then that the asymptotic form of the wave function carries within it
information about the scattering process. As a result, we do not need to solve the wave
equation for all of space, we just need to be able to connect the scattering amplitude
to the interaction potential. We do so by expanding the wave as a superposition of Legendre polynomials:

ψ(r, χ) = Σ_{l=0}^∞ R_l(r) P_l(cos χ)    (3.108)

R_l(r) must remain finite at r = 0. This determines the form of the solution.



When V(r) = 0, then ψ = A exp(ikz), and we can expand the exponential in terms of spherical waves:

e^{ikz} = Σ_{l=0}^∞ (2l + 1) e^{ilπ/2} P_l(cos χ) sin(kr − lπ/2)/(kr)    (3.109)

       = (1/2i) Σ_{l=0}^∞ (2l + 1) i^l P_l(cos χ) [e^{i(kr−lπ/2)}/(kr) − e^{−i(kr−lπ/2)}/(kr)]    (3.110)

We can interpret this equation in the following intuitive way: The incident plane wave
is equivalent to an infinite superposition of incoming and outgoing spherical waves
in which each term corresponds to a particular angular momentum state with

L = ħ√(l(l + 1)) ≈ ħ(l + 1/2)    (3.111)

From our analysis above, we can relate L to the impact parameter, b,

b = L/(μv) ≈ (l + 1/2)/k = (l + 1/2)λ    (3.112)

In essence the incoming beam is divided into cylindrical zones in which the lth zone
contains particles with impact parameters (and hence angular momenta) between
lλ and (l + 1)λ.

Problem 3.4 In the collision between hard spheres as described on p. 78, the impact
parameter b is treated as continuous; however, in quantum mechanics we allow only
discrete values of the angular momentum l. How will this affect our results, since
b = (l + 1/2)λ?

If V(r) is short ranged (that is, it falls off more rapidly than 1/r for large r), we can derive a general solution for the asymptotic form:

ψ(r, χ) → Σ_{l=0}^∞ (2l + 1) exp[i(lπ/2 + η_l)] P_l(cos χ) sin(kr − lπ/2 + η_l)/(kr)    (3.113)

The significant difference between Equation 3.113 and Equation 3.110 for the V (r ) =
0 case is the addition of a phase shift ηl . This shift only occurs in the outgoing part
of the wave function and so we conclude that the primary effect of a potential in
quantum scattering is to introduce a phase in the asymptotic form of the scattering
wave. This phase must be a real number and has the physical interpretation illustrated
in Figure 3.5. A repulsive potential will cause a decrease in the relative velocity of
the particles at small r resulting in a longer de Broglie wavelength. This causes the
wave to be “pushed out” relative to that for V = 0 and the phase shift is negative. An
attractive potential produces a positive phase shift and “pulls” the wave function in a
bit. Furthermore, the centrifugal part produces a negative shift of −lπ/2.

FIGURE 3.5 Form of the radial wave for repulsive (short dashed) and attractive (long dashed)
potentials. The form for V = 0 is the solid curve for comparison.

Comparing the various forms for the asymptotic waves, we can deduce that the scattering amplitude is given by

f(χ) = (1/2ik) Σ_{l=0}^∞ (2l + 1)(e^{2iη_l} − 1) P_l(cos χ)    (3.114)

From this, the differential cross-section is

I(χ) = λ² |Σ_{l=0}^∞ (2l + 1) e^{iη_l} sin(η_l) P_l(cos χ)|²    (3.115)

What we see here is the possibility for interference between different angular mo-
mentum components.
Moving forward at this point requires some rather sophisticated treatments. How-
ever, we can use the semiclassical methods developed in this chapter to estimate the
phase shifts.
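As an illustration of the partial-wave machinery, consider hard-sphere scattering from a sphere of radius a, for which the phase shifts are known in closed form, tan η_l = j_l(ka)/y_l(ka) (a standard result not derived in the text). Building I(χ) from Eq. (3.115) and integrating via Eq. (3.97) recovers the familiar low-energy limit σ → 4πa²:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn, spherical_yn, eval_legendre

# Hard sphere of radius a: requiring the wave to vanish at r = a gives the
# standard phase shifts tan(eta_l) = j_l(ka)/y_l(ka)
a, k = 1.0, 0.01          # ka << 1 puts us in the low-energy (s-wave) limit
l = np.arange(6)
eta = np.arctan(spherical_jn(l, k * a) / spherical_yn(l, k * a))

# Differential cross-section from Eq. (3.115), with lambda^2 = 1/k^2
def I_chi(chi):
    terms = (2 * l + 1) * np.exp(1j * eta) * np.sin(eta) * eval_legendre(l, np.cos(chi))
    return np.abs(terms.sum()) ** 2 / k ** 2

# Total cross-section, Eq. (3.97)
sigma = 2.0 * np.pi * quad(lambda c: I_chi(c) * np.sin(c), 0.0, np.pi)[0]

print(sigma / (4.0 * np.pi * a ** 2))   # -> 1 in the low-energy limit
```

At ka ≪ 1 only η₀ ≈ −ka contributes, so the scattering is essentially isotropic, a clean example of how a single angular-momentum zone dominates the sum.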

3.4.4 SEMICLASSICAL EVALUATION OF PHASE SHIFTS


The exact scattering wave is not so important. What is important is the asymptotic
extent of the wave function because that is the part that carries the information from
the scattering center to the detector. What we want is a measure of the shift in phase
between a scattering with and without the potential. From the WKB treatment above,
we know that the phase is related to the classical action along a given path. Thus,
in computing the semiclassical phase shifts, we are really looking at the difference
between the classical actions for a system with the potential switched on and a system
with the potential switched off,

η_l^{SC} = lim_{R→∞} [∫_{r_c}^R dr/λ(r) − ∫_b^R dr/λ(r)]    (3.116)

where R is the radius of a sphere about the scattering center and λ(r) is the de Broglie wavelength

λ(r) = ħ/p = 1/k(r) = ħ/(μv(1 − V(r)/E − b²/r²)^{1/2})    (3.117)

associated with the radial motion. Putting this together:

η_l^{SC} = lim_{R→∞} [∫_{r_c}^R k(1 − V(r)/E − b²/r²)^{1/2} dr − ∫_b^R k(1 − b²/r²)^{1/2} dr]    (3.118)

         = lim_{R→∞} [∫_{r_c}^R k(r) dr − ∫_b^R k(1 − b²/r²)^{1/2} dr]    (3.119)

(k is the incoming wave vector.) The last integral we can evaluate as

∫_b^R k((r² − b²)^{1/2}/r) dr = k[(r² − b²)^{1/2} − b cos⁻¹(b/r)]|_b^R → kR − kbπ/2    (3.120)
b r r b

Now, to clean things up a bit, we add and subtract an integral over k (we do this to get rid of the R dependence, which will cause problems when we take the limit R → ∞):

η_l^{SC} = lim_{R→∞} [∫_{r_c}^R k(r) dr − ∫_{r_c}^R k dr + ∫_{r_c}^R k dr − (kR − kbπ/2)]    (3.121)

         = ∫_{r_c}^∞ (k(r) − k) dr − k(r_c − bπ/2)    (3.122)

         = ∫_{r_c}^∞ (k(r) − k) dr − kr_c + π(l + 1/2)/2    (3.123)

This last expression is the standard form of the phase shift.


The deflection angle can be determined in a similar way:

χ = lim_{R→∞} [(π − 2∫dθ)_{actual path} − (π − 2∫dθ)_{V=0 path}]    (3.124)

We transform this into an integral over r:

χ = −2b [∫_{r_c}^∞ (1 − V(r)/E − b²/r²)^{−1/2} dr/r² − ∫_b^∞ (1 − b²/r²)^{−1/2} dr/r²]    (3.125)
Agreed, this is a weird way to express the scattering angle. But let us keep pushing this forward. The last integral can be evaluated as

∫_b^∞ (1 − b²/r²)^{−1/2} dr/r² = (1/b) cos⁻¹(b/r)|_b^∞ = π/(2b)    (3.126)

which yields the classical result we obtained previously. So, why did we bother? From
this we can derive a simple and useful connection between the classical deflection
angle and the rate of change of the semiclassical phase shift with angular momentum,
dη_l^{SC}/dl. First, recall the Leibniz rule for taking derivatives of integrals:

(d/dx) ∫_{a(x)}^{b(x)} f(x, y) dy = f(x, b(x)) db/dx − f(x, a(x)) da/dx + ∫_{a(x)}^{b(x)} ∂f(x, y)/∂x dy    (3.127)

Taking the derivative of η_l^{SC} with respect to l, using the last equation and the relation (∂b/∂l)_E = 1/k, we find that

dη_l^{SC}/dl = χ/2    (3.128)
Next, we examine the differential cross-section, I(χ). The scattering amplitude is

f(χ) = (λ/2i) Σ_{l=0}^∞ (2l + 1) e^{2iη_l} P_l(cos χ)    (3.129)

where we use λ = 1/k and exclude the singular point where χ = 0, since this contributes nothing to the total flux.
Now, we need a mathematical identity to take this to the semiclassical limit where the potential varies slowly with wavelength. What we do is to first relate the Legendre polynomial, P_l(cos θ), to a zeroth-order Bessel function for small values of θ (θ ≪ 1):

P_l(cos θ) ≈ J₀((l + 1/2)θ)    (3.130)

Now, when x = (l + 1/2)θ ≫ 1 (that is, large angular momentum), we can use the asymptotic expansion of J₀(x):

J₀(x) → √(2/(πx)) sin(x + π/4)    (3.131)

Pulling this together,

P_l(cos θ) → (2/(π(l + 1/2)θ))^{1/2} sin((l + 1/2)θ + π/4)
           ≈ (2/(π(l + 1/2)))^{1/2} sin((l + 1/2)θ + π/4)/(sin θ)^{1/2}    (3.132)

for θ(l + 1/2) ≫ 1. Thus, we can write the semiclassical scattering amplitude as
f(χ) = −λ Σ_{l=0}^∞ ((l + 1/2)/(2π sin χ))^{1/2} (e^{iφ⁺} + e^{iφ⁻})    (3.133)

where

φ± = 2η_l ± (l + 1/2)χ ± π/4    (3.134)
The phases are rapidly oscillating functions of l. Consequently, the majority of the
terms must cancel and the sum is determined by the ranges of l for which either φ +
or φ − is extremized. This implies that the scattering amplitude is determined almost
exclusively by phase shifts that satisfy
2(dη_l/dl) ± χ = 0    (3.135)
where the + is for dφ + /dl = 0 and the − is for dφ − /dl = 0. This demonstrates that
only the phase shifts corresponding to impact parameter b can contribute significantly
to the differential cross-section in the semiclassical limit. Thus, the classical condition
for scattering at a given deflection angle χ is that l be large enough for Equation 3.135
to apply.

3.5 PROBLEMS AND EXERCISES


Problem 3.5 In this problem we will consider the ammonia inversion problem, but
this time we will proceed in a semiclassical context.
Recall that the ammonia inversion potential consists of two symmetrical potential
wells separated by a barrier. If the barrier were impenetrable, one would find energy
levels corresponding to motion in one well or the other. Since the barrier is not
infinite, there can be passage between wells via tunneling. This causes the otherwise
degenerate energy levels to split. In this problem, we will make life a bit easier by
taking
V (x) = α(x 4 − x 2 )

Let ψ_o be the semiclassical wave function describing the motion in one well with energy E_o. Assume that ψ_o is exponentially damped on both sides of the well and that the wave function is normalized so that the integral over ψ_o² is unity. When tunneling is taken into account, the wave functions corresponding to the new energy levels, E_1 and E_2, are the symmetric and antisymmetric combinations of ψ_o(x) and ψ_o(−x):

ψ_1 = (ψ_o(x) + ψ_o(−x))/√2

ψ_2 = (ψ_o(x) − ψ_o(−x))/√2

where ψ_o(−x) can be thought of as the contribution from the zeroth-order wave function in the other well. In well 1, ψ_o(−x) is very small; in well 2, ψ_o(+x) is very small; and the product ψ_o(x)ψ_o(−x) is vanishingly small everywhere. Also, by construction, ψ_1 and ψ_2 are normalized.
1. Assume that ψ_o and ψ_1 are solutions of the Schrödinger equations

   ψ_o″ + (2m/ħ²)(E_o − V)ψ_o = 0

   and

   ψ_1″ + (2m/ħ²)(E_1 − V)ψ_1 = 0

   Multiply the former by ψ_1 and the latter by ψ_o, combine and subtract equivalent terms, and integrate over x from 0 to ∞ to show that

   E_1 − E_o = −(ħ²/m) ψ_o(0) ψ_o′(0)

   Perform a similar analysis to show that

   E_2 − E_o = +(ħ²/m) ψ_o(0) ψ_o′(0)
m
2. Show that the unperturbed semiclassical wave function is

   ψ_o(0) = √(ω/(2π v_o)) exp(−(1/ħ) ∫₀^a |p| dx)

   and

   ψ_o′(0) = (m v_o/ħ) ψ_o(0)

   where v_o = √(2(V(0) − E_o)/m) and a is the classical turning point at E_o = V(a).
3. Combining your results, show that the tunneling splitting is

   ΔE = (ħω/π) exp(−(1/ħ) ∫_{−a}^{+a} |p| dx)

   where the integral is taken between classical turning points on either side of the barrier.
4. Assuming that the potential in the barrier is an upside-down parabola, V(x) ≈ V_o − kx²/2, what is the tunneling splitting?
5. Now, taking α = 0.1, expand the potential about the barrier and deter-
mine the harmonic force constant for the upside-down parabola. Use the
equations you derived and compute the tunneling splitting for a proton in
this well.

Problem 3.6 Use the Bohr–Sommerfield approximation to derive an expression for


the number of discrete bound states in the following potentials:
1. V = mω²x²/2
2. V = V_o cot²(πx/a) for 0 < x < a

Problem 3.7 Use the semiclassical approximation to determine the average kinetic
energy of a particle in a stationary state.

Problem 3.8 Use the result of the previous problem to determine the average kinetic
energy of a particle in the following potentials:
1. V = mω²x²/2
2. V = V_o cot²(πx/a) for 0 < x < a

Problem 3.9 Use the semiclassical approximation to determine the form of the po-
tential V (x) for a given energy spectrum E n . Assume V (x) to be an even function
V (x) = V (−x) increasing monotonically for x > 0.

Problem 3.10 Use the Ritz variational principle to show that any purely attractive
one-dimensional potential well has at least one bound state.

Problem 3.11 Consider a particle of mass m moving in a potential λV (x) that satisfies
the following conditions: For x < 0 and x > a, V (x) = 0 and for 0 ≤ x ≤ a,
 a
λ V (x)d x < 0
0

Show that if λ is small, there exists a bound state with energy

E ≈ −(mλ²/2ħ²) [∫₀^a V(x) dx]²

Problem 3.12 Let us revisit the double-well tunneling problem by making the following approximation to the tunneling doublet:

ψ± = (1/√2)(φ_0(x) ± φ_0(−x))

where φ_0(x) is a quasi-classical wave function describing motion in the right-hand well,

φ_0″ − (2m/ħ²)(V(x) − E_0)φ_0 = 0

Show that the tunneling splitting between the ψ± states is given by

E_− − E_+ = (4ħ²/m) φ_0(0) φ_0′(0)
4 Quantum Dynamics (and Other Un-American Activities)

Dr. Condon, it says here that you have been at the forefront of a revolutionary movement in physics called quantum mechanics. It strikes this hearing that if you could be at the forefront of one revolutionary movement . . . you could be at the forefront of another.
—House Committee on Un-American Activities to Dr. Edward Condon, 1948

4.1 INTRODUCTION
This chapter is really the heart and soul of this text—not only in a physical sense but
also in a scientific sense. In the early days of quantum mechanics and especially chem-
ical physics, we were mostly interested in discerning the energy states or predicting
equilibrium structures of a given atomic or molecular system. This provided a good
test of quantum theory and deepened our understanding of the nature of the bonding
and intermolecular interactions that define a chemical system. With the introduction
of time-resolved laser techniques in the 1980s, modern investigations have focused
upon pulling apart how an atomic or molecular system undergoes transitions from
one state to the next and how the quantum interferences between different pathways
influence these transitions. Typically, in a molecular system we treat the electronic de-
grees of freedom using rigorous quantum theory and allow their energies and states to
be parametrized by the instantaneous positions of the nuclei. This is justified through
the Born–Oppenheimer approximation, which allows us to separate the fast motion
of the electrons from the far slower motions of the nuclei by virtue of their disparity in
mass. As we shall see in this chapter, things become interesting when the separation
of time scales is no longer valid.
We shall begin with a brief review of the bound states of a coupled two-level
system. This is a model problem that captures the essential physics of a wide range of
situations. We shall discuss this within first a time-independent perspective and then
a time-dependent perspective. Finally, at the end of the chapter we shall discuss what
happens when we allow the two-level system to have an additional harmonic degree
of freedom that couples the transitions between the two states.


4.2 THE TWO-STATE SYSTEM


A very general problem is to consider two states {|1⟩, |2⟩} with energies E_1 and E_2 coupled by some off-diagonal interaction V. We shall refer to these states as the "localized" basis since, if we allow V to be parametrized by, say, the spatial separation between the states, then as this distance becomes large we expect V → 0 and the localized states become the exact states of the system. An elementary example of this is the description of bonding within the hydrogen molecule, where the localized states can be taken to be the hydrogenic 1s orbitals localized about each atom center. For the sake of simplicity we assume ⟨1|2⟩ = 0.
In the localized basis, we can write the Hamiltonian H as a matrix
 
H = ( E_1   V
       V   E_2 )    (4.1)

Let us define new energy variables E_m = (E_1 + E_2)/2 and Δ = (E_1 − E_2)/2. The Hamiltonian can then be written in these new variables as

H = E_m I + Δ (  1         tan(2θ)
                 tan(2θ)   −1      )    (4.2)

where

tan(2θ) = V/Δ    (4.3)
defines a "mixing angle." It is straightforward to determine the energy levels of the coupled system in terms of the mixing angle,

ε± = E_m ∓ Δ sec(2θ)    (4.4)

or in terms of the interaction,

ε± = E_m ∓ √(Δ² + V²)    (4.5)

where we have assumed V < 0.
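The exact eigenvalues in Eq. (4.5) are easy to confirm by direct diagonalization; a minimal sketch (the numerical values of E₁, E₂, and V are arbitrary):

```python
import numpy as np

# Arbitrary numerical values for the localized-basis parameters
E1, E2, V = 0.0, 2.0, -0.3
H = np.array([[E1, V], [V, E2]])

Em = 0.5 * (E1 + E2)
Delta = 0.5 * (E1 - E2)

eps = np.linalg.eigvalsh(H)   # ascending eigenvalues from direct diagonalization
eps_pm = np.array([Em - np.hypot(Delta, V), Em + np.hypot(Delta, V)])

print(eps, eps_pm)   # Eq. (4.5) matches the numerical eigenvalues
```

The same two lines generalize to any real symmetric 2 × 2 Hamiltonian, which is why the two-state formulas recur so often.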


The mixing angle θ comes about because H can be brought into diagonal form
by introducing the 2 × 2 rotation matrix
 
T = (  cos(θ)   sin(θ)
      −sin(θ)   cos(θ) )    (4.6)

where θ is the mixing angle defined above. What we do is to rotate the initial two orthogonal state vectors, |1⟩ and |2⟩, which lie on a two-dimensional plane, by an angle θ to form new state vectors, |+⟩ and |−⟩,
    
( |+⟩ )   (  cos(θ)   sin(θ) ) ( |1⟩ )
( |−⟩ ) = ( −sin(θ)   cos(θ) ) ( |2⟩ )    (4.7)


FIGURE 4.1 Energy levels of the two-state system. The superimposed circles are representa-
tive of the initial (localized) and final (delocalized) states of the system.

These new states, which we shall term the “delocalized” basis, are linear combinations
of the original localized states as illustrated in Figure 4.1. When the energy gap is
large compared with the coupling, V/Δ ≪ 1, tan(2θ) = V/Δ becomes small and θ → 0. In this limit, the eigenstates
become more and more like the initial localized states. In the other limit, as Δ → 0
and the initial states become degenerate, tan(2θ) diverges and θ → π/4. In this case,
the true eigenstates of the system are the totally delocalized states:

$$|\pm\rangle = \frac{1}{\sqrt{2}}(|1\rangle \pm |2\rangle) \tag{4.8}$$
Let us briefly examine the impact of these two limits on the final energies of the
system. From above, the exact energy levels are given by

$$\varepsilon_\pm = E_m \mp \sqrt{\Delta^2 + V^2} \tag{4.9}$$
We can use the binomial theorem to expand the exact energies either in terms of the
initial energy gap Δ or in terms of the coupling. When V/Δ ≪ 1, we can expand

$$\varepsilon_\pm = E_m \mp \Delta\sqrt{1 + (V/\Delta)^2} = E_m \mp \Delta\left(1 + \frac{1}{2}\frac{V^2}{\Delta^2} + \cdots\right) \tag{4.10}$$

to obtain a lowest-order correction to the energy levels:

$$\varepsilon_+ \approx E_1 - \frac{1}{2}\frac{V^2}{\Delta} \tag{4.11}$$

$$\varepsilon_- \approx E_2 + \frac{1}{2}\frac{V^2}{\Delta} \tag{4.12}$$

(Note, we have assumed that E_1 < E_2.)
In the opposite limit, where Δ/V ≪ 1, we pull V out from under the square root
and perform the binomial expansion

$$\varepsilon_\pm = E_m \mp V\left(1 + \frac{1}{2}\frac{\Delta^2}{V^2} + \cdots\right) \tag{4.13}$$

In this limit (as Δ → 0) the exact energies are

$$\varepsilon_\pm = E_1 \mp |V| \tag{4.14}$$

We can perform a similar analysis on the wave functions. In the weak coupling
limit, V/Δ ≪ 1; hence, θ ≈ 0. Thus, we can expand the coefficients of the |±⟩ about
θ = 0 to obtain the lowest-order corrections to the states:

$$|+\rangle = \cos\theta\,|1\rangle + \sin\theta\,|2\rangle = \left(1 - \frac{\theta^2}{2} + \cdots\right)|1\rangle + (\theta + \cdots)|2\rangle \tag{4.15}$$

$$\approx |1\rangle + \frac{V}{2\Delta}|2\rangle \tag{4.16}$$

where we have used tan(2θ) ≈ 2θ for small values of θ. Similarly for |−⟩,

$$|-\rangle = \cos\theta\,|2\rangle - \sin\theta\,|1\rangle = \left(1 - \frac{\theta^2}{2} + \cdots\right)|2\rangle - (\theta + \cdots)|1\rangle \tag{4.17}$$

$$\approx |2\rangle - \frac{V}{2\Delta}|1\rangle \tag{4.18}$$
In short, within the weak coupling limit, both the energies and states resemble their
parent uncoupled energies and states.
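As a quick numerical sanity check of the results above, the sketch below (with illustrative parameters, not taken from the text) diagonalizes the two-level Hamiltonian directly and compares the eigenvalues with Eq. (4.5); it also verifies that the rotation (4.6) built from the mixing angle (4.3) removes the off-diagonal coupling.

```python
import numpy as np

# Illustrative parameters for the two-level Hamiltonian (assumed values)
E1, E2, V = -1.0, 1.0, -0.4
H = np.array([[E1, V], [V, E2]])

Em = 0.5 * (E1 + E2)                    # mean energy
Delta = 0.5 * (E1 - E2)                 # half the energy gap
root = np.sqrt(Delta**2 + V**2)
eps = np.array([Em - root, Em + root])  # eq. (4.5): ε_±

# Brute-force diagonalization agrees with the closed form
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(eps))

# The mixing angle (4.3) defines the rotation (4.6) that diagonalizes H
theta = 0.5 * np.arctan2(V, Delta)
T = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
Hd = T @ H @ T.T
assert abs(Hd[0, 1]) < 1e-12            # off-diagonal element rotated away
print(np.diag(Hd))
```

The same check works for any choice of E_1, E_2, and V, since the 2 × 2 problem is solved exactly by the mixing-angle construction.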

4.3 PERTURBATIVE SOLUTIONS


One of the most important techniques we will use throughout this text is the
perturbative expansion, whereby we consider the response or reaction of some
reference system to some sort of applied or additional interaction. In most cases,
it is simply impossible to obtain the exact solution to the Schrödinger equation. In
fact, the vast majority of problems that are of physical interest cannot be solved
exactly, and one is forced to make a series of well-posed approximations. The simplest
approximation is to say that the system we want to solve looks a lot like a much simpler
system that we can solve, plus some additional complexity (which hopefully is quite
small). In other words, we want to be able to write our total Hamiltonian as

$$H = H_o + V \tag{4.19}$$

where H_o represents that part of the problem we can solve exactly and V some extra
part that we cannot. This we take as a correction or perturbation to the exact problem.
Perturbation theory can be formulated in a variety of ways, but we begin with
what is typically termed Rayleigh–Schrödinger perturbation theory. This is the typical
approach and the one used most commonly. Let H_o|φ_n⟩ = W_n|φ_n⟩ and (H_o + λV)|ψ_n⟩ =
E_n|ψ_n⟩ be the Schrödinger equations for the uncoupled and perturbed systems.

In what follows, we take λ as a small parameter and expand the exact energy in
terms of this parameter. Clearly, we write E_n as a function of λ:

$$E_n(\lambda) = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots \tag{4.20}$$

Likewise, we can expand the exact wave function in terms of λ:

$$|\psi_n\rangle = |\psi_n^{(0)}\rangle + \lambda|\psi_n^{(1)}\rangle + \lambda^2|\psi_n^{(2)}\rangle + \cdots \tag{4.21}$$

Since we require that |ψ_n⟩ be a solution of the exact Hamiltonian with energy E_n, then

$$H|\psi_n\rangle = (H_o + \lambda V)\left(|\psi_n^{(0)}\rangle + \lambda|\psi_n^{(1)}\rangle + \lambda^2|\psi_n^{(2)}\rangle + \cdots\right) \tag{4.22}$$

$$= \left(E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots\right)\left(|\psi_n^{(0)}\rangle + \lambda|\psi_n^{(1)}\rangle + \lambda^2|\psi_n^{(2)}\rangle + \cdots\right) \tag{4.23}$$

Now, we collect terms order by order in λ:

• λ⁰:  H_o|ψ_n^{(0)}⟩ = E_n^{(0)}|ψ_n^{(0)}⟩
• λ¹:  H_o|ψ_n^{(1)}⟩ + V|ψ_n^{(0)}⟩ = E_n^{(0)}|ψ_n^{(1)}⟩ + E_n^{(1)}|ψ_n^{(0)}⟩
• λ²:  H_o|ψ_n^{(2)}⟩ + V|ψ_n^{(1)}⟩ = E_n^{(0)}|ψ_n^{(2)}⟩ + E_n^{(1)}|ψ_n^{(1)}⟩ + E_n^{(2)}|ψ_n^{(0)}⟩

and so on.
The λ⁰ problem is just the unperturbed problem we can solve. Taking the λ¹ terms
and multiplying by ⟨ψ_n^{(0)}| we obtain

$$\langle\psi_n^{(0)}|H_o|\psi_n^{(1)}\rangle + \langle\psi_n^{(0)}|V|\psi_n^{(0)}\rangle = E_n^{(0)}\langle\psi_n^{(0)}|\psi_n^{(1)}\rangle + E_n^{(1)}\langle\psi_n^{(0)}|\psi_n^{(0)}\rangle \tag{4.24}$$

In other words, we obtain the first-order correction for the nth eigenstate:

$$E_n^{(1)} = \langle\psi_n^{(0)}|V|\psi_n^{(0)}\rangle \tag{4.25}$$

This is easy to check by performing a similar calculation, except multiplying by
⟨ψ_m^{(0)}| for m ≠ n and noting that ⟨ψ_n^{(0)}|ψ_m^{(0)}⟩ = 0 for orthogonal states:

$$\langle\psi_m^{(0)}|H_o|\psi_n^{(1)}\rangle + \langle\psi_m^{(0)}|V|\psi_n^{(0)}\rangle = E_n^{(0)}\langle\psi_m^{(0)}|\psi_n^{(1)}\rangle \tag{4.26}$$

Rearranging things a bit, one obtains an expression for the overlap between the
unperturbed and perturbed states:

$$\langle\psi_m^{(0)}|\psi_n^{(1)}\rangle = \frac{\langle\psi_m^{(0)}|V|\psi_n^{(0)}\rangle}{E_n^{(0)} - E_m^{(0)}} \tag{4.27}$$
Now, we use the resolution of the identity to project the perturbed state onto the
unperturbed states:

$$|\psi_n^{(1)}\rangle = \sum_m |\psi_m^{(0)}\rangle\langle\psi_m^{(0)}|\psi_n^{(1)}\rangle = \sum_{m\neq n}\frac{\langle\psi_m^{(0)}|V|\psi_n^{(0)}\rangle}{E_n^{(0)} - E_m^{(0)}}\,|\psi_m^{(0)}\rangle \tag{4.28}$$

where we explicitly exclude the n = m term to avoid the singularity. Thus, the first-
order correction to the wave function is

$$|\psi_n\rangle \approx |\psi_n^{(0)}\rangle + \sum_{m\neq n}\frac{\langle\psi_m^{(0)}|V|\psi_n^{(0)}\rangle}{E_n^{(0)} - E_m^{(0)}}\,|\psi_m^{(0)}\rangle \tag{4.29}$$

This also justifies our assumption above.
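The first-order results (4.25) and (4.27) are easy to check numerically. The sketch below (an illustrative 3-level system with arbitrary, assumed numbers) compares the perturbative energy and the perturbative overlaps against exact diagonalization for a small λ:

```python
import numpy as np

# Unperturbed energies W_n and a Hermitian perturbation V (illustrative)
W = np.array([0.0, 1.0, 2.5])
V = np.array([[0.3, 0.1, 0.05],
              [0.1, -0.2, 0.1],
              [0.05, 0.1, 0.25]])
lam = 1e-3

E_exact, U = np.linalg.eigh(np.diag(W) + lam * V)
for n in range(3):
    # First-order energy, eq. (4.25): E_n ≈ W_n + λ V_nn
    assert abs(E_exact[n] - (W[n] + lam * V[n, n])) < 1e-5
    for m in range(3):
        if m != n:
            # Overlap with |ψ_m⁰>, eq. (4.27) (× λ); sign of the exact
            # eigenvector is arbitrary, so compare magnitudes
            overlap = lam * V[m, n] / (W[n] - W[m])
            assert abs(abs(U[m, n]) - abs(overlap)) < 1e-5
print(E_exact)
```

The residual discrepancies scale as λ², consistent with truncating the expansion at first order.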

4.3.1 DIPOLE MOLECULE IN HOMOGENEOUS ELECTRIC FIELD


Here we take the example of ammonia inversion in the presence of an electric field.
The NH₃ molecule can tunnel between two equivalent C₃ᵥ configurations and, as a
result of the coupling between the two configurations, the unperturbed energy levels
E_o are split by an energy A. Defining the unperturbed states as |1⟩ and |2⟩, we can
define the tunneling Hamiltonian as

$$H = \begin{pmatrix} E_o & -A \\ -A & E_o \end{pmatrix} \tag{4.30}$$

or in terms of Pauli matrices:

$$H = E_o\sigma_o - A\sigma_x \tag{4.31}$$

Now we apply an electric field 𝓔. When the dipole moment of the molecule is aligned
parallel with the field, the molecule is in a lower energy configuration, whereas for the
antiparallel case, the system is in a higher energy configuration. This lifts the
degeneracy between the two otherwise equivalent states. We can denote the contribution to
the Hamiltonian from the electric field as

$$H' = \mu_e\mathcal{E}\,\sigma_z \tag{4.32}$$

The total Hamiltonian in the {|1⟩, |2⟩} basis is thus

$$H = \begin{pmatrix} E_o + \mu_e\mathcal{E} & -A \\ -A & E_o - \mu_e\mathcal{E} \end{pmatrix} \tag{4.33}$$

Solving the eigenvalue problem

|H − λI | = 0 (4.34)

we find two eigenvalues:

$$\lambda_\pm = E_o \pm \sqrt{A^2 + \mu_e^2\mathcal{E}^2} \tag{4.35}$$

These are the exact eigenvalues. In Figure 4.2 we show the variation of the energy
levels as a function of the field strength.


FIGURE 4.2 Variation of energy levels (λ± ) as a function of the applied field for a polar
molecule in an electric field.

4.3.1.1 Weak Field Limit

If μ_e𝓔/A ≪ 1, then we can use the binomial expansion

$$\sqrt{1 + x^2} \approx 1 + x^2/2 + \cdots \tag{4.36}$$

to write

$$\sqrt{A^2 + \mu_e^2\mathcal{E}^2} = A\left(1 + \left(\frac{\mu_e\mathcal{E}}{A}\right)^2\right)^{1/2} \approx A\left(1 + \frac{1}{2}\left(\frac{\mu_e\mathcal{E}}{A}\right)^2\right) \tag{4.37}$$

Thus in the weak field limit, the system can still tunnel between configurations, and
the energy levels are given by

$$E_\pm \approx (E_o \mp A) \mp \frac{\mu_e^2\mathcal{E}^2}{2A} \tag{4.38}$$
To understand this a bit further, let us use perturbation theory in which the
tunneling dominates and treat the external field as a perturbing force. The unperturbed
Hamiltonian can be diagonalized by taking symmetric and antisymmetric
combinations of the |1⟩ and |2⟩ basis functions. Here the stationary states are

$$|\pm\rangle = \frac{1}{\sqrt{2}}(|1\rangle \pm |2\rangle) \tag{4.39}$$

with energies E_± = E_o ∓ A, so in the |±⟩ basis, the unperturbed Hamiltonian becomes

$$H = \begin{pmatrix} E_o - A & 0 \\ 0 & E_o + A \end{pmatrix} \tag{4.40}$$

The first-order correction to the ground-state energy is given by

$$E^{(1)} = E^{(0)} + \langle +|H'|+\rangle \tag{4.41}$$

To compute ⟨+|H′|+⟩ we need to transform H′ from the {|1⟩, |2⟩} uncoupled basis
to the new |±⟩ coupled basis. This is accomplished by inserting the identity on either
side of H′ and collecting terms:

$$\langle +|H'|+\rangle = \langle +|(|1\rangle\langle 1| + |2\rangle\langle 2|)\,H'\,(|1\rangle\langle 1| + |2\rangle\langle 2|)|+\rangle \tag{4.42}$$

$$= \frac{1}{2}(\langle 1| + \langle 2|)H'(|1\rangle + |2\rangle) \tag{4.43}$$

$$= 0 \tag{4.44}$$

This also applies to ⟨−|H′|−⟩ = 0. Thus, the first-order correction vanishes. However,
since ⟨+|H′|−⟩ = μ_e𝓔 does not vanish, we can use second-order perturbation theory
to find the energy correction:

$$W_+^{(2)} = \sum_{m\neq i}\frac{H'_{mi}H'_{im}}{E_i - E_m} \tag{4.45}$$

$$= \frac{\langle +|H'|-\rangle\langle -|H'|+\rangle}{E_+^{(0)} - E_-^{(0)}} \tag{4.46}$$

$$= \frac{(\mu_e\mathcal{E})^2}{(E_o - A) - (E_o + A)} \tag{4.47}$$

$$= -\frac{\mu_e^2\mathcal{E}^2}{2A} \tag{4.48}$$
Similarly, W_−^{(2)} = +μ_e²𝓔²/(2A). So we get the same variation as we estimated
above by expanding the exact energy levels when the field was weak.
Now let us examine the wave functions. Remember the first-order correction to
the eigenstates is given by

$$|+^{(1)}\rangle = \frac{\langle -|H'|+\rangle}{E_+ - E_-}|-\rangle \tag{4.49}$$

$$= -\frac{\mu_e\mathcal{E}}{2A}|-\rangle \tag{4.50}$$
Thus,

$$|+\rangle = |+^{(0)}\rangle - \frac{\mu_e\mathcal{E}}{2A}|-^{(0)}\rangle \tag{4.51}$$

$$|-\rangle = |-^{(0)}\rangle + \frac{\mu_e\mathcal{E}}{2A}|+^{(0)}\rangle \tag{4.52}$$
2A
So we see that by turning on the field, we begin to mix the two tunneling states.
However, since we have assumed that μ_e𝓔/A ≪ 1, the final state is not too unlike our
initial tunneling states.

4.3.1.2 Strong Field Limit


 2
In the strong field limit, we expand the square-root term such that A
μe E
1

 2 1/2
A
A2 + μ2e E 2 = Eμe +1
μe E
  2 
1 A
= Eμe 1+ ...
2 μe E

1 A2
≈ Eμe + (4.53)
2 μe E
For very strong fields, the first term dominates and the energy splitting becomes linear
in the field strength. In this limit, the tunneling has been effectively suppressed.
Let us analyze this limit using perturbation theory. Here we will work in the {|1⟩, |2⟩}
basis and treat the tunneling as a perturbation. Since the electric field part of the
Hamiltonian is diagonal in this basis, our unperturbed strong-field Hamiltonian is
simply

$$H = \begin{pmatrix} E_o + \mu_e\mathcal{E} & 0 \\ 0 & E_o - \mu_e\mathcal{E} \end{pmatrix} \tag{4.54}$$

and the perturbation is the tunneling component. As stated previously, the first-order
corrections to the energy vanish and we are forced to resort to second-order
perturbation theory to get the lowest-order energy correction. The result is

$$W^{(2)} = \pm\frac{A^2}{2\mu_e\mathcal{E}} \tag{4.55}$$

which is exactly what we obtained by expanding the exact eigenenergies above.
Likewise, the lowest-order corrections to the state vectors are

$$|1\rangle = |1^{(0)}\rangle - \frac{A}{2\mu_e\mathcal{E}}|2^{(0)}\rangle \tag{4.56}$$

$$|2\rangle = |2^{(0)}\rangle + \frac{A}{2\mu_e\mathcal{E}}|1^{(0)}\rangle \tag{4.57}$$
2μE
So, for large 𝓔 the second-order correction to the energy vanishes, the correction to
the wave function vanishes, and we are left with the unperturbed (that is, localized)
states. We also find that the perturbative results exactly agree with the series expansion
results we obtained above. Thus, perturbative approaches work in the limit that the
coupling remains small compared with the energy gap of the unperturbed system.
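The agreement between the exact eigenvalues (4.35) and the two limiting expansions is easy to confirm numerically; a short sketch, with illustrative values of E_o, A, and μ_e (ħ = 1):

```python
import numpy as np

# Illustrative (assumed) parameters for the ammonia two-level model
Eo, A, mu = 0.0, 1.0, 0.5

def lam_plus(E):
    """Upper exact eigenvalue, eq. (4.35)."""
    return Eo + np.sqrt(A**2 + (mu * E)**2)

# Weak field, eqs. (4.37)-(4.38): λ_+ ≈ Eo + A + (μ𝓔)²/(2A)
E = 1e-2
assert abs(lam_plus(E) - (Eo + A + (mu * E)**2 / (2 * A))) < 1e-8

# Strong field, eq. (4.53): λ_+ ≈ Eo + μ𝓔 + A²/(2μ𝓔)
E = 1e3
assert abs(lam_plus(E) - (Eo + mu * E + A**2 / (2 * mu * E))) < 1e-6
print("weak- and strong-field expansions agree with the exact levels")
```

In each regime the neglected term of the binomial series is fourth order in the small ratio, which is why the tolerances above can be taken so tight.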

4.4 DYSON EXPANSION OF THE SCHRÖDINGER EQUATION


The Rayleigh–Schrödinger approach is useful for discrete spectra. However, it is not
very useful for scattering or for systems with continuous spectra. On the other hand, the
Dyson expansion of the wave function can be applied to both cases. Its development

is similar to the Rayleigh–Schrödinger case. We begin by writing the Schrödinger
equation as usual:

$$(H_o + V)|\psi\rangle = E|\psi\rangle \tag{4.58}$$

where we define |φ⟩ and W to be the eigenvectors and eigenvalues of part of the full
problem. We shall call this the “uncoupled” problem and assume it is something we
can easily solve:

$$H_o|\phi\rangle = W|\phi\rangle \tag{4.59}$$

We want to write the solution of the fully coupled problem in terms of the solution
of the uncoupled problem. First we note that

$$(E - H_o)|\psi\rangle = V|\psi\rangle \tag{4.60}$$

Using the “uncoupled problem” as a “homogeneous” solution and the coupling as
an inhomogeneous term, we can solve the Schrödinger equation and obtain |ψ⟩
formally as

$$|\psi\rangle = |\phi\rangle + \frac{1}{E - H_o}V|\psi\rangle \tag{4.61}$$

This may seem a bit circular. But we can iterate the solution:

$$|\psi\rangle = |\phi\rangle + \frac{1}{E - H_o}V|\phi\rangle + \frac{1}{E - H_o}V\frac{1}{E - H_o}V|\psi\rangle \tag{4.62}$$
Taking this out to all orders, one obtains

$$|\psi\rangle = |\phi\rangle + \sum_{n=1}^{\infty}\left(\frac{1}{E - H_o}V\right)^n|\phi\rangle \tag{4.63}$$

Assuming that the series converges rapidly (true in the weak coupling case, V ≪ H_o),
we can truncate the series at various orders and write

$$|\psi^{(0)}\rangle = |\phi\rangle \tag{4.64}$$

$$|\psi^{(1)}\rangle = |\phi\rangle + \frac{1}{E - H_o}V|\phi\rangle \tag{4.65}$$

$$|\psi^{(2)}\rangle = |\psi^{(1)}\rangle + \left(\frac{1}{E - H_o}V\right)^2|\phi\rangle \tag{4.66}$$

and so on. Let us look at |ψ^{(1)}⟩ for a moment. We can insert the identity in the form
Σ_m |φ_m⟩⟨φ_m|:

$$|\psi_n^{(1)}\rangle = |\phi_n\rangle + \sum_m \frac{1}{E - H_o}|\phi_m\rangle\langle\phi_m|V|\phi_n\rangle \tag{4.67}$$

that is,

$$|\psi_n^{(1)}\rangle = |\phi_n\rangle + \sum_{m\neq n}\frac{\langle\phi_m|V|\phi_n\rangle}{W_n - W_m}|\phi_m\rangle \tag{4.68}$$

Likewise,

$$|\psi_n^{(2)}\rangle = |\psi_n^{(1)}\rangle + \sum_{l,m\neq n}\frac{V_{lm}V_{mn}}{(W_n - W_l)(W_n - W_m)}|\phi_l\rangle \tag{4.69}$$
where

$$V_{lm} = \langle\phi_l|V|\phi_m\rangle \tag{4.70}$$

is the matrix element of the coupling in the uncoupled basis. These last two expressions
are the first- and second-order corrections to the wave function.
Note that we can actually sum the perturbation series exactly by noting that it
has the form of a geometric progression, which for |x| < 1 converges uniformly to

$$\frac{1}{1-x} = 1 + x + x^2 + \cdots = \sum_{n=0}^{\infty}x^n \tag{4.71}$$

Thus, we can write

$$|\psi\rangle = \sum_{n=0}^{\infty}\left(\frac{1}{E - H_o}V\right)^n|\phi\rangle \tag{4.72}$$

$$= \sum_{n=0}^{\infty}(G_oV)^n|\phi\rangle \tag{4.73}$$

$$= \frac{1}{1 - G_oV}|\phi\rangle \tag{4.74}$$

where G_o = (E − H_o)⁻¹ (this is the “time-independent” form of the propagator for
the uncoupled system). This analysis is particularly powerful in deriving
the propagator for the fully coupled problem.
We now calculate the first- and second-order corrections to the energy of the
system. To do so, we make use of the wave functions we just derived and write

$$E_n^{(1)} = \langle\psi_n^{(0)}|H|\psi_n^{(0)}\rangle = W_n + \langle\phi_n|V|\phi_n\rangle = W_n + V_{nn} \tag{4.75}$$

So the lowest-order correction to the energy is simply the matrix element of the
perturbation in the uncoupled or unperturbed basis. That was easy. What about the next-
order corrections? Using the same procedure as previously (assuming the states are
normalized),

$$E_n^{(2)} = \langle\psi_n^{(1)}|H|\psi_n^{(1)}\rangle = \langle\phi_n|H|\phi_n\rangle + \sum_{m\neq n}\langle\phi_n|H|\phi_m\rangle\frac{\langle\phi_m|V|\phi_n\rangle}{W_n - W_m} + O[V^3]$$
$$= W_n + V_{nn} + \sum_{m\neq n}\frac{|V_{nm}|^2}{W_n - W_m} \tag{4.76}$$

Notice that I am avoiding the case where m = n as that would cause the denomi-
nator to be zero, leading to an infinity. This must be avoided. The “degenerate case”
must be handled via explicit matrix diagonalization. Closed forms can be obtained
for the doubly degenerate case easily. Also note that the successive approximations
to the energy require one less level of approximation to the wave function. Thus,
second-order energy corrections are obtained from first-order wave functions.
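The second-order expression (4.76) can be checked directly against exact diagonalization. A sketch, for an arbitrary weakly coupled 4-level system (the numbers are illustrative, not from the text):

```python
import numpy as np

# Unperturbed levels and a small random Hermitian coupling (assumed)
W = np.array([0.0, 1.0, 2.2, 3.1])
rng = np.random.default_rng(1)
V = rng.normal(scale=0.01, size=(4, 4))
V = 0.5 * (V + V.T)                          # make the coupling Hermitian

E_exact = np.linalg.eigvalsh(np.diag(W) + V)
for n in range(4):
    # eq. (4.76): E_n ≈ W_n + V_nn + Σ_{m≠n} |V_nm|² / (W_n − W_m)
    E2 = W[n] + V[n, n] + sum(V[n, m]**2 / (W[n] - W[m])
                              for m in range(4) if m != n)
    assert abs(E_exact[n] - E2) < 1e-4       # remaining error is O(V³)
print(E_exact)
```

The nondegenerate spacing of W is what keeps the denominators well behaved; shrinking a gap toward zero makes the correction blow up, illustrating why the degenerate case requires explicit diagonalization.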

4.4.1 VAN DER WAALS FORCES: ORIGIN OF LONG-RANGE ATTRACTIONS


One of the underlying principles in chemistry is that molecules at long range are
attracted toward each other. This is clearly true for polar and oppositely charged
species. It is also true for nonpolar and neutral species, such as methane, noble gases,
and so on. These forces are due to polarization forces or van der Waals forces, which
are attractive and decrease as 1/R 7 ; that is, the attractive part of the potential goes
as −1/R 6 . In this section we will use perturbation theory to understand the origins
of this force, restricting our attention to the interaction between two hydrogen atoms
separated by some distance R.
Let us take the two atoms to be motionless and separated by distance R, with n̂
being the unit vector pointing from atom A to atom B. Now let r⃗_a be the vector connecting
nucleus A to its electron and likewise for r⃗_b. Thus each atom has an instantaneous
electric dipole moment

$$\vec{\mu}_a = q\vec{r}_a \tag{4.77}$$

$$\vec{\mu}_b = q\vec{r}_b \tag{4.78}$$

We will assume that R ≫ r_a, r_b so that the electronic orbitals on each atom do not
come into contact.
Atom A creates an electrostatic potential U for atom B in which the charges in
B can interact. This creates an interaction energy W . Since both atoms are neutral,
the most important source for the interactions will come from the dipole–dipole
interactions. Thus, the dipole of A interacts with an electric field E = −∇U generated
by the dipole field about B and vice versa. To calculate the dipole–dipole interaction,
we start with the expression for the electrostatic potential created by μ⃗_a at B,

$$U(R) = \frac{1}{4\pi\varepsilon_o}\frac{\vec{\mu}_a\cdot\vec{R}}{R^3} \tag{4.79}$$

Thus,

$$\vec{E} = -\nabla U = -\frac{q}{4\pi\varepsilon_o}\frac{1}{R^3}\left[\vec{r}_a - 3(\vec{r}_a\cdot\hat{n})\hat{n}\right] \tag{4.80}$$

Thus, the dipole–dipole interaction energy is

$$W = -\vec{\mu}_b\cdot\vec{E} = \frac{e^2}{R^3}\left[\vec{r}_a\cdot\vec{r}_b - 3(\vec{r}_a\cdot\hat{n})(\vec{r}_b\cdot\hat{n})\right] \tag{4.81}$$

where e² = q²/4πε_o. Now, we set the z axis to be along n̂, so we can write

$$W = \frac{e^2}{R^3}(x_ax_b + y_ay_b - 2z_az_b) \tag{4.82}$$
This will be our perturbing potential, which we add to the total Hamiltonian:

$$H = H_a + H_b + W \tag{4.83}$$

where H_a and H_b are the unperturbed Hamiltonians for the atoms. Let us take, for
example, the interaction between two hydrogen atoms, each in the 1s (ground) state. The
unperturbed system has energy

$$(H_a + H_b)|1s_a; 1s_b\rangle = (E_a + E_b)|1s_a; 1s_b\rangle = -2E_I|1s_a; 1s_b\rangle \tag{4.84}$$
where E I is the ionization energy of the hydrogen 1s state (E I = 13.6 eV). The
first-order correction vanishes because it involves integrals over odd functions. This we can
anticipate since the 1s orbitals are spatially isotropic, so the time-averaged value of
the dipole moments is zero. So, we have to look toward second-order corrections.
The second-order energy correction is

$$E^{(2)} = \sum_{nlm}\sum_{n'l'm'}\frac{|\langle nlm; n'l'm'|W|1s_a; 1s_b\rangle|^2}{-2E_I - E_n - E_{n'}} \tag{4.85}$$

where we restrict the summation to exclude the |1s_a; 1s_b⟩ state. Since W ∝ 1/R³ and
the denominator is negative, we can write

$$E^{(2)} = -\frac{C}{R^6} \tag{4.86}$$

which explains the origin of the 1/R⁶ attraction.
Now we evaluate the proportionality constant C. Written explicitly,

$$C = e^4\sum_{nlm}\sum_{n'l'm'}\frac{|\langle nlm; n'l'm'|(x_ax_b + y_ay_b - 2z_az_b)|1s_a; 1s_b\rangle|^2}{2E_I + E_n + E_{n'}} \tag{4.87}$$

Since n and n′ ≥ 2 and |E_n| = E_I/n² < E_I, we can replace E_n and E_{n'} with 0
without appreciable error. Now, we can use the resolution of the identity

$$1 = \sum_{nlm}\sum_{n'l'm'}|nlm; n'l'm'\rangle\langle nlm; n'l'm'| \tag{4.88}$$

to remove the summation, and we get


$$C = \frac{e^4}{2E_I}\langle 1s_a; 1s_b|(x_ax_b + y_ay_b - 2z_az_b)^2|1s_a; 1s_b\rangle \tag{4.89}$$

where E_I is the ionization potential of the 1s state (in atomic units, E_I = 1/2). Surprisingly, this
is simple to evaluate because we can use symmetry to our advantage. Since the 1s
orbitals are spherically symmetric, any cross-terms of the sort

$$\langle 1s_a|x_ay_a|1s_a\rangle = 0 \tag{4.90}$$

vanish. This leaves only terms of the sort

$$\langle 1s|x^2|1s\rangle \tag{4.91}$$

all of which are equal to 1/3 of the mean value of r_a² = x_a² + y_a² + z_a². Thus,

$$C = 6\,\frac{e^4}{2E_I}\left\langle 1s\left|\frac{r^2}{3}\right|1s\right\rangle^2 = 6e^2a_o^5 \tag{4.92}$$

where a_o is the Bohr radius. Thus,

$$E^{(2)} = -6e^2\frac{a_o^5}{R^6} \tag{4.93}$$
What does all this mean? We stated at the beginning that the average dipole
moment of a H 1s atom is zero. That does not mean that every single measurement
of μa will yield zero. What it means is that the probability of finding the atom with
a dipole moment μ⃗_a is the same as that for finding the dipole vector pointed in the opposite
direction. Adding the two together produces a net zero dipole moment. So it is the
fluctuation about the mean that gives the atom an instantaneous dipole field. Moreover,
the fluctuations in A are independent of the fluctuations in B, so first-order effects
must be zero since the average interaction is zero.
Just because the fluctuations are independent does not mean they are not corre-
lated. Consider the field generated by A as felt by B. This field is due to the fluctuating
dipole at A. This field induces a dipole at B. This dipole field is in turn felt by A. As
a result, the fluctuations become correlated and explain why this is a second-order
effect. In a sense, A interacts with its own dipole field through “reflection” off B.

4.4.2 ATTRACTION BETWEEN AN ATOM AND A CONDUCTING SURFACE


The interaction between an atom or molecule and a surface is a fundamental physical
process in surface chemistry. In this example, we will use perturbation theory to
understand the long-range attraction between an atom, again taking a H 1s atom as
our species for simplicity, and a conducting surface. We will take the z axis to be
normal to the surface and assume that the atom is high enough off the surface that
its altitude is much larger than its atomic dimensions. Furthermore, we will assume
that the surface is a metal conductor and we will ignore any atomic level of detail
on the surface. Consequently, the atom can only interact with its dipole image on the
opposite side of the surface.
We can use the same dipole–dipole interaction as previously with the following
substitutions:

e² → −e²  (4.94)
R → 2d  (4.95)
x_b → x_a′ = x_a  (4.96)
y_b → y_a′ = y_a  (4.97)
z_b → z_a′ = −z_a  (4.98)

where the sign change reflects the sign difference in the image charges. So we get

$$W = -\frac{e^2}{8d^3}\left(x_a^2 + y_a^2 + 2z_a^2\right) \tag{4.99}$$

as the interaction between a dipole and its image. Taking the atom to be in the 1s
ground state, the first-order term is nonzero:

$$E^{(1)} = \langle 1s|W|1s\rangle \tag{4.100}$$

Again, using spherical symmetry to our advantage,

$$E^{(1)} = -\frac{e^2}{8d^3}\cdot\frac{4}{3}\langle 1s|r^2|1s\rangle = -\frac{e^2a_o^2}{2d^3} \tag{4.101}$$
Thus an atom is attracted to the wall with an interaction energy that varies as 1/d 3 .
This is a first-order effect since there is perfect correlation between the two dipoles.

4.5 TIME-DEPENDENT SCHRÖDINGER EQUATION


Our discussion of time-dependent quantum mechanics begins with a brief overview
of the time-dependent Schrödinger equation (TDSE) that governs the time evolution
of the system:

$$i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle \tag{4.102}$$
where H is the Hamiltonian operator for the system. If we assume that H is indepen-
dent of time and ψ(r, t) can be separated into time-dependent and time-independent
(spatial or otherwise) components

ψ(r, t) = φ(r ) f (t) (4.103)

then
$$i\hbar\frac{1}{f(t)}\frac{\partial f(t)}{\partial t} = \frac{1}{\phi(r)}H\phi(r) \tag{4.104}$$
Since the left-hand side is a function of only t and the right-hand side is a function of
only r , both sides must be equal to the same constant, E. Hence,

H φ(r ) = Eφ(r ) (4.105)

which is the time-independent Schrödinger equation. Turning to the other equation,


$$i\hbar\frac{1}{f(t)}\frac{\partial f(t)}{\partial t} = E \tag{4.106}$$
Solving this yields

f (t) = A exp(−iωt) (4.107)



where ω = E/h̄ is the (angular) frequency. Functions φn (r ) satisfying Equation 4.105


are eigenfunctions of H with eigenvalue E n . Consequently, the complete wave func-
tion can be written as

ψ(r, t) = e−iωn t φn (r ) (4.108)

This is stationary because the probability density P(r) = |ψ(r, t)|² is independent of
time. More generally, since the complete set of eigenfunctions forms a suitable space
for representing any arbitrary function Φ,

$$\Phi(r) = \sum_m \langle\phi_m|\Phi\rangle\,\phi_m(r) \tag{4.109}$$

it is not stationary, since its time evolution,

$$\Phi(r, t) = \sum_m \langle\phi_m|\Phi\rangle\,e^{-i\omega_m t}\phi_m(r) \tag{4.110}$$

leads to a probability distribution that evolves in time.


The formal solution of the time-dependent Schrödinger equation is given by

ψ(t) = e−i H t/h̄ ψ(0) = U (t, 0)ψ(0) (4.111)

where ψ(0) is the state at time t = 0 and ψ(t) is the state evolved forward in time.
The operator U (t, 0) is the time-evolution operator. It is formally defined via the
expansion
$$U(t, 0) = 1 - \frac{it}{\hbar}H + \frac{1}{2!}\left(\frac{it}{\hbar}\right)^2H^2 - \cdots \tag{4.112}$$
The time-evolution operator has a number of useful properties:

1. It is unitary: I = U†U, and U(t′, t) = U*(t, t′) = U†(t, t′).
2. U obeys the semigroup property U(t, t_o) = U(t, t′)U(t′, t_o) for t ≥ t′ ≥ t_o.
3. U itself is a solution of the time-dependent Schrödinger equation:

$$i\hbar\frac{\partial}{\partial t}U(t, t') = HU(t, t') \tag{4.113}$$

4. U(t, t′) = U(t − t′) for a time-independent Hamiltonian.
5. U(0) = 1.
5. U (0) = 1
Notice that U is a polynomial function of the operator H. Hence if we know the
eigenvectors and eigenvalues of H, we know that f(H)φ_n = f(E_n)φ_n. Thus, we can
write U in an eigenbasis representation as

$$U = \sum_n e^{-i\omega_n t}|\phi_n\rangle\langle\phi_n| \tag{4.114}$$

where ω_n = E_n/ħ. This form is especially convenient when we have at hand the eigenvalues and
eigenvectors of the system.
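The spectral form (4.114) and the properties listed above are easy to verify numerically. A sketch, for an arbitrary Hermitian 2 × 2 Hamiltonian with ħ = 1 (all numbers illustrative):

```python
import numpy as np

# Arbitrary Hermitian Hamiltonian (assumed values), ħ = 1
H = np.array([[1.0, 0.3], [0.3, -0.5]])
E, Phi = np.linalg.eigh(H)                   # columns of Phi are |φ_n>

def U(t):
    """Eq. (4.114): U(t) = Σ_n e^{-i E_n t} |φ_n><φ_n|."""
    return Phi @ np.diag(np.exp(-1j * E * t)) @ Phi.conj().T

t = 0.7
assert np.allclose(U(t).conj().T @ U(t), np.eye(2))   # property 1: unitary
assert np.allclose(U(2 * t), U(t) @ U(t))             # property 2: semigroup
# Property 3: iħ dU/dt = H U (centered finite-difference check)
dt = 1e-6
dU = (U(t + dt) - U(t - dt)) / (2 * dt)
assert np.allclose(1j * dU, H @ U(t), atol=1e-6)
print("U(t) satisfies the listed properties")
```

Building U from the eigendecomposition like this is exactly the "convenient form" referred to in the text: once H is diagonalized, propagation to any time t is a matter of exponentiating scalars.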

4.6 TIME EVOLUTION OF A TWO-LEVEL SYSTEM


As in the time-independent case, a great deal can be learned by examining what
happens to a two-state system subject to some sort of coupling. Again, we write our
Hamiltonian in the {|1⟩, |2⟩} basis

$$H = \begin{pmatrix} E_1 & V \\ V & E_2 \end{pmatrix} \tag{4.115}$$

and this time we consider the solutions of the time-dependent Schrödinger equation:

$$i\hbar\frac{\partial}{\partial t}|\psi\rangle = H|\psi\rangle \tag{4.116}$$
We can write |ψ⟩ in terms of either the |±⟩ eigenstates of H or the localized
basis states:

$$|\psi\rangle = c_1(t)|1\rangle + c_2(t)|2\rangle = c_+(t)|+\rangle + c_-(t)|-\rangle \tag{4.117}$$

where the c’s are time-dependent coefficients. Either representation will work and we
can transform between the two easily enough.
From our discussion above, the time evolution of |ψ⟩ is generated by

$$|\psi(t)\rangle = U(t, 0)|\psi(0)\rangle \tag{4.118}$$

We can write the time-evolution operator in terms of the eigenstates:

$$U(t, 0) = \begin{pmatrix} e^{-i\omega_+t} & 0 \\ 0 & e^{-i\omega_-t} \end{pmatrix} \tag{4.119}$$

where ω_± = ε_±/ħ. If our initial state, however, is one of the states of the uncoupled
system, say |ψ(0)⟩ = |1⟩, then we need to write this in terms of the |±⟩. Using the
rotation matrix above,

$$|1\rangle = \cos\theta|+\rangle - \sin\theta|-\rangle \tag{4.120}$$

Thus, our time-evolved state is

$$|\psi(t)\rangle = U(t, 0)|1\rangle = e^{-i\omega_+t}\cos\theta|+\rangle - e^{-i\omega_-t}\sin\theta|-\rangle \tag{4.121}$$

We now ask: what is the probability that at time t > 0 the system will be found in the
other state? It is straightforward to show that

$$P_{1\to 2}(t) = \frac{V^2}{V^2 + \Delta^2}\sin^2(\omega_R t) \tag{4.122}$$

where ω_R = √(Δ² + V²)/ħ is the Rabi frequency, which gives the frequency at which
the system oscillates between the two states.


FIGURE 4.3 Rabi oscillation between two degenerate states starting off in |1⟩.

In Figure 4.3 we show P_{1→2}(t) and P_{1→1}(t) for the case of a degenerate system,
Δ = 0. Here ω_R = V/ħ and the system oscillates between the two localized states
with a period τ = π/ω_R. In other words, if we prepare the system in state |1⟩, then at
every t = nτ we have a 100% likelihood of finding the system in state 1, while at every
t = (n + 1/2)τ we have a 100% likelihood of finding the system in state 2. The amount of
amplitude transferred depends upon both the coupling V and the energy gap Δ. For
the degenerate case, Δ = 0, and 100% of the initial population in 1 is transferred to
2 and back every π/ω_R. For nondegenerate cases, a maximum of V²/(V² + Δ²) is
transferred every Rabi period. Ultimately, in the weak coupling limit, the population
remains localized in the initial state.
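The Rabi formula (4.122) can be verified against brute-force propagation. The sketch below (illustrative parameters, ħ = 1) builds U(t) from the eigendecomposition, propagates |1⟩, and compares with the closed form at many times:

```python
import numpy as np

# Illustrative (assumed) two-level parameters, ħ = 1
E1, E2, V = 0.5, -0.5, 0.3
Delta = 0.5 * (E1 - E2)
H = np.array([[E1, V], [V, E2]])
E, Phi = np.linalg.eigh(H)
wR = np.sqrt(Delta**2 + V**2)                # Rabi frequency

for t in np.linspace(0.0, 20.0, 201):
    U = Phi @ np.diag(np.exp(-1j * E * t)) @ Phi.conj().T
    psi = U @ np.array([1.0, 0.0])           # start in |1>
    P12 = abs(psi[1])**2                     # population on |2>
    P12_rabi = V**2 / (V**2 + Delta**2) * np.sin(wR * t)**2
    assert abs(P12 - P12_rabi) < 1e-10       # eq. (4.122)
print("Rabi formula verified")
```

Setting E1 = E2 in this sketch recovers the degenerate case of Figure 4.3, with complete population transfer at the half-period.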

4.7 TIME-DEPENDENT PERTURBATIONS


Time-dependent perturbation theory is a powerful tool for deriving approximate so-
lutions and theories whenever the complete solution is impossible to obtain and the
coupling to the unperturbed system is weak. One of the advan-
tages of the perturbative approach is that it allows us to discuss quantum transitions as
a series of state-to-state interactions followed by free propagation of the unperturbed
system by writing

H = Ho + λV (t) (4.123)

where Ho represents the Hamiltonian for the uncoupled system and λV is some
coupling. We begin by writing the state in the basis of unperturbed states as

$$|\psi(t)\rangle = \sum_n c_n(t)|\phi_n\rangle \tag{4.124}$$

where the expansion coefficients c_n(t) = ⟨φ_n|ψ(t)⟩ are simply the projections of the
evolving state onto the unperturbed basis. In this representation, the time-dependent

equation for the coefficients is given by

$$i\hbar\dot{c}_n(t) = \varepsilon_nc_n(t) + \lambda\sum_m V_{nm}(t)c_m(t) \tag{4.125}$$

where V_{nm}(t) = ⟨φ_n|V(t)|φ_m⟩ is the matrix element of the coupling in the φ_n basis.
As such, this is a set of coupled linear differential equations to first order in time,
and, in principle at least, we can determine the coefficients for the time-evolved state.
The coupling between the equations comes from the fact that the operator V (t) is
nondiagonal in this basis representation. When λV = 0, our system of equations
becomes totally decoupled and the solutions are simply

$$c_n(t) = b_ne^{-i\varepsilon_nt/\hbar} \tag{4.126}$$

where b_n depends entirely upon the choice of initial condition. We can also make a
simple change of variables by writing the general solution for the coefficients as

$$c_n(t) = b_n(t)e^{-i\varepsilon_nt/\hbar} \tag{4.127}$$

and determine the equations that govern the evolution of the new coefficients. The
advantage here is that this will eliminate the rapidly evolving phase terms e^{−iε_nt/ħ},
and we expect the b_n(t) to be slowly varying functions of time. Upon substitution into
the TDSE and introducing the Bohr frequency ω_{nm} = (ε_n − ε_m)/ħ,

$$i\hbar\dot{b}_n(t) = \lambda\sum_m V_{nm}(t)e^{i\omega_{nm}t}b_m(t) \tag{4.128}$$

Again, this is a system of linear equations first order in time, but the bn (t) coefficients
are now slowly varying in time.
Next, let us expand b_n(t) = b_n^{(0)}(t) + λb_n^{(1)}(t) + λ²b_n^{(2)}(t) + ··· and substitute
this into Equation 4.128. Equating terms of equal order in λ on each side, one finds for λ⁰

$$i\hbar\dot{b}_n^{(0)}(t) = 0 \tag{4.129}$$

However, for α ≥ 0,

$$i\hbar\dot{b}_n^{(\alpha+1)}(t) = \sum_m e^{i\omega_{nm}t}V_{nm}(t)b_m^{(\alpha)}(t) \tag{4.130}$$

one obtains a recursive solution whereby lower-order solutions serve as input to the
next higher-order term.
Suppose at time t = 0 our initial state was prepared in the state |φ_i⟩. Hence,
b_i(t = 0) = 1 and all other b_{n≠i}(0) = 0 (that is, b_n(t = 0) = δ_{ni}). Prior to t = 0,
we assume that the interaction is turned off, V(t < 0) = 0, and at time t = 0 it is
instantly switched on (but remains finite). To first order in the perturbation series,

$$i\hbar\dot{b}_n^{(1)}(t) = e^{i\omega_{ni}t}V_{ni}(t)b_i^{(0)}(t) \tag{4.131}$$
This can be easily integrated:

$$b_n^{(1)}(t) = \frac{1}{i\hbar}\int_0^t dt'\,V_{ni}(t')e^{i\omega_{ni}t'} \tag{4.132}$$

to give the first-order probability amplitude for starting in state i and finding the
system in state n some time t later. Notice that this is the partial Fourier transform of
the coupling operator. (Partial in the sense that we integrate only out to intermediate
times.) The transition probability is then found by Pni (t) = |bn (t)|2 .
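Equation (4.132) can be tested against the exact two-level result. The sketch below (illustrative parameters, ħ = 1) evaluates the first-order amplitude for a weak, constant coupling switched on at t = 0 and compares the transition probability with the exact Rabi expression (4.122):

```python
import numpy as np

# Weak constant coupling switched on at t = 0 (assumed values), ħ = 1
E1, E2, V = 0.0, 1.0, 0.01
w21 = E2 - E1                                # Bohr frequency ω_21

# First order, eq. (4.132): b_2(t) = (1/i)∫_0^t V e^{i ω_21 t'} dt'
t = 2.0
b2 = (V / 1j) * (np.exp(1j * w21 * t) - 1.0) / (1j * w21)
P_pt = abs(b2)**2

# Exact two-level answer, eq. (4.122), with Δ = (E1 - E2)/2
Delta = 0.5 * (E1 - E2)
wR = np.sqrt(Delta**2 + V**2)
P_exact = V**2 / (V**2 + Delta**2) * np.sin(wR * t)**2

assert abs(P_pt - P_exact) / P_exact < 1e-2  # agree to O(V/Δ)
print(P_pt, P_exact)
```

Increasing V toward the gap makes the first-order estimate degrade, which is the same weak-coupling criterion found for the time-independent expansions earlier in the chapter.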

4.7.1 HARMONIC PERTURBATION


Take, for example, a harmonic perturbation V_{ni}(t) = V_{ni}\sin(ωt). Integrating
Equation 4.132 and writing the corresponding transition probability yields

$$P_{ni}(t) = \frac{|V_{ni}|^2}{4\hbar^2}\left|\frac{1 - e^{i(\omega_{ni}-\omega)t}}{\omega_{ni}-\omega} + \frac{1 - e^{i(\omega_{ni}+\omega)t}}{\omega_{ni}+\omega}\right|^2 \tag{4.133}$$

In the limit of a slowly varying perturbation, ω can be set to zero and we find

$$P_{ni}(t) = \frac{|V_{ni}|^2}{\hbar^2}\frac{4\sin^2(\omega_{ni}t/2)}{\omega_{ni}^2} \tag{4.134}$$

that is,

$$P_{ni}(t) = \frac{4|V_{ni}|^2}{\hbar^2}f(t, \omega_{ni}) \tag{4.135}$$

where f(t, ω_{ni}) = sin²(ω_{ni}t/2)/ω_{ni}² is shown in Figure 4.4 as a function of the transition frequency ω_{ni} for
fixed t. Notice that f(t, ω_{ni}) has a sharp peak at ω_{ni} = 0, with a height of t²/4, while
its width (at half maximum) is given by 2π/t. A straightforward application
of the residue theorem indicates that

$$\int_{-\infty}^{\infty}d\omega\,f(t, \omega) = \frac{\pi t}{2} \tag{4.136}$$


FIGURE 4.4 f(t, ω_{ni}) as a function of transition frequency for fixed t.



so that the total area under the curve is proportional to t. Also, one has

$$\lim_{t\to\infty} f(t, \omega_{ni}) \propto \frac{\pi t}{2}\,\delta(\omega_{ni}) \tag{4.137}$$

This tells us that transitions occur mainly between states whose final energies E_n
do not differ from the initial energy E_i by more than

$$\delta E = 2\pi\hbar/t \tag{4.138}$$

hence, energy is approximately conserved with the spread in energy given by 2πh̄/t.
We can relate this result with the so-called time-energy uncertainty relationship
δ Eδt ≥ h̄. In a sense, the perturbation is akin to making a measurement of the energy
of the system by inducing a transition from the initial state i to the final state n. Since
the time associated with making this observation is t, the associated uncertainty with
the observation should be approximately h̄/t, which is in good agreement with the
estimate above.
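The area result (4.136), which underlies the linear-in-t growth of the integrated transition probability below, can be checked by direct quadrature; a sketch (grid parameters are arbitrary):

```python
import numpy as np

# f(t, ω) = sin²(ωt/2)/ω², checked against ∫ f dω = πt/2, eq. (4.136)
t = 5.0
w = np.linspace(-1000.0, 1000.0, 2000001)    # step 1e-3
f = np.sin(w * t / 2)**2 / np.where(w == 0.0, 1.0, w)**2
f[w == 0.0] = (t / 2)**2                     # limiting value at ω = 0
# trapezoid rule; the truncated tails contribute only ~1/w_max
area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))
assert abs(area - np.pi * t / 2) < 1e-2
print(area, np.pi * t / 2)
```

Doubling t doubles the computed area, which is exactly the proportionality-to-t behavior that turns the integrated probability into a constant rate in Eq. (4.141).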
For processes that occur to a continuum of final states whose energies lie within a
given interval (E f − ε, E f + ε) about some central energy E f , it should be apparent
from this discussion that we need to consider transitions from the initial state to
groups of states. Let us denote by ρ f (E f ) the density of levels so that ρ f (E f )d E f is
the number of states with energy levels in the interval (E f , E f + d E f ). Integrating
our result for a single state over a continuum of final states yields
P_{fi}(t) = \int_{E_f-\varepsilon}^{E_f+\varepsilon} dE_f\, P_{fi}(t, E_f)\,\rho_f(E_f)    (4.139)

If we assume that both |V f i | and ρ f (E f ) are slowly varying over a narrow integration
range,
P_{fi}(t) = \frac{4}{\hbar}|V_{fi}|^2 \rho_f(E_f) \int_{\omega_{fi}-\varepsilon'}^{\omega_{fi}+\varepsilon'} f(t, \omega_{fi})\, d\omega_{fi}    (4.140)

In this last expression, we have changed the integration variable to ω_fi and defined ε' = ε/ħ. Clearly, the overwhelming contribution to the integral comes from those
transitions that do in fact conserve energy (within δE ≈ 2πħ/t). Moreover, if we set
the range of integration so that ε' ≫ 2π/t (that is, a long enough time), then the entire
peak will fall within the integration range and all transitions will conserve energy.
Since ε' ≫ 2π/t, we can take the limits of integration out to infinity and we
arrive at
arrive at

P_{fi}(t) = \frac{2\pi}{\hbar}|V_{fi}|^2 \rho_f(E_f)\, t    (4.141)

with E f = E i .
We are now in the position to write perhaps the most physically interesting and
important result of this discussion, namely, the transition probability per unit time:
k_{fi} = \frac{d}{dt} P_{fi}(t)    (4.142)
106 Quantum Dynamics: Applications in Biological and Materials Systems

that is,

k_{fi} = \frac{2\pi}{\hbar}|V_{fi}|^2 \rho_f(E)    (4.143)

This is often referred to as Fermi’s Golden Rule4 (even though it was first obtained
by Paul Dirac)5 since it plays an important role in many physical processes.
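The linear-in-t growth underlying the golden rule can be checked directly: sum the first-order probabilities over a dense, equally spaced band of final states and compare the slope with 2π|V|²ρ/ħ. The coupling strength, level spacing, and bandwidth below are illustrative choices (with ħ = 1), not values from the text.

```python
import numpy as np

hbar = 1.0
V = 1e-3           # constant coupling matrix element (illustrative)
spacing = 1e-3     # level spacing of the final-state band
rho = 1.0 / spacing
w_ni = np.arange(-0.5, 0.5, spacing)   # transition frequencies across the band

def total_probability(t):
    # P_n(t) = 4 |V|^2 sin^2(w t/2) / (hbar w)^2, summed over the band;
    # np.sinc handles the w -> 0 limit
    f = (t / 2.0) ** 2 * np.sinc(w_ni * t / (2.0 * np.pi)) ** 2
    return np.sum(4.0 * V ** 2 * f) / hbar ** 2

golden_rule_rate = 2.0 * np.pi * V ** 2 * rho / hbar
rate_50 = total_probability(50.0) / 50.0
rate_100 = total_probability(100.0) / 100.0
```

Both rate estimates agree with the golden-rule slope to within a few percent; the small residual comes from the finite bandwidth of the model.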

4.7.2 CORRELATION FUNCTIONS


We can gain some deeper understanding of the transition rate by doing some further
analysis. For this, let us write the golden rule expression as


k_{fi} = \frac{2\pi}{\hbar}|\langle f|V|i\rangle|^2\,\delta(E_f - E_i)    (4.144)

This is the expression for the transition rate that we previously derived for the lim-
iting case where the external field varied slowly with time. It is applicable when the
initial and final energies are the same. For nondegenerate systems, we obtain for the
transition between states 1 and 2

k_{21} = \frac{2\pi}{\hbar}|\langle 2|V|1\rangle|^2\,\delta(E_2 - E_1 - \hbar\omega)    (4.145)

corresponding to a transition from the initial to final state that involves the absorption
of a quantum of energy h̄ω. We can also write the transition rate for the reverse 2 → 1
process as

k_{12} = \frac{2\pi}{\hbar}|\langle 1|V|2\rangle|^2\,\delta(E_1 - E_2 + \hbar\omega)    (4.146)

Because we are dealing with Hermitian operators, |⟨1|V|2⟩|² = |⟨2|V|1⟩|², we can
conclude that
k12 = k21 (4.147)
This is an example of microscopic reversibility and stems from the fact that our
equations of motion are symmetric in time.
In general, however, we rarely encounter isolated systems. Typically in chemical
dynamical systems we deal with an ensemble of identically prepared systems. Thus,
to compute a statistical transition rate for the ensemble we need to sum over all initial
conditions, weighted by their respective Boltzmann probability, and sum over all
possible final states. Let us write this as

P(\omega) = \sum_{f,i} k_{fi}(\omega)\, w_i    (4.148)

where

w_i = \frac{e^{-\beta E_i}}{Z}    (4.149)

is the canonical density. Next, let us write the microscopic rate as



k_{fi} = \frac{2\pi}{\hbar}|F(\omega)|^2\, |\langle f|B|i\rangle|^2\,\delta(E_f - E_i - \hbar\omega)    (4.150)

where we have written the time-dependent driving field in terms of a frequency
spectrum F(ω) and an operator coupling the initial and final states. Thus, we can
write P(ω) as

P(\omega) = \frac{2\pi}{\hbar}|F(\omega)|^2 \sum_{f,i} w_i\, |\langle f|B|i\rangle|^2\,\delta(E_f - E_i - \hbar\omega)    (4.151)

for an absorption process where E f = E i + h̄ω and


P(-\omega) = \frac{2\pi}{\hbar}|F(\omega)|^2 \sum_{f,i} w_i\, |\langle f|B|i\rangle|^2\,\delta(E_f - E_i + \hbar\omega)    (4.152)

for the emission process where E f = E i −h̄ω. In order to relate these two expressions,
let us now assume that E f > E i so that i and f now serve as state indices rather
than simply referring to the initial and final states. Thus, the sum over initial states in
P(−ω) is really a sum over f ’s, so we need to write

w_f = \frac{e^{-\beta E_f}}{Z} = \frac{e^{-\beta(E_i + \hbar\omega)}}{Z} = e^{-\beta\hbar\omega}\, w_i    (4.153)
It should be clear that the emission and absorption rates are related by

P(−ω) = e−βh̄ω P(+ω) (4.154)

In essence, the rate for stimulated emission is statistically lower than that for
stimulated absorption. It makes sense because at thermal equilibrium, we are less
likely to find the system in a higher energy state than in a lower energy state. The
relation also assumes that the transition occurs from a thermally populated distribution
of initial states at time t = 0. Since P(ω) > P(−ω), we have lost microscopic
reversibility. This is the principle of detailed balance. Reversibility is lost the moment
we place the system in contact with a thermal bath.
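Detailed balance can be verified term by term for any finite spectrum: with Boltzmann weights, the emission weight w_f|B_if|² equals e^{−β(E_f−E_i)} times the absorption weight w_i|B_fi|² for every pair of levels. The six-level spectrum and coupling operator B below are random, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0
E = np.sort(rng.uniform(0.0, 3.0, size=6))     # random energy levels
w = np.exp(-beta * E)
w /= w.sum()                                   # Boltzmann weights w_i

B = rng.normal(size=(6, 6))
B = 0.5 * (B + B.T)                            # Hermitian (real symmetric) coupling

absorb = w[:, None] * B ** 2                   # w_i |B_fi|^2 for i -> f
emit = w[None, :] * B ** 2                     # w_f |B_if|^2 for f -> i
boltzmann_factor = np.exp(-beta * (E[None, :] - E[:, None]))
balanced = np.allclose(emit, boltzmann_factor * absorb)
```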
Let us consider the part of P(ω) that only involves the summations:

C_>(\omega) = \sum_{f,i} w_i\, |\langle f|B|i\rangle|^2\,\delta(E_f - E_i - \hbar\omega)    (4.155)

and

C_<(\omega) = \sum_{f,i} w_i\, |\langle f|B|i\rangle|^2\,\delta(E_f - E_i + \hbar\omega)    (4.156)

so that

P(\omega) = \frac{2\pi}{\hbar}|F(\omega)|^2\, C_>(\omega)    (4.157)


and

P(-\omega) = \frac{2\pi}{\hbar}|F(\omega)|^2\, C_<(\omega)    (4.158)

Clearly from the discussion above, C< (ω) = exp(−βh̄ω)C> (ω).
For the moment, consider just C> (ω) and recast this using the integral form of
δ(E):
\delta(E) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{-iEt/\hbar}    (4.159)
Using this we can write

C_>(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_{i,f} w_i\, |B_{if}|^2\, e^{i(E_i - E_f)t/\hbar}    (4.160)

Now, we break up the matrix element

C_>(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_{i,f} w_i\, \langle i|B|f\rangle\langle f|B|i\rangle\, e^{i(E_i - E_f)t/\hbar}    (4.161)

and use

\langle i|B|f\rangle\, e^{i(E_i - E_f)t/\hbar} = \langle i|e^{+iHt/\hbar} B e^{-iHt/\hbar}|f\rangle    (4.162)

to write this as

C_>(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_{i,f} w_i\, \langle i|e^{+iHt/\hbar} B e^{-iHt/\hbar}|f\rangle\langle f|B|i\rangle    (4.163)

We can now eliminate the sum over the final states since this is simply a resolution
of the identity:

C_>(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t} \sum_i w_i\, \langle i|B(t)B(0)|i\rangle    (4.164)

Lastly, we can condense our notation by letting the sum over the initial conditions be
written as the trace over the thermal density

\sum_i w_i\, \langle i|B(t)B(0)|i\rangle = Tr\left[\rho\, e^{+iHt/\hbar} B e^{-iHt/\hbar} B(0)\right] = \langle B(t)B(0)\rangle    (4.165)

Thus, we can write the C_>(\omega) and C_<(\omega) in terms of Fourier transforms of correlation
functions:

C_>(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle B(t)B(0)\rangle    (4.166)

and

C_<(\omega) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle B(0)B(t)\rangle    (4.167)

It is vitally important to note that C_>(\omega) \neq C_<(\omega). This is because B(t) and B(0)
are quantum mechanical operators and do not necessarily commute. Also, while B(t)
and B(0) are Hermitian operators, their product is not Hermitian.
Symmetry properties: It is important to take a close look at the properties of
time-correlation functions. Consider the time-correlation function

C(t) = \langle B(t)B(0)\rangle    (4.168)

discussed above. If B is a real operator, then we can immediately recognize that

C(t) = C ∗ (−t) (4.169)

which implies that

Re(C(t)) = Re(C(−t)) (4.170)

and

Im(C(t)) = -Im(C(-t))    (4.171)

In other words, the real part of C(t) is an even function of time and the imaginary
part must be an odd function of time. Thus, we can write

P(\omega) = \frac{1}{\hbar^2}|F(\omega)|^2 I(\omega)    (4.172)

and consider
I(\omega) = \int_{-\infty}^{\infty} e^{i\omega t} C(t)\, dt    (4.173)

If we were measuring the absorption cross-section of a molecular species, P(ω) would


be the rate of energy absorption given an external driving frequency ω. Thus, we can
understand the physical origin of its two components. The part involving |F(ω)|2 will
depend upon the specific nature of the driving field itself and can be referred to as an
“instrument” function. The other part, I (ω), depends upon only the internal details
of the system being investigated. This we term the “line-shape” function and we can
write this as
I(\omega) = \int_{-\infty}^{0} e^{i\omega t} C(t)\, dt + \int_{0}^{\infty} e^{i\omega t} C(t)\, dt

= \int_{0}^{\infty} e^{-i\omega t} C(-t)\, dt + \int_{0}^{\infty} e^{i\omega t} C(t)\, dt

= \int_{0}^{\infty} \left[e^{i\omega t} C(t)\right]^* dt + \int_{0}^{\infty} e^{i\omega t} C(t)\, dt

= 2\,Re \int_{0}^{\infty} C(t)\, e^{+i\omega t}\, dt    (4.174)
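The reduction of the full transform to twice the real part of the one-sided transform uses only C(t) = C*(−t). Here is a quick numerical check with a model correlation function — an arbitrary Gaussian-damped oscillation with even real part and odd imaginary part.

```python
import numpy as np

def C(t):
    # model correlation function with Re C even and Im C odd, so C(t) = C*(-t)
    return np.exp(-t ** 2) * (np.cos(2.0 * t) + 0.3j * np.sin(2.0 * t))

def trap(y, dx):
    # trapezoidal rule on a uniform grid
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

t_full = np.linspace(-10.0, 10.0, 80001)
t_half = np.linspace(0.0, 10.0, 40001)

max_diff = 0.0
for omega in (0.0, 1.0, 2.0, 3.5):
    full = trap(np.exp(1j * omega * t_full) * C(t_full), t_full[1] - t_full[0])
    half = 2.0 * np.real(trap(np.exp(1j * omega * t_half) * C(t_half),
                              t_half[1] - t_half[0]))
    max_diff = max(max_diff, abs(full - half))
```

The two-sided and one-sided evaluations agree to machine precision, since on a symmetric grid the negative-time half of the integral is exactly the complex conjugate of the positive-time half.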

Typical forms for correlation functions: In general, C(t) is an oscillatory function that decays in time. Since the real part of C(t) must be even, dC(0)/dt = 0.
Thus, we anticipate the general form

C(t) = C(0)\, e^{-(t/\tau_g)^2} \cos(\omega_o t)    (4.175)

where τ_g is a Gaussian decay time and ω_o is a characteristic frequency. One can also
encounter correlation functions that decay exponentially with time:

C(t) = C(0)e−t/τ cos(ωo t) (4.176)

While formally incorrect, this can occur whenever there is a loss of time reversibility
within the system being probed. This can occur either through coarse-graining over
some intermediate time scale or through the presence of a dissipative (that is, velocity-
dependent) force. The line shapes corresponding to these two correlation functions
are easy to obtain:
I_g(\omega) = \frac{1}{2}\, C(0)\sqrt{\pi}\,\tau_g \left(e^{-\tau_g^2(\omega-\omega_o)^2/4} + e^{-\tau_g^2(\omega+\omega_o)^2/4}\right)    (4.177)
for the Gaussian case and
 
I_l(\omega) = C(0)\left(\frac{\tau}{\tau^2(\omega+\omega_o)^2 + 1} + \frac{\tau}{\tau^2(\omega-\omega_o)^2 + 1}\right)    (4.178)

for the exponential decaying case. Here, the line shape is the characteristic Lorentzian.
In the limit that ω_o = 0 we have

I_g(\omega) = C(0)\sqrt{\pi}\,\tau_g\, e^{-\tau_g^2\omega^2/4}    (4.179)

and

I_l(\omega) = \frac{2\,C(0)\,\tau}{\tau^2\omega^2 + 1}    (4.180)
Finally, we can define the correlation time as

\tau_c = \int_0^{\infty} \frac{C(t)}{C(0)}\, dt    (4.181)

For the Gaussian decay (taking ω_o = 0) this gives τ_c = √π τ_g/2, while for the exponential decay τ_c = τ.
Lastly, we note that for a classical system, the time-correlation function is sym-
metric in time with C(t) = C(−t). This is the result of the time-reversal symmetry
that arises in Newtonian mechanics.
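The correlation times and line shapes quoted above can be spot-checked by direct quadrature. The sketch below (with ω_o = 0 and an arbitrary decay time) confirms that the Gaussian envelope gives τ_c = √π τ_g/2, the exponential envelope gives τ_c = τ, and that the transform of the exponential is the Lorentzian 2τ/(1 + τ²ω²).

```python
import numpy as np
from scipy.integrate import quad

tau = 1.7   # illustrative decay time

# correlation times tau_c = int_0^inf C(t)/C(0) dt for the two envelopes;
# the finite upper limits are far beyond the decay of either integrand
tc_gauss, _ = quad(lambda t: np.exp(-(t / tau) ** 2), 0.0, 20.0)
tc_exp, _ = quad(lambda t: np.exp(-t / tau), 0.0, 40.0)

# Fourier transform of exp(-|t|/tau) at a sample frequency
omega = 0.9
ft_num = 2.0 * quad(lambda t: np.exp(-t / tau) * np.cos(omega * t),
                    0.0, 40.0)[0]
ft_exact = 2.0 * tau / (1.0 + (tau * omega) ** 2)
```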
Example: A Brownian particle. To gain some understanding and practice in
computing correlation functions, we consider the time-correlation function of the
position for a particle with unit mass (m = 1) undergoing Brownian motion. This can
be described via the Langevin equation:

ẍ(t) = −γ ẋ(t) + R(t) (4.182)



where R(t) is a random force with ⟨R(t)⟩ = 0 and ⟨R(t)R(0)⟩ = 2γ kT δ(t). In
essence, each random kick is uncorrelated with the previous one that occurred an
instant earlier in time. Multiply both sides on the right by x(0),

ẍ(t)x(0) = −γ ẋ(t)x(0) + R(t)x(0) (4.183)

then perform the thermal average,

\langle \ddot{x}(t)x(0)\rangle = -\gamma \langle \dot{x}(t)x(0)\rangle + \langle R(t)x(0)\rangle    (4.184)

The last term vanishes since ⟨R(t)⟩ = 0. For the other terms, the time derivative can
be pulled out in front of the thermal average:

\frac{d^2}{dt^2}\langle x(t)x(0)\rangle = -\gamma\,\frac{d}{dt}\langle x(t)x(0)\rangle    (4.185)
This gives us a simple ordinary differential equation for our correlation function:

\frac{d^2}{dt^2} C(t) = -\gamma\,\frac{d}{dt} C(t)    (4.186)
This we can easily solve to find

C(t) = C(0)e−γ t (4.187)

In other words, the line-shape function for a randomly kicked particle is a Lorentzian

I (ω) = C(0) (4.188)
ω2 + γ 2
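The Langevin result can also be checked by brute force. The sketch below integrates the velocity form v̇ = −γv + R(t) of the equation above (unit mass) with an Euler–Maruyama scheme and ⟨R(t)R(t′)⟩ = 2γkT δ(t − t′), then builds the correlation function from an ensemble of trajectories; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, kT, dt = 0.5, 1.0, 0.01
nsteps, ntraj = 600, 20000

v = rng.normal(0.0, np.sqrt(kT), size=ntraj)   # thermal initial conditions
v0 = v.copy()
corr = np.empty(nsteps)
for n in range(nsteps):
    corr[n] = np.mean(v * v0)
    # Euler-Maruyama step: deterministic drag plus delta-correlated kicks
    v += -gamma * v * dt + rng.normal(0.0, np.sqrt(2.0 * gamma * kT * dt),
                                      size=ntraj)

t = dt * np.arange(nsteps)
# fit the decay rate over the first few correlation times
fit_rate = -np.polyfit(t[:300], np.log(corr[:300]), 1)[0]
```

With 2×10⁴ trajectories the fitted rate reproduces γ to within statistical error, confirming the exponential decay C(t) = C(0)e^{−γt} derived above.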

4.8 INTERACTION BETWEEN MATTER AND RADIATION


Much of what we shall discuss in this book hinges upon what happens once a molecule
has been promoted into one of its excited states. However, the dynamics that do occur
depend upon just how the molecule found itself in this excited state. Moreover, the
ultimate fate of the excitation depends upon how strongly coupled the initial excited
state is to other excited states or to the ground state. Since much of this book is
concerned with electronic processes in an excited state, we review here briefly the basic
interactions between molecules and the electromagnetic field. We take a traditional,
semiclassical approach to describe the coupling between matter and radiation rather
than a fully quantum mechanical treatment in which the radiation field is treated
entirely within the context of Maxwell’s equations. This will allow us to describe
the coupling entirely in terms of time-dependent operators involving position and
momentum operators acting on the material degrees of freedom.

4.8.1 FIELDS AND POTENTIALS OF A LIGHT WAVE


An electromagnetic wave consists of two oscillating vector field components that are
perpendicular to each other and oscillate at an angular frequency ω = ck where k is

the magnitude of the wave vector that points in the direction of propagation and c is
the speed of light. For such a wave, we can always set the scalar part of its potential to
zero with a suitable choice of gauge and describe the fields associated with the wave
in terms of a vector potential A(r, t), given by

A(r, t) = A_o\, e_z\, e^{iky - i\omega t} + A_o^*\, e_z\, e^{-iky + i\omega t}    (4.189)

Here, the wave vector points in the +y direction, the electric field E is polarized in
the yz plane, and the magnetic field B is in the x y plane. Using Maxwell’s relations

E(r, t) = -\frac{\partial A}{\partial t} = i\omega\, e_z \left(A_o e^{i(ky-\omega t)} - A_o^* e^{-i(ky-\omega t)}\right)    (4.190)

and

B(r, t) = \nabla\times A = ik\, e_x \left(A_o e^{i(ky-\omega t)} - A_o^* e^{-i(ky-\omega t)}\right)    (4.191)

We are free to choose the time origin, so we will choose it so as to make Ao purely
imaginary, and set

iω Ao = E/2 (4.192)
ik Ao = B/2 (4.193)

where E and B are real quantities such that

\frac{\mathcal{E}}{\mathcal{B}} = \frac{\omega}{k} = c    (4.194)

Thus

E(r, t) = \mathcal{E}\, e_z \cos(ky - \omega t)    (4.195)

B(r, t) = \mathcal{B}\, e_x \cos(ky - \omega t)    (4.196)

where E and B are the magnitudes of the electric and magnetic field components of
the plane wave.
Lastly, we define what is known as the Poynting vector, which is parallel to the
direction of propagation:

S = εo c2 E × B (4.197)

Using the expressions for E and B above and averaging over several oscillation
periods:

\langle S\rangle = \varepsilon_o c^2\, \frac{\mathcal{E}\mathcal{B}}{2}\, e_y = \varepsilon_o c\, \frac{\mathcal{E}^2}{2}\, e_y    (4.198)

4.8.2 INTERACTIONS AT LOW LIGHT INTENSITY


The electromagnetic wave we just discussed can interact with an atomic electron. The
Hamiltonian of this electron can be given by
H = \frac{1}{2m}\left(P - qA(r, t)\right)^2 + V(r) - \frac{q}{m}\, S\cdot B(r, t)    (4.199)
where the first term represents the interaction between the electron and the electrical
field of the wave and the last term represents the interaction between the magnetic
moment of the electron and the magnetic moment of the wave. In expanding the kinetic
energy term, we have to remember that momentum and position do not commute.
However, in the present case, A is parallel to the z axis, and Pz and y commute. So,
we wind up with the following:

H = Ho + W (4.200)

where
H_o = \frac{P^2}{2m} + V(r)    (4.201)
is the unperturbed (atomic) Hamiltonian and

W = -\frac{q}{m}\, P\cdot A - \frac{q}{m}\, S\cdot B + \frac{q^2}{2m} A^2    (4.202)

The first two terms depend linearly upon A and the last is quadratic in A. So, for
low intensity we can take
W = -\frac{q}{m}\, P\cdot A - \frac{q}{m}\, S\cdot B = W_E + W_B    (4.203)
m m
Before moving on, we need to evaluate the relative importance of each term by orders
of magnitude for transitions between bound states. In the second term, the contribution
of the spin operator is on the order of h̄ and the contribution from B is on the order
of k A. Thus,

\frac{W_B}{W_E} = \frac{\left|\frac{q}{m}\, S\cdot B\right|}{\left|\frac{q}{m}\, P\cdot A\right|} \approx \frac{\hbar k}{p}    (4.204)

where h/p = λ_dB is the de Broglie wavelength of the particle. For an electron in
an atom, λ_dB is on the order of an atomic radius, a_o. The wave number is related to
the wavelength via k = 2π/λ. For electronic excitations in the UV or visible spectral
range, λ is on the order of 1000 a_o to 10,000 a_o. Thus,

\frac{W_B}{W_E} \approx \frac{a_o}{\lambda} \ll 1    (4.205)
WE λ
We can safely conclude then that the magnetic interaction is not at all important for
ordinary optical transitions and we focus only upon the coupling to the electric field.

Using the expressions we derived previously, the coupling to the electric field
component of the light wave is given by
W_E = -\frac{q}{m}\, p_z \left(A_o e^{iky} e^{-i\omega t} + A_o^* e^{-iky} e^{+i\omega t}\right)    (4.206)
Now, we expand the exponential in powers of y:
e^{\pm iky} = 1 \pm iky - \frac{1}{2} k^2 y^2 + \cdots    (4.207)

Since ky ≈ a_o/λ ≪ 1, we can, with good approximation, keep only the first term.
Thus we get the dipole operator

W_D = \frac{q\mathcal{E}}{m\omega}\, p_z \sin(\omega t)    (4.208)
In the electric dipole approximation, W (t) = W D (t).
Note that we might expect that W D should have been written as

W D = −qEz cos(ωt) (4.209)

since we are, after all, talking about a dipole moment associated with the motion of
the electron about the nucleus. Actually, the two expressions are equivalent because
we can always choose a different gauge to represent the physical problem without
changing the physical result. In electrodynamics, the electric and magnetic fields are
described in terms of a vector potential A and a scalar potential U . To get the present
result, we used
A = \frac{\mathcal{E}}{\omega}\, e_z \sin(ky - \omega t)    (4.210)
and set the scalar potential

U (r ) = 0 (4.211)

But this is completely arbitrary. We can always choose another vector potential and
scalar potential to describe the fields and require that, in the end, the physics be
invariant to how we choose to describe these potentials. Formally, when we choose
the potential we make a particular choice of gauge. We can transform from one gauge
to another by taking a function f and defining a new vector potential and a new scalar
potential as

A' = A + \nabla f    (4.212)

U' = U - \frac{\partial f}{\partial t}    (4.213)
We are free to choose f. Let us take f = z\mathcal{E}\sin(\omega t)/\omega so that

A' = \frac{\mathcal{E}}{\omega}\, e_z \left(\sin(ky - \omega t) + \sin(\omega t)\right)    (4.214)

is the new vector potential and

U' = -z\mathcal{E}\cos(\omega t)    (4.215)

is the new scalar potential. In the electric dipole approximation, ky is small, so we
set ky = 0 everywhere and obtain A' = 0. Thus, the total Hamiltonian becomes

H = H_o + qU'(r, t)    (4.216)

with perturbation

W'_D = -qz\mathcal{E}\cos(\omega t)    (4.217)

Now our perturbation depends upon the displacement operator rather than the mo-
mentum operator. This is the usual form of the dipole coupling operator.
Next, let us consider the matrix elements of the dipole operator between two
stationary states of Ho : |ψi  and |ψ f  with eigenenergy E i and E f , respectively. The
matrix elements of W D are given by

W_{fi}(t) = \frac{q\mathcal{E}}{m\omega}\sin(\omega t)\, \langle\psi_f|p_z|\psi_i\rangle    (4.218)
We can evaluate this by noting that

[z, H_o] = i\hbar\,\frac{\partial H_o}{\partial p_z} = i\hbar\,\frac{p_z}{m}    (4.219)

Thus,

\langle\psi_f|p_z|\psi_i\rangle = im\omega_{fi}\, \langle\psi_f|z|\psi_i\rangle    (4.220)

Consequently,

W_{fi}(t) = iq\mathcal{E}\,\omega_{fi}\, z_{fi}\, \frac{\sin(\omega t)}{\omega}    (4.221)
Thus, the matrix elements of the dipole operator are those of the position operator.
This determines the selection rules for the transition.
Before going through any specific details, let us consider what happens if the
frequency ω does not coincide with ω f i . Specifically, we limit ourselves to transitions
originating from the ground state of the system, |ψo . We will assume that the field
is weak and that in the field the atom acquires a time-dependent dipole moment
that oscillates at the same frequency as the field via a forced oscillation. To simplify
matters, assume that the electron is harmonically bound to the nucleus with a classical
Hooke’s law force,
V(r) = \frac{1}{2}\, m\omega_o^2\, r^2    (4.222)
where ωo is the natural frequency of the electron.

The classical motion of the electron is given by the equations of motion (via the
Ehrenfest theorem)
\ddot{z} + \omega_o^2\, z = \frac{q\mathcal{E}}{m}\cos(\omega t)    (4.223)
This is the equation of motion for a harmonic oscillator subject to a periodic force.
This inhomogeneous differential equation can be solved (using Fourier transform
methods) and the result is
z(t) = A\cos(\omega_o t - \phi) + \frac{q\mathcal{E}}{m(\omega_o^2 - \omega^2)}\cos(\omega t)    (4.224)
where the first term represents the harmonic motion of the electron in the absence
of the driving force. The two coefficients, A and φ, are determined by the initial
condition. If we have a very slight damping of the natural motion, the first term
disappears after a while, leaving only the second, forced oscillation, so we write
z = \frac{q\mathcal{E}}{m(\omega_o^2 - \omega^2)}\cos(\omega t)    (4.225)
Thus, we can write the classical induced electric dipole moment of the atom in the
field as
D = qz = \frac{q^2\mathcal{E}}{m(\omega_o^2 - \omega^2)}\cos(\omega t)    (4.226)
Typically this is written in terms of a susceptibility, χ , where
\chi = \frac{q^2}{m(\omega_o^2 - \omega^2)}    (4.227)
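Equation 4.225 is easy to test by integrating the driven oscillator directly. The sketch below adds a small damping term — an illustrative addition, exactly as the text suggests, so the free oscillation dies out — and compares the surviving amplitude with qE/(m(ω_o² − ω²)); all parameter values are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, q, Efield = 1.0, 1.0, 0.1     # illustrative parameters
w0, w, damp = 2.0, 1.0, 0.05     # natural frequency, drive frequency, damping

def rhs(t, y):
    # z'' = -w0^2 z - damp z' + (qE/m) cos(w t)
    z, v = y
    return [v, -w0 ** 2 * z - damp * v + (q * Efield / m) * np.cos(w * t)]

sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], rtol=1e-9, atol=1e-12,
                t_eval=np.linspace(350.0, 400.0, 5001))
amplitude = np.max(np.abs(sol.y[0]))           # long-time (forced) amplitude
predicted = q * Efield / (m * (w0 ** 2 - w ** 2))
```

After the transient has decayed, the numerical amplitude matches the forced-oscillation formula to well under a percent.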
Now we look at this from a quantum mechanical point of view. Again, take the
initial state to be the ground state and H = Ho + W D as the Hamiltonian. Since the
time-evolved state can be written as a superposition of eigenstates of Ho ,

|\psi(t)\rangle = \sum_n c_n(t)\, |\phi_n\rangle    (4.228)

To evaluate this we can use the results derived previously in our derivation of the
golden rule,
|\psi(t)\rangle = |\phi_o\rangle + \frac{q\mathcal{E}}{2im\hbar\omega} \sum_{n\neq 0} \langle n|p_z|\phi_o\rangle \left[\frac{e^{-i\omega_{no}t} - e^{i\omega t}}{\omega_{no} + \omega} - \frac{e^{-i\omega_{no}t} - e^{-i\omega t}}{\omega_{no} - \omega}\right] |\phi_n\rangle    (4.229)
where we have removed a common phase factor. We can then calculate the dipole
moment expectation value, D(t), as
\langle D(t)\rangle = \frac{2q^2}{\hbar}\,\mathcal{E}\cos(\omega t) \sum_n \frac{\omega_{no}\, |\langle\phi_n|z|\phi_o\rangle|^2}{\omega_{no}^2 - \omega^2}    (4.230)

From this we can begin to clearly appreciate the physics behind the absorption or
emission of light by an atom or molecule. When an oscillating dipole field is applied to
an atom or molecule, the electrons in the atom respond by oscillating with the applied
field. Ordinarily, this oscillation is not very significant when ω ≠ ω_no. However, at
the resonance condition, the induced dipole moment of the atom or molecule (due to its
interaction with the field) oscillates readily at the transition frequency and the atom
or molecule readily absorbs or emits energy in the form of electromagnetic radiation.

4.8.2.1 Oscillator Strength

We can now notice the similarity between a driven harmonic oscillator and the expec-
tation value of the dipole moment of an atom in an electric field. We can define the
oscillator strength as a dimensionless and real number characterizing the transition
between |φo and |φn ,
f_{no} = \frac{2m\omega_{no}}{\hbar}\, |\langle\phi_n|z|\phi_o\rangle|^2    (4.231)
The term oscillator strength comes from the analysis of a harmonically bound electron.
In a sense, such an electron is a perfect absorber since its motion is perfectly harmonic
and as such it can maintain a perfect phase relationship with the external driving field.
Summing over all possible transitions from the original state yields the Thomas–
Reiche–Kuhn (TRK) sum rule:

\sum_n f_{no} = 1    (4.232)

which can be written in a very compact form:


\frac{m}{\hbar^2}\, \langle\phi_o|[z, [H, z]]|\phi_o\rangle = 1    (4.233)
From this, one concludes that the highest possible oscillator strength for a transition
is 1. In fact, strong electronic transitions can have oscillator strengths on the order of
unity and often are greater than 1. For example, polyacetylenes can have f ’s as large
as 5. This seems like a contradiction until we realize that the above sum rule is for a
single particle system. If we extend this for all electrons in the system, then the sum
adds up to the total number of electrons in the system. So, in a sense, one can think
of an f > 1 as a measure of the total number of electronic states that are involved in
the transition.
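For a single harmonically bound electron the sum rule can be verified exactly in a truncated oscillator basis (taking ħ = m = ω_o = 1, so that z = (a + a†)/√2): the 0 → 1 transition exhausts the sum with f₁₀ = 1, and the commutator form of Eq. 4.233 gives the same answer.

```python
import numpy as np

N = 40
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator
z = (a + a.T) / np.sqrt(2.0)                   # position operator
E = np.arange(N) + 0.5                         # oscillator energies
H = np.diag(E)

# oscillator strengths out of the ground state: f_n0 = 2 w_n0 |<n|z|0>|^2
f = 2.0 * (E - E[0]) * np.abs(z[:, 0]) ** 2
trk_sum = f.sum()

# commutator form of the TRK sum rule: <0|[z,[H,z]]|0| = 1
HZ = H @ z - z @ H
trk_comm = (z @ HZ - HZ @ z)[0, 0]
```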

4.8.3 SPONTANEOUS EMISSION OF LIGHT


The emission and absorption of light by an atom or molecule is perhaps the most
spectacular and important phenomena in the universe. It happens when an atom or
molecule undergoes a transition from one state to another due to its interaction with
the electromagnetic field. Because the electromagnetic field cannot be entirely elim-
inated from any so-called isolated system (except for certain quantum confinement
experiments), no atom or molecule is ever really isolated. Thus, even in the absence

of an explicitly applied field, an excited system can spontaneously emit a photon and
relax to a lower energy state. Since we have all done spectroscopy experiments at
one point or another in our education, we all know that the transitions are between
discrete energy levels. In fact, it was in the examination of light passing through glass
and light emitted from flames that people in the nineteenth century began to speculate
that atoms can absorb and emit light only at specific wavelengths.
We will use the golden rule to deduce the probability of a transition under the
influence of an applied light field (laser or otherwise). We will argue that the system
is in equilibrium with the electromagnetic field and that the laser drives the system
out of equilibrium. From this we can deduce the rate of spontaneous emission in the
absence of the field.
The electric field associated with a monochromatic light wave of average intensity ⟨I⟩ is

\langle I\rangle = c\langle\rho\rangle    (4.234)

= c\left(\varepsilon_o\,\frac{E_o^2}{4} + \frac{1}{\mu_o}\,\frac{B_o^2}{4}\right)    (4.235)

= \left(\frac{\varepsilon_o}{\mu_o}\right)^{1/2}\frac{E_o^2}{2}    (4.236)

= c\varepsilon_o\,\frac{E_o^2}{2}    (4.237)

where ρ is the energy density of the field, |E_o| and |B_o| = (1/c)|E_o| are the maximum
amplitudes of the E and B fields of the wave, and we are using meter-kilogram-second
(mks) units. The electromagnetic wave in reality contains a spread of frequencies, so
we must also specify the intensity density over a definite frequency interval:
\frac{dI}{d\omega}\, d\omega = c\, u(\omega)\, d\omega    (4.238)

where u(ω) is the energy density per unit frequency at ω.
Within the “semiclassical” dipole approximation, the coupling between a molecule
and the light wave is

W(t) = -\mu\cdot E(t) = -\mu\cdot\varepsilon\, E_o \cos(\omega t)    (4.239)
where μ is the dipole moment vector and ε is the polarization vector of the wave.
Using this result, we can plug directly into the golden rule and deduce that
P_{fi}(\omega, t) = 4\, |\langle f|\mu\cdot\varepsilon|i\rangle|^2\, \frac{E_o^2}{4}\, \frac{\sin^2\left((E_f - E_i - \hbar\omega)t/(2\hbar)\right)}{(E_f - E_i - \hbar\omega)^2}    (4.240)
Now, we can take into account the spread of frequencies of the electromagnetic wave
around the resonant value of ωo = (E f − E i )/h̄. To do this we note
E_o^2 = \frac{2\langle I\rangle}{c\varepsilon_o}    (4.241)

and replace ⟨I⟩ with (dI/dω) dω:

P_{fi}(t) = \int_0^{\infty} d\omega\, P_{fi}(t, \omega)    (4.242)

= \frac{2}{c\varepsilon_o}\, |\langle f|\mu\cdot\varepsilon|i\rangle|^2 \left(\frac{dI}{d\omega}\right)_{\omega_o} \int_0^{\infty} d\omega\, \frac{\sin^2\left((\hbar\omega_o - \hbar\omega)t/(2\hbar)\right)}{(\hbar\omega_o - \hbar\omega)^2}    (4.243)

To get this we assume that d I /dω and the matrix element of the coupling vary slowly
with frequency as compared to the sin2 (x)/x 2 term. Thus, as far as doing integrals
are concerned, they are both constants. With ω_o so fixed, we can do the integral over
dω and get πt/(2ħ²), and we obtain the golden rule transition rate:
 
k_{fi} = \frac{\pi}{c\varepsilon_o\hbar^2}\, |\langle f|\mu\cdot\varepsilon|i\rangle|^2 \left(\frac{dI}{d\omega}\right)_{\omega_o}    (4.244)
cεoh̄ 2 dω ωo

Notice also that this equation predicts that the rate for excitation is identical to the
rate for de-excitation. This is because the radiation field contains both +ω and −ω
terms (unless the field is circularly polarized), and the transition rate from a state of
lower energy to a higher energy is the same as that of the transition from a higher energy
state to a lower energy state. However, we know that systems can emit spontaneously
in which a state of higher energy can go to a state of lower energy in the absence of
an external field. This is difficult to explain in the present framework since we have
assumed that |i is stationary. Let us assume that we have an ensemble of atoms in
a cavity containing electromagnetic radiation and the system is in thermodynamic
equilibrium. (Thought you could escape thermodynamics, eh?) Let E 1 and E 2 be
the energies of two states of the atom with E 2 > E 1 . When equilibrium has been
established, the number of atoms in the two states is determined by the Boltzmann
equation:

\frac{N_2}{N_1} = \frac{N e^{-\beta E_2}}{N e^{-\beta E_1}} = e^{-\beta(E_2 - E_1)}    (4.245)
where β = 1/kT . The number of atoms (per unit time) undergoing the transition
from 1 to 2 is proportional to k_21 induced by the radiation and to the number of
atoms in the initial state N1 :
\frac{dN}{dt}(1\to 2) = N_1 k_{21}    (4.246)
The number of atoms going from 2 to 1 is proportional to N2 and to k21 + A where
A is the spontaneous transition rate
\frac{dN}{dt}(2\to 1) = N_2 (k_{21} + A)    (4.247)
At equilibrium, these two rates must be equal. Thus,
\frac{k_{21} + A}{k_{21}} = \frac{N_1}{N_2} = e^{\hbar\omega\beta}    (4.248)
k21 N2

Now, let us refer to the result for the induced rate k21 and express it in terms of the
energy density per unit frequency of the cavity, u(ω),
k_{21} = \frac{\pi}{\varepsilon_o\hbar^2}\, |\langle 2|\mu\cdot\varepsilon|1\rangle|^2\, u(\omega) = B_{21}\, u(\omega)    (4.249)
where
B_{21} = \frac{\pi}{\varepsilon_o\hbar^2}\, |\langle 2|\mu\cdot\varepsilon|1\rangle|^2    (4.250)
For electromagnetic radiation in equilibrium at temperature T , the energy density per
unit frequency is given by Planck’s law:
u(\omega) = \frac{1}{\pi^2 c^3}\, \frac{\hbar\omega^3}{e^{\hbar\omega\beta} - 1}    (4.251)
Combining the results we obtain
\frac{B_{12}}{B_{21}} + \frac{A}{B_{21}}\,\frac{1}{u(\omega)} = e^{\hbar\omega\beta}    (4.252)

\frac{B_{21}}{B_{12}} + \frac{A}{B_{21}}\,\frac{\pi^2 c^3}{\hbar\omega^3}\left(e^{\hbar\omega\beta} - 1\right) = e^{\hbar\omega\beta}    (4.253)

which must hold for all temperatures. Since


\frac{B_{21}}{B_{12}} = 1    (4.255)
we get
\frac{A}{B_{21}}\,\frac{\pi^2 c^3}{\hbar\omega^3} = 1    (4.256)
and thus, the spontaneous emission rate is
A = \frac{\hbar\omega^3}{\pi^2 c^3}\, B_{12}    (4.257)

= \frac{\omega^3}{\varepsilon_o\pi\hbar c^3}\, |\langle 2|\mu\cdot\varepsilon|1\rangle|^2    (4.258)
This is a key result in that it determines the probability for the emission of light by
atomic and molecular systems. We can use it to compute the intensity of spectral lines
in terms of the electric dipole moment operator. The lifetime of the excited state is
then inversely proportional to the spontaneous decay rate,
\tau = \frac{1}{A}    (4.259)

To compute the matrix elements, we can make a rough approximation that μ ∝
xe where e is the charge of an electron and x is on the order of atomic dimensions.
We also must include a factor of 1/3 for averaging over all orientations of (μ · ε ).
Since at any given time the moments are not all aligned,

\frac{1}{\tau} = A = \frac{4}{3}\,\frac{\omega^3}{\hbar c^3}\,\frac{e^2}{4\pi\varepsilon_o}\, |x|^2    (4.260)

The factor

\frac{e^2}{4\pi\varepsilon_o\hbar c} = \alpha \approx \frac{1}{137}    (4.261)

is the fine structure constant. Also, ω/c = 2π/λ. So, setting x ≈ 1 Å,
A = \frac{4}{3}\,\frac{1}{137}\left(\frac{2\pi}{\lambda}\right)^3 c\, (1\,\text{Å})^2 \approx \frac{6\times 10^{18}}{[\lambda(\text{Å})]^3}\ \text{sec}^{-1}    (4.262)

So, for a typical wavelength, λ ≈ 4 × 103 Å,

τ = 10−8 sec (4.263)

which is consistent with observed lifetimes.
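Plugging numbers into Eq. 4.262 confirms the estimate:

```python
import numpy as np

alpha = 1.0 / 137.0
c = 3.0e18            # speed of light in angstrom/s
lam = 4.0e3           # visible wavelength, angstrom
x2 = 1.0 ** 2         # |x|^2 with x ~ 1 angstrom

A = (4.0 / 3.0) * alpha * (2.0 * np.pi / lam) ** 3 * c * x2   # s^-1
tau = 1.0 / A
```

This gives A ≈ 1.1×10⁸ s⁻¹, i.e. τ ≈ 9×10⁻⁹ s, the order of magnitude quoted above.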


We can also compare with classical radiation theory. The power radiated by an
accelerated particle of charge e is given by the Larmor formula (cf. Jackson)

P = \frac{2}{3}\,\frac{e^2(\dot{v})^2}{4\pi\varepsilon_o c^3}    (4.264)

where v̇ is the acceleration of the charge. Assuming the particle moves in a circular
orbit of radius r with angular velocity ω, the acceleration is v̇ = ω2r . Thus, the time
required to radiate energy h̄ω/2 is equivalent to the lifetime τ

\frac{1}{\tau_{class}} = \frac{2P}{\hbar\omega}    (4.265)

= \frac{1}{\hbar\omega}\,\frac{4}{3}\,\frac{e^2\omega^4 r^2}{4\pi\varepsilon_o c^3}    (4.266)

= \frac{4}{3}\,\frac{\omega^3}{\hbar c^3}\,\frac{e^2}{4\pi\varepsilon_o}\, r^2    (4.267)

This qualitative agreement between the classical and quantum results is a manifes-
tation of the correspondence principle. However, it must be emphasized that the
MECHANISM for radiation is entirely different. The classical result will never pre-
dict a discrete spectrum. This was in fact a very early indication that something
was certainly amiss with the classical electromagnetic field theories of Maxwell and
others.

4.9 APPLICATION OF GOLDEN RULE: PHOTOIONIZATION OF HYDROGEN 1S
We consider here the photoionization of the hydrogen 1s orbital to illustrate how the
golden rule formalism can be used to calculate photoionization cross-sections as a
function of the photon frequency. We already have an expression for dipole coupling:
W_D = \frac{q\mathcal{E}}{m\omega}\, p_z \sin(\omega t)    (4.268)
and we have derived the golden rule rate for transitions between states:

k_{if} = \frac{2\pi}{\hbar}\, |\langle f|V|i\rangle|^2\,\delta(E_i - E_f + \hbar\omega)    (4.269)

For transitions to the continuum, the final states are the plane waves,
\psi_k(r) = \frac{1}{\Omega^{1/2}}\, e^{ik\cdot r}    (4.270)

where Ω is the volume. Thus the matrix element ⟨1s|V|k⟩ can be written as

\langle 1s|p_z|k\rangle = \frac{\hbar k_z}{\Omega^{1/2}} \int \psi_{1s}(r)\, e^{ik\cdot r}\, dr    (4.271)
To evaluate the integral, we need to transform the plane-wave function into spherical
coordinates. This can be done via the expansion

e^{ik\cdot r} = \sum_l i^l\, (2l+1)\, j_l(kr)\, P_l(\cos\theta)    (4.272)

where jl (kr ) is the spherical Bessel function and Pl (x) is a Legendre polynomial,
which we can also write as a spherical harmonic function,


P_l(\cos\theta) = \sqrt{\frac{4\pi}{2l+1}}\; Y_l^0(\theta, \phi)    (4.273)
Thus, the integral we need to perform is
\langle 1s|k\rangle = \frac{1}{\sqrt{\pi\Omega}} \sum_l i^l \sqrt{4\pi(2l+1)} \int Y_0^{0*}\, Y_l^0\, d\Omega' \int_0^{\infty} r^2 e^{-r} j_l(kr)\, dr    (4.274)

The angular integral we do by orthogonality and this produces a delta function that
restricts the sum to l = 0 only leaving
\langle 1s|k\rangle = \frac{2}{\sqrt{\Omega}} \int_0^{\infty} r^2 e^{-r} j_0(kr)\, dr    (4.275)
The radial integral can be easily performed using
j_0(kr) = \frac{\sin(kr)}{kr}    (4.276)

leaving

\langle 1s|k\rangle = \frac{4}{\Omega^{1/2}}\, \frac{1}{(1+k^2)^2}    (4.277)
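The radial integral used in the last step can be spot-checked numerically (atomic units): ∫₀^∞ r²e^{−r} j₀(kr) dr = 2/(1 + k²)².

```python
import numpy as np
from scipy.integrate import quad

def radial_integral(k):
    # integrand r^2 e^{-r} sin(kr)/(kr), rewritten as r e^{-r} sin(kr)/k
    # to avoid the removable singularity at r = 0
    val, _ = quad(lambda r: r * np.exp(-r) * np.sin(k * r) / k, 0.0, 60.0)
    return val

ks = [0.5, 1.0, 2.0]
errors = [abs(radial_integral(k) - 2.0 / (1.0 + k ** 2) ** 2) for k in ks]
```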
Thus, the matrix element is given by

\langle 1s|V|k\rangle = \frac{q\mathcal{E}\hbar}{m\omega}\, \frac{1}{\Omega^{1/2}}\, \frac{2}{(1+k^2)^2}    (4.278)

This we can insert directly into the golden rule formula to get the photoionization rate
to a given k-state:
R_{0k} = \frac{2\pi}{\hbar}\left(\frac{q\mathcal{E}\hbar}{m\omega}\right)^2 \frac{1}{\Omega}\, \frac{4}{(1+k^2)^4}\; \delta(E_o - E_k + \hbar\omega)    (4.279)

which we can manipulate into reading as


R_{0k} = \frac{16\pi m}{\hbar}\left(\frac{q\mathcal{E}}{m\omega}\right)^2 \frac{1}{\Omega}\, \frac{\delta(k^2 - K^2)}{(1+k^2)^4}    (4.280)

where we write K² = 2m(ħω − E_I)/ħ² to make our notation a bit more compact.
Eventually, we want to know the rate as a function of the photon frequency, so let us put everything except the frequency and the volume into a single constant I, which is related to the intensity of the incident photon:

$$R_{0k} = \frac{I}{\Omega}\,\frac{1}{\omega^2}\,\frac{\delta(k^2 - K^2)}{(1+k^2)^4} \qquad (4.281)$$
Now, we sum over all possible final states to get the total photoionization rate. To do this, we need to turn the sum over final states into an integral, and this is done by

$$\sum_{\mathbf{k}} \to \frac{\Omega}{(2\pi)^3}\,4\pi\int_0^\infty k^2\,dk \qquad (4.282)$$

Thus,

$$R = \frac{I}{\Omega}\frac{1}{\omega^2}\,\frac{\Omega}{(2\pi)^3}\,4\pi\int_0^\infty k^2\,\frac{\delta(k^2 - K^2)}{(1+k^2)^4}\,dk = \frac{I}{\omega^2}\frac{1}{2\pi^2}\int_0^\infty k^2\,\frac{\delta(k^2 - K^2)}{(1+k^2)^4}\,dk$$

Now we do a change of variables, $y = k^2$ and $dy = 2k\,dk$, so that the integral becomes

$$\int_0^\infty k^2\,\frac{\delta(k^2 - K^2)}{(1+k^2)^4}\,dk = \frac{1}{2}\int_0^\infty \frac{y^{1/2}}{(1+y)^4}\,\delta(y - K^2)\,dy = \frac{K}{2(1+K^2)^4} \qquad (4.283)$$
124 Quantum Dynamics: Applications in Biological and Materials Systems


FIGURE 4.5 Photoionization spectrum for hydrogen atom. Note that the vertical axis is scaled
by the incident photon flux.

Pulling everything together, we see that the total photoionization rate is given by

$$R = \frac{I}{\omega^2}\frac{1}{4\pi^2}\frac{K}{(1+K^2)^4} = \frac{I}{4\pi^2\omega^2}\,\frac{\sqrt{2m(\hbar\omega-\varepsilon_o)}/\hbar}{\left(1 + 2m(\hbar\omega-\varepsilon_o)/\hbar^2\right)^4} = I\,\frac{\sqrt{2\omega - 1}}{64\,\pi^2\,\omega^6} \qquad (4.284)$$

where in the last line we have converted to atomic units to clean things up a bit. This expression is clearly valid only when $\hbar\omega > E_I = 1/2$ hartree (13.6 eV); a plot of the photoionization rate is given in Figure 4.5.
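The final expression is easy to evaluate directly; a minimal sketch in atomic units (the overall intensity factor I is arbitrary here, so only the shape of the curve in Figure 4.5 is meaningful):

```python
import math

def photoionization_rate(omega, intensity=1.0):
    """Total H(1s) photoionization rate, Eq. 4.284, in atomic units.

    Returns zero below the ionization threshold, hbar*omega <= 1/2 hartree."""
    if omega <= 0.5:
        return 0.0
    return intensity * math.sqrt(2.0 * omega - 1.0) / (64.0 * math.pi**2 * omega**6)

# The rate vanishes at threshold, peaks just above it, then falls off ~ omega^-6.
rates = [photoionization_rate(w) for w in (0.5, 0.55, 0.7, 1.0, 2.0)]
```

Scanning `omega` over 0.5 to 2.0 hartree reproduces the sharp rise and slow high-frequency falloff seen in Figure 4.5.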

4.10 COUPLED ELECTRONIC/NUCLEAR DYNAMICS


We conclude this chapter with a brief discussion of the coupling between nuclear
and electronic degrees of freedom in a molecular system. For the sake of connecting
to the rest of this chapter, let us take the nuclear degrees of freedom to be a time-
dependent driving field for the electronic degrees of freedom. However, unlike the
electromagnetic field, there will be a considerable back reaction since a change in
the electronic state will result in a force acting on the nuclei. We begin by writing the
Hamiltonian describing this as

$$H = H_e(r(t)) + \frac{p^2}{2m} \qquad (4.285)$$
where the first term is the electronic part that depends parametrically upon the nuclear
coordinate r and the second is the nuclear kinetic energy. If ψ(t) is the electronic state

at time t, its total time derivative contains two terms:


$$i\hbar\frac{d\psi}{dt} = i\hbar\frac{\partial\psi}{\partial t} + i\hbar\,\dot r\,\frac{\partial\psi}{\partial r} \qquad (4.286)$$
where the first term gives the contribution from the explicit dependency on time while
the second gives the implicit dependency. In the language of fluid dynamics, this is
the advective derivative since we can imagine that ψ is being carried along some path
r (t). If we expand ψ in terms of the eigenstates of He (r ) at a given value of r ,

$$\psi(r) = \sum_n c_n(r)\,\phi_n(r) \qquad (4.287)$$

then

$$i\hbar\frac{dc_n}{dt} = \varepsilon_n c_n + i\hbar\,\dot r\sum_m c_m\left\langle \phi_n\left|\frac{\partial}{\partial r}\right|\phi_m\right\rangle \qquad (4.288)$$

First, in the limit of slow nuclear motion, ṙ ≈ 0 or if the electronic wave function
varies slowly along r , then the second term gives no contribution to the dynamics.
If our initial electronic state is prepared in an eigenstate of He , then under these
conditions it will remain in the same eigenstate even as the nuclei move. This is the
adiabatic approximation whereby the nuclear motion is slow enough such that the
electronic state instantly responds to any small change. Within this approximation,
we can use the Hellmann–Feynman theorem to compute the forces exerted on the
nuclei. The resulting equations of motion for the nuclei then read:
   
∂εn (r )  ∂ He (r ) 
m r̈ = − 
= − φn (r )   φn (r ) (4.289)
∂r ∂r 
In general, we do not need to assume that ψ is initially an eigenstate of He ; it can be
a superposition of eigenstates, in which case we need to take a weighted average over
forces:

$$m\ddot r = -\left\langle\psi\left|\frac{\partial H_e(r)}{\partial r}\right|\psi\right\rangle = -\sum_n |c_n|^2\,\frac{\partial\varepsilon_n(r)}{\partial r} \qquad (4.290)$$

In other words, the nuclear degrees of freedom (represented by r) experience an average force weighted by the population in each state. This is a very compelling picture, since one can effectively partition a very large system with many degrees of freedom into two interacting subsystems, one of which behaves classically and the other quantum mechanically, with a time-dependent Hamiltonian that depends parametrically upon the classical variables:

$$i\hbar\,\dot\psi(t) = H_e(r(t))\,\psi(t) \qquad (4.291)$$
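The mean-field scheme of Eqs. 4.290 and 4.291 can be sketched in a few lines. The two-state linear diabatic model and all parameter values below are hypothetical illustrations, not taken from the text (ħ = 1):

```python
import numpy as np

# Illustrative Ehrenfest (mean-field) propagation: a two-level electronic system
# driven by, and back-reacting on, one classical coordinate r.
a, lam, m, dt = 1.0, 0.1, 100.0, 0.01   # hypothetical model parameters

def H_e(r):
    """Diabatic electronic Hamiltonian, H_e(r) = [[a r, lam], [lam, -a r]]."""
    return np.array([[a * r, lam], [lam, -a * r]])

def dH_dr(r):
    return np.array([[a, 0.0], [0.0, -a]])

r, v = -5.0, 0.05                          # classical position and velocity
c = np.array([1.0, 0.0], dtype=complex)    # start in diabatic state 1

for _ in range(2000):
    # quantum step: c(t+dt) = exp(-i H_e(r) dt) c(t), via the eigenbasis of H_e
    w, U = np.linalg.eigh(H_e(r))
    c = U @ (np.exp(-1j * w * dt) * (U.conj().T @ c))
    # classical step: population-averaged Hellmann-Feynman force, Eq. 4.290
    force = -np.real(c.conj() @ dH_dr(r) @ c)
    v += force / m * dt
    r += v * dt

norm = np.vdot(c, c).real   # unitary propagation conserves the electronic norm
```

The electronic norm stays at unity while the nucleus feels only the averaged force, which is precisely the feature criticized in the discussion that follows.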
At first glance, there does not seem to be any problem with this description.
However, consider the case where the system evolves into two very different config-
urations, perhaps one corresponding to the case in Figure 4.6 where an electron is


FIGURE 4.6 Problem with Hellmann–Feynman forces: (a) We have two possible electron
transfer states. One (|1) has the electron localized on site A with B being neutral and the other
(|2) has the electron on site B with A being neutral. The arrows indicate the dipole moments
of surrounding solvent molecules. Since |1 and |2 are coupled, the state will naturally evolve
into a linear combination of the two possible outcomes. Consequently, the Hellmann–Feynman
forces will see an average of the two. (b) We have a potential well representing the two states
with Q being an order parameter. For Q = −1, the dipoles are oriented about A and for
Q = +1 the dipoles are oriented about B. Q = 0 corresponds to the unstable case of neither
A nor B being fully solvated.

localized on the left-hand molecule and the other corresponding to where the electron
is localized on the right-hand molecule. If the final populations are such that there is
a 1:1 mixture between the left and right configurations, the solvent molecules follow-
ing the transfer of the electron from the left to the right will “see” an averaged case
and will not fully solvate either side. The problem stems from the fact that when we
partition the full system into interacting subsystems and then make the mean-field
assumption, we essentially “trap” quantum coherence within the two separate sub-
spaces and do not allow for the mixing of phase coherence. Energy can flow between
the two subspaces, but phase information cannot. Consequently, the system is forced
to remain too coherent and never resolves itself into either state. A number of “fixes”
have been proposed 6–14 to kill off coherence and force the system to localize in one
state or the other. We shall pick up with this discussion of coherence and decoherence
in detail in a later chapter.
In effect, we are really solving the two-level system problem posed earlier in this
chapter. For the sake of discussion, we limit ourselves to two electronic states, labeled
a and b, and write our $H_e(r)$ as

$$H_e(r) = \begin{pmatrix} E_a(r) & \lambda \\ \lambda & E_b(r) \end{pmatrix} \qquad (4.292)$$

where E a (r ) and E b (r ) define two potential energy surfaces and λ is some constant
coupling. When λ = 0, the two potential energy curves will cross at some point.
However, when $\lambda \neq 0$, the two curves avoid each other, as seen in Figure 4.6. Let us
center our frame of reference at the point of intersection where E a (0) = E b (0). We
know from our previous analysis that if the coupling is much weaker than the energy
difference (in the uncoupled representation), then the probability to make a transition
between the two states will be vanishingly small. Also for the sake of discussion, let
us consider this as a scattering problem whereby the nuclear motion is from r → −∞
to r → +∞ and appears at the crossing point at t = 0. Also at t → −∞, we presume
the system is prepared in one of the two states φa or φb , which are eigenstates of the
uncoupled system (that is, with λ = 0). We shall refer to this representation as the
diabatic representation. As the system progresses from left to right, the diabatic states
will mix, leading to a superposition of states

$$\psi = c_a\phi_a + c_b\phi_b \qquad (4.293)$$

At $t \to \infty$, the coefficients $|c_a|^2$ and $|c_b|^2$ give the probabilities for either remaining in the initial state or making a transition from state a to state b.
We can equally well picture a representation where He (r ) is diagonal at each point
along r . Although the physics (that is, what we eventually compute or observe) will
not depend upon our choice of representation, our description of the physics may be
quite different. In this adiabatic representation, the electronic coupling is described
by Equation 4.288.

$$\hat V_{12} = \dot r\cdot\langle\phi_1|\nabla_r|\phi_2\rangle \qquad (4.294)$$

We can estimate this using the "off-diagonal" Hellmann–Feynman theorem,

$$\frac{d}{dr}\langle\phi_1|H|\phi_2\rangle = \langle\phi_1'|H|\phi_2\rangle + \langle\phi_1|H'|\phi_2\rangle + \langle\phi_1|H|\phi_2'\rangle = 0 \qquad (4.295)$$

Rearranging, we find

$$\langle\phi_1|\nabla_r|\phi_2\rangle = \frac{\langle\phi_1|H'|\phi_2\rangle}{E_1 - E_2} \qquad (4.296)$$
Again, when we are far from the point of intersection, $\dot r\cdot\langle\phi_1|H'|\phi_2\rangle \ll E_1 - E_2$ and the coupling can be ignored. In this limit, both the adiabatic states $\phi_1$ and $\phi_2$ and
and the coupling can be ignored. In this limit, both the adiabatic states φ1 and φ2 and
the diabatic states φa and φb are equivalent. To the left, the lower state φ1 = φa and
the upper state φ2 = φb . However, to the right as r → +∞, the lower state becomes
φ1 = φb while the upper adiabatic state becomes φ2 = φa . In other words, as our
state evolves, it becomes a superposition of adiabatic states:

ψ = a1 φ1 + a2 φ2 (4.297)

with |a1 |2 being the probability for the system to be found in the lower adiabatic
state. Again starting off at t in the distant past with the system prepared to the left
in the lower adiabatic state |a1 (r → −∞)|2 = 1, we find at long time and when the
nuclear coordinate has progressed through the intersection, $|a_1(r\to\infty)|^2 = |c_b|^2$ and $|a_2(r\to\infty)|^2 = |c_a|^2$. Thus, the probability to remain on the lower adiabatic surface is given by $P_{1\to1} = |c_b(r\to\infty)|^2 = |a_1(r\to\infty)|^2 = P_{a\to b}$, and the probability for making a transition to the other surface is $P_{1\to2} = |c_a(r\to\infty)|^2 = |a_2(r\to\infty)|^2 = P_{a\to a}$.
We can approximate the probability for making the transition using the Landau–Zener approach15–17:

$$P_{1\to1} = P_{a\to b} = 1 - \exp\left[-\left(\frac{2\pi|V_{ab}|^2}{\hbar\,\partial_t(E_a(r)-E_b(r))}\right)_{r=r_c}\right] \qquad (4.298)$$

where the time dependence of the energy gap is due to the motion along r. Taking the derivative,

$$\frac{d}{dt}\left(E_a(r) - E_b(r)\right) = \dot r\,(F_b - F_a) \qquad (4.299)$$

where $F_{a,b} = -\nabla_r E_{a,b}$ are the forces acting on the nuclear coordinate at r from the lower ($F_a$) and upper ($F_b$) diabatic curves. Note that all of these quantities are to be evaluated at the point of crossing $r_c$, and $\dot r$ is the velocity at the point of crossing. Hence, $F_a$ and $F_b$ are the slopes of the diabatic curves at the point of crossing.
Again, in the limit of weak coupling or high velocity through the coupling region, $2\pi|V_{ab}|^2 \ll \hbar\,\dot r\,(F_b - F_a)$ and

$$P_{1\to1} \approx \left(\frac{2\pi|V_{ab}|^2}{\hbar\,\dot r\,|F_b - F_a|}\right)_{r=r_c} \qquad (4.300)$$

Thus, the probability to remain in the original adiabatic state is very small. This is
referred to as the nonadiabatic limit. On the other hand, in the limit of large coupling
or slow motion, the exponential term in the Landau–Zener equation (Equation 4.298)
vanishes and the system remains on the original adiabatic surface throughout the
scattering process.
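Both limits are easy to check numerically from Eq. 4.298; a small sketch (ħ = 1, illustrative parameter values):

```python
import math

def p_stay_adiabatic(Vab, rdot, dF, hbar=1.0):
    """Eq. 4.298: P_{1->1} = P_{a->b} = 1 - exp(-2 pi |Vab|^2 / (hbar rdot |Fb-Fa|))."""
    return 1.0 - math.exp(-2.0 * math.pi * Vab**2 / (hbar * rdot * abs(dF)))

# Adiabatic limit: strong coupling and slow passage, probability -> 1.
p_slow = p_stay_adiabatic(Vab=0.5, rdot=0.01, dF=1.0)

# Nonadiabatic limit: weak coupling and fast passage; linearizing the
# exponential recovers Eq. 4.300.
p_fast = p_stay_adiabatic(Vab=0.01, rdot=10.0, dF=1.0)
p_fast_approx = 2.0 * math.pi * 0.01**2 / (10.0 * 1.0)
```

In the fast/weak limit the exact and linearized probabilities agree to many digits, which is exactly the content of Eq. 4.300.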

4.10.1 ELECTRONIC TRANSITION RATES


As an application of the Landau–Zener treatment, let us consider the simple model for
charge transfer suggested by Figure 4.6. For the sake of discussion, let |a represent
the quantum state where species “A” is an electron donor and “B” is an electron
acceptor. The polarization field (represented by the arrows) is generated by polar
solvent molecules around the two species. We assume that the A and B are fixed in
space. The reaction then is for A to pass its electron over to B. For example, in the
ferric-ferrous self-exchange reaction,

Fe3+ + Fe2+ → Fe2+ + Fe3+

Thus, |a corresponds to the reactant state and |b is the product state. We can also
have a more general cross-electron transfer if the species are different, for example,

Fe2+ + Ce4+ → Fe3+ + Ce3+

In these chemical reactions, no bonds are broken; we simply have a rearrangement of the electronic charge density about the two ion sites. If these reactions were to

be carried out in a polar medium, the dipoles within the medium would respond by
reorganizing themselves to minimize the electrostatic interactions. As suggested by
Figure 4.6 and Figure 4.7, we have two minima corresponding to the cases where
the solvent polarization fields are organized to minimize these interactions. If we
take Q to be some collective polarization coordinate, then we can easily arrive at the
parabolic curves:

$$V_a(Q) = E_a + \frac{k}{2}(Q - Q_a)^2 \qquad (4.301)$$

$$V_b(Q) = E_b + \frac{k}{2}(Q - Q_b)^2 \qquad (4.302)$$

Furthermore, since the electronic coupling is only significant close to the crossing point, we shall assume it is independent of Q and equal to $V_{ab}$. These curves cross at $Q_c$, where $V_a(Q_c) = V_b(Q_c)$. Simple algebra yields

$$Q_c = \frac{(E_a - E_b) + k\left(Q_a^2 - Q_b^2\right)/2}{k(Q_a - Q_b)} \qquad (4.303)$$

For the forward reaction, the activation energy is the energy difference between $E_a$ and the energy at the crossing:

$$E_A = \frac{\left[(E_a - E_b) - \lambda\right]^2}{4\lambda} \qquad (4.304)$$

where

$$\lambda = \frac{k}{2}(Q_a - Q_b)^2 \qquad (4.305)$$
2
This last term carries an important physical meaning. It is the energy required to
reorganize the polarization following the transfer of a charge from A to B. These
terms are shown in Figure 4.7 along with a simple sketch of the parabolic potentials.
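Equations 4.303–4.305 translate directly into a few lines of code; a minimal sketch in arbitrary units:

```python
def marcus_parameters(Ea, Eb, k, Qa, Qb):
    """Crossing point Qc (Eq. 4.303), reorganization energy lam (Eq. 4.305),
    and activation energy EA (Eq. 4.304) for two parabolic diabats of equal k."""
    Qc = ((Ea - Eb) + k * (Qa**2 - Qb**2) / 2.0) / (k * (Qa - Qb))
    lam = k * (Qa - Qb)**2 / 2.0
    EA = ((Ea - Eb) - lam)**2 / (4.0 * lam)
    return Qc, lam, EA

# Symmetric case (Ea = Eb): the crossing sits midway between the minima
# and the activation energy reduces to lam/4.
Qc, lam, EA = marcus_parameters(Ea=0.0, Eb=0.0, k=1.0, Qa=-1.0, Qb=1.0)
```

A useful consistency check is that $V_a(Q_c) - E_a = E_A$ for any parameter choice, symmetric or not.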
To get to a transition rate, we need to compute the expectation value that our system will arrive at the crossing with an appropriate velocity:

$$k_{a\to b} = \int_0^\infty d\dot Q\; P(Q_c, \dot Q)\, P_{a\to b}(\dot Q) \qquad (4.306)$$

$P_{a\to b}(\dot Q)$ we get from the Landau–Zener expression above. $P(Q_c, \dot Q)$ we get by taking the Boltzmann probability that the system will have the appropriate velocity at the crossing point:

$$P(Q_c, \dot Q) = \sqrt{\frac{\beta m}{2\pi}}\; e^{-\beta m\dot Q^2/2}\; \frac{e^{-\beta E_A}}{\displaystyle\int_{-\infty}^{Q_c} e^{-\beta(V_a - E_a)}\,dQ} \qquad (4.307)$$

*In keeping with our notation above, $V_a$ and $V_b$ will represent the uncoupled (diabatic) potentials, and $V_1$ and $V_2$ will denote the adiabatic potentials.

FIGURE 4.7 Sketch of parabolic free-energy potentials arising from Marcus’ treatment of
electron transfer. In this figure, E A is the activation energy, E is the driving force taken as
the (free) energy difference between the initial and final states. λ is the reorganization energy.

Taking the upper integration limit to $+\infty$, we arrive at

$$P(Q_c, \dot Q) = \frac{\beta}{2\pi}\sqrt{mk}\; e^{-\beta E_A}\, e^{-\beta m\dot Q^2/2} \qquad (4.308)$$

where $\beta = 1/k_BT$. In the adiabatic limit, $P_{a\to b} \approx 1$ and we arrive at an expression for the transition rate very much like what we expect from transition state theory,

$$k_{ad} = \frac{\omega_c}{2\pi}\, e^{-\beta\tilde E_A}$$

where $\omega_c/2\pi$ is the transmission frequency, $\omega_c = \sqrt{k/m}$, and $\tilde E_A$ is the activation energy. Since the electronic coupling opens an energy gap of $2V_{ab}$ at the crossing point, we have written $\tilde E_A = E_A - V_{ab}$ to account for the electronic coupling.
In the nonadiabatic limit, we use the weak-coupling form for $P_{a\to b}$ and obtain

$$k_{na} = \sqrt{\frac{\pi k\beta}{2}}\; \frac{|V_{ab}|^2}{\hbar\,|F_b - F_a|}\; e^{-\beta E_A}$$

Here $\Delta F = |F_b - F_a|$ is simply the difference in forces at the crossing point. Since our potentials are parabolic, $F_a = k(Q_c - Q_a)$ and $F_b = k(Q_c - Q_b)$, so $\Delta F = k\Delta Q$, where $\Delta Q$ is the distance between the two potential minima. The nonadiabatic rate is similar to the adiabatic rate in that it depends upon both the activation energy and the force constant. However, it does not depend upon the mass. The fact that the electronic coupling appears as $|V_{ab}|^2/\hbar$ reminds us that this is the first-order term in the perturbation expansion.
While the Landau–Zener model does account for the nuclear motion in a semi-
classical way, it does not treat the motion fully quantum mechanically nor does it
fully account for the electronic coherences between the two electronic states. These
coherences and correlations are vitally important in the weak coupling (nonadiabatic
limit) and cannot be entirely ignored. Finally, there are a number of parameters—k,
Q a , Q b , and so on—that need to be inferred from experiments and as such may be
difficult to obtain.
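The two limiting rate expressions above can be compared side by side; a sketch with illustrative (hypothetical) parameters, ħ = 1:

```python
import math

def k_adiabatic(kf, m, EA, Vab, beta):
    """k_ad = (omega_c / 2 pi) exp(-beta (E_A - Vab)), with omega_c = sqrt(k/m)."""
    omega_c = math.sqrt(kf / m)
    return omega_c / (2.0 * math.pi) * math.exp(-beta * (EA - Vab))

def k_nonadiabatic(kf, EA, Vab, dF, beta, hbar=1.0):
    """k_na = sqrt(pi k beta / 2) |Vab|^2 / (hbar |Fb - Fa|) exp(-beta E_A).
    Note: no mass dependence, and the rate scales as |Vab|^2."""
    return (math.sqrt(math.pi * kf * beta / 2.0)
            * Vab**2 / (hbar * abs(dF)) * math.exp(-beta * EA))

k1 = k_nonadiabatic(kf=1.0, EA=0.25, Vab=0.01, dF=2.0, beta=1.0)
k2 = k_nonadiabatic(kf=1.0, EA=0.25, Vab=0.02, dF=2.0, beta=1.0)
```

Doubling the electronic coupling quadruples the nonadiabatic rate, reflecting its first-order perturbative origin, while the adiabatic rate depends on the coupling only through the lowered barrier.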

4.10.2 MARCUS’ TREATMENT OF ELECTRON TRANSFER


In the 1950s, Rudy Marcus examined the problem of electron transfer in a polar
medium and gave a solid physical significance to the parabolic potentials we used
above. In particular, we used a generic coordinate Q to characterize the reorganization
of the medium about the two charge distributions about initial and final states.18,19 In
his original work, he treats the molecular solvent as a dielectric continuum rather than
an explicit solvent model. Although this lacks the molecularity of an explicit solvent,
it does allow one to simplify response of the solvent by its characteristic dielectric
time scales. Within the dielectric continuum model, one typically assumes that the
total response is characterized by two distinct time scales: a fast one characterizing
the electronic dynamics and a much slower time scale characterizing the nuclear
(molecular) dynamics. The static dielectric constant $\varepsilon_s$ contains contributions from both. The electronic contribution (termed the optical response) is related to the index of refraction via $\varepsilon_o = n^2$. The nuclear component $\varepsilon_n$ contains contributions from the translational, rotational, and vibrational motions of the solvent species. Unfortunately, one cannot write $\varepsilon_s$ as $\varepsilon_n + \varepsilon_e$, because the electronic degrees of freedom of the solvent depend strongly upon its instantaneous nuclear arrangement.
The critical assumption underlying Marcus’ approach is that change in the elec-
tronic density associated with the transfer of an electron occurs on a time scale much
faster than the time scale for the nuclear charges to respond but it is slow on the time
scale for the electronic motions that determine εe . As such, electron-transfer events
occur at constant nuclear polarization as determined by fixed nuclear positions. This
is more or less a statement of the Franck–Condon principle.
The essential feature of the Marcus treatment is that the rate constant can be expressed in a very compact form:

$$k_{et} = \frac{2\pi}{\hbar}\,|V_{ab}|^2\,\mathrm{FCF} \qquad (4.309)$$

The Franck–Condon factor (FCF) can be written as the overlap integral between a
vibrational state in the Va potential centered about Q a and vibrational eigenstate in
the Vb potential centered about Q b . Assuming the two wells have the same vibrational
frequency, this is the overlap between two displaced Gauss–Hermite wave functions.
In Marcus' approach, the FCF term is computed using a semiclassical approximation and takes the form

$$\mathrm{FCF} = \frac{1}{\sqrt{4\pi\lambda kT}}\,\exp\left[-\frac{(\Delta E + \lambda)^2}{4\lambda kT}\right] \qquad (4.310)$$
Since the potentials are parabolic, the displacements between the wells can be expressed in terms of specific energy differences, namely, the driving force $\Delta E = E_b - E_a$, which is the difference between the energy minima of the initial and final states, and the reorganization energy λ, which is the energy required to change the nuclear coordinates without changing the electronic state. In other words, λ is the energy of the initial electronic state evaluated at the equilibrium geometry of the final electronic state. Both $\Delta E$ and λ can be obtained from the emission and absorption spectra of the system, respectively.

The coupling $V_{ab}$ can also be determined spectroscopically by examining the transition moment between the two adiabatic states. Recall that a and b label the localized or diabatic states, and 1 and 2 label the adiabatic eigenstates of a model two-level system with Hamiltonian

$$H = \begin{pmatrix} E_a & V_{ab} \\ V_{ab} & E_b \end{pmatrix} \qquad (4.311)$$

Electronic transitions occur between energy eigenstates of H , not between the dia-
batic states. It is important that we make this distinction. Recall our discussion of
light absorption earlier in this chapter. We assumed initially that the system was at
equilibrium and perturbed only by the electromagnetic field of the incident photon.
Consequently, the initial state for optical absorption must be an eigenstate of H . For-
tunately, far from the crossing region, ψa ≈ ψ1 close to Q a and ψb ≈ ψ1 close to Q b .
To find an expression for the coupling, begin by writing the transition moment between $\psi_1$ and $\psi_2$ as

$$\mu_{12} = e\langle\psi_1|r|\psi_2\rangle$$

and then expand the eigenstates in terms of the diabatic states:

$$|\psi_1\rangle = \cos\theta\,|\psi_a\rangle + \sin\theta\,|\psi_b\rangle \qquad (4.312)$$

$$|\psi_2\rangle = -\sin\theta\,|\psi_a\rangle + \cos\theta\,|\psi_b\rangle \qquad (4.313)$$

where θ is the mixing angle. If we assume that the transition moment between the diabatic states vanishes, $\mu_{ab} = e\langle\psi_a|r|\psi_b\rangle = 0$, and let $\mu_a$ and $\mu_b$ be the static dipole moments of the donor and acceptor species, then we can write

$$V_{ab}\,|\Delta\mu| = \hbar\omega_{\max}\,\mu_{12}$$

where $\Delta\mu = \mu_a - \mu_b$ and we have written $\hbar\omega_{\max} = E_2 - E_1$ for the optical absorption maximum. As we


showed earlier as well, the oscillator strength is related to the transition moment by
$$f_{osc} = \frac{2m_e\omega_{no}}{e^2\hbar}\,|\mu_{12}|^2 \qquad (4.314)$$

where e and $m_e$ are the charge and mass of an electron. Thus, measuring the oscillator strength gives the transition moment. Furthermore, if we assume that a single charge is being passed between the donor and acceptor and that they are separated by some distance $R_{ab}$, then $\Delta\mu = eR_{ab}$.
Predictions of Marcus' theory: The Marcus rate equation makes an interesting prediction. If we consider the rate as a function of the driving force,

$$\log k \propto -\frac{(\Delta E + \lambda)^2}{\lambda}$$

then as we increase the driving force and keep the reorganization energy roughly constant, at some point we reach a maximum rate, where $|\Delta E| = \lambda$. Increasing $|\Delta E|$ beyond this will actually lead to a decrease in the electron transfer rate. Experimental verification of this turnover in the rate did not occur until nearly 30 years after Marcus made

FIGURE 4.8 Comparison between predicted and experimental electron transfer rates between a series of donor/acceptor species. Here, k is in units of 1/sec. Note that $-\Delta G$ in this figure is equivalent to $\Delta E$ in our discussion. (Adopted from Refs. 19 and 20.)

this prediction. In a true tour de force of synthesis and spectroscopy, Gerhard Closs' group produced a series of donor-acceptor molecules in the form of a bi-phenyl radical anion separated from an organic acceptor held a fixed distance away by a linking chain.20 A plot of the observed rates vs. the driving force is shown in Figure 4.8. Here we see that the rate constant increases up to a maximum value of about 2 × 10^9 s−1 with increasing $|\Delta G^\circ|$. Note that the $\Delta E$ used in our discussion should be taken as the free energy change between final and initial states, with the activation free energy $\Delta G^{\dagger\dagger} = (\lambda + \Delta G^\circ)^2/4\lambda$.

From the rate expression, $\Delta G^{\dagger\dagger}$ decreases as $\Delta G^\circ$ becomes increasingly negative and the reaction becomes more and more exothermic. When $\Delta G^\circ = -\lambda$, the activation energy vanishes, and any further increase in the exothermicity causes $\Delta G^{\dagger\dagger}$ to increase, leading to a decrease in the rate constant. Looking at Figure 4.8, we can identify three regimes. First is the normal regime, where $-\Delta G^\circ < \lambda$: here increasing $|\Delta G^\circ|$ leads to an increase in the rate, since the barrier for the reaction steadily decreases. At the point $-\Delta G^\circ = \lambda \approx 1$ eV, the barrier vanishes and we achieve the maximum electron transfer rate. Making the reaction increasingly exothermic beyond this only serves to increase the energetic barrier for the reaction and hence leads to a decrease in the rate; this regime is termed the "inverted regime." Sketches of the energy parabolas corresponding to each of these regimes are given in Figure 4.9.
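The turnover is easy to reproduce directly from Eqs. 4.309 and 4.310; a sketch in arbitrary units (the values of kT, λ, and Vab are hypothetical choices, ħ = 1):

```python
import math

def marcus_rate(Vab, dG, lam, kT, hbar=1.0):
    """k_et = (2 pi / hbar) |Vab|^2 FCF, with the semiclassical FCF of Eq. 4.310."""
    fcf = (math.exp(-(dG + lam)**2 / (4.0 * lam * kT))
           / math.sqrt(4.0 * math.pi * lam * kT))
    return (2.0 * math.pi / hbar) * Vab**2 * fcf

lam, kT = 1.0, 0.025
r_normal = marcus_rate(0.01, dG=-0.5, lam=lam, kT=kT)    # -dG < lam: normal
r_max = marcus_rate(0.01, dG=-1.0, lam=lam, kT=kT)       # -dG = lam: barrierless
r_inverted = marcus_rate(0.01, dG=-1.5, lam=lam, kT=kT)  # -dG > lam: inverted
```

Because the exponent depends only on $(\Delta G + \lambda)^2$, driving forces placed symmetrically about $-\lambda$ give identical rates, and the maximum sits exactly at the barrierless point.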

4.10.3 INCLUDING VIBRATIONAL DYNAMICS


Now that we have established the basic physical picture, let us expand on it a bit and
apply the golden rule technology we have just developed to analyze this problem.

FIGURE 4.9 Potential energy curves corresponding to the normal (a), barrierless (b), and inverted (c) regimes for electron transfer. The vertical arrow denotes the driving force $\Delta E$ and the gap between the dashed lines is the activation energy.

Clearly, one of the problems with the Landau–Zener approach is that we have ne-
glected the quantum motion of the nuclear degree of freedom. As such, we consider
here a semiclassical approach developed by Neria and Nitzan.21 The idea here is that
the nuclear vibrational motion on the potential energy surface of the initial electronic
state drive transitions to vibrational states on the potential energy surface of the final
electronic state. We begin by writing the golden rule expression for the transition
between electronic states 1 and 2 as
$$k_{12} = \frac{2\pi}{\hbar}\sum_i\frac{e^{-\beta E_{1i}}}{Z_1}\sum_f\,|\langle 1i|V|2f\rangle|^2\,\delta(E_{1i} - E_{2f}) \qquad (4.315)$$

where $\beta = 1/kT$, $Z_1$ is the vibrational partition function for the initial state, and $|i\rangle$ and $|f\rangle$ are nuclear states associated with the initial and final electronic states. Integrating over the electronic degrees of freedom, we can write $V_{12} = \langle 1|V|2\rangle$; it is still an operator acting on the nuclear degrees of freedom. Moreover, following our discussion above, the rate constant can be expressed in terms of a correlation function as

$$k_{12} = \int_{-\infty}^{\infty} dt\; e^{i\Delta E t/\hbar}\, C(t) \qquad (4.316)$$

where $\Delta E$ is the difference between the energy origins of the two potential energy surfaces. The correlation function is given by
$$C(t) = \frac{1}{\hbar^2 Z}\sum_i e^{-\beta E_i}\,\langle i|V_{12}\, e^{iH_2t/\hbar}\, V_{21}\, e^{-iH_1t/\hbar}|i\rangle \qquad (4.317)$$

$$\phantom{C(t)} = \frac{1}{\hbar^2}\,\left\langle V_{12}\, e^{iH_2t/\hbar}\, V_{21}\, e^{-iH_1t/\hbar}\right\rangle_T \qquad (4.318)$$

where the T subscript denotes thermal averaging over initial conditions. $H_1$ and $H_2$ denote the Hamiltonians for nuclear motion on the adiabatic potential energy curves and $V_{12}$ is the nonadiabatic coupling as given above.
As a model, we consider the crossing of two diabatic potential curves: one, a
harmonic well describing a bound molecular state, and the other a linear potential
representing an unbound or dissociative state:
$$V_a(x) = x^2/2 \qquad (4.319)$$
$$V_b(x) = \alpha x + E_o \qquad (4.320)$$

FIGURE 4.10 Schematic view of Gaussian wave packet propagation scheme in computing the
correlation function in Equation 4.318. The shaded Gaussians denote the initial wavepackets
with the arrow indicating evolution on the upper potential energy curve. (Figure adopted from
Ref. 22.)

where α is the slope and $E_o$ determines the vertical excitation energy. The crossing energy is given by $V_c = \alpha \pm \sqrt{\alpha^2 + 2E_o}$. These curves are shown in Figure 4.10 for the case of α = 6 and $E_o = 19.65$, following Ref. 22. Also, for the sake of convenience, we use scaled units for position ($\sqrt{\hbar/(m\omega)}$) and momentum ($\sqrt{m\hbar\omega}$) so that the energy is in units of the harmonic oscillator frequency, $\hbar\omega$. Also, as above, we take the diabatic coupling $V_{ab} = \lambda$ to be a constant. The resulting adiabatic curves are given in Figure 4.10.
As a first approximation, let us adopt a purely diabatic viewpoint and factor out the electronic coupling completely, so that the correlation function can be written as

$$C(t) = \frac{|V_{ab}|^2}{\hbar^2 Z_1}\, J(t) \qquad (4.321)$$

where J(t) is a time-dependent overlap between a Gaussian wave packet starting at position x = 0 on the lower potential and a Gaussian wave packet starting at x = 0 on the upper potential surface. In doing so, we assume that the electronic
transition occurs instantaneously on the time scale of nuclear motion. Given the
disparity between the forces on either surface, the two wave packets will rapidly move
apart and C(t) will rapidly converge to zero. Consequently, over this time frame, the
wave packets will more or less retain their original shape and only their centroids will
change. Thus, we can estimate J (t) by taking the time-dependent overlap between
two Gaussians, one moving in the lower harmonic well weighted by the initial thermal
populations and the other moving on the upper linear slope.

We take the lower state to be a harmonic oscillator eigenstate,

$$\psi_n(x) = \frac{1}{\sqrt{n!\,2^n\,a\sqrt{\pi}}}\; e^{-x^2/(2a^2)}\, H_n(x/a) \qquad (4.322)$$

where $a = \sqrt{\hbar/(m\omega)}$. The upper state is not stationary and evolves according to

$$i\hbar\,\dot\psi = \left(\frac{p^2}{2m} + \alpha x + E_o\right)\psi \qquad (4.323)$$

The exact solution was originally derived by de Broglie1,23–25:

$$\psi(x,t) = R(t)\, e^{iS(t)/\hbar} \qquad (4.324)$$

$$R(t) = \frac{1}{(\sqrt{2\pi}\,\sigma(t))^{1/2}}\,\exp\left[-\frac{\left(\frac{\alpha t^2}{2m} - ut + x\right)^2}{4\sigma(t)^2}\right] \qquad (4.325)$$

$$S(t) = -\frac{\alpha^2t^3}{6m} + \frac{\left(\frac{\alpha t^2}{2m} - ut + x\right)^2\hbar^2 t}{8m\,\sigma^2\,\sigma(t)^2} + \left(x - \frac{tu}{2}\right)(mu - t\alpha) - \frac{3}{2}\,\hbar\tan^{-1}\left(\frac{t\hbar}{2m\sigma^2}\right) \qquad (4.326)$$

with the time-dependent width given by $\sigma(t)^2 = \sigma^2 + \left(\frac{\hbar t}{2m\sigma}\right)^2$, where σ is the initial width and u is the initial group velocity. Notice that this is the same as
the initial width and u being the initial group velocity. Notice that this is the same as
what one finds for the spread of a free particle. We thus construct the integral
 ∞
J (t) = ψ0 (x)R(t)ei S(t)/h̄ d x (4.327)
−∞

choosing the initial width of the upper state to match that of the initial harmonic wave
function. (See Figure 4.10.) The resulting integral can be tediously worked out by
hand; however, numerical evaluation can be readily done. The resulting J (t) decay
curve for the problem at hand is shown in Figure 4.11c.
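A direct numerical check of Eq. 4.327 is also straightforward: place the harmonic ground state on the linear diabat and propagate it with a split-operator FFT step, accumulating the overlap with the stationary lower-state wave function. The grid and time-step values below are illustrative choices (scaled units, ħ = m = ω = 1):

```python
import numpy as np

alpha, Eo = 6.0, 19.65
N, L = 1024, 40.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.pi**-0.25 * np.exp(-x**2 / 2.0)  # harmonic ground state (a = 1)
psi = psi0.astype(complex)                  # same packet placed on the upper surface

dt, nsteps = 0.005, 400                     # propagate out to t = 2
expV = np.exp(-1j * (alpha * x + Eo) * dt / 2.0)  # half-step in Vb = alpha x + Eo
expT = np.exp(-1j * k**2 * dt / 2.0)              # full kinetic step

J = [np.sum(psi0 * psi) * dx]               # J(t) = <psi0|psi(t)>, Eq. 4.327
for _ in range(nsteps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
    J.append(np.sum(psi0 * psi) * dx)
J = np.array(J)
```

|J(0)| = 1 by construction, and the overlap dies quickly as the packet slides down the ramp; the steeper the slope, the faster the decay, as in Figure 4.11.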

(a) α = 6 (b) α = 3 (c) α = 1

FIGURE 4.11 Time-dependent Franck–Condon overlap integral between the harmonic oscillator ground state and a Gaussian moving on a linear ramp. Plotted are the real, imaginary, and absolute values.

The rate of decay of J (t) depends upon how rapidly the wave packet on the linear
potential moves away from the initial state. The steeper the potential, the faster the
upper wave packet loses overlap with the lower wave function. This is an indication
of how long it takes the system to lose memory of its initial condition. Once this
memory has been lost, the correlation between the initial and final states is zero, and
the Fourier integral required to compute the golden rule transfer rate will converge.
The final rate constant will depend upon two factors. The transfer will be more efficient if in fact the vibrational spectrum of the final states has a significant overlap with the initial state. For example, if we write the overlap of two vibrational wave packets evolving on electronic potentials i and f as

$$J(t) = \langle\psi_f(t)|\psi_i(0)\rangle = \langle\psi_f(0)|\exp[+iH_ft/\hbar]|\psi_i(0)\rangle \qquad (4.328)$$

where $\langle\psi_f(0)|\psi_i(0)\rangle = 1$ and $H_f$ is the Hamiltonian for nuclear motion on the final surface, then by inserting a complete set of vibrational states,

$$J(t) = \sum_{n_f}\exp[+iE_{n_f}t/\hbar]\;|\langle n_f|\psi_i(0)\rangle|^2 \qquad (4.329)$$

we see that J(t) involves the projection of the initial state onto all possible vibrational eigenstates of the final electronic potential surface, with $E_{n_f}$ measured relative to a common energy origin. Since we are dealing with a continuum of final energy states, the sum must be converted to an integral,

$$\sum_{n_f} \to \int dE\, g(E) \qquad (4.330)$$

where g(E) is normalized to 1:

$$J(t) = \int dE\, g(E)\,\exp[+iEt/\hbar]\;|\langle E|\psi_i(0)\rangle|^2 \qquad (4.331)$$

Thus, upon taking the Fourier transform, we can write the transition rate from the nth vibrational eigenstate on the initial electronic state as

$$k_{if} = |V|^2\int dE\, g(E)\,|\langle E|\psi_{ni}\rangle|^2\,\delta(E + E_o - E_i) \qquad (4.332)$$

The overlap integral we can evaluate exactly, since we know the energy eigenstates for a particle under the influence of a constant force $V = \alpha x$:

$$\langle x|E\rangle = \frac{1}{2^{1/3}\alpha^{1/6}}\,\mathrm{Ai}\!\left((2\alpha)^{1/3}(x - E/\alpha)\right) \qquad (4.333)$$

$$\phantom{\langle x|E\rangle} = \frac{1}{\sqrt{2\pi\alpha}}\int_{-\infty}^{\infty} dk\,\exp\!\left[ik^3/6\alpha + ik(x - E/\alpha)\right] \qquad (4.334)$$
where Ai(x) is the Airy function chosen to be regular at the origin. The overlap integral is then computed using

$$\langle E|\psi_{ni}\rangle = \int_{-\infty}^{\infty} dx\,\langle E|x\rangle\langle x|\psi_{ni}\rangle \qquad (4.335)$$

FIGURE 4.12 (a) Overlap integral, Equation 4.336, vs. quantum number. (b) Comparison between the continuum wave function with $E = E_o$ and eigenstate #19.

which can be recast as

$$\langle E|\psi_{ni}\rangle = \frac{1}{\sqrt{2\pi\alpha}}\int_{-\infty}^{\infty} dk\,\exp\!\left[-ik^3/6\alpha + ikE/\alpha\right]\,\tilde\psi_{ni}(k) \qquad (4.336)$$
where

$$\tilde\psi_{ni}(k) = (-i)^n\,\psi_n(x\to k) \qquad (4.337)$$

is the Fourier transform of the nth harmonic oscillator eigenstate. This last integral over k can be evaluated numerically.
For the case of α = 6, E_o = 19.36, and λ = 0.5, Figure 4.12a shows the rate as a function
of the initial quantum number in the lower harmonic well. Notice that the rate undergoes
a series of oscillations corresponding to constructive and destructive overlaps be-
tween oscillations in the initial and final vibrational wave functions. Remarkably,
even though the two diabatic potential curves cross at E_o = 19.36, there is very poor
integrated overlap between the n ≈ 19 vibrational state and an Airy function at that
energy, due to the oscillations in both wave functions, as seen in Figure 4.12b.
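As an illustration of how this k-integral might be evaluated in practice, here is a sketch in reduced units (m = ω = h̄ = 1); the grid sizes and the α value used in the example are arbitrary choices, not taken from the text:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval

def ho_eigenstate(x, n):
    # nth harmonic-oscillator eigenstate, with m = omega = hbar = 1
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2.0)

def overlap(E, n, alpha, kmax=12.0, nk=4001):
    # <E|psi_n> by direct quadrature of the k-integral in Eq. 4.336;
    # psi_n is its own Fourier transform up to (-i)^n (Eq. 4.337)
    k = np.linspace(-kmax, kmax, nk)
    dk = k[1] - k[0]
    psi_k = (-1j)**n * ho_eigenstate(k, n)
    phase = np.exp(-1j * k**3 / (6.0 * alpha)
                   + 1j * np.outer(np.atleast_1d(E), k) / alpha)
    return np.sum(phase * psi_k, axis=-1) * dk / np.sqrt(2.0 * pi * alpha)
```

Since the |E⟩ states are δ-normalized, ∫dE |⟨E|ψ_n⟩|² should equal 1, which provides a convenient convergence check on the quadrature.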

4.11 PROBLEMS AND EXERCISES

Problem 4.1 A simple analysis of the two-well model can be performed using
the golden rule techniques developed thus far. Consider the case of two identical
wells, one displaced from the other by xo . We can also add an energy shift to the
problem E b .
   V_a(x) = mΩ²x²/2   (4.338)
   V_b(x) = mΩ²(x − x_o)²/2 + E_b   (4.339)
Show that the time-dependent overlap between the harmonic oscillator ground-state
wave function in state a and an initially identical Gaussian evolving in state b is given by J(t) in Equation 4.340,
Quantum Dynamics (and Other Un-American Activities) 139

FIGURE 4.13 Coulomb potential for H atom including a cutoff approximating the finite radius
of the proton.

where

   J(t) = exp[ −(Δ²/2)(1 − e^{−iΩt}) − iΩt/2 ]   (4.340)

and Δ = x_o √(mΩ/h̄) is a dimensionless displacement (the Huang–Rhys parameter). Show,
by taking the Fourier transform of J(t), that the spectral function is given by

   σ(ω) = e^{−Δ²/2} Σ_{n=0}^{∞} [Δ^{2n}/(2^n n!)] δ(h̄ω − (n + 1/2)h̄Ω)   (4.341)

Finally, evaluate and plot the transition rate from state a to state b as a function of
temperature.
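As a numerical check on Equations 4.340 and 4.341 (a sketch; Δ, Ω, and the grid are arbitrary test values), projecting J(t) onto e^{−i(n+1/2)Ωt} over a whole number of periods should recover the Poisson weights e^{−Δ²/2}Δ^{2n}/(2^n n!):

```python
import numpy as np
from math import factorial

Delta, Omega = 1.2, 1.0                       # arbitrary test values
nper, npts = 8, 4096                          # whole number of periods
t = np.linspace(0.0, nper * 2.0 * np.pi / Omega, npts, endpoint=False)
J = np.exp(-(Delta**2 / 2.0) * (1.0 - np.exp(-1j * Omega * t))
           - 1j * Omega * t / 2.0)

weights = []
for n in range(4):
    c_n = np.mean(J * np.exp(1j * (n + 0.5) * Omega * t))   # project onto line n
    poisson = np.exp(-Delta**2 / 2.0) * Delta**(2 * n) / (2.0**n * factorial(n))
    weights.append((c_n.real, poisson))
    print(n, c_n.real, poisson)
```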

Problem 4.2 A one-dimensional harmonic oscillator, with frequency ω, in its ground
state is subjected to a perturbation of the form

   H′(t) = C p̂ e^{−α|t|} cos(Ωt)   (4.342)

where p̂ is the momentum operator and C, α, and Ω are constants. What is the
probability that as t → ∞ the oscillator will be found in its first excited state, to
first order in perturbation theory? Discuss the result as a function of Ω, ω, and α.
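The time integral that appears in the first-order amplitude for this perturbation can be checked numerically; the closed form below is the standard result for an exponentially damped cosine (the parameter values in the test are arbitrary, and the matrix-element prefactor involving ⟨1|p̂|0⟩ is omitted):

```python
import numpy as np

def I_numeric(omega, Omega, alpha, T=200.0, npts=400001):
    # integral of e^{-alpha|t|} cos(Omega t) e^{i omega t} dt, truncated at |t| = T
    t = np.linspace(-T, T, npts)
    f = np.exp(-alpha * np.abs(t)) * np.cos(Omega * t) * np.exp(1j * omega * t)
    return np.sum(f) * (t[1] - t[0])

def I_closed(omega, Omega, alpha):
    # sum of two Lorentzians centered at omega = +/- Omega
    return (alpha / (alpha**2 + (omega - Omega)**2)
            + alpha / (alpha**2 + (omega + Omega)**2))
```

The amplitude is largest when Ω ≈ ω (resonance) and broadens as the switching rate α grows, which is the behavior the problem asks you to discuss.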

Problem 4.3 A particle is in a one-dimensional infinite well of width 2a. A time-
dependent perturbation of the form

   H′(t) = T_o V_o sin(πx/a) δ(t)   (4.343)

acts on the system, where T_o and V_o are constants. What is the probability that the
system will be in the first excited state at a time t afterwards?

Problem 4.4 Because of the finite size of the nucleus, the actual potential seen by
the electron is more like what is shown in Figure 4.13.

1. Calculate this effect on the ground-state energy of the H atom using first-
   order perturbation theory with

      H′ = { e²/r − e²/R,   for r ≤ R
           { 0,             otherwise      (4.344)

2. Explain this choice for H′.
3. Expand your results in powers of R/a_o ≪ 1. (Be careful!)
4. Evaluate numerically your result for R = 1 fm and R = 100 fm.
5. Give the fractional shift of the energy of the ground state.
6. A more rigorous approach is to take into account the fact that the nucleus
has a homogeneous charge distribution. In this case, the potential energy
experienced by the electron goes as

      V(r) = −Ze²/r   (4.345)

   when r > R and

      V(r) = −(Ze²/2R)[3 − (r/R)²]   (4.346)

   for r ≤ R. What is the perturbation in this case? Calculate the energy shift
   for the H(1s) energy level for R = 1 fm and compare to the result you
   obtained above.
Note that this effect is the “isotope shift” and can be observed in the spectral lines of
the heavy elements.
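For part 4, the first-order shift can be evaluated by simple quadrature. The sketch below (atomic units, ψ_1s = e^{−r}/√π, and the cutoff form H′ = e²/r − e²/R assumed, as in Equation 4.344) also compares against the leading small-R behavior ΔE ≈ (2/3)(R/a_o)² hartree:

```python
import numpy as np

a0 = 5.29177210903e-11          # Bohr radius in meters

def delta_E(R):
    # first-order shift (hartree) for H' = 1/r - 1/R inside r <= R (a.u.);
    # substituting u = r/R keeps the integrand O(1) even for tiny R
    u = np.linspace(0.0, 1.0, 100001)
    integrand = 4.0 * np.exp(-2.0 * R * u) * (u - u**2)
    return R**2 * np.sum(integrand) * (u[1] - u[0])

R1 = 1e-15 / a0                  # R = 1 fm expressed in bohr
print(delta_E(R1), (2.0 / 3.0) * R1**2)
```

The fractional shift relative to the unperturbed −0.5 hartree is of order 10⁻⁹, which is why the effect only shows up as a small isotope shift in precision spectroscopy.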

Problem 4.5 As a good exercise in commutation relations and identity insertions,
derive the Thomas–Reiche–Kuhn sum rule. Show that for a harmonic oscillator the
result is exact. Finally, apply the sum rule to a linear rotor. Does the rule still hold?

Problem 4.6 Adiabatic vs. Sudden Approximations. There are two essential limits
for time-dependent problems: one where the perturbation or coupling varies slowly
in time, and the other where the coupling is suddenly switched on. In the chapter we
discussed the case where the reference system was coupled to some time-dependent field.
In this problem, we consider the case where the boundary conditions are changed.
Consider the case of an electron trapped in an infinite well of length L. The energy
levels are discrete, and we shall assume that the electron is initially prepared in the
lowest energy level. The twist here is that we shall allow L to change with time,
something like a piston that can compress and expand the electron's box.
1. What outside pressure must be exerted on the electron in order for L to be
   fixed at some length L_eq?
2. Show that by expanding the electron's wave function in terms of the time-
   dependent states

      ψ(t) = Σ_j u_j(t) |j(t)⟩
   where

      |j(t)⟩ = √(2/L(t)) cos(π(2j + 1)x/L(t))

   are the particle-in-a-box states with x measured from the center of the well
   and with energy

      ε_j(t) = (h̄²/2m)[π(2j + 1)/L(t)]² = h̄ω_j(t)

   and substituting this into the time-dependent Schrödinger equation, one obtains

      ih̄ (∂u_j/∂t) e^{−iω_j(t)t} = ih̄ ⟨j|∂/∂t|0⟩ e^{−iω_0(t)t}

3. Evaluate ⟨j|∂/∂t|0⟩ and show that

      ∂u_j/∂t = −λ_j π (∂ log L(t)/∂t) exp[i(ω_j(t) − ω_0(t))t]

   where

      λ_j = 2 ∫_{−1/2}^{1/2} cos[π(2j + 1)u] u sin[πu] du

4. To proceed, we need to specify L(t). Consider a sudden change from L_o to
   L_o + ΔL over a time t_o; in other words, L(t) = L_o + ΔL(1 − exp(−t/t_o)).
   In doing so, ∂ log L/∂t ≈ (ΔL/L) exp(−t/t_o)/t_o. Use this to show
   that the transition probability from the initial state to some final state |j⟩
   is given by

      |u_j|² = (πλ_j ΔL/L)² · 1/[1 + (ω_j − ω_o)² t_o²]
5. Show that if the rate of change of the boundary is large compared to the
   transition frequency, then this last expression approaches unity, indicating
   an abrupt transition. Show also that if 1/t_o is small (a slowly changing boundary),
   then |u_j|² → 0, indicating that the electron remains in its initial state.
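The Lorentzian switching factor in part 4 can be verified by direct quadrature of the equation for u_j (a sketch; the prefactor πλ_jΔL/L is set to one, and the parameter values are arbitrary):

```python
import numpy as np

def u_sq(dw, t_o, T_factor=40.0, npts=200001):
    # |u_j|^2 from du_j/dt proportional to -exp(i*dw*t) exp(-t/t_o)/t_o,
    # integrated from t = 0 out to effectively infinity
    t = np.linspace(0.0, T_factor * t_o, npts)
    integrand = np.exp(1j * dw * t) * np.exp(-t / t_o) / t_o
    u = -np.sum(integrand) * (t[1] - t[0])
    return abs(u)**2

dw, t_o = 2.0, 1.5
print(u_sq(dw, t_o), 1.0 / (1.0 + (dw * t_o)**2))   # should agree
```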

Problem 4.7 The potential function for an anharmonic oscillator of mass m is
given by

   V(x) = (k/2)x² + cx⁴
where the second term is small compared with the first. First, show that the first-order
correction to the ground-state energy is given by

E − E o = 3c(h̄/2mω)2

What would the first-order energy correction be if there were an x 3 term in V ?



Problem 4.8 Consider a particle of mass m in a harmonic well with force constant
k. A small perturbation is applied that changes the force constant by δk. Show that
the first- and second-order corrections to the ground-state energy are given by

   E^(1) = (1/4)(δk/k) h̄ω

and

   E^(2) = −(1/16)(δk/k)² h̄ω

How do these expressions relate to the exact expression for the energy?
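Since the perturbed frequency is ω′ = ω√(1 + δk/k), these two corrections are just the first terms in the expansion of the exact ground-state energy; a short numerical check (arbitrary test value of δk/k):

```python
import math

def exact(hw, x):
    # exact ground-state energy, x = dk/k
    return 0.5 * hw * math.sqrt(1.0 + x)

def pt2(hw, x):
    # ground-state energy through second order in perturbation theory
    return 0.5 * hw + 0.25 * hw * x - (1.0 / 16.0) * hw * x**2

x = 0.1
print(exact(1.0, x) - pt2(1.0, x))   # residual is O(x^3)
```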

Problem 4.9 Taking a trial wave function of the form φ = exp(−βr²), where β is
an adjustable parameter, use the variational procedure to obtain an estimate of the
ground-state energy for the hydrogen atom in terms of atomic constants. How does
this compare to the exact answer? Also, use your optimized wave function to compute
⟨r⟩, ⟨1/r⟩, and ⟨p²⟩. Compare your results with the exact values for a hydrogenic
system.
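A sketch of the optimization in atomic units. For this Gaussian trial function the Rayleigh quotient works out to E(β) = (3/2)β − 2√(2β/π) (this expression is a standard result, not derived in the text); minimizing it recovers E_min = −4/(3π) ≈ −0.424 hartree, compared with the exact −0.5 hartree:

```python
import numpy as np

def E(beta):
    # Rayleigh quotient for phi = exp(-beta r^2), hydrogen atom, atomic units
    return 1.5 * beta - 2.0 * np.sqrt(2.0 * beta / np.pi)

betas = np.linspace(1e-3, 2.0, 200001)
i = np.argmin(E(betas))
print(betas[i], E(betas[i]))   # beta ≈ 8/(9π) ≈ 0.283, E ≈ -4/(3π) ≈ -0.424
```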

Problem 4.10 For an attractive 1D square well potential, it is possible to show that
there is always at least one bound state. Does this hold true for any one-dimensional
attractive potential of arbitrary shape?

Problem 4.11 Consider a one-dimensional harmonic oscillator with mass m and
angular frequency ω. At time t = 0, the following state is prepared:

   |ψ(0)⟩ = (1/√(2s)) Σ_{n=N−s}^{N+s} |n⟩

where |n⟩ is an eigenstate of the Hamiltonian with energy E_n = h̄ω(n + 1/2). The
summation runs from n = N − s to n = N + s, with N ≫ s ≫ 1. Show that the
expectation value of x(t) is oscillatory with amplitude (2h̄N/mω)^{1/2}. How does
this compare to the time variation of the displacement for a classical oscillator?
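The amplitude can be checked by summing the nearest-neighbor matrix elements of x̂ (a sketch in units m = ω = h̄ = 1; the values of N and s are arbitrary, subject to N ≫ s ≫ 1):

```python
import numpy as np

N, s = 400, 20
n = np.arange(N - s, N + s + 1)
c = np.ones(len(n)) / np.sqrt(len(n))          # normalized coefficients

# <x(t)> = 2 cos(wt) * sum_n c_n c_{n+1} <n|x|n+1>, with <n|x|n+1> = sqrt((n+1)/2)
amplitude = 2.0 * np.sum(c[:-1] * c[1:] * np.sqrt((n[:-1] + 1) / 2.0))
print(amplitude, np.sqrt(2.0 * N))             # compare with (2*hbar*N/(m*w))**0.5
```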

Problem 4.12 At t = −∞ an oscillator is in its ground state |0⟩. Determine the
probability that at t = +∞ the oscillator will be in the nth excited state if it is acted
upon by a force f(t), where f(t) is an arbitrary even function of time with f = 0 at
t = ±∞. Evaluate the expression for the following choices of f(t):
1. f(t) = f_o e^{−t²/τ²}
2. f(t) = f_o/((t/τ)² + 1)

Problem 4.13 The S states for an electron in a spherical cavity of radius R are given by

   ψ_n(r) = A_n sin(nπr/R)/r

where n is the radial quantum number, n = 1, 2, 3, . . ., and A_n is chosen to ensure
normalization. The first two of these are shown in the graphic below:

[Plot of ψ_1(r) and ψ_2(r) for 0 ≤ r ≤ R.]

At time t = 0, the radius of the cavity is rapidly (and instantly) expanded to
R_f = 1.1R_i. Plot, as a function of n, the probability of finding the electron in each of
the eigenstates of the new (expanded) potential.
What would R_f be if the probability for finding the electron in the n = 1 state
were exactly 0.5?
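In the sudden approximation these probabilities are just squared overlaps of the old n = 1 state with the new-cavity eigenstates. A sketch (with u(r) = rψ(r) ∝ sin(nπr/R) normalized on its own interval; R_i = 1 is an arbitrary choice of units):

```python
import numpy as np

def sudden_probs(Ri=1.0, Rf=1.1, nmax=10, npts=20001):
    # overlap integrals on [0, Ri]; the old state vanishes for r > Ri
    r = np.linspace(0.0, Ri, npts)
    dr = r[1] - r[0]
    u_old = np.sqrt(2.0 / Ri) * np.sin(np.pi * r / Ri)      # initial n = 1 state
    probs = []
    for n in range(1, nmax + 1):
        u_new = np.sqrt(2.0 / Rf) * np.sin(n * np.pi * r / Rf)
        probs.append((np.sum(u_old * u_new) * dr)**2)
    return np.array(probs)

p = sudden_probs()
print(p[0], p.sum())   # n = 1 strongly dominates for a 10% expansion
```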

Problem 4.14 Consider the case of the vibrational motion of a linear triatomic
molecule such as C=N−H, where the harmonic stretching frequency of one
bond is much higher than the stretching frequency of the other bond, so that the
low-frequency mode can be treated essentially classically. A suitable Hamiltonian for
this is

   H = h̄ω a†a + λ(a† + a)x + (1/2)(p² + Ω²x²)   (4.347)

where a and a† are the annihilation and creation operators for a quantum harmonic well
with frequency ω, p and x are the classical momentum and position for an oscillator
with frequency Ω, and λ is the coupling between the two systems. If the low-frequency
mode is described by a classical harmonic oscillator, what is the golden rule transition
rate for the high-frequency part to relax from its first excited state to the ground state?
What happens if Ω ≪ ω?

Problem 4.15 Along a reaction coordinate R(t), the harmonic frequency of a molecule
can change. For the sake of building a model, let us consider the case where the har-
monic frequency of a diatomic molecule is increased and then decreased,

   ω(t) = ω_o + Δω exp(−t²/τ²)

with time scale τ, due to a collision with an atom. Derive an expression for the probability of finding
the molecule in its lowest vibrational state at t → ∞, given that it was in its ground
vibrational state at t → −∞. Since τ is related to the collision energy (that is, the
speed of the colliding atom), what happens if the collision is very slow or very fast on
the time scale of ω? What collisional time scale is needed for the survival probability
to be exactly 50%?
5 Representations and Dynamics
In this chapter we shall examine different ways in which one can represent the evolu-
tion of a quantum state. While mathematically different, the various representations
are equivalent in that, at the end of the day, one obtains the same physical prediction.
This is good because physics and physical measurements should not depend upon
how one chooses to represent the quantum state. Depending upon the problem at
hand, each representation has its unique advantages and disadvantages.

5.1 SCHRÖDINGER PICTURE: EVOLUTION OF THE STATE FUNCTION
By its very nature, quantum mechanics is a linear theory, since all results are ultimately
derived by starting with the time-dependent Schrödinger equation

   ih̄ (∂/∂t)ψ(t) = Ĥ ψ(t)   (5.1)

As we know from the postulates of quantum mechanics, the state of the system at time
t is described by ψ(t), which is a solution of the TDSE, where Ĥ is the Hamiltonian
operator derived from the classical Hamiltonian function by the substitutions

   r → r̂   (5.2)
   V(r) → V(r̂)   (5.3)
   p → −ih̄ ∂/∂r   (5.4)

in the coordinate representation, or equivalently, in the momentum representation,

   p → p̂   (5.5)
   r → ih̄ ∂/∂p   (5.6)
   V(r) → V(ih̄ ∂/∂p)   (5.7)

with

   V(ih̄ ∂/∂p) = Σ_{n=0}^{∞} [(ih̄)^n/n!] (∂^n V/∂r^n)|_{r=0} ∂^n/∂p^n   (5.8)


Consider the time evolution of an observable associated with an operator A_S.
The subscript S indicates that we will be working in the Schrödinger representation,
which is more or less the de facto representation of quantum mechanics as specified
by the postulates. In the Schrödinger representation (or picture), the time evolution is
carried by the state vector, which in turn is a solution of the TDSE. The expectation
value of A_S is given by

   ⟨A_S⟩(t) = ⟨ψ_S(t)|A_S|ψ_S(t)⟩   (5.9)

Taking the time derivative,

   ih̄ (d/dt)⟨A_S⟩(t) = ih̄⟨ψ̇_S(t)|A_S|ψ_S(t)⟩ + ih̄⟨ψ_S(t)|Ȧ_S|ψ_S(t)⟩ + ih̄⟨ψ_S(t)|A_S|ψ̇_S(t)⟩
                     = −⟨ψ_S(t)|[H, A_S]|ψ_S(t)⟩ + ih̄⟨ψ_S(t)|Ȧ_S|ψ_S(t)⟩   (5.10)

If the operator itself carries no explicit time dependence, then the time evolution of
the expectation value of an observable is specified by

   ih̄ (d/dt)⟨A_S⟩ = −⟨ψ_S(t)|[H, A_S]|ψ_S(t)⟩   (5.11)

If, in fact, [A, H] = 0, then the observable associated with A is a constant of the
motion.
For completeness, the time evolution of the Schrödinger state is given by

   ψ_S(t) = U_S(t, t_o) ψ_S(t_o)   (5.12)

5.1.1 PROPERTIES OF THE TIME-EVOLUTION OPERATOR


Let us consider briefly the mathematical properties of the time-evolution operator.
Expanding the Schrödinger state as a polynomial in time,

   ψ_S(t) = ψ(0) + t ψ̇(0) + (t²/2) ψ̈(0) + · · ·   (5.13)
          = [ 1 + (t/ih̄) H + (t²/2(ih̄)²) H² + · · · ] ψ(0)   (5.14)
          = Σ_{n=0}^{∞} [t^n/(n!(ih̄)^n)] H^n ψ(0)   (5.15)
          = e^{−iHt/h̄} ψ(0)   (5.16)
          = U_S(t, 0) ψ(0).   (5.17)

The fact that U S can be expressed as a polynomial in time has a number of advantages.
We can choose any one of a number of polynomial bases for this expansion since
we can take advantage of various recurrence relations. Later on, when dealing with
numerical solutions, we will compare different ways of approximating the evolution
operator over short periods of time.
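For a concrete comparison, the sketch below (an illustration with an arbitrary 2×2 Hermitian Hamiltonian, h̄ = 1) measures how a truncated Taylor expansion of U(t) = e^{−iHt} approaches the exact matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -1.0]])      # arbitrary Hermitian model
t = 0.1

U_exact = expm(-1j * H * t)
U_taylor = np.eye(2, dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 6):                        # keep terms through (-iHt)^5 / 5!
    term = term @ (-1j * H * t) / n
    U_taylor = U_taylor + term

err = np.abs(U_taylor - U_exact).max()
print(err)                                   # next-order term is ~ (||H|| t)^6 / 6!
```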

First, we note that U_S is in fact a unitary operator, since U†U = UU† = I and
U† = U⁻¹, where the † denotes the Hermitian conjugate. Written in a basis, U_S has
matrix elements

   ⟨a|U|b⟩ = ⟨b|U†|a⟩∗   (5.18)

U_S is also a solution of the time-dependent Schrödinger equation, since

   ih̄ ∂U_S/∂t = H U_S   (5.19)

and

   −ih̄ ∂U_S†/∂t = U_S† H   (5.20)

subject to the initial condition that U(0) = 1.
Notice that U(t) = U†(−t), so that when operating on a ket, U†(t) effects evolution
backwards in time,

   U_S†(t)|ψ(t)⟩ = |ψ(0)⟩   (5.21)

while it has the effect of evolving a bra forward in time:

   ⟨ψ(0)|U†(t) = ⟨ψ(t)|   (5.22)

Of particular importance is the semigroup property

   U_S(t_2, t_o) = U_S(t_2, t_1) U_S(t_1, t_o)   (5.23)

where t_2 ≥ t_1 ≥ t_o, since this allows us to approximate the long-time evolution
operator as a product of short-time evolutions:

   ⟨b|U(t)|a⟩ = Σ_{i_1,i_2,...,i_n} ⟨b|U(δt_n)|i_n⟩⟨i_n|U(δt_{n−1})|i_{n−1}⟩ · · · ⟨i_1|U(δt_1)|a⟩   (5.24)

where t = Σ_i δt_i and we have inserted n complete sets of states at various intermediate
times.
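The utility of this decomposition shows up when U(δt) itself is only known approximately, for example through a splitting H = H₁ + H₂ with [H₁, H₂] ≠ 0. The sketch below (an arbitrary 2×2 model with h̄ = 1) shows the product of first-order short-time factors converging to the exact propagator as the step shrinks:

```python
import numpy as np
from scipy.linalg import expm

H1 = np.diag([1.0, -1.0])
H2 = np.array([[0.0, 0.5], [0.5, 0.0]])
t = 1.0
U_exact = expm(-1j * (H1 + H2) * t)

errs = []
for n in (10, 100, 1000):
    dt = t / n
    U_step = expm(-1j * H1 * dt) @ expm(-1j * H2 * dt)   # first-order splitting
    U_prod = np.linalg.matrix_power(U_step, n)
    errs.append(np.abs(U_prod - U_exact).max())
print(errs)    # error falls off roughly as 1/n
```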

5.2 HEISENBERG PICTURE: EVOLUTION OF OBSERVABLES


In Heisenberg’s viewpoint, one never directly observes the state of the system; it is
simply a mathematical abstraction that you use to compute observables and make
predictions regarding the outcome of specific experiments. What we do, in fact,
observe is the spectrum of eigenvalues associated with a given physical operator:

   A|a_n⟩ = α_n|a_n⟩   (5.25)

If the state of the system is given by ψ and a measurement is made to determine
A, then the probability of finding the value α_n is given by |⟨a_n|ψ⟩|². Because the
observation should not depend upon how we choose to represent the state vector ψ,

any new picture or representation of quantum mechanics must satisfy the following
two criteria:
1. The eigenspectrum of an operator must not change upon moving to the
   new representation.
2. The probability amplitude for a given observation, ⟨a_n|ψ⟩, must not change.

Both of these criteria are satisfied by unitary transformations:

   A|x⟩ = |y⟩   (5.26)
   A′|x′⟩ = |y′⟩   (5.27)

Let U be a unitary transformation such that

   |x′⟩ = U|x⟩,   ⟨x′| = ⟨x|U†   (5.28)

and

   |y′⟩ = U|y⟩,   ⟨y′| = ⟨y|U†   (5.29)

Thus,

   A′U|x⟩ = A′|x′⟩ = |y′⟩ = U|y⟩ = U A|x⟩   (5.30)

From this we can conclude that

   U† A′ U = A   (5.31)

Applying this to the eigenvalue equation A|a_n⟩ = α_n|a_n⟩, we see that

   U A U† U|a_n⟩ = α_n U|a_n⟩ = α_n|a_n′⟩   (5.32)

that is,

   A′|a_n′⟩ = α_n|a_n′⟩   (5.33)

In other words, the eigenvalues of the transformed operator A′ are the same as those of the
original operator A. Likewise,

   ⟨a_n′|ψ′⟩ = ⟨a_n|U†U|ψ⟩ = ⟨a_n|ψ⟩   (5.34)

What we conclude from this is that there are an infinite number of ways we can formulate
dynamical representations of quantum mechanics based upon unitary transformations.
The Heisenberg picture is based upon the transformation that returns the time-
evolved Schrödinger state back to its initial condition,

   ψ_H(t) = U†(t, t_o) ψ_S(t) = ψ_S(t_o)   (5.35)

Because we never directly observe the state, time evolution is carried by the operators
themselves:

   A_H(t) = U_S†(t, t_o) A_S(t) U_S(t, t_o)   (5.36)

Upon working through the time derivative of A_H(t), we find

   ih̄ (d/dt)A_H(t) = [A_H, H_H] + ih̄ (∂/∂t)A_H(t)   (5.37)

where the subscript H denotes an operator in the Heisenberg representation.
The advantage of working in the Heisenberg picture is that there is a very clear
connection to the Poisson bracket operation that gives the time evolution of a function
in phase space:

   dA(p,q)/dt = (∂A/∂q)(∂q/∂t) + (∂A/∂p)(∂p/∂t) + ∂A/∂t
              = (∂A/∂q)(∂H/∂p) − (∂A/∂p)(∂H/∂q) + ∂A/∂t   (5.38)
              = {A, H} + ∂A/∂t   (5.39)

where we have used the canonical relationships

   ∂q/∂t = ∂H/∂p   (5.40)
   ∂p/∂t = −∂H/∂q   (5.41)
Dirac realized this close connection between the classical Poisson bracket and
the quantum commutation relation. He proposed that the two are related and that this
relation defines an acceptable set of quantum operations:26

   The quantum mechanical operators f̂ and ĝ, which in classical theory
   replace the classically defined functions f and g, must always be such
   that the commutator of f̂ and ĝ corresponds to the Poisson bracket of
   f and g according to

      ih̄{f, g} = [f̂, ĝ]   (5.42)

where f̂ and ĝ are operators constructed from the quantum x̂ and p̂
operators. In other words,

   [f̂(x̂, p̂), ĝ(x̂, p̂)] = ih̄ [ (∂f̂/∂x̂)(∂ĝ/∂p̂) − (∂ĝ/∂x̂)(∂f̂/∂p̂) ]   (5.43)

This is immediately verified if we take Ĥ(x̂, p̂) = p̂²/(2m) + V(x̂) and write the
Heisenberg equations of motion for the momentum and position operators,

   dx̂/dt = {x̂, Ĥ} = p̂/m   (5.44)
   dp̂/dt = {p̂, Ĥ} = −∂V(x̂)/∂x̂   (5.45)

Again, we must emphasize that the difference between these equations of motion and
their classical counterparts is that here we are dealing with operators rather than with
ordinary numbers or functions.
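These operator equations of motion can be checked numerically in a truncated oscillator basis (a sketch with h̄ = m = ω = 1; the basis size, displacement, and time step are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
x = (a + a.T) / np.sqrt(2.0)
p = (a - a.T) / (1j * np.sqrt(2.0))
H = a.T @ a + 0.5 * np.eye(N)

# displaced ground state (coherent state, alpha = 1), well below the truncation
psi = expm(a.T - a) @ np.eye(N)[:, 0]
psi = psi / np.linalg.norm(psi)

dt, nsteps = 1e-3, 2000
U = expm(-1j * H * dt)
xs, ps = [], []
for _ in range(nsteps):
    xs.append(np.real(psi.conj() @ x @ psi))
    ps.append(np.real(psi.conj() @ p @ psi))
    psi = U @ psi
xs, ps = np.array(xs), np.array(ps)

# d<x>/dt should equal <p>/m, the expectation-value form of Eq. 5.44
dxdt = np.gradient(xs, dt)
print(np.abs(dxdt[1:-1] - ps[1:-1]).max())
```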
We can extend this idea to any pair of canonical variables. For example, for the
case of boson operators, [â, â†] = 1, we can write a similar relation for operators
composed of products of â and â†:

   [Â(â, â†), B̂(â, â†)] = {Â(â, â†), B̂(â, â†)}
                        = (∂Â/∂â)(∂B̂/∂â†) − (∂B̂/∂â)(∂Â/∂â†)   (5.46)
Again, we can verify this by calculating the Heisenberg equations for a harmonic
oscillator:

   dâ/dt = (1/ih̄)[â, Ĥ] = (1/ih̄){â, Ĥ} = −iωâ   (5.47)
   dâ†/dt = (1/ih̄)[â†, Ĥ] = (1/ih̄){â†, Ĥ} = +iωâ†   (5.48)
The close connection between the Dirac commutation bracket and the Poisson bracket
stems from the fact that both are Lie derivatives of one vector field (or operator) with
respect to the flow along the other vector field. The connection was first utilized by
Dirac to treat constrained systems where the standard Hamiltonian-based mechanics
is inadequate. For example, the Pauli exclusion principle is equivalent to imposing an
additional constraint on the system such that no two particles can share the same state.
Such constraints are handled easily within the context of a Lagrangian formulation.
Dirac’s idea was that we should generalize the Hamiltonian to include any imposed
constraint φ j by writing

H∗ = H + cjφj (5.49)
j

where the constraints are very small, φi ≈ 0, and the coefficients are functions of p
and q. Typically we arrive at the Hamiltonian equations by taking the variation of H ,
∂H ∂H
δH = δq + δp = − ṗδq + q̇δp (5.50)
∂q ∂p
so that
   
∂H ∂H
+ ṗ δq + − q̇ δp = 0 (5.51)
∂q ∂p

Since the coefficients c j are functions of the canonical variables, we cannot separately
set δp and δq to zero. The variations must be tangent to the constraints. This can be

 One is referred at this point to a more complete discussion of the Poisson bracket formulation of classical

mechanics, such as presented in Goldstein’s Classical Mechanics text.



done by setting

   Σ_n A_n δq_n + Σ_n B_n δp_n = 0   (5.52)

with

   A_n = Σ_j u_j (∂φ_j/∂q_n)   and   B_n = Σ_j u_j (∂φ_j/∂p_n)   (5.53)

where the u_j are arbitrary functions. The constraints are functions of the canonical
variables, so we set them to zero everywhere, φ_i(q, p) = 0, so that our dynamics occurs
on the surface defined by the constraints. Now, we can write the equations of motion for
the canonical variables as

   ṗ_j = −∂H/∂q_j − Σ_k u_k (∂φ_k/∂q_j)   (5.54)
   q̇_j = ∂H/∂p_j + Σ_k u_k (∂φ_k/∂p_j)   (5.55)

More generally, the equation of motion for a function of the canonical variables becomes

   ḟ = {f, H∗} = {f, H} + Σ_k u_k {f, φ_k}   (5.56)

The equations for the u_k come about by requiring

   φ̇_k = 0 = {φ_k, H∗}   (5.57)

The connection to quantum mechanics is made by requiring the commutator of two
operators to be proportional to ih̄ times the modified Poisson bracket (aka the Dirac
bracket),

   A·B − B·A = ih̄ {A, B}_DB   (5.58)

where

   {A, B}_DB = {A, B}_PB − Σ_{nm} {A, φ_n} M⁻¹_{nm} {B, φ_m}   (5.59)

defines the Dirac bracket in terms of the Poisson bracket. The constraint matrix M
is formed by taking the Poisson brackets of the constraints:

   M_{nm} = {φ_n, φ_m}   (5.60)

 Here we are taking the constraints to be “second-class” constraints.



5.3 QUANTUM PRINCIPLE OF STATIONARY ACTION


The principle of least action is one of the foundations of classical dynamics. We define
the action as the integral of the Lagrangian,

   S = ∫_0^t dt′ L(q̇(t′), q(t′)) = ∫_0^t dt′ [ (1/2) m q̇²(t′) − V(q(t′)) ]   (5.61)

where q(t) is a trajectory. Taking the variation δS = 0 leads to the Euler–Lagrange
equation

   (d/dt)(∂L/∂q̇) − ∂L/∂q = 0   (5.62)

Substituting L(q̇, q) into the Euler–Lagrange equation yields Newton's equations of
motion,

   m q̈ = −∂V/∂q   (5.63)
Consequently, the paths in classical mechanics are the ones by which the action
integral is minimized over the entire trajectory.
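A discretized illustration of this statement (a sketch; V(q) = q²/2 with m = 1, and the endpoints and grid are arbitrary choices): perturbing the classical path changes the action only at second order in the perturbation.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(q):
    qdot = np.gradient(q, dt)
    return np.sum(0.5 * qdot**2 - 0.5 * q**2) * dt

q_cl = np.sin(t)                       # solves m q'' = -dV/dq for V = q^2/2
eta = np.sin(np.pi * t / t[-1])        # variation vanishing at both endpoints

dS_big = action(q_cl + 0.1 * eta) - action(q_cl)
dS_small = action(q_cl + 0.01 * eta) - action(q_cl)
print(dS_big, dS_small)                # ratio ~ 100: delta-S is second order
```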
One wonders, is there a similar principle in quantum mechanics?27 Do the equations
of motion for the time-dependent Schrödinger equation follow from an action
principle? The answer is yes. Consider the definition of the transition amplitude between
an initial and final state,

   ⟨ψ_f|ψ(t_f)⟩ = ⟨ψ_f|U(t_f, t_i)|ψ_i⟩
               = ⟨ψ_f|U(t_f, t)U(t, t_i)|ψ_i⟩
               = ⟨ψ_+(t)|ψ_−(t)⟩   (5.64)

where U(t_f, t_i) is the Schrödinger time-evolution operator. The two states we have
defined are both solutions of the Schrödinger equations

   ih̄ ∂_t|ψ_−(t)⟩ = H(t)|ψ_−(t)⟩,   |ψ_−(t_i)⟩ = |ψ_i⟩   (5.65)
   −ih̄ ∂_t⟨ψ_+(t)| = ⟨ψ_+(t)|H(t),   ⟨ψ_+(t_f)| = ⟨ψ_f|   (5.66)

The ket |ψ_−(t)⟩ represents the state at time t given that at time t_i the system was
in state |ψ_i⟩. Likewise, the bra ⟨ψ_+(t)| represents the state of the system at time t
given that at some future time t_f the system will be in ⟨ψ_f|. It is important to note
that |ψ_−(t)⟩ and ⟨ψ_+(t)| are not Hermitian conjugates of each other. Also, note that
the scalar product

   ⟨ψ_+(t)|ψ_−(t)⟩ = ⟨ψ_f|ψ_−(t_f)⟩ = ⟨ψ_+(t_i)|ψ_i⟩   (5.67)

is independent of the time t.
Now consider the quantity

   S = ∫_{t_i}^{t_f} dt [ ⟨φ_+(t)|ih̄∂_t − H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩ ] − ih̄ log⟨φ_f|φ_−(t_f)⟩   (5.68)


which we shall refer to as an action taken as a functional of both the bra and the ket.
These we shall take as initial trial vectors for the variation of S. The bra and ket we
are using are subject to the boundary conditions

   |φ_−(t_i)⟩ = |φ_i⟩   (5.69)
   ⟨φ_+(t_f)| = ⟨φ_f|   (5.70)

Taking the variation of S by expanding it in terms of ⟨δφ_+| and |δφ_−⟩ yields

   δS = ∫_{t_i}^{t_f} dt [ ⟨δφ_+(t)|ih̄∂_t − H − λ|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩
      + ⟨φ_+(t)|ih̄∂_t − H − λ|δφ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩ ]
      − ih̄ ⟨φ_f|δφ_−(t_f)⟩ / ⟨φ_+(t_f)|φ_−(t_f)⟩   (5.71)

where we have defined

   λ(t) = ⟨φ_+(t)|ih̄∂_t − H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩   (5.72)

Integrating by parts and imposing the boundary conditions produces

   δS = ∫_{t_i}^{t_f} dt [ ⟨δφ_+(t)| ih̄ ∂⃗_t − H − λ |φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩
      − ⟨φ_+(t)| ih̄ ∂⃖_t + H − λ′ |δφ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩ ]   (5.73)

where the arrows over the partial derivative operator indicate that the operator acts
either to the left or to the right. Clearly, the variation vanishes if both ⟨φ_+(t)| and
|φ_−(t)⟩ obey

   (ih̄∂_t − H(t))|φ_−(t)⟩ = λ(t)|φ_−(t)⟩,   |φ_−(t_i)⟩ = |φ_i⟩   (5.74)
   ⟨φ_+(t)|(ih̄ ∂⃖_t + H(t)) = ⟨φ_+(t)|λ′(t),   ⟨φ_+(t_f)| = ⟨φ_f|   (5.75)

where we have defined

   λ′(t) = ⟨φ_+(t)|ih̄ ∂⃖_t + H|φ_−(t)⟩ / ⟨φ_+(t)|φ_−(t)⟩   (5.76)

If we set the boundary conditions |φ_i⟩ = |ψ_i⟩ and |φ_f⟩ = |ψ_f⟩, then the trial ket
|φ_−(t)⟩ is related to the Schrödinger ket |ψ_−(t)⟩ by a phase factor,

   |φ_−(t)⟩ = exp[ −(i/h̄) ∫_{t_i}^{t} dt′ λ(t′) ] |ψ_−(t)⟩   (5.77)

Acting on the left with ⟨φ_f| we obtain the transition amplitude

   ⟨φ_f|φ_−(t)⟩ = e^{iS_c/h̄} ⟨φ_f|ψ_−(t)⟩   (5.78)

where the stationary action is

   S_c = ∫_{t_i}^{t_f} dt λ(t) − ih̄ log⟨φ_f|φ_−(t_f)⟩ = −ih̄ log⟨φ_f|ψ(t_f)⟩   (5.79)

Thus, the quantum transition amplitude is given by the stationary value of the action:

   ⟨φ_f|ψ(t_f)⟩ = e^{iS_c/h̄}   (5.80)

Unfortunately, the phases of the two trial vectors are undetermined. For example, if
we add an additional phase so that

   |φ_−′(t)⟩ = e^{−iξ(t)/h̄} |φ_−(t)⟩   (5.81)

where

   ξ(t) = ∫_{t_i}^{t} dt′ z(t′)   (5.82)

this new ket is now a solution of

   (ih̄∂_t − H(t))|φ_−′(t)⟩ = λ′(t)|φ_−′(t)⟩   (5.83)

where

   λ′(t) = λ(t) + z(t)   (5.84)

Fortunately, the phase indeterminacy does not change the transition amplitude, since
the final bra state is modified as well. We can eliminate this indeterminacy by imposing
an additional constraint on the system: that S is a functional of |ψ(t)⟩ and its Hermitian
conjugate ⟨ψ(t)|,

   S = ∫_{t_1}^{t_2} dt ⟨ψ(t)|ih̄∂_t − H|ψ(t)⟩ / ⟨ψ(t)|ψ(t)⟩   (5.85)

We also assume (as in classical mechanics) that the variations vanish at the boundaries,

   ⟨δψ(t_f)| = |δψ(t_f)⟩ = ⟨δψ(t_i)| = |δψ(t_i)⟩ = 0   (5.86)

What this means is that we are taking S to be a functional of the state vector and its
Hermitian conjugate at all times t_i < t < t_f other than the initial and final times. In
doing so, our final equations will not depend upon the choice of boundary conditions.
We can also set ⟨ψ(t)|ψ(t)⟩ = 1 for all time to enforce normalization. One can easily
verify that δS = 0 when

   (ih̄∂_t − H)|ψ(t)⟩ = 0   (5.87)

In other words, the action S is stationary with respect to all variations of |ψ(t)⟩ and
its Hermitian conjugate provided they are solutions of the Schrödinger equation.
Let us take, for instance, the case where the Hamiltonian is driven by some set of
external time-dependent variables (perhaps a set of nuclear coordinates), q(t), such
that at time t_i the state |ψ(t_i)⟩ is an eigenstate of H(q(t_i)) and at time t_f, |ψ(t_f)⟩ is an
eigenstate of H(q(t_f)). What is the stationary action connecting the initial and final
states? In other words, can we calculate the transition amplitude ⟨ψ(t_f)|ψ(t_i)⟩ such
that δ⟨ψ(t_f)|ψ(t_i)⟩ = 0? Let us define the total action as

   S = S_c + ih̄ log t_if   (5.88)

where

   t_if = ⟨ψ(t_f)|ψ(t_i)⟩ = e^{iS[q]/h̄}   (5.89)

is the transition amplitude between the initial and final states, S_c is the classical action

   S_c = ∫_{t_i}^{t_f} dt [ (m/2) q̇²(t) − V(q(t)) ]   (5.90)

and S[q] is the contribution to the action due to the quantum transition. Again, we
use the trick we used above and write this as

   ⟨ψ(t_f)|ψ(t_i)⟩ = ⟨ψ_+(t)|ψ_−(t)⟩   (5.91)

where t_f > t > t_i is some intermediate time. Setting V(q) = 0 and taking the variation
of S with respect to q(t) results in the classical equations of motion for q(t):28,29

   m q̈(t) = −Im[ ⟨ψ_+(t)|∂H(q(t))/∂q|ψ_−(t)⟩ / ⟨ψ_+(t)|ψ_−(t)⟩ ]   (5.92)

The fact that the transition matrix element also depends upon the path between q(t_i)
and q(t_f) means that the resulting force is path dependent. Consequently, in order to
determine the path, one must iterate this last equation self-consistently.
The equations of motion (Equation 5.92) were first derived by Phil Pechukas
in 1969 starting from a path-integral formulation for the fully quantum mechanical
problem of atomic scattering and then making a stationary phase approximation for
the nuclear trajectory. Although the approach is very appealing and gives the correct
semiclassical path, it is often impossible to converge a unique path if the time interval
t f − ti is too long and the states are strongly coupled.30–32
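Pechukas' self-consistent path is expensive to converge; a simpler, closely related mean-field (Ehrenfest) scheme replaces both the forward- and backward-propagated states in Equation 5.92 by the same evolving state, so that the force becomes an ordinary expectation value. The model Hamiltonian, parameters, and crude first-order integrator below are all illustrative choices only, not taken from the text:

```python
import numpy as np

hbar, m = 1.0, 1.0

def H(q):                                # two-state model driven by q
    return np.array([[0.5 * q, 0.2], [0.2, -0.5 * q]])

def dHdq(q):
    return np.array([[0.5, 0.0], [0.0, -0.5]])

q, p = -2.0, 2.0
psi = np.array([1.0, 0.0], dtype=complex)
dt = 1e-3
for _ in range(4000):
    force = -np.real(psi.conj() @ dHdq(q) @ psi)    # mean-field force on q
    p = p + force * dt
    q = q + (p / m) * dt
    psi = psi - 1j * dt * (H(q) @ psi) / hbar       # crude Euler step
    psi = psi / np.linalg.norm(psi)                 # renormalize each step
print(q, p, np.abs(psi)**2)
```

In a surface-hopping or Pechukas-style treatment, the force would instead be evaluated between two different states, and the trajectory iterated until self-consistent.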
How do we interpret this last result? Imagine that q(t) represents the scattering
trajectory of an atom and the quantum states are internal degrees of freedom (say, the
atom’s electronic states). At the initial time ti we prepare the system at q(ti ) in some
well-determined quantum state that we will take to be an eigenstate of H (q(ti )). As
time evolves, the quantum state |ψ(t)
may no longer be an eigenstate of H (q(t))
since the Hamiltonian is changing in time as well. As a result, |ψ(t)⟩ will evolve into
a superposition of eigenstates,

   |ψ(t)⟩ = Σ_n c_n(t)|φ_n(q(t))⟩   (5.93)
(5.93)
n

where the cn (t) are the transition amplitudes for starting in the initial state and evolving
into the nth eigenstate of H (q(t)) at some intermediate time ti < t < t f . Suppose at

time t f we observe the system in state |ψ f (t f )


, which is now an eigenstate of H (t f ).
The path with the least action connecting |ψi (ti )
to |ψ f (t f )
satisfies Equation 5.92.
In fact, every possible final state has its own unique stationary phase path connecting
it to the initial state.
Since we have determined that the quantum state is now |ψ f (t f )
, we lose all
quantum coherence between |ψ f (t f )
and any other alternative state at t f . If we
chronicle a series of such events such that at time t0 we prepare the system in some
initial state |ψ0 (t0 )
and at t1 we determine the quantum state to be |ψ1 (t1 )
, at t2 > t1
we determine the quantum state to be |ψ2 (t2 )
and so on, we can write a history of
events:
hist1 = {ψ0 , ψ1 , ψ2 , · · · ψ N } (5.94)
Between each segment, we compute a stationary phase trajectory q(t) according to
Equation 5.92. The resulting world-line depends strongly on the outcome of
each quantum transition and on how frequently we measure the quantum state.
Suppose we choose the time interval δt = t_2 − t_1 to be very small:

   U(t_2, t_1)|ψ_1(t_1)⟩ = [ 1 − i(δt/h̄)H(t_1) + · · · ]|ψ_1(t_1)⟩
                        ≈ [ 1 − i(δt/h̄)ε_1(q_1) + · · · ]|ψ_1(t_1)⟩   (5.95)
At time t2 hardly any quantum evolution will have occurred and we would determine
that the atom is still in its original quantum state. In fact, the quantum state will remain
in the original eigenstate of H over the course of the trajectory. If our scattering atom
was in an electronic excited state and we frequently inquire about its current state,
the atom will forever remain in that excited state. An alternative history may be
   hist2 = {ψ_0, ψ_1, ψ_2′, . . . , ψ_N′}   (5.96)

where hist1 and hist2 are the same until, between t_1 and t_2, the system makes a switch
from state ψ_2 to state ψ_2′. Up until time t_2, we would have some degree of quantum
coherence between the two histories and information can be passed from one world-
line to the other. After t2 there is no common phase relation between the two paths.
This is illustrated in Figure 5.1.

5.4 INTERACTION PICTURE


Very often we are faced with a situation in which the Hamiltonian of the system can
be broken into two terms
H = Ho + V (5.97)
where Ho describes some reference system and V describes some additional interac-
tion. For example, we may choose the reference system to be a harmonic oscillator
 This may be starting to sound a bit like the plot line for a Star Trek episode.
Representations and Dynamics 157


FIGURE 5.1 Illustration of coarse-graining of quantum histories. Starting from an initial
point, the system can follow any number of alternative paths as indicated by bifurcations.
The ovals surrounding each path indicate quantum fluctuations about the stationary phase
paths. Overlapping ovals indicate that two or more paths are quantum mechanically entangled
for a short period of time. (From Ref. 7)

with V being some additional coupling that induces some sort of dynamical evolution
of the system. With this in mind, we define the interaction wave function as
ψ I (t) = e+i Ho t/h̄ ψ S (t)
= e+i Ho t/h̄ e−i H t/h̄ ψ S (0) (5.98)
Taking the time derivative, we find
d
ih̄ ψ I (t) = V (t)ψ I (t) (5.99)
dt
where
V (t) = e+i Ho t/h̄ V e−i Ho t/h̄ (5.100)
is the interaction operator written in the Heisenberg representation of the reference
system. Since unitary transformations can be visualized as rotations in some N -
dimensional space, both the interaction wave function and coupling operator are
simultaneously rotated along with the reference system so that any departure from
the initial state is entirely due to the interaction. This has the distinct advantage of
eliminating the rapidly oscillating terms that appear in the evolution of the Schrödinger
state.
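Equation 5.100 is easy to make concrete numerically. The sketch below (Python with NumPy/SciPy; the two-level reference Hamiltonian and the static coupling are hypothetical choices, with h̄ = 1) builds V(t) = e^{+iHo t}V e^{−iHo t} and checks that its off-diagonal matrix element simply rotates at the Bohr frequency of the reference system:

```python
import numpy as np
from scipy.linalg import expm

# Interaction-picture coupling V(t) = e^{+i Ho t} V e^{-i Ho t}  (hbar = 1).
# Hypothetical reference system: Ho = diag(0, w0); static coupling V = sigma_x.
w0 = 2.0
Ho = np.diag([0.0, w0])
V = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def V_I(t):
    U0 = expm(1j * Ho * t)
    return U0 @ V @ U0.conj().T

# Matrix elements between eigenstates of Ho acquire phases at the Bohr
# frequency: <0|V_I(t)|1> = e^{-i w0 t} <0|V|1>.
t = 0.37
print(V_I(t)[0, 1], np.exp(-1j * w0 * t))
```

The only time dependence introduced by the transformation is the oscillation at the reference-system transition frequency, which is exactly the rapidly oscillating factor the interaction picture isolates.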
Let us now consider the time-evolution operator in the interaction representation.
As previously,
ψ I (t) = U I (t, to )ψo (5.101)
158 Quantum Dynamics: Applications in Biological and Materials Systems

Expanding U I in time,

U I (t, to ) = 1 + (1/i h̄) ∫_to^t dt1 V (t1 )U (t1 , to )

            = 1 + (1/i h̄) ∫_to^t dt1 V (t1 ) + (1/(i h̄)²) ∫_to^t dt1 ∫_to^{t1} dt2 V (t1 )V (t2 ) + · · ·    (5.102)

This is often referred to as the Dyson series, and it often serves as the starting point
for perturbative theories since each term involves subsequent interactions with the
coupling operator. The series can be taken to infinite order


U (t, to ) = Σ_{n=0}^∞ Un (t)    (5.103)

where each term is given by


Un (t) = (1/(i h̄)ⁿ) ∫_to^t dt1 ∫_to^{t1} dt2 · · · ∫_to^{tn−1} dtn V (t1 )V (t2 ) · · · V (tn )    (5.104)

where t ≥ t1 ≥ t2 ≥ · · · ≥ tn . The ordering of the operators in the integral is
crucial since we have no a priori guarantee that [V (t), V (t′ )] = 0 for t ≠ t′ . To avoid
problems associated with time ordering of the operators, we introduce a time-ordering
operator

P[A(t1 )B(t2 )] = { A(t1 )B(t2 ) for t1 ≥ t2 ;  B(t2 )A(t1 ) for t2 > t1 }    (5.105)

that has the effect of rearranging a series of operators into chronological order. For
example,

P[Ai (ti )A j (t j ) · · · An (tn )] = Ai (ti )A j (t j ) · · · An (tn ) (5.106)

with ti ≥ t j · · · ≥ tn .
Consider the second term in the expansion of the time-evolution operator:
U2 (t, 0) = (1/(i h̄)²) ∫_0^t dt1 ∫_0^{t1} dt2 V (t1 )V (t2 )    (5.107)

The implied region of integration in the (t1 , t2 ) plane is the triangle with t2 ≤ t1 .
On the other hand, in
U2 (t, 0) = (1/(i h̄)²) ∫_0^t dt2 ∫_0^{t2} dt1 V (t1 )V (t2 )    (5.108)

the implied region of integration is the complementary triangle with t1 ≤ t2 . Since both
integrals give the same result, we can write

U2 (t, 0) = (1/2)(1/(i h̄)²) ∫_0^t dt1 ∫_0^t dt2 P[V (t1 )V (t2 )]    (5.109)

As a result, we can write the time-ordered series as


U (t) = 1 + Σ_{n=1}^∞ (1/n!)(1/(i h̄)ⁿ) ∫_0^t dt1 · · · ∫_0^t dtn P[V (t1 ) · · · V (tn )]    (5.110)

This is the polynomial expansion for the exponential function, so we can immediately
write the interaction evolution operator as
 
U (t) = P exp( −(i/h̄) ∫_0^t V (t′ ) dt′ )    (5.111)
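The importance of the time ordering can be seen numerically. In the sketch below (Python with NumPy/SciPy; the coupling V(t) is an arbitrary example chosen so that it fails to commute with itself at different times, h̄ = 1), the time-ordered exponential is approximated by a product of short-time propagators and compared with the naive, unordered exponential of the integrated coupling:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def V(t):
    # A coupling that does not commute with itself at different times:
    # [V(t), V(t')] = 2i sin(t' - t) sigma_z != 0.
    return np.cos(t) * sx + np.sin(t) * sy

T, N = 2.0, 2000
dt = T / N
ts = (np.arange(N) + 0.5) * dt

# Time-ordered exponential: product of short-time propagators with the
# latest times acting on the left, exactly as the operator P arranges them.
U_ordered = np.eye(2, dtype=complex)
for t in ts:
    U_ordered = expm(-1j * V(t) * dt) @ U_ordered

# Naive (unordered) exponential of the integrated coupling.
U_naive = expm(-1j * sum(V(t) * dt for t in ts))

print(np.linalg.norm(U_ordered - U_naive))  # nonzero: the ordering matters
```

The two operators disagree, which is precisely why the symbol P must be kept in Equation 5.111 whenever [V(t), V(t′)] ≠ 0.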

5.5 PROBLEMS AND EXERCISES


Problem 5.1 Demonstrate that each of the above properties of U is true.

Problem 5.2 Using the mixing angle and rotation matrix given in Equation 4.6 show
that T H T † is diagonal with eigenvalues ε± .

Problem 5.3 Prove the Kubo identity:


[ Â, e−β B̂ ] = e−β B̂ ∫_0^β eλ B̂ [ Â, B̂ ] e−λ B̂ dλ

Problem 5.4 Since the time-dependent Schrödinger equation is a first-order differ-


ential equation with respect to time, the state at time t, ψ(t), is uniquely determined
by the state at time t = 0, ψ(0). In other words, we can write

ψ(t) = Ŝ(t)ψ(0)

where Ŝ(t) is some quantum mechanical operator.


1. Show that Ŝ(t) satisfies ih̄∂t Ŝ(t) = Ĥ Ŝ(t) where Ĥ is the Hamiltonian
operator. Also, show that Ŝ(t) is unitary.
2. Show that if Ĥ is independent of time, Ŝ(t) takes the form

Ŝ(t) = e−i Ĥ t/h̄

Problem 5.5 The expectation value of an operator B̂ is given by

⟨ B̂(t)⟩ = ⟨ψ(t)| B̂|ψ(t)⟩

1. Show that the time evolution of the Heisenberg operator B̂(t) = Ŝ −1 (t) B̂ Ŝ(t)
satisfies
⟨ B̂(t)⟩ = ⟨ψ(0)| Ŝ −1 (t) B̂ Ŝ(t)|ψ(0)⟩

2. Show that the time derivative of the Heisenberg operator B̂(t) is given by

i h̄∂t B̂(t) = B̂ H̃ − H̃ B̂

where H̃ = Ŝ −1 (t) Ĥ Ŝ(t).



3. Show that if [ Â, B̂] = Ĉ, then the corresponding Heisenberg operators
satisfy [ Â(t), B̂(t)] = Ĉ(t).

Problem 5.6 Prove the following relationship:


e L̂ Âe− L̂ = Â + [ L̂, Â] + (1/2!)[ L̂, [ L̂, Â]] + (1/3!)[ L̂, [ L̂, [ L̂, Â]]] + · · ·

SUGGESTED READING
There are any number of excellent textbooks on quantum mechanics. Listed below
are various texts I have found to be particularly useful in preparing this chapter.
1. Chemical Dynamics in Condensed Phases Relaxation, Transfer and Reac-
tions in Condensed Molecular Systems, Abraham Nitzan (Oxford Graduate
Texts, 2007). This is one of the best interdisciplinary accountings of dy-
namical processes in the condensed phase.
2. Quantum Mechanics, Claude Cohen-Tannoudji, Bernard Diu, and Frank
Laloë (Wiley Interscience, 1973)
3. Quantum Mechanics, A Modern Introduction, A. Das and A. C. Melissinos
(Gordon and Breach, 1986)
4. Quantum Mechanics, E. Merzbacher (Wiley, 1961).
6 Quantum Density Matrix

6.1 INTRODUCTION: MIXED VS. PURE STATES


Up until this point, we have concerned ourselves with a description of quantum
mechanics centered upon how a state |ψ⟩ evolves in time. Using this we can show
that the expectation value of an operator evolves as

⟨A(t)⟩ = ⟨ψ(t)| Â|ψ(t)⟩    (6.1)

However, for many instances, especially if we are interested in describing relaxation


processes, it is useful to introduce the density operator

ρ̂(t) = |ψ(t)⟩⟨ψ(t)|    (6.2)

taken as the outer product of the state vector with itself. From this definition we can
write

ρ̂ = Σ_{mn} cn∗ cm |m⟩⟨n|    (6.3)
   = Σ_{mn} ρmn |m⟩⟨n|    (6.4)

where the ρmn are the density matrix elements. Expectation values of an operator are
then given by the trace

⟨A(t)⟩ = Σ_{mn} Anm ρmn = Tr [ Âρ(t)]    (6.5)

where Tr [ Âρ(t)] denotes the trace operation:

Tr [ Âρ(t)] = Σ_{nn′} Ann′ ρn′ n

If ρ is diagonal, then

Tr [ Âρ(t)] = Σ_n Ann ρnn = Σ_n ⟨n|A|n⟩ Pn

where Pn = ρnn is the statistical probability of finding the system in state n. These
statistical weights must be such that Pn ≤ 1 and

Σ_n Pn = 1

Hence, we conclude that knowing ρ we can compute the statistical average of an


operator A.
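The equivalence of the trace prescription with the usual state-vector expectation value is easy to verify numerically. A minimal sketch (Python with NumPy; the amplitudes and the Hermitian observable are arbitrary choices):

```python
import numpy as np

# <A> computed two ways: from the state vector and from the trace formula.
c = np.array([0.6, 0.8j])                  # hypothetical amplitudes, normalized
rho = np.outer(c, c.conj())                # rho_mn = c_m c_n*
A = np.array([[1.0, 0.5], [0.5, -1.0]])    # an arbitrary Hermitian observable

expect_trace = np.trace(A @ rho).real
expect_state = (c.conj() @ A @ c).real
print(expect_trace, expect_state)          # the two prescriptions agree
```

Both routes give the same number because Tr[Âρ] = Σ_{nm} A_{nm} c_m c_n∗ = ⟨ψ|Â|ψ⟩.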


The diagonal elements of the density matrix, ρnn = ⟨n|ψ⟩⟨ψ|n⟩ = pn , are the
occupation numbers of the nth state while the off-diagonal ρnm = ⟨n|ψ⟩⟨ψ|m⟩ rep-
resent the phase coherences between the n and m states. Since the diagonal elements
represent probabilities,

1 ≥ ρnn ≥ 0

so that ρ is a positive semidefinite operator. Moreover, since the trace of a matrix is
invariant to representation, the eigenvalues of ρ must be populations as well. Further-
more, the eigenvectors of ρ are the pure states of the ensemble.
We can define a pure state as a system that can be described by a single-state vector.
Pure states evolve according to the rules of quantum mechanics to form coherent
superpositions with well-defined phase relations between the components

|ψ(t)⟩ = Σ_n cn (t)|n⟩

Let us define an initially pure state, |ψ⟩ = |λ⟩, with density matrix

ρ λ = |λ⟩⟨λ|

Notice that in this case the density matrix acts as a projection operator such that

(ρ λ )² = |λ⟩⟨λ|λ⟩⟨λ| = |λ⟩⟨λ|    (6.6)

Thus we conclude that for a pure state, Tr [ρ ²] = Tr [ρ] = 1. This relation holds in
any representation since the trace operation is invariant to basis transformations. As
a result, the evolution of a pure state under the rules of quantum mechanics results in
a density matrix,

ρ(t) = U |λ⟩⟨λ|U † = Uρ λ U †

that still preserves the invariance under the trace operation so that Tr [ρ(t)] = 1. Also,

Tr [ρ ²(t)] = Tr [Uρ λ U † Uρ λ U † ] = 1

which allows us to conclude that unitary time evolution transforms an initially pure
state into another pure state.
On the other hand, a mixed state cannot be described by a single-state vector but
consists of a statistical mixture of states with a probability pn of being in any one
of them. The members of the mixed state (that is, the ensemble) are independently
prepared and there is no phase coherence between them. For example, if our initial
system is at thermodynamic equilibrium, we can write it as a mixed state of the form

ρ = (1/Q) Σ_n e−β H |n⟩⟨n|    (6.7)

where β = 1/kT . If the |n⟩ are energy eigenstates of H , then

ρ = (1/Q) Σ_n e−β En |n⟩⟨n|    (6.8)

As before, let us define a density matrix for a mixed system as ρ = p1 |1⟩⟨1| + p2 |2⟩⟨2|.
Since p1 + p2 = 1, Tr [ρ] = 1. However, Tr [ρ ²] = p1² + p2² < 1 unless either p1 = 1
and p2 = 0 or p2 = 1 and p1 = 0.
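The purity criterion Tr[ρ²] is simple to check directly. A minimal sketch (Python with NumPy; the particular superposition and mixture weights are arbitrary choices):

```python
import numpy as np

# Tr[rho^2] distinguishes pure states from statistical mixtures.
psi = np.array([1.0, 1.0]) / np.sqrt(2)      # a coherent superposition
rho_pure = np.outer(psi, psi.conj())

rho_mixed = np.diag([0.5, 0.5])              # p1 = p2 = 1/2, no coherences

purity = lambda r: np.trace(r @ r).real
print(purity(rho_pure), purity(rho_mixed))   # 1.0 versus 0.5
```

The coherent superposition has unit purity, while the equal-weight mixture gives p1² + p2² = 1/2, the minimum possible value for a two-state system.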

6.2 TIME EVOLUTION OF THE DENSITY MATRIX


From our definition of the density matrix above,

ρ(t) = |ψ(t)⟩⟨ψ(t)|

Taking the time derivative,

i h̄ ∂ρ/∂t = (i h̄ ∂t |ψ(t)⟩)⟨ψ(t)| + |ψ(t)⟩(i h̄ ∂t ⟨ψ(t)|)    (6.9)

Since i h̄∂t |ψ(t)⟩ = H |ψ(t)⟩ and i h̄∂t ⟨ψ(t)| = −⟨ψ(t)|H , we have

i h̄ ∂ρ/∂t = H |ψ(t)⟩⟨ψ(t)| − |ψ(t)⟩⟨ψ(t)|H    (6.10)
           = [H, ρ]    (6.11)

which is often compactly written as

ρ̇ = −iLρ

where L is the Liouville superoperator

Lρ = (1/h̄)[H, ρ]

This last equation is referred to as the Liouville–von Neumann equation.
Let us take the typical case where the Hamiltonian is composed of a zeroth-order
term and an interaction: H = Ho + V (t). The corresponding Liouville–von Neumann
equation reads
i ∂ρ/∂t = (Lo + LV (t))ρ    (6.12)
where the action of Lo and LV on ρ is given by h̄Lo ρ = [Ho , ρ] and h̄LV ρ =
[V (t), ρ]. Formally, the evolution of the density matrix follows from
ρ(t) = e−i L t ρ(0) = U (t)ρ(0) = Us (t)ρ(0)Us† (t)
where Us (t) are the unitary time-evolution operators in the Schrödinger representation
and ρ(0) is the initial condition. As noted in Chapter 5, the exponential form is really
the result of an expansion in terms of an infinite series. Thus, we can write U (t) as
U = e−i Lo t − i ∫_0^t dt1 e−i Lo (t−t1 ) LV (t1 ) e−i Lo t1

  − ∫_0^t dt1 ∫_0^{t1} dt2 e−i Lo (t−t1 ) LV (t1 ) e−i Lo (t1 −t2 ) LV (t2 ) e−i Lo t2 + · · ·    (6.13)

Rearranging this a bit yields

U = e−i Lo t [ 1 − i ∫_0^t dt1 (e+i Lo t1 LV (t1 ) e−i Lo t1 )

  − ∫_0^t dt1 ∫_0^{t1} dt2 (e+i Lo t1 LV (t1 ) e−i Lo t1 )(e+i Lo t2 LV (t2 ) e−i Lo t2 ) + · · · ]    (6.14)

Taking each term in the parentheses as a Heisenberg operator evolving under Lo , we can write
this as

U = e−i Lo t [ 1 − i ∫_0^t dt1 LV I (t1 ) − ∫_0^t dt1 ∫_0^{t1} dt2 LV I (t1 )LV I (t2 ) + · · · ]    (6.15)

This allows us to define the propagator for the interaction representation through

U (t) = e−i Lo t U I (t)    (6.16)
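The formal solution ρ(t) = Us(t)ρ(0)Us†(t) is easy to check numerically. The sketch below (Python with NumPy/SciPy; the two-level Hamiltonian is an arbitrary choice, h̄ = 1) propagates an initially pure state and confirms that both the trace and the purity are preserved:

```python
import numpy as np
from scipy.linalg import expm

# rho(t) = U_s(t) rho(0) U_s(t)^dagger: unitary evolution keeps Tr[rho] = 1
# and Tr[rho^2] = 1, so an initially pure state stays pure.
H = np.array([[1.0, 0.3], [0.3, -1.0]])      # hypothetical two-level Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)
rho0 = np.outer(psi0, psi0.conj())

t = 1.7
U = expm(-1j * H * t)
rho_t = U @ rho0 @ U.conj().T

print(np.trace(rho_t).real, np.trace(rho_t @ rho_t).real)  # both remain 1
```

This is the numerical counterpart of the invariance arguments of Section 6.1.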

6.3 REDUCED DENSITY MATRIX


Up until now, we have labeled state space using some generic index n. In general,
however, n can be a collection of subindices labeling, for example, the spin, angular
momentum, and so forth of the state. It could also include continuous variables such
as the position or momentum of the state. In general we really should write

ρnn′ = ρn1 n2 ..., n1′ n2′ ...

Say, for example, we are interested in only one aspect of the system or in some
particular property, such as the spin or energy. We can then retain only the relevant
indices and define in such a way a reduced density matrix. Most commonly, the
reduced density matrix is used in cases where the total state space is partitioned into
interacting (or noninteracting) subsystems, such as the internal states of a molecule
coupled to the normal modes of a surrounding environment. In such cases we can
factor the total density matrix into a tensor product

ρ AB = ρ A ⊗ ρ B

If we take states |i⟩ as spanning space A and states | j⟩ as spanning B so that |i j⟩
spans the entire composite space, the tensor product is written as

ρ = Σ_{i j,kl} ρi j,kl |i j⟩⟨kl|

In the case of a molecule embedded in a matrix, we may only be explicitly interested


in the part of ρ AB corresponding to the internal states of the molecule. Thus, we define
the reduced density matrix for this subspace as

ρ A = Σ_{ik} σik |i⟩⟨k|

where the coefficients are obtained by



σik = Σ_j ρi j,k j

This is a “partial” trace since it involves summing only over states belonging to
space B.

The expectation value of any operator acting in space A can be computed using ρ A :

⟨A⟩ = Tr [ρ A A]

However, if we are interested in quantities involving space B or correlations between


subspaces A and B, we need the full density matrix.
For example, consider a pair of particles that are entangled in a superposition state

|Ψ− ⟩ = (1/√2)(|00⟩ − |11⟩)

where |0⟩ and |1⟩ label, say, the ground and excited states of each particle. The density
matrix for this system is then

ρ̂ = |Ψ− ⟩⟨Ψ− |
   = (1/√2)(|00⟩ − |11⟩)(1/√2)(⟨00| − ⟨11|)
   = (1/2)(|00⟩⟨00| − |00⟩⟨11| − |11⟩⟨00| + |11⟩⟨11|)    (6.17)
Say, for example, you want to find the reduced density matrix for the second particle.
For this you would take the partial trace over the first

ρ̂ 2 = tr1 (ρ̂)
    = (1/2)[tr (|0⟩⟨0|)|0⟩⟨0| − tr (|0⟩⟨1|)|0⟩⟨1| − tr (|1⟩⟨0|)|1⟩⟨0| + tr (|1⟩⟨1|)|1⟩⟨1|]
    = (1/2)[⟨0|0⟩|0⟩⟨0| − ⟨0|1⟩|0⟩⟨1| − ⟨1|0⟩|1⟩⟨0| + ⟨1|1⟩|1⟩⟨1|]
    = (1/2)(|0⟩⟨0| + |1⟩⟨1|)
    = (1/2) Î    (6.18)
Notice the reduced density matrix for the second particle tells us the probabilities of
that particular particle being in either state |0⟩ or |1⟩ with no mention or regard to
the state of the other particle. Notice also that ρ2 represents a mixed state for particle
#2. We cannot determine with complete certainty which state particle #2 is in because
it is not in one. Because of its entanglement with particle #1, particle #2 is not in a
single definable state, and likewise for particle #1.
If one were to make a measurement of the state of #1 (for example, |1⟩ emits a
photon to decay to |0⟩), the total system would be forced to be in state |00⟩. If we
were to then calculate the reduced density matrix for #2, we would find ρ2 = |0⟩⟨0|,
indicating that #2 has a 100% chance of being in state |0⟩ and a 0% chance of being
in state |1⟩. This is the “spooky” nature of quantum mechanics. Measuring the state
of one part of an entangled pair determines the state of the other.
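The partial trace computed above can be reproduced numerically. In the sketch below (Python with NumPy), the composite 4 × 4 density matrix is viewed as the rank-4 tensor ρ_{ij,kl} and the first index pair is contracted:

```python
import numpy as np

# Reduced density matrix of particle #2 in |Psi-> = (|00> - |11>)/sqrt(2).
# Composite basis ordering: |00>, |01>, |10>, |11>.
psi = np.zeros(4)
psi[0], psi[3] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over particle 1: contract i with k in rho_{ij,kl},
# i.e. sigma_{jl} = sum_i rho_{ij,il}.
rho2 = np.einsum('ijil->jl', rho.reshape(2, 2, 2, 2))

print(rho2)    # 0.5 * identity: particle #2 is in a maximally mixed state
```

The result is (1/2)Î, exactly Equation 6.18: the cross terms |00⟩⟨11| and |11⟩⟨00| drop out of the partial trace, leaving no trace of the coherence in the reduced description.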

6.3.1 VON NEUMANN ENTROPY


Recognizing the connection between statistical mechanics and the quantum density
matrix, John von Neumann formulated the concept of entropy as an extension of the
Gibbs and Shannon entropy to a quantum mechanical system. The von Neumann
entropy is defined as
S[ρ] = −Tr [ρ log ρ]
Using the definition of the density matrix for a thermal mixture, we can immediately
see that k B S[ρ] is the correct thermodynamic entropy. Just as in statistical mechanics
S gives a measure of the number of thermodynamically accessible states under a given
set of thermodynamic conditions, the von Neumann entropy gives the degree of departure
of the system from a pure state. In essence, it gives a measure of the degree of mixture.
Some properties of the von Neumann entropy functional S[ρ] include
• S[ρ] = 0 only for a pure state.
• The maximum value of S[ρ] = log N for maximally mixed states where
N is the number of states (that is, the dimension of the Hilbert space).
• S[ρ] is invariant under basis transformation of ρ.
• S[ρ] is concave. In other words, given a set of positive numbers λi > 0
  with Σi λi = 1 and density matrices ρi ,

  S[ Σi λi ρi ] ≥ Σi λi S[ρi ]

• S[ρ] is additive. That is, given two independent systems each with density
matrix ρ A and ρ B , S[ρ A ⊗ ρ B ] = S[ρ A ] + S[ρ B ]. If instead ρ A and ρ B are
the reduced density matrices of some general system ρ AB , then

|S[ρ A ] − S[ρ B ]| ≤ S[ρ AB ] ≤ S[ρ A ] + S[ρ B ]

This last property is termed subadditivity. In a quantum mechanical system,


the entropy of the composite system can in fact be lower than the sum of the
entropies of its parts. This is the case when there is any degree of coherence
between parts A and B.
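These properties are straightforward to check from the eigenvalues of ρ. The sketch below (Python with NumPy) verifies S = 0 for a pure state, S = log N for the maximally mixed state, and the subadditivity statement for the entangled pair of Section 6.3:

```python
import numpy as np

def vn_entropy(rho):
    # S[rho] = -Tr[rho log rho], evaluated from the eigenvalues (0 log 0 := 0).
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

pure = np.diag([1.0, 0.0])     # S = 0
mixed = np.eye(2) / 2          # maximally mixed: S = log 2

# Entangled pair: the composite state is pure, yet each reduced density
# matrix is maximally mixed, so S[rho_AB] < S[rho_A] + S[rho_B].
psi = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)
rho_red = np.einsum('ijil->jl', rho_AB.reshape(2, 2, 2, 2))

print(vn_entropy(pure), vn_entropy(mixed))
print(vn_entropy(rho_AB), vn_entropy(rho_red))
```

The composite entropy vanishes while each part carries log 2, the extreme case of the subadditivity inequality in which the whole is "less random" than its parts.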

6.4 THE DENSITY MATRIX FOR A TWO-STATE SYSTEM


To illustrate these properties, let us consider the case of a degenerate two-state system
with some coupling β with Hamiltonian
   
H = E ( 1 0 ; 0 1 ) + h̄β ( 0 1 ; 1 0 )

The equations of motion for the density matrix elements in this basis are thus

ρ̇ 11 = iβ(ρ12 − ρ21 )
ρ̇ 22 = −iβ(ρ12 − ρ21 )
ρ̇ 12 = iβ(ρ11 − ρ22 )
ρ̇ 21 = −iβ(ρ11 − ρ22 )    (6.19)

We can solve these equations in a number of ways, the easiest being to take the time
derivative of ρ̇ 11

ρ̈ 11 = iβ(ρ̇ 12 − ρ̇ 21 )
= −2β 2 (ρ11 − ρ22 ) (6.20)

Since ρ11 + ρ22 = 1,


ρ̈ 11 = 2β 2 − 4β 2 ρ11
Solving this last differential equation produces
1
ρ11 (t) = + C1 cos(2βt) + C2 sin(2βt)
2
with C1 and C2 being constants of integration. Setting ρ11 (0) = 1 yields

ρ11 (t) = cos2 (βt)

and
ρ22 (t) = sin2 (βt)
for the populations. Obtaining the coherences

ρ̇ 12 = iβ(ρ11 − ρ22 )

results in the integral


ρ12 (t) = iβ ∫_0^t dt′ (cos²(βt′ ) − sin²(βt′ ))
        = (i/2) sin(2βt)    (6.21)
The time evolution of the populations and coherence are shown in Figure 6.1. Notice
that at tβ = π/2, the original population in state 1 has been entirely transferred to
state 2 and entirely returns to the original state every βt = π. Notice that in order
to transfer population from state 1 to state 2, one must first establish a coherence
ρ12 between the two states. Only once this coherence has been established can a
population be effectively transferred. If this coherence is destroyed—for example,
though interaction with an external environment—the ability to transfer a population
from 1 to 2 and back is effectively diminished.
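The closed-form solutions above can be checked against direct propagation of the density matrix. In the sketch below (Python with NumPy/SciPy, h̄ = 1), the constant energy E is dropped since it contributes only a global phase, leaving H = h̄βσx:

```python
import numpy as np
from scipy.linalg import expm

# Degenerate two-state system: propagate rho(t) = U rho(0) U^dagger and
# compare with rho11 = cos^2(beta t) and rho12 = (i/2) sin(2 beta t).
beta = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.diag([1.0, 0.0]).astype(complex)    # rho11(0) = 1

def rho(t):
    U = expm(-1j * beta * sx * t)
    return U @ rho0 @ U.conj().T

t = 0.8
r = rho(t)
print(r[0, 0].real, np.cos(beta * t) ** 2)    # populations agree
print(r[0, 1], 0.5j * np.sin(2 * beta * t))   # coherence agrees
```

The numerically propagated populations and coherence reproduce Equation 6.21 and the cos²/sin² population formulas exactly.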

6.4.1 TWO-LEVEL SYSTEM UNDER RESONANCE COUPLING —REVISITED


Let us now revisit our discussion of what happens when a system is coupled to a
time-dependent driving field. For example, consider the case of a two-state system
with energies E 1,2 = ±h̄ωo coupled to a periodic driving field, such as a laser or a
FIGURE 6.1 Time evolution of various components of the density matrix for a degenerate
two-state system with coupling h̄β.

single phonon, with frequency Ω. This system is shown schematically in Figure 6.2a.
Whereas in our previous discussion of this system we assumed that the driving field
remained on essentially forever, here we shall consider the case where the field is
switched on at time t = 0 and then switched off at some later time. For the sake of
discussion, let the two states in question be two electronic states of some system and
the external field be the electric field. The Hamiltonian we consider is given by

H = h̄ωo σz + h̄(Ee x̂ cos(Ωt))    (6.22)

where μ̂ = e x̂ is the x component of the electric dipole operator and Ω is the laser
frequency with intensity E. Let us assume that x̂ only couples 1 and 2 so that its matrix
elements are

μ = ⟨1|e x̂|2⟩


FIGURE 6.2 (a) States 1 and 2 with energies ±h̄ωo are coupled to an external driving field
with frequency Ω. (b) Energy level diagram showing how state |1⟩ is dressed by its interaction
with the photon field so that it becomes nearly degenerate with state |2⟩. Note: The horizontal
offset between the wells is strictly for clarity.

Thus, our Hamiltonian in the {|1⟩, |2⟩} basis becomes

H = h̄ωo σz + h̄μE cos(Ωt)σx

From the discussion in Chapter 2, we identify the Rabi frequency as

ω1 = μE

and write the equations of motion for the density matrix elements as

ρ̇ 11 = iω1 cos(Ωt)(e+iωo t ρ12 − e−iωo t ρ21 )    (6.23)
ρ̇ 22 = −iω1 cos(Ωt)(e+iωo t ρ12 − e−iωo t ρ21 )    (6.24)
ρ̇ 12 = iω1 cos(Ωt) e−iωo t (ρ11 − ρ22 )    (6.25)
ρ̇ 21 = ρ̇ ∗12    (6.26)

Thus far, our analysis is exact. However, to proceed we need to make a judicious
approximation in order to simplify our analysis. First, note that when we combine the
cosine and exponential, we arrive at terms that are of the form

e±i(ωo +Ω)t and e±i(ωo −Ω)t

If the laser frequency is such that we are nearly resonant with h̄ωo , then only the terms
with ωo − Ω ≈ 0 will give a contribution to the transition rate between the two states.
The other terms with ωo + Ω ≈ 2ωo are off resonance and will contribute vanishingly little
to the transition rate. Thus we define the rotating wave approximation or RWA by
neglecting all off-resonant terms. This allows us to simplify the above equations as
ρ̇ 11 = i (ω1 /2)(e+i(ωo −Ω)t ρ12 − e−i(ωo −Ω)t ρ21 )    (6.27)
ρ̇ 22 = −i (ω1 /2)(e+i(ωo −Ω)t ρ12 − e−i(ωo −Ω)t ρ21 )    (6.28)
ρ̇ 12 = i (ω1 /2) e+i(ωo −Ω)t (ρ11 − ρ22 )    (6.29)
ρ̇ 21 = ρ̇ ∗12    (6.30)

If we are exactly on resonance, then all the exponential terms become unity and
the equations reduce to
ρ̇ 11 = i (ω1 /2)(ρ12 − ρ21 )    (6.31)
ρ̇ 22 = −i (ω1 /2)(ρ12 − ρ21 )    (6.32)
ρ̇ 12 = i (ω1 /2)(ρ11 − ρ22 )    (6.33)
ρ̇ 21 = ρ̇ ∗12    (6.34)

The solutions read

ρ11 = cos²(ω1 t/2)    (6.35)
ρ22 = sin²(ω1 t/2)    (6.36)
ρ12 = (i/2) sin(ω1 t)    (6.37)
ρ21 = −(i/2) sin(ω1 t)    (6.38)
These are identical to the equations we had for the degenerate two-state system. Thus,
on resonance, the dynamics of a nondegenerate two-state system becomes identical
to that of a degenerate system.
This can be understood by considering Figure 6.2b. We can imagine that the ef-
fect of coupling a two-state system to a driving field is identical to the case where
we have two wells representing the energy levels of the system plus the quantized
energy levels of a harmonic field. When the interaction is switched off, the har-
monic field is in its ground state, that is, there are no photons (or phonons) present.
When the field is switched on, we have at least one phonon present and the system
is now in the second energy level of the left-hand well in Figure 6.2b. Depending
upon the frequency of the laser, this state (|1⟩ + 1 photon) is now energetically
closer to |2⟩. When the resonance condition is met, the system oscillates between
the |1⟩ + 1 photon state and the |2⟩ + 0 photons state at the Rabi frequency ω1 .
The coherent transfer of population from one state to the other is termed transient
nutation.
For a laser field, we can easily control the duration of the interaction, and when
the field is switched off,
ρ̇ 11 = ρ̇ 22 = 0

but
ρ̇ 12 = +iωo ρ12 = ρ̇ ∗21

If the field is switched off at some later time t1 > 0, then for all time after t1 the
populations will remain constant with ρ11 = ρ11 (t1 ), ρ22 = ρ22 (t1 ). However, the
coherences will continue to evolve as

ρ12 = eiωo t ρ12 (t1 ) & ρ21 = e−iωo t ρ21 (t1 )

This is free precession in that the system continues to evolve even though no further
population is being transferred. State 1 is effectively locked into a superposition with
state 2.
If the duration of the pulse is such that ω1 t1 = π , then for times t > π/ω1 , ρ11 = 0
and ρ22 = 1. In other words, we have achieved a perfect population inversion. If we
are discussing a spin 1/2 system, we can imagine that all the spins in the system have
been flipped from up to down (or vice versa). Such pulses are termed “pi” pulses.
We can also define a “pi-over-two” pulse by setting the pulse duration to be such that
ω1 t1 = π/2. In that case, the magnitudes of the two coherences ρ12 and ρ21 are at

their maximal values. In general, we can define a “flip-angle” θ = ω1 t1 such that


following a pulse of duration t1 = θ/ω1 , the density matrix elements are exactly

ρ11 (t1 ) = cos2 (θ/2) (6.39)


ρ22 (t1 ) = sin2 (θ/2) (6.40)
ρ12 (t1 ) = i sin(θ)/2 (6.41)
ρ21 (t1 ) = −i sin(θ)/2 (6.42)
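A small helper makes the flip-angle picture concrete. The sketch below (Python with NumPy) evaluates Equations 6.39 through 6.42 for π and π/2 pulses:

```python
import numpy as np

# Density matrix immediately after a resonant pulse of flip angle
# theta = omega1 * t1, from the on-resonance RWA solutions above.
def after_pulse(theta):
    rho11 = np.cos(theta / 2) ** 2
    rho22 = np.sin(theta / 2) ** 2
    rho12 = 0.5j * np.sin(theta)
    return np.array([[rho11, rho12], [np.conj(rho12), rho22]])

print(after_pulse(np.pi))        # "pi" pulse: complete population inversion
print(after_pulse(np.pi / 2))    # "pi-over-two" pulse: |rho12| maximal at 1/2
```

A θ = π pulse drives all the population into state 2, while θ = π/2 leaves the populations equal and the coherence at its largest possible magnitude.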

To appreciate the effect of all this on the system, consider what happens to the
evolution of an observable, such as the average polarization μ
following preparation
by a pulse of duration θ = t1 ω1 . The x component of the polarization is given by

⟨μx (t)⟩ = Tr [ρ(t)μ̂] = e Tr [ρ(t)x̂]

For the two-state system at hand, the polarization operator in the {|1⟩, |2⟩} basis is
given by

μ̂x = μ ( 0 1 ; 1 0 )
Thus, for t > t1 ,

⟨μx ⟩ = −μ sin(θ) sin(ωo t)
In other words, the x component of the polarization vector of the sample oscillates
between ±μ sin(θ) at the Bohr transition frequency. According to classical elec-
trodynamics, an oscillating electric field must radiate at its oscillation frequency.
Consequently, this polarization will eventually lead to radiative decay in any real-
istic physical situation. We can introduce a phenomenological radiative decay as an
afterthought to the dynamics by requiring the population of state 2 to be exponentially
damped (corresponding to radiative decay to state 1). As a result, we expect

⟨μx (t)⟩rad = −μ sin(θ) sin(ωo t) × e−γ t

Since the coupling of a molecular or atomic state to the electromagnetic field is an


intrinsic property of that state, this radiative decay time 1/γ is referred to as the natural
lifetime and the exponential decay of the time signal results in a Lorentzian spectral
line shape centered at the transition frequency with full-width at half maximum equal
to γ . This spectral shape assumes that the entire sample is composed of identical
systems each identically prepared. Hence, the resulting spectral line shape is termed
the “homogeneous line shape.”
However, in reality no sample is composed of an ensemble of identical compo-
nents. In a realistic sample, each component may sit in a slightly different environment
and have a slightly different energy gap h̄ωok . If we let Pk be the probability that any
given component of the system has energy gap ωok , then the total density matrix for
the ensemble is the direct sum of the density matrices for the components weighted
by the probability Pk :

ρ = Σ_k Pk ρk

with

Σ_k Pk = 1

Note that if we were to write out the full density matrix in matrix form, it would be
in block-diagonal form such as:

    ⎛ ρ1   0   · · ·  0  ⎞
ρ = ⎜ 0    ρ2  · · ·  0  ⎟    (6.43)
    ⎜ ..        ..        ⎟
    ⎝ 0    0    0     ρn ⎠

and Pk would be the fractional number of subsystems with energy gap h̄ωok . Since
there is no coupling between subsystems, the off-diagonal blocks will remain 0 for
all times. In such a case, the density matrix of the entire system is a statistical mixture
of its component subsystems.
To calculate the polarization of the ensemble as a function of time, we need to
evaluate

⟨μx (t)⟩ = Tr [ρ μ̂x ]
         = Σ_k Pk Tr [ρk μ̂]
         = −μ sin θ Σ_k Pk sin(ωok t)    (6.44)

To make things concrete, let us assume that the energy gaps are normally distributed
about some average energy gap such that the probability of a subsystem having energy
gap h̄ωok is given by
P(ωok ) = (1/√(2π σ ²)) e−(ωok −ω)²/2σ ²

Taking the sum over subsystems to a continuous integral and integrating over all
frequencies results in

⟨μx (t)⟩ = −μ sin θ sin(ωt) e−σ ²t ²/2    (6.45)

where t = 0 is taken to be at the end of the initial polarizing pulse.
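The Gaussian average leading to Equation 6.45 can be reproduced by brute force. In the sketch below (Python with NumPy; the mean gap, width, and sample size are arbitrary choices), an ensemble of Bohr frequencies is drawn and the averaged signal is compared with the analytic envelope:

```python
import numpy as np

# Inhomogeneous dephasing: averaging sin(w t) over a normal distribution of
# transition frequencies reproduces the Gaussian envelope e^{-sigma^2 t^2 / 2}.
rng = np.random.default_rng(0)
wbar, sigma = 10.0, 1.0                  # hypothetical mean gap and width
w = rng.normal(wbar, sigma, 200000)      # ensemble of Bohr frequencies

t = 1.5
signal = np.mean(np.sin(w * t))          # ensemble-averaged polarization
analytic = np.sin(wbar * t) * np.exp(-sigma**2 * t**2 / 2)
print(signal, analytic)                  # agree to Monte Carlo accuracy
```

The agreement follows from E[e^{iωt}] = e^{iω̄t − σ²t²/2} for a normal distribution, which is exactly the continuous integral taken in the text.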


The decay of the polarization of the sample in this case is due to the fact that each
member of the ensemble is contributing a slightly different Fourier component to the
entire signal. Initially all the polarization components are in lockstep with each other
and contribute constructively to the total polarization. However, since each member is
oscillating at its own frequency, very soon we will have an equal likelihood of finding
any member of the ensemble with any possible orientation of its polarization. This
decay of the signal due to variations in the environment gives rise to an inhomogeneous
broadening of the spectral line shape associated with this signal.
Again, in a real physical system there will be a natural lifetime associated with
the excitation giving rise to a homogeneous contribution to the spectral line shape.

Whereas the inhomogeneous contribution is entirely due to the distribution of environments,
the homogeneous contribution is an intrinsic atomic or molecular property and
will at best be weakly dependent upon the environment surrounding the system. In the
parlance of the field, the natural lifetime giving rise to the homogeneous linewidth
is called the “longitudinal” relaxation time or T1 and the various contributions to
the total inhomogeneous line shape are lumped into a single timescale T2∗ , which is
termed the “transverse” relaxation time.

6.4.2 PHOTON ECHO EXPERIMENT


Consider the following experiment. At time t = 0 we excite the system with a resonant
π/2 pulse (ω1 t1 = π/2). As just discussed above, this results in a polarization of the
sample, which for t > t1 decays according to Equation 6.45. At some time td = t2 − t1
later, we apply a second resonant pulse of twice this duration (that is, a π pulse).
This has the effect of reversing the precession of the dipoles in the sample such that
after a second delay time t4 − t3 = td , the dipoles are once again perfectly
aligned, giving rise to a spontaneous repolarization of the sample, or an “echo.” This
time series of pulses is shown in Figure 6.3. This aligned state will inevitably decay
due to the same dephasing that occurred following the original π/2 pulse.33 The “spin-
echo” was first demonstrated for nuclear spins by Hahn in 1950.34,35 Hahn’s original
paper on this is one of the most cited papers in experimental physics, with close to
1000 citations according to the Physical Review online archive. Later, an analogous
“photon-echo” effect was observed in 1964 by Kurnit, Abella, and Hartmann.36,37
By varying the various pulse widths and delays, a whole host of “two-dimensional”
spectroscopy experiments can be devised. While such spectroscopies are now com-
monplace in the field of magnetic resonance spectroscopy, multidimensional spectro-
scopies are currently under active development for UV/visible and infrared ranges.
Aside from being an interesting physical phenomenon, the photon echo experi-
ment can resolve the homogeneous from inhomogeneous contributions to the spectral
line shape. If T1  T2∗ and we can space the two pulses by some delay time td < T1 ,
we can determine the T1 time by monitoring the attenuation of the intensity of the
echo pulse as a function of the td . In doing so, we can unambiguously separate the
two components of the total spectral line shape.
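The echo itself can be mimicked in a few lines. In the sketch below (Python with NumPy, working in the rotating frame so that each member of the ensemble is represented only by its detuning-dependent phase factor, and idealizing the π pulse as complex conjugation of the coherences), the dephased polarization revives exactly after the second delay:

```python
import numpy as np

# Idealized echo: after a pi/2 pulse, member k of the ensemble carries a
# coherence with phase e^{-i dw_k t}. A pi pulse at t = td conjugates these
# phases, so after a second interval td every member is back in phase.
rng = np.random.default_rng(1)
dw = rng.normal(0.0, 1.0, 50000)       # inhomogeneous detunings

td = 4.0
phase = np.exp(-1j * dw * td)          # free induction decay over [0, td]
print(abs(np.mean(phase)))             # tiny: the macroscopic signal has dephased

phase = np.conj(phase)                 # the pi pulse reverses the phases
phase = phase * np.exp(-1j * dw * td)  # second free evolution of duration td
print(abs(np.mean(phase)))             # back to 1: the echo at t = 2*td
```

Because the inhomogeneous phases cancel pairwise, only the homogeneous (T1) decay, which is not included in this sketch, attenuates the echo, which is why the experiment can separate the two contributions.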


FIGURE 6.3 Photon echo time sequence. Left: The two input pulses with areas π/2 and π .
Right: On same time scale are the resulting output signals for the macroscopic polarization
P(t) in the form of two free induction decay signals, each with nearly identical lifetimes.

6.4.3 RELAXATION PROCESSES


We conclude our discussion of the density matrix with a phenomenological description
of the relaxation of the density matrix. In a later chapter we will be more specific as
to how quantum relaxation occurs when a system is in thermal contact with a bath
environment. For the time being, however, we simply assume that the net contribution
of the coupling between the chromophore and its surroundings and the radiative decay
of the chromophore’s excited state can be described with a few parameters that can be
related to spectroscopic observables such as T1 and T2∗ .
As a starting point for discussion, consider the transfer of an excitation (exciton)
from one molecular species (A) to a nearby neighbor (B). The reaction we consider
can be written as
A∗ + B  A + B ∗
I have written this as a reversible reaction since the interaction that facilitates the
forward transfer also facilitates the reverse transfer. We can write the Schrödinger
equation describing this reaction in matrix form as
    
$$\begin{pmatrix} E_A & V \\ V & E_B \end{pmatrix}\begin{pmatrix} a(t) \\ b(t) \end{pmatrix} = i\hbar \begin{pmatrix} \dot{a}(t) \\ \dot{b}(t) \end{pmatrix} \qquad (6.46)$$

where V is the matrix element coupling the two states and E A and E B are their
energies. The coefficients, a(t) and b(t), are the probability amplitudes for observing
the excitation on A and B, respectively, at time t. Take, for example, the case where
the two species are identical so that E A = E B and at time t = 0, A is photoexcited
so that a(0) = 1 and b(0) = 0. We can immediately write the solution as

$$a(t) = \cos(Vt/\hbar) \quad\text{and}\quad b(t) = -i\sin(Vt/\hbar) \qquad (6.47)$$

where we see that the exciton is passed back and forth between A and B at the Rabi
frequency, Ω = 2V /h̄, since the populations |a(t)|² and |b(t)|² oscillate as [1 ± cos(Ωt)]/2.
This oscillation is due to the fact that the initial state is not a stationary state.
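The reversible exchange described by Equation 6.47 is easy to verify numerically. The sketch below is not from the text: the unit system (ħ = 1), the value of V, and the fixed-step RK4 integrator are my own illustrative choices. It propagates the two-state Schrödinger equation and compares the amplitudes with the analytic solution:

```python
import numpy as np

hbar = 1.0
V = 0.5                        # coupling matrix element (arbitrary units)
H = np.array([[0.0, V],        # degenerate case, E_A = E_B = 0
              [V, 0.0]])

def propagate(c0, t, nstep=4000):
    """Integrate i*hbar*dc/dt = H c with fixed-step 4th-order Runge-Kutta."""
    c = np.array(c0, dtype=complex)
    dt = t/nstep
    f = lambda c: (-1j/hbar)*(H @ c)
    for _ in range(nstep):
        k1 = f(c); k2 = f(c + 0.5*dt*k1)
        k3 = f(c + 0.5*dt*k2); k4 = f(c + dt*k3)
        c = c + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    return c

t = 2.0
a, b = propagate([1.0, 0.0], t)
# analytic amplitudes: a(t) = cos(Vt/hbar), b(t) = -i sin(Vt/hbar),
# so the populations |a|^2 and |b|^2 oscillate at the Rabi frequency 2V/hbar
print(abs(a)**2, abs(b)**2)
```

Because the initial state is not an eigenstate of H, the populations oscillate indefinitely; it is the relaxation terms introduced next that damp this exchange.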
We know, however, that in a thermal system, energy transfer can be irreversible
due to contact and mixing between the donor and acceptor species with the solvent
media or matrix in which the two molecules are embedded. To consider the case of
irreversible transfer, we need to use a density matrix approach, ρ = |ψ⟩⟨ψ|, where
the diagonal elements of the density matrix, ρ11 = |a(t)|2 and ρ22 = |b(t)|2 , are
the populations of each state and the off-diagonal elements, ρ12 = a ∗ (t)b(t) and
ρ21 = b∗ (t)a(t), are the coherences between the two states. For an isolated system,
the time evolution of the density matrix is given by the Liouville–von Neumann
equation,

ih̄ ρ̇ = [H, ρ] (6.48)

However, for a system imbedded in an environment, we need to consider the relaxation


populations and coherences to their equilibrium values. For the moment, we take a
phenomenological approach and define a longitudinal relaxation time T1 as the time
scale for the populations of each state to relax to its equilibrium value and a transverse
Quantum Density Matrix 175

relaxation time T2 as the time scale for coherence relaxation. For a statistical mixture,
ρ12 = ρ21 = 0 and we can write the equations of motion for ρ as

$$\frac{\partial \rho_{ii}}{\partial t} = \frac{1}{i\hbar}[H, \rho]_{ii} - \frac{1}{T_1}\left(\rho_{ii} - \rho_{ii}^{(\mathrm{eq})}\right) \qquad (6.49)$$
for the diagonal terms and

$$\frac{\partial \rho_{ij}}{\partial t} = \frac{1}{i\hbar}[H, \rho]_{ij} - \frac{1}{T_2}\rho_{ij} \qquad (6.50)$$
for the coherences. Since we are dealing with the transfer of an electronic excitation
from one species to the next, we can assume that at thermal equilibrium, all of the
population is in the ground electronic state |0⟩, which we have not explicitly included.
If we take E A , E B ≫ kT , the thermal populations of the exciton states are vanishingly
small and we can include the ground state only as a “sink” such that the total
population ρ11 + ρ22 + ρ00 = 1. Thus, our equations of motion for the two excited
states read
$$\dot{\rho}_{11} = -\frac{i}{\hbar}V(\rho_{21} - \rho_{12}) - \frac{1}{\tau_A}\rho_{11} \qquad (6.51)$$
$$\dot{\rho}_{22} = +\frac{i}{\hbar}V(\rho_{21} - \rho_{12}) - \frac{1}{\tau_B}\rho_{22} \qquad (6.52)$$
$$\dot{\rho}_{12} = -\frac{i}{\hbar}V(\rho_{22} - \rho_{11}) - \frac{1}{T_2}\rho_{12} - \frac{\Delta E}{i\hbar}\rho_{12} \qquad (6.53)$$
$$\dot{\rho}_{21} = +\frac{i}{\hbar}V(\rho_{22} - \rho_{11}) - \frac{1}{T_2}\rho_{21} + \frac{\Delta E}{i\hbar}\rho_{21} \qquad (6.54)$$
Here, τ A and τ B are the radiative lifetimes of the A and B excitons and ΔE is the
energy difference. We now consider various limits to understand the physical
regimes described by these equations.
In the case of strong coupling, we take the Rabi frequency to be much greater than
the radiative rates, Ω = 2V /h̄ ≫ 1/τ A , 1/τ B , as well as the dephasing rate, 1/T2 . In
this case, the exchange between A and B is far more rapid than any other process in
the system. Our equations of motion reduce to

$$\ddot{\rho}_{11} + \Omega^2\rho_{11} = \Omega^2/2 \qquad (6.55)$$

and we have the same oscillatory behavior as previously. A comparison between the
numerically exact solution and the approximate solution is shown in Figure 6.4(a) for
the case of ΔE = 0.1, V = 1/2, and τ = T2 = 10⁴.
Identical systems: For the case of exciton exchange between two identical sys-
tems, we have τ A = τ B = τ and ΔE = 0. In this limit, we can arrive at an equation
for the population in state 1:
   
$$\ddot{\rho}_{11} + \left(\frac{1}{\tau} + \frac{1}{T_2}\right)\dot{\rho}_{11} + \frac{1}{T_2}\left(\frac{1}{\tau} + T_2\Omega^2\right)\rho_{11} = \Omega^2/2 \qquad (6.56)$$


FIGURE 6.4 Comparison between numerically exact evaluation and various approximations.
(a) Strong coupling with no decay of population or dephasing, 1/T2 , 1/τ → 0. (b) Identical
systems (ΔE = 0) with 1/T2 ≫ 1/T1 . (c) Rapid dephasing with 1/T2 ≫ Ω.

These are the equations of motion for a damped oscillating system. Taking the initial
condition to be ρ11 (0) = 1 and ignoring the oscillatory part, we obtain the solution
for the overdamped decay of the initial population
$$\rho_{11,\mathrm{decay}}(t) = \frac{1}{2}e^{-t/\tau}\left(1 + e^{-t/T_2}\right) \qquad (6.57)$$
2
The results of this case are shown in Figure 6.4(b) for the case of ΔE = 0, V = 1/2,
τ = 500, and T2 = 10. It is interesting to note that the approximate solution given by
Equation 6.57 does not decay at long times.
Rapid dephasing: As a final limit, we take the case where the dephasing time
is short compared with the radiative lifetime. This gives us the case of a damped
oscillator:
 
$$\dot{\rho}_{11} + \left(\frac{1}{\tau_A} + \frac{\Omega^2 T_2/2}{1 + (\Delta E\, T_2/\hbar)^2}\right)\rho_{11} = 0 \qquad (6.58)$$
The solution we immediately obtain is
  
$$\rho_{11}(t) = \exp\left[-\left(\frac{1}{\tau_A} + W\right)t\right] \qquad (6.59)$$
with
$$W = \frac{2|V|^2 T_2/\hbar^2}{1 + (T_2\,\Delta E/\hbar)^2} \qquad (6.60)$$
In this limit, we see energy transfer as a truly irreversible process with rate constant
W , which gives the probability of transfer from A to B per unit time. This estimate
works well if the dephasing time is the shortest time scale in the system. In Figure
6.4(c) we show the case for T2 = 0.05, V = 0.1, ΔE = 1, and τ = 100. The dashed
curve is the approximate solution with the solid curve being the numerically exact
solution.
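The rapid-dephasing limit can be cross-checked by integrating Equations 6.51–6.54 directly and comparing ρ11(t) with the rate-equation result exp[−(1/τA + W)t] of Equations 6.59–6.60. This is a rough sketch, not the author's code; the parameter values mirror those quoted for Figure 6.4(c), while the unit system (ħ = 1) and fixed-step RK4 integrator are my own choices:

```python
import numpy as np

hbar = 1.0
V, dE = 0.1, 1.0              # coupling and site energy difference (Fig. 6.4(c))
T2, tau = 0.05, 100.0         # dephasing time, radiative lifetime (tau_A = tau_B)

def rho_dot(r):
    """Right-hand sides of Eqs. 6.51-6.54; r = (rho11, rho22, rho12, rho21)."""
    r11, r22, r12, r21 = r
    return np.array([
        -1j/hbar*V*(r21 - r12) - r11/tau,
        +1j/hbar*V*(r21 - r12) - r22/tau,
        -1j/hbar*V*(r22 - r11) - r12/T2 - dE/(1j*hbar)*r12,
        +1j/hbar*V*(r22 - r11) - r21/T2 + dE/(1j*hbar)*r21])

r = np.array([1, 0, 0, 0], dtype=complex)   # excitation starts on A
dt, tmax = 0.001, 20.0
for _ in range(int(tmax/dt)):
    k1 = rho_dot(r); k2 = rho_dot(r + 0.5*dt*k1)
    k3 = rho_dot(r + 0.5*dt*k2); k4 = rho_dot(r + dt*k3)
    r = r + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

W = (2*V**2*T2/hbar**2)/(1 + (T2*dE/hbar)**2)   # Eq. 6.60
approx = np.exp(-(1/tau + W)*tmax)              # Eq. 6.59
print(r[0].real, approx)                        # near-identical decay curves
```

The agreement improves as T2 becomes the shortest time scale in the problem, which is exactly the condition quoted in the text.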
Note that if we take the dephasing time to be extremely short, T2 → 0, then the
transfer rate vanishes. This underscores the importance of the buildup of quantum

coherence between the two coupled states. In many regards, this is much like the old
phrase that a watched pot never boils. If we take T2 as a time scale by which the envi-
ronment queries the system as to which state it happens to be in at a particular instant
in time, the system must be found in one state or the other and hence immediately
after that instant in time one or the other of the populations must be exactly 1 and
the other exactly 0 and all the coherences between the two must exactly vanish. As a
result, even if the states are strongly coupled, if T2 ≪ h̄/V , the exciton is effectively
localized on the initial state forever.
General solution: We now seek a general solution to the equations of motion in
the form of a damped oscillator. Notice that if we integrate Equation 6.59 over all
time, we obtain an equation of the form
$$\int_0^\infty dt\,\rho_{11}(t) = \left(\frac{1}{\tau_A} + W\right)^{-1} \qquad (6.61)$$

For now, let us distinguish the approximate rate W from Eq. (6.60) from the exact
rate, which we will obtain from the exact solution below, and look for the case in
which the two are identical.
At this point it is best to work with the Laplace transformed versions of the
equations of motion,
$$\mathcal{L}\rho = \tilde{\rho}(s) = \int_0^\infty e^{-st}\rho(t)\,dt \qquad (6.62)$$

and

$$\mathcal{L}\dot{\rho} = s\tilde{\rho} - \rho(0) \qquad (6.63)$$

First, the transformed equations must be true for all values of s, so we take the case
of s = 0. Secondly, the initial conditions are such that only ρ11 (0) = 1 with all other
elements equal to zero. Thus, our Laplace transformed equations of motion reduce to
a set of algebraic equations [where we take ρ̃ = ρ̃(0)]:

$$-1 = -\frac{i}{\hbar}V(\tilde{\rho}_{21} - \tilde{\rho}_{12}) - \frac{1}{\tau_A}\tilde{\rho}_{11} \qquad (6.64)$$
$$0 = +\frac{i}{\hbar}V(\tilde{\rho}_{21} - \tilde{\rho}_{12}) - \frac{1}{\tau_B}\tilde{\rho}_{22} \qquad (6.65)$$
$$0 = -\frac{i}{\hbar}V(\tilde{\rho}_{22} - \tilde{\rho}_{11}) - \frac{1}{T_2}\tilde{\rho}_{12} - \frac{\Delta E}{i\hbar}\tilde{\rho}_{12} \qquad (6.66)$$
$$0 = +\frac{i}{\hbar}V(\tilde{\rho}_{22} - \tilde{\rho}_{11}) - \frac{1}{T_2}\tilde{\rho}_{21} + \frac{\Delta E}{i\hbar}\tilde{\rho}_{21} \qquad (6.67)$$

After quite a bit of tedious algebra, we obtain the final result in the desired form
$$\tilde{\rho}_{11}^{-1} = \frac{1}{\tau_A} + W \qquad (6.68)$$

where the exact rate is given by
$$W = \frac{2|V|^2 T_2/\hbar^2}{1 + (T_2\,\Delta E/\hbar)^2 + (2|V|^2/\hbar^2)\,T_2\,\tau_B} \qquad (6.69)$$
In the case of weak interaction, 2T2 τ B |V |²/h̄² ≪ 1, we recover the approximate rate
of Eq. (6.60) from above. In the case of very strong interaction, W → 1/τ B and the
transfer is dominated by the radiative lifetime of B. It is important to note that the
transfer is then completely independent of the coupling V .
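Equation 6.61 offers a direct numerical test of the exact rate of Eq. (6.69): the time integral of ρ11 computed from Equations 6.51–6.54 should equal (1/τA + W)⁻¹. The following sketch is illustrative only; the parameters (and ħ = 1 units) are arbitrary choices of mine, not values from the text:

```python
import numpy as np

hbar = 1.0
V, dE = 0.3, 0.5
T2, tauA, tauB = 2.0, 40.0, 60.0

def rho_dot(r):
    """Right-hand sides of Eqs. 6.51-6.54; r = (rho11, rho22, rho12, rho21)."""
    r11, r22, r12, r21 = r
    return np.array([
        -1j/hbar*V*(r21 - r12) - r11/tauA,
        +1j/hbar*V*(r21 - r12) - r22/tauB,
        -1j/hbar*V*(r22 - r11) - r12/T2 - dE/(1j*hbar)*r12,
        +1j/hbar*V*(r22 - r11) - r21/T2 + dE/(1j*hbar)*r21])

r = np.array([1, 0, 0, 0], dtype=complex)
dt = 0.005
integral = 0.0
for _ in range(int(600/dt)):          # integrate far past all lifetimes
    integral += r[0].real*dt
    k1 = rho_dot(r); k2 = rho_dot(r + 0.5*dt*k1)
    k3 = rho_dot(r + 0.5*dt*k2); k4 = rho_dot(r + dt*k3)
    r = r + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

# exact rate, Eq. 6.69
W = (2*V**2*T2/hbar**2)/(1 + (T2*dE/hbar)**2 + (2*V**2/hbar**2)*T2*tauB)
print(integral, 1/(1/tauA + W))       # the two agree to discretization error
```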
Typically in molecular systems, τ B is not determined by the radiative lifetime
from the excited state to the ground state but by the vibrational relaxation time, which
is typically on the order of a ps rather than ns. For the moment, we shall ignore the
internal dynamical contributions to energy transfer and focus our attention on the
electronic contributions.

6.5 DECOHERENCE
Certainly one of the hallmarks of quantum dynamics is the fact that over the course
of the evolution of a system, it evolves from some initially prepared state to a su-
perposition of states. This leads to one of the effects that make quantum mechanics
interesting, namely, interference. The most common thought experiment (and very
colorfully described in Feynman’s book) is where electrons are shot from some source
toward a blocking screen with two small parallel slits. If the slits are close enough
together, then there is equal likelihood for an electron to go through either slit. For
classical electrons, this would result in the accumulation of two distributions of elec-
trons behind the screen: those that went through slit #1 and those that went through
slit #2. However, what is observed is an interference pattern consistent with a plane
wave passing through both slits and interfering constructively and destructively on the
other side—much like water waves. Since electrons are not divisible into chunks of
partial electrons, we have to assume that each electron passing through the observing
screen went through one or the other slits and not through both.
By its very nature, quantum mechanics likes to explore all equivalent (or nearly
equivalent) alternatives. Yogi Berra put it best in saying, “When you come to a fork in
the road, you take it.” Perhaps the “Yogi Berra” rule of quantum mechanics is “When
you come to a fork in the road, you take both paths.” The story behind this quote is
that Yogi lived at the end of a cul-de-sac. So, if you were going to Yogi’s house, you
would eventually have to take the left or right turn . . . both of which would land you
at Yogi’s house. I wonder how many Brooklyn Dodgers were lost through destructive
interference this way.
Returning to the double-slit thought experiment, if we try to monitor the flux of
electrons through either hole, then we force the electron’s wave function to localize
every time we observe an electron passing by. Say we observe the flux using a laser
beam so that the scatter of a photon by the electron indicates the electron’s passage.
If we turn the light intensity down low so that some electrons go by undetected,
we partially recover the interference pattern such that the resulting distribution of
electrons on the final detector represents the weighted sum of electrons that got
caught (with no interference) and those that slipped by undetected.

6.5.1 DECOHERENCE BY SCATTERING


From an operational standpoint, dephasing and decoherence are identical in that
they both cause the off-diagonal elements of the density matrix to relax to zero so
as to produce a statistical mixture. However, this “relaxation of the off-diagonal
elements” statement is not entirely rigorous, although one finds this terminology
throughout the literature. A more precise statement is that through the interaction
with an environment, the density matrix becomes diagonal in a basis of eigenstates
of the system-bath interaction operator. In this representation, [Vsb , σ ] = 0. Zurek
refers to such states as “pointer states” since it is into these states that the system is
directed through its interaction with the bath.
Dephasing refers to the fact that if we have an ensemble of identically prepared
systems, each evolves in a slightly different (static) environment. When we average
over the ensemble, the accumulated phase differences interfere destructively and sum
to zero, leading to a loss of polarization of the sample. The fact that the ensemble can
be repolarized as in the spin-echo experiment indicates that dissipative and irreversible
effects are not important on the time scale of the spin-echo experiment. Decoherence,
on the other hand, is the irreversible loss of phase coherence between states for a single
subsystem embedded in an environment. It results from the fact that the eigenstates
of the subsystem become intertwined with the eigenstates of the environment. In a
somewhat prosaic sense, the environment makes a series of weak measurements on
the system, asking it, “What state are you in now?” If the system answers, “OK, now
I’m in state #2!” then the potential force coupling the environment to the system will
be the force for state #2. Furthermore, since the environment has determined that the
system is in fact in state #2, the reduced density matrix for the system must at that
instance correspond to a pure state with the system’s state vector in state #2. As the
coupled systems continue to evolve, each will ascertain from the other information
concerning how it is to evolve. Consequently, one can easily imagine that this dialog
could be recorded in the form of a series of events indicating the state of the system
at various time steps.
We can formalize this somewhat by letting |x⟩ be the position state of a particle
and |X ⟩ be the state of the macroscopic environment. We shall assume for the time
being that the interaction between the particle and its surroundings is purely elastic
with no recoil. If we prepare the system in a given state at time t = 0 and let it
interact with the surroundings, then we have the following:

$$|x\rangle|X\rangle \;\to\; |x\rangle|X_x\rangle = |x\rangle S_x|X\rangle$$

where the x subscript denotes that the surroundings have interacted with the quantum
particle and are thus entangled. Sx is the scattering matrix describing the process. If
instead we start with a superposition state
 
$$\int dx\,\phi(x)|x\rangle|X\rangle \;\to\; \int dx\,\phi(x)|x\rangle S_x|X\rangle$$

the reduced density matrix for the system can be written as
$$\rho(x, x') = \phi(x)\phi^*(x') \;\to\; \phi(x)\phi^*(x')\,\langle X|S_{x'}^{\dagger} S_x|X\rangle$$


In order to estimate ⟨X |S_{x'}† S_x |X ⟩, let us consider the scattered light in terms of
incoming and outgoing plane waves and assume that the wavelength of the light is
long enough that it cannot resolve distances smaller than |x − x'|, that is, |x − x'| ≪ λ,
where λ = c/ν is the wavelength of the scattered light. If this is the case, then,
working in the wave-vector representation,
$$S_x(k, k') = S(k, k')\,e^{-i(k-k')x}$$
where S(k, k') is the scattering matrix, which we can relate to the form factor of the
interaction. As the result of this impulsive approximation, we can write
$$\rho(x, x'; t) = \rho(x, x'; 0)\,\exp[-\Gamma(x - x')^2 t]$$
where
$$\Gamma = \frac{k^2 N v \sigma_{\mathrm{eff}}}{V}$$
with k as the wave vector, N v/V the incoming flux that we can relate to the collision
frequency, and σ_eff the effective cross-section. Γ is the decoherence rate, given as the
number of scattering events per unit time per unit area. It is also (formally) equivalent
to the dephasing rate 1/T2 introduced earlier in this chapter. Here, at least, we have
some inkling of how the dephasing process may actually occur.
If we extend this analogy to a quantum state in a solvent environment, such as a
solvated electron or an excited state of a molecule, then we can relate the effective
cross-section to the molecular radius, σ_eff = πr²; k is given by the de Broglie
wavelength of the scattering particle, k = 2π/λ_th, and v is replaced by the mean
thermal velocity, v = √(kT/m). Pulling this together, we arrive at a simple estimate
for the decoherence rate for condensed-phase states:
$$\Gamma = \frac{2\pi^2 m (kT)^{3/2} r^2 N}{h^2 V}$$
Still working within the impulsive approximation, the resulting equation of motion
for the reduced density matrix of the quantum particle is
$$i\hbar\frac{\partial\rho}{\partial t} = [H, \rho] - \frac{i\Gamma}{\hbar}[x, [x, \rho]]$$
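The pointwise suppression of coherences is easy to see numerically. The sketch below (my own illustration, not from the text) propagates only the pure-decoherence part of the master equation, ∂ρ/∂t = −(Γ/ħ²)[x,[x,ρ]], on a position grid and compares with the Gaussian factor exp[−Γ(x−x')²t/ħ²], which reduces to the form above with ħ = 1. The grid, the initial "cat" state, and the value of Γ are arbitrary assumptions:

```python
import numpy as np

hbar, Gamma = 1.0, 0.2
x = np.linspace(-5, 5, 101)
X = np.diag(x)                                   # position operator on the grid

# superposition of two separated Gaussian packets
psi = np.exp(-(x - 2)**2) + np.exp(-(x + 2)**2)
psi = psi/np.linalg.norm(psi)
rho0 = np.outer(psi, psi)

# pure decoherence term only: drho/dt = -(Gamma/hbar^2)[x,[x,rho]]
r = rho0.copy()
dt, t = 0.005, 1.0
for _ in range(int(t/dt)):
    dd = X @ X @ r - 2.0*(X @ r @ X) + r @ X @ X    # double commutator [x,[x,r]]
    r = r - (Gamma/hbar**2)*dd*dt

# analytic result: rho(x,x';t) = rho(x,x';0) exp(-(Gamma/hbar^2)(x-x')^2 t)
D = x[:, None] - x[None, :]
exact = rho0*np.exp(-(Gamma/hbar**2)*D**2*t)
print(np.max(np.abs(r - exact)))                 # small residual
```

Note that the diagonal (the populations) is untouched; only the off-diagonal "cat" coherences are suppressed, and the farther apart the two positions, the faster they die.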
It is interesting to note that this model also results if we consider the equations
of motion for a particle randomly kicked by a Gaussian noise term. The equivalent
Hamiltonian for this reads

$$H = H_0 + \sum_i \lambda_i(t)V_i \qquad (6.70)$$
where H_0 and V_i are arbitrary Hermitian operators and the λ_i(t) are Gaussian stochastic
coefficients with zero mean, ⟨λ_i(t)⟩ = 0, and second moments given by
$$\langle\lambda_i(t)\lambda_j(t')\rangle = g_{ij}\,\delta(t - t') \qquad (6.71)$$

Hamiltonians such as this can model a wide variety of physical situations where
the motion or transport is driven by an external field. One such example is the case

of Förster resonant excitation transfer (FRET) in biomolecules where the migration


and diffusion of an initial electronic excitation within the system is dependent upon
the local environmental and conformational fluctuations or the transport of a proton
in a channel in which the tunneling barriers between sites are modulated by the
environment in some stochastic way. A useful property of such Hamiltonians is that
one can explicitly average over the noise when calculating the time evolution of the
density matrix or expectation values of various observables. Noise-averaged time
evolution for various specific forms of the Hamiltonian in Equation 6.70 have been
considered previously.38–42
Let us take a brief technical departure and consider the ramifications of this model.
In general, the quantum density matrix satisfies the Liouville–von Neumann equation,
$$i\frac{\partial\rho}{\partial t} = (\mathcal{L}_0 + \mathcal{L}_V(t))\rho \qquad (6.72)$$
The action of the superoperators $\mathcal{L}_0$ and $\mathcal{L}_V$ on ρ is given, respectively, by $\hbar\mathcal{L}_0\rho = [H_0, \rho]$ and $\hbar\mathcal{L}_V(t)\rho = \sum_i \lambda_i(t)[V_i, \rho]$. The density matrix at time t is given in
terms of the density matrix ρ(0) at time t = 0 by

ρ(t) = U (t)ρ(0) (6.73)

Here the time-evolution superoperator U (t) is given by the following infinite series:
$$U(t) = e^{-i\mathcal{L}_0 t} - i\int_0^t d\tau\, e^{-i\mathcal{L}_0(t-\tau)}\mathcal{L}_V(\tau)e^{-i\mathcal{L}_0\tau}$$
$$\qquad - \int_0^t d\tau\int_0^\tau d\tau'\, e^{-i\mathcal{L}_0(t-\tau)}\mathcal{L}_V(\tau)e^{-i\mathcal{L}_0(\tau-\tau')}\mathcal{L}_V(\tau')e^{-i\mathcal{L}_0\tau'} + \cdots \qquad (6.74)$$

Noise-averaged expectation values of an operator O are computed using
$$\langle O(t)\rangle = \mathrm{Tr}\,(O(t)U(t)\rho(0)) \qquad (6.75)$$

It is assumed here that operator O can have an explicit time dependence. When per-
forming averages as in Equation 6.75, we need to distinguish two types of operators:
those with and those without stochastic coefficients λi (t). In the latter case, averaging
over noise as in Equation 6.75 reduces to an averaging of the evolution superoperator
U (t) and such expectation values can be calculated with the noise-averaged density
matrix.
Noise averaging of U (t) can be performed by taking averages for each term in
the series and then resumming the series. This involves averaging products of the
stochastic coefficients. Since λi (t) is sampled from a Gaussian deviate, all terms
involving an odd number of coefficients necessarily vanish. Furthermore, any term
with an even number of coefficients can be written as a sum of all possible products of
second moments. However, due to the order of integrations over time in Equation 6.74
and the fact that second moments in Equation 6.71 involve delta functions in time,
only one product from the sum contributes to the average after all the time integrations
are performed. For example, consider the fourth-order average:
$$\langle\lambda_i(t)\lambda_j(\tau)\lambda_k(\tau')\lambda_l(\tau'')\rangle$$



Using Equation 6.71 for the second moments, we have
$$\langle\lambda_i(t)\lambda_j(\tau)\lambda_k(\tau')\lambda_l(\tau'')\rangle = g_{ij}g_{kl}\,\delta(t-\tau)\delta(\tau'-\tau'')$$
$$\qquad + g_{ik}g_{jl}\,\delta(t-\tau')\delta(\tau-\tau'') + g_{il}g_{jk}\,\delta(t-\tau'')\delta(\tau-\tau') \qquad (6.76)$$
Since the region of integration for the fourth-order term in the series (Equation 6.74)
is t ≥ τ ≥ τ' ≥ τ'', we can see that only the g_ij g_kl δ(t − τ)δ(τ' − τ'') term will
contribute to the series. Similar analysis can be applied to all even-order terms.
Averaging over noise and resumming the series (Equation 6.74) produces

$$U(t) = e^{-i\mathcal{L}_0 t - Mt} \qquad (6.77)$$


where the action of the superoperator M on the density matrix ρ is given by
$$M\rho = \frac{1}{2\hbar^2}\sum_{ij} g_{ij}[V_i, [V_j, \rho]] \qquad (6.78)$$

It follows from Equation 6.77 that the noise-averaged density matrix satisfies the
following equation:
$$i\frac{\partial\rho}{\partial t} = (\mathcal{L}_0 - iM)\rho \qquad (6.79)$$
This allows one to see an interesting connection between the noise-averaged time
evolution of the density matrix for the noisy system and the time evolution of the
reduced density matrix for an open quantum system.
Consider the case when the correlation matrix g_ij in Equation 6.78 is diagonal,
that is, g_ij = g_ii δ_ij. In this case, Equation 6.79 can be rewritten as
$$i\frac{\partial\rho}{\partial t} = \frac{1}{\hbar}[H_0, \rho] - \frac{i}{2\hbar^2}\sum_i g_{ii}[V_i, [V_i, \rho]] \qquad (6.80)$$
Replacing g_ii = 2Γ and V_i = x, we arrive at the equation we had above for the
impulsively monitored particle.
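The claim that noise averaging generates the double-commutator term can be checked by brute force for a two-level system: average unitary trajectories of H = H0 + λ(t)V over many white-noise realizations and compare with Equation 6.80. The sketch below uses my own arbitrary choices (H0 ∝ σz, a single operator V = σx, the noise strength g, and the step sizes); the per-step propagator uses the closed form for a traceless 2×2 Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, eps, g = 1.0, 1.0, 0.4
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Vop = 0.5*eps*sz, sx        # H = H0 + lambda(t) V, <l(t)l(t')> = g delta(t-t')

def U_step(lam, dt):
    """exp(-i(H0 + lam*V)dt/hbar) in closed form for this traceless 2x2 H."""
    hx, hz = lam, 0.5*eps
    w = np.hypot(hx, hz)
    return np.cos(w*dt/hbar)*np.eye(2) - 1j*np.sin(w*dt/hbar)*(hx*sx + hz*sz)/w

dt, nstep, ntraj = 0.01, 200, 1000
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)

avg = np.zeros((2, 2), dtype=complex)          # brute-force noise average
for _ in range(ntraj):
    rho = rho0.copy()
    for _ in range(nstep):
        lam = rng.normal(0.0, np.sqrt(g/dt))   # discretized white noise
        U = U_step(lam, dt)
        rho = U @ rho @ U.conj().T
    avg += rho/ntraj

rho = rho0.copy()                              # Eq. 6.80 with g_11 = g, V_1 = sx
for _ in range(nstep):
    comm = H0 @ rho - rho @ H0
    VV = Vop @ rho - rho @ Vop
    rho = rho + dt*(-1j/hbar*comm - g/(2*hbar**2)*(Vop @ VV - VV @ Vop))

print(np.max(np.abs(avg - rho)))   # agree to statistical error ~ 1/sqrt(ntraj)
```

Each individual trajectory remains a pure state; it is only the noise average that relaxes toward a statistical mixture, exactly as Equation 6.80 predicts.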
Even though we have stated that there is no recoil between the quantum particle
and the scattering particle, the energy of the quantum particle must increase with time.
In general, the Ehrenfest equations of motion for the expectation values of operators
are given by
$$\frac{d\langle O\rangle}{dt} = \frac{d}{dt}\mathrm{Tr}\,(O\rho) = \mathrm{Tr}\left(O\frac{d\rho}{dt}\right)$$
For example, if we consider the Ehrenfest equations of motion for position, momentum,
and energy, we find
$$\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m} \qquad (6.81)$$
$$\frac{d\langle p\rangle}{dt} = -\left\langle\frac{dV}{dx}\right\rangle \qquad (6.82)$$
$$\frac{d\langle H\rangle}{dt} = 0 \qquad (6.83)$$

In other words, when we eliminate the interaction with the scattering particles, energy
is conserved as we expect. However, including the recoilless interaction,
$$i\hbar\frac{\partial\rho}{\partial t} = [H_o, \rho] - \frac{i\Gamma}{\hbar}[x, [x, \rho]] \qquad (6.84)$$
$$\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m} \qquad (6.85)$$
$$\frac{d\langle p\rangle}{dt} = -\left\langle\frac{dV}{dx}\right\rangle \qquad (6.86)$$
$$\frac{d\langle H\rangle}{dt} = +\frac{\Gamma}{m} \qquad (6.87)$$
where m is the mass of the quantum particle. Thus, the average energy must increase
due to the noisy interaction. One can also conclude that the necessary condition for
energy conservation even in this case is that [Ho , Vi ] = 0 for all operators describing
the coupling between the quantum particle and the scatterer.
We can construct equations of motion that do lead to d⟨H⟩/dt = 0 at long times
by including a frictional term. For example, the equations of motion for a classical
Brownian particle can be written as

$$m\ddot{x} + \eta\dot{x} + V' = f(t)$$
where η is the relaxation rate and f(t) is a noise source with the properties ⟨f(t)⟩ = 0
and ⟨f(t)f(t')⟩ = 2ηkT δ(t − t'). In other words, the relaxation rate
η is directly proportional to the frequency (and strength) of the interaction with the
noisy field. Analyzing the classical equations results in the following noise-averaged
quantities:

$$\frac{d}{dt}\langle x\rangle = \langle p\rangle/m \qquad (6.88)$$
$$\frac{d}{dt}\langle p\rangle = -\langle V'\rangle - \eta\langle p\rangle/m \qquad (6.89)$$
$$\frac{d}{dt}\langle E\rangle = \frac{2\eta}{m}\left(\frac{kT}{2} - \frac{\langle p^2\rangle}{2m}\right) \qquad (6.90)$$

As the system relaxes, the average kinetic energy becomes equal to kT /2 even though
the average momentum ⟨p⟩ relaxes to zero. Also, it is important to notice that ⟨p²⟩ ≠ ⟨p⟩².
To actually solve these equations, we need to also work out the equations for the
higher-order averages, ⟨p²⟩, ⟨x²⟩, and so on.
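The equipartition result is easy to check with a direct Euler–Maruyama simulation of the Langevin equation for a free particle (V' = 0). The parameters below are arbitrary illustrative choices; the point is only that ⟨p⟩ relaxes to zero while ⟨p²⟩/2m relaxes to kT/2:

```python
import numpy as np

rng = np.random.default_rng(0)
m, eta, kT = 1.0, 0.5, 1.0
dt, nstep, ntraj = 0.01, 4000, 2000

# free particle: dp = -(eta/m) p dt + f dt, with <f(t)f(t')> = 2 eta kT delta(t-t')
p = np.zeros(ntraj)
for _ in range(nstep):
    p += -(eta/m)*p*dt + rng.normal(0.0, np.sqrt(2*eta*kT*dt), size=ntraj)

print(np.mean(p), np.mean(p**2)/(2*m))   # ~0 and ~kT/2 at equilibrium
```

The run covers many relaxation times m/η, so the final ensemble is effectively thermalized; the residual deviations are statistical, of order 1/√ntraj.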
Similarly, for quantum systems, we can insert a frictional term into the equations
of motion for the density matrix:

$$i\frac{\partial\rho}{\partial t} = \frac{1}{\hbar}[H_o, \rho] + \frac{\eta}{2m\hbar}[x, \{p, \rho\}] - i\frac{\eta kT}{\hbar^2}[x, [x, \rho]] \qquad (6.91)$$


where {· · ·} is the anticommutation bracket, {A, B} = AB + B A. Again, working out
the Ehrenfest equations of motion for the averaged quantities,

$$\frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m} \qquad (6.92)$$
$$\frac{d}{dt}\langle p\rangle = -\left\langle\frac{dV}{dx}\right\rangle - \frac{\eta}{m}\langle p\rangle \qquad (6.93)$$
$$\frac{d}{dt}\langle H\rangle = \frac{2\eta}{m}\left(\frac{kT}{2} - \frac{\langle p^2\rangle}{2m}\right) \qquad (6.94)$$

As in the classical case, the system relaxes to some thermal distribution such that its
final kinetic energy is identical to the thermal energy. Also, one needs to be aware
that these equations of motion are not closed. In fact, for the general case, this is the
beginning of a hierarchy of equations.43–48

6.5.2 THE QUANTUM ZENO EFFECT


The quantum Zeno effect is the prediction that an unstable particle (such as an excited
state of an atom) will never decay if it is continuously observed. The effect was first
predicted by Leonid Khalfin in 195949 and the term “quantum Zeno effect” was
later coined by Sudarshan and Misra almost 20 years later.50 In the analysis above,
we have shown that whenever the environment interacts with a quantum subsystem,
the coherence terms in the reduced density matrix decay to zero. The rate of decay
depends upon how often the environment queries the quantum state of the subsystem.
A simple experiment to test this idea was proposed by Cook;51 it consisted of three
levels in which two of the levels, (2) and (3), were resonantly coupled to a common
ground state (1). Spontaneous decay between (2) and (1) is assumed to be negligible
(Figure 6.5a). If the system is (1) at t = 0 and a resonant interaction is applied, a
superposition, is created between (1) and (2) that transfers population between (1) and
(2) at the Rabi frequency . As we have seen previously, the transient populations in
(1) and (2) are then given by

$$P_1(t) = \cos^2(\Omega t/2) \qquad (6.95)$$
$$P_2(t) = \sin^2(\Omega t/2) \qquad (6.96)$$

Suppose at some time t such that Ωt ≪ 1 we measure the state of the system; then
P1 ≈ 1 and P2 ≈ Ω²t²/4 ≪ 1. Likewise, if we had prepared the system in (2), we
would have the reverse situation where P2 ≈ 1 and P1 ≈ 0. If level (3) can only
decay to (1), say, due to a selection rule, then we can perform a measurement on the
system by driving the (1) → (3) transition with an optical pulse.
The proposed experiment goes as follows. First, we prepare the system in state
(2) by driving the (1) → (2) transition with a π-pulse of duration T = π/Ω while
simultaneously applying a series of short measurement pulses. The duration of each
measurement pulse is assumed to be much less than T . Suppose the system is in (1) at
t = 0 and a π pulse is applied. In the absence of the probe pulse, P2 (T ) = 1. For a


FIGURE 6.5 (a) Three-level scheme used in Cook’s proposed experiment for testing the quan-
tum Zeno effect. (b) Predicted transient populations of state (2) following n impulsive probe
pulses.

two-state system, we can represent the density matrix as


ρ(t) = R1 (t)σx + R2 (t)σ y + R3 (t)σz

The equations of motion for the polarization vector R = (R1 , R2 , R3 ) are given by
$$d\mathbf{R}/dt = \boldsymbol{\omega}\times\mathbf{R}$$
with R(0) = (0, 0, −1) and ω = Ω(1, 0, 0). Following preparation and in the absence
of any further interactions, the polarization vector precesses about ω at the Rabi
frequency. (Note that we have ignored the counter-rotating terms.)
Now assume that n probe pulses are applied at times τ_k = kπ/(nΩ) where
k = 1, . . . , n. Just before the first probe pulse at t = π/(nΩ), the polarization vector
is
$$\mathbf{R} = (0, \sin(\pi/n), -\cos(\pi/n))$$
The probe pulse collapses the wave function (that is, eliminates the coherences)
while leaving the populations unchanged. In other words, after the pulse, R1 (t+ ) =
R2 (t+ ) = 0 and R(t+ ) = (0, 0, −cos(π/n)). For all intents and purposes, R(t+ ) is
identical to R(0) except that its magnitude is now |R| = |cos(π/n)|. Consequently,
after a sequence of n pulses, R(T ) = (0, 0, −cosⁿ(π/n)). Since R3 is the difference
between the two populations, R3 = P2 − P1 and P1 + P2 = 1, it is easy to see that
$$P_2(T) = (1 + R_3(T))/2 = (1 - \cos^n(\pi/n))/2 \qquad (6.97)$$
Expanding cos(π/n) as a power series and using
$$\lim_{n\to\infty}(1 - x/n)^n = e^{-x}$$

we find that in the limit of rapid probe pulses
$$P_2(T) = \left(1 - e^{-\pi^2/(2n)}\right)/2 \qquad (6.98)$$
The predicted transient populations of state (2) are shown in Figure 6.5b, where we
have assumed the probe pulses to be impulsive. As the frequency of the probe pulses
increases, the transfer of population from (1) into (2) is inhibited.
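Equation 6.97 and its large-n limit are trivial to tabulate. The quick check below (assuming ideal, instantaneous collapses, as in the derivation) shows the transferred population vanishing as n grows, so the frequently watched system is frozen in its initial state:

```python
import numpy as np

def P2(n):
    """Population transferred by a pi pulse interrupted by n ideal collapses (Eq. 6.97)."""
    return (1 - np.cos(np.pi/n)**n)/2

def P2_limit(n):
    """Large-n limit of Eq. 6.98."""
    return (1 - np.exp(-np.pi**2/(2*n)))/2

for n in (1, 2, 4, 8, 16, 64, 256):
    print(n, P2(n), P2_limit(n))
```

With n = 1 (a single "measurement" at the end of the pulse) the π pulse inverts the population completely; as n → ∞ the transfer is quenched, which is the watched-pot behavior described above.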
This scheme was used by Itano et al.50 in 1990 to examine the effect of measurement
on quantum superposition states. In this experiment, approximately 5000 ⁹Be⁺ ions
were held in a Penning trap and laser cooled to below 250 mK. In a magnetic field,
the 2s ²S₁/₂ ground state of Be⁺ is split into hyperfine levels similar to what is shown
in Figure 6.5(a). Radio-frequency (rf) transitions can occur between the
(m_I, m_J) = (3/2, 1/2) and (1/2, 1/2) sublevels. A resonant rf pulse would place
nearly all the atoms in the upper (3/2, 1/2) state (2), depopulating the lower state. A
second UV pulse, resonant with the transition between the lower 2s ²S₁/₂ state (1) and
one of the 2p ²P₃/₂ states (3) with quantum numbers (m_I, m_J) = (3/2, 1/2), served
as the measurement, since state (3) decays only to state (1). The results of this
experiment are shown in Figure 6.6, where we have plotted both the predicted and
experimental transition probabilities between states (1) and (2). The agreement is well
within the 0.02% statistical uncertainty of the experiment due to photon counting.

6.6 SUMMARY
In this chapter we have described the relaxation of a quantum system through a
rather phenomenological approach. We have not thus far described the process for
connecting the various relaxation time scales to a molecular-level description of the
interaction between an individual molecule and an environment. This we reserve for
later discussion and refer the interested reader to other texts and sources:


FIGURE 6.6 Comparison between predicted and experimental transition probabilities follow-
ing n probe pulses for the (a) 1 → 2 (b) 2 → 1 transition. Experimental data from Ref. 50.

1. Chemical Dynamics in the Condensed Phase, Abraham Nitzan
(Oxford, UK: Oxford University Press, 2007).
2. Quantum Mechanics in Chemistry, George C. Schatz and Mark A. Ratner
(Mineola, NY: Dover, 2002).
Both of these texts provide excellent pedagogic descriptions and technical details for
deriving the reduced equations of motion for a system interacting with an environment.

6.7 APPENDIX: WIGNER QUASI-PROBABILITY DISTRIBUTION


Not long after the introduction of the Schrödinger equation with its wave-function
solutions, Eugene Wigner proposed a means to study quantum dynamics in phase
space and thereby systematically include quantum mechanical corrections to classical
dynamics.52 The resulting function of interest is a “quasi-probability” distribution
function over the p, x classical phase-space variables. The function is defined in
terms of a Fourier transform of the density matrix (or wave function)
$$W(x, p) = \frac{1}{\pi\hbar}\int dy\,\psi(x + y/2)\psi^*(x - y/2)\,e^{ipy/\hbar} \qquad (6.99)$$
$$\phantom{W(x, p)} = \frac{1}{\pi\hbar}\int dy\,\rho(x - y/2, x + y/2)\,e^{ipy/\hbar} \qquad (6.100)$$
Here, x and p denote the position and momentum. However, in principle, one can
use any pair of conjugate variables—for example, the real and imaginary parts of a
field or the time and frequency components of a signal. We can also take the inverse
transform and recover the original density matrix

$$\rho(x + y/2, x - y/2) = \int W(x, p)\,e^{ipy/\hbar}\,dp \qquad (6.101)$$

W (x, p) is a generating function for all spatial autocorrelation functions of a given


quantum mechanical wave function ψ(x). For example, taking the derivative of W
with respect to p at p = 0,
$$\frac{1}{i}\left.\frac{\partial W}{\partial p}\right|_{p=0} = \frac{1}{\pi\hbar}\int dy\,\rho(x - y, x + y)\,y \qquad (6.102)$$
i ∂ p  p=0 πh̄

gives the expectation value of a conjugate variable. Thus, W corresponds to the


quantum density matrix in the map between real phase-space functions and Hermitian
operators introduced by Hermann Weyl in 1927, in a context related to representation
theory in mathematics (cf. Weyl quantization in physics).53–55 In effect, it is the Weyl
transform of the density matrix. Similar transforms were later rederived by J. Ville
in 1948 as a quadratic (in signal) representation of the local time-frequency energy
of a signal.56 Furthermore in 1949, José Enrique Moyal, who had also rederived it
independently, recognized it as the quantum moment-generating functional and, thus,
as the basis of an elegant encoding of all quantum expectation values and, hence,
quantum mechanics, in phase space (cf. Weyl quantization).57 It has applications in
statistical mechanics, quantum chemistry, quantum optics, classical optics, and signal
188 Quantum Dynamics: Applications in Biological and Materials Systems

analysis in such diverse fields as electrical engineering, seismology, biology, speech
processing, and engine design.
A classical particle has a definite position and momentum, and hence it is repre-
sented by a point in phase space, x, p. Given a collection or ensemble of particles, the
probability of finding a particle in an infinitesimal phase-space volume d xd p about
the point x, p at time t is given by P(x, p; t) = ρ(x, p)d x d p, which has the property
that P ≥ 0,

\int dx\, dp\, \rho(x, p) = N    (6.103)

where N is the number of particles in the ensemble. The time development for the
phase-space distribution ρ(x, p) is given by the Liouville equation:
\frac{d\rho}{dt} = \frac{\partial\rho}{\partial t} + \frac{\partial\rho}{\partial q}\frac{\partial H}{\partial p} - \frac{\partial\rho}{\partial p}\frac{\partial H}{\partial q} = 0    (6.104)
where H is the Hamiltonian governing the motion of the particles. Thus, the equation
of motion governing the classical phase-space density is
\frac{\partial\rho}{\partial t} = -\frac{\partial\rho}{\partial q}\frac{\partial H}{\partial p} + \frac{\partial\rho}{\partial p}\frac{\partial H}{\partial q} = -\{\rho, H\}    (6.105)
∂t ∂q ∂ p ∂ p ∂q

or, in terms of the Liouville operator, L̂,

\frac{\partial\rho}{\partial t} + \hat{L}\rho = 0    (6.106)
One can also recast this equation as
\frac{\partial\rho}{\partial t} + \frac{p}{m}\cdot\nabla_x \rho + F\cdot\nabla_p \rho = 0    (6.107)
In astrophysics this is called the Vlasov equation, or sometimes the collisionless
Boltzmann equation, and is used to describe the evolution of a large number of
collisionless particles moving in a potential. In classical statistical mechanics, N
can become very large. Consequently, setting ∂ρ/∂t = 0 gives the stationary or
equilibrium density for an ensemble of microstates. In particular, this is satisfied by
the Boltzmann distribution, ρ ∝ exp(−Hβ) where β = 1/k B T .
The physical interpretation of the classical Liouville equation is that we imagine
the density enclosed in small-volume elements as seen by following along a trajectory
x(t), p(t). Since dρ/dt = 0, there is no net flux of probability in or out of the small-
volume element. Alternatively, one can imagine a cloud of points in phase space. It
is straightforward to show that as the cloud stretches in one dimension, say, in x, it
shrinks in the other dimension p so that the volume ΔxΔp remains constant. This
classical interpretation fails for a quantum particle due to the uncertainty principle,
which forbids the precise simultaneous determination of both x and p. Instead, the
above quasi-probability Wigner distribution plays an analogous role but does not
satisfy all the properties of a conventional probability distribution; and, conversely,
it satisfies boundedness properties unavailable to classical distributions. Moreover,
Quantum Density Matrix 189

the Wigner distribution can and normally does go negative for states that have no
classical analog—and is a convenient indicator of quantum mechanical interference.
This can be seen in Figure 6.7 where we have plotted the Wigner function for the
first three eigenstates of the harmonic oscillator. For every state other than the n = 0
ground state, one can clearly see regions where W is positive and regions where W is
negative. Hence, W (x, p; t)d xd p cannot be interpreted as the probability of finding
a particle in an infinitesimal phase-space volume d xd p about the point x, p.
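To make this concrete, here is a small numerical sketch (our own illustration, not from the text) that evaluates the Wigner integral for the n = 1 oscillator eigenstate in units ħ = m = ω = 1. We use the equivalent convention W(x, p) = (1/π)∫dy ψ*(x + y)ψ(x − y)e^{2ipy}, normalized so the distribution integrates to one over phase space; the function names and grid parameters are our own choices.

```python
import numpy as np

# Oscillator units hbar = m = omega = 1 (an assumption for this sketch).
def psi1(x):
    # first excited harmonic-oscillator eigenstate
    return np.sqrt(2.0) * np.pi**-0.25 * x * np.exp(-x**2 / 2)

def wigner(psi, x, p, ylim=8.0, ny=4001):
    # W(x, p) = (1/pi) * integral dy psi*(x + y) psi(x - y) exp(2 i p y)
    y = np.linspace(-ylim, ylim, ny)
    f = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y)
    return (f.sum() * (y[1] - y[0])).real / np.pi

print(wigner(psi1, 0.0, 0.0))   # ≈ -1/pi ≈ -0.318: negative at the origin
print(wigner(psi1, 1.5, 0.0))   # > 0 farther out in phase space
```

The closed form in Problem 6.2 gives W(0, 0) = (−1)^n/π, so the negative value at the origin for n = 1 is exact, not a numerical artifact.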
The function itself has a number of useful properties. First, W(x, p) is real.
Secondly, the x and p distributions are given by the marginals

\int dp\, W(x, p) = \rho(x, x)    (6.108)

If ψ is a pure state, then ρ(x, x) = |ψ(x)|2 . Likewise,



\int dx\, W(x, p) = \rho(p, p)    (6.109)

yields the momentum distribution. Again for a pure state, ρ( p, p) = |ψ̃( p)|2 . Finally

\int dx\, dp\, W = \text{Tr}(\rho) = 1    (6.110)

That W is real and it can give both the momentum and position distributions implies
that W can be negative somewhere.
In order to compute physical quantities, we need to first transform the quantum
mechanical operators into the Wigner representation

B_W(x, p) = \int dy\, \langle x + y/2|\hat{B}|x - y/2\rangle\, e^{ipy/\hbar}    (6.111)

where the W subscript denotes the “Wigner-ized” operator. This may sound grand;
however, in practice, it is actually quite simple for operators involving only position
and momentum variables. For example, if operator A is a function of the position
operator, q̂, then

A_W(x) = \int dy\, \langle q + y/2|A(\hat{q})|q - y/2\rangle\, e^{ipy/\hbar} = A(q)    (6.112)

Likewise for the momentum operator. Where we do have to be careful, however, is in taking the Wigner transform of operator products

(A \cdot B)_W = A_W \exp\left(\frac{i\hbar}{2}\hat{\Lambda}\right) B_W    (6.113)

where \hat{\Lambda} is the Poisson bracket operator defined as

\hat{\Lambda} = \frac{\overleftarrow{\partial}}{\partial x}\frac{\overrightarrow{\partial}}{\partial p} - \frac{\overleftarrow{\partial}}{\partial p}\frac{\overrightarrow{\partial}}{\partial x}    (6.114)
FIGURE 6.7 Wigner distribution for the first three states of the harmonic oscillator.

We need to pay attention to the direction of the arrows in this last expression since they indicate the direction of operation for the partial derivative. For example,

A_W \hat{\Lambda} B_W = \frac{\partial A}{\partial x}\frac{\partial B}{\partial p} - \frac{\partial A}{\partial p}\frac{\partial B}{\partial x}    (6.115)
We can also easily see that
 
(A \cdot B)_W = A_W \exp\left(\frac{i\hbar}{2}\hat{\Lambda}\right) B_W = B_W \exp\left(-\frac{i\hbar}{2}\hat{\Lambda}\right) A_W    (6.116)
This allows us to construct the Wigner transform of a commutator as

[A, B]_W = (AB)_W - (BA)_W    (6.117)

    = 2i\, A_W \sin(\hbar\hat{\Lambda}/2)\, B_W    (6.118)

From this we can construct an expansion in powers of h̄ using

\frac{2}{\hbar}\sin(\hbar\hat{T}/2) = \hat{T} - \frac{\hbar^2}{24}\hat{T}^3 + \cdots    (6.119)
where we notice that ħ enters only in the second and higher terms; the leading-order term is simply the classical Poisson bracket. Such expansions are extremely useful in evaluating the time evolution of
the Wigner distribution
\frac{\partial W}{\partial t} = -\frac{i}{\hbar}([H, \rho])_W = \frac{2}{\hbar}\, H_c \sin(\hbar\hat{\Lambda}/2)\, W    (6.120)

    = -i L_W W    (6.121)

where Hc is the classical Hamiltonian. Applying the Poisson operator yields


 
\frac{\partial W}{\partial t} = -\frac{p}{m}\frac{\partial W}{\partial x} + \frac{2}{\hbar}\sin\left(\frac{\hbar}{2}\frac{\partial}{\partial x}\frac{\partial}{\partial p}\right) V(x)\, W(x, p)    (6.122)

    = -\frac{p}{m}\frac{\partial W}{\partial x} + \frac{2}{\hbar}\sum_{n=0}^{\infty} (-1)^n \left(\frac{\hbar}{2}\right)^{2n+1} \frac{1}{(2n+1)!}\, \frac{\partial^{2n+1} V(x)}{\partial x^{2n+1}}\, \frac{\partial^{2n+1} W(x, p)}{\partial p^{2n+1}}

where in the first equation it is assumed that the ∂/∂ x operates only on the V (x) term
and the ∂/∂ p acts only on the W (x, p) term. Finally, there is an equivalent form given
by Groenewold that reads58
    
\frac{\partial W}{\partial t} = -\frac{p}{m}\frac{\partial W}{\partial x} + \frac{1}{i\hbar}\left[V\!\left(x + \frac{i\hbar}{2}\frac{\partial}{\partial p}\right) - V\!\left(x - \frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right] W(x, p)    (6.123)
This last term is especially useful when V (x) can be expressed as a polynomial in x.
Operationally, we perform the Taylor series expansion, then replace the x operator
with x ±(ih̄/2)d/d p. For example, for the harmonic potential, the potential term yields
mω²x ∂W/∂p, which is precisely what we get for the classical Liouville equation.
Only at cubic order, V(x) ∝ a₃x³/3, do we begin to see quantum terms appearing in the
equations of motion. For example, for the cubic potential,

\frac{\partial W}{\partial t} = -\frac{p}{m}\frac{\partial W}{\partial x} + a_3 x^2 \frac{\partial W}{\partial p} - \frac{\hbar^2}{12}\, a_3 \frac{\partial^3 W}{\partial p^3}

Let us examine the various terms in Equations 6.122 and 6.123. The first term
on the right-hand side of the two equations comes from the kinetic energy operator
and, depending upon the context, is termed the “drift,” “streaming,” or “advection”
term. This term is also present in the classical Liouville equation (Equation 6.107).
In fact, simply expanding the potential term in powers of h̄ and then setting h̄ → 0
produces the classical Liouville equation. Thus, all quantum effects enter in through
the Wignerized potential, V_W(x, p − p′), which is nonlocal in momentum and basically
redistributes the Wigner function along all possible momenta p for a given position
x. The rough picture here is that particles that have been scattered by the potential at
point x ± y/2 interfere with particles scattering at different points. In this way, we
sample over all possible pathways the particle can take.59
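The ħ expansion of Eq. (6.120) can be checked symbolically. The sketch below is our own construction (the helper names and the order of truncation are ours): it builds the series for (2/ħ)H sin(ħΛ̂/2)W term by term and confirms that a cubic potential yields exactly the classical drift and force terms plus the ħ² correction quoted above.

```python
import sympy as sp

x, p, hbar, m, a3 = sp.symbols('x p hbar m a_3', positive=True)
W = sp.Function('W')(x, p)
H = p**2 / (2*m) + a3 * x**3 / 3        # kinetic energy plus cubic potential

def d(expr, nx, npp):
    # mixed partial derivative d^{nx}/dx^{nx} d^{npp}/dp^{npp}
    for _ in range(nx):
        expr = sp.diff(expr, x)
    for _ in range(npp):
        expr = sp.diff(expr, p)
    return expr

def moyal_rhs(H, W, nmax=2):
    # (2/hbar) H sin(hbar*Lambda/2) W expanded through order hbar^(2*nmax),
    # with Lambda = (left d/dx)(right d/dp) - (left d/dp)(right d/dx)
    rhs = 0
    for n in range(nmax + 1):
        k = 2*n + 1
        term = sum(sp.binomial(k, j) * (-1)**j * d(H, k - j, j) * d(W, j, k - j)
                   for j in range(k + 1))
        rhs += (-1)**n * (hbar/2)**(2*n) / sp.factorial(k) * term
    return sp.expand(rhs)

expected = (-p/m * sp.diff(W, x) + a3 * x**2 * sp.diff(W, p)
            - hbar**2/12 * a3 * sp.diff(W, p, 3))
assert sp.simplify(moyal_rhs(H, W) - expected) == 0
print("cubic-potential Moyal expansion matches the equation above")
```

Because H is cubic at most, all terms beyond n = 1 vanish identically, which is why the series terminates.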

6.7.1 WIGNER REPRESENTATION ON A LATTICE: EXCITON DIFFUSION


So far we have discussed the Wigner function for a continuous system. One can also
derive a Wigner representation for a discretized system of a particle (or quasi particle
such as an excitation) hopping from one site to the next. Consider, for example, the
simple case of an exciton hopping between different sites as in the Frenkel exciton
model:60–62
 
H = \hbar\sum_n \varepsilon_n B_n^{\dagger} B_n + \sum_{n \neq m} J_{nm}\left(B_n^{\dagger} B_m + B_m^{\dagger} B_n\right)    (6.124)

where the Bn and Bn† destroy and create an exciton on site n. We can write this in a
basis as

H = \hbar\sum_n \varepsilon_n |n\rangle\langle n| + \sum_{n \neq m} J(n - m)\, |n\rangle\langle m|    (6.125)

where we have written the interaction operator J_nm as depending upon the distance between sites n and m. In this representation, the density matrix is given by ρ_nm = ⟨n|ρ|m⟩. At this point we change variables to relative and center-of-mass variables by writing

n = r − s/2

and

m = r + s/2

with ρ(r, s) = ⟨r − s/2|ρ|r + s/2⟩. ρ(r, 0) is a diagonal element of the density matrix and gives the probability for the exciton's being located at lattice position r at a given time. Consequently, ρ(r, s ≠ 0) carries the phase coherence information between
two sites separated by distance s on the lattice. In this representation the time evolution
of the density matrix is given by

i\dot{\rho}(r, s) = \sum_a J(a)\left(\rho(r + a/2,\, s - a) - \rho(r + a/2,\, s + a)\right) + (\varepsilon_{r-s/2} - \varepsilon_{r+s/2})\,\rho(r, s)    (6.126)
where the a summation index runs over all displacements in the lattice. The second
term will vanish if all the sites have the same energy. We can now apply the Wigner
transformation to both sides of the equation.
\rho(r, s) = \frac{1}{\sqrt{N}}\sum_p W(r, p)\, e^{ips}    (6.127)
As above, the resulting Liouville equation has both kinetic and potential energy
contributions
\dot{W} = T_W + V_W    (6.128)
For the potential term, we derive this by first taking the discrete sine transform of the
energy differences
V(r, p) = \frac{2}{\sqrt{N}}\sum_s \sin(ps)\left(E_{r-s/2} - E_{r+s/2}\right)    (6.129)
and then taking the convolution with the Wigner transformed density matrix,
V_W = \frac{1}{\hbar}\sum_{p'} V(r,\, p - p')\, W(r, p')    (6.130)

Taking the continuum limit, we can write this in the Groenewold form as59
    
\frac{1}{\hbar}\sum_{p'} V(r,\, p - p')\, W(r, p') = \frac{i}{\hbar}\left[E\!\left(r + \frac{i\hbar}{2}\frac{\partial}{\partial p}\right) - E\!\left(r - \frac{i\hbar}{2}\frac{\partial}{\partial p}\right)\right] W(r, p)    (6.131)
Here, again, we see that dynamics can be interpreted as that of a particle that scatters
onto some site where it receives a random momentum kick according to the momen-
tum distribution at that site. Quantum effects (that is, constructive and destructive
interference) occur when we sum over all scattering events.
The kinetic term arises from the hopping terms in our original Hamiltonian and
can be directly evaluated by inserting Equation 6.127 into Equation 6.126

T_W = 2\sum_a J(a) \sin(pa)\, W(r + a/2,\, p)    (6.132)

Pulling everything together yields



\dot{W}(r, p) = 2\sum_a J(a)\sin(pa)\, W(r + a/2,\, p) + \frac{1}{\hbar}\sum_{p'} V(r,\, p - p')\, W(r, p')    (6.133)
Taking the continuum limit for the first term requires us to write J (a) as a symmetric
function J (a) = J (−a). If it is sufficiently short ranged, then the first term becomes
simply
2\sum_a J(a)\sin(pa)\, W(r + a/2,\, p) = \frac{p}{m^*}\frac{\partial}{\partial r}\, W(r, p)
where m ∗ is the effective mass of the exciton given by m ∗ = h̄/(2Jl 2 ) where l is the
lattice spacing.61
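A quick numerical check of this continuum limit, with illustrative values of J and l (ħ = 1) and an arbitrary slowly varying test distribution — none of these numbers come from the text:

```python
import numpy as np

J, l = 0.1, 1.0                       # illustrative hopping strength and lattice spacing
m_star = 1.0 / (2 * J * l**2)         # effective mass with hbar = 1

def W(r, p):
    # arbitrary test distribution, broad compared with the lattice spacing
    return np.exp(-(r / 20.0)**2 - (p - 0.3)**2)

r, p = 0.4, 0.05                      # small momentum, slowly varying W
lattice = 2 * J * (np.sin(p*l) * W(r + l/2, p) + np.sin(-p*l) * W(r - l/2, p))
dr = 1e-5
continuum = (p / m_star) * (W(r + dr, p) - W(r - dr, p)) / (2 * dr)
print(lattice, continuum)             # agree to better than one percent
```

The agreement degrades, as expected, once pl is no longer small or W varies appreciably over a single lattice spacing.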

6.7.2 ENFORCING FERMI–DIRAC STATISTICS


So far we have dealt with single-particle systems. However, we can use the Wigner
representation to develop many-body theories. As an example, let us consider a lattice
system with more than one exciton present but restrict the system in such a way that
any given site can have at most a single exciton present on it at a given time. This is
essentially the Pauli exclusion rule that must be applied to systems with half-integer
spin (such as electrons or protons). However, since excitons are generally prepared
optically, they are typically singlet or triplet in their spin multiplicity and hence would
be more aptly described as bosons. The justification for enforcing the exclusion rule
in the case of excitons is that typically the energy required to doubly excite a given
site is generally more than the energy carried by two excitons on separate sites. Rather
than putting in the double exciton states by hand and carrying the extra baggage, we
simply put a constraint on the system excluding two excitons from being on the same
site.
Fermion systems obey the antisymmetrization rule, and as a result, fermion operators obey an anticommutation rule {B_m^†, B_n} = δ_nm rather than the usual [B_m^†, B_n] = δ_nm, which is fine for Bose particles. We can write the commutation relation in a general form using

[B_m, B_n^{\dagger}] = \delta_{nm}(1 - \zeta W_n)
where Wn = 2Bn† Bn and ζ = 1 for fermions and ζ = 0 for bosons. This, along with
Bn2 = (Bn† )2 = 0, yields the following Heisenberg equations of motion for the exciton
operators:60,62

i\dot{B}_n = \varepsilon_n B_n + \sum_{m \neq n} J_{mn}(1 - W_n)\, B_m

i\dot{B}_n^{\dagger} = -\varepsilon_n B_n^{\dagger} - \sum_{m \neq n} J_{mn}(1 - W_n)\, B_m^{\dagger}    (6.134)

Furthermore, the Heisenberg equations for the binary product B_n^†B_m are given by

i\frac{d}{dt}\, B_n^{\dagger} B_m = (\varepsilon_n - \varepsilon_m)\, B_n^{\dagger} B_m + \sum_k \left(J_{mk} B_n^{\dagger} B_k - J_{nk} B_k^{\dagger} B_m\right) + \Gamma_{nm}^{\text{collision}}    (6.135)

where the collision term is

\Gamma_{nm}^{\text{collision}} = \sum_{k \neq m,n} \left(J_{mk} W_m B_n^{\dagger} B_k - J_{nk} W_n B_k^{\dagger} B_m\right)    (6.136)

This term represents all possible bimolecular collisions between excitons. Ignoring
this term, which is allowable in the limit of few excitations in the system, brings us
back to the expressions above.
Notice that this expression involves the product of Bm and Bm† with the Wn number
operator. Consequently (and alas), we are again faced with a hierarchy of equations
since we need to deduce the equations of motion for operator products and their
expectation values. Various approximations are possible, the simplest being to factor
all many-operator terms into single-operator terms, viz,
 † 
Wk Bn† Bm
= Bk Bk
Bn† Bm

This is essentially a local-field approximation whereby an exciton at site n moves in the mean field of excitons at all other sites. We can, however, also approximate the W_k B_n^†B_m product with the factorized product ⟨W_k⟩⟨B_n^†B_m⟩, where ⟨W_k⟩ is the expected occupation number at site k.
The connection to the analysis above is that the density matrix elements are given
by ρ_nm = ⟨B_n^†B_m⟩. Consequently, we can make the same substitutions as previously and use the convolution theorem to write the collisional term as

\Gamma^{\text{collision}}(r, p) = \frac{1}{N}\sum_{r', p'} \Big( J(r, p')\, \langle W(r) \rangle\, W(r',\, p - p') - J(r', p')\, \langle W(r') \rangle\, W(r,\, p - p') \Big)    (6.137)

where the first term represents scattering to site r from all other sites and the second
represents scattering from site r to any other site each with a momentum change of
p − p′.
We can also make a couple of simplifications to the collisional term. First, we
can invoke Boltzmann’s Stosszahl Ansatz and assume that prior to collision at time t,
the two excitons were uncorrelated. If we treat the hopping term as a momentum transfer, then the collision operator can be written as

\Gamma^{\text{boltz}}(r, p) = \int J(p - p',\, q)\big(W(r,\, p + q)\, W(r,\, p' - q) - W(r, p)\, W(r, p')\big)\, dp'\, dq    (6.138)

Alternatively, one can use the Bhatnagar–Gross–Krook (BGK) approximation63 used in the lattice Boltzmann method for simulating fluid flow and plasmas

\Gamma^{\text{BGK}}(r, p) = -\gamma\big(W(r, p) - W_{\text{eq}}(r, p)\big)    (6.139)

where Weq (r, p) is the equilibrium (stationary) distribution. Here, γ is the collision
frequency and it is assumed that the resulting momentum distribution after each

 It is also possible to introduce the Pauli exclusion principle as a dynamical constraint on the system using

the Dirac bracket method discussed earlier. Here, one appends to the Hamiltonian a series of constraints on the canonical variables such that the dynamics occurs on a surface defined by φ_n = B_n^†B_n + B_nB_n^† − 1 = 0. Thus, the constraint generates the collision term in Eq. (6.136).
collision is equal to the equilibrium momentum distribution. If we assume that all momenta are equally likely and the site energies are all the same, then W_eq(r, p) = 1/√N.
As we have seen, the primary source of quantum mechanical effects in the Wigner
representation is any term in the potential of order x³ or higher in the displacement
coordinate. For the homogeneous lattice or even the harmonic oscillator, the dynamics
in the Wigner representation is identical to that of a classical ensemble of particles.
This has a number of advantages because it allows us to treat excitons in a large system
as a classical gas with essentially hard-sphere interactions that limit the number of
excitons on a given site to just one per site.
Just as Boltzmann's collision-number approximation is valid when the time between collisions is long compared with the thermal relaxation time, the approximate forms
for the exciton/exciton interaction given here are only valid if we have few excitons in
the system. We have also ignored the possibility that exciton-exciton collisions may
not necessarily conserve total exciton numbers. In fact, if we use a more generalized
Frenkel exciton model that includes Bn† Bm† and Bn Bm , then we open the possibility
for exciton-exciton annihilation or production of additional excitons on the lattice.
Because the dynamics is still described by a Hermitian Hamiltonian, total energy will
be conserved even if the total particle number is no longer a constant of the motion.
Exciton-exciton annihilation is an especially important process for triplet excitons
and can lead to a loss of quantum efficiency in photovoltaic devices.

6.7.3 THE k, Δ REPRESENTATION


Although the Wigner distribution has proven to be quite useful in developing semi-
classical representations of quantum mechanics, other representations are possible.
One such is the so-called k, Δ representation given by

\rho(k, \Delta) = \text{Tr}\left[\rho\, e^{i(k\hat{x} + \Delta\hat{p})}\right] = \text{Tr}[D\rho] = \langle D\rho \rangle    (6.140)

where we introduce the last term to simplify our notation and to distinguish ρ(k, Δ) from ρ(x, x′). Since the density matrix is Hermitian, ρ(k, Δ) has the following symmetry: ρ*(k, Δ) = ρ(−k, −Δ).
The transformation is similar to the Wigner transform in that it involves the Fourier
transform of the density matrix


\rho(k, \Delta) = \int dx\, e^{ikx}\, \rho(x + \Delta/2,\, x - \Delta/2)    (6.141)

and is related to the Wigner function via

W(x, p) = \left(\frac{1}{2\pi}\right)^2 \int dk \int d\Delta\, e^{-i(kx + \Delta p)}\, \rho(k, \Delta)    (6.142)

or taking the inverse

\rho(k, \Delta) = \int e^{i(kx + p\Delta)}\, W(x, p)\, dx\, dp = \langle e^{i(kx + p\Delta)} \rangle_W    (6.143)
A characteristic function is the expected value of e^{it·x} for a given distribution, assuming that t is real. As such, ρ(k, Δ) is the characteristic function for the Wigner distribution. Characteristic functions are useful in deriving the moments of a given distribution. For example, the characteristic function for the normal (or Gaussian) distribution is

c(\mu, \sigma, t) = e^{it\mu}\, e^{-t^2\sigma^2/2}    (6.144)
Taking the derivative of c with respect to t, then setting t = 0, generates the moments
of the distribution:

(-i)^n \left.\frac{d^n c}{dt^n}\right|_{t=0} = \langle x^n \rangle    (6.145)

For example, the first two moments of the normal distribution read
m = \{\mu,\, \mu^2 + \sigma^2\}    (6.146)
For a Gaussian distribution, all subsequent moments can be related to these first two
moments.
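The two relations above are easy to verify symbolically; the sketch below (our own, using sympy) differentiates the Gaussian characteristic function of Eq. (6.144) as prescribed by Eq. (6.145):

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', real=True, positive=True)
c = sp.exp(sp.I*t*mu - t**2*sigma**2/2)      # Gaussian characteristic function, Eq. (6.144)

def moment(n):
    # Eq. (6.145): <x^n> = (-i)^n d^n c / dt^n evaluated at t = 0
    return sp.simplify(((-sp.I)**n * sp.diff(c, t, n)).subs(t, 0))

assert sp.simplify(moment(1) - mu) == 0
assert sp.simplify(moment(2) - (mu**2 + sigma**2)) == 0
assert sp.simplify(moment(3) - (mu**3 + 3*mu*sigma**2)) == 0
print("moments of the normal distribution recovered from its characteristic function")
```

The third-moment check illustrates the claim that all higher moments reduce to combinations of μ and σ.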
Likewise, for the Wigner function we can obtain

\langle x^n p^m \rangle = \lim_{k,\Delta\to 0} (-i)^{n+m}\, \frac{\partial^{n+m}}{\partial k^n\, \partial\Delta^m}\, \rho(k, \Delta)    (6.147)
which are the expectation values for the operator x n p m . Moreover, if we know the
time derivative of ρ(k, Δ), we can derive the Heisenberg equations of motion for operators composed of x and p:

\frac{\partial \langle x^n p^m \rangle}{\partial t} = \lim_{k,\Delta\to 0} (-i)^{n+m}\, \frac{\partial^{n+m}}{\partial k^n\, \partial\Delta^m}\, \frac{\partial \rho(k, \Delta)}{\partial t}    (6.148)
We can also use the characteristic functions to derive the cumulants of a distribu-
tion as well. These are related to the moments but only include the “connected” parts.
In other words, they cannot be reduced into a sum of other moments. For example, the
cumulants of the Gaussian distribution are simply the μ and σ specifying the center
and width of the Gaussian function. Cumulants are given by the log-derivative of the
characteristic function:

c_n = (-i)^n \left.\frac{\partial^n}{\partial t^n} \ln G(t)\right|_{t=0}    (6.149)
Thus, the cumulants of the Wigner function are the log-derivatives of the ⟨Dρ⟩ characteristic function

c_{n,m} = (-i)^{n+m} \left.\frac{\partial^{n+m}}{\partial k^n\, \partial\Delta^m} \ln \langle D\rho \rangle\right|_{k,\Delta=0}    (6.150)
The advantage of the k, Δ representation is that one can make considerable use of commutation relations to simplify the equations of motion. For example, the D operator can be written as

D = e^{i(k\hat{x} + \Delta\hat{p})} = e^{ik\Delta/2}\, e^{ik\hat{x}}\, e^{i\Delta\hat{p}}    (6.151)
Taking derivatives of D with respect to the k and Δ variables,

\frac{\partial D}{\partial k} = i\left(\frac{\Delta}{2} + \hat{x}\right) D    (6.152)

and

\frac{\partial D}{\partial \Delta} = i\left(\frac{k}{2} + \hat{p}\right) D    (6.153)

which can be rearranged to

\hat{x} D = -\left(\frac{\Delta}{2} + i\frac{\partial}{\partial k}\right) D    (6.154)

\hat{p} D = -\left(\frac{k}{2} + i\frac{\partial}{\partial \Delta}\right) D    (6.155)
Finally, we have the commutation relations


[\hat{p}, D] = k D    (6.156)

and

[\hat{x}, D] = -\Delta D    (6.157)
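These identities can be tested directly with finite matrices. The sketch below (an illustration with an arbitrarily chosen truncation and parameter values, ħ = 1) represents x̂ and p̂ in a truncated harmonic-oscillator basis and checks Eqs. (6.151), (6.156), and (6.157) on the low-lying block, where truncation error is negligible:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                        # truncated oscillator basis, hbar = 1
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)    # annihilation operator
x = (a + a.T) / np.sqrt(2.0)
p = 1j * (a.T - a) / np.sqrt(2.0)

k, Delta = 0.3, 0.4                           # arbitrary small parameters
D = expm(1j * (k*x + Delta*p))

# Eq. (6.151): D = e^{ik*Delta/2} e^{ikx} e^{i*Delta*p}
rhs = np.exp(1j*k*Delta/2) * expm(1j*k*x) @ expm(1j*Delta*p)

# Eqs. (6.156)-(6.157): [p, D] = kD and [x, D] = -Delta*D
err1 = np.abs(D - rhs)[:20, :20].max()
err2 = np.abs((p @ D - D @ p) - k*D)[:20, :20].max()
err3 = np.abs((x @ D - D @ x) + Delta*D)[:20, :20].max()
print(err1, err2, err3)    # all near machine precision on the low block
```

Restricting the comparison to the low-lying block matters: the truncated [x, p] fails at the basis edge, but that defect only reaches the low states through exponentially small matrix elements.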
We can work out the time-evolution equation for ρ in the k, Δ representation by taking the time derivative of ρ(k, Δ) = tr[Dρ] and applying Equations 6.154 to 6.157. For a particle with mass m in a harmonic well with frequency ω the time evolution for the density matrix is

i\dot{\rho} = \frac{1}{2m}[p^2, \rho] + \frac{m\omega^2}{2}[x^2, \rho]    (6.158)
Multiplying through by D and taking the trace, we arrive at
 

\frac{d\langle D\rho \rangle}{dt} = \left(\frac{k}{m}\frac{\partial}{\partial\Delta} - m\omega^2 \Delta\, \frac{\partial}{\partial k}\right) \langle D\rho \rangle    (6.159)
It is interesting to point out that we have replaced a complex second-order elliptical
partial differential equation in x, x′ with a real first-order partial differential equation in k, Δ. This presents us with an equation that (in principle at least) should be easier to solve than the original Liouville–von Neumann equation. The physics carried by ρ(k, Δ) has not changed; we have simply reduced the effort required to derive (or
solve for) its time evolution.
Finally, consider the equations of motion for the moments ⟨x⟩ and ⟨p⟩ corresponding to the expected values of the position and momentum. Using the characteristic equation and the equation of motion for ⟨Dρ⟩ above, we find

\frac{\partial \langle x \rangle}{\partial t} = -\frac{i}{m} \left.\frac{\partial}{\partial \Delta} \langle D\rho \rangle\right|_{k,\Delta=0} = \frac{\langle p \rangle}{m}    (6.160)

\frac{\partial \langle p \rangle}{\partial t} = i m\omega^2 \left.\frac{\partial}{\partial k} \langle D\rho \rangle\right|_{k,\Delta=0} = -m\omega^2 \langle x \rangle    (6.161)
which are what we expect to find for the Ehrenfest equations of motion for a harmonic
oscillator. In Problem 6.3 we derive more general equations of motion for a particle
in a potential.

6.8 PROBLEMS AND EXERCISES


Problem 6.1 Starting from the equations of motion for the density matrix in Equa-
tion 6.54, verify, using Laplace transform techniques, that Equation 6.69 is correct.

Problem 6.2 Show that the Wigner distribution for the harmonic oscillator is given
by
W(x, p) = \frac{(-1)^n}{\pi}\, e^{-2H(p,x)/\omega}\, L_n(4H/\omega)
where L n is a Laguerre polynomial and H is the Hamiltonian for a classical oscillator.

Problem 6.3 Show that the relations given in Equations 6.154 to 6.157 are correct.
Using these, derive the equations of motion for a particle with mass m in a general
polynomial potential


V = \sum_{n=0}^{\infty} \frac{x^n}{n!} \left(\frac{d^n V}{dx^n}\right)_{x=0}    (6.162)

Hint: Evaluating the commutators is straightforward; however, you need to be


careful that the p and x in the exponent are still operators and do not commute:

[\hat{p}, D] = e^{ik\Delta/2}\, [p,\, e^{ikx} e^{i\Delta p}]    (6.163)

    = e^{ik\Delta/2}\, [p,\, e^{ikx}]\, e^{i\Delta p}    (6.164)

[p, e^{ikx}]\, f = -i\frac{d}{dx}(e^{ikx} f) + i\, e^{ikx}\frac{df}{dx}    (6.165)

    = -i(ik)\, e^{ikx} f = k\, e^{ikx} f    (6.166)

thus, [\hat{p}, D] = kD.

The second is a bit trickier since it involves putting p in the exponent when evaluating [x, e^{iΔp}]. For this, expand the exponent

[\hat{x}, e^{i\Delta p}] = \sum_{n=0}^{\infty} \frac{(i\Delta)^n}{n!}\, [\hat{x}, \hat{p}^n]    (6.167)

then use the relation [A, B^n] = n B^{n-1}[A, B],

[\hat{x}, e^{i\Delta p}] = \sum_{n=1}^{\infty} \frac{(i\Delta)^n}{n!}\, n\, \hat{p}^{n-1}\, [\hat{x}, \hat{p}]    (6.168)

and [x, p] = i (keeping ħ = 1, pulling out an iΔ, and changing the range of the summation index since the n = 0 case vanishes),

[\hat{x}, e^{i\Delta p}] = -\Delta \sum_{n=1}^{\infty} \frac{(i\Delta)^{n-1}}{(n-1)!}\, \hat{p}^{n-1}    (6.169)

again, changing summation index and doing the sum,

[\hat{x}, e^{i\Delta p}] = -\Delta\, e^{i\Delta p}    (6.170)

Thus, we arrive at the equations above.

Problem 6.4 When a dissipative bath is included under a certain set of assumptions,
the equations of motion for the density matrix can be written as


i\dot{\rho} = \frac{1}{2m}[p^2, \rho] + \frac{m\omega^2}{2}[x^2, \rho] - \Lambda[x, [x, \rho]] + \gamma[x, \{p, \rho\}]    (6.171)

where Λ and γ are constants and {A, B} denotes the anticommutation relation: {A, B} = AB + BA.

1. Derive the equivalent equation of motion for ρ(k, Δ).

2. Taking a Gaussian form for ρ(k, Δ),

   \rho(k, \Delta) = \exp\!\big(-(c_1 k^2 + c_2 k\Delta + c_3 \Delta^2 + ic_4 k + ic_5 \Delta + c_6)\big)    (6.172)

   where c_1 ... c_6 are time-dependent coefficients, derive the appropriate equations of motion for \dot{c}_1 ... \dot{c}_6 for a particle in a dissipative environment.

3. Using Mathematica or other means, find numerical solutions for the coefficients given some appropriate initial conditions, and plot these vs. time. Comment upon how the system relaxes as you vary γ and Λ.

4. Using the numerical solutions, make contour plots of ρ(k, Δ) at various times as it relaxes.

5. Using the Gaussian form

   \rho(k, \Delta) = \exp\!\big(-(c_1 k^2 + c_2 k\Delta + c_3 \Delta^2 + ic_4 k + ic_5 \Delta + c_6)\big)    (6.173)

   derive the corresponding Wigner function and make either contour or three-dimensional plots of W(x, p) at the same time steps as in your plot of ρ(k, Δ). How does this compare with how you would expect a classical system to behave under dissipative conditions?
Partial Solutions:
2. You should arrive at the following equations of motion:

\dot{c}_1 = c_2/m    (6.174)
\dot{c}_2 = 2c_3/m - 2m\omega^2 c_1 - \gamma c_2    (6.175)
\dot{c}_3 = \Lambda - m\omega^2 c_2 - 2\gamma c_3    (6.176)
\dot{c}_4 = c_5/m    (6.177)
\dot{c}_5 = -m\omega^2 c_4 - \gamma c_5    (6.178)
\dot{c}_6 = 0    (6.179)

5. The Wigner function corresponding to the Gaussian form of ρ(k, Δ) is

W(x, p) = \frac{1}{2\pi}\frac{1}{\sqrt{4c_1 c_3 - c_2^2}}\, \exp\!\left(\frac{c_3(c_4 - x)^2 + (c_5 - p)\big(c_1(c_5 - p) + c_2(x - c_4)\big)}{c_2^2 - 4c_1 c_3}\right)    (6.180)
Representative plots are given in the Mathematica notebooks.

Problem 6.5 Consider the time evolution of the Wigner function for a free particle,


\frac{\partial W}{\partial t} + \frac{p}{m}\nabla_r W = 0

Using the k, Δ representation, derive and solve the equations of motion for ρ(k, Δ) assuming that at time t = 0 the initial Wigner function is given by W(x, p; 0). Using ρ(k, Δ; t), derive expressions for ⟨(x − ⟨x⟩)²⟩(t) and ⟨(p − ⟨p⟩)²⟩(t). Are these the same as you would expect for the time evolution of a free particle using the Schrödinger equation?

Problem 6.6 Let ρ be the density operator for an arbitrary system where |χ_l⟩ and π_l are its eigenvectors and eigenvalues. Write ρ and ρ² in terms of the |χ_l⟩ and π_l. What do the matrices representing these operators look like in the {|χ_l⟩} basis, first in the case where ρ describes a pure state and second where ρ describes a mixed state? Begin by showing that in the pure case, ρ has a single nonzero diagonal element equal to 1, while for a statistical mixture, it has several diagonal elements between 0 and 1. Show that ρ corresponds to a pure case if and only if tr[ρ²] = 1.

Problem 6.7 Consider a system with density matrix ρ evolving under Hamiltonian
H (t). Show that tr [ρ 2 (t)] does not change in time. Can the system evolve to be
successively a pure state and a statistical mixture of states?

Problem 6.8 Consider a global system consisting of two subspaces (1) and (2). A and B denote operators acting in the state space E(1) ⊗ E(2). Show that the partial traces tr₁(AB) and tr₁(BA) are equal only if A or B acts only in space E(1). That is, A or B can be written as A = A(1) ⊗ I(2) or B = B(1) ⊗ I(2).
Note: tr₁[·] means that you take the trace ONLY over space (2). For example, take the case where we have states |a, i⟩ spanning E(1) ⊗ E(2). Then

\text{tr}_1[A] = \sum_i A_{ai,a'i}

where A_{ai,a'i} = ⟨ai|A|a'i⟩ are the matrix elements of operator A.
7 Excitation Energy Transfer

A photoexcited molecule is rarely a stable species. The fact that we have just pumped
in excess of 1 to 3 eV of energy into a small molecule through the interaction with a
visible or UV photon means that this energy is likely to be rapidly dissipated to other
degrees of freedom, to phonons in the form of heat, to other electronic states of the
system via intersystem crossing or nonradiative decay, or through emission of photons.
Typically we think of photoemission as leading to some observable spectroscopic
signal. However, if in fact there are neighboring molecules that can absorb the emitted
photon, the excitation that started off localized on one molecule may be transferred
reversibly or irreversibly to the next. Typically, this is an irreversible process since
the time scale for emission is roughly a thousandfold slower than the time scale for
intramolecular vibrational relaxation and reorganization of the surrounding media.
Thus, at each energy transfer event, some energy is lost to heat.
Figure 7.1 shows the various energy transfer and relaxation events that can occur
following photo-excitation.
In this chapter, we explore the basis for excitation energy transfer between
molecules. We begin with a discussion of irreversibility in a quantum mechanical
system. We shall leave the molecular-level details of what causes this irreversibility
for later, focusing our attention upon a phenomenological treatment in which we
introduce in a rather ad hoc way the requisite decay times. Following this, we will
consider how to compute the exciton coupling matrix element between molecular
species using modern quantum chemical approaches.

FIGURE 7.1 Possible photochemical pathways following excitation. The dashed lines indicate
non-radiative processes whereas the solid lines indicate radiative. A = excitation of the donor
molecule from its singlet ground state to one of its singlet excited states: S1 and S2 (IC =
internal conversion, F = fluorescence, P = phosphorescence). The FRET process corresponds
to the transfer of the excitation from one molecule to the next.


7.1 DIPOLE–DIPOLE INTERACTIONS


Molecules interact with each other at a distance via Coulomb forces determined by
the shape and polarizibility of the electronic density surrounding each of them. In
general, we work in the limit that a given pair of molecules are far enough apart
that electron exchange and correlation contributions can be safely ignored. Thus, the
interaction can be written as
 
V_{ab} = \frac{1}{2}\int d^3 r_a \int d^3 r_b\, \frac{e^2\, \rho_a(\mathbf{r}_a)\, \rho_b(\mathbf{r}_b)}{|\mathbf{r}_a - \mathbf{r}_b|}    (7.1)

where ρ_a and ρ_b are the transition densities of molecules A and B, respectively,
between the initial and final electronic states. In loose terms, the transition density
can be thought of as the induced charge oscillations in the ground-state electronic
density in response to a linear oscillating driving force (that is, the electromagnetic
field) at the transition frequency. If the distance, R, between A and B is large compared
to the size of either molecule, a, we can safely expand the integrand in terms of its
multipole moments and write the interaction in terms of the transition dipole moments
of each molecule
 
M = \frac{1}{R^3}\left[\mathbf{p}_A \cdot \mathbf{p}_B - \frac{3}{R^2}(\mathbf{p}_A \cdot \mathbf{R})(\mathbf{p}_B \cdot \mathbf{R})\right]    (7.2)
where R is a vector extending from the charge center of A to the charge center of B.
Setting this to be the z axis, we can write M as a function of the angles (see Figure 7.2)

\chi(\theta_a, \theta_b, \phi) = \sin\theta_a \sin\theta_b \cos\phi - 2\cos\theta_a \cos\theta_b    (7.3)

If all angles are statistically possible, we obtain the mean value

\langle \chi^2 \rangle = 2/3    (7.4)
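The mean value in Eq. (7.4) follows from averaging χ² over isotropically distributed dipole orientations; a quick Monte Carlo check (sample size and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)        # fixed seed, arbitrary
n = 500_000
ca = rng.uniform(-1.0, 1.0, n)        # cos(theta_a): uniform for isotropic orientations
cb = rng.uniform(-1.0, 1.0, n)        # cos(theta_b)
sa, sb = np.sqrt(1 - ca**2), np.sqrt(1 - cb**2)
phi = rng.uniform(0.0, 2*np.pi, n)

chi = sa * sb * np.cos(phi) - 2 * ca * cb     # Eq. (7.3)
print((chi**2).mean())                        # ≈ 2/3, Eq. (7.4)
```

Sampling cos θ uniformly on [−1, 1] is what makes the orientations isotropic; sampling θ uniformly instead would bias the average.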

FIGURE 7.2 Schematic of dipole–dipole interaction between a donor-acceptor pair. Here p_A
and p_B indicate the relative orientation of the transition dipoles associated with the donor or
acceptor species. The molecular framework represents either a solvent matrix or the actual
backbone of a polymer or biomolecule in which the chromophore pairs are intercalated.
Excitation Energy Transfer 205

Before moving forward, let us make a back-of-the-envelope estimate of the transfer rate between two molecules treated as a two-level system. Taking the molecules to be randomly oriented, the matrix element squared that would go into a golden rule expression becomes

M_{ab}^2 = \frac{2}{3}\, \frac{|p_A|^2 |p_B|^2}{R^6}    (7.5)

The transition dipoles can be replaced by their oscillator strengths, viz,

p_A^2 = \frac{\hbar e^2}{2m\omega}\, f_A    (7.6)
Assuming only radiative transitions are allowed, the lifetime of A of B is given by
1 fA
= (7.7)
τA τcl
where, as we discussed previously, \tau_{cl} is the decay time for a classical electronic oscillator, \tau_{cl} = \frac{3}{2}\,\frac{mc^3}{e^2\omega^2}.
Now, consider the ratio of the transfer rate W to the radiative rate
W\tau_A = \frac{2}{\hbar^2}\,|M_{ab}|^2\, T_2\, \tau_A   (7.8)
Inserting our expression for M_{ab}^2 from above,

W\tau_A = \frac{3}{8\pi}\, f_B \left(\frac{\lambda}{R}\right)^6 \frac{T_2}{\tau_{cl}}   (7.9)
It is an important practice to learn to insert numbers into equations such as this in order
to determine their range of validity for molecular-scale systems. For example, if the
radiative linewidth is on the order of 100 cm−1 , then T2 ≈ 50 fs. Taking τcl ≈ 10 ns,
then for f_B = 1 the characteristic distance for which W\tau_A = 1 is R_o ≈ 0.02λ. For typical molecules with electronic transitions in the UV/visible region, λ ≈ 300 nm, so R_o ≈ 6 nm, or 60 Å. This is consistent with experimental values of around 50 Å for most molecular systems. For small aromatic rings, a ≈ 10 Å, so W\tau_A = 1 for molecules separated by about 10 molecular radii. Notice that this estimate is
independent of the oscillator strength of A.
At what distance can the interaction be considered strong? Consider the distance
for which
\frac{2}{\hbar^2}\,|V_{ab}|^2\, T_2\, \tau_b \approx 1   (7.10)
From what we have just seen above, it is the distance for which

R \approx R_o\, (\tau_B/\tau_A)^{1/6}   (7.11)

If the two molecules are identical or even similar, \tau_B \approx \tau_A, and only when R \approx R_o does the interaction become sufficiently strong.
206 Quantum Dynamics: Applications in Biological and Materials Systems

FIGURE 7.3 Resonant energy transfer between donor (a) and acceptor (b) energy levels.

From the discussion above, it should be clear that if we can measure the radiative transfer rate between two distinct species, we have the means of measuring the instantaneous distance of separation between the two, provided the transfer rate is fast compared to the time scale for the relative motion between A and B. Because of this, and the advent of single-molecule spectroscopic techniques that can selectively excite and collect photons from what amounts to single chromophores, we can effectively monitor experimentally the dynamics of complex molecular reactions through a technique termed "Förster resonant energy transfer," or FRET. A schematic of this process is shown in Figure 7.3.

7.2 FÖRSTER’S THEORY


The model presented above is highly instructive and enables us to develop a theoretical
“feel” for energy transfer between donor and acceptor chromophores. It is far from
complete but can be easily extended to include more complex models, for example, ones that take into account the intramolecular vibrational motions.
An alternative approach, first developed by German physicist Theodor
Förster 64–67 in 1948, proves to be highly useful in the case where the emission
and absorption spectra of the donor and acceptor species are broad and diffuse. In
this limit, we cannot make the assumption that the transverse relaxation time is short
or that we are dealing with a two-level system.
Förster suggested that one should treat the spectra as being continuous and applied the golden rule/first-order perturbative result for the case of a continuous spectrum


dW = \frac{2\pi}{\hbar}\,|V_{ab}|^2\, \delta(\Delta E_a - \Delta E_b)   (7.12)

where \Delta E_a - \Delta E_b = (E_a' - E_a) - (E_b' - E_b) is the difference in energies, as shown in
Figure 7.3. This ensures that energy is conserved upon performing the final integration
over energies.
Following Förster’s approach, we write the initial and final wave functions as

\Psi_1 = \phi_{a'}\,\phi_b\, \Theta_{a'}(E_a')\,\Theta_b(E_b)   (7.13)

\Psi_2 = \phi_{b'}\,\phi_a\, \Theta_a(E_a)\,\Theta_{b'}(E_b')   (7.14)

where the φ's are the ground and excited electronic states of the system and the Θ's are vibrational wave functions. We assume here that the Born–Oppenheimer approximation is valid so that the Θ's represent vibrational motion on a given potential energy surface associated with either the ground or excited state of either molecule.
We denote the energy origin of each surface with E_a, E_a', and so forth. Taking these as our states, we can write the coupling matrix element as
V_{12} = \langle\Psi_1|V|\Psi_2\rangle   (7.15)

      = V_{ab}\,\langle\Theta_{a'}(E_a')|\Theta_a(E_a)\rangle\,\langle\Theta_b(E_b)|\Theta_{b'}(E_b')\rangle   (7.16)

      = V_{ab}\, S_a(E_a', E_a)\, S_b(E_b, E_b')   (7.17)


where Sa and Sb are the matrices of overlap integrals between vibrational wave func-
tions and
V_{ab} = \chi\, |p_a||p_b|/R^3   (7.18)
is the dipole–dipole coupling matrix element.
Notice that in writing this expression we have made the assumption that the elec-
tronic transition is effectively decoupled from the vibrational dynamics and takes
place in a fixed frame of the nuclei. This is often termed the “Condon approximation”
after Eugene Condon. Its justification is similar to that of the Born–Oppenheimer
approximation.
Now, let g'(E_a') be the energy distribution for molecule A when it is in its excited electronic state and g(E_b) be the energy distribution for B in its ground electronic state. If we assume that intramolecular vibrational relaxation is fast compared with the transfer rate, then the mechanism is that suggested by Figure 7.3: upon excitation, the donor (A) relaxes in its excited state to some lowest vibrational state and then transfers its remaining electronic energy to B. Assuming thermalization is fast compared with the transfer rate, both of these distributions represent thermal populations.
Pulling all this together yields
 

W = \frac{2\pi}{\hbar}\, V_{ab}^2 \int \left[\int g'(E_a')\, S_a^2(E_a', E_a' - E)\, dE_a'\right] \left[\int g(E_b)\, S_b^2(E_b, E_b + E)\, dE_b\right] dE   (7.19)

  = \frac{2\pi\chi^2}{R^6} \int \left[p_a^2 \int g'(E_a')\, S_a^2(E_a', E_a' - E)\, dE_a'\right] \left[p_b^2 \int g(E_b)\, S_b^2(E_b, E_b + E)\, dE_b\right] d\omega   (7.20)

The terms in square brackets are directly related to experimentally observable quantities: the first is the normalized emission spectrum of the donor (A) and the second is the normalized absorption spectrum of the acceptor (B):
 
\frac{1}{\hbar}\, p_a^2 \int g'(E_a')\, S_a^2(E_a', E_a' - E)\, dE_a' = p_a^2\, G_a(\omega)   (7.21)


and
 
\frac{1}{\hbar}\, p_b^2 \int g(E_b)\, S_b^2(E_b, E_b + E)\, dE_b = p_b^2\, G_b(\omega)   (7.22)

where \int G_{a,b}(\omega)\, d\omega = 1.
There is a well-known relation between the lifetime τ , the absorption index μ(ω),
and the transition dipole moment pa :

F_a(\omega) = \frac{4\omega^3}{3\hbar c^3}\, p_a^2\, \tau_a\, G_a(\omega)   (7.23)

\mu_b(\omega) = \frac{4\pi^2\omega}{3\hbar c}\, N_b\, p_b^2\, G_b(\omega)   (7.24)
where Fa (ω) is the normalized radiation spectrum of A given as the number of quanta
per unit frequency range. The second, μb (ω), is the absorption coefficient as per the
Beer–Lambert relation

I(z) = I_o\, e^{-\mu(\omega) z}   (7.25)

where Nb is the number of acceptor molecules per cm3 and z is the thickness of the
sample. Finally, we arrive at a well-known result that

W = \frac{9\chi^2 c^4}{8\pi N_b \tau_a R^6} \int F_a(\omega)\, \mu_b(\omega)\, \omega^{-4}\, d\omega   (7.26)
whereby the rate is obtained by taking the overlap integral between the emission
spectrum of A and the absorption spectrum of B, multiplied by the appropriate scaling factors. The advantage of this formula is that both spectra can be determined
independently by simple spectroscopic techniques.
The first general requirement for efficient energy transfer is a good degree of spec-
tral overlap between the emission spectrum of the donor species and the absorption
spectrum of the acceptor species. This is determined by the integral in Equation 7.26,
which is often written as J :

J = \int F_a(\omega)\, \mu_b(\omega)\, \omega^{-4}\, d\omega
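Once both spectra are tabulated on a common frequency grid, this overlap integral can be evaluated by simple quadrature. A minimal sketch, with hypothetical Gaussian profiles standing in for the measured F_a(ω) and μ_b(ω) (illustrative only, arbitrary units):

```python
import math

def gaussian(w, w0, sigma):
    """Normalized Gaussian line shape on a frequency axis."""
    return math.exp(-0.5 * ((w - w0) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def overlap_integral(w_grid, F_a, mu_b):
    """Trapezoidal estimate of J = integral of F_a(w) mu_b(w) w^-4 dw."""
    J = 0.0
    for i in range(len(w_grid) - 1):
        dw = w_grid[i + 1] - w_grid[i]
        f0 = F_a[i] * mu_b[i] / w_grid[i] ** 4
        f1 = F_a[i + 1] * mu_b[i + 1] / w_grid[i + 1] ** 4
        J += 0.5 * (f0 + f1) * dw
    return J

# Hypothetical spectra: donor emission peaked just below the acceptor
# absorption, as in a typical FRET pair
w = [3.0 + 0.001 * i for i in range(2001)]      # frequency grid, 3.0-5.0
Fa = [gaussian(wi, 3.8, 0.15) for wi in w]      # donor emission
mub = [gaussian(wi, 4.2, 0.15) for wi in w]     # acceptor absorption
J = overlap_integral(w, Fa, mub)
```

Sliding the acceptor absorption further from the donor emission shrinks J, which is the quantitative content of the "spectral overlap" requirement discussed below.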

Herein, though, lies one of the experimental paradoxes of FRET. The spectral profiles
of the FRET pair cannot be so separated that they have poor overlap, yet we want to
avoid “cross-talk” between the two imaging channels—that is, ideally the donor emis-
sion filter set must collect only the light from the donor and none from the acceptor,
and vice versa for the acceptor. In practice, this can be somewhat realized by employ-
ing short bandpass filters that collect light from only the shorter-wavelength side of
the donor emission and the longer-wavelength side of the acceptor emission. This can
limit somewhat the photon flux from both donor and acceptor during a typical expo-
sure, especially when we bear in mind that these measurements are best performed
under conditions of reduced excitation power, such that we do not accelerate the rates
of bleaching.

Secondly, the rate scales as 1/R^6 due to the dipole–dipole nature of the coupling matrix element. Consequently, we can define the distance at which the transfer rate W is equal to the radiative rate 1/\tau_a by

R_o^6 = \frac{9\chi^2 c^4}{8\pi N_b}\, J   (7.27)

Thus,

W = \frac{1}{\tau_a}\left(\frac{R_o}{R}\right)^6
where Ro is the “Förster radius.” At this distance, energy transfer is 50% efficient.
Often the FRET technique is combined with imaging microscopy techniques to
monitor the proximity of two fluorophores. Since fluorophores can be employed to
specifically label biomolecules and the distance condition for FRET is of the order
of the diameter of most biomolecules, FRET is often used to determine when and
where two or more biomolecules, often proteins, interact within their physiological
surroundings. Since energy transfer occurs over distances of 1–10 nm, a FRET signal corresponding to a particular location within a microscope image provides an additional distance accuracy surpassing the optical resolution (≈0.25 μm) of the light microscope.
Furthermore, the transfer rate depends critically upon the relative orientation of the two transition dipoles. Above, we expressed χ in terms of the relative dihedral angles between the two dipoles; assuming the donor and acceptor species are randomly oriented, \langle\chi^2\rangle = 2/3. However, if the two molecules are tethered to a common backbone, the instantaneous orientation factor χ² will reflect the instantaneous relative orientation of the two dipoles and as such may provide a sensitive probe of the dynamics of the backbone, provided the time scale of the motion is long compared with the experimental time scale.
Finally, we can express all these factors in a general equation in spectroscopic
units (per mol):
W = 8.785\times 10^{-23}\; \frac{\chi^2}{n^4}\, \frac{J}{\tau_o R^6}
where χ² is the orientation factor, n the refractive index of the medium, τ_o the radiative lifetime of the donor, R the distance (in cm) between the donor and acceptor, and J the spectral overlap (in units of cm⁶ mol⁻¹) between the donor fluorescence spectrum and acceptor absorbance spectrum. We can also write the Förster radius (in cm) as

R_o^6 = 8.785\times 10^{-5}\; \frac{\chi^2\, \Phi_D\, J}{n^4}
where \Phi_D is the quantum efficiency of the donor. The efficiency of the transfer may be evaluated by comparing the fluorescence lifetime of the donor in the presence (\tau_a) and in the absence (\tau_a^o) of the acceptor, or by the quantum yield in the presence (\Phi_D) and in the absence (\Phi_D^o) of the acceptor:

E = 1 - \frac{\tau_a}{\tau_a^o} = 1 - \frac{\Phi_D}{\Phi_D^o}
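The steep distance dependence behind these expressions can be sketched in a few lines. Treating transfer, W = (1/τ_a)(R_o/R)⁶, and radiative decay, 1/τ_a, as competing first-order channels gives E = W/(W + 1/τ_a) = 1/(1 + (R/R_o)⁶); the R_o = 50 Å below is an assumed value, not one from the text:

```python
def transfer_rate(R, R0, tau_a):
    """Förster rate W = (1/tau_a) (R0/R)^6."""
    return (R0 / R) ** 6 / tau_a

def fret_efficiency(R, R0):
    """Efficiency when transfer competes with radiative decay:
    E = W / (W + 1/tau_a) = 1 / (1 + (R/R0)^6)."""
    return 1.0 / (1.0 + (R / R0) ** 6)

R0 = 50.0  # assumed Förster radius, in angstroms
profile = {R: fret_efficiency(R, R0) for R in (25.0, 50.0, 100.0)}
```

The efficiency passes through 50% exactly at R = R_o and falls from near unity to near zero over roughly a factor of two in distance, which is what makes FRET a useful "spectroscopic ruler."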


FIGURE 7.4 (a) Model open/closed loop structures for DNA hairpin. (b) Equilibrium thermal
melting curves for the DNA hairpin loop. The closed-to-open transition was monitored by the
ratio of fluorescence intensity of TMR in a double-labeled sample to that of a TMR-only-
labeled sample. (a) 10 mM Tris/1 mM EDTA (pH 7.5) and 100 mM NaCl; (b) 10 mM Tris and
20 mM MgCl2 . Solid lines are fits to a two-state model (from Ref. 69).

Because the energy transfer rate is highly sensitive to the distance of separation
between the donor and acceptor pairs, we can use resonant energy transfer to accu-
rately measure inter- and intramolecular distances. Over 30 years ago, Haas et al.
showed that FRET techniques could be used to monitor the end-to-end chain diffu-
sion of a tagged biopolymer over a range of 2–10 nm.68 In Figure 7.4 we show an
example of FRET measurements that can be used to monitor the melting of a DNA
hairpin loop where fluorescence from the donor is quenched by the proximity of a
tagged acceptor. Here, in the work by Wallace et al.,69 FRET techniques were used to
determine the thermodynamic parameters of the closed-to-open transition of a model
DNA oligomer in which the ends were terminated by the dye molecules carboxyte-
tramethylrhodamine (TMR), which is a fluorescence donor, and indodicarbocyanine
(Cy5), here the fluorescence acceptor. DNA hairpin-loop structures fluctuate between
different conformations and are involved in various biological functions including
gene expression and regulation.70,71 Loops have also been used in biotechnology as
biosensors and molecular beacons.72
In Figure 7.4b are the equilibrium melting curves for a model DNA hairpin as
determined by comparing the FRET intensities for the donor-acceptor labeled sam-
ple to that of the donor-only labeled sample. The high-temperature/high-fluorescence
intensity limit corresponds to the case where the two ends are farthest apart. Conse-
quently, the fluorescence from the TMR is not quenched or transferred to the Cy5 as
efficiently as in the low-temperature case.

7.3 BEYOND FÖRSTER


Förster’s approach is only valid for distances that are large on the length scale of
the individual chromophores involved. However, in many systems, particularly those
involving π-conjugated molecules, the distance of separation between chromophore

units is oftentimes comparable to the actual size of the molecule. In such cases, the
Förster approach is incapable of providing an accurate estimate of the energy transfer
rate. The problem stems from the fact that at sufficiently short ranges, the donor
molecular “feels” segments of the acceptor species more strongly than others. Hence,
one needs to account for the inhomogeneities in the transition densities about the
donor and acceptor.
One improvement on the Förster scheme was proposed by Beenken and Pullerits73
and given much more rigorous justification by Barford74 whereby the total transition
dipole moment for a polymer chain is projected onto individual monomeric units and
the total interaction is summed as a line-dipole:
 
M = \sum_{ij} \frac{1}{R_{ij}^3}\left[\vec{p}_{Ai}\cdot\vec{p}_{Bj} - \frac{3}{R_{ij}^2}(\vec{p}_{Ai}\cdot\vec{R}_{ij})(\vec{p}_{Bj}\cdot\vec{R}_{ij})\right]   (7.28)

where the \vec{p}_{Ai} and \vec{p}_{Bj} are fractional transition dipoles that obey the sum rule

\vec{p}_A = \sum_i \vec{p}_{Ai}
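A minimal numerical sketch of the line-dipole idea (hypothetical geometry — two parallel chains with the total moment split evenly over the repeat units; not code from the text):

```python
import math

def dipole_coupling(p1, p2, r1, r2):
    """Point dipole-dipole interaction of Eq. 7.2 (arbitrary units)."""
    R = [b - a for a, b in zip(r1, r2)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    R2 = dot(R, R)
    return (dot(p1, p2) - 3.0 * dot(p1, R) * dot(p2, R) / R2) / R2 ** 1.5

def line_dipole_coupling(p_total, length, n_units, separation):
    """Eq. 7.28: distribute p_total over n_units fractional dipoles along x
    on two parallel chains separated along z."""
    frac = [p_total / n_units, 0.0, 0.0]   # fractional dipoles obey the sum rule
    xs = [length * (i + 0.5) / n_units for i in range(n_units)]
    V = 0.0
    for xa in xs:
        for xb in xs:
            V += dipole_coupling(frac, frac, (xa, 0.0, 0.0), (xb, 0.0, separation))
    return V

def point_dipole_coupling(p_total, length, separation):
    """Whole-chain point-dipole approximation (dipoles at chain centers)."""
    p = [p_total, 0.0, 0.0]
    return dipole_coupling(p, p, (0.5 * length, 0.0, 0.0), (0.5 * length, 0.0, separation))
```

Consistent with Figure 7.5, when the separation is small compared with the chain length the whole-chain point-dipole value exceeds the line-dipole sum, while the two prescriptions agree once the separation greatly exceeds the chain length.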

As seen in Figure 7.5, the line-dipole approach does a far better job of approaching the
Coulomb coupling limit than the point-dipole approximation for linear polymers. Only
when the distance of separation between the charge centers of the two chains is slightly
larger than the actual chain lengths (in this case, 64 Å) do the three approaches agree.
Notice, also, that the point-dipole approach consistently overestimates the coupling.
For typical packing distances of R ≈ 4 –10 Å, the point-dipole approach can be as
much as 2 to 4 orders of magnitude too large. For parallel polyene chains, it can be
shown analytically that the Coulomb coupling integral between donor and acceptor
species VD A scales as the chain length L when L is smaller than the separation
distance and VD A ∝ 1/L when L is larger than their separation within a plane-wave
approximation of the excitonic wave functions:74

θ R

The scaling of VD A ∝ L for short chain lengths is a reflection of the fact that
at these length scales the point-dipole approximation may be applied to the entire
chain, implying that VD A ∝ L. Similarly, the scaling of VD A ∝ L −1 for collinear
chains in the plane-wave approximation is easy to understand for chain lengths that
are large compared to their separation. In this case, the exciton dipoles are uniformly
distributed along both chains of length L. As a result, the double line integral of r −3
yields the L −1 scaling.74
The scaling of VD A with L for long parallel chains is somewhat less intuitive
since it implies that the probability for exciton transfer between neighboring chains is


FIGURE 7.5 (a) Line-dipole approach for computing couplings based upon a local transition dipole approximation between two polyfluorene oligomers with slightly different conformations. Bold arrows are the total transition dipole moment for each polymer chain while the small arrows indicate the projection of the total moment onto the individual repeating units. (b) Excitonic coupling V_DA for the lowest singlet excited states between two transoid sedecithiophene oligomers in a skew-line arrangement comparing the point-dipole (dotted), the line-dipole (dashed), and Coulomb integral (solid) results. The squares indicate the half-splitting energies from ZINDO calculations of the dimer. The arrow at 64 Å indicates the length of a single 16-ring sedecithiophene chain (figure from Ref. 73).

a decreasing function of the chain length. The scaling can be understood in one of two
ways. First, if the distance of separation between the two polyenes, R, is on the order
of a monomer length, as in the case of π stacked polyenes, VD A becomes a periodic
function of the relative alignment (or shift) of the two chains. For polyene-polyene
dimers, one has “in-phase” or “out of phase” stacking configurations depending upon
whether or not the C=C bonds on one chain are aligned with the C=C bonds on the
other polyacetylene chain. This results in a modulation of VD A as the two chains
are displaced relative to each other. As the chains become farther and farther apart,
this periodic variation vanishes due to interference effects from increasingly longer-
ranged local transition densities. Ultimately, the coupling integral will vanish in the
asymptotic limit even for chain separations greater than a few monomers. Alternatively, one can imagine that as two finite-length dipoles slide longitudinally relative to each other, the sign of the dipole–dipole coupling changes once \cos\theta = 1/\sqrt{3}. Consequently, any small variation in the alignment results in a vanishing of V_{DA} as L becomes large.

7.4 TRANSITION DENSITY CUBE APPROACH


Thus far we have assumed that the interactions can be computed with the dipole–
dipole approximation that assumes that the donor and acceptor molecules are far
enough apart so that the distance of separation, R, is much larger than their size.
However, as experimental techniques advanced, especially in the 1990s, it became apparent that in photosynthetic and other light-harvesting systems, the energy transfer between pigment molecules was much faster than one would expect based upon a dipole–dipole coupling. This is especially evident for systems with extended electronic states, such as conjugated polymers (as in the case of optical electronic devices) and carotenoids in LH1 (for a review, see, for instance, V. Sundström, et al., J. Phys. Chem. B, 1999, 103, 2327–2346).
As noted earlier in this chapter, in the standard approach developed by Förster,
one uses the lowest-order term in the multipole expansion of the Coulomb matrix element between |D^*A\rangle and |DA^*\rangle, where D and D^* denote ground- and excited-state wave functions for the donor (D) or acceptor (A) molecules:

V_{DA} = \kappa\, \frac{|\mu_D||\mu_A|}{R_{DA}^3}
where κ is an orientation factor

\kappa = \hat{\mu}_D\cdot\hat{\mu}_A - 3\,(\hat{\mu}_D\cdot\hat{n})(\hat{\mu}_A\cdot\hat{n})

where \hat{\mu}_{D,A} is the unit vector giving the direction of the transition dipole moment for
the donor or acceptor species and n is a unit vector pointing from the charge center
of the donor to the charge center of the acceptor.
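This orientation factor transcribes directly into code (a small sketch; all three arguments are assumed to be unit vectors). κ ranges from −2 for a head-to-tail arrangement to +1 for parallel, side-by-side dipoles:

```python
def kappa(mu_d, mu_a, n):
    """Orientation factor: kappa = mu_d . mu_a - 3 (mu_d . n)(mu_a . n),
    with mu_d, mu_a, and n all unit vectors."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(mu_d, mu_a) - 3.0 * dot(mu_d, n) * dot(mu_a, n)
```

For example, two parallel dipoles perpendicular to the separation axis give κ = 1, collinear (head-to-tail) dipoles give κ = −2, and perpendicular dipoles both normal to the separation axis give κ = 0.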
As noted, this approximation is no longer valid when R ≈ a, that is, when
the donor and acceptors are within a few molecular radii. At short range, higher-
order multipoles must be taken into account to properly describe the charge density
associated with the transition. Secondly, we have ignored the direct overlap between

the wave functions on each molecule. Consequently, the exchange interaction must
also be included once the two molecules become very close. In fact, if the two species
are too close, the assumption that the two molecules are “independent” is simply too
severe and one should really use a full quantum chemical treatment for the whole
system.
The most robust approach aside from a full quantum chemical treatment is to
compute the Coulomb matrix element directly from the donor and acceptor wave
functions:75
   
V_{DA} \approx \left\langle D^* A \left| \frac{e^2}{|\vec{r}_a - \vec{r}_d|} \right| D A^* \right\rangle   (7.29)

where ra denotes the coordinates of electrons associated with the acceptor molecule
and rd the coordinates for the electrons associated with the donor molecule. Under
this assumption, the above integral can be recast as an integral over two densities,

M_D(\vec{r}\,) = \langle \vec{r}\,|D\rangle\langle D^*|\vec{r}\,\rangle   (7.30)

and

M_A(\vec{r}\,) = \langle \vec{r}\,|A\rangle\langle A^*|\vec{r}\,\rangle   (7.31)

where |D\rangle\langle D^*| is the excitation operator (or projection operator) constructed by taking the outer product between the ground- and excited-state wave functions of the
donor molecule (integrating over the spin coordinate) and likewise for the acceptor
molecule. Both of these quantities can be computed using separate excited-state quan-
tum chemical calculations involving the donor and acceptor species. The advantage,
then, is that the accuracy of the exciton-exciton coupling is determined entirely by the
accuracy of the quantum chemical approach used in determining the excited states of
the donor and acceptor molecules.
Numerically, this is implemented by approximating the transition densities as
M_D(ijk) = \delta x\, \delta y\, \delta z \int ds \int_{x_i}^{x_i+\delta x}\! dx \int_{y_j}^{y_j+\delta y}\! dy \int_{z_k}^{z_k+\delta z}\! dz\; \langle \vec{r}_{ijk}|D\rangle\langle D^*|\vec{r}_{ijk}\rangle   (7.32)

where the integration is only over a small “voxel” or volume cell of dimension
{δx, δy, δz}. Such volume renderings are very useful for analysis by external pro-
grams since they are essentially independent of the choice of basis functions used in
quantum chemical routines used to generate the data. The final integral is constructed
by taking
 
V_{DA} = \int\!\!\int \frac{M_D(\vec{r}_D)\, M_A(\vec{r}_A)}{|\vec{r}_A - \vec{r}_D|}\, d\vec{r}_D\, d\vec{r}_A   (7.33)

where the integrals are over the full three-dimensional volume. The de facto data
format for the transition densities is the format used by the Gaussian quantum chemical
code.76 This format is also used by the Orca code77 and Qchem.
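On a grid, Eq. 7.33 reduces to a double sum over voxel pairs, V_DA = Σ_ij M_D(i) M_A(j)/|r_i − r_j|. A toy sketch (hypothetical point-charge "densities" stand in for real transition-density cubes; illustrative only):

```python
import math

def tdc_coupling(density_d, density_a):
    """Discretized Eq. 7.33: V_DA = sum_ij M_D(i) M_A(j) / |r_i - r_j|.
    Each density is a list of (x, y, z, M) voxel entries, with the volume
    element already folded into M (as in Eq. 7.32)."""
    V = 0.0
    for xd, yd, zd, md in density_d:
        for xa, ya, za, ma in density_a:
            r = math.sqrt((xa - xd) ** 2 + (ya - yd) ** 2 + (za - zd) ** 2)
            V += md * ma / r
    return V

# Toy "transition densities": +/- monopole pairs approximating two unit
# transition dipoles, collinear along z and 10 units apart
donor = [(0.0, 0.0, 0.5, +1.0), (0.0, 0.0, -0.5, -1.0)]
acceptor = [(0.0, 0.0, 10.5, +1.0), (0.0, 0.0, 9.5, -1.0)]
V = tdc_coupling(donor, acceptor)
```

For these collinear unit dipoles 10 units apart, the point-dipole result is −2/10³; the monopole-pair sum reproduces it to about one percent, and the difference between the two grows rapidly as the densities approach one another.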

FIGURE 7.6 Transition densities for So → S1 excited states of the four DNA bases (from
Ref. 78).

As an example of the techniques, consider the electronic couplings between DNA


bases. The transition densities for the four DNA bases are shown in Figure 7.6 for
the So → S1 transitions for the pyrimidines (thymine and cytosine) and the So → S1
and So → S2 transitions for the purines (adenine and guanine).78 To calculate these,
the geometries of the DNA bases—adenine, guanine, cytosine, and thymine—in their
most common tautomeric forms were optimized at the MP2/TZVP level of theory
in chloroform using the GAUSSIAN03 suite of programs.76 The optimized geometries
were subsequently used to calculate the singlet excitation energies in the gas phase at the TD-DFT level using the PBE0 functional and the TZVP basis set augmented with diffuse functions on all atoms, as implemented in Orca.77
In order to compare the transition density cube method to the simple point-dipole
approximation, we show here the values of the Coulombic couplings between the
lowest energy ππ ∗ transitions of the adenine and thymine and two π-stacked thymines
as a function of distance between the bases (Figure 7.7). The comparison of the
coupling elements obtained with the two methods, point-dipole approximation and
transition density cube (Figure 7.7), shows a good agreement at a separation between
the bases larger than 5 and 6 Å for the AT pair and two stacked thymines, respectively.
At shorter separations, in the range of 3–4 Å, which is typical for DNA structures, the agreement between the point-dipole approximation and the transition density cube is very poor, with differences between the calculated couplings larger than 100% in the case of the AT pair. The aforementioned good agreement between point-dipole approximation and
nucleobases indicates that the shape and spatial extent of transition density (Figure 7.7)
become important and cannot be neglected at distances between the bases typical for
double helices DNA. The agreement between the two methods becomes very good
in the limit of very large separation (> 8 Å).

[Plot: Coulombic coupling (cm⁻¹) versus base–base distance (Å) for the Watson–Crick AT pair (left) and two stacked, parallel thymines (right), comparing the transition density cube (TDC) and point-dipole (IDA) results.]

FIGURE 7.7 Comparison between point-dipole approximation (left) and exact (numerical)
evaluation (right) of coupling between two DNA bases (from Ref. 78).

For the stacking and pairing distances corresponding to the idealized B-DNA
geometry, the coupling elements calculated with the point-dipole approximation result in several-fold larger absolute values compared with the corresponding values calculated using the transition density cube method. The largest differences
between the two methods are obtained for the couplings between the π-stacked
adenines. For the idealized B-DNA geometry, the coupling between two adenines
located on the same strand calculated using point-dipole approximation, 872 cm−1 ,
is more than fivefold larger compared with the value obtained using the transition density cube, 161 cm−1. The differences in the calculated couplings using the same
two methods for two stacked thymines are much smaller. For this base pair, the
Coulombic coupling calculated using point-dipole approximation is equal to ap-
proximately 230 cm−1 , more than twice the value of 101 cm−1 obtained with tran-
sition density cube. Bouvier et al.79 reported the magnitudes of Coulombic cou-
pling calculated using atomic transition charges model.80 The corresponding values
for the intrastrand nearest neighbors in a standard B-DNA geometry are 170 and
217 cm−1 for the lowest energy π π ∗ transitions of adenine and thymine, respec-
tively. The absolute values of the coupling elements between the second-nearest
neighbors located on the same strand are much smaller. At the point-dipole level
of approximation, the coupling between the two adenines is only 57 cm−1 com-
pared with 9 cm−1 calculated for the same base pairs using transition density cubes.
The coupling between the two thymine bases on the same strand is even smaller—
approximately 3 and 1 cm−1 for point-dipole approximation and transition density
cube methods, respectively. We conclude, then, that while the point-dipole approximation provides a simple and robust way of estimating the electronic coupling between chromophores that are well separated, the simple approximation decidedly breaks down once the donor and acceptor species are brought into close proximity.

SUGGESTED READING
1. Quantum Mechanics in Chemistry, G. C. Schatz and M. A. Ratner (Mineola,
NY: Dover Books, 2002).
2. Principles of Nonlinear Optics and Spectroscopy, S. Mukamel (Oxford:
Oxford University Press, 1995).
3. Optical Resonance and Two-level Atoms, L. Allen and J. H. Eberly (Mineola,
NY: Dover Books, 1974).
4. Principles of Nuclear Magnetism, A. Abragam (Oxford: Oxford University
Press, 1961).
5. The Quantum Theory of Light, R. Loudon (Oxford: Oxford University
Press, 1973).
6. Laser Theory, H. Haken, Handbuch der Physik, Vol. XXV/2 (Springer-
Verlag, Berlin, 1970).
7. Fundamentals of Quantum Electronics, R. H. Pantell and H. E. Puthoff
(New York: Wiley, 1969).
8 Electronic Structure of Conjugated Systems
The underlying physical laws necessary for the mathematical theory of
a large part of physics and the whole of chemistry are thus completely
known, and the difficulty is only that the exact application of these laws
leads to equations much too complicated to be solvable
Paul Dirac
. . . that is solvable by a person armed only
with pencil and paper.

8.1 π CONJUGATION IN ORGANIC SYSTEMS


In organic chemistry, a conjugated system is one in which there is a chain of unsaturated, alternating single and double bonds, as in CH2=CH--CH=CH--CH=CH2. In such polyene chains, the valence orbitals about the carbon atoms are in the sp² hybrid electronic configuration that forms the σ-bonding frame for the chain. The remaining 2p_z orbitals are aligned perpendicular to the σ-bonding plane, and electrons within
these orbitals are more or less delocalized over the entire molecule. In general, this
delocalization decreases the overall kinetic energy of the electrons and lowers the total
electronic energy of the system. Conjugated systems absorb readily in the ultraviolet
and visible regions of the spectrum due to excitations of π bonding to π ∗ antibonding
orbitals. Generally, chains with fewer than eight conjugated bonds absorb in the UV
region while increasingly longer chains absorb at longer wavelengths.
A complete description of the electronic properties of conjugated molecules
requires that we consider all the electrons in the system, and certainly modern quan-
tum chemical codes and computational hardware are up to the challenge for even
moderately large systems. However, with a few judicious approximations we can
arrive at fairly robust models that can oftentimes outperform more exact treatments.
Moreover, having a simple theory that works well allows us to develop a deeper
understanding of the underlying physics of these systems that lead to many of their
spectroscopic and optical-electronic properties.
The crucial approximation we make is that the total electronic wave function can
be factored into two parts:
\Psi = \Psi_\sigma\, \Psi_\pi\, \Psi_{core}   (8.1)
where \Psi_\sigma describes the σ orbitals, \Psi_\pi the π orbitals, and \Psi_{core} the core
orbitals that do not participate in chemical bonding. The justification for this is that, by
and large, there is very little overlap between the σ orbitals that are generally localized
along lines directly connecting the nuclear centers and the π orbitals that are generally
delocalized and lie above and below the σ -bonding plane. Certainly for many systems
there is some degree of mixing between the σ and π manifolds. However, for the



FIGURE 8.1 (Top) Chemical structure of betacarotene. The conjugated domain is indicated
in bold. LUMO (middle) and HOMO (bottom) orbitals of betacarotene superimposed on its
3D structure.

orbitals energetically nearest the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), as well as for the first few excited electronic states, this is a fairly good approximation.
In this chapter we will develop a description of the π electronic structure of
conjugated organic systems. We will start with a simple free-electron model and
finish with a brief description of modern quantum chemical techniques. A simple
model for understanding this trend is the “free-electron model” where we assume
that the electrons within the π bonding network are more or less free particles and
we ignore any electron/electron interaction. If the average C--C bond length is a from
carbon center to carbon center, then an electron within the π orbital in the C_NH_{N+2} polyene is confined to a "box" of length L. From elementary quantum mechanics,
Electronic Structure of Conjugated Systems 221


FIGURE 8.2 Variation of the optical energy gap with the number of repeating units for polythiophene and polyacetylene oligomers as computed using various semiempirical models.

such a system has energy levels

E_n = \frac{\hbar^2\pi^2 n^2}{2m_e L^2}   (8.2)
since each C atom contributes one electron to the π system. So for a system with k C=C double bonds, we have a total of 2k electrons, with the n = k level being the highest occupied and the n = k + 1 level being the lowest unoccupied. Assuming the optical transitions are between these two levels,

    ΔE = (2k + 1) \frac{\hbar^2 π^2}{2 m_e L^2}    (8.3)
The length of the “box” is actually somewhat arbitrary since the π orbital extends a
bit beyond the terminal C atoms, say, 1/2 a C--C bond length (a = 1.4 Å), in which
case we get a length of L = (2k + 1)a. Thus,

    ΔE = \frac{\hbar^2 π^2}{2 m_e a^2} \frac{1}{2k + 1} = \frac{1.54 × 10^5\ \text{cm}^{−1}}{2k + 1}    (8.4)
after inserting values for the other physical constants. A comparison of the free
electron prediction and the observed UV/VIS absorption maxima for polyenes of
various lengths is given in Table 8.1. The agreement is not so good; however, this
simple model does give the correct variation of energy gap with increasing chain
length, ΔE ∝ 1/L.
The free electron model also predicts that for infinitely long chains ΔE → 0 as
L → ∞, implying that a linear fit of the data to a + b/(2k + 1) should give a = 0 as a
y intercept. However, such a fit gives a = 16 775.9 cm−1 and b = 1.396 × 10^5 cm−1.
There are a number of likely reasons for the deviation. The most obvious is that
we have ignored the interactions between electrons. However, this actually turns out
not to be the primary reason for the deviation. The problem is that the C atoms in
TABLE 8.1
Electronic Spectra of Linear Polyenes versus
Number of C=C Double Bonds

  k     ν_free (cm−1)    ν_huck (cm−1)    ν_obs (cm−1)
  1     51333.3          63009.7          62000
  2     30800.0          45119.1          46000
  3     22000.0          37016.5          38000
  4     17111.1          32438.3          33000
  5     14000.0          29503.1          30000
  6     11846.2          27463.0          27500
  7     10266.7          25963.4          26000
  8      9058.82         24814.9          24000
 10      7333.33         23172.0          22000

polyacetylene are not evenly spaced and in fact alternate their bond lengths between
double (C=C) and single (C--C) bonds. As we shall show later in this chapter, this
gives rise to a nonzero energy gap for the infinitely long chain. Finally, we have
neglected the fact that realistic polyene molecules are not straight chains. They can
have twists and kinks and other contortions that can limit the extent of π conjugation.
However, the fact that the energy gap does follow the predicted ΔE ∝ 1/L behavior
indicates that the electrons in the π orbitals are moving ballistically as free particles
more or less unaware of the geometry of the molecule. Consequently, even the π
electronic structure for a molecule such as betacarotene shown in Figure 8.1 can be
well understood within a free-electron model.
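As a quick numerical check, Eq. (8.4) can be evaluated directly from the physical constants and compared against the ν_free column of Table 8.1. A sketch in Python (the bond length a = 1.4 Å is the value quoted above; function names are illustrative):

```python
import math

HBAR = 1.054571817e-34   # hbar in J s
M_E = 9.1093837015e-31   # electron mass in kg
HC = 1.986445857e-23     # h*c in J cm; divides joules into wavenumbers (cm^-1)

def free_electron_gap(k, a=1.4e-10):
    """Free-electron optical gap (cm^-1) for a polyene with k C=C bonds, Eq. (8.4)."""
    L = (2 * k + 1) * a                                         # box length with half-bond overhangs
    dE = (2 * k + 1) * HBAR**2 * math.pi**2 / (2 * M_E * L**2)  # Eq. (8.3), in joules
    return dE / HC                                              # convert to wavenumbers

# Compare against the nu_free column of Table 8.1
for k, nu_tab in [(1, 51333.3), (2, 30800.0), (5, 14000.0)]:
    print(k, round(free_electron_gap(k), 1), nu_tab)
```

The small (under 1%) offset from the tabulated column comes only from rounding the 1.54 × 10^5 cm⁻¹ prefactor.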

8.2 HÜCKEL MODEL


In the free electron model, we assumed that as far as the electron was concerned
there was no energetic cost in moving from one C atom to the other. However, a
more systematic approach can be developed if we assume that there is an energy α
associated with placing an electron in a C 2 pz orbital and an energy β associated with
transferring that electron from one C 2 pz to another neighboring C 2 pz orbital. In
other words, in a basis of C 2 pz orbitals localized about the C atoms in the chain,

    ⟨φ_i|H|φ_j⟩ = β    and    ⟨φ_i|H|φ_i⟩ = α    (8.5)

We shall also assume that the overlap integral between neighboring C 2 pz orbitals
is exactly zero, ⟨φ_i|φ_j⟩ = δ_{ij}, and provide a sufficient basis to expand the electronic
wave functions:

    |ψ⟩ = \sum_{j=1}^{N} c_j |φ_j⟩    (8.6)

Finally, we also will neglect the Coulombic interaction between the electrons. Our
task, then, is to determine the coefficients and the energy eigenvalues. Since we
Electronic Structure of Conjugated Systems 223

have ignored interactions between the electrons, we need to solve the one-electron
Schrödinger equation:
    H|ψ⟩ = E|ψ⟩    (8.7)
In general, H will have nondiagonal elements whenever there is a π bond linking
adjacent C atoms. For a linear chain, only nearest neighbors are linked and H becomes
a tridiagonal matrix; the Schrödinger equation in matrix form reads:
    \begin{pmatrix}
    α & β & 0 & 0 & \cdots & 0 \\
    β & α & β & 0 & \cdots & 0 \\
    0 & β & α & β & \cdots & 0 \\
    \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\
    0 & \cdots & 0 & β & α & β \\
    0 & \cdots & 0 & 0 & β & α
    \end{pmatrix}
    \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{N−1} \\ c_N \end{pmatrix}
    = E \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{N−1} \\ c_N \end{pmatrix}    (8.8)
Introducing a dimensionless energy E′ = (E − α)/β, the equations for the interior
sites (j ≠ 1, N) become

    c_{j−1} + c_{j+1} = E′ c_j    (8.9)
If we write

    c_j = A e^{ikj} + B e^{−ikj}    (8.10)

then

    E′ = e^{ik} + e^{−ik} = 2 \cos k    (8.11)
where A and B are constants and k is a parameter. Imposing the boundary condition
    c_2 = E′ c_1    and    c_{N−1} = E′ c_N    (8.12)
we arrive at two simple equations:
    A + B = 0    (8.13)

    A e^{ik(N+1)} + B e^{−ik(N+1)} = 0    (8.14)

which allow us to deduce the allowed values of k:

    e^{2ik(N+1)} = 1    (8.15)
which leads to

    k = \frac{πn}{N + 1},    n = 1, 2, . . . , N    (8.16)
We must disallow the case for n = 0 since it leads to the trivial solution of c j = 0 for
all values of j. The constant A can be determined by the normalization condition

    \sum_j c_j^2 = 1    (8.17)
224 Quantum Dynamics: Applications in Biological and Materials Systems

Thus, the allowed energy levels for the discretized polymer lattice are given by
    E_n = α + 2β \cos\left(\frac{πn}{N + 1}\right)    (8.18)
As N increases, the width of the energy spectrum tends to 4|β|. Moreover, in the
infinite limit, the energy spacing between each successive energy level shrinks to
zero and k becomes a continuous variable. The coefficients for the eigenstates are
given by (including normalization)

    c_{nj} = \sqrt{\frac{2}{N + 1}}\, \sin\left(\frac{njπ}{N + 1}\right)    (8.19)
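The closed-form levels (8.18) can be checked against a direct numerical diagonalization of the tridiagonal matrix in Eq. (8.8). A minimal sketch (function names are illustrative, not from the text):

```python
import numpy as np

def huckel_chain(N, alpha=0.0, beta=-1.0):
    """Diagonalize the tridiagonal Hückel matrix of Eq. (8.8) for an N-site chain."""
    H = alpha * np.eye(N) + beta * (np.eye(N, k=1) + np.eye(N, k=-1))
    return np.linalg.eigvalsh(H)          # eigenvalues returned in ascending order

def huckel_chain_exact(N, alpha=0.0, beta=-1.0):
    """Closed-form levels E_n = alpha + 2 beta cos(n pi/(N+1)), Eq. (8.18)."""
    n = np.arange(1, N + 1)
    return np.sort(alpha + 2 * beta * np.cos(n * np.pi / (N + 1)))

print(huckel_chain(6))
print(huckel_chain_exact(6))
```

For large N the numerical bandwidth approaches 4|β|, as stated above.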

8.2.1 JUSTIFICATION FOR THE HÜCKEL MODEL


The Hückel model rests upon the following assumptions. First, as noted above, we
discount any electron-electron interaction within the π system itself. Thus, the matrix
elements of our π electron Hamiltonian reduce to

    h_{aa} = \int φ_a \hat{h} φ_a \, dr    (8.20)

where φ_a is a C 2p_z orbital located on the carbon atom a. The operator ĥ is really an


effective operator since we are assuming that it contains all core-level interactions.
The diagonal elements of h aa include (tacitly) an integration over all core electrons
not included in the π system. Since we are talking about C atoms participating in σ
bonding, these matrix elements should be roughly the same for all similar C atoms.
Hence we set h aa = α as a constant. For heteroatoms, one needs to use other values
of α.
Integrals of the form

    h_{ab} = \int φ_a \hat{h} φ_b \, dr    (8.21)

with a ≠ b are termed resonance integrals. If we allow each 2p_z to be given by a


Slater-type orbital (STO) with a radial function

    R_n(r) = N r^{n−1} e^{−ζr}    (8.22)

where n is the principal quantum number with values n = 1, 2, . . .; N is the normal-
ization; and ζ is related to the effective charge of the nucleus. The normalization is
given by

    \int_0^∞ x^n e^{−αx}\, dx = \frac{n!}{α^{n+1}}    (8.23)

which gives

    N = \sqrt{\frac{(2ζ)^{2n+1}}{(2n)!}}    (8.24)
FIGURE 8.3 Overlap integral between two adjacent C 2 pπ orbitals.

The angular terms are given by the real form of the spherical harmonics. Clearly, given
the fact that Rn decays exponentially with radial distance r , the resonance integrals
will be nonvanishing only between C atoms that are close to each other. Hence, for
adjacent C atoms we set h_{ab} = β. Since orbital overlap varies with bond length, one
expects some systematic variation in β with bond length. Mulliken suggested that
β(r) should vary as the overlap integral between two 2p STOs,

    S(2pπ, 2pπ) = e^{−p}\left(1 + p + \tfrac{2}{5}p^2 + \tfrac{1}{15}p^3\right)    (8.25)

where p = 1.625 R/a_o (a_o being the Bohr radius) for C atoms separated by distance R. This
function is plotted in Figure 8.3, where we note that for R > 3 Å the overlap integral
is nearly vanishing. Taking the typical C--C bond length to be 1.39 Å and expanding
about this point gives
    β(R) = β_o (1 − 1.72(R − 1.39 Å))    (8.26)
as an approximate variation of β with C--C bond distance where βo is the resonance in-
tegral at 1.39 Å. We shall later use this approximately linear variation in the resonance
integral as a means for including electron–phonon coupling in these systems.
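Eqs. (8.25) and (8.26) are easily tabulated. In the sketch below the Bohr-radius conversion is an assumed constant and β_o is an arbitrary illustrative value, not a parameter from the text:

```python
import math

A0 = 0.529177  # Bohr radius in angstroms (assumed unit conversion)

def overlap_2ppi(R):
    """Mulliken 2p-pi/2p-pi overlap of Eq. (8.25); R in angstroms."""
    p = 1.625 * R / A0
    return math.exp(-p) * (1 + p + (2 / 5) * p**2 + p**3 / 15)

def beta_bond(R, beta0=-2.5):
    """Linearized beta(R) of Eq. (8.26); beta0 (in eV) is an assumed illustrative value."""
    return beta0 * (1 - 1.72 * (R - 1.39))

print(overlap_2ppi(1.39))   # roughly 0.25 at the typical C--C bond length
print(overlap_2ppi(3.0))    # nearly vanishing beyond 3 angstroms
```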
Finally, the remaining assumption that the orbitals localized on one C atom are
orthogonal to orbitals localized on different C atoms requires us to write

    S_{ab} = \int φ_a φ_b \, dr = 0    (8.27)

unless a = b, in which case Saa = 1. In the simplest case, this is a rather extreme
approximation as we can see in Figure 8.3 where at the typical C--C bond length
S ≈ 0.25. We can improve upon the simple Hückel model by solving the generalized
eigenvalue equation
(h − εS)ψ = 0 (8.28)
where S is the overlap matrix. For example, Roald Hoffman’s extended Hückel ap-
proach includes the overlap integral
    S_{ij} = ⟨i|j⟩    (8.29)
between basis functions on different atomic centers as a way to account for bond
bends and torsions using STOs on each atom. The remaining parameters are given by
the ionization potentials, electron affinities, and core charges of the atomic sites.

8.2.2 EXAMPLE: 1,3 BUTADIENE


We begin by drawing a sketch of the molecule:

1 2 3 4

where each C atom is labeled and the adjacency is indicated by a solid line. We then
write the Hamiltonian for the π electrons as
    H = \begin{pmatrix} α & β & 0 & 0 \\ β & α & β & 0 \\ 0 & β & α & β \\ 0 & 0 & β & α \end{pmatrix}    (8.30)
The eigenvalues and eigenvectors can be readily determined either numerically or
algebraically by solving the secular determinant equation
    \begin{vmatrix} x & 1 & 0 & 0 \\ 1 & x & 1 & 0 \\ 0 & 1 & x & 1 \\ 0 & 0 & 1 & x \end{vmatrix} = 0    (8.31)

where x = (α − ε)/β. Expanding the determinant gives

    x^4 − 3x^2 + 1 = 0    (8.32)

which has four roots corresponding to the orbital energies as shown in Figure 8.4.
The total energy is then

    E_π = \sum_j n_j E_j    (8.33)

where n_j is the occupancy of the jth energy level. For the case of 1,3-butadiene,
each C atom contributes one electron to the π system. Taking the Pauli principle into
account, only the lowest two levels are fully occupied and the total π energy is

E π = 4α + 4.472β (8.34)

If the π electrons in butadiene were not delocalized but rather formed two isolated
double bonds, the total energy would be simply 4α + 4β, that is, twice the energy of
ethylene. By delocalizing the electrons, the total energy is lowered by 0.472β.

Note that the resonance integral β is a negative value since delocalization should lower the energy of the
system.
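The secular problem above can be verified numerically. A sketch working in units where α = 0 and β = −1:

```python
import numpy as np

# Hückel matrix for 1,3-butadiene, Eq. (8.30), in units alpha = 0, beta = -1
beta = -1.0
H = beta * (np.eye(4, k=1) + np.eye(4, k=-1))
levels = np.linalg.eigvalsh(H)            # four roots of x^4 - 3x^2 + 1 = 0, scaled by beta

# Four pi electrons doubly occupy the two lowest levels (Pauli principle)
E_pi = 2 * (levels[0] + levels[1])        # = 4 alpha + 4.472 beta
E_localized = 4 * beta                    # two isolated ethylene double bonds
print(levels, E_pi, E_pi - E_localized)
```

The printed energies α ± 1.618β and α ± 0.618β and the 0.472β delocalization energy are recovered to rounding.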
[Energy levels, top to bottom: α − 1.618β, α − 0.618β, (C 2p_z reference), α + 0.618β, α + 1.618β]

FIGURE 8.4 Energy levels and orbitals for 1,3-butadiene from the Hückel model.

An alternative way to derive the eigenvalues for the linear chain is to consider the
roots of the determinant equation
    D_n(x) = \begin{vmatrix} x & 1 & 0 & 0 & \cdots & 0 \\ 1 & x & 1 & 0 & \cdots & 0 \\ 0 & 1 & x & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \end{vmatrix}    (8.35)

where D_n(x) is an nth-order polynomial and x = (α − λ)/β. Expanding the determi-
nant in terms of its cofactors, we find a recursion relation

    D_n(x) = x D_{n−1}(x) − D_{n−2}(x)    (8.36)

This is the recursion relation for the Tchebychev polynomials of the second kind, with
D_0(x) = 1 and D_1(x) = x; setting x = 2 cos θ, the closed form is

    D_n(x) = \frac{\sin((n + 1)θ)}{\sin θ}    (8.37)

and the roots are determined by the condition

    D_n(x) = \frac{\sin((n + 1)θ)}{\sin θ} = 0    (8.38)
Consequently, the roots occur whenever θ = iπ/(n + 1) for i = 1, 2, . . . , n or
x = 2 cos(iπ/(n + 1)). Thus
    λ_i = α + 2β \cos(iπ/(n + 1)) = α + 2β\, T_i(\cos(π/(n + 1)))    (8.39)
are the roots and Ti (x) are the Tchebychev polynomials.
The eigenfunctions are obtained as linear combinations of the C 2 pz orbitals

    ψ_n = \sum_i c_{ni} φ_i    (8.40)
228 Quantum Dynamics: Applications in Biological and Materials Systems

In general it is a tedious procedure to determine the expansion coefficients by hand.
For anything with more than four to six atoms, this is best done numerically using
various eigenvector routines. In fact, the problem we have just solved is equivalent
to that of the Schrödinger equation for a square-well potential on a mesh of N + 2
points, with ψ(0) = ψ(N + 1) = 0 as boundary conditions and hopping term t = h̄²/(2ma²).
The eigenvalue coefficients for a linear Hückel chain are the amplitudes for standing
waves on a grid of points located at x j = ja where a is the interatomic spacing.
Thus, the normalized eigenstate corresponding to the nth energy level for a chain of
N identical atoms is given by

    ψ_n = \sqrt{\frac{2}{N + 1}} \sum_{i=1}^{N} \sin\left(\frac{niπ}{N + 1}\right) φ_i    (8.41)

Furthermore, using the x_j points to sample a function for integration is equiv-
alent to performing the Gauss-Tchebychev quadrature with weightings given by
2/(N + 1). Such grids are the basis for the Tchebychev version of the discrete
variable representation (DVR) used extensively in grid-based quantum dynamical
calculations.81,82
The Hückel model can also be used to predict and interpret electronic π − π ∗
transitions. Assuming that the primary optical transition is between the HOMO and
LUMO orbitals, the energy gap for linear polyenes with k C=C bonds can be written as

    ΔE = 4|β| \sin(π/(4k + 2))    (8.42)
In Table 8.1 we compare the experimental UV/VIS absorption maximum for polyenes
of various lengths to an empirical fit to the Hückel model
    ν_huck = a + 4b \sin(π/(4k + 2))    (8.43)

with a = 16 171.6 cm−1 and b = 2.34 × 10^4 cm−1. The fact that the y intercept
does not vanish implies that no single value of β can give a good fit for all the data.
However, the fact that the energy gap varies as 4 sin(π/(4k + 2)) implies that the
first excited states of similar molecules are correlated in about the same way as their
respective ground states.
While this energy gap law generally holds for polymethines (odd-numbered C
chains), it does not hold for polyenes (even-numbered C chains), which have a finite
gap as n grows to be large. The reason for this is that the transfer integral β is not
the same between every pair of neighboring C atoms due to bond-length alternation.
Longer bonds have slightly smaller β values while short bonds will have slightly
larger β values (in magnitude).
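The fit (8.43) is easy to check against the ν_huck column of Table 8.1. The sketch below takes b ≈ 2.34 × 10^4 cm⁻¹, the magnitude that reproduces the tabulated values:

```python
import math

A = 16171.6   # cm^-1, fitted y intercept quoted in the text
B = 2.34e4    # cm^-1, fitted |beta| (magnitude consistent with Table 8.1)

def nu_huckel(k):
    """Empirical Hückel fit of Eq. (8.43) for a polyene with k C=C bonds."""
    return A + 4 * B * math.sin(math.pi / (4 * k + 2))

for k, nu_tab in [(1, 63009.7), (2, 45119.1), (8, 24814.9)]:
    print(k, round(nu_huckel(k), 1), nu_tab)
```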

8.2.3 CYCLIC SYSTEMS


For cyclic systems, the Hamiltonian is essentially the same as above except that
atoms 1 and N are now linked and H_{1N} = H_{N1} = β. Because of the periodicity,
ψ(x + N) = ψ(x), so that

    e^{iNk} = 1  ⟹  k = \frac{2πn}{N},    n = 0, 1, 2, . . . , N − 1    (8.44)
FIGURE 8.5 Hückel levels for cyclic molecules.

The eigenvalues are then

E = α + 2β cos(2nπ/N ) (8.45)

where n = 0, 1, 2, 3, . . . , N −1. In the limit of an infinite ring, k becomes a continuous


variable between 0 and 2π. Since the cosine function is a periodic function, we
can redefine the range of k to be between −π and +π to highlight the degeneracy
between +k and −k. Thus, the spectrum of an infinite system with open boundaries
is equivalent to that of a system with periodic boundary conditions.
Secondly, we note a useful way to remember the level structure of a cyclic system
with N sites. Notice that the argument of the cosine in the energy expression can be
written as
E = α + 2β cos(nθ) (8.46)
where θ = 2π/N is one angular step going about a circle. These define the vertices
of a regular polygon circumscribed by a circle of radius 2|β| centered about α with
the θ = 0 vertex pointing straight down. The polygons for the Cn Hn for n = 3 to 6
are shown in Figure 8.5. From this, one can more or less immediately read off the
Hückel energy levels without having to rederive the energy equation. For the case
of cyclobutadiene, the total π energy is E π = 4α + 4β. Recall that E π for two
ethylene molecules was also 4α + 4β. Hence, according to the Hückel model at least,
there is no additional energy stabilization gained by delocalizing the π electrons in
cyclobutadiene.
While cyclobutadiene is not stabilized by delocalization, other cyclic rings are
stabilized. Table 8.2 compares the experimental resonance energies of a number of
common cyclic aromatic systems to their Hückel delocalization energies along with an
estimate for the resonance integral. The fact that β is fairly constant over a number of

TABLE 8.2
Experimental Resonance Energies and Hückel Delocalization Energies

  Molecule        Experimental Resonance    Hückel Delocalization    Apparent β
                  Energy (kcal/mol)         Energy                   (kcal/mol)
  Benzene         36                        2β                       18
  Naphthalene     75                        3.7β                     20
  Anthracene      105                       5.3β                     20
  Phenanthrene    111                       5.45β                    20
  Biphenyl        65                        4.4β                     15
chemical compounds lends credence to its use as an empirical parameter for predicting
the stability and other electronic properties for related compounds.
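Both the Frost-circle levels and the delocalization energies of Table 8.2 follow from a few lines of code. The sketch assumes a closed-shell filling of the N/2 lowest ring levels:

```python
import numpy as np

def annulene_levels(N, alpha=0.0, beta=-1.0):
    """Hückel levels of a C_N ring, Eq. (8.45): the circle construction."""
    n = np.arange(N)
    return np.sort(alpha + 2 * beta * np.cos(2 * np.pi * n / N))

def delocalization_energy(N, beta=-1.0):
    """E_pi minus N/2 localized double bonds, in units of beta (closed-shell filling)."""
    levels = annulene_levels(N, 0.0, beta)
    E_pi = 2 * levels[: N // 2].sum()      # N electrons pair into the N/2 lowest levels
    return (E_pi - N * beta) / beta        # localized reference is (N/2)(2 alpha + 2 beta)

print(annulene_levels(6))           # benzene levels in units of |beta|
print(delocalization_energy(6))     # benzene: 2, matching Table 8.2
print(delocalization_energy(4))     # cyclobutadiene: 0, no stabilization
```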

8.2.4 SUMMARY OF RESULTS FROM THE HÜCKEL MODEL


The Hückel model is both extremely simple and extremely powerful. Its success relies
on the fact that one of the most important deciding factors in the energy and shape
of a molecular orbital is its nodal structure. This is determined by the connectivity of
the π electronic system and hence reflects the topology of the molecule. The Hückel
matrix can be written as

H = αI + βM (8.47)

where I is the N × N identity matrix and M is the topology matrix with elements
Mi j = 1 if atoms i and j are neighbors and Mi j = 0 otherwise. Clearly, since H and
M commute, they share the same set of eigenvectors, ck . Likewise, their eigenvalues
are related

εk = α + βλk (8.48)

Consequently, everything about the electronic structure can be deduced by knowing λ_k
and c_k without any information about the site energy α or transfer integral β.
As we have seen above, there are a number of cases for which we can obtain
closed-form solutions for the energies and eigenvectors. We summarize four important
cases here.
1. Polyenes and polymethines C_nH_{n+2}.83,84
λk = 2 cos(kπ/(n + 1)), k = 1, 2, . . . n (8.49)
2. Cyclic polyenes (annulenes) Cn Hn (example: benzene)
λk = 2 cos(2kπ/n), k = 1, 2, · · · n and λk = λn−k (8.50)
3. Polyacenes, including benzene (N = 1), naphthalene (N = 2), and an-
thracene (N = 3).85

    λ^{(s)}_k = 1    (8.51)

    λ^{(s)}_{k±} = \frac{1}{2}\left(+1 ± \sqrt{9 + 8\cos(kπ/(N + 1))}\right)    (8.52)

    λ^{(a)}_k = −1    (8.53)

    λ^{(a)}_{k±} = \frac{1}{2}\left(−1 ± \sqrt{9 + 8\cos(kπ/(N + 1))}\right)    (8.54)

where the (a) and (s) superscripts denote symmetric and antisymmetric
states and k = 1, . . . , N.
4. Radialenes (C(CH_2))_n (n-membered rings with exocyclic =CH_2 groups)86

    λ_{k±} = \cos(2kπ/n) ± \sqrt{\cos^2(2kπ/n) + 1},    k = 1, 2, . . . , n    (8.55)
Electronic Structure of Conjugated Systems 231

There are also a number of close expressions for the delocalization (or resonance)
energy. This is the difference between the total π electron energy and the energy
corresponding to a set of localized bonds.
1. Polyenes (even number of C atoms)

    E_{res}/β = 2 \csc(π/(2(n + 1))) − n − 2    (8.56)

2. Polymethines (n is odd)

    E_{res}/β = 2 \cot(π/(2(n + 1))) − n − 1    (8.57)

3. Hückel annulenes (n = 4N + 2)

E res /β = 4 csc(π/n) − n (8.58)

4. Anti-Hückel annulenes (n = 4N )

E res /β = 4 cot(π/n) − n (8.59)

In all cases, as n → ∞,
    \lim_{n→∞} \frac{E_{res}}{nβ} = \frac{4}{π} − 1 = 0.2732    (8.60)

The corresponding limit for polyacenes86 is given by


    \lim_{n→∞} \frac{E_{res}}{nβ} = \frac{1}{2π} \int_0^π \sqrt{1 + 16\cos^2 θ}\, dθ − 1 = 0.40284    (8.61)
This can be compared with the values obtained for benzene, 0.33333, 0.36832 for
naphthalene, and 0.37954 for anthracene. Graphene has the highest resonance energy
of 0.575.87
The bond order between neighboring C atoms in a Hückel-type annulene is

    p_{r,r+1} = \frac{2}{n} \csc(π/n)    (8.62)
which in the limit of an infinite system becomes
    \lim_{n→∞} p_{r,r+1} = \frac{2}{π} = 0.6366 . . .    (8.63)
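These closed-form expressions can be spot-checked against the earlier examples (butadiene, 0.472β; benzene, 2β) and against the n → ∞ limit. A brief sketch:

```python
import math

def res_polyene(n):
    """E_res/beta for an even-numbered polyene chain, Eq. (8.56)."""
    return 2 / math.sin(math.pi / (2 * (n + 1))) - n - 2

def res_annulene(n):
    """E_res/beta for a Hückel (4N+2) annulene, Eq. (8.58)."""
    return 4 / math.sin(math.pi / n) - n

print(res_polyene(4))             # butadiene: ~0.472
print(res_annulene(6))            # benzene: 2
print(res_annulene(1002) / 1002)  # per-site value approaching 4/pi - 1
```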

8.2.5 ALTERNANT SYSTEMS


The model works well for systems that can be classified as alternant. An alternant
system is such that one can divide the carbon atoms in the molecule into two sets
such that members of the first set are only bonded to members of the second set.
All linear chains and branched linear chains are alternant as are rings with an even
number of carbon atoms as shown in Figure 8.6. In the case of rings with at least one
odd-membered ring, the system is nonalternant.
FIGURE 8.6 Examples of alternant (a) and nonalternant (b) hydrocarbon systems. The aster-
isks labeling different carbons indicate C atoms belonging to one of the alternant sets.

The alternant property imparts a number of important topological implications on


both the electronic energy levels and the shape of the wave functions. The first is the
Coulson–Rushbrooke pairing theorem, which states

THEOREM 8.1
For every Hückel molecular orbital energy α+βx in an alternant hydrocarbon system,
there exists another orbital with energy α−βx. In other words, the roots of ||Hπ −λ|| =
0 appear in pairs. In addition, for linear or branched-linear chains with an odd number
of C atoms, there will be one root with x = 0.

The proof follows from the parity properties of the Tchebychev polynomials as
shown in Figure 8.5. Under the parity operation x → −x, we see that T_n(x) =
(−1)^n T_n(−x). Thus, systems with an even number of sites will have roots correspond-
ing to the even polynomials while systems with an odd number of sites will have odd
polynomial roots.
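The pairing theorem is easy to confirm numerically for small chains. A sketch in units where α = 0 and β = −1:

```python
import numpy as np

def chain_levels(N, beta=-1.0):
    """Hückel levels of an N-site linear chain (alpha = 0)."""
    H = beta * (np.eye(N, k=1) + np.eye(N, k=-1))
    return np.sort(np.linalg.eigvalsh(H))

e4 = chain_levels(4)   # even alternant chain: roots come in +/- pairs about alpha
e5 = chain_levels(5)   # odd chain: one nonbonding root pinned at alpha = 0
print(e4)
print(e5)
```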
Before moving on, we introduce the bond-charge density matrix ρ,

    ρ = \sum_{i=1}^{N} n_i |φ_i⟩⟨φ_i|    (8.64)

where {φi } are molecular orbitals and n i is the occupancy. In the basis of local C 2 pz
orbitals, the diagonal elements qm = ρmm represent the number of π electrons local-
ized about the mth carbon atom while the off-diagonal terms, pmn = ρmn , indicate
the number of π electrons shared between the mth and nth atom, that is, the bond
order. In terms of the orbital coefficients, we can define

    q_r = ρ_{rr} = \sum_i n_i |c_{ri}|^2    (8.65)

    p_{rs} = ρ_{rs} = \sum_i n_i c_{ri} c_{si}^*    (8.66)

The total π energy is, thus,


    E_π = Tr[ρ H_π]    (8.67)

For systems with 2n carbons and (hence) 2n electrons in the π system one finds

    E_π = 2 \sum_i^{n} ε_i = \sum_{rs} ρ_{rs} (H_π)_{sr}    (8.68)

        = α \sum_r q_r + 2β \sum_{r<s} p_{rs}    (8.69)

Returning to the example of 1,3-butadiene above, the molecular orbitals are
given by
ψ1 = 0.3718(φ1 + φ4 ) + 0.6015(φ2 + φ3 )
ψ2 = 0.6015(φ1 − φ4 ) + 0.3718(φ2 − φ3 )
ψ3 = 0.6015(φ1 + φ4 ) − 0.3718(φ2 + φ3 )
ψ4 = 0.3718(φ1 − φ4 ) − 0.6015(φ2 − φ3 )
with ε1,4 = α ± 1.618β and ε2,3 = α ± 0.618β. Experimentally, the trans di-
astereomer of 1,3 butadiene is more stable than the cis diastereomer. Furthermore, it
belongs to the C2h point-group and as such its orbitals can be classified according to
the irreducible representations of this group. One finds that the four π orbitals span
the reducible representation 2A_u ⊕ 2B_g, with ψ_1 and ψ_3 being of A_u symmetry
and ψ_2 and ψ_4 being of B_g symmetry. It is interesting to note that the energy
levels linked by the Coulson-Rushbrooke pairing theorem span irreducible represen-
tations with opposite symmetry character with respect to inversion and rotation about
the C2 axis.
Next we note that the charge densities on each atom are identical since by Eq. (8.65)

    q_1 = 2(0.3718)^2 + 2(0.6015)^2 = q_4 = 1    (8.70)

    q_2 = 2(0.6015)^2 + 2(0.3718)^2 = q_3 = 1    (8.71)
This brings us to two other important theorems.

THEOREM 8.2
Within the Hückel molecular orbital approximation for alternant systems, the total
electron density on each site for an N -electron N -site system is 1.
The proof of the first part of the theorem can be seen by examining Equation 8.65.
For an N -electron/N -site system, the sum in Equation 8.65 is identical to taking

    q_r = 2 \sum_{i=1}^{N/2} ⟨r|ψ_i⟩⟨ψ_i|r⟩    (8.72)

Here the |ψ_i⟩⟨ψ_i| acts like a projection operator and we are summing over exactly
1/2 the total states in the system. Hence,

    q_r = 2⟨r|r⟩ × \frac{1}{2} = 1    (8.73)

In other words, for a neutral alternant system, no charge transfer or charge localization
occurs within the molecule.
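Theorem 8.2 can be verified for butadiene by assembling the bond-charge matrix of Eq. (8.64) from the occupied molecular orbitals. A short sketch:

```python
import numpy as np

# Hückel butadiene (alpha = 0, beta = -1); columns of v are MO coefficients
H = -1.0 * (np.eye(4, k=1) + np.eye(4, k=-1))
energies, v = np.linalg.eigh(H)           # ascending order: the two bonding MOs come first

occ = np.array([2.0, 2.0, 0.0, 0.0])      # four pi electrons, Pauli-paired
rho = (v * occ) @ v.T                     # bond-charge matrix, Eq. (8.64)
q = np.diag(rho)                          # site populations, Eq. (8.65)
print(q)                                  # each site carries exactly one pi electron
print(rho[0, 1], rho[1, 2])               # bond orders p12 and p23
```

The bond orders 0.8944 and 0.4472 reproduce E_π = 4α + 4.472β through Eq. (8.69).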

THEOREM 8.3
The orbital coefficients of paired molecular orbitals are the same for “starred” atoms
and have opposite sign for “unstarred” atoms.

This we can see by inspection of the orbital coefficients themselves.


An important special case of the pairing theorem occurs when there is an odd
number of sites in the system, as in the case of the allyl radical. Since the energy
eigenvalues are determined by the roots of an N th-order polynomial, any system with
an odd number of sites will have a characteristic polynomial that must pass through
x = 0. The corresponding orbital will be a nonbonding molecular orbital with nodes
passing through unstarred alternant sites as shown in Figure 8.7.

FIGURE 8.7 Nonbonding Hückel molecular orbitals with energy E = α for chains of length
N = 3, 5, and 7.
8.2.6 WHY BOTHER WITH THE HÜCKEL THEORY ?


The Hückel model represents historically one of the first steps towards a quantum me-
chanical description of chemical bonding and electronic structure. Even by the 1950s
it was certainly being supplanted by more sophisticated models that took electron-
electron interactions into account. Today, given the incredible advances and prolifer-
ation of quantum chemical methods, we wonder why we should bother with Hückel
theory at all beyond its role as a pedagogic tool.
First, because the model depends only upon the topology and hence symmetry
of the molecule, it provides an important way to interpret the rigorous and more
detailed results we obtain from ab initio calculations. Certain effects and trends can
be accounted for within the context of the model. Certainly, for π electronic systems,
the insight gained from a Hückel model is very valuable. In fact, this is what separates
a “theory” or model from an exact result. A good theory should be able to reproduce
a physical result with a modest number of parameters. These parameters should be
valid within some range such that the parameters for one physical system should be
transferable to another related system. Finally, a good theory should be simple enough
in its form to be readily understandable and based upon a logical set of arguments.
The Hückel model has all these characteristics and this I believe is why, even after 70
or so years, the model remains central to our discussion of electronic structure.

8.3 ELECTRONIC STRUCTURE MODELS


The goal of electronic structure theory is to solve the Schrödinger equation for a set
of interacting electrons and nuclei. Typically, this is done within the framework of a
fixed geometry of the nuclei as justified by the Born–Oppenheimer approximation.
Essentially, we assume that since the mass of an electron is on the order of 10^{−4} that
of a typical nucleus, we can separate the nuclear motions described by Q from the
electronic motions q and write

    H_{ele}(q; Q)\, ψ(q; Q) = E_{ele}(Q)\, ψ(q; Q)    (8.74)

where Q in this equation reminds us that the electronic energy levels and associated
wave functions depend parametrically upon the nuclear positions. Within a fixed
frame, we can equivalently write this as
    H_{ele} = \sum_{σ} \sum_{lm} h_{lm} a^†_{lσ} a_{mσ} + \frac{1}{2} \sum_{klmn} \sum_{σρ} ⟨kl|v|mn⟩\, a^†_{kσ} a^†_{lρ} a_{mρ} a_{nσ}    (8.75)


where the a_{iσ} are fermion operators, obeying \{a_{iσ}, a^†_{jρ}\} = δ_{ij} δ_{σρ}, that add or remove electrons from
single-electron basis functions.

    a^†_{kσ}|0⟩ = |kσ⟩    and    a_{kσ}|kσ⟩ = |0⟩    (8.76)

Also, since we can put only one electron with a given spin into a given basis function,
    a^†_{kσ} a^†_{kσ}|0⟩ = 0    (8.77)
The first term in Hele represents the single-electron terms and includes the kinetic
energy and the electron-nuclear interactions. These are independent of spin,
    h_{lm} = ⟨l|\, −\frac{1}{2}∇^2 − e^2 \sum_{n=1}^{N_a} \frac{Z_n}{|q − Q_n|}\, |m⟩    (8.78)

The electron-electron repulsion is introduced by the second term. Here the ⟨ij|v|kl⟩ =
⟨ij|kl⟩ bracket denotes the Coulomb integral

    ⟨ij|kl⟩ = \int d^3r_1 \int d^3r_2\, φ_i^*(1) φ_j^*(2) \frac{e^2}{r_{12}} φ_k(1) φ_l(2)    (8.79)

Be careful! A number of authors use different conventions for this. Here, we adopt the
notation more prevalent in the many-body physics literature that is consistent with the
creation/annihilation operator formalism we are following. In the quantum chemical
literature, the bracket ⟨ij|kl⟩ is taken to mean

    ⟨ij|kl⟩ = \int d^3r_1 \int d^3r_2\, φ_i(1) φ_j(1) \frac{e^2}{r_{12}} φ_k(2) φ_l(2)    (8.80)

and is sometimes denoted with a square bracket [ij|kl] when working with spin
orbitals. The connection is that

    ⟨ij|kl⟩ ≡ [ik|jl]    (8.81)

for real orbitals. The reason the two notations have evolved is historical, with one camp
adopting one notation and another camp adopting the other notation. The appendix
explains the difference (but not the cause).
We can interpret each term in the electronic Hamiltonian as follows:

• h_{22} a^†_{2μ} a_{2μ} = energy to place an electron with spin μ into the φ_2 basis function
• h_{23} a^†_{2μ} a_{3μ} = energy to remove an electron from φ_3 and place it into φ_2
• ⟨22|22⟩ a^†_{2α} a^†_{2β} a_{2β} a_{2α} = Coulomb repulsion between a spin-up α electron and
a spin-down β electron when both are in the φ_2 basis function
Notice that, in general, if we have N basis functions, we would need to perform N^4
six-dimensional integrals to completely account for all the electron-electron interac-
tions.

8.3.1 HARTREE –FOCK APPROXIMATION


Writing the Hamiltonian and solving the requisite integrals is the first step. We are
still a long way from a solution. By and large, most quantum chemical treatments
use the Hartree–Fock (HF) approximation as solved by the self-consistent field (SCF)
approach. The basic SCF procedure was first developed (independently) by Clemens
Roothaan88–90 and G. G. Hall91 in the early 1950s. There are a number of deriva-
tions for these equations; one particularly elegant derivation involves the use of
Heisenberg equations of motion92 where we write the time derivative of each electron
operator as

    i\hbar \frac{d}{dt} a_{sμ} = [a_{sμ}, H]    (8.82)
This can be approximated by

    i\hbar \frac{d}{dt} a_{sμ} ≈ \sum_u f^μ_{su} a_{uμ}    (8.83)

If we allow the equality, then we can write



    [a_{sμ}, H] = \sum_u f^μ_{su} a_{uμ}    (8.84)


Now, multiplying on the left and on the right by a^†_{tμ} and adding the two together,

    \{a^†_{tμ}, [a_{sμ}, H]\} = \sum_u f^μ_{su} \{a^†_{tμ}, a_{uμ}\}    (8.85)

where we have assumed we are working in an orthonormal basis. Taking the anticom-
mutator on the right-hand side and averaging over the electronic ground state of the
system,
    ⟨\{a^†_{tμ}, [a_{sμ}, H]\}⟩ = f^μ_{st}    (8.86)

f^μ_{st} is called the Fock operator. It is Hermitian and its eigenvalues and eigenvectors
correspond to the energies and single-particle orbitals,
    f^μ_{st} = ⟨\{a^†_{tμ}, [a_{sμ}, H]\}⟩
             = −⟨\{a^†_{tμ}, [H, a_{sμ}]\}⟩
             = −⟨\{a^†_{tμ}, [\sum_{σ}\sum_{lm} h_{lm} a^†_{lσ} a_{mσ},\, a_{sμ}]\}⟩
               − \frac{1}{2} ⟨\{a^†_{tμ}, [\sum_{klmn}\sum_{σρ} ⟨kl|nm⟩\, a^†_{kσ} a^†_{lρ} a_{mρ} a_{nσ},\, a_{sμ}]\}⟩    (8.87)

Working through the operator algebra yields the Fock operator:

    f^μ_{st} = h_{st} + \sum_{lm} \left[ \sum_σ ⟨ls|mt⟩\, ⟨a^†_{lσ} a_{mσ}⟩ − ⟨ls|tm⟩\, ⟨a^†_{lμ} a_{mμ}⟩ \right]    (8.88)

The density matrix or “bond-charge” matrix is

    γ^μ_{ml} = ⟨a^†_{lμ} a_{mμ}⟩    (8.89)
The diagonal elements give the number of electrons in a given basis function with a
given spin, and the off-diagonal elements describe how those electrons are shared
between the different basis functions.
For a closed shell system in which all electrons are paired, we can equate the
spin-up (α) densities with the spin-down (β) densities
    ⟨a^†_{lα} a_{mα}⟩ = ⟨a^†_{lβ} a_{mβ}⟩    (8.90)

Using this and the symmetry relations of the two-body matrix elements, we arrive at
    f^μ_{st} = h_{st} + \sum_{lm} \left( 2⟨sl|tm⟩ − ⟨sl|mt⟩ \right) ⟨a^†_{lμ} a_{mμ}⟩    (8.91)

Notice that the Fock operator depends upon its own eigenstates since we can
expand the density matrix in terms of the eigenvectors of the Fock operator
    ψ^μ_k = \sum_m c^μ_{km} φ_m    (8.92)

as

    γ^μ_{ml} = \sum_k (c^μ_{kl})^* c^μ_{km} n^μ_k    (8.93)

where the coefficients satisfy

    \sum_m \left( f^μ_{sm} − E_k S_{sm} \right) c^μ_{km} = 0    (8.94)

where S is the overlap matrix between different basis functions and n^μ_k is the occupancy
of the kth eigenstate.
The total ground-state energy is not the sum over the energies of the occupied HF
energy levels. In calculating the energy for orbital 1, E 1 , we include the interaction
between electrons 1 and 2, 1 and 3, and so on. Likewise for electron 2, we average
over the interactions between 2 and 1, 2 and 3, 2 and 4, and so on. In other words, by
summing

    E = \sum_k E_k n_k    (8.95)

where n_k = 0 or 1 is the occupation number of state k, we have included the Coulombic
interaction ⟨ij|ij⟩ term twice, so we need to subtract this out in writing the total
ground-state energy,

    E^{HF}_{gs} = \sum_k n_k \left( E_k − \sum_{j>k} ⟨kj|kj⟩ \right)    (8.96)

Operationally, we "guess" at $\gamma_{ml}^{\mu}$, either by doing a low-level calculation or
simply by using Hückel theory; construct the Fock operator as a matrix in the basis of
choice; and diagonalize this to obtain an improved set of orbital energies and single-
particle orbitals. We then reconstruct the bond-charge matrix using these new orbitals
Electronic Structure of Conjugated Systems 239

and repeat this process until neither the energies, orbitals, nor bond-charge matrix
changes to within a suitable tolerance. In performing either ab initio or semiempirical
calculations, this part of the calculation usually takes the most time, and one has
no real guarantee of how many iterations are required to converge the Hartree–Fock
equations. Depending upon the size of the system, the complexity of the basis set, and
the speed of our computer, this can take anywhere from a few seconds to months or
years. A good strategy is to start off with a low-level basis set, get a good guess at the
bond-charge matrix, then repeat the procedure with increasingly more accurate basis
sets. Most modern quantum chemical codes allow us to import the results (checkpoint
file) from a previous calculation as a starting point.
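The cycle just described can be sketched in a few lines of code. The fragment below (a minimal illustration, not the code distributed with this text) iterates the closed-shell Fock equation for a two-site model in the on-site (Hubbard) limit of Eq. (8.91), where the Fock matrix reduces to $F = h + U\,\mathrm{diag}(\gamma)$; the parameters `eps`, `t`, and `U` are hypothetical.

```python
import numpy as np

eps, t, U = 0.0, -1.0, 2.0              # hypothetical model parameters

h = np.array([[eps, t], [t, eps]])      # Hueckel one-electron matrix
gamma = np.diag([1.0, 0.0])             # deliberately poor initial guess

for iteration in range(500):
    # On-site (Hubbard) limit of Eq. (8.91): F = h + U * diag(gamma)
    F = h + U * np.diag(np.diag(gamma))
    E_orb, C = np.linalg.eigh(F)
    c0 = C[:, 0]                        # doubly occupied lowest orbital
    gamma_new = np.outer(c0, c0)        # per-spin bond-charge matrix, Eq. (8.93)
    if np.max(np.abs(gamma_new - gamma)) < 1e-12:
        break
    gamma = 0.5 * gamma + 0.5 * gamma_new   # damped mixing aids convergence

print(np.round(np.diag(gamma), 6))      # symmetric dimer: half an electron per site
print(round(E_orb[0], 6))               # lowest orbital energy, eps - |t| + U/2
```

Even from a badly broken initial guess, the damped iteration settles onto the symmetric density; with a realistic basis the same loop structure applies, only the construction of $F$ becomes expensive.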

8.3.2 VARIATIONAL DERIVATION OF THE HARTREE–FOCK APPROACH


We shall now rederive the Hartree–Fock equation using a more standard approach.
The assumption made is that the ground-state wave function consists of a single
Slater determinant, $|\phi\rangle$, and that the ground-state energy is obtained by minimizing
$\langle\phi|H|\phi\rangle$ with respect to variations in the density matrix (that is, the bond-charge matrix).
Our generic electronic structure Hamiltonian has the form
$$
H = \sum_{ij} h_{ij}\, a_i^{\dagger} a_j + \frac{1}{4}\sum_{ijkl}\langle ij|v|kl\rangle\, a_i^{\dagger} a_j^{\dagger} a_l a_k
\tag{8.97}
$$

where $i, j, k, l$ label single-particle basis functions that we shall take to be orthogonal.
If we assume the ground state is composed of a single determinant $|\phi\rangle$, the energy
expectation value is
$$
E[\rho] = \langle\phi|H|\phi\rangle
\tag{8.98}
$$
$$
= \sum_{ij}\langle i|h|j\rangle\,\rho_{ij} + \frac{1}{2}\sum_{ijkl}\langle ij||kl\rangle\,\rho_{ki}\rho_{lj}
\tag{8.99}
$$
where $\rho$ is the single-particle density matrix with elements $\rho_{ij} = \langle\phi|a_i^{\dagger} a_j|\phi\rangle$. The
density matrix satisfies the conditions $\rho^2 = \rho$ and $\mathrm{Tr}[\rho] = N$. With this in mind,
we can minimize $E[\rho]$ with respect to the density under the constraint that $\rho^2 = \rho$
remain satisfied,
$$
\delta\!\left(E[\rho] - \mathrm{Tr}\,\Lambda(\rho^2 - \rho)\right) = 0
$$
where $\Lambda$ is a matrix of Lagrange multipliers. Expanding the variation,
$$
\left(\frac{\delta E[\rho]}{\delta\rho} - \Lambda\rho - \rho\Lambda + \Lambda\right)\delta\rho = 0
$$
This must hold for all $\delta\rho$, so
$$
F - \Lambda\rho - \rho\Lambda + \Lambda = 0
\tag{8.100}
$$
where we define the Fock matrix $F$ with elements
$$
F_{ij} = \frac{\delta E[\rho]}{\delta\rho_{ij}}
$$

The $\Lambda$'s can be eliminated by multiplying Eq. (8.100) on the right and on the left by $\rho$ and
subtracting the two results (taking $\rho^2 = \rho$ into account). One thus obtains
$$
[F, \rho] = 0
$$
In other words, the density matrix that minimizes the energy commutes with the
Hartree–Fock Hamiltonian $F$.
We can generalize this by asking what happens to $\rho$ if $[F, \rho] \neq 0$ or if $H$ is a
function of time. Let us consider the case where the system at all times is described
by a single Slater determinant $|\psi(t)\rangle$ composed of orthonormal orbitals $\{|\phi_i\rangle\}$ such
that the time-dependent density matrix is given by
$$
\rho(t) = \sum_{i=1}^{N} |\phi_i(t)\rangle\langle\phi_i(t)|
\tag{8.101}
$$

We write the action in terms of the orbitals as
$$
S = \int_0^t ds \left[ i\hbar \sum_{i=1}^{N} \langle\phi_i|\dot\phi_i\rangle - E[\rho] - \sum_{ij}\lambda_{ij}\langle\phi_i|\phi_j\rangle \right]
\tag{8.102}
$$
Here, the dot denotes the time derivative, $E[\rho] = \langle\psi|H(t)|\psi\rangle$ is the Hartree–Fock
energy functional, and the $\lambda_{ij}$'s are Lagrange multipliers introduced to ensure orthogonality
of the orbitals.
We now minimize $S$ with respect to variations in the orbitals,
$$
\frac{\delta S}{\delta|\phi_i\rangle} = -i\hbar\langle\dot\phi_i| - \langle\phi_i|F - \sum_j \lambda_{ij}\langle\phi_j| = 0
\tag{8.103}
$$
$$
\frac{\delta S}{\delta\langle\phi_i|} = i\hbar|\dot\phi_i\rangle - F|\phi_i\rangle - \sum_j \lambda_{ij}|\phi_j\rangle = 0
\tag{8.104}
$$

where $F = \delta H/\delta\rho$ is the Hartree–Fock Hamiltonian operator. Again, we can eliminate
the $\lambda$'s and find the equation of motion for the density matrix,
$$
i\hbar\frac{\partial\rho}{\partial t} = [F, \rho]
\tag{8.105}
$$
This is the time-dependent Hartree–Fock equation. In fact, the stationary Hartree–Fock
equation can be viewed as a limiting case where $\dot\rho = 0$, describing a system
at equilibrium. The time-dependent Hartree–Fock approach is useful for studying
systems displaced from equilibrium or the response of a system to a time-dependent
driving field, and for obtaining transition amplitudes.
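As a sketch of how Eq. (8.105) is used in practice, the fragment below propagates a one-electron density matrix under a fixed two-site Fock matrix. (In full TDHF, $F$ would be rebuilt from $\rho$ at every step; freezing it here keeps the example minimal, and the parameters are hypothetical.) The propagation step is the exact unitary map $\rho \to e^{-iF\,dt/\hbar}\,\rho\, e^{+iF\,dt/\hbar}$, which preserves both $\mathrm{Tr}\,\rho$ and the idempotency condition $\rho^2 = \rho$.

```python
import numpy as np

hbar = 1.0
F = np.array([[0.0, -1.0], [-1.0, 0.0]])   # fixed two-site Fock matrix (t = -1)

# initial density: the electron localized on site 1 (a non-stationary state)
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

w, V = np.linalg.eigh(F)
dt = 0.01
Udt = V @ np.diag(np.exp(-1j * w * dt / hbar)) @ V.conj().T

for _ in range(157):                        # propagate to t ~ pi/2 in units of hbar/|t|
    rho = Udt @ rho @ Udt.conj().T          # exact step of i*hbar drho/dt = [F, rho]

print(round(rho[0, 0].real, 4))             # site-1 population has transferred to site 2
print(round(np.trace(rho).real, 8))         # particle number is conserved
```

The site populations Rabi-oscillate with period $\pi\hbar/|t|$; a stationary $\rho$ (one commuting with $F$) would remain unchanged, which is the static Hartree–Fock limit mentioned above.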

8.4 NEGLECT OF DIFFERENTIAL OVERLAP


The electron–electron interactions can be approximated in a number of ways as well.
First, we note that the two-electron integral depends upon the overlap between two
electron densities:
$$
\langle ik|jl\rangle = \int d^3r_1 \int d^3r_2\, [u_i(1)u_j(1)]\,\frac{e^2}{r_{12}}\,[u_l(2)u_k(2)]
\tag{8.106}
$$
The integral will more or less vanish unless $u_i$ and $u_j$ are the same basis function.
Thus, we can write
$$
[u_i(1)u_j(1)] = \delta_{ij}\, u_i^2(1)
\tag{8.107}
$$
This approximation, termed CNDO for "complete neglect of differential overlap,"
reduces the total number of such two-electron integrals from the
$(N(N+1)/2)(N(N+1)/2 + 1)/2 \propto N^4$ included in ab initio calculations to
$N(N+1)/2 \propto N^2$. Note that in making this approximation, we now have only electron–electron
repulsions between an electron in basis function $u_i$ and an electron in basis function $u_j$. Also, there
are no spin-dependent interactions. Methods such as the Pariser–Parr–Pople (PPP)
method and CNDO/2 use the zero-differential-overlap approximation. One can relax
the zero-differential-overlap approximation to some extent by instead writing
$$
[u_i(1)u_j(1)] = \delta_{ci}\delta_{cj}\, u_i(1)u_j(1)
\tag{8.108}
$$
where $\delta_{ci}\delta_{cj} = 0$ unless the two orbitals are on the same atomic center. Such intermediate
neglect is used in the MNDO, PM3, and AM1 methods, while methods such
as the INDO, MINDO, ZINDO, and SINDO approaches do not apply the rule when
all the orbitals in the integral are on the same atomic site.
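The scaling claimed above is easy to verify by brute force. The sketch below counts the unique real two-electron integrals $(ij|kl)$ under their eightfold permutational symmetry and compares the result with the closed-form counts quoted in the text.

```python
from itertools import product

def ab_initio_count(N):
    # unique (ij|kl): with M = N(N+1)/2 index pairs, M(M+1)/2 pair-pairs survive
    M = N * (N + 1) // 2
    return M * (M + 1) // 2

def cndo_count(N):
    # zero differential overlap keeps only the (ii|jj)-type integrals
    return N * (N + 1) // 2

def brute_count(N):
    # enumerate all N^4 integrals, folding the 8 equivalent index orders together
    seen = set()
    for i, j, k, l in product(range(N), repeat=4):
        seen.add(min((i, j, k, l), (j, i, k, l), (i, j, l, k), (j, i, l, k),
                     (k, l, i, j), (l, k, i, j), (k, l, j, i), (l, k, j, i)))
    return len(seen)

print(brute_count(6), ab_initio_count(6), cndo_count(6))   # -> 231 231 21
```

Already at $N = 100$ basis functions the ab initio count exceeds twelve million integrals while the CNDO count is 5050, which is the practical motivation for the approximation.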
The notion of neglect of differential overlap also plays a key role in the devel-
opment of tight-binding treatments used in solid-state physics. In many ways, we
can use the terms tight-binding and semiempirical interchangeably (although certain
purists will argue otherwise). For solid-state systems we can often take advantage of
the periodic nature of the lattice to develop recursion relations or treat the problem in
reciprocal space to compute the band structure of a given system. For more complete
treatments, any number of excellent texts on solid-state physics may be consulted.
If we further assume that
$$
u_i(1)u_i(1)u_j(2)u_j(2) = \delta_{ij}\left(u_i(1)u_j(2)\right)^2
\tag{8.109}
$$
and assume that $h_{ij} = 0$ unless $i = j$ or $i$ is on an atom adjacent to $j$, we have the
Pariser–Parr–Pople Hamiltonian:
$$
H_{PPP} = \sum_{\sigma}\sum_i h_{ii}\, a_{i\sigma}^{\dagger} a_{i\sigma} + \sum_{\sigma}\sum_{ij}{}' t_{ij}\, a_{i\sigma}^{\dagger} a_{j\sigma} + \sum_i U_i\, a_{i\uparrow}^{\dagger} a_{i\uparrow} a_{i\downarrow}^{\dagger} a_{i\downarrow}
$$
$$
= \sum_{\sigma}\sum_i h_{ii}\, n_{i\sigma} + \sum_{\sigma}\sum_{ij}{}' t_{ij}\, a_{i\sigma}^{\dagger} a_{j\sigma} + \sum_i U_i \left(n_{i\uparrow} - \frac{1}{2}\right)\left(n_{i\downarrow} - \frac{1}{2}\right)
\tag{8.110}
$$
where $U_i = \langle ii|ii\rangle$, the prime on the second summation reminds us that we sum only over
atoms participating in the $\pi$-electron network, and $n_{i\uparrow}$ or $n_{i\downarrow}$ indicate the number of

spin-up or spin-down electrons in a given basis orbital. The original purpose of the
approach was to predict the electronic properties of organic dye molecules. In fact,
when combined with a Hartree–Fock treatment of the electronic ground state, the PPP
model does a remarkably good job of predicting the positions and oscillator strengths
of the lowest singlet transitions in many $\pi$-conjugated systems.
Unlike most semiempirical treatments, $\pi$-electron theories have a rigorous ab
initio underpinning. $H_{PPP}$ is in fact an approximate effective operator acting on the
$\pi$-electronic subspace. Likewise, its parameters include effective electron correlation
effects between the $\pi$ system and the core. The connection between the PPP model and
more rigorous approaches was explored by Freed and coworkers using diagrammatic
techniques for solving multireference perturbation theory.93–95
Now, if all the sites are equivalent, we can write $t_{ij} = t$, $h_{ii} = \varepsilon$, and $U_i = U$ and
arrive at the Hubbard Hamiltonian:96
$$
H = \varepsilon\sum_{\sigma}\sum_i a_{i\sigma}^{\dagger} a_{i\sigma} + t\sum_{\sigma}\sum_{ij}{}' a_{i\sigma}^{\dagger} a_{j\sigma} + U\sum_i a_{i\uparrow}^{\dagger} a_{i\uparrow} a_{i\downarrow}^{\dagger} a_{i\downarrow}
\tag{8.111}
$$
This can be further reduced by noticing that the first term is simply the sum over the
number operators and as such is equal to $\varepsilon N$. This can be removed by choosing our
energy origin so that $\varepsilon = 0$. Next, the interaction term can also be written in terms of
the electron number operators, and we arrive at
$$
H = t\sum_{\sigma}\sum_{ij}{}' a_{i\sigma}^{\dagger} a_{j\sigma} + U\sum_i \left(n_{i\uparrow} - \frac{1}{2}\right)\left(n_{i\downarrow} - \frac{1}{2}\right)
\tag{8.112}
$$

This particular model is seldom used in the chemical literature but is widely used in the
solid-state physics community for describing electrons in narrow band-gap materials;
it has been applied to problems such as high-$T_c$ superconductivity, band magnetism,
and the metal–insulator transition. For small systems, one can perform numerically
exact calculations, and the code for doing so is included on the disk accompanying this
text, with some representative results shown in Figure 8.8 for the case of a spin-paired
four-electron/four-site model of 1,3-butadiene.
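The disk code is not reproduced here, but the calculation behind Figure 8.8 can be sketched independently. The fragment below enumerates the 36 $S_z = 0$ configurations of the four-site, four-electron chain (two spin-up times two spin-down placements on four sites), builds the Hubbard Hamiltonian of Eq. (8.111) with $\varepsilon = 0$, and diagonalizes it; the values of $t$ and $U$ are illustrative.

```python
import numpy as np
from itertools import combinations

nsite, t, U = 4, -1.0, 2.0
bonds = [(0, 1), (1, 2), (2, 3)]              # open chain: the butadiene pi system

occs = list(combinations(range(nsite), 2))    # 2 electrons of one spin: 6 placements
basis = [(u, d) for u in occs for d in occs]  # 36 Sz = 0 configurations
index = {c: n for n, c in enumerate(basis)}

def hop(occ, i, j):
    """Move one electron j -> i; return (new occupation, fermionic sign) or None."""
    if j not in occ or i in occ:
        return None
    rest = tuple(s for s in occ if s != j)
    sign = (-1) ** (sum(s < j for s in occ) + sum(s < i for s in rest))
    return tuple(sorted(rest + (i,))), sign

H = np.zeros((36, 36))
for n, (u, d) in enumerate(basis):
    H[n, n] = U * len(set(u) & set(d))        # on-site repulsion, Eq. (8.111)
    for (i, j) in bonds:
        for a, b in ((i, j), (j, i)):         # hop in both directions
            for spin, occ in ((0, u), (1, d)):
                res = hop(occ, a, b)
                if res:
                    new, sgn = res
                    key = (new, d) if spin == 0 else (u, new)
                    H[index[key], n] += t * sgn
E = np.linalg.eigvalsh(H)
print(round(E[0], 6))                         # total ground-state energy
```

Scanning $U/t$ with this matrix reproduces the eigenvalue fan of Figure 8.8(a); at $U = 0$ the ground-state energy reduces to twice the sum of the two lowest Hückel levels of the chain.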
The Hubbard model can be solved exactly in one dimension, as shown by Lieb
and Wu97 in 1968 using the Bethe ansatz technique. This solution was later shown
to be the complete solution by Essler et al. in 1991.98 One of the most significant
predictions of the model is the absence of a "Mott transition" between
conducting and insulating states as the strength of the interaction $U$ is increased: the half-filled chain is insulating for any $U > 0$.

8.5 AN EXACT SOLUTION: INDO TREATMENT OF ETHYLENE


There are very few systems that can be treated exactly, most of which are poor
representations of actual physical systems; however, we can use such models to test
the limits and range of validity of our theoretical approaches. Oftentimes, one can learn
more about a broad class of physical systems by using models and then performing
more detailed calculations to get a better match to the experimental data.
[Figure: two panels plotting $E/t$ vs. $U/t$.]
FIGURE 8.8 Exact solution for the Hubbard model for a closed-shell four-electron/four-site
system. (a) Energy eigenvalues for the 36 × 36 configuration system. (b) Total ground-state energy
per site.

Let us consider a solvable two-electron problem to illustrate what we have developed
thus far. Consider a two-atom/two-electron system described by the Hubbard
Hamiltonian, as would be the case for the $\pi$ system of ethene. The orbital basis $\phi_1$
and $\phi_2$ we take as the carbon $2p_z$ orbitals centered about each carbon atom, and we assume
that we can work within the NDO approximation and treat the basis functions as
orthogonal.
We have a total of six possible two-electron configurations:
$$
\begin{aligned}
\phi_1 &= a_1^{\dagger} a_{\bar 2}^{\dagger}|0\rangle \\
\phi_2 &= a_{\bar 1}^{\dagger} a_2^{\dagger}|0\rangle \\
\phi_3 &= a_1^{\dagger} a_{\bar 1}^{\dagger}|0\rangle \\
\phi_4 &= a_2^{\dagger} a_{\bar 2}^{\dagger}|0\rangle \\
\phi_5 &= a_1^{\dagger} a_2^{\dagger}|0\rangle \\
\phi_6 &= a_{\bar 1}^{\dagger} a_{\bar 2}^{\dagger}|0\rangle
\end{aligned}
\tag{8.113}
$$
where $\phi_1$ is the case where we have a spin-up electron on atom 1 and a spin-down
electron on atom 2 (spin-down is denoted by the overbar), and $\phi_2$ is the reverse, where the spin-down
electron is on atom 1 and the spin-up electron is on atom 2. $\phi_3$ and $\phi_4$ are the cases
where we have both spin-paired electrons on either atom 1 or atom 2. Lastly, $\phi_5$ and
$\phi_6$ are the cases where the two spins are parallel but on different atoms, as indicated
in the diagram below.

[Diagram: occupation sketches of the six configurations: $\phi_1$, $\phi_2$ (delocalized, antiparallel spins), $\phi_3$, $\phi_4$ (localized, spin-paired), $\phi_5$, $\phi_6$ (triplets, parallel spins).]

Using this as a basis, we can construct the following Hamiltonian matrix:
$$
H = \begin{pmatrix}
2\varepsilon & 0 & t & t & 0 & 0 \\
0 & 2\varepsilon & t & t & 0 & 0 \\
t & t & 2\varepsilon + U & 0 & 0 & 0 \\
t & t & 0 & 2\varepsilon + U & 0 & 0 \\
0 & 0 & 0 & 0 & 2\varepsilon & 0 \\
0 & 0 & 0 & 0 & 0 & 2\varepsilon
\end{pmatrix}
\tag{8.114}
$$
where we have included $\varepsilon$ as the local site energy. Since we are working with fermion
operators, antisymmetry is already enforced within our basis states, so we
do not need to construct separate exchange and Coulomb contributions to the total
energy.
Notice that only in the case where there is spin pairing does the electron–electron
Coulomb coupling term enter into $H$. Furthermore, notice that the two parallel-spin
configurations are completely decoupled from the antiparallel configurations,
and $H$ can be block-diagonalized along these lines, with $\phi_5$ and $\phi_6$ being eigenstates of
$H$, each with energy $2\varepsilon$. The remaining eigenstates are symmetric and antisymmetric
linear combinations of the remaining four basis states. Again, we notice that $\phi_1$ and
$\phi_2$ are decoupled, as are $\phi_3$ and $\phi_4$. Thus, we can form a suitable basis by taking even
and odd linear combinations of the two types of states,
$$
\phi_{ag} = (\phi_1 + \phi_2)/\sqrt{2}
\tag{8.115}
$$
$$
\phi_{bg} = (\phi_3 + \phi_4)/\sqrt{2}
\tag{8.116}
$$
$$
\phi_{au} = (\phi_1 - \phi_2)/\sqrt{2}
\tag{8.117}
$$
$$
\phi_{bu} = (\phi_3 - \phi_4)/\sqrt{2}
\tag{8.118}
$$
where $u$ and $g$ indicate odd (ungerade) and even (gerade) linear combinations. Within
the gerade basis, $H$ is again block-diagonal with eigenvalues
$$
E_{\pm}^{s} = 2\varepsilon + U/2 \pm \sqrt{U^2 + 16t^2}\,/\,2
\tag{8.119}
$$

The ungerade combinations $\phi_{au}$ and $\phi_{bu}$ are eigenstates of $H$ with energies $2\varepsilon$ and $2\varepsilon + U$, respectively.
These two states are antibonding combinations, and we note that $\phi_{au}$ is one of the triplet states.
We are typically interested in the ground electronic state of a given system. For a
closed-shell system such as this, the ground electronic state is the totally symmetric
gerade state
$$
\Psi_{gs} = \cos\theta\,\phi_{ag} + \sin\theta\,\phi_{bg}
\tag{8.120}
$$
where $\theta$ is the mixing angle that mixes the delocalized ($\phi_{ag}$) and localized ($\phi_{bg}$) electronic
configurations. The ground-state energy is then
$$
E_{gs} = 2\varepsilon + U/2 - \sqrt{(U/2)^2 + 4t^2}
\tag{8.121}
$$
Recalling Chapter 2, we can write the mixing angle as
$$
\tan 2\theta = \frac{4t}{U}
\tag{8.122}
$$
For the case of small $U$, the Coulomb repulsion between electrons is very small relative
to the hopping, and both gerade configurations mix strongly. In the limit of
no interaction, $E_{gs} = 2\varepsilon - 2|t|$, which is what we expect from a Hückel treatment.
As $U$ becomes large compared to $t$, hopping is insignificant, double occupancy is suppressed, and the
ground state is dominated by the covalent configuration $\phi_{ag}$, with $E_{gs} \to 2\varepsilon$; the upper gerade state,
dominated by the localized configuration $\phi_{bg}$, approaches $2\varepsilon + U$,
which is the energy necessary to have two electrons on the same atom with Coulomb
interaction $U$. In the exact case, we smoothly interpolate between the two asymptotic
limits. If you imagine that $t$ is a function of the bond distance, then we have a
smooth potential energy surface connecting the bound molecular system to the dissociated-atom
limit. This is not always the case when we make approximations to the
system.
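These results are easily checked numerically. The sketch below builds the 6 × 6 matrix of Eq. (8.114) for illustrative values of $\varepsilon$, $t$, and $U$ and compares its lowest eigenvalue with the closed-form ground-state energy of Eq. (8.121).

```python
import numpy as np

eps, t, U = 0.0, -1.0, 3.0    # illustrative parameters

e2 = 2 * eps
H = np.array([[e2, 0,      t,      t,      0,  0],
              [0,  e2,     t,      t,      0,  0],
              [t,  t,      e2 + U, 0,      0,  0],
              [t,  t,      0,      e2 + U, 0,  0],
              [0,  0,      0,      0,      e2, 0],
              [0,  0,      0,      0,      0,  e2]])       # Eq. (8.114)

E_num = np.linalg.eigvalsh(H)[0]
E_exact = e2 + U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2)  # Eq. (8.121)
print(round(E_num, 8), round(E_exact, 8))                  # -> -1.0 -1.0
```

The remaining eigenvalues group exactly as described above: the three triplet states and $\phi_{au}$ at $2\varepsilon$, $\phi_{bu}$ at $2\varepsilon + U$, and the upper gerade state at $2\varepsilon + U/2 + \sqrt{(U/2)^2 + 4t^2}$.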

8.5.1 HF TREATMENT OF ETHYLENE


Within the Hückel model, the single-electron term is given by
$$
h = \begin{pmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{pmatrix}
\tag{8.123}
$$
with $h_{11} = h_{22} = \alpha$ and $h_{12} = h_{21}^{*} = t$ as per the Hückel notation. In the localized
basis, the matrix elements of the Fock operator are
$$
f_{11}^{\mu} = h_{11} + \sum_{lm}\langle a_{l\mu}^{\dagger} a_{m\mu}\rangle\left(2\langle l1|m1\rangle - \langle l1|1m\rangle\right)
\tag{8.124}
$$
$$
f_{22}^{\mu} = h_{22} + \sum_{lm}\langle a_{l\mu}^{\dagger} a_{m\mu}\rangle\left(2\langle l2|m2\rangle - \langle l2|2m\rangle\right)
\tag{8.125}
$$
$$
f_{12}^{\mu} = h_{12} + \sum_{lm}\langle a_{l\mu}^{\dagger} a_{m\mu}\rangle\left(2\langle l1|m2\rangle - \langle l1|2m\rangle\right)
\tag{8.126}
$$
$$
f_{21}^{\mu} = f_{12}^{\mu}
\tag{8.127}
$$
While we can work in the localized basis, it is more convenient to take the Hückel
ground-state molecular orbital as a trial wave function for performing the Hartree–Fock
calculation. This we can find by diagonalizing $h$ and forming the molecular
orbitals
$$
\phi_{\pm} = \frac{1}{\sqrt{2}}(\phi_1 \pm \phi_2)
\tag{8.128}
$$

with energies
$$
\varepsilon_{\pm} = \alpha \mp |t|
\tag{8.129}
$$
Thus, we form the trial solution to the Hartree–Fock equations by writing the ground-state
wave function as
$$
\psi_{gs} = a_{+\beta}^{\dagger} a_{+\alpha}^{\dagger}|0\rangle
\tag{8.130}
$$
Expanding this in the local orbital basis,
$$
\psi_{gs} = \frac{1}{2}\left[\left(a_{1\beta}^{\dagger} a_{1\alpha}^{\dagger} + a_{2\beta}^{\dagger} a_{2\alpha}^{\dagger}\right) + \left(a_{1\beta}^{\dagger} a_{2\alpha}^{\dagger} + a_{2\beta}^{\dagger} a_{1\alpha}^{\dagger}\right)\right]|0\rangle
\tag{8.131}
$$
where the first term represents a symmetric linear combination of the two possible
ionic configurations and the second term is a symmetric linear combination of the
two covalent configurations. Writing the Fock matrix in the molecular orbital basis
results in
$$
F = \begin{pmatrix}
\varepsilon_{+} + \langle ++|++\rangle & 0 \\
0 & \varepsilon_{-} + 2\langle -+|-+\rangle - \langle -+|+-\rangle
\end{pmatrix}
\tag{8.132}
$$

No further refinement is needed, and we find the single-electron (Hartree–Fock) orbital
energies to be
$$
\varepsilon_1^{hf} = \varepsilon_{+} + \langle ++|++\rangle
\tag{8.133}
$$
$$
\varepsilon_2^{hf} = \varepsilon_{-} + 2\langle -+|-+\rangle - \langle -+|+-\rangle
\tag{8.134}
$$
Finally, we write the total ground-state energy as
$$
E_{gs} = 2h_{++} + \langle ++|++\rangle = 2\varepsilon - 2|t| + \langle ++|++\rangle
\tag{8.135}
$$
Evaluating the electron–electron (e–e) interaction, we find that the only terms that
survive are those involving the Coulomb interactions in the ionic configurations:
$$
\langle ++|++\rangle = \frac{1}{4}\left(\langle 11|11\rangle + \langle 22|22\rangle\right) = \frac{1}{2}U
\tag{8.136}
$$
which we take to be the same for both atoms:
$$
U = \int dr_1 \int dr_2\, |\phi_1(r_1)|^2\, \frac{e^2}{r_{12}}\, |\phi_1(r_2)|^2
\tag{8.137}
$$
Collecting these results, we obtain the Hartree–Fock ground-state energy:
$$
E_{hf} = 2\varepsilon + U/2 - 2|t|
\tag{8.138}
$$

Here we see a linear variation in the ground-state energy with increasing e-e inter-
action. When U/t is small (large t or small U ) the Hartree–Fock energy approaches
the exact energy from above. While the HF energy does asymptotically approach the

[Figure: panel (a) plots $E_{gs}/t$ vs. $U/t$ for the HF, UHF, and exact solutions; panel (b) plots the optimal mixing angle $\eta_{opt}$ vs. $U/t$.]
FIGURE 8.9 (a) Comparison between Hartree–Fock, unrestricted Hartree–Fock, and the exact
ground-state energy for a diatomic model using the Hubbard model. For $U/t < 2$, the UHF
and HF results are identical. (b) UHF mixing angle $\eta$ vs. coupling. At $U/t = 2$ the derivative
of the UHF energy with respect to the coupling is discontinuous, indicating a sudden transition
from localized to delocalized states.

exact value in the limit of U → 0, it fails to reproduce the dissociated atom limit as
plotted in Figure 8.9.
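The failure at dissociation is plain if we tabulate the difference between $E_{hf}$ of Eq. (8.138) and the exact energy of Eq. (8.121): the error vanishes at $U = 0$ and grows without bound as $U/|t|$ increases (the parameter values below are illustrative).

```python
import numpy as np

eps, t = 0.0, -1.0
errors = []
for U in (0.0, 1.0, 4.0, 16.0):
    E_exact = 2 * eps + U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2)  # Eq. (8.121)
    E_hf = 2 * eps + U / 2 - 2 * abs(t)                             # Eq. (8.138)
    errors.append(E_hf - E_exact)
    print(U, round(errors[-1], 4))   # HF error, always >= 0, grows with U
```

Since the exact energy tends to $2\varepsilon$ while $E_{hf}$ grows linearly as $U/2$, the restricted HF description of the stretched bond is qualitatively wrong, which is the motivation for the unrestricted treatment that follows.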
We can extend this a bit further by adopting an unrestricted Hartree–Fock (UHF) approach,
constructing a variational wave function from mixed spin configurations:
$$
\phi_{\alpha} = \sin\eta\,|1\uparrow\rangle + \cos\eta\,|2\uparrow\rangle
\tag{8.139}
$$
$$
\phi_{\beta} = \cos\eta\,|1\downarrow\rangle + \sin\eta\,|2\downarrow\rangle
\tag{8.140}
$$
The coefficients are chosen so that the segregation of the electrons can be controlled by
varying $\eta$. Using these one-electron states as a trial wave function, the UHF energy is
$$
E_{uhf}(\eta) = 2\varepsilon + 2t\sin(2\eta) + 2U\cos^2\eta\,\sin^2\eta
\tag{8.141}
$$
which reduces to the HF value for $\eta = \pi/4$. The UHF energy is the value for which $E_{uhf}(\eta)$
is a minimum. Solving $dE_{uhf}(\eta)/d\eta = 0$ for various values of $U$ yields the
variation of the UHF energy with $U$. The resulting $E_{uhf}$ energy is plotted in Figure 8.9,
along with a plot of the optimal mixing angle. Here we notice that for $U/t \leq 2$, the
HF and UHF energies are identical, with $\eta_{opt} = \pi/4$, indicating that only delocalized
configurations contribute to the ground state. At $U/t = 2$ the first derivative of
$E_{uhf}$ with respect to the coupling is discontinuous. This is indicative of a sudden
and discontinuous change in the ground state to include localized configurations.
Asymptotically, as $\eta \to 0$, the UHF energy converges to the exact ground-state energy. While
the UHF does incorporate the physically correct behavior of charge segregation, it
does so in a discontinuous way.
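The behavior in Figure 8.9 can be reproduced by a direct scan over the mixing angle. The fragment below uses the UHF energy for the trial states of Eqs. (8.139)–(8.140), written here as $E(\eta) = 2\varepsilon + 2t\sin 2\eta + 2U\sin^2\eta\cos^2\eta$ (a form that reduces to the restricted result $2\varepsilon + U/2 - 2|t|$ at $\eta = \pi/4$): below $U/t = 2$ the minimum sits at $\eta = \pi/4$, and above it $\eta_{opt}$ moves away, with the UHF energy approaching the exact result at large $U$.

```python
import numpy as np

eps, t = 0.0, -1.0

def e_uhf(eta, U):
    # UHF energy for the trial orbitals of Eqs. (8.139)-(8.140)
    return (2 * eps + 2 * t * np.sin(2 * eta)
            + 2 * U * (np.sin(eta) * np.cos(eta)) ** 2)

etas = np.linspace(0.0, np.pi / 2, 100001)
for U in (1.0, 4.0, 16.0):
    eta_opt = etas[np.argmin(e_uhf(etas, U))]
    E_exact = 2 * eps + U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2)  # Eq. (8.121)
    print(U, round(eta_opt, 4), round(e_uhf(eta_opt, U) - E_exact, 4))
```

Setting $dE/d\eta = 2\cos 2\eta\,(2t + U\sin 2\eta) = 0$ shows analytically why the transition sits at $U = 2|t|$: the localized solution $\sin 2\eta = 2|t|/U$ only exists once $U \geq 2|t|$.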

8.6 AB INITIO TREATMENTS


The goal of ab initio quantum chemistry is to evaluate the three- and six-dimensional
integrals properly by assuming some finite set of spatial basis functions, $\{u_j\}$. Typically
the basis functions are chosen to be a set of atom-centered Slater- or Gaussian-type
orbitals. Slater-type orbitals were among the first basis sets to be introduced
in the early development of quantum chemistry and are based upon the hydrogenic
atomic orbitals99 centered about the atomic sites. They have the radial form
$$
R_n(r) = N r^{n-1} e^{-\zeta r}
\tag{8.142}
$$
as given above and are generally considered to be a more accurate basis for a given
number of basis functions. However, because the radial parts are based upon exponential
forms, it becomes computationally costly to compute the various multicentered
integrals required to evaluate $H_{ele}$.
In the early 1950s Boys100 introduced the use of Gaussian-type orbitals, although
there is some evidence that Roy McWeeny was using them as early as 1946. Gaussian-type
orbitals have a radial part
$$
R_n(r) = N r^{n-1} e^{-\zeta r^2}
\tag{8.143}
$$

Although Gaussian-type orbitals (GTOs) are not as closely matched to the actual
atomic orbitals as STOs, and generally more GTOs are needed to achieve comparable
accuracy, the advantage gained is a tremendous speedup in the evaluation of
multicentered integrals. This comes from the "Gaussian product theorem,"
whereby the product of two Gaussian orbitals centered on different atoms is a finite
sum of Gaussians centered on a point along the axis connecting them. Hence,
any four-centered integral reduces to a sum of two-centered integrals, and any two-centered
integral becomes a finite number of single-centered integrals. This yields
speedups of four to five orders of magnitude.
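In one dimension and for s-type functions, the product rule is the statement that $e^{-\alpha(x-A)^2}\,e^{-\beta(x-B)^2} = K\,e^{-p(x-P)^2}$ with $p = \alpha + \beta$, $P = (\alpha A + \beta B)/p$, and $K = e^{-\alpha\beta(A-B)^2/p}$; the exponents and centers below are arbitrary illustrative values.

```python
import numpy as np

alpha, A = 0.8, -0.5       # illustrative exponent and center of the first Gaussian
beta,  B = 1.3,  1.2       # ... and of the second

p = alpha + beta
P = (alpha * A + beta * B) / p              # combined center lies between A and B
K = np.exp(-alpha * beta / p * (A - B) ** 2)  # constant prefactor

x = np.linspace(-5.0, 5.0, 101)
lhs = np.exp(-alpha * (x - A) ** 2) * np.exp(-beta * (x - B) ** 2)
rhs = K * np.exp(-p * (x - P) ** 2)
print(np.allclose(lhs, rhs))               # -> True
```

Because the two-center charge distribution collapses to a single center, every integral over it can be done with one-center formulas, which is exactly where the speedup comes from.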
Currently there are literally hundreds of basis sets composed of Gaussian-type
orbitals. A minimal basis set is one in which a single basis function is used on each atom in the
molecule for each orbital occupied in a Hartree–Fock calculation on the free atom.
However, additional basis functions are often added. For example, for the Li atom,
which would require only the 1s and 2s orbitals, one adds a complete set of 2p orbitals.
Thus, for the first row of the periodic table, one has a basis of two s functions and three p
functions, for a total of five basis functions per atom. The most common minimal basis
is the STO-nG family, where the integer n represents the number of Gaussian primitive
functions comprising a single basis function. It is usually a good idea to start off
using a minimal basis and then improve the calculation using a more accurate (hence
larger) basis. Commonly used minimal basis sets in most ab initio codes are STO-3G,
STO-4G, STO-6G, and STO-3G*, the latter being a polarized version of the STO-3G
basis.101
One of the problems with minimal bases is that they are quite inflexible
and are unable to adjust to the different electron densities encountered in forming
chemical bonds. In chemical bonding, it is typically the outer or valence electrons
that take part in forming bonds. Consequently, split-valence basis sets were developed
to account for this fact by representing each valence orbital by two or more basis functions, each
composed of a fixed linear combination of primitive Gaussian functions. These are termed valence
double-, triple-, or quadruple-zeta basis sets according to the number of basis functions used per
valence orbital. These are denoted using Pople's scheme via X-YZg, where X is the number of primitive
Gaussians composing each atomic-core basis function and Y and Z denote how
the valence orbitals are split; with two numbers after the hyphen, we have a split-valence double-zeta
basis. Split-valence triple- and quadruple-zeta basis sets are likewise
denoted X-YZWg and X-YZWVg, respectively.
Additionally, these basis sets may include polarization functions as denoted by an
asterisk (*). Two asterisks (**) indicate that polarization functions are also added to
the light atoms (H and He). In a minimal basis, only the 1s orbital would be used for
these two atoms. In this case, adding polarization functions would involve including
a p function about the light atom. The addition of polarization functions allows the
electron density about these atoms to be more asymmetric about the atom center.
Similarly d- and even f-type orbitals can be added. The more precise notation is to
indicate exactly how many polarization functions were added such as (p,d).
One can also include diffuse functions, as denoted by a + sign (a double sign, ++, adds them to
the light atoms as well). Here one adds additional Gaussian functions that are very broad so as to
more accurately represent the "tail" of the atomic orbitals as one moves farther away from the atomic
center. Such functions are necessary when performing calculations on
anions or Rydberg states.
Finally, one can use correlation-consistent basis functions. These were developed
by Thom Dunning and co-workers and are designed to converge to the complete-basis-set
limit using extrapolation techniques. These are the "cc-pVNZ" basis sets (for
correlation-consistent polarized; the V denotes that they are valence-only basis sets), where
N = D, T, Q, 5, 6, . . . for double-, triple-zeta, and so on. Such basis sets
are currently considered state of the art and are widely used for post-Hartree–Fock
calculations.
Needless to say, ab initio treatments are appealing since one deals strictly with
the basic physical interactions between the electrons and the nuclei. There is some art
in designing basis functions, and certainly a lot of thoughtful computational design
is required to both set up and solve the many-body problem. A number of standard
implementations and codes are available, some at no cost and some at low cost for
academic users. As better codes, better basis sets, and better theoretical techniques
are developed, we also have nearly parallel progress in the amount of computational
horsepower available. Consequently, armed with a modest modern workstation, we
can perform quite accurate ab initio calculations with rather large basis
sets on systems of up to about 100 to 300 carbon atoms.

8.7 CREATION/ANNIHILATION OPERATOR FORMALISM
FOR FERMION SYSTEMS

In this chapter, we have drawn heavily upon the use of fermion creation and annihilation
operators in describing the various models for electronic structure. The advantage
of adopting such a formalism is that the Pauli exclusion principle and the antisymmetrization
rule are immediately enforced. Here we briefly summarize their properties.

An $N$-fermion wave function is a function of $N$ coordinates describing the position
of each fermion:
$$
\langle r_1, \ldots, r_N|\psi\rangle = \psi(r_1, \ldots, r_N)
\tag{8.144}
$$
This is a normalizable function since
$$
\int dr_1 \cdots dr_N\, |\psi(r_1, \ldots, r_N)|^2 < +\infty
\tag{8.145}
$$
and
$$
P(r_1, \ldots, r_N) = |\psi(r_1, \ldots, r_N)|^2
\tag{8.146}
$$
gives the joint probability of finding electron 1 at $r_1$, electron 2 at $r_2$, and so on. Since
the physics implied by this cannot depend upon how we arbitrarily assign the labels to
the electrons (being identical particles), swapping labels should not affect the physics.
Thus,
$$
P(r_1, r_2, \ldots, r_N) = P(r_2, r_1, \ldots, r_N)
\tag{8.147}
$$
This introduces an ambiguity into the wave function, since swapping particles can
then only change the total phase of the wave function,
$$
P_{12}\,\psi(r_1, r_2, \ldots, r_N) = \psi(r_2, r_1, \ldots, r_N) = e^{i\delta}\,\psi(r_1, r_2, \ldots, r_N)
\tag{8.148}
$$
Repeated application of the permutation operator will then introduce a phase change
$n\delta$:
$$
P_{ij}^{n}\,\psi(r_1, r_2, \ldots, r_N) = e^{in\delta}\,\psi(r_1, r_2, \ldots, r_N)
\tag{8.149}
$$
Consequently, one can readily conclude that $\delta$ can take one of two possible values:
$\delta = \pi$ or $\delta = 0$. For the case of $\delta = \pi$, each binary permutation of indices changes
the sign of the wave function, while for $\delta = 0$, swapping particle indices has no effect
on the sign of the wave function. From relativistic considerations, it can be shown
that particles with half-integer spin (fermions) must have $\delta = \pi$ while particles with
integer spin (bosons) must have $\delta = 0$. Thus, for electrons,
$$
\psi(r_{p1}, r_{p2}, \ldots, r_{pN}) = (-1)^{P}\,\psi(r_1, r_2, \ldots, r_N)
\tag{8.150}
$$
where $P$ is the number of binary transpositions required to return the permutation $\{p1, p2,
p3, \ldots, pN\}$ to its original form. For a two-electron system,
$$
\psi(1, 2) = -\psi(2, 1)
\tag{8.151}
$$
and for three electrons,
$$
\psi(1, 2, 3) = -\psi(2, 1, 3) = +\psi(2, 3, 1)
\tag{8.152}
$$

Since all of these wave functions are, for all intents and purposes, identical in terms of
their physical content, we need to sum over all possible permutations in constructing
the final fermionic state. Let us define $\hat P_F$ as the antisymmetrization operator that
acts upon an $N$-fermion state to produce an antisymmetric state,
$$
\hat P_F\,\psi(1, \ldots, N) = \frac{1}{N!}\sum_p (-1)^{p}\,\psi(p1, \ldots, pN)
\tag{8.153}
$$
This is a Hermitian operator; for two particles, for example,
$$
P_F\,\psi(1, 2) = \frac{1}{2}\left(\psi(1, 2) - \psi(2, 1)\right)
\tag{8.154}
$$
It also acts as a projection operator, since operating twice is equivalent to operating
once:
$$
P_F P_F\,\psi(1 \cdots N) = \frac{1}{N!}\,\frac{1}{N!}\sum_p\sum_{p'} (-1)^{p+p'}\,\psi(pp'1, pp'2, \ldots, pp'N)
\tag{8.155}
$$
$$
= \frac{1}{N!}\sum_p \left[\frac{1}{N!}\sum_q (-1)^{q}\,\psi(q1, q2, \ldots, qN)\right]
\tag{8.156}
$$
$$
= \frac{1}{N!}\sum_p P_F\,\psi(1, \ldots, N)
\tag{8.157}
$$
$$
= P_F\,\psi(1, \ldots, N)
\tag{8.158}
$$

This $N$-particle wave function is a vector in an $N$-particle Hilbert space $\mathcal{H}^N$,
which is the direct product space of $N$ single-particle Hilbert spaces $\mathcal{H}$,
$$
\mathcal{H}^N = \mathcal{H}\otimes\mathcal{H}\otimes\mathcal{H}\otimes\cdots
\tag{8.159}
$$
each spanned by an orthonormal basis of vectors $\{|\phi_i\rangle\}$. Thus, we can construct an
appropriate basis for the $N$-fermion state using
$$
|\phi_1\cdots\phi_N) = |\phi_1\rangle\otimes\cdots\otimes|\phi_N\rangle
\tag{8.160}
$$
For notation, we use the rounded ket, $|\cdot)$, to denote a nonsymmetrized direct-product
basis ket. Similarly, we use a curly bracket $|\cdot\}$ to denote a properly antisymmetrized
and normalized ket,
$$
|\phi_1\phi_2\cdots\phi_N\} = \sqrt{N!}\,P_F\,|\phi_1\phi_2\cdots\phi_N)
\tag{8.161}
$$
$$
= \frac{1}{\sqrt{N!}}\sum_p (-1)^{p}\,|\phi_{p1}\phi_{p2}\cdots\phi_{pN})
\tag{8.162}
$$

For example, for a two-electron system, we write
$$
|\phi_1, \phi_2\} = \frac{1}{\sqrt{2!}}\left(|\phi_1\phi_2) - |\phi_2\phi_1)\right)
\tag{8.163}
$$
This state obeys antisymmetry, since applying the permutation operator yields
$$
|\phi_1, \phi_2\} = -|\phi_2, \phi_1\}
\tag{8.164}
$$
Also, the Pauli principle is automatically enforced, since
$$
|\phi_1, \phi_1\} = \frac{1}{\sqrt{2!}}\left(|\phi_1\phi_1) - |\phi_1\phi_1)\right) = 0
\tag{8.165}
$$
In other words, putting two particles into the same orbital results in a vanishing wave
function.
A second notational convention we shall adopt is that $|\phi_1\phi_2)$ reads "electron #1
is in single-particle orbital #1 and electron #2 is in single-particle orbital #2," while
$|\phi_2\phi_1)$ reads "electron #1 is in single-particle orbital #2 and electron #2 is in single-particle
orbital #1." That is,
$$
(1, 2|\phi_1\phi_2) = \phi_1(1)\phi_2(2)
\tag{8.166}
$$
while
$$
(1, 2|\phi_2\phi_1) = \phi_2(1)\phi_1(2)
\tag{8.167}
$$
Within the orbital basis we can write the antisymmetrized state as a Slater determinant,
$$
|\phi_1\cdots\phi_N\} = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\phi_1(1) & \phi_2(1) & \cdots & \phi_N(1) \\
\phi_1(2) & \phi_2(2) & \cdots & \phi_N(2) \\
\vdots & \vdots & & \vdots \\
\phi_1(N) & \phi_2(N) & \cdots & \phi_N(N)
\end{vmatrix}
\tag{8.168}
$$

Determinants automatically satisfy the antisymmetry requirement since swapping


any two rows or any two columns results in a change of sign. Likewise, if any two
(or more) columns or any two (or more) rows are identical to within a constant factor,
the determinant vanishes.
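Both properties are easy to check directly: fill the matrix $\phi_j(r_i)$ of Eq. (8.168) for a set of electron coordinates, take the determinant, and observe that swapping two coordinates flips the sign while coinciding coordinates give zero. The three one-particle functions below are arbitrary illustrative choices.

```python
import math
import numpy as np

def orbitals(r):
    # three illustrative one-particle functions (not orthonormalized)
    return np.array([np.exp(-r ** 2),
                     r * np.exp(-r ** 2),
                     (2 * r ** 2 - 1) * np.exp(-r ** 2)])

def slater(coords):
    # Eq. (8.168): Psi(r1..rN) = det[phi_j(r_i)] / sqrt(N!)
    M = np.array([orbitals(r) for r in coords])   # row i holds electron i
    return np.linalg.det(M) / math.sqrt(math.factorial(len(coords)))

psi = slater([0.1, 0.7, -0.4])
psi_swapped = slater([0.7, 0.1, -0.4])
print(psi + psi_swapped)            # antisymmetry: zero to machine precision
print(slater([0.3, 0.3, -0.4]))     # Pauli principle: two identical rows give zero
```

The same checks work for identical columns, i.e. for placing two electrons in the same orbital.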
Needless to say, the bookkeeping in this looks to be quite painful! However, we
can simplify matters considerably by introducing operators that add or remove single
electrons from the antisymmetrized state. Let
$$
a_\lambda^{\dagger}\,|\lambda_1\cdots\lambda_N\} = |\lambda\lambda_1\cdots\lambda_N\}
\tag{8.169}
$$
add an electron to basis orbital $\lambda$. Normalizing this (as per the harmonic oscillator
operators),
$$
a_\lambda^{\dagger}\,|\lambda_1\cdots\lambda_N\rangle = \sqrt{1 + n_\lambda}\,|\lambda\lambda_1\cdots\lambda_N\rangle
\tag{8.170}
$$
where $n_\lambda$ is the occupation number of orbital $\lambda$ in $|\lambda_1\cdots\lambda_N\rangle$. Since the Pauli principle
puts the limit that $n_\lambda = 0$ or $1$, we have
$$
a_\lambda^{\dagger}\,|\lambda_1\cdots\lambda_N\} =
\begin{cases}
|\lambda\lambda_1\cdots\lambda_N\} & \text{if } |\lambda\rangle \text{ is not in } |\lambda_1\cdots\lambda_N\} \\
0 & \text{otherwise}
\end{cases}
\tag{8.171}
$$
Any basis vector $|\lambda_1\cdots\lambda_N\rangle$ or $|\lambda_1\cdots\lambda_N\}$ can be constructed by repeated application
of the creation operator on the vacuum state $|0\rangle$. Thus,
$$
|\lambda_1\cdots\lambda_N\} = a_{\lambda_1}^{\dagger} a_{\lambda_2}^{\dagger}\cdots a_{\lambda_N}^{\dagger}\,|0\rangle
\tag{8.172}
$$

and
$$
|\lambda_1\cdots\lambda_N\rangle = \frac{1}{\sqrt{n_1!\cdots n_N!}}\, a_{\lambda_1}^{\dagger} a_{\lambda_2}^{\dagger}\cdots a_{\lambda_N}^{\dagger}\,|0\rangle
\tag{8.173}
$$
One must be careful in using these operators, since
$$
a_\lambda^{\dagger} a_\mu^{\dagger}\,|0\rangle = |\lambda\mu\} = -|\mu\lambda\} = -a_\mu^{\dagger} a_\lambda^{\dagger}\,|0\rangle
\tag{8.174}
$$
Thus, we can conclude that
$$
a_\mu^{\dagger} a_\lambda^{\dagger} + a_\lambda^{\dagger} a_\mu^{\dagger} = 0
\tag{8.175}
$$
or
$$
\left\{a_\mu^{\dagger}, a_\lambda^{\dagger}\right\} = 0
\tag{8.176}
$$
In other words, the fermion creation operators anticommute.
The creation operators are also not self-adjoint, and we have the following relations
for the annihilation operators:
$$
a_\lambda\,|0\rangle = 0
\tag{8.177}
$$
$$
\langle 0|\,a_\lambda^{\dagger} = 0
\tag{8.178}
$$
$$
a_\lambda\,|n_\lambda\rangle = \sqrt{n_\lambda}\,|n_\lambda - 1\rangle
\tag{8.179}
$$

In other words, $a_\lambda$ removes a fermion from orbital $\lambda$ if this state is occupied to
begin with. To see how this works on the antisymmetrized many-body state, consider
$$
a_\lambda\,|\beta_1\cdots\beta_N\} = \sum_P \frac{1}{P!}\sum_{\alpha_1\cdots\alpha_P} \{\alpha_1\cdots\alpha_P|a_\lambda|\beta_1\cdots\beta_N\}\,|\alpha_1\cdots\alpha_P\}
\tag{8.180}
$$
$$
= \sum_P \frac{1}{P!}\sum_{\alpha_1\cdots\alpha_P} \{\lambda\alpha_1\cdots\alpha_P|\beta_1\cdots\beta_N\}\,|\alpha_1\cdots\alpha_P\}
\tag{8.181}
$$
where $P$ denotes the number of particles in a given state and the inner sums are over
all $P$-particle basis states. Clearly, only terms with $P = N - 1$ and $(\lambda\alpha_1\cdots\alpha_P)$ equal
to some permutation of $(\beta_1\cdots\beta_N)$ will contribute to the sum. Thus, we can write
$$
a_\lambda\,|\beta_1\cdots\beta_N\} = \sum_{i=1}^{N} (-1)^{i-1}\,\delta_{\lambda\beta_i}\,|\beta_1\cdots\beta_{i-1}\beta_{i+1}\cdots\beta_N\}
\tag{8.182}
$$
$$
= \sum_{i=1}^{N} (-1)^{i-1}\,\delta_{\lambda\beta_i}\,|\beta_1\cdots\hat\beta_i\cdots\beta_N\}
\tag{8.183}
$$
where the hat denotes that an electron has been removed from orbital $\beta_i$. Thus, for
fermions,
$$
a_\lambda\,|\beta_1\cdots\beta_N\} =
\begin{cases}
(-1)^{i-1}\,|\beta_1\cdots\hat\beta_i\cdots\beta_N\} & \text{if } |\lambda\rangle = |\beta_i\rangle \text{ is occupied} \\
0 & \text{otherwise}
\end{cases}
\tag{8.184}
$$
254 Quantum Dynamics: Applications in Biological and Materials Systems

Now to close the algebra. Consider the action of two operators on the vacuum state:

a_\lambda a_\mu^\dagger |0\rangle = \delta_{\lambda\mu} |0\rangle = a_\mu a_\lambda^\dagger |0\rangle    (8.185)

Thus, we can conclude

\{a_\lambda, a_\mu^\dagger\} = \delta_{\lambda\mu}    (8.186)

which gives the fermion anticommutation bracket. If we act upon an already occupied state, then

a_\mu^\dagger a_\lambda |\alpha_1 \cdots \alpha_N\} = \sum_{i=1}^N (-1)^{i-1} \delta_{\lambda\alpha_i}\, |\mu\,\alpha_1 \cdots \hat\alpha_i \cdots \alpha_N\}    (8.187)

Thus, we conclude

a_\lambda a_\mu^\dagger |\alpha_1 \cdots \alpha_N\} = \left(\delta_{\lambda\mu} - a_\mu^\dagger a_\lambda\right) |\alpha_1 \cdots \alpha_N\}    (8.188)

This last expression is extremely useful since typically we write operators in normal ordered form, in which all creation operators are to the left of the annihilation operators. This then (typically) reduces the operator into terms involving occupation numbers n_\lambda = a_\lambda^\dagger a_\lambda and integrals evaluated over the basis functions.

General rules: Let us summarize the general operational rules for the fermion operators:

• Many-particle state: |ijkl)
• Antisymmetrized many-particle state: |ijkl\}
• Normalized, antisymmetrized many-particle state (that is, Slater determinant state): |ijkl\rangle
• Vacuum: |0\rangle is defined as a state with no particles. It is normalized so that \langle 0|0\rangle = 1. Antisymmetrized states are generated by acting on the vacuum ket with the creation operators,

  a_i^\dagger a_j^\dagger a_k^\dagger |0\rangle = |ijk\rangle    (8.189)

or by acting on the vacuum bra with the annihilation operators,

  \langle 0| a_k a_j a_i = \langle ijk|    (8.190)

The vacuum is recovered by the reverse operation

  |0\rangle = a_k a_j a_i |ijk\rangle = a_k a_j a_i |A\rangle    (8.191)

and

  \langle 0| = \langle A| (a_k a_j a_i)^\dagger = \langle A|\, a_i^\dagger a_j^\dagger a_k^\dagger    (8.192)

• When acting on an antisymmetrized state,

  a_i^\dagger |jkl\cdots\rangle = |ijkl\cdots\rangle    (8.193)

and

  a_i |ijkl\cdots\rangle = |\bar i jkl\cdots\rangle    (8.194)

where the overline indicates an electron has been removed from orbital i. If orbital i was occupied, the result is

  a_i |ijkl\cdots\rangle = |jkl\cdots\rangle    (8.195)

otherwise

  a_i |jkl\cdots\rangle = 0    (8.196)

• Order is important:

  a_k^\dagger a_l^\dagger |0\rangle = |kl\rangle    (8.197)

while

  a_l^\dagger a_k^\dagger |0\rangle = |lk\rangle = -|kl\rangle    (8.198)

• a_i^\dagger \times a_i^\dagger = a_i \times a_i = 0
• Anticommutation: \{a_i, a_j^\dagger\} = \delta_{ij}. This allows us to swap creation and annihilation operators for different orbitals (i \neq j) provided we change the sign: a_i a_j^\dagger = -a_j^\dagger a_i. If we are dealing with the same orbital, then a_i a_i^\dagger = 1 - a_i^\dagger a_i.
• Overlap between two states \langle A|B\rangle is evaluated by writing out the creation/annihilation operators and rearranging to normal ordering. For example, if |A\rangle = |ij\rangle and |B\rangle = |kl\rangle, then \langle A|B\rangle = \langle 0|(a_i^\dagger a_j^\dagger)^\dagger a_k^\dagger a_l^\dagger|0\rangle. This we evaluate by using the symmetry and commutation rules as follows:

  \langle A|B\rangle = \langle 0| a_j a_i a_k^\dagger a_l^\dagger |0\rangle    (8.199)
  = \langle 0| a_j (\delta_{ik} - a_k^\dagger a_i) a_l^\dagger |0\rangle
  = \delta_{ik} \langle 0| a_j a_l^\dagger |0\rangle - \langle 0| a_j a_k^\dagger a_i a_l^\dagger |0\rangle    (8.200)
  = \delta_{ik} \langle 0|(\delta_{jl} - a_l^\dagger a_j)|0\rangle - \langle 0|(\delta_{jk} - a_k^\dagger a_j)(\delta_{il} - a_l^\dagger a_i)|0\rangle    (8.201)
  = \delta_{ik}\delta_{jl} - \delta_{jk}\delta_{il}    (8.202)

• Normal order. Operators composed of fermion (or boson) operators are in "normal ordered form" if all creation operators are to the left of all annihilation operators. Operators in normal ordered form are denoted as

  N(O) = :O:

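These rules can be exercised numerically. The sketch below is a minimal illustration (not from the text): assuming NumPy is available, it builds the fermion operators as matrices via a Jordan–Wigner product of 2×2 blocks — a standard faithful matrix representation of the fermion algebra — and verifies the anticommutation rules and the overlap formula of Eq. (8.202).

```python
import numpy as np

def annihilation_ops(n_orb):
    """Fermion annihilation operators a_i on n_orb orbitals, built by a
    Jordan-Wigner product:  a_i = Z x ... x Z x a x 1 x ... x 1."""
    I = np.eye(2)
    Z = np.diag([1.0, -1.0])            # sign string enforcing antisymmetry
    a = np.array([[0.0, 1.0],           # annihilates the single-orbital |1>
                  [0.0, 0.0]])
    ops = []
    for i in range(n_orb):
        M = np.eye(1)
        for j in range(n_orb):
            M = np.kron(M, Z if j < i else (a if j == i else I))
        ops.append(M)
    return ops

ops = annihilation_ops(4)
dim = 2 ** 4
vac = np.zeros(dim); vac[0] = 1.0       # |0>: all orbitals empty

# Anticommutation rules: {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0
for i, ai in enumerate(ops):
    for j, aj in enumerate(ops):
        acomm = ai @ aj.T + aj.T @ ai
        assert np.allclose(acomm, (1.0 if i == j else 0.0) * np.eye(dim))
        assert np.allclose(ai @ aj + aj @ ai, 0.0)

# Overlap <A|B> = <0| a_j a_i a_k^dag a_l^dag |0> = d_ik d_jl - d_jk d_il, Eq. (8.202)
def overlap(i, j, k, l):
    return vac @ (ops[j] @ ops[i] @ ops[k].T @ ops[l].T @ vac)

assert np.isclose(overlap(0, 1, 0, 1), 1.0)     # i=k, j=l
assert np.isclose(overlap(0, 1, 1, 0), -1.0)    # swapped pair -> sign change
assert np.isclose(overlap(0, 1, 2, 3), 0.0)     # no matching orbitals
```

Because the representation is faithful, any operator identity derived with the anticommutation rules can be spot-checked in this way on a handful of orbitals.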
Single-particle operators: Single-particle operators—that is, those involving either the coordinate or momentum of only a single electron—can always be written in the form

\hat U = \sum_{\lambda\mu} \langle\lambda|U|\mu\rangle\, a_\lambda^\dagger a_\mu    (8.203)

where \langle\lambda|U|\mu\rangle is the integral

\langle\lambda|U|\mu\rangle = \int dx\, \phi_\lambda^*(x)\, U(\hat x, \hat p)\, \phi_\mu(x)    (8.204)

Two-particle operators: Two-particle terms, in particular the electron-electron interaction, are such that

\hat V |\alpha\beta) = V_{\alpha\beta} |\alpha\beta)    (8.205)

For matrix elements between antisymmetrized states, we need to sum over all permutations:

\{\alpha_1 \cdots \alpha_N |\hat V|\alpha_1 \cdots \alpha_N\} = \frac{1}{2} \sum_P \zeta_P \sum_{i \neq j} (\alpha_{P_i}\alpha_{P_j}|V|\alpha_i\alpha_j)    (8.206)

= \left(\frac{1}{2} \sum_{i \neq j} V_{\alpha_i\alpha_j}\right) \{\alpha_1 \cdots \alpha_N |\alpha_1 \cdots \alpha_N\}    (8.207)

where the sum is over all distinct particle pairs in |\alpha_1 \cdots \alpha_N\rangle.

8.7.1 EVALUATING FERMION OPERATORS

The motivation in introducing an operator formalism is to simplify calculations involving operators composed of multiple fermion operators. More importantly, in developing electronic structure theories we need to evaluate commutators of operator products, much as we did above in generating the Fock matrix, or as we would if we were interested in the time-dependent response of the system due to some external driving field. For this, we have two powerful allies given by antisymmetry. First,

\{a_\lambda, a_\mu^\dagger\} = \delta_{\lambda\mu}    (8.208)

and second, a_\mu a_\lambda = -a_\lambda a_\mu. For example, consider the commutator between a single fermion operator and a binary product (as would be encountered in the single-particle hopping terms):

[a_k, a_l^\dagger a_m] = a_k a_l^\dagger a_m - a_l^\dagger a_m a_k
= (\delta_{kl} - a_l^\dagger a_k)\, a_m - a_l^\dagger a_m a_k
= (\delta_{kl} - a_l^\dagger a_k)\, a_m - (-1)\, a_l^\dagger a_k a_m
= \delta_{kl}\, a_m    (8.209)

Here we have shown the sequence of steps where we have used the two "tricks" in our arsenal. Obviously, more complicated examples can be given involving commutators of operators composed of two or more fermion operators each. A more complex example arises in the commutator between the density operator a_k^\dagger a_l and the two-particle interaction operator a_m^\dagger a_n^\dagger a_p a_q:

[a_k^\dagger a_l, a_m^\dagger a_n^\dagger a_p a_q] = \delta_{lm}\, a_k^\dagger a_n^\dagger a_p a_q - \delta_{ln}\, a_k^\dagger a_m^\dagger a_p a_q + \delta_{kp}\, a_m^\dagger a_n^\dagger a_q a_l - \delta_{kq}\, a_m^\dagger a_n^\dagger a_p a_l    (8.210)

A final example, this time involving a lone creation operator with the same two-particle operator, results in

[a_k^\dagger, a_m^\dagger a_n^\dagger a_p a_q] = \delta_{kp}\, a_m^\dagger a_n^\dagger a_q - \delta_{kq}\, a_m^\dagger a_n^\dagger a_p    (8.211)

In all cases we manipulate the sequence of the fermion operators to bring it into the normal ordered form.

8.7.2 NOTATION FOR TWO-BODY INTEGRALS

One of the irritations of the quantum chemistry and quantum many-body physics literature is that both branches of the same subject evolved over much the same period of time yet developed slightly different notations for the same thing. Based upon our definitions above for many-body wave functions, we interpret the matrix element \langle kn|v|lm\rangle to mean

\langle kn|v|lm\rangle = \int\!\!\int d1\, d2\, \phi_k^*(1)\phi_n^*(2)\, v(12)\, \phi_l(1)\phi_m(2)    (8.212)

We also can abbreviate \langle kn|v|lm\rangle \equiv \langle kn|lm\rangle, and we have the following relations:

\langle ij|kl\rangle = \langle ji|lk\rangle    (8.213)

and

\langle ij|kl\rangle = \langle kl|ij\rangle^*    (8.214)

This notation is mostly used in the many-body physics literature. We have adopted it here since it is easier to read the matrix element from the order of the creation/annihilation operators once they have been put into normal ordered form with the creation operators to the left of the annihilation operators. For example,

\langle kl|nm\rangle\, a_k^\dagger a_l^\dagger a_m a_n    (8.215)

which appears in the electron-electron interaction term of the electronic Hamiltonian. However, it is an unfortunate fact of life that this same symbol can mean

\langle kn|lm\rangle = \int\!\!\int d1\, d2\, \phi_k(1)\phi_n(1)\, v(12)\, \phi_l(2)\phi_m(2)    (8.216)

(assuming real orbitals) where the quantum numbers for particle #1 are in the left bracket and the quantum numbers for particle #2 are in the right. One also sees this written using square brackets,

[kn|lm] = \int dr_1 \int dr_2\, \phi_k(r_1)\phi_n(r_1)\, v(12)\, \phi_l(r_2)\phi_m(r_2)    (8.217)

where \phi_n(r) is a spatial orbital (as opposed to a spin orbital) and the integral is only over the spatial variables (as opposed to both spin and space). This notation is convenient since, for real orbitals,

[ij|kl] = [ji|kl] = [ij|lk] = [ji|lk]    (8.218)

For purposes of clarity, we shall denote integrals in the physics notation using round (\,|\,) or angular \langle\,|\,\rangle brackets and integrals in the quantum chemistry notation using the square [\,|\,] brackets. The translation between physics and quantum chemistry is

\langle ij|kl\rangle_{\rm physics} = [ik|jl]_{\rm q.chem}    (8.219)

Two important integrals are the Coulomb integral

J_{ij} = [ii|jj] = (ij|ij)    (8.220)

and the exchange integral

K_{ij} = [ij|ij] = (ij|ji)    (8.221)

When in doubt, double-check the notation the author is using. Also, it is a good idea to clearly specify which convention you are using.
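The translation rules are easy to confuse in practice, so the following illustrative sketch (not from the text; it assumes NumPy and uses three made-up real "orbitals" on a 1D grid with a hypothetical interaction kernel) builds both tensors numerically and checks Eq. (8.219) and the Coulomb/exchange definitions of Eqs. (8.220)–(8.221).

```python
import numpy as np

# Three real "orbitals" discretized on a grid and a made-up kernel v(x1, x2).
x = np.linspace(-4.0, 4.0, 81)
dx = x[1] - x[0]
phi = np.array([np.exp(-x**2),
                x * np.exp(-x**2),
                (2 * x**2 - 1) * np.exp(-x**2)])      # need not be orthonormal here
v = np.exp(-np.abs(x[:, None] - x[None, :]))          # hypothetical interaction

# physics convention, Eq. (8.212): particle 1 carries (i, k); particle 2 carries (j, l)
phys = np.einsum('ia,jb,ab,ka,lb->ijkl', phi, phi, v, phi, phi) * dx * dx
# chemistry convention, Eq. (8.217): left bracket is particle 1, right is particle 2
chem = np.einsum('ia,ja,ab,kb,lb->ijkl', phi, phi, v, phi, phi) * dx * dx

# translation between the conventions: <ij|kl>_phys = [ik|jl]_chem, Eq. (8.219)
assert np.allclose(phys, chem.transpose(0, 2, 1, 3))

# Coulomb and exchange integrals, Eqs. (8.220)-(8.221)
for i in range(3):
    for j in range(3):
        J_ij = chem[i, i, j, j]          # [ii|jj]
        K_ij = chem[i, j, i, j]          # [ij|ij]
        assert np.isclose(J_ij, phys[i, j, i, j])   # = (ij|ij)
        assert np.isclose(K_ij, phys[i, j, j, i])   # = (ij|ji), real orbitals
```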

8.8 PROBLEMS AND EXERCISES

Problem 8.1 Consider the two-atom/two-electron problem discussed in the chapter. Show that by using the mixed single-electron states

\phi_\alpha = \sin\eta\, |1\uparrow\rangle + \cos\eta\, |2\uparrow\rangle
\phi_\beta = \cos\eta\, |1\downarrow\rangle + \sin\eta\, |2\downarrow\rangle    (8.222)

we arrive at the following expression for E_{uhf}:

E_{uhf}(\eta) = 2\varepsilon + (2\varepsilon + U)\cos^2\eta\,\sin^2\eta + t\sin(2\eta)    (8.223)

Also show that in the limit of U/t < 2, E_{uhf} reduces to the Hartree–Fock limit.

Problem 8.2 Evaluate the following commutator identities:

[a_i, a_j^\dagger a_k^\dagger a_l a_m]

[a_i^\dagger a_j, a_k^\dagger a_l^\dagger a_m a_n]

Problem 8.3 If |\psi\rangle is a single Slater determinant state, show that the following factorization holds:

\langle a_i^\dagger a_j^\dagger a_k a_l \rangle = \rho_{kj}\rho_{li} - \rho_{lj}\rho_{ki}

SUGGESTED READING
Below is a list of review articles and books concerning the electronic structure of
π-conjugated systems.

1. “Light-emitting Diodes Based on Conjugated Polymers,” J. H. Burroughes,


D. D. C. Bradley, A. R. Brown, R. N. Marks, K. Mackay, R. H. Friend, P.
L. Burns, and A. B. Holmes, Nature, 347, 539–541 (1990).
2. “An Organic Electronics Primer,” G. Malliaras and R. H. Friend, Physics
Today, 58, 53–58 (2005).
3. “Quantum Chemistry Aided Design of Organic Polymers: An Introduction
to the Quantum Chemistry of Polymers and Its Applications,” Jean-Marie
Andre, Joseph Delhalle, and Jean-Luc Bredas (Singapore: World Scientific,
1991).
4. Electronic Processes in Organic Crystals and Polymers, 2nd ed, Martin
Pope and Charles E. Swenberg (Oxford: Oxford University Press, 1999).
5. Conjugated Polymers : The Novel Science and Technology of Highly Con-
ducting and Nonlinear Optically Active Material, J. L. Brédas and R. Silbey
(Dordrecht: Kluwer, 1991).
6. “An Overview of the First Half-Century of Molecular Electronics,” Noel
S. Hush, Ann. N.Y. Acad. Sci., 1006, 1 (2003).
7. “What I Like About the Hückel Model,” W. Kutzelnigg, J. Comp. Chem.
28, 25–34 (2007). This is a recent essay by Prof. Werner Kutzelnigg based
upon a lecture at the University of Marburg in 1996 celebrating the 100th
birthday of Erich Hückel. It has a concise overview of Hückel’s work and
key references. Also, many of the results in Sec. 3.2 are presented in this
paper.
The following are good texts concerning electronic structure theory of molecular
systems:
1. Quantum Chemistry, 5th ed., I. Levine (Prentice Hall, 1999).
2. Modern Quantum Chemistry: Introduction to Advanced Electronic Struc-
ture Theory, A. Szabo and N. S. Ostlund (Dover, 1996).
3. Quantum Mechanics in Chemistry, George C. Schatz and Mark A. Ratner
(Prentice Hall, 1993).
4. Elementary Quantum Chemistry, 2nd ed., Frank L. Pilar (McGraw-Hill,
1990).
9 Electron–Phonon Coupling in Conjugated Systems
9.1 SU–SCHRIEFFER–HEEGER MODEL FOR POLYACETYLENE
A significant distinguishing feature of π-electron systems is the relatively strong
interaction between the electrons and phonons. For example, the carbon-carbon bond
length in an olefinic C=C bond is generally taken to be 1.35 Å, whereas an olefinic
C--C bond is somewhat larger at 1.45 Å. In aromatic systems, the C=C bond length
is 1.40 Å. This is, of course, due to the modulation of bond order as one moves
down the olefinic chain. For aromatic systems, we know that the resonance structures
account for the homogeneous delocalization of the electron density about the ring.
Here we present the classic model developed by Su, Schrieffer, and Heeger (SSH)
that accounts for semiconducting properties of olefinic chains.102,103
From the Hückel model, we found that if all the sites along the chain are equivalent,
then the energy for the infinite chain is given by

E_k = \alpha - 2\beta\cos(ka)    (9.1)

where a is the bond distance between neighboring C atoms. For a π-electron system that is half-filled (that is, each C atom contributes one electron to the π system), the band gap at the Fermi energy is exactly 0. This is the case if all the C--C bonds are the same length, as in aromatic rings.
However, as we just noted in olefinic chains, the C=C and C--C bonds alternate,
so one expects the hopping term β to reflect this alternation. As the bond length
increases, β should decrease in magnitude; while as the bond length is compressed, β
should increase in magnitude. We can thus assign an order parameter, u, that reflects
the expansion and compression of the bonds along the olefinic chain. Assuming all
the sites are equivalent, we can write
H(u) = -\sum_{n\sigma} \left(\beta + (-1)^n 2\alpha u\right)\left(a_{n\sigma}^\dagger a_{n+1\sigma} + a_{n+1\sigma}^\dagger a_{n\sigma}\right)    (9.2)

For the sake of simplicity, take α = 0.05 eV and β = 1.40 eV so as to correspond with the spectroscopic parameters of the PPP model. Thus, u = 0 corresponds to the uniform Hückel model and u = ±1 corresponds to the "dimerized" olefinic chain with alternating single and double bonds. (See Figure 9.1.)
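As a numerical check on this model, one can diagonalize the Hamiltonian of Eq. (9.2) directly on a ring of sites and watch the gap open at the Fermi level. The sketch below is an illustration only (assuming NumPy; the chain length is chosen divisible by 4 so that ka = π/2 is an allowed wave vector), using the α and β values quoted above.

```python
import numpy as np

alpha, beta = 0.05, 1.40       # eV, as quoted in the text
N = 100                        # ring of C sites; N divisible by 4

def band_gap(u):
    """Gap at the Fermi level of the half-filled chain of Eq. (9.2), periodic BC."""
    H = np.zeros((N, N))
    for n in range(N):
        t = beta + (-1)**n * 2 * alpha * u     # alternating hopping integral
        H[n, (n + 1) % N] = -t
        H[(n + 1) % N, n] = -t
    w = np.linalg.eigvalsh(H)                  # ascending eigenvalues
    return w[N // 2] - w[N // 2 - 1]           # LUMO - HOMO of the half-filled band

assert abs(band_gap(0.0)) < 1e-10              # uniform chain: metallic, zero gap
assert abs(band_gap(1.0) - 8 * alpha) < 1e-10  # dimerized chain: gap = 8*alpha*u
```

The gap 8αu is just twice the value of Δ_k = 4αu sin(ka) at ka = π/2, so the real-space diagonalization agrees with the band-theoretic result derived below.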
For the undimerized u = 0 chain, we have two bands, the filled valence band

E_k^{0v} = -2\beta\cos(ka) = -\epsilon_k    (9.3)


FIGURE 9.1 Polyacetylene chain showing (top) arrangement of C 2p_z orbitals forming a π system and (bottom) distortion of the C--C lattice from a uniform lattice to a dimerized lattice.

and the conduction band

E_k^{0c} = +2\beta\cos(ka) = +\epsilon_k    (9.4)

with corresponding states

\chi_k^v = \frac{1}{\sqrt N} \sum_n e^{ikan}\, u_n    (9.5)

\chi_k^c = \frac{1}{\sqrt N} \sum_n e^{ikan} (-1)^n\, u_n    (9.6)

where the u_n are the localized basis functions. For a chain of length L = Na, we can define new operators for the valence and conduction bands as

c_{k\sigma}^v = \frac{1}{\sqrt N} \sum_n e^{ikan}\, a_{n\sigma}    (9.7)

c_{k\sigma}^c = \frac{1}{\sqrt N} \sum_n e^{ikan} (-1)^n\, a_{n\sigma}    (9.8)

Inverting these and reintroducing them back into the modulated Hückel Hamiltonian above, we have for the dimerized lattice

H(u) = \sum_{k\sigma}\left[\epsilon_k\left(c_{k\sigma}^{c\dagger} c_{k\sigma}^{c} - c_{k\sigma}^{v\dagger} c_{k\sigma}^{v}\right) + 4\alpha u \sin(ka)\left(c_{k\sigma}^{c\dagger} c_{k\sigma}^{v} + c_{k\sigma}^{v\dagger} c_{k\sigma}^{c}\right)\right]    (9.9)

We now introduce the Bogolyubov transformation to bring H(u) into diagonal form by mixing the valence and conduction bands,

\begin{pmatrix} a_{k\sigma}^v \\ a_{k\sigma}^c \end{pmatrix} = \begin{pmatrix} \alpha_k & \beta_k \\ \beta_k^* & -\alpha_k^* \end{pmatrix} \begin{pmatrix} c_{k\sigma}^v \\ c_{k\sigma}^c \end{pmatrix}    (9.10)

Since |\alpha_k|^2 + |\beta_k|^2 = 1, this is a unitary transformation. Now, inverting the transformation and requiring H(u) to be diagonal in this new representation, we find

H(u) = \sum_{k\sigma} E_k \left(n_{k\sigma}^c - n_{k\sigma}^v\right)    (9.11)

where

E_k = \left(\epsilon_k^2 + \Delta_k^2\right)^{1/2}    (9.12)

and

\Delta_k = 4\alpha u \sin(ka)    (9.13)

The operators n_{k\sigma}^c and n_{k\sigma}^v are the occupation numbers of the conduction and valence bands. If we restrict \alpha_k to be real and positive, then

\alpha_k = \left[\frac{1}{2}\left(1 + \frac{\epsilon_k}{E_k}\right)\right]^{1/2}    (9.14)

\beta_k = \left[\frac{1}{2}\left(1 - \frac{\epsilon_k}{E_k}\right)\right]^{1/2}    (9.15)

which immediately gives us the conduction and valence band eigenstates. A plot of the valence and conduction bands for an olefinic chain is given in Figure 9.2a using parameters suitable for polyacetylene. Notice at ka = π/2, the gap between the

FIGURE 9.2 (a) Valence and conduction bands for dimerized (u = 1) and undimerized (u = 0) polyacetylene chains close to ka = π/2. (b) Variation of the ground-state energy for a chain of length Na versus the dimerization order parameter u.

valence bands (E < 0) and conduction bands (E > 0) opens as the order parameter varies from 0 to 1.

The ground-state energy (see Figure 9.2b) is given by summing over all occupied energy levels,

E_{gs}(u) = -\sum_k 2E_k    (9.16)

where the sum over k is over the entire Brillouin zone, -\pi/2 \le ka \le \pi/2. Taking the sum to an integral,

E_{gs}(u) = -\frac{2L}{\pi} \int_0^{\pi/2a} \left[(2\beta\cos(ka))^2 + (4\alpha u\sin(ka))^2\right]^{1/2} dk    (9.17)

= -\frac{4N\beta}{\pi} \int_0^{\pi/2} \left(1 - (1 - z^2)\sin^2(ka)\right)^{1/2} d(ka)    (9.18)

= -\frac{4N\beta}{\pi}\, E(1 - z^2)    (9.19)

The integral is the complete elliptic integral E(1 - z^2) with argument z = 2\alpha u/\beta. The expansion of E(1 - z^2) about small values of z produces

E(1 - z^2) \approx 1 + \left(\log(2) - \frac{\log(z)}{2} - \frac{1}{4}\right) z^2 + O(z^4)    (9.20)

from which we can conclude that the total ground-state energy E_{gs} is always lowered upon dimerization of the olefinic lattice and reaches a maximum for the perfectly uniform lattice. This analysis is, in fact, a demonstration of a very powerful theorem given by Peierls, which states that one-dimensional metals cannot exist: such a system will always distort so as to lower its total energy and open a band gap.
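The Peierls argument can be checked by evaluating Eq. (9.18) numerically. The short sketch below (pure Python with midpoint quadrature, using the α and β values quoted earlier; an illustration only) shows that the ground-state energy per site is indeed lowered for any nonzero dimerization u.

```python
import math

alpha, beta = 0.05, 1.40    # eV, as in the text

def e_gs_per_site(u, npts=4000):
    """Ground-state energy per site, -(4*beta/pi) E(1 - z^2), via Eq. (9.18)."""
    z = 2 * alpha * u / beta
    h = (math.pi / 2) / npts
    # midpoint rule for the complete elliptic integral E(1 - z^2)
    integral = sum(math.sqrt(1.0 - (1.0 - z * z) * math.sin((i + 0.5) * h) ** 2)
                   for i in range(npts)) * h
    return -(4 * beta / math.pi) * integral

e0 = e_gs_per_site(0.0)                     # uniform lattice: -4*beta/pi
assert abs(e0 + 4 * beta / math.pi) < 1e-6
for u in (0.25, 0.5, 1.0):
    assert e_gs_per_site(u) < e0            # any dimerization lowers the energy
```

At u = 0 the integrand reduces to cos(ka) and the integral is exactly 1, reproducing −4β/π ≈ −1.78 eV per site, in line with the scale of Figure 9.2b.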

9.2 EXCITON SELF-TRAPPING


In the SSH model we included the effects of bond distortion to minimize the total
energy of a linear chain by making the hopping integral a linear function of the
distortion of atoms from uniform spacing. We can generalize this by writing the

 We can arrive at this conclusion by considering the fact that the period for the dimerized lattice has, in
fact, doubled from a to 2a upon dimerization, and as such the Brillouin zone now extends from -\pi/(2a) to \pi/(2a). If we now take the two atoms in the mth unit cell as having states u_{m1} and u_{m2}, then we can write the wave function for the lattice as

\psi_k = \sum_{m=-\infty}^{+\infty} e^{i2kma} \left(c_1(k)\, u_{m1} + c_2(k)\, u_{m2}\right)    (9.21)

Inserting this into the Schrödinger equation above, we find the coupled equations

\left(t_1 + t_2 e^{i2ka}\right) c_1(k) = E_k\, c_2(k), \qquad \left(t_1 + t_2 e^{-i2ka}\right) c_2(k) = E_k\, c_1(k)    (9.22)

where t_1 = \beta - \alpha u and t_2 = \beta + \alpha u. Solving for the energy produces the result given above.

Hamiltonian for a particle on a lattice as

H_{el}(x) = \sum_n \varepsilon_n\, a_n^\dagger a_n + \sum_n \left(\beta + \lambda(x_n - x_{n+1})\right)\left(a_n^\dagger a_{n+1} + a_{n+1}^\dagger a_n\right)    (9.23)

where x_n denotes the displacements from the uniform lattice with a strain energy given by

E_{strain} = \frac{\omega^2}{2} \sum_n (x_n - x_{n+1})^2    (9.24)

Working within the adiabatic approximation, we can minimize the total energy of the system by requiring

-\langle\phi_o|\frac{\partial H_{el}}{\partial x_n}|\phi_o\rangle - \frac{\partial E_{strain}}{\partial x_n} = 0    (9.25)

where \phi_o is the lowest energy eigenstate of H_{el} for a given lattice configuration. The first term is the Hellmann–Feynman force on the nth lattice site when the exciton is in the lowest energy eigenstate. The second is the strain force, which increases as the lattice is displaced from its uniform position. If β and λ are both negative, then increasing |\phi_o|^2 in a given region gives an attractive interaction between the lattice atoms. However, displacing the atoms from their uniform position increases the strain energy. The final equilibrium state is where the strain force and the Hellmann–Feynman forces are in balance.104–109 In Figure 9.3 we show the effect of exciton self-trapping on a model 11-site lattice. The first (Figure 9.3a) is an order parameter showing the displacement of the lattice sites from their original position in the final relaxed state. The second (Figure 9.3b) shows how the lattice relaxes from its original uniform state (top) to the final relaxed or self-trapped state (bottom). In Figure 9.3c we show the exciton's probability density for the initial and final (relaxed) state.

We can extend the model for a continuum lattice by writing the Hamiltonian as

H = \sum_k \varepsilon(k)\, a_k^\dagger a_k + \sum_q \omega(q)\, B_q^\dagger B_q + \sum_{nq} g(n,q)\, a_n^\dagger a_n \left(B_q + B_q^\dagger\right)    (9.26)

where \{a_k, a_k^\dagger\} are exciton operators in k-space and \{a_n, a_n^\dagger\} denote exciton operators in the lattice representation. The two are related by the Fourier relation. \{B_q, B_q^\dagger\} are phonon operators with dispersion relation \omega(q). The electron-phonon coupling, g(n,q), depends on the type of phonon in question. For longitudinal acoustic modes,

g(n,q) = \frac{1}{\sqrt N}\, e^{iqR_n}\, \frac{Cq}{\sqrt{2\rho\,\omega(q)\,a_o^3}}    (9.27)

where ρ is the density of the medium, a_o is the lattice constant, C is the deformation parameter, and \omega(q) = vq with v the velocity of sound. For optical (transverse) phonons,

g(n,q) = \frac{1}{\sqrt N}\, e^{iqR_n}\, \gamma_o    (9.28)

where \gamma_o and ω are assumed to be constant.


FIGURE 9.3 (a, b) Lattice distortion for an exciton self-trapping on a finite-sized lattice.
(c) Comparing the unrelaxed to trapped exciton probability density for finite lattice.

By completing the square in H, we arrive at

H = \sum_k \varepsilon(k)\, a_k^\dagger a_k + \sum_q \omega(q)\, \tilde B_q^\dagger \tilde B_q - \sum_{mnq} \frac{g(n,q)\, g(m,q)}{\omega(q)}\, \hat n_n \hat n_m    (9.29)

where we have defined renormalized phonon operators as

\tilde B_q = B_q + \sum_n \hat n_n\, g(n,q)/\omega(q)    (9.30)

and

\tilde B_q^\dagger = B_q^\dagger + \sum_n \hat n_n\, g(n,-q)/\omega(q)    (9.31)

with \hat n_n = a_n^\dagger a_n as the exciton number operator on site n. If we adiabatically minimize the total energy, the second term will always be a constant and can thus be ignored.

Thus, we write the adiabatic Hamiltonian as

H_{ad} = \sum_k \varepsilon(k)\, a_k^\dagger a_k - \sum_{mnq} \frac{g(n,q)\, g(m,q)}{\omega(q)}\, \hat n_n \hat n_m    (9.32)

Irrespective of the phonon model, taking the phonons to be normal coordinates gives

\sum_q \frac{g(n,q)\, g(m,q)}{\omega(q)} = \delta_{nm}\, C_o    (9.33)

where

C_o = C^2/(2\rho v^2 a_o^3) for acoustic modes, and C_o = \gamma_o^2/\omega for optical modes    (9.34)

Thus,

H_{ad} = \sum_k \varepsilon_k\, a_k^\dagger a_k - C_o \sum_n \hat n_n^2    (9.35)

So, as discussed above, increasing the exciton probability density at a given lattice site results in a net lowering of the exciton's energy.
We can push this analysis further by assuming a Gaussian form for the exciton's wave function,

a(r) = \left(\frac{a_o}{\sigma}\right)^{1/2} \left(\frac{2}{\pi}\right)^{1/4} e^{-(r/\sigma)^2}    (9.36)

Taking its Fourier transformation,

a(k) = \left(\frac{\sigma}{a_o}\right)^{1/2} \left(\frac{2}{\pi}\right)^{1/4} e^{-k^2\sigma^2}    (9.37)

These are both normalized so that

\sum_n a_n^2 = \frac{N}{L} \int dr\, a(r)^2 = 1    (9.38)

and

\sum_k a_k^2 = \frac{L}{N} \int dk\, a(k)^2 = 1    (9.39)

with L/N = a_o. Using these two relations,

\sum_n \hat n_n^2 = \frac{1}{\sqrt\pi}\, \frac{a_o}{\sigma}    (9.40)

For a free particle on an infinite lattice, the energy dispersion is given by \varepsilon(k) = 4\beta k^2 a_o^2. Thus,

\sum_k \varepsilon(k)\, a_k^2 = \beta \left(\frac{a_o}{\sigma}\right)^2    (9.41)

Putting this all together, one finds that the adiabatic energy is given by

E_{ad} = \beta\frac{a_o^2}{\sigma^2} - \frac{C_o}{\sqrt\pi}\frac{a_o}{\sigma}    (9.42)

Furthermore, extending this to the d-dimensional isotropic lattice, we find

E_{ad} = \beta\frac{a_o^2}{\sigma^2} - \frac{C_o}{\pi^{d/2}}\left(\frac{a_o}{\sigma}\right)^d    (9.43)

Minimizing this with respect to the sole remaining parameter, σ, requires

\frac{a_o C_o\, d\,\pi^{-d/2}(a_o/\sigma)^{d-1}}{\sigma^2} - \frac{2a_o^2\beta}{\sigma^3} = 0    (9.44)

Solving for σ, we find for the one-dimensional lattice

\sigma = \frac{2 a_o \beta\sqrt\pi}{C_o}    (9.45)
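The one-dimensional result can be confirmed with a crude numerical minimization of Eq. (9.43). The sketch below (pure Python; β, C_o, and a_o are arbitrary illustrative values, not parameters from the text) scans σ and compares the minimizer against the closed-form solution of Eq. (9.44) for d = 1.

```python
import math

beta, Co, a_o = 2.0, 1.0, 1.0    # illustrative values only

def e_ad(sigma, d=1):
    """Adiabatic energy, Eq. (9.43), for a d-dimensional isotropic lattice."""
    return beta * a_o**2 / sigma**2 - Co / math.pi**(d / 2) * (a_o / sigma)**d

# brute-force scan for the minimizing width in one dimension
sigmas = [0.001 * i for i in range(500, 20000)]
sig_min = min(sigmas, key=e_ad)

sig_exact = 2 * a_o * beta * math.sqrt(math.pi) / Co   # Eq. (9.44) solved, d = 1
assert abs(sig_min - sig_exact) < 2e-3
assert e_ad(sig_min) < 0.0    # self-trapped state lies below the free exciton
```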
However, for the 2D isotropic lattice, we find that dE_{ad}/d\sigma = 0 occurs only for the special case of C_o = 2\beta/\pi for a continuum lattice. Sumi and Sumi reach a similar conclusion for finite-sized lattices,110 in which they conclude there is a phase boundary between the free and self-trapped (small-radius) exciton that depends upon the size of the system and the strength of the electron-phonon coupling.
More recent quantum/classical dynamical simulations by Kobrak et al.107–109 and
by Tretiak et al.111–115 have examined the interplay between different types of vi-
brational motion in the trapping and relaxation of an exciton on a polymer chain. In
particular for poly-phenylene vinylene (PPV), self-trapping of excitations on about
six repeat units in the course of photoexcitation relaxation identifies specific slow
(torsion) and fast (bond stretch) nuclear motions strongly coupled to the electronic
degrees of freedom. Similar conclusions were drawn using semiempirical excited-
state techniques by Karabunarliev and Bittner.116,117

9.3 DAVYDOV’S SOLITON


The exciton model is quite universal and can be invoked to study a wide variety of
systems in which one has a highly quantized degree of freedom interacting with a
harmonic reservoir. Moreover, the exciton need not be an electronic excitation. One
particularly interesting application of the exciton model occurs in modeling energy
transport along an alpha-helix chain in a protein. In a biological system, chemical
energy is provided through the hydrolysis of adenosine triphosphate (ATP). An ATP
molecule binds to the active site of a protein, reacts with water, and releases 0.49 eV
of free energy. Roughly speaking, 1 eV corresponds to 12 000 K of thermal energy,
so this 0.49 eV is about 20 times greater than the average energy available from the
thermal background around 300 K. The crucial questions become “What happens to
this energy?” and “How is it transported from the reaction site to where it is needed?”
This is a highly nonequilibrium system and one would expect that the excess energy

would be rapidly dissipated as heat to the surroundings. Classical molecular dynamics


simulations confirm that this excess heat is rapidly distributed among the vast degrees
of freedom of the protein and surrounding water molecules in a few picoseconds.
An alternative explanation is that the energy released in ATP hydrolysis is con-
verted via resonant coupling to localized high-frequency modes of the protein, perhaps
occurring through some intermediate vibrational coupling. One likely participant is
the amide-I vibrational mode of a peptide group at 0.21 eV (1660 cm−1 ). This is about
half of the free energy released and almost resonant with the H-O-H bending mode
at 1646 cm−1 . The amide-I mode gives a strong peak in the IR and Raman spectra of
proteins with little variation: 1665–1660 cm−1 for α-helices, 1660–1665 cm−1 for a
β-sheet, and 1665–1680 cm−1 for a random coil. It is primarily composed of a C=O
stretch weakly coupled to an N-H in-plane bend as shown in Figure 9.4a.
In an α helix, the peptide chain is wound into a right-handed helix whereby every
third C=O is hydrogen-bonded to an amide hydrogen to form a quasi-linear chain,
with each C=O separated by a distance R = 4.5 Å.

C=O · · · H-N-C=O · · · H-N-C=O

These chains form three quasi-linear spines along the α helix. These spines are not
exactly linear and slowly wrap about the helical axis as shown in Figure 9.4b. Each
C=O group carries a substantial permanent electric dipole moment μ directed from the
Oδ− to the Cδ+ . These dipoles run parallel to the spines. Thus, to a first approximation,
neighboring C=O’s along each spine are coupled to each other via dipole–dipole
interactions
J_{i,i+1} = \frac{2|\mu|^2}{R^3}

From this we can build a simple vibrational exciton model within a local basis where |\phi_i\rangle represents a single stretching quantum on the ith C=O along a given spine. That is,

H_{ex} = \hbar\omega\, I + J \sum_i^N \left(|\phi_i\rangle\langle\phi_{i+1}| + |\phi_{i+1}\rangle\langle\phi_i|\right)    (9.46)

where \varepsilon = \hbar\omega is the excitation energy for a C=O amide-I vibration and I is the N \times N identity matrix. H_{ex} is the familiar tridiagonal Hamiltonian matrix we have seen previously. Thus, its eigenvalues and eigenvectors can be immediately deduced. For a sufficiently long helix, the eigenfunctions are plane waves with energy

\varepsilon(k) = \varepsilon + 2|J|\cos(kR)

Thus, for a perfect chain, a C=O vibrational exciton will be delocalized over the entire chain.
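The tridiagonal H_ex of Eq. (9.46) can be diagonalized directly. For a uniform open chain its eigenvalues are known in closed form, ε_n = ε + 2J cos(nπ/(N+1)), which the following sketch verifies (assuming NumPy; the numerical values are placeholders of roughly amide-I magnitude, not a fit from the text).

```python
import numpy as np

N = 50
eps = 0.21        # site energy (eV), roughly one amide-I quantum
J = -0.00097      # nearest-neighbor coupling (eV); placeholder magnitude

# tridiagonal exciton Hamiltonian, open chain, cf. Eq. (9.46)
H = eps * np.eye(N) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
w = np.sort(np.linalg.eigvalsh(H))

# closed-form eigenvalues of a uniform open tridiagonal matrix
n = np.arange(1, N + 1)
exact = np.sort(eps + 2 * J * np.cos(n * np.pi / (N + 1)))

assert np.allclose(w, exact)
# all states lie strictly inside the band eps - 2|J| ... eps + 2|J|
assert w.min() > eps - 2 * abs(J) and w.max() < eps + 2 * abs(J)
```

For long chains the open-chain spectrum approaches the plane-wave dispersion quoted above, with the full bandwidth 4|J|.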
The α helix itself is free to undergo a variety of motions. A compression of the
α-helix chain would change the local electrostatic environment about the C=O group.

 We use the “physics convention” where dipoles point from source (−) to sink (+) rather than the “chemistry

convention” where dipoles point from (+) to (−).




FIGURE 9.4 (a) Peptide group showing the amide-I vibrational mode. The amide-I mode gives a strong peak in the IR and Raman spectra of proteins with little variation: 1665–1660 cm⁻¹ for α helices, 1660–1665 cm⁻¹ for a β sheet, and 1665–1680 cm⁻¹ for a random coil. (b) Three-dimensional model of an alpha-helix coil with hydrogen bonds between linked C=O···H-N groups.

Displacing a peptide changes the CO--CO distance R. Taking u_i to be the displacement of the ith peptide group, the local site energy of the ith C=O becomes

\varepsilon_i = \varepsilon + \frac{\partial\varepsilon(\{u_i\})}{\partial R}(x_{i+1} - x_{i-1}) + \cdots

Second, a compression of the helix sets up a longitudinal sound wave along the chain. This wave travels at v_s = R(\kappa/m)^{1/2}, where κ is the bulk modulus or spring constant and m is the mass of an amide group. Assuming the chain to be harmonic,

H_{phonon} = \frac{1}{2m}\sum_i \hat p_i^2 + \frac{\kappa}{2}\sum_i (\hat x_i - \hat x_{i+1})^2

Combining these contributions, the total Hamiltonian takes the familiar form

H = \sum_i \left[\varepsilon + \frac{\partial\varepsilon(\{x_i\})}{\partial R}(\hat x_{i+1} - \hat x_{i-1})\right] |\phi_i\rangle\langle\phi_i| + J \sum_i^N \left(|\phi_i\rangle\langle\phi_{i+1}| + |\phi_{i+1}\rangle\langle\phi_i|\right) + \frac{1}{2m}\sum_i \hat p_i^2 + \frac{\kappa}{2}\sum_i (\hat x_i - \hat x_{i+1})^2    (9.47)

where p_i is the momentum conjugate to x_i. Of course, the \hat x_i and \hat p_i are quantum mechanical operators, and a rigorous solution demands that we take this into account. If we allow that the mass of an amide-I group is much larger than the effective mass of a C=O vibrational exciton, m^* = \hbar^2/2J, then we may safely invoke a classical treatment for the u_i degrees of freedom. Davydov introduces this by making an ansatz for the state vector of the coupled exciton/lattice wave function:

|\psi(t)\rangle = \sum_i \phi_i(t)\, B_i\, \exp\left[-\frac{i}{\hbar}\sum_j \left(u_j(t)\,\hat p_j - \pi_j(t)\,\hat x_j\right)\right]|0\rangle    (9.48)

where |0\rangle is the ground-state vector and B_i = |\phi_i\rangle\langle 0| creates an excitation on site i. Davydov then makes what is essentially the Ehrenfest approximation by writing

u_i(t) = \langle\psi(t)|\hat x_i|\psi(t)\rangle \quad \& \quad \pi_i(t) = \langle\psi(t)|\hat p_i|\psi(t)\rangle    (9.49)

Thus, the fully quantum problem reduces to a mixed quantum/classical problem with two coupled equations:

i\hbar\,\dot\phi_i = \left(\varepsilon + \chi(u_{i+1} - u_{i-1})\right)\phi_i - J\left(\phi_{i-1} + \phi_{i+1}\right)    (9.50)

and

m\ddot u_i - \kappa(u_{i+1} - 2u_i + u_{i-1}) = \chi\left(|\phi_{i-1}|^2 - |\phi_{i+1}|^2\right)    (9.51)

where \phi_i is the quantum mechanical amplitude giving the probability |\phi_i|^2 for finding the ith C=O in its first vibrational excited state. Also, we have introduced χ as the linear coupling between adjacent C=O's. These last two equations are the main results of Davydov's original papers.118–121 Let us next assume that the de Broglie wavelength of the exciton is large compared to R. Thus, we can rewrite these last two equations in continuum form as

i\hbar\frac{\partial\phi}{\partial t} = \left(\varepsilon_o + 2\chi\frac{\partial u}{\partial x}\right)\phi - J\frac{\partial^2\phi}{\partial x^2}    (9.52)

and

\frac{\partial^2 u}{\partial t^2} - \frac{\kappa}{m}\frac{\partial^2 u}{\partial x^2} = \frac{2\chi}{m}\frac{\partial|\phi|^2}{\partial x}    (9.53)

where x now represents a location along the helical spine and \varepsilon_o = \varepsilon - 2J. The left-hand side of the second equation is that of the wave equation. The inhomogeneous term on the right-hand side is a source. What we see, then, is that the quantum mechanical motion of the C=O vibrational exciton acts as a source for the generation of longitudinal sound waves in the α helix.
With this in mind, let us seek traveling wave solutions to Equations 9.52 and 9.53 of the form

u(x,t) = u(x - vt)

where v is the velocity of propagation. Substituting this into Equation 9.53 yields

\frac{\partial u}{\partial x} = -\frac{2\chi}{\kappa(1 - s^2)}\, |\phi(x)|^2

where s = v/v_s < 1 is the ratio of the propagation velocity to the velocity of sound. Introducing this back into Equation 9.52 yields a nonlinear Schrödinger equation,

i\hbar\frac{\partial\phi}{\partial t} = -J\frac{\partial^2\phi}{\partial x^2} + \varepsilon_o\phi - \frac{4\chi^2}{\kappa(1 - s^2)}\, |\phi|^2\phi = H[\phi]\,\phi    (9.54)

This is the source of the nonlinear interaction, in which the solution for the wave function depends upon the wave function itself! Similar nonlinear Schrödinger equations arise in various contexts, typically within the self-consistent field or Hartree approximation to the many-body problem. Here, however, the nonlinear interaction arises because we have a feedback mechanism between the vibrational motion of the helix and the quantum motion of the C=O exciton.
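Equations 9.50 and 9.51 can also be integrated directly in their discrete form. The sketch below is an illustration under stated assumptions (NumPy available; ħ = 1; dimensionless illustrative parameters; periodic boundaries for simplicity, which are not part of the text's model). It propagates φ exactly under the instantaneous exciton Hamiltonian at each step and the lattice with a semi-implicit Euler update, so the exciton norm is conserved to machine precision.

```python
import numpy as np

N, J, chi, kappa, m, eps = 32, 1.0, 0.5, 1.0, 1.0, 0.0   # illustrative, hbar = 1
dt, nsteps = 0.02, 500

# initial exciton amplitude: a narrow normalized Gaussian; lattice at rest
n = np.arange(N)
phi = np.exp(-0.5 * ((n - N / 2) / 2.0) ** 2).astype(complex)
phi /= np.linalg.norm(phi)
u = np.zeros(N)      # lattice displacements
p = np.zeros(N)      # conjugate momenta

for _ in range(nsteps):
    # exciton Hamiltonian at the current lattice configuration, Eq. (9.50)
    diag = eps + chi * (np.roll(u, -1) - np.roll(u, 1))
    H = np.diag(diag) - J * (np.eye(N, k=1) + np.eye(N, k=-1))
    H[0, -1] = H[-1, 0] = -J                     # periodic ring
    w, V = np.linalg.eigh(H)
    phi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ phi))   # exact unitary step
    # classical lattice forces, Eq. (9.51)
    rho = np.abs(phi) ** 2
    F = kappa * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) \
        + chi * (np.roll(rho, 1) - np.roll(rho, -1))
    p += F * dt
    u += (p / m) * dt

assert abs(np.linalg.norm(phi) - 1.0) < 1e-9     # unitary propagation of phi
assert np.all(np.isfinite(u)) and np.all(np.isfinite(p))
```

This mixed quantum/classical scheme is exactly the Ehrenfest-style dynamics implied by Eq. (9.49): the lattice feels the mean-field force from |φ|², and φ evolves in the field of the instantaneous displacements.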

9.3.1 APPROXIMATE GAUSSIAN SOLUTION


As previously, let us make a simple ansatz to the form of the wave function solution
to the stationary (time-independent) Schrödinger equation:

1
e−x /(2a )
2 2
φ(x) =
(aπ)1/4
Electron–Phonon Coupling in Conjugated Systems 273

and use the variational principle to minimize the total energy with respect to the
width a
 √ 
dE d d J 1 2 2χ 2
= φ|H [φ]|φ
= εo + − − √ =0 (9.55)
da da da 2a 2 aπ κ(1 − s 2 )

Note, the 1/2 appearing in front of the term arising from the exciton/lattice interaction
is included to avoid the interaction between the exciton and itself. This produces the
variational estimate of the width of the exciton wave function
a = J²κ²π(s⁴ − 2s² + 1)/(8χ⁴)

Since 1 ≥ 1 − 2s 2 + s 4 ≥ 0 for s < 1, a > 0 for all traveling wave solutions. Let us
take the case where s = 0 to find the energy of a trapped exciton using

a = J²κ²π/(2χ⁴)

Introducing this into the variational estimate of the energy yields

E_ste = εo − 3χ⁴/(π|J|κ²)

9.3.2 EXACT SOLUTION


Zakharov and Shabat122 have found a general solution to the nonlinear Schrödinger
equation. Their solution for the stationary case with s = 0 reads
φ(x) = [χ/√(2κ|J|)] sech[(χ²/(κJ)) x]     (9.56)

The wave function as written here is normalized and gives a self-trapping energy of

E_ste = εo − χ⁴/(κ²|J|)

For a completely rigid helix, κ → ∞ and the self-trapping vanishes and we have
a band of delocalized excitations. It can also be seen from the sign of the trapping
energy that the energy of the trapped exciton will always be below that of a delocalized
exciton, εo , when the coupling between the exciton and the lattice is taken into account.
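This exact result is easy to verify numerically. The following sketch (illustrative dimensionless parameters J = κ = χ = 1, εo = 0, not the physical α-helix values) checks by finite differences that the sech profile of Equation 9.56 is normalized and satisfies the stationary, s = 0 form of Equation 9.54 with E = εo − χ⁴/(κ²|J|):

```python
import numpy as np

# Finite-difference check (a sketch, not part of the derivation) that
#   phi(x) = chi/sqrt(2*kappa*J) * sech((chi**2/(kappa*J)) * x)
# is normalized and solves the stationary (s = 0) nonlinear Schroedinger equation
#   -J*phi'' + eps0*phi - (4*chi**2/kappa)*phi**3 = E*phi
# with E = eps0 - chi**4/(kappa**2 * J).  Parameters are illustrative.
J, kappa, chi, eps0 = 1.0, 1.0, 1.0, 0.0

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
b = chi**2 / (kappa * J)                       # inverse soliton width
phi = chi / np.sqrt(2.0 * kappa * J) / np.cosh(b * x)

d2phi = np.gradient(np.gradient(phi, dx), dx)  # central differences
E = eps0 - chi**4 / (kappa**2 * J)             # self-trapping energy
residual = -J * d2phi + eps0 * phi - (4 * chi**2 / kappa) * phi**3 - E * phi

print(np.sum(phi**2) * dx)        # ~ 1 (normalization)
print(np.abs(residual).max())     # ~ 0, up to finite-difference error
```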

Problem 9.1 Demonstrate that φ(x) ∝ sech(x/a) is a stationary solution of the
nonlinear Schrödinger equation

J φ′′ + κ|φ|²φ = Eφ

What are the normalization, the width a, and the ground-state energy in terms of J and κ?


274 Quantum Dynamics: Applications in Biological and Materials Systems

TABLE 9.1
Parameter Values for Davydov Model from
Lomdahl and Kerr
             κ (N/m)   J (cm⁻¹)   χ (pN)   χ²/κJ   t_o (ps)
Discrete       5.0       20.0      75.0     2.83     19.5
Continuum     13.0       31.2      40.0     0.29     12.1
α helix       13.0        7.8      62.0     1.91     12.1

9.3.3 DISCUSSION: DO SOLITONS EXIST IN REAL α-HELIX SYSTEMS?


Although the formal analysis leading to Davydov’s nonlinear Schrödinger equation
in Equation 9.54 is quite compelling and has led to some interesting and beautiful
analyses in the field of nonlinear dynamics, one must ask at some point whether
such coherent structures actually exist in real biological proteins under physiological
conditions.123 In order to analyze this, we first need to know some specific parame-
ters for the model. The dipole–dipole interaction is straightforward given the dipole
moment of a C=O in a peptide chain. The nonlinear interaction χ is a bit more subtle
and estimates place it between 30 and 62 pN. Table 9.1 lists a set of parameters from
Lomdahl and Kerr's 1985 Physical Review Letters paper.123
In order to include the effects of a thermal environment, Lomdahl and Kerr add a
damping force and noise term to the equations of motion for each amide-I site

Fi = −mγ u̇ i + ηi (t)

where γ is the vibrational relaxation rate, and ηi (t) is a noise term with

⟨η_i(t) η_j(t′)⟩ = δ_ij δ(t − t′) 2mγ k_B T

and ⟨η_i(t)⟩ = 0. In other words, each site is subject to the thermal noise and we assume
there is no correlation between the thermal noise from site to site. This is certainly a
reasonable assumption since it gives the average kinetic energy (per site) as

(m/2) ⟨u̇_i²(t)⟩ = (1/2) k_B T
as expected for a classical oscillator. They conclude that at physiological temper-
atures, random fluctuations in the lattice are too strong and prevent self-trapping.
Similar conclusions were reached by Schweitzer based upon quantum perturbation
theory.124 On the other hand, Förner concludes that Davydov solitons are stable at
300 K in reasonable parameter ranges, but only for special initial conditions close
to the terminal sides of the chain.125 Moreover, it is well known in the biophysi-
cal literature that polyamide in water undergoes a helix-to-coil transition at around
T = 280 K. Consequently, Davydov solitons do not exist in free polyamide chains at
physiological temperatures; however, they may exist in other peptide chains and in
proteins. Their existence remains a theoretical question.
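The fluctuation–dissipation balance built into this noise model can be checked with a short Langevin integration. The sketch below uses a simple Euler–Maruyama scheme with illustrative dimensionless parameters (not the actual protein values) to verify that each site equilibrates to an average kinetic energy of k_B T/2:

```python
import numpy as np

# Sketch: Euler-Maruyama integration of the Langevin force of Lomdahl and Kerr,
#   m dv = -m*gamma*v dt + sqrt(2*m*gamma*kB*T) dW,
# checking that each site thermalizes to <m v^2/2> = kB*T/2.
# Parameters are illustrative (dimensionless units), not the Davydov-model values.
rng = np.random.default_rng(7)
m, gamma, kBT = 1.0, 1.0, 1.0
dt, nsteps, nsites = 1e-3, 200_000, 64

v = np.zeros(nsites)
ke_samples = []
for step in range(nsteps):
    noise = rng.normal(0.0, np.sqrt(2 * m * gamma * kBT * dt), nsites)
    v += -gamma * v * dt + noise / m
    if step > nsteps // 2:                # discard equilibration
        ke_samples.append(0.5 * m * np.mean(v**2))

print(np.mean(ke_samples))                # ~ kBT/2 = 0.5
```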

9.4 VIBRONIC RELAXATION IN CONJUGATED POLYMERS


Our discussion of vibronic relaxation of an exciton in contact with a harmonic lattice
concludes with a brief recapitulation of the theory of the electronic spectra in a molec-
ular system. For the sake of discussion, consider the case of a two-level electronic
system coupled with the nuclear motions of the molecule x. We shall work in a repre-
sentation such that all of our energies are referenced to the electronic ground state of
the molecule and that x represents the distortions of the molecule away from its min-
imum energy geometry. First, consider the variation of the ground-state energy with
the distortions of the molecule away from its equilibrium position. The Hamiltonian
describing the nuclear motions is, thus,
H_vib = Σ_n p_n²/(2m_n) + E_gs(x = 0) + (1/2) Σ_nm (∂²E_gs/∂x_m ∂x_n)|_eq x_m x_n + · · ·     (9.57)

where x_n represents the displacement of the nth atom with mass m_n from its equi-
librium position. We can eliminate the explicit mass dependency by adopting mass-
scaled coordinates, x̃_n = m_n^(1/2) x_n, and momenta. The phonons (or harmonic motions)
of the molecule are obtained by diagonalizing the second derivative matrix (Hessian)
 
h_nm = (∂²E_gs/∂x̃_m ∂x̃_n)|_eq

to obtain the phonon frequencies ωn and normal coordinates qn . Each qn represents


some collective motion in a harmonic well with frequency ωn . As a result, we can write
the vibrational Hamiltonian in terms of phonon creation and annihilation operators,
 
H_vib = Σ_n h̄ω_n (b_n† b_n + 1/2) + E_gs(x = 0)     (9.58)

where the b_n† and b_n operators obey the familiar commutation relation for boson
operators: [b_n, b_m†] = δ_nm. The zero-point energy can be folded into the ground-state
energy by defining the energy origin as

E_0 = E_gs(x = 0) + (1/2) h̄ Σ_n ω_n

The electronic energy curve and corresponding harmonic levels for the ground state
correspond to the lower parabola shown in Figure 9.5a [116].
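The mass-scale-then-diagonalize prescription can be illustrated on a toy two-coordinate "molecule"; the masses and force constants below are arbitrary illustrations rather than values for any real system:

```python
import numpy as np

# Sketch: phonon frequencies from the mass-scaled Hessian of a toy
# two-coordinate potential (arbitrary parameters, not a real molecule)
#   V = (k1/2) x1^2 + (k2/2) x2^2 + (kc/2) (x1 - x2)^2
m = np.array([1.0, 2.0])
k1, k2, kc = 1.0, 1.5, 0.5

# Cartesian Hessian d^2 V / dx_m dx_n
H = np.array([[k1 + kc, -kc],
              [-kc, k2 + kc]])

# mass-scaled Hessian  h_mn = H_mn / sqrt(m_m * m_n)
h = H / np.sqrt(np.outer(m, m))

w2, modes = np.linalg.eigh(h)    # eigenvalues omega_n^2; columns of `modes` are normal modes
omega = np.sqrt(w2)
print(omega)                     # the two normal-mode frequencies
```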
Upon electronic excitation, the electronic density about the nuclei is changed.
Consequently, a molecule in its ground-state equilibrium geometry will be subject
to a potential force causing it to distort toward some new equilibrium geometry. Let
E k (q) be the energy of the kth excited state as computed at nuclear coordinate q and
expand this about the ground-state equilibrium geometry (q = 0):
E_k(q) = E_k(0) + Σ_n (∂E_k(q)/∂q_n) q_n + (1/2) Σ_nm (∂²E_k(q)/∂q_n ∂q_m) q_n q_m + · · ·     (9.59)


FIGURE 9.5 (a) Franck–Condon model in one normal dimension: ω_00 and ω_vert: adiabatic and
vertical transition frequencies, ω_p: vibrational frequency, d_p: interstate distortion in the normal
coordinate. Several of the v0 and 0v vibrational transitions in absorption and emission are
given. (b) Theoretical 1Bu←1Ag absorption bands of all-trans polyenes with n double bonds.
Vertical lines are the positions and intensities of the dominant vibrational features in absorp-
tion of tert-butyl-capped polyenes. Lines give the positions of the absorption and emission
peaks from Ref. 126. (From S. Karabunarliev, M. Baumgarten, E. R. Bittner, and K. Müllen,
J. Chem. Phys. 113, 11372, 2000. Copyright (2000) American Institute of Physics.)

E k (0) is the excitation energy taken at the ground-state equilibrium geometry. We now
make the assumption that the second derivative term is diagonal so that the ground-
state normal modes qn are also normal modes in the excited state. This assumption
does not always hold and one can have the case where one needs to define new
normal modes qn(k) for each electronic state with frequencies ωn(k) . Generally, the
approximation is robust for larger conjugated polymer systems; however, for small
molecules one should be careful in following this prescription. We also shall ignore at
this point any coupling between the electronic states brought about by the geometric
change in the molecule:

E_k(q) ≈ E_k(0) + Σ_n [ (1/2) ω_n² q_n² + f_n^(k) q_n ]
       ≈ E_k(0) + (1/2) Σ_n ω_n² (q_n − d_n^(k))² − (1/2) Σ_n ω_n² (d_n^(k))²     (9.60)

Thus, within the assumptions here, a molecule in its kth excited state feels a linear
force distorting it away from the ground-state geometry toward some new geometry


FIGURE 9.6 Theoretical absorption (solid) and emission (dotted) band shapes for the low-
est electronic transitions in oligo(para-phenylenevinylenes) with n benzene rings. Computed
transition probabilities without line-shape broadening are given for the tetramer as illustrated
(from Ref. 116).

defined by the distortion coordinates:


d_n^(k) = − f_n^(k) / ω_n²     (9.61)
Finally, the quantity
E_ad^(k) = E_k(0) − (1/2) Σ_n ω_n² (d_n^(k))²
is the relaxed (adiabatic) energy of the kth electronic state. Thus, our picture of the
electronic energies of a molecule begins to look like what is shown in Figure 9.5a
where we see a manifold of parabolas, each representing the potential energy surface
for the various electronic states within the harmonic approximation. Once we know
the harmonic frequencies of the phonons ωk , the linear forces f n(k) , and the distortions
dn(k) , we can compute the line-shape function for the absorption or emission of light.
Figures 9.5b and 9.6 give the predicted absorption and emission spectra for a
series of conjugated oligomers. The absorption peaks in Figure 9.5b are compared to
the experimental positions and intensities for all-trans polyacetylene. Aside from a
systematic shift in the position, the agreement between the theoretical Franck–Condon
model and the experimental data generally improves with polymer chain length.

The line shape for transitions between initial and final electronic states (k → k  )
is given by the general expression
F(E) = (1/Z_k) Σ_mn e^(−E_m^(k) β) |⟨k m_k|μ|k′ n_k′⟩|² δ(E_n^(k′) − E_m^(k) − E)     (9.62)

where the sum is over the Boltzmann-weighted initial vibrational state of electronic
state k and all possible final vibrational states of electronic state k  . Z k is the canon-
ical partition function for the vibrations on state k. E m(k) is the vibronic energy of
the mth vibrational level in the kth electronic state. Implicit in this expression is
that we are summing over all the vibrational levels of all the phonon modes. In or-
der to simplify our notation considerably, we shall consider the case where there is
only a single dominant phonon mode. The generalization to a multimode system is
straightforward.116,127 Notice that the line-shape function reduces to the Fermi golden
rule rate in the limit

W = (1/h̄) lim_(E→0) F(E)
To proceed, we make the Condon approximation that electronic transitions occur
within a fixed nuclear framework. This allows us to approximate the transition matrix
element as
⟨k m_k|μ|k′ n_k′⟩ = ⟨k|μ|k′⟩ ⟨m_k|n_k′⟩

The first factor is simply the matrix element of the dipole operator between the initial
and final electronic states. This determines the electronic selection rule and overall
intensity of the transition. We take this to be independent of the vibrational state and
thus pull it out of the summation. The second factor represents the overlap matrix
element between two harmonic oscillator wave functions in displaced harmonic wells.
For clarity and distinction, we label each vibrational state by the electronic state with
which it is associated. Finally, since most spectra are taken from (or to) the ground
electronic state, we assume from here on that one of our electronic states is the ground
state (k = 0). The modulus squared of the vibrational overlap between the m k and
n k  levels is given by
|⟨m_k|n_k′⟩|² = (m_k!/n_k′!) e^(−X) X^(n_k′ − m_k) [L_(m_k)^(n_k′ − m_k)(X)]²     (9.63)
where X = d 2 /2, d is the dimensionless distortion between the minima of state k
and the minima of state k′, and L_a^b(x) is an associated Laguerre polynomial.128 At
low temperatures—that is, where kT < h̄ωn —the vibrational population of the initial
state is concentrated in the lowest lying vibrational level; thus, we can simplify the
Franck–Condon factor to
|⟨0_k|n_k′⟩|² = e^(−X) X^(n_k′) / n_k′!     (9.64)
Finally, we recall the fact that the delta function can be represented as the limiting
case of a Lorentzian
δ(x − x_o) = lim_(ε→0) (1/π) ε/[(x − x_o)² + ε²]     (9.65)


FIGURE 9.7 Vibrational transitions in absorption and emission within the harmonic Condon
approximation. Models of (a) coupling in a double-bond stretching mode (0.2 eV), (b) cou-
pling in a ring-torsional mode (0.01 eV), and (c) coupling in both modes. The curves are the
convolutions for a Lorentzian linewidth of 0.04 eV for the vibrational transitions. The asym-
metry between absorption and emission is a consequence of a minor stiffening of the accepting
vibrational modes for absorption (from Ref. 127).

which is, of course, the line shape for a damped oscillator. Consequently, we can as-
sociate a lifetime τ = 1/γ to each vibrational mode to account for the fact that the
molecule is embedded in a continuum and, thus, write the absorption (or emission)
line shape as

F(ω) = |⟨k|μ|k′⟩|² Σ_(n_k′=0) e^(−X) (X^(n_k′)/n_k′!) (h̄γ/π) / { [E_0^(k′) − E_0^(k) + n_k′ h̄ω_k′ − h̄ω]² + (h̄γ)² }     (9.66)
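In the low-temperature limit the line shape is just the Poisson progression of Equation 9.64 dressed with Lorentzians, which takes only a few lines to evaluate. The Huang–Rhys factor, 0-0 energy, phonon quantum, and linewidth below are illustrative choices, not fits to any particular polymer:

```python
import numpy as np
from math import factorial

# Sketch of the low-temperature line shape: a Poisson Franck-Condon
# progression X^n e^{-X}/n! dressed with Lorentzians of width hgamma.
# All parameters (eV) are illustrative, not fitted to any material.
X = 1.0                  # Huang-Rhys factor, d^2/2
E00 = 2.5                # 0-0 transition energy
hw = 0.2                 # phonon quantum
hgamma = 0.04            # Lorentzian half-width

E = np.linspace(2.0, 4.0, 2000)
F = np.zeros_like(E)
for n in range(12):
    fc = np.exp(-X) * X**n / factorial(n)       # |<0_k|n_k'>|^2
    F += fc * (hgamma / np.pi) / ((E - E00 - n * hw)**2 + hgamma**2)

print(E[np.argmax(F)])   # energy of the strongest vibronic peak
```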

Figure 9.7 shows the vibronic line shapes for a model conjugated polymer. In
Figure 9.7a, only high frequency modes are included in the model. This produces the
familiar symmetric absorption and emission line structure. In Figure 9.7b, only low
frequency phonon modes were included in our model. The absorption and emission
line shapes are nearly symmetric; however, the absorption band is slightly broader than
the emission. Finally, in Figure 9.7c, we include both high and low frequency modes
in our model. Here, the absorption band is significantly broader than the emission
band and the high frequency vibronic fine structure is somewhat washed out. The
emission, on the other hand, exhibits well resolved vibronic fine structure.


FIGURE 9.8 Computed S1←S0 absorption and emission bands of p-polyphenyls with n
phenyl/phenylene rings. Arrows mark the theoretical electronic origins (from Ref. 127).

Figure 9.8 shows the theoretical absorption and emission spectra for various
oligomers of polyphenylenevinylene (OPV) as computed using a semiempirical ap-
proach implemented within the MOPAC code.116 Here we can see a very clear vibronic
progression indicative of the strong vibronic coupling between the π -electronic sys-
tem and the skeletal vibrations. Notice that in the case for n = 4, OPV with four
phenylene rings, we include the contribution from all the modes without imposing any
additional vibronic line broadening. We can see that the main vibronic fine-structure
features are composed of nearly a continuum of finer-grained lines corresponding to
the contributions from the low-frequency modes of the molecule. In this case, the
prominent vibronic progression in the emission and absorption spectra is due to the
C=C stretching modes.
In the absorption spectra, the lines correspond to excitations from the lowest vi-
brational state to all possible vibrational states in the first electronic excited state.
Consequently, all the fine structure is to the blue of the absorption origin (correspond-
ing to the 0-0 vibronic transition). Thus, we can assign the main absorption peaks
as 0-0, 0-1, 0-2, and so forth. The emission spectra, on the other hand, correspond
to transitions from the 0 vibrational level in the upper electronic state to the nth
vibrational level of the lower state. As n increases, the energy gap decreases so all
emission peaks are to the red of the 0-0 line.

 Our modification of the MOPAC code is available upon request.



9.5 SUMMARY
Electron-phonon interactions play a significant role in the electronic states of con-
jugated systems. In this chapter, we have developed an analytic treatment of elec-
tron–phonon coupling via the SSH model, used this approach to study exciton self-
trapping, and finally presented a linear-coupling model that, when combined with a
semiempirical technique (such as MOPAC), can reproduce the vibronic fine structure
of a wide range of conjugated polymer systems.

9.6 PROBLEMS AND EXERCISES


Problem 9.2 Consider a problem for a mixed-valency metal complex Ru^II–Ru^III.
Neglecting the electronic repulsion, we can write a simple two-state model as

H = Hel + Hnuc

where
H_nuc = p_nuc²/(2M) + (1/2) k x²
describes the nuclear motion and
H_el = β (a_1† a_2 + a_2† a_1) + g h̄ω (a_2† a_2 − a_1† a_1) (2Mω/h̄)^(1/2) X
describes the transfer of an electron from atom 1 to atom 2. M, k, β, ω, and g
are the nuclear mass, force constant, tunneling term, frequency ω = √(k/M), and
electron/phonon coupling (dimensionless). X represents the relative displacement
between the two atoms.
between the two atoms.
Define the Born–Oppenheimer (BO) potential surface as

V_BO(X) = V_nucl(X) + ⟨ψ|H_el|ψ⟩

for a given electronic state ψ.


1. Determine the Fock matrix elements for the electrons. This is a 2×2 matrix
with

f_ij = {[a_i, H], a_j†}

The elements of this matrix will depend upon X .


2. Determine the electronic eigenvalues from f . Add these to Vnuc to deter-
mine the two BO potential curves, VBO (X ). Plot these curves using the
parameters β = −0.02 eV, g = 1, ω = 500 cm⁻¹, M = 10,000 m_e. How
do these curves change if β = −2.0 eV?
3. Suppose in describing a mixed valence Ru complex Ru^II–Ru^III, we take
a_1† as creating an electron in a d_xy orbital on the left atom and analogously
for the atom on the right. Assume at time zero the electronic configuration
is Ru^II–Ru^III so that

|ψ_el(t = 0)⟩ = a_1†|0⟩


Using the Franck–Condon principle, calculate the electronic excitation


energy for the parameters in part 2. This type of transition is termed the
intervalence transfer band.
4. Using the Marcus theory of electron transfer, determine the driving force,
the reorganization energy, and nonadiabatic coupling for the parameter
cases you considered above. Comment on which “regime” each parameter
set corresponds to and compute the electron transfer rate at T = 300 K.
5. Finally, using the Franck–Condon principle and Fermi’s golden rule, plot
the electronic absorption and emission spectra (at 300 K) as functions of
photon frequency for the two parameter cases. [Hint: See Equation 9.66
and discussion in the text.]

Problem 9.3 Consider the time evolution of a spin state of an electron in a magnetic
field. Take the unperturbed Hamiltonian to be the Zeeman Hamiltonian with static
magnetic field Bo taken in the z direction:

Ho = γ Bo Ŝ z

where Ŝ_z is the usual spin operator, Ŝ_z|m⟩ = m|m⟩, and γ is the gyromagnetic ratio.
Consider the case where the coupling is a time-varying interaction with a magnetic
field in the x direction so that the perturbing term is

V(t) = γ B_x Ŝ_x cos(Ωt)

where B_x is the perturbing field and Ω the frequency.


1. At t = 0, the electron is prepared in the spin-down state α. What is the
probability of ending up in the spin-up state β as a function of time? Take
Ω = γ B_o and use first-order perturbation theory.
2. Now, let us solve the time-dependent Schrödinger equation exactly. For
this, write the time-evolved α state as

|ψ(t)⟩ = C_α(t)|α⟩ + C_β(t)|β⟩

and write the equations of motion for the coefficients Cα and Cβ . Next, take
the limit that Ω = γ B_o. Carefully consider the time evolution of each term
and neglect all terms that vary as exp(±2iΩt) (rotating wave approximation).
Show that these equations are the same as you derived in the first part.
3. Solve the equations of motion numerically for the case where the driving
term is both on and off resonance with the Zeeman splitting. Make a plot
of the survival probability of the initial spin-up state as a function of time
for the resonant and nonresonant cases. How do your numerical results
compare with the results you derived above?
10 Lattice Models for
Transport and Structure
In Chapter 8 we largely cast our discussion of the electronic structure of π-conjugated
systems upon the idea that the C 2p_z orbitals provided a good basis for the molecular
orbitals. We also made the simplifying assumption that the atom-centered orbitals
were orthogonal to each other. This approximation goes under the moniker of “neglect
of differential overlap” (NDO) in the quantum chemical literature and “tight-binding
approximation” in the solid-state physics literature. In this chapter, we shall continue
along these lines in order to discuss transport and dynamics in extended systems.

10.1 REPRESENTATIONS
10.1.1 BLOCH FUNCTIONS
Consider the Schrödinger equation for a system with a periodic potential (and with
Hamiltonian Ho ) and some external potential U . For U = 0, the system corresponds
to a perfectly periodic system while U represents the contributions from point defects
or impurities within the lattice. Alternatively, U could represent an entirely external
potential, say, from an electric or magnetic field, or the radiation field. While the
physics is different in each case, the underlying mathematical technique used will be
the same. We can expand the wave function in terms of a complete set of plane waves
defined by the periodicity of the underlying system.
For an unbound, free electron in one dimension, V (x) = 0 at all points and we
have the general solution of the Schrödinger equation

ψ(k, r) = a e^(ikr) + b e^(−ikr)

with energy
E(k) = h̄²k²/(2m)
If we force the system to be confined to x ∈ [0, L] as in the case of a particle in a
box, then k can take only discrete values and the eigenstates read

2
ψ(k, r ) = sin(kr )
L
with k = nπ/L and n = 1, 2, 3, . . .. Likewise, for the periodic system

ψ(k, r) = (1/√L) e^(±ikr)
with k = n2π/L. In Figure 10.1 we show E(k) for the free particle, the bound particle,
and the particle on a periodic lattice. For the free particle, all values of k give rise to



FIGURE 10.1 (a) Energy E(k) versus k for a free particle (solid line), a particle on a periodic
lattice (⊕) and a particle in a box ⊗. (b) Energy band for particle on a discrete lattice. The points
indicated by ⊕ represent the eigenenergies for a linear chain (⊗) and a ring (⊕) of 10 atoms.
The shaded region between −π and π defines the first Brillouin zone.

stationary solutions of the Schrödinger equation. However, because of the imposed


boundary conditions for the other two cases, only certain values of k give rise to
stationary solutions.
Let us define for a periodic system the basis function φ j as a localized atomic
orbital centered about atom #j. As in the Hückel model, we consider ε j to be the
energy to put an electron in orbital #j and ti j the energy to transfer an electron (of a
given spin) from orbital #i to orbital #j. Taking all sites to be equivalent and t_ij ≠ 0
only between neighboring sites (i = j ± 1), the stationary Schrödinger equation reads

φ j ε + t(φ j+1 + φ j−1 ) = Eφ j (10.1)

Next, we take advantage of the fact that the system is periodic and write φ_(j+n) =
e^(iknL) φ_j, where n is an integer and L is the spacing between adjacent atoms

φ_j ε + t(e^(+ikL) + e^(−ikL)) φ_j = Eφ_j     (10.2)

This allows us to eliminate φ j from both sides and obtain the energy band

E(k) = ε + 2t cos(k L)

For a finite lattice of atoms, we have the Hückel results we have seen before. In
Figure 10.1b we show the energies for both a chain and a ring of 10 atoms superim-
posed upon the energy curve for the infinite system. The expressions for the energies
were given in Equations 8.18 and 8.45. For the case of a ring of N atoms, the boundary
conditions give stationary solutions only when k L takes integer multiples of 2π/N .

Extending this to an infinite ring, one can easily see that all points k L ∈ [−π, π ]
would give rise to stationary solutions. Going beyond this range, the energies re-
peat themselves and thus we can limit our discussion to energies with wave vectors
k ∈ [−π/L , π/L]. This defines the first Brillouin zone (BZ).
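These statements are easily verified by direct diagonalization. The sketch below builds the 10-site tight-binding Hamiltonians for an open chain and a ring (illustrative ε and t, with L = 1) and compares their eigenvalues with the closed-form Hückel results:

```python
import numpy as np

# Sketch: eigenvalues of a 10-site tight-binding chain and ring compared
# with the band E(k) = eps + 2*t*cos(k*L).  eps and t are illustrative.
N, eps, t = 10, 0.0, -1.0

H_chain = eps * np.eye(N) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
H_ring = H_chain.copy()
H_ring[0, -1] = H_ring[-1, 0] = t          # periodic boundary condition

E_chain = np.linalg.eigvalsh(H_chain)
E_ring = np.linalg.eigvalsh(H_ring)

# Open chain: E_n = eps + 2t cos(n*pi/(N+1)), n = 1..N
n = np.arange(1, N + 1)
E_chain_exact = np.sort(eps + 2 * t * np.cos(n * np.pi / (N + 1)))
print(np.allclose(E_chain, E_chain_exact))   # True

# Ring: E = eps + 2t cos(2*pi*n/N), n = 0..N-1 (kL on the discrete BZ grid)
k = 2 * np.pi * np.arange(N) / N
E_ring_exact = np.sort(eps + 2 * t * np.cos(k))
print(np.allclose(E_ring, E_ring_exact))     # True
```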
We can generalize this idea considerably by defining all basis functions in terms
of complete sets of functions over the first BZ.
Bloch functions are eigenfunctions of Ho
Ho ψn (k, r ) = E n (k)ψn (k, r ) (10.3)
where n is a band index and k is the wave vector or crystal momentum. This represen-
tation is termed the crystal momentum representation (CMR) since it is based upon
states with definite k. These functions are orthonormal:

ψn∗ (k, r )ψm (k  , r )dr = δnm δ(k − k  ) (10.4)

Note that throughout our discussion here, the integral over r is taken over all space.
One can also show that the Bloch functions are complete:

ψn∗ (k, r )ψn (k, r  )dk = δ(r − r  ) (10.5)
n

where the integral is over a single BZ. Since the Bloch functions are a complete set,
we can expand any general wave function in terms of the Bloch functions

ψ(r) = Σ_n ∫ φ_n(k) ψ_n(k, r) dk     (10.6)

where the expansion coefficients describe the wave function in the CMR. Hence,
the Bloch functions can be thought of as the transformation coefficients ψ_n(k, r) =
⟨r|nk⟩ between the CMR and r:

⟨r|nk⟩ = (1/√Ω) e^(ikr)

where Ω is the volume of a unit cell.

10.1.2 WANNIER FUNCTIONS


Wannier functions are essentially spatially localized basis functions that can be derived
from the band structure of an extended system. Quantities such as the exchange
interaction and Coulomb interaction can be easily computed within the atomic orbital
basis; however, there are many known difficulties in computing these within the
crystal momentum representation. Because of this, it is desirable to develop a set of
orthonormal spatially localized functions that can be characterized by a band index
and a lattice site vector, Rμ . These are the Wannier functions, which we shall denote
by an (r − Rμ ) and define in terms of the Bloch functions

a_n(r − R_μ) = [Ω^(1/2)/(2π)^(d/2)] ∫ e^(−ik·R_μ) ψ_nk(r) dk     (10.7)

The integral is over the Brillouin zone with volume V = (2π)^d/Ω, and Ω is the
volume of the unit cell (with d dimensions). A given Wannier function is defined for
each band and for each unit cell. If the unit cell happens to contain multiple atoms,
the Wannier function may be delocalized over multiple atoms. The functions are
orthogonal and complete.
The Wannier functions are not energy eigenfunctions of the Hamiltonian. They
are, however, linear combinations of the Bloch functions with different wave vectors
and therefore different energies. For a perfect crystal, the matrix elements of H in
terms of the Wannier functions are given by
 
∫ a_l*(r − R_ν) H_o a_n(r − R_μ) dr = [Ω/(2π)^d] ∫∫ e^(i(q·R_ν − k·R_μ)) [ ∫ ψ_lq*(r) H_o ψ_nk(r) dr ] dq dk
                                    = 𝓔_n(R_ν − R_μ) δ_nl     (10.8)

where

𝓔_n(R_ν − R_μ) = [Ω/(2π)^d] ∫ e^(ik·(R_ν − R_μ)) E_n(k) dk
Consequently, the Hamiltonian matrix elements in the Wannier representation are
related to the Fourier components of the band structure, E n (k). Therefore, given a
band structure, we can derive the Wannier functions and the single-particle matrix

elements, F_mn.
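As a concrete check of Equation 10.8, the sketch below Fourier-transforms the one-dimensional tight-binding band E(k) = ε + 2t cos(kL) over the first Brillouin zone; the on-site energy is recovered at R = 0, the hopping t at R = ±L, and nothing beyond (ε and t are illustrative):

```python
import numpy as np

# Sketch: in the Wannier representation the Hamiltonian matrix elements are
# Fourier components of the band (Eq. 10.8).  For E(k) = eps + 2*t*cos(k*L)
# they are eps at R = 0, t at R = +/-L, and zero otherwise.
# eps and t are illustrative; L = 1.
eps, t, L = 0.5, -1.0, 1.0

k = np.linspace(-np.pi / L, np.pi / L, 20001)[:-1]   # periodic grid on one BZ
dk = k[1] - k[0]
Ek = eps + 2 * t * np.cos(k * L)

def E_R(R):
    """(L/2pi) * integral over the BZ of exp(i k R) E(k) dk."""
    return (L / (2 * np.pi)) * np.sum(np.exp(1j * k * R) * Ek) * dk

print(abs(E_R(0).real - eps) < 1e-9)       # True: on-site energy
print(abs(E_R(L).real - t) < 1e-9)         # True: nearest-neighbor hopping
print(abs(E_R(2 * L).real) < 1e-9)         # True: no longer-range terms
```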

10.2 STATIONARY STATES ON A LATTICE


For convenience, let us restrict our attention to a single band and consider the
Schrödinger equation for the stationary states in a general periodic potential
 
[ −(h̄²/2m) ∇² + V(r) ] ψ(r) = E ψ(r)     (10.9)
Let us now transform to the CMR using
ψ(r) = (1/√Ω) Σ_k φ_k e^(ikr)     (10.10)

In the CMR, the Schrödinger equation becomes


(h̄²k²/2m) φ_k + Σ_k′ ⟨k|V|k′⟩ φ_k′ = E φ_k     (10.11)

where we see that the kinetic energy term is diagonal in the CMR while the potential
term couples components of the wave function with different values of the crystal
momentum k.
Now, let us assume that the potential can be written as a superposition of core
potentials centered about each atomic site. In other words, we expand

V(r) = Σ_j v(r − r_j)

where v(r ) is a weak pseudopotential specific to a given atom. Using this approxi-
mation, we can evaluate ⟨k|V|k′⟩ by taking advantage of the properties of the Fourier
by taking advantage of the properties of the Fourier
transformation

⟨k|V|k′⟩ = (1/Ω) Σ_j ∫ dr e^(i(k−k′)·r) v(r − r_j)     (10.12)

Swapping the integral and sum,



⟨k|V|k′⟩ = (1/Ω) Σ_j e^(i(k−k′)·r_j) ∫ dr e^(i(k−k′)·(r−r_j)) v(r − r_j)     (10.13)

Now we change the variable of integration from r to r − r j and factor the volume
Ω into Ω = N Ω_o, where N is the number of atoms and Ω_o is the atomic volume

⟨k|V|k′⟩ = (1/N) Σ_j e^(i(k−k′)·r_j) (1/Ω_o) ∫ dr e^(i(k−k′)·r) v(r)
         = S(k′ − k) v_(k′−k)     (10.14)

In this last step we factor the interaction into two terms: a structure factor S(q) and a
form factor v(q), where q = k′ − k. The structure factor is given as the sum over the
atomic positions
S(q) = (1/N) Σ_j e^(−iq·r_j)     (10.15)

and is equivalent to the structure factor obtained from diffraction theory. The second
factor, termed the form factor, is given by

v(q) = (1/Ω_o) ∫ d³r e^(−iq·r) v(r)     (10.16)
Here we have explicitly indicated that the integration is over a three-dimensional
volume d V = d 3r . Since v(r ) is centered about an atomic site, we can expand it in
terms of the spherical harmonics

v(r) = Σ_(l=0)^∞ Σ_(m=−l)^(+l) v_lm(r) Y_lm(θ, φ)     (10.17)

Likewise, the exponential can be expanded as



e^(iq·r) = 4π Σ_(l=0)^∞ Σ_(m=−l)^(+l) i^l Y_lm*(θ_q, φ_q) j_l(qr) Y_lm(θ, φ)     (10.18)

where (θ_q, φ_q) gives the direction of q relative to some z axis and j_l(qr) is a spherical
Bessel function. If we choose the z axis to be along the direction of q, then qx = q y = 0
and qz = q and we can write
eiq·r = eiqr cos θ

It is then straightforward to show that




e^(iq·r) = Σ_(l=0)^∞ i^l (2l + 1) j_l(qr) P_l(cos θ)     (10.19)

Pulling everything together, we obtain


v(q) = (1/Ω_o) Σ_(l=0)^∞ i^l 4π(2l + 1) ∫_0^∞ v_l(r) j_l(qr) r² dr     (10.20)

For the case of a spherically symmetric pseudopotential, vl has only one component
and we arrive at

v(q) = (4π/Ω_o) ∫_0^∞ r² v(r) j_0(qr) dr
     = (4π/Ω_o) ∫_0^∞ r² v(r) [sin(qr)/(qr)] dr     (10.21)
Within the pseudopotential approximation, we simply need to know the pseudopo-
tential for a given atom and the structure factor of the material to determine the band
structure or other properties related to the electronic structure.
One form of the pseudopotential is the so-called empty-core potential, which takes
the form129

v(r) = { 0 for r < r_c;  v_free(r) otherwise }     (10.22)

where v_free is the free-atom potential that may take the form of a screened Coulomb
potential

v_free(r) = −(Z_eff e²/r) e^(−κr)     (10.23)
where Z eff is the effective core charge and κ is a screening length. Hence, we can
evaluate the integral for the form factor as

v(q) = −[4π Z_eff e²/(Ω_o q)] ∫_(r_c)^∞ r² (e^(−κr)/r) (sin(qr)/r) dr     (10.24)
     = −[4π Z_eff e²/(Ω_o (q² + κ²))] cos(qr_c)     (10.25)
In order to determine the form factor for a given atomic species within the empty-
core approximation, we need to adjust the cutoff radius r_c until the eigenenergy of
the core potential matches the energy of the actual atomic species. For example, if
we consider the core potential for the 3s state of the Na atom, we can use Z_eff = 1
and adjust r_c until the lowest energy radial eigenstate of the pseudopotential is equal
to E(3s) = −4.96 eV. Doing so, we find that r_c = 1.037 Å. Formally, we may


FIGURE 10.2 (a) Empty-core pseudopotential and radial wave function for the sodium (Na)
atom. (b) Na pseudopotential form factor.

anticipate that the real purpose of κ is to ensure that the integral has a finite radius
of convergence and that we should take κ → 0. However, within the Thomas–Fermi
statistical approximation, we find that

κ² = 4me²k_F/(πh̄²)     (10.26)
where k F is the Fermi momentum. The empty-core pseudopotential, approximate 3s
radial wave function, and pseudopotential form factor for a Na atom are given in
Figure 10.2. For Na, we use k F = 0.92Å−1 .
The structure factor S(q) can be obtained from scattering data or from simulation.
In general, the structure factor is related to the Fourier transform of the pair distribution
function.130 This gives us an independent way to incorporate the structure of a system
into the calculation of the electronic band structure. The pair distribution is given by
summing over all pairs of atoms in the sample. For an isotropic single-component
system,
$$ g(r) = \frac{V}{N(N-1)}\sum_{i\neq j}\int d^3 r'\,\Big\langle \delta(\mathbf{r}' - \mathbf{r}_i)\,\delta(\mathbf{r}' + \mathbf{r} - \mathbf{r}_j)\Big\rangle $$

The quantity ρg(r )dr gives the “probability” of finding a second atom (or molecule)
at some distance r away, given that there is an atom or molecule at the origin and
ρ = N /V is the density. It is not really a probability since
$$ 4\pi\rho\int_0^\infty g(r)\, r^2\, dr = N - 1 \approx N $$

To be precise, ρg(r )dr gives the number of atoms between r and r + dr about a
central atom. The relation between g(r ) and S(q) is130

$$ S(q) = \rho\int e^{i\mathbf{q}\cdot\mathbf{r}}\,\big(g(r) - 1\big)\, d^3 r $$
290 Quantum Dynamics: Applications in Biological and Materials Systems

For a single chain with interatomic spacing d, the atoms on the chain are located at
distances rn = nd from an atom located at the origin. Thus, it is easy to see that for
a chain of N atoms
$$ S(q) = \frac{1}{N}\sum_{n=0}^{N-1} e^{-iqnd} \tag{10.27} $$
$$ \phantom{S(q)} = \frac{1}{N}\left(1 + x + x^2 + \cdots + x^{N-1}\right) \tag{10.28} $$
$$ \phantom{S(q)} = \frac{1}{N}\,\frac{x^N - 1}{x - 1} \tag{10.29} $$
with $x = \exp(-iqd)$. For a periodic system, $q$ takes integer multiples of $2\pi/(Nd)$. Consequently, the numerator in this last expression is always equal to zero. The denominator also vanishes when $n/N$ is an integer, so that $S(q) = 1$ at values of $q = 2\pi n/d$.
These are termed the lattice wave numbers or reciprocal lattice vectors.
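Eq. (10.27) can be checked directly with a short numerical sketch (the chain length and spacing below are arbitrary choices for illustration):

```python
import numpy as np

# Structure factor of a finite chain, Eq. (10.27): S(q) = (1/N) Σ_n exp(-i q n d).
N, d = 50, 1.0
n = np.arange(N)

def S(q):
    return np.exp(-1j * q * n * d).sum() / N

S_lattice = abs(S(2 * np.pi / d))         # at a reciprocal lattice vector: every term is 1
S_generic = abs(S(0.37 * 2 * np.pi / d))  # generic q: the phases nearly cancel
print(S_lattice, S_generic)
```

At $q = 2\pi/d$ each phase factor equals unity and $S(q) = 1$ exactly; away from the lattice wave numbers, $|S(q)|$ is of order $1/N$.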

10.2.1 A SIMPLE BAND-STRUCTURE CALCULATION


We just found that for the one-dimensional chain the structure factor takes nonvanishing values only at integer multiples of $q = 2\pi/d$. This means that a given Bloch basis function $|k\rangle$ is coupled only to Bloch functions $|k \pm nq\rangle$. Since this only occurs once as we move across the BZ, we can write the Hamiltonian operator in this representation as a $3\times 3$ matrix in which $|k\rangle$ is coupled to the Bloch states $|k+q\rangle$ and $|k-q\rangle$,
$$ H(k) = \begin{pmatrix} \dfrac{\hbar^2 (k+q)^2}{2m} & v(q) & 0 \\[4pt] v(q) & \dfrac{\hbar^2 k^2}{2m} & v(q) \\[4pt] 0 & v(q) & \dfrac{\hbar^2 (k-q)^2}{2m} \end{pmatrix} \tag{10.30} $$
These bands are shown in Figure 10.3 for the case where $v(q) = -2$ (with $m = 1$ and $\hbar = 1$). Note that three bands result, but the highest-energy band is only (nearly) degenerate with the second band at $k = 0$. Consequently, we can approximate $H(k)$ as a $2\times 2$ system when $k > 0$,
$$ H(k>0) \approx \begin{pmatrix} \dfrac{\hbar^2 k^2}{2m} & v(q) \\[4pt] v(q) & \dfrac{\hbar^2 (k-q)^2}{2m} \end{pmatrix} \tag{10.31} $$
Diagonalizing this yields a fairly good approximation for the two lower bands for
k > 0,

$$ E_k = \frac{1}{2}\left(\frac{\hbar^2 k^2}{2m} + \frac{\hbar^2 (k-q)^2}{2m}\right) \pm \sqrt{\frac{1}{4}\left(\frac{\hbar^2 k^2}{2m} - \frac{\hbar^2 (k-q)^2}{2m}\right)^2 + v(q)^2} \tag{10.32} $$


FIGURE 10.3 Bands from Eq. (10.32) for three 1D free-electron states coupled by a pseu-
dopotential with form factor v(q). (a) Three-state model. (b) Approximate two-state model.

For the left side ($k < 0$), we would write a similar $2\times 2$ Hamiltonian except we would couple $k$ to $k + q$. The band for the full BZ would then be approximated by joining these two cases, as shown by the curves in Figure 10.3b. The approximation is robust across each respective half zone. However, once we move too far into the other half of the zone (that is, where $k < 0$ for the right-hand solution or $k > 0$ for the left-hand solution), the approximation breaks down considerably and we do not recover the second avoided crossings. Also, at $k = 0$ the two-state curves cross, whereas in the three-state approximation we have a small gap.
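The three-state model of Eq. (10.30) is trivial to diagonalize numerically. The following sketch (assuming $\hbar = m = 1$ and $d = 1$, so $q = 2\pi$, with $v(q) = -2$ as in Figure 10.3) verifies that a gap of roughly $2|v(q)|$ opens at the zone boundary:

```python
import numpy as np

# Bands of the three-state model, Eq. (10.30), with hbar = m = 1 and d = 1
# (so q = 2*pi), and a form factor v(q) = -2 as in Figure 10.3.
q, v = 2 * np.pi, -2.0

def bands(k):
    H = np.array([[0.5 * (k + q)**2, v,          0.0],
                  [v,                0.5 * k**2, v],
                  [0.0,              v,          0.5 * (k - q)**2]])
    return np.linalg.eigvalsh(H)   # the three bands, sorted

E = bands(np.pi)                   # zone boundary, k = q/2
gap = E[1] - E[0]
print(gap)                         # close to 2|v(q)| = 4
```

Sweeping `bands(k)` over $k \in [-q/2, q/2]$ reproduces panel (a) of Figure 10.3.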

10.3 KRONIG–PENNEY MODEL


A drastic simplification to the core-potential model is where we replace the Coulomb
interaction between the electron and the core atom with a simple rectangular potential
about each atom on the lattice. From Bloch’s theorem, we only need to find a solution
on a single period of the lattice and make sure that it is both continuous and smooth:
ψ(x) = eikx u(x)
where u(x) is smooth and periodic. The function u(x) satisfies
u(x + d) = u(x)
and
u  (x + d) = u  (x)
Since everything is periodic over d, we need only consider the solution in this range,
making sure that the solution and its derivative are both continuous. We have two
regions, region 1 where 0 < x < a and the second where a < x < b. We take
a + b = d to be the lattice spacing. In the first region,
$$ \psi_1'' = -\alpha^2 \psi_1 $$

($\alpha^2 = 2mE/\hbar^2$), which gives
$$ \psi_1 = A_1 e^{i\alpha x} + B_1 e^{-i\alpha x} $$
In region 2,
$$ \psi_2'' = \beta^2 \psi_2 $$
($\beta^2 = 2m(V_o - E)/\hbar^2$, taking $E < V_o$) and the solution
$$ \psi_2 = A_2 e^{\beta x} + B_2 e^{-\beta x} $$

To find $u(x)$ we need to do some manipulation of the wave function in each region:
$$ \psi_1(x) = e^{ikx} u_1(x) = e^{ikx}\left(A_1 e^{i(\alpha - k)x} + B_1 e^{-i(\alpha + k)x}\right) $$
and
$$ \psi_2(x) = e^{ikx} u_2(x) = e^{ikx}\left(A_2 e^{(\beta - ik)x} + B_2 e^{-(\beta + ik)x}\right) $$

Now we are in a position to determine the coefficients and u i (x) in each region. At the
potential steps, the two solutions and their derivatives must match. Thus, at x = 0,

$$ \psi_1(0) = \psi_2(0) \tag{10.33} $$
$$ \psi_1'(0) = \psi_2'(0) \tag{10.34} $$
Likewise,
$$ u_1(a) = u_2(-b) \tag{10.35} $$
$$ u_1'(a) = u_2'(-b) \tag{10.36} $$

These last two conditions enforce the periodicity of the lattice. This leads to the following matrix equation:
$$ \begin{pmatrix} 1 & 1 & -1 & -1 \\ i\alpha & -i\alpha & -\beta & \beta \\ e^{ia(\alpha-k)} & e^{-ia(\alpha+k)} & -e^{-b(\beta-ik)} & -e^{b(\beta+ik)} \\ i(\alpha-k)e^{ia(\alpha-k)} & -i(\alpha+k)e^{-ia(\alpha+k)} & -(\beta-ik)e^{-b(\beta-ik)} & (\beta+ik)e^{b(\beta+ik)} \end{pmatrix}\begin{pmatrix} A_1 \\ B_1 \\ A_2 \\ B_2 \end{pmatrix} = 0 \tag{10.37} $$

Since we must discard the trivial solution $A_1 = B_1 = A_2 = B_2 = 0$, the determinant of the matrix must vanish. This leads to the condition
$$ \cos(kd) = \cosh(\beta b)\cos(\alpha a) - \frac{\alpha^2 - \beta^2}{2\alpha\beta}\,\sinh(\beta b)\sin(\alpha a) $$
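The allowed bands follow from this condition by scanning the energy and keeping only values for which the right-hand side has magnitude $\le 1$ (since $|\cos(kd)| \le 1$). A minimal numerical sketch, assuming $\hbar = m = 1$ and purely illustrative well/barrier parameters $a$, $b$, and $V_o$:

```python
import numpy as np

# Scan the Kronig-Penney condition: an energy E is allowed when |RHS(E)| <= 1.
# hbar = m = 1; the geometry below is an illustrative assumption.
a, b, V0 = 1.0, 0.2, 10.0

def rhs(E):
    alpha = np.sqrt(2 * E)
    beta = np.sqrt(2 * (V0 - E) + 0j)        # complex sqrt handles E > V0 as well
    val = (np.cosh(beta * b) * np.cos(alpha * a)
           - (alpha**2 - beta**2) / (2 * alpha * beta)
           * np.sinh(beta * b) * np.sin(alpha * a))
    return val.real

E = np.linspace(0.01, 30.0, 3000)
allowed = np.abs([rhs(e) for e in E]) <= 1.0
edges = np.flatnonzero(np.diff(allowed.astype(int)))   # band-edge locations
print(E[edges])                                        # allowed bands separated by gaps
```

The complex square root lets the same expression cover $E > V_o$, where $\beta$ becomes imaginary and the hyperbolic functions turn into ordinary trigonometric ones.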

10.4 QUANTUM SCATTERING AND TRANSPORT


In order to study transport in materials, we need to discuss some basic principles
of scattering theory. Our goal is to find an unbound stationary solution of the one-
dimensional Schrödinger equation
 
$$ \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right)\psi(x) = E\psi(x) \tag{10.38} $$

where E is the energy of the scattering particle. Whereas in the bound-state problem,
E is an eigenvalue of the Hamiltonian operator, here E can take any value. The
basic idea is to consider how an incident wave function starting in the distant past is
transformed into an outgoing wave function in the distant future. For convenience,
we place the interaction close to the origin so that in regions to the left and right of the
interaction region, the wave function behaves as a free particle. For a wave function
moving in one dimension, we can write the wave function in the region to the left of
the interaction as
$$ \psi_L = A_L e^{ikx} + B_L e^{-ikx} $$
where $A_L$ and $B_L$ are the incident and reflected amplitudes and

$$ k = \sqrt{2m(E - V(x))}/\hbar $$

is the wave vector. For regions to the right of the interaction we have

$$ \psi_R = A_R e^{ikx} + B_R e^{-ikx} $$

where A R is the amplitude traveling away from the interaction and B R is the amplitude
moving toward the interaction region. In most cases, we consider the case where the
particle starts to the left and moves to the right. So, eventually we shall set B R = 0.
We shall keep it for now. The left and right amplitudes are related via the transfer
matrix T :
     
$$ \begin{pmatrix} A_R \\ B_R \end{pmatrix} = \begin{pmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{pmatrix}\begin{pmatrix} A_L \\ B_L \end{pmatrix} \tag{10.39} $$

We can also write the scattering matrix S as the transformation between the two
incoming and the two outgoing amplitudes
     
$$ \begin{pmatrix} B_L \\ B_R \end{pmatrix} = \begin{pmatrix} r & t \\ t^* & -r \end{pmatrix}\begin{pmatrix} A_L \\ A_R \end{pmatrix} \tag{10.40} $$
where $r$ and $t$ are the reflected and transmitted amplitudes. By normalization, $|r|^2 + |t|^2 = 1$. Typically, in scattering calculations, knowing the $S$ or $T$ matrix is sufficient
for determining the scattering cross-section, quantum transport properties, or reaction
rate for a particle of mass m and a potential V (x).
As an example, let us take the case where the potential is taken to be a series of
discrete steps. The simplest is where V (x) is a step function where V (x < xs ) = 0


FIGURE 10.4 Transmission and reflection probabilities for a (a) single and (b) double-barrier
problem.

and V (x ≥ xs ) = Vo . The wave function and its first derivative must be continuous,
so we need to join the left and right solutions at x = xs :

$$ \psi_L(x_s) = \psi_R(x_s) \tag{10.41} $$
$$ \psi_L'(x_s) = \psi_R'(x_s) \tag{10.42} $$

Thus, we can relate the amplitude coefficients:

$$ A_L e^{ik_L x_s} + B_L e^{-ik_L x_s} = A_R e^{ik_R x_s} + B_R e^{-ik_R x_s} \tag{10.43} $$
$$ ik_L A_L e^{ik_L x_s} - ik_L B_L e^{-ik_L x_s} = ik_R A_R e^{ik_R x_s} - ik_R B_R e^{-ik_R x_s} \tag{10.44} $$
or in matrix form
$$ \begin{pmatrix} e^{ik_L x_s} & e^{-ik_L x_s} \\ ik_L e^{ik_L x_s} & -ik_L e^{-ik_L x_s} \end{pmatrix}\begin{pmatrix} A_L \\ B_L \end{pmatrix} = \begin{pmatrix} e^{ik_R x_s} & e^{-ik_R x_s} \\ ik_R e^{ik_R x_s} & -ik_R e^{-ik_R x_s} \end{pmatrix}\begin{pmatrix} A_R \\ B_R \end{pmatrix} \tag{10.45} $$

or, more compactly,
$$ M[x_s, k_L]\cdot\begin{pmatrix} A_L \\ B_L \end{pmatrix} = M[x_s, k_R]\cdot\begin{pmatrix} A_R \\ B_R \end{pmatrix} \tag{10.46} $$

where $k_L = (2mE)^{1/2}/\hbar$ and $k_R = (2m(E - V_o))^{1/2}/\hbar$ are the wave vectors on either side of the step. Thus, we arrive at the transfer matrix
$$ T[x_s, k_L, k_R] = M^{-1}[x_s, k_R]\cdot M[x_s, k_L] \tag{10.47} $$

To generalize this, let us assume that V (x) can be represented as a series of discrete
steps at x = {x1 , . . . , x N } spanning the interaction region. As we move from left to
right, we can relate the wave-function amplitudes such that after $N$ steps,
$$ \begin{pmatrix} A_R \\ B_R \end{pmatrix} = T[x_N, k_{N-1}, k_N]\, T[x_{N-1}, k_{N-2}, k_{N-1}]\cdots T[x_1, k_L, k_1]\begin{pmatrix} A_L \\ B_L \end{pmatrix} = T_{\rm tot}\begin{pmatrix} A_L \\ B_L \end{pmatrix} = \begin{pmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{pmatrix}\begin{pmatrix} A_L \\ B_L \end{pmatrix} \tag{10.48} $$
where we can construct the total transfer matrix as the product of the intermediate
transfer matrices representing a series of transmitted and reflected amplitudes due to
each change in the potential. Once we have the total transfer matrix, we can relate
the transmitted and reflected wave-function coefficients (A R and B L ) to the incident
amplitude, A L :
$$ B_L = -\frac{t_{21}}{t_{22}}\, A_L \tag{10.49} $$
$$ A_R = \left(t_{11} - \frac{t_{12}\, t_{21}}{t_{22}}\right) A_L \tag{10.50} $$
In Figure 10.4a and 10.4b we show the transmission and reflection probabilities
for scattering past a single barrier (a) and a double barrier (b). In the single-barrier
case, the potential “bump” of Vo = 1 a.u. extends from x = −π/2 to x = +π/2,
whereas in the double barrier, each bump of Vo = 1 a.u. is of width π/2 with a space
of π in between. These are purely toy problems and we have chosen the parameters
to be as simple as possible to illustrate the problem. Moreover, we can easily do this
problem by hand. First, notice that the transmission probability is not a step function at E = 1 a.u., as one would expect for a classically described particle scattering from left to right. This is due to quantum tunneling contributions. Second, notice that for E > 1 a.u. the transmission probability has a slight dip at E ≈ 2 a.u. This is a quasi resonance: it corresponds to the case where the width of the barrier is an integer number of de Broglie wavelengths of the scattering wave. Consequently, the particle is partially reflected.
The double-barrier case can also be handled analytically, again providing a con-
venient check of the transfer matrix calculation. Here we see much more complex
features in the transmission/reflection spectra. First, we have two sharp resonance
features, one at E = 0.21 a.u. and a second at E = 0.84 a.u.
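The single-barrier case is reproduced by a few lines of Python. The sketch below builds the matching matrices of Eq. (10.45), chains them as in Eqs. (10.46)–(10.48), and extracts the transmission from Eq. (10.50). At $E = 3$ a.u. an integer number of half de Broglie wavelengths fits inside the barrier ($k'L = 2\pi$), so the transmission is resonant:

```python
import numpy as np

# Transfer-matrix transmission for the single square barrier of the text:
# V0 = 1 a.u. on (-pi/2, +pi/2), with hbar = m = 1.

def M(x, k):
    """Matching matrix of Eq. (10.45) evaluated at a step located at x."""
    return np.array([[np.exp(1j * k * x),           np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def k_of(E, V):
    return np.sqrt(2 * (E - V) + 0j)   # complex sqrt handles E < V (tunneling)

def transmission(E, steps):
    """steps = [(x_s, V_left, V_right), ...] ordered left to right."""
    Ttot = np.eye(2, dtype=complex)
    for x, Vl, Vr in steps:
        Ttot = np.linalg.inv(M(x, k_of(E, Vr))) @ M(x, k_of(E, Vl)) @ Ttot
    AR = Ttot[0, 0] - Ttot[0, 1] * Ttot[1, 0] / Ttot[1, 1]   # Eq. (10.50), A_L = 1
    kL = k_of(E, steps[0][1]).real
    kR = k_of(E, steps[-1][2]).real
    return (kR / kL) * abs(AR)**2

barrier = [(-np.pi / 2, 0.0, 1.0), (np.pi / 2, 1.0, 0.0)]
T_tunnel = transmission(0.5, barrier)   # E < V0: finite tunneling probability
T_res = transmission(3.0, barrier)      # k'L = 2*pi inside the barrier: resonant
print(T_tunnel, T_res)
```

For $E = 0.5$ a.u. the result agrees with the textbook square-barrier formula $T = [1 + \sinh^2(\kappa L)\,V_o^2/(4E(V_o - E))]^{-1}$; adding a second pair of steps to `steps` produces the double-barrier spectrum of Figure 10.4b.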

10.5 DEFECTS ON LATTICES


Now, let us modify this description a bit by including a defect in the middle of
the chain. Keeping the transfer term t the same for the chain, we can include the
defect by changing the site energy of one or more sites. First, let us consider one
defect, at j = 0, where the site energy is ε0 , and εs is the same everywhere else.
For j = 1, 2, 3, . . ., u j = T eikd j is the transmitted wave. For j = −1, −2, −3, . . .,


FIGURE 10.5 Argand diagram for the transmission and reflection coefficients for the model
double-barrier problem. The loops indicate the presence of resonances.

we have both incident and reflected components: $u_j = I e^{ikdj} + R e^{-ikdj}$. At $j = 0$, we have $u_0 = I + R$. Now, we match boundary conditions. At $j = 0$, we have the requirement that $I + R = T$. We also must satisfy the Schrödinger equation:
$$ T(\varepsilon_s + \delta\varepsilon) + t\left(T e^{ikd} + I e^{-ikd} + R e^{ikd}\right) = ET \tag{10.51} $$
where $\delta\varepsilon = \varepsilon_0 - \varepsilon_s$ is the barrier height. Using $E = \varepsilon_s + 2t\cos(kd)$, we can rearrange this last equation to
$$ t\left(T e^{ikd} + I e^{-ikd} + R e^{ikd} - 2T\cos(kd)\right) = -T\,\delta\varepsilon \tag{10.52} $$
We have too many unknowns: $T$, $I$, $R$. So, we can only specify the solution up to normalization: $T/I$ and $R/I$. For example, we can derive
$$ \frac{T}{I} = \frac{2it\sin(kd)}{2it\sin(kd) + \delta\varepsilon} \tag{10.53} $$
We define the reflection and transmission probabilities as
$$ \mathcal{R} = \left|\frac{R}{I}\right|^2 = \frac{\delta\varepsilon^2}{4t^2\sin^2(kd) + \delta\varepsilon^2} \tag{10.54} $$
and
$$ \mathcal{T} = 1 - \mathcal{R} = \frac{4t^2\sin^2(kd)}{4t^2\sin^2(kd) + \delta\varepsilon^2} \tag{10.55} $$
In the limit that the barrier height is small, $\delta\varepsilon \approx 0$, the reflectivity $\mathcal{R} \to 0$ and the particle is transmitted through the system. We can define the velocity of the particle by taking the derivative of the band energy with respect to the momentum $k$:
$$ v = \frac{1}{\hbar}\frac{dE}{dk} = -\frac{2td}{\hbar}\sin(kd) \tag{10.56} $$


FIGURE 10.6 Transmission and reflection probabilities for a chain with a single defect at the origin (t = 1, δε = 3).

Thus,
$$ \mathcal{R} = \frac{\delta\varepsilon^2}{(\hbar v)^2/d^2 + \delta\varepsilon^2} \tag{10.57} $$
As $\delta\varepsilon^2 \to 0$, we can ignore the $\delta\varepsilon$ in the denominator and write
$$ \mathcal{R} = \frac{\delta\varepsilon^2 d^2}{\hbar^2 v^2} \tag{10.58} $$
For this case, the particle is more or less delocalized over the entire chain and we can say that the probability of finding the particle on any given site is $1/Nd$, where $N$ is the number of sites in the chain. So, the probability of the particle striking the barrier per unit time is $v/Nd$. In this limit the transmission rate is given by
$$ k_T = \frac{v}{Nd}\,\frac{d^2\,\delta\varepsilon^2}{(\hbar v)^2} = \frac{d\,\delta\varepsilon^2}{v N \hbar^2} \tag{10.59} $$
For larger barriers or smaller energies, we have the particle tunneling through the barrier and the transmission is given by
$$ \mathcal{T} = 1 - \mathcal{R} = \frac{4t^2\sin^2(kd)}{4t^2\sin^2(kd) + \delta\varepsilon^2} \tag{10.60} $$
Substituting the definition of velocity from above and taking $\delta\varepsilon$ to be the dominant term in the denominator,
$$ \mathcal{T} = \frac{\hbar^2 v^2}{\delta\varepsilon^2 d^2} \tag{10.61} $$
When the barrier is large compared to the energy, the wave function is more or less a standing wave to the left of the barrier and the particle will strike the barrier at a rate $v/2L$. So, we can define the transmission rate as
$$ k_T = \frac{v}{2L}\,\mathcal{T} = \frac{\hbar^2 v^3}{2L\,\delta\varepsilon^2 d^2} \tag{10.62} $$

To illustrate what is going on, let us consider an electron on a lattice with $\varepsilon_0 = -5$ eV, $\varepsilon_s = -10$ eV, and $t = -2.5$ eV. Rather than specifying the lattice spacing, we can plot (Figure 10.6) all of our results with respect to $kd$, which ranges from 0 to $\pi$. What we see is that the transmission probability peaks at $kd/\pi = 1/2$, giving a case where $\mathcal{R} = \mathcal{T}$.
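Eqs. (10.54)–(10.55) can be evaluated directly. The sketch below uses the parameters just given ($t = -2.5$ eV, $\delta\varepsilon = 5$ eV) and confirms the behavior seen in Figure 10.6:

```python
import numpy as np

# Reflection/transmission for a single site defect, Eqs. (10.54)-(10.55),
# with the parameters used above: t = -2.5 eV, δε = 5 eV.
t, de = -2.5, 5.0

def R(kd):
    return de**2 / (4 * t**2 * np.sin(kd)**2 + de**2)

def T(kd):
    return 1.0 - R(kd)

# At kd = pi/2 we have 4t^2 = δε^2 = 25, so T = R = 1/2 exactly:
print(T(np.pi / 2), R(np.pi / 2))
```

The equality of the two probabilities at the band center is special to this parameter choice: it requires $4t^2 = \delta\varepsilon^2$.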

10.6 MULTIPLE DEFECTS


Now, let us consider a more general case and examine what happens when the sys-
tem has multiple defect sites. For the general case, we need to resort to numerical
approaches and propagate a solution for u j given some initial value. Rearranging the
Schrödinger equation to solve for u j+1 gives

$$ u_{j+1} = \big((E - \varepsilon_j)\,u_j - t\,u_{j-1}\big)/t \tag{10.63} $$

If we have a particle transmitted to the right, then u j = T eikd j . Since we need not
specify the normalization, we can take T = 1 and iteratively determine u j for the
rest of the chain. This is a complex number and we can write it as u j = x j + i y j . We
are going to assume that the site energy to the left and right of the barrier is the same, so $E = \varepsilon_s + 2t\cos(kd)$ gives the dispersion relation between $k$ and $E$ for the two asymptotic regions. If this is the case, we can compute the transmission probability
by comparing u j and u j+1 on the opposite side of the chain (where we have incident
and reflected components). The result is

$$ T = \frac{4\sin^2(kd)}{\big(x_{j+1} - x_j\cos(kd) + y_j\sin(kd)\big)^2 + \big(y_{j+1} - x_j\sin(kd) - y_j\cos(kd)\big)^2} \tag{10.64} $$

Consider the case in which we have two defect sites, one at j = 0 and another at
j = 3 with ε j = εs + 3 eV and t = −1 eV. We can set εs = 0 since it is an arbitrary
zero of energy. The defects are spaced so that there are two nearly bound states in the


FIGURE 10.7 Transmission probability versus energy and kd.



one-dimensional well formed by the defects. We consider here a particle incident from the left and transmitted to the right. Figure 10.7 shows the transmission probability versus energy and $kd$ for this system. We see clearly two maxima in the transmission. These correspond to energies where the scattering energy closely matches the energy of a quasi-bound state within the well. Consequently, these are termed tunneling resonances, and the width of a resonance is proportional to the rate of decay of a particle initially trapped in the well.
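The recursion of Eq. (10.63) is straightforward to iterate. The sketch below seeds a transmitted plane wave on the right of a chain with the two defects ($\delta\varepsilon = 3$, $t = -1$, $\varepsilon_s = 0$, defects three sites apart as in the text), propagates backwards into the left lead, and extracts $T$ by decomposing the left-lead solution into incident and reflected waves using complex arithmetic (equivalent to Eq. (10.64), but less sign-error-prone). The sharp tunneling resonances of this symmetric structure reach $T \approx 1$:

```python
import numpy as np

# Transmission through two defects (spaced three sites apart, δε = 3, t = -1,
# εs = 0) by iterating Eq. (10.63) inward from the transmitted side.
t = -1.0
eps = np.zeros(12)
eps[5] = eps[8] = 3.0                          # the two defect sites

def transmission(kd):
    E = 2 * t * np.cos(kd)                     # lead dispersion
    u = np.zeros(len(eps), dtype=complex)
    u[-1] = np.exp(1j * kd * (len(eps) - 1))   # transmitted wave, amplitude 1
    u[-2] = np.exp(1j * kd * (len(eps) - 2))
    for j in range(len(eps) - 2, 0, -1):       # u_{j-1} = ((E - eps_j)u_j - t u_{j+1})/t
        u[j - 1] = ((E - eps[j]) * u[j] - t * u[j + 1]) / t
    # decompose the left lead into incident + reflected waves to get I:
    I = (u[1] - u[0] * np.exp(-1j * kd)) / (2j * np.sin(kd))
    return 1.0 / abs(I)**2

kds = np.linspace(0.02, np.pi - 0.02, 2000)
T = np.array([transmission(k) for k in kds])
print(T.max())   # resonances reach near-unit transmission
```

Converting $kd$ to energy through the lead dispersion reproduces the two resonance maxima of Figure 10.7.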
A Miscellaneous Results and Constants
A.1 PHYSICAL CONSTANTS AND CONVERSION FACTORS

TABLE A.1
Physical Constants

Constant                   Symbol   SI Value
Speed of light             c        299792458 m/s (exact)
Charge of proton           e        1.6021764 × 10^−19 C
Permittivity of vacuum     ε₀       8.8541878 × 10^−12 J^−1 C^2 m^−1
Avogadro's number          N_A      6.022142 × 10^23 mol^−1
Rest mass of electron      m_e      9.109382 × 10^−31 kg

TABLE A.2
Atomic Units: In Atomic Units, the Following Quantities Are Unity: ħ, e, m_e, a₀

Quantity                   Symbol or Expression          CGS or SI Equivalent
Mass                       m_e                           9.109382 × 10^−31 kg
Charge                     e                             1.6021764 × 10^−19 C
Angular momentum           ħ                             1.05457 × 10^−34 J s
Length (bohr)              a₀ = ħ²/(m_e e²)              0.5291772 × 10^−10 m
Energy (hartree)           E_h = e²/a₀                   4.35974 × 10^−18 J
Time                       t₀ = ħ³/(m_e e⁴)              2.41888 × 10^−17 s
Velocity                   e²/ħ                          2.18770 × 10^6 m/s
Force                      e²/a₀²                        8.23872 × 10^−8 N
Electric field             e/a₀²                         5.14221 × 10^11 V/m
Electric potential         e/a₀                          27.2114 V
Fine structure constant    α = e²/(ħc)                   1/137.036
Magnetic moment            β_e = eħ/(2m_e)               9.27399 × 10^−24 J/T
Permittivity of vacuum     ε₀ = 1/(4π)                   8.8541878 × 10^−12 J^−1 C^2 m^−1
Hydrogen atom IP           −α²m_e c²/2 = −E_h/2          −13.60580 eV


TABLE A.3
Useful Orders of Magnitude

Quantity                     Approximate Value        Exact Value
Electron rest mass           m_e c² ≈ 0.5 MeV         0.511003 MeV
Proton rest mass             m_p c² ≈ 1000 MeV        938.280 MeV
Neutron rest mass            m_n c² ≈ 1000 MeV        939.573 MeV
Proton/electron mass ratio   m_p/m_e ≈ 2000           1836.1515

One electron volt corresponds to:

Quantity                          Symbol/Relation   Exact Value
Frequency: ν ≈ 2.4 × 10^14 Hz     E = hν            2.417970 × 10^14 Hz
Wavelength: λ ≈ 12000 Å           λ = c/ν           12398.52 Å
Wave number: 1/λ ≈ 8000 cm^−1                       8065.48 cm^−1
Temperature: T ≈ 12000 K          E = kT            11604.5 K

A.2 THE DIRAC DELTA FUNCTION


A.2.1 DEFINITION
The Dirac delta function is not really a function, per se; it is a generalized function defined by the relation
$$ f(x_o) = \int_{-\infty}^{+\infty} dx\,\delta(x - x_o)\, f(x) \tag{A.1} $$
The integral picks out the value of $f$ at the point $x_o$, and this relation must hold for any function of $x$. For example, let us take a function that vanishes at some arbitrary point $x_o$. Then the integral becomes
$$ \int_{-\infty}^{+\infty} dx\,\delta(x - x_o)\, f(x) = 0 \tag{A.2} $$

For this to be true for any arbitrary function, we have to conclude that
$$ \delta(x) = 0 \quad \text{for } x \neq 0 \tag{A.3} $$

Furthermore, from the Riemann–Lebesgue theory of integration,
$$ \int f(x)\, g(x)\, dx = \lim_{h\to 0}\sum_n h\, f(x_n)\, g(x_n) \tag{A.4} $$

the only way for the defining relation to hold is for

δ(0) = ∞ (A.5)

This is a very odd function: it is zero everywhere except at one point, at which it is infinite. So it is not a function in the regular sense. In fact, it is more like a distribution

function that is infinitely narrow. If we set f (x) = 1, then we can see that the δ function
is normalized to unity
$$ \int_{-\infty}^{+\infty} dx\,\delta(x - x_o) = 1 \tag{A.6} $$

A.2.2 PROPERTIES
Some useful properties of the δ function are as follows:
1. It is real: δ*(x) = δ(x).
2. It is even: δ(x) = δ(−x).
3. δ(ax) = δ(x)/a for a > 0.
4. ∫ δ′(x) f(x) dx = −f′(0).
5. δ′(−x) = −δ′(x).
6. x δ(x) = 0.
7. δ(x² − a²) = (1/2a)(δ(x + a) + δ(x − a)) for a > 0.
8. f(x) δ(x − a) = f(a) δ(x − a).
9. ∫ δ(x − a) δ(x − b) dx = δ(a − b).

Exercise
Prove the above relations.

A.2.3 SPECTRAL REPRESENTATIONS


The δ function can be thought of as the limit of a sequence of regular functions. For example,
$$ \delta(x) = \lim_{a\to\infty}\frac{1}{\pi}\frac{\sin(ax)}{x} $$
This is the "sinc" function or diffraction function, with a width proportional to $1/a$. For any value of $a$, the function is regular. As we make $a$ larger, the peak grows, the width narrows, and the function focuses about $x = 0$. This is shown in Figure A.1 for increasing values of $a$. Notice that as $a$ increases, the peak increases and the function itself becomes extremely oscillatory.
Another extremely useful representation is the Fourier representation
$$ \delta(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{ikx}\, dk \tag{A.7} $$



FIGURE A.1 sin(xa)/π x representation of the Dirac δ function.

Finally, another form is in terms of Gaussian functions, as shown in Figure A.2:
$$ \delta(x) = \lim_{a\to\infty}\sqrt{\frac{a}{\pi}}\, e^{-ax^2} \tag{A.8} $$
Here the height is proportional to $\sqrt{a}$ and the width to the standard deviation, $1/\sqrt{2a}$.
Other representations include the Lorentzian form,
$$ \delta(x) = \lim_{a\to 0}\frac{1}{\pi}\frac{a}{x^2 + a^2} $$


FIGURE A.2 Gaussian representation of δ function.



and the derivative form
$$ \delta(x) = \frac{d}{dx}\theta(x) $$
where $\theta(x)$ is the Heaviside step function
$$ \theta(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases} \tag{A.9} $$

This can be understood as the cumulative distribution function


$$ \theta(x) = \int_{-\infty}^{x} \delta(y)\, dy \tag{A.10} $$

A.3 SUMMARY OF ESSENTIAL EQUATIONS FROM QUANTUM MECHANICS
A.3.1 QUANTUM UNCERTAINTY RELATIONS
• De Broglie relation: λ = h/ p where p is the particle momentum and λ is
the de Broglie wavelength.
• Planck–Einstein relation: E = hν = h̄ω
• Uncertainty relation (generalized):
$$ \big\langle(\delta A)^2\big\rangle\,\big\langle(\delta B)^2\big\rangle \ge \big|\big\langle i[\hat A, \hat B]\big\rangle\big|^2/4 \tag{A.11} $$
where $\hat A$ and $\hat B$ are operators corresponding to physical observables, with variance $\langle(\delta A)^2\rangle = \langle A^2\rangle - \langle A\rangle^2$ and $\langle\cdots\rangle$ denoting the quantum expectation value.
• Position/momentum: δx · δp ≥ h̄/2. This is also known as the “Heisenberg
uncertainty relation”
• Energy/time: δ E · δt ≥ h̄/2.
• Occupation number/phase: δn · δφ ≥ 1/2

A.3.2 STATES AND WAVE FUNCTIONS

If ψ(x, t) is a quantum wave function,

• Probability density:

P(x, t) = |ψ(x, t)|2 (A.12)

• Probability density current:
$$ \mathbf{j}(x,t) = \frac{\hbar}{2mi}\big(\psi^*(x,t)\nabla\psi(x,t) - \psi(x,t)\nabla\psi^*(x,t)\big) = \frac{1}{m}\,\mathrm{Re}\big(\psi^*\,\hat{\mathbf{p}}\,\psi\big) \tag{A.13} $$
For particles in three dimensions, suitable units are m⁻² s⁻¹.



• Schrödinger equation:
$$ \left(i\hbar\frac{\partial}{\partial t} - \hat H\right)\psi = 0 \tag{A.14} $$
• Normalization:
$$ \int \psi^*\psi\, dx = 1 \tag{A.15} $$

• Superposition principle: If $\psi_n$ is a complete set of eigenfunctions over some finite or infinite range of $x$, then any other function on that range can be represented as
$$ \phi = \sum_n c_n\psi_n \tag{A.16} $$
where
$$ c_n = \int \psi_n^*\phi\, dx \tag{A.17} $$
Such sets of functions are orthonormal if
$$ \int \psi_n^*\psi_m\, dx = \delta_{nm} \tag{A.18} $$

A.3.3 OPERATORS
If  is a quantum mechanical operator and φ and ψ are normalizable functions,
• Hermitian operator: $\hat A$ is Hermitian (self-adjoint) if
$$ \int (\hat A\phi)^*\psi\, dx = \int \phi^*\hat A\psi\, dx \tag{A.19} $$

• Position operator:

x̂ n = x n (A.20)

• Momentum operator:
$$ \hat p^{\,n} = \left(\frac{\hbar}{i}\right)^{\!n}\frac{\partial^n}{\partial x^n} \tag{A.21} $$
• Kinetic energy operator:

$$ \hat T = -\frac{\hbar^2}{2m}\nabla^2 \tag{A.22} $$
• Potential energy operator:

V̂ = V (x̂) (A.23)

• Hamiltonian operator:

Ĥ = T̂ + V̂ (A.24)

for a one-dimensional system, this is written as

$$ \hat H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) \tag{A.25} $$
• Parity operator:

P̂ f (x) = f (−x) (A.26)

• Expectation value:
$$ \langle\hat A\rangle = \langle\psi|\hat A|\psi\rangle = \int \psi^*\hat A\psi\, dx \tag{A.27} $$

• Matrix element:
$$ \langle\phi|\hat A|\psi\rangle = \int \phi^*\hat A\psi\, dx \tag{A.28} $$

• Time evolution (Heisenberg equations of motion):
$$ \frac{d\langle A\rangle}{dt} = \frac{i}{\hbar}\big\langle[\hat H, \hat A]\big\rangle + \left\langle\frac{\partial A}{\partial t}\right\rangle \tag{A.29} $$
• Ehrenfest theorem:
$$ m\frac{d\langle x\rangle}{dt} = \langle p\rangle \quad\text{and}\quad \frac{d\langle p\rangle}{dt} = -\langle\nabla V\rangle \tag{A.30} $$
• Expansion of expectation values in terms of eigenfunctions: If $\psi_n$ is an eigenfunction of $\hat A$ such that $\hat A\psi_n = a_n\psi_n$, then
$$ \langle\phi|\hat A|\phi\rangle = \sum_n |c_n|^2 a_n \tag{A.31} $$
where the $c_n$ are given by
$$ c_n = \int \psi_n^*\phi\, dx \tag{A.32} $$

We interpret |cn |2 as the probability of finding the system in the nth eigenstate.
More precisely, |cn |2 is the likelihood that making a physical observation
described by the quantum mechanical operator  will result in a measurement
of an .
• Dirac notation (bra–ket notation):
– Matrix element:
$$ a_{nm} = \langle\psi_m|\hat A|\psi_n\rangle \tag{A.33} $$
– Ket state vector: $|\psi_n\rangle$
– Bra state vector: $\langle\psi_n|$
– Wave function: $\psi_n(x) = \langle x|\psi_n\rangle$
– Scalar product:
$$ \langle\psi_n|\psi_m\rangle = \int dx\,\psi_n^*(x)\psi_m(x) \tag{A.34} $$

– Resolution of the identity:
$$ I = \sum_n |\psi_n\rangle\langle\psi_n| \tag{A.35} $$
and for a continuous set of states
$$ I = \int dx\, |x\rangle\langle x| \tag{A.36} $$

• Pauli matrices:
$$ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{A.37} \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{A.38} $$
$$ \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{A.39} \qquad \sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \tag{A.40} $$
These matrices have the following properties:
– Anticommutation:
$$ \sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}\sigma_0 \tag{A.41} $$
– Commutation ($i$, $j$, $k$ cyclic):
$$ \sigma_i\sigma_j - \sigma_j\sigma_i = 2i\sigma_k \tag{A.42} $$
– Cyclic permutation of indices:
$$ \sigma_i\sigma_j = i\sigma_k \tag{A.43} $$
$$ \sigma_i\sigma_i = \sigma_0 \tag{A.44} $$

• Boson operators (harmonic oscillator):
– Properties of the creation/annihilation operators:
$$ \hat a|0\rangle = 0 \tag{A.45} $$
$$ \hat a^\dagger|0\rangle = |1\rangle \tag{A.46} $$
$$ \hat a|n\rangle = \sqrt{n}\,|n-1\rangle \tag{A.47} $$
$$ \hat a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle \tag{A.48} $$
$$ \hat n = \hat a^\dagger\hat a \tag{A.49} $$
$$ |n\rangle = \frac{1}{\sqrt{n!}}\,(\hat a^\dagger)^n|0\rangle \tag{A.50} $$
$$ \hat n|n\rangle = n|n\rangle \tag{A.51} $$
$$ \big[\hat a, \hat a^\dagger\big] = 1 \tag{A.52} $$
$$ \langle n-1|\hat a|n\rangle = \sqrt{n} \tag{A.53} $$
$$ \langle n+1|\hat a^\dagger|n\rangle = \sqrt{n+1} \tag{A.54} $$
– Relation to other operators:
$$ \hat a = \hat X + i\hat P \tag{A.55} $$
$$ \hat a^\dagger = \hat X - i\hat P \tag{A.56} $$
$$ H = \hbar\omega(\hat P^2 + \hat X^2) = \hbar\omega(\hat n + 1/2) \tag{A.57} $$
where $\hat X = (m\omega/2\hbar)^{1/2}\hat x$ and $\hat P = (2m\hbar\omega)^{-1/2}\hat p$ are dimensionless position and momentum operators.
– Harmonic oscillator matrix elements:
$$ \langle m|\hat H|n\rangle = \hbar\omega(n + 1/2)\,\delta_{mn} \tag{A.58} $$
$$ \langle n+1|\hat x|n\rangle = \left(\frac{\hbar}{2m\omega}\right)^{1/2}(n+1)^{1/2} \tag{A.59} $$
$$ \langle n-1|\hat x|n\rangle = \left(\frac{\hbar}{2m\omega}\right)^{1/2} n^{1/2} \tag{A.60} $$
$$ \langle n+1|\hat p|n\rangle = i\left(\frac{\hbar m\omega}{2}\right)^{1/2}(n+1)^{1/2} \tag{A.61} $$
$$ \langle n-1|\hat p|n\rangle = -i\left(\frac{\hbar m\omega}{2}\right)^{1/2} n^{1/2} \tag{A.62} $$
• Coordinate representation:
$$ \hat a = \left(\frac{m\omega}{2\hbar}\right)^{1/2} x + \left(\frac{\hbar}{2m\omega}\right)^{1/2}\frac{d}{dx} \tag{A.63} $$
$$ \hat a^\dagger = \left(\frac{m\omega}{2\hbar}\right)^{1/2} x - \left(\frac{\hbar}{2m\omega}\right)^{1/2}\frac{d}{dx} \tag{A.64} $$

• Wave functions: Since $\hat a|0\rangle = 0$, we have
$$ \left[\left(\frac{m\omega}{2\hbar}\right)^{1/2} x + \left(\frac{\hbar}{2m\omega}\right)^{1/2}\frac{d}{dx}\right]\phi_o(x) = 0 \tag{A.65} $$
which by simple integration leads to
$$ \phi_o(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\exp\left(-\frac{m\omega}{2\hbar}x^2\right) \tag{A.66} $$
Thus, any other harmonic oscillator eigenfunction is given by
$$ \phi_n = \frac{1}{(n!)^{1/2}}(\hat a^\dagger)^n\phi_o(x) \tag{A.67} $$
When $\hat a^\dagger$ acts upon a Gaussian, it generates the Hermite polynomials
$$ H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n} e^{-x^2} \tag{A.68} $$
Thus,
$$ \phi_n(y) = \frac{1}{(2^n n!)^{1/2}}\left(\frac{\beta^2}{\pi}\right)^{1/4} H_n(y)\, e^{-y^2/2} \tag{A.69} $$
where $y = \beta x$ and $\beta = (m\omega/\hbar)^{1/2}$.


• Variances and uncertainty product:
$$ \langle x^2\rangle - \langle x\rangle^2 = \frac{\hbar}{2m\omega}(2n+1) \tag{A.70} $$
$$ \langle p^2\rangle - \langle p\rangle^2 = \frac{\hbar m\omega}{2}(2n+1) \tag{A.71} $$
$$ \Delta x\,\Delta p = \hbar(n + 1/2) \tag{A.72} $$
This last relation indicates that the harmonic oscillator ground state carries the minimal amount of uncertainty allowed by the Heisenberg uncertainty principle.
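Relations (A.45)–(A.54) are easy to verify in a truncated number basis, where the annihilation operator is the matrix with $\langle n-1|\hat a|n\rangle = \sqrt{n}$ on the superdiagonal (a standard finite-basis sketch; in any truncation the commutator can equal 1 only away from the cutoff):

```python
import numpy as np

# Truncated number-basis matrices: <n-1|a|n> = sqrt(n) on the superdiagonal.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                   # creation operator
n_op = adag @ a                              # number operator, Eq. (A.49)

comm = a @ adag - adag @ a                   # [a, a†], Eq. (A.52)
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # prints True: identity away from the edge
print(np.allclose(np.diag(n_op), np.arange(N)))     # prints True: eigenvalues 0, 1, ..., N-1
```

Similar one-liners reproduce the matrix elements (A.59)–(A.62) from $\hat x \propto \hat a + \hat a^\dagger$ and $\hat p \propto i(\hat a^\dagger - \hat a)$.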

A.4 MATHEMATICAL SERIES AND INTEGRAL TRANSFORMATIONS


• Fourier series (real form): If $f(x)$ is a periodic function with period $2L$,
$$ f(x) = \frac{a_o}{2} + \sum_{n=1}^{\infty}\big(a_n\cos(n\pi x/L) + b_n\sin(n\pi x/L)\big) \tag{A.73} $$
with
$$ a_n = \frac{1}{L}\int_{-L}^{+L} f(x)\cos(n\pi x/L)\, dx \tag{A.74} $$

and
$$ b_n = \frac{1}{L}\int_{-L}^{+L} f(x)\sin(n\pi x/L)\, dx \tag{A.75} $$

• Complex form:
$$ f(x) = \sum_{n=-\infty}^{+\infty} c_n\, e^{in\pi x/L} \tag{A.76} $$
with
$$ c_n = \frac{1}{2L}\int_{-L}^{+L} dx\, f(x)\, e^{-in\pi x/L} \tag{A.77} $$

• Parseval’s theorem:
 +L ∞

1
d x| f (x)|2 = |cn |2 (A.78)
2L −L n=−∞

• Fourier transform: If f (x) is a function of x, its Fourier transform is F(k).


There are at least three ways one can define this transformation and its inverse.
– Definition #1:
$$ F(k) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i k x}\, dx \tag{A.79} $$
$$ f(x) = \int_{-\infty}^{\infty} F(k)\, e^{+2\pi i k x}\, dk \tag{A.80} $$
– Definition #2:
$$ F(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \tag{A.81} $$
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(k)\, e^{+ikx}\, dk \tag{A.82} $$
– Definition #3:
$$ F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \tag{A.83} $$
$$ f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(k)\, e^{+ikx}\, dk \tag{A.84} $$
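The three conventions differ only in where the factors of $2\pi$ land; for instance, under Definition #2 Rayleigh's theorem picks up a $1/(2\pi)$, while the symmetric Parseval/Rayleigh statements hold as written for Definitions #1 and #3. A quadrature sketch using the Gaussian $f(x) = e^{-x^2}$, whose Definition #2 transform is $\sqrt{\pi}\,e^{-k^2/4}$:

```python
import numpy as np

# Definition #2 applied to f(x) = exp(-x^2); exact transform F(k) = sqrt(pi) exp(-k^2/4).
x = np.linspace(-10.0, 10.0, 2001)
f = np.exp(-x**2)

def F(k):
    return np.trapz(f * np.exp(-1j * k * x), x)    # F(k) = ∫ f(x) e^{-ikx} dx

err = abs(F(1.0) - np.sqrt(np.pi) * np.exp(-0.25))
print(err)                                         # quadrature-level agreement

# Rayleigh's theorem in this convention: ∫|f|^2 dx = (1/2π) ∫|F|^2 dk
k = np.linspace(-15.0, 15.0, 1501)
lhs = np.trapz(f**2, x)
rhs = np.trapz(np.abs([F(kk) for kk in k])**2, k) / (2 * np.pi)
print(lhs, rhs)                                    # both equal sqrt(pi/2)
```

The trapezoid rule is essentially exact here because the integrand and all its derivatives vanish at the endpoints.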

• Fourier transform theorems (↔ denotes a Fourier transform pair):
– Convolution of two functions:
$$ f(x) * g(x) = \int_{-\infty}^{\infty} f(u)\, g(x-u)\, du \tag{A.85} $$
– Convolution rules:
$$ f * g = g * f \tag{A.86} $$
$$ f * (g * h) = (f * g) * h \tag{A.87} $$
– Convolution theorem:
$$ f(x) * g(x) \leftrightarrow F(s)\,G(s) \tag{A.88} $$
– Autocorrelation ($f^*$ denotes the complex conjugate of $f$):
$$ f^*(x) \star f(x) = \int_{-\infty}^{\infty} du\, f^*(u-x)\, f(u) \tag{A.89} $$
– Wiener–Khintchine theorem:
$$ f^*(x) \star f(x) \leftrightarrow |F(s)|^2 \tag{A.90} $$
– Cross-correlation:
$$ f(x) \star g(x) = \int_{-\infty}^{\infty} du\, f^*(u-x)\, g(u) \tag{A.91} $$
– Correlation theorem:
$$ h(x) \star g(x) \leftrightarrow H^*(s)\,G(s) \tag{A.92} $$
– Parseval's relation (also called the power transform):
$$ \int_{-\infty}^{\infty} f(x)\, g^*(x)\, dx = \int_{-\infty}^{\infty} F(s)\, G^*(s)\, ds \tag{A.93} $$
– Parseval's theorem (also called Rayleigh's theorem):
$$ \int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} |F(s)|^2\, ds \tag{A.94} $$
– Derivatives of transform pairs:
$$ \frac{df(x)}{dx} \leftrightarrow 2\pi i s\, F(s) \tag{A.95} $$
$$ \frac{d}{dx}\big(f(x) * g(x)\big) = \frac{df(x)}{dx} * g(x) = f(x) * \frac{dg(x)}{dx} \tag{A.96} $$

• Symmetry relations:

f(x)                  ↔  F(s)
even                  ↔  even
odd                   ↔  odd
real, even            ↔  real, even
real, odd             ↔  imaginary, odd
imaginary, even       ↔  imaginary, even
complex, even         ↔  complex, even
complex, odd          ↔  complex, odd
real, asymmetric      ↔  complex, Hermitian
imaginary, asymmetric ↔  complex, anti-Hermitian

where real: f(x) = f*(x); imaginary: f(x) = −f*(x); even: f(x) = f(−x); odd: f(x) = −f(−x); Hermitian: f(x) = f*(−x); anti-Hermitian: f(x) = −f*(−x).

• Miscellaneous Fourier transform pairs (using Definition #1 from above):
$$ f(x) \leftrightarrow F(s) \tag{A.97} $$
$$ f(ax) \leftrightarrow \frac{1}{|a|}F(s/a) \tag{A.98} $$
$$ f(x-a) \leftrightarrow e^{-2\pi i a s} F(s) \tag{A.99} $$
$$ \delta(x) \leftrightarrow 1 \tag{A.100} $$
$$ \delta(x-a) \leftrightarrow e^{-2\pi i a s} \tag{A.101} $$
$$ \frac{d^n f}{dx^n} \leftrightarrow (2\pi i s)^n F(s) \tag{A.102} $$
$$ e^{-a|x|} \leftrightarrow \frac{2a}{a^2 + 4\pi^2 s^2} \tag{A.103} $$
$$ x\, e^{-a|x|} \leftrightarrow \frac{-8\pi i a s}{(a^2 + 4\pi^2 s^2)^2} \tag{A.104} $$
$$ e^{-x^2/a^2} \leftrightarrow a\sqrt{\pi}\, e^{-\pi^2 a^2 s^2} \tag{A.105} $$
$$ \sin(ax) \leftrightarrow \frac{1}{2i}\left[\delta\!\left(s - \frac{a}{2\pi}\right) - \delta\!\left(s + \frac{a}{2\pi}\right)\right] \tag{A.106} $$
$$ \cos(ax) \leftrightarrow \frac{1}{2}\left[\delta\!\left(s - \frac{a}{2\pi}\right) + \delta\!\left(s + \frac{a}{2\pi}\right)\right] \tag{A.107} $$
$$ \sum_{n=-\infty}^{\infty}\delta(x - na) \leftrightarrow \frac{1}{a}\sum_{n=-\infty}^{\infty}\delta(s - n/a) \tag{A.108} $$

A.5 NUMERICAL APPROXIMATIONS


• Numerical derivatives: The derivative of $f(x)$ at $x$ can be approximated as ($h$ = small interval in $x$)
$$ \frac{df(x)}{dx} \approx \frac{1}{2h}\big(f(x+h) - f(x-h)\big) + O(h^2) \tag{A.109} $$
$$ \phantom{\frac{df(x)}{dx}} \approx \frac{1}{12h}\big(-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)\big) + O(h^4) \tag{A.110} $$
$$ \frac{d^2 f(x)}{dx^2} \approx \frac{1}{h^2}\big(f(x+h) - 2f(x) + f(x-h)\big) + O(h^2) \tag{A.111} $$
$$ \phantom{\frac{d^2 f(x)}{dx^2}} \approx \frac{1}{12h^2}\big(-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)\big) + O(h^4) \tag{A.112} $$
$$ \frac{d^3 f(x)}{dx^3} \approx \frac{1}{2h^3}\big(f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)\big) + O(h^2) \tag{A.113} $$
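A quick order-of-accuracy check of the first two formulas (a sketch using $f(x) = \sin x$ at $x = 1$):

```python
import numpy as np

# Order-of-accuracy check of (A.109) and (A.110) for f(x) = sin(x) at x0 = 1.
f, x0, exact = np.sin, 1.0, np.cos(1.0)

def d1_2pt(h):   # central difference, O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def d1_4pt(h):   # five-point stencil, O(h^4)
    return (-f(x0 + 2*h) + 8*f(x0 + h) - 8*f(x0 - h) + f(x0 - 2*h)) / (12 * h)

e2 = [abs(d1_2pt(h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d1_4pt(h) - exact) for h in (0.1, 0.05)]
print(e2[0] / e2[1], e4[0] / e4[1])   # ~4 and ~16: errors scale as h^2 and h^4
```

Halving $h$ cuts the two-point error by a factor of about 4 and the five-point error by about 16, confirming the quoted error orders.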

• Finding roots of $f(x) = 0$:
– Secant method:
$$ x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} \tag{A.114} $$
– Newton–Raphson method:
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \tag{A.115} $$
– The built-in Mathematica function FindRoot[f[x]==0,{x,x0}] is very convenient for this and works in multiple dimensions.
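Both updates are two-liners in any language; a Python sketch applied to $f(x) = x^2 - 2$, whose positive root is $\sqrt{2}$:

```python
import math

# Secant (A.114) and Newton-Raphson (A.115) for f(x) = x^2 - 2; the root is sqrt(2).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

def secant(x0, x1, nmax=50, tol=1e-14):
    for _ in range(nmax):
        if abs(x1 - x0) < tol or f(x1) == f(x0):   # guard against a zero denominator
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

def newton(x, nmax=50):
    for _ in range(nmax):
        x = x - f(x) / df(x)
    return x

print(secant(1.0, 2.0), newton(1.0))   # both converge to 1.4142135...
```

The secant method needs two starting points but no derivative; Newton–Raphson converges quadratically when $f'$ is available and nonzero near the root.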
Index

α helix, 268
π-electron theories, 242

A
ab initio treatments, 248
acoustic modes, 265
adiabatic approximation, 125
alternant hydrocarbons, 232
Argand diagram, 296

B
B-DNA, 216
betacarotene, 220, 222
Bethe Ansatz, 242
Bloch functions, 283–285
Bloch states, 290
Bohr frequency, 103
bond-charge matrix, 240
Born–Oppenheimer approximation, 207, 235
Brillouin zone, 285

C
CNDO, 241
Condon, 207
Condon approximation, 207
conjugated molecules, 219
correlation functions, use of, 106
Coulomb matrix element, 214
Coulson–Rushbrooke pairing theorem, 232, 233
crystal momentum representation, 285

D
Davydov's soliton, 268
delocalization, 226
density matrix, time-evolution of, 163
diabatic representation, 127
dipole approximation, 114
dipole–dipole approximation, 209
Dirac quantum condition, 149
discrete variable representation, 228
DNA, 215

E
electromagnetic field, 111
electron/phonon coupling, 265
empty-core potential, 288
energy transfer, 206
excitation energy transfer, 203
exciton self-trapping, 264–268

F
Fermi's golden rule, 106
Fock operator, 237–239
free-electron model, 219, 220
Förster radius, 209
Förster theory, 206

G
golden rule, 205

H
harmonic perturbation, 104
Hartree–Fock approximation, 236
Hellmann–Feynman theorem, 125
Hooke's law, atom described by, 115
Hubbard model, 243
Hückel model, 219, 222
Hückel model, justification of, 224
Hückel theory, extended, 226

I
interactions at low light intensity, 113
irreversibility, quantum mechanical system, 203

K
Kronig–Penney model, 291
Kubo identity, 159

L
Landau–Zener approximation, 128
LH1, 213
line–dipole approximation, 211
Liouville superoperator, 163
Liouville–von Neumann equation, 174
longitudinal modes, 265
longitudinal relaxation time, 174

M
magnetic field, comparison to electric field component, 112
Maxwell's relations, 112
mixing angle, 86
motion under linear force, 136
Mott transition, 242

N
nonadiabatic limit, 128
nonlinear Schrödinger equation, 268

O
oscillator strength, 117

P
Pariser–Parr–Pople, 241
perturbation series, 103
phonon modes, 265
polyene chain, 219
power radiated by accelerated charge, 121
Poynting vector, 112
PPP model, 242
protein dynamics, 268
pyrimidines, 215

Q
quantum efficiency, 209

R
Rashba model, 265
reciprocal lattice vectors, 290
residue theorem, 104
rotating wave approximation, 169

S
scattering matrix, 293
Schrödinger representation, 146
self-consistent field approach, 236
Slater-type orbitals (STO), 224–226
statistical mixture, 175
strong interaction, 205
structure factor, 289
Su–Schrieffer–Heeger model, 261

T
Tchebychev polynomials, 227
Thomas–Reiche–Kuhn sum rule, 117
tight-binding approximation, 283
time-dependent Hartree–Fock theory, 240
time-dependent perturbation theory, 102
time-dependent quantum mechanics, 99
time-evolution operator, 100
time-ordering operator, 158
transfer matrix, 293
transfer matrix method, 295
transient nutation, 170
transition density cube, 213
transverse relaxation time, 173
tunneling resonances, 299

U
unrestricted Hartree–Fock, 247

V
variation of energy gap, 221
vector potential, 112
von Neumann entropy, 166

W
Wannier functions, 283, 285, 286
Wigner representation, 189

Z
zero-differential overlap approximation, 241
