
Computational Quantum Physics

Prof. Matthias Troyer ([email protected])


ETH Zürich, SS 2010

Chapter 1
Introduction
1.1 General

For physics students the computational quantum physics course is a recommended prerequisite for any computationally oriented semester thesis, proseminar, master thesis or doctoral thesis.
For computational science and engineering (RW) students the computational quantum physics course is part of the "Vertiefung" in theoretical physics.

1.1.1 Exercises

Programming Languages

Except when a specific programming language or tool is explicitly requested, you are free to choose any programming language you like. Solutions will often be given either as C++ programs or Mathematica notebooks.

Computer Access

The lecture rooms offer both Linux workstations, for which accounts can be requested with the computer support group of the physics department in the HPR building, as well as connections for your notebook computers.

1.1.2 Prerequisites

As a prerequisite for this course we expect knowledge of the following topics. Please contact us if you have any doubts or questions.

Computing

• Basic knowledge of UNIX
• At least one procedural programming language such as C, C++, Pascal, Java or FORTRAN. C++ knowledge is preferred.
• Knowledge of a symbolic mathematics program such as Mathematica or Maple.
• Ability to produce graphical plots.

Numerical Analysis

• Numerical integration and differentiation
• Linear solvers and eigensolvers
• Root solvers and optimization
• Statistical analysis

Quantum Mechanics

Basic knowledge of quantum mechanics, at the level of the quantum mechanics taught to computational scientists, should be sufficient to follow the course. If you feel lost at any point, please ask the lecturer to explain whatever you do not understand. We want you to be able to follow this course without taking an advanced quantum mechanics class.

1.1.3 References

1. J.M. Thijssen, Computational Physics, Cambridge University Press (1999), ISBN 0521575885
2. Nicholas J. Giordano, Computational Physics, Pearson Education (1996), ISBN 0133677230
3. Harvey Gould and Jan Tobochnik, An Introduction to Computer Simulation Methods, 2nd edition, Addison Wesley (1996), ISBN 0201506041
4. Tao Pang, An Introduction to Computational Physics, Cambridge University Press (1997), ISBN 0521485924

1.2 Overview

In this class we will learn how to simulate quantum systems, starting from the simple one-dimensional Schrödinger equation to simulations of interacting quantum many-body problems in condensed matter physics and in quantum field theories. In particular we will study

• the one-body Schrödinger equation and its numerical solution,
• the many-body Schrödinger equation and second quantization,
• approximate solutions to the many-body Schrödinger equation,
• path integrals and quantum Monte Carlo simulations,
• numerically exact solutions to (some) many-body quantum problems,
• some simple quantum field theories.

Chapter 2
Quantum mechanics in one hour

2.1 Introduction

The purpose of this chapter is to refresh your knowledge of quantum mechanics and to establish notation. Depending on your background you might not be familiar with all the material presented here. If that is the case, please ask the lecturers and we will expand the introduction. Those students who are familiar with advanced quantum mechanics are asked to excuse some omissions.

2.2 Basis of quantum mechanics

2.2.1 Wave functions and Hilbert spaces

Quantum mechanics is nothing but simple linear algebra, albeit in huge Hilbert spaces, which makes the problem hard. The foundations are pretty simple though.
A pure state of a quantum system is described by a wave function $|\psi\rangle$, which is an element of a Hilbert space $\mathcal H$:
\[ |\psi\rangle \in \mathcal H \tag{2.1} \]
Usually the wave functions are normalized:
\[ \| |\psi\rangle \| = \sqrt{\langle\psi|\psi\rangle} = 1. \tag{2.2} \]
Here the bra-ket notation
\[ \langle\phi|\psi\rangle \tag{2.3} \]
denotes the scalar product of the two wave functions $|\phi\rangle$ and $|\psi\rangle$.
The simplest example is the spin-1/2 system, describing e.g. the two spin states of an electron. Classically the spin $\vec S$ of the electron (which can be visualized as an internal angular momentum) can point in any direction. In quantum mechanics it is described by a two-dimensional complex Hilbert space $\mathcal H = \mathbb C^2$. A common choice of basis vectors are the "up" and "down" spin states
\[ |\!\uparrow\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \tag{2.4} \]
\[ |\!\downarrow\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{2.5} \]
This is similar to the classical Ising model, but in contrast to a classical Ising spin that can point only either up or down, the quantum spin can exist in any complex superposition
\[ |\psi\rangle = \alpha|\!\uparrow\rangle + \beta|\!\downarrow\rangle \tag{2.6} \]
of the basis states, where the normalization condition (2.2) requires that $|\alpha|^2 + |\beta|^2 = 1$.
For example, as we will see below, the state
\[ |\!\rightarrow\rangle = \frac{1}{\sqrt 2}\left(|\!\uparrow\rangle + |\!\downarrow\rangle\right) \tag{2.7} \]
is a superposition that describes the spin pointing in the positive x-direction.

2.2.2 Mixed states and density matrices

Unless specifically prepared in a pure state in an experiment, quantum systems in Nature rarely exist as pure states but instead as probabilistic superpositions. The most general state of a quantum system is then described as a density matrix $\rho$, with unit trace
\[ \mathrm{Tr}\,\rho = 1. \tag{2.8} \]
The density matrix of a pure state is just the projector onto that state
\[ \rho_{\rm pure} = |\psi\rangle\langle\psi|. \tag{2.9} \]
For example, the density matrix of a spin pointing in the positive x-direction is
\[ \rho_{\rightarrow} = |\!\rightarrow\rangle\langle\rightarrow\!| = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}. \tag{2.10} \]
Instead of being in a coherent superposition of up and down, the system could also be in a probabilistic mixed state, with a 50% probability of pointing up and a 50% probability of pointing down, which would be described by the density matrix
\[ \rho_{\rm mixed} = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}. \tag{2.11} \]

2.2.3 Observables

Any physical observable is represented by a self-adjoint linear operator acting on the Hilbert space, which in a finite-dimensional Hilbert space can be represented by a Hermitian matrix. For our spin-1/2 system, using the basis introduced above, the components $S^x$, $S^y$ and $S^z$ of the spin in the x-, y-, and z-directions are represented by the Pauli matrices
\[ S^x = \frac{\hbar}{2}\sigma^x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{2.12} \]
\[ S^y = \frac{\hbar}{2}\sigma^y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{2.13} \]
\[ S^z = \frac{\hbar}{2}\sigma^z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{2.14} \]
The spin component along an arbitrary unit vector $\vec e$ is the linear superposition of the components, i.e.
\[ \vec e\cdot\vec S = e^x S^x + e^y S^y + e^z S^z = \frac{\hbar}{2}\begin{pmatrix} e^z & e^x - ie^y \\ e^x + ie^y & -e^z \end{pmatrix} \tag{2.15} \]
The fact that these observables do not commute but instead satisfy the non-trivial commutation relations
\[ [S^x, S^y] = S^x S^y - S^y S^x = i\hbar S^z, \tag{2.16} \]
\[ [S^y, S^z] = i\hbar S^x, \tag{2.17} \]
\[ [S^z, S^x] = i\hbar S^y, \tag{2.18} \]
is the root of the differences between classical and quantum mechanics.

2.2.4 The measurement process

The outcome of a measurement in a quantum system is usually intrusive and not deterministic. After measuring an observable $A$, the new wave function of the system will be an eigenvector of $A$, and the outcome of the measurement the corresponding eigenvalue. The state of the system is thus changed by the measurement process!
For example, if we start with a spin pointing up with wave function
\[ |\psi\rangle = |\!\uparrow\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \tag{2.19} \]
or alternatively density matrix
\[ \rho = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \tag{2.20} \]
and we measure the x-component of the spin $S^x$, the resulting measurement will be either $+\hbar/2$ or $-\hbar/2$, depending on whether the spin after the measurement points in the $+x$ or $-x$ direction, and the wave function after the measurement will be either of
\[ |\!\rightarrow\rangle = \frac{1}{\sqrt 2}\left(|\!\uparrow\rangle + |\!\downarrow\rangle\right) = \begin{pmatrix} 1/\sqrt 2 \\ 1/\sqrt 2 \end{pmatrix} \tag{2.21} \]
\[ |\!\leftarrow\rangle = \frac{1}{\sqrt 2}\left(|\!\uparrow\rangle - |\!\downarrow\rangle\right) = \begin{pmatrix} 1/\sqrt 2 \\ -1/\sqrt 2 \end{pmatrix} \tag{2.22} \]
Either of these states will be picked with a probability given by the overlap of the initial wave function with the individual eigenstates:
\[ p_{\rightarrow} = |\langle\rightarrow\!|\psi\rangle|^2 = 1/2 \tag{2.23} \]
\[ p_{\leftarrow} = |\langle\leftarrow\!|\psi\rangle|^2 = 1/2 \tag{2.24} \]
The final state is a probabilistic superposition of these two outcomes, described by the density matrix
\[ \rho' = p_{\rightarrow}|\!\rightarrow\rangle\langle\rightarrow\!| + p_{\leftarrow}|\!\leftarrow\rangle\langle\leftarrow\!| = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}, \tag{2.25} \]
which differs from the initial density matrix $\rho$.
If we are not interested in the result of a particular outcome, but just in the average, the expectation value of the measurement can easily be calculated from a wave function $|\psi\rangle$ as
\[ \langle A\rangle = \langle\psi|A|\psi\rangle \tag{2.26} \]
or from a density matrix $\rho$ as
\[ \langle A\rangle = \mathrm{Tr}(\rho A). \tag{2.27} \]
For pure states with density matrix $\rho = |\psi\rangle\langle\psi|$ the two formulations are identical:
\[ \mathrm{Tr}(\rho A) = \mathrm{Tr}(|\psi\rangle\langle\psi|A) = \langle\psi|A|\psi\rangle. \tag{2.28} \]

2.2.5 The uncertainty relation

If two observables $A$ and $B$ do not commute, $[A,B] \ne 0$, they cannot be measured simultaneously. If $A$ is measured first, the wave function is changed to an eigenstate of $A$, which changes the result of a subsequent measurement of $B$. As a consequence the values of $A$ and $B$ in a state cannot be simultaneously known, which is quantified by the famous Heisenberg uncertainty relation: if two observables $A$ and $B$ do not commute but satisfy
\[ [A, B] = i\hbar \tag{2.29} \]
then the product of the root-mean-square deviations $\Delta A$ and $\Delta B$ of simultaneous measurements of $A$ and $B$ has to be larger than
\[ \Delta A\,\Delta B \ge \hbar/2. \tag{2.30} \]
For more details about the uncertainty relation, the measurement process or the interpretation of quantum mechanics we refer interested students to an advanced quantum mechanics class or textbook.

2.2.6 The Schrödinger equation

The time-dependent Schrödinger equation

After so much introduction the Schrödinger equation is very easy to present. The wave function $|\psi\rangle$ of a quantum system evolves according to
\[ i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle, \tag{2.31} \]
where $H$ is the Hamilton operator. This is just a first order linear differential equation.

The time-independent Schrödinger equation

For a stationary time-independent problem the Schrödinger equation can be simplified. Using the ansatz
\[ |\psi(t)\rangle = \exp(-iEt/\hbar)\,|\psi\rangle, \tag{2.32} \]
where $E$ is the energy of the system, the Schrödinger equation simplifies to a linear eigenvalue problem
\[ H|\psi\rangle = E|\psi\rangle. \tag{2.33} \]
The rest of the semester will be spent solving just this simple eigenvalue problem!

The Schrödinger equation for the density matrix

The time evolution of a density matrix $\rho(t)$ can be derived from the time evolution of pure states, and can be written as
\[ i\hbar\frac{\partial}{\partial t}\rho(t) = [H, \rho(t)]. \tag{2.34} \]
The proof is left as a simple exercise.

2.2.7 The thermal density matrix

Finally we want to describe a physical system not in the ground state but in thermal equilibrium at a given inverse temperature $\beta = 1/k_B T$. In a classical system each microstate $i$ of energy $E_i$ is occupied with a probability given by the Boltzmann distribution
\[ p_i = \frac{1}{Z}\exp(-\beta E_i), \tag{2.35} \]
where the partition function
\[ Z = \sum_i \exp(-\beta E_i) \tag{2.36} \]
normalizes the probabilities.
In a quantum system, if we use a basis of eigenstates $|i\rangle$ with energy $E_i$, the density matrix can be written analogously as
\[ \rho = \frac{1}{Z}\sum_i \exp(-\beta E_i)\,|i\rangle\langle i|. \tag{2.37} \]
For a general basis, which is not necessarily an eigenbasis of the Hamiltonian $H$, the density matrix can be obtained by diagonalizing the Hamiltonian, using the above equation, and transforming back to the original basis. The resulting density matrix is
\[ \rho = \frac{1}{Z}\exp(-\beta H), \tag{2.38} \]
where the partition function now is
\[ Z = \mathrm{Tr}\exp(-\beta H). \tag{2.39} \]
Calculating the thermal average of an observable $A$ in a quantum system is hence formally very easy:
\[ \langle A\rangle = \mathrm{Tr}(A\rho) = \frac{\mathrm{Tr}\,A\exp(-\beta H)}{\mathrm{Tr}\exp(-\beta H)}, \tag{2.40} \]
but actually evaluating this expression is a hard problem.
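In the eigenbasis of $H$ equation (2.40) reduces to the weighted sum $\langle A\rangle = \sum_i A_{ii}e^{-\beta E_i}/\sum_i e^{-\beta E_i}$. The following minimal C++ sketch evaluates this sum; the function name and the shift by the ground state energy (to avoid overflow in the exponentials) are our own choices, not part of the lecture:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Thermal average <A> = sum_i A_ii exp(-beta E_i) / sum_i exp(-beta E_i),
    // given the eigenvalues E_i of H and the diagonal matrix elements
    // A_ii = <i|A|i> of the observable in the eigenbasis of H.
    double thermal_average(const std::vector<double>& E,
                           const std::vector<double>& A_diag, double beta)
    {
        // subtract the ground state energy to avoid overflow of exp(-beta E)
        double E0 = *std::min_element(E.begin(), E.end());
        double Z = 0., num = 0.;
        for (std::size_t i = 0; i < E.size(); ++i) {
            double w = std::exp(-beta * (E[i] - E0));   // Boltzmann weight
            Z   += w;
            num += A_diag[i] * w;
        }
        return num / Z;
    }

The hard part, hidden in this sketch, is obtaining the eigenvalues and matrix elements in the first place; for many-body systems this is precisely what the rest of the course is about.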

2.3 The spin-S problem

Before discussing solutions of the Schrödinger equation we will review two very simple systems: a localized particle with general spin S and a free quantum particle.
In section 2.2.1 we have already seen the Hilbert space and the spin operators for the most common case of a spin-1/2 particle. The algebra of the spin operators given by the commutation relations (2.16)-(2.18) allows not only the two-dimensional representation shown there, but a series of $(2S+1)$-dimensional representations in the Hilbert space $\mathbb C^{2S+1}$ for all integer and half-integer values $S = 0, \frac12, 1, \frac32, 2, \ldots$. The basis states $\{|s\rangle\}$ are usually chosen as eigenstates of the $S^z$ operator
\[ S^z|s\rangle = \hbar s|s\rangle, \tag{2.41} \]
where $s$ can take any value in the range $-S, -S+1, \ldots, S-1, S$. In this basis the $S^z$ operator is diagonal, and the $S^x$ and $S^y$ operators can be constructed from the ladder operators
\[ S^+|s\rangle = \hbar\sqrt{S(S+1) - s(s+1)}\;|s+1\rangle \tag{2.42} \]
\[ S^-|s\rangle = \hbar\sqrt{S(S+1) - s(s-1)}\;|s-1\rangle \tag{2.43} \]
which increment or decrement the $S^z$ value by 1, through
\[ S^x = \frac12\left(S^+ + S^-\right) \tag{2.44} \]
\[ S^y = \frac{1}{2i}\left(S^+ - S^-\right). \tag{2.45} \]
The Hamiltonian of the spin coupled to a magnetic field $\vec h$ is
\[ H = -g\mu_B\,\vec h\cdot\vec S, \tag{2.46} \]
which introduces nontrivial dynamics since the components of $\vec S$ do not commute. As a consequence the spin precesses around the magnetic field direction.
Exercise: Derive the differential equation governing the rotation of a spin starting along the +x-direction and rotating under a field in the +z-direction.
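As a concrete illustration of equations (2.41)-(2.44), the following C++ sketch (our own, in units where $\hbar = 1$, with matrices stored as nested vectors) builds the $(2S+1)$-dimensional matrices of $S^z$, $S^+$ and $S^x$:

    #include <cmath>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // Build the (2S+1)-dimensional spin matrices in units where hbar = 1.
    // twoS = 2S, so twoS = 1 corresponds to spin-1/2.
    // Basis states |s> with s = -S, -S+1, ..., S are indexed by k = s + S.
    void spin_matrices(int twoS, Matrix& Sz, Matrix& Sp, Matrix& Sx)
    {
        int dim = twoS + 1;
        double S = 0.5 * twoS;
        Sz = Sp = Sx = Matrix(dim, std::vector<double>(dim, 0.));
        for (int k = 0; k < dim; ++k) {
            double s = k - S;
            Sz[k][k] = s;                              // eq. (2.41)
            if (k + 1 < dim)                           // <s+1| S+ |s>, eq. (2.42)
                Sp[k + 1][k] = std::sqrt(S * (S + 1) - s * (s + 1));
        }
        // Sx = (S+ + S-)/2 with S- = (S+)^T for these real matrices, eq. (2.44)
        for (int i = 0; i < dim; ++i)
            for (int j = 0; j < dim; ++j)
                Sx[i][j] = 0.5 * (Sp[i][j] + Sp[j][i]);
    }

For twoS = 1 this reproduces the Pauli matrix representation (2.12) and (2.14) up to the factor $\hbar/2$.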

2.4 A quantum particle in free space

Our second example is a single quantum particle in an n-dimensional free space. Its Hilbert space is given by all twice-continuously differentiable complex functions over the real space $\mathbb R^n$. The wave functions $|\psi\rangle$ are complex-valued functions $\psi(\vec x)$ in n-dimensional space. In this representation the operator $\hat x$, measuring the position of the particle, is simple and diagonal
\[ \hat x = \vec x, \tag{2.47} \]
while the momentum operator $\hat p$ becomes a differential operator
\[ \hat p = -i\hbar\vec\nabla. \tag{2.48} \]
These two operators do not commute but their commutator is
\[ [\hat x, \hat p] = i\hbar. \tag{2.49} \]
The Schrödinger equation of a quantum particle in an external potential $V(\vec x)$ can be obtained from the classical Hamilton function by replacing the momentum and position variables by the operators above. Instead of the classical Hamilton function
\[ H(\vec x, \vec p) = \frac{\vec p^{\,2}}{2m} + V(\vec x) \tag{2.50} \]
we use the quantum mechanical Hamiltonian operator
\[ H = \frac{\hat p^2}{2m} + V(\hat x) = -\frac{\hbar^2}{2m}\nabla^2 + V(\vec x), \tag{2.51} \]
which gives the famous form
\[ i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V(\vec x)\psi \tag{2.52} \]
of the one-body Schrödinger equation.

2.4.1 The harmonic oscillator

As a special exactly solvable case let us consider the one-dimensional quantum harmonic oscillator with mass $m$ and potential $\frac{K}{2}x^2$. Defining momentum $\hat p$ and position operators $\hat q$ in units where $m = \hbar = K = 1$, the time-independent Schrödinger equation is given by
\[ H|n\rangle = \frac12\left(\hat p^2 + \hat q^2\right)|n\rangle = E_n|n\rangle. \tag{2.53} \]
Inserting the definition of $\hat p$ we obtain an eigenvalue problem of an ordinary differential equation
\[ -\frac12\phi_n''(q) + \frac12 q^2\phi_n(q) = E_n\phi_n(q), \tag{2.54} \]
whose eigenvalues $E_n = (n + 1/2)$ and eigenfunctions
\[ \phi_n(q) = \frac{1}{\sqrt{2^n n!\sqrt\pi}}\exp\left(-\frac12 q^2\right)H_n(q) \tag{2.55} \]
are known analytically. Here the $H_n$ are the Hermite polynomials and $n = 0, 1, \ldots$.
Using these eigenstates as a basis set we need to find the representation of $\hat q$ and $\hat p$. Performing the integrals
\[ \langle m|\hat q|n\rangle \quad\text{and}\quad \langle m|\hat p|n\rangle \tag{2.56} \]
it turns out that they are nonzero only for $m = n \pm 1$, and they can be written in terms of ladder operators $a$ and $a^\dagger$:
\[ \hat q = \frac{1}{\sqrt 2}\left(a^\dagger + a\right) \tag{2.57} \]
\[ \hat p = \frac{1}{i\sqrt 2}\left(a - a^\dagger\right) \tag{2.58} \]
where the raising and lowering operators $a^\dagger$ and $a$ only have the following nonzero matrix elements:
\[ \langle n+1|a^\dagger|n\rangle = \langle n|a|n+1\rangle = \sqrt{n+1} \tag{2.60} \]
and commutation relations
\[ [a, a] = [a^\dagger, a^\dagger] = 0 \tag{2.61} \]
\[ [a, a^\dagger] = 1. \tag{2.62} \]
It will also be useful to introduce the number operator $\hat n = a^\dagger a$, which is diagonal with eigenvalue $n$:
\[ \hat n|n\rangle = a^\dagger a|n\rangle = \sqrt n\,a^\dagger|n-1\rangle = n|n\rangle. \tag{2.63} \]
To check this representation let us plug the definitions back into the Hamiltonian to obtain
\[ H = \frac12\left(\hat p^2 + \hat q^2\right) = \frac14\left[-\left(a - a^\dagger\right)^2 + \left(a^\dagger + a\right)^2\right] = \frac12\left(a^\dagger a + a a^\dagger\right) = \frac12\left(2a^\dagger a + 1\right) = \hat n + \frac12, \tag{2.64} \]
which has the correct spectrum. In deriving the last lines we have used the commutation relation (2.62).

Chapter 3
The quantum one-body problem

3.1 The time-independent 1D Schrödinger equation

We start the numerical solution of quantum problems with the time-independent one-dimensional Schrödinger equation for a particle with mass $m$ in a potential $V(x)$. In one dimension the Schrödinger equation is just an ordinary differential equation
\[ -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi(x) = E\psi(x). \tag{3.1} \]
We start with simple finite-difference schemes and discretize space into intervals of length $\Delta x$, denoting the space points by
\[ x_n = n\Delta x \tag{3.2} \]
and the wave function at these points by
\[ \psi_n = \psi(x_n). \tag{3.3} \]

3.1.1 The Numerov algorithm

After rewriting the second order differential equation as a coupled system of two first order differential equations, any ODE solver such as the Runge-Kutta method could be applied, but there exist better methods. For the special form
\[ \psi''(x) + k(x)\psi(x) = 0, \tag{3.4} \]
of the Schrödinger equation, with $k(x) = 2m(E - V(x))/\hbar^2$, we can derive the Numerov algorithm by starting from the Taylor expansion of $\psi_n$:
\[ \psi_{n\pm1} = \psi_n \pm \Delta x\,\psi_n' + \frac{\Delta x^2}{2}\psi_n'' \pm \frac{\Delta x^3}{6}\psi_n^{(3)} + \frac{\Delta x^4}{24}\psi_n^{(4)} \pm \frac{\Delta x^5}{120}\psi_n^{(5)} + O(\Delta x^6). \tag{3.5} \]
Adding $\psi_{n+1}$ and $\psi_{n-1}$ we obtain
\[ \psi_{n+1} + \psi_{n-1} = 2\psi_n + (\Delta x)^2\psi_n'' + \frac{(\Delta x)^4}{12}\psi_n^{(4)}. \tag{3.6} \]
Replacing the fourth derivative by a finite-difference second derivative of the second derivatives
\[ \psi_n^{(4)} = \frac{\psi_{n+1}'' - 2\psi_n'' + \psi_{n-1}''}{\Delta x^2} \tag{3.7} \]
and substituting $-k(x)\psi(x)$ for $\psi''(x)$ we obtain the Numerov algorithm
\[ \left(1 + \frac{(\Delta x)^2}{12}k_{n+1}\right)\psi_{n+1} = 2\left(1 - \frac{5(\Delta x)^2}{12}k_n\right)\psi_n - \left(1 + \frac{(\Delta x)^2}{12}k_{n-1}\right)\psi_{n-1} + O(\Delta x^6), \tag{3.8} \]
which is locally of sixth order!

Initial values

To start the Numerov algorithm we need the wave function not just at one but at two initial values, and we will now present several ways to obtain these.
For potentials $V(x)$ with reflection symmetry $V(x) = V(-x)$ the wave functions need to be either even $\psi(x) = \psi(-x)$ or odd $\psi(x) = -\psi(-x)$ under reflection, which can be used to find initial values:

• For the even solution we use a half-integer mesh with mesh points $x_{n+1/2} = (n + 1/2)\Delta x$ and pick initial values $\psi(x_{-1/2}) = \psi(x_{1/2}) = 1$.
• For the odd solution we know that $\psi(0) = -\psi(0)$ and hence $\psi(0) = 0$, specifying the first starting value. Using an integer mesh with mesh points $x_n = n\Delta x$ we pick $\psi(x_1) = 1$ as the second starting value.

In general potentials we need to use other approaches. If the potential vanishes at large distances, $V(x) = 0$ for $|x| \ge a$, we can use the exact solution of the Schrödinger equation at large distances to define starting points, e.g.
\[ \psi(a) = 1 \tag{3.9} \]
\[ \psi(a + \Delta x) = \exp\left(-\Delta x\sqrt{-2mE}/\hbar\right). \tag{3.10} \]
Finally, if the potential never vanishes we need to begin with a single starting value $\psi(x_0)$ and obtain the second starting value $\psi(x_1)$ by performing an integration over the first time step with an Euler or Runge-Kutta algorithm.
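A single step of the recursion (3.8) is easily implemented. The following minimal C++ sketch (our own illustration; the caller supplies $k(x)$, e.g. $k(x) = 2m(E - V(x))/\hbar^2$, together with the two starting values discussed above) fills an array with the $\psi_n$:

    #include <functional>
    #include <vector>

    // Integrate psi'' + k(x) psi = 0 on the mesh x_n = x0 + n*dx using the
    // Numerov algorithm (3.8), given the two starting values psi0 and psi1.
    std::vector<double> numerov(std::function<double(double)> k,
                                double x0, double dx, double psi0, double psi1,
                                int nsteps)
    {
        std::vector<double> psi(nsteps + 1);
        psi[0] = psi0;
        psi[1] = psi1;
        double c = dx * dx / 12.;
        for (int n = 1; n < nsteps; ++n) {
            double km = k(x0 + (n - 1) * dx);   // k_{n-1}
            double kn = k(x0 + n * dx);         // k_n
            double kp = k(x0 + (n + 1) * dx);   // k_{n+1}
            psi[n + 1] = (2. * (1. - 5. * c * kn) * psi[n]
                          - (1. + c * km) * psi[n - 1]) / (1. + c * kp);
        }
        return psi;
    }

Integrating in the $-x$ direction only requires passing a negative dx.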

3.1.2 The one-dimensional scattering problem

The scattering problem is the numerically easiest quantum problem, since solutions exist for all energies $E > 0$ if the potential vanishes at large distances ($V(x) \to 0$ for $|x| \to \infty$). The solution becomes particularly simple if the potential is nonzero only on a finite interval $[0, a]$. For a particle approaching the potential barrier from the left ($x < 0$) we can make the following ansatz for the free propagation when $x < 0$:
\[ \psi_L(x) = A\exp(iqx) + B\exp(-iqx), \tag{3.11} \]
where $A$ is the amplitude of the incoming wave and $B$ the amplitude of the reflected wave. On the right hand side, once the particle has left the region of finite potential ($x > a$), we can again make a free propagation ansatz,
\[ \psi_R(x) = C\exp(iqx). \tag{3.12} \]
The coefficients $A$, $B$ and $C$ have to be determined self-consistently by matching to a numerical solution of the Schrödinger equation in the interval $[0, a]$. This is best done in the following way:

• Set $C = 1$ and use the two points $a$ and $a + \Delta x$ as starting points for a Numerov integration.
• Integrate the Schrödinger equation numerically, backwards in space from $a$ to $0$, using the Numerov algorithm.
• Match the numerical solution of the Schrödinger equation for $x < 0$ to the free propagation ansatz (3.11) to determine $A$ and $B$.

Once $A$ and $B$ have been determined, the reflection and transmission probabilities $R$ and $T$ are given by
\[ R = |B|^2/|A|^2 \tag{3.13} \]
\[ T = 1/|A|^2. \tag{3.14} \]
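Matching at the two leftmost mesh points $x = 0$ and $x = -\Delta x$ gives two linear equations for $A$ and $B$. A minimal sketch of this step (our own; it assumes the Numerov integration was carried out with complex $\psi$, with $C = 1$, continuing one extra step past $x = 0$ into the potential-free region):

    #include <complex>

    using cplx = std::complex<double>;

    // Match psi(0) and psi(-dx), obtained from a complex Numerov integration
    // with C = 1, to the ansatz (3.11) and determine the amplitudes A and B.
    void match_free_ansatz(cplx psi0, cplx psim, double q, double dx,
                           cplx& A, cplx& B)
    {
        const cplx I(0., 1.);
        cplx ep = std::exp(I * q * dx);     // exp(+iq dx)
        cplx em = std::exp(-I * q * dx);    // exp(-iq dx)
        // solve A + B = psi(0), A*em + B*ep = psi(-dx)
        A = (psi0 * ep - psim) / (ep - em);
        B = psi0 - A;
        // reflection and transmission: R = |B|^2/|A|^2, T = 1/|A|^2
    }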

3.1.3 Bound states and solution of the eigenvalue problem

While there exist scattering states for all energies $E > 0$, bound state solutions of the Schrödinger equation with $E < 0$ exist only for discrete energy eigenvalues. Integrating the Schrödinger equation from $-\infty$ to $+\infty$, the solution will diverge to $\pm\infty$ as $x \to \infty$ for almost all values of $E$. These functions cannot be normalized and thus do not constitute solutions to the Schrödinger equation. Only for some special eigenvalues $E$ will the solution go to zero as $x \to \infty$.
A simple eigensolver can be implemented using the following shooting method, where we again will assume that the potential is zero outside an interval $[0, a]$:

• Start with an initial guess $E$.
• Integrate the Schrödinger equation for $\psi_E(x)$ from $x = 0$ to $x_f \gg a$ and determine the value $\psi_E(x_f)$.
• Use a root solver, such as a bisection method (see appendix A.1), to look for an energy $E$ with $\psi_E(x_f) = 0$.

This algorithm is not ideal since the divergence of the wave function for $x \to \infty$ will cause roundoff errors to proliferate.
A better solution is to integrate the Schrödinger equation from both sides towards the center:

• We search for a point $b$ with $V(b) = E$.
• Starting from $x = 0$ we integrate the left hand side solution $\psi_L(x)$ to the chosen point $b$ and obtain $\psi_L(b)$ and a numerical estimate for $\psi_L'(b) = (\psi_L(b) - \psi_L(b - \Delta x))/\Delta x$.
• Starting from $x = a$ we integrate the right hand solution $\psi_R(x)$ down to the same point $b$ and obtain $\psi_R(b)$ and a numerical estimate for $\psi_R'(b) = (\psi_R(b + \Delta x) - \psi_R(b))/\Delta x$.
• At the point $b$ the wave functions and their first two derivatives have to match, since solutions to the Schrödinger equation have to be twice continuously differentiable. Keeping in mind that we can multiply the wave functions by an arbitrary factor $\alpha$, we obtain the conditions
\[ \psi_L(b) = \alpha\psi_R(b) \tag{3.15} \]
\[ \psi_L'(b) = \alpha\psi_R'(b) \tag{3.16} \]
\[ \psi_L''(b) = \alpha\psi_R''(b). \tag{3.17} \]
The last condition is automatically fulfilled since by the choice $V(b) = E$ the Schrödinger equation at $b$ reduces to $\psi''(b) = 0$. The first two conditions can be combined into the condition that the logarithmic derivatives match:
\[ \frac{d\log\psi_L}{dx}\bigg|_{x=b} = \frac{\psi_L'(b)}{\psi_L(b)} = \frac{\psi_R'(b)}{\psi_R(b)} = \frac{d\log\psi_R}{dx}\bigg|_{x=b}. \tag{3.18} \]
• This last equation has to be solved for $E$ in a shooting method, e.g. using a bisection algorithm.
• Finally, at the end of the calculation, normalize the wave function.
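The mismatch of the two logarithmic derivatives in (3.18), viewed as a function of $E$, changes sign at an eigenvalue, so a bisection on $E$ works. A minimal sketch (our own; the function logderiv_mismatch is assumed to perform the two Numerov integrations up to $b$ and return $\psi_L'(b)/\psi_L(b) - \psi_R'(b)/\psi_R(b)$):

    #include <functional>

    // Bisection on E for a sign change of the logarithmic-derivative
    // mismatch of eq. (3.18); [Emin, Emax] must bracket one eigenvalue.
    double find_bound_state(std::function<double(double)> logderiv_mismatch,
                            double Emin, double Emax, double tol = 1e-10)
    {
        double fmin = logderiv_mismatch(Emin);
        while (Emax - Emin > tol) {
            double Emid = 0.5 * (Emin + Emax);
            double fmid = logderiv_mismatch(Emid);
            if (fmin * fmid <= 0.)
                Emax = Emid;                 // root lies in [Emin, Emid]
            else {
                Emin = Emid;                 // root lies in [Emid, Emax]
                fmin = fmid;
            }
        }
        return 0.5 * (Emin + Emax);
    }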

3.2 The time-independent Schrödinger equation in higher dimensions

The time-independent Schrödinger equation in more than one dimension is a partial differential equation and cannot, in general, be solved by a simple ODE solver such as the Numerov algorithm. Before employing a PDE solver we should thus always first try to reduce the problem to a one-dimensional problem. This can be done if the problem factorizes.

3.2.1 Factorization along coordinate axes

A first example is a three-dimensional Schrödinger equation in a cubic box with a potential that is a sum of one-dimensional terms, $V(\vec r) = V_x(x) + V_y(y) + V_z(z)$ with $\vec r = (x, y, z)$. Using the product ansatz
\[ \psi(\vec r) = \psi_x(x)\psi_y(y)\psi_z(z) \tag{3.19} \]
the PDE factorizes into three ODEs which can be solved as above.

3.2.2 Potential with spherical symmetry

Another famous trick is possible for spherically symmetric potentials with $V(\vec r) = V(|\vec r|)$, where an ansatz using spherical harmonics
\[ \psi_{l,m}(\vec r) = \psi_{l,m}(r, \theta, \phi) = \frac{u(r)}{r}\,Y_{lm}(\theta, \phi) \tag{3.20} \]
can be used to reduce the three-dimensional Schrödinger equation to a one-dimensional one for the radial wave function $u(r)$:
\[ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dr^2} + \frac{\hbar^2 l(l+1)}{2mr^2} + V(r)\right]u(r) = Eu(r) \tag{3.21} \]
in the interval $[0, \infty[$. Given the singular character of the potential for $r \to 0$, a numerical integration should start at large distances $r$ and integrate towards $r = 0$, so that the largest errors are accumulated only at the last steps of the integration.
In the exercises we will solve a three-dimensional scattering problem and calculate the scattering length for two atoms.

3.2.3 Finite difference methods

The simplest solvers for partial differential equations, the finite difference solvers, can also be used for the Schrödinger equation. Replacing differentials by differences we convert the Schrödinger equation into a system of coupled linear equations. Starting from the three-dimensional Schrödinger equation (we set $\hbar = 1$ from now on)
\[ -\nabla^2\psi(\vec x) + 2m\left(V(\vec x) - E\right)\psi(\vec x) = 0, \tag{3.22} \]
we discretize space and obtain the system of linear equations
\[ -\frac{1}{\Delta x^2}\big[\psi(x_{n+1}, y_n, z_n) + \psi(x_{n-1}, y_n, z_n) + \psi(x_n, y_{n+1}, z_n) + \psi(x_n, y_{n-1}, z_n) + \psi(x_n, y_n, z_{n+1}) + \psi(x_n, y_n, z_{n-1})\big] + \left[\frac{6}{\Delta x^2} + 2m\left(V(\vec x) - E\right)\right]\psi(x_n, y_n, z_n) = 0. \tag{3.23} \]
For the scattering problem a linear equation solver can now be used to solve the system of equations. For small linear problems Mathematica can be used, or the dsysv function of the LAPACK library. For larger problems it is essential to realize that the matrices produced by the discretization of the Schrödinger equation are usually very sparse, meaning that only $O(N)$ of the $N^2$ matrix elements are nonzero. For these sparse systems of equations, optimized iterative numerical algorithms exist¹ and are implemented in numerical libraries such as in the ITL library.²

¹ R. Barrett, M. Berry, T.F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (SIAM, 1993).
² J.G. Siek, A. Lumsdaine and Lie-Quan Lee, Generic Programming for High Performance Numerical Linear Algebra, in Proceedings of the SIAM Workshop on Object Oriented Methods for Inter-operable Scientific and Engineering Computing (OO98) (SIAM, 1998); the library is available on the web at: http://www.osl.iu.edu/research/itl/

To calculate bound states, an eigenvalue problem has to be solved. For small problems, where the full matrix can be stored in memory, Mathematica or the dsyev eigensolver in the LAPACK library can be used. For bigger systems, sparse solvers such as the Lanczos algorithm (see appendix A.2) are best. Again there exist efficient implementations³ of iterative algorithms for sparse matrices.⁴

³ http://www.comp-phys.org/software/ietl/
⁴ Z. Bai, J. Demmel and J. Dongarra (Eds.), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide (SIAM, 2000).
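Iterative solvers such as the Lanczos algorithm only require the action of the Hamiltonian on a vector, so the sparse matrix of (3.23) never needs to be stored explicitly. A minimal matrix-free sketch (our own illustration; $\psi$ is stored as a linear array with index $x + L(y + Lz)$, hard-wall boundary conditions, $\hbar = 1$):

    #include <vector>

    // Apply H psi for the discretized Hamiltonian of eq. (3.23) on an L^3
    // grid, with H = -nabla^2/(2m) + V and hard-wall boundary conditions.
    std::vector<double> apply_H(const std::vector<double>& psi,
                                const std::vector<double>& V,
                                int L, double dx, double m)
    {
        std::vector<double> out(psi.size(), 0.);
        double w = 1. / (dx * dx);
        auto idx = [L](int x, int y, int z) { return x + L * (y + L * z); };
        for (int z = 0; z < L; ++z)
            for (int y = 0; y < L; ++y)
                for (int x = 0; x < L; ++x) {
                    int n = idx(x, y, z);
                    double nb = 0.;                      // sum over neighbors
                    if (x > 0)     nb += psi[idx(x - 1, y, z)];
                    if (x < L - 1) nb += psi[idx(x + 1, y, z)];
                    if (y > 0)     nb += psi[idx(x, y - 1, z)];
                    if (y < L - 1) nb += psi[idx(x, y + 1, z)];
                    if (z > 0)     nb += psi[idx(x, y, z - 1)];
                    if (z < L - 1) nb += psi[idx(x, y, z + 1)];
                    // discretized -nabla^2/(2m) plus the potential term
                    out[n] = (6. * psi[n] - nb) * w / (2. * m) + V[n] * psi[n];
                }
        return out;
    }

The storage cost is $O(N)$ instead of $O(N^2)$, which is what makes grids with millions of points feasible.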

3.2.4 Variational solutions using a finite basis set

In the case of general potentials, or for more than two particles, it will not be possible to reduce the Schrödinger equation to a one-dimensional problem and we need to employ a PDE solver. One approach will again be to discretize the Schrödinger equation on a discrete mesh using a finite difference approximation. A better solution is to expand the wave functions in terms of a finite set of basis functions
\[ |\psi\rangle = \sum_{i=1}^N a_i|u_i\rangle. \tag{3.24} \]
To estimate the ground state energy we want to minimize the energy of the variational wave function
\[ E^* = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}. \tag{3.25} \]
Keep in mind that, since we only chose a finite basis set $\{|u_i\rangle\}$, the variational estimate $E^*$ will always be larger than the true ground state energy $E_0$, but will converge towards $E_0$ as the size of the basis set is increased, e.g. by reducing the mesh size in a finite element basis.
To perform the minimization we denote by
\[ H_{ij} = \langle u_i|H|u_j\rangle = \int d\vec r\; u_i^*(\vec r)\left(-\frac{\hbar^2}{2m}\nabla^2 + V\right)u_j(\vec r) \tag{3.26} \]
the matrix elements of the Hamilton operator $H$ and by
\[ S_{ij} = \langle u_i|u_j\rangle = \int d\vec r\; u_i^*(\vec r)u_j(\vec r) \tag{3.27} \]
the overlap matrix. Note that for an orthogonal basis set, $S_{ij}$ is the identity matrix $\delta_{ij}$.
Minimizing equation (3.25) we obtain a generalized eigenvalue problem
\[ \sum_j H_{ij}a_j = E\sum_k S_{ik}a_k, \tag{3.28} \]
or in a compact notation with $\vec a = (a_1, \ldots, a_N)$
\[ H\vec a = ES\vec a. \tag{3.29} \]
If the basis set is orthogonal this reduces to an ordinary eigenvalue problem and we can use the Lanczos algorithm.
In the general case we have to find a matrix $U$ such that $U^T SU$ is the identity matrix. Introducing a new vector $\vec b = U^{-1}\vec a$, we can then rearrange the problem into
\[ H\vec a = ES\vec a \;\Rightarrow\; HU\vec b = ESU\vec b \;\Rightarrow\; U^T HU\vec b = EU^T SU\vec b = E\vec b \tag{3.30} \]
and we end up with a standard eigenvalue problem for $U^T HU$. Mathematica and LAPACK both contain eigensolvers for such generalized eigenvalue problems.
Example: the anharmonic oscillator

The final issue is the choice of basis functions. It is advantageous to make use of known solutions to a similar problem, as we will illustrate in the case of an anharmonic oscillator with Hamilton operator
\[ H = H_0 + \lambda q^4, \qquad H_0 = \frac12\left(p^2 + q^2\right), \tag{3.31} \]
where the harmonic oscillator $H_0$ was already discussed in section 2.4.1. It makes sense to use the $N$ lowest harmonic oscillator eigenvectors $|n\rangle$ as basis states of a finite basis and write the Hamiltonian as
\[ H = \frac12 + \hat n + \lambda q^4 = \frac12 + \hat n + \frac{\lambda}{4}\left(a^\dagger + a\right)^4. \tag{3.32} \]
Since the operators $a$ and $a^\dagger$ are nonzero only in the first sub- or superdiagonal, the resulting matrix is a banded matrix of bandwidth 9. A sparse eigensolver such as the Lanczos algorithm can again be used to calculate the spectrum. Note that since we use the orthonormal eigenstates of $H_0$ as basis elements, the overlap matrix $S$ here is the identity matrix and we have to deal only with a standard eigenvalue problem.
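Constructing the matrix of (3.32) only requires the ladder operator matrix elements (2.60), from which $\langle n+1|q|n\rangle = \sqrt{(n+1)/2}$. A minimal sketch (our own; it builds $q$ as a dense matrix in the truncated basis, forms $q^4$ by matrix multiplication and adds the diagonal of $H_0$; the result can be passed to dsyev or a Lanczos routine):

    #include <cmath>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    Matrix multiply(const Matrix& A, const Matrix& B)
    {
        int n = A.size();
        Matrix C(n, std::vector<double>(n, 0.));
        for (int i = 0; i < n; ++i)
            for (int k = 0; k < n; ++k)
                for (int j = 0; j < n; ++j)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // Matrix of H = n + 1/2 + lambda q^4 in the N lowest harmonic
    // oscillator eigenstates, using <n+1|q|n> = <n|q|n+1> = sqrt((n+1)/2).
    Matrix anharmonic_hamiltonian(int N, double lambda)
    {
        Matrix q(N, std::vector<double>(N, 0.));
        for (int n = 0; n + 1 < N; ++n)
            q[n + 1][n] = q[n][n + 1] = std::sqrt(0.5 * (n + 1));
        Matrix H = multiply(multiply(q, q), multiply(q, q));   // q^4
        for (int n = 0; n < N; ++n)
            for (int m = 0; m < N; ++m)
                H[n][m] *= lambda;                             // lambda q^4
        for (int n = 0; n < N; ++n)
            H[n][n] += n + 0.5;                                // diagonal of H0
        return H;
    }

Because of the truncation the rows of $q^4$ near the upper edge of the basis are inaccurate, so $N$ should be chosen somewhat larger than the number of eigenvalues one is interested in.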
The finite element method

In cases where we have irregular geometries or want higher precision than the lowest order finite difference method, and do not know a suitable set of basis functions, the finite element method (FEM) should be chosen over the finite difference method. Since explaining the FEM can take a full semester in itself, we refer interested students to classes on solving partial differential equations.

3.3 The time-dependent Schrödinger equation

Finally we will reintroduce the time dependence to study dynamics in non-stationary quantum systems.

3.3.1 Spectral methods

By introducing a basis and solving for the complete spectrum of energy eigenstates we can directly solve the time-dependent problem in the case of a stationary Hamiltonian. This is a consequence of the linearity of the Schrödinger equation.
To calculate the time evolution of a state $|\psi(t_0)\rangle$ from time $t_0$ to $t$ we first solve the stationary eigenvalue problem $H|\phi\rangle = E|\phi\rangle$ and calculate the eigenvectors $|\phi_n\rangle$ and eigenvalues $\epsilon_n$. Next we represent the initial wave function $|\psi\rangle$ by a spectral decomposition
\[ |\psi(t_0)\rangle = \sum_n c_n|\phi_n\rangle. \tag{3.33} \]
Since each of the $|\phi_n\rangle$ is an eigenvector of $H$, the time evolution $e^{-iH(t - t_0)/\hbar}$ is trivial and we obtain at time $t$:
\[ |\psi(t)\rangle = \sum_n c_n e^{-i\epsilon_n(t - t_0)/\hbar}|\phi_n\rangle. \tag{3.34} \]
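Once the eigenvalues $\epsilon_n$ and the coefficients $c_n$ are known, a time step is a single loop. A minimal sketch (our own; the $c_n$ are assumed to come from projecting the initial state onto the eigenvectors):

    #include <cmath>
    #include <complex>
    #include <vector>

    // Evolve the spectral coefficients of eq. (3.34):
    // c_n(t) = c_n exp(-i eps_n (t - t0) / hbar).
    std::vector<std::complex<double>>
    evolve(const std::vector<std::complex<double>>& c,
           const std::vector<double>& eps, double dt, double hbar = 1.)
    {
        std::vector<std::complex<double>> ct(c.size());
        for (std::size_t n = 0; n < c.size(); ++n)
            ct[n] = c[n] * std::exp(std::complex<double>(0., -eps[n] * dt / hbar));
        return ct;
    }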

3.3.2 Direct numerical integration

If the number of basis states is too large to perform a complete diagonalization of the Hamiltonian, or if the Hamiltonian changes over time, we need to perform a direct integration of the Schrödinger equation. Like other initial value problems of partial differential equations, the Schrödinger equation can be solved by the method of lines. After choosing a set of basis functions or discretizing the spatial derivatives we obtain a set of coupled ordinary differential equations which can be evolved for each point along the time line (hence the name) by standard ODE solvers.
In the remainder of this chapter we use the symbol $H$ to refer to the representation of the Hamiltonian in the chosen finite basis set. A forward Euler scheme
\[ |\psi(t_{n+1})\rangle = |\psi(t_n)\rangle - \frac{i\Delta_t}{\hbar}H|\psi(t_n)\rangle \tag{3.35} \]
is not only numerically unstable. It also violates the conservation of the norm of the wave function $\langle\psi|\psi\rangle = 1$. Since the exact quantum evolution
\[ \psi(x, t + \Delta_t) = e^{-iH\Delta_t/\hbar}\,\psi(x, t) \tag{3.36} \]
is unitary and thus conserves the norm, we want to look for a unitary approximant as integrator. Instead of using the forward Euler method (3.35), which is just a first order Taylor expansion of the exact time evolution
\[ e^{-iH\Delta_t/\hbar} = 1 - \frac{i\Delta_t}{\hbar}H + O(\Delta_t^2), \tag{3.37} \]
we reformulate the time evolution operator as
\[ e^{-iH\Delta_t/\hbar} = e^{-iH\Delta_t/2\hbar}\,e^{-iH\Delta_t/2\hbar} = \left(1 + \frac{i\Delta_t}{2\hbar}H\right)^{-1}\left(1 - \frac{i\Delta_t}{2\hbar}H\right) + O(\Delta_t^3), \tag{3.38} \]
which is unitary!
This gives the simplest stable and unitary integrator algorithm
\[ \psi(x, t + \Delta_t) = \left(1 + \frac{i\Delta_t}{2\hbar}H\right)^{-1}\left(1 - \frac{i\Delta_t}{2\hbar}H\right)\psi(x, t) \tag{3.39} \]
or equivalently
\[ \left(1 + \frac{i\Delta_t}{2\hbar}H\right)\psi(x, t + \Delta_t) = \left(1 - \frac{i\Delta_t}{2\hbar}H\right)\psi(x, t). \tag{3.40} \]
Unfortunately this is an implicit integrator. At each time step, after evaluating the right hand side, a linear system of equations needs to be solved. For one-dimensional problems the matrix representation of $H$ is often tridiagonal and a tridiagonal solver can be used. In higher dimensions the matrix $H$ will no longer be simply tridiagonal but still very sparse, and we can use iterative algorithms, similar to the Lanczos algorithm for the eigenvalue problem. For details about these algorithms we refer to the nice summary at http://mathworld.wolfram.com/topics/Templates.html and especially the biconjugate gradient (BiCG) algorithm. Implementations of this algorithm are available, e.g. in the Iterative Template Library (ITL).
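For a 1D finite-difference discretization one step of (3.40) amounts to a tridiagonal multiplication followed by a tridiagonal solve. A minimal C++ sketch (our own, with $\hbar = 1$; d and e hold the diagonal and off-diagonal of the tridiagonal $H$, and the implicit system is solved with the Thomas algorithm):

    #include <complex>
    #include <vector>

    using cplx = std::complex<double>;
    using Vec = std::vector<cplx>;

    // One Crank-Nicolson step of eq. (3.40) with hbar = 1:
    // (1 + i dt/2 H) psi_new = (1 - i dt/2 H) psi_old,
    // for a symmetric tridiagonal H with diagonal d[i] and
    // off-diagonal elements e[i] = H_{i,i+1}.
    Vec crank_nicolson_step(const Vec& psi, const std::vector<double>& d,
                            const std::vector<double>& e, double dt)
    {
        int n = psi.size();
        cplx z(0., 0.5 * dt);                           // i dt / 2
        Vec rhs(n);                                     // (1 - i dt/2 H) psi
        for (int i = 0; i < n; ++i) {
            rhs[i] = (1. - z * d[i]) * psi[i];
            if (i > 0)     rhs[i] -= z * e[i - 1] * psi[i - 1];
            if (i + 1 < n) rhs[i] -= z * e[i] * psi[i + 1];
        }
        // solve (1 + i dt/2 H) x = rhs with the Thomas algorithm
        Vec a(n), c(n), x(n);
        for (int i = 0; i < n; ++i) a[i] = 1. + z * d[i];   // main diagonal
        for (int i = 0; i + 1 < n; ++i) c[i] = z * e[i];    // off-diagonal
        for (int i = 1; i < n; ++i) {                       // forward sweep
            cplx m = c[i - 1] / a[i - 1];
            a[i] -= m * c[i - 1];
            rhs[i] -= m * rhs[i - 1];
        }
        x[n - 1] = rhs[n - 1] / a[n - 1];
        for (int i = n - 2; i >= 0; --i)                    // back substitution
            x[i] = (rhs[i] - c[i] * x[i + 1]) / a[i];
        return x;
    }

Each step costs $O(n)$ operations, and the norm of $\psi$ is conserved up to roundoff.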

3.3.3 The split operator method

A simpler and explicit method is possible for a quantum particle in the real space picture with the standard Schrödinger equation (2.52). Writing the Hamilton operator as
\[ H = \hat T + \hat V \tag{3.41} \]
with
\[ \hat T = \frac{1}{2m}\hat p^2 \tag{3.42} \]
\[ \hat V = V(\hat x) \tag{3.43} \]
it is easy to see that $\hat V$ is diagonal in position space while $\hat T$ is diagonal in momentum space. If we split the time evolution as
\[ e^{-i\Delta_t H/\hbar} = e^{-i\Delta_t\hat V/2\hbar}\,e^{-i\Delta_t\hat T/\hbar}\,e^{-i\Delta_t\hat V/2\hbar} + O(\Delta_t^3) \tag{3.44} \]
we can perform the individual time evolutions $e^{-i\Delta_t\hat V/2\hbar}$ and $e^{-i\Delta_t\hat T/\hbar}$ exactly:
\[ \left[e^{-i\Delta_t\hat V/2\hbar}|\psi\rangle\right](\vec x) = e^{-i\Delta_t V(\vec x)/2\hbar}\,\psi(\vec x) \tag{3.45} \]
\[ \left[e^{-i\Delta_t\hat T/\hbar}|\psi\rangle\right](\vec k) = e^{-i\Delta_t\hbar\|\vec k\|^2/2m}\,\tilde\psi(\vec k) \tag{3.46} \]
in real space for the first term and momentum space for the second term. This requires a basis change from real to momentum space, which is efficiently performed using a Fast Fourier Transform (FFT) algorithm. Propagating for a time $t = N\Delta_t$, two consecutive applications of $e^{-i\Delta_t\hat V/2\hbar}$ can easily be combined into a propagation by a full time step $e^{-i\Delta_t\hat V/\hbar}$, resulting in the propagation:
\[ e^{-itH/\hbar} = \left[e^{-i\Delta_t\hat V/2\hbar}\,e^{-i\Delta_t\hat T/\hbar}\,e^{-i\Delta_t\hat V/2\hbar}\right]^N + O(\Delta_t^2) = e^{-i\Delta_t\hat V/2\hbar}\left[e^{-i\Delta_t\hat T/\hbar}\,e^{-i\Delta_t\hat V/\hbar}\right]^{N-1}e^{-i\Delta_t\hat T/\hbar}\,e^{-i\Delta_t\hat V/2\hbar} \tag{3.47} \]
and the discretized algorithm starts as
\[ \psi_1(\vec x) = e^{-i\Delta_t V(\vec x)/2\hbar}\,\psi_0(\vec x) \tag{3.48} \]
\[ \tilde\psi_1(\vec k) = \mathcal F\psi_1(\vec x), \tag{3.49} \]
where $\mathcal F$ denotes the Fourier transform and $\mathcal F^{-1}$ will denote the inverse Fourier transform. Next we propagate in time using full time steps:
\[ \tilde\psi_{2n}(\vec k) = e^{-i\Delta_t\hbar\|\vec k\|^2/2m}\,\tilde\psi_{2n-1}(\vec k) \tag{3.50} \]
\[ \psi_{2n}(\vec x) = \mathcal F^{-1}\tilde\psi_{2n}(\vec k) \tag{3.51} \]
\[ \psi_{2n+1}(\vec x) = e^{-i\Delta_t V(\vec x)/\hbar}\,\psi_{2n}(\vec x) \tag{3.52} \]
\[ \tilde\psi_{2n+1}(\vec k) = \mathcal F\psi_{2n+1}(\vec x), \tag{3.53} \]
except that in the last step we finish with another half time step in real space:
\[ \psi_{2N+1}(\vec x) = e^{-i\Delta_t V(\vec x)/2\hbar}\,\psi_{2N}(\vec x). \tag{3.54} \]
This is a fast and unitary integrator for the Schrödinger equation in real space. It could be improved by replacing the locally third order splitting (3.44) by a fifth-order version involving five instead of three terms.
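A single full time step (3.50)-(3.52) can be coded in a few lines on top of an FFT library. The sketch below is our own illustration, assuming the FFTW3 library and units $\hbar = m = 1$; the half steps (3.48) and (3.54) at the beginning and end of the propagation are analogous. In production code the plans would be created once and reused:

    #include <cmath>
    #include <complex>
    #include <vector>
    #include <fftw3.h>

    // One full split-operator step for a 1D wave function of n points
    // on a mesh of spacing dx, in units hbar = m = 1.
    void split_operator_step(std::vector<std::complex<double>>& psi,
                             const std::vector<double>& V, double dx, double dt)
    {
        const double pi = std::acos(-1.);
        int n = psi.size();
        fftw_complex* data = reinterpret_cast<fftw_complex*>(psi.data());
        fftw_plan fwd = fftw_plan_dft_1d(n, data, data, FFTW_FORWARD,  FFTW_ESTIMATE);
        fftw_plan bwd = fftw_plan_dft_1d(n, data, data, FFTW_BACKWARD, FFTW_ESTIMATE);

        fftw_execute(fwd);                   // to momentum space, eq. (3.50)
        for (int j = 0; j < n; ++j) {
            // FFT ordering: k_j = 2 pi j/(n dx) for j < n/2, negative otherwise
            double k = 2. * pi * (j < n / 2 ? j : j - n) / (n * dx);
            psi[j] *= std::exp(std::complex<double>(0., -0.5 * k * k * dt));
        }
        fftw_execute(bwd);                   // back to real space, eq. (3.51)
        for (int j = 0; j < n; ++j) {
            psi[j] /= double(n);             // undo FFTW's missing 1/n factor
            psi[j] *= std::exp(std::complex<double>(0., -V[j] * dt));  // eq. (3.52)
        }
        fftw_destroy_plan(fwd);
        fftw_destroy_plan(bwd);
    }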

Chapter 4
Introduction to many-body quantum mechanics

4.1 The complexity of the quantum many-body problem

After learning how to solve the 1-body Schrödinger equation, let us next generalize to more particles. If a single body quantum problem is described by a Hilbert space $\mathcal H$ of dimension $\dim\mathcal H = d$, then $N$ distinguishable quantum particles are described by the tensor product of $N$ Hilbert spaces
\[ \mathcal H^{(N)} \equiv \bigotimes_{i=1}^N \mathcal H \tag{4.1} \]
with dimension $d^N$.
As a first example, a single spin-1/2 has a Hilbert space $\mathcal H = \mathbb C^2$ of dimension 2, but $N$ spin-1/2 particles have a Hilbert space $\mathcal H^{(N)} = \mathbb C^{2^N}$ of dimension $2^N$. Similarly, a single particle in three dimensional space is described by a complex-valued wave function $\psi(\vec x)$ of the position $\vec x$ of the particle, while $N$ distinguishable particles are described by a complex-valued wave function $\psi(\vec x_1, \ldots, \vec x_N)$ of the positions $\vec x_1, \ldots, \vec x_N$ of the particles. Approximating the Hilbert space $\mathcal H$ of the single particle by a finite basis set with $d$ basis functions, the $N$-particle basis approximated by the same finite basis set for single particles needs $d^N$ basis functions.
This exponential scaling of the Hilbert space dimension with the number of particles is a big challenge. Even in the simplest case of a spin-1/2 with $d = 2$, the basis for $N = 30$ spins is already of size $2^{30} \approx 10^9$. A single complex vector needs 16 GByte of memory and will no longer fit into the memory of your personal computer.
This challenge will be addressed later in this course by learning about:

1. approximative methods, reducing the many-particle problem to a single-particle problem
2. quantum Monte Carlo methods for bosonic and magnetic systems
3. brute-force methods solving the exact problem in a huge Hilbert space for modest numbers of particles

4.2 Indistinguishable particles

4.2.1 Bosons and fermions

In quantum mechanics we assume that elementary particles, such as the electron or photon, are indistinguishable: there is no serial number painted on the electrons that would allow us to distinguish two electrons. Hence, if we exchange two particles the system is still the same as before. For a two-body wave function $\psi(\vec q_1, \vec q_2)$ this means that
\[ \psi(\vec q_2, \vec q_1) = e^{i\phi}\psi(\vec q_1, \vec q_2), \tag{4.2} \]
since upon exchanging the two particles the wave function needs to be identical, up to a phase factor $e^{i\phi}$. In three dimensions the first homotopy group is trivial and after doing two exchanges we need to be back at the original wave function¹
\[ \psi(\vec q_1, \vec q_2) = e^{i\phi}\psi(\vec q_2, \vec q_1) = e^{2i\phi}\psi(\vec q_1, \vec q_2), \tag{4.3} \]
and hence $e^{2i\phi} = 1$, i.e. $e^{i\phi} = \pm 1$:
\[ \psi(\vec q_2, \vec q_1) = \pm\psi(\vec q_1, \vec q_2) \tag{4.4} \]
The many-body Hilbert space can thus be split into orthogonal subspaces, one in which particles pick up a $-$ sign and are called fermions, and the other where particles pick up a $+$ sign and are called bosons.

Bosons

For bosons the general many-body wave function thus needs to be symmetric under permutations. Instead of an arbitrary wave function $\psi(\vec q_1, \ldots, \vec q_N)$ of $N$ particles we use the symmetrized wave function
\[ \Psi^{(S)} = \mathcal S_+\psi(\vec q_1, \ldots, \vec q_N) \equiv N_S\sum_p \psi(\vec q_{p(1)}, \ldots, \vec q_{p(N)}), \tag{4.5} \]
where the sum goes over all permutations $p$ of $N$ particles, and $N_S$ is a normalization factor.

¹ As a side remark we want to mention that in two dimensions the first homotopy group is $\mathbb Z$ and not trivial: it matters whether we move the particles clockwise or anti-clockwise when exchanging them, and two clockwise exchanges are not the identity anymore. Then more general, anyonic, statistics are possible.

Fermions

For fermions the wave function has to be antisymmetric under exchange of any two fermions, and we use the anti-symmetrized wave function
\[ \Psi^{(A)} = \mathcal S_-\psi(\vec q_1, \ldots, \vec q_N) \equiv N_A\sum_p \mathrm{sgn}(p)\,\psi(\vec q_{p(1)}, \ldots, \vec q_{p(N)}), \tag{4.6} \]
where $\mathrm{sgn}(p) = \pm 1$ is the sign of the permutation and $N_A$ again a normalization factor.
A consequence of the antisymmetrization is that no two fermions can be in the same state, as a wave function
\[ \psi(\vec q_1, \vec q_2) = \phi(\vec q_1)\phi(\vec q_2) \tag{4.7} \]
vanishes under antisymmetrization:
\[ \Psi(\vec q_1, \vec q_2) = \psi(\vec q_1, \vec q_2) - \psi(\vec q_2, \vec q_1) = \phi(\vec q_1)\phi(\vec q_2) - \phi(\vec q_2)\phi(\vec q_1) = 0 \tag{4.8} \]

Spinful fermions

Fermions, such as electrons, usually have a spin-1/2 degree of freedom in addition to their orbital wave function. The full wave function is then a function of generalized coordinates $\vec x = (\vec q, \sigma)$ including both position $\vec q$ and spin $\sigma$.

4.2.2 The Fock space

The Hilbert space describing a quantum many-body system with $N = 0, 1, \ldots, \infty$ particles is called the Fock space. It is the direct sum of the appropriately symmetrized $N$-particle Hilbert spaces:
\[ \bigoplus_{N=0}^{\infty}\mathcal S_\pm\mathcal H^{\otimes N} \tag{4.9} \]
where $\mathcal S_+$ is the symmetrization operator used for bosons and $\mathcal S_-$ is the anti-symmetrization operator used for fermions.

The occupation number basis

Given a basis $\{|\phi_1\rangle, \ldots, |\phi_L\rangle\}$ of the single-particle Hilbert space $\mathcal H$, a basis for the Fock space is constructed by specifying the number of particles $n_i$ occupying the single-particle wave function $|\phi_i\rangle$. The wave function of the state $|n_1, \ldots, n_L\rangle$ is given by the appropriately symmetrized and normalized product of the single particle wave functions. For example, the basis state $|1, 1\rangle$ has wave function
\[ \frac{1}{\sqrt 2}\left[\phi_1(\vec x_1)\phi_2(\vec x_2) \pm \phi_1(\vec x_2)\phi_2(\vec x_1)\right] \tag{4.10} \]
where the $+$ sign is for bosons and the $-$ sign for fermions.
For bosons the occupation numbers $n_i$ can go from 0 to $\infty$, but for fermions they are restricted to $n_i = 0$ or 1, since no two fermions can occupy the same state.

The Slater determinant

The antisymmetrized and normalized product of $N$ single-particle wave functions $\phi_i$ can be written as a determinant, called the Slater determinant
\[ \mathcal S_-\prod_{i=1}^N \phi_i(\vec x_i) = \frac{1}{\sqrt{N!}}\begin{vmatrix} \phi_1(\vec x_1) & \cdots & \phi_N(\vec x_1) \\ \vdots & & \vdots \\ \phi_1(\vec x_N) & \cdots & \phi_N(\vec x_N) \end{vmatrix}. \tag{4.11} \]
Note that while the set of Slater determinants of single particle basis functions forms a basis of the fermionic Fock space, the general fermionic many body wave function is a linear superposition of many Slater determinants and cannot be written as a single Slater determinant. The Hartree-Fock method, discussed below, will simplify the quantum many body problem to a one body problem by making the approximation that the ground state wave function can be described by a single Slater determinant.

4.2.3 Creation and annihilation operators

Since it is very cumbersome to work with appropriately symmetrized many body wave functions, we will mainly use the formalism of second quantization and work with creation and annihilation operators.
The annihilation operator $a_i$ associated with a basis function $|\phi_i\rangle$ is defined as the result of the inner product of a many body wave function $|\Psi\rangle$ with this basis function $|\phi_i\rangle$. Given an $N$-particle wave function $|\Psi^{(N)}\rangle$, the result of applying the annihilation operator is an $(N-1)$-particle wave function $|\tilde\Psi^{(N-1)}\rangle = a_i|\Psi^{(N)}\rangle$. It is given by the appropriately symmetrized inner product
\[ \tilde\Psi(\vec x_1, \ldots, \vec x_{N-1}) = \sqrt N\int d\vec x_N\,\phi_i^*(\vec x_N)\,\Psi(\vec x_1, \ldots, \vec x_N). \tag{4.12} \]
Applied to a single-particle basis state $|\phi_j\rangle$ the result is
\[ a_i|\phi_j\rangle = \delta_{ij}|0\rangle \tag{4.13} \]
where $|0\rangle$ is the vacuum state with no particles.
The creation operator $a_i^\dagger$ is defined as the adjoint of the annihilation operator $a_i$. Applying it to the vacuum creates a particle with wave function $\phi_i$:
\[ |\phi_i\rangle = a_i^\dagger|0\rangle \tag{4.14} \]
For sake of simplicity and concreteness we will now assume that the $L$ basis functions $|\phi_i\rangle$ of the single particle Hilbert space factor into $L/(2S+1)$ orbital wave functions $f_i(\vec q)$ and $2S+1$ spin wave functions $|\sigma\rangle$, where $\sigma = -S, -S+1, \ldots, S$. We will write creation and annihilation operators $a_{i,\sigma}^\dagger$ and $a_{i,\sigma}$, where $i$ is the orbital index and $\sigma$ the spin index. The most common cases will be spinless bosons with $S = 0$, where the spin index can be dropped, and spin-1/2 fermions, where the spin can be up ($+1/2$) or down ($-1/2$).

Commutation relations

The creation and annihilation operators fulfill certain canonical commutation relations, which we will first discuss for an orthogonal set of basis functions. We will later generalize them to non-orthogonal basis sets.
For bosons, the commutation relations are the same as those of the ladder operators discussed for the harmonic oscillator (2.62):
\[ [a_i, a_j] = [a_i^\dagger, a_j^\dagger] = 0 \tag{4.15} \]
\[ [a_i, a_j^\dagger] = \delta_{ij}. \tag{4.16} \]
For fermions, on the other hand, the operators anticommute
\[ \{a_i, a_j^\dagger\} = \{a_j^\dagger, a_i\} = \delta_{ij} \]
\[ \{a_i, a_j\} = \{a_i^\dagger, a_j^\dagger\} = 0. \tag{4.17} \]
The anti-commutation implies that
\[ (a_i^\dagger)^2 = a_i^\dagger a_i^\dagger = -a_i^\dagger a_i^\dagger \tag{4.18} \]
and that thus
\[ (a_i^\dagger)^2 = 0, \tag{4.19} \]
as expected since no two fermions can exist in the same state.


Fock basis in second quantization and normal ordering

The basis state $|n_1, \ldots, n_L\rangle$ in the occupation number basis can easily be expressed in terms of creation operators:
\[ |n_1, \ldots, n_L\rangle = \prod_{i=1}^L\left(a_i^\dagger\right)^{n_i}|0\rangle = \left(a_1^\dagger\right)^{n_1}\left(a_2^\dagger\right)^{n_2}\cdots\left(a_L^\dagger\right)^{n_L}|0\rangle \tag{4.20} \]
For bosons the ordering of the creation operators does not matter, since the operators commute. For fermions, however, the ordering matters since the fermionic creation operators anticommute: $a_1^\dagger a_2^\dagger|0\rangle = -a_2^\dagger a_1^\dagger|0\rangle$. We thus need to agree on a specific ordering of the creation operators to define what we mean by the state $|n_1, \ldots, n_L\rangle$. The choice of ordering does not matter but we have to stay consistent and use e.g. the convention in equation (4.20).
Once the normal ordering is defined, we can derive the expressions for the matrix elements of the creation and annihilation operators in that basis. Using the above normal ordering the matrix elements are
\[ a_i|n_1, \ldots, n_i, \ldots, n_L\rangle = \delta_{n_i,1}(-1)^{\sum_{j=1}^{i-1}n_j}\,|n_1, \ldots, n_i - 1, \ldots, n_L\rangle \tag{4.21} \]
\[ a_i^\dagger|n_1, \ldots, n_i, \ldots, n_L\rangle = \delta_{n_i,0}(-1)^{\sum_{j=1}^{i-1}n_j}\,|n_1, \ldots, n_i + 1, \ldots, n_L\rangle \tag{4.22} \]
where the minus signs come from commuting the annihilation and creation operator to the correct position in the normal ordered product.
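On a computer the fermionic occupation number basis maps naturally onto bit strings, and the sign $(-1)^{\sum_{j<i}n_j}$ in (4.21)-(4.22) is obtained by counting the occupied orbitals below $i$. A minimal sketch (our own illustration for up to 64 orbitals; it uses the GCC/Clang builtin popcount, for which C++20 offers std::popcount):

    #include <cstdint>

    // A fermionic Fock state for up to 64 orbitals: bit i holds n_i.
    // Apply a_i (annihilate = true) or a_i^dagger to |state>; returns false
    // if the result vanishes, otherwise updates state and the overall sign
    // according to eqs. (4.21) and (4.22).
    bool apply_fermion_op(std::uint64_t& state, int i, bool annihilate, int& sign)
    {
        bool occupied = (state >> i) & 1u;
        if (annihilate != occupied) return false;        // delta_{n_i,1} / delta_{n_i,0}
        std::uint64_t mask = (std::uint64_t(1) << i) - 1;
        int count = __builtin_popcountll(state & mask);  // sum_{j<i} n_j
        if (count & 1) sign = -sign;                     // (-1)^{sum_{j<i} n_j}
        state ^= std::uint64_t(1) << i;                  // flip n_i
        return true;
    }

This representation is the basis of the brute-force exact diagonalization methods mentioned in section 4.1.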

4.2.4 Nonorthogonal basis sets

In simulating the electronic properties of atoms and molecules below, we will see that the natural choice of single particle basis functions centered around atoms will necessarily give a non-orthogonal set of basis functions. This is no problem, as long as the definition of the annihilation and creation operators is carefully generalized. For this generalization it will be useful to introduce the fermion field operators $\psi^\dagger(\vec r)$ and $\psi(\vec r)$, creating and annihilating a fermion localized at a single point $\vec r$ in space. Their commutation relations are simply
\[ \{\psi(\vec r), \psi^\dagger(\vec r\,')\} = \delta(\vec r - \vec r\,') \]
\[ \{\psi(\vec r), \psi(\vec r\,')\} = \{\psi^\dagger(\vec r), \psi^\dagger(\vec r\,')\} = 0. \tag{4.23} \]
The scalar products of the basis functions define a matrix
\[ S_{ij} = \int d^3\vec r\, f_i^*(\vec r)f_j(\vec r), \tag{4.24} \]
which is in general not the identity matrix. The associated annihilation operators $a_i$ are again defined as scalar products
\[ a_i = \sum_j\left(S^{-1}\right)_{ij}\int d^3\vec r\, f_j^*(\vec r)\,\psi(\vec r). \tag{4.25} \]
The non-orthogonality causes the commutation relations of these operators to differ from those of normal fermion creation and annihilation operators:
\[ \{a_i, a_j^\dagger\} = \left(S^{-1}\right)_{ij} \]
\[ \{a_i, a_j\} = \{a_i^\dagger, a_j^\dagger\} = 0. \tag{4.26} \]
Due to the non-orthogonality the adjoint $a_i^\dagger$ does not create a state with wave function $f_i$. This is done by the operator $\hat a_i^\dagger$, defined through
\[ \hat a_i^\dagger = \sum_j S_{ji}a_j^\dagger, \tag{4.27} \]
which has the following simple anticommutation relation with $a_j$:
\[ \{\hat a_i^\dagger, a_j\} = \delta_{ij}. \tag{4.28} \]
The anticommutation relations of the $\hat a_i$ and the $\hat a_j^\dagger$ are
\[ \{\hat a_i, \hat a_j^\dagger\} = S_{ij} \]
\[ \{\hat a_i, \hat a_j\} = \{\hat a_i^\dagger, \hat a_j^\dagger\} = 0. \tag{4.29} \]
We will need to keep the distinction between $a$ and $\hat a$ in mind when dealing with non-orthogonal basis sets.

Chapter 5
Quantum Monte Carlo

This chapter is devoted to the study of quantum many body systems using Monte Carlo techniques. We analyze two of the methods that belong to the large family of quantum Monte Carlo techniques, namely the Path-Integral Monte Carlo (PIMC) and the Diffusion Monte Carlo (DMC, also named Green's function Monte Carlo). In the first section we start by introducing PIMC.

5.1 Path Integrals in Quantum Statistical Mechanics

In this section we introduce the path-integral description of the properties of quantum many-body systems. We show that path integrals permit us to calculate the static properties of systems of bosons at thermal equilibrium by means of Monte Carlo methods.
We consider a many-particle system described by the non-relativistic Hamiltonian
\[ \hat H = \hat T + \hat V; \tag{5.1} \]
in coordinate representation the kinetic operator $\hat T$ and the potential operator $\hat V$ are defined as:
\[ \hat T = -\frac{\hbar^2}{2m}\sum_{i=1}^N \nabla_i^2, \quad\text{and} \tag{5.2} \]
\[ \hat V = V(R). \tag{5.3} \]
In these equations $\hbar$ is Planck's constant divided by $2\pi$, $m$ the particles' mass, $N$ the number of particles and the vector $R \equiv (\vec r_1, \ldots, \vec r_N)$ describes their positions. We consider here systems in $d$ dimensions, with fixed number of particles, temperature $T$, contained in a volume $V$.
In most cases, the potential $V(R)$ is determined by inter-particle interactions, in which case it can be written as the sum of pair contributions $V(R) = \sum_{i<j}v(\vec r_i - \vec r_j)$, where $v(r)$ is the inter-particle potential; it can also be due to an external field, call it $v_{\rm ext}(r)$, in which case it is just the sum of single particle contributions $V(R) = \sum_i v_{\rm ext}(\vec r_i)$.

We first assume that the particles, although being identical, are distinguishable. Therefore, they obey Boltzmann statistics. In section 5.1.3 we will describe the treatment of identical particles obeying Bose statistics.
All the static properties of a quantum many-body system in thermal equilibrium are obtainable from the thermal density matrix $\exp(-\beta\hat H)$, where $\beta = 1/k_B T$, with $k_B$ the Boltzmann constant. The expectation value of an observable operator $\hat O$ is:
\[ \langle\hat O\rangle = \mathrm{Tr}\left[\hat O\exp\left(-\beta\hat H\right)\right]/Z, \tag{5.4} \]
where the partition function $Z$ is the trace of the density matrix:
\[ Z = \mathrm{Tr}\left[\exp\left(-\beta\hat H\right)\right]. \tag{5.5} \]
In the following we will find it convenient to use the density matrix in coordinate representation. We denote its matrix elements as:
\[ \rho(R, R', \beta) \equiv \left\langle R\left|\exp\left(-\beta\hat H\right)\right|R'\right\rangle. \tag{5.6} \]
The partition function is the integral of the diagonal matrix elements over all possible configurations:
\[ Z(N, T, V) = \int\rho(R, R, \beta)\,dR. \tag{5.7} \]
The product of two density matrices is again a density matrix:
\[ \exp\left(-(\beta_1 + \beta_2)\hat H\right) = \exp\left(-\beta_1\hat H\right)\exp\left(-\beta_2\hat H\right). \tag{5.8} \]
This property, often referred to as the product property, written in coordinate representation gives a convolution integral:
\[ \rho(R_1, R_3, \beta_1 + \beta_2) = \int\rho(R_1, R_2, \beta_1)\,\rho(R_2, R_3, \beta_2)\,dR_2. \tag{5.9} \]
If we apply the product property $M$ times we obtain the density matrix at the inverse temperature $\beta$ as the product of $M$ density matrices at the inverse temperature $\delta\tau = \beta/M$. In operator form:
\[ \exp\left(-\beta\hat H\right) = \left[\exp\left(-\delta\tau\hat H\right)\right]^M. \tag{5.10} \]
We call the quantity $\delta\tau$ the time step. Eq. (5.10) written in coordinate representation becomes:
\[ \rho(R_1, R_{M+1}, \beta) = \int\cdots\int dR_2\,dR_3\cdots dR_M\;\rho(R_1, R_2, \delta\tau)\,\rho(R_2, R_3, \delta\tau)\cdots\rho(R_M, R_{M+1}, \delta\tau). \tag{5.11} \]

Eq. (5.11) is not useful as it is, since the density matrices $\rho(R_j, R_{j+1}, \delta\tau)$ are, in general, unknown quantities. We note, however, that if $M$ is a large number, then the time-step $\delta\tau$, which corresponds to the high temperature $MT$, is small. If in eq. (5.11) we replace the exact density matrix $\rho(R_j, R_{j+1}, \delta\tau)$ with a "short time" or "high temperature" approximation we obtain a multidimensional integral of known functions. Furthermore, in coordinate representation the density matrix is positive definite. It is known that many-variable integrals of positive functions can be calculated efficiently by means of Monte Carlo methods.
The simplest expression for the high temperature density matrix is the so called primitive approximation. It consists in neglecting all terms beyond the one which is linear in $\delta\tau$ in the left-hand side exponent of the following operator identity (Baker-Campbell-Hausdorff relation):
\[ \exp\left(-\delta\tau\left(\hat T + \hat V\right) + \frac{\delta\tau^2}{2}\left[\hat T, \hat V\right] + \cdots\right) = \exp\left(-\delta\tau\hat T\right)\exp\left(-\delta\tau\hat V\right). \tag{5.12} \]
(In this equation dots indicate terms which contain powers of $\delta\tau$ higher than the second.) One obtains the following approximate expression for the density matrix operator:
\[ \exp\left(-\delta\tau\hat H\right) \cong \exp\left(-\delta\tau\hat T\right)\exp\left(-\delta\tau\hat V\right). \tag{5.13} \]
It is easy to write the matrix elements of the kinetic density matrix $\exp(-\delta\tau\hat T)$ and the potential density matrix $\exp(-\delta\tau\hat V)$ in coordinate representation. The latter is diagonal:
\[ \left\langle R_i\left|\exp\left(-\delta\tau\hat V\right)\right|R_{i+1}\right\rangle = \exp\left(-\delta\tau V(R_i)\right)\,\delta(R_i - R_{i+1}), \tag{5.14} \]
given that we consider potentials that are diagonal in coordinate space. The former, in free space, is a Gaussian propagator (see section 5.1.2):
\[ \left\langle R_i\left|\exp\left(-\delta\tau\hat T\right)\right|R_{i+1}\right\rangle = \left(2\pi\hbar^2\delta\tau/m\right)^{-dN/2}\exp\left[-\frac{(R_i - R_{i+1})^2}{2\hbar^2\delta\tau/m}\right]. \tag{5.15} \]
For later convenience we introduce the following definition:
\[ \rho_{\rm free}(R, R', \delta\tau) \equiv \left(2\pi\hbar^2\delta\tau/m\right)^{-dN/2}\exp\left[-\frac{(R - R')^2}{2\hbar^2\delta\tau/m}\right]. \tag{5.16} \]
In the limit of large Trotter number $M$, equation (5.10) remains exact if we use the primitive approximation eq. (5.13) in its right hand side. This is guaranteed by the Trotter formula:
\[ \exp\left(-\beta\left(\hat T + \hat V\right)\right) = \lim_{M\to+\infty}\left[\exp\left(-\delta\tau\hat T\right)\exp\left(-\delta\tau\hat V\right)\right]^M, \tag{5.17} \]
which holds for any pair of operators bounded from below. The kinetic operator $\hat T$ and the potential operators $\hat V$ of interest to us satisfy this requirement. To make the consequence of the Trotter formula explicit in coordinate representation, we substitute the matrix elements of the kinetic and the potential density matrices, eqs. (5.15) and (5.14), into the path-integral formula (5.11). We arrive at the following $dN(M-1)$-dimensional integral:
\[ \rho(R_1, R_{M+1}, \beta) = \int\prod_{j=2}^M dR_j\;\prod_{j=1}^M\rho_{\rm free}(R_j, R_{j+1}, \delta\tau)\exp\left[-\delta\tau V(R_j)\right]. \tag{5.18} \]
The Trotter formula guarantees that in the limit $M \to \infty$ this is an exact equation. If $M$ is a large, but finite, number the integral (5.18) can be computed using a Monte Carlo procedure. One big issue is the determination of the lowest value of $M$ for which the systematic error due to $M$ being finite is smaller than the unavoidable statistical error associated with the Monte Carlo evaluation.
At this point it is useful to introduce some definitions we will employ extensively in the next lectures.

• Many-particle path: also called system configuration, it is the set of the $dNM$ coordinates $R_1, R_2, \ldots, R_M$.
• Time-slice: the $j$-th term of a system configuration, indicated with $R_j$, contains the $dN$ coordinates of the $N$ particles at imaginary time $(j-1)\delta\tau$ and will be called a time-slice.
• World line: the world line $i$ is the set of coordinates describing the path of the particle $i$ in imaginary time: $\vec r_{i1}, \vec r_{i2}, \ldots, \vec r_{ij}, \ldots, \vec r_{iM}$.
• Bead: we call beads the $M$ components of a world line.

The trace of the density matrix (5.18) gives the partition function:
\[ Z(N, V, T) = \int\rho(R_1, R_1, \beta)\,dR_1 = \int\prod_{j=1}^M dR_j\;\prod_{j=1}^M\rho_{\rm free}(R_j, R_{j+1}, \delta\tau)\exp\left[-\delta\tau V(R_j)\right]. \tag{5.19} \]
For distinguishable particles $R_{M+1} \equiv R_1$. Note that eq. (5.19) represents the partition function of a classical system of polymers. Every polymer is a necklace of beads interacting as if they were connected by ideal springs. This harmonic interaction is due to the kinetic density matrix. In the primitive approximation, beads with the same imaginary time index $j$, i.e., belonging to the same time-slice, interact with the inter-particle potential $v(r)$. With higher order approximations one generally introduces effective inter-particle interactions. This is the famous mapping of quantum to classical systems introduced by Feynman to describe the properties of superfluid helium. Each quantum particle has been substituted by a classical polymer. The size of the polymers is of order $\lambda_T = \sqrt{2\pi\hbar^2\beta/m}$, the de Broglie thermal wave-length, and represents the indetermination in the position of the corresponding quantum particle. In section 5.1.3 we will see how the indistinguishability of identical particles modifies the polymer description of the quantum many body system.

5.1.1 Analogy between inverse temperature and imaginary time

In the previous sections we have shown that the partition function of a quantum system can be decomposed using path-integrals. It is interesting to notice that a path-integral can be regarded as a time-evolution in imaginary time. To understand this, let us consider the time-dependent Schrödinger equation:
\[ i\hbar\frac{\partial}{\partial t}\psi(R, t) = \hat H\psi(R, t). \tag{5.20} \]
The Green's function of eq. (5.20) is:
\[ G(R, R', t) = \left\langle R\left|\exp\left(-it\hat H/\hbar\right)\right|R'\right\rangle. \tag{5.21} \]
It is the solution of the Schrödinger equation with the initial condition $\psi(R, 0) = \delta(R - R')$. It governs the time-evolution of the wave function. In fact, using the Green's function one can write the differential equation (5.20) in the integral form:
\[ \psi(R, t) = \int G(R, R', t)\,\psi(R', 0)\,dR'. \tag{5.22} \]
Now, we can notice that eq. (5.21) is analogous to the thermal density matrix (5.6) once one substitutes $\beta \to it/\hbar$ in eq. (5.6).

5.1.2 Free-particle density matrix

Let us consider a free particle in 1D. The Hamiltonian describing this system is:
\[ \hat H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}. \tag{5.23} \]
It is easy to determine the thermal density matrix corresponding to this Hamiltonian. We start from the definition:
\[ \rho(x, x', \beta) = \left\langle x\left|\exp\left(-\beta\hat H\right)\right|x'\right\rangle. \tag{5.24} \]
We introduce twice the completeness relation $\int|p\rangle\langle p|\,dp = \mathbb 1$, where $|p\rangle$ are the eigenstates of the momentum operator:
\[ \rho(x, x', \beta) = \int dp\int dp'\,\langle x|p\rangle\left\langle p\left|\exp\left(-\beta\hat H\right)\right|p'\right\rangle\langle p'|x'\rangle = \frac{1}{2\pi\hbar}\int dp\,\exp\left(i(x - x')p/\hbar\right)\exp\left(-\frac{\beta}{2m}p^2\right). \tag{5.25} \]
Here we have used the expression of the momentum eigenstates in coordinate space $\langle x|p\rangle = \frac{1}{\sqrt{2\pi\hbar}}\exp(ixp/\hbar)$, and their orthogonality $\langle p|p'\rangle = \delta(p - p')$. In the last integral in eq. (5.25) we recognize the inverse Fourier transform of a Gaussian function. The Fourier transform $F(k)$ of the function $f(x) = \exp(-x^2/(4a^2))$ is again a Gaussian function:
\[ F(k) = 2a\sqrt\pi\exp\left(-a^2k^2\right). \tag{5.26} \]
Using this result in eq. (5.25) we obtain that the free-particle density matrix is a Gaussian propagator:
\[ \rho(x, x', \beta) = \sqrt{\frac{m}{2\pi\hbar^2\beta}}\exp\left[-\frac{m}{2\hbar^2\beta}(x - x')^2\right]. \tag{5.27} \]

5.1.3 Bose symmetry

The expression (5.19) for the partition function is not symmetrical under particle exchange, so it holds for distinguishable particles only. The correct expression for identical particles obeying Bose (Fermi) statistics should be symmetrical (anti-symmetrical) under particle exchange. A convenient way to symmetrize the density matrix (5.18) is to sum over all possible permutations of the particle labels in one of the two arguments:
\[ \rho_{\rm Bose}(R_1, R_2, \beta) = \frac{1}{N!}\sum_{\mathcal P}\rho(R_1, \mathcal PR_2, \beta), \tag{5.28} \]
where $\mathcal P$ is one of the $N!$ permutations of the particle labels; this means that $\mathcal PR = \left(\vec r_{p(1)}, \vec r_{p(2)}, \ldots, \vec r_{p(N)}\right)$, where $p(i)$, with $i = 1, 2, \ldots, N$, is the particle label in permutation with the $i$-th particle. If we trace the symmetrized density matrix eq. (5.28) we obtain the partition function for identical Bose particles:
\[ Z_{\rm Bose}(N, V, T) = \frac{1}{N!}\sum_{\mathcal P}\int\prod_{j=1}^M dR_j\;\prod_{j=1}^M\rho_{\rm free}(R_j, R_{j+1}, \delta\tau)\exp\left[-\delta\tau V(R_j)\right], \tag{5.29} \]
with the new boundary condition $R_{M+1} = \mathcal PR_1$. As a consequence of symmetrization the necklaces constituting the polymers are not closed on themselves. The last bead of the $i$-th world line, $\vec r_{iM}$, is connected to the first bead of the $p(i)$-th world line, $\vec r_{p(i)1}$. At low temperatures, where the thermal wave-length $\lambda_T$ is comparable to the average inter-particle distance, large permutation cycles form. These are responsible for macroscopic quantum phenomena such as superfluidity and Bose-Einstein condensation.
An exact evaluation of the $N!$ addends summed in eq. (5.29) quickly becomes unfeasible as $N$ increases. Fortunately, all terms are positive definite, so we can still arrange a Monte Carlo procedure for the evaluation of eq. (5.29). If we considered Fermi particles, an additional $+$ or $-$ sign would appear in front of each term, the former for even permutations, the latter for odd permutations. A Monte Carlo evaluation of the Fermi partition function would lead to an exponentially small signal to noise ratio going to small $T$ and large $N$. As a consequence of this sign problem the path-integral calculation becomes unfeasible unless one introduces some systematic approximations.

5.1.4 Path sampling methods

In this section we describe the Monte Carlo procedure used to sample path-integrals. One has to set up a random walk through configuration space. Let P(X, X') be the probability to jump from configuration X to X'. One can prove that if the transition matrix P(X, X') satisfies the detailed balance condition:

π(X) P(X, X') = π(X') P(X', X),  (5.30)

then the random walk samples points with probability π(X).

One very flexible algorithm that satisfies eq. (5.30) is the famous Metropolis algorithm. This algorithm is divided in two steps. The first is the proposal of a transition from point X to X' with an arbitrary probability T(X, X'). The second consists in an acceptance/rejection stage. The proposal is accepted with the probability defined by:

A(X, X') = min(1, χ(X, X')),  (5.31)

where

χ(X, X') = π(X') T(X', X) / (π(X) T(X, X')).  (5.32)

If, for example, we choose to displace one bead, say r^i_j, to another point, call it r'^i_j, that we sample uniformly from a sphere centered at the old position, then one has T(X', X) = T(X, X') by symmetry, and the probability to accept the move is determined by

χ(X, X') = exp[−((r^i_{j−1} − r'^i_j)² + (r'^i_j − r^i_{j+1})²)/(2Δτℏ²/m)] / exp[−((r^i_{j−1} − r^i_j)² + (r^i_j − r^i_{j+1})²)/(2Δτℏ²/m)] × exp[−Δτ (V(R'_j) − V(R_j))].  (5.33)
This type of single-bead move becomes extremely inefficient when the number of time slices M increases (critical slowing down), so one faces ergodicity problems. To increase efficiency one can implement a direct sampling of the kinetic-energy part of the probability distribution for one bead or for a larger piece of a world-line. There are several algorithms that permit drawing a free-particle path (see references). With this type of move, rejections are determined only by inter-particle interactions and/or external potentials.
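A minimal sketch of the single-bead Metropolis update of eqs. (5.31)-(5.33) for one particle in 1D (our own illustration; units ℏ = m = 1, and the potential V and all names are ours):

#include <cmath>
#include <random>
#include <vector>

// one Metropolis update of bead j of a single 1D world line,
// following eqs. (5.31)-(5.33); returns true if the move was accepted
bool update_bead(std::vector<double>& path, int j, double dtau,
                 double step, double (*V)(double), std::mt19937& rng)
{
    const int M = static_cast<int>(path.size());
    const int jm = (j + M - 1) % M, jp = (j + 1) % M; // periodic in imaginary time
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    std::uniform_real_distribution<double> uni01(0.0, 1.0);
    double xold = path[j];
    double xnew = xold + step * uni(rng);             // symmetric proposal
    auto spring = [&](double x) {                     // kinetic (spring) action
        return ((path[jm]-x)*(path[jm]-x) + (x-path[jp])*(x-path[jp]))
               / (2.0 * dtau);
    };
    double chi = std::exp(-(spring(xnew) - spring(xold))
                          - dtau * (V(xnew) - V(xold)));  // eq. (5.33)
    if (uni01(rng) < chi) { path[j] = xnew; return true; }
    return false;
}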

5.1.5 Calculating properties

The expectation value of any operator Ô associated to a physical observable can be written as a path integral in the following form:

⟨Ô⟩ = (1/N!) Σ_P ∫ O(X) π(X, P) dX.  (5.34)

The energy per particle E/N of a quantum many body system is the expectation value of the Hamiltonian operator Ĥ divided by the number of particles N. According to its thermodynamic definition we can also obtain E/N through a β-derivative of the partition function Z:

E(N,V,β)/N = −(1/NZ) ∂Z(N,V,β)/∂β.

If we apply this derivative to the symmetrized partition function defined in eq. (5.29) we obtain the following estimator for the energy per particle (called the thermodynamic estimator):

E_th/N = ⟨ dM/(2β) − (m/(2(ℏΔτ)² M N)) Σ_{j=1}^{M} (R_j − R_{j+1})² + (1/(M N)) Σ_{j=1}^{M} V(R_j) ⟩.  (5.35)
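As a sketch (ours; d = 1, N = 1, units ℏ = m = 1), the estimator (5.35) can be accumulated along the random walk as follows:

#include <vector>

// thermodynamic energy estimator, eq. (5.35), for one 1D particle,
// units hbar = m = 1; path holds the M bead positions
double energy_estimator(const std::vector<double>& path, double dtau,
                        double (*V)(double))
{
    const int M = static_cast<int>(path.size());
    const double beta = M * dtau;
    double spring = 0.0, pot = 0.0;
    for (int j = 0; j < M; ++j) {
        double d = path[j] - path[(j + 1) % M];
        spring += d * d;
        pot += V(path[j]);
    }
    return M / (2.0 * beta) - spring / (2.0 * dtau * dtau * M) + pot / M;
}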

5.1.6 Useful references

M. Creutz and B. Freedman, A Statistical Approach to Quantum Mechanics, Annals of Physics 132, 427 (1981).

A Java demonstration of path integral Monte Carlo by A. Santamaria can be found at http://fisteo12.ific.uv.es/santamar/qapplet/metro.html. Note that the parameters of the quartic potential can be adjusted interactively.

D. M. Ceperley, Reviews of Modern Physics 67, 279 (1995).

5.2 Diffusion Monte Carlo

Diffusion Monte Carlo (DMC) is a tool to study the ground-state properties of quantum systems. This means that using DMC one can simulate many-body systems at zero temperature. When applied to bosons, DMC provides the exact result for the ground-state energy and for other diagonal properties. By introducing some approximation, one can also treat fermionic systems. One approximation which has proven to be reliable is the so-called fixed-node approximation. Similarly, one can extend DMC to study excited states.

DMC is based on the solution of the time-dependent Schrödinger equation written in imaginary time:

−∂ψ(R,τ)/∂τ = Ĥ ψ(R,τ),  (5.36)

where τ = it/ℏ. The formal solution of eq. (5.36) is:

ψ(R,τ) = exp(−Ĥτ) ψ(R,0).  (5.37)

Let us expand ψ(R,τ) on the basis of the eigenstates φ_n:

ψ(R,τ) = Σ_{n=0}^{∞} c_n φ_n(R,τ) = Σ_{n=0}^{∞} c_n φ_n(R) exp(−E_n τ).  (5.38)

The states φ_n are the solutions of the time-independent Schrödinger equation Ĥφ_n = E_n φ_n with eigenvalues E_n. We order them in such a way that E_n monotonically increases with the quantum number n. In the long-time limit eq. (5.38) reduces to:

ψ(R,τ) ≅ c_0 φ_0(R) exp(−E_0 τ).  (5.39)

In other words, the contribution of the ground state dominates the sum in eq. (5.38). States with n ≠ 0 decay exponentially faster. In the following we will see that by introducing an energy shift we can obtain a normalized wave function. In the case of Bose systems at zero temperature one can assume, without loss of generality, that φ_0(R) is real and positive definite.¹ Fermi systems and excited states of bosons will be addressed in subsection 5.2.2.

¹ If a magnetic field is present the wave function must also have an imaginary part.

Let us introduce the Green's function of eq. (5.36):

G(R, R', τ) = ⟨R| exp(−Ĥτ) |R'⟩.  (5.40)

Notice that G(R, R', τ) is equal to the thermal density matrix (5.6). The Green's function permits writing eq. (5.36) in the integral form:

ψ(R,τ) = ∫ G(R, R', τ) ψ(R', 0) dR'.  (5.41)

This integral equation may be interpreted as a diffusion process guided by (R, R , )


from the initial state (R , 0) to the final state (R, ) at time .
The evolution during the long time interval can be generated repeating a large number of short time-steps . In the limit 0, one can make use of the primitive
approximation (see section 5.1):
#
"
 m dN/2
(R1 R2 )2
(R1 , R3 , )
exp [ V (R2 )] (R2 R3 ) . (5.42)
exp
2~2
2~2 /m
In a DMC simulation, one treats (R, ) as the density distribution of a large ensemble
of equivalent copies of the many-body system, usually called walkers. The simulation
starts with an arbitrary initial distribution. The population of walkers diffuses according to the Greens function (5.42). The first term corresponds to a free-particle
diffusion, which can be implemented by adding to R1 a vector whose components are
sampled from a gaussian distribution. The second term in eq. (5.42), instead, does not
cause displacement of particles. It only determines a change in the probability density.
This effect, usually called branching, can be implemented by allowing variations in the
number of walkers. We have to assign to each walker a number of descendant nd proportional to the weight exp [ (V (R2 ) E)]. Notice that we have included the energy
shift E, which serves to normalize the density distribution. One could simply set nd to
be equal to the integer number which is closest to w. However, this discretization of
the weight w would result in a considerable loss of information. A much more efficient
procedure is obtained by calculating nd according to the following rule:
nd = int(w + ),
1

If a magnetic field is present the wave function must have an imaginary part also.

36

(5.43)

where η is a uniform random variable in the range [0,1], and the function int() takes the integer part of the argument. In this way one makes use of the full information contained in the weight w. If n_d > 1, one has to create n_d − 1 identical copies of the walker and include them in the total population. If n_d = 0, one has to erase the current walker from the population. The parameter E acts as a normalization factor. It must be adjusted during the simulation in order to maintain the total number of walkers close to an average value, call it n_ave. This is an algorithm parameter which has to be optimized: for small values of n_ave one has systematic deviations from the exact results, while large values of n_ave result in computationally demanding simulations.

If we generate a long random walk performing sequentially the two types of update that we have described, the asymptotic distribution ψ(R, τ → ∞) converges to the exact ground state φ_0(R).
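A minimal sketch of the branching rule (5.43) (our own illustration; walkers reduced to 1D positions, and all names ours):

#include <cmath>
#include <random>
#include <vector>

// one branching step, eq. (5.43): each walker is replicated or killed
// according to its weight w = exp(-dtau*(V(R) - E))
void branch(std::vector<double>& walkers, double dtau, double E,
            double (*V)(double), std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni01(0.0, 1.0);
    std::vector<double> next;
    for (double x : walkers) {
        double w = std::exp(-dtau * (V(x) - E));
        int nd = static_cast<int>(w + uni01(rng));   // n_d = int(w + eta)
        for (int k = 0; k < nd; ++k) next.push_back(x);
    }
    walkers.swap(next);
}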

5.2.1 Importance Sampling

The algorithm described in the previous subsection is extremely inefficient for large particle numbers, especially if the inter-particle interaction is not smooth. The efficiency can be enormously enhanced by using the importance sampling technique. To implement this method one has to design a trial wave function, call it ψ_T, that approximately describes the exact ground state φ_0. For example, in the case of homogeneous liquids or gases an accurate approximation of the ground state is given by the Jastrow wave function:

ψ_J(R) = ∏_{i<j} f_2(|r_i − r_j|),  (5.44)

where the function f_2(r) describes the direct correlation between particles i and j. In dilute systems, like ultracold gases, it can be set equal to the solution of the two-body problem for the relative motion of the pair.
One then solves the modified Schrödinger equation (in imaginary time) for the product f(R,τ) = ψ_T(R) ψ(R,τ):

−∂f(R,τ)/∂τ = −(ℏ²/2m) ∇²f(R,τ) + (ℏ²/2m) ∇·(F f(R,τ)) + (E_loc(R) − E) f(R,τ),  (5.45)

where F = 2∇ψ_T(R)/ψ_T(R) is called the pseudo-force, and the local energy E_loc(R) is defined by:

E_loc(R) = Ĥψ_T(R) / ψ_T(R).  (5.46)

The function f(R,τ) is interpreted as the density distribution of the population of walkers. In the long-time limit it converges to the product ψ_T(R) φ_0(R). It is easy to see that the average of the local energy (5.46) is equal to the ground-state energy. Instead, for observable operators that do not commute with the Hamiltonian, one obtains the mixed estimator ⟨φ_0|Ô|ψ_T⟩ / ⟨φ_0|ψ_T⟩.²

² For diagonal operators, one can implement exact estimators using the forward walking technique (see references).

The diffusion process that solves eq. (5.45) is similar to the one described above. The free-particle diffusion must be implemented in the same way. Between this free-particle diffusion and the branching term, one must introduce an additional update which consists in a drift of the particle coordinates guided by the pseudo-force F:

R_2 = R_1 + Δτ (ℏ²/2m) F(R_1).  (5.47)

This drift guides the walkers into regions of high probability. The branching term has to be implemented similarly to what was described before, but substituting the local energy E_loc(R) for the bare potential V(R). In fact, with an accurate choice of the trial wave function ψ_T, the local energy has small fluctuations. This makes it possible to stabilize the population of walkers, which, if no importance sampling were implemented, would instead oscillate widely, rendering the simulation unfeasible.
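One drift-plus-diffusion step then looks as follows (a sketch of eqs. (5.42) and (5.47) in 1D with ℏ = m = 1; psiT, dpsiT and all names are ours):

#include <cmath>
#include <random>

// one drift + diffusion step in 1D with hbar = m = 1;
// psiT and dpsiT (its derivative) define the trial wave function
double drift_diffuse(double x, double dtau,
                     double (*psiT)(double), double (*dpsiT)(double),
                     std::mt19937& rng)
{
    double F = 2.0 * dpsiT(x) / psiT(x);          // pseudo-force
    std::normal_distribution<double> gauss(0.0, std::sqrt(dtau));
    return x + 0.5 * dtau * F + gauss(rng);       // drift, eq. (5.47), + diffusion
}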

5.2.2 Fixed-Node Diffusion Monte Carlo

The conclusion that the DMC algorithm samples, after long times, a density distribution proportional to the exact ground state φ_0 is based on the hypothesis that φ_0 and ψ_T are not orthogonal. If, instead, they are orthogonal, the asymptotic distribution is proportional to the lowest excited state φ_1 not orthogonal to ψ_T. This property is often used to simulate excited states of bosons or the ground state of fermions, which can be considered as the first fully antisymmetric eigenstate of the Hamiltonian. Having to deal with non-positive definite wave functions introduces the well known sign problem. Several procedures exist to circumvent this pathology. Here we describe the fixed-node approximation. This approximation consists in forcing the ground state of the Fermi system ψ_F to have the same nodal structure as the trial wave function. It is evident that, if ψ_F and ψ_T change sign together, the probability distribution is always positive. It can be proven that the fixed-node constraint provides an upper bound to the ground-state energy of fermions. In particular, if the nodes of ψ_T were exact, the FNDMC would provide the exact ground-state energy. In a DMC simulation, the nodal constraint on ψ_F corresponds to forcing the walkers not to cross the nodal surface.

Just to show an example, we describe now one type of antisymmetric trial wave function which has proven to capture the essential properties of several Fermi systems in the homogeneous normal phase. This is the so-called Jastrow-Slater wave function. If we consider a spin-polarized system (all fermions have the same spin-projection) in a box of size L with periodic boundary conditions, the Jastrow-Slater wave function ψ_JS is the product of a Jastrow factor (5.44) and a Slater determinant of plane waves:

ψ_JS(R) = ψ_J(R) Det_{α,n}[exp(i k_α · r_n)],  (5.48)

where the index n = 1, ..., N labels the particles and the k_α are the wave vectors compatible with periodic boundary conditions.

Techniques to go beyond the fixed-node approximation exist, but they have not proven to be robust. The sign problem has to be considered still unsolved.

5.2.3 Useful references

J. Boronat, in Microscopic Approaches to Quantum Liquids in Confined Geometries, chapter 2, ed. by E. Krotscheck and J. Navarro, World Scientific (2002).

B. L. Hammond, W. A. Lester and P. J. Reynolds, Monte Carlo Methods in Ab Initio Quantum Chemistry, World Scientific (1994).

I. Kosztin, B. Faber and K. Schulten, Introduction to the Diffusion Monte Carlo Method, arXiv:physics/9702023.

M. H. Kalos and P. A. Whitlock, Monte Carlo Methods, Wiley (1986).

Chapter 6

Electronic structure of molecules and atoms

6.1 Introduction

In this chapter we will discuss arguably the most important quantum many body problem: the electronic structure problem, relevant for almost all properties of matter encountered in our daily life. With O(10²³) atoms in a typical piece of matter, the exponential scaling of the Hilbert space dimension with the number of particles is a nightmare. We will first discuss the exact solution by exact diagonalization of simplified effective models, and then approximate methods that reduce the problem to a polynomial one, typically scaling like O(N⁴), and even O(N) in modern codes that aim for a sparse matrix structure. These methods map the problem to a single-particle problem and work only as long as correlations between electrons are weak.

This enormous reduction in complexity is, however, paid for by a crude approximation of electron correlation effects. This is acceptable for normal metals, band insulators and semiconductors, but fails in materials with strong electron correlations, such as almost all transition metal compounds.

6.2 The electronic structure problem

For most atoms (with the notable exception of hydrogen and helium, which are so light that quantum effects are important in daily life), the nuclei are so much heavier than the electrons that we can view them as classical particles and consider them as stationary for the purpose of calculating the properties of the electrons. Using this Born-Oppenheimer approximation the Hamiltonian operator for the electrons becomes

H = Σ_{i=1}^{N} (−(ℏ²/2m)∇_i² + V(r_i)) + e² Σ_{i<j} 1/|r_i − r_j|,  (6.1)

where the potential of the M atomic nuclei with charges Z_i at the locations R_i is given by

V(r) = −e² Σ_{i=1}^{M} Z_i/|R_i − r|.  (6.2)

The Car-Parrinello method for molecular dynamics, which we will discuss later, moves the nuclei classically according to electronic forces that are calculated quantum mechanically.
Using a basis set of L orbital wave functions {f_i}, the matrix elements of the Hamilton operator (6.1) are

t_ij = ∫d³r f_i*(r) (−(ℏ²/2m)∇² + V(r)) f_j(r),  (6.3)

V_ijkl = e² ∫d³r ∫d³r' f_i*(r) f_j(r) (1/|r − r'|) f_k*(r') f_l(r'),  (6.4)

and the Hamilton operator can be written in second quantized notation as

H = Σ_{ij,σ} t_ij a†_{iσ} a_{jσ} + (1/2) Σ_{ijkl,σσ'} V_ijkl a†_{iσ} a†_{kσ'} a_{lσ'} a_{jσ}.  (6.5)

6.3 Basis functions

Before attempting to solve the many body problem we will discuss basis sets for single-particle wave functions.

6.3.1 The electron gas

For the free electron gas with Hamilton operator

H = Σ_{i=1}^{N} −(ℏ²/2m)∇_i² + e² Σ_{i<j} v_ee(r_i, r_j),  (6.6)

v_ee(r, r') = 1/|r − r'|,  (6.7)

the ideal choice of basis functions are plane waves

φ_k(r) = exp(i k·r).  (6.8)

Such plane wave basis functions are also commonly used for band structure calculations of periodic crystals.

At low temperatures the electron gas forms a Wigner crystal. Then a better choice of basis functions are the eigenfunctions of harmonic oscillators centered around the classical equilibrium positions.

6.3.2 Atoms and molecules

Which functions should be used as basis functions for atoms and molecules? We can let ourselves be guided by the exact solution of the hydrogen atom and use the so-called Slater-type orbitals (STO):

f_{inlm}(r, θ, φ) ∝ r^{n−1} e^{−ζ_i r} Y_{lm}(θ, φ).  (6.9)

These wave functions have the correct asymptotic radial dependence and the correct angular dependence. The values ζ_i are optimized so that the eigenstates of isolated atoms are reproduced as accurately as possible.

The main disadvantage of the STOs becomes apparent when trying to evaluate the matrix elements in equation (6.4) for basis functions centered around two different nuclei at positions R_A and R_B. There we have to evaluate integrals containing terms like

(1/|r − r'|) e^{−ζ_i|r − R_A|} e^{−ζ_j|r − R_B|},  (6.10)

which cannot be solved in any closed form.
The Gauss-type orbitals (GTO)

f_{ilmn}(r) ∝ x^l y^m z^n e^{−ζ_i r²}  (6.11)

simplify the evaluation of matrix elements, as Gaussian functions can be integrated easily and the product of Gaussian functions centered at two different nuclei is again a single Gaussian function:

e^{−ζ_i|r−R_A|²} e^{−ζ_j|r−R_B|²} = K e^{−ζ|r−R̄|²}  (6.12)

with

K = e^{−(ζ_i ζ_j/(ζ_i+ζ_j)) |R_A−R_B|²},  (6.13)

ζ = ζ_i + ζ_j,  (6.14)

R̄ = (ζ_i R_A + ζ_j R_B)/(ζ_i + ζ_j).  (6.15)

Also the term 1/|r − r'| can be rewritten as an integral over a Gaussian function:

1/|r − r'| = (2/√π) ∫_0^∞ dt e^{−t²(r−r')²},  (6.16)

and thus all the integrals (6.4) reduce to purely Gaussian integrals which can be performed analytically.

As there are O(L⁴) integrals of the type (6.4), quantum chemistry calculations typically scale as O(N⁴). Modern methods can reduce the effort to an approximately O(N) method, since the overlap of basis functions at large distances becomes negligibly small.

Independent of whether one chooses STOs or GTOs, extra care must be taken to account for the non-orthogonality of these basis functions.
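As a quick check of the product rule, the following self-contained snippet (ours; 1D for simplicity, with arbitrary parameters) verifies eqs. (6.12)-(6.15) numerically:

#include <cmath>
#include <cstdio>

// verify the Gaussian product theorem, eqs. (6.12)-(6.15), in 1D
int main() {
    double zi = 0.7, zj = 1.3, RA = -0.4, RB = 0.9, r = 0.25;
    double lhs = std::exp(-zi*(r-RA)*(r-RA)) * std::exp(-zj*(r-RB)*(r-RB));
    double K = std::exp(-zi*zj/(zi+zj)*(RA-RB)*(RA-RB)); // eq. (6.13)
    double z = zi + zj;                                  // eq. (6.14)
    double R = (zi*RA + zj*RB)/(zi+zj);                  // eq. (6.15)
    double rhs = K * std::exp(-z*(r-R)*(r-R));
    std::printf("%.12f %.12f\n", lhs, rhs);  // the two values agree
}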

6.4 Pseudo-potentials

The electrons in the inner, fully occupied shells do not contribute to chemical bonding. To simplify the calculations they can be replaced by pseudo-potentials modeling the inner shells. Only the outer shells (including the valence shells) are then modeled using basis functions. The pseudo-potentials are chosen such that calculations for isolated atoms are as accurate as possible.

6.5 Effective models

To understand the properties of these materials the Hamilton operator of the full quantum chemical problem (6.1) is usually simplified to effective models, which still contain the same important features, but which are easier to investigate. They can be used to understand the physics of these materials, but not directly to quantitatively fit experimental measurements.

6.5.1 The tight-binding model

The simplest model is the tight-binding model, which concentrates on the valence bands. All matrix elements t_ij in equation (6.3), apart from the ones between nearest neighbor atoms, are set to zero. The others are simplified, as in:

H = Σ_{⟨i,j⟩,σ} (t_ij c†_{iσ} c_{jσ} + H.c.).  (6.17)

This model is easily solvable by Fourier transforming it, as there are no interactions.

6.5.2 The Hubbard model

To include the effects of electron correlations, the Hubbard model includes only the often dominant intra-orbital repulsion V_iiii of the V_ijkl in equation (6.4):

H = Σ_{⟨i,j⟩,σ} (t_ij c†_{iσ} c_{jσ} + H.c.) + Σ_i U_i n_{i↑} n_{i↓}.  (6.18)

The Hubbard model is a long-studied model for correlated electron systems, but except for the 1D case it is still not completely understood.

In contrast to band insulators, which are insulators because all bands are either completely filled or empty, the Hubbard model at large U is insulating at half filling, when there is one electron per orbital. The reason is the strong Coulomb repulsion U between the electrons, which prohibits any electron movement in the half filled case at low temperatures.

6.5.3 The Heisenberg model

In this insulating state the Hubbard model can be simplified to a quantum Heisenberg model, containing exactly one spin per site:

H = Σ_{⟨i,j⟩} J_ij S_i · S_j.  (6.19)

For large U/t the perturbation expansion gives J_ij = 2t_ij²(1/U_i + 1/U_j). The Heisenberg model is the relevant effective model at temperatures T ≪ t_ij, U (about 10⁴ K in copper oxides). The derivation will be shown in the lecture.

6.5.4 The t-J model

The t-J model is the effective model for large U at low temperatures away from half-filling. Its Hamiltonian is

H = Σ_{⟨i,j⟩,σ} [(1 − n_{i,−σ}) t_ij c†_{iσ} c_{jσ} (1 − n_{j,−σ}) + H.c.] + Σ_{⟨i,j⟩} J_ij (S_i · S_j − n_i n_j/4).  (6.20)

As double occupancy is prohibited in the t-J model there are only three instead of four states per orbital, greatly reducing the Hilbert space size.

6.6 Exact diagonalization

The most accurate method is exact diagonalization of the Hamiltonian matrix using the Lanczos algorithm, discussed in appendix A.2. The size of the Hilbert space of an N-site system (4^N for a Hubbard model, 3^N for a t-J model and (2S+1)^N for a spin-S model) can be reduced by making use of symmetries. Translational symmetries can be employed by using Bloch waves with fixed momentum as basis states. Conservation of particle number and spin allows one to restrict a calculation to subspaces of fixed particle number and magnetization.

As an example we will sketch how to implement exact diagonalization for a simple one-dimensional spinless fermion model with nearest neighbor hopping t and nearest neighbor repulsion V:

H = −t Σ_{i=1}^{L−1} (c†_i c_{i+1} + H.c.) + V Σ_{i=1}^{L−1} n_i n_{i+1}.  (6.21)

The first step is to construct a basis set. We describe a basis state using multi-bit coding. A many-body state of fermions can be represented as an unsigned integer where bit i set to one corresponds to an occupied site i. For spinful fermions we take either two integers, one for the up and one for the down spins, or two bits per site.

As the Hamiltonian conserves the total particle number we thus want to construct a basis of all states with N particles on L sites (or N bits set to one in L bits). In the code fragment below we use the following variables:

states is a vector storing the integers whose bit patterns correspond to the basis states. It can be accessed using the following functions:

  dimension() returns the number of basis states.
  state(i) returns the i-th basis state, where i runs from 0 to dimension()-1.

index is a much larger vector of size 2^L. It is used to obtain the number of a state in the basis, given the integer representation of the state. It can be accessed using the function

  index(s), which returns the index i of the state in the basis, or the largest integer to denote an invalid state if the bit pattern of the integer does not correspond to a basis state.

Since this vector is very large, it will limit the size of system that can be studied. To save space, the index array could be omitted and the index(s) function implemented by a binary search on the states array.
Here is the C++ code for this class:

#include <vector>
#include <alps/bitops.h>
#include <limits>
#include <valarray>
#include <cassert>

class FermionBasis {
public:
  typedef unsigned int state_type;
  typedef unsigned int index_type;

  FermionBasis(int L, int N);

  state_type state(index_type i) const { return states_[i]; }
  index_type index(state_type s) const { return index_[s]; }
  unsigned int dimension() const { return states_.size(); }

private:
  std::vector<state_type> states_;
  std::vector<index_type> index_;
};
In the constructor we build the basis states. For N spinless fermions on L sites the valid basis states are all the ways to place N particles on L sites, which is equivalent to all integers between 0 and 2^L − 1 that have N bits set. The constructor uses the alps::popcnt function of the ALPS library.

FermionBasis::FermionBasis(int L, int N)
{
  index_.resize(1<<L); // 2^L entries
  for (state_type s=0; s<index_.size(); ++s)
    if (alps::popcnt(s)==N) {
      // correct number of particles
      states_.push_back(s);
      index_[s] = states_.size()-1;
    }
    else
      // invalid state
      index_[s] = std::numeric_limits<index_type>::max();
}
Next we want to implement a matrix-vector multiplication v = Hw for our Hamiltonian and derive a Hamiltonian class. We do not want to store the matrix at all, neither in dense nor in sparse form, but instead implement a fast function to perform the matrix-vector multiplication on-the-fly.

class HamiltonianMatrix : public FermionBasis {
public:
  HamiltonianMatrix(int L, int N, double t, double V)
   : FermionBasis(L,N), t_(t), V_(V), L_(L) {}

  void multiply(std::valarray<double>& v, const std::valarray<double>& w);

private:
  double t_, V_;
  int L_;
};
Finally we show the implementation of the matrix-vector multiplication. It might look like magic but we will explain it all in detail during the lecture.

void HamiltonianMatrix::multiply(std::valarray<double>& v,
                                 const std::valarray<double>& w)
{
  // check dimensions
  assert(v.size()==dimension());
  assert(w.size()==dimension());

  // do the V-term
  for (index_type i=0; i<dimension(); ++i) {
    state_type s = state(i);
    // count the number of pairs of neighboring fermions
    v[i] = w[i]*V_*alps::popcnt(s&(s>>1));
  }

  // do the t-term
  for (index_type i=0; i<dimension(); ++i) {
    state_type s = state(i);
    // hops inside the chain
    for (int r=0; r<L_-1; ++r) {
      state_type shop = s^(3<<r);   // exchange the two neighboring sites r and r+1
      index_type idx = index(shop); // get the index
      if (idx!=std::numeric_limits<index_type>::max())
        v[idx] += -t_*w[i];
    }
    // hop across the boundary (periodic boundary conditions)
    state_type shop = s^(1|(1<<(L_-1))); // exchange the first and last site
    index_type idx = index(shop);        // get the index
    if (idx!=std::numeric_limits<index_type>::max())
      // watch out for the Fermi sign since we hop over some particles
      v[idx] += -t_*(alps::popcnt(s&((1<<(L_-1))-1))%2==0 ? 1 : -1)*w[i];
  }
}
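A minimal usage sketch (ours, with arbitrary parameters, assuming the class definitions above are in scope); a Lanczos routine would call multiply() in exactly the same way:

#include <iostream>

int main()
{
    HamiltonianMatrix H(8, 4, 1.0, 2.0); // L=8 sites, N=4 fermions, t=1, V=2
    std::valarray<double> w(1.0, H.dimension()), v(0.0, H.dimension());
    H.multiply(v, w);                    // v = H w
    std::cout << "dimension = " << H.dimension() << "\n";
}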
This class can now be used with the Lanczos algorithm to calculate the energies and
wave functions of the low lying states of the Hamiltonian.
In production codes one uses all symmetries to reduce the dimension of the Hilbert space as much as possible. In this example translational symmetry can be used if periodic boundary conditions are applied. The implementation then gets much harder.
In order to make the implementation of exact diagonalization much easier we have generalized the expression templates technique, developed by Todd Veldhuizen for array expressions, to expressions including quantum operators. Using this expression template library we can write a multiplication

|φ⟩ = H|ψ⟩ = (−t Σ_{i=1}^{L−1} (c†_i c_{i+1} + H.c.) + V Σ_{i=1}^{L−1} n_i n_{i+1}) |ψ⟩  (6.22)

simply as:

Range i(1,L-1);
psi = sum(i,(-t*(cdag(i)*c(i+1)+HermitianConjugate)+V*n(i)*n(i+1))*phi);
The advantage of the above on-the-fly calculation of the matrix in the multiplication routine is that the matrix need not be stored in memory, which is an advantage for the biggest systems where just a few vectors of the Hilbert space will fit into memory.

If one is less demanding and wants to simulate a slightly smaller system, where the (sparse) matrix can be stored in memory, then a less efficient but more flexible function can be used to create the matrix and store it in memory. Such a program is available through the ALPS project at http://alps.comp-phys.org/. It allows one to perform the above calculation just by describing the lattice and model in an XML input file.

6.7 The Hartree-Fock method

6.7.1 The Hartree-Fock approximation

The Hartree-Fock approximation is based on the assumption of independent electrons. It starts from an ansatz for the N-particle wave function as a Slater determinant of N single-particle wave functions:

Φ(r_1, σ_1; ...; r_N, σ_N) = (1/√N!) det[ φ_ν(r_i, σ_i) ]_{i,ν=1,...,N},  (6.23)

i.e., the determinant of the N×N matrix with entries φ_ν(r_i, σ_i). The orthogonal single-particle wave functions φ_ν are chosen so that the energy is minimized.

For numerical calculations a finite basis has to be introduced, as discussed in the previous section. Quantum chemists distinguish between the self-consistent-field (SCF) approximation in a finite basis set and the Hartree-Fock (HF) limit, working in a complete basis. In physics both are known as the Hartree-Fock approximation.

6.7.2 The Hartree-Fock equations in nonorthogonal basis sets

It will be easiest to perform the derivation of the Hartree-Fock equations in second quantized notation. To simplify the discussion we assume closed-shell conditions, where each orbital is occupied by both an electron with spin ↑ and one with spin ↓. We start by writing the Hartree-Fock wave function (6.23) in second quantized form:

|Φ⟩ = ∏_{μ,σ} c†_{μσ} |0⟩,  (6.24)

where c†_{μσ} creates an electron in the orbital φ_μ(r, σ). As these wave functions are orthogonal the c†_{μσ} satisfy the usual fermion anticommutation relations. Greek subscripts refer to the Hartree-Fock single-particle orbitals and roman subscripts to the single-particle basis functions. Next we expand the c†_{μσ} in terms of the creation operators a†_{nσ} of our finite basis set:

c†_{μσ} = Σ_{n=1}^{L} d_{nμ} a†_{nσ}  (6.25)

and find that

a_{jσ}|Φ⟩ = Σ_μ (±) d_{jμ} ∏_{(ν,σ')≠(μ,σ)} c†_{νσ'} |0⟩.  (6.26)

In order to evaluate the matrix elements ⟨Φ|H|Φ⟩ of the Hamiltonian (6.5) we introduce the bond-order matrix

P_ij = Σ_σ ⟨Φ| a†_{iσ} a_{jσ} |Φ⟩ = 2 Σ_μ d*_{iμ} d_{jμ},  (6.27)

where we have made use of the closed-shell conditions to sum over the spin degrees of freedom. The kinetic term of H is now simply Σ_{ij} P_ij t_ij. Next we rewrite the interaction part ⟨Φ| a†_{iσ} a†_{kσ'} a_{lσ'} a_{jσ} |Φ⟩ in terms of the P_ij. We find for σ = σ':

⟨Φ| a†_{iσ} a†_{kσ} a_{lσ} a_{jσ} |Φ⟩ = ⟨Φ| a†_{iσ} a_{jσ} |Φ⟩ ⟨Φ| a†_{kσ} a_{lσ} |Φ⟩ − ⟨Φ| a†_{iσ} a_{lσ} |Φ⟩ ⟨Φ| a†_{kσ} a_{jσ} |Φ⟩,  (6.28)

and for σ ≠ σ':

⟨Φ| a†_{iσ} a†_{kσ'} a_{lσ'} a_{jσ} |Φ⟩ = ⟨Φ| a†_{iσ} a_{jσ} |Φ⟩ ⟨Φ| a†_{kσ'} a_{lσ'} |Φ⟩.  (6.29)
Then the energy is (again summing over the spin degrees of freedom):

E_0 = Σ_{ij} t_ij P_ij + (1/2) Σ_{ijkl} (V_ijkl − (1/2)V_ilkj) P_ij P_kl.  (6.30)

We now need to minimize the energy E_0 under the condition that the |φ_μ⟩ are normalized:

1 = ⟨φ_μ|φ_μ⟩ = Σ_{i,j} d*_{iμ} d_{jμ} S_ij,  (6.31)

where S_ij = ∫d³r f_i*(r) f_j(r) is the overlap matrix of the basis functions. Using Lagrange multipliers ε_μ to enforce this constraint we have to minimize

Σ_{ij} t_ij P_ij + (1/2) Σ_{ijkl} (V_ijkl − (1/2)V_ilkj) P_ij P_kl − Σ_μ ε_μ (Σ_{i,j} d*_{iμ} d_{jμ} S_ij − 1).  (6.32)

Setting the derivative with respect to d*_{iμ} to zero we end up with the Hartree-Fock equations for a finite basis set:

Σ_{j=1}^{L} (f_ij − ε_μ S_ij) d_{jμ} = 0,  (6.33)

where

f_ij = t_ij + Σ_{kl} (V_ijkl − (1/2)V_ilkj) P_kl.  (6.34)

This is again a generalized eigenvalue problem of the form Ax = λBx and looks like a one-particle Schrödinger equation. However, since the potential depends on the solution, it is a nonlinear and not a linear eigenvalue problem. The equation is solved iteratively, always using the new solution for the potential, until convergence to a fixed point is achieved.

The eigenvalues ε_μ of f do not directly correspond to energies of the orbitals, as the Fock operator counts the V-terms twice. Thus we obtain the total ground state energy from the Fock operator eigenvalues by subtracting the double counted part:

E_0 = Σ_{μ=1}^{N} ε_μ − (1/2) Σ_{ijkl} (V_ijkl − (1/2)V_ilkj) P_ij P_kl.  (6.35)
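To make the fixed-point iteration concrete, here is a small self-consistency loop (our own sketch, not the production approach of quantum chemistry codes; it assumes the Eigen library for the generalized eigensolver and uses arbitrary toy values for t, S and V):

#include <Eigen/Dense>
#include <iostream>
#include <vector>

// SCF iteration for eqs. (6.33)-(6.34):
// build the Fock matrix from P, solve f d = eps S d, rebuild P
int main()
{
    const int L = 2, Nel = 2;            // toy basis size and electron number
    Eigen::MatrixXd t(L,L), S(L,L), P = Eigen::MatrixXd::Zero(L,L);
    std::vector<double> V(L*L*L*L, 0.0); // V[((i*L+j)*L+k)*L+l] = V_ijkl
    t << -1.0, -0.2, -0.2, -1.0;         // arbitrary illustrative numbers
    S <<  1.0,  0.1,  0.1,  1.0;
    V[((0*L+0)*L+0)*L+0] = V[((1*L+1)*L+1)*L+1] = 0.5;

    for (int iter = 0; iter < 100; ++iter) {
        Eigen::MatrixXd f = t;           // Fock matrix, eq. (6.34)
        for (int i=0;i<L;++i) for (int j=0;j<L;++j)
          for (int k=0;k<L;++k) for (int l=0;l<L;++l)
            f(i,j) += (V[((i*L+j)*L+k)*L+l] - 0.5*V[((i*L+l)*L+k)*L+j]) * P(k,l);
        // generalized eigenproblem f d = eps S d, eq. (6.33)
        Eigen::GeneralizedSelfAdjointEigenSolver<Eigen::MatrixXd> es(f, S);
        // rebuild the bond-order matrix from the lowest N/2 orbitals, eq. (6.27)
        Eigen::MatrixXd Pnew = Eigen::MatrixXd::Zero(L,L);
        for (int mu = 0; mu < Nel/2; ++mu)
            Pnew += 2.0 * es.eigenvectors().col(mu)
                        * es.eigenvectors().col(mu).transpose();
        if ((Pnew - P).norm() < 1e-12) { P = Pnew; break; }
        P = Pnew;
    }
    std::cout << "converged P:\n" << P << "\n";
}

Note that the eigenvectors returned by this solver are normalized with respect to S, exactly the constraint (6.31).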

6.7.3 Configuration-Interaction

The approximations used in Hartree-Fock and density functional methods are based on non-interacting electron pictures. They do not treat correlations and interactions between electrons correctly. To improve these methods, and to allow the calculation of excited states, often the configuration-interaction (CI) method is used.

Starting from the Hartree-Fock ground state

|ψ_HF⟩ = ∏_{μ=1}^{N} c†_μ |0⟩,  (6.36)

one or two of the c†_μ are replaced by other orbitals c†_i:

|ψ_0⟩ = (1 + Σ_{i,μ} α_{iμ} c†_i c_μ + Σ_{i<j, μ<ν} α_{ijμν} c†_i c†_j c_μ c_ν) |ψ_HF⟩.  (6.37)

The energies are then minimized using this variational ansatz. In a problem with N occupied and M empty orbitals this leads to a matrix eigenvalue problem of dimension 1 + NM + N²M². Using the Lanczos algorithm the low lying eigenstates can then be calculated in O((N+M)²) steps.

Further improvements are possible by allowing more than only double substitutions. The optimal method treats the full quantum problem of dimension (N+M)!/(N!M!). Quantum chemists call this method full-CI. Physicists simplify the Hamilton operator slightly to obtain simpler models with fewer matrix elements, and call that method exact diagonalization. This method will be discussed later in the course.

6.8 Density functional theory

Another commonly used method, for which the Nobel prize in chemistry was awarded to Walter Kohn, is density functional theory. In density functional theory the many-body wave function living in R^{3N} is replaced by the electron density, which lives just in R³. Density functional theory thus reduces the many body problem to an effective one-body problem. In contrast to Hartree-Fock theory it has the advantage that it could in principle be exact, if there were not the "small" problem of the unknown exchange-correlation functional.

It is based on two fundamental theorems by Hohenberg and Kohn. The first theorem states that the ground state energy E_0 of an electronic system in an external potential V is a functional of the electron density ρ(r):

E_0 = E[ρ] = ∫d³r V(r) ρ(r) + F[ρ],  (6.38)

with a universal functional F. The second theorem states that the density of the ground state wave function minimizes this functional. The proof of both theorems will be shown in the lecture.

These theorems make our life very easy: we only have to minimize the energy functional and we obtain both the ground state energy and the electron density in the ground state, and everything is exact!
The problem is that, while the functional F is universal, it is also unknown! Thus we need to find good approximations for the functional. One usually starts from the ansatz:

F[ρ] = E_h[ρ] + E_k[ρ] + E_xc[ρ].  (6.39)

The Hartree term E_h is given by the Coulomb repulsion between two electrons:

E_h[ρ] = (e²/2) ∫d³r d³r' ρ(r)ρ(r')/|r − r'|.  (6.40)

The kinetic energy E_k[ρ] is that of a non-interacting electron gas with the same density. The exchange and correlation term E_xc[ρ] contains the remaining unknown contribution, which we will discuss a bit later.

To calculate the ground state density we have to minimize this energy, solving the variational problem

0 = δE[ρ] = ∫d³r δρ(r) [V(r) + e² ∫d³r' ρ(r')/|r − r'| + δE_k[ρ]/δρ(r) + δE_xc[ρ]/δρ(r)],  (6.41)

subject to the constraint that the total electron number is conserved:

∫d³r δρ(r) = 0.  (6.42)
Comparing this variational equation to the one for a noninteracting system,

(−(ℏ²/2m)∇² + V_eff(r)) φ(r) = ε φ(r),  (6.43)

we realize that they are the same if we define the potential of the non-interacting system as

V_eff(r) = V(r) + e² ∫d³r' ρ(r')/|r − r'| + v_xc(r),  (6.44)

where the exchange-correlation potential is defined by

v_xc(r) = δE_xc[ρ]/δρ(r).  (6.45)

The form (6.43) arises because we have separated the kinetic energy of the non-interacting electron system from the functional. The variation of this kinetic energy just gives the kinetic term of this Schrödinger-like equation.

The non-linear equation is again solved iteratively, making an ansatz using N/2 normalized single-electron wave functions φ_μ, which we occupy with spin ↑ and spin ↓ electrons to get the electron density:

ρ(r) = 2 Σ_{μ=1}^{N/2} |φ_μ(r)|².  (6.46)

6.8.1 Local Density Approximation

Apart from the restricted basis set, everything was exact up to this point. As the functional E_xc[ρ], and thus the potential v_xc(r), is not known, we need to introduce approximations.

The simplest approximation is the "local density approximation" (LDA), which replaces v_xc by that of a uniform electron gas with the same density. Instead of taking a functional E_xc[ρ](r), which could be a function of ρ(r), ∇ρ(r), ∇²ρ(r), ..., we ignore all the gradients and just take the local density:

E_xc[ρ](r) = E_LDA(ρ(r)).  (6.47)

Defining the dimensionless density parameter r_s by

r_s^{−1} = a_B (4πρ/3)^{1/3},  (6.48)

the exchange-correlation potential is

v_xc = −(e²/a_B) (3/2π)^{2/3} (1/r_s) [1 + 0.0545 r_s ln(1 + 11.4/r_s)],  (6.49)

where the first part corresponds to uncorrelated electrons and the last factor is a correlation correction determined by fitting to quantum Monte Carlo (QMC) simulations of an electron gas.
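A small function evaluating this parametrization (ours; it assumes atomic units e = a_B = 1 and the coefficients of eqs. (6.48)-(6.49) as reconstructed above):

#include <cmath>
#include <cstdio>

// LDA exchange-correlation potential, eqs. (6.48)-(6.49), in atomic units
double vxc_lda(double rho)
{
    const double pi = 3.141592653589793;
    double rs = 1.0 / std::cbrt(4.0 * pi * rho / 3.0);       // eq. (6.48)
    return -std::pow(3.0 / (2.0 * pi), 2.0 / 3.0) / rs
           * (1.0 + 0.0545 * rs * std::log(1.0 + 11.4 / rs)); // eq. (6.49)
}

int main() { std::printf("%g\n", vxc_lda(0.01)); }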

6.8.2 Improved approximations

Improvements over the LDA have been an intense field of research in quantum chemistry. I will just mention two improvements. The local spin density approximation (LSDA) uses separate densities for electrons with spin ↑ and ↓. The generalized gradient approximation (GGA) and its variants use functionals depending not only on the density, but also on its derivatives.

6.9 Car-Parrinello molecular dynamics

In the lecture on Computational Statistical Physics you have learned about the molecular dynamics method, in which atoms move on classical trajectories under forces, such as those from the Lennard-Jones potential, which have been previously calculated in quantum mechanical simulations. It would be nicer, and more accurate, to use a full quantum mechanical force calculation at every time step instead of using such static forces that have been extracted from previous simulations.

Roberto Car (currently in Princeton) and Michele Parrinello (currently at ETH) have combined density functional theory with molecular dynamics to do just that. Their method, Car-Parrinello molecular dynamics (CPMD), allows much better simulations of molecular vibration spectra and of chemical reactions.

The atomic nuclei are propagated using classical molecular dynamics, but the electronic forces which move them are estimated using density functional theory:

M_n d²R_n/dt² = −∂E[ρ(r,t), R_n]/∂R_n.  (6.50)

Here M_n and R_n are the masses and locations of the atomic nuclei.

As the solution of the full electronic problem at every time step is a very time consuming task, we do not want to perform it from scratch all the time. Instead CPMD uses the previous values of the noninteracting electron wave functions {φ_μ} of the DFT calculation (6.43) (don't confuse them with the Hartree-Fock orbitals!) and evolves them to the ground state for the current positions of the nuclei by an artificial molecular dynamics evolution. Hence both the nuclei {R_n} and the wave functions {φ_μ} evolve in the same molecular dynamics scheme. The electronic degrees of freedom are updated using an artificial dynamics:

m d²φ_μ(r,t)/dt² = −(1/2) δE[ρ(r,t), R_n]/δφ*_μ(r,t) + Σ_ν Λ_μν φ_ν(r,t),  (6.51)

where m is an artificial mass that needs to be chosen much lighter than the nuclear masses, so that the electronic structure adapts quickly to the motion of the nuclei. The Lagrange multipliers Λ_μν need to be chosen to ensure proper orthonormalization of the wave functions.

Since the exact form of the artificial dynamics of the electronic structure does not matter, we can evolve the expansion coefficients d_nμ of an expansion in terms of the basis functions, as in equation (6.25), instead of evolving the wave functions. This gives the equations of motion

m d²d_nμ/dt² = −∂E/∂d*_nμ + Σ_ν Λ_μν Σ_l S_nl d_lν.  (6.52)

There are various algorithms to determine the Λ_μν so that the wave functions stay orthonormal. We refer to text books and special lectures on CPMD for details.

6.10 Program packages

As the model Hamiltonian and the types of basis sets are essentially the same for all quantum chemistry applications, flexible program packages have been written. There is thus usually no need to write your own programs unless you want to implement a new algorithm.

Chapter 7

Cluster quantum Monte Carlo algorithms for lattice models

7.1 World line representations for quantum lattice models

All quantum Monte Carlo algorithms are based on a mapping of a d-dimensional quantum system to a (d+1)-dimensional classical system using a path-integral formulation. We then perform classical Monte Carlo updates on the world lines of the particles. We will introduce one modern algorithm for lattice models, the loop algorithm, which is a generalization of the classical cluster algorithms for the Ising model to quantum models.

We will discuss the loop algorithm for a spin-1/2 quantum XXZ model with the Hamiltonian

H = −Σ_{⟨i,j⟩} [J_z S^z_i S^z_j + J_xy (S^x_i S^x_j + S^y_i S^y_j)] = −Σ_{⟨i,j⟩} [J_z S^z_i S^z_j + (J_xy/2)(S^+_i S^−_j + S^−_i S^+_j)].  (7.1)

For J ≡ J_z = J_xy we have the Heisenberg model (J > 0 is ferromagnetic, J < 0 antiferromagnetic). J_xy = 0 is the (classical) Ising model and J_z = 0 the quantum XY model.
Continuous-time world lines

In contrast to models in continuous space, where a discrete time step was needed for the path integral, for lattice models the continuum limit can be taken. The spatial lattice is sufficient to regularize any ultraviolet divergencies.

We start by still discretizing the imaginary time (inverse temperature) direction, subdividing β = MΔτ:

e^{−βH} = (e^{−ΔτH})^M = (1 − ΔτH)^M + O(Δτ).  (7.2)

In the limit M → ∞ (Δτ → 0) this becomes exact. We will take the limit later, but stay at finite Δτ for now.

[Figure 7.1: Example of a world line configuration for a spin-1/2 quantum Heisenberg model. Drawn are the world lines for up-spins only. Down-spin world lines occupy the rest of the configuration. Axes: space (horizontal) and imaginary time from 0 to β (vertical).]
The next step is to insert the identity matrix, represented by a sum over all basis states 1 = Σ_i |i⟩⟨i|, between all operators (1 − ΔτH):

Z = Tr e^{−βH} = Tr (1 − ΔτH)^M + O(Δτ)
  = Σ_{i_1,...,i_M} ⟨i_1|1−ΔτH|i_2⟩ ⟨i_2|1−ΔτH|i_3⟩ ⋯ ⟨i_M|1−ΔτH|i_1⟩ + O(Δτ)
  =: Σ_{i_1,...,i_M} P_{i_1,...,i_M},  (7.3)

and similarly for a measurement, obtaining

⟨A⟩ = Σ_{i_1,...,i_M} (⟨i_1|A(1−ΔτH)|i_2⟩ / ⟨i_1|1−ΔτH|i_2⟩) P_{i_1,...,i_M} + O(Δτ).  (7.4)

If we choose the basis states |i⟩ to be eigenstates of the local S^z operators we end up with an Ising-like spin system in one higher dimension. Each choice i_1, ..., i_M corresponds to one of the possible configurations of this classical spin system. The trace is mapped to periodic boundary conditions in the imaginary time direction of this classical spin system. The probabilities are given by matrix elements ⟨i_n|1−ΔτH|i_{n+1}⟩. We can now sample this classical system using classical Monte Carlo methods.

However, most of the matrix elements ⟨i_n|1−ΔτH|i_{n+1}⟩ are zero, and thus nearly all configurations have vanishing weight. The only non-zero configurations are those where neighboring states |i_n⟩ and |i_{n+1}⟩ are either equal or differ by one of the off-diagonal matrix elements in H, which are nearest neighbor exchanges of two opposite spins. We can thus uniquely connect spins on neighboring time slices and end up with world lines of the spins, sketched in Fig. 7.1. Instead of sampling over all configurations of local spins we thus have to sample only over all world line configurations (the others have vanishing weight). Our update moves are not allowed to break world lines but have to lead to new valid world line configurations.
Finally we take the continuous time limit Δτ → 0. Instead of storing the configurations at all M time steps, we now just store the times τ at which a spin flips as the consequence of an off-diagonal operator acting at that time. The number of such events stays finite as M → ∞: as can be seen from equation (7.2), the probability of H acting at a given time is proportional to 1/M, and hence the total number of spin flips will stay finite although M → ∞.

7.1.1 The stochastic series expansion (SSE)

An alternative representation, which is easier to code, is the stochastic series expansion (SSE) method, developed by Anders Sandvik.¹ We start from a high temperature series expansion of the partition function, which is just a Taylor series:

Z = Tr e^{−βH} = Σ_{n=0}^{∞} (β^n/n!) Σ_α ⟨α|(−H)^n|α⟩,  (7.5)

where {|α⟩} now denotes the basis states.² The next step is to decompose the Hamiltonian into a sum of bond Hamiltonians

H = Σ_{b=1}^{M} H_b,  (7.6)

where each term H_b is associated with one of the M bonds b of the lattice.
We can assume that the Hamiltonian is a sum of bond terms without loss of generality: any single-site terms, such as a local magnetic field h S^z_i, can be assigned to bond terms connected to the site.

Inserting this decomposition of the Hamiltonian and sets |α(p)⟩ of basis vectors into the partition function we obtain

Z = Σ_{n=0}^{∞} Σ_{{C_n}} Σ_{α(0),...,α(n)} (β^n/n!) ∏_{p=1}^{n} ⟨α(p)|(−H_{b_p})|α(p−1)⟩,  (7.7)

where {C_n} denotes the set of all concatenations of n bond Hamiltonians H_b, each called an operator string. Also, |α(n)⟩ ≡ |α(0)⟩, reflecting the periodicity in the propagation direction given by the trace.

For a finite system and at finite temperature the relevant exponents of this power series are centered around ⟨n⟩ ∼ βN_s, where N_s is the number of lattice sites. Hence we can truncate the infinite sum over n at a finite cut-off length Λ ∝ βN_s without introducing any systematic error for practical computations. The best value for Λ can be determined and adjusted during the equilibration part of the simulation, e.g. by setting Λ > (4/3)n after each update step.
Instead of working with a variable number of operators n, the code is simpler³ if we keep a fixed number of operators Λ. This can be done by inserting (Λ − n) unit operators Id into every operator string of length n < Λ, and we define H_0 = Id. Taking the number of such possible insertions into account, we obtain

Z = Σ_{n=0}^{Λ} Σ_{{C_Λ}} Σ_{α(0),...,α(Λ)} (β^n (Λ−n)!/Λ!) ∏_{p=1}^{Λ} ⟨α(p)|(−H_{b_p})|α(p−1)⟩,  (7.8)

where n now denotes the number of non-unity operators in the operator string C_Λ. Each such operator string is thus given by an index sequence C_Λ = (b_1, b_2, ..., b_Λ), where on each propagation level p = 1, ..., Λ either b_p = 0 for a unit operator, or 1 ≤ b_p ≤ M for a bond Hamiltonian.

¹ A. W. Sandvik and J. Kurkijärvi, Phys. Rev. B 43, 5950 (1991); A. W. Sandvik, J. Phys. A 25, 3667 (1992).
² We switch notation here to stay consistent with the notation used in papers on the SSE method.
³ We actually recently found a way to write a simple code even with variable n, but that method is not written up yet.
Positivity of the weights

In order to interpret these weights as probabilities, all the matrix elements of −H_b need to be positive, and hence all matrix elements of H_b should be negative. For the diagonal part of the Hamiltonian, one can assure this by subtracting a suitable constant C from each bond Hamiltonian.

For the off-diagonal part of the Hamiltonian an equally simple remedy does not exist. However, if only operator strings with an even number of negative matrix elements have a finite contribution to Eq. (7.8), the relative weights are again well suited to define a probability distribution. One can show that this is in general the case for bosonic models, ferromagnetic spin models, and antiferromagnetic spin models on bipartite lattices (such as the square lattice).
Representing the configuration

In order to represent the configurations |α(p)⟩ we do not need to store the full configuration at every step p; it is sufficient to store:

- the initial state |α(0)⟩
- and for each level p:
  - a flag indicating whether the operator at level p is the identity, or a diagonal or off-diagonal operator
  - the bond b_p at which the operator acts
  - the new state on the two sites of the bond b_p.

A possible data layout along these lines is sketched below.
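This is our own sketch; the names are illustrative and not taken from any existing SSE code:

#include <vector>

// one element of the SSE operator string
struct Operator {
    enum Kind { identity, diagonal, offdiagonal } kind;
    int bond;         // bond b_p (unused when kind == identity)
    int new_state[2]; // spin states on the two sites of the bond
};

// an SSE configuration: the initial basis state plus an
// operator string of fixed length Lambda
struct SSEConfiguration {
    std::vector<int> initial_state;  // |alpha(0)>, one entry per site
    std::vector<Operator> opstring;  // Lambda entries
};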

7.2 Cluster updates

Before discussing cluster updates for quantum systems in continuous time or SSE representations, we will review the cluster algorithms for the classical Ising model, which should be known from the computational statistical physics course.

7.2.1 Kandel-Domany framework

To provide a general framework, which can be extended to quantum systems, we use the Fortuin-Kasteleyn representation of the Ising model, as generalized by Kandel and Domany. The phase space of the Ising model is enlarged by assigning a set G of possible graphs to each configuration C in the set of configurations C. We write the partition function as

Z = Σ_{C∈C} Σ_{G∈G} W(C, G),  (7.9)

where the new weights W(C, G) > 0 are chosen such that Z is the partition function of the original model by requiring

Σ_{G∈G} W(C, G) = W(C) := exp(−βE[C]),  (7.10)

where E[C] is the energy of the configuration C.


The algorithm now proceeds as follows. First we assign a graph G ∈ G to the configuration C, chosen with the correct probability

P_C(G) = W(C, G)/W(C).  (7.11)

Then we choose a new configuration C' with probability p[(C,G) → (C',G)], keeping the graph G fixed; next a new graph G' is chosen:

C → (C,G) → (C',G) → C' → (C',G') → ...  (7.12)

What about detailed balance? The procedure for choosing graphs with probabilities P_C(G) obeys detailed balance trivially. The non-trivial part is the probability of choosing a new configuration C'. There detailed balance requires:

W(C,G) p[(C,G) → (C',G)] = W(C',G) p[(C',G) → (C,G)],  (7.13)

which can be fulfilled using either the heat bath algorithm

p[(C,G) → (C',G)] = W(C',G) / (W(C,G) + W(C',G))  (7.14)

or by again using the Metropolis algorithm:

p[(C,G) → (C',G)] = min(W(C',G)/W(C,G), 1).  (7.15)

The algorithm simplifies a lot if we can find a graph mapping such that the graph weights do not depend on the configuration whenever the weight is nonzero in that configuration. This means we want the graph weights to be

W(C,G) = Δ(C,G) V(G),  (7.16)

where

Δ(C,G) := 1 if W(C,G) ≠ 0, and 0 otherwise.  (7.17)

Then equation (7.14) simply becomes p = 1/2 and equation (7.15) reduces to p = 1 for any configuration C' with W(C',G) ≠ 0.

Table 7.1: Local bond weights for the Kandel-Domany representation of the Ising model.

                  c = ↑↑       c = ↑↓       c = ↓↑       c = ↓↓       V(g)
Δ(c, discon.)     1            1            1            1            exp(−βJ)
Δ(c, con.)        1            0            0            1            exp(βJ) − exp(−βJ)
w(c)              exp(βJ)      exp(−βJ)     exp(−βJ)     exp(βJ)

7.2.2 The cluster algorithms for the Ising model

Let us now show how this abstract and general algorithm can be applied to the Ising model. Our graphs will be bond-percolation graphs on the lattice. Spins pointing in the same direction can be connected or disconnected. Spins pointing in opposite directions will always be disconnected. In the Ising model we can write the weights W(C) and W(C,G) as products over all bonds b:

W(C) = ∏_b w(C_b),  (7.18)

W(C,G) = ∏_b w(C_b, G_b) = ∏_b Δ(C_b, G_b) V(G_b),  (7.19)

where the local bond configurations C_b can be one of {↑↑, ↑↓, ↓↑, ↓↓} and the local graphs can be "connected" or "disconnected". The graph selection can thus be done locally on each bond.

Table 7.1 shows the local bond weights w(c,g), w(c), Δ(c,g) and V(g). It can easily be checked that the sum rule (7.10) is satisfied.

The probability of a connected bond is [exp(βJ) − exp(−βJ)]/exp(βJ) = 1 − exp(−2βJ) if two spins are aligned, and zero otherwise. These connected bonds group the spins into clusters of aligned spins.

A new configuration C' with the same graph G can differ from C only by flipping clusters of connected spins. Thus the name "cluster algorithms". The clusters can be flipped independently, as the flipping probabilities p[(C,G) → (C',G)] are configuration independent constants.

There are two variants of cluster algorithms that can be constructed using the rules derived above.
The Swendsen-Wang algorithm

The Swendsen-Wang or multi-cluster algorithm proceeds as follows:

i) Each bond in the lattice is assigned a label "connected" or "disconnected" according to the above rules. Two aligned spins are connected with probability 1 − exp(−2βJ). Two antiparallel spins are never connected.

ii) Next a cluster labeling algorithm, like the Hoshen-Kopelman algorithm, is used to identify clusters of connected spins.

iii) Measurements are performed, using the improved estimators discussed in the next section.

iv) Each cluster of spins is flipped with probability 1/2.
The Wolff single-cluster algorithm

The Swendsen-Wang algorithm gets less efficient in dimensions higher than two, as the majority of the clusters will be very small ones, and only a few large clusters exist. The Wolff algorithm is similar to the Swendsen-Wang algorithm but builds only one cluster, starting from a randomly chosen point. As the probability of this point being on a cluster of size s is proportional to s, the Wolff algorithm preferentially builds larger clusters. It works in the following way (a code sketch follows at the end of this subsection):

i) Choose a random spin as the initial cluster.

ii) If a neighboring spin is parallel to the initial spin it will be added to the cluster with probability 1 − exp(−2βJ).

iii) Repeat step ii) for all points newly added to the cluster and repeat this procedure until no new points can be added.

iv) Perform measurements using improved estimators.

v) Flip all spins in the cluster.

We will see in the next section that the linear cluster size diverges with the correlation length ξ and that the average number of spins in a cluster is just χT. Thus the algorithm adapts optimally to the physics of the system and the dynamical exponent z ≈ 0, thus solving the problem of critical slowing down. Close to criticality these algorithms are many orders of magnitude (a factor L²) better than the local update methods. Away from criticality sometimes a hybrid method, mixing cluster updates and local updates, can be the ideal method.
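The following self-contained sketch (ours; 2D Ising model with J = 1 and periodic boundaries, all names ours) implements one Wolff update. It returns the cluster size, which will serve as the improved estimator of section 7.2.3:

#include <cmath>
#include <random>
#include <stack>
#include <vector>

// one Wolff single-cluster update for the 2D Ising model (J = 1, periodic
// boundaries); spins are +1/-1; returns the cluster size
int wolff_update(std::vector<int>& spin, int L, double beta, std::mt19937& rng)
{
    std::uniform_int_distribution<int> site(0, L*L - 1);
    std::uniform_real_distribution<double> uni01(0.0, 1.0);
    double p_add = 1.0 - std::exp(-2.0 * beta);  // bond activation probability

    int seed = site(rng);
    int dir = spin[seed];          // orientation of the cluster spins
    std::stack<int> todo;
    todo.push(seed);
    spin[seed] = -dir;             // flip while building, to avoid revisits
    int size = 1;
    while (!todo.empty()) {
        int s = todo.top(); todo.pop();
        int x = s % L, y = s / L;
        int nb[4] = { (x+1)%L + y*L, (x+L-1)%L + y*L,
                      x + ((y+1)%L)*L, x + ((y+L-1)%L)*L };
        for (int n : nb)
            if (spin[n] == dir && uni01(rng) < p_add) {
                spin[n] = -dir;    // add the neighbor to the cluster and flip it
                todo.push(n);
                ++size;
            }
    }
    return size;
}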

7.2.3 Improved Estimators

In this section we present a neat trick that can be used in conjunction with cluster algorithms to reduce the variance, and thus the statistical error, of Monte Carlo measurements. Not only do these "improved estimators" reduce the variance, they are also much easier to calculate than the usual simple estimators.

To derive them we consider the Swendsen-Wang algorithm. This algorithm divides the lattice into N_c clusters, where all spins within a cluster are aligned. The next possible configuration is any of the 2^{N_c} configurations that can be reached by flipping any subset of the clusters. The idea behind the improved estimators is to measure not only in the new configuration, but in all equally probable 2^{N_c} configurations.

As the simplest example we consider the average magnetization ⟨m⟩. We can measure it as the expectation value ⟨σ_i⟩ of a single spin. As the cluster to which the spin belongs can be freely flipped, and the flipped cluster has the same probability as the original one, the improved estimator is

⟨m⟩ = ⟨(1/2)(σ_i − σ_i)⟩ = 0.  (7.20)

This result is obvious because of symmetry, but we saw that at low temperatures a single spin flip algorithm will fail to give this correct result since it takes an enormous time to flip all spins. Thus it is encouraging that the cluster algorithms automatically give the exact result in this case.

Correlation functions are not much harder to measure:

⟨σ_i σ_j⟩ = 1 if σ_i and σ_j are on the same cluster, and 0 otherwise.  (7.21)

To derive this result consider the two cases and write down the improved estimators by considering all possible cluster flips.
Using this simple result for the correlation functions, the mean square of the magnetization is

⟨m²⟩ = (1/N²) Σ_{i,j} ⟨σ_i σ_j⟩ = (1/N²) ⟨ Σ_cluster S(cluster)² ⟩,  (7.22)

where S(cluster) is the number of spins in a cluster. The susceptibility above T_c is simply given by ⟨m²⟩ and can also easily be calculated by the above sum over the squares of the cluster sizes.

In the Wolff algorithm only a single cluster is built. The above sum (7.22) can be rewritten to be useful also in the case of the Wolff algorithm:

⟨m²⟩ = (1/N²) ⟨ Σ_cluster S(cluster)² ⟩
     = (1/N²) ⟨ Σ_i (1/S(cluster containing i)) S(cluster containing i)² ⟩
     = (1/N²) ⟨ Σ_i S(cluster containing i) ⟩ = (1/N) ⟨S(cluster)⟩.  (7.23)

The expectation value for m² is thus simply the mean cluster size. In this derivation we replaced the sum over all clusters by a sum over all sites and had to divide the contribution of each cluster by the number of sites in the cluster. Next we replaced the average over all lattice sites by the expectation value for the cluster on a randomly chosen site, which in the Wolff algorithm is just the one Wolff cluster we build.
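Combining this with the wolff_update() sketch from the previous subsection (a fragment, assuming its variables spin, L, beta and rng are in scope):

// improved estimator, eq. (7.23): <m^2> = <S(cluster)> / N
double m2_accum = 0.0;
const int nsteps = 10000, N = L * L;
for (int step = 0; step < nsteps; ++step)
    m2_accum += wolff_update(spin, L, beta, rng);
double m2 = m2_accum / nsteps / N;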

7.2.4 The loop algorithm for quantum spins

We will now generalize these cluster algorithms to quantum systems and present the loop algorithm.⁴

⁴ H. G. Evertz et al., Phys. Rev. Lett. 70, 875 (1993); B. B. Beard and U.-J. Wiese, Phys. Rev. Lett. 77, 5130 (1996); B. Ammon, H. G. Evertz, N. Kawashima, M. Troyer and B. Frischmuth, Phys. Rev. B 58, 4304 (1998).
Table 7.2: The six local configurations for an XXZ model and their weights. [The
configuration graphics cannot be reproduced here; the six configurations consist of
straight world lines with parallel or antiparallel spins, with weights 1 ± (J_z/4)dτ
depending on the relative spin orientation, and spin-exchange processes with weight
(J_xy/2)dτ.]

Figure 7.2: The four local graphs: a) vertical, b) horizontal, c) crossing and d) freezing
(connects all four corners).
The loop algorithm for continuous time world lines

This algorithm is best described by first taking the continuous time limit M → ∞
(Δτ → dτ) and by working with infinitesimals. Similar to the Ising model we look at
two spins on neighboring sites i and j at two neighboring times τ and τ + dτ, as sketched
in Tab. 7.2. There are a total of six possible configurations, having three different
probabilities. The total probabilities are the products of all local probabilities, as in
the classical case. This is obvious for different time slices. For the same time slice it
is also true since, denoting by H_ij the term in the Hamiltonian H acting on the bond
between sites i and j, we have ∏_{⟨i,j⟩} (1 − dτ H_ij) = 1 − dτ Σ_{⟨i,j⟩} H_ij = 1 − dτ H. In
the following we focus only on such local four-spin plaquettes. Next we again use the
Kandel-Domany framework and assign graphs. As the updates are not allowed to break
world lines, only the four graphs sketched in Fig. 7.2 are allowed. Finally we have to find
Δ functions and graph weights that give the correct probabilities. The solution for the
XY-model, the ferromagnetic and antiferromagnetic Heisenberg models, and the Ising
model is shown in Tables 7.3 - 7.6.
Let us first look at the special case of the Ising model. As the exchange term is
absent in the Ising model, all world lines run straight and can be replaced by classical
spins. The only non-trivial graph is the freezing, connecting two neighboring world
lines. Integrating the probability that two neighboring sites are nowhere connected
along the time direction we obtain

∏_{τ=0}^{β} (1 − dτ J/2) = lim_{M→∞} (1 − Δτ J/2)^M = exp(−βJ/2).   (7.24)

Taking into account that the spin is S = 1/2 and the corresponding classical coupling
J_cl = S²J = J/4, we find for the probability that two spins are connected:

Table 7.3: The graph weights for the quantum-XY model and the Δ function specifying
whether the graph is allowed. A dash denotes a graph that is not possible for a
configuration because of spin conservation, and its Δ is zero. [The graphical table
cannot be reproduced here; the graph weights that appear are 1 and (J_xy/4)dτ, and
the total configuration weights are 1 and (J_xy/2)dτ.]
Table 7.4: The graph weights for the ferromagnetic quantum Heisenberg model and the
Δ function specifying whether the graph is allowed. A dash denotes a graph that
is not possible for a configuration because of spin conservation, and its Δ is zero.
[The graphical table cannot be reproduced here; the graph weights that appear are
1 − (J/4)dτ, (J/2)dτ and 0, and the total configuration weights are 1 ± (J/4)dτ and
(J/2)dτ.]

Table 7.5: The graph weights for the antiferromagnetic quantum Heisenberg model and
the Δ function specifying whether the graph is allowed. A dash denotes a graph
that is not possible for a configuration because of spin conservation, and its Δ is zero.
To avoid the sign problem (see next subsection) we change the sign of J_xy, which is
allowed only on bipartite lattices. [The graphical table cannot be reproduced here; the
graph weights that appear are 1, (|J|/4)dτ and (|J|/2)dτ, and the total configuration
weights are 1 ± (|J|/4)dτ and (|J|/2)dτ.]

Table 7.6: The graph weights for the ferromagnetic Ising model and the Δ function
specifying whether the graph is allowed. A dash denotes a graph that is not possible
for a configuration because of spin conservation, and its Δ is zero. [The graphical table
cannot be reproduced here; the graph weights that appear are 1, (J_z/4)dτ, (J_z/2)dτ
and 0, and the total configuration weights are 1 ± (J_z/4)dτ.]

Figure 7.3: Example of a loop update (panels, left to right: world lines; world lines
with decay graphs; world lines after flips of some loop clusters). In a first step decay
paths are inserted where possible, at positions drawn randomly according to an
exponential distribution, and graphs are assigned to all exchange terms (hoppings of
world lines). In a second stage (not shown) the loop clusters are identified. Finally
each loop cluster is flipped with probability 1/2 and one ends up with a new
configuration.
1 − exp(−2βJ_cl). We end up exactly with the cluster algorithm for the classical Ising
model!
The other cases are different. Here each graph connects two spins. As each of these
spins is again connected to only one other, all spins connected by a cluster form a
closed loop, hence the name loop algorithm. Only one issue remains to be explained:
how do we assign a horizontal or crossing graph with infinitesimal probability, such
as (J/2)dτ? This is easily done by comparing the assignment process with radioactive
decay. For each segment the graph runs vertical, except for occasional decay processes
occurring with probability (J/2)dτ. Instead of asking at every infinitesimal time step
whether a decay occurs, we simply calculate an exponentially distributed decay time t
with decay constant J/2. Looking up the equation in the lecture notes of the winter
semester, we have t = −(2/J) ln(1 − u), where u is a uniformly distributed random
number.
The algorithm now proceeds as follows (see Fig. 7.3): for each bond we start at time
0 and calculate a decay time. If the spins at that time are oriented properly and an
exchange graph is possible, we insert one. Next we advance by another randomly chosen
decay time along the same bond and repeat the procedure until we have reached the
time extent β. This assigns graphs to all infinitesimal time steps where spins do not change.
Next we assign a graph to all of the (finite number of) time steps where two spins
are exchanged. In the case of the Heisenberg models there is always only one possible
graph to assign, which makes this step very easy. In the next step we identify the loop
clusters and then flip each of them with probability 1/2. Alternatively, a Wolff-type
algorithm can be constructed that builds only one loop cluster.
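As an illustration, here is a small C++ sketch (our own, with the conventions above) of how the candidate decay positions along one bond could be generated:

#include <cmath>
#include <random>
#include <vector>

// Candidate decay positions along a bond of time extent beta: the gaps
// between successive decays are exponentially distributed with decay
// constant J/2, i.e. t = -(2/J) ln(1-u) for a uniform random number u.
std::vector<double> decay_times(double beta, double J, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<double> times;
    double t = -(2.0 / J) * std::log(1.0 - u(rng));   // first decay
    while (t < beta) {
        times.push_back(t);
        t += -(2.0 / J) * std::log(1.0 - u(rng));     // next gap
    }
    return times;
}

The same could be written with std::exponential_distribution with rate J/2; at each generated position one still checks whether the local spin configuration actually allows an exchange graph before inserting one.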
Improved estimators for measurements can be constructed as in the classical models,
and the derivation is similar. We mention just two simple ones for the ferromagnetic
Heisenberg model. The spin-spin correlation is

⟨S_i^z(τ) S_j^z(τ′)⟩ = 1/4 if (i,τ) and (j,τ′) are on the same cluster, and 0 otherwise,   (7.25)

and the uniform susceptibility is

χ = (1/(4βN)) ⟨Σ_c S(c)²⟩,   (7.26)

where the sum goes over all loop clusters and S(c) is the total time-length of the loop
segments in the loop cluster c.
The loop algorithm in SSE

We now discuss the loop algorithm in the SSE representation, which we will also use in
the exercises. In the SSE representation, the loop algorithm consists of two parts:

1. diagonal updates, replacing unit operators H_0 by diagonal bond operators H_b, without changing any of the spins;

2. loop updates, flipping spins and changing diagonal into off-diagonal operators.
The first step is the diagonal update. We attempt to change the expansion order
n by inserting and removing unit operators. During this update step, all
propagation levels p = 1, ..., Λ (with Λ the cutoff of the expansion) are traversed in
ascending order. If the current operator is a unit operator H_0, it is replaced by a bond
Hamiltonian with a certain probability which guarantees detailed balance. The reverse
process, i.e. the substitution of a bond Hamiltonian by a unit operator, is only attempted
if the action of the current bond Hamiltonian does not change the propagated state,
i.e., if |α(p)⟩ = |α(p−1)⟩, since otherwise the resulting contribution to Eq. (7.8) would
vanish.
The acceptance probabilities for both substitutions, as determined from detailed
balance, are

P(H_0 → H_b) = min[1, M β ⟨α(p)|H_b|α(p−1)⟩ / (Λ − n)],
P(H_b → H_0) = min[1, (Λ − n + 1) δ_{|α(p)⟩,|α(p−1)⟩} / (M β ⟨α(p)|H_b|α(p−1)⟩)],

where M is the number of bond terms in the Hamiltonian. At the same time we build
up a quadruply-linked list of the operators H_p, connecting to the previous and next
operators on the two sites of the bond b_p. This list will be needed to construct the
loop clusters.
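A minimal C++ sketch of this diagonal update, written for the S = 1/2 antiferromagnetic Heisenberg chain (the data layout and all names are our own, not the course's code; with the offset J/4 the diagonal matrix element is J/2 on antiparallel spins and 0 otherwise, and Λ is kept larger than n):

#include <random>
#include <vector>

// op[p] = -1 encodes the unit operator H_0; op[p] = 2*b a diagonal and
// op[p] = 2*b+1 an off-diagonal operator on bond b = (b, b+1) of a periodic
// chain. n is the current expansion order, Lambda = op.size() the cutoff.
void diagonal_update(std::vector<int>& op, std::vector<int>& spin, int& n,
                     double beta, double J, std::mt19937& rng) {
    const int Lambda = op.size();
    const int M = spin.size();               // number of bonds (periodic chain)
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::uniform_int_distribution<int> rand_bond(0, M - 1);
    for (int p = 0; p < Lambda; ++p) {
        if (op[p] == -1) {                   // attempt H_0 -> H_b
            const int b = rand_bond(rng);
            if (spin[b] != spin[(b + 1) % M] &&          // diagonal weight J/2
                u(rng) < beta * M * (J / 2) / (Lambda - n)) {
                op[p] = 2 * b; ++n;
            }
        } else if (op[p] % 2 == 0) {         // diagonal: attempt H_b -> H_0
            if (u(rng) < (Lambda - n + 1) / (beta * M * (J / 2))) {
                op[p] = -1; --n;
            }
        } else {                             // off-diagonal: propagate the state
            const int b = op[p] / 2;
            spin[b] = -spin[b];
            spin[(b + 1) % M] = -spin[(b + 1) % M];
        }
    }
}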
In the next step we assign graphs to each of the non-identity operators H_p, and then
build clusters. Flipping the clusters turns diagonal into off-diagonal operators and
vice versa. Since some stochastic choices were already made when replacing identity
with diagonal operators, and there are no infinitesimals involved, the cluster building
rules are actually easier than for continuous time world lines.
For the ferromagnetic and antiferromagnetic Heisenberg and Ising models the
graph assignments are unique: we do not even need any random numbers here but can
just build the loops and flip them.

Table 7.7: The graph weights for the quantum-XY model in the SSE representation
and the Δ function specifying whether the graph is allowed. An offset of J_xy/4 has
been added to the diagonal terms to give simple cluster rules. A dash denotes a graph
that is not possible for a configuration because of spin conservation, and its Δ is zero.
[The graphical table cannot be reproduced here; the graph weights that appear are
J_xy/4, and the total configuration weights are J_xy/4 and J_xy/2.]

Table 7.8: The graph weights for the ferromagnetic quantum Heisenberg model in
the SSE representation and the Δ function specifying whether the graph is allowed.
A dash denotes a graph that is not possible for a configuration because of spin
conservation, and its Δ is zero. An offset of J/4 was added to the diagonal terms to
keep the weights positive. [The graphical table cannot be reproduced here; the graph
weights that appear are J/2 and 0.]
Table 7.9: The graph weights for the antiferromagnetic quantum Heisenberg model in
the SSE representation and the Δ function specifying whether the graph is allowed.
A dash denotes a graph that is not possible for a configuration because of spin
conservation, and its Δ is zero. An offset of J/4 was added to the diagonal terms to
keep the weights positive. To avoid the sign problem for off-diagonal terms (see next
subsection) we change the sign of J_xy, which is allowed only on bipartite lattices.
[The graphical table cannot be reproduced here; the graph weights that appear are
|J|/2 and 0.]

Table 7.10: The graph weights for the ferromagnetic Ising model in the SSE
representation and the Δ function specifying whether the graph is allowed. An offset of
J_z/4 was added to the diagonal terms to keep the weights positive. A dash denotes a
graph that is not possible for a configuration because of spin conservation, and its Δ is
zero. [The graphical table cannot be reproduced here; the graph weights that appear
are J_z/2 and 0.]

7.3 The negative sign problem

Now that we have an algorithm with no critical slowing down, one might think that
the quantum many-body problem is completely solved. However, in this section we
will show that the sign problem is NP-hard, following the paper M. Troyer and
U.-J. Wiese, Phys. Rev. Lett. 94, 170201 (2005).
The difficulties in finding polynomial time solutions to the sign problem are
reminiscent of the apparent impossibility of finding polynomial time algorithms for
nondeterministic polynomial (NP)-complete decision problems, which could be solved in
polynomial time on a hypothetical non-deterministic machine, but for which no
polynomial time algorithm is known for deterministic classical computers. A hypothetical
non-deterministic machine can always follow both branches of an if-statement
simultaneously, but can never merge the branches again. It can, equivalently, be viewed as
having exponentially many processors, but without any communication between them.
In addition, it must be possible to check a positive answer to a problem in NP on a
classical computer in polynomial time.
Many important computational problems in the complexity class NP, including the
traveling salesman problem and the problem of finding ground states of spin glasses,
have the additional property of being NP-hard, forming the subset of NP-complete
problems, the hardest problems in NP. A problem is called NP-hard if any problem in
NP can be mapped onto it with polynomial complexity. Solving an NP-hard problem
is thus equivalent to solving any problem in NP, and finding a polynomial time solution
to any of them would have important consequences for all of computing, as well as for
the security of classical encryption schemes. In that case all problems in NP could be
solved in polynomial time, and hence NP = P.
As no polynomial solution to any of the NP-complete problems has been found despite
decades of intensive research, it is generally believed that NP ≠ P and that no
deterministic polynomial time algorithm exists for these problems. The proof of this
conjecture remains one of the unsolved millennium problems of mathematics, for which
the Clay Mathematics Institute has offered a prize of one million US$. In this section
we will show that the sign problem is NP-hard, implying that unless the NP ≠ P
conjecture is disproved there exists no generic solution of the sign problem.
Before presenting the details of the proof, we give a short introduction to classical
and quantum Monte Carlo simulations and the origin of the sign problem. In the
calculation of the phase space average of a quantity A, instead of directly evaluating
the sum

⟨A⟩ = (1/Z) Σ_{c∈Ω} A(c) p(c),    Z = Σ_{c∈Ω} p(c),   (7.27)

over a high-dimensional space Ω of configurations c, a classical Monte Carlo method
chooses a set of M configurations {c_i} from Ω, according to the distribution p(c_i). The
average is then approximated by the sample mean

⟨A⟩ ≈ Ā = (1/M) Σ_{i=1}^{M} A(c_i),   (7.28)

within a statistical error ΔA = √(Var A (2τ_A + 1)/M), where Var A is the variance of A
and the integrated autocorrelation time τ_A is a measure of the autocorrelations of the
sequence {A(c_i)}. In typical statistical physics applications, p(c) = exp(−βE(c)) is the
Boltzmann weight, β = 1/k_B T is the inverse temperature, and E(c) is the energy of
the configuration c.
Since the dimension of configuration space grows linearly with the number N
of particles, the computational effort for the direct integration Eq. (7.27) scales
exponentially with the particle number N. Using the Monte Carlo approach the same
average can be estimated to any desired accuracy in polynomial time, as long as the
autocorrelation time τ_A does not increase faster than polynomially with N.
In a quantum system with Hamilton operator H, instead of an integral like Eq. (7.27),
an operator expression

⟨A⟩ = (1/Z) Tr[A exp(−βH)],    Z = Tr exp(−βH)   (7.29)

needs to be evaluated in order to calculate the thermal average of the observable A
(represented by a self-adjoint operator). Monte Carlo techniques can again be applied
to reduce the exponential scaling of the problem, but only after mapping the quantum
model to a classical one. One approach to this mapping⁵ is a Taylor expansion:
Z = Tr exp(−βH) = Σ_{n=0}^{∞} (−β)ⁿ/n! Tr Hⁿ
  = Σ_{n=0}^{∞} Σ_{i₁,...,iₙ} (−β)ⁿ/n! ⟨i₁|H|i₂⟩⟨i₂|H|i₃⟩ ··· ⟨iₙ|H|i₁⟩
  ≡ Σ_c p(c),   (7.30)

where for each order n in the expansion we insert n sums over complete sets of basis
states {|i⟩}. The configurations are sequences c = (i₁, ..., iₙ) of n basis states, and we
define the weight p(c) as the corresponding product of matrix elements of H times the
term (−β)ⁿ/n!. With a similar expansion for Tr[A exp(−βH)] we obtain an expression
reminiscent of the classical problem:
⟨A⟩ = (1/Z) Tr[A exp(−βH)] = (1/Z) Σ_c A(c) p(c).   (7.31)

If all the weights p(c) are positive, standard Monte Carlo methods can be applied,
as is the case for non-frustrated quantum magnets and bosonic systems. In fermionic
systems negative weights p(c) < 0 arise from the Pauli exclusion principle, when along
the sequence |i₁⟩ → |i₂⟩ → ··· → |iₙ⟩ → |i₁⟩ two fermions are exchanged, as shown in
Fig. 7.4.
The standard way of dealing with the negative weights of the fermionic system is to
sample with respect to the bosonic system by using the absolute values of the weights

⁵ Note that the conclusions are independent of the representation.


Figure 7.4: A configuration of a fermionic lattice model on a 4-site square. The
configuration has negative weight, since two fermions are exchanged in the sequence
|i₁⟩ → |i₂⟩ → |i₃⟩ → |i₄⟩ → |i₁⟩. World lines connecting particles on neighboring slices
are drawn as thick lines.
|p(c)| and to assign the sign s(c) ≡ sign p(c) to the quantity being sampled:

⟨A⟩ = Σ_c A(c) p(c) / Σ_c p(c)
    = [ Σ_c A(c) s(c) |p(c)| / Σ_c |p(c)| ] / [ Σ_c s(c) |p(c)| / Σ_c |p(c)| ]
    ≡ ⟨A s⟩′ / ⟨s⟩′ .   (7.32)

While this allows Monte Carlo simulations to be performed, the errors increase
exponentially with the particle number N and the inverse temperature β. To see this,
consider the mean value of the sign ⟨s⟩ = Z/Z′, which is just the ratio of the partition
functions of the fermionic system Z = Σ_c p(c) and the bosonic system used for
sampling, Z′ = Σ_c |p(c)|. As the partition functions are exponentials of the
corresponding free energies, this ratio is an exponential of the difference Δf in the free
energy densities: ⟨s⟩ = Z/Z′ = exp(−βN Δf). As a consequence, the relative error
Δs/⟨s⟩ increases exponentially with increasing particle number and inverse temperature:

Δs/⟨s⟩ = √((⟨s²⟩ − ⟨s⟩²)/M) / ⟨s⟩ = √(1 − ⟨s⟩²) / (√M ⟨s⟩) ≈ e^{βN Δf} / √M,   (7.33)

where we used s² = 1, so that ⟨s²⟩ = 1. Similarly the error for the numerator in
Eq. (7.32) increases exponentially, and the time needed to achieve a given relative
error scales exponentially in N and β.
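In code, the reweighting of Eq. (7.32) is straightforward; a minimal sketch (names are ours), given samples A(c_i) and signs s(c_i) drawn with probability proportional to |p(c_i)|:

#include <vector>

// Estimate <A> = <A s>' / <s>' from bosonic-ensemble samples. The statistical
// error explodes when the mean sign <s>' ~ exp(-beta*N*Delta_f) becomes small.
double reweighted_average(const std::vector<double>& A,
                          const std::vector<int>& s) {
    double As = 0.0, S = 0.0;
    for (std::size_t i = 0; i < A.size(); ++i) {
        As += A[i] * s[i];
        S  += s[i];
    }
    return As / S;
}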
In order to avoid any misconception about what would constitute a solution of
the sign problem, we start by giving a precise definition:

A quantum Monte Carlo simulation to calculate a thermal average ⟨A⟩ of an
observable A in a quantum system with Hamilton operator H is defined to suffer
from a sign problem if there occur negative weights p(c) < 0 in the classical
representation as given by Eq. (7.31).
The related bosonic system of a fermionic quantum system is defined as the system
where the weights p(c) are replaced by their absolute values |p(c)|, thus ignoring
the minus signs coming from fermion exchanges:

⟨A⟩′ = (1/Z′) Σ_c A(c) |p(c)|.   (7.34)

An algorithm for the stochastic evaluation of a thermal average such as Eqs.
(7.31) or (7.34) is defined to be of polynomial complexity if the computational
time t(ε, N, β) needed to achieve a relative statistical error ε = ΔA/⟨A⟩ in the
evaluation of the average ⟨A⟩ scales polynomially with the system size N and
inverse temperature β, i.e. if there exist integers n and m and a constant κ < ∞
such that
t(ε, N, β) < κ ε⁻² Nⁿ βᵐ.   (7.35)
For a quantum system that suffers from a sign problem for an observable A, and
for which there exists a polynomial complexity algorithm for the related bosonic
system Eq. (7.34), we define a solution of the sign problem as an algorithm of
polynomial complexity to evaluate the thermal average hAi.
It is important to note that we only worry about the sign problem if the bosonic
problem is easy (of polynomial complexity) but the fermionic problem hard (of
exponential complexity) due to the sign problem. If the bosonic problem is already hard,
e.g. for spin glasses⁶, the sign problem does not increase the complexity of the problem.
Also, changing the representation so that the sum in Eq. (7.31) contains only positive
terms p(c) ≥ 0 is not sufficient to solve the sign problem if the scaling remains
exponential, since then we have just mapped the sign problem onto another exponentially
hard problem. Only a polynomial complexity algorithm counts as a solution of the
sign problem.
At first sight such a solution seems feasible, since the sign problem is not an intrinsic
property of the quantum model studied but is representation-dependent: it depends on
the choice of basis sets {|i⟩}, and in some models it can be solved by a simple local
basis change. Indeed, when using the eigenbasis in which the Hamilton operator H is
diagonal, there is no sign problem. This diagonalization of the Hamilton operator
is, however, no solution of the sign problem, since its complexity is exponential in the
number of particles N.
We now construct a quantum mechanical system for which the calculation of a
thermal average provides the solution for one, and thus all, of the NP-complete problems.
This system exhibits a sign problem, but the related bosonic problem is easy to solve.
Since, for this model, a solution of the sign problem would provide us with a polynomial
time algorithm for an NP-complete problem, the sign problem is NP-hard. Of course, it
is expected that the corresponding thermal averages cannot be calculated in polynomial
time and that the sign problem thus cannot be solved. Otherwise we would have found a
polynomial time algorithm for the NP-complete problems and would have shown that
NP = P.

⁶ F. Barahona, J. Phys. A 15, 3241 (1982)
The specific NP-complete problem we consider is to determine whether a state with
energy less than or equal to a bound E₀ exists for a classical three-dimensional Ising
spin glass with Hamilton function

H = Σ_{⟨j,k⟩} J_{jk} σ_j σ_k.   (7.36)

Here the spins σ_j take the values ±1, and the couplings J_{jk} between nearest neighbor
lattice points j and k are either 0 or ±J.
This problem is in the complexity class NP since the non-deterministic machine
can evaluate the energies of all configurations c in polynomial time and test whether
there is one with E(c) ≤ E₀. In addition, the validity of a positive answer (i.e. there
is a configuration c with E(c) ≤ E₀) can be tested on a deterministic machine by
evaluating the energy of that configuration. The evaluation of the partition function
Z = Σ_c exp(−βE(c)) is, however, not in NP, since the non-deterministic machine
cannot perform the sum in polynomial time.
The question whether there is a state with energy E(c) ≤ E₀ can also be answered
in a Monte Carlo simulation by calculating the average energy of the spin glass at a
large enough inverse temperature β. Since the energy levels are discrete with spacing J,
it can easily be shown that, by choosing an inverse temperature βJ ≥ N ln 2 + ln(12N),
the thermal average of the energy will be less than E₀ + J/2 if at least one configuration
with energy E₀ or less exists, and larger than E₀ + J otherwise.
In this classical Monte Carlo simulation, the complex energy landscape created by
the frustration in the spin glass (Fig. 7.5a) exponentially suppresses the tunneling of the
Monte Carlo simulation between local minima at low temperatures. The autocorrelation
times and hence the time complexity of this Monte Carlo approach are exponentially
large, ∝ exp(aN), as expected for this NP-complete problem.
We now map this classical system to a quantum system with a sign problem. We do
so by replacing the classical Ising spins by quantum spins. Instead of the common choice,
in which the classical spin configurations are basis states and the spins are represented
by diagonal σ_j^z Pauli matrices, we choose a representation in which the spins point in
the ±x direction and are represented by σ_j^x Pauli matrices:

H = Σ_{⟨j,k⟩} J_{jk} σ_j^x σ_k^x.   (7.37)

Here the random signs of the couplings are mapped to random signs of the off-diagonal
matrix elements, which cause a sign problem (see Fig. 7.5b). The related bosonic model
is the ferromagnet with all couplings J_{jk} ≤ 0, and efficient cluster algorithms with
polynomial time complexity are known for this model. Since the bosonic version is easy
to simulate, the sign problem is the origin of the NP-hardness of a quantum Monte
Carlo simulation of this model. A generic solution of the sign problem would provide
a polynomial time solution to this, and thus to all, NP-complete problems, and would
hence imply that NP = P. Since it is generally believed that NP ≠ P, we expect that
such a solution does not exist.

Figure 7.5: a) A classically frustrated spin configuration of three antiferromagnetically
coupled spins: no configuration can simultaneously minimize the energy of all three
bonds. b) A configuration of a frustrated quantum magnet with negative weights: three
antiferromagnetic exchange terms with negative weights are present in the sequence
|i₁⟩ → |i₂⟩ → |i₃⟩ → |i₁⟩. Here up-spins with z-component of spin σ_j^z = +1 and
down-spins with σ_j^z = −1 are connected with differently colored world lines.
By constructing a concrete model we have shown that the sign problem of quantum
Monte Carlo simulations is NP-hard. This does not exclude that a specific sign
problem can be solved for a restricted subclass of quantum systems. This was indeed
possible using the meron-cluster algorithm⁷ for some particular lattice models. Such
a solution must be intimately tied to the properties of the physical system and allow an
essentially bosonic description of the quantum problem. A generic approach might scale
polynomially for some cases, but will in general scale exponentially.
In the case of fermions or frustrated quantum magnets, solving the sign problem
requires a mapping to a bosonic or non-frustrated system which is, in general, almost
certainly impossible for physical reasons. The origin of the sign problem is, in fact, the
distinction between bosonic and fermionic systems. The brute-force approach of taking
the absolute values of the probabilities means trying to sample a frustrated or fermionic
system by simulating a non-frustrated or bosonic one. As for large system sizes N and
low temperatures the relevant configurations for the latter are not the relevant ones for
the former, the errors are exponentially large.
Given the NP-hardness of the sign problem, one promising idea for the simulation
of fermionic systems is to use ultra-cold atoms in optical lattices to construct
well-controlled and tunable implementations of physical systems, such as the Hubbard
model, and to use these "quantum simulators" to study the phase diagrams of
correlated quantum systems. But even these quantum simulators are most likely not a
generic solution to the sign problem, since there exist quantum systems with
exponentially diverging time scales and it is at present not clear whether a quantum computer
could solve the NP-complete problems.

⁷ S. Chandrasekharan and U.-J. Wiese, Phys. Rev. Lett. 83, 3116 (1999)

7.4 Worm and directed loop updates

The quantum loop algorithm, like the classical cluster algorithms, works only for models
with spin-inversion symmetry: a configuration of spins and the flipped configuration
need to have the same weight. Applying a magnetic field breaks this spin-inversion
symmetry, and the loop algorithm cannot be applied. For that case, Nikolay Prokof'ev
and coworkers invented the worm algorithm.⁸ The worm algorithm is based on a very
simple idea: while local updates on world lines are not ergodic (since knots cannot be
created or undone), the worm algorithm proceeds by cutting a world line, and then
moves the open ends in local updates until they meet again and the world line is glued
together once more.

⁸ N. V. Prokof'ev, B. V. Svistunov and I. S. Tupitsyn, Sov. Phys. JETP 87, 310 (1998).


Chapter 8
An Introduction to Quantum Field Theory

8.1 Introduction and motivation

This chapter is a generalization of Sec. 5.1 on Path Integral Monte Carlo. Instead of
considering one (or N ) quantum-mechanical particles as was done there, the idea is
now to consider a quantum field, which contains infinitely many degrees of freedom.
However, in practice, we are going to simulate a large but finite number of degrees
of freedom, and extrapolate at the end. So there really is not much difference from
Sec. 5.1.
The formal basis for Quantum Field Theory is an important subject, which will not
be covered here. The goal of this chapter is to convey some understanding of simulations
of quantum field theories, by appealing to intuition rather than rigor.
Let us mention at least three reasons why numerical simulations of Quantum Field
Theory are interesting:
They represent a complementary approach to the traditional, analytic perturbative
expansion. A perturbative expansion is useful when the theory possesses a small
expansion parameter, which ensures fast convergence of the Taylor series. This is the
case for QED (quantum electrodynamics), where the coupling constant characterizing
the strength of the interactions is α_QED ≈ 1/137. For QCD (quantum chromodynamics),
which describes the interactions of quarks, the coupling constant is α_QCD = O(1), which
makes numerical simulations the method of choice, because they are non-perturbative
(i.e. not based on a perturbative expansion).
Quantum field theories are complicated by divergences: infinities which arise when
interactions occur at very short distances. These UV divergences cancel out of all
measurable quantities. This process of "renormalization" of infinities is traditionally
considered in a perturbative expansion framework. The numerical simulations of QFT
are performed on a space-time lattice, where the lattice spacing a naturally defines the
smallest distance scale, thus providing an elegant formalism to study renormalization
as a → 0.
Solid-state physicists, who deal with crystalline materials, may think that they have
no need for quantum field theory, which is formulated in continuous, uniform space-time.
This is not true. A topical example is graphene, a 2d honeycomb crystal.
Electrons whose momenta p⃗ are close to two particular values p⃗* behave like
relativistic particles with momentum p⃗ − p⃗*. Their properties for small values of |p⃗ − p⃗*|
are well described by an effective quantum field theory. Because the momenta considered
are small, or equivalently the distances are large, the crystalline nature of graphene
is irrelevant. The effective quantum field theory is formulated in the continuum, and
can be studied by numerical simulations on, say, a square lattice.

8.2 Recall: Path integral Monte Carlo

The starting point is the Path Integral formalism, introduced in Chapter 5. Let us
recall the main idea and some results from Sec. 5.1, specialized to the simplest case of
one bosonic particle of mass m moving in one dimension in a potential V(x).
The partition function Z = Tr(exp(−βH)) is the trace of the density matrix
ρ = exp(−βH), with H = T + V, T = −(ħ²/2m) ∂²/∂x², V = V(x). The difficulty in
evaluating Z is that the kinetic and potential energy operators T and V are diagonal in
momentum space and in coordinate space, respectively, and thus do not commute. To
evaluate Z, one divides β into M elementary time steps Δτ = β/M, and uses the
primitive approximation for the high-temperature (T = 1/Δτ) density matrix,
exp(−ΔτH) ≈ exp(−ΔτT) exp(−ΔτV), so that
exp(−β(T + V)) = lim_{M→∞} [exp(−ΔτT) exp(−ΔτV)]^M.
Finally, the matrix elements of exp(−ΔτT), Δτ ≪ 1, can be evaluated in coordinate
space, yielding

ρ(x₁, x_{M+1}, β) = lim_{M→∞} ∫ ∏_{j=2}^{M} dx_j ∏_{j=1}^{M} ρ_free(x_j, x_{j+1}, Δτ) exp[−Δτ V(x_j)]   (8.1)

with ρ_free(x_j, x_{j+1}, Δτ) = (2πħ²Δτ/m)^{−1/2} exp[−(x_j − x_{j+1})² / (2ħ²Δτ/m)]. See Eq. (5.18).

Here, we want to emphasize that the density matrix is also the Green's function
in imaginary time, as explained in Sec. 5.1.1. The solution of the time-dependent
Schrödinger equation iħ ∂ψ/∂t = Hψ is

ψ(x, t) = ∫ dx′ G(x, t; x′, 0) ψ(x′, 0)   (8.2)

where

G(x, t; x′, 0) ≡ ⟨x| exp(−iHt/ħ) |x′⟩   (8.3)

is the Green's function, i.e. the solution of the Schrödinger equation for ψ(x, 0) =
δ(x − x′) (also called transition amplitude, or matrix element). If we now introduce an
imaginary time τ ≡ it, we see that the Green's function G(x, −iτ; x′, 0) is identical to the
element ρ(x, x′; τ/ħ) of the density matrix. (Changing time to pure imaginary is also
called performing a Wick rotation to Euclidean time.)
This equivalence helps understand several aspects of the path integral formalism:
Figure 8.1: After compactification of the Euclidean time direction, the paths become
closed loops, and the path integral is identical to the partition function of the quantum
mechanical system, where the inverse temperature is (proportional to) the Euclidean
time extent.

Propagation over a large imaginary time isolates the groundstate: the operator
exp(−τH/ħ) is applied to the initial state |ψ(0)⟩, so that

|ψ(τ)⟩ = exp(−τH/ħ) |ψ(0)⟩.   (8.4)

If one expands in eigenstates of H, H|φ_k⟩ = E_k|φ_k⟩, k = 0, 1, ..., with E₀ ≤ E₁ ≤ ...,
then

|ψ(τ)⟩ = Σ_k exp(−τE_k/ħ) ⟨φ_k|ψ(0)⟩ |φ_k⟩.   (8.5)

Unless the initial state |ψ(0)⟩ is accidentally orthogonal to the groundstate |φ₀⟩, the
latter will dominate the sum at large τ, while excited state contributions are suppressed
by factors exp(−τ(E₁ − E₀)/ħ).
2

The free density matrix free (


) is simply the operator exp( ~2m), which describes
a free diffusion in imaginary time, and therefore has Gaussian matrix elements as used
in eq.(8.1). This equation can then be written (with /~) as
!#
"

2
Z Y
M
M
Y
1
x

j+1
j
+ m
+ V (xj )
dxj
exp
hxM +1 | exp(H)|x1 i = lim C M 1
M +
~
2

j=2
j=1
1/2

where C = (2~ /m)


. One can then take the trace by imposing xM +1
integrating over x1 , to obtain



Z
Z
1 dx 2
1 ~
d
m( ) + V (x)
Z=
Dx( ) exp
~ 0
2 d
x(~)=x(0)
78

(8.6)
= x1 and

(8.7)

up to an infinite, but irrelevant, multiplicative constant lim_{M→∞} C^M which drops out
of all expectation values. Now the expression "path integral" is clear: Z is an integral
over periodic paths x(τ). Since the integral is over functions x(τ), Z is called a
functional integral. The exponent ∫dτ [½ m (dx/dτ)² + V(x)] is called the Euclidean
action S_E, which depends on the path x(τ). Each path contributes to Z with a weight
exp(−S_E/ħ).
Furthermore, notice the role of ħ in the exponent of Eq. (8.7): it governs the
magnitude of the fluctuations, and therefore plays a role analogous to that of a temperature.
In the limit ħ → 0, all fluctuations are suppressed and the only path surviving in the
path integral is the one which minimizes the exponent: this is the classical limit, where
the path from (x₀, 0) to (x, τ) is deterministic, and we recover the minimum action
principle. When ħ ≠ 0, fluctuations are allowed around this special path: these
fluctuations are the quantum fluctuations.
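As a concrete illustration, here is a sketch of our own (in units ħ = m = 1) of a local Metropolis update of such a discretized periodic path; only the two kinetic links touching a slice and its potential term enter the action difference:

#include <cmath>
#include <random>
#include <vector>

// Part of the Euclidean action that depends on slice j of the periodic path x.
double local_action(const std::vector<double>& x, int j, double dtau,
                    double (*V)(double)) {
    const int M = x.size();
    const double xl = x[(j + M - 1) % M], xr = x[(j + 1) % M];
    const double kin = 0.5 * ((x[j] - xl) * (x[j] - xl) +
                              (xr - x[j]) * (xr - x[j])) / (dtau * dtau);
    return dtau * (kin + V(x[j]));
}

// One Metropolis sweep: propose x_j -> x_j + delta*u and accept with
// probability min(1, exp(-Delta S_E)).
void metropolis_sweep(std::vector<double>& x, double dtau, double delta,
                      double (*V)(double), std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1.0, 1.0), a(0.0, 1.0);
    for (int j = 0; j < static_cast<int>(x.size()); ++j) {
        const double old_x = x[j];
        const double S_old = local_action(x, j, dtau, V);
        x[j] += delta * u(rng);
        const double dS = local_action(x, j, dtau, V) - S_old;
        if (a(rng) >= std::exp(-dS)) x[j] = old_x;   // reject: restore slice
    }
}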
Expectation values of observables W, defined by

⟨W⟩ = (1/Z) Tr(W exp(−βH)),   (8.8)

can be measured by averaging the measurements obtained on paths generated
by Monte Carlo sampling of Z. When β → ∞, one recovers the groundstate (or
vacuum) expectation value

⟨W⟩ = ⟨φ₀|W|φ₀⟩ / ⟨φ₀|φ₀⟩.   (8.9)

A simple example is the position projector W = |x₀⟩⟨x₀| ≡ δ(x₀). Substituting above
gives

⟨δ(x₀)⟩ = |⟨φ₀|x₀⟩|² = |φ₀(x₀)|²   (8.10)

so that the number of paths going through x₀ (at any Euclidean time) is proportional
to the square of the wave function. An important observable for quantum field theory
is the two-point correlator δ(x₀, τ)δ(x₀, 0), which measures the probability for a path to
go through the same position x₀ after a Euclidean time τ. Expanding the expectation
value over an eigenstate basis of H, one obtains

⟨δ(x₀, τ) δ(x₀, 0)⟩ = Σ_k ⟨φ_k| exp(−(β−τ)H) δ(x₀) exp(−τH) δ(x₀) |φ_k⟩ / Σ_k ⟨φ_k| exp(−βH) |φ_k⟩
 = Σ_{k,l} |⟨φ_k| δ(x₀) |φ_l⟩|² exp(−(β−τ)E_k) exp(−τE_l) / Σ_k exp(−βE_k)
 →(β→∞) |⟨φ₀| δ(x₀) |φ₀⟩|² + |⟨φ₁| δ(x₀) |φ₀⟩|² exp(−τ(E₁ − E₀)) + ···   (8.11)

where a complete set of states Σ_l |φ_l⟩⟨φ_l| was inserted to go from the first to the second
line. The final expression shows that the connected correlator ⟨δ(x₀, τ)δ(x₀, 0)⟩ − ⟨δ(x₀)⟩²
approaches 0 exponentially fast, and that the rate of decay (in Euclidean time τ) gives
the energy gap (E₁ − E₀) between the groundstate and the first excited state. Contrary
to the groundstate energy E₀, the energy gap is insensitive to an arbitrary shift of
the energies, as happens in quantum field theory under renormalization. Therefore,
the decay of the connected two-point correlator is the crucial observable with which a
physical, experimentally measurable property of the theory can be extracted.
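In practice one extracts the gap from the measured correlator through an "effective mass"; a small sketch (our own convention, lattice units Δτ = 1):

#include <cmath>
#include <cstdio>
#include <vector>

// m_eff(tau) = ln(C(tau)/C(tau+1)) for the connected correlator C; a plateau
// at large tau gives the gap E1 - E0, i.e. the inverse correlation length.
void print_effective_mass(const std::vector<double>& C) {
    for (std::size_t t = 0; t + 1 < C.size(); ++t)
        std::printf("tau = %zu   m_eff = %g\n", t, std::log(C[t] / C[t + 1]));
}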

8.3 Path integrals: from classical mechanics to field theory

Consider first the case of a single classical particle with Hamiltonian H = p²/2m + V.
Hamilton's equations describe the time evolution of this particle:

dq/dt = +∂H/∂p   ⇒   q̇ = p/m,   (8.12)
dp/dt = −∂H/∂q   ⇒   ṗ = −∇V.   (8.13)

The usual point of view is to start from initial conditions (q, q̇) at time t = 0, and
evolve q and q̇ according to the coupled ODEs above. Note, however, that the boundary
conditions can instead be split between the beginning and the end of the evolution. In
particular, one can specify the beginning and ending coordinates (q(0), q(t)). There
is a unique path q(t′), t′ ∈ [0, t], which satisfies the above equations, and it specifies the
initial and final velocities. To find this path, it is convenient to change viewpoint and
consider the action S = ∫₀ᵗ dt′ L(q, q̇), where L is the Lagrangian ½mq̇² − V(q). One
then invokes the principle of stationary (or least) action, from which the Euler-Lagrange
equations can be derived:

∂L/∂q_i = d/dt ( ∂L/∂q̇_i ).   (8.14)

Note that the notion of action is more general than the notion of Hamiltonian: some
systems have an action but no Hamiltonian. This was in fact the original motivation for
Feynman to develop the path integral formalism in his Ph.D. thesis: he was interested
in systems having non-local interactions in time (with an interaction term q(t)q(t′)).
Consider now many particles interacting with each other, with Lagrangian
L = Σ_i ½mq̇_i² − V({q_i}), and take the number q_i to represent the state of particle i, whose
x and y coordinates are fixed on a square grid of spacing a. Furthermore, take the
interaction between particles to be of the form Σ_{⟨ij⟩}(q_i − q_j)², where ⟨ij⟩ stands for
nearest neighbours on the grid, as if springs were connecting i and j. Low-energy
configurations will then have almost the same q-value at neighbouring grid points, so
that the configuration {q_i} will be smooth and look like a mattress as in Fig. 8.2.
When the grid spacing a is reduced to zero, the configuration {q_i} becomes a classical
field q(x⃗) (x⃗ ∈ R² in this example), with infinitely many degrees of freedom. The
action of this field is specified by its Lagrangian density L = ½ ∂_μq ∂^μq − ½ m₀² q² − V(q),
where the first term is the continuum version of (q_i − q_j)² (with ∂_μq ∂^μq = q̇² − |∇⃗q|²),
the second is a harmonic term corresponding to a mass, and the last term

Figure 8.2: A field configuration q(x, y), discretized on a square grid of spacing a.
describes the local (anharmonic) interaction, e.g. ∝ q⁴.¹ The action is then
S = ∫₀ᵗ dt′ ∫dx dy L(q(x, y, t′)). Note that the Lagrangian density L satisfies several
symmetries: it is invariant under translations and rotations in the (x, y) plane, and
under the sign flip q(x⃗) → −q(x⃗) ∀x⃗, at least for an interaction ∝ q⁴. Each continuous
symmetry leads to a conserved quantity: energy-momentum for translations, angular
momentum for rotations. We will see the importance of the discrete symmetry q → −q
later.
Now we can consider quantum effects on the above system. As a result of quantum
fluctuations, the path from q(t = 0) to q(t) is no longer unique. As in quantum
mechanics, all paths contribute with an amplitude exp(+(i/ħ)∫₀ᵗ dt′ L), from which it
becomes clear that the magnitude of the relevant fluctuations in the action is ħ. One
can then follow the strategy of Sec. 8.2 and make time purely imaginary, by introducing
τ ≡ it ∈ R. The immediate result is that i dt above becomes dτ, so that the amplitude
becomes real. The other change is q̇² → −(∂_τ q)², so that an overall minus sign can be
taken out, leading to the amplitude

exp(−S_E/ħ)   (8.15)

where S_E = ∫dτ dx⃗ L_E is the Euclidean action, and

L_E = ½ (∂_μφ)² + ½ m₀² φ² + V(φ)   (8.16)

is the Euclidean Lagrangian density; the field q is now denoted by φ, as is customary.
The first term (∂_μφ)² = (∂_τφ)² + |∇⃗φ|² is now symmetric between space and time, so
that the metric is Euclidean in (d+1) dimensions (d spatial dimensions, plus Euclidean
time).
It is worth summarizing the sign flips which occurred in the kinetic energy T and
the potential energy U during the successive steps we have just taken. We started with
the Hamiltonian H = T + U, then considered the Lagrangian L = T − U. Going to
imaginary time changes the sign of T. Finally, we take out an overall minus sign in the
definition of L_E, so that paths with the smallest action are the most likely. This leads
to the Euclidean Lagrangian density L_E = T + U, which is identical to the Hamiltonian
we started from, except that the momentum p is replaced by the derivative ∂₀φ.

¹ One could think of additional interaction terms, constructed from higher derivatives of the field. They are not considered here because they lead to non-renormalizable theories.
It is also useful to perform some elementary dimensional analysis. Since it appears
in the exponent of the amplitude Eq. (8.15), the Euclidean action S_E is dimensionless
(we set ħ = 1). Hence the Euclidean Lagrangian density has mass dimension (d+1),
and therefore the field φ has mass dimension (d−1)/2. This is interesting, because if we
take the normal number of spatial dimensions d = 3 and the interaction term
V(φ) = (g₀/4!)φ⁴, then g₀ is a dimensionless number. It makes sense then to perform a
Taylor expansion of this theory in powers of g₀ about the free case g₀ = 0: this is the
scope of perturbation theory. Here, we will instead try to obtain non-perturbative
results, by directly simulating the theory at some large value of g₀.
We have so far considered a field φ(x⃗, τ) which takes values in R. It is easy to
generalize the Lagrangian density Eq. (8.16) to cases where φ takes values in C, or has
several components forming a vector φ⃗ = (φ₁, ..., φ_N), perhaps with a constraint
Σ_{k=1}^{N} φ_k² = 1, depending on the desired symmetries. Typically, the Euclidean
Lagrangian density is the starting, defining point of a quantum field theory.
Finally, we can introduce a finite temperature T, exactly as we did in the
quantum-mechanical case (and setting ħ = 1): we make the Euclidean time direction compact,
τ ∈ [0, β = 1/T], and impose periodic boundary conditions on the field φ:
φ(x⃗, β) = φ(x⃗, 0) ∀x⃗. This works for the same reason as in quantum mechanics: the
partition function

Z = ∫_periodic Dφ exp(−∫₀^β dτ ∫dx⃗ L_E(φ))   (8.17)

is a weighted sum over eigenstates of the Hamiltonian: Z = Σ_i exp(−βE_i). We will be
concerned here with the T = 0 situation. In that case, the two-point correlator provides
a means to measure the mass gap (E₁ − E₀):

⟨φ(x⃗, τ) φ(x⃗, 0)⟩ − ⟨φ⟩² = c² exp(−(E₁ − E₀) τ)   (8.18)

or equivalently the correlation length ξ = (E₁ − E₀)⁻¹. The lowest energy state, with
energy E₀, is the vacuum, which contains particle-antiparticle pairs because of quantum
fluctuations, but whose net particle number is zero. The first excited state, with energy
E₁, contains one particle at rest. Call its mass m_R = E₁ − E₀. This mass can
be obtained from the decay of the two-point correlator, as m_R = 1/ξ. It is the
true, measurable mass of the theory, and it is not equal to the mass m₀ used in the
Lagrangian density: m_R is called the renormalized mass, while m₀ is the bare mass.
Similarly, the true strength g_R of the interaction can be measured from 4-point
correlators of φ, and it is not equal to the coupling g₀ used in the Lagrangian density: g₀ is
the bare coupling, g_R the renormalized coupling.


8.4 Numerical study of φ⁴ theory

Here we show that very important results in Quantum Field Theory can be extracted
from simulations of the 4d Ising model. Our starting point is the continuum Euclidean
action

S_E = ∫dτ d³x [ ½ (∂_μφ₀)² + ½ m₀² φ₀² + (g₀/4!) φ₀⁴ ]   (8.19)

where the subscript 0 is to emphasize that we are dealing with bare quantities (field,
mass and coupling), and the coupling normalization 1/4! is conventional. We discretize
the theory on a hypercubic (4d) lattice with spacing a.² After the usual replacements
∫dτ d³x → a⁴ Σ_x and ∂_μφ₀ → (φ₀(x + aμ̂) − φ₀(x))/a, we end up with the lattice action

S_L = Σ_x [ −2κ Σ_μ φ(x)φ(x + μ̂) + φ(x)² + λ (φ(x)² − 1)² ]   (8.20)

where we use the new variables φ, λ and κ defined by

a φ₀ = √(2κ) φ,   (8.21)
a² m₀² = (1 − 2λ)/κ − 8,   (8.22)
g₀ = 6λ/κ².   (8.23)

Note in particular the multiplication of φ₀ by a to form a dimensionless variable, since
φ₀ has mass dimension 1. The original formulation had two bare parameters, m₀ and
g₀; they have been mapped into the two bare parameters λ and κ. This discretized theory
can be simulated by standard Monte Carlo algorithms like Metropolis, on a hypercubic
lattice of L sites in each direction. To minimize finite-size effects, periodic boundary
conditions are usually imposed in each direction.
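A minimal C++ sketch (our own) of such a Metropolis sweep for the lattice action Eq. (8.20) on an L⁴ lattice with periodic boundaries; only the part of S_L containing the chosen site enters the accept/reject step:

#include <cmath>
#include <random>
#include <vector>

void metropolis_sweep(std::vector<double>& phi, int L, double kappa,
                      double lambda, double delta, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1.0, 1.0), a(0.0, 1.0);
    const int V = L * L * L * L;
    auto site = [L](const int c[4]) {          // coordinates -> linear index
        int s = 0;
        for (int mu = 3; mu >= 0; --mu) s = s * L + ((c[mu] + L) % L);
        return s;
    };
    for (int x = 0; x < V; ++x) {
        int c[4] = {x % L, (x / L) % L, (x / (L * L)) % L, x / (L * L * L)};
        double nb = 0.0;                       // sum over the 8 neighbours
        for (int mu = 0; mu < 4; ++mu) {
            c[mu] += 1; nb += phi[site(c)];
            c[mu] -= 2; nb += phi[site(c)];
            c[mu] += 1;                        // restore the coordinate
        }
        auto S = [&](double p) {               // local part of S_L at site x
            return -2.0 * kappa * p * nb + p * p
                   + lambda * (p * p - 1.0) * (p * p - 1.0);
        };
        const double p_new = phi[x] + delta * u(rng);
        if (a(rng) < std::exp(-(S(p_new) - S(phi[x])))) phi[x] = p_new;
    }
}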
The behaviour of our system is easy to understand qualitatively in the two limits
λ = 0 and λ = +∞.
When λ = 0, the interaction is turned off. This is the free theory, which has two
phases depending on the value of κ: a disordered or symmetric phase, ⟨φ⟩ = 0, when
κ is small, and an ordered phase, ⟨φ⟩ ≠ 0, when κ is large. Thus, the symmetry φ → −φ
is spontaneously broken when κ > κ_c = 1/8, which corresponds to the vanishing of the
mass m₀.
When λ = +∞, fluctuations of φ away from the values ±1 cost an infinite amount of
energy. Thus, φ is restricted to ±1, and our theory reduces to the 4d Ising model with
coupling 2κ. As in lower dimensions, the Ising model undergoes a second-order phase
transition, corresponding to the spontaneous breaking of the symmetry φ → −φ, for a
critical value κ_c ≈ 0.075.
For intermediate values of λ, again a second-order transition takes place, leading to the
phase diagram depicted in Fig. 8.3.
² The lattice spacing is taken to be the same in space and in time for simplicity; one could consider different values a_s and a_τ.

Figure 8.3: Phase diagram of the lattice theory defined by Eq. (8.20), in the (λ, κ)
plane. The broken phase, ⟨φ⟩ ≠ 0, lies above the critical line, which runs from
κ_c = 1/8 at λ = 0 to κ_c ≈ 0.075 at λ = ∞; the symmetric phase, ⟨φ⟩ = 0, lies below.
The two phases are separated by a line of second-order phase transitions.
The existence of a second-order phase transition is crucial: it allows us to define a
continuum limit of our lattice theory. Remember that the true, renormalized mass
m_R can be extracted from the exponential decay of the 2-point correlator,

⟨φ(x)φ(y)⟩ − ⟨φ⟩² ∝_{|x−y|→∞} exp(−m_R |x − y|) = exp(−|x − y|/ξ)   (8.24)

(see Eq. (8.18)). On the lattice, we can only measure the dimensionless combination
a m_R = 1/(ξ/a), and the separation |x − y| can only be measured in lattice units, i.e. as
|x − y|/a. Taking the continuum limit a → 0 (while the physical mass m_R is fixed) forces
the correlation length measured in lattice units, ξ/a, to diverge. This only occurs when
the lattice theory has a second-order (or higher-order) phase transition.
Therefore, the interpretation of a second-order phase transition is different in
solid-state physics and in lattice field theory. In the first case, the lattice spacing has a
physical meaning, like the distance between two ions in a crystal: the lattice is "for
real", and the correlation length really diverges at a second-order critical point. In
lattice field theory, the correlation length is "for real", and the lattice spacing a shrinks
to zero at the critical point. This is illustrated in Fig. 8.4.
In this latter case, one must be careful that the physical box size (La) also shrinks
as a → 0. In order to obtain a controlled continuum limit at constant physical volume,
one must increase the number of lattice points L in each direction, keeping (La) fixed.
Going back to our φ⁴ theory, one sees that a continuum limit can be defined for
any value of the bare coupling λ ∈ [0, +∞], by tuning κ to its critical value κ_c(λ).
An interesting question is: what is the value of the true, renormalized coupling as a
function of λ? The answer is clear when λ = 0: the theory is free, and the coupling
is zero, whether bare or renormalized. To obtain a non-zero answer, a reasonable
Figure 8.4: Two different viewpoints on a second-order phase transition: in solid-state
physics (left), the crystal is real and the physical correlation length diverges; in
quantum field theory (right), the correlation length is real, and the lattice spacing
shrinks to zero. In both cases ξ/a diverges.
strategy is to maximize the bare coupling, and thus to consider the Ising limit λ = +∞.
The renormalized coupling is extracted from the strength of the 4-point spin correlation,
normalized as explained in the Exercises. The rather surprising answer is that the
renormalized coupling is zero, just as for λ = 0. In fact, the renormalized coupling is
always zero, for any choice of λ. In other words, the renormalized φ⁴ theory is free, no
matter the value of the bare coupling! The formal statement is that the φ⁴ theory is
"trivial". Note that this is only true in (3 + 1) dimensions. In lower dimensions, the
renormalized coupling is non-zero.
Now, why is this finding important? The Standard Model of particle interactions
contains a Higgs particle, which gives a mass to all other particles by coupling to them.
The field theory describing the Higgs particle is very much like the φ⁴ theory we have
just studied, except that the field is now a complex doublet (φ₁ + iφ₂, φ₃ + iφ₄). The bare
parameters are chosen so that the system is in the broken-symmetry phase, where φ
has a non-zero expectation value. The masses of all particles are proportional to ⟨φ⟩,
therefore it is crucial that ⟨φ⟩ ≠ 0. In turn, this symmetry breaking is only possible if
the coupling λ is non-zero. But, as we have seen, the true, renormalized value of λ is

zero. Therefore, we have a logical inconsistency. The consequence is that the Standard
Model cannot be the final, exact description of particle interactions. Some new physics
beyond the Standard Model must exist, and from the numerical study of the lattice
theory near κ_c(λ), one can set the energy scale for this new physics to become visible at
around 600-800 GeV. This argument is so powerful that it has been used in the design
of the Large Hadron Collider (LHC), recently turned on at CERN, which will study
collisions up to about 1000 GeV only.

8.5 Monte Carlo algorithms for the Ising model

It is most efficient to perform numerical simulations of the φ⁴ theory in its Ising limit.
The partition function to sample is then simply

Z = Σ_{{σ_x = ±1}} exp(+J Σ_{⟨xy⟩} σ_x σ_y)   (8.25)

with a homogeneous ferromagnetic interaction J = 2κ > 0 between pairs of spins σ_x, σ_y


at nearest-neighbour sites ⟨xy⟩. This partition function can be sampled by the general
Metropolis algorithm. However, the autocorrelation (Monte Carlo) time necessary to
produce statistically independent configurations grows with the correlation length as
ξ^z, where the dynamical critical exponent z is about 2. Simulations near the second-order
phase transition, where ξ diverges, thus become very demanding in computer resources. This
phenomenon is called critical slowing down.
Two alternatives to the Metropolis algorithm have been discovered, the cluster
algorithm (Swendsen & Wang, 1987) and the worm algorithm (Prokof'ev & Svistunov,
2001), which give z ≈ 0 to 0.5 (z varies with the dimensionality of the system) for the
Ising model. They have revolutionized the numerical study of the systems where they
work efficiently. To obtain interesting results on the renormalized coupling of the φ⁴
theory, either of these algorithms should be used.

8.6 Gauge theories

Of the four forces known in Nature, at least three (the strong, the weak and the electromagnetic forces) are described by gauge theories. In addition to the usual matter fields
(electrons, quarks), these theories contain gauge fields (photons, W and Z bosons,
gluons) which mediate the interaction: the interaction between, say, two electrons is
caused by the exchange of photons between them. This is analogous to the exchange
of momentum which occurs when one person throws a ball at another, and the other
catches it. In this way, two particles interact when they are far apart, even though
the Lagrangian contains only local interactions. Moreover, gauge theories are invariant
under a larger class of symmetries, namely local (x-dependent) symmetries.

8.6.1 QED

As an example, let us consider here a variant of Quantum ElectroDynamics (QED),


called scalar QED, where electrons are turned into bosons. A simple modification of

Figure 8.5: Graphical representation (left) of the gauge-invariant nearest-neighbour
interaction: φ*(x)φ(x + μ̂) becomes φ*(x)U_μ(x)φ(x + μ̂); (middle) an example of a
gauge-invariant 4-point correlation; (right) the smallest closed loop is the plaquette,
with associated matrix U_μ(x)U_ν(x + μ̂)U_μ*(x + ν̂)U_ν*(x).
the previous φ⁴ theory is required: in order to represent charged bosons, the field φ,
instead of being real, is made complex, φ(x) ∈ C. The continuum Euclidean action
becomes

S_E = ∫dτ d³x [ |∂_μφ₀|² + m₀² |φ₀|² + (g₀/4!) |φ₀|⁴ ]   (8.26)
and, after discretization on the lattice:

S_L = Σ_x [ −κ Σ_μ (φ*(x)φ(x + μ̂) + h.c.) + |φ(x)|² + λ (|φ(x)|² − 1)² ]   (8.27)

S_L is invariant under the global (x-independent) rotation φ(x) → exp(iα)φ(x) ∀x. The
idea is now to promote this symmetry to a local one, where α may depend on x. It is
clear that the derivative term φ*(x)φ(x + μ̂) is not invariant under this transformation.
Invariance is achieved by introducing new degrees of freedom, namely complex phases
(elements of U(1)) which live on the links between nearest neighbours. Call U_μ(x) =
exp(iθ_μ(x)) the link variable starting at site x in direction μ, and give it the initial value
1, i.e. θ_μ(x) = 0 ∀x, μ. Modify the derivative term as follows:

φ*(x)φ(x + μ̂) → φ*(x) U_μ(x) φ(x + μ̂)   (8.28)

This term is now invariant under a local transformation φ(x) → exp(iα(x))φ(x), with
α(x) ≠ α(x + μ̂), provided that U_μ(x) also transforms:

φ(x) → exp(iα(x)) φ(x),   (8.29)
U_μ(x) → exp(iα(x)) U_μ(x) exp(−iα(x + μ̂)).   (8.30)

The significance of the new variables U_μ(x) and of the new expression for the discretized
derivative can be elucidated by expressing θ_μ(x) = eaA_μ(x) and considering the
continuum limit a → 0. To lowest order in a, the derivative becomes the covariant
derivative D_μ = ∂_μ + ieA_μ, and the transformation Eq. (8.30) is a gauge transformation
for A_μ: A_μ(x) → A_μ(x) − (1/e) ∂_μ α(x). Thus, our link variables U_μ(x) represent the
gauge potential A_μ(x) associated with the electromagnetic field, and Eq. (8.28) describes
the interaction of our bosonic electrons with the photon. To complete the lattice
discretization of QED, what is missing is the energy of the electromagnetic field, namely
∫dx⃗ dτ ½ (|E⃗|²(x) + |B⃗|²(x)). We identify its lattice version below.
It becomes simple to construct n-point correlators of φ which are invariant under
the local transformation Eq. (8.30): all the φ fields need to be connected by "strings"
of gauge fields, made of products of gauge links U_μ, as in Fig. 8.5. Under a local
gauge transformation, the phase changes α(x) always cancel between φ and the
attached U, or between two successive U's.
There exists another type of gauge-invariant object. Consider the product of links
∏_{x→x} U_μ around a closed loop, starting and ending at x. It transforms as

∏_{x→x} U_μ → exp(iα(x)) ( ∏_{x→x} U_μ ) exp(−iα(x)),   (8.31)

which is invariant since all the U's are complex phases which commute with each other.
Thus, another valid term to add to the [real] lattice action is the real part of any closed
loop, summed over translations and rotations to preserve the other desired symmetries.
The simplest version of such a term is to take elementary square loops of size a, made
of 4 links going around a plaquette: P_{μν}(x) ≡ U_μ(x) U_ν(x + μ̂) U_μ*(x + ν̂) U_ν*(x). Thus,
the complete action of our scalar QED theory is

S = Σ_x [ −κ Σ_μ (φ*(x)U_μ(x)φ(x + μ̂) + h.c.) + |φ(x)|² + λ (|φ(x)|² − 1)² ]
    − β Σ_x Σ_{μ≠ν} Re(P_{μν}(x))   (8.32)
The plaquette term looks geometrically like a curl. Indeed, substituting U_μ(x) =
exp(ieaA_μ(x)) and expanding to leading order in a yields

Re(P_{μν}(x)) ≈ 1 − ½ e² a⁴ (∂_μA_ν − ∂_νA_μ)²   (8.33)

so that the last term in Eq. (8.32) becomes, up to an irrelevant constant,
β e² ½ ∫dx⃗ dτ (|E⃗|² + |B⃗|²), where one has expressed the electric and magnetic fields E⃗
and B⃗ in terms of the gauge potential A_μ. It suffices then to set β = 1/e² to recover the
energy of an electromagnetic field.
Note that it is our demand to preserve invariance under the local transformation
Eq. (8.30) which has led us to the general form of the action Eq. (8.32). We could have
considered larger loops instead of plaquettes, but in the continuum limit a → 0
these loops would yield the same continuum action. So the form of the QED action is
essentially dictated by the local gauge symmetry.
One can now study the scalar QED theory defined by Eq. (8.32) using Monte Carlo
simulations, for any value of the bare couplings (β, κ). Contrary to continuum
perturbation theory, one is not limited to small values of e (i.e. large β).

8.6.2 QCD

Other gauge theories have basically the same discretized action eq. (8.32). What changes is the group to which the link variables $U_\mu(x)$ belong. For QCD, these variables represent the gluon field, which mediates the interaction between quarks. Quarks come in 3 different colors and are thus represented by a 3-component vector at each site³. Hence, the link variables are $3 \times 3$ matrices. The gauge-invariant piece associated with a closed loop is the trace of the corresponding matrix, thanks to cyclic invariance of the trace in eq. (8.31). No other changes are required to turn lattice QED into lattice QCD!
As emphasized in Sec. 8.4, the Euclidean Lagrangian density defines the lattice theory. The continuum limit can be obtained by approaching a critical point. For QCD, the critical point is $\beta \to +\infty$, i.e. $g_0 = 0$, since $\beta \propto 1/g_0^2$ as in QED. As we have seen, the vanishing of the bare coupling does not imply much about the true, renormalized coupling.
Figure 8.6: Potential $V(r)$ between a static quark and antiquark, as a function of their separation $r$. Data obtained at 2 values of the lattice spacing (finite values of $\beta$) are extrapolated to the continuum limit ($\beta \to \infty$). At short distance, the potential is Coulomb-like because the interaction becomes weak (the solid line shows the prediction of perturbation theory). At large distance, the potential rises linearly, which shows that it takes an infinite energy to separate the two objects: quarks are confined. A simple model of a vibrating string (dotted line) gives a good description, down to remarkably short distances.
³ This would be the full story if quarks were bosons. Because they are fermions, each color component is in fact a 4-component vector, called a Dirac spinor.
8.7 Overview
The formulation of lattice QCD is due to K. Wilson (1974). First Monte Carlo simulations were performed by M. Creutz in 1980, on a $4^4$ lattice. A goal of early simulations was to check whether quarks were confined. This can be demonstrated by considering the potential $V(r)$ between a quark and an anti-quark separated by distance $r$. Contrary to the case of QED, where the potential $\sim 1/r$ saturates as $r \to \infty$, in QCD the potential keeps rising linearly, $V(r) \approx \sigma r$, so that it takes an infinite amount of energy to completely separate the quark and anti-quark. Equivalently, the force between the two objects goes to a constant $\sigma$. The energy of the quark-antiquark pair grows as if it were all concentrated in a tube of finite diameter. In fact, describing the quark-antiquark pair as an infinitely thin vibrating string is a very good approximation, as shown by the state-of-the-art Monte Carlo data of Fig. 8.6, now obtained on $64^4$ lattices. To control discretization errors, the lattice spacing must be about 1/10th of the correlation length or less. To control finite-volume effects, the lattice size must be about 3 times the correlation length or more. This implies lattices of minimum size $30^4$, which have only come within reach of a reasonable computer budget in recent years.
The above simulations considered only the effect of gluons: since gluons carry a color charge (in contrast to the photon, which is electrically neutral), they can lead to complex effects like the confinement of charges introduced in the gluon system. But to study QCD proper, quarks must be simulated also. This is more difficult because quarks are fermions, i.e. anti-commuting variables. The strategy is to integrate them out analytically. This integration induces a more complicated interaction among the remaining gluonic link variables. Actually, this interaction is non-local, which increases the algorithmic complexity of the Monte Carlo simulation. An efficient, exact simulation algorithm, called Hybrid Monte Carlo, was only discovered in 1987 (see bibliography). Even so, the simulation of so-called full QCD, on lattices of size $30^4$ or larger, requires a computer effort of O(1) Teraflop $\times$ year, which has forced the community to evolve into large collaborations using dedicated computers.
Using these resources, one is now able to reproduce the masses of quark and antiquark bound states, i.e. mesons and baryons, to a few percent accuracy. The impact of
neglecting the effect of quarks or including them is nicely illustrated in Fig. 8.7. Some
predictions have also been made for the properties of mesons made of charm or bottom
quarks, currently being studied in particle accelerators.
Another essential purpose of QCD simulations is to quantify QCD effects in experimental tests of the electroweak Standard Model. By checking whether experimental
results are consistent with the Standard Model, one can expose inconsistencies which
would be the signature of new, beyond-the-standard-model physics. To reveal such inconsistencies, one must first determine precisely the predictions of the Standard Model.
This entails the determination of QCD effects, which can only be obtained from lattice
QCD simulations.
Finally, another direction where QCD simulations have been playing a major role is that of high temperature. The confinement of quarks, which is an experimental fact at normal temperatures, is believed to disappear at very high temperatures, $O(100)\,\mathrm{MeV} \approx O(10^{12})\,\mathrm{K}$. This new state of matter, where quarks and gluons form a plasma, is being probed by accelerator experiments which smash heavy ions against each other. Lattice simulations provide an essential tool to predict the properties of this plasma.
[Plot: ratios LQCD/Expt for $f_\pi$, $f_K$, $3M_\Xi - M_N$, $2M_{B_s} - M_\Upsilon$, and quarkonium level splittings (1P$-$1S), (1D$-$1S), (2P$-$1S), (3S$-$1S), shown in two panels for $n_f = 0$ and $n_f = 3$.]
Figure 8.7: Comparison of lattice and experimental measurements of various quantities. The left figure shows the ratios lattice/experiment for a lattice model which neglects quark effects (the number $n_f$ of quark flavors is set to zero). The right figure shows the same ratios for a lattice model including 3 quark flavors, all with equal masses.
8.8 Useful references
• Computational Physics, by J.M. Thijssen, Cambridge Univ. Press (2007), second edition. See Chapter 15.
• Quantum Field Theory in a Nutshell, by A. Zee, Princeton Univ. Press (2003). This book reads like a thriller.
• Hybrid Monte Carlo, by S. Duane, A.D. Kennedy, B.J. Pendleton and D. Roweth, Phys. Lett. B 195 (1987) 216.
Appendix A
Numerical methods

A.1 Numerical root solvers
The purpose of a root solver is to find a solution (a root) to the equation

$f(x) = 0,$   (A.1)

or in general to a multi-dimensional equation

$\vec{f}(\vec{x}) = 0.$   (A.2)
Numerical root solvers should be well known from the numerics courses and we will
just review three simple root solvers here. Keep in mind that in any serious calculation
it is usually best to use a well optimized and tested library function over a hand-coded
root solver.
A.1.1 The Newton and secant methods

The Newton method is one of best known root solvers, however it is not guaranteed to
converge. The key idea is to start from a guess x0 , linearize the equation around that
guess
f (x0 ) + (x x0 )f (x0 ) = 0
(A.3)

and solve this linearized equation to obtain a better estimate x1 . Iterating this procedure
we obtain the Newton method:
xn+1 = xn

f (xn )
.
f (xn )

(A.4)

If the derivative f is not known analytically, as is the case in our shooting problems,
we can estimate it from the difference of the last two points:
f (xn )

f (xn ) f (xn1 )
xn xn1

(A.5)

Substituting this into the Newton method (A.4) we obtain the secant method:
xn+1 = xn (xn xn1 )
a

f (xn )
.
f (xn ) f (xn1 )

(A.6)
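As an illustration, a minimal, self-contained implementation of the secant iteration (A.6) might look as follows (a sketch; the test function, tolerance, and names are our own choices):

#include <cmath>
#include <iostream>

// Secant method, eq. (A.6): iterate until |f(x)| is small
// or the maximum number of iterations is reached.
template <class F>
double secant(F f, double x0, double x1,
              double tol = 1e-12, int maxiter = 100) {
  double f0 = f(x0), f1 = f(x1);
  for (int n = 0; n < maxiter && std::abs(f1) > tol; ++n) {
    double x2 = x1 - (x1 - x0) * f1 / (f1 - f0);  // eq. (A.6)
    x0 = x1; f0 = f1;
    x1 = x2; f1 = f(x1);
  }
  return x1;
}

int main() {
  // find the root of f(x) = cos(x) - x near x = 0.74
  double root = secant([](double x) { return std::cos(x) - x; }, 0.0, 1.0);
  std::cout << "root = " << root << "\n";  // approx 0.739085
}

In line with the warning above, the iteration count is capped: if $f(x_n) - f(x_{n-1})$ happens to vanish or the iteration diverges, a production code should detect this and fall back to a bracketing method.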
The Newton method can easily be generalized to higher-dimensional equations by defining the matrix of derivatives

$A_{ij}(\vec{x}) = \frac{\partial f_i(\vec{x})}{\partial x_j}$   (A.7)

to obtain the higher-dimensional Newton method

$\vec{x}_{n+1} = \vec{x}_n - A^{-1} \vec{f}(\vec{x}_n).$   (A.8)

If the derivatives $A_{ij}(\vec{x})$ are not known analytically they can be estimated through finite differences:

$A_{ij}(\vec{x}) \approx \frac{f_i(\vec{x} + h_j \vec{e}_j) - f_i(\vec{x})}{h_j}$   with   $h_j \approx \sqrt{\varepsilon}\, x_j,$   (A.9)

where $\varepsilon$ is the machine precision (about $10^{-16}$ for double precision floating point numbers on most machines).
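A sketch of the higher-dimensional Newton method (A.8) with the finite-difference Jacobian (A.9) is given below for a hypothetical two-dimensional test system, where the $2 \times 2$ linear solve can be done by Cramer's rule (for larger systems one would call a library linear solver instead):

#include <array>
#include <cmath>
#include <iostream>

using Vec2 = std::array<double, 2>;

// hypothetical test system: f1 = x^2 + y^2 - 1 (unit circle),
// f2 = y - x^2 (parabola); the roots are their intersections
Vec2 f(const Vec2& v) {
  return {v[0] * v[0] + v[1] * v[1] - 1.0, v[1] - v[0] * v[0]};
}

int main() {
  Vec2 x = {1.0, 1.0};  // initial guess
  for (int n = 0; n < 50; ++n) {
    Vec2 fx = f(x);
    // finite-difference Jacobian, eq. (A.9); sqrt(eps) ~ 1e-8,
    // and the +1 guards against a vanishing x_j
    double A[2][2];
    for (int j = 0; j < 2; ++j) {
      double h = 1e-8 * (std::abs(x[j]) + 1.0);
      Vec2 xh = x;
      xh[j] += h;
      Vec2 fh = f(xh);
      for (int i = 0; i < 2; ++i) A[i][j] = (fh[i] - fx[i]) / h;
    }
    // Newton step, eq. (A.8): solve A dx = f(x) by Cramer's rule
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    double dx0 = (fx[0] * A[1][1] - fx[1] * A[0][1]) / det;
    double dx1 = (fx[1] * A[0][0] - fx[0] * A[1][0]) / det;
    x[0] -= dx0;
    x[1] -= dx1;
    if (std::abs(dx0) + std::abs(dx1) < 1e-12) break;
  }
  std::cout << "root: (" << x[0] << ", " << x[1] << ")\n";
  // converges to x ~ 0.786151, y ~ 0.618034
}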
A.1.2 The bisection method and regula falsi

Both the bisection method and the regula falsi require two starting values x0 and x1
surrounding the root, with f (x0 ) < 0 and f (x1 ) > 0 so that under the assumption of a
continuous function f there exists at least one root between x0 and x1 .
The bisection method performs the following iteration
1. define a mid-point xm = (x0 + x1 )/2.
2. if signf (xm ) = signf (x0 ) replace x0 xm otherwise replace x1 xm
until a root is found.
The regula falsi works in a similar fashion:
1. estimate the function f by a straight line from x0 to x1 and calculate the root of
this linearized function: x2 = (f (x0 )x1 f (x1 )x0 )/(f (x1 ) f (x0 )
2. if signf (x2 ) = signf (x0 ) replace x0 x2 otherwise replace x1 x2
In contrast to the Newton method, both of these two methods will always find a
root.
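A minimal bisection sketch (again our own illustration) shows how little is needed; the sign test works for either orientation of the bracket:

#include <cmath>
#include <iostream>

// Bisection: x0 and x1 must bracket the root, i.e. f(x0) and f(x1)
// have opposite signs; the bracket is halved until shorter than tol.
template <class F>
double bisect(F f, double x0, double x1, double tol = 1e-12) {
  double f0 = f(x0);
  while (std::abs(x1 - x0) > tol) {
    double xm = 0.5 * (x0 + x1);
    double fm = f(xm);
    if ((fm > 0) == (f0 > 0)) { x0 = xm; f0 = fm; }  // same sign as f(x0)
    else                      { x1 = xm; }
  }
  return 0.5 * (x0 + x1);
}

int main() {
  // same test equation as before: cos(x) = x, bracketed by [0, 1]
  std::cout << bisect([](double x) { return std::cos(x) - x; }, 0.0, 1.0)
            << "\n";  // approx 0.739085
}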
A.1.3 Optimizing a function

These root solvers can also be used for finding an extremum (minimum or maximum)
of a function f (~x), by looking a root of
f (~x) = 0.

(A.10)

While this is efficient for one-dimensional problems, but better algorithms exist.
In the following discussion we assume, without loss of generality, that we want to minimize a function. The simplest algorithm for a multi-dimensional optimization is steepest descent, which always looks for a minimum along the direction of steepest gradient: starting from an initial guess $\vec{x}_n$, a one-dimensional minimization is applied to determine the value of $\lambda$ which minimizes

$f(\vec{x}_n + \lambda \nabla f(\vec{x}_n)),$   (A.11)

and then the next guess $\vec{x}_{n+1}$ is determined as

$\vec{x}_{n+1} = \vec{x}_n + \lambda \nabla f(\vec{x}_n).$   (A.12)
While this method is simple, it can be very inefficient if the landscape of the function $f$ resembles a long and narrow valley: the one-dimensional minimization will mainly improve the estimate transverse to the valley but takes a long time to traverse down the valley to the minimum. A better method is the conjugate gradient algorithm, which approximates the function locally by a paraboloid and uses the minimum of this paraboloid as the next guess. This algorithm can find the minimum of a long and narrow parabolic valley in one iteration! For this and other, even better, algorithms we recommend the use of library functions.

One final word of warning: all of these minimizers will only find a local minimum. Whether this local minimum is also the global minimum can never be decided purely numerically. A necessary but never sufficient check is thus to start the minimization not only from one initial guess but to try many initial points and check for consistency in the minimum found.
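The sketch below illustrates the steepest descent iteration (A.11)-(A.12) on a deliberately narrow valley, $f(x,y) = x^2 + 100 y^2$. For brevity the exact one-dimensional minimization is replaced by a crude backtracking search, an assumption of this illustration rather than part of the algorithm as described above; the printed iteration count makes the zigzagging inefficiency visible.

#include <array>
#include <cmath>
#include <iostream>

using Vec2 = std::array<double, 2>;

// a long, narrow valley: f(x, y) = x^2 + 100 y^2
double f(const Vec2& v) { return v[0] * v[0] + 100.0 * v[1] * v[1]; }
Vec2 grad(const Vec2& v) { return {2.0 * v[0], 200.0 * v[1]}; }

int main() {
  Vec2 x = {1.0, 1.0};
  int n = 0;
  for (; n < 100000; ++n) {
    Vec2 g = grad(x);
    if (g[0] * g[0] + g[1] * g[1] < 1e-20) break;  // gradient ~ 0: done
    // crude backtracking substitute for the 1d minimization in (A.11):
    // halve lambda until the step along -grad f decreases f
    double lambda = 1.0;
    Vec2 next = x;
    while (lambda > 1e-20) {
      Vec2 trial = {x[0] - lambda * g[0], x[1] - lambda * g[1]};
      if (f(trial) < f(x)) { next = trial; break; }
      lambda *= 0.5;
    }
    if (next == x) break;  // no descent step found: converged
    x = next;  // eq. (A.12), written with an explicit minus so lambda > 0
  }
  std::cout << n << " iterations, minimum near ("
            << x[0] << ", " << x[1] << ")\n";
}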
A.2 The Lanczos algorithm
Sparse matrices with only O(N) non-zero elements are very common in scientific simulations. We have already encountered them in the winter semester when we discretized partial differential equations. Now we have reduced the transfer matrix of the Ising model to a sparse matrix product. We will later see that the quantum mechanical Hamilton operators of lattice models are also sparse.

The importance of sparsity becomes obvious when considering the cost of matrix operations, as listed in Table A.1. For large N the sparsity leads to memory and time savings of several orders of magnitude.

Here we will discuss the iterative calculation of a few of the extreme eigenvalues of a matrix by the Lanczos algorithm. Similar methods can be used to solve sparse linear systems of equations.
To motivate the Lanczos algorithm we will first take a look at the power method for a matrix $A$. Starting from a random initial vector $u_1$ we calculate the sequence

$u_{n+1} = \frac{A u_n}{\|A u_n\|},$   (A.13)

which converges to the eigenvector of the largest eigenvalue of the matrix $A$. The Lanczos algorithm optimizes this crude power method.
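A sketch of the power method (A.13), with the sparse matrix entering only through its matrix-vector product (here an illustrative tridiagonal stencil), might look as follows. Note how slowly it converges when the two largest eigenvalues are close, which is exactly what the Lanczos algorithm improves upon:

#include <cmath>
#include <iostream>
#include <vector>

// Power method, eq. (A.13), for a sparse matrix defined by its
// matrix-vector product: here A = tridiag(-1, 2, -1), a 1d Laplacian.
int main() {
  const int N = 100;
  std::vector<double> u(N, 1.0), v(N);
  for (int iter = 0; iter < 2000; ++iter) {
    for (int i = 0; i < N; ++i) {          // v = A u
      v[i] = 2.0 * u[i];
      if (i > 0)     v[i] -= u[i - 1];
      if (i < N - 1) v[i] -= u[i + 1];
    }
    double norm = 0.0;
    for (double x : v) norm += x * x;
    norm = std::sqrt(norm);
    for (int i = 0; i < N; ++i) u[i] = v[i] / norm;  // eq. (A.13)
  }
  double lambda = 0.0;                     // Rayleigh quotient u.(A u)
  for (int i = 0; i < N; ++i) {
    double Au = 2.0 * u[i];
    if (i > 0)     Au -= u[i - 1];
    if (i < N - 1) Au -= u[i + 1];
    lambda += u[i] * Au;
  }
  std::cout << "largest eigenvalue ~ " << lambda << "\n";  // close to 4
}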
Table A.1: Time and memory complexity for operations on sparse and dense $N \times N$ matrices

operation                            time                   memory
storage
  dense matrix                       --                     O(N^2)
  sparse matrix                      --                     O(N)
matrix-vector multiplication
  dense matrix                       O(N^2)                 O(N^2)
  sparse matrix                      O(N)                   O(N)
matrix-matrix multiplication
  dense matrix                       O(N^(ln 7 / ln 2))     O(N^2)
  sparse matrix                      O(N) ... O(N^2)        O(N) ... O(N^2)
all eigenvalues and vectors
  dense matrix                       O(N^3)                 O(N^2)
  sparse matrix (iterative)          O(N^2)                 O(N^2)
some eigenvalues and vectors
  dense matrix (iterative)           O(N^2)                 O(N^2)
  sparse matrix (iterative)          O(N)                   O(N)
Lanczos iterations
The Lanczos algorithm builds a basis $\{v_1, v_2, \ldots, v_M\}$ for the Krylov subspace $K_M = \mathrm{span}\{u_1, u_2, \ldots, u_M\}$, which is constructed by $M$ iterations of equation (A.13). This is done by the following iterations:

$\beta_{n+1} v_{n+1} = A v_n - \alpha_n v_n - \beta_n v_{n-1},$   (A.14)

where

$\alpha_n = v_n^\dagger A v_n, \qquad \beta_n = |v_n^\dagger A v_{n-1}|.$   (A.15)

As the orthogonality condition

$v_i^\dagger v_j = \delta_{ij}$   (A.16)

does not determine the phases of the basis vectors, the $\beta_i$ can be chosen to be real and positive. As can be seen, we only need to keep three vectors of size $N$ in memory, which makes the Lanczos algorithm very efficient when compared to dense matrix eigensolvers, which require storage of order $N^2$.

In the Krylov basis the matrix $A$ is tridiagonal:

$T^{(n)} := \begin{pmatrix} \alpha_1 & \beta_2 & 0 & \cdots & 0 \\ \beta_2 & \alpha_2 & \ddots & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \beta_n \\ 0 & \cdots & 0 & \beta_n & \alpha_n \end{pmatrix}.$   (A.17)

The eigenvalues $\{\tau_1, \ldots, \tau_M\}$ of $T$ are good approximations of the eigenvalues of $A$. The extreme eigenvalues converge very fast. Thus $M \ll N$ iterations are sufficient to obtain the extreme eigenvalues.
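A minimal sketch of the iteration (A.14)-(A.15) is shown below, again with the matrix entering only through its matrix-vector product (the same illustrative stencil as above). Diagonalizing the small tridiagonal matrix $T$ of eq. (A.17) is left to a library routine (e.g. LAPACK's dstev); the sketch only builds the $\alpha_n$ and $\beta_n$:

#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// One pass of the Lanczos iteration, eqs. (A.14)-(A.15), storing only
// three vectors of size N; A = tridiag(-1, 2, -1) as in the power method.
void matvec(const std::vector<double>& u, std::vector<double>& v) {
  const int N = (int)u.size();
  for (int i = 0; i < N; ++i) {
    v[i] = 2.0 * u[i];
    if (i > 0)     v[i] -= u[i - 1];
    if (i < N - 1) v[i] -= u[i + 1];
  }
}

int main() {
  const int N = 1000, M = 50;              // M << N iterations
  std::vector<double> vprev(N, 0.0), v(N), w(N);
  std::vector<double> alpha, beta;

  std::mt19937 rng(1);
  std::uniform_real_distribution<double> uni(-1.0, 1.0);
  double norm = 0.0;
  for (double& x : v) { x = uni(rng); norm += x * x; }
  norm = std::sqrt(norm);
  for (double& x : v) x /= norm;           // normalized random start v_1

  double b = 0.0;                          // beta_1 = 0
  for (int n = 0; n < M; ++n) {
    matvec(v, w);                          // w = A v_n
    double a = 0.0;
    for (int i = 0; i < N; ++i) a += v[i] * w[i];  // alpha_n, eq. (A.15)
    alpha.push_back(a);
    for (int i = 0; i < N; ++i)            // eq. (A.14)
      w[i] -= a * v[i] + b * vprev[i];
    b = 0.0;
    for (double x : w) b += x * x;
    b = std::sqrt(b);                      // beta_{n+1}; b = 0 would signal
    beta.push_back(b);                     // an invariant subspace
    for (int i = 0; i < N; ++i) { vprev[i] = v[i]; v[i] = w[i] / b; }
  }
  // alpha[0..M-1] and beta[0..M-2] define T of eq. (A.17); its extreme
  // eigenvalues (from, e.g., LAPACK dstev) approximate those of A.
  std::cout << "built " << alpha.size() << "x" << alpha.size()
            << " tridiagonal T\n";
}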
Eigenvectors

It is no problem to compute the eigenvectors of $T$. They are, however, given in the Krylov basis $\{v_1, v_2, \ldots, v_M\}$. To obtain the eigenvectors in the original basis we need to perform a basis transformation.

Due to memory constraints we usually do not store all the $v_i$, but only the last three vectors. To transform the eigenvector to the original basis we have to do the Lanczos iterations a second time. Starting from the same initial vector $v_1$ we construct the vectors $v_i$ iteratively and perform the basis transformation as we go along.
Roundoff errors and ghosts

In exact arithmetic the vectors $\{v_i\}$ are orthogonal and the Lanczos iterations stop after at most $N - 1$ steps. The eigenvalues of $T$ are then the exact eigenvalues of $A$.

Roundoff errors in finite precision cause a loss of orthogonality. There are two ways to deal with that:

• Reorthogonalization of the vectors after every step. This requires storing all of the vectors $\{v_i\}$ and is memory intensive.
• Control of the effects of roundoff.

We will discuss the second solution, as it is faster and needs less memory. The main effect of roundoff errors is that the matrix $T$ contains extra spurious eigenvalues, called "ghosts". These ghosts are not real eigenvalues of $A$. However, they converge towards real eigenvalues of $A$ over time and increase their multiplicities.

A simple criterion distinguishes ghosts from real eigenvalues: ghosts are caused by roundoff errors, and thus do not depend on the starting vector $v_1$. As a consequence these ghosts are also eigenvalues of the matrix $\tilde{T}$, which can be obtained from $T$ by deleting the first row and column:
$\tilde{T}^{(n)} := \begin{pmatrix} \alpha_2 & \beta_3 & 0 & \cdots & 0 \\ \beta_3 & \alpha_3 & \ddots & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \beta_n \\ 0 & \cdots & 0 & \beta_n & \alpha_n \end{pmatrix}.$   (A.18)
From these arguments we derive the following heuristic criterion to distinguish ghosts from real eigenvalues:

• All multiple eigenvalues are real, but their multiplicities might be too large.
• All single eigenvalues of $T$ which are not eigenvalues of $\tilde{T}$ are also real.
Numerically stable and efficient implementations of the Lanczos algorithm can be obtained from netlib or from http://www.comp-phys.org/software/ietl/.