1) Script
Bettina Keller
This is work in progress. The script will be updated on a weekly basis. If you find an error,
send me an email: [email protected]
1 Introduction
1.1 What is statistical thermodynamics?
In your curriculum you have learnt so far
• how macroscopic systems behave when the external conditions (pressure, temperature, concentration) are altered ⇒ classical thermodynamics
• how to calculate the properties of individual microscopic particles, such as a single atom or a single
molecule ⇒ Atombau und Chemische Bindung, Theoretische Chemie
You also know that macroscopic systems are an assembly of microscopic particles. Hence, it stands
to reason that the behaviour of macroscopic systems is determined by the properties of the microscopic
particles it consists of. Statistical thermodynamics provides a quantitative link between the properties of
the microscopic particles and the behaviour of the bulk material.
Classical thermodynamics is a heuristic theory. It allows for quantitative prediction but does not
explain why the systems behave the way they do. For example:
• Ideal gas law: P V = nRT . Found experimentally by investigating the behaviour of gases when the pressure, the volume and the temperature are changed.
• Phase diagrams. The state of matter of a substance is recorded at different temperatures and
pressures.
It relies on quantities such as Cv , ∆H, ∆S, ∆G ... which must be measured experimentally. Statistical
thermodynamics aims at predicting these parameters from the properties of the microscopic particles.
• thermodynamic equilibrium
• free energy
• entropy
and the role temperature plays in all of these. Also, you will understand how measurements of macroscopic
matter can reveal information on the properties of the microscopic constituents. For example, the energy
of a molecule consists of its
• translational energy
• rotational energy
• vibrational energy
• electronic energy.
In any experiment you will find a mixture of molecules in different translational, rotational, vibrational, and
electronic states. Thus, to interpret an experimental spectrum, we need to know the distribution of the
molecules across these different energy states. Moreover, the thermodynamic quantities of a complex
molecule can only be derived from experimental data (∆H, ∆S) by applying statistical thermodynamics.
Figure 2: Infrared rotational-vibration spectrum of hydrochloric acid gas at room temperature. The doublets in the IR absorption intensities are caused by the chlorine isotopes present in the sample: 1H-35Cl and 1H-37Cl.
Note: The explicit calculation can be done using molecular dynamics simulations, albeit with typical box sizes of 5 × 5 × 5 nm3 .
3. Non-equilibrium thermodynamics
ε_s ψ_s(x_k) = \hat{h}_k ψ_s(x_k) = −\frac{ℏ^2}{2m_k} ∇_k^2 ψ_s(x_k) + V_k(x_k) ψ_s(x_k)   (1.1)
where ε_s is the associated energy eigenvalue. If a system consists of N such particles which do not interact
with each other, the time-independent Schrödinger equation of the system is given as
E_j Ψ_j(x_1, \dots, x_N) = \hat{H} Ψ_j(x_1, \dots, x_N) = \sum_{k=1}^{N} \hat{h}_k Ψ_j(x_1, \dots, x_N)   (1.2)
The possible quantum states of the system are
where each state j corresponds to a specific placement of the individual particles on the energy levels of
the single-particle system, i.e. to a specific permutation
2 MICROSTATES, MACROSTATES, ENSEMBLES
Part of the difficulties with statistical mechanics arise because the definitions as well as the notations change
when moving from quantum mechanics to statistical mechanics. For example, in quantum mechanics a
single particle is usually called a "system" and its energy levels are often denoted as En . When reading a
text on statistical mechanics (including this script), make sure you understand what the authors mean by
"system", "energy of the system" and similar terms.
In thermodynamics, the world is always divided into a system and its surroundings. The behaviour of the
system depends on how the system can interact with its surroundings: Can it exchange heat or other forms
of energy? Can it exchange particles with the surroundings? To come up with equations for the systems’
behaviour, it will be useful to introduce the concept of an ensemble of systems.
Figure: (A) A system and its surroundings. (B) An ensemble of systems.
• An open system exchanges particles and heat with its surroundings. The following parameters are constant: temperature T , volume V , and chemical potential µ → grand canonical ensemble
Figure: the ensembles illustrated as flasks: a closed flask without piston, a closed flask with piston, and an open system at constant T , V , µ (grand canonical ensemble).
ε_0 = µ_B B_z m_s = −µ_B B_z
ε_1 = µ_B B_z m_s = +µ_B B_z   (2.1)
where µB is the Bohr magneton and Bz is the external magnetic field. Now consider N of these particles
arranged in a line (one-dimensional Ising model). The possible permutations for N = 5 particles are shown
in Fig. 2.3. In general 2^N permutations are possible for an Ising model of N particles. In statistical
thermodynamics such a permutation is called a microstate.
2 Caution: such a particle is usually called a two-level system - with the quantum mechanical meaning of the term "system".
Figure 5: Microstates of a system with five spins, the corresponding configurations and macrostates.
Let us assume that the particles do not interact with each other, i.e. the energy of a particular spin does
not depend on the orientation of the neighboring spins. The energy of the system is then given as the sum
of the energies of the individual particles.
E_j = \sum_{k=1}^{N} µ_B B_z m_s(k) = µ_B B_z \sum_{k=1}^{N} m_s(k)   (2.2)
where k is the index of the particles, m_s(k) is the spin quantum state of the kth particle, and E_j is
the energy of the system. A (non-interacting) spin system with five spins can assume six different energy
values: E_1 = −5µ_B B_z , E_2 = −3µ_B B_z , E_3 = −µ_B B_z , E_4 = +µ_B B_z , E_5 = +3µ_B B_z , and E_6 = +5µ_B B_z
(Fig. 2.3). The energy Ej together with the number of spins N in the system define the macrostate of the
system. Thus, the system has 6 macrostates. Note that most macrostates can be realized by more than
one microstate.
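The counting above can be checked by brute force. A small Python sketch (the ±1 spin encoding and the energy unit µ_B B_z are choices made here for illustration, not notation from the script) enumerates all 2^5 microstates and groups them into macrostates by energy:

```python
from itertools import product

# Enumerate the 2^5 microstates of five non-interacting spins (cf. Fig. 5).
# Spins are encoded as +1 (up) and -1 (down); energies are in units of
# mu_B * B_z, so that E_j is simply the sum of the spin values (eq. 2.2).
N = 5
macrostates = {}  # energy in units of mu_B*B_z -> number of microstates
for microstate in product((+1, -1), repeat=N):
    E = sum(microstate)
    macrostates[E] = macrostates.get(E, 0) + 1

print(len(macrostates))           # 6 macrostates: E/(mu_B*B_z) = -5, -3, -1, +1, +3, +5
print(sum(macrostates.values()))  # 32 microstates in total
print(macrostates[-5], macrostates[-1])  # weights 1 and 10
```

The weights 1, 5, 10, 10, 5, 1 are exactly the binomial coefficients that reappear in section 3.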
same system energy (macrostate) as n = (0, N, 0). Thus in the treatment of more complex systems, the
microstates are first combined into occupation numbers which are then further combined into macrostates.
ordered sample ↔ permutation ↔ microstate
unordered sample ↔ combination ↔ configuration
3 MATHEMATICAL BASICS: PROBABILITY THEORY
Example 3: Six pips when throwing an unfair die. The six is twice as likely as the other faces
of the die.
• Sample space Ω = {1, 2, 3, 4, 5, 6}
• Events X = {six pips, not six pips} = {{6}, {1, 2, 3, 4, 5}}
• Probability of the individual outcomes pΩ = {1/7, 1/7, 1/7, 1/7, 1/7, 2/7}. Probability of the set of events pX = {2/7, 5/7}
For mutually dependent experiments one needs to work with conditional probabilities. For mutually independent experiments, the probability of a combined outcome is the product of the individual probabilities:
• The probability of first throwing 6 pips and then 3 pips when throwing a fair die twice: p({6, 3}) = p(6)p(3) = 1/36.
• The probability of first throwing 6 pips and then more than 3 pips when throwing a fair die twice: p(6, {4, 5, 6}) = p(6)p({4, 5, 6}) = 1/12.
• The probability of first throwing 6 pips with a fair die and then head with a fair coin: p(6, head) = p(6)p(head) = 1/12. (Note: the experiments are not necessarily identical.)
The number of ways in which k objects taken from the set of N objects can be arranged in a sequence (i.e.
the number of k-permutations of N ) is given as
P(N, k) = N \cdot (N−1) \cdot (N−2) \cdots (N−k+1) = \frac{N!}{(N−k) \cdot (N−k−1) \cdots 1} = \frac{N!}{(N−k)!}   (3.4)
Splitting a set of N objects into two subsets of size k and N − k. Consider a set of N numbered
objects which is to be split into two subsets of size k_0 and k_1 = N − k_0 . An example would be N spins of
which k_0 are "up", and k_1 = N − k_0 are "down". The configuration is denoted k = (k_0 , k_1 ). How many
possible ways are there to realize the configuration k?
We start from the list of possible permutations of all N objects P (N, N ) = N !. Then we split each
of these permutations between position k and k + 1 into two subsequences of size k and N − k. Each
possible set of k numbers on the left side of the dividing line can be arranged into k! sequences. Likewise
each possible set of N − k numbers on the right side can be arranged into (N − k)! sequences. Thus, the
number of possible ways to distribute N objects over these two sets is
W(k) = \frac{N!}{(N−k)! \, k!}   (3.6)
where
\binom{N}{k} = \frac{N!}{(N−k)! \, k!}   (3.7)
is called the binomial coefficient.
The last example can be generalized. Consider a set of N objects which will be split into m subsets of
sizes k_0 , \dots, k_{m−1} with \sum_{i=0}^{m−1} k_i = N . There are
W(k) = \binom{N}{k_0, \dots, k_{m−1}} = \frac{N!}{k_0! \cdots k_{m−1}!}   (3.8)
ways to do this. Eq. 3.8 is called the multinomial coefficient.
Example: Choosing three out of five. We want to know the possible subsets of size three (k = 3) within
a set of five objects (N = 5), i.e. the number of combinations W (k = (3, 2)). There are P (5, 3) = 5·4·3 = 60
possible sequences of length three which can be drawn from this set. For example, one can draw the
ordered sequence #1, #2, #3 which corresponds to the (unordered) subset {#1, #2, #3}. However, one
could also draw the ordered sequence #2, #1, #3 which corresponds to the same (unordered) subset
{#1, #2, #3}. In total there are 3 · 2 · 1 = 3! = 6 way to arrange the numbers {#1, #2, #3} into a
sequence. Therefore, the subset {#1, #2, #3} appears six times in the list of permutations. The same
is true for all other subsets of size three. The number of subsets (i.e. the number of combinations) is
therefore W (k = (3, 2)) = P (5, 3)/6 = 60/6 = 10.
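The same count can be produced explicitly. A short Python sketch (the object labels are arbitrary) lists the ordered draws and the unordered subsets and compares them with eqs. 3.4 and 3.6:

```python
from itertools import combinations, permutations
from math import comb, factorial, perm

# Choosing three out of five: the ordered draws are the k-permutations
# P(5, 3) (eq. 3.4); the unordered subsets are the combinations
# W(k = (3, 2)) = P(5, 3)/3! (eq. 3.6).
objects = ["#1", "#2", "#3", "#4", "#5"]
ordered = list(permutations(objects, 3))
unordered = list(combinations(objects, 3))

print(len(ordered))    # 60 ordered sequences of length three
print(len(unordered))  # 10 unordered subsets = 60/3!
print(perm(5, 3), comb(5, 3))                         # the same via math
print(factorial(5) // (factorial(3) * factorial(2)))  # 10 = 5!/(3! 2!)
```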
Example: Flipping three out of five spins. The framework of permutations and combinations can
also be applied to a slightly different type of thought experiment. Consider a sequence of five non-interacting spins
(N = 5), all of which are in the "up" quantum state. Such a spin model is called an Ising model (see also
section 2). We (one by one) flip three out of these five spins (k = 3) into the "down" quantum state. How
many configurations exist which have two spins "up" and three spins "down"? There are P (5, 3) = 5 · 4 · 3 =
60 sequences in which one can flip the three spins. Each configuration (e.g. ↓↓↑↓↑) can be generated by
3·2·1 = 3! = 6 different sequences. Thus the number of configurations is W (k = (3, 2)) = P (5, 3)/6 = 10.
Note that p↑ and p↓ are not necessarily equal and hence the probabilities of the outcomes of the combined
experiment are not uniform. However, all outcomes which belong to the same combination of spin ↑ and
spin ↓ have the same probability
(See also Fig. 3.5). In general terms, the probability of a particular sequence in which k spins are ↑ and
N − k spins are ↓ is
Often one is not interested in the probability of each individual sequence but in the probability that in
N experiments k spins are ↑ and N − k spins are ↓, i.e. one combines several sequences (outcomes) into
an event. The number of sequences in which a particular combination of k0 = k spins ↑ and k1 = N − k
spins ↓ can be generated is given by the binomial coefficient (eq. 3.7). Thus, the probability of event
X = {k ↑, N − k ↓} is equal to the probability of the configuration k = (k0 = k, k1 = N − k)
p_X = p(k) = \binom{N}{k} p_↑^k (1 − p_↑)^{N−k} = \frac{N!}{k!(N−k)!} \, p_↑^k (1 − p_↑)^{N−k}   (3.12)
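Eq. 3.12 is easy to evaluate numerically. In the sketch below, the bias p_up = 0.7 is an illustrative value, not a number from the script:

```python
from math import comb

# Probability of the event "k out of N spins are up" (eq. 3.12).
def p_event(N, k, p_up):
    # the binomial coefficient counts the sequences in the configuration (k, N-k)
    return comb(N, k) * p_up**k * (1.0 - p_up)**(N - k)

N, p_up = 5, 0.7
probs = [p_event(N, k, p_up) for k in range(N + 1)]
print(probs[5])    # 0.7^5 = 0.16807: only a single sequence has all spins up
print(sum(probs))  # the events k = 0, ..., N exhaust the sample space: 1.0
```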
Figure 6: Possible outcomes in a sequence of three random experiments with two possible events each.
p(red, red, blue) = p(red, blue, red) = p(blue, red, red) = p2red · pblue . (3.14)
In general, the probability of a sequence which contains k_red red balls, k_blue blue balls, and k_yellow yellow
balls (with k_red + k_blue + k_yellow = N ) is p_{red}^{k_{red}} \cdot p_{blue}^{k_{blue}} \cdot p_{yellow}^{k_{yellow}} . There are
\binom{N}{k_{red}, k_{blue}, k_{yellow}} = \frac{N!}{k_{red}! \, k_{blue}! \, k_{yellow}!}   (3.15)
possible sequences with this combination of balls. The probability of drawing such a combination is
p(k_{red}, k_{blue}, k_{yellow}) = \frac{N!}{k_{red}! \, k_{blue}! \, k_{yellow}!} \, p_{red}^{k_{red}} \cdot p_{blue}^{k_{blue}} \cdot p_{yellow}^{k_{yellow}}   (3.16)
Generalizing to m possible outcomes with probabilities p = {p0 , ...pm−1 } yields the multinomial probability
distribution
p_X = p(k) = \frac{N!}{k_0! \cdots k_{m−1}!} \, p_0^{k_0} \cdots p_{m−1}^{k_{m−1}}   (3.17)
This distribution represents the probability of the event that in N trials the results are distributed as
X = k = (k_0 , \dots, k_{m−1}) (with \sum_{i=0}^{m−1} k_i = N ).
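A minimal implementation of the multinomial distribution, applied to the urn experiment of Fig. 7 (two draws, three equally likely colours; the equal probabilities are an assumption for the example):

```python
from math import factorial

# Multinomial probability (eq. 3.17): probability that N trials with outcome
# probabilities p produce the occupation numbers k = (k_0, ..., k_{m-1}).
def multinomial_p(k, p):
    N = sum(k)
    W = factorial(N)
    for ki in k:
        W //= factorial(ki)          # multinomial coefficient, eq. 3.8
    weight = 1.0
    for ki, pi in zip(k, p):
        weight *= pi**ki             # probability of one particular sequence
    return W * weight

# Two draws from an urn with three equally likely colours (cf. Fig. 7):
p = (1/3, 1/3, 1/3)
print(multinomial_p((2, 0, 0), p))   # 1/9: only one sequence realizes this
print(multinomial_p((1, 1, 0), p))   # 2/9: two sequences realize this combination
```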
Figure 7: Drawing balls from an urn with replacement. Possible outcomes in a sequence of two random experiments with three possible events each.
Comments:
• This comparison is true for distinguishable particles. For indistinguishable particles, the equations
need to be modified. In particular, the distinction between fermions and bosons becomes important.
• To characterize the possible states of the system, one would need to evaluate all possible configurations
k, which quickly becomes intractable for large numbers of energy levels m and large numbers of particles
N . Two approximations drastically simplify the equations:
N! ≈ \frac{N^N}{e^N} \sqrt{2πN}   (3.18)
holds very well for large values of N . Taking the logarithm yields
\ln N! ≈ N \ln N − N + \frac{1}{2} \ln(2πN)   (3.19)
For large N , the first and second terms are much bigger than the third, and one can further approximate
ln N ! ≈ N ln N − N . (3.20)
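The quality of the two approximations can be checked directly (a quick numerical sketch; the chosen values of N are arbitrary):

```python
from math import factorial, log, pi

# Accuracy of Stirling's formula: ln N! compared with eq. 3.19 (including
# the (1/2) ln(2 pi N) term) and with the truncated form of eq. 3.20.
for N in (10, 100, 1000):
    exact = log(factorial(N))
    full = N * log(N) - N + 0.5 * log(2.0 * pi * N)  # eq. 3.19
    short = N * log(N) - N                           # eq. 3.20
    print(N, exact, full, short)
# For N = 1000 the full formula is accurate to about 1e-4 (absolute), and
# even the truncated form has a relative error below 0.1 %.
```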
p(k) = \frac{N!}{k!(N−k)!} \cdot p_0^k (1 − p_0)^{N−k} = \frac{N!}{k!(N−k)!} \cdot 0.5^N   (3.21)
Thus, if the outcomes have equal probabilities, the probability of a configuration k is determined by the
number of (ordered) sequences W (k) with which this configuration can be realized (equivalently: by
the number of microstates which give rise to this configuration). W (k) is also called the weight of a
configuration. The most likely configuration k∗ is the one with the highest weight. Thus we solve
0 = \frac{d}{dk} W(k)   (3.22)
Mathematically equivalent but easier is
0 = \frac{d}{dk} \ln W(k) = \frac{d}{dk} \ln \frac{N!}{k!(N−k)!} = \frac{d}{dk} \left[ \ln N! − \ln k! − \ln(N−k)! \right]
= − \frac{d}{dk} \ln k! − \frac{d}{dk} \ln(N−k)!   (3.23)
Use Stirling’s formula (eq. 3.20)
0 = − \frac{d}{dk} \left[ k \ln k − k \right] − \frac{d}{dk} \left[ (N−k) \ln(N−k) − (N−k) \right] = − \ln k + \ln(N−k)
⇔
0 = \ln \frac{N−k}{k}
⇔
e^0 = \frac{N−k}{k}
⇔
k = \frac{N}{2}   (3.24)
The most likely configuration is k∗ = (N/2, N/2).
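This result can be confirmed by scanning the weights directly (N = 100 is an arbitrary example size):

```python
from math import comb

# Weight W(k) = N!/(k!(N-k)!) (eq. 3.6) as a function of k, confirming that
# the most likely configuration is k* = N/2 (eq. 3.24).
N = 100
weights = [comb(N, k) for k in range(N + 1)]
k_star = max(range(N + 1), key=lambda k: weights[k])
print(k_star)  # 50 = N/2
```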
4 THE MICROCANONICAL ENSEMBLE
1. The particles are distinguishable, e.g. you can imagine them to be numbered.
2. The particles are independent of each other, i.e. they do not interact with each other.
3. Each particle occupies one of 𝒩 energy levels: {ε_0 , ε_1 , . . . ε_{𝒩−1} }.
4. There can be multiple particles in the same energy level. The number of particles in the ith energy
level is denoted k_i .
Thus, each particle is modeled as a random experiment with 𝒩 possible outcomes. The random
experiment is repeated N times, generating a sequence of outcomes j = (ε(1), ε(2), . . . ε(N )), where ε(i) is
the energy level of the ith particle and j denotes the microstate of the system. There are 𝒩^N possible
microstates. The number of particles in energy level ε_s is denoted k_s , and k = (k_0 , k_1 , . . . k_{𝒩−1} ) with
\sum_{s=0}^{𝒩−1} k_s = N is called the configuration of the system.
Because the particles are independent of each other, the total energy of the system in microstate j is
given as the sum of the energies of the individual particles, or equivalently as the weighted sum over all
single-particle energy levels with weights according to k
E_j = \sum_{i=1}^{N} ε(i) = \sum_{s=0}^{𝒩−1} k_s ε_s   (4.1)
Note that ε(i) denotes the energy level of the ith particle, whereas ε_s is the sth entry in the sequence of
possible energy levels {ε_0 , ε_1 , . . . ε_{𝒩−1} }.
The total energy of the system is its macrostate. Given the configuration k, one can calculate the
macrostate of the system. The probability that the system is in a particular configuration k is given by the
multinomial probability distribution
p(k) = \frac{N!}{k_0! \cdots k_{𝒩−1}!} \cdot p_0^{k_0} \cdots p_{𝒩−1}^{k_{𝒩−1}}   (4.2)
To work with this equation, we need to make an assumption on the probability p_s with which a particle
occupies the energy level ε_s . Assuming that these a priori probabilities are equal for all energy levels, p_s = p, eq. 4.2 simplifies to
p(k) = \frac{N!}{k_0! \cdots k_{𝒩−1}!} \cdot p^N   (4.3)
The probability that the system is in a particular configuration k is then proportional to the number of
microstates which give rise to the configuration, i.e. to the weight of this configuration
W(k) = \frac{N!}{k_0! \cdots k_{𝒩−1}!}   (4.4)
(Interpretation of eq. 4.5: Suppose the number of particles k_s in each energy level ε_s is changed by a small
number dk_s , then the weight of the configuration changes by dW (k). At the maximum of W (k), the change
in W (k) upon a small change in k is zero.)
As in the example with binomial distribution, we solve the mathematically equivalent but easier problem
d \ln W(k) = \sum_{s=0}^{𝒩−1} \frac{∂ \ln W(k)}{∂k_s} \, dk_s = 0   (4.6)
First we rearrange
\ln W(k) = \ln \frac{N!}{\prod_{i=0}^{𝒩−1} k_i!} = \ln N! − \ln \prod_{i=0}^{𝒩−1} k_i! = \ln N! − \sum_{i=0}^{𝒩−1} \ln k_i!
= N \ln N − N − \sum_{i=0}^{𝒩−1} k_i \ln k_i + \underbrace{\sum_{i=0}^{𝒩−1} k_i}_{N}
= \underbrace{N \ln N}_{\sum_i k_i \ln N} − \sum_{i=0}^{𝒩−1} k_i \ln k_i
= − \sum_{i=0}^{𝒩−1} k_i \ln \frac{k_i}{N}   (4.7)
where we have used Stirling’s formula in the second line. Thus, we need to solve
d \ln W(k) = \sum_{s=0}^{𝒩−1} \frac{∂}{∂k_s} \left[ − \sum_{i=0}^{𝒩−1} k_i \ln \frac{k_i}{N} \right] dk_s = − \sum_{s=0}^{𝒩−1} \frac{∂}{∂k_s} \left[ k_s \ln \frac{k_s}{N} \right] dk_s = 0   (4.8)
This equation has several solutions. But not all solutions are consistent with the problem we stated at the
beginning. In particular, because the system is isolated from its surroundings (microcanonical ensemble),
the total number of particles N needs to be constant. This implies that the changes of the number of particles
in each energy level dk_s need to add up to zero
dN = \sum_{s=0}^{𝒩−1} dk_s = 0   (4.10)
Second, the total energy stays constant, which implies that the changes in energy have to add up to zero
dE = \sum_{s=0}^{𝒩−1} ε_s \, dk_s = 0   (4.11)
Only solutions which fulfill eq. 4.10 and eq. 4.11 are consistent with the microcanonical ensemble. We use
the method of Lagrange multipliers: since both terms (eq. 4.10 and 4.11) are zero if the constraints are
fulfilled, they can be subtracted from eq. 4.9, multiplied by factors α and β. The factors α and β are
the Lagrange multipliers. One obtains
d \ln W(k) = − \sum_{s=0}^{𝒩−1} \ln \frac{k_s}{N} \, dk_s − \sum_{s=0}^{𝒩−1} dk_s − α \sum_{s=0}^{𝒩−1} dk_s − β \sum_{s=0}^{𝒩−1} ε_s \, dk_s
= \sum_{s=0}^{𝒩−1} \left[ − \ln \frac{k_s}{N} − (α + 1) − β ε_s \right] dk_s
= 0   (4.12)
Summing the resulting occupancies k_s/N = e^{−(α+1)} e^{−β ε_s} over all energy levels and using that they add up to one yields
\sum_{s=0}^{𝒩−1} \frac{k_s}{N} = e^{−(α+1)} \sum_{s=0}^{𝒩−1} e^{−β ε_s} = 1 \quad ⇔ \quad e^{−(α+1)} = \frac{1}{\sum_{s=0}^{𝒩−1} e^{−β ε_s}} = \frac{1}{q}   (4.14)
In summary, the configuration which has the highest probability is the one for which the energy level occupancies
are given as
k^*: \quad \frac{k_s}{N} = \frac{1}{q} e^{−β ε_s}   (4.16)
If one interprets the relative populations as probabilities, one obtains the Boltzmann distribution
p_s = \frac{1}{q} e^{−β ε_s} = \frac{e^{−β ε_s}}{\sum_{s'=0}^{𝒩−1} e^{−β ε_{s'}}}   (4.17)
To link the microscopic properties of particles to the macroscopic observables, one needs to know the
Boltzmann distribution. The Lagrange multiplier turns out to be β = 1/(k_B T ),
where k_B = 1.381 · 10^{−23} J/K is the Boltzmann constant, and T is the absolute temperature.
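The Boltzmann distribution (eq. 4.17) is easily evaluated. In the sketch below the levels are equidistant, ε_s = s·∆ε, and the temperature enters only through the dimensionless product β∆ε; the value β∆ε = 1 is illustrative:

```python
from math import exp

# Boltzmann distribution (eq. 4.17) for equidistant energy levels
# eps_s = s * d_eps, as a function of beta * d_eps.
def boltzmann(n_levels, beta_d_eps):
    factors = [exp(-beta_d_eps * s) for s in range(n_levels)]
    q = sum(factors)  # single-particle partition function
    return [f / q for f in factors]

p = boltzmann(5, 1.0)
print(p[0])         # the ground state carries the largest population
print(sum(p))       # normalization: the populations add up to 1.0
print(p[1] / p[0])  # successive populations differ by the factor exp(-1)
```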
5 THE BOLTZMANN ENTROPY AND BOLTZMANN DISTRIBUTION
W_{1,2} = W_1 · W_2   (5.1)
However, from classical thermodynamics we expect that the total entropy is given as a sum of the original
entropies
S1,2 = S1 + S2 (5.2)
Therefore, the entropy has to be a function of W which fulfills the following equality
This is only possible if f is the logarithm of W . Thus, the Boltzmann equation for the entropy is
S = kB ln W (5.4)
Note that k_B = 1.381 · 10^{−23} J/K is a very small number. Suppose the ensemble of particles can be in two
microstates 1 and 2 which have the same energy, but which differ by 1.381 · 10^{−10} J/K in entropy. Then,
according to eq. 5.5, the ratio of the statistical weights is given as
\frac{W_2}{W_1} = \exp\left( \frac{∆S}{k_B} \right) = \exp(10^{13})   (5.6)
Even a small entropy difference leads to an enormous difference in the statistical weights. Hence, once
the system is in the state with the higher weight (entropy), it is extremely unlikely that it will visit the
microstate with the lower statistical weight again.
To illustrate this consider a system with equidistant energy levels {ε_1 , ε_2 , . . . ε_N } (e.g. vibrational states
of a diatomic molecule). Let the Boltzmann distribution yield occupancy numbers {n_1 , n_2 , . . . n_N }. The
microstate of the Boltzmann distribution is compared to a microstate in which ν particles have been moved
from state j − 1 to state j, and ν particles have been moved from state j + 1 to state j. Let ν be small
in comparison to the occupancy numbers, e.g.
(The occupancy of the state j + 1 is changed by 0.1%.) Since the energy levels are equidistant, the two
occupation number distributions have the same total energy. According to eq. ??, the associated change
in entropy is given as
∆S = −k_B \sum_{j} ν_j \ln \frac{n_j}{N} + k_B \sum_{j} ν_j   (5.8)
Because the total number in the system has not been changed, the last term is zero, and we obtain
∆S = −k_B \left[ −ν \ln \frac{n_{j−1}}{N} + 2ν \ln \frac{n_j}{N} − ν \ln \frac{n_{j+1}}{N} \right]
= k_B ν \left[ \ln \frac{n_{j−1}}{N} − 2 \ln \frac{n_j}{N} + \ln \frac{n_{j+1}}{N} \right]
= k_B ν \ln \left[ \frac{n_{j−1} n_{j+1}}{n_j^2} \right]   (5.9)
This entropy difference gives rise to the following ratio of statistical weights of the occupation number
distributions (eq. 5.5)
\frac{W_2}{W_1} = \exp\left( \frac{∆S}{k_B} \right) = \exp\left( \frac{1}{k_B} k_B ν \ln \left[ \frac{n_{j−1} n_{j+1}}{n_j^2} \right] \right)
= \left( \frac{n_{j−1} n_{j+1}}{n_j^2} \right)^{ν}   (5.10)
Consider that ν = n_{j+1} · 10^{−3} , i.e. if the occupancy numbers are in the order of 1 mol (6.022 · 10^{23} ),
the Boltzmann distribution is approximately 10^{20} times more likely than the new occupation number distribution.
Although the occupation number distribution cannot be determined unambiguously from the macrostate,
for large numbers the ambiguity is reduced so drastically that we effectively have a one-to-one relation
from macrostate to Boltzmann distribution.
Figure 8: (a) Definition of the backbone torsion angles. (b) Ramachandran plot of an alanine residue. (c) Estimate
of the fraction of the conformational space, which is visited, as a function of the peptide chain length.
temperature (i.e. the fraction of the conformational space which is visited is f = 0.65) (Fig. 5.5.b). For the
remaining 35% of the conformations the potential energy is so high (due to steric clashes) that they are
inaccessible at room temperature. For a chain with n residues, the fraction of the conformational space
which is visited can be estimated as f (n) = 0.65^n .
Hence, the fraction of the conformational space which is accessible at room temperature decreases expo-
nentially with the number of residues in a peptide chain (Fig. 5.5.c).
Due to the vastness of the conformational space, the Boltzmann entropy cannot be evaluated directly from
the potential energy function. Instead, a sampling algorithm is needed which samples the relevant regions
of the conformational space with high probability (→ importance sampling).
Example for n = 109 residues: f (n = 109) = 4.05094 · 10^{−21} . Surface of a 1-cent coin: 2.1904 · 10^{−6} m^2 ; surface of the earth: 510 072 000 km^2 ; ratio: 4.29429 · 10^{−21} .
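The exponential decrease is immediate to reproduce (using the per-residue fraction f = 0.65 quoted above):

```python
# Accessible fraction of the conformational space for a peptide with
# n residues, f(n) = 0.65^n, with f = 0.65 per residue as stated above.
def accessible_fraction(n_residues, f_per_residue=0.65):
    return f_per_residue**n_residues

print(accessible_fraction(1))    # 0.65
print(accessible_fraction(109))  # ~4.05e-21, the coin-to-earth ratio quoted above
```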
6 THE CANONICAL ENSEMBLE
where Ĥ is the Hamiltonian of the system, ĥk are the Hamiltonians of the individual particles. Thus, within
the ensemble, each system plays the role of a “super-particle”, and we can treat the ensemble as a system
of “super-particles” at constant Nensemble and Eensemble . In analogy to section 4, we have the following
assumptions
1. The systems are distinguishable, e.g. you can imagine them to be numbered.
2. The systems are independent of each other, i.e. they do not interact with each other.
3. Each system occupies one of N_E energy levels: {E_0 , E_1 , . . . E_{N_E−1} }.
4. There can be multiple systems in the same energy level. The number of systems in the jth energy
level is denoted n_j .
The configuration of the ensemble is given by the number of systems in each energy level n = (n0 , n1 , . . . nNE −1 ).
Each configuration can be generated by several ensemble microstates (ordered sequence of systems dis-
tributed according to n). We again assume that the a priori probabilities pj of the energy states Ej are
equal. Then the probability of finding the ensemble in a configuration n is given as
p(n) = \frac{N_{ensemble}!}{n_0! \cdots n_{N_E−1}!} \cdot p^{N_{ensemble}}   (6.2)
The probability that the ensemble is in a particular configuration n is then proportional to the number of
ensemble microstates which give rise to the configuration, i.e. to the weight of this configuration
W(n) = \frac{N_{ensemble}!}{n_0! \cdots n_{N_E−1}!}   (6.3)
The most likely configuration n∗ is obtained by setting the total derivative of the weight to zero
d \ln W(n) = − \sum_{j=0}^{N_E−1} \ln \frac{n_j}{N_{ensemble}} \, dn_j − \sum_{j=0}^{N_E−1} dn_j = 0   (6.4)
and solving the equation under the constraints that the number of systems in the ensemble is constant
dN_{ensemble} = \sum_{j=0}^{N_E−1} dn_j = 0   (6.5)
This yields the Boltzmann probability distribution of finding the system in an energy state Ej
p_j = \frac{1}{Q} e^{−β E_j} = \frac{e^{−β E_j}}{\sum_{j'=0}^{N_E−1} e^{−β E_{j'}}}   (6.7)
where
Q = \sum_{j=0}^{N_E−1} e^{−β E_j}   (6.8)
is the partition function of the system and β = 1/(k_B T ).
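Eqs. 6.7 and 6.8 can be sketched for a two-level system; the energy gap of 10^{−21} J is an illustrative number, not taken from the script:

```python
from math import exp

K_B = 1.380649e-23  # J/K, Boltzmann constant (CODATA 2018 value)

# Canonical probabilities p_j (eq. 6.7) for a set of system energy levels.
def canonical_probabilities(energies, T):
    beta = 1.0 / (K_B * T)
    factors = [exp(-beta * E) for E in energies]
    Q = sum(factors)  # partition function, eq. 6.8
    return [f / Q for f in factors]

energies = [0.0, 1.0e-21]  # J, two-level system with an illustrative gap
for T in (10.0, 300.0, 10000.0):
    print(T, canonical_probabilities(energies, T))
# At low T essentially only the ground state is populated; at high T both
# states approach equal probability 1/2.
```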
6.2 Ergodicity
With eq. 6.7, we can make statements about the entire ensemble. For example, we can calculate the
average energy hEi of the systems in the ensemble as
⟨E⟩_{ensemble} = \sum_{j=0}^{N_E−1} p_j E_j = \frac{1}{N_{ensemble}} \sum_{j=0}^{N_E−1} n_j E_j   (6.9)
But how does this help us to characterize the thermodynamic properties of a single system? Each system
exchanges energy with the thermal reservoir and therefore continuously changes its energy state. What we
could calculate for a single system is its average energy measured over a period of time T
⟨E⟩_{time} = \frac{1}{N_T} \sum_{t=1}^{N_T} E(t)   (6.10)
where we assumed that the energy of the single system has been measured at regular intervals ∆. Then
T = ∆·NT and E(t) is the energy of the single system measured at time interval t. The ergodic hypothesis
relates these two averages
The average time a system spends in energy state Ej is proportional to ensemble probability pj of this
state.
Thus, ensemble average and time average are equal
⟨E⟩_{time} = \frac{1}{N_T} \sum_{t=1}^{N_T} E(t) = \sum_{j=0}^{N_E−1} p_j E_j = ⟨E⟩_{ensemble}   (6.11)
and we can use eq. 6.7 to characterize the time average of a single system.
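The ergodic hypothesis can be illustrated numerically. The sketch below simulates a single two-level system in contact with a heat bath using a Metropolis scheme (a standard sampling method, used here as an assumption and not introduced in the script; energies in units of k_B T are illustrative) and compares the time average with the ensemble average of eq. 6.11:

```python
import random
from math import exp

# One two-level system exchanging energy with a bath: Metropolis dynamics.
# Energies in units of k_B*T: E_0 = 0, E_1 = 1 (illustrative values).
random.seed(1)
E = [0.0, 1.0]
p1 = exp(-E[1]) / (exp(-E[0]) + exp(-E[1]))  # Boltzmann probability of state 1
ensemble_average = p1 * E[1]

state = 0
energy_sum = 0.0
n_steps = 200_000
for _ in range(n_steps):
    trial = 1 - state  # propose a jump to the other state
    if random.random() < min(1.0, exp(-(E[trial] - E[state]))):
        state = trial  # accept with the Metropolis probability
    energy_sum += E[state]

time_average = energy_sum / n_steps
print(time_average, ensemble_average)  # the two averages agree closely
```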
7 THERMODYNAMIC STATE FUNCTIONS
One can also express eq. 7.3 as a temperature derivative, rather than a derivative with respect to β
\frac{∂f}{∂T} = \frac{∂f}{∂β} \frac{∂β}{∂T} = − \frac{1}{k_B T^2} \frac{∂f}{∂β}   (7.4)
7.2 Entropy
Also, the entropy can be expressed as a function of the partition function Q(N, V, T ). We take eq. ?? as
starting point
S = −k_B \sum_i p_i \ln p_i
= −k_B \sum_i \frac{\exp\left(−\frac{ε_i}{k_B T}\right)}{Q} \ln \frac{\exp\left(−\frac{ε_i}{k_B T}\right)}{Q}
= −k_B \sum_i \frac{\exp\left(−\frac{ε_i}{k_B T}\right)}{Q} \left[ −\frac{ε_i}{k_B T} − \ln Q \right]
= \frac{1}{T} \sum_i \frac{ε_i \exp\left(−\frac{ε_i}{k_B T}\right)}{Q} + k_B \ln Q \sum_i \frac{\exp\left(−\frac{ε_i}{k_B T}\right)}{Q}
= \frac{U}{T} + k_B \frac{\ln Q}{Q} \sum_i \exp\left(−\frac{ε_i}{k_B T}\right)
= \frac{U}{T} + k_B \ln Q   (7.8)
Replacing U by its relation to the partition function (eq. 7.7)
S = N k_B T \left( \frac{∂ \ln Q}{∂T} \right)_{N,V} + k_B \ln Q   (7.9)
7.4 Enthalpy
In the isothermal-isobaric ensemble, one has to account for the change in volume. The relevant thermody-
namic properties are the enthalpy H and the Gibbs free energy G. The enthalpy is defined as
H = U + PV . (7.14)
Expressed as a function of Q:
H = N k_B T^2 \left( \frac{∂ \ln Q}{∂T} \right)_{N,V} + k_B T V \left( \frac{∂ \ln Q}{∂V} \right)_T   (7.15)
G = H − TS = A + PV
= −k_B T \ln Q + k_B T V \left( \frac{∂ \ln Q}{∂V} \right)_T   (7.16)
name                 equation
internal energy      U = N k_B T^2 (∂ ln Q/∂T)_{N,V}
entropy              S = N k_B T (∂ ln Q/∂T)_{N,V} + k_B ln Q
enthalpy             H = N k_B T^2 (∂ ln Q/∂T)_{N,V} + k_B T V (∂ ln Q/∂V)_T
Gibbs free energy    G = −k_B T ln Q + k_B T V (∂ ln Q/∂V)_T
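The first table entry can also be used numerically when ln Q is only available as a function: differentiate it by finite differences. The sketch below does this for a single two-level particle (N = 1, gap dE = 10^{−21} J; both are illustrative choices) and cross-checks against the analytical result for this model:

```python
from math import exp, log

K_B = 1.380649e-23  # J/K, Boltzmann constant

# U = N k_B T^2 (d ln Q/dT)_{N,V}, evaluated by a central finite difference.
def ln_Q(T, N=1, dE=1e-21):
    q = 1.0 + exp(-dE / (K_B * T))  # two-level partition function
    return N * log(q)

def internal_energy(T, dT=1e-2):
    dlnQ_dT = (ln_Q(T + dT) - ln_Q(T - dT)) / (2.0 * dT)
    return K_B * T**2 * dlnQ_dT

# Analytical cross-check for the same two-level model:
T, dE = 300.0, 1e-21
x = exp(-dE / (K_B * T))
exact = dE * x / (1.0 + x)
print(internal_energy(T), exact)  # both ~4.4e-22 J; they agree closely
```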
8 Crystals
In the previous lectures, we have derived the canonical partition function and its relation to various ther-
modynamic state functions. Given the energy levels of a system of N particles, we can now calculate its
energy, its entropy and its free energy. The difficulty with which we will deal in the coming lectures is to
calculate the energy levels of a system with N particles. A very useful approximation is to assume that the
particles do not interact with each other, because for non-interacting particles the energy of the system is
simply a sum of the energies of the individual particles. This assumption often works well for gases, crystals
and mixtures.
ε_s ψ_s(x_k) = \hat{h}_k ψ_s(x_k) = −\frac{ℏ^2}{2m_k} ∇_k^2 ψ_s(x_k) + V_k(x_k) ψ_s(x_k)   (8.1)
where ε_s is the associated energy eigenvalue. If a system consists of N such particles which do not interact
with each other and which are distinguishable, the time-independent Schrödinger equation of the system is
given as
E_j Ψ_j(x_1, \dots, x_N) = \hat{H} Ψ_j(x_1, \dots, x_N) = \sum_{k=1}^{N} \hat{h}_k Ψ_j(x_1, \dots, x_N)   (8.2)
where each state j corresponds to a specific placement of the individual particles on the energy levels of
the single-particle system, i.e. to a specific permutation
There are 𝒩^N ways to distribute the N particles over the 𝒩 energy levels. Each of the resulting configurations
gives rise to a system energy
E_j = \sum_{k=1}^{N} ε_{s(k)}
where ε_{s(k)} is the energy level of the kth particle. The partition function of the system is
Q = \sum_j \exp(−β E_j) = \sum_{s(l)=1}^{𝒩} \sum_{s(m)=1}^{𝒩} \cdots \sum_{s(z)=1}^{𝒩} \exp(−β [ε_{s(l)} + ε_{s(m)} + \dots + ε_{s(z)}])   (8.8)
In eq. 8.8, there are as many sums as there are particles in the system, such that all possible configurations
are included in the summation. Luckily eq. 8.8 can be simplified.
For illustration, consider a system with N = 2 particles which can be in 𝒩 = 3 energy levels. The
partition function of this system is
Q = \sum_{l=1}^{3} \sum_{m=1}^{3} \exp(−β [ε_l + ε_m])
= e^{−β ε_1} e^{−β ε_1} + e^{−β ε_1} e^{−β ε_2} + e^{−β ε_1} e^{−β ε_3}
+ e^{−β ε_2} e^{−β ε_1} + e^{−β ε_2} e^{−β ε_2} + e^{−β ε_2} e^{−β ε_3}
+ e^{−β ε_3} e^{−β ε_1} + e^{−β ε_3} e^{−β ε_2} + e^{−β ε_3} e^{−β ε_3}
= \left( e^{−β ε_1} + e^{−β ε_2} + e^{−β ε_3} \right)^2
= \left[ \sum_{i=1}^{3} e^{−β ε_i} \right]^2
= q^N   (8.9)
This can be generalized to arbitrary values of N and 𝒩. Thus, the partition function of a system of N
non-interacting and distinguishable particles can be factorized as
Q = q^N   (8.10)
In most realistic systems, the particles are however indistinguishable due to their quantum nature. Thus,
eq. 8.10 only applies to systems in which the particles are nonetheless distinguishable because they are fixed
to specific positions in space. For example, it can be applied to calculate the thermodynamic properties of
ideal crystals.
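The factorization can be checked numerically for the small example above (level energies in units of k_B T are illustrative choices):

```python
from itertools import product
from math import exp

# Check Q = q^N (eq. 8.10) for N = 2 distinguishable, non-interacting
# particles with three energy levels, in units of k_B*T (i.e. beta = 1).
eps = [0.0, 1.0, 2.5]
beta = 1.0

# brute force: sum over all 3^2 microstates as in eq. 8.8
Q_brute = sum(exp(-beta * (eps[l] + eps[m]))
              for l, m in product(range(3), repeat=2))

# factorized: single-particle partition function q, then Q = q^2 (eq. 8.9)
q = sum(exp(-beta * e) for e in eps)
print(Q_brute, q**2)  # identical up to floating-point rounding
```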
A first estimate can be obtained without the full formalism of statistical mechanics. The model assumptions
are:
1. Particles in the crystal are bound to fixed positions in the crystal lattice.
2. The particles oscillate around their equilibrium positions in three dimensions (three degrees of freedom
per particle).
3. The oscillations in each dimension are independent from the oscillations in the other dimensions and
independent from the oscillations of other particles in the crystal.
According to the equipartition theorem, every degree of freedom has an average kinetic energy of
E_{kin} = \frac{1}{2} k_B T   (8.12)
The average potential energy is equal to the average kinetic energy, E_{pot} = E_{kin} . Thus, the average total
energy per degree of freedom is
E_{tot} = E_{kin} + E_{pot} = 2 \cdot \frac{1}{2} k_B T = k_B T   (8.13)
There are 3N degrees of freedom. The internal energy for 1 mol of particles is then
U = 3 \cdot N_A \cdot E_{tot} = 3 \cdot N_A \cdot k_B \cdot T = 3 \cdot R \cdot T   (8.14)
This is the Dulong-Petit law. It is a good approximation for many substances at room temperature, but
fails at low and high temperatures.
We combine the constants into a new constant, the characteristic temperature or Einstein temperature
Θvib
Θ_{vib} = \frac{h ν_0}{k_B}   (8.18)
and can rearrange the vibrational partition function
q_{vib} = \exp\left(−\frac{Θ_{vib}}{2T}\right) \sum_{ν=0}^{∞} \exp\left(−\frac{Θ_{vib} ν}{T}\right) = \exp\left(−\frac{Θ_{vib}}{2T}\right) \sum_{ν=0}^{∞} \left[\exp\left(−\frac{Θ_{vib}}{T}\right)\right]^{ν}
= \frac{\exp\left(−\frac{Θ_{vib}}{2T}\right)}{1 − \exp\left(−\frac{Θ_{vib}}{T}\right)}   (8.19)
We have used that the vibrational partition function has the form of a geometric series, which converges
\sum_{ν=0}^{∞} q^{ν} = \frac{1}{1 − q}   (8.20)
with q = \exp\left(−\frac{Θ_{vib}}{T}\right). Note that
ν_0 = \frac{ω}{2π} = \frac{1}{2π} \sqrt{\frac{κ}{m}}   (8.21)
where m is the mass of the particle. Thus, the characteristic temperature Θ_{vib} depends on the force constant κ of
the potential and the mass of the particle.
The partition function of a crystal with N particles is
Q = q_{vib}^{3N} = \frac{\left[\exp\left(−\frac{Θ_{vib}}{2T}\right)\right]^{3N}}{\left[1 − \exp\left(−\frac{Θ_{vib}}{T}\right)\right]^{3N}}   (8.22)
U = k_B T^2 \left( \frac{∂ \ln Q}{∂T} \right)_{N,V}
= k_B T^2 \frac{∂}{∂T} \ln \frac{\left[\exp\left(−\frac{Θ_{vib}}{2T}\right)\right]^{3N}}{\left[1 − \exp\left(−\frac{Θ_{vib}}{T}\right)\right]^{3N}}
= k_B T^2 \, 3N \frac{∂}{∂T} \ln \frac{\exp\left(−\frac{Θ_{vib}}{2T}\right)}{1 − \exp\left(−\frac{Θ_{vib}}{T}\right)}
= k_B T^2 \, 3N \frac{∂}{∂T} \ln \exp\left(−\frac{Θ_{vib}}{2T}\right) − k_B T^2 \, 3N \frac{∂}{∂T} \ln \left[1 − \exp\left(−\frac{Θ_{vib}}{T}\right)\right]
= k_B T^2 \, 3N \frac{Θ_{vib}}{2T^2} − k_B T^2 \, 3N \left[1 − \exp\left(−\frac{Θ_{vib}}{T}\right)\right]^{−1} \left(− \exp\left(−\frac{Θ_{vib}}{T}\right)\right) \frac{Θ_{vib}}{T^2}
= \frac{3}{2} N k_B Θ_{vib} + 3N k_B Θ_{vib} \frac{\exp\left(−\frac{Θ_{vib}}{T}\right)}{1 − \exp\left(−\frac{Θ_{vib}}{T}\right)}
= \frac{3}{2} N k_B Θ_{vib} + 3N k_B Θ_{vib} \frac{1}{\exp\left(\frac{Θ_{vib}}{T}\right) − 1}   (8.23)
28
8 CRYSTALS
    C_{m,V} = 3R \left(\frac{\Theta_{vib}}{T}\right)^2 \frac{\exp\left(\frac{\Theta_{vib}}{T}\right)}{\left[\exp\left(\frac{\Theta_{vib}}{T}\right) - 1\right]^2}     (8.25)

For high temperatures T \gg \Theta_{vib}, or \Theta_{vib}/T \ll 1, the Taylor expansion of \exp\left(\frac{\Theta_{vib}}{T}\right) can be truncated
after the linear term, and the equation approaches the Dulong-Petit law

    C_{m,V} = 3R \left(\frac{\Theta_{vib}}{T}\right)^2 \frac{1 + \frac{\Theta_{vib}}{T} + \dots}{\left[1 + \frac{\Theta_{vib}}{T} + \dots - 1\right]^2}
            = 3R \left(1 + \frac{\Theta_{vib}}{T} + \dots\right)
            \approx 3R     (8.26)
• In the Einstein model, the heat capacity depends on a single substance-dependent parameter: \Theta_{vib}.
• The model can be further improved by accounting for coupled vibrations in the crystal (Debye theory) and for
magnetic effects.
9 FERMI-DIRAC, BOSE-EINSTEIN, AND MAXWELL-BOLTZMANN STATISTICS
where each state j corresponds to a specific placement of the individual particles on the energy levels of
the single-particle system, i.e. to a specific permutation / microstate.
The total number of microstates (analogous to the sample space in probability theory) is

    \Omega = \mathcal{N}^N     (9.4)

where \mathcal{N} is the number of energy levels in the single-particle system and N the number of particles. (There
are \mathcal{N} choices to place the first particle, \mathcal{N} choices to place the second particle, etc.)
Example: Consider a system with N = 2 indistinguishable particles, denoted i and j. Each of the
particles can occupy \mathcal{N} = 3 energy levels. The total number of microstates is \Omega = 3^2 = 9. The possible
configurations and their associated weights (number of microstates per configuration) are
    k_1 = (0, 0, 2) \Rightarrow W(k_1) = \frac{2!}{0!\,0!\,2!} = 1
    k_2 = (0, 2, 0) \Rightarrow W(k_2) = \frac{2!}{0!\,2!\,0!} = 1
    k_3 = (2, 0, 0) \Rightarrow W(k_3) = \frac{2!}{2!\,0!\,0!} = 1
    k_4 = (0, 1, 1) \Rightarrow W(k_4) = \frac{2!}{0!\,1!\,1!} = 2
    k_5 = (1, 0, 1) \Rightarrow W(k_5) = \frac{2!}{1!\,0!\,1!} = 2
    k_6 = (1, 1, 0) \Rightarrow W(k_6) = \frac{2!}{1!\,1!\,0!} = 2     (9.5)
yielding a total of 9 microstates.^3 Represented as a table, the microstates are

                   j
              1          2          3
         1  (2,0,0)   (1,1,0)   (1,0,1)
    i    2  (1,1,0)   (0,2,0)   (0,1,1)
         3  (1,0,1)   (0,1,1)   (0,0,2)
Note that, by exchanging particles i and j, there are two ways to generate the configurations in the off-
diagonal matrix elements, and hence W(k_4) = W(k_5) = W(k_6) = 2, but there is only one way to generate
the configurations in which both particles occupy the same energy level.
^3 Remember that the number of microstates per configuration k is given as W(k) = \frac{N!}{k_0! \cdot \ldots \cdot k_{\mathcal{N}-1}!}
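The counting in the two-particle example can be reproduced by brute-force enumeration; a small sketch (all names are illustrative):

```python
from itertools import product
from collections import Counter

n_levels, n_particles = 3, 2

# Each microstate assigns one of the 3 levels to each (labelled) particle.
microstates = list(product(range(n_levels), repeat=n_particles))
assert len(microstates) == n_levels**n_particles  # Omega = 3^2 = 9

def configuration(state):
    """Occupation numbers k = (k0, k1, k2) of the three levels."""
    counts = Counter(state)
    return tuple(counts.get(level, 0) for level in range(n_levels))

# Weight of each configuration = number of microstates mapping onto it.
weights = Counter(configuration(s) for s in microstates)
print(weights)
```

The doubly occupied configurations appear with weight 1 and the mixed configurations with weight 2, exactly as in eq. 9.5.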
This implies that there can be at most 1 particle per spin-energy state, which further implies that \mathcal{N} \geq N.
In the two-particle example this means that microstates on opposite sides of the diagonal in this table are
identical and only count once to the number of microstates. Thus, W(k_4) = W(k_5) = W(k_6) = 1. The
fact that the wave function has to change sign upon the exchange of two particles k and l implies that there
can be at most a single particle in each energy state. Proof: Consider a microstate with two particles in
state \psi_{s(k)}
To account for the fact that the particles are indistinguishable, one has to divide by the number of permu-
tations of the N particles and obtains

    \Omega_{fermion} = \frac{\mathcal{N}!}{(\mathcal{N} - N)!\,N!}     (9.9)
In general, we however have an infinite number of single-particle quantum states. To account for this,
we consider the density of states D(\epsilon_i), i.e. the number of quantum states g_i in a small energy interval
[\epsilon_i, \epsilon_i + \delta\epsilon] (Fig. 11). For fermions, f(\epsilon_i) \in [0, 1], or equivalently N_i \leq g_i. We assume that the particles in [\epsilon_i, \epsilon_i + \delta\epsilon]
can exchange with particles in the neighboring energy intervals. In equilibrium, the temperature T and the
chemical potential \mu are the same in all energy intervals.
Let A_i, U_i, and S_i be the free energy, the internal energy and the entropy of the subsystem, which are
related by the Gibbs-Helmholtz equation

    A_i = U_i - T S_i     (9.12)
Figure 11: Density of states. W. Göpel, H.-D. Wiemhöfer, "Statistische Thermodynamik", Spektrum
Akademischer Verlag (2000)
The chemical potential is given as the derivative of the free energy A_i with respect to the number of
particles N_i in the subsystem

    \mu = \left(\frac{\partial A_i}{\partial N_i}\right)_{T,V} = \left(\frac{\partial U_i}{\partial N_i}\right)_{T,V} - T \left(\frac{\partial S_i}{\partial N_i}\right)_{T,V} = const.     (9.13)

    U_i = N_i \epsilon_i     (9.14)
    S_i = k_B \ln \Omega_i(N_i, g_i)     (9.15)

where \Omega_i denotes the number of microstates in the energy interval [\epsilon_i, \epsilon_i + \delta\epsilon] (extension of W - the number of
microstates per configuration - which we had earlier in the course). \Omega_i depends on the number of quantum
states g_i and the number of particles N_i in this energy interval. Inserting eqs. 9.14 and 9.15 into eq. 9.13
yields

    \mu = \epsilon_i - k_B T \left(\frac{\partial \ln \Omega_i}{\partial N_i}\right)_{T,V} = const.     (9.16)

and thus

    \mu = \epsilon_i - k_B T \ln \frac{g_i - N_i}{N_i} = const.     (9.19)
The average number of particles per quantum state for a system of fermions is

    f_{fermions}(\epsilon_i) = \frac{N_i}{g_i} = \left[\exp\left(\frac{\epsilon_i - \mu}{k_B T}\right) + 1\right]^{-1}     (9.20)
In the two-particle example this means that microstates on opposite sides of the diagonal in this table
are identical and only count once to the number of microstates. Thus for bosons, W(k_4) = W(k_5) =
W(k_6) = 1. But wave functions with more than one particle per single-particle quantum state are permitted:
W(k_1) = W(k_2) = W(k_3) = 1.
The total number of microstates for a system with N bosons and \mathcal{N} single-particle quantum states is

    \Omega_{bosons} = \frac{(\mathcal{N} + N - 1)!}{N!\,(\mathcal{N} - 1)!}     (9.22)
(The derivation is more complicated than for fermions.) The extension to an infinite number of single-particle
quantum states is analogous to the derivation for the fermions (eqs. 9.10 - 9.16). The number of microstates
in the energy interval [\epsilon_i, \epsilon_i + \delta\epsilon] for bosons is

    \Omega_{i,bosons} = \frac{(N_i + g_i - 1)!}{N_i!\,(g_i - 1)!}     (9.23)
Using the Stirling approximation, the derivative with respect to the number of particles in eq. 9.16 is

    \left(\frac{\partial \ln \Omega_{i,bosons}}{\partial N_i}\right)_{T,V} \approx \ln \frac{N_i + g_i - 1}{N_i} \approx \ln \frac{N_i + g_i}{N_i}     (9.24)

and thus

    \mu = \epsilon_i - k_B T \ln \frac{N_i + g_i}{N_i} = const.     (9.25)
The average number of particles per quantum state for a system of bosons is

    f_{bosons}(\epsilon_i) = \frac{N_i}{g_i} = \left[\exp\left(\frac{\epsilon_i - \mu}{k_B T}\right) - 1\right]^{-1}     (9.26)
In all three types of statistics the average number of particles per quantum state is determined by the
position of the chemical potential \mu within the energy scale and by the temperature T. In some systems
(e.g. dilute gases of atoms or molecules), the chemical potential can be much lower than the lowest
single-particle energy - \epsilon_i \gg \mu for all i \geq 0 - and the Maxwell-Boltzmann statistics can be used.
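The three occupation functions can be evaluated side by side; a minimal sketch with illustrative values of x = (\epsilon_i - \mu)/k_B T:

```python
import math

def f_fermi(x):    # Fermi-Dirac, eq. 9.20; x = (eps_i - mu) / (kB T)
    return 1.0 / (math.exp(x) + 1.0)

def f_bose(x):     # Bose-Einstein, eq. 9.26; requires x > 0
    return 1.0 / (math.exp(x) - 1.0)

def f_maxwell(x):  # Maxwell-Boltzmann (classical) limit
    return math.exp(-x)

# For eps_i - mu >> kB T all three statistics coincide.
for x in (1.0, 5.0, 10.0):
    print(x, f_fermi(x), f_bose(x), f_maxwell(x))
```

Already at x = 10 the three values agree to about one part in 2 * 10^4, which is why the classical statistics works so well for dilute gases.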
and thus

    \mu = -k_B T \ln \frac{q}{N}     (9.29)
For g_i \gg N_i, the expressions for the chemical potential for fermions and bosons (eqs. 9.19 and 9.25)
simplify to

    \mu \approx \epsilon_i - k_B T \ln \frac{g_i}{N_i} = const.     (9.30)
These two equations for \mu can be combined to obtain an expression for the relative number of particles in
the single-particle quantum state i

    \frac{N_i}{g_i N} = \frac{f_{Maxwell-Boltzmann}(\epsilon_i)}{N} = \frac{\exp\left(-\frac{\epsilon_i}{k_B T}\right)}{q}     (9.31)

This is the Boltzmann distribution.
10 IDEAL MONO-ATOMIC GAS
    Q = \frac{q^N}{N!}     (10.1)

Eq. 10.1 is called Maxwell-Boltzmann statistics. It is an approximation to the true partition function,
because q^N contains terms in which two or more particles occupy the same single-particle energy level and for
which less than N! permutations exist. Thus, by dividing everything by N! one underestimates the partition
function. The deviation from the true partition function is only significant if the number of microstates with
two or more particles in the same energy level is sizeable compared to the total number of microstates.
In most physical systems, the number of single-particle energy levels is much larger than the number of
particles, i.e. \mathcal{N} \gg N, and the Maxwell-Boltzmann statistics is an excellent approximation.
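The over- and under-counting can be made concrete for a toy system with three levels and two particles, comparing q^2/2! with the exact Bose and Fermi partition functions (the energies, in units of k_B T, are illustrative):

```python
import math
from itertools import combinations, combinations_with_replacement

energies = [0.0, 1.0, 2.0]           # single-particle levels in units of kB*T
x = [math.exp(-e) for e in energies]  # Boltzmann factors

q = sum(x)                            # single-particle partition function
Q_mb = q**2 / math.factorial(2)       # Maxwell-Boltzmann approximation, eq. 10.1

# Exact two-particle partition functions:
Q_fermi = sum(x[i] * x[j] for i, j in combinations(range(3), 2))
Q_bose = sum(x[i] * x[j] for i, j in combinations_with_replacement(range(3), 2))

print(Q_fermi, Q_mb, Q_bose)
```

For N = 2 the Maxwell-Boltzmann value lies exactly halfway between the fermion and boson results, since the doubly occupied terms \sum_i x_i^2 are subtracted in the one case and added in the other.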
    V = L_x \cdot L_y \cdot L_z     (10.3)

where L_x, L_y, and L_z are the lengths of the container in x, y, and z direction. Since the gas consists of atoms,
the only contributions to its energy are the translational energy and the electronic energy. We neglect the
contributions of the electronic energy, i.e. we assume that all atoms are in the electronic ground state. The
translational energy is given by the quantum mechanical treatment of a particle in a box (see exercise 2)

    \epsilon_{trans} = \epsilon_x + \epsilon_y + \epsilon_z = \frac{h^2}{8m}\left[\left(\frac{n_x}{L_x}\right)^2 + \left(\frac{n_y}{L_y}\right)^2 + \left(\frac{n_z}{L_z}\right)^2\right]     (10.4)
where n_x, n_y, n_z \in \mathbb{N}_{>0} are the quantum numbers. The single-particle partition function is hence given as

    q_{trans} = \sum_{n_x=1}^{\infty} \sum_{n_y=1}^{\infty} \sum_{n_z=1}^{\infty} \exp\left[-\frac{\epsilon_x + \epsilon_y + \epsilon_z}{k_B T}\right]
              = \sum_{n_x=1}^{\infty} \exp\left[-\frac{h^2 n_x^2}{8 m L_x^2 k_B T}\right] \cdot \sum_{n_y=1}^{\infty} \exp\left[-\frac{h^2 n_y^2}{8 m L_y^2 k_B T}\right] \cdot \sum_{n_z=1}^{\infty} \exp\left[-\frac{h^2 n_z^2}{8 m L_z^2 k_B T}\right]     (10.5)
At room temperature, the energy levels are so closely spaced that we can assume an energy continuum
(half-classical approximation)

    \sum_{n_x=1}^{\infty} \exp\left[-\frac{h^2 n_x^2}{8 m L_x^2 k_B T}\right] \approx \int_0^{\infty} \exp\left[-\frac{h^2 n_x^2}{8 m L_x^2 k_B T}\right] dn_x     (10.6)
    q_{trans} = \int_0^{\infty} \exp\left[-\frac{h^2 n_x^2}{8 m L_x^2 k_B T}\right] dn_x \cdot \int_0^{\infty} \exp\left[-\frac{h^2 n_y^2}{8 m L_y^2 k_B T}\right] dn_y \cdot \int_0^{\infty} \exp\left[-\frac{h^2 n_z^2}{8 m L_z^2 k_B T}\right] dn_z
              = \frac{1}{2}\sqrt{\pi \frac{8 m L_x^2 k_B T}{h^2}} \cdot \frac{1}{2}\sqrt{\pi \frac{8 m L_y^2 k_B T}{h^2}} \cdot \frac{1}{2}\sqrt{\pi \frac{8 m L_z^2 k_B T}{h^2}}
              = \left(\frac{2 \pi m k_B T}{h^2}\right)^{3/2} \cdot V     (10.8)
where we have used eq. 10.3. The translational single-particle partition function depends on V , T , and m
as
    q_{trans} \sim V
    q_{trans} \sim T^{3/2}
    q_{trans} \sim m^{3/2}     (10.9)
Thermal de Broglie wavelength. The factor \lambda has units of meters and can be interpreted as the wave-
length of the particle. It is also called the thermal de Broglie wavelength and can be used to estimate at which
particle densities the half-classical approximation breaks down and quantum effects start playing a role

    \left(\frac{V}{N}\right)^{1/3} \leq \lambda     (10.13)

That is, if the volume per particle is smaller than \lambda^3, the approximation is not valid.
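As a concrete check of this criterion, the following sketch evaluates \lambda = h/(2\pi m k_B T)^{1/2} for argon at room temperature (rounded constants, assumed conditions) and compares it with the volume per particle of an ideal gas at 1 bar:

```python
import math

h = 6.626e-34               # Planck constant, J s
kB = 1.381e-23              # Boltzmann constant, J/K
m_ar = 39.95 * 1.6605e-27   # mass of an argon atom, kg
T = 300.0                   # K
p = 1.0e5                   # Pa

lam = h / math.sqrt(2 * math.pi * m_ar * kB * T)  # thermal de Broglie wavelength
v_per_particle = kB * T / p                       # ideal-gas volume per particle

print(lam)                      # on the order of 1e-11 m
print(v_per_particle / lam**3)  # >> 1: quantum effects negligible
```

The volume per particle exceeds \lambda^3 by many orders of magnitude, so the half-classical approximation is safe for ordinary gases; it only breaks down for very cold or very dense systems.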
for the momentum of a single particle. The effective kinetic energy of a single particle is then

    E_{kin} = \frac{p^2}{2m} = \pi k_B T     (10.15)

This expression differs from the average kinetic energy of an ideal gas particle, which is E_{kin} = \frac{3}{2} k_B T
and will be derived in the following section. This is because eq. 10.15 has been derived from the single-
particle partition function, i.e. it does not account for the fact that there are N particles in the box and that
the particles are indistinguishable. The expression in eq. 10.15 exists. I however could not find out in which
situations it is useful.
The ideal gas law is obtained by differentiating eq. 10.17 with respect to the volume

    p = -\left(\frac{\partial A}{\partial V}\right)_{T,N} = \frac{k_B T N}{V} = \frac{nRT}{V}     (10.18)
where the ideal gas constant is given as R = k_B N_A. N_A is Avogadro's constant and n = N/N_A. The
molar internal energy (N = N_A) is given as

    U_m = k_B T^2 \left(\frac{\partial \ln Q}{\partial T}\right)_{V,N}
        = k_B T^2 N_A \left(\frac{\partial}{\partial T} \ln \frac{1}{\lambda^3}\right)_{V,N}
        = k_B T^2 N_A \left(\frac{\partial}{\partial T} \ln \left(\frac{h^2}{2\pi m k_B T}\right)^{-3/2}\right)_{V,N}
        = k_B T^2 N_A \frac{3}{2} \left(\frac{\partial}{\partial T} \ln \frac{2\pi m k_B T}{h^2}\right)_{V,N}
        = k_B T^2 N_A \frac{3}{2} \frac{1}{T}
        = \frac{3}{2} k_B T N_A
        = \frac{3}{2} RT     (10.19)

The molar heat capacity is given as

    C_{V,m} = \left(\frac{\partial U}{\partial T}\right)_{N,V} = \frac{3}{2} R     (10.20)
    S = k_B N \left[\frac{5}{2} + \frac{3}{2} \ln \frac{2\pi m k_B}{h^2} + \frac{3}{2} \ln T + \ln \frac{V}{N}\right]     (10.21)

This is the Sackur-Tetrode equation, which one also finds in the following rearrangements

    S = k_B N \left(\frac{5}{2} + \ln\left[\frac{V}{N}\left(\frac{2\pi m k_B T}{h^2}\right)^{3/2}\right]\right)
      = k_B N \left(\frac{5}{2} + \ln\left[\frac{V}{N}\left(\frac{4\pi m}{3h^2}\frac{U}{N}\right)^{3/2}\right]\right)     (10.22)

and

    S = k_B N \left(\frac{5}{2} + \ln \frac{V}{N \lambda^3}\right)     (10.23)
This is in contrast to the observation in classical thermodynamics, which requires that the entropy doubles
if the system size doubles

    S_2 = 2S_1     (10.26)

On the other hand, eq. 10.23, which was derived for indistinguishable particles, fulfills the observed addi-
tivity of the entropy. Hence, the factor N! is necessary in the partition function of ideal gases.
11 IDEAL GAS WITH INTERNAL DEGREES OF FREEDOM
\epsilon_0 is the energy of the molecule if all contributing terms are in the ground state, i.e. the ground state
energy is moved to a separate term and all other terms are zero for the lowest quantum number. \epsilon_{trans} is
the translational energy. The following four terms are grouped together to yield the internal energy \epsilon_{int}: the
vibrational energy \epsilon_{vib}, the rotational energy \epsilon_{rot}, the electronic energy \epsilon_e, and the energy of the nuclear
degrees of freedom \epsilon_n. The single-particle partition function is thus given as a product

    q_{molecule} = q_0 \cdot q_{trans} \cdot \underbrace{q_{vib} \cdot q_{rot} \cdot q_e \cdot q_n}_{q_{int}}
                 = q_{trans} \cdot q_{int} \cdot e^{-\epsilon_0 / k_B T}     (11.3)
and the partition function of the system is given as

    Q = \frac{1}{N!} q_{molecule}^N
      = \frac{1}{N!} q_{trans}^N \cdot q_{int}^N \cdot e^{-N\epsilon_0 / k_B T}
      = Q_{trans} \cdot Q_{int} \cdot e^{-N\epsilon_0 / k_B T}     (11.4)
where we have incorporated the factor 1/N! into the translational partition function. In chapter 10, we have
derived an expression for the translational partition function

    Q_{trans} = \frac{1}{N!} \left(\frac{V}{\lambda^3}\right)^N     (11.5)

where V is the volume and \lambda is the thermal de Broglie wavelength.
    z_n = g_{n,0} = 2I + 1     (11.9)

For a molecule with N_{atom} atoms, one needs to take the degeneracy of the nuclear ground state of all
atoms into account and thus

    z_n = \prod_{i=1}^{N_{atom}} (2I_i + 1)     (11.10)

Many atoms have spin I = 0 and contribute a factor g_{n,0} = 1 to the partition function. One has however
to pay attention to molecules with a rotational symmetry. Atoms which are symmetry-equivalent are also
indistinguishable. If these atoms additionally have a spin greater than zero, the nuclear partition function
cannot be decoupled from the rotational partition function.
with quantum numbers \nu = 0, 1, 2, \ldots All energy levels are non-degenerate, i.e. g_\nu = 1 for all \nu. The
ground state energy \epsilon = \frac{1}{2}h\nu_0 is incorporated into the ground state energy of the molecule \epsilon_0. Thus the
partition function is given as

    q_{vib} = \sum_{\nu=0}^{\infty} \exp\left[-\frac{1}{k_B T}\left(\epsilon_{vib,\nu} - \frac{1}{2}h\nu_0\right)\right] = \sum_{\nu=0}^{\infty} \exp\left[-\frac{\nu h \nu_0}{k_B T}\right]     (11.12)
We combine the constants into a new constant, the characteristic temperature \Theta_{vib}

    \Theta_{vib} = \frac{h\nu_0}{k_B}     (11.13)

and can rearrange eq. 11.12

    q_{vib} = \sum_{\nu=0}^{\infty} \exp\left[-\Theta_{vib}\frac{\nu}{T}\right] = \sum_{\nu=0}^{\infty} \left(\exp\left[-\frac{\Theta_{vib}}{T}\right]\right)^{\nu}
            = \frac{1}{1 - \exp\left[-\frac{\Theta_{vib}}{T}\right]}     (11.14)

We have used that the vibrational partition function has the form of a geometric series, which converges

    \sum_{\nu=0}^{\infty} q^{\nu} = \frac{1}{1-q}     (11.15)

with q = \exp\left[-\frac{\Theta_{vib}}{T}\right].
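The closed form in eq. 11.14 can be checked against a direct truncation of the series in eq. 11.15; a small sketch with an illustrative \Theta_{vib}:

```python
import math

def qvib_closed(theta, T):
    """Closed form of the vibrational partition function, eq. 11.14."""
    return 1.0 / (1.0 - math.exp(-theta / T))

def qvib_sum(theta, T, n_terms=200):
    """Direct evaluation of the geometric series, eq. 11.15."""
    q = math.exp(-theta / T)
    return sum(q**nu for nu in range(n_terms))

theta_vib = 3000.0   # K, a typical order of magnitude for a stretch vibration
for T in (300.0, 1000.0, 3000.0):
    print(T, qvib_closed(theta_vib, T), qvib_sum(theta_vib, T))
```

At room temperature q_vib is barely above 1: with \Theta_{vib} \gg T essentially only the vibrational ground state is populated.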
Molecules with more than two atoms. Using a normal mode analysis, one can decompose the complex
vibration of a molecule with more than two atoms into a superposition of harmonic oscillations. The vibration
in each of these so-called normal modes can be described by an independent harmonic oscillator with a
specific ground state frequency \nu_0. Thus, the total vibrational energy of a molecule is given as a sum of the
energies of harmonic oscillators. For linear molecules, this sum has 3N_{atom} - 5 terms and for non-linear
molecules, it has 3N_{atom} - 6 terms

    \epsilon_{vib} = \sum_{i=1}^{3N_{atom}-6(5)} \left[\epsilon_{vib}(\nu_i; \nu_{0,i}) - \frac{1}{2}h\nu_{i,0}\right]
                  = \sum_{i=1}^{3N_{atom}-6(5)} h\nu_{i,0}\,\nu_i     (11.16)

where we shifted the ground state energy to zero. The single-particle vibrational partition function is thus
given as
    q_{vib} = \sum_{\nu_1=0}^{\infty} \sum_{\nu_2=0}^{\infty} \ldots \sum_{\nu_{3N-6(5)}=0}^{\infty} \exp\left[-\frac{1}{k_B T}\epsilon_{vib}(\nu_1, \ldots, \nu_{3N-6(5)})\right]
            = \sum_{\nu_1=0}^{\infty} \sum_{\nu_2=0}^{\infty} \ldots \sum_{\nu_{3N-6(5)}=0}^{\infty} \exp\left[-\sum_{i=1}^{3N_{atom}-6(5)} \Theta_{vib,i}\frac{\nu_i}{T}\right]
            = \sum_{\nu_1=0}^{\infty} \sum_{\nu_2=0}^{\infty} \ldots \sum_{\nu_{3N-6(5)}=0}^{\infty} \prod_{i=1}^{3N_{atom}-6(5)} \exp\left[-\Theta_{vib,i}\frac{\nu_i}{T}\right]
            = \prod_{i=1}^{3N_{atom}-6(5)} \sum_{\nu_i=0}^{\infty} \exp\left[-\Theta_{vib,i}\frac{\nu_i}{T}\right]
            = \prod_{i=1}^{3N_{atom}-6(5)} q_{vib}(\Theta_{vib,i})     (11.17)
^4 The characteristic frequencies \nu_{0,i} of the normal modes can be obtained by carrying out a normal mode
analysis using a quantum chemistry software. Alternatively, they can be measured by IR or Raman spec-
troscopy. Note however that this approach is only useful for relatively small molecules. For large molecules
the assumption that vibrational and rotational modes are decoupled is not valid. Moreover, the normal
mode analysis is only valid for a molecule close to a minimum in the potential energy surface. Should the
potential energy surface have more than one minimum, i.e. should the molecule have several conformations,
the normal mode analysis needs to be carried out for each minimum and the different conformations need
to be accounted for in the partition function.
where m_1 and m_2 are the masses of the two atoms and r_1 and r_2 are the respective distances to the center
of mass. The rotation can be described by an equivalent one-particle problem, in which a particle with the
reduced mass

    \mu = \frac{m_1 m_2}{m_1 + m_2}     (11.19)

rotates around a fixed center at a radius

    r_0 = r_1 + r_2     (11.20)

The quantum mechanical treatment of this problem is called the "quantum-mechanical rigid rotator" and
yields the energy levels

    \epsilon_{rot,J} = \frac{h^2}{8\pi^2 I} J(J+1) \qquad \text{with } J = 0, 1, 2, \ldots     (11.21)
where J is the rotational quantum number. The energy levels are degenerate with a degeneracy factor
    g_J = 2J + 1     (11.22)

(Note that the rotational ground state with J = 0 and g_J = 1 is the only rotational state which is not
degenerate.) Hence we obtain for the rotational partition function

    q_{rot} = \sum_{J=0}^{\infty} (2J+1) \exp\left[-\frac{J(J+1)\,h^2}{8\pi^2 I\, k_B T}\right]     (11.23)
Analogously to the vibrational partition function, we combine the constants in the exponent into a new
constant, the characteristic temperature for the rotation \Theta_{rot}

    \Theta_{rot} = \frac{h^2}{8\pi^2 I k_B}     (11.24)

and rewrite the rotational partition function as

    q_{rot} = \sum_{J=0}^{\infty} (2J+1) \exp\left[-J(J+1)\frac{\Theta_{rot}}{T}\right]     (11.25)
J=0
For many common molecules, the rotational characteristic temperature is very small - on the order of 0.1
to 1 K. Molecules with a small moment of inertia can have larger characteristic temperatures, which are
however still well below room temperature. Examples are H_2: \Theta_{rot} = 87.6 K or HF: \Theta_{rot} = 30.2 K (see
https://en.wikipedia.org/wiki/Rotational_temperature)
Population of the rotational energy levels. The relative population of the rotational energy levels p_J
is given as

    p_J = \frac{N_J}{N} \sim (2J+1) \exp\left[-J(J+1)\frac{\Theta_{rot}}{T}\right]     (11.26)

where N_J is the number of particles in rotational state J (see Fig. 12). The rotational level with the highest
relative population is given as

    J_{max} = \sqrt{\frac{T}{2\Theta_{rot}}} - \frac{1}{2}     (11.27)

This value is a measure of how dense the rotational states are. Since the shape of the population distribution
is the same for all diatomic molecules, J_{max} tells us how many states can be found before the maximum.
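Eqs. 11.26 and 11.27 can be evaluated numerically; the sketch below uses the \Theta_{rot} and T of the ^{12}C-^{16}O panel of Fig. 12:

```python
import math

theta_rot = 2.779   # K, characteristic rotational temperature of 12C-16O
T = 500.0           # K

def weight(J):
    """Unnormalized population of rotational level J, eq. 11.26."""
    return (2 * J + 1) * math.exp(-J * (J + 1) * theta_rot / T)

rot_weights = [weight(J) for J in range(100)]
Z = sum(rot_weights)
populations = [w / Z for w in rot_weights]

# Most populated level from the explicit distribution ...
J_most_populated = max(range(100), key=lambda J: populations[J])
# ... compared with the continuous estimate of eq. 11.27
J_max_formula = math.sqrt(T / (2 * theta_rot)) - 0.5

print(J_most_populated, round(J_max_formula, 2))
```

The discrete maximum agrees with the continuous formula to within one level, which is all eq. 11.27 promises.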
Figure 12: Population p_J of the rotational levels J for diatomic molecules at T = 500 K. a: ^{12}C-^{16}O
(\Theta_{rot} = 2.779 K); b: ^{16}O-^{16}O (\Theta_{rot} = 2.080 K).
High temperature approximation: nonlinear molecules. Linear molecules have two rotational axes A
and B with identical moments of inertia I_A = I_B and hence \Theta_{rot} = \Theta_{rot,A} = \Theta_{rot,B}. Nonlinear molecules
have three rotational axes A, B, and C with different moments of inertia I_A, I_B, and I_C. Each of the axes
has its own characteristic temperature \Theta_{rot,A}, \Theta_{rot,B}, and \Theta_{rot,C}. For the rotational partition function of
nonlinear molecules one obtains

    q_{rot} = \frac{\pi^{1/2}}{\sigma} \left(\frac{T}{\Theta_{rot,A}} \cdot \frac{T}{\Theta_{rot,B}} \cdot \frac{T}{\Theta_{rot,C}}\right)^{1/2} = \frac{\pi^{1/2} (I_A I_B I_C)^{1/2}}{\sigma} \left(\frac{8\pi^2 k_B T}{h^2}\right)^{3/2}     (11.31)

Again \sigma is the symmetry number, i.e. the number of symmetry operations which yield a conformation which
is indistinguishable from the starting conformation. For example \sigma(HCl) = 1, \sigma(H_2) = 2, \sigma(NH_3) = 3,
\sigma(CH_4) = 12, \sigma(benzene) = 12.
Nuclear spins and rotational states: O_2. The stable isotopes of oxygen are
• ^{16}O, abundance: 99.757%, nuclear spin: I = 0
• ^{17}O, abundance: 0.038%, nuclear spin: I = 5/2
• ^{18}O, abundance: 0.205%, nuclear spin: I = 0
The molecular wave function of O_2 is

    \Psi(O_2) \approx \psi_{trans} \cdot \psi_{rot} \cdot \psi_{vib} \cdot \psi_e \cdot \psi_n     (11.32)

If the O_2 molecule consists of atoms of the same isotope, the wavefunction must obey the symmetry /
anti-symmetry properties of the corresponding isotopes, i.e.
a. \Psi(O_2) is symmetric under the exchange of the two atoms,

    +\Psi(O_2) \xrightarrow{\text{exchange}} +\Psi(O_2)

if the isotopes are bosons (^{16}O or ^{18}O)
b. \Psi(O_2) is anti-symmetric under the exchange of the two atoms,

    +\Psi(O_2) \xrightarrow{\text{exchange}} -\Psi(O_2)

if the isotopes are fermions (^{17}O)
The electronic ground state of O_2 is anti-symmetric with respect to the exchange of the two atoms. \psi_{trans}
and \psi_{vib} depend on the position of the center of mass and the distance between the two atoms and are
therefore symmetric with respect to an exchange of the two atoms. Thus the product \psi_{trans} \cdot \psi_{vib} \cdot \psi_e is
anti-symmetric for all isotopes. Whether the molecular wave function \Psi(O_2) is symmetric or anti-symmetric
depends on the symmetry of \psi_{rot} \cdot \psi_n.
For the two boson isotopes (^{16}O or ^{18}O), \psi_n is symmetric because both nuclei can only be in one quantum
state: I = 0. Thus for ^{16}O-^{16}O and ^{18}O-^{18}O, the rotational wave function \psi_{rot} must be antisymmetric
such that the molecular wave function is symmetric. This means that rotational states with even quantum
number J = 0, 2, 4, \ldots are not allowed and only rotational states with odd quantum numbers J = 1, 3, 5, \ldots
are occupied (see Fig. 12.b). By far the vast majority of the molecules in natural oxygen are ^{16}O-^{16}O. In
these molecules half of the rotational states are "missing". By comparison, ^{16}O-^{17}O has the full set of
rotational states.
For the fermion isotope (^{17}O, I = 5/2), each nucleus can assume 2I + 1 = 6 different quantum states. The
nuclear wave function \psi_n hence has a degeneracy of

    g_n = (2I + 1) \cdot (2I + 1) = 36

Of these 36 degenerate states, 15 have an antisymmetric nuclear wavefunction and 21 have a symmetric
nuclear wavefunction. Therefore ^{17}O-^{17}O exists in two different variants: (i) \psi_n anti-symmetric, and (ii)
\psi_n symmetric. In variant i the rotational wavefunction has to be anti-symmetric such that \Psi(O_2) is
antisymmetric (\Rightarrow only J = 1, 3, 5, \ldots allowed), whereas in variant ii the rotational wavefunction has to be
symmetric such that \Psi(O_2) is antisymmetric (\Rightarrow only J = 0, 2, 4, \ldots allowed).
Rotational-vibrational spectroscopy. Let us consider the rotational-vibrational spectrum of ^1H^{35}Cl. The
absorption lines in this spectrum correspond to transitions from the rotational states J of the vibrational
ground state \nu = 0 to the rotational states J' of the first excited vibrational state \nu' = 1. The selection
rule is J' = J \pm 1.
where we have used \Theta_{rot} = 15.021 K. Likewise the transitions to lower rotational states (\nu = 0, J) \rightarrow (\nu' =
1, J' = J - 1) (P-branch of the spectrum) occur at energies
The relative heights of the absorption lines are given by the population of the initial state, i.e. by eq. 11.26.
Figure 13: Infrared rotational-vibrational spectrum of hydrochloric acid gas at room temperature. The
doublets in the IR absorption intensities are caused by the isotopes present in the sample: ^1H-^{35}Cl and
^1H-^{37}Cl
12 MIXTURES OF IDEAL GASES
    q_A(V, T) = \zeta_A(T)\,V
    q_B(V, T) = \zeta_B(T)\,V     (12.4)

where \zeta_A and \zeta_B are functions which depend on the single-particle partition functions of gas A and B. For
the partition functions of the two systems, we obtain

    Q_I = \frac{\zeta_A^{N_A} V_A^{N_A}}{N_A!} \cdot \frac{\zeta_B^{N_B} V_B^{N_B}}{N_B!}     (12.5)

and

    Q_{II} = \frac{\zeta_A^{N_A} V^{N_A}}{N_A!} \cdot \frac{\zeta_B^{N_B} V^{N_B}}{N_B!}     (12.6)
The free energies of the two systems are

    A_I = -k_B T \ln Q_I
        = -k_B T \left[N_A \ln \zeta_A + N_A \ln V_A + N_B \ln \zeta_B + N_B \ln V_B - \ln N_A! - \ln N_B!\right]     (12.7)

and the analogous expression A_{II} with V_A and V_B replaced by V. The change in free energy upon mixing is

    \Delta A = A_{II} - A_I
             = -k_B T \left[N_A \ln V + N_B \ln V - N_A \ln V_A - N_B \ln V_B\right]
             = k_B T \left[N_A \ln \frac{V_A}{V} + N_B \ln \frac{V_B}{V}\right]
             = (N_A + N_B) k_B T \left[x_A \ln \frac{V_A}{V} + x_B \ln \frac{V_B}{V}\right]     (12.9)
For the change of entropy and the change of internal energy upon mixing, we obtain

    \Delta S = -\left(\frac{\partial \Delta A}{\partial T}\right)_{N_A,N_B,V} = -\frac{\Delta A}{T}     (12.11)

and

    \Delta U = \Delta A + T \Delta S = 0     (12.12)
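For the special case V_A = V_B = V/2 and N_A = N_B (one mole of mixture in total), eqs. 12.9 and 12.11 reduce to the familiar mixing entropy R ln 2; a quick numerical check (rounded constants):

```python
import math

kB = 1.381e-23          # Boltzmann constant, J/K
N_avogadro = 6.022e23   # Avogadro's constant

# One mole of mixture: half a mole of A in V_A, half a mole of B in V_B,
# with V_A = V_B = V/2, so x_A = x_B = 1/2 and V_A/V = V_B/V = 1/2.
N_total = N_avogadro
x_A = x_B = 0.5
T = 298.0               # K

# Eq. 12.9 for the free energy of mixing, eq. 12.11 for Delta S = -Delta A / T
delta_A = N_total * kB * T * (x_A * math.log(0.5) + x_B * math.log(0.5))
delta_S = -delta_A / T

print(delta_S)                          # J/K per mole of mixture
print(N_avogadro * kB * math.log(2))    # R ln 2 for comparison
```

Note that the temperature drops out of \Delta S, in line with \Delta U = 0: the mixing entropy of ideal gases is purely combinatorial.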
The pressure is defined as minus the partial derivative of the free energy with respect to the volume. Thus
for system II, we have

    P_{II} = -\left(\frac{\partial A_{II}}{\partial V}\right)_{N_A,N_B,T} = k_B T \frac{N_A}{V} + k_B T \frac{N_B}{V} = P_A + P_B     (12.13)

where P_A and P_B are the partial pressures of the two components in the mixture.
For mixing two ideal gases at constant pressure, the equations for the thermodynamic state functions are
more complicated. A particularly simple form however arises if the pressure in the two containers A and B
is the same. Then

    x_A = \frac{N_A}{N} = \frac{V_A}{V}
    x_B = \frac{N_B}{N} = \frac{V_B}{V}     (12.14)

Upon removal of the barrier neither the pressure nor the volume changes and the volume work is zero

    P \Delta V = 0     (12.15)
Hence,
∆H = ∆U = 0 (12.16)
and
That is, one considers the change of free energy upon a change of the particle number of component k
while keeping the temperature, the volume and the particle numbers of all other components fixed.
13 Chemical equilibrium
So far we have considered physical changes (changes in temperature, pressure, volume...) and the properties
of spectra. Next, we will consider actual chemical reactions. In fact, one can calculate the equilibrium
constant of a reaction from the microscopic properties of the reagents and the products. For reactions of
small molecules in the gas phase this approach yields very accurate results. It is useful for reactions which
occur under such extreme conditions that the equilibrium constant cannot be probed experimentally (e.g.
in explosions, volcanoes etc.).
The condition for chemical equilibrium of the reaction

    \nu_A A + \nu_B B \rightleftharpoons \nu_C C + \nu_D D     (13.1)

is

    \nu_A \mu_A + \nu_B \mu_B = \nu_C \mu_C + \nu_D \mu_D     (13.2)

where \nu_k is the stoichiometric number of the kth component and \mu_k is its chemical potential. Let us assume
the reaction takes place in the gas phase and all reactants and products behave as ideal gases. Then the
chemical potential is given as

    \mu_k = -k_B T \ln \frac{q_k(V, T)}{N_k} \qquad k = A, B, C, D     (13.3)

Inserting in eq. 13.2 yields a simple equilibrium condition

    \frac{N_C^{\nu_C} N_D^{\nu_D}}{N_A^{\nu_A} N_B^{\nu_B}} = \frac{q_C^{\nu_C} q_D^{\nu_D}}{q_A^{\nu_A} q_B^{\nu_B}}     (13.4)
Equilibrium constant. Using absolute particle numbers is impractical. We therefore replace the particle
numbers by dimensionless concentrations

    c_k = \frac{N_k}{v} \qquad k = A, B, C, D     (13.5)

with

    v = \frac{V}{V^0}     (13.6)

where V^0 is the standard volume. We obtain

    \frac{N_C^{\nu_C} N_D^{\nu_D}}{N_A^{\nu_A} N_B^{\nu_B}} = \frac{c_C^{\nu_C} c_D^{\nu_D}}{c_A^{\nu_A} c_B^{\nu_B}} \cdot \frac{v^{\nu_C} v^{\nu_D}}{v^{\nu_A} v^{\nu_B}} = \frac{q_C^{\nu_C} q_D^{\nu_D}}{q_A^{\nu_A} q_B^{\nu_B}}     (13.7)
We define the equilibrium constant K as

    K = \frac{c_C^{\nu_C} c_D^{\nu_D}}{c_A^{\nu_A} c_B^{\nu_B}} = \frac{(q_C/v)^{\nu_C} (q_D/v)^{\nu_D}}{(q_A/v)^{\nu_A} (q_B/v)^{\nu_B}}     (13.8)
In an ideal gas, the single-particle partition function depends linearly on the volume

    q_k = V \zeta_k(T) \qquad k = A, B, C, D     (13.9)
    \frac{q_k}{v} = \frac{q_k}{V} V^0 = \frac{V \zeta_k(T)}{V} V^0 = V^0 \zeta_k(T) \qquad k = A, B, C, D     (13.10)

corresponds to the single-particle partition function in a standard volume V^0 and only depends on the
temperature. Eq. 13.8 thus defines the equilibrium constant in a standard volume, which then only depends on
the temperature.
Reference energy level. In chapter 9 we have calculated the molecular partition function q' with
respect to a reference energy level \epsilon_0, where \epsilon_0 was defined as the energy of the quantum mechanical ground
state. To this end we shifted the energy levels

    \epsilon_i' = \epsilon_i - \epsilon_0     (13.11)

where \epsilon_i is the true quantum energy and \epsilon_i' is the energy with respect to the reference energy level. The
partition function q_k is related to the shifted partition function q_k' by

    q_k = \sum_{i=0} \exp\left[-\frac{\epsilon_i}{k_B T}\right]
        = \sum_{i=0} \exp\left[-\frac{\epsilon_i' + \epsilon_0}{k_B T}\right] = \left(\sum_{i=0} \exp\left[-\frac{\epsilon_i'}{k_B T}\right]\right) \exp\left[-\frac{\epsilon_0}{k_B T}\right]
        = q_k' \exp\left[-\frac{\epsilon_0}{k_B T}\right]     (13.12)
The partition function q_k' can be calculated using the approximations discussed in chapter 9. The factor
\exp\left[-\frac{\epsilon_0}{k_B T}\right] corrects the partition function, such that q_k applies to the true ground state energy. So far, we
have never explicitly calculated the correction factor. This however becomes necessary when dealing with
chemical reactions. Inserting eq. 13.12 into eq. 13.8 yields

    K = \frac{(q_C'/v)^{\nu_C} (q_D'/v)^{\nu_D}}{(q_A'/v)^{\nu_A} (q_B'/v)^{\nu_B}} \cdot \exp\left[-\frac{\Delta\epsilon_0}{k_B T}\right]     (13.13)

with

    \Delta\epsilon_0 = \nu_C \epsilon_{0,C} + \nu_D \epsilon_{0,D} - \nu_A \epsilon_{0,A} - \nu_B \epsilon_{0,B}     (13.14)
Derivation of eq. 13.2 from eq. 13.1. The chemical reaction in eq. 13.1 takes place in a mixture of N_A
particles of type A, N_B particles of type B, N_C particles of type C, and N_D particles of type D. Assuming
ideal particles, the partition function of this mixture is

    Q = \frac{q_A^{N_A}}{N_A!} \cdot \frac{q_B^{N_B}}{N_B!} \cdot \frac{q_C^{N_C}}{N_C!} \cdot \frac{q_D^{N_D}}{N_D!}     (13.15)

q_A, q_B, q_C, and q_D are the single-particle partition functions. The corresponding free energy is

    A = -k_B T \ln Q
      = -k_B T \left[N_A \ln q_A + N_B \ln q_B + N_C \ln q_C + N_D \ln q_D - \ln N_A! - \ln N_B! - \ln N_C! - \ln N_D!\right]     (13.16)

The change of free energy with respect to a change of the particle numbers of one of the substances k
defines the chemical potential of this substance in this reaction

    \mu_k = \left(\frac{\partial A}{\partial N_k}\right)_{N_j,T,V} \qquad k = A, B, C, D, \; j \neq k     (13.17)
If the numbers of particles are changed in all four substances by small amounts dN_A, dN_B, dN_C, and dN_D,
the corresponding change in free energy is given as

    \Delta A = \left(\frac{\partial A}{\partial N_A}\right)_{N_B,N_C,N_D,T,V} dN_A + \left(\frac{\partial A}{\partial N_B}\right)_{N_A,N_C,N_D,T,V} dN_B
             + \left(\frac{\partial A}{\partial N_C}\right)_{N_A,N_B,N_D,T,V} dN_C + \left(\frac{\partial A}{\partial N_D}\right)_{N_A,N_B,N_C,T,V} dN_D
             = \mu_A \, dN_A + \mu_B \, dN_B + \mu_C \, dN_C + \mu_D \, dN_D     (13.18)

In a chemical reaction the relative change in the number of particles (the ratio of dN_A to dN_B etc.)
is not arbitrary but determined by the stoichiometric coefficients \nu_A, \nu_B, \nu_C, and \nu_D in eq. 13.1. That is,
if the number of particles of substance A changes by -\nu_A \cdot N, the numbers of particles in the other three
substances have to change by -\nu_B \cdot N, +\nu_C \cdot N, and +\nu_D \cdot N (forward reaction). Thus, eq. 13.18 becomes

    \Delta A = -\nu_A \mu_A - \nu_B \mu_B + \nu_C \mu_C + \nu_D \mu_D     (13.19)

In equilibrium \Delta A = 0 and hence

    \nu_A \mu_A + \nu_B \mu_B = \nu_C \mu_C + \nu_D \mu_D     (13.20)
Example: formation of hydrogen iodide. HI is formed from its elements in an exchange reaction

    H_2 + I_2 \rightleftharpoons 2HI     (13.27)

where we use the most abundant isotope forms of the molecules: ^1H_2, ^{127}I_2 and ^1H^{127}I. The electronic
ground states of H_2, I_2, and HI are not degenerate and hence

    \frac{g_{e,0,AB}^2}{g_{e,0,A_2}\, g_{e,0,B_2}} = 1     (13.28)

The characteristic rotational temperatures are \Theta_{rot,H_2} = 85.36 K, \Theta_{rot,I_2} = 0.0537 K, and \Theta_{rot,HI} = 9.246 K,
yielding

    \frac{\Theta_{rot,H_2}\, \Theta_{rot,I_2}}{\Theta_{rot,HI}^2} = \frac{85.36 \cdot 0.0537}{9.246^2} = 0.054     (13.29)
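The numerical factor in eq. 13.29 is easy to verify with the characteristic temperatures given in the text:

```python
theta_H2 = 85.36    # K, characteristic rotational temperature of H2
theta_I2 = 0.0537   # K, of I2
theta_HI = 9.246    # K, of HI

# Ratio of characteristic rotational temperatures entering the
# equilibrium constant of H2 + I2 <-> 2 HI, eq. 13.29
rot_factor = theta_H2 * theta_I2 / theta_HI**2
print(round(rot_factor, 3))   # 0.054
```

Because \Theta_{rot} \propto 1/I, this factor compares the moments of inertia of product and reactant molecules; it is one of several temperature-independent contributions to K.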
The electronic ground state energies are D_{0,H_2} = 4.4773 eV, D_{0,I_2} = 1.544 eV, and D_{0,HI} = 3.053 eV,
yielding
Note that the contributions of the vibrational states to the equilibrium constant depend on the temperature.
We here use precalculated vibrational partition functions, which contain the ground state energy.
14 THE ACTIVATED COMPLEX
    A + B \rightleftharpoons C     (14.1)

In the statistical thermodynamical theory of the activated complex (AB)^{\dagger}, this complex is treated as a
separate and independent chemical species which is in equilibrium with the substrates and products. The
reaction equation is extended

    A + B \rightleftharpoons (AB)^{\dagger} \rightarrow C     (14.2)

During the reaction, the concentration of the activated complex is very small compared to the concen-
trations of the educts and products. Thus, for the equilibrium constant for the formation of the activated
complex (first part of the reaction scheme) we have

    K_c^{\dagger} = \frac{c_{(AB)^{\dagger}}}{c_A c_B} \ll 1     (14.3)

The concentrations are again dimensionless properties c_k = N_k/v with v = V/V^0. The overall reaction
rate depends on the rate with which the complex reacts into the products

    -\frac{dc_A}{dt} = \nu_r \, c_{(AB)^{\dagger}} = \nu_r K_c^{\dagger} c_A c_B     (14.4)

Both the equilibrium constant of the activated complex K_c^{\dagger} and its decay rate \nu_r can be calculated using
statistical thermodynamics.
    D + H_2 \rightleftharpoons (D-H-H)^{\dagger} \rightarrow DH + H     (14.5)

The decay of the complex will proceed along one of its vibrational normal modes. Assuming that the three
atoms are aligned linearly in the complex, the anti-symmetric stretch vibration will lead to the decay of
the complex. The force constant of this vibration is so small that the complex will decay during the first
vibration. Thus, the decay rate of the complex is equal to the vibrational frequency of this particular mode

    \nu_r = \nu^*     (14.6)

Thus:
• Find the transition state structure of the reaction.
• Find the normal mode along which the complex will decay. The frequency associated to this mode is
the decay rate \nu_r of the complex.
Let's consider the single-particle partition function of the activated complex more closely. The vibrational
partition function is given as a product of the vibrational partition functions of all normal modes

    q_{vib}((AB)^{\dagger}) = \prod_{i=1}^{3N_{atoms}-6(5)} q_{vib}(\Theta_{vib,i})
                           = q_{vib}(\Theta_{vib,r}) \prod_{i=1,\, i \neq r}^{3N_{atoms}-6(5)} q_{vib}(\Theta_{vib,i})     (14.8)
where \Theta_{vib,i} is the characteristic vibrational temperature of the ith normal mode and r is the index of
the reactive mode. The characteristic vibrational temperature of the reactive mode is low and hence the
high-temperature approximation is appropriate

    q_{vib}(\Theta_{vib,r}) \approx \frac{k_B T}{h\nu^*}     (14.9)

We define a "truncated partition function" for the activated complex

    q = \frac{k_B T}{h\nu^*} \prod_{i=1,\, i \neq r}^{3N_{atoms}-6(5)} q_{vib}(\Theta_{vib,i}) \; q_{trans}\, q_{rot}\, q_e\, q_n
      = \frac{k_B T}{h\nu^*} \, q'_{(AB)^{\dagger}}     (14.10)

and obtain for the equilibrium constant

    K_c^{\dagger} = \frac{k_B T}{h\nu^*} \frac{q'_{(AB)^{\dagger}}/v}{(q_A/v)(q_B/v)}     (14.11)
is the activation energy. The pre-exponential factor has units of s^{-1}, i.e. it is a rate. This molecular
reaction rate is related to a molar reaction rate by

    k_{r,m} = k_r N_A = \frac{RT}{h} \frac{q''_{(AB)^{\dagger}}/v}{(q'_A/v)(q'_B/v)} \exp\left[-\frac{\Delta\epsilon_0^{\dagger}}{k_B T}\right]     (14.16)

where N_A is Avogadro's number and R is the gas constant.
D + H2 → DH + H (14.17)
The activated complex D − H − H has 3N − 5 = 4 normal modes, of which three do not lead to a decay
of the complex
The frequency of the reactive normal mode cancels in the equation for the rate constant. The characteristic
rotational temperature is \Theta_{rot} = 9.799 K. The molar rate constant is given as

    k_{r,m} = \frac{RT}{h} \cdot \frac{1}{V^0} \cdot \frac{q_{trans,C}/V}{(q_{trans,H_2}/V)(q_{trans,D}/V)} \cdot \frac{q_{rot,C}}{q_{rot,H_2}} \cdot \frac{q_{vib,C}}{q_{vib,H_2}} \cdot \frac{g_{e,0,C}}{g_{e,0,H_2}\, g_{e,0,D}} \cdot \exp\left[-\frac{\Delta\epsilon_0^{\dagger}}{k_B T}\right]
            = A \cdot \exp\left[-\frac{\Delta\epsilon_0^{\dagger}}{k_B T}\right]     (14.18)

where we used that the rotational and vibrational partition function of a single atom (i.e. D) is 1. The
degeneracy of the electronic ground state of D and of the activated complex is 2, thus

    \frac{g_{e,0,C}}{g_{e,0,H_2}\, g_{e,0,D}} = 1     (14.19)
Using the approximations from chapters 8 and 9, we obtain for the pre-exponential factor

    A = \frac{RT}{h} \frac{1}{V^0} \left(\frac{m_C}{m_{H_2} m_D} \frac{h^2}{2\pi k_B T}\right)^{3/2} \frac{I_C\, \sigma}{I_{H_2}} \, \frac{\left(1 - e^{-h\nu_s/k_B T}\right)^{-1} \left(1 - e^{-h\nu_{\delta}/k_B T}\right)^{-2}}{\left(1 - e^{-h\nu_{H_2}/k_B T}\right)^{-1}}     (14.20)
Inserting all the properties in SI units and choosing V0 = 1 cm3 = 1 · 10−6 m3 yields
is two orders of magnitude smaller because one has to take the rotational partition function of CH3 into
account.