2.2.2 The notion of a microstate

So much for a single particle. But we are interested in a system consisting of a large number of such particles, N. A microscopic description would necessitate specifying the state of each particle.

In a localised assembly of such particles, each particle has only two possible quantum states, ↑ or ↓. (By localised we mean the particles are fixed in position, like in a solid. We can label them if we like by giving a number to each position. In this way we can tell them apart; they are distinguishable.) In a magnetic field the energy of each particle has only two possible values. (This is an example of a two-level system.) You can't get much simpler than that!

Now we have set up the system, let's explain what we mean by a microstate and enumerate them. First consider the system in zero field. Then the states ↑ and ↓ have the same energy. We specify a microstate by giving the quantum state of each particle, whether it is ↑ or ↓. For N = 10 spins, typical microstates are

↑↑↑↑↑↑↑↑↑↑
↑↓↑↑↓↑↓↑↑↓
↓↑↑↓↓↑↓↑↓↓
That's it!

2.2.3 Counting the microstates

What is the total number of such microstates accessible to the system? This is called Ω. Well, for each particle the spin can be ↑ or ↓ (there is no restriction on this in zero field). Two possibilities for each particle gives 2^10 = 1024 arrangements for N = 10 (we have merely given three examples above). For N particles, Ω = 2^N.

2.2.4 Distribution of particles among states

This list of possible microstates is far too detailed to handle. What's more, we don't need all this detail to calculate the properties of the system. For example the total magnetic moment of all the particles is

M = (N↑ − N↓) μ;

it just depends on the relative number of up and down moments, not on the detail of which are up or down. So we collect together all those microstates with the same number of up moments and down moments. Since the relative number of ups and downs is constrained by N↑ + N↓ = N (the moments must be either up or down), such a state can be characterised by a single number:

m = N↑ − N↓.

This number m tells us the distribution of the N particles among the possible states:

N↑ = (N + m)/2,    N↓ = (N − m)/2.

Now we ask the question: how many microstates are there consistent with a given distribution (a given value of m, i.e. given values of N↑ and N↓)? Call this t(m). (This is Guenault's notation.)

Look at N = 10. For m = 10 (all spins up) and m = −10 (all spins down), that's easy: t = 1. Now for m = 8 (one moment down), t(m = 8) = 10; there are ten ways of reversing one moment.
The general result is

t(m) = N! / (N↑! N↓!).

This may be familiar to you as a binomial coefficient. It is the number of ways of dividing N objects into one set of N↑ identical objects and another set of N↓ identical objects (red balls and blue balls, or tossing an unbiased coin and getting heads or tails). In terms of m the function t(m) may be written

t(m) = N! / { [(N − m)/2]! [(N + m)/2]! }.

If you plot the function t(m) it is peaked at m = 0 (N↑ = N↓ = N/2). Here we illustrate that for N = 10. The relative sharpness of the distribution increases with the size of the system N, since the standard deviation of m grows only as √N while the full range of m grows as N.
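As a quick check of this counting, here is a minimal Python sketch (the helper name t and the choice N = 10 are ours):

```python
from math import comb

def t(N, m):
    # number of microstates with N_up - N_down = m, i.e. N!/(N_up! N_down!)
    if (N + m) % 2:
        return 0              # m must have the same parity as N
    return comb(N, (N + m) // 2)

N = 10
print([t(N, m) for m in range(-N, N + 1, 2)])
# -> [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
print(sum(t(N, m) for m in range(-N, N + 1, 2)))   # 2**10 = 1024
```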
[Figure: plot of the function t(m) for 10 spins, against m from −10 to 10. The values are t(0) = 252, t(±2) = 210, t(±4) = 120, t(±6) = 45, t(±8) = 10, t(±10) = 1.]
When N = 100 the function is already much narrower:

[Figure: plot of the function t(m) for 100 spins, peaked at m = 0 with t(0) = C(100, 50) ≈ 1.0 × 10^29 and negligible beyond |m| ≈ 50.]
When N is large, t(m) approaches the normal (Gaussian) function

t(m) ≈ 2^N √(2/πN) exp(−m²/2N).

This may be shown using Stirling's approximation (Guenault, Appendix 2). Note that the RMS width of the function is √N.
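The quality of this approximation is easy to inspect numerically; a minimal sketch (our choice of test values):

```python
from math import comb, sqrt, pi, exp

N = 100
for m in (0, 10, 20, 30):
    exact = comb(N, (N + m) // 2)
    gauss = 2**N * sqrt(2 / (pi * N)) * exp(-m**2 / (2 * N))
    print(m, exact, f"{gauss:.4g}", f"ratio = {gauss/exact:.3f}")
# near the peak the Gaussian agrees with the exact binomial to a fraction of a percent
```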
2.2.5 The average distribution and the most probable distribution

The physical significance of this result derives from the fundamental assumption of statistical physics that each of these microstates is equally likely. It follows that t(m) is the statistical weight of the distribution m (recall m determines N↑ and N↓), that is, the relative probability of that distribution occurring. Hence we can work out the average distribution; in this example this is just the average value of m. The probability of a particular value of m is t(m)/Ω, i.e. the number of microstates with that value of m divided by the total number of microstates, Ω = Σₘ t(m). So the average value of m is

⟨m⟩ = Σₘ m t(m) / Ω.

In this example, because the distribution function t(m) is symmetrical, it is clear that ⟨m⟩ = 0. Also, the value of m for which t(m) is maximum is m = 0.
So on average, for a system of spins in zero field, m = 0: there are equal numbers of up and down spins. This average value is also the most probable value. For large N the function t(m) is very strongly peaked at m = 0; there are far more microstates with m = 0 than with any other value. If we observe a system of spins as a function of time, then for most of the time we will find m to be at or near m = 0. The observed m will fluctuate about m = 0, and the (relative) extent of these fluctuations decreases with N. States far away from m = 0, such as m = N (all spins up), are highly unlikely; the probability of observing that state is 1/Ω = 1/2^N, since there is only one such state out of a total of 2^N.

(Note: according to the Gibbs method of ensembles we represent such a system by an ensemble of systems, each in a definite microstate, with one for every microstate. The thermodynamic properties are obtained by an average over the ensemble. The equivalence of the ensemble average to the time average for the system is a subtle point and is the subject of the ergodic hypothesis.)

It is worthwhile to note that the model treated here is applicable to a range of different phenomena. For example one can consider the number of particles in the two halves of a container, and examine the density fluctuations. Each particle can be on either side of the imaginary division, so the distribution of density in either half would follow the same distribution as derived above.

______________________________________________________________
End of lecture 4
…generalised entropy for systems not in equilibrium, but let's not complicate the issue). Entropy was defined by Boltzmann and Planck as

S = k ln Ω,

where Ω is the total number of microstates accessible to a system. Thus for our example of spins, Ω = 2^N so that S = Nk ln 2. Here k is Boltzmann's constant.

2.3.2 The second law of thermodynamics

The second law of thermodynamics is the law of increasing entropy: during any real process the entropy of an isolated system always increases, and in the state of equilibrium the entropy attains its maximum value.

This law may be illustrated by asking what happens when a constraint is removed on an isolated composite system. Will the number of microstates of the final equilibrium state be smaller, the same, or bigger? We expect the system to evolve towards a more probable state. Hence the number of accessible microstates of the final state must be greater than or the same as the initial number, and the entropy increases or stays the same. A nice simple example is given by Guenault on pp. 12-13; the mixing of two different gases is another.

For a composite system, Ω = Ω₁Ω₂. If the two systems are allowed to exchange energy, then in the final equilibrium state the total number of accessible microstates is always greater. The macroscopic description of this is that heat flows until the temperature of the two systems is the same. This leads us, in the next section, to a definition of statistical temperature as

1/T = ∂S/∂E |V,N.
2.3.3 Thermal interaction between systems and temperature

Consider two isolated systems of given volume, number of particles, and total energy. When separated and in equilibrium they will individually have

Ω₁ = Ω₁(E₁, V₁, N₁)   and   Ω₂ = Ω₂(E₂, V₂, N₂)

microstates, respectively.
Now suppose the two systems are brought into contact through a fixed diathermal wall, so they can now exchange energy.

[Figure: two boxes, (E₁, V₁, N₁) and (E₂, V₂, N₂), separated by a fixed diathermal wall. Caption: thermal interaction.]

The composite system is isolated, so its total energy is constant. So while the systems exchange energy (E₁ and E₂ can vary) we must keep

E₁ + E₂ = E₀ = const.

And since V₁, N₁, V₂, N₂ all
remain fixed, they can be ignored (and kept constant in any differentiation).

Our problem is this: after the two systems are brought together, what will be the equilibrium state? We know that they will exchange energy, and they will do this so as to maximise the total number of microstates for the composite system. Writing

Ω = Ω₁(E) Ω₂(E₀ − E),

the systems will exchange energy so as to maximise Ω. The maximum is where

dΩ/dE = (∂Ω₁/∂E₁) Ω₂ − Ω₁ (∂Ω₂/∂E₂) = 0

or

(1/Ω₁) ∂Ω₁/∂E₁ = (1/Ω₂) ∂Ω₂/∂E₂

or

∂ ln Ω₁/∂E₁ = ∂ ln Ω₂/∂E₂.

But from the definition of entropy, S = k ln Ω, this means the equilibrium state is characterised by

∂S₁/∂E₁ = ∂S₂/∂E₂.
In other words, when the systems have reached equilibrium, the quantity ∂S/∂E of system 1 is equal to ∂S/∂E of system 2. Since we have defined temperature to be that quantity which is the same for two systems in thermal equilibrium, it is clear that ∂S/∂E (or ∂ ln Ω/∂E) must be related (somehow) to the temperature. We define statistical temperature as

1/T = ∂S/∂E |V,N

(recall that V and N are kept constant in the process). With this definition it turns out that statistical temperature is identical to absolute temperature.
It follows from this definition and the law of increasing entropy that heat must flow from high temperature to low temperature. (This is actually the Clausius statement of the second law of thermodynamics; we will discuss the various formulations of this law a bit later on.)

Let us prove this. For a composite system Ω = Ω₁Ω₂, so that S = S₁ + S₂, as we have seen. According to the law of increasing entropy ΔS ≥ 0, so that

ΔS = (∂S₁/∂E₁ − ∂S₂/∂E₂) ΔE₁ ≥ 0

or, using our definition of statistical temperature,

ΔS = (1/T₁ − 1/T₂) ΔE₁ ≥ 0.

If T₂ < T₁ then 1/T₁ − 1/T₂ < 0, so ΔE₁ must be negative: E₁ decreases. Energy flows from systems at higher temperatures to systems at lower temperatures.

______________________________________________________________
End of lecture 5
In a magnetic field B the two spin states no longer have the same energy; their energies are ε↑ = −μB and ε↓ = +μB.

Now for an isolated system, with total energy E, number N, and volume V fixed, it turns out that N↑ and N↓ are uniquely determined (since the energy depends on N↑ − N↓). Thus m = N↑ − N↓ is fixed, and therefore it exhibits no fluctuations (as it did in zero field about the average value m = 0). This follows since we must have both

E = −(N↑ − N↓) μB,
N = N↑ + N↓.

These two equations are solved to give

N↑ = (N − E/μB)/2,
N↓ = (N + E/μB)/2.
All microstates have the same distribution of particles between the two energy levels: N↑ and N↓. But there are still things we would like to know about this system; in particular we know nothing about the temperature. We can approach this from the entropy in our now-familiar way. The total number of microstates is given by

Ω = N! / (N↑! N↓!).

The entropy follows from S = k ln Ω:

S = k {ln N! − ln N↑! − ln N↓!}.

Now we use Stirling's approximation (Guenault Appendix 2), which says

ln x! ≈ x ln x − x;

this is important; you should remember this.
Hence

S = k {N ln N − N↑ ln N↑ − N↓ ln N↓},

where we have used N↑ + N↓ = N. Into this we substitute for N↑ and N↓, since we need S in terms of E for differentiation to get the temperature:

S = k { N ln N − ½(N − E/μB) ln[½(N − E/μB)] − ½(N + E/μB) ln[½(N + E/μB)] }.

Now we use the definition of statistical temperature, 1/T = ∂S/∂E |N,V, to obtain the temperature:

1/T = (k/2μB) ln[ (N − E/μB) / (N + E/μB) ].

(You can check this differentiation with Maple or Mathematica if you wish!)
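The notes suggest checking the differentiation with Maple or Mathematica; SymPy does the job too. A minimal sketch (symbol names are ours):

```python
import sympy as sp

N, E, mu, B, k = sp.symbols('N E mu B k', positive=True)

Nup = (N - E/(mu*B)) / 2      # populations expressed through the total energy E
Ndn = (N + E/(mu*B)) / 2

# entropy in Stirling form: S = k (N ln N - Nup ln Nup - Ndn ln Ndn)
S = k * (N*sp.log(N) - Nup*sp.log(Nup) - Ndn*sp.log(Ndn))

invT = sp.diff(S, E)          # 1/T = dS/dE at fixed N, V
target = (k/(2*mu*B)) * sp.log((N - E/(mu*B)) / (N + E/(mu*B)))

# spot-check: the difference evaluates to zero (to machine precision)
print(sp.N((invT - target).subs({N: 10, E: -4, mu: 1, B: 2, k: 1})))
```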
Recalling the expressions for N↑ and N↓, the temperature expression can be written

1/T = (k/2μB) ln(N↑/N↓),

which can be inverted to give the ratio of the populations as

N↑/N↓ = exp(2μB/kT).

Or, since N↑ + N↓ = N, we find

N↑/N = exp(μB/kT) / [exp(μB/kT) + exp(−μB/kT)],
N↓/N = exp(−μB/kT) / [exp(μB/kT) + exp(−μB/kT)].
[Figure: temperature variation of the up and down populations, N↑/N and N↓/N, plotted against kT/μB from 0 to 16. N↑/N falls from 1 towards 1/2; N↓/N rises from 0 towards 1/2.]
This is our final result. It tells us the fraction of particles in each of the two energy states as a function of temperature: it is the distribution function. On inspection you see that it can be written rather concisely as

n(ε) = (N/z) exp(−ε/kT),
where the quantity

z = exp(−ε↑/kT) + exp(−ε↓/kT) = exp(μB/kT) + exp(−μB/kT)

is called the (single-particle) partition function. It is the sum over the possible states of the factor exp(−ε/kT). This distribution among energy levels is an example of the Boltzmann distribution. It applies generally (to distinguishable particles) where there is a whole set of possible energies available, not merely the two we have considered thus far; that problem we treat in the next section. In general

z = Σ_states exp(−ε/kT),
where the sum is taken over all the possible energies of a single particle. In the present example there are only two energy levels.

In the general case n(ε) is the average number of particles in the state of energy ε. (In the present special case of only two levels, n(ε↑) and n(ε↓) are uniquely determined.) To remind you, an example of counting the microstates and evaluating the average distribution for a system of a few particles with a large number of available energies is given in Guenault, chapter 1. The fluctuations in n(ε) about this average value are small for a sufficiently large number of particles (the 1/√N factor).

2.4.2 Magnetisation of the S = 1/2 magnet: Curie's law

We can now obtain an expression for the magnetisation of the spin-1/2 paramagnet in terms of temperature and applied magnetic field. The magnetisation (total magnetic moment) is given by

M = (N↑ − N↓) μ.

We have expressions for N↑ and N↓, giving

M = Nμ [exp(μB/kT) − exp(−μB/kT)] / [exp(μB/kT) + exp(−μB/kT)]

or, since tanh x = (e^x − e^−x)/(e^x + e^−x),

M = Nμ tanh(μB/kT).
[Figure: M/Nμ against μB/kT, rising linearly from the origin and saturating at M/Nμ = 1 (saturation).]
The general dependence of the magnetisation on magnetic field is nonlinear. At low fields the magnetisation starts linearly, but at higher fields it saturates as all the moments become aligned. The low-field, linear behaviour may be found by expanding the tanh:

M ≈ Nμ²B/kT.

The magnetisation thus has the general form

M = C B/T,

proportional to B and inversely proportional to T. This behaviour is referred to as Curie's law, and the constant C = Nμ²/k is called the Curie constant.
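A short numerical illustration of the crossover from the Curie-law regime to saturation (a sketch; working in the reduced variable x = μB/kT is our choice):

```python
import numpy as np

# reduced magnetisation M/(N mu) as a function of x = mu*B/(k*T)
x = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 5.0])
exact = np.tanh(x)            # M/(N mu) = tanh(mu B / kT)
curie = x                     # low-field Curie law: M/(N mu) ~ mu B / kT

for xi, me, mc in zip(x, exact, curie):
    print(f"muB/kT = {xi:5.2f}   exact = {me:.4f}   Curie = {mc:.4f}")
# agreement for muB/kT << 1; the exact curve saturates at 1 for muB/kT >> 1
```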
2.4.3 Entropy of the S = 1/2 magnet

We can see immediately that for the spin system the entropy is a function of the magnetic field. In particular, at large B/T all the moments are parallel to the field; there is then only one possible microstate and the entropy is zero. But it is quite easy to determine the entropy at all B/T from S = k ln Ω. As we have seen already,

S/k = N ln N − N↑ ln N↑ − N↓ ln N↓.

Since N↑ + N↓ = N this may be rewritten as

S/k = N↑ ln(N/N↑) + N↓ ln(N/N↓),

and substituting for N↑ and N↓ we then get

S = Nk { ln[exp(−2μB/kT) + 1] / [exp(−2μB/kT) + 1] + ln[exp(2μB/kT) + 1] / [exp(2μB/kT) + 1] }.
[Figure: entropy S against temperature T, rising from S = 0 at T = 0 to the asymptote Nk ln 2; the curve for lower B rises more steeply than that for higher B.]
1  At low temperatures, kT ≪ 2μB. Then the first term is negligible and the 1s may be ignored. In this case S ≈ Nk (2μB/kT) exp(−2μB/kT). Clearly S → 0 as T → 0, in agreement with our earlier argument.

2  At high temperatures, kT ≫ 2μB. Then both terms are equal and we find S = Nk ln 2: there are equal numbers of up and down spins, i.e. maximum disorder.

3  The entropy is a function of B/T, and increasing B at constant T reduces S: the spins tend to line up and the system becomes more ordered.

______________________________________________________________
End of lecture 6
[Figure: a Gibbs ensemble of systems used to calculate {nᵢ}.]

There are many distributions {nᵢ} which are possible for this ensemble. In particular, we know that because the whole ensemble is isolated, the number of elements and the total energy are fixed. In other words, any distribution {nᵢ} must satisfy the requirements
Σᵢ nᵢ = N,    Σᵢ nᵢ εᵢ = E.
By analogy with the case of the spin-1/2 paramagnet, we will denote the number of microstates of the ensemble corresponding to a given distribution {nᵢ} by t({nᵢ}). Then the most likely distribution is that for which t({nᵢ}) is maximised.

2.5.2 Most likely distribution

The value of t({nᵢ}) is given by

t({nᵢ}) = N! / Πᵢ nᵢ!,

since there are N elements all together and there are nᵢ in the ith state. So the most probable distribution is found by varying the various nᵢ to give the maximum value for t. It is actually more convenient (and mathematically equivalent) to maximise the logarithm of t.
The maximum in ln t corresponds to the place where its differential is zero:

d ln t({nᵢ}) = (∂ln t/∂n₁) dn₁ + (∂ln t/∂n₂) dn₂ + ⋯ = Σᵢ (∂ln t/∂nᵢ) dnᵢ = 0.
If the nᵢ could be varied independently then we would have the following set of equations:

∂ln t({nᵢ})/∂nᵢ = 0,    i = 1, 2, 3, …
Unfortunately the nᵢ cannot all be varied independently, because all possible distributions {nᵢ} must satisfy the two requirements

Σᵢ nᵢ = N,    Σᵢ nᵢ εᵢ = E;
there are two constraints on the distribution {nᵢ}. These may be incorporated in the following way. Since the constraints imply that

d Σᵢ nᵢ = 0   and   d Σᵢ nᵢ εᵢ = 0,

we lose nothing by considering, instead of ln t alone, the function

ln t({nᵢ}) − α Σᵢ nᵢ − β Σᵢ nᵢ εᵢ,

where α and β are, at this stage, undetermined. This gives us two extra degrees of freedom, so that we can maximise this by varying all the nᵢ independently. In other words, the maximum in ln t is also
specified by

d [ ln t({nᵢ}) − α Σᵢ nᵢ − β Σᵢ nᵢ εᵢ ] = 0.
Now we have recovered the two lost degrees of freedom, so that the nᵢ can be varied independently. But then the multipliers α and β are no longer free variables; their values must be found from the constraints fixing N and E. (This is called the method of Lagrange's undetermined multipliers.) The maximisation can now be performed by setting the partial derivatives to zero:
∂/∂nⱼ [ ln t({nᵢ}) − α Σᵢ nᵢ − β Σᵢ nᵢ εᵢ ] = 0,    j = 1, 2, 3, …

To differentiate ln t we write it, using Stirling's approximation, as

ln t({nᵢ}) = (Σᵢ nᵢ) ln(Σᵢ nᵢ) − Σᵢ nᵢ ln nᵢ

(recall Σᵢ nᵢ = N), so that

∂ln t/∂nⱼ = ln N − ln nⱼ.

The maximisation condition then gives

ln nⱼ = ln N − α − β εⱼ,

or

nⱼ = N e^(−α) e^(−βεⱼ).
These are the values of nⱼ which maximise t subject to N and E being fixed; this gives us the most probable distribution {nᵢ}. The probability that a system will be in the state j is then found by dividing by N:

pⱼ = nⱼ/N = e^(−α) e^(−βεⱼ).
So the remaining step is to find the constants α and β.

2.5.3 What are α and β?

For the present we shall sidestep the question of α by appealing to the normalisation of the probability distribution. Since we must have
Σⱼ pⱼ = 1,

it follows that we can express the probabilities as

pⱼ = e^(−βεⱼ) / Z,

where the normalisation constant Z is given by

Z = Σⱼ e^(−βεⱼ).
The constant Z will turn out to be of central importance; from it all thermodynamic properties can be found. It is called the partition function, and it is given the symbol Z because of its name in German: Zustandssumme (sum over states).
But what is this β? For the spin-1/2 paramagnet we had many similar expressions with 1/kT where here we have β. We shall now show that this identification is correct. We approach it from our definition of temperature:

1/T = ∂S/∂E |V,N.

Now the fundamental expression for entropy is S = k ln Ω, where Ω is the total number of microstates of the ensemble. This is a little difficult to obtain. However we know t({nᵢ}), the number of microstates corresponding to a given distribution {nᵢ}, and this t is a very sharply peaked function. Therefore we make negligible error in approximating Ω by the value of t corresponding to the most probable distribution. In other words, we take

S = k ln [ N! / Πᵢ nᵢ! ] = k {N ln N − Σᵢ nᵢ ln nᵢ},

where now

nᵢ = N e^(−α) e^(−βεᵢ),   so that   ln nᵢ = ln N − α − β εᵢ.

Substituting this in,

S = k {N ln N − Σᵢ nᵢ (ln N − α − β εᵢ)} = k {αN + βE}.

Then

1/T = ∂S/∂E |V,N = kβ,

so that

β = 1/kT,

as we asserted.
The probability of a (non-isolated) system being found in the ith microstate is then

pᵢ = e^(−βεᵢ)/Z,   or   pᵢ = e^(−εᵢ/kT)/Z.

This is known as the Boltzmann distribution or the Boltzmann factor. It is a key result. Feynman says:

"This fundamental law is the summit of statistical mechanics, and the entire subject is either a slide-down from the summit, as the principle is applied to various cases, or the climb-up to where the fundamental law is derived and the concepts of thermal equilibrium and temperature clarified."
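As a concrete illustration of the Boltzmann distribution, here is a minimal sketch (the three-level system and its energies are invented for the example):

```python
import numpy as np

kT = 1.0                                # temperature in energy units
eps = np.array([0.0, 0.5, 2.0])         # energies of three hypothetical states

weights = np.exp(-eps / kT)             # Boltzmann factors exp(-eps_i/kT)
Z = weights.sum()                       # partition function Z = sum_i exp(-eps_i/kT)
p = weights / Z                         # probabilities p_i = exp(-eps_i/kT)/Z

print("Z =", Z)
print("p =", p, "  sum =", p.sum())     # normalised to 1
print("<E> =", (p * eps).sum())         # mean energy
```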
2.5.4 Link between the partition function and thermodynamic variables

All the important thermodynamic variables for a system can be derived from Z and its derivatives. We can see this from the expression for entropy. We have

S = k {N ln N − Σᵢ nᵢ ln nᵢ}.

And since e^(−α) = 1/Z, we have

nᵢ = (N/Z) e^(−εᵢ/kT),   i.e.   ln nᵢ = ln N − ln Z − εᵢ/kT,

so that

S = k {N ln N − N ln N + N ln Z + E/kT} = Nk ln Z + E/T.

Here both S and E refer to the ensemble of N elements. So for the entropy and internal energy of a single system,

S = k ln Z + E/T,

which, upon rearrangement, can be written

E − TS = −kT ln Z.

Now the thermodynamic function F = E − TS is known as the Helmholtz free energy, or the Helmholtz potential. We then have the memorable result

F = −kT ln Z.
2.5.5 Finding thermodynamic variables

A host of thermodynamic variables can be obtained from the partition function. This is seen from the differential of the free energy. Since

dE = T dS − p dV,

it follows that

dF = −S dT − p dV.

We can then identify the various partial derivatives:

S = −∂F/∂T |V = kT ∂ln Z/∂T |V + k ln Z,

p = −∂F/∂V |T = kT ∂ln Z/∂V |T.

Since E = F + TS we can then express the internal energy as

E = kT² ∂ln Z/∂T |V.

Thus we see that once the partition function is evaluated by summing over the states, all relevant thermodynamic variables can be obtained by differentiating Z.
It is instructive to examine this expression for the internal energy further. This will also confirm the identification of this function of state as the actual energy content of the system. If pⱼ is the probability of the system being in the eigenstate corresponding to energy εⱼ, then the mean energy of the system may be expressed as

E = Σⱼ εⱼ pⱼ = (1/Z) Σⱼ εⱼ e^(−βεⱼ),

where Z is the previously-defined partition function, and it is convenient here to work in terms of β rather than converting to 1/kT. In examining the sum we note that

εⱼ e^(−βεⱼ) = −(∂/∂β) e^(−βεⱼ),

so that the expression for E may be written, after interchanging the differentiation and the summation,

E = −(1/Z) (∂/∂β) Σⱼ e^(−βεⱼ) = −(1/Z) ∂Z/∂β = −∂ln Z/∂β.

And since β = 1/kT, this is equivalent to our previous expression

E = kT² ∂ln Z/∂T |V;

however the mathematical manipulations are often more convenient in terms of β.
2.5.6 Summary of methodology

We have seen that the partition function provides a means of calculating all thermodynamic information from the microscopic description of a system. We summarise this procedure as follows:

1  Write down the possible energies for the system.
2  Evaluate the partition function for the system.
3  The Helmholtz free energy then follows from this.
4  All thermodynamic variables follow from differentiation of the Helmholtz free energy.
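Here is a minimal numerical sketch of this four-step recipe (the function names, the numerical differentiation, and the two-level test system are ours):

```python
import numpy as np

k = 1.0                                  # Boltzmann constant in our units

def lnZ(T, energies):
    # steps 1-2: partition function from the list of single-system energies
    return np.log(np.sum(np.exp(-np.asarray(energies) / (k * T))))

def F(T, energies):
    # step 3: Helmholtz free energy, F = -kT ln Z
    return -k * T * lnZ(T, energies)

def S(T, energies, dT=1e-6):
    # step 4: entropy from S = -dF/dT, by central difference
    return -(F(T + dT, energies) - F(T - dT, energies)) / (2 * dT)

levels = [-1.0, +1.0]                    # two-level system, e.g. spin 1/2 with mu*B = 1
for T in (0.1, 1.0, 10.0):
    print(T, F(T, levels), S(T, levels))
# S -> 0 at low T and -> k ln 2 ~ 0.693 at high T, as found for the spin system
```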
Since the partition function for a collection of distinguishable systems is the product of the partition functions of the individual systems, if we have an assembly of N localised identical particles, and if the partition function for a single such particle is z, then the partition function for the assembly is Z = z^N. It then follows that the Helmholtz free energy for the assembly is

F = −kT ln Z = −NkT ln z;

in other words the free energy is N times the free energy contribution of a single particle: the free energy is an extensive quantity, as expected.

This allows an important simplification of the general methodology outlined above. For localised systems we need only consider the energy levels of a single particle. We evaluate the partition function z of a single particle and then use the relation F = −NkT ln z to find the thermodynamic properties of the system. We will consider two examples of this, one familiar and one new.

______________________________________________________________
End of lecture 8

2.6.2 Using the partition function I: the S = 1/2 paramagnet (again)

Step 1) Write down the possible energies for the system

The assembly of magnetic moments is placed in a magnetic field B. Each spin has two quantum states, which we label ↑ and ↓. The two energy levels are then

ε↑ = −μB,   ε↓ = +μB.
Step 2) Evaluate the partition function for the system

We require the partition function for a single spin. This is

z = Σᵢ e^(−εᵢ/kT) = e^(μB/kT) + e^(−μB/kT).

This time we shall obtain the results in terms of hyperbolic functions rather than exponentials, for variety. The partition function is expressed as the hyperbolic cosine:

z = 2 cosh(μB/kT).
Step 3) The Helmholtz free energy then follows from this

Here we use the relation

F = −NkT ln z = −NkT ln{2 cosh(μB/kT)}.
Step 4) All thermodynamic variables follow from differentiation of the Helmholtz free energy

Before proceeding with this step we must pause to consider the performance of magnetic work. Here we don't have pressure and volume as our work variables; we have magnetisation M and magnetic field B. The expression for magnetic work is

đW = −M dB,

so comparing this with our familiar −p dV, we see that when dealing with magnetic systems we must make the identification

p → M,   V → B.

The internal energy differential of a magnetic system is then

dE = T dS − M dB,

and, of more immediate importance, the Helmholtz free energy F = E − TS obeys

dF = −S dT − M dB.

We can then identify the various partial derivatives:

S = −∂F/∂T |B = NkT ∂ln z/∂T |B + Nk ln z,

M = −∂F/∂B |T = NkT ∂ln z/∂B |T.
Upon differentiation we then obtain

S = −(NμB/T) tanh(μB/kT) + Nk ln{2 cosh(μB/kT)}

and

M = Nμ tanh(μB/kT).

The internal energy is E = F + TS:

E = −NkT ln{2 cosh(μB/kT)} − NμB tanh(μB/kT) + NkT ln{2 cosh(μB/kT)}
  = −NμB tanh(μB/kT)
  = −MB,

as expected.
By differentiating the internal energy with respect to temperature we obtain the thermal capacity at constant field:

C_B = ∂E/∂T |B = Nk (μB/kT)² sech²(μB/kT).
Some of these expressions were derived before, from the microcanonical approach; you should check that the exponential expressions there are equivalent to the hyperbolic expressions here. We plot the magnetisation as a function of inverse temperature. Recall that at high temperatures we have a linear region, where Curie's law holds. At low temperatures the magnetisation saturates as all the moments become aligned along the applied field.

Incidentally, we note that the expression

M = Nμ tanh(μB/kT)

is the equation of state for the paramagnet. This is a nonlinear equation. But recall that we found linear behaviour, M ≈ Nμ²B/kT, in the high T/B region, just as it is in the high-temperature region that the ideal gas equation of state p = NkT/V is valid.
[Figure: magnetisation of the spin-1/2 paramagnet against inverse temperature, M/Nμ plotted against μB/kT, linear at first and saturating at 1.]
Next we plot the entropy, internal energy and thermal capacity as functions of temperature. Observe that the entropy and the thermal capacity both go to zero as T → 0, while the internal energy approaches its ground-state value −NμB. At low temperatures the entropy goes to zero, as expected, with the limiting form

S ≈ 2Nk (μB/kT) e^(−2μB/kT),   T → 0,

and at high temperatures the entropy goes to the classical two-state value of k ln 2 per particle:

S → Nk ln 2,   kT ≫ μB.
The thermal capacity is particularly important as it is readily measurable. It exhibits a maximum of C_B ≈ 0.44Nk at a temperature of T ≈ 0.83 μB/k. This is known as a Schottky anomaly. Ordinarily the thermal capacity of a substance increases with temperature, saturating at a constant value at high temperatures. Spin systems are unusual in that the energy states of a spin are finite in number and therefore bounded from above. The system then has a maximum entropy. As the entropy increases towards this maximum value it becomes increasingly difficult to pump in more heat energy.
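The position and height of this maximum are easy to verify numerically from C_B = Nk (μB/kT)² sech²(μB/kT); a minimal sketch in the reduced variable x = μB/kT:

```python
import numpy as np

x = np.linspace(0.01, 5.0, 100001)      # x = mu*B/(k*T)
cb = x**2 / np.cosh(x)**2               # C_B/(Nk) = x^2 sech^2 x

i = np.argmax(cb)
print("max C_B/Nk =", cb[i])            # ~ 0.44
print("at x =", x[i])                   # x ~ 1.20, i.e. T ~ 0.83 muB/k
```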
At both high and low temperatures the thermal capacity goes to zero, as may be seen by expanding the expression for C_B:

C_B ≈ Nk (μB/kT)²,   kT ≫ μB;

C_B ≈ 4Nk (μB/kT)² e^(−2μB/kT),   kT ≪ μB.
[Figure: entropy (rising to the asymptote Nk ln 2, with curves for lower B rising faster), internal energy, and thermal capacity of the spin-1/2 paramagnet plotted against kT/μB; the thermal capacity shows the Schottky peak of ≈ 0.44Nk near kT/μB ≈ 0.83.]

______________________________________________________________
End of lecture 9
2.6.3 Using the partition function II: the Einstein model of a solid

One of the challenges faced by Einstein was the explanation of why the thermal capacity of solids tends to zero at low temperatures. He was concerned with nonmagnetic insulators, and he had an inkling that the explanation was something to do with quantum mechanics. The thermal excitations in the solid are due to the vibrations of the atoms. Einstein constructed a simple model of this which was partially successful in explaining the thermal capacity.

In Einstein's model each atom of the solid is regarded as a simple harmonic oscillator vibrating in the potential energy minimum produced by its neighbours. Each atom sees a similar potential, so they all oscillate at the same frequency; let us call this ω/2π. And since each atom can vibrate in three independent directions, the solid of N atoms is modelled as a collection of 3N identical harmonic oscillators. We shall follow the procedure outlined above.

Step 1) Write down the possible energies for the system

The energy states of the harmonic oscillator form an infinite ladder of equally spaced levels; you should be familiar with this from your quantum mechanics course.
[Figure: ladder of equally spaced levels ε/ħω = 1/2, 3/2, 5/2, 7/2, 9/2, … labelled j = 0, 1, 2, 3, 4.]

εⱼ = (j + ½) ħω,    j = 0, 1, 2, 3, …
Step 2) Evaluate the partition function for the system

We require the partition function for a single harmonic oscillator. This is

z = Σ_{j=0}^∞ e^(−εⱼ/kT) = Σ_{j=0}^∞ exp{ −(j + ½) ħω/kT }.

It is convenient to introduce the characteristic temperature θ = ħω/k, so that

z = Σ_{j=0}^∞ exp{ −(j + ½) θ/T } = e^(−θ/2T) Σ_{j=0}^∞ ( e^(−θ/T) )^j,
and we observe the sum here to be a (convergent) geometric progression. You should recall the result

Σ_{n=0}^∞ x^n = 1 + x + x² + x³ + ⋯ = 1/(1 − x),   |x| < 1.
If you don't remember this result, then multiply the power series by 1 − x. The harmonic oscillator partition function is then given by

z = e^(−θ/2T) / (1 − e^(−θ/T)).
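A quick numerical confirmation of this closed form (a sketch; truncating the sum at 1000 terms is ample here):

```python
import numpy as np

r = 1.5                                         # theta/T, arbitrary test value

direct = sum(np.exp(-(j + 0.5) * r) for j in range(1000))
closed = np.exp(-r / 2) / (1 - np.exp(-r))
print(direct, closed)                           # the two agree
```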
Step 3) The Helmholtz free energy then follows from this

Here we use the relation

F = −3NkT ln z = (3/2) Nkθ + 3NkT ln{1 − e^(−θ/T)}.
Step 4) All thermodynamic variables follow from differentiation of the Helmholtz free energy

Here we have no explicit volume dependence, indicating that the equilibrium solid is at zero pressure. If an external pressure were applied, then the vibration frequency ω/2π would be a function of the interparticle spacing or, equivalently, of the volume. This would give a volume dependence to the free energy, from which the pressure could be found. However we shall ignore this. The thermodynamic variables we are interested in are then entropy, internal energy and thermal capacity.

The entropy is

S = −∂F/∂T |V = 3Nk ln[ e^(θ/T) / (e^(θ/T) − 1) ] + 3Nk (θ/T) / (e^(θ/T) − 1).
In the low-temperature limit S goes to zero, as one would expect. The limiting low-temperature behaviour is

S ≈ 3Nk (θ/T) e^(−θ/T),   T → 0.

At high temperatures the entropy tends towards a logarithmic increase,

S ≈ 3Nk ln(T/θ),   T ≫ θ.
The internal energy is found from

E = −3N ∂ln z/∂β,

where we write ln z as

ln z = −βħω/2 − ln{1 − e^(−βħω)}.

Then on differentiation we find

E = 3N { ħω/2 + ħω/(e^(βħω) − 1) }.

The first term represents the contribution to the internal energy from the zero-point oscillations; it is present even at T = 0.
At high temperatures the variation of the internal energy is

E ≈ 3NkT { 1 + (1/12)(θ/T)² }.

The first term is the classical equipartition part, which is independent of the vibration frequency.

The thermal capacity follows from differentiating the internal energy:

C_V = ∂E/∂T |V = 3Nk (θ/T)² e^(θ/T) / (e^(θ/T) − 1)².

At high temperatures the variation of the thermal capacity is

C_V ≈ 3Nk − (Nk/4)(θ/T)²,

where the first term is the constant equipartition value. At low temperatures we find

C_V ≈ 3Nk (θ/T)² e^(−θ/T).
We see that this model does indeed predict the decrease of C_V to zero as T → 0, which was Einstein's challenge. The decrease in heat capacity below the classical temperature-independent value is seen to arise from the quantisation of the energy levels. However, the observed low-temperature behaviour of such thermal capacities is a simple T³ variation rather than the more complicated variation predicted by this model. The explanation is that the atoms do not all oscillate independently: the coupling of their motion leads to normal-mode waves propagating in the solid with a continuous range of frequencies. This was explained by the Debye model of solids.

The Einstein model introduces a single new parameter, the frequency ω/2π of the oscillations or, equivalently, the characteristic temperature θ = ħω/k. The prediction is that C_V is a universal function of T/θ. To the extent that the model has some validity, each substance should have its own value of θ; as the C_V figure shows, the value for diamond is about 1300 K. This is an example of a law of corresponding states: when temperatures are scaled appropriately, all substances should exhibit the same behaviour.
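A short numerical check of the Einstein heat capacity and its two limits (a sketch in the reduced variable t = T/θ):

```python
import numpy as np

def cv(t):
    # C_V/(3Nk) for the Einstein model as a function of t = T/theta
    x = 1.0 / t                                 # x = theta/T
    return x**2 * np.exp(x) / (np.exp(x) - 1)**2

for t in (0.05, 0.1, 0.5, 1.0, 5.0):
    print(t, cv(t))
# vanishes exponentially for t << 1 and approaches 1 (C_V -> 3Nk) for t >> 1
```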
[Figures: entropy S/Nk, internal energy E (starting from the zero-point value E₀ = (3/2)Nkθ), and thermal capacity of the Einstein model, plotted against T/θ.]
Finally we give the calculated properties of the Einstein model in terms of hyperbolic functions; you should check these. The partition function for the simple harmonic oscillator can be written as

z = ½ cosech(θ/2T),

so that the Helmholtz free energy for the solid comprising 3N such oscillators is

F = 3NkT ln{2 sinh(θ/2T)}.

The internal energy is

E = (3/2) Nkθ coth(θ/2T),

the entropy is

S = 3Nk { (θ/2T) coth(θ/2T) − ln[2 sinh(θ/2T)] },

and the thermal capacity is

C_V = 3Nk (θ/2T)² cosech²(θ/2T).
______________________________________________________________
End of lecture 10

Equation of state for the Einstein solid

We have no equation of state for this model so far: no p-V relation has been found, since there was no volume dependence in the energy levels, and thus none in the partition function and everything which followed from it. This deficiency can be rectified by recognising that the Einstein frequency, or equivalently the temperature θ, may vary with volume. Then the pressure may be found from

p = −∂F/∂V |T = −(3Nk/2) coth(θ/2T) dθ/dV.

We note that the equilibrium state of the solid when no pressure is applied corresponds to the vanishing of dθ/dV; in fact θ will be a minimum at the equilibrium volume V₀ (why?). For small changes of volume from this equilibrium we may then write

dθ/dV = (d²θ/dV²)|V₀ (V − V₀).

Writing the curvature as (d²θ/dV²)|V₀ = 2θ₀/V₀², the equation of state for the solid is

p = (3Nkθ₀/V₀²) (V₀ − V) coth(θ₀/2T).

The high-temperature limit of this is

p = (6NkT/V₀²) (V₀ − V),

and the low-temperature, temperature-independent limit is

p = (3Nkθ₀/V₀²) (V₀ − V).