Thermal Physics Tutorials With Python Simulations 2023
This book may serve as a thermal physics textbook for a semester-long under-
graduate thermal physics course or may be used as a tutorial on scientific com-
puting with focused examples from thermal physics. This book will also appeal
to engineering students studying intermediate-level thermodynamics as well as
computer science students looking to understand how to apply their computer
programming skills to science.
Series in Computational Physics
Series Editors: Steven A. Gottlieb and Rubin H. Landau
Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. For works that are not available on CCC please contact
mpkbookspermissions@tandf.co.uk.
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003287841
Publisher’s note: This book has been prepared from camera-ready copy provided by the authors.
To our parents, Yong Woo and Byung Ok Kouh
Contents
Preface
Chapter 1 Calculating π
Appendix
APPENDIX A: GETTING STARTED WITH PYTHON
APPENDIX B: PYTHON PROGRAMMING BASICS
APPENDIX C: PLOTS
APPENDIX D: COLORS
APPENDIX E: ANIMATION
Epilogue
Index
Preface
Einstein solid). The final chapter is about random and guided walks, a
topic that is independent of earlier chapters and provides a glimpse of
other areas of thermal physics.
T. Kouh earned his B.A. in physics from Boston University and Sc.M.
and Ph.D. degrees in physics from Brown University. After his study in
Providence, RI, he returned to Boston, MA, and worked as a postdoc-
toral research associate in the Department of Aerospace and Mechan-
ical Engineering at Boston University. He is a full faculty member in
the Department of Nano and Electronic Physics at Kookmin University
in Seoul, Korea, teaching and supervising undergraduate and graduate
students. His current research involves the dynamics of nanoelectrome-
chanical systems and the development of fast and reliable transduction
methods and innovative applications based on tiny motion.
M. Kouh holds Ph.D. and B.S. degrees in physics from MIT and an M.A.
from UC Berkeley. He completed a postdoctoral research fellowship
at the Salk Institute for Biological Studies in La Jolla, CA. His re-
search includes computational modeling of the primate visual cortex,
information-theoretic analysis of neural responses, machine learning,
and pedagogical innovations in undergraduate science education. He
taught more than 30 distinct courses at Drew University (Madison,
NJ), including two study-abroad programs. His professional experience
includes roles as a program scientist for a philanthropic initiative, a
data scientist at a healthcare AI startup, and an IT consultant at a
software company.
ACKNOWLEDGEMENT
his academic career and showing him the fun of doing physics. Last but
not least, his biggest and deepest thanks go to his dearest Sarah.
M. Kouh would like to thank his colleagues from the Drew University
Physics Department, R. Fenstermacher, J. Supplee, D. McGee, R. Mu-
rawski, and B. Larson, as well as his students. They helped him to
think deeply about physics from the first principles and from different
perspectives. He is indebted to his academic mentors, T. Poggio and
T. Sharpee, who have shown the power and broad applicability of com-
putational approaches, especially when they are thoughtfully combined
with mathematical and experimental approaches. His family is a con-
stant source of his inspiration and energy. Thank you, Yumi, Chris, and
Cailyn!
CHAPTER 1
Calculating π
π.†
Perhaps one of the easiest ways to obtain the value of π is to use the
fact that cos π = −1, which means cos−1 (−1) = arccos(−1) = π. The
following lines of code print out the value of the inverse cosine func-
tion, which is equal to π. The import command in Python expands
the functionality of the calling script by “importing” or giving access to
other modules or libraries. Here, the import math command imports
the mathematical library, math, which contains many predefined math-
ematical functions and formulas, such as arccosine acos().
# Code Block 1.1
import math
print(math.acos(-1))
3.141592653589793
†
See “The Discovery That Transformed Pi,” by Veritasium, at
www.youtube.com/watch?v=gMlf1ELvRzc about these approaches.
import numpy as np
import matplotlib.pyplot as plt
Let’s start with a few straightforward plots. Try to decipher what each
line of code is doing. You can run the code without or with a particular
line by adding or removing # at the beginning of the line.
The command plt.plot((-1,1),(1,1),color='black',
linewidth=5) draws a black horizontal line that connects two
points at (-1,1) and (1,1) – top side of the square – and the rest
of the sides are similarly added in the subsequent commands with
plt.plot(). plt.xlim() and plt.ylim() help to set the limits
of the x- and y-axes. plt.axis() controls the axis properties of a plot
based on the argument within the command. For example, by using
equal with plt.axis(), we can generate a plot with an equal scale
along x- and y- axes, and the argument off will hide the axes. Finally,
plt.savefig() and plt.show() allow us to save and display the
resulting plot.
# Code Block 1.3
# Make a square with a side of 2.
plt.plot((-1,1),(1,1),color='black',linewidth=5)
plt.plot((1,1),(1,-1),color='gray',linewidth=10)
plt.plot((1,-1),(-1,-1),color='black',linewidth=5,linestyle='dashed')
plt.plot((-1,-1),(-1,1),color='gray',linewidth=5,linestyle='dotted')
plt.xlim((-2,2))
plt.ylim((-2,2))
plt.axis('equal')
plt.axis('off')
plt.savefig('fig_ch1_box.eps')
plt.show()
Figure 1.1
pi = 3.141592
delta_theta = 0.4
theta = np.arange(0,2*pi+delta_theta,delta_theta)
x = np.cos(theta)
y = np.sin(theta)
N = len(theta)
print("Number of data points = %d"%N)
for i in range(N-1):
    plt.plot((x[i],x[i+1]),(y[i],y[i+1]),color='black')
Figure 1.2
Let’s go over a few key lines in the above code. The equal sign = assigns
a value on the right to a variable on the left. Hence, pi = 3.141592
means that the variable named pi is assigned to a value of 3.141592.
Similarly, the variable delta_theta is assigned to a value of 0.4 (ra-
dian), which can be made smaller to create an even finer polygon.
There is a lot going on with theta = np.arange(0,
2*pi+delta_theta,delta_theta). Here, we are creating an ar-
ray or a vector of numbers and assigning it to a variable named theta.
This array is created with the numpy module imported earlier. Since
we have imported numpy with a nickname np, we can access a very
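The explanation above is cut off in this excerpt, but the payoff of the polygon construction can be completed: the perimeter of the inscribed polygon approaches the circumference of the unit circle, 2π, so half the perimeter estimates π. A minimal sketch of this idea (not the book's own code block; np.linspace and np.diff are conveniences assumed here):

```python
import numpy as np

# Sample many points along the unit circle.
theta = np.linspace(0, 2*np.pi, 1000)
x = np.cos(theta)
y = np.sin(theta)

# Sum the lengths of the chords connecting consecutive points.
perimeter = np.sum(np.sqrt(np.diff(x)**2 + np.diff(y)**2))

# Half the perimeter of the unit circle approximates pi.
print(perimeter/2)
```

With more sample points (a finer polygon), the estimate approaches π more closely.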
Here is another approach for estimating π, using the fact that the area of
a circle is πr2 . Let’s consider a square with a side of 2 and an inscribed
circle with a radius r of 1. The square would have an area of 4, and
the area of the circle would be π. The ratio of the areas of the circle
and the square would be π/4. To estimate the ratio of these areas, we
will generate a large number of random dots within the square, as if
we are sprinkling salt or pepper uniformly over a plate. The position
of these dots will be generated from a uniform distribution. Then, we
could compare the number of dots inside the inscribed circle and the
square.∗
∗
We might imagine doing this simulation in real life by throwing many darts
and comparing the number of darts that landed inside of a circle. The precision
of estimation will increase with the number of darts, but who would do something
like that? See “Calculating Pi with Darts” on the Physics Girl YouTube Channel
(www.youtube.com/watch?v=M34TO71SKGk).
plt.scatter(x,y,s=5,color='black')
plt.xlim((-2,2))
plt.ylim((-2,2))
plt.axis('equal')
plt.axis('off')
plt.savefig('fig_ch1_random_num.eps')
plt.show()
Figure 1.3
In the following code block, the distance of each point from the origin
is calculated by dist = np.sqrt(x**2+y**2), so that dist is also an
array. The next line in the code, is_inside = dist < 1, compares
the distance with a constant 1. If distance is indeed less than 1, the
comparison < 1 is true (or a boolean value of 1), and the point lies
within a circle. If distance is not less than 1, the comparison < 1 is false
(or a boolean value of 0). Therefore, the variable is_inside is an array
of boolean values (true/false or 1/0) that indicate whether each point
lies inside a circle or not. Finally, by summing the values of this array
with np.sum(is_inside), we can calculate the number of points inside
the unit circle.
We also note a clever way of selectively indexing an array. x[dist<1]
returns a subset of array x which meets the condition where dist<1. In
other words, it returns the x coordinates of the points that lie within the
circle. Hence, plt.scatter(x[dist<1],y[dist<1],color='black')
makes a scatter plot of the points in the circle with a black marker.
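A tiny standalone example (with made-up coordinates) illustrates this boolean masking:

```python
import numpy as np

# Four made-up points; two lie inside the unit circle.
x = np.array([-0.5, 0.2, 0.9, 1.5])
y = np.array([ 0.1, 0.3, 0.8, 0.2])

dist = np.sqrt(x**2 + y**2)   # distance of each point from the origin
is_inside = dist < 1          # boolean array: True if inside the unit circle
print(np.sum(is_inside))      # number of points inside: 2
print(x[is_inside])           # x coordinates of those points: [-0.5  0.2]
```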
Sometimes it is convenient to package several lines of code into a func-
tion that accepts input arguments and returns an output. In the follow-
ing code block, we created a custom function estimate_pi(), which
takes an optional input argument N with a default value of 500. This
function calculates an estimate of π using N random points.
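The code block defining estimate_pi() is not included in this excerpt; a minimal sketch consistent with the description (an optional argument N defaulting to 500, uniform random points in a square of side 2, and the area ratio π/4) might look like:

```python
import numpy as np

def estimate_pi(N=500):
    # Uniform random points in the square [-1, 1] x [-1, 1].
    x = np.random.rand(N)*2 - 1
    y = np.random.rand(N)*2 - 1
    dist = np.sqrt(x**2 + y**2)
    # The fraction of points inside the unit circle approximates pi/4.
    return 4*np.sum(dist < 1)/N

print(estimate_pi(10000))
```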
# Code Block 1.7
plt.scatter(x[dist<1],y[dist<1],s=5,c='black')
plt.scatter(x[dist>1],y[dist>1],s=5,c='#CCCCCC')
plt.xlim((-2,2))
plt.ylim((-2,2))
plt.axis('equal')
plt.axis('off')
plt.savefig('fig_ch1_pi_circle_square.eps')
plt.show()
Figure 1.4
that the estimates are more consistent, or the spread of the estimates
is smaller, with higher N.
# Code Block 1.9
N_range = [100,500,1000,5000,10000]
N_trial = 30
result = np.zeros((N_trial,len(N_range)))
for i, N in enumerate(N_range):
    for trial in range(N_trial):
        pi_estimate = estimate_pi(N)
        result[trial,i] = pi_estimate
    plt.scatter(i+np.zeros(N_trial)+1,result[:,i],color='gray')
# Overlay a box plot (also known as a whisker plot).
plt.boxplot(result)
plt.xticks(ticks=np.arange(len(N_range))+1,labels=N_range)
plt.xlabel('N')
plt.ylabel('Estimate of $\pi$')
plt.savefig('fig_ch1_boxplot.eps')
plt.show()
Figure 1.5
I
Classical Thermodynamics
CHAPTER 2
Kinetic Theory of Gas
The idea that gas is composed of many tiny moving particles is obvious
to us now, but it took many centuries of scientific inquiries for this
idea to be accepted as a verified theory. We call this idea the “kinetic
theory of gas.” It was a significant breakthrough in physics, bridging
two different domains of knowledge: classical mechanics, which usually
deals with the force, momentum, and kinetic energy of an individual
particle, and thermodynamics, which deals with the pressure, volume,
and temperature of a gas.
Critical insights from the kinetic theory of gas are the mechanical in-
terpretation of temperature and the derivation of the ideal gas law,
PV = nRT, where P is pressure, V is volume, T is temperature, and nR
is related to the quantity of the gas. More specifically, n is the number
of moles of gas, and 1 mole is equivalent to 6.02 × 10²³ particles (this
quantity is known as Avogadro’s number NA ). The proportionality con-
stant R is known as a universal gas constant. Sometimes, the ideal gas
law is also written as PV = NkT, where N is the number of particles
and k is known as the Boltzmann constant. Therefore, n = N/NA and
NA = R/k.
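The relation NA = R/k can be checked numerically with the standard values of the constants (a quick sketch, not part of the book's code):

```python
R = 8.314       # universal gas constant, J/(mol K)
k = 1.381e-23   # Boltzmann constant, J/K
NA = R/k        # should be close to Avogadro's number, 6.02e23
print("%.3e"%NA)
```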
As we will show in this chapter, the pressure of an ideal gas is a macro-
scopic manifestation of numerous microscopic collisions of gas parti-
cles with the containing walls. The gas temperature is directly related
to the average kinetic energy of the constituent particles. As an anal-
ogy, consider the economy of a market, which consists of many individ-
ual transactions of people buying and selling products. These financial
import numpy as np
import matplotlib.pyplot as plt
plt.plot(t,y,color='black')
plt.xlabel('Time')
plt.ylabel('Position')
plt.savefig('fig_ch2_y_vs_t.eps')
plt.show()
Figure 2.1
Now let’s put this code into a more versatile and reusable format, a
function that can be called with different parameters. Such a function
is implemented below. We will update the position of a particle with a
small increment dt, so that y(t + dt) = y(t) + v · dt. In other words, after
short time later, the particle has moved from the old position y(t) to a
new position y(t + dt) by a displacement of v · dt.
In the following code, we use a for-loop structure to update the po-
sition of a particle over time from tmin to tmax in a small increment
of dt. The update starts from the initial position y0. The incremental
update is accomplished by current_y = y[i] + current_v*dt. Note
that, by using time_range[1:] in this for-loop calculation, we are
just considering the elements in the time_range array excluding the
first element (tmin), since the update is only needed for the subsequent
times.
When a particle runs into a wall, the particle bounces off without a
change in its speed or without loss of its kinetic energy. Only the sign
of its velocity flips. In the code, this process is implemented with a
statement, current_v = -current_v. The first if-statement handles
the collision event with the bottom wall located at ymin, and the second
if-statement is for the collision with the top wall at ymax.
When the particle bounces off of a wall at the origin (y = 0), simply
taking the absolute value of the position prohibits negative position
values and disallows the particle from punching through the wall. How-
ever, if we were to simulate a collision with a wall placed somewhere
other than the origin, the particle’s position would need to be updated
more carefully. When the particle has just hit the bottom wall with
a negative velocity (current_v < 0), the calculation of current_y
= y[i] + current_v*dt would yield a value that is less than ymin.
Therefore, the current position of the particle needs to be corrected
by current_y = ymin + (ymin-current_y). This command correctly
calculates the position of the bounced particle to be above the bot-
tom wall by the distance of (ymin-current_y). When the particle hits
the top wall at ymax, a similar calculation of current_y = ymax -
(current_y-ymax) correctly puts the bounced particle below the top
wall. We also keep track of the number of bounces by incrementing
Nbounce by one within each if-statement.
# Code Block 2.2
def calculate_position(y0,v,dt=0.1,ymin=0,ymax=1,tmin=0,tmax=10,plot=False):
    # The function header and setup lines are reconstructed from how
    # calculate_position() is called later in the chapter.
    time_range = np.arange(tmin,tmax,dt)
    y = np.zeros(len(time_range))
    y[0] = y0
    current_v = v
    Nbounce = 0
    for i, t in enumerate(time_range[1:]):
        current_y = y[i] + current_v*dt # Update position.
        if current_y <= ymin:
            # if the particle hits the bottom wall.
            current_v = -current_v # velocity changes the sign.
            current_y = ymin + (ymin - current_y)
            Nbounce = Nbounce+1
        if current_y >= ymax:
            # if the particle hits the top wall.
            current_v = -current_v # velocity changes the sign.
            current_y = ymax - (current_y - ymax)
            Nbounce = Nbounce+1
        y[i+1] = current_y
    if (plot):
        plt.plot(time_range,y,color='black')
        plt.xlabel('Time')
        plt.ylabel('Position')
        plt.savefig('fig_ch2_bounce.eps')
        plt.show()
    return y, time_range, Nbounce
Figure 2.2
We can build on the above set of codes to simulate the motion of mul-
tiple particles. In the following code block, there is a for-loop that
accounts for N particles with random velocities and initial positions.
np.random.randn(N) generates N random numbers taken from a nor-
mal distribution with a mean of 0 and standard deviation of 1. Hence,
multiplying it by 0.5 creates random numbers with a smaller variation.
(There is a caveat. The velocities of gas particles are not normally dis-
tributed but follow the Maxwell-Boltzmann distribution. However, the
main ideas discussed in this chapter hold up regardless of the distribu-
tion type. We will revisit this topic in a later chapter.)
A series of timestamps between the minimum and maximum time in
steps of dt can be created with t = np.arange(tmin,tmax,dt). The
# Multiple particles.
N = 30
tmin = 0
tmax = 10
dt = 0.1
t = np.arange(tmin,tmax,dt)
pos = np.zeros((N,len(t))) # initialize the matrix.
Nbounce = np.zeros(N)
v = np.random.randn(N)*0.5
y0 = np.random.rand(N)
for i in range(N):
    # pos[i,:] references the i-th row of the array, pos.
    # That is the position of i-th particle at all timestamps.
    pos[i,:], _, Nbounce[i] = calculate_position(y0[i],v[i],dt=dt,
                                                 tmin=tmin,tmax=tmax)
plt.hist(v,color='black')
plt.xlabel('Velocity (m/sec)')
plt.ylabel('Number of Particles')
plt.title("Initial velocities (randomly chosen).")
plt.savefig('fig_ch2_v0_distrib.eps')
plt.show()
for i in range(N):
    plt.plot(t,pos[i,:],color='gray')
plt.xlabel('Time (sec)')
plt.ylabel('Position (m)')
plt.ylim((-0.1,1.1))
plt.title('Position of $N$ particles versus time')
plt.savefig('fig_ch2_Nbounces.eps')
plt.show()
Figure 2.3
Figure 2.4
Although the numbers we are using for the above simulations are not
particularly tied to any units, we will assume that they have the stan-
dard units of meters for position, seconds for time, and m/sec for
∆t = 2L / |v|,
where L is the distance between the walls (or ymax-ymin) and v is the
velocity of a particle. The particle travels the distance of 2L before
hitting the same wall. The frequency of bounce is 1/∆t, and hence, the
number of bounces is linearly proportional to |v|, as Figure 2.5 shows.
Figure 2.5
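This proportionality is easy to check numerically (a sketch with assumed values for L and the total time, not numbers taken from the figure):

```python
L = 1.0        # distance between the walls (assumed)
t_total = 10.0 # total simulated time (assumed)

for speed in (0.5, 1.0, 2.0):
    dt_bounce = 2*L/speed         # time between bounces off the same wall
    n_bounce = t_total/dt_bounce  # bounces off one wall during t_total
    print(speed, n_bounce)        # doubling the speed doubles the bounces
```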
Now let’s visualize a superhero whose mighty body can deflect a volley
of bullets fired from a machine gun by a villain. As they strike and
bounce off of our superhero, these bullets would exert force on the body.
The microscopic collisions of individual gas particles on the containing
walls similarly exert force, macroscopically manifested as pressure. In
the following, we will develop this idea mathematically and derive the
ideal gas law.
When a particle bounces off the wall, its direction changes and its ve-
locity goes from −v to v (or from v to −v). This change in velocity,
∆v = v − (−v) = 2v, comes from the force acting on the particle by
the wall. The particle, in turn, exerts a force with the same magnitude
and opposite direction on the wall, according to Newton’s Third Law
of Motion.
According to Newton’s Second Law of Motion, this force F is equal to
the mass of the particle m times its acceleration (the rate of change in
its velocity) a, so F = ma. Acceleration of an object is the rate of change
of velocity, or ∆v/∆t, and we had also discussed that ∆t = 2L/|v|. Therefore,

F_single particle = m ∆v/∆t = m (2v|v|) / (2L).

The magnitude of the force is |F_single particle| = mv²/L.
Since there are many particles with different velocities that would col-
lide with the wall, we should consider the average force delivered by
N different particles. Let’s use a notation of a pair of angled brackets,
< · > to denote an average quantity. Then, for an average total force on
the wall due to N particles:
<F> = N m<v²> / L.
By dividing this average force by the cross-sectional area A of the wall,
we can find pressure P = F/A. We also note V = LA, where V is the
volume of the container, so

PV = Nm<v²>.

Here v is a single velocity component, the one perpendicular to the wall.
Since the three components are equivalent on average, <|v⃗|²> = 3<v²>, and

PV = (1/3) Nm<|v⃗|²> = (2/3) N <Kinetic Energy>,

where we made an identification that the kinetic energy of a particle is
given by (1/2)m|v⃗|².
By comparing this relationship with the ideal gas law PV = NkT, we
arrive at a conclusion that

<Kinetic Energy> = (3/2) kT,

and

√<|v⃗|²> = v_rms = √(3kT/m).

In other words, temperature T is a measure of the average kinetic energy
of N gas particles, whose root-mean-square (rms) speed is √(3kT/m). This
is the main result of the Kinetic Theory of Gas.
Answer: 515 m/sec. There will be other particles moving faster and
slower than this speed.
As a hypothetical situation, if all N2 particles were moving at this speed
and if they were striking a wall of area 100 cm2 in a head-on collision
(i.e., one-dimensional motion) for 0.5 sec, what would be the pressure
exerted on the wall by these particles?
Answer: These particles would all experience the change of velocity of
2 × 515 m/sec and the acceleration of a = ∆v/∆t = 2060 m/sec². Each
particle would exert a force of ma, and since there are NA molecules,
the pressure would be equal to 5.80 × 10³ Pa.
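Both answers can be reproduced numerically (a sketch; room temperature of 298 K and a molar mass of 28 g/mol for N2 are assumed, since the original question is not shown in this excerpt):

```python
import numpy as np

k = 1.381e-23   # Boltzmann constant, J/K
NA = 6.022e23   # Avogadro's number
T = 298         # assumed room temperature, K
m = 0.028/NA    # mass of one N2 molecule, kg (28 g/mol assumed)

v_rms = np.sqrt(3*k*T/m)
print("v_rms = %.0f m/sec"%v_rms)   # about 515 m/sec

# Head-on collisions of NA molecules on a 100 cm^2 wall for 0.5 sec.
a = 2*v_rms/0.5   # acceleration, m/sec^2
F = NA*m*a        # total force, N
P = F/0.01        # pressure on 100 cm^2 = 0.01 m^2, Pa
print("P = %.2e Pa"%P)              # about 5.8e3 Pa
```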
<x> = (x_1 + x_2 + ... + x_N) / N,

and

x_rms = √((x_1² + x_2² + ... + x_N²) / N).
x = np.array([0,1,2,3,4])
N = len(x)
print("Number of items = %d"%N)
print("x = ",np.array2string(x, precision=3, sign=' '))
print("x^2 = ",np.array2string(x**2, precision=3, sign=' '))
print("<x> = %4.3f"%(np.sum(x)/N))
print("<x^2> = %4.3f"%(np.sum(x**2)/N))
print("x_rms = %4.3f"%(np.sqrt(np.sum(x**2)/N)))
Number of items = 5
x = [0 1 2 3 4]
x^2 = [ 0 1 4 9 16]
<x> = 2.000
<x^2> = 6.000
x_rms = 2.449
Let’s calculate the force exerted on the wall in two ways, using our
mechanical interpretation of elastic collisions. One way to calculate the
force is to add up the momentum changes, m∆v, due to multiple colli-
sions by multiple particles on the wall and then to divide the sum by
the time ∆t. Another way is to use one of the results derived earlier,
<F> = Nm<v²>/L. We can directly calculate <v²> from a list of particle
velocities.
N = 100 # number of particles (value not shown in this excerpt)
ymin = 0
ymax = 2
tmin = 0
tmax = 10
dt = 0.1
t = np.arange(tmin,tmax,dt)
Nbounce = np.zeros(N)
y0 = np.random.rand(N)
v = np.random.randn(N)*2
v = np.sort(v)
for i in range(N):
    _, _, Nbounce[i] = calculate_position(y0[i],v[i],dt=dt,
                                          ymin=ymin,ymax=ymax,
                                          tmin=tmin,tmax=tmax)
m = 1 # mass of a particle
L = ymax-ymin
delta_t = tmax - tmin
delta_v = 2*np.abs(v)
# First way: sum the momentum changes m*delta_v over all bounces.
# The factor of 0.5 counts only the bounces off one of the two walls.
F1 = m*np.sum( Nbounce * delta_v / delta_t )*0.5
print("Force = sum(m dv)/dt = %4.3f"%F1)
# Second way: use the earlier result <F> = Nm<v^2>/L.
v_rms = np.sqrt(np.sum(v**2)/N)
F2 = N*m*v_rms**2/L
print("Force = Nm<v^2>/L = %4.3f"%F2)
# Calculate percent difference as (A-B) / average(A,B) * 100.
print("Percent Difference = %3.2f"%(100*(F2-F1)*2/(F1+F2)))
# misc. info.
print('\n============================================\n')
print('misc. info:')
print('speed range: min = %4.3f, max = %4.3f'
      %((np.min(np.abs(v)),np.max(np.abs(v)))))
print('number of particles with 0 bounce = %d'%(np.sum(Nbounce==0)))
print('number of particles with 1 bounce = %d'%(np.sum(Nbounce==1)))
============================================
misc. info:
speed range: min = 0.007, max = 6.223
number of particles with 0 bounce = 4
number of particles with 1 bounce = 14
2.5 TEMPERATURE
Our derivation of the ideal gas law has revealed that the temperature
is directly proportional to the average kinetic energy of gas particles,
which is directly proportional to v_rms².
In the following block of code, we make a comparison between T and
v_rms². For T, we will refer to the ideal gas law, PV = NkT, so that
kT = PV/N. The quantity P will be calculated by measuring the force
due to individual collisions of N gas particles in the simulation, as we
have done above. Then, PV = (F/A)(AL) = FL, where A is the area of
the wall and L is the distance between the two walls.
The root-mean-square velocity, vrms , can simply be calculated by
np.sqrt(np.sum(v**2)/N), since in this simulation, each particle
maintains its speed. The particles only collide with the walls, so their
velocities will always be either v or −v. In the next chapter, we will
consider what happens when particles collide.
We will simulate different temperature values by generating random
velocities with different magnitudes, which is accomplished by v =
np.random.randn(N)*T, so that high T increases v_rms. In other words,
higher temperature results in particles traveling faster with more col-
lision events, which in turn would be manifested as increased pressure
on the wall.
The following simulation verifies that the kinetic theory of gas “works”
over different situations. The simulation parameter T scales the range of
velocities by v = np.random.randn(N)*T, so high T increases v_rms.
As the resulting plot demonstrates, PV/N is indeed equal to m<v²>
or mv_rms². The data points lie along a diagonal line. The factor of 3
in the derivation of an ideal gas is not here because our simulation is
one-dimensional.
# Code Block 2.6
def run_with_different_T(T=1,N=100,m=1,ymin=0,ymax=2):
    # The function header is reconstructed from how
    # run_with_different_T() is called below.
    tmin = 0
    tmax = 10
    dt = 0.1
    t = np.arange(tmin,tmax,dt)
    Nbounce = np.zeros(N)
    v = np.random.randn(N)*T # T scales the v_rms.
    y0 = np.random.rand(N)
    v = np.sort(v)
    for i in range(N):
        _, _, Nbounce[i] = calculate_position(y0[i],v[i],dt=dt,
                                              ymin=ymin,ymax=ymax,
                                              tmin=tmin,tmax=tmax)
    L = ymax - ymin
    delta_t = tmax - tmin
    delta_v = 2*np.abs(v)
    v_rms = np.sqrt(np.sum(v**2)/N)
    F = m*np.sum( Nbounce * delta_v / delta_t )*0.5
    PV = F*L
    return PV, v_rms
perc_diff = np.zeros(len(T_range))
maxval = 0 # keep track of max value for scaling the plot.
for i,T in enumerate(T_range):
    PV, v_rms = run_with_different_T(T=T,N=N,m=m)
    plt.scatter(m*v_rms**2,PV/N,color='black')
    # Calculate percent difference as (A-B)/average(A,B)*100.
    perc_diff[i] = (m*v_rms**2 - PV/N)*2/(m*v_rms**2+PV/N)*100
    if maxval < PV/N:
        maxval = PV/N
Figure 2.6
CHAPTER 3
Velocity Distribution
m1 v⃗1,before + m2 v⃗2,before = m1 v⃗1,after + m2 v⃗2,after

(1/2) m1 v1,before² + (1/2) m2 v2,before² = (1/2) m1 v1,after² + (1/2) m2 v2,after²

Since v⃗ has x, y, and z components, vx, vy, and vz, the above equations
can be written out this way, too.
(1/2) m1 (v1x,before² + v1y,before² + v1z,before²)
+ (1/2) m2 (v2x,before² + v2y,before² + v2z,before²)
= (1/2) m1 (v1x,after² + v1y,after² + v1z,after²)
+ (1/2) m2 (v2x,after² + v2y,after² + v2z,after²)
There are a lot of possible solutions that simultaneously satisfy the
above set of equations because there are 12 variables (three spatial di-
mensions and two particles for before and after conditions) that are
constrained by only four relationships (three for momentum conserva-
tion and one for energy conservation).
v1x,after = ((m1 − m2)/(m1 + m2)) v1x,before + (2m2/(m1 + m2)) v2x,before

v2x,after = (2m1/(m1 + m2)) v1x,before + ((m2 − m1)/(m1 + m2)) v2x,before
The above result is symmetric. When we swap the indices 1 and 2, the
resulting expressions remain mathematically identical, as they should
be since there is nothing special about which particle is called 1 or 2.
Another fun consideration is to swap “before” and “after” distinctions,
and then solve for the post-collision velocities. After a few lines of al-
gebra, we again obtain mathematically identical expressions, as they
should be since particle collisions are symmetric under time-reversal.
That means that the time-reversed process of a particle collision is pos-
sible. If two particles with velocities of v1x, after and v2x, after collided
with each other, their post-collision velocities would be v1x, before and
v2x, before . Note it is not a typo that I am calling the post-collision
velocities with “before” labels, as we are considering a time-reversed
collision event. If we recorded a movie of microscopic particle collisions
and played it backward in time, it would look as physically plausible as
the original movie, as the conservation laws of momentum and kinetic
energy would hold. (Another way to think about the time reversal is to
replace t with −t in an equation of motion and to notice that this extra
negative sign does not change the equation.)
The profound question is, then: why is there an arrow or directionality
of time that we experience macroscopically? For example, a concentrated
blob of gas particles would diffuse across the space, as a scent from a
perfume bottle would spread across a room, not the other way. This
question will be addressed in later chapters as we further develop a
statistical description of various physical phenomena.
As an interesting case of a two-particle collision event, let’s consider
m2 >> m1 and v2x, before = 0. Then, we would have a reasonable result
of v1x, after = −v1x, before and v2x, after = 0. In other words, m1 would
bounce off of a more massive particle, m2 . Another interesting case is
m1 = m2 and v2x, before = 0. We obtain v1x, after = 0 and v2x, after =
v1x, before . In other words, the first particle comes to a stop, and the
second particle is kicked out with the same speed as the first particle
after the collision. We sometimes see this type of collision on a billiard
table when a white cue ball hits a ball at rest, gives all its momentum
to that ball, and comes to an immediate stop. The last example is
m1 = m2 and v1x, before = −v2x, before , where two particles move toward
each other with the same momentum. This collision sends each particle
in the opposite direction with the same speed.
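The function headon_collision() used in the next code block is not defined in this excerpt; a sketch consistent with the collision formulas derived above would be:

```python
def headon_collision(v1_before, v2_before, m1, m2):
    # 1D elastic collision: conserves momentum and kinetic energy.
    v1_after = (m1 - m2)/(m1 + m2)*v1_before + 2*m2/(m1 + m2)*v2_before
    v2_after = 2*m1/(m1 + m2)*v1_before + (m2 - m1)/(m1 + m2)*v2_before
    return v1_after, v2_after

# Equal masses, second particle at rest: the velocities are exchanged.
print(headon_collision(1, 0, 1, 1))   # (0.0, 1.0)
```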
The following code draws cartoons of these three examples programmat-
ically. The calculation of post-collision velocities is implemented in the
function headon_collision(). The plotting routine is packaged in the
function plot_pre_post_collision(), which splits a figure window
into two subplots. The left subplot draws the pre-collision condition,
and the right subplot shows the post-collision condition. Each particle
is placed with a scatter() command, and its velocity is drawn with
an arrow() command. A for-loop is used to issue the same commands
for constructing each subplot.
The following code also illustrates the use of the assert command,
which can be used to check test cases and potentially catch program-
ming errors.
# Code Block 3.1
def plot_pre_post_collision(velocity1,velocity2,m1,m2):
    x1, x2 = (-1,1) # Location of m1 and m2.
    y = 1 # Arbitrary location along y.
    marker_size = 100
    scale = 0.5 # the length scale of the velocity arrow.
    title_str = ('Before','After')
    c1 = 'black'
    c2 = 'gray'
    fig,ax = plt.subplots(1,2,figsize=(8,1))
    for i in range(2):
        ax[i].scatter(x1,y,s=marker_size,color=c1)
        ax[i].scatter(x2,y,s=marker_size,color=c2)
        # Draw an arrow if the velocity is not too small.
        if abs(velocity1[i])>0.01:
            ax[i].arrow(x1,y,velocity1[i]*scale,0,color=c1,
                        head_width=0.1)
        if abs(velocity2[i])>0.01:
            ax[i].arrow(x2,y,velocity2[i]*scale,0,color=c2,
                        head_width=0.1)
        ax[i].set_xlim((-2,2))
        ax[i].set_ylim((0.8,1.2))
        ax[i].set_title(title_str[i])
        ax[i].axis('off')
    plt.tight_layout()

m1, m2 = (1, 1)
u1, u2 = (1, 0)
v1, v2 = headon_collision(u1,u2,m1,m2)
plot_pre_post_collision((u1,v1),(u2,v2),m1,m2)
plt.savefig('fig_ch3_collision_case2.eps')
assert v1==0
assert v2==u1
Figure 3.1
Let’s use the following notation to keep track of various velocity com-
ponents:
(v⃗1, v⃗2)before = (v1x, v1y, v1z, v2x, v2y, v2z)before = (1, 0, 0, 0, 0, 0).

(v1x, v1y, v1z, v2x, v2y, v2z)after = (0.9, 0.3, 0.0, 0.1, −0.3, 0.0).
Over the following several blocks of code, we will systematically and ex-
haustively search for possible velocities. Even though brute-force search
may not be the most mathematically sophisticated or elegant, it can be
relied on when an analytical method is not readily available.
We will develop a few versions of the search code so that the subsequent
version will be faster than the previous ones. The main idea will be
the same. We will use six for-loops to go through possible values of
(v1x , v1y , v1z , v2x , v2y , v2z )after , and check whether they jointly satisfy the
principles of momentum and energy conservation. The improvements of
different code versions will come from reducing the search space. As an
analogy, suppose you are trying to find a key you lost in the house, and
you can optimize your search by only visiting the places you’ve been to
recently. That is, don’t look for your key in the bathroom if you haven’t
been there.
There are two main variables in the following code: v_before and
v_after, each of which contains six numbers, and they are put into in-
dividual variables to make the code more readable. For example, v1x_b
corresponds to v1x,before and v2z_a corresponds to v2z,after. Each of
these six numbers in v_before and v_after can be accessed using an
index value between 0 and 5 (0, 1, 2, 3, 4, 5). Multiple values can be
accessed with the : symbol. For example, v_before[:3] returns the
first three values from the full list: v_before[0], v_before[1], and
v_before[2], starting with index 0 and ending before index 3. These
three values are v1x_b, v1y_b, and v1z_b. Similarly, v_before[3:]
returns the remaining three values, starting with index 3, which are
assigned to v2x_b, v2y_b, and v2z_b.
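The slicing behavior described above can be checked in isolation:

```python
v_before = [1, 0, 0, 0, 0, 0]
v1x_b, v1y_b, v1z_b = v_before[:3]  # indices 0, 1, 2
v2x_b, v2y_b, v2z_b = v_before[3:]  # indices 3, 4, 5
print(v_before[:3])  # → [1, 0, 0]
print(v_before[3:])  # → [0, 0, 0]
```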
We can check momentum conservation by asking if the difference be-
tween the sum of pre-collision velocities and the sum of post-collision
velocities is zero. However, numerical values do not have infinite preci-
sion. For example, we know numbers like √2, π, and 1/3 are infinitely
long in their decimal representations, such as 1.414..., 3.141592..., and
0.333..., but a typical numerical computation deals with a fixed num-
ber of decimal digits, possibly producing small but non-zero round-off
errors. (Note that there are methods that deal with infinite-precision
or arbitrary-precision arithmetic, which we are not using.) Therefore,
in our implementation, we will check whether the absolute value of the
difference is smaller than a relatively tiny number tol, rather than
comparing the difference to zero.
The following code block illustrates this idea: if (num1-num2)==0:
checks whether num1 and num2 are numerically identical, and if
abs(num1-num2)<tol: checks whether these two values are close
enough up to a tolerance threshold value tol.
Velocity Distribution 37
num1 = 1.0/3.0
num2 = 0.333333333
tol = 0.0001
if (num1-num2)==0:
    print('num1 is equal to num2.')
else:
    if abs(num1-num2)<tol:
        print('num1 is practically equal to num2.')
    else:
        print('num1 is not equal to num2.')
import numpy as np
def is_conserved(v_before,v_after,tol=0.0001):
    v1x_b, v1y_b, v1z_b = v_before[:3]
    v2x_b, v2y_b, v2z_b = v_before[3:]
    v1x_a, v1y_a, v1z_a = v_after[:3]
    v2x_a, v2y_a, v2z_a = v_after[3:]
    # Check momentum (per axis) and energy conservation, assuming m1 = m2 = 1.
    dp = (abs(v1x_b+v2x_b-v1x_a-v2x_a) + abs(v1y_b+v2y_b-v1y_a-v2y_a)
          + abs(v1z_b+v2z_b-v1z_a-v2z_a))
    dE = abs(np.sum(np.array(v_before)**2) - np.sum(np.array(v_after)**2))
    return (dp < tol) and (dE < tol)
For each velocity component, the number of candidate values is proportional
to 1/dv. Because there are six nested for-loops, the total number of
possible combinations for v_after goes up exponentially like (1/dv)^6.
This exponential behavior makes generate_solutions_very_slow()
very slow and inefficient. For maxval = 1 and dv = 0.5, this function
will process more than 15625 (= 5^6) cases. For dv = 0.1, the number of
cases would be more than 85 million, and for dv = 0.01, the number
would increase to more than 6.5 × 10^13. In fact, do not use this function
unless you can leave your computer running for many, many hours.
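The growth of the search space can be tallied directly. This short helper (not part of the original listings) counts the candidate combinations for a given dv:

```python
import numpy as np

def num_candidates(maxval, dv):
    # Each of the six nested loops scans np.arange(-maxval, maxval+dv, dv),
    # so the total search space is (number of values per loop)**6.
    n = len(np.arange(-maxval, maxval+dv, dv))
    return n**6

print(num_candidates(1, 0.5))  # → 15625, i.e., 5^6
```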
# Code Block 3.4
As an analogy, suppose three siblings split $100. The oldest one (A) takes
a portion first and gives the rest to the next oldest one (B), who takes a portion from
the remaining fund and passes the rest to the next one (C). We know
that A+B+C=$100, so B cannot have more than $60 because $40 was
already taken by A.
A helper function my_range() generates a reduced range for each
nested loop. In the first for-loop for v1x, we consider the full range
between -maxval and maxval, but in the second loop for v1y, we
consider a smaller range since v1x reduced the range of possible
values for v1y. The next loop for v1z considers an even smaller
range since both v1x and v1y took up a portion of energy, and a
smaller amount of energy is left for v1z. Therefore, the input argu-
ment for my_range() starts with [0,0,0,0,0,0] and incrementally
adds the contributions from the previous loops: [v1x,0,0,0,0,0],
[v1x,v1y,0,0,0,0], [v1x,v1y,v1z,0,0,0], etc.
The reduced range is given by
np.arange(-new_maxval,new_maxval+dv,dv), where new_maxval is
the remaining amount of energy. That is the square root of
maxval**2-np.sum(np.array(trial)**2), where trial is the set of
velocity values that are being considered in the previous, outer for-
loops. In the above sibling analogy, this trial would be a record of
how much money was already taken by the older siblings.
There are a few lines of code that may need some clarification.
The tmp_maxval is a temporary variable for the square root of
maxval**2-np.sum(np.array(trial)**2), which, due to rare unfor-
tunate numerical round-off errors, may be slightly less than 0; the
my_sqrt() function returns zero in such a case.
Why might we encounter such a case? That is because the code was
written so that any trial solutions we consider would come from a range
of np.arange(-maxval,maxval+dv,dv), a set of evenly-spaced num-
bers. If the newly returned tmp_maxval happens to be a number not
contained in this discrete set, we pick the closest number using the
np.argmin() function to determine new_maxval, thereby introducing
a small numerical imprecision.
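The helper functions described here are not reproduced above; one plausible sketch, with my_sqrt() guarding against tiny negative arguments and my_range() snapping the remaining budget to the discrete grid, is:

```python
import numpy as np

# Hypothetical helpers matching the description in the text.
def my_sqrt(x):
    # Guard against tiny negative arguments caused by round-off errors.
    return np.sqrt(x) if x > 0 else 0.0

def my_range(maxval, trial, dv):
    # Remaining "budget" after the outer loops took their share.
    tmp_maxval = my_sqrt(maxval**2 - np.sum(np.array(trial)**2))
    candidates = np.arange(-maxval, maxval+dv, dv)
    # Snap to the closest value in the discrete grid of candidates.
    new_maxval = candidates[np.argmin(np.abs(candidates - tmp_maxval))]
    return np.arange(-new_maxval, new_maxval+dv, dv)
```

For example, once v1x has taken all the energy (trial = [1,0,0,0,0,0] with maxval = 1), the reduced range collapses to a single value.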
Compared to generate_solutions_very_slow(), this new function
generate_solutions() is a significant improvement. For maxval = 1
and dv = 0.1, the slow code considers more than 85 million possibil-
ities, while this one considers 5,827,203 possibilities, which is a speed-
up of more than ten-fold. The geometric insight behind this optimization
is this: the slow code searches for a solution within the full volume of a
(six-dimensional) cube, while the new code searches on the surface of a
(six-dimensional) sphere.
# Code Block 3.5
def my_range(maxval,trial,dv):
dv = 0.5
solutions, num = generate_solutions([1,0,0,0,0,0],dv=dv)
print("Number of solutions tried = ", num, ' with dv = ', dv)
# Pre-collision velocities.
v_before = [1,0,0,0,0,0] # [v1x,v1y,v1z,v2x,v2y,v2z]
# Slower function.
# Uncomment, if you want to try this out.
v_before = [1,0,0,0,0,0] # [v1x,v1y,v1z,v2x,v2y,v2z]
dv = 0.1
#print("\nTrying out generate_solutions_very_slow() function.")
#print("Started at ====> ", datetime.now().strftime("%H:%M:%S"))
#solutions, num = generate_solutions_very_slow (v_before,dv=dv)
#print("Finished at ===> ", datetime.now().strftime("%H:%M:%S"))
#print_solutions_neatly(solutions)
#print("Number of solutions tried = ", num, ' with dv = ', dv)
The following code plots the split of the total energy as a histogram. It
calculates the energy ratio as the amount of post-collision kinetic energy
carried away by one particle, divided by the total pre-collision kinetic
energy, E1,after / (E1,before + E2,before).
The histogram shows that this ratio can be 0 (if all the post-collision
kinetic energy is given to the second particle), 0.5 (if the energy is
shared equally), 1 (if the first particle takes all the kinetic energy after
the collision), or other values.
# Code Block 3.8
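A sketch of the energy-ratio calculation, using a few hypothetical stand-ins for the six-tuples [v1x,v1y,v1z,v2x,v2y,v2z] that generate_solutions() returns:

```python
import numpy as np

# Hypothetical post-collision solutions (stand-ins for generate_solutions()).
solutions = np.array([[0.0, 0, 0, 1, 0, 0],
                      [1.0, 0, 0, 0, 0, 0],
                      [0.9, 0.3, 0, 0.1, -0.3, 0]])
v_before = np.array([1.0, 0, 0, 0, 0, 0])
E_total = np.sum(v_before**2)                    # total pre-collision KE (m = 1)
ratio = np.sum(solutions[:, :3]**2, axis=1)/E_total
```

The histogram itself can then be drawn with plt.hist(ratio, bins=np.arange(0, 1.1, 0.1)).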
Figure 3.2
We will start the simulation where two particles are chosen randomly,
exchanging their energies while keeping the total energy constant. What
would be the resulting distribution of energy? In this simulation, we
repeatedly pick two particles i and j using the np.random.randint()
function. If these particles are different (if (i!=j):), we split their total
energy (e[i]+e[j]) randomly. This random split is accomplished by picking a
number between 0 and 1 (p = np.random.rand()). This is a simplify-
ing assumption because the histogram of energy split from our earlier
calculation is not uniform. Still, it simplifies our simulation into a single
line of code.
# Code Block 3.10
import numpy as np
import matplotlib.pyplot as plt
N, T, e0 = 400, 5000, 1.0  # (values assumed) particles, time steps, initial energy
e = np.zeros(N) + e0       # every particle starts with energy e0
e_all = np.zeros((T,N))
for t in range(T):
    i, j = np.random.randint(N, size=2)
    if (i!=j):             # split the pair's total energy randomly
        p = np.random.rand()
        e[i], e[j] = p*(e[i]+e[j]), (1-p)*(e[i]+e[j])
    e_all[t,:] = e
Figure 3.3
Figure 3.4
import numpy as np
import matplotlib.pyplot as plt
T,N = e_all.shape
width = 5
maxlim = np.ceil(np.max(e_all)*width)/width
bins = np.arange(0,maxlim,width)
nrow = 5
ncol = 5
fig, axes = plt.subplots(nrow,ncol,figsize=(8,8),
                         sharex=True,sharey=True)
step = int(T/(nrow*ncol))
for i in range(nrow):
    for j in range(ncol):
        ax = axes[i,j]
        t = (i*ncol + j)*step
        h, b = np.histogram(e_all[t,:], bins=bins)
        ax.bar(b[:-1],h/N,width=width,color='black')
        ax.set_title("t = %d"%t)
        ax.set_yticklabels([])
        ax.set_xticklabels([])
        ax.set_ylim((0,1.1))
Figure 3.5
By inspection of the last two expressions, we can infer that P(ε) ∝
e^(−βε). We will come back to this result with a more rigorous proof later.
Figure 3.6
Assuming isotropy (that is, all directions are equal or there is no special
direction), we may consider a thin spherical shell in the velocity phase
space (where the axes are vx, vy, and vz). A spherical shell whose radius
is v = √(vx^2 + vy^2 + vz^2) and whose thickness is dv will have a differential
volume of 4πv^2 dv, and the velocities represented within this volume will
have the same energy. Then, the distribution of speed is given by:
P(v) dv ∝ 4πv^2 e^(−βmv^2/2) dv.
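The plotting code that follows calls an MB_distribution() helper; a version consistent with the expression above, assuming units where m = k = 1 so that β = 1/T, might be:

```python
import numpy as np

def MB_distribution(v, T=1, dv=0.01):
    # P(v) ∝ 4*pi*v^2 * exp(-v^2/(2T)), with m = 1 and beta = 1/T (assumed units).
    P = 4*np.pi*v**2*np.exp(-v**2/(2*T))
    return P/np.sum(P*dv)   # normalize numerically so that sum(P*dv) = 1
```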
fig, ax = plt.subplots()
ax.add_patch(circle1)
ax.add_patch(circle2)
ax.add_patch(circle3)
ax.add_patch(circle4)
plt.axis('equal')
plt.axis('off')
plt.savefig('fig_ch3_more_area.eps')
plt.show()
print('The gray band covers more area than the black band, ')
print('even though they have the same thickness.')
The gray band covers more area than the black band,
even though they have the same thickness.
Figure 3.7
dv = 0.01
v = np.arange(0,5,dv)
plt.plot(v,MB_distribution(v,T=1,dv=dv),color='k',linestyle='solid')
plt.plot(v,MB_distribution(v,T=2,dv=dv),color='k',linestyle='dotted')
plt.plot(v,MB_distribution(v,T=4,dv=dv),color='k',linestyle='dashed')
legend_txt = ('Cold (or Heavy)','Warm (or Medium)','Hot (or Light)')
plt.legend(legend_txt,framealpha=1)
plt.xlabel('v (a.u.)')
plt.ylabel('P(v)')
plt.yticks((0,0.5,1))
plt.title('Maxwell-Boltzmann Distribution')
plt.savefig('fig_ch3_maxwell_boltzmann_distrib.eps')
plt.show()
Figure 3.8
† There are many excellent tutorials and examples of such simulations of
particle dynamics. For a reference, see:
• “Building Collision Simulations” by Reducible:
www.youtube.com/watch?v=eED4bSkYCB8
• A collision simulation can be tweaked to model an epidemic (as done by
3B1B):
www.youtube.com/watch?v=gxAaO2rsdIs
• Python code example: “The Maxwell–Boltzmann distribution in two dimen-
sions” at scipython.com/blog
CHAPTER 4
Thermal Processes
situation where no heat energy is added or removed from the gas (i.e.,
by isolating the gas in a heat-insulating container).
The thermal processes of an ideal gas are often visualized
as curves on a PV -diagram (a graph whose axes are P and
V ). The following simulation from PhET is a great resource
for thinking about different thermal processes of an ideal gas:
https://phet.colorado.edu/en/simulation/gas-properties
import numpy as np
import matplotlib.pyplot as plt
dx = 5
x = np.arange(0,100+dx,dx)
y = x**2/100
plt.plot(x,y,'o-',color='k')
plt.bar(x,y,width=dx*0.85,color='gray')
# width = how wide each bar is.
plt.ylim([-dx, np.max(y)+dx])
plt.xlim([-dx, np.max(x)+dx])
plt.xlabel('x')
plt.ylabel('y')
plt.savefig('fig_ch4_numerical_integ.eps')
plt.show()
Figure 4.1
4.3 PV DIAGRAM
The pressure and volume of gas can change in many different ways.
One of the simplest ways is an isobaric process where the gas expands
while maintaining its pressure. For example, a piston may be allowed
to push out slowly against the constant atmospheric pressure. This is
analogous to a case where you lift a weight by “fighting against” the
constant gravitational force, mg, over distance h. The amount of work
you did would be mgh. The total amount of work performed by the gas
during an isobaric process, where the volume changes from Va to Vb , is:
Wisobaric = ∫[Va, Vb] P dV = P(Vb − Va).
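Even this simple case can be used to check a numerical integrator against the closed form; the values below are arbitrary:

```python
import numpy as np

# Numerically integrate P dV for an isobaric expansion and compare
# with the closed form P*(Vb - Va).
P, Va, Vb = 10.0, 10.0, 20.0
V = np.linspace(Va, Vb, 1001)
Pv = np.full_like(V, P)                               # constant pressure
W_numeric = np.sum(0.5*(Pv[1:]+Pv[:-1])*np.diff(V))   # trapezoid rule
W_analytic = P*(Vb - Va)
```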
the work, so that the average internal energy of the gas (i.e., its tem-
perature) is maintained. However, during an adiabatic expansion, such
infusion of energy is not allowed by definition. This would be analogous
to a case where you lift a weight without any caloric intake. Over the
long run, you are depleting your internal energy and will not be able to
do as much work. In other words, compared to the isothermal expansion
process over the same range of volume change, an adiabatic expansion
process will produce less work and end up at a lower temperature.
As a consequence, ideal gas that goes through an adiabatic process
between states a and b has an additional constraint, in addition to the
usual PV = NkT:
PV^γ = constant, or Pa Va^γ = Pb Vb^γ,
Let’s start with a very general statement that a thermal system can
be specified with its state variables, T, V , and P. If these variables are
related by an equation of state variables (for example, PV = NkT for
an ideal gas), the system is ultimately specified with only two variables.
Considering the internal energy U(T, V) as a multivariate function of T
and V , we can make the following general statement:
dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV.
The expression (∂U/∂T)_V is the partial derivative of U with respect to T
while keeping V constant. Likewise, (∂U/∂V)_T is the partial derivative of U
with respect to V under constant T.
Another general statement we can make is that
dU = dQ + dW = dQ − PdV.
The above statement means that the internal energy of a thermal system
would change from the heat transfer (dQ) or the mechanical work done
on the system (dW ). The latter is dW = Fdx = −PdV , as shown in
the chapter on the kinetic theory of gas. The negative sign of −PdV
indicates that if the gas is compressed (negative dV ), work is done on
the system by the external force, and hence dU should be positive.
How universal are those two statements? Do T and V completely specify
the internal energy of a physical system? Is there any other way (beyond
dQ and dW ) to change the internal energy of a physical system? There
could be other relevant state variables and other ways to affect the
internal energy. For example, the internal energy of some material may
be affected by the absorption or emission of a photon. Other material
may change internal energy when it is subject to electrical potential.
However, these are sufficiently rigorous statements within our context
of classical thermodynamics.
Now let’s focus our context even more narrowly by considering ideal gas.
According to the kinetic theory of gas, the internal energy of ideal gas
is the sum of an individual particle’s kinetic energy, which determines
T, so U of ideal gas would not depend on V. Therefore, (∂U/∂V)_T = 0.
Furthermore, because PV = NkT, PdV + VdP = NkdT, which will be
used shortly.
Now, using the first two expressions about dU, we have:
dQ − PdV = (∂U/∂T)_V dT + (∂U/∂V)_T dV.
Since (∂U/∂V)_T = 0 for ideal gas,
dQ = PdV + (∂U/∂T)_V dT.
Substituting PdV = NkdT − VdP,
dQ = (−VdP + NkdT) + (∂U/∂T)_V dT = −VdP + [Nk + (∂U/∂T)_V] dT.
It is convenient to define the specific heat at constant volume, CV = (dQ/dT)_V,
which is the amount of heat energy needed to raise the temperature of a
system at constant volume. Similarly, CP = (dQ/dT)_P is defined as the
specific heat at constant pressure.
Given the above relationships, we see that for an ideal gas,
CV = (∂U/∂T)_V and CP = CV + Nk. This latter relationship is called Mayer's
equation and indicates that CP > CV. When the thermal system is
allowed to expand (that is, V is not constant), the added heat energy
will not only go into the internal energy of the system but also will be
spent through its mechanical work of pushing against the environment.
Therefore, a system allowed to expand will need more heat energy to
increase its temperature, compared to a different system whose volume
does not change.
Now we have
dQ = CV dT + PdV,
and
dQ = CP dT − VdP.
For an adiabatic process, dQ = 0, so CV dT = −PdV and CP dT = VdP.
Dividing the second equation by the first gives γ = CP/CV = −(VdP)/(PdV),
or dP/P + γ dV/V = 0. Integrating,
ln P + ln V^γ = ln (PV^γ) = constant.
In other words, we have shown that PV^γ is constant for ideal gas during
an adiabatic process.
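This result can also be verified numerically by integrating dQ = 0, that is CV dT = −PdV, along an expansion and checking that PV^γ stays constant (a sketch in units where Nk = 1):

```python
import numpy as np

# Integrate CV dT = -P dV (adiabatic, dQ = 0) for a monatomic ideal gas.
Nk, gamma = 1.0, 5.0/3.0
V, T = 1.0, 1.0                  # initial state; P = NkT/V
PVg_start = (Nk*T/V)*V**gamma
dV = 1e-5
while V < 2.0:                   # expand from V = 1 to V = 2
    P = Nk*T/V
    T += -P*dV/(1.5*Nk)          # CV = (3/2)Nk
    V += dV
PVg_end = (Nk*T/V)*V**gamma
print(abs(PVg_end - PVg_start)/PVg_start)   # small relative drift
```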
Finally, since the total internal energy of ideal gas is (3/2)NkT according to
the kinetic theory of gas, CV = (∂U/∂T)_V = (3/2)Nk. Furthermore, according
to Mayer's equation, CP = CV + Nk = (5/2)Nk. Therefore, γ = 5/3, as
claimed above.
# Code Block 4.2
Va = 10 # (values assumed; the original definitions of Va, Pa, NkT are not shown)
Pa = 20
NkT = Pa*Va
Vb = 20
dV = 0.1
V = np.arange(Va,Vb,dV)
P_isobaric = np.zeros(len(V))+Pa
P_isotherm = NkT/V
P_adiabat = Pa*(Va/V)**(5/3)
plt.plot(V,P_isobaric,color='black',linestyle='solid')
plt.plot(V,P_isotherm,color='black',linestyle='dotted')
plt.plot(V,P_adiabat,color='black',linestyle='dashed')
plt.legend(('isobaric','isothermal','adiabatic'),framealpha=1.0)
plt.xlim((0,25))
plt.ylim((0,25))
plt.xlabel('V (m$^3$)')
plt.ylabel('P (pascal)')
plt.savefig('fig_ch4_thermal_processes.eps')
plt.show()
Figure 4.2
Figure 4.3 (hot and cold reservoirs)
dV = 0.1
gamma = 5/3
T_high = 200 # high T (value assumed; units where Nk = 1, so P = T/V)
Va = 10 # volume at a.
Vb = 20 # volume at b.
Vc = 40 # volume at c.
Vd = 20 # volume at d.
V_ab = np.arange(Va,Vb,dV)
V_bc = np.arange(Vb,Vc,dV)
V_cd = np.arange(Vc,Vd,-dV)
V_da = np.arange(Vd,Va,-dV)
Pa = T_high/Va
P_ab = T_high/V_ab # isothermal process
Pb = T_high/Vb
kb = T_high*Vb**(gamma-1) # constant along adiabat
P_bc = kb/V_bc**(gamma) # adiabatic process
Pc = kb/Vc**(gamma)
T_low = Vc*Pc # low T
P_cd = T_low/V_cd # isothermal process
Pd = T_low/Vd
kd = T_low*Vd**(gamma-1) # constant along adiabat
P_da = kd/V_da**(gamma) # adiabatic process
plt.plot(V_ab,P_ab,color='gray',linestyle='solid')
plt.plot(V_bc,P_bc,color='black',linestyle='dotted')
plt.plot(V_cd,P_cd,color='gray',linestyle='solid')
plt.plot(V_da,P_da,color='black',linestyle='dotted')
plt.legend(('a->b: isothermal','b->c: adiabatic',
'c->d: isothermal','d->a: adiabatic'),framealpha=1)
spacing = 1
plt.text(Va+spacing,Pa,'a')
plt.text(Vb+spacing,Pb,'b')
plt.text(Vc+spacing,Pc,'c')
plt.text(Vd+spacing,Pd,'d')
plt.text((Va+Vb)/2+spacing,Pa-6,'high T')
plt.text((Vc+Vd)/2-spacing,Pd-4,'low T')
plt.xlim((0,50))
plt.ylim((0,30))
plt.xlabel('V (m$^3$)')
plt.ylabel('P (pascal)')
plt.savefig('fig_ch4_carnot.eps')
plt.show()
Figure 4.4
For a Carnot cycle, the addition of heat energy occurs during the
isothermal expansion (a → b) only, and this thermal energy is equal
to the amount of mechanical work performed by the gas as the volume
expands from Va to Vb . These two states are at an equal temperature
Ta = Tb = Thigh .
Qin = ∫[Va, Vb] P dV = NkT_high ln(Vb/Va).
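This integral can be confirmed numerically with the trapezoid rule (Nk = 1 and T_high = 20 are assumed values):

```python
import numpy as np

# Check Q_in = NkT_high ln(Vb/Va) by integrating P dV along the isotherm.
Nk, T_high = 1.0, 20.0
Va, Vb = 10.0, 20.0
V = np.linspace(Va, Vb, 100001)
P = Nk*T_high/V                                       # isothermal: PV = NkT_high
Q_numeric = np.sum(0.5*(P[1:]+P[:-1])*np.diff(V))     # trapezoid rule
Q_theory = Nk*T_high*np.log(Vb/Va)
```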
eta_measure = W_total/Q_in
eta_theory = 1 - T_low/T_high
CHAPTER 5
Premise of Statistical Mechanics
Here is a short code sample for creating bar graphs depicting three
different distributions.
# Code Block 5.1
import numpy as np
import matplotlib.pyplot as plt
# Example distributions in percent (values assumed; not shown in the text).
dist1 = (100, 0, 0)
dist2 = (33, 34, 33)
dist3 = (0, 0, 100)
dists = (dist1, dist2, dist3)
titles = ('(a)','(b)','(c)')
xaxis = ('Low','Med','High')
fig, axes = plt.subplots(1,3,sharey=True,figsize=(8,3))
# Draw a vertical bar plot.
axes[0].bar(xaxis,dist1,color='black')
axes[1].bar(xaxis,dist2,color='black')
axes[2].bar(xaxis,dist3,color='black')
axes[0].set_ylabel('Percent')
for i in range(3):
    axes[i].set_title(titles[i])
    axes[i].set_ylim((0,100))
    axes[i].set_yticks((0,50,100))
    axes[i].set_xlim((-1,3))
plt.tight_layout()
plt.savefig('fig_ch5_distrib_vertical.eps')
plt.show()
xaxis = ('Low','Med','High')
fig, axes = plt.subplots(1,3,sharey=True,figsize=(8,3))
# Draw a horizontal bar plot.
for i in range(3):
    axes[i].barh(xaxis,dists[i],color='black')
    axes[i].set_title(titles[i])
    axes[i].set_xlim((0,100))
    axes[i].set_xticks((0,50,100))
    axes[i].set_ylim((-1,3))
    axes[i].set_xlabel('Percent')
axes[0].set_ylabel('Levels')
plt.tight_layout()
plt.savefig('fig_ch5_distrib_horizontal.eps')
plt.show()
Figure 5.1
where U is the total energy shared by the particles and < U > is the
average energy. N and U are conserved quantities.
When the particles are allowed to exchange their energies with each
other freely, they settle into a particular distribution called the Boltz-
mann distribution. That is, at equilibrium, the number of particles in
each energy level takes on a stable mathematical form, as shown here:
ni = α e^(−εi/kT),
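A quick numerical illustration of this form, assuming kT = 1 and fixing α so that the occupancies sum to the total particle number N:

```python
import numpy as np

N = 1000.0
e = np.arange(6)                 # energy levels 0..5 (kT = 1 assumed)
alpha = N/np.sum(np.exp(-e))     # normalization so that sum(n) = N
n = alpha*np.exp(-e)             # Boltzmann occupation numbers
print(n[1]/n[0])                 # ratio e^(-1) between adjacent levels
```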
5.4 VISUALIZATION
plt.ylim(-0.5,N+0.5)
plt.yticks(ticks=np.arange(N)+0.5,labels=np.arange(N)+1)
plt.xticks([])
#plt.ylabel('Energy Levels')
plt.axis('equal')
plt.title("Occupancy:\n%s"%n)
plt.box(on=False)
n = [2,0,0,0,0,1]
fig = plt.figure(figsize=(2,8))
sketch_occupancy (n)
plt.savefig('fig_ch5_occupancy_ex1.eps')
plt.show()
n = [1,1,0,0,1,0]
fig = plt.figure(figsize=(2,8))
sketch_occupancy (n)
plt.savefig('fig_ch5_occupancy_ex2.eps')
plt.show()
n = [0,1,2,0,0,0]
fig = plt.figure(figsize=(2,8))
sketch_occupancy (n)
plt.savefig('fig_ch5_occupancy_ex3.eps')
plt.show()
Figure 5.2
N = 10 # individuals
E = 5 # dividers, so there will be E+1 categories
# Individuals are represented by zeros.
# Dividers are represented by ones.
arrayN = np.zeros(N)
arrayE = np.ones(E)
# Initial line up of N individuals, followed by E dividers.
x = np.hstack((arrayN,arrayE)).astype('int')
Nexamples = 4
Example 1
[1 0 0 0 0 0 1 0 0 1 1 1 0 0 0]
n[0] = 0
n[1] = 5
n[2] = 2
n[3] = 0
n[4] = 0
n[5] = 3
Example 2
[0 0 1 0 1 0 0 0 0 0 0 1 1 0 1]
n[0] = 2
n[1] = 1
n[2] = 6
n[3] = 0
n[4] = 1
n[5] = 0
Example 3
[1 1 0 1 0 1 0 0 0 0 0 0 0 1 0]
n[0] = 0
n[1] = 0
n[2] = 1
n[3] = 1
n[4] = 7
n[5] = 1
Example 4
[0 1 0 1 1 0 0 1 0 0 1 0 0 0 0]
n[0] = 1
n[1] = 1
n[2] = 0
n[3] = 2
n[4] = 2
n[5] = 4
Now let’s develop code for enumerating all possibilities that satisfy our
constraints. We will implement two different methods.
The first computational method below is a brute-force algorithm. To
systematically go through all possibilities, we consider a (E + 1)-digit
number in base (N + 1). Each one of the (E + 1) digits can take on a
value between 0 and N, and it represents the number of particles in
each energy level. Therefore, collectively, these (E + 1) digits correspond
to the occupation numbers, (n0 , n1 , ..., nE−1 , nE ).
Let’s consider a simple example. Suppose there are 9 individual particles
(N = 9) that can go into one of three different energy levels: 0 , 1 , and
2 . The occupation number for each level will be written as a 3-digit
number. A number 216 would denote that there are 2 particles in 0 , 1
in 1 , and 6 in 2 . We could list all possible 3-digit numbers between 000
and 999 in base 10. Many in this listing will include impossible cases,
such as 217 which violates the condition that there are nine individual
particles, so these extraneous cases must be eliminated. Nevertheless,
we have a systematic way of considering all possibilities.
Here is another example. Assume there is only one particle with three
different levels. Then, a 3-digit binary number can be used to specify all
possibilities. The following three configurations would work: 100, 010,
and 001, but not others, such as 110, 101, or 111 which have more than
1 particle in total.
import math
import numpy as np
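The num2base() helper is called below but not defined here; an implementation consistent with the assertions (digits stored least-significant first, padded with zeros) could be:

```python
import numpy as np

def num2base(num, base=2, max_num_digits=None):
    # Convert a non-negative integer into its digits in the given base,
    # least-significant digit first, padded to max_num_digits if given.
    digits = []
    while num > 0:
        digits.append(num % base)
        num //= base
    if max_num_digits is not None:
        digits += [0]*(max_num_digits - len(digits))
    return np.array(digits)
```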
assert all(num2base(9,base=2)==np.array([1,0,0,1]))
assert all(num2base(9,base=10,max_num_digits=3)==np.array([9,0,0]))
[[0 1 2 0 0 0]
[0 2 0 1 0 0]
[1 0 1 1 0 0]
[1 1 0 0 1 0]
[2 0 0 0 0 1]]
The above brute force strategy of considering all possibilities and elimi-
nating ones that do not satisfy the constraints is intuitive and straight-
forward. However, one serious drawback is its inefficiency, where the
computational time increases exponentially. For example, if there are
five individual particles with three different levels, we must consider all
possible 3-digit numbers in base 6. Because each digit would take on
a value between 0 and 5 in base 6, the total number of numbers to
be considered is 6^3 (from 000 to 555), and all these numbers must be
checked to see if they satisfy the constraining conditions. If there are
ten particles and three different levels, we would consider 11^3 numbers,
etc. Similarly, if we consider five particles with six different levels, there
are 6^6 possibilities. If there are five particles with 12 different levels,
there are 6^12 possibilities. In other words, the number of possibilities to
consider and the computational time increase rapidly.
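The counts quoted above follow from a one-line formula (a helper not in the original listings):

```python
def search_space(N, E):
    # (E+1)-digit numbers in base (N+1): every digit can take a value 0..N.
    return (N + 1)**(E + 1)

print(search_space(5, 2))   # → 216, i.e., 6^3 for five particles, three levels
```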
The second computational routine performs the same job of listing all
permutations but uses a powerful technique called recursion. The main
idea is that we create a general function (called perm_recursive()
below) that can be called within itself, but when it is called “recursively,”
it considers a smaller range of possible permutations.
For example, suppose there are nine particles with three levels
(ε0, ε1, ε2). We can assign one particle to one of the three levels and then
consider possible permutations with eight particles in the next recur-
sive call of the same function perm_recursive(). Within this recursive
call for eight particles, the function will again assign one particle to a
particular energy level and make yet another recursive call with seven
particles, and the process continues. In addition to considering fewer
particles in the subsequent function calls, each recursive step considers
a smaller amount of energy because some energy was taken up at the
previous step. Once the function reaches a base case, where no particles
are left to assign to an energy level, the recursive call stops.
Compare the time of running the following and the previous code blocks.
You will notice that this recursive method is much faster while produc-
ing the same answer as the brute force method.
# Code Block 5.5
def perm_recursive(e,N,E):
    # e = available energy levels; N = particles left; E = energy left.
    assert E>=0
    assert N>=0
    assert all(e>=0)
    N = int(N)
    E = int(E)
    dim = len(e)
    if N==0:
        # Base case: no particles left; valid only if no energy remains.
        return np.zeros((1,dim)) if E==0 else np.zeros((0,dim))
    n = np.zeros((0,dim))
    for i in range(dim):
        if (E-e[i])>=0: # enough energy to drill down recursively.
            n_next = perm_recursive(e,N-1,E-e[i])
            if len(n_next)>0: # Solution(s) was found.
                n_next[:,i] = n_next[:,i]+1
                n = np.vstack((n,n_next)) # Keep adding solutions.
    if len(n)>0:
        n = np.unique(n,axis=0) # Drop duplicate occupation numbers.
    return n
N = 3
E = 5
e = np.arange(0,E+1,1)
n = perm_recursive(e,N,E)
print(np.array(n).astype(int))
[[0 1 2 0 0 0]
[0 2 0 1 0 0]
[1 0 1 1 0 0]
[1 1 0 0 1 0]
[2 0 0 0 0 1]]
N = 10
E = 5
e = np.arange(0,E+1,1)
n = perm_recursive(e,N,E)
n, counts = np.unique(n,axis=0,return_counts=True)
print('All possibilities:')
print(np.array(n).astype(int))
All possibilities:
[[5 5 0 0 0 0]
[6 3 1 0 0 0]
[7 1 2 0 0 0]
[7 2 0 1 0 0]
[8 0 1 1 0 0]
[8 1 0 0 1 0]
[9 0 0 0 0 1]]
Figure 5.3
We set the partial derivatives with respect to x and y equal to zero and
solve the resulting equations, including the constraint φ(x, y) = 0.
∂L/∂x = y + λ = 0
∂L/∂y = x + λ = 0
x + y − L/2 = 0
With three equations and three unknowns (x, y, λ), the system of equa-
tions is solvable, yielding a solution of x = y = L/4 and the maximum
area of L2 /16. That is, making a square enclosure gives the maximum
area.
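The same answer can be recovered numerically by substituting the constraint and scanning x (a sketch with L = 4):

```python
import numpy as np

L = 4.0
x = np.arange(0.01, L/2, 0.01)
y = L/2 - x                    # enforce the constraint x + y = L/2
A = x*y
x_best = x[np.argmax(A)]
print(x_best, A.max())         # best near x = L/4 = 1, area near L^2/16 = 1
```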
ln n! ≈ n ln n − n.
import numpy as np
import matplotlib.pyplot as plt
n_range = np.arange(5,20)
for n in n_range:
    v1 = np.log(np.math.factorial(n))
    v2 = n*np.log(n) - n
    print("For n =%3d: ln(n!) = %4.1f, nln(n)-n = %4.1f"%(n,v1,v2))
    perc_diff = (v1-v2)/v1*100
    plt.scatter(n,perc_diff,color='black')
plt.ylabel('Perc. Difference')
plt.xlabel('n')
plt.title("Goodness of Stirling's Approximation")
plt.savefig('fig_ch5_stirling_goodness.eps')
plt.show()
print("Percent Difference = (Actual-Approx)*100/Actual.")
print("The approximation gets better with large n.")
Figure 5.4
The above series is the sum of the areas of n rectangles whose widths
are one and heights are ln (i), where i takes on values from 1 to n. Thus,
the series can be considered an approximation of an integral of a natural
log function from 1 to n, as illustrated by Figure 5.5.
# Code Block 5.8
n_max = 22
n = np.arange(1,n_max,1)
n_smooth = np.arange(1,n_max,0.1)
plt.plot(n_smooth,np.log(n_smooth),color='black')
plt.legend(('ln x',),framealpha=1.0)
plt.bar(n,np.log(n),width=0.8,color='gray')
plt.xticks((0,10,20))
plt.savefig('fig_ch5_stirling_approx.eps')
plt.show()
Figure 5.5
Therefore,
ln n! ≈ ∫[1, n] ln x dx = (x ln x − x)|[1, n] = n ln n − n + 1.
ω(n0, n1, . . .) = N!/(n0! n1! . . .) = N! Π_{i=0,1,...} (1/ni!).
For more generality, let’s assume that each energy level may have degen-
eracy gi . For example, if there is only one way to have energy 4 , then
g4 = 1, but if there are two distinct states for 4 , g4 = 2. As an analogy,
we may picture a high-rise building with multiple floors, corresponding
to energy levels, and there are multiple rooms or sections on each floor,
corresponding to degeneracy. We could also picture different wealth or
income levels, with many different career options with the same income.
A particular orbital for an electron can accommodate an electron with
spin up or down, so there is a two-fold degeneracy.
Imagine that six people (n7 = 6) are sent to the seventh floor (ε7), where
10 empty office spaces (g7 = 10) are available. They are allowed to choose
an office at random. The number of all possible ways of distributing six
people into ten possible offices is gi^ni = 10^6. An unfortunate case will
be that of all six people cramming into a single office space. Therefore,
the above expression for ω(n0 , n1 , . . .) needs to account for the extra
possibilities arising from each level’s degeneracy. The final expression
is:
ω(n0, n1, . . .) = N! Π_{i=0,1,...} (gi^ni / ni!).
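This expression is easy to evaluate directly for small cases; a short helper (not from the original listings):

```python
import math

def multiplicity(n, g):
    # ω = N! * Π_i g_i^{n_i} / n_i!  for occupations n and degeneracies g.
    N = sum(n)
    w = math.factorial(N)
    for ni, gi in zip(n, g):
        w *= gi**ni/math.factorial(ni)
    return w
```

For example, with no degeneracy (all gi = 1), the occupation [2,0,0,0,0,1] of three particles has ω = 3!/2! = 3, and six particles in a single ten-fold degenerate level give the 10^6 of the office analogy above.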
L(n0 , n1 , . . . ; α, β) = ln ω + αφ − βψ,
have
ln ω = ln N! + Σ_i (ni ln gi − ni ln ni + ni),
so
∂(ln ω)/∂ni = ln gi − ln ni − 1 + 1 = ln(gi/ni).
Following the recipe of the Lagrange Multiplier technique, we obtain:
∂L/∂ni = ∂(ln ω)/∂ni + α ∂φ/∂ni − β ∂ψ/∂ni = ln(gi/ni) + α − βεi = 0.
The above expression leads to
ln(ni/gi) = α − βεi,
or
ni = gi e^α e^(−βεi),
Quantum mechanics gives a different way to think about the ideal gas.
Individual particles in an ideal gas can be described with a wavefunction
representing the probabilistic nature of their existence. Let’s start with
a one-dimensional example, where the wavefunction ψ(x) satisfies the
time-independent Schrödinger’s equation:
−(ħ^2/(2m)) d^2ψ(x)/dx^2 + V(x)ψ(x) = εψ(x).
Inside a box of length L, where V(x) = 0, this reduces to
−(ħ^2/(2m)) d^2ψ(x)/dx^2 = εψ(x),
with the boundary conditions
ψ(0) = ψ(L) = 0.
The solutions are ψn(x) ∝ sin(kn x), where kn = nπ/L for n = 1, 2, 3, . . . .
ε(n) = ħ^2 kn^2/(2m) = (h^2/(8mL^2)) n^2,
which reveals the discrete nature of the energy levels, depending on the
quantum number, n. Note ħ = h/(2π).
Let’s visualize the first few energy levels. For convenience, we will as-
h2
sume 8mL 2 = 1 in the code.
import numpy as np
import matplotlib.pyplot as plt
pi = 3.1415
L = 1
dx = 0.01
x = np.arange(0,L,dx)
for n in range(1,5):
    E = n**2
    kn = n*pi/L
    psi = np.sin(kn*x)
    psi = psi/np.sqrt(np.sum(psi**2)) # normalization
    # but the normalized wavefunction looks too short or tall,
    # so adjust the height of psi a bit (just for cosmetics).
    psi = psi*8
    plt.plot((0,L),(E,E),color='gray')
    plt.plot(x,psi+E,color='black',linewidth=3)
    plt.text(L+0.15,E,"n = %d"%n)
xbox_left = np.array([-0.1*L,0,0,-0.1*L])
ybox_left = np.array([0,0,E*1.1,E*1.1])
xbox_right = np.array([1.1*L,L,L,1.1*L])
ybox_right = np.array([0,0,E*1.1,E*1.1])
plt.fill(xbox_left,ybox_left,color='#CCCCCC')
plt.fill(xbox_right,ybox_right,color='#CCCCCC')
Revisiting Ideal Gas 99
plt.plot((0,0),(0,E*1.1),color='gray')
plt.plot((L,L),(0,E*1.1),color='gray')
plt.ylim((0,E*1.1))
plt.xlabel('Position')
plt.ylabel('Energy')
plt.axis('off')
plt.savefig('fig_ch6_wavefunc.eps')
plt.show()
Figure 6.1
6.2 DEGENERACY
ε(n) = (h²/(8mL²))(n_x² + n_y² + n_z²) = (h²/(8mV^{2/3})) n².
Then, we run into cases where the quantum state of a particle may be different even when the total energy is the same. For example, three different quantum states are possible for a single value of ε = 6h²/(8mV^{2/3}): with (n_x, n_y, n_z) = (2, 1, 1), (1, 2, 1), or (1, 1, 2). As an analogy, consider a building with many different rooms on different floors. A person may have a different amount
of gravitational potential energy on different floors, but there are also
different locations on the same floor with the same energy. As briefly in-
troduced in the previous chapter, we call different states with the same
energy “degenerate.” The following code counts the number of degener-
ate cases for different amounts of energy, revealing that the degeneracy
on average increases with the energy in the case of an ideal gas.
# Code Block 6.2
import numpy as np
import matplotlib.pyplot as plt

def break_into_sum_square(E,verbose=False):
    # Count the number of ways to write E = i**2 + j**2 + k**2 with
    # positive integers i, j, k (= quantum numbers nx, ny, nz).
    # The count g is the number of degenerate states with energy E.
    g = 0
    max_n = int(np.sqrt(E))+1
    for i in range(1,max_n):
        for j in range(1,max_n):
            for k in range(1,max_n):
                if i*i + j*j + k*k == E:
                    g = g + 1
                    if verbose:
                        print("%d = %d^2 + %d^2 + %d^2"%(E,i,j,k))
    return g

break_into_sum_square(3,verbose=True)
break_into_sum_square(99,verbose=True)
break_into_sum_square(101,verbose=True)
assert break_into_sum_square(3)==1
assert break_into_sum_square(9)==3

E_range = range(1000)
g_range = np.zeros(len(E_range))
for E in E_range:
    g_range[E] = break_into_sum_square(E)

plt.scatter(E_range,g_range,color='gray',s=3)
plt.xlabel('E')
plt.ylabel('Degeneracy')
plt.savefig('fig_ch6_degeneracy_scatter.eps')
plt.show()
Figure 6.2
Figure 6.3
# continuous approximation
g_cont = (3.1415/4)*np.sqrt(E_range)
plt.scatter(E_range,g_range,color='gray',s=4)
plt.plot(E_range,g_cont,color='black',linewidth=3)
plt.xlabel('E')
plt.ylabel('Degeneracy')
legend_txt = ('Directly Counted','Continuous Approximation')
plt.legend(legend_txt,framealpha=1)
plt.savefig('fig_ch6_degeneracy_scatter_cont_approx.eps')
plt.show()
# moving average
window = 10
newE = len(E_range)-window
g_avg = np.zeros(newE)
E_avg = np.zeros(newE)
for i in range(newE):
    E_avg[i] = np.sum(E_range[i:i+window])/window
    g_avg[i] = np.sum(g_range[i:i+window])/window
# Note: subtract window/2, because we are averaging around a value.
E_avg = E_avg - window/2
plt.scatter(E_avg,g_avg,color='gray',s=3)
plt.plot(E_range,g_cont,color='black',linewidth=3)
plt.xlabel('E')
plt.ylabel('Degeneracy')
legend_txt = ('Moving Average','Continuous Approximation')
plt.legend(legend_txt,framealpha=1)
plt.savefig('fig_ch6_degeneracy_scatter_cont_approx_moving_avg.eps')
plt.show()
Figure 6.4
P(ε) = N(ε)/N ∝ g(ε) e^{−ε/k_B T}.
In earlier chapters, we used k for the Boltzmann distribution, but we will adopt the notation k_B to recognize it as the famous Boltzmann constant and to distinguish it from the wavenumber k_n = nπ/L.
The proportionality constant in the last expression is defined as 1/Z, the reciprocal of the partition function. It can be determined by the normalization constraint ∫_0^∞ P(ε) dε = 1 (note ε ≥ 0). In other words,

Z = ∫_0^∞ g(ε) e^{−ε/k_B T} dε.
By putting the above expressions together with the expression for g(ε) for ideal gas, we have

Z = ∫_0^∞ (4√2 π V m^{3/2}/h³) ε^{1/2} e^{−ε/k_B T} dε.
To simplify the integral, we can make a change of variable, ε/(k_B T) → x, so that the same integral can be written as the following:

Z = α ∫_0^∞ x^{1/2} e^{−x} dx,

where α = 4√2 π (m k_B/h²)^{3/2} V T^{3/2} and x is a unitless integration variable. We note that everything other than V and T is just constants. The integral is finite and equal to √π/2, as will be shown in the following code blocks.
Therefore, we have the final expression for the partition function of ideal gas:

Z = (2πm k_B/h²)^{3/2} V T^{3/2}.
# Numerical calculation.
import numpy as np
import matplotlib.pyplot as plt
dx = 0.001
x = np.arange(0,10,dx)
y = x**(0.5)*np.exp(-x)
plt.plot(x,y,color='k')
plt.xlabel('x')
plt.savefig('fig_ch6_integral_demo.eps')
plt.show()
# Area under the curve
print("Integral = %8.7f"%(np.sum(y)*dx))
Integral = 0.8860699
Figure 6.5
# Code Block 6.6
import sympy as sym

x = sym.Symbol('x')
# sym.oo is symbolic constant for infinity.
sym.integrate(x**(0.5)*sym.exp(-x), (x,0,sym.oo))
0.886226925452758
Let's do one more clever change of variable, x → y², so that

∫_0^∞ x^{1/2} e^{−x} dx = ∫_0^∞ y e^{−y²} (2y dy) = 2 ∫_0^∞ y² e^{−y²} dy.

When the latter integral is entered into sympy as shown below, we obtain the analytical solution √π/2.
# Code Block 6.7
y = sym.Symbol('y')
sym.integrate(2*(y**2)*sym.exp(-y**2), (y,0,sym.oo))
√π/2
∂ln Z/∂T = (∂ln Z/∂Z)(∂Z/∂T)
= (1/Z) ∂Z/∂T
= (1/Z) (∂/∂T) ∫_0^∞ g(ε) e^{−ε/k_B T} dε
= (1/(k_B T²)) (1/Z) ∫_0^∞ ε g(ε) e^{−ε/k_B T} dε.
⟨U⟩ = k_B T² ∂ln Z/∂T.
⟨U⟩ = k_B T² ∂ln Z/∂T
= k_B T² (∂/∂T) ln[(2πm k_B/h²)^{3/2} V T^{3/2}]
= k_B T² (3/2) ∂ln T/∂T
= (3/2) k_B T.
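This well-known result, ⟨U⟩ = (3/2) k_B T per particle, can be verified with a finite difference of ln Z. The sketch below assumes units where h = m = k_B = V = 1:

```python
import numpy as np

# ln Z for the ideal gas, Z = (2 pi m k_B T / h^2)**(3/2) * V,
# in units where h = m = k_B = V = 1 (assumed).
def lnZ(T):
    return 1.5*np.log(2*np.pi*T)

T = 2.0
d = 1e-6
dlnZ_dT = (lnZ(T + d) - lnZ(T - d))/(2*d)  # central difference
U = T**2 * dlnZ_dT  # <U> = k_B T^2 d(ln Z)/dT
print(U, 1.5*T)  # both close to 3.0
```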
The following code block generates an example plot for visualizing en-
ergy levels with degeneracy. The vertical axis, as before, represents the
energy levels, and the number of boxes along the horizontal axis cor-
responds to the number of degenerate states at each energy level. As
we have discussed above, the higher energy levels tend to have more
degenerate states, as illustrated in Figure 6.6.
# Code Block 6.8
def sketch_occupancy_with_degeneracy(n):
    # Reconstructed definition: n[i][j] is the number of particles in
    # the j-th degenerate state of energy level i; each state is drawn
    # as a gray box with its occupants as randomly placed dots.
    w, h = 1, 1
    xbox = np.array([0,w,w,0])
    ybox = np.array([0,0,h,h])
    N = len(n)
    for i in range(N):
        for j in range(len(n[i])):
            plt.fill(xbox+j,ybox+i,color="#AAAAAA")
            x = (np.random.uniform(size=n[i][j])-0.5)*w*0.9+0.5+j
            y = (np.random.uniform(size=n[i][j])-0.5)*h*0.9+0.5+i
            plt.scatter(x,y,marker='.',color='k',s=50,zorder=2.5)
    plt.yticks([])
    plt.xticks([])
    plt.ylabel('Energy Levels')
    plt.axis('equal')
    plt.title("Occupancy:\n%s"%n)
    plt.box(on=False)

n = list([[5],[2,1,0],[0,0,1,2,0],[0,1,0,0,0,0,0]])
fig = plt.figure(figsize=(6,4))
sketch_occupancy_with_degeneracy(n)
plt.arrow(0, 0, 0, len(n)-0.1, head_width=0.05, head_length=0.1)
plt.savefig('fig_ch6_occupancy_with_degeneracy.eps')
plt.show()
Figure 6.6
CHAPTER 7
Revisiting Thermal Processes
7.1 REVIEW
dU = δQ − δW,
Figure 7.1 (a process from state i to state f)
ε_i = (h²/(8mV^{2/3}))(n_x² + n_y² + n_z²) (Energy Equation),

where n's are positive integers.
The degeneracy g(ε_i) of energy level ε_i can be approximated by the following expression:

g(ε_i) = (4√2 π V m^{3/2}/h³) ε_i^{1/2} (Degeneracy Equation).
The partition function for an ideal gas is

Z = (2πm k_B/h²)^{3/2} V T^{3/2} (Partition Function).
import numpy as np
import matplotlib.pyplot as plt
# Constants.
h = 1. # Planck constant
k_b = 1. # Boltzmann constant
m = 1. # mass of particle
pi = 3.141592
Note that n's (n_x, n_y, n_z) in the energy equation are positive integers that specify a quantum state of an individual particle, and n² = n_x² + n_y² + n_z². A good geometric picture is a three-dimensional sphere with radius n along with integer lattice points, as shown in Chapter 6. For a given T and V, each particle will occupy one of the states represented by the lattice points in the positive octant, or an eighth of a sphere. While implementing the helper functions above, we have also defined the degeneracy in terms of this quantum number n as g(n) = 4πn²/8. Depending on your choice of variable (either energy ε or quantum number n), there are two versions of the degeneracy function: g(ε), implemented as g_e(), and g(n), implemented as g_n().
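A minimal sketch of these degeneracy helpers, together with the e_n(), P_n(), and U_n() functions used later in this chapter, might look like the following (units h = k_B = m = 1 are assumed, and the exact signatures here are a guess):

```python
import numpy as np

h, k_b, m = 1.0, 1.0, 1.0  # assumed units

def g_n(n):
    # Degeneracy in terms of quantum number n: one octant of a
    # spherical shell, g(n) = 4*pi*n**2/8.
    return 4*np.pi*n**2/8

def e_n(V, n):
    # Energy of quantum state n in a box of volume V.
    return h**2*n**2/(8*m*V**(2/3))

def P_n(V, T, n):
    # Normalized Boltzmann occupancy probability density over n.
    dn = n[1] - n[0]
    p = g_n(n)*np.exp(-e_n(V, n)/(k_b*T))
    return p/(np.sum(p)*dn)

def U_n(V, T, n):
    # Average internal energy: numerical integral of e(n)*P(n).
    dn = n[1] - n[0]
    return np.sum(e_n(V, n)*P_n(V, T, n))*dn

n = np.arange(0, 15, 0.1)
print(U_n(1, 1, n))  # close to (3/2)*k_b*T = 1.5
```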
It is an interesting question whether more than one particle can occupy the same state (c.f., fermions versus bosons), but for our discussion, we will assume there are many more states than the number of particles even for large N, so it is highly unlikely that multiple particles will occupy the same state. This assumption is called the dilute gas limit.
Then, for a given T and V, we can calculate a profile of energy ε(n) with the function e_n(). We can also examine a profile of P(n), with the function P_n(), which determines the probability of occupancy according to the Boltzmann distribution. For each n, ε(n)P(n) represents an energy density, and its integral ∫_0^∞ ε(n)P(n) dn would be equal to the average internal energy. The function U_n(V,T,n) in the following code block calculates the total internal energy for given V, T, and n, by numerically summing the energy density: U = np.sum(P*e)*dn. Here n denotes the quantum states (e.g., n_x, n_y, n_z), which are different from the occupancy numbers N_i. Let's examine these for a particular value of T and V.
# Code Block 7.2
def plot_results(V_range,T_range,n,plot_filename=''):
    # Reconstructed wrapper; e_n, P_n, and U_n are the helper
    # functions described in the text and defined earlier in
    # the chapter.
    fig, axes = plt.subplots(1,5,figsize=(15,3))
    N = len(T_range)
    if N>1:
        color_range = np.linspace(0.8,0.2,N)
    else:
        color_range = np.zeros(N)
    for V, T, c in zip(V_range,T_range,color_range):
        col = str(c) # grayscale color
        e = e_n(V,n)
        U = U_n(V,T,n)
        axes[0].plot(T,V,'o',color=col)
        axes[1].plot(n,e,'-',color=col)
        axes[2].plot(n,P_n(V,T,n),'-',color=col)
        axes[3].plot(n,e*P_n(V,T,n),'-',color=col)
        axes[4].plot(T,U,'o',color=col)
    axes[0].set_xlabel('T')
    axes[0].set_ylabel('V')
    axes[0].set_xlim((0.0,2.5))
    axes[0].set_ylim((0.0,2.5))
    axes[1].set_xlabel('Quantum States (n)')
    axes[1].set_ylabel('$\epsilon(n)$')
    axes[2].set_xlabel('Quantum States (n)')
    axes[2].set_ylabel('$P(n)$')
    axes[3].set_xlabel('Quantum States (n)')
    axes[3].set_ylabel('$\epsilon(n) P(n)$')
    axes[4].plot(np.array([0,2]),3/2*np.array([0,2]),'k-')
    axes[4].set_xlabel('T')
    axes[4].set_ylabel('U')
    axes[4].set_xlim((0,2.1))
    plt.tight_layout()
    if len(plot_filename)>0:
        plt.savefig(plot_filename)
    plt.show()

n = np.arange(0,15,0.1)
T_range = [1]
V_range = [1]
plot_results(V_range,T_range,n,plot_filename='fig_ch7_singleTV.eps')
Figure 7.2
The above five plots require a close inspection. The point in the first plot specifies the state of an ideal gas on a V-vs.-T space. The next three plots show ε(n), P(n), and ε(n)P(n) as a function of quantum number n. ε(n) shows the energy levels. P(n) shows the Boltzmann distribution (i.e., higher energy levels are less likely to be populated), while considering the degeneracy and the size of the state space (i.e., lower quantum states are less likely to be populated because the number of states is small for low n), as we already observed from the Maxwell-Boltzmann distribution describing the speed of gas particles in Chapter 3. ε(n)P(n) shows the spectrum of energy density, whose integral is equal to the total internal energy, U. The point in the last plot specifies the state of ideal gas again, now on a U-vs.-T space. As expected, this point sits on a straight line of U = (3/2)Nk_B T.
Now let's examine the profiles of ε(n) and P(n) for different thermal processes that cover different ranges of T and V, which are distinguished by contrasts of points and curves in each plot.

The first process is an isochoric process where V is held constant, as shown in Figure 7.3. We note that the energy levels, visualized by an ε(n)-vs.-n graph, do not change. For higher T, the quantum states with higher n's are more likely to be occupied, as shown by the rightward-shifting curves in the middle. As a result, during an isochoric process of increasing T, the particles move from lower energy levels to higher levels, resulting in the overall increase of the internal energy. The work δW in the isochoric process is zero, so the heat δQ injected into the ideal gas is responsible for the change in U.
The second process, shown in Figure 7.4, is an adiabatic process where
no heat enters or exits (δQ = 0), so that the occupancy of each quantum
state does not change, while the energy levels themselves change. There-
fore, the profiles of P(n) are identical for various values of V and T along
the adiabat described by PV γ = constant or TV γ−1 = constant. The in-
crease in internal energy U at higher T comes from the elevation of the
energy levels.
Imagine a population of people scattered within a high-rise building.
The collective gravitational potential energy (like U) may be increased
by people moving to the upper levels. A different situation would be one where everyone stays on the same floor, but each floor level rises mysteriously.
The third process we examine is an isothermal process, where T is held constant. In this case, both ε(n) and P(n) change, but in such a manner that the integral of ε(n)P(n) stays constant. For example, as V increases, the energy level associated with each quantum state n decreases. However, the added heat energy promotes the gas particles from the lower quantum states to higher states. These two opposite trends are perfectly matched in the case of the isothermal process, so that the combined result is such that U remains constant, as shown in Figure 7.5.
# Code Block 7.3
# Comparing different thermal processes.
# Case 1: de = 0 (or dW = 0)
print('V = const (isochoric process)')
print('e(n)-vs-n are the same.')
T_range = np.arange(0.5,2.1,0.25)
V_range = np.ones(len(T_range))
plot_results(V_range,T_range,n,plot_filename='fig_ch7_dVzero.eps')
# Case 2: dn = 0 (or dQ = 0)
# Change V according to PV**gamma = const = TV**(gamma-1)
print('Q = const (adiabatic process)')
print('P(n)-vs-n are the same.')
T_range = np.arange(0.5,2.1,0.25)
gamma = 5./3.
V_range = 1/T_range**(1/(gamma-1))
plot_results(V_range,T_range,n,plot_filename='fig_ch7_dQzero.eps')
Figure 7.3
Figure 7.4
Figure 7.5
7.3 CHECK
# Code Block 7.4
T = 1
V = 1
print('Normalization Check: Following values should be close to 1.0')
dn = 0.0001
n = np.arange(0,100,dn)
print("%.16f (for V=%f, T=%f) "%(dn*np.sum(P_n(V,2*T,n)),V,2*T))
print("%.16f (for V=%f, T=%f) "%(dn*np.sum(P_n(V,T,n)),V,T))
print("%.16f (for V=%f, T=%f) "%(dn*np.sum(P_n(2*V,T,n)),2*V,T))
print("%.16f (for V=%f, T=%f) "%(dn*np.sum(P_n(2*V,2*T,n)),2*V,2*T))
print('')
print('Total Energy Check: U = (3/2)*NkT')
print("U = %f (for V=%f, T=%f)"%(U_n(V,T,n),V,T))
print("U = %f (for V=%f, T=%f)"%(U_n(2*V,T,n),2*V,T))
print("U = %f (for V=%f, T=%f)"%(U_n(V,2*T,n),V,2*T))
print("U = %f (for V=%f, T=%f)"%(U_n(2*V,2*T,n),2*V,2*T))
CHAPTER 8
Entropy, Temperature, Energy, and Other Potentials
8.1 ENTROPY
S = kB ln ω.
words, the system tends toward a state with more possible configura-
tions because a high-entropy state is more probable.
Let’s revisit our old example of splitting up $5 among three individu-
als. We listed all possible permutations and showed that it is least likely
for one person to have all the money because there is only one way to
arrange such a situation. However, there are more ways to broadly dis-
tribute the five $1 bills among all people. We observed similar behavior
when we simulated elastic collisions of gas molecules. Even if we started
the simulation with one particle having all the energy (which is an un-
likely situation), the total kinetic energy eventually gets shared among
all particles (not uniformly but in an exponential form) because such
a distribution is more likely. In an earlier chapter, we proved that the
Boltzmann distribution maximizes the number of microstates and hence
a thermal system will take on this state at equilibrium. Note that the
system will continue to fluctuate dynamically about the final distribu-
tion, as the constituent particles will continue to exchange energy via
existing interaction mechanisms.
Some people loosely describe entropy as a measure of “disorder,” which
is a reasonable but limited analogy. We might consider a system highly
ordered when there are few ways of arranging its constituents. For ex-
ample, a collection of coins is highly ordered and has low entropy if they
are laid flat with all their heads facing upward. There is only one way
to arrange the coins heads up. If half of the coins are facing up and the
other half are facing down, as long as we do not care which particular
coins are facing up, there are many more possible configurations, and
the coin system is considered to have high entropy. The collection of
the coins would look more disordered. A similar analogy can be applied
to a collection of books in a library. The system has low entropy when
the books are neatly ordered according to their assigned call numbers.
There are many more ways of putting books on the shelves of a library
if we disregard the call number system. Unless there is an active process
or an agent to organize the library, the system will tend toward a high
entropy state. Nevertheless, simply calling the entropy “disorder” does
not fully capture the ideas of microstates and probability.
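The coin example can be made quantitative. With N coins and k heads, ω = C(N, k), and S/k_B = ln ω is smallest when all coins are heads-up and largest for the half-and-half arrangement:

```python
from math import comb, log

N = 100  # number of coins
for k in (0, 25, 50):  # number of heads
    omega = comb(N, k)  # number of microstates with k heads
    # Entropy in units of k_B: S/k_B = ln(omega).
    print(k, log(omega))
```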
where U1 + U2 = U.
At thermal equilibrium, these two systems will have the same temper-
ature, T1 = T2 = T, as a consequence of the zeroth law of thermo-
dynamics. At thermal equilibrium, the most probable state, which is
the state with the highest entropy or maximum ω(U), would have been
reached, according to the second law of thermodynamics. The differen-
tial of ω(U) would be zero for infinitesimal energy exchange between
the two systems. Mathematically,

dω = (∂ω₁/∂U₁) ω₂ dU₁ + ω₁ (∂ω₂/∂U₂) dU₂ = 0.

Since dU₁ = −dU₂ (energy leaving one system enters the other), dividing through by ω₁ω₂ gives

∂ln ω₁/∂U₁ = ∂ln ω₂/∂U₂.

With S = k_B ln ω, this is equivalent to

∂S₁/∂U₁ = ∂S₂/∂U₂.
In other words, at thermal equilibrium, these two systems have the same temperature and the same value of ∂S/∂U. Hence, T is intimately related to the ratio of changes in entropy and internal energy, while other state variables like V are fixed. Let's make the following definition for T:

T = (∂U/∂S)_V.
dU = (∂U/∂S)_V dS + (∂U/∂V)_S dV = T dS − P dV,

δQ_reversible = T dS.
and

ω(n₀, n₁, . . .) = N! ∏_{i=0,1,...} g_i^{n_i}/n_i! (Number of microstates).
Now let’s apply Boltzmann’s definition of entropy and simplify the re-
sulting expression with Stirling’s approximation and some algebra.
S = k_B ln ω
= k_B ln N! + k_B Σ_i (ln g_i^{n_i} − ln n_i!)
= k_B ln N! + k_B Σ_i (n_i ln g_i − n_i ln n_i + n_i)
= k_B ln N! + k_B Σ_i (−n_i)(ln(n_i/g_i) − 1)
= k_B ln N! + k_B Σ_i (−n_i)(ln(e^α e^{−βε_i}) − 1)
= k_B ln N! + k_B Σ_i (−n_i)(α − βε_i − 1)
= k_B ln N! − k_B (α − 1) Σ_i n_i + k_B β Σ_i n_i ε_i
= k_B (ln N! − (α − 1)N) + k_B βU
= S_o + k_B βU,
where we lumped the three terms that do not depend on the energy as
So .
When we take a partial derivative of entropy with respect to the energy, we arrive at a conclusion that k_B β = ∂S/∂U. Furthermore, as we have defined T as a ratio of the change in internal energy and entropy, while other state variables, such as volume V and number of particles N, are held constant (T = ∂U/∂S), we arrive at:

β = 1/(k_B T).
V_d(r) = π^{d/2} r^d / Γ(d/2 + 1).
import sympy as sym

# The symbol z and the symbolic Gamma function are assumed from the
# elided earlier part of this code block.
z = sym.Symbol('z')
gamma = sym.gamma(z)

dim_range = range(1,10,1)
print("")
print("Evaluate the volume of a unit sphere in various dimensions.")
for d in dim_range:
    gamma_value = gamma.subs(z,d/2+1)
    vol_value = (3.1415**(d/2))/gamma_value
    print("d = %d, spherical volume = %4.3f"%(d,vol_value))
import numpy as np
import matplotlib.pyplot as plt

N_trials = 10
N_total = 50000
volumes = np.zeros((N_trials,len(dim_range)))
for d in dim_range:
    for i in range(N_trials): # Multiple trials
        coord = np.random.random(size=(N_total,d))
        coord = coord*2 - 1 # Numbers are between -1 and 1.
        dist = np.sqrt(np.sum(coord**2,axis=1))
        ratio = np.sum(dist<1) / N_total
        volumes[i,d-1] = ratio*(2**d)
plt.boxplot(volumes)
plt.xlabel('Dimension')
dim_range_smooth = np.arange(0.5,9.5,0.1)
vol_value_smooth = np.zeros(len(dim_range_smooth))
# Evaluate the exact volume formula on a finer grid of dimensions
# and overlay it on the boxplot.
for i, d in enumerate(dim_range_smooth):
    vol_value_smooth[i] = 3.1415**(d/2)/float(gamma.subs(z,d/2+1))
plt.plot(dim_range_smooth,vol_value_smooth,color='k')
plt.show()
Figure 8.1
In this section, we did not derive the volume formula for a hypersphere
and only demonstrated its plausibility. A rigorous proof and more dis-
cussions can be found in other books. In the following section, we will
consider the volume of a hypersphere as a way of calculating the num-
ber of microstates. We will use d = 3N and r = n. Since N is a large
number, it does not particularly matter whether 3N is even or odd, so Γ(3N/2 + 1) will be written as (3N/2)!.
Ω(n) = (1/2)^{3N} V_{3N}(n) = (1/2)^{3N} π^{3N/2} n^{3N} / (3N/2)! .

ω(U) = (1/N!) (dΩ(n)/dn) (dn/dU)
= (1/N!) (1/2)^{3N+1} (3N π^{3N/2}/(3N/2)!) (8mV^{2/3}/h²)^{3N/2} U^{3N/2−1}.
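As a sanity check of this counting expression, one can recover the temperature from 1/T = ∂S/∂U with S = k_B ln ω(U). The sketch below assumes units h = m = k_B = 1 and uses log-gamma to avoid enormous factorials:

```python
from math import lgamma, log, pi

def ln_omega(U, V, N):
    # ln of omega(U) = (1/N!) (1/2)**(3N+1) (3N pi**(3N/2)/(3N/2)!)
    #                  * (8 m V**(2/3)/h**2)**(3N/2) * U**(3N/2 - 1),
    # with h = m = 1 (assumed units).
    return (-lgamma(N + 1) - (3*N + 1)*log(2) + log(3*N)
            + (1.5*N)*log(pi) - lgamma(1.5*N + 1)
            + (1.5*N)*log(8*V**(2/3)) + (1.5*N - 1)*log(U))

# 1/T = dS/dU with k_B = 1; for U = (3/2) N T with T = 1, expect T ~ 1.
N, V, U = 100, 1.0, 150.0
d = 1e-4
invT = (ln_omega(U + d, V, N) - ln_omega(U - d, V, N))/(2*d)
print(1/invT)  # close to 1, consistent with U = (3/2) N k_B T
```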
Let’s take a different approach to find the entropy of an ideal gas again.
One of the main ideas from Chapter 5 was:
ω(n₀, n₁, · · · ) = ∏_{i=0,1,...} g_i^{n_i}/n_i! ,
S = k_B ln ω
= k_B Σ_i (n_i ln g_i − ln n_i!)
= k_B (Σ_i n_i ln g_i − Σ_i n_i ln n_i + Σ_i n_i).
Let's work with the second term in the above expression by applying the Boltzmann distribution result, n_i/N = g_i e^{−ε_i/k_B T}/Z.
Σ_i n_i ln n_i = Σ_i n_i ln(N g_i e^{−ε_i/k_B T}/Z)
= Σ_i n_i (ln N + ln g_i + ln e^{−ε_i/k_B T} − ln Z)
= Σ_i n_i (ln N + ln g_i − ε_i/(k_B T) − ln Z)
= N ln N + Σ_i n_i ln g_i − U/(k_B T) − N ln Z,
When we substitute the last expression back into the second term of the entropy, we obtain the following result:

S = k_B [Σ_i n_i ln g_i − (N ln N + Σ_i n_i ln g_i − U/(k_B T) − N ln Z) + N]
= k_B N [ln(Z/N) + U/(N k_B T) + 1].
This expression of S, thus far, is quite general and would apply to any
thermal system at equilibrium. For an ideal gas specifically, we know
U = 32 NkB T and have found an expression for the partition function of
an ideal gas in Chapter 6:
Z = (4πmV^{2/3} U/(3Nh²))^{3/2}.
In the following code block, we used the exact values of the fundamental constants to calculate the entropy of argon (Ar) gas while varying the temperature.
# Code Block 8.3
import numpy as np
import matplotlib.pyplot as plt
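A minimal sketch of the calculation this block performs (a Sackur-Tetrode-style evaluation of S for one mole of argon; the molar volume used here is an assumed value):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fundamental constants (SI units).
k_B = 1.380649e-23    # Boltzmann constant (J/K)
h = 6.62607015e-34    # Planck constant (J s)
N_A = 6.02214076e23   # Avogadro number
m = 39.948e-3/N_A     # mass of one argon atom (kg)

N = N_A                      # one mole of argon (assumed)
V = 0.0224                   # molar volume near standard conditions (m^3, assumed)
T = np.arange(1, 500, 1.0)   # temperature range (K)

# S = N k_B [ln(Z/N) + U/(N k_B T) + 1], with U = (3/2) N k_B T and
# Z = (2 pi m k_B T/h^2)**(3/2) V from Chapter 6.
Z = (2*np.pi*m*k_B*T/h**2)**1.5 * V
S = N*k_B*(np.log(Z/N) + 1.5 + 1)

plt.plot(T, S, color='k')
plt.xlabel('T (K)')
plt.ylabel('S (J/K)')
plt.show()
```

At T near 300 K this gives roughly 150 J/K per mole, the right order of magnitude for argon's standard entropy.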
Figure 8.2
In the above S-vs.-T plot, we can see that the entropy diverges to −∞
as T goes to zero, which seems consistent with the above expression
for S, where S ∝ ln U ∝ ln T. However, this contradicts the third law
of thermodynamics, which states that the entropy must approach a
constant value at an absolute zero temperature.
This contradiction arises because our treatment of ideal gas has relied
on a continuous approximation. Each discrete state of a gas particle
was conceptualized as an integer lattice point within a phase space of
quantum numbers. However, the total number of these quantum states
was approximated by the continuous volume of a sphere in the phase
space. As T approaches zero, the gas particles occupy the lowest energy
state. The number of states available to the ideal gas decreases with
decreasing T, but this number does not become zero. Imagine a sphere
and the integer lattice points it encloses. As the volume of the sphere decreases, the number of enclosed lattice points decreases. However, even in the limit of zero volume, the number of enclosed lattice points is still one, since an infinitesimal sphere would still include the point at the origin. Therefore, the continuous volume cannot approximate the discrete number of lattice points well in this extreme limit.
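This lattice-point argument is easy to verify directly: count integer points inside a sphere of radius r and compare with the continuous volume (4/3)πr³:

```python
import numpy as np

# Count integer lattice points inside a sphere of radius r and compare
# with the continuous volume (4/3) pi r**3.
for r in (0.5, 1.0, 2.0, 5.0):
    m = int(np.ceil(r))
    g = np.arange(-m, m + 1)
    X, Y, Z = np.meshgrid(g, g, g)
    count = np.sum(X**2 + Y**2 + Z**2 <= r**2)
    vol = 4/3*np.pi*r**3
    print(r, count, vol)
# Even as r -> 0, the count stays at 1 (the origin), while vol -> 0.
```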
Thus far in this book, we have worked extensively with internal energy
U as an essential metric of a thermodynamic system. However, other
related metrics become more useful under different conditions.
As a motivation, let’s consider a simplistic example of measuring the
value of a hypothetical company that sells a single product. The rev-
enue of the company is calculated by Np Vp , where Np is the number of
products sold and Vp is the price of the product. As a very crude model,
the total value of the company may be given by Ucompany = Np Vp + Uo ,
where Uo accounts for other quantities, such as its operating expenses
and real estate values. The CEO of the company may be interested in
the change in the company’s value, ∆Ucompany . If the company sells more
products, the growth of the company can be calculated by (∆Np )Vp . On
the other hand, if the same number of products are sold at a higher unit
price, the quantity of interest is Np (∆Vp ).
This is the expression we saw before, plus a new term with the chemical
potential and the particle number, hinting that the relevant variables
involved in the change of internal energy are entropy, volume, and par-
ticle number. These are called natural variables and make the internal
energy a function of S, V , and N. The total differential of U(S, V, N) is
dU = (∂U/∂S)_{V,N} dS + (∂U/∂V)_{S,N} dV + (∂U/∂N)_{S,V} dN,
where the subscript symbols next to the right parenthesis indicate the
state variables that are held constant for the partial derivatives.
As we compare these two expressions, we can make the following iden-
tifications, which are not new.
T = (∂U/∂S)_{V,N} (definition of temperature)

P = −(∂U/∂V)_{S,N} (expression related to mechanical work)

μ = (∂U/∂N)_{S,V} (definition of chemical potential)
dH = dU + (PdV + VdP)
= (TdS − PdV + µdN) + (PdV + VdP)
= TdS + VdP + µdN
T = (∂H/∂S)_{P,N}

V = (∂H/∂P)_{S,N}

μ = (∂H/∂N)_{S,P}
dF = (∂F/∂T)_{V,N} dT + (∂F/∂V)_{T,N} dV + (∂F/∂N)_{T,V} dN.

S = −(∂F/∂T)_{V,N}

P = −(∂F/∂V)_{T,N}

μ = (∂F/∂N)_{T,V}
dG = (∂G/∂T)_{P,N} dT + (∂G/∂P)_{T,N} dP + (∂G/∂N)_{T,P} dN.

S = −(∂G/∂T)_{P,N}

V = (∂G/∂P)_{T,N}

μ = (∂G/∂N)_{T,P}
# Code Block 8.4
# Define symbols
import sympy as sym
k, T, N, V, h, m, pi = sym.symbols('k_{B} T N V h m \pi')
Let’s take one more step and work with the second-order partial deriva-
tives of the above expressions. We can obtain a few other useful equal-
ities known as Maxwell’s relations, whose derivations are based on the
fact that a mixed second-order partial derivative, successive differenti-
ation of a function with respect to two independent variables, remains
identical regardless of the order of the differentiation. For example, con-
sider a second-order partial differentiation of internal energy, first with
respect to entropy and then with respect to volume:
∂/∂V [(∂U/∂S)_{V,N}]_{S,N} = (∂T/∂V)_{S,N},

where we used (∂U/∂S)_{V,N} = T.

Now let's make the order of differentiation reversed so that

∂/∂S [(∂U/∂V)_{S,N}]_{V,N} = −(∂P/∂S)_{V,N},

where we used (∂U/∂V)_{S,N} = −P.

Since these two second-order derivatives should be identical, we have

(∂T/∂V)_{S,N} = −(∂P/∂S)_{V,N}.
Let's check Maxwell's relations by calculating (∂S/∂V)_{T,N} and (∂P/∂T)_{V,N} for ideal gas separately.
# Code Block 8.5

dS/dV for fixed T and N
N k_B/V

dP/dT for fixed V and N
N k_B/V

Note that these two expressions are equal.
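The same Maxwell relation can also be verified numerically, here with central finite differences in assumed units k_B = h = m = 1 and N = 1:

```python
import numpy as np

k_B = h = m = N = 1.0  # assumed units

def S(V, T):
    # Ideal-gas entropy S = N k_B [ln(Z/N) + 5/2], with
    # Z = (2 pi m k_B T/h**2)**1.5 * V from this chapter.
    Z = (2*np.pi*m*k_B*T/h**2)**1.5 * V
    return N*k_B*(np.log(Z/N) + 2.5)

def P(V, T):
    # Ideal gas law.
    return N*k_B*T/V

V0, T0, d = 1.0, 1.0, 1e-6
dSdV = (S(V0 + d, T0) - S(V0 - d, T0))/(2*d)  # (dS/dV) at fixed T, N
dPdT = (P(V0, T0 + d) - P(V0, T0 - d))/(2*d)  # (dP/dT) at fixed V, N
print(dSdV, dPdT)  # both close to N*k_B/V0 = 1.0
```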
We have shown that (∂S/∂V)_{T,N} is identical to (∂P/∂T)_{V,N} as an illustration of one of Maxwell's relations. These relations imply that if one knows
the ratio of changes in one pair of variables, one can also find the ra-
tio of changes in the other pair. For example, the rate of change in
entropy with respect to volume at constant temperature and particle
number can be determined by the rate of change in pressure with re-
spect to temperature for fixed volume and gas particle number. The
latter quantity can be experimentally measurable using barometers and
thermometers, while the former quantity involving entropy may not be
directly measurable.
It is interesting to note that many of the thermodynamic relations and definitions are expressed in terms of a rate of change between two variables or a partial derivative. We often define a function as a mapping between an input value x and the corresponding point-wise output f(x). Our study of calculus shows an alternative way of dealing with a function. If we know the derivative df(x)/dx at all points and a single value f(x_o) at some reference point x_o, we can determine the point-wise value of the function by integration: f(x) = ∫_{x_o}^x (df(s)/ds) ds + f(x_o). That is, if we know f(x), we can of course calculate df/dx, but what is interesting is that if we know df/dx and f(x_o), we can also determine f(x).
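As a small numerical illustration of this idea, we can reconstruct f(x) = sin x from its derivative cos x and the single reference value f(0) = 0:

```python
import numpy as np

dx = 1e-3
x = np.arange(0, 2*np.pi, dx)
df = np.cos(x)   # known derivative of f
f0 = 0.0         # known reference value f(0)

# f(x) = integral of df from 0 to x, plus f(0) (trapezoid rule).
f = f0 + np.concatenate(([0.0], np.cumsum((df[1:] + df[:-1])/2)*dx))

err = np.max(np.abs(f - np.sin(x)))
print(err)  # tiny reconstruction error
```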
The derivative or the rate of change can be more interesting and useful
than the value of a function. For example, when we are hiking, what
we tend to notice and find useful is how steep or shallow the local
landscape is, rather than the exact height with reference to the sea level.
In kinematics, the instantaneous velocity v(t), a time rate of change in position, of a moving object gives more interesting information, such as its kinetic energy or momentum, than its exact position x(t). Also, knowing v(t) allows us to calculate x(t) by x(t) = ∫_0^t v(s) ds + x(0). The
change may also be more readily and directly measurable than the raw
values since the former does not require setting a reference value. For
example, in classical mechanics, it is convenient to work with the change
in gravitational potential energy of a falling object as mg∆x, but if you
would like to work with the exact value of gravitational potential energy
at a particular height, an additional definition about a reference point
(x = 0) should be introduced. Thus, the utility of a thermodynamic
potential, like gravitational or electrical potential, lies in its change
during a thermodynamic process between different states, rather than
its point-wise value at a particular state.
III
Examples
CHAPTER 9
Two-State System
N₂/N₁ = e^{−ε₂/k_B T} / e^{−ε₁/k_B T} = e^{−Δε/k_B T}.
The following code block illustrates this trend by plotting N₂/N₁ versus Δε/k_B T for two different values of T (low and high). Note N₂/N₁ will always be less than 1.
# Code Block 9.1
import numpy as np
import matplotlib.pyplot as plt

kT_hi = 100
kT_lo = 1
e = np.arange(0,5,0.1)
plt.plot(e,np.exp(-e/kT_hi),color='k',linestyle='solid')
plt.plot(e,np.exp(-e/kT_lo),color='k',linestyle='dotted')
plt.legend(('High T','Low T'),framealpha=1)
plt.ylabel('$N_2 / N_1$')
plt.xlabel('$\Delta \epsilon / k_B T$')
plt.ylim((0,1.1))
plt.yticks(ticks=(0,0.5,1))
plt.savefig('fig_ch9_ratio_vs_epsilon.eps')
plt.show()
Figure 9.1
# Code Block 9.2
def sketch_distrib_2states(n,de=1,kT=1,xmax=1,figsize=(4,5)):
    # The definition line and figure setup are reconstructed; de, kT,
    # and xmax parametrize the Boltzmann overlay in the book's figure
    # and are accepted here for compatibility with the later calls.
    fig = plt.figure(figsize=figsize)
    ax1 = fig.add_subplot(111)
    w, h = 1, 1
    xbox = np.array([0,w,w,0])
    ybox = np.array([0,0,h,h])
    colors = ('#CCCCCC','#AAAAAA')
    n = np.array(n)
    for i in range(2):
        ax1.fill(xbox,ybox+i,color=colors[i])
        x = (np.random.uniform(size=n[i])-0.5)*w*0.9+0.5
        y = (np.random.uniform(size=n[i])-0.5)*h*0.9+0.5+i
        ax1.scatter(x,y,marker='.',color='k',s=50,zorder=2.5)
        # Display the fraction to the left of each box.
        ax1.text(-0.35,i+0.2,'%3.2f'%(n[i]/np.sum(n)))
    ax1.set_ylim(0,2)
    ax1.set_yticks([])
    ax1.set_xticks([])
    ax1.set_aspect('equal')
    ax1.axis('off')

n = [40,10]
de = 0.5
sketch_distrib_2states(n,de=de,figsize=(4,5))
plt.savefig('fig_ch9_sketch_distrib_2states_demo.eps',
            bbox_inches='tight')
plt.show()
Figure 9.2
# Code Block 9.3
N_total = 50
kT = 1
de_range = np.array([0.1,0.5,1,2])
for i, de in enumerate(de_range):
    r = np.exp(-de/kT)
    N1 = np.round(N_total/(1+r))
    N2 = np.round(N_total-N1)
    n = np.array([N1,N2],dtype='int')
    sketch_distrib_2states(n,de=de,kT=kT,xmax=2,figsize=(6,8))
    str = '$\Delta \epsilon/kT$ = %2.1f, $\Delta \epsilon$ = %2.1f'
    plt.title(str%(de/kT,de))
    plt.savefig('fig_ch9_occupancy_fixed_T_%d.eps'%i,
                bbox_inches='tight')
    plt.show()
Figure 9.3
# Code Block 9.4
N_total = 50
de = 0.5
kT_range = np.array([0.25,0.5,1,5])
for i, kT in enumerate(kT_range):
    r = np.exp(-de/kT)
    N1 = np.round(N_total/(1+r))
    N2 = np.round(N_total-N1)
    n = np.array([N1,N2],dtype='int')
    sketch_distrib_2states(n,de=de,kT=kT,xmax=2,figsize=(6,8))
    str = '$\Delta \epsilon/kT$ = %2.1f, kT = %3.2f'
    plt.title(str%(de/kT,kT))
    plt.savefig('fig_ch9_occupancy_fixed_de_%d.eps'%i,
                bbox_inches='tight')
    plt.show()
Figure 9.4
Our analysis thus far has dealt with the occupancy of the energy levels
at thermal equilibrium, which is determined by the ratio of ∆ and kB T.
What would happen when the system is not quite at thermal equilib-
rium? The answer to the question is that the system will move toward
the state described by the Boltzmann distribution since this state has
the most microstates or the highest entropy, and hence it is the most
probable.
The next code block demonstrates a two-state system with a fixed energy difference ∆ε (de = 0.5 in the code) and constant k_B T. It is initially in a non-equilibrium state with many more particles in the lower energy level than expected from the Boltzmann distribution. This deviation from the Boltzmann distribution is represented by the fact that the tops of the bars do not coincide with the exponential curve. However, this non-equilibrium system moves closer to the steady state of the Boltzmann distribution as some particles migrate from the lower to the upper energy level. The extra energy that allows the promotion of these particles comes from the environment. Particles will randomly move back and forth between the energy levels by releasing or absorbing energy. Over time, the number of particles in each energy level will match the values expected from the Boltzmann distribution, and a steady state will have been reached.
# Code Block 9.5
# Sketch of transition toward equilibrium state.
N_total = 50
de = 0.5
kT = 1
r_range = np.array([0.1,0.2,0.4,0.6])
for i, r in enumerate(r_range):
    N1 = np.round(N_total/(1+r))
    N2 = np.round(N_total-N1)
    n = np.array([N1,N2],dtype='int')
    sketch_distrib_2states(n,de=de,kT=kT,figsize=(4,5))
    plt.savefig('fig_ch9_occupancy_dynamic_%d.eps'%i,
                bbox_inches='tight')
    plt.show()
Figure 9.5
the influx and net accumulation of Na+ ions since Cl− ions cannot cross
the divider.
Like charges (e.g., two positive charges) repel each other, and opposite charges (e.g., a positive and a negative) attract each other, according to Coulomb's well-known law of electromagnetism. Hence, the net increase of Na+ ions on one side would not continue indefinitely.
At some point, the repulsive electrical force between Na+ ions will be
balanced by the diffusive movement of Na+ ions from the side of higher
concentration. In other words, the voltage difference between the two
sides would eventually be too high for a Na+ ion to overcome despite
the concentration difference.
The voltage difference is the amount of electrical potential energy per
unit charge. Just as a mass at a certain height possesses gravitational
potential energy, an electrical charge on an electrical landscape possesses
electrical energy. An electrical landscape is created by a distribution of
charges in space. Just as a higher-mass object possesses more gravita-
tional energy than a lower-mass object at the same height, an object
with a higher electrical charge has higher electrical potential energy
than a lower-charge object at the same voltage difference.
It takes work (in the physics sense of applying force over distance)
to bring a positive charge toward a crowd of other positive charges
because the repulsive force has to be overcome. Therefore, a positive
charge brought to a region of other positive charges has gained electri-
cal potential energy. In other words, this region of positive charges has a
positive electrical potential compared to the origin of the single charge.
Unfortunately, we are dealing with two terms that are conceptually dif-
ferent but sound confusingly similar: Potential energy versus potential.
Potential is associated with different positions, and potential energy is
gained or lost by an object when it moves between places with different
potentials. Electrical potential difference and voltage difference refer to
the same concept and are measured in the unit called volts, which is
equivalent to joules per coulomb, or a unit of energy divided by a unit
of charge.
Let’s calculate this voltage difference in terms of the ionic concentrations
(or the number of ions) on the two sides of the tank. According to the
Boltzmann distribution, the ratio of the number of ions is:
N2/N1 = e^(−∆ε/k_B T).
The energy difference ∆ε for each ion is equal to the product of its electrical charge and the potential difference V across the membrane. Hence, ∆ε = −ZeV, where |Ze| is the magnitude of each ion's electric charge in coulombs. For Na+, Z = 1. Following this, the above expression can be simplified as
ln(N2/N1) = ZeV/(k_B T)

or

V = (k_B T/(Ze)) ln(N2/N1).
This is called the Nernst equation. It captures a delicate balance be-
tween two competing tendencies within this special tank: (1) a diffusive,
entropic movement of ions from a region of higher concentration to a
lower region, and (2) an energetic movement of positive ions from higher
to lower potential.
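As a quick numerical check of the Nernst equation, the short sketch below evaluates V = (k_B T/Ze) ln(N2/N1) in SI units. The nernst_voltage() helper, the body temperature, and the ten-to-one concentration ratio are our own illustrative choices, not values from the text.

```python
import numpy as np

# Physical constants (SI units)
kB = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

def nernst_voltage(N2_over_N1, T=310, Z=1):
    """Voltage difference from the Nernst equation, V = (kB*T)/(Z*e)*ln(N2/N1)."""
    return kB*T/(Z*e)*np.log(N2_over_N1)

# A 10-to-1 concentration ratio of Na+ (Z = 1) at body temperature (310 K).
V = nernst_voltage(10)
print('V = %.1f mV'%(V*1e3))  # prints V = 61.5 mV
```

A factor of k_B T/e ≈ 27 mV at body temperature sets the natural voltage scale of such ionic equilibria.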
The following code block simulates this situation with a two-state sys-
tem. We will start with most particles in the lower energy level and a
small energy difference ∆. The temperature is fixed to a constant value
(kT = 1 in the code). Let’s make a simple assumption that the energy
gap linearly increases with the migration of each particle from lower to
upper energy level, or ∆ is proportional to the number of particles in
the upper level:
∆ε = N2/C,

where 1/C is the proportionality constant (C is conceptually similar to an electrical capacitance, which is equal to Q/V).
At each point in time, ∆ε/k_B T is determined, and there is an expected
occupancy value, according to the Boltzmann distribution. We can make
another simple assumption that the rate of particle movement across
the energy levels is proportional to the difference between the expected
and actual number of ions at each level. There will be more move-
ments when the two-state system is very far from the equilibrium or the
discrepancy between the expected and actual numbers is large. There
will be fewer movements when the system is almost at equilibrium. In
the code, this idea is implemented by discrepancy = n1_actual -
n1_expected and n[1] = int(n[1] + discrepancy*efficiency),
kT = 1
# initial condition.
n = np.array([95,5],dtype='int')
N_total = np.sum(n)
for i in range(8):
    de = n[1]/C # energy difference
    sketch_distrib_2states(n,de=de,kT=kT,figsize=(4,5))
    plt.title('$\Delta \epsilon = %3.2f, kT = %2.1f$'%(de,kT))
    plt.savefig('fig_ch9_dynamic_2states_%d.eps'%i,
                bbox_inches='tight')
    plt.show()
Figure 9.6
The above plots in Figure 9.6 show how the two-state system moves dynamically toward an equilibrium state over time. Note that the target
who laboriously brings herself to the top of a mountain and then enjoys
a rapid glide down the slope.
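The relaxation toward equilibrium shown in Figure 9.6 can also be checked without plotting. The loop below is our own minimal re-implementation of the update rule described above (move a fraction of the discrepancy per step); the values of C and efficiency are illustrative choices, not the book's.

```python
import numpy as np

# Sketch of relaxation dynamics, assuming de = n2/C and a transfer of
# discrepancy*efficiency particles per time step.
kT = 1.0
C = 100.0          # assumed proportionality constant for de = n2/C
efficiency = 0.5   # assumed fraction of the discrepancy moved per step
n = np.array([95.0, 5.0])   # far from equilibrium initially
N_total = n.sum()

for step in range(200):
    de = n[1]/C                    # energy gap grows with upper-level occupancy
    r = np.exp(-de/kT)             # Boltzmann factor
    n1_expected = N_total/(1 + r)  # expected lower-level occupancy
    discrepancy = n[0] - n1_expected
    n[0] -= discrepancy*efficiency # move particles toward equilibrium
    n[1] += discrepancy*efficiency

print(n)  # settles at the self-consistent Boltzmann occupancy
```

Because ∆ε itself depends on the occupancy, the final state is a self-consistent solution rather than one fixed by an externally imposed gap.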
Figure 9.7: Sketches of a cell (left), with Na+ and K+ ion concentrations setting up a potential difference across the membrane, and of a p-n junction diode (right), with p-type and n-type regions separated by a depletion zone that sets up a potential difference.
9.4 DIODE
Specific Heat
dU(V, T) = (∂U/∂V)_T dV + (∂U/∂T)_V dT.
C_V = (dQ/dT)_V = (∂U/∂T)_V.
For specific heat CP at constant pressure, let’s consider enthalpy H(P, T):
dH(P, T) = (∂H/∂P)_T dP + (∂H/∂T)_P dT.
C_P = (dQ/dT)_P = (∂H/∂T)_P.
U = N k_B T² (∂ ln Z/∂T),
which applies to any thermal system.
de = sym.Symbol('\Delta \epsilon')
k = sym.Symbol('k_B')
T = sym.Symbol('T')
Z = 1+sym.exp(-de/(k*T))
u = k*T**2*sym.diff(sym.ln(Z),T)
c = sym.diff(u,T)
plt.plot(T_range,u_range,color='#000000',linestyle='dotted')
plt.plot(T_range,c_range,color='#AAAAAA',linestyle='solid')
plt.legend(('Internal Energy, U/N','Specific Heat, C/N'),
framealpha=1)
plt.xlabel('T')
plt.ylabel('U/N (Joules), C/N (Joules/Kelvin)')
plt.title('$\Delta \epsilon/k_B = 1$')
plt.savefig('fig_ch10_specific_heat_2states.eps')
plt.show()
U/N = ∆ε e^(−∆ε/(k_B T)) / (1 + e^(−∆ε/(k_B T)))
Figure 10.1
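The plotting calls above assume numeric arrays T_range, u_range, and c_range obtained from the symbolic results; the book prepares them in an omitted portion of the code block. One way to produce them, sketched here with sym.lambdify() and our own choice of temperature grid and symbol names:

```python
import numpy as np
import sympy as sym

# Two-state partition function and its derived quantities (per particle).
de = sym.Symbol('de')   # energy gap (Delta epsilon)
k = sym.Symbol('k_B')
T = sym.Symbol('T')
Z = 1 + sym.exp(-de/(k*T))
u = k*T**2*sym.diff(sym.ln(Z), T)  # internal energy per particle
c = sym.diff(u, T)                 # specific heat per particle

# Convert symbolic expressions to numpy functions, scaling de = k_B = 1.
u_fn = sym.lambdify(T, u.subs({de: 1, k: 1}), 'numpy')
c_fn = sym.lambdify(T, c.subs({de: 1, k: 1}), 'numpy')
T_range = np.arange(0.1, 3, 0.01)
u_range = u_fn(T_range)
c_range = c_fn(T_range)
```

At high T, u_fn approaches 0.5, consistent with the discussion of the two-state system later in this chapter.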
de = sym.Symbol('\Delta \epsilon')
k = sym.Symbol('k_B')
T = sym.Symbol('T')
Z = 1/(1-sym.exp(-de/(k*T)))
u = k*T**2*sym.diff(sym.ln(Z),T)
c = sym.diff(u,T)
†
The lowest energy level, or the ground state, actually has a non-zero energy
value, which we will consider later in the section about a solid. This ground state
energy only introduces an overall additive constant for U and does not affect CV .
display(c)
plt.plot(T_range,u_range,color='#000000',linestyle='dotted')
plt.plot(T_range,c_range,color='#AAAAAA',linestyle='solid')
plt.legend(('Internal Energy, U','Specific Heat, C'),framealpha=1)
plt.xlabel('T')
plt.ylabel('U (Joules), C (Joules/Kelvin)')
plt.title('$\Delta \epsilon/k_B = 1$')
plt.savefig('fig_ch10_specific_heat_SHO.eps')
plt.show()
U = ∆ε e^(−∆ε/(k_B T)) / (1 − e^(−∆ε/(k_B T)))
Figure 10.2
For both thermal systems, most particles occupy the lowest energy level at low temperature, so the thermal system has a low internal energy overall. As the temperature increases, the higher energy levels are increasingly populated. For the two-state system, when k_B T >> ∆ε, the occupancy of the higher energy level becomes comparable to the occupancy of the lower level, so the average internal energy U/N is (1/2)(0 + ∆ε). (The occupancy of the higher level cannot exceed that of the lower level, according to the Boltzmann distribution.) Therefore, the plot of U/N (dotted line) for the two-state system starts from zero at low T and approaches 0.5 at higher temperature. (You can verify this by extending the range of temperature T_range in the corresponding code block. While doing so, use a larger temperature step to keep the computation time reasonable.) There are infinitely many energy levels in an SHO system, so its internal energy can continue to increase at higher temperatures.
Specific heat (solid line) is zero at low temperatures because if k_B T is not big enough to overcome the spacing ∆ε between the energy levels, only a few particles will be able to jump to the higher energy levels. For a two-state system, there is an upper limit to the average internal energy, so the specific heat approaches zero again at high temperature because further addition of heat cannot raise the energy above this limit. For an SHO system, as T increases, the specific heat approaches a non-zero value, indicating that the system will continue to increase its internal energy as more heat is added. At high T, the available thermal energy k_B T is much larger than ∆ε, and the discrete energy levels may be approximated by a continuous variable. Then, the SHO may be compared to an ideal gas whose constituents can take on continuous values of kinetic energy. In the case of an ideal gas, we already noted that its average kinetic energy along one spatial dimension is equal to (1/2) k_B T, and hence U = (3/2) N k_B T with three spatial dimensions. Similarly, a one-dimensional SHO will have an average kinetic energy of (1/2) k_B T, and it will also have an equal amount of average potential energy, (1/2) k_B T. This result is called the equipartition theorem. Therefore, at high T, the internal energy of an SHO, the sum of kinetic and potential energies, is (1/2) k_B T + (1/2) k_B T = k_B T, and the specific heat would be k_B. In our code block, we scaled the constants so that k_B (k in the
We can expand on the SHO model and understand a solid’s thermal be-
havior. Let us start with a case of two neighboring neutral atoms with
potential energy as a function of the distance between them. There is
an equilibrium separation where the potential energy is minimum with
a net zero force between them. When two atoms get separated more
than the equilibrium distance, a net attractive force brings the atoms
closer. This attraction, commonly known as the van der Waals force, is
due to the spontaneous formation of an electric dipole in an atom and
the resulting dipole-dipole interaction. Its potential energy varies as the inverse 6th power of the distance. Other bonding mechanisms in a solid include ionic bonds from electrostatic interactions and covalent bonds through the sharing of electrons. When the two atoms get too close, a repulsive force pushes them apart. This repulsion originates from the Pauli exclusion principle, which forbids the overlap of the electron clouds at close range. The potential energy from the repulsive interaction varies as the inverse 12th power of the distance. This potential energy model is called the Lennard-Jones 6-12 potential. The combination of the attractive
and repulsive interactions around an equilibrium point creates a po-
tential well and is similar to the restoring force of a spring. Therefore,
a three-dimensional solid composed of many atoms can be considered
as a collection of simple harmonic oscillators that vibrate around their
equilibrium positions.
Figure 10.3
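The shape of the Lennard-Jones 6-12 potential described above can be sketched directly; the well depth eps and length scale sigma below are illustrative values of our own choosing, in the common form V(r) = 4·eps·((sigma/r)¹² − (sigma/r)⁶).

```python
import numpy as np
import matplotlib.pyplot as plt

eps = 1.0    # well depth (illustrative)
sigma = 1.0  # distance where the potential crosses zero (illustrative)
r = np.arange(0.9, 3, 0.01)
# Repulsive 1/r^12 term plus attractive -1/r^6 term.
V = 4*eps*((sigma/r)**12 - (sigma/r)**6)

plt.plot(r, V, color='black')
plt.axhline(0, color='gray', linewidth=0.5)
plt.xlabel('r')
plt.ylabel('V(r)')
plt.title('Lennard-Jones 6-12 potential')
plt.show()
```

The minimum sits at r = 2^(1/6) sigma with depth −eps; small oscillations around it behave like a mass on a spring, which motivates the SHO picture of a solid.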
ε(n) = h f_E (n + 1/2),
where h is Planck’s constant and n is a non-negative integer. fE is the
characteristic oscillation frequency of an atom determined by the inter-
actions with the neighboring atoms. It is analogous to the natural fre-
quency of a classical oscillator determined by the ratio between spring
constant and mass. Hence, the energy levels are equally spaced with
∆ = h fE . A key assumption in the Einstein model is that all atoms
in the same solid oscillate with the same characteristic frequency fE .
Different materials with different atomic constituents would have their
unique frequencies since their atom-to-atom interactions would differ.
The partition function for a single atom in a solid can be obtained by
extending the approach for a one-dimensional SHO. Each atom in a
three-dimensional solid can vibrate in x, y, and z-directions. Therefore,
the complete partition function can be constructed as a product of three
partition functions, each corresponding to a single direction:
Z = Z_x × Z_y × Z_z
  = (e^(−h f_E/(2 k_B T)) + e^(−3 h f_E/(2 k_B T)) + ···) × (e^(−h f_E/(2 k_B T)) + ···) × (e^(−h f_E/(2 k_B T)) + ···)
  = [e^(−h f_E/(2 k_B T)) / (1 − e^(−h f_E/(k_B T)))] × [e^(−h f_E/(2 k_B T)) / (1 − e^(−h f_E/(k_B T)))] × [e^(−h f_E/(2 k_B T)) / (1 − e^(−h f_E/(k_B T)))]
  = [e^(−h f_E/(2 k_B T)) / (1 − e^(−h f_E/(k_B T)))]³,
where we have used the fact that each series within parentheses is a converging geometric series. The total internal energy of the solid with N atoms can be obtained with U = N k_B T² ∂(ln Z)/∂T. We will again use the sympy module in the following code block.
h = sym.Symbol('h')
f = sym.Symbol('f_E')
k = sym.Symbol('k_B')
T = sym.Symbol('T')
N = sym.Symbol('N')
Z = (sym.exp(-h*f/(2*k*T))/(1-sym.exp(-h*f/(k*T))))**3
u = N*k*T**2*sym.diff(sym.ln(Z),T)
c = sym.diff(u,T)
display(sym.limit(c,T,sym.oo))
plt.plot(T_range,u_range,color='#000000',linestyle='dotted')
plt.plot(T_range,c_range,color='#AAAAAA',linestyle='solid')
plt.legend(('Internal Energy, U','Specific Heat, C'),framealpha=1)
plt.xlabel('T')
plt.ylabel('U/N, C /N$k_B$')
plt.title('$h f_E/k_B = 1$')
plt.savefig('fig_ch10_specific_heat_einstein.eps')
plt.show()
3NkB
The above result of the symbolic calculation for specific heat can be
written as:
C_V,Einstein = 3 N k_B x² eˣ / (eˣ − 1)²,

where x = h f_E/(k_B T).
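The two temperature limits of this expression can be checked numerically; cv_einstein() is our own helper in scaled units (the symbols map to the formula above, with x = h f_E/(k_B T)).

```python
import numpy as np

def cv_einstein(x, N=1, kB=1):
    # C_V of the Einstein solid as a function of x = h*f_E/(k_B*T).
    # np.expm1(x) = e^x - 1, computed without loss of precision at small x.
    return 3*N*kB*x**2*np.exp(x)/np.expm1(x)**2

print(cv_einstein(0.001))  # high-T limit (x -> 0): approaches 3*N*kB
print(cv_einstein(20))     # low-T limit (large x): approaches 0
```

The x → 0 value, 3Nk_B, is exactly the Dulong-Petit result discussed next.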
The Einstein model of a solid is built upon the idea that each atom in a solid behaves like a simple harmonic oscillator, so the temperature dependence of its energy and specific heat can be understood in the same way as the SHO. One difference from the one-dimensional SHO discussion is that the expression for the energy level includes a constant term, (1/2) h f_E. This is the lowest energy, also known as the zero-point energy with n = 0, implying that even at absolute zero temperature, the system still has non-zero energy. Therefore, at T = 0, the Einstein solid has non-zero internal energy. Since each atom in a three-dimensional solid has three degrees of freedom, the total zero-point energy is 3 × (1/2) h f_E N. Given our scaling of the constants in the code block (h, k, and f, corresponding to h, k_B, and f_E respectively, are all set to 1), U/N in Figure 10.4 approaches 1.5 at low T. At high temperatures, the N atoms in this three-dimensional solid collectively have a total energy of 3N times the energy of a one-dimensional SHO, which increases linearly with T. Therefore, the specific heat of an Einstein solid approaches 3Nk_B, as shown in the above plot of C/Nk_B approaching 3. This trend is known as the Law of Dulong and Petit.
Figure 10.4
The Einstein solid is a rather crude model since it considers each atom
as an independent oscillator with the same characteristic frequency. Al-
though this simplistic assumption is good enough to capture the overall
trend of CV versus T, careful experimental measurements of CV on bulk
Al, Cu, and Ag show that there is a slight mismatch between the pre-
dictions of the Einstein model and the experimental data, especially
at a lower temperature. Another model of a solid proposed by Debye
provides a better fit to experimental data. The Debye model consid-
ers the collective motion of atoms with multiple frequencies, using the
notion of phonons (similar to photons, but for the collective vibration
responsible for the propagation of heat and sound in a solid). There is a
maximum cut-off frequency known as the Debye frequency fD . Debye’s
approach deals with the complex interactions among the coupled atoms
more rigorously and matches the experimental data more accurately.
A fuller discussion of the Debye model will be left for other solid-state
textbooks, and here we will simply present its result of specific heat:
C_V,Debye = 9 N k_B (k_B T/(h f_D))³ ∫₀^(h f_D/(k_B T)) x⁴ eˣ/(eˣ − 1)² dx,
import numpy as np
import matplotlib.pyplot as plt
T = np.arange(0.01,2,0.01)
# Debye solid.
f_D = 1
x = (h*f_D)/(k*T)
cD = 9*N*k*(x**(-3))*Debye_integal(T,f_D)
# Einstein solid
f_E = 1
x = (h*f_E)/(k*T)
cE = 3*N*k*(x**2)*np.exp(x)/(np.exp(x)-1)**2
plt.plot(T,cD,color='black',linestyle='solid')
plt.plot(T,cE,color='black',linestyle='dotted')
plt.legend(('Debye','Einstein'),framealpha=1)
plt.xlabel('T')
plt.ylabel('C / N$k_B$')
plt.title('Debye versus Einstein Models')
plt.savefig('fig_ch10_einstein_vs_debye.eps')
plt.show()
Figure 10.5
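The code above relies on a helper, Debye_integal(), defined in a portion of the book's code not shown here. A self-contained sketch of the same quantity, using our own midpoint-rule evaluation of the Debye integral in scaled units:

```python
import numpy as np

def cv_debye(T, f_D=1, h=1, k=1, N=1):
    # Debye specific heat: 9*N*k*(k*T/(h*f_D))^3 * integral from 0 to xD of
    # x^4 e^x/(e^x - 1)^2 dx, with xD = h*f_D/(k*T).
    xD = h*f_D/(k*T)
    M = 10000
    dx = xD/M
    x = (np.arange(M) + 0.5)*dx              # midpoint rule
    integrand = x**4*np.exp(x)/np.expm1(x)**2
    return 9*N*k*(k*T/(h*f_D))**3*np.sum(integrand)*dx

print(cv_debye(10.0))  # high-T: approaches the Dulong-Petit value 3*N*k
print(cv_debye(0.05))  # low-T: goes to zero (as T^3)
```

At low T the integral saturates at 4π⁴/15, which is the origin of the well-known T³ behavior of the Debye specific heat.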
Here, d_i² will always be equal to the positive quantity d². The last term, E(d_i d_j) = 0, because the movement at each time step is assumed to be independent, and hence the number of times the product d_i d_j is +d² will, on average, be matched by the number of times it is −d². Therefore, E(D²) = N d². The key trend is that the variance increases linearly with N.
We simulate one-dimensional random walks of single and multiple par-
ticles in the following code blocks. A total of N steps are stored in an
array each_walk, where each element is either −1 or 1. To cumulatively
add up the displacement after each step, np.cumsum() is used.
# Code Block 11.1
import numpy as np
import matplotlib.pyplot as plt
N = 1000
d = 1
each_walk = np.random.randint(2,size=N)*2 - 1
disp = np.cumsum(d*each_walk)
plt.plot(disp,color='#AAAAAA')
plt.ylabel('Displacement')
plt.xlabel('Time Step')
plt.title('Single Random Walk')
plt.savefig('fig_ch11_random_walk_single.eps')
plt.show()
Figure 11.1
N_particles = 20
N = 1000
d = 1
each_walk = np.random.randint(2,size=(N,N_particles))*2 - 1
disp = np.cumsum(d*each_walk,axis=0)
max_disp = np.abs(np.max(disp))
plt.plot(disp,color='#AAAAAA')
plt.ylim(np.array([-1,1])*1.5*max_disp)
plt.ylabel('Displacement')
plt.xlabel('Time Step')
plt.title('Multiple Random Walks')
plt.savefig('fig_ch11_random_walks.eps')
plt.show()
step = 25
plt.errorbar(range(0,N,step), np.mean(disp,axis=1)[::step],
np.std(disp,axis=1)[::step], color='black')
plt.ylim(np.array([-1,1])*1.5*max_disp)
plt.xlabel('Time Step')
plt.ylabel('Displacement')
plt.title('Mean and Standard deviation of Random Walks')
plt.savefig('fig_ch11_random_walks_mean_std.eps')
plt.show()
Figure 11.2
each_walk = np.random.randint(2,size=(N,N_particles))*2 - 1
disp = np.cumsum(d*each_walk,axis=0)
var_disp = np.var(disp,axis=1)
plt.plot(var_disp/(d**2),color='k')
Figure 11.3
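The linear growth of the variance can also be checked against the E(D²) = Nd² prediction without a plot; the walker count below is our own choice, large enough to make the sample variance stable.

```python
import numpy as np

# Numerical check that Var(D) after N steps is close to N*d^2.
rng = np.random.default_rng(0)
N = 1000     # number of steps
P = 20000    # number of independent walkers
d = 1
steps = rng.integers(2, size=(N, P))*2 - 1   # each step is -1 or +1
disp = np.cumsum(d*steps, axis=0)
var_final = disp[-1].var()
print(var_final/(N*d**2))  # close to 1
```

With P walkers, the sample variance fluctuates by roughly sqrt(2/P), so the ratio above is expected to be within a percent or so of 1.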
N_particles = 6
N = 2000
d = 1
# np.random.randint gives either 0 or 1,
# so (*2) and (-1) give either -1 or 1.
each_walk_x = np.random.randint(2,size=(N,N_particles))*2 - 1
each_walk_y = np.random.randint(2,size=(N,N_particles))*2 - 1
disp_x = np.cumsum(d*each_walk_x,axis=0)
disp_y = np.cumsum(d*each_walk_y,axis=0)
for i in range(N_particles):
    # x0, y0 = initial position of a particle.
    # Stagger the locations so that they are easily distinguished.
    x0 = i*100
    y0 = 100*(-1)**i
    plt.plot(disp_x[:,i]+x0,disp_y[:,i]+y0,color='black')
plt.xlabel('Displacement in x')
plt.ylabel('Displacement in y')
plt.axis('equal')
plt.savefig('fig_ch11_random_walks_2dim.eps')
plt.show()
Figure 11.4
Random and Guided Walks 189
import numpy as np
import matplotlib.pyplot as plt
N_particles = 6
for i in range(N_particles):
    x,y = random_walk_2D()
    # x0, y0 = initial position of a particle.
    # Stagger the locations so that they are easily distinguished.
    x0 = i*100
    y0 = 100*(-1)**i
    plt.plot(x+x0,y+y0,color='black')
plt.xlabel('Displacement in x')
plt.ylabel('Displacement in y')
plt.axis('equal')
plt.savefig('fig_ch11_random_walks_2dim_general.eps')
plt.show()
Figure 11.5
∂F/∂x = ∂U/∂x − T ∂S/∂x.
Since T does not depend on the length of the rubber band, there is no term with ∂T/∂x. Assuming that the rubber band is a freely jointed chain, its internal energy U does not depend on its length, so ∂U/∂x = 0. Hence,
f = −∂F/∂x = T ∂S/∂x,
where we are using one of the results from Chapter 8, P = −(∂F/∂V)_{T,N}.
Instead of P, we use its linear analog f, which is the tension on the rubber band. The magnitude of the tension increases with increasing T. The stretching of the rubber band decreases its entropy because a straight chain is unlikely, so ∂S/∂x < 0. As a result, the overall sign of f becomes negative, implying that the tension f is restorative like a spring force.
Hence, the shrinking of the rubber band when it gets hotter is explained
as an entropic phenomenon.
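This entropic argument can be made concrete by counting microstates of a freely jointed chain of n links, each pointing left (−1) or right (+1). With n_plus links pointing right, the end-to-end extension is x = n_plus − n_minus, and S = k_B ln Ω with Ω = C(n, n_plus). The helper below is our own minimal sketch, with k_B set to 1.

```python
from math import comb, log

n = 100  # number of links in the chain

def entropy(x):
    # x = n_plus - n_minus; x must have the same parity as n.
    n_plus = (n + x)//2
    return log(comb(n, n_plus))  # S = ln(Omega), with k_B = 1

# Stretching the chain (larger |x|) reduces the number of configurations.
print(entropy(0) > entropy(20) > entropy(60))  # prints True
```

The entropy is maximal at zero extension and falls off as the chain is stretched, which is exactly the ∂S/∂x < 0 used in the argument above.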
11.3 A TANGENT
import numpy as np
import matplotlib.pyplot as plt
import math
import mpmath as m # Multi-precision math
m.mp.dps = 20000
print('Digits of %s'%case)
print(num_str[:60])
# plot() function will work fine, but quiver() will also work.
#plt.plot(x,y,color='black',linewidth=0.25)
plt.quiver(x[:-1],y[:-1],x[1:]-x[:-1],y[1:]-y[:-1],
units='xy',scale=1,color='black')
plt.axis('equal')
plt.axis('square')
plt.axis('off')
plt.savefig('fig_ch11_guided_walk_%s.eps'%(case),
bbox_inches='tight')
plt.show()
Digits of pi
314159265358979323846264338327950288419716939937510582097494
Figure 11.6
Figure 11.7
Figure 11.8
∇f = (∂f/∂x) x̂ + (∂f/∂y) ŷ,
x(t + ∆t) = x(t) + ∆x = x(t) − d (∂f/∂x)

y(t + ∆t) = y(t) + ∆y = y(t) − d (∂f/∂y),
where d is a small step size. The negative signs in ∆x = −d(∂f/∂x) and ∆y = −d(∂f/∂y) indicate that the particle is descending, not ascending, along the function toward its minimum. After each update, the position becomes closer to the minimum of f(x, y). The caveats are: if d is too large, the
next position may overshoot the minimum, and if d is too small, many
position updates are necessary to reach the minimum.
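As a minimal illustration of this update rule, the sketch below descends the simple bowl f(x, y) = x² + y² (our own example, deliberately simpler than the noisy landscape used in the code blocks of this section).

```python
# Gradient descent on f(x, y) = x^2 + y^2, whose gradient is (2x, 2y).
d = 0.1            # step size
x, y = -3.0, -3.0  # starting position
for _ in range(200):
    dfx, dfy = 2*x, 2*y        # gradient of f at the current position
    x, y = x - d*dfx, y - d*dfy
print(x, y)  # both very close to 0, the minimum of f
```

Each step multiplies the coordinates by (1 − 2d), so with d = 0.1 the position contracts geometrically toward the origin; a step size d ≥ 1 would instead overshoot and diverge.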
# Code Block 11.7
import numpy as np
import matplotlib.pyplot as plt
# Add noise.
noise_level = 5
df_x = df_x + np.random.randn()*noise_level
df_y = df_y + np.random.randn()*noise_level
return df_x,df_y
N = 200
d = 0.01 # step size.
x = np.zeros(N)
y = np.zeros(N)
# starting position
x[0] = -3
y[0] = -3
for i in range(N-1):
    dx, dy = gradient_simple_2D(x[i],y[i])
    x[i+1] = x[i] - d*dx
    y[i+1] = y[i] - d*dy
plt.plot(x,y,color='gray')
plt.plot(x[0],y[0],'ko')
plt.plot(x[-1],y[-1],'k*')
plt.text(x[0]+0.5,y[0],'start')
plt.text(x[-1]+0.5,y[-1],'end')
plt.axis('square')
plt.xlim((-5,5))
plt.ylim((-5,5))
plt.savefig('fig_ch11_guided_walk_simple.eps')
plt.show()
Figure 11.9
The next code block extends the above by introducing multiple particles
at different starting points. The collective target-seeking behavior is
reminiscent of a population of ants converging on a food source. The
concept of chemical potential µ from Chapter 8, which determines the
movement of particles from one thermal system to another, is related
to this idea.
Figure 11.10
Next, consider a more complex landscape with two local minima, sometimes called a double well. As a definite example, we will use

f(x, y) = −e^(−((x − x0)² + (y − y0)²)/4) − 2 e^(−((x − x1)² + (y − y1)²)),

where (x0, y0) = (−3, 3) and (x1, y1) = (3, 3). The partial derivatives of the above function, as they appear in gradient_double_2D(), are
∂f/∂x = (1/2)(x − x0) e^(−((x − x0)² + (y − y0)²)/4) + 4(x − x1) e^(−((x − x1)² + (y − y1)²))

∂f/∂y = (1/2)(y − y0) e^(−((x − x0)² + (y − y0)²)/4) + 4(y − y1) e^(−((x − x1)² + (y − y1)²))
def double_well_2D(x,y):
    x0, y0 = -3, 3
    x1, y1 = 3, 3
    f0 = -np.exp(-((x-x0)**2+(y-y0)**2)/4)
    f1 = -2*np.exp(-((x-x1)**2+(y-y1)**2))
    f = f0 + f1
    # Gradient
    df_x = -2*(x-x0)/4*f0 -2*(x-x1)*f1
    df_y = -2*(y-y0)/4*f0 -2*(y-y1)*f1
    return f, df_x, df_y
N = 100
d = 0.1 # step size
P = 100 # number of particles
x = np.zeros((P,N))
y = np.zeros((P,N))
f, _, _ = double_well_2D(x_range,3)
plt.plot(x_range,f,'ko')
plt.legend(('$f(x,y=3) = -e^{-(x+3)^2/4} -2e^{-(x-3)^2}$',
'End points of trajectories'), framealpha=1.0)
plt.title('1D slice')
plt.xlabel('$x$')
plt.savefig('fig_ch11_guided_walk_double_slice.eps')
plt.show()
N0 = find_neighbors(x[:,-1],y[:,-1],-3,3,d*5)
N1 = find_neighbors(x[:,-1],y[:,-1], 3,3,d*5)
print('Number of neighbors near (%2d,%2d) = %d'%(-3,3,N0))
print('Number of neighbors near (%2d,%2d) = %d'%( 3,3,N1))
Figure 11.11
Perhaps the most challenging step in following the code in this book is the first step of getting started with Python. Fortunately, there are a few user-friendly options at the time of writing.
The first option is a free, cloud-based Python environment like Google
Colaboratory (or Colab) (research.google.com/colaboratory). You
can open, edit, and run Python codes on a Jupyter Notebook en-
vironment using a browser. The second option is to download and
install a distribution of Python that already includes relevant pack-
ages, such as numpy and matplotlib, and other valuable tools, such
as Jupyter Notebook (jupyter.org). We recommend the Anaconda Distribution (www.anaconda.com), which supports different operating systems (Windows, macOS, and Linux) and makes it easy to configure your computer.† The third option is to install each module and dependency separately.
†
There is an interesting interview of Travis Oliphant by Lex Fridman, where they
talk about the history behind the development of numpy, scipy, Anaconda, and other
topics on scientific computing available at www.youtube.com/watch?v=gFEE3w7F0ww.
x = 5
y = 2
print(x+y)
print(x-y)
print(x*y)
print(x/y)
print(x**y)
7
3
10
2.5
25
for i in range(5):
    print(i**2)
    if (i**2 == 9):
        print('This was a nice number.')
0
1
4
9
This was a nice number.
16
import numpy as np
def calculate_average(x):
    avg = 0
    for val in x:
        avg = avg + val
    return avg/len(x)
x = [1,5,3,7,2]
3.6
3.6
x = np.array([1,2,3])
-----------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-4-976cd16bfa8a> in <module>
7 # Because we are demonstrating the importance of import,
8 # let's unimport or del numpy.
----> 9 del numpy
10
11 x = np.array([1,2,3])
x = [10,20,30]
print(x[0])
print(x[1])
print(x[2])
print(x[3])
10
20
30
-------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-5-1487e342efb9> in <module>
11 print(x[1])
12 print(x[2])
---> 13 print(x[3])
import numpy as np
print('')
APPENDIX C: PLOTS
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0,2*np.pi,0.01)
plt.plot(t,np.sin(t-0.0),color='black',linestyle='solid')
plt.plot(t,np.sin(t+np.pi/2),color='black',linestyle='dotted')
plt.savefig('fig_appC_single.eps')
plt.show()
If you want to show multiple graphs, you can draw each curve one at
a time within a single plot, as demonstrated in the above code block.
Alternatively, you can prepare a grid of plots using subplots() from
the matplotlib.pyplot module. Let us demonstrate how each subplot
can be modified.
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2,2)
t = np.arange(0,3*np.pi,0.01)
ax[0,0].plot(t,np.sin(t-0.0),color='black',linestyle='solid')
ax[0,1].plot(t,np.sin(t-0.0),color='black',linestyle='dotted')
ax[1,0].plot(t,np.sin(t-0.0),color='gray',linestyle='solid')
ax[1,1].plot(t,np.sin(t-0.0),color='gray',linestyle='dotted')
plt.subplots_adjust(left=0.1,right=0.9,top=0.9,bottom=0.1,
wspace=0.4,hspace=0.4)
ax[0,0].set_xlim((0,2))
ax[0,1].set_xlim((0,4))
ax[1,0].set_xlim((0,6))
ax[1,1].set_xlim((0,8))
plt.savefig('fig_appC_subplots.eps')
plt.show()
APPENDIX D: COLORS
All the graphs in the main text of this book were presented in grayscale.
However, sprucing up your graphs with colors in Python is straightfor-
ward. In the matplotlib.pyplot module, the optional argument color
allows you to specify colors easily by naming colors or their nicknames.
For example, both color='red' and color='r' make plots in red. Over
the next several code blocks, we will create a few color plots that would
best be viewed on a screen rather than in print.
# Code Block Appendix D.1
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0,2*np.pi,0.01)
plt.plot(t,np.sin(t-0.0),color='black')
plt.plot(t,np.sin(t-0.2),color='k') # black
plt.plot(t,np.sin(t-0.4),color='red')
plt.plot(t,np.sin(t-0.6),color='r') # red
plt.plot(t,np.sin(t-0.8),color='green')
plt.plot(t,np.sin(t-1.0),color='g') # green
plt.plot(t,np.sin(t-1.2),color='blue')
plt.plot(t,np.sin(t-1.4),color='b') # blue
plt.savefig('fig_appD_color1.eps')
plt.show()
plt.plot(t,np.sin(t-0.0),color=(0,0,0)) # black
plt.plot(t,np.sin(t-0.2),color=(1,0,0)) # red
plt.plot(t,np.sin(t-0.4),color=(0,1,0)) # green
plt.plot(t,np.sin(t-0.6),color=(0,0,1)) # blue
plt.plot(t,np.sin(t-0.8),color=(1,1,0)) # red+green = yellow
plt.plot(t,np.sin(t-1.0),color=(0,1,1)) # green+blue = cyan
plt.plot(t,np.sin(t-1.2),color=(1,0,1)) # red+blue = violet
plt.plot(t,np.sin(t-1.4),color=(1,0,0.5)) # reddish violet
plt.plot(t,np.sin(t-1.6),color=(0.5,0,1)) # bluish violet
plt.savefig('fig_appD_color2.eps')
plt.show()
plt.plot(t,np.sin(t-0.0),color=(0,0,0)) # black
plt.plot(t,np.sin(t-0.2),color=(0.2,0.2,0.2))
plt.plot(t,np.sin(t-0.4),color=(0.4,0.4,0.4))
plt.plot(t,np.sin(t-0.6),color=(0.6,0.6,0.6))
plt.plot(t,np.sin(t-0.8),color=(0.8,0.8,0.8))
plt.plot(t,np.sin(t-1.0),color=(1,1,1)) # white (not visible)
plt.savefig('fig_appD_color3.eps')
plt.show()
plt.plot(t,np.sin(t-0.0),color=(0.0,0,0))
plt.plot(t,np.sin(t-0.2),color=(0.2,0,0))
plt.plot(t,np.sin(t-0.4),color=(0.4,0,0))
plt.plot(t,np.sin(t-0.6),color=(0.6,0,0))
plt.plot(t,np.sin(t-0.8),color=(0.8,0,0))
plt.plot(t,np.sin(t-1.0),color=(1.0,0,0))
plt.savefig('fig_appD_color4.eps')
plt.show()
# both black
plt.plot(t,np.sin(t-0.0),color=(0,0,0))
plt.plot(t,np.sin(t-0.2),color='#000000')
# both gray
plt.plot(t,np.sin(t-0.4),color=(0.5,0.5,0.5))
plt.plot(t,np.sin(t-0.6),color='#808080')
# both red
plt.plot(t,np.sin(t-0.8),color=(1,0,0))
plt.plot(t,np.sin(t-1.0),color='#FF0000')
plt.savefig('fig_appD_color5.eps')
plt.show()
APPENDIX E: ANIMATION
plt.axis('off')
plt.savefig('fig_appE_particle_in_box.eps')
plt.show()
Nbounce = 0
for i, t in enumerate(time_range[1:]):
    current_y = y[i] + current_v*dt # Update position.
    if current_y <= ymin:
        # if the particle hits the bottom wall.
        current_v = -current_v # velocity changes the sign.
        current_y = ymin + (ymin - current_y)
        Nbounce = Nbounce+1
    if current_y >= ymax:
        # if the particle hits the top wall.
        current_v = -current_v # velocity changes the sign.
        current_y = ymax - (current_y - ymax)
        Nbounce = Nbounce+1
    y[i+1] = current_y
if (plot):
    plt.plot(time_range,y)
    plt.xlabel('Time')
    plt.ylabel('Position')
    plt.savefig('fig_ch2_bounce.eps')
    plt.show()
return y, time_range, Nbounce
fig, ax = plt.subplots()
x0 = 0.3
y0 = 0.5
v = -0.2
position, _, _ = calculate_position(y0,v,dt=0.5,tmax=20)
def animate(i):
    ax.cla()
    plt.scatter(x0,position[i])
    # Draw walls
    plt.plot((-0.1,1.1),(0,0),color='black')
    plt.plot((-0.1,1.1),(1,1),color='black')
    plt.xlim((-0.1,1.1))
    plt.ylim((-0.1,1.1))
    plt.axis('off')
N = 30
tmin = 0
tmax = 10
dt = 0.1
t = np.arange(tmin,tmax,dt)
pos = np.zeros((N,len(t))) # initialize the matrix.
Nbounce = np.zeros(N)
v = np.random.randn(N)*0.5
y0 = np.random.rand(N)
for i in range(N):
from matplotlib import animation # FuncAnimation lives in matplotlib.animation
fig, ax = plt.subplots()
def animate_Nparticles(i):
    ax.cla()
    N, frames = pos.shape
    x = np.linspace(0,1,N)
    for j in range(N):
        plt.scatter(x[j],pos[j,i],color='gray')
    # Draw walls
    plt.plot((-0.1,1.1),(0,0),color='black')
    plt.plot((-0.1,1.1),(1,1),color='black')
    plt.xlim((-0.1,1.1))
    plt.ylim((-0.1,1.1))
    plt.axis('off')
N, frames = pos.shape
ani = animation.FuncAnimation(fig,animate_Nparticles,
                              interval=50,frames=frames,
                              repeat=False)
ani
Epilogue
†
See “Thermodynamics in Einstein’s Thought” by Martin Klein in Science
Vol. 157, No. 3788 (1967).
Index