Thermodynamics
In broad
terms, thermodynamics deals with the transfer of energy from one place to another and from one form
to another. The key concept is that heat is a form of energy corresponding to a definite amount
of mechanical work.
Heat was not formally recognized as a form of energy until about 1798, when Count Rumford (Sir
Benjamin Thompson), a British military engineer, noticed that limitless amounts of heat could be
generated in the boring of cannon barrels and that the amount of heat generated is proportional to the
work done in turning a blunt boring tool. Rumford’s observation of the proportionality between heat
generated and work done lies at the foundation of thermodynamics. Another pioneer was the French
military engineer Sadi Carnot, who introduced the concept of the heat-engine cycle and the principle
of reversibility in 1824. Carnot’s work concerned the limitations on the maximum amount of work that
can be obtained from a steam engine operating with a high-temperature heat transfer as its driving
force. During the 19th century these two sets of ideas were developed by Rudolf Clausius, a German mathematician and
physicist, into the first and second laws of thermodynamics, respectively.
The zeroth law of thermodynamics. When two systems are each in thermal equilibrium with a
third system, the first two systems are in thermal equilibrium with each other. This property
makes it meaningful to use thermometers as the “third system” and to define a temperature
scale.
The first law of thermodynamics, or the law of conservation of energy. The change in a
system’s internal energy is equal to the difference between heat added to the system from its
surroundings and work done by the system on its surroundings. In other words, energy cannot
be created or destroyed but merely converted from one form to another.
The second law of thermodynamics. Heat does not flow spontaneously from a colder region to a
hotter region, or, equivalently, heat at a given temperature cannot be converted entirely into
work. Consequently, the entropy of a closed system, or heat energy per unit temperature,
increases over time toward some maximum value. Thus, all closed systems tend toward an
equilibrium state in which entropy is at a maximum and no energy is available to do useful work.
The third law of thermodynamics. The entropy of a perfect crystal of an element in its most
stable form tends to zero as the temperature approaches absolute zero. This allows an absolute
scale for entropy to be established that, from a statistical point of view, determines the degree of
randomness or disorder in a system.
Although thermodynamics developed rapidly during the 19th century in response to the need to
optimize the performance of steam engines, the sweeping generality of the laws of thermodynamics
makes them applicable to all physical and biological systems. In particular, the laws of thermodynamics
give a complete description of all changes in the energy state of any system and its ability to perform
useful work on its surroundings.
This article covers classical thermodynamics, which does not involve the consideration of
individual atoms or molecules. Such concerns are the focus of the branch of thermodynamics known as
statistical thermodynamics, or statistical mechanics, which expresses macroscopic thermodynamic
properties in terms of the behaviour of individual particles and their interactions. It has its roots in the
latter part of the 19th century, when atomic and molecular theories of matter began to be generally
accepted.
Fundamental concepts
Thermodynamic states
The application of thermodynamic principles begins by defining a system that is in some sense distinct
from its surroundings. For example, the system could be a sample of gas inside a cylinder with a
movable piston, an entire steam engine, a marathon runner, the planet Earth, a neutron star, a black
hole, or even the entire universe. In general, systems are free to exchange heat, work, and other forms
of energy with their surroundings.
A system’s condition at any given time is called its thermodynamic state. For a gas in a cylinder with a
movable piston, the state of the system is identified by the temperature, pressure, and volume of the
gas. These properties are characteristic parameters that have definite values at each state and are
independent of the way in which the system arrived at that state. In other words, any change in value of
a property depends only on the initial and final states of the system, not on the path followed by the
system from one state to another. Such properties are called state functions. In contrast, the work done
as the piston moves and the gas expands and the heat the gas absorbs from its surroundings depend on
the detailed way in which the expansion occurs.
Thermodynamic equilibrium
A particularly important concept is thermodynamic equilibrium, in which there is no tendency for the
state of a system to change spontaneously. For example, the gas in a cylinder with a movable piston will
be at equilibrium if the temperature and pressure inside are uniform and if the restraining force on the
piston is just sufficient to keep it from moving. The system can then be made to change to a new state
only by an externally imposed change in one of the state functions, such as the temperature by adding
heat or the volume by moving the piston. A sequence of one or more such steps connecting different
states of the system is called a process. In general, a system is not in equilibrium as it adjusts to an
abrupt change in its environment. For example, when a balloon bursts, the compressed gas inside is
suddenly far from equilibrium, and it rapidly expands until it reaches a new equilibrium state. However,
the same final state could be achieved by placing the same compressed gas in a cylinder with a movable
piston and applying a sequence of many small increments in volume (and temperature), with the system
being given time to come to equilibrium after each small increment. Such a process is said to
be reversible because the system is at (or near) equilibrium at each step along its path, and the direction
of change could be reversed at any point. This example illustrates how two different paths can connect
the same initial and final states. The first is irreversible (the balloon bursts), and the second is reversible.
The concept of reversible processes is something like motion without friction in mechanics. It represents
an idealized limiting case that is very useful in discussing the properties of real systems. Many of the
results of thermodynamics are derived from the properties of reversible processes.
Temperature
The concept of temperature is fundamental to any discussion of thermodynamics, but its precise
definition is not a simple matter. For example, a steel rod feels colder than a wooden rod at room
temperature simply because steel is better at conducting heat away from the skin. It is therefore
necessary to have an objective way of measuring temperature. In general, when two objects are brought
into thermal contact, heat will flow between them until they come into equilibrium with each other.
When the flow of heat stops, they are said to be at the same temperature. The zeroth law of
thermodynamics formalizes this by asserting that if an object A is in simultaneous thermal equilibrium
with two other objects B and C, then B and C will be in thermal equilibrium with each other if brought
into thermal contact. Object A can then play the role of a thermometer through some change in its
physical properties with temperature, such as its volume or its electrical resistance.
With the definition of equality of temperature in hand, it is possible to establish a temperature scale by
assigning numerical values to certain easily reproducible fixed points. For example, in the Celsius (°C)
temperature scale, the freezing point of pure water is arbitrarily assigned a temperature of 0 °C and
the boiling point of water the value of 100 °C (in both cases at 1 standard atmosphere; see atmospheric
pressure). In the Fahrenheit (°F) temperature scale, these same two points are assigned the values 32 °F
and 212 °F, respectively. There are absolute temperature scales related to the second law of
thermodynamics. The absolute scale related to the Celsius scale is called the Kelvin (K) scale, and that
related to the Fahrenheit scale is called the Rankine (°R) scale. These scales are related by the equations
K = °C + 273.15, °R = °F + 459.67, and °R = 1.8 K. Zero in both the Kelvin and Rankine scales is at absolute
zero.
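These conversion formulas lend themselves to a few lines of code. The following minimal Python sketch applies the relations just given, together with the linear relation °F = 1.8 °C + 32 implied by the two fixed points; the sample temperature is merely illustrative.

def celsius_to_kelvin(t_c):
    # K = °C + 273.15
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    # °F = 1.8 °C + 32 (follows from the fixed points 0/100 °C = 32/212 °F)
    return 1.8 * t_c + 32.0

def fahrenheit_to_rankine(t_f):
    # °R = °F + 459.67
    return t_f + 459.67

t_c = 100.0                                   # boiling point of water at 1 atm
print(celsius_to_kelvin(t_c))                 # 373.15 K
print(celsius_to_fahrenheit(t_c))             # 212.0 °F
print(fahrenheit_to_rankine(celsius_to_fahrenheit(t_c)))  # 671.67 °R (= 1.8 × 373.15)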
Work and energy
Energy has a precise meaning in physics that does not always correspond to everyday language, and yet
a precise definition is somewhat elusive. The word is derived from the Greek word ergon, meaning work,
but the term work itself acquired a technical meaning with the advent of Newtonian mechanics. For
example, a man pushing on a car may feel that he is doing a lot of work, but no work is actually done
unless the car moves. The work done is then the force applied by the man multiplied by the distance
through which the car moves. If there is no friction and the surface is level, then the car,
once set in motion, will continue rolling indefinitely with constant speed. The rolling car has something
that a stationary car does not have—it has kinetic energy of motion equal to the work required to
achieve that state of motion. The introduction of the concept of energy in this way is of great value in
mechanics because, in the absence of friction, energy is never lost from the system, although it can be
converted from one form to another. For example, if a coasting car comes to a hill, it will roll some
distance up the hill before coming to a temporary stop. At that moment its kinetic energy of motion has
been converted into its potential energy of position, which is equal to the work required to lift the car
through the same vertical distance. After coming to a stop, the car will then begin rolling back down the
hill until it has completely recovered its kinetic energy of motion at the bottom. In the absence of
friction, such systems are said to be conservative because at any given moment the total amount of
energy (kinetic plus potential) remains equal to the initial work done to set the system in motion.
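The bookkeeping for such a conservative system can be illustrated with a short Python sketch; the car's mass and speed below are assumed values chosen only for illustration, not figures from the text. Equating the kinetic energy ½mv² with the potential energy mgh gives the height at which the coasting car momentarily stops.

g = 9.81        # gravitational acceleration, m/s^2
m = 1000.0      # assumed mass of the car, kg (illustrative)
v = 10.0        # assumed initial speed, m/s (illustrative)

kinetic_energy = 0.5 * m * v**2          # work originally done to set the car in motion, J
height = kinetic_energy / (m * g)        # mgh = ½mv²  ->  h = v²/(2g)
print(kinetic_energy, height)            # 50000.0 J, about 5.1 m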
As the science of physics expanded to cover an ever-wider range of phenomena, it became necessary to
include additional forms of energy in order to keep the total amount of energy constant for all closed
systems (or to account for changes in total energy for open systems). For example, if work is done to
accelerate charged particles, then some of the resultant energy will be stored in the form
of electromagnetic fields and carried away from the system as radiation. In turn the electromagnetic
energy can be picked up by a remote receiver (antenna) and converted back into an equivalent amount
of work. With his theory of special relativity, Albert Einstein realized that energy (E) can also be stored as
mass (m) and converted back into energy, as expressed by his famous equation E = mc², where c is
the velocity of light. All of these systems are said to be conservative in the sense that energy can be
freely converted from one form to another without limit. Each fundamental advance of physics into new
realms has involved a similar extension to the list of the different forms of energy. In addition to
preserving the first law of thermodynamics (see below), also called the law of conservation of energy,
each form of energy can be related back to an equivalent amount of work required to set the system into
motion.
Thermodynamics encompasses all of these forms of energy, with the further addition of heat to the list
of different kinds of energy. However, heat is fundamentally different from the others in that the
conversion of work (or other forms of energy) into heat is not completely reversible, even in principle. In
the example of the rolling car, some of the work done to set the car in motion is inevitably lost as heat
due to friction, and the car eventually comes to a stop on a level surface. Even if all the generated heat
were collected and stored in some fashion, it could never be converted entirely back into mechanical
energy of motion. This fundamental limitation is expressed quantitatively by the second law of
thermodynamics (see below).
The role of friction in degrading the energy of mechanical systems may seem simple and obvious, but the
quantitative connection between heat and work, as first discovered by Count Rumford, played a key role
in understanding the operation of steam engines in the 19th century, and it plays the same role in all
energy-conversion processes today.
Although classical thermodynamics deals exclusively with the macroscopic properties of materials—such
as temperature, pressure, and volume—thermal energy from the addition of heat can be understood at
the microscopic level as an increase in the kinetic energy of motion of the molecules making up a
substance. For example, gas molecules have translational kinetic energy that is proportional to the
temperature of the gas; the molecules can also rotate about their centre of mass, and the constituent atoms
can vibrate with respect to each other (like masses connected by springs). Additionally, chemical
energy is stored in the bonds holding the molecules together, and weaker long-range interactions
between the molecules involve yet more energy. The sum total of all these forms of
energy constitutes the total internal energy of the substance in a given thermodynamic state. The total
energy of a system includes its internal energy plus any other forms of energy, such as kinetic energy due
to motion of the system as a whole (e.g., water flowing through a pipe) and gravitational potential
energy due to its elevation.
The first law of thermodynamics
The laws of thermodynamics are deceptively simple to state, but they are far-reaching in their
consequences. The first law asserts that if heat is recognized as a form of energy, then the total energy
of a system plus its surroundings is conserved; in other words, the total energy of the universe remains
constant.
The first law is put into action by considering the flow of energy across the boundary separating a system
from its surroundings. Consider the classic example of a gas enclosed in a cylinder with a movable piston.
The walls of the cylinder act as the boundary separating the gas inside from the world outside, and the
movable piston provides a mechanism for the gas to do work by expanding against the force holding the
piston (assumed frictionless) in place. If the gas does work W as it expands, and/or absorbs heat Q from
its surroundings through the walls of the cylinder, then this corresponds to a net flow of
energy W − Q across the boundary to the surroundings. In order to conserve the total energy U, there
must be a counterbalancing change ΔU = Q − W (1) in the internal energy of the gas. The first law provides
a kind of strict energy accounting system in which the change in the energy account (ΔU) equals the
difference between deposits (Q) and withdrawals (W).
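A minimal Python sketch of this accounting, using arbitrary illustrative values for Q and W rather than data from any particular experiment, might read:

def internal_energy_change(heat_in, work_out):
    # First law: ΔU = Q − W (heat added to the gas minus work done by the gas)
    return heat_in - work_out

Q = 500.0   # heat absorbed by the gas, J (illustrative)
W = 200.0   # work done by the expanding gas, J (illustrative)
print(internal_energy_change(Q, W))   # ΔU = 300.0 J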
There is an important distinction between the quantity ΔU and the related energy quantities Q and W.
Since the internal energy U is characterized entirely by the quantities (or parameters) that uniquely
determine the state of the system at equilibrium, it is said to be a state function such that any change in
energy is determined entirely by the initial (i) and final (f) states of the system: ΔU = Uf − Ui.
However, Q and W are not state functions. Just as in the example of a bursting balloon, the gas inside
may do no work at all in reaching its final expanded state, or it could do maximum work by expanding
inside a cylinder with a movable piston to reach the same final state. All that is required is that the
change in energy (ΔU) remain the same. By analogy, the same change in one’s bank account could be
achieved by many different combinations of deposits and withdrawals. Thus, Q and W are not state
functions, because their values depend on the particular process (or path) connecting the same initial
and final states. Just as it is more meaningful to speak of the balance in one’s bank account than its
deposit or withdrawal content, it is only meaningful to speak of the internal energy of a system and not
its heat or work content.
From a formal mathematical point of view, the incremental change dU in the internal energy is an exact
differential (see differential equation), while the corresponding incremental changes d′Q and d′W in heat
and work are not, because the definite integrals of these quantities are path-dependent. These concepts
can be used to great advantage in a precise mathematical formulation of thermodynamics (see
below Thermodynamic properties and relations).
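To make the path dependence concrete, the short Python sketch below compares two paths between the same initial and final states of an ideal gas, whose internal energy depends only on temperature: a free expansion, as in the bursting balloon, and a reversible isothermal expansion, for which the work is nRT ln(Vf/Vi). The amount of gas, temperature, and volumes are illustrative assumptions.

import math

R = 8.314                 # gas constant, J/(mol·K)
n = 1.0                   # moles (illustrative)
T = 300.0                 # temperature, K (illustrative)
V_i, V_f = 0.010, 0.020   # initial and final volumes, m^3 (illustrative)

# Path 1: free expansion (bursting balloon) — the gas does no work and exchanges no heat.
W1, Q1 = 0.0, 0.0

# Path 2: reversible isothermal expansion — W = nRT ln(Vf/Vi), and Q = W so that ΔU = 0.
W2 = n * R * T * math.log(V_f / V_i)
Q2 = W2

# ΔU = Q − W is the same (zero) for both paths, although Q and W themselves differ.
print(Q1 - W1, Q2 - W2)   # 0.0  0.0
print(W1, W2)             # 0.0  about 1729 J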
The second law of thermodynamics
The first law of thermodynamics asserts that energy must be conserved in any process involving the
exchange of heat and work between a system and its surroundings. A machine that violated the first law
would be called a perpetual motion machine of the first kind because it would manufacture its own
energy out of nothing and thereby run forever. Such a machine would be impossible even in theory.
However, the first law by itself would not prevent the construction of a machine that could extract
essentially limitless amounts of heat from its surroundings (earth, air, and sea) and convert it entirely
into work. Although such a hypothetical machine would not violate conservation of energy, the total
failure of inventors to build such a machine, known as a perpetual motion machine of the second kind,
led to the discovery of the second law of thermodynamics. The second law of thermodynamics can be
precisely stated in the following two forms, as originally formulated in the 19th century by the Scottish
physicist William Thomson (Lord Kelvin) and the German physicist Rudolf Clausius, respectively:
A cyclic transformation whose only final result is to transform heat extracted from a source which is at
the same temperature throughout into work is impossible.
A cyclic transformation whose only final result is to transfer heat from a body at a given temperature to
a body at a higher temperature is impossible.
The two statements are in fact equivalent because, if the first were possible, then the work obtained
could be used, for example, to generate electricity that could then be discharged through an electric
heater installed in a body at a higher temperature. The net effect would be a flow of heat from a lower
temperature to a higher temperature, thereby violating the second (Clausius) form of the second
law. Conversely, if the second form were possible, then the heat transferred to the higher temperature
could be used to run a heat engine that would convert part of the heat into work. The final result would
be a conversion of heat into work at constant temperature—a violation of the first (Kelvin) form of the
second law.
Central to the following discussion of entropy is the concept of a heat reservoir capable of providing
essentially limitless amounts of heat at a fixed temperature. This is of course an idealization, but the
temperature of a large body of water such as the Atlantic Ocean does not materially change if a small
amount of heat is withdrawn to run a heat engine. The essential point is that the heat reservoir is
assumed to have a well-defined temperature that does not change as a result of the process being
considered.
Thermodynamic properties and relations
In order to carry through a program of finding the changes in the various thermodynamic functions that
accompany reactions—such as entropy, enthalpy, and free energy—it is often useful to know these
quantities separately for each of the materials entering into the reaction. For example, if
the entropies are known separately for the reactants and products, then the entropy change for the
reaction is just the difference ΔSreaction = Sproducts − Sreactants, and similarly for the other thermodynamic
functions. Furthermore, if the entropy change for a reaction is known under one set of conditions
of temperature and pressure, it can be found under other sets of conditions by including the variation of
entropy for the reactants and products with temperature or pressure as part of the overall process. For
these reasons, scientists and engineers have developed extensive tables of thermodynamic properties
for many common substances, together with their rates of change with state variables such as
temperature and pressure.
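As a rough sketch of how such tables are used, the Python snippet below evaluates ΔSreaction for the reaction 2 H2 + O2 → 2 H2O (liquid) from approximate textbook values of the standard molar entropies at 25 °C; the values are rounded and serve only as an illustration.

# Approximate standard molar entropies at 25 °C, J/(mol·K) (rounded textbook values)
S_standard = {"H2": 130.7, "O2": 205.2, "H2O(l)": 70.0}

def reaction_entropy(products, reactants):
    # ΔS_reaction = S_products − S_reactants, weighted by stoichiometric coefficients
    s_prod = sum(coeff * S_standard[sp] for sp, coeff in products.items())
    s_reac = sum(coeff * S_standard[sp] for sp, coeff in reactants.items())
    return s_prod - s_reac

# 2 H2 + O2 -> 2 H2O(l)
dS = reaction_entropy({"H2O(l)": 2}, {"H2": 2, "O2": 1})
print(dS)   # about −326.6 J/K: entropy decreases as the gases combine into liquid water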
The first task in carrying out the above program is to calculate the amount of work done by a single pure
substance when it expands at constant temperature. Unlike the case of a chemical reaction, where the
volume can change at constant temperature and pressure because of the liberation of gas, the volume of
a single pure substance placed in a cylinder cannot change unless either the pressure or the temperature
changes. To calculate the work, suppose that a piston moves by an infinitesimal amount dx. Because
pressure is force per unit area, the total restraining force exerted by the piston on the gas is PA,
where A is the cross-sectional area of the piston. Thus, the incremental amount of work done is
d′W = PA dx.
However, A dx can also be identified as the incremental change in the volume (dV) swept out by the
head of the piston as it moves. The result is the basic equation d′W = P dV for the incremental work done
by a gas when it expands. For a finite change from an initial volume Vi to a final volume Vf, the total work
done is the sum (integral) of all the incremental contributions, W = ∫ P dV taken from Vi to Vf, evaluated
along the particular path followed by the system as it expands.
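A simple numerical check of this result can be written in a few lines of Python. The sketch below sums P dV over many small volume increments for an ideal gas held at constant temperature (an assumed, illustrative case) and recovers the value given by the closed-form expression nRT ln(Vf/Vi) for that path.

R = 8.314                 # gas constant, J/(mol·K)
n, T = 1.0, 300.0         # amount of gas (mol) and temperature (K), illustrative values
V_i, V_f = 0.010, 0.020   # initial and final volumes, m^3 (illustrative)

def pressure(V):
    # ideal-gas equation of state, P = nRT/V
    return n * R * T / V

# Sum the incremental work d'W = P dV over many small volume steps (midpoint rule).
steps = 100000
dV = (V_f - V_i) / steps
work = sum(pressure(V_i + (k + 0.5) * dV) * dV for k in range(steps))
print(work)   # about 1729 J, matching nRT ln(Vf/Vi) for this isothermal path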
As shown originally by Count Rumford, there is an equivalence between heat (measured in calories) and
mechanical work (measured in joules) with a definite conversion factor between the two. The conversion
factor, known as the mechanical equivalent of heat, is 1 calorie = 4.184 joules. (There are several slightly
different definitions in use for the calorie. The calorie used by nutritionists is actually a kilocalorie.) In
order to have a consistent set of units, both heat and work will be expressed in the same units of joules.
The amount of heat that a substance absorbs is connected to its temperature change via its molar
specific heat c, defined to be the amount of heat required to change the temperature of 1 mole of the
substance by 1 K. In other words, c is the constant of proportionality relating the heat absorbed (d′Q) to
the temperature change (dT) according to d′Q = nc dT, where n is the number of moles. For example, it
takes approximately 1 calorie of heat to increase the temperature of 1 gram of water by 1 K. Since there
are 18 grams of water in 1 mole, the molar heat capacity of water is 18 calories per K, or about 75 joules
per K. The total heat capacity C for n moles is defined by C = nc.
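A short Python illustration of the relation Q = nc ΔT for a finite temperature change (assuming c stays essentially constant over the range, as it nearly does for liquid water) might look like this; the amounts are illustrative.

c_water = 75.0    # approximate molar heat capacity of liquid water, J/(mol·K)

def heat_required(n_moles, molar_heat_capacity, delta_T):
    # Q = n c ΔT, valid when c is essentially constant over the temperature range
    return n_moles * molar_heat_capacity * delta_T

print(heat_required(2.0, c_water, 10.0))   # 1500.0 J to warm 2 mol (36 g) of water by 10 K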
However, since d′Q is not an exact differential, the heat absorbed is path-dependent and the path must
be specified, especially for gases where the thermal expansion is significant. Two common ways of
specifying the path are either the constant-pressure path or the constant-volume path. The two different
kinds of specific heat are called cP and cV respectively, where the subscript denotes the quantity that is
being held constant. It should not be surprising that cP is always greater than cV, because the substance
must do work against the surrounding atmosphere as it expands upon heating at constant pressure but
not at constant volume. In fact, this difference was used by the 19th-century German physicist Julius
Robert von Mayer to estimate the mechanical equivalent of heat.
The goal in defining heat capacity is to relate changes in the internal energy to measured changes in the
variables that characterize the states of the system. For a system consisting of a single pure substance,
the only kind of work it can do is atmospheric work, and so the first law reduces to dU = d′Q − P dV. (28)
Suppose now that U is regarded as being a function U(T, V) of the independent pair of variables T and V.
The differential quantity dU can always be expanded in terms of its partial derivatives according to
dU = (∂U/∂T)V dT + (∂U/∂V)T dV, (29)
where the subscripts denote the quantity being held constant when calculating derivatives. Substituting this
equation into dU = d′Q − P dV and specializing to changes at constant volume (dV = 0), for which
d′Q = CV dT by definition, then yields the general expression
CV = (∂U/∂T)V (32)
for the heat capacity at constant volume, showing that the change in internal energy at constant volume is
due entirely to the heat absorbed.
To find a corresponding expression for CP, one need only change the independent variables to T and P and
carry through the same expansion, this time writing the volume V(T, P) in terms of its partial derivatives
with respect to T and P. For a temperature change at constant pressure, dP = 0, and, by definition of heat
capacity, d′Q = CP dT, resulting in
CP = CV + [(∂U/∂V)T + P](∂V/∂T)P. (35)
The two additional terms beyond CV have a direct physical meaning. The term P(∂V/∂T)P represents the
additional atmospheric work that the system does as it undergoes thermal expansion at constant
pressure, and the second term involving (∂U/∂V)T represents the internal work that must be done to pull
the system apart against the forces of attraction between the molecules of the substance (internal
stickiness). Because there is no internal stickiness for an ideal gas, this term is zero, and, from the ideal
gas law PV = nRT, the remaining partial derivative is
(∂V/∂T)P = nR/P. (36)
With these substitutions the equation for CP becomes simply
CP = CV + nR (37)
or
cP = cV + R (38)
for the molar specific heats. For example, for
a monatomic ideal gas (such as helium), cV = 3R/2 and cP = 5R/2 to a good approximation. The quantity cVT
(equal to 3RT/2 per mole) represents the amount of translational kinetic energy possessed by the atoms of an
ideal gas as they bounce around
randomly inside their container. Diatomic molecules (such as oxygen) and polyatomic molecules (such
as water) have additional rotational motions that also store thermal energy in their kinetic energy of
rotation. Each additional degree of freedom contributes an additional amount R/2 to cV. Because diatomic
molecules can rotate about two axes and polyatomic molecules can rotate about three axes, the values
of cV increase to 5R/2 and 3R respectively, and cP correspondingly increases to 7R/2 and 4R.
(cV and cP increase still further at high temperatures because of vibrational degrees of freedom.) For a
real gas such as water vapour, these values are only approximate, but they give the correct order of
magnitude. For example, the correct values are cP = 37.468 joules per K (i.e., 4.5R) and cP − cV = 9.443
joules per K (i.e., 1.14R) for water vapour at 100 °C and 1 atmosphere pressure.
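The equipartition counting described in this paragraph can be summarized in a short Python sketch; it reproduces the ideal-gas values quoted above. Vibrational contributions are deliberately ignored, which is why the result for a polyatomic gas falls below the measured cP of water vapour.

R = 8.314   # gas constant, J/(mol·K)

def ideal_gas_molar_heats(rotational_axes):
    # cV = 3R/2 (translation) + R/2 per rotational axis; cP = cV + R for an ideal gas
    c_v = 1.5 * R + 0.5 * R * rotational_axes
    return c_v, c_v + R

for name, axes in [("monatomic (He)", 0), ("diatomic (O2)", 2), ("polyatomic (H2O)", 3)]:
    c_v, c_p = ideal_gas_molar_heats(axes)
    print(name, round(c_v, 1), round(c_p, 1))
# monatomic: cV ≈ 12.5, cP ≈ 20.8;  diatomic: 20.8, 29.1;  polyatomic: 24.9, 33.3 J/(mol·K)
# compare cP ≈ 37.5 J/(mol·K) measured for water vapour at 100 °C (vibrations raise it further)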