Transportation Network Analysis
Stephen D. Boyles
Civil, Architectural, and Environmental Engineering
The University of Texas at Austin
Nicholas E. Lownes
Civil and Environmental Engineering
University of Connecticut
Avinash Unnikrishnan
Civil and Environmental Engineering
Portland State University
Preface
This book is the product of more than ten years of teaching transportation
network analysis at The University of Texas at Austin, the University of
Wyoming, the University of Connecticut, Portland State University, and West
Virginia University. The project began during a sabbatical visit by the second
author to The University of Texas at Austin, and has continued since. This
version is being released as a public beta, with a final first edition hopefully
to be released within a year. In particular, we aim to improve the quality of
the figures, add some additional examples and exercises, and add historical and
bibliographic notes to each chapter. We are also developing a companion set of
lecture slides and assignments. A second volume, covering transit, freight, and
logistics, is also under preparation.
Any help you can offer to improve this text would be greatly appreciated,
whether spotting typos, math or logic errors, inconsistent terminology, or any
other suggestions about how the content can be better explained, better orga-
nized, or better presented.
We gratefully acknowledge the support of the National Science Foundation
under Grants 1069141/1157294, 1254921, 1562109/1562291, 1636154, 1739964,
and 1826320. Travis Waller (University of New South Wales), Chris Tampère
(Katholieke Universiteit Leuven), and Xuegang Ban (University of Washington)
hosted visits by the first author to their respective institutions, and provided
wonderful work environments where much of this writing was done.
Target Audience
This book is primarily intended for first-year graduate students, but is also
written with other potential audiences in mind. The content should be fully
accessible to highly-prepared undergraduate students, and certain specialized
topics would be appropriate for advanced graduate students as well. The text
covers a large number of topics, likely more than would be covered in one or
two semesters, and would also be useful for self-paced learners, or practitioners
who may want in-depth learning on specific topics. We have included some
supplementary material in optional sections, marked with an asterisk, which we
believe are interesting, but which can be skipped without loss of continuity.
The most important prerequisites for this book are an understanding of
multivariate calculus, and the intellectual maturity to understand the tradeoffs
involved in mathematical modeling. Modeling does not admit one-size-fits-all
approaches, and this book avoids dogma about the absolute superiority of one
model or algorithm over another. Instead, its primary intent is to present a
survey of important approaches to modeling transportation network problems,
as well as the context to determine when particular models or algorithms are
appropriate for real-world problems. Readers who can adopt this perspective
will gain the most from the book.
Appendix A covers the mathematics needed for the network models in this
book. Readers of this book will have significantly different mathematical back-
grounds, and this appendix is meant to collect the necessary results and concepts
in one place. Depending on your background, some parts of it may need only a
brief review, while other parts may be completely new.
While this book does not explicitly cover how to program these models and
algorithms into a computer, if you have some facility in a programming language,
it is highly instructive to try to implement them as you read. Many network
algorithms are tedious to apply by hand or with other manual tools (calculator,
spreadsheet). Computer programming will open the door to applying these
models to interesting problems of large, realistic scale.
Contents

Part I: Preliminaries
1 Introduction to Transportation Networks
  1.1 Transportation Networks
  1.2 Examples of Networks
  1.3 The Notion of Equilibrium
  1.4 Traffic Assignment
  1.5 Static Traffic Assignment
    1.5.1 Overview
    1.5.2 Critique
  1.6 Dynamic Traffic Assignment
    1.6.1 Overview
    1.6.2 Critique
  1.7 Historical Notes and Further Reading
  1.8 Exercises
Bibliography
Part I
Preliminaries
Chapter 1
Introduction to
Transportation Networks
This introductory chapter lays the groundwork for traffic assignment, providing
some overall context for transportation planning in Section 1.1. Some examples
of networks in transportation are given in Section 1.2. The key idea in traf-
fic assignment is the notion of equilibrium, which is presented in Section 1.3.
The goals of traffic assignment are described in Section 1.4. Traffic assignment
models can be broadly classified as static or dynamic. Both types of models are
described in this book, and Sections 1.5 and 1.6 provide general perspective on
these types of models.
Dynamic models are more realistic in portraying congestion, but require more
data for calibration and validation, and more computational resources to run.
Solving and interpreting the output of dynamic models is also more difficult.
As research progresses, however, more planners are using dynamic models, par-
ticularly for applications when travel conditions are changing rapidly during
the analysis period. This chapter will present a balanced perspective of the
advantages and disadvantages of dynamic traffic assignment vis-à-vis static as-
signment, but one of them is worth mentioning now: dynamic traffic assignment
models are inherently mode-specific.
That is, unlike in static assignment (where it is relatively easy to build “mul-
timodal” networks mixing roadway, transit, air, and waterway infrastructure),
the vast majority of dynamic traffic assignment models have been specifically
tailored to modeling vehicle congestion on roadways. In recent years, researchers
have started integrating other modes into dynamic traffic assignment, and this
area is likely to receive more attention in years to come. However, the congestion
model for each mode must be custom-built. This is at once an advantage (in that
congestion in different modes arises from fundamentally different sources, and
perhaps ought to be modeled quite differently) and a disadvantage (a “generic”
dynamic traffic assignment model can only be specified at a very high level). For
this reason, this book will focus specifically on automobile traffic on roadway
networks. This is not meant to suggest that dynamic traffic assignment cannot
or should not be applied to other modes, but simply an admission that covering
other modes would essentially require re-learning a new theory for each mode.
Developing such theories would make excellent research topics.
[Figure: network legend showing roadway links, bus route links, transfer links, and park-and-ride nodes.]
planning model, all major and minor arterials may be included as well. For a
more detailed model, individual intersections may be “exploded” so that dif-
ferent links represent each turning movement (Figure 1.2). Other examples of
transportation networks are shown in Table 1.1.
[Figure: toll-setting model.]
are affected by tolls just as the toll revenue is determined by driver choices.
It is fruitful to think of other ways this type of system can be expanded. For
instance, a government agency might set regulations on the maximum and min-
imum toll values, but travelers can influence these policy decisions through the
voting process.
The task of transportation planners is to somehow make useful predictions
to assist with policy decision and alternatives analysis, despite the complexities
which arise when mutually-dependent systems interact. The key idea is that
a good prediction is mutually consistent in the sense that all of the systems
should “agree” with the prediction. As an example, in the basic traffic assign-
ment problem (Figure 1.3), a planning model will provide both a forecast of
travel choices, and a forecast of system congestion. These should be consistent
in the sense that inputting the forecasted travel choices into the supply-side
model should give the forecast of system congestion, and inputting the fore-
casted system congestion into the demand-side model should give the forecast
of travel choices. Such a consistent solution is termed an equilibrium.
The word equilibrium is meant to allude to the concept of economic equilib-
rium, as it is used in game theory. In game theory, several agents each choose
a particular action, and depending on the choices of all of the agents, each
receives a payoff (perhaps monetary, or simply in terms of happiness or satis-
faction). Each agent wants to maximize their payoff. The objective is to find a
“consistent” or equilibrium solution, in which all of the agents are choosing ac-
tions which maximize their payoff (keeping in mind that an agent cannot control
another agent’s decision). A few examples are in order.
Table 1.2: Alice and Bob’s game; Alice chooses the row and Bob the column.

                                          Bob
                            Cactus Café     Desert Drafthouse
Alice  Cactus Café            (−1, −1)           (1, 1)
       Desert Drafthouse       (1, 1)           (−1, −1)
Consider first a game with two players (call them Alice and Bob), who
happen to live in a small town with only two bars (the Cactus Café and the
Desert Drafthouse). Alice and Bob have recently broken off their relationship,
so they each want to go out to a bar. If they attend different bars, both of
them will be happy (signified by a payoff of +1), but if they attend the same
bar an awkward situation will arise and they will regret having gone out at all
(signified by a payoff of −1). Table 1.2 shows the four possible situations which
can arise — each cell in the table lists Alice’s payoff first, followed by Bob’s.
Two of these, the cells with payoff (1, 1), are equilibrium solutions: if
Alice is at the Cactus Café and Bob at the Desert Drafthouse (or vice versa),
they each receive a payoff of +1, which is the best they could hope to receive
given what the other is doing. The states where they attend the same bar are
not equilibria; either of them would be better off switching to the other bar.
This is a game with two equilibria, in each of which Alice and Bob attend
different bars each week.1
A second game involves the tale of Erica and Fred, two criminals who have
engaged in a decade-long spree of major art thefts. They are finally apprehended
by the police, but for a minor crime of shoplifting a candy bar from the grocery
store. The police suspect the pair of the more serious crimes, but have no hard
evidence. So, they place Erica and Fred in separate jail cells. They approach
Erica, offering her a reduced sentence in exchange for testifying against Fred for
the art thefts, and separately approach Fred, offering him a reduced sentence
if he would testify against Erica. If they remain loyal to each other, they will
be convicted only of shoplifting and will each spend a year in jail. If Erica
testifies against Fred, but Fred stays silent, then Fred goes to jail for 15 years
while Erica gets off free. (The same is true in reverse if Fred testifies against
Erica.) If they both testify against each other, they will both be convicted of
the major art theft, but will have a slightly reduced jail term of 14 years for
being cooperative. This game is diagrammed in Table 1.3, where the “payoff”
is the negative of the number of years spent in jail, negative because more years
in jail represents a worse outcome. Surprisingly, the only equilibrium solution
is for both of them to testify against each other. From Erica’s perspective,
she is better off testifying against Fred no matter what Fred will do. If he is
going to testify against her, she can reduce her sentence from 15 years to 14 by
testifying against Fred. If he is going to stay silent, she can reduce her sentence
from one year to zero by testifying against him. Fred’s logic is exactly the same.
1 There is also a third equilibrium in which they each randomly choose a bar each weekend, but equilibria involving randomization are outside the scope of this book.
This seemingly-paradoxical result, known as the prisoner’s dilemma, shows that
agents maximizing their own payoff can actually end up in a very bad situation
when you look at their combined payoffs!
A third game, far less dramatic than the first two, involves Ginger and
Harold, who are retirees passing the time by playing a simple game. Each of
them has a penny, and on a count of three each of them chooses to reveal either
the head or the tail of their penny. If the pennies show the same (both heads
or both tails), Ginger keeps them both. If one penny shows heads and the
other shows tails, Harold keeps them both (Table 1.4). In this case, there is
no equilibrium solution: if Ginger always shows heads, Harold will learn and
always show tails; once Ginger realizes this, she will start showing tails, and so
on ad infinitum.
You may be wondering how these games are relevant to transportation prob-
lems. In fact, the route choice decision can be seen as a game with a very large
number of players. Some drivers may choose to avoid the freeway, anticipating
a certain level of congestion and trying to second-guess what others are doing —
but surely other drivers are engaging in the exact same process.2
2 To borrow from Yogi Berra, nobody takes the freeway during rush hour anymore — it’s too congested.
Each of these three games has bearing on the traffic assignment problem. The
game with Alice and Bob shows that some games have more than one equilibrium
solution (an issue of equilibrium uniqueness), although the two equilibrium
solutions are the same if you just look at the total number of people at each
bar and not at which individuals go to each. What does it mean for transportation
planning if
a model can give several seemingly valid predictions? The game with Erica and
Fred shows that agents individually doing what is best for themselves may lead
to an outcome which is quite bad overall, an issue of equilibrium efficiency. As
we will see later on, in transportation systems this opens the door for seemingly
helpful projects (like capacity expansion on congested roads) to actually make
things worse. The game with Ginger and Harold is a case where there is no
equilibrium at all (an issue of equilibrium existence). If this could happen in a
transportation planning model, then perhaps equilibrium is the wrong concept
to use altogether. These questions of uniqueness, efficiency, and existence are
important, and will appear throughout the book.
The three example games described above can be analyzed directly, by enu-
merating all the possible outcomes. However, transportation systems involve
thousands or even millions of different “players” and an analysis by enumera-
tion is hopeless. The good news is that the number of players is so great that
little is lost in assuming that the players can be treated as a continuum. 3
This allows us to work with smooth functions, greatly simplifying the process
of finding equilibria. The remainder of this chapter introduces the basic traffic
assignment problem in terms of the equilibrium concept and with a few moti-
vating examples, but still in generally qualitative terms and restricted to small
networks. The following two chapters provide us with the mathematical vocab-
ulary and network tools needed to formulate and solve equilibrium on realistic,
large-scale systems.
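For games as small as these, the enumeration of outcomes described above is easy to automate. The sketch below (a minimal Python illustration, with the payoff tables transcribed from the three examples) marks a cell as an equilibrium when each player's action is a best response to the other's:

```python
from itertools import product

def pure_equilibria(payoffs):
    """Enumerate pure-strategy equilibria of a two-player game.

    payoffs[(i, j)] = (row player's payoff, column player's payoff)
    when the row player picks action i and the column player picks j.
    """
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    equilibria = []
    for i, j in product(rows, cols):
        r, c = payoffs[(i, j)]
        # (i, j) is an equilibrium if neither player can gain by
        # unilaterally switching actions.
        row_best = all(payoffs[(i2, j)][0] <= r for i2 in rows)
        col_best = all(payoffs[(i, j2)][1] <= c for j2 in cols)
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Alice and Bob's bar game (Table 1.2): two equilibria, at different bars.
bars = {("Cafe", "Cafe"): (-1, -1), ("Cafe", "Drafthouse"): (1, 1),
        ("Drafthouse", "Cafe"): (1, 1), ("Drafthouse", "Drafthouse"): (-1, -1)}
print(pure_equilibria(bars))

# Erica and Fred's prisoner's dilemma (payoff = minus years in jail):
# the only equilibrium is mutual testimony.
pd = {("silent", "silent"): (-1, -1), ("silent", "testify"): (-15, 0),
      ("testify", "silent"): (0, -15), ("testify", "testify"): (-14, -14)}
print(pure_equilibria(pd))

# Ginger and Harold's matching pennies: no pure-strategy equilibrium.
pennies = {("H", "H"): (1, -1), ("H", "T"): (-1, 1),
           ("T", "H"): (-1, 1), ("T", "T"): (1, -1)}
print(pure_equilibria(pennies))
```

Running the three games through the same routine reproduces the uniqueness, efficiency, and existence observations directly: two equilibria for the bar game, one (inefficient) equilibrium for the prisoner's dilemma, and none for matching pennies.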
[Figure 1.5: the four-step model, beginning from demographic data.]
To get link flows from demographic data, most regions use the so-called four-
step model (Figure 1.5). The first step is trip generation: based on demographic
data, how many trips will people make? The second is trip distribution: once
we know the total number of trips people make, what are the specific locations
people will travel to? The third is mode choice: once we know the trip locations,
will people choose to drive, take the bus, or use another mode? The fourth and
final step is route choice, also known as traffic assignment: once we know the
modes people will take to their trip destinations, what routes will they choose?
Thus, at the end of the four steps, the transition from demographic data to link
flows has been accomplished. 4
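The chain from demographic data to link flows can be sketched schematically. In the toy Python example below, every rate, zone, and route is hypothetical and chosen only to show how the output of each step feeds the next; real implementations of each step fill entire chapters of the planning literature:

```python
# Toy, self-contained sketch of the four-step model for a two-zone city.
# All numbers are made up for illustration.

def trip_generation(zones):
    # Step 1: e.g., assume each household makes two trips per day.
    return {z: 2.0 * data["households"] for z, data in zones.items()}

def trip_distribution(productions, attractions):
    # Step 2: split each origin's trips in proportion to how
    # "attractive" each destination zone is.
    total = sum(attractions.values())
    return {(o, d): trips * attractions[d] / total
            for o, trips in productions.items() for d in attractions}

def mode_choice(od_trips, auto_share=0.9):
    # Step 3: assume a fixed share of trips is made by automobile.
    return {pair: trips * auto_share for pair, trips in od_trips.items()}

def traffic_assignment(auto_trips, route_for):
    # Step 4: all-or-nothing assignment -- each OD pair loads onto one
    # pre-specified route (real traffic assignment is the subject of
    # the rest of this book).
    link_flows = {}
    for pair, trips in auto_trips.items():
        for link in route_for[pair]:
            link_flows[link] = link_flows.get(link, 0.0) + trips
    return link_flows

zones = {"A": {"households": 100}, "B": {"households": 50}}
attractions = {"A": 1.0, "B": 3.0}
route_for = {("A", "A"): [], ("B", "B"): [],
             ("A", "B"): [("A", "B")], ("B", "A"): [("B", "A")]}

productions = trip_generation(zones)
od_trips = trip_distribution(productions, attractions)
auto_trips = mode_choice(od_trips)
link_flows = traffic_assignment(auto_trips, route_for)
print(link_flows)
```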
Demographics are not uniform in a city; some areas are wealthier than others,
some areas are residential while others are commercial, some parts are more
crowded while other parts have a lower population density. For this reason,
planners divide a city into multiple zones, and assume certain parameters within
each zone are uniform. Clearly this is only an approximation to reality, and the
larger the number of zones, the more accurate the approximation. (At the
extreme, each household would be its own zone and the uniformity assumption
becomes irrelevant.) On the other hand, the more zones, the longer it takes to
run each model, and at some point computational resources become limiting.
Typical networks used for large metropolitan areas have a few thousand zones.
Zones are often related to census tracts, to make it easy to get demographic
information from census results.
The focus of this book is the last of the four steps, traffic assignment. In the
beginning, we assume that the first three steps have been completed, and we
know the number of drivers traveling between each origin zone and destination
zone. From this, we want to know how many drivers are going to use each road-
4 In more sophisticated models, the four steps may be repeated again, to ensure that the
end results are consistent with the input data. There are also newer and arguably better
alternatives to the four-step model.
Figure 1.6: Centroids (shaded) coinciding with existing infrastructure, and ar-
tificial centroids. Dashed links on the right represent centroid connectors.
way segment, from which we can estimate congestion, emissions, toll revenue,
or other measures of interest.
We’ve already discussed several of the pieces of information we need in or-
der to describe traffic assignment models precisely, including zones and travel
demand. The final piece of the puzzle is a representation of the transportation
infrastructure itself: the transportation network described more in the next
chapter.
It is usually convenient to use a node to represent each zone; such nodes
are called centroids, and all trips are assumed to begin and end at centroids.
The set of centroids is thus a subset of the set of nodes, defined in the next
chapter. Centroids may coincide with physical nodes in the network. Centroids
may also represent artificial nodes which do not correspond to any one physical
point, and are connected to the physical infrastructure with links called centroid
connectors (dashed lines in Figure 1.6).
1.5.1 Overview
In static assignment, the traffic flow model is based on link performance func-
tions, which map the flow on each link to the travel time on that link. Mathematically,
if the notation $(i,j)$ is used to refer to a roadway link connecting two
nodes $i$ and $j$, then $x_{ij}$ is the flow on link $(i,j)$ and the function $t_{ij}(x_{ij})$ gives
the travel time on link $(i,j)$ as a function of the flow on $(i,j)$. These functions
$t_{ij}(\cdot)$ are typically assumed to be nonnegative, nondecreasing, and convex,
reflecting the idea that as more vehicles attempt to drive on a link, the greater
the congestion and the higher the travel times will be. A variety of link
performance functions exist, but the most popular is the Bureau of Public Roads
(BPR) function, which takes the form
\[
t_{ij}(x_{ij}) = t_{ij}^0 \left[ 1 + \alpha \left( \frac{x_{ij}}{u_{ij}} \right)^{\beta} \right] \tag{1.1}
\]
where $t_{ij}^0$ and $u_{ij}$ are the free-flow time and capacity of link $(i,j)$, respectively,
and $\alpha$ and $\beta$ are shape parameters which can be calibrated to data. It is common
to use α = 0.15 and β = 4.
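For concreteness, the BPR function is easy to evaluate directly. The sketch below uses a hypothetical link with a 10-minute free-flow time and a capacity of 4000 vehicles per hour:

```python
def bpr_travel_time(flow, free_flow_time, capacity, alpha=0.15, beta=4.0):
    """Travel time on a link from the BPR function (1.1)."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

# Hypothetical link: 10-minute free-flow time, capacity 4000 veh/h.
print(bpr_travel_time(0, 10, 4000))     # empty link: 10.0 minutes (free-flow)
print(bpr_travel_time(4000, 10, 4000))  # flow equal to capacity: 11.5 minutes
print(bpr_travel_time(8000, 10, 4000))  # "flow" twice capacity: 34.0 minutes
```

Note that nothing in the function stops the flow-to-capacity ratio from exceeding one, a point taken up in the critique below.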
With such functions, the more travelers choose a path, the higher its travel
time will be. Since travelers seek to minimize their travel time, travelers will
not choose a path with high travel time unless there is no other option avail-
able. Indeed, if travelers only choose paths to minimize travel time, and if they
have perfect knowledge of network conditions, then the network state can be
described by the principle of user equilibrium: all used paths between the same
origin and destination have equal and minimal travel times, for if this were not
the case travelers would switch from slower routes to faster ones, which would
tend to equalize their travel times.
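The user equilibrium principle can be seen on the simplest possible network: two parallel links serving one origin-destination pair. The sketch below (with hypothetical BPR parameters; full solution methods come in Part II) finds the split of demand that equalizes travel times by bisection:

```python
def bpr(free_flow_time, capacity, alpha=0.15, beta=4.0):
    """Return a BPR link performance function with the given parameters."""
    return lambda x: free_flow_time * (1.0 + alpha * (x / capacity) ** beta)

def two_link_equilibrium(demand, t1, t2, tol=1e-9):
    """Split demand across two parallel links so travel times on used links
    are equal and minimal (user equilibrium), by bisection on x1."""
    # If one link is faster even while carrying all demand, it gets everything.
    if t1(demand) <= t2(0.0):
        return demand, 0.0
    if t2(demand) <= t1(0.0):
        return 0.0, demand
    lo, hi = 0.0, demand
    while hi - lo > tol:
        x1 = 0.5 * (lo + hi)
        if t1(x1) < t2(demand - x1):
            lo = x1   # link 1 is still faster: shift more flow onto it
        else:
            hi = x1
    return x1, demand - x1

t1 = bpr(10.0, 4000.0)   # hypothetical link parameters
t2 = bpr(12.0, 4000.0)
x1, x2 = two_link_equilibrium(5000.0, t1, t2)
print(round(t1(x1), 3), round(t2(x2), 3))  # travel times equalize
```

The bisection works because $t_1$ is nondecreasing in $x_1$ while $t_2(demand - x_1)$ is nonincreasing, so there is a single crossing point.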
It is not difficult to show that this user equilibrium state is not socially op-
timal, and that other configurations of traffic flow can reduce the average travel
time (or even the travel time for all drivers) compared to the user equilibrium
state. In other words, individual drivers seeking to minimize their own travel
times will not always minimize travel times throughout the network, and this
latter system optimal state can be contrasted with the user equilibrium one.
The prime advantage of using link performance functions like that in equa-
tion (1.1) is that the user equilibrium and system optimum states can be found
with relative ease, even in realistic networks involving tens of thousands of
links. Part II of the book discusses this in detail, showing how the static assign-
ment problem can be formulated using the mathematical tools of optimization,
fixed point, and variational inequality problems. These three representations of
the equilibrium problem can be linked to powerful mathematical results which
assure the existence and uniqueness of user equilibrium solutions under mild
conditions. Efficient algorithms allow these states to be identified in a matter
of minutes on large-scale networks.
For these reasons, static traffic assignment has been widely used in trans-
portation planning practice for decades, and remains a powerful tool that can
be used for performing alternatives analysis and generating forecasts of network
conditions.
1.5.2 Critique
There are also a number of serious critiques of static assignment models, focused
primarily on the link performance functions. By definition, static models do
not monitor how network conditions (either demand or congestion) change over
time, and implicitly assume a steady-state condition. This is clearly not the
case. There are additional, subtler and more fundamental problems with link
performance functions. This section describes a few of these problems.
First, not all vehicles on a link experience the same travel time. Even if
the demand for travel on a link exceeds the capacity, the first vehicles to travel
on that link will not experience much delay at all, while vehicles which arrive
later may experience a very high delay. Together with the principle of user
equilibrium, this means that the paths chosen by travelers will also depend on
when they are departing. Route choices during periods of high congestion will be
different from route choices made while this congestion is forming or dissipating.
Furthermore, the travel time faced by a driver on a link depends crucially on the
vehicles in front of them, and very little on the vehicles behind them. (A driver
must adjust their speed to avoid colliding with vehicles downstream; a driver
has no obligation to change their speed based on vehicles behind them.) This
asymmetry is known as the anisotropic property of traffic flow, and it is violated
by link performance functions — an increase in flow on the link is assumed to
increase the travel time of all vehicles, and directly using link performance
functions in a dynamic model would lead to undesirable phenomena, such as
vehicles entering a link immediately raising the travel time for all other vehicles
on the link, even those at the downstream end.
Second, the use of the word “flow” in static assignment is problematic. In
traffic engineering, flow is defined as the (time) rate at which vehicles pass a
fixed point on the roadway, and capacity is the greatest possible value for flow.
By definition, flow cannot exceed capacity. However, the BPR function (1.1)
imposes no such restriction, and it is common to see “flow-to-capacity” ratios
much greater than one in static assignment.5 Instead, the $x_{ij}$ values in static
assignment are better thought of as demand rather than actual flow, since there
is no harm in assuming that the demand for service exceeds the capacity, but it is
impossible for the flow itself to exceed capacity. And for purposes of calibration,
demand is much harder to observe than actual flow. These issues do not have
clean resolutions.
Third, and related to the previous issue, link performance functions suggest
that lower-capacity links have higher travel times under the same demand. But
consider what happens at a freeway bottleneck, such as the lane drop shown in
Figure 1.7. Congestion actually forms upstream of a bottleneck, and downstream
of the lane drop there is no reason for vehicles to flow at a reduced speed. In
reality, it is upstream links that suffer when the demand for traveling on a link
exceeds the capacity, not the bottleneck link itself.
5 There are several ways to add such a restriction, but these are less than satisfactory.
Figure 1.7: Congestion arises upstream of a bottleneck, not on the link with
reduced capacity.
Fourth, in congested urban systems it is very common for queues to fill the
entire length of a link, causing congestion to spread to upstream links. This is
observed on freeways (congested offramp queues) and in arterials (gridlock in
central business districts) and is a major contributor to severe delay. In addition
to the capacity, which is a maximum flow rate, real roadways also have a jam
density, a maximum spatial concentration of vehicles. If a link is at jam density,
no more vehicles can enter, which will create queues on upstream links. If these
queues continue to grow, they will spread even further upstream to other links.
Capturing this phenomenon can greatly enhance the realism of traffic models.
For all of these reasons, dynamic traffic assignment models shift to an en-
tirely different traffic flow model. Rather than assuming simple, well-behaved
link performance functions for each link we turn to traffic flow theory, to find
more realistic ways to link traffic flow to congestion. Some early research in
dynamic traffic assignment attempted to retain the use of link performance
functions — for instance, modeling changes in demand by running a sequence
of static assignment models over shorter time periods, with methods for linking
together trips spanning multiple time periods. While this addresses the obvi-
ous shortcoming of static models, that they cannot accommodate changes in
demand or model changes in congestion over time, it does nothing to address
the more serious and fundamental problems with link performance functions
described above. For this reason, this approach has largely been abandoned in
favor of entirely different traffic flow models.
1.6.1 Overview
The biggest difference between static and dynamic traffic assignment is in the
traffic flow models used. A number of different traffic flow models are available,
and several are discussed in the following chapter. At a minimum, a
traffic flow model for dynamic traffic assignment must be able to track changes
in congestion at a fairly fine resolution, on the order of a few seconds. To do
this, the locations of vehicles must be known at the same resolution. Most of
them also address some or all of the shortcomings of link performance functions
identified above.
The behavior of drivers is similar to that in static assignment in that drivers
choose paths with minimum travel time. However, since congestion (and there-
fore travel time) changes over time, the minimum-time paths also change with
time. Therefore, the principle of user equilibrium is replaced with a principle
of dynamic user equilibrium: All paths used by travelers departing the same
origin, for the same destination, at the same time, have equal and minimal
travel times. Travelers departing at different times may experience different
travel times; all the travelers departing at the same time will experience the
same travel time at equilibrium, regardless of the path they choose. By virtue
of representing demand changes over time, dynamic traffic assignment can also
incorporate departure time choice of drivers, as well as route choice. Some dy-
namic traffic assignment models include both of these choices, while others focus
only on route or departure time choice. Which choices are appropriate to model
depends on which is more significant for a particular application, as well as the
data and computational resources available. Most chapters of this book focus
only on route choice, and as a general rule we will assume that departure times
are fixed. In a few places we show how departure time choice can be added in.
It is important to specify that this equilibrium is based on the travel times
the drivers actually experience, not the “instantaneous” travel times at the
moment they depart. That is, we do not simply assume that drivers will pick
the fastest route based on current conditions (as would be provided by most
advanced traveler information services), but that they will anticipate changes in
travel times which will occur during their journey. This suggests that drivers are
familiar enough with the network that they know how congestion changes with
time. This distinction is important — dynamic traffic assignment equilibrates
on experienced travel times, not instantaneous travel times.
This means that it is impossible to find the dynamic user equilibrium in one
step. Experienced travel times cannot be calculated at the moment of departure,
but only after the vehicle has arrived at the destination. Therefore, dynamic
traffic assignment is an iterative process, where route choices are updated at
each iteration until an (approximate) dynamic user equilibrium solution has
been found. This iterative process virtually always involves three steps, shown
in Figure 1.8:
[Figure 1.8: the three-step iterative process of dynamic traffic assignment.]
Network loading: This is the process of using a traffic flow model to calcu-
late the (time-dependent) travel times on each link, taking the routes and
departure times of all drivers as inputs. In static assignment, this step
was quite simple, involving nothing more than evaluating the link perfor-
mance functions for each network link. In dynamic traffic assignment, this
involves the use of a more sophisticated traffic flow model, or even the use
of a traffic simulator. Network loading is discussed in Chapter 9.
Path finding: Once network loading is complete, the travel time of each link,
at each point in time, is obtained. From this, we find the shortest path
from each origin to each destination, at each departure time. Since we
need experienced travel times, our shortest path finding must take into
account that the travel time on each link depends upon the time a vehicle
enters. This requires time-dependent shortest path algorithms, which are
discussed in Chapter 10.
Route updating: Once time-dependent shortest paths have been found for
all origins, destinations, and departure times, vehicles can be shifted from
their current paths onto these new, shortest paths. As in static assignment,
this step requires care, because shifting vehicles will change the path travel
times as well. A few options for this are discussed in Chapter 11, along
with other issues characterizing dynamic equilibrium. Unfortunately, and
in contrast with static assignment, dynamic user equilibrium need not
always exist, and when it exists it need not be unique.
1.6.2 Critique
The primary advantage of dynamic traffic assignment, and a significant one,
is that the underlying traffic flow models are much more realistic. Link per-
formance functions used in static assignment are fundamentally incapable of
representing many important traffic phenomena. However, dynamic traffic assignment is not without drawbacks of its own.
1.8 Exercises
1. [10] If all models are wrong, how can some of them be useful?
2. [10] All of the links in Figure 1.1 have a “mirror” connecting the same
nodes, but in the opposite direction. When might mirror links not exist?
3. [23] A network is called planar if it can be drawn in two dimensions with-
out links crossing each other. (The left network in Figure 1.2 is planar,
but not the network on the right.) Do we expect to see planar graphs in
transportation network modeling? Does it depend on the mode of trans-
portation? What other factors might influence whether a transportation
network is planar?
4. [35] Table 1.1 shows how networks can represent five types of transporta-
tion infrastructure. List at least five more systems (not necessarily in
transportation) that can be modeled by networks, along with what nodes
and links would represent.
5. [45] What factors might determine how large of a geographic area is modeled in a transportation network (e.g., corridor, city, region, state, national)? Illustrate your answer by showing how they would apply to the hypothetical projects or policies at the start of Section 1.1.
6. [45] What factors might determine the level of detail in a transportation
network (e.g., freeways, major arterials, minor arterials, neighborhood
streets)? Illustrate your answer by showing how they would apply to the
hypothetical projects or policies at the start of Section 1.1.
7. [21] Provide an intuitive explanation of the prisoner’s dilemma, as de-
scribed in the Erica-Fred game of Table 1.3. Why does it happen? Name
other real-world situations which exhibit a similar phenomenon.
8. [20] For each of the following games, list all equilibria (or state that none
exist). In which of these games do equilibria exist; in which are the equi-
libria unique; and in which are there some equilibria which are inefficient?
These games are all played by two players A and B: A chooses the row
and B chooses the column, and each cell lists the payoffs to A and B, in
that order.
(a)        L         R
U       (8, 13)   (5, 14)
D       (10, 10)  (7, 12)

(b)        L         R
U       (12, 12)  (2, 10)
D       (10, 2)   (4, 5)

(c)        L         R
U       (8, 6)    (10, 8)
D       (2, 3)    (8, 4)

(d)        L         R
U       (4, 5)    (5, 6)
D       (3, 4)    (6, 3)
9. [1] Explain the difference between the demand for travel on a link, and
the flow on a link.
10. [42] Explain why a less realistic model that is less sensitive to errors in its input data may be preferred to a more realistic model that is more sensitive to its inputs, and in what circumstances. Give specific examples.
Chapter 2
Network Representations
and Algorithms
This chapter introduces networks as they are used in the transportation field.
Section 2.1 introduces the mathematical terminology and notation used to de-
scribe network elements. Section 2.2 discusses two special kinds of networks
which are used frequently in network analysis, acyclic networks and trees. Sec-
tion 2.3 then describes several ways of representing a network in a way computers
can use when solving network problems. This third section can be skipped if
you do not plan to do any computer programming. Section 2.4 describes the
shortest path problem, a classic network algorithm which plays a central role in
traffic assignment.
2.1 Terminology
Because the transportation networks field is relatively young, a variety of notational conventions are in use. A common notation is adopted in this book to present the methods and topics in a consistent manner, but you should be aware that other authors may use slightly different conventions and definitions.
Table 2.1 gives an example of some terms which are often used synonymously
(or nearly synonymously) with ours. These differences are mainly a matter of
style, not substance, but when reading other books or papers you should pay
careful attention to the exact wording of their definitions.
The fundamental construct we will utilize is the network. A network is
a collection of nodes, and a collection of links which connect the nodes. A
network is denoted G = (N, A), where N is the set of nodes and A is the
set of links. Figure 2.1(a) shows a simple network, with four nodes in the set
N = {1, 2, 3, 4} and five links in the set A = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}.
Notice that the notation for each link contains the two nodes connected by the
link: the upstream node is called the tail of the link, and the downstream node
the head. We will often refer to the total number of nodes in a network as n
23
Figure 2.1: Example network notation (nodes 1–4 and links (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)).
and the total number of links as m. This book is solely concerned with the
case where n and m are finite — networks with infinitely many nodes and links
are sometimes studied in theoretical mathematics, but rarely in transportation
applications.
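In code, the node and link sets of a small network can be written down directly; for instance, the network of Figure 2.1 might be stored as follows (a Python sketch, not taken from the text):

```python
# The network of Figure 2.1: G = (N, A).
N = {1, 2, 3, 4}
A = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}

n = len(N)   # number of nodes
m = len(A)   # number of links

# Each link is an ordered pair: tail (upstream) first, head (downstream) second.
tail, head = 1, 2   # the link (1, 2)
```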
All of the networks in this book are directed. In a directed network, a link
can only be traversed in one direction, specified by the ordering of the nodes.
Therefore, (1,2) and (2,1) represent different links: they connect the same nodes,
but represent travel in opposite directions. Unless stated otherwise, we assume
that there are no “self-loops” (i, i) from a node to itself, and no parallel links, so
the notation (i, j) is unambiguous. This is not usually a restrictive assumption,
since we can introduce artificial nodes to split up self-loops or parallel links,
as shown in Figure 2.2. The upper left panel in this figure shows a network
with a self-loop (2,2). By introducing a fourth node in the bottom left, we have
divided the self-loop into two links (2,4) and (4,2). In the upper right panel of the figure, there are two links connecting nodes 2 and 3, so the notation (2,3) does not tell us which of these two links we are referring to. By introducing
a fourth node in the bottom right, we now have three links: (2,3), (2,4), and
(4,3), which collectively represent both of the ways to travel between nodes 2
and 3 in the original network, but without parallel links. These new nodes
are artificial in the sense that they do not represent physical transportation
infrastructure, but are inserted for modeling convenience. Artificial nodes (and
Figure 2.2: Artificial links can be introduced to simulate self-loops and parallel links.
Figure 2.4: Networks which are (a) strongly connected; (b) connected, but not
strongly connected; (c) disconnected.
or more compactly, by the nodes passed on the way with the notation
$[i_0, i_1, i_2, i_3, \ldots, i_{k-1}, i_k]$.
A path is a cycle if the starting and ending nodes are the same (i0 = ik ). Paths
which contain a cycle are called cyclic; paths which do not have a cycle as a
component are called acyclic. Cyclic paths are uncommon in transportation
problems, so unless stated otherwise, we only consider acyclic paths. Let $\Pi^{rs}$ denote the set of all acyclic paths connecting nodes r and s, and let $\Pi$ denote the set of all acyclic paths in the entire network, that is, $\Pi = \bigcup_{(r,s) \in N^2} \Pi^{rs}$. A
network is connected if there is at least one path connecting any two nodes in
the network, assuming that the links can be traversed in either direction (that
is, ignoring the direction of the link); otherwise it is disconnected. A network
is strongly connected if there is at least one path connecting any two nodes
in the network, obeying the given directions of the links. Figure 2.4 shows
networks which are strongly connected; connected, but not strongly connected;
and disconnected.
However, the transportation community generally uses the term "tree" in both cases, a convention followed in this book.
Figure 2.7: Graph used for network representation examples (nodes 1–4; links (1, 2), (1, 3), (2, 3), (2, 4), (3, 4), numbered 1–5).
Figure 2.8: Node-link incidence matrix for the network in Figure 2.7.
      1  2  3  4
  1   0  1  1  0
  2   0  0  1  1
  3   0  0  0  1
  4   0  0  0  0

Figure 2.9: Node-node adjacency matrix for the network in Figure 2.7.
In this structure, a non-zero entry provides both existence and direction infor-
mation. An example of a node-node adjacency matrix is given in Figure 2.9.
In very dense networks, with many more links than nodes ($m \gg n$), node-node adjacency matrices can be efficient.
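As a sketch (not from the text), the matrix in Figure 2.9 can be generated from a list of links, with rows indexed by tail node and columns by head node:

```python
# Build the node-node adjacency matrix for the network of Figure 2.7.
N = [1, 2, 3, 4]
A = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]

n = len(N)
adj = [[0] * n for _ in range(n)]
for (i, j) in A:
    adj[i - 1][j - 1] = 1   # row = tail node, column = head node

# A nonzero entry encodes both existence and direction: adj[0][1] == 1
# because link (1, 2) exists, while adj[1][0] == 0 because (2, 1) does not.
```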
A shortcoming of the previous data structures is that they do not contain
information beyond the existence and direction of the links in a network. In
transportation applications we are typically working with networks for which
we want to store and use more information about each element. For example,
we may want to store information about the travel time, number of lanes (ca-
pacity), speed limit, signalization, etc. for each link. Adjacency lists give us an
opportunity to store this information efficiently in a manner that is compatible
with programming languages and their built-in data structures.
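For instance, in Python an adjacency list can be a dictionary mapping each node to its downstream neighbors, with each neighbor carrying a dictionary of link attributes (the attribute values below are made up for illustration):

```python
# Adjacency list with link attributes, stored as nested dictionaries.
network = {
    1: {2: {"travel_time": 2, "lanes": 2},
        3: {"travel_time": 4, "lanes": 1}},
    2: {3: {"travel_time": 1, "lanes": 2},
        4: {"travel_time": 5, "lanes": 3}},
    3: {4: {"travel_time": 2, "lanes": 1}},
    4: {},
}

# The nodes downstream of node 2 are the keys of its inner dictionary,
# and each link's attributes are one lookup away.
downstream_of_2 = sorted(network[2])
t23 = network[2][3]["travel_time"]
```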
Storing a network as an array is more efficient in terms of space, and is
easier to work with in some programming languages. Storage is not wasted
on a large number of zeros, as with the adjacency matrices. A “forward star”
representation is presented below. In the forward star representation, links
are sorted in order of their tail node, as shown in Table 2.2. A second array,
called point, is used to indicate where the forward star of each node begins. If
the network has n nodes and m links, the point array has n + 1 entries, and
point(n + 1) is always equal to m + 1; the reason why is discussed below. As
shown in Table 2.2, point(2) = 3 because link 3 is the first link adjacent to node 2; point(3) = 5 because link 5 is the first link adjacent to node 3, and
so forth. Three conventions associated with the star representation are:
1. The number of entries in the point array is one more than the number of
nodes.
2. We automatically set point(n+1) = m + 1.
3. If node i has no outgoing links, then point(i) = point(i+1).
With these conventions, we can say that the forward star for any node i con-
sists of all links whose IDs are greater than or equal to point(i), and strictly
less than point(i+1). This representation is most useful when we frequently
need to loop over all of the links leaving a particular node, which can be accom-
plished by programming a “for” loop between point(i) and point(i+1) - 1,
and referring to the link arrays with these indices. We can make this statement
universally, because we defined the point array to have one more entry than the number of nodes. If point only had n entries, then referring to point(n+1) when scanning the forward star of the last node would not be possible.
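A sketch of the forward star arrays for the Figure 2.7 network follows (the link numbering is assumed to be consistent with the point values quoted above; index 0 is unused padding so the arrays match the 1-based indexing of the text):

```python
# Forward star representation: links sorted by tail node.
n, m = 4, 5
link_tail = [None, 1, 1, 2, 2, 3]   # link IDs 1..5
link_head = [None, 2, 3, 3, 4, 4]

# point[i] = ID of the first link leaving node i; point[n + 1] = m + 1.
point = [None, 1, 3, 5, 6, 6]

def forward_star(i):
    """Links leaving node i: IDs from point[i] up to point[i + 1] - 1."""
    return list(range(point[i], point[i + 1]))

# Node 4 has no outgoing links, so point[4] == point[5] and its forward
# star is empty; no special case is needed for the last node.
```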
Figure 2.10: Example network for shortest paths (link costs: $c_{12} = 2$, $c_{13} = 4$, $c_{23} = 1$, $c_{24} = 5$, $c_{34} = 2$).
erybody’s route choices held fixed. By separating the process of identifying the
shortest path from the process of shifting path flows toward the shortest path,
we simplify the problem and end up with something which is relatively easy to
solve. This problem can also be phrased in the language of optimization, as
shown in Appendix B.4, by identifying an objective function, decision variables,
and constraints. Here, we develop specialized algorithms for the shortest path
problem which are more efficient and more direct than general optimization
techniques.
Although the shortest path problem most obviously fits into transportation networks, many other applications exist in construction management, geometric design, operations research, and many other areas. For instance, the fastest way to solve a Rubik's Cube from a given position can be found using a shortest path algorithm, as can the smallest number of links needed to connect an actor to Kevin Bacon when playing Six Degrees of Separation.
A curious fact about shortest path algorithms is that finding the shortest path from a single origin to every other node is only slightly harder than finding the shortest path from that origin to a single destination. This also happens to be the reason why we can find shortest paths without having to enumerate all of the zillions of possible paths from an origin to a destination and add up their costs.
This common reason is Bellman’s Principle, which states that any segment of a
shortest path must itself be a shortest path between its endpoints. For instance,
consider the network in Figure 2.10, where the costs cij are printed next to each
link. The shortest path from node 1 to node 4 is [1, 2, 3, 4]. Bellman’s Principle
requires that [1, 2, 3] also be a shortest path from nodes 1 to 3, and that [2, 3, 4]
be a shortest path from nodes 2 to 4. It further requires that [1, 2] be a shortest
path from node 1 to node 2, [2, 3] be a shortest path from node 2 to node 3,
and [3, 4] be a shortest path from node 3 to node 4. You should verify that this
is true with the given link costs.
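One way to check is by brute force, enumerating every acyclic path (a Python sketch, not the book's code):

```python
# Verify Bellman's Principle on Figure 2.10 by enumerating acyclic paths.
cost = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 5, (3, 4): 2}
succ = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}

def paths(r, s, prefix=None):
    """Depth-first enumeration of all acyclic paths from r to s."""
    prefix = prefix or [r]
    if r == s:
        yield prefix
        return
    for j in succ[r]:
        if j not in prefix:
            yield from paths(j, s, prefix + [j])

def best(r, s):
    """Minimum-cost path among all acyclic paths from r to s."""
    return min(paths(r, s),
               key=lambda p: sum(cost[i, j] for i, j in zip(p, p[1:])))

# best(1, 4) is [1, 2, 3, 4], and each of its segments is itself optimal:
# best(1, 3) is [1, 2, 3] and best(2, 4) is [2, 3, 4].
```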
To see why this must be the case, assume that Bellman’s Principle was
violated. If the cost on link (1, 3) was reduced to 2, then [1, 2, 3] is no longer a
shortest path from node 1 to node 3 (that path has a cost of 3, while the single-
Figure 2.11: The shortest path from origin r to node i must pass through one of the nodes immediately upstream of i (here f, g, and h).
link path [1, 3] has cost 2). Bellman’s Principle then implies that [1, 2, 3, 4] is
no longer the shortest path between nodes 1 and 4. Why? The first part of the
path can be replaced by [1, 3] (the new shortest path between 1 and 3), reducing
the cost of the path from 1 to 4: [1, 3, 4] now has a cost of 4. In general, if a
segment of a path does not form the shortest path between two nodes, we can
replace it with the shortest path, and thus reduce the cost of the entire path.
Thus, the shortest path must satisfy Bellman’s Principle for all of its segments.
The implication of this is that we can construct shortest paths one node at a
time, proceeding inductively. Let’s say we want to find the shortest path from
node r to a node i, and furthermore let’s assume that we’ve already found the
shortest paths from r to every node which is directly upstream of i (nodes f , g,
and h in Figure 2.11). The shortest path from r to i must pass through either f ,
g, or h; and according to Bellman's Principle, the shortest path from r to i must be either (a) the shortest path from r to f, plus link (f, i); (b) the shortest path from r to g, plus link (g, i); or (c) the shortest path from r to h, plus link (h, i). This
is efficient because, rather than considering all of the possible paths from r to i,
we only have to consider three, which can be easily compared. Furthermore, we
can re-use the information we found when finding shortest paths to f , g, and h,
and don’t have to duplicate the same work when finding the shortest path to i.
This idea doesn’t give a complete algorithm yet — how did we find the shortest
paths to f , g, and h, for instance? — but gives the flavor of the shortest path
algorithms presented next.
Bellman’s Principle also gives us a compact way of expressing all of the
shortest paths from an origin to every other node in the network: for each node,
simply indicate the last node in the shortest path from that origin. This is
called the backnode vector qr , where each component qir is the node immediately
preceding i in the shortest path from r to i. If i = r, then qir is not well-defined
(i is the origin itself; what is the shortest path from the origin to itself, and
if we can define it, what node immediately precedes the origin?) so we say
qrr ≡ −1 by definition. For the network in Figure 2.10 (with the original costs),
we thus have $q_1^1 = -1$, $q_2^1 = 1$, $q_3^1 = 2$, and $q_4^1 = 3$, or, in vector notation,
$$\mathbf{q}^1 = \begin{bmatrix} -1 & 1 & 2 & 3 \end{bmatrix}.$$
The backnode vector can be used as follows: say we want to look up the
shortest path from node 1 to node 4. Starting at the destination, the backnode
of 4 is 3, which means “the shortest path to node 4 is the shortest path to
node 3, plus the link (3, 4).” To find the shortest path to node 3, consult its
backnode: “the shortest path to node 3 is the shortest path to node 2, plus the
link (2, 3).” For the shortest path to node 2, its backnode says: “the shortest
path to node 2 is from node 1.” This is the origin, so we’ve found the start
of the path, and can reconstruct the original path to node 4: [1, 2, 3, 4]. More
briefly, we can use the backnodes to trace the shortest path back to an origin,
by starting from the destination, and reading back one node at a time.
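This trace-back takes only a few lines of code; here is a sketch using the backnode vector computed above (stored as a dictionary for readability):

```python
# Reconstruct a shortest path by following backnodes to the origin.
backnode = {1: -1, 2: 1, 3: 2, 4: 3}   # the vector q^1 for Figure 2.10

def trace_path(backnode, destination):
    """Walk backwards from the destination until the origin (-1) is hit."""
    path = [destination]
    while backnode[path[-1]] != -1:
        path.append(backnode[path[-1]])
    return list(reversed(path))

# trace_path(backnode, 4) recovers the path [1, 2, 3, 4].
```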
We will also define $L_i^r$ to be the total cost on the shortest path from origin r to node i (the letter L is used because these values are often referred to as node labels), with $L_r^r \equiv 0$, so in this example we would have $L_1^1 = 0$, $L_2^1 = 2$, $L_3^1 = 3$, and $L_4^1 = 5$.
This chapter presents four shortest path algorithms. The first, in Sec-
tion 2.4.1, only applies when the network is acyclic but is extremely fast and
simple. Sections 2.4.2 and 2.4.3 next present the two most common approaches
for solving shortest paths in networks with cycles: label setting and label correct-
ing. Dijkstra’s algorithm and the Bellman-Ford algorithm are the quintessential
label setting and label correcting algorithms, respectively, and they are used to
show the difference between these approaches. These three algorithms actually
find the shortest path from the origin r to every other node, not just the des-
tination s; this exploits Bellman’s Principle, because the shortest path from r
to s must also contain the shortest path from r to every node in that path.
Further, all three maintain a set of labels $L_i^r$ associated with each node, representing the cost of the shortest path from r to node i. With a label setting approach,
each node’s label is determined once and only once. On the other hand, with a
label correcting approach, each node’s label can be updated multiple times.
The fourth algorithm, A∗, is presented in Section 2.4.4. This algorithm can
be faster than Dijkstra’s algorithm if we are only interested in the shortest path
from a single origin to a single destination.
path assumption), so we can only look at an acyclic portion of the network and
thereby use the much faster shortest path algorithm for acyclic networks.
Once we have a topological order on a network, it becomes very easy to
find the shortest path from any node r to any other node s (clearly r has lower
topological order than s), using the following algorithm which is based directly
on Bellman’s Principle:
3. We can find the shortest path from r to i using Bellman’s Principle, using
these formulas:
$$L_i^r = \min_{(h,i) \in \Gamma^{-1}(i)} \left\{ L_h^r + c_{hi} \right\} \qquad (2.1)$$
(Essentially, these formulas have us search all of the links arriving at node
i for the approach with minimum cost.)
4. Does i = s? If so, stop, and we have found the shortest path from r to s, which has cost $L_s^r$. Otherwise, let i be the next node topologically, and return to step 3.
Bellman’s Principle lets us find all of the shortest paths in one pass over the
nodes (in topological order), because there are no cycles which could make us
loop back.
Here’s how to apply this algorithm to the network in Figure 2.10, with node
1 as origin and node 4 as destination.
Step 1: Initialize: $\mathbf{L}^1 \leftarrow \begin{bmatrix} 0 & \infty & \infty & \infty \end{bmatrix}$ and $\mathbf{q}^1 \leftarrow \begin{bmatrix} -1 & -1 & -1 & -1 \end{bmatrix}$.

Step 2: We set i ← 2.

Step 3: The only approach to node 2 is (1, 2), so $\Gamma^{-1}(2)$ is just {(1, 2)} and the minimization is trivial: $L_2^1 \leftarrow L_1^1 + c_{12} = 0 + 2 = 2$ and $q_2^1 \leftarrow 1$.

Step 3: There are two approaches to node 3, (1, 3) and (2, 3). So $L_3^1 \leftarrow \min\{L_1^1 + c_{13}, L_2^1 + c_{23}\} = \min\{0 + 4, 2 + 1\} = 3$, which corresponds to approach (2, 3), so $q_3^1 \leftarrow 2$.

Step 3: There are two approaches to node 4, (2, 4) and (3, 4). So $L_4^1 \leftarrow \min\{L_2^1 + c_{24}, L_3^1 + c_{34}\} = \min\{2 + 5, 3 + 2\} = 5$, which corresponds to approach (3, 4), so $q_4^1 \leftarrow 3$.
Step 4: We have reached the destination (node 4) and terminate.
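The whole procedure is short in code; the sketch below assumes the nodes are already numbered in topological order, as in Figure 2.10 (an illustration, not the book's implementation):

```python
# One-pass shortest paths in an acyclic network (nodes in topological order).
INF = float("inf")
cost = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 5, (3, 4): 2}
approaches = {2: [1], 3: [1, 2], 4: [2, 3]}   # upstream nodes of each node

def acyclic_shortest_path(r, s):
    L = {i: INF for i in range(r, s + 1)}     # cost labels
    q = {i: -1 for i in range(r, s + 1)}      # backnodes
    L[r] = 0
    for i in range(r + 1, s + 1):             # scan nodes in topological order
        for h in approaches[i]:               # Equation (2.1)
            if L[h] + cost[h, i] < L[i]:
                L[i] = L[h] + cost[h, i]
                q[i] = h
    return L, q

L, q = acyclic_shortest_path(1, 4)
# Matches the worked example: L[4] == 5 along the path [1, 2, 3, 4].
```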
A simple induction proof shows that this algorithm must give the correct shortest paths to each node (except for those topologically before r, since such paths do not exist). By "give the correct shortest paths," we mean that the labels L give the lowest cost possible to each node when traveling from r, and that the backnodes q yield shortest paths when traced back to r. Assume that the nodes are numbered in topological order. Clearly $L_r^r$ and $q_r^r$ are set to the correct values in the first step (the shortest path from r to itself is trivial), and are never changed again because the algorithm proceeds in increasing topological order. Now assume that the algorithm has found correct L and q values for all nodes between r and k. In the next step, it updates labels for node k + 1. Let (i, k + 1) be the last link in a shortest path from r to k + 1. By Bellman's Principle, the first part of this path must be a shortest path from r to i. Since i is topologically between r and k, by the induction hypothesis the $L^r$ and $q^r$ labels are correct for all nodes immediately upstream of k + 1. Therefore, $L_i^r + c_{i,k+1} \le L_j^r + c_{j,k+1}$ for all immediately upstream nodes j, and the algorithm makes the correct choices in step 3 for node k + 1.
1. Initialize every label $L_i^r$ to ∞, except for the origin, where $L_r^r \leftarrow 0$, and initialize the backnode vector $\mathbf{q}^r \leftarrow -\mathbf{1}$.
2. Initialize the scan eligible list to contain all nodes immediately downstream
of the origin, that is, SEL ← {i : (r, i) ∈ Γ(r)}.
3. Choose a node i ∈ SEL and remove it from that list.
4. Scan node i by applying the same equations as in the previous algorithm:
5. If the previous step changed the value of $L_i^r$, then add all nodes immediately downstream of i to the scan eligible list, that is, SEL ← SEL ∪ {j : (i, j) ∈ Γ(i)}.

6. If SEL is not empty, return to step 3; otherwise, stop.
Step 3: Choose node 3 and remove it from SEL, so i ← 3, and SEL = {2}.

Step 4: The two possible approaches are (1, 3) and (2, 3), so $L_3^1 \leftarrow \min\{L_1^1 + c_{13}, L_2^1 + c_{23}\} = \min\{0 + 4, \infty + 1\} = 4$, which corresponds to approach (1, 3), so $q_3^1 \leftarrow 1$. Note that, unlike in the case of the acyclic network, we are not claiming that this is the shortest path to node 3. We are simply claiming that this is the shortest path we have found thus far.

Step 5: The previous step reduced $L_3^1$ from ∞ to 4, so we add the downstream node 4 to SEL, so SEL = {2, 4}.

Step 3: Choose node 4 and remove it from SEL, so i ← 4, and SEL = {2}.

Step 4: There are two approaches to node 4, (2, 4) and (3, 4). So $L_4^1 \leftarrow \min\{L_2^1 + c_{24}, L_3^1 + c_{34}\} = \min\{\infty + 5, 4 + 2\} = 6$, which corresponds to approach (3, 4), so $q_4^1 \leftarrow 3$.

Step 5: The previous step reduced $L_4^1$ from ∞ to 6, but there are no downstream nodes to add, so SEL = {2}.
Step 3: Choose node 2 and remove it from SEL, so i ← 2, and SEL = {}.

Step 4: The only approach to node 2 is (1, 2), so $\Gamma^{-1}(2)$ is just {(1, 2)} and the minimization is trivial: $L_2^1 \leftarrow L_1^1 + c_{12} = 0 + 2 = 2$ and $q_2^1 \leftarrow 1$.

Step 5: The previous step reduced $L_2^1$ from ∞ to 2, so we add the downstream nodes 3 and 4 to SEL, so SEL = {3, 4}.

Step 3: Choose node 3 and remove it from SEL, so i ← 3, and SEL = {4}. Note that this is the second time we are scanning node 3. We have to do this, because we found a path to node 2, and there is a possibility that this could lead to a shorter path to node 3 as well.
Step 4: The two possible approaches are (1, 3) and (2, 3), so $L_3^1 \leftarrow \min\{L_1^1 + c_{13}, L_2^1 + c_{23}\} = \min\{0 + 4, 2 + 1\} = 3$, which corresponds to approach (2, 3), so $q_3^1 \leftarrow 2$. Note that we have changed the labels again, showing that we have indeed found a better path through node 2.

Step 5: The previous step reduced $L_3^1$ from 4 to 3, so we add the downstream node 4 to SEL, so SEL = {4}.

Step 3: Choose node 4 and remove it from SEL, so i ← 4, and SEL = {}.

Step 4: There are two approaches to node 4, (2, 4) and (3, 4). So $L_4^1 \leftarrow \min\{L_2^1 + c_{24}, L_3^1 + c_{34}\} = \min\{2 + 5, 3 + 2\} = 5$, which corresponds to approach (3, 4), so $q_4^1 \leftarrow 3$.

Step 5: The previous step reduced $L_4^1$ from 6 to 5, but there are no downstream nodes to add, so SEL remains empty.
As you can see, this method required more steps than the algorithm for
acyclic networks (because there is a possibility of revisiting nodes), but it does
not rely on having a topological order and can work in any network. One can
show that the algorithm converges no matter how you choose the node from
SEL, but it is easier to prove if you choose a systematic rule. If you operate
SEL in a “first-in, first-out” manner, always choosing a node which has been
in the queue for the greatest number of iterations, it is possible to show that
after m iterations, you have certainly found the shortest paths from r which
consist of only a single link. (You’ve probably found quite a few more shortest
paths, but even in the worst case you’ll have found at least these.) After 2m
iterations, you will have certainly found the shortest paths from r which consist
of one or two links only, and so on. So, after mn iterations, you will have found
all of the shortest paths, since a shortest path cannot use more than n links.
The exercises ask you to fill in the details of this proof sketch.
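A FIFO implementation of this label correcting scheme might look like the following sketch (an illustration, run here on the Figure 2.10 network):

```python
from collections import deque

# FIFO label correcting shortest paths, using a scan eligible list.
INF = float("inf")
nodes = [1, 2, 3, 4]
cost = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 5, (3, 4): 2}
downstream = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
upstream = {1: [], 2: [1], 3: [1, 2], 4: [2, 3]}

def label_correcting(r):
    L = {i: INF for i in nodes}
    q = {i: -1 for i in nodes}
    L[r] = 0
    SEL = deque(downstream[r])            # first-in, first-out
    while SEL:
        i = SEL.popleft()
        for h in upstream[i]:             # scan node i over its approaches
            if L[h] + cost[h, i] < L[i]:
                L[i] = L[h] + cost[h, i]
                q[i] = h
                for j in downstream[i]:   # label changed: revisit successors
                    if j not in SEL:
                        SEL.append(j)
    return L, q

L, q = label_correcting(1)
# L == {1: 0, 2: 2, 3: 3, 4: 5}, matching the worked example.
```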
when every link has a nonnegative cost. Furthermore, at each step it finds the
shortest path to at least one additional node. It uses the concept of finalized
nodes, that is, nodes to which the shortest path has already been found. It
uses the same L and q labels as before. The primary distinction between label
setting and label correcting is that label setting requires fewer iterations than
label correcting, but each iteration takes a greater amount of effort. Which
one is superior depends on the network topology, the specific implementation of
each algorithm, and the skill of the programmer.
Dijkstra’s algorithm can be stated as follows:
1. Initialize every label $L_i^r$ to ∞, except for the origin, where $L_r^r \leftarrow 0$.

2. Initialize the set of finalized nodes $F = \emptyset$ and the backnode vector $\mathbf{q}^r \leftarrow -\mathbf{1}$.

3. Find an unfinalized node i (i.e., not in F) for which $L_i^r$ is minimal.

4. Finalize node i by adding it to set F; if all nodes are finalized (F = N), then terminate.

5. Update labels for links leaving node i: for each (i, j) ∈ A, update $L_j^r \leftarrow \min\{L_j^r, L_i^r + c_{ij}\}$.

6. Update backnodes: for each (i, j) ∈ A, if $L_j^r$ was reduced in the previous step, set $q_j^r \leftarrow i$.

7. If all nodes are finalized, stop. Otherwise, return to step 3.
This algorithm is considered label setting, because once a node’s label is
updated, the node is finalized and never revisited again. We consider how this
algorithm can be applied to the same example used for the other algorithm
demonstrations.
Initialization. We initialize $\mathbf{L}^1 = \begin{bmatrix} 0 & \infty & \infty & \infty \end{bmatrix}$, $F = \emptyset$, and $\mathbf{q}^1 = \begin{bmatrix} -1 & -1 & -1 & -1 \end{bmatrix}$.

Iteration 1. The unfinalized node with least L value is the origin, so we set i = 1 and finalize it: F = {1}. We update the downstream labels: $L_2^1 = \min\{\infty, 0 + 2\} = 2$ and $L_3^1 = 4$. Both labels were reduced, so we update the backnodes: $q_2^1 = q_3^1 = 1$.

Iteration 2. Of the unfinalized nodes, i = 2 has the lowest label, so we finalize the node: F = {1, 2}. We update the downstream labels: $L_3^1 = \min\{4, 2 + 1\} = 3$ and $L_4^1 = 7$. Both labels were reduced, so we update the backnodes: $q_3^1 = q_4^1 = 2$.

Iteration 3. Of the unfinalized nodes, i = 3 has the lowest label, so we finalize it: F = {1, 2, 3}. We update the downstream label $L_4^1 = \min\{7, 3 + 2\} = 5$ and backnode $q_4^1 = 3$.

Iteration 4. Node 4 is the only unfinalized node, so i = 4. We finalize it, and since F = N we are done.
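The algorithm can be sketched as follows (a plain implementation using a linear scan for the minimum-label node, rather than the priority queues used in optimized versions; run on the Figure 2.10 network):

```python
# Dijkstra's algorithm as stated in the text, with a simple linear scan.
INF = float("inf")
nodes = [1, 2, 3, 4]
cost = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 5, (3, 4): 2}
downstream = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}

def dijkstra(r):
    L = {i: INF for i in nodes}
    q = {i: -1 for i in nodes}
    L[r] = 0
    F = set()                              # finalized nodes
    while F != set(nodes):
        # Find the unfinalized node with minimal label, and finalize it.
        i = min((i for i in nodes if i not in F), key=lambda i: L[i])
        F.add(i)
        for j in downstream[i]:            # update labels and backnodes
            if L[i] + cost[i, j] < L[j]:
                L[j] = L[i] + cost[i, j]
                q[j] = i
    return L, q

L, q = dijkstra(1)
# L == {1: 0, 2: 2, 3: 3, 4: 5}, matching the worked example.
```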
cases can dramatically reduce the running time, in exchange for limiting the
scope to one origin r and one destination s. This algorithm requires an addi-
tional value $g_i^s$ for each node, representing an estimate of the cost of the shortest path from i to s. This estimate (often called the "heuristic") should be a lower bound on the actual cost of the shortest path from i to s. Some examples of how these estimates are chosen are discussed below.
Once the estimates $g_i^s$ are chosen, the label-setting algorithm from Section 2.4.3 proceeds as before, with a small modification in Step 3: rather than choosing an unfinalized node i which minimizes $L_i^r$, we choose one which minimizes $L_i^r + g_i^s$. Everything else proceeds exactly as before. The intuition is that we are taking into account our estimates of which nodes are closer to the destination. The algorithm in Section 2.4.3 fans out in all directions from the origin (by simply looking at $L_i^r$), rather than directing the search towards a particular destination.
It can be shown that this algorithm will always yield the correct shortest path from r to s as long as the $g_i^s$ are lower bounds on the actual shortest path costs from i to s. If this is not the case, A∗ is not guaranteed to find the shortest path from r to s. Some care must be taken in how these estimates are found.
Two extreme examples are:
Choose $g_i^s = 0$ for all i. This is certainly a valid lower bound on the shortest path costs (recall that label-setting methods assume nonnegative link costs), so A∗ will find the shortest path from r to s. However, zero is a very poor estimate of the actual shortest path costs. With this choice of $g_i^s$, A∗ will run exactly the same as Dijkstra's algorithm, and there is no time savings.
Choose $g_i^s$ to be the actual shortest path cost from i to s. This is the tightest possible "lower bound," and will make A∗ run extremely quickly — in fact, it will only scan nodes along the shortest path, the best possible performance that can be achieved. However, coming up with these estimates is just as hard as solving the original problem! So in the end we aren't saving any effort; what we gain from A∗ is more than lost by the extra effort we need to compute $g_i^s$ in the first place.
So, there is a tradeoff between choosing tight bounds (the closer $g_i^s$ is to the true costs, the faster A∗ will be) while not spending too long computing the estimates (which might swamp the savings in A∗ itself). Luckily, in transportation networks, there are several good bounds available which can be computed fairly quickly. For instance:
The Euclidean (“as the crow flies”) distance between i and s, divided by
the fastest travel speed in the network, is a lower bound on the travel time
between i and s.
Replace every link with a lower bound on its cost (say, free-flow travel
time) and find shortest paths between all nodes and all destinations (re-
peatedly using one of the previous algorithms from this chapter). This
takes more time, but only needs to be done once and can be done as a
preprocessing step. As we will see in Chapter 6, solving traffic assignment
requires many shortest path computations. The extra time spent finding
these costs once might result in total time savings over many iterations.
You may find it instructive to think about other ways we can estimate $g_i^s$ values, and how they might be used in transportation settings.
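A sketch of A∗ on the Figure 2.10 network, with destination s = 4, follows. The g values are hand-chosen assumptions for illustration (each is a valid lower bound on the true remaining cost); only the node-selection rule differs from the Dijkstra sketch.

```python
# A* search: label setting, but selecting the node minimizing L_i + g_i.
INF = float("inf")
nodes = [1, 2, 3, 4]
cost = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 5, (3, 4): 2}
downstream = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
g = {1: 3, 2: 3, 3: 2, 4: 0}   # assumed lower bounds on the cost to node 4

def a_star(r, s):
    L = {i: INF for i in nodes}
    q = {i: -1 for i in nodes}
    L[r] = 0
    F = set()                              # finalized nodes
    while s not in F:                      # stop once the destination is done
        i = min((i for i in nodes if i not in F and L[i] < INF),
                key=lambda i: L[i] + g[i])  # the only change from Dijkstra
        F.add(i)
        for j in downstream[i]:
            if L[i] + cost[i, j] < L[j]:
                L[j] = L[i] + cost[i, j]
                q[j] = i
    return L[s], q

cost_14, q = a_star(1, 4)
# cost_14 == 5, the same answer as the other algorithms.
```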
Figure 2.12: Network for the exercises (nodes 1–8).
2.6 Exercises
1. [9] In the network in Figure 2.12, list the indegree, outdegree, degree,
forward star, and reverse star of each node.
2. [3] In the network in Figure 2.12, list all of the paths between nodes 1
and 5.
3. [4] State whether the network in Figure 2.12 is or is not (a) cyclic; (b) a
tree; (c) connected; (d) strongly connected.
4. [15] For each of the following, either draw a network with the stated
properties, or explain why no such network can exist: (a) connected, but
not strongly connected; (b) strongly connected, but not connected; (c)
cyclic, but not strongly connected.
5. [25] If m and n are the number of links and nodes in a network, show that
m < n².
6. [25] If a network is connected, show that m ≥ n − 1.
7. [25] If a network is strongly connected, show that m ≥ n.
8. [10] Show that ∑i∈N |Γ(i)| = ∑i∈N |Γ⁻¹(i)| = m.
22. [55] Consider a rectangular grid network of one-way streets, with r rows
of nodes and c columns of nodes. All links are directed northbound and
eastbound. How many paths exist between the lower-left node (southwest)
and the upper-right (northeast) node?
23. [14] Write down the node-node adjacency matrix of the network in Fig-
ure 2.12.
24. [16] Consider the network defined by this node-node adjacency matrix:
0 0 0 1 1 0 0
1 0 1 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 1 0 0
0 0 0 0 0 0 0
1 1 0 0 0 0 0
0 1 0 0 0 1 0
25. [17] Is the network represented by the following node-node adjacency ma-
trix strongly connected?
0 0 0 1 1 0 0
1 0 1 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 1 0 0
0 0 0 0 0 1 0
1 1 0 0 0 0 0
0 1 0 0 0 1 0
26. [10] If the nodes in an acyclic network are numbered in a topological order,
show that the node-node adjacency matrix is upper triangular.
27. [37] Let A be the node-node adjacency matrix for a network. What is the
interpretation of the matrix product A2 ?
29. [65] A unimodular matrix is a square matrix whose elements are integers
and whose determinant is either +1 or −1. A matrix is totally unimodular
if every nonsingular square submatrix is unimodular. (Note that a to-
tally unimodular matrix need not be square). Show that every node-link
incidence matrix is totally unimodular.
30. [10] One disadvantage of the forward star representation is that it is time-
consuming to identify the reverse star of a node — one must search through
the entire array to find every link with a particular head node. Describe a
“reverse star” data structure using arrays, where the reverse star can be
easily identified.
31. [52] By combining the forward star representation from the text and the
reverse star representation from the previous exercise, we can quickly iden-
tify both the forward and reverse stars of every node. However, a naive
implementation will have two different sets of arrays, one sorted accord-
ing to the forward star representation, and the other sorted according to
the reverse star representation. This duplication wastes space, especially
if there are many attributes associated with each link (travel time, cost,
capacity, etc.) Identify a way to easily identify the forward and reverse
stars of every node, with only one set of arrays of link data, by adding an
appropriate attribute to each link.
32. [58] In the language of your choice, write computer code to do the follow-
ing:
33. [13] After solving a shortest path problem from node 3 to every other
node, I obtain the backnode vector shown in Table 2.3. Write the shortest
paths (a) from node 3 to node 5; (b) from node 3 to node 7; (c) from node
4 to node 8.
34. [26] Find the shortest path from node 1 to every other node in the network
shown in Figure 2.14. Report the final labels and backnodes (L and q
values) for all nodes.
35. [25] The network in Figure 2.15 has a link with a negative cost. Show that
the label-correcting algorithm still produces the correct shortest paths in
this network, while the label-setting algorithm does not.
36. [57] Prove or disprove the following statement: “Any network with neg-
ative costs can be transformed into a network with nonnegative costs by
adding a large enough constant to every link’s cost. We can then use
the label-setting algorithm on this new network. Therefore, the label-
setting algorithm can find the shortest paths on any network, even if it
has negative costs."
Figure 2.16: Network for Exercise 37, link labels are costs.
(a) Modify the algorithm presented in Section 2.4.1 to find the longest
path in an acyclic network.
(b) If you modify the label-correcting or label-setting algorithms for gen-
eral networks in a similar way, will they find the longest paths suc-
cessfully? Try them on the network in Figure 2.14.
43. [22] In the game "Six Degrees of Kevin Bacon," players are given the
name of an actor or actress, and try to connect them to Kevin Bacon in
as few steps as possible, winning if they can make the connection in six
steps or less. Two actors or actresses are “connected” if they were in the
same film together. For example, Alfred Hitchcock is connected to Kevin
Bacon in three steps: Hitchcock was in Show Business at War with Orson
Welles, who was in A Safe Place with Jack Nicholson, who was in A Few
Good Men with Kevin Bacon. Beyoncé Knowles is connected to Kevin
Bacon in two steps, since she was in Austin Powers: Goldmember, where
Tom Cruise had a cameo, and Cruise was in A Few Good Men with Bacon.
Assuming that you have total, encyclopedic knowledge of celebrities and
films, show how you can solve “Six Degrees of Kevin Bacon” as a shortest
path problem. Specify what nodes, links, costs, origins, and destinations
represent in the network you construct.
44. [14] In a network with no negative-cost cycles, show that −nC is a lower
bound on the shortest path cost between any two nodes in a network.
45. [57] Show that if the label-correcting algorithm is performed, and at
each iteration you choose a node in SEL which has been in the list the
longest, then at the end of kn iterations the cost and backnode labels correctly
reflect all shortest paths from r which are no more than k links long.
(Hint: try an induction proof.)
46. [44] Assume that the label-correcting algorithm is terminated once the
label for some node i falls below −mC. Show that following the
current backnode labels from i will lead to a negative-cost cycle.
47. [32] Modify three of the shortest path algorithms in this chapter so that
they find shortest paths from all origins to one destination, rather than
one origin to all destinations:
(a) The acyclic shortest path algorithm from Section 2.4.1.
(b) The label-correcting algorithm from Section 2.4.2.
(c) The label-setting algorithm from Section 2.4.3.
48. [73] It is known that the label correcting algorithm will find the correct
shortest paths as long as the initial labels L correspond to the distance of
some path from the origin (they do not necessarily need to be initialized
to +∞). Assume that we are given a vector of backnode labels q which
represents some tree (not necessarily the shortest paths) rooted at the
origin r. Develop a one-to-all shortest path algorithm that uses this vector
to run more efficiently. In particular, if the given backnode labels q do
correspond to a shortest path tree, your algorithm should recognize this
fact and terminate in a number of steps linear in the number of network
links.
Chapter 3
Mathematical Techniques
for Equilibrium
definition, but does not give much indication as to how one might actually find
this equilibrium. The variational inequality formulation lends itself to physical
intuition and can also accommodate a number of variations on the equilibrium
problem. The convex optimization approach provides an intuitive interpretation
of solution methods, provides an elegant proof of equilibrium uniqueness in
link flows, and powers the best-known solution algorithms, but the connection
between the equilibrium concept and optimization requires more mathematical
explanation and is less obvious at first glance.
[y]+ = max{y, 0}. If the term in brackets is negative, it is replaced by zero; otherwise it is unchanged.
The goal is to find the ridership x; but x = X(t) and t = T (x), which means
that we need to find some value of x such that x = X(T (x)). This is a fixed
point problem! Here the function f is the composition of X and T :
x = X(T (x))
= [20 − T (x)]+
= [20 − (10 + x)]+
= [10 − x]+
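A quick numerical experiment (our own sketch, not part of the text) shows what finding this fixed point can involve: naive iteration of f bounces between 0 and 10 forever, while averaging each iterate with f of it settles onto the fixed point x = 5.

```python
def f(x):
    # the composed function from the transit example: X(T(x)) = [10 - x]+
    return max(10.0 - x, 0.0)

# naive iteration x, f(x), f(f(x)), ... oscillates between 0 and 10
x = 0.0
naive = []
for _ in range(4):
    x = f(x)
    naive.append(x)   # 10, 0, 10, 0, ...

# damped ("averaging") iteration converges to the fixed point x = 5
x = 0.0
for _ in range(50):
    x = 0.5 * x + 0.5 * f(x)
# now x == 5 and f(x) == x
```

This is a first glimpse of a recurring theme: a fixed point can exist (here, x = 5) even when the most obvious iterative scheme fails to find it.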
The exercises ask you to show that each of the conditions in Brouwer's
theorem is necessary; you might find it helpful to visualize these conditions
geometrically, as in Figure 3.1.
Notice the stipulation that f be a function "from the set K to itself;" this
means that the range of the function must be contained in its domain. Intuitively,
this means that any “output” of the function f must also be a valid “input”
to that same function. In other words, iteration is possible: starting from any
value x in its domain, you can apply the function over and over again to produce
a sequence of values x, f (x), f (f (x)), . . .. If this condition does not hold, then
Brouwer’s theorem does not guarantee anything about a fixed point.
Let us apply Brouwer’s theorem to the transit ridership example. Both
X and T are continuous functions, so their composition X ◦ T is continuous
as well. (Alternately, by substituting one function into the other we obtain
X(T (x)) = [10 − x]+ , which is evidently continuous.) What are the domain
f(x) x = f(x)
Figure 3.1: Visualizing fixed-point problems; the intersection of f (x) and the
45-degree line is a point where x = f (x).
and range of X(T (x))? Since X(T (x)) is the positive part of 20 − T (x), then
x = X(T (x)) ≥ 0. Further note that because x ≥ 0, T (x) ≥ 10, so x =
X(T (x)) ≤ 10. That is, we have shown that x must lie between 0 and 10, so
the function X(T (x)) = [10 − x]+ can be defined from the set [0, 10] to itself.
This set is convex and compact, so Brouwer’s theorem would have told us that
at least one fixed point must exist, even if we didn’t know how to find it.
Figure 3.3: Points A, B, C, D, and E in a container, each acted on by a force field.
object will not move under the action of the force, being effectively resisted by
the container wall.
How can we think about such problems in general? A little thought should
convince you that (1) if the object is on the boundary of the container, but not
at a corner point, it will be unmoved if and only if the force is perpendicular
to the boundary (point D in Figure 3.3), and (2) if the object is at a corner
of the container, it will be unmoved if and only if the force makes a right or
obtuse angle with all of the boundary directions (point E). These two cases can
be combined together: a point on the boundary is an equilibrium if and only if
the force makes a right or obtuse angle with all boundary directions. In fact, if
the force makes such an angle with all boundary directions, it will do so with
any other direction pointing into the feasible set (Figure 3.4). So, we see that
a point is unmoved by the force if and only if the direction of the force at that
point makes a right or obtuse angle with any possible direction the object could
move in.
The mathematical definition of a variational inequality is little more than
translating the above physical problem into algebraic terminology. The “con-
tainer” is replaced by a set K of n-dimensional vectors, which for our purposes
Figure 3.4: Stable points (in green) make an obtuse or right angle with all
feasible directions; unstable points (in red) make an acute angle with some
feasible directions.
can be assumed compact and convex (as in all of the figures so far). The “force
field” is replaced by a vector-valued function F : Rn → Rn which depends on
n variables and produces an n-dimensional vector as a result. Recalling vector
operations, specifically equation (A.16), saying that two vectors make a right
or obtuse angle is equivalent to saying that their dot product is nonpositive. A
“solution” to the variational inequality is a point which is unmoved by the force
field. (In the above example, we want to say that D and E are solutions to the
variational inequality problem created by the container shape and force field,
while A, B, and C are not.) Therefore, rewriting the condition in the previous
paragraph with this mathematical notation, we have the following definition:
Definition 3.1. Given a convex set K ⊆ Rn and a function F : K → Rn , we
say that the vector x̂ ∈ K solves the VI(K, F) if, for all x ∈ K, we have
F(x̂) · (x − x̂) ≤ 0.
Figure 3.5: Trajectories of different points (red) assuming the force (black) is
constant.
then the fixed points of f coincide exactly with solutions to VI(K, F).
In many cases of practical interest, F will be a continuous function. Further-
more, it can be shown that the projection mapping onto a convex set (such as
K) is a well-defined (i.e., single-valued), continuous function. Then by Proposi-
tion A.6, the function f defined by equation (3.2) is a continuous function. So,
if the set K is compact in addition to being convex, then Brouwer’s theorem
shows that the variational inequality must have at least one solution:
You should convince yourself that all of these conditions are necessary: if the
container K is not bounded or not closed, or if the force field F is not continuous,
then it is possible that an object placed at any point in the container will move
under the action of the force field.
analogy with variational inequalities suggests how travelers might change their
behavior from one day to the next; the “container” represents the set of pos-
sible travel choices, and the “force field” represents travelers’ desires to move
to lower-cost options. An equilibrium solution is one where there is no way to
move in such an improving direction.
Both of these methods have disadvantages. Fixed point theorems are “non-
constructive,” which means they often lack methods guaranteed to find a fixed
point, even if one exists. Brouwer’s theorem gives us conditions under which a
fixed point exists, but tells us nothing about how to find it. Sometimes applying
f repeatedly from a starting point will converge to a fixed point, but not always.
The force field analogy in variational inequalities suggests a natural algorithm
(pick a starting point, and see where the force carries you), but again this is
not always guaranteed to work. It is also possible to have multiple solutions
to a fixed point or variational inequality problem. From the standpoint of
transportation planning this is inconvenient — how can you consistently rank
alternatives if you have several different predictions for what might happen
under each alternative?
Convex optimization is a more powerful tool in that we have uniqueness
guarantees on solutions, and efficient algorithms that provably converge to an
optimal point. The downside is that it is not immediately obvious how a user
equilibrium problem can be formulated in terms of optimization. Chapter 5
takes up this task; this section presents what you need to know about convex
optimization for this to make sense. If you have never encountered optimiza-
tion problems before, please read Appendix B before proceeding further, to
get familiar with the terminology and notation used in presenting optimization
problems. This subsection will focus on convex optimization, a specific kind of
optimization problem which is both easier to solve, and well-suited for solving
transportation network problems. If you are interested in other applications,
Appendix C discusses methods and properties of other kinds of optimization
problems.
The reason to restrict attention to convex optimization is that some opti-
mization problems are much easier to solve than others. For instance, the func-
tion in Figure 3.6 has many local minima and is unbounded below as x → −∞,
both of which can cause serious problems if we’re trying to minimize this func-
tion. Usually, the best that a software program can do is find a local minimum.
If it finds one of the local minima for this function, it may not know if there
is a better one somewhere else (or if there is, how to find it). Or if it starts
seeking x values which are negative, we could run into the unbounded part of
this function.
On the other hand, some functions are very easy to minimize. The function
in Figure 3.7 only has one minimum point, is not unbounded below, and there
are many algorithms which can find that minimum point efficiently.
What distinguishes these is a property called convexity, which is defined in
Appendix A. If the feasible region is a convex set, and if the objective function
is a convex function, then it is much easier to find the optimal solution. Check-
ing convexity of the objective function is not usually too difficult. To check
f ((1 − λ)x1 + λx2 ) ≤ (1 − λ)f (x1 ) + λf (x2 ) = f (x1 ) + λ(f (x2 ) − f (x1 ))
and furthermore all points (1 − λ)x1 + λx2 are feasible since X is a convex set
and x1 and x2 are feasible. Since f (x2 ) < f (x1 ), this means that
Proposition 3.2. If f is a convex function and X is a convex set, then the set
of global minima is convex.
Proof. Let X̂ be the set of global minima of f over the feasible region X. Choose
any two global optima x̂1 ∈ X̂ and x̂2 ∈ X̂, and any λ ∈ [0, 1].
Since x̂1 and x̂2 are global minima, f (x̂1 ) = f (x̂2 ); let fˆ denote this common
value. Because X is a convex set, the point (1 − λ)x̂1 + λx̂2 is also feasible.
Because f is a convex function,
f ((1 − λ)x̂1 + λx̂2 ) ≤ (1 − λ)f (x̂1 ) + λf (x̂2 ) = f̂ .
Therefore f ((1 − λ)x̂1 + λx̂2 ) ≤ f̂. But at the same time, f ((1 − λ)x̂1 + λx̂2 ) ≥ f̂
because f̂ is the global minimum value of f . So we must have f ((1 − λ)x̂1 +
λx̂2 ) = f̂, which means that this point is also a global minimum and (1 − λ)x̂1 +
λx̂2 ∈ X̂ as well, proving its convexity.
Proof. By contradiction, assume that X̂ contains two distinct elements x̂1 and
x̂2 . Repeating the proof of the previous proposition, because f is strictly convex,
the first inequality becomes strict and we must have f ((1 − λ)x̂1 + λx̂2 ) < fˆ.
This contradicts the assumption that x̂1 and x̂2 are global minima.
min_x f (x)
Because f is convex, any local minimum is also a global minimum. So, all
we need is to know when we’ve reached a local minimum. Thus, we’re looking
for a set of optimality conditions, that is, a set of equations and/or inequalities
in x which are true if and only if x is optimal. For the unconstrained case, this
is easy: we know from basic calculus that x̂ is a local minimum if
f ′(x̂) = 0
min_x f (x)
s.t. x ≥ 0
As Figure 3.8 shows, there are only two possibilities. In the first case, the
minimum occurs when x is strictly positive. We can call this an interior
minimum, or we can say that the constraint x ≥ 0 is nonbinding at this point. In
this case, clearly f ′(x) must equal zero: otherwise, we could move slightly in
one direction or the other, and reduce f further. The other alternative is that
the minimum occurs for x = 0, as in Figure 3.8(b). For this to be a minimum,
we need f ′(0) ≥ 0: if f ′(0) < 0, f is decreasing at x = 0, so we could move to
a slightly positive x, and thereby reduce f .
Let's try to draw some general conclusions. For the interior case of Figure 3.8(a), we needed x ≥ 0 for feasibility, and f ′(x) = 0 for optimality. For the
boundary case of Figure 3.8(b), we had x = 0 exactly, and f ′(x) ≥ 0. So we see
that in both cases, x ≥ 0 and f ′(x) ≥ 0, and furthermore that at least one of
these has to be exactly equal to zero. To express the fact that either x or f ′(x)
must be zero, we can write xf ′(x) = 0. So a solution x̂ solves the minimization
problem if and only if
x̂ ≥ 0
f ′(x̂) ≥ 0
x̂f ′(x̂) = 0.
(a) Convex function where the constraint is not binding at the minimum.
Figure 3.8: Two possibilities for minimizing a convex function with a constraint.
Whenever we can find x̂ that satisfies these three conditions, we know it is optimal. These are often called first-order conditions because they are related to
the first derivative of f . The condition x̂f ′(x̂) = 0 is an example of a complementarity constraint because it forces either x̂ or f ′(x̂) to be zero.
min_x f (x)
s.t. a ≤ x ≤ b

min_x f (x)
s.t. x ≥ 0
Using the same logic as before, we can show that x̂ solves this problem if
and only if the following conditions are satisfied for every decision variable xi :
x̂i ≥ 0
∂f (x̂)/∂xi ≥ 0
x̂i ∂f (x̂)/∂xi = 0
You should convince yourself that if these conditions are not met for each decision variable, then x̂ cannot be optimal: if the first condition is violated, the
solution is infeasible; if the second is violated, the objective function can be reduced by increasing xi ; if the first two are satisfied but the third is violated, then
x̂i > 0 and ∂f (x̂)/∂xi > 0, and the objective function can be reduced by decreasing
xi .
This can be compactly written in vector form as
0 ≤ x̂ ⊥ ∇f (x̂) ≥ 0 (3.3)
where the ⊥ symbol indicates orthogonality, i.e. that the dot product of x̂ and
∇f (x̂) is zero.
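These conditions are easy to check mechanically. The sketch below (our own illustration; the objective function is a hypothetical example, not from the text) verifies them for f(x1, x2) = (x1 + 2)² + (x2 − 3)² minimized over x ≥ 0, whose minimum sits on the boundary at (0, 3).

```python
def satisfies_conditions(x, grad, tol=1e-8):
    """Check 0 ≤ x̂ ⊥ ∇f(x̂) ≥ 0 componentwise: each variable and each
    partial derivative nonnegative, and their product zero."""
    return all(xi >= -tol and gi >= -tol and abs(xi * gi) <= tol
               for xi, gi in zip(x, grad))

def grad_f(x):
    # gradient of f(x1, x2) = (x1 + 2)^2 + (x2 - 3)^2
    return [2 * (x[0] + 2), 2 * (x[1] - 3)]

# minimum over x >= 0 is at (0, 3): x1 is pinned at the boundary with
# a positive partial derivative, x2 is interior with a zero derivative
ok = satisfies_conditions([0.0, 3.0], grad_f([0.0, 3.0]))
bad = satisfies_conditions([1.0, 3.0], grad_f([1.0, 3.0]))
# ok is True; bad is False (x1 > 0 while its derivative is positive)
```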
Unfortunately, the bisection algorithm does not work nearly as well in higher
dimensions. It is difficult to formulate an extension that always works, and those
that do are inefficient. We’ll approach solution methods for higher-dimensional
problems somewhat indirectly, tackling a few other topics first: addressing con-
straints other than nonnegativity, and a few highlights of linear optimization.
∑i ai xi = b ,
where the xi are decision variables, and the ai and b are constants.
We can handle these using the technique of Lagrange multipliers. This tech-
nique is demonstrated in the following example for the case of a single linear
equality constraint.
min_{x1 ,x2} x1² + x2²
s.t. x1 + x2 = 5
(It is a useful exercise to verify that x1² + x2² is a strictly convex function, and
that {x1 , x2 : x1 + x2 = 5} is a convex set.)
The main idea behind Lagrange multipliers is that unconstrained problems
are easier than constrained problems. The technique is an ingenious way of
nominally removing a constraint while still ensuring that it holds at optimality.
The equality constraint is "brought into the objective function" by multiplying
the difference between the right- and left-hand sides by a new decision variable
κ (called the Lagrange multiplier), and adding the result to the original objective
function. This creates the Lagrangian function
L(x1 , x2 , κ) = x1² + x2² + κ(5 − x1 − x2 ) ,
whose stationary points satisfy
∂L/∂x1 = 2x1 − κ = 0 (3.6)
∂L/∂x2 = 2x2 − κ = 0 (3.7)
∂L/∂κ = 5 − x1 − x2 = 0 (3.8)
Notice that the third optimality condition (3.8) is simply the original constraint,
so this stationary point must be feasible. Equations (3.6) and (3.7) respectively
tell us that x1 = κ/2 and x2 = κ/2; substituting these expressions into (3.8)
gives κ = 5, and therefore the optimal solution occurs for x1 = x2 = 5/2.
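The three stationarity equations are linear, so any linear solver reproduces this answer; the sketch below (our own, using a small hand-rolled Gaussian elimination rather than any particular library) recovers x1 = x2 = 5/2 and κ = 5.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# stationarity system (3.6)-(3.8), unknowns ordered (x1, x2, κ):
#   2 x1       - κ = 0
#        2 x2  - κ = 0
#   x1 + x2        = 5
A = [[2, 0, -1],
     [0, 2, -1],
     [1, 1,  0]]
b = [0, 0, 5]
x1, x2, kappa = solve3(A, b)
# x1 == x2 == 2.5 and kappa == 5.0
```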
This technique generalizes perfectly well to the case of multiple linear equal-
ity constraints. Consider the general optimization problem
min_{x1 ,...,xn} f (x1 , . . . , xn )
s.t. ∑i a1i xi = b1
     ∑i a2i xi = b2
     ...
     ∑i ami xi = bm
For an optimization problem that has both linear equality constraints and
nonnegativity constraints, we form the optimality conditions by combining the
Lagrange multiplier technique with the complementarity technique from the
previous section. Thinking back to Section 3.3.3, in the same way that we
replaced the condition f 0 (x̂) = 0 for the unconstrained case with the three
conditions x̂ ≥ 0, f 0 (x̂) ≥ 0, and xf 0 (x̂) = 0 when the nonnegativity constraint
was added, we’ll adapt the Lagrangian optimality conditions. If the optimization
problem has the form
min_{x1 ,...,xn} f (x1 , . . . , xn )
s.t. ∑i a1i xi = b1
     ∑i a2i xi = b2
     ...
     ∑i ami xi = bm
     x1 , . . . , xn ≥ 0
then x̂ is optimal if and only if there exist Lagrange multipliers κ1 , . . . , κm such
that the following conditions hold for the Lagrangian L:
∂L/∂xi ≥ 0     ∀i ∈ {1, . . . , n}
∂L/∂κj = 0     ∀j ∈ {1, . . . , m}
xi ≥ 0         ∀i ∈ {1, . . . , n}
xi ∂L/∂xi = 0  ∀i ∈ {1, . . . , n}
Be sure you understand what each of these formulas implies. Each decision
variable must be nonnegative, the partial derivative of L with respect to this
variable must be nonnegative, and their product must equal zero (for the same
reasons as discussed in Section 3.3.3). For the Lagrange multipliers (κ1 , . . . , κm ),
the corresponding partial derivative of L must be zero. Notice how this is a
combination of the two techniques.
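As a concrete check (our own sketch, not from the text), these combined conditions can be verified numerically for the earlier example min x1² + x2² subject to x1 + x2 = 5, now with x1, x2 ≥ 0 added; the solution x1 = x2 = 5/2, κ = 5 still satisfies every condition.

```python
def check_kkt(x, kappa, tol=1e-8):
    """Conditions for L(x, κ) = x1² + x2² + κ(5 − x1 − x2) with x ≥ 0."""
    dL_dx = [2 * x[0] - kappa, 2 * x[1] - kappa]  # ∂L/∂x1, ∂L/∂x2
    dL_dk = 5 - x[0] - x[1]                        # ∂L/∂κ
    return (all(xi >= -tol for xi in x)            # feasibility
            and all(g >= -tol for g in dL_dx)      # ∂L/∂xi ≥ 0
            and all(abs(xi * g) <= tol             # complementarity
                    for xi, g in zip(x, dL_dx))
            and abs(dL_dk) <= tol)                 # constraint holds

# the optimal point passes; a feasible but suboptimal point fails,
# since at (5, 0) we get ∂L/∂x2 = -5 < 0
passes = check_kkt([2.5, 2.5], 5.0)
fails = check_kkt([5.0, 0.0], 5.0)
```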
For small optimization problems, we can write down each of these conditions
and solve for the optimal solution, as above. However, for large-scale problems
this process can be very inconvenient. Later chapters in the book explain meth-
ods which work better for large problems. As a final note, the full theory of
Lagrange multipliers is more involved than what is discussed here. However,
it suffices for the case of a convex objective function and linear equality con-
straints. Optimality conditions for some other cases are given in Appendix C.
3.5 Exercises
1. [31] For each of the following functions, find all of its fixed points or state
that none exist.
(a) f (x) = x2 , where X = R
(b) f (x) = 1 − x2 , where X = [0, 1]
(c) f (x) = ex , where X = R.
(d) f (x1 , x2 ) = (−x1 , x2 ), where X = {(x1 , x2 ) : −1 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1}.
(e) f (x, y) = (−y, x), where X = {(x, y) : x² + y² ≤ 1} is the unit disc.
2. [54] Brouwer’s theorem guarantees the existence of a fixed point for the
function f : X → X if f is continuous and X is closed, bounded, and
convex. Show that each of these four conditions is necessary by creating
examples of a function f and set X which satisfy only three of those
conditions but do not have a fixed point. Come up with such examples
with each of the four conditions missing. (The notation f : X → X means
that the range of the function must be contained in its domain; every
“output” from f is also a valid “input.”) Hint: you will probably find
it easiest to work with simple functions and sets whenever possible, e.g.
something like X = [0, 1]. Visualizing fixed points as intersections with
the diagonal line through the origin may help you as well.
3. [45] Find all of the solutions of each of the following variational inequalities
VI(K, F).
(a) F (x) = x + 1, K = [0, 1]
(b) F (x) = x², K = [−1, 1]
(c) F (x, y) = (0, −y), K = {(x, y) : x² + y² ≤ 1}
Chapter 4
Introduction to Static
Assignment
Figure 4.1: Example network for demonstration, with link performance functions shown (t12 = 10x12 , t13 = 50 + x13 , t23 = 10 + x23 , t32 = 10 + x32 , t24 = 50 + x24 , t34 = 10x34 ).
Public Roads (BPR) function, named after the agency which developed it:
tij (xij ) = t⁰ij (1 + α(xij /uij )^β ) ,   (4.1)
where t⁰ij is the "free-flow" travel time (the travel time with no congestion), uij
is the practical capacity (typically the value of flow which results in a level of
service of C or D), and α and β are shape parameters which can be calibrated
to data. The values α = 0.15 and β = 4 are commonly used if no calibration is
done.
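Equation (4.1) is straightforward to code; here is a sketch (our own, with the default α and β values just mentioned) along with a few evaluations.

```python
def bpr_travel_time(x, t0, capacity, alpha=0.15, beta=4):
    """BPR link performance function, equation (4.1)."""
    return t0 * (1 + alpha * (x / capacity) ** beta)

# a link with 10-minute free-flow time and capacity 1000 veh/period
free_flow = bpr_travel_time(0.0, 10.0, 1000.0)        # 10.0: no congestion
at_capacity = bpr_travel_time(1000.0, 10.0, 1000.0)   # 11.5 = 10 * 1.15
over = bpr_travel_time(2000.0, 10.0, 1000.0)          # 34.0: demand above capacity
```

Note that the function remains well-defined at twice the capacity, which is exactly the point made in the next paragraph.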
Notice that this function is well-defined for any value of xij , even when flows
exceed the stated “capacity” uij . In the basic traffic assignment problem, there
are no explicit upper bounds enforced on link flows. The interpretation of a
“flow” greater than the capacity is actually that the demand for travel on the
link exceeds the capacity, and queues will form. The delay induced by these
queues is then reflected in the link performance function. Alternately, one can
choose a link performance function which asymptotically grows to infinity as
xij → uij , implicitly enforcing the capacity constraint. However, this approach
can introduce numerical issues in solution methods. More discussion on this
issue follows in Section 4.1.1, but the short answer is that properly addressing
capacity constraints in traffic assignment requires a dynamic traffic assignment
model, which is the subject of Part III.
From the modeler’s perspective, the goal of traffic assignment is to determine
the link flows in a network. But from the standpoint of the travelers themselves,
it is easier to think of them each choosing a path connecting their origin to their
destination. Let drs denote the number of vehicles which will be departing origin
r to destination s, where r and s are both centroids. Using hπ to represent the
x = ∆h , (4.3)
where x and h are the vectors of link and path flows, respectively, and ∆ is the
link-path adjacency matrix. The number of rows in this matrix is equal to the
number of links, and the number of columns is equal to the number of paths
in the network, and the value in the row corresponding to link (i, j) and the
column corresponding to path π is δ^π_ij .
Given a feasible assignment, the corresponding link flows can be obtained
by using equation (4.2) or (4.3). The set of feasible link assignments is the set
X of vectors x which satisfy (4.3) for some feasible assignment h ∈ H.
Similarly, the path travel times cπ are directly related to the link travel times
tij : the travel time of a path is simply the sum of the travel times of the links
comprising that path. By the same logic, we have
cπ = ∑(i,j)∈A δ^π_ij tij   (4.4)
It will be useful to have specific indices for each path, i.e. Π24 = {π1 , π2 } so
π1 = [2, 3, 4] and π2 = [2, 4], and similarly Π14 = {π3 , π4 , π5 , π6 } with π3 =
[1, 2, 4], π4 = [1, 2, 3, 4], π5 = [1, 3, 4], and π6 = [1, 3, 2, 4].
Let’s say that the demand for travel from centroid 1 to centroid 4 is 40
vehicles, and that the demand from 2 to 4 is 60 vehicles. Then d14 = 40 and
d24 = 60, and we must have
∑π∈Π14 hπ = h3 + h4 + h5 + h6 = d14 = 40   (4.6)
and
∑π∈Π24 hπ = h1 + h2 = d24 = 60 .   (4.7)
Let’s assume that the vehicles from each OD pair are divided evenly among
all of the available paths, so hπ = 10 for each π ∈ Π14 and hπ = 30 for each
π ∈ Π24 . We can now use equation (4.2) to calculate the link flows. For instance
the flow on link (1,2) is
∑r∈Z ∑s∈Z {{ ∑π∈Πrs δ^π_(1,2) hπ }}
= {{δ^π3_(1,2) h3 + δ^π4_(1,2) h4 + δ^π5_(1,2) h5 + δ^π6_(1,2) h6 }} + {{δ^π1_(1,2) h1 + δ^π2_(1,2) h2 }}
= {{1 × 10 + 1 × 10 + 0 × 10 + 0 × 10}} + {{0 × 30 + 0 × 30}} = 20 .   (4.8)
where the braces show how the summations “nest.” Remember, this is just a
fancy way of picking the paths which use link (1,2), and adding their flows. The
equation is for use in computer implementations or for large networks; when
solving by hand, it's perfectly fine to just identify the paths using a particular
link by inspection; in this case, only paths π3 and π4 use link (1,2). Repeating
similar calculations, you should verify that x13 = 20, x23 = 40, x32 = 10,
x24 = 50, and x34 = 50.
From these link flows we can get the link travel times by substituting the
flows into the link performance functions, that is, t12 = 10x12 = 200, t13 =
50 + x13 = 70, t23 = 10 + x23 = 50, t32 = 10 + x32 = 20, t24 = 50 + x24 = 100,
and t34 = 10x34 = 500. Finally, the path travel times can be obtained by either
adding the travel times of their constituent links, or by applying equation (4.4).
You should verify that c1 = 550, c2 = 100, c3 = 300, c4 = 750, c5 = 570, and
c6 = 190.
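All of these numbers can be checked mechanically. The following sketch (our own encoding of the example's paths, path flows, and link performance functions) computes the link flows, link travel times, and path travel times just described.

```python
paths = {  # node sequences for paths π1 through π6
    1: [2, 3, 4], 2: [2, 4], 3: [1, 2, 4],
    4: [1, 2, 3, 4], 5: [1, 3, 4], 6: [1, 3, 2, 4],
}
h = {1: 30, 2: 30, 3: 10, 4: 10, 5: 10, 6: 10}  # path flows
perf = {  # link performance functions from Figure 4.1
    (1, 2): lambda x: 10 * x, (1, 3): lambda x: 50 + x,
    (2, 3): lambda x: 10 + x, (3, 2): lambda x: 10 + x,
    (2, 4): lambda x: 50 + x, (3, 4): lambda x: 10 * x,
}

# link flows via equation (4.2): add the flow of every path using each link
x = {link: 0 for link in perf}
for pi, nodes in paths.items():
    for link in zip(nodes, nodes[1:]):
        x[link] += h[pi]

# link travel times from the link performance functions
t = {link: perf[link](x[link]) for link in perf}

# path travel times via equation (4.4): sum the times of constituent links
c = {pi: sum(t[link] for link in zip(nodes, nodes[1:]))
     for pi, nodes in paths.items()}
# c == {1: 550, 2: 100, 3: 300, 4: 750, 5: 570, 6: 190}
```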
The role of traffic assignment is to choose one path flow vector ĥ for purposes
of forecasting and ranking alternatives, out of all of the feasible assignments in
the network. An assignment rule is a principle used to determine this path flow
vector. The most common assignment rule in practice is that the path flow
vector should place all vehicles on a path with minimum travel time between
their origins and destinations, although other rules are possible as well and will
be discussed later in the book.
Figure 4.2: Schematic relationship among path flows h, link flows x (via x = ∆h), link travel times t, and path travel times c.
4.1.1 Commentary
The equations and concepts mentioned in the previous subsection can be related
as follows. Given a vector of path flows h, we can obtain the vector of link flows
x from equation (4.3); from these, we can obtain the vector of link travel times t
by substituting each link’s flow into its link performance function; from this, we
can obtain the vector of path travel times c from equation (4.5). This process
is shown schematically in Figure 4.2.
The one component which does not have a simple representation is how to
obtain path flows from path travel times using an assignment rule, “completing
the loop” with the dashed line in the figure. This is actually the most compli-
cated step, and answering it will require most of the remainder of the chapter.
The main difficulty is that introducing some rule for relating path travel times
to path flows creates a circular dependency: the path flows would depend on
the path travel times, which depend on the link travel times, which depend on
the link flows, which depend on the path flows, which depend on the path travel
times and so on ad infinitum. We need to find a consistent solution to this
process, which in this case means a set of path flows which remain unchanged
when we go around the circuit: the path flows must be consistent with the travel
times we obtain from those same path flows. Furthermore, the assignment rule
must reflect the gaming behavior described in Section 1.3.
At this point, it is worthwhile to discuss the assumptions that go into the
traffic assignment problem as stated here. The first assumption concerns the
time scale over which the network modeling is occurring. This part of the book
is focused entirely on what is called static network modeling, in which we assume
that the network is close to a “steady state” during whatever length of time we
choose to model (whether a peak hour, a multi-hour peak period, or a 24-hour
model), and the link flows and capacities are measured with respect to the entire
time period. That is, the capacity is the capacity over the entire analysis period,
so if a facility has a capacity of 2200 veh/hr and we are modeling a three-hour
peak period, we would use a capacity of 6600. Likewise, the link flows are the
total flow over the three hours.
80 CHAPTER 4. INTRODUCTION TO STATIC ASSIGNMENT
How long the analysis period should be is a matter of some balancing. Ob-
viously, the longer the analysis period, the less accurate the steady state as-
sumption is likely to be. In the extreme case of a 24-hour model, it is usually
a (very big) stretch to assume that the congestion level will be the same over
all 24 hours. On the other hand, choosing too short an analysis period can be
problematic as well. In particular, if we are modeling a congested city or a large
metropolitan area, trips from one end of the network to the other can easily
take an hour or two, and it is good practice to have the analysis period be at
least as long as most of the trips people are taking.
Properly resolving the issue of the “steady state” assumption requires mov-
ing to a dynamic traffic assignment (DTA) model. DTA models have the po-
tential to more accurately model traffic, but are harder to calibrate, are more
sensitive to having correct input data, and require more computer time. Fur-
ther, DTA models end up requiring very different formulations and approaches.
In short, while useful in some circumstances, they are not universally better
than static models, and in any case they are surprisingly dissimilar. As a result,
a full discussion of DTA models is deferred to the final part of this volume.
Another assumption we make is that link and path flows can take any non-
negative real value. In particular, they are not required to be whole numbers,
and there is no issue with saying that the flow on a link is 12.5 vehicles or that
the number of vehicles choosing a path is, say, √2. This is often called the
continuum assumption, because it treats vehicles as an infinitely-divisible fluid,
rather than as a discrete number of “packets” which cannot be split. The reason
for this assumption is largely computational — without the continuum assump-
tion, traffic assignment problems become extremely difficult to solve, even on
small networks. Further, from a practical perspective most links of concern have
volumes in the hundreds or thousands of vehicles per hour, where the difference
between fractional and integer values is negligible. Some also justify the con-
tinuum assumption by interpreting link and path flows to describe a long-term
average of the flows, which may fluctuate to some degree from day to day.
Notice that this principle does not include the impact a driver has on other
drivers in the system, and a pithy characterization of Assumption 4.1 is that
“people are greedy.” If this seems unnecessarily pejorative, Section 4.3 shows
a few ways that this principle can lead to suboptimal flow distributions on
networks. Again, we emphasize that we adopt this assumption because we are
modeling urban, peak period travel which is predominantly composed of work
trips. If we were modeling, say, traffic flows around a national park during
summer weekends, quality of scenery may be considerably more important than
being able to drive at free-flow speed, and a different assumption would be
needed.
Further, the basic model developed in this part of the book could really function
just as well with cost or some other criterion, as long as it is separable and
additive (that is, you can get the total value of the criterion by adding up its
value on each link — the travel time of a route is the sum of the travel times
on its component links, the total monetary cost of a path is the sum of the
monetary costs of each link, but the total scenic quality of a path may not be
the sum of the scenic quality of each link), and can be expressed as a function of
the link flows x. So even though we will be speaking primarily of travel times,
it is quite possible, and sometimes appropriate, to use other measures as well.
A second assumption follows from modeling commute trips:
Assumption 4.2. Drivers have perfect knowledge of link travel times.
In reality, drivers’ knowledge is not perfect — but commutes are typically
habitual trips, so it is not unreasonable to assume that drivers are experienced
and well-informed about congestion levels at different places in the network,
at least along routes they might plausibly choose. We will later relax this
assumption, but for now we will take it as given: it greatly simplifies matters and
is not too far from the truth for commutes. (Again, in a national park or other place
with a lot of tourist traffic, this would be a poor assumption.)
Now, with a handle on individual behavior, we can try to scale up this
assumption to large groups of travelers. What will be the resulting state if
there are a large number of travelers who all want to take the fastest route
to their destinations? For example, if you have to choose between two routes,
one of which takes ten minutes and the other fifteen, you would always opt for
the first one. If you are the only one traveling, this is all well and good. The
situation becomes more complicated if there are others traveling. If there are
ten thousand people making the same choice, and all ten thousand pick the first
route, congestion will form and the travel time will rapidly increase. According
to Assumption 4.2, drivers would become aware of this, and some people would
switch from the first route to the second route, because the first would no longer
be faster. This process would continue: as long as the first route is slower, people
would switch away to the second route. If too many people switch to the second
route, so the first becomes faster again, people would switch back.
With a little thought, it becomes clear that if there is any difference in the
travel times between the two routes, people will switch from the slower route
to the faster one. Note that Assumption 4.1 does not make any allowance for
a driver being satisfied with a path which is a minute slower than the fastest
path, or indeed a second slower, or even a nanosecond slower. Relaxing Assump-
tion 4.1 to say that people may be indifferent as long as the travel time is “close
enough” to the fastest path leads to an interesting, but more complicated line
of research based on the concept of “bounded rationality,” which is discussed
later, in Section 5.3.3. It is much simpler to assume that Assumption 4.1 holds
strictly, in which case there are only three possible stable states:
1. Route 1 is faster, even when all of the travelers are using it.
2. Route 2 is faster, even when all of the travelers are using it.
3. Most commonly, neither route dominates the other. In this case, people
use both Routes 1 and 2, and their travel times are exactly equal.
4.2. PRINCIPLE OF USER EQUILIBRIUM 83
Because the third case is most common, this basic route choice model is
called user equilibrium: the two routes are in equilibrium with each other. Why
must the travel times be equal? If the first route were faster than the second,
people would switch from the second to the first. This would decrease the travel
time on the second route, and increase the travel time on the first, and people
would continue switching until they were equal. The reverse is true as well: if
the second route were faster, people would switch from the first route to the
second, decreasing the travel time on the first route and increasing the travel
time on the second. The only outcome where nobody has any reason to change
their decision is for the travel times to be equal on both routes.
This is important enough to state again formally:
Unused routes may of course have a higher travel time, and used routes
connecting different origins and destinations may have different travel times,
but any two used routes connecting the same origin and destination must have
exactly the same travel times. Notice that we call this principle a corollary
rather than an assumption: the real assumptions are the shortest path and full
information assumptions. If you believe these are true, the principle of user equi-
librium follows immediately and does not require you to assume anything more
than you already have. The next section describes how to solve for equilibrium,
along with a small example.
1. Select a set of paths Π̂rs which you think will be used by travelers from
this OD pair.
2. Write equations for the travel times of each path in Π̂rs as a function of
the path flows.
3. Solve the system of equations enforcing equal travel times on all of these
paths, together with the requirement that the total path flows must equal
the total demand drs .
4. Verify that this set of paths is correct; if not, refine Π̂rs and return to step
2.
The first step serves to reduce a large potential set of paths to a smaller set of
reasonable paths Π̂rs , which you believe to be the set of paths which will be
used by travelers from r to s. Feel free to use engineering judgment here; if this
is not the right set of paths, we’ll discover this in step 4 and can adjust the set
accordingly. The second step involves applying equations (4.4) and (4.2). Write
the equations for link flows as a function of the flow on paths in Π̂rs (assuming
all other paths have zero flow). Substitute these expressions for link flows into
the link performance functions to get an expression for link travel times; then
write the equations for path travel times as a function of link travel times.
If Π̂rs truly is the set of used paths, all the travel times on its component
paths will be equal. So, we solve a system of equations requiring just that. If
there are n paths, there are only n − 1 independent equations specifying equal
travel times. (If there are three paths, we have one equation stating paths one
and two have equal travel time, and a second one stating paths two and three
have equal travel time. An equation stating that paths one and three have
equal travel time is redundant, because it is implied by the other two, and so
it doesn’t help us solve anything.) To solve for the n unknowns (the flow on
each path), we need one more equation: the requirement that the sum of the
path flows must equal the total flow from r to s (the “no vehicle left behind”
equation requiring every vehicle to be assigned to a path). Solving this system
of equations gives us path flows which provide equal travel times.
The last step is to verify that the set of paths is correct. What does this
mean? There are two ways that Π̂rs could be “incorrect”: either it contains
paths that it shouldn’t, or it omits a path that should be included. In the
former case, you will end up with an infeasible solution (e.g., a negative or
imaginary flow on one path), and should eliminate the paths with infeasible
flows, and go back to the second step with a new set Π̂rs . In the latter case,
you have a feasible solution and the travel times of paths in Π̂rs , but they are
not minimal: you have missed a path which has a faster travel time than any
of the ones in Π̂rs , so you need to include this path in Π̂rs and again return to
the second step.
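When the link performance functions are linear and the paths do not overlap, step 3 can even be solved in closed form. The sketch below works a hypothetical instance of the four-step method: three disjoint paths with travel times ci = ai + hi (invented free-flow times 10, 12, and 15) and total demand 30.

```python
# Hypothetical: three non-overlapping paths, c_i = a_i + h_i, total demand d.
a = [10.0, 12.0, 15.0]
d = 30.0
# Equal travel times: a_i + h_i = T for every path, plus sum(h_i) = d.
# Summing the equal-time relations over the n paths gives n*T = d + sum(a).
T = (d + sum(a)) / len(a)
h = [T - a_i for a_i in a]
# Step 4: all flows are nonnegative, so the assumed path set was correct.
assert all(h_i >= 0 for h_i in h)
```

Here T ≈ 22.33 and h ≈ (12.33, 10.33, 7.33); had some hi come out negative, that path would be dropped from Π̂rs and the system re-solved.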
Let’s take a concrete example: Figure 4.3 shows 7000 travelers traveling
from zone 1 to zone 2 during one hour, and choosing between the two routes
mentioned above: route 1, with free-flow time 20 minutes and capacity 4400
veh/hr, and route 2, with free-flow time 10 minutes and capacity 2200 veh/hr.
That means we have
t1(x1) = 20 (1 + 0.15 (x1/4400)^4) , (4.9)
t2(x2) = 10 (1 + 0.15 (x2/2200)^4) . (4.10)
This example is small enough that there is no real distinction between paths
and links (because each path consists of a single link), so link flows are simply
path flows (x1 = h1 and x2 = h2 ), and path travel times are simply link travel
times (c1 = t1 and c2 = t2 ). As a starting assumption, we assume that both
paths are used, so Π12 = {π1 , π2 }. We then need to choose the path flows h1
and h2 so that t1 (h1 ) = t2 (h2 ) (equilibrium) and h1 + h2 = 7000 (no vehicle
left behind). Substituting h2 = 7000 − h1 into the second delay function reduces
the equilibrium condition to a single equation in the unknown h1.
[Figure 4.3: two parallel routes carrying 7000 vehicles from zone 1 to zone 2.]
Using a numerical equation solver, we find that equilibrium occurs for h1 = 3376,
so h2 = 7000 − 3376 = 3624, and t1 (x1 ) = t2 (x2 ) = 21.0 minutes. Alternately,
we can use a graphical approach. Figure 4.4 plots the travel time on both routes
as a function of the flow on route 1 (because if we know the flow on route
1, we also know the flow on route 2). The point where they intersect is the
equilibrium: h1 = 3376, t1 = t2 = 21.0.
In the last step, we verify that the solution is reasonable (no paths have
negative flow) and complete (there are no paths we missed which have a shorter
travel time). Both conditions are satisfied, so we have found the equilibrium
flow and stop.
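The "numerical equation solver" step can be as simple as bisection, since t1 rises and t2 falls as flow shifts onto route 1. A minimal sketch for this example (the function names are our own):

```python
def t1(x):  # route 1, eq. (4.9): 20-minute free-flow time, capacity 4400 veh/hr
    return 20 * (1 + 0.15 * (x / 4400) ** 4)

def t2(x):  # route 2, eq. (4.10): 10-minute free-flow time, capacity 2200 veh/hr
    return 10 * (1 + 0.15 * (x / 2200) ** 4)

def equilibrium_flow(demand=7000, tol=1e-6):
    """Bisect on h1: the gap t1(h1) - t2(demand - h1) is increasing in h1."""
    lo, hi = 0.0, float(demand)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if t1(mid) < t2(demand - mid):
            lo = mid  # route 1 is still faster, so more flow belongs on it
        else:
            hi = mid
    return (lo + hi) / 2
```

Calling `equilibrium_flow()` returns h1 ≈ 3376, matching the solution above.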
Let’s modify the problem slightly, so the travel time on link 1 is now
t1(x1) = 50 (1 + 0.15 (x1/4400)^4) .
In this case, solving
50 (1 + 0.15 (h1/4400)^4) = 10 (1 + 0.15 ((7000 − h1)/2200)^4)
leads to a nonsensical solution: none of the answers involve real numbers without
an imaginary part. The physical interpretation of this is that there is no way to
assign 7000 vehicles to these two paths so they have equal travel times. Looking
at a plot (Figure 4.5), we see that this happens because path 2 dominates path
1: even with all 7000 vehicles on path 2, it has a smaller travel time.
Figure 4.4: The equilibrium point lies at the intersection of the delay functions.
[Figure 4.5: travel time plots for the modified problem, in which route 2's travel
time lies below route 1's for all feasible flows.]
We will not concern ourselves here with how a feasible assignment satisfying this assignment rule is found
(the trial-and-error method from the previous section would suffice), but will
focus instead on how equilibrium can be used to evaluate the performance of
potential alternatives.
In the first example, consider the network shown in Figure 4.6 where the
demand between nodes 1 and 2 is d12 = 30 vehicles. Using h↑ and h↓ to
represent the flows on the top and bottom paths in this network, the set of
feasible assignments is the set of two-dimensional vectors h = (h↑, h↓) which
satisfy h↑ + h↓ = 30 and h↑, h↓ ≥ 0.
[Figure 4.6: two-link network; top link travel time 50, bottom link travel time
45 + x, with 30 vehicles traveling from node 1 to node 2.]
Figure 4.7: The two-link network with an improvement on the bottom link
(travel time reduced to 40 + x/2).
Even though the bottom link was improved, the effect was completely offset by vehicles switching paths away
from the top link and onto the bottom link.
In other words, improving a link (even the only congested link in a network)
does not necessarily reduce travel times, because drivers can change their behav-
ior in response to changes on the network. (If we could somehow force travelers
to stay on the same paths they were using before, then certainly some vehicles
would experience a lower travel time after the improvement is made.) This is
called the Knight-Pigou-Downs paradox.
The second example was developed by Dietrich Braess, and shows a case
where building a new roadway link can actually worsen travel times for all
travelers in the network, after the new equilibrium is established. Assume we
have the network shown in Figure 4.8a, with the link performance functions
next to each link. The user equilibrium solution can be found by symmetry:
since the top and bottom paths are exactly identical, the demand of six vehicles
will evenly split between them. A flow of three vehicles on the two paths leads
to flow of three vehicles on each of the four links; substituting into the link
performance functions gives travel times of 53 on (1, 3) and (2, 4) and 30 on
4.3. THREE MOTIVATING EXAMPLES 89
[Figure 4.8 diagrams: links (1, 2) and (3, 4) have performance functions 10x;
links (1, 3) and (2, 4) have 50 + x; in panel (b), the new link (2, 3) has 10 + x.]
Figure 4.8: Braess network, before and after construction of a new link.
(1, 2) and (3, 4), so the travel time of both paths is 83, and the principle of user
equilibrium is satisfied.
Now, let’s modify the network by adding a new link from node 2 to node 3,
with link performance function 10 + x23 (Figure 4.8b). We have added a new
path to the network; let’s label these as follows. Path 1 is the top route [1, 3, 4];
path 2 is the middle route [1, 2, 3, 4], and path 3 is the bottom route [1, 2, 4].
Paths 1 and 3 each have a demand of three vehicles and a travel time of 83.
Path 2, on the other hand, has a flow of zero vehicles and a travel time of 70, so
the principle of user equilibrium is violated: the travel time on the used paths
is equal, but not minimal.
Assumption 4.1 suggests that someone will switch their path to take advan-
tage of this lower travel time; let’s say someone from path 1 switches to path 2,
so we have h1 = 2, h2 = 1, and h3 = 3. From this we can predict new link flows:
x12 = 4, x13 = 2, x23 = 1, x24 = 3, x34 = 3. Substituting into link performance
functions gives new link travel times: t12 = 40, t13 = 52, t23 = 11, t24 = 53,
t34 = 30, and finally we can recover the new path travel times: c1 = 82, c2 = 81,
and c3 = 93.
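These calculations are easy to mechanize. The sketch below encodes the network of Figure 4.8(b) (the encoding is our own; link and path labels follow the text) and recomputes the travel times after the switch:

```python
# Links of Figure 4.8(b) with their performance functions.
links = {
    (1, 2): lambda x: 10 * x,
    (1, 3): lambda x: 50 + x,
    (2, 3): lambda x: 10 + x,
    (2, 4): lambda x: 50 + x,
    (3, 4): lambda x: 10 * x,
}
# Path 1 = [1,3,4], path 2 = [1,2,3,4], path 3 = [1,2,4], as labeled in the text.
paths = {1: [(1, 3), (3, 4)], 2: [(1, 2), (2, 3), (3, 4)], 3: [(1, 2), (2, 4)]}

def path_travel_times(h):
    # Link flow = sum of flows on the paths that use the link.
    x = {a: sum(f for p, f in h.items() if a in paths[p]) for a in links}
    t = {a: links[a](x[a]) for a in links}   # link travel times
    return {p: sum(t[a] for a in paths[p]) for p in paths}
```

`path_travel_times({1: 2, 2: 1, 3: 3})` returns `{1: 82, 2: 81, 3: 93}`, matching c1, c2, and c3 above.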
This is still not an equilibrium; perhaps someone from path 3 will switch
[Figure: two parallel links from node 1 to node 2, with saturation flows of 30
and 60 veh/min and a total demand of 35 veh/min.]
So far, so good. Now assume that it has been a long time since the signal was
last retimed, and a traffic engineer decides to check on the signal and potentially
change the timing. A traditional rule in traffic signal timing is that the green
time given to an approach should be proportional to the degree of saturation.
However, with the given solution, the degrees of saturation are X ↑ = 0.982
and X ↓ = 0.953, which are unequal — the top link is given slightly less green
time than the equisaturation rule suggests, and the bottom link slightly more.
Therefore, the engineer changes the green times to equalize these degrees of sat-
uration, which occurs if G↑ = 48.3 sec and G↓ = 11.7 sec, a smallish adjustment.
If drivers could be counted on to remain on their current routes, all would be
well. However, changing the signal timing changes the delay formulas (4.12) on
the two links. Under the assumption that drivers always seek the shortest path,
the equilibrium solution will change as drivers swap from the (now longer) path
to the (now shorter) path. Re-equating the travel time formulas, the flow rates
on the top and bottom links are now x↑ = 23.8 veh/min and x↓ = 11.2 veh/min,
with equal delays t↑ = t↓ = 2.26 min on each link. Delay has actually increased,
because the signal re-timing (aimed at reducing delay) did not account for the
changes in driver behavior after the fact.
A bit surprised by this result, our diligent traffic engineer notes that the
degrees of saturation are still unequal, with X ↑ = 0.984 and X ↓ = 0.959,
actually further apart than before the first adjustment. Undeterred, the engineer
changes the signal timings again to G↑ = 48.5 s and G↓ = 11.5 s, which results
in equal degrees of saturation under the new flows. But by changing the green
times, the delay equations have changed, and so drivers re-adjust to move toward
the shorter path, leading to x↑ = 23.9 veh/min and x↓ = 11.1 veh/min, and new
delays of 2.43 minutes on each approach, even higher than before!
You can probably guess what happens from here, but Table 4.2 tells the
rest of the story. As our valiant engineer stubbornly retimes the signals in a
vain attempt to maintain equisaturation, flows always shift in response. Fur-
thermore, the delays grow faster and faster, asymptotically growing to infinity
as more and more adjustments are made. The moral of the story? Changing
the network will change the paths that drivers take, and “optimizing” the net-
work without accounting for how these paths will change is naive at best, and
counterproductive at worst.
Table 4.2: Evolution of the network as signals are iteratively retimed.
Iteration G↑ (s) G↓ (s) x↑ (veh/min) x↓ (veh/min) t↑ (min) t↓ (min) X↑ X↓
0 48 12 23.6 11.4 2.11 2.11 0.982 0.953
1 48.3 11.7 23.8 11.2 2.26 2.26 0.984 0.959
2 48.5 11.5 23.9 11.1 2.43 2.43 0.986 0.965
3 48.7 11.3 24.1 10.9 2.63 2.63 0.988 0.969
4 48.9 11.1 24.2 10.8 2.86 2.86 0.99 0.974
5 49.1 10.9 24.3 10.7 3.12 3.12 0.991 0.977
10 49.5 10.5 24.7 10.3 5.11 5.11 0.996 0.989
20 49.88 10.12 24.91 10.09 16.58 16.58 0.9988 0.9971
50 49.998 10.002 24.998 10.002 855.92 855.93 0.99998 0.99995
∞ 50 10 25 10 ∞ ∞ 1 1
4.4. HISTORICAL NOTES AND FURTHER READING 93
4.5 Exercises
1. [15] Expand the diagram of Figure 1.4 to include a “government model”
which reflects public policy and regulation regarding tolls. What would
this “government” agent influence, and how would it be influenced?
2. [23] How realistic do you think link performance functions are? Name at
least two assumptions they make about traffic congestion, and comment
on how reasonable you think they are.
3. [23] How realistic do you think the principle of user equilibrium is? Name
at least two assumptions it makes (either about travelers or congestion),
and comment on how reasonable you think they are.
[Figure 4.10: eight-node network; nodes 1, 5, 6, 3 form the top row and nodes
2, 7, 8, 4 the bottom row.]
7. [26] The network in Figure 4.10 has 8 nodes, 12 links, and 4 zones. The
travel demand is d13 = d14 = 100, d23 = 150, and d24 = 50. The dashed
links have a constant travel time of 10 minutes regardless of the flow on
those links; the solid links have the link performance function 15 + x/20
where x is the flow on that link.
(a) For each of the four OD pairs with positive demand, list all acyclic
paths connecting that OD pair. In total, how many such paths are in
the network?
(b) Assume that the demand for each OD pair is divided evenly among all
of the acyclic paths you found in part (a) for that OD pair. What are
the resulting link flow vector x, travel time vector t, and path travel
time vector c?
(c) Does that solution satisfy the principle of user equilibrium?
(d) What is the total system travel time?
8. [32] Consider the network and OD matrix shown in Figure 4.11. The
travel time on every link is 10 + x/100, where x is the flow on that link.
Find the link flows and link travel times which satisfy the principle of user
equilibrium.
9. [36] Find the equilibrium path flows, path travel times, link flows, and link
travel times on the Braess network (Figure 4.8b) when the travel demand
from node 1 to node 4 is (a) 2; (b) 7; and (c) 10.
10. [52] In the Knight-Pigou-Downs network, the simple and seemingly rea-
sonable heuristic to improve the congested link (i.e., changing its link
[Figure 4.11: six-node network (nodes 1–6) with OD matrix d13 = 5,000,
d14 = 0, d23 = 0, d24 = 10,000.]
performance function to lower the free-flow time) did not help. Can you
identify an equally simple heuristic that would lead you to improve the
other link? (Ideally, such a heuristic would be applicable in many kinds
of networks, not just one with an uncongestible link.)
11. [20] Give nontechnical explanations of why uniqueness, efficiency, and ex-
istence of equilibrium solutions have practical implications, not just theo-
retical ones. Concrete examples may be helpful.
12. [77] In the trial-and-error method, identify several different strategies for
choosing the initial set of paths. Test these strategies on networks of
various size and complexity. What conclusions can you draw about the
effectiveness of these different strategies, and the amount of effort they
involve?
13. [55] The trial-and-error method generally involves the solution of a system
of nonlinear equations. Newton’s method for solving a system of n nonlin-
ear equations is to move all quantities to one side of the equation, express-
ing the resulting equations in the form F(x) = 0 where F is a vector-valued
function mapping the n-dimensional vector of unknowns x to another n-
dimensional vector giving the value of each equation. An initial guess is
made for x, which is then updated using the rule x ← x − (JF(x))−1 F(x),
where (JF(x))−1 is the inverse of the Jacobian matrix of F, evaluated at
x. This process continues until (hopefully) x converges to a solution. A
quasi-Newton method approximates the Jacobian with a diagonal matrix
(equal to the Jacobian along the diagonal and zero elsewhere), which is much
faster to calculate and more convenient to work with. Extend
your experiments from Exercise 12 to see whether Newton or quasi-Newton
methods work better.
14. [84] What conditions on the link performance functions are needed for
Newton’s method (defined in the previous exercise) to converge to a solu-
tion of the system of equations? What about the quasi-Newton method?
Assume that the network consists of n parallel links connecting a single
origin to a single destination. Can you guarantee that the solution to the
system of equations only involves real numbers, and not complex numbers?
Hint: it may be useful to redefine the link performance functions for nega-
tive x values. This should not have any effect on the ultimate equilibrium
solution, since negative link flows are infeasible, but cleverly redefining the
link performance functions in this region may help show convergence.
15. [96] Repeat the previous exercise, but for a general network with any
number of links and nodes, and where paths may overlap.
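As an illustration of the update described in Exercise 13, the sketch below applies Newton's method to the two-route example of Section 4.2, written as the system F(h1, h2) = (t1(h1) − t2(h2), h1 + h2 − 7000) = 0; the closed-form 2×2 inverse and the starting guess are our own choices.

```python
def t1(x): return 20 * (1 + 0.15 * (x / 4400) ** 4)
def t2(x): return 10 * (1 + 0.15 * (x / 2200) ** 4)
def dt1(x): return 20 * 0.15 * 4 * x ** 3 / 4400 ** 4   # derivative of t1
def dt2(x): return 10 * 0.15 * 4 * x ** 3 / 2200 ** 4   # derivative of t2

def newton(h1=3500.0, h2=3500.0, iters=25):
    """Solve F(h) = (t1(h1) - t2(h2), h1 + h2 - 7000) = 0 by Newton's method."""
    for _ in range(iters):
        F1, F2 = t1(h1) - t2(h2), h1 + h2 - 7000
        # Jacobian is [[dt1(h1), -dt2(h2)], [1, 1]]; invert the 2x2 directly.
        a, b = dt1(h1), -dt2(h2)
        det = a - b  # a*1 - b*1
        h1 -= (F1 - b * F2) / det
        h2 -= (-F1 + a * F2) / det
    return h1, h2
```

From this starting point the iterates converge to h1 ≈ 3376, the same equilibrium found graphically in Section 4.2.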
Chapter 5
The Traffic Assignment Problem
This chapter formalizes the user equilibrium traffic assignment problem defined
in Chapter 4. Using the mathematical language from Chapter 3, we are pre-
pared to model and solve equilibrium problems even on networks of realistic
size, with tens of thousands of links and nodes. Section 5.1 begins with fixed
point, variational inequality, and optimization formulations of the user equilib-
rium problem. Section 5.2 then introduces important existence and uniqueness
properties of the user equilibrium assignment, echoing the discussion of the
small two-player games in Section 1.3. Section 5.3 presents several alternatives
to the user equilibrium rule for assigning traffic to a network, including the
system optimal principle, and the idea of bounded rationality. This chapter
concludes with Section 5.4, exploring the inefficiency of the user equilibrium
rule and unraveling the mystery of the Braess paradox.
98 CHAPTER 5. THE TRAFFIC ASSIGNMENT PROBLEM
Figure 5.1: Feasible path-flow sets H for the simple two-link network (left) and
a schematic representing H for more complex problems (right).
Figure 5.2: “Force” vectors −c for the simple two-link network (left) and a
schematic representing H for more complex problems (right).
boundary of the feasible set at that point. At a corner point, the force must make a
right or obtuse angle with all boundary directions. So we can characterize stable
path flow vectors ĥ as points where the force −c(ĥ) makes a right or obtuse
angle with any feasible direction h − ĥ, where h is any other feasible path flow
vector. Recalling vector operations, saying that two vectors make a right or
obtuse angle with each other is equivalent to saying that their dot product is
nonpositive. Thus, ĥ is a stable point if it satisfies
−c(ĥ) · (h − ĥ) ≤ 0 ∀h ∈ H , (5.1)
or, equivalently,
c(ĥ) · (ĥ − h) ≤ 0 ∀h ∈ H . (5.2)
This is a variational inequality (VI) in the form shown in Section 3.2. The
next result shows that solutions of this VI correspond exactly to equilibria:
Theorem 5.1. A path flow vector ĥ solves the variational inequality (5.2) if
and only if it satisfies the principle of user equilibrium.
Proof. The theorem can equivalently be written “a path flow vector ĥ does not
solve (5.2) if and only if it does not satisfy the principle of user equilibrium,”
which is easier to prove. Assume ĥ does not solve (5.2). Then there exists some
h ∈ H such that c(ĥ) · (ĥ − h) > 0, or equivalently c(ĥ) · ĥ > c(ĥ) · h. Now,
c(ĥ) · ĥ is the total system travel time when the path flows are ĥ, and c(ĥ) · h
is the total system travel time if the travel times were held constant at c(ĥ)
even when the flows changed to h. For the latter to be strictly less than the
former, switching from ĥ to h must have reduced at least one vehicle’s travel
time even though the path travel times did not change. This can only happen
if that vehicle was not on a minimum travel time path to begin with, meaning
that ĥ does not satisfy user equilibrium.
Conversely, assume that ĥ is not a user equilibrium. Then there is some OD
pair (r, s) and path π such that ĥπ > 0 even though cπ (ĥ) > minπ0 ∈Πrs cπ0 (ĥ).
Let π̂ be a minimum travel time path for this OD pair. Create a new path
flow vector h which is the same as ĥ except that ĥπ is reduced by some small
positive amount ε and ĥπ̂ is increased by ε. As long as 0 < ε < ĥπ the new
point h remains feasible. By definition all the components of ĥ − h are equal
to zero, except the component for π is ε and the component for π̂ is −ε. So
c(ĥ) · (ĥ − h) = ε(cπ(ĥ) − cπ̂(ĥ)) > 0, so ĥ does not solve (5.2).
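A quick numerical spot-check of this theorem (our own sketch): for the two-link network of Figure 4.6 (top link time 50, bottom link time 45 + x, demand 30), the equilibrium is ĥ = (25, 5), with both travel times equal to 50, and inequality (5.2) holds for every feasible h.

```python
import random

def c(h):
    # Path travel times for the two-link network: constant 50, and 45 + flow.
    return (50.0, 45.0 + h[1])

h_hat = (25.0, 5.0)                      # user equilibrium: both times are 50
c_hat = c(h_hat)
for _ in range(1000):
    h1 = random.uniform(0.0, 30.0)
    h = (h1, 30.0 - h1)                  # an arbitrary feasible assignment
    vi = sum(ci * (hh - hv) for ci, hh, hv in zip(c_hat, h_hat, h))
    assert vi <= 1e-9                    # inequality (5.2)
```

Here the dot product is identically zero because both equilibrium travel times equal 50; at a non-equilibrium ĥ, the inequality would fail for some feasible h.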
∂f/∂hπ − κrs ≥ 0    ∀(r, s) ∈ Z², π ∈ Πrs (5.6)
hπ (∂f/∂hπ − κrs) = 0    ∀(r, s) ∈ Z², π ∈ Πrs (5.7)
Σπ∈Πrs hπ = drs    ∀(r, s) ∈ Z² (5.8)
hπ ≥ 0    ∀π ∈ Π (5.9)
The last two of these are simply (5.4) and (5.5), requiring that the solution be
feasible. The condition (5.7) is the most interesting of these, requiring that the
product of each path's flow and another term involving f must be zero. For
this product to be zero, either hπ must be zero, or ∂f/∂hπ = κrs must be true;
and by (5.6), for all paths ∂f/∂hπ ≥ κrs. That is, in a solution to this problem,
whenever hπ is positive we have ∂f/∂hπ = κrs; if a path is unused, then ∂f/∂hπ ≥ κrs
— in other words, at optimality all used paths connecting OD pair (r, s) must
have equal and minimal ∂f/∂hπ, which is equal to κrs. According to the principle
of user equilibrium, all used paths have equal and minimal travel time... so if
we can choose a function f(h) such that ∂f/∂hπ = cπ, we are done!
So, the objective function must involve some integral of the travel times. A
first guess might look something like
f(h) = Σπ∈Π ∫_?^? cπ dhπ , (5.10)
5.1. MATHEMATICAL FORMULATIONS 101
where the bounds of integration and other details are yet to be determined.
The trouble is that cπ is not a function of hπ alone: the travel time on a path
depends on the flows on other paths as well, so the partial derivative of
this function with respect to hπ will not simply be cπ , but contain other terms
as well. However, these interactions are not arbitrary, but instead occur where
paths overlap, that is, where they share common links. In fact, if we try writing
a similar function to our guess but in terms of link flows, instead of path flows,
we will be done.
To be more precise, let x(h) be the link flows as a function of the path flows
h, as determined by equation (4.2). Then the function
$$f(h) = \sum_{(i,j) \in A} \int_0^{x_{ij}(h)} t_{ij}(x) \, dx \tag{5.11}$$
satisfies our purposes. To show this, calculate the partial derivative of f with
respect to the flow on an arbitrary path π, using the fundamental theorem of
calculus and the chain rule:
$$\frac{\partial f}{\partial h^\pi} = \sum_{(i,j) \in A} t_{ij}(x_{ij}(h)) \frac{\partial x_{ij}}{\partial h^\pi} = \sum_{(i,j) \in A} \delta_{ij}^\pi t_{ij}(x_{ij}(h)) = c^\pi \tag{5.12}$$
where the last two equalities respectively follow from differentiating (4.2) and
from (4.4).
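To make this concrete, equation (5.12) can be checked numerically. In the following Python sketch, the three-link network, its link performance functions, and the two paths are all invented for illustration; the point is that a finite-difference estimate of the partial derivative of the sum of link integrals matches the path travel time.

```python
# Numeric check of equation (5.12) on a small, invented example.
# Links 1 and 2 are in parallel, both feeding link 3 (shared by both paths).
# Path 1 uses links {1, 3}; path 2 uses links {2, 3}.

def link_flows(h1, h2):
    # Equation (4.2): each link's flow is the sum of flows on paths using it.
    return h1, h2, h1 + h2

def f(h1, h2):
    # Sum over links of the integral of the performance function, eq. (5.11).
    # Hypothetical functions: t1(x) = 2 + x, t2(x) = 1 + 2x, t3(x) = 3 + x,
    # so the integrals are 2x + x^2/2, x + x^2, and 3x + x^2/2.
    x1, x2, x3 = link_flows(h1, h2)
    return (2*x1 + x1**2/2) + (x2 + x2**2) + (3*x3 + x3**2/2)

def path1_time(h1, h2):
    # c^1 = t1(x1) + t3(x3)
    x1, x2, x3 = link_flows(h1, h2)
    return (2 + x1) + (3 + x3)

h1, h2, eps = 4.0, 6.0, 1e-6
# Central finite difference for the partial derivative with respect to h1
deriv = (f(h1 + eps, h2) - f(h1 - eps, h2)) / (2 * eps)
print(deriv, path1_time(h1, h2))  # the two values should agree
```

The derivative of the Beckmann-style sum with respect to a path's flow picks up exactly the travel times of the links on that path, which is the content of (5.12).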
Finally, we can clean up the notation a bit by simply introducing the link
flows x as a new set of decision variables, adding equations (4.2) as constraints
to ensure they are consistent with the path flows. This gives the following
optimization problem, first formulated by Martin Beckmann:
$$\min_{x,h} \quad \sum_{(i,j) \in A} \int_0^{x_{ij}} t_{ij}(x) \, dx \tag{5.13}$$
$$\text{s.t.} \quad x_{ij} = \sum_{\pi \in \Pi} h^\pi \delta_{ij}^\pi \qquad \forall (i,j) \in A \tag{5.14}$$
$$\sum_{\pi \in \Pi^{rs}} h^\pi = d^{rs} \qquad \forall (r,s) \in Z^2 \tag{5.15}$$
$$h^\pi \ge 0 \qquad \forall \pi \in \Pi \tag{5.16}$$
Section 5.2 shows that the objective function is convex, and the feasible
region is a convex set, so this problem will be relatively easy to solve.
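As a small illustration (not from the text), the Beckmann program can be solved directly for the two-link network used elsewhere in this chapter: constant travel time 50 on the top link, 45 + x on the bottom, and 30 travelers. A minimal Python sketch using bisection on the derivative of the objective:

```python
# Solve the Beckmann program for a two-link network (top: t = 50,
# bottom: t = 45 + x, total demand 30).  With x vehicles on the bottom
# link, the objective is 50*(30 - x) + integral_0^x (45 + s) ds.
# Its derivative is -50 + (45 + x), which vanishes where the two link
# travel times are equal -- the user equilibrium condition.

def beckmann_derivative(x):
    return -50.0 + (45.0 + x)   # d/dx of the Beckmann objective

# Bisection on the derivative over the feasible range [0, 30]
lo, hi = 0.0, 30.0
for _ in range(60):
    mid = (lo + hi) / 2
    if beckmann_derivative(mid) < 0:
        lo = mid
    else:
        hi = mid
x_bottom = (lo + hi) / 2
x_top = 30.0 - x_bottom
print(x_top, x_bottom)  # approximately 25 and 5, the user equilibrium
```

Minimizing the sum of link integrals recovers exactly the user equilibrium flows, which is the point of the Beckmann formulation.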
[Figure 5.3: Two-link network connecting nodes 1 and 2, carrying 30 travelers; the top link has constant travel time 50 and the bottom link has link performance function 45 + x.]
In the transit ridership example of Section 3.1, we could formulate the solu-
tion to the problem as a fixed point problem directly, without first going through
a variational inequality. The traffic assignment problem is slightly more com-
plex, because at equilibrium all used routes will have equal travel time. So, if
we try to apply the same technique as in the transit ridership problem, we will
run into a difficulty with “breaking ties.” To see this concretely, consider the
two-route network in Figure 5.3, and let h and c be the vectors of route flows
and route travel times. As Figure 1.3 suggests, we can try to define two func-
tions: H(c) gives the vector of path flows representing route choices if the travel
times were fixed at c, and C(h) gives the vector of path travel times when the
path flows are h. The function C is clearly defined: simply calculate the cost
on each route using the link performance functions. However, if both paths are
used at equilibrium, then c1 = c2 , which means that any path flow vector would
be a valid choice for H(c). Be sure you understand this difficulty: because H
assumes that the travel times are fixed, if they are equal any path flow vector is
consistent with our route choice assumptions. If we relax the assumption that
the link performance functions are fixed, then H is no simpler than solving the
equilibrium problem in the first place.
To resolve this, we introduce the concept of a multifunction.1 Recall that
a regular function from X to Y associates each value in its domain X with
exactly one value in the set Y . A multifunction, on the other hand, associates
each value in the domain X with some subset of Y . If F is such a multifunction
the notation F : X ⇒ Y can be used. For example, consider the multifunction
R(c). If c1 < c2 (and these travel times were fixed), then the only consistent
path flows are to have everybody on path 1. Likewise, if c1 > c2 , then everyone
would have to be on path 2. Finally, if c1 = c2 then people could split in any
proportion while satisfying the rule that drivers are using least-time paths. That
1 Also known by many other names, including correspondence, point-to-set map, and set-valued map.
is,
$$R(c_1, c_2) = \begin{cases} \{(30, 0)\} & c_1 < c_2 \\ \{(0, 30)\} & c_1 > c_2 \\ \{(h, 30 - h) : h \in [0, 30]\} & c_1 = c_2 \end{cases} \tag{5.17}$$
[Figure 5.4: Graph of the multifunction R, plotting h1 against c2 − c1.]
The notation in equation (5.17) is chosen very carefully to reflect the fact that
R is a multifunction, which means that its values are sets, not specific numbers.
In the first two cases, the set consists of only a single element, so this distinction
may seem a bit pedantic; but in the last case, the set contains a whole range of
possible values. In this way, multifunctions generalize the concept of a function
by allowing R to take multiple "output" values for a single input. This is
graphically represented in Figure 5.4, using the fact that the function can be
parameterized in terms of the single variable c2 − c1.
Fixed-point problems can be formulated for multifunctions as well as for
functions. If F is an arbitrary multifunction defined on the set K, then a fixed
point of F is a point x such that x ∈ F (x). Note the use of set inclusion ∈
rather than equality = because F can associate multiple values with x. Just as
Brouwer’s theorem guarantees existence of fixed points for functions, Kakutani’s
theorem guarantees existence of fixed points for multifunctions under general
conditions:
Theorem 5.2. (Kakutani). Let F : K ⇒ K be a multifunction (from the set
K to itself ), where K is convex and compact. If F has a closed graph and F (x)
is nonempty and convex for all x ∈ K, then there is at least one point x ∈ K
such that x ∈ F (x).
The new terminology here is a closed graph; we say that the multifunction
F has a closed graph if the set {(x, y) : x ∈ X, y ∈ F (x)} is closed. Again,
each of these conditions is necessary; you might find it helpful to visualize these
conditions geometrically similar to Figure 3.1.
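To see these definitions in action, here is a small Python sketch (our own illustration, not from the text) that encodes the cost function C and the multifunction R of equation (5.17) for the two-link network of Figure 5.3, and checks the fixed-point condition h ∈ R(C(h)):

```python
# Two-link network of Figure 5.3: top link time 50, bottom 45 + x, demand 30.

def C(h):
    # Path travel times as a function of path flows (h[0] top, h[1] bottom)
    return (50.0, 45.0 + h[1])

def in_R(h, c):
    # Membership test for the multifunction R(c1, c2) of equation (5.17):
    # True if the path flow vector h is a consistent route choice when
    # travel times are fixed at c.
    c1, c2 = c
    if c1 < c2:
        return h == (30.0, 0.0)
    if c1 > c2:
        return h == (0.0, 30.0)
    # c1 == c2: any split of the 30 travelers is consistent
    return h[0] >= 0 and h[1] >= 0 and h[0] + h[1] == 30.0

h_star = (25.0, 5.0)            # user equilibrium: both times equal 50
print(in_R(h_star, C(h_star)))  # True: h* is a fixed point, h* in R(C(h*))

h_bad = (10.0, 20.0)            # bottom time 65 > 50, not an equilibrium
print(in_R(h_bad, C(h_bad)))    # False
```

Because a multifunction's values are sets, the natural computational representation is a membership test rather than a function returning a single vector.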
As a result of Kakutani’s theorem, we know that an equilibrium solution
must always exist in the traffic assignment problem. Let H denote the set of
all feasible path flow vectors, that is, vectors h such that Σπ∈Πrs hπ = drs for
all OD pairs (r, s) and hπ ≥ 0 for all π ∈ Π. Let the function C(h) represent
the path travel times when the path flows are h.2 Let the multifunction R(c)
represent the set of path flows which could possibly occur if all travelers chose
least cost paths given travel times c. Mathematically
$$R(c) = \left\{ h \in H : h^\pi > 0 \text{ only if } c^\pi = \min_{\pi' \in \Pi^{rs}} c^{\pi'} \text{ for all } (r,s) \in Z^2,\ \pi \in \Pi^{rs} \right\} \tag{5.18}$$
This is the generalization of equation (5.17) when there are multiple OD pairs
and multiple paths connecting each OD pair — be sure that you can see how
(5.17) is a special case of (5.18) when there is just one OD pair connected by
two paths.
Theorem 5.3. If the link performance functions tij are continuous, then at
least one equilibrium solution exists to the traffic assignment problem.
Proof. Let the multifunction F be the composition of the multifunction R and
the function C defined above, so F (h) = R(C(h)) is the set of path flow choices
which are consistent with drivers choosing fastest paths when the travel times
correspond to path flows h. An equilibrium path flow vector is a fixed point of
F : if h ∈ F (h) then the path flow choices and travel times are consistent with
each other. The set H of feasible h is convex and compact by Lemma 5.1 in
the next section. Examining the definition of R in (5.18), we see that for any
vector c, R(c) is nonempty (since there is at least one path of minimum cost),
and convex and compact by the same argument used in the proof of Lemma 5.1. Finally,
the graph of F is closed, as you are asked to show in the exercises.
Section 5.2.2, this suggests that the principle of user equilibrium is not strong
enough to identify path flows, only link flows. If path flows are needed to
evaluate a project, an alternative approach is needed, and this section explains
the concepts of maximum entropy and proportionality which provide this alter-
native.
Lastly, when solving equilibrium problems on large networks both the link
flow and path flow representations have significant limitations — algorithms
only using link flows tend to be slow, while algorithms only using path flows
can require a large amount of memory and tend to return low-entropy path
flow solutions. A compromise is to use the link flows, but distinguish the flow
on each link by its origin or destination. While explored more in Chapter 6,
Section 5.2.3 lays the groundwork by showing how flow can be decomposed this
way, and that at equilibrium the links with positive flow from each origin or
destination form an acyclic network.
that it is closed. For boundedness, consider any h ∈ H, any OD pair (r, s), and
any path π ∈ Πrs. We must have hπ ≤ drs, since each hπ is nonnegative and
Σπ∈Πrs hπ = drs. Therefore, if D is the largest entry in the OD matrix, we
have hπ ≤ D for any path π. Therefore |h| ≤ √|Π| D, so H is contained in the
ball B√|Π|D(0) and H is bounded.
Finally, for convexity, consider any h1 ∈ H, any h2 ∈ H, any λ ∈ [0, 1], and
the resulting vector h = λh1 + (1 − λ)h2. For any path π, h1π ≥ 0 and h2π ≥ 0,
so hπ = λh1π + (1 − λ)h2π ≥ 0 as well. Furthermore, for any OD pair (r, s),
$$\sum_{\pi \in \Pi^{rs}} h^\pi = \lambda \sum_{\pi \in \Pi^{rs}} h_1^\pi + (1 - \lambda) \sum_{\pi \in \Pi^{rs}} h_2^\pi = \lambda d^{rs} + (1 - \lambda) d^{rs} = d^{rs}$$
so h ∈ H as well.
Every feasible link assignment x ∈ X is obtained from a linear transforma-
tion of some h ∈ H by (5.14) so X is also closed, bounded, and convex.
Proposition 5.1. If the link performance functions tij are continuous for each
link (i, j) ∈ A, then there is a feasible assignment satisfying the principle of user
equilibrium.
Proof. Taking H as the set of feasible assignments, the range of the func-
tion f (h) = projH (h − c(h)) clearly lies in H because of the projection. By
Lemma 5.1, H is compact and convex, so if we show that f is continuous, then
Brouwer’s Theorem guarantees existence of a fixed point. The discussion in
Section 5.1 showed that each fixed point is a user equilibrium solution, which
will be enough to prove the result.
We use the result that the composition of continuous functions is continuous
(Proposition A.6). The function f is the composition of two other functions
(call them f1 and f2 ), where f1 (h) = projH (h) and f2 (h) = h − c(h). By
Proposition A.7, f1 is continuous because H is a convex set. Furthermore, f2
is continuous if c is a continuous function of h, which is true by hypothesis.
Therefore, the conditions of Brouwer’s Theorem are satisfied, and f has at least
one fixed point (which satisfies the principle of user equilibrium).
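The projection map in this proof can also be used computationally. The following Python sketch is our own illustration under assumptions not in the text: it uses the two-link network of Figure 5.3, and adds a step size α for convergence (the proof only needs the fixed point, not any particular iteration). It iterates a damped projected step and converges to the user equilibrium:

```python
# Damped projection iteration for the two-link network (top: 50,
# bottom: 45 + x, demand 30).  The feasible set H is the segment
# {h >= 0, h1 + h2 = 30}; alpha is a step size added so that the
# iteration converges.

def cost(h):
    return (50.0, 45.0 + h[1])

def project(y):
    # Project onto the hyperplane h1 + h2 = 30, then clip to h >= 0.
    shift = (30.0 - y[0] - y[1]) / 2.0
    h = [max(y[0] + shift, 0.0), max(y[1] + shift, 0.0)]
    # Rescale in case clipping moved the sum off 30
    total = h[0] + h[1]
    return [30.0 * v / total for v in h]

alpha = 0.1
h = [15.0, 15.0]                  # start with an even split
for _ in range(500):
    c = cost(h)
    h = project([h[0] - alpha * c[0], h[1] - alpha * c[1]])
print(h)  # approaches [25, 5], the user equilibrium
```

At the fixed point the projected step leaves h unchanged, which is exactly the condition h = projH(h − c(h)) used in the proof.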
Proposition 5.2. If the link performance functions tij are differentiable, and
t0ij (x) > 0 for all x and links (i, j), there is exactly one feasible link assignment
satisfying the principle of user equilibrium.
Proof. The previous section showed that user equilibrium link flow solutions
correspond to minimum points of f on X. If we can show that f is strictly
convex, then there can only be one minimum point. (Differentiability implies
continuity, so we can assume that at least one minimum point exists by Propo-
sition 5.1). Since f is a function of multiple variables (each link’s flow), to show
that f is convex we can write its Hessian matrix of second partial derivatives.
The first partial derivatives take the form ∂f/∂xij = tij(xij). So the diagonal entries
of the Hessian take the form ∂²f/∂x²ij = t′ij(xij), while the off-diagonal entries
take the form ∂²f/∂xij∂xkℓ = 0. Since the Hessian is a diagonal matrix, and its diagonal
entries are strictly positive by assumption, f is a strictly convex function.
Therefore there is only one feasible link assignment satisfying the principle of
user equilibrium.
It is also possible to prove the same result under the slightly weaker condition
that the link performance functions are continuous and strictly increasing; this
is undertaken in the exercises.
To summarize the above discussion, if the link performance functions are
continuous, then at least one user equilibrium solution exists; if in addition
the link performance functions are strictly increasing, then exactly one user
equilibrium solution exists. For many problems representing automobile traffic,
these conditions seem fairly reasonable, if not universally so: adding one more
vehicle to the road is likely to increase the delay slightly, but not dramatically.
[Figure 5.5: Network with nodes A, B, and C; two parallel links connect A to B (equilibrium flow 30 on each) and two parallel links connect B to C (equilibrium flows 40 on top, 20 on bottom).]

Path                        h1    h2
1 (top A-B, top B-C)        20    30
2 (top A-B, bottom B-C)     10     0
3 (bottom A-B, top B-C)     20    10
4 (bottom A-B, bottom B-C)  10    20
path travel times), and since the principle of user equilibrium is defined in terms
of path travel times, it is reasonable to expect multiple path flow solutions to
satisfy the equilibrium principle.
The existence of multiple path flow equilibria (despite a unique link flow
equilibrium) is not simply a technical curiosity. Rather, it plays an important
role in using network models to evaluate and rank alternatives. Thus far, we’ve
been content to ask ourselves what the equilibrium link flows are, but in practice
these numbers are usually used to generate other, more interesting measures of
effectiveness such as the total system travel time and total vehicle-miles traveled.
If a neighborhood group is worried about increased traffic, link flows can support
this type of analysis. If we are concerned about safety, we can use link flows as
one input in estimating crash frequency, and so forth.
Other important metrics, however, require more information than just the
total number of vehicles on a link. Instead, we must know the entire paths used
by drivers. Examples include
Figure 5.6: Molecules of an ideal gas in a box. Which situation is more likely?
Why does the scenario on the left seem so unlikely? If the molecules form an
ideal gas, the location of each molecule is independent of the location of every
other molecule. Therefore, the probability that any given molecule is in the top
half of the box is 1/2, and the probability that all n molecules are as in the left
figure is
$$p^L = \left( \frac{1}{2} \right)^n . \tag{5.19}$$
To find the probability that the distribution of molecules is as in the right figure
(pR ), we use the binomial distribution:
$$p^R = \binom{n}{n/2} \left( \frac{1}{2} \right)^{n/2} \left( \frac{1}{2} \right)^{n/2} = \frac{n!}{(n/2)!\,(n/2)!} \left( \frac{1}{2} \right)^n \tag{5.20}$$
and the most likely path flow vector is the one which maximizes this product.
Since (1/k)^d is a constant, the most likely path flow vector simply maximizes
$$p = \frac{d!}{h^1!\, h^2! \cdots h^k!} \tag{5.22}$$
Now we introduce a common trick: since the logarithm function is strictly
increasing, the path flows which maximize p also maximize log p, which is
$$\log p = \log d! - \sum_{\pi=1}^{k} \log h^\pi ! \tag{5.23}$$
To simplify further, we use Stirling's approximation, which states that n! ≈
nⁿe⁻ⁿ√(2πn), or equivalently log n! ≈ n log n − n + (1/2) log(2πn). This approximation
is asymptotically exact in the sense that the ratio between n! and
nⁿe⁻ⁿ√(2πn) approaches 1 as n grows large. Further, when n is large, n is much
larger than log n, so the last term can be safely ignored and log n! ≈ n log n − n.
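The quality of Stirling's approximation is easy to verify numerically; the following Python snippet (the values of n are our own choices, for illustration) computes the ratio between n! and nⁿe⁻ⁿ√(2πn):

```python
from math import factorial, sqrt, pi, e

def stirling(n):
    # Stirling's approximation: n! ~ n^n * e^(-n) * sqrt(2*pi*n)
    return n**n * e**(-n) * sqrt(2 * pi * n)

for n in (5, 20, 50):
    ratio = factorial(n) / stirling(n)
    print(n, ratio)   # the ratio approaches 1 as n grows
```

Even at n = 5 the ratio is within about 2% of 1, and it shrinks toward 1 roughly like 1 + 1/(12n).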
Substituting into (5.23) we obtain
$$\log p \approx (d \log d - d) - \sum_\pi (h^\pi \log h^\pi - h^\pi) \tag{5.24}$$
Since d = Σπ hπ, we can manipulate (5.24) to obtain
$$\log p \approx -\sum_\pi h^\pi \log(h^\pi / d) \tag{5.25}$$
and the path flows h1, ..., hk maximizing this quantity (subject to the constraint
d = Σπ hπ and nonnegativity) approximately maximize the probability that
this particular path flow vector will occur in the field if travelers are choosing
routes independently of each other.
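As a quick sanity check (our own example, not from the text), the quantity in (5.25) rewards spreading demand across paths: with d = 30 and two otherwise indistinguishable paths, an even split maximizes it.

```python
from math import log

def entropy(flows, d):
    # The quantity in equation (5.25), with the convention 0*log(0) = 0
    return -sum(h * log(h / d) for h in flows if h > 0)

d = 30
print(entropy([15, 15], d))   # even split: highest entropy
print(entropy([20, 10], d))   # somewhat concentrated: lower
print(entropy([29, 1], d))    # very concentrated: much lower entropy
```

The more evenly the demand is spread, the larger the multinomial coefficient (5.22), and hence the larger its logarithm (5.25).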
To move towards the traffic assignment problem we’re used to, we need to
make the following changes:
1. In the traffic assignment problem, the demand d and path flows hπ do not
need to be integers, but can instead take on real values. The corresponding
interpretation is that the demand is “divided” into smaller and smaller
units, each of which is assumed to act independently of every other unit.
(This is how user equilibrium works with continuous flows.) This doesn’t
cause any problems, and in fact helps us out: as we take the limit towards
infinite divisibility, Stirling’s approximation becomes exact — so we can
replace the ≈ in our formulas with exact equality.
2. There are multiple origins and destinations. This doesn’t change the basic
idea, the formulas just become incrementally more complex as we sum
equations of the form (5.25) for each OD pair.
3. Travel times are not constant, but are instead flow dependent. Again,
this doesn't change any basic ideas; we just have to add a constraint
requiring the path flows to reproduce the equilibrium link flows.
Putting all this together, we seek the path flows h which solve the optimization
problem
$$\max_h \quad -\sum_{(r,s) \in Z^2} \sum_{\pi \in \hat\Pi^{rs}} h^\pi \log(h^\pi / d^{rs}) \tag{5.26}$$
$$\text{s.t.} \quad \sum_{\pi \in \Pi} \delta_{ij}^\pi h^\pi = \hat x_{ij} \qquad \forall (i,j) \in A \tag{5.27}$$
$$\sum_{\pi \in \hat\Pi^{rs}} h^\pi = d^{rs} \qquad \forall (r,s) \in Z^2 \tag{5.28}$$
$$h^\pi \ge 0 \qquad \forall \pi \in \Pi \tag{5.29}$$
The objective function (5.26) is called the entropy of a particular path flow
solution, and the constraints (5.28), (5.27), and (5.29) respectively require that
the path flows are consistent with the OD matrix, equilibrium link flows, and
nonnegativity. The set Π̂rs is the set of paths which are used by OD pair (r, s).
The word entropy is meant to be suggestive. The connection with the ther-
modynamic concept of entropy may be apparent from the physical analogy at
the start of this section.3 In physics, entropy can be interpreted as a measure of
disorder in a system. Both the left and right panels in Figure 5.6 are “allowable”
in the sense that they obey the laws of physics. However, the scenario in the
right has much less structure and much higher entropy, and is therefore more
likely to occur.
It is not trivial to solve the optimization problem (5.26)–(5.29). It turns out
that entropy maximization implies a much simpler condition, proportionality
among pairs of alternate segments. Any two paths with a common origin and
destination imply one or more pairs of alternate segments where the paths di-
verge, separated by links common to both paths. In Figure 5.5, there are two
pairs of alternate segments: the top and bottom links between nodes A and B,
and the top and bottom links between B and C. It turns out that path flows h1
are the solution to the entropy maximization problem.
You might notice some regularity or patterns in this solution. For instance,
looking only at the top and bottom links between A and B, at equilibrium these
links have equal flow (30 vehicles each). Paths 1 and 3 are the same except for
between nodes A and B, and these paths also have equal flow (20 vehicles each).
The same is true for paths 2 and 4 (10 vehicles each). Or, more subtly, between
B and C, the ratio of flows between the top and bottom link is 2:1, the same as
the ratio of flows on paths 1 and 2 (which only differ between B and C), and
the ratio of flows on paths 3 and 4. This is no coincidence; in fact, we can show
that entropy maximization implies proportionality.
3 Actually, the term here is more directly drawn from the fascinating field of information theory.
[Figure 5.7: Network with origins A and B and destinations E and F; all paths pass through nodes C and D, which are connected by two parallel links carrying equilibrium flows of 40 (top) and 120 (bottom).]
Before proving this fact, we show that it holds even for different OD pairs.
The network in Figure 5.7 has 40 vehicles traveling from origin A to destination
F, and 120 vehicles from B to E, and the equilibrium link flows are shown in
the figure. Each OD pair has two paths available to it, one using the top link
between C and D, and the other using the bottom link between C and D. Using
an upward-pointing arrow to denote the first type of path, and a downward-
pointing arrow to describe the second, the four paths are h↑AF , h↓AF , h↑BE , and
h↓BE . Solving the optimization problem (5.26)–(5.29), we find the most likely
path flows are h↑AF = 10, h↓AF = 30, h↑BE = 30, and h↓BE = 90. The equilibrium
flows for the top and bottom links between C and D have a ratio of 1:3; you
can see that this ratio also holds between h↑AF and h↓AF , as well as between h↑BE
and h↓BE .
In particular, the obvious-looking solution h↑AF = 40, h↓AF = 0, h↑BE = 0,
h↓BE = 120 has extremely low entropy, because it implies that for some reason all
travelers from one OD pair are taking one path, and all travelers from the other
are taking the other path, even though both paths are perceived identically by
all travelers and even though travelers are making choices independent of each
other. This is exceptionally unlikely.
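The entropy difference between these two solutions can be checked directly. A short Python computation of the objective (5.26) for the proportional solution versus the "obvious-looking" one, using the demands and path flows from Figure 5.7:

```python
from math import log

def entropy(flows_by_od):
    # Objective (5.26): sum over OD pairs of -sum h*log(h/d), with 0*log(0) = 0
    total = 0.0
    for d, flows in flows_by_od:
        total += -sum(h * log(h / d) for h in flows if h > 0)
    return total

# OD pair A-F has demand 40, B-E has demand 120; each has an "up" and
# a "down" path through the two C-D links.
proportional = [(40, [10, 30]), (120, [30, 90])]   # max-entropy solution
obvious      = [(40, [40, 0]),  (120, [0, 120])]   # all-or-nothing split

print(entropy(proportional))  # about 90.0
print(entropy(obvious))       # 0.0: exceptionally unlikely
```

Both solutions reproduce the same link flows (40 on top, 120 on bottom), but the entropy objective sharply distinguishes them.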
We now derive the proportionality condition, defining it in a slightly more
precise way:
Theorem 5.4. Let π1 and π2 be any two paths connecting the same OD pair.
If the path flows h solve the entropy-maximizing problem (5.26)–(5.29), then
the ratio of flows h1 /h2 for paths π1 and π2 is identical regardless of the OD
pair these paths connect, and only depends on the pairs of alternate segments
distinguishing these two paths.
The proof is a bit lengthy, and is deferred to the end of the section. But
even though the derivation is somewhat involved, the proportionality condition
itself is fairly intuitive: when confronted with the same set of choices, travelers
from different OD pairs should behave in the same way. The proportionality
condition implies nothing more than that the share of travelers choosing one
alternative over another is the same across OD pairs. Proportionality is also a
relatively easy condition to track and enforce.
It would be especially nice if proportionality can be shown fully equivalent
to entropy maximization. Theorem 5.4 shows that entropy maximization im-
plies proportionality, but is the converse true? Unfortunately, the result is no,
and examples can be created which satisfy proportionality without maximizing
entropy. Therefore, proportionality is a weaker condition than entropy max-
imization. The good news is that proportionality “gets us most of the way
there.” In the Chicago regional network, there are over 93 million equal travel-
time paths at equilibrium; after accounting for the equilibrium and “no vehicle
left behind” constraints (5.27) and (5.28), there are still over 90 million degrees
of freedom in how the path flows are chosen. Accounting for proportionality
reduces the number of degrees of freedom to 91 (a reduction of 99.9999%!)
So, in practical terms, enforcing proportionality seems essentially equivalent to
maximizing entropy.
4 Everything in this section would apply equally well to an aggregation by destination; the presentation here uses origins for concreteness.
Proposition 5.3. Let xr be the link flows associated with origin r at some
feasible assignment satisfying the principle of user equilibrium. If tij > 0 for all
links (i, j), then the subset of arcs Ar = {(i, j) ∈ A : xrij > 0} with positive flow
from origin r contains no cycle.
$$L(h, \beta, \gamma) = -\sum_{(r,s) \in Z^2} \sum_{\pi \in \hat\Pi^{rs}} h^\pi \log \frac{h^\pi}{d^{rs}} + \sum_{(i,j) \in A} \beta_{ij} \left( \hat x_{ij} - \sum_{\pi \in \Pi} \delta_{ij}^\pi h^\pi \right) + \sum_{(r,s) \in Z^2} \gamma_{rs} \left( d^{rs} - \sum_{\pi \in \hat\Pi^{rs}} h^\pi \right) \tag{5.31}$$
using β and γ to denote the Lagrange multipliers. Note that the nonnegativity
constraint can be effectively disregarded since the objective function is only
defined for strictly positive h. Therefore at the optimum solution the partial
derivative of L with respect to any path flow must vanish:
$$\frac{\partial L}{\partial h^\pi} = -1 - \log \frac{h^\pi}{d^{rs}} - \sum_{(i,j) \in A} \delta_{ij}^\pi \beta_{ij} - \gamma_{rs} = 0 \tag{5.32}$$
Solving (5.32) for hπ gives
$$h^\pi = d^{rs} \exp(-1 - \gamma_{rs}) \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^\pi \beta_{ij} \right) \tag{5.33}$$
Likewise we have
$$\frac{\partial L}{\partial \gamma_{rs}} = d^{rs} - \sum_{\pi' \in \hat\Pi^{rs}} h^{\pi'} = 0 \tag{5.34}$$
so
$$d^{rs} = \sum_{\pi' \in \hat\Pi^{rs}} h^{\pi'} = \sum_{\pi \in \hat\Pi^{rs}} d^{rs} \exp(-1 - \gamma_{rs}) \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^\pi \beta_{ij} \right) \tag{5.35}$$
substituting the result from (5.33). Therefore we can solve for γrs :
$$\gamma_{rs} = -1 + \log \sum_{\pi' \in \hat\Pi^{rs}} \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^{\pi'} \beta_{ij} \right) \tag{5.36}$$
Substituting (5.36) back into (5.33) gives
$$h^\pi = \frac{d^{rs}}{\sum_{\pi' \in \hat\Pi^{rs}} \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^{\pi'} \beta_{ij} \right)} \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^\pi \beta_{ij} \right) \tag{5.37}$$
Noting that the fraction in (5.37) only depends on the OD pair (r, s), we can
simply write
$$h^\pi = K_{rs} \exp\left( -\sum_{(i,j) \in A} \delta_{ij}^\pi \beta_{ij} \right) \tag{5.38}$$
where the steps of the derivation respectively involve substituting (5.37), expanding
A into A1 ∪ A2 ∪ A3 ∪ A4, using the definitions of the Ai sets to identify
sums where δπij = 0, splitting exponential terms, and canceling common factors.
Thus in the end we have
$$\frac{h_1}{h_2} = \frac{\exp\left( -\sum_{(i,j) \in A_3} \delta_{ij}^{\pi_1} \beta_{ij} \right)}{\exp\left( -\sum_{(i,j) \in A_4} \delta_{ij}^{\pi_2} \beta_{ij} \right)} \tag{5.44}$$
regardless of the OD pair h1 and h2 connect, and this ratio only depends on A3
and A4, that is, the pairs of alternate segments distinguishing π1 and π2.
Assignment with perception errors relaxes the assumption that drivers have
perfect knowledge of travel times. The interpretation is that every driver per-
ceives a particular travel time on each path (which may or may not equal its
actual travel time), and chooses a path that they believe to be shortest based on
these perceptions. The net effect is that travelers are distributed across paths
with different travel times, but are concentrated more on paths with lower travel
times. (If a path is very much longer than the shortest one, it is unlikely that
someone’s perception would be so far off as to mistakenly think it is shortest.)
The mathematical tool most commonly used to model perception errors is
the theory of discrete choice from economics. The resulting model is termed
stochastic user equilibrium, and this important model is the focus of Section 8.3.
The other two alternative assignment rules are discussed next.
Imagine for a moment that route choice was taken out of the hands of individual
travelers, and instead placed in the hands of a dictator who could assign each
traveler a path that they would be required to follow. Suppose also that this
dictator was benevolent, and wanted to act in a way to minimize average travel
delay in the network. The resulting assignment rule results in the system optimal
state.
While this scenario is a bit fanciful, the system optimal state is important
for several reasons. First, it provides a theoretical lower bound on the delay
in a network, a benchmark for the best performance that could conceivably be
achieved that more realistic assignment rules can be compared to. Second, there
are ways to actually achieve the system optimal state even without taking route
choice out of the hands of travelers, if one has the ability to freely charge tolls
or provide incentives on network links. Third, there are some network problems
where a single agent can exert control over the routes chosen by travelers, as
in certain logistics problems where a dispatcher can assign specific routes to
vehicles.
At first, it may not be obvious that this assignment rule is meaningfully
different from the user equilibrium assignment rule. After all, in user equilibrium
each traveler is individually choosing routes, while in system optimum a single
agent is choosing routes for everyone, but both are doing so to minimize travel
times. To see why they might be different, consider the network used for the
Knight-Pigou-Downs paradox in Section 1.5: a two-link network (Figure 4.6)
with a constant travel time of 50 minutes on the top link, a link performance
function 45 + x↓ on the bottom link, and a total demand of 30 vehicles. The
user equilibrium solution was x↑ = 25, x↓ = 5, with equal travel times of 50
minutes on both top and bottom routes.
So, in the user equilibrium state the average travel time is 50 minutes for all
travelers. The average travel time as a function of the flow x↓ on the bottom link is
$$\frac{1}{30}\left( 50 x^\uparrow + (45 + x^\downarrow) x^\downarrow \right) = \frac{1}{30}\left( 50(30 - x^\downarrow) + (45 + x^\downarrow) x^\downarrow \right) = \frac{1}{30}\left( (x^\downarrow)^2 - 5 x^\downarrow + 1500 \right). \tag{5.45}$$
This is a quadratic function which obtains its minimum at x↓ = 2.5. At the
solution x↑ = 27.5, x↓ = 2.5, the travel times are unequal (t↑ = 50, t↓ = 47.5),
but the average travel time of 49.8 minutes is slightly less than the average
travel time of 50 minutes at the user equilibrium solution.
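The system optimal flows for this example can be verified with a few lines of Python (purely a check of the arithmetic above):

```python
# Average travel time for the two-link example: 50 on top, 45 + x on
# bottom, 30 travelers total, with x vehicles on the bottom link.
def avg_time(x):
    return (50 * (30 - x) + (45 + x) * x) / 30

# Minimize by brute force over a fine grid of feasible bottom-link flows
best_x = min((x / 1000 for x in range(30001)), key=avg_time)
print(best_x)            # 2.5, the system optimal bottom-link flow
print(avg_time(best_x))  # about 49.79 minutes
print(avg_time(5.0))     # 50.0 minutes at the user equilibrium
```

The brute-force search is crude but makes the comparison transparent: moving 2.5 vehicles off the bottom link relative to equilibrium shaves about 0.2 minutes off the average travel time.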
Therefore, travelers individually choosing routes to minimize their own travel
times may create more delay than would be obtained with central control. This
reveals an important, but subtle point about user equilibrium assignment, a
point important enough that Section 5.4 is devoted entirely to explaining this
issue. So, we will not belabor the point here, but it will be instructive to start
thinking about why the user equilibrium and system optimal states need not
coincide.
To be more precise mathematically, the system optimal assignment rule
chooses the feasible assignment minimizing the average travel time. Since the
total number of travelers is a constant, we can just as well minimize the total
system travel time, defined as
$$TSTT = \sum_{\pi \in \Pi} h^\pi c^\pi = \sum_{(i,j) \in A} x_{ij} t_{ij} \tag{5.46}$$
where it is not hard to show that calculating T ST T through path flows and link
flows produces the same value.
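The equality of the two sums in (5.46) can be illustrated numerically. In this Python sketch the network, link times, and path flows are all invented for the illustration:

```python
# A small made-up network: three links a, b, c with fixed travel times,
# and two paths defined by which links they use.
link_time = {'a': 4.0, 'b': 2.0, 'c': 3.0}
paths = {1: ['a', 'c'], 2: ['b', 'c']}     # links used by each path
h = {1: 10.0, 2: 20.0}                     # path flows

# TSTT via path flows: sum of h^pi * c^pi
path_cost = {p: sum(link_time[l] for l in links) for p, links in paths.items()}
tstt_paths = sum(h[p] * path_cost[p] for p in paths)

# TSTT via link flows: sum of x_ij * t_ij, with x from equation (4.2)
link_flow = {l: sum(h[p] for p in paths if l in paths[p]) for l in link_time}
tstt_links = sum(link_flow[l] * link_time[l] for l in link_time)

print(tstt_paths, tstt_links)  # both 170.0
```

Each vehicle's path travel time is the sum of its link travel times, so summing by path or by link merely regroups the same terms.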
$$U^\pi = -c^\pi + \varepsilon^\pi \tag{5.47}$$
with the negative sign indicating that maximizing utility for drivers means minimizing
travel time. Assuming that the επ are independent, identically distributed
Gumbel random variables, we can use the logit formula (8.37) to express
the probability that path π is chosen:
$$p^\pi = \frac{\exp(-\theta c^\pi)}{\sum_{\pi' \in \Pi^{rs}} \exp(-\theta c^{\pi'})} \tag{5.48}$$
The comments in the previous section apply to the interpretation of this formula.
As θ approaches 0, drivers’ perception errors are large relative to the path travel
times, and each path is chosen with nearly equal probability. (The errors are
so large, the choice is essentially random.) As θ grows large, perception errors
are small relative to path travel times, and the path with lowest travel time is
chosen with higher and higher probability. At any level of θ, there is a strictly
positive probability that each path will be taken.
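The effect of θ can be seen numerically. A brief Python illustration with two paths whose travel times are 50 and 47.5 minutes (values borrowed from the system optimal example; the θ values are our own choices):

```python
from math import exp

def logit_probs(costs, theta):
    # The logit formula (5.48): choice probability for each path
    weights = [exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

costs = [50.0, 47.5]
for theta in (0.01, 0.5, 5.0):
    print(theta, logit_probs(costs, theta))
# theta near 0: probabilities near 50/50 (perception errors dominate)
# large theta: nearly all flow on the 47.5-minute path
```

Note that even for large θ the longer path retains a strictly positive, if tiny, probability, as the text observes.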
For concreteness, the route choice discussion so far corresponds to the inter-
pretation where the unobserved utility represents perception errors in the utility.
The other interpretation would mean that επ represents factors other than travel
time which affect route choice (such as comfort, quality of scenery, etc.). Either
of these interpretations is mathematically consistent with the discussion here.
The fact that the denominator of (8.39) includes a summation over all paths
connecting r to s is problematic, both theoretically and practically. From a
theoretical standpoint, it implies that drivers are considering literally every
path between the origin and destination, even when this path is very circuitous
or illogical (driving all around town when going to a store a half mile away).
Presumably no driver would ever choose such a path, no matter how ill-informed
they are about travel conditions or how poorly they can estimate travel times —
yet the logit formula (8.39) suggests that some travelers will indeed choose such
paths. From a practical standpoint, evaluating the formula (8.39) first requires
enumerating all of these paths. Since the number of paths grows exponentially
with the network size, any approach which requires an explicit listing of all the
paths in a network will not scale to realistic-sized problems.
To address this fact, we can restrict the choice set somewhat. Rather than
using all paths Πrs connecting r to s, we can restrict path choices to a subset
of reasonable paths (denoted Π̂rs ). There are different ways to identify sets of
reasonable paths, but they should address both the theoretical and practical
difficulties in the previous paragraph. That is, the paths in Π̂rs correspond
to a plausible-sounding behavioral principle (why might travelers only consider
paths in this set) and lead to a formula which is efficiently computable even in
large networks. There may be some tension between these ideas, in that there
may be very efficient formulas which do not correspond to realistic choices, or
that the most realistic models of reasonable paths may not lead to an efficient
formula. Section 8.3 discusses these in more detail.
So, equation (8.39) leads us to an expression for path flows in terms of travel
times, which is the assignment rule filling the place of the question mark in
Figure 4.2. To be explicit, the formula for path flows as a function of path
travel times is
$$h^\pi = d^{rs} \frac{\exp(-\theta c^\pi)}{\sum_{\pi' \in \Pi^{rs}} \exp(-\theta c^{\pi'})} \tag{5.49}$$
where (r, s) is the OD pair corresponding to path π. The complete traffic as-
signment problem with this assignment rule can then be expressed as follows:
find a feasible path flow vector h∗ such that h∗ = H(C(h∗)). This is a standard
fixed-point problem. Clearly H and C are continuous functions if the link
performance functions are continuous, and the feasible path set is compact and
convex, so Brouwer’s theorem immediately gives existence of a solution to the
SUE problem.
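A fixed point of h = H(C(h)) can also be found by simple iteration. The sketch below is our own illustration, under assumptions not in the text: the two-link network of Figure 5.3, θ = 0.5, and a damping factor added because the undamped iteration can oscillate. It converges to the stochastic user equilibrium:

```python
from math import exp

demand, theta = 30.0, 0.5

def C(h):
    # Path travel times: top link fixed at 50, bottom link 45 + flow
    return [50.0, 45.0 + h[1]]

def H(c):
    # Logit assignment rule, equation (5.49)
    w = [exp(-theta * ci) for ci in c]
    total = sum(w)
    return [demand * wi / total for wi in w]

h = [15.0, 15.0]
lam = 0.1                  # damping factor (a design choice, not from the text)
for _ in range(500):
    target = H(C(h))
    h = [(1 - lam) * hi + lam * ti for hi, ti in zip(h, target)]

print(h)   # the SUE path flows
print(C(h))  # note the two path travel times are NOT equal at SUE
```

Unlike the deterministic user equilibrium (where the bottom link carries 5 vehicles and both times equal 50), the SUE solution leaves the two travel times unequal, with flow spread across both paths.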
Notice that this was much easier than showing existence of an equilibrium
solution to the original traffic assignment problem! For that problem, there was
no equivalent of (8.73). Travelers were all using shortest paths, but if there were
two or more shortest paths there was no rule for how those ties should be broken.
As a result, we had to reformulate the problem as a variational inequality and
introduce an auxiliary function based on movement of a point under a force.
With this assignment rule, there is no need for such machinations, and we can
write down the fixed point problem immediately.
That said, it is also possible to formulate the assignment problem with this
rule as the solution to a variational inequality, and as the solution to a convex
minimization problem. However, the objective function is rather complicated
and computationally expensive to evaluate, and thus it is less useful for identi-
fying an equilibrium point. The objective function is
$$z(x) = -\sum_{(r,s)\in Z^2} d^{rs}\, E\left[\min_{\pi\in\Pi^{rs}} U^\pi\right] + \sum_{(i,j)\in A} x_{ij} t_{ij} - \sum_{(i,j)\in A} \int_0^{x_{ij}} t_{ij}(x)\, dx \qquad (5.50)$$
where the expectation is taken with respect to the “unobserved” random variables ε, and U^π and t_ij in the first two terms are understood to be functions of x_ij. It can be shown that this function is convex, and therefore that the solution is unique. Unfortunately, this function is less useful for actually providing a solution. The first term in (5.50) involves an expectation over all paths connecting an OD pair, and as a result requires enumerating all paths, which is impractical in large networks.
122 CHAPTER 5. THE TRAFFIC ASSIGNMENT PROBLEM
[Figure 5.8: Flow on the top and bottom links as a function of θ.]
[Figure 5.9: Travel time on the top and bottom links, and the average travel time, as a function of θ.]
To demonstrate what this assignment rule looks like, refer again to the net-
work in Figure 5.3. Figures 5.8 and 5.9 show how the flows and travel times
in the network vary with θ. A few observations worth noting: first, the travel
times on the two links are not equal, due to the presence of perception er-
rors. However, as drivers’ perceptions become more accurate (θ increasing), the
travel times on the two paths become closer, and the solution asymptotically
approaches the user equilibrium solution.
ĉπ = cπ + ρπ . (5.52)
These new quantities are helpful because the principle of BRUE can now be
formulated in a more familiar way:
Proposition 5.4 may give you hope that we can write a convex optimization
problem whose solutions correspond to BRUE. Unfortunately, this is not the
case, for reasons that will be shown shortly. However, we can write a variational
inequality where both the path flows h and the auxiliary variables ρ are decision
variables. In what follows, let κrs (h) = minπ∈Πrs cπ (h) and, with slight abuse
of notation, let κπ refer to the κrs value corresponding to the path π, and κ be
the vector of κπ values.
Proposition 5.5. A vector of path flows ĥ ∈ H is a BRUE if and only if there exists a vector of nonnegative auxiliary costs ρ̂ ∈ R₊^{|Π|} such that

$$\begin{pmatrix} \hat c(\hat h, \hat\rho) \\ c(\hat h) + \hat\rho - \kappa(\hat h) - \epsilon \end{pmatrix} \cdot \begin{pmatrix} \hat h - h \\ \hat\rho - \rho \end{pmatrix} \le 0 \qquad (5.53)$$

for all h ∈ H, ρ ∈ R₊^{|Π|}.
Proof. Assume that (ĥ, ρ̂) solve the variational inequality (5.53). The first component of the VI, ĉ(ĥ, ρ̂) · (ĥ − h) ≤ 0, shows that all used paths have equal and minimal effective cost, and by Proposition 5.4 they satisfy the principle of BRUE, assuming that ρ̂ is consistent with (5.51). To show this, consider the second component of the VI: (c(ĥ) + ρ̂ − κ(ĥ) − ε) · (ρ̂ − ρ) ≤ 0, or equivalently

$$\sum_{\pi\in\Pi} \left(c^\pi(\hat h) + \hat\rho^\pi - \kappa^\pi(\hat h) - \epsilon\right)\hat\rho^\pi \le \sum_{\pi\in\Pi} \left(c^\pi(\hat h) + \hat\rho^\pi - \kappa^\pi(\hat h) - \epsilon\right)\rho^\pi \qquad (5.54)$$

for all ρ ∈ R₊^{|Π|}. If ρ̂^π > 0 for any path, inequality (5.54) can be true only if ρ̂^π = κ^π(ĥ) + ε − c^π(ĥ); or, if ρ̂^π = 0, then inequality (5.54) can be true only if ρ̂^π = 0 ≥ κ^π(ĥ) + ε − c^π(ĥ). In either case (5.51) is satisfied.

In the reverse direction, assume that ĥ is a BRUE. Choose ρ̂ ∈ R₊^{|Π|} according to (5.51). By Proposition 5.4 surely ĉ(ĥ, ρ̂) · (ĥ − h) ≤ 0 for all h ∈ H. For any path where ρ̂^π = κ^π(ĥ) + ε − c^π(ĥ), clearly (c^π(ĥ) + ρ̂^π − κ^π(ĥ) − ε)(ρ̂^π − ρ^π) = 0; and for any path where ρ̂^π = 0, c^π(ĥ) − κ^π(ĥ) − ε ≥ 0 and (c^π(ĥ) + ρ̂^π − κ^π(ĥ) − ε)(ρ̂^π − ρ^π) ≤ 0. In either case the second component of the VI is satisfied as well.
5.4.1 Externalities
Recall the three examples in Section 4.3, in which equilibrium traffic assignment
exhibited counterintuitive results. Perhaps the most striking of these was the
Braess paradox, in which adding a new roadway link actually worsened travel
times for all travelers. This is counterintuitive, because if the travelers simply
agreed to stay off of the newly-built road, their travel times would have been
no worse than before. This section describes exactly why this happens. But
before delving into the roots of the Braess paradox, let’s discuss some of the
implications:
The “invisible hand” may not work in traffic networks. In economics, Adam
Smith’s celebrated “invisible hand” suggests that each individual acting
in his or her own self-interest also tends to maximize the interests of so-
ciety as well. The Braess paradox shows that this is not necessarily true
in transportation networks. The two drivers switched paths out of self-
interest, to reduce their own travel times; but the end effect made things
worse for everyone, including the drivers who switched paths.
If you have studied economics, you might already have some idea of why the
Braess paradox occurs. The “invisible hand” requires certain things to be true
if individual self-interest is to align with societal interest, and fails otherwise.
A common case where the invisible hand fails is in the case of externalities: an
externality is a cost or benefit resulting from a decision made by one person, and
felt by another person who had no say in the first person’s decision. Examples
of externalities include industrial pollution (we cannot rely on industries to self-
regulate pollution, because all citizens near a factory suffer the effects of poor air
quality, even though they had no say in the matter), driving without a seatbelt
(if you get into an accident without a seatbelt, it is likely to be more serious,
costing others who help pay your health care costs and delaying traffic longer
while the ambulance arrives; those who help pay your costs and sit in traffic had
no control over your decision to not wear a seatbelt), and education (educated
people tend to commit less crime, which benefits all of society, even those who
do not receive an education). The first two examples involve costs, and so are
called negative externalities; the latter involves a benefit, and is called a positive
externality.
Another example of an externality is seen in the prisoner’s dilemma (the
Erica-Fred game of Section 1.3). When Erica chooses to testify against Fred,
she does so because it reduces her jail time by one year. The fact that her choice
also increases Fred’s jail time by fourteen years was irrelevant (such is the nature
of greedy decision-making). When both Erica and Fred behaved in this way, the
net effect was to dramatically increase the jail time they experienced, even
though each of them made the choices they did in an attempt to minimize their
jail time.
The relevance of this economics tangent is that congestion can be thought
of as an externality. Let’s say I’m driving during the peak period, and I can
choose an uncongested back road which is out of my way, or a congested freeway
which is a direct route (and thus faster, even with the congestion). If I choose
the freeway, I end up delaying everyone with the bad luck of being behind me.
Perhaps not by much; maybe my presence increases each person’s commute by a
few seconds. However, multiply these few seconds by a large number of drivers
sitting in traffic, and we find that my presence has cost society as a whole a
substantial amount of wasted time. This is an externality, because those other
drivers had no say over my decision to save a minute or two to take the freeway.
(This is why the user equilibrium assumption is said to be “greedy.” It assumes
a driver will happily switch routes to save a minute even if it increases the total
time people spend waiting in traffic by an hour.) These concepts are made more
precise in the next subsection.
As a society, we tend to adopt regulation or other mechanisms for minimizing
which is the same term for link (i, j) used in the system optimal function.
That is, solving user equilibrium with the modified functions t̂ij produces the
same link and path flow solution as solving system optimum with the original
performance functions tij . The interpretation of the formula (5.63) is closely
linked with the concept of externalities introduced in the previous subsection.
Equation (5.63) consists of two parts: the actual travel time t_ij(x_ij), and an additional term t′_ij(x_ij)x_ij. If an additional vehicle were to travel on link (i, j), it would experience a travel time of t_ij(x_ij). At the margin, the travel time on link (i, j) would increase by approximately t′_ij(x_ij), and this marginal increase in the travel time would be felt by the other x_ij vehicles on the link. So, the second term in (5.63) expresses the additional, external increase in travel time caused by a vehicle using link (i, j). For this reason, the modified functions t̂_ij can be said to represent the marginal cost on a link.
Since the system optimal solution is a “user equilibrium” with respect to the
modified costs t̂ij , we can formulate a “principle of system optimum” similar to
the principle of user equilibrium:
Definition 5.2. (Principle of system optimum.) Every used route connecting
an origin and destination has equal and minimal marginal cost.
So, system optimum can be seen as a special case of user equilibrium, with
modified link performance functions. The converse is true as well. Suppose we
had an algorithm which would solve the system optimal problem for any input
network and OD matrix. If we replace the link performance functions tij (xij )
with modified functions t̃ij (xij ) defined by
$$\tilde t_{ij}(x_{ij}) = \frac{\int_0^{x_{ij}} t_{ij}(x)\,dx}{x_{ij}} \qquad (5.66)$$
when xij > 0 and t̃ij (xij ) = 0 otherwise, a similar argument shows that the ob-
jective function for the system optimal problem with link performance functions
t̃ij is identical to that for the user equilibrium problem with link performance
functions tij .
As a result, despite very different physical interpretations, the user equilib-
rium and system optimal problems are essentially the same mathematically. If
you can solve one, you can solve the other by making some simple changes to the
link performance functions. So, there is no need to develop separate algorithms
for the user equilibrium and system optimal problems.
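As a numerical illustration of this equivalence, the sketch below (a hypothetical two-link network with performance functions 10 + x and 20 + x and demand 50, not an example from the text) equalizes the modified marginal-cost functions t(x) + t′(x)x and recovers the system optimum, while the unmodified functions give the user equilibrium:

```python
def marginal_cost(t, dt):
    """Modified performance function (5.63): experienced time t(x)
    plus the externality term t'(x) * x imposed on other travelers."""
    return lambda x: t(x) + dt(x) * x

def equalize_two_links(t1, t2, demand):
    """Find the flow x1 on link 1 equalizing the two (modified) link
    times, by bisection; assumes both functions are increasing."""
    lo, hi = 0.0, demand
    for _ in range(100):
        mid = (lo + hi) / 2
        if t1(mid) < t2(demand - mid):
            lo = mid     # link 1 still cheaper: shift more flow to it
        else:
            hi = mid
    return (lo + hi) / 2

t1 = lambda x: 10 + x    # hypothetical link performance functions
t2 = lambda x: 20 + x
x1_ue = equalize_two_links(t1, t2, 50)     # user equilibrium: 30.0
x1_so = equalize_two_links(marginal_cost(t1, lambda x: 1.0),
                           marginal_cost(t2, lambda x: 1.0), 50)
# x1_so = 27.5, which direct minimization of total travel time confirms
```

For this network the marginal-cost "equilibrium" puts 27.5 vehicles on the first link, strictly fewer than the 30 at user equilibrium, reflecting the externality term pushing travelers off the more congested route.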
$$\frac{TSTT(\hat x)}{TSTT(\hat y)} \le \frac{4}{3} \qquad (5.67)$$
or, equivalently,

[Figure 5.10: the modified Knight-Pigou-Downs network.]

$$(t_{ij}(\hat x_{ij}) - t_{ij}(\hat y_{ij}))\hat y_{ij} = b_{ij}(\hat x_{ij} - \hat y_{ij})\hat y_{ij}. \qquad (5.74)$$

Now, the function b_ij(x̂_ij − z)z is a concave quadratic function in z, which obtains its maximum when z = x̂_ij/2. Therefore

$$(t_{ij}(\hat x_{ij}) - t_{ij}(\hat y_{ij}))\hat y_{ij} \le \frac{b_{ij}(\hat x_{ij})^2}{4} \le \frac{1}{4}(a_{ij} + b_{ij}\hat x_{ij})\hat x_{ij} = \frac{1}{4} t_{ij}(\hat x_{ij})\hat x_{ij}\,, \qquad (5.75)$$
proving the result.
Furthermore, this bound is tight. Consider the more extreme version of the Knight-Pigou-Downs network shown in Figure 5.10, where the demand is 1 unit, the travel time on the top link is 1 minute, and the travel time on the bottom link is t↓ = x↓. The user equilibrium solution is x↑ = 0, x↓ = 1, when both links have equal travel times of 1 minute, and the total system travel time is 1. You can verify that the system optimal solution is x↑ = x↓ = 1/2, when the total system travel time is 3/4. Thus the ratio between the user equilibrium and system optimal total system travel times is 4/3.
You may be wondering if a price of anarchy can be found when we relax the assumption that the link performance functions are affine. In many cases, yes; for instance, if the link performance functions are quadratic, then the price of anarchy is $\frac{3\sqrt{3}}{3\sqrt{3}-2}$; if cubic, then the price of anarchy is $\frac{4\sqrt[3]{4}}{4\sqrt[3]{4}-3}$; and so on. In all of these cases the modified Knight-Pigou-Downs network can show that this bound is tight. On the other hand, if the link performance functions have a
vertical asymptote (e.g., tij = (uij − xij )−1 ), then the ratio between the total
system travel times at user equilibrium and system optimum may be made
arbitrarily large.
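The tightness of the 4/3 bound is easy to check numerically. This brief sketch evaluates total system travel time in the Knight-Pigou-Downs network of the example above (demand 1, top link always 1 minute, bottom link t↓ = x↓); the grid search over splits is just a convenience for illustration:

```python
def tstt(x_top, x_bottom):
    """Total system travel time: the top link always takes 1 minute,
    the bottom link takes x_bottom minutes (t = x)."""
    return x_top * 1.0 + x_bottom * x_bottom

ue = tstt(0.0, 1.0)       # user equilibrium: everyone on the bottom link

# System optimum: search over splits of the 1 unit of demand
best = min(tstt(1 - k / 10000, k / 10000) for k in range(10001))

ratio = ue / best         # price of anarchy for this network
```

The search recovers the half-and-half split with total system travel time 3/4, so the ratio is exactly 4/3, matching the bound.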
5.6 Exercises
1. [34] Consider a network with two parallel links connecting a single origin
to a single destination; the link performance function on each link is 2 + x
and the total demand is d = 10.
(a) Write the equations and inequalities defining the set of feasible path
flows H, and draw a sketch.
(b) What are the vectors −C(h) for the following path flow vectors? (1)
h = [0, 10] (2) h = [5, 5] (3) h = [10, 0] Draw these vectors at these
three points on your sketch.
(c) For each of the vectors from part (b), identify the point projH (h −
C(h)) and include these on your sketch.
2. [45] Consider a network with two parallel links connecting a single origin
to a single destination; the link performance function on the first link is
[Figure 5.11 diagram: a 3 × 3 grid of nodes 1–9, with the flow on each link labeled.]
Figure 5.11: Network for Exercises 14 and 15. Each link has link performance
function 10 + xij
the Beckmann function is still strictly convex even if t′_ij(x_ij) is not always strictly positive.
13. [51] Show that the user-equilibrium and system-optimal link flows are the
same if there is no congestion, that is, if ta (xa ) = τa for some constant τa .
14. [10] Consider the network of Figure 5.11, ignoring the labels next to the
link in the figure. The demand in this network is given by d19 = 400.
Does the following path-flow solution satisfy the principle of user equilib-
rium? 100 vehicles on path [1,2,3,6,9], 100 vehicles on path [1,2,3,6,9], 100
vehicles on path [1,4,5,6,9], and 100 vehicles on path [1,4,5,2,3,6,9]. You
can answer this without any calculation.
15. [45] In the network in Figure 5.11, the demand is given by d13 = d19 =
d17 = d91 = d93 = d97 = 100. The flow on each link is shown in the figure.
(a) Find a path flow vector h which corresponds to these link flows.
(b) Show the resulting origin-based link flows x1 and x9 from the two
origins in the network.
(c) Show that the links used by these two origins form an acyclic sub-
network by finding topological orders for each subnetwork.
(d) Determine whether these link flows satisfy the principle of user equi-
librium.
16. [35] Consider the network in Figure 5.12, along with the given (equilib-
rium) link flows. There is only one OD pair, from node 1 to node 3.
Identify three values of path flows which are consistent with these link
flows, in addition to the most likely (entropy-maximizing) path flows.
17. [37] In the network shown in Figure 5.13, 320 vehicles travel from A to
C, 640 vehicles travel from A to D, 160 vehicles travel from B to C, and
320 vehicles travel from B to D. The equilibrium link flows are shown.
[Figure 5.12: three-node network for Exercise 16, with equilibrium link flows labeled.]
[Figure 5.13: network for Exercise 17, connecting origins A and B to destinations C and D, with equilibrium link flows labeled.]
(a) Give a path flow solution which satisfies proportionality and produces
the equilibrium link flows.
(b) At the proportional solution, what fraction of flow on the top link
connecting 3 and 4 is from origin A?
(c) Is there a path flow solution which produces the equilibrium link flows,
yet has no vehicles from origin A on the top link connecting 3 and 4?
If so, list it. If not, explain why.
18. [38] Consider the network in Figure 5.14, where 2 vehicles travel from 1
to 4 (with a value of time of $20/hr), and 4 vehicles travel from 2 to 4
(with a value of time of $8/hr). The equilibrium volumes are 3 vehicles
on Link 1, and 3 vehicles on Link 2.
(a) Assuming that the vehicles in this network are discrete and cannot be
split into fractions, identify every combination of path flows which
give the equilibrium link volumes (there should be 20). Assuming
each combination is equally likely, show that the proportional division
of flows has the highest probability of being realized.
(b) What is the average value of travel time on Link 1 at the most likely
path flows? What are the upper and lower limits on the average value
of travel time on this link?
19. [65] Derive the optimality conditions for the system-optimal assignment,
and provide an interpretation of these conditions which intuitively relates
them to the concept of system optimality.
[Figure 5.14: network for Exercise 18; origins 1 and 2 merge at node 3, which is connected to node 4 by two parallel links, Link 1 and Link 2.]
20. [33] Find the system optimal assignment in the Braess network (Fig-
ure 4.8b), assuming a demand of 6 vehicles from node 1 to node 4.
21. [51] Consider a network where every link performance function is linear,
of the form tij = aij xij . Show that the user equilibrium and system
optimum solutions are the same.
22. [34] Calculate the ratio of the total system travel time between the user
equilibrium and system optimal solutions in the Braess network (Fig-
ure 4.8b).
23. [46] Find the set of boundedly rational assignments in the Braess network
(Figure 4.8b).
Chapter 6
Algorithms for Traffic Assignment
This chapter presents algorithms for solving the basic traffic assignment problem
(TAP), which was defined in Chapter 5 as the solution x̂ to the variational
inequality
t(x̂) · (x̂ − x) ≤ 0 ∀x ∈ X , (6.1)
which can also be expressed in terms of path flows ĥ as
6.1.2 Framework
It turns out that none of these solution methods get to the right answer im-
mediately, or even after a finite number of steps. There is no “step one, step
two, step three, and then we’re done” recipe for solving large-scale equilibrium
problems. Instead, an iterative approach is used: we start with some feasible assignment (link or path), and move closer and closer to the equilibrium solution by repeating a certain set of steps over and over, until we're “close enough” to quit and call it good. One iterative algorithm you probably saw in
calculus was Newton’s method for finding zeros of a function. In this method,
one repeats the same step over and over until the function is sufficiently close
to zero.
Broadly speaking, all equilibrium solution algorithms repeat the following
three steps:
1. Find the shortest (least travel time) path between each origin and each
destination.
2. Shift travelers from slower paths to faster ones.
3. Recalculate link flows and travel times after the shift, and return to step
one unless we’re close enough to equilibrium.
The shortest path computation can be done quickly and efficiently even in
large networks, as was described in Section 2.4. The third step is even more
[Figure 6.1: two parallel links with performance functions 10 + x1 (top) and 20 + x2 (bottom), and 50 vehicles traveling from node 1 to node 2.]
The numerator of the fraction is the total system travel time (TSTT). The
relative gap is always nonnegative, and it is equal to zero if and only if the flows
x satisfy the principle of user equilibrium. It is these properties which make
the relative gap a useful convergence criterion: once it is close enough to zero,
our solution is “close enough” to equilibrium. For most practical purposes, a
relative gap of 10⁻⁴–10⁻⁶ is small enough.
A second definition of the relative gap γ2 is based on the Beckmann function itself. Let f denote the Beckmann function, and f̂ its value at equilibrium (which is a global minimum). In many algorithms, given a current solution x, it is not difficult to generate upper and lower bounds on f̂ based on the current solution, respectively denoted f̄(x) and f̲(x). A trivial upper bound is its value at the current solution: f̄ = f(x), since clearly f̂ ≤ f(x) for any feasible link assignment x. Sometimes, a corresponding lower bound can be identified as well. Assuming that these bounds become tighter over time, and that in the limit both f̄ → f̂ and f̲ → f̂, the difference or gap f̄(x) − f̲(x) can be used as a convergence criterion. These values are typically normalized, leading to one definition of the relative gap:

$$\gamma_2 = \frac{\bar f - \underline f}{\underline f}\,, \qquad (6.8)$$

or a slightly modified version

$$\gamma_3 = \frac{\bar f - \max \underline f}{\max \underline f}\,, \qquad (6.9)$$

where max f̲ is the greatest lower bound found to date, in case the sequence of f̲ values is not monotone over iterations. A disadvantage of these definitions of the relative gap is that different algorithms calculate these upper and lower bounds differently. While they are suitable as termination criteria in an algorithm, it is not possible (or at least not easy) to directly compare the relative gap calculated by one algorithm to that produced by another to assess which of two solutions is closer to equilibrium.
One drawback of the relative gap (in all of its forms) is that it is unitless and
does not have an intuitive meaning. Furthermore, it can be somewhat confusing
to have several slightly different definitions of the relative gap, even though they
all have the same flavor. A more recently proposed metric is the average excess
cost, defined as
$$AEC = \frac{\sum_{(i,j)\in A} t_{ij} x_{ij} - \sum_{(r,s)\in Z^2} \kappa^{rs} d^{rs}}{\sum_{(r,s)\in Z^2} d^{rs}} = \frac{t\cdot x - \kappa\cdot d}{d\cdot 1}\,. \qquad (6.10)$$
This quantity represents the average difference between the travel time on each
traveler’s actual path, and the travel time on the shortest path available to him
or her. Unlike the relative gap, AEC has units of time, and is thus easier to
interpret.
Another convergence measure with time units is the maximum excess cost,
which relates directly to the principle of user equilibrium. The maximum excess
cost is defined as the largest amount by which a used path’s travel time exceeds
the shortest path travel time available to that traveler:
$$MEC = \max_{(r,s)\in Z^2}\; \max_{\pi\in\Pi^{rs} : h^\pi > 0} \left\{c^\pi - \kappa^{rs}\right\}\,. \qquad (6.11)$$
This is often a few orders of magnitude higher than the average excess cost.
One disadvantage of the maximum excess cost is that it is only applicable when
the path flow solution is known. This is easy in path-based or bush-based
algorithms. However, since many path-flow solutions correspond to the same
link-flow solution (cf. Section 5.2.2), M EC is not well suited for link-based
algorithms.
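Both γ1 and AEC are straightforward to compute from link flows, shortest-path costs, and demands; a minimal sketch (with γ1 = t·x/(κ·d) − 1 as used in this chapter's examples, and AEC per (6.10)):

```python
def relative_gap(t, x, kappa, d):
    """gamma_1 = (t . x) / (kappa . d) - 1, where t and x are link
    vectors, and kappa and d hold the shortest-path travel time and
    demand for each OD pair."""
    tx = sum(ti * xi for ti, xi in zip(t, x))
    kd = sum(ki * di for ki, di in zip(kappa, d))
    return tx / kd - 1

def average_excess_cost(t, x, kappa, d):
    """AEC (6.10) = (t . x - kappa . d) / (d . 1): the average time a
    traveler could save by switching to their shortest path."""
    tx = sum(ti * xi for ti, xi in zip(t, x))
    kd = sum(ki * di for ki, di in zip(kappa, d))
    return (tx - kd) / sum(d)
```

For instance, with link times (60, 20), link flows (50, 0), a shortest-path time of 20, and a demand of 50, these give γ1 = 2 and AEC = 40 minutes.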
Finally, this section concludes with two convergence criteria which are infe-
rior to those discussed thus far. The first is to simply use the Beckmann function
itself; when it is sufficiently close to the global optimal value fˆ, terminate. A
moment’s thought should convince you that this criterion is not practical: there
is no way to know the value of fˆ until the problem has already been solved.
(Upper and lower bounds are possible to calculate, though, as with γ2 .) A more
subtle version is to terminate when the Beckmann function stops decreasing,
or (in a more common form) to terminate the algorithm when the link or path
flows stabilize from one iteration to the next. The trouble with these conver-
gence criteria is that they cannot distinguish between a situation when the flows
stabilize because they are close to the equilibrium solution, and when they sta-
bilize because the algorithm “gets stuck” and cannot improve further due to a
flaw in its design or a bug in the programming. For this reason, it is always
preferable to base the termination criteria on the equilibrium principle itself.
$\frac{df}{d\lambda}(0) < 0$ or, equivalently, we can decrease the Beckmann function if we take a small enough step in the direction x∗ − x, by shifting people from longer paths onto shorter ones.
Two examples of MSA are shown below. A proof of convergence is sketched
in Exercise 12.
Small network example Here we solve the small example of Figure 6.1 by MSA, using the relative gap γ1 to measure how close we are to equilibrium.

Initialization. Find the shortest paths: with no travelers on the network, the top link has a travel time of 10, and the bottom link has a travel time of 20. Therefore the top link is the shortest path, so x∗ = (50, 0). We take this to be the initial solution x ← x∗ = (50, 0). Recalculating the travel times, we have t1 = 10 + x1 = 60 and t2 = 20 + x2 = 20 (or, in vector form, t = (60, 20)).

Iteration 1. With the new travel times, the shortest path is now the bottom link, so κ = 20 and the relative gap is

$$\gamma_1 = \frac{t\cdot x}{\kappa\cdot d} - 1 = \frac{50\times 60 + 0\times 20}{20\times 50} - 1 = 2\,.$$

This is far too big, so we continue. If everyone were to take the new shortest path, the flows would be x∗ = (0, 50). Because this is the first iteration, we shift 1/2 of the travelers onto this path, so x ← (1/2)x∗ + (1/2)x = (25, 25), and the new travel times are t = (35, 45).

Iteration 2. With the new travel times, the shortest path is now the top link, so κ = 35 and the relative gap is

$$\gamma_1 = \frac{t\cdot x}{\kappa\cdot d} - 1 = \frac{25\times 35 + 25\times 45}{35\times 50} - 1 = 0.143\,.$$

If everyone were to take the new shortest path, the flows would be x∗ = (50, 0). Because this is the second iteration, we shift 1/3 of the travelers onto this path, so x ← (1/3)x∗ + (2/3)x = (50/3, 0) + (50/3, 50/3) = (100/3, 50/3). The new travel times are thus t = (43.33, 36.67).

Iteration 3. With the new travel times, the shortest path is now the bottom link, so κ = 36.67 and the relative gap is γ1 = 0.121. A bit better, but still too big, so we carry on. Here x∗ = (0, 50), and x ← (1/4)x∗ + (3/4)x = (0, 50/4) + (25, 50/4) = (25, 25). The new travel times are t = (35, 45). Note that we have returned to the same solution found in Iteration 1. Don't despair; this just means the last shift was too big. Next time we'll shift fewer vehicles (because λ is smaller with each iteration).
[Figure 6.2 diagram: nodes 1–6 with links numbered 1–7; the OD matrix has d13 = 5000 and d24 = 10,000, with all other entries zero.]
Figure 6.2: Larger example with two OD pairs. (Link numbers shown.)
Iteration 4. With the new travel times, the shortest path is now the top link, so κ = 35 and the relative gap is γ1 = 0.143. The new target is x∗ = (50, 0), and x ← (1/5)x∗ + (4/5)x = (30, 20). The new travel times are t = (40, 40). With the new travel times, the shortest path is the top link, so κ = 40 and the relative gap is γ1 = 0, so we stop. In fact, either path could have been chosen for the shortest path. Whenever there is a tie between shortest paths, you are free to choose among them.
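The iterations above can be reproduced with a short script. This is a sketch for the special case of parallel links (so each path is a single link), not a general network implementation:

```python
def msa(perf_fns, demand, iterations):
    """Method of successive averages on parallel links: iteration i
    shifts 1/(i+1) of travelers toward the currently shortest link."""
    # Initialization: all-or-nothing assignment at free-flow times
    times = [t(0) for t in perf_fns]
    x = [0.0] * len(perf_fns)
    x[times.index(min(times))] = demand
    for i in range(1, iterations + 1):
        times = [t(xi) for t, xi in zip(perf_fns, x)]
        target = [0.0] * len(perf_fns)            # all-or-nothing target x*
        target[times.index(min(times))] = demand
        lam = 1 / (i + 1)
        x = [(1 - lam) * xi + lam * ti for xi, ti in zip(x, target)]
    return x

# The two-link network of Figure 6.1: four iterations reach (30, 20)
x = msa([lambda v: 10 + v, lambda v: 20 + v], 50, 4)
```

Running this reproduces the trajectory of the worked example, ending at the equilibrium flows (30, 20) after the fourth iteration.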
Larger network example Next we solve the network of Figure 6.2 by MSA, using the average excess cost to measure how close we are to equilibrium.

Initialization. Find the shortest paths: with no travelers on the network, paths [1, 3], [1, 5, 6, 3], [2, 5, 6, 4], and [2, 4] respectively have travel times of 10, 30, 30, and 10. Therefore [1, 3] is shortest for OD pair (1,3), and [2, 4] is shortest for OD pair (2,4), so x∗ = (5000, 0, 0, 0, 0, 0, 10000).² We take this as the initial solution: x ← x∗ = (5000, 0, 0, 0, 0, 0, 10000).
Iteration 1. With the new travel times, the shortest path for (1,3) is now
[1, 5, 6, 3], with a travel time of 30, so κ13 = 30. Likewise, the new shortest
2 For each OD pair, we add the total demand from the OD matrix onto each link in the
shortest path.
path for (2,4) is [2, 5, 6, 4], so κ24 = 30 and the average excess cost is

$$AEC = \frac{t\cdot x - \kappa\cdot d}{d\cdot 1} = \frac{5000\times 60 + 10000\times 110 - 30\times 5000 - 30\times 10000}{5000 + 10000} = 63.33\,.$$

This is far too big, and suggests that the average trip is 63 minutes slower than the shortest paths available! If everyone were to take the new shortest paths, the flows would be x∗ = (0, 5000, 10000, 5000, 15000, 10000, 0). (Be sure you understand how we calculated this.) Because this is iteration 1, we shift 1/2 of the travelers onto these paths, so x ← (1/2)x∗ + (1/2)x = (2500, 2500, 5000, 2500, 7500, 5000, 5000).
Iteration 2. With the new travel times, the shortest path for (1,3) is now [1, 3], with κ13 = 35. The new shortest path for (2,4) is [2, 4], so κ24 = 60 and the average excess cost is

$$AEC = \frac{2500\times 35 + 2500\times 35 + 7500\times 85 + 2500\times 35 + 5000\times 60 + 5000\times 60 + 5000\times 60 - 35\times 5000 - 60\times 10000}{15000} = 68.33\,.$$

This is still big (and in fact worse), but we persistently continue with the third iteration. If everyone were to take the new shortest paths, the flows would be x∗ = (5000, 0, 0, 0, 0, 0, 10000), so x ← (1/3)x∗ + (2/3)x = (3333.3, 1666.7, 3333.3, 1666.7, 5000, 3333.3, 6666.7).
Iteration 3. With the new travel times, the shortest path for (1,3) is still [1, 3], with κ13 = 43.3, and the shortest path for (2,4) is still [2, 4], with κ24 = 76.7; the average excess cost is AEC = 23.6. Continuing with the fourth iteration, as before x∗ = (5000, 0, 0, 0, 0, 0, 10000), so x ← (1/4)x∗ + (3/4)x = (3750, 1250, 2500, 1250, 3750, 2500, 7500).
Iteration 4. With the new travel times, the shortest path for (1,3) is still [1, 3], with κ13 = 47.5, and the shortest path for (2,4) is still [2, 4], with κ24 = 85; the average excess cost is AEC = 9.42. Continuing with the fifth iteration, as before x∗ = (5000, 0, 0, 0, 0, 0, 10000), so x ← (1/5)x∗ + (4/5)x = (4000, 1000, 2000, 1000, 3000, 2000, 8000).
Iteration 5. With the new travel times, the shortest path for (1,3) is still [1, 3], with κ13 = 50, and the shortest path for (2,4) is still [2, 4], with κ24 = 90; the average excess cost is AEC = 3.07. Note that the shortest paths have stayed the same over the last three iterations. This means that we really could have shifted more flow than we actually did. The Frank-Wolfe algorithm, described in the next section, fixes this problem. We have x∗ = (5000, 0, 0, 0, 0, 0, 10000), so x ← (1/6)x∗ + (5/6)x = (4166.7, 833.3, 1666.7, 833.3, 2500, 1666.7, 8333.3).
Iteration 6. With the new travel times, the shortest path for (1,3) is still [1, 3], with κ13 = 51.7, but the shortest path for (2,4) is now [2, 5, 6, 4], with κ24 = 88.3. The average excess cost is AEC = 3.80. Note that the OD pairs are no longer behaving “symmetrically”: the shortest path for (1,3) stayed the same, but the shortest path for (2,4) has changed. We have x∗ = (5000, 0, 10000, 0, 10000, 10000, 0), so x ← (1/7)x∗ + (6/7)x = (4285.7, 714.3, 2857.1, 714.3, 3571.4, 2857.1, 7142.9).
This process continues over and over until the average excess cost is suffi-
ciently small. Even with such a small network, MSA requires a very long time
to converge. An average excess cost of 1 is obtained after eleven iterations, 0.1
after sixty-three iterations, 0.01 after three hundred thirty-two, and the rate of
convergence only slows down from there.
6.2.2 Frank-Wolfe
One of the biggest drawbacks with MSA is that it has a fixed step size. Iteration
i moves exactly 1/(i + 1) of the travelers onto the new shortest paths, no matter
how close or far away we are from the equilibrium. Essentially, MSA decides its
course of action before it even gets started, then sticks stubbornly to the plan of
moving 1/(i+1) travelers each iteration. The Frank-Wolfe (FW) algorithm fixes
this problem by using an adaptive step size. At each iteration, FW calculates
exactly the right amount of flow to shift to get as close to equilibrium as possible.
We might try to do this by picking λ to minimize the relative gap or average
excess cost, but this turns out to be harder to compute. Instead, we pick λ
to solve a “restricted” VI where the feasible set is the line segment connecting
x and x∗ . It turns out that this is the same as choosing λ to minimize the
Beckmann function (6.3) along this line segment. Both approaches for deriving
the step size λ are discussed below.
Define X′ to be the set of link flows lying on the line segment between x and x∗. That is, X′ = {x′ : x′ = λx∗ + (1 − λ)x for some λ ∈ [0, 1]}. The restricted VI is: find x̂′ ∈ X′ such that t(x̂′) · (x̂′ − x′) ≤ 0 for all x′ ∈ X′.

This VI is simple enough to be solved as a single equation. The set X′ has two endpoints (x and x∗, corresponding to λ = 0 and λ = 1, respectively). For now, assume that the solution x̂′ to the VI is not at one of these endpoints.³ In this case, the force vector −t(x̂′) is perpendicular to the direction x∗ − x (Figure 6.3), so −t(x̂′) · (x∗ − x) = 0. Writing this equation out in terms of individual components, we need to solve

$$\sum_{(i,j)\in A} t_{ij}(\hat x'_{ij})\left(x^*_{ij} - x_{ij}\right) = 0 \qquad (6.12)$$

³Exercise 13 asks you to show that the solution methods provided below will still give the right answer even in these cases.
[Figure 6.3: the restricted variational inequality on the line segment X′ between x and x∗; at the solution x̂′, the force vector −t(x̂′) is perpendicular to the segment.]
or equivalently

$$\sum_{(i,j)\in A} t_{ij}(\hat x'_{ij})x^*_{ij} = \sum_{(i,j)\in A} t_{ij}(\hat x'_{ij})x_{ij}\,. \qquad (6.13)$$
The same equation can be derived based on the Beckmann function. Recall
the discussion above, where we wrote the function f (x(λ)) = f ((1 − λ)x + λx∗ )
to be the value of the Beckmann function after taking a step of size λ, and
furthermore found the derivative of f (x(λ)) to be
$$\frac{df}{d\lambda} = \sum_{(i,j)\in A} t_{ij}\left((1-\lambda)x_{ij} + \lambda x^*_{ij}\right)\left(x^*_{ij} - x_{ij}\right)\,. \qquad (6.14)$$
Setting this derivative equal to zero gives an equation which we need to solve for λ ∈ [0, 1]; in what follows, ζ(λ) is used as a shorthand for f(x(λ)).
Since the link performance functions are typically nonlinear, we cannot expect
to be able to solve this equation analytically to get an explicit formula for λ.
General techniques such as Newton’s method or an equation solver can be used;
but it’s not too difficult to use an enlightened trial-and-error method such as a
binary search or bisection. Details for all of these line search techniques were
presented in Section 3.3.1, and will not be repeated here.
To summarize: in FW, x∗ is an all-or-nothing assignment (just as with MSA). The difference is that λ is chosen to solve (6.16). So, there is a little more work at each iteration (we have to solve an equation for λ instead of using a precomputed formula as in MSA), but the reward is much faster convergence to the equilibrium solution. Two examples of FW now follow; you are asked to provide a proof of correctness in Exercises 14 and 15.
Initialization. Path [1, 3] is shortest for OD pair (1,3), and path [2, 4] is short-
est for OD pair (2,4), so
    x∗ = (5000, 0, 0, 0, 0, 0, 10000)
and
    x = x∗ = (5000, 0, 0, 0, 0, 0, 10000) .
Recalculating the travel times, we have
    t = (60, 10, 10, 10, 10, 10, 110) .
Iteration 1. With the new travel times, the shortest path for (1,3) is now
[1, 5, 6, 3], and the new shortest path for (2,4) is [2, 5, 6, 4], so AEC = 63.33.
If everyone were to take the new shortest paths, the flows would be
    x∗ = (0, 5000, 0, 10000, 15000, 5000, 10000) .
Iteration 2. With the new travel times, the shortest path for (1,3) is now
[1, 3], but the shortest path for (2,4) is still [2, 5, 6, 4]. The average excess
cost is AEC = 2.67 (roughly 30 times smaller than at the corresponding
point in the MSA algorithm!)
At this point, the average excess cost is around 1.56 min; note that FW is
able to decrease the relative gap much faster than MSA. However, we’re still
quite far from equilibrium if you compute the actual path travel times. In this
case, even though we’re allowing the step size to vary for each iteration, we are
forcing travelers from all OD pairs to shift in the same proportion. In reality,
OD pairs farther from equilibrium should see bigger flow shifts, and OD pairs
closer to equilibrium should see smaller ones. This can be remedied by more
advanced algorithms.
    f(x) = xᵀQx + bᵀx ,    (6.17)
where x = (x₁, x₂) and b are two-dimensional vectors, and Q is a 2 × 2 matrix.
Figure 6.5 shows the Q matrix and b vector corresponding to each example.
Figure 6.4: Frank-Wolfe can only move towards extreme points of the feasible
region.
How would you go about finding the minimum of such a function? Given
some initial solution (x1 , x2 ), one idea is to fix x1 as a constant, and find the
value of x2 which minimizes f . Then, we can fix x2 , and find the value of x1
which minimizes f , and so on. This process will converge to the minimum, as
shown in Figure 6.6, but in general this convergence is only asymptotic, and the
process will never actually reach the minimum. The exception is when Q is the
identity matrix, as in Figure 6.6(a). In this case, the exact optimum is reached
in only two steps.
In fact, it is possible to reach the exact optimum in only two steps even
when Q is not the identity matrix, by changing the search directions. The
process described above (alternately fixing x1 , and then x2 ) can be thought of
as alternating between searching in the direction 0 1 , then searching in the
direction 1 0 . As shown in Figure 6.7, by making a different choice for the
two search directions, the minimum can always be obtained in exactly two steps.
This happens if the two directions d1 and d2 are conjugate, that is, if
    d1ᵀ Q d2 = 0 .    (6.18)
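To make (6.18) concrete, the following self-contained sketch minimizes the last quadratic from Figure 6.5 with two exact line searches along conjugate directions. The direction d2 is one hand-picked choice satisfying (6.18), not the only one; everything beyond the Q and b values is our own illustration.

```python
# f(x) = x'Qx + b'x with Q, b from the last example of Figure 6.5
Q = [[1.0, 0.5], [0.5, 1.0]]
b = [1.0, -1.0]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

d1 = [1.0, 0.0]
d2 = [1.0, -2.0]
assert abs(dot(d1, matvec(Q, d2))) < 1e-12   # conjugacy check, eq. (6.18)

x = [0.0, 0.0]
for d in (d1, d2):
    grad = [2*g + bi for g, bi in zip(matvec(Q, x), b)]  # grad f = 2Qx + b
    a = -dot(grad, d) / (2 * dot(d, matvec(Q, d)))       # exact line search
    x = [xi + a*di for xi, di in zip(x, d)]
# x is now the unconstrained minimizer of f, reached in exactly two steps
```

Repeating the loop with the orthogonal directions (1, 0) and (0, 1) instead would only approach the minimizer asymptotically, as in Figure 6.6.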
Figure 6.5: Four examples of convex quadratic functions of the form f(x) =
xᵀQx + bᵀx: (a) Q = [1, 0; 0, 1], b = (0, 0); (b) Q = [1, 1/2; 1/2, 1], b = (0, 0);
(c) Q = [1, 1; 1, 1], b = (0, 0); (d) Q = [1, 1/2; 1/2, 1], b = (1, −1).
Figure 6.6: Searching in orthogonal directions finds the optimum in two steps
only if Q = I. (a) Q = [1, 0; 0, 1], b = (0, 0); (b) Q = [1, 1/2; 1/2, 1], b = (1, −1).
the previous search direction. Before the derivation, there are a few differences
between traffic assignment and the unconstrained quadratic program used to
introduce conjugacy which should be addressed.
So, how can we make sure that the new target vector x∗ is chosen so that the
search direction is conjugate to the previous direction, and that x∗ is feasible?
Since X is a convex set, feasibility can be assured by choosing x∗ to be a convex
combination of the old target vector (x∗old ) and the all-or-nothing assignment
xAON :
x∗ = αx∗old + (1 − α)xAON , (6.19)
for some α ∈ [0, 1]. Choosing α = 0 would make the all-or-nothing assignment
the target (as in plain FW), while choosing α = 1 would make the target in this
iteration the same as in the last. In fact, α should be chosen so that the new
direction is conjugate to the last, that is,
Now, for TAP, the Hessian takes a specific form. Since the Beckmann func-
tion is
    f(x) = ∑_{(i,j)∈A} ∫₀^{x_ij} t_ij(x) dx ,    (6.26)
its gradient is simply the vector of travel times at the current flows,
    ∇f(x) = t(x) ,    (6.27)
and its Hessian is the diagonal matrix of travel time derivatives at the current
flows,
    Hf(x) = diag{t′_ij(x_ij)} .    (6.28)
So, the matrix products in equation (6.25) can be written out explicitly, giving
    α = [ ∑_{(i,j)∈A} ((x∗_old)_ij − x_ij)(x^AON_ij − x_ij) t′_ij ] / [ ∑_{(i,j)∈A} ((x∗_old)_ij − x_ij)(x^AON_ij − (x∗_old)_ij) t′_ij ] ,    (6.29)
where the derivatives t′_ij are evaluated at the current link flows x_ij .
Almost there! A careful reader may have some doubts about the formula
in (6.29). First, it is possible that the denominator can be zero, and division
by zero is undefined. Second, to ensure feasibility of x∗ , we need α ∈ [0, 1],
even though it is not obvious that this formula always lies in this range (and in
fact, it need not do so). Furthermore, α = 1 is undesirable, because then the
current target point is the same as the target point in the last iteration. If the
previous line search was exact, there will be no further improvement and the
algorithm will be stuck in an infinite loop. Finally, what should you do for the
first iteration, when there is no “old” target x∗ ?
To address the first issue, the easiest approach is to simply set α = 0 if
the denominator of (6.29) is zero (i.e., if the formula is undefined, simply take
a plain FW step by using the all-or-nothing solution as the target). As for
the second and third issues, if the denominator is nonzero we can project the
right-hand side of (6.29) onto the interval [0, 1 − ε], where ε > 0 is some small
tolerance value. That is, if equation (6.29) would give a value greater than 1 − ε,
set α = 1 − ε; if it would give a negative value, use zero. Finally, for the first
iteration, simply use the all-or-nothing solution as the target: x∗ = x^AON .
So, to summarize the discussion, choose α in the following way. If it is the
first iteration or the denominator of (6.29) is zero, set α = 0. Otherwise set
    α = proj_{[0,1−ε]} ( [ ∑_{(i,j)∈A} t′_ij ((x∗_old)_ij − x_ij)(x^AON_ij − x_ij) ] / [ ∑_{(i,j)∈A} t′_ij ((x∗_old)_ij − x_ij)(x^AON_ij − (x∗_old)_ij) ] )    (6.30)
6.2. LINK-BASED ALGORITHMS 161
Then the target solution x∗ is calculated using (6.19). The value of the step
size λ is chosen in the same way as in Frank-Wolfe, by performing a line search
(e.g., using bisection or Newton’s method) to solve (6.16).
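In code, the safeguarded formula might look like the following sketch; the function name and calling conventions are ours, with t_prime holding the link travel time derivatives evaluated at the current flows.

```python
def cfw_alpha(x, x_old_target, x_aon, t_prime, eps=0.01):
    """Conjugate Frank-Wolfe weight alpha with the safeguards above:
    fall back to alpha = 0 if the denominator vanishes, and project
    the raw ratio onto the interval [0, 1 - eps]."""
    num = sum(tp * (xo - xi) * (xa - xi)
              for tp, xi, xo, xa in zip(t_prime, x, x_old_target, x_aon))
    den = sum(tp * (xo - xi) * (xa - xo)
              for tp, xi, xo, xa in zip(t_prime, x, x_old_target, x_aon))
    if den == 0:
        return 0.0                     # take a plain FW step
    return min(max(num / den, 0.0), 1.0 - eps)
```

With the flows from the second iteration of the worked example below, the raw ratio is about −2.37, so the projected value is α = 0.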
Large network example. Here we apply CFW to the network shown in Fig-
ure 6.2, using the same notation as in the FW and MSA examples. The tolerance
ε is chosen to be a small positive constant, 0.01 in the following example.
Initialization. Generate the initial solution by solving an all-or-nothing assign-
ment. Path [1, 3] is shortest for OD pair (1,3), and path [2, 4] is shortest
for OD pair (2,4), so
    x^AON = (5000, 0, 0, 0, 0, 0, 10000)
and
    x = x^AON = (5000, 0, 0, 0, 0, 0, 10000) .
Recalculating the travel times, we have
    t = (60, 10, 10, 10, 10, 10, 110) .
Iteration 1. Proceeding in the same way as in the large network example for
Frank-Wolfe, the all-or-nothing assignment in this case is
    x^AON = (0, 5000, 0, 10000, 15000, 5000, 10000) ,
which is used as the target x∗ since this is the first iteration of CFW.
Repeating the same line search process, the optimal value of λ is 19/120,
producing the new solution
    x = (4208, 792, 8417, 1583, 2375, 792, 1583) .
Iteration 2. Here FW and CFW take different paths. Rather than using
x^AON as the target vector, CFW generates a conjugate search direction.
First calculate the right-hand side of (6.29). Since for this problem t′_ij =
1/100 for all links (regardless of the flow), the formula is especially easy to
compute using x and x∗ from the previous iteration and x^AON as just now
computed. The denominator of (6.29) is nonzero, and the formula gives
−2.37; projecting onto the set [0, 1 − ε] thus gives α = 0. So, calculating
the target x∗ from equation (6.19) with α = 0 we have
    x∗ = (5000, 0, 0, 10000, 10000, 0, 10000)
and, using a line search between x and x∗, we find that λ = 0.0321 is best,
resulting in
    x = (4233, 766, 8146, 1853, 2620, 766, 1854) .
Iteration 3. With the new flows x, the travel times are now
    t = (52.3, 17.7, 91.5, 28.5, 36.2, 17.7, 28.5)
and the all-or-nothing assignment is
    x^AON = (5000, 0, 10000, 0, 0, 0, 0) .
Calculating the right-hand side of (6.29), we see that the denominator
is nonzero, and the formula gives 0.198, which can be used as is since it
lies in [0, 1 − ε]. So, calculating the target x∗ from equation (6.19) with
α = 0.198 we have
    x∗ = (5000, 0, 8024, 1976, 1976, 0, 1976) .
Note that unlike any of the other link-based algorithms in this section,
the target flows are not an all-or-nothing assignment (i.e., not an extreme
point of X). Performing a line search between x and x∗, we find that
λ = 0.652 is best, resulting in
    x = (4733, 267, 8067, 1933, 2200, 267, 1933) ,
which solves the equilibrium problem exactly, so we terminate.
In this example, CFW found the exact equilibrium solution in three itera-
tions. This type of performance is not typical (although CFW is generally faster
than regular FW or MSA). In this example, the link performance functions are
linear, so the Beckmann function is quadratic. Iterative line searches along con-
jugate directions reach the exact solution of a quadratic program in a finite
number of iterations, as suggested by the above discussion. This performance
cannot be assured with other types of link performance functions.
An even faster algorithm, known as biconjugate Frank-Wolfe, chooses its tar-
get so that the search direction is conjugate to both of the previous two search
directions. This method converges faster than CFW, but is not explained here
because the details are a little more complicated, even though the idea is the
same.
[Figure 6.8: a four-node network in which both links (2,3) and (3,2) carry flow: 7 vehicles on path [1, 2, 3, 4] and 3 vehicles on path [1, 3, 2, 4].]
It is unable to erase cyclic flows. Consider the network in Figure 6.8, with
the flows as shown. Such a flow might easily arise if [1, 2, 3, 4] is the
shortest path during the first iteration of Frank-Wolfe, [1, 3, 2, 4] is the
shortest path during the second, and λ = 0.3. With only one OD pair, it
is impossible for both links (2, 3) and (3, 2) to be used at equilibrium, as
discussed in Section 5.2.3. However, the Frank-Wolfe method will always
leave some flow on both links unless λ = 1 at some iteration (which is
exceedingly rare, especially in later iterations when λ is typically very
close to zero).
These difficulties can all be avoided by tracking the path flows h, rather than
the link flows x. The path flows contain much more information, tracking flow by
origin and destination, as opposed to link flows, which are aggregated together.
On the other hand, the number of elements in the path flow vector can be many
orders of magnitude larger than the number of link flows, easily numbering in
the millions for realistic networks. Algorithms which require us to first list all
paths in the network are not tractable. Instead, path-based algorithms only track the
paths which an OD pair actually uses, that is, the set Π̂^rs = {π ∈ Π^rs : h^π > 0}.⁴
This is often referred to as the set of working paths for each OD pair.⁵ A rough
description of path-based algorithms is:
1. Initialize the working path sets Π̂^rs .
2. For each OD pair (r, s):
(a) Find the shortest path π̂^rs . Add it to Π̂^rs if it is not already used.
(b) Shift travelers among paths to get closer to equilibrium.
(c) Update travel times.
3. Drop paths from Π̂^rs if they are no longer used; return to step 2 unless a
convergence criterion is satisfied.
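The scheme above can be written as a generic skeleton, with the pieces that vary between algorithms passed in as functions; all of the names below are hypothetical placeholders, not from the text.

```python
def path_based_assignment(od_pairs, shortest_path, shift_flows,
                          update_times, drop_unused, converged):
    """Generic path-based equilibrium loop (sketch). The flow shift in
    step (b) is where specific algorithms differ."""
    working = {od: set() for od in od_pairs}   # working path sets
    while True:
        for od in od_pairs:
            pi = shortest_path(od)             # step (a)
            working[od].add(pi)
            shift_flows(od, working[od])       # step (b)
            update_times()                     # step (c)
        drop_unused(working)                   # step 3
        if converged():
            return working
```

Gradient projection and projected gradient, described next, plug different flow-shift rules into this loop while leaving the rest unchanged.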
On the surface, this scheme looks quite similar to the link-based methods
presented earlier. Why might it converge faster? Recall the three factors de-
scribed above. First, each OD pair is now being treated independently. With
MSA and Frank-Wolfe, the same step-size λ was applied across all links (and
therefore, across all OD pairs). If one OD pair is very close to equilibrium, while
another is far away, we should probably make a finer adjustment to the first OD
pair, and a larger adjustment to the second one. Link-based methods allow no
such finesse, and instead bluntly apply the same λ to all origins. In practice,
this means that λ becomes very small after a few iterations: we can’t move
very far without disturbing an OD pair which is already close to equilibrium.
As a result of this, it takes a really long time to solve OD pairs which are far
from equilibrium. Equivalently, the set of search directions is broader in that
we can vary the step size by OD pairs; and lastly, it is quite possible to erase
cyclic flows in a path-based context, because the extra precision allows us to
take larger steps.
Step 2(b) is where different path-based algorithms differ. This section describes
two path-based algorithms: gradient projection and manifold suboptimization.
The latter is sometimes called projected gradient. Both algorithms use
the same two ingredients: exploiting the fact that the gradient is the direction
of steepest ascent (and therefore, in a minimization problem, we should move in
the opposite direction to descend as quickly as possible); and handling the
constraints in the problem by using a projection operation to stay in the
feasible set. (Recall from Chapter 3 that projection involves finding the point
within a set which lies closest to another point.) Where they differ is in the
order in which these steps are applied.
In gradient projection, we first take a step in the opposite direction of the
gradient, which will typically result in an infeasible point. The projection is
done after the flow shift, not before: we do the projection after we make use of
⁴Those familiar with other types of optimization problems might recognize this as a column
generation scheme.
⁵You may recognize similarities with the “trial-and-error” method from Chapter 4. Path-
and its partial derivative with respect to any path flow variable is
    ∂f/∂h^π = ∑_{(i,j)∈A} δ^π_ij t_ij( ∑_{π′∈Π} δ^{π′}_ij h^{π′} ) = ∑_{(i,j)∈A} δ^π_ij t_ij(x_ij) = c^π ,    (6.32)
so the direction of steepest descent is
    −∇_h f = −vect(c^π) .    (6.33)
Substituting this into the path-based Beckmann function (6.31), the partial
derivative with respect to one of the nonbasic path flows is now
    ∂f̂/∂h^π = c^π − c^{π̂_rs}    ∀(r, s) ∈ Z², π ∈ Π̂^rs − {π̂_rs} ,    (6.35)
denoting the Beckmann function as fˆ instead of f because the function has been
modified by using (6.34) to eliminate some of the path flow variables.
So, the change in path flows will be the negative of the gradient. Since the
gradient is given by (6.35) and cπ ≥ cπ̂rs (because the basic path is by definition
the shortest one), moving in this direction means that every nonbasic path flow
will decrease.
Since the transformation (6.34) eliminated the demand satisfaction con-
straint, the only remaining constraint is that the nonbasic path flow variables
be nonnegative. Projecting onto this set is trivial: if any of the nonbasic path
flow variables is negative after taking a step, simply set it to zero. At this point,
the basic path flow can be calculated through equation (6.34).
Furthermore, the larger the difference in travel times, the larger the cor-
responding element of the gradient will be. This suggests that more flow be
shifted away from paths with higher travel times. We can go a step further, and
estimate directly how much flow should be shifted from a nonbasic path to a
basic path to equalize the travel times, using Newton’s method.
Let ∆h denote the amount of flow we shift away from the nonbasic path π and
onto the basic path π̂, and let c^π(∆h) and c^π̂(∆h) denote the travel times on
paths π and π̂ after we make such a shift. We want to choose ∆h so these costs
are equal, that is, so that
    g(∆h) = c^π(∆h) − c^π̂(∆h) = 0 ;
that is, g is simply the difference in travel times between the two paths. To
apply Newton’s method, we need to find the derivative of g with respect to ∆h.
Using the relationships between link travel times and path travel times, we
have
    g(∆h) = ∑_{(i,j)∈A} (δ^π_ij − δ^π̂_ij) t_ij(x_ij(∆h)) ,
so
    g′(∆h) = ∑_{(i,j)∈A} (δ^π_ij − δ^π̂_ij) (dt_ij/dx_ij)(dx_ij/d∆h)
by the chain rule. For each arc, there are four possible cases:
Case I: δ^π_ij = δ^π̂_ij = 0; that is, neither path π nor path π̂ uses link (i, j). Then
(δ^π_ij − δ^π̂_ij)(dt_ij/dx_ij)(dx_ij/d∆h) = 0 and this link does not contribute to the derivative.
Case II: δ^π_ij = δ^π̂_ij = 1; that is, both paths π and π̂ use link (i, j). Then
(δ^π_ij − δ^π̂_ij)(dt_ij/dx_ij)(dx_ij/d∆h) = 0 and this link again does not contribute to the
derivative. (Another way to think of it: since both paths use this arc, its
total flow will not change if we shift travelers from one path to the other.)
Case III: δ^π_ij = 1 and δ^π̂_ij = 0; that is, path π uses link (i, j), but path π̂ does
not. Then (δ^π_ij − δ^π̂_ij)(dt_ij/dx_ij)(dx_ij/d∆h) = −dt_ij/dx_ij , since dx_ij/d∆h = −1.
Case IV: δ^π_ij = 0 and δ^π̂_ij = 1; that is, path π̂ uses link (i, j), but path π does
not. Then (δ^π_ij − δ^π̂_ij)(dt_ij/dx_ij)(dx_ij/d∆h) = −dt_ij/dx_ij , since dx_ij/d∆h = 1.
6.3. PATH-BASED ALGORITHMS 167
Putting it all together, the only terms which contribute to the derivative are
the links which are used by either π or π̂, but not both. Let A1 , A2 , A3 , and
A4 denote the sets of links falling into the four cases listed above. Then
    g′(∆h) = − ∑_{(i,j)∈A3∪A4} dt_ij/dx_ij ,
which is simply the negative sum of the derivatives of these links, evaluated at
the current link flows.
Then, starting with an initial guess of ∆h = 0, one step of Newton's method
gives an improved guess of
    ∆h = 0 − g(0)/g′(0) = (c^π − c^π̂) / ∑_{(i,j)∈A3∪A4} (dt_ij/dx_ij) .
That is, the recommended Newton shift is given by the difference in path costs,
divided by the sum of the derivatives of the link performance functions for links
used by one path or the other, but not both. Therefore, the updated nonbasic
and basic path flows are given by
    h^π̂ ← h^π̂ + (c^π − c^π̂) / ∑_{(i,j)∈A3∪A4} (dt_ij/dx_ij)
and
    h^π ← h^π − (c^π − c^π̂) / ∑_{(i,j)∈A3∪A4} (dt_ij/dx_ij) .
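A sketch of this Newton shift in code (the names are ours; dt_sum stands for the sum of link performance function derivatives over links used by exactly one of the two paths):

```python
def newton_shift(c_nonbasic, c_basic, dt_sum, h_nonbasic):
    """Newton flow shift from a nonbasic path toward the basic (shortest)
    path, capped so the nonbasic path flow stays nonnegative."""
    dh = (c_nonbasic - c_basic) / dt_sum
    return min(dh, h_nonbasic)   # projection onto the nonnegative set
```

For example, a 30-minute cost difference with derivative sum 0.04 gives a shift of 750 vehicles, as in the gradient projection example that follows.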
Iteration 1, Step 2a, OD pair (1,3). Find the shortest path for (1,3): with
no travelers on the network, the top link has a travel time of 10. This is
not in the set of used paths, so include it: Π̂13 = {[1, 3]}.
Iteration 1, Step 2b, OD pair (1,3). Since there is only one used path, we
simply have h^{13}_{[1,3]} = 5000.
Iteration 1, Step 2a, OD pair (2,4). Find the shortest path for (2,4): with
no travelers on the network, link 7 has a travel time of 10. This is not in
the set of used paths, so include it: Π̂24 = {[2, 4]}.
Iteration 1, Step 2b. Since there is only one used path, h^{24}_{[2,4]} = 10000.
Iteration 1, Step 3. All paths are used, so return to step 2. The relative gap
is γ1 = 2.11.
Iteration 2, Step 2a, OD pair (1,3). The shortest path is now [1,5,6,3]. This
is not part of the set of used paths, so we add it: Π̂13 = {[1, 3], [1, 5, 6, 3]}.
Iteration 2, Step 2b, OD pair (1,3). The difference in travel times between
the paths is 30 minutes; and the sum of the derivatives of links 1, 2, 3,
and 4 is 0.04. So we shift 30/0.04 = 750 vehicles from [1,3] to [1,5,6,3],
giving h^{13}_{[1,3]} = 4250 and h^{13}_{[1,5,6,3]} = 750.
Note that the two paths have exactly the same cost after only one step!
This is because Newton’s method is exact for linear functions.
Iteration 2, Step 2a, OD pair (2,4). The shortest path is now [2,5,6,4]. This
is not part of the set of used paths, so we add it: Π̂24 = {[2, 4], [2, 5, 6, 4]}.
Iteration 2, Step 2b, OD pair (2,4). The difference in travel times between
the paths is 72.5 minutes; and the sum of the derivatives of links 5, 3, 6,
and 7 is 0.04. So we shift 72.5/0.04 = 1812.5 vehicles from [2,4] to [2,5,6,4],
giving h^{24}_{[2,4]} = 8187.5 and h^{24}_{[2,5,6,4]} = 1812.5.
Note that the two paths again have exactly the same cost. However, the
equilibrium for the first OD pair has been disturbed.
Iteration 2, Step 3. All paths are used, so return to step 2. The relative gap
is γ1 = 0.0115.
Iteration 3, Step 2a, OD pair (1,3). The shortest path is now [1,3], which
is already in the set of used paths, so nothing to do here.
Iteration 3, Step 2b, OD pair (1,3). The difference in travel times between
the paths is 18.125 minutes; and the sum of the derivatives of links 1, 2,
3, and 4 is 0.04. So we shift 18.125/0.04 ≈ 453 vehicles from [1,5,6,3] to
[1,3], giving h^{13}_{[1,3]} = 4703 and h^{13}_{[1,5,6,3]} = 297.
Iteration 3, Step 2a, OD pair (2,4). The shortest path is again [2,5,6,4].
This is already in the set of used paths, so nothing to do here.
Iteration 3, Step 2b, OD pair (2,4). The difference in travel times between
the paths is 4.6 minutes; and the sum of the derivatives of links 5, 3, 6,
and 7 is 0.04. So we shift 4.6/0.04 ≈ 114 vehicles from [2,4] to [2,5,6,4],
giving h^{24}_{[2,4]} = 8073 and h^{24}_{[2,5,6,4]} = 1927.
Iteration 3, Step 3. All paths are used, so return to step 2. The relative gap
is γ1 = 0.00028.
Note that after three iterations of gradient projection, the relative gap is two or-
ders of magnitude smaller than that from the Frank-Wolfe algorithm. Although
not demonstrated here for reasons of space, the performance of gradient projec-
tion relative to Frank-Wolfe actually improves from here on out. Frank-Wolfe
usually does most of its work in the first few iterations, and then converges very
slowly after that.6 On the other hand, gradient projection maintains a steady
rate of progress throughout, with a nearly constant proportionate decrease in
gap from one iteration to the next.
    ∇_h f = vect(c^π) .    (6.37)
Assuming that the current path flow solution h is feasible, we must move in a
direction that does not violate any of the constraints, only using the working
paths Π̂^rs . For instance, if the demand constraint ∑_{π∈Π̂^rs} h^π = d^rs is satisfied
for all OD pairs (r, s), it must remain so after taking a step in the direction ∆h:
    ∑_{π∈Π̂^rs} (h^π + ∆h^π) = d^rs .    (6.38)
so s′ ∈ ∆H.
Regarding the second part, let ∆h be any vector in ∆H. We now show that
(s − s′) · ∆h = 0. We have
    (s − s′) · ∆h = ∑_{π∈Π̂^rs} [c^π − (c^π − c̄^rs)] ∆h^π    (6.42)
                  = c̄^rs ∑_{π∈Π̂^rs} ∆h^π    (6.43)
                  = 0    (6.44)
since ∆h ∈ ∆H.
6.4. BUSH-BASED ALGORITHMS 171
So, given current path flows h^rs for OD pair (r, s), we use ∆h^rs = −vect(c^π −
c̄^rs) = vect(c̄^rs − c^π) as the search direction. To update the path flows h^rs ←
h^rs + µ∆h^rs , we need an expression for the step size. For any path π ∈ Π̂^rs for
which ∆h^π < 0, the new path flows would be infeasible if µ > −h^π/∆h^π .
Therefore, the largest possible step size is
    µ̄ = min_{π∈Π̂^rs : ∆h^π<0} { −h^π/∆h^π } .    (6.45)
The actual step size µ ∈ [0, µ̄] should be chosen to minimize the Beckmann
function. This can be done either through bisection or one or more iterations
of Newton’s method.
Connected; that is, using only links in the bush, it is possible to reach
every node which was reachable in the original network.
Acyclic; that is, no path using only bush links can pass the same node more
than once. This is not restrictive, because travelers trying to minimize
their own travel time would never cycle back to the same node; and it
greatly speeds up the algorithm, because acyclic networks are much simpler and
admit much faster methods for finding shortest paths and other quantities
of interest.
Figure 6.9: Examples of bushes (panels (a) and (b)) and non-bushes (panels (c)
and (d)).
has exactly one path from the origin to every node. The thickly-shaded links
in panel (c) do not form a bush, because it is not connected; there is no way to
reach the nodes at the bottom of the network only using bush links. Likewise,
the thick links in panel (d) do not form a bush either, because a cycle exists and
it would be possible to revisit some nodes multiple times using the bush links
(find them!).
Notice that because a bush is acyclic, it can never include both directions
of a two-way link. This implies that at equilibrium, on every street travelers
from the same origin must all be traveling in the same direction. (This follows
from Proposition 5.3 in Section 5.2.3). Interestingly, link-based and path-based
algorithms cannot enforce this requirement easily; and this is yet another reason
that bush-based algorithms are a good option for solving equilibrium. There is
one bush for every origin; this means that if there are z origins and m links, we
need to keep track of at most zm values. By contrast, a link-based approach
(such as Frank-Wolfe) requires storage of only m values to represent a solution,
while a path-based approach could conceivably require z 2 2m values.7
The first well-known origin-based algorithm was developed by Hillel Bar-
Gera in his dissertation (circa 2000), and was simply called origin-based assign-
ment (OBA). Bob Dial developed another, simpler method that was published
⁷These values are very approximate, but give you an idea of the scale.
in 2006 as “Algorithm B,” which also seems to work faster than OBA. Yu
(Marco) Nie compared both algorithms and developed additional variations by
combining features of both, and contributing some ideas of his own. Most re-
cently, Guido Gentile has developed the LUCE algorithm, and Hillel Bar-Gera
has provided a new algorithm called TAPAS which simultaneously solves for
equilibrium and proportional link flows (approximating entropy maximization).
This section focuses on Algorithm B, since it is simpler to explain on its own.
All bush-based algorithms operate according to the same general scheme:
1. Start with initial bushes for each origin (the shortest path tree with free-
flow times is often used as a starting point).
2. Shift flows within each bush to bring each origin closer to equilibrium.
3. Improve the bushes by adding links which can reduce travel times, and by
removing unused links. Return to step 2.
[Figure 6.10: (a) the example network with its bush marked in thick links; (b) the x and α labels; (c) the bush travel times t; (d) the bush travel time derivatives t′.]
irrelevant. Table 6.1 shows all of the labels defined in this section, and which
labels are used in which algorithms, and in the bush-updating steps described
in Section 6.4.3.
We start with two different ways to represent the travel patterns on each
bush, starting with x labels. The label x^B_ij associated with each link indicates
the number of travelers starting at node r (the root of bush B) and traveling
on bush link (i, j). The superscript B indicates that we are only referring to
the flow on this link associated with the bush B; the total link flow x_ij is
the sum of x^B_ij across all bushes B. However, using these superscripts tends to
clutter formulas, and often it is clear that we are only referring to flows
within the context of a specific bush. Within this section, we are only concerned
with a single bush, so the superscript will be omitted for brevity, and x_ij will
simply denote the flow on a bush link.
The network in Figure 6.10(a) will be used to demonstrate the labels
introduced in this section. The thick links comprise the bush, and the link
performance functions for all links in the network are shown. The origin in this
case is node 7, and the demand is 10 vehicles from node 7 to node 3. You can
verify that a topological ordering of the nodes on this bush is 7, 8, 9, 6, 4, 1, 5,
2, 3.
The corresponding label xi associated with each node indicates the total
number of vehicles using node i on the bush B, including flow which is termi-
nating at node i. The flow conservation equations relating xij and xi labels are
as follows:
    x_i = ∑_{(h,i)∈B} x_hi = d_ri + ∑_{(i,j)∈B} x_ij    ∀i ∈ N ,
where the first expression defines the node flow in terms of incoming link flows,
and the second in terms of outgoing link flows. The two definitions are equiva-
lent.
It is sometimes convenient to refer to the fraction of the flow at a node
coming from a particular link. For any node i with positive node flow (x_i > 0),
define α_hi to be the proportion of the node flow contributed by the incoming
link (h, i), that is,
    α_hi = x_hi / x_i .    (6.46)
Clearly each α_hi is nonnegative, and, by flow conservation, the sum of the α_hi
values entering each node i is one. The definition of α_hi is slightly trickier
when x_i = 0, because the formula (6.46) then involves a division by zero. To
accommodate this case, we adopt this rule: when x_i = 0, the proportions α_hi
may take any values whatsoever, as long as they are nonnegative and sum to
one. It is important to be able to define α_hi values even in this case, because
the flow-shifting algorithms may cause x_i to become positive, and in this case
we need to know how to distribute this new flow among the incoming links.
If we are given α labels for each bush link, it is possible to calculate the
resulting node and link flows x_i and x_ij , using this recursion:
    x_i = d_ri + ∑_{(i,j)∈B} x_ij    ∀i ∈ N    (6.47)
    x_hi = α_hi x_i    ∀(h, i) ∈ B    (6.48)
The sum in (6.47) is empty for the node with the highest topological order.
So we can start there, and then proceed with the calculations in backward
topological order.
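The recursion can be sketched directly in code. The bush, demands, and α values below are made up for illustration; link flows are recovered from node flows by the splitting rule x_hi = α_hi x_i described above.

```python
# demand[i] is the flow d_ri terminating at node i; alpha[(h, i)] is the
# share of node i's flow arriving via bush link (h, i).
demand = {1: 0.0, 2: 0.0, 3: 4.0, 4: 6.0}
alpha = {(1, 2): 1.0, (1, 3): 0.5, (2, 3): 0.5, (2, 4): 0.3, (3, 4): 0.7}
order = [1, 2, 3, 4]                  # topological order, root first

x_node, x_link = {}, {}
for i in reversed(order):             # backward topological order
    # node flow: terminating demand plus flow on outgoing bush links
    x_node[i] = demand[i] + sum(f for (h, j), f in x_link.items() if h == i)
    # split the node flow among incoming links using the alpha shares
    for (h, j) in alpha:
        if j == i:
            x_link[(h, j)] = alpha[(h, j)] * x_node[i]
```

Processing nodes backward guarantees each node's outgoing link flows are already known; at the root, the node flow equals the total demand (10 here).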
Figure 6.10(b) shows the x and α labels for the example bush. You should
verify that the formulas (6.47) and (6.48) are consistent with these labels. The
link travel times and link travel time derivatives are shown in panels (c) and (d)
of this figure.
As has been used earlier in the text for shortest path algorithms, L is used
to denote travel times on shortest paths. The superscript B can be appended
to these labels when it is necessary to indicate that these labels are for shortest
paths specifically on the bush B, although it will usually be clear from context
which bush is meant. This section will omit such a superscript to avoid cluttering
the formulas, and it should be understood that L means the shortest path only
on the bush under consideration. The same convention will apply to the other
labels in this section. There are L labels associated with each node i, and with
each link (i, j) ∈ B: Li denotes the distance on the shortest path from r to i
using only bush links, and Lij is the travel time which would result if you follow
the shortest path to node i, then take link (i, j). These labels are calculated
    L_r = 0    (6.49)
    L_ij = L_i + t_ij    ∀(i, j) ∈ B    (6.50)
    L_i = min_{(h,i)∈B} {L_hi}    ∀i ≠ r    (6.51)
The U labels are used to denote travel times on longest paths within the bush,
and are calculated in a similar way. Like the L labels, U labels are calculated
for both nodes and bush links, using the formulas:
    U_r = 0    (6.52)
    U_ij = U_i + t_ij    ∀(i, j) ∈ B    (6.53)
    U_i = max_{(h,i)∈B} {U_hi}    ∀i ≠ r    (6.54)
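Computed in topological order, the L and U labels each take a single pass over the bush. The small bush below is made-up data for illustration, with each non-root node mapped to its incoming links and their travel times.

```python
bush_in = {
    2: [(1, 4.0)],
    3: [(1, 2.0), (2, 1.0)],
    4: [(2, 3.0), (3, 5.0)],
}
order = [1, 2, 3, 4]                 # topological order, root first

L = {1: 0.0}                         # shortest-path labels, eq. (6.49)-(6.51)
U = {1: 0.0}                         # longest-path labels, eq. (6.52)-(6.54)
for i in order[1:]:
    L[i] = min(L[h] + t for h, t in bush_in[i])
    U[i] = max(U[h] + t for h, t in bush_in[i])
```

Acyclicity is what makes this linear-time pass possible; on a network with cycles, longest paths would not even be well defined.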
The M labels are used to denote the average travel times within the bush,
recognizing that some travelers will be on longer paths and other travelers will be
on shorter ones. The node label Mi represents the average travel time between
origin r and node i across all bush paths connecting these nodes, weighted by
the number of travelers using each of these paths. The label Mij indicates the
average travel time of vehicles after they finish traveling on link (i, j), again
averaging across all of the bush paths starting at the origin r and ending with
link (i, j). These can be calculated as follows:
    M_r = 0    (6.55)
    M_ij = M_i + t_ij    ∀(i, j) ∈ B    (6.56)
    M_i = ∑_{(h,i)∈B} α_hi M_hi    ∀i ≠ r    (6.57)
[Figure 6.11: (a) (L, M, U) labels for bush links; (b) (L, M, U) labels for nodes; (c) D labels for bush links; (d) D labels for nodes.]
Dr = 0                                                  (6.58)
Dij = Di + t′ij           ∀(i,j) ∈ B                    (6.59)
Di = Σ_{(h,i)∈B} α²hi Dhi + Σ_{(h,i)∈B} Σ_{(g,i)∈B, (g,i)≠(h,i)} αhi αgi √(Dhi Dgi)    ∀i ≠ r    (6.60)
Figure 6.11 shows the D labels associated with the example bush in panels
(c) and (d). Panel (c) shows the Dij labels associated with links, and panel (d)
shows the Di labels associated with nodes.
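To make the label recursions concrete, here is a minimal sketch (not code from the book) that computes the L, U, M, and D labels of equations (6.49)–(6.60) for a bush whose nodes are supplied in forward topological order. The data layout (dicts keyed by nodes and links) and the example values in the usage note are illustrative assumptions.

```python
import math

def bush_labels(topo_order, incoming, t, tprime, alpha, r):
    """Compute node labels on a bush rooted at origin r.

    topo_order: nodes in forward topological order, beginning with r.
    incoming[i]: tail nodes h with (h, i) a bush link.
    t[h, i], tprime[h, i]: link travel time and its derivative.
    alpha[h, i]: proportion of node i's flow arriving via (h, i).
    """
    L = {r: 0.0}; U = {r: 0.0}; M = {r: 0.0}; D = {r: 0.0}
    for i in topo_order:
        if i == r:
            continue
        # Link labels such as Lij = Li + tij are formed inline from the
        # label at the tail node of each incoming link.
        L[i] = min(L[h] + t[h, i] for h in incoming[i])                    # (6.51)
        U[i] = max(U[h] + t[h, i] for h in incoming[i])                    # (6.54)
        M[i] = sum(alpha[h, i] * (M[h] + t[h, i]) for h in incoming[i])    # (6.57)
        Dlink = {h: D[h] + tprime[h, i] for h in incoming[i]}              # (6.59)
        # (6.60): squared terms plus cross terms with a square root.
        D[i] = sum(alpha[h, i] ** 2 * Dlink[h] for h in incoming[i])
        D[i] += sum(alpha[h, i] * alpha[g, i] * math.sqrt(Dlink[h] * Dlink[g])
                    for h in incoming[i] for g in incoming[i] if g != h)
    return L, U, M, D
```

On a three-node bush with links (1,2), (1,3), (2,3), travel times 2, 5, 2, unit derivatives, and an even flow split at node 3, this gives L3 = 4, U3 = 5, and M3 = 4.5.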
of the section. You will notice that all of these procedures follow the same
general form: calculate bush labels (different labels for different algorithms) in
forward topological order, then scan each node in turn in reverse topological
order. When scanning a node, use the labels to identify vehicles entering the
node from higher-cost approaches, and shift them to paths using lower-cost
approaches. Update the x and/or α labels accordingly, then proceed to the
previous node topologically until the origin has been reached.
All three of these algorithms also make use of divergence nodes (also called
last common nodes or pseudo-origins in the literature) as a way to limit the
scope of these updates. While the definition of divergence nodes is slightly
different in these algorithms, the key idea is to find the “closest” node which is
common to all of the paths travelers are being shifted among. The rest of this
subsection details these definitions and the role they play.
Algorithm B
Algorithm B identifies the longest and shortest paths to reach a node, and
shifts flows between them to equalize their travel times. That is, when scanning
a node i, only two paths (the longest and shortest) are considered. It is easy
to determine these paths using the L and U labels, tracing back the shortest
and longest paths by identifying the links used for the minimum or maximum
in equations (6.51) and (6.54). Once these paths are identified, the divergence
node a is the last node common to both of these paths.
The shortest and longest path segments between nodes a and i form a pair
of alternate segments; let σL and σU denote these path segments. Within Al-
gorithm B, flow is shifted from the longest path to the shortest path, using
Newton’s method to determine the amount of flow to shift:
∆h = [(Ui − Ua) − (Li − La)] / Σ_{(g,h)∈σL∪σU} t′gh          (6.61)
(a) Use the L and U labels to determine the divergence node a and the
pair of alternate segments σL and σU .
(b) Calculate ∆h using equation (6.61), capping ∆h at min_{(i,j)∈σU} xij if needed.
(c) Subtract ∆h from the x label on each link in σU , and add ∆h to the
x label on each link in σL .
4. If i = r, go to the next step. Otherwise, let i be the previous node
topologically and return to step 3.
5. Update all travel times tij and derivatives t0ij using the new flows x (re-
membering to add flows from other bushes.)
Demonstrating on the example in Figures 6.10 and 6.11, we start with the L
and U labels as shown in Figure 6.11(a) and (b), and start by letting i = 3, the
last node topologically. The longest path in the bush from the origin (r = 7)
to node i = 3 is [7,4,5,2,3], and the shortest path is [7,8,9,6,3], as can be easily
found from the L and U labels. The divergence node is the last node common
to both of these paths, which in this case is the origin, so a = 7. Equation (6.61)
gives ∆h = 1.56, so we shift this many vehicles away from the longest path and
onto the shortest path, giving the flows in Figure 6.12(a).
The second-to-last node topologically is node 2, and we repeat this process.
The longest and shortest paths from the origin to node 2 in the bush are [7,4,5,2]
and [7,4,1,2], respectively.10 The last node common to both of these paths is 4,
so the divergence node is a = 4 and we shift flow between the pair of alternate
segments [4,5,2] and [4,1,2]. Using equation (6.61) gives ∆h = 1.50. Shifting
this many vehicles from the longer segment to the shorter one gives the flows in
Figure 6.12(b).
The previous node topologically is node 5. Since there is only one incoming
bush link to node 5, there is nothing for Algorithm B to do. To see why, notice
that the longest and shortest bush paths are [7,4,5] and [7,4,5]. The divergence
node would be a = 5, which is the same as i, and the “pair of alternate segments”
is the empty paths [5] and [5]. Intuitively, since there is only one way to approach
node 5, there are no “alternate routes” to divert incoming flow. In fact, the same
is true for all of the previous nodes topologically (1, 4, 6, 9, 8, 7 in the reverse of the order given above), so there are no more flow shifts on the bush.
[Footnote 10: You may do so if you wish, but it is not required for Algorithm B to work, and in all of the examples in this section travel times are not updated until all flow shifts are complete for the bush.]
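The scan-and-shift step of Algorithm B can be sketched as follows. This is an illustrative fragment, not the book's implementation, and the link keys in the usage example are hypothetical.

```python
def algorithm_b_shift(sigma_L, sigma_U, t, tprime, x):
    """Shift flow from the longest segment sigma_U to the shortest
    segment sigma_L, per the Newton step (6.61).  Segments are lists of
    link keys between the divergence node and the node being scanned."""
    time_L = sum(t[a] for a in sigma_L)
    time_U = sum(t[a] for a in sigma_U)
    dh = (time_U - time_L) / sum(tprime[a] for a in sigma_L + sigma_U)  # (6.61)
    dh = min(dh, min(x[a] for a in sigma_U))  # cap so no flow goes negative
    for a in sigma_U:
        x[a] -= dh
    for a in sigma_L:
        x[a] += dh
    return dh
```

For two parallel links with times 6 and 2, unit derivatives, and 10 vehicles on the slower link, the step moves 2 vehicles, equalizing the linearized times.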
greatest travel time difference). Because OBA shifts flow among many paths, it
makes use of the average cost labels M and the derivative labels D, rather than
the shortest and longest path costs (and the direct link travel times derivatives)
used by Algorithm B. OBA also makes use of a Newton-type shift, dividing a
difference in travel times by an approximation of this difference’s derivative.
When scanning a node i in OBA, first identify a least-travel time approach,
that is, a link (ĥ, i) such that Mĥi ≤ Mhi for all other approaches (h, i). This
link is called the basic approach in analogy to the basic path concept used in
path-based algorithms. Then, for each nonbasic approach (h, i), the following
amount of flow is shifted from xhi to xĥi :
∆xhi = (Mhi − Mĥi) / (Dhi + Dĥi − 2Da),          (6.62)
with the constraint that ∆xhi ≤ xhi to prevent negative flows. It can be shown
that this formula would equalize the mean travel times on the two approaches if
they were linear functions. In reality, they are not, but the formula is still used
as an approximation.
Here a is the divergence node, defined for OBA as the node with the highest
topological order which is common to all paths in the bush from r to i, excluding
i itself. This is a reasonable place to truncate the search, since shifting flow
among path segments between a and i will not affect the flows on any earlier
links in the bush. This is a generalization of the definition used for Algorithm
B, needed since there are more than two paths subject to the flow shift.
After applying the shift ∆xhi to each of the links entering node i, we have to
update flows on the other links between a and i to maintain flow conservation.
This is done by assuming that the α values stay the same elsewhere, meaning
that any increase or decrease in the flow passing through a node propagates
backward to its incoming links in proportion to the contribution each incoming
link provides to that total flow. Thus, the x labels can be recalculated using
equations (6.47) and (6.48) to links and nodes topologically between a and i.
The steps of OBA are as follows:
1. Calculate the M and D labels in forward topological order.
2. Let i be the topologically last node in the bush.
6.4. BUSH-BASED ALGORITHMS 181
5. Update all travel times tij and derivatives t0ij using the new flows x (re-
membering to add flows from other bushes.)
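The per-node OBA shift can be sketched as below, under the assumption (consistent with the description above) that flow moves from each nonbasic approach to the basic one; the data in the test are hypothetical.

```python
def oba_shift(approaches, M, D, x, Da):
    """Scan one node: move flow from each nonbasic approach to the basic
    (least mean travel time) approach using the Newton-type step (6.62).
    M and D hold the link labels Mhi and Dhi; Da is the divergence node's
    D label; x holds the approach flows."""
    basic = min(approaches, key=lambda a: M[a])
    for a in approaches:
        if a == basic:
            continue
        dx = (M[a] - M[basic]) / (D[a] + D[basic] - 2 * Da)  # (6.62)
        dx = min(dx, x[a])      # constraint: do not create negative flow
        x[a] -= dx
        x[basic] += dx
    return basic
```

With two approaches of mean times 10 and 6, equal derivative labels 2, and divergence label 0, one vehicle is shifted to the basic approach.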
The LUCE algorithm chooses ∆xhi values for each approach according to the
following principles:
(a) Flow conservation must be obeyed, that is, Σ_{(h,i)∈B} ∆xhi = 0.
(b) No link flow can be made negative, that is, xhi + ∆xhi ≥ 0 for all (h, i) ∈ B.
(c) The M′hi values should be equal and minimal for any approach with positive flow after the shift, that is, if xhi + ∆xhi > 0, then M′hi must be less than or equal to the M′ label for any other approach to i.
The ∆xhi values satisfying these principles can be found using a "trial and error" algorithm, like that introduced in Section 4.2.1. Choose a set of approaches (call it A), and set ∆xhi = −xhi for all (h, i) not in this set A. For the remaining approaches, solve the linear system of equations which sets M′hi equal for all (h, i) ∈ A and which has Σ_{(h,i)∈A} ∆xhi = 0. The number of equations will equal the number of approaches in A. After obtaining such a solution, you can verify whether the three principles are satisfied. Principle (a) will always be satisfied. If principle (b) is violated, approaches with negative xhi + ∆xhi should be removed from A. If principle (c) is violated, then some approach not in A has a lower M′ value, and that approach should be added to A. In either of the latter two cases, the entire process should be repeated with the new A set.
The steps of LUCE are as follows:
5. Update all travel times tij and derivatives t0ij using the new flows x (re-
membering to add flows from other bushes.)
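The "trial and error" scheme for LUCE can be sketched as below. The linearized mean time M[a] + (D[a] − Da)·∆x is an assumption chosen to be consistent with the OBA formula (6.62), and the check for re-adding approaches under principle (c) is omitted for brevity; the test data are hypothetical.

```python
def luce_shift(approaches, M, D, x, Da):
    """Solve the linearized equal-time system for the approaches to one
    node, dropping any approach whose flow would become negative."""
    A = set(approaches)
    while True:
        # Approaches outside A lose all their flow (principle (b) boundary).
        dx = {a: -x[a] for a in approaches if a not in A}
        shed = sum(dx.values())
        c = {a: D[a] - Da for a in A}  # assumes D[a] > Da for each approach
        # Equal linearized times M[a] + c[a]*dx[a] = mu for all a in A,
        # with the shifts in A absorbing the shed flow so totals are zero.
        mu = (sum(M[a] / c[a] for a in A) - shed) / sum(1.0 / c[a] for a in A)
        for a in A:
            dx[a] = (mu - M[a]) / c[a]
        bad = [a for a in A if x[a] + dx[a] < 0]
        if not bad:
            return dx
        A -= set(bad)                  # principle (b): drop and re-solve
```

With two approaches of mean times 10 and 6, equal derivative labels 2, and divergence label 0, this shifts one vehicle, matching the OBA step; if the costlier approach only carries 0.5 vehicles, it is dropped and the solution is recomputed.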
Applying LUCE to the same example, we again start by scanning node 3. Assuming that both approaches (2,3) and (6,3) will continue to be used, we solve
the following equations simultaneously for ∆x23 and ∆x63 :
Substituting the M and D labels from Figure 6.11 and solving, we obtain ∆x23 =
−1.48 and ∆x63 = +1.48. Updating flows as in OBA (using the divergence node
a = 7 and assuming all α values at nodes other than 3 are fixed) gives the x and
α labels shown in Figure 6.14(a) and (b). Notice that this step is exactly the
same as the first step taken by OBA. The interpretation of LUCE and solving
a local, linearized equilibrium problem provides insight into how the OBA flow
shift formula (6.62) was derived.
The shift is slightly different when there are three approaches to a node,
as happens when we proceed to scan node 2. First assuming that all three
approaches will be used, we solve these three equations simultaneously, enforcing
flow conservation and that the travel times on the three approaches should be
the same:
Substituting values from Figure 6.11 and solving these equations simultaneously
gives ∆x12 = 2.82, ∆x42 = −1.85, and ∆x52 = −0.97. This is problematic, since
x42 = 0.79 and it is impossible to reduce its flow further by 1.85. This means
that approach (4,2) should not be used, so we fix ∆x42 = −0.79 as a constant
and re-solve the system of equations for ∆x12 and ∆x52 :
This produces ∆x12 = 2.06, and ∆x52 = −1.27, alongside the fixed value
∆x42 = −0.79. Updating flows on other links as in OBA produces the x
and α labels shown in Figure 6.14(b) and (c). No other shifts occur at lower
topologically-ordered nodes, because there is only one incoming link, and flow
conservation demands that no flow increase or decrease take place.
[Figure 6.15 panels: (a) travel times t after LUCE update; (b) bush with unused links removed; (c) updated (L, U) labels for nodes; (d) bush with shortcut links added.]
Figure 6.15: Updating a bush by removing unused links and adding shortcuts.
added, we have Uij + tij,ij+1 < Uij+1). Applying this identity cyclically, we have
Ui1 ≤ Ui2 ≤ . . . ≤ Uik ≤ Ui1 . Furthermore, since there were no cycles in the
bush during the previous iteration, at least one of these links must be new, and
for this link the inequality must be strict.
As an example, consider the bush updates which occur after performing the
LUCE example in the previous section. Figure 6.15 shows the updated link
travel times in panel (a), and the remaining bush links after zero-flow links are
removed in panel (b). Re-calculating L and U labels with the new travel times
and bush topology gives the values in Figure 6.15(c). At this point the three
unused bush links (4,2), (5,6), and (8,5), are examined to determine whether
Ui +tij < Uj for any of them. This is true for (5,6) and (8,5), since 41.6+2 < 66.3
and 22.1 + 2 < 41.6, but false for (4,2) since 32.5 + 42 ≥ 72.9. So, (5,6) and
(8,5) are added to the bush, as shown in Figure 6.15(d). From here, one can
return to the flow shifting algorithm to update flows further.
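The shortcut test just described reduces to a one-line filter; the U and t values in the test below are the ones quoted in the text for links (4,2), (5,6), and (8,5).

```python
def shortcut_links(candidates, U, t):
    """Return the non-bush links (i, j) worth adding as shortcuts:
    those satisfying U[i] + t[(i, j)] < U[j]."""
    return [(i, j) for (i, j) in candidates if U[i] + t[i, j] < U[j]]
```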
where Krs is a proportionality constant associated with the OD pair (r, s) corresponding to path π. The value of Krs can be found from the constraint that total path flows must equal demand, Σ_{π∈Π̂rs} hπ = drs. The rest of the equation shows that the entropy-maximizing path flows are determined solely by the
Lagrange multipliers βij associated with each link, and therefore that the ratio
between two path flows for the same OD pair only depends on the links where
they differ.
Here we describe three ways to calculate likely path flows. The first method
is “primal,” and operates directly on the path flow vector h itself. The second
method is “dual,” operating on the Lagrange multipliers β in the entropy max-
imization problem, which can then be used to determine the path flows. Both
of these methods presume that an equilibrium link flow solution x̂ has already
been found. The third method, “traffic assignment by paired alternative seg-
ments” (TAPAS), is an algorithm which simultaneously solves for equilibrium
and likely paths.
Primal and dual methods for likely path flows have two major issues in
common:
Figure 6.16: Link travel times at the true equilibrium solution (top) and at an
approximate equilibrium solution (bottom).
can be set to reflect this. Assume that ε = 1, so that any path within 1 minute
of the shortest path travel time is included in Π̂. This choice gives the path sets
Π̂12 = {[1, 2], [1, 3, 5, 6, 4, 2]} and Π̂34 = {[3, 5, 6, 4], [3, 5, 7, 8, 6, 4]}. This choice
is not consistent, because it assumes travelers passing between nodes 5 and 6
consider different choices depending on their OD pair: travelers starting at node
1 only consider the segment [5, 6], while travelers starting at node 3 consider
both [5, 6] and [5, 7, 8, 6] as options. If travelers are choosing routes to minimize
cost, it should not matter what their origin or destination is. Increasing ε to a
larger value would address this problem, but in a larger network would run the
risk of including paths which are not used at the true equilibrium solution.
So some care must be taken in how ε is chosen. In the approximate equilibrium solution, we can define the acceptance gap ga to be the greatest difference
between a used path’s travel time and the shortest path travel time for its OD
pair, and the rejection gap gr to be the smallest difference between the travel
time of an unused path, and the shortest path for an OD pair. In Figure 6.17,
the OD pairs are from 1 to 6 and 4 to 9, and the links are labeled with their
travel times. The thick lines show the links used by these OD pairs, and the
thin lines show unused links. For travelers between nodes 1 and 6, the used
paths have travel times of 21 (shortest) and 22 minutes, and the unused path
has a travel time of 26 minutes. For travelers between nodes 4 and 9, the used
paths have travel times 18 (which is shortest) and 20 minutes, while the unused
path has a travel time of 21 minutes. The acceptance gap ga for this solution is
2 minutes (difference between 20 and 18), and the rejection gap is 5 (difference
between 26 and 21).
It is possible to show that if ε is at least equal to the acceptance gap, but less
Figure 6.17: Example demonstrating rejection and acceptance gaps. Thick links
are parts of used paths at the equilibrium solution. Link labels are travel times.
than half of the rejection gap, then the resulting sets of paths Π̂rs are consistent
for the proportionality condition. That is, we need
ga ≤ ε ≤ gr/2.          (6.70)
In the network of Figure 6.17, any choice of ε between 2 and 2.5 will thus lead to
a consistent set of paths.
Exercise 21 gives a more formal definition of consistency and asks you to
prove this statement. It is not possible to choose such an ε unless the equilibrium
problem is solved with enough precision for the acceptance gap to be less than
half of the rejection gap. To fully maximize entropy, rather than just satisfying
proportionality, demands a more stringent level of precision in the equilibrium
solution.
The second issue involves redundancies in the set of equations enforcing the
OD matrix and equilibrium link flow constraints. Properly resolving this issue
requires using linear algebra to analyze the structure of the set of equations (and
this in fact is the key to bridging the gap between proportionality and entropy
maximization), but for proportionality a simpler approach is possible.
A redundancy in a system of equations can be interpreted as a “degree of
freedom,” a dimension along which a solution can be adjusted. Consider the
network in Figure 6.18, which has two OD pairs (A to B, and C to D). The
equilibrium link flows are shown in the figure, along with the link IDs and
an indexing of the eight paths. The OD matrix and link flow constraints are
6.5. LIKELY PATH FLOW ALGORITHMS 191
[Figure 6.18: network with OD pairs (A, B) and (C, D); links 1 and 2 are parallel links from E to F, links 3 and 4 parallel links from F to G, link 5 connects A to E, link 6 connects C to E, link 7 connects G to B, and link 8 connects G to D. Equilibrium link flows: x1 = 40, x2 = 20, x3 = 30, x4 = 30, x5 = x7 = 15, x6 = x8 = 45.]

OD pair (A, B):             OD pair (C, D):
Path 1: links 5, 1, 3, 7    Path 5: links 6, 1, 3, 8
Path 2: links 5, 1, 4, 7    Path 6: links 6, 1, 4, 8
Path 3: links 5, 2, 3, 7    Path 7: links 6, 2, 3, 8
Path 4: links 5, 2, 4, 7    Path 8: links 6, 2, 4, 8
h1 + h2 + h3 + h4 = 15 (6.71)
h5 + h6 + h7 + h8 = 45 (6.72)
h1 + h2 + h5 + h6 = 40 (6.73)
h3 + h4 + h7 + h8 = 20 (6.74)
h1 + h3 + h5 + h7 = 30 (6.75)
h2 + h4 + h6 + h8 = 30 (6.76)
Equations (6.71) and (6.72) reflect the constraints that the total demand among
all paths from A to B, and from C to D, must equal the respective values in
the OD matrix. Equations (6.73)–(6.76) reflect the constraints that the flow
on links 1–4 must match their equilibrium values. Similar equations for links
5–8 are omitted, since they are identical to (6.71) and (6.72), as can easily be
verified.
This system has eight variables but only six equations, so there must be
at least two independent variables — in fact, there are four, since some of the
six equations are redundant. For example, adding equations (6.71) and (6.72) gives you the same result as adding equations (6.73) and (6.74), so one of them, say (6.74), can be eliminated. Likewise, equation (6.76) can be eliminated,
since adding (6.71) and (6.72) is the same as adding (6.75) and (6.76). These
choices are not unique, and there are other equivalent ways of expressing the
same redundancies.
Each of these redundancies corresponds to an independent way to adjust the
path flows without affecting either total path flows between OD pairs, or total
link flows. A primal method uses these redundancies to adjust the path flows
directly, increasing the entropy of the solution without sacrificing feasibility
of the original path flow solution. A dual method uses these redundancies to
eliminate unnecessary link flow constraints — for instance, equations (6.74)
and (6.76) in the example above — so that the entropy-maximizing βij values
are unique.
but would make the explanations more complicated. You are encouraged to think about how
to implement this algorithm efficiently, without having to list all paths explicitly.
is sufficiently small.
With these values, we can calculate the fraction of the flow from origin r ap-
proaching any node j from one specific link entering that node (i, j):
α^r_ij = x^r_ij / Σ_{(h,j)∈Γ⁻¹(j)} x^r_hj          (6.78)

with α^r_ij defined arbitrarily if the denominator is zero. (This use of α is the
same as in the bush-based algorithms described in Section 6.4).
We can ensure that the proportionality condition holds between all paths
associated with an origin r by updating the path flows according to the formula
hπ ← drs Π_{(i,j)∈π} α^r_ij          ∀s ∈ Z, π ∈ Π̂rs,          (6.79)
that is, by applying the aggregate approach proportions across all paths from
this origin to each individual path. One can show that updating the path flows with
this formula will not change either the total OD flows or link flows, maintaining
feasibility, and will also increase entropy if there is any change in h.
This process is repeated for each origin r.
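A sketch of the within-origin update (6.78)–(6.79) follows; it is illustrative, not the book's code. Links are identified by arbitrary ids with a separate head-node map so that parallel links are handled, and the test reproduces the origin-C calculation for the network of Figure 6.18.

```python
def within_origin_update(paths, demand, x, head):
    """paths: path id -> list of link ids; demand: path id -> d_rs for its
    OD pair; x: origin-based flow on each link id; head: link id -> head node."""
    # (6.78): alpha = link flow / total origin flow entering the link's head.
    links = {a for ls in paths.values() for a in ls}
    into = {}
    for a in links:
        into[head[a]] = into.get(head[a], 0.0) + x[a]
    alpha = {a: x[a] / into[head[a]] for a in links}
    # (6.79): new path flow = OD demand times product of alphas on the path.
    new_h = {}
    for p, ls in paths.items():
        f = demand[p]
        for a in ls:
            f *= alpha[a]
        new_h[p] = f
    return new_h
```

With the origin-C flows of the example (25 and 20 vehicles on links 1 and 2, 15 and 30 on links 3 and 4), the update yields h5 = 8 1/3, h6 = 16 2/3, h7 = 6 2/3, and h8 = 13 1/3.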
segment σi , the segment flow g(σi ) is defined as the sum of flows on all paths
using that segment:
g(σi) = Σ_{(r,s)∈Z²(σ)} Σ_{π∈Π̂rs: σi⊆π} hπ,          (6.80)
where the notation σi ⊆ π means that all of the links in the segment σi are in
the path π.
If the equilibrium path set Π̂ is consistent in the sense of (6.70), then any
path π which uses one segment of the pair has a “companion” path which is
identical, except it uses the other segment of the pair, denoted π c (σ). For
example, in Figure 6.18, if σ is the pair of alternate segments between nodes E
and F, the companion of path 1 is path 3, and the companion of path 6 is path
8.
To achieve proportionality between origins for the alternate segments in σ,
we calculate the ratios between g(σi ) values and apply the same ratios to the
path flows for each OD pair using this set of alternate segments:
hπ ← (hπ + hπ^c(σ)) · g(σi) / (g(σ1) + g(σ2))          ∀(r,s) ∈ Z²(σ), i ∈ {1,2}, π ∈ Π̂rs ∧ π ⊇ σi.          (6.81)
It is again possible to show that updating path flows with this formula leaves
total OD flows and link flows fixed, and can only increase entropy.
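The between-origins update (6.80)–(6.81) for a single pair of alternate segments can be sketched as below; the flow values in the test are hypothetical, while the companion structure (path 1 with 3, path 6 with 8) follows Figure 6.18.

```python
def pas_proportion_update(h, companion, seg_of):
    """h: path -> flow; companion: path -> companion path using the other
    segment; seg_of: path -> 1 or 2, the segment of the pair it uses."""
    g = {1: 0.0, 2: 0.0}
    for p, f in h.items():
        g[seg_of[p]] += f                      # segment flows (6.80)
    total = g[1] + g[2]
    # (6.81): split each companion pair's pooled flow by segment proportion.
    return {p: (h[p] + h[companion[p]]) * g[seg_of[p]] / total for p in h}
```

After the update, every OD pair splits its flow between the two segments in the same ratio g(σ1) : g(σ2), while total OD flows and link flows are unchanged.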
Example
This section shows how the primal algorithm can solve the example in Fig-
ure 6.18. Assume that the initial path flow solution is h1 = 15, h5 = 15,
h6 = 10, h8 = 20, and all other path flows zero. As shown in the first column
of Table 6.2, this solution satisfies the OD matrix and the resulting link flows
match the equilibrium link flows, so it is feasible. The entropy of this solution,
calculated using (6.64), is 47.7.12 Two pairs of alternate segments are identified:
links 1 and 2 between nodes E and F, and links 3 and 4 between nodes F and
G.
This table summarizes the progress of the algorithm in successive columns;
you may find it helpful to refer to this table when reading this section. The
bottom section of the table shows the origin-based link flows corresponding to
the path flow solution, calculated using (6.77).
The first iteration applies the within-origin formula (6.79) to origin A, and
to origin C. To apply the formula to origin A, the origin-based proportions are
first calculated with equation (6.78): α1A = 1, α2A = 0, α3A = 1, and α4A = 0,
and thus h1 ← 15, h2 ← 0, h3 ← 0, and h4 ← 0. (There is no change.) For
origin C, we have α1C = 5/9, α2C = 4/9, α3C = 1/3, and α4C = 2/3, and thus
h5 ← 8 1/3, h6 ← 16 2/3, h7 ← 6 2/3, and h8 ← 13 1/3. The entropy of this new solution
has increased to 59.6.
[Footnote 12: When computing this formula, 0 log 0 is taken to be zero, since lim_{x→0+} x log x = 0.]
with redundancies in the system of constraints. The first two details are fairly
simple: any initial solution will do; β = 0 is simplest. Terminate when x is
“close enough” to x̂ according to some measure.
For adjusting the βij values, notice from equation (6.69) that increasing βij
will decrease xij, and vice versa. So a natural update rule is

βij ← βij + α (xij − x̂ij),          (6.82)

where α is a step size, and xij − x̂ij is the difference between the link flows
currently implied by β, and the equilibrium values. In addition to this intuitive
interpretation, this search direction is also proportional to the gradient of the
least-squares function

φ(x) = Σ_{(i,j)∈A} (xij − x̂ij)².          (6.83)
This function is zero only at a feasible solution, and (6.82) is a steepest descent
direction in terms of x.13
Equation (6.83) can also be used to set the step size α. One can select a trial
sequence of α values (say, 1, 1/2, 1/4, 1/8, . . .), evaluating each α value in turn
and stopping once the new x values reduce (6.83). A more sophisticated step
size rule chooses α using Newton’s method, to approximately maximize entropy.
Newton’s method also has the advantage of scaling the step size based on the
effect changes in β have on link flows. Exercise 23 develops this approach in
more detail.
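The dual update can be sketched end to end on a small network where paths can be enumerated; this is an illustrative fragment using the update rule and step-halving described above, not the book's code. The test reproduces the first accepted step of the worked example on the network of Figure 6.18 (α = 1/8, β1 = −1.25).

```python
import math

def path_flows(beta, paths, demands):
    """(6.69): path flow proportional to exp(-sum of beta over its links),
    scaled so each OD pair's path flows total its demand."""
    h = {}
    for d, od_paths in demands.values():
        w = {p: math.exp(-sum(beta.get(a, 0.0) for a in paths[p]))
             for p in od_paths}
        K = d / sum(w.values())
        for p in od_paths:
            h[p] = K * w[p]
    return h

def link_flows(h, paths, links):
    """Aggregate path flows onto the listed links."""
    x = {a: 0.0 for a in links}
    for p, f in h.items():
        for a in paths[p]:
            if a in x:
                x[a] += f
    return x

def mismatch(x, xhat):
    return sum((x[a] - xhat[a]) ** 2 for a in xhat)        # (6.83)

def dual_step(beta, paths, demands, xhat, free):
    """One gradient step on the free beta values, halving the step size
    until the least-squares mismatch decreases."""
    x = link_flows(path_flows(beta, paths, demands), paths, xhat)
    base = mismatch(x, xhat)
    alpha = 1.0
    while True:
        trial = dict(beta)
        for a in free:
            trial[a] = beta[a] + alpha * (x[a] - xhat[a])
        xt = link_flows(path_flows(trial, paths, demands), paths, xhat)
        if mismatch(xt, xhat) < base:
            return trial, alpha
        alpha /= 2
```

Here `free` holds the links whose β values are adjusted; the redundant links stay pinned at zero, as discussed below.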
A last technical detail concerns redundancies in the system of link flow equa-
tions, as discussed at the end of Section 6.5. Redundancies in the link flow
equations mean that the βij values maximizing entropy may not be unique. To
resolve this issue, redundant link flow constraints can be removed, and their βij
values left fixed at zero. Practical experience shows that this can significantly
speed convergence.
Example
The dual method is now demonstrated on the same example as the primal
algorithm; see Figure 6.18. You may find it helpful to refer to Table 6.3 when
reading this section to track the progress of the algorithm. The format is similar
to Table 6.2, except for additional rows showing the βij values.
To begin, as discussed at the end of Section 6.5, two of the link flow con-
straints are redundant, and their βij values are fixed at zero. Assume that links
2 and 4 are chosen for this purpose, so β2 = β4 = 0 throughout the algorithm.
(The algorithm would perform similarly for other choices of the two redundant links; note that we are also continuing to ignore the link flow constraints associated with links 5–8, since these are identical to the OD matrix constraints (6.71) and (6.72).)
[Footnote 13: To be precise, it is not a steepest descent direction in terms of β. An alternative derivation]
Initially, β1 = β3 = 0. This means that exp(−Σ_{(i,j)∈A} δ^π_ij βij) = 1 for all paths, so h1 = h2 = h3 = h4 = KAB and h5 = h6 = h7 = h8 = KCD. To ensure that the sum of each OD pair's path flows equals the total demand, we need KAB = 3.75 and KCD = 11.25, and equation (6.69) gives the flows on each path, as shown in the Iteration 0 column of Table 6.3.
The table also shows the link flows corresponding to this solution: links 1–4
all have 30 vehicles, whereas the equilibrium solution has x1 = 40 and x2 = 20.
The least-squares function (6.83) has the value (40 − 30)2 + (20 − 30)2 = 200.
Trying an initial step size of α = 1 would give β1 = 0 + 1 × (30 − 40) = −10. The
other βij values are unchanged: β3 remains at zero because it has the correct
link flow, while β2 and β4 are permanently fixed at zero because their link flow
constraints were redundant. Re-applying equation (6.69) with this new value
of β1 (and recalculating KAB and KCD to satisfy the OD matrix) would give
x1 = 60, x2 = 0, and x3 = x4 = 30. This has a larger least-squares function
than before (800 vs. 200), so we try again with α = 1/2. This is slightly better
(the least-squares function is 768), but still worse than the current solution.
After two more trials, we reach α = 1/8, which produces a lower mismatch
(88).
This step size is accepted, and we proceed to the next iteration. The path
and link flows are shown in the Iteration 1 column of Table 6.3. The flows on
links 1 and 2 are closer to their equilibrium values than before. Continuing as
before, we find that α = 1/8 is again the acceptable step size with the new link flow values, so β1 ← −1.25 + (1/8)(46.6 − 40) = −0.42, producing the values in the
Iteration 2 column. Over additional iterations, the algorithm converges to the
final values shown in the rightmost column.
It is instructive to compare the dual algorithm in Table 6.3 with the primal
algorithm in Table 6.2. Notice how the dual algorithm always maintains proportionality, and the link flows gradually converge to their equilibrium values.
By contrast, the primal algorithm maintains the link flows at their equilibrium
values, and gradually converges to proportionality. The entropy also does not
change monotonically, and at times it is higher than the maximum entropy value
— this can only happen for an infeasible solution.
Recall from Section 6.4 that path- and bush-based algorithms find the equi-
librium solution by shifting flow from longer paths to shorter ones, and that
these paths often differ on a relatively small set of links (in gradient projection,
we denoted these by the set A3 ∪A4 ; in bush-based algorithms, by the concept of
a divergence node). The main insights of TAPAS are that these algorithms tend
to shift flow repeatedly between the same sets of links, and that these links are
common to paths used between different origins and destinations. As a result, it makes sense to store these path segments from one iteration to the next,
rather than having to expend effort finding them again and again. Furthermore,
since these links are common to multiple origins, we can apply proportionality
concepts at the same time to find a high-entropy path flow solution.
Pairs of alternative segments can also form a concise representation of the
equilibrium conditions. In the grid network of Figure 6.19, the number of paths
between the origin in the lower-left and the destination in the upper-right is
rather large (in fact there are 184,756) even though the network is a relatively
modestly-sized grid of ten rows and columns. If all paths are used at equilibrium,
expressing the equilibrium condition by requiring the travel times on all paths
where

x^r_ij = Σ_{s∈Z} Σ_{π∈Π̂rs} δ^π_ij hπ.          (6.85)
An example of a PAS is shown in Figure 6.20. For each link, the upper and
lower labels give the flows on that link from Origin 1 and Origin 2, respectively.
There are two path segments: [2, 4] and [2, 3, 4], and there is one relevant origin
(Origin 2). You might expect that this PAS is also relevant to Origin 1; and
indeed at the ultimate equilibrium solution this will be true. However, in large
networks it is not immediately obvious which PASs are relevant to which origins,
and the TAPAS algorithm must discover this during its steps.
There are three main components to the algorithm: PAS management, flow
shifts, and proportionality adjustments. PAS management involves identifying
new PAS, updating the lists of relevant origins, and removing inactive ones.
Flow shifts move the solution closer to user equilibrium, by shifting vehicles
from longer paths to shorter ones. Proportionality adjustments maintain the
total link flows at their current values, but adjust the origin-specific link flows
to increase the entropy of the path flow solution implied by (6.84). One possible
way to perform these steps is as follows; the rest of this subsection fills out the
details of each step.
1. Find an initial origin-disaggregated solution, and initialize the set of PASs
to be empty.
2. Update the set of PASs by determining whether new ones should be cre-
ated, or whether existing ones are relevant to more origins.
3. Perform flow shifts within existing PASs.
4. Perform proportionality adjustments within existing PASs.
5. Check for convergence, and return to step 2 unless done.
This algorithmic description may appear vague. Like many of the fastest al-
gorithms currently available, the performance of the algorithm depends on suc-
cessfully balancing these three components of the algorithm. The right amount
of time to spend on each component is network- and problem-specific, and im-
plementations that make such decisions adaptively, based on the progress of the
algorithm, can work well.
multiple paths are tied for being shortest with one of them having zero flow.
Figure 6.21: Creating a new PAS. The top panel shows the origin-specific flows,
the bottom panel the link performance functions and current times. Bold links
are the shortest path tree for Origin 1.
example include a segment of links used by this origin [2, 4], and a segment of
links from the shortest path tree [2, 3, 4] that have a common divergence node
2. At this point, we declare Origin 1 relevant to this PAS by adding it to the
set Z_a.
For link (3, 5), we need to create a new PAS. There are two possibilities
for choosing segments: one choice is [3, 5] and [3, 4, 5]; and the other choice is
[2, 3, 5] and [2, 3, 4, 5] (both involve a segment of used links and a segment from
the shortest path tree, starting at a common divergence node). The first one
is preferred, because it has fewer links — and in fact the common link (2, 3)
in the second PAS is irrelevant, since shifting flow between segments will not
change flow on such a link at all). By being shorter, there are potentially more
relevant origins. If node 3 were also an origin, it could be relevant to the first
choice of segments, but not the second. Therefore we create a new PAS b, and
set σ_1^b = [3, 5], σ_2^b = [3, 4, 5], and Z_b = {1}. (The choice of which segment is the
first and second one is arbitrary.)
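The preference for shorter segments can be realized by trimming shared leading links: a link common to the start of both segments carries no net flow change, so it can be dropped, leaving the segments to begin at the common divergence node. The helper below is our own sketch, not code from the text.

```python
# Segments are node lists with a common start and end. Shared leading links
# (consecutive node pairs equal in both segments) are irrelevant to a flow
# shift, so trim them to obtain the shortest equivalent PAS.

def trim_common_prefix(seg1, seg2):
    """Drop shared leading links; return the trimmed segment pair."""
    k = 0
    while (k + 1 < len(seg1) and k + 1 < len(seg2)
           and seg1[k] == seg2[k] and seg1[k + 1] == seg2[k + 1]):
        k += 1
    return seg1[k:], seg2[k:]
```

Applied to the example's second candidate pair [2, 3, 5] and [2, 3, 4, 5], trimming the common link (2, 3) recovers the preferred pair [3, 5] and [3, 4, 5].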
Repeating the same process with the shortest path tree from Origin 2, we
verify that it is relevant to PAS a (which it already is), and add it as relevant
to PAS b, so Z_a = Z_b = {1, 2}. (Both origins are now relevant to both PASs.)
Flow shifts
TAPAS uses flow shifts to find an equilibrium solution. There are two types of
flow shifts: the most common involves shifting flow between the two segments
on an existing PAS. The second involves identifying and eliminating cycles of
used links for particular origins.
For the first type, assume we are given a PAS ζ and, without loss of generality,
assume that the current travel time on the first segment σ_1^ζ is greater than that
on the second σ_2^ζ. We wish to shift flow from the first segment to the second
one, either to equalize their travel times or to shift all the flow to the second
segment if it is still shorter. The total amount of flow we need to shift to equalize
the travel times is approximately given by Newton's method:

Δh = ( Σ_{(i,j)∈σ_1^ζ} t_ij − Σ_{(i,j)∈σ_2^ζ} t_ij ) / ( Σ_{(i,j)∈σ_1^ζ} t′_ij + Σ_{(i,j)∈σ_2^ζ} t′_ij ).   (6.86)
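The Newton step (6.86) is a one-line calculation; the sketch below transcribes it directly, with the times and derivatives along each segment passed as lists (the function name is ours).

```python
# Newton step (6.86): ratio of the travel time difference between the two
# segments to the sum of the link travel time derivatives on both segments.

def newton_shift(times_1, derivs_1, times_2, derivs_2):
    """Flow to move from the longer segment 1 to the shorter segment 2."""
    return (sum(times_1) - sum(times_2)) / (sum(derivs_1) + sum(derivs_2))
```

For the worked example below (segment times 53 versus 30 + 11, derivatives 1 versus 10 + 1), the step evaluates to one unit of flow.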
We must also determine whether such a shift is feasible (would shifting this much
flow force an x^r_ij value to become negative?) and, unlike the algorithms earlier
in this chapter, how much of this flow shift comes from each of the relevant
origins in Z_ζ.
To preserve feasibility, for any relevant origin r, we must subtract the same
amount from x^r_ij for each link in the longer segment, and add the same amount
to each link in the shorter segment. Call this amount Δh^r. The non-negativity
constraints require Δh^r ≤ min_{(i,j)∈σ_1^ζ} x^r_ij; let Δh̄^r denote the right-hand side
of this inequality, which must hold for every relevant origin.
If

Δh ≤ Σ_{r∈Z_ζ} Δh̄^r,   (6.87)
then the desired shift is feasible, and we choose the origin-specific shifts Δh^r to
be proportional to their maximum values Δh̄^r to help maintain proportionality.
If this shift is not feasible, then we shift as much as we can by setting Δh^r = Δh̄^r
for each relevant origin.
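The feasibility test (6.87) and the proportional split can be sketched as follows; the function and argument names are ours, with max_shift holding each origin's Δh̄^r.

```python
# Split a total Newton shift across relevant origins: if the shift fits within
# the per-origin maxima (condition (6.87)), split proportionally to those
# maxima; otherwise shift the maximum each origin allows.

def split_shift(delta_h, max_shift):
    """max_shift maps origin -> largest feasible shift for that origin."""
    total = sum(max_shift.values())       # right-hand side of (6.87)
    if delta_h <= total:
        return {r: delta_h * m / total for r, m in max_shift.items()}
    return dict(max_shift)                # infeasible: shift as much as we can
```

With Δh = 1 and per-origin maxima of 2 and 1 (as in the example that follows), the origins shift 2/3 and 1/3 of a vehicle respectively.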
An example of such a shift is shown in Figure 6.22, continuing the example
from before. The left side of the figure shows the state of the network prior
to the flow shift. The top panel shows the current origin-specific link flows;
the middle panel the current travel times and travel time derivatives; and the
bottom panel shows the structure of both PASs. Starting with PAS a, we first
calculate the desired total shift from equation (6.86):

Δh = (53 − (30 + 11)) / (1 + (10 + 1)) = 1.   (6.88)
For origin 1, we can subtract at most 2 units of flow from segment 1, and for
origin 2, we can subtract at most 1. Removing any more would result in negative
Figure 6.22: Example of flow shifts using TAPAS; left panel shows initial flows
and times, right panel shows flows and times after a shift for PAS a.
origin flows on link (2, 4). We thus split Δh in proportion to these maximum
allowable values, yielding Δh^1 = 2/3 and Δh^2 = 1/3, producing the solution
shown in the right half of Figure 6.22. Since the link
performance functions are linear, Newton’s method is exact, and travel times
are equal on the two segments of PAS a.
Moving to the second PAS, we see that it is at equilibrium as well, and no
flow shift is done: the numerator of equation (6.86) is zero. In fact, the entire
network is now at equilibrium, but the origin-based link flows do not represent
a proportional solution. Proportionality adjustments are discussed below.
The second kind of flow shift involves removing flow from cycles. In TAPAS,
there may be occasions where cycles are found among the links with positive flow
(x^r_ij > 0). Such cycles can be detected using the topological ordering algorithm
described in Section 2.2.
In such cases, we can subtract flow from every link in the cycle, maintaining
feasibility and reducing the value of the Beckmann function (see Exercise 26).
Let X denote the minimum value of x^r_ij among the links in such a cycle. After
subtracting this amount from every x^r_ij in the cycle, the solution is closer to
equilibrium and the cycle of positive-flow links no longer exists.
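The cycle-removal step can be sketched in a few lines (our own illustration, with flows stored in a link-keyed dictionary): subtracting the minimum flow on the cycle from every cycle link preserves conservation at each node, since every node on the cycle loses the same amount of inflow and outflow.

```python
# Remove a cycle of positive flow: subtract the minimum cycle-link flow from
# every link in the cycle, driving at least one link's flow to zero.

def remove_cycle(flows, cycle_nodes):
    """flows: dict (i, j) -> x_ij; cycle_nodes: e.g. [2, 3, 2] for 2->3->2."""
    links = list(zip(cycle_nodes[:-1], cycle_nodes[1:]))
    shift = min(flows[link] for link in links)
    for link in links:
        flows[link] -= shift
    return shift
```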
Figure 6.23 shows an example of how this might happen. This network has
only a single origin, and two PASs. Applying the flow shift formula, we move
1 vehicle from segment [1, 2] to [1, 3, 2], and 1 vehicle from segment [2, 4] to
segment [2, 3, 4]. This produces the flow solution in the lower-left of the figure,
which contains a cycle of flow involving links (2, 3) and (3, 2). If we subtract 1
unit of flow from both of those links, we have the flow solution in the lower-right.
This solution is feasible and has a lower value of the Beckmann function, as you
can verify.
Proportionality adjustments
Figure 6.23: (a) Initial flow solution; (b) After a flow shift at a PAS ending at
3; (c) After a flow shift at a PAS ending at 4; (d) After removing a cycle of flow.
To describe the problem more formally, let b denote the node at the downstream
end of the PAS, and let x_b denote the total flow through this node as
in Equation (6.4.1). We can calculate the flow on the segments σ_1^ζ and σ_2^ζ for
each relevant origin r with the formulas

g^r(σ_1^ζ) = x^r_b ∏_{(i,j)∈σ_1^ζ} x^r_ij / x^r_j   (6.91)

g^r(σ_2^ζ) = x^r_b ∏_{(i,j)∈σ_2^ζ} x^r_ij / x^r_j,   (6.92)
assuming positive flow through all nodes in the segment (x^r_j > 0).15 If proportionality
were satisfied, we would have
g^r(σ_1^ζ) / ( g^r(σ_1^ζ) + g^r(σ_2^ζ) ) = Σ_{r′∈Z_ζ} g^{r′}(σ_1^ζ) / Σ_{r′∈Z_ζ} ( g^{r′}(σ_1^ζ) + g^{r′}(σ_2^ζ) ).   (6.94)

We aim to find Δh^r values satisfying constraints (6.90) and (6.94), where
the segment flows are computed with equations (6.95) and (6.96).
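The segment-flow product (6.91) can be computed directly; the sketch below is ours, with link flows and node flows passed as dictionaries, and it assumes (as the text does) positive flow through every segment node.

```python
# Segment flow for one origin, per (6.91): node flow at the PAS tail b times,
# for each segment link (i, j), the share of node j's flow arriving via (i, j).

def segment_flow(node_flow_b, segment, link_flow, node_flow):
    """segment: node list; link_flow: (i, j) -> x^r_ij; node_flow: j -> x^r_j."""
    g = node_flow_b
    for i, j in zip(segment[:-1], segment[1:]):
        g *= link_flow[(i, j)] / node_flow[j]
    return g
```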
Solving this optimization problem exactly is a bit difficult because equations
(6.84) and (6.85) are nonlinear. A good approximation method is
developed in Exercise 27. A simpler heuristic is to adapt the “proportionality
between origins” technique from Section 6.5.1 and approximate the (nonlinear)
formulas (6.95) and (6.96) by the (linear) formulas

g^r(σ_1^ζ) ≈ g_0^r(σ_1^ζ) − Δh^r   (6.98)

g^r(σ_2^ζ) ≈ g_0^r(σ_2^ζ) + Δh^r,   (6.99)

where g_0^r(σ_1^ζ) and g_0^r(σ_2^ζ) are the current segment flows (with zero shift).
Substituting equations (6.98) and (6.99) into (6.94) and simplifying, we obtain

Δh^r = g_0^r(σ_1^ζ) − ( g_0^r(σ_1^ζ) + g_0^r(σ_2^ζ) ) · Σ_{r′∈Z_ζ} g_0^{r′}(σ_1^ζ) / Σ_{r′∈Z_ζ} ( g_0^{r′}(σ_1^ζ) + g_0^{r′}(σ_2^ζ) ).   (6.100)
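The heuristic adjustment (6.100) moves each origin's segment split toward the aggregate proportion; the sketch below (our own names, with g1 and g2 mapping origins to their current flows on the two segments) transcribes it directly.

```python
# Proportionality adjustment (6.100): target proportion rho is the aggregate
# share of flow on segment 1; each origin's shift moves it onto that share.

def proportionality_shifts(g1, g2):
    """Return origin -> Delta h^r from the linearized condition (6.100)."""
    rho = sum(g1.values()) / (sum(g1.values()) + sum(g2.values()))
    return {r: g1[r] - (g1[r] + g2[r]) * rho for r in g1}
```

For the two-origin example of Figure 6.22, origin flows of 4 and 0 on one segment and 0 and 2 on the other give shifts of 4/3 and −4/3, after which both origins use segment 1 in the same proportion 2/3.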
Figure 6.27: The heuristic formula is not always exact for a non-isolated PAS.
6.6. HISTORICAL NOTES AND FURTHER READING 215
6.7 Exercises
1. [32] One critique of the BPR link performance function is that it allows
link flows to exceed capacity. An alternative link performance “function”
is t_ij = t^0_ij/(u_ij − x_ij) if x_ij < u_ij, and ∞ otherwise, where t^0_ij and u_ij are
the free-flow time and capacity of link (i, j). First show that t_ij → ∞ as
x_ij → u_ij. How would using this kind of link performance function affect
Figure 6.28: Network for Exercise 6; links labeled with travel times, nodes A–I
labeled with the number of trips destined there.
2. [33] Show that the relative gap γ1 and average excess cost are always
nonnegative, and equal to zero if and only if the link or path flows satisfy
the principle of user equilibrium.
3. [61] Some relative gap definitions require a lower bound f̲ on the value
of the Beckmann function at optimality. Let x denote the current solution,
f(x) the value of the Beckmann function at the current solution,
and TSTT(x) and SPTT(x) the total system travel time and shortest
path travel time at the current solution, respectively. Show that
f(x) + SPTT(x) − TSTT(x) is a lower bound on the Beckmann function
at user equilibrium.

4. [10] What is the value of the lower bound f̲ = f(x) + SPTT(x) − TSTT(x)
if x satisfies the principle of user equilibrium?
5. [23] Let (h, x) and (g, y) be two feasible solutions to the Beckmann formulation
(6.3)–(6.6), and let λ ∈ [0, 1]. Show that (λh + (1 − λ)g, λx + (1 − λ)y)
is also feasible, directly from the constraints (without appealing to
convexity).
6. [35] In the network in Figure 6.28, all trips originate at node A. The links
are labeled with the current travel times, and the nodes are labeled with
the number of trips whose destination is that node.
(a) Find the shortest paths from node A to all other nodes, and report
the cost and backnode labels upon termination.
(b) What would be the target link flow solution x̂ in the method of
successive averages or the Frank-Wolfe algorithm?
Figure 6.29: Network for Exercise 8, boldface links indicate previous x̂ target.
then add d_rs to each link in this path. This method may require adding up
to |Z|² terms for each link, in case every shortest path uses the same link.
Formulate a more efficient algorithm which requires solving one shortest
path problem per origin, and which requires adding no more than |Z|
terms for each link. (Hint: Do not wait until the end to calculate x̂ and
find a way to build x as you go.)
8. [42] Consider the network in Figure 6.29, with a single origin and two
destinations. Each link has the link performance function 10 + x2 , and the
boldface links indicate the links used in the previous x̂ target. Report the
new target link flows x̂, the step size λ, and the new resulting link flows,
according to (a) Frank-Wolfe and (b) conjugate Frank-Wolfe.
9. [47] Consider the network in Figure 6.30, where 8 vehicles travel from
node 1 to node 4. Each link is labeled with its delay function. For each of
the algorithms listed below, report the resulting link flows, average excess
cost, and value of the Beckmann function.
(a) Perform three iterations of the method of successive averages.
(b) Perform three iterations of the Frank-Wolfe algorithm.
(c) Perform three iterations of conjugate Frank-Wolfe.
(d) Perform three iterations of gradient projection.
(e) Perform three iterations of manifold suboptimization.
(f) Perform three iterations of Algorithm B (for each iteration, do one
flow update and one bush update)
(g) Perform three iterations of origin-based assignment (for each itera-
tion, do one flow update and one bush update)
(h) Perform three iterations of linear user cost equilibrium (for each it-
eration, do one flow update and one bush update)
(i) Compare and discuss the performance of these algorithms.
10. [48] Consider the network in Figure 6.31. The link performance function
on the light links is 3 + (x_a/200)², and on the thick links is 5 +
(x_a/100)². 1000 vehicles are traveling from node 1 to node 9, and 1000 vehicles
from node 4 to node 9. For each of the algorithms listed below, report
the resulting link flows, average excess cost, and value of the Beckmann
function.
Figure 6.30: Network for Exercise 9; links labeled with their delay functions.

Figure 6.31: Network for Exercise 10; grid of nodes 1–9.
11. [43] The method of successive averages, as presented in the text, uses
the step size λ_i = 1/(i + 1) at iteration i. Other choices of step size can
be used, and Exercise 12 shows that the algorithm converges whenever
λ_i ∈ [0, 1], Σλ_i = ∞, and Σλ_i² < ∞. Which of the following step size
choices guarantee convergence?
(a) λi = 1/(i + 2)
6.7. EXERCISES 219
(b) λi = 4/(i + 2)
(c) λi = 1/i2
(d) λi = 1/(log i)
(e) λ_i = 1/√i
(f) λ_i = 1/i^(2/3)
12. [65] (Proof of convergence for the method of successive averages.) Con-
sider the method of successive averages applied to the vector of link flows.
This produces a sequence of link flow vectors x1 , x2 , x3 , . . . where xi is the
vector of link flows at iteration i. We can also write down the sequence
f 1 , f 2 , f 3 , . . . of the values taken by the Beckmann function for x1 , x2 ,
etc. To show that this algorithm converges to the optimal solution, we
have to show that either x^i → x̂ or f^i → f̂ as i → ∞, where x̂ is the user
equilibrium solution and f̂ the associated value of the Beckmann function.
This exercise walks through one proof of this fact, for any version
of the method of successive averages for which λ_i ∈ [0, 1], Σλ_i = ∞, and
Σλ_i² < ∞.
(a) Assuming that the link performance functions are differentiable, show
that for any feasible x and y there exists θ ∈ [0, 1] such that
f(y) = f(x) + Σ_{(i,j)∈A} t_ij(x_ij)(y_ij − x_ij) + (1/2) Σ_{(i,j)∈A} t′_ij((1 − θ)x_ij + θy_ij)(y_ij − x_ij)².   (6.101)
13. [34] The derivation leading to (6.16) assumed that the solution to the
restricted VI was not at the endpoints λ = 0 or λ = 1. Show that if you
are solving (6.16) using either the bisection method from Section 3.3.2, or
Newton’s method (with a “projection” step ensuring λ ∈ [0, 1]), you will
obtain the correct solution to the restricted VI even if it is at an endpoint.
link flows x* have just been found, and we need to find new flows x′(λ) =
λx* + (1 − λ)x for some λ ∈ [0, 1]. Let z(λ) be the value of the Beckmann
function at x′(λ).
(a) Using the multi-variable chain rule, we can show that z is differentiable
and z′(λ) is the dot product of the gradient of the Beckmann
function evaluated at x′(λ) and the direction x* − x. Calculate the
gradient of the Beckmann function and use this to write out a formula
for z′(λ).
(b) Is z a convex function of λ?
(c) Show that z′(0) = 0 only if x is an equilibrium, and that otherwise
z′(0) < 0.
(d) Assume that the solution of the restricted variational inequality in
the Frank-Wolfe algorithm is for an “interior” point λ* ∈ (0, 1). Show
that z′(λ*) = 0.
(e) Combine the previous answers to show that the Beckmann function
never increases after an iteration of the Frank-Wolfe algorithm (and
always decreases strictly if not at an equilibrium).
15. [74] (Proof of convergence for Frank-Wolfe.) Exercise 14 shows that the
sequence of Beckmann function values f 1 , f 2 , . . . from subsequent itera-
tions of Frank-Wolfe is nonincreasing. Starting from this point, show that
this sequence has a limit, and that the resulting limit corresponds to the
global minimum of the Beckmann function (demonstrating convergence to
equilibrium.) Your solution may require knowledge of real analysis.
17. [33] Identify conjugate directions for the following quadratic programs:
ν = − μλ_{−1} Σ_{(i,j)∈A} ((x*_{−1})_ij − x_ij)(x^AON_ij − x_ij) t′_ij / ( (1 − λ_{−1}) Σ_{(i,j)∈A} ((x*_{−1})_ij − x_ij)² t′_ij ).   (6.103)
21. [73]. This exercise walks through a proof of the formula (6.70) for choosing
the threshold for finding proportional path flows. A set of paths
22. [38]. The proof of Theorem 5.4 started from the entropy-maximizing Lagrangian
(5.31), which Lagrangianized both the link flow constraints (with
multipliers β_ij) and the OD matrix constraints (with multipliers γ_rs). Alternatively,
we can Lagrangianize only the link flow constraints, and replace
(h_π/d_rs) with h_π (why?), giving the equation

L̂(x, β) = Σ_{π∈Π} h_π log h_π + Σ_{(i,j)∈A} β_ij ( x̂_ij − Σ_{π∈Π} δ^π_ij h_π ).   (6.105)

∂L̂/∂β_ij = x_ij − x̂_ij
23. [59]. The dual algorithm step (6.82) can be compactly written as β ←
β + α∆β, where ∆βij = xij − x̂ij . Let f (α) denote the value of the
alternative Lagrangian (6.105) after a step of size α is taken, and the new
β and h values are calculated. Newton’s method can be used to find an
α value which approximately minimizes f (α), maximizing entropy in the
direction ∆β. The Newton step is α = −f 0 (0)/f 00 (0).
Figure 6.33: Network for Exercise 24; each link is labeled (t_ij, x_ij).
24. [30]. In the discussion surrounding Figure 6.19, we argued that satisfying
the equilibrium conditions around a “spanning” set of PASs (one for each
block) was sufficient for establishing equilibrium on the entire network.
Consider the network in Figure 6.33, where the demand from origin 1
to destination 4 is 100 vehicles, and there are two PASs: one between
segments [1, 4] and [1, 2, 3, 4], and another between segments [2, 4] and
[2, 3, 4]. These are spanning, in the sense that by shifting flows between
these two PASs we can obtain any feasible path flow solution from any
other. They also satisfy the equilibrium conditions: for the first PAS,
because there is no flow on either segment16 ; for the second, because the
travel times are equal on the two segments. Yet the network is not at
equilibrium, since [1, 4] is the only shortest path and it is unused. Explain
this apparent inconsistency.
25. [62]. Develop one or more algorithms to find “short” segments when
generating a new PAS. These methods should require a number of steps
that grows at most linearly with network size.
26. [22]. Show that the cycle-removing procedure described in the TAPAS
algorithm maintains feasibility of the solution (flow conservation at each
node, and non-negativity of link flows), and that the Beckmann function
decreases strictly (assuming link performance functions are positive).
27. [68]. This exercise develops a technique for approximately solving equations
(6.90) and (6.94), better than the heuristic given in the text.
(a) Define Δh^r(ρ) to be the amount of flow that needs to be shifted from
origin r's flows on σ_1 to σ_2, to adjust the proportion g^r(σ_1^ζ)/(g^r(σ_1^ζ) +
g^r(σ_2^ζ)) to be exactly ρ. A negative value of this function indicates
16 There is flow on link (1,2), but not on the entire segment [1, 2, 3, 4].
This chapter shows how a sensitivity analysis can be conducted for the traffic
assignment problem (TAP), identifying how the equilibrium assignment will
change if the problem parameters (such as the OD matrix or link performance
functions) are changed. This type of analysis is useful in many ways: it can
be used to determine the extent to which errors or uncertainty in the input
data create errors in the output data. It can be used as a component in so-
called “bilevel” optimization problems, where we seek to optimize some objective
function while enforcing that the traffic flows remain at equilibrium. This occurs
most often in the network design problem, where one must determine how to
improve network links to reduce total costs, and in the OD matrix estimation
problem, where one attempts to infer the OD matrix from link flows, or improve
upon an existing estimate of the OD matrix.
After exploring the sensitivity analysis problem using the familiar Braess
network, the first objective in the chapter is calculating derivatives of the equi-
librium link flows with respect to elements in the OD matrix. It turns out that
this essentially amounts to solving another, easier, traffic assignment problem
with different link performance functions and constraints. The remainder of
the chapter shows how these derivatives can be used in the network design and
OD matrix estimation problems, which are classic transportation examples of
bilevel programs.
226 CHAPTER 7. SENSITIVITY ANALYSIS AND APPLICATIONS
solution varies according to the demand level. Panel (a) shows the flows x12
and x34 , panel (b) shows the flows x13 and x24 , panel (c) shows the flow x23 ,
and panel (d) shows the shortest path travel time between nodes 1 and 4, at
the corresponding equilibrium solution. You can check that when d14 = 6, the
original equilibrium solution is shown in this figure.
Instead of the OD matrix, we also could have changed the link performance
functions in the network. Now assume that d14 is fixed at its original value of 6,
but that the link performance function on link (2,3) can vary. Let t23 (x23 ) = y +
x23 , where y is the free-flow time, resulting in the network shown in Figure 7.3.
In the base solution y = 10, but conceivably the “free-flow time” could be
changed. If the speed limit were increased, y would be lower; if traffic calming
were implemented, y would be higher. In an extreme case, if the link were closed
entirely you could imagine y takes an extremely large value, large enough that
no traveler would use the path. One can also effectively decrease y by providing
incentives for traveling on this link (a direct monetary payment, a discount at
an affiliated retailer, etc.), and conceivably this incentive could be so large that
y is negative. The resulting sensitivity analysis is provided in Figure 7.4.
Examining the plots in Figures 7.2 and 7.4, we see that the relationships
between the equilibrium solution (link flows and travel times) and the demand
or free-flow time are all piecewise linear. Each “piece” of these piecewise linear
functions corresponds to a particular subset of the paths being used — for in-
stance, in Figure 7.2, when the demand is lowest, only the middle path is used.
When the demand is highest, only the two outer paths are used. When the
demand is at a moderate level, all three paths are used. Within each of these
regions, the relationship between the demand and the equilibrium solution is
linear. These pieces meet at so-called degenerate solutions, where the equilib-
rium solution does not use all of the minimum travel-time paths. (For instance,
when d14 = 40/11 the equilibrium solution requires all drivers to be assigned to
the middle path, even though all three have equal travel times.)
In general networks involving nonlinear link performance functions, these
relationships cannot be expected to stay linear. However, they are still defined
by piecewise functions, with each piece corresponding to a certain set of paths
7.1. SENSITIVITY ANALYSIS PRELIMINARIES 227
Figures 7.2 and 7.4 panels: (a) Flow on (1,2) and (3,4); (b) Flow on (1,3) and (2,4).
being used, and with the pieces meeting at degenerate solutions. The goal of the
sensitivity analyses in this chapter is to identify derivatives of the equilibrium
solution (link flows and travel times) at a given point. For these derivatives
to be well-defined, we therefore assume that the point at which our sensitivity
analysis occurs is not degenerate. That is, all minimum-travel time paths have
positive flow. This assumption is not too restrictive, because there are only a
finite number of degenerate points; for instance, if we pick the demand value at
random, the probability of ending up at a degenerate point is zero.
This sensitivity analysis is still local, because the information provided by
a derivative grows smaller as we move farther away from the point where the
derivative is taken. For a piecewise function, the derivative provides no in-
formation whatsoever for pieces other than the one where the derivative was
taken.
In this chapter, we show how this kind of sensitivity analysis can be used
in two different ways. In the network design problem, this type of sensitivity
analysis can be used to determine where network investments are most valu-
able. In Figure 7.4, the fact that the equilibrium travel time increases when
y decreases (around the base solution y = 10) highlights the Braess paradox:
investing money to improve this link will actually increase travel times through-
out the network. If we were to conduct a similar analysis for other links in
the network, we would see that the equilibrium travel time would decrease with
improvements to the link. In the OD matrix estimation problem, we can use
this sensitivity analysis to help calibrate an OD matrix to given conditions.
Furthermore, one can show that the equilibrium solution is differentiable, and the
derivatives of the equilibrium link flows or travel times with respect to values
in the OD matrix or link performance function parameters can be interpreted
as the sensitivities of the equilibrium solution.
There are several ways to calculate the values of these derivatives: his-
torically, the first researchers used matrix-based formulas, and subsequent re-
searchers generalized these formulas using results from the theory of variational
inequalities. We adopt a different approach, using the bush-based solution rep-
resentation, because it leads to an easy solution method and is fairly straightfor-
ward. This approach is based on the fact that the equilibrium solution (travel
times tij and bush flows xrij ) must satisfy the following equations for each origin
r:
x̂^r_ij = 0   ∀(i,j) ∉ B_r   (7.5)
where t′_ij is the derivative of the link performance function, evaluated at the current
equilibrium solution x̂ (and thus treated as a constant in these equations).
Equations (7.6) enforce the fact that the equilibrium bushes must remain the
same. That is, the shortest path labels Li and travel times tij must change in
such a way that every link on the bush is part of a minimum travel time path
to its head node. Equations (7.8) and (7.9) enforce flow conservation. For all
bushes except for r̂, the total flow from the origin to each destination is the
same, so flow is allowed to redistribute among the bush links, but the flows
starting or ending at a node cannot change. For the bush corresponding to r̂, a
unit increase in demand from r̂ to ŝ must be reflected by an additional vehicle
leaving r̂ and an additional vehicle arriving at ŝ.
All together, the system of equations (7.6)–(7.9) involves variables Λ^r_i for
each origin r and node i, and ξ^r_ij for each origin r and link (i, j). Furthermore,
for each origin, it contains an equation for each link and each node. Therefore,
this linear system of equations can be solved to obtain the sensitivity values.1
However, there is an easier way to solve for Λ^r_i and ξ^r_ij. Using the techniques
in Section 3.3, you can show that the equations (7.6)–(7.9) are exactly the
optimality conditions to the following minimization problem:
min_{ξ^r, Λ^r}  (1/2) Σ_{(i,j)∈A} t′_ij ( Σ_{r∈Z} ξ^r_ij )²
    + Σ_{r∈Z} Σ_{i∈N} Λ^r_i ( Σ_{(h,i)∈Γ⁻¹(i)} ξ^r_hi − Σ_{(i,j)∈Γ(i)} ξ^r_ij − Δ^r_i )   (7.11)
where Δ^r_i represents the right-hand side of equation (7.8) or (7.9); that is,
Δ^r̂_ŝ = 1, Δ^r̂_r̂ = −1, and Δ^r_i = 0 otherwise.
This optimization problem can be put in a more convenient form by interpreting
Λ^r_i as the Lagrange multiplier for the flow conservation equation
corresponding to node i in bush r, and the second term in (7.11) as the
Lagrangianization of this equation. Furthermore, defining ξ_ij = Σ_{r∈Z} ξ^r_ij, the
optimization problem can be recast in the following equivalent form:
min_{ξ^r}  Σ_{(i,j)∈A} ∫_0^{ξ_ij} t′_ij ξ dξ   (7.14)

s.t.  Σ_{(h,i)∈Γ⁻¹(i)} ξ^r_hi − Σ_{(i,j)∈Γ(i)} ξ^r_ij = Δ^r_i   ∀i ∈ N, r ∈ Z   (7.15)

      ξ^r_ij = 0   ∀(i,j) ∉ B_r   (7.16)
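As an illustrative check (not from the text), this sensitivity problem can be solved by hand on the Braess network, assuming link derivatives t′ of 10 on (1,2) and (3,4) and 1 on (1,3), (2,4), (2,3), with one extra unit of demand from node 1 to node 4. Equalizing Σ t′_a ξ_a over the three paths, with path flows h1 on [1,2,4], h2 on [1,3,4], and h3 = 1 − h1 − h2 on [1,2,3,4], reduces to the linear system 12h1 + h2 = 11, h1 + 12h2 = 11.

```python
from fractions import Fraction as F

def braess_sensitivity():
    """Solve the 2x2 system by Cramer's rule in exact rational arithmetic."""
    a, b, e = 12, 1, 11
    det = a * a - b * b
    h1 = F(e * a - b * e, det)   # outer path [1,2,4]
    h2 = F(a * e - e * b, det)   # outer path [1,3,4]
    h3 = 1 - h1 - h2             # middle path [1,2,3,4] (may be negative)
    xi = {(1, 2): h1 + h3, (2, 4): h1, (1, 3): h2,
          (3, 4): h2 + h3, (2, 3): h3}
    tp = {(1, 2): 10, (2, 4): 1, (1, 3): 1, (3, 4): 10, (2, 3): 1}
    # Marginal "travel time" of path [1,2,4]: the derivative of the
    # equilibrium travel time with respect to the demand.
    cost = tp[(1, 2)] * xi[(1, 2)] + tp[(2, 4)] * xi[(2, 4)]
    return xi, cost
```

Under these assumptions the middle link carries ξ = −9/13, and the derivative of the equilibrium travel time works out to 31/13, matching the value discussed in this section.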
times of 31/13. This is exactly the slope of the piece of the equilibrium travel
time (Figure 7.2d) around d14 = 6, that is, the equilibrium travel time in the
sensitivity problem gives the derivative of the equilibrium travel time in the
original network.
2 This is what we wrote as t′_ij in the previous section, when the link performance function
only depended on x_ij.
where as before, t0ij,x and t0ij,y are evaluated at the current, equilibrium solution
x̂. Equations (7.17) enforce the fact that the equilibrium bushes must remain
the same, taking into account both the change in travel time on (i, j) due to
the change in its link performance function as well as changes in all links’ travel
times from travelers shifting paths. Equations (7.19) and (7.20) enforce flow
conservation. These equations are simpler than for the case of a change to the
OD matrix, because the total number of vehicles on the network remains the
same, and these vehicles can only shift amongst the paths in the bush. There is
no change in the flow originating or terminating at any node, and the ξ variables
must form a circulation.
As with a change in an OD matrix entry, the system of equations (7.17)–
(7.20) is a linear system involving, for each origin, variables for each node and
link. Repeating the same steps as before, this system of equations can be seen
as the optimality conditions for the following optimization problem:
min_{ξ^r}  Σ_{(i,j)∈A} ∫_0^{ξ_ij} ( t′_{ij,x} ξ + t′_{ij,y} ) dξ   (7.22)

s.t.  Σ_{(h,i)∈Γ⁻¹(i)} ξ^r_hi − Σ_{(i,j)∈Γ(i)} ξ^r_ij = 0   ∀i ∈ N, r ∈ Z   (7.23)

      ξ^r_ij = 0   ∀(i,j) ∉ B_r   (7.24)
The original link performance functions have been replaced by affine link
performance functions with slope equal to the derivative of the original link
performance function at the original equilibrium solution, and intercept
equal to the derivative of the link performance function with respect to
the parameter y.
The equilibrium bushes for each origin are fixed at the bushes for the
original equilibrium solution.
changed.
Specifically, assume that the link performance functions for each link (i, j)
now depend on the amount of money yij invested in that link (perhaps increasing
its capacity through widening, or decreasing its free-flow time) as well as on the
flow xij . One example of such a link performance function is
t_ij(x_ij, y_ij) = t^0_ij ( 1 + α ( x_ij / (u_ij + K_ij y_ij) )^β )   (7.25)
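A link performance function of the form (7.25) is a one-line computation; the sketch below transcribes it, with default α = 0.15 and β = 4 as typical BPR parameter values (an assumption, since (7.25) leaves them general).

```python
# Link time per (7.25): investment y raises the effective capacity u + K*y,
# lowering congestion delay for a given flow x.

def travel_time(x, y, t0, u, K, alpha=0.15, beta=4):
    """Time on a link carrying flow x with monetary investment y."""
    return t0 * (1 + alpha * (x / (u + K * y)) ** beta)
```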
Most of this problem is familiar: the objective (7.26) is to minimize the sum of
total system travel time (converted to units of money) and construction cost, and
constraint (7.28) requires that money can only be spent on links (not “recovered”
7.3. NETWORK DESIGN PROBLEM 237
from them with a negative yij value). Also note that since y = 0 is a feasible
solution (corresponding to the “do-nothing” alternative), in the optimal solution
to this problem the cost savings (in the form of reduced T ST T ) must at least be
equal to the construction costs, guaranteeing that the optimal investment policy
has greater benefit than cost. The key equation here is (7.27), which requires
that the link flows x satisfy the principle of user equilibrium by minimizing
the Beckmann function. In other words, one of the constraints of the network
design problem is itself an optimization problem. This is why the network design
problem is called a bilevel program. This type of problem is also known as a
mathematical program with equilibrium constraints. This class of problems is
extremely difficult to solve, because the feasible region is typically nonconvex.
To see why, consider two feasible solutions (x1 , y1 ) and (x2 , y2 ) to the net-
work design problem. The link flows x1 are the equilibrium link flows under
investment policy y1 , and link flows x2 are the equilibrium link flows under in-
vestment policy y2 . If the feasible region were a convex set, then any weighted
average of these two solutions would itself be feasible. Investment policy
(1/2)y1 + (1/2)y2 still satisfies all the constraints on y (all link investments are
nonnegative). However, the equilibrium link flows under this policy cannot be
expected to be the average of x1 and x2 , because the influence of yij on tij
can be nonlinear and the sets of paths which are used in x1 and x2 can be
completely different. In other words, the equilibrium link flows after averaging
two investment policies need not be the average of the equilibrium link flows
under those two policies separately.
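This non-averaging behavior is easy to see numerically. The sketch below uses a hypothetical pair of parallel links with performance functions t1 = 10 + x1·exp(−y1) and t2 = 20 + x2·exp(−y2) and a fixed demand d, so the equilibrium has a closed form; the link functions and numbers are assumptions for illustration only.

```python
import math

def equilibrium_two_links(d, y1, y2):
    """Closed-form user equilibrium on two parallel links with times
    t1 = 10 + x1*exp(-y1) and t2 = 20 + x2*exp(-y2) (hypothetical)."""
    a, b = math.exp(-y1), math.exp(-y2)
    x1 = (10 + d * b) / (a + b)   # equate t1 = t2 subject to x1 + x2 = d
    x1 = min(max(x1, 0.0), d)     # clamp when only one link is used
    return x1, d - x1

xA = equilibrium_two_links(30, 2, 0)   # flows under policy yA = (2, 0)
xB = equilibrium_two_links(30, 0, 2)   # flows under policy yB = (0, 2)
xM = equilibrium_two_links(30, 1, 1)   # flows under the averaged policy
```

Here the average of the flows under yA and yB puts about 21.2 units on link 1, while the averaged policy (1, 1) sends about 28.6 units there: averaging the policies does not average the equilibria.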
Unfortunately, solving optimization problems with nonconvex feasible re-
gions is a very difficult task. Therefore, solution methods for the network design
problem are almost entirely heuristic in nature. These heuristics can take many
forms; one popular approach is to adapt a metaheuristic method, such as those
discussed in Section C.6.
Another approach is to develop a more tailored heuristic based on specific
insights about the network design problem. This approach, being more educa-
tional, is adopted here. Specifically, we can use the sensitivity analysis from the
previous sections to identify derivatives of the objective function f with respect
to each link investment yij , and use this to move in a direction which reduces
total cost.
Specifically, notice that constraint (7.27) actually makes x a function of y,
since the solution to the user equilibrium problem is unique in link flows. That
is, the investment policy y determines the equilibrium link flows x exactly. So,
the objective function can be made a function of y alone, written f (x(y), y).
The derivative of this function with respect to an improvement on any link is
then
∂f/∂yij = Θ Σ(k,ℓ)∈A (∂f/∂xkℓ)(∂xkℓ/∂yij) + 1    (7.29)
238 CHAPTER 7. SENSITIVITY ANALYSIS AND APPLICATIONS
or, substituting the derivative of (7.26) with respect to each link flow,
∂f/∂yij = Θ ( Σ(k,ℓ)∈A [ tkℓ(xkℓ, ykℓ) + xkℓ (∂tkℓ/∂xkℓ)(xkℓ, ykℓ) ] ∂xkℓ/∂yij + xij ∂tij/∂yij ) + 1.    (7.30)
In turn, the partial derivatives ∂xkℓ/∂yij can be identified using the technique of
Section 7.2.2 as the marginal changes in link flows throughout the network
when the link performance function of (i, j) is perturbed.
The vector of all the derivatives (7.30) forms the gradient of f with respect
to y. This gradient is the direction of steepest ascent, that is, the direction
in which f is increasing fastest. Since we are solving a minimization problem,
we should move in the opposite direction. Taking such a step, and ensuring
feasibility, gives the updating equation
y ← [y − µ∇y f]⁺    (7.31)
where µ is a step size to be determined, and the [·]+ operation is applied to each
component of the vector. This suggests the following algorithm:
1. Initialize y ← 0.
2. Calculate the link flows x(y) by solving the traffic assignment problem
with link performance functions t(x, y).
3. For each link (i, j), determine ∂f/∂yij by solving the sensitivity problem described in Section 7.2.2 and using (7.30).

4. Update y using (7.31) for a suitable step size µ.

5. Test for convergence, and return to step 2 if not converged.
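The update logic can be sketched as follows, with the simple halving rule for µ that the next paragraphs discuss. The routines objective, equilibrium, and gradient are supplied by the caller and stand in for the traffic assignment and sensitivity computations; nothing here is specific to a particular network.

```python
import numpy as np

def network_design(objective, equilibrium, gradient, n_links,
                   max_iter=50, tol=1e-6):
    """Projected-gradient heuristic for the network design problem.

    objective(x, y), equilibrium(y) -> x, and gradient(x, y) -> grad f
    are caller-supplied; only the update rule (7.31) is shown here."""
    y = np.zeros(n_links)                       # step 1: start from y = 0
    x = equilibrium(y)                          # step 2: solve for x(y)
    f = objective(x, y)
    for _ in range(max_iter):
        g = gradient(x, y)                      # step 3: sensitivity analysis
        mu, improved = 1.0, False
        while mu > 1e-8:                        # step 4: halve mu until f drops
            y_new = np.maximum(y - mu * g, 0.0) # projection [.]^+ in (7.31)
            x_new = equilibrium(y_new)          # each trial re-solves TAP
            f_new = objective(x_new, y_new)
            if f_new < f - tol:
                y, x, f = y_new, x_new, f_new
                improved = True
                break
            mu /= 2
        if not improved:                        # step 5: no further progress
            break
    return y, x, f
```

On a one-link toy problem where the gradient can be written analytically, the loop settles at the investment level where the derivative (7.30) vanishes.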
Two questions are how µ should be chosen in step 4, and how convergence
should be tested in step 5. The difficulty in step 4 is that the derivatives
provided by a sensitivity analysis are only local, and in particular if µ is large
enough that (7.31) changes the set of used paths, the derivative information
is meaningless. However, if µ is small enough one will see a decrease in the
objective function if at all possible. So, one could start by testing a sequence of
µ values (say, 1, 1/2, 1/4, . . .), evaluating the resulting y values, x values, and f ,
stopping as soon as f decreases from its current value. (Note that this is a fairly
computationally intensive process, since the traffic assignment problem must be
solved for each µ to get the appropriate x value.) Other options include using a
stricter stopping criterion such as the Armijo rule, which would ensure that the
decrease is “sufficiently large”; or using bisection to try to choose the value of
µ which minimizes f in analogy to Frank-Wolfe. All of these methods require
solving multiple traffic assignment problems at each iteration.
Regarding convergence in step 5, one can either compare the progress made
in decreasing f over the last few iterations, or the changes in the investments
[Figure 7.10: Sensitivity problems for the network design example. The underlying Braess-type network has link performance functions 10x12 exp(−y12), 50 + x13 exp(−y13), 10 + x23 exp(−y23), 50 + x24 exp(−y24), and 10x34 exp(−y34), with one panel per improved link (e.g., link (3,4)).]
these, notice that the demand is zero, and the link performance functions are
affine. Within each problem, the link being improved has a slightly different link
performance function, accounting for the effect of the link improvement. The
other links’ performance functions only reflect their change due to shifting flows.
For instance, in the problem in the upper left, link (1,3) is not improved, so its
link performance function is simply ξ13 ∂t13/∂x13 = ξ13 exp(−y13) = ξ13 since
y13 = 0. Since link (1,2) is being improved, in addition to the term ξ12 ∂t12/∂x12 =
10ξ12 exp(−y12) = 10ξ12, we add the constant term ∂t12/∂y12 = −10x12 exp(−y12) = −40
since x12 = 4 and y12 = 0.
The solutions (in terms of ξij ) to the five sensitivity problems are shown in
Table 7.1, as can be verified by substituting these ξ values into the networks
in Figure 7.10. Substituting these ξ values into equation (7.30), along with the
current values of the travel times and link flows, gives the gradient
∇y f = (∂f/∂y12, ∂f/∂y13, ∂f/∂y23, ∂f/∂y24, ∂f/∂y34) = (−0.85, 0.49, 1.42, 0.49, −0.85)    (7.32)
Each component of the gradient shows how the objective of the network de-
sign problem will change if a unit of money is spent improving a particular link.
In the derivative formula (7.30), the term in parentheses represents the marginal
change in total system travel time, and the addition of unity at the end of the
formula represents the increase in total expenditures. If the derivative (7.30)
is negative, then the reduction in total system travel time from a marginal im-
provement in the link will outweigh the investment cost. If it lies between zero
and one for a link, then a marginal investment in the link will reduce total
system travel time, but the cost of the improvement will outweigh the value of
the travel time savings. If it is greater than one, then total system travel time
would actually increase if the link is improved, as in the Braess paradox. So, in
this example, the gradient (7.32) shows that improvements on links (1,2) and
(3,4) will be worthwhile; improvements on links (1,3) and (2,4) would reduce
TSTT but not by enough to outweigh construction cost; and an improvement
on link (2,3) would actually be counterproductive and worsen congestion.
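The three cases just described can be written down directly. The labels below are informal shorthand, and the input is a gradient vector such as (7.32):

```python
def classify_investments(grad):
    """Interpret each component of the network design gradient (7.30):
    negative -> investment pays for itself; in (0, 1) -> saves travel time
    but not enough to cover cost; 1 or more -> increases TSTT (Braess-like)."""
    labels = []
    for g in grad:
        if g < 0:
            labels.append("worthwhile")
        elif g < 1:
            labels.append("saves time, not cost-effective")
        else:
            labels.append("counterproductive")
    return labels
```

Applied to (7.32), links (1,2) and (3,4) come out worthwhile and link (2,3) counterproductive, matching the discussion above.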
Notice that solving the network design problem requires solving a very large
number of traffic assignment subproblems: once for each iteration to determine
x; modified sensitivity problems for each link to calculate derivatives; and again
multiple times per iteration to identify µ. Solving practical problems can easily
require solution of thousands or even millions of traffic assignment problems. In
bilevel programs such as network design, having an efficient method for solving
traffic assignment problems is critical. Path-based and bush-based algorithms
can be efficiently “warm-started,” making them good choices for this applica-
tion.
[Figure: a linear freeway with nodes 1, 2, and 3 in sequence; observed volumes are 1510 vehicles on link (1,2) and 1490 on link (2,3).]
Figure 7.11: Observed link volumes from traffic sensors along a freeway.
the counts. Even worse, due to unavoidable errors in traffic count records (Fig-
ure 7.11), sometimes this trivial matrix will match counts much better than a
more “realistic” matrix! In this figure, it is likely that the total flow on the
freeway is approximately 1500 vehicles from node 1 to node 3; but this does
not match the counts as well as 1510 vehicles from 1 to 2, and 1490 from 2 to
3. Simply matching counts does not provide the behavioral insight needed to
identify what a “realistic” matrix is.
This section provides a method to reconcile both approaches. The idea is
that an initial OD matrix d∗ is already available from a travel demand model.
While this matrix is hopefully close to the true value, it also contains sampling
and model estimation errors and can never be fully accurate. However, there
are sensors on a subset of links Ā ⊂ A, and there is a vector x∗ of traffic volume
counts on these links. The intent is to use these traffic counts to try to improve
the initial OD matrix d∗ . The following optimization problem expresses this:
min(d,x) f(d, x) = Θ Σ(r,s)∈Z² (drs − d*rs)² + (1 − Θ) Σ(i,j)∈Ā (xij − x*ij)²    (7.33)

s.t. x ∈ arg min(x∈X(d)) Σ(i,j)∈A ∫₀^xij tij(x) dx    (7.34)
where Θ is a parameter ranging from zero to one and X(d) is the set of feasible
link flows when the OD matrix is d.
The objective function (7.33) is of the least-squares type, and attempts to
minimize both the deviation between the final OD matrix d and the initial esti-
mate d∗ , and the deviation between the equilibrium link flows x associated with
the OD matrix d, and the actual observations x∗ on the links with sensors. The
hope is to match traffic counts reasonably well, while not wandering into com-
pletely unrealistic OD matrices. The factor Θ is used to weight the importance
of matching the initial OD matrix estimate, and the link flows. It can reflect the
relative degree of confidence in d∗ and x∗ ; if the travel demand was obtained
from high-quality data and a large sample size, whereas the traffic count data
is old and error-prone, a Θ value close to one is appropriate. Conversely, if
the travel demand model is less trustworthy but the traffic count data is highly
reliable, a lower Θ value is appropriate. In practice, a variety of Θ values can be
chosen, and the resulting tradeoffs between matching the initial estimate and
link flows can be seen.
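The objective (7.33) is simple to evaluate given the two deviation terms. In the sketch below, d and x are dictionaries keyed by OD pair and sensed link respectively; this data layout is an assumption for illustration, not something prescribed by the text.

```python
def od_objective(d, d_star, x, x_star, theta):
    """Weighted least-squares objective (7.33).  x_star contains counts
    only for the sensed links in A-bar."""
    od_fit = sum((d[rs] - d_star[rs]) ** 2 for rs in d_star)
    count_fit = sum((x[ij] - x_star[ij]) ** 2 for ij in x_star)
    return theta * od_fit + (1 - theta) * count_fit
```

With Θ near one the OD-matrix deviation dominates; with Θ near zero, matching the counts dominates.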
As indicated by the constraint (7.34), this optimization problem is also a
bilevel program, because the mapping from an OD matrix d to the resulting
link flows x involves the equilibrium process. The fact that the optimization
problem is bilevel also means that we cannot expect to find the global optimum
OD matrix, and that heuristics should be applied. A sensitivity-based heuristic,
like the one used for the network design problem, would determine how the link
flows would shift if the OD matrix is perturbed, and use this information to find
a “descent direction” which would reduce the value of the objective.
Following the same technique as in Section 7.3, the constraint (7.34) defines
x uniquely as a function of d due to the uniqueness of the link flow solution to
the traffic assignment problem. (Also note that the dependence on d appears
through the feasible region, requiring that the minimization take place over
X(d).) So, we can rewrite the objective function as a function of d alone,
by defining x(d) as the equilibrium link flows in terms of the OD matrix and
transforming the objective to f (d, x(d)). Then, the partial derivative of the
objective with respect to any entry drs in the OD matrix is given by
∂f/∂drs = 2Θ(drs − d*rs) + 2(1 − Θ) Σ(i,j)∈Ā (xij − x*ij) ∂xij/∂drs    (7.36)

where the partial derivatives ∂xij/∂drs are found from the sensitivity formulas in
Section 7.2.1.
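Given the sensitivities ∂xij/∂drs, the gradient (7.36) is a direct translation. Here dx_dd[rs][ij] is a nested mapping of those sensitivities; the dictionary-based inputs are an assumed convenience, as in the objective sketch earlier.

```python
def od_gradient(d, d_star, x, x_star, dx_dd, theta):
    """Gradient (7.36) of the OD estimation objective with respect to
    each OD pair's demand; the inner sum runs over sensed links only."""
    grad = {}
    for rs in d:
        s = sum((x[ij] - x_star[ij]) * dx_dd[rs][ij] for ij in x_star)
        grad[rs] = 2 * theta * (d[rs] - d_star[rs]) + 2 * (1 - theta) * s
    return grad
```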
The vector of all the derivatives (7.36) forms the gradient of f with respect
to d. So, taking a step in the opposite direction and ensuring that the values
in the OD matrix remain nonnegative gives the following update rule:

d ← [d − µ∇d f]⁺    (7.37)
where µ is a step size to be determined, and the [·]+ operation is applied to each
component of the vector. This leads to the following algorithm for OD matrix
estimation:
1. Initialize d ← d∗ .
2. Calculate the link flows x(d) by solving the traffic assignment problem
with OD matrix d.
3. For each OD pair (r, s), determine ∂f/∂drs by solving the sensitivity problem in Section 7.2.1 and using (7.36).
4. Update d using (7.37) for a suitable step size µ.
5. Test for convergence, and return to step 2 if not converged.
The comments about the step size µ and convergence criteria from the network
design problem (Section 7.3) apply equally as well here: µ can be determined
using an iterative line search procedure, choosing smaller values until the ob-
jective function decreases, and the algorithm can be terminated when it fails to
make additional substantial progress in reducing the objective.
This procedure is demonstrated using the network in Figure 7.12. In this
network, traffic counts are available on three of the links in the network, and
7.4. OD MATRIX ESTIMATION 245
[Figure 7.12: Example network for OD matrix estimation. (Observed link volumes and initial OD matrix shown.) Nodes 1 and 2 are origins, nodes 3 and 4 destinations, with intermediate nodes 5 and 6; the observed volumes are 4652, 2150, and 1980 on the three sensed links, and the initial OD matrix is d*13 = 5,000, d*24 = 10,000.]
[Figure: the sensitivity problems for OD pairs (1,3) and (2,4) on the network of Figure 7.12.]
Taking a trial step in this direction with µ = 1, the updating rule (7.37) gives
the candidate OD matrix d13 = 4869, d24 = 9995. The resulting equilibrium
link flows include x13 = 4637, x25 = 1941, and x56 = 2150, so the “fit” of
the equilibrium link flows and traffic counts has improved from 11293 to 2293.
The “fit” of the OD matrix has worsened from 0 to 17179, but with the weight
Θ = 0.1, the overall objective still decreases from 10164 to 3782. Therefore, we
accept the step size µ = 1, and return to step 3 to continue updating the OD
matrix.
7.6 Exercises
1. [63] Given a nondegenerate equilibrium solution in a network with a single
origin and continuous link performance functions, show that the equilib-
rium bushes remain unchanged in a small neighborhood of the current OD
matrix.
5. [26] Verify that the optimality conditions for (7.11)–(7.13) include the
conditions (7.6)–(7.10).
[Figure 7.14: Modified Braess network with link performance functions 15x12 exp(−y12), 50 + x13 exp(−y13), 10 + x23 exp(−y23), 60 + x24 exp(−y24), and 10x34 exp(−y34).]
6. [33] In the modified Braess network of Figure 7.14, find the sensitivity of
each link’s flow to the demand d14 .
7. [35] In the modified Braess network of Figure 7.14, find the sensitivity of
each link’s flow to the link performance function parameter y23. What
value of this parameter minimizes the equilibrium travel time? Suggest
what sort of real-world action would correspond to adjusting this param-
eter to this optimal value.
8. [44] In the network design problem for Figure 7.15, give the gradient of the objective function at the initial solution y = 0. Assume that Θ = 1/20.
[Figure 7.15: Two parallel links connecting nodes 1 and 2, with link performance functions x1/(1 + y1) and x2/(1 + y2) and a demand of 10.]
[Figure 7.17: Network and observed link flows (in boxes) for Exercise 13. The link performance functions are 15x12, 50 + x13, 10 + x23, 60 + x24, and 10x34.]
11. [79] Design a heuristic for the network design problem, based on simulated annealing as discussed in Section C.6.1. Compare the performance of this heuristic with the algorithm given in the text for several networks.
12. [79] Design a heuristic for the network design problem, based on genetic
algorithms as discussed in Section C.6.2. Compare the performance of this
heuristic with the algorithm given in the text for several networks.
13. [47] In the OD matrix estimation problem of Figure 7.17, give the gradient of the objective function at the initial solution d*14 = 6, d*24 = 4. What is the value of the objective function if Θ = 1/2?
16. [79] Design a heuristic for the OD matrix estimation problem, based on
simulated annealing as discussed in Section C.6.1. Compare the perfor-
mance of this heuristic with the algorithm given in the text for several
networks.
17. [79] Design a heuristic for the OD matrix estimation problem, based on genetic algorithms as discussed in Section C.6.2. Compare the performance of this heuristic with the algorithm given in the text for several networks.
18. [76] Perform the following “validation” exercise: create a small network
with a given OD matrix, and find the equilibrium solution. Then, given
the equilibrium link flows, try to compute your original OD matrix using
the algorithm given in the text. Do you get your original OD matrix back?
Chapter 8

Extensions of Static Assignment
The basic traffic assignment problem (TAP) was defined in Chapter 5 as follows:
we are given a network G = (N, A), link performance functions tij (xij ), and the
demand values drs between each origin and destination. The objective is to find
a feasible vector of path flows (or link flows) which satisfy the principle of user
equilibrium, that is, that every path with positive flow has the least travel time
among all paths connecting that origin and destination. We formulated this as
a VI (find ĥ ∈ H such that c(ĥ) · (ĥ − h) ≤ 0 for all h ∈ H) and as the solution
to the following convex optimization problem:
min(x,h) Σ(i,j)∈A ∫₀^xij tij(x) dx    (8.1)

s.t. xij = Σπ∈Π hπ δπij  ∀(i,j) ∈ A    (8.2)

Σπ∈Πrs hπ = drs  ∀(r,s) ∈ Z²    (8.3)

hπ ≥ 0  ∀π ∈ Π    (8.4)
This formulation remains the most commonly used version of traffic assign-
ment in practice today. However, it is not difficult to see how some of the
assumptions may not be reasonable. This chapter shows extensions of the basic
TAP which relax these assumptions. This is the typical course of research: the
first models developed make a number of simplifying assumptions, in order to
capture the basic underlying behavior. Then, once the basic behavior is under-
stood, researchers develop progressively more sophisticated and realistic models
which relax these assumptions.
This chapter details three such extensions. Section 8.1 relaxes the assump-
tion that the OD matrix is known and fixed, leading to an elastic demand
formulation. Section 8.2 relaxes the assumption that the travel time on a link
depends only on the flow on that link (and not on any other link flows, even
at intersections). Section 8.3 relaxes the assumption that travelers have accu-
rate knowledge and perception of all travel times in a network, leading to the
important class of stochastic user equilibrium models.
For simplicity, all of these variations are treated independently of each other.
That is, the OD matrix is assumed known and fixed in all sections except Sec-
tion 8.1, and so forth. This is done primarily to keep the focus on the relevant
concept of each section, but also to guard the reader against the temptation
to assume that a model which relaxes all of these assumptions simultaneously
is necessarily better than one which does not. While realism is an important
characteristic of a model, it is not the only relevant factor when choosing a math-
ematical model to describe an engineering problem. Other important concerns
are computation speed, the existence of enough high-quality data to calibrate
and validate the model, transparency, making sure the sensitivity of the model
is appropriate to the level of error in input data, ease of explanation to decision
makers, and so on. All of these factors should be taken into account when choosing a model, and you can actually be worse off by choosing a more “realistic” model when you don’t have adequate data for calibration; the result may even give the impression of “false precision” when in reality your conclusions cannot be justified.
CS = Σ(r,s)∈Z² ( ∫₀^drs D⁻¹rs(y) dy − drs κrs )    (8.5)
(We are using y for the dummy variable of integration instead of d because
∫ D⁻¹(d) dd is notationally awkward.) The interpretation of this formula is as
follows. Each driver has a certain travel time threshold: if the travel time is
greater than this threshold, the trip will not be made, and if the travel time is
less than this threshold, the trip will be made. Different drivers have different
¹An alternative is to have Drs be a function of the average travel time on the used paths between r and s, not the shortest. At equilibrium it doesn’t matter, because all used paths have the same travel time as the shortest, but in the process of finding an equilibrium the alternative definition of the demand function can be helpful. For our purposes, though, the definition in terms of the shortest path time is more useful because it facilitates a link-based formulation.
²In the text, we typically indicate OD pairs with a superscript, as in drs, and link variables with a subscript, as in xij. In elastic demand, we will often need to refer to inverse functions, and writing (Drs)⁻¹ is clumsy. For this reason, OD pairs may also be denoted with a subscript, as in D⁻¹rs. This is purely for notational convenience and carries no significance.
[Figure 8.1: The inverse demand curve D⁻¹(d), with demand on the horizontal axis and travel time on the vertical axis. The area between the curve and the equilibrium travel time κ is the consumer surplus; the rectangle of height κ beneath it is TSTT.]
thresholds, and the demand function represents the aggregation of these thresh-
olds: when the travel time is κ, D(κ) represents the number of travelers whose
threshold is κ or higher. If my threshold is, say, 15 minutes and the travel time
is 10 minutes, the difference (5 minutes) can be thought of as the “benefit” of
travel to me: the trip is worth 15 minutes of my time, but I was able to travel
for only 10. Adding this up for all travelers provides the total benefits of travel,
which is what CS represents. Figure 8.1 shows the connection between this
concept and equation (8.5): assume that drivers are numbered in decreasing
order of their threshold values. Then D−1 (1) gives the threshold value for the
first driver, D−1 (2) gives the threshold value for the second driver, and so forth.
At equilibrium all drivers experience a travel time of κ, so the benefit to the
first driver is D−1 (1) − κ, the benefit to the second driver is D−1 (2) − κ, and
so forth. Adding over all drivers gives equation (8.5).
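For a single OD pair, the term of (8.5) is straightforward to evaluate numerically; the linear demand function used below is a hypothetical choice (the same one reappears in the example of Section 8.1.6).

```python
def consumer_surplus(d, kappa, inverse_demand, n=10000):
    """One OD pair's term of (8.5) via the midpoint rule: the integral of
    the inverse demand curve from 0 to d, minus d * kappa."""
    w = d / n
    integral = sum(inverse_demand((i + 0.5) * w) * w for i in range(n))
    return integral - d * kappa
```

With D(κ) = 50 − κ (so D⁻¹(d) = 50 − d), a demand of 20, and an equilibrium time of κ = 30, the surplus is the triangle area 20²/2 = 200.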
demand equilibrium on the original network as follows: the flows on the links
common to both networks represent flows on the actual traffic network; the flow
on the new direct connection links represent drivers who choose not to travel
due to excess congestion. Think of d̄rs as the total number of people who might possibly travel from r to s; those who actually complete their trips travel on the original links, and those who choose not to travel take the direct connection link. At equilibrium, all used paths connecting r to s (including the direct connection link) have the same travel time κrs; therefore, the flow on the direct connection link xrs must be such that D⁻¹rs(d̄rs − xrs) = κrs, or equivalently xrs = d̄rs − Drs(κrs), which is exactly the number of drivers who choose not to travel when the equilibrium time is κrs. That is, the demand drs = d̄rs − xrs.
The downside of this approach is that it requires creating a large number
of new links. In a typical transportation network, the number of links is pro-
portional to the number of nodes and of the same order of magnitude (so a
network of 1,000 nodes may have 3,000–4,000 links). However, the number of OD
pairs is roughly proportional to the square of the number of nodes, since every
node could potentially be both an origin and a destination. So, a network with
1,000 nodes could have roughly 1,000,000 OD pairs. Implementing the Gartner
transformation requires creating a new link for every one of these OD pairs,
which would result in 99.9% of the network links being the artificial arcs for the
transformation!
the demand function; further, if the demand is greater than zero then it must
equal the demand function. Thinking laterally, you might notice this is similar
to the principle of user equilibrium: the travel time on any path must always be
at least as large as the shortest path travel time; further, if the demand is pos-
itive then the path travel time must equal the shortest path travel time. When
deriving the Beckmann function, we showed that the latter statements could be
expressed by cπ ≥ κrs and hπ (cπ − κrs ) = 0 (together with the nonnegativity
condition hπ ≥ 0). The same “trick” applies for the relationship between de-
mand and the demand function: drs ≥ Drs (κrs ), drs (drs − Drs (κrs )) = 0, and
the nonnegativity condition drs ≥ 0.
It will turn out to be easier to express the latter conditions in terms of the
inverse demand functions D−1 , rather than the “forward” functions D, because
the convex objective function we will derive will be based on the Beckmann
function. The Beckmann function involves link performance functions (with
units of time). Since the inverse demand functions also are measured in units of
time, it will be easier to combine them with the link performance functions than
the regular demand functions (which have units of vehicles). Expressed in terms
of the inverse demand functions, the conditions above become κrs ≥ D⁻¹rs(drs),
drs(D⁻¹rs(drs) − κrs) = 0, and drs ≥ 0.
So, this is the question before us. What optimization problem has the fol-
lowing as its optimality conditions?
∂L/∂drs = κrs + ∂F/∂drs,

so if ∂F/∂drs = −D⁻¹rs(drs), we are done (both equations will be true). Integrating,
F(d) = −Σ(r,s)∈Z² ∫₀^drs D⁻¹rs(ω) dω gives us what we need. De-Lagrangianizing
the “no vehicle left behind” constraint, we obtain the optimization problem
associated with the elastic demand problem:
min(x,h,d) Σ(i,j)∈A ∫₀^xij tij(x) dx − Σ(r,s)∈Z² ∫₀^drs D⁻¹rs(ω) dω    (8.16)

s.t. xij = Σπ∈Π hπ δπij  ∀(i,j) ∈ A    (8.17)

Σπ∈Πrs hπ = drs  ∀(r,s) ∈ Z²    (8.18)

hπ ≥ 0  ∀π ∈ Π    (8.19)

drs ≥ 0  ∀(r,s) ∈ Z²    (8.20)
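The objective (8.16) can be evaluated numerically for small instances. The sketch below assumes one OD pair and lists of link performance and inverse demand functions; the test instance (links 10 + x and 20 + x, inverse demand 50 − d) matches the example network used later in Section 8.1.6.

```python
def elastic_objective(x, d, link_times, inverse_demands, n=20000):
    """Objective (8.16): Beckmann term minus integrated inverse demand,
    both integrals evaluated by the midpoint rule."""
    def integral(f, upper):
        w = upper / n
        return sum(f((i + 0.5) * w) * w for i in range(n))
    beckmann = sum(integral(t, xi) for t, xi in zip(link_times, x))
    demand = sum(integral(g, di) for g, di in zip(inverse_demands, d))
    return beckmann - demand
```

For that instance, the equilibrium point x = (50/3, 20/3), d = 70/3 gives a lower objective value than pushing all of the demand onto the faster link, as the formulation predicts.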
in λ. This is simply the variational inequality (8.7) written out in terms of its
components, substituting λx∗ + (1 − λ)x for x̄0 and λd∗ + (1 − λ)d for d̄0 .
Third, the stopping criterion (relative gap or average excess cost) needs to be
augmented with a measure of how well the OD matrix matches the values from
the demand functions. A simple measure is the total misplaced flow, defined
as TMF = Σ(r,s)∈Z² |drs − [Drs(κrs)]⁺|. The total misplaced flow is always
nonnegative, and is zero only if all of the entries in the OD matrix are equal
to the values given by the demand function (or zero if the demand function is
negative). We should keep track of both total misplaced flow and one of the
equilibrium convergence measures (relative gap or average excess cost), and only
terminate the algorithm when both of these are sufficiently small.
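Total misplaced flow is essentially a one-liner; here d, kappa, and demand_fns are dictionaries keyed by OD pair (an assumed layout for illustration).

```python
def total_misplaced_flow(d, kappa, demand_fns):
    """TMF: total absolute gap between the current OD matrix and the
    (nonnegative part of the) demand-function values at the current
    shortest-path travel times."""
    return sum(abs(d[rs] - max(demand_fns[rs](kappa[rs]), 0.0))
               for rs in d)
```

With d = 37.36, κ = 28.42, and D(κ) = 50 − κ, this gives TMF = 15.78, matching Iteration 2 of the example in Section 8.1.6.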
Implementing these changes, the Frank-Wolfe algorithm for elastic demand
is as follows:
1. Choose some initial OD matrix d and initial link flows x corresponding to
that OD matrix.
2. Find the shortest path between each origin and destination, and calculate
convergence measures (total misplaced flow, and either relative gap or
average excess cost). If both are sufficiently small, stop.
3. Improve the solution:
(a) Calculate a target OD matrix d∗ using the demand functions: d∗rs =
[Drs (κrs )]+ for all OD pairs (r, s).
(b) Using the target matrix d∗ , find the link flows if everybody were
traveling on the shortest paths found in step 2, store these in x∗ .
(c) Solve the restricted variational inequality by finding λ such that (8.21)
is true.
(d) Update the OD matrix and link flows: replace x with λx∗ + (1 − λ)x
and replace d with λd∗ + (1 − λ)d.
4. Return to step 2.
This algorithm can also be linked to the convex programming formulation
described above. Given a current solution (x, d), it can be shown that the deriva-
tive of the Beckmann function in the direction towards (x∗ , d∗ ) is nonpositive
(and strictly negative if the current solution does not solve the elastic demand
problem), and that the solution of the restricted variational inequality (8.21)
minimizes the objective function along the line joining (x, d) to (x∗ , d∗ ). The
algebra is a bit tedious and is left as an exercise at the end of the chapter.
8.2. LINK INTERACTIONS 259
8.1.6 Example
Here we solve the small example of Figure 8.2 with the Frank-Wolfe algorithm,
using the average excess cost to measure how close we are to equilibrium. The
demand function is D(κ) = 50 − κ, so its inverse function is D−1 (d) = 50 − d.
we obtain λ = 12/19 ≈ 0.632, so the new demand and flows are d = 37.36
and x = (18.42, 18.95).

Iteration 2. The link travel times are now t = (28.42, 38.95), so κ = 28.42,
D(κ) = 21.58, AEC = 5.19, and TMF = 15.78. Both convergence measures
have decreased from the first iteration, particularly the average excess
cost. Thus, d* = 21.58 and x* = (21.58, 0), and we solve

(10 + 21.58λ + 18.42(1 − λ))(21.58 − 18.42) + (20 + 18.95(1 − λ))(0 − 18.95) − (50 − (21.58λ + 37.36(1 − λ)))(21.58 − 37.36) = 0,

so λ = 0.726 and the new demand and flows are d = 25.9 and x = (20.71, 5.19).

Iteration 3. The link travel times are now t = (30.71, 25.19), so κ = 25.19,
D(κ) = 24.09, AEC = 4.42, and TMF = 1.80. Assuming that these are
small enough to terminate, we are done.
[Figure 8.2: Two parallel links from node 1 to node 2, with link performance functions 10 + x and 20 + x and demand function D(κ) = 50 − κ.]
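The iterations above can be reproduced in outline by the following Frank-Wolfe sketch for this two-link example, with the restricted variational inequality (8.21) solved by bisection. It starts from an empty network rather than the text's starting point, so the intermediate iterates differ, but the limit is the same equilibrium, d = 70/3 ≈ 23.3 with x ≈ (50/3, 20/3).

```python
def fw_elastic(max_iter=2000):
    """Frank-Wolfe sketch for the Figure 8.2 example: parallel links with
    t1 = 10 + x1 and t2 = 20 + x2, demand function D(k) = 50 - k."""
    x, d = [0.0, 0.0], 0.0                      # start from an empty network
    for _ in range(max_iter):
        t1, t2 = 10 + x[0], 20 + x[1]
        kappa = min(t1, t2)
        d_t = max(50 - kappa, 0.0)              # target OD demand
        x_t = [d_t, 0.0] if t1 <= t2 else [0.0, d_t]  # all-or-nothing target

        def slope(lam):
            # derivative of the objective along the line toward (x_t, d_t)
            x1 = lam * x_t[0] + (1 - lam) * x[0]
            x2 = lam * x_t[1] + (1 - lam) * x[1]
            dd = lam * d_t + (1 - lam) * d
            return ((10 + x1) * (x_t[0] - x[0])
                    + (20 + x2) * (x_t[1] - x[1])
                    - (50 - dd) * (d_t - d))

        if slope(1.0) <= 0:
            lam = 1.0                           # a full step still improves
        else:
            lo, hi = 0.0, 1.0                   # bisect for slope(lam) = 0
            for _ in range(60):
                mid = (lo + hi) / 2
                if slope(mid) < 0:
                    lo = mid
                else:
                    hi = mid
            lam = (lo + hi) / 2
        x = [lam * x_t[0] + (1 - lam) * x[0],
             lam * x_t[1] + (1 - lam) * x[1]]
        d = lam * d_t + (1 - lam) * d
    return x, d
```

Because the objective here is quadratic, slope(λ) is linear in λ, so the bisection could be replaced by a direct solve; bisection is kept to mirror the general algorithm.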
lanes, the travel time on the onramp depends on the flow on the main
lanes as well as the flow on the onramp. Similar arguments hold at arte-
rial junctions controlled by two-way or four-way stops, at signalized inter-
sections with permissive phases (e.g., left-turning traffic yielding to gaps
in oncoming flow), or at actuated intersections where the green times are
determined in real-time based on available flow. The basic TAP cannot
model the link interactions characterizing these types of links.
Multiclass flow: Consider a network model where there are two types of ve-
hicles (say, passenger cars and semi trucks, or passenger cars and buses).
Presumably these vehicles may choose routes differently or even have a dif-
ferent roadway network available to them (heavy vehicles are prohibited
from some streets, and buses must drive along a fixed route). This type of
situation can be modeled by creating a “two-layer” network, with the two
layers representing the links available to each class. However, where these
links represent the same physical roadway, the link performance functions
should be connected to each other (truck volume influences passenger car
speed and vice versa) even if they are not identical (truck speed need not
be the same as passenger car speed). Link interaction models therefore
allow us to model multiclass flow as well.
However, there are a few twists to the story, some of which are explored
below. Section 8.2.1 presents a mathematical formulation of the link interactions
model, but shows that a convex programming formulation is not possible except
in some rather unlikely cases. Section 8.2.2 explores the properties of the link
interactions model, in particular addressing the issue of uniqueness — even
though link flow solutions to TAP are unique under relatively mild assumptions,
this is not generally true when there are link interactions. Section 8.2.3 gives
us two solution methods for the link interactions model, the diagonalization
method and simplicial decomposition. Diagonalization is easier to implement,
but simplicial decomposition is generally more powerful.
8.2.1 Formulation
In the basic TAP, the link performance function for link (i, j) was a function
of xij alone, that is, we could write tij (xij ). Now, tij may depend on the flow
on multiple links. For full generality, our notation will allow tij to depend on
the flows on any or all other links in the network: the travel time is given by
the function tij (x1 , x2 , · · · , xij , · · · , xm ) or, more compactly, tij (x) using vector
notation. We assume these are given to us. Everything else is the same as
in vanilla TAP: origin-destination demand is fixed, and we seek an equilibrium
solution where all used paths have equal and minimal travel time.
Now, how to formulate the equilibrium principle? It’s not hard to see that
the variational inequality for TAP works equally well here:
c(h̄) · (h̄ − h) ≤ 0 ∀h ∈ H (8.22)
where the only difference is that the link performance functions used to calculate
path travel times c are now of the form tij(x) rather than tij(xij). But this is
of no consequence. Path flows h̄ solve the variational inequality if and only if
c(h̄) · h̄ ≤ c(h̄) · h (8.23)
for any other feasible path flows h. That is, if the travel times were fixed at their
current values, then it is impossible to reduce the total system travel time by
changing any drivers’ route choices. This is only possible if all used paths have
equal and minimal travel time. Similarly, the link-flow variational inequality
t(x̄) · (x̄ − x) ≤ 0 , (8.24)
where x is any feasible link flow, also represents the equilibrium problem with
link interactions.
The ease of translating the variational inequality formulation for the case of
link interactions may give us hope that a convex programming formulation exists
as well. The feasible region is the same, all we need is to find an appropriate
objective function. Unfortunately, this turns out to be a dead end. For example,
the obvious approach is to amend the Beckmann function in some way, for
instance, changing Σ_{(i,j)∈A} ∫_0^{xij} tij(x) dx to
Σ_{(i,j)∈A} ∫_0^x tij(y) · dy (8.25)
262 CHAPTER 8. EXTENSIONS OF STATIC ASSIGNMENT
where the simple integral in the Beckmann function is replaced with a line
integral between the origin 0 and the current flows x. Unfortunately, this line
integral is in general not well-defined, since its value depends on the path taken
between the origin and x.
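To see this path dependence concretely, here is a small check with a hypothetical pair of performance functions (not one of this chapter's examples) whose Jacobian is asymmetric; integrating t along two different paths from 0 to x = (1, 1) gives different values:

```latex
% Hypothetical example: t_1(x) = x_1 + x_2, t_2(x) = x_2,
% so \partial t_1/\partial x_2 = 1 but \partial t_2/\partial x_1 = 0.
\int_{(0,0)\to(1,0)\to(1,1)} \mathbf{t}\cdot d\mathbf{x}
   = \int_0^1 s\,ds + \int_0^1 s\,ds = 1,
\qquad
\int_{(0,0)\to(0,1)\to(1,1)} \mathbf{t}\cdot d\mathbf{x}
   = \int_0^1 s\,ds + \int_0^1 (s+1)\,ds = 2.
```

Since the two values differ, no single "Beckmann-like" objective can be defined this way when the Jacobian is asymmetric.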
The one exception is if the vector of travel times t(x) is a gradient map (that
is, it is a conservative vector field). In this case, the fundamental theorem of
line integrals implies that the value of this integral is independent of the path
taken between 0 and x. For t(x) to be a gradient map, its Jacobian must be
symmetric. That is, for every pair of links (i, j) and (k, `), we need the following
condition to be true:
∂tij /∂xk` = ∂tk` /∂xij (8.26)
That is, regardless of the current flow vector x, the marginal impact of another
vehicle added to link (i, j) on the travel time of (k, `) must equal the marginal
impact of another vehicle added to link (k, `) on the travel time of (i, j). This
condition is very strong. Comparing with the motivating examples used to jus-
tify studying link interactions, the symmetry condition is not usually satisfied:
the impact of an additional unit of flow on the mainline on the onramp travel
time is much greater than the impact of an additional unit of onramp flow on
mainline travel time. The impact of semi truck flow on passenger car travel
time is probably greater than the impact of passenger car flow on truck travel
time at the margin. Symmetry may perhaps hold in the case of overtaking on a
rural highway, but even then it is far from clear. So, when modeling link
interactions we cannot hope for condition (8.26) to hold; if it does, consider it
a happy accident, for then the function (8.25) is well-defined and provides the
objective of an appropriate convex optimization problem.
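Condition (8.26) is easy to check numerically. The following sketch (with hypothetical, purely illustrative performance functions, not ones from the text) approximates the Jacobian of t by central differences and tests whether it is symmetric:

```python
import numpy as np

def jacobian(t, x, h=1e-6):
    """Numerically approximate J[a][b] = dt_a/dx_b of a vector-valued
    link performance function t at the flow vector x."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    J = np.zeros((m, m))
    for b in range(m):
        e = np.zeros(m)
        e[b] = h
        J[:, b] = (np.asarray(t(x + e)) - np.asarray(t(x - e))) / (2 * h)
    return J

def is_symmetric(J, tol=1e-4):
    return bool(np.allclose(J, J.T, atol=tol))

# Hypothetical merge-style functions: the yielding link (index 1) is
# affected much more by the mainline flow (index 0) than vice versa,
# so the symmetry condition (8.26) fails.
t_merge = lambda x: np.array([10 + x[0] + 0.1 * x[1],
                              15 + 2.0 * x[0] + x[1]])
# A symmetric (hence integrable) counterpart for comparison.
t_sym = lambda x: np.array([x[0] + 0.5 * x[1],
                            0.5 * x[0] + x[1]])
```

Here the asymmetric pair fails the test while the symmetric pair passes, mirroring the discussion above.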
8.2.2 Properties
This section explores the properties of the link interaction equilibrium prob-
lem defined by the variational inequality (8.22). The first question concerns
existence of an equilibrium. Because (8.22) is essentially the same variational
inequality derived for TAP, the arguments used to derive existence of an equi-
librium (based on Brouwer’s theorem) carry over directly and we have the same
result:
Proposition 8.1. If each link performance function tij (x) is continuous in the
vector of link flows x, then at least one solution exists satisfying the principle
of user equilibrium.
Figure 8.3: Change in path travel times as x1 varies, “artificial” two-link net-
work.
Suppose, for instance, that six vehicles travel between an origin and destination
connected by two parallel links, with link performance functions t1 = 6 + x2
and t2 = 6 + x1 , so each link’s travel time depends only on the other link’s
flow. If x1 = x2 = 3, then t1 = t2 = 9; both used paths have equal and minimal
travel time, so this is an equilibrium. If instead x1 = 6 and x2 = 0, then t1 = 6
and t2 = 12. The top link is the only used path, but it has the least travel
time, so this solution also satisfies the principle of user equilibrium. Likewise,
if x1 = 0 and x2 = 6, then t1 = 12 and t2 = 6 and again the only used path has
the least travel time. Therefore, this network has three equilibrium solutions;
compare with Figure 8.3.
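These equilibria can be found mechanically by scanning feasible splits of the demand. The sketch below assumes, as the surrounding numbers suggest, link performance functions t1 = 6 + x2 and t2 = 6 + x1 with a demand of 6 vehicles; interior points require equal travel times, while corner points only require the used path to be no slower than the unused one:

```python
def t1(x1, x2):
    return 6 + x2   # assumed form: each link's time depends only on

def t2(x1, x2):
    return 6 + x1   # the *other* link's flow

def equilibria(demand=6.0, step=0.01, tol=1e-9):
    """Scan x1 in [0, demand] for user-equilibrium solutions."""
    eq = []
    n = int(round(demand / step))
    for k in range(n + 1):
        x1 = k * step
        x2 = demand - x1
        d = t1(x1, x2) - t2(x1, x2)
        if k == 0 and d >= -tol:          # all flow on link 2
            eq.append(x1)
        elif k == n and d <= tol:         # all flow on link 1
            eq.append(x1)
        elif 0 < k < n and abs(d) < tol:  # both used: equal times
            eq.append(x1)
    return eq
```

Running this recovers the three equilibria discussed above: x1 = 0, x1 = 3, and x1 = 6.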
To make this situation less artificial, we can change the link performance
functions to represent a more realistic scenario. Assume that the rate of demand
is 1800 vehicles per hour, and that link 1 has a constant travel time of 300
seconds independent of the flow on either link. Link 2 is shorter with a free-flow
time of 120 seconds, but must yield to link 1 using gap acceptance principles.
In traffic operations, gap acceptance is often modeled with two parameters: the
critical gap tc , and the follow-up gap tf . The critical gap is the smallest headway
required in the main stream for a vehicle to enter. Given that the gap is large
enough for one vehicle to enter the stream, the follow-up gap is the incremental
amount of time needed for each additional vehicle to enter. For this example,
let tc be 4 seconds and tf be 2 seconds. Then, assuming that flows on both
links 1 and 2 can be modeled as Poisson arrivals, the travel time on link 2 can
be derived as
t2 (x1 , x2 ) = 1/u + (Λ/4) [ x2 /u − 1 + √( (x2 /u − 1)² + 8x2 /(uΛ) ) ] (8.27)
where Λ is the length of the analysis period and u is the capacity of link 2,
defined by
u = x1 exp(−x1 tc ) / (1 − exp(−x1 tf )) (8.28)
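These two formulas are straightforward to evaluate in code. The sketch below follows the reconstructed forms of (8.27) and (8.28), working entirely in vehicles per second and seconds; the limiting capacity 1/tf as x1 → 0 is an assumption added to avoid division by zero:

```python
import math

def capacity(x1, tc=4.0, tf=2.0):
    """Capacity (veh/s) of the yielding link as a function of the
    priority-stream flow x1 (veh/s), per equation (8.28)."""
    if x1 == 0.0:
        return 1.0 / tf  # limiting value as x1 -> 0 (assumption)
    return x1 * math.exp(-x1 * tc) / (1.0 - math.exp(-x1 * tf))

def delay(x2, u, Lam):
    """Travel time term (s) on the yielding link from equation (8.27):
    x2 = minor-stream flow (veh/s), u = capacity (veh/s),
    Lam = analysis period length (s)."""
    r = x2 / u
    return 1.0 / u + (Lam / 4.0) * (
        r - 1.0 + math.sqrt((r - 1.0) ** 2 + 8.0 * x2 / (u * Lam)))
```

As expected, the capacity falls as the priority flow rises, and the delay rises with the minor-stream flow, which is exactly the asymmetric interaction discussed earlier.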
Figure 8.4: Change in path travel times as x1 varies, “realistic” two-link net-
work.
Figure 8.4 shows the travel times on the two paths as x1 varies. Again, there
are three equilibria: (1) x1 = 0, x2 = 1800, where t1 = 300 and t2 = 182; (2)
x1 = 362, x2 = 1438, where t1 = t2 = 300; and (3) x1 = 892, x2 = 908, where
t1 = t2 = 300.
So, even in realistic examples we cannot expect equilibrium to be unique
when there are link interactions. The practical significance is that it raises
doubt about which equilibrium solution should be used for project evaluation
or ranking. For instance, consider a candidate project which would improve
the free-flow time on link 1 from 300 to a smaller value; this would correspond
to lowering the horizontal line in Figure 8.4. If we are at one of the equilibria
where the travel times are equal, such a change will indeed reduce the travel
times experienced by drivers. However, if we are at the equilibrium where the
top path is unused, such a change will have no impact whatsoever.
While a complete study of the methods used to distinguish among multiple
equilibria is beyond the scope of this section, a simple stability criterion is
explained here: an equilibrium solution is stable if small perturbations to the
solution would incentivize drivers to move back towards that initial equilibrium
— that is, if we reassign a few drivers to different paths, the path travel times
will change in such a way that those drivers would want to move back to their
original paths. By contrast, an unstable equilibrium does not have this property:
if a few drivers are assigned to different paths, the path travel times will change
in such a way that even more drivers would want to switch paths, and so on
until another equilibrium is found.
In the simple two-link network we’ve been looking at, stability can be iden-
tified using graphs such as those in Figures 8.3 and 8.4. The arrows on the
bottom axis indicate the direction of the path switching which would occur for
a given value of x1 . When the arrow is pointing to the left, t1 > t2 so travelers
want to switch away from path 1 to path 2, resulting in a decrease in x1 (a move
8.2. LINK INTERACTIONS 265
further to the left on the graph). When t2 > t1 , travelers want to switch away
from path 2 to path 1, resulting in an increase in x1 , indicated by an arrow
pointing to the right. At the equilibrium solutions, there is no pressure to move
in any feasible direction. So, for the first example, the only stable equilibria are
the “extreme” solutions with all travelers on either the top or bottom link. The
equilibrium with both paths used is unstable in the sense that any shift from
one path to another amplifies the difference in travel times and encourages even
more travelers to shift in that direction. In the second example, the first and
third equilibria are stable, but the second is unstable.
Based on these two examples, an intuitive explanation for the presence of
stability with link interactions can be provided. For the regular traffic assign-
ment problem with increasing link performance functions, shifting flow away
from a path π and onto another path π 0 always decreases the travel time on
π and increases the travel time on π 0 . Therefore, if the paths have different
travel times, flow will shift in a way that always tends to equalize the travel
times on the two paths. Even where there are link interactions, the same will
hold true if the travel time on a path is predominantly determined by the flow
on that path. However, when the link interactions are very strong, the travel
time on a path may depend more strongly by the flow on a different path. In
the first example, notice that each link’s travel time is influenced more by the
other link’s flow than its own. In the second example, for certain ranges of flow
the travel time on the merge path is influenced more by the flow on the priority
path. In such cases, there is no guarantee that moving flow from a higher-cost
path to a lower-cost path will tend to equalize their travel times. In the first
example, we have an extreme case where moving flow to a path decreases its
travel time while increasing the travel time of the path the flow moved away
from!
To make this idea more precise, the following section introduces the mathe-
matical concept of strict monotonicity.
Strict Monotonicity
Let f (x) be a vector-valued function whose domain and range are vectors of the
same dimension. For instance, t(x) maps the vector of link flows to the vector
of link travel times; the dimension of both of these is the number of links in the
network. We say that f is strictly monotone if for any two distinct vectors x
and y in its domain, the dot product of f (x) − f (y) and x − y is strictly positive:
(f (x) − f (y)) · (x − y) > 0 whenever x ≠ y.
For example, let f (x) be defined by f1 (x1 , x2 ) = 2x2 and f2 (x1 , x2 ) = 2x1 , so
that each component depends only on the other variable. Then for the distinct
vectors x = (1, 0) and y = (0, 1), we have
(f (x) − f (y)) · (x − y) = (−2, 2) · (1, −1) = −4 < 0 ,
so these link performance functions are not strictly monotone. Roughly speak-
ing, strict monotonicity requires the diagonal terms of the Jacobian of t to be
large compared to the off-diagonal terms. The precise version of this “roughly
speaking” fact is the following:
Proposition 8.2. If f is a continuously differentiable function whose domain is
convex, then f is strictly monotone if and only if its Jacobian is positive definite
at all points in the domain.
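Both the definition and Proposition 8.2 can be tested numerically. A small sketch (using illustrative cross-dependence and diagonal examples, not functions from the text) evaluates the monotonicity gap at pairs of points and checks positive definiteness through the eigenvalues of the symmetrized Jacobian:

```python
import numpy as np

def monotone_gap(f, x, y):
    """(f(x) - f(y)) . (x - y); strictly monotone functions make this
    positive for every pair of distinct points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.dot(f(x) - f(y), x - y))

def jacobian_positive_definite(J):
    """A (possibly asymmetric) matrix J is positive definite iff its
    symmetric part (J + J.T)/2 has strictly positive eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh((J + J.T) / 2.0) > 0))

# Cross-dependence example: each component depends only on the *other*
# variable, so strict monotonicity fails for suitable point pairs.
f_cross = lambda x: np.array([2 * x[1], 2 * x[0]])
# Diagonal example: strictly monotone.
f_diag = lambda x: np.array([2 * x[0], 2 * x[1]])
```

The cross example produces a negative gap at x = (1, 0), y = (0, 1), while the diagonal example is positive there; correspondingly, the Jacobian [[0, 2], [2, 0]] fails the eigenvalue test and [[2, 0], [0, 2]] passes it.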
With this definition of monotonicity in hand, we can provide the uniqueness
result we’ve been searching for:
Proposition 8.3. Consider an instance of the traffic assignment problem with
link interactions. If the link performance functions t(x) are continuous and
strictly monotone, then there is exactly one user equilibrium solution.
Proof. Since t(x) is continuous, we are guaranteed existence of at least one equi-
librium solution from Brouwer’s theorem; let x̂ be such an equilibrium and let
x̃ be any other feasible link flow solution. We need to show that x̃ cannot be an
equilibrium. Arguing by contradiction, assume that x̃ is in fact an equilibrium.
Then it would solve the variational inequality (8.24), so in particular
t(x̃) · (x̃ − x̂) ≤ 0 .
Adding a clever form of zero to the left-hand side, this would imply
(t(x̃) − t(x̂)) · (x̃ − x̂) + t(x̂) · (x̃ − x̂) ≤ 0 (8.29)
But since the link performance functions are strictly monotone, the first term
on the left-hand side is strictly positive. Furthermore, since x̂ is an equilibrium
the variational inequality (8.24) is true, so t(x̂) · (x̂ − x̃) ≤ 0, which implies that
the second term on the left-hand side is nonnegative. Therefore, the left-hand
side of (8.29) is strictly positive, contradicting the inequality. Therefore x̃
cannot satisfy the principle of user equilibrium.
8.2.3 Algorithms
This section presents two algorithms for the traffic assignment problem with link
interactions. If the link performance functions are strictly monotone, it can be
shown that both of these algorithms converge to the unique equilibrium solution.
Otherwise, it is possible that these algorithms may not converge, although they
will typically do so if they start sufficiently close to an equilibrium. In any case,
these algorithms may be acceptable heuristics even when strict monotonicity
does not hold.
Diagonalization
The diagonalization method is a variation of Frank-Wolfe, which differs only in
how the step size λ is found. Recall that the Frank-Wolfe step size is found by
solving the equation
Σ_{(i,j)∈A} tij (λx∗ij + (1 − λ)xij )(x∗ij − xij ) = 0 (8.30)
since this minimizes the Beckmann function along the line segment connecting x
to x∗ . Since there is no corresponding objective function when there are asym-
metric link interactions, it is not clear that a similar approach will necessarily
work. (And in any case, tij is no longer a function of xij alone, so the formula
as stated will not work.)
To make this formula logical, construct a temporary link performance func-
tion t̃ij (xij ) which only depends on its own flow. This is done by assuming that
the flow on all other links is constant: t̃ij (xij ) = tij (x1 , . . . , xij , . . . , xm ). For
example, if t1 (x1 , x2 , x3 ) = x1 + x22 + x33 and the current solution is x1 = 1,
x2 = 2, and x3 = 3, then t̃1 (x1 ) = 31 + x1 , since this is what we would get if x2
and x3 were set to constants at their current values of 2 and 3, respectively.
The step size λ is then found by adapting the Frank-Wolfe formula, using
t̃ij (xij ) in place of tij (x). That is, in the diagonalization method λ solves
Σ_{(i,j)∈A} t̃ij (λx∗ij + (1 − λ)xij )(x∗ij − xij ) = 0 (8.31)
At each iteration, new t̃ functions are calculated based on the current solu-
tion. The complete algorithm is as follows:
1. Find the shortest path between each origin and destination, and calculate
the relative gap (unless it is the first iteration). If the relative gap is
sufficiently small, stop.
2. Update the solution:
(a) Find the link flows if everybody were traveling on the shortest paths
found in step 1, and store these in x∗ .
Figure 8.5: Modified Braess network with link performance functions t12 =
10x12 , t13 = 50 + x13 + 0.5x23 , t23 = 10 + x23 + 0.5x13 , t24 = 50 + x24 + 0.5x34 ,
and t34 = 10x34 + 5x24 .
(b) If this is the first iteration, set x ← x∗ and move to step 3. Otherwise,
continue with step c.
(c) Using the current solution x, form the diagonalized link performance
functions t̃ij (xij ) for each link.
(d) Find λ which solves equation (8.31).
(e) Update x ← λx∗ + (1 − λ)x.
3. Calculate the new link travel times and the relative gap. Increase the
iteration counter i by one and return to step 1.
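The loop above can be sketched in code. The following Python runs diagonalization on the modified Braess network of Figure 8.5, reading the link performance functions off the figure and assuming (for illustration; the text's example leaves it implicit) a demand of 6 vehicles from node 1 to node 4. The step size solves the diagonalized condition (8.31) by bisection:

```python
# Link order: (1,2), (1,3), (2,3), (2,4), (3,4).
def times(x):
    x12, x13, x23, x24, x34 = x
    return [10 * x12,
            50 + x13 + 0.5 * x23,
            10 + x23 + 0.5 * x13,
            50 + x24 + 0.5 * x34,
            10 * x34 + 5 * x24]

PATHS = [[0, 3], [1, 4], [0, 2, 4]]   # [1,2,4], [1,3,4], [1,2,3,4]
DEMAND = 6.0                          # assumed demand from 1 to 4

def all_or_nothing(t):
    """Target flows x* loading all demand onto the cheapest path."""
    costs = [sum(t[a] for a in p) for p in PATHS]
    best = PATHS[costs.index(min(costs))]
    x = [0.0] * 5
    for a in best:
        x[a] = DEMAND
    return x

def step_size(x, xs):
    """Bisection on (8.31): every other link's flow is frozen at x
    when evaluating each link's diagonalized travel time."""
    def g(lam):
        total = 0.0
        for a in range(5):
            z = list(x)
            z[a] = lam * xs[a] + (1 - lam) * x[a]
            total += times(z)[a] * (xs[a] - x[a])
        return total
    if g(1.0) <= 0:            # target attractive along the whole segment
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def solve(iters=200):
    x = all_or_nothing(times([0.0] * 5))
    for _ in range(iters):
        xs = all_or_nothing(times(x))
        lam = step_size(x, xs)
        x = [lam * xs[a] + (1 - lam) * x[a] for a in range(5)]
    t = times(x)
    costs = [sum(t[a] for a in p) for p in PATHS]
    gap = sum(t[a] * x[a] for a in range(5)) / (DEMAND * min(costs)) - 1
    return x, costs, gap
```

Because the interactions here are mild enough for monotonicity to hold, the iterates settle down with all three paths used at (nearly) equal travel times.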
As an example, consider the modified Braess network shown in Figure 8.5.
At each merge node, the travel time on each of the incoming links depends on
the flow on both links which merge together. The link flow vectors are indexed
x = (x12 , x13 , x23 , x24 , x34 ).
Solving equation (8.31) for the step size (omitting terms with xij = x∗ij , since
they contribute zero) gives λ = 23/72. We then update x accordingly and
recompute the travel times using the regular cost functions, not the diagonalized
ones. The average excess cost is now 21.43.
Simplicial decomposition
An alternative to diagonalization is the simplicial decomposition algorithm.
This algorithm is introduced at this point (rather than in Chapter 6) for several
reasons. First, historically it was the first provably convergent algorithm for the
equilibrium problem with link interactions. Second, although it is an improve-
ment on Frank-Wolfe, for the basic TAP it is outperformed by the path-based
and bush-based algorithms presented in that chapter. However, like those algo-
rithms, it overcomes the “zig-zagging” difficulty that Frank-Wolfe runs into (cf.
Figure 6.4).
The price of this additional flexibility is that more computer memory is
needed. Frank-Wolfe and MSA are exceptionally economical in that they only
require two vectors to be stored: the current link flows x and the target link flows
x∗ . In simplicial decomposition, we will “remember” all of the target link flows
found in earlier iterations, and exploit this longer-term memory by allowing
“combination” moves towards several of these previous targets simultaneously.
In the algorithm, the set X is used to store all target link flows found thus far.
A second notion in simplicial decomposition is that of a “restricted equilib-
rium.” Given a set X = {x∗1 , x∗2 , · · · , x∗k } and a current link flow solution x, we
say that x is a restricted equilibrium if it solves the variational inequality
t(x) · (x − x′ ) ≤ 0 ∀x′ ∈ X(x, X ) (8.32)
where X(x, X ) means the set of link flow vectors which are obtained by a convex
combination of x and any of the target vectors in X . Equivalently, x is a
restricted equilibrium if none of the targets in X leads to an improving direction,
in the sense that the total system travel time cannot be reduced by moving
toward some x∗i ∈ X while fixing the travel times at their current values. That
is,
t(x) · (x − x∗i ) ≤ 0 for all x∗i ∈ X . (8.33)
and X = {x∗1 , x∗2 }. For the first iteration of the subproblem, notice that
t · (x − x∗1 ) = 0 and t · (x − x∗2 ) = 138, so Smith’s formula (8.34) reduces
to
∆x = (0/138)(x∗1 − x) + (138/138)(x∗2 − x) = (−6, 6, −6, 0, 0) .
Taking a step of size µ = 1/2 gives us x = (3, 3, 3, 0, 6).
The new travel times are t = (30, 54.5, 14.5, 53, 60), so t · x = 657,
t · x∗1 = 627, and t · x∗2 = 687, giving
γS = ([657 − 627]+ )2 + ([657 − 687]+ )2 = 302 + 02 = 900 .
Assume this is “small enough” to complete the subproblem.
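The gap measure γS and the Smith direction are simple to compute given the stored targets. A sketch, using the link order (1,2), (1,3), (2,3), (2,4), (3,4) and values reconstructed from the example above via Figure 8.5's cost functions:

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def smith_gap(t, x, targets):
    """Sum of squared positive parts of t.(x - x*_i) over the targets,
    as in the example's gamma_S."""
    return sum(max(dot(t, x) - dot(t, xs), 0.0) ** 2 for xs in targets)

def smith_direction(t, x, targets):
    """Convex combination of moves toward improving targets, weighted
    by their positive gaps (Smith's formula (8.34))."""
    w = [max(dot(t, x) - dot(t, xs), 0.0) for xs in targets]
    tot = sum(w)
    if tot == 0.0:
        return [0.0] * len(x)   # restricted equilibrium: no move
    return [sum(wi / tot * (xs[a] - x[a]) for wi, xs in zip(w, targets))
            for a in range(len(x))]

x_star1 = [6, 0, 6, 0, 6]   # all flow on path [1,2,3,4]
x_star2 = [0, 6, 0, 0, 6]   # all flow on path [1,3,4]
```

With x = (6, 0, 6, 0, 6) and t = (60, 53, 16, 53, 60) this reproduces the first direction (−6, 6, −6, 0, 0); with x = (3, 3, 3, 0, 6) and t = (30, 54.5, 14.5, 53, 60) it reproduces γS = 900.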
x∗3 = (6, 0, 0, 6, 0)
and X = {x∗1 , x∗2 , x∗3 }. For the first iteration of the subproblem, calculate
t · (x − x∗1 ) = 30, t · (x − x∗2 ) = −30, and t · (x − x∗3 ) = 159. So (8.34)
gives
∆x = (30/189)(x∗1 − x) + (0/189)(x∗2 − x) + (159/189)(x∗3 − x)
= (3, −3, −2.05, 5.05, −5.05) .
Taking another step of size µ = 1/2 gives x = (4.5, 1.5, 1.98, 2.52, 3.48) and
t = (45, 52.5, 12.7, 54.3, 47.4) ,
which has γS = 1308. Assume that this is no longer “small enough” to
return to the master problem, so we begin a second subproblem iteration.
Smith’s formula (8.34) now gives ∆x = 0.2(x∗2 − x) + 0.8(x∗3 − x), and the
trial solution x + 21 ∆x has γS = 1195, which is an improvement. In this
case, choosing a smaller µ would work even better; for instance µ = 1/4
would reduce the Smith gap to 557. There is thus a tradeoff between
spending more time on finding the “best” value of µ, or spending more
time on finding new search directions and vectors for X . Balancing these
is an important question for implementation.
The algorithm can continue from this point or terminate if this average
excess cost is small enough.
8.3 Stochastic User Equilibrium
In the stochastic user equilibrium (SUE) model, drivers choose the paths they
believe to be shortest, but allow for some perception error between their
belief and the actual travel times. An alternative, mathematically equivalent,
interpretation (explained below) is that drivers do in fact perceive travel times
accurately, but care about factors other than travel time. This section explains
the development of the SUE model. The mathematical foundation for the SUE
model is in discrete choice concepts, which are briefly reviewed in Section 8.3.1.
The specific application of discrete choice to the route choice decision is taken
up in Section 8.3.2.
These sections address the “individual” perspective of logit route choice.
The next steps to creating the SUE model are an efficient network loading
model (a way to find the choices of all drivers efficiently), and then finally the
equilibrium model which combines the network loading with updates to travel
times, to account for the mutual dependence between travel times and route
choices. These are undertaken in Sections 8.3.4 and 8.3.5, respectively. For the
most part, this discussion assumes a relatively simplistic model for perception
errors in travel times; Section 8.3.7 briefly discusses how more general situations
can be handled.
Ui = Vi + εi (8.36)
even though they are real insofar as they affect your choice. In the grocery store
example, this might include your opinion on the taste of the store brands, the
cleanliness of the store, and so on. Then, by modeling the unobserved utility
as a random variable εi , we can express choices in terms of probabilities. (The
modeler does not know all of the factors affecting your choice, so they can only
speak of probabilities of choosing different options based on what is observable.)
A second interpretation is that the observed utility Vi actually represents
all of the factors that you care about. However, for various reasons you are
incapable of knowing all of these factors with complete accuracy. (You probably
have a general sense of the prices of items at a grocery store, but very few know
the exact price of every item in a store’s inventory.) Then the random variable
εi represents the error between the true utility (Vi ) and what you believe the
utility to be (Ui ). Either interpretation leads to the same mathematical model.
Depending on the distribution we choose for the random variables εi , differ-
ent discrete choice models are obtained. A classic is the logit model, which is
obtained when the unobserved utilities εi are assumed to be independent across
alternatives, and to have Gumbel distributions with zero mean and identical
variance.
Under this assumption, the probability of choosing alternative i is given by
pi = exp(θVi ) / Σ_{j∈I} exp(θVj ) (8.37)
The probit model, in which the εi instead follow a multivariate normal distribu-
tion, does not have a closed-form expression for probabilities like (8.37). Instead,
Monte Carlo sampling methods are used to estimate choices.
The majority of this section is focused on logit-based models. While probit
models are more general and arguably more realistic, the logit model has two
major advantages from the perspective of a book like this. First, computations
in logit models can often be done analytically, simplifying explanations and
making it possible to give examples you can easily verify. This helps you better
understand the main ideas in stochastic user equilibrium and build intuition.
Second, logit models admit faster solution algorithms, algorithms which scale
relatively well with network size. This is an important practical advantage for
logit models. Nevertheless, Section 8.3.7 provides some discussion on probit and
other models and what needs to change from the logit discussion below.
U π = −cπ + επ (8.38)
with the negative sign indicating that maximizing utility for drivers means
minimizing travel time. Assuming that the επ are independent, identically
distributed Gumbel random variables, we can use the logit formula (8.37) to
express the probability that path π is chosen:
pπ = exp(−θcπ ) / Σ_{π′∈Πrs} exp(−θcπ′ ) (8.39)
The comments in the previous section apply to the interpretation of this formula.
As θ approaches 0, drivers’ perception errors are large relative to the path travel
times, and each path is chosen with nearly equal probability. (The errors are
so large, the choice is essentially random.) As θ grows large, perception errors
are small relative to path travel times, and the path with lowest travel time is
chosen with higher and higher probability. At any level of θ, there is a strictly
positive probability that each path will be taken.
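The two limiting behaviors of θ are easy to see numerically. A quick illustration with hypothetical path costs (not from the text):

```python
import math

def logit_probs(costs, theta):
    """Logit path choice probabilities, as in equation (8.39)."""
    w = [math.exp(-theta * c) for c in costs]
    s = sum(w)
    return [wi / s for wi in w]

# Three hypothetical path travel times (minutes).
costs = [10.0, 12.0, 15.0]
```

With θ near zero the probabilities are nearly uniform; with θ large, almost all probability falls on the cheapest path, yet every path retains strictly positive probability.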
For concreteness, the route choice discussion so far corresponds to the second
interpretation of SUE, where the unobserved utility represents perception errors
in the utility. The first interpretation would mean that π represents factors
other than travel time which affect route choice (such as comfort, quality of
scenery, etc.). Either of these interpretations is mathematically consistent with
the discussion in Section 8.3.1.
The fact that the denominator of (8.39) includes a summation over all paths
connecting r to s is problematic, from a computation standpoint. The number
of paths can grow exponentially with network size. Any use of stochastic user
equilibrium in a practical setting, therefore, requires a way to compute link flows
without explicitly calculating the sum in (8.39).
This is done by carefully defining which paths are in the choice set for trav-
elers. The notation Πrs in (8.39) in this book means the set of all acyclic
paths connecting origin r to destination s. Following Chapter 6, we will use the
notation Π̂rs to define the set of paths being considered by travelers in SUE;
these sets are sometimes called sets of reasonable paths. With a suitable defini-
tion of this set, using Π̂rs in place of Πrs in equation (8.39) leads to tractable
computation schemes.
Two possibilities are common: selecting an acyclic subset of links, and choos-
ing Π̂rs to contain the paths using these links only; or setting Π̂rs to consist of
literally all paths (even cyclic ones) connecting origin r to destination s. Both
of these are discussed next. The key to both of these definitions of Π̂rs is that
we can determine how many of the travelers passing through a given node came
from each of the available incoming links, without needing to know the specific
path they are on. This is known as the Markov property, and is discussed at
more length in the optional Section 8.3.3.
1. There is at least one path from r to s using allowable links; this ensures
that Π̂rs is nonempty.
2. The set of allowable links contains no cycle; this ensures that all paths in
Π̂rs are acyclic.
Â14 = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 2), (3, 4)}
This set contains the cycle [2, 3, 2], so it is not possible to generate a reasonable
path set containing [1, 2, 3, 4] and [1, 3, 2, 4] from an acyclic set of allowable links.
8.3. STOCHASTIC USER EQUILIBRIUM 277
The advantage of totally acyclic path sets is that we can define a topolog-
ical order on the nodes (see Section 2.2). With this topological order, we can
efficiently make computations using the logit formula (8.39) without having to
enumerate all the paths. This procedure is described in the next section.
We next describe two ways to form sets of totally acyclic paths. For each
link (i, j), define a positive value c0ij which is constant and independent of
flow — examples include the free-flow travel time or distance on the link. For
each origin r and node i, let Lri denote the length of the shortest path from r
to i, using the quantities c0ij as the link costs. Likewise, for each destination s
and node i, let `si denote the length of the shortest path from i to s, again using
the quantities c0ij as the link costs.
Consider the following sets of paths:
1. The set of all paths for which the head node of each link is further away
from the origin than the tail node, based on the quantities c0ij . That is,
the sets Π̂r containing all paths starting at r and satisfying Lrj > Lri for
each link (i, j) in the path.
2. The set of all paths for which the head node of each link is closer to
the destination than the tail node, based on the quantities c0ij . That is,
the sets Π̂s containing all paths ending at s satisfying `sj < `si for each
link (i, j) in the path. (This is like the first one, but oriented toward the
destination, rather than the origin.)
3. The set of paths which satisfy both of the above criteria: Π̂rs = Π̂r ∩ Π̂s .
For instance, if c0ij reflects the physical length on each link, then for a given
origin, Π̂r would consist of all of the paths which start at that origin and al-
ways move away from it, never “doubling back.” Likewise, Π̂s would consist
of all paths which always move closer to their destination node s, without any
diversions that lead them away. The third option consists of paths which both
continually
move away from their origin and toward their destination. Exercise 15 asks you
to show that all three possibilities for Π̂ are totally acyclic.
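These constructions are mechanical enough to sketch in code. The following assumes a small hypothetical network containing the two-way pair (2,3)/(3,2) (it is not the network of Figure 8.7); it computes the labels Lri and ℓsi by Dijkstra's algorithm, applies the third (strictest) rule, and confirms acyclicity with a topological sort:

```python
from heapq import heappush, heappop

def shortest_dists(links, source, reverse=False):
    """Dijkstra over the constant costs c0; reverse=True gives
    distances *to* source (the destination-based labels l_s)."""
    adj = {}
    for (i, j), c in links.items():
        a, b = (j, i) if reverse else (i, j)
        adj.setdefault(a, []).append((b, c))
    dist, pq = {source: 0.0}, [(0.0, source)]
    while pq:
        d, u = heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float('inf')):
                dist[v] = d + c
                heappush(pq, (d + c, v))
    return dist

def allowable_links(links, r, s):
    """Links (i, j) with L_j > L_i and l_j < l_i (the third rule)."""
    L = shortest_dists(links, r)
    l = shortest_dists(links, s, reverse=True)
    inf = float('inf')
    return [(i, j) for (i, j) in links
            if L.get(j, inf) > L.get(i, inf) and l.get(j, inf) < l.get(i, inf)]

def is_acyclic(arcs):
    """Kahn's algorithm: True iff the arc set has no directed cycle."""
    nodes = {n for a in arcs for n in a}
    indeg = {n: 0 for n in nodes}
    for _, j in arcs:
        indeg[j] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (i, j) in arcs:
            if i == u:
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
    return seen == len(nodes)

# Hypothetical network; every link has c0 = 1.
net = {(1, 2): 1, (1, 3): 1, (2, 3): 1, (3, 2): 1, (2, 4): 1, (3, 4): 1}
```

Here the original arc set contains the cycle [2, 3, 2], but the filtered allowable set drops both (2, 3) and (3, 2) and is acyclic, as Exercise 15 asks you to prove in general.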
Figure 8.7 illustrates these three definitions. The top of the figure shows
a network with node A as origin and node F as destination, and the links
are labeled with their c0ij values. The nodes are labeled with their Li values
(above each node) and `i values below. The bottom of the figure shows the
links satisfying each of the three criteria (Lj > Li ; `j < `i ; and both of these
simultaneously). The paths in these networks are the allowable paths in the
original network. Notice that in all cases, there are no cycles in these links
(even though the original network had the cycle [2,5,2]).
Of these principles, the third imposes the strictest conditions on which paths
are in the reasonable set. The first and second are weaker, and include some
paths which may not seem reasonable to you. For instance, the spiral path
in Figure 8.8 satisfies the first condition, since the distance from the origin is
always increasing. However, it does not satisfy the third condition, since at
times the distance to the destination increases as well.
However, a major advantage of the first two principles is that we can aggre-
gate travelers by origin or destination. With the first principle, the destination
of travelers can be ignored for routing purposes — if a path is reasonable for
travel from an origin r to a node i, that path segment is reasonable for travel
to any node beyond i as well. This allows us to aggregate travelers by origin
(as in Section 5.2.3) and calculate a “one-to-all” path set for each origin, rather
than having separate path sets for each OD pair. A similar destination-based
aggregation is possible with the second principle.
Instead of restricting the path set to create a totally acyclic collection, an alter-
native is to have Π̂rs consist of literally all paths from origin r to destination
s, even including cycles. This will often mean these sets are infinite. For exam-
ple, consider the network in Figure 8.6. Under this definition, the (cyclic) path
[1, 2, 3, 2, 4] is part of Π̂rs , as is [1, 2, 3, 2, 3, 2, 4], and so on.
Including cyclic paths, especially paths with arbitrarily many repetitions of
cycles, may seem counterintuitive. There are several reasons why this definition
of Π̂rs is nevertheless useful. One reason is that requiring total acyclicity is in
fact quite a strong condition. In Figure 8.6, there is no totally acyclic path set
that includes both [1, 2, 3, 4] and [1, 3, 2, 4] as paths — if we want to allow one
path as reasonable, then by symmetry the other should be reasonable as well.
But any path set including both of those includes both links (2, 3) and (3, 2),
which form a cycle.
So, it is desirable to have an alternative to total acyclicity that still does
not require path enumeration. As shown in the following section, it is possible
to compute the link flows resulting from (8.39) without having to list all the
paths, if all cyclic paths are included. The intuition is that all travelers at a
given node can be treated identically in terms of which link they move to next:
in Figure 8.6, we can split the vehicles arriving at node 2 between links (2, 3)
and (2, 4) without having to distinguish whether they came via link (1,2), as if
on the path [1,2,4] or [1,2,3,4], or whether they came via link (3,2), as if on the
path [1,3,2,4], or even [1,2,3,2,4].
Furthermore, some modelers are philosophically uncomfortable with includ-
ing restrictions like those in the previous section, without evidence that those
rules really represent traveler behavior. Determining which sets of paths travel-
ers actually consider (and why) is complicated, and still not well-understood.5
One school of thought is that it is therefore better to impose no restrictions at
all, essentially taking an “agnostic” position with respect to the sets Π̂rs , rather
than imposing restrictions which may not actually represent real behavior.
5 Emerging data sources, such as Bluetooth readers, are providing more complete informa-
tion on observed vehicle trajectories. This may provide more insight on this subject.
Figure 8.9: Flows from stochastic network loading when all links have unit cost,
θ = log 2, and the full cyclic path set is allowed.
As an example, assume that every link in Figure 8.6 has the same travel
time of 1 unit, and that θ = log 2. Then there are two paths of length 2 ([1,2,4]
and [1,3,4]), two paths of length 3 ([1,2,3,4] and [1,3,2,4]), two paths of length
4 ([1,2,3,2,4] and [1,3,2,3,4]), and so on. Therefore the denominator in the logit
formula is
Σ_{π∈Π̂} exp(−θcπ ) = 1/2 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + · · · = 2 (8.40)
and the probability of choosing one of the length-2 paths is (1/2)/2 = 1/4, the
probability of choosing one of the length-3 paths is 1/8, and so on. The flows
on each link can be calculated by multiplying these path flows by the number
of times that path uses a link. For instance, to calculate the flow on link (1,2),
observe that it is used by paths [1,2,4], [1,2,3,4], [1,2,3,2,4], [1,2,3,2,3,4], and so
on, with respective probabilities 1/4, 1/8, 1/16, 1/32, etc. Thus the total flow
on this link is the sum of these, or 1/2. The flows on links (1,3), (2,4), and (3,4)
are also found to be 1/2 by the same technique. Calculating the flow on links
(2,3) and (3,2) is trickier, because some paths use these multiple times. For
example, path [1,2,3,2,3,4] uses link (2,3) twice, so even though the probability
of selecting this path is $1/2^5 = 1/32$, it actually contributes twice this (1/16) to
the flow. It is possible to show that the flow on these links is also 1/2, giving
the final flows in Figure 8.9.
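These flows can be checked numerically by truncating the infinite series. The sketch below (plain Python, with the network of Figure 8.6 hard-coded and θ = log 2) enumerates every path of at most 30 links; longer paths contribute negligibly, and every link flow comes out to 1/2, matching Figure 8.9.

```python
import math

# Network of Figure 8.6 (hard-coded): all links have travel time 1.
succ = {1: [2, 3], 2: [3, 4], 3: [2, 4], 4: []}
theta = math.log(2)

def paths_up_to(origin, dest, max_links):
    """All paths from origin to dest (cycles allowed) with at most max_links links."""
    stack = [(origin, (origin,))]
    while stack:
        node, path = stack.pop()
        if node == dest:
            yield path
            continue
        if len(path) - 1 < max_links:
            for j in succ[node]:
                stack.append((j, path + (j,)))

denom = 0.0
flow = {}   # link -> flow, for d^14 = 1
for path in paths_up_to(1, 4, max_links=30):
    w = math.exp(-theta * (len(path) - 1))   # path cost = number of links
    denom += w
    for a in zip(path, path[1:]):
        flow[a] = flow.get(a, 0.0) + w       # counts repeated uses of a link

flow = {a: x / denom for a, x in flow.items()}
```

The truncation at 30 links perturbs the sums by less than $10^{-8}$, so the series does not need to be summed in closed form to verify the result.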
As this example shows, direct calculations involving this path set usually
involve summing infinite series. As we will show in Section 8.3.4, there is an
alternative method that allows us to make these computations without explicitly
calculating such sums.
[Figure 8.10: Example network with nodes 1–6. All links have cost 1, and d13 = 20.]
Both path set definitions above (“totally acyclic paths” and the “full cyclic
path set”) allow for the computation of the logit formula to be disaggregated
by node and by link, without having to enumerate the paths in the network.
The key to this is the Markov property. An informal statement of this property
is that if we randomly select a traveler passing through a node, and want to
know the probability that they leave that node by a particular link, there is no
information provided by knowing which link they used to arrive to that node.
For example, consider the network in Figure 8.10, where the demand is
d13 = 20 vehicles and θ = log 2. All links have unit cost. Assume first that all
paths in this network are allowed. Then the right panel of Figure 8.11 shows
the flow on each path, and the left panel shows the flow on each link.
In this network 18 vehicles pass through node 2. Suppose we pick one of
them at random, and want to know the probability that the next link in this
vehicle’s path is (2,3), as opposed to (2,5). From examining the path flows in
Figure 8.11, we can see that this probability is (8 + 4)/(8 + 4 + 4 + 2) = 2/3
(ignoring the flow on path [1,4,6,5,3] in the denominator, since these trips do
not pass through node 2). Now, suppose that these vehicles also reported the
segment of their path that led them to node 2 — that is, they also report
whether they came via segment [1,2] or segment [1,4,2]. Does this change our
answers in any way?
If we know they came from segment [1,2], then they are either on path [1,2,3]
or [1,2,5,3], and the probability that they continue on (2,3) is 8/(8 +4) = 2/3. If
we know they came from segment [1,4,2], then they are either on path [1,4,2,3]
or path [1,4,2,5,3], and the probability that they continue on (2,3) is 4/(4 + 2),
which is still 2/3. So knowing the first segment of their trip does not provide
any additional information as to the remaining segment.
The situation changes if we modify the allowable path set to include only
three paths: [1,2,3], [1,2,5,3], and [1,4,2,3].6 Figure 8.12 shows the
corresponding path flows and link flows.
Path          Pr    Flow
[1,2,3]       0.4    8
[1,2,5,3]     0.2    4
[1,4,2,3]     0.2    4
[1,4,2,5,3]   0.1    2
[1,4,6,5,3]   0.1    2

Link flows: x12 = 12, x23 = 12, x14 = 8, x42 = 6, x25 = 6, x53 = 8, x46 = 2, x65 = 2.

Figure 8.11: Link and path flows when all paths are allowed.

Path          Pr    Flow
[1,2,3]       1/2    10
[1,2,5,3]     1/4     5
[1,4,2,3]     1/4     5

Link flows: x12 = 15, x23 = 15, x14 = 5, x42 = 5, x25 = 5, x53 = 5, x46 = 0, x65 = 0.

Figure 8.12: Link and path flows when only a subset of paths is allowed.

8.3. STOCHASTIC USER EQUILIBRIUM 283

Let us ask the same question of the travelers passing through node 2. Without
knowing anything further, the probability that they continue on link (2,3)
is (10 + 5)/20 = 3/4. However, if we know they came from [1,2], then the
probability that they continue on (2,3) is 10/15 = 2/3. If we know they came from
[1,4,2], then the probability that they continue on (2,3) is 1, and there is no
other option! So in this case, knowing the first segment of the path does give
us additional information about the rest of their journey.

6 A natural way this path set might arise is to include paths that are only within a small
threshold of the shortest path cost, thus including paths with cost 2 or 3 but excluding paths
with cost 4.
It turns out that the Markov property will be very useful, and will allow us
to efficiently evaluate the logit formula without enumerating paths. Informally,
we can do computations using just the link flows (the left panels in Figures 8.11
and 8.12) without having to use the path flows (the right panels of these figures)
— we can get the path flows on the right from the link flows on the left. In large
networks, the link-based representation is much more compact and efficient.
A more formal statement of this property is as follows. To keep the formulas
clean, assume that there is a single origin r and destination s; in a general
network, we can apply the same logic separately to each OD pair. In logit
assignment, the path set Π̂ is said to satisfy the Markov property if there exist
values Pij for each link such that
\[ p^\pi = \frac{\exp(-\theta c^\pi)}{\sum_{\pi' \in \hat{\Pi}} \exp(-\theta c^{\pi'})} = \prod_{(i,j) \in \pi} P_{ij}^{\delta_{ij}^\pi} \tag{8.41} \]
where $\delta_{ij}^\pi$ is the number of times path π uses link (i, j).
That is, the probability of a traveler selecting any path can be computed
by multiplying Pij values across its links. The Pij values can be interpreted as
conditional probabilities: given that a traveler is passing through i, they express
the probability that their path leaves that node through link (i, j).
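This product formula can be checked numerically for the example of Figure 8.11 (all paths allowed, demand 20). The sketch below takes the link flows and logit path probabilities as given, derives each Pij as the share of flow leaving node i on link (i, j), and confirms that multiplying the Pij along any path reproduces that path's choice probability.

```python
# Link flows and logit path probabilities from Figure 8.11 (all paths allowed).
x = {(1, 2): 12, (2, 3): 12, (1, 4): 8, (4, 2): 6,
     (2, 5): 6, (5, 3): 8, (4, 6): 2, (6, 5): 2}
path_prob = {(1, 2, 3): 0.4, (1, 2, 5, 3): 0.2, (1, 4, 2, 3): 0.2,
             (1, 4, 2, 5, 3): 0.1, (1, 4, 6, 5, 3): 0.1}

# Flow through each node = total flow on its outgoing links (x_1 = 20 = demand).
x_node = {}
for (i, j), xij in x.items():
    x_node[i] = x_node.get(i, 0) + xij

# Conditional probability of leaving node i via link (i, j).
P = {(i, j): xij / x_node[i] for (i, j), xij in x.items()}

# Markov property: each path's probability is the product of P along its links.
products = {}
for path in path_prob:
    prod = 1.0
    for a in zip(path, path[1:]):
        prod *= P[a]
    products[path] = prod
```

For instance, the product for [1,2,3] is (12/20)(12/18) = 0.4, exactly its logit probability.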
To make the connection between totally acyclic and complete path sets and the
Markov property, we first show that both path set choices satisfy the segment
substitution property. Given any path π = [r, i1 , i2 , . . . , s], a segment σ is any
contiguous subset of one or more of its nodes. For example, the path [1, 2, 3, 2, 4]
contains [1, 2, 3], [2, 3], [2], and [2, 3, 2, 4]. It does not contain the segment
[1, 4]; even though both of those nodes are in the path, they do not appear
consecutively. Note that a segment can consist of a single link, such as [2, 3],
or even a single node, such as [2]. We use the notation ⊕ to indicate joining
segments, so [1, 2, 3, 2, 4] = [1, 2, 3] ⊕ [3, 2, 4]. If two segments are being joined,
the end node of the first must match the starting node of the second.
The set of reasonable paths Π̂ satisfies the segment substitution property
if, for any pair of reasonable paths which pass through the same two nodes,
the paths formed by exchanging the segments between those nodes are also
reasonable. That is, if π = σ1 ⊕ σ2 ⊕ σ3 is reasonable, and if there is another
reasonable path π 0 = σ10 ⊕ σ20 ⊕ σ30 with σ2 and σ20 starting and ending at the
same node, then the paths σ1 ⊕ σ20 ⊕ σ3 and σ10 ⊕ σ2 ⊕ σ30 are also reasonable.
284 CHAPTER 8. EXTENSIONS OF STATIC ASSIGNMENT
With the allowable path set in Figure 8.12, the segment substitution property
is not satisfied. There are paths [1,2,5,3] and [1,4,2,3], both passing through
nodes 1 and 2, which can be decomposed in the following way:
\[ [1,2,5,3] = [1] \oplus [1,2] \oplus [2,5,3] \tag{8.42} \]
\[ [1,4,2,3] = [1] \oplus [1,4,2] \oplus [2,3] \tag{8.43} \]
Notice that the corresponding pairs of segments on the right-hand sides all start
and end at the same nodes. We can generate two new paths by “crossing” the
middle segments of (8.42) and (8.43): [1,2,3] and [1,4,2,5,3]. The first of these
is allowable, but the second is not.
By contrast, you can verify that segment substitution is satisfied for the
allowable path set in Figure 8.11. No matter which pairs of paths you choose,
swapping the segments results in another allowable path.
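This check can be automated. The function below is a sketch (the name and the tuple representation of paths are ours, not the book's): it tries every pair of paths and every ordered pair of shared nodes, and returns a swapped path that falls outside the set, or None if the property holds. It passes the full path set of Figure 8.11 and flags the restricted set of Figure 8.12.

```python
def find_substitution_violation(path_set):
    """Return a swapped path not in path_set, or None if the property holds."""
    paths = [tuple(p) for p in path_set]
    allowed = set(paths)
    for p in paths:
        for q in paths:
            for ia, na in enumerate(p):            # first crossing node in p
                for ib in range(ia + 1, len(p)):   # second crossing node in p
                    nb = p[ib]
                    for ja, ma in enumerate(q):
                        if ma != na:
                            continue
                        for jb in range(ja + 1, len(q)):
                            if q[jb] != nb:
                                continue
                            # replace p's middle segment by q's middle segment
                            swapped = p[:ia] + q[ja:jb] + p[ib:]
                            if swapped not in allowed:
                                return swapped
    return None

full_set = [(1, 2, 3), (1, 2, 5, 3), (1, 4, 2, 3), (1, 4, 2, 5, 3), (1, 4, 6, 5, 3)]
restricted = [(1, 2, 3), (1, 2, 5, 3), (1, 4, 2, 3)]
```

Iterating over both orderings of each pair covers both swapped paths σ1 ⊕ σ2′ ⊕ σ3 and σ1′ ⊕ σ2 ⊕ σ3′.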
A reasonable path set cannot have the Markov property unless it satisfies
the segment substitution property, as shown in the above example. Without the
segment substitution property, we could not know how the vehicles at node i
would split without knowing the specific paths they were on. Segment substitu-
tion ensures that all travelers passing through node i are considering the same
set of outgoing links (and, indeed, the same set of path segments continuing on
to the destination).
It is fairly easy to show that both path set definitions considered above —
sets of totally acyclic paths, and the set of all paths (even cyclic ones) — satisfy
the segment substitution property; see Exercise 16.
To calculate the flow on a link xij, we need to add the flow from all of the
reasonable paths which use this link. Every reasonable path using link (i, j)
takes the form σ1 ⊕ [i, j] ⊕ σ2, where σ1 goes from the origin r to node i and σ2
goes from node j to the destination s.7 Therefore
\begin{align}
x_{ij} &= \sum_{\pi \in \hat{\Pi}^{rs}} \delta_{ij}^\pi h^\pi \tag{8.46} \\
&= \frac{d^{rs}}{V^{rs}} \sum_{\pi \in \hat{\Pi}^{rs}} \delta_{ij}^\pi \exp(-\theta c^\pi) \tag{8.47} \\
&= \frac{d^{rs}}{V^{rs}} \sum_{\sigma_1 \in \Sigma^{ri}} \sum_{\sigma_2 \in \Sigma^{js}} \exp(-\theta(c^{\sigma_1} + t_{ij} + c^{\sigma_2})) \tag{8.48} \\
&= \frac{d^{rs}}{V^{rs}} \sum_{\sigma_1 \in \Sigma^{ri}} \sum_{\sigma_2 \in \Sigma^{js}} \exp(-\theta c^{\sigma_1}) \exp(-\theta t_{ij}) \exp(-\theta c^{\sigma_2}) \tag{8.49} \\
&= \frac{d^{rs}}{V^{rs}} \left( \sum_{\sigma_1 \in \Sigma^{ri}} \exp(-\theta c^{\sigma_1}) \right) \exp(-\theta t_{ij}) \left( \sum_{\sigma_2 \in \Sigma^{js}} \exp(-\theta c^{\sigma_2}) \right) \tag{8.50} \\
&= d^{rs}\, \frac{V^{ri} \exp(-\theta t_{ij})\, V^{js}}{V^{rs}}, \tag{8.51}
\end{align}
where $\Sigma^{ri}$ and $\Sigma^{js}$ denote the sets of reasonable segments connecting
r to i and j to s, respectively.
The third equality groups the sum over paths according to the starting and
ending segment. This equation for xij will be used extensively in the rest of
this section.
Similarly, to find the number of vehicles passing through a particular node i
(call this xi ), observe that every path through i can be divided into a segment
from r to i, and a segment from i to s. Grouping the paths according to these
segments and repeating the algebraic manipulations above gives the formula
\[ x_i = d^{rs}\, \frac{V^{ri} V^{is}}{V^{rs}} \tag{8.52} \]
With equations (8.51) and (8.52) in hand, it is easy to show the Markov property
holds in any reasonable path set with the segment substitution property.
Define
\[ P_{ij} = \frac{x_{ij}}{x_i} = \frac{\exp(-\theta t_{ij})\, V^{js}}{V^{is}} \tag{8.53} \]
and then multiply these values together for the links in a path, say, π =
$[r, i_1, i_2, \ldots, i_k, s]$:
\begin{align}
\prod_{(i,j) \in \pi} P_{ij} &= \prod_{(i,j) \in \pi} \frac{\exp(-\theta t_{ij})\, V^{js}}{V^{is}} \tag{8.54} \\
&= \exp\left(-\theta \sum_{(i,j) \in \pi} t_{ij}\right) \frac{V^{i_1 s} V^{i_2 s} \cdots V^{i_k s}\, V^{ss}}{V^{rs}\, V^{i_1 s} V^{i_2 s} \cdots V^{i_k s}} \tag{8.55} \\
&= \exp(-\theta c^\pi)\, \frac{V^{ss}}{V^{rs}}. \tag{8.56}
\end{align}
But $V^{ss} = 1$ and $V^{rs}$ is simply the denominator in the logit formula, so this
product is exactly $p^\pi$. Therefore the logit path flow assignment in the set of
reasonable paths satisfies the Markov property.
7 If the link starts at the origin or ends at the destination, we may have i = r or j = s, in
which case the corresponding segment consists of the single node r or s.
Recall that $V^{ab}$ is the sum of $\exp(-\theta c^\sigma)$ over all reasonable path segments σ starting
at node a and ending at node b. The formula (8.51) is important because it allows us
to calculate the flow on each link without having to enumerate all of the paths
that use that link.
It is also possible to show (see Exercise 19) that in this formula we can
replace tij with Li + tij − Lj, where L is a vector of node-specific constants;
this reduces numerical errors in computations. It is common to use shortest
path distances at free-flow (hence the use of the notation L). Doing this for the
link and segment costs helps avoid numerical issues involved with calculating
exponentials of large values. With this re-scaling, we define the link likelihood
as
Lij = exp(θ(Lj − Li − tij )) (8.58)
and thus
\[ x_{ij} = d^{rs}\, \frac{V^{ri} L_{ij} V^{js}}{V^{rs}}. \tag{8.59} \]
It remains to describe how the Vab values can be efficiently calculated. This
section shows how this can be done for both totally acyclic path sets, and the
complete set of all paths (even cyclic ones).
In the case of a totally acyclic path set, the relevant V values can be calcu-
lated in a single pass over the network in topological order, and then the link
flows calculated in a second pass over the network in reverse topological order.
In the case of the full cyclic path set, the cycles create dependencies in the link
weights and in the link flow formulas which prevent them from being directly
evaluated. But we can still calculate them explicitly using matrix techniques.
Define Wi and Wij to be the weight of a node and link, respectively. The
node weight Wi is a shorthand for Vri , and Wij is a shorthand for Vri Lij . These
can be calculated recursively, using the following procedure:
1. Calculate the link likelihoods Lij using equation (8.58) for all allowable
links; set Lij = 0 for any link not in the allowable set.
2. Set the current node i to be the origin r, and initialize its weight: Wr = 1.
3. For all links (i, j) leaving node i, set Wij = Wi Lij .
4. If i is the destination, stop. Otherwise, set i to be the next node in
topological order.
5. Calculate $W_i = \sum_{(h,i) \in \Gamma^{-1}(i)} W_{hi}$ by summing the weights of the incoming
links.
6. Return to step 3.
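The steps above can be sketched in a few lines of Python (a sketch under our own assumptions: the nodes are already supplied in topological order, the Lij values are precomputed, and each weight is pushed forward to its successors, which accumulates the sum in step 5 by the time each node is reached):

```python
def forward_pass(order, succ, L, origin):
    """Node weights W_i and link weights W_ij, computed in topological order.

    order  : nodes in topological order, beginning with the origin
    succ   : dict mapping each node to the list of its successor nodes
    L      : dict of link likelihoods L[i, j] from equation (8.58)
    """
    W = {i: 0.0 for i in order}
    W[origin] = 1.0            # step 2: initialize the origin's weight
    W_link = {}
    for i in order:
        for j in succ.get(i, []):
            # step 3: W_ij = W_i * L_ij; pushing it into W[j] performs step 5
            W_link[i, j] = W[i] * L.get((i, j), 0.0)
            W[j] += W_link[i, j]
    return W, W_link

# Acyclic version of the Figure 8.6 example (link (3,2) removed); theta = log 2,
# unit travel times, no rescaling, so every allowable link has likelihood 1/2.
order = [1, 2, 3, 4]
succ = {1: [2, 3], 2: [3, 4], 3: [4]}
L = {a: 0.5 for a in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]}
W, W_link = forward_pass(order, succ, L, origin=1)
# W[4] = 1/4 + 1/4 + 1/8 = 5/8: the paths [1,2,4], [1,3,4], and [1,2,3,4]
```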
With the node and link weights in hand, we proceed to calculate the flows
on each link xij and the flow through each node xi. Using the link weights, we
can rewrite equation (8.52) for the flow through each node as
\[ x_i = d^{rs}\, \frac{W_i V^{is}}{V^{rs}}. \]
[Figure: five-node network for this example; link labels are performance functions such as 1 + x/100, 2 + x/100, 3 + x/100, and x/100.]

\[ p^{[1,3,5]} = \frac{e^{-4}}{e^{-4} + e^{-3} + e^{-3}} = 0.155 \qquad
p^{[1,3,2,4,5]} = \frac{e^{-3}}{e^{-4} + e^{-3} + e^{-3}} = 0.422 \]
\[ p^{[1,2,4,5]} = \frac{e^{-3}}{e^{-4} + e^{-3} + e^{-3}} = 0.422 \]
as the path choice proportions. Multiplying each of these by the total demand
(2368) gives path flows of approximately 368, 1000, and 1000 vehicles, respectively.
As you can verify, these path flows correspond to the same link flows shown in
Table 8.2.
So, the components of $\mathbf{L}^2$ are exactly the portion of the sums defining $V_{ab}$ for
segments of length two!
We demonstrate this using the example from Figure 8.6, recalling that d14 =
1, θ = log 2, and that all links have unit cost. In this network, the matrices L
and L2 take the form
0 1/2 1/2 0 0 1/4 1/4 1/2
0 0 1/2 1/2 2
0 1/4 0 1/4
L= 0 1/2
L = 0 0 1/4 1/4 .
(8.64)
0 1/2
0 0 0 0 0 0 0 0
8 The parentheses are intentional: $(\mathbf{L}^2)_{ab}$ is the component of matrix $\mathbf{L}^2$ in row a and
column b. This is not the same as $(L_{ab})^2$, the square of the component of matrix $\mathbf{L}$ in row a
and column b.
Looking at the first row of $\mathbf{L}^2$, we see that $(\mathbf{L}^2)_{12} = 1/4$, because there is one
segment which starts at node 1, ends at node 2, and contains two links ([1,3,2]),
and $\exp(-\theta c^{[1,3,2]}) = 1/4$. Similarly $(\mathbf{L}^2)_{13} = 1/4$, and $(\mathbf{L}^2)_{14} = 1/2$ because
there are two segments starting at node 1, ending at node 4, and containing two
links ([1,2,4] and [1,3,4]). The sum $\exp(-\theta c^{[1,2,4]}) + \exp(-\theta c^{[1,3,4]})$ is indeed
1/2. In this matrix, $(\mathbf{L}^2)_{23} = 0$, because there are no segments of length two
starting at node 2 and ending at node 3.
Proceeding a step further, we have
\[ (\mathbf{L}^3)_{ab} = \sum_{c \in N} (\mathbf{L}^2)_{ac} L_{cb}. \tag{8.65} \]
By the same logic, we see that this sum expresses the component of Vab corre-
sponding to segments of length three. Every segment of length three connecting
a to b consists of a segment of length two connecting a to some node c, followed
by a link (c, b). Group the sum of all segments of length three by this final link,
and note that (L2 )ac already contains the relevant portion of the sum for the
first segment.
Thus, by induction, the components of the matrix $\mathbf{L}^n$ contain the portion of
the sum defining $\mathbf{V}$ corresponding to segments of length n. Therefore
\[ \mathbf{V} = \mathbf{L}^0 + \mathbf{L}^1 + \mathbf{L}^2 + \mathbf{L}^3 + \cdots, \tag{8.66} \]
where the infinite sum is needed since we allow paths with an arbitrary number
of cycles. Assuming that this sum exists, we can calculate it as follows:
\begin{align}
\mathbf{V} &= \mathbf{I} + \mathbf{L} + \mathbf{L}^2 + \mathbf{L}^3 + \cdots \tag{8.67} \\
&= \mathbf{I} + \mathbf{L}(\mathbf{I} + \mathbf{L} + \mathbf{L}^2 + \cdots) \tag{8.68} \\
&= \mathbf{I} + \mathbf{L}\mathbf{V}. \tag{8.69}
\end{align}
Therefore $\mathbf{V} - \mathbf{L}\mathbf{V} = \mathbf{I}$, or
\[ \mathbf{V} = (\mathbf{I} - \mathbf{L})^{-1}. \tag{8.70} \]
After calculating the matrix V with this formula, we can directly read off its
components Vab and use them to calculate the link flows using (8.59).
To complete the example, we have
\[ \mathbf{V} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & \tfrac43 & \tfrac23 & 1 \\ 0 & \tfrac23 & \tfrac43 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.71} \]
and, for example, the flow on link (2,3) is given by
\[ x_{23} = d^{14}\, \frac{V_{12} L_{23} V_{34}}{V_{14}} = 1 \cdot \frac{1 \cdot \tfrac12 \cdot 1}{1} = \frac12. \tag{8.72} \]
Repeating this process gives the flow on every link and, unlike the approach of the
previous section, does not involve summing an infinite series term by term; the
formula (8.59) handles all of the paths at once.
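This example can be reproduced in a short script. The sketch below (plain Python, nodes 1–4 stored in rows 0–3) computes V by iterating V ← I + LV from equation (8.69), which converges to the inverse because the series (8.66) converges, and then reads off link flows:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Likelihood matrix for Figure 8.6: theta = log 2 and unit costs give 1/2
# on every existing link.
L = [[0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 0.0]]
n = len(L)
I = [[float(i == j) for j in range(n)] for i in range(n)]

# Iterate V <- I + L V (equation (8.69)); converges to (I - L)^(-1).
V = [row[:] for row in I]
for _ in range(200):
    LV = mat_mul(L, V)
    V = [[I[i][j] + LV[i][j] for j in range(n)] for i in range(n)]

def link_flow(i, j, r=1, s=4, d=1.0):
    """x_ij = d * V_ri * L_ij * V_js / V_rs, with nodes numbered from 1."""
    return d * V[r - 1][i - 1] * L[i - 1][j - 1] * V[j - 1][s - 1] / V[r - 1][s - 1]
```

Here `V` reproduces (8.71), and `link_flow(2, 3)` returns 1/2 as in (8.72).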
where (r, s) is the OD pair corresponding to path π. Therefore, the SUE prob-
lem can be expressed as follows: find a feasible path flow vector h∗ such that
h∗ = H(C(h∗ )). This is a standard fixed-point problem. Clearly H and C
are continuous functions if the link performance functions are continuous, and
the set of feasible path flow vectors is compact and convex, so Brouwer's theorem
immediately gives existence of a solution to the SUE problem.
Notice that this was much easier than showing existence of an equilibrium
solution to the original traffic assignment problem! For that problem, there was
no equivalent of (8.73). Travelers were all using shortest paths, but if there
were two or more shortest paths there was no rule for how those ties should
be broken. As a result, we had to reformulate the problem as a variational
inequality and introduce an auxiliary function based on movement of a point
under a force. For the SUE problem, there is no need for such machinations,
and we can write down the fixed point problem immediately.
However, fixed-point theorems do not offer much help in terms of actually
finding the SUE solution. It turns out that convex programming and variational
inequality formulations exist as well, and we will get to them shortly. But first,
as a practical note, we mention that our definition of the reasonable path set
Π̂ should be specified without reference to the final travel times. The reason
is that until we have found the equilibrium solution, we do not know what
the link and path travel times will be. If the sets of reasonable paths vary
from iteration to iteration, as the flows and travel times change, there may be
problems with convergence or solution consistency. This is why the methods
described above for generating totally acyclic path sets relied on constants $c^0_{ij}$
hπ ≥ 0 ∀π ∈ Π̂ (8.77)
Substituting into (8.80) gives the logit formula, and therefore the path flows
solving Fisk’s convex optimization problem are those solving logit stochastic
user equilibrium.
This convex optimization problem can be solved using the method of suc-
cessive averages. In this way, SUE can be solved quite simply, using a familiar
method from the basic TAP. The only change is that x∗ is calculated using
a method from the previous section, rather than by finding an all-or-nothing
assignment loading all flow onto shortest paths. The algorithm is as follows:
1. Choose an initial feasible link assignment x.
2. Update link travel times based on x.
3. Calculate target flows x∗ :
(a) For each OD pair (r, s), use a method from the previous section to
calculate OD-specific flows x∗rs .
(b) Calculate $\mathbf{x}^* \leftarrow \sum_{(r,s) \in Z^2} \mathbf{x}^{*rs}$.
[Figure 8.14: Braess network; link performance functions are 10x, 50 + x, and 10 + x.]
path, which would result in a dramatic change in x∗ , moving all flow from an
OD pair to that new shortest path. Furthermore, when there are ties there
are multiple possible x∗ values. So, in this case we had to introduce auxiliary
convergence measures like the relative gap or average excess cost. SUE is simpler
in that we can simply compare the current and target solutions — in fact, as
defined earlier, the relative gap and average excess cost do not make sense,
since the equilibrium principle is no longer defined by all travelers being on the
shortest path. For these reasons, the method of successive averages works much
better for SUE than for deterministic assignment.
To demonstrate this algorithm, consider the Braess paradox network shown
in Figure 8.14, where the demand from node 1 to node 4 is 6 vehicles, θ = 0.01,
and all paths are allowable. Assume that we choose the initial solution x by
performing Dial’s method on this network, using the free-flow travel times.
Table 8.3 shows the calculations; in this table, we first calculate the $L_i$ values
for nodes (based on shortest paths), then the link likelihoods $L_{ij}$; then the
node and link weights $W_i$ and $W_{ij}$, and finally the node and link flows $x_i$ and
$x_{ij}$. This gives initial link flows of $\mathbf{x} = (4.282, 1.718, 2.563, 1.718, 4.282)$.
Recalculating the link performance functions with these flows gives new link
travel times, which we put into Dial's method again, giving the result $\mathbf{x}^* =
(3.976, 2.024, 1.951, 2.024, 3.976)$. Notice that $\mathbf{x}^*$ and $\mathbf{x}$ are quite close to
each other! This is a very different situation than when the method of successive
averages is applied to the classical traffic assignment problem (compare with the
examples in Section 6.2.1), where $\mathbf{x}^*$ was an extreme-point solution quite far
away from $\mathbf{x}$.
So the link flows x are updated by averaging x∗ into the old x values, using a
weight of λ = 1/2, producing the results in the rightmost column of Table 8.3.
The process is repeated until convergence.
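This demonstration can be scripted directly. The sketch below is ours, not the book's procedure verbatim: for this three-path network we perform each stochastic loading by path enumeration rather than Dial's link-based method (the results are identical), and we use the decreasing step sizes λ = 1/(k+1), which gives λ = 1/2 on the first averaging step as in the text.

```python
import math

# Braess network of Figure 8.14 (hard-coded link performance functions).
perf = {
    (1, 2): lambda x: 10 * x,
    (1, 3): lambda x: 50 + x,
    (2, 3): lambda x: 10 + x,
    (2, 4): lambda x: 50 + x,
    (3, 4): lambda x: 10 * x,
}
paths = [[(1, 2), (2, 4)], [(1, 3), (3, 4)], [(1, 2), (2, 3), (3, 4)]]
d, theta = 6.0, 0.01

def logit_loading(times):
    """Stochastic network loading; path enumeration is fine for three paths."""
    weights = [math.exp(-theta * sum(times[a] for a in p)) for p in paths]
    denom = sum(weights)
    x = {a: 0.0 for a in perf}
    for p, w in zip(paths, weights):
        for a in p:
            x[a] += d * w / denom
    return x

# Initial solution: loading based on free-flow travel times.
x = logit_loading({a: f(0.0) for a, f in perf.items()})
for k in range(1, 50):
    times = {a: perf[a](x[a]) for a in perf}
    x_target = logit_loading(times)      # target flows x*
    lam = 1.0 / (k + 1)                  # lam = 1/2 on the first step
    x = {a: (1 - lam) * x[a] + lam * x_target[a] for a in perf}
```

The first loading reproduces the initial flows (4.282, 1.718, 2.563, 1.718, 4.282) reported in the text.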
harder because this program makes use of the path-flow variables h. With the
classical traffic assignment problem, we could express solutions and the objective
solely in terms of the link flows x. The addition of the h log h terms to the SUE
objective function renders this impossible. Furthermore, the number of paths
grows very quickly with network size (and if we are choosing the reasonable
path set to include fully cyclic paths, the number of paths is usually infinite).
The method of successive averages avoids this problem by not actually refer-
ring to the objective function at any point — if you review the steps above, you
see that the objective function is never calculated. Its role is implicit, guarantee-
ing that the algorithm will eventually converge, since the direction x∗ − x is one
along which the objective is decreasing. If we can find an efficient way to evalu-
ate the objective function, then we can develop an analogue to the Frank-Wolfe
method for classical assignment.
It turns out that we can reformulate the objective function in terms of the
destination-aggregated flows on each link, xsij (see Section 5.2.3), using the
Markov property of the logit loading. There is an equivalent disaggregation by
origin (if we reverse our interpretation of the Markov property; see Exercise 18.)
Recall the following results from Section 8.3.3:9
There exist values $P_{ij}^s$ for each link (i, j) and destination s, such that
\[ h^\pi = d^{rs} \prod_{(i,j) \in \pi} (P_{ij}^s)^{\delta_{ij}^\pi} \tag{8.82} \]
Proposition 8.4. Let h be a feasible path flow vector satisfying the Markov
property, and let xsij and xsi be the corresponding destination-aggregated link
and node flows. Then
\[ \sum_{\pi \in \hat{\Pi}} h^\pi \log \frac{h^\pi}{d^{rs}} = \sum_{s \in Z} \sum_{(i,j) \in A} x_{ij}^s \log x_{ij}^s - \sum_{s \in Z} \sum_{\substack{i \in N \\ i \neq s}} x_i^s \log x_i^s \tag{8.84} \]
which does not require path enumeration. We can thus develop the following
analogue to the Frank-Wolfe algorithm:
(a) For each destination s, use a method from the previous section to
calculate OD-specific flows x∗s .
(b) Calculate $\mathbf{x}^* \leftarrow \sum_{s \in Z} \mathbf{x}^{*s}$.
4. Find the value of $\lambda$ minimizing (8.92) along the line
$(1 - \lambda)\,(\mathbf{x}, \mathbf{x}^s) + \lambda\,(\mathbf{x}^*, \mathbf{x}^{*s})$.
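Proposition 8.4, which makes this reformulation possible, can be verified numerically. The sketch below uses the path flows of Figure 8.11 (single OD pair from 1 to 3, demand 20); they come from a logit loading over all paths, so they satisfy the Markov property and the identity (8.84) applies.

```python
import math

# Path flows from Figure 8.11: single OD pair (1,3), d = 20, destination s = 3.
h = {(1, 2, 3): 8.0, (1, 2, 5, 3): 4.0, (1, 4, 2, 3): 4.0,
     (1, 4, 2, 5, 3): 2.0, (1, 4, 6, 5, 3): 2.0}
d, s = 20.0, 3

# Destination-aggregated link and node flows implied by these path flows.
x_link, x_node = {}, {}
for path, h_pi in h.items():
    for a in zip(path, path[1:]):
        x_link[a] = x_link.get(a, 0.0) + h_pi
for (i, j), xij in x_link.items():
    x_node[i] = x_node.get(i, 0.0) + xij   # flow leaving node i

# Left side: entropy term summed over paths; right side: link/node flows only.
lhs = sum(h_pi * math.log(h_pi / d) for h_pi in h.values())
rhs = (sum(x * math.log(x) for x in x_link.values())
       - sum(x * math.log(x) for i, x in x_node.items() if i != s))
```

The two sides agree, which is the point of the proposition: the entropy term can be evaluated from link and node flows without enumerating paths.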
Recall that the most likely path flows were defined as those maximizing
entropy, and solving the following optimization problem:
\begin{align}
\max_{\mathbf{h}} \quad & -\sum_{\pi \in \hat{\Pi}} h^\pi \log(h^\pi / d) \tag{8.93} \\
\text{s.t.} \quad & \sum_{\pi \in \hat{\Pi}} \delta_{ij}^\pi h^\pi = x_{ij}^* \quad \forall (i,j) \in A \tag{8.94} \\
& \sum_{\pi \in \hat{\Pi}} h^\pi = d \tag{8.95} \\
& h^\pi \geq 0 \quad \forall \pi \in \hat{\Pi} \tag{8.96}
\end{align}
where we have assumed there is a single origin-destination pair for simplicity.
We can transform this into an equivalent optimization that more closely re-
sembles the stochastic user equilibrium optimization problems. First, we replace
maximization by minimization by changing the sign of the objective. Second,
we can remove d from the objective, because
\begin{align}
\sum_{\pi \in \hat{\Pi}} h^\pi \log(h^\pi / d) &= \sum_{\pi \in \hat{\Pi}} h^\pi \log h^\pi - \sum_{\pi \in \hat{\Pi}} h^\pi \log d \tag{8.97} \\
&= \sum_{\pi \in \hat{\Pi}} h^\pi \log h^\pi - d \log d \tag{8.98}
\end{align}
The last two of these are simply the constraints (8.100) and (8.101). The first
can be solved for the path flows, giving
To satisfy the demand constraint (8.101), κ must be chosen so that the sum
of (8.109) over all paths gives the total demand d.
Omitting the algebra, we must have
\[ \kappa = \log d + 1 - \log \sum_{\pi' \in \hat{\Pi}} \exp(-\theta c^{\pi'}) \tag{8.110} \]
or
\[ h^\pi = d\, \frac{\exp(-\theta c^\pi)}{\sum_{\pi' \in \hat{\Pi}} \exp(-\theta c^{\pi'})}. \tag{8.111} \]
But this is just the logit formula! The Lagrange multiplier θ must be chosen to
satisfy the remaining constraint on the average path cost.
This derivation shows that the most likely path flows and stochastic user
equilibrium problems have a similar underlying structure. If we relax the re-
quirement that all travelers be on shortest paths, and simply constrain the aver-
age cost of travel, the most likely path flows coincide with a logit loading, where
the parameter θ is the Lagrange multiplier for this constraint. As θ approaches
infinity, the average cost of travel approaches its value at the deterministic user
equilibrium solution, and the stochastic user equilibrium path flows approach
the most likely path flows in the corresponding deterministic problem. This
provides another algorithmic approach for solving for most likely path flows, in
addition to those discussed in Section 6.5. In practice, this algorithm is difficult
to implement, because of numerical issues that arise as θ grows large.
Common methods for creating totally acyclic route sets can also create unrea-
sonable artifacts, and allowing all cyclic paths can also be unreasonable if the
network topology creates many such paths with low travel times. See Exer-
cise 25 for some concrete examples. The main alternative to the logit model is
the probit model, in which the error terms ε have a multivariate normal distribution,
with a (possibly nondiagonal) covariance matrix to allow for correlation between
these terms.
The framework of stochastic user equilibrium can be generalized to other
distributions of the error terms, including correlation. The full development
of this general framework is beyond the scope of this book, but we provide an
overview and summary. The objective function (8.74) must be replaced with
\[ z(\mathbf{x}) = -\sum_{(r,s) \in Z^2} d^{rs}\, E\left[\min_{\pi \in \Pi^{rs}} (c^\pi + \epsilon^\pi)\right] + \sum_{(i,j) \in A} x_{ij} t_{ij} - \sum_{(i,j) \in A} \int_0^{x_{ij}} t_{ij}(x)\, dx \tag{8.112} \]
where the expectation is taken with respect to the "unobserved" random variables
ε, and the tij in the last two terms are understood to be functions of xij. It
can be shown that this function is convex, and therefore that the SUE solution
is unique. However, evaluating this function is harder. The first term in (8.112)
involves an expectation over all paths connecting an OD pair. In discrete choice,
this is known as the satisfaction function, and expresses the expected perceived
travel time on a path chosen by a traveler. In the case of the logit model, this ex-
pectation can be computed in closed form; for most distributions it cannot, and
must be evaluated through Monte Carlo sampling or another approximation.
The method of successive averages can still be used, even without evaluating
the objective function. Step 4a needs to be replaced with a stochastic network
loading, using the current travel times and whatever distribution of ε is chosen.
This often requires Monte Carlo sampling as well: for (multiple) samples of the
ε terms, the shortest paths can be found using one of the standard algorithms,
and the resulting flows averaged together to form an estimate of x∗.
Because we are using an estimate of x∗ , it is possible that the target direction
is not exactly right, and that it is not in a direction in which z(x) is decreasing.
Nevertheless, as long as it is correct “on average” (i.e., the sampling is done in
an unbiased manner), one can show that the method of successive averages will
still converge to the stochastic user equilibrium solution.
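As a sketch of such a sampling-based loading, consider a hypothetical two-route example (our own toy data, not from the text) in which perceived times are normal with standard deviation proportional to the measured time:

```python
import random

# Hypothetical two-route example: equal measured times; perceived times are
# normal with standard deviation proportional to the mean (an assumed model).
d = 1000.0
measured = [30.0, 30.0]
sigma = [0.3 * c for c in measured]

def sampled_loading(n_samples, seed=0):
    """Estimate the target flows x* by Monte Carlo sampling of perceived times."""
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n_samples):
        perceived = [rng.gauss(c, s) for c, s in zip(measured, sigma)]
        counts[perceived.index(min(perceived))] += 1   # all flow to the minimum
    return [d * k / n_samples for k in counts]

x_star = sampled_loading(20000)
```

With equal measured times the estimated split comes out close to 50/50, and because each sample assigns flow to a genuinely sampled shortest path, the estimate of x∗ is unbiased, which is what the convergence argument for the method of successive averages requires.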
8.5 Exercises
1. [25] Verify that each of the demand functions D below is strictly decreasing
and bounded above (for κ ≥ 0), then find the inverse functions D−1(d).
(a) D(κ) = 50 − κ
(b) D(κ) = 1000/(κ + 1)
(c) D(κ) = 50 − κ2
(d) D(κ) = (κ + 2)/(κ + 1)
2. [13] The total misplaced flow reflects how consistent a solution is with the
demand model (in this case, that the OD matrix should be given by the demand functions).
Suggest another measure for how close a particular OD matrix and traffic
assignment are to satisfying this consistency condition. Your proposed
measure should be a nonnegative and continuous function of values related
to the solution (e.g., drs , xij , κrs , etc.), which is zero if and only if the
OD matrix is completely consistent with the demand functions. Compare
your new measure with the total misplaced flow, and comment on any
notable differences.
3. [31] Verify that the elastic demand objective function (8.16) is convex,
given the assumptions made on the demand functions.
[Figure 8.15: Braess network; link performance functions are 10x, 50 + x, and 10 + x.]
4. [47] Using the network in Figure 8.15, solve the elastic demand equilibrium
problem with demand given by d14 = 15−κ14 /30. Perform three iterations
of the Frank-Wolfe method, and report the average excess cost and total
misplaced flow for the solution.
5. [48] Using the network in Figure 8.15, solve the elastic demand equilibrium
problem with demand given by d14 = 10 − κ14 /30.
(a) Perform three iterations of the Frank-Wolfe method designed for elas-
tic demand (Section 8.1.5).
(b) Transform the problem to an equivalent fixed-demand problem using
the Gartner transformation from Section 8.1.2, and perform three
iterations of the original Frank-Wolfe method.
(c) Compare the performance of these two methods: after three itera-
tions, which is closer to satisfying the equilibrium and demand con-
ditions?
6. [49] Using the network in Figure 8.16, solve the elastic demand equilibrium
problem with demand functions q 19 = 1000 − 50(u19 − 50) and q 49 =
1000 − 75(u49 − 50). The cost function on the light links is 3 + (xa /200)2 ,
and the cost function on the thick links is 5 + (xa /100)2 . 1000 vehicles
are traveling from node 1 to 9, and 1000 vehicles from node 4 to node 9.
Perform three iterations of the Frank-Wolfe method and report the link
flows and OD matrix.
7. [13] Assume that (x, d) and (x∗ , d∗ ) are both feasible solutions to an
elastic demand equilibrium problem. Show that (λx + (1 − λ)x∗ , λd +
(1 − λ)d∗ ) is also feasible if λ ∈ [0, 1]. This ensures that the Frank-Wolfe
solutions are always feasible, assuming we start with x and d values which
are consistent with each other, and always choose targets in a consistent
way.
8. [57] (Calculating derivative formulas.) Let x and d be the current, feasi-
ble, link flows and OD matrix, and let x∗ and d∗ be any other feasible link
flows and OD matrix. Let x0 = λx∗ + (1 − λ)x and d0 = λd∗ + (1 − λ)d.
[Figure 8.16: 3 × 3 grid network with nodes 1–9 (node 1 at bottom-left, node 9 at top-right).]
(a) Let f (x0 , d0 ) be the objective function for the elastic demand equi-
librium problem. Recognizing that x0 and d0 are functions of λ,
calculate df /dλ.
(b) For the elastic demand problem, show that $\left.\frac{df}{d\lambda}\right|_{\lambda=0} \leq 0$ if $\mathbf{d}^*$ and
$\mathbf{x}^*$ are chosen in the way given in the text. That is, the objective
function is nonincreasing in the direction of the "target." (You can
assume that the demand function values are strictly positive if that
would simplify your proof.)
9. [24] Consider a two-link network. For each pair of link performance
functions shown below, determine whether or not the symmetry condi-
tion (8.26) is satisfied.
(a) t1 = 4 + 3x1 + x2 , t2 = 2 + x1 + 4x2
(b) t1 = 7 + 3x1 + 4x2 , t2 = 12 + 2x1 + 4x2
(c) t1 = 4 + x1 + 3x2 , t2 = 2 + 3x1 + 2x2
(d) t1 = 3x21 + x2 , t2 = 4 + x1 + 4x32
(e) t1 = 3x21 + 2x22 , t2 = 2x21 + 3x32
(f) t1 = 50 + x1 , t2 = 10x2
10. [34] Determine which of the pairs of link performance functions in the
previous exercise are strictly monotone.
11. [49] Consider the network in Figure 8.16 with a fixed demand of 1000
vehicles from 1 to 9 and 1000 vehicles from 4 to 9. The link performance
function on every link a arriving at a “merge node” (that is, nodes 5, 6, 8,
and 9) is 4 + (xa /150)2 + (xa0 /300)2 where a0 is the other link arriving at
the merge. Verify that the link interactions are symmetric, and perform
five iterations of the Frank-Wolfe method. Report the link flows and travel
times.
12. [49] Consider the network in Figure 8.16 with a fixed demand of 1000
vehicles from 1 to 9 and 1000 vehicles from 4 to 9. The link performance
function on every link a arriving at a “merge node” (that is, nodes 5,
[Figure: three-node network for this exercise, with parallel links 1 and 2 from A to B and parallel links 3 and 4 from B to C.]
and the demand from node A to node C is 10 vehicles. For both methods
below, start with an initial solution loading all flow on links 1 and 4.
(a) Use three iterations of the diagonalization method to try to find an
equilibrium solution, and report the link flows and average excess
cost. (As before, this means finding three x∗ vectors after your initial
solution.)
(b) Use three iterations of simplicial decomposition to try to find an
equilibrium solution, and report the link flows and average excess
cost. Three iterations means that X should have three vectors in it
when the algorithm terminates (unless it terminates early because
the x∗ you find is already in X ). For each subproblem, make the
number of improvement steps one less than the size of X (so when
X has 1 vector, perform 0 steps; when it has 2 vectors, perform 1
step, and so on). For each of these steps, try the sequence of µ
values 1/2, 1/4, 1/8, . . ., choosing the first that reduces the restricted
average excess cost.
[Figure 8.18: a 3×3 grid network. Bottom row A–B–C, middle row D–E–F, top row G–H–I; links D–E and E–F have travel time 1, and all other links have travel time 2.]
16. [32] Show that any set of totally acyclic paths satisfies the segment substitution property. Then show that the set of all paths (including all cyclic paths) also satisfies this property.
17. [53] Complete the example following equation (8.40) by forming the infinite sum for the flow on links (2,3) and (3,2), and showing that they are equal to 1/2.
18. [62] Reformulate the Markov property and the results in Section 8.3.3 in terms of conditional probabilities for the link a vehicle used to arrive at a given node, rather than the link a vehicle will choose to depart a given node.
19. [34] In the logit formula (8.39), show that the same path choice probabilities are obtained if each link travel time tij is replaced with Li + tij − Lj, where L is a vector of node-specific constants. (In practice, these are usually the shortest path distances from the origin.)
20. [51] What would have to change in the “full cyclic path set” stochastic
network loading procedure, if there were multiple links with the same tail
and head nodes?
21. [35] Consider the network in Figure 8.18 with 1000 vehicles traveling from
node A to node I, where the link labels are the travel times. Use the first
criterion in Section 8.3.2 to define the path set, and identify the link flows
corresponding to these travel times. Assume θ = 1.
22. [35] Repeat Exercise 21 for the third definition of a path set. Assume
θ = 1.
23. [34] Show that the objective function (8.74) is strictly convex.
24. [49] Consider the network in Figure 8.16 with a fixed demand of 1000 vehicles from 1 to 9 and 1000 vehicles from 4 to 9. The light links have delay function 3 + (xij/200)², and the dark links have delay function 5 + (xij/100)². Assume that drivers choose paths according to the stochastic user equilibrium principle, with θ = 1/5. Perform three iterations of the method of successive averages, and report the link flows. Assume all paths are allowed.
25. [24] (Limitations of logit assignment). This problem showcases three “problem instances” for the stochastic network loading models described in this chapter (the logit model, and one proposed definition of allowable paths). Throughout, assume that the first definition in Section 8.3.2 is used to define the allowable path set. These instances motivated the development of probit and other, more sophisticated, stochastic equilibrium models.
(a) Consider the network in Figure 8.19(a), where the number by each link represents its travel time and z ∈ (0, 1). Let p↑ represent the proportion of vehicles choosing the top path. In a typical probit model, we have p↑PROBIT = 1 − Φ(√(z/(2π − z))), where Φ(·) is the standard cumulative normal distribution function. Calculate p↑LOGIT for the logit model as a function of z, and plot p↑PROBIT and p↑LOGIT as z varies from 0 to 1. Which do you think is more realistic, and why?
(b) Consider networks of the type shown in Figure 8.19(b), where there is a top path consisting of a single link, and a number of bottom paths. The network is defined by an integer m; there are m − 1 intermediate nodes in the bottom paths, and each consecutive pair of intermediate nodes is connected by two parallel links with travel time 8/m. In a common probit model, p↑PROBIT = Φ(−0.435√m). What is p↑LOGIT as a function of m? Again create plots for small values of m (say 1 to 8), indicate which you think is more realistic, and explain why.
(c) The left and right panels of Figure 8.19(c) show a network before and after construction of a new link. For each of these networks, identify the proportion of travelers choosing each link if θ = 1. Do your findings seem reasonable?
8.5. EXERCISES 309
[Figure 8.19 appears here: panel (a) shows a small network with nodes 1, 3, and 2 and link travel times involving z, 1 − z, and 1; panel (b) shows the networks for m = 1, 2, and 4; panel (c) shows the “before” and “after” networks.]
Figure 8.19: Networks for Exercise 25. The label on each link is its constant
travel time.
Part III

Dynamic Traffic Assignment
Chapter 9
Network Loading
This chapter discusses network loading, the process of modeling the state of
traffic on a network, given the route and departure time of every vehicle. In
static traffic assignment, this is a straightforward process based on evaluating
link performance functions. In dynamic traffic assignment, however, network
loading becomes much more complicated due to the additional detail in the
traffic flow models — but it is exactly this complexity which makes dynamic
traffic assignment more realistic than static traffic assignment. Rather than link
performance functions, dynamic network loading models generally rely on some
concepts of traffic flow theory. There are a great many theories, and an equally
great number of dynamic network loading models, so this chapter will focus on
those most commonly used.
To give the general flavor of network loading, we start with two simple link
models, the point queue and spatial queue (Section 9.1), which describe traffic
flow on a single link. We next present three simple node models describing how
traffic streams behave at junctions (Section 9.2). With these building blocks we
can perform network loading on simple networks, showing how link and node
models interact to represent traffic flow in a modular way (Section 9.3).
However, the point queue and spatial queue models have significant limitations in representing traffic flow. The most common network loading models for
dynamic traffic assignment are based on the hydrodynamic model of traffic flow,
reviewed in Section 9.4. This theory is based in fluid mechanics, and assumes
that the traffic stream can be modeled as the motion of a fluid, but it can be
derived from certain car-following models as well, which have a more behavioral
basis. The cell transmission model and link transmission model are link models
based on this theory, and both of these are discussed in Section 9.5. Section 9.6
concludes the chapter with a discussion of more sophisticated node models that
can represent general intersections.
This chapter aims to present several alternative network loading schemes as
part of the general dynamic traffic assignment framework in Figure 1.8, so that
any of them can be combined with the other steps in a flexible way.
Figure 9.1: Convention for indexing discrete time intervals for two hypothetical
variables y (measured at an instant) and z (measured over time).
The exact components of the network state vary from one model to the next
(for instance, signals may be modeled in detail, approximately, or not at all),
and as you read about the link and node models presented in this chapter,
you should think about what information you would need to store in order to
perform the computations for each link and node model. At a minimum, it is
common to record the cumulative number of vehicles which have entered and
left each link at each timestep, since the start of the modeling period. These
values are denoted by N↑(t) and N↓(t), respectively; the arrows are meant as a mnemonic for “upstream” and “downstream,” since they can be thought of as counters at the ends of the links. So, for instance, N↑(3) is the number of vehicles which have crossed the upstream end of the link by the start of the 3rd time interval (the cumulative entries), and N↓(5) is the number of vehicles which have crossed the downstream end of the link by the start of the 5th time interval (cumulative exits). We will assume that the network is empty when t = 0 (no vehicles anywhere), and as a result N↑(t) − N↓(t) gives the number of vehicles currently on the link at any time t.
It is possible to use different time interval lengths for different links and
nodes, and this can potentially reduce the necessary computation time. There
are also continuous time dynamic network loading models, where flows and other
traffic variables are assumed to be functions defined for any real time value, not
just a finite number of points. These are not discussed here to keep the focus
on the basic network loading ideas, and to avoid technical details associated
with infinitesimal calculations. Finally, some formulas may call for the value of a discrete variable at a non-integer point in time, such as N↑(3.4), in which case a linear interpolation can be used between the neighboring values N↑(3) and N↑(4). If possible, the time step should be chosen to minimize or eliminate these interpolation steps, which are time-consuming and which can introduce numerical errors.
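This bookkeeping can be sketched in a few lines of Python. The counts and the `interpolate` helper below are illustrative, not from the text:

```python
# A minimal sketch: cumulative counts N-up and N-down recorded at integer
# time steps, with linear interpolation for non-integer times.

def interpolate(N, t):
    """Value of a cumulative count N (a list indexed by time step) at a
    possibly non-integer time t, by linear interpolation."""
    lo = int(t)
    if lo == t:
        return N[lo]
    frac = t - lo
    return (1 - frac) * N[lo] + frac * N[lo + 1]

# Hypothetical cumulative entry and exit counts for the first few time steps
N_up = [0, 1, 5, 10]
N_down = [0, 0, 0, 1]

vehicles_on_link = N_up[3] - N_down[3]   # N-up(3) - N-down(3) = 9
print(vehicles_on_link)                  # -> 9
print(interpolate(N_up, 1.5))            # halfway between 1 and 5 -> 3.0
```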
Figure 9.2: Calculating the sending flow when there is a queue on the link (left)
and when the link is at free-flow (right).
The sending flow during time t, denoted S(t), is the number of vehicles which would exit the link during the t-th time interval (that is, between times t and t + 1) if there was no obstruction from
downstream links or nodes (you can imagine that the link is connected to a
wide, empty link downstream). You can also think of this as the flow that is
ready to leave the link during this time interval. The sending flow is calculated
at the downstream end of a link.
To visualize sending flow, Figure 9.2 shows two examples of links. In the
left panel, there is a queue at the downstream end of the link. If there is no
restriction from downstream, the queue would discharge at the full capacity
of the link, and the sending flow would be equal to the capacity of the link
multiplied by ∆t. In the right panel, the link is uncongested and vehicles are
traveling at free-flow speed. The sending flow will be less than the capacity,
because relatively few vehicles are close enough to the downstream end of the
link to exit within the next time interval. The vertical line in the figure indicates
the distance a vehicle would travel at free-flow speed during one time step, so
in this case the sending flow would be 3 vehicles. Note that the actual number
of vehicles which can leave the link during the next time step may be affected
by downstream conditions: perhaps the next link is congested, or perhaps there
is a traffic signal at the downstream end. These considerations are irrelevant
for calculating sending flow, and will be treated with node models, introduced
in Section 9.2. In any case, the sending flow is an upper bound on the actual
number of vehicles which can depart the link.
The receiving flow during time t, denoted R(t), is the number of vehicles
which would enter the link during the t-th time interval if the upstream link
could supply a very large (even infinite) number of vehicles: you can imagine
that the upstream link is wide, and completely full at jam density. You can also
think of this as the maximum amount of flow which can enter the link during
this interval, taking into account the available free space. The receiving flow is
calculated at the upstream end of the link.
To visualize receiving flow, Figure 9.3 shows two examples of links. In the
left panel, the upstream end of the link is empty. This means that vehicles can
potentially enter the link at its full capacity, and the receiving flow would equal
the link’s capacity multiplied by ∆t. The actual number of vehicles which will
enter the link may be less than this, if there is little demand from upstream —
like the sending flow, the receiving flow is simply an upper bound indicating how
9.1. LINK MODEL CONCEPTS 317
Figure 9.3: Calculating the receiving flow when the link is at free-flow (left) and
when there is a queue on the link (right).
many vehicles could potentially enter the link if demand were high enough. In
the right panel, there is a stopped queue which nearly fills the entire link. Here
the vertical line indicates how far into the link a vehicle would travel at free-flow
speed during one time step. Assuming that the stopped vehicles remain stopped
throughout the t-th time interval, the receiving flow is the number of vehicles
which can physically fit into the link, in this case 2.
Each link model has a slightly different way of calculating the sending and
receiving flows, which correspond to different assumptions on traffic behavior
with the link, or to different calculation methods. The next two subsections
present simple link models. Notice how the different traffic flow assumptions in
these models lead to different formulas for sending and receiving flow.
A physical section which spans the length of the link and is assumed
uncongestible: vehicles will always travel over this section at free-flow
speed.
wide portion of the link, and the point queue represents the vehicles which are
delayed as the capacity is reduced at the downstream bottleneck. One can also
imagine a traffic signal at the downstream end, and magical technology (flying
cars?) which allows vehicles to “stack” vertically at the signal — there can be
no congestion upstream of this “stack,” since vehicles can always fly to the top.
(Figure 9.5) One may even imagine that there is no physical meaning to either
of these, and that the physical section and point queue merely represent the
delays incurred from traveling the link at free-flow, and the additional travel
time due to congestion.
The point queue discharges vehicles at a maximum rate of q↓max (measured in vehicles per unit time), called the capacity. The capacity imposes an upper limit on the sending flow, so we always have

S(t) ≤ q↓max ∆t .    (9.1)
However, if the queue is empty, or if only a few vehicles are in the queue,
the discharge rate may be less than this. Once the queue empties, the only
vehicles which can exit the link are ones reaching the downstream end from the
uncongested physical section. Since we assume that all vehicles in this section
travel at the free-flow speed (which we will denote uf ), this means that only the
vehicles that are closer than uf ∆t to the downstream end can possibly leave.
We can use the cumulative counts N ↑ and N ↓ to count the number of vehicles
which are close enough to the downstream end to exit in the next time step.
Since the entire physical section is traversed at the free-flow speed uf , a vehicle
whose distance from the downstream end of the link is exactly uf ∆t distance
units must have passed the upstream end of the link exactly (L−uf ∆t)/uf time
units ago. We call this a “threshold” vehicle, since any vehicle entering the link
after this one has not yet traveled far enough, while any vehicle entering the
link before this one is close enough to the downstream end to exit. The number
of vehicles between the threshold vehicle and the downstream end of the link
can thus be given by
N↑(t − (L − uf ∆t)/uf) − N↓(t) = N↑(t + ∆t − L/uf) − N↓(t) .    (9.2)
The sending flow is the smaller of the number of vehicles which are close
enough to the downstream end to exit, given by equation (9.2), and the capacity
of the queue. Thus
S(t) = min{N↑(t + ∆t − L/uf) − N↓(t), q↓max ∆t} .    (9.3)
The receiving flow for the point queue model is easy to calculate. Since the physical section is uncongestible, the link capacity is the only limitation on the rate at which vehicles can enter. The capacity of the upstream end of the link may be different than the capacity of the downstream end of the link (perhaps due to a lane drop, or a stop sign at the end of the link), so we denote the capacity of the upstream end by q↑max. (If we just use qmax without an ↑ or ↓ superscript, the same capacity applies over the whole link.) The receiving flow is given by

R(t) = q↑max ∆t .    (9.4)
In real traffic networks, queues occupy physical space and cannot be confined to
a single point. The spatial queue model in the next subsection shows one way
to reflect this.
Table 9.1 shows how the point queue model operates, depicting the state of a link over ten time steps. The N↑ and N↓ columns show the cumulative number of vehicles which have entered and left the link by each time step, and the R and S columns show the receiving and sending flows during each time step. The difference between N↑ and N↓ represents the number of vehicles on the link at any point in time.
In this example, we assume that the free-flow speed is uf = L/(3∆t) (so a vehicle takes 3 time steps to traverse the link under free-flow conditions), the upstream capacity is q↑max = 10/∆t, and the downstream capacity is q↓max = 5/∆t. Initially, the sending flow is zero, because no vehicles have reached the
downstream end of the link. The sending flow then increases as flow exits, but
eventually reaches the downstream capacity. At this point, a queue forms and
vehicles exit at the downstream capacity rate. Eventually, the queue clears, the
link is empty, and the sending flow returns to zero. The receiving flow never
changes from the upstream capacity, even when a queue is present. In this
example, notice that N ↓ (t + 1) = N ↓ (t) + S(t). This happens because we are
temporarily ignoring what might be happening from downstream. Depending
on downstream congestion, N ↓ (t + 1) could be less than N ↓ (t) + S(t); but it
could never be greater, because the sending flow is always a limit on the number
of vehicles that can exit. Also notice that for all time steps, N ↑ (t + 1) ≤
N ↑ (t) + R(t), because the receiving flow is a limit on the number of vehicles
that can enter the link.
In practice, the upstream and downstream capacities are usually assumed
the same, in which case we just use the notation qmax to refer to capacities
at both ends. The network features which would make the capacities different
upstream and downstream (such as stop signs or signals) are usually better
represented with node models, discussed in the next section.
Table 9.1: Point queue example, with uf = L/(3∆t), q↑max = 10/∆t, and q↓max = 5/∆t.
t N↑ N↓ R S
0 0 0 10 0
1 1 0 10 0
2 5 0 10 0
3 10 0 10 1
4 17 1 10 4
5 27 5 10 5
6 30 10 10 5
7 30 15 10 5
8 30 20 10 5
9 30 25 10 5
10 30 30 10 0
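The evolution in Table 9.1 can be reproduced with a short script. This is a minimal sketch, assuming ∆t = 1, a free-flow time of 3 steps, and no downstream obstruction (so N↓(t + 1) = N↓(t) + S(t)); the variable names are ours, not the text's:

```python
# Point queue link model: reproduce the S and N-down columns of Table 9.1.
# Assumptions: dt = 1, free-flow travel time = 3 steps, downstream capacity
# 5 veh/step, and no downstream obstruction.

free_flow_steps = 3          # L / (uf * dt)
cap_down = 5                 # downstream capacity per time step

# Cumulative entries N-up(t) for t = 0..10, taken from Table 9.1
N_up = [0, 1, 5, 10, 17, 27, 30, 30, 30, 30, 30]

N_down = [0]
sending = []
for t in range(10):
    # Vehicles close enough to exit: N-up(t + 1 - L/uf), zero at early times
    idx = t - free_flow_steps + 1
    arrived = N_up[idx] if idx >= 0 else 0
    S = min(arrived - N_down[t], cap_down)   # equation (9.3)
    sending.append(S)
    N_down.append(N_down[t] + S)             # no downstream obstruction

print(sending)   # -> [0, 0, 0, 1, 4, 5, 5, 5, 5, 5]
print(N_down)    # -> [0, 0, 0, 0, 1, 5, 10, 15, 20, 25, 30]
```

The printed values match the S and N↓ columns of the table.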
The receiving flow includes an additional term to reflect the finite space on the link for the queue, alongside the link capacity. Since vehicles in the queue are stopped, the space they occupy is given by the jam density kj, expressed in vehicles per unit length. The maximum number of vehicles the link can hold is kj L, while the number of vehicles currently on the link is N↑(t) − N↓(t). The receiving flow cannot exceed the difference between these:

R(t) = min{q↑max ∆t, kj L − (N↑(t) − N↓(t))} .
By assuming that the queue is always at the downstream end of the link,
the spatial queue model essentially assumes that all vehicles in a queue move
together. In reality, there is some delay between when the head of the queue
starts moving, and when the vehicle at the tail of the queue starts moving —
9.2. NODE MODEL CONCEPTS 321
Table 9.2: Spatial queue example, with uf = L/(3∆t), q↑max = 10/∆t, q↓max = 5/∆t, and kj L = 20.
t N↑ N↓ R S
0 0 0 10 0
1 1 0 10 0
2 5 0 10 0
3 10 0 10 1
4 17 1 4 4
5 21 5 4 5
6 25 10 5 5
7 30 15 5 5
8 30 20 10 5
9 30 25 10 5
10 30 30 10 0
when a traffic light turns green, vehicles start moving one at a time, with a
slight delay between when a vehicle starts moving and when the vehicle behind
it starts moving. These delays cannot be captured in a spatial queue model.
To represent this behavior, we will need a better understanding of traffic flow
theory. Section 9.4 will present this information, and we will ultimately build
more realistic link models. The point queue and spatial queue models never-
theless illustrate the basic principles of link models, and what the sending and
receiving flow represent.
Table 9.2 shows how the spatial queue model operates. It is similar to the
point queue example (Table 9.1), but we now introduce a jam density, assuming
that the maximum number of vehicles which can fit on the link is kj L = 20.
In this example, the receiving flow drops once the queue reaches a certain size.
This reflects the finite space available on the link. As a result, it takes longer for
the 30 vehicles to enter the link. The queue is still able to completely discharge
by the end of the ten time steps. As before, the difference between N ↓ (t) and
N ↓ (t + 1) is never more than S(t) (in this example, exactly equal because we
are ignoring downstream conditions), and the difference between N ↑ (t) and
N ↑ (t + 1) is never more than R(t).
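The spatial queue example of Table 9.2 can be reproduced the same way. This sketch assumes the same arrival pattern as the point queue example (1, 4, 5, 7, 10, 3 vehicles over the first six steps, a demand profile we inferred from the N↑ column of Table 9.1), with unserved demand waiting in a vertical queue upstream of the link:

```python
# Spatial queue link model: reproduce Table 9.2.
# Assumptions: dt = 1, free-flow time = 3 steps, upstream capacity 10,
# downstream capacity 5, jam storage kj*L = 20 vehicles; vehicles that
# cannot enter wait in a vertical queue upstream of the link.

free_flow_steps, cap_up, cap_down, storage = 3, 10, 5, 20
demand = [1, 4, 5, 7, 10, 3, 0, 0, 0, 0]   # vehicles arriving upstream each step

N_up, N_down = [0], [0]
receiving, sending = [], []
backlog = 0
for t in range(10):
    on_link = N_up[t] - N_down[t]
    R = min(cap_up, storage - on_link)      # spatial queue receiving flow
    idx = t - free_flow_steps + 1
    arrived = N_up[idx] if idx >= 0 else 0
    S = min(arrived - N_down[t], cap_down)  # same sending flow as (9.3)
    receiving.append(R)
    sending.append(S)
    backlog += demand[t]
    inflow = min(backlog, R)                # entries limited by receiving flow
    backlog -= inflow
    N_up.append(N_up[t] + inflow)
    N_down.append(N_down[t] + S)

print(N_up)       # -> [0, 1, 5, 10, 17, 21, 25, 30, 30, 30, 30]
print(receiving)  # -> [10, 10, 10, 10, 4, 4, 5, 5, 10, 10]
```

Note how the receiving flow drops to 4 at t = 4 and t = 5, exactly as in the table, once the link holds 16 of its 20 vehicles.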
Figure 9.6: Sending and receiving flows interact at nodes to produce turning
movement flows.
1. Vehicles will not voluntarily hold themselves back. That is, it should be
impossible to increase any turning movement flow yhij (t) without violating
one of the constraints listed below, or one of the other constraints imposed
by a specific node model.
2. Each turning movement flow must be nonnegative, that is, yhij (t) ≥ 0 for
The sum on the left may be less than Shi (t), because it is possible that
some vehicles cannot leave (h, i) due to obstructions from a downstream
link or from the node itself (such as a red signal).
5. For each outgoing link, the sum of the turning movement flows into this
link cannot exceed the receiving flow, since that is the maximum number
of vehicles that link can accommodate:
Σ(h,i)∈Γ⁻¹(i) yhij(t) ≤ Rij(t)   ∀(i, j) ∈ Γ(i)    (9.8)
The sum on the left may be less than Rij (t), because there may not be
enough vehicles from upstream links to fill all of the available space in the
link.
6. Route choices must be respected. That is, the values yhij (t) must be
compatible with the directions travelers wish to go based on their chosen
paths; we cannot reassign them on the fly in order to increase a yhij value.
7. The first-in, first-out (FIFO) principle must be respected. This is closely
related to the previous property; we cannot allow vehicles to “jump ahead”
in queue to increase a yhij value, unless there is a separate turn lane or other roadway geometry which can separate vehicles on different routes (Figure 9.7).
8. The invariance principle must be respected. If the outflow from a link is
less than its sending flow, then increasing the sending flow further could
not increase the outflow beyond its current value. Likewise, if the inflow
to a link is less than its receiving flow, then increasing the receiving flow
could not increase the inflow to the link beyond its current value. In other
words, if the sending (or receiving) flow is not “binding,” then its specific
value cannot matter for the actual flows.
The invariance principle warrants additional explanation. In the first case,
assume that flow into link (i, j) is restricted by its receiving flow Rij . This
means that more vehicles wish to enter the link than can be accommodated,
so a queue must form on the upstream link. But when there is a queue on the
B B B
B
A Path
Path B
A A
Figure 9.7: The first-in, first-out principle implies that vehicles waiting to turn
will obstruct vehicles further upstream.
upstream link, its sending flow will increase to the capacity q^hi_max ∆t. If the flow
yhij increases in response, then the queue might be cleared in the next time
interval, the sending flow would drop, a new queue would form, and so on, with
the flows oscillating between time intervals. This is unrealistic, and is an artifact
introduced by choosing a particular discretization, not a traffic phenomenon one
would expect in the field. In the second case, if flow onto link (i, j) is restricted by the sending flow Shi, then there is more space available on link (i, j) than vehicles wish to enter, and a short time later the receiving flow will increase to the capacity q^ij_max ∆t. If yhij would increase because of this increase in the receiving
flow, again an unrealistic oscillation would be seen. The exercises ask you to
compare some node models which violate the invariance principle to those which
do not.
[Figure: two links in series, with upstream link (h, i), sending flow Shi, and downstream link (i, j), receiving flow Rij.]

yhij(t) = min{Shi(t), Rij(t)} ,    (9.9)
that is, the number of vehicles moving from link (h, i) to (i, j) during the t-th
time interval is the lesser of the sending flow from the upstream link in that
time interval, and the receiving flow of the downstream link. For instance,
if the upstream link sending flow is 10 vehicles, while the downstream link
receiving flow is 5 vehicles, a total of 5 vehicles will successfully move from the
upstream link to the downstream one, because that is all there is space for. If the
upstream link sending flow is 3 vehicles and the downstream link receiving flow
is 10 vehicles, 3 vehicles will move from the upstream link to the downstream
one, because that is all the vehicles that are available.
We can check that this node model satisfies all of the desiderata from Sec-
tion 9.2. Going through each of these conditions in turn:
1. The flow yhij (t) is chosen to be the minimum of the upstream sending
flow, and the downstream receiving flow; any value larger than this would
violate either the sending flow constraint or the receiving flow constraint.
4. The formula for yhij (t) ensures it cannot be greater than the upstream sending flow. (This condition simplifies since there is only one incoming and outgoing link, so the summation and “for all” quantifier can be disregarded.)
5. The formula for yhij (t) ensures it cannot be greater than the downstream
receiving flow. (This condition simplifies in the same way.)
[Figure 9.9: a merge node i with incoming links (g, i) and (h, i) (sending flows Sgi and Shi) and outgoing link (i, j) (receiving flow Rij).]
6. Route choice is irrelevant when two links meet in series, since all incoming
vehicles must exit by the same link.
7. FIFO is also irrelevant, since all vehicles entering the node behave in the
same way. (This would not be the case if there was more than one exiting
link, and vehicles were on different paths.) So we don’t have to worry
about FIFO when calculating the turning movement flow.
8. To see that the formula satisfies the invariance principle, we have to check
two conditions. If the outflow from (h, i) is less than the sending flow,
this means that yhij (t) = Rij (t), and the second term in the minimum of
equation (9.9) is binding. Increasing the sending flow (the first term in
the minimum) further would not affect its value. Similarly, if the inflow to
(i, j) is less than its receiving flow, this means that yhij (t) = Shi (t), and
the first term in the minimum is binding. Increasing the receiving flow
(the second term) would not affect its value either.
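The series rule is a one-line computation; a minimal sketch reproducing the two numeric cases discussed above:

```python
# Links-in-series node model: the turning movement flow is the lesser of
# the upstream sending flow and the downstream receiving flow, y = min(S, R).

def series_flow(S_hi, R_ij):
    return min(S_hi, R_ij)

print(series_flow(10, 5))   # downstream space binds: 5 vehicles move
print(series_flow(3, 10))   # upstream demand binds: 3 vehicles move
```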
9.2.2 Merges
A merge node has only one outgoing link (i, j), but more than one incoming
link, here labeled (g, i) and (h, i), as in Figure 9.9. This section only concerns
itself with the case of only two upstream links, and generalizing to the case of
additional upstream links is left as an exercise. Here Ξ(i) = {[g, i, j], [h, i, j]},
and we want to calculate the rate of flow from the upstream links to the downstream one, that is, the flow rates ygij (t) and yhij (t). As you might expect,
the main quantities of interest are the upstream sending flows Sgi (t) and Shi (t),
and the downstream receiving flow Rij (t). We assume that these values have
already been computed by applying a link model.
For brevity, we will omit the time index in the rest of this section — it is
implicit that all calculations are done with the sending and receiving flows at
the current time step.
There are three possibilities, one corresponding to free flow conditions at the
merge, one corresponding to congestion with queues growing on both upstream
links, and one corresponding to congestion on only one upstream link. For the
merge to be freely flowing, both upstream links must be able to transmit all of
the flow which seeks to leave them, and the downstream link must be able to
accommodate all of this flow. Mathematically, we need Sgi + Shi ≤ Rij , and if
this is true then we simply set ygij = Sgi , and yhij = Shi .
In the second case, there is congestion (so Sgi +Shi > Rij ), and furthermore,
flow is arriving fast enough on both upstream links for a queue to form at each
of them. Empirically, in such cases the flow rate from the upstream links is
approximately proportional to the capacity on these links, that is,
ygij / yhij = q^gi_max / q^hi_max    (9.10)
A little thought should convince you that this relationship is plausible. Furthermore, in the congested case, all of the available downstream capacity will
be used, so
ygij + yhij = Rij (9.11)
Substituting (9.10) into (9.11) and solving, we obtain
ygij = q^gi_max / (q^gi_max + q^hi_max) · Rij    (9.12)
with a symmetric expression for yhij .
The third case is perhaps a bit unusual. The merge is congested (Sgi + Shi >
Rij ), but a queue is only forming on one of the upstream links. This may happen
if the flow on one of the upstream links is much less than the flow on the other.
In this case, the proportionality rule allows all of the sending flow from one link
to enter the downstream link, with room to spare. This “spare capacity” can
then be consumed by the other approach. If link (g, i) is the link which cannot
send enough flow to meet the proportionality condition, so that
Sgi < q^gi_max / (q^gi_max + q^hi_max) · Rij ,    (9.13)
then the two flow rates are ygij = Sgi and yhij = Rij − Sgi : one link sends all
of the flow it can, and the other link consumes the remaining capacity. The
formulas are reversed if it is link (h, i) that cannot send enough flow to meet its
proportionality condition.
Exercise 7 asks you to show that the second and third cases can be handled
by the single equation
ygij = med{Sgi , Rij − Shi , q^gi_max / (q^gi_max + q^hi_max) · Rij} ,    (9.14)
where med refers to the median of a set of numbers. This formula applies
whenever Sgi + Shi > Rij . An analogous formula holds for the other approach
by swapping the g and h indices. Exercise 8 asks you to verify the desiderata
of Section 9.2 are satisfied by this equation.
For certain merges, it may not be appropriate to assign flow proportional to
the capacity of the incoming links. Rules of the road, signage, or signalization
[Figure 9.10: a diverge node i with incoming link (h, i) (sending flow Shi) and outgoing links (i, j) and (i, k) (receiving flows Rij and Rik).]
might allocate the capacity of the downstream link differently. In general, the
share of the downstream receiving flow that is allocated to approaches (g, i) and
(h, i) can be written as ψgi and ψhi , respectively, with ψgi + ψhi = 1 and both
of them nonnegative. A more general form of the merge equation can then be
written as
ygij = med {Sgi , Rij − Shi , ψgi Rij } , (9.15)
with a similar formula for (h, i).
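The general merge model of equation (9.15) can be sketched as follows; the numbers in the demonstration are hypothetical, with ψgi = 1/2 corresponding to two approaches of equal capacity under the proportionality rule of (9.14):

```python
# General merge node model, equation (9.15):
#   y_gij = med{S_gi, R_ij - S_hi, psi_gi * R_ij}   when S_gi + S_hi > R_ij,
# and y_gij = S_gi in the free-flow case.

def merge_flows(S_gi, S_hi, R_ij, psi_gi):
    if S_gi + S_hi <= R_ij:                     # free flow: all demand moves
        return S_gi, S_hi
    med = lambda a, b, c: sorted([a, b, c])[1]  # median of three numbers
    y_gij = med(S_gi, R_ij - S_hi, psi_gi * R_ij)
    y_hij = med(S_hi, R_ij - S_gi, (1 - psi_gi) * R_ij)
    return y_gij, y_hij

print(merge_flows(30, 40, 50, 0.5))   # both approaches queue: (25.0, 25.0)
print(merge_flows(10, 40, 40, 0.5))   # only (h,i) queues: (10, 30)
print(merge_flows(10, 20, 50, 0.5))   # free flow: (10, 20)
```

In each congested case the two flows sum to the receiving flow, so all of the downstream capacity is used, as the text requires.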
9.2.3 Diverges
A diverge node is one with only one incoming link (h, i), but more than one
outgoing link, as in Figure 9.10. This section concerns itself with the case of
only two downstream links. The exercises ask you to generalize to the case of
three downstream links, using the same concepts. Let these two links be called
(i, j) and (i, k), so Ξ(i) = {[h, i, j], [h, i, k]}. Our interest is calculating the rate
of flow from the upstream link to the downstream ones, that is, the flow rates
yhij and yhik . We assume that the sending flow Shi and the receiving flows Rij
and Rik have already been calculated. Unlike links in series or merges, we also
need to represent some model of route choice, since some drivers may choose
link (i, j), and others link (i, k). Let pij and pik be the proportions of drivers
choosing these two links during the t-th time interval, respectively. Naturally,
pij and pik are nonnegative, and pij + pik = 1. Like the sending and receiving
flows, these values can change with time, but to avoid cluttering formulas we
will leave the time indices off of p values unless it is unclear which time step we
are referring to.
There are two possibilities, one corresponding to free flow conditions at the
diverge, and the other corresponding to congestion. What does “free flow”
mean? For the diverge to be freely flowing, both of the downstream links must
be able to accommodate the flow which seeks to enter them. The rates at which
vehicles want to enter the two links are pij Shi and pik Shi , so if both downstream
links can accommodate this, we need pij Shi ≤ Rij and pik Shi ≤ Rik . In this
case we simply have yhij = pij Shi and yhik = pik Shi : all of the flow which wants
to leave the diverge can.
The case of congestion is slightly more interesting, and requires making
assumptions about how drivers will behave. One common assumption is that
flow waiting to enter one link at a diverge will obstruct every other vehicle on the
link (regardless of which link it is destined for). This most obviously represents
the case where the upstream link has only a single lane, so any vehicle which
has to wait will block any vehicle behind it; but this model is commonly used
even in other cases.1 When there is congestion, only some fraction φ of the
upstream sending flow can move. The assumption that any vehicle waiting
blocks every vehicle upstream implies that this same fraction applies to both of
the downstream links, so yhij = φpij Shi and yhik = φpik Shi .
So, how to calculate φ? The inflow rate to a link cannot exceed its receiving flow, so yhij = φpij Shi ≤ Rij and yhik = φpik Shi ≤ Rik , or equivalently φ ≤ Rij /(pij Shi ) and φ ≤ Rik /(pik Shi ). Every vehicle which can move will, so
φ = min { Rij /(pij Shi ), Rik /(pik Shi ) } .  (9.16)
Furthermore, we can introduce the uncongested case into this equation as well,
and state
φ = min { Rij /(pij Shi ), Rik /(pik Shi ), 1 }  (9.17)
regardless of whether there is congestion at the diverge or not. Why? If the diverge is at free flow, then φ = 1, but Rij /(pij Shi ) ≥ 1 and Rik /(pik Shi ) ≥ 1. Introducing 1 into the minimum therefore gives the correct answer for free flow. Furthermore, if the diverge is not at free flow, then either Rij /(pij Shi ) < 1 or Rik /(pik Shi ) < 1, so adding 1 does not affect the minimum value. Therefore, this
formula is still correct even in the congested case. Exercise 13 asks you to verify
the desiderata of Section 9.2 are satisfied by this equation.
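The diverge rule of equation (9.17) is just as easy to script. This is a minimal sketch with our own names; the example numbers are illustrative, not from the text.

```python
def diverge_flows(S, R_j, R_k, p_j):
    """Diverge model of equation (9.17): one upstream link with sending
    flow S splits between links j and k in proportions p_j and p_k.
    Any blocked vehicle blocks everyone behind it, so a single fraction
    phi scales both movements."""
    p_k = 1.0 - p_j
    phi = min(R_j / (p_j * S), R_k / (p_k * S), 1.0) if S > 0 else 1.0
    return phi * p_j * S, phi * p_k * S

# Example: 100 vehicles want to leave; 60% head to j and 40% to k, but
# link k can only receive 20 vehicles, so phi = 20/40 = 0.5 and both
# movements are cut in half.
y_j, y_k = diverge_flows(100, 100, 20, 0.6)
# y_j = 30, y_k = 20: link j has spare capacity, but its flow is still
# restricted because queued vehicles bound for k block the approach.
```

The design choice worth noticing is that φ is shared: even though link j could receive all 60 of "its" vehicles, only 30 move, reflecting the assumption that a vehicle waiting for link k blocks everyone behind it.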
turn lane at the last moment. Or, if the turn lane is long, it may be appropriate to treat the
diverge at the point where the turn lane begins, as opposed to at the physical diverge.
travelers from using centroid connectors except to start and end their trips, that
is, to exclude the use of centroid connectors as “shortcuts.” This can be done
either by transforming the underlying network, having origins and destinations
be distinct nodes adjacent to “one-way” centroid connectors, or by excluding
such paths when finding paths for travelers, as discussed in Chapter 10.
If centroid connectors are set up in this way, then flow entering the network
can simply be added to the upstream ends of their centroid connectors, and flow
leaving the network at destinations can simply vanish, without any constraints
in either case. The network loading algorithm can then be stated as follows:
1. Initialize all counts and the time index: N↑ij(0) ← 0 and N↓ij(0) ← 0 for all links (i, j), and t ← 0.
2. Use a link model to calculate sending and receiving flows Sij and Rij for
all links.
3. Use a node model to calculate transition flows yijk for all nodes j except
for zones.
4. Update cumulative counts: for each non-zone node i, perform
N↓hi(t + 1) ← N↓hi(t) + Σ(i,j)∈Γ(i) yhij  (9.18)
[Figure: the example network r → i → j → s. Link (r, i): uf = L/t, qmax = 20/t, kj L = ∞. Link (i, j): uf = L/(2t), q↑max = 20/t, q↓max = 10/t, kj L = 40. Link (j, s): uf = L/t, qmax = ∞, kj L = ∞.]
used. This might reflect a red traffic signal, or a closed drawbridge between these
time intervals, since no flow can move through the node. In this example, the
flow downstream is first interrupted when moving from the centroid connector
(r, i) to the link (i, j), whose capacity is lower than the rate at which vehicles
are being loaded at origin r. This can be seen by examining the difference
between the N ↑ and N ↓ values for link (r, i) during the initial timesteps. The
difference between these values gives the total number of vehicles on the link at
that instance in time. Since N ↑ is increasing at a faster rate than N ↓ , vehicles
are accumulating on the link, in a queue at the downstream end. There is no queue at node j, because no bottleneck exists there. Compare N↑ij(t) and N↓ij(t)
when 3 ≤ t ≤ 5. Both the upstream and downstream count values increase at
the same rate, which means there is no net accumulation of vehicles.
At t = 5, the flow through node j drops to zero, which introduces a further
bottleneck. The impacts of the bottleneck are first seen at t = 6: 40 vehicles are
now on link (i, j), up from 30. As a result, the receiving flow Rij drops to zero,
because the link is full. Therefore, the node model at i restricts any additional
inflow to link (i, j), and the queue on (r, i) grows at an even faster rate than
before, even though the number of new vehicles loaded onto the network has
dropped. At t = 10, the bottleneck at node j is released, and vehicles begin to
move again. At t = 25, the upstream and downstream counts are equal on all
links, which means that the network is empty. All vehicles have reached their
destination.
small, finite time interval ∆t. If the speed of vehicles is u, any upstream vehicle
within a distance of u∆t from the fixed point will pass during the next ∆t time
units, and the number of such vehicles is
∆N = ku∆t . (9.21)
The flow rate is approximately ∆N/∆t, and the formula (9.20) then follows
from taking limits as the time increment ∆t shrinks to zero.
Figure 9.12 is a trajectory diagram showing the locations of vehicles on the
link over time — the horizontal axis denotes time, and the vertical axis denotes
space, with the upstream end of the link at the bottom and the downstream
end at the top. Speed, flow, and density can all be interpreted in terms of
these trajectories. The speed of a vehicle at any point in time corresponds
to the slope of its trajectory there. Flow is the rate at which vehicles pass a
fixed point: on a trajectory diagram, a fixed point in space is represented by
a horizontal line. Time intervals when more trajectories cross this horizontal
line have higher flow, and when fewer trajectories cross this line, the flow is
lower. Density is the spatial concentration of vehicles at a particular instant
in time: on a trajectory diagram, a specific instant is represented by a vertical
line. Where more trajectories cross this vertical line, the density is higher, and
where fewer trajectories cross, the density is lower.
However, this equation by itself is not enough to describe anything of real
interest in traffic flow. Another equation, based on vehicle conservation prin-
ciples, is described in the next subsection. The Lighthill-Whitham-Richards
model, described at the end of this section, makes a further assumption about
the relationships of three state variables. These relationships are enough to
specify the network loading problem as the solution to a well-defined system of
partial differential equations.
[Figure: vehicle trajectories on a link, labeled with cumulative count values N = 0, 1, 2, 3, 4.]
∂²N/∂x∂t = ∂²N/∂t∂x ,  (9.24)
so substituting the relationships (9.22) and (9.23) and rearranging, we have
∂q/∂x + ∂k/∂t = 0 .  (9.25)
This is an expression of vehicle conservation, that is, vehicles do not appear
or disappear at any point. This equation must hold everywhere that these
derivatives exist.2 Equations (9.22) and (9.23) are useful in another way. If
(x1 , t1 ) and (x2 , t2 ) are any two points in space and time, the difference in
cumulative count number between these points is given by the line integral
N (x2 , t2 ) − N (x1 , t1 ) = ∫C (q dt − k dx) ,  (9.26)
where C is any curve connecting (x1 , t1 ) and (x2 , t2 ). Because vehicles are
conserved, this line integral does not depend on the specific path taken. This
2 Of course, vehicle conservation must hold even when these derivatives do not exist; it is just that the formula (9.25) is meaningless there. We have to enforce flow conservation in a different way at such points.
Exercise 24 asks you to verify that the conservation relationship (9.25) is satisfied
by explicit computation.
Ordinarily, the N (x, t) map is not given — indeed, the goal of network
loading is to calculate it. For if we know N (x, t) everywhere, we can calculate
flow, density, and speed everywhere, using equations (9.22), (9.23), and (9.20).
So let’s assume that we only know the density and flow maps (9.28) and (9.29),
and try to recover information about the cumulative counts. For the given N
map, the vehicle at x = 1/2 at t = 0 has the number 0. (As discussed above,
we do not necessarily have to set the zero point at N (0, 0).) To calculate the
number of the vehicle at x = 1 and t = 1 (the downstream end of the link, one
minute later), we can use equation (9.26).
As this equation involves a line integral, we must choose a path between
(x, t) = (1/2, 0) and (1, 1). Because of the conservation relationship (9.25), we
can choose any path we wish. For the purposes of an example, we will calculate
this integral along three different paths, and verify that they give the same
answer. Figure 9.14 shows the three paths of integration.
Path A : This path consists of the line segment from (1/2, 0) to (1, 0), followed
by the segment from (1, 0) to (1, 1). Because these line segments are
parallel to the axes, this reduces the line integral to two integrals, one
over x alone, and the other over t alone. We thus have
N (1, 1) = ∫A (q dt − k dx) = −∫_{1/2}^{1} k(x, 0) dx + ∫_{0}^{1} q(1, t) dt
= −∫_{1/2}^{1} 120(1 − x) dx + ∫_{0}^{1} 60(1 − (1/(t + 1))²) dt
= −15 + 30 = 15 ,
and the vehicle at the downstream end of the link at t = 1 has the number
15.
Path B : This path consists of the line segment from (1/2, 0) to (1/2, 1), fol-
lowed by the segment from (1/2, 1) to (1, 1). As before, we have
N (1, 1) = ∫B (q dt − k dx) = ∫_{0}^{1} q(1/2, t) dt − ∫_{1/2}^{1} k(x, 1) dx
= ∫_{0}^{1} 60(1 − ((1/2)/(t + 1))²) dt − ∫_{1/2}^{1} 120(1 − x/2) dx
= 52.5 − 37.5 = 15 .
Path C : This path is the line segment directly connecting (1/2, 0) to (1, 1).
Although this line is not parallel to either axis, the integral actually ends
up being the easiest to evaluate, because x/(t + 1) is constant along this
Figure 9.14: Three possible paths for the line integral between (1/2, 0) and
(1, 1).
line, and equal to 1/2. Therefore k(x, t) = 60 at all points along this line,
and q(x, t) = 45. Since dx = (1/2)dt on this line segment, we have
N (1, 1) = ∫C (q dt − k dx) = ∫_{0}^{1} (45 dt − 60 · (1/2) dt) = ∫_{0}^{1} 15 dt
= 15 .
All three integrals gave the same answer (as they must), which we can verify by
checking N (1, 1) with equation (9.27). So, we can choose whichever integration
path is easiest. In this example, the integrals in Path B involved the most work.
The integral in Path C required a bit more setup, but the actual integral ended
up being very easy, since q and k were constants along the integration path.
Such a path is called a characteristic, and will be described in more detail later
in this chapter.
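The path-independence of this line integral is easy to check numerically. The sketch below uses a simple midpoint rule with the density and flow maps implied by the integrands in Paths A–C (k = 120(1 − x/(t + 1)) and q = 60(1 − (x/(t + 1))²)); the function names and parametrizations are ours.

```python
# Numerical check that the line integral of (q dt - k dx) from (1/2, 0)
# to (1, 1) gives 15 along all three paths of the example.

def k(x, t):  # density map implied by the integrands above (veh/mi)
    return 120 * (1 - x / (t + 1))

def q(x, t):  # flow map implied by the integrands above (veh/min)
    return 60 * (1 - (x / (t + 1)) ** 2)

def line_integral(path, n=20000):
    """Midpoint-rule approximation of the integral of q dt - k dx along
    a path, where `path` maps a parameter s in [0, 1] to a point (x, t)."""
    total = 0.0
    for i in range(n):
        s0, s1 = i / n, (i + 1) / n
        x0, t0 = path(s0)
        x1, t1 = path(s1)
        xm, tm = path((s0 + s1) / 2)
        total += q(xm, tm) * (t1 - t0) - k(xm, tm) * (x1 - x0)
    return total

path_A = lambda s: (0.5 + s, 0.0) if s <= 0.5 else (1.0, 2 * s - 1)  # along x, then t
path_B = lambda s: (0.5, 2 * s) if s <= 0.5 else (s, 1.0)            # along t, then x
path_C = lambda s: (0.5 + s / 2, s)                                  # the characteristic

results = [line_integral(p) for p in (path_A, path_B, path_C)]
# All three results are 15, to within the quadrature error.
```

Along Path C the integrand is identically 15, so the quadrature is exact there; the other two paths agree to numerical precision, as conservation requires.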
in rivers. Richards independently proposed an equivalent model in 1956. All three are now
given credit for this model.
[Figure: a fundamental diagram Q(k), with maximum flow qmax at the critical density kc and zero flow at the jam density kj.]
ing to the density at that point. That is, in the LWR model, the density k at
a point completely determines the traffic state. The flow q at that point is ob-
tained from the fundamental diagram Q, and the speed u can then be obtained
from equation (9.31).
To summarize, the three equations relating flow, density, and speed are:
q(x, t) − k(x, t)u(x, t) = 0 (9.31)
q(x, t) − Q(k(x, t)) = 0 (9.32)
∂q/∂x + ∂k/∂t = 0  (9.33)
and these equations must hold everywhere (with the exception that (9.33) may
not be defined if q or k is not differentiable at a point).
Together with initial conditions (such as the values of k along the link at
t = 0) and boundary conditions (such as the “inflow rates” q at the upstream
end x = 0 throughout the analysis period, or restrictions on q at the downstream
end from a traffic signal), this system of equations can in principle be solved to
yield k(x, t) everywhere. Exercise 24 asks you to verify that the N (x, t) map
used in the example in the previous section is consistent with the fundamental
diagram Q(k) = k(240 − k)/4.
The points where k is not differentiable are known as shockwaves, and often
correspond to abrupt changes in the density. Figure 9.16 shows an example of
several shockwaves associated with the changing of a traffic light. Notice that in
region A, the density is subcritical (uncongested); in region B, traffic is at jam
density; and in region C, traffic is at critical density and flow is at capacity. The
speed of a shockwave can still be determined from conservation principles, even
though the conservation equation (9.33) does not apply because the density and
flow derivatives do not exist at a shock.
Assume that kA and kB are the densities immediately upstream and immedi-
ately downstream of the shockwave (Figure 9.17). The corresponding flow rates
qA = Q(kA ) and qB = Q(kB ) can be calculated from the fundamental diagram,
and finally the speeds are obtained as uA = qA /kA and uB = qB /kB . Further-
more, let uAB denote the speed of the shockwave. Then the speed of vehicles in
region A relative to the shockwave is uA − uAB , and the rate at which vehicles
cross the shockwave from region A is (uA − uAB )kA ; this is nothing more than
equation (9.20) as viewed from the perspective of an observer moving with the
shockwave.
Likewise, the relative speed of the vehicles in region B is uB − uAB , and the
rate at which vehicles cross the shockwave and enter region B is (uB − uAB )kB .
Obviously these two quantities must be equal, since vehicles do not appear or
disappear at the shock. Equating these flow rates from the left and right sides
of the shockwave, we can solve for the shockwave speed:
uAB = (qA − qB )/(kA − kB ) .  (9.34)
Notice that this calculated speed is the same regardless of whether A is the
upstream region and B the downstream region, or vice versa. This equation also
[Figure 9.16: shockwaves at a traffic signal, on a link with uf = 60 mi/hr, kj = 240 veh/mi, qmax = 3600 veh/hr, and Q(k) = k(240 − k)/4. The traffic states in the four regions are: region A, uA = 55, kA = 20, qA = 1100; region B, uB = 0, kB = 240, qB = 0; region C, uC = 30, kC = 120, qC = 3600; region D, uD undefined, kD = 0, qD = 0.]
[Figure 9.17: traffic states (uA , kA , qA ) and (uB , kB , qB ) on either side of a shockwave moving at speed uAB .]

[Figure 9.18: the fundamental diagram Q(k), with the shockwave speed given by the slope of the line connecting the two traffic states.]
has a nice geometric interpretation: the speed of the shockwave is the slope of
the line connecting regions A and B on the fundamental diagram (Figure 9.18).
For instance, in Figure 9.16, in region A the flow and density are 1100
vehicles per hour and 20 vehicles per mile, and in region B the flow and density
are 0 vehicles per hour and 240 vehicles per mile. Therefore, using (9.34), the
shockwave between regions A and B has a speed of (1100 − 0)/(20 − 240) =
−5 miles per hour. The negative sign indicates that the shockwave is moving
upstream. Since A represents uncongested traffic, and region B represents the
stopped queue at the traffic signal, the interpretation is that the queue is growing
at 5 miles per hour. Tracing the derivation of equation (9.34), the rate at which
vehicles enter the shockwave is (55 + 5) × 20 from the perspective of region A,
or 1200 vehicles per hour. You should check that the same figure is obtained
from the perspective of region B. Vehicles are entering the queue faster than the
upstream flow rate (1200 vs. 1100 vph) because the queue is growing upstream,
moving to meet vehicles as they arrive.
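The shockwave calculation in this example is a one-line application of equation (9.34); the short sketch below (our own naming) reproduces the numbers in the paragraph above.

```python
def shockwave_speed(q_a, k_a, q_b, k_b):
    """Speed of the shockwave separating traffic states A and B, from
    equation (9.34): the slope of the secant line connecting the two
    states on the fundamental diagram."""
    return (q_a - q_b) / (k_a - k_b)

# Region A (uncongested approach) and region B (stopped queue):
u_ab = shockwave_speed(1100, 20, 0, 240)
# u_ab = -5.0 mi/hr: negative, so the queue grows upstream at 5 mi/hr.

# Vehicles must cross the shock at the same rate from either side:
u_a, u_b = 1100 / 20, 0 / 240
rate_from_A = (u_a - u_ab) * 20   # (55 + 5) * 20 = 1200 veh/hr
rate_from_B = (u_b - u_ab) * 240  # (0 + 5) * 240 = 1200 veh/hr
```

The two crossing rates agreeing at 1200 veh/hr is exactly the conservation argument used to derive (9.34), viewed from an observer riding along with the shock.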
The system of equations (9.31)–(9.33) can then be solved, introducing shock-
waves as necessary to accommodate the initial and boundary conditions, us-
ing (9.34) to determine their speed. This is the LWR model. However, this
theory does not immediately suggest a technique for actually solving the sys-
tem of partial differential equations, which is the topic of the next subsection.
Notice also that the fundamental diagram determines the maximum speed
at which a shockwave can move. Because the fundamental diagram is concave,
its slope at any point can never be greater than the free-flow speed uf = Q0 (0),
nor less than the slope at jam density Q0 (kj ). The absolute values of these
slopes give the fastest speeds shockwaves can move in the downstream and
upstream directions, respectively, because shockwave speeds are the slopes of
lines connecting points on the fundamental diagram. This leads to an important
notion, the domain of dependence. Consider a point (x, t) in space and time.
Through this point, draw lines with slopes Q0 (0) and Q0 (kj ). The area between
Figure 9.19: The fundamental diagram (left) and the domain of dependence
(right).
these lines to the left of (x, t) represents all the points which can potentially
influence the traffic state at (x, t) (labeled as region A in Figure 9.19), and the
area between the lines to the right of (x, t) represents all of the points the traffic
state at (x, t) can potentially influence (labeled as region B). In the LWR model,
points outside of these regions (the regions labeled C) are independent of what
happens at (x, t) (say, a signal turning red or green): in the past, they are either
too recent or too distant to affect what is happening at (x, t). In the future, they
are too soon or too distant to be affected by an event at (x, t). This is a crucial
fact for dynamic network loading. If you want to know what is happening at
(x, t), it is sufficient to know what has happened in the past, in the domain
of dependence. We do not need to know what is happening simultaneously at
other points in the network, and as a result we can perform network loading in
a decentralized fashion, performing calculations in parallel since they do not
depend on each other.6
selves, or the speed of shockwaves (which are given by slopes of secant lines on the fundamental
diagram).
where C is the straight line between (x0 , t0 ) and (x, t). Since this line is a
characteristic, we have dx/dt = dQ/dk, and so
N (x, t) = N (x0 , t0 ) + ∫C (q − k dQ/dk) dt .  (9.39)
The only trouble is that we do not actually know the density at (x, t). Each
possible value of density corresponds to a slightly different cumulative count,
based on equation (9.40). The insight of the Newell-Daganzo method is that
the correct value of the cumulative count is the lowest possible value. That
is, imagine that the density at (x, t) is k, and let (xk , tk ) be the known point
corresponding to the characteristic slope of density k. Then
N (x, t) = inf k∈[0,kj ] { N (xk , tk ) + (q − k dQ/dk)(t − tk ) } .  (9.41)
In this case, there are only two possible characteristic speeds: one (+uf ) cor-
responding to uncongested conditions, and the other (−w) corresponding to
congested conditions. The uncongested speed uf is the free-flow speed, and w
is known as the backward wave speed. In the case where the characteristic speed
[Figure 9.20: a triangular fundamental diagram, with free-flow speed uf on the uncongested branch, backward wave speed −w on the congested branch, and jam density kj .]
is uf , the vehicle speed u equals the characteristic speed uf , and both are equal to q/k. Therefore, the line integral along the characteristic in (9.40) is (q − k uf )(t − t0 ) = 0, because q = uk.
In other words, the vehicle number is constant along characteristics at free-
flow speed. For the congested characteristic with slope −w, we have
(q − k(−w))(t − t0 ) = w (q/w + k) (t − t0 ) = kj w(t − t0 )  (9.44)
since k + q/w = kj , as can be seen in Figure 9.20. This expression can also be
written as kj (x0 − x). Since these are the only two characteristics which can
prevail at any point, equation (9.41) gives
qmax (t − tR ) (9.47)
[Figure: a trapezoidal fundamental diagram, with capacity qmax , free-flow speed uf , backward wave speed −w, and jam density kj .]
since dx = 0 in the line integral (9.26). (Note that the location of this third
known point is the same as the location of the point we are solving for, since the
characteristic is stationary.) This adds a third term to the minimum in (9.45),
giving
[Figure: an example fundamental diagram (flow in veh/min, with capacity 80 veh/min at density 80 veh/mi and jam density 240 veh/mi), and the corresponding space-time diagram over one mile and one minute, with traffic regions labeled A through I.]
Figure 9.23: Discretizing space and time into cells, with the n(x, t) values shown in each cell:
3 3 3 2 2 1 2 3
3 4 4 4 2 2 4 5
4 4 4 4 5 4 3 1
With this discretization in mind, we will use the notation n(x, t) to describe
the number of vehicles in cell x at time t, where x and t are both integers —
we must convert the continuous LWR variables k and q into discrete variables
for dynamic network loading, which we will call n and y.
If the cell size is small, we can make the approximation that
n(x, t) ≈ k(x, t) ∆x ,  (9.51)
essentially assuming that the density within the cell is constant. Further define
y(x, t) to be the number of vehicles which enter cell x during the t-th time
interval. Making a similar assumption, we can make the approximation
y(x, t) ≈ q(x, t) ∆t .  (9.52)
[Figure: a link divided into cells between x = 0 and x = 3; occupancies n(0, t), n(1, t), n(2, t) are defined within cells, while flows y(1, t), y(2, t) are defined at the boundaries between cells.]
Figure 9.24: Where discrete values are calculated in the cell transmission model.
sending and receiving flow at each time step. For this reason, we will be con-
tent with determining how the n(x, t + ∆t) values can be calculated, given the
n(x, t) values (which are already known), and the y(x, t) values, which must be
calculated.
Since q(x, t) = Q(k(x, t)), substitution into equations (9.51) and (9.52) gives
y(x, t) ≈ Q( n(x, t)/∆x ) ∆t .  (9.53)
Using the fact that ∆x/∆t = uf , the first term in the minimum is simply n(x, t).
The third term can be simplified by defining n̄(x) = kj ∆x to be the maximum
number of vehicles which can fit into a cell and δ = w/uf to be the ratio between
the backward wave speed and free-flow speed. Then, factoring out 1/∆x from
the term in parentheses and again using ∆x/∆t = uf , the third term simplifies
to δ(n̄(x) − n(x, t)).
There is one more point which is subtle, yet incredibly important. Being a
“flow” variable, y(x, t) is calculated at a single point (over a time interval), while
n(x, t) is calculated at a single time (over a longer spatial interval). As shown
in Figure 9.24, the x in y(x, t) refers to a single location, while the x in n(x, t)
refers to an entire cell. So, when we are calculating the flow across the (single)
point x, which is the boundary between two cells, do we look at the adjacent
cell upstream n(x − 1, t), or the adjacent cell downstream n(x, t)?
The correct answer depends on the fundamental diagram, and the meaning of
characteristics. In uncongested conditions, corresponding to the increasing part
of the fundamental diagram and the first term in the minimum, the traffic state
moves from upstream to downstream (because the characteristic has positive
speed). In congested conditions, corresponding to the decreasing part of the
fundamental diagram and the third term in the minimum, the traffic state moves
from downstream to upstream (because the characteristic has negative speed.)
∂q/∂x (x, t) ≈ (1/(∆t∆x)) (y(x + ∆x, t) − y(x, t))  (9.56)
and the derivative ∂k/∂t can be approximated as
∂k/∂t (x, t) ≈ (1/(∆t∆x)) (n(x, t + ∆t) − n(x, t)) .  (9.57)
Substituting into (9.33), we have
(1/(∆t∆x)) (y(x + ∆x, t) − y(x, t) + n(x, t + ∆t) − n(x, t)) = 0  (9.58)
or, in a more convenient form,
n(x, t + ∆t) = n(x, t) + y(x, t) − y(x + ∆x, t) .  (9.59)
This also has a simple intuitive interpretation: the number of vehicles in cell
x at time t + 1 is simply the number of vehicles in cell x at the previous time
t, plus the number of vehicles which flowed into the cell during the t-th time
interval, minus the number of vehicles which left.
Together, the equations (9.55) and (9.59) define the cell transmission model
for trapezoidal fundamental diagrams. There are only two pieces of “miss-
ing” information, at the boundaries of the link. Refer again to Figure 9.24.
How should y(0, t) and y(L, t) be calculated? For y(0, t), the first term in for-
mula (9.55) involves n(−∆x, t), while for y(L, t), the third term in the formula
involves n(L + ∆x, t), and both of these cells are “out of range.” The answer is
that these boundary flows are used to calculate the sending and receiving flows
for the link, and a node model will then give the actual values link inflows y(0, t)
and link outflows y(L, t).
Remember that the sending flow is the maximum number of vehicles which
could leave the link if there was no obstruction from downstream. In terms
of (9.55), this means that the third term in the minimum (which corresponds to
downstream congestion) is ignored. Then, the first two terms in the minimum
(which only refer to cells on the link) are the possible restrictions on the flow
leaving the link, so
S(t) = min{n(C, t), qmax ∆t} . (9.60)
Likewise, the receiving flow is the maximum number of vehicles which could
enter the link if there were a large number of vehicles wanting to enter from
upstream. In terms of (9.55), this means that the first term in the minimum
(which corresponds to the number of vehicles wanting to enter) is ignored. The
remaining two terms in the minimum refer to cells on the link, and
R(t) = min{qmax ∆t, δ(n̄ − n(0, t))} .  (9.61)
Notice that the table has non-integer values: we do not need to round cell oc-
cupancies and flows to whole values, since the LWR model assumes vehicles are
a continuously-divisible fluid. Preserving non-integer values also ensures that
the cell transmission model remains accurate no matter how small the timestep
∆t is (in fact, its accuracy should increase as this happens). Insisting that flows
and occupancies be rounded to whole numbers can introduce significant error if
the timestep is small, unless one is careful with implementation.
Table 9.5 shows only the cell occupancies at each timestep, color-coded ac-
cording to the density in the cells. In this example, the link is initially at
free-flow, until the first vehicles encounter the red light and must stop. A queue
forms, and a shockwave begins moving backward. When this shockwave reaches
the cell at the upstream end of the link, the receiving flow of the link decreases,
and the inflow to the link is limited. When the light turns green, a second
shockwave begins moving backward as the queue clears. Once this shockwave
overtakes the first, vehicles can begin entering the link again. For a few time
steps, the inflow is greater than d(t), representing demand which was blocked
when the receiving flow was restricted by the queue and which was itself queued
on an upstream link (the “queue spillback” phenomenon). Unlike the point
queue and spatial queue link models, the cell transmission model tells us what
is happening in the interior of a link, not just at the endpoints. This is both
a blessing and a curse: sometimes this additional information is helpful, while
other times we may not be concerned with such details. The link transmission
model, described next, can simplify computations if we do not need information
on the internal state of a link.
Table 9.4: Cell transmission model example, on a link with three cells, qmax ∆t = 10, n̄ = 30, and δ = 2/3.
Cell 0 Cell 1 Cell 2
t d(t) R(t) y(0, t) N (0, t) y(1, t) N (1, t) y(2, t) N (2, t) S(t) y(3, t)
0 10 10 10 0 0 0 0 0 0 0
1 10 10 10 10 10 0 0 0 0 0
2 10 10 10 10 10 10 10 0 0 0
3 10 10 10 10 10 10 10 10 10 0
4 10 10 10 10 10 10 6.7 20 10 0
5 9 10 9 10 10 13.3 2.2 26.7 10 0
6 8 10 8 9 5.9 21.1 0.7 28.9 10 0
7 7 10 7 11.1 2.5 26.3 0.2 29.6 10 0
8 6 9.6 6 15.6 1 28.5 0.1 29.9 10 0
9 5 6.3 5 20.6 0.4 29.4 0 30 10 0
10 4 3.2 3.2 25.2 0.1 29.8 0 30 10 10
11 3 1.2 1.2 28.3 0.1 29.9 6.7 20 10 10
12 2 0.4 0.4 29.4 4.5 23.3 8.9 16.7 10 10
13 1 3.1 3.1 25.3 7.4 18.9 9.6 15.6 10 10
14 0 6 1.3 21.0 8.9 16.7 9.9 15.2 10 10
15 0 10 0 13.4 9.5 15.7 10 15.1 10 10
16 0 10 0 3.9 3.9 15.3 10 15 10 10
17 0 10 0 0 0 5.8 9.2 15 10 10
18 0 10 0 0 0 0 0 14.2 10 10
19 0 10 0 0 0 0 0 4.2 4.2 4.2
20 0 10 0 0 0 0 0 0 0 0
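The recursion that generates Table 9.4 can be coded directly from the flow and conservation equations (9.55) and (9.59). This is a sketch with our own variable names; the demand profile and the red light at the downstream end mirror the example above.

```python
# Cell transmission model sketch for the example of Table 9.4:
# three cells, q_max*dt = 10, nbar = 30, delta = 2/3, and a red light
# blocking outflow until t = 10.
Q_DT, NBAR, DELTA = 10, 30, 2 / 3
demand = [10, 10, 10, 10, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] + [0] * 7

n = [0.0, 0.0, 0.0]  # cell occupancies n(0), n(1), n(2)
backlog = 0.0        # demand blocked at the entrance, queued upstream

for t in range(6):   # simulate the first six time steps
    R = min(Q_DT, DELTA * (NBAR - n[0]))         # receiving flow (9.61)
    S = min(n[2], Q_DT)                          # sending flow (9.60)
    inflow = min(demand[t] + backlog, R)
    backlog += demand[t] - inflow
    y1 = min(n[0], Q_DT, DELTA * (NBAR - n[1]))  # boundary flows (9.55)
    y2 = min(n[1], Q_DT, DELTA * (NBAR - n[2]))
    outflow = S if t >= 10 else 0.0              # red light until t = 10
    # Conservation update (9.59); flows were computed from the old n.
    n[2] += y2 - outflow
    n[1] += y1 - y2
    n[0] += inflow - y1

# After six steps the occupancies are [9, 21.1, 28.9] (to one decimal),
# matching the t = 6 row of Table 9.4.
```

Note the non-integer occupancies: as the text remarks, the model treats vehicles as a continuous fluid, so no rounding is performed.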
Table 9.5: Cell occupancies from the example in Table 9.4, green is zero density
and red is jam density.
and downstream ends of the link, that is, at N (0, ·) and N (L, ·), respectively.
In keeping with the notation introduced in Section 9.1, we will refer to these as
N ↑ and N ↓ . If there is no obstruction from a downstream link or node, then
the end of the link will be uncongested, and the characteristic at this end will
either have slope +uf or slope zero. The Newell-Daganzo method thus gives
N↓(t + ∆t) = min{N↑(t + ∆t − L/uf ), N↓(t) + qmax ∆t}  (9.62)
and the equation for the sending flow is obtained as the difference between N↓(t + ∆t) and N↓(t):
S(t) = min{N↑(t + ∆t − L/uf ) − N↓(t), qmax ∆t} .  (9.63)
For the receiving flow, we have to take into account the two relevant charac-
teristic speeds of 0 and −w, since the receiving flow is calculated assuming an
inflow large enough that the upstream end of the link is congested (or at least
at capacity). The stationary characteristic corresponds to the known point
N (0, t), while the backward-moving characteristic corresponds to the known
point N (L, t − L/w). Thus, applying the last two terms of equation (9.46)
would give
and
It is possible to show that equations (9.63) and (9.65) ensure that the number
of vehicles on the link is always nonnegative, and less than kj L.
The link transmission model is demonstrated on an example similar to the
one used for the cell transmission model; the only difference is that the ratio
of backward-to-forward characteristics has been adjusted from 2/3 to 3/4. In
particular, L/uf = 3∆t, and L/w = 4∆t, so forward-moving characteristics
require three time steps to cross the link, and backward-moving characteristics
require four time steps. The total number of vehicles which can fit on the
link is kj L = 90. Otherwise, the example is the same: the demand profile is
identical, and a red light prevents outflow from the link until t = 10. The
results of the calculations are shown in Table 9.6. The rightmost column shows
the number of vehicles on the link, which is the difference between the upstream
and downstream cumulative counts at any point in time. Notice that inflow to
the link is completely blocked during the 12th and 13th time intervals, even
though the number of vehicles on the link is less than the jam density of 90.
This happens because the queue has started to clear at the downstream end, but
the clearing shockwave has not yet reached the upstream end of the link. The
vehicles at the upstream end are still stopped, and no more vehicles can enter.
In contrast, the spatial queue model would allow vehicles to start entering the
link as soon as they began to leave.
Table 9.6: Link transmission model example, with L/uf = 3∆t, L/w = 4∆t,
and kj L = 90.
t d(t) R(t) Inflow N ↑ (t) N ↓ (t) S(t) Outflow Vehicles on link
0 10 10 10 0 0 0 0 0
1 10 10 10 10 0 0 0 10
2 10 10 10 20 0 0 0 20
3 10 10 10 30 0 10 0 30
4 10 10 10 40 0 10 0 40
5 9 10 9 50 0 10 0 50
6 8 10 8 59 0 10 0 59
7 7 10 7 67 0 10 0 67
8 6 10 6 74 0 10 0 74
9 5 10 5 80 0 10 0 80
10 4 5 4 85 0 10 10 85
11 3 1 1 89 10 10 10 79
12 2 0 0 90 20 10 10 70
13 1 0 0 90 30 10 10 60
14 0 10 5 90 40 10 10 50
15 0 10 0 95 50 10 10 45
16 0 10 0 95 60 10 10 35
17 0 10 0 95 70 10 10 25
18 0 10 0 95 80 10 10 15
19 0 10 0 95 90 5 5 5
20 0 10 0 95 95 0 0 0
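The link transmission model recursion that produces Table 9.6 can be sketched as follows (variable names are ours): the sending and receiving flows come from equations (9.63) and (9.65), with L/uf = 3∆t, L/w = 4∆t, and kj L = 90.

```python
# Link transmission model sketch for the example of Table 9.6: forward
# characteristics take 3 time steps to cross the link, backward ones
# take 4, kj*L = 90, q_max*dt = 10, and a red light blocks outflow
# until t = 10.
Q_DT, KJL, FWD, BWD = 10, 90, 3, 4
demand = [10, 10, 10, 10, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] + [0] * 7

T = 20
N_up, N_dn = [0.0] * (T + 1), [0.0] * (T + 1)  # cumulative counts
backlog = 0.0                                   # demand queued upstream
for t in range(T):
    # Sending flow (9.63): forward characteristic needs FWD steps.
    S = min((N_up[t + 1 - FWD] if t + 1 >= FWD else 0.0) - N_dn[t], Q_DT)
    # Receiving flow (9.65): backward characteristic needs BWD steps.
    R = min(Q_DT, (N_dn[t + 1 - BWD] if t + 1 >= BWD else 0.0) + KJL - N_up[t])
    outflow = S if t >= 10 else 0.0             # red light until t = 10
    inflow = min(demand[t] + backlog, R)
    backlog += demand[t] - inflow
    N_up[t + 1] = N_up[t] + inflow
    N_dn[t + 1] = N_dn[t] + outflow

# At t = 12, N_up = 90 and N_dn = 20, so 70 vehicles are on the link and
# inflow is blocked; by t = 20 both counts reach 95 and the link is empty.
```

Only the cumulative counts at the two link endpoints are stored, which is exactly the computational saving the text describes relative to the cell transmission model.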
[Figure 9.25: the point queue fundamental diagram Q(k): flow rises with slope uf and is capped at qmax , with no jam density.]
9.5.3 Point and spatial queues and the LWR model (*)
(This optional section shows how the previously-introduced point and spatial
queue models can be seen as special cases of the LWR model.)
The first link models introduced in this chapter were the point and spatial
queue models, in Section 9.1. These were presented as simple link models to
illustrate concepts like the sending and receiving flow, rather than realistic de-
pictions of traffic flow. Nevertheless, it is possible to view the point and spatial
queue models as special cases of the LWR model by making an appropriate
choice of the fundamental diagram, as shown in this section. Applying the
Newell-Daganzo method with this fundamental diagram gives us a second way
to derive the expressions for sending and receiving flow for these models.
The point queue model is equivalent to assuming that the flow-density rela-
tionship is as shown in Figure 9.25. This diagram is unlike others we’ve seen,
because there is no jam density. This represents the idea that the point queue
occupies no physical space: no matter how many vehicles are in queue, there
is nothing to prevent additional vehicles from entering the link and joining the
queue. It is also the simplest diagram which we have seen so far, and is defined
by only two parameters: the free-flow speed uf and the capacity qmax . (Even
the triangular fundamental diagram in Figure 9.20 required a third parameter,
either −w or kj .)
The Newell-Daganzo method leads to a simple expression for the sending
and receiving flows in a point queue model. To calculate the sending flow S(t),
we need to examine the downstream end of the link, so x = L. Since we are
solving in increasing order of time, we already know N ↓ (0), N ↓ (∆t), . . . , N ↓ (t)
(the number of vehicles which have left the link at each time interval). Likewise,
we know how many vehicles have entered the link at earlier points in time, so
we know N↑(0), N↑(∆t), . . . , N↑(t). For the sending flow, we assume that
there are no obstructions from downstream, in which case we can calculate
N↓(t + ∆t) using the Newell-Daganzo method. In the point queue model, there
are two possible wave speeds: uf, corresponding to free-flowing conditions, and
0, corresponding to flow at capacity. The method thus gives

N↓(t + ∆t) = min{N↑((t + ∆t) − L/uf), N↓(t) + qmax∆t} (9.67)

and

S(t) = N↓(t + ∆t) − N↓(t) = min{N↑((t + ∆t) − L/uf) − N↓(t), qmax∆t} . (9.68)

Figure 9.26: Point queue model characteristics for sending and receiving flow.
The two terms in the minimum in (9.68) correspond to the case when the queue
is empty, and when there are vehicles in queue. In the first term, since there
is no queue, we just need to know how many vehicles will finish traversing the
physical section of the link between t and t + ∆t; this is exactly the difference
between the total number of vehicles which have entered by time t + ∆t − L/uf
and the total number that have left by time t. When there is a queue, the
vehicles exit the link at the full capacity rate.
In these expressions, it is possible that (t + ∆t) − L/uf is not an integer,
that is, it does not line up with one of the discretization points exactly. In this
case the most accurate choice is to interpolate between the known time points
on either side (remember that we chose ∆t so that L/uf ≥ 1). If you are willing
to sacrifice some accuracy for efficiency, you can choose to round to the nearest
integer, or to adjust the length of the link so that L/uf is an integer.
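When (t + ∆t) − L/uf falls between discretization points, the interpolation can be done as follows (a minimal sketch; the helper name is ours, and cumulative counts before the start of the analysis period are treated as zero):

```python
import math

def interp_count(N, tau):
    """Cumulative count at a possibly fractional time tau, by linear
    interpolation between the stored values N[0], N[1], ...; times
    before the start of the analysis period are treated as zero."""
    if tau <= 0:
        return 0
    lo = math.floor(tau)
    hi = min(lo + 1, len(N) - 1)  # clamp at the last stored value
    return N[lo] + (tau - lo) * (N[hi] - N[lo])
```

For example, with counts [0, 10, 20] the interpolated count at time 0.5 is 5.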
For the receiving flow, we look at the upstream end of the link. We can treat
the same points as known — N ↑ (0), N ↑ (∆t), . . . , N ↑ (t) and N ↓ (0), N ↓ (∆t), . . . , N ↓ (t).
Since the fundamental diagram for the point queue model has no decreasing por-
tions, the known data at the downstream end can never be relevant. (A line
connecting one of these points to the unknown point N ↑ (t + ∆t) must have
Figure 9.27: Fundamental diagram for the spatial queue model, with capacity qmax and jam density kj.
negative slope, see Figure 9.26.) Furthermore, for the receiving flow, the char-
acteristic with positive slope +uf , corresponding to upstream conditions, is
irrelevant because we are assuming an unlimited number of vehicles are avail-
able to move from upstream — and therefore its term in (9.48) will never be the
minimum. We are only left with the middle term, corresponding to capacity, so

N↑(t + ∆t) = N↑(t) + qmax∆t (9.69)

and

R(t) = N↑(t + ∆t) − N↑(t) = qmax∆t . (9.70)
In terms of the fundamental diagram, the spatial queue model takes the
form in Figure 9.27. This diagram requires three parameters to calibrate: the
free-flow speed uf , the capacity qmax , and the jam density kj . Notice, however,
that the fundamental diagram is discontinuous, and immediately drops from
qmax to zero once jam density is reached. This implies that backward-moving
shockwaves can have infinite speed in the spatial queue model — a physical
interpretation is that when vehicles at the front of the queue begin moving,
vehicles at the rear of the queue immediately start moving as well. In reality,
there is a delay before vehicles at the rear of the queue begin moving, and this
can be treated as an artifact arising from simplifying assumptions made in the
spatial queue model.9
There are thus three possible characteristic speeds: +uf at free-flow, 0 at ca-
pacity flow, and −∞ when the queue reaches jam density. The Newell-Daganzo
method is applied in much the same way as was done for the point queue model.
In particular, the sending flow expression is exactly the same, because the two
characteristics with nonnegative velocity are the same. We thus have

N↓(t + ∆t) = min{N↑((t + ∆t) − L/uf), N↓(t) + qmax∆t} (9.71)

and

S(t) = min{N↑((t + ∆t) − L/uf) − N↓(t), qmax∆t} . (9.72)
9 Alternatively, connected and autonomous vehicles may be able to exhibit such behavior
if an entire platoon of vehicles coordinates its acceleration.
Figure 9.28: Spatial queue model characteristics for sending and receiving flow;
the upstream-moving wave is an approximation of the vertical component of the
fundamental diagram.
For the receiving flow, we have to take into account the new shockwave
speed. Dealing with an infinite speed can be tricky, since, taken literally, it
would mean that the upstream cumulative count N↑(t + ∆t) could depend on the
downstream cumulative count N↓(t + ∆t) at the same time. Since we are solv-
ing the model in forward order of time, however, we do not know the value
N ↓ (t + ∆t) when calculating N ↑ (t + ∆t). In an acyclic network, we could sim-
ply do the calculations such that N ↓ (t+∆t) is calculated first before N ↑ (t+∆t),
using the concept of a topological order. In networks with cycles — virtually
all realistic traffic networks — this will not work. Instead, what is best is to
approximate the “infinite” backward wave speed with one which is as large as
possible, basing the calculation on the most recent known point N ↓ (t) (Fig-
ure 9.28). Effectively, this replaces the infinite backward wave speed with one
of speed L/∆t. Equation (9.48) thus gives
N↑(t + ∆t) = min{N↑(t) + qmax∆t, N↓(t) + kj L} (9.73)

and

R(t) = N↑(t + ∆t) − N↑(t) = min{qmax∆t, (N↓(t) + kj L) − N↑(t)} . (9.74)
Equation (9.74) will ensure that the number of vehicles on the link will never
exceed kj L, assuming that this is true at time zero, as you are asked to show in
Exercise 4.
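The sending and receiving flow expressions (9.68)–(9.74) translate directly into code. A minimal sketch, assuming ∆t = 1 and integer L/uf (function names are ours):

```python
def sending_flow(N_up, N_dn, t, L_over_uf, qmax):
    """S(t) for the point or spatial queue, equations (9.68)/(9.72);
    time is discretized with dt = 1 and L/uf is assumed integer."""
    return min(N_up[max(t + 1 - L_over_uf, 0)] - N_dn[t], qmax)

def receiving_flow_point(qmax):
    """R(t) for the point queue, equation (9.70): space is never binding."""
    return qmax

def receiving_flow_spatial(N_up, N_dn, t, kj_L, qmax):
    """R(t) for the spatial queue, equation (9.74): the number of
    vehicles on the link can never exceed kj * L."""
    return min(qmax, N_dn[t] + kj_L - N_up[t])
```

Note that only the receiving flow distinguishes the two models; the sending-flow calculation is shared.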
9.5.4 Discussion
This chapter has presented four different link models: point queues, spatial
queues, the cell transmission model, and the link transmission model. Although
not initially presented this way, all four can be seen as special cases of the
Lighthill-Whitham-Richards model. The point and spatial queue models can
be derived from particularly simple forms of the fundamental diagram (as well
as from physical first principles, as in Section 9.1), while the cell transmission
model and link transmission model are more general methods which can handle
more sophisticated fundamental diagrams (typically triangular or trapezoidal in
practice). The cell transmission model directly solves the LWR system of partial
differential equations by discretizing in space and time, and applying a finite-
difference approximation. The link transmission model is based on the Newell-
Daganzo method. The primary distinction between these methods is that the
Newell-Daganzo method only requires tracking the cumulative counts N at the
upstream and downstream ends of each link in time, while the cell transmission
model also requires tracking the number of vehicles n at intermediate cells within
the link. However, the cell transmission model does not require storing any
values from previous time steps, and can function entirely using the number of
vehicles in each cell at the current time. The Newell-Daganzo method requires
that some past cumulative counts be stored, for the amount of time needed
for a wave to travel from one end of the link to the other. Which is more
desirable depends on implementation details, and on the specific application
context — at times it may be useful to know the distribution of vehicles within
a link (as the cell transmission model gives), while for other applications this
may be an irrelevant detail. One final advantage of the Newell-Daganzo method
is that the values it gives are exact. In the cell transmission model, backward-
moving shockwaves will tend to “spread out” as a numerical artifact of the
discretization; this will not happen when applying the Newell-Daganzo method.
The exercises explore this issue in more detail. On the other hand, the cell
transmission model is easier to explain to decision-makers, and its equations
have intuitive explanations in terms of vehicles moving within a link and the
amount of available space. The Newell-Daganzo method is a “deeper” method
requiring knowledge of partial differential equations, and seems more difficult
to convey to nontechnical audiences.
The point queue and spatial queue models have their places as well, despite
their strict assumptions. The major flaw in the point queue model, from the
standpoint of realism, is its inability to model queue spillbacks which occur
when links are full. On the other hand, by ignoring this phenomenon, the
point queue model is much more tractable, and is amenable even to closed-form
expressions of delay and sensitivity to flows. It is also more robust to errors in
input data, because queue spillback can introduce discontinuities in the network
loading. There are cases where this simplicity and robustness may outweigh the
(significant) loss in realism induced by ignoring spillbacks. The spatial queue
model can represent spillbacks, but will tend to underestimate their effect due to
its assumption of infinitely-fast backward moving shockwaves. Nevertheless, it
can also lead to simpler analyses than the link transmission model.
1. Let (h∗ , i) ∈ Γ−1 (i) be the approach which has the green indication at the
current time.
In this implementation, one must be a little bit careful if the green times
in the signal are not multiples of the time step ∆t. It is possible to round the
green times so that they are multiples of ∆t, but this approach can introduce
considerable error over the analysis period: for instance, assume that ∆t is equal
to six seconds, and a two-phase intersection has green times of 10 seconds and
14 seconds, respectively. Rounding to multiples of the time step would give both
phases twelve seconds each, which seems reasonable enough; but over a three-
hour analysis period, the phases would receive 75 and 105 minutes of green time
in reality, as compared to 90 minutes each in simulation. In highly congested
situations, this can introduce considerable error. This issue can be avoided if,
instead of rounding, one gives the green indication to the approach which would
have green in reality at that time. In the example above, the intersection has a
cycle length of 24 seconds. So, when t = 60 seconds, we are 12 seconds into the
third cycle; and at this point the green indication should be given to the second
phase. In this way, there is no systematic bias introduced into the total green
time each approach receives.
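The no-rounding rule — giving green to whichever phase would have it in reality — amounts to a modular-arithmetic lookup. A small sketch (the helper name is ours):

```python
def green_phase(t, greens):
    """Index of the phase showing green at time t (in seconds), for a
    pretimed signal whose phases have durations `greens`, repeating
    with cycle length sum(greens)."""
    r = t % sum(greens)           # position within the current cycle
    for i, g in enumerate(greens):
        if r < g:
            return i
        r -= g
```

In the example above, with green times of 10 and 14 seconds, t = 60 is 12 seconds into the third cycle, so the second phase (index 1) has green.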
Figure 9.30: Oriented capacities for a three-leg intersection, where all ap-
proaches have qmax = 60. (a) All approaches map to a unique outgoing link.
(b) The flow on approach (4,2) is split between two outgoing links.
If a vehicle cannot enter its downstream link because of an obstruction, we assume that the vehicle obstructs all
other vehicles from the same approach, respecting the FIFO principle. This
means that the outflows for all of the turning movements corresponding to any
approach must follow the same proportions as the number of drivers wishing
to use all of these movements. Similar to a merge, we assume that if there are
high sending flows from all the approaches, the fraction of the receiving flow
allocated to each approach is divided up proportionally. However, instead of
allocating the receiving flow Rij to approach (h, i) based on the full capacity
qmax^hi, we instead divide up the receiving flow based on the oriented capacity

qmax^hij = qmax^hi phij , (9.77)
where phij is the proportion of the flow from approach (h, i) which wishes to
exit on link (i, j). (If turn movement [h, i, j] is not in the allowable set Ξ(i),
then phij = 0.)
Multiplying the capacity by this proportion reflects the fact that an upstream
approach can only make use of an available space on a downstream link if there
is a vehicle wishing to turn. In Figure 9.30(a), each incoming link uses a unique
exiting link, and thus can claim its full capacity. In Figure 9.30(b), half the
vehicles on link (4,2) want to turn right and half wish to go straight, whereas
all the vehicles on link (3,2) wish to turn right. Link (3,2) therefore has twice
as many opportunities to fill available space on link (2,1), and thus its rightful
share is twice that of link (4,2). For any two approaches [h, i, j] and [h′, i, j]
using the same outgoing link, we thus require that

yhij / yh′ij = qmax^hij / qmax^h′ij , (9.78)
assuming that both approaches are fully competing for the link (i, j). If an ap-
proach has a small sending flow, it may use less of its assigned receiving flow
than equation (9.78) allocates, and this unused receiving flow may be used by
other approaches.
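Equation (9.77) and the sharing rule (9.78) can be checked numerically for Figure 9.30(b), where link (3,2) sends all its flow to (2,1) while link (4,2) splits evenly (a small sketch; variable names are ours):

```python
from fractions import Fraction as F

# Figure 9.30(b): approach (3,2) sends everything to link (2,1), while
# approach (4,2) splits evenly between (2,1) and (2,3); both approaches
# have qmax = 60.
qmax = {3: 60, 4: 60}                       # approach capacities qmax^hi
p = {3: {1: F(1)}, 4: {1: F(1, 2), 3: F(1, 2)}}
q_or = {(h, j): qmax[h] * p[h][j]           # oriented capacities, eq. (9.77)
        for h in p for j in p[h]}
# Per (9.78), link (3,2) claims twice the share of (2,1)'s receiving flow:
share_ratio = q_or[(3, 1)] / q_or[(4, 1)]
```

Here `q_or[(3, 1)]` is 60 while `q_or[(4, 1)]` is 30, so the share ratio is 2, matching the discussion above.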
Similarly, we can define the oriented sending flow

Shij = Shi phij (9.79)

to reflect the demand for the turning movement [h, i, j]. Following the same
principles as the merge model, if the oriented sending flow from an approach is
less than its proportionate share of a downstream link’s receiving flow, its unused
share will be divided among the other approaches with unserved demand still
remaining, in proportion to their oriented capacities. If the oriented sending
flow for a turn movement is greater than the oriented receiving flow for that
movement, then by the FIFO principle applied to diverges, it will restrict flow
to all other downstream links by the same proportion, and for any two turning
movements [h, i, j] and [h, i, j′] from the same approach we must have

yhij / yhij′ = Shij / Shij′ = phij / phij′ . (9.80)
We can rearrange this equation to show that the ratio yhij /Shij (the ratio of
actual flow and desired flow for any turning movement) is uniform for all the
turning movements approaching from link (h, i) — this ratio plays the same role
as φ in a diverge.
The presence of multiple incoming and outgoing links causes another com-
plication, in that the flows between approaches are all linked together. If an
approach is restricted by the receiving flow of a downstream link, flow from
that approach is restricted to all other downstream links. This means that the
approach may not fully consume its “rightful share” of another downstream
link, thereby freeing up additional capacity for a different approach. Therefore,
we cannot treat the approaches or downstream links separately or even sequen-
tially in a fixed order, because we do not know a priori how these will be linked
together.
However, there is an algorithm which generates a consistent solution de-
spite these mutual dependencies. In this algorithm, each approach link can be
demand-constrained, or supply-constrained by a downstream link. If an approach
is demand-constrained, its oriented sending flow to all downstream links is less
than its rightful share, and therefore all of the sending flow can move. If an
approach is supply-constrained by link (i, j), then the approach is unable to
move all of its sending flow, and the fraction which can move is dictated by link
(i, j). (That is, receiving flow on (i, j) is the most restrictive constraint for the
approach). The algorithm must determine which links are demand-constrained,
and which are supply-constrained by a downstream link.
To find such a solution, we define two sets of auxiliary variables, S̃hij to
reflect the amount of unallocated sending flow for movement [h, i, j], and R̃ij to
reflect the amount of unallocated receiving flow for outgoing link (i, j). These
are initialized to the oriented sending flows and link receiving flows, and reduced
iteratively as flows are assigned and the available sending and receiving flows are
used up. The algorithm also uses the notion of active turning movements; these
are turning movements whose flows can still be increased. A turning movement
[h, i, j] becomes inactive either when S̃hij drops to zero (all vehicles that wish to
turn have been assigned), or when R̃ij 0 drops to zero for any outgoing link (i, j 0 )
that approach (h, i) is using (that is, for which phij 0 > 0). Allocating all of the
receiving flow for one outgoing link can thus impact flow on turning movements
which use other outgoing links, because of the principle that vehicles wishing to
turn will block others, as expressed in equation (9.80). The set of active turning
movements will be denoted by A; a turning movement remains active until we
have determined whether it is supply-constrained or demand-constrained.
At each stage of the algorithm, we will increase the flows for all active turning
movements. We must increase these flows in a way which is consistent both
with the turning fractions (9.80), and with the division of receiving flow for
outgoing links given by equation (9.78), and we will use αhij to reflect the rate
of increase for turning movement (h, i, j). The absolute values of these α values
do not matter, only their proportions, so you can scale them in whatever way is
most convenient to you. Often it is easiest to pick one turning movement (h, i, j)
and fix its α value either to one, or to its oriented sending flow. The turning
proportions from (h, i) then fix the α values for all other turning movements
from the same approach. You can then use equation (9.78) to determine α
values for turning movements competing for the same outgoing link, then use
the turning fractions for the upstream link on those turning movements, and so
on until a consistent set of α values has been determined.
The algorithm then increases the active turning movement flows in these
proportions until some movement becomes inactive, because its sending flow or
the receiving flow on its outgoing link becomes exhausted. The process is then
repeated with the smaller set of turning movements which remain active, and
continues until all possible flows have been assigned. This algorithm is a bit
more involved than the node models seen thus far, and you may find it helpful
to follow the example below as you read through the algorithm steps.
1. Initialize by calculating oriented capacities and sending flows using equa-
tions (9.77) and (9.79); by setting yhij ← 0 for all [h, i, j] ∈ Ξ(i); by
setting S̃hij ← Shij for all [h, i, j] ∈ Ξ(i) and R̃ij ← Rij for all (i, j) ∈ Γ(i);
and by declaring active all turning movements with positive sending flow:
A ← {[h, i, j] ∈ Ξ(i) : Shij > 0}.
2. Identify a set of αhij values for all active turning movements which is con-
sistent with the turning fractions (αhij /αhij′ = phij /phij′ for all turning
movements from the same incoming link) and oriented capacities
(αhij /αh′ij = qmax^hij /qmax^h′ij for all turning movements to the same
outgoing link).

3. For each outgoing link (i, j) ∈ Γ(i), identify the rate αij at which its
receiving flow will be reduced, by adding αhij for all active turn movements
whose outgoing link is (i, j): αij ← Σ[h,i,j]∈A αhij .
4. Determine the point at which some turning movement will become inac-
tive, by calculating

θ = min{ min[h,i,j]∈A {S̃hij /αhij } , min(i,j)∈Γ(i):αij>0 {R̃ij /αij } } . (9.81)

Figure 9.31: Example of equal priorities algorithm, showing sending flows and
turning proportions. All links have capacity of 60.
5. Increase flows for active turning movements, and update unallocated send-
ing and receiving flows: for all [h, i, j] ∈ A update yhij ← yhij + θαhij ,
S̃hij ← S̃hij − θαhij , and R̃ij ← R̃ij − θαhij .
6. Update the set of active turning movements, by removing from A any
turning movement [h, i, j] for which S̃hij = 0 or for which R̃ij 0 = 0 for any
(i, j 0 ) ∈ Γ(i) which is being used (phij 0 > 0).
7. If there are any turning movements which are still active (A 6= ∅), return
to step 3. Otherwise, stop.
As a demonstration, consider the intersection in Figure 9.31, where all links
(incoming and outgoing) have the same capacity of 60 vehicles per time step,
and the sending flows Shi and turning proportions phij are shown. None of
the downstream links is congested, so their receiving flows are equal to the
capacity. (In the figure, two sets of numbers are shown for each approach; the
“upstream” number is the sending flow and the “downstream” number(s) are
the proportions.) The oriented capacities can be seen in Figure 9.30(b). For
this example, the six turning movements will be indexed in the following order:
Ξ(2) = {[1, 2, 3], [1, 2, 4], [3, 2, 1], [3, 2, 4], [4, 2, 1], [4, 2, 3]} (9.82)
All vectors referring to turning movements will use this ordering for their com-
ponents.
Step 1 of the algorithm initializes the oriented sending flows using equa-
tion (9.79),

S = [Shij ] = [0 10 60 0 30 30] (9.83)

and the oriented capacities using equation (9.77):

qmax = [qmax^hij ] = [0 60 60 0 30 30] . (9.84)
The step also initializes the turning movement flows and auxiliary variables:

y ← [0 0 0 0 0 0] , (9.85)

S̃ ← S = [0 10 60 0 30 30] , (9.86)

and

R̃ = [R̃21 R̃23 R̃24 ] = [60 60 60] . (9.87)
The set of active turn movements is A = {[1, 2, 4], [3, 2, 1], [4, 2, 1], [4, 2, 3]}.
Step 2 of the algorithm involves calculation of a set of consistent αhij values.
One way of doing this is to start by setting α423 ← S423 = 30. The turning
fractions from (4, 2) then require that α421 = α423 = 30. The allocation rule
for outgoing link (2, 1) then forces α321 = 60: the oriented capacity for [3, 2, 1]
is twice that of [4, 2, 1], and the α values must follow the same proportion.
Turning movement [1, 2, 4] is independent of all of the other turning movements
considered thus far, so we can choose its α value arbitrarily; say, α124 ← S124 =
10. (You should experiment with different ways of calculating these α
values, and convince yourself that the final flows are the same as long as the
proportions of α values for interdependent turning movements are the same.)
We thus have
α = [αhij ] = [0 10 60 0 30 30] . (9.88)
The α values for inactive turning movements have been set to zero for clarity;
their actual value is irrelevant because they will not be used in any of the steps
that follow.
With these flow increments, the receiving flows on outgoing links (2, 1), (2, 3),
and (2, 4) will be reduced at rates α21 = 90, α23 = 30, and α24 = 10, as
dictated by Step 3.
In Step 4, we determine how much we can increase the flow at the rates
given by α until some movement becomes inactive. We have

θ = min{ 10/10 , 60/60 , 30/30 , 30/30 , 60/90 , 60/30 , 60/10 } = 2/3 . (9.89)
We can now adjust the flows, as in Step 5. We increase the flow on each
active turning movement by ⅔αhij , giving

y ← [0 6⅔ 40 0 20 20] . (9.90)

We subtract these flow increments from the auxiliary sending and receiving
flows, giving

S̃ ← [0 3⅓ 20 0 10 10] (9.91)

and

R̃ ← [0 40 53⅓] . (9.92)
Step 6 updates the set of active turning movements. With the new S̃ and R̃
values, we see that [3, 2, 1] and [4, 2, 1] have become inactive, since there is no
remaining receiving flow on link (2, 1). Furthermore, this inactivates movement
[4, 2, 3]: even though there are still travelers that wish to turn in this direction,
and space on the downstream link (S̃423 and R̃23 are still positive), they are
blocked by travelers waiting to use movement [4, 2, 1]. So, there is only one
active movement remaining, A ← {[1, 2, 4]}, and we must return to step 3.
In Step 3, we must recalculate the αij values because some of the turning
movements are inactive. With the new set A, we have α21 = α23 = 0 and
α24 = 10. The new step size is
θ = min{ S̃124 /α124 , R̃24 /α24 } = min{ 3⅓/10 , 53⅓/10 } = ⅓ . (9.93)
We then increase the flows, increasing y124 by 10 × ⅓ to 10, decreasing S̃124
to zero, and decreasing R̃24 to 50. This change inactivates movement [1, 2, 4].
Since there are no more active turning movements, the algorithm terminates,
and the final vector of flows is
y = [0 10 40 0 20 20] . (9.94)
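The equal priorities algorithm can be implemented compactly. The sketch below is our own rendering (function and variable names are ours): it uses exact fractions, and chooses each α equal to the oriented capacity, which is one valid consistent choice of rates since it respects both the turning fractions and equation (9.78). It reproduces the final flows (9.94) for the example above.

```python
from fractions import Fraction as F

def equal_priority_node(S, p, R, q):
    """Equal-priorities node model sketch. S[h]: sending flow of approach
    h; p[h][j]: turning proportions (as Fractions); R[j]: receiving flows
    of outgoing links; q[h]: approach capacities.
    Returns movement flows y[(h, j)]."""
    moves = [(h, j) for h in p for j in p[h] if p[h][j] > 0]
    y = {m: F(0) for m in moves}
    St = {(h, j): F(S[h]) * p[h][j] for (h, j) in moves}   # oriented sending flows (9.79)
    qo = {(h, j): F(q[h]) * p[h][j] for (h, j) in moves}   # oriented capacities (9.77)
    Rt = {j: F(R[j]) for j in R}
    A = {m for m in moves if St[m] > 0}
    while A:
        # Taking alpha equal to the oriented capacity satisfies both the
        # turning-fraction condition and the sharing rule (9.78).
        alpha = {m: qo[m] for m in A}
        a_link = {j: sum(alpha[m] for m in A if m[1] == j) for j in R}
        theta = min([St[m] / alpha[m] for m in A] +
                    [Rt[j] / a_link[j] for j in R if a_link[j] > 0])
        for m in A:
            y[m] += theta * alpha[m]
            St[m] -= theta * alpha[m]
            Rt[m[1]] -= theta * alpha[m]
        # FIFO blocking: a movement stays active only while its approach
        # has sending flow left and every link it uses has space left.
        A = {(h, j) for (h, j) in A
             if St[(h, j)] > 0
             and all(Rt[j2] > 0 for j2 in p[h] if p[h][j2] > 0)}
    return y
```

Because the rates within each interdependent group keep the required proportions, the final flows agree with the hand calculation even though the α scaling differs.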
The node model is simplified if we assume that the β values are all strictly
positive. You might find this unrealistic: in the example above, if the turning
movement must yield to the through movement, then at full saturation perhaps
no turning vehicles could move. In practice, however, priority rules are not
strictly obeyed as traffic flows near saturation. Polite through drivers may
stop to let turning drivers move, or aggressive turning drivers might force their
way into the through stream. (Think of what would happen at a congested
freeway if vehicles merging from an onramp took the “yield” sign literally!)
The requirement of strictly positive β values thus has some practical merits, as
well as mathematical ones. The exercises explore ways to generalize this node
model, including strict priority, and cases where different turning movements
may consume different amounts of the receiving flow (for instance, if they are
moving at different speeds).
We can now adapt the algorithm for equal priority for the case of intersec-
tions with different priorities. The algorithm is augmented by adding receiving
flows Rc and auxiliary receiving flows R̃c for each conflict point, and we extend
some of the computations to include the set of conflict points. First, we require
that the ratio of α values for two turning movements using the same conflict
point follow the ratio of the β values, weighted by the turning fractions,
assuming saturated conditions:

αhij /αh′ij′ = (βhij^c phij )/(βh′ij′^c ph′ij′ ) for all c ∈ C; [h, i, j], [h′, i, j′] ∈ Ξ(c) . (9.96)
Note that the β values are multiplied by the relevant turning fraction for the
movements. As with the oriented capacity, this respects the fact that the more
flow is attempting to turn in a particular direction, the more opportunities or
gaps will be available for it to claim.
Second, we must calculate the inflow rates to conflict points given the α
values from active turning movements:
αc = Σ[h,i,j]∈Ξ(c)∩A αhij . (9.97)
Third, the calculation of the step size θ must now include obstructions from
conflict points:
θ = min{ min[h,i,j]∈A {S̃hij /αhij } , min(i,j)∈Γ(i):αij>0 {R̃ij /αij } , minc∈C:αc>0 {R̃c /αc } } . (9.98)
With these modifications, the algorithm proceeds in the same way as before.
Specifically,
1. Initialize by calculating oriented capacities and sending flows using equa-
tions (9.77) and (9.79); by setting yhij ← 0 for all [h, i, j] ∈ Ξ(i); by
setting S̃hij ← Shij for all [h, i, j] ∈ Ξ(i), R̃ij ← Rij for all (i, j) ∈ Γ(i),
and R̃c ← Rc for each c ∈ C; and by declaring active all turning move-
ments with positive sending flow: A ← {[h, i, j] ∈ Ξ(i) : Shij > 0}.
Figure 9.32: Example intersection for the algorithm with conflict points, showing
sending flows, turning proportions, link capacities, and receiving flows.
2. Identify a set of αhij values for all active turning movements which is con-
sistent with the turning fractions (αhij /αhij′ = phij /phij′ for all turning
movements from the same incoming link), oriented capacities
(αhij /αh′ij = qmax^hij /qmax^h′ij for all turning movements to the same
outgoing link), and conflict points, based on equation (9.96).

3. For each outgoing link (i, j) ∈ Γ(i) and conflict point c ∈ C, identify the
rate at which its receiving flow will be reduced, using αij ← Σ[h,i,j]∈A αhij
for links and equation (9.97) for conflict points.
4. Determine the point at which some turning movement will become inac-
tive, by calculating θ using equation (9.98).
5. Increase flows for active turning movements, and update unallocated send-
ing and receiving flows: for all [h, i, j] ∈ A update yhij ← yhij + θαhij ,
S̃hij ← S̃hij − θαhij ; for all (i, j) ∈ Γ(i) update R̃ij ← R̃ij − θαij , and for
all c ∈ C update R̃c ← R̃c − θαc .
6. Update the set of active turning movements, by removing from A any
turning movement [h, i, j] for which S̃hij = 0, for which R̃ij′ = 0 for any
(i, j′) ∈ Γ(i) which is being used (phij′ > 0), or for which R̃c = 0 for any
conflict point c used by a turning movement from approach (h, i).
7. If there are any turning movements which are still active (A 6= ∅), return
to step 3. Otherwise, stop.
As a demonstration, consider the intersection in Figure 9.32. Link (2, 4) has
a receiving flow of 60 vehicles, link (2, 3) has a receiving flow of 30 vehicles,
and link (2, 1) has a receiving flow of only 45 vehicles. There is one conflict
point, indexed c, which is marked with a circle in Figure 9.32. Conflict point c
has a receiving flow of 60, and the turning
movement [1, 2, 3] must yield to the through movement [4, 2, 1], as reflected by
the ratio β421 /β123 = 5. For this example, the six turning movements will be
indexed in the following order:
Ξ(2) = {[1, 2, 3], [1, 2, 4], [3, 2, 1], [3, 2, 4], [4, 2, 1], [4, 2, 3]} (9.99)
All vectors referring to turning movements will use this ordering for their com-
ponents.
Step 1 of the algorithm initializes the oriented sending flows using equa-
tion (9.79),

S = [Shij ] = [45 15 27 0 54 0] (9.100)

and the oriented capacities using equation (9.77):

qmax = [qmax^hij ] = [30 30 30 0 60 0] . (9.101)
The step also initializes the turning movement flows and auxiliary variables:

y ← [0 0 0 0 0 0] , (9.102)

S̃ ← S = [45 15 27 0 54 0] , (9.103)

and

R̃ = [R̃21 R̃23 R̃24 R̃c ] = [45 30 60 60] . (9.104)
The set of active turn movements is A = {[1, 2, 3], [1, 2, 4], [3, 2, 1], [4, 2, 1]}.
Step 2 of the algorithm involves calculation of a set of consistent αhij values.
One way of doing this is to start by setting α124 ← S124 = 15. The turning frac-
tions from (1, 2) then require that α123 = 3α124 = 45. The allocation rule (9.96)
for conflict point c forces α421 = 300: the β value for [4, 2, 1] is five times that
of the β value for [1, 2, 3], and the turning fraction for [4, 2, 1] is a third higher.
Since α421 = 300, we must have α321 = 150, because the oriented capacity of
[3, 2, 1] is half that of [4, 2, 1]. This gives the flow increments

α = [αhij ] = [45 15 150 0 300 0] . (9.105)
With these flow increments, the receiving flows on outgoing links (2, 1), (2, 3),
(2, 4), and the conflict point c will be reduced at rates α21 = 450, α23 = 45,
α24 = 15, and αc = 345, as dictated by Step 3.
In Step 4, we determine how much we can increase the flow at the rates
given by α until some movement becomes inactive. We have

θ = min{ 45/45 , 15/15 , 27/150 , 54/300 , 45/450 , 30/45 , 60/15 , 60/345 } = 1/10 . (9.106)

We can now adjust the flows, as in Step 5. We increase the flow on each
active turning movement by (1/10)αhij , giving

y ← [4.5 1.5 15 0 30 0] . (9.107)
We subtract these flow increments from the auxiliary sending and receiving
flows, giving

S̃ ← [40.5 13.5 12 0 24 0] (9.108)

and

R̃ ← [0 25.5 58.5 25.5] . (9.109)
Step 6 updates the set of active turning movements. With the new S̃ and
R̃ values, we see that [3, 2, 1] and [4, 2, 1] have become inactive, since there is
no remaining receiving flow on link (2, 1). So, there are two active movements
remaining, A ← {[1, 2, 3], [1, 2, 4]}, and we must return to step 3.
In Step 3, we must recalculate the αij values because some of the turning
movements are inactive. With the new set A, we have α21 = 0, α23 = 45,
α24 = 15, and αc = 45. The new step size is

θ = min{ 40.5/45 , 13.5/15 , 25.5/45 , 58.5/15 , 25.5/45 } = 17/30 . (9.110)

We then increase the flows, increasing y123 by 45 × 17/30 to 30 and y124 by
15 × 17/30 to 10, decreasing S̃123 to 15, S̃124 to 5, R̃23 to 0, R̃24 to 50, and
R̃c to 0. This change inactivates movement [1, 2, 3]; and movement [1, 2, 4] is
then inactivated as well, because its flow is blocked by vehicles waiting to take
[1, 2, 3].
Since there are no more active turning movements, the algorithm terminates,
and the final vector of flows is
y = [30 10 15 0 30 0] . (9.111)
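The version with conflict points can be implemented in the same style. The sketch below is our own reading of the algorithm (all names are ours): consistent α values are found by propagating ratio constraints across movements that share an approach, an outgoing link, or a conflict point, and the oriented capacities are passed in directly for simplicity. It reproduces the final flows (9.111) for the example above.

```python
from fractions import Fraction as F

def consistent_alpha(A, p, qo, conflicts, beta):
    """Assign rates alpha to the active movements in A, consistent with
    turning fractions, oriented capacities, and the conflict rule (9.96).
    Scale within each independent component is arbitrary."""
    alpha = {}
    for seed in A:
        if seed in alpha:
            continue
        alpha[seed] = F(1)
        frontier = [seed]
        while frontier:
            m = frontier.pop()
            for m2 in A:
                if m2 in alpha:
                    continue
                r = None
                if m2[0] == m[0]:            # same approach: follow p
                    r = p[m2[0]][m2[1]] / p[m[0]][m[1]]
                elif m2[1] == m[1]:          # same outgoing link: follow q_or
                    r = qo[m2] / qo[m]
                else:                        # same conflict point: beta * p
                    for ms in conflicts.values():
                        if m in ms and m2 in ms:
                            r = (beta[m2] * p[m2[0]][m2[1]]) / \
                                (beta[m] * p[m[0]][m[1]])
                if r is not None:
                    alpha[m2] = alpha[m] * r
                    frontier.append(m2)
    return alpha

def priority_node(S, p, R, qo, conflicts, Rc, beta):
    """Node model with conflict points. S[h]: approach sending flows;
    p[h][j]: turning proportions (Fractions); R[j]: link receiving flows;
    qo[(h,j)]: oriented capacities; conflicts[c]: movements through
    conflict point c; Rc[c]: its receiving flow; beta[(h,j)]: priorities."""
    qo = {m: F(v) for m, v in qo.items()}   # keep arithmetic exact
    moves = [(h, j) for h in p for j in p[h] if p[h][j] > 0]
    y = {m: F(0) for m in moves}
    St = {(h, j): F(S[h]) * p[h][j] for (h, j) in moves}
    Rt = {j: F(R[j]) for j in R}
    Rct = {c: F(Rc[c]) for c in conflicts}
    A = {m for m in moves if St[m] > 0}
    while A:
        alpha = consistent_alpha(A, p, qo, conflicts, beta)
        a_link = {j: sum(alpha[m] for m in A if m[1] == j) for j in R}
        a_c = {c: sum(alpha[m] for m in A if m in conflicts[c])
               for c in conflicts}
        theta = min([St[m] / alpha[m] for m in A]
                    + [Rt[j] / a_link[j] for j in R if a_link[j] > 0]
                    + [Rct[c] / a_c[c] for c in conflicts if a_c[c] > 0])
        for m in A:
            y[m] += theta * alpha[m]
            St[m] -= theta * alpha[m]
        for j in R:
            Rt[j] -= theta * a_link[j]
        for c in conflicts:
            Rct[c] -= theta * a_c[c]
        # FIFO blocking: an approach stops entirely once any link or
        # conflict point used by one of its movements runs out of space.
        blocked = {h for (h, j) in moves
                   if Rt[j] == 0
                   or any(Rct[c] == 0 for c in conflicts
                          if (h, j) in conflicts[c])}
        A = {m for m in A if St[m] > 0 and m[0] not in blocked}
    return y
```

Exact fractions matter here: the step sizes must drive the binding auxiliary variables to exactly zero so that the blocking tests fire.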
(1995a); for an alternative merge model (which does not satisfy the invariance
principle), see Jin and Zhang (2003).
The hydrodynamic traffic flow theory described in Section 9.4.3 was indepen-
dently developed by Lighthill and Whitham (1955) and Richards (1956). Newell
(1993a), Newell (1993b), and Newell (1993c) recognized that the cumulative ve-
hicle counts N , and the analysis of characteristics resulting from a triangular
fundamental diagram, greatly simplify the solution of the model, a theory com-
pleted by Daganzo (2005a) and Daganzo (2005b). Interestingly, an equivalent
model (used for soil erosion) was separately developed by Luke (1972). There
are alternative means of solving the LWR model not presented in this book,
through recognizing it as a Hamilton-Jacobi system of partial differential equa-
tions (LeVeque, 1992; Evans, 1998) which can be solved using the Lax-Hopf
formula or viability theory (Lax, 1957; Hopf, 1970; Claudel and Bayen, 2010);
or for the purposes of a link model, representing sending and receiving flows
using a “double queue,” one at each end of the link (Osorio and Bierlaire, 2009;
Osorio et al., 2011), a representation suitable for a stochastic version of the
LWR model.
The cell transmission model was reported in Daganzo (1994) and Daganzo
(1995a), essentially a Godunov scheme for solving the LWR system (Godunov,
1959; Lebacque, 1996). The link transmission model was developed by Yperman
(2007); see also Gentile (2010).
The more sophisticated node models reported later in the chapter are adapted
from Tampère et al. (2011) and Corthout et al. (2012).
Finally, network loading can be accomplished by entirely different means
than that reported in this chapter, without the use of explicit link and node mod-
els discretized in space and time. For instance, the discretization can be done in
the space of vehicle trajectories (Bar-Gera, 2005). In mathematical terms, this
involves converting from Eulerian coordinates (x and t) to Lagrangian coordi-
nates (with the cumulative count N in place of either x or t). For more on this
alternative, and reformulations of the LWR model with this change of variables,
see Laval and Leclercq (2013).
Another common alternative is to use traffic simulation to perform the
network loading. Examples include the software packages VISSIM (Fellen-
dorf, 1994), AIMSUN (Barcelo, 1998), DynaMIT (Ben-Akiva et al., 1998),
VISTA (Ziliaskopoulos and Waller, 2000), DYNASMART (Mahmassani, 2000),
Dynameq (Mahut et al., 2003), and DynusT.
9.8 Exercises
1. [14] Table 9.7 shows cumulative inflows and outflows to a link with a
capacity of 10 vehicles per time step, and a free-flow time of 2 time steps.
Use the point queue model to calculate the sending and receiving flow for
each time step.
2. [14] Repeat Exercise 1 with the spatial queue model. Assume that the jam
378 CHAPTER 9. NETWORK LOADING
density is such that at most 20 vehicles can fit on the link simultaneously.
3. [33] In the point queue model, if the inflow and outflow rates qin and qout
are constants with qin ≥ qout , show that the travel time experienced by
the n-th vehicle is L/uf + (n/qin )(qin /qout − 1). The same result holds for the
spatial queue model, if there is no spillback.
4. [24] In the spatial queue model, show that if the total number of vehicles
on a link is at most kj L at time t, then the total number of vehicles on
the link at time t + 1 is also at most kj L.
6. [32] Extend the merge model of Section 9.2.2 to a merge node with three
incoming links.
7. [43] Show that the formula (9.14) indeed captures both case II and case
III of the merge model presented in Section 9.2.2.
8. [32] Show that the merge model of Section 9.2.2 satisfies all the desiderata
of Section 9.2.
9. [41] Consider an alternative merge model for the congested case, which
allocates the receiving flow proportional to the sending flows of the in-
coming links, rather than proportional to the capacities of the incoming
links as was done in Section 9.2.2. Show that this model does not satisfy
the invariance principle.
10. [13] In a diverge, the incoming link has a sending flow of 120, 25% of the
vehicles want to turn onto outgoing link 1, and the remainder want to turn
onto outgoing link 2. Report the number of vehicles moving to outgoing
links 1 and 2 if their respective receiving flows are (a) 80 and 100; (b) 80
and 60; (c) 10 and 40.
11. [10] Extend the diverge model of Section 9.2.3 to a diverge node with
three outgoing links.
12. [21] Develop a model for a diverge node with two outgoing links, in which
flows waiting to enter one link do not block flows entering the other link.
When might this model be more appropriate?
13. [25] Show that the diverge model of Section 9.2.3 satisfies all the desiderata
of Section 9.2.
14. [10] In the network loading procedure in Section 9.3, we specified that
centroid connectors starting at origins should have high jam density, and
those ending at destinations should have high capacity. Would anything
go wrong if centroid connectors starting at origins also had high capacity?
What if centroid connectors ending at destinations had high jam density?
15. [12] Draw trajectory diagrams which reflect the following situations: (a)
steady-state traffic flow, no vehicles speeding up or slowing down; (b)
vehicles approaching a stop sign, then continuing; (c) a slow semi truck
merges onto the roadway at some point in time, then exits at a later point
in time. Draw at least five vehicle trajectories for each scenario.
17. [51] Answer the following questions about the LWR model with a concave
fundamental diagram.
(a) Show that for any concave fundamental diagram and any flow value q
less than the capacity, there are exactly two possible speeds u1 and u2
producing the flow q, one corresponding to subcritical (uncongested)
conditions and the other corresponding to supercritical (congested)
conditions.
(b) Derive and plot the speed-flow relationship corresponding to the Green-
shields model of Exercise 16. Express this relationship with two
functions u1 (q) and u2 (q) corresponding to uncongested and con-
gested conditions, respectively; these functions should have a domain
of [0, qmax ] and intersect at capacity.
(c) Derive and plot the fundamental diagram corresponding to the High-
way Capacity Manual speed-flow relation for basic freeway segments.
The capacity of such a segment is given by qmax = 1800 + 5uf . In this
equation, speeds are measured in km/hr and flows in veh/hr/lane:
u1 (q) = { uf ,                                                          if q ≤ 3100 − 15uf
         { uf − ((23uf − 1800)/28) · ((q + 15uf − 3100)/(20uf − 1300))^2.6 ,  if 3100 − 15uf ≤ q ≤ qmax
(9.112)
u2 (q) = q/28 (9.113)
19. [22] For each of these fundamental diagrams, derive the speed-density
function (that is, the travel speed for any given density value), and provide
a sketch.
(a) Q(k) = Ck(kj − k), where C is a constant and kj is the jam density.
(b) Q(k) = min{uf k, w(kj − k)}
(c) Q(k) = min{uf k, qmax , w(kj − k)}
20. [44] Consider a long, uninterrupted freeway with a capacity of 4400 vehi-
cles per hour, a jam density of 200 vehicles per mile, and a free-flow speed
of 75 miles per hour. Initially, freeway conditions are uniform and steady
with a subcritical flow of 2000 vehicles per hour. An accident reduces the
roadway capacity to 1000 veh/hr for thirty minutes. Draw a shockwave di-
agram to show the effects of this accident, reporting the space-mean speed,
volume, and density in each region of your diagram, and the speed and
direction of each shockwave. Assume that the fundamental diagram takes
the shape of the Greenshields model (Exercise 16), and that a stopped
queue discharges at capacity.
21. [46] Consider a roadway with a linear-speed density relationship (cf. Ex-
ercise 16) whose capacity is 2000 veh/hr and free-flow speed is 40 mi/hr.
Initially, the flow is 1000 veh/hr and uncongested. A traffic signal is red
for 45 seconds, causing several shockwaves. When the light turns green,
the queue discharges at capacity.
23. [11] Show that a shockwave connecting two uncongested (subcritical) traf-
fic states always moves downstream, while a shockwave connecting two
congested (supercritical) traffic states always moves upstream. This is re-
lated to the observation in the chapter that “uncongested states propagate
downstream, and congested states propagate upstream.”
24. [36] This exercise asks you to fill in some details of the example in Sec-
tion 9.4 where the fundamental diagram was Q(k) = k(240 − k)/240 and
[Figure 9.33: network with nodes A–F, used in Exercise 25; link capacities
(veh/min) of 80, 60, 42, 42, 40, 20, and 10 appear on the figure.]
the cumulative count map was N (x, t) = 60t − 120x + 60x²/(t + 1). Times are
measured in minutes, and distances in miles.
(a) Calculate the capacity, jam density, and free-flow speed associated
with this fundamental diagram.
(b) Verify that the conservation relationship (9.25) is satisfied by the flow
and density maps q(x, t) and k(x, t).
(c) Verify that the density and flow maps are consistent with the given
fundamental diagram.
(d) Calculate the speed u(x, t) at each point and time. Are vehicles ac-
celerating, decelerating, or maintaining a constant speed?
25. [45] Consider the network in Figure 9.33, where each link has a free-flow
time of 5 minutes and a capacity shown on the figure, and vehicles split
equally at each diverge (that is, pABC = pABD = pBCD = pBCE = 1/2
at all times). Vehicles enter the network at a rate of 80 veh/min for
20 minutes, and then the inflow rate drops to zero. Perform dynamic
network loading, using point queues for the link models. For each link in
the network, plot the cumulative counts N ↑ and N ↓ over time, as well as
the sending flow and receiving flow over time. At what time does the last
vehicle leave the network?
26. [25] Write the formula for the fundamental diagram Q(k) in the cell trans-
mission model example depicted in Table 9.4.
27. [13] A link is divided into four cells; on this link the capacity is 10 vehicles
per time step, each cell can hold at most 40 vehicles, and the ratio of
backward wave speed to free-flow speed is 0.5. Currently, the number of
vehicles in each cell is as in Table 9.8 (Cell 1 is at the upstream end of
the link, Cell 4 at the downstream end.) Calculate the number of vehicles
that will move between each pair of cells in the current time interval (that
is, the y12 , y23 , and y34 values), and the number of vehicles in each cell
at the start of the next time interval. Assume no vehicles enter or exit the
link.
28. [23] Table 9.9 shows cumulative inflows and outflows to a link with a
capacity of 10 vehicles per time step, a free-flow time of 2 time steps, and
a backward wave time of 4 time steps. At jam density, there are 20 vehicles
on the link. Use the link transmission model to calculate the sending flow
S(7) and the receiving flow R(7).
29. [44] (Exploring shock spreading.) A link is seven cells long; at most 15
vehicles can fit into each cell, the capacity is 5 vehicles per timestep, and
w/uf = 1/2. Each time step, 2 vehicles wish to enter the link, and will do
so if the receiving flow can accommodate. There is a traffic signal at the
downstream end of the link. During time steps 0–9, and from time step
50 onward, the light is green and all of the link’s sending flow can leave.
For the other time steps, the light is red, and the sending flow of the link
is zero.
(a) Use the cell transmission model to propagate flow for 80 time steps,
portraying the resulting cell occupancies in a time-space diagram (time
on the horizontal axis, space on the vertical axis). At what time
interval does the receiving flow first begin to drop; at what point does
it reach its minimum value; and what is that minimum value? Is there
any point at which the entire link is at jam density?
(b) Repeat, but with w/uf = 1.
(c) Repeat, but instead use the link transmission model (with the same
time step) to determine how much flow can enter or leave the link.
30. [68] Consider the network in Figure 9.34. The figure shows each link’s
length, capacity, jam density, free-flow speed, and backward wave speed.
[Figure 9.34: network A → B → D → E, with an alternate route B → C → E.
Links (B, C) and (C, E) each have length 0.125 km, capacity 11520 vph, jam
density 320 veh/km, free-flow speed 90 kph, and backward wave speed 30 kph.
The remaining links:
Link (A, B): length 0.25 km, capacity 17280 vph, jam density 480 veh/km,
free-flow speed 90 kph, backward wave speed 30 kph.
Link (B, D): length 0.125 km, capacity 5760 vph, jam density 220 veh/km,
free-flow speed 45 kph, backward wave speed 15 kph.
Link (D, E): length 0.25 km, capacity 10800 vph, jam density 160 veh/km,
free-flow speed 90 kph, backward wave speed 30 kph.]
(a) Use a point queue model to propagate the vehicle flow with the time
step ∆t = 5 s. Plot the turning movement flows yABC , yABD , yBDE ,
and yBCE from t = 0 until the last vehicle has left the network. (q12
is the rate at which flow leaves the downstream end of link 1 to enter
the upstream end of link 2).
(b) Use the cell transmission model to propagate the vehicle flow with the
time step ∆t = 5 s. Plot the same flow rates as in the previous part.
(c) Use the link transmission model to propagate the vehicle flow with
the time step ∆t = 10 s. Plot the same flow rates as in the previous
part.
(d) Comment on any differences you see in these plots for the three flow
models.
31. [21] Assuming that a cell initially has between 0 and n̄ vehicles, show
that the cell transmission model formula (9.55) ensures that it will have
between 0 and n̄ vehicles at all future time steps, regardless of upstream or
downstream conditions.
32. [21] On a link, we must have N ↑ (t) ≥ N ↓ (t) at all time steps. Assuming
this is true for all time steps before t, show that the link transmission
model formulas (9.63) and (9.65) ensure this condition holds at t as well.
33. [42] Generalize the cell transmission model formula (9.55) to handle an
arbitrary piecewise-linear fundamental diagram (not necessarily triangular
or trapezoidal).
[Figure 9.35: the intersection of Lamar and Guadalupe.
SB Lamar: sending flow 300, proportion to Guadalupe 30%; NB Lamar receiving flow: 100.
NB Lamar: sending flow 300, proportion to Guadalupe 5%; SB Lamar receiving flow: 500.
Guadalupe: sending flow 200, proportion to NB Lamar 100%; SB Guadalupe receiving flow: 500.
Saturation flows: NB Lamar → Guadalupe: 50; NB Lamar → Lamar: 200;
SB Lamar → Guadalupe: 200; SB Lamar → Lamar: 200; Guadalupe → NB Lamar: 200.
Signal phases: 15 s and 45 s.]
34. [65] Generalize the link transmission model formulas (9.63) and (9.65) to
handle an arbitrary piecewise-linear fundamental diagram.
35. [35] Figure 9.35 represents the intersection of Lamar and Guadalupe,
showing sending and receiving flows, saturation flows, turning movement
proportions, and the signal timing plan (assume no lost time due to clear-
ance intervals or startup delay). Note that the receiving flow on north-
bound Lamar is quite low, because of congestion spilling back from a
nearby signal just downstream. No U-turns are allowed, and drivers may
not turn left from Guadalupe onto southbound Lamar.
(a) Find the transition flows yijk for all five turning movements at the
current time step, using the “smoothed signal” node model.
(b) The southbound receiving flow on Guadalupe is now reduced to 50
due to congestion further downstream. Find the updated transition
flow rates for all turning movements.
36. [51] Extend the “basic signal” node model to account for turns on red,
where a vehicle facing a red indication may make a turn in the direction
nearest to them (usually right-on-red in countries that drive on the right,
Time-Dependent Shortest Paths
This chapter discusses how travelers make choices when traveling in networks
whose state varies over time. Two specific choices are discussed: how drivers
choose a route when link costs are time-varying (Sections 10.1 and 10.2), and
how drivers choose a departure time (Section 10.3). This chapter is the comple-
ment of the previous one. In network loading, we assumed that the travelers’
choices were known, and we then determined the (time-varying) flow rates and
congestion pattern throughout the network. In this chapter, we take this con-
gestion pattern as known, and predict the choices that travelers would make
given this congestion pattern. In particular, by taking the congestion level as
fixed, we can focus the question on an individual traveler and do not need to
worry about changes in congestion based on these choices just yet.
388 CHAPTER 10. TIME-DEPENDENT SHORTEST PATHS
that is, a link’s travel time cannot decrease by more than ∆t in one time step.
In continuous time, if τij (t) is a piecewise differentiable function, we must have
dτij (t)/dt ≥ −1 , (10.3)
everywhere that τij (t) has a derivative, which expresses the same idea.
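When building travel time functions by hand, a quick numerical check of this condition can be useful. The helper below is our own illustration, not from the text: it verifies that the arrival time t + τ(t) is nondecreasing in the departure time t, which is equivalent to the FIFO condition.

```python
def satisfies_fifo(tau, times):
    """Numerically check the FIFO condition: the arrival time t + tau(t)
    must be nondecreasing in the departure time t (cf. equation (10.3))."""
    arrivals = [t + tau(t) for t in times]
    return all(a2 >= a1 for a1, a2 in zip(arrivals, arrivals[1:]))

ts = [i / 10 for i in range(100)]
print(satisfies_fifo(lambda t: max(5 - t / 2, 1), ts))  # slope -1/2: True
print(satisfies_fifo(lambda t: max(5 - 2 * t, 1), ts))  # slope -2: False
```

The first function decreases at rate 1/2, which is allowed; the second decreases at rate 2, so a later departure could overtake an earlier one.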
One can also distinguish time-dependent shortest path problems by whether
the departure time is fixed, whether the arrival time is fixed, or neither. In
the first case, the driver has already decided when they will leave, and wants to
find the route to the destination with minimum cost when leaving at that time,
regardless of the arrival time at the destination. In the second case, the arrival
time at the destination is known (perhaps the start of work), and the traveler
wants to find the route with minimum cost arriving at the destination at that
time (regardless of departure time). In the third case, both the departure and
arrival times are flexible (as with many shopping trips), and the traveler wants
to find a minimum cost route without regard to when they leave or arrive. In
this way, we can model the departure time choice decision simultaneously with
the route choice decision. Section 10.3 develops this approach further, showing
how we can incorporate penalty “costs” associated with different departure and
arrival times. Strictly speaking, one can imagine a fourth variant where both
departure and arrival time are fixed, but this problem is not always well-posed;
there may be no path from the origin to the destination with those exact depar-
ture and arrival times. In such cases, we can allow free departure and/or arrival
times, but severely penalize departure/arrival times that differ greatly from the
desired times.
In comparison with the static shortest path problem, the fixed departure
time variant is like the one origin-to-all destinations shortest path problem, and
the fixed arrival time variant is like the all origins-to-one destination shortest
path problem.
10.1. TIME-DEPENDENT SHORTEST PATH CONCEPTS 389
amount of time, all congestion will dissipate and travel times can be treated as constants
equal to free-flow time. All routing after this point can be done with a static shortest path
algorithm.
[Figure: a physical network with nodes i, j, k, l, m, and its time-expanded
version, shown in panels (a) and (b).]
shortest path problem from node i at time t0 corresponds exactly to solving the
fixed-departure time-dependent shortest path problem. Furthermore, if all link
travel times are positive, the time-expanded network is acyclic, with the time
labels forming a natural topological order. Shortest paths on acyclic networks
can be solved rather quickly, so this is a significant advantage. Even if some
link travel times are zero (as may occur with some artificial links or centroid
connectors), the time-expanded network remains acyclic unless there is a cycle
of zero-travel time links in the network; and if that is the case, it is often
possible to collapse the zero-travel time cycle into a single node. Time-expanded
networks can also unify some of the variants of the time-dependent shortest path
problem. If waiting is allowed at a node i, we can represent that with links of
the form (i : t, i : (t + 1)) connecting the same physical node to itself, a time
step later. The FIFO principle in a time-dependent network means that two
links connecting the same physical nodes will never cross (although they may
terminate at the same node).
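The construction just described can be sketched in a few lines, assuming integer travel times; the tuple representation (i, t) for time-expanded nodes and the function name are ours, not the book's.

```python
def build_time_expanded(links, tau, T, allow_waiting=False):
    """Build time-expanded links (i:t, j:t') from physical links with
    integer time-dependent travel times tau[i, j](t); optionally add
    waiting links (i:t, i:t+1) at every physical node."""
    nodes = {i for link in links for i in link}
    expanded = []
    for t in range(T):
        for (i, j) in links:
            t2 = t + tau[i, j](t)
            if t2 <= T:                      # stay within the time horizon
                expanded.append(((i, t), (j, t2)))
        if allow_waiting:
            expanded.extend(((i, t), (i, t + 1)) for i in nodes)
    return expanded

# two physical links with constant travel times 1 and 2, horizon T = 3
te = build_time_expanded([(1, 2), (2, 3)],
                         {(1, 2): lambda t: 1, (2, 3): lambda t: 2}, T=3)
print(len(te))  # 5 time-expanded links
```

With positive travel times, every link points from a smaller time label to a larger one, so the resulting network is acyclic with the time labels as a topological order, as noted above.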
A disadvantage is that the number of time intervals T may be quite large.
In dynamic traffic assignment, network loading often uses a time step on the
order of a few seconds. If this same time step is used for time-dependent shortest
paths, a typical planning period of a few hours means that T is in the thousands.
Given a physical network of thousands of nodes, the time-expanded network
can easily exceed a million nodes. Even though the time-expanded network is
acyclic, this is a substantial increase in the underlying network size, increasing
both computation time and amount of computer memory required. With clever
implementations, it is often possible to avoid explicitly generating the entire
[Figure 10.2: Bellman's principle illustrated. If the dashed path were a shorter
route from s to i2 , then it would also form a “shortcut” for the route from s to
d.]
[Figure 10.3: two scenarios on a four-node network (nodes 1–4). Three links
have constant cost c = 1; the cost of the remaining link depends on the entry
time, with c(2) = 5 and c(3) = 10.]
second algorithm finds the time-dependent shortest path from a single origin
and departure time to all destinations, while the third algorithm finds the time-
dependent shortest paths from a single origin and all departure times to a single
destination. There are many other possible variants of time-dependent shortest
path algorithms, some of which are explored in the exercises — in particular,
Exercise 6 asks you to develop a time-dependent shortest path algorithm for
all origins and departure times simultaneously, which often arises in dynamic
traffic assignment software. Nevertheless, the algorithms here should give the
general flavor of how they function. Which one of these algorithms is best, or
whether another variant is better, depends on the particular dynamic traffic
assignment implementation. All of these algorithms use labels with similar (or
even identical) names, but the meanings of these labels are slightly different in
each.
1. Initialize by setting Li ← ∞ for all nodes i, except for the origin, where
Lr ← t0 ; initialize all backnode labels qi ← −1.
2. Initialize the set of finalized nodes F ← ∅.
3. Choose an unfinalized node i with the lowest Li value. (At the first iter-
ation, this will be the origin r.)
4. Finalize node i by updating F ← F ∪ {i}.
5. For each link (i, j) ∈ Γ(i) such that Li + τij (Li ) is within the time horizon,
perform the following steps:
(a) Update Lj ← min {Lj , Li + τij (Li )}.
(b) If Lj changed in the previous step, update qj ← i.
6. If all nodes are finalized (F = N ), terminate. Otherwise, return to step 3.
As an example of this algorithm, consider the network in Figure 10.4. The
time-dependent travel times are shown in this figure. The FIFO assumption is
satisfied: for links (1, 3) and (2, 4) the travel times are constant; for links (1, 2)
and (3, 4) the travel times are increasing (so arriving earlier always means leaving
earlier); and for link (2, 3) the travel time is decreasing, but at a slow enough
rate that you cannot leave earlier by arriving later, cf. (10.3). Assume that the
initial departure time from node 1 is at t0 = 2, and that the time horizon is large
enough that Li + τij (Li ) is always within the time horizon whenever step 5 is
encountered. The steps of the algorithm are explained below, and summarized
in Table 10.1. This table shows the state of the algorithm just before step 3 is
performed, counting the first time through as iteration zero.
Initially, no nodes are finalized, all cost labels are initialized to ∞ (except
for the origin, which is assigned 2, the departure time), and all backnode labels
are initialized to −1. The unfinalized node with the least L value is node 1,
which is selected as the node i to scan. At the current time of 2, link (1, 2) has
a travel time of 6, and link (1, 3) has a travel time of 10. Following these links
would result in arrival at nodes 2 and 3 at times 8 and 12, respectively. Each of
these is less than their current values of ∞, so the cost and backnode labels are
adjusted accordingly. At the next iteration, node 2 is the unfinalized node with
the least L value, so i = 2. At time 8, link (2, 3) has a travel time of 1, and link
(2, 4) has a travel time of 5. Following these links, one would arrive at nodes
3 and 4 at times 9 and 13, respectively. Both of these values are less than the
current L values for these nodes, so their L and q labels are changed. At the
next iteration, node 3 is the unfinalized node with the least L value, so i = 3.
At this time, link (3, 4) would have a travel time of 4 1/2, and choosing it means
arriving at node 4 at time 13 1/2. This is greater than the current value (L4 = 13),
so no labels are adjusted. Finally, node 4 is chosen as the only unfinalized node.
Since it has no outgoing links (Γ(4) is empty), there is nothing to do in step 5,
and since all nodes are finalized the algorithm terminates.
At this point, we can trace back the shortest paths using the backnode
labels: the shortest paths to nodes 2, 3, and 4 are [1, 2], [1, 2, 3], and [1, 2, 4],
respectively; and following these paths one arrives at the nodes at times 8, 9,
and 13.
[Network diagram: nodes 1–4 with travel time functions τ12 (t) = 4 + t,
τ13 (t) = 10, τ23 (t) = max {5 − t/2, 1}, τ24 (t) = 5, and τ34 (t) = t/2.]
Figure 10.4: Network and travel time functions for demonstrating the FIFO
time-dependent shortest path algorithm.
Table 10.1: FIFO time-dependent shortest path algorithm for the network in
Figure 10.4, departing node 1 at t0 = 2.
Iteration F i L1 L2 L3 L4 q1 q2 q3 q4
0 ∅ — 2 ∞ ∞ ∞ −1 −1 −1 −1
1 {1} 1 2 8 12 ∞ −1 1 1 −1
2 {1, 2} 2 2 8 9 13 −1 1 2 2
3 {1, 2, 3} 3 2 8 9 13 −1 1 2 2
4 {1, 2, 3, 4} 4 2 8 9 13 −1 1 2 2
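The scan just traced can be implemented with a priority queue. Below is a minimal Python sketch (the data structures and names are ours), which reproduces the example of Figure 10.4:

```python
import heapq

def td_dijkstra(nodes, links, tau, origin, t0):
    """Label-setting time-dependent shortest paths on a FIFO network.
    links[i] lists the successors of i; tau[i, j] maps entry time to
    travel time.  Returns arrival-time labels L and backnode labels q."""
    L = {i: float('inf') for i in nodes}
    q = {i: -1 for i in nodes}
    L[origin] = t0
    finalized = set()
    heap = [(t0, origin)]
    while heap:
        Li, i = heapq.heappop(heap)
        if i in finalized:
            continue                          # stale heap entry
        finalized.add(i)                      # finalize node with least L
        for j in links.get(i, []):
            arrival = Li + tau[i, j](Li)
            if arrival < L[j]:                # relax link (i, j)
                L[j], q[j] = arrival, i
                heapq.heappush(heap, (arrival, j))
    return L, q

# network of Figure 10.4, departing node 1 at time t0 = 2
links = {1: [2, 3], 2: [3, 4], 3: [4]}
tau = {(1, 2): lambda t: 4 + t, (1, 3): lambda t: 10,
       (2, 3): lambda t: max(5 - t / 2, 1), (2, 4): lambda t: 5,
       (3, 4): lambda t: t / 2}
L, q = td_dijkstra([1, 2, 3, 4], links, tau, origin=1, t0=2)
print(L)  # arrival times 2, 8, 9, 13, as in Table 10.1
```

The FIFO assumption is what justifies finalizing the node with the least label, exactly as in the static Dijkstra's algorithm.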
1. Initialize q ← −1 and L ← ∞.
2. Set L_r^{t0} ← 0.
3. Initialize the current time t ← t0 .
4. For each time-expanded node i : t for which L_i^t < ∞, and for each time-
expanded link (i : t, j : t′ ), perform the following steps:
(a) Set L_j^{t′} ← min {L_j^{t′} , L_i^t + cij (t)}.
(b) If L_j^{t′} changed in the previous step, update q_j^{t′} ← i : t.
5. Increase t by one; if t equals the time horizon T , terminate. Otherwise,
return to step 4.
At the conclusion of this algorithm, we have the least-cost paths for each
possible arrival time at each destination. To find the least-cost path to a par-
ticular destination s (at any arrival time), you can consult the Lts labels at all
times t, and trace back the path for the arrival time t with the least Lts value.
This algorithm is demonstrated on the network in the right panel of Fig-
ure 10.3, and its progress is summarized in Table 10.2. The table shows the
state of the algorithm just before step 4 is executed. For brevity, this table
only reports Lti and qit labels for nodes and arrival times which are reachable
in the network (that is, i and t values for which Lti < ∞ at the end of the
algorithm). All other cost and backnode labels are at ∞ and −1 throughout
the entire duration of the algorithm.
Initially, all cost labels are set to ∞ and all backnode labels to −1, except
for the origin and departure time: L01 = 0. The algorithm then sets t = 0, and
10.2. TIME-DEPENDENT SHORTEST PATH ALGORITHMS 397
Table 10.2: Discrete time-dependent shortest path algorithm for the network in
the right panel of Figure 10.3, departing node 1 at t0 = 0.
t   L_1^0   L_2^1   L_3^2   L_3^3   L_4^3   L_4^4   q_1^0   q_2^1   q_3^2   q_3^3   q_4^3   q_4^4
0 0 ∞ ∞ ∞ ∞ ∞ −1 −1 −1 −1 −1 −1
1 0 1 ∞ 1 ∞ ∞ −1 1 −1 1 −1 −1
2 0 1 2 1 ∞ ∞ −1 1 2 1 −1 −1
3 0 1 2 1 7 ∞ −1 1 2 1 3 −1
4 0 1 2 1 7 11 −1 1 2 1 3 3
scans over all physical nodes which are reachable at this time.² Only node 1 can
be reached at this time, and the possible links are (1 : 0, 2 : 1) and (1 : 0, 3 : 3).
Following either link incurs a cost of 1, which is lower than the (infinite) values
of L12 and L33 , so the cost and backnode labels are updated.
The algorithm then sets t = 1. Only node 2 is reachable at this time, and
the only link is (2 : 1, 3 : 2). Following this link incurs a cost of 1; in addition
to the cost of 1 already involved in reaching node 2, this gives a cost of 2 for
arriving at node 3 at time 2. The cost and time labels for 3 : 2 are updated.
Since the costs and times are different, notice that arriving at node 3 at a later
time (3 vs. 2) incurs a lower cost (1 vs. 2). This is why we need to track labels
for different arrival times, unlike the algorithm in the previous section.
The next time step is t = 2. Only node 3 is reachable at this time (from the
path [1, 2, 3]), and the only link is (3 : 2, 4 : 3). Following this link incurs a cost
of 5, resulting in a total cost of 7, and the labels for 4 : 3 are updated. Time
t = 3 is next, and again only node 3 is reachable at this time — but from the
path [1, 3]. The only link is (3 : 3, 4 : 4), and following this link incurs a cost of
10, for a total cost of 11. Labels are updated for 4 : 4. There are no further label
changes in the algorithm (all nodes have already been scanned at all reachable
times), and it terminates as soon as t is increased to the time horizon.
After termination, the Lt3 labels show that we can reach node 3 either with
a cost of 1 (arriving at time 3) or a cost of 2 (arriving at time 2). The least-cost
path thus arrives at time 3, and it is [1, 3]. The Lt4 labels show that we can
reach node 4 either with a cost of 7 (arriving at time 3) or a cost of 11 (arriving
at time 4). The least-cost path arrives at time 3, and it is [1, 2, 3, 4]. (The
least-cost path to node 2 is [1, 2], since there is only one possible arrival time
there). Notice that the naïve form of Bellman’s principle is not satisfied: the
least-cost path to node 2 is not a subset of the least-cost path to node 3. This
was why we needed to keep track of different possible arrival times to nodes
— the least-cost path to node 3 arriving at time 2 is indeed a subset of the
least-cost path to node 4 arriving at time 3.
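The scan above is easy to express directly on time-expanded links. The following minimal sketch uses our own dictionary representation, with the right panel of Figure 10.3 as input:

```python
def td_all_arrivals(te_links, origin, t0, T):
    """Scan time-expanded nodes in order of increasing time, relaxing
    each outgoing time-expanded link (steps 4(a)-(b) in the text).
    te_links maps (i, t) -> list of ((j, t'), cost)."""
    INF = float('inf')
    L = {(origin, t0): 0}   # cost labels for reachable (node, time) pairs
    q = {}                  # backnode labels
    for t in range(t0, T):
        for (i, tt), succs in te_links.items():
            if tt != t or (i, tt) not in L:
                continue
            for (j, t2), cost in succs:
                if L[i, tt] + cost < L.get((j, t2), INF):
                    L[j, t2] = L[i, tt] + cost
                    q[j, t2] = (i, tt)
    return L, q

# right panel of Figure 10.3: time-expanded links with their costs
te_links = {(1, 0): [((2, 1), 1), ((3, 3), 1)],
            (2, 1): [((3, 2), 1)],
            (3, 2): [((4, 3), 5)],
            (3, 3): [((4, 4), 10)]}
L, q = td_all_arrivals(te_links, origin=1, t0=0, T=5)
print(L[4, 3], L[4, 4])  # costs 7 and 11, as in Table 10.2
```

Keeping one label per (node, arrival time) pair is exactly what restores Bellman's principle in this setting.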
2. Set Lr ← 0 for the super-origin r, and for each artificial link (r, r : t) set
Ltr ← cr,depart (t) and qrt ← r, where cr,depart (t) is the cost on the artificial
link for departing node r at time t.
3. Initialize the current time t to the earliest possible departure time (the
lowest t index for which an artificial link (r, r : t) exists).
4. For each time-expanded node i : t for which Lti < ∞, and for each time-
expanded link (i : t, j : t0 ), perform the following steps:
10.3. DEPARTURE TIME CHOICE 399
(a) Set L_j^{t′} ← min {L_j^{t′} , L_i^t + cij (t)}.
(b) If L_j^{t′} changed in the previous step, update q_j^{t′} ← i : t.
The algorithm initializes labels differently in step 2; step 3 starts at the earliest
possible departure time rather than the fixed time t0 ; and step 4 is expanded
to update labels both at adjacent time-expanded nodes and super-destinations.
All other steps work in the same way.
To demonstrate this algorithm, consider the network in Figure 10.6, where
the time horizon is T = 20 and the destination is node 4, and where the cost of a
link is equal to its travel time. Table 10.3 shows the cost and forwardnode labels
at the conclusion of the algorithm. Each iteration of the algorithm generates
one row of this table, starting with T = 1 and working up to T = 20. Whenever
Lti = ∞ is seen in Table 10.3, there is no way to reach the destination node
within the time horizon, if leaving node i at time t.
[Network diagram: nodes 1–4 with time-dependent travel time functions
1 + 2t, 5, max {10 − t, 0}, 5, and 1 + t on its links.]
Figure 10.6: Network and travel time functions for demonstrating the all de-
parture times time-dependent shortest path algorithm.
Table 10.3 also shows the labels for the super-origin and super-destination,
below the labels for the time-expanded nodes. The backnode label for the super-
destination tells us that the least-cost path arrives at node 4 at time 6; the q46
label tells us the least-cost path there comes through node 3 at time 1; q31 tells
us the least-cost path there comes through node 1 at time 0, and q10 brings
us to the super-origin. Therefore, we should depart the origin at time 0, and
follow the path [1, 3, 4] to arrive at the destination at time 6, with a total cost
of c4 = 6.
f (t) = α[t∗ − t]+ + β[t − t∗ ]+ , (10.4)
where [·]+ = max {·, 0} expresses the positive part of the quantity in brackets.
In (10.4), the first bracketed term then represents the amount by which the
traveler arrived early, compared to the preferred time, and the second bracketed
term represents the amount by which the traveler arrived late. The coefficients
α and β then weight these terms and convert them to cost units; generally
α < β to reflect the fact that arriving early by a certain amount of time, while
undesirable, is usually not as bad as arriving late by that same amount of time.
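The linear penalty just described is easy to state in code; the function name and the example coefficients below are ours, chosen only for illustration.

```python
def schedule_delay(t, t_star, alpha, beta):
    """Linear schedule delay cost: alpha penalizes each unit of early
    arrival, beta each unit of late arrival (typically alpha < beta)."""
    return alpha * max(t_star - t, 0) + beta * max(t - t_star, 0)

# preferred arrival time 10; late arrival twice as costly as early
print(schedule_delay(8, 10, 1, 2))   # 2 units early: cost 2
print(schedule_delay(12, 10, 1, 2))  # 2 units late: cost 4
```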
Another possible function is nonlinear, taking a form such as
In this function, the penalty associated with early or late arrival grows faster
and faster the farther the arrival time from the target. This may occur if, for
instance, being ten minutes late is more than ten times as bad as being one
minute late. Special cases of these functions arise when α = β (the function
becomes symmetric), or when α = 0 (there is no penalty for early arrival, but
only for late arrival). This function is also differentiable everywhere, in contrast
to equation (10.4) which is not differentiable at t∗ . For certain algorithms this
may be advantageous.
To each of these schedule delay functions f (t), one can add the cost of the
path arriving at time t (the sum of the link costs cij along the way) to yield the
total cost of travel. We assume that travelers will choose both the departure
time and the path to minimize this sum. The algorithm from Section 10.3.1 can
be used to find both this ideal departure time and the path. The only change is
that artificial links connecting time-expanded destination nodes (s : t) to super-
destination s now have a cost of f(t), rather than zero. At termination, the departure time t∗0 whose time-expanded origin node (r : t∗0) has the least cost label corresponds to the least total cost, and the forwardnode labels trace out the path.
To demonstrate this algorithm, again consider the network in Figure 10.6, but with the arrival time penalty function

f(t) = [10 − t]+ + 2 [t − 10]+ , (10.6)

which indicates that the traveler wishes to arrive at time 10, and that late arrival is twice as costly as early arrival.
As before, the time horizon is T = 20 and the destination is node 4, and
the cost of each link is equal to its travel time. Table 10.4 shows the cost and
backnode labels at the conclusion of the algorithm, which runs in exactly the
same way as before except that the artificial destination links (4 : t, 4) have a
cost equal to f (t). At the conclusion of the algorithm, we can identify the total
cost on the shortest path from node 1 to node 4, now including the penalty for
arriving early or late at the destination.
Table 10.4 also shows the labels for the super-origin and super-destination,
below the labels for the time-expanded nodes. The backnode label for the
super-destination tells us that the least-cost path arrives at node 4 at time 9;
the q49 label tells us the least-cost path there comes through node 3 at time 4;
q34 tells us the least-cost path there comes through node 1 at time 1, and q11
brings us to the super-origin. Therefore, we should depart the origin at time 1,
and follow the path [1, 3, 4] to arrive at the destination at time 9, with a total
cost of c4 = 9. Of this cost, 8 units are due to travel time (difference between
arrival and departure times), and 1 unit is due to the arrival time penalty from
equation (10.6) with t = 9.
Departing earlier, at t = 0, on the same path would reduce the travel time
to 6, but increase the early arrival penalty to 4. The total cost of leaving at
t = 0 is thus higher than departing one time step later. Leaving at t = 2 and
following the same path increases the travel time cost to 10. Since this means
arriving at time 12, there is a late penalty cost of 4 added, resulting in a total
travel cost of 14. By comparing all possible paths and departure times, you can
verify that it is impossible to have a total cost less than 9.
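The cost comparison above can be verified with a short calculation. In this sketch, the travel-time functions τ13(t) = 1 + 2t and τ34(t) = 5 are our reading of Figure 10.6, and f is the arrival penalty of equation (10.6).

```python
# Sketch: total cost (travel time plus schedule delay) of path [1, 3, 4] in
# the worked example, for several departure times. Link travel times are
# read off Figure 10.6: tau_13(t) = 1 + 2t and tau_34(t) = 5.
def f(t):
    """Arrival penalty (10.6): target time 10, late twice as bad as early."""
    return max(10 - t, 0) + 2 * max(t - 10, 0)

def total_cost(depart):
    t = depart + (1 + 2 * depart)   # arrival time at node 3
    arrive = t + 5                  # arrival time at node 4
    return (arrive - depart) + f(arrive)

for depart in (0, 1, 2):
    print(depart, total_cost(depart))   # 0 -> 10, 1 -> 9, 2 -> 14
```

This reproduces the numbers in the text: departing at time 1 gives the least total cost of 9, while departing at times 0 and 2 gives 10 and 14, respectively.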
Considering costs associated with departure time, rather than arrival time, is handled in essentially the same way, by assigning a nonzero cost to the artificial links connecting the super-origin r to the time-expanded nodes (r : t). It is thus possible to have only departure time penalties, only arrival time penalties, both, or neither, depending on whether the artificial origin links and artificial destination links have nonzero costs.
10.4 Dynamic A∗
Because the time-expanded network transforms the time-dependent shortest path problem into the classical, static shortest path problem from Section 2.4, any of the algorithms described there can be applied. The time-expanded network is acyclic, so it is fastest to use the algorithm from Section 2.4.1 — and indeed this is all that the algorithm in Section 10.2.2 is, using the time labels as a topological order.
Section 2.4.4 also presented the A∗ algorithm, which provides a single path from one origin to one destination, rather than all shortest paths from one origin to all destinations, or from all origins to one destination. By focusing on a single origin and destination, A∗ can often find a shortest path much faster than a one origin-to-all destinations algorithm. The tradeoff is that the algorithm has to be repeated many times, once for every OD pair, rather than once for every origin or destination. In static assignment, one-to-all or all-to-one algorithms are preferred because, in many cases, running A∗ for each OD pair takes more time than running a one-to-all algorithm for each origin.
In dynamic traffic assignment with fixed departure times, however, the num-
ber of “origins” in the time-expanded network is multiplied by the number of
departure times. If one were to write a full time-dependent OD matrix, the
number of entries in this matrix is very large: in a network with 1000 centroids
and 1000 time steps, there are 1 billion entries, one for every origin, every des-
tination, and departure time. This is much larger than the number of vehicles
that will be assigned, so almost every entry in this matrix will be zero. In such
cases, A∗ can work much better, only being applied to origins, destinations, and
departure times with a positive entry in the matrix.
As discussed in Section 2.4.4, an effective estimate for A∗ in traffic assign-
ment problems is to use the free-flow travel costs. As a preprocessing step at
the start of traffic assignment, you can use an all-to-one static shortest path
algorithm to find the least-cost travel cost gjs from every node j to every des-
tination s at free flow. For the remainder of the traffic assignment algorithm,
you can then use gjs as the estimates for A∗ . This is quite effective in practical
networks.
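This preprocessing can be sketched as follows. The network, travel-time functions, and function names are illustrative, not from the text: a backward (all-to-one) Dijkstra at free flow produces the estimates gjs, which then serve as the A∗ heuristic in a time-dependent search.

```python
import heapq

# Sketch of the preprocessing described above. All data are made up:
# free_flow gives the (static) free-flow cost of each link, and tau gives
# time-dependent travel times, which are never below free flow.
free_flow = {(1, 2): 2, (1, 3): 1, (2, 4): 2, (3, 4): 5}
tau = {(1, 2): lambda t: 2 + t, (1, 3): lambda t: 1,
       (2, 4): lambda t: 2, (3, 4): lambda t: 5 + t}

def backward_dijkstra(dest, links):
    """Free-flow distance g[j] from every node j to dest (the A* estimate)."""
    g = {dest: 0}
    heap = [(0, dest)]
    while heap:
        d, j = heapq.heappop(heap)
        if d > g.get(j, float("inf")):
            continue                        # stale heap entry
        for (i, k), c in links.items():     # scan links entering node j
            if k == j and d + c < g.get(i, float("inf")):
                g[i] = d + c
                heapq.heappush(heap, (d + c, i))
    return g

def astar(origin, dest, depart, tau, g):
    """Time-dependent A*: label = arrival time, priority = label + estimate."""
    arrival = {origin: depart}
    heap = [(depart + g[origin], origin)]
    while heap:
        _, i = heapq.heappop(heap)
        if i == dest:
            return arrival[dest]
        for (a, b), fn in tau.items():      # scan links leaving node i
            if a == i:
                t_new = arrival[i] + fn(arrival[i])
                if t_new < arrival.get(b, float("inf")):
                    arrival[b] = t_new
                    heapq.heappush(heap, (t_new + g[b], b))
    return None

g = backward_dijkstra(4, free_flow)
print(astar(1, 4, 0, tau, g))
```

Scanning the whole link dictionary at each step is wasteful; a real implementation would store forward and reverse adjacency lists, and would run `backward_dijkstra` once per destination before assignment begins.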
include stochastic link costs, where the link costs are drawn from some prob-
ability distribution and the path with minimum expected cost is sought. You
may recall that this problem was not too difficult to address in the static case.
However, if travel times are both time-dependent and stochastic, more care is
needed (Hall, 1986; Fu and Rilett, 1998), because Bellman’s principle need not
hold. Examples of algorithms to handle this issue are given in Hall (1986) and
Miller-Hooks and Mahmassani (2000).
10.6 Exercises
1. [13] Table 10.5 shows time-dependent costs on five links, for different entry
times. Which links have costs satisfying the FIFO principle?
2. [23] Prove that waiting is never beneficial in a FIFO network where link
costs are equal to travel time.
3. [34] Prove that there is an acyclic time-dependent shortest path in a FIFO
network where link costs are equal to travel time.
4. [53] In this chapter, we assumed it is impossible to travel beyond the time
horizon, by not creating time-expanded links if they arrive at a down-
stream node after T̄ . Another alternative is to assume that travel beyond
T̄ is permitted, but that travel times and costs stop changing after that
point and take constant values. (Perhaps free-flow times after the peak
period is over.)
(a) Modify the time-expanded network concept to handle this assump-
tion. (The network should remain finite.)
(b) Modify the algorithm in Section 10.2.2 to work in this setting.
5. [63] Another way to handle the time horizon is to assume that the link
travel times and costs are periodic with length T̄ . (For instance, T̄ may
be 24 hours, and so entering a link at t = 25 hours would be the same
as at t = 1 hour.) First repeat Exercise 4 with this assumption. Then
prove that the modified algorithm you create will converge to the correct
time-dependent shortest paths for any possible departure time.
7. [34] Show that the classical form of Bellman's principle (Section 2.4) holds in a FIFO, time-dependent network where the link costs are equal to the link travel time.
8. [32] Find the time-dependent shortest path from node 1 in the network shown in the left panel of Figure 10.3, departing at time 0.
9. [23] Tables 10.6 and 10.7 show the backnode and cost labels for a time-
dependent shortest path problem, where the destination is node 4, and
the time horizon is 6. (Figure 10.7 shows the network topology.) What is
the shortest path from node 1 to node 4, when departing at time 1?
10. [22] In the same network as Exercise 9, fill in the forwardnode and cost
labels for time 0. The link costs are as in Table 10.5, and waiting is not
allowed at nodes.
(a) Verify that the travel times satisfy the FIFO principle.
(b) Find the shortest paths between nodes 1 and 4 when departing at
t = 0, t = 2, and t = 10.
10.6. EXERCISES 407
[Figure: a four-node network for the surrounding exercise, with time-dependent link travel times 10, 5, max {8 − t/2, 1}, 2 + t, and 2 + t.]
(c) For what departure times would the travel times on paths [1, 2, 4] and
[1, 3, 4] be equal?
12. [23] Solve the time-dependent shortest path problem on the network in
Figure 10.6 using the algorithm in Section 10.2.1. Comment on the amount
of work needed to solve the problem using this algorithm, compared to the
way it was solved in the text.
13. [51] In the schedule delay equation (10.4), we typically assume 0 ≤ α ≤
1 ≤ β for peak-hour commute trips. What counterintuitive behavior would
occur if any of these three inequalities were violated?
14. [12] Find the optimal departure times and paths from nodes 2 and 3 in the
network in Figure 10.6, with equation (10.6) as the arrival time penalty.
(You can answer this question directly from Table 10.3.)
15. [36] Find the optimal departure times and paths from nodes 1, 2, and 3 in
the network in Figure 10.6, if the arrival time penalty function is changed
to f (t) = ([12 − t]+ )2 + 2([t − 12]+ )2 .
16. [0] According to the penalty function in Exercise 15, what is the desired
arrival time?
17. [88] Implement all of the algorithms described in this chapter, and test
them on transportation networks with different characteristics (number of
410 CHAPTER 11. DYNAMIC USER EQUILIBRIUM
The main advantage of this notation is that the behavior is clear: by tracking some auxiliary variables in the network loading (as described in Section 11.1.3), at any point in time we can see exactly which vehicles are on which links, and can trace these to the paths the vehicles must follow.
directly with the paths found in a time-dependent shortest path algorithm. A
disadvantage of this approach is that the number of paths grows exponentially
with the network size. In practice, “column generation” schemes are popular,
in which paths π are only identified when found by a shortest path algorithm,
and hπt values only need be calculated and stored for paths to which travelers
have been assigned.
This is not the only possible way to represent travel choices. A link-based
representation requires fewer variables. For each turning movement [h, i, j], each destination s, and each time interval t, let α^t_{hij,s} denote the proportion of travelers arriving at node i, via approach (h, i), during the t-th time interval, who will exit onto link (i, j) en route to destination s. This approach mimics the
flow splitting rules often found in traffic microsimulation software. The number
of variables required by this representation grows with the network size, but
not at the exponential rate required by a path-based approach. One can show
that the path-based and link-based representation of route choice are equivalent
in the sense that there exist α values which represent the same network state
as any feasible set of H values, regardless of the network loading model, and
that one can identify the H values corresponding to a given set of α values
(see Exercise 1). The primary disadvantage of this representation is that the
behavior is less clear: one cannot trace the path of any vehicle throughout the
network deterministically (although one can do so stochastically, making a turn
11.1. TOWARDS DYNAMIC USER EQUILIBRIUM 411
Figure 11.1: Iterative dynamic traffic assignment framework, with details from previous chapters.
Figure 11.2: Obtaining travel times from plots of the cumulative counts N↑ and N↓.
ous functions of t. If this is the case, we can define inverse functions Tij↑ (n) and
Tij↓ (n), respectively giving the times when the n-th vehicle entered the link and
left the link. The travel time for the n-th vehicle is then the difference between
these: Tij↓(n) − Tij↑(n). Graphically, this can be seen as the horizontal difference between the upstream and downstream N curves (Figure 11.2). Then, to find the travel time for a vehicle entering the link at time t, we simply evaluate this difference for n = Nij↑(t):

τij(t) = Tij↓(Nij↑(t)) − t . (11.2)
We must ensure that τij (t) is always at least equal to the free-flow travel
time on the link. The danger is illustrated in Figure 11.3, where no vehicles
enter or leave the link for an extended period of time. The horizontal
distance between the N ↑ and N ↓ curves at their closest point is small (the
dashed line in the figure), but this does not reflect the actual travel time
of any vehicle. In reality, a vehicle entering the link when it is completely
empty, and when there is no downstream bottleneck, would experience
free-flow conditions on the link.
If the link has no outflow for an interval of time, then Nij↓(t) will be constant over that interval. This frequently happens with traffic signals. In this case, there are multiple values of time where Nij↓(t) = n. The correct way to resolve this is to define Tij↓(n) to be the earliest time for which Nij↓(t) = n:

Tij↓(n) = min_t { t : Nij↓(t) = n } . (11.3)

Figure 11.3: Cumulative counts are flat when there is no inflow or outflow.
In discrete time, the time at which the n-th vehicle departs may not line up with a multiple of ∆t, so there may be no known point where Nij↓(t) is exactly equal to n. In this case, it is appropriate to interpolate between the last time point where Nij↓(t) < n and the first time point where Nij↓(t) ≥ n.
With these modifications to how Tij↓ is calculated, formula (11.2) can be used to calculate the travel time on each link for any entry time.
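Both fixes can be sketched together. The cumulative counts below are illustrative; `T_down` interpolates between time points as just described, and the result is floored at an assumed free-flow time.

```python
import bisect

# Sketch of formula (11.2) with the two fixes above. N_up and N_down are
# cumulative counts sampled at integer times t = 0..6; all numbers made up.
N_up   = [0, 10, 20, 30, 40, 40, 40]   # vehicles entered by time t
N_down = [0,  0, 10, 20, 30, 40, 40]   # vehicles left by time t
FREE_FLOW = 1.0                         # assumed free-flow travel time

def T_down(n, N_down):
    """Earliest (interpolated) time at which N_down reaches n vehicles."""
    t_hi = bisect.bisect_left(N_down, n)        # first t with N_down[t] >= n
    if t_hi == 0:
        return 0.0
    t_lo = t_hi - 1                             # last t with N_down[t] < n
    lo, hi = N_down[t_lo], N_down[t_hi]
    return t_lo + (n - lo) / (hi - lo)          # linear interpolation

def travel_time(t, N_up, N_down):
    """tau(t) = T_down(N_up(t)) - t, but at least the free-flow time."""
    return max(T_down(N_up[t], N_down) - t, FREE_FLOW)

print(travel_time(1, N_up, N_down))   # vehicle 10 enters at t = 1, leaves at t = 2
```

Here the vehicle entering at t = 1 is number 10; the downstream count reaches 10 at t = 2, so the travel time is 1 time unit.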
The travel time on a path π for a traveler departing at time t, denoted C π (t),
can then be calculated sequentially. If the path is π = [r, i1 , i2 , . . . , s], then the
traveler departs origin r at time t, and arrives at node i1 at time t + τri1 (t). The
travel time on link (i1 , i2 ) is then τi1 i2 (t + τri1 (t)), so the traveler arrives at i2
at time t + τri1 (t) + τi1 i2 (t + τri1 (t)), and so forth. Writing out this formula can
be a bit cumbersome, but calculating it in practice is quite simple: it is nothing
more than accumulating the travel times of the links in the path, keeping track
of the time at which each link is entered.
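This accumulation can be sketched in a few lines; the travel-time functions are illustrative, matching the earlier worked example's path [1, 3, 4].

```python
# Sketch of the sequential calculation above: accumulate link travel times
# along a path, keeping track of the clock time at which each link is entered.
tau = {(1, 3): lambda t: 1 + 2 * t, (3, 4): lambda t: 5}   # illustrative

def path_travel_time(path, depart, tau):
    clock = depart
    for link in zip(path, path[1:]):    # consecutive node pairs on the path
        clock += tau[link](clock)       # enter the next link at current clock
    return clock - depart

print(path_travel_time([1, 3, 4], 1, tau))   # departs at 1, arrives at 9: 8
```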
[Figure: upstream and downstream cumulative counts disaggregated by path; the vehicles making up the sending flow S(t) entered the link between times t1 and t2.]
That said, the network loading does in fact need phij (t) values for each
turning movement [h, i, j] at a diverge or general intersection, for each time
period t. These are obtained by examining the vehicles comprising the sending
flow Shi (t), and calculating the fraction of these vehicles whose path includes
link (i, j) as the next link. One way to do this is to disaggregate the cumulative
count values N ↑ (t) and N ↓ (t) calculated at the upstream and downstream ends
of each link. For each path π in the network, and for every link (h, i) and time
t, define Nhi,π↑(t) and Nhi,π↓(t) to be the total number of vehicles using path π which have respectively entered and left link (h, i) by time t. Clearly we have

Nhi↑(t) = Σ_{π∈Π} Nhi,π↑(t) ,   Nhi↓(t) = Σ_{π∈Π} Nhi,π↓(t)   ∀(h, i) ∈ A, t . (11.4)
Then, the sending flow for each link and time interval can be disaggregated
in the same way, with Shi,π (t) defined as the number of vehicles in the sending
flow which are using path π. Assume that we are calculating sending flow for
link (h, i) at time t, and have determined Shi(t). At this point in time, the total number of vehicles which have left this link is Nhi↓(t). Therefore, the vehicles in the sending flow are numbered in the range Nhi↓(t) to Nhi↓(t) + Shi(t). Using the inverse functions from (11.2), the times at which these vehicles entered link (h, i) are in the range T↑(Nhi↓(t)) to T↑(Nhi↓(t) + Shi(t)). Denote these two times by t1 and t2, respectively. Then the disaggregate sending flow Shi,π(t) is the number of vehicles on path π which entered link (h, i) between t1 and t2, that is,

Shi,π(t) = Nhi,π↑(t2) − Nhi,π↑(t1) . (11.5)
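A sketch of this disaggregation, with made-up counts (the path names "pi1" and "pi2" are illustrative):

```python
# Sketch of equations (11.4)-(11.5): upstream cumulative counts are stored
# per path, and the sending flow is split by counting which vehicles entered
# the link between t1 and t2. All numbers are illustrative.
N_up_path = {                     # N_up restricted to each path, t = 0..4
    "pi1": [0, 5, 10, 15, 20],
    "pi2": [0, 5,  5, 10, 15],
}

def disaggregate_sending_flow(S, t1, t2, N_up_path):
    """S_{hi,pi}(t) = N_up_pi(t2) - N_up_pi(t1), for integer t1 and t2."""
    shares = {pi: N[t2] - N[t1] for pi, N in N_up_path.items()}
    assert sum(shares.values()) == S    # disaggregation must add up, as in (11.4)
    return shares

print(disaggregate_sending_flow(15, 1, 3, N_up_path))   # {'pi1': 10, 'pi2': 5}
```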
node (h, i) at time t and immediately continue onto link (i, j). Then

phij(t) = ( Σ_{π∈Π[h,i,j](t)} Shi,π(t) ) / Shi(t) (11.6)

for use in diverge or general intersection node models. Then, after a node model produces the actual flows yhij(t) between links, the disaggregated cumulative counts must be updated as well.
Let yhi(t) = Σ_{(i,j)∈Γ(i)} yhij(t) be the total flow which leaves link (h, i) during time t. Then all vehicles which entered the link between t1 and Thi↑(Nhi↓(t) + yhi(t)) can leave the link (use t3 to denote this latter time), so we update the downstream disaggregated counts to

Nhi,π↓(t + ∆t) = Nhi,π↑(t3) . (11.7)
T = N(H) . (11.9)
cost (travel time plus schedule delay); or some other principle. In general, there
may be multiple H matrices which satisfy this principle, as when multiple paths
have the same, minimal travel time, in which case B actually denotes a set of
H matrices. So, we can express the behavioral consistency rule by
H ∈ B(T ) . (11.10)
We are now in a position to define the solution to the dynamic traffic assign-
ment problem as a fixed point problem: the path flow matrix H is a dynamic
user equilibrium if
H ∈ B(N (H)) . (11.11)
That is, the path flow matrix must be consistent with the driver behavior as-
sumptions, when the travel times are obtained from that same path flow matrix.
The most common behavioral rule is that departure times are fixed, but only
minimum-travel time paths will be used. In this case the principle can be stated
more intuitively as all used paths connecting the same origin to the same des-
tination at the same departure time must have equal and minimal travel time.
If departure times can be varied to minimize total cost, then the principle can
be stated as all used paths connecting the same origin to the same destination
have the same total cost, regardless of departure time.
The dynamic user equilibrium solution can also be stated as a solution to a
variational inequality. For the case of fixed departure times, the set of feasible
path flow matrices is given by
H̄ = { H ∈ R^{T̄×|Π|}_+ : Σ_{π∈Π^{rs}} h^π_t = d^{rs}_t ∀(r, s) ∈ Z², t } , (11.12)
that is, matrices with nonnegative entries where the sum of all path flows connecting the origin r to destination s at departure time t is the corresponding value in the time-dependent OD matrix d^{rs}_t. Then, the dynamic user equilibrium solution Ĥ satisfies
N (Ĥ) · (Ĥ − H) ≤ 0 ∀H ∈ H̄ , (11.13)
where the product · is the Frobenius product, obtained by treating the matrices
as vectors and calculating their dot product.
In the case of departure time choice, we can define S = S(H) to be the
matrix of total costs for each path and departure time, where S is obtained by
composing the schedule delay function (10.4) with the network loading mapping
N. In this case, the set of feasible path flow matrices is given by

H̃ = { H ∈ R^{T̄×|Π|}_+ : Σ_{π∈Π^{rs}} Σ_t h^π_t = d^{rs} ∀(r, s) ∈ Z² } , (11.14)

where d^{rs} is the OD matrix giving total flows between each origin and destination throughout the analysis period. The variational inequality in this case is

S(Ĥ) · (Ĥ − H) ≤ 0 ∀H ∈ H̃ . (11.15)
11.2. SOLVING FOR DYNAMIC EQUILIBRIUM 417
This is often known as the shortest path travel time. This is contrasted with
the total system travel time, which is the actual total travel time spent by all
travelers on the network:
TSTT = Σ_{π∈Π} Σ_t h^π_t τ^π_t = H · T . (11.17)
[Figure: the current solution H surrounded by a set of target matrices H1∗, . . . , H7∗, as maintained by simplicial decomposition.]
cost. While this likely decreases the number of iterations required to reach a
small gap, the computation at each iteration is increased. In particular, each
λ value tested requires an entire network loading step to determine the corre-
sponding gap, which can be a significant time investment. It is also possible to
vary the λ value for different path flows and departure times. Some popular
heuristics are to use larger λ values for OD pairs which are further from equi-
librium, or to vary λ for different departure times — since travel times for later
departure times depend on the path choices for earlier departure times, it may
not make sense to invest much effort in equilibrating later departure times until
earlier ones have stabilized.
inequality
N (Ĥ) · (Ĥ − Hi∗ ) ≤ 0 ∀Hi∗ ∈ H . (11.21)
This is different from the variational inequality (11.13) because the only possible
choices for H are the matrices in the set H, rather than any of the feasible H
matrices satisfying (11.12).
Effectively, H is a restricted equilibrium if none of the targets in H lead
to improving directions in the sense that the total system travel time would be
reduced by moving to some Hi∗ ∈ H while fixing the travel times at their current
values.
At a high level, simplicial decomposition works by iterating between adding
a new target matrix to H, and then finding a restricted equilibrium using the
current matrices in H. In practice, it is too expensive to exactly find a restricted
equilibrium at each iteration. Instead, several “inner iteration” steps are taken
to move towards a restricted equilibrium with the current set H before looking
to add another target. In each inner iteration, the current solution H is adjusted
to H + µ∆H, where µ is a step size and ∆H is a direction which moves toward
restricted equilibrium. One good choice for this direction is

∆H = ( Σ_{Hi∗∈H} [N(H) · (H − Hi∗)]+ (Hi∗ − H) ) / ( Σ_{Hi∗∈H} [N(H) · (H − Hi∗)]+ ) , (11.22)
which is similar to the average excess cost, but instead of using the shortest
path travel time uses the best available target vector in H.
Unlike the version of simplicial decomposition used for the static traffic as-
signment problem (Chapter 6), there is no guarantee that a sufficiently small
choice of µ will result in a reduction of restricted average excess cost. However,
in practice, this rule seems to work acceptably.
1 Those familiar with nonlinear optimization may see parallels between this and the Armijo
rule.
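The direction (11.22) can be sketched with NumPy. The matrices are illustrative, and the network loading N(H) is frozen as a constant travel-time matrix rather than recomputed:

```python
import numpy as np

# Sketch of the direction (11.22): targets whose Frobenius product
# N(H) . (H - Hi*) is positive (improving at fixed travel times) pull the
# solution toward themselves, weighted by how much they improve.
def direction(H, targets, times):
    """times plays the role of N(H), held fixed at the current solution."""
    num = np.zeros_like(H, dtype=float)
    den = 0.0
    for Hi in targets:
        weight = max(np.sum(times * (H - Hi)), 0.0)   # [.]^+ of the product
        num += weight * (Hi - H)
        den += weight
    return num / den if den > 0 else np.zeros_like(H, dtype=float)

H = np.array([[6.0, 4.0]])                 # current path flows (illustrative)
targets = [np.array([[10.0, 0.0]]), np.array([[0.0, 10.0]])]
times = np.array([[2.0, 3.0]])             # current travel times N(H)
print(direction(H, targets, times))        # shifts flow toward the cheaper path
```

In this tiny example only the first target is improving, so the direction moves flow toward path 1, the path with the lower current travel time.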
Putting all of this together, for simplicial decomposition, step 5 of the dy-
namic traffic assignment algorithm in Section 11.2.1 involves performing all of
these steps as a “subproblem”:
∆h = −g(0)/g′(0) . (11.25)
The numerator g(0) is simply the difference in travel times between the two
paths before any flow is shifted. To calculate the denominator, we need to know
how the difference in travel times will change as flow is shifted from π1 to π2 .
Computing the derivative of (11.24) with the help of the chain rule, we have

g′(0) = ( ∂t1/∂h2 + ∂t2/∂h1 ) − ( ∂t1/∂h1 + ∂t2/∂h2 ) . (11.26)
This formula is not easy to evaluate exactly, but we can make a few approxima-
tions. The first term in (11.26) reflects the impact the flow on one path has on
the other path’s travel time, while the second term reflects the impact the flow
on one path has on its own travel time. Typically, we would expect the second
effect will be larger in magnitude than the first effect (although exceptions do
exist). So, as a first step we can approximate the derivative as
g′(0) ≈ − ( ∂t1/∂h1 + ∂t2/∂h2 ) . (11.27)
The next task is to calculate the derivative of a path travel time with re-
spect to flow along this path. Unlike in static traffic assignment, there is no
closed-form expression mapping flows to travel times, but rather a network
loading procedure must be used. For networks involving triangular fundamen-
tal diagrams (which can include the point queue model, cf. Section 9.5.3), the
derivative of the travel time on a single link (i, j) with respect to flow entering
at time t can be divided into two cases. In the first case, suppose that the
vehicle entering link (i, j) at time t exits at t0 = t + τij (t), and that the link is
demand-constrained at that time (that is, all of the sending flow Sij (t0 ) is able
to move). In this case, even if a marginal unit of flow is added at this time,
the link will remain demand-constrained, and all of the flow will still be able
to move. No additional delay would accrue, so the derivative of the link travel
time is

dτij(t)/dh = 0 . (11.28)
In the second case, suppose that at time t′ flows leaving link (i, j) are constrained either by the capacity of the link or by the receiving flow of a downstream link. In this case, not all of the flow is able to depart the link, and a queue has formed at the downstream end of (i, j). The clearance rate for this queue is given by yij(t′), so the incremental delay added by one more vehicle joining the queue is 1/yij(t′), and

dτij(t)/dh = 1/yij(t′) . (11.29)

(The denominator of this expression cannot be zero, since t′ is the time at which a vehicle is leaving the link.)
Therefore, we can calculate the derivative of an entire path’s travel time
inductively. Assume that π = [r, i1, i2, . . . , s], and that ti gives the time at which the traveler arrives at each node i in the path. Then dtπ/dhπ is obtained by summing expressions (11.28) and (11.29) for the uncongested and congested links in this path, taking care to use the correct time indices:

dtπ/dhπ ≈ Σ_{(i,j)∈π} [yij(tj) < Sij(tj)] · 1/yij(tj) , (11.30)
where the brackets are an indicator function equal to one if the statement in
brackets is true, and zero if false.
We are almost ready to give the formula for (11.27), but we can make one
more improvement to the approximation. Two paths may share a certain number of links in common. Define the divergence node of two paths to be the node where the paths diverge for the first time (Figure 11.6).
node, the two paths include the same links, so there will be no effect of shifting
flow from one path to the other. Therefore, the sum in (11.30) need only be
taken for links beyond the divergence node. Even if the paths rejoin at a later
point, they may do so at different times, so we cannot say that there is no effect
of shifting flow between common links downstream of a divergence node. So, if
d(π1 , π2 ) is the divergence node, then we have
dtπ/dhπ ≈ Σ_{(i,j)∈π : (i,j)>d(π1,π2)} [yij(tj) < Sij(tj)] · 1/yij(tj) , (11.31)
where the “>” notation for links indicates links downstream of a node in a path.
This expression can then be used in (11.27) and (11.25) to find the approximate
amount of flow which needs to be shifted from π1 to π2 to equalize their costs.
The procedure for updating H can now be described as follows:
5. Subproblem: Perform Newton updates between all paths and the target
path found.
(a) For each path π with positive flow, let ρ be the least-travel time path
connecting the same OD pair and departure time.
(b) Calculate ∆h using equations (11.31), (11.27), and (11.25), using π
as the first path and ρ as the second.
(c) If ∆h < hπ , then update hρ ← hρ +∆h and hπ ← hπ −∆h. Otherwise,
set hρ ← hρ + hπ and hπ ← 0.
As stated, the path travel times and derivatives are not updated after each shift
is performed. Doing so would increase accuracy, but greatly increase computa-
tion time since a complete network loading would have to be performed.
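The Newton update in this subproblem can be sketched as follows. All link data are illustrative; each path is summarized by (y, S) pairs for the links past the divergence node, as in (11.31).

```python
# Sketch of the Newton update (11.25)-(11.31): shift flow from path pi1 to a
# cheaper path rho. Each path is described by a list of (y, S) pairs, the
# exit flow and sending flow of its links past the divergence node; the
# congested links (y < S) contribute 1/y to the derivative. Data made up.
def path_derivative(links_past_divergence):
    """Equation (11.31): sum of 1/y over congested links."""
    return sum(1.0 / y for y, S in links_past_divergence if y < S)

def newton_shift(cost1, cost2, links1, links2, h1):
    g0 = cost1 - cost2                         # travel time difference, g(0)
    gprime0 = -(path_derivative(links1) + path_derivative(links2))   # (11.27)
    dh = -g0 / gprime0                         # Newton step (11.25)
    return min(dh, h1)                         # cannot shift more than h1

# pi1 has one congested link (y = 10 < S = 15) past the divergence; rho has none
links1 = [(10.0, 15.0), (20.0, 20.0)]
links2 = [(30.0, 30.0)]
print(newton_shift(12.0, 8.0, links1, links2, h1=50.0))
```

With these numbers, g(0) = 4 and g′(0) = −1/10, so the rule shifts 40 vehicles from the costlier path, which is within the available flow of 50.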
11.3. PROPERTIES OF DYNAMIC EQUILIBRIA 425
N(Ĥ) · (Ĥ − H) ≤ 0 ∀H ∈ H̄ . (11.32)
Figure 11.7: Network for example with no equilibrium solution. Link travel times shown; two links carry a toll of 0.5.
must yield to flow on [C, 1, D]. This can be expressed with the following node
model:
yC1D = min {SC1 , R1D } (11.33)
yA12 = 0 if yC1D > 0, and yA12 = min {SA1 , R12} otherwise. (11.34)
Likewise, for Node 2, all flow on the movement [C, 2, D] must yield to flow on
[1, 2, B]. That is, traffic from C to D has priority at node 1; and traffic from
A to B has priority at node 2. These relationships are indicated on the figure
with triangles next to the approach that must yield.
Each OD pair has one unit of demand, which departs during the same time interval. We now show that there is no assignment of demand to paths that satisfies the equilibrium principle, which requires any used path to have minimal travel time. We begin by showing that any equilibrium solution cannot split demand among multiple paths; each OD pair must place all of its flow on just one path or the other. With the given demand, the cost on the bottom path is either 3 (if there is no flow on the left path from C to D) or 4 (if there is); in either case, the cost differs from that on the top path (3.5). All demand from A to B will thus use either the top path or the bottom path, whichever has the lower cost. Similarly, for the OD pair from C to D, the cost on the right path is either 3 or 4, whereas that on the left is either 3.5 or 4.5. Both paths cannot have equal travel time, so at most one path can be used.
Thus, there are only four solutions that can possibly be equilibria: all flow from A to B must be on either the top path or the bottom path, and all flow from C to D must be on either the left path or the right path. First consider the case when all flow from A to B is on the top path. The left path would then have a cost of 3.5, and the right path a cost of 3. So all flow from C to D must be on the right path. But then the travel times on the top and bottom paths are
Figure 11.8: Nie's merge network. Link 1 has capacity 80; links 2, 3, and 4 have capacity 40. All capacity values in vehicles per minute.
vehicles per minute to 40 vehicles per minute, and where drivers either merge
early (choosing the top lane at the diverge point) or late (waiting until the lane
drop itself). The inflow rate from upstream is 80 vehicles per minute. Since
the capacity of the downstream exit is only 40 vehicles per minute, the excess
vehicles will form one or more queues in this network.
To simplify the analysis, assume that the time horizon is short enough that
none of these queues will grow to encompass the entire length of the link. With
this assumption, queue spillback can be ignored, and we can focus on the issue
of route choice. Furthermore, under these assumptions, S1 is always 80 vehicles
per minute, and R2 , R3 , and R4 are always 40 vehicles per minute. We work
with a timestep of one minute, and will express all flow quantities in vehicles
per minute.2
In this network, there is only one choice of routes (the top or bottom link
at the diverge). We will restrict our attention to path flow matrices H where
the proportions of vehicles choosing the top and bottom routes are the same
for all departure times, and show that there are multiple equilibrium solutions
even when limiting our attention to such H matrices. There may be still more
equilibria where the proportion of vehicles choosing each path varies over time.
Therefore, the ratio h124 /h134 is constant for all time intervals, which means
that the splitting fractions p12 and p13 at the diverge point are also constant,
and equal to h124 /40 and h134 /40, respectively.
Using the diverge and merge models from Section 9.2.3, we can analyze the
queue lengths and travel times on each link as a function of these splitting fractions.
2 This is larger than what is typically used in practice, but simplifies the calculations and
It turns out that three distinct dynamic user equilibria exist:
Equilibrium I: If p12 = 1 and p13 = 0, then the diverge formula (9.17) gives
the proportion of flow which can move as
φ = min { 1, 40/80, 40/0 } = 1/2 . (11.35)
Therefore, the transition flows at each time step are y12 = 40 and y13 = 0.
Since y12 < S1 , a queue will form on the upstream link, and its sending
flow will remain at S1 = 80. Therefore, once the first vehicles reach the
merge, we will have S2 = 40 and S3 = 0. Applying these proportions,
together with R4 = 40, the merge formula gives y24 = 40 and y34 = 0.
There will be no queue on link (2,4), so both link (2,4) and link (3,4) are
at free-flow. This solution is an equilibrium: the two paths through the
network only differ by the choice of link 2 or link 3, and both of these have
the same travel time since they are both at free-flow. In physical terms,
this corresponds to all drivers choosing to merge early; the queue forms
upstream of the point where everyone chooses to merge, and there is no
congestion downstream.
Equilibrium II: If p12 = 0 and p13 = 1, the analysis is symmetric to Equilibrium
I, with the roles of links 2 and 3 reversed: a queue again forms on the
upstream link, both downstream links remain at free-flow, and the travel
times on the two paths are equal. In physical terms, this corresponds to
all drivers waiting until the lane drop itself to merge.
Equilibrium III: If p12 = p13 = 1/2, drivers wish to split equally between
the top and bottom links. The proportion of flow which can move at the
diverge is
\[ \phi = \min\left\{ 1, \frac{40}{40}, \frac{40}{40} \right\} = 1, \tag{11.36} \]
so all vehicles can move and there is no queue at the diverge: y12 = y13 =
40. Once these vehicles reach the merge, we will have S2 = S3 = 40 and
R4 = 40. The merge formula (9.12) then gives y24 = y34 = 20, so queues
will form on both merge links. However, since the inflow and outflow rates
of links 2 and 3 are identical, the queues will have identical lengths, and
so the travel times on these links will again be identical. Therefore, this
solution satisfies the principle of dynamic user equilibrium. In physical
terms, this is the case when no drivers change lanes until the lane actually
ends, and queueing occurs at the merge point.
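These transition-flow calculations are easy to check with a short script. The sketch below is ours, not from the text: it implements the diverge rule in the spirit of formula (9.17) together with a proportional merge, which agrees with the book's merge formula (9.12) in this example because the two merging links have equal capacity.

```python
def diverge_flows(S_up, R_down, p):
    """Diverge rule: phi is the largest fraction of the upstream sending
    flow that can move without exceeding any downstream receiving flow."""
    phi = 1.0
    for R, frac in zip(R_down, p):
        if frac > 0:
            phi = min(phi, R / (frac * S_up))
    return [frac * phi * S_up for frac in p]

def merge_flows(S_up, R_down):
    """Merge rule: if total sending flow exceeds the receiving flow,
    allocate the receiving flow proportionally to the sending flows."""
    total = sum(S_up)
    if total <= R_down:
        return list(S_up)
    return [R_down * s / total for s in S_up]

# Equilibrium I: everyone chooses the top link (p12 = 1, p13 = 0)
print(diverge_flows(80, [40, 40], [1.0, 0.0]))  # [40.0, 0.0]

# Equilibrium III: equal split (p12 = p13 = 1/2); no queue at the diverge
print(diverge_flows(80, [40, 40], [0.5, 0.5]))  # [40.0, 40.0]

# Once both links send 40 veh/min into the merge with R4 = 40:
print(merge_flows([40, 40], 40))                # [20.0, 20.0]
```
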
Figure 11.10: Travel time on the top (path [1,2,4]) and bottom (path [1,3,4])
routes as a function of the splitting fraction p12, with the three equilibrium
solutions marked.
We can plot the travel time on each path as a function of the single splitting
fraction p12 (this also determines values of the other fraction, since p13 = 1 − p12 ).
This is shown in Figure 11.10,
and the three equilibria correspond to the crossing points of the paths. Note
that all three of the equilibria share the same equilibrium travel time. Although
the queues in Equilibrium III are only half as long as those in Equilibria I and
II, being split between two links, the outflow rates of these queues are also only
half as great (20 veh/min instead of 40 veh/min).
This example shows that the dynamic user equilibrium solution is not unique,
even in an extremely simple network. This nonuniqueness has practical conse-
quences. The effect of a potential improvement to link 2 or 3 will depend
crucially on the number of travelers on the link, which varies widely across the
three equilibria. One criterion for distinguishing which of these equilibria is
more likely is stability, which explores what would happen if the equilibria were
slightly perturbed. As shown in Figure 11.10, if one begins at Equilibrium I and
perturbs p13 to a small value (reducing p12 by the same amount), we see that
the travel time on the top path increases, whereas the travel time on the bottom
path decreases. It may seem odd that increasing flow on the bottom path
decreases its travel time — what is happening is that congestion at the diverge
decreases (lowering the travel time of both paths), but congestion forms on the
top link at the merge (increasing the travel time just of the top path). Super-
imposing these effects produces the result in Figure 11.10. As a result, travelers
will switch from the (slower) top path to the (faster) bottom one, moving us
even further away from Equilibrium I. Therefore, Equilibrium I is not stable.
The same analysis holds for Equilibrium II.
Equilibrium III, on the other hand, is stable. If a few travelers switch from
the top to the bottom path, the travel time on the top path decreases and that on
the bottom path increases. Therefore, travelers will tend to switch back to the
top path, restoring the equilibrium. The same holds if travelers switch from the
bottom path to the top path. This gives us reason to believe that Equilibrium
III is more likely to occur in practice than Equilibrium I or II. This type of
analysis is much more complicated in larger networks, and for the most part is
completely unexplored. Coming to a better understanding of the implications
of nonuniqueness in large networks, as well as techniques for addressing this, is
an important research question.
Figure 11.11: Queue length on link 1 and total travel time for different splitting
proportions p in the modified Nie's merge network. Note that all p values
represent equilibria.
Nevertheless, there are arguments against solutions in which all travelers choose one alternative over another identical one. One argument
is from entropy principles (Section 5.2.2): if travelers have the same behavior
assumption, it is unlikely they would all choose one route over another with equal
travel time. Furthermore, the assumption of a triangular fundamental diagram
implies that travel speeds remain at free-flow for all subcritical densities. In
practice the speed will drop slightly due to variations in preferred speeds and
difficulties in overtaking at higher density, so drivers would likely prefer the
route chosen by fewer travelers.
However, from the standpoint of modeling, the fact that all feasible solutions
are equilibria poses significant challenges.
Literally any solution will have zero gap, and if an all-or-nothing assignment
is chosen as the initial solution (as is sometimes done in implementations),
dynamic traffic assignment software will report that a perfect equilibrium has
been reached.
This example shows that initial solutions for dynamic traffic assignment
should be carefully chosen, perhaps by spreading vehicles over multiple paths,
or breaking ties stochastically in shortest path algorithms to avoid assigning all
vehicles to the same path in the first all-or-nothing assignment.
[Figure 11.12: Diverge-merge network for the capacity paradox, with the bottom
route split into links 3A and 3B. The figure gives each link's free-flow time
L/uf, capacity qmax, and jam density kjL; link 3B's capacity (10 vehicles per
time step) is half that of link 3A (20).]
That is, reducing the capacity on this link can improve system conditions!
it is a dynamic equivalent of the Braess paradox from Section 4.3.
Many have criticized the Braess paradox on the grounds that the link per-
formance functions used in static assignment are unrealistic. In the example
shown below, queue spillback (a feature unique to dynamic network loading) is
actually the critical factor in the paradox.
Consider the network in Figure 11.12, where time is measured in units of time
steps ∆t. Like the networks in the two previous sections, it consists of a single
merge and diverge. However, the free-flow travel times and capacities on the
top and bottom links are now different: the top route is longer, but has a higher
capacity, while the bottom route is shorter at free-flow, but has a bottleneck
limiting the throughput on this route — link 3B has only half the capacity of
3A. We will use the spatial queue model of Section 9.1.3 to propagate traffic
flow, although the same results would be obtained with an LWR-based model
or anything else which captures queue spillback and delay. The input demand
is constant, at 20 vehicles per time step.
The capacity on the top route is high enough to accommodate all of the
demand; if all of this demand were to be assigned to this route, the travel time
would be 10 minutes per vehicle. Assigning all vehicles to the top route is neither
the user equilibrium nor the system optimum solution, but it does give an upper
bound on the average vehicle travel time in the system optimal assignment —
it is possible to do better than this if we assign some vehicles to the bottom,
shorter route.
To derive the user equilibrium solution, notice that initially all vehicles will
choose the bottom path, since it has the lower travel time at free flow. A queue
will start forming on link 3A, since the output capacity is only 10 vehicles per
time step (because of the series node model from Section 9.2.1, and the capacity
of link 3B) and vehicles are entering at double this rate. As the queue grows,
the travel time on link 3A will increase as well. With the spatial queue model
and these inflow and outflow rates, you can show that the travel time for the
n-th vehicle entering the link is
\[ 1 + \frac{n}{q_{\text{in}}} \left( \frac{q_{\text{in}}}{q_{\text{out}}} - 1 \right) = 1 + \frac{n}{20} \tag{11.37} \]
as long as the queue has not spilled back (see Exercise 9.3).
Based on equation (11.37), when the 60th vehicle enters the link, it will
spend four time units on link 3A, and thus its travel time across the entire
bottom path would be the same as if it had chosen the top path. At this point
there are 40 vehicles on link 3A, as can be seen by drawing the N ↑ and N ↓
curves for this link. From this point on, vehicles will split between the top and
bottom paths to maintain equal travel times.
So far, so good; the first 60 vehicles that enter the network have a travel
time of less than 10 minutes, and all the rest have a travel time of exactly 10
minutes. Now see what happens if we increase the capacity on link 3B, with
the intent of alleviating congestion by improving the bottleneck capacity. If
the capacity on 3B increases from 10 to 12, the story stays the same, except
that it is the first 90 vehicles that have a travel time of less than 10 minutes.
We can see this by setting (11.37) equal to four (the time needed to equalize
travel times on the top and bottom paths), but with qout = 12 instead of 10, and
solving for n. Network conditions indeed have improved. But if the capacity
increases still further, to 15, then equation (11.37) tells us that it is only after
180 vehicles have entered the system that travelers would start splitting between
the top and bottom links. By tracing the N ↑ and N ↓ curves, at this point in
time 120 vehicles will have exited link 3A, meaning the queue length would be
60 vehicles. But the jam density of the bottom link only allows it to
hold 50 vehicles. The queue will thus spill upstream of the diverge node,
and in this scenario, no vehicles will opt to take the top path. By the time
a driver reaches the diverge point, the number of vehicles on the bottom link
is 50, giving a travel time on 3A of 3 1/3 minutes, and a total travel time of 9 1/3
minutes from origin to destination. This is less than the travel time on the top
path, so drivers prefer waiting in the queue to taking the bypass route.
As a result, all drivers will choose the bottom path, and the queue on the
origin centroid connector will grow without bound, as will the travel times
experienced by vehicles entering the network later and later. By increasing the
length of time vehicles enter the network at this rate, we can make the delays
as large as we like.
We thus see that even with dynamic network loading, increasing the capacity
on the only bottleneck link can make average travel times worse — and in fact
arbitrarily worse than the system optimum solution, where the average travel
time is less than 10 minutes. The reason for this phenomenon is the interaction
between queue spillback and selfish route choice: in the latter scenario it would
be better for the system for some drivers to choose the top route, even though
it would worsen their individual travel time.
Of course, if the capacity on 3B were increased even further, all the way
to 20 vehicles per time step, delays would drop again since there would be no
bottleneck at all. Exercise 11 asks you to plot the average vehicle delay in this
network as the capacity on link 3B varies from 0 to 25 vehicles per time step.
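The critical vehicle numbers above follow directly from equation (11.37): the n-th vehicle's bottom-path travel time first reaches four time units when 1 + (n/qin)(qin/qout − 1) = 4, that is, when n = 3/(1/qout − 1/qin). The short script below (a sketch; the variable names are ours) checks the three capacity scenarios discussed in this section.

```python
def queue_travel_time(n, q_in, q_out, t_ff=1.0):
    """Travel time of the n-th vehicle entering the link under the
    spatial queue model, before any spillback (cf. equation (11.37))."""
    return t_ff + (n / q_in) * (q_in / q_out - 1.0)

q_in = 20.0  # inflow rate to link 3A, vehicles per time step
for q_out in (10.0, 12.0, 15.0):  # candidate capacities for link 3B
    # first vehicle whose travel time on 3A reaches 4 time units,
    # equalizing the top- and bottom-path travel times
    n_star = 3.0 / (1.0 / q_out - 1.0 / q_in)
    print(q_out, round(n_star))  # 60, 90, and 180 vehicles respectively
```

With capacity 15, the split would not begin until vehicle 180, by which time the 50-vehicle storage of the bottom link has long been exceeded and spillback changes the story, as described above.
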
11.3.5 Conclusion
The purpose of these examples is to show that dynamic user equilibrium is
complex. Guaranteeing existence or uniqueness of dynamic equilibrium requires
making strong assumptions on traffic flow propagation. However, for some prac-
tical applications, using a more realistic traffic flow model is more important
than mathematical properties of the resulting model. Many people find comfort
in the fact that we can at best solve for an equilibrium approximately, and thus
dismiss the question of whether an equilibrium “truly” exists as akin to asking
how many angels can dance on the head of a pin.
We emphasize that existence and uniqueness are not simply mathematical
abstractions, and that they have significant implications for practice: if an equi-
librium does not exist, should we really be ranking projects based on equilibrium
analysis? If multiple equilibria exist, what should we plan for? Can we even find
them all? At the same time, we acknowledge that using static equilibrium to
sidestep these difficulties is often unacceptable. For many applications, the as-
sumptions in link performance functions are simply too unrealistic. Such is the
nature of mathematical modeling in engineering practice. No tool is right for
every task; rather, experienced practitioners know how to match the available
tools to the job at hand. By reading this book, we hope that you have gained
the insight to understand the advantages and disadvantages of both static and
dynamic traffic assignment models, and to make educated decisions about the
right tool for a particular project.
Finally, more research is needed on these topics, understanding the signifi-
cance of equilibrium nonexistence and nonuniqueness, particularly in practical
networks and not just in “toy” networks such as the ones in this section. As
researchers, we would be delighted for you to contribute to work in this field.
11.4 Historical Notes and Further Reading
The requirement that flow headed to one destination cannot overtake that
headed to another imposes nonconvexity on the feasible flows (Carey, 1992).
For this reason, variational inequality formulations are more common than
mathematical optimization. Examples of these include Friesz et al. (1993), Wie
et al. (1995), and Chen and Hsueh (1998). Optimal control approaches have
also been proposed (Friesz et al., 1989; Ran et al., 1993), as have formulations
as a nonlinear complementarity problem (Ban et al., 2012). Fixed point ap-
proaches are also common — Bar-Gera (2005) and Bellei et al. (2005) are just
two examples — but as with the static assignment problem, are more useful for
specifying the problem than for solving it.
Using link or path flows as the main decision variable is the most intuitive
choice, and therefore the most common in the literature. However, the use of
splitting proportions is becoming more common (Nezamuddin and Boyles, 2015;
Long et al., 2012; Gentile, 2016).
For the convex combinations and simplicial decomposition algorithms, refer
to the references in Section 6.6. For gradient projection as specialized to dy-
namic network loading, see Nezamuddin and Boyles (2015) and Gentile (2016).
The equilibrium existence counterexample in Section 11.3.1 is from Waller
(2006). The uniqueness counterexample in Section 11.3.2 is from Nie (2010b),
and its special case in Section 11.3.3 is from Boyles et al. (2013). The efficiency
counterexample in Section 11.3.4 is from Daganzo (1998). One consequence
of these counterexamples is that queue spillback significantly complicates the
finding and interpretation of dynamic user equilibria; see also the discussion in
Boyles and Ruiz Juri (2019).
11.5 Exercises
1. [37] (Equivalence of link-based and path-based flow representations). Given
splitting proportions $\alpha^t_{hij,s}$ for each destination $s$, time interval $t$, and turning
movement $[h,i,j]$, show how "equivalent" path flow values $h^t_\pi$ can be
found. Then, if given path flows $h^t_\pi$, show how "equivalent" $\alpha^t_{hij,s}$ values
can be found. ("Equivalent" means that the link cumulative counts $N^\uparrow$
and $N^\downarrow$ would be the same for all time steps after performing network
loading, possibly with a small error due to time discretization that would
shrink to zero as $\Delta t \to 0$.)
2. [37] Consider the four-link network shown in Figure 11.13, and perform
network loading using the cell transmission model. (Each link is one cell
long.) During the first time interval, 10 vehicles enter Link 1 on the top
path; during the second time interval, 5 vehicles enter Link 1 on the top
path and 5 on the bottom path; and during the third time interval, 10
vehicles enter Link 1 on the bottom path. No other vehicles enter the
network.
Interpolating as necessary, what time do the first and last vehicles on the
top path exit cell 4? the first and last vehicles on the bottom path? What
[Figure 11.13: Four-link network for Exercise 2, giving each link's free-flow
time L/uf, capacity qmax, jam density kjL, and wave speed ratio w/uf.]
is the derivative of the travel time on the top path for a vehicle leaving at
the start of interval 2?
3. [12] In the course of the convex combinations algorithm, assume that the
H and H* matrices are as below, and λ = 1/3. What is the new H
matrix?
\[ H = \begin{bmatrix} 6 & 12 \\ 30 & 24 \end{bmatrix} \qquad H^* = \begin{bmatrix} 18 & 0 \\ 0 & 54 \end{bmatrix} \tag{11.38} \]
and the current path flow and travel time matrices are:
\[ H = \begin{bmatrix} 14 & 6 & 0 & 0 \\ 0 & 8 & 0 & 2 \\ 0 & 9 & 21 & 0 \end{bmatrix} \qquad T(H) = \begin{bmatrix} 20 & 20 & 24 & 27 \\ 30 & 34 & 37 & 40 \\ 44 & 35 & 36 & 40 \end{bmatrix} \tag{11.40} \]
What are the unrestricted and restricted average excess costs of the current
solution? What is the matrix ∆H?
5. [35] Assume that there is a single OD pair, two paths, and three departure
times; 15 vehicles depart during the first, 10 during the second, and 5
during the third. Let $H^t_\pi$ denote the number of vehicles departing on
path $\pi$ at time $t$, and suppose that the path travel times are related to the path
Find the path flows obtained from three iterations of the convex combi-
nations method with step sizes λ1 = 1/2, λ2 = 1/4, and λ3 = 1/6 (so you
should find a total of four H ∗ matrices, counting the initial matrix, and
take three weighted averages). Your initial matrix should be the shortest
paths with zero flow. What is the resulting AEC? Break any ties in favor
of path 1.
6. [35] Using the same network and demand as in Exercise 5, now assume
that you are solving the same problem with simplicial decomposition, and
at some stage H contains the following two matrices:
\[ \begin{bmatrix} 15 & 0 \\ 0 & 10 \\ 5 & 0 \end{bmatrix} \qquad \begin{bmatrix} 15 & 0 \\ 10 & 0 \\ 0 & 5 \end{bmatrix} \]
and that the current solution is
\[ H = \begin{bmatrix} 15 & 0 \\ 4 & 6 \\ 3 & 2 \end{bmatrix} \]
(a) What is the average excess cost of the current solution?
(b) What is the restricted average excess cost?
(c) What is the search direction from H?
(d) Give the updated H matrix and new restricted average excess cost
after taking a step with µ = 0.1.
(e) If we terminate the subproblem and return to the master algorithm,
what matrix (if any) do we add to H?
7. [35] Again using the network and demand from Exercise 5, now start with
the initial solution
\[ H = \begin{bmatrix} 10 & 5 \\ 5 & 5 \\ 0 & 5 \end{bmatrix} \]
³In practice these would come from performing network loading and calculating path travel
times as described in this chapter; these functions are provided here for your convenience.
\[ \begin{aligned} t^B_1 &= 0.5h^B_1 + 3h^B_2 + 10 \\ t^A_2 &= 0.3h^A_1 + 0.6h^A_2 + 0.8 \\ t^B_2 &= 0.2h^B_1 + 0.4h^B_2 + 2 \end{aligned} \]
at a later time, recall that FIFO violations can occur due to phenomena such as express lanes
opening, allowing “later” vehicles to overtake “earlier” ones and delay them in the network.
Appendix A: Mathematical Concepts

A.1 Indices and Summation
The left-hand side of Equation (A.1) is used as shorthand for the right-hand
side. More formally, the left-hand side instructs us to choose all the values of i
between 1 and 5 (inclusive); collect the terms Pi for all of these values of i; and
to add them together. A variant of this notation is
\[ \sum_{i \in N} P_i, \tag{A.2} \]
which expresses the sum of the productions from all nodes in the set N . Here
i ranges over all elements in the set N , rather than between two numbers as
in (A.1). We can also add conditions to this range by using a colon. If we only
wanted to sum over nodes whose productions are less than, say, 500, we can write
\[ \sum_{i \in N : P_i < 500} P_i. \tag{A.3} \]
When it is exceptionally clear what values or set i is ranging over, we can simply
write
\[ \sum_i P_i, \tag{A.4} \]
but this “abbreviated” form should be used sparingly, and avoided if there is
any ambiguity at all as to what values i should take in the sum.
When there are multiple indices, a double summation can be used:
\[ \sum_{i=1}^{3} \sum_{j=2}^{3} x_{ij} = x_{12} + x_{22} + x_{32} + x_{13} + x_{23} + x_{33}. \tag{A.5} \]
The order of the two sums can also be exchanged, writing
\[ \sum_{j=2}^{3} \sum_{i=1}^{3} x_{ij}, \tag{A.6} \]
which expands to the same thing as the right-hand side of (A.5), or as summing
over all combinations of i and j such that i is between 1 and 3, and j is between
2 and 3. Triple sums, quadruple sums, and so forth behave in exactly the same
way, and are common when there are many indices.
The summation index is often called a dummy variable, because the indices
do not have an absolute meaning. Rather, they are only important insofar
as they point to the correct numbers to add up. For instance, $\sum_{i=1}^{5} P_i$ and
$\sum_{i=0}^{4} P_{i+1}$ are exactly the same, because both expressions have you add up $P_1$
through $P_5$. Likewise, $\sum_{i=1}^{5} P_i$ and $\sum_{j=1}^{5} P_j$ are exactly the same; the fact that
we are counting from 1 to 5 using the variable j instead of i is of no consequence.
Related to this, it is wrong to refer to a summation index outside of the sum
itself. A formula such as $x_i + \sum_{i=1}^{5} y_i$ is incorrect. For the formula to make
sense, $x_i$ needs to refer to one specific value of i. But using i as the index in the
sum $\sum_{i=1}^{5}$ means that i must range over all the values between 1 and 5. Does
$y_i$ refer to the index of summation (which ranges from 1 to 5), or to the one
specific value of i used outside the sum? If you want to refer to a specific node,
as well as to index over all nodes, you can add a prime to one of them, as in
\[ x_i + \sum_{i'=1}^{5} y_{i'}. \tag{A.7} \]
The notation on the right can be a bit confusing at first glance, since it looks
like we are summing the flow of every link (the whole set A), rather than just
the links entering node i. The critical point is that in this formula, i refers
to one specific node which was previously chosen and defined outside of the
summation. The only variable we are summing over in Equation (A.9) is h. In
this light, i is fixed, and the right-most sum is over all the values of h such that
(h, i) is a valid link (that is, (h, i) ∈ A). These are exactly the links which form
the reverse star of i.
Almost all of the sums we will see in this book involve only a finite number
of terms. These sums are much easier to work with than infinite sums, and
have the useful properties listed below. (Some of these properties do not always
apply to sums involving infinitely many terms.)
1. You can factor constants out of a sum:
\[ \sum_i c x_i = c \sum_i x_i, \tag{A.10} \]
no matter what values i ranges over. This follows from the distributive
property for sums: a(b + c) = ab + ac. (A “constant” here is any term
which does not depend on i.)
2. If what you are summing is itself a sum, you can split them up:
\[ \sum_i (x_i + y_i) = \sum_i x_i + \sum_i y_i. \tag{A.11} \]
3. You can exchange the order of summation in a double sum:
\[ \sum_i \sum_j x_{ij} = \sum_j \sum_i x_{ij}, \tag{A.12} \]
no matter what values i and j range over. This also follows from the
commutative property.
None of these properties are anything new; they simply formalize how we can
operate with the $\sum$ notation using the basic properties of addition. These
properties can also be combined. It is common to exchange the order of a
double sum, and then factor out a term, to perform a simplification:
\[ \sum_i \sum_j c_j x_{ij} = \sum_j \sum_i c_j x_{ij} = \sum_j \left( c_j \sum_i x_{ij} \right). \tag{A.13} \]
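These identities are easy to sanity-check numerically. The snippet below (with example data of our choosing) verifies (A.10), (A.11), and (A.13) on small arrays.

```python
# Numerical sanity checks of the finite-sum identities (our example data).
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
c = 7.0

# (A.10): constants factor out of a sum
assert sum(c * xi for xi in x) == c * sum(x)

# (A.11): a sum of sums can be split
assert sum(xi + yi for xi, yi in zip(x, y)) == sum(x) + sum(y)

# (A.13): exchange the order of a double sum, then factor out c_j
cs = [2.0, 3.0]                              # c_j for j = 0, 1
xij = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # x_ij for i = 0, 1, 2
lhs = sum(cs[j] * xij[i][j] for i in range(3) for j in range(2))
rhs = sum(cs[j] * sum(xij[i][j] for i in range(3)) for j in range(2))
assert lhs == rhs
print("all identities hold")
```
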
A.2 Vectors and Matrices

The dot product of two vectors $\mathbf{x}$ and $\mathbf{y}$ of equal dimension is
\[ \mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i, \]
where the sum is taken over all vector components; for example, with $\mathbf{x} = (1, 2)$
and $\mathbf{y} = (3, 4)$, we have $\mathbf{x} \cdot \mathbf{y} = 1 \times 3 + 2 \times 4 = 11$.
The dot product of two vectors is a scalar. The magnitude of a vector $\mathbf{x}$
is given by its norm $|\mathbf{x}| = \sqrt{\mathbf{x} \cdot \mathbf{x}}$. This norm provides a measure of distance
between two vectors; the distance between $\mathbf{x}$ and $\mathbf{y}$ is given by $|\mathbf{x} - \mathbf{y}|$.
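As a concrete illustration, here is a minimal implementation of the dot product and norm (the helper names are ours):

```python
import math

def dot(x, y):
    """Dot product: sum of products over all vector components."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Magnitude of a vector: the square root of x . x."""
    return math.sqrt(dot(x, x))

x, y = [1.0, 2.0], [3.0, 4.0]
print(dot(x, y))                             # 11.0
print(norm([3.0, 4.0]))                      # 5.0
print(norm([a - b for a, b in zip(x, y)]))   # distance between x and y
```
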
The dot product can also be written
\[ \mathbf{x} \cdot \mathbf{y} = |\mathbf{x}| |\mathbf{y}| \cos \theta, \]
where θ is the angle between the vectors x and y if they are both drawn from
a common point. In particular, x and y are perpendicular if x · y = 0.
A collection of vectors x1 , . . . , xn is linearly independent if the only solution
to the equation a1 x1 + · · · + an xn = 0 is a1 = · · · = an = 0. Otherwise, these
vectors are linearly dependent.
A matrix is a rectangular array of scalars. If a matrix has m rows and n
columns, it is called an m × n matrix, and is an element of Rm×n . A matrix is
square if it has the same number of rows and columns. In this book, matrices
are denoted by boldface capital letters, such as X or Y. In the examples that
follow, let X, Y, and Z be defined as follows:
\[ \mathbf{X} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \qquad \mathbf{Y} = \begin{bmatrix} 0 & -1 \\ -3 & 5 \end{bmatrix} \qquad \mathbf{Z} = \begin{bmatrix} -1 & 1 & -2 \\ 3 & -5 & 8 \end{bmatrix}. \]
Elements of matrices are indexed by their row first and column second, so
$x_{11} = 1$ and $x_{12} = 2$.
Addition and scalar multiplication of matrices works in the same way as
with vectors, so
\[ \mathbf{X} + \mathbf{Y} = \begin{bmatrix} 1+0 & 2-1 \\ 3-3 & 4+5 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 9 \end{bmatrix} \]
¹There are some exceptions; for instance, shortest-path labels are traditionally denoted
with an upper-case L.
and
\[ 5\mathbf{X} = \begin{bmatrix} 5 & 10 \\ 15 & 20 \end{bmatrix}. \]
The transpose of a matrix A, written AT , is obtained by interchanging the
rows and columns, so that if A is an m × n matrix, AT is an n × m matrix. As
examples we have
\[ \mathbf{X}^T = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} \qquad \mathbf{Z}^T = \begin{bmatrix} -1 & 3 \\ 1 & -5 \\ -2 & 8 \end{bmatrix}. \]
Matrices can also be multiplied together. If you imagine that each row of the first matrix is treated as a vector, and that
each column of the second matrix is treated as a vector, then each component
of the product matrix is the dot product of a row from the first matrix and
a column from the second. For this dot product to make sense, these row
and column vectors have to have the same dimension, that is, the number of
columns in the first matrix must equal the number of rows in the second. Using
the matrices defined above, we have
\[ \mathbf{XY} = \begin{bmatrix} 1 \times 0 + 2 \times (-3) & 1 \times (-1) + 2 \times 5 \\ 3 \times 0 + 4 \times (-3) & 3 \times (-1) + 4 \times 5 \end{bmatrix} = \begin{bmatrix} -6 & 9 \\ -12 & 17 \end{bmatrix}. \]
Observe that the element in the first row and first column of the product matrix
is the dot product of the first row from X and the first column from Y; the
element in the first row and second column of the product is the dot product of
the first row from X and the second column of Y; and so forth. You should be
able to verify that
\[ \mathbf{XZ} = \begin{bmatrix} 5 & -9 & 14 \\ 9 & -17 & 26 \end{bmatrix}. \]
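The row-by-column rule translates directly into code. The sketch below (with an illustrative `matmul` helper of our own) reproduces the products XY and XZ computed above.

```python
def matmul(A, B):
    """Each entry of AB is the dot product of a row of A with a column
    of B; the inner dimensions must match."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

X = [[1, 2], [3, 4]]
Y = [[0, -1], [-3, 5]]
Z = [[-1, 1, -2], [3, -5, 8]]
print(matmul(X, Y))  # [[-6, 9], [-12, 17]]
print(matmul(X, Z))  # [[5, -9, 14], [9, -17, 26]]
```
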
A matrix and a vector can be multiplied together, if you interpret a row
vector as a 1×n matrix or a column vector as an m×1 matrix. A dot product of
two vectors x and y of equal dimension can be written as a matrix multiplication:
x · y = xT y if they are both column vectors, or as xyT if they are both row
vectors. Here, the distinction between row and column vectors is important,
because matrices can only be multiplied if their dimensions are compatible. In
matrix multiplications, the convention used in this text is that vectors such as x
are column vectors, and a row vector is denoted by xT . Again, this distinction is
only relevant in matrix multiplication, and for other purposes row and column
vectors can be treated interchangeably.
It is worth repeating that matrix multiplication is not commutative, so that
usually XY ≠ YX (and in fact both products may not even exist, depending
on the dimensions of X and Y), although there are some exceptions. You may
wonder why this seemingly-complicated definition of matrix multiplication is
used instead of other, seemingly simpler, approaches. One reason is that this
definition actually ends up representing real-world calculations more frequently
than other definitions. For instance, it can be used to compactly write a set of
equations, as is common in optimization problems (see Section B.3).
A square matrix A is symmetric if aij = aji (in other words, for a symmetric
matrix A = AT ), and it is diagonal if aij = 0 unless i = j (that is, all its
elements are zero except on the diagonal from upper-left to lower-right). A very
special diagonal matrix is the identity matrix, which has 1’s along the diagonal
and 0's everywhere else. The notation I denotes an identity matrix of any size,
so we can write
\[ \mathbf{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{or} \quad \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \]
None of those three terms can be negative, and since x 6= 0, at least one of these
terms is strictly positive.
The matrix B is not positive definite, since if $\mathbf{x} = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$ then $\mathbf{x}^T \mathbf{B} \mathbf{x} = 0$,
which is not strictly positive. However, it is positive semidefinite, since the
matrix product is
\[ \mathbf{x}^T \mathbf{B} \mathbf{x} = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2^2 \]
positive for all nonzero x; but the eigenvalue test does not apply, and there
are non-symmetric matrices which have strictly positive eigenvalues but do not
satisfy $\mathbf{x}^T \mathbf{A} \mathbf{x} > 0$ for all nonzero x. However, we can form the symmetric part
of the matrix A by calculating $\frac{1}{2}(\mathbf{A} + \mathbf{A}^T)$. It is easy to show that this is always
a symmetric matrix, and that $\mathbf{x}^T \mathbf{A} \mathbf{x} > 0$ if and only if $\mathbf{x}^T \left( \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) \right) \mathbf{x} > 0$.
So, when we refer to a non-symmetric matrix being positive definite, what we
will mean is that its symmetric part $\frac{1}{2}(\mathbf{A} + \mathbf{A}^T)$ is positive definite.
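The symmetric-part construction is easy to verify numerically. The sketch below (the helper names and example matrix are ours) checks that the quadratic forms of A and of ½(A + Aᵀ) agree, and exhibits a non-symmetric matrix whose eigenvalues are both 1 yet whose quadratic form is negative for some x.

```python
def symmetric_part(A):
    """Return (A + A^T)/2 for a square matrix given as nested lists."""
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

def quadratic_form(A, x):
    """Compute x^T A x."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1.0, 4.0], [0.0, 1.0]]   # non-symmetric; both eigenvalues equal 1
S = symmetric_part(A)          # [[1.0, 2.0], [2.0, 1.0]]
x = [1.0, -1.0]

# The quadratic forms of A and its symmetric part always agree...
assert quadratic_form(A, x) == quadratic_form(S, x)
# ...and despite A's positive eigenvalues, the form is negative here:
print(quadratic_form(A, x))    # -2.0
```
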
The determinant of a square matrix is occasionally useful in transportation
network analysis (but less so than in other applied mathematics fields). For
a 1 × 1 matrix, its determinant is simply the value of the single entry in the
matrix. For an n × n matrix, the determinant can be computed by the following
procedure: select any row or column of the matrix; for each entry in this row
or column, compute the determinant of the (n − 1) × (n − 1) matrix resulting
from deleting both the row and column of this entry; and alternately add and
subtract the resulting determinants. For our purposes, the determinant can be
used to concisely express other matrix properties. For instance, one can show
that a square matrix is invertible if and only if its determinant is not zero,
and that it is positive definite if and only if the determinants of all the square
submatrices including the element in the first row and column are positive. It
also appears in the characterization of totally unimodular matrices, which are
discussed in Section C.5.2 as a property of certain integer optimization problems
which makes them easier to solve.
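The cofactor-expansion procedure translates into a short recursive function. This sketch (our own, and fine only for the small matrices used in this book) expands along the first row.

```python
def det(A):
    """Determinant by cofactor expansion along the first row.
    Adequate for small matrices, though O(n!) in general."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # delete row 0 and column j to form the minor
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))   # -2
# positive definiteness test: leading principal minors all positive
M = [[2, 1], [1, 2]]
print(det([[2]]), det(M))      # 2 3
```
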
A.3 Sets
A set is a collection of any type of objects, denoted by a plain capital letter, such
as X or Y . In transportation network analysis, we work with sets of numbers,
sets of nodes, sets of links, sets of paths, sets of origin-destination pairs, and
so forth. Sets can contain either a finite or infinite number of elements. As
examples, let’s work with the sets
X = {1, 2, 4} Y = {1, 3, 5, 7} ,
using curly braces to denote a set, and commas to list the elements. Set mem-
bership is indicated with the notation ∈, so 1 ∈ X, 1 ∈ Y, 2 ∈ X, but 2 ∉ Y.
The union of two sets X ∪ Y is the set consisting of elements either in X or in
Y (or both), so
X ∪ Y = {1, 2, 3, 4, 5, 7} .
The intersection of two sets X ∩ Y is the set consisting only of elements in both
X and Y , so
X ∩ Y = {1} .
A set is a subset of another set, denoted by ⊆, if all of its elements also belong
to the second set. With these sets X ⊈ Y: even though the element 1 is in both
X and Y, the elements 2 and 4 are in X but not Y. We do have X ∩ Y ⊂ X
and X ∩ Y ⊂ Y, however.
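Python's built-in set type mirrors this notation, which makes the examples easy to check:

```python
X = {1, 2, 4}
Y = {1, 3, 5, 7}

print(X | Y)                        # union: {1, 2, 3, 4, 5, 7}
print(X & Y)                        # intersection: {1}
print(1 in X, 2 in Y)               # True False
print(X <= Y)                       # False: X is not a subset of Y
print((X & Y) <= X, (X & Y) <= Y)   # True True
```
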
2. Sets which are intervals on the real line, such as [−1, 1] or (0, 2]. These
intervals are sets containing all real numbers between their endpoints; a
square bracket next to an endpoint means that the endpoint is included
in the set, while parentheses mean that the endpoint is not included.
Intervals usually contain infinitely many elements.
3. Sets which contain all objects satisfying one or more conditions. For in-
stance, the set {x ∈ R : x2 < 4} contains all real numbers whose square is
less than four; in this case it can be written simply as the interval (−2, 2).
A more complicated set is {(x, y) ∈ R2 : x+y ≤ 3, |x| ≤ |y|}. This set con-
tains all two-dimensional vectors which (if x and y are the two components
of the vector) satisfy both the conditions x + y ≤ 3 and |x| ≤ |y|. It can
also be thought of as the intersection of the sets {(x, y) ∈ R2 : x + y ≤ 3}
and {(x, y) ∈ R2 : |x| ≤ |y|}. If there are no vectors which satisfy all of
the conditions, the set is empty, denoted ∅.
We use the common mathematical conventions that R is the set of all real
numbers, and Z is the set of all integers. If we want to restrict attention to non-
negative real numbers or integers (i.e., positive or zero), the notations R+ and
Z+ are used. Superscripts (e.g., R3 or Z5 ) indicate vectors of a particular dimen-
sion whose elements belong to a particular set: R3 is the set of 3-dimensional
vectors of real numbers, and Z5 the set of 5-dimensional integer vectors. To refer
to a matrix of a particular size, we indicate both dimensions in the superscript;
for instance, $\mathbb{Z}^{3 \times 5}_+$ is the set of matrices with 3 rows and 5 columns, each of
whose elements is a non-negative integer.
Sets of scalars or vectors can be described in other ways. Given any vector
x ∈ Rn , the ball of radius r centered at x is the set

Br (x) = {y ∈ Rn : |y − x| < r} ,

that is, the set of all vectors whose distance to x is less than r, where r is some
positive number. A ball in one dimension is an interval; a two-dimensional
ball is a disk; a three-dimensional ball is a solid sphere; a four-dimensional ball a
hypersphere; and so on.
Given some set of vectors X, the vector x is a boundary point of X if
every ball Br (x) contains both elements in X and elements not in X, no matter
how small the radius r. Notice that the boundary points of a set need not belong
to the set: 2 is a boundary point of the interval (−2, 2). A set is closed if it
contains all of its boundary points. A set X is bounded if every element of X
is contained in a sufficiently large ball centered at the origin, that is, if there is
[Figure A.1: Projections of points onto the set X.]
some r such that x ∈ Br (0) for all x ∈ X. A set is compact if it is both closed
and bounded.
These facts will prove useful:
Proposition A.3. Let f (x1 , x2 , · · · , xn ) be any linear function, that is, f (x) =
a1 x1 + a2 x2 + · · · + an xn for some constants ai , and let b be any scalar.
(a) The set {x ∈ Rn : f (x) = b} is closed.
(b) The set {x ∈ Rn : f (x) ≤ b} is closed.
(c) The set {x ∈ Rn : f (x) ≥ b} is closed.
Proposition A.4. Let X and Y be any sets of scalars or vectors.
(a) If X and Y are closed, so are X ∩ Y and X ∪ Y .
(b) If X and Y are bounded, so are X ∩ Y and X ∪ Y .
(c) If X and Y are compact, so are X ∩ Y and X ∪ Y .
Combining Propositions A.3 and A.4, we can see that any set defined solely
by linear equality or weak inequality constraints (any number of them) is closed.
If X is a closed set of n-dimensional vectors, and y ∈ Rn is any n-dimensional
vector, the projection of y onto X, written projX (y), is the vector x in X which
is “closest” to y in the sense that |x − y| is minimal. In Figure A.1, we have
projX (a) = b, projX (c) = d, and projX (e) = e.
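For two simple closed sets, projections can be computed directly; the sketch below (the function names are our own) projects onto an interval by clamping and onto a ball by rescaling the displacement from the center.

```python
import math

def project_interval(y, lo, hi):
    """Project the scalar y onto the closed interval [lo, hi] by clamping."""
    return max(lo, min(hi, y))

def project_ball(y, center, r):
    """Project the point y onto the closed ball of radius r around center.

    If y is already in the ball it is its own projection; otherwise the
    projection is the nearest boundary point, found by scaling the
    displacement from the center down to length r."""
    d = math.dist(y, center)
    if d <= r:
        return tuple(y)
    return tuple(c + r * (yi - c) / d for yi, c in zip(y, center))

print(project_interval(3, 0, 1))        # 1: the nearest point of [0, 1] to 3
print(project_ball((2, 5), (0, 0), 1))  # nearest point of the unit disk to (2, 5)
```

Both sets here are convex, which is what makes the projection unique; clamping each coordinate generalizes to projection onto boxes.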
This subsection concludes with a discussion of what it means for a set to be
convex. This notion is very important, and is described in more detail than the
other concepts, starting with an intuitive definition.
If X is convex, geometrically this means that line segments connecting points
of X lie entirely within X. For example, the set in Figure A.2 is convex, while
those in Figure A.3 and Figure A.4 are not. Intuitively, a convex set cannot
have any “holes” punched into it, or “bites” taken out of it.
Mathematically, we write this as follows:
Definition A.1. A set X ⊆ Rn is convex if, for all x1 , x2 ∈ X and all λ ∈ [0, 1],
the point λx2 + (1 − λ)x1 ∈ X.
If this definition is not clear, notice that one way to express the line segment
between any two points is λx2 + (1 − λ)x1 , and that you cover the entire line
between x1 and x2 as λ varies between 0 and 1, regardless of how close or far
apart these two points are located.
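As a quick numeric illustration (the helper name is ours, not the book's), sweeping λ from 0 to 1 traces out the segment between two points:

```python
def convex_combination(x1, x2, lam):
    """Return the point lam*x2 + (1 - lam)*x1 on the segment from x1 to x2."""
    return tuple(lam * b + (1 - lam) * a for a, b in zip(x1, x2))

x1, x2 = (0.0, 0.0), (4.0, 2.0)
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(lam, convex_combination(x1, x2, lam))
# lam = 0 returns x1 itself, lam = 1 returns x2, and intermediate
# values of lam give the points in between
```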
Example A.1. Show that the one-dimensional set X = {x : x ≥ 0} is convex.
Solution. Pick any x1 , x2 ≥ 0 and any λ ∈ [0, 1]. Because x1 , x2 , λ, and
1 − λ are all nonnegative, so are λx2 and (1 − λ)x1 , and therefore so is λx2 + (1 − λ)x1 .
Therefore λx2 + (1 − λ)x1 belongs to X as well.
Example A.2. Show that the hyperplane X = {x ∈ Rn : Σ_{i=1}^{n} ai xi − b = 0} is
convex.
Solution. This set is the same as {x ∈ Rn : Σ_{i=1}^{n} ai xi = b}. Pick any
x, y ∈ X and any λ ∈ [0, 1]. Then

Σ_{i=1}^{n} ai (λyi + (1 − λ)xi ) = λ Σ_{i=1}^{n} ai yi + (1 − λ) Σ_{i=1}^{n} ai xi
                                 = λb + (1 − λ)b
                                 = b ,

so λy + (1 − λ)x ∈ X as well.
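The algebra in this solution can be spot-checked numerically; the sketch below (with illustrative coefficients of our own choosing) verifies that a convex combination of two points on a hyperplane stays on it:

```python
def on_hyperplane(x, a, b, tol=1e-9):
    """Check whether sum_i a_i * x_i = b, up to floating-point tolerance."""
    return abs(sum(ai * xi for ai, xi in zip(a, x)) - b) < tol

a, b = (1.0, 2.0, -1.0), 3.0
x = (1.0, 1.0, 0.0)   # 1*1 + 2*1 - 1*0 = 3
y = (3.0, 1.0, 2.0)   # 1*3 + 2*1 - 1*2 = 3
lam = 0.3
z = tuple(lam * yi + (1 - lam) * xi for xi, yi in zip(x, y))
print(on_hyperplane(x, a, b), on_hyperplane(y, a, b), on_hyperplane(z, a, b))
# all True: the combination z also satisfies the hyperplane equation
```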
Sometimes, more complicated arguments are needed.
Example A.3. Show that the two-dimensional ball B = {(x, y) ∈ R² : x² + y² ≤ 1}
is convex.
Solution. Pick any vectors a, b ∈ B and any λ ∈ [0, 1]. We will write the
components of these as a = (ax , ay ) and b = (bx , by ). The point λb + (1 − λ)a
is the vector (λbx + (1 − λ)ax , λby + (1 − λ)ay ). To show that it is in B, we
must show that the sum of the squares of these components is no greater than
1. Expanding,

(λbx + (1 − λ)ax )² + (λby + (1 − λ)ay )²
  = λ²(bx² + by²) + (1 − λ)²(ax² + ay²) + 2λ(1 − λ)(ax bx + ay by )
  ≤ λ² + (1 − λ)² + 2λ(1 − λ)(ax bx + ay by )

because a, b ∈ B (and therefore ax² + ay² ≤ 1 and bx² + by² ≤ 1). Notice that
ax bx + ay by is simply the dot product of a and b, which is equal to |a||b| cos θ,
where θ is the angle between the vectors a and b. Since |a| ≤ 1 and |b| ≤ 1 (by
definition of B), and since cos θ ≤ 1 regardless of θ, we have ax bx + ay by ≤ 1.
Therefore the sum of squares is at most λ² + (1 − λ)² + 2λ(1 − λ) = (λ + (1 − λ))² = 1,
so λb + (1 − λ)a ∈ B and B is convex.
A.4 Functions
A function is a mapping between sets. If the function f maps set X to set Y ,
then f associates every element of X with some single element of Y . (Note
that not every element of Y needs to be associated with an element of X.) The
set X is known as the domain of f . Examples include the familiar functions
f (x) = x² and g(x, y) = x² + y². The function f maps R to R, while g maps
R² to R. An example of a function which maps R² to R² is the vector-valued
function

h(x1 , x2 ) = (3x1 + 2x2 , −x1 x2 ) .
The inverse of a function f , denoted f −1 , “undoes” the mapping f in the sense
that if f (x) = y, then f −1 (y) = x. As an example, if f (x) = x³, then
f −1 (x) = ∛x. Not every function has an inverse, and inverse functions may need
to be restricted to subsets of X and Y .
The composition of two functions f and g, denoted f ◦ g, involves substituting
function g into function f . If f (x) = x³ and g(x) = 2x + 4, then (f ◦ g)(x) =
(2x + 4)³. This can also be written as the function f (g(x)).
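These definitions translate directly into code. Here is a small sketch using the examples f(x) = x³ and g(x) = 2x + 4 from the text (the helper names are ours):

```python
import math

def f(x):
    return x ** 3

def g(x):
    return 2 * x + 4

def f_inverse(y):
    # real cube root; copysign handles negative arguments correctly
    return math.copysign(abs(y) ** (1.0 / 3.0), y)

def compose(outer, inner):
    """Return the composition outer ∘ inner, i.e. x -> outer(inner(x))."""
    return lambda x: outer(inner(x))

fg = compose(f, g)
print(fg(1))            # (2*1 + 4)**3 = 216
print(f_inverse(f(5)))  # recovers 5, up to floating-point error
```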
A function is continuous if, for any point x̂ ∈ X, the limit limx→x̂ f (x) exists
and is equal to f (x̂). Intuitively, continuous functions can be drawn without
lifting your pen from the paper. A function is differentiable if, for any point
x ∈ X, the limit

lim_{∆x→0} (f (x + ∆x) − f (x))/∆x    (A.19)

exists; that limit is then called the derivative of f , and gives the slope of the
tangent line at that point. It can be shown that any differentiable function must
be continuous.
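The limit defining the derivative can be approximated numerically by taking a small but nonzero step. This sketch uses a central difference, a common symmetric variant of the one-sided quotient in the definition:

```python
def derivative(f, x, h=1e-6):
    """Approximate f'(x) with a central difference: the limit in the
    definition of the derivative, evaluated at a small finite step h."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(lambda x: x ** 2, 3.0))  # close to 6, the slope of x^2 at x = 3
print(derivative(abs, 1.0))               # close to 1, the slope of |x| at x = 1
```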
Proposition A.6. Let f and g be continuous functions. Then we have the
following:
Be sure to note the difference between Hessians and Jacobians. The Hessian is
defined for scalar-valued functions, and contains all the second partial deriva-
tives. The Jacobian is defined for vector-valued functions, and contains all the
first partial derivatives.
Finally, there is an important notion of function convexity. Confusingly,
this is a different idea than set convexity discussed in Section A.3, although
there are some relationships and similar ideas between them. Set and function
convexity together play a pivotal role in optimization and network equilibrium
problems, so function convexity is discussed at length here. Geometrically, a
convex function lies below its secant lines. Remember that a secant line is the
line segment joining two points on the function. As we see in Figure A.5, no
matter what two points we pick, the function always lies below its secant line.
On the other hand, in Figure A.6, not every secant line lies above the function:
some lie below it, and some lie both above and below it. Even though we can
draw some secant lines which are above the function, this isn’t enough: every
possible secant must lie above the function. For this concept to make sense, the
domain X of the function must be a convex set, an assumption which applies
for the remainder of this section.
The following definition makes this intuitive notion formal:
Definition A.2. A function f : X → R is convex if, for every x1 , x2 ∈ X and
every λ ∈ [0, 1],

f (λx2 + (1 − λ)x1 ) ≤ λf (x2 ) + (1 − λ)f (x1 ) .
Figure A.6: A nonconvex function does not lie below all of its secants.
Figure A.7: All of the relevant points for the definition of convexity.
can be replaced by a strict inequality <. However, we can’t do this: for example,
if x1 = 1, x2 = 2, λ = 0.5, the left side of the inequality (|1/2 + 2/2| = 3/2) is
exactly equal to the right side (|1/2| + |2/2| = 3/2). So f is not strictly convex.
Note that proving that f (x) is convex requires a general argument, whereas
proving that f (x) is not strictly convex only requires a single counterexample.
This is because the definition of convexity is a “for all” or “for every” type of
argument. To prove convexity, you need an argument that allows for all possible
values of x1 , x2 , and λ, whereas to disprove it you only need to give one set of
values where the necessary condition doesn’t hold.
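The counterexample above can be checked directly in code (assuming, as the computation in the text suggests, that the function under discussion is f(x) = |x|):

```python
f = abs
x1, x2, lam = 1.0, 2.0, 0.5
left = f(lam * x2 + (1 - lam) * x1)        # f applied to the combination
right = lam * f(x2) + (1 - lam) * f(x1)    # the combination of f-values
print(left, right)  # both equal 1.5: convexity holds with equality here,
                    # so f cannot be strictly convex
```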
Example A.6. Show that every linear function f (x) = ax + b, x ∈ R is convex,
but not strictly convex.
Solution. For any x1 , x2 ∈ R and λ ∈ [0, 1],

f (λx2 + (1 − λ)x1 ) = a(λx2 + (1 − λ)x1 ) + b = λ(ax2 + b) + (1 − λ)(ax1 + b)
                     = λf (x2 ) + (1 − λ)f (x1 ) ,

so the defining inequality always holds, and always with equality. Therefore f
is convex but not strictly convex.
Example A.9. Show that f (x) = x2 is strictly convex using Proposition A.9
Proposition A.10. If f and g are convex functions, and α and β are positive
real numbers, then αf + βg is convex as well.
for all x1 , x2 ∈ X.
A.5 Exercises
1. [22] For this exercise, let x11 = 5, x12 = 6, x13 = 7, x21 = 4, x22 = 3, and
x23 = 9. Evaluate each of these sums.
(a) Σ_{i=1}^{2} Σ_{j=1}^{3} x_{ij}
(b) Σ_{i=1}^{2} Σ_{j=2}^{3} x_{ij}
(c) Σ_{j=1}^{3} x_{1j}
(d) Σ_{j=1}^{2} Σ_{i=1}^{3} x_{ji}
(e) Σ_{j=1}^{3} Σ_{k=0}^{1} x_{k+1,j}
(f) Σ_{i=1}^{2} Σ_{j=i}^{3} x_{ij}
2. [22] Repeat Exercise 1, but with products Π instead of sums Σ.
3. [33] Section A.1 lists three properties of the summation notation Σ. For-
mulate and prove analogous properties for the product notation Π. You
can assume that the product involves only a finite number of factors.
7. [26] For each of the sets below, identify its boundary points and indicate
whether or not it is closed, whether or not it is bounded, and whether or
not it is convex.
(a) (5, 10]
(b) [4, 6]
(c) (0, ∞)
(d) {x : |x| > 5}
(e) {x : |x| ≤ 5}
(f) {(x, y) : 0 ≤ x ≤ 4, −3 ≤ y ≤ 3}
(g) {(x, y) : 4x − y = 1}
8. [34] Prove Proposition A.3.
9. [34] Prove Proposition A.4.
10. [53] Identify the projections of the following points on the corresponding
sets. You may find it helpful to draw sketches.
(a) The point x = 3 on the set [0, 1].
(b) The point x = 1/2 on the set [0, 1].
(c) The point (2, 5) on the unit circle x2 + y 2 = 1.
(d) The point (2, 5) on the line x + y = 1.
(e) The point (2, 5) on the line segment between (0, 1) and (1, 0).
(f) The point (2, 3) on the line segment between (0, 1) and (1, 0).
(g) The point (1, 2, 3) on the sphere x2 + y 2 + z 2 = 2.
11. [20] Prove Proposition A.5.
12. [23] Find the inverses of the following functions.
(a) f (x) = 3x + 4
(b) f (x) = 5ex − 1
(c) f (x) = 1/x
13. [57] Prove Proposition A.6 from first principles, using the definitions of
continuity and differentiability in the text.
14. [65] Prove Proposition A.7, using the formal definition of continuity.
15. [22] Calculate the gradients of the following functions:
(a) f (x1 , x2 , x3 ) = x1² + 2x2 x3 + x3²
(b) f (x, y) = xy/(x² + y² + 1)
19. [35] Determine which of the following sets is convex. Justify your answer
rigorously.
(a) X = {(x1 , x2 ) ∈ R2 : 4x1 − 3x2 ≤ 0}
(b) X = {x ∈ R : ex ≤ 4}
(c) X = {(x1 , x2 , x3 ) ∈ R3 : x1 = 0}
(d) X = {(x1 , x2 , x3 ) ∈ R3 : x1 ≠ 0}
(e) X = {1}
20. [31] Show that the integral of an increasing function is a convex function.
21. [54] Show that a differentiable function f of a single variable is convex if
and only if f (x) + f 0 (x)(y − x) ≤ f (y) for all x and y.
22. [54] Show that a twice-differentiable function f of a single variable is
convex if and only if f 00 is everywhere nonnegative.
23. [68] Prove the claims of Exercises 21 and 22 for functions of multiple
variables.
24. [32] Page 460 states that “f 00 (x) > 0 is sufficient for strict convexity
but not necessary.” Give an example of a strictly convex function where
f 00 (x) = 0 at some point.
Appendix B
Optimization Concepts
The notation given in this section is what is traditionally used when referring to a
generic optimization problem outside of a specific context.
(whichever is appropriate for the given problem). For some problems, we may
need to distinguish local and global optima. Section B.6.1 discusses this in
greater detail. In short, we usually want to find a global optimum, but for some
complicated optimization problems a local optimum may be the best we can do.
The following example explains these definitions:
Example B.1. A plant produces two types of concrete (Type 1 and Type 2). The
owner of the plant makes a profit of $90 for each truckload of Type 1 concrete
produced and a profit of $120 for each truckload of Type 2 concrete produced.
Three materials are required to produce concrete: cement, aggregate, and water.
The plant requires 30 units of cement, 50 units of aggregate, and 60 units of
water to produce one truckload of Type 1 concrete. The plant requires 40 units
of cement, 20 units of aggregate, and 90 units of water to produce one truckload
of Type 2 concrete. The plant is supplied 2000 units of cement, 2500 units of
aggregate, and 4000 units of water daily. How many truckloads of Type 1 and
Type 2 concrete should the plant produce to maximize daily profit?
In any optimization problem, the first step is to identify the decision variables
or the variables which can be controlled by the decision maker. In this problem,
the decision variables are the daily truckloads of Type 1 and Type 2 concrete
produced by the plant.
Let x represent the daily truckloads of Type 1 concrete produced by the
plant and y represent the daily truckloads of Type 2 concrete produced by the
plant.
The second step is to identify the constraints or restrictions on the decision
variables. In this problem there are restrictions on the total volume of cement,
aggregate, and water supplied to the plant daily which limits the total amount
of Type 1 and Type 2 concrete which can be produced.
Each truckload of Type 1 concrete requires 30 units of cement. Therefore,
to produce x truckloads of Type 1 concrete requires 30x units of cement. Each
truckload of Type 2 concrete requires 40 units of cement. Therefore, to produce
y truckloads of Type 2 concrete requires 40y units of cement. The plant is
supplied 2000 units of cement daily. Therefore, the constraint on the total
cement consumed by the plant can be written as:

30x + 40y ≤ 2000
The plant is supplied 2500 units of aggregate and 4000 units of water daily.
Along similar lines, the constraints on the total volume of aggregate and water
consumed by the plant can be written as:

50x + 20y ≤ 2500
60x + 90y ≤ 4000
The entire problem can be summarized as below. The following set of equa-
tions represents an optimization formulation or a mathematical programming
formulation for the concrete plant profit-maximization problem:

max_{x,y}  90x + 120y
s.t.  30x + 40y ≤ 2000
      50x + 20y ≤ 2500
      60x + 90y ≤ 4000
      x, y ≥ 0
Here x and y are written underneath max to indicate that these are the
decision variables. In cases where it is obvious what the decision variables are,
we sometimes omit writing them below the max or min in the objective. More
complicated optimization problems might use letters to name things other than
decision variables, and in those cases it is helpful to explicitly write down which
variables the decision maker can affect. The abbreviation “s.t.” stands for
“subject to” or “such that” and indicates the constraints.
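As a sanity check, this formulation can be solved numerically. The sketch below uses `scipy.optimize.linprog` (one of several solvers that could be used); note that `linprog` minimizes, so the profit coefficients are negated.

```python
from scipy.optimize import linprog

# maximize 90x + 120y  <=>  minimize -90x - 120y
c = [-90, -120]
A_ub = [[30, 40],   # cement:    30x + 40y <= 2000
        [50, 20],   # aggregate: 50x + 20y <= 2500
        [60, 90]]   # water:     60x + 90y <= 4000
b_ub = [2000, 2500, 4000]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x
print(f"x = {x:.3f}, y = {y:.3f}, daily profit = {-res.fun:.2f}")
```

In this solution the aggregate and water supplies turn out to be binding, while some cement is left unused.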
The above example has linear objective functions and constraints. Opti-
mization formulations can also have nonlinear functions as shown in the exam-
ple below. The objective in the above example is to maximize the profit. When
the decision maker is concerned with controlling costs rather than profit, the
objective function often involves minimization.
Example B.2. A business has the option of setting up concrete plants at two
locations. The daily cost operating a plant at Location 1 per unit of production
has two components: a fixed cost of 20 and an additional cost which increases
by 0.5 for every unit of production. Similarly, the daily cost of operating a plant
at Location 2 per unit production has a fixed cost of 12 and an additional cost
which increases by 1.2 for every unit of production. The business has committed
to supplying at least 12 units of concrete daily. What is the total amount to be
produced at each location to minimize cost and satisfy demand?
There are two decision variables in this problem: the production in plant
at Location 1 and production in plant at Location 2. Let x denote the units
of production in plant at Location 1 and y represent the units of production in
plant at Location 2.
The business has agreed to supply at least 12 units of concrete daily. There-
fore, the sum of production in plants at Location 1 and Location 2 must be
greater than or equal to 12.
x + y ≥ 12 (B.6)
Also, the production at both plants cannot be negative which is represented
as:
x ≥ 0, y ≥ 0 (B.7)
The daily cost of operating a plant at Location 1 per unit of production
is 20 + 0.5x. Since the plant produces x units of concrete, the total cost of
operating a plant at Location 1 is (20 + 0.5x)x. Similarly, the total cost of
operating a plant at Location 2 is (12 + 1.2y)y. The goal of the business is to
minimize the total cost of operation, which is given as:

min_{x,y}  (20 + 0.5x)x + (12 + 1.2y)y
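Together with constraints (B.6) and (B.7), this is a nonlinear (quadratic) program. A sketch of one way to solve it numerically, using `scipy.optimize.minimize` (the solver choice is ours):

```python
from scipy.optimize import minimize

def total_cost(v):
    x, y = v
    return (20 + 0.5 * x) * x + (12 + 1.2 * y) * y

# x + y >= 12, expressed as a nonnegative-valued constraint function
constraints = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 12}]
res = minimize(total_cost, x0=[6.0, 6.0], bounds=[(0, None), (0, None)],
               constraints=constraints)
print(res.x, res.fun)  # production split between the two locations
```

At the optimum the supply constraint is binding: the business produces exactly 12 units in total.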
Project 1 yields an average return of $10 for each dollar invested. Therefore,
with an investment of $x, the average return is $10x. Project 2 yields an average
return of $12 for each dollar invested. Therefore, with an investment of $y, the
average return is $12y. The total average return on investment is given as
10x + 12y.
The standard deviation of return on project 1 is $5 for each dollar invested
and for project 2 is $7 for each dollar invested. Therefore, the standard deviation
of return on investments is given as √(25x² + 49y²).
The business’s utility is given as: U = µ − ½σ = (10x + 12y) − ½√(25x² + 49y²).
The optimization formulation can be summarized as follows:

max_{x,y}  10x + 12y − 0.5√(25x² + 49y²)
s.t.  x + y ≤ 1000
      x, y ≥ 0
a + b + c + d + e ≥ 300 (B.11)
The business has committed to supplying at least 200 units of window frames
of Type 2 per week. Therefore, the total amount of window frames of Type 2
produced per week must be greater than or equal to 200.
p + q + r + s + t ≥ 200 (B.12)
The Pittsburgh factory can produce a maximum of 100 units of window
frames per week. Therefore, the sum of the total number of window frames
of Type 1 produced per week at Pittsburgh and the total number of window
frames of Type 2 produced per week at Pittsburgh must be less than or equal
to 100.
a + p ≤ 100 (B.13)
The factory at Boston can produce a maximum of 125 units of window frames
per week. Therefore,
b + q ≤ 125 (B.14)
The factory at Austin can produce a maximum of 100 units of window frames
per week. Therefore,
c + r ≤ 100 (B.15)
Along similar lines, the factories at Los Angeles and Miami are constrained
to produce a maximum of 125 and 50 window frames respectively.
d + s ≤ 125 (B.16)
e + t ≤ 50 (B.17)
The number of window frames of both types produced per week at all loca-
tions has to be greater than or equal to zero. The non-negativity constraints
are represented as:
a ≥ 0, b ≥ 0, c ≥ 0, d ≥ 0, e ≥ 0 (B.18)
p ≥ 0, q ≥ 0, r ≥ 0, s ≥ 0, t ≥ 0 (B.19)
The cost of producing a units of window frames of Type 1 and p units of
window frames of Type 2 at Pittsburgh is 10a+40p. Along similar lines, the cost
of producing specific number of window frames of both types at the other four
locations can be determined. The total production cost is the sum of production
costs at each location which is given as: 10a + 10b + 25c + 30d + 30e + 40p +
40q + 15r + 20s + 20t. The objective is to minimize the total production costs.
The optimization formulation can thus be summarized as shown below
min 10a + 10b + 25c + 30d + 30e + 40p + 40q + 15r + 20s + 20t
a,...,e,p,...,t
s.t. a+b+c+d+e ≥ 300
p+q+r+s+t ≥ 200
a+p ≤ 100
b+q ≤ 125
c+r ≤ 100
d+s ≤ 125
e+t ≤ 50
a, b, c, d, e ≥0
p, q, r, s, t ≥0
I = {1, 2, 3, 4, 5} . (B.21)
The above notation can become cumbersome if the number of locations is
high. So a shorter form representation is
I = {1, . . . , 5} . (B.22)
The above notation can be further generalized for any value n denoting the
number of locations as
I = {1, . . . , n} . (B.23)
An index is used to refer to any element of the set. The symbol ∈ represents
“an element of”. Therefore, i ∈ I denotes an index i which is an element of the
set I. Thus when I = {Pittsburgh, Boston, Austin, Los Angeles, Miami}, i can
refer to any of Pittsburgh, Boston, Austin, Los Angeles, or Miami. Indices can
be combined with symbols to represent decision variables and input parameters
in a concise manner.
In the previous example, we used the symbols a, b, c, d, and e to represent
the number of units of window frames of Type 1 to be produced per week at
Pittsburgh, Boston, Austin, Los Angeles, and Miami respectively. Using the set-
index notation, let xi denote the number of units of window frames per week of
Type 1 to be produced at location i ∈ I. Thus a corresponds to xPittsburgh , b
corresponds to xBoston , and so on.
The business has to produce at least 300 units of window frames of Type 1
per week. This constraint can be represented as:

Σ_{i∈I} xi ≥ 300

If the set I = {1, . . . , 5}, then the above constraint can also be represented
using either of the following equations:

Σ_{i=1}^{5} xi ≥ 300 (B.26)

Σ_{1≤i≤5} xi ≥ 300 . (B.27)
Similarly, let yi denote the number of units of window frames per week of
Type 2 to be produced at location i ∈ I. As before, the constraint on minimum
weekly production of Type 2 frames can be written as Σ_{i∈I} yi ≥ 200.
Now, look at the constraints which limit the number of window frames
produced at each location. Consider the case where the city names are writ-
ten explicitly, so I = {Pittsburgh, Boston, Austin, Los Angeles, Miami}. The
constraints are:
xi + yi ≤ ui ∀i ∈ I (B.36)
The above equation denotes that, for each element i ∈ I, the constraint xi + yi ≤ ui
holds, where ui is the maximum number of window frames which can be produced
per week at location i. Similarly the non-negativity constraints can be written as:
xi ≥ 0 ∀i ∈ I (B.37)
yi ≥ 0 ∀i ∈ I (B.38)
When I = {1, . . . , 5}, the production limit at each location and the non-negativity
constraints can also be represented using either of the two following sets of equa-
tions:
xi + yi ≤ ui ∀i = 1, . . . , 5 (B.39)
xi ≥ 0 ∀i = 1, . . . , 5 (B.40)
yi ≥ 0 ∀i = 1, . . . , 5 (B.41)
or:
xi + yi ≤ ui ∀1 ≤ i ≤ 5 (B.42)
xi ≥ 0 ∀1 ≤ i ≤ 5 (B.43)
yi ≥ 0 ∀1 ≤ i ≤ 5 (B.44)
min 10a + 10b + 25c + 30d + 30e + 40p + 40q + 15r + 20s + 20t (B.45)
Let vi represent the cost of producing one window frame of Type 1 at location
i ∈ I and wi represent the cost of producing one window frame of Type 2 at
location i ∈ I. For example vMiami = 30, wMiami = 20. The objective function
can now be succinctly represented as:
min Σ_{i∈I} (vi xi + wi yi ) (B.47)
In addition to making the formulation more compact, the set index notation
also makes the formulation easier to understand, and easier to change (if there
were more cities, all we would have to change is the definition of the set I or
the number 5 to whatever the new number of cities is). Notice also that now it
is important to specify that xi and yi are the decision variables: ui , vi , and wi
now represent given problem data which we cannot change.
Σ_{i∈I} xi ≥ 300 (B.50)

Σ_{i∈I} yi ≥ 200 (B.51)
Using the double index notation, the two equations can be summarized into
one equation as shown below, by introducing bj to be the demand for window
frames of type j ∈ J . For this example, bType 1 = 300 and bType 2 = 200.
Σ_{i∈I} xij ≥ bj  ∀j ∈ J (B.52)
Pay close attention to the two set element references, ∀j ∈ J on the right
hand side and i ∈ I underneath the summation operator. The ∀j ∈ J on the
right hand side ensures that the equation Σ_{i∈I} (·) is repeated for each element
j ∈ J as shown below.
Since the set J has two elements, the equation is repeated twice representing
the demand needing to be met for Type 1 and Type 2 window frames. In the
single index notation xi + yi ≤ ui ∀i ∈ I is used to represent the constraint
on maximum window frames which can be produced at each location. In the
double index notation, the left hand side can be made more compact using a
summation operator as shown below.
Σ_{j∈J} xij ≤ ui  ∀i ∈ I (B.54)
xij ≥ 0 ∀i ∈ I, j ∈ J (B.56)
You are enforcing xij to be ≥ 0 for each element or location i ∈ I as well
as window frame type j ∈ J . The order in which you reference the elements
and sets on the right hand side after ∀ does not matter, i.e., the following two
equations represent the same non-negativity constraints.
xij ≥ 0 ∀i ∈ I, j ∈ J (B.57)
xij ≥ 0 ∀j ∈ J , i ∈ I (B.58)
Using the single index notation, the objective function was represented as
min Σ_{i∈I} (vi xi + wi yi ). In the double index notation the objective function
can be represented as min Σ_{i∈I} (vi xi,Type 1 + wi xi,Type 2 ). We can make the
representation even more compact and more intuitive by defining vij as the cost
of producing one window frame of type j ∈ J at location i ∈ I. The objective
function can then be defined as:
min Σ_{i∈I} Σ_{j∈J} vij xij (B.59)
Note that the order in which we sum the objective function does not matter,
i.e., Σ_{i∈I} Σ_{j∈J} vij xij = Σ_{j∈J} Σ_{i∈I} vij xij (see Section A.1).
The formulation in double index notation is finally:

min_{xij}  Σ_{i=1}^{5} Σ_{j=1}^{2} vij xij
s.t.  Σ_{i∈I} xij ≥ bj  ∀j ∈ J
      Σ_{j∈J} xij ≤ ui  ∀i ∈ I
      xij ≥ 0  ∀i ∈ I, j ∈ J

The non-negativity constraints can equivalently be written as

xij ∈ R+  ∀i ∈ I, j ∈ J (B.60)
In the above equation each decision variable xij is restricted to lie in the
set of non-negative real numbers. If the decision variables can be positive or
negative real numbers or zero, then the above equation can be modified as:
xij ∈ R ∀i ∈ I, j ∈ J (B.61)
The set Z is the set of all integers. The set of all non-negative integers
is commonly represented as Z+ . In certain types of optimization problems, called
integer programs, the decision variables are restricted to be integers or non-
negative integers which can be represented as follows:
xij ∈ Z ∀i ∈ I, j ∈ J (B.62)
xij ∈ Z+ ∀i ∈ I, j ∈ J (B.63)
“such that”. Another way to present this information is using the | symbol:
K(I) = {i ∈ I | i lies on the East Coast}. Given the definition of the subset K(I)
(which can also be represented as KI ), the constraint can be more succinctly
presented as:
Σ_{k∈K(I)} Σ_{j∈J} xkj ≤ 150 (B.65)
The set notation can also be used to represent various mathematical expres-
sions of decision variables in a clean manner. Let I = {1, . . . , 10}. We want to
represent the expression x2 + x4 + x6 + x8 + x10 in a concise manner. One way
to do this would be to define K(I) = {i ∈ I : i is even} and then write:
x2 + x4 + x6 + x8 + x10 = Σ_{k∈K(I)} xk (B.66)

The condition defining the subset can also be written directly underneath the
summation sign, using either : or | as the separator:

x2 + x4 + x6 + x8 + x10 = Σ_{i∈I : i is even} xi = Σ_{i∈I | i is even} xi (B.67)

x3 + x4 + x5 = Σ_{i∈I : 3≤i≤5} xi = Σ_{i∈I | 3≤i≤5} xi (B.68)

x7 + x8 + x9 + x10 = Σ_{i∈I : i≥7} xi = Σ_{i∈I | i≥7} xi (B.69)
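In code, these filtered index sets map naturally onto comprehension syntax. A sketch with illustrative values xᵢ = i (our own choice, purely for demonstration):

```python
I = range(1, 11)              # the index set {1, ..., 10}
x = {i: float(i) for i in I}  # illustrative values: x_i = i

even_sum = sum(x[i] for i in I if i % 2 == 0)  # x2 + x4 + x6 + x8 + x10
mid_sum = sum(x[i] for i in I if 3 <= i <= 5)  # x3 + x4 + x5
tail_sum = sum(x[i] for i in I if i >= 7)      # x7 + x8 + x9 + x10
print(even_sum, mid_sum, tail_sum)             # 30.0 12.0 34.0
```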
Also,

bT x = b1 x1 + b2 x2 + . . . + bn xn = Σ_{i=1}^{n} bi xi (B.73)

is another way to express the dot product of two vectors x and b.
Given the above information, we now reformulate the window frame opti-
mization formulation using vectors and matrices. Let x = (x1 , . . . , x10 ) repre-
sent a column vector of decision variables, i.e.,

x = [x1 , x2 , . . . , x10 ]T . (B.74)
In the vector x, let x1 , x2 , x3 , x4 , and x5 represent the five decision vari-
ables corresponding to the amount of window frame of Type 1 produced at
the five locations and x6 , x7 , x8 , x9 , and x10 represent the five decision vari-
ables corresponding to the amount of window frame of Type 2 produced at the
five locations. Let c = (c1 , . . . , c10 ) represent a column vector of costs where
c1 , c2 , c3 , c4 , and c5 represent the cost of producing one unit of Type 1 window
frame at the five different locations and c6 , c7 , c8 , c9 , and c10 represent the cost
of producing one unit of Type 2 window frame at the five different locations.
c = [c1 , c2 , . . . , c10 ]T = [10, 10, 25, 30, 30, 40, 40, 15, 20, 20]T (B.75)
The objective function in this case is:
min 10x1 +10x2 +25x3 +30x4 +30x5 +40x6 +40x7 +15x8 +20x9 +20x10 (B.76)
Using the cost vector, this can be written compactly as

min cT x = Σ_{i=1}^{10} ci xi (B.77)
The production of window frames of Type 1 must be at least 300, and the
production of window frames of Type 2 must be at least 200.
x1 + x2 + x3 + x4 + x5 ≥ 300 (B.78)
x6 + x7 + x8 + x9 + x10 ≥ 200 (B.79)
Writing b = (300, 200) and letting A be the 2 × 10 matrix whose first row has
ones in the first five components and whose second row has ones in the last five,
these two constraints can be expressed compactly as

Ax ≥ b (B.83)
x1 + x6 ≤ 100 (B.84)
x2 + x7 ≤ 125 (B.85)
x3 + x8 ≤ 100 (B.86)
x4 + x9 ≤ 125 (B.87)
x5 + x10 ≤ 50 (B.88)
U = [ 1 0 0 0 0 1 0 0 0 0
      0 1 0 0 0 0 1 0 0 0
      0 0 1 0 0 0 0 1 0 0     (B.90)
      0 0 0 1 0 0 0 0 1 0
      0 0 0 0 1 0 0 0 0 1 ]
Thus, letting u = (100, 125, 100, 125, 50) be the vector of location capacities,
the requirement that production at each location not exceed its capacity can be
written in a compact manner as:
Ux ≤ u (B.91)
min_x  cT x
s.t.  Ax ≥ b
      Ux ≤ u
      x ≥ 0
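The matrix form is convenient for computation. The sketch below builds c, A, b, U, and u with NumPy and checks one candidate production plan (our own illustrative plan, not necessarily optimal) against the constraints:

```python
import numpy as np

c = np.array([10, 10, 25, 30, 30, 40, 40, 15, 20, 20], dtype=float)
A = np.vstack([np.r_[np.ones(5), np.zeros(5)],   # Type 1 demand row
               np.r_[np.zeros(5), np.ones(5)]])  # Type 2 demand row
b = np.array([300.0, 200.0])
U = np.hstack([np.eye(5), np.eye(5)])            # one capacity row per location
u = np.array([100.0, 125.0, 100.0, 125.0, 50.0])

# a candidate plan: x1..x5 are Type 1 amounts, x6..x10 are Type 2 amounts
x = np.array([100, 125, 75, 0, 0, 0, 0, 25, 125, 50], dtype=float)
feasible = (A @ x >= b).all() and (U @ x <= u).all() and (x >= 0).all()
print(feasible, c @ x)  # True, with total cost c^T x
```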
x23 corresponds to the volume of wooden frames to be sent from Mill 2 to Market
3.
Let ui represent the amount of wooden frames which can be produced at
mill i ∈ I and dj represent the amount of wooden frames needed at market
j ∈ J . For example, u3 = 30 and d4 = 15.
At each mill, the total volume of wooden frames transported must be less
than or equal to the capacity of the mill. For example, at Mill 2:
x21 + x22 + x23 + x24 + x25 ≤ 40 (B.92)
This can be represented generally as:
Σ_{j∈J} xij ≤ ui  ∀i ∈ I (B.93)
Similarly, demand at each market must be met. The demand constraint can
be represented as:

Σ_{i∈I} xij ≥ dj  ∀j ∈ J (B.94)
In addition, the volume of wooden frames transported between mill and
market locations cannot be negative.
xij ≥ 0 ∀i ∈ I, ∀j ∈ J (B.95)
Let cij represent the cost to transport a wooden frame from mill i ∈ I to
location j ∈ J . For example c24 = 27. The objective is to minimize the total
transportation costs which is:
min Σ_{i∈I} Σ_{j∈J} cij xij (B.96)
The final formulation for the transportation problem can be summarized as:
min_{xij}  Σ_{i∈I} Σ_{j∈J} cij xij
s.t.  Σ_{j∈J} xij ≤ ui  ∀i ∈ I
      Σ_{i∈I} xij ≥ dj  ∀j ∈ J
      xij ≥ 0  ∀i ∈ I, ∀j ∈ J
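The transportation problem is itself a linear program and can be solved the same way. Since the full cost, capacity, and demand tables are not reproduced in this section, the data below are illustrative stand-ins, keeping only the values quoted in the text (mill 2 capacity 40, u3 = 30, d4 = 15, c24 = 27); the rest are made up for the sketch.

```python
from scipy.optimize import linprog

mills, markets = range(3), range(5)
u = [45, 40, 30]              # mill capacities; 40 and 30 are from the text
d = [20, 15, 25, 15, 20]      # market demands; d4 = 15 is from the text
cost = [[8, 6, 10, 9, 8],     # unit costs c[i][j]; c24 = 27 is from the text,
        [9, 12, 11, 27, 10],  # everything else is hypothetical
        [14, 9, 16, 5, 7]]

# flatten x_ij into one vector with index k = 5*i + j
c = [cost[i][j] for i in mills for j in markets]
A_ub, b_ub = [], []
for i in mills:               # sum_j x_ij <= u_i
    A_ub.append([1 if k // 5 == i else 0 for k in range(15)])
    b_ub.append(u[i])
for j in markets:             # sum_i x_ij >= d_j  <=>  -sum_i x_ij <= -d_j
    A_ub.append([-1 if k % 5 == j else 0 for k in range(15)])
    b_ub.append(-d[j])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 15)
print(res.fun)                # minimum total transportation cost
```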
Example B.6. The trucks used in transporting wooden frames from Mill 1 are
very old, and the frames may be damaged because of their poor suspension systems.
To prevent this, additional packing material is needed, and the amount depends
on the destination market location. The table below shows the amount of packing
material per frame for each mill and market combination (Mills 2 and 3 have
newer trucks that do not require special packaging.)
Mill Location \ Market   1   2   3   4   5
1                        3   7   3   1   0
2                        0   0   0   0   0
3                        0   0   0   0   0
If Mill 1 has 21 units of packing material available each day, formulate the
problem of meeting demands while minimizing transportation costs.
It turns out that adding the packing material constraint changes the opti-
mization problem in such a way that the optimal solutions may not be integers.
If this condition is important, we must enforce it with an additional constraint,
by replacing
xij ≥ 0 ∀i ∈ I, ∀j ∈ J (B.97)
with
xij ∈ Z+ ∀i ∈ I, ∀j ∈ J (B.98)
The formulation now becomes an integer program. The methods needed
to solve integer programs are different from the methods used to solve linear
programs, and integer programs are much harder to solve. Sometimes, it may be
adequate to solve the problem as a linear program and then convert its optimal
solution to an integral one, say, by rounding — if the values of the decision
variables are in the hundreds or thousands, the effect of rounding is likely small.
However, for some integer programs this can lead to very poor solutions.
We now return to the original transportation problem formulation of Exam-
ple B.5 without the packing material constraint. In that example, the objective
functions and constraints are all linear functions of the decision variables. In
many real world applications, it might not be possible to use linear functions to
represent the objective function or constraints. The next example modifies the
transportation costs to reflect “diseconomies of scale,” where the unit cost of
shipping increases with the quantity (perhaps the most efficient trucks are used
first, but as more and more frames are shipped, you have to start using older
and less fuel-efficient trucks).
Example B.7. Assume now that the unit cost of transporting wooden frames
between mill i ∈ I and market j ∈ J is cij + xij . (For example, if 10 wooden
frames are being transported between mill i ∈ I and location j ∈ J , then the
cost of transporting each frame between the two locations is cij + 10, and the
total transportation cost would be (cij + 10) × 10). Formulate an optimization
problem to meet the demands at the markets while minimizing transportation
costs.
In this formulation, if xij wooden frames are being transported then the total
transportation cost between mill i ∈ I and market j ∈ J is (cij + xij ) × xij .
The constraints are the same, but the objective function now changes:
min   Σi∈I Σj∈J (cij + xij )xij
s.t.  Σj∈J xij ≤ ui    ∀i ∈ I
      Σi∈I xij ≥ dj    ∀j ∈ J
      xij ≥ 0          ∀i ∈ I, ∀j ∈ J    (B.99)

If integer solutions are required, we can again replace (B.99) with

      xij ∈ Z+         ∀i ∈ I, ∀j ∈ J    (B.100)

in the final formulation. This leads to a nonlinear integer program, which
again requires different solution methods.
In the linear variant of the transportation problem, we assume a unit cost
of transportation between each mill and market location. For example, the unit
cost of transportation between mill 2 and market 3 is 11. Therefore, the total
486 APPENDIX B. OPTIMIZATION CONCEPTS
As in Example B.5, let I and J represent the set of potential factory loca-
tions and markets respectively. There are two sets of decision variables. The
first set of decision variables yi takes the value 1 if a facility is located at i ∈ I
and 0 otherwise. For example the values y1 = 1, y2 = 0, and y3 = 1 would mean
that factories are opened at sites 1 and 3. The second set of decision variables
xij represents the volume of demand at market j ∈ J served by a facility at
location i ∈ I.
Let dj represent the demand for wooden doors at market j ∈ J . Therefore
d1 = 35, d2 = 15 and so on. Let ui represent the capacity of the factory at
location i ∈ I. Note that u1 = 40, u2 = 80, u3 = 60.
If a factory is located at i ∈ I, then the total volume of wooden doors
supplied to all markets cannot exceed the production capacity of the factory.
If a factory is not located at i ∈ I, then the total volume of wooden doors
supplied to all markets must be zero. This constraint can be represented as

Σj∈J xij ≤ ui yi    ∀i ∈ I .    (B.101)
Note that when yi = 1, the right hand side becomes ui (total production at
an open factory cannot exceed its capacity). When yi = 0 the right hand side
becomes zero (nothing can be produced at a factory which was never opened).
Similar to the transportation problem, the total volume supplied to each market
location must meet the demand:
Σi∈I xij ≥ dj    ∀j ∈ J .    (B.102)
xij ≥ 0 ∀i ∈ I, ∀j ∈ J (B.103)
yi ∈ {0, 1} ∀i ∈ I (B.104)
The objective function has two cost components: the cost of establishing the
facility and the transportation costs. The expression for the transportation costs
is the same as in the linear transportation problem example. Let fi represent the
cost of establishing a facility at location i ∈ I. The total facility location cost
can be given as Σi∈I fi yi . Therefore the objective function is:

min   Σi∈I fi yi + Σi∈I Σj∈J (cij + xij )xij    (B.105)
Now let us tackle the objective function. Let µi represent the average return
on investment in project i ∈ I. The average return on investment µ is given as:

µ = 5000x1 + 2000x2 + 3500x3 + 2000x4 + 3750x5 = Σi∈I µi xi    (B.107)
We are given the standard deviation of returns σi for each i ∈ I, and the
correlation coefficient ρij = 0.1 for all i ∈ I and j ∈ I. The standard deviation
of the return is given as:

σ = √( 300^2 x1 + . . . + 225^2 x4 + 75^2 x5 + 0.1 × 300 × 150 x1 x2 + . . . )
  = √( Σi∈I σi^2 xi + Σi∈I Σj∈I:j≠i ρij σi σj xi xj )    (B.108)
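Formulas (B.107) and (B.108) can be sketched as plain functions of a 0/1 project-selection vector. The σ values in the usage check below reuse the σ1 = 300 and σ2 = 150 appearing above; the selection vectors themselves are hypothetical.

```python
import math

# Sketch of (B.107) and (B.108): mean and standard deviation of the return
# of a 0/1 project-selection vector x, with a common correlation rho.
def mean_return(mu, x):
    return sum(m * xi for m, xi in zip(mu, x))

def stdev_return(sigma, x, rho=0.1):
    n = len(sigma)
    # Variance terms for selected projects...
    var = sum(sigma[i] ** 2 * x[i] for i in range(n))
    # ...plus covariance terms for each distinct pair of selected projects.
    var += sum(rho * sigma[i] * sigma[j] * x[i] * x[j]
               for i in range(n) for j in range(n) if i != j)
    return math.sqrt(var)
```

For example, selecting two projects with σ1 = 300, σ2 = 150 and ρ = 0.1 gives variance 300^2 + 150^2 + 2(0.1)(300)(150) = 121500.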
Example B.10. (Transit frequency setting.) You are working for a public
transit agency in a city, and must decide the frequency of service on each of the
bus routes. The bus routes are known and cannot change, but you can change
how the city’s bus fleet is allocated to each of these routes. (The more buses
assigned to a route, the higher the frequency of service.) Knowing the ridership
on each route, how should buses be allocated to routes to minimize the total
waiting time?
dispersed throughout the time period we are modeling, and assuming that each
bus is always in use, then the headway on route r will be the time required
to traverse this route (Tr ) divided by the number of buses nr assigned to this
route. So, the average delay per passenger is half of the headway, or Tr /(2nr ),
and the total passenger delay on this route is (dr Tr )/(2nr ). This leads us to the
objective function
D(n) = Σr∈R dr Tr /(2nr )    (B.109)
in which the total delay is calculated by summing the delay associated with each
route.
What constraints do we have? Surely we must run at least one bus on each
route (or else we would essentially be canceling a route), and in reality as a
matter of policy there may be some lower limit on the number of buses assigned
to each route; for route r, call this lower bound Lr . Likewise, there is some
upper bound Ur on the number of buses assigned to each route as well. So, we
can introduce the constraint Lr ≤ nr ≤ Ur for each route r.
Putting all of these together, we have the optimization problem

min   D(n)
s.t.  nr ≥ Lr    ∀r ∈ R
      nr ≤ Ur    ∀r ∈ R
      Σr∈R nr ≤ N

where N is the total number of buses in the fleet.
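Since each route's delay term dr Tr /(2nr ) is convex and decreasing in nr , the integer version of this allocation problem can be attacked by greedy marginal allocation, a classical approach for separable convex allocation problems. The ridership, travel time, bound, and fleet-size values below are invented for illustration.

```python
# Sketch of the bus-allocation problem with made-up d_r, T_r, L_r, U_r, N.
def total_delay(n, d, T):
    """Total passenger delay, as in (B.109)."""
    return sum(d[r] * T[r] / (2 * n[r]) for r in range(len(n)))

def allocate(d, T, L, U, N):
    # Start each route at its lower bound, then repeatedly give the next bus
    # to the route where it reduces delay the most. Because each route's
    # delay term is convex in n_r, this greedy rule is optimal for the
    # integer-restricted problem.
    n = list(L)
    for _ in range(N - sum(L)):
        best, gain = None, 0.0
        for r in range(len(n)):
            if n[r] < U[r]:
                g = d[r] * T[r] / (2 * n[r]) - d[r] * T[r] / (2 * (n[r] + 1))
                if g > gain:
                    best, gain = r, g
        if best is None:
            break            # all routes at their upper bounds
        n[best] += 1
    return n

n_star = allocate([1200, 800], [60, 40], [1, 1], [10, 10], 8)
```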
Example B.11. (Scheduling maintenance.) You are responsible for schedul-
ing routine maintenance on a set of transportation facilities (such as pavement
sections or bridges.) The state of these facilities can be described by a condi-
tion index which ranges from 0 to 100. Each facility deteriorates at a known,
constant rate (which may differ between facilities). If you perform maintenance
during a given year, its condition will improve by a known amount. Given an
annual budget for your agency, when and where should you perform maintenance
to maximize the average condition of these facilities? You have a 10-year
planning horizon.
Solution. In contrast to the previous example, where the three compo-
nents of the optimization problem were described independently, from here on
problems will be formulated in a more organic way, describing a model built
from the ground up. (This is how optimization models are usually described in
practice.) After describing the model in this way, we will identify the objective
function, decision variables, and constraints to write the optimization problem
in the usual form. We start by introducing notation based on the problem
statement.
Let F be the set of facilities, and let c_f^t be the condition of facility f at the
end3 of year t, where t ranges from 1 to 10. Let d_f be the annual deterioration on
3 This word is intentionally emphasized. In this kind of problem it is very easy to get
B.5. MORE EXAMPLES OF OPTIMIZATION FORMULATIONS 491
facility f , and i_f the amount by which the condition will improve if maintenance
is performed. So, if no maintenance is performed on facility f during year t, then

c_f^t = c_f^{t−1} − d_f    ∀f ∈ F, t ∈ {1, 2, . . . , 10}    (B.110)

while if maintenance is performed during year t, then

c_f^t = c_f^{t−1} − d_f + i_f    ∀f ∈ F, t ∈ {1, 2, . . . , 10}    (B.111)
Both of these cases can be captured in one equation with the following trick:
let x_f^t equal one if maintenance is performed on facility f during year t, and 0
if not. Then

c_f^t = c_f^{t−1} − d_f + i_f x_f^t    ∀f ∈ F, t ∈ {1, 2, . . . , 10}    (B.112)
Finally, the condition can never exceed 100 or fall below 0, so the full equation
for the evolution of the state is

c_f^t = { 100                            if c_f^{t−1} − d_f + i_f x_f^t > 100
          0                              if c_f^{t−1} − d_f + i_f x_f^t < 0
          c_f^{t−1} − d_f + i_f x_f^t    otherwise }    (B.113)

for all f ∈ F and t ∈ {1, 2, . . . , 10}. Of course, for this to be usable we need to
know the current conditions of the facilities, c_f^0 .
The annual budget can be represented this way: let k_f be the cost of per-
forming maintenance on facility f , and B^t the budget available in year t. Then

Σf∈F k_f x_f^t ≤ B^t    ∀t ∈ {1, . . . , 10} .    (B.114)
confused about what occurs at the start of period t, at the end of period t, during the middle
of period t, etc.
In this example, pay close attention to the use of formulas like (B.114), which
show up very frequently in optimization. It is important to make sure that every
“index” variable in the formula is accounted for in some way. Equation (B.114)
involves the variables x_f^t , but for which facilities f and time periods t? The
facility index f is summed over, while the time index t is shown at right as
∀t ∈ {1, . . . , 10}. This means that a copy of (B.114) exists for each time period,
and in each of these copies, the left-hand side involves a sum over all facilities at
that time. Therefore the one line (B.114) actually includes 10 constraints, one
for each year. It is common to forget to include things like ∀t ∈ {1, . . . , 10}, or to
try to use an index of summation outside of the sum as well (e.g., an expression
like B_f − Σf∈F k_f x_f^t ), which is meaningless. Make sure that all of your indices
are properly accounted for!
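As a quick check of the pieces above, the state evolution (B.113) and the budget constraint (B.114) can be sketched directly; the numbers used to exercise the functions are hypothetical.

```python
# Sketch of the condition update (B.113) and the budget check (B.114).
def evolve(c_prev, d_f, i_f, x):
    """One year's condition update for one facility; x is 1 if maintenance
    is performed that year and 0 if not. Condition is clamped to [0, 100]."""
    return max(0.0, min(100.0, c_prev - d_f + i_f * x))

def within_budget(x, k, B):
    """Check sum_f k_f x_f^t <= B^t for every year t, where x[f][t] is 0/1,
    k[f] is the maintenance cost, and B[t] is the year-t budget."""
    return all(sum(k[f] * x[f][t] for f in range(len(k))) <= B[t]
               for t in range(len(B)))
```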
In this next example, the objective function is less obvious.
Let P be the set of customers, and let Hp denote the intersection that is the
home location of customer p.
How can we calculate the walking distance between two intersections (say,
i and j)? Figure B.1 shows a coordinate system superimposed on the grid.
Let x(i) and y(i) be the coordinates of intersection i in this system. Then the
walking distance between points i and j is

d(i, j) = |x(i) − x(j)| + |y(i) − y(j)|    (B.115)

This is often called the Manhattan distance between two points, after one of the
densest grid networks in the world.
So what is the walking distance D(p) for customer p? The distance from
p to the first terminal is d(Hp , L1 ), to the second terminal is d(Hp , L2 ), and
to the third is d(Hp , L3 ). The passenger will walk to whichever is closest, so
D(p, L1 , L2 , L3 ) = min{d(Hp , L1 ), d(Hp , L2 ), d(Hp , L3 )}, and the total walking
distance is Σp∈P D(p, L1 , L2 , L3 ).
For this problem, the decision variables and constraints are straightforward:
the only decision variables are L1 , L2 , and L3 and the only constraint is that
these need to be integers between 1 and I. The tricky part is the objective
function: we are instructed both to minimize total cost as well as total walking
distance. We have equations for each of these, but we can only have one objective
function. In these cases, it is common to form a convex combination of the two
objectives, introducing a weighting parameter λ ∈ [0, 1]. That is, let
f (L1 , L2 , L3 ) = λ[C(L1 ) + C(L2 ) + C(L3 )] + (1 − λ) Σp∈P D(p, L1 , L2 , L3 )    (B.116)
Look at what happens as λ varies. If λ = 1, then the objective function
reduces to simply minimizing the cost of construction. If λ = 0, the objective
function is simply minimizing the total walking distance. For a value in between
0 and 1, the objective function is a weighted combination of these two objectives,
where λ indicates how important the cost of construction is relative to the
walking distance.
For concreteness, the optimization problem is

min   λ[C(L1 ) + C(L2 ) + C(L3 )] + (1 − λ) Σp∈P D(p, L1 , L2 , L3 )
s.t.  Lf ∈ {1, 2, . . . , I}    ∀f ∈ {1, 2, 3}

where the minimization is over L1 , L2 , L3 .
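The Manhattan distance and the combined objective (B.116) can be sketched as follows. The grid coordinates, home locations, and constant construction cost in the usage check are invented for illustration.

```python
# Sketch of the Manhattan distance and the weighted objective (B.116).
# Intersections are represented directly by their (x, y) grid coordinates.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def objective(terminals, homes, C, lam):
    """lam * (construction cost) + (1 - lam) * (total walking distance)."""
    walk = sum(min(manhattan(h, t) for t in terminals) for h in homes)
    cost = sum(C(t) for t in terminals)
    return lam * cost + (1 - lam) * walk
```

Setting lam = 1 recovers the pure construction-cost objective and lam = 0 the pure walking-distance objective, exactly as described in the text.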
In the final example in this section, finding a mathematical representation
of a solution is more challenging.
Example B.13. (Shortest path problem.) Figure B.2 shows a road network,
with the origin and destination marked. Given the travel time on each roadway
link, what is the fastest route connecting the origin to the destination?
Solution. We’ve already presented some algorithms for solving this prob-
lem in Section 2.4, but here we show how it can be placed into the general
framework of optimization problems.
Notation first: let the set of nodes be N , and let r and s represent the origin
and destination nodes. Let the set of links be A, and let tij be the travel time
on link (i, j). So far, so good, but how do we represent a route connecting two
intersections?
Following Example B.11, introduce binary variables xij ∈ {0, 1}, where xij =
1 if link (i, j) is part of the route, and xij = 0 if link (i, j) is not part of the
route. The travel time of a route is simply the sum of the travel times of the
links in the route, which is Σ(i,j)∈A tij xij .
We now have an objective function and decision variables, but what of the
constraints? Besides the trivial constraint xij ∈ {0, 1}, we need constraints
which require that the xij values actually form a contiguous path which starts
at the origin r and ends at the destination s. We do this by introducing a flow
conservation constraint at each intersection. For node i, recall that Γ(i) denotes
the set of links which leave node i, and Γ−1 (i) denotes the set of links which
enter node i.
Consider any contiguous path connecting intersection r and s, and examine
any node i. One of four cases must hold:
Case I: Node i does not lie on the path at all. Then all of the xij values
should be zero for (i, j) ∈ Γ(i) ∪ Γ−1 (i).
Case II: Node i lies on the path, but is not the origin or destination. Then
xij = 1 for exactly one (i, j) ∈ Γ(i), and for exactly one (i, j) ∈ Γ−1 (i).
Case III: Node i is the origin r. Then all xij values should be zero for (i, j) ∈
Γ−1 (i), and xij = 1 for exactly one (i, j) ∈ Γ(i).
Case IV: Node i is the destination s. Then all xij values should be zero for
(i, j) ∈ Γ(i), and xij = 1 for exactly one (i, j) ∈ Γ−1 (i).
B.6. GENERAL PROPERTIES OF OPTIMIZATION PROBLEMS 495
An elegant way to combine these cases is to look at the difference
Σ(i,j)∈Γ(i) xij − Σ(h,i)∈Γ−1(i) xhi . For cases I and II, this difference will be 0; for
case III, the difference will be +1; and for case IV, the difference will be −1.
So, this leads us to the optimization formulation of the shortest path problem:

min   Σ(i,j)∈A tij xij
s.t.  Σ(i,j)∈Γ(i) xij − Σ(h,i)∈Γ−1(i) xhi = { 1 if i = r; −1 if i = s; 0 otherwise }    ∀i ∈ N
      xij ∈ {0, 1}    ∀(i, j) ∈ A
      (B.117)
A careful reader will notice that if the four cases are satisfied for a solution,
then the equations (B.117) are satisfied, but the reverse may not be true. Can
you see why, and is that a problem?
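One way to explore this closing question is to check the conservation constraints of (B.117) directly on candidate solutions; the small networks below are invented. Notice that the second check passes even though the chosen links form a path plus a disjoint cycle, which is exactly why a solution of (B.117) need not be a single contiguous path (though with positive travel times, an extra cycle only adds cost, so it never appears in an optimal solution).

```python
# Sketch: checking the flow-conservation constraints of (B.117).
# x maps each link (i, j) to its value x_ij in {0, 1}.
def satisfies_conservation(nodes, x, r, s):
    for i in nodes:
        outflow = sum(v for (a, b), v in x.items() if a == i)
        inflow = sum(v for (a, b), v in x.items() if b == i)
        required = 1 if i == r else (-1 if i == s else 0)
        if outflow - inflow != required:
            return False
    return True
```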
There is often more than one way to formulate a problem: for instance,
we might choose to minimize congestion by spending money on capacity im-
provements, subject to a budget constraint. Or, we might try to minimize the
amount of money spent, subject to a maximum acceptable limit on congestion.
Choosing the “correct” formulation in this case may be based on which of the
two constraints is harder to adjust (is the budget constraint more fixed, or the
upper limit on congestion?). Still, you may be troubled by this seeming impre-
cision. This is one way in which modeling is "as much art as science," which
is not surprising: the human and political decision-making processes that
optimization tries to formalize, along with their inherent value judgments (what
truly is the objective?), are not as precise as they seem on the surface. One
hallmark of a mature practitioner of optimization is a flexibility with different
formulations of the same underlying problem, and a willingness to engage in
“back-and-forth” with the decision maker as together you identify the best for-
mulation for a particular scenario. In fact, in some cases it may not matter.
The theory of duality (which is beyond the scope of this book) shows that these
alternate formulations sometimes lead to the same ultimate decision, which is
comforting.
min   f (x)
s.t.  gi (x) ≥ bi    ∀i = 1, 2, . . . , m
      hi (x) = 0     ∀i = 1, 2, . . . , l

A feasible solution x∗ is a local minimum if there is some ball B centered at x∗
for which

f (x) ≥ f (x∗ )    ∀x ∈ B ∩ X

and a global minimum if

f (x) ≥ f (x∗ )    ∀x ∈ X
Strict local and global minima are obtained by replacing the inequalities in
the above definitions with strict inequalities:
Proof. By Proposition B.1, with no loss of generality we can assume that the
optimization problem is a minimization problem and that x̂ is a global minimum.
Therefore f (x̂) ≤ f (x) for all x ∈ X. We can add the constant b to both sides
of this inequality, so f (x̂) + b ≤ f (x) + b for all x ∈ X. Therefore x̂ is also a
global minimum (and thus optimal) when the objective function is f (x) + b.
Practically speaking, if you are in a situation like the ones described above,
you should take another look at your optimization problem: perhaps one con-
straint can be relaxed (maybe penalized in the objective, rather than strictly
excluding solutions), or perhaps you missed a constraint that should have been
there. Systems with unbounded or truly infeasible solutions are rare. This
subsection provides a mathematical perspective on the topic, giving conditions
on the constraints and objective function which can guarantee existence of a
solution.
Keeping with our standard convention, we consider a problem of the form
“minimize f (x) subject to x ∈ X,” where the set X contains all solutions satis-
fying all constraints. Weierstrass' Theorem gives sufficient conditions under
which an optimal solution exists.
Theorem B.1. (Weierstrass’ Theorem.) Let f be a continuous, real-valued
function defined on X, and let X be a non-empty, closed, and bounded set. The
optimization problem {min f (x) : x ∈ X} has a minimum solution.
(For definitions of the terms in this theorem, see Appendix A.)
Example B.14. Consider the optimization formulation min(x − 5)2 , x ∈ X =
{x : x ∈ R, 3 ≤ x ≤ 7}. Does a minimum exist?
Solution. The function (x−5)2 is continuous and real valued. The feasible
region is non-empty, closed, and bounded. Therefore, the above optimization
problem has a minimum. By plotting the function, we can see that the minimum
occurs at x = 5.
Example B.15. Consider the optimization formulation min(x − 5)2 , x ∈ X =
{x : x ∈ R, 5 < x ≤ 7}. Does a minimum exist?
Solution. In the above variant, the feasible region is not closed. Given any
y ∈ X, we will be able to find another z ∈ X, such that f (z) < f (y). For
example, if we pick y = 5.001, we can always find another z = 5.0001 whose
objective function value is smaller. Therefore, no minimum exists in this case.
Note that Weierstrass' Theorem provides sufficient, not necessary, conditions
(more on this in the next subsection). Continuity of the objective and a closed,
bounded feasible region are not required for minima to exist: we can have
situations where minima exist without these conditions being satisfied.
Example B.16. Consider the optimization formulation min(x − 5)2 , x ∈ X =
{x : x ∈ R, x > 3}. Does a minimum exist?
Solution. In the above optimization problem, the feasible region is not
closed or bounded. However, we do know that the minimum exists at x = 5.
Example B.17. Consider the objective function
f (x) = { 1 if x = 0; 5 otherwise }
The feasible region is X = {x : x ∈ R}. Does a minimum exist?
B.7 Exercises
1. [11] Why is it generally a bad idea to use strict inequalities (‘>’, ‘<’) in
mathematical programs?
3. [24] For the following functions, identify all stationary points and global
optima.
5. [30] Can you ever improve the value of the objective function at the opti-
mal solution by adding constraints to a problem? If yes, give an example.
If no, explain why not.
7. [27] Write out the objective and all of the constraints for the maintenance
scheduling problem of Example B.11, using the bridge data shown in Ta-
ble B.1, where the maintenance cost is expressed in millions of dollars.
Assume a two-year time horizon and an annual budget of $5 million.
9. [53] You are responsible for allocating bridge maintenance funding for a
state, and must develop optimization models to assist with this for the
current year. Political realities and a strict budget will require you to
develop multiple formulations for the purposes of comparison.
Let B denote the set of bridges in the state. There is an “economic value”
associated with the condition of each bridge: the higher the condition, the
higher the value to the state, because bridges in worse condition require
more routine maintenance, impose higher costs on drivers who must drive
slower or put up with potholes, and carry a higher risk of unforeseen fail-
ures requiring emergency maintenance. To represent this, there is a value
11. [57] In an effort to fight rising maintenance costs and congestion, a state
transportation agency is considering a toll on a certain freeway segment
during the peak period. Imposing a toll accomplishes two objectives at
once: it raises money for the state, and also reduces congestion by making
some people switch to less congested routes, travel earlier or later, carpool,
take the bus, and so forth. Suppose that there are 10000 people who would
want to drive on the freeway if there was no toll and no congestion, but
the number of people who actually do is given by
x = 10000 e^((15 − 6τ − t)/500)
where τ is the roadway toll (in dollars) and t is the travel time (in minutes).
(That is, the higher the toll, or the higher the travel time, the fewer people
drive.) The travel time, in turn, is given by
t = 15 [1 + 0.15 (x/c)^4 ]
minutes, where c = 8000 veh/hr is the roadway capacity during rush
hour. Regulations prohibit tolls exceeding $10 at any time. Citizens are
unhappy with both congestion and having to pay tolls. After conducting
multiple surveys, the agency has determined that citizen satisfaction can
be quantified as
s = 100 − t − τ /5
(a) Formulate two nonlinear programs based on this information, where
the objectives are to either (a) maximize revenue or (b) maximize
citizen satisfaction.
(b) Solve both of these problems, reporting the optimal values for the
decision variables and the objective function.
(c) Comment on the two solutions. (e.g., do they give a similar toll value,
or very different ones? How much revenue does the state give up in
order to maximize satisfaction?)
(d) Name at least two assumptions that went into formulating this prob-
lem. Do you think they are realistic? Pick one of them, and explain
how the problem might be changed to eliminate that assumption and
make it more realistic.
12. [59] You are asked to design a traffic signal timing at the intersection of
8th & Grand. Assuming a simple two-phase cycle (where Grand Avenue
moves in phase 1, and 8th Street in phase 2), no lost time when the
signal changes, and ignoring turning movements, the total delay at the
intersection can be written as
λ1 (c − g1 )^2 / [2c(1 − λ1 /µ1 )] + λ2 (c − g2 )^2 / [2c(1 − λ2 /µ2 )]
where g1 and g2 are the effective green time allotted to Grand Avenue and
8th Street, c = g1 + g2 is the cycle length, λ1 and µ1 are the arrival rate
and saturation flow for Grand Avenue, and λ2 and µ2 are the arrival rate
and saturation flow for 8th Street.
All signals downtown are given a sixty-second cycle length to foster good
progression, and the arrival rate and saturation flow are 2200 veh/hr and
3600 veh/hr for Grand Avenue, respectively, and 300 veh/hr and 1900
veh/hr for 8th Street. Furthermore, no queues can remain at the end of
the green interval; this means that µi gi must be at least as large as λi c
for each approach i.
(a) Why does the constraint µi gi ≥ λi c imply that no queues will remain
after a green interval?
(b) Formulate a nonlinear program to minimize total delay.
(c) Simplify the nonlinear program so there is only one decision variable,
and solve this nonlinear program using the bisection method of Sec-
tion 3.3.2 (terminate when b − a ≤ 1).
(d) Write code to automate this process, and perform a sensitivity analysis
by plotting the effective green time on Grand Ave as λ1 varies from
500 veh/hr to 3000 veh/hr, in increments of 500 veh/hr. Interpret
your plot.
(e) Identify two assumptions in the above model. Pick one of them, and
describe how you would change your model to relax that assumption.
13. [26] Find the global minima of the following functions in two ways: the
bisection method, and using Newton’s method to directly find a stationary
point (if the Newton step leaves the feasible region, move to the boundary
point closest to where Newton’s method would go). Run each method
for five iterations, and see which is closer to the optimum, making the
comparison based on the value of the objective function at the final points.
(a) f (x) = − arctan x, x ∈ [0, 10]
(b) f (x) = x sin(1/(100x)), x ∈ [0.015, 0.04]
(c) f (x) = x3 , x ∈ [5, 15]
14. [51] Give a network and a feasible solution x to the mathematical formu-
lation of the shortest path problem in Example B.13 where the links with
xij = 1 do not form a contiguous path between r and s, as alluded to at
the end of the example.
Appendix C
Optimization Techniques
Step 4: Iterate. Increase the counter k by 1 and check the termination
criterion. If bk − ak < ε, then terminate; otherwise, return to step 1.
Example C.1. Find the minimum of the function f (x) = (x − 1)^2 + e^x in the
interval [0, 2], within a tolerance of ε = 0.01.
Solution. In the initialization phase, we set k = 0, a0 = 0, b0 = 2, ε = 0.01,
and
c0 = a0 + θ(b0 − a0 ) = 0.764
d0 = b0 − θ(b0 − a0 ) = 1.236
We now proceed to step 1. Since f (c0 ) = 2.2025 < f (d0 ) = 3.4975, we decide
to eliminate the upper end and perform step 3.

a1 = a0 = 0,    b1 = d0 = 1.236
c1 = a1 + θ(b1 − a1 ) = 0.4722,    d1 = c0 = 0.764
The interval is still wider than the tolerance ε, so we return to the first step.
Now, f (c1 ) = 1.882 < f (d1 ) = 2.2025, so we again eliminate the upper end by
performing step 3.

a2 = a1 = 0,    b2 = d1 = 0.764
c2 = a2 + θ(b2 − a2 ) = 0.2918,    d2 = c1 = 0.4722
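The hand calculations above can be automated. This sketch implements the interval-reduction loop for f (x) = (x − 1)^2 + e^x on [0, 2], with θ = (3 − √5)/2 ≈ 0.382 as in the example.

```python
import math

# Sketch of golden section search, matching the hand iterations above.
def golden_section(f, a, b, eps=0.01):
    theta = (3 - math.sqrt(5)) / 2           # ≈ 0.381966
    c = a + theta * (b - a)
    d = b - theta * (b - a)
    while b - a >= eps:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c                      # old c becomes the new d
            c = a + theta * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d                      # old d becomes the new c
            d = b - theta * (b - a)
    return (a + b) / 2

f = lambda x: (x - 1) ** 2 + math.exp(x)
x_star = golden_section(f, 0, 2)
```

For simplicity this sketch re-evaluates f(c) and f(d) each pass; a production version would reuse the function value carried over from the previous iteration, which is the whole point of the golden ratio spacing.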
Example C.2. Find the minimum of the function f (x) = 1 + e−x sin(−x) in
the interval [0, 3] with a tolerance of 0.01.
use it in several ways; and several of the algorithms mentioned in the main text
make use of it as well.
Here we are specifically concerned with solving a one-dimensional optimiza-
tion problem with an interval constraint. If the objective function f is convex,
then it is enough to find a point where the derivative f ′ vanishes. So, we simply
apply Newton's method to the derivative, with g ≡ f ′ , to try to find x̂ such that
f ′ (x̂) = 0. Newton's method uses the derivative of g, which ends up being the
second derivative f ″ . (If f is not twice-differentiable, Newton's method cannot
be applied.) We have to make one minor modification to Newton’s method: the
line search cannot leave the feasible region [a, b], so we truncate the search at
these boundary points. This can actually be helpful, since it prevents Newton’s
method from diverging. There are still cases where Newton’s method can fail;
an example is given in Example C.5 below.
Unlike bisection or golden section, Newton’s method is not an “interval re-
duction” method, where we gradually shrink the range of possible values where
the optimum can lie. So we need a different way to measure convergence. It
is common to stop when f ′ is “close enough” to zero; we will let ε′ denote this
value.
The steps of Newton’s method for line search are:
Step 0: Initialize. Set the iteration counter k = 0, and initialize x0 to any
point in [a, b]. (If you have a good guess as to the minimum point, it can
greatly speed things up.)
Step 1: Check convergence. If |f ′ (xk )| < ε′ , then terminate.
Step 2: Calculate recommended shift. Create a candidate point x̃ = xk −
f ′ (xk )/f ″ (xk ).
Step 3: Ensure feasibility. Project candidate onto the feasible region xk+1 =
proj[a,b] (x̃) by setting xk+1 = a if x̃ < a, xk+1 = b if x̃ > b, and xk+1 = x̃
otherwise.
Step 4: Iterate. Increase k by 1, and return to step 1.
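The four steps can be sketched as follows, here applied to f (x) = (x − 1)^2 + e^x on [0, 2]; starting from x0 = 1, two Newton steps land near 0.3174.

```python
import math

# Sketch of the projected Newton line search described by steps 0-4 above.
def newton_line_search(fp, fpp, a, b, x0, eps=0.0337, max_iter=50):
    x = x0
    for _ in range(max_iter):
        if abs(fp(x)) < eps:            # Step 1: convergence check
            break
        cand = x - fp(x) / fpp(x)       # Step 2: Newton step on f'
        x = min(b, max(a, cand))        # Step 3: project onto [a, b]
    return x                            # Step 4 is the loop itself

fp = lambda x: 2 * (x - 1) + math.exp(x)   # f'
fpp = lambda x: 2 + math.exp(x)            # f''
x_star = newton_line_search(fp, fpp, 0, 2, 1.0)
```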
Example C.3. Find the minimum of the function f (x) = (x − 1)^2 + e^x in
the interval [0, 2] using Newton's method, with ε′ = 0.0337. (In this and the
next example, ε′ is chosen to make the tolerance comparable to the value used
for bisection and golden section; for this function, when |f ′ (x)| < 0.0337, x is
within 0.01 of its optimal value.)
Solution. Start by computing the formulas for the first and second
derivative of f , since we will be using these often: f ′ (x) = 2(x − 1) + e^x ,
and f ″ (x) = 2 + e^x .
In the initialization phase, we set k = 0. For an initial guess, choose x0 = 1.
(This makes for a fair comparison, since this is the starting point for bisection.)
The first and second derivatives are equal to e and e+2, respectively, so the new
candidate point is x̃ = 1 − e/(e + 2) = 0.4239. This lies within the boundary
[0, 2], so we accept the candidate point as the next solution: x1 = 0.4239.
At this new point, the first and second derivatives equal 0.3756 and 3.528, so
the next candidate is 0.4239 − 0.3756/3.528 = 0.3174. We accept the candidate
as the new point, so x2 = 0.3174. At x2 , the derivative is f ′ (x2 ) = 0.0084 < ε′ ,
so we terminate and report 0.3174 as the optimal solution.
Notice that Newton’s method achieved in only two iterations the level of
precision bisection reached in eight, and golden section reached in thirteen! At
this point, the solution given by Newton’s method differs from the true optimum
x by roughly 2 × 10−3 . One more iteration of Newton’s method would reduce
this error to 1 × 10−6 , and yet another would reduce it to 3 × 10−13 . This is
what we mean when we say its convergence rate is miraculous!
Example C.4. Find the minimum of the function f (x) = 1 + e^(−x) sin(−x)
in the interval [0, 3] using Newton's method, with ε′ = 0.00645. (Again, this
choice of ε′ ensures that when Newton's method terminates, x is within 0.01 of
its optimal value.)
Solution. Start by computing the formulas for the first and second deriva-
tive of f , since we will be using these often: f ′ (x) = e^(−x) (sin x − cos x), and
f ″ (x) = 2e^(−x) cos x.
In the initialization phase, we set k = 0. For an initial guess, choose x0 = 1.5.
(This makes for a fair comparison, since this is the starting point for bisection.)
The first and second derivatives are equal to 0.2068 and 0.03157, respectively, so
the new candidate point is x̃ = 1.5 − 0.2068/0.03157 = −5.051. This is outside
of the feasible interval [0, 3], so we project the candidate back onto the feasible
region, choosing x1 = 0.
At this new point, the first and second derivatives equal −1 and 2, so the
next candidate is 0 − (−1)/2 = 0.5. We accept the candidate as the new point,
so x2 = 0.5. Another two iterations are needed: x3 = 0.5 − (−0.2415)/(1.065) =
C.2. LINEAR PROGRAMMING 511
[Figure C.2: the feasible region defined by x + y ≤ 9, 11x + 3y ≥ 21, and
6x + 20y ≥ 39, with objective Z = 10x + 26y; the optimal solution (1.5, 1.5)
lies at a corner.]
This point happens to lie at a “corner” of the feasible region. This is not
a coincidence. If a linear program has an optimal solution, and if its feasible
region has a corner, then there is an optimal solution that lies at such a corner
point. As a result, we can usually confine our attention to the corner points, of
which there are only a finite number. In this particular case, the corner point
feasible solutions are (0, 9), (0, 7), (1.5, 1.5), (6.5, 0) and (9, 0). Checking each
in turn, we see that the objective function is minimized at (1.5, 1.5), so this is
the optimal solution.
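The corner-point check is simple to carry out in code, using the corner coordinates listed above:

```python
# Sketch: evaluating Z = 10x + 26y at each corner point of the feasible
# region and keeping the smallest.
corners = [(0, 9), (0, 7), (1.5, 1.5), (6.5, 0), (9, 0)]
Z = lambda x, y: 10 * x + 26 * y
best = min(corners, key=lambda p: Z(*p))
```

This finds the corner (1.5, 1.5) with objective value 54, matching the graphical solution.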
This provides us with a method for solving a linear program, if the linear
program has an optimal solution and its feasible region has at least one corner.
For small problems this can be done graphically, as in Figure C.2. For larger
problems, it is difficult to identify all the corner points by inspection, and a
different method is needed.
This intuitive description is enough for our purposes; for readers wanting a
more technical definition of a “corner,” we can define it as a point that is not the
midpoint of any line segment contained in the feasible region. These points are
also known as vertices, or extreme points. A point along an edge of the feasible
region can be drawn as the midpoint of a line segment drawn along this edge;
a point in the interior of the feasible region is the midpoint of many different
line segments. But a line segment drawn in the feasible region can only have a
corner as one of its endpoints, never the midpoint.
Why do we need the caveats “if a linear program has an optimal solution”
and “if its feasible region has a corner”? Consider the examples below. Modify
the constraints so the optimization problem has the following form:

min 10x + 26y
s.t. x + y ≤ 9
y ≤ 5
y ≥ 0

This produces the feasible region shown in Figure C.3. There is no restriction
on how small x can be, and as x tends to −∞, so does the objective function.
Therefore there is no optimal solution; the problem is said to be unbounded
since there is no limit to how much the objective function can be minimized.
(The same would hold true for a maximization problem if it is possible for the
objective function to be made arbitrarily large.)
[Figure C.3: An unbounded feasible region: constraints y ≤ 5, x + y ≤ 9, and y ≥ 0, with objective Z = 10x + 26y. The region extends without limit as x → −∞.]
There is no solution which obeys all the constraints (see Figure C.4). The
constraints x + y ≤ 9 and y ≥ 0 mean that x cannot be greater than 9; however
there is also the constraint x ≥ 9.5. Such a problem is called infeasible, because
there is not even a feasible solution (let alone an optimal one). As a rule of
thumb, unbounded or infeasible problems often mean that you have missed
something in your formulation. In the real world, it is not possible to produce
“infinitely good” solutions (surely there is some limitation; this would be a
constraint you have missed), and there is usually some possible course of action,
even if it is very unpleasant (your constraints are too restrictive).
Here is an optimization problem which has an optimal solution, but its
feasible region has no corners (Figure C.5):
min_{x,y} x + y
s.t. x + y ≥ 0.
Any solution on the line x + y = 0 is feasible and optimal. So this problem has
infinitely many optimal solutions, but no corner points. These examples show
that the statement “at least one corner point is optimal, so we only have to look
at corner points” needs a few technical qualifications (that there is in fact an
optimal solution, and a corner point). In practice these kinds of counterexamples
are rare.
Also note that this statement does not say that only corner points may be
optimal, even with the qualifications that there are corner points and optimal
solutions. There very well may be a non-corner point which is optimal. But in
such cases there is also an optimal corner point, so it is still acceptable to just
check the corner points. We will still find an optimal solution that way. An
example of such a problem is

min 12x + 40y

subject to the same constraints as the original example (see Figure C.6). Any feasible point on the line segment between (1.5, 1.5) and (6.5, 0) has the same, optimal objective function value of 78. So a solution like (3.5, 0.9) is optimal even though it is not a corner point. However, the corner points (1.5, 1.5) and (6.5, 0) are both optimal as well, so we can still find an optimal solution by checking only the corner points.
[Figure C.4: An infeasible problem: constraints x ≥ 9.5, y ≤ 5, x + y ≤ 9, and y ≥ 0 cannot all be satisfied.]

[Figure C.5: The feasible region x + y ≥ 0 is a half-plane with no corner points; objective Z = x + y.]

[Figure C.6: Constraints 11x + 3y ≥ 21, 6x + 20y ≥ 39, and x + y ≤ 9 with objective Z = 12x + 40y; every point on the segment between (1.5, 1.5) and (6.5, 0) is optimal.]

To apply the simplex method, a linear program is first converted to standard form, in which all constraints are equalities and all variables are nonnegative. For the original example, introducing new variables s1 , s2 , and s3 gives

min 10x + 26y
s.t. 11x + 3y − s1 = 21
6x + 20y − s2 = 39
x + y + s3 = 9
x, y, s1 , s2 , s3 ≥ 0
This problem is equivalent to the original one in the sense that any solution to
the standard-form problem can easily be translated to a solution in the original
problem (just ignore the new variables); and a solution is feasible to one problem
if and only if it is feasible to the other problem. Given a solution (x, y) to the
original problem, let s1 = 11x + 3y − 21, s2 = 6x + 20y − 39, and s3 = 9 − (x + y).
If that solution is feasible, these new variables must all be non-negative, and the
three equality constraints in the standard-form problem will be satisfied. The
reverse is true as well; for instance, since 11x+3y = 21+s1 in a feasible solution
to the standard-form problem, and since s1 ≥ 0, we must have 11x + 3y ≥
21 and the first constraint in the original problem is also satisfied. The new
variables introduced (s1 , s2 , and s3 ) are often called slack variables, because
they show how much the left-hand side of the constraint can be changed before
it is violated.
This technique can be used to ensure that all constraints are equalities.
Similarly, non-standard problems violating the other requirements can also be
converted into equivalent, standard-form linear programs. If the objective func-
tion is maximization, rather than minimization, we can multiply the objective
function by −1. (Minimizing −f is the same as maximizing f ; see Proposi-
tion B.1 in Section B.6.2.) If there is a decision variable which is “free” (it does
not have to be non-negative), it can be replaced with two new decision variables.
For instance, if the variable y can be either negative or positive, everywhere y
appears in the optimization problem we can replace it with y + − y − , where y +
and y − are two new decision variables with non-negativity constraints (y + ≥ 0,
y − ≥ 0). This works because any value (positive or negative) can be written as the difference of two nonnegative numbers.
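As a quick numeric check of these conversions (our own Python sketch; numpy and the function name `to_standard_form` are assumptions), we can map a solution of the original problem to the standard-form variables, confirm the equality constraints hold, and split a free variable into two nonnegative parts:

```python
import numpy as np

def to_standard_form(x, y):
    """Map an original-problem solution to the standard-form variables."""
    s1 = 11 * x + 3 * y - 21    # surplus of 11x + 3y >= 21
    s2 = 6 * x + 20 * y - 39    # surplus of 6x + 20y >= 39
    s3 = 9 - (x + y)            # slack of x + y <= 9
    return np.array([x, y, s1, s2, s3])

A = np.array([[11, 3, -1, 0, 0],
              [6, 20, 0, -1, 0],
              [1, 1, 0, 0, 1]], dtype=float)
b = np.array([21.0, 39.0, 9.0])

z = to_standard_form(1.5, 1.5)
print(np.allclose(A @ z, b), np.all(z >= 0))   # True True: feasible in both forms

# A free variable v can always be written v = v_plus - v_minus, both parts >= 0:
v = -2.5
v_plus, v_minus = max(v, 0.0), max(-v, 0.0)
print(v_plus - v_minus == v)                   # True
```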
It is common to write linear programs in matrix-vector form, since this
is more convenient when there are a large number of decision variables and
constraints. In matrix notation, the linear program in standard form can be
written as:
min c · x
s.t. Ax = b
x ≥ 0
For the example above,

c = (10, 26, 0, 0, 0)T

A = | 11   3  −1   0  0 |
    |  6  20   0  −1  0 |
    |  1   1   0   0  1 |

b = (21, 39, 9)T
We write Ai for the i-th column of A:

Ai = (a1i , a2i , . . . , ami )T .
min c · x
s.t. Ax = b
x ≥ 0

1. Find an initial basic feasible solution:

Let B and N be the index sets of the basic and nonbasic columns. Set xN = 0.
2. Compute reduced costs:
Compute the reduced cost of each nonbasic index j ∈ N as c̄j =
cj −cB T AB −1 Aj , where cB is the column vector of objective function
coefficients for the basic variables.
If c̄j ≥ 0 for all j ∈ N , then the current solution is optimal. Termi-
nate the algorithm and return the solution.
Otherwise, choose some nonbasic variable j ∈ N with c̄j < 0. The
column representing this variable will enter the basis.
Set dj = 1, and di = 0 for all i ∈ N − {j}.
3. Identify descent direction:
Compute the basic direction dB = −AB −1 Aj .
If di ≥ 0 for all i ∈ B, the linear program is unbounded. Terminate
the algorithm and report this.
Otherwise, di < 0 for at least one i ∈ B. Compute the maximum step size

θ = min_{i∈B : di <0} ( −xi /di ) .   (C.1)

Let k be the index of a variable achieving this minimum, with θ = −xk /dk . The column representing this variable will leave the basis (to be replaced with column j).
4. Compute new solution:
Set x^new_i = xi + θdi for all variables i (both basic and nonbasic).
Update the basis AB by replacing Ak with Aj .
Update the index sets by removing k from B, and adding j. Likewise,
remove j from N and add k.
Update the vectors cB , xB , and xN appropriately based on x^new .
Return to step 2.
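The steps above can be sketched in Python as follows. This is our own minimal implementation (numpy assumed), not the book's: it takes a starting basic feasible solution as given, uses a smallest-index (Bland-style) entering rule, and recomputes the basic solution by a linear solve at each pass. The constraint data are our reading of the example solved in the text.

```python
import numpy as np

def simplex(A, b, c, basis):
    """Revised simplex following the steps above; `basis` holds the column
    indices of a starting basic feasible solution."""
    m, n = A.shape
    basis = list(basis)
    while True:
        AB = A[:, basis]
        x = np.zeros(n)
        x[basis] = np.linalg.solve(AB, b)
        # Step 2: reduced costs of the nonbasic columns.
        y = np.linalg.solve(AB.T, c[basis])            # simplex multipliers
        nonbasic = [j for j in range(n) if j not in basis]
        entering = [j for j in nonbasic if c[j] - y @ A[:, j] < -1e-9]
        if not entering:
            return x, c @ x                            # all reduced costs >= 0: optimal
        j = min(entering)                              # smallest-index entering rule
        # Step 3: basic direction and ratio test for the step size theta.
        dB = -np.linalg.solve(AB, A[:, j])
        if np.all(dB >= -1e-9):
            raise ValueError("linear program is unbounded")
        theta, pos = min((-x[basis[i]] / dB[i], i) for i in range(m) if dB[i] < -1e-9)
        # Step 4: column j replaces the leaving column in the basis.
        basis[pos] = j

# Example data (our reading of the example solved in the text):
A = np.array([[11, 3, -1, 0, 0, 0, 0],
              [6, 20, 0, -1, 0, 0, 0],
              [1, 1, 0, 0, 1, 0, 0],
              [-1 / 3, 1, 0, 0, 0, 1, 0],
              [4, -5, 0, 0, 0, 0, 1]])
b = np.array([21.0, 39.0, 9.0, 6.0, 20.0])
c = np.array([10.0, 26.0, 0, 0, 0, 0, 0])

x, z = simplex(A, b, c, basis=[0, 1, 2, 3, 6])
print(x[:2], z)   # approximately (1.5, 1.5) with objective 54
```

Finding the starting basis is itself a nontrivial step (the "Phase I" problem), one of the choices the text notes is not fully specified in the algorithm above.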
As an example, we show how the simplex method will operate on the following standard-form linear program (Figure C.7 plots the feasible region):

min 10x + 26y
s.t. 11x + 3y − s1 = 21
6x + 20y − s2 = 39
x + y + s3 = 9
−x/3 + y + s4 = 6
4x − 5y + s5 = 20
x, y, s1 , . . . , s5 ≥ 0

Index the variables 1 through 7 in the order (x, y, s1 , s2 , s3 , s4 , s5 ). We start from the basic feasible solution with B = {1, 2, 3, 4, 7} and N = {5, 6}, that is, (x, y) = (2.25, 6.75) with s1 = 24, s2 = 109.5, and s5 = 44.75. Compute the reduced costs of the nonbasic variables:
[Figure C.7: Feasible region for the example linear program: 11x + 3y ≥ 21, 6x + 20y ≥ 39, −x/3 + y ≤ 6, x + y ≤ 9, 4x − 5y ≤ 20, with objective Z = 10x + 26y and optimal solution (1.5, 1.5).]
c¯5 = c5 − cB T AB −1 A5 = −14
c¯6 = c6 − cB T AB −1 A6 = −12
As both reduced costs are negative, we can pick either of them. Let us pick
j = 5. Set d5 = 1, d6 = 0.
Identify descent direction:
Now B = {1, 2, 3, 4, 7}. Compute:
dB = −AB −1 A5 = (−0.75, −0.25, −9.00, −9.50, 1.75)T

θ = min { 2.25/0.75 , 6.75/0.25 , 24.0/9.0 , 109.5/9.5 } = 24/9
The index k = 3.
Compute new solution:
x^new = x + θd = (2.25, 6.75, 24, 109.5, 0, 0, 44.75)T + θ(−0.75, −0.25, −9.00, −9.50, 1.00, 0, 1.75)T = (0.25, 6.0833, 0, 84.1667, 2.6667, 0, 49.4167)T

The new B = {1, 2, 4, 5, 7} and N = {3, 6}.
c̄3 = c3 − cB T AB −1 A3 = 1.5556
c̄6 = c6 − cB T AB −1 A6 = −21.333
Since only c̄6 is negative, variable 6 enters the basis: j = 6, with d6 = 1 and d3 = 0.
dB = −AB −1 A6 = (0.25, −0.9167, 0.6667, −16.8333, −5.5833)T

θ = min { 6.0833/0.9167 , 84.1667/16.8333 , 49.4167/5.5833 } = 84.1667/16.8333 = 5
The index k = 4.
Compute new solution:
x^new = x + θd = (0.25, 6.0833, 0, 84.1667, 2.6667, 0, 49.4167)T + θ(0.25, −0.9167, 0, −16.8333, 0.6667, 1.0, −5.5833)T = (1.5, 1.5, 0, 0, 6.0, 5.0, 21.5)T
The new B = {1, 2, 5, 6, 7} and N = {3, 4}.
xB = (x, y, s3 , s4 , s5 )T = (1.5, 1.5, 6.0, 5.0, 21.5)T ,   xN = (s1 , s2 )T = (0, 0)T

AB = | 11    3  0  0  0 |     cB = (10, 26, 0, 0, 0)T
     |  6   20  0  0  0 |
     |  1    1  1  0  0 |
     | −1/3  1  0  1  0 |
     |  4   −5  0  0  1 |
The objective function Z = cB T xB = 54.0. The basic feasible solution
corresponds to the green dot in Figure C.7.
Calculate the reduced costs of the nonbasic variables N = {3, 4}:
c̄3 = c3 − cB T AB −1 A3 = 0.2178
c̄4 = c4 − cB T AB −1 A4 = 1.2673
As both reduced costs are nonnegative, we have reached the optimal solution, which is x = 1.5, y = 1.5 with an objective function value of 54.
The above procedure provides the basic framework for the simplex algorithm. There are a few issues that can arise when applying this algorithm. The first is degeneracy, which occurs when a basic variable has the value of zero.
Usually it is the nonbasic variables that are equal to zero; but sometimes we need
to set a basic variable to zero to solve the system of equations. This can cause a
few problems, and a poorly-designed implementation of the simplex algorithm
may get stuck in an infinite loop in the presence of degeneracy, rotating among
the same set of columns over and over again with no change in the objective
function or the solution, just changing which zero variables count as basic and
nonbasic. Degeneracy can be addressed by a good tiebreaking rule that can
prevent these infinite loops. One such rule is Bland’s rule: among all the indices
j for which c̄j is a negative reduced cost, choose the first of them (smallest j)
to enter the basis; and among all the indices k which achieve the minimum
in (C.1), choose the first of them (smallest k) to leave the basis.
There are also a few steps which were not unambiguously specified in the
algorithm above.
this is often very difficult. Instead, the optimality conditions are mostly used in
solution algorithms to know when an optimal solution has been found (or if we
are close to optimality), and to provide guidance on how to improve a solution
if it is not optimal. Throughout this discussion, we make heavy reference to
“necessary” and “sufficient” conditions, explained in Section B.6.4.
This subsection deals with unconstrained nonlinear minimization problems
of the form:
min {f (x), x ∈ Rn }
The objective function may or may not be convex. Some of the results require
assumptions on differentiability, which we will state as needed. These results
will be stated without proof; readers wanting more explanation are referred to
the books by Bertsekas (2016) and Bazaraa et al. (2006).
This section presents several necessary and sufficient optimality conditions
for unconstrained nonlinear minimization problems. Optimality conditions are
important because they help identify if a given solution is optimal or not. This
can help in algorithm development to check if we can stop the algorithm or
proceed further. In certain specific cases, the optimality conditions can also help
solve for the optimal solution or arrive at a set containing optimal solutions.
Definition C.1. If the function f is differentiable, a stationary point is a value
of x∗ such that ∇f (x∗ ) = 0.
In many cases, a stationary point is either a local minimum or a local maximum. However, this is not always the case; for instance, if f (x) = x3 , then x∗ = 0 is a stationary point, but the function has neither a minimum nor a maximum there. A stationary point which is neither a local minimum nor a local maximum is called a saddle point.
Theorem C.1. (First-order necessary conditions for local minima.) If f is
differentiable at a point x∗ which is a local minimum, then x∗ is also a stationary
point.
This result is a necessary condition; it is “first-order” because it refers to
the first derivative. Therefore, if x∗ is a local minimum, then it is a stationary
point. But stationary points need not be local minima; they could also be local
maxima or saddle points, for instance.
Example C.6. Determine the stationary points of the function f (x) = (x−5)2 .
Does a minimum exist?
Solution.
∇f (x∗ ) = 2(x∗ − 5) = 0 =⇒ x∗ = 5.
Thus x∗ = 5 is the stationary point. By plotting the graph, one can easily
determine that the stationary point corresponds to a local as well as global
minimum.
Example C.7. Determine the stationary points of the function f (x) = x3 +
x2 − x + 1. Does a minimum exist?
Solution. Setting the derivative to zero,
∇f (x∗ ) = 3(x∗ )2 + 2x∗ − 1 = (3x∗ − 1)(x∗ + 1) = 0 =⇒ x∗ = 1/3 or x∗ = −1.
If you plot the graph, you will notice that x∗ = 1/3 corresponds to a local
minimum and x∗ = −1 corresponds to a local maximum.
Example C.8. Determine the stationary points of the function f (x1 , x2 ) = 4(x1 − 7)2 + x22 (x2 − 10).
Solution.
∂f /∂x1 = 8(x∗1 − 7) = 0 =⇒ x∗1 = 7
∂f /∂x2 = 3(x∗2 )2 − 20x∗2 = 0 =⇒ x∗2 = 0 or 20/3.
The two stationary points are (7, 0) and (7, 20/3). However, we do not have
enough information to determine if the stationary points are minima, maxima,
or saddle points.
If the first-order necessary condition does not provide clarity on whether a stationary point is a minimum or not, a second-order necessary condition can be used if the function is twice differentiable: at any local minimum x∗ , the Hessian Hf (x∗ ) must be positive semidefinite.
Example C.9. Consider the function f (x) = (x − 5)5 . Identify the stationary
point and check if the Hessian is positive semidefinite.
Solution.
∇f (x∗ ) = 5(x∗ − 5)4 = 0 =⇒ x∗ = 5.
The stationary point is x∗ = 5. The Hessian at the stationary point is f 00 (x∗ ) =
20(x∗ − 5)3 , which is nonnegative (actually zero) and thus positive semidefinite.
However, if you plot the function, you will notice that x∗ = 5 does not correspond to a local minimum, but instead a saddle point.
Example C.10. Consider the stationary point (7, 0) of the function f (x1 , x2 ) = 4(x1 − 7)2 + x22 (x2 − 10). Is (7, 0) a saddle point?
Solution. The second partial derivatives are
∂ 2 f /∂x21 = 8
∂ 2 f /∂x1 ∂x2 = ∂ 2 f /∂x2 ∂x1 = 0
∂ 2 f /∂x22 = 6x2 − 20 ,
so the Hessian at (7, 0) is diag(8, −20). The Hessian is neither positive nor negative semidefinite. Therefore, the stationary point (7, 0) is a saddle point.
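In small cases like these, the semidefiniteness checks can be carried out numerically by inspecting the Hessian's eigenvalues. A Python sketch (our illustration; numpy and the function names are assumptions):

```python
import numpy as np

def classify(H, tol=1e-9):
    """Classify a symmetric Hessian by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(H)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.any(w > tol) and np.any(w < -tol):
        return "indefinite (saddle point)"
    return "semidefinite (inconclusive)"

def hessian(x1, x2):
    """Hessian of f(x1, x2) = 4(x1 - 7)^2 + x2^2 (x2 - 10)."""
    return np.array([[8.0, 0.0], [0.0, 6 * x2 - 20]])

print(classify(hessian(7, 0)))       # indefinite (saddle point)
print(classify(hessian(7, 20 / 3)))  # positive definite
```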
A minor modification of the second-order necessary conditions gives a suffi-
cient condition on optimality.
Example C.11. Use this sufficient condition to classify the stationary points of f (x) = x3 + x2 − x + 1.
Solution. The stationary points of the function are x∗ = 1/3 and x∗ = −1, and the second derivative is
Hf (x∗ ) = 6x∗ + 2 .
At x∗ = 1/3 we have Hf = 4 > 0, so x∗ = 1/3 is a strict local minimum; at x∗ = −1 we have Hf = −4 < 0, so this point is a local maximum, not a minimum.
Example C.12. Consider the stationary point (7, 20/3) of the function f (x1 , x2 ) = 4(x1 − 7)2 + x22 (x2 − 10). Is (7, 20/3) a strict local minimum?
Solution. As before,
∂ 2 f /∂x21 = 8
∂ 2 f /∂x1 ∂x2 = ∂ 2 f /∂x2 ∂x1 = 0
∂ 2 f /∂x22 = 6x2 − 20 ,
so the Hessian is
Hf (7, 20/3) = diag(8, 20) .
Both diagonal entries are positive, so the Hessian is positive definite, and (7, 20/3) is a strict local minimum.
The following subsections spell out choices for Steps 1, 2, and 3. These
choices are independent of each other, and you can combine any choice for one
step with any choice for another.
You can stop when the solution stabilizes: ||xk − xk−1 || < ε.
You can stop when the objective function stabilizes: |f (xk ) − f (xk−1 )| < ε.
You can normalize the previous two inequalities to reflect relative stability
(e.g., stop when the objective function decreases by less than 1% between
iterations).
You can stop after a certain amount of computation time has elapsed.
These methods are more intuitive to apply than ||∇f || < ε. The downside is that there is no guarantee that you are close to an optimal solution when they are satisfied. This is particularly obvious for the last criterion; but even
for the earlier ones, there is no way to tell the difference between a solution
“stabilizing” for a good reason (you are close to the minimum) or for a bad
reason (the algorithm is stuck somewhere suboptimal but can’t make progress).
You can also use a combination of these methods; for instance, stopping
when the objective changes by less than 1%, or after one hour of run time
(whichever comes first).
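Such combinations are easy to wire together in code. A one-dimensional gradient-descent skeleton in Python (our sketch; all names are assumptions) that stops on whichever criterion triggers first:

```python
import time

def minimize(f, grad, x0, step=0.1, eps=1e-4, rel_tol=0.01, max_seconds=3600.0):
    """One-dimensional gradient descent with combined stopping rules:
    solution stabilized, objective stabilized (relative change), or time
    budget exhausted - whichever comes first."""
    x, fx, start = x0, f(x0), time.monotonic()
    while True:
        x_new = x - step * grad(x)
        f_new = f(x_new)
        if (abs(x_new - x) < eps                        # solution stabilized
                or abs(f_new - fx) < rel_tol * abs(fx)  # objective changed < 1%
                or time.monotonic() - start > max_seconds):
            return x_new
        x, fx = x_new, f_new

# f(x) = (x - 5)^2 has its minimum at x = 5:
x = minimize(lambda x: (x - 5) ** 2, lambda x: 2 * (x - 5), x0=0.0)
print(round(x, 2))   # 5.0
```

For vector problems, the absolute values would be replaced by norms, as in the criteria above.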
that minimum point. We will derive this direction, and then explain how it is
related to Newton’s method as you learned it in calculus, or as we used it in the
previous section.
The quadratic approximation to f at xk is its second-order Taylor series,
based on its gradient and Hessian:
f (x) ≈ f (xk ) + ∇f (xk )T (x − xk ) + (1/2)(x − xk )T Hf (xk )(x − xk ) .   (C.2)

Let f˜ denote the right-hand side of equation (C.2). The minimum point of the quadratic approximation is the point where ∇f˜ vanishes, that is, where

∇f (xk ) + Hf (xk )(x − xk ) = 0 ,

which gives the direction dk = x − xk = −Hf (xk )−1 ∇f (xk ),
so the new point xk+1 = xk + dk is given by exactly the same formula as in Step 2 of Newton's method for line search in Section C.1.2.
Newton’s method usually yields a “better” search direction, and fewer iter-
ations are required to reach the minimum. However, it can be computationally
expensive to calculate all the elements in the Hessian matrix, and to calculate its
inverse. For this reason, there are a variety of “quasi-Newton” methods, which
replace H −1 in equation (C.5) by another matrix which is easier to calculate,
but is approximately the same.
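A Python sketch of the Newton iteration (our illustration; numpy assumed). On a quadratic function the quadratic approximation is exact, so a single step lands on the minimum:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method: x_{k+1} = x_k - H_f(x_k)^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # solve H d = g rather than invert H
    return x

# f(x, y) = (x - 3)^2 + 10(y + 1)^2, a quadratic with minimum at (3, -1):
grad = lambda v: np.array([2 * (v[0] - 3), 20 * (v[1] + 1)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
print(newton(grad, hess, [0.0, 0.0]))  # [ 3. -1.]
```

A quasi-Newton method would replace the `hess` solve with a cheaper approximation built from gradient differences.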
f (xk + αk dk ) − f (xk ) ≤ γαk ∇f (xk )T dk   (C.7)

The first value η, βη, β 2 η, . . . which satisfies this inequality is chosen for αk . The
intuition in formula (C.7) is that the left-hand side shows how much f changes
if we take a step of size αk . We are trying to solve a minimization problem,
so hopefully f decreases, and the left-hand side is negative. On the right-hand
side, αk ∇f (xk )T dk is how much we would expect f to decrease based on its
linear approximation. Likewise, since dk is a direction in which f decreases, the
right-hand side is also a negative number. We stop at the first choice of αk for
which the actual decrease in the objective function is at least a certain fraction
γ of what we would expect from the linear approximation; this is exactly what
the condition (C.7) checks. Typical values of the constants in this method are
η = 1, β = 1/2, and γ = 1/10, but you should experiment with different values
for your specific problem. This rule is often called the Armijo rule.
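The rule translates directly into a short loop. A Python sketch (ours; the scalar setting and names are assumptions, and we assume d is a descent direction so the loop terminates):

```python
def armijo_step(f, slope, x, d, eta=1.0, beta=0.5, gamma=0.1):
    """Backtracking line search: try eta, beta*eta, beta^2*eta, ... until the
    actual decrease is at least gamma times the decrease predicted by the
    linear approximation. `slope` is grad f(x) . d, which must be negative."""
    alpha = eta
    while f(x + alpha * d) - f(x) > gamma * alpha * slope:
        alpha *= beta
    return alpha

# f(x) = x^2 at x = 3, moving along d = -f'(3) = -6 (so slope = -36):
f = lambda x: x * x
alpha = armijo_step(f, slope=-36.0, x=3.0, d=-6.0)
print(alpha)   # 0.5: alpha = 1 overshoots to x = -3, and halving once succeeds
```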
C.3.6 Examples
Example C.13. Apply the unconstrained optimization algorithm to the func-
tion f (x) = x2 − 10x + 20. Terminate when |xk − xk−1 | < 0.01, use gradient
1 You will need to impose an upper bound on αk to do this, but in practice this is not usually very hard.
descent for the direction, and use a constant step size of α = 1. Repeat with a
constant step size of α = 0.1.
Solution. From basic calculus we know that the minimum of the above
function is at x∗ = 5. For this example, we pick an initial value of x0 = 15. At
any point xk , the descent direction is dk = −∇f (xk ) = −(2xk − 10) = 10 − 2xk . Then, using
a constant step size of α = 1, we have the following:
At the initial point, the descent direction is d0 = −20, so x1 = 15 − 1 × 20 =
−5. Proceeding to the next iteration, we check the termination criterion. Since
|x1 − x0 | > 0.01, we continue. The new descent direction is d1 = 20. Therefore, x2 = −5 + 1 × 20 = 15. Since |x2 − x1 | > 0.01, we proceed to the next iteration.
But we’ve returned to our initial point! Notice that the solutions will continue
to oscillate between 15 and −5, due to the large step size.
Repeating using a smaller constant step size of α = 0.1 produces convergence: from the initial point, the descent direction is d0 = −20 and x1 = 15 − 0.1 × 20 = 13. We have |x1 − x0 | > 0.01, so we continue. At this new
point, the descent direction is d1 = −16. Therefore, x2 = 13 − 0.1 × 16 = 11.4.
Subsequent iterations are shown in Table C.3.
In this example, the constant step size plays a major role in how quickly
we converge (if at all). For reference, Table C.4 shows how many iterations are
required for different choices. Similarly, the initial value chosen, and the param-
eters η, β, γ of an inexact line search play an important role in convergence.
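This behavior is easy to reproduce. A Python sketch of the example (our code, not the text's):

```python
def gradient_descent(x0, alpha, tol=0.01, max_iter=1000):
    """Constant-step gradient descent on f(x) = x^2 - 10x + 20 (gradient 2x - 10).
    Returns the final point and the number of iterations used."""
    x = x0
    for k in range(max_iter):
        x_new = x - alpha * (2 * x - 10)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

print(gradient_descent(15.0, alpha=1.0))  # oscillates between 15 and -5 forever
print(gradient_descent(15.0, alpha=0.1))  # converges near x* = 5 in 25 iterations
```

The second call stops at x25 = 5.0378, matching the last row of Table C.3.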
Example C.14. Apply the unconstrained optimization algorithm to the func-
tion f (x, y) = (x − 1)4 + 5(y − 2)4 + xy. Terminate when ||xk − xk−1 || < 0.01,
and use gradient descent. First use the algorithm with a constant step size of
α = 0.0025, then solve again using backtracking line search with η = 1, β = 0.1,
and γ = 0.1.
Solution. Notice that there are two decision variables; we will use the
vector x = (x, y) to describe both decision variables together. For a specific
iteration, we will let their values be given by xk = (xk , yk ).
For this example, we pick an initial value of (x0 , y0 ) = (4, 4). At any point
k, the descent direction is
dk = −∇f (xk , yk ) = ( −4(xk − 1)3 − yk , −20(yk − 2)3 − xk )T .
So from the initial point, the descent direction is
d0 = −∇f (x0 , y0 ) = ( −4(x0 − 1)3 − y0 , −20(y0 − 2)3 − x0 )T = (−112, −164)T
and the new point is
x1 = (x1 , y1 )T = (4, 4)T + 0.0025 (−112, −164)T = (3.72, 3.59)T .

For convergence, we check whether √((x1 − x0 )2 + (y1 − y0 )2 ) < 0.01. This is not true,
so we increase k to 1 and move to the next iteration.
Table C.3: Gradient descent applied to f (x) = x2 − 10x + 20, with a constant step size αk = 0.1
k    xk      f (xk ) − f (x∗ )
0 15 100
1 13.0 64.0
2 11.4 40.96
3 10.12 26.2144
4 9.096 16.7772
5 8.2768 10.7374
6 7.6214 6.8719
7 7.0972 4.3980
8 6.6777 2.8147
9 6.3422 1.8014
10 6.0737 1.1529
11 5.8590 0.7379
12 5.6872 0.4722
13 5.5498 0.3022
14 5.4398 0.1934
15 5.3518 0.1238
16 5.2815 0.0792
17 5.2252 0.0507
18 5.1801 0.0325
19 5.1441 0.0208
20 5.1153 0.0133
21 5.0922 0.0085
22 5.0738 0.0054
23 5.0590 0.0035
24 5.0472 0.0022
25 5.0378 0.0014
The value of the objective function here is very large, approximately 3.58 × 109 ,
so the left-hand side of (C.7) is 3.58 × 109 − 177 ≈ 3.58 × 109 . For the right
hand side, we calculate
γα0 ∇f (x0 )T d0 = (0.1)(1) (112, 164) · (−112, −164)T = −3944 .
Inequality (C.7) is clearly false (the left-hand side is very positive, the right-
hand side is negative), so we try again with α0 = βη = 1/10. This choice
corresponds to the solution
(4, 4)T + (1/10)(−112, −164)T = (−7.2, −12.4)T .
The objective function has a value of 2.19 × 105 , so the left-hand side of (C.7) is still approximately 2.19 × 105 , while the right-hand side is

γα0 ∇f (x0 )T d0 = (0.1)(1/10)(112, 164) · (−112, −164)T = −394.4 ,

so inequality (C.7) is again false. Trying again with α0 = β 2 η = 1/100, the new solution is
x = (2.88, 2.36), and the left and right-hand sides of (C.7) are now −157.6 and
−39.4, respectively. So we accept this choice: α0 = 1/100, and x1 = (2.88, 2.36).
Proceeding similarly, you can verify that the next step sizes are α1 = 1/10,
α2 = 1/10, α3 = 1, and α4 = 0.1, with the algorithm terminating after that
step at the solution x5 = (0.236, 1.774), with objective value 0.772. Notice that
each iteration of backtracking line search required more work, but in the end
we only had to perform five iterations, rather than sixty-four.
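The whole run can be reproduced in a few lines of Python (our sketch; numpy assumed):

```python
import numpy as np

f = lambda v: (v[0] - 1) ** 4 + 5 * (v[1] - 2) ** 4 + v[0] * v[1]
grad = lambda v: np.array([4 * (v[0] - 1) ** 3 + v[1],
                           20 * (v[1] - 2) ** 3 + v[0]])

x = np.array([4.0, 4.0])
for _ in range(100):
    d = -grad(x)
    alpha = 1.0                     # eta = 1
    # Backtracking with beta = 0.1, gamma = 0.1 (inequality (C.7)):
    while f(x + alpha * d) - f(x) > 0.1 * alpha * (grad(x) @ d):
        alpha *= 0.1
    x_new = x + alpha * d
    done = np.linalg.norm(x_new - x) < 0.01
    x = x_new
    if done:
        break
print(x, f(x))   # ends near (0.236, 1.774) with objective about 0.772
```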
min f (x)
x=(x1 ,...,xn )
s.t. gi (x) ≤ 0 ∀i = 1, 2, . . . , m
hj (x) = 0 ∀j = 1, 2, . . . , ` ,
where the gi and hj are functions representing the inequality and equality
constraints, respectively. We will concisely denote the feasible region by X.
That is, X is the set
X = {x ∈ Rn : gi (x) ≤ 0 ∀i = 1, 2, . . . , m; hj (x) = 0 ∀j = 1, 2, . . . , `} .
Definition C.6. The linearity constraint qualification holds if all of the func-
tions gi and hj defining the constraints are affine functions.
∇f (x∗ ) + Σi µi ∇gi (x∗ ) + Σj λj ∇hj (x∗ ) = 0   (C.8)
hj (x∗ ) = 0 ∀j = 1, 2, . . . , ℓ   (C.9)
gi (x∗ ) ≤ 0 ∀i = 1, 2, . . . , m   (C.10)
µi gi (x∗ ) = 0 ∀i = 1, 2, . . . , m   (C.11)
µi ≥ 0 ∀i = 1, 2, . . . , m   (C.12)
The above conditions can also be written in matrix form. Let µ and λ be
the vectors whose components are µ1 , µ2 , . . . , µm and λ1 , λ2 , . . . , λ` , and likewise
let g(x) and h(x) be vectors with components gi (x) and hj (x). Then the first KKT condition can be written ∇f (x∗ ) + Dg(x∗ )T µ + Dh(x∗ )T λ = 0, where Dg and Dh are the Jacobians of g and h; the remaining conditions become h(x∗ ) = 0, g(x∗ ) ≤ 0, µ · g(x∗ ) = 0, and µ ≥ 0.
Another way of representing the first order necessary condition is using the
Lagrangian function

L(x, µ, λ) = f (x) + Σi µi gi (x) + Σj λj hj (x) .

We can rewrite the KKT conditions in a simpler way using the Lagrangian.
Taken as a vector, the partial derivatives of L with respect to x (written as
∇x L) form the left-hand side of the first KKT condition (C.8). Similarly, the
partial derivatives of L with respect to λ form the left-hand side of the second
KKT condition (C.9). So we can write the KKT conditions as:
∇x L(x∗ , µ, λ) = 0
∇λ L(x∗ , µ, λ) = 0
These are necessary conditions, meaning that they must be satisfied at any
optimal solution. Without additional restrictions, they are not sufficient, mean-
ing that there may be non-optimal points which also satisfy those conditions.
Still, we can identify all the points which satisfy the KKT conditions to gener-
ate a set of “candidate solutions;” if an optimal solution exists it must be one
of them. We give some examples of how to do this in the following subsection.
For large-scale problems this is not a practical approach (the following subsec-
tion describes some methods that can be used), but it may be reasonable for
problems with only a few decision variables and constraints, or where there is a
special structure which further simplifies these conditions.
With additional conditions on the objective function and constraints, we
can give stronger results based on the KKT conditions: For instance, under
the linear independence constraint qualification, there are unique vectors of
Lagrange multipliers µ and λ satisfying the KKT conditions at a local minimum.
The KKT conditions can also be sufficient, under additional restrictions. For
example, by imposing convexity conditions on f and the gi , and linearity on the
hj , we have this result:
Theorem C.5. (First-order sufficient conditions for global minima.) Let f and
all gi be continuously differentiable convex functions, and let all hj be linear
functions. Then if there are x∗ ∈ Rn , µ ∈ Rm , and λ ∈ R` satisfying the
following conditions, x∗ is a global minimum of f subject to x ∈ X.
∇f (x∗ ) + Σi µi ∇gi (x∗ ) + Σj λj ∇hj (x∗ ) = 0   (C.13)
hj (x∗ ) = 0 ∀j = 1, 2, . . . , ℓ   (C.14)
gi (x∗ ) ≤ 0 ∀i = 1, 2, . . . , m   (C.15)
µi gi (x∗ ) = 0 ∀i = 1, 2, . . . , m   (C.16)
µi ≥ 0 ∀i = 1, 2, . . . , m   (C.17)
In this result, the conditions on gi and hj imply that the feasible region X
is a convex set; since f is also a convex function, we have a convex optimization
problem, and therefore any local minimum is also global. This is what allowed
us to convert the “local” necessary condition into a global sufficient condition.
As with unconstrained problems, we can also formulate a second-order suf-
ficient condition, based on second partial derivatives.
This condition can be relaxed slightly. Rather than requiring that the ma-
trix Hxx consisting of all second partial derivatives ∂ 2 L/∂xi ∂xj be positive
definite (requiring dT Hxx (x∗ )d > 0 for all nonzero d ∈ Rn ), it is enough if
dT Hxx (x∗ )d > 0 holds for any nonzero d ∈ Rn such that ∇hj (x∗ )T d = 0 for
all j, and ∇gi (x∗ )T d = 0 for all active i ∈ A(x∗ ).
min x2
s.t. (x1 − 1)2 + x22 ≤ 1
x1 ≥ 2
or

(0, 1)T + µ1 (2(x∗1 − 1), 2x∗2 )T + µ2 (−1, 0)T = 0 .

At (x∗1 , x∗2 ) = (2, 0), we have

(0, 1)T + µ1 (2, 0)T + µ2 (−1, 0)T = 0 .
But there are no values of µ1 and µ2 for which the above equation is satisfied.
So it is impossible to satisfy the KKT conditions at this point, even though it
is optimal. How can this be? The answer is that the constraint qualifications
are not satisfied at this point. The first constraint is not linear, so the linearity
constraint qualification fails. The gradients of the two constraints are (2, 0)
and (−1, 0), which are linearly dependent, so the linear independence constraint
qualification also fails. Similarly, you can show that the Mangasarian-Fromovitz
constraint qualification fails at this point.
This example highlights the importance of constraint qualifications. However, in traffic assignment, a majority of the formulations will have linear constraints, in which case the linearity constraint qualification holds. In such cases, you do not have to worry further.
Note that convexity alone does not imply constraint qualification. In the
above example both g1 and g2 are convex functions, so both the objective func-
tion and constraints are convex functions, so this is a convex optimization prob-
lem.
min x1 + x2 + x3
s.t. x21 /a + x22 /b + x23 /c = 1
becomes

(1, 1, 1)T + λ (2x∗1 /a , 2x∗2 /b , 2x∗3 /c)T = 0 .
Example C.18. Write down the KKT conditions for the following optimization
problem.
min x21 + x22
s.t. 11x1 + 3x2 ≥ 21
6x1 + 20x2 ≥ 39
x1 + x2 ≤ 9
x1 ≥ 0
x2 ≥ 0
Solution. Since all constraints are linear, the linear constraint qualification
is satisfied and the KKT conditions are indeed necessary. We can rewrite the
optimization problem in the form needed for the KKT conditions:
The second KKT condition (C.9) is ignored since there are no equality con-
straints. The third KKT condition (C.10) ensures feasibility:
The fourth ensures complementarity (C.11), that the µi values must be zero
unless the constraint is active:
Finally, the fifth condition (C.12) requires nonnegative multipliers:

µ1 , µ2 , µ3 , µ4 , µ5 ≥ 0 .
Example C.19. Find all candidate optima for the problem

min x21 + 2x22
s.t. x1 + 2x2 − 6 ≤ 0
x1 − x2 + 2 ≤ 0

Solution. The KKT conditions require

2x∗1 + µ1 + µ2 = 0
4x∗2 + 2µ1 − µ2 = 0
x∗1 + 2x∗2 − 6 ≤ 0
x∗1 − x∗2 + 2 ≤ 0
µ1 (x∗1 + 2x∗2 − 6) = 0
µ2 (x∗1 − x∗2 + 2) = 0
µ1 , µ2 ≥ 0 .
The functions defining the constraints are linear, so the KKT conditions are
indeed necessary. We now find candidate solutions by considering all combina-
tions of active and inactive constraints.
Case I: Both constraints inactive:
In this case µ1 = µ2 = 0, so the equality conditions reduce to
2x∗1 = 0
4x∗2 = 0 .
The solution is (x∗1 , x∗2 ) = (0, 0). However, (0, 0) violates the constraint g2 (x1 , x2 ) = x∗1 − x∗2 + 2 ≤ 0. So there is no optimal solution where both constraints are inactive.
Case II: Only the second constraint is active:
In this case µ1 = 0. The equality conditions reduce to
2x∗1 + µ2 = 0
4x∗2 − µ2 = 0
µ2 (x∗1 − x∗2 + 2) = 0 .
The first two equations give x∗1 = −µ2 /2 and x∗2 = µ2 /4. Because the second constraint is active, we know x∗1 − x∗2 + 2 = 0. Therefore, the third equation will be satisfied automatically. Substituting x∗1 = −µ2 /2 and x∗2 = µ2 /4 into the active constraint, we have µ2 = 8/3, and therefore x∗1 = −4/3 and x∗2 = 2/3.
This solution satisfies both constraints, and the Lagrange multipliers are non-negative, so it is a candidate for optimality.
Case III: Only the first constraint is active:
In this case µ2 = 0, and the equality conditions are:
2x∗1 + µ1 = 0
4x∗2 + 2µ1 = 0
µ1 (x∗1 + 2x∗2 − 6) = 0
Proceeding in the same way, the first two equations require x∗1 = −µ1 /2 and x∗2 = −µ1 /2. Likewise, because we assume the first constraint is active, we can replace the third equation with x∗1 + 2x∗2 = 6. With the values of x∗1 and x∗2 for this case, this simplifies to −3µ1 /2 = 6, so µ1 = −4. This violates the requirement µ1 ≥ 0, and therefore there cannot be an optimal solution corresponding to this case.
Case IV: Both constraints are active:
In this case, we know both constraints are satisfied with equality:

x∗1 + 2x∗2 = 6
x∗1 − x∗2 = −2 .
The only solution to these equations is x∗1 = 2/3 and x∗2 = 8/3. It remains
to see whether there are Lagrange multipliers satisfying the rest of the KKT
conditions. Substituting into the first two conditions, we have
µ1 + µ2 = −2x∗1 = −4/3
2µ1 − µ2 = −4x∗2 = −32/3 .
Solving this system, we find µ1 < 0, violating the non-negativity condition,
and establishing that the solution where both constraints are active cannot be
optimal.
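The case analysis can be checked mechanically: for each of the four active sets, solve the resulting linear system and keep solutions that are feasible with nonnegative multipliers. A Python sketch (ours; numpy assumed, with the objective x1² + 2x2² read off from the stationarity equations 2x1∗ = 0 and 4x2∗ = 0):

```python
import numpy as np

G = np.array([[1.0, 2.0],    # gradient of g1 = x1 + 2 x2 - 6
              [1.0, -1.0]])  # gradient of g2 = x1 - x2 + 2
rhs = np.array([6.0, -2.0])  # g_i is active exactly when G[i] @ x == rhs[i]

candidates = []
for active in [(), (1,), (0,), (0, 1)]:          # Cases I-IV in the text
    k = len(active)
    M = np.zeros((2 + k, 2 + k))
    v = np.zeros(2 + k)
    M[0, 0], M[1, 1] = 2.0, 4.0                  # stationarity: (2 x1, 4 x2) + sum mu_i grad g_i = 0
    for col, i in enumerate(active):
        M[:2, 2 + col] = G[i]                    # multiplier column in the stationarity rows
        M[2 + col, :2] = G[i]                    # active constraint holds with equality
        v[2 + col] = rhs[i]
    sol = np.linalg.solve(M, v)
    x, mu = sol[:2], sol[2:]
    if np.all(G @ x <= rhs + 1e-9) and np.all(mu >= -1e-9):
        candidates.append(x)

print(candidates)   # only Case II survives: x = (-4/3, 2/3)
```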
min ∇f (xk )T y
s.t. Ay − b ≤ 0
Cy − d = 0
The difference between MSA and FW is how the step size is chosen. In MSA, at each iteration k, the step size is commonly chosen to be αk = 1/(k + 1), although any sequence of step sizes satisfying Σk αk = ∞ and Σk α2k < ∞ will
work. In FW, the step size αk is obtained by minimizing f (xk + αk (yk − xk )) over αk ∈ [0, 1]. This one-dimensional optimization problem can be solved using any of the line search methods from Sections 3.3.2 or C.1.
FW picks the step size in a more intelligent manner than MSA and typically
converges faster. For this reason, MSA is rarely used in solving nonlinear opti-
mization with linear constraints. However, in certain complex traffic assignment
problems, evaluating the objective function f can be computationally expensive.
In such cases, a line search method in FW can require large amounts of time,
and taking faster MSA steps outweighs the greater precision that each FW step
would have.
Example C.20. Determine the optimal solution of the following optimization
problem using MSA and FW.
min 2(x1 − 1)2 + (x2 − 2)2
s.t. x1 + 4x2 − 2 ≤0
−x1 + x2 ≤0
x1 , x2 ≥0
Solution. The KKT conditions for this problem can be solved to yield
the optimal solution x∗1 = 0.7878, x∗2 = 0.3030. The MSA and FW algorithms
will converge to this solution over successive iterations; the advantage of these
methods is that solving the KKT conditions for a large problem can be very
difficult, whereas MSA and FW scale better with problem size. For this objective
function, the gradient at any point (x1, x2) is
∇f(x1, x2) = (4(x1 − 1), 2(x2 − 2)) .
MSA: Assume the initial solution is x11 = 0, x12 = 0. The search direction
is obtained by solving the following linear program.
min −4y1 − 4y2
s.t. y1 + 4y2 − 2 ≤0
−y1 + y2 ≤0
y1 , y2 ≥0
The optimal solution to this problem is y11 = 2, y21 = 0. Therefore, the search direction is p1 = y1 − x1 = (2, 0). With step size α1 = 1/2,
x21 = 0 + (1/2)(2) = 1
x22 = 0 + (1/2)(0) = 0
552 APPENDIX C. OPTIMIZATION TECHNIQUES
At the start of the second iteration we have x21 = 1, x22 = 0. Therefore, the
search direction is obtained by solving the following linear program.
min −4y2
s.t. y1 + 4y2 − 2 ≤0
−y1 + y2 ≤0
y1 , y2 ≥0
The optimal solution to this linear program is y12 = 0.4, y22 = 0.4. In iteration 2, the step size is α2 = 1/3. Therefore,
x31 = 1 + (1/3)(0.4 − 1) = 0.8
x32 = 0 + (1/3)(0.4 − 0) = 0.1333
Table C.5 shows the progress of MSA over further iterations. The error in this table is calculated as the sum of squared deviations from the optimal solution, which we know to be x∗1 = 0.7878, x∗2 = 0.3030.
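The MSA iterations above can be reproduced numerically. The sketch below is our illustration, not part of the book's algorithms; it exploits the fact that the direction-finding linear program attains its optimum at a vertex of the feasible region, so the LP is "solved" by enumerating the three vertices (0, 0), (2, 0), and (0.4, 0.4).

```python
# Method of successive averages (MSA) for Example C.20:
#   min 2(x1-1)^2 + (x2-2)^2  s.t.  x1 + 4x2 <= 2, -x1 + x2 <= 0, x >= 0
# The direction-finding subproblem is a linear program, so its optimum
# lies at a vertex of the feasible region; we simply enumerate them.

VERTICES = [(0.0, 0.0), (2.0, 0.0), (0.4, 0.4)]

def gradient(x):
    return (4 * (x[0] - 1), 2 * (x[1] - 2))

def msa(iterations=20000):
    x = (0.0, 0.0)                      # initial solution x^1
    for k in range(1, iterations + 1):
        g = gradient(x)
        # min grad . y over the feasible region, by vertex enumeration
        y = min(VERTICES, key=lambda v: g[0] * v[0] + g[1] * v[1])
        alpha = 1.0 / (k + 1)           # MSA step size
        x = (x[0] + alpha * (y[0] - x[0]), x[1] + alpha * (y[1] - x[1]))
    return x

x = msa()
print(x)  # approaches the optimum (0.7878, 0.3030)
```

The first two iterations reproduce the hand calculations above, and the iterates slowly approach the optimal solution, illustrating MSA's sublinear convergence.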
FW: Again assume the initial solution is x11 = 0, x12 = 0. The search direction is obtained by solving the same linear program as in MSA, giving y11 = 2, y21 = 0. The line search min 2(0 + α1(2 − 0) − 1)² + (0 − 2)² over α1 ∈ [0, 1] gives the step size α1 = 0.5, so
x21 = 0 + 0.5 × (2 − 0) = 1
x22 = 0 + 0.5 × (0 − 0) = 0
The optimal solution to the linear program is y12 = 0.4, y22 = 0.4, as before.
However, at this point FW chooses the step size differently than MSA. Solving
the optimization problem
min over α2 of 2(1 + α2(0.4 − 1) − 1)² + (0 + α2(0.4 − 0) − 2)²
s.t. 0 ≤ α2 ≤ 1
gives the step size α2 = 0.9091. Therefore, we obtain the new solution x31 = 1 + 0.9091(0.4 − 1) = 0.4545, x32 = 0 + 0.9091(0.4 − 0) = 0.3636.
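This step size can be checked numerically. The sketch below (our illustration) minimizes f along the segment between the current point (1, 0) and the LP solution (0.4, 0.4) by simple grid search:

```python
# Frank-Wolfe step size for the second iteration of Example C.20:
# current point x = (1, 0), direction-finding LP solution y = (0.4, 0.4).

def f(x1, x2):
    return 2 * (x1 - 1) ** 2 + (x2 - 2) ** 2

def fw_step_size(x, y, grid=10000):
    # Minimize f(x + alpha*(y - x)) over alpha in [0, 1] by grid search.
    best_alpha, best_val = 0.0, f(*x)
    for i in range(grid + 1):
        a = i / grid
        val = f(x[0] + a * (y[0] - x[0]), x[1] + a * (y[1] - x[1]))
        if val < best_val:
            best_alpha, best_val = a, val
    return best_alpha

alpha = fw_step_size((1.0, 0.0), (0.4, 0.4))
print(alpha)  # about 0.9091
```

Along this segment the objective reduces to 0.88α² − 1.6α + 4, whose minimizer 1.6/1.76 ≈ 0.9091 agrees with the value above.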
The gradient projection method moves in the direction of the negative gradient, projecting the result back onto the feasible set X if the new solution is infeasible. Recall that projX(x) means “the point in
X closest to x.” (If x ∈ X already, then projX (x) = x.) If the set X is convex,
then the projection operation is uniquely defined, and a continuous function
of x. In general, projection cannot be easily computed. When the constraints
are linear (as we assume in this section), it is considerably easier. For some
problems, it can be exceptionally easy. For instance, in traffic assignment, we
can reformulate all constraints to be simple non-negativity constraints of the
form hπ ≥ 0, and “projection” simply means “any negative path flow should be
set to zero.”
The steps of gradient projection are shown in Algorithm 2.
Initialize:
k ← 1: iteration counter
x1: initial value
ε: tolerance
while ConvergenceCriterion > ε do
dk ← −∇f(xk)
αk ← arg minα {f(projX(xk + αdk)) : 0 ≤ α ≤ ᾱ}
xk+1 ← projX(xk + αk dk)
k ← k + 1
end
return xk
Algorithm 2: Gradient projection method
Example C.21. Re-solve the previous optimization problem using the gradient
projection method, with ᾱ = 1.
Solution. We start from the same initial solution x11 = 0, x12 = 0. Here the search direction is d1 = −∇f(x1) = (4, 4). To find the step size, we solve the optimization problem minα {f(projX(x1 + αd1)) : 0 ≤ α ≤ 1}.
The projection operation can be more easily seen on a plot of the feasi-
ble region, as in Figure C.8. For a given value of α, x1 + αd1 is the point
(4α, 4α). If α ≤ 1/10, this coincides with the line x1 = x2 , which is part
of the feasible region. As a result, projection does not change the point and
projX(4α, 4α) = (4α, 4α). Once α > 1/10, the point (4α, 4α) violates the constraint x1 + 4x2 ≤ 2, and is infeasible. For these points, we have to project them onto the nearest feasible point.

[Figure C.8: The feasible region defined by x1 + 4x2 ≤ 2, −x1 + x2 ≤ 0, and x ≥ 0. For α = 1/4, the point (4α, 4α) = (1, 1) is infeasible; its projection is projX(1, 1) = (14/17, 5/17).]

As an example, if α = 1/4, the point
(4α, 4α) = (1, 1) is infeasible. The closest feasible point is (14/17, 5/17); this
can be seen geometrically in Figure C.8 by drawing a line through (1, 1) perpen-
dicular to the line corresponding to that constraint. The expression mapping a
value of α in this range to the closest feasible point can be found algebraically,
and the formula is shown below. If α > 2/3, the projection onto the line x1 + 4x2 = 2 would have x2 < 0, which is also infeasible. For α values this large, the closest feasible point is simply (2, 0). Therefore we have
projX(x1 + αd1) =
(4α, 4α)                        if α ∈ [0, 1/10]
(1/17)(48α + 2, −12α + 8)       if α ∈ [1/10, 2/3]
(2, 0)                          if α ≥ 2/3 .
Again, you can check that this function is continuous and convex. Using a line
search method, the minimum is found for α = 0.236, which corresponds to the
point (0.7878, 0.3030). As shown above, this is actually the optimal solution,
found in a single iteration! Typically, GP only converges to the optimal solution
in the limit, but as this example shows it can be extremely efficient in terms of
iterations. The drawback is that each iteration requires a significant amount of
work. But in problems where the projection can be calculated very efficiently
(as in traffic assignment), GP can be an excellent algorithmic choice.
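The single gradient projection step in this example can be verified numerically. The sketch below (our illustration) codes the piecewise projection formula derived above and performs the line search by grid search:

```python
# Gradient projection step from Example C.21: starting at x = (0, 0)
# with direction d = -grad f(0,0) = (4, 4), project the ray x + alpha*d
# onto the feasible region using the piecewise formula from the text.

def f(x1, x2):
    return 2 * (x1 - 1) ** 2 + (x2 - 2) ** 2

def project_ray(alpha):
    # proj_X((4a, 4a)) for the feasible region of Example C.20
    if alpha <= 0.1:
        return (4 * alpha, 4 * alpha)
    if alpha <= 2 / 3:
        return ((48 * alpha + 2) / 17, (-12 * alpha + 8) / 17)
    return (2.0, 0.0)

# Line search over alpha in [0, 1] by grid search
best = min((i / 2000 for i in range(2001)),
           key=lambda a: f(*project_ray(a)))
print(best, project_ray(best))  # step near 0.237, point near (0.7878, 0.3030)
```

The search recovers the optimal point in a single projected step, as in the example.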
The manifold suboptimization method applies to linearly constrained problems of the form
min f(x)
s.t. Ax ≤ b
Cx = d
with feasible region X = {x : Ax ≤ b, Cx = d}.
For any feasible solution x̂, let Aa be the matrix containing the coefficients of the set of active inequality constraints, that is,
Aa x̂ = ba .
Define E to be the matrix formed by stacking Aa on top of the equality constraint matrix C:
E = [Aa; C] .
It may be that the rows of the matrix E are not linearly independent; this
indicates that one or more of the active constraints is redundant (they are
implied by some of the others). These redundant constraints may be removed.
Example C.22. Re-solve the previous optimization problem using the manifold
suboptimization method, using ᾱ = 1.
Solution. We start from the same initial solution x11 = 0, x12 = 0. At this point, the active constraints are −x1 + x2 ≤ 0, x1 ≥ 0, and x2 ≥ 0. We only use −x1 + x2 ≤ 0 and x2 ≥ 0, as together they imply x1 ≥ 0. (Mathematically, the three coefficient vectors of these constraints are linearly dependent: (−1, 1), (0, 1), and (1, 0).) Writing the two retained constraints in “≤” form as −x1 + x2 ≤ 0 and −x2 ≤ 0, we have
E = [−1 1; 0 −1] .
P = I − ET(EET)−1E = [0 0; 0 0]
d1 = −P∇f(x1) = (0, 0)
Since d1 = 0, we calculate
Q = −(EET)−1E∇f(x1) = (−4, −8) .
Since the second component of Q is the most negative, we delete the second row of the matrix E, leaving
E = [−1 1]
P = I − ET(EET)−1E = [0.5 0.5; 0.5 0.5]
d1 = −P∇f(x1) = (4, 4) .
The new solution can be obtained as:
x21 = 0 + 4α
x22 = 0 + 4α
We need to find the step size α which minimizes 2(0 + 4α − 1)² + (0 + 4α − 2)² while retaining feasibility. The unconstrained minimum of this expression is at α = 1/3, but feasibility requires
x1 + 4x2 − 2 ≤ 0 =⇒ α ≤ 0.1 ,
so the solution is α = 0.1. Therefore, the new solution to the original problem is
x21 = 0 + 4α = 0.4
x22 = 0 + 4α = 0.4
If we do one more iteration of MS, we get x31 = 0.7878, x32 = 0.3030, which is
optimal. MS converges to the optimal solution much faster than MSA or FW
(and about as fast as GP), because its steps are closer to the steepest descent
direction. Section 6.3 has more discussion of these reasons in the specific context
of traffic assignment.
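The projection matrices in Example C.22 are easy to check numerically. The sketch below is our illustration, using hand-rolled helpers for these tiny matrices (a real implementation would use a linear algebra library); it computes P = I − ET(EET)−1E for both choices of E:

```python
# Projection matrices from Example C.22.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def projection_matrix(E):
    # P = I - E^T (E E^T)^{-1} E  (E must have full row rank)
    Et = transpose(E)
    EEt = matmul(E, Et)
    inv = inv2(EEt) if len(EEt) == 2 else [[1 / EEt[0][0]]]
    M = matmul(matmul(Et, inv), E)
    n = len(Et)
    return [[(1.0 if i == j else 0.0) - M[i][j] for j in range(n)]
            for i in range(n)]

# Both active constraints retained: P is the zero matrix, so d1 = 0.
P = projection_matrix([[-1, 1], [0, -1]])
# After deleting the second row of E:
P2 = projection_matrix([[-1, 1]])
grad = [[-4], [-4]]                      # grad f at (0, 0)
d = [[-row[0]] for row in matmul(P2, grad)]
print(P, P2, d)
```

The output reproduces the matrices in the example: P = 0, P2 with all entries 0.5, and search direction d1 = (4, 4).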
C.5 Integer Programming

As always, there is no such thing as a free lunch; for reasons discussed below,
integer problems are significantly harder to solve, and often it is impractical to
find a provably optimal solution.
There are several types of integer programs. An integer linear program
(ILP) is a linear program where all the variables are restricted to be integers.
A mixed integer linear program (MILP) is a linear program where only some of
the decision variables are assumed to be integer. A MILP takes the form
min(x,y) c · x + d · y
s.t. Ax + Ey ≤ b
x ≥ 0, y ≥ 0
y ∈ Zp
[Figure: Feasible region of the example integer program with objective Z = 10y1 + 26y2 and constraints 11y1 + 3y2 ≥ 21 and 6y1 + 20y2 ≥ 39, showing the optimal LP solution (1.5, 1.5) and the optimal IP solution.]
Still, while ignoring the integrality constraints and solving the resulting lin-
ear program may not lead to an optimal solution, this linear program relaxation
is still very useful, because it provides a lower bound on the optimal value of the
objective function. In the above example, the optimal solution without integral-
ity constraints had an objective function value of 54. Restricting the feasible
set more by adding the integer constraints cannot improve this. Therefore, even
if we don’t know the optimal integer solution of the optimization problem, we
know its objective function value cannot be lower than 54.
Furthermore, any feasible integer solution provides an upper bound on the
optimal value for the objective. For instance, the rounded solution y1 = y2 = 2
is feasible, and has an objective function value of 72. Even if we don’t know
whether this solution is optimal or not, this solution tells us that the optimal
objective function value can’t be any higher than 72. We therefore have a
corresponding pair of upper and lower bounds. Solving the LP relaxation, and
calculating the objective function at the feasible solution (2, 2) tell us that the
optimal objective function value (often denoted Z ∗ ) must lie between 54 and
72.
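These bounds are easy to verify numerically. The sketch below (our illustration) computes the LP relaxation optimum by solving the 2 × 2 system of binding constraints with Cramer's rule, and evaluates the rounded feasible solution:

```python
# Bounds for the example integer program
#     min Z = 10*y1 + 26*y2
#     s.t. 11*y1 + 3*y2 >= 21,  6*y1 + 20*y2 >= 39,  y >= 0.

def Z(y1, y2):
    return 10 * y1 + 26 * y2

# Lower bound: the LP relaxation optimum, at the intersection of the
# two constraint lines (solved by Cramer's rule).
det = 11 * 20 - 3 * 6
y1 = (21 * 20 - 3 * 39) / det
y2 = (11 * 39 - 6 * 21) / det
lower = Z(y1, y2)

# Upper bound: the rounded feasible integer solution (2, 2).
upper = Z(2, 2)

print((y1, y2), lower, upper)  # (1.5, 1.5) 54.0 72
```

Any optimal integer objective value Z∗ must lie between these two numbers.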
A common framework for solving integer programs is to work to bring these
bounds closer together. Upper bounds can be tightened by finding better feasi-
ble solutions (with lower objective function values that are closer to the optimal
solution). Lower bounds can be tightened by solving “partial relaxations” where
only some of the integrality constraints are enforced; these will have higher ob-
jective function values than a full relaxation, which are also closer to the optimal
solution (but from the other side). Ultimately, we determine a decreasing sequence of upper bounds Z1UB ≥ Z2UB ≥ · · · ≥ Z∗ and an increasing sequence of lower bounds Z1LB ≤ Z2LB ≤ · · · ≤ Z∗. When the difference between the bounds is small enough (say, within ε), we know that the feasible solution corresponding to the best-known upper bound is within ε of being optimal to the original integer program.
The success of such a framework depends on the strength of the bounds.
The tighter the upper and lower bounds are, the greater the guarantee we can
provide for the solution we find. Effectively solving these problems often requires
intelligently exploiting the problem structure, and expert domain knowledge, in
order to provide good feasible solutions (for upper bounds), and good partial
relaxations (for lower bounds). The branch and bound algorithm (described in Section C.5.1) is one framework for doing this. Before presenting this method, we
will provide another concrete example of an integer program for facility location,
and how it relates to the selection of good bounds.
Consider the case of locating a certain number of facilities in a region (say,
warehouses, or fire stations). Let I denote the set of locations that demand
what the facility provides, and J the set of potential locations where facilities
can be built. Let cij denote the cost of meeting the demand at location i ∈ I
from facility j ∈ J. Let fj represent the fixed cost of locating a facility at site
j ∈ J. There are two sets of decision variables corresponding to location and
assignment decisions. The location-related decision variable yj takes the value
1 if a facility is located at site j and 0 otherwise. The decision variable xij
C.5. INTEGER PROGRAMMING 563
denotes the fraction of demand for center i met by facility j. The xij variables can be continuous (a demand point's needs can be met by multiple facilities), whereas the yj values must be integer (a facility cannot be “half-built”). The optimization model for the uncapacitated facility location problem (UFLP) is shown below:
O-UFLP:
min(x,y) Σi∈I Σj∈J cij xij + Σj∈J fj yj (C.18)
s.t. Σj∈J xij = 1 ∀i ∈ I (C.19)
Σi∈I xij ≤ |I| yj ∀j ∈ J (C.20)
yj ∈ {0, 1} ∀j ∈ J (C.21)
xij ≥ 0 ∀i ∈ I, ∀j ∈ J (C.22)
The objective of the UFLP is to locate facilities and assign demand points to
facilities so that the overall facility location costs and transportation costs are
minimized, as shown in equation (C.18). Constraint (C.19) ensures all demands
are met. Constraint (C.20) ensures that the demand points are assigned to open
facilities only. (If yj = 0, no facility is located at site j, so no demand can be
served from there; if yj = 1, then potentially all demand could be served from
there.)
Now consider a new optimization model consisting of the same objective
function (C.18), along with constraints (C.19), (C.21), and (C.22), but with
constraint (C.20) replaced by: xij ≤ yj ∀i ∈ I, ∀j ∈ J.
S-UFLP:
min(x,y) Σi∈I Σj∈J cij xij + Σj∈J fj yj (C.23)
s.t. Σj∈J xij = 1 ∀i ∈ I (C.24)
xij ≤ yj ∀i ∈ I, ∀j ∈ J (C.25)
yj ∈ {0, 1} ∀j ∈ J (C.26)
xij ≥ 0 ∀i ∈ I, ∀j ∈ J (C.27)
Let P and Q denote the feasible regions of the linear relaxations for models
O-UFLP and S-UFLP, respectively. We can show that Q ⊆ P , that is, any
solution feasible to S-UFLP is also feasible to O-UFLP. Any point which satisfies
xij ≤ yj for all i must also satisfy Σi∈I xij ≤ |I| yj, as can be seen by summing the constraints xij ≤ yj over all i ∈ I. Now, when restricted to integer values of y, the set of
feasible solutions is the same in both cases. However, when considering their
LP relaxations, there are solutions which are feasible to O-UFLP which are not
feasible to S-UFLP. For example, if y1 = 1/2, a solution of the form x11 = 1 and xi1 = 0 for i ∈ {2, . . . , |I|} satisfies Σi xi1 ≤ |I| y1 but not x11 ≤ y1. As a
result of this, the optimal solution to a relaxation based on S-UFLP will have a
greater (or equal) objective function value than the same relaxation based on O-
UFLP, producing a tighter (and more useful) lower bound on the original integer
program. We therefore say that S-UFLP is a stronger (or tighter ) formulation
for UFLP. This discussion can be generalized in the following result:
Proposition C.1. Consider two feasible regions P and Q for the linear relaxation of the same minimization integer program Z = min{cT x : x ∈ X ∩ Zn}, with Q ⊆ P. If zPLP = min{cT x : x ∈ P} and zQLP = min{cT x : x ∈ Q}, then zPLP ≤ zQLP.
The intuition is shown in Figure C.10. Note that Q ∩ Zn = P ∩ Zn. However, since Q ⊆ P, the corresponding LP objective value will be larger. Stronger formulations leading to tighter linear programming bounds will result in faster solutions, as long as the time needed to solve the tighter linear programming relaxation is not significantly higher.
[Branch-and-bound tree for the example; each node shows the optimal objective value and solution of its LP relaxation.]
LP1: Z∗ = 54, y = (1.5, 1.5). Branch on y2:
  y2 ≥ 2 → LP2: Z∗ = 65.64, y = (1.36, 2). Branch on y1:
    y1 ≥ 2 → LP4: Z∗ = 72, y = (2, 2)
    y1 ≤ 1 → LP5: Z∗ = 96.67, y = (1, 3.33)
  y2 ≤ 1 → LP3: Z∗ = 57.67, y = (3.17, 1). Branch on y1:
    y1 ≥ 4 → LP6: Z∗ = 59.5, y = (4, 0.75). Branch on y2:
      y2 ≥ 1 → LP8: Z∗ = 66, y = (4, 1)
      y2 ≤ 0 → LP9: Z∗ = 65, y = (6.5, 0). Branch on y1:
        y1 ≥ 7 → LP10: Z∗ = 70, y = (7, 0)
        y1 ≤ 6 → LP11: infeasible
    y1 ≤ 3 → LP7: infeasible
The algorithm begins with upper bound ZUB = ∞ and lower bound ZLB = −∞, since we do not yet know any information about the optimal solution. The linear relaxation of the above integer program is solved. Let XLP1 denote the feasible region of the LP relaxation; thus XLP1 = {y ∈ R2+ : 11y1 + 3y2 ≥ 21, 6y1 + 20y2 ≥ 39}. The optimal value of the linear programming objective function is Z∗LP1 = 54, and the solution is y1 = 1.5, y2 = 1.5. The best lower bound is now updated to ZLB = 54.
Now, the optimal solution either has y2 as an integer greater than or equal
to 2, or as an integer less than or equal to 1. We consider each option in turn.
(We could have also branched on y1 ; it would be instructive for you to solve the
problem that way.) We generate two linear programs by adding the constraints y2 ≥ 2 to one, and y2 ≤ 1 to the other. The feasible region of the second
linear program XLP 2 = XLP 1 ∩ {y2 ≥ 2} and the feasible region of the third
linear program XLP 3 = XLP 1 ∩ {y2 ≤ 1}. Solving the two linear programs we
∗ ∗
get ZLP 2 = 65.64 with the solution y1 = 1.36, y2 = 2 and ZLP 3 = 57.67 with
the solution y1 = 3.17, y2 = 1.
Let us first examine LP2. Since y2 is already an integer in its optimal solution, we don't need to branch again on y2. However, y1 is not an integer, and in the optimal solution it must be either greater than or equal to 2, or less than or equal to 1. So the constraints y1 ≥ 2 and y1 ≤ 1 are added to the feasible region of LP2 to generate new linear programs LP4 and LP5, with the respective feasible regions XLP4 = XLP2 ∩ {y1 ≥ 2} and XLP5 = XLP2 ∩ {y1 ≤ 1}. Solving LP4, we get the optimal objective value Z∗LP4 = 72 with the solution y1 = 2, y2 = 2. The solution to this linear programming relaxation is an integer solution which is feasible for the original integer program, so Z∗LP4 serves as an upper bound on the optimal integer objective. Moreover, since Z∗LP4 < ZUB, we update ZUB = 72; this is the first upper bound we have on the optimal objective function value. Since this is an integer solution, we do not search further in this direction. This is called “fathoming by optimality.”
Solving LP5, we get the optimal objective value 96.67 with the solution y1 = 1, y2 = 3.33. Since the LP objective value is greater than the current best upper bound of 72, there is no point in exploring further in this direction either, and the search is stopped here. This is called “fathoming by bound.” Notice that this cuts off a considerable portion of the feasible region: we do not have to explore it any further, since the best possible objective function value there is at least 96.67, whereas we have already found a solution (2, 2) with an objective of 72.
Let us now look at LP3. Since y2 takes an integer value we don't need to branch on y2. But with respect to y1, the optimal solution is either an integer at least 4, or at most 3. So the constraints y1 ≥ 4 and y1 ≤ 3 are added to the feasible region of LP3 to get new linear programs LP6 and LP7, with respective feasible regions XLP6 = XLP3 ∩ {y1 ≥ 4} and XLP7 = XLP3 ∩ {y1 ≤ 3}. LP7 turns out to be infeasible, so the search is stopped along this direction. This is called “fathoming by infeasibility.” LP6, on the other hand, has an optimal objective function value of 59.5 with y1 = 4, y2 = 0.75. We branch again on y2, adding the constraints y2 ≥ 1 and y2 ≤ 0 to the feasible region of LP6 to obtain new linear programs LP8 and LP9, with feasible regions XLP8 = XLP6 ∩ {y2 ≥ 1} and XLP9 = XLP6 ∩ {y2 ≤ 0}. Solving LP8 gives the integer solution y1 = 4, y2 = 1 with objective value 66; since 66 < 72, we update ZUB = 66 and fathom by optimality. Solving LP9 gives the objective value 65 with y1 = 6.5, y2 = 0; since 65 is less than the current upper bound, we must continue searching in this direction.
Looking at the solution of LP9, we generate two linear programs, LP10 and LP11, by adding the constraints y1 ≥ 7 and y1 ≤ 6 to the feasible region of LP9.
The feasible regions of the new linear programs are XLP10 = XLP9 ∩ {y1 ≥ 7} and XLP11 = XLP9 ∩ {y1 ≤ 6}. LP11 is infeasible, and therefore we fathom by
infeasibility. LP10 has an integer solution, but the objective function value is
greater than the current upper bound. Therefore we fathom by bound as well as
integrality. There are no more directions left to search. Therefore the optimal
solution corresponds to the current best upper bound of 66.
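Because this example is tiny, the branch-and-bound result can be confirmed by brute-force enumeration (our illustration; for realistically sized integer programs, enumeration is hopeless):

```python
# Confirm the branch-and-bound result by brute force.
#   min Z = 10*y1 + 26*y2
#   s.t. 11*y1 + 3*y2 >= 21, 6*y1 + 20*y2 >= 39, y1, y2 integer >= 0.
# Searching y <= 10 suffices here, since larger values only increase Z.

best = None
for y1 in range(11):
    for y2 in range(11):
        if 11 * y1 + 3 * y2 >= 21 and 6 * y1 + 20 * y2 >= 39:
            z = 10 * y1 + 26 * y2
            if best is None or z < best[0]:
                best = (z, y1, y2)

print(best)  # (66, 4, 1)
```

The enumeration agrees with branch and bound: the optimum is y1 = 4, y2 = 1 with Z∗ = 66.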
[Figure: A four-node network with arcs (1, 2), (1, 3), (2, 3), (2, 4), and (3, 4); each arc (i, j) is labeled with its cost cij and resource consumption rij.]
Each arc (i, j) has a binary decision variable xij, equal to 1 if the corresponding arc lies on the shortest path, and 0 otherwise. The shortest path formulation for the above network is:
min c12 x12 + c13 x13 + c23 x23 + c24 x24 + c34 x34
s.t. x12 + x13 =1
−x12 + x23 + x24 =0
−x13 − x23 + x34 =0
−x24 − x34 = −1
x12 , x13 , x23 , x24 , x34 ∈ {0, 1}
Now suppose each arc (i, j) also consumes an amount rij of some resource, and the total resource consumption of the chosen path cannot exceed a budget R. This adds the constraint
r12 x12 + r13 x13 + r23 x23 + r24 x24 + r34 x34 ≤ R .
Depending on the values of the resources consumed, the constraint matrix need
not be totally unimodular. In this case the linear programming relaxation does
not always have an integer solution. For this reason, the resource-constrained
shortest path problem is much harder to solve than the traditional shortest path
problem.
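For a network this small, the resource-constrained problem can be solved by enumerating paths. In the sketch below the costs, resources, and budget are invented for illustration only; note how the resource constraint can change the answer:

```python
# Resource-constrained shortest path on the four-node network, by path
# enumeration.  The costs c, resources r, and budget R are hypothetical
# values chosen for illustration.

c = {(1, 2): 2, (1, 3): 4, (2, 3): 1, (2, 4): 7, (3, 4): 3}
r = {(1, 2): 5, (1, 3): 1, (2, 3): 1, (2, 4): 1, (3, 4): 2}
R = 7  # resource budget

PATHS = [[1, 2, 4], [1, 3, 4], [1, 2, 3, 4]]  # all paths from 1 to 4

def cost_and_resource(path):
    arcs = list(zip(path, path[1:]))
    return sum(c[a] for a in arcs), sum(r[a] for a in arcs)

feasible = [(cost_and_resource(p), p) for p in PATHS
            if cost_and_resource(p)[1] <= R]
best = min(feasible)
print(best)
```

With these numbers the unconstrained shortest path 1 → 2 → 3 → 4 (cost 6) consumes 8 units of resource and exceeds the budget, so the constrained optimum is 1 → 3 → 4 with cost 7.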
C.6 Metaheuristics
In optimization, there is often a tradeoff between how widely applicable a solu-
tion method is, and how efficient or effective it is at solving specific problems.
For any one specific problem, a tailor-made solution process is likely much faster
than a generally-applicable method, but generating such a method requires more
effort and specialized knowledge, and is less “rewarding” in the sense that the
method can only be more narrowly applied. Some of the most general techniques
are heuristics, which are not guaranteed to find the global optimum solution,
but tend to work reasonably well in practice. They tend to be applied only to very large or very complicated problems which cannot be solved
exactly in a reasonable amount of time with our current knowledge.
Many engineers are initially uncomfortable with the idea of a heuristic. Af-
ter all, the goal of an optimization problem is to find an optimal solution, so
why should we settle for something which is only approximately “optimal,” of-
ten without any guarantees of how approximate the solution is? First, for very
complicated problems a good heuristic can often return a reasonably good so-
lution in much less time than it would take to find the exact, global optimal
solution. For many practical problems, the cost improvement from a reasonably
good solution to an exactly optimal one is not worth the extra expense (both
time and computational hardware) needed, particularly if the heuristic gets you
within the margin of error based on the input data.
Heuristics are also very, very common in psychology and nature. If I give
someone a map and ask them to find the shortest-distance route between two
points in a city by hand, they will almost certainly not formulate a mathematical model and solve it to provable optimality. Instead, they use mental
heuristics (rules of thumb based on experience) and can find paths which are
actually quite good. Many of the heuristics are inspired by things seen in nature.
An example is how ant colonies find food. When a lone wandering ant
encounters a food source, it returns to the colony and lays down a chemical
pheromone. Other ants who stumble across this pheromone begin to follow it to
the food source, and lay down more pheromones, and so forth. Over time, more
and more ants will travel to the food source, taking it back to the colony, until it is exhausted, at which point the pheromones will evaporate. Is this method
the optimal way to gather food? Perhaps not, but it performs well enough for
ants to have survived for millions of years!
Another example is the process of evolution through natural selection. The
human body, and many other organisms, function remarkably well in their habi-
tats, even if their biology is not exactly “optimal.”2 One of the most common
heuristics in use today, and one described below, is based on applying principles
of natural selection and mutation to a “population” of candidate solutions to
an optimization problem, using an evolutionary process to identify better and
better solutions over time. This volume describes two heuristics: simulated an-
nealing, and genetic algorithms, both of which can be applied to many different
optimization problems.
2 For instance, in humans the retina is “backwards,” creating a blind spot; in giraffes, the
laryngeal nerve takes an exceptionally long and roundabout path; and the descent of the testes
makes men more vulnerable to hernias later in life.
1. Choose some initial feasible solution x ∈ X, and calculate the value of the objective function f(x).
2. Generate a neighboring feasible solution x′.
3. Calculate f(x′).
4. If f(x′) ≤ f(x), the new solution is better than the old solution, so update the current solution by setting x equal to x′.
5. Otherwise, retain the current solution x.
6. Return to step 2 and repeat until we are unable to make further progress.
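These steps can be sketched in a few lines of code. Here local search minimizes f(x) = (x − 3)² over the integers, with the neighbors of x taken to be x − 1 and x + 1 (an illustrative choice of problem and neighborhood on our part):

```python
# Local search for minimizing f(x) = (x - 3)^2 over the integers, with
# neighbors x - 1 and x + 1.  Because this objective has a single local
# minimum, local search finds the global optimum from any start.

def f(x):
    return (x - 3) ** 2

def local_search(x):
    while True:
        improving = [y for y in (x - 1, x + 1) if f(y) < f(x)]
        if not improving:        # no neighbor is better: stop
            return x
        x = improving[0]         # move to an improving neighbor

print(local_search(10))  # 3
```

On a multimodal objective the same loop would stop at whichever local minimum it reaches first, which is exactly the failure mode discussed next.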
One way to visualize local search is a hiker trying to find the lowest elevation
point in a mountain range. In this analogy, the park boundaries are the feasible
set, the elevation of any point is the objective function, and the location of the
hiker is the decision variable. In local search, starting from his or her initial
position, the hiker looks around and finds a nearby point which is lower than
their current point. If such a point can be found, they move in that direction
and repeat the process. If every neighboring point is higher, then they stop and
conclude that they have found the lowest point in the park.
It is not hard to see why this strategy can fail; if there are multiple local
optima, the hiker can easily get stuck in a point which is not the lowest (Fig-
ure C.13). Simulated annealing attempts to overcome this deficiency of local
search by allowing a provision for occasionally moving in an uphill direction, in
hopes of finding an even lower point later on. Of course, we don’t always want
to move in an uphill direction and have a preference for downhill directions, but
this preference cannot be absolute if there is any hope of escaping local minima.
Simulated annealing accomplishes this by making the decision to move or
not probabilistic, introducing a temperature parameter T which controls how
likely you are to accept an “uphill” move. When the temperature is high,
the probability of moving uphill is large, but when the temperature is low,
the probability of moving uphill becomes small. In simulated annealing, the
temperature is controlled with a cooling schedule. Initially, the temperature
is kept high to encourage a broad exploration of the feasible region. As the
temperature decreases slowly, the solution is drawn more and more to lower
areas. The cooling schedule can be defined by an initial temperature T0 , a final
temperature Tf , the number of search iterations n at a given temperature level,
Figure C.13: Moving downhill from the current location may not lead to the
global optimum.
and a scaling factor k ∈ (0, 1) which is applied every n iterations to reduce the
temperature.
Finally, because the search moves both uphill and downhill, there is no
guarantee that the final point of the search is the best point found so far. So,
it is worthwhile to keep track of the best solution x∗ encountered during the
algorithm. (This is analogous to the hiker keeping a record of the lowest point
observed, and returning to that point when done searching.)
So, the simulated annealing algorithm can be stated as follows:
1. Choose some initial feasible solution x ∈ X, and calculate the value of the objective function f(x).
2. Record the best solution found so far, x∗ ← x.
3. Set the temperature T to its initial value T0.
4. Repeat the following steps n times:
(a) Generate a neighboring feasible solution x′.
(b) Calculate f(x′).
(c) If f(x′) ≤ f(x), accept the move by setting x ← x′, and update x∗ if f(x) < f(x∗).
(d) Otherwise, accept the move with probability exp(−(f(x′) − f(x))/T); with the remaining probability, reject it and retain x.
5. If T > Tf, reduce the temperature to kT and return to step 4; otherwise, stop and report the best solution x∗.
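A compact implementation of simulated annealing might look as follows. The objective (a one-dimensional function with two local minima), the neighborhood (a uniform step of up to 0.5, clipped to [−3, 3]), and the cooling parameters are all illustrative choices on our part, not prescriptions:

```python
import math
import random

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x    # global minimum near x = -1

def simulated_annealing(x, T0=1.0, Tf=1e-3, n=20, k=0.9, seed=1):
    rng = random.Random(seed)
    best_x, best_f = x, f(x)
    T = T0
    while T > Tf:
        for _ in range(n):               # n moves at each temperature
            x_new = min(3.0, max(-3.0, x + rng.uniform(-0.5, 0.5)))
            delta = f(x_new) - f(x)
            # Downhill moves are always accepted; uphill moves are
            # accepted with probability exp(-delta / T).
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                x = x_new
                if f(x) < best_f:
                    best_x, best_f = x, f(x)
        T *= k                           # cooling schedule
    return best_x, best_f

print(simulated_annealing(2.0))
```

Note that the best solution encountered is tracked separately from the current one, since the final point of the search need not be the best point found.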
The key step is step 4d. Notice how the probability of “moving uphill”
depends on two factors: the temperature, and how much the objective function
will increase. The algorithm is more likely to accept an uphill move if it is only
slightly uphill, or if the temperature is high. The exponential function captures
these effects while keeping the probability between 0 and 1. A few points require
explanation.
How should the cooling schedule be chosen? Unfortunately, it is
hard to give general guidance here. Heuristics often have to be “tuned” for
a particular problem: some problems do better with higher temperatures and
slower cooling (k values closer to 1, n larger), others work fine with faster cooling.
When you use simulated annealing, you should try different variations of the
cooling schedule to identify one that works well for your specific problem.
How should an initial solution be chosen? It is often helpful if the
initial solution is relatively close to the optimal solution. For instance, if the
optimization problem concerns business operations, the current operational plan
can be used as the initial solution for further optimization. However, it’s easy
to go overboard with this. You don't want to spend so long coming up with
a good initial solution that it would have been faster to simply run simulated
annealing for longer starting from a worse solution. The ideal is to think of a
good, quick rule of thumb for generating a reasonable initial solution; failing
that, you can always choose the initial solution randomly.
How do I define a neighboring solution for step 4a? Again, this
is problem-specific, and one of the decisions that must be made when apply-
ing simulated annealing. A good neighborhood definition should involve points
which are “close” to the current solution in some way, but ensure that feasi-
ble solutions are connected in the sense that any two feasible solutions can be
reached by a chain of neighboring solutions. For the examples in Section B.4,
some possibilities (emphasis on possibilities, there are other choices) are:
Transit frequency setting problem: The decision variables n are the num-
ber of buses assigned to each route. Given a current solution n, a neighbor-
ing solution is one where exactly one bus has been assigned to a different
route.
Facility location problem: The decision variables are the intersections that
the three terminals are located at, L1 , L2 , and L3 . Given current values for
these, in a neighboring solution two of the three terminals are at the same
location, but one of the three has been assigned to a different location.
Shortest path problem: The decision variables specify a path between the
origin and destination. Given a current path, a neighboring path is one which differs in only one intersection. (Can you think of a way to express this
mathematically?)
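For the transit frequency setting problem, the neighborhood rule above might be coded as follows (the route names and bus counts are invented for illustration):

```python
import random

# Neighboring solution for the transit frequency setting problem: move
# exactly one bus from one route to another.

def neighbor(n_buses, rng):
    src = rng.choice([r for r in n_buses if n_buses[r] > 0])
    dst = rng.choice([r for r in n_buses if r != src])
    new = dict(n_buses)
    new[src] -= 1
    new[dst] += 1
    return new

rng = random.Random(0)
current = {"A": 4, "B": 2, "C": 1}
print(neighbor(current, rng))
```

Every move preserves the total fleet size, and any feasible allocation can be reached from any other by a chain of such moves, satisfying the connectedness requirement described above.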
This probability is high, because (being one of the early iterations) the temper-
ature is set high. If the temperature were lower, the probability of accepting
this move would be lower as well.
Figure C.16: Progress of simulated annealing. Solid line shows current solution
cost, dashed line best cost so far.
Supposing that the move is accepted, the algorithm replaces the current
solution with the candidate and continues as before. If, on the other hand, the
move is rejected, the algorithm generates another candidate solution based on
the same current solution (5,1), (8,6), and (1,0). The algorithm continues in the
same way, reducing the temperature by 25% every 8 iterations.
The progress of the algorithm until termination is shown in Figure C.16. The
solid line shows the cost of the current solution, while the dashed line tracks the
cost of the best solution found so far. A few observations are worth making.
First, in the early iterations the cost is highly variable, but towards termination
the cost becomes more stable. This is due to the reduction in temperature
which occurs over successive iterations. When the temperature is high, nearly
any move will be accepted so one expects large fluctuations in the cost. When
the temperature is low, the algorithm is less likely to accept cost-increasing
moves, so fewer fluctuations are seen. Also notice that the best solution was
found shortly after iteration 700. The final solution is not the best, although it
is close.
The best facility locations found by simulated annealing are shown in Fig-
ure C.17, with a total cost of 92.1.
Genetic algorithms are based on the processes of selection, reproduction, and mutation which are observed in biology. For our purposes, natural selection means identifying solutions in the popu-
lation which have good (low) values of the objective function. The hope is that
there are certain aspects or patterns in these solutions which make them good,
which can be maintained in future generations. Given an initial population as
the first generation, subsequent generations are created by choosing good solu-
tions from the previous generation, and “breeding” them with each other in a
process called crossover which mimics sexual reproduction: new “child” solu-
tions are created by mixing characteristics from two “parents.” Lastly, there
is a random mutation element, where solutions are changed externally with a
small probability. If all goes well, after multiple generations the population will
tend towards better and better solutions to the optimization problem.
At a high level, the algorithm is thus very straightforward and presented
below. However, at a lower level, there are a number of careful choices which
need to be made about how to implement each step for a particular problem.
Thus, each of the steps is described in more detail below. The high-level version
of genetic algorithms is as follows:
1. Generate an initial population of feasible solutions.
2. Select pairs of parent solutions from the current generation.
3. Combine each pair of parent solutions to form new “child” solutions (crossover).
4. Apply random mutation to some of the child solutions.
5. Form the next generation from the resulting solutions, and return to step 2 until satisfied with the best solution found.
Although not listed explicitly, it is a good idea to keep track at every stage
of the best solution x∗ found so far. Just like in simulated annealing, there is
no guarantee that the best solution will be found in the last generation. So,
whenever a new solution is formed, calculate its objective function value, and
record the solution if it is better than any found so far.
Here are explanations of each step in more detail. It is very important to
realize that there are many ways to do each of these steps and that what is
presented below is one specific example intended to give the general flavor while
sparing unnecessary complications at this stage.
Generate an initial population of feasible solutions: In contrast
with simulated annealing, where we had to generate an initial feasible solution,
with genetic algorithms we must generate a larger population of initial feasible
solutions. While it is still desirable to have these initial feasible solutions be rea-
sonably good, it is also very important to have some diversity in the population.
Genetic algorithms work by combining characteristics of different solutions to-
gether. If there is little diversity in the initial population the difference between
successive generations will be small and progress will be slow.
Selection of parent solutions: Parent solutions should be chosen in a
way that better solutions (lower objective function values) are more likely to
be chosen. This is intended to mimic natural selection, where organisms better
adapted for a particular environment are more likely to reproduce. One way
to do this is through tournament selection, where a number of solutions from
the old generation are selected randomly, and the one with the best objective
function value is chosen as the first parent. Repeating the “tournament” again,
randomly select another subset of solutions from the old generation, and choose
the best one as the second parent. The number of entrants in the tournament
is a parameter that you must choose.
Combining parent solutions: This is perhaps the trickiest part of ge-
netic algorithms: how can we combine two feasible solutions to generate a new
feasible solution which retains aspects of both parents? The exact process will
differ from problem to problem. Here are some ideas, based on the example
problems from Section B.4:
Transit frequency setting problem: The decision variables are the num-
ber of buses on each route. Another way of “encoding” this decision is
to make a list assigning each bus in the fleet to a corresponding route.
(Clearly, given such a list, we can construct the nr values by counting
how many times a route appears.) Then, to generate a new list from two
parent lists, we can assign each bus to either its route in one parent, or
its route in the other parent, choosing randomly for each bus.
Facility location problem: For each facility in the child solution, make its
location the same as the location of that facility in one of the two parents
(chosen randomly).
1. The first facility is located either at (2,3) or (1,4); randomly choose one
of them, say, (2,3).
2. The second facility is located either at (1,7) or (7,6); randomly choose one
of them, say, (1,7).
3. The third facility is located either at (8,7) or (5,0); randomly choose one
of them, say, (5,0).
This gives a new solution (2,3), (1,7), (5,0) in the next generation, as shown
in Figure C.18. With 5% probability, this solution will be “mutated.” If this
solution is selected for mutation, one of the three facilities is randomly reassigned
to another location. For instance, the facility at (1,7) may be reassigned to
(0,4). This process is repeated until all 100 solutions in the next generation
have been created.
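These crossover and mutation operations for the facility location problem can be sketched in Python; the list of candidate sites passed to `mutate` is an assumption made for illustration:

```python
import random

def crossover(parent1, parent2):
    """Child takes each facility's location from one of the two parents,
    chosen at random facility-by-facility."""
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(solution, candidate_sites):
    """Reassign one randomly chosen facility to a random candidate site."""
    child = list(solution)
    child[random.randrange(len(child))] = random.choice(candidate_sites)
    return child

# The example from the text:
parent1 = [(2, 3), (1, 7), (8, 7)]
parent2 = [(1, 4), (7, 6), (5, 0)]
child = crossover(parent1, parent2)   # e.g., [(2,3), (1,7), (5,0)]
```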
Tournament 1: (5,1) (8,6) (6,1), cost 158.0; (8,1) (1,7) (3,9), cost 167.4;
(2,3) (1,7) (8,7), cost 118.7. Winner (Parent 1): (2,3) (1,7) (8,7).
Tournament 2: (6,7) (2,1) (0,0), cost 125.3; (1,4) (7,6) (5,0), cost 112.6;
(8,8) (2,1) (5,0), cost 130.7. Winner (Parent 2): (1,4) (7,6) (5,0).
Child after crossover: (2,3) (1,7) (5,0). Child after mutation: (2,3) (0,4) (5,0).
Figure C.18: Tournament selection, crossover, and mutation in the facility
location example.
Figure C.19: Progress of genetic algorithm. Solid line shows average generation
cost, dashed line best cost so far, crosses cost of individual solutions.
Figure C.19 shows the progress of the algorithm over ten generations. The
solid line shows the average cost in each generation. The dashed line shows the
cost of the best solution found so far, and the crosses show the cost of each of
the solutions comprising the generations. Since lower-cost alternatives are more
likely to win the tournaments and be selected as parents, the average cost of
each generation decreases. Figure C.20 shows the locations of the terminals in
the best solution found.
D.1 Algorithms
The word algorithm appears often in applied mathematics and computer science
to denote a concrete procedure for accomplishing a task, specified in enough de-
tail that it can be implemented by a computer (often in a programming language
like C or Python). We adopt Knuth’s (1997) criteria for operationalizing this
definition: an algorithm must be finite (always stopping in a finite number of
steps), definite (each step is unambiguous), effective (each operation is simple
enough to be done in a finite amount of time), and map inputs to at least one
output. An algorithm can have any finite number of inputs, even zero: an al-
gorithm with no inputs would compute the same thing every time it is run,
but that still might be useful (e.g., to answer a difficult question). However,
it must have at least one output; an algorithm must report something when it
terminates.
For example, a step such as “add integers x and y” is definite and effective.
A step such as “multiply π by 2” is definite, but not effective since π is an
irrational number with an infinite number of digits (there is no way to multiply
all the digits by 2). Such a step can be made effective by specifying “round π
to 15 digits and multiply it by 2.” A step such as “pick an integer between 1
and 10” is effective, but it is not definite (how exactly should I pick such an
integer?) One way to make it definite is to specify “randomly pick an integer
between 1 and 10, with each possibility equally likely.”
Another example of a “definite but not effective” step is “add 1 to x if there
are one million consecutive zeros in the decimal expansion of π.” It is definite:
either there are that many consecutive zeros or there are not. However, we
don’t already know whether that is true, and there is no simple way to find out.
We could spend a lot of time computing digits of π and maybe we will find this
many zeros; but maybe we won’t, and we don’t know if it’s because there aren’t
any, or if we just haven’t computed far enough. With our current knowledge
of mathematics, there is no way to answer that question in a finite amount of
time, so this step is not effective.
Many algorithms contain three main components:
Initialization: These are the first steps performed. The initialization steps
prepare for the main algorithm, assembling the inputs, and providing ini-
tial or default values to variables which will be used later on.
Body: The steps in the body form the main core of the algorithm. Many
algorithms are iterative, and the steps in the body are repeated multiple
times.
Stopping criterion: An algorithm must be finite, and not run forever. How-
ever, we often don’t know how many iterations are required in advance.
Therefore, we periodically check a stopping criterion; once it is satisfied,
we can stop. There may also be a few final processing and “clean up”
steps needed at the end of the algorithm.
You were probably taught an algorithm for adding two positive integers when
you were a child (although that word was probably not used). One way to do
this is to work from right to left, adding the digits one at a time. If the sum
of the digits is 9 or less, then that is the corresponding digit in the sum; if it
is 10 or more, subtract 10 to get the digit in the sum, and “carry” by adding 1
to the next digit to the left. For example, to add 158 and 371, you would start
by adding 8 + 1 = 9 for the rightmost digit; 5 + 7 = 12, so enter 2 for the next
digit and carry a 1 for the next digit: 1 + 3 + 1 = 5, so the sum is 529.
This can be specified as an algorithm in the following way. For simplicity, we
assume that the two integers we are adding (a and b) have the same number of
digits n, and they are specified as a = an an−1 . . . a2 a1 and b = bn bn−1 . . . b2 b1 .
(In the previous example, a = 158, and a3 = 1, a2 = 5, and a1 = 8; and likewise
for b = 371.) We also assume that we know how to add two single digits, perhaps
with an “addition table” telling us the value of adding 0 + 0, 0 + 1, and so on,
up through 9 + 9. The steps below describe the addition algorithm, calculating
the digits in the sum s.
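As a sketch, these steps can be written in Python, with the digits stored least-significant-first; the step numbers referenced in the discussion below appear as comments:

```python
def add_digits(a, b):
    """Add two equal-length digit lists a and b (least significant digit first),
    returning the digit list of the sum."""
    n = len(a)
    s = []
    c = 0                        # Step 1: initialize the carry digit
    for i in range(n):           # Steps 2-4: loop over the digit positions
        digit = c + a[i] + b[i]  # Step 2: add the digits and the carry
        if digit >= 10:          # Step 3: carry if the digit sum is 10 or more
            digit -= 10
            c = 1
        else:
            c = 0
        s.append(digit)
    if c == 1:                   # Step 5: a leftover carry adds a leading digit
        s.append(1)
    return s
```

For example, `add_digits([8, 5, 1], [1, 7, 3])` computes 158 + 371 and returns `[9, 2, 5]`, the digits of 529 from right to left.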
Note the use of the ← operator to mean assignment: calculate the expression
on the right-hand side, and store the result in the variable on the left-hand side.
For instance, c ← 0 sets the variable c to zero. Many programming languages
use the = sign to denote this (you would type c = 0). In this text, we will
reserve = to refer to mathematical equality, the statement that the left-hand
side and right-hand side are currently equal to each other. So a statement like
x = x + 1 can never be true (no number is equal to one plus itself). However, a
statement like x ← x + 1 is perfectly fine; it means add one to the current value
in x, and then store that new value in x. Some programming languages use
context to determine whether = is meant in the sense of assignment or equality,
while others use different operators for the two (for instance, C and Python
use = for assignment, and == for equality). To avoid confusion, we will use the
different characters ← for assignment, and = for equality.
In this algorithm, step 1 is the initialization, and steps 2–3 are the body.
Step 4 contains the stopping criterion, expressed in a “negative” way (“repeat
as long as i < n” rather than “stop repeating if i ≥ n.”) Step 5 contains a final
step before we terminate (if there is a carry digit after adding the n-th place,
we need to add a 1 at the left-hand side of the sum, which will have one more
digit than the original numbers.)
In general, when reading an algorithm, you should work out a small example
on a separate piece of paper to see how the steps work. Trying to mentally
think through all the steps adds unnecessary difficulty, especially for complex
algorithms. It is much easier to see how it works on a small example or two, and
then study the specific steps to see how they fit together.1 So, let’s see how this
algorithm works for the numbers a = 158 and b = 371. These are three-digit
numbers, so n = 3. After the initialization (step 1), c = 0 and i = 1, meaning
we are working with the first (right-most) digit and there is nothing to carry
so far. Step 2 adds the two right-most digits: s1 = c + a1 + b1 = 9, giving the
right-most digit in the sum. This is less than 10, so the carry digit is set to zero
(it is already zero, so nothing changes). The current digit is i = 1, which is less
than 3; so in step 4 we increase it and return to step 2 with i = 2. Repeating
with the second digit, we compute s2 = c + a2 + b2 = 12. This is greater than
10, so in step 3, set the carry digit c to 1, and reduce the sum digit by 10, so
s2 = 2, the second digit from the right in the sum. In step 4, the current digit
is still less than the number of digits, so we increase i and return to step 2 with
i = 3. With the third digit, we compute s3 = c + a3 + b3 = 5; since this is less
than 10, we set the carry digit to zero. Now i = n, so we meet the stopping
criterion and move to step 5. Since the carry digit is zero, we are done and
return the sum s = 529 (collecting the individual digits computed above). Step
5 would have handled the case when there is a carry digit in the end, meaning
1 This is similar to studying a play or movie. The best way to do this is to first watch a
performance, and then turn to the script to study it in more detail. Starting with the script is
much harder, and not really how a movie is meant to be experienced. So too with algorithms.
the sum has more digits than the addends; for instance, the sum of 999 and 111
is 1110, and this step would have added the ‘1’ at the very left.
This algorithm satisfies all the criteria. It is finite, since we repeat the body
once for every digit, and any integer has a finite number of digits. It is definite,
since every step is spelled out precisely without any room for ambiguity. It is
effective, since every step can be done in a finite amount of time. It takes two
inputs (the numbers a and b), and produces one output (the sum s).
An analogy is often made between algorithms and cooking recipes. For
example, cooking pasta can be expressed by the following recipe:
1. Bring a pot of water to a boil.
2. Add the pasta, and stir until the water returns to a boil.
3. Cook until the pasta is tender, then drain.
Which of the criteria of an algorithm are satisfied by a recipe like this? Which
of them are violated, and are they violated in a “weak” way (where you could
satisfy our definition by adding some more explanation to the step) or a “strong”
way (a fundamental violation that can’t be easily fixed)?
D.2 Pseudocode
The previous section gave the formal definition of an algorithm. Sometimes,
specifying an algorithm formally — either writing out all the steps in full detail
in English (as in the addition example above) or with source code in a pro-
gramming language — may be unnecessarily verbose or confusing. Pseudocode
is a less formal way of specifying the steps of an algorithm. Pseudocode can
be thought of as a midway point between intuitive English descriptions (which
express the general idea, but are usually ambiguous) and computer code (un-
ambiguous, but harder to read and understand). The goal of pseudocode is to
express the main ideas of an algorithm while avoiding unnecessary pedantry. It
communicates the algorithm in enough detail that a programmer can write ac-
tual code in a specific language, but without the “fiddly bits” inherent in specific
languages (declaring variables, calling library functions, details of data structure
implementations, and so forth). Algorithm 5 is an example of pseudocode, and
illustrates some common conventions.
This pseudocode is for an algorithm that finds the maximum of n numbers.
As an example, let us find the maximum of five numbers: 7, 3, 5, 9, and 1. In
the initialization, the five numbers are assigned to the list or set or array A, so
that A[1] = 7, A[2] = 3, . . . , A[5] = 1.2 The variable max value is initialized to
−∞ (in an implementation, this would be a very large negative number).
2 Note that some programming languages index arrays from 0, rather than 1; using pseu-
docode lets us abstract from this. The assumption is that a programmer working in such a
language would make the necessary changes.
Initialize:
A = {A[1], A[2], . . . , A[n]}: Set, list, or array containing the n numbers
max value ← −∞
for i ← 1 to n do
if A[i] > max value then
max value ← A[i]
end
end
Algorithm 5: Finding the maximum of n numbers.
The for loop is a control structure which repeatedly executes a set of instruc-
tions a fixed number of times, as specified by an iterator variable. In this case, i
is the iterator variable which takes values from 1 to n. The variable i takes the
values 1, 2, 3, 4, and 5 in turn; for each of these values, the set of instructions
indented under the for loop is executed. Inside the for loop is an if block. This
is another control structure which executes the indented instructions when (and
only when) a certain condition is satisfied.
Each iteration corresponds to one time that the instructions in the for loop
are executed. We now illustrate a few iterations. At first, the iterator i takes
the value 1. The if statement then checks whether A[1] = 7 > max value =
−∞. Since 7 > −∞, the instructions under the if statement are executed, and
max value now takes the value 7. Next, the iterator variable i takes the value
2. The if statement checks whether A[2] = 3 > max value = 7. This is not
true, so the instructions under the if statement are not executed, and max value
remains at 7. After five iterations, we will find that max value is equal to 9,
which is indeed the maximum of the given five numbers. The stopping criterion
for this algorithm is when the for loop has been executed for all the numbers in
the array.
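As the text notes, pseudocode gives a programmer enough detail to write actual code; a direct Python translation of Algorithm 5 might look like:

```python
def find_max(A):
    """Direct translation of Algorithm 5: scan the list and keep the
    largest value seen so far."""
    max_value = float('-inf')   # corresponds to max value <- -infinity
    for x in A:                 # the for loop over i = 1 to n
        if x > max_value:       # the if block
            max_value = x
    return max_value
```

With the example list, `find_max([7, 3, 5, 9, 1])` returns 9.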
As another example, the pseudocode in Algorithm 6 represents the algo-
rithm to find the root of a differentiable function f (x) using Newton’s method.
This pseudocode indicates some conventions commonly found in optimization
algorithms. Like the other algorithms seen so far, Newton’s method repeats a
“body” of steps iteratively until the stopping criterion is met. The variable k
keeps track of the number of iterations, and xk is the solution at iteration k.
Initialize:
x0 : initial guess
ε : tolerance
x1 ← x0 − f(x0)/f′(x0)
k←1
while |xk − xk−1 | > ε do
xk+1 ← xk − f(xk)/f′(xk)
k ←k+1
end
Algorithm 6: Finding the root of a function f (x).
In this implementation, the while loop control structure is used to control the
number of repetitions. In a while loop, a set of instructions is repeated as long
as a condition is met. The condition in the while loop corresponds to the stopping
criterion, expressed in a “negative” way (repeat as long as the stopping criterion
is false). In Newton’s method, the algorithm stops when the change in solution
across consecutive iterations |xk − xk−1 | is less than a pre-specified tolerance
level ε. The tolerance level is chosen depending on the level of accuracy desired.
We illustrate a few iterations of this algorithm for the function f (x) = x2 −3,
with the initial guess x0 = 1. The tolerance is set at ε = 0.001, which represents
the desired level of accuracy in the output. x1 is now calculated using the
formula to be equal to 2, and the iteration counter k is set to 1. The condition in
the while loop |x1 − x0 | is equal to 1, which is greater than ε. Therefore,
we now calculate x2 to be equal to 1.75 and increase the value of k to 2. This
process is repeated until the difference between xk and xk−1 is less than 0.001.
The stopping criterion we use (|xk − xk−1 | ≤ ε) is one of several widely used
stopping criteria. Another popular criterion is to stop the iterations when the
iteration counter hits a pre-specified maximum number N .
An attentive reader will notice that for this to be a valid algorithm, it must be
finite, meaning that the convergence criterion must eventually be true. If the
stopping criterion is a maximum number of iterations N , this will clearly happen
at some point. For the stopping criterion |xk − xk−1 | ≤ ε, it is not obvious that it will
eventually be satisfied (and indeed, for some functions f and initial guesses it
may never be). Analysis of algorithms is the sub-field of computer science which
studies algorithms to identify conditions for finiteness (will it eventually stop?),
proofs of correctness (are the outputs always the ones we want?), and the time
and computer memory they need before stopping. For the most part, this book
will not dwell on these details except where relevant for applying transportation
network models.
In this case, the advantage of the |xk −xk−1 | ≤ ε stopping criterion is that it is
directly related to the precision in the solution; after a fixed number of iterations
N we may or may not be close to the actual root of the function; or we may have
found the solution many iterations ago but kept on going unnecessarily until we
hit N iterations. However, the k = N stopping criterion has the advantage
of guaranteeing that it will eventually be satisfied. We can combine the two
stopping criteria to gain the advantages of both, by changing the condition in
the while loop, so it reads “while |xk − xk−1 | > ε and k < N .” This lets us stop
as soon as we have found the solution, and also guarantees the algorithm will
not run forever.
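A Python sketch of Newton's method with this combined stopping criterion (the parameter names are illustrative):

```python
def newton(f, fprime, x0, eps=1e-3, max_iter=100):
    """Newton's method with the combined stopping criterion: stop when
    successive iterates are within eps, or after max_iter iterations."""
    x_prev = x0
    x = x0 - f(x0) / fprime(x0)   # first Newton step
    k = 1
    while abs(x - x_prev) > eps and k < max_iter:
        x_prev, x = x, x - f(x) / fprime(x)
        k += 1
    return x
```

For the example in the text, `newton(lambda x: x*x - 3, lambda x: 2*x, 1.0)` produces the iterates 2, 1.75, and so on, converging to a root of x² − 3 (that is, √3 ≈ 1.732).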
Initialize:
money bet ← wager
Obtain outcome by rolling a 6-sided die.
if outcome=1 or outcome=2 then
money returned ← money bet/2
end
else if outcome=3 or outcome=4 then
money returned ← money bet
end
else if outcome=5 or outcome=6 then
money returned ← money bet × 2
end
return money returned
Algorithm 7: Gambling algorithm for demonstrating worst-case analysis.
We want to measure how long it will take for this algorithm to run. We
can either measure this in terms of “clock time” (imagine using a stopwatch to
time how long it takes from start to finish), or by counting the number of oper-
ations that need to be performed. Algorithmic analysis usually uses the latter
approach, because it keeps the focus on the algorithm rather than a specific
hardware. The exact same algorithm will run much faster on a supercomputer
than on your phone. As computers continue to advance, it is more useful to say
“this algorithm requires X steps” than “it takes Y seconds to run.”
This requires defining what we mean by an “operation.” Some computations
take much more time than others; computing the square root of a number, or
the cosine of a number, takes significantly longer than changing the sign of a
number from positive to negative, for instance. We will define a basic operation
to be (i) a simple arithmetic computation (add, subtract, multiply, divide); (ii)
testing whether one value is equal to, greater than, or less than another; and
(iii) storing a value in a variable using the assignment operator ←.3
In Algorithm 7, there are two basic operations associated with initialization.
Depending on the value of outcome, it takes a different number of basic opera-
tions to determine which if block is executed. If outcome = 1, we only require
one comparison to see that the money returned should be half the bet (we don’t
have to check whether outcome = 2, because the first if statement has an “or” in
it; as soon as we know outcome = 1 we can jump to the statement in the block.)
If outcome = 2, we need two comparisons: the first checks whether outcome
is 1 (false), the second checks whether outcome is 2 (true), and only then we
execute the statement. If outcome = 3, we need three comparisons; two failed
comparisons testing whether outcome is 1 or 2, and a third successful one. In
the worst case, outcome = 6, and we need six comparisons to determine which
block of code to execute. Within the if blocks, there are either zero arithmetic
operations (if the outcome is 3 or 4) or one arithmetic operation (all other out-
comes) multiplying or dividing the wager by 2. There is always one assignment
operation in each block.
Putting this together, how many basic operations does the algorithm need?
In the best case, outcome = 1, and a total of five basic operations are needed
(two initialization, one comparison, one arithmetic, and one assignment). In the
worst case, outcome = 6, and ten operations are needed (two initialization, six
comparison, one arithmetic, one assignment). The average case can be treated
by representing the operation count as a random variable. Its expected value can
be calculated by identifying all possible operation counts under every outcome,
multiplying by the probability of that outcome, and adding. The resulting
number is 7.2, representing the long-run average of the number of operations
needed.
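These counts can be verified with a short computation following the operation-counting rules above (the long-run average 43/6 ≈ 7.17 rounds to the 7.2 reported):

```python
# Operation counts for Algorithm 7: two initialization operations, plus the
# comparisons needed to find the matching if block, plus one arithmetic
# operation (except for outcomes 3 and 4), plus one assignment.
comparisons = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6}
arithmetic = {1: 1, 2: 1, 3: 0, 4: 0, 5: 1, 6: 1}
counts = {o: 2 + comparisons[o] + arithmetic[o] + 1 for o in range(1, 7)}

best_case = min(counts.values())              # 5 operations (outcome 1)
worst_case = max(counts.values())             # 10 operations (outcome 6)
expected = sum(c / 6 for c in counts.values())  # fair die: average 43/6
```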
Which one of these is most useful? The average case might seem to be best,
but its calculation was more involved, and it would have to be recomputed if a
different kind of die were used where all six outcomes were not equally likely.
By contrast, the best case and worst case were much simpler to compute, and
they would stay the same even if the die were to be weighted more to some
outcomes than others. Between these two, the worst case is more useful for
planning purposes. The best case is optimistic to a fault; all it tells you is that
3 By “value” in this section, we mean an integer, or a floating-point number rounded to a fixed precision.
you’ll need at least five basic operations (but nothing about how bad it could
get). This is akin to a shady investment pitch which only tells you how much
you could earn, and nothing about how much you stand to lose. The worst case,
by contrast, is a more solid basis for planning and allocating resources. You will
never need more than ten operations (and often will need less), so if you plan
for this you will never exceed your resource budget.
In this case, the performance of the algorithm requires a constant number
of steps. What if it is repeated n times (either the same gambler playing the
game multiple times, or several gamblers playing)? The worst case is that ten
operations are needed each time; in this case 10n basic operations are needed
to run the game. Again, this is a conservative estimate. The only time 10n
operations are needed is if a 6 is rolled all n times the game is played, which is
unlikely. However, we can absolutely guarantee that you will never need more
than this.
As another example, return to Algorithm 5 for finding the largest of n num-
bers. In the for loop, for each value of i we perform at most two basic operations:
one comparison operation (testing whether A[i] > max value) and one assign-
ment operation (max value ← A[i]). Since i takes values from 1 to n, these two
operations will be repeated n times, giving a total of 2n operations. There is
also one basic operation associated with initialization, for a total of 2n + 1 basic
operations in the worst case. (The best case has fewer than this; for example, if
the A[1] is the largest of the n numbers, then only n + 2 operations are needed,
because max value is only assigned once.)
To further simplify the worst-case analysis, we introduce an asymptotic no-
tation which highlights the bottleneck as the problem grows harder. For finding
the maximum of n numbers, the one operation needed for initialization becomes
insignificant if n is large. If n is equal to a million, say, what really matters are
the (up to) two million operations needed in the body of the algorithm, not the
one needed for initialization. So we would like to focus on the 2n part and not
the +1 part. Even here, we can simplify further. The most important part is
that it grows with n (and not n2 , say): doubling n will also double the number
of steps needed. The factor of 2 is less important.
To see why, imagine that someone proposes an alternative algorithm for
finding the maximum of n numbers, and worst case analysis shows that it may
require up to n2 +4n+5 operations. As n grows large, the n2 part will dominate
the remaining terms. Furthermore, if n were to double, n2 would increase by a
factor of four. This is evidence that Algorithm 5 is a better choice. As n gets
larger and larger, the number of operations needed by Algorithm 5 increases in
direct proportion, whereas that needed by the hypothetical alternative grows
with the square of n. The fact that the first algorithm requires 2n operations, as
opposed to n, is much less important than the fact that it is linear in n and not
quadratic.
Another reason to focus on the n and not the 2 is to abstract certain imple-
mentation details. In Algorithm 5, do we count an initialization step associated
with the first line, setting the values in the array A? Maybe it is passed di-
rectly as an address in computer memory, to values that already exist; in this
case there is no new operation. On the other hand, if we have to assign them
O(1), for algorithms that run in constant time (independent of input size);
O(log n), sometimes called logarithmic time or sublinear time (the run
time increases more slowly than the problem size);
O(n), or linear time (run time scales in direct proportion to the problem
size);
O(n3 ), O(n4 ), and other polynomial time algorithms of the form O(nc )
for some constant c.
time. After some time, you upgrade your computer so that it is twice as powerful
as the one you had before. Roughly speaking, the quadratic-time algorithm
on this better machine can handle a network which is larger by a factor of √2,
so roughly 1400 nodes. The exponential-time algorithm, on the other hand, can
only handle a network with one more node, since 2^1001 is twice as large as 2^1000!
Faster hardware will not overcome the disadvantages of an exponential-time
algorithm, at least for large problems.
However, just because an algorithm is exponential doesn’t mean it is use-
less. For one, the O(·) notation is asymptotic, and tells you what will happen
as n tends to infinity. An exponential time algorithm may be perfectly fine
working with the problem sizes you are dealing with; it just means that it does
not scale well, and for large enough problems you will run into difficulties. It
is also worst case, and there are algorithms which may have exponential worst
case complexity, yet behave more like polynomial time in practice. The simplex
method for solving linear programs (Section C.2.3), and the branch and bound
method for solving mixed-integer linear programs (Section C.5.1) are examples
of algorithms which are exponential in the worst case, yet offer better perfor-
mance in practice, especially when implemented well and tailored to a specific
application.
For one more example of complexity calculation, Algorithm 8 presents the
“bubble sort” algorithm for sorting an array of n numbers into increasing order.
The algorithm has two loops: an outer loop using the iterator i, and an inner
loop with the iterator j. When i = 1, there are 4(n − 1) operations associated
with the j loop. When i = 2, there are 4(n − 2) operations, and so on. Therefore
in total there are 4(n − 1) + 4(n − 2) + · · · + 4(1) = 2n(n − 1) operations. The bubble
sort is therefore O(n2) in complexity, running in quadratic time. This implies that
if the input size doubles, the running time quadruples. There are faster sorting
methods with O(n log n) complexity; if you have to sort a large array of numbers
this will perform much better than bubble sort. However, bubble sort is much
easier to implement, and may perform acceptably if you only have to sort a
small array.
Initialize:
A: Set, list, or array containing the n numbers
for i ← 1 to n − 1 do
for j ← 1 to n − i do
if A[j] > A[j + 1] then
temp ← A[j]
A[j] ← A[j + 1]
A[j + 1] ← temp
end
end
end
Algorithm 8: Bubble sort
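A direct Python translation of Algorithm 8, where the swap uses tuple assignment in place of the explicit temp variable:

```python
def bubble_sort(A):
    """Bubble sort (Algorithm 8): repeatedly swap adjacent out-of-order
    pairs; each outer pass settles the next-largest value into place."""
    n = len(A)
    for i in range(n - 1):            # outer loop: i = 1 to n-1
        for j in range(n - 1 - i):    # inner loop: j = 1 to n-i
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]   # swap adjacent elements
    return A
```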
Optimization problem: Find the acyclic path π between r and s with maximal
cost cπ = Σ(i,j)∈π cij .
Decision problem: For a given value C, is there an acyclic path π between r
and s whose cost cπ is at least C?
From the standpoint of solution, a polynomial time algorithm for the op-
timization problem also leads to a polynomial time algorithm for the decision
problem, and vice versa. For example, if we have a polynomial time algorithm
for the optimization version of the shortest path problem, we can use it to solve
the decision problem: just find the shortest path using the optimization version;
if its cost is less than or equal to C, then the answer to the decision problem is
“yes.” Otherwise it is “no.” Or, if we have a polynomial time algorithm for the
decision version of the problem and want to find the exact cost of the shortest
path (not just whether it is less than C or not), we can repeatedly solve the
decision version for different values of C using a bisection-like approach
(cf. Section 3.3.2). This requires solving the decision problem multiple times,
but a polynomial number of times; substituting one polynomial into another
still results in a polynomial, so the overall method is still polynomial time (even
if the polynomial has a higher degree).
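As a sketch, assuming integer path costs in a known range and a hypothetical polynomial-time decision oracle, the bisection idea can be written as:

```python
def optimal_cost_via_decision(decision, lo, hi):
    """Recover the exact optimal (minimum) cost from a yes/no decision oracle.
    decision(C) is a hypothetical oracle returning True iff a path of cost
    at most C exists; costs are assumed to be integers in [lo, hi]."""
    while lo < hi:
        mid = (lo + hi) // 2
        if decision(mid):
            hi = mid        # a path of cost <= mid exists: optimum is <= mid
        else:
            lo = mid + 1    # no such path: optimum is > mid
    return lo
```

Each oracle call halves the remaining range, so only a logarithmic number of decision-problem solves is needed.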
evidence that you can verify in polynomial time. A few examples are shown
below; keep in mind that all of these refer to the decision versions of these
problems described in the previous subsection, not the optimization versions.
The decision version of the shortest path problem with non-negative link
costs is in P, since Dijkstra’s algorithm (and others) can find the shortest path
in polynomial time. So we simply compute the shortest path; if its cost is less
than or equal to C, the answer to the decision problem is “yes” (there is a path
of cost at most C), otherwise the answer is “no” (no such path exists).
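As an illustration, here is a Python sketch of this decision procedure built on Dijkstra's algorithm; the adjacency-list format and the function names are our own:

```python
import heapq

def shortest_path_cost(graph, r, s):
    """Dijkstra's algorithm on a graph given as
    {node: [(neighbor, nonnegative cost), ...]}; returns the cost
    of a shortest r-s path, or None if s is unreachable."""
    dist = {r: 0}
    heap = [(0, r)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == s:
            return d
        if d > dist.get(i, float("inf")):
            continue            # stale heap entry; skip it
        for j, c in graph.get(i, []):
            if d + c < dist.get(j, float("inf")):
                dist[j] = d + c
                heapq.heappush(heap, (d + c, j))
    return None

def decision_shortest_path(graph, r, s, C):
    """Answer "yes" iff there is an r-s path of cost at most C."""
    cost = shortest_path_cost(graph, r, s)
    return cost is not None and cost <= C
```

Here `decision_shortest_path` answers “yes” exactly when the optimization version returns a cost of at most C.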
The decision version of the traveling salesperson problem is in NP. If the
answer is “yes” (there is a tour visiting each node exactly once whose cost is
less than or equal to C), someone can convince you of this by providing the
tour itself as evidence. You can check in polynomial time whether this tour is
valid (it visits every node once and only once) and whether its total cost does
not exceed C. We have said nothing about how someone might find such a tour
in the first place; simply that given such a tour as supposed evidence that the
answer is “yes,” you can check this evidence in polynomial time. Likewise, we
have not said anything about “no” instances of the problem, in which all tours
have a cost greater than C. Indeed, there does not seem to be an easy way to
demonstrate that a given instance of the traveling salesperson problem is “no.”
Just providing a tour whose cost is more than C does not prove that there isn’t
some other tour with a lower cost. But this doesn’t matter; the class NP is
only concerned with “yes” instances of decision problems.4
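A hedged Python sketch of this polynomial-time certificate check (the input format, an n × n cost matrix, is our own choice):

```python
def verify_tsp_certificate(cost, tour, C):
    """Check in polynomial time that `tour` is valid evidence for a
    "yes" answer: it visits each of the n nodes exactly once, and its
    total cost (returning to the start) is at most C.  `cost` is an
    n x n matrix of link costs."""
    n = len(cost)
    if sorted(tour) != list(range(n)):   # each node once and only once
        return False
    total = sum(cost[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return total <= C
```

Note that the check runs in polynomial time regardless of how hard it was to find the tour in the first place.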
Similarly, the decision versions of the knapsack problem, longest path problem, and resource-constrained shortest path problem are in NP, since in any
case where the answer is “yes” you can provide a feasible knapsack assignment
or path as easily-checked evidence that the answer really is “yes.”
Every decision problem in P is also in NP. Given a “yes” instance of the
problem, I can check whether the answer is indeed “yes” by simply answering
the question myself using a polynomial-time algorithm (which exists because
the problem is in P). So the decision version of the shortest path problem is in
both P and NP.
We do not know if the reverse is true, that every problem in NP is also in
P. For instance, we know that the traveling salesperson problem is in NP, but
we do not know if it is in P or not. We do not currently know of any polynomial
time algorithm for solving it, but this does not mean that none exists. Perhaps
there is such an algorithm, but if so nobody has been clever enough to find it
yet. Most computer scientists believe that no such algorithm exists, but nobody
has been able to prove that either. Answering the question either way would be
a tremendous advance in computer science.
The reason is that algorithms used to solve one kind of problem can often be
used to solve other kinds of problems as well. Many important problems (even
problems with no obvious “network” in them) can be reduced to the traveling
salesperson problem in the sense that an algorithm for the latter can also be used
to solve the former, by constructing a network, choosing costs in the right way,
4 The complexity class co-NP is the analogous class for “no” instances.
and then translating the resulting tour back to the original problem. So, finding
a good way to solve the traveling salesperson problem would directly provide
good ways to solve many other problems as well. This idea is formalized in the
following definition:
Definition D.3. A decision problem A can be polynomially reduced to decision
problem B if (a) there is a procedure that can transform an instance of A to an
instance of B in polynomial time; and (b) the answer for an instance of A is
“yes” if and only if the answer for the corresponding instance of B under this
transformation is also “yes.”
The implication is that if we know a polynomial time algorithm for B, then
we can use it to solve A. The number of steps needed to do this is the sum of the
number of steps needed to construct an instance of problem B from the given
instance of A, and the number of steps needed to answer that instance of B. If
there is a polynomial reduction from A to B, and if B is solvable in polynomial
time, then the total number of steps required is also polynomial. Intuitively,
the problem A is no harder than B, since any method for B can also be used to
solve A. (The problem A could be easier than B, in the sense that a specialized
method for A might be a faster approach than translating the problem to B
and solving that; we just know it can’t be harder.)
We give a few examples of these kinds of transformations for decision problems in the next subsection. The main reason for introducing polynomial reductions is to define two additional complexity classes: NP-hard and NP-complete.
Definition D.4. A decision problem X belongs to the class NP-hard if every
problem in the class NP has a polynomial reduction to X.
In other words, any problem in the class NP-hard is at least as difficult to solve
as any problem in NP, since any problem in NP can be converted to an
NP-hard problem. NP-hard problems are not necessarily in NP; they might
be strictly more difficult, in the sense that we might not be able to verify a “yes”
answer to an NP-hard problem in polynomial time.
Definition D.5. A decision problem belongs to the class NP-complete if it is
in both NP and NP-hard.
The problems in NP-complete can be thought of as the most difficult problems in NP, because any other NP problem can be converted to an NP-complete problem which will have the same answer. The consequence of this
is that if someone can develop a polynomial time algorithm for any problem
in NP-complete, then we can answer all problems in NP in polynomial time
as well; therefore the classes P and NP would be the same: P = NP. On the
other hand, proving that no polynomial time algorithm exists to answer an
NP-complete problem would show that there are some problems in NP which
are not in P, so P ≠ NP. This makes the class NP-complete very important.
The decision version of the traveling salesperson problem is an example of an
NP-complete problem. The proof of this statement is difficult and beyond the
scope of this book.
To see that the knapsack decision problem is in NP, note that a “yes” answer can be certified by a subset of objects $J_I$ whose total profit is at least P
and whose total size is no greater than U. The total profit and total size can be
calculated in linear time by simply summing the profit and size of each object
in $J_I$.
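In code, this linear-time check is a single pass over the certificate; a minimal Python sketch with our own names (J playing the role of $J_I$):

```python
def verify_knapsack_certificate(a, p, U, P, J):
    """Check in linear time that the subset J of object indices is
    valid evidence for a "yes" answer: total size at most U and
    total profit at least P.  a[j] and p[j] are the size and profit
    of object j."""
    return sum(a[j] for j in J) <= U and sum(p[j] for j in J) >= P
```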
The remainder of the argument establishes a polynomial reduction from an
NP-complete problem. For this example, we will reduce from the partition
decision problem, with a given set of positive integers S. From this, we will
construct an instance of the knapsack decision problem whose answer (“yes” or
“no”) will always be the same as that for the given partition problem. Let $S = \{b_1, b_2, \ldots, b_n\}$ (possibly with repetitions). Create an instance of the knapsack
problem where the set of objects is the set of integers $I = \{1, 2, \ldots, |S|\}$. The
size of each object, and its profit, are both set equal to the corresponding integer
in S, so $a_i = p_i = b_i$. The available capacity is half of the sum of the integers
in S, so $U = \frac{1}{2} \sum_{i \in S} b_i$. The minimum profit P is set to this same amount as
well: $P = \frac{1}{2} \sum_{i \in S} b_i$. This reduction can be done in linear time.
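The reduction, and the fact that the two answers coincide, can be checked directly on small instances; in this Python sketch the brute-force `knapsack_decision` is only an illustration (any procedure for the knapsack decision problem could take its place), and all names are our own:

```python
from itertools import combinations

def partition_to_knapsack(S):
    """Build the knapsack instance used in the reduction: sizes and
    profits both equal the integers in S, and U = P = sum(S) / 2."""
    half = sum(S) / 2
    return list(S), list(S), half, half   # a, p, U, P

def knapsack_decision(a, p, U, P):
    """Brute-force knapsack decision (exponential time; used only to
    illustrate the reduction on small instances)."""
    items = range(len(a))
    return any(sum(a[j] for j in J) <= U and sum(p[j] for j in J) >= P
               for k in range(len(a) + 1)
               for J in combinations(items, k))

def partition_decision(S):
    """"Yes" iff S can be split into two subsets with equal sums,
    answered by solving the constructed knapsack instance."""
    return knapsack_decision(*partition_to_knapsack(S))
```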
Assume that the answer to the knapsack problem is “yes.” Then there
is a subset of objects $J_I$ such that $\sum_{j \in J_I} a_j \leq U$ and $\sum_{j \in J_I} p_j \geq P$. But
$U = P = \frac{1}{2} \sum_{i \in S} b_i$, so this implies $\sum_{j \in J_I} a_j = \frac{1}{2} \sum_{i \in S} b_i$. Furthermore, each
Example D.3. Show that the decision version of the resource-constrained shortest path problem (given in Section D.4.1) is NP-complete.
To show the problem is NP-hard, we construct a polynomial reduction from the partition problem with a given set $S = \{b_1, b_2, \ldots, b_n\}$. Let $U = \sum_{i \in S} b_i$, and let V be a constant at least as large as every element of S (so that all link costs are nonnegative). Create a network with nodes $1, 2, \ldots, n+1$; for each
consecutive pair of nodes i and i + 1, we create two parallel links. In each pair
of parallel links, the top link has a cost of V, and a resource consumption of 0.
The bottom link has a cost of $V - b_i$, and a resource consumption of $b_i$. (See
Figure D.2.) Finally, for this instance of the resource-constrained shortest path
decision problem, set the cost target $C = nV - (U/2)$, the resource limit $R = U/2$,
and select nodes 1 and n + 1 as the origin and destination. The number of links
and nodes is linear in the size of the original set S, so this transformation can
be done in polynomial time.
Assume that the resulting resource-constrained shortest path problem is a
“yes” instance. Let $\pi$ be a path whose cost is at most C, and whose resource
consumption is at most R. This path consists of n links, each of which is either
the top or bottom link in the n pairs of parallel links in the network. Let
$S_1$ contain the values of S corresponding to “bottom” links, and $S_2$ the values
corresponding to “top” links. Then the total cost of this path is $nV - \sum_{b_i \in S_1} b_i$,
and its total resource consumption is $\sum_{b_i \in S_1} b_i$. Since this is a “yes” instance,
the cost target is achieved and $nV - \sum_{b_i \in S_1} b_i \leq C = nV - (U/2)$, so $\sum_{b_i \in S_1} b_i \geq U/2$. Likewise, the resource constraint is satisfied, so $\sum_{b_i \in S_1} b_i \leq R = U/2$.
Therefore $\sum_{b_i \in S_1} b_i$ is exactly U/2; that is, the sum of the entries in set $S_1$ is
exactly half of the sum of the entries in S. We can thus conclude
that the sums of the values in sets $S_1$ and $S_2$ are equal, and this is also a “yes”
instance of the partition problem.
Or, if this is a “no” instance of the resource-constrained shortest path problem, then every path either exceeds the cost target or the resource limit.
Following the same construction as in the previous paragraph, this means that
for every set $S_1$ we have either $\sum_{b_i \in S_1} b_i < U/2$ or $\sum_{b_i \in S_1} b_i > U/2$. Therefore,
there is no subset of the values in S whose sum is exactly half of the total sum
of the values in S; so it is not possible to partition S into two sets S1 and S2
whose sums are exactly equal. Therefore this is a “no” instance of the partition
problem as well.
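The same correspondence can be checked in code; in this Python sketch (all names ours), enumerating the $2^n$ top/bottom choices stands in for an actual resource-constrained shortest path algorithm, and V is any constant at least as large as the elements of S:

```python
from itertools import product

def rcsp_targets(S, V):
    """Targets of the reduction: each of the n parallel-link pairs has
    a top link (cost V, resource 0) and a bottom link (cost V - b_i,
    resource b_i); with U = sum(S), the cost target is C = nV - U/2
    and the resource limit is R = U/2."""
    n, U = len(S), sum(S)
    return n * V - U / 2, U / 2       # C, R

def rcsp_decision(S, V):
    """"Yes" iff some choice of top/bottom links yields a path with
    cost <= C and resource consumption <= R (brute-force enumeration,
    for illustration on small instances only)."""
    C, R = rcsp_targets(S, V)
    for choice in product([0, 1], repeat=len(S)):   # 1 = bottom link
        cost = sum(V - b if use else V for use, b in zip(choice, S))
        resource = sum(b for use, b in zip(choice, S) if use)
        if cost <= C and resource <= R:
            return True
    return False
```

On small sets, `rcsp_decision(S, V)` agrees with the partition decision for S, exactly as the proof above argues.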
You may have noticed that this proof was fairly similar to the one used to
show that the knapsack problem is NP-complete. In fact, once Example D.2
was done, and we established that the knapsack problem was NP-complete, we
could have done Example D.3 in a faster way by reducing the knapsack problem
to the resource-constrained shortest path problem (rather than starting from the
partition problem).