Notes on Theory on Distributed Systems
Notes on Theory on Distributed Systems
James Aspnes
2021-01-05 01:59
i
Table of contents ii
List of figures xv
Preface xxi
1 Introduction 1
1.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
I Message passing 7
2 Model 8
2.1 Basic message-passing model . . . . . . . . . . . . . . . . . . 8
2.1.1 Formal details . . . . . . . . . . . . . . . . . . . . . . 9
2.1.2 Network structure . . . . . . . . . . . . . . . . . . . . 10
2.2 Asynchronous systems . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 Example: client-server computing . . . . . . . . . . . . 11
2.3 Synchronous systems . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Drawing message-passing executions . . . . . . . . . . . . . . 12
2.5 Complexity measures . . . . . . . . . . . . . . . . . . . . . . . 14
ii
CONTENTS iii
5 Leader election 29
5.1 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.2 Leader election in rings . . . . . . . . . . . . . . . . . . . . . 31
5.2.1 The Le Lann-Chang-Roberts algorithm . . . . . . . . 31
5.2.1.1 Proof of correctness for synchronous executions 32
5.2.1.2 Performance . . . . . . . . . . . . . . . . . . 33
5.2.2 The Hirschberg-Sinclair algorithm . . . . . . . . . . . 33
5.2.3 Peterson’s algorithm for the unidirectional ring . . . . 34
5.2.4 A simple randomized O(n log n)-message algorithm . . 35
5.3 Leader election in general networks . . . . . . . . . . . . . . . 36
5.4 Lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.4.1 Lower bound on asynchronous message complexity . . 37
5.4.2 Lower bound for comparison-based algorithms . . . . 38
6 Logical clocks 41
6.1 Causal ordering . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.2 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2.1 Lamport clock . . . . . . . . . . . . . . . . . . . . . . 44
6.2.2 Neiger-Toueg-Welch clock . . . . . . . . . . . . . . . . 45
6.2.3 Vector clocks . . . . . . . . . . . . . . . . . . . . . . . 46
6.3 Consistent snapshots . . . . . . . . . . . . . . . . . . . . . . . 46
6.3.1 Property testing . . . . . . . . . . . . . . . . . . . . . 48
7 Synchronizers 49
7.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.2 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . 50
7.2.1 The alpha synchronizer . . . . . . . . . . . . . . . . . 51
CONTENTS iv
8 Coordinated attack 57
8.1 Formal description . . . . . . . . . . . . . . . . . . . . . . . . 57
8.2 Impossibility proof . . . . . . . . . . . . . . . . . . . . . . . . 58
8.3 Randomized coordinated attack . . . . . . . . . . . . . . . . . 60
8.3.1 An algorithm . . . . . . . . . . . . . . . . . . . . . . . 60
8.3.2 Why it works . . . . . . . . . . . . . . . . . . . . . . . 61
8.3.3 Almost-matching lower bound . . . . . . . . . . . . . . 62
9 Synchronous agreement 63
9.1 Problem definition . . . . . . . . . . . . . . . . . . . . . . . . 63
9.2 Solution using flooding . . . . . . . . . . . . . . . . . . . . . . 64
9.3 Lower bound on rounds . . . . . . . . . . . . . . . . . . . . . 65
9.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10 Byzantine agreement 69
10.1 Lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.1.1 Minimum number of rounds . . . . . . . . . . . . . . . 69
10.1.2 Minimum number of processes . . . . . . . . . . . . . 69
10.1.3 Minimum connectivity . . . . . . . . . . . . . . . . . . 71
10.1.4 Weak Byzantine agreement . . . . . . . . . . . . . . . 72
10.2 Upper bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
10.2.1 Exponential information gathering gets n = 3f + 1 . . 73
10.2.1.1 Proof of correctness . . . . . . . . . . . . . . 75
10.2.2 Phase king gets constant-size messages . . . . . . . . . 76
10.2.2.1 The algorithm . . . . . . . . . . . . . . . . . 77
10.2.2.2 Proof of correctness . . . . . . . . . . . . . . 78
10.2.2.3 Performance of phase king . . . . . . . . . . 79
12 Paxos 85
12.1 The Paxos algorithm . . . . . . . . . . . . . . . . . . . . . . . 85
12.2 Informal analysis: how information flows between rounds . . 89
12.2.1 Example execution . . . . . . . . . . . . . . . . . . . . 89
12.2.2 Safety properties . . . . . . . . . . . . . . . . . . . . . 91
12.2.3 Learning the results . . . . . . . . . . . . . . . . . . . 92
12.2.4 Liveness properties . . . . . . . . . . . . . . . . . . . . 92
12.3 Replicated state machines and multi-Paxos . . . . . . . . . . 93
13 Failure detectors 94
13.1 How to build a failure detector . . . . . . . . . . . . . . . . . 95
13.2 Classification of failure detectors . . . . . . . . . . . . . . . . 95
13.2.1 Degrees of completeness . . . . . . . . . . . . . . . . . 95
13.2.2 Degrees of accuracy . . . . . . . . . . . . . . . . . . . 95
13.2.3 Boosting completeness . . . . . . . . . . . . . . . . . . 96
13.2.4 Failure detector classes . . . . . . . . . . . . . . . . . . 97
13.3 Consensus with S . . . . . . . . . . . . . . . . . . . . . . . . . 98
13.3.1 Proof of correctness . . . . . . . . . . . . . . . . . . . 99
13.4 Consensus with ♦S and f < n/2 . . . . . . . . . . . . . . . . 100
13.4.1 Proof of correctness . . . . . . . . . . . . . . . . . . . 102
13.5 f < n/2 is still required even with ♦P . . . . . . . . . . . . . 103
13.6 Relationships among the classes . . . . . . . . . . . . . . . . . 103
15 Model 113
15.1 Atomic registers . . . . . . . . . . . . . . . . . . . . . . . . . 113
15.2 Single-writer versus multi-writer registers . . . . . . . . . . . 114
15.3 Fairness and crashes . . . . . . . . . . . . . . . . . . . . . . . 115
15.4 Concurrent executions . . . . . . . . . . . . . . . . . . . . . . 115
15.5 Consistency properties . . . . . . . . . . . . . . . . . . . . . . 116
15.6 Complexity measures . . . . . . . . . . . . . . . . . . . . . . . 117
15.7 Fancier registers . . . . . . . . . . . . . . . . . . . . . . . . . 118
22 Common2 197
22.1 Test-and-set and swap for two processes . . . . . . . . . . . . 198
22.2 Building n-process TAS from 2-process TAS . . . . . . . . . . 198
22.3 Obstruction-free swap from test-and-set . . . . . . . . . . . . 200
22.4 Wait-free swap from test-and-set . . . . . . . . . . . . . . . . 202
22.5 Implementations using stronger base objects . . . . . . . . . . 205
24 Renaming 224
24.1 Renaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
24.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
24.3 Order-preserving renaming . . . . . . . . . . . . . . . . . . . 226
24.4 Deterministic renaming . . . . . . . . . . . . . . . . . . . . . 226
24.4.1 Wait-free renaming with 2n − 1 names . . . . . . . . . 227
24.4.2 Long-lived renaming . . . . . . . . . . . . . . . . . . . 228
24.4.3 Renaming without snapshots . . . . . . . . . . . . . . 229
24.4.3.1 Splitters . . . . . . . . . . . . . . . . . . . . . 229
24.4.3.2 Splitters in a grid . . . . . . . . . . . . . . . 230
24.4.4 Getting to 2n − 1 names in polynomial space . . . . . 232
24.4.5 Renaming with test-and-set . . . . . . . . . . . . . . . 233
24.5 Randomized renaming . . . . . . . . . . . . . . . . . . . . . . 233
24.5.1 Randomized splitters . . . . . . . . . . . . . . . . . . . 234
24.5.2 Randomized test-and-set plus sampling . . . . . . . . 234
24.5.3 Renaming with sorting networks . . . . . . . . . . . . 235
CONTENTS ix
26 Obstruction-freedom 247
26.1 Why build obstruction-free algorithms? . . . . . . . . . . . . 248
26.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
26.2.1 Lock-free implementations . . . . . . . . . . . . . . . . 248
26.2.2 Double-collect snapshots . . . . . . . . . . . . . . . . . 248
26.2.3 Software transactional memory . . . . . . . . . . . . . 249
26.2.4 Obstruction-free test-and-set . . . . . . . . . . . . . . 249
26.2.5 An obstruction-free deque . . . . . . . . . . . . . . . . 251
26.3 Boosting obstruction-freedom to wait-freedom . . . . . . . . . 253
26.3.1 Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
26.4 Lower bounds for lock-free protocols . . . . . . . . . . . . . . 258
26.4.1 Contention . . . . . . . . . . . . . . . . . . . . . . . . 258
26.4.2 The class G . . . . . . . . . . . . . . . . . . . . . . . . 259
26.4.3 The lower bound proof . . . . . . . . . . . . . . . . . . 261
26.4.4 Consequences . . . . . . . . . . . . . . . . . . . . . . . 264
26.4.5 More lower bounds . . . . . . . . . . . . . . . . . . . . 265
26.5 Practical considerations . . . . . . . . . . . . . . . . . . . . . 265
27 BG simulation 266
27.1 High-level strategy . . . . . . . . . . . . . . . . . . . . . . . . 266
27.2 Safe agreement . . . . . . . . . . . . . . . . . . . . . . . . . . 267
27.3 The basic simulation algorithm . . . . . . . . . . . . . . . . . 269
27.4 Effect of failures . . . . . . . . . . . . . . . . . . . . . . . . . 270
27.5 Inputs and outputs . . . . . . . . . . . . . . . . . . . . . . . . 270
CONTENTS x
30 Overview 298
31 Self-stabilization 299
31.1 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
31.2 Token ring circulation . . . . . . . . . . . . . . . . . . . . . . 300
31.3 Synchronizers . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
31.4 Spanning trees . . . . . . . . . . . . . . . . . . . . . . . . . . 306
31.5 Self-stabilization and local algorithms . . . . . . . . . . . . . 307
35 Beeping 332
35.1 Interval coloring . . . . . . . . . . . . . . . . . . . . . . . . . 333
35.1.1 Estimating the degree . . . . . . . . . . . . . . . . . . 334
35.1.2 Picking slots . . . . . . . . . . . . . . . . . . . . . . . 334
35.1.3 Detecting collisions . . . . . . . . . . . . . . . . . . . . 334
35.2 Maximal independent set . . . . . . . . . . . . . . . . . . . . 335
35.2.1 Lower bound . . . . . . . . . . . . . . . . . . . . . . . 335
35.2.2 Upper bound with known bound on n . . . . . . . . . 337
Appendix 341
A Assignments 341
A.0 Assignment 0: due as soon as possible . . . . . . . . . . . . . 341
A.1 Assignment 1: due Wednesday, 2020-09-23, at 5:00pm Eastern
US time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
A.1.1 A token-passing game . . . . . . . . . . . . . . . . . . 342
A.1.2 A load-balancing problem . . . . . . . . . . . . . . . . 344
A.2 Assignment 2: due Wednesday, 2020-10-07, at 5:00pm Eastern
US time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
CONTENTS xii
Bibliography 453
Index 478
List of Figures
21.1 Snapshot from max arrays; see also [AACHE15, Fig. 2] . . . . 195
xvi
List of Tables
xvii
List of Algorithms
12.1 Paxos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
xviii
LIST OF ALGORITHMS xix
These are notes for the Fall 2020 semester version of the Yale course CPSC
465/565 Theory of Distributed Systems. This document also incorporates the
lecture schedule and assignments, as well as some sample assignments from
previous semesters. Because this is a work in progress, it will be updated
frequently over the course of the semester.
The most recent version of these notes will be available at http://www.cs.
yale.edu/homes/aspnes/classes/465/notes.pdf. More stable archival
versions may be found at https://arxiv.org/abs/2001.04235.
Not all topics in the notes will be covered during this semester. Some
chapters have not been updated and are marked as possibly out of date.
Much of the structure of the course follows the textbook, Attiya and
Welch’s Distributed Computing [AW04], with some topics based on Lynch’s
Distributed Algorithms [Lyn96] and additional readings from the research
literature. In most cases you’ll find these materials contain much more detail
than what is presented here, so it may be better to consider this document a
supplement to them than to treat it as your primary source of information.
Acknowledgments
Many parts of these notes were improved by feedback from students taking
various versions of this course, as well as others who have kindly pointed
out errors in the notes after reading them online. Many of these suggestions,
sadly, went unrecorded, so I must apologize to the many students who
should be thanked here but whose names I didn’t keep track of in the past.
However, I can thank Mike Marmar and Hao Pan in particular for suggesting
improvements to some of the posted solutions, Guy Laden for suggesting
corrections to Figure 12.1, and Ali Mamdouh for pointing out an error in
the original presentation of Algorithm 5.2.
xxi
Lecture schedule
As always, the future is uncertain, so you should take parts of the schedule
that haven’t happened yet with a grain of salt. Unless otherwise specified,
readings refer to chapters or sections in the course notes.
xxii
LECTURE SCHEDULE xxiii
Introduction
1
CHAPTER 1. INTRODUCTION 2
1.1 Models
The global state consisting of all process states is called a configuration,
and we think of the system as a whole as passing from one global state
or configuration to another in response to each event. When this occurs
the processes participating in the event update their states, and the other
processes do nothing. This does not model concurrency directly; instead,
we interleave potentially concurrent events in some arbitrary way. The
advantage of this interleaving approach is that it gives us essentially the
same behavior as we would get if we modeled simultaneous events explicitly,
but still allows us to consider only one event at a time and use induction to
prove various properties of the sequence of configurations we might reach.
We will often use lowercase Greek letters for individual events or sequences
of events. Configurations are typically written as capital Latin letters (often
C). An execution of a schedule is an alternating sequence of configurations
and events C0 σ0 C1 σ1 C2 . . . , where Ci+1 is the configuration that results from
CHAPTER 1. INTRODUCTION 3
In the simplest case, the objects are simple memory cells supporting
read and write operations. These are called atomic registers. But
in general, the objects could be more complex hardware primitives
like compare-and-swap (§18.1.3), load-linked/store-conditional
(§18.1.3), atomic queues, or even more exotic objects from the seldom-
visited theoretical depths.
Practical shared-memory systems may be implemented as distributed
shared-memory (Chapter 16) on top of a message-passing system.
This gives an alternative approach to designing message-passing systems
if it turns out that shared memory is easier to use for a particular
problem.
Like message-passing systems, shared-memory systems must also deal
with issues of asynchrony and failures, both in the processes and in the
shared objects.
Realistic shared-memory systems have additional complications, in that
modern CPUs allow out-of-order execution in the absence of special
(and expensive) operations called fences or memory barriers.[AG95]
We will effectively be assuming that our shared-memory code is liberally
sprinkled with these operations so that nothing surprising happens,
but this is not always true of real production code, and indeed there is
work in the theory of distributed computing literature on algorithms
that don’t require unlimited use of memory barriers.
We’ll see many of these at some point in this course, and examine which
of them can simulate each other under various conditions.
CHAPTER 1. INTRODUCTION 5
1.2 Properties
Properties we might want to prove about a system include:
• Safety properties, of the form “nothing bad ever happens” or, more
precisely, “there are no bad reachable configurations.” These include
things like “at most one of the traffic lights at the intersection of Busy
Road and Main Street is ever green” or “every value read from a counter
equals the number of preceding increment operations.” Such properties
are typically proved using an , a property of configurations that is true
initially and that is preserved by all transitions (this is essentially a
disguised induction proof).
There are some basic proof techniques that we will see over and over
again in distributed computing.
For lower bound and impossibility proofs, the main tool is the in-
distinguishability argument. Here we construct two (or more) executions
in which some process has the same input and thus behaves the same way,
regardless of what algorithm it is running. This exploitation of process’s ig-
norance is what makes impossibility results possible in distributed computing
despite being notoriously difficult in most areas of computer science.2
For safety properties, statements that some bad outcome never occurs,
the main proof technique is to construct an invariant. An invariant is
essentially an induction hypothesis on reachable configurations of the system;
an invariant proof shows that the invariant holds in all initial configurations,
and that if it holds in some configuration, it holds in any configuration that
is reachable in one step.
Induction is also useful for proving termination and liveness properties,
statements that some good outcome occurs after a bounded amount of time.
Here we typically structure the induction hypothesis as a progress measure,
showing that some sort of partial progress holds by a particular time, with
the full guarantee implied after the time bound is reached.
2
An exception might be lower bounds for data structures, which also rely on a process’s
ignorance.
Part I
Message passing
7
Chapter 2
Model
8
CHAPTER 2. MODEL 9
into processes as outbuf and inbuf variables. This leads to some oddities like
having to distinguish the accessible state of a process (which excludes the
outbufs) from the full state (which doesn’t). Our approach is close to that of
Lynch [Lyn96], in that we have separate automata representing processes and
communication channels. But since the resulting model produces essentially
the same executions, the exact details don’t really matter.1
1 initially do
2 send request to server
Algorithm 2.1: Client-server computation: client code
The interpretation of Algorithm 2.1 is that the client sends request (by
adding it to its outbuf) in its very first computation event (after which it does
nothing). The interpretation of Algorithm 2.2 is that in any computation
event where the server observes request in its inbuf, it sends response.
We want to claim that the client eventually receives response in any
admissible execution. To prove this, observe that:
1. After finitely many steps, the client carries out a computation event.
This computation event puts request in its outbuf.
2. After finitely many more steps, a delivery event occurs that delivers
request to the server. This causes the server to send response.
3. After finitely many more steps, a delivery event delivers response to
the client, causing it to process response (and do nothing, given that
we haven’t included any code to handle this response).
Each step of the proof is justified by the constraints on admissible
executions. If we could run for infinitely many steps without a particular
process doing a computation event or a particular message being delivered,
we’d violate those constraints.
Most of the time we will not attempt to prove the correctness of a
protocol at quite this level of tedious detail. But if you are only interested in
CHAPTER 2. MODEL 12
distributed algorithms that people actually use, you have now seen a proof
of correctness for 99.9% of them, and do not need to read any further.
p3
p2
p1
Time →
For example, Figure 2.2 shows an execution that is still asynchronous but
that assumes FIFO (first-in first-out) channels. A FIFO channel from some
process p to another process q guarantees that q receives messages in the
same order that p sends them (this can be simulated by a non-FIFO channel
by adding a sequence number to each message, and queuing messages at
the receiver until all previous messages have been processed).
If we go as far as to assume synchrony, we get the execution in Figure 2.3.
Now all messages take exactly one time unit to arrive, and computation
events follow each other in lockstep.
p3
p2
p1
Time →
p3
p2
p1
Time →
p3
1 1
p2
1 1
p1
0 2 2
Time →
delivered, which probably means that it won’t be making any progress. These
complexity measures will be more useful when we look at shared-memory
models (Part II).
For a protocol that terminates, the message complexity is the total
number of messages sent. We can also look at message length in bits, total
bits sent, etc., if these are useful for distinguishing our new improved protocol
from last year’s model.
For synchronous systems, time complexity becomes just the number of
rounds until a protocol finishes. Message complexity is still only loosely
connected to time complexity; for example, there are synchronous leader
election (Chapter 5) algorithms that, by virtue of grossly abusing the
synchrony assumption, have unbounded time complexity but very low message
complexity.
Chapter 3
3.1 Flooding
Flooding is about the simplest of all distributed algorithms. It’s dumb and
expensive, but easy to implement, and gives you both a broadcast mechanism
and a way to build rooted spanning trees.
We’ll give a fairly simple presentation of flooding roughly following
Chapter 2 of [AW04].
16
CHAPTER 3. BROADCAST AND CONVERGECAST 17
1 initially do
2 if pid = root then
3 seen-message ← true
4 send M to all neighbors
5 else
6 seen-message ← false
7 upon receiving M do
8 if seen-message = false then
9 seen-message ← true
10 send M to all neighbors
Note that the time complexity proof also demonstrates correctness: every
process receives M at least once.
As written, this is a one-shot algorithm: you can’t broadcast a second
message even if you wanted to. The obvious fix is for each process to
remember which messages it has seen and only forward the new ones (which
costs memory) and/or to add a time-to-live (TTL) field on each message
that drops by one each time it is forwarded (which may cost extra messages
and possibly prevents complete broadcast if the initial TTL is too small).
The latter method is what was used for searching in http://en.wikipedia.
org/wiki/Gnutella, an early peer-to-peer system. An interesting property
of Gnutella was that since the application of flooding was to search for huge
(multiple MiB) files using tiny ( 100 byte) query messages, the actual bit
complexity of the flooding algorithm was not especially large relative to the
bit complexity of sending any file that was found.
We can optimize the algorithm slightly by not sending M back to the
node it came from; this will slightly reduce the message complexity in many
cases but makes the proof a sentence or two longer. (It’s all a question of
what you want to optimize.)
CHAPTER 3. BROADCAST AND CONVERGECAST 18
1 initially do
2 if pid = root then
3 parent ← root
4 send M to all neighbors
5 else
6 parent ← ⊥
We can easily prove that Algorithm 3.2 has the same termination proper-
ties as Algorithm 3.1 by observing that if we map parent to seen-message by
the rule ⊥ → false, anything else → true, then we have the same algorithm.
We would like one additional property, which is that when the algorithm
quiesces (has no outstanding messages), the set of parent pointers form a
rooted spanning tree. For this we use induction on time:
Lemma 3.1.2. At any time during the execution of Algorithm 3.2, the
following invariant holds:
1. If u.parent 6= ⊥, then u.parent.parent 6= ⊥ and following parent pointers
gives a path from u to root.
2. If there is a message M in transit from u to v, then u.parent 6= ⊥.
Proof. We have to show that that the invariant is true initially, and that any
event preserves the invariant. We’ll assume that all events are delivery events
for a single message, since we can have the algorithm treat a multi-message
delivery event as a sequence of single-message delivery events.
We’ll treat the initial configuration as the result of the root setting its
parent to itself and sending messages to all its neighbors. It’s not hard to
verify that the invariant holds in the resulting configuration.
CHAPTER 3. BROADCAST AND CONVERGECAST 19
For a delivery event, let v receive M from u. There are two cases: if
v.parent is already non-null, the only state change is that M is no longer in
transit, so we don’t care about u.parent any more. If v.parent is null, then
1. v.parent is set to u. This triggers the first case of the invariant. From
the induction hypothesis we have that u.parent = 6 ⊥ and that there
exists a path from u to the root. Then v.parent.parent = u.parent 6= ⊥
and the path from v → u → root gives the path from v.
At the end of the algorithm, the invariant shows that every process has a
path to the root, i.e., that the graph represented by the parent pointers is
connected. Since this graph has exactly |V | − 1 edges (if we don’t count the
self-loop at the root), it’s a tree.
Though we get a spanning tree at the end, we may not get a very good
spanning tree. For example, suppose our friend the adversary picks some
Hamiltonian path through the network and delivers messages along this
path very quickly while delaying all other messages for the full allowed 1
time unit. Then the resulting spanning tree will have depth |V | − 1, which
might be much worse than D. If we want the shallowest possible spanning
tree, we need to do something more sophisticated: see the discussion of
distributed breadth-first search in Chapter 4. However, we may be
happy with the tree we get from simple flooding: if the message delay on
each link is consistent, then it’s not hard to prove that we in fact get a
shortest-path tree. As a special case, flooding always produces a BFS tree in
the synchronous model.
Note also that while the algorithm works in a directed graph, the parent
pointers may not be very useful if links aren’t two-way.
3.1.3 Termination
See [AW04, Chapter 2] for further modifications that allow the processes to
detect termination. In a sense, each process can terminate as soon as it is
done sending M to all of its neighbors, but this still requires some mechanism
for clearing out the inbuf; by adding acknowledgments as described in [AW04],
we can terminate with the assurance that no further messages will be received.
CHAPTER 3. BROADCAST AND CONVERGECAST 20
3.2 Convergecast
A convergecast is the inverse of broadcast: instead of a message propagating
down from a single root to all nodes, data is collected from outlying nodes
to the root. Typically some function is applied to the incoming data at
each node to summarize it, with the goal being that eventually the root
obtains this function of all the data in the entire system. (Examples would
be counting all the nodes or taking an average of input values at all the
nodes.)
A basic convergecast algorithm is given in Algorithm 3.3; it propagates
information up through a previously-computed spanning tree.
1 initially do
2 if I am a leaf then
3 send input to parent
• If input = 1 for all nodes and f is sum, then we count the number of
nodes in the system.
• If input is arbitrary and f is sum, then we get a total of all the input
values.
1 initially do
2 children ← ∅
3 nonChildren ← ∅
4 if pid = root then
5 parent ← root
6 send init to all neighbors
7 else
8 parent ← ⊥
Distributed breadth-first
search
23
CHAPTER 4. DISTRIBUTED BREADTH-FIRST SEARCH 24
paths, but there is otherwise no particular reason to use it. Not only does
it not detect termination, but it is also dominated by the O(D) time and
O(DE) message complexity synchronizer-based algorithm described in §4.3.
(Here D is the diameter of the network, the maximum distance between
any two nodes.)
The idea is to run flooding with distances attached. Each node sets its
distance to 1 plus the smallest distance sent by its neighbors and its parent
to the neighbor supplying that smallest distance. A node notifies all its
neighbors of its new distance whenever its distance changes.
Pseudocode is given in Algorithm 4.1
1 initially do
2 if pid = initiator then
3 distance ← 0
4 send distance to all neighbors
5 else
6 distance ← ∞
Lemma 4.1.1. The variable distancep is always the length of some path
from initiator to p, and any message sent by p is also the length of some
path from initiator to p.
Proof. The second part follows from the first; any message sent equals p’s
current value of distance. For the first part, suppose p updates its distance;
then it sets it to one more than the length of some path from initiator to p0 ,
which is the length of that same path extended by adding the pp0 edge.
CHAPTER 4. DISTRIBUTED BREADTH-FIRST SEARCH 25
The initiator sends exactly(0) to all neighbors at the start of the protocol
(these are the only messages the initiator sends).
My distance will be the unique distance that I am allowed to send in an
exactly(d) messages. Note that this algorithm terminates in the sense that
every node learns its distance at some finite time.
If you read the discussion of synchronizers in Chapter 7, this algorithm
essentially corresponds to building the alpha synchronizer into the syn-
chronous BFS algorithm, just as the layered model builds in the beta
synchronizer. See [AW04, §11.3.2] for a discussion of BFS using synchro-
nizers. The original approach of applying synchronizers to get BFS is due to
Awerbuch [Awe85].
We now show correctness. Under the assumption that local computation
takes zero time and message delivery takes at most 1 time unit, we’ll show
that if d(initiator, p) = d, (a) p sends more-than(d0 ) for any d0 < d by time
d0 , (b) p sends exactly(d) by time d, (c) p never sends more-than(d0 ) for any
d0 ≥ d, and (d) p never sends exactly(d0 ) for any d0 6= d. For parts (c) and
(d) we use induction on d0 ; for (a) and (b), induction on time. This is not
terribly surprising: (c) and (d) are safety properties, so we don’t need to
talk about time. But (a) and (b) are liveness properties so time comes in.
Let’s start with (c) and (d). The base case is that the initiator never
sends any more-than messages at all, and so never sends more-than(0), and
1
In an earlier version of these notes, these messages where called distance(d) and
not-distance(d); the more self-explanatory exactly and more-than terminology is taken from
[BDLP08].
CHAPTER 4. DISTRIBUTED BREADTH-FIRST SEARCH 27
broadcast. Further optimizations are possible; see, for example, the paper of
Boulinier et al. [BDLP08], which shows how to run the same algorithm with
constant-size messages.
Chapter 5
Leader election
29
CHAPTER 5. LEADER ELECTION 30
5.1 Symmetry
A system exhibits symmetry if we can permute the nodes without changing
the behavior of the system. More formally, we can define a symmetry
as an equivalence relation on processes, where we have the additional
properties that all processes in the same equivalence class run the same code;
and whenever p is equivalent to p0 , each neighbor q of p is equivalent to a
corresponding neighbor q 0 of p0 .
An example of a network with a lot of symmetries would be an anony-
mous ring, which is a network in the form of a cycle (the ring part) in
which every process runs the same code (the anonymous part). In this case
all nodes are equivalent. If we have a line, then we might or might not have
any non-trivial symmetries: if each node has a sense of direction that tells
it which neighbor is to the left and which is to the right, then we can identify
each node uniquely by its distance from the left edge. But if the nodes don’t
have a sense of direction, we can flip the line over and pair up nodes that
map to each other.1
Symmetries are convenient for proving impossibility results, as observed by
Angluin [Ang80]. The underlying theme is that without some mechanism for
symmetry breaking, a message-passing system escape from a symmetric
initial configuration. The following lemma holds for deterministic systems,
basically those in which processes can’t flip coins:
Lemma 5.1.1. A symmetric deterministic message-passing system that
starts in an initial configuration in which equivalent processes have the same
state has a synchronous execution in which equivalent processes continue to
have the same state.
Proof. Easy induction on rounds: if in some round p and p0 are equivalent
and have the same state, and all their neighbors are equivalent and have the
same state, then p and p0 receive the same messages from their neighbors
and can proceed to the same state (including outgoing messages) in the next
round.
1
Typically, this does not mean that the nodes can’t tell their neighbors apart. But it
does mean that if we swap the labels for all the neighbors (corresponding to flipping the
entire line from left to right), we get the same executions.
CHAPTER 5. LEADER ELECTION 31
id, which we’ll write as idi . We will also treat all positions as values mod n,
to simplify the arithmetic.
The initial version of this algorithm was proposed by Le Lann [LL77]; it
involved sending every id all the way around the ring, and having a node
decide it was a leader if it had the largest id. Chang and Roberts [CR79]
improved on this by having nodes refuse to forward any id smaller than the
maximum id seen so far. This means that only the largest id makes it all the
way around the ring, so a node can declare itself leader the moment it sees
its own id. The resulting algorithm is known as either Chang-Roberts or Le
Lann-Chang-Roberts (LCR). We’ll go with the latter because it is always
polite to be generous with credit.
Code for the LCR algorithm is given in Algorithm 5.1.
1 initially do
2 leader ← 0
3 maxId ← idi
4 send idi to clockwise neighbor
5 upon receiving j do
6 if j = idi then
7 leader ← 1
8 if j > maxId then
9 maxId ← j
10 send j to clockwise neighbor
5.2.1.2 Performance
It’s immediate from the correctness proof that the protocols terminates after
exactly n rounds.
To count message traffic, observe that each process sends at most 1
message per round, for a total of O(n2 ) messages. This is a tight bound
since if the ids are in decreasing order n, n − 1, n − 2, . . . 1, then no messages
get eaten until they hit n.
1 procedure candidate()
2 phase ← 0
3 current ← pid
4 while true do
5 send probe(phase, current)
6 wait for probe(phase, x)
7 id2 ← x
8 send probe(phase + 1/2, id2 )
9 wait for probe(phase + 1/2, x)
10 id3 ← x
11 if id2 = current then
12 I am the leader!
13 return
14 else if id2 > current and id2 > id3 do
15 current ← id2
16 phase ← phase + 1
17 else
18 switch to relay()
19 procedure relay()
20 upon receiving probe(p, i) do
21 send probe(p, i)
Note: The phase arguments in the probe messages are useless if one has
FIFO channels, which is why [Lyn96] doesn’t use them.
Proof of correctness is essentially the same as for the 2-way algorithm.
For any pair of adjacent candidates, at most one of their current IDs survives
to the next phase. So we get a sole survivor after lg n phases. Each process
sends or relays at most 2 messages per phases, so we get at most 2n lg n total
messages.
and Lynch [FL87], allows a synchronous system and relaxes the uniformity
assumption, but requires that the algorithm can’t do anything to ids but
copy and compare them.
the combined schedule no messages are passed between the two sides, so the
processes continue to behave as they did in their separate executions.
Let e and e0 be the edges we used to past together the two rings. Extend
σ1 σ2 by the longest possible suffix σ3 in which no messages are delivered
across e and e0 . Since σ3 is as long as possible, after σ1 σ2 σ3 , there are no
messages waiting to be delivered across any edge except e and e0 and all
processes are quiescent—they will send no additional messages until they
receive one.
We now consider some suffix σ4 such causes the protocol to finish. While
executing σ4 , construct two sets of processes S and S 0 by the following rules:
1. If a process is not yet in S or S 0 and receives a message delivered across
e, put it in S; similarly if it receives a message delivered across e0 , put
it in S 0 .
2. If a process is not yet in S or S 0 and receives a message that was sent
by a process in S, put it in S; similarly for S 0 .
triggered by a delivery across e
Observe that S ∪ S 0 includes every process on the half of the ring with
the larger minimum id, because any such process that doesn’t receive a
message in σ4 doesn’t learn the global minimum. So |S ∪ S 0 | ≥ n/2 and thus
min(|S|, |S 0 |) ≥ n/4.
Assume without loss of generality that it is |S| that is at least n/4.
Except for the two processes incident to e, every process that is added to S
is added in response to a message sent in σ4 . So there are at least n/4 − 2
such messages. We can also argue that all of these messages are sent in
the subschedule τ of σ4 that contains only messages that do not depend on
messages delivered across e0 . It follows that σ1 σ2 σ3 τ is an open execution
on n processes with at least 2T (n/2) + n/4 − 2 sent messages. This gives
T (n) ≥ 2T (n/2) + n/4 − 2 = 2T (n/2) + Ω(n) as claimed.
35
30
25
20
15
10
0
-5 0 5 10 15 20 25 30 35
5
The classic example is Ramsey’s Theorem, which says that if you color the edges of
a complete graph red or blue, while trying to avoid having any subsets of k vertices with
all edges between them the same color, you will no longer be able to once the graph is large
enough (for any fixed k). See [GRS90] for much more on the subject of Ramsey theory.
Chapter 6
Logical clocks
41
CHAPTER 6. LOGICAL CLOCKS 42
1. All pairs (e, e0 ) where e precedes e0 in S and e and e0 are events of the
same process.
2. All pairs (e, e0 ) where e is a send event and e0 is the receive event for
the same message.
3. All pairs (e, e0 ) where there exists a third event e00 such that e ⇒ e00
S
and e00 ⇒ e0 . (In other words, we take the transitive closure of the
S
relation defined by the previous two cases.)
It is not terribly hard to show that this gives a partial order; the main
observation is that if e ⇒ e0 , then e precedes e0 in S. So ⇒ is a subset of the
S S
total order <S given by the order of events in S.
A causal shuffle S 0 of a schedule S is a permutation of S that is
consistent with the happens-before relation on S; that is, if e happens-before
e0 in S, then e precedes e0 in S 0 . The importance of the happens-before
relation follows from the following lemma, which says that the causal shuffles
of S are precisely the schedules S 0 that are similar to S.
1. S 0 is a causal shuffle of S.
1. e and e0 are events of the same process p and e <S e0 . But then e <S 0 e0
because S|p = S 0 |p.
In both cases, we are using the fact that if I tell you ⇒, then you know
S
everything there is to know about the order of events in S that you can
deduce from reports from each process together with the fact that messages
don’t travel back in time.
In the case that we want to use this information inside an algorithm, we
run into the issue that ⇒ is a pretty big relation (Θ(|S|2 ) bits with a naive
S
encoding), and seems to require global knowledge of <S to compute. So we
can ask if there is some simpler, easily computable description that works
almost as well. This is where logical clocks come in.
6.2 Implementations
The basic idea of a logical clock is to compute a timestamp for each event,
so that comparing timestamps gives information about ⇒. Note that these
S
timestamps need not be totally ordered. In general, we will have a relation
CHAPTER 6. LOGICAL CLOCKS 44
<L between timestamps such that e ⇒ e0 implies e <L e0 , but it may be that
S
there are some pairs of events that are ordered by the logical clock despite
being incomparable in the happens-before relation.
Examples of logical clocks that use small timestamps but add extra
ordering are Lamport clocks [Lam78], discussed in §6.2.1; and Neiger-Toueg-
Welch clocks [NT87, Wel87], discussed in §6.2.2. These both assign integer
timestamps to events and may order events that are not causally related.
The main difference between them is that Lamport clocks do not alter the
underlying execution, but may allow arbitrarily large jumps in the logical
clock values; while Neiger-Toueg-Welch clocks guarantee small increments at
the cost of possibly delaying parts of the system.1
More informative are vector clocks [Fid91, Mat93], discussed in §6.2.3.
These use n-dimensional vectors of integers to capture ⇒ exactly, at the cost
S
of much higher overhead.
Proof. Let e <L e0 if e has a lower clock value than e0 . If e and e0 are two
events of the same process, then e <L e0 . If e and e0 are send and receive
events of the same message, then again e <L e0 . So for any events e, e0 , if
e ⇒ e0 , then e <L e0 . Now apply Lemma 6.1.1.
S
1
This makes them similar to synchronizers, which we will discuss in Chapter 7.
CHAPTER 6. LOGICAL CLOCKS 45
Proof. Again, we have that (a) all events at the same process occur in
increasing order (since the event count rises even if the clock value doesn’t,
and we assume that the clock value doesn’t drop) and (b) all receive events
occur later than the corresponding send event (since we force them to). So
Lemma 6.1.1 applies.
Proof. The if part follows immediately from the update rules for the vector
clock. For the only if part, suppose e does not happen-before e0 . Then e and
e0 are events of distinct processes p and p0 . For VC(e) < VC(e0 ) to hold, we
must have VC(e)p < VC(e0 )p ; but this can occur only if the value of VC(e)p
is propagated to p0 by some sequence of messages starting at p and ending
at p0 at or before e0 occurs. In this case we have e ⇒ e0 .
S
before taking a snapshot this is the about the best we can hope for in a
message-passing system.
Logical clocks can be used to obtain consistent snapshots: pick some
logical clock time and have each process record its state at this time (i.e.,
immediately after its last step before the time or immediately before its first
step after the time). We have already argued that the logical clock gives a
consistent reordering of the original schedule, so the set of values recorded is
just the configuration at the end of an appropriate prefix of this reordering.
In other words, it’s a consistent snapshot.
If we aren’t building logical clocks anyway, there is a simpler consistent
snapshot algorithm due to Chandy and Lamport [CL85]. Here some central
initiator broadcasts a snap message, and each process records its state and
immediately forwards the snap message to all neighbors when it first receives
a snap message. To show that the resulting configuration is a configuration
of some consistent reordering, observe that (with FIFO channels) no process
receives a message before receiving snap that was sent after the sender sent
snap: thus causality is not violated by lining up all the pre-snap operations
before all the post-snap ones.3
The full Chandy-Lamport algorithm adds a second marker message that is
used to sweep messages in transit out of the communications channels, which
avoids the need to keep logs if we want to reconstruct what messages are in
transit (this can also be done with the logical clock version). The idea is that
when a process records its state after receiving the snap message, it issues
a marker message on each outgoing channel. For incoming channels, the
process all records all messages received between the snapshot and receiving
a marker message on that channel (or nothing if it receives marker before
receiving snap). A process only reports its value when it has received a
marker on each channel. The marker and snap messages can also be combined
if the broadcast algorithm for snap resends it on all channels anyway, and a
further optimization is often to piggyback both on messages of the underlying
protocol if the underlying protocol is chatty enough.
Note that Chandy-Lamport is equivalent to the logical-time snapshot
using Lamport clocks, if the snap message is treated as a message with a
very large timestamp. For Neiger-Toueg-Welch clocks, we get an algorithm
where processes spontaneously decide to take snapshots (since Neiger-Toueg-
Welch clocks aren’t under the control of the snapshot algorithm) and delay
3
If FIFO channels are not available, they can be simulated in the absence of failures by
adding a sequence number to each outgoing message on a given channel, and processing
messages at the recipient only when all previous messages have been processed.
CHAPTER 6. LOGICAL CLOCKS 48
post-snapshot messages until the local snapshot has been taken. This can be
implemented as in Chandy-Lamport by separating pre-snapshot messages
from post-snapshot messages with a marker message, and essentially turns
into Chandy-Lamport if we insist that a process advance its clock to the
snapshot time when it receives a marker.
Synchronizers
7.1 Definitions
Formally, a synchronizer sits between the underlying network and the pro-
cesses and does one of two things:
In both cases, the synchronizer packages all the incoming round r mes-
sages m for a single process together and delivers them as a single action
recv(p, m, r). Similarly, a process is required to hand over all of its outgoing
round-r messages to the synchronizer as a single action send(p, m, r)—this
prevents a process from changing its mind and sending an extra round-r
message or two. It is easy to see that the global synchronizer produces
executions that are effectively indistinguishable from synchronous executions,
assuming that a synchronous execution is allowed to have some variability in
exactly when within a given round each process does its thing. The local
synchronizer only guarantees an execution that is locally indistinguishable
from an execution of the global synchronizer: an individual process can’t
49
CHAPTER 7. SYNCHRONIZERS 50
tell the difference, but comparing actions at different (especially widely sepa-
rated) processes may reveal some process finishing round r + 1 while others
are still stuck in round r or earlier. Whether this is good enough depends on
what you want: it’s bad for coordinating simultaneous missile launches, but
may be just fine for adapting a synchronous message-passing algorithm (as
with distributed breadth-first search as described in §4.3) to an asynchronous
system, if we only care about the final states of the processes and not when
precisely those states are reached.
Formally, the relation between global and local synchronization is de-
scribed by the following lemma:
7.2 Implementations
Here we describe several implementations of synchronizers. All of them give
at least local synchrony. One of them, the beta synchronizer (§7.2.2), also
gives global synchrony.
The names were chosen by their inventor, Baruch Awerbuch [Awe85].
The main difference between them is the mechanism used to determine when
round-r messages have been delivered.
In the alpha synchronizer, every node sends a message to every neigh-
bor in every round (possibly a dummy message if the underlying protocol
doesn’t send a message); this allows the receiver to detect when it’s gotten
all its round-r messages (because it expects to get a message from every
neighbor) but may produce huge blow-ups in message complexity in a dense
graph.
CHAPTER 7. SYNCHRONIZERS 51
• When the root of a tree gets all acks and OK, it sends ready to the
roots of all adjacent trees (and itself). Two trees are adjacent if any of
their members are adjacent.
• When the root collects ready from itself and all adjacent roots, it
broadcasts go through its own tree.
trees in the forest toward the alpha or beta ends of the spectrum, e.g., if
the whole graph is a clique (and we didn’t worry about contention issues),
we might as well just use beta and get O(1) time blowup and O(n) added
messages.
7.3 Applications
See [AW04, §11.3.2] or [Lyn96, §16.5]. The one we have seen is distributed
breadth-first search, where the two asynchronous algorithms we described in
Chapter 4 were essentially the synchronous algorithms with the beta and
alpha synchronizers embedded in them. But what synchronizers give us in
general is the ability to forget about problems resulting from asynchrony
provided we can assume no failures (which may be a very strong assumption)
and are willing to accept a bit of overhead.
We’ll see more examples of this trick of showing that a particular simula-
tion is impossible because it would allow us to violate impossibility results
later, especially when we start looking at the strength of shared-memory
objects in Chapter 18.
this execution into two segments: an initial segment β that includes all
rounds with special actions, and a suffix δ that includes any extra rounds
where the algorithm is still floundering around. We will mostly ignore δ, but
we have to leave it in to allow for the possibility that whatever is happening
there is important for the algorithm to work (say, to detect termination).
We now want to perform a causal shuffle on β that leaves it with only
s − 1 sessions. Because causal shuffles don’t affect time complexity, this will
give us a new bad execution β 0 δ that has only s − 1 sessions despite taking
(s − 1)D time.
The first step is to chop β into at most s − 1 segments β1 , β2 , . . . of at
most D rounds each. Because a message sent in round i is not delivered until
round i + 1, if we have a chain of k messages, each of which triggers the next,
then if the first message is sent in round i, the last message is not delivered
until round i + k. If the chain has length D, its events (including the initial
send and the final delivery) span D + 1 rounds i, i + 1, . . . , i + D. In this
case the initial send and final delivery are necessarily in different segments
βi and βi+1 .
Now pick processes p0 and p1 at distance D from each other. Then any
chain of messages starting at p0 within some segment reaches p1 after the
end of the segment. It follows that for any events e0 of p0 and e1 of p1 in
the same segment βi , e0 6⇒ e1 . So there exists a causal shuffle of βi that
βδ
puts all events of p0 after all events of p1 .1 By a symmetrical argument, we
can similarly put all events of p1 after all events of p0 . In both cases the
resulting schedule is indistinguishable by all processes from the original.
So now we apply these shuffles to each of the segments βi in alternating
order: p0 goes first in the even-numbered segments and p1 goes first in the
odd-numbered segments, yielding a sequence of shuffled segments βi0 . This
has the effect of putting the p0 events together, as in this example with
(s − 1) = 4:
βδ|(p0 , p1 ) = β1 β2 β3 β4 δ|(p0 , p1 )
= β10 β20 β30 β40 δ|(p0 , p1 )
= (p1 p0 )(p0 p1 )(p1 p0 )(p0 p1 )δ
= p1 (p0 p0 )(p1 p1 )(p0 p0 )p1 δ
Now let’s count sessions. We can’t end a session until we reach a point
where both processes have taken at least one step since the end of the last
session. If we mark with a slash the earliest places where this can happen,
we get a picture like this:
Coordinated attack
57
CHAPTER 8. COORDINATED ATTACK 58
Validity If all processes have the same input x, and no messages are lost,
all processes produce output x. (If processes start with different inputs
or one or more messages are lost, processes can output 0 or 1 as long
as they all agree.)
Sadly, there is not protocol that satisfies all three conditions. We show
this in the next section.
8.3.1 An algorithm
Here’s an algorithm that gives = 1/r. (See [Lyn96, §5.2.2] for details
or [VL92] for the original version.) A simplifying assumption is that network
is complete, although a strongly-connected network with r greater than or
equal to the diameter also works.
– Process 1 chooses a random key value uniformly in the range [1, r].
CHAPTER 8. COORDINATED ATTACK 61
– This key is distributed along with leveli [1], so that every process
with leveli [1] ≥ 0 knows the key.
– A process decides 1 at round r if and only if it knows the key,
its information level is greater than or equal to the key, and all
inputs are 1.
• Note that in the preceding, the key value didn’t figure in; so
everybody’s level at round r is independent of the key.
• So now we have that levelri [i] is in {`, ` + 1}, where ` is some fixed
value uncorrelated with the key. The only way to get some process
to decide 1 while others decide 0 is if ` + 1 ≥ key but ` < key. (If
` = 0, a process at this level doesn’t know key, but it can still
reason that 0 < key since key is in [1, r].) This can only occur if
key = ` + 1, which occurs with probability at most 1/r since key
was chosen uniformly.
Synchronous agreement
Validity If all processes start with the same input, all non-faulty processes
decide it.
63
CHAPTER 9. SYNCHRONOUS AGREEMENT 64
For lower bounds, we’ll replace validity with non-triviality (often called
validity in the literature):
Non-triviality follows from validity but doesn’t imply validity; for example,
a non-trivial algorithm might have the property that if all non-faulty processes
start with the same input, they all decide something else.
In §9.2, we’ll show that a simple algorithm gives agreement, termination,
and validity with f failures using f + 1 rounds. We’ll then show in §9.3 that
non-triviality, agreement, and termination imply that f + 1 rounds is the
best possible. In Chapter 10, we’ll show that the agreement is still possible
in f + 1 rounds even if faulty processes can send arbitrary messages instead
of just crashing, but only if the number of faulty processes is strictly less
than n/3.
Lemma 9.2.1. After f + 1 rounds, all non-faulty processes have the same
set.
Proof. Let Sir be the set stored by process i after r rounds. What we’ll really
show is that if there are no failures in round k, then Sir = Sjr = Sik+1 for all
i, j, and r > k. To show this, observe that no faults in round k means that
all processes that are still alive at the start of round k send their message
to all other processes. Let L be the set of live processes in round k. At the
end of round k, for i in L we have Sik+1 = j∈L Sjk = S. Now we’ll consider
S
already did m = 0, so for larger m notice that all messages are equal to S
and so Sik+1+m is the union of a whole bunch of S’s. So in particular we
have Sif +1 = S (since some failure-free round occurred in the preceding f + 1
rounds) and everybody decides the same value f (S).
process, each Ai has at most one crash per round, and the communication
pattern in Ak is identical to A except that p crashes fully in round r.
The first and last step apply the induction hypothesis; the middle one yields
indistinguishable executions since only p0 can tell the difference between m
arriving or not and its lips are sealed.
We’ve shown that we can remove one message through a sequence of
executions where each pair of adjacent executions is indistinguishable to
some process. Now paste together n − 1 such sequences (one per message)
to prove the lemma.
The rest of the proof: Crash some process fully in round 0 and then
change its input. Repeat until all inputs are changed.
9.4 Variants
So far we have described binary consensus, since all inputs are 0 or 1. We
can also allow larger input sets. With crash failures, this allows a stronger
validity condition: the output must be equal to some non-faulty process’s
CHAPTER 9. SYNCHRONOUS AGREEMENT 68
input. It’s not hard to see that Dolev-Strong (§9.2) gives this stronger
condition.
Chapter 10
Byzantine agreement
69
CHAPTER 10. BYZANTINE AGREEMENT 70
A0 B0 A0 B0
C1 C0
Č B1 A1
B0
B0 A0 D0
A0 D0 C0
C1
Č A1
B1
D1
the group getting the same input, which can only make things easier. Then
we get a protocol for n = 3 and f = 1, an impossibility.
1 S ← {hhi , inputi}
2 for round ← 0 . . . f do
3 Send {hxi, vi | hx, vi ∈ S ∧ |x| = round ∧ i 6∈ x} to all processes
4 upon receiving S 0 from j do
// Filter out obviously bogus tuples
5 if ∀ hxj 0 , vi ∈ S 0 : |x| = round ∧ j 0 = j then
6 S ← S ∪ S0
that use majority voting to try to get a more trustworthy picture of the
original inputs.
Formally, we think of the set of paths as a tree where w is the parent of
wj for each path w and each id j not in w. To apply EIG in the Byzantine
model, ill-formed messages received from j are treated as missing messages,
but otherwise the data-collecting part of EIG proceeds as in the crash failure
model. However, we compute the decision value from the last-round values
recursively as follows. First replace any missing pair involving a path w with
|w| = f + 1 with (w, 0). Then for each path w, define val′(w, i) to be the
majority value among val′(wj, i) for all j, or val(w, i) if |w| = f + 1. Finally,
have process i decide val′(⟨⟩, i), which it can compute locally from its own
stored values val(w, i).
Each val′ is a reconstruction of older values from later ones. As we move
up the tree from wj to w we are moving backwards in time, until in the
end we get the decision value val′(⟨⟩, i) as a majority of reconstructed inputs
val′(j, i). One way to think about this is that I don't trust j to give me the
right value for wj—even when w = ⟨⟩ and j is claiming to report its own
input—so instead I take a majority of values of wj that j allegedly reported
to other people. But since I don't trust those other people either, I use the
same process recursively to construct those reports.
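As a concrete rendering of this recursive reconstruction, here is a Python
sketch of the decision computation for a single process, given its stored
val(w, i) values indexed by paths w (tuples of ids). Falling back to 0 when no
strict majority exists among the children is my assumption; the text only
specifies the majority rule and the default 0 for missing leaves.

# Sketch of the recursive val' computation for one process. `val` maps a path
# w (a tuple of distinct ids) to the stored value val(w, i); missing leaves
# default to 0, and so does an internal node with no strict majority (an
# assumption, not something specified above).
def decide(val, ids, f, default=0):
    def val_prime(w):
        if len(w) == f + 1:                         # leaf of the EIG tree
            return val.get(w, default)
        children = [val_prime(w + (j,)) for j in ids if j not in w]
        for v in set(children):                     # strict majority, if any
            if 2 * children.count(v) > len(children):
                return v
        return default
    return val_prime(())                            # decide val'(<>, i)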
Lemma 10.2.1. If i, j, and k are all non-faulty then for all w, val(wk, i) =
val(wk, j) = val(w, k).
Lemma 10.2.2. If j is non-faulty then val′(wj, i) = val(w, j) for all non-
faulty i and all w.
The idea of the algorithm is that in each phase, everybody announces their
2. The process i takes its value from the phase king. We’ve already shown
that i then agrees with any j that sees a big majority; but since the
phase king is non-faulty, process i will agree with any process j that
also takes its new preference from the phase king.
This shows that after any phase with a non-faulty king, all processes
agree. The proof that the non-faulty processes continue to agree is the same
as for validity.
Chapter 11
Impossibility of asynchronous agreement
There’s an easy argument that says that you can’t do most things in an
asynchronous message-passing system with n/2 crash failures: partition the
processes into two subsets S and T of size n/2 each, and allow no messages
between the two sides of the partition for some long period of time. Since
the processes in each side can’t distinguish between the other side being
slow and being dead, eventually each has to take action on their own. For
many problems, we can show that this leads to a bad configuration. For
example, for agreement, we can supply each side of the partition with a
different common input value, forcing disagreement because of validity. We
can then satisfy the fairness condition that says all messages are eventually
delivered by delivering the delayed messages across the partition, but it’s
too late for the protocol.
The Fischer-Lynch-Paterson (FLP) result [FLP85] says something much
stronger: you can’t do agreement in an asynchronous message-passing system
if even one crash failure is allowed.1 After its initial publication, it was quickly
generalized to other models including asynchronous shared memory [LAA87],
and indeed the presentation of the result in [Lyn96, §12.2] is given for shared-
memory first, with the original result appearing in [Lyn96, §17.2.3] as a
corollary of the ability of message passing to simulate shared memory. In
these notes, I’ll present the original result; the dependence on the model is
surprisingly limited, and so most of the proof is the same for both shared
memory (even strong versions of shared memory that support operations
1. Unless you augment the basic model in some way, say by adding randomization
(Chapter 23) or failure detectors (Chapter 13).
11.1 Agreement
Usual rules: agreement (all non-faulty processes decide the same value),
termination (all non-faulty processes eventually decide some value), validity
(for each possible decision value, there is an execution in which that value
is chosen). Validity can be tinkered with without affecting the proof much.
To keep things simple, we assume the only two decision values are 0 and
1.
11.2 Failures
A failure is an internal action after which all send operations are disabled.
The adversary is allowed one failure per execution. Effectively, this means
that any group of n − 1 processes must eventually decide without waiting
for the n-th, because it might have failed.
11.3 Steps
The FLP paper uses a notion of steps that is slightly different from the
send and receive actions of the asynchronous message-passing model we’ve
been using. Essentially a step consists of receiving zero or more messages
followed by doing a finite number of sends. To fit it into the model we’ve been
using, we’ll define a step as either a pair (p, m), where p receives message
m and performs zero or more sends in response, or (p, ⊥), where p receives
nothing and performs zero or more sends. We assume that the processes are
deterministic, so the messages sent (if any) are determined by p’s previous
state and the message received. Note that these steps do not correspond
2. Chapter 19.
precisely to delivery and send events or even pairs of delivery and send events,
because what message gets sent in response to a particular delivery may
change as the result of delivering some other message; but this won’t affect
the proof.
The fairness condition essentially says that if (p, m) or (p, ⊥) is continu-
ously enabled it eventually happens. Since messages are not lost, once (p, m)
is enabled in some configuration C, it is enabled in all successor configurations
until it occurs; similarly (p, ⊥) is always enabled. So to ensure fairness, we
have to ensure that any non-faulty process eventually performs any enabled
step.
Comment on notation: I like writing the new configuration reached by
applying a step e to C like this: Ce. The FLP paper uses e(C).
2. Now suppose e and e′ are steps of the same process p. Again we let both
go through in either order. It is not the case now that Dee′ = De′e,
since p knows which step happened first (and may have sent messages
telling the other processes). But now we consider some finite sequence
of steps e1 e2 . . . ek in which no message sent by p is delivered and some
process decides in Dee1 . . . ek (this occurs since the other processes
can't distinguish Dee′ from the configuration in which p died in D, and
so have to decide without waiting for messages from p). This execution
fragment is indistinguishable to all processes except p from De′ee1 . . . ek,
so the deciding process decides the same value i in both executions.
But Dee′ is 0-valent and De′e is 1-valent, giving a contradiction.
It follows that our assumption was false, and there is some reachable
bivalent configuration C′e.
Now to construct a fair execution that never decides, we start with a
bivalent configuration, choose the oldest enabled action and use the above
to make it happen while staying in a bivalent configuration, and repeat.
Chapter 12
Paxos
Implementing these rules requires only that each accepter track n_a, the
highest number of any proposal for which it sent an ack, and ⟨v, n_v⟩, the last
proposal that it accepted. Pseudocode showing the behavior of proposers and
accepters in the core Paxos protocol is given in Algorithm 12.1.
Note that acceptance is a purely local phenomenon; additional messages
are needed to detect which if any proposals have been accepted by a majority
of accepters. Typically this involves a fourth round, where accepters send
accepted(n, v) to all learners.
There is no requirement that only a single proposal is sent out (indeed,
if proposers can fail we will need to send out more to jump-start the proto-
col). The protocol guarantees agreement and validity no matter how many
proposers there are and no matter how often they start.
1 procedure Propose(n, v)
      // Issue proposal number n with value v
      // Assumes n is unique
2     send prepare(n) to all accepters
3     wait to receive ack(n, v′, n_v′) from a majority of accepters
4     if some v′ is not ⊥ then
5         v ← v′ with maximum n_v′
6     send accept(n, v) to all accepters
7 procedure accepter()
8     initially do
9         n_a ← −∞
10        v ← ⊥
11        n_v ← −∞
12    upon receiving prepare(n) from p do
13        if n > max(n_a, n_v) then
              // Respond to proposal
14            send ack(n, v, n_v) to p
15            n_a ← n
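For concreteness, here is a small Python rendering of an accepter's state
machine, with message transport elided. The prepare rule follows the
pseudocode above; the rule used for accept messages (accept when n is at
least as large as anything previously acknowledged or accepted) and the
class and message names are my assumptions, not taken from Algorithm 12.1.

class Accepter:
    def __init__(self):
        self.n_a = float('-inf')                  # highest proposal number acked
        self.v, self.n_v = None, float('-inf')    # last accepted proposal

    def on_prepare(self, n):
        if n > max(self.n_a, self.n_v):
            self.n_a = n
            return ('ack', n, self.v, self.n_v)
        return ('nack', n, self.n_a)              # report the conflicting number

    def on_accept(self, n, v):
        if n >= max(self.n_a, self.n_v):          # assumed acceptance rule
            self.v, self.n_v = v, n
            return ('accepted', n, v)
        return ('nack', n, self.n_a)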
[Figure omitted: an example execution of Paxos with proposers p1, p2, p3 and
accepters a1, a2, a3, exchanging prepare, ack, nack, accept, and accepted
messages as proposals 1, 2, and 3 contend.]
1. Any ack(n′, v′, n_v′) message received by p_n′ has n_v′ < n′. Proof: Im-
mediate from the code.
These two properties together imply that p_n′ receives at least one
ack(n, v′′, n′′) with n ≤ n′′ < n′ and no such messages with n′′ < n. So
the maximum proposal number it sees is n′′ where n ≤ n′′ < n′. By the
induction hypothesis, the corresponding value is v. It follows that p_n′ also
chooses v.
Chapter 13
Failure detectors
Note that “strong” and “weak” mean different things for accuracy vs
completeness: for accuracy, we are quantifying over suspects, and for com-
pleteness, we are quantifying over suspectors. Even a weakly-accurate failure
detector guarantees that all processes trust the one visibly good process.
1 initially do
2     suspects ← ∅
3 while true do
4     Let S be the set of all processes my weak detector suspects.
5     Send S to all processes.
6 upon receiving S from q do
7     suspects ← (suspects ∪ S) \ {q}
Algorithm 13.1: Boosting completeness
It’s not hard to see that this boosts completeness: if p crashes, somebody’s
weak detector eventually suspects it, this process tells everybody else, and p
never contradicts it. So eventually everybody suspects p.
What is slightly trickier is showing that it preserves accuracy. The
essential idea is this: if there is some good-guy process p that everybody trusts
forever (as in weak accuracy), then nobody ever reports p as suspect—this
also covers strong accuracy since the only difference is that now every non-
faulty process falls into this category. For eventual weak accuracy, wait
for everybody to stop suspecting p, wait for every message ratting out p
to be delivered, and then wait for p to send a message to everybody. Now
everybody trusts p, and nobody ever suspects p again. Eventual strong
accuracy is again similar.
This will justify ignoring the weakly-complete classes.
Jumping to the punch line: P can simulate any of the others, S and
♦P can both simulate ♦S but can’t simulate P or each other, and ♦S can’t
simulate any of the others (See Figure 13.1—we’ll prove all of this later.)
Thus ♦S is the weakest class of failure detectors in this list. However, ♦S is
strong enough to solve consensus, and in fact any failure detector (whatever
[Diagram omitted: P above S and ♦P, with ♦S below both.]
Figure 13.1: Partial order of failure detector classes. Higher classes can
simulate lower classes but not vice versa.
1 V_p ← {⟨p, v_p⟩}
2 δ_p ← {⟨p, v_p⟩}
  // Phase 1
3 for i ← 1 to n − 1 do
4     Send ⟨i, δ_p⟩ to all processes.
5     Wait to receive ⟨i, δ_q⟩ from all q I do not suspect.
6     δ_p ← (∪_q δ_q) \ V_p
7     V_p ← (∪_q δ_q) ∪ V_p
  // Phase 2
8 Send ⟨n, V_p⟩ to all processes.
9 Wait to receive ⟨n, V_q⟩ from all q I do not suspect.
10 V_p ← (∩_q V_q) ∩ V_p
  // Phase 3
11 return some input from V_p chosen via a consistent rule.
Algorithm 13.2: Consensus with a strong failure detector
1 procedure broadcast(m)
2 send m to all processes.
3 upon receiving m do
4 if I haven’t seen m before then
5 send m to all processes
6 deliver m to myself
1 preference ← input
2 timestamp ← 0
3 for round ← 1 . . . ∞ do
4     Send ⟨round, preference, timestamp⟩ to coordinator
5     if I am the coordinator then
6         Wait to receive ⟨round, preference, timestamp⟩ from majority of processes.
7         Set preference to value with largest timestamp.
8         Send ⟨round, preference⟩ to all processes.
9     Wait to receive ⟨round, preference′⟩ from coordinator or to suspect coordinator.
10    if I received ⟨round, preference′⟩ then
11        preference ← preference′
12        timestamp ← round
13        Send ack(round) to coordinator.
14    else
15        Send nack(round) to coordinator.
16    if I am the coordinator then
17        Wait to receive ack(round) or nack(round) from a majority of processes.
18        if I received no nack(round) messages then
19            Broadcast preference using reliable broadcast.
2. This is a slight weakening of the problem, which however still separates P from the
other classes. For the real problem see Chandra and Toueg [CT96].
Chapter 14
Quorum systems
14.1 Basics
In the past few chapters, we’ve seen many protocols that depend on the
fact that if I talk to more than n/2 processes and you talk to more than
n/2 processes, the two groups overlap. This is a special case of a quorum
system, a family of subsets of the set of processes with the property that
any two subsets in the family overlap. By choosing an appropriate family, we
may be able to achieve lower load on each system member, higher availability,
defense against Byzantine faults, etc.
The exciting thing from a theoretical perspective is that these turn
a systems problem into a combinatorial problem: this means we can ask
combinatorialists how to solve it.
• Dynamic quorum systems: get more than half of the most recent copy.
14.3 Goals
• Minimize quorum size.
Naor and Wool [NW98] describe trade-offs between these goals (some of
these were previously known, see the paper for citations):
• load ≥ max(c/n, 1/c) where c is the minimum quorum size. The first
case is obvious: if every access hits c nodes, spreading them out as
evenly as possible still hits each node c/n of the time. The second is
trickier: Naor and Wool prove it using LP duality, but the argument
essentially says that if we have some quorum Q of size c, then since
every other quorum Q′ intersects Q in at least one place, we can show
that every Q′ adds at least 1 unit of load in total to the c members of
Q. So if we pick a random quorum Q′, the average load added to all of
Q is at least 1, so the average load added to some particular element
of Q is at least 1/|Q| = 1/c. Combining the two cases, we can't hope
to get load better than 1/√n, and to get this load we need quorums of
size at least √n.
Figure 14.1: Figure 2 from [NW98]. Solid lines are G(3); dashed lines are
G∗ (3).
G(d) grid and one from the G∗(d) grid (the star indicates that G∗(d) is the
dual graph1 of G(d)). A quorum consists of a set of servers that produce an
LR path in G(d) and a TB path in G∗(d). Quorums intersect, because any
LR path in G(d) must cross some TB path in G∗(d) at some server (in fact,
each pair of quorums intersects in at least two places). The total number of
elements n is (d + 1)^2 and the minimum size of a quorum is 2d + 1 = Θ(√n).
The symmetry of the mesh gives that there exists a LR path in the
mesh if and only if there does not exist a TB path in its complement, the
graph that has an edge only if the mesh doesn’t. For a mesh with failure
probability p < 1/2, the complement is a mesh with failure probability
q = 1 − p > 1/2. Using results in percolation theory, it can be shown that for
failure probability q > 1/2, the probability that there exists a left-to-right
path is exponentially small in d (formally, for each p there is a constant φ(p)
such that Pr[∃LR path] ≤ exp(−φ(p)d)). We then have
So the failure probability of this system is exponentially small for any fixed
p < 1/2.
See the paper [NW98] for more details.
14.6.1 Example
Let a quorum be any set of size k√n for some k and let all quorums be
chosen uniformly at random. Pick some quorum Q1; what is the probability
that a random Q2 does not intersect Q1? Imagine we choose the elements
of Q2 one at a time. The chance that the first element x_1 of Q2 misses Q1
is exactly (n − k√n)/n = 1 − k/√n, and conditioning on x_1 through x_{i−1}
missing Q1 the probability that x_i also misses it is (n − k√n − i + 1)/(n −
i + 1) ≤ (n − k√n)/n = 1 − k/√n. So taking the product over all i gives
Pr[all miss Q1] ≤ (1 − k/√n)^{k√n} ≤ exp(−(k/√n) · k√n) = exp(−k²). So by
setting k = Θ(ln 1/ε), we can get our desired ε-intersecting system.
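A quick Monte Carlo check of this calculation (a sketch with arbitrarily
chosen parameters, not part of the argument):

import math, random

def miss_probability(n=10000, k=2, trials=20000):
    # estimate Pr[a random quorum of size k*sqrt(n) misses a fixed one]
    size = int(k * math.sqrt(n))
    q1 = set(random.sample(range(n), size))
    misses = sum(1 for _ in range(trials)
                 if not q1 & set(random.sample(range(n), size)))
    return misses / trials

# miss_probability() comes out near exp(-k^2) = exp(-4), about 0.018, for k = 2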
14.6.2 Performance
Failure probabilities, if naively defined, can be made arbitrarily small: add
low-probability singleton quorums that are hardly ever picked unless massive
failures occur. But the resulting system is still ε-intersecting.
One way to look at this is that it points out a flaw in the ε-intersecting
definition: ε-intersecting quorums may cease to be ε-intersecting conditioned
on a particular failure pattern (e.g., when all the non-singleton quorums are
knocked out by massive failures). But Malkhi et al. [MRWW01] address the
problem in a different way, by considering only survival of high quality
quorums, where a particular quorum Q is δ-high-quality if Pr[Q1 ∩ Q2 =
∅ | Q1 = Q] ≤ δ and high quality if it's √ε-high-quality. It's not hard to show
that a random quorum fails to be δ-high-quality with probability at most ε/δ, so a
high quality quorum is one that fails to intersect a random quorum with
probability at most √ε and a high quality quorum is picked with probability
at least 1 − √ε.
We can also consider load; Malkhi et al. [MRWW01] show that essentially
the same bounds on load for strict quorum systems also hold for ε-intersecting
quorum systems: load(S) ≥ max(E(|Q|)/n, (1 − √ε)²/E(|Q|)), where E(|Q|)
is the expected size of a quorum. The left-hand branch of the max is just
the average load applied to a uniformly-chosen server. For the right-hand
side, pick some high quality quorum Q′ with size less than or equal to
(1 − √ε) E(|Q|) and consider the load applied to its most loaded member by
its nonempty intersection (which occurs with probability at least 1 − √ε)
with a random quorum.
Part II
Shared memory
Chapter 15
Model
A process is defined by giving, for each state, the operation that it would
like to do next, together with a transition function that specifies how the
state will be updated in response to the return value of that operation. A
configuration of the system consists of a vector of states for the processes
and a vector of values for the registers. A sequential execution consists of a
sequence of alternating configurations and operations C0 , π1 , C1 , π2 , C2 . . . ,
where in each triple Ci , πi+1 , Ci+1 , the configuration Ci+1 is the result of
applying πi+1 to configuration Ci . For read operations, this means that the
state of the reading process is updated according to its transition function.
For write operations, the state of the writing process is updated, and the
state of the written register is also updated.
Pseudocode for shared-memory protocols is usually written using stan-
dard pseudocode conventions, with the register operations appearing either
as explicit subroutine calls or implicitly as references to shared variables.
Sometimes this can lead to ambiguity; for example, in the code fragment
1 leftIsDone ← read(leftDone)
2 rightIsDone ← read(rightDone)
3 write(done, leftIsDone ∧ rightIsDone)
Space Just how big are those registers anyway? Much of the work in this
area assumes they are very big. But we can ask for the maximum
number of bits in any one register (width) or the total size (bit
complexity) or number (space complexity) of all registers, and will
try to minimize these quantities when possible. We can also look at
the size of the internal states of the processes for another measure of
space complexity.
Sticky bits (or sticky registers) With a sticky bit or sticky regis-
ter [Plo89], once the initial empty value is overwritten, all further
writes fail. The writer is not notified that the write fails, but may
be able to detect this fact by reading the register in a subsequent
operation.
Bank accounts Replace the write operation with deposit, which adds a
non-negative amount to the state, and withdraw, which subtracts a
non-negative amount from the state provided the result would not go
below 0; otherwise, it has no effect.
These solve problems that are hard for ordinary read/write registers under
bad conditions. Note that they all have to return something in response to
an invocation.
There are also blocking objects like locks or semaphores, but these don’t
fit into the RMW framework.
We can also consider generic read-modify-write registers that can compute
arbitrary functions (passed as an argument to the read-modify-write opera-
tion) in the modify step. Here we typically assume that the read-modify-write
operation returns the old value of the register. Generic read-modify-write
registers are not commonly found in hardware but can be easily simulated
(in the absence of failures) using mutual exclusion.2
2. See Chapter 17.
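A minimal sketch of this simulation in Python, with a lock standing in for
whatever mutual exclusion protocol we use (and, as noted, no tolerance for
failures); the class name is mine.

import threading

class RMWRegister:
    def __init__(self, initial=0):
        self._value = initial
        self._mutex = threading.Lock()            # stands in for a mutex protocol

    def rmw(self, f):
        # atomically replace the value with f(value) and return the old value
        with self._mutex:
            old = self._value
            self._value = f(old)
            return old

r = RMWRegister(0)
old = r.rmw(lambda x: x + 1)                      # fetch-and-increment as a special case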
Chapter 16
Distributed shared memory
with new values (v, t) upon receiving write(v, t) from any other process p,
provided t is greater than the process’s current timestamp. It then responds
to p with ack(v, t), whether or not it updated its local copy. A process will
also respond to a message read(u) with a response ack(value, timestamp, u);
here u is a nonce3 used to distinguish between different read operations so
that a process can’t be confused by out-of-date acknowledgments.
To write a value, the writer increments its timestamp, updates its value
and sends write(value, timestamp) to all other processes. The write operation
terminates when the writer has received acknowledgments containing the
new timestamp value from a majority of processes.
To read a value, a reader does two steps:
1. Send read(u) to all processes, where u is a fresh nonce, and wait to
receive ack(v, t, u) responses from a majority of the processes; take as the
return value the value v with the largest timestamp t.
2. Send write(v, t) with this value and timestamp to all processes, and wait
for ack(v, t) from a majority of the processes before returning.
(Any extra messages, messages with the wrong nonce, etc., are discarded.)
Both reads and writes cost Θ(n) messages (Θ(1) per process).
Intuition: Nobody can return from a write or a read until they are sure
that subsequent reads will return the same (or a later) value. A process
can only be sure of this if it knows that the values collected by a read will
include at least one copy of the value written or read. But since majorities
overlap, if a majority of the processes have a current copy of v, then the
majority read quorum will include it. Sending write(v, t) to all processes
and waiting for acknowledgments from a majority is just a way of ensuring
that a majority do in fact have timestamps that are at least t.
If we omit the write stage of a read operation, we may violate lineariz-
ability. An example would be a situation where two values (1 and 2, say),
have been written to exactly one process each, with the rest still holding the
initial value ⊥. A reader that observes 1 and (n − 1)/2 copies of ⊥ will return
1, while a reader that observes 2 and (n − 1)/2 copies of ⊥ will return 2. In
the absence of the write stage, we could have an arbitrarily long sequence
of readers return 1, 2, 1, 2, . . . , all with no concurrency. This would not be
3. A nonce is any value that is guaranteed to be used at most once (the term originally
comes from cryptography, which in turn got it from linguistics). In practice, a reader will
most likely generate a nonce by combining its process id with a local timestamp.
consistent with any sequential execution in which 1 and 2 are only written
once.
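Here is a compact Python sketch of the quorum logic just described, with
real message passing replaced by direct calls on an arbitrarily chosen
majority of replica objects; the helper names are mine, and timestamps
come from a single writer as in the algorithm above.

import random

class Replica:
    def __init__(self):
        self.value, self.ts = None, 0
    def on_write(self, v, t):
        if t > self.ts:                           # keep only newer values
            self.value, self.ts = v, t
        return ('ack', v, t)
    def on_read(self):
        return (self.value, self.ts)

def majority(replicas):
    return random.sample(replicas, len(replicas) // 2 + 1)   # any majority works

def write(replicas, writer, v):
    writer['ts'] += 1                             # writer increments its timestamp
    for rep in majority(replicas):                # wait for acks from a majority
        rep.on_write(v, writer['ts'])

def read(replicas):
    responses = [rep.on_read() for rep in majority(replicas)]
    v, t = max(responses, key=lambda p: p[1])     # value with largest timestamp
    for rep in majority(replicas):                # write-back phase, so later
        rep.on_write(v, t)                        # reads cannot return older values
    return v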
4. none of the other cases applies, and we feel like putting π1 first.
The intent is that we pick some total ordering that is consistent with both
<T and the timestamp ordering (with writes before reads when timestamps
are equal). To make this work we have to show (a) that these two orderings
are in fact consistent, and (b) that the resulting ordering produces values
consistent with an atomic register: in particular, that each read returns the
value of the last preceding write.
Part (b) is easy: since timestamps only increase in response to writes,
each write is followed by precisely those reads with the same timestamp,
which are precisely those that returned the value written.
For part (a), suppose that π1 <T π2. The first case is when π2 is a read.
Then before the end of π1, a set S of more than n/2 processes send the π1
process an ack(v1, t1) message. Since local timestamps only increase, from
this point on any ack(v2, t2, u) message sent by a process in S has t2 ≥ t1.
Let S′ be the set of processes sending ack(v2, t2, u) messages processed by
π2. Since |S| > n/2 and |S′| > n/2, we have S ∩ S′ is nonempty and so S′
includes a process that sent ack(v2, t2) with t2 ≥ t1. So π2 is serialized after
3. Send write(v, t) to all processes, and wait for a response ack(v, t) from
a majority of processes.
This increases the cost of a write by a constant factor, but in the end we
still have only a linear number of messages. The proof of linearizability is
essentially the same as for the single-writer algorithm, except now we must
consider the case of two write operations by different processes. Here we have
that if π1 <T π2 , then π1 gets acknowledgments of its write with timestamp
t1 from a majority of processes before π2 starts its initial phase to compute
count. Since π2 waits for acknowledgments from a majority of processes as
well, these majorities overlap, so π2 ’s timestamp t2 must exceed t1 . So the
linearization ordering previously defined still works.
Chapter 17
Mutual exclusion
17.2 Goals
(See also [AW04, §4.2], [Lyn96, §10.2].)
Core mutual exclusion requirements:
Note that the protocol is not required to guarantee that processes leave
the critical or remainder state, but we generally have to insist that the
processes at least leave the critical state on their own to make progress.
An additional useful property (not satisfied by all mutual exclusion
protocols; see [Lyn96, §10.4]):
1 oldValue ← read(bit)
2 write(bit, 1)
3 return oldValue
Typically there is also a second reset operation for setting the bit back
to zero. For some implementations, this reset operation may only be used
safely by the last process to get 0 from the test-and-set bit.
Because a test-and-set operation is atomic, if two processes both try to
perform test-and-set on the same bit, only one of them will see a return value
of 0. This is not true if each process simply executes the above code on a
stock atomic register: there is an execution in which both processes read
0, then both write 1, then both return 0 to whatever called the non-atomic
test-and-set subroutine.
Test-and-set provides a trivial implementation of mutual exclusion, shown
in Algorithm 17.1.
1 while true do
// trying
2 while TAS(lock) = 1 do nothing
// critical
3 (do critical section stuff)
// exiting
4 reset(lock)
// remainder
5 (do remainder stuff)
Algorithm 17.1: Mutual exclusion using test-and-set
It is easy to see that this code provides mutual exclusion, as once one
process gets a 0 out of lock, no other can escape the inner while loop until
that process calls the reset operation in its exiting state. It also provides
progress (assuming the lock is initially set to 0); the only part of the code
that is not straight-line code (which gets executed eventually by the fairness
condition) is the inner loop, and if lock is 0, some process escapes it, while if
lock is 1, some process is in the region between the TAS call and the reset
call, and so it eventually gets to reset and lets the next process in (or itself,
if it is very fast).
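Here is what Algorithm 17.1 looks like in Python, with the atomic test-and-set
bit simulated by a lock-protected flag, since Python provides no hardware
test-and-set; only the atomicity of the bit is simulated, the spinning is real.

import threading

class TASBit:
    def __init__(self):
        self._bit, self._lk = 0, threading.Lock()
    def test_and_set(self):                 # atomically set to 1, return old value
        with self._lk:
            old, self._bit = self._bit, 1
            return old
    def reset(self):
        with self._lk:
            self._bit = 0

lock = TASBit()
count = [0]

def worker():
    for _ in range(5000):
        while lock.test_and_set() == 1:     # trying: spin until we read a 0
            pass
        count[0] += 1                       # critical section
        lock.reset()                        # exiting

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert count[0] == 10000                    # mutual exclusion preserved the count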
The algorithm does not provide lockout-freedom: nothing prevents a
single fast process from scooping up the lock bit every time it goes through
the outer loop, while the other processes ineffectually grab at it just after it
is taken away. Lockout-freedom requires a more sophisticated turn-taking
strategy.
1 while true do
      // trying
2     enq(Q, myId)
3     while head(Q) ≠ myId do nothing
      // critical
4     (do critical section stuff)
      // exiting
5     deq(Q)
      // remainder
6     (do remainder stuff)
Algorithm 17.2: Mutual exclusion using a queue
Here the proof of mutual exclusion is that only the process whose id is at
the head of the queue can enter its critical section. Formally, we maintain an
invariant that any process whose program counter is between the inner while
loop and the call to deq(Q) must be at the head of the queue; this invariant
is easy to show because a process can’t leave the while loop unless the test
fails (i.e., it is already at the head of the queue), no enq operation changes
the head value (if the queue is nonempty), and the deq operation (which
does change the head value) can only be executed by a process already at
the head (from the invariant).
Deadlock-freedom follows from proving a similar invariant that every
1 while true do
      // trying
2     position ← RMW(V, ⟨V.first, V.last + 1⟩)
      // enqueue
3     while RMW(V, V).first ≠ position.last do
4         nothing
      // critical
5     (do critical section stuff)
      // exiting
6     RMW(V, ⟨V.first + 1, V.last⟩)
      // dequeue
      // remainder
7     (do remainder stuff)
Algorithm 17.3: Mutual exclusion using read-modify-write
1. p0 sets present[0] ← 1
2. p0 sets waiting ← 0
3. p0 reads present[1] = 0 and enters the critical section
4. p1 sets present[1] ← 1
5. p1 sets waiting ← 1
6. p1 reads present[0] = 1 and waiting = 1 and loops
shared data:
1 waiting, initially arbitrary
2 present[i] for i ∈ {0, 1}, initially 0
3 Code for process i:
4 while true do
      // trying
5     present[i] ← 1
6     waiting ← i
7     while true do
8         if present[¬i] = 0 then
9             break
10        if waiting ≠ i then
11            break
      // critical
12    (do critical section stuff)
      // exiting
13    present[i] ← 0
      // remainder
14    (do remainder stuff)
Algorithm 17.4: Peterson's mutual exclusion algorithm for two processes
7. p0 sets present[0] ← 0
8. p1 reads present[0] = 0 and enters the critical section
The idea is that if I see a 0 in your present variable, I know that you
aren’t playing, and can just go in.
Here’s a more interleaved execution where the waiting variable decides
the winner:
1. p0 sets present[0] ← 1
2. p0 sets waiting ← 0
3. p1 sets present[1] ← 1
4. p1 sets waiting ← 1
5. p0 reads present[1] = 1
6. p1 reads present[0] = 1
7. p0 reads waiting = 1 and enters the critical section
8. p1 reads waiting = 1 and loops
9. p0 sets present[0] ← 0
10. p1 reads present[0] = 0 and enters the critical section
Note that it’s the process that set the waiting variable last (and thus sees
its own value) that stalls. This is necessary because the earlier process might
long since have entered the critical section.
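For concreteness, here is an executable rendering of Algorithm 17.4 for two
Python threads. This illustrates the logic only: it relies on the interpreter
effectively serializing the shared-variable accesses, where real hardware
would need the sequentially-consistent registers the algorithm assumes.

import threading

present = [0, 0]
waiting = [0]                            # single shared cell

def lock(i):
    present[i] = 1
    waiting[0] = i
    while present[1 - i] == 1 and waiting[0] == i:
        pass                             # spin: the other is present and I moved last

def unlock(i):
    present[i] = 0

count = [0]
def worker(i):
    for _ in range(2000):
        lock(i)
        count[0] += 1                    # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert count[0] == 4000                  # no lost updates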
Sadly, examples are not proofs, so to show that this works in general,
we need to formally verify each of mutual exclusion and lockout-freedom.
Mutual exclusion is a safety property, so we expect to prove it using invariants.
The proof in [Lyn96] is based on translating the pseudocode directly into
automata (including explicit program counter variables); we’ll do essentially
the same proof but without doing the full translation to automata. Below,
we write that p_i is at line k if the operation in line k is enabled but has
not occurred yet.
Lemma 17.4.2. If pi is at Line 12, and p¬i is at Line 8, 10, or 12, then
waiting = ¬i.
Proof. We’ll do the case i = 0; the other case is symmetric. The proof is by
induction on the schedule. We need to check that any event that makes the
left-hand side of the invariant true or the right-hand side false also makes
the whole invariant true. The relevant events are:
shared data:
1 atomic register race, big enough to hold an id, initially ⊥
2 atomic register door, big enough to hold a bit, initially open
3 procedure splitter(id)
4 race ← id
5 if door = closed then
6 return right
7 door ← closed
8 if race = id then
9 return stop
10 else
11 return down
arrives at a splitter, then (a) at least one process returns right or stop; and
(b) at least one process returns down or stop; (c) at most one process returns
stop; and (d) any process that runs by itself returns stop. The first two
properties will be useful when we consider the problem of renaming in
Chapter 24; we will prove them there. The last two properties are what we
want for mutual exclusion.
The names of the variables race and door follow the presentation in
[AW04, §4.4.5]; Moir and Anderson [MA95], following Lamport [Lam87],
call these X and Y . As in [MA95], we separate out the right and down
outcomes—even though they are equivalent for mutex—because we will need
them later for other applications.
The intuition behind Algorithm 17.5 is that setting door to closed closes
the door to new entrants, and the last entrant to write its id to race wins
(it’s a slow race), assuming nobody else writes race and messes things up.
The added cost of the splitter is always O(1), since there are no loops.
To reset the splitter, write open to door. This allows new processes to
enter the splitter and possibly return stop.
Lemma 17.4.3. After each time that door is set to open, at most one process
running Algorithm 17.5 returns stop.
Proof. To simplify the argument, we assume that each process calls splitter
at most once.
Let t be some time at which door is set to open (−∞ in the case of the
initial value). Let St be the set of processes that read open from door after
time t and before the next time at which some process writes closed to door,
and that later return stop by reaching Line 9.
Then every process in St reads door before any process in St writes door.
It follows that every process in St writes race before any process in St reads
race. If some process p is not the last process in St to write race, it will not
see its own id, and will not return stop. But only one process can be the last
process in St to write race.3
Proof. Follows from examining a solo execution: the process sets race to id,
reads open from door, then reads id from race. This causes it to return stop
as claimed.
shared data:
1 choosing[i], an atomic bit for each i, initially 0
2 number[i], an unbounded atomic register, initially 0
3 Code for process i:
4 while true do
      // trying
5     choosing[i] ← 1
6     number[i] ← 1 + max_{j≠i} number[j]
7     choosing[i] ← 0
8     for j ≠ i do
9         loop until choosing[j] = 0
10        loop until number[j] = 0 or ⟨number[i], i⟩ < ⟨number[j], j⟩
      // critical
11    (do critical section stuff)
      // exiting
12    number[i] ← 0
      // remainder
13    (do remainder stuff)
Algorithm 17.6: Lamport's Bakery algorithm
Note that several of these lines are actually loops; this is obvious for
Lines 9 and 10, but is also true for Line 6, which includes an implicit loop to
read all n − 1 values of number[j].
Intuition for mutual exclusion is that if you have a lower number than
I do, then I block waiting for you; for lockout-freedom, eventually I have
the smallest number. (There are some additional complications involving
the choosing bits that we are sweeping under the rug here.) For a real proof
see [AW04, §4.4.1] or [Lyn96, §10.7].
Selling point is a strong near-FIFO guarantee and the use of only single-
writer registers (which need not even be atomic—it’s enough that they return
correct values when no write is in progress). Weak point is unbounded
registers.
The final result follows by the fact that when k = n we cover n registers;
this implies that there are n registers to cover.
It’s worth noting that the execution constructed in this proof might be
very, very long. It’s not clear what happens if we consider executions in which,
say, the critical section is only entered a polynomial number of times. If we
are willing to accept a small probability of failure over polynomially-many
entries, there is a randomized mutex that runs in O(log n) space [AHTW18].
1 C[side(i)] ← i
2 T ← i
3 P[i] ← 0
4 rival ← C[¬side(i)]
5 if rival ≠ ⊥ and T = i then
6     if P[rival] = 0 then
7         P[rival] ← 1
8     while P[i] = 0 do spin
9     if T = i then
10        while P[i] ≤ 1 do spin
When I want to enter my critical section, I first set C[side(i)] so you can
find me; this also has the same effect as setting present[side(i)] in Peterson’s
algorithm. I then point T to myself and look for you. I’ll block if I see
C[¬side(i)] ≠ ⊥ and T = i. This can occur in two ways: one is that I really
write T after you did, but the other is that you only wrote C[¬side(i)] but
haven’t written T yet. In the latter case, you will signal to me that T may
have changed by setting P [i] to 1. I have to check T again (because maybe I
really did write T later), and if it is still i, then I know that you are ahead of
me and will succeed in entering your critical section. In this case I can safely
spin on P [i] waiting for it to become 2, which signals that you have left.
There is a proof that this actually works in [YA95], but it’s 27 pages
of very meticulously-demonstrated invariants (in fairness, this includes the
entire algorithm, including the tree parts that we omitted here). For intuition,
this is not much more helpful than having a program mechanically check all
the transitions, since the algorithm for two processes is effectively finite-state
if we ignore the issue with different processes i jumping into the role of
side(i).
A slightly less rigorous but more human-accessible proof would be analo-
gous to the proof of Peterson’s algorithm. We need to show two things: first,
that no two processes ever both enter the critical section, and second, that
no process gets stuck.
For the first part, consider two processes i and j, where side(i) = 0 and
side(j) = 1. We can’t have both i and j skip the loops, because whichever
one writes T last sees itself in T . Suppose that this is process i and that
j skips the loops. Then T = i and P [i] = 0 as long as j is in the critical
section, so i blocks. Alternatively, suppose i writes T last but does so after
j first reads T . Now i and j both enter the loops. But again i sees T = i on
its second test and blocks on the second loop until j sets P [i] to 2, which
doesn’t happen until after j finishes its critical section.
Now let us show that i doesn’t get stuck. Again we’ll assume that i wrote
T second.
If j skips the loops, then j sets P [i] = 2 on its way out as long as T = i;
this falsifies both loop tests. If this happens after i first sets P [i] to 0, only
i can set P [i] back to 0, so i escapes its first loop, and any j 0 that enters
from the 1 side will see P [i] = 2 before attempting to set P [i] to 1, so P [i]
remains at 2 until i comes back around again. If j sets P [i] to 2 before i sets
P [i] to 0 (or doesn’t set it at all because T = j, then C[side(j)] is set to ⊥
before i reads it, so i skips the loops.
If j doesn’t skip the loops, then P [i] and P [j] are both set to 1 after i
and j enter the loopy part. Because j waits for P [j] 6= 0, when it looks at
T the second time it will see T = i ≠ j and will skip the second loop. This
causes it to eventually set P [i] to 2 or set C[side(j)] to ⊥ before i reads it
as in the previous case, so again i eventually reaches its critical section.
Since the only operations inside a loop are on local variables, the algorithm
has O(1) RMR complexity. For the full tree this becomes O(log n).
Chapter 18
The wait-free hierarchy
processes there is some object that has consensus number n but that cannot
be implemented from an arbitrary n-consensus object.2
The wait-free hierarchy was suggested by work by Maurice Herlihy [Her91b]
that classified many common (and some uncommon) shared-memory objects
by consensus number, and showed that an unbounded collection of objects
with consensus number n together with atomic registers gives a wait-free
implementation of any object in an n-process system.
Various subsequent authors noticed that this did not give a robust
hierarchy in the sense that combining two types of objects with consensus
number n could solve wait-free consensus for larger n, and the hierarchy
hrm was proposed by Prasad Jayanti [Jay97] as a way of classifying objects
that might be robust: an object is at level n of the hrm hierarchy if having
unboundedly many objects plus unboundedly many registers solves n-process
wait-free consensus but not (n + 1)-process wait-free consensus.3
Whether or not the resulting hierarchy is in fact robust for arbitrary
deterministic objects is still open, but Ruppert [Rup00] subsequently showed
that it is robust for RMW registers and objects with a read operation that
returns the current state, and there is a paper by Borowsky, Gafni, and
Afek [BGA94] that sketches a proof based on a topological characterization
of computability4 that hrm is robust for deterministic objects that don’t
discriminate between processes (unlike, say, single-writer registers). So for
well-behaved shared-memory objects (deterministic, symmetrically accessible,
with read operations, etc.), consensus number appears to give a real classi-
fication that allows us to say for example that any collection of read-write
registers (consensus number 1), fetch-and-increments (2), test-and-set bits
(2), and queues (2) is not enough to build a compare-and-swap (∞).5
Another useful result from the Borowsky-Gafni-Afek paper is that the
consensus number is not generally dependent on what assumptions we make
about the initial state of the objects. Specifically, [BGA94, Lemma 3.2]
states that as long as there is some sequence of operations that takes an
object from a fixed initial state to a desirable initial state for consensus, then
2. The existence of such objects was eventually demonstrated by Afek, Ellen, and
Gafni [AEG16].
3. The r in hrm stands for the registers, the m for having many objects of the given type.
Jayanti [Jay97] also defines a hierarchy hr1 where you only get finitely many objects. The
h stands for “hierarchy,” or, more specifically, h(T) stands for the level of the hierarchy at
which T appears [Jay11].
4. See Chapter 28.
5. Ruppert's paper is particularly handy because it gives an algorithm for computing
the consensus number of the objects it considers. However, for infinite-state objects, this
requires solving the halting problem (as previously shown by Jayanti and Toueg [JT92]).
we can safely assume that the object is in the desirable state. The core idea
of the proof is that each process can initialize its own copy of the object
and then announce that it is ready; each process will then participate in a
sequence of consensus protocols using the objects that they observe are ready,
with the output of each protocol used as the input to the next. Because the
first object Si to be announced as initialized will be visible to all processes,
they will all do consensus using Si . Any subsequent protocols that may be
used by only a subset of the processes will not change the common agreed
output from the Si protocol.6
When considering multiple objects, the usual assumption is that objects
are combined by putting them next to each other. If we can combine two
objects by constructing a single object with operations of both—which is
essentially what happens when we apply different machine language instruc-
tions to the same memory location—then the object with both operations
may have a higher consensus number than the object with either opera-
tion individually. This was observed by Ellen et al. [EGSZ16]. A simple
example would be a register that supports increment (+1) and doubling
(×2) operations. A register with only one of these operations is equivalent
to a counter and has consensus number 1. But a register with both oper-
ations has consensus number at least 2, since if it is initialized to 2, we
can tell which of the two operations went first by looking at the final value:
3 = 2 + 1, 4 = 2 × 2, 5 = (2 × 2) + 1, 6 = (2 + 1) × 2.
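Here is a sketch of the resulting 2-process consensus protocol in Python,
following the final-value argument above. It assumes the register also
supports an atomic read, and simulates the atomicity with a lock, since
no such hardware object is available; all names are mine.

import threading

class IncDoubleRegister:
    def __init__(self, value=2):
        self.value, self._lk = value, threading.Lock()
    def add1(self):
        with self._lk: self.value += 1
    def double(self):
        with self._lk: self.value *= 2
    def read(self):
        with self._lk: return self.value

reg = IncDoubleRegister(2)                    # initialized to 2 as above
pref = [None, None]                           # announced inputs

def propose(i, my_input):
    pref[i] = my_input                        # announce input before operating
    if i == 0:
        reg.add1()
        first = 0 if reg.read() in (3, 6) else 1   # 5 = 2*2+1: the double went first
    else:
        reg.double()
        first = 1 if reg.read() in (4, 5) else 0   # 6 = (2+1)*2: the increment went first
    return pref[first]                        # decide the first mover's input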
We won’t attempt to do the robustness proofs of Borowsky et al. [BGA94]
or Ruppert [Rup00]. Instead, we’ll concentrate on Herlihy’s original results
and show that specific objects have specific consensus numbers when used in
isolation. The procedure in each case will be to show an upper bound on the
consensus number using a variant of Fischer-Lynch-Paterson (made easier
because we are wait-free and don’t have to worry about fairness) and then
show a matching lower bound (for non-trivial upper bounds) by exhibiting
an n-process consensus protocol for some n. Essentially everything below is
taken from Herlihy’s paper [Her91b], so reading that may make more sense
than reading these notes.
get a contradiction.
Now let’s suppose we are looking at atomic registers, and consider cases:
• x and y are both reads. Then x and y commute: Cxy = Cyx, and we
get a contradiction.
• x and y are both writes. Now py can’t tell the difference between Cxy
and Cy, so we get the same decision value for both, again contradicting
that Cx is 0-valent and Cy is 1-valent.
Fetch-and-cons Returns old cdr and adds new car on to the head of a list.
Use preceding protocol where peek() = tail(car :: cdr).
Sticky bit Has a write operation that has no effect unless register is in the
initial ⊥ state. Whether the write succeeds or fails, it returns nothing.
The consensus protocol is to write my input and then return result of
a read.
operation succeeds only if the location has not been changed since the
preceding load-linked by the same process. Can be used to build a
sticky bit.
For many years, I assumed that this example demonstrated why cryp-
tocurrencies all seem to use embedded consensus protocols of some
sort. However, it turns out that there is a critical assumption needed
for this proof, which is that more than one process can spend from the
same account. Without this assumption, it has been shown by Guer-
raoui et al. [GKM+ 19] that the consensus number of a single-spender
bank account is 1, and more generally that the consensus number of a
k-spender bank account is exactly k.
Algorithm 18.1 requires 2-register writes, and will give us a protocol for 2
processes (since the reader above has to participate somewhere to make the
first case work). For m processes, we can do the same thing with m-register
writes. We have a register r_pq = r_qp for each pair of distinct processes p
and q, plus a register r_pp for each p; this gives a total of m(m − 1)/2 + m = O(m²)
with input v subtract 2m − v.
10. The main issue is that processes can only read the registers one at a time. An
alternative to running Algorithm 18.1 is to use a double-collect snapshot (see §19.1) to
simulate reading all three registers at once. However, this might require as many as twelve
read operations, since a process doing a snapshot has to re-read all three registers if any of
them change.
1 v1 ← r1
2 v2 ← r2
3 if v1 = v2 = ⊥ then
4     return no winner
5 if v1 = 1 and v2 = ⊥ then
      // p1 went first
6     return 1
  // read r1 again
7 v1′ ← r1
8 if v2 = 2 and v1′ = ⊥ then
      // p2 went first
9     return 2
  // both p1 and p2 wrote
10 if r_shared = 1 then
11    return 2
12 else
13    return 1
Algorithm 18.1: Determining the winner of a race between 2-register
writes. The assumption is that p1 and p2 each wrote their own ids
to r_i and r_shared simultaneously. This code can be executed by any
process (including but not limited to p1 or p2) to determine which of
these 2-register writes happened first.
registers. All registers are initialized to ⊥. Process p then writes its initial
preference to some single-writer register pref p and then simultaneously writes
p to rpq for all q (including rpp ). It then attempts to figure out the first
writer by applying the above test for each q to rpq (standing in for rshared ),
rpp (r1 ) and rqq (r2 ). If it won against all the other processes, it decides its
own value. If not, it repeats the test recursively for some p0 that beat it until
it finds a process that beat everybody, and returns its value. So m-register
writes solve m-process wait-free consensus.
A further tweak gets 2m − 2: run two copies of an (m − 1)-process
protocol using separate arrays of registers to decide a winner for each group.
Then add a second phase where processes contend across the groups. This
involves each process p from group 1 writing the winning id for its group
simultaneously into sp and spq for each q in the other group. To figure out
who won in the end, build a graph of all victories, where there is an edge
from p to q if and only if p beat q in Phase 1 or p’s id was written before q’s
id in Phase 2. The winner is the (unique) process with at least one outgoing
edge and no incoming edges, which will be the process that won its own
group (by writing first) and whose value was written first in Phase 2.
One thing to note about the second phase is that, unlike mutex, we can’t
just have the winners of the two groups fight each other, since this would
not give the wait-free property for non-winners. Instead, we have to allow a
non-winner p to pick up the slack for a slow winner and fight on behalf of
the entire group. This requires an m-process write operation to write sp and
all spq at once.
Now suppose we have 2m − 1 processes. The first part says that each of
the pending operations (x, y, all of the zi ) writes to 1 single-writer register
and at least k two-writer registers where k is the number of processes leading
to a different univalent value. This gives k + 1 total registers simultaneously
written by this operation. Now observe that with 2m − 1 process, there is
some set of m processes whose operations all lead to a b-valent state; so
for any process to get to a (¬b)-valent state, it must write m + 1 registers
simultaneously. It follows that with only m simultaneous writes we can only
do (2m − 2)-consensus.
1 procedure apply(π)
      // announce my intended operation
2     op[i] ← π
3     while true do
          // find a recent round
4         r ← max_j round[j]
          // obtain the history as of that round
5         if h_r = ⊥ then
6             h_r ← consensus(c[r], ⊥)
7         if π ∈ h_r then
8             return value π returns in h_r
          // else attempt to advance
9         h′ ← h_r
10        for each j do
11            if op[j] ∉ h′ then
12                append op[j] to h′
the most recent version of h_r. Suppose some call to apply(π1) finishes before
a call to apply(π2) starts. Then π1 is contained in some h_r when apply(π1)
finishes, and since π2 can only enter h by being appended at the end, we get
π1 linearized before π2.
Finally, we need to show termination. The algorithm is written with a
loop, so in principle it could run forever. But we can argue that no process
executes the loop more than twice. The reason is that a process p puts
its operation in op[p] before it calculates r; so any process that writes r′ > r
to round sees p's operation before the next round. It follows that p's value
gets included in the history no later than round r + 2. (We'll see this sort of
thing again when we do atomic snapshots in Chapter 19.)
Building a consistent shared history is easier with some particular objects
that solve consensus. For example, a fetch-and-cons object that supplies
an operation that pushes a new head onto a linked list and returns the old
head trivially implements the common history above without the need for
helping. One way to implement fetch-and-cons is with memory-to-memory
swap; to add a new element to the list, create a cell with its next pointer
pointing to itself, then swap the next field with the head pointer for the entire
list.
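A sketch of this swap-based fetch-and-cons in Python, with the atomic
memory-to-memory swap simulated by a lock and the cell representation
chosen arbitrarily:

import threading

_swap_lock = threading.Lock()

def mem_swap(obj_a, field_a, obj_b, field_b):
    # simulated atomic memory-to-memory swap of two fields
    with _swap_lock:
        obj_a[field_a], obj_b[field_b] = obj_b[field_b], obj_a[field_a]

head = {'next': None}                     # head pointer for the shared list

def fetch_and_cons(value):
    cell = {'value': value, 'next': None}
    cell['next'] = cell                   # new cell initially points to itself
    mem_swap(cell, 'next', head, 'next')  # splice the cell onto the front of the list
    return cell['next']                   # old head of the list: the previous history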
The solutions we’ve described here have a number of deficiencies that
make them impractical in a real system (even more so than many of the
algorithms we’ve described). If we store entire histories in a register, the
register will need to be very, very wide. If we store entire histories as a linked
list, it will take an unbounded amount of time to read the list. For solutions
to these problems, see [AW04, §15.3] or the papers of Herlihy [Her91b] and
Plotkin [Plo89].
Chapter 19
Atomic snapshots
We’ve seen in the previous chapter that there are a lot of things we can’t
make wait-free with just registers. But there are a lot of things we can.
Atomic snapshots are a tool that let us do a lot of these things easily.
An atomic snapshot object acts like a collection of n single-writer
multi-reader atomic registers with a special snapshot operation that returns
(what appears to be) the state of all n registers at the same time. This
is easy without failures: we simply lock the whole register file, read them
all, and unlock them to let all the starving writers in. But it gets harder if
we want a protocol that is wait-free, where any process can finish its own
snapshot or write even if all the others lock up.
We’ll give the usual sketchy description of a couple of snapshot algo-
rithms. More details on early snapshot results can be found in [AW04, §10.3]
or [Lyn96, §13.3]. There is also a reasonably recent survey by Fich on upper
and lower bounds for the problem [Fic05].
terminate if there are a lot of writers around.1 So we need some way to slow
the writers down, or at least get them to do snapshots for us.
in order to prevent case (a) from holding, the adversary has to supply at
least one new value in each collect after the first. But it can only supply one
new value for each of the n − 1 processes that aren’t doing collects before
case (b) is triggered (it’s triggered by the first process that shows up with a
second new value). Adding up all the collects gives 1 + (n − 1) + 1 = n + 1
collects before one of the cases holds. Since each collect takes n − 1 read
operations (assuming the process is smart enough not to read its own register),
a snapshot operation terminates after at most n² − 1 reads.
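For reference, here is the double-collect scan underlying this analysis, as a
Python sketch; only the non-wait-free core is shown, and the helping
mechanism by which updaters embed their own scans is omitted.

def double_collect_scan(segments):
    # segments: a list of n mutable one-element lists standing in for registers
    def collect():
        return [s[0] for s in segments]       # read each register once, in order
    old = collect()
    while True:
        new = collect()
        if new == old:                        # nothing changed between the collects,
            return new                        # so this is a consistent snapshot
        old = new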
19.2.1 Linearizability
We now need to argue that the snapshot vectors returned by the Afek et al.
algorithm really work, that is, that between each matching invoke-snapshot
and respond-snapshot there was some actual time where the registers in the
array contained precisely the values returned in the respond-snapshot action.
pair of handshake bits for every other process. After performing my two
collects, I’ll execute checkHandshake. I will also assume each update (after
performing a snapshot) toggles a mod-2 sequence number bit on the value
stored in its segment of the snapshot array. The hope is that between the
toggle and the handshake, I detect any changes. (See [AW04, Algorithm 30]
for the actual code.)
Does this work? Let’s look at cases:
1. The toggle bit for some process q is unchanged between the two snap-
shots taken by p. Since the bit is toggled with each update, this means
that an even number of updates to q's segment occurred during the
interval between p’s writes. If this even number is 0, we are happy: no
updates means no call to tryHandshake by q, which means we don’t
see any change in q’s segment, which is good, because there wasn’t any.
If this even number is 2 or more, then we observe that each of these
events precedes the following one:
It follows that q both reads and writes the handshake bits in between
p’s calls to tryHandshake and checkHandshake, so p correctly sees
that q has updated its segment.
2. The toggle bit for q has changed. Then q did an odd number of updates
(i.e., at least one), and p correctly detects this fact.
What does p do with this information? Each time it sees that q has done
a scan, it updates a count for q. If the count reaches 3, then p can determine
that q’s last scanned value is from a scan that is contained completely within
the time interval of p’s scan. Either this is a direct scan, where q actually
performs two collects with no changes between them, or it’s an indirect
scan, where q got its value from some other scan completely contained within
q’s scan. In the first case p is immediately happy; in the second, we observe
that this other scan is also contained within the interval of p’s scan, and so
include its own input in its output. Upward validity acts like validity: an
output can’t include anything that didn’t show up in some input.
For the snapshot algorithm, we also demand wait-freedom: each process
terminates after a bounded number of its own steps, even if other processes
fail.
Note that if we are really picky, we can observe that we don’t actually
need meets; a semi-lattice that provides only joins is enough. In practice
we almost always end up with a full-blown lattice, because (a) we are working
with finite sets, and (b) we generally want to include a bottom element ⊥
that is less than all the other elements, to represent the “empty” state of
our data structure. But any finite join-semi-lattice with a bottom element
turns out to be a lattice, since we can define x ∧ y as the join of all elements
z such that z ≤ x and z ≤ y. We don’t use the fact that we are in a lattice
anywhere, but it does save us two syllables not to have to say “semi-lattice
agreement.”
1 procedure scan()
2     for attempt ← 1 to 2 do
3         R_i ← r ← max(R_1 . . . R_n; R_i + 1)
4         collect ← read(S_1 . . . S_n)
5         view ← LA_r(collect)
          // max computation requires a collect
6         if max(R_1 . . . R_n) ≤ R_i then
7             V_i^r ← view
8             return V_i^r
1. All views returned by the scan operation are comparable; that is, there
exists a total order on the set of views (which can be extended to a
total order on scan operations by breaking ties using the execution
order).
3. The total order on views respects the execution order: if π1 and π2 are
scan operations that return v1 and v2 , then π1 <S π2 implies v1 ≤ v2 .
(This gives us linearization.)
Let’s start with comparability. First observe that any view returned
is either a direct view (obtained from LA_r) or an indirect view (obtained
from V_j^r for some other process j). In the latter case, following the chain of
indirect views eventually reaches some direct view. So all views returned for
a given round are ultimately outputs of LA_r and thus satisfy comparability.
But what happens with views from different rounds? The lattice-
agreement objects only operate within each round, so we need to ensure that
any view returned in round r is included in any subsequent rounds. This is
where checking round numbers after calling LAr comes in.
Suppose some process i returns a direct view; that is, it sees no higher
round number in either its first attempt or its second attempt. Then at
the time it starts checking the round number in Line 6, no process has yet
written a round number higher than the round number of i’s view (otherwise
i would have seen it). So no process with a higher round number has yet
executed the corresponding collect operation. When such a process does
so, it obtains values that are at least as large as those fed into LAr , and i’s
round-r view is less than or equal to the vector of these values by upward
validity of LAr , and thus less than or equal to the vector of values returned
by LAr′ for r′ > r, by downward validity of LAr′ . So we have comparability
of all direct views, which implies comparability of all indirect views as well.
To show that each view returned by a scan includes any preceding update,
we observe that either a process returns its first-try scan (which includes
the update by downward validity) or it returns the results of a scan in the
second-try round (which includes the update by downward validity in the
later round, since any collect in the second-try round starts after the update
occurs). So no updates are missed.
Now let’s consider two scan operations π1 and π2 where π1 precedes π2
in the execution. We want to show that, for the views v1 and v2 that these
scans return, v1 ≤ v2 . Pick some time between when π1 finishes and π2
starts, and let s be the contents of the registers at this time. Then v1 ≤ s by
upward validity, since any input fed to a lattice agreement object before π1
finishes was collected from a register whose value was no greater than it is in
s. Similarly, s ≤ v2 by downward validity, because v2 is at least as large as
the collect value read by π2 , and this is at least as large as s. So v1 ≤ s ≤ v2 .
ensure that out-of-date smaller sets don’t overwrite larger ones at any node,
and the cost of using this data structure and carrying out the double-collect
snapshot at a node with m leaves below it is shown to be O(m). So the total
cost of a snapshot is O(n + n/2 + n/4 + . . . 1) = O(n), giving the linear time
bound.
Let’s now look at the details of this protocol. There are two main
components: the Union algorithm used to compute a new value for each
node of the tree, and the ReadSet and WriteSet operations used to store the
data in the node. These are both rather specialized algorithms and depend
on the details of the other, so it is not trivial to describe them in isolation
from each other; but with a little effort we can describe exactly what each
component demands from the other, and show that it gets it.
The Union algorithm does the usual two-collects-without-change trick to
get the values of the children and then stores the result. In slightly more
detail:
1. Call ReadSet on both children, obtaining one set from each.
2. Call ReadSet on both children again.
3. If the values obtained are the same in both collects, call WriteSet on
the current node to store the union of the two sets and proceed to the
parent node. Otherwise repeat the preceding step.
1 procedure WriteSet(S)
2 for i ← |S| down to 1 do
3 a[i] ← S
4 procedure ReadSet()
// update p to last nonempty position
5 while true do
6 s ← a[p]
7 if p = m or a[p + 1] = ∅ then
8 break
9 else
10 p←p+1
11 return s
Algorithm 19.4: Increasing set data structure
Naively, one might think that we could just write directly to a[|S|] and
skip the previous ones, but this makes it harder for a reader to detect that
a[|S|] is occupied. By writing all the previous registers, we make it easy to
tell if there is a set of size |S| or bigger in the sequence, and so a reader can
start at the beginning and scan forward until it reaches an empty register,
secure in the knowledge that no larger value has been written.4 Since we
want to guarantee that no reader ever spends more than O(m) operations
on an array of m registers (even if it does multiple calls to ReadSet), we also
have it remember the last location read in each call to ReadSet and start
there again on its next call. For WriteSet, because we only call it once, we
don’t have to be so clever, and can just have it write all |S| ≤ m registers.
4 This trick of reading in one direction and writing in another dates back to a paper by
Lamport from 1977 [Lam77].
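Here is a small sequential Python sketch of this data structure (my own illustration, not code from the paper): it keeps the reader’s cursor across calls, as just described, so the total work a reader does over all of its ReadSet calls is O(m). In the real construction the array entries are atomic registers accessed concurrently; running the operations one at a time here only illustrates the bookkeeping.

class IncreasingSet:
    # Sequential sketch of the increasing-set structure of Algorithm 19.4:
    # a[1..m] holds ever-larger sets, WriteSet(S) fills positions |S| down to 1,
    # and ReadSet scans forward from a remembered cursor p.
    def __init__(self, m):
        self.m = m
        self.a = [None] * (m + 1)   # index 0 unused; None stands for an empty register
        self.p = 1                  # reader cursor, kept across ReadSet calls

    def write_set(self, s):
        s = frozenset(s)
        for i in range(len(s), 0, -1):   # write backward so readers never see a gap
            self.a[i] = s

    def read_set(self):
        # Advance the cursor while the next register is nonempty, then return a[p].
        while self.p < self.m and self.a[self.p + 1] is not None:
            self.p += 1
        return self.a[self.p] if self.a[self.p] is not None else frozenset()

node = IncreasingSet(m=4)
node.write_set({3})
node.write_set({1, 2, 3})
print(node.read_set())   # frozenset({1, 2, 3})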
We need to show linearizability. We’ll do so by assigning a specific
linearization point to each high-level operation. Linearize each call to ReadSet
at the last time that it reads a[p]. Linearize each call to WriteSet(S) at the
first time at which a[|S|] = S and a[i] ≠ ∅ for every i < |S| (in other words,
at the first time that some reader might be able to find and return S); if
there is no such time, linearize the call at the time at which it returns. Since
every linearization point is inside its call’s interval, this gives a linearization
that is consistent with the actual execution. But we have to argue that it
is also consistent with a sequential execution, which means that we need
to show that every ReadSet operation returns the largest set among those
whose corresponding WriteSet operations are linearized earlier.
Let R be a call to ReadSet and W a call to WriteSet(S). If R returns S,
then at the time that R reads S from a[|S|], we have that (a) every register
a[i] with i < |S| is non-empty (otherwise R would have stopped earlier), and
(b) |S| = m or a[|S| + 1] = ∅ (as otherwise R would have kept going after
later reading a[|S| + 1]). From the rule for when WriteSet calls are linearized,
we see that the linearization point of W precedes this time and that the
linearization point of any call to WriteSet with a larger set follows it. So
the return value of R is consistent.
The payoff: unless we do more updates than snapshots, don’t want to
assume multi-writer registers, are worried about unbounded space, have a
beef with huge registers, or care about constant factors, it costs no more
time to do a snapshot than a collect. So in theory we can get away with
assuming snapshots pretty much wherever we need them.
algorithm for a single scanner (i.e., only one process can do snapshots) in
which each updater maintains two copies of its segment, a high copy (that
may be more recent than the current scan) and a low copy (that is guaranteed
to be no more recent than the current scan). The idea is that when a scan is
in progress, updaters ensure that the values in memory at the start of the
scan are not overwritten before the scan is completed, by copying them to
the low registers, while the high registers allow new values to be written
without waiting for the scan to complete. Unbounded sequence numbers,
generated by the scanner, are used to tell which values are recent or not.
As long as there is only one scanner, nothing needs to be done to ensure
that all scans are consistent, and indeed the single-scanner algorithm can be
implemented using only atomic registers. But extending the algorithm to
multiple scanners is tricky. A simple approach would be to keep a separate
low register for each concurrent scan—however, this would require up to n
low registers and greatly increase the cost of an update. Instead, the authors
devise a mechanism, called a coordinated collect, that allows the scanners
collectively to implement a sequence of virtual scans that do not overlap.
Each virtual scan is implemented using the single-scanner algorithm, with its
output written to a common view array that is protected from inconsistent
updates using LL/SC operations (CAS also works). A scanner participates
in virtual scans until it obtains a virtual scan that is useful to it (this means
that the virtual scan has to take place entirely within the interval of the
process’s actual scan operation); the simplest way to arrange this is to have
each scanner perform two virtual scans and return the value obtained by the
second one.
The paper puts a fair bit of work into ensuring that only O(n) view
arrays are needed, which requires handling some extra special cases where
particularly slow processes don’t manage to grab a view before it is reallocated
for a later virtual scan. We avoid this complication by simply assuming an
unbounded collection of view arrays; see the paper for how to do this right.
A more recent paper by Fatourou and Kallimanis [FK07] gives improved
time and space complexity using the same basic technique.
LL/SC.
A call to scan copies the first of memory[j].high or memory[j].low that
has a sequence number less than the current sequence number. Pseudocode
is given as Algorithm 19.5.
1 procedure scan()
2 currSeq ← currSeq + 1
3 for j ← 0 to n − 1 do
4 h ← memory[j].high
5 if h.seq < currSeq then
6 view[j] ← h.value
7 else
8 view[j] ← memory[j].low.value
Algorithm 19.5: Single-scanner snapshot: scan
1 procedure update()
2 seq ← currSeq
3 h ← memory[i].high
4 if h.seq ≠ seq then
5 memory[i].low ← h
6 memory[i].high ← (value, seq)
Algorithm 19.6: Single-scanner snapshot: update
read it), and won’t get it from memory[i].low either (because the value
that is in memory[i].high will have seq < currSeq, and so S will take
that instead).
19.5 Applications
Here we describe a few things we can do with snapshots.
19.5.2 Counters
Given atomic snapshots, it’s easy to build a counter (supporting increment,
decrement, and read operations); or, in more generality, a generalized counter
(supporting increments by arbitrary amounts); or, in even more generality,
an object supporting any collection of commutative and associative update
operations (as long as these operations don’t return anything). The idea
is that each process stores in its segment the total of all operations it has
performed so far, and a read operation is implemented using a snapshot
followed by summing the results. This is a case where it is reasonable
to consider multi-writer registers in building the snapshot implementation,
because there is not necessarily any circularity in doing so.
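As a concrete (and purely sequential) Python sketch of this idea, the following counter keeps one segment per process and reads by summing a copy of all segments; the copy stands in for a real atomic snapshot, which is the part that actually requires the machinery of this chapter.

class SnapshotCounter:
    # Sketch of a counter from an atomic snapshot: process i keeps the net sum
    # of its own increments and decrements in segment i, and a read returns the
    # sum over a snapshot of all segments.
    def __init__(self, n):
        self.segments = [0] * n

    def update(self, i, delta):
        # Process i applies a commutative, associative update to its own segment.
        self.segments[i] += delta

    def read(self):
        view = list(self.segments)      # stand-in for an atomic snapshot()
        return sum(view)

c = SnapshotCounter(3)
c.update(0, +5)
c.update(1, -2)
print(c.read())   # 3

The same pattern works for any collection of commutative and associative updates that return nothing: only the per-segment state and the combining operation change.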
Chapter 20
Lower bounds on perturbable objects
Being able to do snapshots in linear time means that we can build linearizable
counters, generalized counters, max registers, and so on, in linear time, by
having each reader take a snapshot and combine the contributions of each
updater using the appropriate commutative and associative operation. A
natural question is whether we can do better by exploiting the particular
features of these objects.
Unfortunately, the Jayanti-Tan-Toueg [JTT00] lower bound for per-
turbable objects says each of these objects requires n − 1 space and n − 1
steps for a read operation in the worst case, for any solo-terminating deter-
ministic implementation from historyless objects. Like Burns-Lynch, this
is a worst-case bound based on a covering argument, so it may be possible
to evade it in some cases using either randomization or a restriction on the
length of an execution (see Chapter 21).
Perturbable means that the object has a particular property that makes
the proof work, essentially that the outcome of certain special executions
can be changed by stuffing lots of extra update operations in the middle (see
below for details).
Solo-terminating means that a process finishes its current operation
in a finite number of steps if no other process takes steps in between; it is a
much weaker condition, for example, than wait-freedom.
Historyless objects are those for which any operation either never
changes the state (like a read, but it could be weaker) or always sets the
state to a value that depends only on the operation and not the previous
value (like a write, but it may also return some information about the old
state). The point of historyless objects is that covering arguments work for
them: if there is a process with a pending update operation on some object,
the adversary can use it at any time to wipe out the state of the object and
hide any previous operations from any process except the updater (who, in
a typical covering argument, is quietly killed to keep it from telling anybody
what it saw).
Atomic registers are a common example of a historyless object: the read
never changes the state, and the write always replaces it. Swap objects
(with a swap operation that writes a new state while returning the old state)
are the canonical example, since they can implement any other historyless
object (and even have consensus number 2, showing that even extra consensus
power doesn’t necessarily help here). Test-and-sets (which are basically one-
bit swap objects where you can only swap in 1) are also historyless. In
contrast, anything that looks like a counter or similar object where the new
state is a combination of the old state and the operation is not historyless.
This is important because many of these objects turn out to be perturbable,
and if they were also historyless, we’d get a contradiction.
Below is a sketch of the proof. See the original paper [JTT00] for more
details.
The basic idea is to build a sequence of executions of the form Λk Σk π,
where Λk is a preamble consisting of various complete update operations and
k incomplete update operations by processes p1 through pn−1 , Σk delivers
k delayed writes from the incomplete operations in Λk , and π is a read
operation by pn . To make our life easier, we’ll assume that π performs only
read steps.1
We’ll expand Λk Σk to Λk+1 Σk+1 by inserting new operations in between
Λk and Σk , and argue that because those operations can change the value
returned by π, one of them must write an object not covered in Σk , which
will (after some more work) allow us to cover yet another object.
In order for these covered objects to keep accumulating, the reader has
to keep looking at them. To a first approximation, this means that we want
the first k reads done by π to be from objects written in Σk : since the
values seen by the reader for these objects never change, the (deterministic)
reader will continue to read them even as we add more operations before Σk .
Unfortunately, this does not quite match all possible cases, because it may
be that π performs useless reads of objects that aren’t covered in Σk but
that aren’t written to by anybody anyway. So we have the more technical
condition that π has an initial prefix that only includes covered reads and
useless reads: formally, there is a prefix π′ of π that includes at least one read
operation of every object covered by Σk , such that any other read operation
in π′ reads an object whose state cannot be changed by any step that can be
performed by any sequence of operations by processes p1 through pn−1 that
can be inserted between Λk and Σk π.
1 The idea is that if π does anything else, then the return values of other steps can
be simulated by doing a read in place of the first step and using the property of being
historyless to compute the return values of subsequent steps. There is still a possible
objection that we might have some historyless objects that don’t even provide read steps.
The easiest way to work around this is to assume that our objects do in fact provide a read
step, because taking the read step away isn’t going to make implementing the candidate
perturbable object any easier.
The induction hypothesis is that an execution Λk Σk with these properties
exists for each k ≤ n − 1.
For the base case, Λ0 Σ0 = ⟨⟩. This covers 0 reads by π.
For the induction step, we start with Λk Σk , and look for a sequence
of operations γ that we can insert in between Λk and Σk that changes
what π returns in Λk γΣk π from what it returned in Λk Σk π. This is where
perturbability comes in: an object is defined to be perturbable if such a
sequence γ always exists.
Some examples of γ:
• For a snapshot object, let γ write to a component that is not written
to by any of the operations in Σk .
• For a max register, let γ include a bigger write than all the others.
• For a counter, let γ include at least n increments. The same works for
a mod-m counter if m is at least 2n. (Why n increments? With fewer
increments, we can make π return the same value by being sneaky
about when the partial increments represented in Σk are linearized.)
In contrast, historyless objects (including atomic registers) are not per-
turbable: if Σk includes a write that sets the value of the object, no set of
operations inserted before it will change this value. This is good, because we
know that it only takes one atomic register to implement an atomic register.
Assuming that our object is perturbable, now we want to use the existence
of γ to generate our bigger execution Λk+1 Σk+1 . As in the Burns-Lynch
mutex bound [BL93], we will be arguing that γ must include a write to an
object that is not covered by the k delayed writes. Also as in Burns-Lynch,
it turns out that it is not enough just to delay this particular write, because
it might not cover the specific object we want.
Instead, we look for an alternative γ′ that changes the value of the earliest
object read by π that can be changed. We know that some such γ′ exists,
because γ writes to some such object, so there must be a first place in the
execution of π where the output of an object can change, and there must be
some γ′ that makes that change. Note however that the γ′ that hits that earliest
object need not be the same as the γ used to demonstrate perturbability,
and indeed it may be that γ′ is very different from γ; in particular, it may
be much longer.
So now we expand γ′ = αβδ, where β is the magic write to the uncovered
object, and let Λk+1 = Λk αδ′ and Σk+1 = βΣk , where δ′ consists of running
all incomplete operations in α except the one that includes β to completion.
We’ve now covered k + 1 distinct objects in Σk+1 . It remains only to show
the technical condition: that any uncovered object that π reads before reading
all the covered objects can’t have its value changed by inserting additional
operations.
Suppose that there is a sequence of operations κ such that Λk+1 κ changes
one of these forbidden uncovered objects. But Λk+1 κ = Λk ακ, and so
γ′′ = ακ changes an object that either (a) can’t be changed because of the
technical condition in the induction hypothesis for k, or (b) changes an object
that π reads before the object covered by β. In the second case, this γ′′
changes an earlier object than γ′ does, contradicting the choice of γ′ .
It follows that we do in fact manage to cover k + 1 objects while satisfying
the technical condition, and the induction hypothesis holds for k + 1.
We can repeat this step until we’ve covered n − 1 objects. This implies
that there are at least n − 1 objects (the space lower bound), and in the
worst case some reader reads all of them (the step complexity lower bound).
Chapter 21
Restricted-use objects
The Jayanti-Tan-Toueg bound puts a hard floor under the worst-case com-
plexity of almost anything interesting we’d like to do wait-free in a system
that provides only historyless objects as primitives. As with the consensus hi-
erarchy lower bounds, we could interpret this as a reason to demand stronger
primitives. Or we could look for ways to bypass the JTT bound.
One approach is to modify our target objects so that they are no longer
perturbable. This can be done by limiting their use: a counter or max register
that can only change its value a limited number of times is not perturbable,
because once we hit the limit, there is no perturbing sequence of operations
that we can insert between Λk and Σk in the JTT execution that changes
the value returned by the eventual reader. This observation motivated a
line of work on restricted-use max registers [AACH12] and restricted-use
snapshots [AACHE15] that have polylogarithmic worst-case individual step
complexity assuming a polynomial limit on updates. While restricted-use
objects might not be all that exciting on their own, they in turn have served
as building blocks for implementations of snapshots with polylogarithmic
expected individual step complexity [ACH13] or polylogarithmic amortized
individual step complexity [BHMT20].
In this chapter, we will concentrate on the original restricted-use max
register construction of Aspnes, Attiya, and Censor-Hillel [AACH12], and
its extension to give restricted-use snapshots by Aspnes et al. [AACHE15].
1 procedure read(r)
2 if switch = 0 then
3 return 0 : read(left)
4 else
5 return 1 : read(right)
The intuition is that the max register is really a big tree of switch
variables, and we store a particular bit-vector in the max register by setting
to 1 the switches needed to make read follow the path corresponding to
that bit-vector. The procedure for writing 0x tests switch first, because once
switch gets set to 1, any 0x values are smaller than the largest value, and we
don’t want them getting written to left where they might confuse particularly
slow readers into returning a value we can’t linearize. The procedure for
writing 1x sets switch second, because (a) it doesn’t need to test switch, since
1x always beats 0x, and (b) it’s not safe to send a reader down into right
until some value has actually been written there.
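Below is a small sequential Python sketch of this tree of switches, with the write procedure filled in from the description above (the code is my reconstruction for illustration, not the pseudocode from [AACH12]); in the real algorithm switch is a shared atomic bit and left and right are recursively implemented max registers accessed concurrently.

class MaxRegister:
    # Sequential sketch of the tree-of-switches bounded max register for values
    # in [0, 2**bits). A value starting with 0 recurses left only while switch is
    # still 0; a value starting with 1 writes right first and only then sets switch.
    def __init__(self, bits):
        self.bits = bits
        if bits > 0:
            self.switch = 0
            self.left = MaxRegister(bits - 1)
            self.right = MaxRegister(bits - 1)

    def write(self, v):
        if self.bits == 0:
            return                      # trivial 1-valued register
        half = 1 << (self.bits - 1)
        if v < half:                    # value 0x
            if self.switch == 0:
                self.left.write(v)
        else:                           # value 1x
            self.right.write(v - half)
            self.switch = 1             # set switch only after right is written

    def read(self):
        if self.bits == 0:
            return 0
        half = 1 << (self.bits - 1)
        if self.switch == 0:
            return self.left.read()
        return half + self.right.read()

r = MaxRegister(bits=3)
for v in [3, 6, 2]:
    r.write(v)
print(r.read())   # 6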
It’s easy to see that read and write operations both require exactly
one operation per bit of the value read or written. To show that we get
linearizability, we give an explicit linearization ordering (see the paper for a
full proof that this works):
1. Operations that only ever see switch = 0 (and so only touch left) go in
a first pile, ordered before everything in the second pile.
(a) Within this pile, we sort operations using the linearization ordering
for left.
2. Operations that see or set switch = 1 go in a second pile.
(a) Within this pile, operations that touch right are ordered using
the linearization ordering for right. Operations that don’t (which
are the “do nothing” writes for 0x values) are placed consistently
with the actual execution order.
To show that this gives a valid linearization, we have to argue first that
any read operation returns the largest earlier write argument and that we
don’t put any non-concurrent operations out of order.
For the first part, any read in the 0 pile returns 0 : read(left), and
read(left) returns (assuming left is a linearizable max register) the largest
value previously written to left, which will be the largest value linearized
before the read, or the all-0 vector if there is no such value. In either case
we are happy. Any read in the 1 pile returns 1 : read(right). Here we have
to guard against the possibility of getting an all-0 vector from read(right)
if no write operations linearize before the read. But any write operation
that writes 1x doesn’t set switch to 1 until after it writes to right, so no read
operation ever starts read(right) until after at least one write to right has
completed, implying that that write to right linearizes before the read from
right. So in all cases the second-pile operations linearize correctly as well.
not hard to show that this is linearizable.) This gives an unbounded max
register with read and write cost O(n). So by choosing this in preference
to the balanced tree when m is large, the cost of either operation on a max
register is min(⌈lg m⌉, O(n)).
We can combine this with the unbalanced tree by terminating the right
path with a snapshot-based max register. This gives a cost for reads and
writes of values v of O(min(log v, n)).
We’ve shown the recurrence T (m, n) ≥ min_t (max(T (t, n), T (m − t, n))) + 1,
with base cases T (1, n) = 0 and T (m, 1) = 0. The solution to this recurrence
is exactly min(⌈lg m⌉, n − 1), which is the same, except for a constant factor
on n, as the upper bound we got by choosing between a balanced tree for
small m and a snapshot for m ≥ 2^(n−1) . For small m, the recursive split we
get is also the same as in the tree-based algorithm: call the r register switch
and you can extract a tree from whatever algorithm somebody gives you. So
this says that the tree-based algorithm is (up to choice of the tree) essentially
the unique optimal bounded max register implementation for m ≤ 2^(n−1) .
It is also possible to show lower bounds on randomized implementations
of max registers and other restricted-use objects. See [AACH12, ACAH16,
HK14] for examples.
cached copies at every internal node. These cached copies are updated by
readers at times carefully chosen to ensure linearizability.
The base of the construction is an ℓ-valued max register r, used directly
as a 1 × ℓ max array; this is the case where the head component is trivial and
we only need to store a.tail = r. Here calling write(a, 0, v) does nothing,
while write(a, 1, v) maps to write(r, v), and read(a) returns ⟨0, read(r)⟩.
For larger values of k, paste a kleft × ℓ max array left and a kright × ℓ max
array right together to get a (kleft + kright ) × ℓ max array. This construction
uses a switch variable as in the basic construction, along with an ℓ-valued
max register tail that is used to store the value of a[1].
Calls to write(a, 0, v) and read(a) follow the structure of the correspond-
ing operations for a simple max register, with some extra work in read to
make sure that the value in tail propagates into left and right as needed to
ensure the correct value is returned.
A call to write(a, 1, v) operation writes tail directly, and then calls
read(a) to propagate the new value as well.1
Pseudocode is given in Algorithm 21.3.
The individual step complexity of each operation is easily computed.
Assuming a balanced tree, write(a, 0, v) takes exactly ⌈lg k⌉ steps, while
write(a, 1, v) costs exactly ⌈lg ℓ⌉ steps plus the cost of read(a). Read
operations are more complicated. In the worst case, we have two reads of
a.tail and a write to a.right[1] at each level, plus up to two operations on
a.switch, for a total cost of at most (3⌈lg k⌉ − 1)(⌈lg ℓ⌉ + 2) = O(log k log ℓ)
steps. This dominates other costs in write(a, 1, v), so the asymptotic cost
of both write and read operations is O(log k log ℓ).
In the special case where k = ℓ, both writes and reads have their step
complexities squared compared to a single-component k-valued max register.
21.6.1 Linearizability
In broad outline, the proof of linearizability follows the proof for a simple
max register. But as with snapshots, we have to show that the ordering of
the head and tail components are consistent.
The key observation is the following lemma.
1 procedure write(a, i, v)
2 if i = 0 then
3 if v < kleft then
4 if a.switch = 0 then
5 write(a.left, 0, v)
6 else
7 write(a.right, 0, v − kleft )
8 a.switch ← 1
9 else
10 write(a.tail, v)
11 read(a)
12 procedure read(a)
13 x ← read(a.tail)
14 if a.switch = 0 then
15 write(a.left, 1, x)
16 return read(a.left)
17 else
18 x ← read(a.tail)
19 write(a.right, 1, x)
20 return ⟨kleft , 0⟩ + read(a.right)
Lemma 21.6.1. Suppose that some read(a) operation πleft returns vleft
from a.left and some read(a) operation πright returns vright from a.right.
Then vleft [1] ≤ vright [1].
Proof. Both vleft [1] and vright [1] are values that were previously written to
their respective max arrays by read(a) operations (such writes necessarily
exist because any process that reads a.left or a.right writes a.left[1] or a.right[1]
first). From examining the code, we have that any value written to a.left[1]
was read from a.tail before a.switch was set to 1, while any value written to
a.right[1] was read from a.tail after a.switch was set to 1. Since max-register
reads are non-decreasing, we have that any value written to a.left[1] is less
than or equal to any value written to a.right[1], proving the claim.
Theorem 21.6.2. If a.left and a.right are linearizable max arrays, and a.tail
is a linearizable max register, then Algorithm 21.3 implements a linearizable
max array.
Figure 21.1: Snapshot from max arrays; see also [AACHE15, Fig. 2]
Chapter 22
Common2
1 procedure TAS2()
2 if Consensus2(myId) = myId then
3 return 0
4 else
5 return 1
Once we have test-and-set for two processes, we can easily get one-shot
swap for two processes. The trick is that a one-shot swap object always
returns ⊥ to the first process to access it and returns the other process’s value
to the second process. We can distinguish these two roles using test-and-set
and add a register to send the value across. Pseudocode is in Algorithm 22.2.
1 procedure swap(v)
2 a[myId] = v
3 if TAS2() = 0 then
4 return ⊥
5 else
6 return a[¬myId]
objects may only work for two specific processes). A process drops out if
it ever sees a 1. We can easily show that at most one process leaves each
subtree with all zeros, including the whole tree itself.
Unfortunately, this process does not give a linearizable test-and-set object.
It is possible that p1 loses early to p2 , but then p3 starts (elsewhere in the
tree) after p1 finishes, and races to the top, beating out p2 . To avoid this,
we can follow [AWW93] and add a gate bit that locks out latecomers.1
The resulting construction looks something like Algorithm 22.3. This
gives a slightly different interface from straight TAS; instead of returning 0
for winning and 1 for losing, the algorithm returns ⊥ if you win and the id
of some process that beats you if you lose.2 It’s not hard to see that this
gives a linearizable test-and-set after translating the values back to 0 and 1
(the trick for linearizability is that any process that wins saw an empty gate,
and so started before any other process finished). It also sorts the processes
into a rooted tree, with each process linearizing after its parent (this latter
claim is a little trickier, but basically comes down to a loser linearizing after
the process that defeated it either on gate or on one of the TAS2 objects).
This algorithm is kind of expensive: the losers that drop out early are
relatively lucky, but the winning process has to win a TAS2 against everybody,
for a total of Θ(n) TAS operations. We can reduce the cost to O(log n) if
our TAS2 objects allow arbitrary processes to execute them. This is done,
for example, in the RatRace test-and-set implementation of Alistarh et
al. [AAG+ 10], using a randomized implementation of TAS2 due to Tromp
and Vitányi [TV02] (see §24.5.2).
1 The original version of this trick is from an earlier paper [AGTV92], where the gate
bit is implemented as an array of single-writer registers.
2 Note that this process may also be a loser, just one that made it further up the tree
than you did. We can’t expect to learn the id of the ultimate winner, because that would
solve n-process consensus.
1 procedure compete(i)
// check the gate
2 if gate ≠ ⊥ then
3 return gate
4 gate ← i
// Do tournament, returning id of whoever I lose to
5 node ← leaf for i
6 while node ≠ root do
7 for each j whose leaf is below sibling of node do
8 if TAS2(t[i, j]) = 1 then
9 return j
10 node ← node.parent
// I win!
11 return ⊥
Algorithm 22.3: Tournament algorithm with gate
1 procedure swap(v)
2 i←0
3 while true do
// Look for a starting point
4 while TAS(si ) = 1 do
5 i←i+1
6 vi ← v
// Check if we’ve been blocked
7 if TAS(ti ) = 0 then
// We win, find our predecessor
8 for j ← i − 1 down to 0 do
9 if TAS(tj ) = 1 then
// Use this value
10 return vj
and a max register accessed that keeps track of the largest position accessed
so far.
AMW implement accessed using a snapshot, which we will do as well to
avoid complications from trying to build a max register out of an infinitely
deep tree.4 Note that AMW don’t call this data structure a max register,
but we will, because we like max registers.
Code for the swap procedure is given in Algorithm 22.5.
To show Algorithm 22.5 works, we need the following technical lemma,
which, among other things, implies that node 1 − 2^(−depth) is always available
to be captured by the process at depth depth. This is essentially just a
restatement of Lemma 1 from [AMW11].
Lemma 22.4.1. For any x = k/2q , where k is odd, no process attempts to
capture any y ∈ [x, x + 1/2q ) before some process writes x to accessed.
Proof. Suppose that the lemma fails, let y = `/2r be the first node captured
in violation of the lemma, and let x = k/2q be such that y ∈ [x, x + 1/2q )
but x has not been written to accessed when y is captured. Let p be the
process that captures y.
Now consider y 0 = x − 1/2r , the last node to the left of x at the same
depth as y. Why didn’t p capture y 0 ?
One possibility is that some other process p0 blocked y 0 during its return
phase. This p0 must have captured a node z > y 0 . If z > y, then p0 would
have blocked y first, preventing p from capturing it. So y 0 < z < y.
The other possibility is that p never tried to capture y 0 , because some
other process p0 wrote some value z > y 0 to accessed first. This value z must
also be less than y (or else p would not have tried to capture y).
In both cases, there is a process p0 that captures a value z with y 0 < z < y,
before p captures y and thus before anybody writes x to accessed.
Since y 0 < x and y 0 < z, either y 0 < z < x or y 0 < x < z. In the first case,
z ∈ [y 0 , y 0 + 1/2r ) is captured before y 0 is written to accessed. In the second
case z ∈ [x, x + 1/2q ) is captured before x is written to accessed. Either
way, y is not the first capture to violate the lemma, contradicting our initial
assumption.
Using Lemma 22.4.1, it is straightforward to show that Algorithm 22.5 is
wait-free. If I get q for my value of depth, then no process will attempt to
4 The issue is not so much that we can’t store arbitrary dyadics, since we can encode them
using an order-preserving prefix-free code, but that, without some sort of helping mechanism,
a read running concurrently with endlessly increasing writes (e.g. 1/2, 3/4, 7/8, . . . ) might
not be wait-free. Plus as soon as the denominator exceeds 2^n , which happens after only n
calls to swap, O(n)-step snapshots are cheaper anyway.
1 procedure swap(v)
// Pick a new row just for me
2 depth ← fetchAndIncrement(maxDepth)
// Capture phase
3 repeat
// Pick leftmost node in my row greater than accessed
4 cap ← min{x : x = k/2^depth for odd k, x > accessed}
// Post my value
5 reg[cap] ← v
// Try to capture the test-and-set
6 win ← TAS(tst[cap]) = 0
7 writeMax(accessed, cap)
8 until win
// Return phase
// Max depth reached by anybody left of cap
9 maxPreviousDepth ← read(maxDepth)
10 ret ← cap
// Block previous nodes until we find one we can take
11 repeat
12 ret ← max{x = k/2^q | q ≤ maxPreviousDepth, k odd, x < ret}
13 if ret < 0 then
14 return ⊥
15 until TAS(tst[ret]) = 1
16 return reg[ret]
Algorithm 22.5: Wait-free swap from test-and-set [AMW11]
of Jayanti [Jay98].
The lower bound applies a fortiori to the case where we don’t have
LL/SC or CAS and have to rely on 2-process consensus objects. But it’s not
out of the question that there is a matching upper bound in this case.
so based on knowledge of the state of the protocol and its past evolution.
How much knowledge we give the adversary affects its power. Several classes
of adversaries have been considered in the literature; ranging from strongest
to weakest, we have:
1. An adaptive adversary. This adversary is a function from the state
of the system to the set of processes; it can see everything that has
happened so far (including coin-flips internal to processes that have not
yet been revealed to anybody else), but can’t predict the future. It’s
known that an adaptive adversary can force any randomized consensus
protocol to take Θ(n^2) total steps [AC08]. The adaptive adversary
is also called a strong adversary following a foundational paper of
Abrahamson [Abr88].
2. An intermediate adversary or weak adversary [Abr88] is one
that limits the adversary’s ability to observe or control the system
in some way, without completely eliminating it. For example, a
content-oblivious adversary [Cha96] or value-oblivious adver-
sary [Aum97] is restricted from seeing the values contained in registers
or pending write operations and from observing the internal states
of processes directly. A location-oblivious adversary [Asp12b] can
distinguish between values and the types of pending operations, but
can’t discriminate between pending operations based on which register
they are operating on. These classes of adversaries are modeled by
imposing an equivalence relation on partial executions and insisting
that the adversary make the same choice of processes to go next in
equivalent situations. Typically they arise because somebody invented
a consensus protocol for the oblivious adversary (described below) and
then looked for the next most powerful adversary that still let the
protocol work.
Weak adversaries often allow much faster consensus protocols than
adaptive adversaries. Each of the above adversaries permits consensus
to be achieved in O(log n) expected individual work using an appropri-
ate algorithm. But from a mathematical standpoint, weak adversaries
are a bit messy, and once you start combining algorithms designed for
different weak adversaries, it’s natural to move all the way down to the
weakest reasonable adversary, the oblivious adversary.
3. An oblivious adversary has no ability to observe the system at all;
instead, it fixes a sequence of process ids in advance, and at each step
the next process in the sequence runs.
23.2 History
The use of randomization to solve consensus in an asynchronous system
with crash failures was proposed by Ben-Or [BO83] for a message-passing
model. Chor, Israeli, and Li [CIL94] gave the first wait-free consensus
protocol for a shared-memory system, which assumed a particular kind of
weak adversary. Abrahamson [Abr88] defined strong and weak adversaries
and gave the first wait-free consensus protocol for a strong adversary; its
expected step complexity was Θ(2^(n^2)). After failing to show that exponential
time was necessary, Aspnes and Herlihy [AH90a] showed how to do consensus
in O(n^4) total step complexity, a value that was soon reduced to O(n^2 log n)
by Bracha and Rachman [BR91]. This remained the best known bound for
the strong-adversary model until Attiya and Censor [AC08] showed matching
Θ(n^2) upper and lower bounds on total step complexity. A later paper by
Aspnes and Censor [AC09] showed that it was also possible to get an O(n)
bound on individual step complexity.
For weak adversaries, the best known upper bound on individual step
complexity was O(log n) for a long time [Cha96, Aum97, Asp12b], with
an O(n) bound on total step complexity for some models [Asp12b]. More
recent work has lowered the bound to O(log log n), under the assumption of
an oblivious adversary [Asp12a]. No non-trivial lower bound on expected
individual step complexity is known, although there is a known lower bound
on the distribution of the individual step complexity [ACH10].
In the following sections, we will concentrate on the more recent weak-
adversary algorithms. These have the advantage of being fast enough that
one might reasonably consider using them in practice, assuming that the
weak-adversary assumption does not create trouble, and they also require
less probabilistic machinery to analyze than the strong-adversary algorithms.
1 preference ← input
2 for r ← 1 . . . ∞ do
3 (b, preference) ← AdoptCommit(AC[r], preference)
4 if b = commit then
5 return preference
6 else
7 do something to generate a new preference
The idea is that the adopt-commit takes care of ensuring that once
somebody returns a value (after receiving commit), everybody else who
doesn’t return adopts the same value (follows from coherence). Conversely,
if everybody already has the same value, everybody returns it (follows from
convergence). The only missing piece is the part where we try to shake all
the processes into agreement. For this we need a separate object called a
conciliator.
23.3.2 Conciliators
Conciliators are a weakened version of randomized consensus that replace
agreement with probabilistic agreement: the processes can disagree some-
times, but must agree with constant probability despite interference by the
adversary. An algorithm that satisfies termination, validity, and probabilistic
agreement is called a conciliator.1
The important feature of conciliators is that if we plug a conciliator that
guarantees agreement with probability at least δ into Algorithm 23.1, then
on average we only have to execute the loop 1/δ times before every process
agrees. This gives an expected cost equal to 1/δ times the total cost of
AdoptCommit and the conciliator. Typically we will aim for constant δ.
1 Warning: This name has not really caught on in the general theory-of-distributed-
computing community, and so far only appears in papers that have a particular researcher
as a co-author [Asp12a, AE11, Asp12b, AACV17]. Unfortunately, there doesn’t seem to
be a better name for the same object that has caught on. So we are stuck with it for now.
shared coin into a conciliator requires extra machinery to bypass the coin if
the processes that have accessed the conciliator so far are all in agreement;
see Algorithm 23.3. The intuition is that having some processes (who all
agree with each other) skip the shared coin is not a problem, because with
probability δ the remaining processes will agree with them as well.
shared data:
binary registers r0 and r1 , initially 0;
weak shared coin sharedCoin
1 procedure coinConciliator(v)
2 rv ← 1
3 if r¬v = 1 then
4 return sharedCoin()
5 else
6 return v
This still leaves the problem of how to build a shared coin. In the
message-passing literature, the usual approach is to use cryptography,2 but
because we are assuming an arbitrarily powerful adversary, we can’t use
cryptography.
If we don’t care how small δ gets, we could just have each process flip its
own local coin and hope that they all come up the same. (This is more or
less what was done by Abrahamson [Abr88].) But that might take a while. If
we aren’t willing to wait exponentially long, a better approach is to combine
many individual local coins using some sort of voting.
A version of this approach, based on a random walk, was used by Aspnes
and Herlihy [AH90a] to get consensus in (bad) polynomial expected time
against an adaptive adversary. A better version was developed by Bracha
and Rachman [BR91]. In their version, each process repeatedly generates a
random ±1 vote and adds it to a common pool (which just means writing the
sum and count of all its votes so far out to a single-writer register). Every
Θ(n/ log n) votes, the process does a collect (giving an overhead of Θ(log n)
operations per vote) and checks to see if the total number of votes is greater
than a Θ(n^2) threshold. If it is, the process returns the sign of the total vote.
Bracha and Rachman showed that despite processes seeing different
combinations of votes (due to the collects running at possibly very different
speeds), the difference between what each process sees and the actual sum of
all votes ever generated is at most O(n) with high probability. This means
that if the total vote is more than cn from 0 for some c, which occurs with
constant probability, then every process is likely to return the same value.
This gives a weak shared coin with constant bias, and thus also a consensus
protocol, that runs in O(n^2 log n) expected total steps.
2 For example, Canetti and Rabin [CR93] solved Byzantine agreement in O(1) time
building a shared coin on top of secret sharing.
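As a quick sanity check on the claim that the total vote lands more than cn away from 0 with constant probability, here is a Monte Carlo estimate in Python (the choices n = 32, 4n^2 votes, and c = 1 are arbitrary; nothing here models the concurrency, only the sum of the votes):

import random

def margin_exceeds(n, c, votes, trials=2000):
    # Estimate Pr[ |sum of `votes` random ±1 votes| > c*n ].
    hits = 0
    for _ in range(trials):
        total = sum(random.choice((-1, 1)) for _ in range(votes))
        if abs(total) > c * n:
            hits += 1
    return hits / trials

n = 32
print(margin_exceeds(n, c=1, votes=4 * n * n))   # about 0.6, roughly independent of n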
This remained the best known protocol for many years, leaving an
annoying gap between the upper bound and the best known lower bound
of Ω(n2 / log2 n) [Asp98]. Eventually, Attiya and Censor [AC08] produced
an entirely new argument to bring the lower bound up to Ω(n2 ) and at the
same time gave a simple tweak to the Bracha-Rachman protocol to bring
the upper bound down to O(n2 ), completely settling (up to constant factors)
the asymptotic expected total step complexity of strong-adversary consensus.
But the question of how quickly one could solve weak-adversary adversary
consensus remained (and still remains) open.
individual work to O(log log n), although the total work for that algorithm
may be O(n log log n).
The basic idea is that processes alternate between reading a register r
and (maybe) writing to the register; if a process reads a non-null value from
the register, it returns it. Any other process that reads the same non-null
value will agree with the first process; the only way that this can’t happen is
if some process writes a different value to the register before it notices the
first write.
The random choice of whether to write the register or not avoids this
problem. The idea is that even though the adversary can schedule a write at
a particular time, because it’s oblivious, it won’t be able to tell if the process
wrote (or was about to write) or did a no-op instead.
The basic version of this algorithm, due to Chor, Israeli, and Li [CIL94],
uses a fixed 1/(2n) probability of writing to the register. So once some process
writes to the register, the chance that any of the remaining n − 1 processes
write to it before noticing that it’s non-null is at most (n − 1)/(2n) < 1/2. It’s also
not hard to see that this algorithm uses O(n) total operations, although it
may be that one single process running by itself has to go through the loop
2n times before it finally writes the register and escapes.
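The following Python toy simulation of this style of conciliator may help make the structure concrete. It fixes a random schedule up front (the oblivious adversary), lets each process alternate a read step with a possible write step taken with probability 1/(2n), and is only a sketch: in particular, the rule that a writer immediately decides on its own input is my simplification, not necessarily the exact Chor-Israeli-Li protocol.

import random

def cil_style_conciliator(inputs, steps=100000):
    n = len(inputs)
    schedule = [random.randrange(n) for _ in range(steps)]   # fixed, oblivious schedule
    r = None                      # the shared register
    armed = [False] * n           # True once a process has read ⊥ and may write next
    decision = [None] * n
    for pid in schedule:
        if decision[pid] is not None:
            continue
        if not armed[pid]:                         # read step
            if r is not None:
                decision[pid] = r
            else:
                armed[pid] = True
        else:                                      # possible write step
            if random.random() < 1 / (2 * n):
                r = inputs[pid]
                decision[pid] = inputs[pid]
            armed[pid] = False                     # otherwise go back to reading
    return decision

d = cil_style_conciliator([0, 1, 0, 1])
print(d, len(set(v for v in d if v is not None)) == 1)

In this toy model, disagreement requires some process that has already read ⊥ to fire its pending write after the first successful write, which happens with probability less than (n − 1)/(2n) < 1/2, mirroring the calculation above.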
Using increasing probabilities avoids this problem, because any process
that executes the main loop ⌈lg n⌉ + 1 times will write the register. This
establishes the O(log n) per-process bound on operations. At the same time,
an O(n) bound on total operations still holds, since each write has at least
a 1/(2n) chance of succeeding. The price we pay for the improvement is that
we increase the chance that an initial value written to the register gets
overwritten by some high-probability write. But the intuition is that the
probabilities can’t grow too much, because the probability that I write on
my next write is close to the sum of the probabilities that I wrote on my
previous writes—suggesting that if I have a high probability of writing next
time, I should have done a write already.
Formalizing this intuition requires a little bit of work. Fix the schedule,
and let pi be the probability that the i-th write operation in this schedule
succeeds. Let t be the least value for which p1 + p2 + · · · + pt ≥ 1/4. We’re going to
argue that with constant probability one of the first t writes succeeds, and
that the next n − 1 writes by different processes all fail.
Now observe that if some process p writes at or before the t-th write,
then any process q with a pending write either did no writes previously, or
its last write was among the first t − 1 writes, whose probabilities sum to
less than 1/4. In either case, q has a Σ_{i∈Sq} pi + 1/(2n) chance of writing on
its pending attempt, where Sq is the set of indices in 1 . . . t − 1 where q
previously attempted to write.
Summing up these probabilities over all processes gives a total of
(n − 1)/(2n) + Σ_q Σ_{i∈Sq} pi ≤ 1/2 + 1/4 = 3/4. So with probability at least
e^(−1/4) (1 − 3/4) = e^(−1/4)/4, we get agreement.
23.7 Sifters
A faster conciliator can be obtained using a sifter, which is a mechanism for
rapidly discarding processes using randomization [AA11] while keeping at
least one process around. The simplest sifter has each process either write a
register (with low probability) or read it (with high probability); all writers
and all readers that see ⊥ continue to the next stage of the protocol, while
all readers who see a non-null value drop out. If the probability of writing
is tuned carefully, this will reduce n processes to at most 2√n processes on
average; by iterating this mechanism, the expected number of remaining
processes can be reduced to 1 + ε after O(log log n + log(1/ε)) phases.
As with previous implementations of test-and-set (see Algorithm 22.3),
it’s often helpful to have a sifter return not only that a process lost but which
process it lost to. This gives the implementation shown in Algorithm 23.5.
To use a sifter effectively, p should be tuned to match the number of
processes that are likely to use it. This is because of the following lemma:
Lemma 23.7.1. Fix p, and suppose that X processes execute a sifter with
parameter p. Let Y be the number of processes for which the sifter returns ⊥.
Then
E [Y | X] ≤ pX + 1/p. (23.7.1)
1 procedure sifter(p, r)
2 with probability p do
3 r ← id
4 return ⊥
5 else
6 return r
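Here is a quick Monte Carlo check of the bound in Lemma 23.7.1 in Python (a toy model: all n processes take their single sifter step in a fixed order, which is one particular oblivious schedule, and p = 1/√n is the choice that balances the two terms):

import math
import random

def sifter_survivors(n, p):
    # One run of the sifter: each process, in a fixed order, writes the register
    # (and survives) with probability p, or else reads it and survives only if
    # nobody has written yet.
    written = False
    survivors = 0
    for _ in range(n):
        if random.random() < p:
            written = True
            survivors += 1      # writer returns ⊥
        elif not written:
            survivors += 1      # reader still sees ⊥
    return survivors

n = 10000
p = 1 / math.sqrt(n)
runs = [sifter_survivors(n, p) for _ in range(200)]
print(sum(runs) / len(runs), "vs the bound pX + 1/p =", p * n + 1 / p)   # both close to 200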
1 if gate ≠ ⊥ then
2 return 1
3 else
4 gate ← myId
5 for i ← 1 . . . ⌈log log n⌉ + ⌈log_{4/3} (7 log n)⌉ do
6 with probability min(1/2, 2^(1−2^(−i+1)) · n^(−2^(−i))) do
7 ri ← myId
8 else
9 w ← ri
10 if w ≠ ⊥ then
11 return 1
loop body is essentially the same as the code in Algorithm 23.5, except that
the random choice is replaced by a lookup in persona.chooseWrite.
To show that this works, we need to argue that having multiple copies
of a persona around doesn’t change the behavior of the sifter. In each
round, we will call the first process with a given persona p to access ri
the representative of p, and argue that a persona survives round i in
this algorithm precisely when its representative would survive round i in
a corresponding test-and-set sifter with the schedule restricted only to the
representatives.
There are three cases:
1 procedure conciliator(input)
2 Let R = ⌈log log n⌉ + ⌈log_{4/3} (7/ε)⌉
3 Let chooseWrite be a vector of R independent random Boolean
variables with Pr[chooseWrite[i] = 1] = pi , where
pi = 2^(1−2^(−i+1)) · n^(−2^(−i)) for i ≤ ⌈log log n⌉ and pi = 1/2 for larger i.
12 return persona.input
Algorithm 23.7: Sifting conciliator (from [Asp12a])
From the preceding analysis for test-and-set, we have that after O(log log n +
log(1/ε)) rounds with appropriate probabilities of writing, at most 1 + ε values
survive on average. This gives a probability of at most ε of disagreement. By
alternating these conciliators with adopt-commit objects, we get agreement
in O(log log n + log m/ log log m) expected time, where m is the number of
possible input values.
I don’t think the O(log log n) part of this expression is optimal, but I
don’t know how to do better.
described in §23.7.2.
Pseudocode is given in Algorithm 23.8. In this simplified version, we
assume an infinitely long array A[1 . . . ], so that we don’t need to worry
about n. Truncating the array at log n also works, but the analysis requires
handling the last position as a special case, which I am too lazy to do here.
Proof. For the first part, observe that any process that picks the largest
value of r among all processes will survive; since the number of processes is
finite, there is at least one such survivor.
For the second part, let Xi be the number of survivors with r = i. Then
E [Xi ] is bounded by n · 2^(−i) , since no process survives with r = i without
first choosing r = i. But we can also argue that E [Xi ] ≤ 3 for any value of
n, by considering the sequence of write operations in the execution.
Because the adversary is oblivious, the location of these writes is uncor-
related with their ordering. If we assume that the adversary is trying to
maximize the number of survivors, its best strategy is to allow each process
to read immediately after writing, as delaying this read can only increase the
probability that A[r + 1] is nonzero. So in computing Xi , we are counting
the number of writes to A[i] before the first write to A[i + 1]. Let’s ignore
all writes to other registers; then the j-th write to either of A[i] or A[i + 1]
has a conditional probability of 2/3 of landing on A[i] and 1/3 on A[i + 1].
We are thus looking at a geometric distribution with parameter 1/3, which
has expectation 3.
because once n · 2^(−i) drops below 3, the remaining terms form a geometric
series.
Chapter 24
Renaming
24.1 Renaming
In the renaming problem, we have n processes, each starts with a name
from some huge namespace, and we’d like to assign them each unique names
from a much smaller namespace. The main application is allowing us to run
algorithms that assume that the processes are given contiguous numbers,
e.g., the various collect or atomic snapshot algorithms in which each process
is assigned a unique register and we have to read all of the registers. With
renaming, instead of reading a huge pile of registers in order to find the few
that are actually used, we can map the processes down to a much smaller
set.
Formally, we have a decision problem where each process has input xi
(its original name) and output yi , with the requirements:
Uniqueness If pi ≠ pj , then yi ≠ yj .
Anonymity The code executed by any process depends only on its input
xi : for any execution of processes p1 . . . pn with inputs x1 . . . xn , and
24.2 Performance
Conventions on counting processes:
discussing lower bounds on the namespace follow the approach of Herlihy and
Shavit and quote lower bounds that are generally 2 higher than the minimum
number of names needed for n processes. This requires a certain amount of
translation when comparing these lower bounds with upper bounds, which
use the more natural convention.
1 procedure getName()
2 s←1
3 while true do
4 a[i] ← s
5 view ← snapshot(a)
6 if view[j] = s for some j ≠ i then
7 r ← |{j : view[j] ≠ ⊥ ∧ j ≤ i}|
8 s ← r-th positive integer not in
{view[j] : j ≠ i ∧ view[j] ≠ ⊥}
9 else
10 return s
The array a holds proposed names for each process (indexed by the
original names), or ⊥ for processes that have not proposed a name yet. If a
process proposes a name and finds that no other process has proposed the
same name, it takes it; otherwise it chooses a new name by first computing
its rank r among the active processes and then choosing the r-th smallest
name that hasn’t been proposed by another process. Because the rank is at
most n and there are at most n − 1 names proposed by the other processes,
this always gives proposed names in the range [1 . . . 2n − 1]. But it remains
to show that the algorithm satisfies uniqueness and termination.
For uniqueness, consider two processes with original names i and j. Suppose
that i and j both decide on s. Then i sees a view in which a[i] = s and
a[j] 6= s, after which it no longer updates a[i]. Similarly, j sees a view in
which a[j] = s and a[i] 6= s, after which it no longer updates a[j]. If i’s view
is obtained first, then j can’t see a[i] 6= s, but the same holds if j’s view is
obtained first. So in either case we get a contradiction, proving uniqueness.
1 procedure releaseName()
2 a[i] ← ⊥
Algorithm 24.2: Releasing a name
releaseName.
24.4.3.1 Splitters
The Moir-Anderson renaming protocol uses a network of splitters, which
we last saw providing a fast path for mutual exclusion in §17.4.2. Each
splitter is a widget, built from a pair of atomic registers, that assigns to
each processes that arrives at it the value right, down, or stop. As discussed
previously, the useful properties of splitters are that if at least one process
arrives at a splitter, then (a) at least one process returns right or stop; and
(b) at least one process returns down or stop; (c) at most one process returns
stop; and (d) any process that runs by itself returns stop.
We proved the last two properties in §17.4.2; we’ll prove the first two here.
Another way of describing these properties is that of all the processes that
arrive at a splitter, some process doesn’t go down and some process doesn’t
go right. By arranging splitters in a grid, this property guarantees that every
row or column that gets at least one process gets to keep it—which means
that with k processes, no process reaches row k + 1 or column k + 1.
Algorithm 24.3 gives the implementation of a splitter (it’s identical to
Algorithm 17.5, but it will be convenient to have another copy here).
Lemma 24.4.1. If at least one process completes the splitter, at least one
process returns stop or right.
Proof. Suppose no process returns right; then every process sees open in door,
which means that every process writes its id to race before any process closes
the door. Some process writes its id last: this process will see its own id in
race and return stop.
shared data:
1 atomic register race, big enough to hold an id, initially ⊥
2 atomic register door, big enough to hold a bit, initially open
3 procedure splitter(id)
4 race ← id
5 if door = closed then
6 return right
7 door ← closed
8 if race = id then
9 return stop
10 else
11 return down
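As a sanity check, the following Python model of the splitter (an illustration, not from the notes) breaks each call into its four atomic register operations and enumerates all interleavings of two processes, verifying the at-most-one-stop property together with properties (a) and (b) above.

from itertools import permutations

def splitter_steps(state, pid, results):
    """Generator yielding after each shared-memory step of splitter(pid)."""
    state['race'] = pid                      # race <- id
    yield
    if state['door'] == 'closed':            # read door
        results[pid] = 'right'
        return
    yield
    state['door'] = 'closed'                 # door <- closed
    yield
    if state['race'] == pid:                 # read race
        results[pid] = 'stop'
    else:
        results[pid] = 'down'

def run(schedule):
    """Run two processes through the splitter following schedule, a sequence
    of process ids saying whose step runs next."""
    state = {'race': None, 'door': 'open'}
    results = {}
    procs = {p: splitter_steps(state, p, results) for p in (0, 1)}
    for p in schedule:
        try:
            next(procs[p])
        except StopIteration:
            pass
    return results

# Each process takes at most 4 steps, so schedules with 4 slots apiece cover
# every possible interleaving of two concurrent splitter calls.
for sched in set(permutations([0, 0, 0, 0, 1, 1, 1, 1])):
    r = run(sched)
    outs = set(r.values())
    assert list(r.values()).count('stop') <= 1      # at most one stop
    assert outs & {'right', 'stop'}                 # property (a): not everybody goes down
    assert outs & {'down', 'stop'}                  # property (b): not everybody goes right
print("splitter properties hold for all 2-process interleavings")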
Lemma 24.4.2. If at least one process completes the splitter, at least one
process returns stop or down.
Proof. First observe that if no process ever writes to door, then no process
completes the splitter, because the only way a process can finish the splitter
without writing to door is if it sees closed when it reads door (which must
have been written by some other process). So if at least one process finishes,
at least one process writes to door. Let p be any such process. From the
code, having written door, it has already passed up the chance to return
right; thus it either returns stop or down.
(see Figure 24.2, also taken from [Asp10]). Each splitter on this path must
handle at least two processes (or p would have stopped at that splitter, by
Lemma 17.4.4). So some other process leaves on the other output wire, either
right or down. If we draw a path from each of these wires that continues right
or down to the end of the grid, then along each of these m disjoint paths
either some splitter stops a process, or some process reaches a final output
wire, each of which is at a distinct splitter. But this gives m processes in
addition to p, for a total of m + 1 processes. It follows that:
Ω(k) is a lower bound on the individual steps of any renaming protocol with
a polynomial output namespace [AAGG11].
in exactly the same way and end up with the same name. This observation
was formalized by Buhrman, Panconesi, Silvestri, and Vitányi [BPSV06].
may also have a process that stopped on that node in the renaming step);
this test-and-set is just a very small binary tree of 2-process test-and-sets
implemented using the algorithm of Tromp and Vitányi [TV02]. A gate bit
is added at the top as in the test-and-set protocol of Afek et al. [AGTV92]
to get linearizability.
Once we have test-and-set, we could get strong renaming using a linear
array of test-and-sets as suggested by Moir and Anderson [MA95], but it’s
more efficient to use the randomization to spread the processes out. In
the ReShuffle protocol, each process chooses a name in the range [1 . . . n]
uniformly at random, and attempts to win a test-and-set guarding that name.
If it doesn’t work, it tries again. Alistarh et al. show that this method
produces unique names for everybody in O(n log4 n) total steps with high
probability. The individual step complexity of this algorithm, however, is
not very good: there is likely to be some unlucky process that needs Ω(n)
probes (at an expected cost of Θ(log n) steps each) to find an empty slot.
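Here is a toy sequential Python rendering of the ReShuffle idea (an assumption-laden sketch, not the authors' code): each pending process repeatedly draws a random slot in [1..n] and tries to win the test-and-set guarding it, modeled here as an atomic claim of the slot.

import random

def reshuffle(n, seed=None):
    """Toy sequential model of ReShuffle-style strong renaming: each of n
    processes repeatedly draws a random slot in 1..n and tries to win the
    test-and-set guarding it.  Returns the name assignment and total probes."""
    rng = random.Random(seed)
    owner = [None] * (n + 1)      # owner[s] = process that won slot s
    names, probes = {}, 0
    pending = list(range(n))
    while pending:
        p = rng.choice(pending)   # some pending process takes a step
        s = rng.randrange(1, n + 1)
        probes += 1
        if owner[s] is None:      # the test-and-set succeeds
            owner[s] = p
            names[p] = s
            pending.remove(p)
    return names, probes

names, probes = reshuffle(16, seed=1)
assert sorted(names.values()) == list(range(1, 17))   # unique names in 1..n
print(probes, "probes in total")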
Chapter 25
Software transactional memory
Last updated 2011. Some material may be out of date. If you are interested in software transactional memory from a theoretical perspective, there is a more recent survey on this material by Attiya [Att14], available at http://www.eatcs.org/images/bulletin/beatcs112.pdf.
25.1 Motivation
Some selling points for software transactional memory:
On the other hand, we now have to deal with the possibility that opera-
tions may fail. There is a price to everything.
increasing order means that I have to know which locks I want before
I acquire any of them, which may rule out dynamic transactions.
1 if LL(status) = ⊥ then
2 if LL(r) = oldValue then
3 if SC(status, ⊥) = true then
4 SC(r, newValue)
to the RMW, and an array oldValues[] of old values at these addresses (for
the R part of the RMW). These are all initialized by the initiator of the
transaction, who will be the only process working on the transaction until it
starts acquiring locks.
1. Initialize the record rec for the transaction. (Only the initiator does
this.)
Note that only an initiator helps; this avoids a long chain of helping and
limits the cost of each attempted transaction to the cost of doing two full
transactions, while (as shown below) still allowing some transaction to finish.
25.4 Improvements
One downside of the Shavit and Touitou protocol is that it uses LL/SC very
aggressively (e.g., with overlapping LL/SC operations) and uses non-trivial
(though bounded, if you ignore the ever-increasing version numbers) amounts
of extra space. Subsequent work has aimed at knocking these down; for
example a paper by Harris, Fraser, and Pratt [HFP02] builds multi-register
CAS out of single-register CAS with O(1) extra bits per register. The proof
of these later results can be quite involved; Harris et al., for example, base
their algorithm on an implementation of 2-register CAS whose correctness
has been verified only by machine (which may be a plus in some views).
25.5 Limitations
There has been a lot of practical work on STM designed to reduce overhead
on real hardware, but there’s still a fair bit of overhead. On the theory side,
a lower bound of Attiya, Hillel, and Milani [AHM09] shows that any STM
system that guarantees non-interference between non-overlapping RMW
transactions has the undesirable property of making read-only transactions
as expensive as RMW transactions: this conflicts with the stated goals
of many practical STM implementations, where it is assumed that most
transactions will be read-only (and hopefully cheap). So there is quite a bit
of continuing research on finding the right trade-offs.
Chapter 26
Obstruction-freedom
Last updated 2011. Some material may be out of date. In particular: §26.3 has
not been updated to include some more recent results [ACHS13, GHHW13];
and §26.4 mostly follows the conference version [FHS05] of the Ellen-Hendler-
Shavit paper and omits stronger results from the journal version [EHS12].
26.2 Examples
26.2.1 Lock-free implementations
Pretty much anything built using compare-and-swap or LL/SC ends up
being lock-free. A simple example would be a counter, where an increment
operation does
1 x ← LL(C)
2 SC(C, x + 1)
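Python has no hardware LL/SC, but the same retry pattern can be sketched with a compare-and-swap register; the class below is an illustrative stand-in rather than a faithful model of LL/SC.

import threading

class CASRegister:
    """A register supporting read and compare-and-swap, standing in for LL/SC."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # models the atomicity of a single CAS

    def read(self):
        return self._value

    def cas(self, old, new):
        with self._lock:
            if self._value == old:
                self._value = new
                return True
            return False

def increment(c):
    # Lock-free pattern: retry until our CAS lands.  Some increment always
    # succeeds, but a particular caller may be starved by other updaters.
    while True:
        x = c.read()
        if c.cas(x, x + 1):
            return

C = CASRegister()
threads = [threading.Thread(target=lambda: [increment(C) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(C.read())   # 4000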
1 x←0
2 while true do
3 δ ← x − a[1 − i]
4 if δ = 2 (mod 5) then
5 return 0
6 else if δ = −1 (mod 5) do
7 return 1
8 else
9 x ← (x + 1) mod 5
10 a[i] ← x
1 procedure rightPush(v)
2 while true do
3 k ← oracle(right)
4 prev ← a[k − 1]
5 next ← a[k]
6 if prev.value ≠ RN and next.value = RN then
7 if CAS(a[k − 1], prev, [prev.value, prev.version + 1]) then
8 if CAS(a[k], next, [v, next.version + 1]) then
9 we win, go home
10 procedure rightPop()
11 while true do
12 k ← oracle(right)
13 cur ← a[k − 1]
14 next ← a[k]
15 if cur.value ≠ RN and next.value = RN then
16 if cur.value = LN and a[k − 1] = cur then
17 return empty
18 else if CAS(a[k], next, [RN, next.version + 1]) do
19 if CAS(a[k − 1], cur, [RN, cur.version + 1]) then
20 return cur.value
only if neither register was modified between the preceding read and the
CAS. If both registers are unmodified at the time of the second CAS, then
the two CAS operations act like a single two-word CAS, which replaces the
previous values (top, RN) with (top, value) in rightPush or (top, value) with
(top, RN) in rightPop; in either case the operation preserves the invariant.
So the only way we get into trouble is if, for example, a rightPush does a
CAS on a[k − 1] (verifying that it is unmodified and incrementing the version
number), but then some other operation changes a[k − 1] before the CAS on
a[k]. If this other operation is also a rightPush, we are happy, because it
must have the same value for k (otherwise it would have failed when it saw
a non-null in a[k − 1]), and only one of the two right-pushes will succeed
in applying the CAS to a[k]. If the other operation is a rightPop, then it
can only change a[k − 1] after updating a[k]; but in this case the update to
a[k] prevents the original right-push from changing a[k]. With some more
tedious effort we can similarly show that any interference from leftPush or
leftPop either causes the interfering operation or the original operation to
fail. This covers 4 of the 16 cases we need to consider. The remaining cases
will be brushed under the carpet to avoid further suffering.
don’t know what this ratio is.3 In particular, if I can execute more than R
steps without you doing anything, I can reasonably conclude that you are
dead—the semisynchrony assumption thus acts as a failure detector.
The fact that R is unknown might seem to be an impediment to using
this failure detector, but we can get around this. The idea is to start with
a small guess for R; if a process is suspected but then wakes up again, we
increment the guess. Eventually, the guessed value is larger than the correct
value, so no live process will be falsely suspected after this point. Formally,
this gives an eventually perfect (♦P ) failure detector, although the algorithm
does not specifically use the failure detector abstraction.
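A minimal Python sketch of this guess-adjusting idea (my illustration, with invented names): a process is suspected once it has been silent for more than the current guess, and a false suspicion bumps the guess, so eventually no live process is suspected.

class EventuallyPerfectDetector:
    """Sketch of a ◇P-style detector built from an unknown step-ratio bound R:
    suspect a process after `guess` silent ticks, and increase the guess
    whenever a suspected process turns out to be alive."""
    def __init__(self, processes, initial_guess=1):
        self.guess = initial_guess
        self.silent = {p: 0 for p in processes}
        self.suspected = set()

    def heard_from(self, p):
        self.silent[p] = 0
        if p in self.suspected:          # false suspicion: relax the timeout
            self.suspected.discard(p)
            self.guess += 1

    def tick(self):
        for p in self.silent:
            self.silent[p] += 1
            if self.silent[p] > self.guess:
                self.suspected.add(p)

fd = EventuallyPerfectDetector(["p", "q"])
for step in range(10):
    fd.tick()
    if step % 3 == 0:
        fd.heard_from("q")   # q is slow but alive; the guess grows as needed
print(fd.guess, fd.suspected)   # silent p stays suspected; q is eventually trusted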
To arrange for a solo execution, when a process detects a conflict (because
its operation didn’t finish quickly), it enters into a “panic mode” where pro-
cesses take turns trying to finish unmolested. A fetch-and-increment register
is used as a timestamp generator, and only the process with the smallest
timestamp gets to proceed. However, if this process is too sluggish, other
processes may give up and overwrite its low timestamp with ∞, temporarily
ending its turn. If the sluggish process is in fact alive, it can restore its low
timestamp and kill everybody else, allowing it to make progress until some
other process declares it dead again.
The simulation works because eventually the mechanism for detecting
dead processes stops suspecting live ones (using the technique described
above), so the live process with the winning timestamp finishes its operation
without interference. This allows the next process to proceed, and eventually
all live processes complete any operation they start, giving the wait-free
property.
The actual code is in Algorithm 26.3. It’s a rather long algorithm but
most of the details are just bookkeeping.
The preamble before entering PANIC mode is a fast-path computation
that allows a process that actually is running in isolation to skip testing
any timestamps or doing any extra work (except for the one register read of
PANIC). The assumption is that the constant B is set high enough that any
process generally will finish its operation in B steps without interference. If
there is interference, then the timestamp-based mechanism kicks in: we grab
a timestamp out of the convenient fetch-and-add register and start slugging
it out with the other processes.
(A side note: while the algorithm as presented in the paper assumes
a fetch-and-add register, any timestamp generator that delivers increasing
3
This is a much older model, which goes back to a famous paper of Dwork, Lynch, and
Stockmeyer [DLS88].
1 if ¬PANIC then
2 execute up to B steps of the underlying algorithm
3 if we are done then return
4 PANIC ← true // enter panic mode
5 myTimestamp ← fetchAndIncrement()
6 A[i] ← 1 // reset my activity counter
7 while true do
8 T [i] ← myTimestamp
9 minTimestamp ← myTimestamp; winner ← i
10 for j ← 1 . . . n, j ≠ i do
11 otherTimestamp ← T [j]
12 if otherTimestamp < minTimestamp then
13 T [winner] ← ∞ // not looking so winning any more
14 minTimestamp ← otherTimestamp; winner ← j
15 else if otherTimestamp < ∞ do
16 T [j] ← ∞
17 if i = winner then
18 repeat
19 execute up to B steps of the underlying algorithm
20 if we are done then
21 T [i] ← ∞
22 PANIC ← false
23 return
24 else
25 A[i] ← A[i] + 1
26 PANIC ← true
27 until T [i] = ∞
28 repeat
29 a ← A[winner]
30 wait a steps
31 winnerTimestamp ← T [winner]
32 until a = A[winner] or winnerTimestamp ≠ minTimestamp
33 if winnerTimestamp = minTimestamp then
34 T [winner] ← ∞ // kill winner for inactivity
values over time will work. So if we want to limit ourselves to atomic registers,
we could generate timestamps by taking snapshots of previous timestamps,
adding 1, and appending process ids for tie-breaking.)
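A sketch of that register-based timestamp generator in Python (illustrative and sequentialized; a real implementation would use an atomic snapshot object):

class SnapshotTimestamps:
    """Timestamps from atomic registers alone: snapshot all previous
    timestamps, add 1 to the largest, and append the process id to break ties."""
    def __init__(self, n):
        self.ts = [(0, i) for i in range(n)]   # (counter, pid) pairs

    def new_timestamp(self, i):
        snap = list(self.ts)                   # stands in for snapshot()
        counter = max(c for c, _ in snap) + 1
        self.ts[i] = (counter, i)              # lexicographic order breaks ties
        return self.ts[i]

gen = SnapshotTimestamps(3)
print(gen.new_timestamp(2))   # (1, 2)
print(gen.new_timestamp(0))   # (2, 0)
print(gen.new_timestamp(1))   # (3, 1)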
Once I have a timestamp, I try to knock all the higher-timestamp processes
out of the way (by writing ∞ to their timestamp registers). If I see a smaller
timestamp than my own, I’ll drop out myself (T [i] ← ∞), and fight on behalf
of its owner instead. At the end of the j loop, either I’ve decided I am the
winner, in which case I try to finish my operation (periodically checking T [i]
to see if I’ve been booted), or I’ve decided somebody else is the winner, in
which case I watch them closely and try to shut them down if they are too
slow (T [winner] ← ∞). I detect slow processes by inactivity in A[winner];
similarly, I signal my own activity by incrementing A[i]. The value in A[i]
is also used as an increasing guess for the time between increments of A[i];
eventually this exceeds the R(B + O(1)) operations that I execute between
incrementing it.
We still need to prove that this all works. The essential idea is to show
that whatever process has the lowest timestamp finishes in a bounded number
of steps. To do so, we need to show that other processes won’t be fighting it
in the underlying algorithm. Call a process active if it is in the loop guarded
by the “if i = winner” statement. Lemma 1 from the paper states:
Lemma 26.3.1 ([FLMS05, Lemma 1]). If processes i and j are both active,
then T [i] = ∞ or T [j] = ∞.
Proof. Assume without loss of generality that i last set T [i] to myTimestamp
in the main loop after j last set T [j]. In order to reach the active loop, i
must read T [j]. Either T [j] = ∞ at this time (and we are done, since only j
can set T [j] < ∞), or T [j] is greater than i’s timestamp (or else i wouldn’t
think it’s the winner). In the second case, i sets T [j] = ∞ before entering
the active loop, and again the claim holds.
The next step is to show that if there is some process i with a minimum
timestamp that executes infinitely many operations, it increments A[i] in-
finitely often (thus eventually making the failure detector stop suspecting it).
This gives us Lemma 2 from the paper:
Lemma 26.3.2 ([FLMS05, Lemma 2]). Consider the set of all processes that
execute infinitely many operations without completing an operation. Suppose
this set is non-empty, and let i hold the minimum timestamp of all these
processes. Then i is not active infinitely often.
Proof. Suppose that from some time on, i is active forever, i.e., it never
leaves the active loop. Then T [i] < ∞ throughout this interval (or else i
leaves the loop), so for any active j, T [j] = ∞ by the preceding lemma. It
follows that any active j leaves the active loop after B + O(1) steps of j
(and thus at most R(B + O(1)) steps of i). Can j re-enter? If j’s timestamp
is less than i’s, then j will set T [i] = ∞, contradicting our assumption. But
if j’s timestamp is greater than i’s, j will not decide it’s the winner and
will not re-enter the active loop. So now we have i alone in the active loop.
It may still be fighting with processes in the initial fast path, but since i
sets PANIC every time it goes through the loop, and no other process resets
PANIC (since no other process is active), no process enters the fast path after
some bounded number of i’s steps, and every process in the fast path leaves
after at most R(B + O(1)) of i’s steps. So eventually i is in the loop alone
forever—and obstruction-freedom means that it finishes its operation and
leaves. This contradicts our initial assumption that i is active forever.
So now we want to argue that our previous assumption that there exists
a bad process that runs forever without winning leads to a contradiction, by
showing that the particular i from Lemma 26.3.2 actually finishes (note that
Lemma 26.3.2 doesn’t quite do this—we only show that i finishes if it stays
active long enough, but maybe it doesn’t stay active).
Suppose i is as in Lemma 26.3.2. Then i leaves the active loop infinitely
often. So in particular it increments A[i] infinitely often. After some finite
number of steps, A[i] exceeds the limit R(B + O(1)) on how many steps some
other process can take between increments of A[i]. For each other process j,
either j has a lower timestamp than i, and thus finishes in a finite number of
steps (from the premise of the choice of i), or j has a higher timestamp than
i. Once we have cleared out all the lower-timestamp processes, we follow the
same logic as in the proof of Lemma 26.3.2 to show that eventually (a) i sets
T [i] < ∞ and PANIC = true, (b) each remaining j observes T [i] < ∞ and
PANIC = true and reaches the waiting loop, (c) all such j wait long enough
(since A[i] is now very big) that i can finish its operation. This contradicts
the assumption that i never finishes the operation and completes the proof.
26.3.1 Cost
If the parameters are badly tuned, the potential cost of this construction is
quite bad. For example, the slow increment process for A[i] means that the
time a process spends in the active loop even after it has defeated all other
processes can be as much as the square of the time it would normally take
26.4.1 Contention
A limitation of real shared-memory systems is that physics generally won’t
permit more than one process to do something useful to a shared object
at a time. This limitation is often ignored in computing the complexity of
a shared-memory distributed algorithm (and one can make arguments for
ignoring it in systems where communication costs dominate update costs in
the shared-memory implementation), but it is useful to recognize it if we
can’t prove lower bounds otherwise. Complexity measures that take the cost
of simultaneous access into account go by the name of contention.
The particular notion of contention used in the Ellen et al. paper is an
adaptation of the contention measure of Dwork, Herlihy, and Waarts [DHW97].
4
The result first appeared in FOCS in 2005 [FHS05], with a small but easily fixed bug in
the definition of the class of objects the proof applies to. We’ll use the corrected definition
from the journal version.
The idea is that if I access some shared object, I pay a price in memory
stalls for all the other processes that are trying to access it at the same time
but got in first. In the original definition, given an execution of the form
Aφ1 φ2 . . . φk φA′, where all operations φi are applied to the same object as φ,
and the last operation in A is not, then φk incurs k memory stalls. Ellen et
al. modify this to only count sequences of non-trivial operations, where an
operation is non-trivial if it changes the state of the object in some states
(e.g., writes, increments, compare-and-swap—but not reads). Note that this
change only strengthens the bound they eventually prove, which shows that
in the worst case, obstruction-free implementations of operations on objects
in a certain class incur a linear number of memory stalls (possibly spread
across multiple base objects).
1. φ is an instance of Op executed by p,
2. no operation in A or A′ is executed by p,
then there exists a sequence of operations Q by q such that for every sequence HφH′ where
So this definition includes both the fact that p incurs k stalls and some
other technical details that make the proof go through. The fact that p
incurs k stalls follows from observing that it incurs |Sj | stalls in each segment
σj , since all processes in Sj access Oj just before p does.
Note that the empty execution is a 0-stall execution (with i = 0) by the
definition. This shows that a k-stall execution exists for some k.
Note also that the weird condition is pretty strong: it claims not only that there are no non-trivial operations on O1 . . . Oi in τ , but also that there are no non-trivial operations on any objects accessed in σ1 . . . σi , which may include many more objects accessed by p.6
We'll now show that if a k-stall execution exists, for k ≤ n − 2, then a (k + k′)-stall execution exists for some k′ > 0. Iterating this process eventually produces an (n − 1)-stall execution.
Start with some k-stall execution Eσ1 . . . σi . Extend this execution by
a sequence of operations σ in which p runs in isolation until it finishes its
operation φ (which it may start in σ if it hasn’t done so already), then each
process in S runs in isolation until it completes its operation. Now linearize
the high-level operations completed in Eσ1 . . . σi σ and factor them as AφA′
as in the definition of class G.
Let q be some process not equal to p or contained in any Sj (this is where
we use the assumption k ≤ n − 2). Then there is some sequence of high-level operations Q of q such that Hφ does not return the same value as Aφ for any interleaving HH′ of Q with the sequences of operations in AA′ satisfying the conditions in the definition. We want to use this fact to shove at least one more memory stall into Eσ1 . . . σi σ, without breaking any of the other conditions that would make the resulting execution a (k + k′)-stall execution.
6
And here is where I screwed up in class on 2011-11-14, by writing the condition as the
weaker requirement that nobody touches O1 . . . Oi .
26.4.4 Consequences
We’ve just shown that counters and snapshots have (n − 1)-stall executions,
because they are in the class G. A further, rather messy argument (given in
the Ellen et al. paper) extends the result to stacks and queues, obtaining a
slightly weaker bound of n total stalls and operations for some process in
the worst case.7 In both cases, we can’t expect to get a sublinear worst-case
bound on time under the reasonable assumption that both a memory stall
and an actual operation takes at least one time unit. This puts an inherent
bound on how well we can handle hot spots for many practical objects, and
means that in an asynchronous system, we can’t solve contention at the
object level in the worst case (though we may be able to avoid it in our
applications).
But there might be a way out for some restricted classes of objects. We saw
in Chapter 21 that we could escape from the Jayanti-Tan-Toueg [JTT00] lower
bound by considering bounded objects. Something similar may happen here:
the Fich-Herlihy-Shavit bound on fetch-and-increments requires executions
with n(n − 1)^d + n increments to show n − 1 stalls for some fetch-and-
increment if each fetch-and-increment only touches d objects, and even for
d = log n this is already superpolynomial. The max-register construction
of a counter [AACH12] doesn’t help here, since everybody hits the switch
bit at the top of the max register, giving n − 1 stalls if they all hit it at the
same time. But there might be some better construction that avoids this.
7
This is out of date: Theorem 6.2 of [EHS12] gives a stronger result than what’s in
[FHS05].
Chapter 27
BG simulation
doing updates. This assumption is not very restrictive, because two snapshots
with no intervening update are equivalent to two snapshots separated by an
update that doesn’t change anything, and two updates with no intervening
snapshot can be replaced by just the second update, since the adversary
could choose to schedule them back-to-back anyway.
This approach means that we can determine the actions of some simulated
process by determining the sequence of snapshots that it receives. So the
goal will be to allow any of the real processes to take a snapshot on behalf
of any of the simulated processes, and then coordinate these snapshots via
weak consensus objects to enforce consistency if more than one real process
tries to simulate a step of the same simulated process. The key tool for doing
this is a safe agreement object, described in §27.2.
until they receive the safe notification, and that the usual validity (all
outputs equal some input) and agreement (all outputs equal each other)
conditions hold. There is also a wait-free progress condition that the safei
notices do eventually arrive for any process that doesn’t fail, no matter what
the other processes do (so nobody gets stuck in their unsafe section).
Pseudocode for a safe agreement object is given in Algorithm 27.1. This
is a translation of the description of the algorithm in [BGLR01], which is
specified at a lower level using I/O automata.1
// propose_i(v)
1 A[i] ← ⟨v, 1⟩
2 if snapshot(A) contains ⟨j, 2⟩ for some j ≠ i then
// Back off
3 A[i] ← ⟨v, 0⟩
4 else
// Advance
5 A[i] ← ⟨v, 2⟩
// safe_i
6 repeat
7 s ← snapshot(A)
8 until s does not contain ⟨j, 1⟩ for any j
// agree_i
9 return s[j].value where j is smallest index with s[j].level = 2
Algorithm 27.1: Safe agreement (adapted from [BGLR01])
1. Make an initial guess for s_{jr} by taking a snapshot of A and taking the value with the largest round number for each component A[−][k].
2. Initiate the safe agreement protocol S_{jr} using this guess. It continues to run S_{jr} until it leaves the unsafe interval.
4. If S_{jr} terminates, compute a new value v_{jr} for j to write based on the simulated snapshot returned by S_{jr}, and update A[i][j] with ⟨v_{jr}, r⟩.
guarantees that i enters the unsafe part of any safe agreement object on behalf of only one simulated j at a time, while preventing delays in the safe part of S_{jr} from blocking it from finishing some other S_{j′r′}.
Chapter 28
Topological methods
and getting rid of the topology (in fact, the proof in [AW04] contains a proof
of Sperner’s Lemma with the serial numbers filed off). The disadvantage of
this approach is that it obscures what is really going on and makes it harder to
obtain insight into how topological techniques might help for other problems.
The advantage is that (unlike these notes) the resulting text includes actual
proofs instead of handwaving.
position part means that the xi are not all contained in some subspace of
dimension (k −1) or smaller (so that the simplex isn’t squashed flat somehow).
What this gives us is a body with (k + 1) corners and (k + 1) faces, each of
which is a (k − 1)-dimensional simplex (the base case is that a 0-dimensional
simplex is a point). Each face includes all but one of the corners, and each
corner is on all but one of the faces. So we have:
and letting the sets that appear in the complex be those collections of
states/inputs/outputs that are compatible with each other in some sense.
For states, this means that they all appear in some global configuration
in some admissible execution of some system; for inputs and outputs, this
means that they are permitted combinations of inputs or outputs in the
specification of some problem.
Example: For 2-process binary consensus with processes 0 and 1, the
input complex, which describes all possible combinations of inputs, consists
of the sets
{{}, {p0}, {q0}, {p1}, {q1}, {p0, q0}, {p0, q1}, {p1, q0}, {p1, q1}} ,
[Figure: the input complex, drawn as a cycle on the vertices p0, q0, p1, q1.]
As a picture, this omits two of the edges (1-dimensional simplexes) from the
input complex:
[Figure: the output complex, consisting of the two disjoint edges p0–q0 and p1–q1.]
One thing to notice about this output complex is that it is not connected:
there is no path from the p0–q0 component to the q1–p1 component.
Here is a simplicial complex describing the possible states of two processes
p and q, after each writes 1 to its own bit then reads the other process’s bit.
Each node in the picture is labeled by a sequence of process ids. The first id
in the sequence is the process whose view this node represents; any other
process ids are processes this first process sees (by seeing a 1 in the other
process’s register). So p is the view of process p running by itself, while pq
is the view of process p running in an execution where it reads q’s register
after q writes it.
[Figure: the complex of views, a path p — qp — pq — q.]
The edges express the constraint that if we both write before we read,
then if I don’t see your value you must see mine (which is why there is no
p–q edge), but all other combinations are possible. Note that this complex
is connected: there is a path between any two points.
Here’s a fancier version in which each process writes its input (and
remembers it), then reads the other process’s register (i.e., a one-round full-
information protocol). We now have final states that include the process’s
own id and input first, then the other process’s id and input if it is visible.
For example, p1 means p starts with 1 but sees a null and q0p1 means q starts
with 0 but sees p’s 1. The general rule is that two states are compatible if p
either sees nothing or q’s actual input and similarly for q, and that at least
one of p or q must see the other’s input. This gives the following simplicial
complex:
[Figure: the resulting complex, a cycle in which each edge of the input complex is subdivided into three edges, with vertices such as p0, q0p0, p0q0, q0, q1p0, p1q0.]
[Figure: the edge p–q and its one-round subdivision p — qp — pq — q.]
Here (pq)(qp) is the view of p after seeing pq in the first round and seeing
that q saw qp in the first round.
28.3.2 Subdivisions
In the simple write-then-read protocol above, we saw a single input edge turn
into 3 edges. Topologically, this is an example of a subdivision, where we
represent a simplex using several new simplexes pasted together that cover
exactly the same points.
Certain classes of protocols naturally yield subdivisions of the input
complex. The iterated immediate snapshot (IIS) model, defined by
Borowsky and Gafni [BG97], considers executions made up of a sequence
of rounds (the iterated part) where each round is made up of one or more
mini-rounds in which some subset of the processes all write out their current
views to their own registers and then take snapshots of all the registers (the
immediate snapshot part). The two-process protocols of the previous section
are special cases of this model.
Within each round, each process p obtains a view vp that contains the
previous-round views of some subset of the processes. We can represent the
views as a subset of the processes, which we will abbreviate in pictures by
putting the view owner first: pqr will be the view {p, q, r} as seen by p, while
qpr will be the same view as seen by q. The requirements on these views
are that (a) every process sees its own previous view: p ∈ vp for all p; (b)
all views are comparable: vp ⊆ vq or vq ⊆ vp ; and (c) if I see you, then I see
everything you see: q ∈ vp implies vq ⊆ vp . This last requirement is called
immediacy and follows from the assumption that writes and snapshots are
done in the same mini-round: if I see your write, then I see all the values
you do, because your snapshot is either in an earlier mini-round than mine
or in the same mini-round. Note this depends on the peculiar structure of
the mini-rounds, where all the writes precede all the snapshots.
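These three conditions are easy to check mechanically. Below is a small Python helper (my own illustration) that takes each process's view as a set of processes and tests self-containment, comparability, and immediacy.

def is_immediate_snapshot(views):
    """views: dict mapping each process p to the set of processes in its view.
    Returns True iff the views could come from one round of immediate snapshot."""
    ps = list(views)
    # (a) every process sees itself
    if any(p not in views[p] for p in ps):
        return False
    for p in ps:
        for q in ps:
            vp, vq = views[p], views[q]
            # (b) comparability: any two views are nested
            if not (vp <= vq or vq <= vp):
                return False
            # (c) immediacy: if p sees q, p sees everything q sees
            if q in vp and not vq <= vp:
                return False
    return True

# p went alone in an early mini-round; q and r wrote and snapshotted together later.
print(is_immediate_snapshot({'p': {'p'},
                             'q': {'p', 'q', 'r'},
                             'r': {'p', 'q', 'r'}}))          # True
# Not immediate: p sees q, but q's view contains r, which p did not see.
print(is_immediate_snapshot({'p': {'p', 'q'},
                             'q': {'p', 'q', 'r'},
                             'r': {'p', 'q', 'r'}}))          # False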
[Figures: the subdivision of the triangle on p, q, r produced by one round of immediate snapshot, with vertices labeled by views such as p, qp, rpq, pqr; a second copy carries a Sperner-style labeling with the colors 1, 2, 3.]
Protocol implies map Even though we don’t get a subdivision with the
full protocol, there is a restricted set of executions that does give a
subdivision. So if the protocol works on this restricted set of execu-
tions, an appropriate map exists. There are two ways to prove this:
Herlihy and Shavit do so directly, by showing that this restricted set
of executions exists, and Borowsky and Gafni [BG97] do so indirectly,
by showing that the IIS model (which produces exactly the standard
chromatic subdivision used in the ACT proof) can simulate an ordinary
snapshot model. Both methods are a bit involved, so we will skip over
this part.
Map implies protocol This requires an algorithm. The idea here is that
the participating set algorithm, originally developed to solve k-set
agreement [BG93], produces precisely the standard chromatic subdivi-
sion used in the ACT proof. In particular, it can be used to solve the
problem of simplex agreement, the problem of getting the processes
to agree on a particular simplex contained within the subdivision of
their original common input simplex. This is a little easier to explain,
so we’ll do it.
The following theorem shows that the return values from participating
set have all the properties we want for iterated immediate snapshot:
Theorem 28.6.2. Let Si be the output of the participating set algorithm for
process i. Then all of the following conditions hold:
1. For all i, i ∈ Si . (Self-containment.)
2. For all i, j, Si ⊆ Sj or Sj ⊆ Si . (Atomic snapshot.)
3. For all i, j, if i ∈ Sj , then Si ⊆ Sj . (Immediacy.)
Proof. Self-inclusion is trivial, but we will have to do some work for the other
two properties.
The first step is to show that Algorithm 28.1 neatly sorts the processes
out into levels, where each process that returns at level ℓ returns precisely the set of processes at level ℓ and below.
For each process i, let S_i be the set of process ids that i returns, let ℓ_i be the final value of level[i] when i returns, and let S′_i = {j | ℓ_j ≤ ℓ_i}. Our goal is to show that S′_i = S_i, justifying the above claim.
Because no process ever increases its level, if process i observes level[j] ≤ ℓ_i in its last snapshot, then ℓ_j ≤ level[j] ≤ ℓ_i. So S′_i is a superset of S_i. We thus need to show only that no extra processes sneak in; in particular, we will show that |S_i| = |S′_i|, by showing that both equal ℓ_i.
The first step is to show that |S′_i| ≥ |S_i| ≥ ℓ_i. The first inequality follows from the fact that S′_i ⊇ S_i; the second follows from the code (if not, i would have stayed in the loop).
The second step is to show that |S′_i| ≤ ℓ_i. Suppose not; that is, suppose that |S′_i| > ℓ_i. Then there are at least ℓ_i + 1 processes with level ℓ_i or less, all of which take a snapshot on level ℓ_i + 1. Let i′ be the last of these processes to take a snapshot while on level ℓ_i + 1. Then i′ sees at least ℓ_i + 1 processes at level ℓ_i + 1 or less and exits, contradicting the assumption that it reaches level ℓ_i. So |S′_i| ≤ ℓ_i.
The atomic snapshot property follows immediately from the fact that if ℓ_i ≤ ℓ_j, then ℓ_k ≤ ℓ_i implies ℓ_k ≤ ℓ_j, giving S_i = S′_i ⊆ S′_j = S_j. Similarly, for immediacy we have that if i ∈ S_j, then ℓ_i ≤ ℓ_j, giving S_i ⊆ S_j by the same argument.
The missing piece for turning this into IIS is that in Algorithm 28.1, I
only learn the identities of the processes I am supposed to include but not
their input values. This is easily dealt with by the usual trick of adding an
extra register for each process, to which it writes its input before executing
participating set.
28.7.1 k-connectivity
Define the m-dimensional disk to be the set of all points at most 1 unit away
from the origin in Rm , and the m-dimensional sphere to be the surface of
the (m + 1)-dimensional disk (i.e., all points exactly 1 unit away from the
origin in Rm+1 ). Note that what we usually think of as a sphere (a solid
body), topologists call a disk, leaving the term sphere for just the outside
part.
Chapter 29
Approximate agreement
Validity Every process returns an output within the range of inputs. For-
mally, for all i, it holds that (minj xj ) ≤ yi ≤ (maxj xj ).
1 A[i] ← ⟨xi, 1, xi⟩
2 repeat
3 ⟨x′1, r1, v1⟩ . . . ⟨x′n, rn, vn⟩ ← snapshot(A)
4 rmax ← maxj rj
5 v ← midpoint{vj | rj = rmax}
6 A[i] ← ⟨xi, rmax + 1, v⟩
7 until rmax ≥ 2 and rmax ≥ log2(spread({x′j})/ε)
8 return v
Algorithm 29.1: Approximate agreement
be the set of all values v that are ever written to the snapshot object with
round number r. Let Ur ⊆ Vr be the set of values that are ever written to
the snapshot object with round number r before some process writes a value
with round number r + 1 or greater; the intuition here is that Ur includes only
those values that might contribute to the computation of some round-(r + 1)
value.
Proof. Let Uri be the set of round-r values observed by a process i in the
iteration in which it sees rmax = r, if such an iteration
exists. Note that Uri ⊆ Ur , because if some value with round r + 1 or greater
is written before i’s snapshot, then i will compute a larger value for rmax .
Given two processes i and j, we can argue from the properties of snapshot
that either Uri ⊆ Urj or Urj ⊆ Uri . The reason is that if i’s snapshot comes
first, then j sees at least as many round-r values as i does, because the only
way for a round-r value to disappear is if it is replaced by a value in a later
round. But in this case, process j will compute a larger value for rmax and
will not get a view for round r. The same holds in reverse if j’s snapshot
comes first.
Observe that if Uri ⊆ Urj , then |midpoint(Uri) − midpoint(Urj)| ≤ spread(Urj)/2. This holds because midpoint(Uri) lies within the interval [min Urj, max Urj], and every point in this interval is within spread(Urj)/2 of midpoint(Urj). The
same holds if Urj ⊆ Uri . So any two values written in round r + 1 are within
spread(Ur )/2 of each other.
In particular, the minimum and maximum values in Vr+1 are within
spread(Ur )/2 of each other, so spread(Vr+1 ) ≤ spread(Ur )/2.
compute
Let i be some process that finishes in the fewest number of rounds. Process i can't finish until it reaches round rmax + 1, where rmax ≥ log2(spread({x′j})/ε) for a vector of input values x′ that it reads after some process writes round 2 or greater. We have spread({x′j}) ≥ spread(U1), because every value in U1 is included in x′. So rmax ≥ log2(spread(U1)/ε) and spread(Vrmax+1) ≤ spread(U1)/2^rmax ≤ spread(U1)/(spread(U1)/ε) = ε. Since any value returned is either included in Vrmax+1 or some later Vr′ ⊆ Vrmax+1, this gives us that the spread of all the outputs is less than ε: Algorithm 29.1 solves approximate agreement.
The cost of Algorithm 29.1 depends on the cost of the snapshot operations, on ε, and on the initial input spread D. For linear-cost snapshots, this works out to O(n log(D/ε)).
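For intuition, here is a sequential Python simulation of the algorithm (a sketch under the simplifying assumption that processes execute whole iterations in round-robin order, which is one legal asynchronous schedule):

import math

def approximate_agreement(inputs, eps):
    """Round-robin simulation of Algorithm 29.1-style approximate agreement:
    each process snapshots A, takes the midpoint of the values carrying the
    highest round number, and republishes it with round number rmax + 1."""
    n = len(inputs)
    A = [(x, 1, x) for x in inputs]            # entries (input, round, value)
    outputs = [None] * n

    def spread(vals):
        return max(vals) - min(vals)

    def midpoint(vals):
        return (max(vals) + min(vals)) / 2

    while any(o is None for o in outputs):
        for i in range(n):
            if outputs[i] is not None:
                continue
            snap = list(A)                     # stands in for an atomic snapshot
            rmax = max(r for _, r, _ in snap)
            v = midpoint([val for _, r, val in snap if r == rmax])
            A[i] = (inputs[i], rmax + 1, v)
            x0 = [x for x, _, _ in snap]
            # termination test from the until-clause; guard against zero spread
            if rmax >= 2 and rmax >= math.log2(max(spread(x0), eps) / eps):
                outputs[i] = v
    return outputs

outs = approximate_agreement([0.0, 8.0, 3.0], eps=0.5)
print(outs)
assert max(outs) - min(outs) <= 0.5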
again and repeat until both p and q have pending writes that will change the
other process’s preference. Let p1 and q1 be the new preferences that result
from these operations. The adversary can now choose between running P only
and getting to a configuration with preferences p0 and q1 , Q only and getting
p1 and q0 , or both and getting p1 and q1 ; each of these choices incurs at least
one step. By the triangle inequality, |p0 − q0 | ≤ |p0 − q1 |+|q1 − p1 |+|p1 − q0 |,
so at least one of these configurations has a spread between preferences that is at least 1/3 of the initial spread. It follows that after k steps the best spread we can get is D/3^k, requiring k ≥ log3(D/ε) steps to get ε-agreement.
Herlihy uses this result to show that there are decisions problems that have
wait-free but not bounded wait-free deterministic solutions using registers.
Curiously, the lower bound says nothing about the dependence on the number
of processes; it is conceivable that there is an approximate agreement protocol
with running time that depends only on D/ε and not on n.
Part III
Chapter 30
Overview
In this part, we consider models that don’t fit well into the standard message-
passing or shared-memory models. These include models where processes
can directly observe the states of nearby processes (Chapter 31); where
computation is inherently local and the emphasis is on computing information
about the communication graph (Chapter 32); where processes wander about
and exchange information only with processes they physically encounter
(Chapter 33); where processes (in the form of robots) communicate only by
observing each others’ locations and movements (Chapter 34); and where
processes can transmit only beeps, and are able to observe only whether at
least one nearby process beeped (Chapter 35).
Despite the varying communication mechanisms, these models all share
the usual features of distributed systems, where processes must contend with
nondeterminism and incomplete local information.
Chapter 31
Self-stabilization
A self-stabilizing algorithm has the property that, starting from any ar-
bitrary configuration, it eventually reaches a legal configuration, and this
property is stable in the sense that it remains in a legal configuration
thereafter. The notion of which configurations are legal depends on what
problem we are trying to solve, but the overall intuition is that an algorithm
is self-stabilizing if it can recover from arbitrarily horrible errors, and will
stay recovered as long as no new errors occur.
It’s generally not possible to detect whether the algorithm is in a legal
configuration from the inside: if a process has a bit that says that everything
is OK, the adversary can set that bit in the initial configuration, even if
everything is in fact broken. So self-stabilizing algorithms don’t actually
terminate: at best, they eventually converge to a configuration where the
necessary ongoing paranoid consistency checks produce no further changes
to the configuration (a property called silent self-stabilization).
The idea of self-stabilization first appeared in a paper by Edsger Dijk-
stra [Dij74], where he considered the problem of building robust token-ring
networks. In a token-ring network, there are n nodes arranged in a directed
cycle, and we want a single token to circulate through the nodes, as a mech-
anism for enforcing mutual exclusion: only the node currently possessing the
token can access the shared resource.
The problem is: how do you get the token started? Dijkstra worried both
about the possibility of starting with no tokens or with more than one token,
and he wanted an algorithm that would guarantee that, from any starting
state, eventually we would end up with exactly one token that would circulate
as desired. He called such an algorithm self-stabilizing, and gave three
examples, the simplest of which we will discuss in §31.2 below. These became
the foundation for the huge field of self-stabilization, which spans thousands
of papers, at least one textbook [Dol00], a specialized conference (SSS, the
International Symposium on Stabilization, Safety, and Security in Distributed
Systems), and its own domain name http://www.selfstabilization.org/.
We won’t attempt to summarize all of this, but will highlight a few results
to give a sampling of what self-stabilizing algorithms look like.
31.1 Model
Much of the work in this area, dating back to Dijkstra’s original paper, does
not fit well in either the message-passing or shared-memory models that we
have been considering in this class, both of which were standardized much
later. Instead, Dijkstra assumed that processes could, in effect, directly
observe the states of their neighbors. A self-stabilizing program would
consist of a collection of what he later called guarded commands [Dij75],
statements of the form “if [some condition is true] then [update my state in
this way].” In any configuration of the system, one or more of these guarded
commands might have the if part (the guard) be true; these commands are
said to be enabled.
A step consists of one or more of these enabled commands being executed
simultaneously, as chosen by an adversary scheduler, called the distributed
daemon. The usual fairness condition applies: any process that has an
enabled command eventually gets to execute it. If no commands are enabled,
nothing happens. With the central daemon variant of the model, only one
step can happen at a time. With the synchronous daemon, every enabled
step happens at each time. Note that both the central and synchronous
daemons are special cases of the distributed daemon.
More recent work has tended to assume a distinction between the part
of a process’s state that is visible to its neighbors and the part that isn’t.
This usually takes the form of explicit communication registers or link
registers, which allow a process to write a specific message for a specific
neighbor. This is still not quite the same as standard message-passing or
shared-memory, because a process is often allowed to read and write multiple
link registers atomically.
In this algorithm, the nonzero processes just copy the state of the process
to their left. The zero process increments its state if it sees the same state
to its left. Note that the nonzero processes have guards on their commands
that might appear useless at first glance, but these are there to ensure that the
adversary can’t waste steps getting nonzero processes to carry out operations
that have no effect.
What does this have to do with tokens? The algorithm includes an additional
interpretation of the state, which says that:
This instantly guarantees that there is at least one token: if none of the
nonzero processes have a token, then all the `i variables are equal. But then
0 has a token. It remains though to show that we eventually converge to a
configuration where at most one process has a token.
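Dijkstra's protocol is easy to simulate. The following Python sketch (illustrative; the daemon is a random central daemon) starts from an arbitrary configuration with values mod n + 1 and runs until only one token remains.

import random

def tokens(l):
    """Indices of processes holding a token: 0 holds one iff l[0] == l[n-1];
    i > 0 holds one iff l[i] != l[i-1]."""
    n = len(l)
    return [i for i in range(n)
            if (i == 0 and l[0] == l[n - 1]) or (i > 0 and l[i] != l[i - 1])]

def step(l, K, rng):
    """Central daemon: pick one enabled process and fire its guarded command."""
    n = len(l)
    i = rng.choice(tokens(l))       # enabled commands coincide with token holders
    if i == 0:
        l[0] = (l[n - 1] + 1) % K   # 0 increments when it matches its left neighbor
    else:
        l[i] = l[i - 1]             # other processes copy their left neighbor

rng = random.Random(2)
n, K = 8, 9                         # K = n + 1 values, as in the analysis above
l = [rng.randrange(K) for _ in range(n)]   # arbitrary (possibly illegal) start
steps = 0
while len(tokens(l)) > 1:
    step(l, K, rng)
    steps += 1
print("stabilized after", steps, "steps; configuration", l)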
Define a configuration ℓ as legal if there is some value j such that ℓ_i = ℓ_j for all i ≤ j and ℓ_i = ℓ_j − 1 (mod n + 1) for all i > j. When j = n − 1, this makes all ℓ_i equal, and 0 has the only token. When j < n − 1, then ℓ_0 ≠ ℓ_{n−1} (so 0 does not have a token), ℓ_j ≠ ℓ_{j+1} (so j + 1 has a token), and ℓ_i = ℓ_{i+1} for all i ∉ {j, n − 1} (so nobody else has a token). That each legal
1
In Dijkstra’s paper, there are n + 1 processes numbered 0 . . . n, but this doesn’t really
make any difference.
configuration has exactly one token partially justifies our definition of legal
configurations.
If a configuration ℓ is legal, then when j = n − 1, the only enabled step is ℓ′_0 ← (ℓ_{n−1} + 1) mod (n + 1); when j < n − 1, the only enabled step is ℓ′_{j+1} ← ℓ_j. In either case, we get a new legal configuration ℓ′. So the property
of being a legal configuration is stable, which is the other half of justifying
our definition.
Now we want to show that we eventually converge to a legal configuration.
Fix some initial configuration ℓ^0, and let c be some value such that ℓ^0_i ≠ c for all i. (There is at least one such c by the Pigeonhole Principle.) We will
argue that there is a sequence of configurations with c as a prefix of the
values that forms a bottleneck forcing us into a legal configuration:
Proof. By induction on t. For the base case, ℓ^0 satisfies ℓ^0_i = c if and only if i < j when j = 0.
If ℓ^t is legal, ℓ^{t+1} is also legal. So the interesting case is when ℓ^t is not legal. In this case, there is some 0 ≤ j < n such that ℓ^t_i = c if and only if i < j.
If j = 0, then ℓ^t_i ≠ c for all i. Then the only way to get ℓ^{t+1}_i = c is if i = 0. But then ℓ^{t+1} satisfies the condition with j′ = 1.
If 0 < j < n, then ℓ^t_i = c for at least one i < j, and ℓ^t_{n−1} ≠ c since n − 1 ≮ j. So we may get a transition that sets ℓ^{t+1}_j = ℓ^t_{j−1} = c, giving a new configuration ℓ^{t+1} that satisfies the induction hypothesis with j′ = j + 1, or we may get a transition that does not create or remove any copies of c. In either case the induction goes through.
and that the gap drops when 0 increments its value. By weighting these
values appropriately, we can arrange for a function that always drops.
Let
    Φ(ℓ) = n·g(ℓ) + Σ_{i=0}^{n−2} (n − 1 − i) ui(ℓ).    (31.2.1)
Most of the work here is being done by the first two terms. The g term tracks the gap between ℓ_0 and c, weighted by n. The sum tracks unhappiness, weighted by distance to position n − 1.
In the initial configuration ℓ^0, g is at most n, and each ui is at most 1, so Φ(ℓ^0) = O(n²). We also have that Φ ≥ 0 always; if Φ = 0, then g = 0 and
ui = 0 for all i implies we are in an all-c configuration, which is legal. So we’d
like to argue that every step of the algorithm in a non-legal configuration
reachable from ℓ^0 reduces Φ by at least 1, forcing us into a legal configuration after O(n²) steps.
Consider any step of the algorithm starting from a non-legal configuration ℓ^t with Φ(ℓ^t) > 0 that satisfies the condition in Lemma 31.2.1:
Since the condition of Lemma 31.2.1 holds for any reachable ℓ^t, as long as we are in a non-legal configuration, Φ drops by at least 1 per step. If we do not reach a legal configuration otherwise, Φ can only drop O(n²) times before hitting 0, giving us a legal configuration. Either way, the configuration stabilizes in O(n²) steps.
31.3 Synchronizers
Self-stabilization has a curious relationship with failures: the arbitrary initial
state corresponds to an arbitrarily bad initial disruption of the system, but
once we get past this there are no further failures. So it is not surprising that
many of the things we can do in a failure-free distributed system we can also
This rule works fine as long as every process starts synchronized. But
it’s not self-stabilizing. A counterexample, given in the paper, assumes we
have 10 processes organized in a ring. By carefully choosing which processes
are activated at each step, we can go through the following sequence of
configurations, where in each configuration the updated node is shown in
boldface:
1234312343
1234212343
1234232343
1234234343
1234234543
1234234542
3234234542
3434234542
3454234542
1 1 1050
If we run the nodes in round-robin order, the left two nodes will eventually
catch up to the rightmost, but it will take a while.
After some further tinkering, the authors present their optimal rule, which
they call max minus one:
    P(v) ← min_{u∈N(v)} (P(u) + 1)   if P(v) looks legal,
    P(v) ← max_{u∈N(v)} (P(u) − 1)   otherwise.
where d(u, v) is the distance between u and v in the graph. This is zero if the skew between any pair of nodes is equal to the distance, which is the most we can expect from a synchronizer. The proof shows that applying the max-minus-one rule never increases φ(v), and decreases it by at least 1 whenever a node v with positive φ(v) changes P(v). Because this only gives a bound of Σ_v φ(v), the rest of the proof uses a second potential function Φ(v), which measures the distance from v to the nearest node u that supplies the maximum in φ(v). It is shown that Φ(v) drops by 1 per time unit. When it reaches 0, then φ(v) = P(v) − P(v) − d(v, v) = 0.
We’ll skip over the details of the argument here. If you are interested,
the presentation in the paper is not too hard to follow.
The important part is that once we have a synchronizer, we can effectively
assume synchrony in other self-stabilizing algorithms. We just run the
synchronizer underneath our main protocol, and when the synchronizer
stabilizes, that gives us the initial starting point for the main protocol.
Because the main protocol itself should stabilize starting from an arbitrary
configuration, any insanity produced while waiting for the synchronizer to
converge is eventually overcome.
a different root), or any neighbor that has the ghost root has at least the
same distance (in which case it increases its distance). Once the minimum distance exceeds the upper bound D0, all the ghost roots will have been
eliminated, and only real distances will remain. This gives a stabilization
time (in rounds) linear in the upper bound on the diameter.
We can now argue that, after stabilization, this process eventually con-
verges to Tu consisting precisely of the set of all pairs ⟨w, xv⟩ where w is a
u–v path of length at most f (n) and xv is the input to v. Indeed, this works
under almost any reasonable assumption about scheduling. The relevant
lemma:
Lemma 31.5.1. Starting from any initial configuration, for any sequence w of at most f(n) vertices starting at u and ending at v, if (31.5.1) fires for each node in w in reverse order, then Tu(w) = xv if w is a u–v path, and Tu(w) = ⊥ otherwise.
Proof. The proof is by induction on the length of w. The base case is when |w| = 1, implying w = u = v. Here rule (31.5.1) writes ⟨u, xu⟩ to Tu, giving Tu(u) = xu as claimed.
For a sequence w = uw′ where w′ is a nonempty path from some node u′ to v, if u′ is a neighbor of u, then firing rule (31.5.1) at u after firing the rule for each node in w′ has Tu(uw′) ← Tu′(w′) = xv by the induction hypothesis. If uw′ is not a path from u to v, then either u′ is not a neighbor of u, or w′ is not a path from u′ to v and Tu′(w′) = ⊥ by the induction hypothesis. In either case, Tu(uw′) ← ⊥.
all colors down, by having each node switch its color to that of its successor
(or some new color chosen from {0, 1, 2} if it doesn’t have a successor). The
reason for doing this is that it guarantees that each node’s predecessors
will all share the same color, meaning that that node now has at most two
colors represented among its predecessors and successor. At the same time,
it doesn’t create any new pair of adjacent nodes with the same color. So
for the second stage, each node v that currently has color c can choose a
new color from {0, 1, 2} that is the smallest color that doesn’t appear in its
neighborhood. Since none of v’s neighbors change color (they don’t have
color c), this replaces all instances of c with a color from {0, 1, 2} while
keeping all edges two-colored. After doing this for all c ∈ {3, 4, 5}, the only
colors left are in {0, 1, 2}.
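Here is a Python sketch of these two stages for the special case of a directed ring, where each node's successor is the next node around the ring (my own illustration of the general description above). On a ring every node has a successor, so the no-successor case mentioned in the text does not arise here.

def six_to_three_ring(colors):
    """Reduce a valid 6-coloring of a directed ring (successor of i is i+1 mod n)
    to a 3-coloring.  colors[i] is in {0,...,5}, adjacent nodes differ."""
    n = len(colors)
    # Stage 1: every node adopts its successor's color (synchronously).
    colors = [colors[(i + 1) % n] for i in range(n)]
    # Stage 2: for c = 5, 4, 3, nodes of color c simultaneously pick the smallest
    # color in {0, 1, 2} not used by a neighbor.  Neighbors show at most two
    # colors, so a free color always exists, and no two c-nodes are adjacent.
    for c in (5, 4, 3):
        new = list(colors)
        for i in range(n):
            if colors[i] == c:
                nbr = {colors[(i - 1) % n], colors[(i + 1) % n]}
                new[i] = min(x for x in (0, 1, 2) if x not in nbr)
        colors = new
    return colors

ring = [0, 3, 1, 5, 2, 4, 0, 5]                       # a valid 6-coloring
out = six_to_three_ring(ring)
assert all(out[i] != out[(i + 1) % len(out)] for i in range(len(out)))
assert set(out) <= {0, 1, 2}
print(out)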
Doing the 6 to 3 reduction in the obvious way takes an additional 6
rounds, which is dominated by the O(log∗ n) of reducing from N initial ids.
This algorithm has the advantage of simplicity, but there are faster
algorithms. An algorithm of Ghaffari and Kuhn [GK20] obtains a (∆ + 1)-
coloring of a graph with maximum degree ∆ in O(log2 ∆ log n) rounds.
Chapter 33
Population protocols
H2 + O2 → H2O + O
appropriate mix of molecules into a test tube, stirring it up, and hoping
to learn something from the final product.
Unlike population protocols, chemical reaction networks do not neces-
sarily conserve the number of molecules in each interaction, and (in
reality at least) require some source of energy to keep the reactions
going.
The first agent in an interaction is called the initiator and the second
agent the responder. Note that this means that the model breaks symmetry
for us.
With a complete interaction graph, we will often not bother with the
identities of specific agents and just treat the configuration C as a multiset
of states.
The main difference between population protocols and similar models
is the input and output mappings, and the notion of stable computation,
which gets its own section.
33.2.2 Examples
These examples are mostly taken from the original paper of Angluin et
al. [AAD+ 06].
L, L → L, F.
It is easy to see that in any configuration with more than one leader,
there exists a transition that eventually reduces the number of leaders. So
global fairness says this happens eventually, which causes us to converge to
a single leader after some finite number of interactions.
If we assume random scheduling, the expected number of transitions to
get down to one leader is exactly
    Σ_{k=2}^{n} n(n − 1)/(k(k − 1)) = n(n − 1) Σ_{k=2}^{n} 1/(k(k − 1))
                                    = n(n − 1) Σ_{k=2}^{n} (1/(k − 1) − 1/k)
                                    = n(n − 1)(1 − 1/n)
                                    = (n − 1)².
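A quick Python simulation of this protocol under random scheduling (illustrative) can be used to check the expected interaction count empirically:

import random

def leader_election_interactions(n, rng):
    """Simulate L,L -> L,F under random scheduling; return the number of
    interactions until a single leader remains."""
    state = ['L'] * n
    leaders, interactions = n, 0
    while leaders > 1:
        i, j = rng.sample(range(n), 2)   # random ordered pair (initiator, responder)
        interactions += 1
        if state[i] == 'L' and state[j] == 'L':
            state[j] = 'F'               # L, L -> L, F
            leaders -= 1
        # every other interaction leaves the configuration unchanged
    return interactions

rng = random.Random(0)
n, trials = 50, 200
avg = sum(leader_election_interactions(n, rng) for _ in range(trials)) / trials
print(avg)    # should come out close to (n - 1)**2 = 2401 for n = 50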
This protocol satisfies the invariant that the sum over all agents of
the second component, mod m, is unchanged by any transition. Since the
component for any follower is zero, this means that when we converge to
a unique leader, it will contain the count of initial A’s mod m.
where the xi are the counts of various possible inputs and the ai and b are
integer constants. This includes comparisons like x1 > x2 as a special case.
The idea is to compute a truncated version of the left-hand side of (33.2.1)
as a side-effect of leader election.
Fix some k > max(|b|, maxi |ai |). In addition to the leader bit, each agent
stores an integer in the range −k through k. The input map sends each xi to
the corresponding coefficient ai , and the transition rules cancel out positive
and negative ai , and push any remaining weight to the leader as much as
possible subject to the limitation that values lie within [−k, k].
Formally, define a truncation function t(x) = max(−k, min(k, x)), and a remainder function r(x) = x − t(x). These have the property that if |x| ≤ 2k, then t(x) and r(x) both have their absolute value bounded by k. If we have the stronger condition |x| ≤ k, then t(x) = x and r(x) = 0.
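In code, the truncation and remainder functions and the properties just claimed look like this (a direct Python transcription):

def t(x, k):
    """Truncate x into the range [-k, k]."""
    return max(-k, min(k, x))

def r(x, k):
    """Remainder left over after truncation; always t(x, k) + r(x, k) == x."""
    return x - t(x, k)

k = 5
for x in range(-2 * k, 2 * k + 1):
    assert t(x, k) + r(x, k) == x
    assert abs(t(x, k)) <= k and abs(r(x, k)) <= k     # since |x| <= 2k
    if abs(x) <= k:
        assert t(x, k) == x and r(x, k) == 0
print("truncation/remainder properties check out for k =", k)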
We can now define the transition rules:
These have the property that the sum of the second components is preserved by all transitions. Formally, if we write yi for the second component of agent i, then Σ yi does not change through the execution of the protocol.
When agents with positive and negative values meet, we get cancellation. This reduces the quantity Σ |yi|. Global fairness implies that this quantity will continue to drop until eventually all nonzero yi have the same sign.
Once this occurs, and there is a unique leader, then the leader will eventually absorb as much of the total as it can. This will leave the leader with y = min(k, max(−k, Σ yi)). By comparing this quantity with b, the
leader can compute the threshold predicate.
∃y : x = y + y
∃z : x = y + z + 1
A, − → A0 , B
xy → xb
yx → yb
xb → xx
bx → xx
yb → yy
by → yy
If we track the number of x agents minus y agents, we get a random walk biased toward the majority value with
an absorbing barrier in the state where all agents are equal. However, the
rate at which these special transitions occur depends on how quickly blank
agents are created, which in turn depends on the relative numbers of x and
y agents.
Analysis of the full process is difficult, but Angluin et al. show that with
high probability all agents end up in the initial majority state in O(n log n)
interactions, provided the initial majority is large enough (Ω(√n · log n), later
improved to Ω(√(n log n)) by Condon et al. [CHKM19]). Curiously, a later
paper by Cardelli and Csikász-Nagy [CCN12] showed that the cell cycle
controlling mitosis in all living organisms uses a chemical switch that looks
suspiciously like the approximate majority algorithm, making this algorithm
roughly three billion years old.
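A direct simulation of these six rules (a sketch of mine; the population size and split below are arbitrary, with the initial majority comfortably above the threshold) shows the fast convergence in practice.

import random

def approximate_majority(nx, ny, seed=None):
    """Simulate the three-state approximate-majority protocol.

    Rules (initiator first):  x y -> x b,  y x -> y b,
                              x b -> x x,  b x -> x x,
                              y b -> y y,  b y -> y y.
    Returns (winning value, number of interactions)."""
    rng = random.Random(seed)
    agents = ['x'] * nx + ['y'] * ny
    n = len(agents)
    interactions = 0
    while len(set(agents)) > 1 or 'b' in agents:
        i, j = rng.sample(range(n), 2)
        si, sj = agents[i], agents[j]
        if {si, sj} == {'x', 'y'}:
            agents[j] = 'b'                  # responder goes blank
        elif 'b' in (si, sj) and si != sj:
            keep = si if si != 'b' else sj   # blank adopts the other value
            agents[i] = agents[j] = keep
        interactions += 1
    return agents[0], interactions

if __name__ == "__main__":
    n = 1000
    winner, t = approximate_majority(600, 400, seed=42)
    print(f"winner={winner}, interactions={t} (about {t / n:.1f} per agent)")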
But we can do better than this. With random scheduling, we have
much more control over how a computation evolves, and this can be used
to simulate (with high probability) a register machine, which in turn can
be used to simulate a Turing machine. The catch is that the state of a
population protocol with n agents can be described using O(log n) bits, by
counting the number of agents in each state. So the most we can simulate is
a machine that has O(log n) space.
The original population protocol paper included a simulation of an
O(log n)-space Turing machine, but the time overhead per operation was very
bad, since most operations involved a controller agent personally adjusting
the state of some other agent, which requires at least O(n) interactions before
the controller meets its target.
A better construction was given by Angluin et al. [AAE08a], under the
assumption that the population starts with a single agent in a special leader
state. The main technique used in this paper is to propagate a message m
using an epidemic protocol mb → mm. The time for an epidemic to spread
through a population of n individuals through random pairwise interactions is
well-understood, and has the property that (a) the time to infect everybody is
Θ(log n) with high probability, and (b) it's still Θ(log n) with high probability
if we just want to infect a polynomial fraction n^ε of the agents.
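Here is a quick simulation of the one-way epidemic mb → mm under uniformly random interactions (a sketch, not the analysis from the paper); dividing the number of interactions by n gives parallel time, which grows logarithmically in n.

import math
import random

def epidemic_time(n, seed=None):
    """Interactions until the rumor spreads to all n agents under the
    one-way epidemic rule m b -> m m with uniformly random pairings."""
    rng = random.Random(seed)
    infected = 1                 # agents 0..infected-1 hold the rumor
    interactions = 0
    while infected < n:
        i, j = rng.sample(range(n), 2)
        if i < infected and j >= infected:   # initiator informs responder
            infected += 1
        interactions += 1
    return interactions

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        t = epidemic_time(n, seed=0)
        print(f"n={n:6d}: parallel time {t / n:5.1f}  (ln n = {math.log(n):.1f})")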
So now the idea is that if the leader, for example, wants to test if there is
a particular state x in the population, it can spread a message x? using an
epidemic, and any agent with x can respond by starting a counter-epidemic
x!. So if there is an x, the leader finds out about it in O(log n) time, the
time for the first epidemic to go out plus the time for the second epidemic
to come back.
What if there is no x agent? Then the query goes out but nothing comes
back. If the leader can count off Θ(log n) time units (with an appropriate
constant), it can detect this. But it does not have enough states by itself to
count to Θ(log n).
The solution is to take advantage of the known spreading time for epi-
demics to build a phase clock out of epidemics. The idea here is that
the leader will always be in some phase 0 . . . m − 1. Non-leader agents
try to catch up with the leader by picking up on the latest rumor of the
leader's phase, which is implemented formally by transitions of the form
⟨x, i⟩⟨F, j⟩ → ⟨x, i⟩⟨F, i⟩ when 0 < i − j < m/2 (mod m). The leader on
the other hand is a hipster and doesn't like it when everybody catches up; if
it sees a follower in the same phase, it advances to the next phase to maintain
its uniqueness: ⟨L, i⟩⟨F, i⟩ → ⟨L, i + 1⟩⟨F, i⟩.
Because the current phase spreads like an epidemic, when the leader
advances to i + 1, every agent catches up within a log n time w.h.p., for some
constant a. This means both that the leader doesn't spend too much time in
i + 1 before meeting a same-phase follower and that followers don't get too
far behind. (In particular, followers don't get so far behind that they start
pulling other followers forward.) But we also have that it takes at least b log n
time w.h.p. before more than n^ε followers catch up. This gives at most an
n^{ε−1} probability that the leader advances twice in b log n time. By
making m large enough, the chances that this happens enough to get all
the way around the clock in less than, say, b(m/2) log n time can be made
at most n^{−c} for any fixed c. So the leader can now count off Θ(log n) time
w.h.p., and in particular can use this to time any other epidemics that are
propagating around in parallel with the phase clock.
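A toy simulation of the two phase-clock rules (a sketch with arbitrary constants m and running time, not the actual construction's parameters) gives a feel for the steady Θ(log n) rate at which the leader's phase advances.

import random

def phase_clock(n, m=8, parallel_time=200, seed=1):
    """Simulate the leader/follower phase clock with m phases.

    Agent 0 is the leader.  Followers catch up to any agent that is
    ahead of them by less than m/2 (mod m); the leader advances when,
    as initiator, it meets a follower in its own phase.
    Returns the number of leader phase advances."""
    rng = random.Random(seed)
    phase = [0] * n
    advances = 0
    for _ in range(parallel_time * n):       # n interactions = 1 time unit
        i, j = rng.sample(range(n), 2)
        if i == 0 and phase[j] == phase[0]:
            phase[0] = (phase[0] + 1) % m    # <L,i><F,i> -> <L,i+1><F,i>
            advances += 1
        elif j != 0:
            ahead = (phase[i] - phase[j]) % m
            if 0 < ahead < m // 2:
                phase[j] = phase[i]          # <x,i><F,j> -> <x,i><F,i>
    return advances

if __name__ == "__main__":
    for n in (100, 1000):
        adv = phase_clock(n)
        print(f"n={n:5d}: {200 / max(adv, 1):5.2f} time units per advance")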
Angluin et al. use these techniques to implement various basic arithmetic
operations such as addition, multiplication, division, etc., on the counts of
agents in various states, which gives the register machine simulation. The
simulation can fail with nonzero probability, which is necessary because
otherwise it would allow implementing non-semilinear operations in the
adversarial scheduling model.
The assumption of an initial leader can be replaced by a leader election
algorithm, but at the time of the Angluin et al. paper, no leader election
algorithm better than the Θ(n)-time fratricide protocol described in §33.2.2.1
was known, and even using this protocol requires an additional polynomial-
time cleaning step before we can run the main algorithm, to be sure that
there are no leftover phase clock remnants from deposed leaders to cause
trouble. So the question of whether this could be done faster remained open.
Hopes of finding a better leader election protocol without changing the
model ended when Doty and Soloveichik [DS15] proved a matching Ω(n)
lower bound on the expected time to convergence for any leader election
algorithm in the more general model of chemical reaction networks. This
result holds assuming a constant number of states and a dense initial
population, where any state that appears is represented by a constant
fraction of the agents.
Because of this and related lower bounds, recent work on fast population
protocols has tended to assume more states. This is a fast-moving area of
research, so I will omit trying to summarize the current state of the art here.
There are good recent surveys of this work in [AG18, ER+ 18].
Chapter 34
Mobile robots
34.1 Model
We will start by describing the Suzuki-Yamashita model [SY99], the
CORDA model [Pri01], and some variants. We’ll follow the naming con-
ventions used by Agmon and Peleg [AP06].
Basic idea:
– Anonymity: any two robots that see the same view take the
same action.
– Oblivious: The output of the compute phase is based only on the
results of the last look phase, and not on any previous observations.
Robots have no memory!
– No absolute coordinates: Translations of the space don’t change
the behavior of the robots.
– No sense of direction: robots don’t know which way is north.
More formally, if view v can be rotated to get view v 0 , then a
robot that sees v 0 will make the same move as in v, subject to the
same rotation.
– No sense of scale: robots don’t have a consistent linear measure.
If view v can be scaled to get view v 0 , then a robot that sees v 0
will move to the same point as in v, after applying the scaling.
– No sense of chirality: robots can’t tell counter-clockwise from
clockwise. Flipping a view flips the chosen move but has no other
effect.
– No ability to detect multiplicities: the view of other robots is a
set of points (rather than a multiset), so if two robots are on the
same point, they look like one robot.
– Fat robots: robots block the view of the robots behind them.
2. Only one robot moves. Without loss of generality, suppose the robot
at p moves to q. Then there is a different execution where the robot at q also moves
to p and the robots switch places.
In either case the distance between the two robots in the modified
execution is at least half the original distance. In particular, it’s not zero.
Note that this works even if the adversary can’t stop a robot in mid-move.
Both obliviousness and the lack of coordinates and sense of direction
are necessary. If the robots are not oblivious, then they can try moving
to the midpoint, and if only one of them moves then it stays put until the
other one catches up. If the robots have absolute coordinates or a sense of
direction, then we can deterministically choose one of the two initial positions
as the ultimate gathering point (say, the northmost position, or the westmost
position if both are equally far north). But if we don’t have any of this we
are in trouble.
Like the 3-process impossibility result for Byzantine agreement, the 2-
process impossibility result for robot gathering extends to any even number of
robots where half of them are on one point and half on the other. Anonymity
then means that each group of robots acts the same way a single robot would
if we activate them all together. Later work (e.g., [BDT12]) refers to this as a
bivalent configuration, and it turns out to be the only initial configuration
for which it is not possible to solve gathering absent Byzantine faults.
However, once we have a Byzantine fault, this blows up. This is shown
by considering a lot of cases, and giving a strategy for the adversary and the
Byzantine robot to cooperate to prevent the other two robots from gathering
in each case. This applies to algorithms for both gathering and convergence:
the bad guys can arrange so that the algorithm eventually makes no progress
at all.
The first trick is to observe that any working algorithm for the n =
3, f = 1 case must be hyperactive: every robot attempts to move in every
configuration with multiplicity 1. If not, the adversary can (a) activate the
non-moving robot (which has no effect); (b) stall the moving non-faulty robot
if any, and (c) move the Byzantine robot to a symmetric position relative to
the first two so that the non-moving robot becomes the moving robot in the
next round and vice versa. This gives an infinite execution with no progress.
The second trick is to observe that if we can ever reach a configuration
where two robots move in a way that places them further away from each
other (a diverging configuration), then we can keep those two robots at
the same or greater distance forever. This depends on the adversary being
able to stop a robot in the middle of its move, which in turn depends on the
robot moving at least δ before the adversary stops it. But if the robots have
no sense of scale, then we can scale the initial configuration so that this is
not a problem.
Chapter 35
Beeping
The (discrete) beeping model was introduced by Cornejo and Kuhn [CK10]
to study what can be computed in a wireless network where communication
is limited to nothing but carrier sensing. According to the authors, the model
is inspired in part by some earlier work on specific algorithms based on carrier
sensing due to Scheideler et al. [SRS08] and Flury and Wattenhofer [FW10].
It has in turn spawned a significant literature, not only in its original domain
of wireless networking, but also in analysis of biological systems, which often
rely on very limited signaling mechanisms. Some of this work extends or
adjusts the capabilities of the processes in various ways, but the essential
idea of tightly limited communication remains.
In its simplest form, the model consists of synchronous processes organized
in an undirected graph. Processes wake up at arbitrary rounds chosen by the
adversary, and do not know which round they are in except by counting the
number of rounds since they woke. Once awake, a process chooses in each
round to either send (beep) or listen. A process that sends learns nothing in
that round. A process that listens learns whether any of its neighbors sends,
but not how many or which one(s).
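To make the interface concrete, here is a minimal one-round simulator for this model (my own sketch, not anything from [CK10]); a listener learns only the OR of its neighbors' beeps.

def run_beeping_round(graph, actions):
    """One synchronous round of the beeping model.

    graph:   dict mapping each process to its set of neighbors.
    actions: dict mapping each awake process to 'beep' or 'listen'.
    Returns, for each listener, True iff at least one neighbor beeped;
    beepers learn nothing and so do not appear in the result."""
    beepers = {p for p, a in actions.items() if a == 'beep'}
    return {p: bool(graph[p] & beepers)
            for p, a in actions.items() if a == 'listen'}

if __name__ == "__main__":
    # A path 0 - 1 - 2; process 2 has not woken up yet this round.
    graph = {0: {1}, 1: {0, 2}, 2: {1}}
    print(run_beeping_round(graph, {0: 'beep', 1: 'listen'}))   # {1: True}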
From a practical perspective, the justification for the model is that carrier
sensing is cheap and widely available in radio networks. From a theoretical
perspective, the idea is to make the communication mechanism as restrictive
as possible while still allowing some sort of distributed computing. The
assumption of synchrony both adds to and limits the power of the model.
With no synchrony at all, it’s difficult to see how to communicate anything
with beeps, since each process will just see either a finite or infinite sequence
of beeps with not much correlation to its own actions. With continuous
or there are constants ℓ and p such that the process beeps in round ℓ with
probability p. This follows because if the process is ever going to beep, there
is some first round ℓ where it might beep, and the probability that it does
so is constant because it depends only on the algorithm and the sequence b,
and not on n.
If a process that hears only silence remains silent, then nobody ever
beeps, and nobody learns anything about the graph. Without knowing
anything, it's impossible to correctly compute an MIS (consider a graph with
only two nodes that might or might not have an edge between them). This
means that in any working algorithm, there is some round ℓ and probability
p such that each process beeps with probability p after ℓ rounds of silence.
We can now beep the heck out of everybody by assembling groups of
Θ((1/p) log n) processes and waking up each one ℓ rounds before we want them
to deliver their beeps. But we need to be a little bit careful to keep the
graph from being so connected that the algorithm finds an MIS despite this.
There are two cases, depending on what a process that hears only beeps
does:
1. If a process that hears only beeps stays silent forever, then we build
a graph with k − 1 cliques C1, . . . , Ck−1 of size Θ((k/p) log n) each, and
a set of k cliques U1, . . . , Uk of size Θ(log n) each. Here k is a
placeholder that will be filled in later (foreshadowing: it's the biggest
value that doesn't give us more than n processes). Each Ci clique is
further partitioned into subcliques Ci1, . . . , Cik of size Θ((1/p) log n) each.
Each Cij is attached to Uj by a complete bipartite graph.
We wake up each clique Ci in round i, and wake up all the U cliques
in round ℓ. We can prove by induction on rounds that with high
probability, at least one process in each Cij beeps in round i + ℓ, which
means that every process in every Ui hears a beep in the first k − 1
rounds that it is awake, and remains silent, causing the later C cliques
to continue to beep.
Because each Ci is a clique, each contains at most one element of the
MIS, and so between them they contain at most k − 1 elements of
the MIS. But there are k U cliques, so one of them is not adjacent to
any MIS element in a C clique. This means that one of the Uj must
contain an MIS element.
So now we ask when this extra Uj picks an MIS element. If it’s in the
first k − 1 rounds after it wakes up, then all elements have seen the
same history, so if any of them attempt to join the MIS then all of
1 Leave MIS and restart the algorithm here upon hearing a beep
2 for c lg² N rounds do
3     listen
4 for i ← 1 to lg N do
5     for c lg N rounds do
6         with probability 2^i /(8N) do
7             beep
8         else
9             listen
10 Join MIS
11 while I don't hear any beeps do
12     with probability 1/2 do
13         beep
14         listen
15     else
16         listen
17         beep
4. The hard part: After O(log² N log n) rounds, it holds with high prob-
ability that every node is either in the MIS or has a neighbor in the
MIS. This will give us that the alleged MIS is in fact maximal.
The bad case for termination is when some node u hears a neighbor v
that is then knocked out by one of its neighbors w. So now u is not in
the MIS, but neither is its (possibly only) neighbor v. The paper gives
a rather detailed argument that this can’t happen too often, which we
will not attempt to reproduce here. The basic idea is that if one of v’s
neighbors were going to knock out v shortly after v first beeps, then the
sum of the probabilities of those neighbors beeping must be pretty high
(because at least one of them has to be beeping instead of listening
when v beeps). But they don’t increase their beeping probabilities
very fast, so if this is the case, then with high probability one of them
would have beeped in the previous c log N rounds before v does. So
the most likely scenario is that v knocks out u and knocks out the
rest of its neighbors at the same time, causing it to enter the MIS and
remain there forever. This doesn't always happen, so we might have to
have some processes go through the whole O(log² N) initial rounds of
the algorithm more than once before the MIS converges. But O(log n)
attempts turn out to be enough to make it work in the end.
Appendix
Appendix A
Assignments
1. Your name.
2. Your status: whether you are an undergraduate, grad student, auditor,
etc.
3. Whether you are taking the course as CPSC 465 or CPSC 565.
4. Anything else you’d like to say.
1. Show that no algorithm that allows tokens to move can guarantee that
there are exactly m tokens in any reachable configuration.
To keep things simple, you may assume that the processes can make
non-deterministic choices. For example, a process p might choose
arbitrarily between sending a message to a neighbor q or to a different
neighbor r, and each choice leads to a different possible execution.
Solution
1. Suppose that we are preserving total tokens. Consider some transition
between configurations C1 and C2 . If some process switches hasToken
¹Strictly speaking, this is a lot weaker than the usual definition of liveness, because it
effectively assumes that the adversary is cooperating with us. In retrospect I should have
written this as "for any admissible adversary strategy, there is a sequence of nondeterministic
choices by the algorithm that causes the execution to reach a desired configuration." But I
didn't write this, and so it's fine to answer the problem I did write.
1 initially do
2     if I am a leader then
3         parent ← id
4         send recruit to both neighbors
5     else
6         parent ← ⊥
Solution
1. In a synchronous execution, we can prove by induction that for each t
with 0 ≤ t ≤ (k − 1)/2, and each 0 ≤ i ≤ m − 1, each node at position ik ± t
joins the tree rooted at ik at time t. This puts exactly k nodes in each
tree.
3. The easiest fix may be to have each leader initially send just one recruit
message to the right. For each i, this recruits all agents ik, . . . , ik+(k−1)
to a tree of size k rooted at ik.
Show that it is possible to solve agreement in this model in O(f /k)
rounds, assuming n > f .
Solution
We’ll use the flooding algorithm of Dolev and Strong [DS83] (see §9.2), but
replace sending S to all n processes in each round with sending S to all
$\binom{n}{k}$ possible recipient lists. As in the original algorithm, we want to prove that
after some round with few enough failures, all the non-faulty processes have
the same set.
Let S_i^r be the set stored by process i after r rounds. Suppose there is
some round r + 1 in which fewer than k processes fail. Then every recipient
list in round r includes a process that does not fail in round r + 1. Let L
be the set of processes that successfully deliver a message to at least one
recipient list in round r, and let S = ∪_{i∈L} S_i^r. Then for each value v ∈ S,
there is some process that receives v during round r, does not crash in round
r + 1, and so retransmits v to all processes in round r + 1, causing it to be
added to S_i^{r+2}. On the other hand, for any v ∉ S, v is not transmitted to any
recipient list in round r, which means that no non-faulty process i includes
v in S_i^{r+1}. So S ⊆ S_i^{r+2} ⊆ ∪_j S_j^{r+1} ⊆ S for all i, and the usual induction
argument shows that S_i^{r'} continues to equal S for all non-faulty i and all
r' ≥ r + 2.
We can have at most ⌊f/k⌋ rounds with ≥ k crashes before we run out,
so the latest possible round in which we have fewer than k crashes is
r = ⌊f/k⌋ + 1, giving agreement after ⌊f/k⌋ + 2 rounds (since we don't need
to send any messages in round r + 2).
(With some tinkering, it is not too hard to adapt the Dolev-Strong lower
bound to get a ⌊f/k⌋ + 1 lower bound for this model. The main issue is that
now we have to crash k processes fully in round r + 1 before we can remove
one outgoing broadcast from a process in round r, which means we need to
budget tk failures to break a t-round protocol. The details are otherwise
pretty much the same as described in §9.3.)
1 preference ← input
2 for i ← 1 to m do
3     send ⟨i, preference⟩ to all processes
4     wait to receive ⟨i, v⟩ from n − f processes
5     for each ⟨i, v⟩ received do
6         preference ← min(preference, v)
7 decide preference
Algorithm A.2: Candidate algorithm for asynchronous agreement
Solution
We know from the FLP bound ([FLP85], Chapter 11) that Algorithm A.2
can’t work. So the only question is how to find an execution that shows it
doesn’t work.
It’s not too hard to see that Algorithm A.2 satisfies both termination
and validity. So we need to find a problem with agreement.
The easiest way I can see to do this is to pick a patsy process p and give it
input 0, while giving all the other processes input 1. Now run Algorithm A.2
while delaying all outgoing messages ⟨i, v⟩ from p until after the receiver has
finished the protocol. Because each other process is waiting for n − f ≤ n − 1
messages, this will not prevent the other processes from finishing. But all
the other processes have input 1, so we have an invariant that messages in
transit from processes other than p and preferences of processes other than p
will be 1 that holds as long as no messages from p are delivered. This results
in the non-p processes all deciding 1. We can then run p to completion, at
which point it will decide 0.
Solution
1. Termination: The algorithm always terminates in f + 1 synchronous
rounds, so f doesn’t matter.
Give an algorithm that solves this problem, and show that it satisfies
these requirements.
(For the purpose of defining when a process starts or ends the protocol,
imagine that it uses explicit invoke and respond events. Your protocol should
have the property that all non-faulty processes eventually terminate.)
Solution
The easiest way to do this may be to use ABD (see §16.2). Algorithm A.3
has each process read the simulated register, which we assume is initialized
to 1, then write a 0 before returning the value it read.
1 procedure inc(v)
2     A[i] ← A[i] + v
3 procedure read()
4     s ← 0
5     for j ← 1 to n do
6         s ← s + A[j]
7     return s
Algorithm A.4: An alleged counter. Code for process i.
1. V = {0, 1}.
2. V = {−1, 1}.
3. V = {1, 2}.
Solution
1. The {0, 1} case is linearizable. Given an execution S of Algorithm A.4,
we assign a linearization point to each inc operation at the step
where it writes to A, and assign a linearization point to each read
operation ρ that returns s at the later of the first step that leaves
∑_j A[j] = s or the first step of ρ. Since this may assign the same
linearization point to some write operation π and one or more read
operations ρ1 , . . . , ρk , when this occurs, we order the write before the
reads and the reads arbitrarily.
Observe that: (a) the value of A[j] used by any read operation lies
between the value of A[j] at the start and at the end of that operation,
since each A[j] never decreases; and (b) ∑_j A[j] increases by at most
one per step.
These are easily shown by induction on the steps of the execution, since
each inc operation only changes at most one A[j] and only changes it
by increasing it by 1.
The first condition implies that the value v_j of A[j] used by a particular
read operation ρ lies somewhere between the minimum and maximum
values of A[j] during the operation's interval, which implies the same
about the total ∑_j A[j]. In particular, if ρ returns s, the value of
∑_j A[j] at the start of ρ is no greater than s, and it reaches s no later
than the end of ρ.
Because ∑_j A[j] increases by at most one per step, this means that
either ∑_j A[j] = s at the first step of ρ, or ∑_j A[j] = s at some step
within the execution interval of ρ. In either case, ρ is assigned a
linearization point within its interval that follows exactly s non-trivial
increments. This means that the return values of all read operations
are consistent with a sequential generalized counter execution, and
because both read and inc operations are ordered consistently with
the execution ordering in S, we have a linearization of S.
2. For increments in {−1, 1}, there are executions of Algorithm A.4 that
are not linearizable. We will construct a specific bad execution for
n = 3. Let p1 perform inc(1) and p2 perform inc(−1), where p1 finishes
its operation before p2 starts. Because the inc(1) must be linearized
before the inc(−1), the values of the counter in any linearization will
be 0, 1, 0 in this order.
A.4.2 Rock-paper-scissors
Define a rock-paper-scissors object as having three states 0 (rock), 1
(paper), and 2 (scissors), with a read operation that returns the current
state and a play(v) operation for v ∈ {0, 1, 2} that changes the state from s
to v if v = (s + 1) (mod 3) and has no effect otherwise.
Prove or disprove: There exists a deterministic wait-free linearizable
implementation of a rock-paper-scissors object from atomic registers.
Solution
Proof: We will show how to implement a rock-paper-scissors object using
an unbounded max register, which can be built from atomic registers using
snapshots. The idea is to store a value v such that v mod 3 gives the value
of the rock-paper-scissors object. Pseudocode for both operations is given in
Algorithm A.5.
6 procedure read()
7 return (m mod 3)
Algorithm A.5: Implementation of a rock-paper-scissors object
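Here is a Python sketch of the whole construction. The body of play (read the max register as s, and write s + 1 only when the played value is the successor of s mod 3) is my reading of the linearization argument below rather than a transcription of Algorithm A.5, and the max register here is a plain sequential stand-in for the snapshot-based one.

class MaxRegister:
    """Sequential stand-in for an unbounded max register."""
    def __init__(self):
        self.m = 0
    def read(self):
        return self.m
    def write_max(self, v):
        self.m = max(self.m, v)

class RockPaperScissors:
    """Sketch of the idea behind Algorithm A.5: state = m mod 3."""
    def __init__(self):
        self.reg = MaxRegister()
    def play(self, v):
        s = self.reg.read()
        if v == (s + 1) % 3:          # only the successor state takes effect
            self.reg.write_max(s + 1)
    def read(self):
        return self.reg.read() % 3

if __name__ == "__main__":
    o = RockPaperScissors()
    o.play(2)   # no effect: scissors is not the successor of rock
    o.play(1)   # rock -> paper
    o.play(2)   # paper -> scissors
    print(o.read())   # 2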
Linearize each play operation that does not write m at the step at which
it reads m.
Linearize each play operation that writes s + 1 to m at the first step
at which m ≥ s + 1. If this produces ties, break first in order of increasing
s + 1 and then arbitrarily. Since each such operation has m ≤ s when the
operation starts and m ≥ s + 1 when it finishes, these linearization points fit
within the intervals of their operations.
Linearize each read() operation at the step where it reads m.
Since each of these linearization points is within the corresponding oper-
ation’s interval, this preserves the observed execution ordering.
Observe that the play operations that write are linearized in order of
increasing values written, and there are no gaps in this sequence because
no process writes s + 1 without first seeing s. (This actually shows there is
no need to break ties by value.) So the sequence of values in the max register,
taken mod 3, iterates through the values 0, 1, 2, 0, . . . in sequence, with each
value equal mod 3 to some argument to a play operation. So we can take
these values mod 3 as the actual value of the register for the purposes of
read operations, meaning the read operations all return correct values. The
play operations that don’t write are linearized at a point where they would
have no effect on the state of the rock-paper-scissors object, which is also
consistent with the sequential specification.
It follows that Algorithm A.5 is a linearizable implementation of a rock-paper-scissors object.
Solution
We’ll disprove it.
Let p0 and p1 be the two processes. The idea is to consider, for each
i ∈ {0, 1} some nonzero-probability solo terminating execution ξi of pi with
input i, then show that ξ0 and ξ1 can be interleaved to form a two-process
execution ξ that is indistinguishable by each pi from ξi .
The oblivious adversary will simply choose to schedule the processes for
ξ. Since the processes flip a finite number of coins in this execution, there
is a nonzero chance that the adversary gets lucky and they flip their coins
exactly the right way.
Fix ξ0 and ξ1 as above. Partition each ξi as αi βi1 βi2 . . . βiki where αi
contains only read operations and each βij starts with a write operation of a
value vij strictly larger than any previous write operation.
Let ξ = α0 α1 βi1j1 βi2j2 . . . βikjk where k = k0 + k1 and the blocks βiℓjℓ are
the blocks {β0j} and {β1j} sorted in order of non-decreasing vij. Then each
block βiℓjℓ in ξ starts with a write of a value no smaller than the previous
value in the max register, causing each read operation within the block to
return the value of this write, just as in the solo execution ξiℓ. Assuming
both processes flip their coins as in the solo executions, they both perform
the same operations and return the same values. These values will either
violate agreement in ξ or validity in at least one of ξ0 or ξ1 .
Solution
Case (2) holds.
To implement the object, use a snapshot array to hold the total votes
from each process, and have the winner operation take a snapshot, add up
all the votes and return the correct result. This can be done using n registers.
To show that it can’t be done with o(n) registers, use the JTT bound (see
Chapter 20). We need to argue that the object is perturbable. Let ΛΣπ be
an execution that needs to be perturbed, and let m be the maximum number
of vote(v) operations that start in Λ for any value v. Then a sequence γ of
m + 1 votes for some v 0 that does not appear in Λ will leave the object with v 0
as the plurality value, no matter how the remaining operations are linearized.
Since v 0 did not previously appear in Λ, this gives a different return value
for π in ΛγΣπ from ΛΣπ as required. The JTT bound now implies that any
implementation of the object requires at least n − 1 registers.
3. A description of where this result fits into the literature (e.g., solves
an open problem previously proposed in [...], improves on the previous
best running time for an algorithm from [...], gives a lower bound or
impossibility result for a problem previously proposed by [...], opens
up a new area of research for studying [...]), and why it is interesting
and/or hard.
You do not have to prepare slides for your presentation if you would
prefer to use the whiteboard feature in Zoom, but you should make sure
to practice it in advance to make sure it fits in the allocated time. The
instructor will be happy to offer feedback on draft versions if available far
enough before the actual presentation date.
Relevant dates:
²Literary theorists will recognize this as a three-act structure (preceded by a title card):
introduce the main character, make their life difficult, then resolve their problems in time
for the final curtain. This is not the only way to organize a talk, but if done right it has
the advantage of keeping the audience awake.
2020-11-27 Last date to send draft slides or arrange for a practice presen-
tation with the instructor if you want guaranteed feedback.
Solution
Time complexity Observe that Alice sends at least k messages by time
2k − 2. This is easily shown by induction on k, because Alice sends at least
1 message by time 0, and if Alice has sent at least k − 1 messages by time
1 Alice:
2     initially do
3         send message to Bob
4     upon receiving message from Bob do
5         send message to Bob
6 Bob:
7     upon receiving message from Alice do
8         send message to Alice
9         send message to Charlie 1
10 Charlie i, for i < n:
11     initially do
12         c ← 0
13     upon receiving message from Bob or Charlie i − 1 do
14         c ← c + 1
15         if c = 3 then
16             c ← 0
17             send message to Charlie i + 1
T_i(k) = (2 · 3^i · k − 1) + k.
differ in exactly one coordinate. We also assume that each node knows
its own coordinate vector and those of its neighbors.
Show that any algorithm for an asynchronous ring can be adapted to
an asynchronous d-dimensional hypercube with no increase in its time
or message complexity.
Solution
1. The idea is to embed the ring in the hypercube, so that each node is
given a clockwise and a counterclockwise neighbor, and any time the ring
algorithm asks to send a message clockwise or counterclockwise, we send
to the appropriate neighbor in the hypercube. We can then argue that
for any execution of the hypercube algorithm there is a corresponding
execution of the ring algorithm and vice versa; this implies that the
worst-case time and message-complexity in the hypercube is the same
as in the ring.
It remains only to construct an embedding. For d = 0, d = 1, and d = 2,
the ring and hypercube are the same graph, so it’s easy. For larger
d, split the hypercube into two subcubes Qd−1 , consisting of nodes
with coordinate vectors of the form 0x and 1x. Use the previously
constructed embedding for d − 1 to embed a ring on each subcube,
using the same embedding for both. Pick a pair of matching edges
(0x, 0y) and (1x, 1y) and remove them, replacing them with (0x, 1x)
and (0y, 1y). We have now constructed an undirected Hamiltonian
cycle on Qd . Orient the edges to get a directed cycle, and we’re done.
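The construction is easy to carry out explicitly. The following sketch (mine, with hypothetical function names) builds the cycle as a list of bit strings by reusing the same cycle in both subcubes and splicing across a matched pair of edges, exactly as described above, and then checks that consecutive vertices differ in one coordinate.

def ring_on_hypercube(d):
    """Hamiltonian cycle on the d-dimensional hypercube Q_d (d >= 1),
    following the recursive construction above: reuse the same cycle in
    both subcubes, drop the matching edges (0x,0y) and (1x,1y) for the
    cycle's first edge (x,y), and splice in (0x,1x) and (0y,1y)."""
    if d == 1:
        return ['0', '1']
    inner = ring_on_hypercube(d - 1)
    zeros = ['0' + v for v in inner]
    ones = ['1' + v for v in inner]
    # 0x, cross to 1x, walk the 1-copy backwards to 1y, cross to 0y,
    # and finish the 0-copy forwards back around to 0x.
    return [zeros[0], ones[0]] + ones[:0:-1] + zeros[1:]

if __name__ == "__main__":
    cycle = ring_on_hypercube(4)
    edges_ok = all(sum(a != b for a, b in zip(u, w)) == 1
                   for u, w in zip(cycle, cycle[1:] + cycle[:1]))
    print(len(cycle), "vertices, valid Hamiltonian cycle:", edges_ok)  # 16 True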
initial start-up cost to map the graph, adding to the time and
message complexity of the ring algorithm.
Solution
This is pretty much the same as a Chandy-Lamport snapshot [CL85], as
described in §6.3. The main difference is that instead of recording its state
upon receiving a stop message, a process shuts down the underlying protocol.
Pseudocode is given in Algorithm B.2. We assume that the initial stop order
takes the form of a stop message delivered by a process to itself.
1 initially do
2     stopped ← false
3 upon receiving stop do
4     if ¬stopped then
5         stopped ← true
6         send stop to all neighbors
7         replace all events in underlying protocol with no-ops
Solution
We need f < n/2.
To show that f < n/2 is sufficient, observe that we can use the oracle to
construct an eventually strong (♦S) failure detector.
Recall that ♦S has the property that there is some non-faulty process
that is eventually never suspected, and every faulty process is eventually
permanently suspected. Have each process broadcast the current value of its
leader oracle whenever it increases; when a process p receives i from some
process q, it stops suspecting q if i is greater than any value p has previously
seen, and starts suspecting all other processes. The guarantee that eventually
some non-faulty q gets a maximum value that never changes ensures that
eventually q is never suspected, and all other processes (including faulty
processes) are suspected. We can now use Algorithm 13.2 to solve consensus.
To show that f < n/2 is necessary, apply a partition argument. In
execution Ξ0 , processes n/2 + 1 through n crash, and processes 1 through
n/2 run with input 0 and with the oracle assigning value 1 to process 1 (and
no others). In execution Ξ1 , processes 1 through n/2 crash, and processes
n/2 + 1 through n run with input 1 and with the oracle assigning value 2
to process n (and no others). In each of these executions, termination and
validity require that eventually the processes all decide on their respective
input values 0 and 1.
Now construct an execution Ξ2 , in which both groups of processes run
as in Ξ0 and Ξ1 , but no messages are exchanged between the groups until
after both have decided (which must occur after a finite prefix because this
execution is indistinguishable to the processes from Ξ0 or Ξ1 ). We now
violate agreement.
Solution
No such implementation is possible. The proof is by showing that if some
such implementation could work, we could solve asynchronous consensus with
1 crash failure, contradicting the Fischer-Lynch-Patterson bound [FLP85]
(see Chapter 11).
An implementation of consensus based on totally-ordered partial broad-
cast for k = 3n/4 is given in Algorithm B.3. In fact, k = 3n/4 is overkill
when f = 1; k > n/2 + f is enough.
1 first ← ⊥
2 for i ← 1 to n do
3     count[i] ← 0
4     value[i] ← ⊥
5 broadcast ⟨i, input⟩
6 upon receiving ⟨j, v⟩ do
7     if first = ⊥ then
8         first ← ⟨j, v⟩
9         send received(⟨j, v⟩) to all processes
Lemma B.2.1. In any execution of Algorithm B.3 with k > n/2 + f , there
is a unique pair ⟨j, v⟩ such that at least k − f non-faulty processes resend
received(⟨j, v⟩).
Proof. Because all processes that receive messages m1 and m2 through the
broadcast mechanism receive them in the same order, we can define a partial
order on messages by letting m1 < m2 if any process receives m1 before m2 .
There are only finitely many messages, so there is at least one pair ⟨j, v⟩
that is minimal in this partial order. This message is received by at least k
processes, of which at least k − f are non-faulty. Each such process receives
⟨j, v⟩ before any other broadcast messages, so it sets first to ⟨j, v⟩ and resends
received(⟨j, v⟩).
To show that ⟨j, v⟩ is unique, observe that k − f > n/2 implies that if
there is some other pair ⟨j′, v′⟩ that is resent by k − f non-faulty processes,
then there is some process that resends both ⟨j, v⟩ and ⟨j′, v′⟩. But each
process resends at most one pair.
shared data:
1 waiting, atomic register, initially arbitrary
2 count, atomic counter, initially 0
3 Code for process i:
4 while true do
      // trying
5     increment count
6     waiting ← i
7     while true do
8         if count = 1 then
9             break
10        if waiting = i + 1 (mod n) then
11            break
      // critical
12    (do critical section stuff)
      // exiting
13    decrement count
      // remainder
14    (do remainder stuff)
Algorithm B.4: Peterson's mutual exclusion algorithm using a counter
Solution
The proof that this works for two processes is essentially the same as in the
original algorithm. The easiest way to see this is to observe that process
pi sees count = 1 in Line 8 under exactly the same circumstances as it sees
present[¬i] = 0 in Line 8 in the original algorithm; and similarly with two
processes waiting is always set to the same value as waiting in the original
algorithm. So we can map any execution of Algorithm B.4 for two processes
to an execution of Algorithm 17.4, and all of the properties of the original
algorithm carry over to the modified version.
To show that the algorithm doesn’t work for three processes, we construct
an explicit bad execution:
1. p0 increments count
2. p1 increments count
3. p2 increments count
4. p0 writes 0 to waiting
5. p1 writes 1 to waiting
6. p2 writes 2 to waiting
Solution
One possible implementation is given in Algorithm B.5. This requires O(1)
space and O(1) steps per call to inc or read.
1 procedure inc
2     if c[1] = 1 then
          // somebody already did inc
3         c[2] ← 1
4     else
5         c[1] ← 1
          // maybe somebody else is doing inc
6         if splitter returns right or down then
7             c[2] ← 1
8 procedure read
9     if c[2] = 1 then
10        return 2
11    else if c[1] = 1 then
12        return 1
13    else
14        return 0
The implementation uses two registers c[1] and c[2] to represent the value
of the counter. Two additional registers implement a splitter object as in
Algorithm 17.5.1
Claim: For any two calls to inc, at least one sets c[2] to 1. Proof: Suppose
otherwise. Then both calls are by different processes p and q (or else the
second call would see c[1] = 1) and both execute the splitter. Since a splitter
returns stop to at most one process, one of the two processes gets right or
down, and sets c[2].
It is also straightforward to show that a single inc running alone will set
c[1] but not c[2], since in this case the splitter will return stop.
Now we need to argue linearizability. We will do so by assigning lineariza-
tion points to each operation.
If some inc does not set c[2], assign it the step at which it sets c[1].
Assign each other inc the step at which it first sets c[2].
If every inc sets c[2], assign the first inc to set c[1] the step at which it
does so, and assign all others the first point during its execution interval at
which c[2] is nonzero.
For a read operation that returns 2, assign the step at which it reads c[2].
For a read operation that returns 1, assign the first point in the execution
interval after it reads c[2] at which c[1] = 1. For a read operation that
returns 0, assign the step at which it reads c[1].
This will assign the same linearization point to some operations; in this
case, put incs before reads and otherwise break ties arbitrarily.
These choices create a linearization which consists of (a) a sequence of
read operations that return 0, all of which are assigned linearization points
before the first step at which c[1] = 1; (b) the first inc operation that
sets c[1]; (c) a sequence of read operations that return 1, all of which are
linearized after c[1] = 1 but before c[2] = 1; (d) some inc that is either
the first to set c[2] or spans the step that sets c[2]; and (e) additional inc
operations together with read operations that all return 2. Since each read
returns the minimum of 2 and the number of incs that precede it, this is a
correct linearization.
Solution
The worst-case step complexity of an operation is Θ(n).
For the upper bound, implement a counter on top of snapshots (or just
collect), and have read compute log∗ of whatever value is read.
For the lower bound, observe that a slow counter has the perturbability
property needed for the JTT proof. Given an execution of the form Λk Σk π
as described in Chapter 20, we can always insert some sequence of inc
operations between Λk and Σk that will change the return value of π. The
number of incs needed will be the number needed to raise log∗ v, plus an
extra n to overcome the possibility of pending incs in Σk being linearized
before or after π. Since this object is perturbable, and the atomic registers
we are implementing it from are historyless, JTT applies and gives an Ω(n)
lower bound on the cost of read in the worst case.
• The operation close(i, j) sets Ai to zero and adds the previous value
to Aj . It is equivalent to atomically executing transfer(i, j, read(i)).
Solution
1. The consensus number of the object is infinite. Initialize A0 to 1 and
the remaining Ai to 0. We can solve id consensus by having process
i (where i > 0) execute close(0, i) and then applying read to scan all
the Aj values for itself and other processes. Whichever process gets
the 1 wins.
3. A description of where this result fits into the literature (e.g., solves
an open problem previously proposed in [...], improves on the previous
best running time for an algorithm from [...], gives a lower bound or
impossibility result for a problem previously proposed by [...], opens
up a new area of research for studying [...]), and why it is interesting
and/or hard.
2019-04-17 Last date to send draft slides or arrange for a practice presen-
tation with the instructor if you want guaranteed feedback.
Solution
You will need exactly n registers (Θ(n) is also an acceptable answer).
For the upper bound, have each process write its id to its own register,
and use a double-collect snapshot to read all of them. This uses exactly n
registers. The double-collect snapshot is wait-free because after each process
has called announce once, the contents of the registers never change, so read
finishes after O(n) collects or O(n2 ) register reads. It’s linearizable because
double-collect snapshot returns the exact contents of the registers at some
point during its execution.3
For the lower bound, use a covering argument.4
Have the processes p1 , . . . , pn run announce in order, stopping each
process when it covers a new register. This will give a sequence of partial
executions Ξi where at the end of Ξi , there is a set of i registers r1 . . . ri that
are covered by p1 . . . pi , and no other operations are in progress.
To show this works, we need to argue that each pi+1 does in fact cover a
register ri+1 6∈ {r1 , . . . , ri }. If not, then we can extend Ξi by running pi+1 ’s
announce operation to completion, then delivering all the covering writes
by p1 . . . pi . Now any subsequent read will fail to return pi+1 , violating the
specification. (If we have a spare process, we can have it do the bad read;
otherwise we can run p1 to completion and let it do it.)
At the end of Ξn , we have covered n distinct registers, proving the lower
bound.
Solution
It turns out that this problem is a good example of what happens if you
don’t remember to include some sort of validity condition. As pointed in
several student solutions, having each process pick a fixed constant xi the
first time it updates works.
Here is a protocol that also works, and satisfies the validity condition that
the common output was some process’s input (which was not required in the
problem statement). When pi takes a step, it sets xi to max(xi , x(i−1) mod n ).
To show that this works, we argue by induction that the maximum value
eventually propagates to all processes. Let x = xi be the initial maximum
value. The induction hypothesis is that for each j ∈ {0, . . . , n − 1}, eventually
all processes in the range i through i + j (mod n) hold value x forever.
Suppose that the hypothesis holds for j; to show that it holds for j + 1,
start in a configuration where xi through xi+j are all x. No transition can
change any of these values, because taking the max of x and any other value
yields x. Because each process is scheduled infinitely often, eventually pi+j+1
takes a step; when this happens, xi+j+1 is set to max(x, xi+j+1 ) = x.
Since the hypothesis holds for all j ∈ {0, . . . , n − 1}, it holds for j = n−1;
but this just says that eventually all n processes hold x forever.
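A quick simulation of this update rule (a sketch of mine; a uniformly random schedule is fair with probability 1) confirms that the maximum propagates to everybody.

import random

def max_propagation(xs, seed=0):
    """Run the protocol x_i <- max(x_i, x_{(i-1) mod n}) under a uniformly
    random schedule until all values agree.  Returns the common value and
    the number of steps taken."""
    rng = random.Random(seed)
    x = list(xs)
    n = len(x)
    steps = 0
    while len(set(x)) > 1:
        i = rng.randrange(n)
        x[i] = max(x[i], x[(i - 1) % n])
        steps += 1
    return x[0], steps

if __name__ == "__main__":
    print(max_propagation([3, 1, 4, 1, 5, 9, 2, 6]))   # common value 9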
faults in the sense that a process either sends all of its messages in a given
round or no messages in that round. To avoid shutting down a protocol
completely, we require that in every round, there is at least one process that
sends all of its messages. We also allow a process to send a message to itself.
If we wish to solve agreement (that is, get agreement, termination, and
validity) in this model, what is the minimum number of rounds we need in
the worst case?
Solution
We need one round. Every process transmits its input to all processes,
including itself. From the all-or-nothing property, all processes receive the
same set of messages. From the assumption that some process is not faulty
in this round, this set is nonempty. So the processes can reach agreement by
applying any consistent rule to choose an input from the set.
Solution
The consensus number is 1.
Proof: We can implement it from atomic snapshot, which can be imple-
mented from atomic registers, which have consensus number 1.
For my first write(v) operation, write v to my component of the snapshot;
for subsequent write(v) operations, write fail. For a read operation, take
a snapshot and return (a) ⊥ if all components are empty; (b) v if exactly
one component is non-empty and has value v; and (c) fail if more than one
component is non-empty or any component contains fail.
Appendix C
Sample assignments from Spring 2016
1. Your name.
3. Whether you are taking the course as CPSC 465 or CPSC 565.
(You will not be graded on the bureaucratic part, but you should do it
anyway.)
where they have a cookie but child i + 1 does not, child i gives one cookie to
child i + 1. If child i + 1 already has a cookie, or child i has none, nothing
happens. We assume that a fairness condition guarantees that even though
some children are fast, and some are slow, each of them takes a step infinitely
often.
1. Show that after some finite number of steps, every child has exactly
one cookie.
Solution
1. First observe that in any configuration reachable from the initial con-
figuration, child 0 has k cookies, n − k of the remaining children have
one cookie each, and the rest have zero cookies. Proof: Suppose we
are in a configuration with this property, and consider some possible
step that changes the configuration. Let i be the child that takes the
step. If i = 0, then child i goes from k to k − 1 cookies, and child 1
goes from 0 to 1 cookies, increasing the number of children with one
cookie to n − k + 1. If i > 0, then child i goes from 1 to 0 cookies and
child i + 1 from 0 to 1 cookies, with k unchanged. In either case, the
invariant is preserved.
Now let us show that k must eventually drop as long as some cookie-less
child remains. Let i be the smallest index such that the i-th child has
no cookie. Then after finitely many steps, child i − 1 takes a step and
gives child i a cookie. If i − 1 = 0, k drops. If i − 1 > 0, then the
leftmost 0 moves one place to the left. It can do so only finitely many
times until i = 1 and k drops the next time child 0 takes a step. It
follows that after finitely many steps, k = 1, and by the invariant all
n − 1 remaining children also have one cookie each.
cookie i, let x_i^t be the position of the i-th cookie after t asynchronous
rounds, where an asynchronous round is the shortest interval in which
each child takes at least one step.
Observe that no child j > 0 ever gets more than one cookie, since no
step adds a cookie to a child that already has one. It follows that cookie
0 never moves, because if child 0 has one cookie, so does everybody
else (including child 1). We can thus ignore the fact that the children
are in a cycle and treat them as being in a line 0 . . . n − 1.
We will show by induction on t that, for all i and t, x_i^t ≥ y_i^t =
max(0, min(i, z_i^t)) where z_i^t = t + 2(i − n + 1).
Proof: The base case is when t = 0. Here x_i^t = 0 for all i. We also have
z_i^t = 2(i − n + 1) ≤ 0 so y_i^t = max(0, min(i, z_i^t)) = max(0, z_i^t) = 0. So
the induction hypothesis holds with x_i^t = y_i^t = 0.
Now suppose that the induction hypothesis holds for t. For each i,
there are several cases to consider:
(a) x_i^t = x_{i+1}^t = 0. In this case cookie i will not move, because it's not
at the top of child 0's stack. But from the induction hypothesis
we have that x_{i+1}^t = 0 implies z_{i+1}^t = t + 2(i + 1 − n + 1) ≤ 0,
which gives z_i^t = z_{i+1}^t − 2 ≤ −2. So z_i^{t+1} = z_i^t + 1 ≤ −1 and
y_i^{t+1} = 0, and the induction hypothesis holds for x_i^{t+1}.
(b) x_i^t = i. Then even if cookie i doesn't move (and it doesn't), we
have x_i^{t+1} ≥ x_i^t ≥ min(i, z_i^t).
(c) x_i^t < i and x_{i+1}^t = x_i^t + 1. Again, even if cookie i doesn't move, we
still have x_i^{t+1} ≥ x_i^t = x_{i+1}^t − 1 ≥ y_{i+1}^t − 1 ≥ t + 2(i + 1 − n + 1) − 1 =
t + 2(i − n + 1) + 1 ≥ y_i^{t+1}.
(d) x_i^t < i and x_{i+1}^t > x_i^t + 1. Nothing is blocking cookie i, so it moves:
x_i^{t+1} = x_i^t + 1 ≥ t + 2(i − n + 1) + 1 = (t + 1) + 2(i − n + 1) = y_i^{t+1}.
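As a quick check on part 1, here is a small simulation of the cookie-passing rule under a random schedule (a sketch; the schedule and instance size are arbitrary).

import random

def distribute_cookies(n, seed=0):
    """Children 0..n-1 in a cycle; child 0 starts with all n cookies.
    A step by child i passes one cookie to child i+1 if child i has a
    cookie and child i+1 has none.  Returns the number of steps until
    every child has exactly one cookie."""
    rng = random.Random(seed)
    cookies = [n] + [0] * (n - 1)
    steps = 0
    while any(c != 1 for c in cookies):
        i = rng.randrange(n)                 # random (probabilistically fair) schedule
        j = (i + 1) % n
        if cookies[i] >= 1 and cookies[j] == 0:
            cookies[i] -= 1
            cookies[j] += 1
        steps += 1
    return steps

if __name__ == "__main__":
    print(distribute_cookies(20))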
C.1.2 Eccentricity
Given a graph G = (V, E), the eccentricity ε(v) of a vertex v is the
maximum distance max_{v′} d(v, v′) from v to any vertex in the graph.
Suppose that you have an anonymous1 asynchronous message-passing
system with no failures whose network forms a tree.
1. Give an algorithm that allows each node in the network to compute its
eccentricity.
Solution
1. Pseudocode is given in Algorithm C.1. For each edge vu, the algorithm
sends a message d from v to u, where d is the maximum length of
any simple path starting with uv. This can be computed as soon as v
knows the maximum distances from all of its other neighbors u0 6= u.
1 initially do
2     notify()
3 upon receiving d from u do
4     d[u] ← d
5     notify()
6 procedure notify()
7     foreach neighbor u do
8         if ¬notified[u] and d[u′] ≠ ⊥ for all u′ ≠ u then
9             Send 1 + max_{u′≠u} d[u′] to u
10            notified[u] ← true
algorithm computes the correct values, we will prove the invariant that
d_v[u] ∈ {⊥, ℓ_v[u]} always, and for any message d in transit from u to v,
d = ℓ_v[u].
In the initial configuration, d_v[u] = ⊥ for all v and u, and there are no
messages in transit. So the invariant holds.
Now let us show that calling notify at some process v preserves the
invariant. Because notify() does not change d_v, we need only show
that the messages it sends contain the correct distances.
Suppose notify() causes v to send a message d to u. Then d = 1 +
max_{u′≠u} d_v[u′] = 1 + max_{u′≠u} ℓ_v[u′], because d_v[u′] ≠ ⊥ for all neighbors
u′ ≠ u by the condition on the if statement and thus d_v[u′] = ℓ_v[u′] for
all u′ ≠ u by the invariant.
So the invariant will continue to hold in this case provided ℓ_u[v] =
1 + max_{u′≠u} ℓ_v[u′]. The longest simple path starting with uv either
consists of uv alone, or is of the form uvw . . . for some neighbor w of v
with w ≠ u. In the former case, v has no other neighbors u′, in which
case d = 1 + max_{u′≠u} ℓ_v[u′] = 1 + 0 = 1, the correct answer. In the
latter case, d = 1 + max_{u′≠u} ℓ_v[u′] = 1 + ℓ_v[w], again the length of the
longest path starting with uv.
This shows that notify preserves the invariant. We must also show
that assigning d_v[u] ← d upon receiving d from u does so. But in this
case we know from the invariant that d = ℓ_v[u], so assigning this value
to d_v[u] leaves d_v[u] ∈ {⊥, ℓ_v[u]} as required.
3. First let’s observe that at most one message is sent in each direction
across each edge, for a total of 2|E| = 2(n − 1) messages. This is
optimal, because if in some execution we do not send a message across
some edge uv, then we can replace the subtree rooted at u with an
arbitrarily deep path, and obtain an execution indistinguishable to v
in which its eccentricity is different from whatever it computed.
For time complexity (and completion!) we'll argue by induction on
ℓ_v[u] that we send a message across uv by time ℓ_v[u] − 1.
If ℓ_v[u] = 1, then u is a leaf; as soon as notify is called in its initial
computation event (which we take as occurring at time 0), u notices it
has no neighbors other than v and sends a message to v.
If ℓ_v[u] > 1, then since ℓ_v[u] = 1 + max_{v′≠v} ℓ_u[v′], we have ℓ_u[v′] ≤
ℓ_v[u] − 1 for all neighbors v′ ≠ v of u, which by the induction hypothesis
means that each such neighbor v′ sends a message to u no later than
time ℓ_v[u] − 2. These messages all arrive at u no later than time
ℓ_v[u] − 1; when the last one is delivered, u sends a message to v.
It follows that the last time a message is sent is no later than time
max_{uv}(ℓ_v[u] − 1), and so the last delivery event occurs no later than
time max_{uv} ℓ_v[u]. This is just the diameter D of the tree, giving a
worst-case time complexity of exactly D.
To show that this is optimal, consider an execution of some hypothetical
algorithm that terminates by time D − 1 in the worst case. Let u and
v be nodes such that d(u, v) = D. Then there is an execution of this
algorithm in which no chain of messages passes from u to v, meaning that
no event of u is causally related to any event of v. So we can replace
u with a pair uw of adjacent nodes with d(w, v) = d(u, v) + 1, which
changes ε(v) but leaves an execution that is indistinguishable to v
from the original. It follows that v returns an incorrect value in some
executions, and this hypothetical algorithm is not correct. So time
complexity D is the best possible in the worst case.
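For concreteness, here is a Python rendering of Algorithm C.1 (a sketch of mine, simulating message delivery with a FIFO queue, which is one admissible schedule among many).

from collections import deque

def eccentricities(adj):
    """Simulate Algorithm C.1 on a tree given as {node: set of neighbors}.

    d[v][u] records the value received by v from u, i.e. the length of the
    longest simple path leaving v through u; a node sends on an edge as
    soon as it has heard from all of its other neighbors."""
    d = {v: {} for v in adj}
    notified = {v: set() for v in adj}
    queue = deque()

    def notify(v):
        for u in adj[v]:
            others = adj[v] - {u}
            if u not in notified[v] and others.issubset(d[v]):
                dist = 1 + max((d[v][w] for w in others), default=0)
                queue.append((v, u, dist))      # send dist from v to u
                notified[v].add(u)

    for v in adj:                               # initial computation events
        notify(v)
    while queue:
        v, u, dist = queue.popleft()            # deliver one message
        d[u][v] = dist
        notify(u)
    return {v: max(d[v].values()) for v in adj}

if __name__ == "__main__":
    tree = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}   # the path 1 - 0 - 2 - 3
    print(eccentricities(tree))                     # {0: 2, 1: 3, 2: 2, 3: 3}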
Solution
For sufficiency, ignore the extra edges and use Hirschberg-Sinclair [HS80]
(see §5.2.2).
For necessity, we’ll show that an algorithm that solves leader election in
this system using at most T (n) messages can be modified to solve leader
election in a standard ring without the extra edges using at most 3T (n)
messages. The idea is that whenever a process i attempts to send to i + 3,
we replace the message with a sequence of three messages relayed from i
to i + 1, i + 2, and then i + 3, and similarly for messages sent in the other
direction. Otherwise the original algorithm is unmodified. Because both
systems are asynchronous, any admissible execution in the simulated system
has a corresponding admissible execution in the simulating system (replace
each delivery event by three delivery events in a row for the relay messages)
and vice versa (remove the initial two relay delivery events for each message
and replace the third delivery event with a direct delivery event). So in
particular if there exists an execution in the simulating system that requires
Ω(n log n) messages, then there is a corresponding execution in the simulated
system that requires at least Ω(n log n/3) = Ω(n log n) messages as well.
1 procedure write(A, v)
2     atomically do
3         A[r] ← v; r ← (r + 1) mod n
4 procedure read(A)
5     return A[i]
Algorithm C.2: Rotor array: code for process i
Solution
First let’s show that it is at least 2, by exhibiting an algorithm that uses
a single rotor array plus two atomic registers to solve 2-process wait-free
consensus.
1 procedure consensus(v)
2     input[i] ← v
3     write(A, i)
4     i′ ← read(A)
5     if i′ = i then
          // Process 0 wrote first
6         return input[0]
7     else
          // Process 1 wrote first
8         return input[1]
The algorithm is given as Algorithm C.3. Each process i first writes its
input value to a single-writer register input[i]. The process then writes its id
to the rotor array. There are two cases:
1. If process 0 writes first, then process 0 reads 0 and process 1 reads
1. Thus both processes see i′ = i and return input[0], which gives
agreement, and validity because input[0] is then equal to 0's input.
2. If process 1 writes first, then process 0 reads 1 and process 1 reads
either 0 (if 0 wrote quickly enough) or ⊥ (if it didn't). In either case,
both processes see i′ ≠ i and return input[1].
Now let us show that a rotor array can’t be used to solve wait-free
consensus with three processes. We will do the usual bivalence argument,
Solution
1. It’s probably possible to do this with some variant of ABD, but getting
linearizability when there are multiple concurrent insert operations
will be tricky.
Instead, we’ll observe that it is straightforward to implement a set
register using a shared-memory snapshot: each process writes to A[i]
the set of all values it has ever inserted, and a read consists of taking
a snapshot and then taking the union of the values. Because we can
implement snapshots using atomic registers, and we can implement
atomic registers in a message-passing system with f < n/2 crash failures
using ABD, we can implement this construction in a message-passing
system with f < n/2 failures.
2. This we can’t do. The problem is that an ordered set register can solve
agreement: each process inserts its input, and the first input wins. But
FLP says we can’t solve agreement in an asynchronous message-passing
system with one crash failure.
Solution
We can solve agreement using the k-bounded failure detector for n ≥ 2
processes if and only if f ≤ k and f < n/2.
Proof:
If k ≥ f , then every faulty process is eventually permanently suspected,
and the k-bounded failure detector is equivalent to the ♦S failure detector.
The Chandra-Toueg protocol [CT96] then solves consensus for us provided
f < n/2.
1 procedure fetchAndMax(r, 0 : x)
2 if switch = 0 then
3 return 0 : fetchAndMax(left, x)
4 else
5 return 1 : fetchAndMax(right, 0)
6 procedure fetchAndMax(r, 1 : x)
7 v ← fetchAndMax(right, x)
8 if TAS(switch) = 0 then
9 return 0 : fetchAndMax(left, 0)
10 else
11 return 1 : v
Algorithm C.4 replaces the switch bit in the max register implementation
from Algorithm 21.2 with a test-and-set, and adds some extra machinery to
Solution
Here is a bad execution (there are others). Let k = 1, and let π1 do
fetchAndMax(01) and π2 do fetchAndMax(10). Run these operations concur-
rently as follows:
C.3.2 Median
Define a median register as an object r with two operations addSample(r, v),
where v is any integer, and computeMedian(r). The addSample operation
adds a sample to the multiset M of integers stored in the register, which
is initially empty. The computeMedian operation returns a median of this
multiset, defined as a value x with the property that (a) x is in the multiset;
(b) at least |M |/2 values v in the multiset are less than or equal to x; (c) at
least |M |/2 values v in the multiset are greater than or equal to x.
For example, if we add the samples 1, 1, 3, 5, 5, 6, in any order, then a
subsequent computeMedian can return either 3 or 5.
Solution
For the upper bound, we can do it with O(n) registers using any linear-space
snapshot algorithm (for example, Afek et al. [AAD+ 93]). Each process stores
in its own segment of the snapshot object the multiset of all samples added
by that process; addSample just adds a new sample to the process’s segment.
For computeMedian, take a snapshot, then take the union of all the multisets,
then compute the median of this union. Linearizability and wait-freedom
of both operations are immediate from the corresponding properties of the
snapshot object.
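A sequential Python sketch of this construction follows; it is not from the notes, and the per-process list of lists stands in for a real atomic snapshot object (an assumption for illustration, so the linearizability argument above does not literally apply to it).

class MedianRegister:
    def __init__(self, n):
        self.segments = [[] for _ in range(n)]        # process i's segment of the snapshot

    def add_sample(self, i, v):
        self.segments[i].append(v)                    # update only my own segment

    def compute_median(self):
        snap = [list(seg) for seg in self.segments]   # "snapshot", then union the multisets
        union = sorted(x for seg in snap for x in seg)
        if not union:
            return None
        m = len(union)
        # the lower middle element always has at least m/2 values <= it and >= it
        return union[(m - 1) // 2]

r = MedianRegister(n=2)
for proc, v in [(0, 1), (1, 1), (0, 3), (1, 5), (0, 5), (1, 6)]:
    r.add_sample(proc, v)
print(r.compute_median())    # 3, one of the two valid medians of {1, 1, 3, 5, 5, 6}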
For the lower bound, use JTT [JTT00]. Observe that both atomic
registers and resettable test-and-sets are historyless: for both types, the new
state after an operation doesn’t depend on the old state. So JTT applies if
we can show that the median register is perturbable.
Suppose that we have a schedule Λk Σk π in which Λk consists of an
arbitrary number of median-register operations of which at most k are
incomplete, Σk consists of k pending base object operations (writes, test-and-
sets, or test-and-set resets) covering k distinct base objects, and π is a read
operation by a process not represented in Λk Σk . We need to find a sequence
of operations γ that can be inserted between Λk and Σk that changes the
outcome of π.
Let S be the multiset of all values appearing as arguments to addSample
operations that start in Λk or Σk . Let x = max S (or 0 if S is empty), and let
γ consist of |S| + 1 addSample(r, x + 1) operations. Write T for the multiset
of |S| + 1 copies of x + 1. Then in any linearization of Λk γΣk π, the multiset U of samples contained in r when π executes includes all of T and at most all of S ∪ T; this means that a majority of values in U are equal to x + 1,
and so the median is x + 1. But x + 1 does not appear in S, so π can’t
return it in Λk Σk π. It follows that a median register is in fact perturbable,
and JTT applies, which means that we need at least Ω(n) base objects to
implement a median register.
1 procedure TAS(i)
2 myPosition ← 0
3 while true do
4 otherPosition ← read(a¬i )
5 x ← myPosition − otherPosition
6 if x ≡ 2 (mod m) then
7 return 0
8 else if x ≡ −1 (mod m) do
9 return 1
10 else if fair coin comes up heads do
11 myPosition ← (myPosition + 1) mod m
12 write(ai , myPosition)
1. An oblivious adversary?
2. An adaptive adversary?
Solution
For the oblivious adversary, we can quickly rule out m < 5, by showing that
there is an execution in each case where both processes return 0:
for any fixed k, because the coin-flips are uncorrelated with the oblivious
adversary’s choice of which process is fast). Then for k sufficiently large, the
fast process eventually sees a0 − a1 congruent to either 2 or −1 and returns.
Since this event occurs with independent nonzero probability in each interval
of length 2k, eventually it occurs.2
Once one process has terminated, the other increments myPosition in-
finitely often, so it too eventually sees a gap of 2 or −1.
For the adaptive adversary, the adversary can prevent the algorithm from
terminating. Starting from a state in which both processes are about to
read and a0 = a1 = k, run p0 until it is about to write (k + 1) mod m to a0
(unlike the oblivious adversary, the adaptive adversary can see when this will
happen). Then run p1 until it is about to write (k + 1) mod m to a1 . Let
both writes go through. We are now in a state in which both processes are
about to read, and a0 = a1 = (k + 1) mod m. So we can repeat this strategy
forever.
audience, you can assume that your listeners know at least everything
that we’ve talked about so far in the class.
3. A description of where this result fits into the literature (e.g., solves
an open problem previously proposed in [...], improves on the previous
best running time for an algorithm from [...], gives a lower bound or
impossibility result for a problem previously proposed by [...], opens
up a new area of research for studying [...]), and why it is interesting
and/or hard.
You do not have to prepare slides for your presentation if you would
prefer to use the blackboard, but you should practice it in advance to make sure it fits in the allocated time. The instructor will be
happy to offer feedback on draft versions if available far enough before the
actual presentation date.
Relevant dates:
2016-04-22 Last date to send draft slides or arrange for a practice presen-
tation with the instructor if you want guaranteed feedback.
Solution
The consensus number of this object is 2.
For two processes, have each process i write its input to a standard
atomic register r[i], and then write its id to a shared second-to-last-value
register s. We will have whichever process writes to s first win. After writing,
process i can detect which process wrote first by reading s once, because
it either sees ⊥ (meaning the other process has not written yet) or it sees
the identity of the process that wrote first. In either case it can return the
winning process’s input.
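Here is a sequential Python sketch of the two-process protocol (not from the notes); the SecondToLast class and the write-then-read driver are assumptions for illustration, and only the interleavings where both writes precede both reads are shown.

class SecondToLast:
    def __init__(self):
        self.history = [None]            # None stands for the initial value (bottom)
    def write(self, v):
        self.history.append(v)
    def read(self):
        return self.history[-2]          # the value written before the most recent write

def run(write_order, inputs):
    s, inp, decisions = SecondToLast(), dict(inputs), {}
    for i in write_order:                # each process writes its id to s; inp models r[i]
        s.write(i)
    for i in (0, 1):                     # then reads s once to learn who wrote first
        seen = s.read()
        winner = i if seen is None else seen
        decisions[i] = inp[winner]
    return decisions

print(run([0, 1], {0: "a", 1: "b"}))     # both decide "a"
print(run([1, 0], {0: "a", 1: "b"}))     # both decide "b"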
For three processes, the usual argument gets us to a configuration C
where all three processes are about to execute operations x, y, and z on
the same object, where each operation moves from a bivalent to a univalent
state. Because we know that this object can’t be a standard atomic register,
it must be a second-to-last register. We can also argue that all of x, y, and
z are writes, because if one of them is not, the processes that don’t perform
it can’t tell if it happened or not.
Suppose that Cx is 0-valent and Cy is 1-valent. Then Cxyz is 0-valent
and Cyz is 1-valent. But these configurations are indistinguishable to any process other than the one executing x. It follows that the second-to-last register can't solve consensus
for three processes.
Solution
Here is an algorithm.
If there are two processes p and q with the same id that are adjacent to
each other, they can detect this in the initial configuration, and transmit
this fact to all the other processes by flooding.
If these processes p and q are not adjacent, we will need some other
mechanism to detect them. Define the extended id of a process as its own id
followed by a list of the ids of its neighbors in some fixed order. Order the
extended ids lexicographically, so that a process with a smaller id also has a
smaller extended id.
Suppose now that p and q are not adjacent and have the same extended
id. Then they share the same neighbors, and each of these neighbors will see
that p and q have duplicate ids. So we can do an initial round of messages
where each process transmits its extended id to its neighbors, and if p and q
observe that their id is a duplicate, they can again notify all the processes to
return that there are two leaders by flooding.
The remaining case is that p and q have distinct extended ids, or that there is only one process with the minimum id. In either case we can run any standard
broadcast-based leader-election algorithm, using the extended ids, which will
leave us with a tree rooted at whichever process has the minimum extended
id. This process can then perform convergecast to detect if there is another
process with the same id, and perform broadcast to inform all processes of
this fact.
Solution
The implementation is correct.
If one process runs alone, it sets A[i][idi ] for each i, sees 0 in door, then
sees 0 in each location A[i][¬idi ] and wins. So we have property (a).
Now suppose that some process with id p wins in an execution that may
involve other processes. Then p writes A[i][pi ] for all i before observing 0 in
door, which means that it sets all these bits before any process writes 1 to
door. If some other process q also wins, then there is at least one position i
where pi = ¬qi , and q reads A[i][pi ] after writing 1 to door. But then q sees
1 in this location and loses, a contradiction.
shared data:
1 one-bit atomic registers A[i][j] for i = 0 . . . ⌈lg n⌉ − 1 and j ∈ {0, 1},
all initially 0
2 one-bit atomic register door, initially 0
3 procedure splitter(id)
4 for i ← 0 to k − 1 do
5 A[i][idi ] ← 1
6 if door = 1 then
7 return lose
8 door ← 1
9 for i ← 0 to k − 1 do
10 if A[i][¬idi ] = 1 then
11 return lose
12 return win
Algorithm C.6: Splitter using one-bit registers
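For concreteness, here is a sequential Python sketch of the splitter above (not from the notes); the global register arrays, the bit helper, and the sequential trace are assumptions for illustration and do not model true concurrency.

K = 4                                   # number of id bits, playing the role of ceil(lg n)
A = [[0, 0] for _ in range(K)]          # one-bit registers A[i][0..1], initially 0
door = [0]                              # one-bit register door, initially 0

def bit(pid, i):
    return (pid >> i) & 1

def splitter(pid):
    for i in range(K):
        A[i][bit(pid, i)] = 1           # advertise bit i of my id
    if door[0] == 1:
        return "lose"
    door[0] = 1
    for i in range(K):
        if A[i][1 - bit(pid, i)] == 1:  # some process with a differing id bit has also marked
            return "lose"
    return "win"

print(splitter(0b1010))                 # a process running alone wins
print(splitter(0b0110))                 # a later process sees door = 1 and loses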
Solution
Disproof by counterexample: Fix some f , and consider a graph with two
processes p0 and p1 connected by an edge. Let p0 start with 0 and p1 start
with 1. Then p0 ’s next state is f (0, 0, 1) = ¬f (1, 1, 0) ≠ f (1, 1, 0), which is
p1 ’s next state. So either p0 still has 0 and p1 still has 1, in which case we
never make progress; or they swap their bits, in which case we can apply the
same analysis with p0 and p1 reversed to show that they continue to swap
back and forth forever. In either case the system does not converge.
Appendix D
1. Your name.
(You will not be graded on the bureaucratic part, but you should do it
anyway.)
evil, and knows the identities of all of its neighbors. However, the processes
do not know the number of processes n or the diameter of the network D.
Give a protocol that allows every process to correctly return the number
of evil processes no later than time D. Your protocol should only return a
value once for each process (no converging to the correct answer after an
initial wrong guess).
Solution
There are a lot of ways to do this. Since the problem doesn’t ask about
message complexity, we’ll do it in a way that optimizes for algorithmic
simplicity.
At time 0, each process initiates a separate copy of the flooding algorithm
(Algorithm 3.1). The message ⟨p, N (p), e⟩ it distributes consists of its own
identity, the identities of all of its neighbors, and whether or not it is evil.
In addition to the data for the flooding protocol, each process tracks a
set I of all processes it has seen that initiated a protocol and a set N of all
processes that have been mentioned as neighbors. The initial values of these
sets for process p are {p} and N (p), the neighbors of p.
Upon receiving a message ⟨q, N (q), e⟩, a process adds q to I and N (q) to
N . As soon as I = N , the process returns a count of all processes for which
e = true.
Termination by D: Follows from the same analysis as flooding. Any
process at distance d from p has p ∈ I by time d, so I is complete by time D.
Correct answer: Observe that N = ⋃i∈I N (i) always. Suppose that there
is some process q that is not in I. Since the graph is connected, there is a
path from p to q. Let r be the last node in this path in I, and let s be the
following node. Then s ∈ N \ I and N ≠ I. By contraposition, if I = N
then I contains all nodes in the network, and so the count returned at this
time is correct.
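The following round-based Python simulation sketches this protocol on a small graph; the synchronous round loop and the data structures are assumptions made for illustration (the solution itself relies only on the flooding analysis above).

def count_evil(adj, evil):
    """adj: {p: set of neighbors}, evil: set of evil processes."""
    init = {p: (p, frozenset(adj[p]), p in evil) for p in adj}
    known = {p: {init[p]} for p in adj}            # flooded triples <q, N(q), e> seen so far
    answer, t = {}, 0
    while len(answer) < len(adj):
        t += 1
        # synchronous round: every process forwards everything it knows to its neighbors
        known = {p: known[p] | set().union(*(known[q] for q in adj[p]))
                 for p in adj}
        for p in adj:
            I = {q for (q, _, _) in known[p]}
            N = set().union(*(nbrs for (_, nbrs, _) in known[p]))
            if p not in answer and I == N:
                answer[p] = sum(1 for (_, _, e) in known[p] if e)
    return answer, t

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}       # a path with diameter 3
print(count_evil(adj, evil={1, 3}))                # every process returns 2, by time 3 = D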
its neighbors as its parent, and following the parent pointers always gives a
path of minimum total weight to the initiator.1
Give a protocol that solves this problem with reasonable time, message,
and bit complexity, and show that it works.
Solution
There’s an ambiguity in the definition of total weight: does it include the
weight of the initiator and/or the initial node in the path? But since these
values are the same for all paths to the initiator from a given process, they
don’t affect which is lightest.
If we don’t care about bit complexity, there is a trivial solution: Use an
existing BFS algorithm followed by convergecast to gather the entire structure
of the network at the initiator, run your favorite single-source shortest-path
algorithm there, then broadcast the results. This has time complexity O(D)
and message complexity O(DE) if we use the BFS algorithm from §4.3. But
the last couple of messages in the convergecast are going to be pretty big.
A solution by reduction: Suppose that we construct a new graph G′
where each weight-2 node u in G is replaced by a clique of nodes u1 , u2 , . . . uk ,
with each node in the clique attached to a different neighbor of u. We then
run any breadth-first search protocol of our choosing on G′, where each
weight-2 node simulates all members of the corresponding clique. Because
any path that passes through a clique picks up an extra edge, each path in
the breadth-first search tree has a length exactly equal to the sum of the
weights of the nodes other than its endpoints.
A complication is that if I am simulating k nodes, between them they
may have more than one parent pointer. So we define u.parent to be ui .parent
where ui is a node at minimum distance from the initiator in G0 . We also
re-route any incoming pointers to uj ≠ ui to point to ui instead. Because ui was chosen to have minimum distance, this never increases the length of any path, and the resulting modified tree is still a shortest-path tree.
Adding nodes blows up |E′|, but we don’t need to actually send messages
between different nodes ui represented by the same process. So if we use the
§4.3 algorithm again, we only send up to D messages per real edge, giving
O(D) time and O(DE) messages.
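A small Python sketch of the graph transformation used in the reduction appears below; the adjacency-list format, the (node, index) naming for clique copies, and the example graph are assumptions made for illustration.

def build_g_prime(adj, weight):
    """adj: {u: list of neighbors}, weight: {u: 1 or 2}. Returns the adjacency of G'."""
    def copies(u):
        # a weight-1 node keeps a single copy; a weight-2 node gets one copy per edge
        return [(u, 0)] if weight[u] == 1 else [(u, j) for j in range(len(adj[u]))]

    gp = {c: set() for u in adj for c in copies(u)}
    for u in adj:
        cs = copies(u)
        for a in cs:                      # the copies of a weight-2 node form a clique
            for b in cs:
                if a != b:
                    gp[a].add(b)
        for j, v in enumerate(adj[u]):    # attach copy j of u to the matching copy of v
            a = cs[j] if weight[u] == 2 else cs[0]
            b = (v, adj[v].index(u)) if weight[v] == 2 else (v, 0)
            gp[a].add(b)
            gp[b].add(a)
    return gp

# a 3-node path s - u - t where only u has weight 2
adj = {"s": ["u"], "u": ["s", "t"], "t": ["u"]}
print(sorted(build_g_prime(adj, {"s": 1, "u": 2, "t": 1})))
# four simulated nodes: s, t, and two clique copies of u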
If we don’t like reductions, we could also tweak one of our existing
algorithms. Gallager’s layered BFS (§4.2) is easily modified by changing the
1 Clarification added 2014-01-26: The actual number of hops is not relevant for the construction of the shortest-path tree. By shortest path, we mean path of minimum total weight.
Solution
The par solution for this is an Ω(√f) lower bound and an O(f) upper bound. I don’t know if it is easy to do better than this.
For the lower bound, observe that the adversary can simulate an ordinary
crash failure by jamming a process in every round starting in the round it
crashes in. This means that in an r-round protocol, we can simulate k crash
failures with kr jamming faults. From the Dolev-Strong lower bound [DS83]
(see also Chapter 9), we know that there is no r-round protocol with k = r crash failures, so there is no r-round protocol with r² jamming faults. This gives a lower bound of ⌊√f⌋ + 1 on the number of rounds needed to solve synchronous agreement with f jamming faults.3
2 Clarifications added 2014-02-10: We assume that processes don’t know that they are being jammed or which messages are lost (unless the recipient manages to tell them that a message was not delivered). As in the original model, we assume a complete network and that all processes have known identities.
3 Since Dolev-Strong only needs to crash one process per round, we don’t really need the full r jamming faults for processes that crash late. This could be used to improve the constant for this argument.
For the upper bound, have every process broadcast its input every round.
After f + 1 rounds, there is at least one round in which no process is jammed,
so every process learns all the inputs and can take, say, the majority value.
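A synchronous-round sketch of this upper bound is below. It is not from the notes; the jamming model (a jammed process loses all of its outgoing messages for that round) and the deterministic majority tie-break are assumptions made for illustration.

from collections import Counter

def agree_under_jamming(inputs, jam_schedule, f):
    """inputs: per-process inputs; jam_schedule: set of (round, process) pairs,
    at most f of them. Returns the common decision after f + 1 rounds."""
    n = len(inputs)
    known = [{i: inputs[i]} for i in range(n)]
    for r in range(f + 1):
        # every process broadcasts everything it knows, unless it is jammed this round
        outgoing = [dict(known[i]) if (r, i) not in jam_schedule else {}
                    for i in range(n)]
        for i in range(n):
            for j in range(n):
                known[i].update(outgoing[j])
    # some round was jam-free, so every process now knows every input
    def majority(values):
        c = Counter(values)
        return max(c, key=lambda v: (c[v], v))   # break ties deterministically
    decisions = [majority(k.values()) for k in known]
    assert len(set(decisions)) == 1
    return decisions[0]

print(agree_under_jamming([0, 1, 1], jam_schedule={(0, 1), (1, 2)}, f=2))   # 1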
Solution
The relevant bound here is the requirement that the network have enough
connectivity that the adversary can’t take over half of a vertex cut (see
§10.1.3). This is complicated slightly by the requirement that the faulty
nodes be contiguous.
The smallest vertex cut in a sufficiently large torus consists of the four
neighbors of a single node; however, these nodes are not connected. But we
can add a third node to connect two of them (see Figure D.1).
By adapting the usual lower bound we can use this construction to show
that f = 3 faults are enough to prevent agreement when m ≥ 3. The question
4 Problem modified 2014-02-03. In the original version, it asked to compute f for all m, but there are some nasty special cases when m is small.
Solution
We can tolerate f < n/2, but no more.
If f < n/2, the following algorithm works: Run Paxos, where each
process i waits to learn that it is non-faulty, then acts as a proposer for
proposal number i. The highest-numbered non-faulty process then carries
out a proposal round that succeeds because no higher proposal is ever issued,
and both the proposer (which is non-faulty) and a majority of accepters
participate.
If f ≥ n/2, partition the processes into two groups of size ⌊n/2⌋, with
any leftover process crashing immediately. Make all of the processes in both
groups non-faulty, and tell each of them this at the start of the protocol.
Now do the usual partitioning argument: Run group 0 with inputs 0 with
no messages delivered from group 1 until all processes decide 0 (we can do
this because the processes can’t distinguish this execution from one in which
the group 1 processes are in fact faulty). Run group 1 similarly until all
processes decide 1. We have then violated agreement, assuming we didn’t
previously violate termination or validity.
Solution
First observe that ♦S can simulate ♦Sk for any k by having n − k processes
ignore the output of their failure detectors. So we need f < n/2 by the usual
lower bound on ♦S.
If f ≥ k, we are also in trouble. The f > k case is easy: If there exists
a consensus protocol for f > k, then we can transform it into a consensus
protocol for n − k processes and f − k failures, with no failure detectors at all,
by pretending that there are an extra k processes with real failure detectors
that crash immediately. The FLP impossibility result rules this out.
If f = k, we have to be a little more careful. By immediately crashing
f − 1 processes with real failure detectors, we can reduce to the f = k = 1
case. Now the adversary runs the FLP strategy. If no processes crash, then
all n − k + 1 surviving processes report no failures; if it becomes necessary to
crash a process, this becomes the one remaining process with the real failure
detector. In either case the adversary successfully prevents consensus.
So let f < k. Then we have weak completeness, because every faulty
process is eventually permanently suspected by at least k − f > 0 processes.
We also have weak accuracy, because it is still the case that some process
is eventually permanently never suspected by anybody. By boosting weak
completeness to strong completeness as described in §13.2.3, we can turn
our failure detector into ♦S, meaning we can solve consensus precisely when
f < min(k, n/2).
Solution
No. We can adapt the lower bound on the session problem from §7.4.2 to
apply in this model.
Consider an execution of an algorithm for the session problem in which
each message is delivered exactly one time unit after it is sent. Divide
it as in the previous proof into a prefix β containing special actions and
a suffix δ containing no special actions. Divide β further into segments
Solution
This algorithm is basically implementing an array of ABD registers [ABND95],
but it omits the second phase on a read where any information the reader
learns is propagated to a majority. So we expect it to fail the same way ABD
would without this second round, by having two read operations return
values that are out of order with respect to their observable ordering.
Here is one execution that produces this bad outcome:
1 procedure inc
2 ci [i] ← ci [i] + 1
3 Send ci [i] to all processes.
4 Wait to receive ack(ci [i]) from a majority of processes.
5 upon receiving c from j do
6 ci [j] ← max(ci [j], c)
7 Send ack(c) to j.
8 procedure read
9 ri ← ri + 1
10 Send read(ri ) to all processes.
11 Wait to receive respond(ri , cj ) from a majority of processes j.
12 return Σk maxj cj [k]
13 upon receiving read(r) from j do
14 Send respond(r, ci ) to j
Algorithm D.1: Counter algorithm for Problem D.4.2.
Solution
It is not possible to implement this object using atomic registers.
Suppose that there were such an implementation. Algorithm D.2 im-
plements two-process consensus using two atomic registers and a single
concurrency detector, initialized to the state following enter1 .
will equal process 1’s value, because process 2’s read follows its call to
enter2 , which follows exit1 and thus process 1’s write to r1 .
2. Process 1 executes exit1 after process 2 executes enter2 . Now both
exit operations return 1, and so process 2 returns its own value while
process 1 returns the contents of r2 , which it reads after process 2
writes its value there.
In either case, both processes return the value of the first process to access
the concurrency detector, satisfying both agreement and validity. This would
give a consensus protocol for two processes implemented from atomic registers,
contradicting the impossibility result of Loui and Abu-Amara [LAA87].
Solution
If n = 2, then a two-writer sticky bit is equivalent to a sticky bit, so we can
solve consensus.
If n ≥ 3, suppose that we maneuver our processes as usual to a bivalent
configuration C with no bivalent successors. Then there are three pending
operations x, y, and z, that among them produce both 0-valent and 1-valent
configurations. Without loss of generality, suppose that Cx and Cy are both
0-valent and Cz is 1-valent. We now consider what operations these might
be.
Solution
The necessary part is easier, although we can’t use JTT (Chapter 20) di-
rectly because having write operations means that our rotate register is not
perturbable. Instead, we argue that if we initialize the register to 1, we
get a mod-m counter, where increment is implemented by RotateLeft and
read is implemented by taking the log of the actual value of the counter.
Letting m ≥ 2n gives the desired Ω(n) lower bound, since a mod-2n counter
is perturbable.
For sufficiency, we’ll show how to implement the rotate register using
snapshots. This is pretty much a standard application of known tech-
niques [AH90b, AM93], but it’s not a bad exercise to write it out.
Pseudocode for one possible solution is given in Algorithm D.3.
The register is implemented using a single snapshot array A. Each entry
in the snapshot array holds four values: a timestamp and process id indicating
which write the process’s most recent operations apply to, the initial write
value corresponding to this timestamp, and the number of rotate operations
this process has applied to this value. A write operation generates a new
timestamp, sets the written value to its input, and resets the rotate count
to 0. A rotate operation updates the timestamp and associated write value
to the most recent that the process sees, and adjusts the rotate count as
appropriate. A read operation combines all the rotate counts associated with
the most recent write to obtain the value of the simulated register.
1 procedure write(A, v)
2 s ← snapshot(A)
3 A[id] ← ⟨maxi s[i].timestamp + 1, id, v, 0⟩
4 procedure RotateLeft(A)
5 s ← snapshot(A)
6 Let i maximize ⟨s[i].timestamp, s[i].process⟩
7 if s[i].timestamp = A[id].timestamp and
s[i].process = A[id].process then
// Increment my rotation count
8 A[id].rotations ← A[id].rotations + 1
9 else
// Reset and increment my rotation count
10 A[id] ← ⟨s[i].timestamp, s[i].process, s[i].value, 1⟩
11 procedure read(A)
12 s ← snapshot(A)
13 Let i maximize ⟨s[i].timestamp, s[i].process⟩
14 Let r = Σ{j : s[j].timestamp = s[i].timestamp ∧ s[j].process = s[i].process} s[j].rotations
15 return s[i].value rotated r times.
Algorithm D.3: Implementation of a rotate register
Since each operation requires one snapshot and at most one update, the
cost is O(n) using the linear-time snapshot algorithm of Inoue et al. [IMCT94].
1 procedure TASi ()
2 while true do
3 with probability 1/2 do
4 ri ← ri + 1
5 else
6 ri ← ri
7 s ← r¬i
8 if s > ri then
9 return 1
10 else if s < ri − 1 do
11 return 0
1. Show that any return values of the protocol are consistent with a
linearizable, single-use test-and-set.
Solution
1. To show that this implements a linearizable test-and-set, we need to
show that exactly one process returns 0 and the other 1, and that if one
process finishes before the other starts, the first process to go returns
0.
Suppose that pi finishes before p¬i starts. Then pi reads only 0 from
r¬i , and cannot observe ri < r¬i : pi returns 0 in this case.
We now show that the two processes cannot return the same value.
Suppose that both processes terminate. Let i be such that pi reads r¬i
for the last time before p¬i reads ri for the last time. If pi returns 0,
then it observes ri ≥ r¬i + 2 at the time of its read; p¬i can increment
r¬i at most once before reading ri again, and so observed r¬i < ri and
returns 1.
Alternatively, if pi returns 1, it observed ri < r¬i . Since it performs no more increments on ri , p¬i also observes ri < r¬i in all subsequent reads, and so cannot also return 1.
2. Let’s run the protocol with an oblivious adversary, and track the value
of r0t − r1t over time, where rit is the value of ri after t writes (to either
register). Each write to r0 increases this value by 1/2 on average, with
a change of 0 or 1 equally likely, and each write to r1 decreases it by
1/2 on average.
To make things look symmetric, let ∆t be the change caused by the
t-th write and write ∆t as ct + X t where ct = ±1/2 is a constant
determined by whether p0 or p1 does the t-th write and X t = ±1/2 is
a random variable with expectation 0. Observe that the X t variables
are independent of each other and the constants ct (which depend only
on the schedule).
For the protocol to run forever, at every time t it must hold that |r0t − r1t| ≤ 3; otherwise, even after one or both processes does its next write, we will still have |r0t′ − r1t′| ≥ 2, and the next process to read will
terminate. But

r0t − r1t = Σ{s=1..t} ∆s = Σ{s=1..t} (cs + Xs) = Σ{s=1..t} cs + Σ{s=1..t} Xs.
Solution
It’s not possible.
Consider an execution with n = 3 processes, each with input 0. If the
protocol is correct, then after some finite number of rounds t, each process
returns 0. By symmetry, the processes all have the same states and send the
same messages throughout this execution.
Now consider a ring of size 2(t + 1) where every process has input 0,
except for one process p that has input 1. Let q be the process at maximum
distance from p. By induction on r, we can show that after r rounds of
communication, every process that is more than r + 1 hops away from p has
the same state as all of the processes in the 3-process execution above. So in
particular, after t rounds, process q (at distance t + 1) is in the same state
as it would be in the 3-process execution, and thus it returns 0. But—as it
learns to its horror, one round too late—the correct maximum is 1.
Solution
Test-and-sets are (a) historyless, and (b) have consensus number 2, so n is
at least 2.
To show that no historyless object can solve wait-free 3-process consensus,
consider an execution that starts in a bivalent configuration and runs to a
configuration C with two pending operations x and y such that Cx is 0-valent
and Cy is 1-valent. By the usual arguments x and y must both be operations
on the same object. If either of x and y is a read operation, then (0-valent)
Cxy and (1-valent) Cyx are indistinguishable to a third process pz if run
alone, because the object is left in the same state in both configurations;
whichever way pz decides, it will give a contradiction in an execution starting
with one of these configurations. If neither of x and y is a read, then x
overwrites y, and Cx is indistinguishable from Cyx to pz if pz runs alone;
again we get a contradiction.
Solution
Consider an execution in which the client orders ham. Run the northern
server together with the client until the server is about to issue a launch
action (if it never does so, the client receives no ham when the southern
server is faulty).
Now run the client together with the southern server. There are two
cases:
1. If the southern server ever issues launch, execute both this and the
northern server’s launch actions: the client gets two hams.
2. If the southern server never issues launch, never run the northern
server again: the client gets no hams.
In either case, the one-ham rule is violated, and the protocol is not
correct.5
5 It’s tempting to try to solve this problem by reduction from a known impossibility
result, like Two Generals or FLP. For these specific problems, direct reductions don’t
appear to work. Two Generals assumes message loss, but in this model, messages are not
lost. FLP needs any process to be able to fail, but in this model, the client never fails.
Indeed, we can solve consensus in the Hamazon model by just having the client transmit
its input to both servers.
1 procedure mutex()
2 predecessor ← swap(s, myId)
3 while r 6= predecessor do
4 try again
// Start of critical section
5 ...
// End of critical section
6 r ← myId
Algorithm D.5: Mutex using a swap object and register
Solution
Because processes use the same id if they try to access the mutex twice, the
algorithm doesn’t work.
Here’s an example of a bad execution:
2. Process 2 swaps 2 into s and gets 1, reads 1 from r, and enters the
critical section.
I believe this works if each process adopts a new id every time it calls
mutex, but the proof is a little tricky.6
6 The simplest proof I can come up with is to apply an invariant that says that (a)
the processes that have executed swap(s, myId) but have not yet left the while loop have
predecessor values that form a linked list, with the last pointer either equal to ⊥ (if no
process has yet entered the critical section) or the last process to enter the critical section;
(b) r is ⊥ if no process has yet left the critical section, or the last process to leave the
critical section otherwise; and (c) if there is a process that is in the critical section, its
predecessor field points to the last process to leave the critical section. Checking the effects
of each operation shows that this invariant is preserved through the execution, and (a)
combined with (c) show that we can’t have two processes in the critical section at the same
time. Additional work is still needed to show starvation-freedom. It’s a good thing this
algorithm doesn’t work as written.
Appendix E
1. Your name.
(You will not be graded on the bureaucratic part, but you should do it
anyway.)
Solution
Disproof: Consider two executions, one in an n × m torus and one in an
m × n torus where n > m and both n and m are at least 2.2 Using the same
argument as in Lemma 5.1.1, show by induction on the round number that,
for each round r, all processes in both executions have the same state. It
follows that if the processes correctly detect n > m in the n × m execution,
then they incorrectly report m > n in the m × n execution.
E.1.2 Clustering
Suppose that k of the nodes in an asynchronous message-passing network
are designated as cluster heads, and we want to have each node learn the
identity of the nearest head. Give the most efficient algorithm you can for
this problem, and compute its worst-case time and message complexities.
You may assume that processes have unique identifiers and that all
processes know how many neighbors they have.3
Solution
The simplest approach would be to run either of the efficient distributed
breadth-first search algorithms from Chapter 4 simultaneously starting at all
cluster heads, and have each process learn the distance to all cluster heads
at once and pick the nearest one. This gives O(D²) time and O(k(E + V D))
messages if we use layering and O(D) time and O(kDE) messages using
local synchronization.
We can get rid of the dependence on k in the local-synchronization
algorithm by running it almost unmodified, with the only difference being
the attachment of a cluster head id to the exactly messages. The simplest
way to show that the resulting algorithm works is to imagine coalescing
1 Clarification added 2011-09-28.
2 This last assumption is not strictly necessary, but it avoids having to worry about what it means when a process sends a message to itself.
3 Clarification added 2011-09-26.
all cluster heads into a single initiator; the clustering algorithm effectively
simulates the original algorithm running in this modified graph, and the
same proof goes through. The running time is still O(D) and the message
complexity O(DE).
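The label this protocol computes can be described centrally as a multi-source BFS from all cluster heads at once; here is a Python sketch of that computation (the graph format and the tie-breaking are assumptions for illustration, and it says nothing about the message-passing implementation itself).

from collections import deque

def nearest_heads(adj, heads):
    dist, head, queue = {}, {}, deque()
    for h in heads:                       # all cluster heads start at distance 0
        dist[h], head[h] = 0, h
        queue.append(h)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:             # first visit gives the shortest distance
                dist[v], head[v] = dist[u] + 1, head[u]
                queue.append(v)
    return head, dist

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(nearest_heads(adj, heads=[1, 5]))   # node 3 is equidistant; ties broken arbitrarily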
E.1.3 Negotiation
Two merchants A and B are colluding to fix the price of some valuable
commodity, by sending messages to each other for r rounds in a synchronous
message-passing system. To avoid the attention of antitrust regulators, the
merchants are transmitting their messages via carrier pigeons, which are
unreliable and may become lost. Each merchant has an initial price pA or
pB , which are integer values satisfying 0 ≤ p ≤ m for some known value
m, and their goal is to choose new prices p′A and p′B , where |p′A − p′B| ≤ 1. If pA = pB and no messages are lost, they want the stronger goal that p′A = p′B = pA = pB .
Prove the best lower bound you can on r, as a function of m, for all
protocols that achieve these goals.
Solution
This is a thinly-disguised version of the Two Generals Problem from Chap-
ter 8, with the agreement condition p′A = p′B replaced by an approximate agreement condition |p′A − p′B| ≤ 1. We can use a proof based on the
indistinguishability argument in §8.2 to show that r ≥ m/2.
Fix r, and suppose that in a failure-free execution both processes send
messages in all rounds (we can easily modify an algorithm that does not
have this property to have it, without increasing r). We will start with a
sequence of executions with pA = pB = 0. Let X0 be the execution in which
no messages are lost, X1 the execution in which A’s last message is lost,
X2 the execution in which both A and B’s last messages are lost, and so
on, with Xk for 0 ≤ k ≤ 2r losing k messages split evenly between the two
processes, breaking ties in favor of losing messages from A.
When i is even, Xi is indistinguishable from Xi+1 by A; it follows that p′A is the same in both executions. Because we no longer have agreement, it may be that p′B (Xi ) and p′B (Xi+1 ) are not the same as p′A in either execution; but since both are within 1 of p′A , the difference between them is at most 2. Next, because Xi+1 and Xi+2 are indistinguishable to B, we have p′B (Xi+1 ) = p′B (Xi+2 ), which we can combine with the previous claim to get |p′B (Xi ) − p′B (Xi+2 )| ≤ 2. A simple induction then gives p′B (X2r ) ≤ 2r, where
Suppose that we augment the system so that senders are notified imme-
diately when their messages are delivered. We can model this by making the
delivery of a single message an event that updates the state of both sender
and recipient, both of which may send additional messages in response. Let
us suppose that this includes attempted deliveries to faulty processes, so that
any non-faulty process that sends a message m is eventually notified that m
has been delivered (although it might not have any effect on the recipient if
the recipient has already crashed).
1. Show that this system can solve consensus with one faulty process
when n = 2.
2. Show that this system cannot solve consensus with two faulty processes
when n = 3.
Solution
1. To solve consensus, each process sends its input to the other. Whichever
input is delivered first becomes the output value for both processes.
2. To show impossibility with n = 3 and two faults, run the usual FLP proof until we get to a configuration C with events e′ and e such that Ce is 0-valent and Ce′e is 1-valent (or vice versa). Observe that e and e′ may involve two processes each (sender and receiver), for up to four processes total, but only a process that is involved in both e and e′ can tell which happened first. There can be at most two such processes. Kill both, and get that Ce′e is indistinguishable from Cee′ for the remaining process, giving the usual contradiction.
Solution
There is an easy reduction to FLP that shows f ≤ n/2 is necessary (when n is even), and a harder reduction that shows f < 2√n − 1 is necessary. The
easy reduction is based on crashing every other process; now no surviving
process can suspect any other survivor, and we are back in an asynchronous
message-passing system with no failure detector and 1 remaining failure (if
f is at least n/2 + 1).
The harder reduction is to crash every (√n)-th process. This partitions the ring into √n segments of length √n − 1 each, where there is no failure
detector in any segment that suspects any process in another segment. If an
algorithm exists that solves consensus in this situation, then it does so even
if (a) all processes in each segment have the same input, (b) if any process in one segment crashes, all √n − 1 processes in the segment crash, and (c) if any
process in a segment takes a step, all take a step, in some fixed order. Under
these additional conditions, each segment can be simulated by a single process in an asynchronous system with no failure detectors, and the extra √n − 1 failures in 2√n − 1 correspond to one failure in the simulation. But we can’t
solve consensus in the simulating system (by FLP), so we can’t solve it in
the original system either.
On the other side, let’s first boost completeness of the failure detector, by having any process that suspects another transmit this suspicion by reliable broadcast. So now if any non-faulty process i suspects i + 1, all the non-faulty processes will suspect i + 1. Now with up to t failures, whenever I learn that process i is faulty (through a broadcast message passing on the suspicion of the underlying failure detector), I will suspect processes i + 1 through i + t − f as well, where f is the number of failures I have heard
about directly. I don’t need to suspect process i + t − f + 1 (unless there is
some intermediate process that has also failed), because the only way that
this process will not be suspected eventually is if every process in the range
i to i + t − f is faulty, which can’t happen given the bound t.
Now if t is small enough that I can’t cover the entire ring with these
segments, then there is some non-faulty processes that is far enough away
from the nearest preceding faulty process that it is never suspected: this gives
us an eventually strong failure detector, and we can solve consensus using the
standard Chandra-Toueg ♦S algorithm from §13.4 or [CT96]. The inequality
I am looking for is f (t − f ) < n, where the left-hand side is maximized by setting f = t/2, which gives t²/4 < n or t < 2√n. This leaves a gap of about √2 between the upper and lower bounds; I don’t know which one can be improved.
I am indebted to Hao Pan for suggesting the Θ(√n) upper and lower bounds, which corrected an error in my original draft solution to this problem.
Termination If at some time an odd number of sensors are active, and from
that point on no sensor changes its state, then some process eventually
sets off an alarm.
For what values of n is it possible to construct such a protocol?
Solution
It is feasible to solve the problem for n < 3.
For n = 1, the unique process sets off its alarm as soon as its sensor
becomes active.
For n = 2, have each process send a message to the other containing
its sensor state whenever the sensor state changes. Let s1 and s2 be the
state of the two process’s sensors, with 0 representing inactive and 1 active,
and let pi set off its alarm if it receives a message s such that s ⊕ si = 1.
This satisfies termination, because if we reach a configuration with an odd
number of active sensors, the last sensor to change causes a message to be
sent to the other process that will cause it to set off its alarm. It satisfies
no-false-positives, because if pi sets off its alarm, then s¬i = s because at
most one time unit has elapsed since p¬i sent s; it follows that s¬i ⊕ si = 1
and an odd number of sensors are active.
No such protocol is possible for n ≥ 3. Make p1 ’s sensor active. Run the
protocol until some process pi is about to enter an alarm state (this occurs
• enq(Q) always pushes the identity of the current process onto the tail
of the queue.
• deq(Q) tests if the queue is nonempty and its head is equal to the
identity of the current process. If so, it pops the head and returns
true. If not, it does nothing and returns false.
The rationale for these restrictions is that this is the minimal version of
a queue needed to implement a starvation-free mutex using Algorithm 17.2.
What is the consensus number of this object?
Solution
The restricted queue has consensus number 1.
Suppose we have 2 processes, and consider all pairs of operations on Q
that might get us out of a bivalent configuration C. Let x be an operation
carried out by p that leads to a b-valent state, and y an operation by q that
leads to a (¬b)-valent state. There are three cases:
• One enq and one deq operation. Suppose x is an enq and y a deq. If
Q is empty or the head is not q, then y is a no-op: p can’t distinguish
Cx from Cyx. If the head is q, then x and y commute. The same holds
in reverse if x is a deq and y an enq.
• Two enq operations. This is a little tricky, because Cxy and Cyx are
different states. However, if Q is nonempty in C, whichever process
isn’t at the head of Q can’t distinguish them, because any deq operation
returns false and never reaches the newly-enqueued values. This leaves
the case where Q is empty in C. Run p until it is poised to do x′ = deq(Q) (if this never happens, p can’t distinguish Cxy from Cyx); then run q until it is poised to do y′ = deq(Q) as well (same argument as for p). Now allow both deq operations to proceed in whichever order causes them both to succeed. Since the processes can’t tell which deq happened first, they can’t tell which enq happened first either. Slightly more formally, if we let α be the sequence of operations leading up to the two deq operations, we’ve just shown Cxyαx′y′ is indistinguishable from Cyxαy′x′ to both processes.
In all cases, we find that we can’t escape bivalence. It follows that Q can’t
solve 2-process consensus.
Solution
We’ll use a snapshot object a to control access to an infinite array f
of fetch-and-increments, where each time somebody writes to the imple-
mented object, we switch to a new fetch-and-increment. Each cell in a
holds (timestamp, base), where base is the starting value of the simulated
fetch-and-increment. We’ll also use an extra fetch-and-increment T to hand
out timestamps.
Code is in Algorithm E.1.
Since this is all straight-line code, it’s trivially wait-free.
Proof of linearizability is by grouping all operations by timestamp, us-
ing s[i].timestamp for FetchAndIncrement operations and t for write op-
erations, then putting write before FetchAndIncrement, then ordering
FetchAndIncrement by return value. Each group will consist of a write(v)
for some v followed by zero or more FetchAndIncrement operations, which
will return increasing values starting at v since they are just returning values
1 procedure FetchAndIncrement()
2 s ← snapshot(a)
3 i ← arg maxi (s[i].timestamp)
4 return f [s[i].timestamp] + s[i].base
5 procedure write(v)
6 t ← FetchAndIncrement(T )
7 a[myId] ← (t, v)
Algorithm E.1: Resettable fetch-and-increment
Solution
Let b be the box object. Represent b by a snapshot object a, where a[i] holds
a pair (∆wi , ∆hi ) representing the number of times process i has executed
IncWidth and IncHeight; these operations simply increment the appropriate
value and update the snapshot object. Let GetArea take a snapshot and return (Σi ∆wi )(Σi ∆hi ); the cost of the snapshot is O(n).
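A sequential Python sketch of this construction is below (not from the notes); the plain list copy stands in for the atomic snapshot, which is an assumption for illustration only.

class Box:
    def __init__(self, n):
        self.a = [(0, 0)] * n                    # (delta-width, delta-height) per process
    def inc_width(self, i):
        dw, dh = self.a[i]
        self.a[i] = (dw + 1, dh)
    def inc_height(self, i):
        dw, dh = self.a[i]
        self.a[i] = (dw, dh + 1)
    def get_area(self):
        snap = list(self.a)                      # "snapshot" of all segments
        return sum(dw for dw, _ in snap) * sum(dh for _, dh in snap)

b = Box(n=3)
b.inc_width(0); b.inc_width(1); b.inc_height(2)
print(b.get_area())                              # width 2, height 1, so area 2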
To see that this is optimal, observe that we can use IncWidth and GetArea
to represent inc and read for a standard counter. The Jayanti-Tan-Toueg
bound applies to counters, giving a worst-case cost of Ω(n) for GetArea.
Solution
The consensus number is ∞; a single lockable register solves consensus for
any number of processes. Code is in Algorithm E.2.
1 write(r, input)
2 lock(r)
3 return read(r)
Algorithm E.2: Consensus using a lockable register
Termination and validity are trivial. Agreement follows from the fact
that whatever value is in r when lock(r) is first called will never change,
and thus will be read and returned by all processes.
Solution
It is possible to solve the problem for all n except n = 3. For n = 1, there are
no non-faulty processes, so the specification is satisfied trivially. For n = 2,
there is only one non-faulty process: it can just keep its own counter and
return an increasing sequence of timestamps without talking to the other
process at all.
For n = 3, it is not possible. Consider an execution in which messages
between non-faulty processes p and q are delayed indefinitely. If the Byzantine
process r acts to each of p and q as it would if the other had crashed, this
execution is indistinguishable to p and q from an execution in which r is
correct and the other is faulty. Since there is no communication between
p and q, it is easy to construct an execution in which the specification is
violated.
For n ≥ 4, the protocol given in Algorithm E.3 works.
The idea is similar to the Attiya, Bar-Noy, Dolev distributed shared
memory algorithm [ABND95]. A process that needs a timestamp polls n − 1
other processes for the maximum values they’ve seen and adds 1 to it; before
returning, it sends the new timestamp to all other processes and waits to
receive n − 1 acknowledgments. The Byzantine process may choose not to
answer, but this is not enough to block completion of the protocol.
To show the timestamps are increasing, observe that after the completion
of any call by i to getTimestamp, at least n − 2 non-faulty processes j have
a value vj ≥ vi . Any call to getTimestamp that starts later sees at least
n − 3 > 0 of these values, and so computes a max that is at least as big as
1 procedure getTimestamp()
2 ci ← ci + 1
3 send probe(ci ) to all processes
4 wait to receive response(ci , vj ) from n − 1 processes
5 vi ← (maxj vj ) + 1
6 send newTimestamp(ci , vi ) to all processes
7 wait to receive ack(ci ) from n − 1 processes
8 return vi
Solution
Yes. With f < n/2 and ♦S, we can solve consensus using Chandra-
Toueg [CT96]. Since this gives a unique decision value, it solves k-set
agreement for any k ≥ 1.
Solution
Algorithm E.4 implements a counter from a set object, where the counter
read consists of a single call to size(S). The idea is that each increment is
implemented by inserting a new element into S, so |S| is always equal to the
number of increments.
1 procedure inc(S)
2 nonce ← nonce + 1
3 add(S, ⟨myId, nonce⟩).
4 procedure read(S)
5 return size(S)
Algorithm E.4: Counter from set object
4 Clarification added during exam.
Appendix F
This appendix contains final exams from previous times the course was
offered, and is intended to give a rough guide to the typical format and
content of a final exam. Note that the topics covered in past years were not
necessarily the same as those covered this year.
your choosing, and that the design of the consensus protocol can depend on
the number of processes N .
Solution
The consensus number is 2.
To implement 2-process wait-free consensus, use a single fetch-and-
subtract register initialized to 1 plus two auxiliary read/write registers
to hold the input values of the processes. Each process writes its input to its
own register, then performs a fetch-and-subtract(1) on the fetch-and-subtract
register. Whichever process gets 1 from the fetch-and-subtract returns its
own input; the other process (which gets 0) returns the winning process’s
input (which it can read from the winning process’s read/write register.)
To show that the consensus number is at most 2, observe that any two
fetch-and-subtract operations commute: starting from state x, after fetch-and-
subtract(k1 ) and fetch-and-subtract(k2 ) the value in the fetch-and-subtract
register is max(0, x − k1 − k2 ) regardless of the order of the operations.
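A sequential Python sketch of the two-process protocol follows (not from the notes); the class, the explicit access order, and folding the input registers into a dict are assumptions made for illustration.

class FetchAndSubtract:
    def __init__(self, value):
        self.value = value
    def fetch_and_subtract(self, k):
        old = self.value
        self.value = max(0, self.value - k)      # value never goes below 0
        return old

def consensus(access_order, inputs):
    reg, inp, decisions = FetchAndSubtract(1), dict(inputs), {}
    for i in access_order:                       # inputs are "written" up front via inp
        got = reg.fetch_and_subtract(1)
        decisions[i] = inp[i] if got == 1 else inp[1 - i]
    return decisions

print(consensus([0, 1], {0: "x", 1: "y"}))       # both decide "x"
print(consensus([1, 0], {0: "x", 1: "y"}))       # both decide "y"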
Solution
Upper bound
Because there are no failures, we can appoint a leader and have it decide.
The natural choice is some process near the middle, say p⌊(N +1)/2⌋ . Upon
receiving an input, either directly through an input event or indirectly from
another process, the process sends the input value along the line toward the
leader. The leader takes the first input it receives and broadcasts it back out
in both directions as the decision value. The worst case is when the protocol
is initiated at pN ; then we pay 2(N − ⌊(N + 1)/2⌋) time to send all messages
out and back, which is N time units when N is even and N − 1 time units
when N is odd.
Lower bound
Proving an almost-matching lower bound of N − 1 time units is trivial: if
p1 is the only initiator and it starts at time t0 , then by an easy induction
argument, in the worst case pi doesn’t learn of any input until time t0 + (i − 1),
and in particular pN doesn’t find out until after N − 1 time units. If pN
nonetheless decides early, its decision value will violate validity in some
executions.
But we can actually prove something stronger than this: that N time units are indeed required when N is even. Consider two slow executions Ξ0
and Ξ1 , where (a) all messages are delivered after exactly one time unit in
each execution; (b) in Ξ0 only p1 receives an input and the input is 0; and (c)
in Ξ1 only pN receives an input and the input is 1. For each of the executions,
construct a causal ordering on events in the usual fashion: a send is ordered
before a receive, two events of the same process are ordered by time, and
other events are partially ordered by the transitive closure of this relation.
Now consider for Ξ0 the set of all events that precede the decide(0) event
of p1 and for Ξ1 the set of all events that precede the decide(1) event of
pN . Consider further the sets of processes S0 and S1 at which these events
occur; if these two sets of processes do not overlap, then we can construct
an execution in which both sets of events occur, violating Agreement.
Because S0 and S1 overlap, we must have |S0 | + |S1 | ≥ N + 1, and so
at least one of the two sets has size at least ⌈(N + 1)/2⌉, which is N/2 + 1
when N is even. Suppose that it is S0 . Then in order for any event to occur
at pN/2+1 at all some sequence of messages must travel from the initial input
to p1 to process pN/2+1 (taking N/2 time units), and the causal ordering
implies that an additional sequence of messages travels back from pN/2+1 to
p1 before p1 decides (taking an additional N/2 time units). The total time
is thus N .
In either case, the solution should work for arbitrarily many processes—solving
mutual exclusion when N = 1 is not interesting. You are also not required
in either case to guarantee lockout-freedom.
Solution
1. Disproof: With append registers only, it is not possible to solve mutual
exclusion. To prove this, construct a failure-free execution in which
the processes never break symmetry. In the initial configuration, all
processes have the same state and thus execute either the same read
operation or the same append operation; in either case we let all N
operations occur in some arbitrary order. If the operations are all
reads, all processes read the same value and move to the same new
state. If the operations are all appends, then no values are returned and
again all processes enter the same new state. (It’s also the case that
the processes can’t tell from the register’s state which of the identical
append operations went first, but we don’t actually need to use this
fact.)
2. Since the processes are anonymous, any solution that depends on them
having identifiers isn’t going to work. But there is a simple solution
that requires only appending single bits to the register.
Each process trying to enter a critical section repeatedly executes an
append-and-fetch operation with argument 0; if the append-and-fetch
operation returns either a list consisting only of a single 0 or a list
whose second-to-last element is 1, the process enters its critical section.
To leave the critical section, the process does append-and-fetch(1).
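The entry rule can be illustrated with a short sequential Python trace (not from the notes); the AppendRegister class and the trace below are assumptions made for illustration and do not model concurrent attempts.

class AppendRegister:
    def __init__(self):
        self.log = []
    def append_and_fetch(self, bit):
        self.log.append(bit)
        return list(self.log)                    # returns the whole contents after the append

r = AppendRegister()

def try_enter(r):
    """One attempt by a trying process: append 0 and test the entry condition."""
    contents = r.append_and_fetch(0)
    return contents == [0] or (len(contents) >= 2 and contents[-2] == 1)

def leave(r):
    r.append_and_fetch(1)                        # exiting the critical section appends a 1

print(try_enter(r))    # True: the very first attempt sees [0] and enters
print(try_enter(r))    # False: the next attempt sees ..., 0, 0 and must retry
leave(r)
print(try_enter(r))    # True: the attempt right after an exit sees second-to-last = 1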
Solution
Pick some leader node to implement the object. To execute an operation,
send the operation to the leader node, then have the leader carry out the
operation (sequentially) on its copy of the object and send the results back.
each i less than k − 1 and a[k − 1] ← v; and (b) returns a snapshot of the
new contents of the array (after the shift).
What is the consensus number of this object as a function of k?
Solution
We can clearly solve consensus for at least k processes: each process calls
shift-and-fetch on its input, and returns the first non-null value in the buffer.
So now we want to show that we can’t solve consensus for k + 1 processes.
Apply the usual FLP-style argument to get to a bivalent configuration C
where each of the k + 1 processes has a pending operation that leads to a
univalent configuration. Let e0 and e1 be particular operations leading to
0-valent and 1-valent configurations, respectively, and let e2 . . . ek be the
remaining k − 1 pending operations.
We need to argue first that no two distinct operations ei and ej are
operations of different objects. Suppose that Cei is 0-valent and Cej is
1-valent; then if ei and ej are on different objects, Cei ej (still 0-valent) is
indistinguishable by all processes from Cej ei (still 1-valent), a contradiction.
Alternatively, if ei and ej are both b-valent, there exists some (1−b)-valent ek
such that ei and ej both operate on the same object as ek , by the preceding
argument. So all of e0 . . . ek are operations on the same object.
By the usual argument we know that this object can’t be a register. Let’s
show it can’t be a ring buffer either. Consider the configurations Ce0 e1 . . . ek
and Ce1 . . . ek . These are indistinguishable to the process carrying out ek
(because it sees only the inputs to e1 through ek in its snapshot). So they
must have the same valence, a contradiction.
It follows that the consensus number of a k-element ring buffer is exactly
k.
Solution
First observe that each row and column of the torus is a bidirectional ring, so we can run, for example, the Hirschberg-Sinclair O(n log n)-message protocol within each of these rings to find the smallest identifier in the ring. We'll use this to construct the following algorithm:
1. Run Hirschberg-Sinclair in each row to get a local leader for each row; this takes n × O(n log n) = O(n² log n) messages. Use an additional n messages per row to distribute the identifier of the row leader to all nodes and initiate the next stage of the protocol.
2. Run Hirschberg-Sinclair in each column, with each node adopting the row leader's identifier as its own. This costs another O(n² log n) messages; at the end, every node knows the minimum identifier of all nodes in the torus.
The total message complexity is O(n² log n). (I suspect this is optimal, but I don't have a proof.)
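To make the two-stage structure concrete, here is a small Python model of the data flow only (not of the message passing): row minima are computed first, and then each column computes the minimum of the row minima. In the actual algorithm each min is the result of a Hirschberg-Sinclair run inside the corresponding ring; the function name and the example matrix below are invented.

    def torus_leader(ids):
        # ids is an n-by-n matrix of distinct identifiers; the return value
        # is the identifier that every node ends up electing.
        row_minima = [min(row) for row in ids]   # stage 1: one run per row
        return min(row_minima)                   # stage 2: one run per column

    assert torus_leader([[7, 3, 9], [4, 8, 5], [6, 2, 1]]) == 1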
3. Give the best lower bound you can on the total message complexity of
the pre-processing and search algorithms in the case above.
Solution
1. Run depth-first search to find the matching key and return the corresponding value back up the tree; a sketch of this search appears after this list. Message complexity is O(|E|) = O(n) (since each node has only O(1) links).
2. Basic idea: give each node a copy of all key-value pairs, so that searches take zero messages. To give each node a copy of all key-value pairs, we could do a convergecast followed by a broadcast (O(n) message complexity) or just flood each pair (O(n²) messages). Either is fine, since we don't care about the message complexity of the pre-processing stage.
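Here is the search sketch promised in part 1: a sequential model of the depth-first search in which each recursive call corresponds to one message sent down an edge and each return to one message sent back up. The tree layout and the key-value assignment below are invented for the example.

    def dfs_search(tree, kvpairs, node, key, parent=None):
        # kvpairs maps a node to its (key, value) pair, if it has one.
        if kvpairs.get(node, (None, None))[0] == key:
            return kvpairs[node][1]          # found locally; reply upward
        for child in tree[node]:
            if child == parent:
                continue
            result = dfs_search(tree, kvpairs, child, key, node)
            if result is not None:
                return result                # propagate the answer upward
        return None

    tree = {"root": ["a", "b"], "a": ["root"], "b": ["root", "c"], "c": ["b"]}
    kvpairs = {"a": ("x", 1), "c": ("y", 2)}
    assert dfs_search(tree, kvpairs, "root", "y") == 2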
Solution
There is no protocol for two processes: we could turn an anti-consensus protocol with outputs in {0, 1} into a consensus protocol by having one of the processes always negate its output.
A protocol for three processes: use a splitter.
Solution
Here is an impossibility proof. Suppose there is such an algorithm, and let
it correctly decide “odd” on a ring of size 2k + 1 for some k and some set
of leader inputs. Now construct a ring of size 4k + 2 by pasting two such
rings together (assigning the same values to the leader bits in each copy)
and run the algorithm on this ring. By the usual symmetry argument, every
corresponding process sends the same messages and makes the same decisions
in both rings, implying that the processes incorrectly decide the ring of size
4k + 2 is odd.
Solution
Disproof: Let s1 and s2 be processes carrying out snapshots, and let w1 and w2 be processes carrying out writes. Suppose that each wi initiates a write of 1 to a[wi], but all of its messages to other processes are delayed after it updates its own local copy of a[wi]. Now let each si receive responses from 3n/4 − 1 processes not otherwise mentioned, plus wi. Then s1 will return a vector with a[w1] = 1 and a[w2] = 0, while s2 will return a vector with a[w1] = 0 and a[w2] = 1, which is inconsistent. The fact that these vectors are also disseminated to at least 3n/4 other processes is a red herring.
Suppose we have a priority queue object with two operations: enq(v) adds v to the queue, and deq() removes and returns the smallest value in the queue, or returns null if the queue is empty. (If there is more than one copy of the smallest value, only one copy is removed.)
What is the consensus number of this object?
Solution
The consensus number is 2. The proof is similar to that for a queue.
To show we can do consensus for n = 2, start with a priority queue with
a single value in it, and have each process attempt to dequeue this value. If
a process gets the value, it decides on its own input; if it gets null, it decides
on the other process’s input.
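A minimal Python sketch of this two-process protocol is below. The PriorityQueue class (a lock around a heap) models the shared object, the proposals list plays the role of two single-writer registers for announcing inputs, and the initial token value 0 is arbitrary; all of these names are invented for the sketch.

    import heapq
    import threading

    class PriorityQueue:
        # deq() returns the smallest value, or None if the queue is empty;
        # the lock just simulates atomicity of the shared object.
        def __init__(self, initial=()):
            self._lock = threading.Lock()
            self._heap = list(initial)
            heapq.heapify(self._heap)

        def enq(self, v):
            with self._lock:
                heapq.heappush(self._heap, v)

        def deq(self):
            with self._lock:
                return heapq.heappop(self._heap) if self._heap else None

    def consensus2(pq, proposals, my_id, my_input):
        proposals[my_id] = my_input      # announce input (single-writer register)
        if pq.deq() is not None:         # got the pre-loaded value: we win
            return my_input
        return proposals[1 - my_id]      # lost: the winner wrote its input first

    pq = PriorityQueue(initial=[0])      # shared queue holding a single value
    proposals = [None, None]
    # Processes 0 and 1 call consensus2(pq, proposals, 0, x0) and
    # consensus2(pq, proposals, 1, x1), possibly concurrently; both calls
    # return the same decision value.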
To show we can't do consensus for n = 3, observe first that starting from any state C of the queue, given any two operations x and y that are both enqueues or both dequeues, the states Cxy and Cyx are identical. This means that a third process can't tell which operation went first, so a pair of enqueues or a pair of dequeues can't get us out of a bivalent configuration in the FLP argument. We can also exclude any split involving two operations on different queues (or other objects). But we still need
consider the case of a dequeue operation d and an enqueue operation e on
the same queue Q. This splits into several subcases, depending on the state
C of the queue in some bivalent configuration:
1. C = {}. Then Ced = Cd = {}, and a third process can’t tell which of
d or e went first.
2. C is nonempty and e = enq(v), where v is greater than or equal to the
smallest value in C. Then Cde and Ced are identical, and no third
process can tell which of d or e went first.
3. C is nonempty and e = enq(v), where v is less than any value in C.
Consider the configurations Ced and Cde. Here the process pd that performs d can tell which operation went first, because it either obtains v or some other value v′ ≠ v. Kill this process. No other process in Ced or Cde can distinguish the two states without dequeuing whichever of v or v′ was not dequeued by pd. So consider two parallel executions
Cedσ and Cdeσ where σ consists of an arbitrary sequence of operations
ending with a deq on Q by some process p (if no process ever attempts
to dequeue from Q, then we have already won, since the survivors can’t
distinguish Ced from Cde). Now the state of all objects is the same
after Cedσ and Cdeσ, and only pd and p have different states in these
two configurations. So any third process is out of luck.
Appendix G
I/O automata
All output actions of the components are also output actions of the composition. An input action of a component is an input action of the composition only if some other component doesn't supply it as an output; when some other component does supply it as an output, the action is an output of the composition, and an occurrence of it is seen simultaneously by every component that has the action in its signature. (Note that infinite, but countable, compositions are permitted.)
G.1.5 Fairness
I/O automata come with a built-in definition of fair executions, where an execution of A is fair if, for each equivalence class C of actions in task(A), at least one of the following holds:
1. the execution is finite and no action in C is enabled in the final state;
2. the execution is infinite and contains infinitely many occurrences of actions in C; or
3. the execution is infinite and there are infinitely many states in which no action in C is enabled.
G.2.1 Example
A property we might demand of the spambot above (or some other abstraction
of a message channel) is that it only delivers messages that have previously
been given to it. As a trace property this says that in any trace t, if
tk = spam(m), then tj = setMessage(m) for some j < k. (As a set, this
is just the set of all sequences of external spambot-actions that have this
property.) Call this property P .
To prove that the spambot automaton given above satisfies P, we might argue that in any execution s0 a0 s1 a1 . . . , each state si is equal to the argument m of the last setMessage(m) action preceding si, or ⊥ if there is no such action. This is easily proved by induction on i. It then follows that, since spam(m) can only transmit the current state, any spam(m) event follows a state si = m and thus some earlier setMessage(m), as claimed.
However, there are traces that satisfy P that don't correspond to executions of the spambot; for example, consider the trace setMessage(0) setMessage(1) spam(0). This satisfies P (0 was previously given to the automaton, before the spam(0)), but the automaton won't generate it, because the 0 was overwritten by the later setMessage(1) action. Whether this indicates a problem with our automaton not being nondeterministic enough or with our trace property being too weak is a question about what we really want the automaton to do.
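For experimentation, here is a small Python model of the spambot and of the property P. The Spambot class is a guess at the automaton defined earlier in this appendix (its state is the current message, setMessage(m) is an always-enabled input action, and spam(m) is enabled exactly when the state is m); satisfies_P checks P over a finite trace given as a list of (action, argument) pairs.

    class Spambot:
        def __init__(self):
            self.state = None                # plays the role of ⊥

        def set_message(self, m):            # input action: always enabled
            self.state = m

        def spam_enabled(self, m):           # output action spam(m)
            return self.state == m

    def satisfies_P(trace):
        # P: every spam(m) is preceded by some setMessage(m).
        seen = set()
        for action, m in trace:
            if action == "setMessage":
                seen.add(m)
            elif action == "spam" and m not in seen:
                return False
        return True

    # The trace from the text satisfies P, but the automaton never generates
    # it, since the 0 is overwritten before spam(0) could occur.
    assert satisfies_P([("setMessage", 0), ("setMessage", 1), ("spam", 0)])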
A trace property P is a safety property if:
1. P is nonempty.
2. P is prefix-closed: every prefix of a sequence in P is also in P.
3. P is limit-closed: if every finite prefix of an infinite sequence is in P, then the infinite sequence itself is in P.
Because of the last two restrictions, it's enough to prove that P holds for all finite traces of A to show that it holds for all traces (and thus for all fair traces), since any trace is a limit of finite traces. Conversely, if there is some trace or fair trace for which P fails, limit-closure implies that P already fails on some finite prefix of that trace, so again looking only at finite prefixes is enough.
The spambot property mentioned above is a safety property.
Safety properties are typically proved using invariants, properties that
are shown by induction to hold in all reachable states.
G.2.3.1 Example
Consider two spambots A1 and A2 where we identify the spam(m) operation
of A1 with the setMessage(m) operation of A2 ; we’ll call this combined
action spam1 (m) to distinguish it from the output actions of A2 . We’d like
to argue that the composite automaton A1 + A2 satisfies the safety property
(call it Pm ) that any occurrence of spam(m) is preceded by an occurrence
of setMessage(m), where the signature of Pm includes setMessage(m) and
spam(m) for some specific m but no other operations. (This is an example
of where trace property signatures can be useful without being limited to
actions of any specific component automaton.)
To do so, we'll prove a stronger property Pm′, which is Pm modified to include the spam1(m) action in its signature. Observe that Pm′ is a restriction of the product of two simpler properties: the former says that any trace that includes spam1(m) has a previous setMessage(m), and the latter says that any trace that includes spam(m) has a previous spam1(m). Since these properties hold for the individual automata A1 and A2, their product, and thus the restriction Pm′, holds for A1 + A2, and so Pm (as a further restriction) holds for A1 + A2 as well.
Now let’s prove the liveness property for A1 + A2 , that at least one
occurrence of setMessage yields infinitely many spam actions. Here we
let L1 = {at least one setMessage action ⇒ infinitely many spam1 actions}
and L2 = {at least one spam1 action ⇒ infinitely many spam actions}. The
product of these properties is all sequences with (a) no setMessage actions or
(b) infinitely many spam actions, which is what we want. This product holds
if the individual properties L1 and L2 hold for A1 + A2 , which will be the
case if we set task(A1 ) and task(A2 ) correctly.
G.2.4.1 Example
A single spambot A can simulate the conjoined spambots A1 + A2. Proof: Let f(s) = (s, s). Then f(⊥) = (⊥, ⊥) is a start state of A1 + A2. Now consider a transition (s, a, s′) of A; the action a is either (a) setMessage(m), giving s′ = m; here we let x = setMessage(m) spam1(m), which satisfies trace(x) = trace(a) since spam1(m) is internal, and f(s′) = (m, m) is the result of applying x; or (b) a = spam(m), which does not change s or f(s); the matching x is spam(m), which also does not change f(s) and has the same trace.
A different proof could take advantage of f being a relation, by defining f(s) = {(s, s′) | s′ ∈ states(A2)}. Now we don't care about the state of A2: we treat a setMessage(m) action of A as the sequence setMessage(m) in A1 + A2 (which updates the first component of the state correctly), and treat a spam(m) action as spam1(m) spam(m) (which updates the second component, which we don't care about, and has the correct trace). In some cases an approach of this sort is necessary, because we don't know which simulated state we are heading for until we get an action from A.
Note that the converse doesn't work: A1 + A2 doesn't simulate A, since there are traces of A1 + A2 (for example, setMessage(0) spam1(0) setMessage(1) spam(0)) that don't restrict to traces of A. See [Lyn96, §8.5.5] for a more complicated example of how one FIFO queue can simulate two FIFO queues and vice versa (a situation called bisimulation).
Since we are looking at traces rather than fair traces, this kind of simula-
tion doesn’t help much with liveness properties, but sometimes the connection
between states plus a liveness proof for B can be used to get a liveness proof
for A (essentially we have to argue that A can't do infinitely many actions
without triggering a B-action in an appropriate task class). Again see [Lyn96,
§8.5.5].
Bibliography
[AAB+ 11] Yehuda Afek, Noga Alon, Omer Barad, Eran Hornstein, Naama
Barkai, and Ziv Bar-Joseph. A biological solution to a fundamen-
tal distributed computing problem. Science, 331(6014):183–185,
2011.
[AABJ+ 11] Yehuda Afek, Noga Alon, Ziv Bar-Joseph, Alejandro Cornejo,
Bernhard Haeupler, and Fabian Kuhn. Beeping a maximal
independent set. In Proceedings of the 25th International Con-
ference on Distributed Computing, DISC’11, pages 32–50, Berlin,
Heidelberg, 2011. Springer-Verlag.
[AACH+ 11] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert,
and Morteza Zadimoghaddam. Optimal-time adaptive tight
renaming, with applications to counting. In Proceedings of
the Thirtieth Annual ACM SIGACT-SIGOPS Symposium on
Principles of Distributed Computing, pages 239–248, June 2011.
[AACV17] Yehuda Afek, James Aspnes, Edo Cohen, and Danny Vain-
stein. Brief announcement: Object oriented consensus. In
Elad Michael Schiller and Alexander A. Schwarzmann, editors,
Proceedings of the ACM Symposium on Principles of Distributed
Computing, PODC 2017, Washington, DC, USA, July 25-27,
2017, pages 367–369. ACM, 2017.
[AAD+ 93] Yehuda Afek, Hagit Attiya, Danny Dolev, Eli Gafni, Michael
Merritt, and Nir Shavit. Atomic snapshots of shared memory.
J. ACM, 40(4):873–890, 1993.
[AAD+ 06] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer,
and René Peralta. Computation in networks of passively mobile
finite-state sensors. Distributed Computing, pages 235–253,
March 2006.
[AAE08a] Dana Angluin, James Aspnes, and David Eisenstat. Fast com-
putation by population protocols with a leader. Distributed
Computing, 21(3):183–199, September 2008.
[AAG+ 10] Dan Alistarh, Hagit Attiya, Seth Gilbert, Andrei Giurgiu, and
Rachid Guerraoui. Fast randomized test-and-set and renam-
ing. In Nancy A. Lynch and Alexander A. Shvartsman, editors,
Distributed Computing, 24th International Symposium, DISC
2010, Cambridge, MA, USA, September 13-15, 2010. Proceed-
ings, volume 6343 of Lecture Notes in Computer Science, pages
94–108. Springer, 2010.
[AAGG11] Dan Alistarh, James Aspnes, Seth Gilbert, and Rachid Guer-
raoui. The complexity of renaming. In Fifty-Second Annual
IEEE Symposium on Foundations of Computer Science, pages
718–727, October 2011.
[ABND+ 90] Hagit Attiya, Amotz Bar-Noy, Danny Dolev, David Peleg, and
Rüdiger Reischuk. Renaming in an asynchronous environment.
J. ACM, 37(3):524–548, 1990.
[AC08] Hagit Attiya and Keren Censor. Tight bounds for asynchronous
randomized consensus. Journal of the ACM, 55(5):20, October
2008.
[ACH10] Hagit Attiya and Keren Censor-Hillel. Lower bounds for ran-
domized consensus under a weak adversary. SIAM J. Comput.,
39(8):3885–3904, 2010.
[AHW08] Hagit Attiya, Danny Hendler, and Philipp Woelfel. Tight RMR
lower bounds for mutual exclusion and other problems. In
Proceedings of the 40th annual ACM symposium on Theory of
computing, STOC ’08, pages 217–226, New York, NY, USA,
2008. ACM.
[AKM+ 93] Baruch Awerbuch, Shay Kutten, Yishay Mansour, Boaz Patt-
Shamir, and George Varghese. Time optimal self-stabilizing
synchronization. In Proceedings of the twenty-fifth annual ACM
symposium on Theory of computing, pages 652–661. ACM, 1993.
[AKM+ 07] Baruch Awerbuch, Shay Kutten, Yishay Mansour, Boaz Patt-
Shamir, and George Varghese. A time-optimal self-stabilizing
synchronizer using a phase clock. IEEE Transactions on De-
pendable and Secure Computing, 4(3):180–190, July–September
2007.
[AKP+ 06] Hagit Attiya, Fabian Kuhn, C. Greg Plaxton, Mirjam Watten-
hofer, and Roger Wattenhofer. Efficient adaptive collect using
randomization. Distributed Computing, 18(3):179–188, 2006.
[Bel03] S. Bellovin. The Security Flag in the IPv4 Header. RFC 3514
(Informational), April 2003.
[BHMT20] Mirza Ahad Baig, Danny Hendler, Alessia Milani, and Corentin
Travers. Long-lived snapshots with polylogarithmic amortized
step complexity. In Proceedings of the 39th Symposium on
Principles of Distributed Computing, PODC ’20, pages 31–40, New
York, NY, USA, 2020. Association for Computing Machinery.
[CCN12] Luca Cardelli and Attila Csikász-Nagy. The cell cycle switch
computes approximate majority. Scientific reports, 2, 2012.
[Cha93] Soma Chaudhuri. More choices allow more faults: Set consen-
sus problems in totally asynchronous systems. Inf. Comput.,
105(1):132–158, 1993.
[CIL94] Benny Chor, Amos Israeli, and Ming Li. Wait-free consensus
using asynchronous hardware. SIAM J. Comput., 23(4):701–712,
1994.
[CLM+ 16] Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki,
and Aaron Sidford. Geometric median in nearly linear time.
In Daniel Wichs and Yishay Mansour, editors, Proceedings
of the 48th Annual ACM SIGACT Symposium on Theory of
Computing, STOC 2016, Cambridge, MA, USA, June 18-21,
2016, pages 9–21. ACM, 2016.
[CV86] Richard Cole and Uzi Vishkin. Deterministic coin tossing with
applications to optimal parallel list ranking. Information and
Control, 70(1):32–53, 1986.
[EGSZ16] Faith Ellen, Rati Gelashvili, Nir Shavit, and Leqi Zhu. A
complexity-based hierarchy for multiprocessor synchronization:
[extended abstract]. In Proceedings of the 2016 ACM Symposium
on Principles of Distributed Computing, PODC ’16, pages 289–
298, New York, NY, USA, 2016. ACM.
[EHS12] Faith Ellen, Danny Hendler, and Nir Shavit. On the inherent se-
quentiality of concurrent objects. SIAM Journal on Computing,
41(3):519–536, 2012.
[ER+ 18] Robert Elsässer, Tomasz Radzik, et al. Recent results in popu-
lation protocols for exact majority and leader election. Bulletin
of EATCS, 3(126), 2018.
[FHS98] Faith Ellen Fich, Maurice Herlihy, and Nir Shavit. On the space
complexity of randomized synchronization. J. ACM, 45(5):843–
862, 1998.
[FHS05] Faith Ellen Fich, Danny Hendler, and Nir Shavit. Linear lower
bounds on real-world implementations of concurrent objects. In
Foundations of Computer Science, Annual IEEE Symposium on,
pages 165–173, Los Alamitos, CA, USA, 2005. IEEE Computer
Society.
[FL06] Rui Fan and Nancy A. Lynch. An Ω(n log n) lower bound on
the cost of mutual exclusion. In Eric Ruppert and Dahlia
Malkhi, editors, Proceedings of the Twenty-Fifth Annual ACM
Symposium on Principles of Distributed Computing, PODC
2006, Denver, CO, USA, July 23-26, 2006, pages 275–284.
ACM, 2006.
[FLMS05] Faith Ellen Fich, Victor Luchangco, Mark Moir, and Nir Shavit.
Obstruction-free algorithms can be practically wait-free. In
Pierre Fraigniaud, editor, Distributed Computing, 19th Inter-
national Conference, DISC 2005, Cracow, Poland, September
26-29, 2005, Proceedings, volume 3724 of Lecture Notes in Com-
puter Science, pages 78–92. Springer, 2005.
[GW12a] George Giakkoupis and Philipp Woelfel. On the time and space
complexity of randomized test-and-set. In Darek Kowalski and
Alessandro Panconesi, editors, ACM Symposium on Princi-
ples of Distributed Computing, PODC ’12, Funchal, Madeira,
Portugal, July 16-18, 2012, pages 19–28. ACM, 2012.
[JTT00] Prasad Jayanti, King Tan, and Sam Toueg. Time and space
lower bounds for nonblocking implementations. SIAM J. Com-
put., 30(2):438–456, 2000.
[Zhu16] Leqi Zhu. A tight space bound for consensus. In Daniel Wichs
and Yishay Mansour, editors, Proceedings of the 48th Annual
ACM SIGACT Symposium on Theory of Computing, STOC
2016, Cambridge, MA, USA, June 18-21, 2016, pages 345–350.
ACM, 2016.
Index
ring, 30
ring buffer object, 437
RMR, 118, 141
RMW, 118
robot
    fat, 326
robust hierarchy, 147
rock-paper-scissors object, 352
root
    ghost, 306
roster, 373
round, 12, 89, 117
safe agreement, 267
safety, 5
safety property, 6, 447, 448
scan
    direct, 162, 166
    indirect, 162, 166
schedule, 3, 9
    admissible, 10
second-to-last register, 393
self-stabilization, 4, 299
    silent, 299
self-stabilizing, 299
semi-lattice, 168
semi-synchronous
    mobile robots, 326
semi-synchronous message-passing, 3
semilinear set, 320
semisynchrony
    unknown-bound, 253
send event, 41
sense of direction, 30
    mobile robots, 326
sense of scale
    mobile robots, 326
sequence number, 13
sequential consistency, 116
sequential execution, 120
server, 11
session, 54
session problem, 54
set
    independent, 335
    linear, 320
    maximal independent, 335
    semilinear, 320
set register, 385
    ordered, 385
shared coin
    weak, 212
shared memory, 3
    distributed, 120
shutdown mechanism, 362
sifter, 216
signature, 444
signed quorum system, 111
silent self-stabilization, 299
similar, 39, 41
simplex, 273, 275, 276
simplex agreement, 287
simplicial complex, 275
    abstract, 276
simplicial map, 285
simply connected, 290
simulation, 5, 448
simulation relation, 451
single-writer multi-reader register, 114
single-writer register, 114
single-writer single-reader register, 115
slow counter, 370
slow path, 137
snapshot, 161
snapshot object, 179
software transactional memory, 240
solo termination, 181
solo-terminating, 181, 247
sorting network, 235
space complexity, 118
Oliver, 5
Two Generals, 5, 57
two-writer sticky bit, 409
unidirectional ring, 31
uniform, 36
univalent, 82
universality of consensus, 158
unknown-bound semisynchrony, 253
unsafe, 267
upward validity, 167