
Coordination and Agreement

Master 2007
Outline

• Introduction
• Distributed Mutual Exclusion
• Election Algorithms
• Group Communication
• Consensus and Related Problems

11/26/20
VFSTR 2
19
Distributed Mutual Exclusion (1)

[Figure: processes 1 … n concurrently accessing a shared resource]

• Mutual exclusion is very important:
  • Prevents interference
  • Ensures consistency when accessing the resources
Distributed Mutual Exclusion (2)
• Mutual exclusion is useful when the server managing
  the resources doesn't use locks

• Critical section:

  Enter()  enter the critical section (blocking)

           • Access shared resources in the critical
             section

  Exit()   leave the critical section

Distributed Mutual Exclusion (3)
• Distributed mutual exclusion: no shared variables, only message
  passing

• Properties:
  • Safety: At most one process may execute in the critical
    section at a time
  • Liveness: Requests to enter and exit the critical section
    eventually succeed
    (no deadlock and no starvation)
  • Ordering: If one request to enter the CS happened-before
    another, then entry to the CS is granted in that order
Mutual Exclusion Algorithms

• Basic hypotheses:
  • System: asynchronous
  • Processes: don't fail
  • Message transmission: reliable

• Central Server Algorithm
• Ring-Based Algorithm
• Mutual Exclusion using Multicast and Logical Clocks
• Maekawa's Voting Algorithm
• Mutual Exclusion Algorithms Comparison
Central Server Algorithm
• The simplest way to achieve mutual exclusion is to employ a server that
  grants permission to enter the critical section.

• To enter a critical section, a process sends a request message to the server
  and awaits a reply from it.

• The reply constitutes a token signifying permission to enter the critical
  section.

• If no other process has the token at the time of the request, then the server
  replies immediately, granting the token.

• If the token is currently held by another process, then the server does not
  reply but queues the request. On exiting the critical section, a process
  sends a message to the server, giving it back the token.
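The central-server protocol above can be sketched as a single-threaded simulation. This is an illustrative sketch, not the slides' code; the names (`TokenServer`, `request`, `release`) are assumptions, and a real deployment would use message passing rather than method calls.

```python
from collections import deque

# Minimal single-threaded simulation of the central-server algorithm.
class TokenServer:
    def __init__(self):
        self.holder = None          # process currently holding the token
        self.queue = deque()        # queued requests

    def request(self, pid):
        """A process asks for the token; returns True if granted at once."""
        if self.holder is None:
            self.holder = pid       # token free: grant immediately
            return True
        self.queue.append(pid)      # token held: queue the request
        return False

    def release(self, pid):
        """The holder returns the token; grant it to the next waiter."""
        assert self.holder == pid
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

server = TokenServer()
granted_p1 = server.request("p1")   # token free: granted at once
granted_p2 = server.request("p2")   # token held: queued
next_holder = server.release("p1")  # token passes to p2
```

The FIFO queue is what gives liveness here: every queued request is eventually granted once earlier holders release.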
Central Server Algorithm

[Figure: the server holds the token state and a queue of requests. A process
sends a request (1) and waits; the current holder releases the token (2);
the server grants the token (3) to the next process in the queue.]
Ring-Based Algorithm (1)

[Figure: a group of unordered processes P1 … Pn on a network (e.g., an
Ethernet), before a logical ring is formed]
Ring-Based Algorithm (2)
• One of the simplest ways to arrange mutual exclusion between the N
  processes without requiring an additional process is to arrange them in a
  logical ring. Exclusion is conferred by obtaining a token, in the form of a
  message passed from process to process in a single direction (clockwise)
  around the ring.

• If a process does not require entry to the critical section when it receives
  the token, then it immediately forwards the token to its neighbour.

• A process that requires the token waits until it receives it, and then
  retains it.

• To exit the critical section, the process sends the token on to its
  neighbour. This algorithm continuously consumes network bandwidth.

• The delay experienced by a process requesting entry to the CS is between 0
  messages and N messages.
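The token-passing behaviour can be illustrated with a toy round-based simulation. The function name and structure are assumptions for illustration, not the slides' code:

```python
# Toy simulation of the ring-based algorithm: the token moves clockwise,
# and a process enters the CS only while holding the token.
def ring_token_entries(n, wants_cs, steps):
    """Pass the token `steps` times around `n` processes; record CS entries."""
    entries = []
    holder = 0
    for _ in range(steps):
        if holder in wants_cs:
            entries.append(holder)       # enter and leave the critical section
            wants_cs = wants_cs - {holder}
        holder = (holder + 1) % n        # forward token to the neighbour
    return entries

# Processes 3 and 1 want the CS; the token starts at process 0.
order = ring_token_entries(5, {3, 1}, 10)
```

Safety is immediate: there is a single token, so at most one process is ever in the critical section.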
Ring-Based Algorithm (3)

[Figure: processes P1 … Pn arranged in a logical ring; the token navigates
around the ring, and the current holder may run Enter(), access the critical
section, then Exit()]
Mutual Exclusion using Multicast and Logical Clocks

• The basic idea is that a process that requires entry to a critical section
  multicasts a request message, and can enter it only when all the other
  processes have replied to this message.

• Messages requesting entry are of the form <T, Pi>, where T is the
  sender's timestamp and Pi is the sender's identifier.

• Each process records its state of being outside the CS (RELEASED),
  waiting for entry (WANTED) or being in the CS (HELD) in a variable state.
  The protocol is given below.
• If a process requests entry and the state of all other processes is
  RELEASED, then all processes will reply immediately to the request and the
  requester will obtain entry.

• If some process is in state HELD, then that process will not reply to
  requests until it has finished with the CS, so the requester cannot gain
  entry in the meantime.

• If two or more processes request entry at the same time, then whichever
  process's request bears the lowest timestamp will be the first to collect
  N-1 replies, granting it entry next.

• If requests bear equal timestamps, the requests are ordered according to
  the corresponding process identifiers.

• More expensive algorithm in terms of bandwidth consumption, but with
  improved performance (entry delay).
Mutual Exclusion using Multicast
and Logical Clocks (1)

[Figure: P1 and P2 request entry to the critical section simultaneously,
with timestamps 23 and 19; the request with the lower timestamp (19) is
granted first, while the other request waits in the queue]
Mutual Exclusion using Multicast
and Logical Clocks (2)
• Main steps of the algorithm:

  Initialization
    state := RELEASED;

  Process pi requests entering the critical section
    state := WANTED;
    T := request's timestamp;
    Multicast request <T, pi> to all processes;
    Wait until (number of replies received = (N – 1));
    state := HELD;
Mutual Exclusion using
Multicast and Logical Clocks (3)
• Main steps of the algorithm (cont'd):

  On receipt of a request <Ti, pi> at pj (i ≠ j)
    If (state = HELD) OR
       (state = WANTED AND (T, pj) < (Ti, pi))
    Then queue request from pi without replying;
    Else reply immediately to pi;

  To quit the critical section
    state := RELEASED;
    Reply to any queued requests;
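The decision rule on receipt of a request compares (timestamp, identifier) pairs lexicographically, which is exactly how ties on equal timestamps are broken. A minimal sketch, with an assumed helper name `should_queue`:

```python
# Sketch of the per-request decision rule: queue the request iff we hold
# the CS, or we want it and our own (timestamp, id) pair is smaller.
def should_queue(state, own_ts, own_id, req_ts, req_id):
    if state == "HELD":
        return True
    if state == "WANTED" and (own_ts, own_id) < (req_ts, req_id):
        return True
    return False

# A HELD process never replies until it exits:
q1 = should_queue("HELD", 10, 1, 5, 2)
# Equal timestamps are broken by process identifier: (7, 1) < (7, 2)
q2 = should_queue("WANTED", 7, 1, 7, 2)
# A RELEASED process replies immediately:
q3 = should_queue("RELEASED", 0, 1, 7, 2)
```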
Maekawa's Voting Algorithm (1)

• Maekawa observed that in order for a process to enter a critical section,
  it is not necessary for all of its peers to grant it access.

• Processes need only obtain permission from subsets of their peers, as
  long as the subsets used by any two processes overlap.

• A candidate process must collect sufficient votes to enter.

• Processes in the intersection of two sets of voters ensure the safety
  property ME1 (at most one process can enter the critical section) by
  casting their votes for only one candidate.
Maekawa's Voting Algorithm (2)
• Candidate process: must collect sufficient votes to
  enter the critical section
• Each process pi maintains a voting set Vi (i = 1, …, N),
  where Vi ⊆ {p1, …, pN}
• Sets Vi chosen such that, for all i, j:
  • pi ∈ Vi
  • Vi ∩ Vj ≠ ∅  (at least one common member of any
    two voting sets)
  • |Vi| = K  (fairness)

• Each process pj is contained in M of the voting sets Vi
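One common way to construct voting sets with these properties (an assumption here; the slides do not prescribe a construction) is to place N = k² processes in a k×k grid and let Vi be the union of pi's row and column, giving K = 2k − 1:

```python
import math

# Grid construction of Maekawa voting sets: Vi = row(i) ∪ column(i).
def grid_voting_sets(n):
    k = math.isqrt(n)
    assert k * k == n, "this sketch assumes a perfect-square N"
    sets = []
    for i in range(n):
        r, c = divmod(i, k)
        row = {r * k + j for j in range(k)}
        col = {j * k + c for j in range(k)}
        sets.append(row | col)
    return sets

V = grid_voting_sets(9)                       # N = 9, K = 2*3 - 1 = 5
prop1 = all(i in V[i] for i in range(9))      # pi ∈ Vi
prop2 = all(V[i] & V[j] for i in range(9) for j in range(9))  # overlap
sizes = {len(s) for s in V}                   # equal size (fairness)
```

Any two sets overlap because the row of one always meets the column of the other in exactly one grid cell.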
Maekawa's Voting Algorithm (3)
• Main steps of the algorithm:

  Initialization
    state := RELEASED;
    voted := FALSE;

  For pi to enter the critical section
    state := WANTED;
    Multicast request to all processes in Vi – {pi};
    Wait until (number of replies received = K – 1);
    state := HELD;    // pi enters the critical section only
                      // after collecting K – 1 votes
Maekawa's Voting Algorithm (4)
• Main steps of the algorithm (cont'd):

  On receipt of a request from pi at pj (i ≠ j)
    If (state = HELD OR voted = TRUE)
    Then queue request from pi without replying;
    Else reply immediately to pi;
         voted := TRUE;

  For pi to exit the critical section
    state := RELEASED;
    Multicast release to all processes in Vi – {pi};
Maekawa's Voting Algorithm (5)
• Main steps of the algorithm (cont'd):

  On receipt of a release from pi at pj (i ≠ j)
    If (queue of requests is non-empty)
    Then remove head of queue, e.g., pk;
         send reply to pk;
         voted := TRUE;
    Else voted := FALSE;
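The voter's side of the protocol (request and release handling above) can be sketched as a small state machine. Names are illustrative, and the candidate's own state is passed in rather than shared:

```python
from collections import deque

# Sketch of a Maekawa voter: it casts at most one vote at a time, and a
# release passes the vote on to the head of the queue.
class Voter:
    def __init__(self):
        self.voted = False
        self.queue = deque()

    def on_request(self, pid, state="RELEASED"):
        if state == "HELD" or self.voted:
            self.queue.append(pid)   # queue without replying
            return None
        self.voted = True            # reply immediately: cast the vote
        return pid

    def on_release(self):
        if self.queue:
            self.voted = True        # the vote goes to the queue head
            return self.queue.popleft()
        self.voted = False
        return None

v = Voter()
first = v.on_request("p1")   # vote cast for p1
second = v.on_request("p2")  # queued: this voter has already voted
third = v.on_release()       # release passes the vote to p2
```

Because each voter casts a single vote and any two voting sets intersect, two candidates can never both collect K − 1 votes.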
M. E. Algorithms Comparison

Algorithm        Messages per Enter()/Exit()   Delay before Enter()   Problems
Centralized      3                             2                      Crash of server
Virtual ring     1 to ∞                        0 to N–1               Crash of a process;
                                                                      token lost;
                                                                      ordering not satisfied
Logical clocks   2(N–1)                        2(N–1)                 Crash of a process
Maekawa's alg.   3√N                           2√N                    Crash of a process
                                                                      that votes
Outline

• Introduction
• Distributed Mutual Exclusion
• Election Algorithms
• Group Communication
• Consensus and Related Problems
Election Algorithms (1)

• Objective: Elect one process pi from a group of
  processes p1 … pN, even if multiple elections have
  been started simultaneously
• Utility: Elect a primary manager, a master process, a
  coordinator or a central server
• Each process pi maintains the identity of the elected
  process in the variable Electedi (NIL if it isn't defined yet)
• Properties to satisfy: for all pi,
  • Safety: Electedi = NIL, or Electedi = P, where P is the
    non-crashed process with the largest identifier
  • Liveness: pi participates and sets Electedi ≠ NIL, or
    crashes
• An election algorithm determines which process will play the role of
  coordinator or server. All processes need to agree on the selected
  process. Any process can start an election, for example if it notices
  that the previous coordinator has failed. The requirements of an
  election algorithm are as follows:
  • Safety: Only one process is chosen: the one with the largest
    identifying value. The value could be load, uptime, a random number,
    etc.
  • Liveness: All processes eventually choose a winner or crash.
Election Algorithms (2)

• Ring-Based Election Algorithm

• Bully Algorithm

• Election Algorithms Comparison
• Processes are arranged in a logical ring. A process starts an
  election by placing its ID and value in a message and sending the
  message to its neighbour. When a message is received, a process
  does the following:
  • If the value is greater than its own, it saves the ID and forwards
    the message to its neighbour.
  • Else if its own value is greater and it has not yet participated
    in the election, it replaces the ID with its own, the value with its
    own, and forwards the message.
  • Else if it has already participated, it discards the message.
  • If a process receives its own ID and value, it knows it has been
    elected. It then sends an elected message to its neighbour.
  • When an elected message is received, it is forwarded to the next
    neighbour.
Ring-Based Election Algorithm
(1)

[Figure: processes with identifiers 5, 16, 9, 25 in a ring; process 5
starts the election, and the circulating message's identifier grows to 25]
Ring-Based Election Algorithm
(2)
Initialization
  Participanti := FALSE;
  Electedi := NIL;

Pi starts an election
  Participanti := TRUE;
  Send the message <election, pi> to its neighbour;

Receipt of a message <elected, pj> at pi
  Participanti := FALSE;
  If pi ≠ pj
  Then Send the message <elected, pj> to its neighbour;
Ring-Based Election Algorithm
(3)
Receipt of the election message <election, pi> at pj
  If pi > pj
  Then Send the message <election, pi> to its neighbour;
       Participantj := TRUE;
  Else If pi < pj AND Participantj = FALSE
       Then Send the message <election, pj> to its neighbour;
            Participantj := TRUE;
  Else If pi = pj
       Then Electedj := pj;
            Participantj := FALSE;
            Send the message <elected, pj> to its neighbour;
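The circulating election message can be simulated in a few lines. This toy version only tracks the largest identifier seen so far and omits the Participant flag that the pseudocode uses to suppress duplicate messages:

```python
# Toy synchronous simulation of the ring-based election: the message
# carries the largest identifier seen; the process whose own identifier
# comes back to it is elected.
def ring_election(ids, starter):
    n = len(ids)
    pos = ids.index(starter)
    msg = starter
    hops = 0
    while True:
        pos = (pos + 1) % n          # forward to the clockwise neighbour
        hops += 1
        me = ids[pos]
        if msg == me:
            return me, hops          # own identifier came back: elected
        if me > msg:
            msg = me                 # replace with the larger identifier

# The ring from the figure, plus a process 3; process 5 starts:
winner, hops = ring_election([5, 16, 9, 25, 3], 5)
```

The elected process is always the largest identifier on the ring, whichever process starts the election.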
Bully Algorithm (1)

• Characteristic: Allows processes to crash during
  an election
• Hypotheses:
  • Reliable transmission
  • Synchronous system

  Timeout used to detect a failed process:
  T = 2 · DelayTrans + DelayProcess
  (maximum transmission delay, twice, plus the
  maximum message processing delay)
Bully Algorithm (2)

• Hypotheses (cont'd):
  • Each process knows which processes have higher
    identifiers, and it can communicate with all such
    processes
• Three types of messages:
  • Election: starts an election
  • OK: sent in response to an election message
  • Coordinator: announces the new coordinator
• Election started by a process when it notices, through
  timeouts, that the coordinator has failed
Bully Algorithm (3)

[Figure: coordinator 8 fails; process 5 detects it first and sends Election
to the higher-numbered processes 6 and 7; they reply OK; 7 eventually wins
and announces itself as the new Coordinator]
Bully Algorithm (4)

Initialization
  Electedi := NIL;

pi starts the election
  Send the message (Election, pi) to every pj with pj > pi;
  Wait for the messages (OK, pj) from such pj;
  If no message (OK, pj) arrives during T
  Then Elected := pi;
       Send the message (Coordinator, pi) to every pj with pj < pi;
  Else wait until receipt of the message (Coordinator)
       (if it doesn't arrive during another timeout T', begin another election)
Bully Algorithm (5)

Receipt of the message (Coordinator, pj)
  Elected := pj;

Receipt of the message (Election, pj) at pi
  Send the message (OK, pi) to pj;
  Start an election unless it has begun one already;

• When a process is started to replace a crashed
  process, it begins an election
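A much-simplified sketch of one detection step: the detector contacts all higher-numbered processes and wins only if none replies within the timeout. In the full algorithm each responder then runs its own election; here that cascade is collapsed into taking the highest live responder. Names are assumptions:

```python
# Collapsed sketch of a bully election round (timeouts are modelled by
# the `alive` set: a crashed process never sends OK).
def bully_round(my_id, all_ids, alive):
    higher = [p for p in all_ids if p > my_id]
    responders = [p for p in higher if p in alive]   # OK replies received
    if not responders:
        return my_id                 # no OK during T: announce self
    return max(responders)           # highest live process eventually wins

# Process 5 detects that coordinator 8 crashed; 6 and 7 are still alive:
coord = bully_round(5, [1, 3, 4, 5, 6, 7, 8], alive={1, 3, 4, 5, 6, 7})
```

When process 7 itself runs the same round, its only higher peer (8) is crashed, so it declares itself coordinator, matching the figure.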
Election Algorithms
Comparison

Election algorithm   Number of messages   Problems
Virtual ring         2N to 3N–1           Doesn't tolerate faults
Bully                N–2 to O(N²)         System must be synchronous
Outline

• Introduction
• Distributed Mutual Exclusion
• Election Algorithms
• Group Communication
• Consensus and Related Problems
Group Communication (1)

• Objective: each of a group of processes must
  receive copies of the messages sent to the group
• Group communication requires:
  • Coordination
  • Agreement: on the set of messages that is
    received and on the delivery ordering

• We study multicast communication of processes
  whose membership is known (static groups)
Group Communication (2)

• System: contains a collection of processes, which
  can communicate reliably over one-to-one channels
• Processes: members of groups, may fail only by
  crashing

• Groups:

  [Figure: a closed group, to which only members multicast, and an open
  group, to which non-members may also send]
Group Communication (3)

• Primitives:
  • multicast(g, m): sends the message m to all
    members of group g
  • deliver(m): delivers the message m to the
    calling process
  • sender(m): unique identifier of the process that
    sent the message m
  • group(m): unique identifier of the group to which
    the message m was sent
Group Communication (4)

• Basic Multicast

• Reliable Multicast

• Ordered Multicast
Basic Multicast

• Objective: Guarantee that a correct process will eventually
  deliver the message, as long as the multicaster does not crash

• Primitives: B_multicast, B_deliver

• Implementation: Use a reliable one-to-one communication

  To B_multicast(g, m)
    For each process p ∈ g, send(p, m);

  On receive(m) at p
    B_deliver(m) to p

  (Threads may be used to perform the send
  operations simultaneously)

• Unreliable: Acknowledgments may be dropped
Reliable Multicast (1)

• Properties to satisfy:
  • Integrity: A correct process P delivers the message
    m at most once
  • Validity: If a correct process multicasts a message
    m, then it will eventually deliver m
  • Agreement: If a correct process delivers the
    message m, then all other correct processes in
    group(m) will eventually deliver m

• Primitives: R_multicast, R_deliver
Reliable Multicast (2)

• Implementation using B-multicast:

  Initialization
    msgReceived := {};

  R-multicast(g, m) by p
    B-multicast(g, m);   // p ∈ g

  B-deliver(m) by q, with g = group(m)
    If (m ∉ msgReceived)
    Then msgReceived := msgReceived ∪ {m};
         If (q ≠ p) Then B-multicast(g, m);
         R-deliver(m);

  (Correct algorithm, but inefficient: each message is
  sent |g| times to each process)
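The relay-before-deliver idea above can be exercised in a single-threaded simulation. Class names are illustrative, and the network is modelled as direct method calls:

```python
# Sketch of R-multicast over B-multicast: every receiver re-multicasts an
# unseen message before delivering it, making delivery all-or-nothing
# among correct processes.
class Process:
    def __init__(self, pid, group):
        self.pid, self.group = pid, group
        self.received, self.delivered = set(), []

    def b_deliver(self, m, network):
        if m in self.received:
            return                     # Integrity: deliver at most once
        self.received.add(m)
        network.b_multicast(self.group, m, sender=self.pid)  # relay first
        self.delivered.append(m)       # then R-deliver

class Network:
    def __init__(self, procs):
        self.procs = procs

    def b_multicast(self, group, m, sender):
        for p in self.procs:
            if p.pid in group and p.pid != sender:
                p.b_deliver(m, self)

group = {0, 1, 2}
procs = [Process(i, group) for i in range(3)]
net = Network(procs)
procs[0].received.add("m1")
procs[0].delivered.append("m1")        # p0 performs R-multicast itself
net.b_multicast(group, "m1", sender=0)
deliveries = [p.delivered for p in procs]
```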
Ordered Multicast

• Ordering categories:
  • FIFO Ordering
  • Total Ordering
  • Causal Ordering
  • Hybrid Ordering: Total-Causal,
    Total-FIFO
FIFO Ordering (1)

• If a correct process issues multicast(g, m1) and then
  multicast(g, m2), then every correct process that
  delivers m2 will deliver m1 before m2

  [Figure: one process multicasts m1 and then m2; all processes deliver m1
  before m2, while m3 from a different sender is unconstrained]
FIFO Ordering (2)

• Primitives: FO_multicast, FO_deliver

• Implementation: Use of sequence numbers
• Variables maintained by each process p:
  • Sgp: number of messages sent by p to group g
  • Rgq: sequence number of the latest message p has
    delivered from process q that was sent to the group
• Algorithm
• FIFO ordering is reached only under the assumption
  that groups are non-overlapping
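A sketch of the receiving side with per-sender sequence numbers and a hold-back queue. Names are assumptions, and sequence numbers start at 1 for the first message of each sender:

```python
# Sketch of FIFO-ordered delivery: hold a message back until its sequence
# number is the next one expected from its sender (the R_g^q counter).
class FifoReceiver:
    def __init__(self):
        self.next_from = {}          # next sequence number expected per sender
        self.holdback = {}           # (sender, seq) -> message
        self.delivered = []

    def receive(self, sender, seq, m):
        self.holdback[(sender, seq)] = m
        nxt = self.next_from.get(sender, 1)
        while (sender, nxt) in self.holdback:     # deliver in order
            self.delivered.append(self.holdback.pop((sender, nxt)))
            nxt += 1
        self.next_from[sender] = nxt

r = FifoReceiver()
r.receive("p", 2, "m2")   # arrives early: held back
r.receive("p", 1, "m1")   # now both can be delivered, in FIFO order
order = r.delivered
```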
Total Ordering (1)

• If a correct process delivers message m2 before it
  delivers m1, then any correct process that delivers m1
  will deliver m2 before m1

  [Figure: all processes deliver m1 and m2 in the same order]

• Primitives: TO_multicast, TO_deliver
Total Ordering (2)

• Implementation: Assign totally ordered identifiers to
  multicast messages
• Each process makes the same ordering decision
  based upon these identifiers
• Methods for assigning identifiers to messages:
  • Sequencer process
  • Processes collectively agree on the assignment of
    sequence numbers to messages in a distributed
    fashion
Total Ordering (3)

• Sequencer process: maintains a group-specific
  sequence number Sg

  Initialization
    Sg := 0;

  B-deliver(<m, Ident.>) with g = group(m)
    B-multicast(g, <"order", Ident., Sg>);
    Sg := Sg + 1;

• Algorithm for group member p ∈ g:

  Initialization
    Rg := 0;
Total Ordering (4)

  TO-multicast(g, m) by p        // Ident. is a unique identifier of m
    B-multicast(g ∪ {Sequencer(g)}, <m, Ident.>);

  B-deliver(<m, Ident.>) by p, with g = group(m)
    Place <m, Ident.> in the hold-back queue;

  B-deliver(morder = <"order", Ident., S>) by p, with g = group(morder)
    Wait until (<m, Ident.> in hold-back queue AND S = Rg);
    TO-deliver(m);
    Rg := S + 1;
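The sequencer scheme above, as a toy single-process simulation. Class names are illustrative, and messages and order announcements may arrive in any interleaving:

```python
# Sketch of sequencer-based total ordering: the sequencer assigns global
# numbers; receivers hold messages back until their number is next (Rg).
class Sequencer:
    def __init__(self):
        self.s = 0
    def order(self, ident):
        s, self.s = self.s, self.s + 1
        return (ident, s)            # "order" announcement for this identifier

class Receiver:
    def __init__(self):
        self.r = 0                   # next expected global number
        self.msgs = {}               # ident -> message (hold-back queue)
        self.orders = {}             # global number -> ident
        self.delivered = []
    def on_message(self, ident, m):
        self.msgs[ident] = m
        self._drain()
    def on_order(self, ident, s):
        self.orders[s] = ident
        self._drain()
    def _drain(self):
        while self.r in self.orders and self.orders[self.r] in self.msgs:
            self.delivered.append(self.msgs.pop(self.orders.pop(self.r)))
            self.r += 1

seq, recv = Sequencer(), Receiver()
o1 = seq.order("a")
o2 = seq.order("b")
recv.on_order(*o2)          # order for "b" arrives first
recv.on_message("b", "mb")  # "b" held back: its number is not next
recv.on_message("a", "ma")
recv.on_order(*o1)          # now both deliver, in sequencer order
```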
Total Ordering (5)

• Processes collectively agree on the assignment of
  sequence numbers to messages in a distributed
  fashion

• Variables maintained by each process p:
  • Pgq: largest sequence number proposed by q to
    group g
  • Agq: largest agreed sequence number q has
    observed so far for group g
Total Ordering (6)

[Figure: distributed agreement on sequence numbers.
1. The sender transmits <m, Ident.> to all members.
2. Each member pi proposes Pg := max(Ag, Pg) + 1 and sends <Ident., Pg>
   back to the sender.
3. The sender collects the proposals, chooses the largest as the agreed
   sequence number SN = max over i of the proposed Pg, and multicasts
   <Ident., SN>.
4. Each member sets Ag := SN.]
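A minimal sketch of the propose/agree exchange; the network round-trips are collapsed into direct method calls, and the names are assumptions:

```python
# Sketch of distributed sequence-number agreement: each member proposes
# max(A, P) + 1; the sender fixes the final number as the largest
# proposal, and every member adopts it as its agreed number A.
class Member:
    def __init__(self):
        self.proposed = 0            # P_g: largest number this member proposed
        self.agreed = 0              # A_g: largest agreed number observed
    def propose(self):
        self.proposed = max(self.agreed, self.proposed) + 1
        return self.proposed
    def learn(self, sn):
        self.agreed = max(self.agreed, sn)

members = [Member() for _ in range(3)]
members[1].agreed = 4                # one member has observed more traffic
proposals = [m.propose() for m in members]
final = max(proposals)               # sender picks the largest proposal
for m in members:
    m.learn(final)
agreed = {m.agreed for m in members}
```

Taking the maximum proposal guarantees the final number is at least as large as anything any member has already agreed, so the total order never goes backwards.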
Causal Ordering (1)

• If multicast(g, m1) → multicast(g, m3), then any correct
  process that delivers m3 will deliver m1 before m3

  [Figure: m1 happened-before m3, so every process delivers m1 before m3;
  m2 is concurrent and unconstrained]
Causal Ordering (2)

• Primitives: CO_multicast, CO_deliver

• Each process pi of group g maintains a timestamp
  vector Vig

  Vig[j] = number of multicast messages received from
           pj that happened-before the next message to
           be sent

• Algorithm for group member pi:

  Initialization
    Vig[j] := 0  (j = 1, …, N);
Causal Ordering (3)

  CO-multicast(g, m)
    Vig[i] := Vig[i] + 1;
    B-multicast(g, <m, Vig>);

  B-deliver(<m, Vjg>) of pj, with g = group(m)
    Place <m, Vjg> in a hold-back queue;
    Wait until (Vjg[j] = Vig[j] + 1) AND (Vjg[k] ≤ Vig[k] for all k ≠ j);
    CO-deliver(m);
    Vig[j] := Vig[j] + 1;
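The delivery condition above can be written as a small predicate (the function name is an assumption):

```python
# Causal-delivery test: deliver <m, Vj> from pj only when it is the next
# message from pj (Vj[j] = Vi[j] + 1) and every message it causally
# depends on has already been delivered (Vj[k] <= Vi[k] for k != j).
def can_co_deliver(v_local, v_msg, j):
    if v_msg[j] != v_local[j] + 1:
        return False
    return all(v_msg[k] <= v_local[k] for k in range(len(v_msg)) if k != j)

vi = [0, 0, 0]                            # pi has delivered nothing yet
ok_now = can_co_deliver(vi, [1, 0, 0], 0) # first message from p0: deliver
held = can_co_deliver(vi, [1, 1, 0], 1)   # p1's message depends on p0's: hold
```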
Outline

• Introduction
• Distributed Mutual Exclusion
• Election Algorithms
• Group Communication
• Consensus and Related Problems
Consensus introduction

• Reaching agreement in a distributed manner:
  • Mutual exclusion: who can enter the critical region
  • Totally ordered multicast: the order of message
    delivery
  • Byzantine generals: attack or retreat?
• Consensus problem:
  • Agree on a value after one or more of the processes
    has proposed what the value should be
Consensus (1)

• Objective: processes must agree on a value after one or more
  of the processes has proposed what that value should be
• Hypotheses: reliable communication, but processes may fail
• Consensus problem:
  • Every process Pi begins in the undecided state
  • Proposes a value Vi ∈ D (i = 1, …, N)
  • Processes communicate with one another, exchanging
    values
  • Each process then sets the value of a decision variable di
    and enters the decided state, in which it may no
    longer change di (i = 1, …, N)
Consensus (2)

[Figure: P1 proposes V1 = proceed and P2 proposes V2 = proceed; P3
proposes V3 = abort but crashes. The consensus algorithm leads the
correct processes to decide d1 = d2 = proceed.]
Consensus (3)
• Properties to satisfy:
  • Termination: Eventually each correct process
    sets its decision variable
  • Agreement: the decision value of all correct
    processes is the same:
    Pi and Pj correct ⇒ di = dj (i, j = 1, …, N)
  • Integrity: If the correct processes all proposed
    the same value, then any correct process in the
    decided state has chosen that value
Consensus (4)
• Consensus in a synchronous system:
  • Use of basic multicast
  • At most f processes may crash
  • f + 1 rounds are necessary
  • Delay of one round is bounded by a timeout
  • Valuesi r: set of proposed values known to process pi
    at the beginning of round r
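A toy illustration of round-based consensus with crash faults, under strong simplifying assumptions (a process crashing in a round still delivers that round's messages, and the decision rule is "minimum of all known values"):

```python
# Toy f+1-round consensus: each live process sends its known value set to
# all; after f+1 rounds, the survivors apply a deterministic rule (min).
def sync_consensus(values, crash_in_round, f):
    known = {p: {v} for p, v in values.items()}
    alive = set(values)
    for r in range(1, f + 2):                  # rounds 1 .. f+1
        crashed = crash_in_round.get(r, set())
        msgs = [known[p] for p in alive]       # sent before crashing
        alive -= crashed
        for p in alive:
            for s in msgs:
                known[p] |= s                  # merge received value sets
    return {p: min(known[p]) for p in alive}   # deterministic decision

# Three processes; process 3 crashes in round 1; f = 1 so 2 rounds run:
decisions = sync_consensus({1: "a", 2: "b", 3: "c"},
                           crash_in_round={1: {3}}, f=1)
```

The extra round is what guarantees agreement: with at most f crashes, some round is crash-free, after which all survivors hold the same value set.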
Consensus (5)
• Interactive consistency problem: variant of the consensus
  problem
• Objective: correct processes must agree on a vector of values,
  one for each process
• Properties to satisfy:
  • Termination: Eventually each correct process sets its
    decision variable
  • Agreement: the decision vector of all correct processes is
    the same
  • Integrity: If Pi is correct, then all correct processes decide
    on Vi as the ith component of their vector
Consensus (6)
• Byzantine generals problem: variant of the consensus
  problem
• Objective: a distinguished process supplies a value that the
  others must agree upon
• Properties to satisfy:
  • Termination: Eventually each correct process sets its
    decision variable
  • Agreement: the decision value of all correct processes is
    the same:
    Pi and Pj correct ⇒ di = dj (i, j = 1, …, N)
  • Integrity: If the commander is correct, then all correct
    processes decide on the value that the commander
    proposed
Consensus (7)
• Byzantine agreement in a synchronous system:
• Example: a system composed of three processes (must
  agree on a binary value, 0 or 1)

  [Figure. Scenario 1: process j is faulty; the commander sends 1 to both
  nodes, but node j relays 0 to node i. Scenario 2: the commander is
  faulty; it sends 1 to node i and 0 to node j. In both scenarios node i
  receives conflicting values and cannot tell which process is faulty.]

• The number of faulty processes must be bounded
Consensus (8)
• For m faulty processes, n ≥ 3m + 1, where n denotes
  the total number of processes
• Interactive Consistency Algorithm: ICA(m), m > 0, where m denotes
  the maximal number of processes that may fail simultaneously
  • Sender: all nodes must agree upon its value
  • Receivers: all other processes

• If a process doesn't send a message, the receiving process
  will use a default value
• The ICA algorithm requires m + 1 rounds in order to achieve
  consensus
Consensus (9)
• Interactive Consistency Algorithm:

  Algorithm ICA(0)
  1. The sender sends its value to all the other n–1 processes
  2. Each process uses the value received from the sender,
     or the default value if no message is received
  End

  Algorithm ICA(m), m > 0
  1. The sender sends its value to all the other n–1 processes
  2. Let Vi be the value received by process i from the sender,
     or the default value if no message is received.
     Process i considers itself as a sender in ICA(m–1):
     it sends the value Vi to the n–2 other processes
  3. For each i, let Vj be the value received from process j (j ≠ i).
     Process i uses the value Choice(V1, …, Vn)
  End
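The recursive structure of ICA(m) can be illustrated in the fault-free case. This toy sketch shows only the recursion and the final vote, not the handling of lying processes; `choice` here is a simple majority pick, an assumed stand-in for Choice:

```python
from collections import Counter

# Majority pick over the collected values (stand-in for Choice).
def choice(values):
    return Counter(values).most_common(1)[0][0]

# Fault-free sketch of the ICA(m) recursion: each receiver relays the
# value it got via ICA(m-1), then votes over everything it has heard.
def ica(m, sender_value, n):
    if m == 0:
        return [sender_value] * (n - 1)           # each receiver uses it
    results = []
    for i in range(n - 1):                        # each receiver of ICA(m)
        relayed = ica(m - 1, sender_value, n - 1) # relay round, one fewer node
        results.append(choice([sender_value] + relayed))
    return results

# n = 4, m = 1 (satisfies n >= 3m + 1); a loyal commander says "attack":
decided = ica(1, "attack", n=4)
```

With no faults the vote is unanimous; the point of the m + 1 relay rounds is that the same majority vote still yields agreement when up to m processes lie.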
References

• Mourad Elhadef, PhD: lecture presentation
• Coulouris G. et al., Distributed Systems: Concepts and
  Design, Pearson, 2001
• Other presentations
• Wikipedia: www.wikipedia.com