Unit 3 – Distributed Mutual Exclusion and Deadlock / Distributed Computing (Part 1)
Part-A:
1) List the advantages and limitations of token-based algorithms for implementing distributed
mutual exclusion. (April/May 2024)
2) What is a false deadlock and when does it occur? (April/May 2024)
3) What are the various deadlock detection methods in a distributed system? State a common
factor in detecting the deadlock. (Nov/Dec 2023)
4) How beneficial is the Chandy–Misra–Haas algorithm in the AND model and the OR
model? (Nov/Dec 2023)
Part-B
---------------------------------------------------------------------------------------------------------------------
UNIT III/ DISTRIBUTED COMPUTING/ PART-1
Distributed mutual exclusion is a mechanism in distributed systems to ensure that only one
process at a time can access a shared resource or execute a critical section (CS).
Since shared variables like semaphores are not available in distributed systems, mutual exclusion
is achieved through message passing among processes, ensuring that no two processes can
be in the CS simultaneously.
There are three basic approaches for implementing distributed mutual exclusion
i) Token-based approach:
● A unique token is passed between processes.
● A process can enter its CS only if it holds the token.
● Advantage: Simple enforcement of mutual exclusion.
● Challenge: Token loss can cause delays or system failure. (Token loss may happen due to a
network link breakdown, or due to an implementation error, e.g., a bug where the code never
releases the token after use.)
ii) Non-token-based approach:
● Processes exchange messages to coordinate entry into the CS.
● A process enters the CS when a certain condition (e.g., based on timestamps) is met.
● Advantage: No token to manage or lose.
● Challenge: High message overhead.
iii) Quorum-based approach:
● A process must get permission from a subset (quorum) of other processes.
● Quorums overlap to ensure only one process accesses the CS at a time.
● Advantage: Scalable and fault-tolerant.
● Challenge: Complexity in determining quorum size and structure.
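The overlap property behind the quorum-based approach can be illustrated with a small sketch. It uses simple majority quorums (one possible construction; real systems may use grid or tree quorums instead), and the function name is illustrative:

```python
from itertools import combinations

def majority_quorums(sites):
    """Return all majority-sized subsets of `sites`. Any two majority
    quorums intersect, which is the overlap property that guarantees
    mutual exclusion in a quorum-based scheme."""
    sites = list(sites)
    q = len(sites) // 2 + 1          # majority quorum size
    return [set(c) for c in combinations(sites, q)]

# Every pair of majority quorums over 5 sites shares at least one site:
quorums = majority_quorums(range(5))
print(all(a & b for a in quorums for b in quorums))  # True
```

Because two majorities of N sites together contain more than N members, they must share at least one site, and that shared site can grant permission to only one requester at a time.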
Preliminaries
System Model:
The system consists of N sites (S1, S2, ..., SN), each running a single process (denoted by pi for
site Si). These processes communicate asynchronously over a network. To enter the critical
section (CS), a process sends REQUEST messages to other processes and waits for appropriate
replies before entering the CS. While waiting, the process cannot make further requests.
For token-based algorithms, an additional idle token state exists, where a site holds the token
but is not in the CS.
Performance metrics:
The performance of mutual exclusion algorithms is evaluated using four key metrics:
1. Message Complexity: The number of messages required per CS execution by a site.
2. Synchronization Delay: The time between one process exiting the CS and the next process
entering it. It includes any sequential message exchanges required for coordination among
sites. Minimizing synchronization delay is important for reducing wait times for CS access.
3. Response Time: The time interval a request waits for its CS execution to be over after its
request messages have been sent out.
4. System Throughput: The rate at which the system executes requests for the CS. If SD is the
synchronization delay and E is the average CS execution time, throughput = 1/(SD + E).
The performance of mutual exclusion algorithms varies depending on the load, which is defined
by the arrival rate of critical section (CS) requests. Two primary conditions are studied:
1. Low Load:
○ There is rarely more than one CS request in the system at any given time.
○ Under these conditions, performance is typically optimized, with minimal delays.
2. High Load:
○ There is always at least one pending CS request at a site.
○ A site initiates a new CS request immediately after completing the previous one.
○ Sites are seldom idle, resulting in higher contention for the CS.
Best-case and worst-case performance:
● Best-case performance often occurs under low load, where a site can execute the CS
quickly; the response time is approximately a round-trip message delay plus the CS
execution time (2T + E, where T is the average message delay and E is the CS execution time).
● Worst-case performance usually occurs under high load, where contention and delays are
maximized.
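The low-load best case is simple arithmetic, sketched below (function name and sample values are illustrative):

```python
def low_load_response_time(T, E):
    """Best-case response time under low load: one message delay T for the
    REQUEST, one T for the REPLY, then E for the CS execution (2T + E)."""
    return 2 * T + E

# e.g., average message delay T = 10 ms and CS execution time E = 5 ms:
print(low_load_response_time(10, 5))  # 25 (ms)
```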
Lamport's Algorithm:
Algorithm:
Requesting the critical section:
● When a site Si wants to enter the CS, it broadcasts a REQUEST(tsi, i) message to all other
sites and places the request on its own request_queuei (tsi is the timestamp of the request).
● When a site Sj receives the REQUEST(tsi, i) message, it places Si's request on
request_queuej and returns a timestamped REPLY message to Si.
Executing the critical section: Site Si enters the CS when both of the following conditions hold:
● L1: Si has received a message with a timestamp larger than (tsi, i) from all other sites.
● L2: Si's request is at the top of request_queuei.
Releasing the critical section:
● Upon exiting the CS, Si removes its request from the top of its queue and broadcasts a
timestamped RELEASE message to all other sites.
● When a site Sj receives a RELEASE message from Si, it removes Si's request from its queue.
The algorithm requires 3(N−1) messages per CS invocation: (N−1) REQUEST, (N−1) REPLY,
and (N−1) RELEASE messages.
Example:
Hence, the assumption that both Si and Sj can be in the CS at the same time leads to a
contradiction, proving that Lamport's algorithm ensures mutual exclusion.
Use:
● Mutual Exclusion: Lamport's algorithm is used in distributed systems to ensure that only
one process can access a shared resource (critical section) at a time.
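The L1/L2 entry conditions of Lamport's algorithm can be sketched as a single predicate. This is a simplified single-site view with illustrative names, not a full implementation; requests are (clock, site_id) pairs compared lexicographically, so ties on clock are broken by site id:

```python
def can_enter_cs(own_request, request_queue, latest_ts, n_sites, site_id):
    """Lamport's CS entry conditions (sketch):
    L1 - a message timestamped later than own_request has been received
         from every other site;
    L2 - own_request is at the front of the local request queue.
    latest_ts maps each other site to the highest clock value received
    from it so far; queue entries are (clock, site_id) pairs."""
    l1 = all(latest_ts.get(s, 0) > own_request[0]
             for s in range(n_sites) if s != site_id)
    l2 = min(request_queue) == own_request
    return l1 and l2

# Site 0 requested at clock 2; sites 1 and 2 have sent later-stamped
# messages, and no earlier request is queued, so site 0 may enter:
print(can_enter_cs((2, 0), [(2, 0), (3, 1)], {1: 4, 2: 5}, 3, 0))  # True
```

If any other site's queued request precedes `own_request`, L2 fails and the site must wait until a RELEASE removes that request from the queue.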
---------------------------------------------------------------------------------------------------------------------
Ricart–Agrawala algorithm
The Ricart–Agrawala algorithm is a distributed mutual exclusion algorithm that allows multiple
processes in a distributed system to request access to a shared resource (critical section) without
conflicts.
Each process sends a REQUEST message with a Lamport timestamp to all other processes and
waits for REPLY messages from them. If a process receives a REQUEST while also waiting for
the critical section, it compares timestamps. The process with the lower (earlier) timestamp gets
priority, and the other defers its reply until it finishes its critical section. The algorithm requires
2(N−1) messages per critical section access and ensures mutual exclusion with fewer messages
than Lamport's algorithm, but it may introduce delays.
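The reply/defer decision described above can be sketched as a small predicate (a simplified view with illustrative names; a full implementation would also track deferred replies and send them on CS exit):

```python
def should_defer_reply(my_request, incoming_request, in_cs=False):
    """Ricart–Agrawala reply rule (sketch): defer the REPLY if this site
    is executing the CS, or if it is also requesting and its own
    (timestamp, site_id) pair is smaller, i.e. has priority.
    my_request is None when this site is not requesting the CS."""
    if in_cs:
        return True               # finish the CS before replying
    if my_request is None:
        return False              # not competing: reply immediately
    return my_request < incoming_request

# S2's request (1, 2) arrives at S1, which is requesting with (1, 1):
print(should_defer_reply((1, 1), (1, 2)))  # True  (S1 has priority)
print(should_defer_reply(None, (1, 2)))    # False (S1 is not requesting)
```

The lexicographic comparison on (timestamp, site_id) is what breaks ties between requests with equal Lamport clocks.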
Algorithm:
● Requesting the CS: When a site Si wants to enter the CS, it broadcasts a timestamped
REQUEST message to all other sites. When a site Sj receives the REQUEST, it sends a
REPLY immediately if it is neither requesting nor executing the CS, or if it is requesting
but Si's request has a smaller timestamp; otherwise, it defers the reply.
● Executing the CS: Si enters the CS after it has received REPLY messages from all other sites.
● Releasing the CS: On exiting the CS, Si sends all deferred REPLY messages.
Example:
In Figure 9.7, sites S1 and S2 are each making requests for the CS and sending out REQUEST
messages to other sites. The timestamps of the requests are (1,1) and (1,2), respectively.
In Figure 9.8, S1 has received REPLY messages from all other sites and, consequently, enters the
CS.
In Figure 9.10, site S2 has received REPLY from all other sites and enters the CS next.
Token-Based Algorithms:
1. Token Mechanism:
○ A unique token is shared among sites. A site can only enter its critical section
(CS) if it holds the token.
○ A site can enter the CS repeatedly while holding the token, but it must eventually
send the token to another site that requests it.
2. Sequence Numbers:
○ Token-based algorithms use sequence numbers to identify requests, replacing the
need for timestamps.
○ Each request for the token includes a sequence number, which is incremented
independently by each site whenever it makes a new request.
○ The sequence number helps distinguish between old and current requests.
3. Correctness:
○ The correctness proof for mutual exclusion is simple: only the site holding the
token can enter the CS, ensuring mutual exclusion.
4. Key Challenges:
○ The primary challenges in token-based algorithms include ensuring:
■ Freedom from starvation (all sites eventually access the CS),
■ Freedom from deadlock (no deadlock situations arise),
■ Token loss detection and regeneration (handling situations where the
token is lost).
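The core token principle — only the token holder enters the CS — can be illustrated with a toy ring simulation. This is a hypothetical setup for intuition only (Suzuki–Kasami, discussed next, broadcasts REQUESTs rather than circulating the token on a ring, and a real algorithm must also detect and regenerate a lost token):

```python
def ring_token_simulation(n_sites, requesting, start_holder=0):
    """Toy simulation of a token circulating around a logical ring.
    Returns the order in which requesting sites enter the CS."""
    order = []
    pending = set(requesting)
    holder = start_holder
    while pending:
        if holder in pending:            # holding the token permits CS entry
            order.append(holder)
            pending.discard(holder)
        holder = (holder + 1) % n_sites  # pass the token along the ring
    return order

# Token starts at site 0; sites 1 and 3 want the CS:
print(ring_token_simulation(5, {1, 3}))  # [1, 3]
```

Mutual exclusion is immediate here: the token is unique, so at most one site can satisfy the entry condition at any moment.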
Suzuki–Kasami's Broadcast Algorithm:
In Suzuki–Kasami's algorithm (Algorithm 9.7), when a site wants to enter the critical section
(CS) but does not have the token, the following sequence of actions occurs:
1. The site broadcasts a REQUEST message to all other sites in the system.
2. If a site possesses the token:
○ It sends the token to the requesting site immediately upon receiving the
REQUEST message, unless it is currently executing the CS.
○ If the site is executing the CS, it will wait until it finishes its execution of the CS
before sending the token to the requesting site.
Although the basic idea underlying this algorithm may sound rather simple, there are two design
issues that must be efficiently addressed:
1. Distinguishing outdated REQUEST messages from current ones: due to variable message
delays, a site may receive a REQUEST after the corresponding request has already been
satisfied; such outdated requests must be recognized and ignored.
2. Determining which site has an outstanding request for the CS: when a site finishes
executing the CS, it must be able to identify which sites are still waiting so that it can
send the token to one of them.
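These design issues are resolved with per-site sequence numbers. A minimal sketch, using the RN array (highest request number received from each site) and LN array (request number last executed for each site) that the algorithm maintains; names follow the standard presentation, but the code is illustrative:

```python
def receive_request(RN, j, n):
    """Site receives REQUEST(j, n): record j's latest sequence number.
    Returns True if the message is outdated, i.e. a request from j with
    a number >= n was already seen."""
    outdated = n < RN[j]
    RN[j] = max(RN[j], n)
    return outdated

def has_outstanding_request(RN, LN, j):
    """After a CS exit, site j has an outstanding request iff its latest
    request number is one past the last request executed for it."""
    return RN[j] == LN[j] + 1

RN = [0, 2, 0]
print(receive_request(RN, 1, 3))   # False: current request, RN[1] becomes 3
print(receive_request(RN, 1, 2))   # True:  outdated duplicate, ignored
print(has_outstanding_request([1, 3, 0], [1, 2, 0], 1))  # True: site 1 waits
```

Comparing RN against LN after each CS execution is how the token holder decides which waiting site receives the token next.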