
UNIT III/ DISTRIBUTED COMPUTING/ PART-1

UNIT III DISTRIBUTED MUTEX AND DEADLOCK 10

Distributed Mutual Exclusion Algorithms: Introduction – Preliminaries – Lamport’s Algorithm –
Ricart–Agrawala’s Algorithm – Token-Based Algorithms – Suzuki-Kasami’s Broadcast
Algorithm; Deadlock Detection in Distributed Systems: Introduction – System Model –
Preliminaries – Models of Deadlocks – Chandy-Misra-Haas Algorithm for the AND Model
and OR Model.

Must Study Topics (be extra thorough with the below topics):

1) Lamport’s Algorithm
2) Ricart–Agrawala’s Algorithm
3) Suzuki-Kasami’s Broadcast Algorithm
4) Chandy-Misra-Haas Algorithm for the AND Model and OR Model

Part-A:

1) List the advantages and limitations of token-based algorithms for implementing distributed
mutual exclusion. (April/May 2024)
2) What is a false deadlock and when does it occur? (April/May 2024)
3) What are the various deadlock detection methods in a distributed system? State a common
factor in detecting the deadlock. (Nov/Dec 2023)
4) How beneficial is the Chandy-Misra-Haas algorithm in the AND model and the OR
model? (Nov/Dec 2023)

Part-B

1) Outline the Chandy-Misra-Haas algorithm to detect deadlock in the OR model and illustrate
the algorithm with an example. (April/May 2024)
2) Outline Ricart and Agrawala’s algorithm for implementing mutual exclusion in a distributed
system and illustrate the algorithm. (April/May 2024)
3) State Lamport’s algorithm with its use, limitations, and benefits. Compare this with any two
token-based algorithms. (Nov/Dec 2023)
4) Explain the system model with its preliminaries and how it is characterized in the models of
deadlock. (Nov/Dec 2023)

---------------------------------------------------------------------------------------------------------------------


Distributed mutual exclusion:

Distributed mutual exclusion is a mechanism in distributed systems to ensure that only one
process at a time can access a shared resource or execute a critical section (CS).
Since shared variables like semaphores are not available in distributed systems, mutual exclusion
is achieved through message passing among processes, ensuring that no two processes can
be in the CS simultaneously.

There are three basic approaches for implementing distributed mutual exclusion
i) Token-based approach:
● A unique token is passed between processes.
● A process can enter its CS only if it holds the token.
● Advantage: Simple enforcement of mutual exclusion.
● Challenge: Token loss can cause delays or system failure. (Token loss may happen due to a
network link breakdown or an implementation error, e.g., a bug where the token is never
released after use.)
ii) Non-token-based approach:
● Processes exchange messages to coordinate entry into the CS.
● A process enters the CS when a certain condition (e.g., based on timestamps) is met.
● Advantage: No token to manage or lose.
● Challenge: High message overhead.
iii) Quorum-based approach:
● A process must get permission from a subset (quorum) of other processes.
● Quorums overlap to ensure only one process accesses the CS at a time.
● Advantage: Scalable and fault-tolerant.
● Challenge: Complexity in determining quorum size and structure.
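To see why overlapping quorums enforce mutual exclusion, consider majority quorums: any two majorities of N sites must share at least one site, and that shared site can never grant permission to two processes at once. A small check (the values below are illustrative, not from the notes):

```python
from itertools import combinations

# Toy illustration of the quorum-overlap property for N = 5 sites:
# every majority (3-site) quorum intersects every other one.
N = 5
sites = set(range(1, N + 1))
majority = N // 2 + 1
quorums = [set(q) for q in combinations(sites, majority)]

# Each pair of quorums shares at least one site, so no two processes
# can simultaneously collect permission from a full quorum.
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
assert len(quorums) == 10   # C(5, 3) distinct majority quorums
```

Majorities are only one valid choice; other quorum constructions (e.g., grid quorums) trade quorum size against the number of distinct quorums.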

Preliminaries

System Model:
The system consists of N sites (S1, S2, ..., SN), each running a single process (denoted by pi for
site Si). These processes communicate asynchronously over a network. To enter the critical
section (CS), a process sends REQUEST messages to other processes and waits for appropriate
replies before entering the CS. While waiting, the process cannot make further requests.

A site can be in three states:


1. Requesting the CS – Waiting for permission to enter.
2. Executing the CS – Actively using the critical section.
3. Idle – Not requesting or executing the CS.


For token-based algorithms, an additional idle token state exists, where a site holds the token
but is not in the CS.

Notations: The following notations are used in the mutual exclusion algorithms:

i) N denotes the number of processes or sites involved in invoking the critical section,
ii) T denotes the average message delay, and
iii) E denotes the average critical section execution time.

Requirements of mutual exclusion algorithms:


Mutual exclusion algorithms must satisfy three key properties:
1. Safety: Only one process can be in the critical section (CS) at any given time, ensuring
mutual exclusion.
2. Liveness: The system must be free of deadlock and starvation. All requesting processes
should eventually get access to the CS in a finite time.
3. Fairness: Each process gets a fair opportunity to enter the CS, with requests generally
being served in the order they are made (based on logical timestamps).

Performance metrics:
The performance of mutual exclusion algorithms is evaluated using four key metrics:

1. Message Complexity:(Number of messages per CS execution.)


○ This metric measures the number of messages that need to be exchanged for a site
to execute a critical section (CS). A lower message complexity is generally
preferred as it indicates a more efficient algorithm.

2. Synchronization Delay:(Time between one process exiting and another entering the CS.)
○ This is the time taken after a site exits the CS before the next site can enter. It
includes any sequential message exchanges required for coordination among sites.
Minimizing synchronization delay is important for reducing wait times for CS
access.


3. Response Time:(Time from sending a request to CS completion.)


○ This is the total time a requesting process waits for its CS execution to complete
after sending out its request messages. It does not include the time spent waiting at
the site before the request is sent. A shorter response time indicates better
performance.

4. System Throughput: (Rate of CS executions, influenced by synchronization delay and CS
execution time.) If SD is the synchronization delay and E is the average critical section
execution time, then the throughput is given by the following equation:

system throughput = 1 / (SD + E)

The performance of mutual exclusion algorithms varies depending on the load, which is defined
by the arrival rate of critical section (CS) requests. Two primary conditions are studied:
1. Low Load:
○ There is rarely more than one CS request in the system at any given time.
○ Under these conditions, performance is typically optimized, with minimal delays.
2. High Load:
○ There is always at least one pending CS request at a site.
○ A site initiates a new CS request immediately after completing the previous one.
○ Sites are seldom idle, resulting in higher contention for the CS.

Best- and worst-case performance:
● Best-case performance often occurs under low load, where a site can execute the CS
quickly, and the response time is approximately equal to a round-trip message delay plus
the CS execution time (2T + E).
● Worst-case performance usually occurs under high load, where contention and delays are
maximized.
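As a quick numeric check of the two formulas above (the values of T and E here are hypothetical, purely for illustration):

```python
# Worked example of the low-load performance formulas.
# T and E are hypothetical illustrative values, not measurements.
T = 5.0    # average message delay (ms)
E = 20.0   # average CS execution time (ms)

# Low load: response time is roughly a round-trip delay plus the CS itself.
response_time = 2 * T + E          # 2T + E
assert response_time == 30.0       # ms

# Throughput = 1 / (SD + E); taking SD = T as an example.
SD = T
throughput = 1 / (SD + E)          # CS executions per ms
assert abs(throughput - 0.04) < 1e-12
```

Note how a long CS execution time E dominates both metrics once E is much larger than the message delay T.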


Lamport Algorithm:

Lamport’s algorithm ensures mutual exclusion in a distributed system by using message
passing with timestamps to manage the order of requests for a critical section (CS).

Algorithm:
1. Requesting the CS:
● When a site Si wants to enter the CS, it sends a timestamped REQUEST message to all
other sites and places the request on its own request queue.
● When a site Sj receives the REQUEST, it places it on its request queue (ordered by
timestamp) and returns a timestamped REPLY to Si.
2. Executing the CS: Site Si enters the CS when both of the following conditions hold:
● L1: Si has received a message with a timestamp larger than its own request’s timestamp
from every other site.
● L2: Si’s own request is at the top of its request queue.
3. Releasing the CS:
● On exiting the CS, Si removes its request from its own queue and sends a timestamped
RELEASE message to all other sites.
● When Sj receives the RELEASE, it removes Si’s request from its request queue.
Each CS invocation thus costs 3(N − 1) messages: (N − 1) REQUEST, (N − 1) REPLY, and
(N − 1) RELEASE messages.

Example:

Figure 9.3: Sites S1 and S2 Request Access to the CS


1. Site S1 sends a request (1,1) to all other sites.
2. Site S2 sends a request (1,2) to all other sites.
3. Each site adds these requests to its request queue, ordered by timestamp.
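The request/reply/release steps can be sketched as a minimal single-threaded simulation (the class and method names below are hypothetical, and real sites exchange these messages asynchronously over a network):

```python
import heapq

# Minimal sketch of Lamport's mutual exclusion rules. Each Site keeps a
# Lamport clock and a timestamp-ordered request queue of (clock, site id).
class Site:
    def __init__(self, sid, sites):
        self.sid = sid
        self.sites = sites        # shared list of all sites
        self.clock = 0
        self.queue = []           # request queue, a min-heap of (ts, sid)
        self.replies = set()      # sites that replied to our current request

    def request_cs(self):
        self.clock += 1
        ts = (self.clock, self.sid)
        heapq.heappush(self.queue, ts)
        self.replies = set()
        for s in self.sites:      # broadcast REQUEST to all other sites
            if s is not self:
                s.on_request(ts, self)

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts[0]) + 1
        heapq.heappush(self.queue, ts)
        sender.on_reply(self.sid)  # REPLY is always sent (never deferred)

    def on_reply(self, sid):
        self.replies.add(sid)

    def can_enter_cs(self):
        # L2: own request heads the queue; L1: every other site has replied.
        return (bool(self.queue) and self.queue[0][1] == self.sid
                and len(self.replies) == len(self.sites) - 1)

    def release_cs(self):
        self.queue = [r for r in self.queue if r[1] != self.sid]
        heapq.heapify(self.queue)
        for s in self.sites:      # broadcast RELEASE to all other sites
            if s is not self:
                s.on_release(self.sid)

    def on_release(self, sid):
        self.queue = [r for r in self.queue if r[1] != sid]
        heapq.heapify(self.queue)

# demo: S1 requests before S2, so S1's smaller timestamp wins
sites = []
for i in (1, 2, 3):
    sites.append(Site(i, sites))
s1, s2, s3 = sites
s1.request_cs()
s2.request_cs()
assert s1.can_enter_cs() and not s2.can_enter_cs()
s1.release_cs()
assert s2.can_enter_cs()
```

Here `can_enter_cs` checks exactly the two entry conditions used in the proofs below: the site’s own request heads its queue, and every other site has replied.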


Theorem 9.1: Lamport’s Algorithm Achieves Mutual Exclusion


Assumption: Suppose two processes, Si and Sj, are in the critical section at the same time.
Conditions (L1 and L2): For both processes to be in the CS, they must both:
● Have their own request at the top of their request queue.
● Have received replies from all other processes.
Contradiction:
● Assume Si has an earlier timestamp than Sj.
● By the FIFO (First-In-First-Out) rule, Si 's request should be at the top of Sj's queue
before Sj can enter the CS.
● However, if Sj enters the CS first, it would mean that its request was processed before Si
's, even though Si's timestamp is smaller—this creates a contradiction.


Hence, the assumption that both Si and Sj can be in the CS at the same time leads to a
contradiction, proving that Lamport’s algorithm ensures mutual exclusion.

Theorem 9.2: Lamport’s Algorithm is Fair


Lamport's algorithm ensures fairness by executing requests in the order of their timestamps.
Assumption: Suppose process Sj gets to enter the critical section (CS) before process Si, even
though Si 's request has an earlier timestamp (meaning Si requested first).
1. Conditions for CS Entry: For Sj to enter the CS, it must satisfy two conditions:
○ L1: Its own request is at the top of its request queue.
○ L2: It has received a REPLY from all other processes, acknowledging their
requests.
2. Contradiction:
○ Since Si's request has an earlier timestamp, it should be placed ahead of Sj's
request in all request queues, including Sj’s queue.
○ This means Sj shouldn’t be able to enter the CS until Si does, as Si's request
should be processed first.
3. Conclusion: The assumption that Sj enters the CS before Si leads to a contradiction
because request queues are ordered by timestamps. Hence, Lamport's algorithm is
fair, as processes are allowed to enter the CS in the correct order based on their
timestamps.
Hence, Lamport's algorithm is fair, as every process is served in the order of its request without
starvation.

Use:
● Mutual Exclusion: Lamport's algorithm is used in distributed systems to ensure that only
one process can access a shared resource (critical section) at a time

---------------------------------------------------------------------------------------------------------------------


Ricart–Agrawala algorithm

The Ricart–Agrawala algorithm is a distributed mutual exclusion algorithm that allows multiple
processes in a distributed system to request access to a shared resource (critical section) without
conflicts.

It is an optimization of Lamport’s mutual exclusion algorithm, aiming to minimize the number
of messages exchanged.

Each process sends a REQUEST message with a Lamport timestamp to all other processes and
waits for REPLY messages from them. If a process receives a REQUEST while also waiting for
the critical section, it compares timestamps. The process with the lower (earlier) timestamp gets
priority, and the other defers its reply until it finishes its critical section. The algorithm requires
2(N−1) messages per critical section access and ensures mutual exclusion with fewer messages
than Lamport's algorithm, but it may introduce delays.

Algorithm:
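A minimal single-threaded sketch of the message rules described above (the class and method names are hypothetical; real processes exchange these messages asynchronously). The deferred-REPLY logic in `on_request` is what eliminates the separate RELEASE messages of Lamport’s algorithm:

```python
# Minimal sketch of the Ricart-Agrawala rules.
class RASite:
    def __init__(self, sid, sites):
        self.sid = sid
        self.sites = sites       # shared list of all sites
        self.clock = 0           # Lamport clock
        self.my_ts = None        # timestamp of our outstanding request
        self.in_cs = False
        self.replies = set()     # sites that have sent us a REPLY
        self.deferred = []       # sites whose REPLY we postponed

    def request_cs(self):
        self.clock += 1
        self.my_ts = (self.clock, self.sid)
        self.replies = set()
        for s in self.sites:     # broadcast REQUEST to all other sites
            if s is not self:
                s.on_request(self.my_ts, self)

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts[0]) + 1
        # Defer the REPLY if we are in the CS, or we are requesting with
        # an earlier (smaller) timestamp; otherwise reply immediately.
        if self.in_cs or (self.my_ts is not None and self.my_ts < ts):
            self.deferred.append(sender)
        else:
            sender.on_reply(self.sid)

    def on_reply(self, sid):
        self.replies.add(sid)
        if len(self.replies) == len(self.sites) - 1:
            self.in_cs = True    # REPLY from everyone: enter the CS

    def release_cs(self):
        self.in_cs = False
        self.my_ts = None
        for s in self.deferred:  # deferred REPLYs double as the "release"
            s.on_reply(self.sid)
        self.deferred = []

# demo: S2's REPLY to S1 is immediate; S1's REPLY to S2 is deferred
# until S1 leaves the CS
sites = []
for i in (1, 2, 3):
    sites.append(RASite(i, sites))
s1, s2, s3 = sites
s1.request_cs()
s2.request_cs()
assert s1.in_cs and not s2.in_cs
s1.release_cs()
assert s2.in_cs
```

Each CS execution costs (N − 1) REQUESTs plus (N − 1) REPLYs, i.e., 2(N − 1) messages, matching the performance figure below.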


Example:
In Figure 9.7, sites S1 and S2 are each making requests for the CS and sending out REQUEST
messages to other sites. The timestamps of the requests are (1,1) and (1,2), respectively.

In Figure 9.8, S1 has received REPLY messages from all other sites and, consequently, enters the
CS

In Figure 9.9, S1 exits the CS and sends a REPLY message to site S2.


In Figure 9.10, site S2 has received REPLY from all other sites and enters the CS next.

Performance: For each CS execution, the Ricart–Agrawala algorithm requires N − 1 REQUEST
messages and N − 1 REPLY messages. Thus, it requires 2(N − 1) messages per CS execution.
The synchronization delay in the algorithm is T.

Theorem 9.3: Ricart–Agrawala Algorithm Achieves Mutual Exclusion


Proof (by contradiction):
● Assume two processes, Si and Sj, are in the critical section (CS) simultaneously.
● Let Si's request have higher priority than Sj's request, meaning Si's timestamp is smaller.
● For Si to have a higher priority, it must have received Sj 's REQUEST after sending its
own REQUEST.
● For Sj to be in the CS at the same time as Si, it must have received a REPLY from Si.
● However, since Si's request has a higher priority, it would have deferred the REPLY to
Sj until after it exited the CS.
Thus, it is impossible for both Si and Sj to be in the CS at the same time, proving that the Ricart–
Agrawala algorithm ensures mutual exclusion.

Token-Based Mutual Exclusion Algorithms:

1. Token Mechanism:
○ A unique token is shared among sites. A site can only enter its critical section
(CS) if it holds the token.
○ A site can enter the CS repeatedly while holding the token, but it must eventually
send the token to another site that requests it.
2. Sequence Numbers:
○ Token-based algorithms use sequence numbers to identify requests, replacing the
need for timestamps.


○ Each request for the token includes a sequence number, which is incremented
independently by each site whenever it makes a new request.
○ The sequence number helps distinguish between old and current requests.
3. Correctness:
○ The correctness proof for mutual exclusion is simple: only the site holding the
token can enter the CS, ensuring mutual exclusion.
4. Key Challenges:
○ The primary challenges in token-based algorithms include ensuring:
■ Freedom from starvation (all sites eventually access the CS),
■ Freedom from deadlock (no deadlock situations arise),
■ Token loss detection and regeneration (handling situations where the
token is lost).

Suzuki-Kasami’s Broadcast Algorithm

In Suzuki-Kasami's algorithm (Algorithm 9.7), when a site wants to enter the critical section
(CS) but does not have the token, the following sequence of actions occurs:
1. The site broadcasts a REQUEST message to all other sites in the system.
2. If a site possesses the token:
○ It sends the token to the requesting site immediately upon receiving the
REQUEST message, unless it is currently executing the CS.
○ If the site is executing the CS, it will wait until it finishes its execution of the CS
before sending the token to the requesting site.

This mechanism ensures that:


● Sites that do not have the token can request it efficiently by broadcasting their need to all
other sites.
● Mutual exclusion is maintained since the token is only handed off once the current holder
has completed its critical section execution.
Variables used in the algorithm:
RN[i] (maintained at each site): request number for site i. It tracks the latest request number
received from each site.
LN[i] (carried with the token): last executed request number for site i. It tracks the last request
number for which the token was granted to each site.
Note: only the site currently holding the token can access or modify LN.
Token Queue: a queue, carried with the token, that holds the IDs of sites waiting for the token.


Although the basic idea underlying this algorithm may sound rather simple, there are two design
issues that must be efficiently addressed:
1. How to distinguish an outdated REQUEST message from a current REQUEST message
(handled by the sequence numbers in RN).
2. How to determine which site has an outstanding request for the CS when the token is
released (handled by comparing RN against the token’s LN).


Key Points from the Suzuki-Kasami Algorithm Explanation:


1. Priority for Requests:
○ After executing the critical section (CS), a site gives priority to other sites with
pending requests for the CS, even if it has its own pending requests.
○ This makes the algorithm asymmetric. Unlike symmetric algorithms (like Ricart
and Agrawala’s), a site may retain the token even if it doesn’t need access to the
CS.
2. Correctness:
● Mutual exclusion is guaranteed because there is only one token in the system at any
given time, and only the site holding the token can enter the CS.
3. Finite-Time Access:
● A requesting site is guaranteed to enter the CS in finite time. This is because:
○ Token request messages are delivered in finite time.
○ Once a site receives a request, the requesting site is placed in the token queue in
finite time.
○ Since there are at most N−1 requests ahead of it, the requesting site will get the
token and enter the CS eventually.
4. Performance:
● The algorithm is simple and efficient.
● If a site holds the idle token when it makes a request, no message is needed, and the
synchronization delay is zero.
● If the requesting site does not hold the token, it requires N messages (where N is the
number of sites) to obtain the token.
● The synchronization delay is either 0 or T (the time it takes to send the token).
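The RN/LN bookkeeping described above can be sketched as follows (the class and method names are hypothetical; the sketch assumes sites are stored in a shared list indexed by site id − 1, and arrays are indexed 1..N for readability):

```python
from collections import deque

# Minimal sketch of Suzuki-Kasami. RN lives at every site;
# LN and the waiting queue travel with the token.
class Token:
    def __init__(self, n):
        self.LN = [0] * (n + 1)   # LN[i]: last request of site i served
        self.queue = deque()      # ids of sites waiting for the token

class SKSite:
    def __init__(self, sid, sites, n):
        self.sid = sid
        self.sites = sites        # shared list of all sites
        self.RN = [0] * (n + 1)   # RN[i]: highest request number seen from i
        self.token = None
        self.in_cs = False

    def request_cs(self):
        if self.token is not None:        # idle token held: zero messages
            self.in_cs = True
            return
        self.RN[self.sid] += 1            # new sequence number
        for s in self.sites:              # broadcast REQUEST(sid, sn)
            if s is not self:
                s.on_request(self.sid, self.RN[self.sid])

    def on_request(self, j, sn):
        self.RN[j] = max(self.RN[j], sn)  # ignore outdated requests
        # Holding the token idle and the request is current: pass it on.
        if self.token is not None and not self.in_cs \
                and self.RN[j] == self.token.LN[j] + 1:
            t, self.token = self.token, None
            self.sites[j - 1].on_token(t)

    def on_token(self, token):
        self.token = token
        self.in_cs = True

    def release_cs(self):
        self.in_cs = False
        t = self.token
        t.LN[self.sid] = self.RN[self.sid]   # our request is now served
        for s in self.sites:                 # enqueue outstanding requests
            if self.RN[s.sid] == t.LN[s.sid] + 1 and s.sid not in t.queue:
                t.queue.append(s.sid)
        if t.queue:                          # hand token to the queue head
            j = t.queue.popleft()
            self.token = None
            self.sites[j - 1].on_token(t)

# demo: S1 starts with the idle token; S2 broadcasts and receives it
n = 3
sites = []
for i in (1, 2, 3):
    sites.append(SKSite(i, sites, n))
sites[0].token = Token(n)
sites[1].request_cs()
assert sites[1].in_cs and sites[0].token is None
sites[1].release_cs()                # no waiters: S2 keeps the idle token
assert sites[1].token is not None
```

The comparison RN[j] == LN[j] + 1 is the sequence-number test from design issue 1: it identifies a request that has not yet been served while discarding duplicates of already-served ones.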
