Unit 3 Process Synchronization and Deadlock

The document outlines the lecture topics for Unit 3 of a course on Process Synchronization and Deadlock, covering critical-section problems, synchronization mechanisms, and deadlock handling methods. It discusses concepts such as race conditions, critical sections, and various solutions like Peterson's solution and semaphores. The document emphasizes the importance of synchronization in multi-process systems to prevent data inconsistency and ensure mutual exclusion, progress, and bounded waiting.


NUTAN MAHARASHTRA VIDYA PRASARAK MANDAL’S

NUTAN COLLEGE OF ENGINEERING & RESEARCH (NCER)


Department of Computer Science & Engineering
-------------------------------------------------------------------------------------------------------------------------------------------------------

BTCOC402

Lecture Number – Topic to be covered
Unit 3 : Process Synchronization and Deadlock (07 Hrs)
1 – The critical-section problem, Critical regions
2 – Peterson's Solution, Synchronization Hardware
3 – Semaphores, Classical problems of synchronization
4 – Monitors
5 – Deadlocks: System model, Deadlock characterization, Methods for handling deadlocks
6 – Deadlock Prevention, Deadlock Avoidance
7 – Deadlock Detection, Recovery from Deadlock, Combined approach to deadlock handling
Submitted by:
Prof. S. H. Sable

--------------------------------------------------------------------------------------------------------------------------------------------------------
PROF. S. H. SABLE, NCER, BATU UNIVERSITY, LONERE
Department of Computer Science & Engineering
Nutan College Of Engineering & Research, Talegaon Dabhade, Pune - 410507

Operating System
Unit 3 – Process Synchronization and Deadlock
Process Synchronization
Process synchronization is the task of coordinating the execution of processes so that no two processes access the same shared data or resource at the same time. It is especially needed in a multi-process system when several processes run together and more than one of them tries to access the same shared resource or data simultaneously. Unsynchronized access can leave shared data inconsistent: a change made by one process is not necessarily seen by other processes that access the same data. To avoid this kind of inconsistency, the processes must be synchronized with each other. Processes that share resources with one another are called Cooperative Processes, while processes whose execution does not affect the execution of other processes are called Independent Processes.

Race Condition
A race condition is a situation where-
 The final output produced depends on the execution order of the instructions of different processes.
 Several processes compete with each other to access shared data.
The example below is a good illustration of a race condition.
Example:
The following illustration shows how inconsistent results may be produced if multiple processes
execute concurrently without any synchronization.
Consider-
Two processes P1 and P2 are executing concurrently.
Both the processes share a common variable named “count” having initial value = 5.
Process P1 tries to increment the value of count.
Process P2 tries to decrement the value of count.

In assembly (register-level) form, each process executes three instructions: (1) load count into a register, (2) increment or decrement the register, and (3) store the register back into count. [Figure: the register-level instructions of P1 and P2 are omitted in this copy.]

Now, when these processes execute concurrently without synchronization, different results may
be produced.
Case-01:
The execution order of the instructions may be-
P1(1), P1(2), P1(3), P2(1), P2(2), P2(3)
In this case, Final value of count = 5
Case-02:
The execution order of the instructions may be-
P2(1), P2(2), P2(3), P1(1), P1(2), P1(3)
In this case, Final value of count = 5
Case-03:
The execution order of the instructions may be-
P1(1), P2(1), P2(2), P2(3), P1(2), P1(3)
In this case, Final value of count = 6
Case-04:
The execution order of the instructions may be-
P2(1), P1(1), P1(2), P1(3), P2(2), P2(3)
In this case, Final value of count = 4
Case-05:
The execution order of the instructions may be-
P1(1), P1(2), P2(1), P2(2), P1(3), P2(3)
In this case, Final value of count = 4

It is clear from here that inconsistent results may be produced if multiple processes execute
concurrently without any synchronization.
Here, the order in which the processes execute changes the output. The processes are effectively racing each other, and whichever finishes last determines the final value. This is called a race condition.
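The interleavings above can be replayed deterministically. The following is a minimal sketch (not from the notes) that simulates the three-instruction load/modify/store sequence at register level, so each case can be checked by hand or by machine; the function and schedule names are illustrative.

```python
# Simulate the register-level increment/decrement of the shared 'count'
# so each interleaving from the cases above can be replayed exactly.

def run(schedule, initial=5):
    """Replay a schedule of (process, step) pairs on a shared 'count'."""
    count = initial
    reg = {"P1": None, "P2": None}   # each process's private register
    for proc, step in schedule:
        if step == 1:                # step 1: load count into the register
            reg[proc] = count
        elif step == 2:              # step 2: P1 adds 1, P2 subtracts 1
            reg[proc] += 1 if proc == "P1" else -1
        elif step == 3:              # step 3: store the register back to count
            count = reg[proc]
    return count

# Case-03: P1(1), P2(1), P2(2), P2(3), P1(2), P1(3)
case3 = [("P1", 1), ("P2", 1), ("P2", 2), ("P2", 3), ("P1", 2), ("P1", 3)]
# Case-04: P2(1), P1(1), P1(2), P1(3), P2(2), P2(3)
case4 = [("P2", 1), ("P1", 1), ("P1", 2), ("P1", 3), ("P2", 2), ("P2", 3)]
print(run(case3), run(case4))   # 6 4
```

Enumerating all interleavings this way shows the final value depends entirely on the schedule, which is exactly what a race condition means.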

1. Critical Section
 The critical section is the part of a program that accesses shared resources. The resource may be any resource in the computer, such as a memory location, a data structure, the CPU, or an I/O device.
 All shared variables or resources whose concurrent access can lead to data inconsistency are placed in the critical section.
 The critical section must not be executed by more than one process at the same time; the operating system therefore faces the difficulty of deciding when to allow and disallow processes from entering the critical section.

Four essential elements of the critical section (structure of the critical section for synchronization):

 Entry Section: The part of the process that decides the entry of a particular process. It ensures that only one process is present inside the critical section at any time, and does not allow any other process to enter the critical section if one process is already inside it.
 Critical Section: This part allows one process to enter and modify the shared variable.
 Exit Section: The exit section allows the other processes waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section. When a process exits the critical section, the necessary changes are made so that other processes can enter the critical section.
 Remainder Section: All other parts of the code, which are not in the critical, entry, or exit sections, are known as the remainder section.

2. Critical Section Problem-


If multiple processes access the critical section concurrently, the results produced may be inconsistent. This is called the critical-section problem. Synchronization mechanisms allow processes to access the critical section in a synchronized manner and thus avoid inconsistent results.

Criteria for synchronization mechanisms (a solution to the critical-section problem must satisfy the following conditions). Any synchronization mechanism proposed to handle the critical-section problem should meet the following criteria-
1. Mutual Exclusion-
The mechanism must ensure-
 If process Pi is executing in its critical section, then no other process can be executing in its critical section; only one process is present inside the critical section at any time.
 No other process can enter the critical section until the process already inside it completes.
2. Progress-
The mechanism must ensure-
 Progress means that if one process does not need to execute in the critical section, it must not stop other processes from entering the critical section.
 If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which process enters next.
 A process enters the critical section only if it wants to enter; a process is not forced into the critical section if it does not want to enter.

3. Bounded Wait-
The mechanism should ensure-
 Bounded waiting means that each process must have a bounded waiting time; it should not wait endlessly to access the critical section. Every process is guaranteed to enter the critical section before its bound expires.

4. Architectural Neutrality-
The mechanism should ensure-
 It can run on any architecture without any problem; there is no dependency on a particular hardware architecture.

Note-01:
Mutual Exclusion and Progress are the mandatory criteria.
They must be fulfilled by all the synchronization mechanisms.
Note-02:
Bounded waiting and Architectural neutrality are the optional criteria.
However, it is recommended to meet these criteria if possible.

I. Solutions To The Critical Section
A. Two process solution:
1. Using Turn Variable

The turn variable is a synchronization mechanism that provides synchronization between two processes. It uses a single shared variable, turn, to provide the synchronization.

do
{
    // initial section
    while (turn != i);   // wait until it is Pi's turn
    // critical section
    turn = j;
    // remainder section
} while (1);

Let processes P0 and P1 share a common variable turn, which can take the values 0 or 1.

Initially, turn is set to 0.
 Turn value = 0 means it is the turn of process P0 to enter the critical section.
 Turn value = 1 means it is the turn of process P1 to enter the critical section.

Working-
This synchronization mechanism works as explained in the following scenes-
Case-01: Process P0 arrives. It evaluates the condition turn != 0. Since turn is 0, the condition is false and the while loop breaks. Process P0 enters the critical section and executes. Now, even if P0 is preempted in the middle, P1 cannot enter the critical section; P1 cannot enter until P0 completes and sets turn to 1.
Case-02: Process P1 arrives. It evaluates the condition turn != 1. Since turn is 0, the condition is true and the while loop does not break. Process P1 is trapped in the while loop, which keeps it busy until turn becomes 1 and the condition breaks.
Case-03: Process P0 comes out of the critical section and sets turn to 1. The while-loop condition of P1 breaks, and P1, which was waiting for the critical section, enters it and executes. Now, even if P1 is preempted in the middle, P0 cannot enter the critical section; P0 cannot enter until P1 completes and sets turn to 0.

It is a two-process solution that uses a single shared variable, turn.

 It satisfies the mutual exclusion condition.
 It does not guarantee progress, since it follows a strict alternation approach: a process cannot enter the critical section out of turn, even if the other process does not want to enter.
 It ensures bounded waiting, since the processes execute turn-wise one by one and each process is guaranteed to get a chance; no process starves.
 It is architecturally neutral, since it does not require any support from the operating system or special hardware.
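The pseudocode above can be exercised with two threads. The following is a minimal sketch (an assumed Python translation, not from the notes); CPython's interpreter lock makes the plain reads and writes of turn safe enough for a demonstration, whereas real C code would need volatile/atomic accesses.

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # faster thread handoff for the busy-wait demo

turn = 0          # whose turn it is: 0 for P0, 1 for P1
count = 0         # shared data updated inside the critical section
ITERS = 500

def process(i, j):
    global turn, count
    for _ in range(ITERS):
        while turn != i:      # entry section: busy-wait until it is our turn
            pass
        count += 1            # critical section
        turn = j              # exit section: hand the turn to the other process

t0 = threading.Thread(target=process, args=(0, 1))
t1 = threading.Thread(target=process, args=(1, 0))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)   # 1000: strict alternation kept the shared counter consistent
```

The strict alternation is visible in the timing: each thread can enter only after the other hands over the turn, which is exactly why this solution fails the progress criterion.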

2. Using Flag array


The flag array is a synchronization mechanism that provides synchronization between two processes.
In this algorithm, flag[i] set to true means that process Pi wants to enter the critical section.
Initially the array is initialized to false.

do
{
    // initial section
    flag[i] = true;      // Pi declares its interest
    while (flag[j]);     // wait while Pj is interested
    // critical section
    flag[i] = false;
    // remainder section
} while (1);

It is implemented as follows: process Pi first sets flag[i] to true, meaning that Pi wants to enter the critical section. Pi then checks process Pj: if Pj is also ready (flag[j] is true), Pi waits until flag[j] becomes false, and only then enters the critical section.
In this solution we use a boolean array called flag with one cell per process, where false means not interested and true means interested. The solution satisfies mutual exclusion but fails on progress: if both processes set their flags at the same time, each waits for the other indefinitely, so it suffers from deadlock.

3. Peterson's solution using flag & turn


Peterson's solution provides a solution to the following problems:
It ensures that if a process is in the critical section, no other process is allowed to enter it. This property is termed mutual exclusion.
If more than one process wants to enter the critical section, it must be possible to decide which process enters first. This is termed progress.
There is a limit on the number of times other processes can enter the critical section after a process has requested entry and before that request is granted. This is termed bounded waiting.
It provides platform neutrality, as the solution runs in user mode and does not require any support from the kernel.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);

 The variable turn denotes whose turn it is to enter its critical section. i.e., if turn == i, then
process Pi is allowed to execute in its critical section.
 If a process is ready to enter its critical section, the flag array is used to indicate that. For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical section. Initially both flags are false.
 After completing the critical section, a process sets its own flag to false, indicating that it no longer wishes to execute in the critical section.
We now prove that this solution is correct. We need to show that Peterson’s Solution preserves
all three synchronization criteria:

1. Mutual exclusion is preserved.


2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Proof of 1:

 Pi enters its critical section only if either flag[j] == false or turn == i.
 If both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true.
 These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but not both. Hence one of the processes, say Pj, must have successfully executed its while statement, whereas Pi had to execute at least one additional statement ("turn == j").
 However, at that time flag[j] == true and turn == j, and this condition persists as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

Proof of 2 and 3:

 Process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only place it can wait. Once Pj exits its critical section, flag[j] becomes false, and Pi can enter its critical section.
 If Pj is not ready to enter the critical section, then flag[j] == false and Pi can enter immediately. If Pj has set flag[j] = true and is also executing its while statement, then either turn == i or turn == j. If turn == i, Pi enters the critical section; if turn == j, Pj enters. However, once Pj exits its critical section, it resets flag[j] to false, allowing Pi to enter. If Pj then sets flag[j] to true again, it must also set turn to i. Hence, since Pi does not change the value of turn while executing its while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).

Disadvantages of Peterson’s Solution


 It involves Busy waiting
 It is limited to 2 processes.
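The algorithm above can be run directly with two threads. The following is a minimal sketch (an assumed Python translation, not from the notes); CPython's interpreter lock happens to give the sequentially consistent memory behaviour Peterson's algorithm requires, whereas optimized C/C++ would need atomics or memory fences.

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # faster thread handoff while busy-waiting

flag = [False, False]
turn = 0
count = 0                        # shared data protected by the algorithm
ITERS = 500

def peterson(i):
    global turn, count
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True           # declare interest
        turn = j                 # give priority to the other process
        while flag[j] and turn == j:
            pass                 # entry section: busy-wait
        count += 1               # critical section
        flag[i] = False          # exit section
        # remainder section

threads = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 1000: mutual exclusion kept the shared counter consistent
```

Unlike the turn-variable solution, a thread here can re-enter repeatedly while the other is uninterested, so progress is preserved.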

3. Semaphores:

For the solution of the critical-section problem, a synchronization tool known as the semaphore is used. A semaphore S is an integer variable that is accessed only through two standard operations, wait and signal. These operations were originally termed P (for wait, meaning to test) and V (for signal, meaning to increment). The wait operation is also called P, sleep, or down; the signal operation is also called V, wake-up, or up.
The classical definition of wait is
wait(S)
{
    while (S <= 0)
        ;          // busy-wait: do nothing
    S--;
}
The classical definition of signal is
signal(S)
{
    S++;
}
A critical section is surrounded by the two operations to implement process synchronization: wait(S) serves as the entry section and signal(S) as the exit section, with the critical section of process P in between. [Figure: P and V operations surrounding the critical section, omitted in this copy.]

ADVANTAGES OF SEMAPHORE
 Semaphores are machine independent, since their implementation and code live in the machine-independent part of the microkernel.
 They strictly enforce mutual exclusion and let processes enter the critical section one at a time (in the case of binary semaphores).
 With blocking semaphores, no processor time is wasted on busy waiting: a process does not spin to verify a condition before being allowed into the critical section.
 Semaphores allow flexible management of resources.
 They prevent multiple processes from entering the critical section, and mutual exclusion is achieved with significantly less overhead than with many other synchronization approaches.
DISADVANTAGES OF SEMAPHORE
 While using semaphores, if a low-priority process is in the critical section, then no higher-priority process can enter it; the higher-priority process must wait for the complete execution of the lower-priority process (priority inversion).
 The wait() and signal() operations must be invoked in the correct order, so programming with semaphores correctly is quite difficult.

Semaphores are of two types:


1) Binary Semaphore – Also known as a mutex lock. It can take only two values, 0 and 1, and its value is initialized to 1. It is used to implement the solution of the critical-section problem with multiple processes. If the value of the binary semaphore is 1, the value is set to 0 and the process is allowed to enter the critical section. If the value is 0, the process is blocked, not allowed to enter the critical section, and put to sleep in the waiting list.
Let there be two processes P1 and P2 and a semaphore s initialized to 1. If P1 enters its critical section, the value of s becomes 0. If P2 now wants to enter its critical section, it must wait until s > 0, which happens only when P1 finishes its critical section and performs a signal operation on s. This is how mutual exclusion is achieved.
Binary semaphores are mainly used for two purposes-
To ensure mutual exclusion.
To enforce the order in which processes must execute.

2) Counting Semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances. Consider a system with n units of a particular non-shareable resource; then n processes can use these n units at the same time, so access to the units is kept in the critical section. The counting semaphore is initialized to n. When a process enters the critical section, the value of the semaphore is decremented by 1; when a process exits the critical section, it is incremented by 1. A positive value indicates the number of processes that can still enter the critical section; a negative value indicates the number of processes blocked in the waiting list.

Example:

Suppose there is a resource with 4 instances, so we initialize S = 4.

Whenever a process wants the resource, it calls the P (wait) operation: if the resulting value of the counting semaphore is greater than or equal to 0, the process is allowed to enter the critical section; if it is less than 0, the process is not allowed to enter and is put to sleep in the waiting list. When a process is done using the resource, it calls the V (signal) operation. Once S reaches 0, further processes must wait until S becomes positive again.
For example, suppose four processes P1, P2, P3, P4 all call the wait operation on S (initialized to 4). If another process P5 wants the resource, it must wait until one of the four processes calls the signal operation and the value of the semaphore becomes positive.
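The 4-instance example can be demonstrated with Python's built-in counting semaphore. This is a minimal sketch (not from the notes); the max_in_use bookkeeping is illustrative instrumentation added to show the invariant, not part of the mechanism itself.

```python
import threading
import time

S = threading.Semaphore(4)       # 4 instances of the resource
state_lock = threading.Lock()    # protects the bookkeeping counters
in_use = 0
max_in_use = 0

def worker():
    global in_use, max_in_use
    S.acquire()                  # P / wait: take one instance (may block)
    with state_lock:
        in_use += 1
        max_in_use = max(max_in_use, in_use)
    time.sleep(0.01)             # use the resource
    with state_lock:
        in_use -= 1
    S.release()                  # V / signal: return the instance

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(max_in_use <= 4)   # True: never more than 4 holders at once
```

Even with 8 competing workers, the semaphore guarantees that at most 4 are inside the critical section at any moment; the remaining ones sleep in the semaphore's waiting list, just as described above.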

Difference between Counting and Binary Semaphores:

Definition-
Binary semaphore: a semaphore whose integer value ranges only over 0 and 1.
Counting semaphore: a semaphore whose counter can take multiple values; the value can range over an unrestricted domain.

Representation-
Binary semaphore: 0 means a process or thread is accessing the critical section and other processes must wait for it to exit; 1 means the critical section is free.
Counting semaphore: the value can range from 0 to N, where N is the number of processes or threads that may enter the critical section.

Mutual exclusion-
Binary semaphore: guaranteed, since just one process or thread can enter the critical section at a time.
Counting semaphore: not guaranteed by itself, since more than one process or thread can be in the critical section at a time.

Bounded wait-
Binary semaphore: not guaranteed; only one process is in the critical section, there is no limit on how long it stays, and other processes may starve.
Counting semaphore: guaranteed; a queue of waiting processes or threads is maintained and each gets a chance to enter the critical section, so there is no starvation.

Number of instances-
Binary semaphore: used for a single instance of a resource type; suitable when only one process may hold the resource at a time.
Counting semaphore: used for a resource type with any number of instances; can be used by any number of processes.

Problems:
1. A counting semaphore S is initialized to 7. Then 20 P operations and 15 V operations are performed on S. What is the final value of S?
Ans: The P operation (also called wait) decrements the value of the semaphore variable by 1, and the V operation (also called signal) increments it by 1.
Thus,
Final value of semaphore variable S
= 7 – (20 x 1) + (15 x 1)
= 7 – 20 + 15
= 2

2. A counting semaphore S is initialized to 10. Then 6 P operations and 4 V operations are performed on S. What is the final value of S?
Ans: As above, each P operation decrements the semaphore by 1 and each V operation increments it by 1.
Thus,
Final value of semaphore variable S
= 10 – (6 x 1) + (4 x 1)
= 10 – 6 + 4
= 8
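The arithmetic in both worked problems follows one formula: final = initial − (number of P operations) + (number of V operations). A minimal sketch (not from the notes, with an illustrative function name):

```python
# Each P (wait) decrements the semaphore by 1; each V (signal) increments it.

def final_semaphore_value(initial, p_ops, v_ops):
    """Final semaphore value after p_ops P operations and v_ops V operations."""
    return initial - p_ops + v_ops

print(final_semaphore_value(7, 20, 15))   # 2  (Problem 1)
print(final_semaphore_value(10, 6, 4))    # 8  (Problem 2)
```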

1. Classical Problems of Synchronization:

Several classical problems have been proposed to test synchronization schemes, such as:
A. PRODUCER-CONSUMER / BOUNDED-BUFFER PROBLEM:
The bounded-buffer (producer-consumer) problem is a classic example of concurrent access to a shared resource. A bounded buffer lets multiple producers and multiple consumers share a single buffer: producers write data into the buffer and consumers read data from it.
 Producers must block if the buffer is full.
 Consumers must block if the buffer is empty.

Problem Statement – We have a buffer of fixed size. A producer can produce an item and place it in the buffer. A consumer can pick up items and consume them. We must ensure that while a producer is placing an item in the buffer, a consumer is not simultaneously consuming an item. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores, Full and Empty: "Full" keeps track of the number of items in the buffer at any given time, and "Empty" keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially all slots are empty, so the number of full slots is 0
Empty = n // All n slots are empty initially
The mutex semaphore ensures mutual exclusion; the empty and full semaphores count the number of empty and full slots in the buffer.
In the producer, after an item is produced, a wait operation is carried out on empty, indicating that the number of empty slots has decreased by 1. Then a wait operation is carried out on mutex so that the consumer process cannot interfere. After the item is placed in the buffer, signal operations are carried out on mutex and full: the signal on mutex indicates that the consumer process can now act, and the signal on full indicates that the number of full slots has increased by 1.
Solution for Consumer –
do{
wait(full);
wait(mutex);
// Remove an item from the buffer
signal(mutex);
signal(empty);
// Consume the item
}while(true)
The wait operation on full indicates that the number of items in the buffer has decreased by 1. Then a wait operation is carried out on mutex so that the producer process cannot interfere. The item is then removed from the buffer. After that, a signal operation on mutex indicates that the producer process can now act, and the signal on empty indicates that the number of empty slots in the buffer has increased by 1.
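The full bounded-buffer scheme described above can be sketched with Python semaphores. This is a minimal sketch (not from the notes); the buffer size n = 5 and the item count of 20 are arbitrary choices for the demonstration.

```python
import threading
from collections import deque

N = 5
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # counts empty slots (initially n)
full = threading.Semaphore(0)    # counts full slots (initially 0)
consumed = []

def producer():
    for item in range(20):
        # Produce the item (here the item is just the number itself)
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # place the item in the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buffer.popleft()  # remove an item from the buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        consumed.append(item)    # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))   # True: every item consumed, in order
```

The producer blocks on empty when all 5 slots are occupied and the consumer blocks on full when the buffer is drained, exactly matching the two blocking rules stated for the problem.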

READER-WRITER PROBLEM:

In this problem there are two types of processes: reader processes and writer processes. A reader process only reads the shared object, and a writer process writes to it. This is an important synchronization problem with several constraints:
 Several readers can read simultaneously.
 Only one writer can write at a time.
 When a writer is writing, no reader can read.
 If any reader is reading, all incoming writers must wait.
Three variables are used to implement the solution: mutex, wrt, and readcount. The semaphore mutex ensures mutual exclusion when readcount is updated, i.e., when any reader enters or exits the critical section; the semaphore wrt is shared by both readers and writers.

Reader process:
The structure of a reader process is as follows:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
// Reading is performed
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

In the above code, mutex and wrt are semaphores initialized to 1, and readcount is an integer variable initialized to 0. The mutex semaphore ensures mutual exclusion on readcount, while wrt handles the writing mechanism and is common to the reader and writer process code.
The variable readcount denotes the number of readers currently accessing the object. As soon as readcount becomes 1, a wait operation is performed on wrt, so a writer can no longer access the object. After a read operation is done, readcount is decremented; when it becomes 0, a signal operation is performed on wrt, so a writer can access the object again.

Writer process:
The structure of the writer process is as follows:
wait(wrt);
// Writing is performed
signal(wrt);
The writer requests entry to the critical section by performing a wait operation on wrt; after that, no other writer can access the object. When the writer is done writing into the object, it performs a signal operation on wrt.
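The reader and writer structures above translate directly to Python semaphores. This is a minimal sketch (not from the notes); the violation-checking counters are illustrative instrumentation that verifies the constraints, not part of the algorithm itself.

```python
import threading
import time

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writer / first-reader lock
readcount = 0

check = threading.Lock()         # instrumentation only
active_readers = 0
active_writers = 0
violations = 0

def reader():
    global readcount, active_readers, violations
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    with check:
        active_readers += 1
        if active_writers > 0:
            violations += 1      # a writer was active while reading
    time.sleep(0.005)            # reading is performed
    with check:
        active_readers -= 1
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    global active_writers, violations
    wrt.acquire()
    with check:
        active_writers += 1
        if active_writers > 1 or active_readers > 0:
            violations += 1      # overlap with another writer or a reader
    time.sleep(0.005)            # writing is performed
    with check:
        active_writers -= 1
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(4)]
threads += [threading.Thread(target=writer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(violations)   # 0: no writer ever overlapped a reader or another writer
```

Multiple readers overlap freely, while every writer runs alone, matching the four constraints listed above.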

B. DINING PHILOSOPHERS PROBLEM:

The dining philosophers problem states that five philosophers share a circular table, alternately eating and thinking. There is a bowl of rice in the middle of the table and five chopsticks, one between each pair of neighbours. A philosopher needs both the left and the right chopstick to eat, and a hungry philosopher may eat only if both chopsticks are available; otherwise the philosopher puts down any chopstick held and goes back to thinking.

[Figure: five philosophers around a circular table, with chopsticks between them and a rice bowl in the centre, omitted in this copy.]

The dining philosophers problem is a classic synchronization problem because it represents a large class of concurrency-control problems.

Solution of the Dining Philosophers Problem

One solution of the dining philosophers problem is to use a semaphore to represent each chopstick. A chopstick can be picked up by executing a wait operation on the corresponding semaphore and released by executing a signal operation on it.

The chopsticks are declared as:

semaphore chopstick[5];
Initially all elements of chopstick are set to 1, since the chopsticks are on the table and not yet picked up by any philosopher.
The structure of philosopher i is as follows:
do{
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
// Eat
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
// Think
}while(1);
In the above structure, wait operations are first performed on chopstick[i] and chopstick[(i+1) % 5], meaning that philosopher i has picked up the chopsticks on either side; then the philosopher eats. After that, signal operations are performed on chopstick[i] and chopstick[(i+1) % 5], meaning that philosopher i has finished eating and put down both chopsticks. The philosopher then goes back to thinking.

Difficulty with the solution
The above solution ensures that no two neighbouring philosophers eat at the same time, but it can lead to a deadlock: if all the philosophers pick up their left chopstick simultaneously, none of them can ever obtain a right chopstick, and all of them starve.
Some of the ways to avoid deadlock are as follows:

 Allow at most four philosophers at the table at a time.
 Have even-numbered philosophers pick up the right chopstick first and then the left, while odd-numbered philosophers pick up the left chopstick first and then the right.
 Allow a philosopher to pick up chopsticks only if both are available at the same time.
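The second avoidance strategy (asymmetric pickup order) can be sketched with Python semaphores. This is a minimal sketch (not from the notes); the iteration count of 10 meals per philosopher is arbitrary. Because the even/odd ordering breaks the circular wait, the program is guaranteed to terminate instead of deadlocking.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
ate = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Even philosophers grab right then left; odd ones grab left then right.
    # This asymmetric acquisition order prevents the circular wait.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(10):
        chopstick[first].acquire()
        chopstick[second].acquire()
        ate[i] += 1                  # eat
        chopstick[second].release()
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(ate)   # [10, 10, 10, 10, 10]: everyone ate, no deadlock
```

With the naive symmetric order (everyone left-first), the same program could hang forever; the asymmetry guarantees that at least one philosopher can always complete an acquisition and eventually release both chopsticks.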

4. Synchronization Hardware
 The process synchronization problem can be solved by software as well as by hardware.
 In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by disabling interrupts before the process enters its critical section and enabling them after it has exited the critical section:
Disable interrupts
Critical section
Enable interrupts
 Once a process is in the critical section it cannot be interrupted. This solution cannot be used in a multiprocessor environment, since processes run independently on different processors. Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the contents of a word, or to swap the contents of two words, as one uninterruptible (atomic) unit.
 Hardware locks are used to solve the process synchronization problem, which occurs when more than one process tries to access the same resource or variable; if more than one process updates a variable at the same time, a data inconsistency problem can occur. This approach is also called synchronization hardware in operating systems.

Hardware Synchronization Algorithms


The hardware solutions are as follows:
1. Test and Set
2. Swap
3. Unlock and Lock

I. Test and Set instruction

 Test and Set Lock (TSL) is a synchronization mechanism.
 It uses an atomic test-and-set instruction to provide synchronization among processes executing concurrently.
 A lock value of false means the critical section is currently vacant and no process is inside it.
 A lock value of true means the critical section is currently occupied and a process is inside it.

 The test-and-set algorithm uses a boolean variable lock, initialized to false, which controls entry into the critical section. Let us first see the algorithm and then try to understand what it is doing.

boolean TestAndSet(Boolean *target)


{
boolean rv=*target;
*target=true;
return rv;
}
Algorithm for TestAndSet
lock = false;
do{
while(TestAndSet(&lock)); // do nothing
// critical section
lock = false;
// remainder section
}while(TRUE);

 In the above algorithm the TestAndSet() function takes a boolean value and returns the same
value. TestAndSet() function sets the lock variable to true.
 When the lock variable is initially false, the TestAndSet(lock) condition checks TestAndSet
(false). As the TestAndSet function returns the same value as its argument, TestAndSet(false)
returns false. Now the while loop while(TestAndSet(lock)) breaks and the process enters the
critical section.
 As one process is inside the critical section and lock value is now 'true', if any other process
tries to enter the critical section then the new process checks for while(TestAndSet(true))
which will return true inside while loop and as a result the other process keeps executing the
while loop.
 As no queue is maintained for the processes stuck in the while loop, bounded waiting is not
ensured. Bounded waiting requires a bound on the number of times other processes may enter
the critical section after a process has requested entry and before that request is granted.
 In test and set algorithm the incoming process trying to enter the critical section does not wait
in a queue so any process may get the chance to enter the critical section as soon as the process
finds the lock variable to be false. It may be possible that a particular process never gets the
chance to enter the critical section and that process waits indefinitely.

The characteristics of this synchronization mechanism are-


 It ensures mutual exclusion.
 It is deadlock free.
 It does not guarantee bounded waiting and may cause starvation.
 It suffers from spin lock.
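The spinning behavior described above can be sketched in runnable Python. A real processor executes test-and-set as a single uninterruptible instruction; this sketch emulates that atomicity with an internal threading.Lock, and the class and variable names are our own, not part of any standard API.

```python
import threading

class TestAndSetLock:
    """Spin lock built on an (emulated) atomic test-and-set instruction."""

    def __init__(self):
        self._flag = False               # the shared 'lock' variable, initially false
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        # Atomically return the old value of the flag and set it to true.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self.test_and_set():       # spin (busy-wait) until we read false
            pass

    def release(self):
        self._flag = False               # reopen the critical section

counter = 0
lock = TestAndSetLock()

def worker():
    global counter
    for _ in range(10000):
        lock.acquire()
        counter += 1                     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                           # 40000 only if mutual exclusion held
```

The busy loop in acquire() is exactly the "suffers from spin lock" drawback listed above: a waiting thread burns CPU instead of sleeping.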

II. Swap instruction
 The Swap function uses two boolean variables, lock and key. Both lock and key are
initially initialized to false. The Swap algorithm is similar to the test and set algorithm: it
uses a temporary variable to set the lock to true when a process enters the critical
section of the program.

In the code below, when a process P1 wants to enter the critical section of the program it first
executes the while loop.

Definition
void swap(boolean *a, boolean *b)
{
boolean temp=*a;
*a=*b;
*b=temp;
}
Algorithm
do
{
key=true;
while(key==true)
swap(&lock ,&key);
critical section
lock=false;
remainder section
}while(True);

 As the key value is set to true just before the while loop, swap(lock, key) swaps the values of
lock and key. Lock becomes true and the key becomes false. In the next iteration the while
loop breaks and the process P1 enters the critical section. The values when P1 enters
the critical section are lock = true and key = false.
 Let's say another process, P2, tries to enter the critical section while P1 is inside it.
Let's take a look at what happens. In P2, key is set to true again after the outer
while loop, i.e. while(1), is entered.
 Now the second while loop in the program, while(key), is checked. As key is true, the
process enters the loop and swap(lock, key) is executed again. Since both key and lock
are true, they remain true after swapping. So the loop keeps executing and process
P2 keeps running the while loop until process P1 comes out of the critical section and makes
lock false.
 When Process P1 comes out of the critical section the value of lock is again set to false so that
other processes can now enter the critical section.
 When a process is inside the critical section, the other incoming processes trying to enter are
not maintained in any order or queue. Any of the waiting processes may get the chance to
enter the critical section when the lock becomes false, so a particular process may wait
indefinitely. Thus bounded waiting is not ensured in the Swap algorithm either. It does ensure
mutual exclusion, as only one process can be inside the critical section at a time.
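The swap-based lock can likewise be sketched in Python. Hardware would perform the exchange atomically; here the swap() helper emulates that with an internal lock, and one-element lists stand in for pass-by-reference boolean variables (both are assumptions of this sketch).

```python
import threading

_atomic = threading.Lock()  # emulates the atomicity of the hardware Swap instruction
lock = [False]              # shared 'lock' variable (one-element list so it is shared)
count = 0

def swap(a, b):
    # Atomically exchange the contents of two one-element cells.
    with _atomic:
        a[0], b[0] = b[0], a[0]

def worker():
    global count
    key = [True]            # per-process 'key' variable
    for _ in range(5000):
        key[0] = True
        while key[0]:       # spin: swap until we pull lock == False into key
            swap(lock, key)
        count += 1          # critical section
        lock[0] = False     # release: let some other waiting process in

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                # 15000 only if mutual exclusion held
```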

III. Unlock and lock
 Unlock and lock algorithm uses the TestAndSet method to control the value of lock. Unlock
and lock algorithm uses a variable waiting[i] for each process i. Here i is a positive integer i.e
1,2,3,... which corresponds to processes P1, P2, P3... and so on. waiting[i] checks if the process
i is waiting or not to enter into the critical section.
 All the processes are maintained in a ready queue before entering into the critical section. The
processes are added to the queue with respect to their process number. The queue is circular.

// Shared variable lock initialized to false,
// and per-process key and waiting[i] initialized to false
boolean lock;
boolean key;
boolean waiting[n];
while(1){
waiting[i] = true;
key = true;
while(waiting[i] && key)
key = TestAndSet(lock);
waiting[i] = false;
critical section
j = (i+1) % n;
while(j != i && !waiting[j])
j = (j+1) % n;
if(j == i)
lock = false;
else
waiting[j] = false;
remainder section
}

 In Unlock and lock algorithm the lock is not set to false as one process comes out of the critical
section. In other algorithms like swap and Test and set the lock was being set to false as the
process comes out of the critical section so that any other process can enter the critical section.
 But in Unlock and lock, once the ith process comes out of the critical section the algorithm
checks the waiting queue for the next process waiting to enter the critical section i.e jth
process. If there is a jth process waiting in the ready queue to enter the critical section, the
waiting[j] of the jth process is set to false so that the while loop while(waiting[i] && key)
becomes false and the jth process enters the critical section.
 If no process is waiting in the ready queue to enter the critical section the algorithm then sets
the lock to false so that any other process comes and enters the critical section easily.
 Since a ready queue is always maintained for the waiting processes, the Unlock and lock
algorithm ensures bounded waiting.

5. MONITORS
 Monitor in an operating system is simply a class containing variable_declarations,
condition_variables, various procedures (functions), and an initializing_code block that is used
for process synchronization.

Syntax of monitor in OS
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
procedure Pn (…) { …… }
initialization code (…) { … }
}

 Only one process can be active in a monitor at a time. Other processes that need to access the
shared variables in a monitor have to line up in a queue and are only granted access when the
previous process releases the shared variables.
 Components of Monitor in an operating system. The monitor is made up of four primary parts:
o Initialization: The code for initialization is included in the package, and we just need it
once when creating the monitors.
o Private Data: It is a feature of the monitor in an operating system to make the data
private. It holds all of the monitor's secret data, which includes private functions that may
only be utilized within the monitor. As a result, private fields and functions are not visible
outside of the monitor.
o Monitor Procedure: Procedures or functions that can be invoked from outside of the
monitor are known as monitor procedures.
o Monitor Entry Queue: Another important component of the monitor is the monitor
entry queue. It contains all of the threads waiting to call one of the monitor's
procedures.

Condition Variables
There are two sorts of operations we can perform on the monitor's condition variables: wait
and signal.
Consider a condition variable (y) is declared in the monitor:
y.wait(): The activity/process that applies the wait operation on a condition variable will be
suspended, and the suspended process is located in the condition variable's block queue.
y.signal(): If an activity/process applies the signal action on the condition variable, then one of
the blocked activity/processes in the monitor is given a chance to execute.
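The wait/signal behavior can be demonstrated with Python's threading.Condition, which plays the role of condition variable y below; the ready flag and events list are just instrumentation for this sketch.

```python
import threading

cond = threading.Condition()  # plays the role of condition variable y
ready = False
events = []

def waiter():
    with cond:                # entering the monitor
        while not ready:      # y.wait(): suspend until signaled
            cond.wait()
        events.append("resumed")

def signaler():
    global ready
    with cond:
        ready = True
        events.append("signaled")
        cond.notify()         # y.signal(): give one blocked process a chance

t = threading.Thread(target=waiter)
t.start()
signaler()
t.join()
print(events)                 # ['signaled', 'resumed'] in either interleaving
```

Note that cond.wait() atomically releases the monitor lock while the process sleeps, exactly as a monitor's wait operation must.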

Schematic view of a Monitor

Advantages of Monitor in OS
 Monitors offer the benefit of making concurrent or parallel programming easier and
less error-prone than semaphore-based solutions.
 It helps in process synchronization in the operating system.
 Monitors have built-in mutual exclusion.
 Monitors are easier to set up than semaphores.
 Monitors may be able to correct for the timing faults that semaphores cause.
Disadvantages of Monitor in OS
 Monitors must be implemented with the programming language.
 Monitor increases the compiler's workload.
 The implementer must understand which operating system features are available for
controlling critical sections in the parallel procedures.

Examples:

A. Solution to Producer consumer problem using monitors:


Mutual exclusion is achieved by placing the critical section of a program inside a monitor. In the
code below, the critical sections of the producer and consumer are inside the monitor Producer
Consumer. Once inside the monitor, a process is blocked by the Wait and Signal primitives if it
cannot continue.

monitor ProducerConsumer
condition full, empty;
int count;
procedure insert();
{
if (count == N) wait(full); // if buffer is full, block
put_item(item); // put item in buffer
count = count + 1; // increment count of full slots
if (count == 1) signal(empty); // if buffer was empty, wake consumer
}

procedure remove();
{
if (count == 0) wait(empty); // if buffer is empty, block
remove_item(item); // remove item from buffer
count = count - 1; // decrement count of full slots
if (count == N-1) signal(full); // if buffer was full, wake producer
}
count = 0;
end monitor;
Producer();
{
while (TRUE)
{
make_item(item); // make a new item
ProducerConsumer.insert; // call insert function in monitor
}
}
Consumer();
{
while (TRUE)
{
ProducerConsumer.remove; // call remove function in monitor
consume_item; // consume an item
}
}
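The pseudocode above can be rendered as runnable Python. We stand in for the monitor's implicit mutual exclusion with one lock shared by two Condition objects (the full and empty queues); the class and variable names and the buffer size N are our own choices for this sketch.

```python
import threading
from collections import deque

N = 5                                  # buffer capacity

class ProducerConsumer:
    """Monitor-style bounded buffer: one lock plus two condition queues."""
    def __init__(self):
        self._monitor = threading.Lock()
        self.not_full = threading.Condition(self._monitor)   # the 'full' queue
        self.not_empty = threading.Condition(self._monitor)  # the 'empty' queue
        self.buffer = deque()

    def insert(self, item):
        with self._monitor:
            while len(self.buffer) == N:    # if buffer is full, block
                self.not_full.wait()
            self.buffer.append(item)
            self.not_empty.notify()         # wake a waiting consumer

    def remove(self):
        with self._monitor:
            while not self.buffer:          # if buffer is empty, block
                self.not_empty.wait()
            item = self.buffer.popleft()
            self.not_full.notify()          # wake a waiting producer
            return item

pc = ProducerConsumer()
consumed = []

def producer():
    for i in range(20):
        pc.insert(i)

def consumer():
    for _ in range(20):
        consumed.append(pc.remove())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)      # items arrive in FIFO order: 0 through 19
```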
B. Solution to the Readers and Writers problem using Monitors

Monitors can be used to restrict access to the database. In this example, the read and write
functions used by processes which access the database are in a monitor called ReadersWriters. If
a process wants to write to the database, it must call the writeDatabase function. If a process
wants to read from the database, it must call the readDatabase function. Monitors use the
primitives Wait and Signal to put processes to sleep and to wake them up again.
In writeDatabase, the calling process will be put to sleep if the number of reading processes,
stored in the variable ReaderCount, is not zero. Upon exiting the readDatabase function, reading
processes check to see if they should wake up a sleeping writing process.

monitor ReadersWriters
condition OKtoWrite, OKtoRead;
int ReaderCount = 0;
Boolean busy = false;
procedure StartRead()
{
if (busy) // if database is not free, block
OKtoRead.wait();
ReaderCount++; // increment reader ReaderCount
OKtoRead.signal(); // let the next waiting reader in, if any
}
procedure EndRead()
{
ReaderCount-- ; // decrement reader ReaderCount
if ( ReaderCount == 0 )
OKtoWrite.signal();
}
procedure StartWrite()
{
if ( busy || ReaderCount != 0 )
OKtoWrite.wait();
busy = true;
}
procedure EndWrite()
{
busy = false;
if (OKtoRead.queue) // if readers are waiting
OKtoRead.signal();
else
OKtoWrite.signal();
}
Reader()
{
while (TRUE) // loop forever
{
ReadersWriters.StartRead();
readDatabase(); // call readDatabase function in monitor
ReadersWriters.EndRead();
}
}

Writer()
{
while (TRUE) // loop forever
{
make_data(&info); // create data to write
ReaderWriters.StartWrite();
writeDatabase(); // call writeDatabase function in monitor
ReadersWriters.EndWrite();
}
}
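The same monitor can be sketched in Python. One simplification to note: the pseudocode's check of OKtoRead's queue inspects the condition queue, which Python's Condition does not expose, so this sketch (names ours) simply notifies both queues in end_write.

```python
import threading

class ReadersWriters:
    """Monitor-style readers-writers bookkeeping, mirroring
    StartRead/EndRead/StartWrite/EndWrite from the pseudocode above."""
    def __init__(self):
        self._m = threading.Lock()
        self.ok_to_read = threading.Condition(self._m)
        self.ok_to_write = threading.Condition(self._m)
        self.reader_count = 0
        self.busy = False                 # True while a writer is active

    def start_read(self):
        with self._m:
            while self.busy:              # database busy with a writer: block
                self.ok_to_read.wait()
            self.reader_count += 1
            self.ok_to_read.notify()      # cascade: wake the next reader too

    def end_read(self):
        with self._m:
            self.reader_count -= 1
            if self.reader_count == 0:    # last reader wakes a writer
                self.ok_to_write.notify()

    def start_write(self):
        with self._m:
            while self.busy or self.reader_count != 0:
                self.ok_to_write.wait()
            self.busy = True

    def end_write(self):
        with self._m:
            self.busy = False
            self.ok_to_read.notify()      # wake waiting readers and writers;
            self.ok_to_write.notify()     # their while-loops re-check the state

rw = ReadersWriters()
rw.start_read(); rw.start_read()          # two readers may read concurrently
print(rw.reader_count)                    # 2
rw.end_read(); rw.end_read()
rw.start_write()                          # writer enters once all readers leave
print(rw.busy)                            # True
rw.end_write()
```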

C. Solution to dining philosophers problem using monitors


We illustrate monitor concepts by presenting a deadlock-free solution to the dining-philosophers
problem. The monitor controls access to the state and condition variables; it only decides
when a process may enter and exit the critical segment. This solution imposes the restriction that a philosopher may
pick up her chopsticks only if both of them are available.
We need to distinguish among three states in which we may find a philosopher. For this purpose,
we introduce the following data structure:
THINKING – When the philosopher does not want to gain access to either chopstick.
HUNGRY – When the philosopher wants to enter the critical section.
EATING – When the philosopher has got both chopsticks, i.e., has entered the section.
Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating:
(state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).

// Dining-Philosophers Solution Using Monitors
monitor DP
{
status state[5];
condition self[5];
// Pickup chopsticks
Pickup(int i)
{
// indicate that I’m hungry
state[i] = hungry;
// set state to eating in test() only if my left and right neighbors are not eating
test(i);
// if unable to eat, wait to be signaled
if (state[i] != eating)
self[i].wait();
}
// Put down chopsticks
Putdown(int i)
{
// indicate that I’m thinking
state[i] = thinking;
// if right neighbor R=(i+1)%5 is hungry and both of R’s neighbors are not eating,set R’s state to
eating and wake it up by signaling R’s CV
test((i + 1) % 5);
test((i + 4) % 5);
}
test(int i)
{
if (state[(i + 1) % 5] != eating && state[(i + 4) % 5] != eating && state[i] == hungry)
{ // indicate that I’m eating
state[i] = eating;
// signal() has no effect during Pickup(), but is important to wake up waiting hungry
philosophers during Putdown()
self[i].signal();
}
}

init()
{
// Execution of Pickup(), Putdown() and test() are all mutually exclusive,
// i.e. only one at a time can be executing inside the monitor
for (i = 0; i < 5; i++)
state[i] = thinking;
}
} // end of monitor
// This monitor-based solution is deadlock free and mutually exclusive in that
// no 2 neighbors can eat simultaneously

Difference Between Semaphore and Monitors

BASIS FOR COMPARISON | SEMAPHORE | MONITOR
Basic | A semaphore is an integer variable S. | A monitor is an abstract data type.
Action | The value of semaphore S indicates the number of shared resources available in the system. | The monitor type contains shared variables and the set of procedures that operate on the shared variables.
Access | When any process accesses the shared resources, it performs the wait() operation on S; when it releases them, it performs the signal() operation on S. | When any process wants to access the shared variables in the monitor, it must do so through the monitor's procedures.
Condition variables | Semaphores do not have condition variables. | Monitors have condition variables.

Deadlock
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process. Consider an example
where two trains are coming toward each other on the same track and there is only one track:
neither train can move once they are in front of each other. A similar situation occurs in operating
systems when two or more processes hold some resources and wait for resources held
by the other(s). For example, in the diagram below, process 1 holds resource 1 and needs to acquire
resource 2; similarly, process 2 holds resource 2 and needs to acquire resource 1. Process 1 and
process 2 are in deadlock, as each of them needs the other's resource to complete its execution
but neither of them is willing to release its resource.

6. System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. Examples of resource types are memory space, CPU cycles, directories
and files, and I/O devices like keyboards, printers and CD-DVD drives. When a system has two CPUs,
the resource type CPU has two instances. A process must request a resource before using it
and must release the resource after using it. The number of resources requested must not exceed
the total number of resources available in the system.
1. Request: Process needs to request the resource, if it is available it will be allocated else process
has to wait until it can obtain the resource.
2. Use: The process can operate on the resource (like when the resource is a printer, its
job/process is to print on the printer).
3. Release: The process releases the resource (like, terminating or exiting any specific process).

To illustrate a deadlocked state, consider a system with three CD-RW drives. Suppose each of
three processes holds one CD-RW drive and now requests another drive; the three processes
will be in a deadlocked state. Each is waiting for the event "CD-RW is released," which can be
caused only by one of the other waiting processes. This is an example of deadlock with the same
resource type.
Deadlock may also involve different resource types.
Example: Consider a system with one printer and one DVD drive. Suppose that process P1 is
holding the DVD drive and process P2 is holding the printer. If P1 requests the printer and P2
requests the DVD drive, a deadlock occurs.
7. Deadlock Characterization:
7.1 Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
a) Mutual Exclusion
At least one resource must be held in a non-sharable mode, that is, only one process at a time
can use this resource. If another process requests that resource, the requesting process must be
delayed until the resource has been released. In the diagram below, there is a single instance of
Resource 1 and it is held by Process 1 only.

b) Hold and Wait
A process must be holding at least one resource and waiting to acquire additional resources that
are currently being held by other processes. In the diagram given below, Process 2 holds
Resource 2 and Resource 3 and is requesting the Resource 1 which is held by Process 1.

c) No Preemption
Resources cannot be preempted, that is, a resource can be released only voluntarily by the process
holding it, after that process has completed its task. In the diagram below, Process 2 cannot
preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it
voluntarily after its execution is complete.

d) Circular Wait
A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource
held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource held
by process Pn, and Pn is waiting for a resource that is held by P0.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly, Process
2 is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait loop.

All four conditions must hold for a deadlock to occur. The circular wait condition implies the
hold and wait condition, so the four conditions are not completely independent.
7.2 Resource Allocation Graph
 Deadlock can be described more precisely by a directed graph called the system resource
allocation graph. The graph consists of a set of vertices ‘V’ and a set of edges ‘E’. The set of
vertices ‘V’ is partitioned into two different types of nodes: P = {P1, P2, …….Pn}, the
set of all the active processes in the system, and R = {R1, R2, …….Rn}, the set of all the
resource types in the system.
 A directed edge from process Pi to resource type Rj is denoted by Pi → Rj. It signifies that
process Pi has requested an instance of resource type Rj and is waiting for that resource. A
directed edge from resource type Rj to process Pi signifies that an instance of
resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request
edge and Rj → Pi is called an assignment edge.

 A process is represented as a circle and a resource as a rectangle. If a resource type has more than
one instance, each instance is represented by a dot within the rectangle. A request edge points only to the
rectangle, whereas an assignment edge must also designate one of the dots in the rectangle.
 When a process Pi requests an instance of resource type Rj, a request edge is inserted in the
resource allocation graph. When this request can be fulfilled, the request edge is transformed into an
assignment edge. When the process no longer needs access to the resource, it releases the resource
and as a result the assignment edge is deleted.

Resource Allocation graph with deadlock

The resource allocation graph shown in the figure above depicts the following situation. The sets P, R, E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1,P2 → R3,R1 → P2,R2 → P2,R2 → P1,R3 → P3}
The resource instances are:
Resource R1 has one instance
Resource R2 has two instances.
Resource R3 has one instance
Resource R4 has three instances.
The process states are:
Process P1 is holding an instance of R2 and waiting for an instance of R1.
Process P2 is holding an instance of R1 and R2 and waiting for an instance R3.
Process P3 is holding an instance of R3.
From the definition of a resource-allocation graph, it can be shown that, if the graph contains no
cycles, then no process in the system is deadlocked.

 If the graph does contain a cycle, then a deadlock may exist. If each resource type has
exactly one instance, then a cycle implies that a deadlock has occurred. If the cycle
involves only a set of resource types, each of which has only a single instance, then a
deadlock has occurred. Each process involved in the cycle is deadlocked. In this case, a
cycle in the graph is both a necessary and a sufficient condition for the existence of
deadlock.
 If each resource type has several instances, then a cycle does not necessarily imply that
a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock.

Suppose that process P3 requests an instance of resource type R2. Since no resource instance is
currently available, a request edge P3 → R2 is added to the graph. At this point, two minimal
cycles exist in the system:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is
held by process P3. Process P3 is waiting for either process P1 or process P2 to release resource
R2. In addition, process P1 is waiting for process P2 to release resource R1

The following example shows the resource allocation graph with a cycle but no deadlock.
P1 -> R1 -> P3 -> R2 -> P1

Resource Allocation graph with a cycle but no deadlock

However, there is no deadlock. Observe that process P4 may release its instance of resource type
R2. That resource can then be allocated to P3, breaking the cycle. In summary, if a
resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a
cycle, then the system may or may not be in a deadlocked state.
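Deadlock detection on a resource-allocation graph thus reduces to cycle detection. Below is a depth-first-search sketch in Python, applied to the example's edge sets; the function name and graph encoding are our own.

```python
def has_cycle(edges):
    """DFS cycle detection on a directed graph given as (src, dst) pairs."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS path / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:        # back edge: cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Edge set from the first example: request edges Pi -> Rj, assignment edges Rj -> Pi
E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                    # False: no cycle, hence no deadlock
print(has_cycle(E + [("P3", "R2")]))   # True: P3's request closes the cycle
```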
8. Methods for Handling Deadlocks
The problem of deadlock can be dealt with in the following four ways.

1. Deadlock Ignorance (Ostrich Method)


2. Deadlock Prevention
3. Deadlock avoidance (Banker's Algorithm)
4. Deadlock detection & recovery

Deadlock Ignorance:
In the deadlock ignorance method the OS acts as if deadlock never occurs and completely
ignores it even if it does occur. This method is appropriate only if deadlock occurs very
rarely. The algorithm is very simple: if a deadlock occurs, simply reboot the system
and act as if the deadlock never occurred. That is why it is called the Ostrich
Algorithm.
Advantages: The Ostrich Algorithm is relatively easy to implement and is effective in most cases.
It avoids the overhead of deadlock handling by ignoring the presence of deadlocks.
Disadvantages:
Ostrich Algorithm does not provide any information about the deadlock situation.
It can lead to reduced performance of the system as the system may be blocked for a long time.

It can lead to a resource leak, as resources are not released when the system is blocked due to
deadlock.

9. Deadlock Prevention:
Deadlock prevention provides a set of methods for ensuring that at least one of the necessary
conditions of deadlock cannot hold. For a deadlock to occur, each of the four necessary
conditions must hold. By ensuring that at least one of these conditions cannot hold, we can
prevent the occurrence of a deadlock.
Advantages:
 Convenient when applied to resources whose state can be saved and restored easily
 Works well for processes that perform a single burst of activity.
Disadvantages:
 Inefficient
 Delays process initiation
 Future resource requirements must be known
1. Mutual Exclusion:
The mutual exclusion condition must hold for non-sharable resources. For example,
a printer cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not
require mutually exclusive access and thus cannot be involved in a deadlock. An example is
read-only files, which are in a shared condition. If several processes attempt to open a read-only
file at the same time, they can be granted simultaneous access to the file. A process
never needs to wait for a sharable resource. In general, however, we cannot prevent deadlocks
by denying the mutual-exclusion condition, because some resources are intrinsically
non-sharable.

2. Hold and wait:


To ensure that the hold and wait condition never occurs in the system, we must guarantee
that whenever a process requests a resource it does not hold any other resources. One protocol
that can be used requires each process to request and be allocated all its resources before it
begins execution. The other protocol allows a process to request resources only when the
process has no resource.
To illustrate the difference between these two protocols, we consider a process that copies
data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer. If
all resources must be requested at the beginning of the process, then the process must initially
request the DVD drive, disk file, and printer. It will hold the printer for its entire execution,
even though it needs the printer only at the end.

The second method allows the process to request initially only the DVD drive and disk file. It
copies from the DVD drive to the disk and then releases both the DVD drive and the disk file.
The process must then again request the disk file and the printer. After copying the disk file to
the printer, it releases these two resources and terminates.
These protocols have two main disadvantages. First, resource utilization may be low, since
many of the resources may be allocated but unused for a long period. Second, starvation is
possible. A process that needs several popular resources may have to wait indefinitely,
because at least one of the resources that it needs is always allocated to some other process.

3. No Preemption:
To ensure that this condition does not hold, a protocol is used. If a process is holding some
resources and requests another resource that cannot be immediately allocated to it (that is, the
process must wait), then all resources the process is currently holding are preempted. In other
words, these resources are implicitly released. The preempted resources are added to the list of
resources for which the process is waiting. The process will be restarted only when it can
regain its old resources, as well as the new ones that it is requesting.

Alternatively, if a process requests some resources, we first check whether they are
available. If they are, we allocate them. If they are not available, we check whether they are
allocated to some other process that is waiting for additional resources. If so, we preempt the
desired resources from the waiting process and allocate them to the requesting process. If the
resources are neither available nor held by a waiting process, the requesting process must
wait. While it is waiting, some of its resources may be preempted, but only if another process
requests them. A process can be restarted only when it is allocated the new resources it is
requesting and recovers any resources that were preempted while it was waiting.
This protocol is often applied to resources whose state can be easily saved and restored
later, such as CPU registers and memory space. It cannot generally be applied to such
resources as printers and tape drives.

4. Circular Wait:
We can ensure that this condition never holds by imposing a total ordering on all resource
types and requiring that each process requests resources in an increasing order of enumeration.
Let R = {R1, R2, ... Rn} be the set of resource types. We assign to each resource type a
unique integer number, which allows us to compare two resources and to determine whether
one precedes another in our ordering. Formally, we define a one-to-one function F: R → N,
where N is the set of natural numbers. For example, if the set of resource types R includes
tape drives, disk drives and printers, then the function F might be defined as follows:
F (Tape Drive) = 1,
F (Disk Drive) = 5,
F (Printer) = 12.
We can now consider the following protocol to prevent deadlocks: each process can
request resources only in an increasing order of enumeration. That is, a process can initially
request any number of instances of a resource type, say Ri. After that, the process can request
instances of resource type Rj if and only if F(Rj) > F(Ri). Using the function defined
previously, a process that wants to use the tape drive and printer at the same time must first
request the tape drive and then request the printer.
Alternatively, we can require that a process requesting an instance of resource type Rj must
have released any resources Ri such that F(Ri) >= F(Rj). Note also that if several
instances of the same resource type are needed, a single request for all of them must be issued.
If these two protocols are used, then the circular-wait condition cannot hold.
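The ordering protocol can be sketched as a simple guard object that rejects out-of-order requests, using the F values from the example above (the class is illustrative, not a standard API):

```python
F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}  # ordering from the text

class OrderedRequester:
    """Allows a process to request resources only in increasing F order."""
    def __init__(self):
        self.highest = 0              # F-number of the highest resource held

    def request(self, resource):
        if F[resource] <= self.highest:
            raise RuntimeError(
                f"out-of-order request: F({resource}) = {F[resource]} "
                f"<= {self.highest}")
        self.highest = F[resource]    # record the new highest-ordered resource
        return resource

p = OrderedRequester()
p.request("tape_drive")   # F = 1: allowed
p.request("printer")      # F = 12 > 1: allowed

q = OrderedRequester()
q.request("printer")      # F = 12
try:
    q.request("tape_drive")   # F = 1 <= 12: protocol violation
except RuntimeError as e:
    print("rejected:", e)
```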

10. Deadlock Avoidance


Deadlock Avoidance requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime. With
this additional knowledge, it can decide for each request whether or not the process should wait.

To decide whether the current request can be satisfied or must be delayed, the system must
consider the resources currently available, the resources currently allocated to each process, and
the future requests and releases of each process. Given this a priori information, it is possible to
construct an algorithm that ensures that the system will never enter a deadlocked state. Such an
algorithm defines the deadlock-avoidance approach. The deadlock-avoidance algorithm
dynamically examines the resource-allocation state to ensure that there can never be a circular-wait
condition. The resource-allocation state is defined by the number of available and allocated
resources and the maximum demands of the processes.
Advantages:
 No preemption necessary
 Decisions are made dynamically
Disadvantages:
 Future resource requirements must be known
 Processes can be blocked for long periods
We have two deadlock-avoidance algorithms:
10.1 Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a
safe sequence. When a process requests an available resource, the system must decide if immediate
allocation leaves the system in a safe state. The system is in a safe state if there exists a safe
sequence of all processes.
A sequence <P1, P2, …, Pn> of ALL the processes in the system is a safe sequence if, for each Pi, the
resources that Pi can still request can be satisfied by the currently available resources plus the resources
held by all the Pj, with j < i. That is:
 If Pi resource needs are not immediately available, then Pi can wait until all Pj have
finished.
 When Pj is finished, Pi can obtain needed resources, execute, return allocated resources,
and terminate.
 When Pi terminates, Pi +1 can obtain its needed resources, and so on.
 If no such sequence exists, then the system state is said to be unsafe.
 If the system is in a safe state => no deadlock
 If the system is not in a safe state => possibility of deadlock
The OS cannot prevent processes from requesting resources in a sequence that leads to deadlock.
Avoidance => ensure that the system will never enter an unsafe state, preventing it from getting into deadlock.

To illustrate, we consider a system with twelve magnetic tape drives and three processes: P0, P1,
and P2. Process P0 requires ten tape drives, process P1 may need as many as four tape drives,
and process P2 may need up to nine tape drives.

Process    Maximum Needs    Current Needs
P0              10                5
P1               4                2
P2               9                2

Suppose that, at time t0, process P0 is holding five tape drives, process P1 is holding two
tape drives, and process P2 is holding two tape drives. (Thus, there are three free tape drives.) At
time t0, the system is in a safe state.
The sequence <P1, P0, P2> satisfies the safety condition. Process P1 can immediately be
allocated all its tape drives and then return them (the system will then have five available tape
drives); then process P0 can get all its tape drives and return them (the system will then have ten
available tape drives); and finally process P2 can get all its tape drives and return them (the
system will then have all twelve tape drives available).
A system can go from a safe state to an unsafe state. Suppose that, at time t1, process P2
requests and is allocated one more tape drive. The system is no longer in a safe state. At this
point, only process P1 can be allocated all its tape drives. When it returns them, the system will
have only four available tape drives. Since process P0 is allocated five tape drives but has a
maximum of ten, it may request five more tape drives. If it does so, it will have to wait, because
they are unavailable. Similarly, process P2 may request six additional tape drives and have to
wait, resulting in a deadlock. Our mistake was in granting the request from process P2 for one
more tape drive. If we had made P2 wait until either of the other processes had finished and
released its resources, then we could have avoided the deadlock.
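The tape-drive reasoning above can be expressed as a small greedy safety check (a Python sketch; the function name and the dictionary representation of processes are illustrative, not part of the text):

```python
def find_safe_sequence(available, max_need, allocation):
    """Greedy safety check for a single resource type (tape drives):
    repeatedly run any process whose remaining need fits in the free
    pool, then reclaim its drives when it finishes."""
    need = {p: max_need[p] - allocation[p] for p in max_need}
    finished, order = set(), []
    while len(finished) < len(max_need):
        progress = False
        for p in max_need:
            if p not in finished and need[p] <= available:
                available += allocation[p]   # p completes and returns its drives
                finished.add(p)
                order.append(p)
                progress = True
        if not progress:
            return None                      # no process can proceed: unsafe state
    return order

# Twelve tape drives; the state at time t0 described above.
max_need   = {"P0": 10, "P1": 4, "P2": 9}
allocation = {"P0": 5,  "P1": 2, "P2": 2}
print(find_safe_sequence(3, max_need, allocation))   # ['P1', 'P0', 'P2']

# After granting P2 one more drive (time t1), only 2 drives remain free.
allocation["P2"] = 3
print(find_safe_sequence(2, max_need, allocation))   # None: unsafe state
```

The second call reproduces the mistake described above: once P2 holds three drives, no ordering lets both P0 and P2 reach their maximums.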

6.2 Resource Allocation Graph Algorithm


In this graph a new type of edge, called a claim edge, has been introduced. A claim edge Pi -> Rj
indicates that process Pi may request resource Rj at some time in the future; it is represented by a
dashed line. When process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a
request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is
reconverted to a claim edge Pi -> Rj. Resources must be claimed a priori in the system. That
is, before process Pi starts executing, all its claim edges must already appear in the resource-
allocation graph. We can relax this condition by allowing a claim edge Pi -> Rj to be added to the
graph only if all the edges associated with process Pi are claim edges.
Now suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not result in the
formation of a cycle in the resource-allocation graph. We check for safety by using a cycle-
detection algorithm. If no cycle exists, then the allocation of the resource will leave the system in
a safe state. If a cycle is found, then the allocation will put the system in an unsafe state. In that
case, process Pi will have to wait for its requests to be satisfied.

To illustrate this algorithm, we consider the resource-allocation graph. Suppose that P2 requests
R2. Although R2 is currently free, we cannot allocate it to P2, since this action will create a
cycle in the graph. A cycle, as mentioned, indicates that the system is in an unsafe state. If P1
requests R2, and P2 requests R1, then a deadlock will occur.

6.3 Banker’s Algorithm


The resource-allocation-graph algorithm is not applicable to a resource allocation system
with multiple instances of each resource type. The deadlock avoidance algorithm that we
describe next is applicable to such a system but is less efficient than the resource-allocation
graph scheme. This algorithm is commonly known as the banker's algorithm.
The name was chosen because this algorithm could be used in a banking system to ensure
that the bank never allocates all its available cash in such a way that it can no longer satisfy the
needs of all its customers. When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need. This number may not exceed the
total number of resources in the system. Several data structures must be maintained to implement
the banker’s algorithm.
Let, n = number of processes
m = number of resources types
Available: A vector of length m indicates the number of available resources. If Available[j] = k,
there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then process
Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type allocated to each process.
If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k,
then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j].
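As a quick illustration of this formula (a minimal Python sketch; the sample rows reuse the Max and Allocation values of P0 and P1 from the example later in this section):

```python
# Need[i][j] = Max[i][j] - Allocation[i][j], computed row by row.
def need_matrix(max_demand, allocation):
    return [[m - a for m, a in zip(max_row, alloc_row)]
            for max_row, alloc_row in zip(max_demand, allocation)]

max_demand = [[7, 5, 3], [3, 2, 2]]          # Max rows for P0, P1
allocation = [[0, 1, 0], [2, 0, 0]]          # Allocation rows for P0, P1
print(need_matrix(max_demand, allocation))   # [[7, 4, 3], [1, 2, 2]]
```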

Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n-1.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
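Steps 1–4 translate almost directly into code (a Python sketch; the pass-based scan mirrors the walkthrough in the example later in this section, and the variable names are illustrative):

```python
def is_safe(available, allocation, need):
    """Safety algorithm (steps 1-4): returns a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)                       # step 1: Work = Available
    finish = [False] * n                         # Finish[i] = false for all i
    sequence = []
    progress = True
    while progress:                              # step 2: scan for a runnable Pi
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 3: Pi finishes and returns its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None     # step 4

available  = [3, 3, 2]                           # snapshot at time T0 (example below)
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe(available, allocation, need))      # [1, 3, 4, 0, 2] -> <P1,P3,P4,P0,P2>
```

Note that a safe sequence is not necessarily unique; this scan happens to find the same ordering as the worked example.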

Resource Request Algorithm


We describe the algorithm for determining whether requests can be safely granted.
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and
the old resource-allocation state is restored.
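The three steps, together with a safety check on the pretended state, can be sketched as follows (Python; updating the caller's vectors in place on success is an implementation choice, not part of the algorithm's definition):

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request algorithm: tentatively grant the
    request, and keep the new state only if it is safe."""
    if any(r > n for r, n in zip(request, need[pid])):     # step 1
        raise ValueError("process has exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):     # step 2: Pi must wait
        return False
    # step 3: pretend to allocate on copies of the state
    avail = [a - r for a, r in zip(available, request)]
    alloc = [row[:] for row in allocation]
    nd    = [row[:] for row in need]
    alloc[pid] = [x + r for x, r in zip(alloc[pid], request)]
    nd[pid]    = [x - r for x, r in zip(nd[pid], request)]
    # safety algorithm on the pretended state (pass-based scan)
    work, finish = list(avail), [False] * len(alloc)
    progress = True
    while progress:
        progress = False
        for i in range(len(alloc)):
            if not finish[i] and all(n <= w for n, w in zip(nd[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = progress = True
    if all(finish):                                        # safe: commit the state
        available[:], allocation[pid], need[pid] = avail, alloc[pid], nd[pid]
        return True
    return False                                           # unsafe: old state kept

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(request_resources(1, [1,0,2], available, allocation, need))  # True: granted
print(request_resources(4, [3,3,0], available, allocation, need))  # False: not available
print(request_resources(0, [0,2,0], available, allocation, need))  # False: unsafe
```

The three calls reproduce the example that follows: Request1 = (1,0,2) is granted, after which (3,3,0) by P4 and (0,2,0) by P0 must both be refused.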
Example
Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has ten instances, resource type B has five instances, and resource type C has
seven instances. Suppose that, at time T0 , the following snapshot of the system has been taken:
Process    Allocation    Max        Available
           A  B  C       A  B  C    A  B  C
P0         0  1  0       7  5  3    3  3  2
P1         2  0  0       3  2  2
P2         3  0  2       9  0  2
P3         2  1  1       2  2  2
P4         0  0  2       4  3  3
The content of the matrix Need = Max – Allocation.
Process    Need
           A  B  C
P0         7  4  3
P1         1  2  2
P2         6  0  0
P3         0  1  1
P4         4  3  1

We claim that the system is currently in a safe state.
Safe Sequence is calculated as follows:
1. Need is compared with Available. If Need <= Available, the resources are allocated to the
process, and the process will eventually release them.
2. If Need is greater than Available, the next process is compared.
3. In the above example, the Need of process P0 (7,4,3) is greater than Available (3,3,2),
so we move to the next process.
4. Next, the Need of process P1 (1,2,2) is less than Available (3,3,2) (Work), so P1 is granted
its resources and then releases them.
Work = Work + Allocation = (3,3,2) + (2,0,0) = (5,3,2)
The procedure is continued for all processes.
5. Next, the Need of process P2 (6,0,0) is greater than Available (5,3,2) (Work), so we move to the next process.
6. Next, the Need of process P3 (0,1,1) is less than Available (5,3,2) (Work), so P3 is granted
its resources and then releases them.
Work = Work + Allocation = (5,3,2) + (2,1,1) = (7,4,3)
7. Next, the Need of process P4 (4,3,1) is less than Available (7,4,3) (Work), so P4 is granted
its resources and then releases them.
Work = Work + Allocation = (7,4,3) + (0,0,2) = (7,4,5)
8. One cycle is completed; we again take the remaining processes in sequence.
Now the Need of process P0 (7,4,3) is less than Available (7,4,5), so P0 is granted its resources
and then releases them.
Work = Work + Allocation = (7,4,5) + (0,1,0) = (7,5,5)
9. Next, the Need of process P2 (6,0,0) is less than Available (7,5,5) (Work), so P2 is granted
its resources and then releases them.
Work = Work + Allocation = (7,5,5) + (3,0,2) = (10,5,7)

Indeed, the sequence <P1, P3, P4, P0, P2> satisfies the safety criteria.
Suppose now that process P1 requests one additional instance of resource type A and two
instances of resource type C, so Request1= (1,0,2). To decide whether this request can be
immediately granted, we first check that Request1 <= Available-that is, that
(1,0,2)<= (3,3,2), which is true. We then pretend that this request has been fulfilled, and we
arrive at the following new state:
Process    Allocation    Need       Available
           A  B  C       A  B  C    A  B  C
P0         0  1  0       7  4  3    2  3  0
P1         3  0  2       0  2  0
P2         3  0  2       6  0  0
P3         2  1  1       0  1  1
P4         0  0  2       4  3  1

We must determine whether this new system state is safe. To do so, we execute our safety
algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement.
Hence, we can immediately grant the request of process P1. However, when the system is in this
state, a request for (3,3,0) by P4 cannot be granted, since the resources are not available.
Furthermore, a request for (0,2,0) by P0 cannot be granted, even though the resources are
available, since the resulting state would be unsafe.
11. Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm,
then a deadlock situation may occur. In this environment the system must provide:
 An algorithm that examines the state of the system to determine whether a deadlock has
occurred
 An algorithm to recover from the deadlock
The detection algorithm is applied either to a system with a single instance of each resource type
or to a system with several instances of a resource type.

Advantages:
 Never delays process initiation
 Facilitates on-line handling
Disadvantages:
 Inherent preemption losses
 If the algorithm is invoked for every resource request, this will incur considerable overhead
in computation time.
11.1 Single Instance of each Resource type
If all resources have only a single instance, then we can define a deadlock-detection algorithm
that uses a variant of the resource-allocation graph called a wait-for graph. We obtain this graph
from the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges. The figure below shows a resource-allocation graph and its corresponding
wait-for graph.

Resource-Allocation Graph Corresponding wait-for graph

For single instance


Pi -> Pj (Pi is waiting for Pj to release a resource that Pi needs)
Pi -> Pj exists if and only if the RAG contains the two edges Pi -> Rq and Rq -> Pj for some resource Rq.
In the above resource-allocation graph, process P1 is holding resource R2 and requesting R1,
which is held by process P2. So in the wait-for graph there is an edge from P1 to P2.
A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect
deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm
that searches for a cycle in the graph.
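A sketch of this scheme in Python (the edge-list representation and the two-process deadlock example are illustrative; DFS with a recursion stack is one common way to detect the cycle):

```python
def wait_for_graph(request_edges, assignment_edges):
    """Collapse a RAG into a wait-for graph: Pi -> Pj iff Pi requests
    some Rq that is currently assigned to Pj."""
    wfg = {}
    for pi, rq in request_edges:                 # request edge  Pi -> Rq
        for rq2, pj in assignment_edges:         # assignment edge Rq -> Pj
            if rq == rq2:
                wfg.setdefault(pi, set()).add(pj)
    return wfg

def has_cycle(graph):
    """DFS coloring: a back edge to a GRAY node means a cycle (deadlock)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            if color.get(v, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and dfs(u) for u in graph)

# Hypothetical deadlock: P1 holds R2 and requests R1, P2 holds R1 and requests R2.
requests    = [("P1", "R1"), ("P2", "R2")]
assignments = [("R2", "P1"), ("R1", "P2")]
print(has_cycle(wait_for_graph(requests, assignments)))   # True
```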

11.2 Several Instances of a Resource type


The wait-for graph scheme is not applicable to a resource-allocation system with multiple
instances of each resource type. For this case the algorithm employs several data structures
similar to those used in the banker’s algorithm: Available, Allocation, and Request.
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process.
Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k, then
process Pi is requesting k more instances of resource type Rj.
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1,2, …, n, if Allocationi !=0, then
Finish[i] = false;
otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] = = false
(b) Requesti <=Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlock state.
Moreover, if Finish[i] == false, then process Pi is deadlocked.
Example:

Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C
(6 instances).
Snapshot at time T0:

Process    Allocation    Request    Available
           A  B  C       A  B  C    A  B  C
P0         0  1  0       0  0  0    0  0  0
P1         2  0  0       2  0  2
P2         3  0  3       0  0  0
P3         2  1  1       1  0  0
P4         0  0  2       0  0  2
Step 1: Work = [0,0,0]
Finish = [False, False, False, False, False]
Step 2: i = 0 is selected, as both Finish[0] = False and [0,0,0] <= [0,0,0]
Step 3: Work = [0,0,0] + [0,1,0] = [0,1,0] and Finish = [True, False, False, False, False]
Step 4: i = 2 is selected, as both Finish[2] = False and [0,0,0] <= [0,1,0]
Step 5: Work = [0,1,0] + [3,0,3] = [3,1,3] and Finish = [True, False, True, False, False]
Step 6: i = 1 is selected, as both Finish[1] = False and [2,0,2] <= [3,1,3]
Step 7: Work = [3,1,3] + [2,0,0] = [5,1,3] and Finish = [True, True, True, False, False]
Step 8: i = 3 is selected, as both Finish[3] = False and [1,0,0] <= [5,1,3]
Step 9: Work = [5,1,3] + [2,1,1] = [7,2,4] and Finish = [True, True, True, True, False]
Step 10: i = 4 is selected, as both Finish[4] = False and [0,0,2] <= [7,2,4]
Step 11: Work = [7,2,4] + [0,0,2] = [7,2,6] and Finish = [True, True, True, True, True]
Since all values in the Finish vector are True, there is no deadlock.
Sequence <P0, P2, P1, P3, P4> will result in Finish[i] = true for all i.
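The walkthrough above can be reproduced with a direct implementation of the detection algorithm (a Python sketch; it scans processes in index order, so the order in which processes finish may differ slightly from the walkthrough, but the conclusion is the same):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm: returns the list of deadlocked
    process indices (an empty list means no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                                   # step 1(a)
    # step 1(b): a process holding nothing is trivially finished
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                                          # step 2
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [w + a for w, a in zip(work, allocation[i])]  # step 3
                finish[i] = progress = True
    return [i for i in range(n) if not finish[i]]            # step 4

# Snapshot at time T0 from the example above.
available  = [0, 0, 0]
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
print(detect_deadlock(available, allocation, request))       # []: no deadlock
```

If P2 instead also requested one more instance of C (a hypothetical variant, with Request2 = [0,0,1]), the algorithm would report P1 through P4 as deadlocked.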
12. Recovery from Deadlock
When a detection algorithm determines that a deadlock exists, several alternatives exist. One
possibility is to inform the operator that a deadlock has occurred, and to let the operator deal with
the deadlock manually. The other possibility is to let the system recover from the deadlock
automatically.
In order to recover the system from deadlock, the OS considers either resources or processes.

12.1 For Resource


Preempt the resource
We can preempt one of the resources from its owner (a process) and give it to another process,
with the expectation that that process will complete its execution and release the resource sooner.
Choosing which resource to preempt, however, is difficult.

Rollback to a safe state
The system passes through various states before it reaches the deadlock state. The operating
system can roll the system back to a previous safe state. For this purpose, the OS needs to
implement checkpointing at every state. The moment we get into deadlock, we roll back all the
allocations to return to the previous safe state.

12.2 For Process


Kill a process
Killing a process can solve the problem, but the bigger concern is deciding which process to kill.
Generally, the operating system kills the process that has done the least amount of work so far.

Kill all process


This is not an advisable approach, but it can be used if the problem becomes very serious.
Killing all processes leads to inefficiency in the system because all the processes must then
execute again from the beginning.

13. Combined approach to deadlock handling:


Rather than attempting to design an OS facility that employs only one of these strategies, it
may be more efficient to use different strategies in different situations. Some of the approaches
are:
 Group resources into a number of different resource classes
 Use the linear ordering strategy defined previously for the prevention of circular wait
to prevent deadlocks between resource classes
 Within a resource class use the algorithm that is most appropriate for that class
Within each class following strategies could be used:
Swappable space
 Prevention of deadlocks by requiring that all of the required resources that may be
used be allocated at one time, as in the hold-and-wait prevention strategy
 This strategy is reasonable if the maximum storage requirements are known
Process resources
 Avoidance will often be effective in this category, because it is reasonable to expect
processes to declare ahead of time the resources that they will require in this class
 Prevention by means of resource ordering within this class is also possible
Main memory
 Prevention by preemption appears to be the most appropriate strategy for main
memory
 When a process is preempted, it is simply swapped to secondary memory, freeing
space to resolve the deadlock
Internal resources
 Prevention by means of resource ordering can be used

Prof. S. H. Sable
(Subject Incharge)

