OS UNIT-3
DEADLOCKS
A deadlock happens in an operating system when two or more processes, in order to complete their
execution, need a resource that is held by another process.
In the above diagram, process 1 holds resource 1 and needs to acquire resource 2. Similarly, process 2 holds
resource 2 and needs to acquire resource 1. Process 1 and process 2 are in deadlock, as each of them needs the
other's resource to complete its execution but neither of them is willing to relinquish its resource.
Deadlock Characterization
A deadlock can occur only if the following four Coffman conditions hold simultaneously. These conditions are not mutually exclusive.
• Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the diagram below, there is a
single instance of Resource 1 and it is held by Process 1 only.
• Hold and Wait
A process holds at least one resource while waiting for additional resources that are currently held by
other processes.
• No Preemption
A resource cannot be forcibly taken away from a process; it can be released only voluntarily by the process
holding it.
• Circular Wait
A process is waiting for the resource held by a second process, which is waiting for the resource held by
a third process, and so on, until the last process is waiting for a resource held by the first process. This forms
a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1. Similarly,
Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Methods of handling deadlocks: There are three approaches to deal with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance
3. Deadlock detection
These are explained as following below.
Deadlock Prevention
If we picture a deadlock as a table standing on four legs, then the four legs correspond to the four conditions
which, when they occur simultaneously, cause the deadlock.
If we break one of the legs, the table will certainly fall. The same happens with deadlock: if we can violate one
of the four necessary conditions and never let all of them occur together, we can prevent the deadlock.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one
process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be
used by more than one process at the same time, no process would ever have to wait for it. Hence, if we can
stop resources from behaving in a mutually exclusive manner, deadlock can be prevented. In practice this works
only for sharable resources such as read-only files; for intrinsically non-sharable resources such as printers,
mutual exclusion cannot be violated.
2. Hold and Wait
To violate this condition, we need a mechanism by which a process either doesn't hold any resource or doesn't
wait. That means a process must be assigned all the necessary resources before its execution starts, and it must
not wait for any resource once execution has begun.
!(Hold and wait) = !hold or !wait (the negation of hold-and-wait is: either you don't hold, or you don't wait)
3. No Preemption
Deadlock arises partly because a resource, once allocated, cannot be taken back from a process. However, if we
take a resource away from a process that is causing deadlock, we can prevent the deadlock.
This is not a good approach in general, because if we preempt a resource that a process is actively using, all the
work the process has done so far may become inconsistent.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource and require that a process request
resources only in increasing order of priority number. A process cannot request a resource with a lower number
than one it already holds, so no cycle can form in the resource allocation graph.
Among all the methods, violating Circular wait is the only approach that can be implemented practically.
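The resource-ordering idea can be sketched in a short program. Two threads each need both locks, but because both acquire them in the same global order (lowest number first), no circular wait can form. The thread and lock names here are illustrative, not from the notes above.

```python
import threading

# Assign each resource a fixed priority number; every thread must
# acquire resources in increasing priority order.
resource_1 = threading.Lock()   # priority 1
resource_2 = threading.Lock()   # priority 2

counter = 0

def worker():
    global counter
    for _ in range(1000):
        # Both threads take resource_1 before resource_2, so a cycle
        # (one holding 1 while waiting for 2, the other holding 2 while
        # waiting for 1) can never arise.
        with resource_1:
            with resource_2:
                counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000
```

If instead one thread acquired resource_2 first, the two threads could each grab one lock and wait forever for the other, which is exactly the circular wait the ordering rule forbids.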
Deadlock avoidance
Safe State and Unsafe State
A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some
order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. A safe
state is not a deadlocked state; conversely, a deadlocked state is always an unsafe state.
In an unsafe state, the operating system cannot prevent processes from requesting resources in a way that
leads to deadlock. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock.
The above Figure shows the Safe, unsafe, and deadlocked state spaces
Resource Allocation Graph:
It is a directed graph G = (V, E) containing a set of vertices V and a set of edges E. The vertices are
divided into two types: a set of processes, P = {P1, P2, P3, ..., Pn}, and a set of resources, R = {R1, R2,
R3, ..., Rm}. An edge from process Pi to resource Rj (Pi → Rj) indicates that Pi has requested resource Rj; it is
called a 'request edge'.
An edge from a resource Rj to a process Pi (Rj → Pi) indicates that Rj is allocated to process Pi; it is called
an 'assignment edge'. When a request is fulfilled, the request edge is transformed into an assignment edge.
Processes are represented as circles and resources as rectangles. The figure below is an example of the graph,
where process P1 holds R1 and requests R2.
To avoid deadlocks using the resource allocation graph, the graph is modified slightly by introducing a new
edge called a 'claim edge'. A claim edge from process Pi to resource Rj (Pi → Rj) indicates that Pi may
request Rj in the future. Its direction is the same as a request edge, but it is drawn with a dashed line, as
shown below.
The following are the various data structures that have to be created to implement Banker's algorithm.
If 'n' is the number of processes and 'm' is the number of resources.
• Max : An 'n × m' matrix indicating the maximum resources required by each process.
• Allocation : An 'n × m' matrix indicating the number of resources already allocated to each process.
• Need : An 'n × m' matrix indicating the number of resources still required by each process, where
Need[i][j] = Max[i][j] − Allocation[i][j].
• Available : A vector of size 'm' indicating the resources that are still available (not allocated
to any process).
• Request : A vector of size 'm' indicating the resources that process Pi has requested.
Each row of matrices "allocation" and "need" can be referred to as vectors. Then "allocation" indicates
the resources currently allocated to process Pi and "need" refers to resources required by Pi. The following
algorithm is used to determine whether the request can be safely granted or not.
• Step 1 - If Requesti ≤ Needi, proceed to step two; otherwise raise an exception saying the process
has exceeded its maximum claim.
• Step 2 - If Requesti ≤ Available, proceed to step three; otherwise block Pi because the resources are
not available.
• Step 3 - Pretend to allocate the resources to Pi as follows:
Available = Available − Requesti
Allocationi = Allocationi + Requesti
Needi = Needi − Requesti
Then run the safety algorithm on the resulting state. If the state is safe, the allocation is granted;
otherwise the tentative allocation is rolled back and Pi must wait.
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has 10 instances, B has 5 instances, and C has 7 instances. Suppose at time
t0 the following snapshot of the system has been taken:
Question: Is the system in a safe state? If yes, what is the safe sequence?
Apply the Safety algorithm to the given system.
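The snapshot table itself did not survive in these notes. The matrices below are the classic textbook snapshot for exactly this configuration (five processes, A=10, B=5, C=7 instances) and should be treated as an assumed example. A minimal safety-algorithm sketch:

```python
# Safety algorithm sketch. The Allocation/Max/Available values are the
# classic textbook snapshot for this configuration (assumed here, since
# the snapshot table is missing from the notes).
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
max_need   = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]

n, m = len(allocation), len(available)
# Need = Max - Allocation
need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]

def is_safe():
    work = available[:]          # resources currently free
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # Pick an unfinished process whose remaining need fits in Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend it runs to completion and releases its allocation.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, sequence   # no process can proceed: unsafe
    return True, sequence

safe, seq = is_safe()
print(safe, ['P%d' % i for i in seq])  # True ['P1', 'P3', 'P4', 'P0', 'P2']
```

With this data the system is safe: the safe sequence found is P1, P3, P4, P0, P2 (safe sequences are not unique; any order in which each process's need fits the accumulated Work vector is valid).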
Deadlock Recovery:
A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is
a time- and space-consuming process. Real-time operating systems use deadlock recovery.
1. Killing the process –
Either kill all the processes involved in the deadlock at once, or kill them one by one: after
killing each process, check for deadlock again, and keep repeating until the system
recovers. Killing processes breaks the circular wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock, and the preempted
resources are allocated to other processes so that the system has a possibility of recovering
from the deadlock. In this case, however, the system may go into starvation.
Race Condition:
When more than one process executes the same code, or accesses the same memory or
shared variable, there is a possibility that the output or the value of the shared variable will be
wrong. All the processes are racing, each claiming its output is correct; this situation is known
as a race condition. When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the accesses take place.
A race condition may occur inside a critical section: the result of executing multiple threads in
the critical section differs according to the order in which the threads execute. Race conditions
in critical sections can be avoided if the critical section is treated as an atomic instruction.
Proper thread synchronization using locks or atomic variables can also prevent race conditions.
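A small sketch of the lock-based fix (the names are illustrative): two threads increment a shared counter, and the lock makes each read-modify-write atomic, so the final value is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Without the lock, 'counter += 1' is a read-modify-write that
        # two threads can interleave, losing updates (a race condition).
        with lock:
            counter += 1

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=increment, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000
```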
A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data.
So the critical section problem means designing a protocol that cooperating processes can use to
access shared resources without creating data inconsistencies.
In the entry section, a process requests permission to enter the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in its critical section.
• Progress: If no process is executing in its critical section and some processes wish to enter, then
only those processes that are not executing in their remainder section can participate in deciding
which will enter the critical section next, and this selection cannot be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before
that request is granted.
Peterson’s Solution:
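Peterson's solution synchronizes two processes with a flag array (each process announces its intent to enter) and a turn variable (used to break ties). A runnable sketch is below; it relies on CPython's interpreter lock for sequentially consistent memory, whereas real C code would need memory barriers, and the iteration count is arbitrary.

```python
import threading
import time

# Peterson's algorithm state for two threads (ids 0 and 1).
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # index of the thread that must yield on a tie
counter = 0

def enter(i):
    global turn
    j = 1 - i
    flag[i] = True              # announce intent to enter
    turn = j                    # politely give priority to the other thread
    while flag[j] and turn == j:
        time.sleep(0)           # busy-wait; sleep(0) yields the GIL

def leave(i):
    flag[i] = False             # withdraw intent on exit

def worker(i):
    global counter
    for _ in range(5000):
        enter(i)                # entry section
        counter += 1            # critical section
        leave(i)                # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10000
```

The algorithm satisfies all three requirements above: flag plus turn give mutual exclusion, the tie-breaking turn variable guarantees progress, and a waiting thread is admitted after at most one entry by its peer (bounded waiting).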
Semaphores:
A semaphore is an integer variable accessed only through the atomic operations wait() and signal().
The bounded-buffer (producer-consumer) problem uses three semaphores: mutex (initialized to 1)
for mutual exclusion on the buffer, empty (initialized to the number of buffer slots) counting empty
slots, and full (initialized to 0) counting filled slots.
Solution for Producer –
do{
wait(empty);
wait(mutex);
// add item to buffer
signal(mutex);
signal(full);
}while(true)
When the producer produces an item, the value of "empty" is reduced by 1 because one
slot will now be filled. The value of mutex is also reduced, to prevent the consumer from
accessing the buffer. Once the producer has placed the item, the value of "full" is increased
by 1, and the value of mutex is increased by 1 because the producer's task is complete and
the consumer can access the buffer.
Solution for Consumer –
do{
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
// consumes item
}while(true)
As the consumer removes an item from the buffer, the value of "full" is reduced by 1 and
the value of mutex is also reduced, so that the producer cannot access the buffer at this
moment. Once the consumer has removed the item, the value of "empty" is increased by 1,
and the value of mutex is increased so that the producer can access the buffer again.
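The pseudocode above maps directly onto Python's threading.Semaphore. A runnable bounded-buffer sketch (the buffer size and item count are arbitrary choices):

```python
import threading
from collections import deque

N = 4                                 # buffer capacity (arbitrary)
buffer = deque()
mutex = threading.Semaphore(1)        # mutual exclusion on the buffer
empty = threading.Semaphore(N)        # counts empty slots
full  = threading.Semaphore(0)        # counts filled slots
consumed = []

def producer():
    for item in range(10):
        empty.acquire()               # wait(empty)
        mutex.acquire()               # wait(mutex)
        buffer.append(item)           # add item to buffer
        mutex.release()               # signal(mutex)
        full.release()                # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                # wait(full)
        mutex.acquire()               # wait(mutex)
        consumed.append(buffer.popleft())  # remove item from buffer
        mutex.release()               # signal(mutex)
        empty.release()               # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Note the ordering: wait(empty) must come before wait(mutex) in the producer, otherwise the producer could hold mutex while sleeping on a full buffer and the consumer could never run.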
Dining Philosophers Problem:
process P[i]
while true do
{
  THINK;
  PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
  EAT;
  PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there
are two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used
such that no two philosophers may access the pickup or putdown at the same time. The
array is used to control the behavior of each philosopher. But, semaphores can result in
deadlock due to programming errors.
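A runnable sketch of the philosophers loop. To avoid the deadlock the text warns about (every philosopher holding one chopstick and waiting for the next), each philosopher here picks up the lower-numbered chopstick first, reusing the resource-ordering idea from deadlock prevention. The meal count is an arbitrary choice.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Acquire chopsticks in increasing index order so no cycle of
    # waiting philosophers can form (breaks circular wait).
    first, second = min(left, right), max(left, right)
    for _ in range(3):                # THINK, then EAT, three times
        with chopsticks[first]:       # PICKUP first chopstick
            with chopsticks[second]:  # PICKUP second chopstick
                meals[i] += 1         # EAT
        # PUTDOWN happens automatically when the 'with' blocks exit

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [3, 3, 3, 3, 3]
```

Only philosopher 4 behaves differently (chopstick 0 before chopstick 4), and that single asymmetry is enough to rule out the circular wait.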
Readers-Writers Problem:
Consider a file shared among a number of people:
• If one of the people tries editing the file, no other person should be reading or writing it at
the same time; otherwise the changes will not be visible to them.
• However, if some person is reading the file, then others may read it at the same time.
In OS terminology, this situation is called the readers-writers problem.
Problem parameters:
• One set of data is shared among a number of processes.
• Once a writer is ready, it performs its write; only one writer may write at a time.
• If a process is writing, no other process can read the data.
• If at least one reader is reading, no other process can write.
Writer process:
do {
// writer requests for critical section
wait(wrt);
// performs the write
// leaves the critical section
signal(wrt);
} while(true);
Reader process:
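The reader code is missing from these notes; the following is the standard first readers-writers solution, where a readcnt counter (protected by mutex) tracks active readers, the first reader locks wrt, and the last reader releases it:

do {
// reader requests entry
wait(mutex);
readcnt++;
if (readcnt == 1)
wait(wrt); // first reader locks out writers
signal(mutex);
// performs the read
wait(mutex);
readcnt--;
if (readcnt == 0)
signal(wrt); // last reader lets writers in
signal(mutex);
} while(true);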
Monitors:
A monitor is a high-level synchronization construct. It has four main components:
1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue
Initialization: - Initialization comprises the code, and when the monitors are created, we use
this code exactly once.
Private Data: - Private data is another component of the monitor. It comprises all the private
data, and the private data contains private procedures that can only be used within the
monitor. So, outside the monitor, private data is not visible.
Monitor Procedure: - Monitors Procedures are those procedures that can be called from
outside the monitor.
Monitor Entry Queue: - The monitor entry queue is another essential component of the monitor;
it contains all the threads that are waiting to call a monitor procedure.
Syntax of monitor
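The syntax itself is missing from these notes; the usual textbook form (pseudocode) is:

monitor MonitorName
{
  // shared (private) variable declarations
  condition x, y; // condition variable declarations

  procedure P1 (...) { ... } // monitor procedures, callable from outside
  procedure P2 (...) { ... }

  initialization code (...) { ... } // runs exactly once, at creation
}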
Condition Variables
There are two types of operations that we can perform on the condition variables of the
monitor:
1. Wait
2. Signal
Wait Operation
a.wait(): - A process that performs a wait operation on condition variable a is suspended and
placed in the blocked queue of that condition variable.
Signal Operation
a.signal(): - When a process performs a signal operation on condition variable a, one of the
processes blocked on that variable is given a chance to resume; if no process is blocked, the
signal has no effect.
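The wait/signal pattern corresponds to Python's threading.Condition, a monitor-like construct. A small sketch (the names are illustrative): a consumer waits on the condition until a producer signals that an item is ready.

```python
import threading

cond = threading.Condition()   # lock plus condition variable, monitor-style
items = []
got = []

def consumer():
    with cond:                 # enter the "monitor"
        while not items:       # re-check the predicate after every wake-up
            cond.wait()        # a.wait(): suspend on the condition variable
        got.append(items.pop())

def producer():
    with cond:
        items.append(42)
        cond.notify()          # a.signal(): wake one blocked process

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()
print(got)  # [42]
```

The while loop around wait() matters: signalled threads must re-check the condition, since another thread may run between the signal and the wake-up.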
Advantages of Monitor
Monitors make parallel programming easier: programs that use monitors are less error-prone
than programs that use semaphores.
Difference between Monitors and Semaphore
Monitors Semaphore
In monitors, wait always block the In semaphore, wait does not always block
caller. the caller.
Condition variables are present in Condition variables are not present in the
the monitor. semaphore.
• Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores.
• Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time. This is useful
for synchronization and also prevents race conditions.
Pipes
• A pipe is a half-duplex method (one-way communication) used for IPC between two related
processes.
• It is like filling a bucket with water from a tap: the writing process fills the pipe and the
reading process retrieves data from it.
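A minimal sketch with an anonymous pipe via os.pipe(); in real use the read and write ends would typically be split between a parent and a child process.

```python
import os

# os.pipe() returns a pair of file descriptors: (read_end, write_end).
r, w = os.pipe()

os.write(w, b"hello")       # the writing process fills the pipe
os.close(w)                 # closing the write end signals end-of-data

data = os.read(r, 1024)     # the reading process drains the pipe
os.close(r)
print(data)  # b'hello'
```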
Shared Memory
Multiple processes can access a common region of shared memory: one process makes a
change and the others can then see it. The kernel is involved only in setting up the shared
region; once established, data exchange through shared memory does not require kernel
intervention.
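Python 3.8+ exposes OS shared memory through multiprocessing.shared_memory. A sketch in a single process, where two handles to the same named segment stand in for two processes:

```python
from multiprocessing import shared_memory

# One "process" creates the segment and writes into it...
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# ...another "process" attaches to the same segment by name and reads
# the same bytes (both handles live in one process here, for brevity).
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])
print(data)  # b'hello'

other.close()
shm.close()
shm.unlink()   # free the OS segment once all users are done
```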
Message Passing
Message Queues
The OS kernel stores messages in a linked list; each message queue is identified by a
"message queue identifier".
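The idea maps onto Python's queue.Queue for threads (a true inter-process queue would use multiprocessing.Queue, which is backed by a pipe). A sketch with illustrative message strings:

```python
import threading
import queue

mq = queue.Queue()            # FIFO message queue

def sender():
    for msg in ["ping", "pong", "done"]:
        mq.put(msg)           # enqueue a message

received = []

def receiver():
    for _ in range(3):
        received.append(mq.get())   # blocks until a message arrives

s = threading.Thread(target=sender)
r = threading.Thread(target=receiver)
s.start(); r.start()
s.join(); r.join()
print(received)  # ['ping', 'pong', 'done']
```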