
UNIT - III

DEADLOCKS

A deadlock happens in an operating system when two or more processes cannot complete their execution because each needs a resource that is held by another process.

In the above diagram, Process 1 holds Resource 1 and needs to acquire Resource 2. Similarly, Process 2 holds Resource 2 and needs to acquire Resource 1. Process 1 and Process 2 are deadlocked: each needs the other's resource to complete its execution, but neither is willing to relinquish its own.

Deadlock Characterization
A deadlock can occur only if the four Coffman conditions hold simultaneously. These conditions are not mutually exclusive.

• Mutual Exclusion
There must be a resource that can be held by only one process at a time. In the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.

• Hold and Wait
A process can hold multiple resources and still request more resources that are held by other processes. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.
• No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource voluntarily.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when
Process 1 relinquishes it voluntarily after its execution is complete.

• Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource held by
the third process and so on, till the last process is waiting for a resource held by the first process. This forms
a circular chain. For example: Process 1 is allocated Resource 2 and it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait loop.

Methods of handling deadlocks: There are three approaches to deal with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance
3. Deadlock detection
These are explained below.

Deadlock Prevention
If we simulate deadlock with a table which is standing on its four legs then we can also simulate four legs with the
four conditions which when occurs simultaneously, cause the deadlock.

However, if we break one of the legs of the table then the table will fall definitely. The same happens with deadlock,
if we can be able to violate one of the four necessary conditions and don't let them occur together then we can
prevent the deadlock.

1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is also the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it. So, if we can stop resources from behaving in a mutually exclusive manner, deadlock can be prevented.

2. Hold and Wait
The hold and wait condition arises when a process holds one resource while waiting for some other resource to complete its task. Deadlock can occur because more than one process may hold one resource while waiting for another in cyclic order.

However, we can devise a mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and must not wait for any resource once execution has begun.

!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't hold or you don't wait)

3. No Preemption
Deadlock persists because a resource cannot be taken away from a process once it has been allocated. However, if we take resources away from the processes involved in a deadlock, the deadlock can be broken.

This is not a good approach in general: if we take away a resource that a process is using, all the work it has done so far may become inconsistent.

4. Circular Wait
To violate circular wait, we can assign a number (priority) to each resource and require that a process request resources only in increasing order of that number. A process cannot request a resource with a lower number than one it already holds, so no cycle can be formed.
Among all the methods, violating circular wait is the only approach that can be implemented practically.
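The resource-ordering idea can be sketched in Python, with threads standing in for processes and locks for resources (the ranks and worker names are illustrative):

```python
import threading

# Hypothetical resources, each protected by a lock and given a fixed rank.
# Every process must acquire locks in increasing rank order, so no circular
# wait (and hence no deadlock) can ever form.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}
result = []

def acquire_in_order(*ranks):
    """Acquire the requested resource locks in ascending rank order."""
    ordered = sorted(ranks)
    for rank in ordered:
        locks[rank].acquire()
    return ordered

def release_all(ranks):
    for rank in reversed(ranks):
        locks[rank].release()

def worker(name, *ranks):
    # Even though P2 "wants" resource 2 before 1, the ordering rule
    # forces both workers to lock 1 then 2, preventing deadlock.
    held = acquire_in_order(*ranks)
    result.append(name)          # critical work with both resources
    release_all(held)

t1 = threading.Thread(target=worker, args=("P1", 1, 2))
t2 = threading.Thread(target=worker, args=("P2", 2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
```

Both workers finish regardless of scheduling, because the cyclic request pattern that caused the deadlock in the earlier diagram can no longer occur.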

Deadlock avoidance
Safe State and Unsafe State

A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. A safe state is not a deadlocked state, and conversely a deadlocked state is an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock.

The figure above shows the safe, unsafe, and deadlocked state spaces.
Resource Allocation Graph:
It is a directed graph G = (V, E) containing a set of vertices V and edges E. The vertices are divided into two types: a set of processes, P = {P1, P2, P3, ... Pn}, and a set of resources, R = {R1, R2, R3, ... Rm}. An edge from process Pi to resource Rj (Pi → Rj) indicates that Pi has requested resource Rj; it is called a 'request edge'.

An edge from a resource Rj to a process Pi (Rj → Pi) indicates that Rj is allocated to process Pi; it is called an 'assignment edge'. When a request is fulfilled, the request edge is transformed into an assignment edge. Processes are represented as circles and resources as rectangles. The figure below is an example of the graph, where process P1 holds R1 and requests R2.

To avoid deadlocks using the resource allocation graph, it is modified slightly by introducing a new kind of edge called a 'claim edge'. A claim edge from process Pi to resource Rj (Pi → Rj) indicates that Pi may request Rj in the future. Its direction is the same as a request edge's, but it is drawn with a dashed line, as shown below.

Example for Resource Allocation Graph:


To describe the use of this graph in deadlock avoidance, consider the example graph shown below, consisting of two processes (P1 and P2) and two resources (R1 and R2), such that P2 holds resource R2 and requests R1, while P1 holds resource R1 and may claim R2 in the future. Granting P2's request could create a cycle in the graph, which means deadlock is possible and the system would be in an unsafe state. Hence, the allocation should not be made.
Banker's Algorithm:
It is used to avoid deadlocks when there are multiple instances of each resource type, which is not possible using the resource allocation graph. It is similar to a banking system, where a bank never allocates cash in such a way that it can no longer satisfy the needs of all its customers, and it can never allocate more than what is available. Here, customers are analogous to processes, cash to resources, and the bank to the operating system.
A process must specify at the beginning the maximum number of instances of each resource type it may require; obviously, this number cannot exceed the total available. When a process requests resources, the system decides whether the allocation would leave the system in a safe state. If so, the resources are allocated; otherwise the process has to wait.

The following data structures are needed to implement the Banker's algorithm, where 'n' is the number of processes and 'm' is the number of resource types.

• Max: an n × m matrix indicating the maximum resources required by each process.
• Allocation: an n × m matrix indicating the number of resources currently allocated to each process.
• Need: an n × m matrix indicating the number of resources each process may still require (Need = Max − Allocation).
• Available: a vector of length m indicating the resources that are still available (not allocated to any process).
• Request: a vector of length m indicating the resources that process Pi has requested.

Each row of the Allocation and Need matrices can be treated as a vector: Allocation_i is the set of resources currently allocated to process Pi and Need_i is the set of resources Pi may still require. The following algorithm is used to determine whether a request can be safely granted.

• Step 1 - If Request_i ≤ Need_i, proceed to step 2; otherwise raise an error, since the process has exceeded its maximum claim.
• Step 2 - If Request_i ≤ Available, proceed to step 3; otherwise Pi must wait, because the resources are not available.
• Step 3 - Allocate the resources to Pi as follows:

    Available = Available - Request_i
    Allocation_i = Allocation_i + Request_i
    Need_i = Need_i - Request_i

Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource type A has 10 instances, B has 5 instances, and C has 7 instances. Suppose the following snapshot of the system is taken at time t0 (snapshot table not reproduced here):

Question 1. What will be the content of the Need matrix?

Need[i, j] = Max[i, j] - Allocation[i, j]
So the Need matrix is obtained by subtracting each Allocation entry from the corresponding Max entry.

Question 2. Is the system in a safe state? If yes, what is the safe sequence?
Apply the safety algorithm to the given system.

Question 3. What will happen if process P1 requests one additional instance of resource type A and two instances of resource type C?
We must determine whether this new system state is safe. To do so, we again run the safety algorithm on the updated data structures.
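The safety algorithm can be sketched in Python. Since the snapshot tables are not reproduced above, the matrices below use the classic textbook instance that matches the stated totals (A=10, B=5, C=7); treat the exact numbers as an illustrative assumption:

```python
# Classic snapshot consistent with totals A=10, B=5, C=7 (assumed values).
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]   # totals minus the sum of each Allocation column

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[max_need[i][j] - allocation[i][j] for j in range(3)]
        for i in range(5)]

def safe_sequence(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence, or None."""
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][j] <= work[j]
                                for j in range(len(work))):
                # Pi can run to completion and return its resources.
                work = [work[j] + allocation[i][j]
                        for j in range(len(work))]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

print(safe_sequence(available, allocation, need))   # [1, 3, 4, 0, 2]
```

For this snapshot the system is safe, with safe sequence &lt;P1, P3, P4, P0, P2&gt;; granting P1's request (1, 0, 2) and re-running the check also yields a safe state.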
Deadlock Detection and Recovery
Deadlock Detection:
1. If resources have a single instance -
In this case, we can run an algorithm to check for a cycle in the resource allocation graph. The presence of a cycle in the graph is a sufficient condition for deadlock. In the diagram above, Resource 1 and Resource 2 have single instances and there is a cycle R1 → P1 → R2 → P2 → R1, so deadlock is confirmed.

2. If there are multiple instances of resources -
Detecting a cycle is a necessary but not a sufficient condition for deadlock: depending on the situation, the system may or may not actually be in deadlock.
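For the single-instance case, cycle detection over the wait-for graph can be sketched as a depth-first search (the process names are illustrative):

```python
# Detect a cycle in a wait-for graph (single-instance resources).
# An edge "P1" -> "P2" means P1 is waiting for a resource P2 holds;
# a cycle in this graph confirms a deadlock.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                    # on the current DFS path
        for neighbour in graph[node]:
            if color[neighbour] == GRAY:      # back edge: cycle found
                return True
            if color[neighbour] == WHITE and visit(neighbour):
                return True
        color[node] = BLACK                   # fully explored, no cycle here
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

deadlocked = has_cycle({"P1": ["P2"], "P2": ["P1"]})  # the cycle above
healthy = has_cycle({"P1": ["P2"], "P2": []})
print(deadlocked, healthy)   # True False
```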

Deadlock Recovery:
Traditional operating systems such as Windows don't deal with deadlock recovery, as it is a time- and space-consuming process; real-time operating systems use deadlock recovery.
1. Killing the process -
Either kill all the processes involved in the deadlock, or kill them one at a time: after killing each process, check for deadlock again, and keep repeating until the system recovers. Killing processes one by one breaks the circular wait condition.
2. Resource Preemption -
Resources are preempted from the processes involved in the deadlock and allocated to other processes, so that there is a possibility of recovering the system from deadlock. In this case, the preempted processes may go into starvation.

Process Management and Synchronization


On the basis of synchronization, processes are categorized as one of the following two
types:
• Independent Process: The execution of one process does not affect the execution of
other processes.
• Cooperative Process: A process that can affect or be affected by other processes
executing in the system.

Race Condition:

When more than one process executes the same code or accesses the same memory or shared variable concurrently, there is a possibility that the output, or the value of the shared variable, is wrong; all the processes are racing, each claiming that its output is correct. This condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: it happens when the result of multiple threads executing in the critical section differs according to the order in which the threads run. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
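The lost-update pattern behind a race condition can be shown deterministically by writing out one bad interleaving of two hypothetical processes by hand:

```python
# A deterministic illustration of a lost update: two "processes" each read
# the shared counter, then both write back (read value + 1). One increment
# is lost, so the result depends on the interleaving -- a race condition.
counter = 0

# Step 1: both processes read the current value before either writes.
read_by_p1 = counter
read_by_p2 = counter

# Step 2: both write back their (now stale) read value plus one.
counter = read_by_p1 + 1
counter = read_by_p2 + 1

print(counter)   # 1, not the expected 2: P1's increment was lost
```

Making the read-modify-write sequence atomic (with a lock, or by treating it as a critical section) removes the bad interleaving.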

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a way for cooperative processes to
access shared resources without creating data inconsistencies.
In the entry section, the process requests entry to the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.
• Progress: If no process is executing in the critical section and other processes are waiting outside the
critical section, then only those processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next, and the selection can not be postponed
indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before
that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classic software-based solution to the critical section problem. In Peterson’s solution, we have two shared variables:
• boolean flag[i]: initialized to FALSE; initially no process is interested in entering the critical section.
• int turn: indicates whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions:
• Mutual Exclusion is assured as only one process can access the critical section at any time.
• Progress is also assured, as a process outside the critical section does not block other processes
from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s solution:
• It involves busy waiting.(In the Peterson’s solution, the code statement- “while(flag[j] && turn ==
j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be
used to perform other tasks.)
• It is limited to 2 processes.
• Peterson’s solution is not guaranteed to work on modern CPU architectures, because compilers and processors may reorder memory accesses.
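A sketch of Peterson's algorithm for two threads in Python. It relies on sequentially consistent memory, which CPython's interpreter happens to provide in practice, so this is an illustration rather than a portable implementation:

```python
import sys
import threading

# Shared state for Peterson's two-process algorithm.
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the critical section
N = 2000                # iterations per thread

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # declare interest
        turn = j                    # politely give way to the other thread
        while flag[j] and turn == j:
            pass                    # busy wait (the loop criticized above)
        counter += 1                # critical section
        flag[i] = False             # exit section

sys.setswitchinterval(1e-4)         # force frequent thread switches
threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # 2 * N: no increment is ever lost
```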

Semaphores:

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This differs from a mutex, which can be released only by the thread that acquired it.
A semaphore is an integer variable that can be accessed only through two atomic operations, wait() and signal(), used for process synchronization.
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes can share the same mutex semaphore, initialized to 1. A process performs wait(), which sets the semaphore to 0, and enters its critical section; when it completes the critical section, it performs signal(), resetting the value to 1 so that some other process can enter its critical section.
• Counting Semaphores: They can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that has a limit on the number of simultaneous accesses.
The semaphore can be initialized to the number of instances of the resource. Whenever a process wants
to use that resource, it checks if the number of remaining instances is more than zero, i.e., the process
has an instance available. Then, the process can enter its critical section thereby decreasing the value of
the counting semaphore by 1. After the process is over with the use of the instance of the resource, it can
leave the critical section thereby adding 1 to the number of available instances of the resource.
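A counting semaphore guarding a hypothetical pool of three resource instances can be sketched with Python's threading.Semaphore (the pool size and thread count are illustrative):

```python
import threading

# A counting semaphore initialized to the number of resource instances:
# at most 3 threads may hold an instance at once.
pool = threading.Semaphore(3)
guard = threading.Lock()    # protects the bookkeeping counters below
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    with pool:                          # wait(): take one instance
        with guard:
            in_use += 1
            peak = max(peak, in_use)    # record max concurrent holders
        # ... use the resource instance here ...
        with guard:
            in_use -= 1
    # leaving the with-block performs signal(): instance returned

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)   # never exceeds 3 concurrent holders
```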
Producer Consumer Problem using Semaphores
A semaphore S is an integer variable that can be accessed only through two standard
operations : wait() and signal().
The wait() operation reduces the value of semaphore by 1 and the signal() operation
increases its value by 1.
wait(S){
    while(S <= 0);   // busy waiting
    S--;
}
signal(S){
    S++;
}
To solve this problem, we need two counting semaphores, full and empty. "full" keeps track of the number of items in the buffer at any given time, and "empty" keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –
do{
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while(true);
When the producer produces an item, the value of "empty" is reduced by 1 because one slot will now be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of "full" is increased by 1, and mutex is increased by 1 because the producer's task is complete and the consumer can access the buffer.
Solution for Consumer –
do{
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while(true);

As the consumer removes an item from the buffer, the value of "full" is reduced by 1, and the value of mutex is also reduced so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, "empty" is increased by 1, and mutex is increased so that the producer can access the buffer again.
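The producer and consumer pseudocode above can be combined into a runnable Python sketch (the buffer size and item count are illustrative):

```python
import threading

n = 5                            # buffer capacity
buffer = []
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(n)   # number of unoccupied slots
full = threading.Semaphore(0)    # number of filled slots
consumed = []
ITEMS = 20

def producer():
    for item in range(ITEMS):    # produce an item
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # place the item in the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.pop(0))   # remove an item
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, ..., 19]
```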

Dining Philosopher Problem Using Semaphores


The Dining Philosopher Problem - K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two adjacent philosophers, but not by both.
Semaphore Solution to Dining Philosopher –
Each philosopher is represented by the following pseudo code:

process P[i]
    while true do
    {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    }
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there
are two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used
such that no two philosophers may access the pickup or putdown at the same time. The
array is used to control the behavior of each philosopher. But, semaphores can result in
deadlock due to programming errors.
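A runnable sketch of the problem in Python. Instead of the mutex-and-state-array solution described above, it avoids deadlock with the simpler resource-ordering fix (each philosopher picks up the lower-numbered chopstick first):

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    # Break circular wait: always pick the lower-numbered chopstick first.
    first, second = min(left, right), max(left, right)
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1    # EAT
        # THINK

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # every philosopher eats 10 times, with no deadlock
```

If every philosopher instead picked up the left chopstick first, all five could grab one chopstick simultaneously and deadlock; the ordering rule makes that cycle impossible.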

Sleeping Barber problem in Process Synchronization


The analogy is based upon a hypothetical barber shop with one barber, one barber chair, and n chairs for waiting customers.
• If there is no customer, the barber sleeps in his own chair.
• When a customer arrives, he has to wake up the barber.
• If there are many customers and the barber is cutting a customer's hair, then the remaining customers either wait, if there are empty chairs in the waiting room, or leave if no chairs are empty.
Solution: The solution to this problem uses three semaphores. The first, customers, counts the number of customers in the waiting room (the customer in the barber chair is not included, because he is not waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The third, mutex, provides the mutual exclusion required by the processes. The solution also keeps a count of the customers waiting in the waiting room; if that number equals the number of chairs, an arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the procedure barber, causing him to
block on the semaphore customers because it is initially 0. Then the barber goes to sleep until
the first customer comes up.
When a customer arrives, he executes the customer procedure. The customer acquires the mutex to enter the critical region; if another customer enters soon after, the second one will not be able to do anything until the first one has released the mutex. The customer then checks the chairs in the waiting room: if the number of waiting customers is less than the number of chairs, he sits down; otherwise he leaves and releases the mutex.
If a chair is available, the customer sits in the waiting room, increments the waiting count, and signals the customers semaphore, which wakes up the barber if he is sleeping.
At this point, customer and barber are both awake and the barber is ready to give that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting room the barber goes back to sleep.
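The barber and customer procedures described above can be sketched in Python (the numbers of chairs and customers are illustrative):

```python
import threading
import time

CHAIRS = 4
TOTAL = 6
customers = threading.Semaphore(0)   # number of waiting customers
barber = threading.Semaphore(0)      # barber ready to cut hair
mutex = threading.Lock()             # protects the shared counters
waiting = 0
haircuts = 0
turned_away = 0

def barber_proc():
    global waiting, haircuts
    while True:
        customers.acquire()          # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber.release()             # call the next customer to the chair
        with mutex:
            haircuts += 1            # give the haircut

def customer_proc():
    global waiting, turned_away
    with mutex:
        if waiting < CHAIRS:
            waiting += 1
            customers.release()      # wake the barber if he is asleep
            seated = True
        else:
            turned_away += 1         # no free chair: leave the shop
            seated = False
    if seated:
        barber.acquire()             # wait until the barber is ready

threading.Thread(target=barber_proc, daemon=True).start()
threads = [threading.Thread(target=customer_proc) for _ in range(TOTAL)]
for t in threads:
    t.start()
for t in threads:
    t.join()
while haircuts < TOTAL - turned_away:
    time.sleep(0.01)                 # let the barber finish serving
print(haircuts, turned_away)
```

Every customer is either given a haircut or turned away when all chairs are occupied; how many are turned away depends on the scheduling.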
Readers-Writers Problem
Consider a situation where we have a file shared between many people.

• If one of the people tries editing the file, no other person should be reading or writing at
the same time, otherwise changes will not be visible to him/her.
• However if some person is reading the file, then others may read it at the same time.
In OS terms, this situation is called the readers-writers problem.
Problem parameters:

• One set of data is shared among a number of processes


• Once a writer is ready, it performs its write. Only one writer may write at a time
• If a process is writing, no other process can read the data
• If at least one reader is reading, no other process can write
• Readers only read; they may not write

Solution when Reader has the Priority over Writer


Here priority means that no reader should wait if the shared data is currently open for reading.
Three variables are used to implement the solution: mutex, wrt, and readcnt.

1. semaphore mutex, wrt; // mutex ensures mutual exclusion when readcnt is updated, i.e. when any reader enters or exits the critical section; wrt is used by both readers and writers
2. int readcnt; // the number of processes currently reading in the critical section, initially 0

Functions for semaphores:
- wait(): decrements the semaphore value.
- signal(): increments the semaphore value.
Writer process:

1. The writer requests entry to the critical section.
2. If allowed, i.e. wait() succeeds, it enters and performs the write; if not, it keeps waiting.
3. It exits the critical section.

do {
    // writer requests entry to critical section
    wait(wrt);
    // performs the write
    // leaves the critical section
    signal(wrt);
} while(true);
Reader process:

1. The reader requests entry to the critical section.
2. If allowed:
• It increments the count of readers inside the critical section. If this reader is the first to enter, it locks the wrt semaphore, restricting the entry of writers while any reader is inside.
• It then signals mutex, since other readers are allowed to enter while reading is in progress.
• After reading, it exits the critical section. On exit, if no reader remains inside, it signals the semaphore wrt so that a writer can now enter the critical section.
3. If not allowed, it keeps waiting.
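Putting the writer and reader procedures together gives the following Python sketch of the reader-priority solution (the shared value and thread counts are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcnt
wrt = threading.Semaphore(1)     # held by a writer or the reader group
readcnt = 0
shared = 0
log = []

def writer(value):
    global shared
    wrt.acquire()          # request the critical section
    shared = value         # perform the write
    wrt.release()          # leave the critical section

def reader(i):
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:       # first reader locks out writers
        wrt.acquire()
    mutex.release()        # other readers may enter concurrently
    log.append(shared)     # perform the read
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:       # last reader lets writers back in
        wrt.release()
    mutex.release()

writer(42)
threads = [threading.Thread(target=reader, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)   # every reader saw the written value: [42, 42, 42, 42]
```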

Monitors in Operating System


Monitors are used for process synchronization. With programming-language support, a monitor achieves mutual exclusion among processes; for example, Java's synchronized methods together with its wait() and notify() constructs.
In other words, a monitor is a programming-language construct that helps control access to shared data.
A monitor is a module or package which encapsulates a shared data structure, the procedures that operate on it, and the synchronization between concurrent invocations of those procedures.
Characteristics of Monitors.

1. Inside a monitor, only one process can be active (executing) at a time.
2. Monitors are groups of procedures and condition variables that are merged together in a special type of module.
3. A process running outside the monitor cannot access the monitor's internal variables, but it can call the monitor's procedures.
4. Monitors offer a high level of synchronization.
5. Monitors were devised to simplify the complexity of synchronization problems.
Components of Monitor
There are four main components of the monitor:

1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue

Initialization: comprises the code that is run exactly once, when the monitor is created.
Private Data: comprises all the private data, including private procedures, that can only be used within the monitor; private data is not visible outside the monitor.
Monitor Procedure: the procedures that can be called from outside the monitor.
Monitor Entry Queue: includes all the threads that are waiting to call a monitor procedure.
Syntax of monitor

Condition Variables
There are two types of operations that we can perform on the condition variables of the
monitor:

1. Wait
2. Signal

Suppose there are two condition variables

condition a, b // Declaring variable

Wait Operation
a.wait(): the process that performs a wait operation on a condition variable is suspended and placed in the blocked queue of that condition variable.
Signal Operation
a.signal(): if a signal operation is performed on the condition variable, one of the blocked processes is given a chance to run.
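Python's threading.Condition bundles a monitor-style lock with condition-variable wait/notify operations; the bounded counter below is an illustrative monitor-style sketch (the class name and limit are hypothetical):

```python
import threading

class CounterMonitor:
    """A monitor-like bounded counter: one thread active inside at a time."""

    def __init__(self, limit):
        self._cond = threading.Condition()   # monitor lock + condition var
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._cond:                     # enter the monitor
            while self._value >= self._limit:
                self._cond.wait()            # a.wait(): block, release lock
            self._value += 1

    def decrement(self):
        with self._cond:
            self._value -= 1
            self._cond.notify()              # a.signal(): wake one waiter

    def value(self):
        with self._cond:
            return self._value

m = CounterMonitor(limit=2)
m.increment()
m.increment()
t = threading.Thread(target=m.increment)     # blocks: counter at limit
t.start()
m.decrement()                                # signals the waiting thread
t.join()
print(m.value())   # 2
```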
Advantages of Monitors
Monitors make parallel programming easier and, when used, are less error-prone than semaphores.
Difference between Monitors and Semaphores
• Condition variables can be used only inside monitors; a semaphore has no condition variables.
• In a monitor, wait always blocks the caller; in a semaphore, wait does not always block the caller (it blocks only when the value is not positive).
• A monitor comprises the shared variables and the procedures that operate on them; a semaphore's value S means the number of shared resources present in the system.
• Condition variables are present in the monitor but not in the semaphore.

Interprocess Communication Mechanisms


Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know that
some event has occurred or the transferring of data from one process to another.
Synchronization in Interprocess Communication
Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods that provide synchronization are as follows:

• Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores.
• Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time. This is useful
for synchronization and also prevents race conditions.

Pipes

• A pipe is a half-duplex (one-way) communication method used for IPC between two related processes.
• It is like filling a bucket from a tap: the writing process fills the pipe, and the reading process retrieves data from it.
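A minimal single-process sketch of a pipe using Python's os.pipe, writing at one end and reading at the other:

```python
import os

# os.pipe() returns a (read end, write end) pair of file descriptors.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")   # writer fills the pipe
os.close(write_fd)                              # signal end of data

data = os.read(read_fd, 1024)                   # reader drains the pipe
os.close(read_fd)
print(data.decode())
```

In practice the two ends are typically used by related processes (e.g. parent and child after a fork), with each side closing the end it does not use.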

Shared Memory
Multiple processes can access a common region of shared memory: one process makes changes and the others then see them. Shared memory does not require kernel involvement for each exchange of data.

Message Passing

• In IPC, this is used by a process for communication and synchronization.


• Processes can communicate without any shared variables, therefore it can be used in a
distributed environment on a network.
• It is slower than the shared memory technique.
• It has two operations: sending and receiving fixed-size messages.

Message Queues

The operating system kernel stores messages in a linked list, and each message queue is identified by a "message queue identifier".
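As an illustrative analogue, a thread-safe queue can stand in for a kernel message queue (the sender name and message contents are made up):

```python
import queue
import threading

# queue.Queue stands in here for a kernel message queue: the sender posts
# messages and the receiver blocks until one is available.
mq = queue.Queue()

def sender():
    mq.put(("P1", "hello"))
    mq.put(("P1", "world"))

threading.Thread(target=sender).start()
first = mq.get()     # blocks until a message arrives
second = mq.get()
print(first, second)
```

Messages are delivered in FIFO order, matching the linked-list behavior described above.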
