
Operating System Notes

UNIT – 3 Deadlocks

S. No   Title
1       Introduction of Deadlock in Operating System
2       Necessary conditions for Deadlocks
3       Resource Allocation Graph (RAG)
4       Strategies / Methods for handling Deadlock
5       Deadlock Prevention
6       Deadlock Avoidance
7       Deadlock Detection
8       Deadlock Recovery

Introduction of Deadlock in Operating System
Every process needs some resources to complete its execution. However, resources are granted in a sequential order:

1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation in which each process waits for a resource that is assigned to some other process. None of the processes can proceed, because the resource each one needs is held by another process that is itself waiting for some other resource to be released.

Consider an example: two trains are coming toward each other on the same track, and there is only one track, so neither train can move once they are in front of each other. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s). For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1.
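The same situation can be reproduced in code. Below is a minimal, hedged C/pthreads sketch (all names are illustrative, not from these notes): two threads each lock one mutex and then try to lock the other, so each ends up holding one "resource" while waiting for the one held by the other.

/* Illustrative sketch: two threads take two locks in opposite order,
 * so each can end up holding one lock and waiting forever for the
 * other -- the situation described above. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *process1(void *arg) {
    pthread_mutex_lock(&resource1);   /* holds Resource 1                 */
    sleep(1);                         /* widen the window for the bad
                                         interleaving                     */
    pthread_mutex_lock(&resource2);   /* waits for Resource 2             */
    puts("process1 got both");
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

void *process2(void *arg) {
    pthread_mutex_lock(&resource2);   /* holds Resource 2                 */
    sleep(1);
    pthread_mutex_lock(&resource1);   /* waits for Resource 1 -> circular */
    puts("process2 got both");
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);           /* with the sleeps, this usually
                                         never returns                    */
    pthread_join(t2, NULL);
    return 0;
}

Compiled with gcc -pthread, this program usually hangs forever, which is exactly the deadlock described above.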

Necessary conditions for Deadlocks


Deadlock can arise if the following four conditions hold simultaneously:
1. Mutual Exclusion
A resource can be used only in a mutually exclusive manner; that is, two processes cannot use the same resource at the same time.
2. Hold and Wait
A process holds at least one resource while waiting for additional resources at the same time.

3. No Preemption
A resource cannot be forcibly taken away from the process holding it; it is released only voluntarily by that process once it has finished using it.
4. Circular Wait
The processes are waiting for resources in a cyclic manner, so that each process waits for a resource held by the next one and the last process waits for a resource held by the first.

Resource Allocation Graph (RAG)


The resource allocation graph is a pictorial representation of the state of a system. As its name suggests, the resource allocation graph gives complete information about all the processes that are holding some resources or waiting for some resources.

It also contains information about all the instances of all the resources, whether they are available or being used by processes.

So, the resource allocation graph shows the state of the system in terms of processes and resources: how many resources are available, how many are allocated, and what each process is requesting. Everything can be represented in a diagram. One advantage of the diagram is that a deadlock can sometimes be spotted directly from the RAG, whereas it might not be obvious from a table. Tables are better when the system contains many processes and resources, and the graph is better when the system contains few.

We know that any graph contains vertices and edges, so a RAG also contains vertices and edges. In a RAG, vertices are of two types:

1. Process vertex – Every process is represented as a process vertex, generally drawn as a circle.
2. Resource vertex – Every resource is represented as a resource vertex. It is of two types:
 Single-instance resource – represented as a box containing one dot; the number of dots indicates how many instances of that resource type are present.
 Multi-instance resource – also represented as a box, containing several dots.

Now coming to the edges of the RAG. There are two types of edges in a RAG:
1. Assign edge – If a resource is already assigned to a process, the edge from that resource to the process is an assign edge.
2. Request edge – If a process is requesting a resource that it needs to complete its execution, the edge from the process to that resource is a request edge.

So, if a process is using a resource, an arrow is drawn from the resource node to the
process node. If a process is requesting a resource, an arrow is drawn from the process
node to the resource node.
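As a concrete illustration of these edge directions, here is a hedged C sketch (sizes and names are illustrative) of how the RAG from the introduction, where P1 holds R1 and requests R2 while P2 holds R2 and requests R1, could be stored as two small matrices:

/* Hedged sketch: one simple in-memory form of a RAG.
 * Edge directions follow the text: assign edges go resource -> process,
 * request edges go process -> resource. */
#define NPROC 3
#define NRES  2

/* assign[r][p] = 1 means an instance of resource r is allocated to process p */
int assign[NRES][NPROC]  = { {1, 0, 0},    /* R1 -> P1  (assign edge)  */
                             {0, 1, 0} };  /* R2 -> P2  (assign edge)  */

/* request[p][r] = 1 means process p is waiting for resource r */
int request[NPROC][NRES] = { {0, 1},       /* P1 -> R2  (request edge) */
                             {1, 0},       /* P2 -> R1  (request edge) */
                             {0, 0} };

Following the assign and request edges in these two tables gives the cycle P1 -> R2 -> P2 -> R1 -> P1, which is how a deadlock shows up in a single-instance RAG.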

Strategies for handling Deadlock

1. Deadlock Ignorance

Deadlock Ignorance is the most widely used approach among all the mechanisms. It is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock. This approach is best suited to a single end-user system where the user uses the machine only for browsing and other everyday tasks.

There is always a trade-off between correctness and performance. Operating systems like Windows and Linux mainly focus on performance. The performance of the system decreases if it runs a deadlock-handling mechanism all the time; if a deadlock happens once in a hundred runs, then running the deadlock-handling mechanism all the time is completely unnecessary.
In these types of systems, the user simply has to restart the computer in the case of a deadlock. Windows and Linux mainly use this approach.

2. Deadlock prevention

Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait hold simultaneously. If it is possible to violate one of the four conditions at all times, then deadlock can never occur in the system.

The idea behind the approach is very simple: we have to defeat one of the four conditions, but there can be a big argument about how to implement this in a real system.

3. Deadlock avoidance

In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. Processes continue as long as the system remains in a safe state. If an allocation would move the system into an unsafe state, the OS has to back up one step and refuse it.

In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.

4. Deadlock detection and recovery

This approach lets processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If it has, the OS applies one of the recovery methods to the system to get rid of the deadlock.

Deadlock Prevention
If we picture a deadlock as a table standing on its four legs, then the four legs correspond to the four conditions which, when they occur simultaneously, cause the deadlock.

However, if we break one of the legs, the table will certainly fall. The same happens with deadlock: if we can violate one of the four necessary conditions and keep them from occurring together, then we can prevent the deadlock.

Let's see how we can prevent each of the conditions.

1. Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For example, a printer
cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not
require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are
a good example of a sharable resource. If several processes attempt to open a read-only file at
the same time, they can be granted simultaneous access to the file. A process never needs to
wait for a sharable resource. In general, however, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically non-sharable.

Mutual exclusion, from the resource's point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, processes would never have to wait for it. However, if we can keep resources from behaving in a mutually exclusive manner, the deadlock can be prevented.

Example: Spooling. For a device like a printer, spooling can work. There is memory associated with the printer which stores jobs from each process. Later, the printer collects all the jobs and prints each one of them in FCFS order. By using this mechanism, a process doesn't have to wait for the printer; it can continue whatever it was doing and collect the output later, when it is produced.
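A hedged C sketch of the spooling idea follows (all names, sizes, and the fixed-size queue are illustrative): processes deposit jobs into a shared spool and return immediately, while a single printer thread drains the queue in FCFS order.

/* Hedged sketch of spooling: submitters never wait for the printer. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define SPOOL_SIZE 64

static char spool[SPOOL_SIZE][128];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called by a process: deposit the job and return immediately. */
void spool_submit(const char *job) {
    pthread_mutex_lock(&m);
    if (count < SPOOL_SIZE) {              /* a full spool is exactly the
                                              space problem noted below    */
        strncpy(spool[tail], job, 127);
        spool[tail][127] = '\0';
        tail = (tail + 1) % SPOOL_SIZE;
        count++;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&m);
}

/* Printer thread: prints the queued jobs one by one, FCFS. */
void *printer_daemon(void *arg) {
    for (;;) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&nonempty, &m);
        printf("printing: %s\n", spool[head]);
        head = (head + 1) % SPOOL_SIZE;
        count--;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}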

Although spooling can be an effective approach to get around mutual exclusion, it suffers from two kinds of problems:

1. It cannot be applied to every resource.
2. After some time, a race condition may arise between the processes competing for space in the spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be fair and some serious performance problems may arise. Therefore, in practice we cannot violate mutual exclusion for most resources.

2. Hold and Wait


To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that,
whenever a process requests a resource, it does not hold any other resources. One protocol that
can be used requires each process to request and be allocated all its resources before it begins

execution. We can implement this provision by requiring that system calls requesting resources
for a process precede all other system calls.

An alternative protocol allows a process to request resources only when it has none. A process
may request some resources and use them. Before it can request any additional resources,
however, it must release all the resources that it is currently allocated.

To illustrate the difference between these two protocols, we consider a process that copies data
from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer. If all
resources must be requested at the beginning of the process, then the process must initially
request the DVD drive, disk file, and printer. It will hold the printer for its entire execution, even
though it needs the printer only at the end.

The second method allows the process to request initially only the DVD drive and disk file. It
copies from the DVD drive to the disk and then releases both the DVD drive and the disk file. The
process must then again request the disk file and the printer. After copying the disk file to the
printer, it releases these two resources and terminates. Both these protocols have two main
disadvantages. First, resource utilization may be low, since resources may be allocated but
unused for a long period.

In the example given, for instance, we can release the DVD drive and disk file, and then again
request the disk file and printer, only if we can be sure that our data will remain on the disk file.
Otherwise, we must request all resources at the beginning for both protocols.
Second, starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some other
process.
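To make the two protocols concrete, here is a hedged C sketch using mutexes as stand-ins for the DVD drive, disk file, and printer (function and resource names are illustrative; with real devices the "request" would go through the OS rather than a mutex):

/* Hedged sketch of the two hold-and-wait protocols described above. */
#include <pthread.h>

pthread_mutex_t dvd     = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t disk    = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;

/* Protocol 1: request everything before execution begins.
 * Simple, but the printer is held long before it is needed. */
void copy_sort_print_v1(void) {
    pthread_mutex_lock(&dvd);
    pthread_mutex_lock(&disk);
    pthread_mutex_lock(&printer);
    /* ... copy DVD -> disk, sort, print ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&disk);
    pthread_mutex_unlock(&dvd);
}

/* Protocol 2: request resources only while holding none; release
 * everything between phases. (Each pair of locks is treated here as a
 * single "request" purely for illustration.) */
void copy_sort_print_v2(void) {
    pthread_mutex_lock(&dvd);
    pthread_mutex_lock(&disk);
    /* ... copy DVD -> disk, sort ... */
    pthread_mutex_unlock(&disk);
    pthread_mutex_unlock(&dvd);      /* hold nothing before the next request */

    pthread_mutex_lock(&disk);
    pthread_mutex_lock(&printer);
    /* ... print the sorted file ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&disk);
}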

The problems with the approach are:

1. It is practically not possible.
2. The possibility of starvation increases, due to the fact that some process may hold a resource for a very long time.

3. No Preemption

The third necessary condition for deadlocks is that there be no preemption of resources that have
already been allocated. To ensure that this condition does not hold, we can use the following
protocol. If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources the process is
currently holding are preempted. In other words, these resources are implicitly released. The
preempted resources are added to the list of resources for which the process is waiting. The
process will be restarted only when it can regain its old resources, as well as the new ones that it
is requesting.

10
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not, we check whether they are allocated to some other
process that is waiting for additional resources. If so, we preempt the desired resources from the
waiting process and allocate them to the requesting process. If the resources are neither
available nor held by a waiting process, the requesting process must wait. While it is waiting,
some of its resources may be preempted, but only if another process requests them. A process
can be restarted only when it is allocated the new resources it is requesting and recovers any
resources that were preempted while it was waiting.

This protocol is often applied to resources whose state can be easily saved and restored later,
such as CPU registers and memory space. It cannot generally be applied to such resources as
printers and tape drives.
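A common way to approximate this protocol for lock-like resources is a try-and-back-off scheme: if the second resource cannot be acquired immediately, release everything and retry. Below is a hedged C/pthreads sketch (the function name is illustrative):

/* Hedged sketch of "release what you hold if you would have to wait".
 * If the second lock is busy, the caller gives up the first one
 * (implicitly releasing its resources) and tries again later. */
#include <pthread.h>
#include <sched.h>

void acquire_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    for (;;) {
        pthread_mutex_lock(a);                 /* hold the first resource   */
        if (pthread_mutex_trylock(b) == 0)     /* second one available?     */
            return;                            /* got both without waiting  */
        pthread_mutex_unlock(a);               /* would have to wait:       */
                                               /* release everything        */
        sched_yield();                         /* back off, then retry      */
    }
}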

4. Circular Wait
The fourth and final condition for deadlocks is the circular-wait condition. One way to ensure that
this condition never holds is to impose a total ordering of all resource types and to require that
each process requests resources in an increasing order of enumeration.

To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and to determine whether one precedes another in our ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers. For example, if the set of resource types R includes tape drives, disk drives, and printers, then the function F might be defined as follows:
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
We can now consider the following protocol to prevent deadlocks: each process can request resources only in an increasing order of enumeration. That is, a process can initially request any number of instances of a resource type, say Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri). For example, using the function defined previously, a process that wants to use the tape drive and printer at the same time must first request the tape drive and then request the printer. Alternatively, we can require that a process requesting an instance of resource type Rj must have released any resources Ri such that F(Ri) ≥ F(Rj). It must also be noted that if several instances of the same resource type are needed, a single request for all of them must be issued.

If these two protocols are used, then the circular-wait condition cannot hold. We can demonstrate this fact by assuming that a circular wait exists (proof by contradiction). Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where Pi is waiting for a resource Ri, which is held by process Pi+1. (Modulo arithmetic is used on the indexes, so that Pn is waiting for a resource Rn held by P0.) Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i. But this condition means that F(R0) < F(R1) < ... < F(Rn) < F(R0). By transitivity, F(R0) < F(R0), which is impossible. Therefore, there can be no circular wait.

We can accomplish this scheme in an application program by developing an ordering among all
synchronization objects in the system. All requests for synchronization objects must be made in
increasing order. For example, if the lock ordering in the Pthread program shown in Figure 7.1
was
F (first_mutex) = 1
F (second_mutex) = 5

then thread_two could not request the locks out of order. Keep in mind that developing an
ordering, or hierarchy, does not in itself prevent deadlock. It is up to application developers to
write programs that follow the ordering. Also note that the function F should be defined
according to the normal order of usage of the resources in a system. For example, because the
tape drive is usually needed before the printer, it would be reasonable to define F(tape drive) <
F(printer).
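A hedged sketch of what this ordering looks like in Pthreads code is shown below (the thread bodies are illustrative; only the lock order matters):

/* Hedged sketch of resource ordering with the two locks named above.
 * Both threads acquire the lower-numbered lock first
 * (F(first_mutex) = 1 < F(second_mutex) = 5), so no circular wait can form. */
#include <pthread.h>

pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;   /* F = 1 */
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;   /* F = 5 */

void *thread_one(void *arg) {
    pthread_mutex_lock(&first_mutex);    /* lower F first */
    pthread_mutex_lock(&second_mutex);
    /* ... critical section ... */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    return NULL;
}

void *thread_two(void *arg) {
    /* Must follow the same order; locking second_mutex first would
     * violate the ordering and reintroduce the risk of deadlock. */
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    /* ... critical section ... */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    return NULL;
}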

Deadlock Avoidance
The deadlock avoidance method is used by the operating system to check whether the system is in a safe state or in an unsafe state. In order to avoid deadlocks, each process must tell the operating system the maximum number of resources it may request in order to complete its execution.

How does Deadlock Avoidance work?

In this method, a request for any resource is granted only if the resulting state of the system doesn't cause any deadlock. The method checks every step performed by the operating system. Any process continues its execution as long as the system is in a safe state; once the system would enter an unsafe state, the operating system has to take a step back and refuse the allocation.

With the help of a deadlock-avoidance algorithm, you can dynamically assess the resource-allocation state so that there can never be a circular-wait situation.

In the simplest and most useful approach, each process declares the maximum number of resources of each type it will need. The deadlock-avoidance algorithms then examine the resource allocations so that a circular-wait condition can never occur.

Deadlock avoidance can mainly be done with the help of Banker's Algorithm.

Let us first understand the concept of Safe and Unsafe states

Safe State and Unsafe State

A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. So a safe state is not a deadlocked state, and conversely a deadlocked state is an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock.

(Figure: the safe, unsafe, and deadlocked state spaces; not reproduced here.)

Deadlock Avoidance Example

Let us consider a system having 12 magnetic tapes and three processes P1, P2, P3. Process P1 requires up to 10 magnetic tapes, process P2 may need as many as 4 tapes, and process P3 may need up to 9 tapes. Suppose that at time t0, process P1 is holding 5 tapes, process P2 is holding 2 tapes, and process P3 is holding 2 tapes. (There are 3 free magnetic tapes.)

Processes    Maximum Needs    Currently Allocated
P1           10               5
P2           4                2
P3           9                2

So at time t0, the system is in a safe state. The sequence <P2, P1, P3> satisfies the safety condition.

Process P2 can immediately be allocated all its tape drives and then return them (after the return, the system will have 5 available tapes); then process P1 can get all its tapes and return them (the system will then have 10 available tapes); finally, process P3 can get all its tapes and return them (the system will then have 12 available tapes).

A system can go from a safe state to an unsafe state. Suppose that at time t1, process P3 requests and is allocated one more tape. The system is no longer in a safe state. At this point, only process P2 can be allocated all its tapes. When it returns them, the system will have only 4 available tapes. Since P1 is allocated five tapes but has a maximum of ten, it may request 5 more tapes; if it does so, it will have to wait, because they are unavailable. Similarly, process P3 may request its additional 6 tapes and have to wait, which then results in a deadlock.

The mistake was granting the request from P3 for one more tape. If we had made P3 wait until either of the other processes had finished and released its resources, then we could have avoided the deadlock.

Note: if the system is unable to fulfill the requests of all processes in any order, the state of the system is called unsafe.

The key to the deadlock avoidance method is that whenever a request is made for resources, it must be approved only if the resulting state is a safe state.

Safety Algorithm

We can now present the algorithm for finding out whether or not a system is in a safe state.
This algorithm can be described as follows:
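The pseudocode figure from the original notes is not reproduced here; instead, the following is a hedged C sketch of the standard safety algorithm (array sizes N and M are illustrative, and Need[i][j] = Max[i][j] - Allocation[i][j]):

/* Hedged sketch of the standard safety check: n processes, m resource
 * types; returns true iff a safe sequence exists from this state. */
#include <stdbool.h>

#define N 5   /* number of processes, illustrative     */
#define M 3   /* number of resource types, illustrative */

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)                 /* 1. Work = Available      */
        work[j] = available[j];

    for (int done = 0; done < N; ) {            /* at most N passes         */
        bool progressed = false;
        for (int i = 0; i < N; i++) {           /* 2. find an unfinished i  */
            if (finish[i]) continue;            /*    with Need_i <= Work   */
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (!fits) continue;
            for (int j = 0; j < M; j++)         /* 3. i finishes, releases  */
                work[j] += allocation[i][j];    /*    Work += Allocation_i  */
            finish[i] = true;
            done++;
            progressed = true;
        }
        if (!progressed) break;                 /* no candidate left        */
    }
    for (int i = 0; i < N; i++)                 /* 4. safe iff all finished */
        if (!finish[i]) return false;
    return true;
}

Applied to the tape example above at time t0 (Available = 3; remaining needs 5, 2, 7 for P1, P2, P3), this scan finds the safe sequence <P2, P1, P3>.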

This algorithm may require an order of m × n² operations to determine whether a state is safe.

Resource-Request Algorithm

Next, we describe the algorithm for determining whether requests can be safely granted.

If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the old resource-allocation state is restored.
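Continuing the sketch above, a hedged version of the resource-request step might look as follows: provisionally grant the request, run the safety check, and roll the state back if the result is unsafe (the function name is illustrative):

/* Hedged sketch of the resource-request step for process i, reusing
 * is_safe() and the N/M sizes from the safety-algorithm sketch above. */
bool try_request(int i, int request[M],
                 int available[M], int allocation[N][M], int need[N][M]) {
    for (int j = 0; j < M; j++)
        if (request[j] > need[i][j] || request[j] > available[j])
            return false;                       /* error, or must wait       */

    for (int j = 0; j < M; j++) {               /* pretend to allocate       */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe(available, allocation, need))
        return true;                            /* grant the request         */

    for (int j = 0; j < M; j++) {               /* unsafe: restore old state  */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;                               /* Pi must wait               */
}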

Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then a deadlock situation may occur. In this environment, the system may provide:

 An algorithm that examines the state of the system to determine whether a deadlock has occurred.
 An algorithm to recover from the deadlock.

In the following discussion, we elaborate on these two requirements as they pertain to systems with only a single instance of each resource type, as well as to systems with several instances of each resource type. At this point, however, we note that a detection-and-recovery scheme requires overhead that includes not only the run-time costs of maintaining the necessary information and executing the detection algorithm but also the potential losses inherent in recovering from a deadlock.

Single Instance of Each Resource Type

If all resources have only a single instance, then we can define a deadlock detection algorithm
that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain this graph
from the resource-allocation graph by removing the resource nodes and collapsing the
appropriate edges.
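A deadlock exists in this scheme if and only if the wait-for graph contains a cycle. A hedged C sketch of such a cycle check, using a plain adjacency matrix and depth-first search (sizes and names are illustrative), is shown below:

/* Hedged sketch: waits_for[i][j] = 1 means process i waits for a
 * resource held by process j. A cycle in this graph means deadlock. */
#include <stdbool.h>

#define NP 4                       /* number of processes, illustrative */

static bool dfs(int u, int waits_for[NP][NP], int color[NP]) {
    color[u] = 1;                                 /* on the current path  */
    for (int v = 0; v < NP; v++) {
        if (!waits_for[u][v]) continue;
        if (color[v] == 1) return true;           /* back edge => cycle   */
        if (color[v] == 0 && dfs(v, waits_for, color)) return true;
    }
    color[u] = 2;                                 /* fully explored       */
    return false;
}

bool deadlock_detected(int waits_for[NP][NP]) {
    int color[NP] = { 0 };                        /* 0 = unvisited        */
    for (int u = 0; u < NP; u++)
        if (color[u] == 0 && dfs(u, waits_for, color))
            return true;
    return false;
}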

Several Instances of a Resource Type

The wait-for graph scheme is not applicable to a resource-allocation system with multiple
instances of each resource type. We turn now to a deadlock detection algorithm that is applicable
to such a system. The algorithm employs several time-varying data structures that are similar to
those used in the banker's algorithm:

Available. A vector of length m indicates the number of available resources of each type.
Allocation. An n × m matrix defines the number of resources of each type currently allocated to each process.
Request. An n × m matrix indicates the current request of each process.
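A hedged C sketch of this detection algorithm is given below. It mirrors the safety algorithm but compares each process's current Request against Work instead of Need; any process left unfinished at the end is deadlocked (sizes and names are illustrative):

/* Hedged sketch of multi-instance deadlock detection.
 * Sets deadlocked[i] = true for every process found to be deadlocked. */
#include <stdbool.h>

#define NP 5   /* processes       */
#define NR 3   /* resource types  */

void detect(int available[NR], int allocation[NP][NR],
            int request[NP][NR], bool deadlocked[NP]) {
    int work[NR];
    bool finish[NP];

    for (int j = 0; j < NR; j++) work[j] = available[j];

    for (int i = 0; i < NP; i++) {              /* a process holding nothing */
        bool holds_something = false;           /* cannot be part of a       */
        for (int j = 0; j < NR; j++)            /* deadlock, so mark it      */
            if (allocation[i][j] != 0) { holds_something = true; break; }
        finish[i] = !holds_something;           /* finished up front         */
    }

    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool can_run = true;                /* Request_i <= Work ?       */
            for (int j = 0; j < NR; j++)
                if (request[i][j] > work[j]) { can_run = false; break; }
            if (!can_run) continue;
            for (int j = 0; j < NR; j++)        /* optimistic assumption: i  */
                work[j] += allocation[i][j];    /* finishes and releases all */
            finish[i] = true;
            progressed = true;
        }
    }
    for (int i = 0; i < NP; i++)
        deadlocked[i] = !finish[i];
}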

Deadlock Recovery
When a detection algorithm determines that a deadlock exists, several alternatives are available.
One possibility is to inform the operator that a deadlock has occurred and to let the operator
deal with the deadlock manually. Another possibility is to let the system recover from the
deadlock automatically. There are two options for breaking a deadlock. One is simply to abort
one or more processes to break the circular wait. The other is to preempt some resources from
one or more of the deadlocked processes.

Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the
system reclaims all resources allocated to the terminated processes.

Abort all deadlocked processes. This method clearly will break the deadlock cycle, but at great
expense; the deadlocked processes may have computed for a long time, and the results of these
partial computations must be discarded and probably will have to be recomputed later.

Abort one process at a time until the deadlock cycle is eliminated. This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection algorithm must
be invoked to determine whether any processes are still deadlocked.

Aborting a process may not be easy. If the process was in the midst of updating a file, terminating
it will leave that file in an incorrect state. Similarly, if the process was in the midst of printing data
on a printer, the system must reset the printer to a correct state before printing the next job.

If the partial termination method is used, then we must determine which deadlocked process (or
processes) should be terminated. This determination is a policy decision, similar to CPU-
scheduling decisions. The question is basically an economic one; we should abort those processes
whose termination will incur the minimum cost. Unfortunately, the term minimum cost is not a
precise one.

Many factors may affect which process is chosen, including:


1. What the priority of the process is.
2. How long the process has computed and how much longer the process will compute before completing its designated task.
3. How many and what types of resources the process has used (for example, whether the resources are simple to preempt).
4. How many more resources the process needs in order to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.

Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give these resources to other processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:

Selecting a victim. Which resources and which processes are to be preempted? As in process
termination, we must determine the order of preemption to minimize cost. Cost factors may
include such parameters as the number of resources a deadlocked process is holding and the
amount of time the process has thus far consumed during its execution.

Rollback. If we preempt a resource from a process, what should be done with that process?
Clearly, it cannot continue with its normal execution; it is missing some needed resource. We
must roll back the process to some safe state and restart it from that state.
Since, in general, it is difficult to determine what a safe state is, the simplest solution is a total
rollback: abort the process and then restart it. Although it is more effective to roll back the
process only as far as necessary to break the deadlock, this method requires the system to keep
more information about the state of all running processes.

Starvation. How do we ensure that starvation will not occur? That is, how can we guarantee that
resources will not always be preempted from the same process?

In a system where victim selection is based primarily on cost factors, it may happen that the same
process is always picked as a victim. As a result, this process never completes its designated task,
a starvation situation that must be dealt with in any practical system. Clearly, we must ensure
that a process can be picked as a victim only a (small) finite number of times. The most common
solution is to include the number of rollbacks in the cost factor.

