Deadlock


Resource-Allocation Graph

Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.
Pictorially, we represent each process Pi as a circle and each resource type Rj as a
rectangle. Since resource type Rj may have more than one instance, we represent
each such instance as a dot within the rectangle. Note that a request edge points
to only the rectangle Rj , whereas an assignment edge must also designate one of
the dots in the rectangle.
When process Pi requests an instance of resource type Rj , a request edge is
inserted in the resource-allocation graph. When this request can be fulfilled, the
request edge is instantaneously transformed to an assignment edge. When the
process no longer needs access to the resource, it releases the resource. As a
result, the assignment edge is deleted.
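This bookkeeping can be sketched in code. The fragment below is a minimal illustration only, assuming a fixed number of processes and resource types and tracking just the presence of each edge; the names and array sizes are illustrative and not taken from the text.

#include <stdbool.h>

#define NPROC 3   /* processes P1..Pn (illustrative size) */
#define NRES  4   /* resource types R1..Rm (illustrative size) */

/* request[i][j]  == true means the request edge    Pi -> Rj is in the graph */
/* assigned[j][i] == true means the assignment edge Rj -> Pi is in the graph */
static bool request[NPROC][NRES];
static bool assigned[NRES][NPROC];

/* Pi requests an instance of Rj: a request edge is inserted. */
void request_resource(int pi, int rj) { request[pi][rj] = true; }

/* The request is fulfilled: the request edge becomes an assignment edge. */
void grant_resource(int pi, int rj) {
    request[pi][rj] = false;
    assigned[rj][pi] = true;
}

/* Pi releases Rj: the assignment edge is deleted. */
void release_resource(int pi, int rj) { assigned[rj][pi] = false; }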
The resource-allocation graph shown in Figure 7.1 depicts the following
situation.
• The sets P, R, and E:

P = {P1, P2, P3}

R = {R1, R2, R3, R4}

E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

• Resource instances:

◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
• Process states:

◦ Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
◦ Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
◦ Process P3 is holding an instance of R3.
Given the definition of a resource-allocation graph, it can be shown that, if the
graph contains no cycles, then no process in the system is deadlocked. If the
graph does contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred. If the cycle involves only a set of resource types, each of
which has only a single instance, then a deadlock has occurred. Each process
involved in the cycle is deadlocked. In this case, a cycle in the graph is both a
necessary and a sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a
necessary but not a sufficient condition for the existence of deadlock.
To illustrate this concept, we return to the resource-allocation graph depicted in Figure 7.1. Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently available, we add a request edge P3 → R2 to the graph (Figure 7.2). At this point, two minimal cycles exist in the system:

P1 → R1 → P2 → R3 → P3 → R2 → P1

P2 → R3 → P3 → R2 → P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Now consider the resource-allocation graph in Figure 7.3. In
this example, we also have a cycle:
P1 → R1 → P3 → R2 → P1
However, there is no deadlock. Observe that process P4 may release its instance
of resource type R2. That resource can then be allocated to P3, breaking the cycle.
In summary, if a resource-allocation graph does not have a cycle, then the
system is not in a deadlocked state. If there is a cycle, then the system may or
may not be in a deadlocked state. This observation is important when we deal
with the deadlock problem.
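To make the cycle test concrete, the following sketch runs a depth-first search over the edges of Figure 7.2 and reports whether a cycle exists. The node numbering (P1..P3 as 0..2, R1..R4 as 3..6) and the DFS colouring approach are choices made here for illustration; they are not part of the text.

#include <stdio.h>

#define N 7                 /* 3 processes + 4 resource types */
static int adj[N][N];       /* adj[u][v] = 1 if the graph has an edge u -> v */
static int state[N];        /* 0 = unvisited, 1 = on the DFS stack, 2 = finished */

static int has_cycle_from(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!adj[u][v]) continue;
        if (state[v] == 1) return 1;                 /* back edge: cycle found */
        if (state[v] == 0 && has_cycle_from(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    /* Edges of Figure 7.2 (P1=0, P2=1, P3=2, R1=3, R2=4, R3=5, R4=6):
       P1->R1, R1->P2, P2->R3, R3->P3, P3->R2, R2->P1, R2->P2. R4 is unused. */
    int edges[][2] = { {0,3},{3,1},{1,5},{5,2},{2,4},{4,0},{4,1} };
    for (unsigned i = 0; i < sizeof edges / sizeof edges[0]; i++)
        adj[edges[i][0]][edges[i][1]] = 1;

    int cycle = 0;
    for (int u = 0; u < N && !cycle; u++)
        if (state[u] == 0)
            cycle = has_cycle_from(u);

    /* A cycle tells us a deadlock may exist; whether it actually does depends
       on how many instances each resource type on the cycle has. */
    printf(cycle ? "cycle found\n" : "no cycle\n");
    return 0;
}

Note that this test answers only the cycle question; deciding whether the cycle really is a deadlock still requires looking at the number of instances of each resource type, as discussed above.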
7.1 Methods for Handling Deadlocks
Generally speaking, we can deal with the deadlock problem in
one of three ways:

• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state.
• We can allow the system to enter a deadlocked state, detect it, and recover.
• We can ignore the problem altogether and pretend that deadlocks
never occur in the system.
7.4 Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions
must hold. By ensuring that at least one of these conditions
cannot hold, we can prevent the occurrence of a deadlock. We
elaborate on this approach by examining each of the four
necessary conditions separately.

7.4.1 Mutual Exclusion


The mutual exclusion condition must hold. That is, at least one
resource must be nonsharable. Sharable resources, in contrast, do
not require mutually exclusive access and thus cannot be
involved in a deadlock. Read-only files are a good example of a
sharable resource. If several processes attempt to open a read-only
file at the same time, they can be granted simultaneous access
to the file. A process never needs to wait for a sharable
resource. In general, however, we cannot prevent deadlocks by
denying the mutual-exclusion condition, because some resources
are intrinsically nonsharable. For example, a mutex lock cannot be
simultaneously shared by several processes.
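As a small illustration of an intrinsically nonsharable resource, the sketch below uses a POSIX mutex: only one thread can hold it at a time, so a contending thread must wait at pthread_mutex_lock. This is an example constructed here, not code from the text.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);     /* a second thread blocks here until release */
    counter++;                     /* mutually exclusive access to the counter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* always 2 */
    return 0;
}

(Compile with -pthread.)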

7.4.2 Hold and Wait


To ensure that the hold-and-wait condition never occurs in the
system, we must guarantee that, whenever a process requests a
resource, it does not hold any other resources. One protocol that
we can use requires each process to request and be allocated all
its resources before it begins execution. We can implement this
provision by requiring that system calls requesting resources
for a process precede all other system calls.
An alternative protocol allows a process to request resources
only when it has none. A process may request some resources
and use them. Before it can request any additional resources, it
must release all the resources that it is currently allocated.
To illustrate the difference between these two protocols, we
consider a process that copies data from a DVD drive to a file on
disk, sorts the file, and then prints the results to a printer. If all
resources must be requested at the beginning of the process, then
the process must initially request the DVD drive, disk file, and
printer. It will hold the printer for its entire execution, even though
it needs the printer only at the end.
The second method allows the process to request initially
only the DVD drive and disk file. It copies from the DVD drive to
the disk and then releases both the DVD drive and the disk file.
The process must then request the disk file and the printer.
After copying the disk file to the printer, it releases these two
resources and terminates.
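The difference between the two protocols can be sketched in code. Assuming POSIX mutexes stand in for the DVD drive, the disk file, and the printer (the names are illustrative only), the two routines below follow the first and second protocol respectively. In a faithful implementation each group of resources would be requested as a single atomic request; taking the locks one after another here is a simplification.

#include <pthread.h>

static pthread_mutex_t dvd     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t file    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;

/* Protocol 1: request and hold every resource before any work begins, so the
 * process never holds one resource while waiting for another. */
void copy_sort_print_all_upfront(void) {
    pthread_mutex_lock(&dvd);
    pthread_mutex_lock(&file);
    pthread_mutex_lock(&printer);   /* held for the entire run, although it is
                                       needed only at the end */
    /* ... copy from the DVD to the disk file, sort, print ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&file);
    pthread_mutex_unlock(&dvd);
}

/* Protocol 2: request resources only when holding none; release everything
 * before asking for the next set. */
void copy_sort_print_in_stages(void) {
    pthread_mutex_lock(&dvd);
    pthread_mutex_lock(&file);
    /* ... copy from the DVD to the disk file ... */
    pthread_mutex_unlock(&file);
    pthread_mutex_unlock(&dvd);

    pthread_mutex_lock(&file);      /* safe only if the data is still on disk */
    pthread_mutex_lock(&printer);
    /* ... sort the file and print the result ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&file);
}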
Both these protocols have two main disadvantages. First, resource utilization may be low, since resources may be
allocated but unused for a long period. In the example given, for
instance, we can release the DVD drive and disk file, and then
request the disk file and printer, only if we can be sure that our
data will remain on the disk file. Otherwise, we must request all
resources at the beginning for both protocols.
Second, starvation is possible. A process that needs several
popular resources may have to wait indefinitely, because at
least one of the resources that it needs is always allocated to
some other process.

7.4.3 No Preemption
The third necessary condition for deadlocks is that there be
no preemption of resources that have already been allocated.
To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and
requests another resource that cannot be immediately allocated
to it (that is, the process must wait), then all resources the
process is currently holding are preempted. In other words, these
resources are implicitly released. The preempted resources are
added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first
check whether they are available. If they are, we allocate them.
If they are not, we check whether they are allocated to some
other process that is waiting for additional resources. If so, we
preempt the desired resources from the waiting process and
allocate them to the requesting process. If the resources are
neither available nor held by a waiting process, the requesting
process must wait. While it is waiting, some of its resources
may be preempted, but only if another process requests them. A
process can be restarted only when it is allocated the new
resources it is requesting and recovers any resources that were
preempted while it was waiting.
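One common way to realize the first of these protocols with locks is to use a nonblocking acquire: if the second resource cannot be obtained immediately, release what is already held (in effect preempting our own resources) and start over. The sketch below, with illustrative lock names, assumes POSIX mutexes and pthread_mutex_trylock.

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both resources without ever holding one while blocked on the other. */
void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&res_a);
        if (pthread_mutex_trylock(&res_b) == 0)
            return;                     /* holding both: proceed with the work */
        pthread_mutex_unlock(&res_a);   /* res_b is busy: release res_a rather
                                           than holding it while waiting */
        sched_yield();                  /* give other threads a chance to run */
    }
}

void release_both(void) {
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
}

Note that this retry loop can itself suffer from starvation or livelock under heavy contention, echoing the starvation concern raised above.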
