Module 3
CHAPTER 7 DEADLOCKS
✓ When a process requests resources that are not available (because they are held by other
waiting processes), the process enters a waiting state from which it may never escape. This
situation is called a deadlock.
➢ System Model
➢ Deadlock Characterization
• Necessary Conditions
o Mutual Exclusion: At least one resource must be held in a non-sharable mode; only
one process at a time can use the resource.
o Hold and Wait: A process must be holding at least one resource and waiting to
acquire additional resources that are held by other processes.
o No Preemption: Resources cannot be preempted; a resource can be released only
voluntarily by the process holding it.
o Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
✓ If the resource-allocation graph contains no cycle, then no process in the system is
deadlocked. If the graph contains a cycle, a deadlock may exist.
✓ If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred.
✓ If each resource type has several instances, then a cycle does not necessarily imply that
a deadlock has occurred.
✓ Consider the resource-allocation graph shown in the figure above.
✓ Suppose process P3 requests an instance of resource type R2. Since no instance is
currently available, a request edge P3 → R2 is added to the graph, resulting in the
figure below.
✓ However, P4 may release its instance of resource type R2. That resource could then be
allocated to P3, breaking the cycle.
✓ If a resource-allocation graph does not have a cycle, then the system is not in a
deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked
state.
➢ Deadlock Prevention
✓ For a deadlock to occur, each of the four necessary conditions must hold. If we ensure
that at least one of these conditions cannot hold, we can prevent the occurrence of
deadlock.
o Mutual Exclusion: This condition holds for non-sharable resources. For example, a
printer can be used by only one process at a time. Sharable resources, in contrast, do
not require mutually exclusive access and thus cannot be involved in a deadlock.
Read-only files are a good example of a sharable resource.
o Hold and Wait: This condition can be eliminated by guaranteeing that whenever a
process requests a resource, it does not hold any other resources. Two possible
protocols achieve this:
▪ One protocol requires each process to request and be allocated all of its resources
before it begins execution.
▪ An alternative protocol allows a process to request resources only when it holds
none.
▪ To illustrate the difference between these two protocols, we consider a process
that copies data from a DVD drive to a file on disk, sorts the file, and then prints
the results to a printer. If all resources must be requested at the beginning of the
process, then the process must initially request the DVD drive, disk file, and
printer. It will hold the printer for its entire execution, even though it needs the
printer only at the end.
▪ The second method allows the process to request initially only the DVD drive
and disk file. It copies from the DVD drive to the disk and then releases both the
DVD drive and the disk file. The process must then again request the disk file
and the printer. After copying the disk file to the printer, it releases these two
resources and terminates.
▪ Both protocols have two main disadvantages. First, resource utilization is low,
since resources may be allocated but unused for a long period.
▪ Second, starvation is possible. A process that needs several popular resources
may have to wait indefinitely, because at least one of the resources that it needs
is always allocated to some other process.
o No Preemption: To ensure that this condition never holds, resources must be
preemptible. The following protocols can be used:
▪ If a process is holding some resources and requests another resource that cannot
be immediately allocated to it, then all resources currently held by the requesting
process are preempted and added to the list of resources for which other
processes may be waiting. The process is restarted only when it can regain its
old resources, as well as the new ones that it is requesting.
▪ Alternatively, when a process requests resources, we first check whether they are
available. If they are, we allocate them. If not, we check whether they are
allocated to some other process that is itself waiting for additional resources. If
so, we preempt the desired resources from the waiting process and allocate them
to the requesting process. Otherwise, the requesting process must wait.
o Circular Wait: One way to ensure that this condition never holds is to impose a total
ordering on all resource types and require that each process requests resources in
increasing order of enumeration. For example, let R = {R1, R2, ..., Rm} be the set of
resource types. We assign each resource type a unique integer, which allows us to
compare two resources and determine whether one precedes the other in the
ordering. Formally, we define a one-to-one function F: R → N, for instance:
F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12
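✓ The total-ordering rule can be illustrated with locks. The sketch below (hypothetical helper names; the F values are taken from the example above) rejects any request sequence that is not in increasing F order, so a circular wait can never form.

```python
import threading

# Sketch of circular-wait prevention: every process must acquire
# resources strictly in increasing order of F(resource).
F = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Acquire the named resources; reject any out-of-order request."""
    ordered = sorted(names, key=lambda n: F[n])
    if list(names) != ordered:
        raise ValueError("resources must be requested in increasing F order")
    for name in ordered:
        locks[name].acquire()
    return ordered

# Legal: tape drive (F=1) before printer (F=12). A request in the other
# order would be rejected, so no cycle of waiting processes can form.
print(acquire_in_order("tape drive", "printer"))   # ['tape drive', 'printer']
```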
➢ Deadlock Avoidance
✓ Deadlock-prevention algorithms may lead to low device utilization and reduced
system throughput.
✓ Avoiding deadlocks instead requires additional information about how resources are to
be requested. With knowledge of the complete sequence of requests and releases, we
can decide for each request whether or not the process should wait.
✓ For each request, the system considers the resources currently available, the resources
currently allocated to each process, and the future requests and releases of each
process, to decide whether the current request can be satisfied or must wait to avoid a
possible future deadlock.
✓ A deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is
defined by the number of available and allocated resources and the maximum demands
of the processes.
• Safe State
✓ A state is safe if there exists at least one order in which all processes can run to
completion without resulting in a deadlock.
✓ A system is in a safe state only if there exists a safe sequence.
✓ A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation
state if, for each Pi, the resource requests that Pi can still make can be satisfied by the
currently available resources plus the resources held by all Pj, with j < i.
✓ If the resources that Pi needs are not immediately available, then Pi can wait until all Pj
(j < i) have finished. When they have finished, Pi can obtain all of its needed resources
and complete its designated task.
✓ A safe state is not a deadlocked state; conversely, a deadlocked state is an unsafe state.
✓ Not all unsafe states are deadlocked, however; an unsafe state may lead to a deadlock,
as shown in the figure below.
✓ For example, consider a system with 12 magnetic tape drives and three processes P0,
P1, and P2. Process P0 requires ten tape drives, process P1 may need as many as four
tape drives, and process P2 may need up to nine tape drives. Suppose that, at time t0,
process P0 is holding five tape drives, process P1 is holding two tape drives, and
process P2 is holding two tape drives (leaving three free tape drives).
✓ At time t0, the system is in a safe state: the sequence <P1, P0, P2> satisfies the safety
condition.
✓ A system can go from a safe state to an unsafe state. Suppose that, at time t1, process
P2 requests and is allocated one more tape drive. The system is no longer in a safe
state: only process P1 can now be allocated all its tape drives.
✓ Whenever a process requests a resource that is currently available, the system must
decide whether the resource can be allocated immediately or whether the process must
wait. The request is granted only if the allocation leaves the system in a safe state.
• Resource-Allocation-Graph Algorithm
✓ This algorithm can be used only if every resource type has exactly one instance. In
addition to the request edge and the assignment edge, a new edge called a claim edge
is introduced. A claim edge Pi → Rj indicates that process Pi may request Rj in the
future. A claim edge is represented by a dashed line.
✓ When a process Pi requests the resource Rj, the claim edge is converted to a request
edge. When resource Rj is released by process Pi, the assignment edge Rj → Pi is
replaced by the claim edge Pi → Rj.
✓ When a process Pi requests resource Rj, the request is granted only if converting the
request edge Pi → Rj to an assignment edge Rj → Pi does not result in a cycle.
✓ A cycle-detection algorithm is used to check for a cycle. If no cycle exists, the
allocation of the resource leaves the system in a safe state.
✓ If a cycle is found, the conversion would put the system in an unsafe state. For
example, if P1 requests R2 and P2 then requests R1, a deadlock will occur.
• Banker’s Algorithm
✓ This algorithm is applicable to systems with multiple instances of each resource type,
but it is less efficient than the resource-allocation-graph algorithm.
✓ When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed the
total number of resources in the system. When a process requests resources, the
system must determine whether the allocation will leave the system in a safe state. If
it will, the resources are allocated; otherwise, the process must wait until some other
process releases enough resources.
✓ Several data structures are used to implement the banker's algorithm. Let n be the
number of processes in the system and m the number of resource types. The following
data structures are needed:
o Available: A vector of length m indicating the number of available resources of
each type. If Available[j] = k, then k instances of resource type Rj are available.
o Max: An n × m matrix defining the maximum demand of each process. If
Max[i][j] = k, then process Pi may request at most k instances of Rj.
o Allocation: An n × m matrix defining the number of resources of each type
currently allocated to each process. If Allocation[i][j] = k, then Pi is currently
allocated k instances of Rj.
o Need: An n × m matrix indicating the remaining resource need of each process,
where Need[i][j] = Max[i][j] - Allocation[i][j].
▪ Safety Algorithm
✓ This algorithm is used to find out whether a system is in safe state or not. The
algorithm can be described as follows,
Step 1. Let Work and Finish be vectors of length m and n,
respectively. Initialize Work = Available and Finish[i] = false for
i = 1, 2, ..., n.
Step 2. Find an index i such that both
Finish[i] == false
Needi ≤ Work
If no such i exists, go to step 4.
Step 3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
Step 4. If Finish[i] == true for all i, then the system is in a safe state.
✓ This algorithm may require an order of m × n^2 operations to decide whether a
state is safe.
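✓ The safety algorithm can be sketched in Python as follows (a minimal illustration with assumed helper names; vectors are plain lists, and need[i] = Maxi - Allocationi). It is applied here to the snapshot used in the illustrative example that follows.

```python
# Sketch of the safety algorithm above.

def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)            # Step 1: Work = Available
    finish = [False] * n              #         Finish[i] = false
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):            # Step 2: Finish[i] false, Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: P_i can run to completion and release its resources.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progress = True
    # Step 4: the state is safe iff Finish[i] == true for all i.
    return all(finish), sequence

# Snapshot from the illustrative example below (Available = 3 3 2).
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3,3,2], allocation, need))   # (True, [1, 3, 4, 0, 2])
```

A safe sequence is not unique: this scan order yields <P1, P3, P4, P0, P2>, while the hand trace in the example below finds <P1, P3, P4, P2, P0>; both satisfy the safety condition.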
▪ An Illustrative Example
✓ To illustrate the use of the banker's algorithm, consider a system with five
Processes Po through P4 and three resource types A, B, and C. Resource type
A has ten instances, resource type B has five instances, and resource type C
has seven instances. Suppose that, at time T0, the following snapshot of the
system has been taken.
a)
     Allocation   Max     Need    Available
     A B C        A B C   A B C   A B C
P0   0 1 0        7 5 3   7 4 3   3 3 2
P1   2 0 0        3 2 2   1 2 2
P2   3 0 2        9 0 2   6 0 0
P3   2 1 1        2 2 2   0 1 1
P4   0 0 2        4 3 3   4 3 1
Applying the safety algorithm (is Needi ≤ Work?):
P0 → 7 4 3 ≤ 3 3 2 is false
P1 → 1 2 2 ≤ 3 3 2 is true, so Work = 3 3 2 + 2 0 0 = 5 3 2
P2 → 6 0 0 ≤ 5 3 2 is false
P3 → 0 1 1 ≤ 5 3 2 is true, so Work = 5 3 2 + 2 1 1 = 7 4 3
P4 → 4 3 1 ≤ 7 4 3 is true, so Work = 7 4 3 + 0 0 2 = 7 4 5
P2 → 6 0 0 ≤ 7 4 5 is true, so Work = 7 4 5 + 3 0 2 = 10 4 7
P0 → 7 4 3 ≤ 10 4 7 is true, so Work = 10 4 7 + 0 1 0 = 10 5 7
The system is therefore in a safe state, with safe sequence <P1, P3, P4, P2, P0>.
b) Suppose now the process P1 requests one additional instance of resource type A
and two instances of resource type C, so Request1 = (1,0,2).
✓ To decide whether this request can be granted immediately, the resource-request
algorithm first checks that Request1 ≤ Need1 (that is, (1,0,2) ≤ (1,2,2), which is true)
and that Request1 ≤ Available (that is, (1,0,2) ≤ (3,3,2), which is true). We then
pretend that the request has been fulfilled, arriving at the new state Available = (2,3,0),
Allocation1 = (3,0,2), and Need1 = (0,2,0).
✓ Now we must determine whether this new system state is safe. We execute the safety
algorithm as shown below and find that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement. Hence, the request can be granted immediately.
P0 → 7 4 3 ≤ 2 3 0 is false,
P1 →0 2 0 ≤ 2 3 0 is true, so work=work + allocation
work=2 3 0+ 3 0 2=5 3 2
P2 →6 0 0 ≤ 5 3 2 is false,
P3 → 0 1 1 ≤ 5 3 2 is true, so work=5 3 2 + 2 1 1=7 4 3
P4→ 4 3 1 ≤ 7 4 3 is true, so work=7 4 3 + 0 0 2=7 4 5
P0 →7 4 3 ≤ 7 4 5 is true, so work=7 4 5 + 0 1 0=7 5 5
P2 → 6 0 0 ≤ 7 5 5 is true, so work=7 5 5 + 3 0 2=10 5 7
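✓ The resource-request check used in part b) can be sketched as follows (assumed helper names, not from the text): verify Requesti ≤ Needi and Requesti ≤ Available, pretend to allocate, and keep the allocation only if the resulting state passes a safety test.

```python
# Sketch of the resource-request algorithm on top of a safety check.

def safe_state(available, allocation, need):
    """Return True if some order lets every process finish."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(need):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

def grant(i, request, available, allocation, need):
    """Pretend to grant Request_i; return True only if the state stays safe."""
    assert all(r <= nd for r, nd in zip(request, need[i])), "exceeds max claim"
    if any(r > a for r, a in zip(request, available)):
        return False                          # must wait: not available
    avail = [a - r for a, r in zip(available, request)]
    alloc = [row[:] for row in allocation]    # copy, so this stays a trial
    nd = [row[:] for row in need]
    alloc[i] = [a + r for a, r in zip(alloc[i], request)]
    nd[i] = [x - r for x, r in zip(nd[i], request)]
    return safe_state(avail, alloc, nd)

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
need       = [[7,4,3],[1,2,2],[6,0,0],[0,1,1],[4,3,1]]
print(grant(1, [1,0,2], [3,3,2], allocation, need))   # True: grant it
```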
➢ Deadlock Detection
✓ If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph (figure (a) below), called
a wait-for graph (figure (b) below). This graph is obtained from the resource-allocation
graph by removing the resource nodes and collapsing the appropriate edges.
✓ An edge from Pi to Pj in wait for graph implies that Pi is waiting for Pj to release a
resource that Pi needs.
✓ An edge from Pi to Pj exists in wait for graph if and only if the corresponding
resource allocation graph contains the edges Pi →Rq and Rq→ Pj.
✓ Deadlock exists within the system if and only if the wait-for graph contains a cycle. To
detect deadlock, the system needs an algorithm that periodically searches for a cycle in
the graph.
✓ The wait-for graph scheme is not applicable when there are multiple instances of each
resource type. In that case, the following detection algorithm is used:
Step 1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available. For i = 0, 1, ..., n-1, if Allocationi ≠ 0 then Finish[i] = false;
otherwise Finish[i] = true.
Step 2. Find an index i such that both
Finish[i]= false
Requesti≤Work
If no such i exists, go to step 4.
Step 3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
Step 4. If Finish[i] == false for some i, 0 ≤ i < n, then the system is in a deadlocked
state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
✓ To illustrate this algorithm, consider a system with five processes P0 through P4
and three resource types A, B, and C. Resource type A has seven instances, resource
type B has two instances, and resource type C has six instances. Suppose that, at
time T0, we have the following resource-allocation state:
     Allocation   Request   Available
     A B C        A B C     A B C
P0   0 1 0        0 0 0     0 0 0
P1   2 0 0        2 0 2
P2   3 0 3        0 0 0
P3   2 1 1        1 0 0
P4   0 0 2        0 0 2
✓ Applying the deadlock-detection algorithm, the sequence <P0, P2, P3, P1, P4> (or
<P0, P2, P3, P4, P1>) results in Finish[i] == true for all i. The steps are:
P0 → 0 0 0 ≤ 0 0 0 is true, so Work = 0 0 0 + 0 1 0 = 0 1 0
P1 → 2 0 2 ≤ 0 1 0 is false
P2 → 0 0 0 ≤ 0 1 0 is true, so Work = 0 1 0 + 3 0 3 = 3 1 3
P3 → 1 0 0 ≤ 3 1 3 is true, so Work = 3 1 3 + 2 1 1 = 5 2 4
P4 → 0 0 2 ≤ 5 2 4 is true, so Work = 5 2 4 + 0 0 2 = 5 2 6
P1 → 2 0 2 ≤ 5 2 6 is true, so Work = 5 2 6 + 2 0 0 = 7 2 6
Thus, the system is not deadlocked at time T0.
If P2 requests an additional instance of type C ie., (0, 0, 1), the Request matrix is
modified as follows,
Request
ABC
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
P0 → 0 0 0 ≤ 0 0 0 is true, so Work = 0 0 0 + 0 1 0 = 0 1 0
P1 → 2 0 2 ≤ 0 1 0 is false
P2 → 0 0 1 ≤ 0 1 0 is false
P3 → 1 0 0 ≤ 0 1 0 is false
P4 → 0 0 2 ≤ 0 1 0 is false
✓ The system is now deadlocked. Although we can reclaim the resources held by
process P0, the number of available resources is not sufficient to fulfill the requests
of the other processes. Thus, a deadlock exists, consisting of processes P1, P2, P3,
and P4.
• Detection-Algorithm Usage
➢ Recovery from Deadlock
• Process Termination
• Resource Preemption
CHAPTER 8 MAIN MEMORY
➢ Background
• Basic Hardware
✓ Main memory and the registers built into the processor are the only storage that the
CPU can access directly. Hence, any program and data to be processed must be
brought from disk into main memory for the CPU to access them.
✓ Registers can be accessed in one CPU clock cycle. But main memory access can
take many CPU clock cycles.
✓ A fast memory called cache is placed between main memory and CPU registers.
✓ We must ensure correct operation to protect the operating system from access by
user processes and also to protect user processes from one another. This protection
must be provided by the hardware. It can be implemented in several ways and one
such possible implementation is,
o We first need to make sure that each process has a separate memory space.
o To do this, we need the ability to determine the range of legal addresses that the
process may access and to ensure that the process can access only these legal
addresses.
o We can provide this protection by using two registers, a base and a limit, as
illustrated in below figure.
o The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range. For example, if the base register holds
300040 and the limit register is 120900, then the program can legally access all
addresses from 300040 through 420939 (inclusive).
✓ Protection of memory space is accomplished by having the CPU hardware
compare every address generated in user mode with the registers.
✓ Any attempt by a program executing in user mode to access operating-system
memory or other users' memory results in a trap to the operating system, which
treats the attempt as a fatal error as shown in below figure.
✓ This prevents a user program from (accidentally or deliberately) modifying the code
or data structures of either the operating system or other users.
✓ The base and limit registers can be loaded only by using a special privileged
instruction, which can be executed only in kernel mode. Since only the operating
system executes in kernel mode, only the operating system can load the base and
limit registers.
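✓ The base/limit check can be sketched as follows (hypothetical helper; register values from the example above). In hardware this comparison happens on every user-mode memory reference; here it is modeled as a function that either passes the address through or raises a trap.

```python
# Sketch: every user-mode address must satisfy base <= addr < base + limit,
# otherwise the hardware traps to the operating system.

BASE, LIMIT = 300040, 120900   # values from the example above

def check_address(addr):
    if BASE <= addr < BASE + LIMIT:
        return addr            # legal: passed through to memory
    raise MemoryError(f"trap: addressing error at {addr}")

print(check_address(300040))   # lowest legal address
print(check_address(420939))   # highest legal address (BASE + LIMIT - 1)
```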
• Address Binding
✓ Programs are stored on the secondary storage disks as binary executable files.
✓ When the programs are to be executed they are brought in to the main memory and
placed within a process.
✓ The collection of processes on the disk waiting to enter the main memory forms the
input queue.
✓ One of the processes waiting to be executed is selected from the input queue and
loaded into main memory.
✓ During execution, the process fetches instructions and data from main memory.
When the process terminates, its memory space is reclaimed.
✓ During execution, the process goes through several steps, as shown in the figure
below, and in each step the address may be represented in different ways.
✓ In source program the address is symbolic. The compiler binds the symbolic address
to re-locatable address. The loader will in turn bind this re-locatable address to
absolute address.
✓ Binding of instructions and data to memory addresses can be done at any step along
the way:
o Compile time: If we know at compile time where the process resides in
memory, then absolute code can be generated. For example, if we know that a
user process will reside starting at location R, then the generated compiler code
will start at that location and extend up from there. If, at some later time, the
starting location changes, then it will be necessary to recompile this code.
o Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code. In this case, final
binding is delayed until load time.
o Execution time: If the process is moved during its execution from one memory
segment to another then the binding is delayed until run time. Special hardware
is used for this. Most of the general purpose operating system uses this method.
✓ The address generated by the CPU is called the logical address or virtual address.
✓ The address seen by the memory unit, that is, the address loaded into the memory-
address register, is called the physical address.
✓ The compile-time and load-time address-binding methods generate identical logical
and physical addresses. The execution-time address-binding scheme, however,
results in differing logical and physical addresses.
✓ The set of all logical addresses generated by a program is its logical address space;
the set of all physical addresses corresponding to these logical addresses is its
physical address space.
✓ The mapping of virtual address to physical address during run time is done by the
hardware device called Memory Management Unit (MMU).
✓ The base register is now called the relocation register.
✓ The value in the relocation register is added to every address generated by a user
process at the time the address is sent to memory, as shown in the figure below.
✓ For example, if the base is at 14000, then an attempt by the user to address location
0 is dynamically relocated to location 14000; an access to location 346 is mapped to
location 14346. The user program never sees the real physical addresses.
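✓ Dynamic relocation reduces to one addition per reference (values from the example above; the helper name is ours):

```python
# Sketch: the MMU adds the relocation-register value to every logical
# address generated by the process.

RELOCATION_REGISTER = 14000    # base value from the example above

def mmu(logical_address):
    """Map a logical address to a physical address."""
    return logical_address + RELOCATION_REGISTER

print(mmu(0), mmu(346))   # 14000 14346
```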
• Dynamic Loading
✓ For a process to be executed, it must be loaded into physical memory, so the size of
a process is limited by the size of physical memory. Dynamic loading is used to
obtain better memory-space utilization.
✓ With dynamic loading, a routine is not loaded until it is called.
✓ Whenever a routine is called, the calling routine first checks whether the called
routine has already been loaded. If not, the loader is called to load the desired
routine into memory and to update the program's address tables to reflect this
change; control is then passed to the newly loaded routine.
✓ The advantages are:
o Gives better memory utilization.
o Unused routine is never loaded.
➢ Swapping
✓ A process can be swapped temporarily out of memory to a backing store and then
brought back into memory for continued execution. This is called swapping. For
example, in a multiprogramming environment with round-robin CPU scheduling,
when a time quantum expires the process that has just finished its quantum is swapped
out, and another process is swapped into the freed memory for execution, as shown in
the figure below.
✓ A variant of this swapping policy is used for priority-based scheduling. If a higher-
priority process arrives while a lower-priority process is executing, the lower-priority
process is swapped out and the higher-priority process is allowed to execute. This is
also called roll out, roll in.
✓ Normally, a process that is swapped out will be swapped back into the same memory
space it occupied previously; whether this restriction holds depends on the address-
binding method.
✓ The system maintains a ready queue consisting of all the processes whose memory
images are on the backing store or in memory and are ready to run.
✓ The context-switch time in such a swapping system is fairly high. For example, assume
that the user process is 10 MB in size and the backing store is a standard hard disk with
a transfer rate of 40 MB per second. The actual transfer of the 10-MB process to or
from main memory takes
10 MB / (40 MB per second) = 1/4 second = 250 milliseconds
✓ Assuming an average latency of 8 milliseconds, the swap time is 258 milliseconds.
Since we must both swap out and swap in, the total swap time is about 516
milliseconds.
✓ Swapping is constrained by other factors,
o To swap a process, it must be completely idle.
o In particular, a process waiting for a pending I/O operation cannot be swapped.
➢ Contiguous Memory Allocation
✓ The main memory must accommodate both the operating system and the various user
processes. One common method of allocating main memory as efficiently as possible
is contiguous memory allocation.
✓ The memory is usually divided into two partitions: one for the resident operating
system and one for the user processes.
✓ Relocation and limit registers are used to protect user processes from each other and
to prevent them from changing operating-system code and data.
✓ The relocation register contains the value of the smallest physical address and the
limit register contains the range of logical addresses.
✓ With relocation and limit registers, each logical address must be less than the limit
register.
✓ The MMU maps the logical address dynamically by adding the value in the
relocation register. This mapped address is sent to main memory as shown in below
figure.
✓ The relocation-register scheme provides an effective way to allow the operating
system's size to change dynamically.
[Figure: Hardware support for relocation and limit registers. The CPU's logical address is
first compared with the limit register; if it is smaller, the value of the relocation register is
added to it and the resulting physical address is sent to memory. Otherwise, a trap
(addressing error) to the operating system occurs.]
• Memory Allocation
✓ One of the simplest methods for allocating memory is to divide memory into several
fixed-sized partitions. Each partition may contain exactly one process, so the degree
of multiprogramming is bound by the number of partitions.
✓ In multiple partition method, when a partition is free, process is selected from the
input queue and is loaded in to free partition of memory. When process terminates,
the memory partition becomes available for another process.
✓ The OS keeps a table indicating which parts of memory are available and which are
occupied.
✓ Initially, all memory is available for user processes and is considered one large block
of available memory called a hole.
✓ When a process requests memory, the OS searches the set of holes for one that is
large enough. If the hole is too large, it is split into two: one part is allocated to the
requesting process and the other is returned to the set of holes.
Dr. Hema N Dept of ISE, RNSIT Page 23
Dr. Bhagyashree A
Mrs. Mounika P
Operating Systems (21CS44) IV-ISE, MODULE III
✓ The set of holes is searched to determine which hole is best to allocate.
✓ The dynamic storage-allocation problem concerns how to satisfy a request of size n
from a list of free holes. There are three common strategies for selecting a free
hole:
o First fit: Allocates first hole that is big enough. This algorithm scans
memory from the beginning and selects the first available block that is large
enough to hold the process.
o Best fit: It chooses the hole i.e., closest in size to the request. It allocates the
smallest hole i.e., big enough to hold the process.
o Worst fit: It allocates the largest hole to the process request. It searches for
the largest hole in the entire list.
✓ First fit and best fit are the most popular algorithms for dynamic memory allocation.
All these algorithms suffer from fragmentation.
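✓ The three strategies can be sketched over a hypothetical free-hole list, where each hole is a (start, size) pair (helper names and hole values are ours, for illustration only):

```python
# Sketch of the three hole-selection strategies.

def first_fit(holes, n):
    """First hole that is big enough, scanning from the beginning."""
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    """Smallest hole that is big enough."""
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    """Largest hole in the entire list."""
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 300)]   # hypothetical free list
print(first_fit(holes, 250))   # (200, 500): first big-enough hole
print(best_fit(holes, 250))    # (800, 300): smallest big-enough hole
print(worst_fit(holes, 250))   # (200, 500): largest hole
```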
• Fragmentation
➢ Paging
✓ Paging is a memory management scheme that permits the physical address space of a
process to be non-contiguous. Support for paging is handled by hardware.
✓ Paging avoids the considerable problem of fitting the varying sized memory chunks on
to the backing store.
• Basic Method
✓ Physical memory is broken in to fixed sized blocks called frames (f) and Logical
memory is broken in to blocks of same size called pages (p).
✓ When a process is to be executed its pages are loaded in to available frames from the
backing store. The backing store is also divided in to fixed-sized blocks of same
size as memory frames.
✓ The figure below shows the paging hardware.
✓ Logical address generated by the CPU is divided in to two parts: page number (p)
and page offset (d).
✓ The page number (p) is used as an index into the page table. The page table contains
the base address of each page in physical memory. This base address is combined
with the page offset to define the physical memory address that is sent to the
memory unit. The paging model of memory is shown in the figure below.
✓ The page size is defined by the hardware. The size is a power of 2, typically varying
between 512 bytes and 16 MB per page.
✓ If the size of the logical address space is 2^m addressing units and the page size is
2^n units, then the high-order m-n bits of a logical address designate the page
number and the n low-order bits designate the page offset. Thus, a logical address
is as follows:
page number | page offset
p (m-n bits) | d (n bits)
✓ Ex: To show how to map logical memory into physical memory, consider a page
size of 4 bytes and a physical memory of 32 bytes (8 frames), as shown in the
figure below.
a. Logical address 0 is page 0 and offset 0 and Page 0 is in frame 5. The logical
address 0 maps to physical address [(5*4) + 0]=20.
b. Logical address 3 is page 0 and offset 3 and Page 0 is in frame 5. The logical
address 3 maps to physical address [(5*4) + 3]= 23.
c. Logical address 4 is page 1 and offset 0 and page 1 is mapped to frame 6. So
logical address 4 maps to physical address [(6*4) + 0]=24.
d. Logical address 13 is page 3 and offset 1 and page 3 is mapped to frame 2. So
logical address 13 maps to physical address [(2*4) + 1]=9.
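✓ The translation in this example can be sketched as follows (the page-table contents follow the figure; the frame for page 2 is an assumption, since the text only exercises pages 0, 1, and 3):

```python
# Sketch of the 4-byte-page example: physical = frame * page_size + offset.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame (frame for page 2 assumed)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(0), translate(3), translate(4), translate(13))   # 20 23 24 9
```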
• Hardware Support
✓ The hardware implementation of the page table can be done in several ways. The
simplest method is that the page table is implemented as a set of dedicated registers.
✓ The use of registers for the page table is satisfactory if the page table is reasonably
small (for example, 256 entries). Most contemporary computers, however, allow the
page table to be very large (for example, 1 million entries), and for these machines
the use of fast registers to implement the page table is not feasible.
✓ So the page table is kept in main memory, a page-table base register (PTBR) points
to the page table, and a page-table length register (PTLR) indicates its size. With this
scheme, two memory accesses are needed to access a byte (one for the page-table
entry, one for the byte), so memory access is slowed by a factor of 2.
✓ The standard solution is to use a special, small, fast-lookup hardware cache called a
translation look-aside buffer (TLB). The TLB is associative, high-speed memory.
Each entry in the TLB consists of two parts: a key and a value. When the associative
memory is presented with an item, the item is compared with all keys
simultaneously; if it is found, the corresponding value field is returned. The search is
fast, but the hardware is expensive.
o If the TLB is already full of entries, the OS must select one for replacement.
o Some TLBs allow certain entries to be wired down, meaning that they cannot be
removed from the TLB.
✓ Some TLBs store Address Space Identifiers (ASIDs) in each TLB entry. An ASID
uniquely identifies each process and is used to provide address-space protection for that
process.
✓ The percentage of time that a page number is found in the TLB is called hit ratio.
✓ For example, an 80-percent hit ratio means that we find the desired page number in
the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100
nanoseconds to access memory, then a mapped-memory access takes 120
nanoseconds when the page number is in the TLB. If we fail to find the page number
in the TLB (20 nanoseconds), then we must first access memory for the page table and
frame number (100 nanoseconds) and then access the desired byte in memory (100
nanoseconds), for a total of 220 nanoseconds. Thus the effective access time is,
Effective Access Time (EAT) = 0.80 x 120 + 0.20 x 220
= 140 nanoseconds.
In this example, we suffer a 40-percent slowdown in memory-access time (from 100
to 140 nanoseconds).
✓ For a 98-percent hit ratio we have
Effective Access Time (EAT) = 0.98 x 120 + 0.02 x 220
= 122 nanoseconds.
✓ This increased hit rate produces only a 22 percent slowdown in access time.
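✓ The effective-access-time formula can be sketched as a small function (assumed name; timing constants from the example above):

```python
# Effective access time (EAT) as a function of the TLB hit ratio, using
# the example's constants: 20 ns TLB search, 100 ns memory access.

def eat(hit_ratio, tlb_ns=20, mem_ns=100):
    hit_time = tlb_ns + mem_ns        # TLB hit: one memory access
    miss_time = tlb_ns + 2 * mem_ns   # TLB miss: page table + desired byte
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(round(eat(0.80)), round(eat(0.98)))   # 140 122
```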
• Protection
✓ Memory protection in a paged environment is accomplished by protection bits
associated with each frame. These bits are kept in the page table.
✓ One bit can define a page to be read-write or read-only.
✓ One more bit is attached to each entry in the page table, a valid-invalid bit.
✓ When the bit is set to valid, the associated page is in the process's logical address
space and is thus a legal (valid) page.
✓ When the bit is set to invalid, the page is not in the process's logical address space,
and the reference is illegal. Illegal addresses are trapped by using the valid-invalid
bit.
✓ The OS sets this bit for each page to allow or disallow accesses to that page.
✓ For example, in a system with a 14-bit address space (0 to 16383), suppose we have
a program that should use only addresses 0 to 10468. Given a page size of 2 KB, we
have the situation shown in the figure below. Addresses in pages 0, 1, 2, 3, 4, and 5
are mapped normally through the page table. Any attempt to generate an address in
pages 6 or 7, however, finds the valid-invalid bit set to invalid, and the computer
traps to the operating system (invalid page reference).
• Shared Pages
• Hierarchical paging
✓ Recent computer systems support a large logical address space, from 2^32 to
2^64, and the page table itself therefore becomes large. It is then very difficult to
allocate contiguous main memory for the page table. One simple solution to this
problem is to divide the page table into smaller pieces.
✓ One way is to use two-level paging algorithm in which the page table itself is
also paged as shown in below figure.
✓ Ex: On a 32-bit machine with a page size of 4 KB, a logical address is divided into
a 20-bit page number and a 12-bit page offset. Because the page table itself is
paged, the page number is further divided into a 10-bit outer page number and a
10-bit offset. So the logical address is:
page number | page offset
p1 (10 bits) | p2 (10 bits) | d (12 bits)
✓ P1 is an index into the outer page table and P2 is the displacement within the
page of the outer page table. The address-translation method for this
architecture is shown in below figure. Because address translation works from
the outer page table inward, this scheme is also known as a forward-mapped
page table.
✓ For a system with a 64-bit logical address space, a two-level paging scheme is no
longer appropriate. Suppose the page size in such a system is 4 KB; then the page
table consists of up to 2^52 entries. If we use a two-level paging scheme, the
inner page tables can each be one page long, containing 2^10 four-byte entries. The
addresses look like this,
✓ The outer page table then consists of 2^42 entries, or 2^44 bytes. One way to avoid
such a large table is to divide the outer page table into smaller pieces.
✓ We can avoid such a large table by using a three-level paging scheme.
✓ The outer page table is still 2^34 bytes in size. The next step would be a four-level
paging scheme.
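The sizes quoted in this discussion can be verified with a little arithmetic; this sketch (not from the text) recomputes each of them:

```python
PAGE = 2**12    # 4 KB page size
ENTRY = 4       # 4-byte page-table entries

total_entries = 2**64 // PAGE          # 2^52 entries cover a 64-bit space
per_page = PAGE // ENTRY               # 2^10 entries fit in one page
outer_entries = total_entries // per_page   # outer table: 2^42 entries
outer_bytes = outer_entries * ENTRY         # ... which is 2^44 bytes

# A third level pages the outer table too; its own outer table is:
second_outer_bytes = (outer_entries // per_page) * ENTRY   # 2^34 bytes
```

Each computed value matches the figure given in the text, confirming that even a three-level scheme leaves a 2^34-byte table at the top.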
• Hashed Page Tables
✓ A hashed page table handles address spaces larger than 32 bits. The virtual page
number is used as the hash value. Each location in the hash table holds a linked
list of elements that hash to that location.
✓ Each element in the hash table contains the following three fields,
o Virtual page number
o Mapped page frame value
o Pointer to the next element in the linked list
✓ The algorithm works as follows,
o The virtual page number is taken from the virtual address and hashed into the
hash table.
o The virtual page number is compared with field 1 of the first element in the
linked list.
o If there is a match, the corresponding page frame (field 2) is used to form the
desired physical address. If there is no match, subsequent entries in the linked
list are searched for a matching virtual page number. This scheme is shown
in below figure.
o Clustered page tables are similar to hashed page tables, except that each
entry in the hash table refers to several pages rather than a single page.
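The lookup steps above can be sketched in Python (an illustration, not from the text; the table size and the modulo hash function are assumptions):

```python
# Hashed page table with chaining. Each bucket is a list of
# (virtual_page_number, frame) pairs - fields 1 and 2; the list itself
# plays the role of field 3, the pointer to the next element.

TABLE_SIZE = 16

def hash_vpn(vpn):
    return vpn % TABLE_SIZE    # illustrative hash function

def insert(table, vpn, frame):
    table[hash_vpn(vpn)].append((vpn, frame))

def lookup(table, vpn):
    for entry_vpn, frame in table[hash_vpn(vpn)]:
        if entry_vpn == vpn:       # compare with field 1
            return frame           # field 2 forms the physical address
    raise KeyError(f"page {vpn} not mapped")

table = [[] for _ in range(TABLE_SIZE)]
insert(table, 5, 42)
insert(table, 21, 7)    # 21 % 16 == 5: collides with vpn 5, so it chains
```

Looking up page 21 hashes to the same bucket as page 5, so the chain is walked until the matching virtual page number is found.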
• Inverted Page Tables
✓ Page tables may consume a large amount of physical memory just to keep track of
how the other physical memory is being used.
✓ To solve this problem, we can use an inverted page table, which has one entry for
each real page (or frame) of memory. Each entry consists of the virtual address of
the page stored in that real memory location, with information about the process
that owns the page.
✓ Thus, only one page table is in the system, and it has only one entry for each
page of physical memory.
✓ The inverted page table entry is a pair <process-id, page number>, where the
process-id assumes the role of the address-space identifier. When a memory
reference is made, the part of the virtual address consisting of <process-id, page
number> is presented to the memory subsystem.
✓ The inverted page table is searched for a match. If a match is found at entry i,
then the physical address <i, offset> is generated. If no match is found then an
illegal address access has been attempted.
✓ This scheme decreases the amount of memory needed to store each page table,
but increases the amount of time needed to search the table when a page
reference occurs.
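The search described above can be sketched as follows (illustrative Python, not from the text; the process IDs and sample entries are invented for the example):

```python
# Inverted page table: one entry per physical frame, each a
# (process_id, virtual_page) pair; the index i of a match is the frame.

inverted = [
    ("P1", 0),   # frame 0 holds page 0 of process P1
    ("P2", 3),   # frame 1 holds page 3 of process P2
    ("P1", 7),   # frame 2 holds page 7 of process P1
]

PAGE_SIZE = 4096

def translate_inverted(pid, page, offset):
    for i, entry in enumerate(inverted):
        if entry == (pid, page):            # search the table for a match
            return i * PAGE_SIZE + offset   # physical address <i, offset>
    raise MemoryError("illegal address access")
```

The linear scan makes the space/time trade-off concrete: one small table for the whole system, but each reference may walk many entries (real systems pair this with hashing or a TLB).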
➢ Segmentation
• Basic method
• Hardware
✓ For example, consider the below figure. We have five segments numbered from 0
through 4. Segment 2 is 400 bytes long and begins at location 4300. Thus, a
reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A
reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 =
4052. A reference to byte 1222 of segment 0 would result in a trap to the operating
system, as this segment is only 1,000 bytes long.
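The hardware's base/limit check can be sketched in Python (not from the text; base and limit values not stated in the example, such as those for segments 1 and 4, are illustrative):

```python
# Segment table entries are (base, limit) pairs.
segment_table = [
    (1400, 1000),  # segment 0: 1,000 bytes long
    (6300,  400),  # segment 1 (illustrative values)
    (4300,  400),  # segment 2: 400 bytes beginning at 4300
    (3200, 1100),  # segment 3: base 3200 (limit illustrative)
    (4700, 1000),  # segment 4 (illustrative values)
]

def translate_segment(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset past the segment's end: trap to the operating system.
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset
```

With this table, byte 53 of segment 2 maps to 4353, byte 852 of segment 3 maps to 4052, and byte 1222 of segment 0 traps, matching the example.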
1. Consider the given chart, where the total resources of types A, B, C and D are 3, 14, 12 and
12 respectively, and answer: i) What is the content of the matrix Need? ii) Is the system safe? If
yes, give the safe sequence. iii) If a request arrives from P1 for (0, 4, 2, 0), can it be granted?
The content of the matrix Need is defined to be Max - Allocation and is as follows:
Need
ABCD
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
The system is in a safe state: executing the safety algorithm with Available = (1, 5, 2, 0)
yields the safe sequence <P0, P2, P3, P4, P1>.
a) Suppose now process P1 requests (0,4,2,0). To decide whether this request can be
immediately granted, we first check that Request1 ≤ Need1, i.e. (0,4,2,0) ≤ (0,7,5,0), and
that Request1 ≤ Available, i.e. (0,4,2,0) ≤ (1,5,2,0); both hold. We then pretend the
request has been granted, arriving at the following new state:
Allocation Available
ABCD ABCD
P0 0 0 1 2 1 1 0 0
P1 1 4 2 0
P2 1 3 5 4
P3 0 6 3 2
P4 0 0 1 4
Need
ABCD
P0 0 0 0 0
P1 0 3 3 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
✓ Now we must determine whether this new system state is safe. We execute the safety algorithm
and find that the sequence <P0, P2, P3, P4, P1> satisfies the safety requirement. Hence the
request can be granted immediately.
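The safety algorithm used here can be written out as a short Python sketch (an illustration, not part of the original solution); the state below is the one reached after provisionally granting P1's request:

```python
def safe_sequence(available, allocation, need):
    """Banker's safety algorithm: return a safe order of process
    indices, or None if the state is unsafe."""
    work = list(available)
    finish = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # P_i can run to completion, then releases what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progress = True
    return order if all(finish) else None

# State after provisionally granting P1's request (0,4,2,0):
available  = [1, 1, 0, 0]
allocation = [[0,0,1,2], [1,4,2,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
need       = [[0,0,0,0], [0,3,3,0], [1,0,0,2], [0,0,2,0], [0,6,4,2]]
```

Running `safe_sequence(available, allocation, need)` returns [0, 2, 3, 4, 1], i.e. <P0, P2, P3, P4, P1>, agreeing with the solution above.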
2. Consider a system consisting of m resources of the same type being shared by n processes. A
process can request or release only one resource at a time. Show that the system is deadlock free
if the following two conditions hold:
a. The maximum need of each process is between one resource and m resources.
b. The sum of all maximum needs is less than m + n.
Solution:
The given conditions can be written as
   (a) Max_i ≥ 1 for all i
   (b) Σ(i=1 to n) Max_i < m + n
Suppose, for contradiction, that the system is deadlocked. Then every process is blocked
waiting for a resource, so no resource is free: Σ(i=1 to n) Allocation_i = m. Also, each
deadlocked process needs at least one more resource: Need_i = Max_i − Allocation_i ≥ 1 for
all i. Summing over all processes,
   Σ Max_i = Σ Allocation_i + Σ Need_i ≥ m + n,
which contradicts condition (b). Hence the system is deadlock free.
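The argument can also be checked by brute force for small m and n. This Python sketch (an illustration, not from the text) models a single resource type with one-at-a-time requests, treating a state as deadlocked when no unit is free yet every process still needs at least one more:

```python
from itertools import product

def deadlock_possible(m, maxes):
    """Try every allocation state; True if a deadlocked state exists."""
    for alloc in product(*(range(x + 1) for x in maxes)):
        if sum(alloc) > m:
            continue   # more resources allocated than exist
        need = [mx - a for mx, a in zip(maxes, alloc)]
        # Deadlock: nothing free, yet every process needs one more unit.
        if sum(alloc) == m and all(nd > 0 for nd in need):
            return True
    return False

# Check every max-need vector satisfying (a) 1 <= Max_i <= m and
# (b) sum(Max_i) < m + n, for small m and n.
m, n = 5, 3
ok = all(
    not deadlock_possible(m, maxes)
    for maxes in product(range(1, m + 1), repeat=n)   # condition (a)
    if sum(maxes) < m + n                              # condition (b)
)
```

By contrast, when condition (b) fails a deadlock can arise: with m = 2 and maxes = (2, 2), the allocation (1, 1) leaves nothing free while both processes still need one more unit.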
3. Using Banker’s algorithm determine whether the following system is in a safe state.
Dr. Hema N Dept of ISE, RNSIT Page 40
Dr. Bhagyashree A
Mrs. Mounika P
Operating Systems (21CS44) IV-ISE, MODULE III
a) The content of the matrix Need is defined to be Max - Allocation and is as follows:
Need
ABC
P0 0 0 2
P1 1 0 1
P2 0 0 2
P3 2 1 0
P4 0 1 4
b) Suppose now process P2 requests the resources (0,0,2), so Request2 = (0,0,2). We check
that Request2 ≤ Need2, i.e. (0,0,2) ≤ (0,0,2), and Request2 ≤ Available, i.e. (0,0,2) ≤
(1,0,2); both hold. Pretending the request is granted gives Available = (1,0,0),
Allocation2 = (1,3,7) and Need2 = (0,0,0). Running the safety algorithm on this new state
yields the safe sequence <P2, P0, P1, P3, P4>, so the request can be granted immediately.
Question Bank
VTU Questions
1. Explain with an example how a resource-allocation graph is used to describe deadlocks. (8) Dec
07/Jan 08/June 2012(6)/Dec 2012/Jan 2014(4)
2. What are two options for breaking deadlock? (7) Dec 07/Jan 08
3. What is wait-for graph? How is it useful for detection of deadlock? (5) Dec 07/Jan 08
5. Which are the three methods to handle deadlock? (3) Dec 08/Jan 09, June 2013.
6. Describe necessary conditions for a deadlock situation to arise and how to handle. (4) Dec 09/
july 2011/june 2012(6)/Dec 2012
7. Consider given chart and answer i) what is content of matrix need? ii) is system safe?If yes give
safe sequence. iii) if request comes from P1, arrives for ( 0,4,2,0), can it be granted ? (12) dec
2010, june 2011, (10) Jan 2015, June/July2017
9. Explain the safety algorithm used in banker’s algorithm, with suitable data structures. (12) dec
2011
10. Explain banker’s algorithm for deadlock avoidance. (10) Jan 2014
11. Explain process termination and resource preemption to recover from deadlock. (6) Jan 2014
12. Consider a system with five Processes Po through P4 and three resource types A, B, and C.
Resource type A has ten instances, resource type B has five instances, and resource type C has
seven instances. The following snapshot of the system has been taken. Calculate need matrix and
find safe sequence using banker’s algorithm.(8) June 2012
Allocation Max Available
ABC ABC ABC
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
13. June 2013
14. Consider a system consisting of m resources of the same type being shared by n processes. A
process can request or release only one resource at a time. Show that the system is deadlock free
if the following two conditions hold:
a. The maximum need of each process is between one resource and m resources.
b. The sum of all maximum needs is less than m + n. (10) Jan 2015
15. With the help of a system model, explain a deadlock and discuss the necessary conditions that
must hold simultaneously in a system for a deadlock to occur. (06 Marks) Jan 2016
16. Using Banker’s algorithm determine whether the following system is in a safe state.
Process Allocation Max Available
ABC ABC ABC
P0 0 0 2 0 0 4 1 0 2
P1 1 0 0 2 0 1
P2 1 3 5 1 3 7
P3 6 3 2 8 4 2
P4 1 4 3 1 5 7
If a request from process P2 arrives for (0 0 2), can the request be granted immediately?
(10 Marks) June 2013, Jan 2016
17. How is a system recovered from deadlock? (04 Marks) Jan 2016
18. What do you mean by fragmentation? Explain difference between internal and external
fragmentation. (6) Dec 07/Jan 08
19. For the page reference string 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6, how many page faults would
occur for the LRU and optimal algorithms, assuming 2 and 6 frames? (10) Dec 07/Jan 08
20. What is the cause of thrashing? How does system detect thrashing? (4) Dec 07/Jan 08, Dec 08/Jan
09, Dec 09/ Jan 10, june 2011.
21. Differentiate between internal and external fragmentation. How are they overcome? (4)
23. With diagram, discuss steps involved in handling a page fault. (6) ) Dec 09/ Jan 10
24. What is address binding? Explain with necessary steps, binding instructions and data to memory
addresses. (8) Dec 09/ Jan 10
25. Mention the problem with simple paging scheme. How TLB is used to solve this problem?
Explain with supporting h/w dig with example. (Dec 2012)
26. What is a deadlock? What are the necessary conditions an OS must satisfy for a deadlock to
occur? (5) June/July2016
27. What are the different methods to handle deadlock? Also explain Deadlock prevention and
deadlock avoidance. (6) June/July2016
28. For the following snapshot. Find the safe sequence using Banker’s algorithm.
Process Allocation Max Available
ABC ABC ABC
P0 0 0 2 0 0 4 1 0 2
P1 1 0 0 2 0 1
P2 1 3 5 1 3 7
P3 6 3 2 8 4 2
P4 1 4 3 1 5 7
i) Is the system in safe state?
ii) If a request from process P2 arrives for (0 0 2), can the request be granted
immediately?(9) June/July2016
29. Explain the multistep processing of a user program with a neat block diagram.(5)
June/July2017
32. What are deadlocks? What are its characteristics? (5) June/July2017
34. What are translation look-aside buffers (TLB)? Explain the TLB in detail with a simple paging
system with a neat diagram. (8) June/July2017
35. Given the memory partitions of 100K, 500K, 200K, 300K, and 600K, apply the First-fit and Best-
fit algorithms to place 212K, 417K, 112K, and 426K. (4) June/July2017
36. Define deadlock. What are the necessary conditions for a deadlock to occur? Indicate how many
of these must hold for a deadlock to happen. (10) Dec 2016/Jan2017
37. State and explain banker’s algorithm for deadlock avoidance. (10) Dec 2016/Jan2017