CPU SCHEDULING and Deadlocks
UNIT – II
CPU Scheduling -Scheduling Criteria, Scheduling Algorithms, Multiple -Processor Scheduling.
Deadlocks - System Model, Deadlocks Characterization, Methods for Handling Deadlocks,
Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, and Recovery from Deadlock
PROCESS SCHEDULING
In multiprogramming, the CPU is kept busy because it switches from one job to another; in simple (uniprogrammed) computers, the CPU sits idle until an I/O request is granted.
Scheduling is an important OS function. All resources are scheduled before use (CPU, memory, devices…..)
Process scheduling is an essential part of Multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into the executable memory at a time and
the loaded process shares the CPU using time multiplexing.
Scheduling Objectives
Maximize throughput.
Maximize number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce Priorities.
Give preference to processes holding key resources
SCHEDULING QUEUES: Just as people wait in rooms, processes wait in queues. There are 3 types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: a process that is present in main memory and is ready to be allocated the CPU for execution is kept in the ready queue.
3. Device queue: a process that is in the waiting state, waiting for an I/O event to complete, is said to be in a device queue. The set of processes waiting for a particular I/O device is called a device queue.
SCHEDULING CRITERIA:
1. Throughput: the number of jobs completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process and its completion.
TAT = waiting time in ready queue + executing time + waiting time in waiting queue for I/O.
3. Waiting time: the time a process spends waiting for the CPU to be allocated.
4. Response time: the time duration between submission and the first response.
5. CPU utilization: the CPU is a costly device, so it must be kept as busy as possible. E.g., a CPU efficiency of 90% means it is busy for 90% of the time and idle for 10%.
CPU SCHEDULING ALGORITHMS:
1. First Come First Served scheduling (FCFS): The process that requests the CPU first holds the CPU first. When a process requests the CPU, it is added to the ready queue, and the CPU is allocated to the process at the head of the queue.
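The FCFS discipline above can be sketched as follows; the burst times used here (24, 3, and 3 ms, all arriving at time 0) are illustrative values, not data from the notes.

```python
# First Come First Served: processes run to completion in arrival order.
def fcfs(bursts):
    """Return per-process waiting times for jobs arriving at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)     # each job waits for all earlier jobs to finish
        clock += burst
    return waits

waits = fcfs([24, 3, 3])        # illustrative burst times in ms
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0 ms
```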
Consider the following set of processes, with the length of the CPU burst time given in milliseconds. (The burst time is the time the CPU requires to execute that job.)
Process   Burst Time   Arrival Time
P1        3            0
P2        6            2
P3        4            4
P4        5            6
P5        2            8
2. SHORTEST JOB FIRST SCHEDULING (SJF): The CPU is assigned to the process that has the smallest CPU burst time. If two processes have the same CPU burst time, FCFS is used to break the tie.
Process   Burst Time
P1        5
P2        24
P3        16
P4        10
P5        3
P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After completion of P5, the short-term scheduler searches for the next shortest job (P1), and so on.
Average Waiting Time:
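A minimal sketch of the computation, assuming the five burst times tabulated above all arrive at time 0:

```python
# Shortest Job First: run jobs in order of increasing burst time.
def sjf_waits(bursts):
    """Return {process: waiting time} for jobs arriving at time 0."""
    order = sorted(bursts, key=bursts.get)   # shortest burst first
    waits, clock = {}, 0
    for name in order:
        waits[name] = clock
        clock += bursts[name]
    return waits

bursts = {"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3}
waits = sjf_waits(bursts)
print(waits)  # P5 waits 0, P1 waits 3, P4 waits 8, P3 waits 18, P2 waits 34
print(sum(waits.values()) / len(waits))  # average waiting time: 12.6 ms
```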
3) SHORTEST REMAINING TIME FIRST SCHEDULING (SRTF): The short-term scheduler always chooses the process that has the shortest remaining time. When a new process joins the ready queue, the short-term scheduler compares the remaining time of the executing process with the burst time of the new process.
If the new process has the least CPU burst time, the scheduler selects that job and connects it to the CPU; otherwise the old process continues.
P1 arrives at time 0, so P1 executes first. P2 arrives at time 2; comparing P1's remaining time (3 - 2 = 1) with P2's burst (6), P1 continues. After P1 finishes, P2 executes. At time 4, P3 arrives; comparing P2's remaining time (6 - 1 = 5) with P3's burst (4), since 4 < 5, P3 executes. At time 6, P4 arrives; comparing P3's remaining time (4 - 2 = 2) with P4's burst (5), since 2 < 5, P3 continues. After P3, P5 has the least burst of the processes in the ready queue, so P5 executes, then P2, then P4.
FORMULA: Turnaround Time = Finish Time - Arrival Time
Turnaround time for P1 => 3 - 0 = 3
Turnaround time for P2 => 15 - 2 = 13
Turnaround time for P3 => 8 - 4 = 4
Turnaround time for P4 => 20 - 6 = 14
Turnaround time for P5 => 10 - 8 = 2
Average turnaround time => 36/5 = 7.2 ms.
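The walkthrough above can be checked with a small millisecond-step simulation of SRTF (a sketch; ties are broken by arrival order):

```python
# Shortest Remaining Time First: 1 ms-step simulation.
def srtf(jobs):
    """jobs: {name: (arrival, burst)}; returns {name: finish time}."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        # Among arrived, unfinished jobs pick the shortest remaining time.
        ready = [n for n in remaining if jobs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        run = min(ready, key=lambda n: (remaining[n], jobs[n][0]))
        remaining[run] -= 1
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    return finish

jobs = {"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)}
finish = srtf(jobs)
print(finish)  # {'P1': 3, 'P3': 8, 'P5': 10, 'P2': 15, 'P4': 20}
tat = {n: finish[n] - jobs[n][0] for n in jobs}
print(sum(tat.values()) / len(tat))  # average turnaround time: 7.2 ms
```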
4) ROUND ROBIN SCHEDULING (RR): It is designed especially for time-sharing systems. Here the CPU switches between the processes: when the time quantum expires, the CPU is switched to another job. A small unit of time is called a time quantum or time slice; a time quantum is generally from 10 to 100 ms and depends on the OS. Here the ready queue is a circular queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
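A sketch of round robin using a circular ready queue; the workload (burst times 24, 3, and 3 ms with a 4 ms quantum, all arriving at time 0) is an assumption for illustration, not data from the notes:

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum per turn.
def round_robin(bursts, quantum):
    """bursts: {name: burst}; returns {name: completion time} (arrival 0)."""
    queue = deque(bursts)                 # circular ready queue
    remaining = dict(bursts)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)            # go to the back of the queue
    return finish

finish = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(finish)  # {'P2': 7, 'P3': 10, 'P1': 30}
```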
5) PRIORITY SCHEDULING: A priority is associated with each process, and the CPU is allocated to the process with the highest priority (here, a lower number means a higher priority).
Process   Burst Time   Priority
P2        12           4
P3        1            5
P4        3            1
P5        4            3
P4 has the highest priority, so the CPU is allocated to process P4 first, then P1, P5, P2, P3.
Disadvantage: Starvation. Starvation means only high-priority processes execute, while low-priority processes wait for the CPU for an indefinitely long period of time.
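A sketch of the selection rule, using only the rows tabulated above (P1's burst time is not listed in the notes, so it is omitted here); a lower number means a higher priority:

```python
# Priority scheduling: always pick the ready process with the best priority.
def priority_order(procs):
    """procs: {name: (burst, priority)}; lower priority number runs first."""
    return sorted(procs, key=lambda n: procs[n][1])

procs = {"P2": (12, 4), "P3": (1, 5), "P4": (3, 1), "P5": (4, 3)}
print(priority_order(procs))  # ['P4', 'P5', 'P2', 'P3']
```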
MULTIPLE-PROCESSOR SCHEDULING
1) Processor Affinity
Successive memory accesses by a process are often satisfied in cache memory. What happens if the process migrates to another processor? The contents of cache memory must be invalidated on the first processor, and the cache on the second processor must be repopulated. Most symmetric multiprocessor (SMP) systems therefore try to avoid migrating processes from one processor to another, keeping a process running on the same processor. This is called processor affinity.
a) Soft affinity: the system attempts to keep a process on the same processor but makes no guarantees.
b) Hard affinity: the process specifies that it is not to be moved between processors.
2) Load balancing: load balancing keeps one processor from sitting idle while another is overloaded. Balancing can be achieved through push migration or pull migration.
Push migration:
Push migration involves a separate process that runs periodically (e.g., every 200 ms) and moves processes from heavily loaded processors onto less loaded processors.
Pull migration:
Pull migration involves idle processors taking processes from the ready queues of
the other processors.
DEADLOCKS - SYSTEM MODEL
Consider a system with 3 CDRW drives, and suppose each of 3 processes holds one of these CDRW drives. If each process now requests another drive, the 3 processes will be in a deadlocked state. Each is waiting for the event "CDRW is released", which can be caused only by one of the other waiting processes. This example illustrates a deadlock involving the same resource type.
Deadlocks may also involve different resource types. Consider a system with one
printer and one DVD drive. The process Pi is holding the DVD and process Pj is
holding the printer. If Pi requests the printer and Pj requests the DVD drive, a
deadlock occurs.
DEADLOCK CHARACTERIZATION:
In a deadlock, processes never finish executing, and system resources are tied up,
preventing other jobs from starting.
NECESSARY CONDITIONS:
A deadlock situation can arise if the following 4 conditions hold simultaneously in a system:
1. MUTUAL EXCLUSION: Only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. HOLD AND WAIT: A process must be holding at least one
resource and waiting to acquire additional resources that are currently
being held by other processes.
3. NO PREEMPTION: Resources cannot be preempted. A
resource can be released only voluntarily by the process holding it, after
that process has completed its task.
4. CIRCULAR WAIT: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
RESOURCE ALLOCATION GRAPH
Deadlocks can be described more precisely in terms of a directed graph called a system
resource allocation graph. This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into 2 different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi -> Rj. It signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj -> Pi; it signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi -> Rj is called a request edge. A directed edge Rj -> Pi is called an assignment edge.
We represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. A request edge points only to the rectangle Rj. An assignment edge must also designate one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the
resource allocation graph. When this request can be fulfilled, the request edge is
instantaneously transformed to an assignment edge. When the process no longer needs
access to the resource, it releases the resource, as a result, the assignment edge is deleted.
The sets P, R, E:
P= {P1, P2, P3}
R= {R1, R2, R3, R4}
E= {P1 ->R1, P2 ->R3, R1 ->P2, R2 ->P2, R2 ->P1, R3 ->P3}
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
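Deadlock in a resource-allocation graph shows up as a cycle. A sketch of cycle detection by depth-first search, using the edge set E above and then adding a request edge P3 -> R2 (an assumed edge that produces the deadlock just described):

```python
# Detect a cycle in a directed resource-allocation graph with DFS.
def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY                  # node is on the current DFS path
        for nxt in graph[node]:
            if color[nxt] == GRAY:          # back edge => cycle
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                   # False: no cycle, so no deadlock
print(has_cycle(E + [("P3", "R2")]))  # True: P1->R1->P2->R3->P3->R2->P1
```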
DEADLOCK PREVENTION
For a deadlock to occur, each of the 4 necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources
o Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none
o Low resource utilization; starvation possible
No Preemption –
o If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held are released
o Preempted resources are added to the list of resources for which the process
is waiting
o Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
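The total-ordering rule can be sketched with two threads that each need the same pair of locks; acquiring locks in a fixed global order makes a circular wait impossible (the lock names here are illustrative):

```python
import threading

# Impose a total order on locks: always acquire the lower-numbered one first.
LOCK_ORDER = {"disk": 0, "printer": 1}
locks = {name: threading.Lock() for name in LOCK_ORDER}

done = []

def worker(needed):
    # Sort the needed locks by their global order before acquiring any.
    for name in sorted(needed, key=LOCK_ORDER.get):
        locks[name].acquire()
    done.append(needed)
    for name in needed:
        locks[name].release()

# Both threads want both locks, in opposite "natural" orders; the sort
# prevents the circular wait that could otherwise deadlock them.
t1 = threading.Thread(target=worker, args=(["disk", "printer"],))
t2 = threading.Thread(target=worker, args=(["printer", "disk"],))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(done))  # 2: both threads finished, no deadlock
```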
DEADLOCK AVOIDANCE
Requires that the system have some additional a priori information available.
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
Safe State
When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
o If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished
o When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate
o When Pi terminates, Pi+1 can obtain its needed resources, and so on
If a system is in a safe state => no deadlocks.
If a system is in an unsafe state => possibility of deadlock.
Avoidance: ensure that the system will never enter an unsafe state.
Avoidance algorithms
Single instance of a resource type
o Use a resource-allocation graph
Multiple instances of a resource type
o Use the banker's algorithm
Resource-Allocation Graph Scheme
A claim edge Pi -> Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.
A claim edge converts to a request edge when the process requests the resource. A request edge is converted to an assignment edge when the resource is allocated to the process. When a resource is released by a process, the assignment edge reconverts to a claim edge. Resources must be claimed a priori in the system.
Banker’s Algorithm
Multiple instances
Each process must a priori claim its maximum use.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
Let n = number of processes, and m = number of resource types.
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize: Work = Available; Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both:
(a) Finish[i] = false
(b) Need_i <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
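The steps above can be sketched directly; the small two-process state used to exercise it here is illustrative:

```python
# Banker's safety algorithm: search for a safe completion order.
def is_safe(available, allocation, need):
    work = list(available)
    n = len(allocation)
    finish = [False] * n
    sequence = []
    while True:
        # Step 2: find an unfinished process whose need fits in work.
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                # Step 3: pretend it runs to completion, releasing resources.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break   # no such i exists: go to step 4
    # Step 4: the state is safe iff every process could finish.
    return all(finish), sequence

# Illustrative state: 2 processes, 1 resource type with 1 instance free.
safe, seq = is_safe(available=[1],
                    allocation=[[1], [0]],
                    need=[[1], [2]])
print(safe, seq)  # True [0, 1]: P0 can finish first, then P1
```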
Resource-Request Algorithm for Process Pi
Request_i = request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj.
1. If Request_i <= Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Request_i <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available - Request_i;
Allocation_i = Allocation_i + Request_i;
Need_i = Need_i - Request_i;
o If safe => the resources are allocated to Pi
o If unsafe => Pi must wait, and the old resource-allocation state is restored
          Allocation   Max      Available
          A B C        A B C    A B C
P0        0 1 0        7 5 3    3 3 2
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3
The content of the matrix Need is defined to be Max - Allocation:
          Need
          A B C
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1
The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
Example: P1 requests (1, 0, 2).
Check that Request <= Available (that is, (1,0,2) <= (3,3,2)) => true.
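A sketch that checks the worked example end-to-end: it computes Need = Max - Allocation, runs the safety algorithm on the state tabulated above, and then tests whether P1's request (1, 0, 2) can be granted:

```python
# Banker's algorithm applied to the 5-process, 3-resource example above.
def is_safe(available, allocation, need):
    work, n = list(available), len(allocation)
    finish, seq = [False] * n, []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(3)):
                work = [work[j] + allocation[i][j] for j in range(3)]
                finish[i], progress = True, True
                seq.append(i)
    return all(finish), seq

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3, 3, 2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(need[0])                                  # [7, 4, 3]
print(is_safe(available, allocation, need)[0])  # True: a safe sequence exists

# Resource-request algorithm: P1 requests (1, 0, 2).
request, i = [1, 0, 2], 1
assert all(request[j] <= need[i][j] for j in range(3))    # within max claim
assert all(request[j] <= available[j] for j in range(3))  # resources free
# Pretend to allocate, then re-run the safety check.
available     = [available[j] - request[j] for j in range(3)]
allocation[i] = [allocation[i][j] + request[j] for j in range(3)]
need[i]       = [need[i][j] - request[j] for j in range(3)]
print(is_safe(available, allocation, need)[0])  # True: the request is granted
```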
Several Instances of a Resource Type