Process Scheduling: Foundation and scheduling objectives, Types of Schedulers, Scheduling criteria:
CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time; Scheduling algorithms:
Pre-emptive and Non-pre-emptive, FCFS, SJF, RR; Multiprocessor scheduling; Real-Time scheduling: RM
and EDF.
Inter-process Communication: Critical Section, Race Conditions, Mutual Exclusion, Hardware Solution,
Strict Alternation, Peterson's Solution, The Producer/Consumer Problem, Semaphores, Event Counters,
Monitors, Message Passing, Classical IPC Problems: Readers & Writers Problem, Dining Philosophers
Problem, etc.
PROCESS SCHEDULING:
In multiprogramming the CPU is kept busy because it switches from one job to another; in simple (uniprogrammed) computers the CPU sits idle until an I/O request is granted.
Scheduling is an important OS function: all resources (CPU, memory, devices, ...) are scheduled before use.
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Scheduling Objectives:
Maximize throughput.
Maximize the number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce priorities.
Give preference to processes holding key resources.
SCHEDULING QUEUES: Just as people live in rooms, processes are kept in queues. There are 3 types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: a process that is present in main memory and is ready to be allocated the CPU for execution is kept in the ready queue.
3. Device queue: a process that is in the waiting state, waiting for an I/O event to complete, is said to be in a device queue; the processes waiting for a particular I/O device form that device's queue.
Schedulers: There are 3 schedulers: the long-term (job) scheduler, the short-term (CPU) scheduler, and the medium-term scheduler.
Scheduling criteria:
1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of the process and the time of its completion.
TAT = waiting time in ready queue + executing time + waiting time in waiting queue for I/O.
3. Waiting time: the time spent by the process waiting for the CPU to be allocated.
4. Response time: the time duration between the submission and the first response.
5. CPU utilization: the CPU is a costly device; it must be kept as busy as possible.
E.g.: a CPU efficiency of 90% means it is busy for 90 time units and idle for 10.
CPU SCHEDULING ALGORITHMS:
1. First Come First Served scheduling (FCFS): The process that requests the CPU first holds the CPU first. When a process requests the CPU it is loaded into the ready queue, and the CPU is connected to processes in order of arrival.
Consider the following set of processes; the length of the CPU burst time is given in milliseconds. (Burst time is the time the CPU requires to execute that job.)

Process  Burst Time  Arrival Time
P1       3           0
P2       6           2
P3       4           4
P4       5           6
P5       2           8
2. Shortest Job First scheduling (SJF): The process having the smallest CPU burst time is assigned the CPU first. If two processes have the same CPU burst time, FCFS is used.
Process  Burst Time
P1       5
P2       24
P3       16
P4       10
P5       3
P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After completion of P5, the short-term scheduler searches for the next shortest (P1), and so on.
Average Waiting Time:
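The SJF order and average waiting time for the table above can be checked with a short sketch (Python is used here for illustration; all processes are assumed to arrive at time 0, as the original example states for FCFS):

```python
# SJF (non-preemptive): pick the shortest burst first; all arrive at time 0.
bursts = {"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3}

order = sorted(bursts, key=bursts.get)   # shortest burst time first
time, waiting = 0, {}
for p in order:
    waiting[p] = time                    # time spent in the ready queue
    time += bursts[p]

avg_wait = sum(waiting.values()) / len(waiting)
print(order)                             # ['P5', 'P1', 'P4', 'P3', 'P2']
print(avg_wait)                          # 12.6
```

Each process waits for the sum of the bursts scheduled before it, which is why running short jobs first minimizes the average waiting time.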
3. Shortest Remaining Time First scheduling (SRTF): The short-term scheduler always chooses the process that has the shortest remaining time. When a new process joins the ready queue, the short-term scheduler compares the remaining time of the executing process and the new process. If the new process has the least CPU burst time, the scheduler selects that job and connects it to the CPU; otherwise it continues the old process.
Process  Burst Time  Arrival Time
P1       3           0
P2       6           2
P3       4           4
P4       5           6
P5       2           8
P1 arrives at time 0, so P1 executes first. P2 arrives at time 2: compare P1's remaining time (3-2 = 1) with 6, so P1 continues. After P1, P2 executes. At time 4, P3 arrives: compare P2's remaining time (6-1 = 5) with 4; since 4 < 5, P3 executes. At time 6, P4 arrives: compare P3's remaining time (4-2 = 2) with 5; since 2 < 5, P3 continues. After P3, P5 (which arrived at time 8) has the least burst time of the three waiting processes, so P5 executes, then P2, then P4.
FORMULA: Turnaround Time = Finish Time - Arrival Time
Turnaround time for P1 => 3 - 0 = 3
Turnaround time for P2 => 15 - 2 = 13
Turnaround time for P3 => 8 - 4 = 4
Turnaround time for P4 => 20 - 6 = 14
Turnaround time for P5 => 10 - 8 = 2
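The walkthrough above can be reproduced with a small millisecond-by-millisecond simulation (a sketch in Python; process names and the table values are taken from the example):

```python
# SRTF (preemptive SJF) simulation: at every millisecond, run the
# arrived process with the least remaining burst time.
procs = {"P1": (0, 3), "P2": (2, 6), "P3": (4, 4), "P4": (6, 5), "P5": (8, 2)}
# values are (arrival time, burst time) in ms

remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish, t = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:
        t += 1                                  # CPU idle, no one arrived yet
        continue
    p = min(ready, key=lambda q: remaining[q])  # least remaining time wins
    remaining[p] -= 1                           # run p for one millisecond
    t += 1
    if remaining[p] == 0:
        finish[p] = t
        del remaining[p]

turnaround = {p: finish[p] - procs[p][0] for p in procs}
print(finish)       # P1: 3, P3: 8, P5: 10, P2: 15, P4: 20
print(turnaround)   # P1: 3, P2: 13, P3: 4, P4: 14, P5: 2
```

The finish and turnaround times match the Gantt-chart reasoning above.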
4. Round Robin scheduling (RR): It is designed especially for time-sharing systems. Here the CPU switches between the processes: when the time quantum expires, the CPU is switched to another job. A small unit of time is called a time quantum or time slice; a time quantum is generally from 10 to 100 ms and generally depends on the OS. Here the ready queue is a circular queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
Process  Burst Time
P1       30
P2       6
P3       8
AVERAGE WAITING TIME :
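The example does not state the time quantum, so the sketch below assumes a quantum of 5 ms (an assumption for illustration only) and all processes arriving at time 0:

```python
from collections import deque

# Round Robin simulation. The time quantum is NOT given in the example
# above; 5 ms is assumed here purely for illustration.
bursts = {"P1": 30, "P2": 6, "P3": 8}
quantum = 5

queue = deque(bursts)                # FIFO ready queue (circular behaviour)
remaining = dict(bursts)
t, finish = 0, {}
while queue:
    p = queue.popleft()
    run = min(quantum, remaining[p]) # run for one quantum or until done
    t += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = t
    else:
        queue.append(p)              # pre-empted: back to the tail

waiting = {p: finish[p] - bursts[p] for p in bursts}
avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)    # P1: 14, P2: 15, P3: 16
print(avg_wait)   # 15.0
```

With a different quantum the finish order and waiting times change, which is exactly why the choice of quantum matters for RR.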
5) PRIORITY SCHEDULING: A priority is associated with each process, and the CPU is allocated to the process with the highest priority (here, the smallest priority number indicates the highest priority).
Process  Burst Time  Priority
P2       12          4
P3       1           5
P4       3           1
P5       4           3
P4 has the highest priority, so the CPU is allocated to P4 first, then P1, P5, P2, P3.
Disadvantage: Starvation.
Starvation means only high-priority processes are executing, while low-priority processes wait for the CPU for long periods of time.
2) Processor Affinity
Successive memory accesses by a process are often satisfied in cache memory. If the process migrates to another processor, the contents of cache memory must be invalidated for the first processor and the cache for the second processor must be repopulated. Most symmetric multiprocessor systems therefore try to avoid migration of processes from one processor to another and keep a process running on the same processor. This is called processor affinity.
a) Soft affinity:
Soft affinity occurs when the system attempts to keep processes on the same
processor but makes no guarantees.
b) Hard affinity:
Process specifies that it is not to be moved between processors.
3) Load balancing:
One processor won't sit idle while another is overloaded.
Balancing can be achieved through push migration or pull migration.
Push migration:
Push migration involves a separate process that runs periodically (e.g. every 200 ms) and moves processes from heavily loaded processors onto less loaded processors.
Pull migration:
Pull migration involves idle processors taking processes from the ready queues of the other
processors.
Consider the following task set, where e is the execution time and P is the period:

Task  e    P
T1    0.5  3
T2    1    4
T3    2    6
Table 1. Task set
U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75
As the processor utilization is less than 1 (100%), the task set is schedulable; it also satisfies the rate-monotonic bound U <= n(2^(1/n) - 1), which is approximately 0.78 for n = 3 tasks.
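The utilization check above can be sketched as follows (the Liu-Layland bound n(2^(1/n) - 1) is the standard rate-monotonic schedulability test):

```python
# Rate-monotonic schedulability check for the task set in Table 1:
# each task is (execution time e, period P).
tasks = [(0.5, 3), (1, 4), (2, 6)]

U = sum(e / P for e, P in tasks)        # processor utilization
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)          # Liu-Layland bound for n tasks

print(round(U, 2))                      # 0.75
print(round(bound, 2))                  # 0.78 -> U <= bound: RM-schedulable
```

Note the bound is sufficient but not necessary: a task set with U above the bound may still be schedulable, but then it must be checked by other means (e.g. response-time analysis).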
Now consider the following task set (P = period, e = execution time, D = deadline):

Task  P  e    D
T1    2  0.5  2
T2    4  1    4
T3    5  1.5  5
Solution: U = 0.5/2 + 1/4 + 1.5/5 = 0.25 + 0.25 + 0.30 = 0.80. Since U <= 1, the task set is schedulable under EDF.
Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions. Coordination of simultaneous processes to complete a task is known as process synchronization.
The critical section problem
Consider a system consisting of n processes, each having a segment of code. This segment of code is said to be its critical section.
E.g.: Railway reservation system. Two persons from different stations want to reserve tickets; the train number and destination are common, and the two persons try to get the reservation at the same time. Unfortunately only one berth is available, and both are trying for that berth.
This is the critical section problem. The solution is that when one process is executing in its critical section, no other process is to be allowed to execute in its critical section.
The critical section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
Critical section:
The portion in any program that accesses a shared resource is called as critical section (or)
critical region.
Peterson's solution:
Peterson's solution is one of the solutions to the critical section problem involving two processes. This solution states that when one process is executing its critical section, the other process executes the rest of the code, and vice versa.
Peterson's solution requires two shared data items:
1) turn: indicates whose turn it is to enter the critical section. If turn == i, then process i is allowed into its critical section.
2) flag: indicates when a process wants to enter its critical section. When process i wants to enter its critical section, it sets flag[i] to true.
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                  /* busy wait */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);
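A minimal sketch of Peterson's algorithm with two CPython threads is shown below. This relies on CPython's interpreter executing bytecodes sequentially consistently; on real hardware with weaker memory models, memory barriers would be required. The switch interval is shortened so the busy-wait loops stay brief:

```python
import sys
import threading

sys.setswitchinterval(1e-4)    # switch threads often; spinning stays short

flag = [False, False]          # flag[i]: process i wants to enter
turn = 0                       # whose turn it is to yield
count = 0                      # shared resource protected by the algorithm
N = 1000

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True         # entry section: announce intent
        turn = j               # give the other process priority
        while flag[j] and turn == j:
            pass               # busy wait while the other may enter
        count += 1             # critical section
        flag[i] = False        # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                   # 2 * N: no increments are lost
```

If the entry/exit sections are removed, the unsynchronized `count += 1` can lose updates, which is exactly the race condition the protocol prevents.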
Synchronization hardware
In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by
disabling the interrupts before the process enters its critical section and enabling
them after it has exited the critical section.
Disable interrupts
Critical section
Enable interrupts
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
When a process wants to enter the critical section and the value of lock is false, TestAndSet returns false and the value of lock becomes true. Thus, for other processes wanting to enter their critical sections, TestAndSet returns true, and those processes do busy waiting until the first process exits its critical section and sets the value of lock to false.
• Definition:
boolean TestAndSet(boolean &lock) {
    boolean temp = lock;
    lock = true;
    return temp;
}
Algorithm for TestAndSet
do{
while testandset(&lock)
//do nothing
//critical section
lock=false
remainder section
}while(TRUE);
Solution using Swap: the Swap instruction exchanges the contents of two variables atomically. Initially lock = false, and each process uses a local variable key = true:
lock = false
key = true
After the swap instruction:
lock = true
key = false
Now key is false, so the process exits the repeat-until loop and enters its critical section. While that process is in its critical section (lock = true), other processes wanting to enter the critical section will have:
lock = true
key = true
Hence they busy-wait in the repeat-until loop until the process exits its critical section and sets the value of lock to false.
Semaphores
A semaphore is an integer variable that is accessed only through two operations:
1) wait: the wait operation decrements the count by 1. If the resulting value is negative, the process executing the wait operation is blocked.
2) signal: the signal operation increments the count by 1. If the resulting value is not positive, one of the processes blocked in a wait operation is unblocked.
wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

signal(S) {
    S++;
}
The first process that executes the wait operation is immediately granted entry, and sem.count becomes 0. If some other process wants the critical section and executes wait(), it is blocked, since the value becomes -1. When the first process exits the critical section it executes signal(); sem.count is incremented by 1, and the blocked process is removed from the semaphore queue and added to the ready queue.
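A binary semaphore used for mutual exclusion can be demonstrated with Python's built-in `threading.Semaphore`, where wait maps to `acquire()` and signal to `release()`:

```python
import threading

# Mutual exclusion with a binary semaphore: acquire() is wait(S),
# release() is signal(S).
sem = threading.Semaphore(1)    # count starts at 1: critical section free
count = 0
ITERS = 50000

def worker():
    global count
    for _ in range(ITERS):
        sem.acquire()           # wait(S): blocks if another thread holds it
        count += 1              # critical section
        sem.release()           # signal(S): wake one blocked thread, if any

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                    # 2 * ITERS: no updates are lost
```

With the count initialized to a value n > 1 the same object acts as a counting semaphore, admitting up to n processes at once.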
Problems:
1) Deadlock: Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other blocked processes.
2) Starvation: One or more processes get blocked forever and never get a chance to take their turn in the critical section.
3) Priority inversion: If a low-priority process is running, medium-priority processes are waiting for the low-priority process, and high-priority processes are waiting for the medium-priority processes, this is called priority inversion.
The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked).
Classic problems of synchronization
1) Bounded-buffer problem
Two processes share a common, fixed-size buffer. The producer puts information into the buffer and the consumer takes it out. A problem arises when the producer wants to put a new item in the buffer but it is already full; the solution is for the producer to wait until the consumer has consumed at least one item. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something in the buffer and wakes it up.
The structure of the producer process:
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);
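The full scheme uses three semaphores: `empty` counts free slots, `full` counts filled slots, and `mutex` guards the buffer itself. A runnable sketch mirroring the wait/signal structure above (buffer size and item count are arbitrary illustration values):

```python
import threading
from collections import deque

SIZE, ITEMS = 5, 50
buffer = deque()
empty = threading.Semaphore(SIZE)   # counts free slots, starts at SIZE
full = threading.Semaphore(0)       # counts filled slots, starts at 0
mutex = threading.Semaphore(1)      # binary semaphore guarding the buffer
consumed = []

def producer():
    for item in range(ITEMS):       # produce an item
        empty.acquire()             # wait(empty): block if buffer is full
        mutex.acquire()             # wait(mutex)
        buffer.append(item)         # add the item to the buffer
        mutex.release()             # signal(mutex)
        full.release()              # signal(full): one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()              # wait(full): block if buffer is empty
        mutex.acquire()             # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()             # signal(mutex)
        empty.release()             # signal(empty): one more free slot

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))   # True: all items arrive in order
```

Note the ordering matters: a producer must wait(empty) before wait(mutex), otherwise it could sleep on a full buffer while holding the mutex, deadlocking the consumer.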
MONITORS
The disadvantage of the semaphore is that it is an unstructured construct: wait and signal operations can be scattered throughout a program, so debugging becomes difficult.
A monitor is an object that contains both the data and the procedures needed to perform allocation of a shared resource. To accomplish resource allocation using monitors, a process must call a monitor entry routine. Many processes may want to enter the monitor at the same time, but only one process at a time is allowed to enter. Data inside a monitor may be either global to all routines within the monitor or local to a specific routine. Monitor data is accessible only within the monitor; there is no way for processes outside the monitor to access monitor data. This is a form of information hiding.
If a process calls a monitor entry routine while no other process is executing inside the monitor, the process acquires a lock on the monitor and enters it. While a process is in the monitor, other processes may not enter the monitor to acquire the resource. If a process calls a monitor entry routine while the monitor is locked, the monitor makes the calling process wait outside until the lock on the monitor is released. The process that has the resource will call a monitor entry routine to release the resource; this routine frees the resource and calls signal to allow one of the waiting processes to enter the monitor and acquire the resource. The monitor gives higher priority to waiting processes than to newly arriving ones.
Structure:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure Pn (…) { …… }
    initialization code (…) { … }
}
Processes can call the procedures P1, P2, ..., but they cannot access the local variables of the monitor.
Schematic view of a Monitor
The monitor provides condition variables along with two operations on them, wait and signal:
wait(condition variable)
signal(condition variable)
Every condition variable has an associated queue. A process calling wait on a particular condition variable is placed into the queue associated with that condition variable. A process calling signal on a particular condition variable causes a process waiting on that condition variable to be removed from the queue associated with it.
Solution to the producer-consumer problem using monitors:

monitor producerconsumer
{
    condition full, empty;
    int count;

    procedure insert(item)
    {
        if (count == MAX)
            wait(full);
        insert_item(item);
        count = count + 1;
        if (count == 1)
            signal(empty);
    }

    procedure remove()
    {
        if (count == 0)
            wait(empty);
        remove_item(item);
        count = count - 1;
        if (count == MAX - 1)
            signal(full);
    }
}

procedure producer()
{
    producerconsumer.insert(item);
}

procedure consumer()
{
    producerconsumer.remove();
}
Solution to the dining philosophers problem using monitors
A philosopher may pick up his forks only if both of them are available. A philosopher can eat only if his two neighbours are not eating; otherwise he delays himself when he is hungry.
DiningPhilosophers.take_forks(): acquires the forks, which may block the process.
eat_noodles()
DiningPhilosophers.put_forks(): releases the forks.
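The take_forks/put_forks rule above can be sketched with a monitor-style condition variable; Python's `threading.Condition` plays the role of the monitor lock plus its condition queue (the philosopher count and meal count are illustration values):

```python
import threading

N, MEALS = 5, 10
state = ["thinking"] * N            # state of each philosopher
monitor = threading.Condition()     # monitor lock + condition variable
meals = [0] * N

def can_eat(i):
    left, right = (i - 1) % N, (i + 1) % N
    return state[left] != "eating" and state[right] != "eating"

def take_forks(i):
    with monitor:                   # enter the monitor
        while not can_eat(i):
            monitor.wait()          # blocked until neighbours finish
        state[i] = "eating"         # both forks are now held

def put_forks(i):
    with monitor:
        state[i] = "thinking"
        monitor.notify_all()        # wake neighbours that may now eat

def philosopher(i):
    for _ in range(MEALS):
        take_forks(i)
        meals[i] += 1               # eat noodles
        put_forks(i)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)                        # every philosopher ate MEALS times
```

Because a philosopher picks up both forks inside the monitor, or none at all, the circular hold-and-wait that causes deadlock in the naive two-semaphore solution cannot occur.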
Resuming processes within a monitor
If several processes are suspended on condition x and x.signal() is executed by some process, how do we determine which of the suspended processes should be resumed next? One solution is FCFS (the process that has been waiting the longest is resumed first). In many circumstances such a simple technique is not adequate; an alternative is to assign priorities and wake up the process with the highest priority.