CPU Scheduling
PROCESS SCHEDULERS
A process travels through various scheduling queues throughout its entire lifetime.
Its journey from one queue to another correspondingly changes its process state.
At this point, consider that in a multiprogramming system there are multiple programs residing in the system, each staying until it has exhausted its lifetime.
Since several jobs may reside in a queue at the same time, a part of the operating system, known as the scheduler, selects among them systematically.
There are three types of schedulers. The long-term scheduler or job scheduler selects processes from secondary storage and loads them into memory for execution.
PROCESS SCHEDULERS
The short-term scheduler or dispatcher selects a process from among the processes that are ready to execute and allocates the CPU to it.
Hence, it is also called the CPU scheduler.
Finally, the medium-term scheduler or swapper swaps processes in and out of memory.
The long-term scheduler executes the least frequently among the three schedulers.
The job scheduler is in charge of the admission of
jobs into the system.
PROCESS SCHEDULERS
It must be noted that new processes may be created only once in a while, which means that the long-term scheduler is commonly idle most of the time.
The long-term scheduler controls the degree of multiprogramming, that is, the number of processes that get into memory.
The long interval between executions enables the long-term scheduler to take more time in selecting a process for execution.
The long-term scheduler must select a good combination of I/O-bound and CPU-bound processes in order to maximize the utilization of both the CPU and the I/O devices.
PROCESS SCHEDULERS
Some systems have a medium-term scheduler.
This scheduler works quite in contrast with the long-term scheduler.
The swapper suspends a process by temporarily removing it from memory (swapping it out) and transferring it to a backing store.
It then replaces the swapped-out process with another job from secondary storage.
This practice improves the performance of the
system.
The short-term scheduler must select a new process
from memory to be fed into the CPU.
PROCESS SCHEDULERS
A process may execute for only a few milliseconds before
waiting for an I/O request.
Due to the brief time frame between executions, the short-term scheduler must be very fast in order to keep the CPU from going idle.
Switching the CPU from one process to another results in overhead.
The overhead time is spent saving the state of the old process and loading the saved state of the new process.
This is known as a context switch.
Since context switch time is pure overhead it must be
minimized.
CPU SCHEDULERS
Whenever the CPU becomes idle, the operating system
(particularly the CPU scheduler) must select one of the
processes in the ready queue for execution.
CPU scheduling decisions may take place under the
following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the
ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the
ready state (for example, completion of I/O).
4. When a process terminates.
CPU SCHEDULERS
For circumstances 1 and 4, there is no choice in terms of
scheduling.
A new process (if one exists in the ready queue) must be
selected for execution.
There is a choice, however, for circumstances 2 and 3.
When scheduling takes place only under circumstances 1
and 4, the scheduling scheme is non-preemptive;
otherwise, the scheduling scheme is preemptive.
Under non-preemptive scheduling, once the CPU has
been allocated to a process, the process keeps the CPU
until it releases the CPU either by terminating or switching
states.
Preemptive scheduling incurs a cost.
CPU SCHEDULERS
Consider the case of two processes sharing data.
One may be in the midst of updating the data
when it is preempted, and the second process is
run.
The second process may try to read the data,
which are currently in an inconsistent state.
New mechanisms thus are needed to coordinate
access to shared data.
CPU SCHEDULING
ALGORITHMS
Different CPU-scheduling algorithms have different properties and may favor one class of processes over another.
Many criteria have been suggested for comparing CPU-scheduling algorithms.
The characteristics used for comparison can make a
substantial difference in the determination of the best
algorithm.
The criteria should include the following:
1. CPU Utilization. This measures how busy the CPU is.
CPU utilization may range from 0 to 100 percent. In a
real system, it should range from 40% (for a lightly
loaded system) to 90% (for a heavily loaded system).
CPU SCHEDULING
ALGORITHMS
2. Throughput. This measures the number of processes completed per unit of time.
3. Turnaround Time. This measures the interval from the time a process is submitted to the time it is completed.
4. Waiting Time. This measures the total time a process spends waiting in the ready queue.
5. Response Time. This measures the time from the submission of a request until the first response is produced.
A good CPU scheduling
algorithm maximizes CPU utilization
and throughput and minimizes turnaround time, waiting time
and response time.
In most cases, the average measure is optimized.
However, in some cases, it is desired to optimize the minimum
or maximum values, rather than the average.
For example, to guarantee that all users get good service, it
may be better to minimize the maximum response time.
For interactive systems (time-sharing systems), some analysts suggest that minimizing the variance in the response time is more important than minimizing the average response time.
A system with a reasonable and predictable response may be considered more desirable than a system that is faster on the average but highly variable.
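As a quick, purely illustrative sketch (the response-time samples below are hypothetical, not data from these slides), the following Python snippet compares two systems with the same average response time but very different variances:

from statistics import mean, pvariance

# Hypothetical response times in milliseconds (illustrative values only).
system_a = [50, 52, 48, 51, 49]   # predictable: low variance
system_b = [5, 120, 10, 110, 5]   # faster for some requests, but highly variable

for name, samples in (("A", system_a), ("B", system_b)):
    print(name, "mean:", mean(samples), "variance:", pvariance(samples), "max:", max(samples))
# Both means are 50 ms, but system B's variance (2830) and worst case (120 ms) are far larger.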
First-Come First-Served
(FCFS) Scheduling Algorithm
This is the simplest CPU-scheduling algorithm.
The process that requests the CPU first gets the
CPU first.
Example :
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given in milliseconds:

PROCESS    BURST TIME
P1         24
P2         3
P3         3
First-Come First-Served
(FCFS) Scheduling Algorithm
If the processes arrive in the order P1, P2, P3 and
are served in FCFS order, the system gets the
result shown in the following Gantt chart:
| P1 | P2 | P3 |
0    24   27   30
First-Come First-Served
(FCFS) Scheduling Algorithm
Therefore, the waiting time for each process is:
WT for P1 = 0 - 0 = 0
WT for P2 = 24 - 0 = 24
WT for P3 = 27 - 0 = 27
First-Come First-Served
(FCFS) Scheduling Algorithm
The turnaround time for each process would be:
TT for P1 = 24 - 0 = 24
TT for P2 = 27 - 0 = 27
TT for P3 = 30 - 0 = 30
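The figures above can be reproduced with a minimal Python sketch of FCFS (assuming, as in the example, that all processes arrive at time 0; the function name fcfs is ours, not from the slides):

# FCFS: processes run to completion in the order given; all arrive at time 0 here.
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) for bursts served in the given order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent waiting before its single, uninterrupted run
        clock += burst               # runs to completion, non-preemptively
        turnaround.append(clock)     # completion time minus arrival time (arrival = 0)
    return waiting, turnaround

wt, tt = fcfs([24, 3, 3])            # P1, P2, P3 from the example
print(wt)                            # [0, 24, 27]
print(tt)                            # [24, 27, 30]
print(sum(wt) / len(wt))             # average waiting time = 17.0 ms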
First-Come First-Served
(FCFS) Scheduling Algorithm
However, if the processes arrive in the order P3, P2, P1, the results will be:

| P3 | P2 | P1 |
0    3    6    30
First-Come First-Served
(FCFS) Scheduling Algorithm
Therefore, the waiting time for each process would be:
WT for P1 = 6 - 0 = 6
WT for P2 = 3 - 0 = 3
WT for P3 = 0 - 0 = 0
First-Come First-Served
(FCFS) Scheduling Algorithm
With this new job sequence, what will be the value of
the turnaround time?
Comparison of the two computations of the waiting time reveals that if the jobs with smaller bursts get processed ahead of the job with the large burst, the average wait of each job is lessened.
But because FCFS is a non-preemptive algorithm, the computation in the given example must be done using the sequence P1, P2, P3, and not P3, P2, P1.
If jobs with smaller bursts have to wait for a long job to finish its long burst, then a convoy effect exists.
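Reusing the fcfs sketch above, serving the short bursts first reproduces the second set of waiting times and shows the reduction in the average wait:

wt, tt = fcfs([3, 3, 24])      # order P3, P2, P1: shortest bursts first
print(wt)                      # [0, 3, 6]  -> P3 waits 0, P2 waits 3, P1 waits 6
print(sum(wt) / len(wt))       # average waiting time = 3.0 ms, down from 17.0 ms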
First-Come First-Served
(FCFS) Scheduling Algorithm
This is something that is quite annoying when the smaller jobs have higher priorities than the big-burst job that is occupying the processor.
Since the CPU is tied up by the job with a very long burst, jobs with smaller bursts have no choice but to wait.
A real-life parallel would be a waiting line for a photocopying machine.
It is not pleasant for a one-page memo to wait behind the very long job of copying a lengthy thesis and a moderately lengthy report.
First-Come First-Served
(FCFS) Scheduling Algorithm
This may seem unjust. Another clear analogy involves a tricycle that must wait for a huge convoy to clear a rigid checkpoint.
Thus, the FCFS algorithm is particularly troublesome for time-sharing systems, which require frequent interaction with different processes.
First-Come First-Served
(FCFS) Scheduling Algorithm
Example 1:
A set of jobs J1, J2, and J3 to be scheduled under FCFS, with J1 having a burst time of 20 ms.

First-Come First-Served
(FCFS) Scheduling Algorithm
Example 2:
A set of jobs J1 to J4, each with an arrival time and a burst time, to be scheduled under FCFS.

First-Come First-Served
(FCFS) Scheduling Algorithm
Example 3:
A set of jobs J1 to J4, each with an arrival time and a burst time, to be scheduled under FCFS.
Shortest-Job-First (SJF)
Scheduling Algorithm
This algorithm is concerned with the length of the CPU burst that each particular process maintains.
When the CPU is available, it is assigned to the
process that has the smallest CPU burst.
If two processes have the same length of CPU
burst, FCFS scheduling is used to break the tie by
considering which job arrived first.
Shortest-Job-First (SJF)
Scheduling Algorithm
Example :
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given in milliseconds:

PROCESS    BURST TIME
P1         6
P2         8
P3         7
P4         3
Shortest-Job-First (SJF)
Scheduling Algorithm
Using SJF, the system would schedule these processes
according to the following Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24

Therefore, the waiting time for each process is:
WT for P1 = 3 - 0 = 3
WT for P2 = 16 - 0 = 16
WT for P3 = 9 - 0 = 9
WT for P4 = 0 - 0 = 0
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
Shortest-Job-First (SJF)
Scheduling Algorithm
The turnaround time for each process is:
TT for P1 = 9 - 0 = 9
TT for P2 = 24 - 0 = 24
TT for P3 = 16 - 0 = 16
TT for P4 = 3 - 0 = 3
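A minimal Python sketch of non-preemptive SJF for processes that all arrive at time 0 (the function name sjf and the dictionary layout are ours, chosen for illustration):

# Non-preemptive SJF with all processes arriving at time 0:
# sort by burst length, then compute waiting/turnaround as in FCFS on the sorted order.
def sjf(bursts):
    """bursts: {name: burst}. Returns {name: (waiting, turnaround)}."""
    order = sorted(bursts.items(), key=lambda item: item[1])   # shortest burst first
    clock, results = 0, {}
    for name, burst in order:
        results[name] = (clock, clock + burst)   # (waiting time, turnaround time)
        clock += burst
    return results

print(sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3}))
# {'P4': (0, 3), 'P1': (3, 9), 'P3': (9, 16), 'P2': (16, 24)}  -> average waiting time = 7 ms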
Shortest-Job-First (SJF)
Scheduling Algorithm
If the system were using FCFS scheduling, then the average waiting time would be 10.25 ms.
Also, under FCFS, the average turnaround time would be 16.25 ms.
Although the SJF algorithm is optimal, it cannot be
implemented at the level of short-term scheduling.
There is no way to know the length of the next CPU
burst.
The only alternative is to predict the value of the
next CPU burst.
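One widely used prediction technique, not detailed in these slides, is exponential averaging of the measured lengths of previous CPU bursts: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). A small sketch (the parameter values and sample bursts are illustrative):

# Exponential averaging: the new prediction blends the latest measured burst t(n)
# with the previous prediction tau(n), weighted by alpha.
def predict_next_burst(measured_bursts, alpha=0.5, initial_guess=10.0):
    prediction = initial_guess
    for t in measured_bursts:
        prediction = alpha * t + (1 - alpha) * prediction
    return prediction

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))
# prints 12.0: recent bursts are weighted more heavily than older history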
Shortest-Job-First (SJF)
Scheduling Algorithm
Example (preemptive SJF, also known as shortest-remaining-time-first):
Consider the following set of processes, with arrival times and CPU burst lengths given in milliseconds:

PROCESS    ARRIVAL TIME    BURST TIME
P1         0               8
P2         1               4
P3         2               1
P4         3               5

A newly arrived process preempts the running process whenever its CPU burst is shorter than the time remaining for the running process. The resulting Gantt chart is:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    1    2    3    6    11   18

Therefore, the waiting time for each process is (the value in parentheses is the time the process had already executed before being preempted):
WT for P1 = 11 - 0 - (1) = 10
WT for P2 = 3 - 1 - (1) = 1
WT for P3 = 2 - 2 = 0
WT for P4 = 6 - 3 = 3
Average waiting time = (10 + 1 + 0 + 3) / 4 = 3.5 ms

The turnaround time for each process is:
TT for P1 = 18 - 0 = 18
TT for P2 = 6 - 1 = 5
TT for P3 = 3 - 2 = 1
TT for P4 = 11 - 3 = 8

Shortest-Job-First (SJF)
Scheduling Algorithm
If the same set of processes is scheduled with non-preemptive SJF, the Gantt chart becomes:

| P1 | P3 | P2 | P4 |
0    8    9    13   18

The waiting time for each process is:
WT for P1 = 0 - 0 = 0
WT for P2 = 9 - 1 = 8
WT for P3 = 8 - 2 = 6
WT for P4 = 13 - 3 = 10

The turnaround time for each process is:
TT for P1 = 8 - 0 = 8
TT for P2 = 13 - 1 = 12
TT for P3 = 9 - 2 = 7
TT for P4 = 18 - 3 = 15
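A minimal millisecond-by-millisecond Python sketch of the preemptive variant (shortest-remaining-time-first); the function name srtf and the data layout are ours, and the process data come from the example above:

# Preemptive SJF (shortest-remaining-time-first), simulated one millisecond at a time.
def srtf(processes):
    """processes: {name: (arrival, burst)}. Returns {name: (waiting, turnaround)}."""
    remaining = {name: burst for name, (arrival, burst) in processes.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # smallest remaining burst wins
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    return {n: (finish[n] - a - b, finish[n] - a) for n, (a, b) in processes.items()}

print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 1), "P4": (3, 5)}))
# {'P1': (10, 18), 'P2': (1, 5), 'P3': (0, 1), 'P4': (3, 8)} -> average waiting time = 3.5 ms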
Priority
Scheduling Algorithm
In priority scheduling, a priority is associated with each process, and the CPU is allocated to the process with the highest priority (here, a smaller number denotes a higher priority). Equal-priority processes are scheduled in FCFS order.
Example:
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

PROCESS    BURST TIME    PRIORITY
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2

The resulting Gantt chart is:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

The waiting time for each process is:
WT for P1 = 6 - 0 = 6
WT for P2 = 0 - 0 = 0
WT for P3 = 16 - 0 = 16
WT for P4 = 18 - 0 = 18
WT for P5 = 1 - 0 = 1

The turnaround time for each process is:
TT for P1 = 16 - 0 = 16
TT for P2 = 1 - 0 = 1
TT for P3 = 18 - 0 = 18
TT for P4 = 19 - 0 = 19
TT for P5 = 6 - 0 = 6
Priority
Scheduling Algorithm
Example (preemptive priority scheduling):
Consider the following set of processes, with arrival times and CPU burst lengths given in milliseconds:

PROCESS    ARRIVAL TIME    BURST TIME    PRIORITY
P1         1               5             5
P2         2               10            4
P3         3               18            3
P4         4               7             2
P5         5               3             1

Each newly arriving process has a higher priority than the process currently running, so every arrival preempts the CPU. The resulting Gantt chart is:

| P1 | P2 | P3 | P4 | P5 | P4 | P3 | P2 | P1 |
1    2    3    4    5    8    14   31   40   44

The waiting time for each process is:
WT for P1 = 40 - 1 - (1) = 38
WT for P2 = 31 - 2 - (1) = 28
WT for P3 = 14 - 3 - (1) = 10
WT for P4 = 8 - 4 - (1) = 3
WT for P5 = 5 - 5 = 0

The turnaround time for each process is:
TT for P1 = 44 - 1 = 43
TT for P2 = 40 - 2 = 38
TT for P3 = 31 - 3 = 28
TT for P4 = 14 - 4 = 10
TT for P5 = 8 - 5 = 3
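A minimal Python sketch of non-preemptive priority scheduling, using the data from the first example above (lower number means higher priority; the function name priority_schedule is ours):

# Non-preemptive priority scheduling, all processes arriving at time 0.
def priority_schedule(processes):
    """processes: {name: (burst, priority)}. Returns {name: (waiting, turnaround)}."""
    order = sorted(processes, key=lambda n: processes[n][1])   # highest priority (smallest number) first
    clock, results = 0, {}
    for name in order:
        burst = processes[name][0]
        results[name] = (clock, clock + burst)   # (waiting time, turnaround time)
        clock += burst
    return results

print(priority_schedule({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}))
# {'P2': (0, 1), 'P5': (1, 6), 'P1': (6, 16), 'P3': (16, 18), 'P4': (18, 19)}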
Round-Robin (RR)
Scheduling Algorithm
This algorithm is specifically for time-sharing
systems.
A small unit of time, called a time quantum or
time slice, is defined.
The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time
interval of up to 1 time quantum.
The RR algorithm is therefore preemptive.
Round-Robin (RR)
Scheduling Algorithm
Example:
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given
in milliseconds:
PROCESS    BURST TIME
P1         24
P2         3
P3         3
Round-Robin (RR)
Scheduling Algorithm
If the system uses a time quantum of 4 ms, then the resulting RR Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

The waiting time for each process is:
WT for P1 = 26 - 0 - (20) = 6
WT for P2 = 4 - 0 = 4
WT for P3 = 7 - 0 = 7
Average waiting time = (6 + 4 + 7) / 3 ≈ 5.67 ms
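A minimal Python sketch of round-robin for the example above (all processes arrive at time 0 and context-switch time is ignored; the function name round_robin is ours):

from collections import deque

# Round-robin with a fixed time quantum; the ready queue is circular.
def round_robin(bursts, quantum):
    """bursts: {name: burst}. Returns {name: (waiting, turnaround)}."""
    remaining = dict(bursts)
    queue = deque(bursts)                 # ready queue, initially in arrival order
    clock, finish = 0, {}
    while queue:
        name = queue.popleft()
        slice_ = min(quantum, remaining[name])
        clock += slice_                   # run for one quantum, or less if the burst ends sooner
        remaining[name] -= slice_
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)            # unfinished: back to the tail of the queue
    return {n: (finish[n] - bursts[n], finish[n]) for n in bursts}

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# {'P1': (6, 30), 'P2': (4, 7), 'P3': (7, 10)} -> average waiting time ≈ 5.67 ms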
Round-Robin (RR)
Scheduling Algorithm
The performance of the RR algorithm depends
heavily on the size of the time quantum.
If the time quantum is too large, the RR policy degenerates into an FCFS policy.
If the time quantum is too small, on the other hand, the effect of the context-switch time becomes a significant overhead.
As a general rule, 80 percent of the CPU bursts should be shorter than the time quantum.
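Reusing the round_robin sketch above, the effect of the quantum size is easy to see: a very large quantum reproduces the FCFS result, while a very small quantum lets the short jobs finish early at the cost of many more context switches (which this sketch does not charge for):

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=100))
# {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}  -- identical to FCFS
print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=1))
# {'P1': (6, 30), 'P2': (5, 8), 'P3': (6, 9)}      -- P2 and P3 finish by time 9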
[Figure: separate ready queues. Foreground (interactive) processes are served round-robin with a time quantum (16 ms), while a lower-priority queue of student processes is served FCFS.]