4.2 Scheduling Algorithm
By
Mr. Parag R. Sali
Lecturer
Department of Computer Technology
SNJB’s Shri. Hiralal Hastimal (Jain Brothers)
Polytechnic, Chandwad
Program Name: Computer Engineering Group
Program Code : CO/CM/IF/CW
Semester : Fifth
Course Title : Operating System
Course Code : 22516
CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU. There are many different CPU
scheduling algorithms. In this section, we describe several of them.
1 First-Come, First-Served Scheduling
By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm. With this scheme, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is easily managed
with a FIFO queue. When a process enters the ready queue, its PCB is linked onto
the tail of the queue. When the CPU is free, it is allocated to the process at the
head of the queue. The running process is then removed from the queue. The
code for FCFS scheduling is simple to write and understand.
On the negative side, the average waiting time under the FCFS policy is often quite
long. Consider the following set of processes that arrive at time 0, with the length
of the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we
get the result shown in the following Gantt chart, which is a bar chart that
illustrates a particular schedule, including the start and finish times of each of
the participating processes:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2,
and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however,
the results will be as shown in the following Gantt chart:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is
substantial: the average waiting time under the FCFS policy depends heavily on the
order in which the processes arrive.
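To make the arithmetic above concrete, the following short Python sketch (not part of the original example; the function name is illustrative) computes FCFS waiting times for processes that all arrive at time 0. Each process simply waits for the sum of the bursts queued ahead of it.

def fcfs_waiting_times(bursts):
    """Return per-process waiting times for processes that all arrive at time 0."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting before this burst starts
        elapsed += burst        # CPU is busy for the length of this burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits are [0, 24, 27], average 17 ms.
print(fcfs_waiting_times([24, 3, 3]))
# Order P2, P3, P1: waits are [0, 3, 6], average 3 ms.
print(fcfs_waiting_times([3, 3, 24]))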
2 Shortest-Job-First Scheduling
A different approach is the shortest-job-first (SJF) scheduling algorithm, which
associates with each process the length of its next CPU burst and allocates the
CPU to the process with the smallest next burst; ties are broken in FCFS order.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum
average waiting time for a given set of processes. Moving a short process
before a long one decreases the waiting time of the short process more than it
increases the waiting time of the long process. Consequently, the average
waiting time decreases.
The real difficulty with the SJF algorithm is knowing the length of the next
CPU request.
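Assuming for illustration that the burst lengths are known in advance, the following sketch (not part of the original text) shows why SJF minimizes the average waiting time: serving the shortest bursts first is simply FCFS applied to the sorted burst list.

def sjf_average_wait(bursts):
    order = sorted(bursts)          # shortest (predicted) next CPU burst first
    waits, elapsed = [], 0
    for burst in order:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

# For bursts of 24, 3 and 3 ms, SJF gives an average wait of 3 ms,
# versus 17 ms when the 24 ms process happens to run first under FCFS.
print(sjf_average_wait([24, 3, 3]))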
3 Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order. An
SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of
the (predicted) next CPU burst. The larger the CPU burst, the lower the priority,
and vice versa.
Note that we discuss scheduling in terms of high priority and low priority.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095. However, there is no general agreement on whether 0 is the
highest or lowest priority. Some systems use low numbers to represent low
priority; others use low numbers for high priority. This difference can lead to
confusion. In this text, we assume that low numbers represent high priority.
As an example, consider the following set of processes, assumed to have arrived
at time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, we would schedule these processes according to the
following Gantt chart:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
The average waiting time is 8.2 milliseconds.
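The sketch below (illustrative only, not the text's own code) reproduces this example with non-preemptive priority scheduling, using the convention adopted in this text that a lower number means a higher priority. All five processes arrive at time 0.

def priority_schedule(processes):
    """processes: list of (name, burst, priority); returns (order, average wait)."""
    order = sorted(processes, key=lambda p: p[2])   # lowest number = highest priority first
    waits, elapsed = {}, 0
    for name, burst, _prio in order:
        waits[name] = elapsed
        elapsed += burst
    avg = sum(waits.values()) / len(waits)
    return [p[0] for p in order], avg

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_schedule(procs))   # (['P2', 'P5', 'P1', 'P3', 'P4'], 8.2)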
Priorities can be defined either internally or externally. Internally defined
priorities use some measurable quantity or quantities to compute the priority
of a process. For example, time limits, memory requirements, the number of
open files, and the ratio of average I/O burst to average CPU burst have been
used in computing priorities. External priorities are set by criteria outside the
operating system, such as the importance of the process, the type and amount
of funds being paid for computer use, the department sponsoring the work, and
other, often political, factors.
Priority scheduling can be either preemptive or non-preemptive. When a
process arrives at the ready queue, its priority is compared with the priority of
the currently running process. A preemptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived process is higher than the
priority of the currently running process. A non-preemptive priority scheduling
algorithm will simply put the new process at the head of the ready queue.
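The decision made on each arrival can be sketched as follows (the Process type and function name are assumptions for illustration, not part of the course material): a preemptive scheduler compares priorities and may take the CPU away, while a non-preemptive scheduler only queues the arrival.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    priority: int   # lower number = higher priority, as assumed in this text

def on_arrival(new, running, ready_queue, preemptive=True):
    """Return the process that should hold the CPU after `new` arrives."""
    if preemptive and running is not None and new.priority < running.priority:
        ready_queue.append(running)   # preempted process rejoins the ready queue
        return new                    # the higher-priority arrival runs at once
    ready_queue.append(new)           # otherwise the arrival simply joins the queue
    return running                    # non-preemptive: the running process keeps the CPU

ready = []
current = on_arrival(Process("P1", 3), Process("P2", 5), ready, preemptive=True)
print(current.name, [p.name for p in ready])   # P1 ['P2']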
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be
considered blocked, and a priority scheduling algorithm can leave some low-priority
processes waiting indefinitely. A common solution is aging, which gradually
increases the priority of processes that have waited in the system for a long time.
Another class of scheduling algorithms, multilevel queue scheduling, partitions the
ready queue into several separate queues. For example, processes might be divided
into the following queues, listed here from highest to lowest priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the
batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty. If an
interactive editing process entered the ready queue while a batch process was
running, the batch process would be preempted.
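A minimal sketch of this absolute-priority arrangement is given below (the queue names follow the list above; the structure is assumed for illustration). The dispatcher always serves the highest-priority non-empty queue, so a lower queue runs only when every queue above it is empty.

from collections import deque

queues = {  # listed from highest to lowest priority
    "system": deque(),
    "interactive": deque(),
    "interactive editing": deque(),
    "batch": deque(),
    "student": deque(),
}

def next_process(queues):
    for name, q in queues.items():      # dicts preserve insertion order in Python 3.7+
        if q:
            return name, q.popleft()    # first process in the highest non-empty queue
    return None, None                   # nothing is ready to run

queues["batch"].append("payroll job")
queues["interactive"].append("shell")
print(next_process(queues))             # ('interactive', 'shell') -- the batch job must wait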
Another possibility is to time-slice among the queues. Here, each queue gets a
certain portion of the CPU time, which it can then schedule among its various
processes. For instance, with separate foreground (interactive) and background
(batch) queues, the foreground queue can be given 80 percent of the CPU time for RR
scheduling among its processes, while the background queue receives 20 percent of
the CPU to give to its processes on an FCFS basis.
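The rough sketch below (illustrative only; the window length and 80/20 split are the figures from the example above, everything else is an assumption) shows one way such time-slicing could be organized: in each 100 ms window the foreground queue receives 80 ms scheduled round-robin, and the background queue receives 20 ms scheduled FCFS.

from collections import deque

def run_window(foreground, background, quantum=10, window=100, share=0.8):
    """Consume up to one window of CPU time; burst values are in milliseconds."""
    fg_budget, bg_budget = share * window, (1 - share) * window
    while fg_budget > 0 and foreground:          # round-robin within the foreground queue
        name, burst = foreground.popleft()
        slice_ = min(quantum, burst, fg_budget)
        fg_budget -= slice_
        if burst - slice_ > 0:
            foreground.append((name, burst - slice_))   # unfinished work goes to the tail
    while bg_budget > 0 and background:          # FCFS within the background queue
        name, burst = background.popleft()
        slice_ = min(burst, bg_budget)
        bg_budget -= slice_
        if burst - slice_ > 0:
            background.appendleft((name, burst - slice_))  # keep FCFS order for leftover work

fg = deque([("editor", 25), ("shell", 15)])
bg = deque([("batch", 50)])
run_window(fg, bg)
print(fg, bg)   # work remaining after one 100 ms window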