CPU Scheduling
In a single-processor system, only one process may run at a time.
Other processes must wait until the CPU is free and can be rescheduled.
Objective of multiprogramming:
To have some process running at all times, in order to maximize CPU
utilization.
CPU–I/O Burst Cycle
• Preemptive Scheduling
This is driven by the idea of prioritized computation.
Processes that are runnable may be temporarily suspended.
Disadvantages:
1. Incurs a cost associated with access to shared data (a preempted process may leave shared data in an inconsistent state).
2. Affects the design of the OS kernel.
• The choice of which algorithm to use in a particular situation depends upon the properties of
the various algorithms. Many criteria have been suggested for comparing CPU-scheduling
algorithms. The criteria include the following:
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
• It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
• Investigators have suggested that, for interactive systems, it is more important to
minimize the variance in the response time than to minimize the average response time.
• A system with reasonable and predictable response time may be considered more desirable
than a system that is faster on the average but is highly variable.
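As a small illustration of these criteria, the sketch below (Python, with hypothetical arrival, burst, and completion times for one possible schedule) computes turnaround time, waiting time, CPU utilization, and throughput, using the standard definitions turnaround = completion − arrival and waiting = turnaround − burst.

```python
# Hypothetical example: three processes and the completion times produced by
# one particular schedule; arrival and burst times are in the same time units.
processes = [
    # (name, arrival, burst, completion)
    ("P1", 0, 24, 24),
    ("P2", 0,  3, 27),
    ("P3", 0,  3, 30),
]

total_busy = sum(burst for _, _, burst, _ in processes)      # time the CPU spent doing work
makespan = max(completion for *_, completion in processes)   # total elapsed time

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival    # submission to completion
    waiting = turnaround - burst         # time spent waiting in the ready queue
    print(f"{name}: turnaround={turnaround}, waiting={waiting}")

print("CPU utilization:", total_busy / makespan)   # fraction of time the CPU was busy
print("throughput:", len(processes) / makespan)    # processes completed per time unit
```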
SCHEDULING ALGORITHMS
• CPU scheduling deals with the problem of deciding which of the
processes in the ready-queue is to be allocated the CPU.
Following are some scheduling algorithms:
1. FCFS scheduling (First Come First Served)
2. Round Robin scheduling
3. SJF scheduling (Shortest Job First)
4. SRT scheduling (Shortest Remaining Time)
5. Priority scheduling
6. Multilevel Queue scheduling and
7. Multilevel Feedback Queue scheduling
FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
The implementation is easily done using a FIFO queue.
Procedure:
1. When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2. When the CPU is free, the CPU is allocated to the process at the queue’s head.
3. The running process is then removed from the queue.
Advantage:
1. Code is simple to write & understand.
Disadvantages:
1. Convoy effect: All other processes wait for one big process to get off the CPU.
2. Non-preemptive (a process keeps the CPU until it releases it).
3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.
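To make the procedure and the convoy effect concrete, here is a minimal FCFS sketch in Python, assuming a hypothetical workload in which all processes arrive at time 0 and a long job happens to be at the head of the FIFO queue.

```python
from collections import deque

# Hypothetical workload: (name, CPU burst); all processes arrive at time 0,
# and the queue order is the arrival order.
ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

time = 0
waiting_times = []
while ready_queue:                      # the CPU takes the process at the head
    name, burst = ready_queue.popleft()
    waiting_times.append(time)          # it has been waiting since time 0
    time += burst                       # run it to completion (non-preemptive)
    print(f"{name} runs from {time - burst} to {time}")

print("average waiting time:", sum(waiting_times) / len(waiting_times))
# With the long burst (P1) first, P2 and P3 wait 24 and 27 time units: the
# convoy effect. Serving the two short jobs first would cut the average to 3.
```

Running this gives an average waiting time of 17 for the order P1, P2, P3, versus 3 if the short jobs ran first, which is exactly why FCFS's average waiting time is generally not minimal.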
Shortest-Job-First Scheduling
The CPU is assigned to the process that has the smallest next CPU burst.
If two processes have the same length CPU burst, FCFS scheduling is
used to break the tie.
For long-term scheduling in a batch system, we can use the process
time limit specified by the user as the ‘length’.
SJF can't be implemented at the level of short-term scheduling,
because there is no way to know the length of the next CPU burst.
Advantage:
1. The SJF algorithm is optimal, i.e. it gives the minimum average waiting time for a
given set of processes.
Disadvantage:
1. Determining the length of the next CPU burst is difficult.
Shortest-Job-First Scheduling
• The SJF algorithm may be either 1) non-preemptive or
2) preemptive.
1. Non-preemptive SJF: The current process is allowed to
finish its CPU burst.
2. Preemptive SJF: If the new process has a shorter next
CPU burst than what is left of the currently executing process,
the executing process is preempted. This is also known as SRTF
scheduling (Shortest-Remaining-Time-First).
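The sketch below illustrates preemptive SJF (SRTF) under the simplifying assumption that each process's next CPU burst length is known in advance; the workload (arrival times and burst lengths) is hypothetical. At every time unit the runnable process with the smallest remaining time is run, so a newly arrived shorter job preempts the current one.

```python
# Hypothetical workload: (name, arrival time, next CPU burst length).
jobs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]

remaining = {name: burst for name, _, burst in jobs}
arrival = {name: arr for name, arr, _ in jobs}
finish = {}
time = 0
while remaining:
    # Runnable = arrived and not finished; pick the smallest remaining time.
    runnable = [n for n in remaining if arrival[n] <= time]
    if not runnable:
        time += 1
        continue
    current = min(runnable, key=lambda n: remaining[n])
    remaining[current] -= 1             # run for one time unit
    time += 1
    if remaining[current] == 0:
        finish[current] = time
        del remaining[current]

for name, arr, burst in jobs:
    waiting = finish[name] - arr - burst
    print(f"{name}: waiting time {waiting}")
# Non-preemptive SJF would differ only in that the running process always
# finishes its burst before the next shortest job is chosen.
```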
Shortest-Job-First Scheduling
• The SJF algorithm is optimal, but cannot be implemented at the level of short-term CPU
scheduling, as there is no way to know the length of the next CPU burst.
• One approach to this problem is to try to approximate SJF scheduling:
• The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
• Let t_n be the length of the nth CPU burst, and let τ_{n+1} be the predicted value for the next
CPU burst.
• Then, for α, 0 ≤ α ≤ 1, define
• τ_{n+1} = α·t_n + (1 − α)·τ_n
• t_n contains our most recent information, while τ_n stores the past history.
• α controls the relative weight of recent and past history.
Shortest-Job-First Scheduling
• If α = 0, then τ_{n+1} = τ_n, and recent history has no effect.
• If α = 1, then τ_{n+1} = t_n, and only the most recent CPU burst matters.
• More commonly, α = 1/2, so recent history and past history are equally weighted.
• The initial τ_0 can be defined as a constant or as an overall system average.
• The figure shows an exponential average with α = 1/2 and τ_0 = 10.
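A short sketch of the prediction step, using the same parameters as the figure (α = 1/2, τ_0 = 10) and a hypothetical sequence of measured burst lengths; each iteration applies τ_{n+1} = α·t_n + (1 − α)·τ_n.

```python
alpha = 0.5
tau = 10                               # τ0: a constant or an overall system average
bursts = [6, 4, 6, 4, 13, 13, 13]      # hypothetical measured burst lengths t0, t1, ...

for n, t in enumerate(bursts):
    predicted = tau                            # the guess made before this burst ran
    tau = alpha * t + (1 - alpha) * tau        # τ(n+1) = α·t(n) + (1 − α)·τ(n)
    print(f"burst t{n}={t:2d}  predicted τ{n}={predicted:.2f}  next guess τ{n+1}={tau:.2f}")
```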
Shortest-Job-First Scheduling
• Expanding the definition: τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + · · · + (1 − α)^j·α·t_{n−j} + · · · + (1 − α)^{n+1}·τ_0.
• Since both α and (1 − α) are less than or equal to 1, each
successive term has less weight than its predecessor.
• The SJF algorithm can be either preemptive or non-preemptive.
• Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
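As a quick check on the expansion above, the sketch below (hypothetical bursts, α = 1/2, τ_0 = 10 again) computes the prediction both ways, by iterating the recursion and by summing the expanded series, and confirms that they agree.

```python
alpha, tau0 = 0.5, 10
bursts = [6, 4, 6, 4, 13]        # hypothetical burst lengths t0 .. t4

# Iterate the recursion τ(n+1) = α·t(n) + (1 − α)·τ(n).
tau = tau0
for t in bursts:
    tau = alpha * t + (1 - alpha) * tau

# Expand it: each older burst t(n−j) is weighted by (1 − α)^j · α,
# so successive terms carry less and less weight.
n = len(bursts) - 1
expanded = sum((1 - alpha) ** j * alpha * bursts[n - j] for j in range(n + 1))
expanded += (1 - alpha) ** (n + 1) * tau0

print(tau, expanded)             # both forms give the same prediction
```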
Round Robin Scheduling
Priority Scheduling
Disadvantage:
Indefinite blocking (starvation): low-priority processes may be left waiting
indefinitely for the CPU. Solution: Aging is a technique of gradually increasing the
priority of processes that wait in the system for a long time.
• Another option is to combine round-robin and priority scheduling in such a way that the
system executes the highest-priority process and runs processes with the same priority
using round-robin scheduling.
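A minimal sketch of priority scheduling with aging (hypothetical processes and an invented aging rule of one priority step per 4 time units of waiting); smaller numbers mean higher priority here. A fuller version would also round-robin among processes of equal priority, as described above.

```python
# Hypothetical ready queue: name -> [priority, remaining burst]
# (a smaller number means a higher priority; the aging rule is only illustrative).
ready = {"P1": [1, 4], "P2": [3, 3], "P3": [7, 3]}
waited = {name: 0 for name in ready}
time = 0

while ready:
    # Dispatch the highest-priority ready process (dict order breaks ties here;
    # a fuller version would round-robin among equal-priority processes).
    current = min(ready, key=lambda n: ready[n][0])
    ready[current][1] -= 1                        # run one time quantum
    time += 1
    # Aging: every 4 time units a process spends waiting, its priority number is
    # lowered by one (i.e. its priority is raised), so it cannot starve forever.
    for name in ready:
        if name != current:
            waited[name] += 1
            if waited[name] % 4 == 0:
                ready[name][0] = max(0, ready[name][0] - 1)
    if ready[current][1] == 0:
        print(f"t={time}: {current} finished with priority {ready[current][0]}")
        del ready[current]
```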
Multilevel Queue Scheduling
Useful for situations in which processes are easily classified into different
groups.
For example, a common division is made between
foreground (or interactive) processes and
background (or batch) processes.
The ready-queue is partitioned into several separate queues.
The processes are permanently assigned to one queue based on some
property like
memory size
process priority or
process type.
Each queue has its own scheduling algorithm.
• For example, separate queues might be used for foreground and background
processes.
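A minimal sketch of such a two-level arrangement (hypothetical processes and quantum): a foreground queue scheduled round-robin, and a background queue scheduled FCFS that runs only when the foreground queue is empty. Each process stays permanently in the queue it was assigned to.

```python
from collections import deque

QUANTUM = 2  # hypothetical time quantum for the foreground (round-robin) queue

# Processes are permanently assigned to a queue based on their type.
foreground = deque([("P1", 3), ("P2", 5)])   # interactive: round-robin
background = deque([("B1", 6), ("B2", 4)])   # batch: FCFS

time = 0
while foreground or background:
    if foreground:                            # foreground has absolute priority
        name, remaining = foreground.popleft()
        run = min(QUANTUM, remaining)
        time += run
        if remaining - run > 0:
            foreground.append((name, remaining - run))   # back to the tail (RR)
        else:
            print(f"t={time}: foreground {name} done")
    else:                                     # background runs only when foreground is idle
        name, remaining = background.popleft()
        time += remaining                     # FCFS: run to completion
        print(f"t={time}: background {name} done")
```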
Multilevel Feedback Queue Scheduling