
Unit 2: Process Scheduling.

A process is an instance of a computer program in execution.


 Scheduling is important in operating systems with multiprogramming, as multiple processes may be ready to run at the same time.
 One of the key responsibilities of an Operating System (OS) is to
decide which programs will execute on the CPU.
 Process Schedulers are fundamental components of operating
systems responsible for deciding the order in which processes
are executed by the CPU. In simpler terms, they manage how
the CPU allocates its time among multiple tasks or processes
that are competing for its attention.

What is Process Scheduling?


Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process based on a particular strategy.
Throughout its lifetime, a process moves between various scheduling queues, such as the ready queue, the waiting queue, or a device queue.
Categories Of Scheduling:
 Non-Preemptive:
In this case, the CPU cannot be taken away from a process until the process finishes or voluntarily moves to the waiting state; only then is the CPU switched to another process.
Algorithms based on non-preemptive scheduling are First Come First Serve (FCFS), Shortest Job First (SJF, non-preemptive version) and Priority Scheduling (non-preemptive version), etc.
 Preemptive:
In this case, the OS can switch a process from the running state to the ready state. This happens when the CPU is given to a higher-priority process and the currently running process is preempted in its favour.
Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First (SRTF) and Priority Scheduling (preemptive version).

Please refer to Preemptive vs Non-Preemptive Scheduling for details.

Job Queue (on disk):
 Contains all submitted jobs.
 Processes are stored here in a wait state until they are ready to
go to the execution stage.
 This is the first and most basic state that acts as a default
storage of new jobs added to a scheduling system.
 The Long-Term Scheduler picks a process from the Job Queue and moves it to the Ready Queue.

Ready Queue (in main memory):


 Contains the processes (mainly their PCBs) that are waiting for the CPU to execute them.
 They are controlled using a scheduling algorithm like FCFS, SJF,
or Priority Scheduling.
 The Short-Term Scheduler picks a process from the Ready Queue and moves the selected process to the running state.

Block or Device Queues (In Main Memory)


The processes that are blocked due to the unavailability of an I/O device are added to this queue. Every device has its own device queue.

 All processes are initially in the Job Queue.


 A new process is initially put in the Ready Queue by the scheduler. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of several events can occur:
1) The process could issue an I/O request and then be placed in a device queue.
2) The process could create a new subprocess and wait for its termination.
3) The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
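As a rough illustration of this flow, the sketch below models PCBs moving between a ready queue and a device queue. It is a toy user-space model with made-up types and PIDs, not code from any real operating system.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical, heavily simplified PCB; a real PCB holds far more state. */
struct pcb {
    int pid;
    struct pcb *next;
};

/* A queue of PCBs (ready queue, device queue, ...) as a singly linked list. */
struct queue {
    struct pcb *head, *tail;
};

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = {0}, disk = {0};

    for (int pid = 1; pid <= 3; pid++) {      /* new processes are admitted */
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);                   /* long-term scheduler: job -> ready */
    }

    struct pcb *running = dequeue(&ready);    /* short-term scheduler: dispatch */
    printf("P%d is running\n", running->pid);

    enqueue(&disk, running);                  /* process issues I/O: -> device queue */
    printf("P%d is waiting in the disk queue\n", disk.head->pid);
    return 0;
}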

Types of Process Schedulers:


1. Long Term or Job Scheduler
The Long-Term Scheduler loads a process from disk into main memory for execution and brings the new process to the 'Ready State'.
 It mainly moves processes from Job Queue to Ready Queue.
 It controls the Degree of Multi-programming, i.e., the number
of processes present in a ready state or in main memory at any
point in time.
 It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend most of their time on input and output operations, while CPU-bound processes spend most of their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two.
 In some systems, the long-term scheduler might not even exist.
For example, in time-sharing systems like Microsoft Windows,
there is usually no long-term scheduler. Instead, every new
process is directly added to memory for the short-term
scheduler to handle.
 It is the slowest among the three schedulers, because it runs the least frequently (that is why it is called long-term).

2. Short-Term or CPU Scheduler


The CPU Scheduler is responsible for selecting one process from the ready state for running (i.e., assigning the CPU to it).
 STS (Short Term Scheduler) must select a new process for the
CPU frequently to avoid starvation.
 The CPU scheduler uses different scheduling algorithms to
balance the allocation of CPU time.
 It picks a process from ready queue.
 Its main objective is to make the best use of CPU.
 It mainly calls the dispatcher.
 It is the fastest among the three schedulers, because it must run very frequently (that is why it is called short-term).
The dispatcher is responsible for loading the process selected by the
Short-term scheduler on the CPU (Ready to Running State). Context
switching is done by the dispatcher only. A dispatcher does the
following work:
 Saving the context (process control block) of the previously running process, if it has not finished.
 Switching from kernel (system) mode to user mode.
 Jumping to the proper location in the newly loaded program.
The time taken by the dispatcher is called dispatch latency or process context switch time.
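The following C sketch is only a conceptual model of these steps: the function names are hypothetical, and in a real kernel the context save/restore and the mode switch are architecture-specific (largely assembly), not portable C.

#include <stdio.h>

struct pcb { int pid; };   /* hypothetical, heavily simplified PCB */

static void save_context(struct pcb *p) { printf("saving context of P%d\n", p->pid); }
static void load_context(struct pcb *p) { printf("loading context of P%d\n", p->pid); }
static void switch_to_user_mode(void)   { printf("switching to user mode\n"); }

/* The dispatcher's steps, in order. */
static void dispatch(struct pcb *prev, struct pcb *next)
{
    if (prev)
        save_context(prev);   /* 1. save the PCB of the previously running process */
    load_context(next);       /*    load the process chosen by the CPU scheduler   */
    switch_to_user_mode();    /* 2. switch from kernel (system) mode to user mode  */
    /* 3. execution now resumes at next's saved program counter,
       i.e. the proper location in the newly loaded program. */
}

int main(void)
{
    struct pcb p1 = {1}, p2 = {2};
    dispatch(&p1, &p2);       /* time spent in dispatch() models dispatch latency */
    return 0;
}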

3. Medium-Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a process
from memory to disk (or swapping).
 It reduces the degree of multiprogramming (Number of
processes present in main memory).
 A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix (of CPU-bound and I/O-bound processes).
 When needed, the medium-term scheduler brings the process back into memory so it can pick up right where it left off.
 It is faster than the long-term scheduler and slower than the short-term scheduler.

Some Other Schedulers

 I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O operations, such as reading from and writing to disks or networks. They can use various algorithms to determine the order in which I/O operations are executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).

 Real-Time Schedulers: In real-time systems, real-time schedulers ensure that critical tasks are completed within a specified time frame. They can prioritize and schedule tasks using various algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).

Context Switching
In order for a process's execution to be continued from the same point at a later time, context switching is a mechanism to store and restore the state, or context, of the CPU in the Process Control Block (PCB). A context switcher makes it possible for multiple processes to share a single CPU using this method. A multitasking operating system must include context switching among its features.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is saved into its process control block. The state of the process that will run next (program counter, registers, and so on) is then loaded from its own PCB, and that process can start executing.

During a context switch, the following information is saved in (and later restored from) the process control block:
 Program counter
 Scheduling information
 The base and limit register values
 Currently used registers
 Process state
 I/O state information
 Accounting information
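As an informal illustration, this saved information can be pictured as a C structure. The field names and sizes below are hypothetical and greatly simplified; a real PCB (for example, Linux's task_struct) contains many more fields.

/* Hypothetical, simplified PCB layout, for illustration only. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* process identifier             */
    unsigned long   program_counter;   /* where execution resumes        */
    unsigned long   registers[16];     /* currently used registers       */
    unsigned long   base, limit;       /* base and limit register values */
    enum proc_state state;             /* process state                  */
    int             priority;          /* scheduling information         */
    int             open_files[16];    /* I/O state information          */
    unsigned long   cpu_time_used;     /* accounting information         */
};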

Conclusion
Process schedulers are essential parts of an operating system that manage how the CPU handles multiple tasks or processes. They ensure that processes are executed efficiently, making the best use of CPU resources and maintaining system responsiveness. By choosing the right process to run at the right time, schedulers help optimize overall system performance, improve the user experience, and ensure fair access to the CPU among competing processes.

What is CPU scheduling?


CPU scheduling decides the order and priority of the processes to run and allocates CPU time based on various parameters such as CPU utilization, throughput, turnaround time, waiting time, and response time. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Criteria of CPU Scheduling
CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential metrics used to evaluate the efficiency of scheduling algorithms.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it usually varies from about 40 percent (lightly loaded) to 90 percent (heavily loaded), depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of
processes being executed and completed per unit of time. This
is called throughput. The throughput may vary depending on
the length or duration of the processes.
3. Turnaround Time
For a particular process, an important criterion is how long it
takes to execute that process. The time elapsed from the time
of submission of a process to the time of completion is known
as the turnaround time. Turn-around time is the sum of times
spent waiting to get into memory, waiting in the ready queue,
executing in CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time.

4. Waiting Time
A scheduling algorithm does not affect the time required to
complete the process once it starts execution. It only affects the
waiting time of a process i.e. time spent by a process waiting in
the ready queue.
Waiting Time = Turnaround Time – Burst Time.
5. Response Time
In an interactive system, turn-around time is not the best
criterion. A process may produce some output fairly early and
continue computing new results while previous results are
being output to the user. Thus another criterion is the time
taken from submission of the process of the request until the
first response is produced. This measure is called response
time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time
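A small worked example (with made-up arrival and burst times, FCFS order, no I/O) shows how these formulas fit together:

P1 arrives at time 0 with a burst of 5; P2 arrives at time 1 with a burst of 3.
P1: starts at 0, completes at 5 -> Turnaround = 5 - 0 = 5, Waiting = 5 - 5 = 0, Response = 0 - 0 = 0
P2: starts at 5, completes at 8 -> Turnaround = 8 - 1 = 7, Waiting = 7 - 3 = 4, Response = 5 - 1 = 4
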
Importance of Selecting the Right CPU Scheduling
Algorithm for Specific Situations
It is important to choose the correct CPU scheduling algorithm because different algorithms prioritize different CPU scheduling criteria. Different algorithms have different strengths and weaknesses, and choosing the wrong CPU scheduling algorithm in a given situation can result in suboptimal performance of the system.
Example: Here are some examples of CPU scheduling algorithms that work well in different situations.
The Round Robin scheduling algorithm works well in a time-sharing system where tasks have to be completed in a short period of time. The SJF scheduling algorithm works best in a batch processing system where shorter jobs have to be completed first in order to increase throughput. The Priority scheduling algorithm works better in a real-time system where certain tasks have to be prioritized so that they can be completed in a timely manner.
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of CPU
scheduling algorithm. Some of them are listed below.
 The number of processes.
 The processing time required.
 The urgency of tasks.
 The system requirements.
Selecting the correct algorithm will ensure that the system will
use system resources efficiently, increase productivity, and
improve user satisfaction.
CPU Scheduling Algorithms
There are several CPU Scheduling Algorithms, that are listed
below.
 First Come First Served (FCFS)
 Shortest Job First (SJF)
 Longest Job First (LJF)
 Priority Scheduling
 Round Robin (RR)
 Shortest Remaining Time First (SRTF)
 Longest Remaining Time First (LRTF)

1. FCFS (First Come, First Served): processes are executed strictly in the order in which they arrive in the ready queue; it is non-preemptive.
2. SJF (Shortest Job First): the process with the shortest burst (running) time is scheduled next; its preemptive variant is SRTF.
3. RR (Round Robin): each ready process gets the CPU for a fixed time quantum; when the quantum expires, the process is preempted and moved to the back of the ready queue.
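To show how the scheduling criteria are computed under FCFS, here is a minimal C sketch; the process IDs, arrival times, and burst times are made up for illustration, and I/O and context-switch overhead are ignored.

#include <stdio.h>

/* Minimal FCFS simulation: processes are served strictly in arrival order. */
struct proc { int pid, arrival, burst; };

int main(void)
{
    struct proc p[] = { {1, 0, 5}, {2, 1, 3}, {3, 2, 8} };   /* sorted by arrival time */
    int n = sizeof p / sizeof p[0];
    int time = 0;

    printf("PID  Completion  Turnaround  Waiting\n");
    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)               /* CPU idles until the next process arrives */
            time = p[i].arrival;
        time += p[i].burst;                    /* run the process to completion            */
        int turnaround = time - p[i].arrival;  /* Completion Time - Arrival Time           */
        int waiting = turnaround - p[i].burst; /* Turnaround Time - Burst Time             */
        printf("P%-4d %10d %11d %8d\n", p[i].pid, time, turnaround, waiting);
    }
    return 0;
}
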
Multiple-Processor Scheduling in
Operating System
In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible.
What is Multiple-Processor Scheduling?
In systems containing more than one processor, multiple-processor scheduling addresses the allocation of tasks to multiple CPUs. This allows higher throughput, since several tasks can be processed concurrently on separate processors. It also involves deciding which CPU handles a particular task and balancing the load between the available processors.

Approaches to Multiple-Processor
Scheduling
One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the Master Server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This scenario is called Asymmetric Multiprocessing. A second approach uses Symmetric Multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.
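The sketch below is a toy user-space model of the symmetric approach with a single common ready queue: POSIX threads stand in for CPUs, the task list is made up, and a mutex protects the shared queue. It is only an illustration of the idea, not how a real kernel implements SMP scheduling.

#include <pthread.h>
#include <stdio.h>

#define NCPU   2
#define NTASKS 6

/* One common ready queue, modelled as an index into a fixed task array. */
static int tasks[NTASKS] = { 11, 12, 13, 14, 15, 16 };   /* made-up task ids */
static int next_task = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each "CPU" (thread) is self-scheduling: it repeatedly takes the next ready task. */
static void *cpu(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_task >= NTASKS) {            /* common ready queue is empty */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int task = tasks[next_task++];        /* select a process to execute */
        pthread_mutex_unlock(&lock);
        printf("CPU %ld runs task %d\n", id, task);
    }
}

int main(void)
{
    pthread_t cpus[NCPU];
    for (long i = 0; i < NCPU; i++)
        pthread_create(&cpus[i], NULL, cpu, (void *)i);
    for (int i = 0; i < NCPU; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}

(Compile with the -pthread flag, for example: cc -pthread smp_sketch.c)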
