Operating System Chapter 4 - Ma

This document discusses scheduling and dispatching in operating systems. It covers preemptive and non-preemptive scheduling, different scheduling policies and algorithms, processes versus threads, and considerations for real-time systems. Preemptive scheduling allows higher priority tasks to interrupt lower priority tasks, while non-preemptive scheduling completes the currently running task before starting a new one. Common scheduling policies include first-come, first-served and shortest job first. Processes are program instances while threads are segments of a process that can run concurrently.

M.A College, Assosa Campus

Department of Computer Science

Compiled by: Berhanu A. (MSc.)


Chapter Four: Scheduling and dispatch
Chapter contents

• Introduction

• Preemptive and non-preemptive scheduling

• Schedulers and policies

• Processes and threads

• Deadlines and real-time issues


Introduction
• Scheduling is the activity by which the operating system decides which of the processes in the ready queue should be given the CPU, and in what order.

• Dispatching is the act of actually handing the CPU to the process selected by the scheduler: the dispatcher performs the context switch and transfers control to the chosen process.

• Together, the scheduler and dispatcher try to match each task's requirements (the CPU time it needs, its priority, and any deadlines) with the next available slot of processor time.
Preemptive and non-preemptive scheduling
What is Preemptive Scheduling?

• Preemptive scheduling is a scheduling method in which tasks are mostly assigned priorities.

• If a new process arriving at the ready queue has a higher priority than the currently running process, the CPU is preempted, which means the processing of the current process is stopped and the incoming higher-priority process gets the CPU for its execution.

• At that time, the lower-priority task is held for some time and resumes when the higher-priority task finishes its execution.
What is Non-Preemptive Scheduling?

• In a non-preemptive priority scheduling algorithm, if a new process arrives with a higher priority than the currently running process, the incoming process is put at the head of the ready queue, which means it will be processed only after the execution of the current process.

• The process that keeps the CPU busy will release the CPU either by switching context (e.g., blocking for I/O) or by terminating.

• It is the only method that can be used on hardware platforms that lack special hardware support (such as a timer) for preemption.

• Non-preemptive scheduling occurs when a process voluntarily enters the waiting state or terminates.


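The head-of-the-ready-queue behavior described above can be sketched with a priority queue. This is an illustrative sketch, not the slides' algorithm: the process names and priority values are hypothetical, and Python's `heapq` stands in for the kernel's ready-queue structure.

```python
import heapq

def enqueue(ready, priority, order, name):
    # Lower priority number = higher priority; `order` keeps FIFO among equals.
    heapq.heappush(ready, (priority, order, name))

def dispatch(ready):
    # Called only when the running process releases the CPU (non-preemptive):
    # the highest-priority waiter at the head of the queue runs next.
    return heapq.heappop(ready)[2]

ready = []
running = "P1"              # P1 keeps the CPU until it terminates or blocks
enqueue(ready, 3, 0, "P2")  # arrives while P1 is running
enqueue(ready, 1, 1, "P3")  # higher priority than P2, but P1 is NOT preempted
running = dispatch(ready)   # only after P1 releases the CPU does a switch occur
print(running)              # P3 runs ahead of the earlier-arrived P2
```

Note that the higher-priority arrival (P3) changes only the queue order, never the running process; that is exactly the non-preemptive rule stated above.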
Difference Between Preemptive and Non-Preemptive Scheduling in OS

Preemptive Scheduling:
• CPU utilization is more efficient.
• A processor can be preempted to execute a different process in the middle of any current process's execution.
• Preemptive scheduling is flexible.
• Examples: Shortest Remaining Time First, Round Robin, etc.
• Waiting and response times of preemptive scheduling are lower.

Non-preemptive Scheduling:
• CPU utilization is less efficient.
• Once the processor starts an execution, it must finish it before executing another; it can't be paused in the middle.
• Non-preemptive scheduling is rigid.
• Examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc.
• Waiting and response times of the non-preemptive scheduling method are higher.
Advantage and disadvantage of preemptive scheduling

Advantages of Preemptive Scheduling:
• The choice of running task is reconsidered after each interruption.
• Each event causes an interruption of the running task.
• It improves the average response time.
• It is beneficial when used in a multiprogramming environment.
• All running processes make use of the CPU equally.

Disadvantages of Preemptive Scheduling:
• It needs additional computational resources for scheduling.
• The scheduler takes a longer time to suspend the running task, switch the context, and dispatch the new incoming task.
• A process with low priority needs to wait a longer time if some high-priority processes arrive continuously.


Advantage and disadvantage of non-preemptive scheduling

Advantages of non-preemptive scheduling:
• Offers low scheduling overhead.
• Tends to offer high throughput.
• It is a conceptually very simple method.
• Fewer computational resources are needed for scheduling.

Disadvantages of non-preemptive scheduling:
• It can lead to starvation, especially for real-time tasks.
• Bugs can cause a machine to freeze up.
• It can make real-time and priority scheduling difficult.
• Poor response time for processes.


Key differences of preemptive and non-preemptive scheduling
• In preemptive scheduling, the CPU is allocated to a process for a limited time period; in non-preemptive scheduling, the CPU is allocated to a process until it terminates or blocks.
• In preemptive scheduling, tasks are switched based on priority, while in non-preemptive scheduling no such switching takes place.
• The preemptive algorithm has the overhead of switching processes between the ready state and the running state, while non-preemptive scheduling has no such switching overhead.
• Preemptive scheduling is flexible, while non-preemptive scheduling is rigid.
Schedulers and policies
• To make our day more logical and efficient, we work on a schedule. An operating system operates in a similar manner: by scheduling tasks it improves efficiency, reduces delays and wait times (response times of the system), and manages CPU resources better.

• This activity is called process scheduling. A process is like a job in a computer system that can be executed. Some processes are input/output (I/O) bound, like a graphics display process; others are CPU-bound and can be transparent to users.
Scheduling Criteria
• There are several criteria for choosing the best scheduling policy for a system:
o CPU utilization – keep the CPU as busy as possible.
o Throughput – number of processes that complete their execution per time unit (increase the number of processes completed in a given time frame).
o Turnaround time – amount of time to execute a particular process.
o Waiting time – amount of time a process has been waiting in the ready queue (reduce the waiting time of a process).
o Response time – minimize the time a user has to wait for a process to respond.
Scheduling Policies

• To fulfill those criteria, a scheduler has to use various policies or strategies:
1. Fairness
• Just as it wouldn't be fair for someone to bring a loaded shopping cart to the 10-items-or-less checkout, the operating system shouldn't give an unfair advantage to a process in a way that interferes with the criteria we listed (CPU utilization, wait time, throughput).
• It's important to balance long-running jobs and ensure that the lighter jobs can be run quickly.
2. FCFS – First Come First Served

• Also called FIFO (first-in-first-out), first-come-first-served (FCFS) processes jobs in the order in which they are received. This is not a very fair policy, because a long-running job could be executing while all other processes have to wait for it to finish.
• To the end-user, this could look like a system freeze or lock-up. Consider the example of a long-running process A that holds other processes B, C and D hostage.
Scheduling Algorithm Optimization Criteria

• Maximum CPU utilization

• Maximum throughput

• Minimum turnaround time

• Minimum waiting time

• Minimum response time


Processes and threads
• A process is an instance of program execution.

• The operating system maintains management information about a process in a process control block (PCB).

• Modern operating systems allow a process to be divided into multiple threads of execution, which share all process management information except for information directly related to execution.

• Threads in a process can execute different parts of the program code at the same time.
Threads
• Modern operating systems allow a process to be divided into multiple
threads of execution.
• The threads within a process share all process management information
except for information directly related to execution.
• Threads in a process can execute different parts of the program code at
the same time
• They have independent current instructions; that is, they have (or
appear to have) independent program counters.
• A thread is the smallest unit of processing that can be performed in an operating system. In modern operating systems, threads exist within a process (i.e., a single process may contain multiple threads).
• Processes start out with a single main thread. The main thread can create new threads using a thread-creation ("fork") system call.
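The idea of a main thread creating ("forking") worker threads that share the process's memory can be sketched in Python's `threading` module; the worker function and its workload here are illustrative.

```python
import threading

results = {}  # shared process memory: all threads see the same dict

def worker(name, n):
    # Each thread executes its own part of the program concurrently,
    # writing its result into memory shared with the other threads.
    results[name] = sum(range(n))

# The main thread creates ("forks") two new threads:
t1 = threading.Thread(target=worker, args=("t1", 5))
t2 = threading.Thread(target=worker, args=("t2", 10))
t1.start(); t2.start()
t1.join(); t2.join()   # the main thread waits for both to finish
print(results)         # {'t1': 10, 't2': 45}
```

Because both workers write into the same `results` dict, this also illustrates the point made below that threads share the memory of the process they belong to.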
Difference between Process and Thread

Process:
• A process is any program in execution.
• A process takes more time to terminate.
• It takes more time to create.
• It takes more time for context switching.
• A process is less efficient in terms of communication.
• A process consumes more resources.

Thread:
• A thread is a segment of a process.
• A thread takes less time to terminate.
• It takes less time to create.
• It takes less time for context switching.
• A thread is more efficient in terms of communication.
• A thread consumes fewer resources.
Processes vs. Threads: Advantages and Disadvantages

Processes:
• Each process has its own memory space.
• Inter-process communication is slow, as processes have different memory addresses.
• Context switching between processes is more expensive.
• Processes don't share memory with other processes.

Threads:
• Threads use the memory of the process they belong to.
• Inter-thread communication can be faster than inter-process communication.
• Context switching between threads of the same process is less expensive.
• Threads share memory with other threads of the same process.
SCHEDULING

• When a computer is multiprogrammed, it frequently has multiple processes or threads competing for the CPU at the same time.

• The part of the operating system that chooses which of them runs next is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

• Many of the same issues that apply to process scheduling also apply to thread scheduling, although some are different. When the kernel manages threads, scheduling is usually done per thread, with little or no regard to which process the thread belongs.
Continued
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. On a uniprocessor, only one process runs at a time.
• A process migrates between various scheduling queues throughout its lifetime.
• The job of selecting processes from among these queues is carried out by a scheduler.
• The aim of processor scheduling is to assign processes to be executed by the processor.
• Scheduling affects the performance of the system, because it determines which processes will wait and which will progress.
Types of Scheduling
• Long-term Scheduling: Long-term scheduling is performed when a new process is created.
• If the number of ready processes in the ready queue becomes very high, the overhead on the operating system (i.e., the processor) for maintaining long lists, context switching and dispatching increases.
• Medium-term Scheduling: Medium-term scheduling is part of the swapping function. When part of main memory is freed, the operating system looks at the list of suspended ready processes and decides which one is to be swapped in (depending on priority, memory and other resources required, etc.).
• Short-term Scheduling: The short-term scheduler is also called the dispatcher. It is invoked whenever an event occurs that may lead to the interruption of the currently running process.
Scheduling Algorithm
• A scheduling algorithm is the algorithm which dictates how much CPU time is allocated to processes and threads.
• The goal of any scheduling algorithm is to fulfill a number of criteria: no task must be starved of resources, and all tasks must get their chance at CPU time.
• Scheduling algorithms or scheduling policies are mainly used for short-term scheduling.
• The main objective of short-term scheduling is to allocate processor time in such a way as to optimize one or more aspects of system behavior.
• These scheduling algorithms assume that only a single processor is present.
Continued
• Scheduling algorithms decide which of the processes in the ready queue is to be allocated the CPU, based on the type of scheduling policy and on whether that policy is preemptive or non-preemptive.

• For scheduling, arrival time and service time also play a role.

• The list of scheduling algorithms is as follows:

o First-Come, First-Served (FCFS) scheduling algorithm
o Shortest Job First (SJF) scheduling algorithm
o Round-Robin (RR) scheduling algorithm
First Come First Serve Scheduling
• In the "first come first serve" scheduling algorithm, as the name suggests, the process which arrives first gets executed first; in other words, the process which requests the CPU first gets the CPU allocated first.
• First Come First Serve is just like a FIFO (First In First Out) queue data structure, where the data element added to the queue first is the one that leaves the queue first.
• It's easy to understand and implement programmatically, using a queue data structure, where a new process enters through the tail of the queue and the scheduler selects the process at the head of the queue.
• Average waiting time is the average of the waiting times of the processes in the queue, waiting for the scheduler to pick them for execution.
Continued
• The lower the average waiting time, the better the scheduling algorithm.
• Consider processes P1, P2, P3 and P4 in the table below, arriving for execution in that order, each with arrival time 0 and a given burst time; let's find the average waiting time using the FCFS scheduling algorithm.
Continued
• The average waiting time will be 18.75 ms.
• For the given processes, P1 is provided with the CPU resources first:
o Hence, the waiting time for P1 will be 0.
o P1 requires 21 ms to complete, hence the waiting time for P2 will be 21 ms.
o Similarly, the waiting time for P3 will be the execution time of P1 + the execution time of P2, which is (21 + 3) ms = 24 ms.
o For process P4 it will be the sum of the execution times of P1, P2 and P3.
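The process table this example refers to is not reproduced in the text. From the waiting times stated above (0, 21, 24 ms) and the 18.75 ms average, the burst times are consistent with 21, 3, 6 and 2 ms, which is the assumption used in this small sketch of the FCFS calculation:

```python
def fcfs_waiting_times(bursts):
    """FCFS with all arrivals at t = 0: each process waits for the
    total burst time of every process ahead of it in the queue."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

bursts = [21, 3, 6, 2]  # P1..P4 in arrival order (assumed burst times, ms)
waits = fcfs_waiting_times(bursts)
avg = sum(waits) / len(waits)
print(waits, avg)  # [0, 21, 24, 30] 18.75
```

The 30 ms wait for P4 is the sum of the execution times of P1, P2 and P3 (21 + 3 + 6), matching the derivation above.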
Definition of arrival time, burst time and their difference
• Arrival time is the point of time, in milliseconds, at which a process arrives at the ready queue to begin execution.
• Arrival time = completion time − turnaround time
• Burst time refers to the time, in milliseconds, required by a process for its execution.
• Burst time = completion time − waiting time
• Arrival time marks the entry point of the process into the ready queue, whereas burst time marks the exit point of the process from the queue.
• Arrival time is known before the execution of the process, whereas burst time is determined by the execution of the process.
• Arrival time is related to the ready state, whereas burst time is related to the running state.
Shortcomings or problems with the FCFS scheduling algorithm:

• It is a non-preemptive algorithm, which means process priority doesn't matter.

• The average waiting time is not optimal.

• Utilization of resources in parallel is not possible.

• Completion Time: time taken for the execution to complete, starting from arrival time.
• Turnaround Time: time taken to complete after arrival. In simple words, it is the difference between the completion time and the arrival time.
• Waiting Time: total time the process has to wait before its execution begins. It is the difference between the turnaround time and the burst time of the process.
Shortest Job First (SJF) Scheduling
• Shortest Job First scheduling runs the process with the shortest burst time or duration first.

• This is the best approach to minimize waiting time.

• To implement it successfully, the burst time/duration of the processes should be known to the processor in advance.

• This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).

Non-Preemptive Shortest Job First
• Consider the processes below, available in the ready queue for execution, with arrival time 0 for all and given burst times.
o Waiting time = turnaround time − burst time
o Average waiting time = sum of the waiting times of all processes divided by the number of processes
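The process table for this example is not reproduced in the text, so this sketch of non-preemptive SJF reuses the hypothetical burst times 21, 3, 6 and 2 ms with all arrivals at t = 0:

```python
def sjf_average_wait(bursts):
    """Non-preemptive SJF, all processes arriving at t = 0:
    run in order of increasing burst time, then apply
    waiting time = turnaround time - burst time."""
    waits, elapsed = [], 0
    for b in sorted(bursts):       # shortest job first
        waits.append(elapsed)      # turnaround - burst == start time here
        elapsed += b
    return sum(waits) / len(waits)

print(sjf_average_wait([21, 3, 6, 2]))  # (0 + 2 + 5 + 11) / 4 = 4.5
```

Compare this 4.5 ms average with the 18.75 ms FCFS produced for the same bursts: running short jobs first sharply reduces the average waiting time, as the slide claims.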
Round Robin Scheduling
• The Round Robin (RR) scheduling algorithm is mainly designed for time-sharing systems.
• This algorithm is similar to FCFS scheduling, but in Round Robin (RR) scheduling, preemption is added, which enables the system to switch between processes.
o A fixed time, called a quantum, is allotted to each process for execution.
o Once a process has executed for the given time period, it is preempted and another process executes for the given time period.
o Context switching is used to save the states of preempted processes.
o This algorithm is simple and easy to implement and, most importantly, it is starvation-free, as all processes get a fair share of the CPU.
o It is important to note that the length of the time quantum is generally 10 to 100 milliseconds.
Characteristics of the Round Robin Algorithm
• The Round Robin scheduling algorithm falls under the category of preemptive algorithms.
• It is one of the oldest, easiest, and fairest algorithms.
• It is a real-time algorithm because it responds to events within a specific time limit.
• The time slice should be the minimum that is assigned to a specific task that needs to be processed, though it may vary across operating systems.
• It is a hybrid model and is clock-driven in nature.
• It is a widely used scheduling method in traditional operating systems.
Important terms
• Completion Time: the time at which a process completes its execution.

• Turnaround Time: the difference between completion time and arrival time. The formula is: Turnaround Time = Completion Time − Arrival Time

• Waiting Time (W.T): the difference between turnaround time and burst time, calculated as: Waiting Time = Turnaround Time − Burst Time
Continued
(The process table and Gantt chart for this example were presented as an image and are not reproduced here.)
• Average waiting time is calculated by adding the waiting times of all processes and then dividing by the number of processes.
• Average waiting time = sum of the waiting times of all processes / number of processes
• Average waiting time = (11 + 5 + 15 + 13) / 4 = 44 / 4 = 11 ms

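Since the example's process table is lost, here is a sketch of the RR bookkeeping with its own hypothetical burst times and quantum; the function simulates the ready queue and derives waiting time = completion time − burst time (all arrivals at t = 0):

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Simulate Round Robin for processes all arriving at t = 0.
    Returns the per-process waiting time = completion time - burst time."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))   # ready queue of process indices
    t, completion = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # one time slice (or less, to finish)
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back of the ready queue
        else:
            completion[i] = t             # finished: record completion time
    return [completion[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin_waits([10, 5, 8], quantum=2)  # hypothetical bursts (ms)
print(waits, sum(waits) / len(waits))  # [13, 10, 13] 12.0
```

With a quantum larger than every burst (say 100 ms), the same function returns the FCFS waiting times [0, 10, 15], matching the observation later in the chapter that a very large quantum reduces Round Robin to FCFS.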
Advantages of the Round Robin scheduling algorithm

• While performing this scheduling algorithm, a particular time quantum is allocated to the different jobs.
• In terms of average response time, this algorithm gives the best performance.
• With the help of this algorithm, all jobs get a fair allocation of CPU.
• This algorithm deals with all processes without any priority.
• The newly created process is added to the end of the ready queue.
• A round-robin scheduler generally employs time-sharing, which means giving each job a time slot or quantum.
• Each process gets a chance to be rescheduled after a particular quantum of time.
Disadvantages of RR

• This algorithm spends more time on context switches.

• For a small quantum, it is a time-consuming form of scheduling.

• This algorithm offers larger waiting and response times.

• It has low throughput.

• If the time quantum is small, the Gantt chart becomes very large.
Some Points to Remember
1. Decreasing the value of the time quantum:
• The number of context switches increases.
• The response time decreases.
• The chances of starvation decrease.
For a smaller value of the time quantum, the algorithm becomes better in terms of response time.
2. Increasing the value of the time quantum:
• The number of context switches decreases.
• The response time increases.
• The chances of starvation increase.
For a higher value of the time quantum, the algorithm becomes better in terms of the number of context switches.
Continued
3. As the value of the time quantum increases, Round Robin scheduling tends to become FCFS scheduling.

4. In the limit, as the value of the time quantum tends to infinity, Round Robin scheduling becomes FCFS scheduling.

5. Thus the performance of Round Robin scheduling mainly depends on the value of the time quantum.

6. The value of the time quantum should therefore be neither too big nor too small.
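The quantum trade-off described in these points can be checked with a small simulation; the burst times here are hypothetical, and a "context switch" is counted each time a different process is dispatched:

```python
from collections import deque

def rr_context_switches(bursts, quantum):
    """Count CPU switches between processes under Round Robin
    (all arrivals at t = 0; a switch is counted whenever a different
    process than the previous one is dispatched)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    switches, last = 0, None
    while queue:
        i = queue.popleft()
        if last is not None and i != last:
            switches += 1
        last = i
        remaining[i] -= min(quantum, remaining[i])
        if remaining[i] > 0:
            queue.append(i)  # preempted: re-queued for another slice
    return switches

print(rr_context_switches([10, 5, 8], 2))    # small quantum: 11 switches
print(rr_context_switches([10, 5, 8], 100))  # huge quantum: 2 switches (FCFS)
```

With a quantum larger than every burst, each process runs to completion in arrival order, so the switch count drops to the FCFS minimum, illustrating points 3 and 4 above.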
