Module 2-1 ktu
Introduction
• The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
• In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful
work is accomplished.
• With multiprogramming, we try to use this time productively. Several processes are kept in
memory at one time. When one process has to wait, the operating system takes the CPU away from
that process and gives the CPU to another process.
• Aim of processor scheduling is to assign processes to be executed by the processor or processors
over time, in a way that meets system objectives, such as response time, throughput, and processor
efficiency.
• Scheduling affects the performance of the system because it determines which processes will wait
and which will progress.
• Scheduling helps to manage queues and minimize queueing delay and to optimize performance in a
queueing environment
• Preemptive: the OS can switch a process from the running state to the ready state. This
switching happens when a higher-priority process becomes ready: the CPU is taken from the
currently running process and given to the higher-priority one.
• Non-preemptive: once the CPU is allocated to a process, that process keeps it until it
terminates or switches to the waiting state.
Levels of Scheduling
Long-Term Scheduler
The long-term scheduler works with the batch queue and selects the next batch job to be executed. Thus it
plans the CPU scheduling for batch jobs. Resource-intensive, low-priority processes are called batch jobs;
these jobs are executed in a group or bunch. For example, a user requests the printing of a bunch of files.
We can also say that a long-term scheduler selects processes or jobs from a secondary storage device, e.g.,
a disk, and loads them into memory for execution. It is also known as a job scheduler. The long-term
scheduler is called “long-term” because the time for which its scheduling decision remains valid is long.
This scheduler performs best when it selects a good mix of I/O-bound and CPU-bound processes. An
I/O-bound process is one that spends most of its time doing I/O rather than computing, while a
CPU-bound process spends most of its time on computations rather than generating I/O requests.
Medium-Term Scheduler
The medium-term scheduler is required at the times when a suspended or swapped-out process is to be
brought into a pool of ready processes. A running process may be suspended because of an I/O request or
by a system call. Such a suspended process is then removed from the main memory and is stored in a
swapped queue in the secondary memory in order to create a space for some other process in the main
memory. This is done because there is a limit on the number of active processes that can reside in the main
memory. The medium-term scheduler is in charge of handling swapped-out processes. It has nothing to
do while a process remains suspended. However, once the suspending condition is removed, the
medium-term scheduler attempts to allocate the required amount of main memory, swap the process in,
and make it ready. Thus, the medium-term scheduler plans the CPU scheduling for processes that have
been waiting for the completion of another process or an I/O task.
Short-Term Scheduler
The short-term scheduler selects processes from the ready queue that are residing in the
main memory and allocates CPU to one of them. Thus, it plans the scheduling of the
processes that are in a ready state. It is also known as a CPU scheduler. Compared to the
long-term scheduler, the short-term scheduler runs much more often, i.e., its frequency of
execution is high. The short-term scheduler is invoked whenever
an event occurs. Such an event may lead to the interruption of the current process or it may
provide an opportunity to preempt the currently running process in favor of another.
Scheduling Criteria
In choosing which algorithm to use in a particular situation, we must consider the properties of the various
algorithms.
• CPU utilization - The objective of any CPU scheduling algorithm is to keep the CPU busy if
possible and to maximize its usage.
• Throughput - It is a measure of the work that is done by the CPU which is directly proportional
to the number of processes being executed and completed per unit of time.
• Turnaround time - From the point of view of a particular process, the important criterion is how
long it takes to execute that process. The interval from the time of submission of a process to the
time of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
• Waiting time - The scheduling algorithm does not affect the amount of time during which a
process executes or does I/O; it affects only the time a process spends waiting in a queue.
Waiting time is the sum of the periods spent waiting in the ready queue.
• Response time - In an interactive system, turnaround time may not be the best criterion. Often, a
process can produce some output fairly early and can continue computing new results while
previous results are being output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This measure, called response time, is
the time it takes to start responding, not the time it takes to output the response.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to
be allocated the CPU. There are many different CPU scheduling algorithms.
A. FIRST-COME, FIRST-SERVED SCHEDULING
• With this scheme, the process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its Process Control Block is linked onto the
tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
• The running process is then removed from the queue.
• The code for FCFS scheduling is simple to write and understand.
• FCFS is the simplest non-preemptive algorithm
• The average waiting time under the FCFS policy, however, is often quite long.
• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:
The average waiting time under an FCFS policy is generally not minimal and may vary
substantially if the processes' CPU burst times vary greatly.
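Since the process table for this example is not reproduced here, the sketch below uses the classic illustrative burst times (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at t = 0); these values are assumptions, not taken from this document. It shows how FCFS waiting times accumulate as earlier bursts finish.

```python
# FCFS: serve processes in arrival order; each process waits for the
# total burst time of everything ahead of it in the queue.

def fcfs_waiting_times(bursts):
    """Return per-process waiting times for processes arriving at t=0."""
    waits = []
    clock = 0
    for burst in bursts:
        waits.append(clock)   # waits until all earlier bursts finish
        clock += burst
    return waits

bursts = [24, 3, 3]               # assumed burst times: P1, P2, P3
waits = fcfs_waiting_times(bursts)
print(waits)                      # [0, 24, 27]
print(sum(waits) / len(waits))    # average waiting time: 17.0 ms
```

Note how one long process at the head of the queue inflates every later waiting time, which is exactly the effect the paragraph above describes.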
ADVANTAGES:
1. Easy to understand and program.
2. Straightforward and easy to implement.
3. Especially suitable for batch operating systems, where longer waiting times for each process are
often acceptable.
DISADVANTAGES:
1. As it is a non-preemptive CPU scheduling algorithm, a process, once started, runs until it finishes
its execution.
2. The average waiting time in FCFS is much higher than in the other algorithms.
3. Processes at the end of the queue have to wait longer to finish.
4. It is not suitable for time-sharing operating systems, where each process should get the same amount of
CPU time.
B. SHORTEST-JOB-FIRST SCHEDULING
• This algorithm associates with each process the length of the process's next CPU burst.
• When the CPU is available, it is assigned to the process that has the smallest next CPU
burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is used to break
the tie.
• A more appropriate term for this scheduling method would be the shortest-next-CPU-burst
algorithm, because scheduling depends on the length of the next CPU burst of a process,
rather than its total length.
As an example of SJF scheduling , consider the following set of processes, with the length of
the CPU burst given in milliseconds:
Using SJF scheduling, we would schedule these processes according to the following Gantt
chart.
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS
scheduling scheme, the average waiting time would be 10.25 milliseconds
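The figures above can be reproduced in a few lines. The burst times used below (P1 = 6, P2 = 8, P3 = 7, P4 = 3 ms) are inferred from the waiting times quoted in the text and should be checked against the original table; they yield the same 7 ms average.

```python
# Non-preemptive SJF for processes arriving at t=0: run in order of
# increasing burst length. Python's stable sort preserves FCFS order
# among equal bursts, matching the tie-break rule above.

def sjf_waiting_times(bursts):
    """bursts: {name: burst}. Returns {name: waiting time}."""
    order = sorted(bursts, key=lambda name: bursts[name])
    waits, clock = {}, 0
    for name in order:
        waits[name] = clock
        clock += bursts[name]
    return waits

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}   # inferred from the text
waits = sjf_waiting_times(bursts)
print(waits)                     # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / 4)   # 7.0 ms, matching the average above
```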
• Moving a short process before a long one decreases the waiting time of the short process.
Consequently, the average waiting time decreases.
• The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
• The SJF algorithm can be either preemptive or non-preemptive.
• The choice arises when a new process arrives at the ready queue while a previous
process is still executing. The next CPU burst of the newly arrived process may be
shorter than what is left of the currently executing process.
• A preemptive SJF algorithm will preempt the currently executing process , whereas a
non-preemptive SJF algorithm will allow the currently running process to finish its CPU
burst.
As an example , consider the following four processes, with the length of the CPU burst
given in milliseconds:
If the processes arrive at the ready queue at the times shown, with the burst times given, then
the resulting preemptive SJF schedule is as depicted in the following Gantt
chart:
At t=0 ms only one process, P1, is in the system (burst time 8 ms), so it starts executing.
After 1 ms, i.e., at t=1, a new process P2 (burst time 4 ms) arrives in the ready queue. Since its
burst time is less than the remaining burst time of P1 (7 ms), P1 is preempted and execution of
P2 is started. Again at t=2, a new process P3 arrives in the ready queue, but its burst time (9 ms)
is larger than the remaining burst time of the currently running process (P2, 3 ms). So P2 is not
preempted and continues its execution. Again at t=3, a new process P4 (burst time 5 ms)
arrives. For the same reason, P2 is not preempted and runs until its execution is completed.
Turnaround time (TAT) is the time it takes to complete a process or fulfill a request.
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
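A minimal millisecond-by-millisecond simulation of shortest-remaining-time-first, using the four processes of the example above (P1: arrival 0, burst 8; P2: 1, 4; P3: 2, 9; P4: 3, 5):

```python
# SRTF: at every tick, run the ready process with the least remaining time.

def srtf(procs):
    """procs: list of (name, arrival, burst). Returns {name: completion time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    done, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[cur] -= 1
        clock += 1
        if remaining[cur] == 0:
            done[cur] = clock
            del remaining[cur]
    return done

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
finish = srtf(procs)
waits = {n: finish[n] - arr - burst for n, arr, burst in procs}
print(waits)                     # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(waits.values()) / 4)   # average waiting time: 6.5 ms
```

The simulation reproduces the narrative above: P2 preempts P1 at t=1, P3 and P4 do not preempt P2, and the final order is P1, P2, P4, P1 (remainder), P3.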
Advantages
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages
1. May suffer from the problem of starvation: a long process may wait indefinitely if shorter
processes keep arriving.
2. It is not practically implementable, because the exact burst time of a process cannot be known in
advance; it can only be predicted.
C. Round-Robin Scheduling Algorithms:
• One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).
• In the round robin scheduling, processes are dispatched in a FIFO manner but are given
a limited amount of CPU time called a time-slice or a quantum.
• If a process does not complete before its CPU-time expires, the CPU is preempted and
given to the next process waiting in a queue.
• The preempted process is then placed at the back of the ready list.
• If the process blocks or finishes before its quantum has elapsed, the CPU is switched
at that point.
• Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective
in time sharing environments in which the system needs to guarantee reasonable
response times for interactive users.
• The main design issue with the round robin scheme is the length of the quantum.
Setting the quantum too short causes too many context switches and lowers CPU
efficiency. On the other hand, setting the quantum too long may cause poor response
time and approximates FCFS.
• In any event, the average waiting time under round robin scheduling is often quite long.
Consider the following set of processes that arrive at time 0 ms.
If we use a time quantum of 4 ms, calculate the average waiting time using RR scheduling.
According to RR scheduling, processes are executed in FCFS order. So, first P1
(burst time = 20 ms) is executed, but after 4 ms it is preempted and the new process P2 (burst time =
3 ms) starts its execution, which completes before the time quantum expires. Then the next
process P3 (burst time = 4 ms) executes, and finally the remaining part of P1 is executed
in slices of the 4 ms time quantum.
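The RR example above can be checked with a short simulation (P1 = 20 ms, P2 = 3 ms, P3 = 4 ms, quantum 4 ms, all arriving at t = 0):

```python
from collections import deque

# Round robin: take the process at the head of the FIFO ready queue,
# run it for at most one quantum, and requeue it if unfinished.

def round_robin(bursts, quantum):
    """bursts: {name: burst}, all arriving at t=0. Returns {name: completion}."""
    queue = deque(bursts)           # FIFO ready queue
    remaining = dict(bursts)
    done, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = clock
        else:
            queue.append(name)      # preempted: back of the ready list
    return done

bursts = {"P1": 20, "P2": 3, "P3": 4}
finish = round_robin(bursts, quantum=4)
waits = {n: finish[n] - bursts[n] for n in bursts}
print(waits)                     # {'P1': 7, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / 3)   # average waiting time: 6.0 ms
```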
Disadvantages
The disadvantages of Round Robin CPU scheduling are:
1. A small time quantum results in frequent context switches and decreased CPU throughput.
2. The Round Robin approach spends more time on context switching than non-preemptive algorithms.
3. The time quantum has a significant impact on its performance.
4. Processes cannot be assigned priorities, so important processes cannot be favored.
Q: Draw the Gantt chart and find the average waiting time and turnaround time.
Assume a time quantum of 2 ms.
Question:
Draw the Gantt chart and calculate the average waiting time and turnaround
time for these processes if the time quantum is 2 units.
Turnaround Time = Completion Time – Arrival Time
Waiting Time = Turnaround Time – Burst Time
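A quick application of the two formulas to one hypothetical process (arrival at 2 ms, burst of 5 ms, completion at 12 ms; the numbers are illustrative, not from the exercises above):

```python
# Applying the formulas: TAT = CT - AT, WT = TAT - BT.
arrival, burst, completion = 2, 5, 12   # hypothetical values in ms

turnaround = completion - arrival   # Turnaround Time = CT - AT
waiting = turnaround - burst        # Waiting Time = TAT - BT

print(turnaround)  # 10
print(waiting)     # 5
```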
D. PRIORITY BASED SCHEDULING:
➢ Assign each process a priority. Schedule highest priority first. All processes within
same priority are FCFS.
➢ Priority may be determined by user or by some default mechanism. The system may
determine the priority based on memory requirements, time limits, or other resource
usage.
➢ Starvation occurs if a low priority process never runs.
➢ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of
the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and
vice versa.
Priorities can be defined either internally or externally.
• Internally defined priorities use some measurable quantity or quantities to compute the
priority of a process. For example, time limits, memory requirements, the number of
open files, and the ratio of average I/O burst to average CPU burst have been used in
computing priorities.
• External priorities are set by criteria outside the operating system, such as the
importance of the process, its deadline or time sensitivity, or a priority assigned by a
system administrator.
Priority scheduling can be either preemptive or non-preemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A non-preemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be
considered blocked. A priority scheduling algorithm can leave some low-priority
processes waiting indefinitely.
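A sketch of non-preemptive priority scheduling, assuming the common convention that a lower number means a higher priority. The process data are hypothetical, chosen only for illustration:

```python
# Non-preemptive priority scheduling for processes arriving at t=0.
# Convention assumed here: lower number = higher priority.

def priority_schedule(procs):
    """procs: list of (name, priority, burst). Returns (order, waits)."""
    # stable sort keeps FCFS order among processes of equal priority
    ordered = sorted(procs, key=lambda p: p[1])
    order, waits, clock = [], {}, 0
    for name, _, burst in ordered:
        order.append(name)
        waits[name] = clock
        clock += burst
    return order, waits

procs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2),
         ("P4", 5, 1), ("P5", 2, 5)]
order, waits = priority_schedule(procs)
print(order)                     # ['P2', 'P5', 'P1', 'P3', 'P4']
print(sum(waits.values()) / 5)   # average waiting time: 8.2 ms
```

Note the starvation risk the text describes: P4, with the lowest priority, would never run if higher-priority work kept arriving.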
Multilevel Queue Scheduling
• Multilevel queue scheduling is a type of CPU scheduling in which the processes in the ready state are
divided into different groups, each group having its own scheduling needs.
• The ready queue is divided into different queues according to different properties of the process like
memory size, process priority, or process type.
• The different queues can be managed in different ways, i.e., each process queue can have its
own scheduling algorithm.
• A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, as
shown in the figure below.
Properties of Multilevel Queue Scheduling
• Multilevel Queue Scheduling distributes the processes into multiple queues based on
the properties of the processes.
• Each queue has its own priority level and scheduling algorithm to manage the processes
inside that queue.
• Queues can be arranged in a hierarchical structure.
• High-priority queues might use Round Robin scheduling, while low-priority queues might
use First Come, First Served scheduling.
• Processes cannot move between queues. This algorithm prioritizes different types of
processes and ensures fair resource allocation.
How are the Queues Scheduled?
The scheduling of the processes in the queue is necessary to decide upon the process
that should get the CPU time first. Two methods are employed to do this:
• Fixed priority preemptive scheduling
• Time slicing.
Fixed Priority Preemptive Scheduling Method:
In this method, every queue has absolute priority over all lower-priority queues.
Here, unless Queue1 is empty, no process in Queue2 can be executed, and so on.
Time Slicing:
Each queue gets a slice or portion of the CPU time for scheduling its own processes.
For example, suppose Queue1 gets 40% of the CPU time then the remaining 60% of
the CPU time may be assigned as 40% to Queue2 and 20% to Queue3.
Multilevel Feedback-Queue Scheduling
• The scheduler first executes all processes in queue 1. Only when queue 1 is empty will it execute
processes in queue 2.
• Processes in queue 3 will only be executed if queues 1 and 2 are empty.
• A process entering the ready queue is put in queue 1. A process in queue 1 is given a time quantum of 8
milliseconds.
• If it does not finish within this time, it is moved to the tail of queue 2. If queue 1 is empty, the process at the
head of queue 2 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into
queue 3. Processes in queue 3 are run on an FCFS basis but are run only when queues 1 and 2 are empty.
• This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.
Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst.
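The three-queue scheme described above can be sketched as follows. The burst times are hypothetical, and the sketch assumes all processes arrive at t = 0, so it omits the preemption of lower queues by newly arriving work that a full implementation would need:

```python
from collections import deque

# Multilevel feedback queues: queue 0 (quantum 8 ms), queue 1 (quantum
# 16 ms), queue 2 (FCFS). A process that uses up its quantum is demoted.

def mlfq(bursts):
    """bursts: {name: burst}. Returns {name: completion time}."""
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    queues = [deque(bursts), deque(), deque()]
    remaining = dict(bursts)
    done, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name = queues[level].popleft()
        q = quanta[level]
        run = remaining[name] if q is None else min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = clock
        else:
            queues[level + 1].append(name)  # used full quantum: demote
    return done

print(mlfq({"P1": 5, "P2": 30, "P3": 12}))
# {'P1': 5, 'P3': 41, 'P2': 47}
```

As the text predicts, the short burst (P1, 5 ms ≤ 8 ms) finishes immediately in the top queue, while the long process P2 drifts down to the FCFS queue.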
Disadvantages of MFQS
1. USER-LEVEL THREADS
In a pure ULT facility, all of the work of thread management is done by the application, and the
kernel is not aware of the existence of threads.
• Any application can be programmed to be multithreaded by using a threads library, which is
a package of routines for ULT management.
• The threads library contains code for creating and destroying threads, for passing messages
and data between threads, for scheduling thread execution, and for saving and restoring
thread contexts.
• A user thread is an entity used by programmers to handle multiple flows of control
within a program.
• The API for handling user threads is provided by the threads library.
• A user thread only exists within a process; a user thread in process A cannot reference a
user thread in process B.
• By default, an application begins with a single thread and begins running in that thread.
• This application and its thread are allocated to a single process managed by the kernel.
• At any time while it is in the Running state, the application may create a new thread to run
within the same process.
• Spawning is done by invoking the spawn utility in the threads library, via a procedure call.
• The threads library creates a data structure for the new thread and then passes control to
one of the threads within this process that is in the Ready state, using some scheduling
algorithm.
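As a toy illustration of the idea (not a real threads library), user-level "threads" can be modeled as Python generators with a tiny round-robin scheduler running entirely in user space; the kernel sees only one process, and all switching happens in the library:

```python
from collections import deque

# Each "thread" is a generator; yield is a voluntary context switch
# back to the user-space scheduler.

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                    # give control back to the scheduler

def run(threads):
    """Round-robin over ready user-level threads until all finish."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)              # "context switch" into the thread
            ready.append(t)      # still runnable: back of the ready list
        except StopIteration:
            pass                 # thread finished

run([worker("A", 2), worker("B", 2)])
# prints: A: step 0, B: step 0, A: step 1, B: step 1
```

Because the scheduler is ordinary user code, creating and switching these "threads" needs no kernel involvement, which is the point made in the advantages list below; the flip side is that a thread that never yields blocks everything.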
Advantages of User-level threads
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. Thread operations are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. User-level thread representation is very simple. The registers, PC, stack, and mini thread control
blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Multithreading is different from multitasking: multitasking allows multiple tasks to run at the same time,
whereas multithreading allows multiple threads of a single task to be processed by the CPU at the same
time. Multithreading is thread-based multitasking; it allows multiple threads of the same process to
execute simultaneously.
Example: the VLC media player, where one thread is used for opening the VLC media player, one for
playing a particular song, and another thread for adding new songs to the playlist.
Multithreading increases responsiveness. One program contains multiple threads, so if one thread is
taking too long to execute or gets blocked, the rest of the threads keep executing without any problem.
Multithreading is less costly. Creating a new process and allocating resources is a time-consuming task,
but within one process, creating multiple threads and switching between them is comparatively cheap.
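A small sketch of the responsiveness point: two threads of one process, where a slow thread does not block a quick one. The task functions and timings are hypothetical; only the standard threading module is used:

```python
import threading
import time

results = []

def slow_task():
    time.sleep(0.2)                  # simulate a long computation or blocking I/O
    results.append("slow done")

def quick_task():
    results.append("quick done")     # finishes immediately

t1 = threading.Thread(target=slow_task)
t2 = threading.Thread(target=quick_task)
t1.start()
t2.start()
t1.join()
t2.join()

# The quick thread was not held up by the slow one:
print(results)                       # ['quick done', 'slow done']
```

If the two tasks ran sequentially in a single thread, the quick task would have to wait the full 0.2 s behind the slow one.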
Multithreading models
• Many operating systems provide user-level threads and kernel-level threads in a
combined way. The best example of such a combined system is Solaris.
• In a combined system, multiple threads within the same application can run in
parallel on multiple processors and a blocking system call need not block the entire
process.
• The goal of multiple-processor scheduling, also known as multiprocessor scheduling, is to design a
scheduling function for a system with several processors.
• In multiprocessor scheduling, multiple CPUs split the workload (load sharing) so that various
processes run simultaneously.
• The multiple CPUs in the system are in close communication and share a common bus, memory,
and other peripheral devices, so the system is said to be tightly coupled.
• In some instances of multiple-processor scheduling, the functioning of the processors is
homogeneous, or identical. Any process in the queue can run on any available processor.
• Multiprocessor systems can be homogeneous (the same CPU) or heterogeneous (various types of
CPUs).
There are two different architectures utilized in multiprocessor systems:
Symmetric Multiprocessing
In an SMP system, each processor is comparable and has the same access to memory and I/O resources. The
CPUs are not connected in a master-slave fashion, and they all use the same memory and I/O subsystems.
This suggests that every memory location and I/O device are accessible to every processor without
restriction. An operating system manages the task distribution among the processors in an SMP system,
allowing every operation to be completed by any processor.
Asymmetric Multiprocessing
In the AMP (asymmetric multiprocessing) architecture, one processor, known as the master processor, has
complete access to all of the system's resources, particularly memory and I/O devices. The master
processor is in charge of allocating tasks to the other processors, known as slave processors. Each slave
processor is responsible for doing a certain set of tasks that the master processor has assigned to it. The
master processor receives tasks from the operating system and distributes them to the slave processors.
Types of Multiprocessor Scheduling Algorithms
• Round-Robin Scheduling
• Priority Scheduling
• Earliest deadline first (EDF) scheduling - Each process in this algorithm is given a deadline, and the
process with the earliest deadline is the one that will execute first.
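A minimal EDF sketch. The process data are hypothetical, and since all processes here are ready at t = 0, a single pass ordered by deadline suffices; a live system would instead keep a priority queue keyed on deadline and re-evaluate as processes arrive:

```python
# EDF with everything ready at t=0: execution order is simply
# ascending deadline order.

def edf_order(procs):
    """procs: list of (name, deadline, burst). Returns execution order."""
    return [name for _, name, _ in sorted((d, n, b) for n, d, b in procs)]

procs = [("P1", 30, 10), ("P2", 12, 5), ("P3", 20, 8)]
print(edf_order(procs))   # ['P2', 'P3', 'P1']
```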