Chapter 2: Process

This chapter explains the concepts of processes and threads in operating systems, detailing their definitions, differences, and states. It covers the Process Control Block (PCB), context switching, and the implementation of user-level and kernel-level threads, along with their advantages and disadvantages. It also discusses multithreading models and process scheduling, including the roles of the different types of schedulers.

Process:

• A process is an instance of a program in execution.


• A process always resides in main memory, also termed primary memory or random access memory.
• A process is therefore an active entity: it disappears if the machine is rebooted.
• Several processes may be associated with the same program.
• Each process provides the resources needed to execute a program.
• A process has a virtual address space, executable code, open handles to system objects, a
security context, a unique process identifier, environment variables, a priority class,
minimum and maximum working set sizes, and at least one thread of execution.
• Each process is started with a single thread, often called the primary thread, but can
create additional threads from any of its threads.
• On a multiprocessor system, multiple processes can be executed in parallel.
• On a uniprocessor system, a process scheduling algorithm is applied and the processor executes one process at a time, giving an illusion of concurrency.
• For example, in Windows, editing two text files simultaneously in Notepad means running two different instances of the same program.
• To the operating system, these two instances are separate processes of the same application.

Difference between process and program:

Process | Program
A process is an instance of a program in execution. | A program is a group of instructions written to perform a specific task.
The resource requirement is quite high in the case of a process. | A program only needs memory for storage.
Processes have considerable overhead. | A program has no significant overhead cost.
A process has a shorter, limited lifespan: it terminates after completing its task. | A program has a longer lifespan: it stays stored in memory until it is manually deleted.
New processes require duplication of the parent process. | No such duplication is needed.
A process holds resources such as CPU, memory address space, disk, and I/O. | A program is stored on disk in a file and requires no other resources.
A process is an active entity. | A program is a passive entity.

Process state or process transition diagram:

• A process goes through a series of process states for performing its task.

• As a process executes, it changes state.


• The various states a process may pass through are:
new: The process is being created.
ready: The process is ready to be executed and is waiting to be assigned to the processor.
running: The process whose instructions are currently being executed.
waiting: The process is waiting for some event to occur, such as the completion of an I/O operation.
terminated: The process has finished execution.

Process Control Block (PCB):

• A Process Control Block (PCB) is a data structure used by the operating system to store all the information about a process.
• It is also known as Process Descriptor.
• When a process is created, the operating system creates a corresponding PCB.
• Information in a PCB is updated during the transition of process states.
• When a process terminates, its PCB is released.
• Each process has a single PCB.

The PCB of a process contains the following information:

• Process Number: Each process is allocated a unique number for the purpose of
identification.
• Process State: It specifies the current state of a process.
• Program Counter: It indicates the address of next instruction to be executed.
• Registers: These hold the data or result of calculations. The content of these registers is
saved so that a process can be resumed correctly later on.
• Memory Limits: It stores the amount of memory allocated to the process.
• List of Open Files: It stores the list of open files and their access rights.
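The PCB fields listed above can be sketched as a small data structure. This is a minimal illustration, not the layout of any real kernel; the field names are chosen here for readability.

```python
# A minimal sketch of a Process Control Block as a Python dataclass.
# Field names are illustrative, not taken from any real kernel.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # Process Number: unique identifier
    state: str = "new"             # Process State: new/ready/running/waiting/terminated
    program_counter: int = 0       # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved register contents
    memory_limits: tuple = (0, 0)  # base and limit of the allocated memory
    open_files: list = field(default_factory=list)  # open files and access rights

pcb = PCB(pid=42)
pcb.state = "ready"                # updated as the process changes state
print(pcb.pid, pcb.state)          # 42 ready
```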

Roles of PCB:

• The PCB is most important and central data structure in an OS. Each PCB contains all the
information about a process that is needed by the OS.
• These blocks are read and/or modified by virtually every module in the OS, including those involved with scheduling, resource allocation, interrupt processing, and performance monitoring and analysis; in that sense, the set of PCBs defines the state of the OS.
• When the CPU switches from one process to another, the operating system uses the PCB to save the state of the old process, and uses this information when control returns to that process. When a process terminates, its PCB is released from memory.
• A context switch is the computing process of storing and restoring the state (context) of a
CPU such that multiple processes can share a single CPU resource. The context switch is
an essential feature of a multitasking operating system.
• Context switches are usually computationally intensive and much of the design of
operating systems is to optimize the use of context switches. There are three scenarios
where a context switch needs to occur: multitasking, interrupt handling, user and kernel
mode switching.
• In a context switch, the state of the first process must be saved so that, when the
scheduler gets back to the execution of the first process, it can restore this state and
continue.
• The state of the process includes all the registers that the process may be using, especially the program counter, plus any other operating-system-specific data that may be necessary. Often, all the data necessary for this state is stored in one data structure, called a process control block.

Context switching:
Switching the CPU from one process to another process by saving the current state of running
process in PCB and loading the saved state from PCB for new process is called context
switching. Or the process of switching from one process to another is called context
switching.
Switching steps are:
• Save the context of the running (first) process in its PCB.
• Move the PCB of the first process into the relevant queue, i.e. the ready queue, an I/O queue, etc.
• Select a new (second) process for execution.
• Update the PCB of the second process, changing its state to running.
• Update the memory management data structures if required.
• When the second process finishes or is switched out, restore the context of the first process on the CPU.
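The steps above can be sketched in a few lines of Python. This is a toy model under simplifying assumptions: the CPU is a dict of registers, processes are dicts with a saved context, and the ready queue is a simple FIFO.

```python
# Hypothetical sketch of a context switch: save the running process's context
# into its PCB, move it to the ready queue, then load the next process's
# saved context onto the (simulated) CPU.
from collections import deque

cpu = {"pc": 100, "regs": [1, 2, 3]}

def context_switch(running, ready_queue, cpu):
    # 1. Save the context of the running process in its PCB.
    running["context"] = {"pc": cpu["pc"], "regs": list(cpu["regs"])}
    running["state"] = "ready"
    # 2. Move the PCB of the preempted process into the ready queue.
    ready_queue.append(running)
    # 3. Select a new process for execution.
    nxt = ready_queue.popleft()
    # 4. Update its PCB state and restore its saved context onto the CPU.
    nxt["state"] = "running"
    cpu["pc"] = nxt["context"]["pc"]
    cpu["regs"] = list(nxt["context"]["regs"])
    return nxt

p1 = {"pid": 1, "state": "running", "context": {}}
p2 = {"pid": 2, "state": "ready", "context": {"pc": 200, "regs": [9, 9, 9]}}
ready = deque([p2])
now_running = context_switch(p1, ready, cpu)
print(now_running["pid"], cpu["pc"])  # 2 200
```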

Fig: Showing CPU switches from process to process

Thread:

• A thread is a lightweight process or it is the entity within a process that can be scheduled
for execution.
• A thread is a subset of the process.
• All threads of a process share its virtual address space and system resources and process
attributes.
• In addition, each thread maintains exception handlers, a scheduling priority, thread local
storage, a unique thread identifier, and a set of structures the system will use to save the
thread context until it is scheduled.
• The thread context includes the thread's set of machine registers, the kernel stack, a
thread environment block, and a user stack in the address space of the thread's process.
• On a uni-processor system, a thread scheduling algorithm is applied and the processor is
scheduled to run each thread one at a time.

Why Threads?

• A process with multiple threads makes a great server (e.g. a print server).
• Increased responsiveness: with multiple threads in a process, if one thread blocks, the others can still continue executing.
• Sharing of common data reduces the need for inter-process communication.
• Proper utilization of multiprocessors by increasing concurrency.
• Threads are cheap to create and use very few resources.
• Context switching is fast (only the PC, stack, SP, and registers have to be saved/reloaded).
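The responsiveness and data-sharing points above can be demonstrated with a tiny example. This is an illustrative sketch: the blocking I/O is simulated with `time.sleep`, and the two worker functions are made-up names.

```python
# While one thread blocks (simulated I/O), another thread in the same
# process keeps running, and both share process data without any IPC.
import threading
import time

shared = {"count": 0}

def blocked_worker():
    time.sleep(0.2)           # simulates a blocking I/O operation

def busy_worker():
    for _ in range(1000):
        shared["count"] += 1  # accesses shared process data directly

t1 = threading.Thread(target=blocked_worker)
t2 = threading.Thread(target=busy_worker)
t1.start(); t2.start()
t2.join()                     # busy_worker finishes while t1 is still blocked
print(shared["count"])        # 1000
t1.join()
```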

Difference between process and thread:

The major differences between processes and threads are:

Process | Thread
Each process has its own address space. | Threads share the address space of the process that created them.
Each process has its own copy of the data, code, and file segments. | Threads share the same copy of the data, code, and file segments.
Processes must use inter-process communication to communicate with sibling processes. | Threads can communicate directly with the other threads of their process.
More resources are required. | Fewer resources are required.
New processes require duplication of the parent process. | New threads are easily created.
Less suitable for parallelism. | Suitable for parallelism.
A change to the parent process does not affect child processes (blocking one process does not affect the others). | Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process.
Process creation involves a system call (for example, fork()). | Thread creation goes through a library API (for example, pthread_create()) rather than a dedicated process-creation system call.
Context switching is slow. | Context switching is faster.

Threads Implementation
Threads are implemented in following two ways:
• User-Level Threads: threads managed in user space.
• Kernel-Level Threads: threads managed by the operating system kernel, the core of the OS.

User Level Thread:


User threads are supported at the user level. In this case, the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The kernel knows nothing about user-level threads and manages the process as if it were single-threaded.

Advantages:
• User-level threads are easier to implement than kernel-level threads.
• User-level threads can be used on operating systems that do not support threads at the kernel level.
• They are fast and efficient.
• Context-switch time is shorter than for kernel-level threads.
• They do not require modification of the operating system.
Disadvantages:
• User-level threads lack coordination between the thread and the kernel.
• If a thread causes a page fault, the entire process is blocked.

Kernel-Level Threads:
In this method, the kernel knows about and manages the threads. In addition to the process table, the kernel maintains a thread table that keeps track of all threads in the system. Scheduling by the kernel is done on a per-thread basis. The kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages:
• The kernel is fully aware of all threads.
• The scheduler may decide to give more CPU time to a process with a large number of threads.
• Kernel-level threads are good for applications that frequently block.
Disadvantages:
• The kernel must manage and schedule every thread, which adds overhead.
• Kernel threads are more difficult to implement than user threads.
• Kernel-level threads are slower than user-level threads.

Hybrid thread:
Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.

Difference between User level thread and Kernel level thread:


User-level thread | Kernel-level thread
User threads are implemented by user-space libraries. | Kernel threads are implemented by the OS.
The OS doesn't recognize user-level threads. | Kernel threads are recognized by the OS.
Implementation of user threads is easy. | Implementation of kernel threads is complicated.
Context-switch time is shorter. | Context-switch time is longer.
Context switching requires no hardware support. | Hardware support is needed.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.
User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
Examples: Java threads, POSIX threads. | Examples: Windows and Solaris threads.

Multithreading:
The use of multiple threads in a single program, all running at the same time and performing
different tasks is known as multithreading. Multithreading is a type of execution model that
allows multiple threads to exist within the context of a process such that they execute
independently but share their process resources. It enables the processing of multiple threads at
one time, rather than multiple processes.
Multithreading Model:
A relationship must exist between user threads and kernel threads. There are three models relating user threads to kernel threads:
➢ Many-to-many: maps many user threads onto an equal or smaller number of kernel threads.
➢ Many-to-one: maps many user threads onto a single kernel thread.
➢ One-to-one: maps each user thread to a corresponding kernel thread.
Many-to-Many Model:
• The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
• Users have no restrictions on the number of threads created.
• Blocking kernel system calls do not block the entire process.
• Processes can be split across multiple processors.
• Individual processes may be allocated variable numbers of kernel threads, depending on
the number of CPUs present and other factors.
• IRIX, HP-UX, Tru64 UNIX, and Solaris (prior to version 9) use the many-to-many model.

Many-To-One Model:
• In the many-to-one model, many user-level threads are all mapped onto a single kernel
thread.
• Thread management is handled by the thread library in user space, which is very
efficient.
• However, if a blocking system call is made, then the entire process blocks, even if the
other user threads would otherwise be able to continue.
• Because a single kernel thread can operate only on a single CPU, the many-to-one model
does not allow individual processes to be split across multiple CPUs.
• Green threads (an early Solaris thread library) and GNU Portable Threads implemented the many-to-one model in the past, but few systems continue to use it today.

One-To-One Model:
• The one-to-one model creates a separate kernel thread to handle each user thread.
• This model provides more concurrency than the many-to-one model: another thread can run when a thread makes a blocking system call.
• It allows multiple threads to execute in parallel on multiprocessors.
• The one-to-one model overcomes the problems described above involving blocking system calls and the splitting of processes across multiple CPUs.
• However, managing the one-to-one model is more expensive: creating a kernel thread for every user thread adds overhead and can slow the system down.
• Linux, OS/2, Windows NT, and Windows 2000 use the one-to-one model.

Fig One-to-one model

Process Scheduling:
Scheduling: Scheduling refers to the set of policies and mechanisms built into the operating system that govern the order in which work is carried out by the computer system.
The method of selecting a process to be allocated to the CPU is called process scheduling.
Technical terms used in scheduling:
Ready queue: The processes waiting to be assigned to a processor are put in a queue, called
ready queue.
Burst time: The time for which a process holds the CPU is known as burst time.
Arrival time: Arrival Time is the time at which a process arrives at the ready queue.
Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time.
Waiting time: Waiting time is the amount of time a process has been waiting in the ready queue.
Response Time: Time between submission of requests and first response to the request.
Throughput: number of processes completed per unit time.
Dispatch latency: The time it takes for the dispatcher to stop one process and start another running.
Context switch: A context switch is the computing process of storing and restoring the state
(context) of a CPU so that execution can be resumed from the same point at a later time. This
enables multiple processes to share a single CPU.
Optimal scheduling algorithm will have minimum waiting time, minimum turnaround time and
minimum number of context switches.
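The metrics above are related by two identities used throughout the worked examples later in this chapter: turnaround time = completion time − arrival time, and waiting time = turnaround time − burst time. A tiny helper, with made-up numbers:

```python
# turnaround = completion - arrival; waiting = turnaround - burst
def metrics(arrival, burst, completion):
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A process arriving at t=2 with a 5 ms burst that completes at t=12:
print(metrics(arrival=2, burst=5, completion=12))  # (10, 5)
```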
Scheduler:
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.
Types of Scheduler
➢ Long term scheduler
➢ Mid - term scheduler
➢ Short term scheduler
Long term scheduler:
Long term scheduler is also known as job scheduler.
The duty of the long-term scheduler is to bring processes from the job pool (secondary memory) into the ready queue maintained in primary memory for execution.

So, the long-term scheduler decides which processes are created and put into the ready state. The purpose of the long-term scheduler is to choose a good mix of I/O-bound processes (those that spend most of their time on I/O operations) and CPU-bound processes (those that spend most of their time on the CPU) among the jobs in the pool. If the job scheduler chooses too many I/O-bound processes, then the jobs may all sit in the blocked state and the CPU will remain idle most of the time.

Medium term scheduler:


The task of moving a process from main memory to secondary memory is called swapping out; moving a swapped-out process back from secondary memory to main memory is known as swapping in. The medium-term scheduler takes care of swapped-out processes and is responsible for suspending and resuming processes. If a running process needs I/O time to complete, its state must change from running to waiting; the medium-term scheduler can then remove it from memory to make room for other processes, which reduces the degree of multiprogramming. Swapping is necessary to maintain a good mix of processes in the ready queue.

Short term scheduler:


The short-term scheduler is also known as the CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to decide which job is dispatched next.
The job of the short-term scheduler can be critical: if it selects a job with a very long CPU burst, then all the jobs behind it will have to wait in the ready queue for a very long time.
This problem, called starvation, may arise if the short-term scheduler makes poor choices when selecting jobs.
Process Queues:
The Operating system manages various types of queues for each of the process states. The PCB
related to the process is also stored in the queue of the same state. There are the following queues
maintained by the Operating system.
Job Queue:
Initially, all processes are stored in the job queue, which is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
Ready Queue:

The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
Waiting Queue:
When a process needs an I/O operation to complete its execution, the OS changes its state from running to waiting. The context (PCB) associated with the process is kept on the waiting queue and is used by the processor again when the process finishes its I/O.

Scheduling Objectives:
• Fairness: Treat all processes the same; no process should suffer indefinite postponement.
• Maximum throughput: The throughput of a system is the number of processes (or threads) that actually complete in a period of time; it should be maximized.
• Resource utilization: The scheduling mechanism should keep the resources of the system busy.
• Priorities: The scheduling mechanism should favor higher-priority processes.
• Overhead: The portion of system resources spent on scheduling itself should be kept small relative to the overall performance gain it provides.
• Predictability: A given job should run in about the same amount of time, and at about the same cost, irrespective of the load on the system.

Scheduling criteria: The criteria used for comparing these algorithms include the following:
• CPU Utilization: Keep the CPU as busy as possible. It ranges from 0 to 100%.
In practice, it ranges from 40 to 90%.
• Throughput: Throughput is the rate at which processes are completed per unit of time.
• Turnaround time: How long it takes to execute a process, calculated as the time gap between the submission of a process and its completion.
• Waiting time: Waiting time is the sum of the time periods spent in waiting in the ready
queue.
• Response time: Response time is the time it takes to start responding from submission
time. It is calculated as the amount of time it takes from when a request was submitted
until the first response is produced.
• Fairness: Each process should have a fair share of CPU.

Process scheduling Types:

In an operating system (OS), the process scheduler performs the important activity of moving processes between the ready and waiting queues and allocating them to the CPU. The OS assigns a priority to each process and maintains these queues. The scheduler selects a process from a queue and loads it into memory for execution.
There are two types of process scheduling:
1. Preemptive scheduling
2. Non-preemptive scheduling.
1. Preemptive Scheduling
Scheduling in which a running process can be interrupted when a higher-priority process enters the queue, with the CPU reallocated to that process, is called preemptive scheduling. In this case, the current process moves from the running state back to the ready queue, and the high-priority process takes the CPU cycle.
2. Non-preemptive Scheduling
The scheduling in which a running process cannot be interrupted by any other process is called
non-preemptive scheduling. Any other process which enters the queue has to wait until the
current process finishes its CPU cycle.

Difference between preemptive and non preemptive scheduling:

Preemptive Scheduling | Non-preemptive Scheduling
The processor can be preempted to execute a different process in the middle of the current process's execution. | Once the processor starts executing a process, it must finish it before executing another; execution cannot be paused in the middle.
CPU utilization is more efficient. | CPU utilization is less efficient compared to preemptive scheduling.
Waiting and response times are lower. | Waiting and response times are higher.
Scheduling is priority-driven: the highest-priority process is the one currently using the CPU. | Once a process enters the running state, it is not removed from the scheduler until it finishes its job.
Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid.
Examples: Shortest Remaining Time First, Round Robin, etc. | Examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc.
A running process can be preempted and another process scheduled in its place. | A running process cannot be preempted; it runs to completion once scheduled.
The CPU is allocated to a process for a specific time period. | The CPU is allocated to a process until it terminates or switches to the waiting state.
Has the overhead of switching processes between the ready and running states and vice versa. | Has no such overhead of switching a process between the running and ready states.

Advantages of Preemptive Scheduling:


• Preemptive scheduling is a more robust approach: one process cannot monopolize the CPU.
• The choice of running task is reconsidered after each interruption.
• Each event causes an interruption of the running task.
• The OS ensures that CPU usage is shared fairly among all running processes.
• This scheduling method also improves the average response time.
• Preemptive scheduling is beneficial in a multiprogramming environment.

Disadvantages of Preemptive Scheduling:


• It needs more computational resources for scheduling.
• The scheduler takes extra time to suspend the running task, switch the context, and dispatch the new incoming task.
• A low-priority process may wait a very long time if high-priority processes keep arriving.
Advantages of Non-preemptive Scheduling
• Offers low scheduling overhead
• Tends to offer high throughput
• Conceptually a very simple method
• Fewer computational resources are needed for scheduling
Disadvantages of Non-Preemptive Scheduling:
• It can lead to starvation, especially for real-time tasks
• Bugs can cause a machine to freeze up
• It makes real-time and priority scheduling difficult
• Response time for processes is poor

CPU Scheduling Algorithm:


CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated first to the CPU. Following are the commonly used scheduling algorithms:
▪ First-Come-First-Served (FCFS)
▪ Shortest Job First (SJF)
▪ Round-Robin Scheduling (RR)
▪ Priority Scheduling
▪ Multi-Level Queue Scheduling (MLQ)
▪ Multi-Level Feedback Queue Scheduling (MFQ)
First Come First Serve (FCFS):
• First Come First Serve (FCFS) is an operating system scheduling algorithm that
automatically executes queued requests and processes in order of their arrival.
• In this type of algorithm, processes which request the CPU first, get the CPU allocation
first.
• This is managed with a FIFO queue. As the process enters the ready queue, its PCB
(Process Control Block) is linked with the tail of the queue and, when the CPU becomes
free, it should be assigned to the process at the beginning of the queue.
• FCFS scheduling algorithm is non-preemptive.
• Once the CPU is allocated to a process, that process keeps the CPU until it releases the
CPU, either by terminating or by I/O request.

Example: Consider the following example

Process Burst Time (in milliseconds)


P1 3
P2 5
P3 2
P4 4

Using FCFS algorithm find the average waiting time and average turnaround time if the order is
P1, P2, P3, P4.
Sol:
If the process arrived in the order P1, P2, P3, P4 then according to the FCFS the Gantt chart will
Be:

P1 P2 P3 P4

0 3 8 10 14
The waiting time for process P1 = 0, P2 = 3, P3 = 8, and P4 = 10. The turnaround time for process P1 = 0 + 3 = 3, P2 = 3 + 5 = 8, P3 = 8 + 2 = 10, and P4 = 10 + 4 = 14.
Then average waiting time = (0 + 3 + 8 + 10)/4 = 21/4 = 5.25
Average turnaround time = (3 + 8 + 10 + 14)/4 = 35/4 = 8.75
The FCFS algorithm is non-preemptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by requesting I/O.
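The FCFS calculation above can be checked with a short sketch. Since all processes arrive at time 0, each process waits exactly as long as the bursts ahead of it; the function name `fcfs` is our own.

```python
# FCFS with all processes arriving at time 0: waiting time is the time
# consumed by earlier bursts; turnaround adds the process's own burst.
def fcfs(bursts):
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)     # waits for everything ahead of it
        clock += b
        turnaround.append(clock)  # finishes at the current clock time
    return waiting, turnaround

w, t = fcfs([3, 5, 2, 4])         # P1, P2, P3, P4 from the example above
print(w, sum(w) / 4)              # [0, 3, 8, 10] 5.25
print(t, sum(t) / 4)              # [3, 8, 10, 14] 8.75
```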

Example1:
Consider the following set of processes that arrive at time 0 with the length of the CPU burst
time in milliseconds:

Process Burst Time (in milliseconds)


P1 24
P2 3

P3 3
Soln:
• Suppose that the processes arrive in the order: P1, P2, P3.
• The Gantt chart for the schedule is:

P1 P2 P3
0 24 27 30
Now,
Waiting Time for P1 = 0 milliseconds
Waiting Time for P2 = 24 milliseconds
Waiting Time for P3 = 27 milliseconds
Average Waiting Time = (Total Waiting Time) / No. of Processes
= (0 + 24 + 27) / 3
= 51 / 3
= 17 millisecond
Suppose that the processes arrive in the order: P2 , P3 , P1
The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30
Waiting Time for P2 = 0 milliseconds


Waiting Time for P3 = 3 milliseconds
Waiting Time for P1 = 6 milliseconds
Average Waiting Time = (Total Waiting Time) / No. of Processes
= (0 + 3 + 6) / 3
= 9 / 3 = 3 milliseconds
Thus, the average waiting time depends on the order in which the processes arrive.
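The order-dependence shown in Example 1 is easy to verify with the same FCFS logic: one long burst at the front of the queue delays everything behind it (the convoy effect). The helper name `avg_waiting` is our own.

```python
# Average FCFS waiting time for bursts served in the given order
# (all processes arriving at time 0).
def avg_waiting(bursts):
    waiting, clock = [], 0
    for b in bursts:
        waiting.append(clock)
        clock += b
    return sum(waiting) / len(bursts)

print(avg_waiting([24, 3, 3]))  # 17.0  (order P1, P2, P3)
print(avg_waiting([3, 3, 24]))  # 3.0   (order P2, P3, P1)
```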
Example2:
Consider the following set of processes that arrives at time 0, with the length of the CPU burst
given in milliseconds:

Process Arrival Time Burst Time (in milliseconds)


P1 0 10
P2 1 6
P3 3 2
P4 5 4
Find:
1. Average Turnaround Time (ATAT) (ans 14.25)
2. Average Waiting Time (AWT) (ans 8.75)

3. Weighted TAT (WTAT) (ans 1.295)
4. Average WTAT (AWTAT) (ans 14.25)

Example3: Consider the set of 5 processes whose arrival time and burst time are given below-

Process Arrival Time Burst Time (in milliseconds)


P1 3 4
P2 5 3
P3 0 2
P4 5 1
P5 4 3

If the CPU scheduling policy is FCFS, calculate the average waiting time and average
turnaround time.
Soln:
Gantt Chart-

P3 idle P1 P5 P2 P4
0 2 3 7 10 13 14

We know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process | Arrival Time | Burst Time (ms) | Completion time | Turnaround time (TAT = CT - AT) | Waiting time (WT = TAT - BT)
P1 | 3 | 4 | 7 | 7-3=4 | 4-4=0
P2 | 5 | 3 | 13 | 13-5=8 | 8-3=5
P3 | 0 | 2 | 2 | 2-0=2 | 2-2=0
P4 | 5 | 1 | 14 | 14-5=9 | 9-1=8
P5 | 4 | 3 | 10 | 10-4=6 | 6-3=3
Now,
Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit
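Example 3 can be checked with a short FCFS sketch that handles arrival times: serve processes in arrival order and let the CPU idle when nothing has arrived yet. The function name `fcfs_arrivals` is our own.

```python
# FCFS with arrival times: run in arrival order, idling if needed.
def fcfs_arrivals(procs):               # procs: list of (name, arrival, burst)
    order = sorted(procs, key=lambda p: p[1])
    clock, tat, wt = 0, {}, {}
    for name, arrival, burst in order:
        clock = max(clock, arrival)     # CPU idles until the process arrives
        clock += burst
        tat[name] = clock - arrival     # turnaround = completion - arrival
        wt[name] = tat[name] - burst    # waiting = turnaround - burst
    return tat, wt

procs = [("P1", 3, 4), ("P2", 5, 3), ("P3", 0, 2), ("P4", 5, 1), ("P5", 4, 3)]
tat, wt = fcfs_arrivals(procs)
print(sum(tat.values()) / 5)  # 5.8
print(sum(wt.values()) / 5)   # 3.2
```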
Example4: Consider the set of 3 processes whose arrival time and burst time are given below-
Process Arrival Time Burst Time
P1 0 2
P2 3 1

P3 5 6

If the CPU scheduling policy is FCFS, calculate the average waiting time and average turn
around time.
Soln:
Gantt Chart-

P1 idle P2 idle P3
0 2 3 4 5 11

Process | Arrival Time | Burst Time (ms) | Completion time | Turnaround time (TAT = CT - AT) | Waiting time (WT = TAT - BT)
P1 | 0 | 2 | 2 | 2-0=2 | 2-2=0
P2 | 3 | 1 | 4 | 4-3=1 | 1-1=0
P3 | 5 | 6 | 11 | 11-5=6 | 6-6=0

Now,
Average Turn Around time = (2 + 1 + 6) / 3 = 9 / 3 = 3 unit
Average waiting time = (0 + 0 + 0) / 3 = 0 / 3 = 0 unit

Characteristics of FCFS method


• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• Its performance is poor, and the average wait time is quite high.

Example of FCFS scheduling


A real-life example of the FCFS method is buying a movie ticket on the ticket counter. In this
scheduling algorithm, a person is served according to the queue manner. The person who arrives
first in the queue first buys the ticket and then the next one. This will continue until the last
person in the queue purchases the ticket. Using this algorithm, the CPU process works in a
similar manner.

Advantages of FCFS
• The simplest form of a CPU scheduling algorithm
• Easy to program
• First come first served

Disadvantages of FCFS
• It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated
to the CPU, it will never release the CPU until it finishes executing.
• The Average Waiting Time is high.

• Short processes that are at the back of the queue have to wait for the long process at the
front to finish.
• Not an ideal technique for time-sharing systems.
• Because of its simplicity, FCFS is not very efficient.

Shortest Job First Scheduling (SJF):


• In SJF, the process with the least estimated execution time is selected from the ready
queue for execution.
• It associates with each process, the length of its next CPU burst.
• When the CPU is available, it is assigned to the process that has the smallest next CPU
burst.
• If two processes have the same length of next CPU burst, FCFS scheduling is used.
• SJF algorithm can be preemptive or non-preemptive.
Non-Preemptive SJF:
• In non-preemptive scheduling, CPU is assigned to the process with least CPU burst time.
• The process keeps the CPU until it terminates.
Advantage:
• It gives minimum average waiting time for a given set of processes.
Disadvantage:
• It requires knowledge of how long a process will run and this information is usually not
available.
Example of Non-Preemptive:
Consider the following set of processes that arrive at time 0 with the length of the CPU burst
time in milliseconds:

Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3
Soln:
The Gantt chart for the schedule is:

P4  P1  P3  P2
0   3   9   16  24
Waiting Time for P4 = 0 milliseconds


Waiting Time for P1 = 3 milliseconds
Waiting Time for P3 = 9 milliseconds
Waiting Time for P2 = 16 milliseconds
Average Waiting Time = (Total Waiting Time) / No. of Processes
= (0 + 3 + 9 + 16 ) / 4
= 28 / 4 = 7 milliseconds

Example2:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process   Arrival Time   Burst Time (ms)
P1        3              1
P2        1              4
P3        4              2
P4        0              6
P5        2              3

If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and
average turn around time.
Soln:
Gantt Chart-

P4  P1  P3  P5  P2
0   6   7   9   12  16

Process   Arrival Time   Burst Time (ms)   Completion Time   Turn Around Time   Waiting Time
                                                             (tat = ct – at)    (wt = tat – bt)
P1        3              1                 7                 7-3=4              4-1=3
P2        1              4                 16                16-1=15            15-4=11
P3        4              2                 9                 9-4=5              5-2=3
P4        0              6                 6                 6-0=6              6-6=0
P5        2              3                 12                12-2=10            10-3=7

Now,
Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
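The non-preemptive SJF selection rule can be sketched in Python (an illustrative sketch, not from the notes; names and the tuple layout are assumptions). Among the processes that have arrived, the smallest burst wins, and ties fall back to FCFS order:

```python
# Non-preemptive SJF sketch: the chosen process runs to completion; the CPU
# idles until the next arrival if nothing is ready.
def sjf(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    remaining = sorted(processes, key=lambda p: p[1])   # arrival order for ties
    time, result = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                   # idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])            # shortest burst wins
        remaining.remove(job)
        name, arrival, burst = job
        time += burst                                   # runs to completion
        result[name] = (time, time - arrival, time - arrival - burst)
    return result

# The five processes from Example2 above:
jobs = [("P1", 3, 1), ("P2", 1, 4), ("P3", 4, 2), ("P4", 0, 6), ("P5", 2, 3)]
print(sjf(jobs))  # P4 runs first at t=0; P1's single-unit burst goes next at t=6
```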

Preemptive SJF or (shortest remaining time first (SRTF)):


• In preemptive SJF, the process with the smallest estimated run-time is executed first.
• Any time a new process enters into ready queue, the scheduler compares the expected
run-time of this process with the currently running process.

If the new process’s expected run-time is less than the remaining time of the currently running
process, the currently running process is preempted and the CPU is allocated to the new process.

Example of Preemptive SJF (SRTF):


Consider the following set of processes. These processes arrived in the ready queue at the times
given in the table:

Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The Gantt chart for the schedule is:

P1  P2  P4  P1  P3
0   1   5   10  17  26

Waiting Time for P1 = 10 – 1 – 0 = 9
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 17 – 2 = 15
Waiting Time for P4 = 5 – 3 = 2
Average Waiting Time = (Total Waiting Time) / No. of Processes
= (9 + 0 + 15 + 2) / 4
= 26 / 4
= 6.5 milliseconds
Explanation:
Process P1 is started at time 0, as it is the only process in the queue.
Process P2 arrives at the time 1 and its burst time is 4 milliseconds.
This burst time is less than the remaining time of process P1 (7 milliseconds). So, process P1 is
preempted and P2 is scheduled

Example2: Consider the set of 5 processes whose arrival time and burst time are given below-

Process   Arrival Time   Burst Time
P1        3              1
P2        1              4
P3        4              2
P4        0              6
P5        2              3

If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average
turn around time.

Soln:
Gantt chart-

P4  P2  P1  P2  P3  P5  P4
0   1   3   4   6   8   11  16

Process   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
                                                        (tat = ct – at)    (wt = tat – bt)
P1        3              1            4                 4-3=1              1-1=0
P2        1              4            6                 6-1=5              5-4=1
P3        4              2            8                 8-4=4              4-2=2
P4        0              6            16                16-0=16            16-6=10
P5        2              3            11                11-2=9             9-3=6
Now,
Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit
Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit

Example3: Consider the set of 6 processes whose arrival time and burst time are given below-

Process   Arrival Time   Burst Time
P1        0              7
P2        1              5
P3        2              3
P4        3              1
P5        4              2
P6        5              1

If the CPU scheduling policy is shortest remaining time first, calculate the average waiting time
and average turn around time.

Soln:
Gantt chart-

P1  P2  P3  P4  P3  P6  P5  P2  P1
0   1   2   3   4   6   7   9   13  19

Process   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
                                                        (tat = ct – at)    (wt = tat – bt)
P1        0              7            19                19-0=19            19-7=12
P2        1              5            13                13-1=12            12-5=7
P3        2              3            6                 6-2=4              4-3=1
P4        3              1            4                 4-3=1              1-1=0
P5        4              2            9                 9-4=5              5-2=3
P6        5              1            7                 7-5=2              2-1=1
Now,
Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit
Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit
Example4:
Consider the set of 3 processes whose arrival time and burst time are given below-

Process   Arrival Time   Burst Time
P1        0              9
P2        1              4
P3        2              9

If the CPU scheduling policy is SRTF, calculate the average waiting time and average turn
around time.
Soln:
Gantt chart-

P1  P2  P1  P3
0   1   5   13  22

Process   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P1        0              9            13                13-0=13            13-9=4
P2        1              4            5                 5-1=4              4-4=0
P3        2              9            22                22-2=20            20-9=11

Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit
Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit
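The SRTF examples above can be checked with a small unit-time simulator in Python (an illustrative sketch, not from the notes; the function name and tuple layout are assumptions). At every time unit it re-picks the ready process with the least remaining time, so a new arrival automatically preempts a longer job:

```python
# SRTF (preemptive SJF) sketch: advance time one unit at a time and always run
# the ready process with the least remaining time; ties favour the earlier arrival.
def srtf(processes):
    """processes: list of (name, arrival, burst) with integer bursts.
    Returns {name: (completion, turnaround, waiting)}."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining = dict(burst)
    time, result = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                # nothing has arrived yet: idle one unit
            time += 1
            continue
        current = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[current] -= 1      # run for one time unit, then re-evaluate
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            tat = time - arrival[current]
            result[current] = (time, tat, tat - burst[current])
    return result

# The three processes from Example4 above:
jobs = [("P1", 0, 9), ("P2", 1, 4), ("P3", 2, 9)]
print(srtf(jobs))  # P2 preempts P1 at t=1 and completes at t=5
```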
Priority Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal priorities are scheduled in FCFS order. Priorities are generally indicated
by some fixed range of numbers and there is no general method of indicating which is the
highest or lowest priority, it may be either increasing or decreasing order.
Priority can be defined either internally or externally.
• Internally defined priorities use some measurable quantity to compute the priority of a
process. For example, time limits, memory requirements, the number of open files and
the ratio of average I/O burst to average CPU burst has been used in computing
priorities.
• External priorities are set by criteria outside the OS, such as the importance of the
process, the type and amount of funds being paid for computer use, and other political factors.
Priority scheduling can be either preemptive or non-preemptive.
• When a process arrives at the ready queue, its priority is compared with the priority of
currently running process. A preemptive priority scheduling algorithm will preempt the
CPU if the priority of the newly arrived process is higher than the priority of the currently
running process.
• A non-preemptive priority scheduling algorithm will simply put the new process at the
head of the ready queue.
• A major problem with such scheduling algorithms is indefinite blocking, or starvation: a
low-priority process that is ready to run may wait indefinitely for the CPU.
Example: Non preemptive priority
Consider following set of processes, assumed to have arrived at time 0 in order P1, P2, …, P5
with the length of the CPU burst given in milliseconds

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2

Soln: (here a smaller priority number means a higher priority)
Priority scheduling Gantt chart-

P2  P5  P1  P3  P4
0   1   6   16  18  19

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 milliseconds

Example: preemptive priority scheduling


Process Arrival Time Priority Burst Time
P1 0 ms 3 3 ms
P2 1 ms 2 4 ms
P3 2 ms 4 6 ms
P4 3 ms 6 4 ms
P5 5 ms 10 2 ms

Soln: (here a smaller priority number means a higher priority)
Gantt chart for this is:

P1  P2  P1  P3  P4  P5
0   1   5   7   13  17  19

The calculation table is:

Process   Completion Time   Turn Around Time   Waiting Time   Response Time
P1        7                 7                  4              0
P2        5                 4                  0              0
P3        13                11                 5              5
P4        17                14                 10             10
P5        19                14                 12             12

Total Turn Around Time = 7 + 4 + 11 + 14 + 14 = 50 ms

Average Turn Around Time = (Total Turn Around Time)/(no. of processes) = 50/5 = 10.00 ms
Total Waiting Time = 4 + 0 + 5 + 10 + 12 = 31 ms
Average Waiting Time = (Total Waiting Time)/(no. of processes) = 31/5 = 6.20 ms
Total Response Time = 0 + 0 + 5 + 10 + 12 = 27 ms

Average Response Time = (Total Response Time)/(no. of processes) = 27/5 = 5.40 ms
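Preemptive priority scheduling can be simulated with the same unit-time loop as SRTF, swapping the selection key from remaining time to priority (an illustrative sketch, not from the notes; names and tuple layout are assumptions, and, as in the example above, a smaller number means a higher priority):

```python
# Preemptive priority sketch: each time unit, run the ready process with the
# best (smallest-numbered) priority; ties fall back to FCFS order.
def preemptive_priority(processes):
    """processes: list of (name, arrival, priority, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    info = {n: (a, p, b) for n, a, p, b in processes}
    remaining = {n: b for n, a, p, b in processes}
    time, result = 0, {}
    while remaining:
        ready = [n for n in remaining if info[n][0] <= time]
        if not ready:                # idle until something arrives
            time += 1
            continue
        current = min(ready, key=lambda n: (info[n][1], info[n][0]))
        remaining[current] -= 1      # run one unit, then re-evaluate priorities
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            at, pr, bt = info[current]
            result[current] = (time, time - at, time - at - bt)
    return result

# The five processes from the preemptive example above:
jobs = [("P1", 0, 3, 3), ("P2", 1, 2, 4), ("P3", 2, 4, 6),
        ("P4", 3, 6, 4), ("P5", 5, 10, 2)]
print(preemptive_priority(jobs))  # P2 (priority 2) preempts P1 at t=1
```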
Round Robin Scheduling (RR):
• Round Robin is a CPU scheduling algorithm where each process is assigned a fixed
time slot in a cyclic way.
• This small amount of time is known as Time Quantum or Time Slice. A time quantum is
generally from 10 to 100 milliseconds.
• It is basically the pre-emptive version of First come First Serve CPU Scheduling
algorithm.
• If a process does not complete before its time slice expires, the CPU is preempted and is
given to the next process in the ready queue.
• The preempted process is then placed at the tail of the ready queue.
• If a process is completed before its time slice expires, the process itself releases the CPU.
• The scheduler then proceeds to the next process in the ready queue.
• The performance of Round Robin scheduling depends on several factors:
• Size of Time Quantum.
• Context Switching Overhead

Characteristics of Round-Robin Scheduling


• Round robin is a pre-emptive algorithm
• The CPU is shifted to the next process after fixed interval time, which is called time
quantum/time slice.
• The process that is preempted is added to the end of the queue.
• Round robin is a hybrid model which is clock-driven
• The time slice should be kept small so that each task gets the CPU promptly, but its
exact value differs from OS to OS.
• It is a real time algorithm which responds to the event within a specific time limit.
• Round robin is one of the oldest, fairest, and easiest algorithms.
• Widely used scheduling method in traditional OS.
Example of Round Robin Scheduling:
Consider the following set of processes that arrive at time 0 with the length of the CPU burst
time in milliseconds:

Process   Burst Time (ms)
P1        10
P2        5
P3        2

The time quantum is 2 milliseconds.


Soln:
The Gantt chart for the schedule is:

P1 P2 P3 P1 P2 P1 P2 P1 P1
0 2 4 6 8 10 12 13 15 17

Average waiting time = (P1 + P2 + P3) / 3
= [{(6 – 2) + (10 – 8) + (13 – 12)} + {2 + (8 – 4) + (12 – 10)} + 4] / 3
= (7 + 8 + 4) / 3
= 6.33 ms
Example2: Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds. Time quantum = 4.

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Soln: The Gantt chart for the schedule is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Average waiting time = (P1 + P2 + P3) / 3


= [ (10 – 4) + 4 + 7] / 3
= 17 / 3
= 5.66 milliseconds

Example3: Consider the following set of processes that arrive at the given times, with the
length of the CPU burst given in milliseconds. Calculate AWT and ATAT.
TQ = 2

Process   Burst Time (ms)   Arrival Time
P1        4                 0
P2        5                 1
P3        2                 2
P4        1                 3
P5        6                 4
P6        3                 6

Soln: The Gantt chart for the scheduling is:

P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
0 2 4 6 8 9 11 13 15 17 18 19 21

Process   Burst Time (ms)   Arrival Time   FT   TAT (= FT – AT)   WT (= TAT – BT)
P1        4                 0              8    8                 4
P2        5                 1              18   17                12
P3        2                 2              6    4                 2
P4        1                 3              9    6                 5
P5        6                 4              21   17                11
P6        3                 6              19   13                10

ATAT = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 ms
AWT = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 ms
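The round-robin bookkeeping can be sketched in Python (an illustrative sketch, not from the notes; names and tuple layout are assumptions). One assumed convention, consistent with the Gantt charts in these examples, is that a process arriving exactly when a slice ends enters the ready queue ahead of the process just preempted:

```python
from collections import deque

# Round Robin sketch with a fixed time quantum: preempted processes go to the
# tail of the ready queue; arrivals during a slice are admitted first.
def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst).
    Returns {name: (finish, turnaround, waiting)}."""
    info = {n: (a, b) for n, a, b in processes}
    remaining = {n: b for n, a, b in processes}
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    queue, time, result = deque(), 0, {}

    def admit(upto):                      # move arrivals into the ready queue
        while arrivals and arrivals[0][1] <= upto:
            queue.append(arrivals.popleft()[0])

    admit(0)
    while queue or arrivals:
        if not queue:                     # idle until the next arrival
            time = arrivals[0][1]
            admit(time)
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                       # newcomers queue ahead of the preempted one
        if remaining[name] == 0:
            at, bt = info[name]
            result[name] = (time, time - at, time - at - bt)
        else:
            queue.append(name)            # preempted process goes to the tail
    return result

# The six processes from Example3 above, with TQ = 2:
jobs = [("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 2),
        ("P4", 3, 1), ("P5", 4, 6), ("P6", 6, 3)]
print(round_robin(jobs, 2))  # P1 finishes at t=8; P5 is the last to finish, at t=21
```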

Example4: Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds. Calculate the AWT.
TQ = 20

Process   Burst Time (ms)   Waiting Time
P1        53                0 + (77 – 20) + (121 – 97) = 81
P2        17                20
P3        68                37 + (97 – 57) + (134 – 117) = 94
P4        24                57 + (117 – 77) = 97

Soln: The Gantt chart for the scheduling is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

Average Waiting Time:


= (P1 + P2 + P3 + P4) / 4
= [{0 + (77 – 20) + (121 – 97)} + 20 + {37 + (97-57) + (134 – 117)} + {57 + (117 – 77)}] / 4
= (81+20+94+97) / 4
= 73 milliseconds

Advantage of Round-robin Scheduling:
• It doesn’t face the issues of starvation or convoy effect.
• All the jobs get a fair allocation of CPU.
• It deals with all process without any priority
• If you know the total number of processes in the run queue, you can also estimate the
worst-case response time for a process.
• This scheduling method does not depend on burst time, which makes it easy to implement.
• Once a process has executed for its time quantum, it is preempted and another process
executes for the next time quantum.
• Allows the OS to use context switching to save the states of preempted processes.
• It performs well in terms of average response time.

Disadvantages of Round-robin Scheduling:


• If the time slice is too short, throughput falls because the processor spends more time
on context switching.
• Its performance heavily depends on the time quantum: a lower time quantum means higher
context-switching overhead in the system.
• Priorities cannot be set for the processes, so round-robin gives no special treatment to
more important tasks.
• Finding a correct time quantum is quite a difficult task.

Multilevel Queue Scheduling:
• Multi-level queue scheduling was created for situations in which processes are easily
classified into different groups.
• In a multilevel queue, processes are divided into different queues based on their type.
• A process is permanently assigned to one queue, generally based on some property of the
process, i.e. system process, interactive process, batch process, end-user process, memory
size, process priority or process type.
• Each queue has its own scheduling algorithm. For example, interactive processes may use
the round robin scheduling method, while batch jobs use the FCFS method.
• In addition, there must be scheduling among the queues, and this is generally implemented
as fixed-priority preemptive scheduling.
• Foreground processes may have higher priority than background processes.
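The scheme above can be illustrated with a toy dispatch loop in Python. This is a simplified sketch, not from the notes: the two-queue split, the job names, and the assumption that no new processes arrive during execution are all made up for illustration:

```python
from collections import deque

# Toy multilevel-queue dispatcher: the foreground queue is scheduled
# round-robin, the background queue FCFS, and the background queue runs only
# when the foreground queue is empty (fixed-priority scheduling between queues).
def multilevel_run(foreground, background, quantum):
    """foreground/background: deques of (name, burst).
    Returns the executed (name, slice_length) trace."""
    trace = []
    while foreground or background:
        if foreground:                          # higher-priority queue always wins
            name, burst = foreground.popleft()
            run = min(quantum, burst)
            trace.append((name, run))
            if burst > run:
                foreground.append((name, burst - run))  # back to the tail
        else:
            name, burst = background.popleft()  # FCFS: run to completion
            trace.append((name, burst))
    return trace

fg = deque([("interactive1", 3), ("interactive2", 2)])
bg = deque([("batch1", 4)])
print(multilevel_run(fg, bg, 2))  # the batch job runs only after both interactive jobs finish
```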

Two level scheduling:
Two-level scheduling is an efficient scheduling method that uses two schedulers to perform
process scheduling.
Consider the example:
Suppose a system has 50 running processes, all with equal priority, and the system’s memory can
hold only 10 processes simultaneously. Thus, 40 processes are always swapped out to virtual
memory on the hard disk. Swapping a process in or out takes 50 ms.

With straightforward round-robin scheduling over all 50 processes, a process would often need to
be swapped in at every context switch (with the least recently used process swapped out to make
room). All this swapping costs too much, and the unnecessary swaps waste much of the
scheduler’s time.
So the solution to the problem is two-level scheduling. There are two different schedulers in two-
level scheduling
Lower level scheduler –
This scheduler selects which process will run from memory.
Higher level scheduler –
This scheduler focuses on swapping processes in and out between the hard disk and memory.
Swapping takes much time, so it does its scheduling much less often. It also swaps out processes
that have been resident in memory for a long time and swaps in processes on disk that have not
run for a long time.
