
MODULE 2
Processes and Process Scheduling

Syllabus

1.1. Processes, Process states


1.2. Process Control Block, Threads
1.3. Operations on processes: process creation and termination
1.4. Inter-process communication: Shared memory systems, Message Passing
1.5. Process Scheduling - Basic concepts, Scheduling Criteria
1.6. Scheduling algorithms - Basics
1.7. First come First Served, Shortest Job First
1.8. Priority scheduling, Round Robin Scheduling

1.1 Processes, Process states

● What is a Process?
○ A process is the execution of a program that performs the actions specified in that program.
○ It can be defined as an execution unit where a program runs.
○ The OS lets you create, schedule, and terminate the processes that are executed by the CPU.
○ A process created by another process is called a child process.
○ Process operations can be easily controlled with the help of the PCB (Process Control Block).
○ You can consider the PCB the brain of the process: it contains all the crucial information about the process, such as its process ID, priority, state, CPU registers, etc.
○ What does a process look like in memory?


○ Text Section: Contains the compiled program code. It also includes the current activity, represented by the value of the Program Counter.

○ Data Section: Contains the global and static variables.


○ Heap Section: Memory dynamically allocated to the process during its run time.
○ Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.

● Process states
○ When a process executes, it passes through different states.
○ These stages may differ in different operating systems, and the names of these
states are also not standardized.
○ A process state is a condition of the process at a specific instant of time.
○ It also defines the current position of the process.


○ New:
■ A program that is about to be picked up by the OS into the main memory is called a new process.
○ Ready:
■ Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned.
■ The OS picks new processes from the secondary memory and puts all of them in the main memory.
■ The processes which are ready for execution and reside in the main memory are called ready state processes.
■ There can be many processes present in the ready state.
○ Run :
■ One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm.
■ Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one.
■ If we have n processors in the system then we can have n processes
running simultaneously.

○ Block or wait:
■ From the Running state, a process can make the transition to the block
or wait state depending upon the scheduling algorithm or the intrinsic
behavior of the process.
■ When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to the other processes.
○ Completion or termination:
■ When a process finishes its execution, it moves to the termination state.
■ All the context of the process (Process Control Block) will also be
deleted and the process will be terminated by the Operating system.
○ Suspend ready:
■ A process in the ready state, which is moved to secondary memory
from the main memory due to lack of the resources (mainly primary
memory) is called in the suspended ready state.
■ If the main memory is full and a higher priority process arrives for execution, the OS has to make room for it in the main memory by moving a lower priority process out into the secondary memory.
■ The suspended ready processes remain in the secondary memory until main memory becomes available.
○ Suspend wait:
■ When memory must be freed, instead of removing a process from the ready queue, it is better to remove a blocked process that is waiting for some resource in the main memory.
■ Since such a process is already waiting for some resource to become available, it is better if it waits in the secondary memory and makes room for a higher priority process.
■ These processes complete their execution once main memory becomes available and their wait is finished.

1.2. Process Control Block, Threads

● Process Control Block (PCB)


○ A Process Control Block is a data structure maintained by the Operating System
for every process.
○ The PCB is identified by an integer process ID (PID).
○ There is a Process Control Block for each process, enclosing all the information
about the process.
○ It is also known as the task control block. It is a data structure, which contains
the following:


○ Process ID: Unique identification for each of the processes in the operating
system.
○ Process state: A process can be new, ready, running, waiting, etc.
○ Pointer: A pointer to the parent process.
○ Program counter: The program counter lets you know the address of the next
instruction, which should be executed for that process.
○ CPU registers: This component includes accumulators, index and
general-purpose registers, and information of condition code.
○ CPU scheduling information: This component includes a process priority,
pointers for scheduling queues, and various other scheduling parameters.
○ Accounting information: Includes the amount of CPU and real time used, time limits, job or process numbers, etc.
○ Memory-management information: This information includes the value of
the base and limit registers, the page, or segment tables. This depends on the
memory system, which is used by the operating system.
○ I/O status information: This block includes a list of open files, the list of I/O
devices that are allocated to the process, etc.
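To make the list above concrete, the fields of a PCB can be pictured as a C structure. This is a minimal sketch, not the layout of any real kernel; all field names and sizes are illustrative.

    #include <stdint.h>

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    /* Illustrative PCB: one field per item in the list above. */
    struct pcb {
        int            pid;              /* unique process ID                 */
        proc_state_t   state;            /* new, ready, running, waiting, ... */
        struct pcb    *parent;           /* pointer to the parent process     */
        uintptr_t      program_counter;  /* address of the next instruction   */
        uintptr_t      registers[16];    /* saved CPU registers               */
        int            priority;         /* CPU scheduling information        */
        struct pcb    *next_in_queue;    /* link for the scheduling queues    */
        unsigned long  cpu_time_used;    /* accounting information            */
        uintptr_t      base, limit;      /* memory-management registers       */
        int            open_files[16];   /* I/O status: open file descriptors */
    };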

● Threads

○ What is a Thread?
■ A thread is a path of execution within a process.
■ A process can contain multiple threads.
○ Why Multithreading?
■ A thread is also known as a lightweight process.
■ The idea is to achieve parallelism by dividing a process into multiple
threads.

■ For example, in a browser, multiple tabs can be different threads.


■ MS Word uses multiple threads: one thread to format the text, another
thread to process inputs, etc.
○ Process vs Thread?
■ The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces (see the pthreads sketch at the end of this section).
■ Threads are not independent of one another like processes are, and as a
result threads share with other threads their code section, data section,
and OS resources (like open files and signals).
■ But, like a process, a thread has its own program counter (PC), register
set, and stack space.
○ Advantages of Thread over Process
■ 1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
■ 2. Faster context switch: Context switch time between threads is lower
compared to process context switch. Process context switching
requires more overhead from the CPU.
■ 3. Effective utilization of a multiprocessor system: If we have
multiple threads in a single process, then we can schedule multiple
threads on multiple processors. This will make process execution
faster.
■ 4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process. (Note: stack and registers can't be shared among the threads; each thread has its own stack and registers.)
■ 5. Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must follow specific inter-process communication techniques to communicate with each other.
■ 6. Enhanced throughput of the system: If a process is divided into
multiple threads, and each thread function is considered as one job,
then the number of jobs completed per unit of time is increased, thus
increasing the throughput of the system.
○ Types of Threads
■ There are two types of threads.
● User Level Thread
● Kernel Level Thread
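The "Process vs Thread" points above can be demonstrated with POSIX threads: the global counter lives in the shared data section, while each thread's local variable lives on its own private stack. A minimal sketch (compile with -pthread); the mutex protects the shared counter from concurrent updates.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;   /* data section: visible to all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        int local = 0;        /* stack: private to each thread */
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++; /* shared, so access is serialized */
            pthread_mutex_unlock(&lock);
            local++;
        }
        printf("thread %ld: local = %d\n", (long)arg, local);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter); /* 200000 */
        return 0;
    }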

1.3 Operations on processes: process creation and termination

● Process Creation
○ (i). When a new process is created, the operating system assigns a
unique Process Identifier (PID) to it and inserts a new entry in the
primary process table.
○ (ii). Then the required memory space for all the elements of the
process such as program, data and stack is allocated including space
for its Process Control Block (PCB).
○ (iii). Next, the various values in the PCB are initialized, such as,
■ The process identification part is filled with the PID assigned to it in step (i) and also its parent's PID.
■ The processor register values are mostly filled with zeroes,
except for the stack pointer and program counter.
■ Stack pointer is filled with the address of stack allocated to it
in step (ii) and program counter is filled with the address of its
program entry point.
■ The process state information would be set to ‘New’.
■ Priority would be lowest by default, but users can specify any
priority during creation.
■ In the beginning, the process is not allocated to any I/O devices
or files.
■ The user has to request them or if this is a child process it may
inherit some resources from its parent.
○ (iv). Then the operating system will link this process to the scheduling queue and the process state will be changed from 'New' to 'Ready'. Now the process is competing for the CPU.
○ (v). Additionally, the operating system will create some other data structures such as log files or accounting files to keep track of process activity.
● Process Termination:
○ A process terminates itself when it finishes executing its last statement; the operating system then uses the exit() system call to delete its context.
○ Then all the resources held by that process, like physical and virtual memory, I/O buffers, open files etc., are taken back by the operating system.
○ A process P can be terminated either by the operating system or by the
parent process of P.
○ A parent may terminate a process due to one of the following reasons,
■ (i). When a task given to the child is not required now.
■ (ii). When a child has taken more resources than its limit.
■ (iii). The parent of the process is exiting, as a result all its
children are deleted. This is called cascaded termination.
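The creation and termination steps above correspond to the classic POSIX calls: fork() creates the child (the OS assigns its PID and builds its PCB), exit() terminates it, and the parent reaps it with waitpid(). A minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* OS assigns the child a new PID */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {           /* child process */
            printf("child: pid=%d, parent=%d\n", getpid(), getppid());
            exit(0);                     /* normal termination via exit() */
        } else {                         /* parent process */
            int status;
            waitpid(pid, &status, 0);    /* collect the child's exit status */
            printf("parent: child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }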

1.4 Inter-process communication: Shared memory systems, Message Passing

● What is Inter Process Communication?


○ Inter process communication (IPC) is used for exchanging data between
multiple threads in one or more processes or programs.
○ The Processes may be running on single or multiple computers connected by a
network.
○ The full form of IPC is Inter-process communication.
○ It is a set of programming interfaces which allow a programmer to coordinate
activities among various program processes which can run concurrently in an
operating system.
○ This allows a specific program to handle many user requests at the same time.
○ Since every single user request may result in multiple processes running in the operating system, the processes may need to communicate with each other.
○ Each IPC protocol approach has its own advantages and limitations, so it is not
unusual for a single program to use all of the IPC methods.

○ Shared Memory:
■ Communication between processes using shared memory requires processes to share
some variable, and it completely depends on how the programmer will implement it.
■ One way of communication using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously, and they share some resource or use information produced by the other.
■ Process1 generates information about certain computations or resources being used
and keeps it as a record in shared memory.
■ When process2 needs to use the shared information, it will check in the record stored
in shared memory and take note of the information generated by process1 and act
accordingly.
■ Processes can use shared memory for extracting information as a record from another
process as well as for delivering any specific information to other processes.



■ Ex: Producer-Consumer problem


● There are two processes: Producer and Consumer.
● The producer produces some items and the Consumer consumes that item.
● The two processes share a common space or memory location known as a
buffer where the item produced by the Producer is stored and from which the
Consumer consumes the item if needed.
● There are two versions of this problem, depending on the type of buffer:
○ Unbounded buffer
○ Bounded buffer
● Unbounded buffer
○ There is no limit on the size of the unbounded buffer.
○ The consumer waits for a new item, however, there is no restriction on
the producer to produce items.
● Bounded buffer
○ If the buffer is empty, the consumer must wait for a new item.
○ When the buffer is full, the producer waits until it can produce new items.
● We will discuss the bounded buffer problem.
● First, the Producer and the Consumer share some common memory; then the producer starts producing items.
● If the number of items produced equals the size of the buffer, the producer waits until some are consumed by the Consumer.
● Similarly, the consumer first checks for the availability of an item. If no item is available, the Consumer waits for the Producer to produce one; if items are available, the consumer consumes them. (A code sketch of this scheme follows.)
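The bounded-buffer scheme just described can be sketched with an anonymous shared mapping that a parent (producer) and its forked child (consumer) both see. The busy-waiting in/out indices follow the classic textbook formulation; the buffer size and item count are illustrative, and real code would use semaphores instead of spinning.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define BUF_SIZE 8

    struct shared { int buf[BUF_SIZE]; volatile int in, out; };

    int main(void) {
        /* anonymous shared mapping, inherited across fork() */
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        s->in = s->out = 0;

        if (fork() == 0) {                    /* child: the consumer */
            for (int i = 0; i < 16; i++) {
                while (s->in == s->out) ;     /* buffer empty: wait   */
                printf("consumed %d\n", s->buf[s->out]);
                s->out = (s->out + 1) % BUF_SIZE;
            }
            _exit(0);
        }
        for (int i = 0; i < 16; i++) {        /* parent: the producer */
            while ((s->in + 1) % BUF_SIZE == s->out) ; /* buffer full: wait */
            s->buf[s->in] = i;
            s->in = (s->in + 1) % BUF_SIZE;
        }
        wait(NULL);
        return 0;
    }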
○ Message Passing:
● In this method, processes communicate with each other without using any kind of
shared memory.
● If two processes p1 and p2 want to communicate with each other, they proceed as
follows:

● Establish a communication link (if a link already exists, no need to establish it again.)
● Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
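One simple way to realize such a send()/receive() pair without shared memory is a POSIX pipe between a parent and a child: write() plays the role of send(message) and read() the role of receive(message). A minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                        /* establish the communication link */
        if (fork() == 0) {               /* child: the receiver */
            char buf[64];
            close(fd[1]);                /* child only reads   */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* receive(message) */
            buf[n > 0 ? n : 0] = '\0';
            printf("received: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);                    /* parent only writes */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));  /* send(message) */
        close(fd[1]);
        wait(NULL);
        return 0;
    }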


● Here are different ways to communicate using message passing:
○ 1. Direct or Indirect communication
○ 2. Synchronous or asynchronous communication
○ 3. Automatic or explicit buffering
● 1. Direct or Indirect communication
○ Under Direct communication, processes must explicitly name the sender or
receiver in the communication.
○ The send() and receive() are defined as:
■ send(P, message) – send a message to process P
■ receive(Q, message) – receive a message from process Q
○ The communication link in direct communication has the following properties:
■ The link is established automatically. Processes need each other’s
identity to send messages.
■ A link is associated with exactly two processes.
■ Between two processes, there is only one link.
○ Addressing may be symmetric or asymmetric. In symmetric addressing, both the sender and the receiver name each other; in asymmetric addressing, the sending process names the receiver, but the recipient does not need to name the sender.


○ The disadvantage of both symmetric and asymmetric schemes is the limited modularity when changing the identifier of a process:
○ we need to find every reference to the old identifier in the other process definitions and replace it with the new one.
○ In indirect communication, messages are sent and received from mailboxes
or ports.
○ The processes can place messages into a mailbox or remove messages from
them.
○ The mailbox has a unique identification.
○ Two processes can communicate only if they have a shared mailbox.
■ send(A, message) – send a message to mailbox A
■ receive(A, message) – receive a message from mailbox A
○ In this scheme, a communication link has the following properties:
■ 1. A link is established between a pair of processes only if both have
the same shared mailbox.
■ 2. A link may be associated with more than two processes.
■ 3. There may be different links, with each link corresponding to one
mailbox, between pairs of communicating processes.
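Indirect (mailbox) communication can be tried with POSIX message queues: the mailbox has a name of its own ("/mbox_A" below is an arbitrary example), and both processes address the mailbox rather than each other. A minimal sketch for Linux (link with -lrt on older glibc):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);

        if (fork() == 0) {                      /* child: receive(A, message) */
            char msg[64];                       /* must hold mq_msgsize bytes */
            mq_receive(mq, msg, sizeof msg, NULL);
            printf("received: %s\n", msg);
            _exit(0);
        }
        const char *msg = "hello via mailbox A";
        mq_send(mq, msg, strlen(msg) + 1, 0);   /* send(A, message) */
        wait(NULL);
        mq_close(mq);
        mq_unlink("/mbox_A");                   /* remove the mailbox */
        return 0;
    }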



● 2. Synchronous and Asynchronous Communication


○ Communication happens using send() and receive().
○ There are many options for these two primitives.
○ Message passing may be blocking or non-blocking, also known as synchronous and asynchronous.
● Blocking send – sending is blocked, until a message is received by the
receiving process or mailbox.
● Non-blocking send – sending process sends the message and resumes
operation.
● Blocking receive – receiver blocks until a message is available.
● Non-blocking receive – the receiver retrieves either a valid message or
a null.
● 3. Automatic and Explicit Buffering
○ The messages exchanged between communicating processes reside in a
temporary queue.
○ The queue can be implemented in three ways:
■ Zero capacity – with zero capacity the link cannot have messages
waiting. Blocking send is the correct option.
■ Bounded capacity – the queue is of finite length n. If the queue is not
full, the sender can continue to send messages. If full, blocking send.
■ Unbounded capacity – Infinite queue length; any number of messages
wait and sender never blocks.
○ The zero-capacity case is called a message system with no buffering; the other two cases provide automatic buffering.

1.5 Process Scheduling - Basic concepts, Scheduling Criteria

● Process Scheduling
○ The act of determining which process in the ready state should be moved to the running state is known as Process Scheduling.
○ The prime aim of the process scheduling system is to keep the CPU busy all
the time and to deliver minimum response time for all programs.
○ For achieving this, the scheduler must apply appropriate rules for swapping
processes IN and OUT of the CPU.
○ Scheduling falls into one of two general categories:
■ Non Preemptive Scheduling: When the currently executing process gives up the CPU voluntarily.


■ Preemptive Scheduling: When the operating system decides to favour
another process, pre-empting the currently executing process.
○ What are Scheduling Queues?
■ All processes, upon entering into the system, are stored in the Job
Queue.
■ Processes in the ready state are placed in the Ready Queue.
■ Processes waiting for a device to become available are placed in Device
Queues.
■ There are unique device queues available for each I/O device.
■ A new process is initially put in the Ready queue.
■ It waits in the ready queue until it is selected for execution (or dispatched).
■ Once the process is assigned to the CPU and is executing, one of the
following several events can occur:
● The process could issue an I/O request, and then be placed in
the I/O queue.
● The process could create a new subprocess and wait for its
termination.
● The process could be removed forcibly from the CPU, as a
result of an interrupt, and be put back in the ready queue.


■ In the first two cases, the process eventually switches from the waiting
state to the ready state, and is then put back in the ready queue.
■ A process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated.
○ Types of Schedulers
■ There are three types of schedulers available:
● Long Term Scheduler
● Short Term Scheduler
● Medium Term Scheduler

■ Long Term Scheduler


● Long term scheduler runs less frequently.
● The long term scheduler (job scheduler) decides which programs are admitted to the job queue.
● From the job queue, the job scheduler selects processes and loads them into the memory for execution.
● Primary aim of the Job Scheduler is to maintain a good degree
of Multiprogramming.
● An optimal degree of Multiprogramming means the average
rate of process creation is equal to the average departure rate of
processes from the execution memory.
■ Short Term Scheduler
● This is also known as CPU Scheduler and runs very frequently.
● The primary aim of this scheduler is to enhance CPU
performance and increase process execution rate.
■ Medium Term Scheduler
● This scheduler removes the processes from memory (and from
active contention for the CPU), and thus reduces the degree of
multiprogramming.
● At some later time, the process can be reintroduced into
memory and its execution can be continued where it left off.
● This scheme is called swapping. The process is swapped out,
and is later swapped in, by the medium term scheduler.
● Swapping may be necessary to improve the process mix, or
because a change in memory requirements has overcommitted
available memory, requiring memory to be freed up.
● This complete process is depicted in the diagram below:

○ What is Context Switch?


■ Switching the CPU to another process requires saving the state of the
old process and loading the saved state for the new process.
■ This task is known as a Context Switch.
■ The context of a process is represented in the Process Control Block (PCB) of the process;
■ it includes the value of the CPU registers, the process state and memory-management information.
■ When a context switch occurs, the Kernel saves the context of the old
process in its PCB and loads the saved context of the new process
scheduled to run.
■ Context switch time is pure overhead, because the system does no
useful work while switching.
■ Its speed varies from machine to machine, depending on the memory
speed, the number of registers that must be copied, and the existence of
special instructions (such as a single instruction to load or store all
registers).
■ Typical speeds range from 1 to 1000 microseconds.
■ Context switching has become such a performance bottleneck that programmers use new structures (threads) to avoid it whenever and wherever possible.

○ Scheduling Criteria

■ CPU utilisation –
● The main objective of any CPU scheduling algorithm is to keep
the CPU as busy as possible.
● Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent depending on the load on the system.
■ Throughput –
● A measure of the work done by CPU is the number of
processes being executed and completed per unit time.
● This is called throughput.
● The throughput may vary depending upon the length or
duration of processes.
■ Turnaround time –
● For a particular process, an important criterion is how long it takes to execute that process.
● The time elapsed from the time of submission of a process to the time of completion is known as the turnaround time.
● Turn-around time is the sum of time spent waiting to get into
memory, waiting in the ready queue, executing in CPU, and
waiting for I/O.
■ Waiting time –
● A scheduling algorithm does not affect the time required to
complete the process once it starts execution.
● It only affects the waiting time of a process i.e. time spent by a
process waiting in the ready queue.
■ Response time –
● In an interactive system, turn-around time is not the best
criteria.
● A process may produce some output fairly early and continue
computing new results while previous results are being output
to the user.
● Thus another criterion is the time taken from the submission of a request until the first response is produced.
● This measure is called response time.

1.6 Scheduling algorithms - Basics


● The Purpose of a Scheduling algorithm

1. Maximum CPU utilization

2. Fair allocation of CPU

3. Maximum throughput

4. Minimum turnaround time

5. Minimum waiting time

6. Minimum response time

● There are mainly six types of process scheduling algorithms

1. First Come First Serve (FCFS)

2. Shortest-Job-First (SJF) Scheduling

3. Shortest Remaining Time

4. Priority Scheduling

5. Round Robin Scheduling

6. Multilevel Queue Scheduling



● FCFS Scheduling
○ First come first serve (FCFS) scheduling algorithm simply schedules the jobs
according to their arrival time.
○ The job which comes first in the ready queue will get the CPU first.
○ The lesser the arrival time of the job, the sooner the job will get the CPU.
○ FCFS scheduling suffers from the convoy effect if the burst time of the first process is the longest among all the jobs: shorter jobs stuck behind it experience long waits.
○ Advantages of FCFS
■ Simple
■ Easy
■ First come, first served
○ Disadvantages of FCFS
■ The scheduling method is non-preemptive; the process will run to completion.
■ Due to the non-preemptive nature of the algorithm, long waits (the convoy effect) may occur.
■ Although it is easy to implement, it is poor in performance since the
average waiting time is higher as compared to other scheduling
algorithms.
○ Example
■ Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5 processes with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2, P3 at time 3 and P4 at time 4 in the ready queue. The processes and their respective Arrival and Burst times are given in the following table.
● The Turnaround time and the waiting time are calculated using the following formulas (a short program applying them follows the Gantt chart below):
1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turn Around Time - Burst Time
● The average waiting time is determined by summing the respective waiting times of all the processes and dividing the sum by the total number of processes.



● Solution:

● Avg Waiting Time = 31/5


● (Gantt chart)
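The program below applies the two formulas above to an FCFS schedule. The arrival and burst times are placeholders, since the original table is an image; substitute the table's values to reproduce the 31/5 average quoted above.

    #include <stdio.h>

    int main(void) {
        int n = 5;
        int at[] = {0, 1, 2, 3, 4};   /* hypothetical arrival times */
        int bt[] = {4, 3, 1, 2, 5};   /* hypothetical burst times   */
        int time = 0, total_wt = 0, total_tat = 0;

        /* processes are served strictly in arrival order */
        for (int i = 0; i < n; i++) {
            if (time < at[i]) time = at[i];  /* CPU idles until arrival */
            time += bt[i];                   /* run to completion       */
            int tat = time - at[i];          /* turnaround = CT - AT    */
            int wt  = tat - bt[i];           /* waiting = TAT - BT      */
            total_tat += tat;
            total_wt  += wt;
            printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                   i, time, tat, wt);
        }
        printf("avg waiting = %.2f, avg turnaround = %.2f\n",
               (double)total_wt / n, (double)total_tat / n);
        return 0;
    }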

● Shortest Job First (SJF) Scheduling


○ Till now, we were scheduling the processes according to their arrival time (in
FCFS scheduling).
○ However, the SJF scheduling algorithm schedules the processes according to
their burst time.
○ In SJF scheduling, the process with the lowest burst time, among the list of
available processes in the ready queue, is going to be scheduled next.
○ However, it is very difficult to predict the burst time a process needs, so this algorithm is difficult to implement in practice.
○ Advantages of SJF
■ Maximum throughput
■ Minimum average waiting and turnaround time
○ Disadvantages of SJF
■ May suffer from the problem of starvation
■ It is not practically implementable because the exact burst time of a process can't be known in advance.
○ Example
■ In the following example, there are five jobs named P1, P2, P3, P4 and
P5. Their arrival time and burst time are given in the table below.


■ Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at which the first process arrives).
■ According to the algorithm, the OS schedules the process with the lowest burst time among the available processes in the ready queue.
■ Until now there is only one process in the ready queue, so the scheduler will schedule it regardless of its burst time.
■ It will be executed for 8 units of time.
■ By then, three more processes have arrived in the ready queue, so the scheduler will choose the process with the lowest burst time among them.
■ Among the processes given in the table, P3 will be executed next, since it has the lowest burst time among all the available processes.
■ That is how the procedure goes on in the shortest job first (SJF) scheduling algorithm.


■ Avg Waiting Time = 27/5

● Shortest Remaining Time First (SRTF) Scheduling Algorithm


○ This Algorithm is the preemptive version of SJF scheduling.
○ In SRTF, the execution of the process can be stopped after a certain amount of
time.
○ At the arrival of every process, the short term scheduler schedules the process
with the least remaining burst time among the list of available processes and
the running process.
○ Once all the processes are available in the ready queue, no preemption will be done and the algorithm will work as SJF scheduling.

○ The context of the process is saved in the Process Control Block when the
process is removed from the execution and the next process is scheduled.
○ This PCB is accessed on the next execution of this process.
○ Example
■ In this example, there are six jobs P1, P2, P3, P4, P5 and P6. Their arrival times and burst times are given below in the table.


■ Avg Waiting Time = 24/6
■ The Gantt chart is prepared according to the arrival and burst times given in the table.
1. At time 0, the only available process is P1, with a CPU burst time of 8. As the only process in the list, it is scheduled.
2. The next process arrives at time unit 1. Since the algorithm in use is SRTF, which is preemptive, the current execution is stopped and the scheduler checks for the process with the least remaining burst time. There are now two processes in the ready queue. The OS has executed P1 for one unit of time, so its remaining burst time is 7 units; the burst time of process P2 is 4 units. Hence process P2 is scheduled on the CPU according to the algorithm.
3. The next process, P3, arrives at time unit 2. At this time, the execution of process P2 is stopped and the process with the least remaining burst time is selected. Since process P3 has 2 units of burst time, it is given priority over the others.
4. The next process, P4, arrives at time unit 3. At this arrival, the scheduler stops the execution of P3 and checks which process has the least remaining burst time among the available processes (P1, P2, P3 and P4). P1 and P2 have remaining burst times of 7 units and 3 units respectively.
5. P3 and P4 have a remaining burst time of 1 unit each. Since both are equal, the scheduling is done according to their arrival time: P3 arrived earlier than P4 and is therefore scheduled again. The next process, P5, arrives at time unit 4. By this time, process P3 has completed its execution and is no longer in the list. The scheduler compares the remaining burst times of all the available processes. Since the burst time of process P4 is 1, the least among all, it is scheduled next.
6. The next process, P6, arrives at time unit 5, by which time process P4 has completed its execution. There are now four available processes: P1 (7), P2 (3), P5 (3) and P6 (2). The burst time of P6 is the least among all, so P6 is scheduled. Since all the processes have now arrived, the algorithm will work the same as SJF from here on: P6 will execute to completion, and then the process with the least remaining time will be scheduled.
7. Once all the processes arrive, no preemption is done and the algorithm works as SJF.
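The walkthrough pins down the data: P1 to P6 arrive at times 0 to 5 with burst times 8, 4, 2, 1, 3 and 2. A unit-time SRTF simulation with those values, breaking ties by earlier arrival as in step 5, reproduces the 24/6 average quoted above. This is a sketch, not production scheduler code.

    #include <stdio.h>

    #define N 6

    int main(void) {
        int at[N] = {0, 1, 2, 3, 4, 5};  /* arrival times from the walkthrough */
        int bt[N] = {8, 4, 2, 1, 3, 2};  /* burst times from the walkthrough   */
        int rem[N], total_wt = 0, done = 0;

        for (int i = 0; i < N; i++) rem[i] = bt[i];

        for (int time = 0; done < N; time++) {
            int p = -1;
            for (int i = 0; i < N; i++)      /* pick least remaining burst time;
                                                strict < keeps earlier arrival
                                                on ties                        */
                if (at[i] <= time && rem[i] > 0 &&
                    (p == -1 || rem[i] < rem[p]))
                    p = i;
            if (p == -1) continue;           /* CPU idle                       */
            if (--rem[p] == 0) {             /* run p for one time unit        */
                total_wt += (time + 1) - at[p] - bt[p]; /* waiting = TAT - BT  */
                done++;
            }
        }
        printf("average waiting time = %.2f\n", (double)total_wt / N); /* 4.00 */
        return 0;
    }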

● Round Robin Scheduling Algorithm


○ Round Robin scheduling algorithm is one of the most popular scheduling
algorithms which can actually be implemented in most of the operating
systems.
○ This is the preemptive version of first come first serve scheduling.
○ The Algorithm focuses on Time Sharing.
○ In this algorithm, every process gets executed in a cyclic way.
○ A certain time slice is defined in the system which is called time quantum.
○ Each process present in the ready queue is assigned the CPU for that time
quantum, if the execution of the process is completed during that time then the
process will terminate else the process will go back to the ready queue and
wait for the next turn to complete the execution.
○ Advantages
1. It is actually implementable in the system because it does not depend on the burst time.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All the jobs get a fair allocation of CPU.
○ Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding a perfect time quantum is a very difficult task.

○ Example
■ In the following example, there are six processes named as P1, P2, P3,
P4, P5 and P6. Their arrival time and burst time are given below in the
table. The time quantum of the system is 4 units.


■ Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 units
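A round-robin simulation with time quantum 4 is sketched below. The arrival and burst values are a reconstruction (the original table is an image): P1 to P6 arriving at times 0, 1, 2, 3, 4 and 6 with bursts 5, 6, 3, 1, 5 and 4 reproduce exactly the per-process waiting times 12, 16, 6, 8, 15 and 11 summed above.

    #include <stdio.h>

    #define N 6
    #define QUANTUM 4

    int main(void) {
        int at[N] = {0, 1, 2, 3, 4, 6};  /* reconstructed arrival times */
        int bt[N] = {5, 6, 3, 1, 5, 4};  /* reconstructed burst times   */
        int rem[N], q[64];               /* FIFO ready queue            */
        int head = 0, tail = 0, time = 0, done = 0, next = 0, total_wt = 0;

        for (int i = 0; i < N; i++) rem[i] = bt[i];

        while (done < N) {
            while (next < N && at[next] <= time)  /* admit new arrivals */
                q[tail++] = next++;
            if (head == tail) { time = at[next]; continue; }  /* CPU idle */

            int p = q[head++];
            int slice = rem[p] < QUANTUM ? rem[p] : QUANTUM;
            time += slice;
            rem[p] -= slice;
            while (next < N && at[next] <= time)  /* arrivals during the slice
                                                     enter the queue first   */
                q[tail++] = next++;
            if (rem[p] > 0) {
                q[tail++] = p;                    /* back of the ready queue */
            } else {
                int wt = time - at[p] - bt[p];    /* waiting = TAT - burst   */
                printf("P%d waiting time = %d\n", p + 1, wt);
                total_wt += wt;
                done++;
            }
        }
        printf("average waiting time = %.2f\n", (double)total_wt / N); /* 68/6 */
        return 0;
    }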

● Priority Scheduling
○ In Priority scheduling, there is a priority number assigned to each process.
○ In some systems, the lower the number, the higher the priority.
○ While, in the others, the higher the number, the higher will be the priority.
○ The Process with the higher priority among the available processes is given
the CPU.
○ There are two types of priority scheduling algorithms.
○ One is Preemptive priority scheduling while the other is Non Preemptive
Priority scheduling.
○ The priority number assigned to a process may or may not change.
○ If the priority number doesn't change throughout the life of the process, it is called static priority,
○ while if it keeps changing at regular intervals, it is called dynamic priority.
○ Non Preemptive Priority Scheduling
■ In the Non Preemptive Priority scheduling, The Processes are
scheduled according to the priority number assigned to them.
■ Once the process gets scheduled, it will run till completion.
■ Generally, the lower the priority number, the higher is the priority of
the process.
■ Since priority-number conventions can be confusing, exam questions (for example, in GATE) clearly state which number denotes the highest priority and which the lowest.
■ Example

● In the Example, there are 7 processes P1, P2, P3, P4, P5, P6
and P7. Their priorities, Arrival Time and burst time are given
in the table.


● Avg Waiting Time = (0+11+2+7+12+2+18)/7 = 52/7 units
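The selection rule can be sketched as follows: at each decision point, among the arrived, unfinished processes, run the one with the smallest priority number to completion. The seven arrival/burst/priority values are placeholders (the table in the notes is an image), so the output will not match the 52/7 figure above.

    #include <stdio.h>

    #define N 7

    int main(void) {
        int at[N] = {0, 1, 2, 3, 4, 5, 6};  /* hypothetical arrival times */
        int bt[N] = {3, 5, 1, 7, 4, 2, 6};  /* hypothetical burst times   */
        int pr[N] = {2, 6, 3, 1, 5, 4, 7};  /* hypothetical priorities
                                               (lower number = higher)    */
        int done[N] = {0}, time = 0, finished = 0, total_wt = 0;

        while (finished < N) {
            int p = -1;
            for (int i = 0; i < N; i++)      /* best priority among arrived */
                if (!done[i] && at[i] <= time && (p == -1 || pr[i] < pr[p]))
                    p = i;
            if (p == -1) { time++; continue; }   /* CPU idle               */
            time += bt[p];                       /* non-preemptive: run to
                                                    completion             */
            total_wt += time - at[p] - bt[p];    /* waiting = TAT - burst  */
            done[p] = 1;
            finished++;
        }
        printf("average waiting time = %.2f\n", (double)total_wt / N);
        return 0;
    }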


○ Preemptive Priority Scheduling
■ In Preemptive Priority Scheduling, at the time of arrival of a process in
the ready queue, its Priority is compared with the priority of the other
processes present in the ready queue as well as with the one which is
being executed by the CPU at that point of time.
■ The one with the highest priority among all the available processes will be given the CPU next.
■ The difference between preemptive priority scheduling and non
preemptive priority scheduling is that, in the preemptive priority
scheduling, the job which is being executed can be stopped at the
arrival of a higher priority job.
■ Once all the jobs are available in the ready queue, the algorithm will behave as non-preemptive priority scheduling: the scheduled job will run till completion and no preemption will be done.
■ Example
● There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given.
Their respective priorities, Arrival Times and Burst times are
given in the table below.


● Avg Waiting Time = (0+14+0+7+1+25+16)/7 = 63/7 = 9 units
