400Level_Operating_Systems_Lecture_Notes

An operating system (OS) serves as an intermediary between users and computer hardware, managing resources and providing an environment for program execution. Key functionalities include resource management, process management, storage management, memory management, and security. Various types of operating systems, such as batch, time-sharing, distributed, network, and real-time systems, cater to different computing needs and environments.

(1) An operating system acts as an intermediary between the user of a computer and computer

hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs conveniently and efficiently. OR

An operating system is software that manages computer hardware. The hardware must provide
appropriate mechanisms to ensure the correct operation of the computer system and to prevent
user programs from interfering with the proper operation of the system.

Or

 An operating system is a program that controls the execution of application programs and
acts as an interface between the user of a computer and the computer hardware.

 A more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application
programs.

 An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler, a
memory management module, I/O programs, and a file system.

(2) Features of Operating System – An operating system has the following features:

1. Convenience: An OS makes a computer more convenient to use.


2. Efficiency: An OS allows the computer system resources to be used efficiently.
3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without at the same time
interfering with service.
4. Throughput: An OS should be constructed so that it can give maximum
throughput (the number of tasks completed per unit time).

(3) Major Functionalities of Operating System:


 Resource Management: When parallel access happens, i.e., when multiple users are
accessing the system, the OS works as the resource manager; its responsibility is to
provide the hardware to the users. This decreases the load on the system.
 Process Management: It includes various tasks like scheduling and termination of the
process. It is done with the help of CPU Scheduling algorithms.
 Storage Management: The file system mechanism is used for the management of secondary
storage. NTFS, CIFS, NFS, etc. are some file systems. All the data is stored on various
tracks of hard disks, all of which are managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The operating
system has to keep track of how much memory has been used and by whom. It has to decide
which process needs memory space and how much. OS also has to allocate and deallocate
the memory space.

 Security/Privacy Management: Privacy is also provided by the Operating system by
means of passwords so that unauthorized applications can’t access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to data.

(4) Explanation of how the operating system acts as a user interface:

The layers involved in this view of the operating system as a user interface are:


1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system, system
programs, and application programs. The hardware consists of memory, CPU, ALU, I/O
devices, peripheral devices, and storage devices. The system programs consist of compilers,
loaders, editors, the OS, etc. The application programs consist of business programs and database
programs.

Every computer must have an operating system to run other programs. The operating system
coordinates the use of the hardware among the various system programs and application
programs for various users. It simply provides an environment within which other programs
can do useful work.
The operating system is a set of special programs that run on a computer system and allow it
to work properly. It performs basic tasks such as recognizing input from the keyboard, keeping
track of files and directories on the disk, sending output to the display screen, and controlling
peripheral devices.

OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's resources among the various
users and tasks.
2. It provides an interface between the computer hardware and the programmer that simplifies
and makes feasible the coding and debugging of application programs.

The operating system must support the following tasks:

1. Provide facilities to create and modify programs and data files using an editor.
2. Provide access to the compiler for translating the user program from a high-level language to
machine language.
3. Provide a loader program to move the compiled program code to the computer's memory for
execution.
4. Provide routines that handle the details of I/O programming.

I/O System Management –


The module that keeps track of the status of devices is called the I/O traffic controller. Each
I/O device has a device handler that resides in a separate process associated with that device.
The I/O subsystem consists of

 A memory-management component that includes buffering, caching, and spooling.


 A general device driver interface.
 Drivers for specific hardware devices.
Assembler –
The input to an assembler is an assembly language program. The output is an object program
plus information that enables the loader to prepare the object program for execution. At one
time, the computer programmer had at his disposal a basic machine that interpreted, through
hardware, certain fundamental instructions. He would program this computer by writing a
series of ones and zeros (machine language), and place them into the memory of the machine.
Compiler and Interpreter –
High-level languages, for example FORTRAN, COBOL, ALGOL, and PL/I, are
processed by compilers and interpreters. A compiler is a program that accepts a source
program in a "high-level language" and produces a corresponding object program. An
interpreter is a program that appears to execute a source program as if it were machine
language. The same name (FORTRAN, COBOL, etc.) is often used to designate both a
compiler and its associated language.
Loader –
A Loader is a routine that loads an object program and prepares it for execution. There are
various loading schemes: absolute, relocating, and direct-linking. In general, the loader must
load, relocate and link the object program. The loader is a program that places programs into
memory and prepares them for execution. In a simple loading scheme, the assembler outputs
the machine language translation of a program on a secondary device and a loader places it in
the core. The loader places into memory the machine language version of the user's program
and transfers control to it. Since the loader program is much smaller than the assembler, this
makes more core available to the user's program.

History of Operating system –


The operating system has been evolving through the years.

Types of Operating Systems –


 Batch operating system – runs a sequence of jobs in a program on a computer without manual
intervention.
 Time-sharing operating system – allows many users to share computer resources (maximum
utilization of the resources).
 Distributed operating system – manages a group of different computers and makes them appear to
be a single computer.
 Network operating system – computers running different operating systems can participate
in a common network (it is also used for security purposes).
 Real-time operating system – used where any or all of the jobs have strict time constraints.

Examples of Operating Systems are –


 Windows (GUI-based, PC)
 GNU/Linux (Personal, Workstations, ISP, File and print server, Three-tier client/Server)
 macOS (Macintosh), used for Apple’s personal computers and workstations (MacBook,
iMac).
 Android (Google’s Operating System for smartphones/tablets/smartwatches)
 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

An Operating System performs all the basic tasks like managing files, processes, and memory.
Thus the operating system acts as the manager of all the resources, i.e., the resource manager, and
thereby becomes an interface between the user and the machine.

Types of Operating Systems: Some widely used operating systems are as follows-

1. Batch Operating System –


This type of operating system does not interact with the computer directly. There is an operator
who takes similar jobs having the same requirements and groups them into batches. It is the
responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System:


 Although it is otherwise very difficult to guess the time required for any job to complete,
the processors of batch systems know how long a job will take while it is in the queue
 Multiple users can share the batch systems
 The idle time for the batch system is very low
 It is easy to manage large work repeatedly in batch systems
Disadvantages of Batch Operating System:
 The computer operators should be well acquainted with batch systems
 Batch systems are hard to debug
 It is sometimes costly
 The other jobs will have to wait for an unknown time if any job fails
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.
2. Time-Sharing Operating Systems –
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a
share of CPU time, as they all use a single system. These systems are also known as Multitasking
Systems. The tasks can be from a single user or from different users. The time that each task gets
to execute is called the quantum. After this time interval is over, the OS switches over to the next task.

Advantages of Time-Sharing OS:
 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix, etc.
3. Distributed Operating System –
This type of operating system is a recent advancement in the world of computer
technology and is being widely accepted all over the world, and at a great pace.
Various autonomous interconnected computers communicate with each other using a shared
communication network. The independent systems possess their own memory units and CPUs; these
are referred to as loosely coupled systems or distributed systems. These systems' processors
differ in size and function. The major benefit of working with this type of operating
system is that it is always possible for one user to access files or software which are not
actually present on his own system but on some other system connected within this network, i.e.,
remote access is enabled within the devices connected to that network.

Advantages of Distributed Operating System:
 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available, as they are very expensive. Not only that,
the underlying software is highly complex and not yet well understood
Examples of Distributed Operating System are- LOCUS, etc.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems allow
shared access of files, printers, security, applications, and other networking functions over a
small private network. One more important aspect of Network Operating Systems is that all the
users are well aware of the underlying configuration and of all other users within the network,
their individual connections, etc., and that is why these computers are popularly known
as tightly coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware upgrades are easily integrated into the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.

5. Real-Time Operating System –
These types of OSs serve real-time systems. The time interval required to process and respond
to inputs is very small. This time interval is called response time.

Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
Two types of Real-Time Operating System which are as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even the
shortest possible delay is not acceptable. These systems are built for life-saving uses, like
automatic parachutes or airbags, which are required to be readily available in case of any
accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.

Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and the system, thus more output
from all the resources
 Task Shifting: The time assigned for shifting tasks in these systems is very small. For
example, older systems take about 10 microseconds to shift from one task to another,
while the latest systems take about 3 microseconds.
 Focus on Application: Focus is on running applications, with less importance given to
applications which are waiting in the queue.
 Real-time operating systems in embedded systems: Since the size of the programs is small,
an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and the concentration is kept on very few
applications in order to avoid errors.
 Use of heavy system resources: Sometimes the system resources used are not so good, and they
are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to
write.
 Device drivers and interrupt signals: It needs specific device drivers and interrupt signals
to respond to interrupts as quickly as possible.
 Thread Priority: It is not good to set thread priority, as these systems are very little prone to
switching tasks.
Examples of applications of Real-Time Operating Systems are: scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

An Operating System acts as a communication bridge (interface) between the user and
computer hardware. The purpose of an operating system is to provide a platform on which a
user can execute programs in a convenient and efficient manner.
An operating system is a piece of software that manages the allocation of computer hardware.
The coordination of the hardware must be appropriate to ensure the correct working of the
computer system and to prevent user programs from interfering with the proper working of the
system.
Example: Just like a boss gives orders to his employees, in a similar way we request or pass our
orders to the Operating System. The main goal of the Operating System is to make the
computer environment more convenient to use, and the secondary goal is to use the resources in
the most efficient manner.
What is an Operating System?
An operating system is a program on which application programs are executed and acts as a
communication bridge (interface) between the user and the computer hardware.
The main task an operating system carries out is the allocation of resources and services, such
as the allocation of memory, devices, processors, and information. The operating system also
includes programs to manage these resources, such as a traffic controller, a scheduler, memory
management module, I/O programs, and a file system.
Important functions of an operating System:
1. Security –
The operating system uses password protection and other similar techniques to protect user
data. It also prevents unauthorized access to programs and user data.

2. Control over system performance –


The OS monitors overall system health to help improve performance. It records the response time
between service requests and system responses so as to have a complete view of the system
health. This can help improve performance by providing important information needed to
troubleshoot problems.

3. Job accounting –
The operating system keeps track of the time and resources used by various tasks and users;
this information can be used to track resource usage for a particular user or group of users.

4. Error detecting aids –


The operating system constantly monitors the system to detect errors and avoid the
malfunctioning of a computer system.

5. Coordination between other software and users –


Operating systems also coordinate and assign interpreters, compilers, assemblers, and other
software to the various users of the computer systems.

6. Memory Management –
The operating system manages the Primary Memory or Main Memory. Main memory is
made up of a large array of bytes or words where each byte or word is assigned a certain
address. Main memory is fast storage and it can be accessed directly by the CPU. For a
program to be executed, it should be first loaded in the main memory. An Operating System
performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user
program, which memory addresses have already been allocated, and which memory addresses
have not yet been used. In multiprogramming, the OS decides the order in which processes are
granted access to memory, and for how long. It allocates memory to a process when the
process requests it and deallocates the memory when the process has terminated or is
performing an I/O operation.

7. Processor Management –
In a multi-programming environment, the OS decides the order in which processes have
access to the processor, and how much processing time each process has. This function of
OS is called process scheduling. An Operating System performs the following activities for
processor management.
It keeps track of the status of processes (the program which performs this task is known as the
traffic controller), allocates the CPU (that is, a processor) to a process, and de-allocates the
processor when a process no longer requires it.

8. Device Management –
An OS manages device communication via their respective drivers. It performs the
following activities for device management: it keeps track of all devices connected to the
system, designates a program responsible for every device (known as the Input/Output
controller), decides which process gets access to a certain device and for how long,
allocates devices in an effective and efficient way, and deallocates devices when they are no
longer required.

9. File Management –
A file system is organized into directories for efficient or easy navigation and usage. These
directories may contain other directories and other files. An Operating System carries out
the following file management activities. It keeps track of where information is stored, user
access settings and status of every file, and more… These facilities are collectively known
as the file system.
Moreover, Operating System also provides certain services to the computer system in one form
or the other.
The Operating System provides certain services to the users which can be listed in the
following manner:

1. Program Execution: The Operating System is responsible for the execution of all types of
programs whether it be user programs or system programs. The Operating System utilizes
various resources available for the efficient running of all types of functionalities.
2. Handling Input/Output Operations: The Operating System is responsible for handling all
sorts of inputs, i.e., from the keyboard, mouse, desktop, etc. The Operating System does all
interfacing in the most appropriate manner regarding all kinds of inputs and outputs.
For example, since there is a difference in the nature of peripheral devices such as
mice and keyboards, the Operating System is responsible for handling data transfer between them.
3. Manipulation of File System: The Operating System is responsible for making decisions
regarding the storage of all types of data or files, i.e., on a floppy disk, hard disk, pen drive, etc.
The Operating System decides how the data should be manipulated and stored.
4. Error Detection and Handling: The Operating System is responsible for the detection of
any type of error or bug that can occur during any task. A well-secured OS sometimes also
acts as a countermeasure, preventing any sort of breach of the computer system from any
external source and handling such breaches.
5. Resource Allocation: The Operating System ensures the proper use of all the resources
available by deciding which resource is to be used by whom and for how much time. All such
decisions are taken by the Operating System.
6. Accounting: The Operating System tracks an account of all the functionalities taking place
in the computer system at a time. All the details such as the types of errors that occurred are
recorded by the Operating System.
7. Information and Resource Protection: The Operating System is responsible for using all
the information and resources available on the machine in the most protected way. The
Operating System must foil an attempt from any external resource to hamper any sort of
data or information.
All these services are ensured by the Operating System for the convenience of the users to
make the programming task easier. All different kinds of Operating systems more or less
provide the same services.

The goal of an Operating System: The fundamental goal of an Operating System is to
execute user programs and to make tasks easier. Various application programs along with
hardware systems are used to perform this work. The Operating System is software that manages
and controls the entire set of resources and effectively utilizes every part of a computer.
The figure shows how OS acts as a medium between hardware units and application programs.

Need for Operating System:
 OS as a platform for Application programs:
The operating system provides a platform on top of which other programs, called
application programs, can run. These application programs help the users to perform
specific tasks easily. The OS acts as an interface between the computer and the user. It is designed
in such a manner that it operates, controls, and executes various applications on the
computer.

 Managing Input-Output unit:


Operating System also allows the computer to manage its own resources such as memory,
monitor, keyboard, printer, etc. Management of these resources is required for effective
utilization. The operating system controls the various system input-output resources and
allocates them to the users or programs as per their requirements.
 Consistent user interface:
Operating System provides the user with an easy-to-work user interface, so the
user doesn’t have to learn a different UI every time and can focus on the content
and be productive as quickly as possible. Operating System provides templates,
and UI components to make the working of a computer, really easy for the user.

 Multitasking:
Operating System manages memory and allows multiple programs to run in their own space
and even communicate with each other through shared memory. Multitasking gives users a
good experience as they can perform several tasks on a computer at a time.

Functions of an Operating System:


An operating system has a variety of functions to perform. Some of the prominent functions of
an operating system can be broadly outlined as:

 Processor Management: This deals with the management of the Central Processing Unit
(CPU). The operating system takes care of the allotment of CPU time to different processes.
When a process finishes its CPU processing after executing for the allotted time period, the
CPU is switched to another process; this allotment of CPU time is called scheduling. There are
various types of scheduling techniques that are used by operating systems:
1. Shortest Job First(SJF): The process which needs the shortest CPU time is scheduled
first.
2. Round Robin Scheduling: Each process is assigned a fixed CPU execution time in a
cyclic way.
3. Priority Based Scheduling (Non-Preemptive): In this scheduling, processes are
scheduled according to their priorities, i.e., the highest priority process is scheduled first.
If the priorities of the two processes match, then schedule according to arrival time.
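
To make the scheduling techniques just listed concrete, here is a minimal sketch in C (not taken from the notes; the burst times are made-up values) that computes the average waiting time under non-preemptive Shortest Job First, assuming all jobs arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[] = {6, 8, 3, 4};               /* CPU burst time of each process (illustrative) */
    int n = sizeof burst / sizeof burst[0];

    /* Shortest Job First: sort the bursts in ascending order */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (burst[j] < burst[i]) {
                int t = burst[i]; burst[i] = burst[j]; burst[j] = t;
            }

    int time = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += time;                    /* a job waits until all shorter jobs finish */
        time += burst[i];                      /* the job then runs to completion           */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

With bursts of 3, 4, 6, and 8, the waiting times are 0, 3, 7, and 13, so the program prints 5.75; running the shortest job first minimizes the average waiting time for this workload.
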
 Context Switching: In most multitasking OSs, multiple running processes on the system
may need a change of state in execution. Even if there are multiple processes being executed
at any one point in time, only one task is executed in the foreground, while the others are put
in the background. So the process that is in the foreground is determined by the priority-
based scheduling, and the OS saves the execution state of the previous process before
switching to the current one. This is known as context switching.

 Device Management:
The Operating System communicates with the hardware and the attached devices and
maintains a balance between them and the CPU. This is all the more important because the
CPU processing speed is much higher than that of I/O devices. In order to optimize the CPU
time, the operating system employs two techniques – Buffering and Spooling.

 Buffering:
In this technique, input and output data are temporarily stored in Input Buffer and Output
Buffer. Once the signal for input or output is sent to or from the CPU respectively, the
operating system through the device controller moves the data from the input device to the
input buffer and from the output buffer to the output device. In the case of input, if the
buffer is full, the operating system sends a signal to the program which processes the data
stored in the buffer. When the buffer becomes empty, the program informs the operating
system which reloads the buffer and the input operation continues.
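
A minimal sketch in C of the idea behind an input buffer, assuming a simple ring buffer shared between a device side and a program side (the buffer size, function names, and single-character data are illustrative, not part of the notes):

#include <stdio.h>

#define BUF_SIZE 8

static char buf[BUF_SIZE];
static int head = 0, tail = 0, count = 0;

/* device side: returns 0 if the buffer is full */
int buffer_put(char c) {
    if (count == BUF_SIZE) return 0;          /* buffer full: device must wait          */
    buf[tail] = c;
    tail = (tail + 1) % BUF_SIZE;
    count++;
    return 1;
}

/* program side: returns 0 if the buffer is empty */
int buffer_get(char *c) {
    if (count == 0) return 0;                 /* buffer empty: OS refills it            */
    *c = buf[head];
    head = (head + 1) % BUF_SIZE;
    count--;
    return 1;
}

int main(void) {
    for (char c = 'a'; c <= 'e'; c++) buffer_put(c);   /* device fills the buffer       */
    char c;
    while (buffer_get(&c)) putchar(c);                 /* program drains it: prints abcde */
    putchar('\n');
    return 0;
}
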

 Spooling (Simultaneous Peripheral Operation On Line):


This is a device management technique used for processing different tasks on the same
input/output device. When there are various users on a network sharing the same resource
then it can be a possibility that more than one user might give it a command at the same
point in time. So, the operating system temporarily stores the data of every user on the hard
disk of the computer to which the resource is attached. The individual user need not wait for
the execution process to be completed. Instead, the operating system sends the data from the
hard disk to the resource one by one.
Example: printer

 Memory management:
In a computer, both the CPU and the I/O devices interact with the memory. When a program
needs to be executed it is loaded onto the main memory till the execution is completed.
Thereafter that memory space is freed and is available for other programs. The common
memory management techniques used by the operating system are Partitioning and Virtual
Memory.

 Partitioning:
The total memory is divided into various partitions of the same size or different sizes. This
helps to accommodate a number of programs in the memory. The partitions can be fixed, i.e.,
they remain the same for all the programs in the memory, or variable, i.e., memory is allocated
when a program is loaded into the memory. The latter approach causes less wastage of
memory, but in due course of time it may become fragmented.

 Virtual Memory:
This is a technique used by operating systems which allows the user to load programs
that are larger than the main memory of the computer. In this technique, the program is
executed even if the complete program cannot be loaded inside the main memory, leading to
efficient memory utilization.
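
A minimal sketch in C of the demand-paging idea behind virtual memory: a page table with valid bits, where a page is "loaded" only on first use (the page numbers, table size, and reference string are illustrative assumptions, not part of the notes):

#include <stdio.h>

#define NUM_PAGES 8

int valid[NUM_PAGES];       /* 1 if the page is already in main memory        */
int faults = 0;

void access_page(int p) {
    if (!valid[p]) {        /* page not in memory: a page fault occurs         */
        faults++;
        valid[p] = 1;       /* "load" the page (frame selection is omitted)    */
    }
}

int main(void) {
    int refs[] = {0, 1, 2, 0, 3, 0, 4, 2};    /* pages touched by the program  */
    for (int i = 0; i < 8; i++) access_page(refs[i]);
    printf("page faults = %d\n", faults);      /* only the first touch of each page faults */
    return 0;
}
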

 File Management:
The operating system manages the files, folders, and directory systems on a computer. Any
data on a computer is stored in the form of files and the operating system keeps the
information about all of them using the File Allocation Table (FAT), or a data structure
called an inode in Linux. The FAT stores general information about files like filename, type
(text or binary), size, starting address, and access mode (sequential/indexed
sequential/direct/relative). The file manager of the operating system helps to create, edit,
copy, allocate memory to the files and also updates the FAT. The operating system also
takes care that files are opened with proper access rights to read or edit them.
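
A minimal sketch in C of how a FAT-style table chains together the clusters of one file; the table contents, starting cluster, and END marker are illustrative assumptions rather than the real on-disk FAT layout:

#include <stdio.h>

#define END -1

int main(void) {
    /* fat[i] holds the next cluster of the file, or END at the last cluster */
    int fat[8] = {END, END, 5, END, END, 7, END, 3};
    int start = 2;                        /* starting cluster recorded in the directory entry */

    printf("clusters of file:");
    for (int c = start; c != END; c = fat[c])
        printf(" %d", c);
    printf("\n");                         /* prints: 2 5 7 3 */
    return 0;
}
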

Operating System Services


The main purpose of an operating system is to provide an environment for the execution of
programs. Thus, an operating system provides certain services to programs and to the users of
those programs.
1. Program Execution
 The operating system provides a convenient environment where users can run their programs.
 The operating system performs memory allocation for programs and loads them into an
appropriate location so that they can execute. The users do not need to worry about all these tasks.
2. I/O Operations
 In order to execute a program, it usually requires I/O operations. For example, it may
need to read a file and print the output.
 Users cannot control I/O devices directly while these I/O operations are performed.
 All I/O is performed under the control of the operating system.
3. Communication
 The various processes executing on a system may need to communicate in order to exchange
data or information.
 The operating system provides this communication by using a message-passing facility. In
message passing, packets of information are moved between processes by the operating system.
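
A minimal sketch in C of message passing between two processes, assuming a POSIX system and using a pipe as the message channel (a pipe is one of several IPC facilities an operating system may provide; the message text is illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char msg[] = "hello from parent";
    char buf[64];

    if (pipe(fd) == -1) return 1;             /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                        /* child process: receives the message */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }
    close(fd[0]);                             /* parent process: sends the message   */
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
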

Introduction of Process Management
Program vs Process: A process is a program in execution. For example, when we write a
program in C or C++ and compile it, the compiler creates binary code. The original code and
binary code are both programs. When we actually run the binary code, it becomes a process.
A process is an ‘active’ entity instead of a program, which is considered a ‘passive’ entity. A
single program can create many processes when run multiple times; for example, when we
open a .exe or binary file multiple times, multiple instances begin (multiple processes are
created).
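
A minimal sketch in C (assuming a POSIX system) of the program-versus-process distinction: one compiled program, with fork() turning the single running process into two:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                       /* duplicate the calling process */

    if (pid == 0) {
        printf("child  process, pid = %d\n", getpid());
    } else {
        printf("parent process, pid = %d\n", getpid());
        wait(NULL);                           /* wait for the child to finish  */
    }
    return 0;                                 /* both processes run the same program code */
}

Both processes execute the same binary, but each is a separate "program in execution" with its own process ID.
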
What does a process look like in memory?

Text Section: Contains the program code. A process also includes the current activity,
represented by the value of the Program Counter.
Stack: The stack contains temporary data, such as function parameters, return addresses, and
local variables.
Data Section: Contains the global variables.
Heap Section: Memory dynamically allocated to the process during its run time.
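
A minimal sketch in C showing where each kind of data lives in the sections listed above (the printed addresses will differ from run to run, and the variable names are illustrative only):

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                       /* data section                    */

int main(void) {                              /* the code itself is in the text section */
    int local = 42;                           /* stack                           */
    int *dynamic = malloc(sizeof *dynamic);   /* heap                            */
    *dynamic = 7;

    printf("data : %p\n", (void *)&global_counter);
    printf("stack: %p\n", (void *)&local);
    printf("heap : %p\n", (void *)dynamic);
    printf("text : %p\n", (void *)main);      /* address of the code             */

    free(dynamic);
    return 0;
}
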

Attributes or Characteristics of a Process: A process has the following attributes.

1. Process Id: A unique identifier assigned by the operating system


2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of CPU)
4. Accounting information: Amount of CPU used for process execution, time limits, execution
ID, etc.
5. I/O status information: For example, devices allocated to the process,
open files, etc
6. CPU scheduling information: For example, priority (different processes may have different
priorities; for example, a shorter process may be assigned high priority in shortest job first
scheduling)

All of the above attributes of a process are also known as the context of the process. Every
process has its own process control block (PCB), i.e each process will have a unique PCB. All
of the above attributes are part of the PCB.

States of Process: A process is in one of the following states:


1. New: Newly Created Process (or) being-created process.

2. Ready: After creation process moves to Ready state, i.e. the


process is ready for execution.

3. Run: Currently running process in CPU (only one process at


a time can be under execution in a single processor).

4. Wait (or Block): When a process requests I/O access.

5. Complete (or Terminated): The process completed its execution.

6. Suspended Ready: When the ready queue becomes full, some processes
are moved to the suspended ready state.

7. Suspended Block: When the waiting queue becomes full.

Context Switching: The process of saving the context of one process and loading the context
of another process is known as Context Switching. In simple terms, it is like loading and
unloading the process from the running state to the ready state.

When does context switching happen?
1. When a high-priority process comes to a ready state (i.e. with higher priority than the
running process)
2. An Interrupt occurs
3. A switch between user and kernel mode occurs (though this does not necessarily cause a context switch)
4. Preemptive CPU scheduling is used.

Context Switch vs Mode Switch: A mode switch occurs when the CPU privilege level is
changed, for example when a system call is made or a fault occurs. The kernel works in a more
privileged mode than a standard user task. If a user process wants to access things that are only
accessible to the kernel, a mode switch must occur. The currently executing process need not
be changed during a mode switch.
A mode switch must typically occur for a process context switch to take place. Only the kernel can
cause a context switch.
CPU-Bound vs I/O-Bound Processes: A CPU-bound process requires more CPU time or
spends more time in the running state.
An I/O-bound process requires more I/O time and less CPU time. An I/O-bound process
spends more time in the waiting state.
Exercise:
1. Which of the following need not necessarily be saved on a context switch between
processes? (GATE-CS-2000)
(A) General purpose registers
(B) Translation lookaside buffer
(C) Program counter
(D) All of the above
Answer (B)
Explanation: In a process context switch, the state of the first process must be saved
somehow, so that when the scheduler gets back to the execution of the first process, it can
restore this state and continue. The state of the process includes all the registers that the
process may be using, especially the program counter, plus any other operating system-specific
data that may be necessary. A translation look-aside buffer (TLB) is a CPU cache that memory
management hardware uses to improve virtual address translation speed. A TLB has a fixed
number of slots that contain page table entries, which map virtual addresses to physical
addresses. On a context switch, some TLB entries can become invalid, since the virtual-to-
physical mapping is different. The simplest strategy to deal with this is to completely flush the
TLB.

2. The time taken to switch between user and kernel modes of execution is t1 while the time
taken to switch between two processes is t2. Which of the following is TRUE? (GATE-CS-
2011)
(A) t1 > t2
(B) t1 = t2
(C) t1 < t2
(D) nothing can be said about the relation between t1 and t2.
Answer: (C)

Explanation: Process switching involves a mode switch. Context switching can occur only in
kernel mode.

States of a Process in Operating Systems

States of a process are as follows:

 New (Create) – In this step, the process is about to be created but not yet created; it is the
program present in secondary memory that will be picked up by the OS to create the
process.
 Ready – New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and is
waiting to get the CPU time for its execution. Processes that are ready for execution by the
CPU are maintained in a queue for ready processes.
 Run – The process is chosen by CPU for execution and the instructions within the process are
executed by any one of the available CPU cores.
 Blocked or wait – Whenever the process requests access to I/O or needs input from the user
or needs access to a critical region(the lock for which is already acquired) it enters the blocked
or wait state. The process continues to wait in the main memory and does not require CPU.
Once the I/O operation is completed the process goes to the ready state.
 Terminated or completed – Process is killed as well as PCB is deleted.
 Suspend ready – A process that was initially in the ready state but was swapped out of main
memory (refer to the Virtual Memory topic) and placed onto external storage by the scheduler is
said to be in the suspend ready state. The process will transition back to the ready state whenever
it is again brought into the main memory.
 Suspend wait or suspend blocked – Similar to suspend ready, but applies to a process which was
performing an I/O operation and was moved to secondary memory because of a lack of main
memory. When its work is finished, it may go to the suspend ready state.

CPU and I/O Bound Processes: If the process is intensive in terms of CPU operations then it is
called CPU bound process. Similarly, If the process is intensive in terms of I/O operations then it
is called I/O bound process.

Types of schedulers:
1. Long term – performance – Makes a decision about how many processes should be made to
stay in the ready state; this decides the degree of multiprogramming. Once a decision is taken,
it lasts for a long time, hence it is called the long-term scheduler.
2. Short term – context switching time – The short-term scheduler decides which process is to be
executed next and then calls the dispatcher. The dispatcher is software that moves a process
from ready to running and vice versa; in other words, it performs context switching.
3. Medium term – swapping time – The suspension decision is taken by the medium-term scheduler.
The medium-term scheduler is used for swapping, that is, moving a process from main memory to
secondary memory and vice versa.

Multiprogramming – We have many processes ready to run. There are two types of
multiprogramming:
1. Pre-emption – A process is forcefully removed from the CPU. Pre-emption is also called time
sharing or multitasking.
2. Non-pre-emption – Processes are not removed until they complete their execution.

Degree of multiprogramming – The maximum number of processes that can reside in the ready
state decides the degree of multiprogramming, e.g., if the degree of multiprogramming = 100,
this means 100 processes can reside in the ready state at maximum.

Process Schedulers in Operating System


Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time, and the
loaded processes share the CPU using time multiplexing.
There are three types of process schedulers.

1. Long Term or Job Scheduler :


It brings the new process to the 'Ready State'. It controls the degree of multiprogramming, i.e.,
the number of processes present in the ready state at any point of time. It is important that the
long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes. I/O-bound
tasks are those which use much of their time on input and output operations, while CPU-bound
processes are those which spend their time on the CPU. The job scheduler increases efficiency by
maintaining a balance between the two.

2. Short Term or CPU Scheduler :


It is responsible for selecting one process from the ready state and scheduling it on the running
state. Note: the short-term scheduler only selects the process to schedule; it does not load the
process into the running state. This is where all the scheduling algorithms are used. The CPU scheduler
is responsible for ensuring there is no starvation owing to high-burst-time processes.
The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU
(ready to running state); context switching is done by the dispatcher only. A dispatcher does the
following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.

3. Medium-Term Scheduler:
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance
between the I/O bound and the CPU bound. It reduces the degree of multiprogramming.

Process Table and Process Control Block (PCB)


While creating a process the operating system performs several operations. To identify the
processes, it assigns a process identification number (PID) to each process. As the operating
system supports multi-programming, it needs to keep track of all the processes. For this task, the
process control block (PCB) is used to track the process's execution status. Each block of
memory contains information about the process state, program counter, stack pointer, status of
opened files, scheduling algorithms, etc. All this information is required and must be saved
when the process is switched from one state to another. When the process makes a transition
from one state to another, the operating system must update the information in the process's PCB.
A process control block (PCB) contains information about the process, i.e., registers, quantum,
priority, etc. The process table is an array of PCBs; that means it logically contains a PCB for each
of the current processes in the system.

 Pointer – It is a stack pointer which is required to be saved when the process is switched from
one state to another to retain the current position of the process.
 Process state – It stores the respective state of the process.

 Process number – Every process is assigned with a unique id known as process ID or PID
which stores the process identifier.
 Program counter – It stores the counter which contains the address of the next instruction
that is to be executed for the process.
 Registers – These are the CPU registers, which include the accumulator, base registers, and
general-purpose registers.
 Memory limits – This field contains information about the memory management system used
by the operating system. This may include the page tables, segment tables, etc.
 Open files list – This information includes the list of files opened for a process.
 Miscellaneous accounting and status data – This field includes information about the
amount of CPU used, time constraints, jobs or process number, etc.

The process control block stores the register contents, also known as the execution context of the
processor, saved when the process was blocked from running. This execution context enables the
operating system to restore a process's execution context when the process returns to the
running state. When the process makes a transition from one state to another, the operating
system updates its information in the process's PCB. The operating system maintains pointers
to each process's PCB in a process table so that it can access the PCB quickly.
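
A minimal sketch in C of what a PCB and a process table might look like; the field names and sizes are illustrative assumptions, and a real kernel's PCB contains many more fields:

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process number                      */
    enum proc_state state;           /* process state                       */
    unsigned long   program_counter; /* address of the next instruction     */
    unsigned long   registers[16];   /* saved CPU registers                 */
    unsigned long   stack_pointer;   /* pointer field                       */
    int             open_files[16];  /* open files list (descriptors)       */
    long            cpu_time_used;   /* accounting information              */
};

struct pcb process_table[64];        /* process table: an array of PCBs     */

int main(void) {
    process_table[0].pid = 1;
    process_table[0].state = READY;
    printf("pid %d is in state %d\n",
           process_table[0].pid, process_table[0].state);
    return 0;
}
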

Interrupts
The interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority process requiring interruption of the
current working process. In I/O devices, one of the bus control lines is dedicated for this purpose
and is called the interrupt request line; the routine that runs in response is the Interrupt Service Routine (ISR).
When a device raises an interrupt at, let us say, instruction i, the processor first completes the
execution of instruction i. Then it loads the Program Counter (PC) with the address of the first
instruction of the ISR. Before loading the Program Counter with this address, the address of the
interrupted instruction is moved to a temporary location. Therefore, after handling the interrupt,
the processor can continue with instruction i+1.
While the processor is handling the interrupts, it must inform the device that its request has been
recognized so that it stops sending the interrupt request signal. Also, saving the registers so that
the interrupted process can be restored in the future, increases the delay between the time an
interrupt is received and the start of the execution of the ISR. This is called Interrupt Latency.

Hardware Interrupts:

In a hardware interrupt, all the devices are connected to the Interrupt Request Line. A single
request line is used for all the n devices. To request an interrupt, a device closes its associated
switch. When a device requests an interrupt, the value of INTR is the logical OR of the requests
from individual devices.
The sequence of events involved in handling an IRQ:
1. Devices raise an IRQ.
2. The processor interrupts the program currently being executed.
3. The device is informed that its request has been recognized and the device deactivates the
request signal.

4. The requested action is performed.
5. An interrupt is enabled and the interrupted program is resumed.
Handling Multiple Devices:
When more than one device raises an interrupt request signal, then additional information is
needed to decide which device to be considered first. The following methods are used to decide
which device to select: Polling, Vectored Interrupts, and Interrupt Nesting. These are explained
as following below.
1. Polling: In polling, the first device encountered with the IRQ bit set is the device that is to be
serviced first. Appropriate ISR is called to service the same. It is easy to implement but a lot
of time is wasted by interrogating the IRQ bit of all devices.
2. Vectored Interrupts: In vectored interrupts, a device requesting an interrupt identifies itself
directly by sending a special code to the processor over the bus. This enables the processor to
identify the device that generated the interrupt. The special code can be the starting address of
the ISR or where the ISR is located in memory and is called the interrupt vector.
3. Interrupt Nesting: In this method, the I/O device is organized in a priority structure.
Therefore, an interrupt request from a higher-priority device is recognized, whereas a request
from a lower-priority device is not. The processor accepts interrupts only from
devices/processes having a priority higher than its own current priority.
The processor's priority is encoded in a few bits of the PS (Processor Status) register. It can be changed by
program instructions that write into the PS. The processor is in supervised (privileged) mode only while
executing OS routines. It switches to user mode before executing application programs.
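
A minimal sketch in C of the vectored-interrupt idea: the vector number supplied by a device indexes a table of ISR addresses. The devices, handlers, and vector numbers here are invented for illustration; real ISRs run in privileged mode inside the kernel.

#include <stdio.h>

typedef void (*isr_t)(void);

void keyboard_isr(void) { printf("keyboard interrupt serviced\n"); }
void disk_isr(void)     { printf("disk interrupt serviced\n");     }

isr_t interrupt_vector[] = { keyboard_isr, disk_isr };   /* the interrupt vector table */

void handle_interrupt(int vector) {
    /* save the interrupted program's state here (PC, registers), ...            */
    interrupt_vector[vector]();          /* jump to the ISR for the requesting device */
    /* ... then restore the saved state and resume the interrupted program       */
}

int main(void) {
    handle_interrupt(1);                 /* device 1 (disk) raised an IRQ        */
    handle_interrupt(0);                 /* device 0 (keyboard) raised an IRQ    */
    return 0;
}
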
Thread in Operating System
What is a Thread?
A thread is a path of execution within a process. A process can contain multiple threads.
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc. More advantages of multithreading are discussed below
Process vs Thread?
The primary difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result threads share with
other threads their code section, data section, and OS resources (like open files and signals). But,
like a process, a thread has its own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If the process is divided into multiple threads, then when one thread completes
its execution, its output can be immediately returned.
2. Faster context switch: Context switch time between threads is lower compared to process
context switch. Process context switching requires more overhead from the CPU.
3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process,
then we can schedule multiple threads on multiple processors. This will make process execution
faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads within
a process.

Note: stack and registers can’t be shared among the threads. Each thread has its own stack and
registers.
5. Communication: Communication between multiple threads is easier, as the threads share a
common address space, while for processes we have to follow some specific communication
technique for communication between two processes.
6. Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time is
increased, thus increasing the throughput of the system.
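
A minimal sketch in C (assuming POSIX threads; compile with -pthread) showing two threads of one process sharing the same global variable, which illustrates the resource sharing and easy communication described above; the variable names and values are illustrative:

#include <stdio.h>
#include <pthread.h>

int shared = 0;                               /* shared by all threads of the process  */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    pthread_mutex_lock(&lock);                /* protect the shared data               */
    shared += *(int *)arg;
    pthread_mutex_unlock(&lock);
    return NULL;                              /* each thread keeps its own stack and registers */
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("shared = %d\n", shared);          /* prints 3: both updates are visible    */
    return 0;
}
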

Types of Threads
There are two types of threads:
 User Level Thread
 Kernel Level Thread
Threads and its types in Operating System
A thread is a single sequential stream of execution within a process. Threads have the same
properties as the process, so they are called lightweight processes. Threads are executed one
after another but give the illusion that they are executing in parallel. Each thread has different
states. Each thread has
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other as they share the code, data, OS resources etc.
Similarity between Threads and Processes –
 Only one thread or process is active at a time
 Within a process, both execute sequentially
 Both can create children
Differences between Threads and Processes –
 Threads are not independent, processes are.
 Threads are designed to assist each other, processes may or may not do it

Types of Threads:
1. User Level Thread (ULT) – Implemented in a user-level library; these threads are not created
using system calls. Thread switching does not need to call the OS or cause an interrupt to the
kernel. The kernel does not know about user-level threads and manages the process as if it were
a single-threaded process.
 Advantages of ULT –
 Can be implemented on an OS that doesn’t support multithreading.
 Simple representation since thread has only program counter, register set, stack
space.
 Simple to create since no intervention of the kernel is needed.
 Thread switching is fast since no OS calls need to be made.
 Limitations of ULT –
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead of a
thread table in each process, the kernel itself has a thread table (a master one) that keeps track of
all the threads in the system. In addition, the kernel also maintains the traditional process table to
keep track of the processes. The OS kernel provides system calls to create and manage threads.
 Advantages of KLT –
 Since kernel has full knowledge about the threads in the system, scheduler may
decide to give more time to processes having large number of threads.
 Good for applications that frequently block.
 Limitations of KLT –
 Slow and inefficient, since thread operations require kernel involvement.
 It requires a thread control block, so there is additional overhead.
Summary:
1. With ULTs, each process keeps track of its own threads using a thread table.
2. With KLTs, the kernel maintains a thread table (of TCBs) as well as the process table (of PCBs).

Difference between Process and Thread


Process: Processes are basically the programs that are dispatched from the ready state and are
scheduled on the CPU for execution. A PCB (Process Control Block) represents each process.
A process can create other processes, which are known as child processes. The process takes
more time to terminate, and it is isolated, meaning it does not share memory with any other
process.
The process can have the following states: new, ready, running, waiting, terminated, and
suspended.
Thread: A thread is a segment of a process, which means a process can have multiple threads,
and these multiple threads are contained within a process. A thread has three states: Running,
Ready, and Blocked.
A thread takes less time to terminate as compared to a process, but unlike processes, threads
are not isolated from one another.

Process vs Thread

Difference between Process and Thread:


1. Process: A process means any program that is in execution.
   Thread: A thread means a segment of a process.

2. Process: The process takes more time to terminate.
   Thread: The thread takes less time to terminate.

3. Process: It takes more time for creation.
   Thread: It takes less time for creation.

4. Process: It also takes more time for context switching.
   Thread: It takes less time for context switching.

5. Process: The process is less efficient in terms of communication.
   Thread: The thread is more efficient in terms of communication.

6. Process: Multiprogramming holds the concept of multiple processes.
   Thread: We do not need multiple programs in action for multiple threads, because a single
   process consists of multiple threads.

7. Process: The process is isolated.
   Thread: Threads share memory.

8. Process: The process is called a heavyweight process.
   Thread: A thread is lightweight, as each thread in a process shares code, data, and resources.

9. Process: Process switching uses an interface in the operating system.
   Thread: Thread switching does not require calling the operating system or causing an
   interrupt to the kernel.

10. Process: If one process is blocked, then it will not affect the execution of other processes.
    Thread: If a user-level thread is blocked, then all other user-level threads are blocked.

11. Process: The process has its own Process Control Block, stack, and address space.
    Thread: A thread has the parent process's PCB, its own Thread Control Block and stack, and
    a common (shared) address space.

12. Process: Changes to the parent process do not affect child processes.
    Thread: Since all threads of the same process share the address space and other resources,
    any changes to the main thread may affect the behavior of the other threads of the process.

13. Process: A system call is involved in its creation.
    Thread: No system call is involved; it is created using APIs.

14. Process: Processes do not share data with each other.
    Thread: Threads share data with each other.

Note: In some cases, where a thread is processing a bigger workload than a process's
workload, the thread may take more time to terminate. But this is an extremely rare situation
and has little chance of occurring.

CPU Scheduling in Operating Systems


Scheduling of processes/work is done to finish the work on time. CPU scheduling is a process
that allows one process to use the CPU while another process is delayed (in standby) due to the
unavailability of some resource such as I/O, thus making full use of the CPU. The purpose of
CPU scheduling is to make the system more efficient, faster, and fairer.

CPU Scheduling Algorithms in Operating System


Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term (CPU) scheduler, which chooses from among the processes in memory that are ready to execute and allocates the CPU to one of them.
What is a process?

In computing, a process is the instance of a computer program that is being executed by one
or many threads. It contains the program code and its activity. Depending on the operating
system (OS), a process may be made up of multiple threads of execution that execute instructions
concurrently.
How is process memory used for efficient operation?
The process memory is divided into four sections for efficient operation:
 The text section consists of the compiled program code, read in from non-volatile storage when the program is launched.
 The data section is made up of global and static variables, allocated and initialized before the main program executes.
 The heap is used for flexible, or dynamic, memory allocation and is managed by calls to new, delete, malloc, free, etc.
 The stack is used for local variables. Space on the stack is reserved for local variables when they are declared.

What is Process Scheduling?


Process Scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
 Long term or Job Scheduler
 Short term or CPU Scheduler
 Medium-term Scheduler
Why do we need to schedule processes?
 Scheduling is important in many different computer environments. One of the most important areas is deciding which programs will run on the CPU. This task is handled by the Operating System (OS) of the computer, and there are many different ways in which we can choose to configure programs.
 Process scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which gives programs shorter response times.
 Considering that there may be hundreds of programs that need to run, the OS must launch a program, stop it, switch to another program, and so on. The way the OS switches the CPU from one program to another is called "context switching". If the OS keeps context-switching programs in and out of the provided CPUs, it can give the user the illusion that he or she can run any programs he or she wants, all at once.
 So, now that we know we can run one program at a time on a given CPU, and we know we can use a context switch to swap it out for another, how do we choose which program to run, and in what order?
 That's where scheduling comes in! First, you determine a metric, for example "the amount of time until completion". We will define this metric as "the time interval between when a task enters the system and when it is completed". Second, you choose a scheduling policy that minimizes this metric. We want our tasks to finish as soon as possible.
What is the need for CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue.
In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, then most of the time the CPU remains idle. The function of an effective scheduler is to improve resource utilization.
If most processes keep switching from the running state to the waiting state, the CPU may stay under-utilized and the system may even fail to make progress. So, to minimize this waste, the OS needs to schedule tasks so as to make full use of the CPU and avoid the possibility of deadlock.
Objectives of Process Scheduling Algorithm:
 Utilization of CPU at maximum level. Keep CPU as busy as possible.
 Allocation of CPU should be fair.
 Throughput should be Maximum. i.e. Number of processes that complete their execution
per time unit should be maximized.
 Minimum turnaround time, i.e. time taken by a process to finish execution should be the
least.

 There should be a minimum waiting time, and processes should not starve in the ready queue.
 Minimum response time: the time until a process produces its first response should be as short as possible.
What are the different terminologies to take care of in any CPU Scheduling algorithm?
 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
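To make these definitions concrete, here is a minimal Python sketch that applies the two formulas above to a single process whose completion time is already known (for example, read off a Gantt chart). The function name and argument order are illustrative, not part of any standard API.

```python
# A hedged sketch: compute turnaround and waiting time from a known completion time.
# The argument order (arrival, burst, completion) is illustrative, not from the notes.

def turnaround_and_waiting(arrival, burst, completion):
    """Return (turnaround_time, waiting_time) for one process."""
    turnaround = completion - arrival        # Turn Around Time = Completion Time - Arrival Time
    waiting = turnaround - burst             # Waiting Time = Turn Around Time - Burst Time
    return turnaround, waiting

# Example: a process arriving at 2 ms with a 6 ms burst that completes at 15 ms
print(turnaround_and_waiting(2, 6, 15))      # -> (13, 7)
```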
Things to take care while designing a CPU Scheduling algorithm?
Different CPU Scheduling algorithms have different structures and the choice of a particular
algorithm depends on a variety of factors. Many conditions have been raised to compare CPU
scheduling algorithms.
The criteria include the following:
 CPU utilization: The main purpose of any CPU algorithm is to keep the CPU as busy as
possible. Theoretically, CPU usage can range from 0 to 100 but in a real-time system, it varies
from 40 to 90 percent depending on the system load.
 Throughput: The average CPU performance is the number of processes performed and
completed during each unit. This is called throughput. The output may vary depending on the
length or duration of the processes.
 Turnaround Time: for a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the total amount of time spent waiting for memory, waiting in the ready queue, using the CPU, and waiting for I/O.
 Waiting Time: The Scheduling algorithm does not affect the time required to complete the
process once it has started performing. It only affects the waiting time of the process i.e. the
time spent in the waiting process in the ready queue.
 Response Time: in an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are being shown to the user. Therefore, another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
 Preemptive Scheduling: Preemptive scheduling is used when a process switches from
running state to ready state or from the waiting state to the ready state.
 Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process
terminates , or when a process switches from running state to waiting state.

Different types of CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:
1. First Come First Serve:
FCFS is considered to be the simplest of all operating system scheduling algorithms. The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented by using a FIFO queue.
Characteristics of FCFS:
 FCFS is a non-preemptive CPU scheduling algorithm: once a process is given the CPU, it keeps it until its CPU burst completes.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS:
 Easy to implement
 First come, first serve method
Disadvantages of FCFS:
 FCFS suffers from Convoy effect.
 The average waiting time is much higher than the other algorithms.
 FCFS is very simple and easy to implement, but as a consequence it is not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on First come, First serve Scheduling.
2. Shortest Job First(SJF):
Shortest job first (SJF) is a scheduling process that selects the waiting process with the smallest
execution time to execute next. This scheduling method may or may not be preemptive.
Significantly reduces the average waiting time for other processes waiting to be executed. The
full form of SJF is Shortest Job First.

Characteristics of SJF:
 Shortest Job first has the advantage of having a minimum average waiting time among
all operating system scheduling algorithms.
 Each job is associated with an estimate of the time it needs to complete, which the scheduler uses to make its selection.
 It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of ageing.
Advantages of Shortest Job first:
 As SJF reduces the average waiting time thus, it is better than the first come first serve
scheduling algorithm.
 SJF is generally used for long term scheduling
Disadvantages of SJF:
 One of the demerits of SJF is starvation.
 Many times it becomes complicated to predict the length of the upcoming CPU request

3. Longest Job First(LJF):


Longest Job First(LJF) scheduling process is just opposite of shortest job first (SJF), as the
name suggests this algorithm is based upon the fact that the process with the largest burst time is
processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
 Among all the processes waiting in a waiting queue, CPU is always assigned to the process
having largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LJF scheduling exists in both a non-preemptive form (LJF) and a preemptive form (Longest Remaining Time First).
Advantages of LJF:
 No other task can schedule until the longest job or process executes completely.
 All the jobs or processes finish at the same time approximately.
Disadvantages of LJF:
 Generally, the LJF algorithm gives a very high average waiting time and average turn-around
time for a given set of processes.

 This may lead to convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Longest job first scheduling.
4. Priority Scheduling:
Preemptive Priority CPU Scheduling is a preemptive CPU scheduling algorithm that works based on the priority of a process. In this algorithm, the scheduler assigns a priority to each process, and the process with the highest priority must be executed first. In case of a conflict, that is, when more than one process has the same priority, the tie is broken on the basis of the FCFS (First Come First Serve) rule.
Characteristics of Priority Scheduling:
 Schedules tasks based on priority.
 When a higher-priority job arrives while a lower-priority task is executing, the higher-priority job takes its place and the lower-priority task is suspended until the higher-priority execution is complete.
 The lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling:
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling:
 One of the most common demerits of the preemptive priority CPU scheduling algorithm is starvation: a low-priority process may have to wait a very long time before it is scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Priority Preemptive Scheduling algorithm.
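Since the full implementation is only referenced here, below is a minimal, hedged Python simulation of preemptive priority scheduling, assuming lower numbers mean higher priority and that ties are broken by arrival time (FCFS). The process tuples and the function name are illustrative values, not taken from the notes.

```python
# A hedged sketch of preemptive priority scheduling (lower number = higher priority).
# Process tuples (name, arrival, burst, priority) hold illustrative values.

def priority_preemptive(processes):
    """Simulate one time unit at a time; priority ties are broken by arrival (FCFS)."""
    remaining = {name: burst for name, _, burst, _ in processes}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in processes if p[0] in remaining and p[1] <= time]
        if not ready:                 # CPU is idle until the next process arrives
            time += 1
            continue
        name = min(ready, key=lambda p: (p[3], p[1]))[0]
        remaining[name] -= 1          # run the chosen process for one time unit
        time += 1
        if remaining[name] == 0:
            completion[name] = time
            del remaining[name]
    return completion

procs = [("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 1, 3)]
print(priority_preemptive(procs))     # -> {'P2': 4, 'P1': 7, 'P3': 8}
```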
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed
time slot. It is the preemptive version of First come First Serve CPU Scheduling algorithm.
Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
 It’s simple, easy to use, and starvation-free as all processes get the balanced CPU allocation.
 It is one of the most widely used methods in CPU scheduling.
 It is considered preemptive as the processes are given to the CPU for a very limited time.
Advantages of Round robin:
 Round robin seems to be fair as every process gets an equal share of CPU.
 The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the Round robin Scheduling algorithm.
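The following is a small, hedged Python sketch of Round Robin with a fixed time quantum. The process list, quantum value and function name are illustrative; arrivals are assumed to be given in non-decreasing order of arrival time.

```python
from collections import deque

# A hedged Round Robin sketch; processes are (name, arrival, burst), sorted by arrival.
def round_robin(processes, quantum):
    time, i, completion = 0, 0, {}
    queue = deque()
    remaining = {name: burst for name, _, burst in processes}
    while queue or i < len(processes):
        if not queue:                       # jump ahead if the ready queue is empty
            time = max(time, processes[i][1])
        while i < len(processes) and processes[i][1] <= time:
            queue.append(processes[i][0])   # admit everything that has arrived
            i += 1
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(processes) and processes[i][1] <= time:
            queue.append(processes[i][0])   # arrivals during this slice enter first
            i += 1
        if remaining[name]:
            queue.append(name)              # an unfinished process goes to the back
        else:
            completion[name] = time
    return completion

print(round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2))
# -> {'P3': 5, 'P2': 8, 'P1': 9}
```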

6. Shortest Remaining Time First:

Shortest remaining time first is the preemptive version of the Shortest job first which we have
discussed earlier where the processor is allocated to the job closest to completion. In SRTF the
process with the smallest amount of time remaining until completion is selected to execute.
Characteristics of Shortest remaining time first:
 The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
 The context switch is done a lot more times in SRTF than in SJF and consumes the CPU’s
valuable time for processing. This adds up to its processing time and diminishes its advantage
of fast processing.
Advantages of SRTF:
 In SRTF the short processes are handled very fast.
 The system also requires very little overhead since it only makes a decision when a process
completes or a new process is added.
Disadvantages of SRTF:
 Like the shortest job first, it also has the potential for process starvation.
 Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the shortest remaining time first.
7. Longest Remaining Time First:
The longest remaining time first is a preemptive version of the longest job first scheduling
algorithm. This scheduling algorithm is used by the operating system to program incoming
processes for use in a systematic way. This algorithm schedules those processes first which have
the longest processing time remaining for completion.
Characteristics of longest remaining time first:
 Among all the processes waiting in a waiting queue, the CPU is always assigned to the
process having the largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LJF CPU Scheduling can be of both preemptive and non-preemptive types.
Advantages of LRTF:
 No other process can execute until the longest task executes completely.
 All the jobs or processes finish at the same time approximately.
Disadvantages of LRTF:
 This algorithm gives a very high average waiting time and average turn-around time for a
given set of processes.
 This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on the longest remaining time first.
8. Highest Response Ratio Next:
Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and it is
considered as one of the most optimal scheduling algorithms. The name itself states that we need
to find the response ratio of all available processes and select the one with the highest Response
Ratio. A process once selected will run till completion.
Characteristics of Highest Response Ratio Next:
 The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.
 HRRN is considered as the modification of Shortest Job First to reduce the problem
of starvation.
 In comparison with SJF, during the HRRN scheduling algorithm, the CPU is allotted to the
next process which has the highest response ratio and not to the process having less burst
time.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
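As a quick illustration of the formula, the sketch below picks the next process under HRRN from a ready list. The tuple layout (name, waiting time so far, burst time) and the sample values are assumptions made for this example only.

```python
# A hedged sketch: pick the next process under HRRN.
# Each entry (name, waiting_so_far, burst) uses illustrative values.

def next_hrrn(ready):
    """Return the process with the highest response ratio (W + S) / S."""
    return max(ready, key=lambda p: (p[1] + p[2]) / p[2])

ready = [("P1", 9, 3), ("P2", 4, 2), ("P3", 1, 6)]
print(next_hrrn(ready))      # -> ('P1', 9, 3), since (9 + 3) / 3 = 4.0 is the highest ratio
```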

Advantages of HRRN:
 HRRN Scheduling algorithm generally gives better performance than the shortest job
first Scheduling.
 There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN:
 A pure implementation of HRRN scheduling is not practical, because it is generally not possible to know the burst time of every job in advance.
 In this scheduling, there may occur an overload on the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Highest Response Ratio Next.
9. Multiple Queue Scheduling:
Processes in the ready queue can be divided into different classes where each class has its own
scheduling needs. For example, a common division is a foreground (interactive) process and
a background (batch) process. These two classes have different scheduling needs. For this kind
of situation Multilevel Queue Scheduling is used.

The process classes commonly used in such a scheme are as follows:


 System Processes: the operating system itself has processes to run; these are generally termed system processes.
 Interactive Processes: an interactive process is one that interacts with the user and therefore needs short response times.
 Batch Processes: Batch processing is generally a technique in the Operating system that
collects the programs and data together in the form of a batch before the processing starts.
Advantages of multilevel queue scheduling:
 The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of multilevel queue scheduling:
 Starvation problem
 It is inflexible in nature

Multilevel Feedback Queue Scheduling::


Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling is like Multilevel Queue Scheduling, but here processes can move between the queues. This makes it much more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
 In a plain multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system and are not allowed to move between queues.
 Because the processes are permanently assigned to a queue, that setup has the advantage of low scheduling overhead,
 but the disadvantage of being inflexible; the feedback variant removes this restriction.
Advantages of Multilevel feedback queue scheduling:
 It is more flexible
 It allows different processes to move between different queues
Disadvantages of Multilevel feedback queue scheduling:
 It also produces CPU overheads
 It is the most complex algorithm.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed
article on Multilevel Feedback Queue Scheduling.
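Because MLFQ has many tunable details, the sketch below shows only one simplified variant: three queues with time quanta of 2 and 4 on the top two levels, FCFS at the bottom, demotion on quantum expiry, and all processes assumed to arrive at time 0 (real MLFQ implementations also handle arrivals and periodic priority boosts). All names and values are illustrative.

```python
from collections import deque

# A hedged 3-level feedback queue sketch: quanta of 2 and 4 on the top two levels,
# run-to-completion at the bottom, demotion on quantum expiry, arrivals ignored.
def mlfq(processes, quanta=(2, 4)):
    queues = [deque(processes), deque(), deque()]
    time, completion = 0, {}
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)   # highest non-empty level
        name, remaining = queues[level].popleft()
        slice_ = remaining if level == len(queues) - 1 else min(quanta[level], remaining)
        time += slice_
        remaining -= slice_
        if remaining == 0:
            completion[name] = time
        else:                                                # used its full quantum: demote
            queues[min(level + 1, len(queues) - 1)].append((name, remaining))
    return completion

print(mlfq([("P1", 7), ("P2", 3), ("P3", 10)]))   # -> {'P2': 11, 'P1': 16, 'P3': 20}
```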
Comparison between various CPU Scheduling algorithms
Here is a brief comparison between different CPU scheduling algorithms:

FCFS
 Allocation: the CPU is allocated according to the arrival time of the processes.
 Complexity: simple and easy to implement.
 Average waiting time (AWT): large.
 Preemption: No. Starvation: No.
 Performance: slow.

SJF
 Allocation: based on the lowest CPU burst time (BT).
 Complexity: more complex than FCFS.
 AWT: smaller than FCFS.
 Preemption: No. Starvation: Yes.
 Performance: minimum average waiting time.

LJFS
 Allocation: based on the highest CPU burst time (BT).
 Complexity: more complex than FCFS.
 AWT: depends on some measures, e.g. arrival time, process size, etc.
 Preemption: No. Starvation: Yes.
 Performance: large turn-around time.

LRTF
 Allocation: same as LJFS, the CPU is allocated based on the highest CPU burst time (BT), but it is preemptive.
 Complexity: more complex than FCFS.
 AWT: depends on some measures, e.g. arrival time, process size, etc.
 Preemption: Yes. Starvation: Yes.
 Performance: preference is given to the longer jobs.

SRTF
 Allocation: same as SJF, the CPU is allocated based on the lowest CPU burst time (BT), but it is preemptive.
 Complexity: more complex than FCFS.
 AWT: depends on some measures, e.g. arrival time, process size, etc.
 Preemption: Yes. Starvation: Yes.
 Performance: preference is given to the short jobs.

RR (Round Robin)
 Allocation: according to the order in which the processes arrive, with a fixed time quantum (TQ).
 Complexity: depends on the time quantum size.
 AWT: large as compared to SJF and Priority scheduling.
 Preemption: Yes. Starvation: No.
 Performance: each process gets a fair, fixed share of CPU time.

Priority (preemptive)
 Allocation: according to the priority; the higher-priority task executes first.
 Complexity: less complex.
 AWT: smaller than FCFS.
 Preemption: Yes. Starvation: Yes.
 Performance: good, but contains a starvation problem.

Priority (non-preemptive)
 Allocation: according to the priority, with monitoring of newly incoming higher-priority jobs.
 Complexity: less complex than preemptive Priority.
 AWT: smaller than FCFS.
 Preemption: No. Starvation: Yes.
 Performance: most beneficial for batch systems.

MLQ
 Allocation: according to the queue in which the process resides (higher-priority queues are served first).
 Complexity: more complex than the priority scheduling algorithms.
 AWT: smaller than FCFS.
 Preemption: No. Starvation: Yes.
 Performance: good, but contains a starvation problem.

MFLQ
 Allocation: according to the process in the higher-priority queue; processes can move between queues.
 Complexity: the most complex, but its complexity depends on the TQ size.
 AWT: smaller than all other scheduling types in many cases.
 Preemption: No. Starvation: No.
 Performance: good.
Exercise:
1. Consider a system which requires 40-time units of burst time. The Multilevel feedback queue
scheduling is used and time quantum is 2 unit for the top queue and is incremented by 5 unit at
each level, then in what queue the process will terminate the execution?
2. Which of the following is false about SJF? S1: It causes minimum average waiting time. S2: It can cause starvation. (A) Only S1 (B) Only S2 (C) Both S1 and S2 (D) Neither S1 nor S2
Answer: (D). S1 is true: SJF always gives the minimum average waiting time. S2 is true: SJF can cause starvation. Since both statements are true, neither is false.
3. Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
(GATE-CS-2011)

Process Arrival time Burst Time

P0 0 ms 9 ms

P1 1 ms 4 ms

P2 2 ms 9 ms

1. The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only
at arrival or completion of processes. What is the average waiting time for the three
processes? (A) 5.0 ms (B) 4.33 ms (C) 6.33 (D) 7.33 Solution : Answer: – (A) Process P0 is
allocated processor at 0 ms as there is no other process in the ready queue. P0 is preempted
after 1 ms as P1 arrives at 1 ms and burst time for P1 is less than remaining time of P0. P1
runs for 4ms. P2 arrived at 2 ms but P1 continued as burst time of P2 is longer than P1. After
P1 completes, P0 is scheduled again as the remaining time for P0 is less than the burst time of
P2. P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms. So average waiting time is
(0+4+11)/3 = 5.
2. Consider the following set of processes, with the arrival times and the CPU-burst times given
in milliseconds (GATE-CS-2004)

Process Arrival time Burst Time

P1 0 ms 5 ms

P2 1 ms 3 ms

P3 2 ms 3 ms

P4 4 ms 1 ms

1. What is the average turnaround time for these processes with the preemptive shortest
remaining processing time first (SRPT) algorithm ? (A) 5.50 (B) 5.75 (C) 6.00 (D) 6.25
Answer (A) Solution: The following is Gantt Chart of execution

Gantt chart: P1 (0–1), P2 (1–4), P4 (4–5), P3 (5–8), P1 (8–12)

Turn Around Time = Completion Time – Arrival Time. Average Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50
2. An operating system uses the Shortest Remaining Time First (SRTF) process scheduling
algorithm. Consider the arrival times and execution times for the following processes:

Process Arrival Time Burst Time

P1 0 ms 20 ms

P2 15 ms 25 ms

P3 30 ms 10 ms

P4 45 ms 15 ms

1. What is the total waiting time for process P2? (A) 5 (B) 15 (C) 40 (D) 55
Answer (B). At time 0, P1 is the only process, so P1 runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only process, so it runs for 10 time units. At time 30, P3 arrives and has the shortest remaining time, so it runs for 10 time units. At time 40, P2 runs as it is the only process; P2 runs for 5 time units. At time 45, P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units. P2 completes its execution at time 55.
Total waiting time for P2
= Completion time – (Arrival time + Execution time)
= 55 – (15 + 25)
= 15
FCFS Scheduling Full Form
FCFS stands for First Come First Serve. In the FCFS scheduling algorithm, the job that arrived
first in the ready queue is allocated to the CPU and then the job that came second, and so on. We
can say that the ready queue acts as a FIFO (First In First Out) queue thus the arriving
jobs/processes are placed at the end of the queue.
FCFS is a non-preemptive scheduling algorithm as a process holds the CPU until it either
terminates or performs I/O. Thus, if a longer job has been assigned to the CPU then many shorter
jobs after it will have to wait. This algorithm is used in most batch operating systems.

Characteristics:

 It follows the non-preemptive approach i.e. once a process has control over the CPU it will not
preempt until it terminates.
 The criteria for the selection of processes is arrival time. The dispatcher selects the first job in
the ready queue and this job runs to completion of its CPU burst.
 The average waiting time is very high so not optimal and thus gives poor performance.

Advantages:

 FCFS algorithm is simple, easy to implement into any preexisting system, and easy to
understand.
 Better for processes with large burst time as there is no context switch involved between
processes.
 It is a fair algorithm as priority is not involved, processes that arrive first get served first.

Disadvantages:

 Convoy effect occurs i.e. all small processes have to wait for one large process to get off the
CPU.
 It is non-preemptive, the process will not release the CPU until it finishes its task and
terminates.
 It is not appropriate for interactive systems as it cannot guarantee a short response time.
 Average waiting time is high and the turnaround time is unpredictable which leads to poor
performance.

First Come, First Serve – CPU Scheduling | (Non-preemptive)


FCFS Scheduling:
Simplest CPU scheduling algorithm that schedules according to arrival times of processes. First
come first serve scheduling algorithm states that the process that requests the CPU first is
allocated the CPU first. It is implemented by using the FIFO queue. When a process enters the
ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to
the process at the head of the queue. The running process is then removed from the queue. FCFS
is a non-preemptive scheduling algorithm.
Characteristics of FCFS:
 FCFS is a non-preemptive CPU scheduling algorithm.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is quite high.
How First Come First Serve CPU Scheduling Algorithm works?
 The waiting time for first process is 0 as it is executed first.
 The waiting time for upcoming process can be calculated by:

wt[i] = ( at[i – 1] + bt[i – 1] + wt[i – 1] ) – at[i]


where
 wt[i] = waiting time of current process
 at[i-1] = arrival time of previous process
 bt[i-1] = burst time of previous process
 wt[i-1] = waiting time of previous process
 at[i] = arrival time of current process
 The Average waiting time can be calculated by:

Average Waiting Time = (sum of all waiting time)/(Number of processes)
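Applied directly, the recurrence above can be turned into a few lines of Python. The sketch below uses the arrival and burst times of Example-1 that follows; a max(0, ...) guard is added for the case where the CPU would sit idle, which the formula above implicitly assumes does not happen. The function name is illustrative.

```python
# A hedged sketch applying the FCFS waiting-time recurrence given above.
# at/bt hold arrival and burst times, assumed sorted by arrival time.

def fcfs_waiting_times(at, bt):
    wt = [0]                                      # the first process never waits
    for i in range(1, len(at)):
        # wt[i] = (at[i-1] + bt[i-1] + wt[i-1]) - at[i], clamped at 0 if the CPU was idle
        wt.append(max(0, (at[i - 1] + bt[i - 1] + wt[i - 1]) - at[i]))
    return wt

at = [0, 1, 2, 3, 4]                              # arrival times from Example-1 below
bt = [4, 3, 1, 2, 5]                              # burst times from Example-1 below
wt = fcfs_waiting_times(at, bt)
print(wt, sum(wt) / len(wt))                      # -> [0, 3, 5, 5, 6] and average 3.8
```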

Examples to show working of Non-Preemptive First come first serve CPU Scheduling
Algorithm:
Example-1: Consider the following table of arrival time and burst time for five processes P1,
P2, P3, P4 and P5.
Processes Arrival Time Burst Time

P1 0 4

P2 1 3

P3 2 1

P4 3 2

P5 4 5
The First come First serve CPU Scheduling Algorithm will work on the basis of steps as
mentioned below:
Step 0: At time = 0,
 The process begins with P1
 As it has arrival time 0
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

0-1ms P1 0ms 1ms 4ms 3ms


Step 1: At time = 1,
 The process P2 arrives
 But process P1 still executing,
 Thus, P2 kept in a waiting table and wait for its execution.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 0ms 1ms 3ms 2ms

1-2ms P2 1ms P2 0ms 3ms 3ms


Step 3: At time = 2,
 The process P3 arrives and kept in a waiting queue
 While process P1 is still executing as its burst time is 4.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 0ms 1ms 2ms 1ms

P2 1ms P2 0ms 3ms 3ms

2-3ms P3 2ms P2, P3 0ms 1ms 1ms

Step 4: At time = 3,
 The process P4 arrives and kept in the waiting queue
 While process P1 is still executing as its burst time is 4
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 0ms 1ms 1ms 0ms

P2 1ms P2 0ms 3ms 3ms

P3 2ms P2, P3 0ms 1ms 1ms

3-4ms P4 3ms P2, P3, P4 0ms 2ms 2ms

Step 5: At time = 4,
 The process P1 completes its execution
 Process P5 arrives in waiting queue while process P2 starts executing
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

4-5ms P2 1ms 1ms 3ms 2ms

P3 2ms P3 0ms 1ms 1ms

P4 3ms P3, P4 0ms 2ms 2ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P5 4ms P3, P4, P5 0ms 5ms 5ms


Step 6: At time = 5,
 The process P2 completes its execution
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 1ms 2ms 2ms 0ms

P3 2ms P3 0ms 1ms 1ms

P4 3ms P3, P4 0ms 2ms 2ms

5-7ms P5 4ms P3, P4, P5 0ms 5ms 5ms

Step 7: At time = 7,
 Process P3 starts executing, it has burst time of 1 thus, it completes execution at time interval
8
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 2ms 1ms 1ms 0ms

P4 3ms P4 0ms 2ms 2ms

7-8ms P5 4ms P4, P5 0ms 5ms 5ms

Step 8: At time 8,
 The process of P3 completes its execution
 Process P4 starts executing, it has burst time of 2 thus, it completes execution at time interval
10.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 3ms 2ms 2ms 0ms

8-10ms P5 4ms P5 0ms 5ms 5ms

Step 9: At time 10,


 The process P4 completes its execution
 Process P5 starts executing, it has burst time of 5 thus, it completes execution at time interval
15.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

10-15ms P5 4ms 5ms 5ms 0ms

Step 10: At time 15,


 Process P5 will finish its execution.
 The overall execution of the processes will be as shown below:
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

0-1ms P1 0ms 1ms 4ms 3ms

P1 0ms 1ms 3ms 2ms

1-2ms P2 1ms P2 0ms 3ms 3ms

2-3ms P1 0ms 1ms 2ms 1ms

P2 1ms P2 0ms 3ms 3ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 2ms P2, P3 0ms 1ms 1ms

P1 0ms 1ms 1ms 0ms

P2 1ms P2 0ms 3ms 3ms

P3 2ms P2, P3 0ms 1ms 1ms

3-4ms P4 3ms P2, P3, P4 0ms 2ms 2ms

P2 1ms 1ms 3ms 2ms

P3 2ms P3 0ms 1ms 1ms

P4 3ms P3, P4 0ms 2ms 2ms

4-5ms P5 4ms P3, P4, P5 0ms 5ms 5ms

P2 1ms 2ms 2ms 0ms

P3 2ms P3 0ms 1ms 1ms

P4 3ms P3, P4 0ms 2ms 2ms

5-7ms P5 4ms P3, P4, P5 0ms 5ms 5ms

7-8ms P3 2ms 1ms 1ms 0ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 3ms P4 0ms 2ms 2ms

P5 4ms P4, P5 0ms 5ms 5ms

8-10ms P4 3ms 2ms 2ms 0ms

P5 4ms P5 0ms 5ms 5ms

10-15ms P5 4ms 5ms 5ms 0ms

Gantt chart for above execution:

Gantt chart for First come First serve Scheduling

Waiting Time = Start time – Arrival time


P1 = 0 – 0 = 0
P2 = 4 – 1 = 3
P3 = 7 – 2 = 5
P4 = 8 – 3 = 5
P5 = 10 – 4 = 6
Average waiting time = (0 + 3 + 5 + 5 + 6) / 5 = 19 / 5 = 3.8

Complexity Analysis:
 Time Complexity: O(N)
 Auxiliary Space: O(N)
Advantages of FCFS:
 The simplest and basic form of CPU Scheduling algorithm
 Easy to implement
 First come first serve method
Disadvantages of FCFS:
 As it is a Non-preemptive CPU Scheduling Algorithm, hence it will run till it finishes the
execution.

 Average waiting time in the FCFS is much higher than the others
 It suffers from Convoy effect.
 Not very efficient due to its simplicity
 Processes which are at the end of the queue, have to wait longer to finish.

Shortest Job First (or SJF) CPU Scheduling Non-preemptive algorithm using Segment
Tree
Shortest job first (SJF) or shortest job next, is a scheduling policy that selects the waiting process
with the smallest execution time to execute next. SJN is a non-preemptive algorithm.

 Shortest Job first has the advantage of having a minimum average waiting time among
all scheduling algorithms.
 It is a Greedy Algorithm.
 It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of ageing.
 It is practically infeasible as Operating System may not know burst time and therefore may
not sort them. While it is not possible to predict execution time, several methods can be used
to estimate the execution time for a job, such as a weighted average of previous execution
times. SJF can be used in specialized environments where accurate estimates of running time
are available.
For example, consider four processes P1–P4 that all arrive at time 0. Since the arrival time of all the processes is 0, the execution order of the processes is simply the ascending order of their burst times (given by the duration column of the example). Therefore, the execution order of the processes is given by:

P4 -> P1 -> P3 -> P2

Approach: The following is the approach used for the implementation of the shortest job first:

1. As the name suggests, the shortest job first algorithm is an algorithm which executes the
process whose burst time is least and has arrived before the current time. Therefore, in order
to find the process which needs to be executed, sort all the processes from the given set of
processes according to their arrival time. This ensures that the process with the shortest burst
time which has arrived first is executed first.
2. Instead of finding the minimum burst time process among all the arrived processes by
iterating the
whole struct array, the range minimum of the burst time of all the arrived processes upto the
current time is calculated using segment tree.
3. After selecting a process which needs to be executed, the completion time, turn around
time and waiting time is calculated by using arrival time and burst time of the process. The
formulae to calculate the respective times are:
 Completion Time: Time at which process completes its execution.

Completion Time = Start Time + Burst Time


 Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time


 Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time


4. After calculating, the respective times are updated in the array, and the burst time of the executed process is set to infinity in the segment tree's base array so that it is not considered as the minimum burst time in further queries. (A simplified sketch of the overall selection loop is given below.)
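The segment-tree version described above is an optimization; the selection logic itself can be illustrated more simply. The following hedged Python sketch implements non-preemptive SJF using a binary heap in place of the segment tree (the asymptotics differ, but the schedule produced is the same). The process tuples and the function name are illustrative.

```python
import heapq

# A hedged sketch of non-preemptive SJF using a heap keyed on burst time.
# Process tuples are (name, arrival, burst) with illustrative values.

def sjf_nonpreemptive(processes):
    procs = sorted(processes, key=lambda p: p[1])     # sort by arrival time
    time, i, heap, schedule = 0, 0, [], []
    while i < len(procs) or heap:
        while i < len(procs) and procs[i][1] <= time:
            heapq.heappush(heap, (procs[i][2], procs[i][1], procs[i][0]))  # key: burst
            i += 1
        if not heap:
            time = procs[i][1]                        # CPU idle until the next arrival
            continue
        burst, arrival, name = heapq.heappop(heap)
        start = time
        time += burst                                 # run to completion (non-preemptive)
        schedule.append((name, start, time))
    return schedule

print(sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# -> [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12), ('P4', 12, 16)]
```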

Shortest Remaining Time First (Preemptive SJF) Scheduling Algorithm


In previous post, we have discussed Set 1 of SJF i.e. non-preemptive. In this post we will discuss
the preemptive version of SJF known as Shortest Remaining Time First (SRTF).
In the Shortest Remaining Time First (SRTF) scheduling algorithm, the process with the
smallest amount of time remaining until completion is selected to execute. Since the currently
executing process is the one with the shortest amount of time remaining by definition, and since
that time should only reduce as execution progresses, processes will always run until they
complete or a new process is added that requires a smaller amount of time.
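Below is a minimal, hedged Python simulation of SRTF that re-evaluates the remaining times one time unit at a time. The process tuples use the arrival and burst times of Example-1 that follows, so the printed completion times can be checked against the worked trace; the function name is illustrative.

```python
# A hedged sketch of SRTF (preemptive SJF), simulated one time unit at a time.
# Process tuples are (name, arrival, burst).

def srtf(processes):
    remaining = {name: burst for name, _, burst in processes}
    time, completion = 0, {}
    while remaining:
        ready = [(remaining[n], a, n) for n, a, _ in processes
                 if n in remaining and a <= time]
        if not ready:                         # nothing has arrived yet: stay idle
            time += 1
            continue
        _, _, name = min(ready)               # smallest remaining time (ties by arrival)
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            completion[name] = time
            del remaining[name]
    return completion

# The five processes from Example-1 below
procs = [("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)]
print(srtf(procs))    # -> completion times P4=3, P2=7, P5=10, P1=15, P3=23
```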

Examples to show working of Preemptive Shortest Job First CPU Scheduling Algorithm:
Example-1: Consider the following table of arrival time and burst time for five processes P1,
P2, P3, P4 and P5.
Process Burst Time Arrival Time

P1 6 ms 2 ms

Process Burst Time Arrival Time

P2 2 ms 5 ms

P3 8 ms 1 ms

P4 3 ms 0 ms

P5 4 ms 4 ms

The Shortest Job First CPU Scheduling Algorithm will work on the basis of steps as mentioned
below:
At time = 0,
 Process P4 arrives and starts executing
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

0-1ms P4 0ms 1ms 3ms 2ms

At time= 1,
 Process P3 arrives.
 But, as P4 has a shorter burst time. It will continue execution.
 Thus, P3 will wait till P4 gets executed.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 0ms 1ms 2ms 1ms

1-2ms P3 1ms P3 0ms 8ms 8ms

At time =2,
 Process P1 arrives with burst time = 6
 As the burst time of P1 is more than that of P4
 Thus, P4 will continue its execution.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 0ms 1ms 1ms 0ms

P3 1ms 0ms 8ms 8ms

2-3ms P1 2ms P3, P1 0ms 6ms 6ms

At time = 3,
 Process P4 will finish its execution.
 Then, the burst time of P3 and P1 is compared.
 Process P1 is executed because its burst time is less as compared to P3.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 1ms 0ms 8ms 8ms

3-4ms P1 2ms 1ms 6ms 5ms


P3

At time = 4,
 Process P5 arrives.
 Then the burst time of P3, P5, and P1 is compared.
 Process P5 gets executed first among them because its burst time is lowest, and process P1
is preempted.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

4-5ms P3 1ms P3, P1 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

P5 4ms 1ms 4ms 3ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

At time = 5,
 Process P2 arrives.
 The burst time of all processes are compared,
 Process P2 gets executed as its burst time is lowest among all.
 Process P5 is preempted.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 1ms 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

P5 4ms 0ms 3ms 3ms

P3, P5,
5-6ms P2 5ms P1 1ms 2ms 1ms

At time = 6,
 Process P2 will keep executing.
 It will execute till time = 8 as the burst time of P2 is 2ms
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

6-7ms P3 1ms P3, P5, P1 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

P5 4ms 0ms 3ms 3ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 5ms 1ms 1ms 0ms

At time=7,
 The process P2 finishes its execution.
 Then again the burst time of all remaining processes is compared.
 The Process P5 gets executed because its burst time is lesser than the others.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 1ms 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

7-10ms P5 4ms P3, P1 3ms 3ms 0ms

At time = 10,
 The process P5 will finish its execution.
 Then the burst time of the remaining processes P1 and P3 is compared.
 Thus, process P1 is executed as its burst time is less than P3
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 1ms 0ms 8ms 8ms

10-15ms P1 4ms P3 4ms 5ms 0ms

At time = 15,
 The process P1 finishes its execution and P3 is the only process left.
 P3 will start executing.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

15-23ms P3 1ms 8ms 8ms 0ms

At time = 23,
 Process P3 will finish its execution.
 The overall execution of the processes will be as shown below:
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

0-1ms P4 0ms 1ms 3ms 2ms

P4 0ms 1ms 2ms 1ms

1-2ms P3 1ms P3 0ms 8ms 8ms

P4 0ms 1ms 1ms 0ms

P3 1ms 0ms 8ms 8ms

2-3ms P1 2ms P3, P1 0ms 6ms 6ms

P3 1ms 0ms 8ms 8ms

3-4ms P1 2ms 1ms 6ms 5ms


P3

4-5ms P3 1ms P3, P1 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P5 4ms 1ms 4ms 3ms

P3 1ms 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

P5 4ms 0ms 3ms 3ms

P3, P5,
5-6ms P2 5ms P1 1ms 2ms 1ms

P3 1ms 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

P5 4ms 0ms 3ms 3ms

P3, P5,
6-7ms P2 5ms P1 1ms 1ms 0ms

P3 1ms 0ms 8ms 8ms

P1 2ms 0ms 5ms 5ms

7-10ms P5 4ms P3, P1 3ms 3ms 0ms

P3 1ms 0ms 8ms 8ms

10-15ms P1 4ms P3 4ms 5ms 0ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

15-23ms P3 1ms 8ms 8ms 0ms

Gantt chart for above execution:

Gantt chart for SRTF

Now, lets calculate average waiting time and turn around time:
As we know,
 Turn Around time = Completion time – arrival time
 Waiting Time = Turn around time – burst time
Completion
Process Time Turn Around Time Waiting Time

P1 15 15-2 = 13 13-6 = 7

P2 7 7-5 = 2 2-2 = 0

P3 23 23-1 = 22 22-8 = 14

P4 3 3-0 = 3 3-3 = 0

P5 10 10-4 = 6 6-4 = 2

Now,
 Average Turn around time = (13 + 2 + 22 + 3 + 6)/5 = 9.2
 Average waiting time = (7 + 0 + 14 + 0 + 2)/5 = 23/5 = 4.6

Longest Job First (LJF) CPU Scheduling Algorithm

Longest Job First (LJF) is a non-preemptive scheduling algorithm. This algorithm is based on
the burst time of the processes. The processes are put into the ready queue based on their burst
times i.e., in descending order of the burst times. As the name suggests this algorithm is based on
the fact that the process with the largest burst time is processed first. The burst time of only those
processes is considered that have arrived in the system until that time. Its preemptive version is
called Longest Remaining Time First (LRTF) algorithm.

Characteristics of Longest Job First(Non-Preemptive)


 Among all the processes waiting in a waiting queue, the CPU is always assigned to the
process having the largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LJF CPU Scheduling can be of both preemptive and non-preemptive types.
Advantages of Longest Job First(LJF)
 No other process can execute until the longest job or process executes completely.
 All the jobs or processes finish at the same time approximately.
Disadvantages of Longest Job First CPU Scheduling Algorithm
 This algorithm gives a very high average waiting time and average turn-around time for a
given set of processes.
 This may lead to a convoy effect.
 It may happen that a short process may never get executed and the system keeps on executing
the longer processes.
 It reduces the processing speed and thus reduces the efficiency and utilization of the system.

Longest Job First CPU Scheduling Algorithm


 Step-1: First, sort the processes in increasing order of their Arrival Time.

 Step 2: Choose the process having the highest Burst Time among all the processes that have
arrived till that time.
 Step 3: Then process it for its burst time. Check if any other process arrives until this process
completes execution.
 Step 4: Repeat the above three steps until all the processes are executed.
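Following the four steps above, here is a hedged Python sketch of non-preemptive LJF. The process tuples use the arrival and burst times of Example-1 below, so the resulting schedule can be compared with the worked trace; the function name is illustrative.

```python
# A hedged sketch of non-preemptive Longest Job First following the steps above.
# Process tuples are (name, arrival, burst).

def ljf_nonpreemptive(processes):
    procs = sorted(processes, key=lambda p: p[1])            # Step 1: sort by arrival
    done, time, schedule = set(), 0, []
    while len(done) < len(procs):
        ready = [p for p in procs if p[0] not in done and p[1] <= time]
        if not ready:
            time = min(p[1] for p in procs if p[0] not in done)   # CPU idle until next arrival
            continue
        # Step 2: longest burst among arrived processes; ties broken by arrival (FCFS)
        name, arrival, burst = max(ready, key=lambda p: (p[2], -p[1]))
        schedule.append((name, time, time + burst))           # Step 3: run to completion
        time += burst
        done.add(name)
    return schedule

# The four processes from Example-1 below
print(ljf_nonpreemptive([("P1", 1, 2), ("P2", 2, 4), ("P3", 3, 6), ("P4", 4, 8)]))
# -> [('P1', 1, 3), ('P3', 3, 9), ('P4', 9, 17), ('P2', 17, 21)]
```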

Let us consider the following examples.


Example-1: Consider the following table of arrival time and burst time for four processes P1,
P2, P3 and P4.
Processes Arrival time Burst Time

P1 1 ms 2 ms

P2 2 ms 4 ms

P3 3 ms 6 ms

P4 4 ms 8 ms

The Longest Job First CPU Scheduling Algorithm will work on the basis of steps as
mentioned below:
At time = 1, Available Process : P1. So, select P1 and start executing.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

1-2ms P1 1ms 1ms 2ms 1ms

At time = 2,
 Process P2 arrives
 As P1 is executing thus, Process P2 will wait in the waiting queue.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

2-3ms P1 1ms P2 1ms 1ms 0ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 4ms 4ms

At time = 3,
 P1 gets executed, and process P3 arrives
 Available Process: P2, P3. So, select P3 and execute 6 ms (since B.T(P3)=6 which is higher
than B.T(P2) = 4)
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 4ms 4ms

3-4ms P3 3ms P2 1ms 6ms 5ms

At time = 4,
 Process P4 arrives,
 As P3 is executing thus, Process P4 will wait in the waiting queue
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 4ms 4ms

P3 3ms 1ms 5ms 4ms

4-5ms P4 4ms P2, P4 0ms 8ms 8ms

At time = 5,
 Process P3 is executing and P2 and P4 are in the waiting Table.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 4ms 4ms

P3 3ms 4ms 4ms 0ms

5-9ms P4 4ms P2, P4 0ms 8ms 8ms

At time = 9,
 Process P3 completes its execution,
 Available Process : P2, P4. So, select P4 and execute 8 ms (since, B.T(P4) = 8, B.T(P2) = 4)
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 4ms 4ms

9-17ms P4 4ms P2 8ms 8ms 0ms

At time = 17,
 Finally execute the process P2 for 4 ms.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

17-21ms P2 2ms 4ms 4ms 0ms

At time = 21,
 Process P2 will finish its execution.
 The overall execution of the processes will be as shown below:

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

1-2ms P1 1ms 1ms 2ms 1ms

P1 1ms 1ms 1ms 0ms

2-3ms P2 2ms P2 0ms 4ms 4ms

P2 2ms 0ms 4ms 4ms

3-4ms P3 3ms P2 1ms 6ms 5ms

P2 2ms 0ms 4ms 4ms

P3 3ms 1ms 5ms 4ms

4-5ms P4 4ms P2, P4 0ms 8ms 8ms

P2 2ms 0ms 4ms 4ms

P3 3ms 4ms 4ms 0ms

5-9ms P4 4ms P2, P4 0ms 8ms 8ms

P2 2ms 0ms 4ms 4ms

9-17ms P4 4ms P2 8ms 8ms 0ms

17-21ms P2 2ms 4ms 4ms 0ms

Note –
CPU will be idle for 0 to 1 unit time since there is no process available in the given interval.
Gantt chart will be as following below:

Since, completion time (C.T) can be directly determined by Gantt chart, and
Turn Around Time (TAT)
= (Completion Time) – (Arrival Time)
Also, Waiting Time (WT)
= (Turn Around Time) – (Burst Time)
Therefore, the final table looks like:

Process Completion Time Turn Around Time Waiting Time
P1 3 ms 2 ms 0 ms
P2 21 ms 19 ms 15 ms
P3 9 ms 6 ms 0 ms
P4 17 ms 13 ms 5 ms

Output :
Total Turn Around Time = 40 ms
So, Average Turn Around Time = 40/4 = 10.00 ms
And, Total Waiting Time = 20 ms
So, Average Waiting Time = 20/4 = 5.00 ms

Example-2: Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 and P5.
Processes Arrival Time Burst Time

P1 0ms 2ms

P2 0ms 3ms

P3 2ms 2ms

P4 3ms 5ms

P5 4ms 4ms
Gantt chart for this example:

Since, completion time (C.T) can be directly determined by Gantt chart, and
Turn Around Time (TAT)
= (Completion Time) – (Arrival Time)
Also, Waiting Time (WT)
= (Turn Around Time) – (Burst Time)
Therefore, the final table looks like:

Process Completion Time Turn Around Time Waiting Time
P1 14 ms 14 ms 12 ms
P2 3 ms 3 ms 0 ms
P3 16 ms 14 ms 12 ms
P4 8 ms 5 ms 0 ms
P5 12 ms 8 ms 4 ms

Output :
Total Turn Around Time = 44 ms
So, Average Turn Around Time = 44/5 = 8.8 ms
And, Total Waiting Time = 28 ms
So, Average Waiting Time = 28/5 = 5.6 ms

Longest Remaining Time First (LRTF) or Preemptive Longest Job First CPU Scheduling
Algorithm
Longest Remaining Time First (LRTF) is a preemptive version of Longest Job First (LJF)
scheduling algorithm. In this scheduling algorithm, we find the process with the maximum
remaining time and then process it, i.e. check for the maximum remaining time after some
interval of time(say 1 unit each) to check if another process having more Burst Time arrived up
to that time.
Characteristics of Longest Remaining Time First (LRTF)
 Among all the processes waiting in a waiting queue, CPU is always assigned to the process
having largest burst time.
 If two processes have the same burst time then the tie is broken using FCFS i.e. the process
that arrived first is processed first.
 LJF CPU Scheduling can be of both preemptive and non-preemptive type.
Advantages of Longest Remaining Time First (LRTF)
 No other process can execute until the longest job or process executes completely.
 All the jobs or processes finishes at the same time approximately.
Disadvantages of Longest Remaining Time First (LRTF)

 This algorithm gives very high average waiting time and average turn-around time for a given
set of processes.
 This may lead to convoy effect.
 It may happen that a short process may never get executed and the system keeps on executing
the longer processes.
 It reduces the processing speed and thus reduces the efficiency and utilization of the system.
Longest Remaining Time First (LRTF) CPU Scheduling Algorithm
 Step-1: First, sort the processes in increasing order of their Arrival Time.
 Step-2: Among the processes that have arrived, choose the one with the largest remaining Burst Time.
 Step-3: Process it for 1 unit of time. Check whether any other process arrives up to that point of execution.
 Step-4: Repeat the above steps until all the processes are executed. (A minimal sketch of this loop is given below.)
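Following these steps, the sketch below simulates LRTF one time unit at a time in Python. The process tuples use the arrival and burst times of Example-1 below, so the completion times it prints can be checked against the worked trace; the function name is illustrative.

```python
# A hedged sketch of LRTF: every 1 time unit, pick the arrived process with the
# largest remaining burst time (ties broken by earliest arrival, i.e. FCFS).
# Process tuples are (name, arrival, burst).

def lrtf(processes):
    remaining = {name: burst for name, _, burst in processes}
    time, completion = 0, {}
    while remaining:
        ready = [(remaining[n], -a, n) for n, a, _ in processes
                 if n in remaining and a <= time]
        if not ready:                          # CPU idle until the first arrival
            time += 1
            continue
        _, _, name = max(ready)                # largest remaining time, earliest arrival
        remaining[name] -= 1                   # Step 3: run for exactly 1 unit
        time += 1
        if remaining[name] == 0:
            completion[name] = time
            del remaining[name]
    return completion

# The four processes from Example-1 below
print(lrtf([("P1", 1, 2), ("P2", 2, 4), ("P3", 3, 6), ("P4", 4, 8)]))
# -> completion times P1=18, P2=19, P3=20, P4=21 (average TAT 17, average WT 12)
```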

Examples to show working of Preemptive Longest Job First CPU Scheduling Algorithm:
Example-1: Consider the following table of arrival time and burst time for four processes P1,
P2, P3 and P4.

Processes Arrival time Burst Time

P1 1 ms 2 ms

P2 2 ms 4 ms

P3 3 ms 6 ms

P4 4 ms 8 ms

The Longest Remaining Time First CPU Scheduling Algorithm will work on the basis of
steps as mentioned below:

At time = 1,
 Available Process : P1. So, select P1 and execute 1 ms.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

1-2ms P1 1ms 1ms 2ms 1ms

At time = 2,

 Available Process : P1, P2.
 So, select P2 and execute 1 ms (since B.T(P1) = 1 which is less than B.T(P2) = 4)
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

2-3ms P2 2ms P1 1ms 4ms 3ms

At time = 3,
 Available Process : P1, P2, P3.
 So, select P3 and execute 1 ms (since, B.T(P1) = 1 , B.T(P2) = 3 , B.T(P3) = 6).
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 3ms 3ms

3-4ms P3 3ms P1, P2 1ms 6ms 5ms

At time = 4,
 Available processes: P1, P2, P3, P4.
 So, select P4 (as burst time of P4 is largest) and execute 1 ms (since, B.T(P1) = 1 , B.T(P2) =
3 , B.T(P3) = 6, B.T(P4) = 8).
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

4-5ms P1 1ms P1, P2, P3 0ms 1ms 1ms

P2 2ms 0ms 3ms 3ms

P3 3ms 0ms 5ms 5ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 4ms 1ms 8ms 7ms

At time = 5,
 Available processes: P1, P2, P3, P4,
 Process P4 will continue its execution as no other process has burst time larger than the
Process P4
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 3ms 3ms

P3 3ms 0ms 5ms 5ms

5-7ms P4 4ms P1, P2, P3 2ms 7ms 5ms

At time = 7,
 The processes P3 and P4 have same remaining burst time,
 hence If two processes have the same burst time then the tie is broken using FCFS i.e. the
process that arrived first is processed first.
 Therefore P3 will get executed for 1ms
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

7-8ms P1 1ms P1, P2, P4 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 3ms 3ms

P3 3ms 1ms 5ms 4ms

P4 4ms 0ms 7ms 5ms

At time = 8,
 Available processes: P1, P2, P3, P4,
 Process P4 will again continue its execution as no other process has burst time larger than the
Process P4
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 3ms 3ms

P3 3ms 0ms 4ms 4ms

8-9ms P4 4ms P1, P2, P3 1ms 5ms 4ms

At time = 9,
 Available processes: P1, P2, P3, P4,
 Process P3 continue its execution on the basis of FCFS rule.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

9-10ms P1 1ms P1, P2, P4 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 3ms 3ms

P3 3ms 1ms 4ms 3ms

P4 4ms 0ms 4ms 4ms

At time = 10,
 Available processes: P1, P2, P3, P4,
 Now again the burst time of P4 is largest, thus it will execute further.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 3ms 3ms

P3 3ms 0ms 3ms 3ms

10-11ms P4 4ms P1, P2, P3 1ms 4ms 3ms

At time = 11,
 Available processes: P1, P2, P3, P4,
 Process P2 will continue its execution as the burst time of P2, P3, P4 is same
 Thus, in this case the further execution will be decided on the basis of FCFS i.e. the process
that arrived first is processed first.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

11-12ms P1 1ms P1, P3, P4 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 1ms 3ms 2ms

P3 3ms 0ms 3ms 3ms

P4 4ms 0ms 3ms 3ms

At time = 12,
 Available processes: P1, P2, P3, P4,
 Process P3 continue its execution on the basis of above explanation.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 2ms 2ms

P3 3ms 1ms 3ms 2ms

12-13ms P4 4ms P1, P2, P4 0ms 3ms 3ms

At time = 13,
 Available processes: P1, P2, P3, P4,
 Now again the burst time of P4 is largest, thus it will execute further.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

13-14ms P1 1ms P1, P2, P3 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 0ms 2ms 2ms

P3 3ms 0ms 2ms 2ms

P4 4ms 1ms 3ms 2ms

At time = 14,
 Available processes: P1, P2, P3, P4
 Now, the process P2 will again begin to execute first among all
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 1ms 2ms 1ms

P3 3ms 0ms 2ms 2ms

14-15ms P4 4ms P1, P3, P4 0ms 2ms 2ms

At time = 15,
 Available processes: P1, P2, P3, P4, now P3 will execute
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

15-16ms P1 1ms P1, P2, P4 0ms 1ms 1ms

P2 2ms 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 3ms 1ms 2ms 1ms

P4 4ms 0ms 2ms 2ms

At time = 16,
 Available processes: P1, P2, P3, P4,
 here, P4 will execute as it has the largest Burst time among all
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P1 1ms 0ms 1ms 1ms

P2 2ms 0ms 1ms 1ms

P3 3ms 0ms 1ms 1ms

16-17ms P4 4ms P1, P2, P3 1ms 2ms 1ms

At time = 17,
 Available processes: P1, P2, P3, P4,
 Process P1 will execute here on the basis of above explanation
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

17-18ms P1 1ms P2, P3, P4 1ms 1ms 0ms

P2 2ms 0ms 1ms 1ms

P3 3ms 0ms 1ms 1ms

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P4 4ms 0ms 1ms 1ms

At time = 18,
 Available processes: P2, P3, P4,
 Process P2 will execute.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P2 2ms 1ms 1ms 0ms

P3 3ms 0ms 1ms 1ms

18-19ms P4 4ms P3, P4 0ms 1ms 1ms

At time = 19,
 Available processes: P3, P4,
 Process P3 will execute.
Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

P3 3ms 1ms 1ms 0ms

19-20ms P4 4ms P4 0ms 1ms 1ms

At time = 20,
 Process P4 will execute at the end.

Remaining
Time Arrival Waiting Execution Initial Burst Burst
Instance Process Time Table Time Time Time

20-21ms P4 4ms 1ms 1ms 0ms

At time = 21,
 Process P4 finishes its execution.
The overall execution of the processes will be as shown below
Time Instance | Process | Arrival Time | Waiting Table | Execution Time | Initial Burst Time | Remaining Burst Time
1-2ms    | P1 | 1ms |            | 1ms | 2ms | 1ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
2-3ms    | P2 | 2ms | P1         | 1ms | 4ms | 3ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
3-4ms    | P3 | 3ms | P1, P2     | 1ms | 6ms | 5ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 0ms | 5ms | 5ms
4-5ms    | P4 | 4ms | P1, P2, P3 | 1ms | 8ms | 7ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 0ms | 5ms | 5ms
5-7ms    | P4 | 4ms | P1, P2, P3 | 2ms | 7ms | 5ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 1ms | 5ms | 4ms
7-8ms    | P4 | 4ms | P1, P2, P4 | 0ms | 5ms | 5ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 0ms | 4ms | 4ms
8-9ms    | P4 | 4ms | P1, P2, P3 | 1ms | 5ms | 4ms
9-10ms   | P1 | 1ms | P1, P2, P4 | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 1ms | 4ms | 3ms
         | P4 | 4ms |            | 0ms | 4ms | 4ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 3ms | 3ms
         | P3 | 3ms |            | 0ms | 3ms | 3ms
10-11ms  | P4 | 4ms | P1, P2, P3 | 1ms | 4ms | 3ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 1ms | 3ms | 2ms
         | P3 | 3ms |            | 0ms | 3ms | 3ms
11-12ms  | P4 | 4ms | P1, P3, P4 | 0ms | 3ms | 3ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 2ms | 2ms
         | P3 | 3ms |            | 1ms | 3ms | 2ms
12-13ms  | P4 | 4ms | P1, P2, P4 | 0ms | 3ms | 3ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 2ms | 2ms
         | P3 | 3ms |            | 0ms | 2ms | 2ms
13-14ms  | P4 | 4ms | P1, P2, P3 | 1ms | 3ms | 2ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 1ms | 2ms | 1ms
         | P3 | 3ms |            | 0ms | 2ms | 2ms
14-15ms  | P4 | 4ms | P1, P3, P4 | 0ms | 2ms | 2ms
         | P1 | 1ms |            | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 1ms | 1ms
         | P3 | 3ms |            | 1ms | 2ms | 1ms
15-16ms  | P4 | 4ms | P1, P2, P4 | 0ms | 2ms | 2ms
16-17ms  | P1 | 1ms | P1, P2, P3 | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 0ms | 1ms | 1ms
         | P3 | 3ms |            | 0ms | 1ms | 1ms
         | P4 | 4ms |            | 1ms | 2ms | 1ms
         | P1 | 1ms |            | 1ms | 1ms | 0ms
         | P2 | 2ms |            | 0ms | 1ms | 1ms
         | P3 | 3ms |            | 0ms | 1ms | 1ms
17-18ms  | P4 | 4ms | P2, P3, P4 | 0ms | 1ms | 1ms
         | P2 | 2ms |            | 1ms | 1ms | 0ms
         | P3 | 3ms |            | 0ms | 1ms | 1ms
18-19ms  | P4 | 4ms | P3, P4     | 0ms | 1ms | 1ms
         | P3 | 3ms |            | 1ms | 1ms | 0ms
19-20ms  | P4 | 4ms | P4         | 0ms | 1ms | 1ms
20-21ms  | P4 | 4ms |            | 1ms | 1ms | 0ms
Note: The CPU is idle from 0 to 1 ms since no process has arrived in that interval.
Gantt chart will be as following below:

Since the completion time (C.T.) can be read directly from the Gantt chart:
Turn Around Time (TAT) = Completion Time – Arrival Time
Waiting Time (WT) = Turn Around Time – Burst Time
Therefore, the final table looks like:

Total Turn Around Time = 68 ms
So, Average Turn Around Time = 68/4 = 17.00 ms
And, Total Waiting Time = 48 ms
So, Average Waiting Time = 48/4 = 12.00 ms
Example-2: Consider the following table of arrival time and burst time for five processes P1,
P2, P3, P4 and P5.

Processes Arrival Time Burst Time

P1 0ms 2ms

P2 0ms 3ms

P3 2ms 2ms

P4 3ms 5ms

P5 4ms 4ms

Similarly to Example-1, the Gantt chart for this example:

Since the completion time (C.T.) can be read directly from the Gantt chart:
Turn Around Time (TAT) = Completion Time – Arrival Time
Waiting Time (WT) = Turn Around Time – Burst Time
Therefore, the final table looks like:

Total Turn Around Time = 61 ms


So, Average Turn Around Time = 61/5 = 12.20 ms
And, Total Waiting Time = 45 ms
So, Average Waiting Time = 45/5 = 9.00 ms

Longest Remaining Time First (LRTF) CPU Scheduling Program


We are given some processes with their arrival times and burst times, and we have to find the
completion time (CT), Turn Around Time (TAT), Average Turn Around Time (Avg TAT),
Waiting Time (WT) and Average Waiting Time (AWT) for the given processes.

LRTF is a preemptive scheduling algorithm. Its tie-breaker is FCFS, and if FCFS does not break
the tie then the process id is used as the tie-breaker.
Example: Consider the following table of arrival time and burst time for four processes P1, P2,
P3, and P4.
Process Arrival time Burst Time
P1 1 ms 2 ms
P2 2 ms 4 ms
P3 3 ms 6 ms
p4 4 ms 8 ms

Gantt chart will be as following below,

Since the completion time (CT) can be read directly from the Gantt chart:
Turn Around Time (TAT) = Completion Time – Arrival Time
Waiting Time (WT) = Turn Around Time – Burst Time
Therefore,
Output:
Total Turn Around Time = 68 ms
So, Average Turn Around Time = 68/4 = 17.00 ms

And, Total Waiting Time = 48 ms


So, Average Waiting Time = 12.00 ms
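The following is a minimal Python sketch (not part of the original notes) of an LRTF simulator for the example above. It assumes time advances in 1 ms steps and that ties on remaining burst time are broken by arrival time (FCFS) and then by process id, as described in this section; the function and variable names are illustrative.

from typing import Dict, List, Tuple

def lrtf(processes: List[Tuple[str, int, int]]) -> Dict[str, int]:
    """processes: list of (pid, arrival_time, burst_time). Returns {pid: completion_time}."""
    remaining = {pid: bt for pid, at, bt in processes}
    arrival = {pid: at for pid, at, bt in processes}
    completion = {}
    t = min(arrival.values())            # CPU is idle until the first arrival
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:                    # nothing has arrived yet: idle step
            t += 1
            continue
        # Longest remaining burst first; earliest arrival, then smallest pid, as tie-breakers.
        run = sorted(ready, key=lambda p: (-remaining[p], arrival[p], p))[0]
        remaining[run] -= 1
        t += 1
        if remaining[run] == 0:
            completion[run] = t
            del remaining[run]
    return completion

if __name__ == "__main__":
    procs = [("P1", 1, 2), ("P2", 2, 4), ("P3", 3, 6), ("P4", 4, 8)]
    ct = lrtf(procs)
    for pid, at, bt in procs:
        tat = ct[pid] - at
        wt = tat - bt
        print(pid, "CT =", ct[pid], "TAT =", tat, "WT =", wt)
    # Expected from the worked example: average TAT = 17 ms, average WT = 12 ms.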

Round Robin Scheduling with different arrival times


A round-robin scheduler schedules processes fairly by giving each job a time slot (quantum) and
interrupting the job if it has not completed by then; the preempted job goes behind the jobs that
arrived during its quantum, which is what makes this scheduling fair.
Note:
 Round-robin is cyclic in nature, so starvation doesn’t occur
 Round-robin is a variant of first come, first served scheduling
 No priority, special importance is given to any process or task
 RR scheduling is also known as Time slicing scheduling
Advantages:
 Each process is served by CPU for a fixed time, so priority is the same for each one
 Starvation does not occur because of its cyclic nature.
Disadvantages:
 Throughput depends on quantum time.
 If the time quantum is too large RR degrades to FCFS.
 If we want to give some process priority, we cannot.
Process | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
P1 | 0 | 5 | 12 | 12 | 7
P2 | 1 | 4 | 11 | 10 | 6
P3 | 2 | 2 | 6  | 4  | 2
P4 | 3 | 1 | 9  | 6  | 5

Quantum time is 2, which means each process executes for at most 2 units of time at a stretch.
How to compute these process requests:
1. Take the process that arrives first and start executing it (for the quantum time only).
2. Check whether any other process request has arrived. If a process request arrives during the
quantum in which another process is executing, add the new process to the ready queue.
3. After the quantum time has passed, check for any processes in the ready queue. If the ready
queue is empty, continue the current process. If the queue is not empty and the current
process is not complete, add the current process to the end of the ready queue.
4. Take the first process from the ready queue and start executing it (same rules).
5. Repeat steps 2-4 until all processes are complete.
6. When a process completes and the ready queue is empty, the task is complete.
After all these steps we get the three times, which are:
1. Completion Time: the time at which a process completes its execution.
2. Turn Around Time: total time the process exists in the system (completion time – arrival time).
3. Waiting Time: total time the process spends waiting for the CPU (turn around time – burst time).
A minimal simulation sketch of these steps is given below.
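The sketch below is one way to implement the steps above in Python; it is not from the original notes. It assumes integer arrival and burst times and that processes arriving during a quantum join the ready queue before the preempted process rejoins it; names are illustrative.

from collections import deque
from typing import Dict, List, Tuple

def round_robin(processes: List[Tuple[str, int, int]], quantum: int) -> Dict[str, int]:
    """processes: list of (pid, arrival_time, burst_time). Returns {pid: completion_time}."""
    procs = sorted(processes, key=lambda p: p[1])      # order by arrival time
    remaining = {pid: bt for pid, at, bt in procs}
    ready, completion = deque(), {}
    t, i = 0, 0                                        # current time, index of next arrival

    while len(completion) < len(procs):
        # Admit everything that has arrived by time t.
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0]); i += 1
        if not ready:                                  # CPU idle until the next arrival
            t = procs[i][1]
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        # New arrivals during this slice enter the queue before the preempted process.
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0]); i += 1
        if remaining[pid] == 0:
            completion[pid] = t
        else:
            ready.append(pid)
    return completion

# Example-1 below (Time Quantum = 2): P1(0, 5), P2(1, 4), P3(2, 2), P4(4, 1)
# should give completion times P1 = 12, P2 = 11, P3 = 6, P4 = 9.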

Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot
in a cyclic way. It is basically the preemptive version of the First Come First Serve CPU
Scheduling algorithm.
 Round Robin CPU scheduling generally focuses on the Time Sharing technique.
 The period of time for which a process or job is allowed to run in a pre-emptive method is
called the time quantum.
 Each process or job present in the ready queue is assigned the CPU for that time quantum; if
the execution of the process is completed during that time then the process ends, else the
process goes back to the ready queue and waits for its next turn to complete its execution.

Characteristics of Round Robin CPU Scheduling Algorithm:

 It is simple, easy to implement, and starvation-free as all processes get a fair share of the CPU.
 It is one of the most commonly used techniques in CPU scheduling.
 It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
 Its disadvantage is the extra overhead of context switching.
Advantages of Round Robin CPU Scheduling Algorithm:
 There is fairness, since every process gets an equal share of the CPU.
 A newly created process is added to the end of the ready queue.
 A round-robin scheduler generally employs time-sharing, giving each job a time slot or
quantum.
 While performing round-robin scheduling, a particular time quantum is allotted to
different jobs.
 Each process gets a chance to be rescheduled after a particular quantum of time in this scheduling.
Disadvantages of Round Robin CPU Scheduling Algorithm:
 There is larger waiting time and response time.
 There is low throughput.
 There are more context switches.
 The Gantt chart can become very large if the time quantum is small (for example, 1 ms for a
long-running workload).
 Scheduling becomes time-consuming for a small quantum.

Examples to show working of Round Robin Scheduling Algorithm:


Example-1: Consider the following table of arrival time and burst time for four processes P1,
P2, P3, and P4 and given Time Quantum = 2
Process Burst Time Arrival Time

P1 5 ms 0 ms

P2 4 ms 1 ms

P3 2 ms 2 ms

P4 1 ms 4 ms
The Round Robin CPU Scheduling Algorithm will work on the basis of steps as mentioned
below:
At time = 0,
 The execution begins with process P1, which has burst time 5.
 Here, every process executes for at most 2 milliseconds (the time quantum period). P2 and P3
arrive during this interval and wait in the ready queue.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
0-2ms | P1 | 0ms | P2, P3 | P1 | 2ms | 5ms | 3ms


At time = 2,

 The processes P2 and P3 arrive in the ready queue and P2 starts executing for the TQ period.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
      | P1 | 0ms |        |    | 0ms | 5ms | 3ms
2-4ms | P2 | 1ms | P3, P1 | P2 | 2ms | 4ms | 2ms


At time = 4,
 The process P4 arrives in the ready queue,
 Then P3 executes for TQ period.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
      | P1 | 0ms |            |    | 0ms | 5ms | 3ms
      | P2 | 1ms |            |    | 0ms | 4ms | 2ms
4-6ms | P3 | 2ms | P1, P4, P2 | P3 | 2ms | 2ms | 0ms
At time = 6,
 Process P3 completes its execution.
 Process P1 starts executing for the TQ period as it is next in the ready queue.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
      | P1 | 0ms |        |    | 2ms | 3ms | 1ms
6-8ms | P2 | 1ms | P4, P2 | P1 | 0ms | 4ms | 2ms


At time = 8,
 Process P4 starts executing, it will not execute for Time Quantum period as it has burst
time = 1
 Hence, it will execute for only 1ms.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
8-9ms | P1 | 0ms | P2, P1 | P4 | 0ms | 3ms | 1ms
      | P2 | 1ms |        |    | 0ms | 4ms | 2ms
      | P4 | 4ms |        |    | 1ms | 1ms | 0ms


At time = 9,
 Process P4 completes its execution
 Process P2 starts executing for TQ period as it is next in the ready queue
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
       | P1 | 0ms |    |    | 0ms | 3ms | 1ms
9-11ms | P2 | 1ms | P1 | P2 | 2ms | 2ms | 0ms


At time = 11,
 Process P2 completes its execution.
 Process P1 starts executing, it will execute for 1ms only
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
11-12ms | P1 | 0ms |  | P1 | 1ms | 1ms | 0ms


At time = 12,
 Process P1 completes its execution.
 The overall execution of the processes will be as shown below:
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
0-2ms   | P1 | 0ms | P2, P3     | P1 | 2ms | 5ms | 3ms
        | P1 | 0ms |            |    | 0ms | 5ms | 3ms
2-4ms   | P2 | 1ms | P3, P1     | P2 | 2ms | 4ms | 2ms
        | P1 | 0ms |            |    | 0ms | 5ms | 3ms
        | P2 | 1ms |            |    | 0ms | 4ms | 2ms
4-6ms   | P3 | 2ms | P1, P4, P2 | P3 | 2ms | 2ms | 0ms
        | P1 | 0ms |            |    | 2ms | 3ms | 1ms
6-8ms   | P2 | 1ms | P4, P2     | P1 | 0ms | 4ms | 2ms
        | P1 | 0ms |            |    | 0ms | 3ms | 1ms
        | P2 | 1ms |            |    | 0ms | 4ms | 2ms
8-9ms   | P4 | 4ms | P2, P1     | P4 | 1ms | 1ms | 0ms
        | P1 | 0ms |            |    | 0ms | 3ms | 1ms
9-11ms  | P2 | 1ms | P1         | P2 | 2ms | 2ms | 0ms
11-12ms | P1 | 0ms |            | P1 | 1ms | 1ms | 0ms


Gantt chart will be as following below:

Gantt chart for Round Robin Scheduling Algorithm

How to compute below times in Round Robin using a program?


 Completion Time: Time at which process completes its execution.
 Turn Around Time: Time Difference between completion time and arrival time. Turn
Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
Now, let's calculate the average waiting time and turn around time:

Processes | AT | BT | CT | TAT | WT
P1 | 0 | 5 | 12 | 12-0 = 12 | 12-5 = 7
P2 | 1 | 4 | 11 | 11-1 = 10 | 10-4 = 6
P3 | 2 | 2 | 6  | 6-2 = 4   | 4-2 = 2
P4 | 4 | 1 | 9  | 9-4 = 5   | 5-1 = 4
Now,
 Average Turn Around Time = (12 + 10 + 4 + 5)/4 = 31/4 = 7.75 ms
 Average Waiting Time = (7 + 6 + 2 + 4)/4 = 19/4 = 4.75 ms
Example 2: Consider the following table of arrival time and burst time for three processes P1,
P2 and P3 and given Time Quantum = 2
Process Burst Time Arrival Time

P1 10 ms 0 ms

P2 5 ms 0 ms

P3 8 ms 0 ms
Similarly, Gantt chart for this example:

Gantt chart for example 2

Now, let's calculate the average waiting time and turn around time:

Processes | AT | BT | CT | TAT | WT
P1 | 0 | 10 | 23 | 23-0 = 23 | 23-10 = 13
P2 | 0 | 5  | 15 | 15-0 = 15 | 15-5 = 10
P3 | 0 | 8  | 21 | 21-0 = 21 | 21-8 = 13
Total Turn Around Time = 59 ms
So, Average Turn Around Time = 59/3 = 19.667 ms
And, Total Waiting Time = 36 ms
So, Average Waiting Time = 36/3 = 12.00 ms
Selfish Round Robin CPU Scheduling
In the traditional Round Robin scheduling algorithm, all processes were treated equally for
processing. The objective of the Selfish Round Robin is to give better service to processes that
have been executing for a while than to newcomers. It’s a more logical and superior
implementation compared to the normal Round Robin algorithm.
Implementation:-

 Processes in the ready list are partitioned into two lists: NEW and ACCEPTED.
 The New processes wait while Accepted processes are serviced by the Round Robin.
 Priority of a new process increases at rate ‘a’ while the priority of an accepted process
increases at rate ‘b’.
 When the priority of a new process reaches the priority of an accepted process, that new
process becomes accepted.
 If all accepted processes finish, the highest priority new process is accepted.
Let’s trace out the general working of this algorithm:-
STEP 1: Assume that initially there are no ready processes, when the first one, A, arrives. It has
priority 0, to begin with. Since there are no other accepted processes, A is accepted immediately.
STEP 2: After a while, another process, B, arrives. As long as b / a < 1, B’s priority will
eventually catch up to A’s, so it is accepted; now both A and B have the same priority.
STEP 3: All accepted processes share a common priority (which rises at rate b); that makes this
policy easy to implement, and any new process is bound to get accepted at some point, so no
process has to experience starvation.
STEP 4: Even if b / a > 1, A will eventually finish, and then B can be accepted.

Adjusting the parameters a and b:

-> If b / a >= 1, a new process is not accepted until all the accepted processes have finished, so SRR becomes FCFS.
-> If b / a = 0, all processes are accepted immediately, so SRR becomes RR.
-> If 0 < b / a < 1, accepted processes are selfish, but not completely.
A small sketch of the promotion rule is given below.
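The Python sketch below illustrates only the NEW-to-ACCEPTED promotion rule described above; it is not from the original notes. It assumes priorities start at 0, that 'a' is the growth rate for NEW (waiting) processes and 'b' the rate for ACCEPTED processes, and that one call corresponds to one time unit; all names are illustrative.

def tick(new, accepted, a, b):
    """new, accepted: dicts mapping pid -> priority. Mutates both in place."""
    for pid in new:
        new[pid] += a                    # waiting processes age at rate a
    for pid in accepted:
        accepted[pid] += b               # accepted processes age at rate b
    if not accepted and new:
        # If all accepted processes have finished, accept the highest-priority new process.
        best = max(new, key=new.get)
        accepted[best] = new.pop(best)
        return
    threshold = min(accepted.values()) if accepted else 0
    for pid in list(new):
        if new[pid] >= threshold:        # caught up with the accepted processes
            accepted[pid] = new.pop(pid)
    # The ACCEPTED processes are then serviced among themselves with ordinary Round Robin.

# With a = 2 and b = 1: A is accepted at t = 0 with priority 0, and a process B arriving
# at t = 1 with priority 0 catches up by t = 2, matching the worked example below.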
Example on Selfish Round Robin –

Solution (where a = 2 and b = 1) –

Explanation –
Process A gets accepted as soon as it comes at time t = 0. So its priority is increased only by ‘b’
i.e ‘1’ after each second. B enters at time t = 1 and goes to the waiting queue. So its priority gets
increased by ‘a’ i.e. ‘2’ at time t = 2. At this point priority of A = priority of B = 2.
So now both processes A & B are in the accepted queue and are executed in a round-robin
fashion. At time t = 3 process C enters the waiting queue. At time t = 6 the priority of process C
catches up to the priority of process B, and then they start executing in a Round Robin manner.
When B finishes execution at time t = 10, D is automatically promoted to the accepted queue.
Similarly, when D finishes execution at time t = 15, E is automatically promoted to the accepted
queue.
Multilevel Queue (MLQ) CPU Scheduling
It may happen that processes in the ready queue can be divided into different classes where each
class has its own scheduling needs. For example, a common division is a foreground
(interactive) process and a background (batch) process. These two classes have different
scheduling needs. For this kind of situation Multilevel Queue Scheduling is used.
Now, let us see how it works.
Advantages of Multilevel Queue CPU Scheduling:
 The processes are permanently assigned to the queue, so it has advantage of low scheduling
overhead.
Disadvantages of Multilevel Queue CPU Scheduling:
 Some processes may starve for the CPU if the higher priority queues never become empty.
 It is inflexible in nature.
Ready Queue is divided into separate queues for each class of processes. For example, let us
take three different types of processes System processes, Interactive processes, and Batch
Processes. All three processes have their own queue. Now, look at the below figure.

The Description of the processes in the above diagram is as follows:


 System Processes: The operating system itself has processes to run, which are generally termed
system processes.
 Interactive Processes: An interactive process is a process that involves interaction with the user.
 Batch Processes: Batch processing is a technique in which the operating system collects
programs and data together in a batch before processing starts.
All three types of processes have their own queue, and each queue has its own scheduling
algorithm. For example, queue 1 and queue 2 use Round Robin while queue 3 can use FCFS to
schedule its processes.
Scheduling among the queues: What will happen if all the queues have some processes? Which
process should get the CPU? To determine this Scheduling among the queues is necessary. There
are two ways to do so –
87
1. Fixed priority preemptive scheduling method – Each queue has absolute priority over the
lower priority queues. Consider the priority order queue 1 > queue 2 > queue 3. According to
this method, no process in the batch queue (queue 3) can run unless queues 1 and 2 are empty.
If a batch process (queue 3) is running and a system (queue 1) or interactive (queue 2) process
enters the ready queue, the batch process is preempted.
2. Time slicing – In this method, each queue gets a certain portion of CPU time and can use it to
schedule its own processes. For instance, queue 1 takes 50 percent of CPU time, queue 2 takes
30 percent, and queue 3 gets 20 percent of CPU time. A minimal sketch of the first method is
given below.
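The following small Python sketch, not from the original notes, illustrates the fixed-priority preemptive method (method 1): a lower-priority queue is served only when all higher-priority queues are empty. The queue names and contents are illustrative.

def pick_next(queues):
    """queues: list of lists of ready processes, ordered from highest to lowest priority
    (e.g. [system, interactive, batch]). Returns the process that should run next."""
    for q in queues:
        if q:
            # Within the chosen queue, that queue's own algorithm (RR, FCFS, ...) picks;
            # here we simply take the first entry as a placeholder.
            return q[0]
    return None          # all queues empty: the CPU stays idle

# Example: pick_next([[], ["P3"], ["P5"]]) returns "P3".
# A batch process runs only while both higher queues are empty, and is preempted
# (pick_next changes its answer) as soon as a higher-priority queue becomes non-empty.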
Example Problem :
Consider below table of four processes under Multilevel queue scheduling. Queue number
denotes the queue of the process.

Priority of queue 1 is greater than queue 2. queue 1 uses Round Robin (Time Quantum = 2) and
queue 2 uses FCFS.
Below is the gantt chart of the problem :

Working:
 At the start, both queues have processes, so the processes in queue 1 (P1, P2) run first (because
of higher priority) in round robin fashion and complete after 7 units.
 Then the process in queue 2 (P3) starts running (as there is no process in queue 1), but while it is
running P4 arrives in queue 1, interrupts P3, and runs for 5 units.
 After its completion P3 takes the CPU and completes its execution.

Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling


Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling is like Multilevel Queue
(MLQ) Scheduling, but here processes can move between the queues, which makes it much
more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:

 In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a queue
on entry to the system and are not allowed to move between queues.
 Because the processes are permanently assigned to a queue, this setup has the advantage of low
scheduling overhead,
 but, on the other hand, the disadvantage of being inflexible.
Advantages of Multilevel Feedback Queue Scheduling:
 It is more flexible.
 It allows different processes to move between different queues.
 It prevents starvation by moving a process that has waited too long in a lower priority queue to
a higher priority queue.
Disadvantages of Multilevel Feedback Queue Scheduling:
 Selecting the best scheduler requires some other means of choosing its parameter values (number of queues, time quanta, and so on).
 It produces more CPU overheads.
 It is the most complex algorithm.
Multilevel feedback queue scheduling, however, allows a process to move between queues.
Multilevel Feedback Queue Scheduling (MLFQ) keeps analyzing the behavior (time of
execution) of processes and according to which it changes its priority.
Now, look at the diagram and explanation below to understand it properly.

Now let us suppose that queues 1 and 2 follow round robin with time quantum 4 and 8
respectively and queue 3 follow FCFS.
Implementation of MFQS is given below –
 When a process starts executing the operating system can insert it into any of the above three
queues depending upon its priority. For example, if it is some background process, then the
operating system would not like it to be given to higher priority queues such as queue 1 and 2.
It will directly assign it to a lower priority queue i.e. queue 3. Let’s say our current process for
consideration is of significant priority so it will be given queue 1.
 In queue 1 process executes for 4 units and if it completes in these 4 units or it gives CPU for
I/O operation in these 4 units then the priority of this process does not change and if it again
comes in the ready queue then it again starts its execution in Queue 1.
 If a process in queue 1 does not complete in 4 units, its priority gets reduced and it is shifted
to queue 2.

 Above points 2 and 3 are also true for queue 2 processes but the time quantum is 8 units. In a
general case if a process does not complete in a time quantum then it is shifted to the lower
priority queue.
 In the last queue, processes are scheduled in an FCFS manner.
 A process in a lower priority queue can execute only when the higher priority queues are
empty.
 A process running in the lower priority queue is interrupted by a process arriving in the higher
priority queue.
Well, the above implementation may differ for example the last queue can also follow Round-
robin Scheduling.
Problems in the above implementation: A process in the lower priority queue can suffer from
starvation due to some short processes taking all the CPU time.
Solution: A simple solution can be to boost the priority of all the processes after regular
intervals and place them all in the highest priority queue.
What is the need for such complex Scheduling?
 Firstly, it is more flexible than multilevel queue scheduling.
 To optimize turnaround time algorithms like SJF are needed which require the running time of
processes to schedule them. But the running time of the process is not known in advance.
MFQS runs a process for a time quantum and then it can change its priority(if it is a long
process). Thus it learns from past behavior of the process and then predicts its future behavior.
This way it tries to run a shorter process first thus optimizing turnaround time.
 MFQS also reduces the response time.
Example: Consider a system that has a CPU-bound process which requires a burst time of 40
seconds. The Multilevel Feedback Queue scheduling algorithm is used; the time quantum of the
first queue is 2 seconds and at each lower level it is incremented by 5 seconds. How many times
will the process be interrupted, and in which queue will it complete its execution?
Solution:
 Process P needs 40 Seconds for total execution.
 At Queue 1 it is executed for 2 seconds and then interrupted and shifted to queue 2.
 At Queue 2 it is executed for 7 seconds and then interrupted and shifted to queue 3.
 At Queue 3 it is executed for 12 seconds and then interrupted and shifted to queue 4.
 At Queue 4 it is executed for 17 seconds and then interrupted and shifted to queue 5.
 At Queue 5 it executes for 2 seconds and then it completes.
 Hence the process is interrupted 4 times and completed on queue 5
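The short Python sketch below, not part of the original notes, just re-checks the arithmetic of this worked example, assuming the quantum starts at 2 s and grows by 5 s at each lower level (2, 7, 12, 17, 22, ...); the function name is illustrative.

def mlfq_trace(burst, first_quantum=2, step=5):
    """Returns (queue_where_it_finishes, number_of_interruptions)."""
    level, quantum, interruptions = 1, first_quantum, 0
    while burst > quantum:        # the process uses the full quantum and is demoted
        burst -= quantum
        interruptions += 1
        level += 1
        quantum += step
    return level, interruptions

print(mlfq_trace(40))             # -> (5, 4): finishes in queue 5 after 4 interruptions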

HRRN Scheduling Full Form


HRRN stands for Highest Response Ratio Next.
HRRN scheduling is a non-preemptive scheduling algorithm in which the priority of each job
depends on its estimated run time and the amount of time it has spent waiting. The longer a job
waits, the higher its priority becomes, which prevents starvation. The response ratio therefore
combines the requested service time and the waiting time. HRRN is not suitable for external
priority systems.
W = Waiting time of the process so far
S = Burst time (service time) of the process
Procedure for Algorithm
 It starts.
 It considers only the processes with their arrival time, burst time and priority.
 It fills the ready queue according to arrival times.
 The process with the highest response ratio (see the formula below) is executed first.
 The previous two steps are repeated until the queue becomes empty.
 Then the average waiting time, average turnaround time and number of context switches are calculated.
 It stops.
Advantages
 It is Non-preemptive.
 It prevents indefinite postponement (process of starvation).
 It improves the performance of shortest process first scheduling.
 It considers how long process has been waiting and increases its priority.
Disadvantages
 The burst times of the processes cannot be known in advance, so it cannot be implemented
exactly in practice.
 Processes are scheduled by an internal priority system because it does not support external
priorities.
Highest Response Ratio Next (HRRN) CPU Scheduling
Given N processes with their Arrival times and Burst times, the task is to find average waiting
time and an average turn around time using HRRN scheduling algorithm.
The name itself states that we need to find the response ratio of all available processes and select
the one with the highest Response Ratio. A process once selected will run till completion.
Characteristics of HRRN CPU Scheduling:
 Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm and is
considered one of the most optimal scheduling algorithms.
 The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.
 HRRN is basically considered as the modification of Shortest Job First in order to reduce the
problem of starvation.
 In comparison with SJF, during HRRN scheduling algorithm, the CPU is allotted to the next
process which has the highest response ratio and not to the process having less burst time.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
Advantages of HRRN CPU Scheduling
 HRRN Scheduling algorithm generally gives better performance than the shortest job
first Scheduling.
 There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN CPU Scheduling
 A practical implementation of HRRN scheduling is not possible, as it is not possible to know
the burst time of every job in advance.
 In this scheduling, an overload may occur on the CPU.

Performance of HRRN –
 Shorter Processes are favoured.
 Aging without service increases the response ratio, so longer jobs can eventually get past shorter jobs.
Let us consider the following examples.
Example-1: Consider the following table of arrival time and burst time for five processes P1,
P2, P3, P4 and P5.
Processes Arrival time Burst Time

P1 0ms 3ms

P2 2ms 6ms

P3 4ms 4ms

P4 6ms 5ms

P5 8ms 2ms

The Highest Response Ratio Next CPU Scheduling Algorithm will work on the basis of
steps as mentioned below:
At time= 0,
 Available Processes: P1, Hence P1 starts executing till its completion.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
0-2ms | P1 | 0ms |  | P1 | 2ms | 3ms | 1ms

At time = 2,
 Available Processes: P1, P2
 But P1 keeps executing, as HRRN is a non-preemptive algorithm, and thus it will finish its
execution.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
2-3ms | P1 | 0ms | P2 | P1 | 1ms | 1ms | 0ms
      | P2 | 2ms |    |    | 0ms | 6ms | 6ms

At time = 3,
 Process P1 finishes its execution.
 Process P2 starts executing as it is the only process available.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
3-4ms | P2 | 2ms |  | P2 | 1ms | 6ms | 5ms

At time = 4,
 Process P3 arrives and waits for process P2 to finish.
 Process P2 continues its execution.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
      | P2 | 2ms |    |    | 2ms | 5ms | 3ms
4-6ms | P3 | 4ms | P3 | P2 | 0ms | 4ms | 4ms

At time = 6,
 Process P4 arrives and waits for process P2 to finish.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
6-8ms | P2 | 2ms | P3, P4 | P2 | 2ms | 3ms | 1ms
      | P3 | 4ms |        |    | 0ms | 4ms | 4ms
      | P4 | 6ms |        |    | 0ms | 5ms | 5ms

At time = 8,
 Process P5 arrives and waits in the ready queue for its turn.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
      | P2 | 2ms |            |    | 1ms | 1ms | 0ms
      | P3 | 4ms |            |    | 0ms | 4ms | 4ms
      | P4 | 6ms |            |    | 0ms | 5ms | 5ms
8-9ms | P5 | 8ms | P3, P4, P5 | P2 | 0ms | 2ms | 2ms

At time = 9,
 Process P2 completes its execution.
 Now there are 3 processes available: P3, P4 and P5, which arrived at 4, 6 and 8 units respectively.
 Therefore, the waiting times for P3, P4 and P5 are (9 – 4 =) 5, (9 – 6 =) 3, and (9 – 8 =) 1 unit respectively.
 Using the formula given above (Response Ratio = (W + S)/S), the response ratios of P3, P4 and P5 come out to 2.25, 1.6 and 1.5 respectively.
 Clearly, P3 has the highest response ratio and so it gets scheduled.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
9-13ms | P3 | 4ms | P4, P5 | P3 | 4ms | 4ms | 0ms
       | P4 | 6ms |        |    | 0ms | 5ms | 5ms
       | P5 | 8ms |        |    | 0ms | 2ms | 2ms

At time = 13,
 Available Processes: P4 and P5
 Response Ratios of P4 and P5 are 2.4 and 3.5 respectively using the above formula.
 Clearly, P5 has the highest Response Ratio and so it gets scheduled
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
        | P4 | 6ms |    |    | 0ms | 5ms | 5ms
13-15ms | P5 | 8ms | P4 | P5 | 2ms | 2ms | 0ms

At time = 15,
 After completion of process P5, process P4 is selected last and executes until it finishes.
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
15-20ms | P4 | 6ms |  | P4 | 5ms | 5ms | 0ms

At time = 20,
 Process P4 will finish its execution.
 The overall execution of the processes will be as shown below:
Time Instance | Process | Arrival Time | Ready Queue | Running Queue | Execution Time | Initial Burst Time | Remaining Burst Time
0-2ms   | P1 | 0ms |            | P1 | 2ms | 3ms | 1ms
        | P1 | 0ms |            |    | 1ms | 1ms | 0ms
2-3ms   | P2 | 2ms | P2         | P1 | 0ms | 6ms | 6ms
3-4ms   | P2 | 2ms |            | P2 | 1ms | 6ms | 5ms
        | P2 | 2ms |            |    | 2ms | 5ms | 3ms
4-6ms   | P3 | 4ms | P3         | P2 | 0ms | 4ms | 4ms
        | P2 | 2ms |            |    | 2ms | 3ms | 1ms
        | P3 | 4ms |            |    | 0ms | 4ms | 4ms
6-8ms   | P4 | 6ms | P3, P4     | P2 | 0ms | 5ms | 5ms
        | P2 | 2ms |            |    | 1ms | 1ms | 0ms
        | P3 | 4ms |            |    | 0ms | 4ms | 4ms
        | P4 | 6ms |            |    | 0ms | 5ms | 5ms
8-9ms   | P5 | 8ms | P3, P4, P5 | P2 | 0ms | 2ms | 2ms
9-13ms  | P3 | 4ms | P4, P5     | P3 | 4ms | 4ms | 0ms
        | P4 | 6ms |            |    | 0ms | 5ms | 5ms
        | P5 | 8ms |            |    | 0ms | 2ms | 2ms
        | P4 | 6ms |            |    | 0ms | 5ms | 5ms
13-15ms | P5 | 8ms | P4         | P5 | 2ms | 2ms | 0ms
15-20ms | P4 | 6ms |            | P4 | 5ms | 5ms | 0ms

Gantt Chart –

Since the completion time (C.T.) can be read directly from the Gantt chart:
Turn Around Time (TAT) = Completion Time – Arrival Time
Waiting Time (WT) = Turn Around Time – Burst Time
Therefore, the final table looks like:

Processes | AT | BT | CT | TAT | WT
P1 | 0 | 3 | 3  | 3-0 = 3   | 3-3 = 0
P2 | 2 | 6 | 9  | 9-2 = 7   | 7-6 = 1
P3 | 4 | 4 | 13 | 13-4 = 9  | 9-4 = 5
P4 | 6 | 5 | 20 | 20-6 = 14 | 14-5 = 9
P5 | 8 | 2 | 15 | 15-8 = 7  | 7-2 = 5

Output:
Total Turn Around Time = 40 ms
So, Average Turn Around Time = 40/5 = 8.00 ms
And, Total Waiting Time = 20 ms
So, Average Waiting Time = 20/5 = 4.00 ms
Implementation of HRRN Scheduling –
 Input the number of processes, their arrival times and burst times.
 Sort them according to their arrival times.
 At any given time, calculate the response ratios of the ready processes and select the appropriate
process to be scheduled.
 Calculate the turn around time as completion time – arrival time.
 Calculate the waiting time as turn around time – burst time.
 Turn around time divided by the burst time gives the normalized turn around time.
 Sum up the waiting and turn around times of all processes and divide by the number of
processes to get the average waiting and turn around time.
A minimal simulation sketch based on these steps is given below.
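The Python sketch below is one possible implementation of these steps; it is not from the original notes, and the function and variable names are illustrative. It assumes the non-preemptive behaviour described above and reproduces Example-1.

from typing import Dict, List, Tuple

def hrrn(processes: List[Tuple[str, int, int]]) -> Dict[str, int]:
    """processes: list of (pid, arrival_time, burst_time). Returns {pid: completion_time}."""
    pending = sorted(processes, key=lambda p: p[1])   # sort by arrival time
    t, completion = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                                 # CPU idle until the next arrival
            t = pending[0][1]
            continue
        # Response Ratio = (waiting time + burst time) / burst time, run the winner to completion.
        pid, at, bt = max(ready, key=lambda p: (t - p[1] + p[2]) / p[2])
        t += bt
        completion[pid] = t
        pending.remove((pid, at, bt))
    return completion

if __name__ == "__main__":
    procs = [("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)]
    ct = hrrn(procs)
    for pid, at, bt in procs:
        print(pid, "CT =", ct[pid], "TAT =", ct[pid] - at, "WT =", ct[pid] - at - bt)
    # Expected: CT = 3, 9, 13, 20, 15 -> average TAT = 8.00 ms, average WT = 4.00 ms.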

Difference between FCFS and Priority CPU scheduling


1. First Come First Served (FCFS) : First Come First Served (FCFS) is the simplest type of
algorithm. It is a non-preemptive algorithm i.e. the process cannot be interrupted once it starts
executing. The FCFS is implemented with the help of a FIFO queue. The processes are put into
the ready queue in the order of their arrival time. The process that arrives first becomes the head
of the queue while the others that arrive after are added to the rear of the queue. In First Come
First Served (FCFS) algorithm, the process that arrives first, is sent first for execution by the
CPU when CPU is free. The main disadvantage of this algorithm is that the average waiting time
is often quite long. It also leads to the convoy effect. This results in lower device or CPU
utilisation and lower efficiency.
2. Priority Scheduling Algorithm: Priority scheduling algorithm executes the processes
depending upon their priority. Each process is allocated a priority and the process with the

highest priority is executed first. Priorities can be defined internally as well as externally.
Internal priorities are decided by the system depending upon the number of resources required,
time needed etc. whereas external priorities are based upon the time in which the work is needed
or the amount being paid for the work done or the importance of process. Priority scheduling can
be preemptive or non- preemptive.
Note:
 If two processes have the same priority then tie is broken using FCFS.
 The waiting time for the highest priority process is always zero in preemptive mode while it
may not be zero in case of non preemptive mode.
Disadvantages: The major problem is the starvation or indefinite blocking. It may so happen
that in stream of processes, the system keeps executing the high priority processes and the low
priority processes never get executed. Solution: A solution to the problem is aging. Aging refers
to gradually increasing the priority of processes waiting in the system by a fixed number after a
fixed interval, say by 1 in every 10 minutes. This will ensure that a low priority process also gets
executed by slowly increasing its priority with the time.
Differences between FCFS and Priority scheduling algorithm are as follows:

First Come First Served (FCFS) | Priority Scheduling
It executes the processes in the order in which they arrive, i.e. the process that arrives first is executed first. | It executes the processes based upon their priorities, i.e. in descending order of their priorities; a process with higher priority is executed first.
It is non-preemptive in nature. | It is both non-preemptive and preemptive in nature.
It results in quite long waiting times for the processes and thus increases the average waiting time. | The response time and waiting time cannot be known beforehand.
It leads to the convoy effect. | It may happen that a low priority process keeps waiting for an indefinite time and never gets executed.
It is the easiest to implement in any system. | It is best suited for real time operating systems.
It does not suffer from starvation. | It suffers from starvation.

Comparison of Different CPU Scheduling Algorithms in OS
A scheduling algorithm is used to estimate the CPU time required to allocate to the processes and
threads. The prime goal of any CPU scheduling algorithm is to keep the CPU as busy as possible
for improving CPU utilization.
Scheduling Algorithms
1. First Come First Serve (FCFS): As the name implies, the jobs are executed on a first come,
first served basis. It is a simple algorithm based on FIFO (first in, first out): the process that
comes first in the ready queue gets the CPU first, so the earlier the arrival time, the sooner the
process gets the CPU. It can suffer from the convoy effect if the burst time of the first
process is the longest among all the jobs.
2. Shortest Job First (SJF): Also known as Shortest Job Next, this is a non-preemptive
algorithm that is easy to implement in batch systems and is best at minimizing the waiting
time. It follows the strategy whereby the process with the lowest execution time is chosen
for the next execution.
3. Longest Job First Scheduling (LJF): Longest Job First (LJF) is a non-preemptive
algorithm that is also based upon the burst time of the processes. The processes are put
into the ready queue according to their burst times and the process with the largest burst time is
processed first.
4. Longest Remaining Time First Scheduling (LRTF): The preemptive version of LJF is
LRTF. Here the process with the maximum remaining CPU time is considered first and
processed. After each time interval, the scheduler checks whether another process with a larger
remaining burst time has arrived up to that point; if so, the running process is preempted by
that process.
5. Shortest Remaining Time First(SRTF): This algorithm is based on SJF and this is the
preemptive version of SJF. In this scheduling algorithm, the process with the smallest remaining
burst time is executed first and it may be preempted with a new job that has arrived with a
shorter execution time.
6. Round Robin(RR): It is a preemptive scheduling algorithm in which each process is given a
fixed time called quantum to execute. At this time one process is allowed to execute for a
quantum and then preempts and then another process is executed. In this way, there is context
switching between processes to save states of these preempted processes.
7. Priority Scheduling: It is a non-preemptive algorithm that works in batch systems and, each
process is assigned with a priority and the process with the highest priority is executed first. This
can lead to starvation for other processes.
8. Multiple Levels Queues Scheduling: In this scheduling, multiple queues have their
scheduling Algorithms and are maintained with the processes that possess the same
characteristics. For this, priorities are assigned to each queue for the jobs to be executed.
9. Multilevel-Feedback-Queue Scheduler: It defines several queues and the scheduling
algorithms for each queue. This algorithm is used to determine when to upgrade a process when
to demote a process, and also determine the queue in which a process will enter and when that
process needs service.
Note: The SJF scheduling algorithm is hypothetical and unimplementable, as it is impossible to
determine the burst time of any process without running it. SJF is used as a benchmark algorithm,
since it provides the minimum average waiting time of any scheduling algorithm.

Comparative Analysis of Different CPU Scheduling Algorithms

Here is a brief comparison between different CPU scheduling algorithms:

Algorithm | Allocation is | Complexity | Average waiting time (AWT) | Preemption | Starvation | Performance
FCFS | According to the arrival time of the processes, the CPU is allocated. | Not complex | Large | No | No | Slow performance
SJF | Based on the lowest CPU burst time (BT). | More complex than FCFS | Smaller than FCFS | No | Yes | Minimum average waiting time
LJFS | Based on the highest CPU burst time (BT). | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | No | Yes | Big turn-around time
LRTF | Same as LJFS: allocation of the CPU is based on the highest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | Yes | Yes | Preference is given to the longer jobs
SRTF | Same as SJF: allocation of the CPU is based on the lowest CPU burst time (BT), but it is preemptive. | More complex than FCFS | Depends on some measures, e.g. arrival time, process size, etc. | Yes | Yes | Preference is given to the short jobs
RR | According to the order in which the processes arrive, with a fixed time quantum (TQ). | The complexity depends on the TQ size | Large as compared to SJF and priority scheduling | Yes | No | Each process is given a fair, fixed share of time
Priority (preemptive) | According to the priority; the higher priority task executes first. | Less complex | Smaller than FCFS | Yes | Yes | Good performance but contains a starvation problem
Priority (non-preemptive) | According to the priority, with monitoring of newly incoming higher priority jobs. | Less complex than preemptive priority | Smaller than FCFS | No | Yes | Most beneficial with batch systems
MLQ | According to the process residing in the higher priority queue. | More complex than the priority scheduling algorithms | Smaller than FCFS | No | Yes | Good performance but contains a starvation problem
MFLQ | According to the process of the higher priority queue. | The most complex, but its complexity depends on the TQ size | Smaller than all scheduling types in many cases | No | No | Good performance

Difference between Preemptive and Non-preemptive CPU scheduling algorithms


In CPU Scheduling, we have two types of scheduling, Let’s have a look at them:
Preemptive Scheduling: In this, a scheduler may preempt a low-priority running process at any
time when a high-priority process enters the ready state. When scheduling takes place under
either of the below circumstances it is preemptive scheduling –
 When a process switches from the running state to the ready state (for example, when an
interrupt occurs).
 When a process switches from the waiting state to the ready state (for example, at
completion of I/O).

Non-Preemptive Scheduling: In this, once a process enters the running state it cannot be
preempted until it completes its allotted time. When scheduling takes place only under the
below circumstances we say that the scheduling scheme is non-preemptive or cooperative.
 When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process).
 When a process terminates.
Let us see the difference between Preemptive Scheduling and Non-Preemptive Scheduling:

Preemptive Scheduling | Non-Preemptive Scheduling
The CPU is allocated to the processes for a certain amount of time. | The CPU is allocated to the process till it ends its execution or switches to the waiting state.
The executing process here is interrupted in the middle of execution. | The executing process here is not interrupted in the middle of execution.
It usually switches the process from ready state to running state and vice-versa, and maintains the ready queue. | It does not switch the process from running state to ready state.
Here, if a process with high priority frequently arrives in the ready queue then the processes with low priority have to wait long, and may starve. | Here, if the CPU is allocated to a process with a large burst time then the processes with small burst times may starve.
It is quite flexible, because critical processes are allowed to access the CPU as they arrive in the ready queue, no matter what process is executing currently. | It is rigid, as even if a critical process enters the ready queue the process running on the CPU is not disturbed.
It has an associated cost, as it has to maintain the integrity of shared data. | It has no such associated cost.
This scheduling leads to more context switches. | This scheduling leads to fewer context switches compared to preemptive scheduling.
Whether preemptive scheduling is better than non-preemptive scheduling or vice-versa cannot be
said definitively. It depends on how the scheduling minimizes the average waiting time of the
processes and maximizes CPU utilization.
Difference between Turn Around Time (TAT) and Waiting Time (WT) in CPU Scheduling
In CPU Scheduling, we often need to find the average turnaround and waiting times with the
help of the arrival, burst and completion times. Let's have a brief look at them.
Turnaround Time (TAT):
1. It is the time interval from the time of submission of a process to the time of the completion of
the process.
2. The difference b/w Completion Time and Arrival Time is called Turnaround Time.
Completion Time (CT): This is the time when the process completes its execution. Arrival
Time (AT): This is the time when the process has arrived in the ready state.
TAT = CT - AT
Waiting Time (WT):
1. The time spent by a process waiting in the ready queue for getting the CPU.
2. The time difference b/w Turnaround Time and Burst Time is called Waiting Time.
Burst Time (BT): This is the time required by the process for its execution.
WT = TAT - BT
Now with Waiting Time and Burst Time, we can also calculate Turn Around Time via:
TAT = BT + WT
Example:
Process Burst Time (in sec.)

P1 24

P2 3

P3 4

Solution: Figure – Gantt Chart

Avg. TAT = (24 + 27 + 31) / 3 = 27.33 sec


Avg. WT = (0 + 24 + 27) / 3 = 17.0 sec
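The tiny Python sketch below, not from the original notes, re-derives these averages assuming FCFS with all three processes arriving at time 0, as in the example above.

# FCFS check of the example above: all arrivals at t = 0.
bursts = {"P1": 24, "P2": 3, "P3": 4}

t, tat, wt = 0, {}, {}
for pid, bt in bursts.items():        # FCFS order: P1, P2, P3
    wt[pid] = t                       # waiting time = time spent before first run (arrival = 0)
    t += bt
    tat[pid] = t                      # TAT = completion time - arrival time (= completion here)

print(sum(tat.values()) / len(tat))   # 27.33... sec average turnaround time
print(sum(wt.values()) / len(wt))     # 17.0 sec average waiting time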
Let’s see the difference between Turnaround Time and Waiting Time:

Turn Around Time | Waiting Time
The time since the process entered the ready queue for execution until it completed its execution. | The time the process spent waiting in the ready queue and for I/O completion.
Different CPU scheduling algorithms produce different TAT for the same set of processes. | The CPU scheduling algorithm doesn't affect the amount of time during which a process executes or does I/O, only the amount of time it spends waiting in the ready queue.
The turnaround time is generally limited by the speed of the output device. | Waiting time has no such major effect.
Turn Around Time = Burst Time + Waiting Time. | Waiting Time = Turn Around Time – Burst Time.

Difference between LJF and LRJF CPU scheduling algorithms


1. Longest Job First (LJF) :
It is a CPU scheduling algorithm in which the process with the largest burst time is executed first.
Once the process enters the ready queue, it exits only after the completion of its execution;
therefore it is a non-preemptive algorithm. In case the burst times of the processes are the same,
the process with the lower arrival time is selected. This CPU scheduling algorithm results in low
throughput for the system.
Process | AT | BT
1 | 0 | 3
2 | 1 | 2
3 | 2 | 4
4 | 3 | 5
5 | 4 | 6

2. Longest Remaining Job First (LRJF) :
It is the preemptive version of the Longest Job First CPU scheduling algorithm. The remaining
burst times are re-examined at every time unit and the job with the longest remaining time is
selected. In case the remaining burst times of the processes are the same, the job with the lower
arrival time is selected.
It suffers from starvation, because the remaining burst times of the processes are checked at every
instant. It is also called the "Longest Remaining Time First" algorithm.

Process | AT | BT
1 | 0 | 3
2 | 1 | 2
3 | 2 | 4
4 | 3 | 5
5 | 4 | 6

Difference between LJF and LRJF CPU scheduling algorithms :

LJF | LRJF
Non-preemptive. | Preemptive.
It suffers from starvation. | It also suffers from starvation.
Waiting time is high. | Waiting time is not so high, and processes get chances for execution after some interval.
Context switching is less, since a process that once enters the running state is executed completely. | Context switching is more, since the processes are continually re-checked for execution.
The processes are executed based on their CPU time and arrival time alone, without increasing CPU overload. | The processes repeatedly check for an idle CPU, thereby increasing the overload.
No process can complete its execution while the longest job persists. | Processes can complete execution before the longest process.

Difference between SJF and SRJF CPU scheduling algorithms


1. Shortest Job First (SJF) :
The Shortest Job First (SJF) is a scheduling policy that selects the waiting process with the
smallest execution time to execute next. It is also known as Shortest Job Next (SJN) or Shortest
Process Next (SPN). It is a non-preemptive scheduling algorithm.
2. Shortest Remaining Job First (SRJF) :
The Shortest Remaining Job First (SRJF), also written SRTF, is the preemptive version of SJF
scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining
until completion is selected to execute. If all processes have the same arrival time, SRJF behaves
exactly like SJF.

Similarities:

 Both SJF and SRJF are practically not feasible as it is not possible to predict the burst time of
the processes.
 Both SJF and SRJF may lead to process starvation as long processes may be held off
indefinitely if short processes are continually added.
 Both SJF and SRJF are considered same once all the processes are available in the ready
queue. This is because after the processes are added in the ready queue, no preemption is
carried out.

Differences:
Shortest Job First (SJF) | Shortest Remaining Job First (SRJF)
It is a non-preemptive algorithm. | It is a preemptive algorithm.
It involves less overhead than SRJF. | It involves more overhead than SJF.
It is slower in execution than SRJF. | It is faster in execution than SJF.
It leads to comparatively lower throughput. | It leads to increased throughput as execution time is less.
It minimizes the average waiting time for each process. | It may or may not minimize the average waiting time for each process.
It may suffer from priority inversion. | It may suffer from the convoy effect.
It involves a lesser number of context switches. | It involves a higher number of context switches.
Short processes are executed first, followed by longer processes. | Shorter processes run fast while longer processes show poor response time.

Difference between FCFS and SJF CPU scheduling algorithms


1. First Come First Served (FCFS) : First Come First Served (FCFS) is the simplest type of
algorithm. It is a non-preemptive algorithm i.e. the process cannot be interrupted once it starts
executing. The FCFS is implemented with the help of a FIFO queue. The processes are put into
the ready queue in the order of their arrival time. The process that arrives first becomes the head
of the queue while the others that arrive after are added to the rear of the queue. In First Come
First Served (FCFS) algorithm, the process that arrives first, is sent first for execution by the
CPU when CPU is free. The main disadvantage of this algorithm is that the average waiting time
is often quite long. It also leads to the convoy effect. This results in lower device or CPU
utilization and lower efficiency.
2. Shortest Job First (SJF) : Shortest Job First (SJF) Scheduling Algorithm is based upon the
burst time of the process. The processes are put into the ready queue based on their burst times.
In this algorithm, the process with the least burst time is processed first. The burst time of only
those processes is compared that are present or have arrived until that time. It is also non-
preemptive in nature. Its preemptive version is called Shortest Remaining Time First (SRTF)
algorithm. The major advantage of this algorithm is that it gives the minimum waiting time for a
given set of processes and thus reduces the average waiting time. The disadvantage of this
algorithm is that long processes may never be processed by the system and may remain in the
queue for very long time leading to starvation of processes.
Note: If two processes have same burst time then the tie is broken using FCFS, i.e., the process
that arrived first is processed first.
The difference between First Come First Served (FCFS) and Shortest Job First (SJF) scheduling
algorithm are as follows:

First Come First Served (FCFS) | Shortest Job First (SJF)
FCFS executes the processes in the order in which they arrive, i.e. the process that arrives first is executed first. | SJF executes the processes based upon their burst times, i.e. in ascending order of their burst times.
FCFS is non-preemptive in nature. | SJF is also non-preemptive, but it has a preemptive version called the Shortest Remaining Time First (SRTF) algorithm.
FCFS results in quite long waiting times for the processes and thus increases the average waiting time. | The average waiting time for a given set of processes is minimum.
FCFS leads to the convoy effect. | It does not lead to the convoy effect.
The FCFS algorithm is the easiest to implement in any system. | The real difficulty with SJF is knowing the length of the next CPU request or burst.
A process may have to wait for quite long to get executed, depending on the burst times of the processes that arrived first. | A long process may never get executed and the system may keep executing the short processes.
FCFS leads to lower device and CPU utilization, thereby decreasing the efficiency of the system. | SJF leads to higher effectiveness of the system due to the lower average waiting time.
FCFS results in minimal overhead. | In the case of SJF, elapsed time has to be recorded, which results in more overhead on the processor.
FCFS does not suffer from starvation. | SJF suffers from starvation.

Difference between Arrival Time and Burst Time in CPU Scheduling


CPU scheduling algorithms deal with the CPU time and I/O time required by processes. CPU
time is the time taken by the CPU to carry out the process, while I/O time is the time required
for I/O operations by the process.
The execution of multiple processes in an optimised way is based on different kinds of
algorithms, like FCFS, Shortest Job First, etc., which depend on time-frame values like the
arrival time, burst time and waiting time.
1. Arrival Time (AT) :
Arrival time is the point of time, in milliseconds, at which a process arrives at the ready queue
to begin its execution. It is independent of the CPU or I/O time and simply marks the moment at
which the process becomes available to do its job, regardless of which process is currently in
the running state. Arrival time can be calculated as the difference between the completion time
and the turn around time of the process.
Arrival Time (A.T.) = Completion Time (C.T.) – Turn Around Time (T.A.T.)
2. Burst Time (BT) :
Burst time refers to the time, in milliseconds, required by a process for its execution. The burst
time considers only the CPU time of a process; the I/O time is not taken into consideration. It is
also called the execution time or running time of the process. The process makes a transition
from the running state to the completion state during this time frame. Burst time can be
calculated as the difference between the turn around time and the waiting time of the process,
that is,
Burst Time (B.T.) = Turn Around Time (T.A.T.) – Waiting Time (W.T.)
The following table illustrates the Arrival and Burst time of three processes P1, P2 and P3. A
single CPU is allocated for the execution of these processes.

Processes Arrival Time (in ms) Burst Time (in ms)

P1 0 3

P2 4 2

P3 6 4

If we compute the Gantt chart based on FCFS scheduling, the process that comes first in the
ready queue is executed first; each process's arrival decides its place in the execution order, and
it runs for a time equal to its burst time.

Since process P2 arrives at 4 ms and process P1 requires only 3 ms for its execution (its burst
time), the CPU waits for 1 ms; this is idle time for the CPU, during which it performs no process
execution. The last process to be executed is P3. A small sketch replaying this schedule is given
below.
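The following Python sketch, not from the original notes, replays the FCFS schedule of the table above (P1: AT 0, BT 3; P2: AT 4, BT 2; P3: AT 6, BT 4) and prints the 1 ms idle gap.

processes = [("P1", 0, 3), ("P2", 4, 2), ("P3", 6, 4)]   # already in arrival order

t = 0
for pid, at, bt in processes:                            # FCFS: run in arrival order
    if t < at:
        print(f"CPU idle from {t} to {at} ms")           # gap while waiting for the next arrival
    t = max(t, at) + bt
    print(f"{pid} runs until {t} ms")
# Output: P1 runs until 3 ms, CPU idle from 3 to 4 ms, P2 runs until 6 ms, P3 runs until 10 ms.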
The following table illustrates the key differences in the Arrival and Burst Time respectively :

Arrival Time: Marks the entry point of the process into the queue.
Burst Time: Marks the exit point of the process from the queue.

Arrival Time: Computed before the execution of the process.
Burst Time: Computed after the execution of the process.

Arrival Time: Related to the Ready state of the process.
Burst Time: Related to the Running state of the process.

Difference between Priority Scheduling and Round Robin (RR) CPU scheduling
1. Priority Scheduling Algorithm :
Priority scheduling algorithm executes the processes depending upon their priority. Each process
is allocated a priority and the process with the highest priority is executed first. Priorities can be
defined internally as well as externally. Internal priorities are decided by the system depending
upon the number of resources required, time needed etc. whereas external priorities are based
upon the time in which the work is needed or the amount being paid for the work done or the
importance of process. Priority scheduling can be preemptive or non- preemptive.
Note –
 If two processes have the same priority then tie is broken using FCFS.
 The waiting time for the highest priority process is always zero in preemptive mode while it
may not be zero in case of non preemptive mode.
Disadvantages:
The major problem is starvation or indefinite blocking. It may happen that, in a stream of processes, the system keeps executing the high priority processes and the low priority processes never get executed.
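Aging is the usual remedy for this starvation (it is mentioned again further below). The sketch that follows is not from the notes; it is a minimal, hypothetical illustration of non-preemptive priority scheduling in which every waiting job's effective priority improves each time another job completes, so low priority jobs cannot be postponed forever. The job names, priorities and the aging step are invented for illustration.

# Non-preemptive priority scheduling with aging (lower number = higher priority).
# Each time a job completes, every job still waiting "ages" by 1, which
# gradually raises its effective priority and prevents starvation.
jobs = [("A", 5, 4), ("B", 1, 3), ("C", 4, 2)]    # (name, priority, burst), all arrive at t = 0
age = {name: 0 for name, _, _ in jobs}

ready, time = list(jobs), 0
while ready:
    # pick the job with the best effective priority; ties fall back to FCFS order
    ready.sort(key=lambda j: j[1] - age[j[0]])
    name, priority, burst = ready.pop(0)
    time += burst
    print(name, "finishes at t =", time)
    for other, _, _ in ready:
        age[other] += 1                        # waiting jobs age while another job runs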
2. Round-Robin (RR) :
Round-Robin (RR) Scheduling Algorithm is particularly designed for time sharing systems. The
processes are put into the ready queue which is a circular queue in this case. In this case a small
unit of time known as time quantum is defined. The algorithm selects the first process from the
queue and executes it for the time defined by the time quantum. If the process has burst time less
than the time quantum then the CPU executes the next process but if it has burst time higher than
the time quantum then the process is interrupted and next process is executed for same time
quantum. If a process is interrupted then a context switch happens and the process is put back at
the tail of the queue. It is preemptive in nature.
This algorithm mainly depends on the time quantum. Very large time quantum makes RR same
as the FCFS while a very small time quantum will lead to the overhead as context switch will
happen again and again after very small intervals.
The major advantage of this algorithm is that all processes get executed one after the other which
does not lead to starvation of processes or waiting by process for quite long time to get executed.

The difference between Priority Scheduling and Round-Robin (RR) scheduling algorithm are as
follows:

Priority Scheduling: Priority Scheduling executes the processes according to their priority, i.e. the process with higher priority is executed first.
Round-Robin (RR): Round-Robin (RR) executes the processes based upon the time quantum defined, i.e. each process is executed for a fixed amount of time.

Priority Scheduling: Priority Scheduling is both preemptive and non-preemptive in nature.
Round-Robin (RR): Round-Robin (RR) is preemptive in nature.

Priority Scheduling: The average waiting time and average response time are unknown beforehand.
Round-Robin (RR): The average waiting time for a given set of processes is quite small and depends on the time quantum.

Priority Scheduling: It is easy to implement and best suited for real-time operating systems.
Round-Robin (RR): It is quite easy to implement RR in any system.

Priority Scheduling: The problem of blocking of a process can be solved using aging.
Round-Robin (RR): Each process is executed, and every user feels that his work is being done as the CPU gives an equal amount of time to each process.

Difference between EDF and LST CPU scheduling algorithms


1. Earliest Deadline First (EDF) :
In Earliest Deadline First scheduling algorithm, at every scheduling point the task having the
shortest deadline is scheduled for the execution. It is an optimal dynamic priority-driven
scheduling algorithm used in real-time systems. It uses priorities of the tasks for scheduling. In
EDF, priorities to the task are assigned according to the absolute deadline. The task having
shortest deadline gets the highest priority.
Example –
Suppose here are two processes P1 and P2.
Let the period of P1 be p1 = 50
Let the processing time of P1 be t1 = 25
Let the period of P2 be p2 = 75
Let the processing time of P2 be t2 = 30
Explanation :
1. The deadline of P1 is earlier, so the priority of P1 > P2.
2. Initially P1 runs and completes its execution of 25 time units.
3. After 25 time units, P2 starts to execute and runs until time 50, when P1 becomes ready again.
4. Now, comparing the deadlines of (P1, P2) = (100, 75), P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 starts to execute and runs until time 75, when P2 becomes ready again.
7. Now, again comparing the deadlines of (P1, P2) = (100, 150), P1 continues to execute.
8. Repeat the above steps.
9. Finally, at time 150, both P1 and P2 have the same deadline, so P2 will continue to execute for its processing time, after which P1 starts to execute.
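The schedule described above can be reproduced with a unit-time simulation. The sketch below is not part of the original notes; it assumes that each task's deadline coincides with the end of its current period, simulates one hyperperiod of 150 time units, and breaks deadline ties in favour of the task that is already running.

# Unit-time EDF simulation of the two periodic tasks above.
tasks = [("P1", 50, 25), ("P2", 75, 30)]       # (name, period, execution time)
HYPERPERIOD = 150

remaining = {name: 0 for name, _, _ in tasks}  # work left in the current job
deadline = {name: 0 for name, _, _ in tasks}   # absolute deadline of the current job
current, schedule = None, []

for t in range(HYPERPERIOD):
    for name, period, exec_time in tasks:
        if t % period == 0:                    # a new job is released every period
            remaining[name] = exec_time
            deadline[name] = t + period
    ready = [n for n in remaining if remaining[n] > 0]
    if ready:
        # earliest absolute deadline wins; a tie goes to the task already running
        current = min(ready, key=lambda n: (deadline[n], n != current))
        remaining[current] -= 1
        schedule.append(current)
    else:
        current = None
        schedule.append("idle")

# schedule[0:25] is all "P1" and schedule[25:55] is all "P2", matching steps 2 to 5.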
2. Least Slack Time (LST) :
In Least Slack Time scheduling algorithm, at every scheduling point the task having the
minimum laxity is executed first. It is also a dynamic priority-driven scheduling algorithm used
in real-time systems. It assigns some priority to all the tasks in the system according to their
slack time. The task having the least slack time (laxity) gets the highest priority.
Example –
Process P1:
Arrival Time=0, Duration=10, Deadline=33
Process P2:
Arrival Time=4, Duration=3, Deadline=28
Process P3:
Arrival Time=5, Duration=10, Deadline=29
Explanation :
 At time t=0:
Only process P1 has arrived.
P1 is executed till time t=4.
 At time t=4: P2 has arrived.
Slack time of P1: 33-4-6=23
Slack time of P2: 28-4-3=21
Hence P2 starts to execute till time t=5 when P3 arrives.
 At time t=5:
Slack Time of P1: 33-5-6=22
Slack Time of P2: 28-5-2=21

Slack Time of P3: 29-5-10=14
Hence P3 starts to execute till time t=13
 At time t=13:
Slack Time of P1: 33-13-6=14
Slack Time of P2: 28-13-2=13
Slack Time of P3: 29-13-2=14
Hence P2 starts to execute till time t=15
 At time t=15:
Slack Time of P1: 33-15-6=12
Slack Time of P3: 29-15-2=12
Hence P3 starts to execute till time t=16
 At time t=16:
Slack Time of P1: 33-16-6=11
Slack Time of P3: 29-16-1=12
Hence P1 starts to execute till time t=18 and so on.
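The same walkthrough can be reproduced with a unit-time simulation. The sketch below is not from the notes; it recomputes the slack of every ready process at each time unit and, purely as an assumption made to match the walkthrough, breaks ties first in favour of the running process and then in favour of the earlier deadline.

# Unit-time Least Slack Time (LST) simulation of the three processes above.
# slack(t) = deadline - t - remaining execution time; the smallest slack runs.
procs = {
    "P1": {"arrival": 0, "remaining": 10, "deadline": 33},
    "P2": {"arrival": 4, "remaining": 3,  "deadline": 28},
    "P3": {"arrival": 5, "remaining": 10, "deadline": 29},
}

t, current = 0, None
while any(p["remaining"] > 0 for p in procs.values()):
    ready = [n for n, p in procs.items() if p["arrival"] <= t and p["remaining"] > 0]
    if ready:
        current = min(ready, key=lambda n: (
            procs[n]["deadline"] - t - procs[n]["remaining"],  # slack
            n != current,                                      # prefer the running process
            procs[n]["deadline"],                              # then the earlier deadline
        ))
        procs[current]["remaining"] -= 1
        print(t, current)
    t += 1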

Difference between EDF and LST scheduling algorithms :


EDF: The task having the shortest deadline is scheduled first.
LST: The task having the minimum slack time is scheduled first.

EDF: It assigns priority to tasks according to their deadlines.
LST: It assigns priority to tasks according to their slack time.

EDF: It can be used for both static and dynamic scheduling.
LST: It is used only for dynamic scheduling.

EDF: The execution time of a task is not required.
LST: It requires the execution time of a task.

EDF: It is a simple and optimal algorithm.
LST: It is a complex algorithm.

EDF: It can be implemented on any set of tasks.
LST: It can only be implemented on a set of tasks whose burst times are known.

EDF: It utilizes the CPU fully (sometimes even 100%).
LST: It may under-utilize the CPU.

EDF: It increases the efficiency and throughput of the processor.
LST: It may decrease the efficiency and throughput of the processor.

Difference between Priority scheduling and Shortest Job First (SJF) CPU scheduling
1. Priority Scheduling Algorithm :
Priority scheduling algorithm executes the processes depending upon their priority. Each process
is allocated a priority and the process with the highest priority is executed first. Priorities can be
defined internally as well as externally. Internal priorities are decided by the system depending
upon the number of resources required, time needed etc. whereas external priorities are based
upon the time in which the work is needed or the amount being paid for the work done or the
importance of process. Priority scheduling can be preemptive or non- preemptive.
Note –
 If two processes have the same priority then tie is broken using FCFS.
 The waiting time for the highest priority process is always zero in preemptive mode while it
may not be zero in case of non preemptive mode.
Disadvantages:
The major problem is the starvation or indefinite blocking. It may so happen that in stream of
processes, the system keeps executing the high priority processes and the low priority processes
never get executed.
2. Shortest Job First (SJF) :
Shortest Job First (SJF) Scheduling Algorithm is based upon the burst time of the process. The
processes are put into the ready queue based on their burst times. In this algorithm, the process
with the least burst time is processed first. The burst time of only those processes is compared
that are present or have arrived until that time. It is also non-preemptive in nature. Its preemptive
version is called Shortest Remaining Time First (SRTF) algorithm.
The major advantage of this algorithm is that it gives the minimum waiting time for a given set
of processes and thus reduces the average waiting time. The disadvantage of this algorithm is
that long processes may never be processed by the system and may remain in the queue for very
long time leading to starvation of processes.
Note –
If two processes have same burst time then the tie is broken using FCFS, i.e., the process that
arrived first is processed first.
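A minimal sketch of non-preemptive SJF follows (not from the notes; the four processes and their times are invented for illustration). Among the processes that have already arrived, the one with the smallest burst time runs next, and ties fall back to arrival order as described in the note above.

# Non-preemptive SJF: among the processes that have already arrived, run the one
# with the smallest burst time; ties are broken by arrival time (FCFS).
procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]   # (name, arrival, burst)

time, finished = 0, []
pending = sorted(procs, key=lambda p: p[1])
while pending:
    arrived = [p for p in pending if p[1] <= time] or [pending[0]]  # idle until the next arrival
    nxt = min(arrived, key=lambda p: (p[2], p[1]))                  # shortest burst, then FCFS
    time = max(time, nxt[1]) + nxt[2]
    finished.append((nxt[0], time))
    pending.remove(nxt)

print(finished)    # [('P1', 7), ('P3', 8), ('P2', 12), ('P4', 16)]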

The difference between Shortest job first (SJF) and Priority Scheduling algorithm are as follows:

SJF: Shortest Job First (SJF) executes the processes based upon their burst time, i.e. in ascending order of their burst times.
Priority: Priority scheduling executes the processes based upon their priorities, i.e. in descending order of their priorities. A process with higher priority is executed first.

SJF: SJF is non-preemptive, but its preemptive version, called the Shortest Remaining Time First (SRTF) algorithm, also exists.
Priority: Priority scheduling is both preemptive and non-preemptive in nature.

SJF: The average waiting time for a given set of processes is minimum.
Priority: The average waiting time and response time are not known beforehand.

SJF: The real difficulty with SJF is knowing the length of the next CPU request or burst.
Priority: It is quite easy to implement and best for real-time operating systems.

SJF: A long process may never get executed and the system may keep executing the short processes.
Priority: The problem of blocking of a process can be solved through aging, which means gradually increasing the priority of a process after a fixed interval of time by a fixed number.

Difference between First Come First Served (FCFS) and Round Robin (RR) Scheduling
Algorithm
First Come First Served Scheduling Algorithm:
First Come First Served (FCFS) is the simplest and non-preemptive scheduling algorithm. In
First Come First Served (FCFS), the process is allocated to the CPU in the order of their arrival.
A queue data structure is used to implement the FCFS scheduling algorithm. The process which
is at the head of the ready queue is allocated to the CPU, when CPU is free. Then the process
which is running is removed from the queue. When a new process enters into the ready queue, it
is placed onto the tail of the ready queue.

Round Robin Scheduling Algorithm:
Round Robin (RR) Scheduling Algorithm is design for the time sharing system. This algorithm
is the preemptive scheduling algorithm. In Round Robin Scheduling Algorithm a small unit of
time called as time quantum or time slice for which the CPU is provided to each job. CPU is
allocated to the each job for the duration equal to the time quantum in cyclic order. This time
quantum, time slice or time interval is generally of the order of 10 to 100 milliseconds. Ready
queue in the Round Robin Scheduling Algorithm is treated as the circular queue.
The difference between First Come First Served (FCFS) and Round Robin(RR) scheduling
algorithm are as follows:

1. FCFS: First Come First Served (FCFS) is a non-preemptive scheduling algorithm.
   RR: Round Robin (RR) is a preemptive scheduling algorithm.

2. FCFS: FCFS has minimal overhead.
   RR: RR has a small overhead, as it is necessary to record the elapsed time and then switch the process, which causes an overhead.

3. FCFS: The First Come First Served scheduling algorithm gives a high response time for the processes.
   RR: In the Round Robin scheduling algorithm, short processes get a very low response time.

4. FCFS: FCFS is inconvenient to use in a time-sharing system.
   RR: It is mainly designed for time-sharing systems and hence convenient to use.

5. FCFS: The average waiting time is generally not minimal in the First Come First Served scheduling algorithm.
   RR: In the Round Robin scheduling algorithm the average waiting time is minimal.

6. FCFS: The process is simply executed in the order of its arrival in FCFS.
   RR: It is similar to FCFS in processing but uses a time quantum.

Difference between Shortest Job First (SJF) and Round-Robin (RR) scheduling algorithms
1.Shortest Job First (SJF) :
Shortest Job First (SJF) Scheduling Algorithm is based upon the burst time of the process. The

processes are put into the ready queue based on their burst times. In this algorithm, the process
with the least burst time is processed first. The burst time of only those processes is compared
that are present or have arrived until that time. It is also non-preemptive in nature. Its preemptive
version is called Shortest Remaining Time First (SRTF) algorithm.
The major advantage of this algorithm is that it gives the minimum waiting time for a given set
of processes and thus reduces the average waiting time. The disadvantage of this algorithm is
that long processes may never be processed by the system and may remain in the queue for very
long time leading to starvation of processes.
Note –
If two processes have same burst time then the tie is broken using FCFS, i.e., the process that
arrived first is processed first.
2.Round-Robin (RR) :
Round-Robin (RR) Scheduling Algorithm is particularly designed for time sharing systems. The
processes are put into the ready queue which is a circular queue in this case. In this case a small
unit of time known as time quantum is defined. The algorithm selects the first process from the
queue and executes it for the time defined by the time quantum. If the process has burst time less
than the time quantum then the CPU executes the next process but if it has burst time higher than
the time quantum then the process is interrupted and next process is executed for same time
quantum. If a process is interrupted then a context switch happens and the process is put back at
the tail of the queue. It is preemptive in nature.
This algorithm mainly depends on the time quantum. Very large time quantum makes RR same
as the FCFS while a very small time quantum will lead to the overhead as context switch will
happen again and again after very small intervals.
The major advantage of this algorithm is that all processes get executed one after the other which
does not lead to starvation of processes or waiting by process for quite long time to get executed.

The difference between Shortest Job First (SJF) and Round-Robin (RR) scheduling algorithm are
as follows:

SJF: Shortest Job First (SJF) executes the processes based upon their burst time, i.e. in ascending order of their burst times.
RR: Round-Robin (RR) executes the processes based upon the time quantum defined, i.e. each process is executed for a fixed amount of time.

SJF: SJF is non-preemptive, but its preemptive version, called the Shortest Remaining Time First (SRTF) algorithm, also exists.
RR: Round-Robin (RR) is preemptive in nature.

SJF: The average waiting time for a given set of processes is minimum.
RR: The average waiting time for a given set of processes is quite small and depends on the time quantum.

SJF: The real difficulty with SJF is knowing the length of the next CPU request or burst.
RR: It is quite easy to implement RR.

SJF: A long process may never get executed and the system may keep executing the short processes.
RR: Each process is executed, and every user feels that his work is being done as the CPU gives an equal amount of time to each process.

SJF: In the case of SJF, elapsed time has to be recorded, which results in more overhead on the processor.
RR: In the case of RR, if the time quantum is very small then context switches take place again and again after very short intervals of time, which leads to overhead.

Difference between SRJF and LRJF CPU scheduling algorithms


1. Shortest remaining job first (SRJF) :
Shortest remaining job first also called the shortest remaining time first is the preemptive version
of the shortest job first scheduling algorithm.
In shortest remaining job first, the process with the smallest remaining runtime is scheduled to run next. In SRJF, a running process will be preempted if a new process arrives that requires a shorter burst time for execution.
2. Longest remaining job first (LRJF) :
Longest remaining job first also called longest remaining time first is exactly opposite of SRJF.
In this type of CPU scheduling, the process which has the longest processing time is scheduled to
run next and a running process will be preempted if a new process with longer burst time enters
the queue.
Difference table :

SRJF: Short processes are executed first, and at any instant, if a process with a shorter remaining time arrives, it will be executed first.
LRJF: Long processes are executed first, and at any instant, if a longer process appears, it will be executed first.

SRJF: It does not have a large average turn around time, therefore it is more effective than LRJF.
LRJF: It has a very large average turn around time and waiting time, which reduces the effectiveness of the operating system.

SRJF: It does not lead to the convoy effect.
LRJF: It will lead to the convoy effect.

SRJF: More processes are executed in less amount of time.
LRJF: Fewer processes are executed in a certain amount of time.

Let’s solve one problem for each:


LRJF:
Processes Arrival Time Burst Time

P1 0 2

P2 0 4

P3 0 8

Longest remaining job first:


Gantt chart:

Therefore the final table will be:


LRJF:
Processes    Arrival Time    Burst Time    Completion Time    Turn Around Time    Waiting Time

P1 0 2 12 12 10

P2 0 4 13 13 9

P3 0 8 14 14 6

Turn around time = Completion time - Arrival time

Average turn around time,


= [(12-0) + (13-0) + (14-0)]/3
= (12 + 13 + 14)/3
= 13

Waiting time = Turn around time - Burst time

Average waiting time,


= [(12-2) + (13-4) + (14-8)]/3
= (10 + 9 + 6)/3
= 8.33
Problem two:
SRJF:

Processes    Arrival Time    Burst Time

P1    0    20

P2    15    25

P3    30    10

P4    45    15

Shortest remaining time first:


Gantt chart:

Therefore the final table will be:


SRJF

Processes    Arrival Time    Burst Time    Completion Time    Turn Around Time    Waiting Time

P1 0 20 20 20 0

P2 15 25 55 40 15

P3 30 10 40 10 0

P4 45 15 70 25 10

Turn around time = Completion time - Arrival time

Average turn around time,


= [(20-0) + (55-15) + (40-30) + (70-45)]/4
= (20 + 40 + 10 + 25)/4
= 23.75

Waiting time = Turn around time - Burst time

Average waiting time,
= [(20-20) + (40-25) + (10-10) + (25-15)] / 4
= (0 + 15 + 0 + 10) / 4
= 6.25
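These figures can be cross-checked with a short preemptive simulation. The sketch below is not part of the original notes; it advances time one unit at a time and always runs the arrived process with the least remaining burst, reproducing the completion, turnaround and waiting times in the table above.

# Unit-time SRJF (preemptive SJF) simulation for problem two above.
procs = {"P1": (0, 20), "P2": (15, 25), "P3": (30, 10), "P4": (45, 15)}  # (arrival, burst)
remaining = {n: b for n, (a, b) in procs.items()}
completion = {}

t = 0
while remaining:
    ready = [n for n in remaining if procs[n][0] <= t]
    if ready:
        n = min(ready, key=lambda x: remaining[x])   # least remaining burst runs
        remaining[n] -= 1
        if remaining[n] == 0:
            completion[n] = t + 1
            del remaining[n]
    t += 1

for n, (arrival, burst) in procs.items():
    tat = completion[n] - arrival
    print(n, "CT =", completion[n], "TAT =", tat, "WT =", tat - burst)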
Difference between Multilevel Queue (MLQ) and Multi Level Feedback Queue (MLFQ)
CPU scheduling algorithms
In a multiprogramming environment, it often happens that more than one process competes for CPU resources at the same time. If only one CPU is available, a choice has to be made about which process to run next. The part of the operating system responsible for making this choice is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
The objective of multiprogramming is to maximize CPU utilization. Criteria such as turnaround time, response time, waiting time and throughput have been suggested, on the basis of which scheduling algorithms are judged. There are many CPU scheduling algorithms, two of which are:
1. Multilevel Queue Scheduling
2. Multilevel Feedback Queue Scheduling
Difference between Multilevel Queue (MLQ) and Multi Level Feedback Queue (MLFQ)
CPU scheduling algorithms :
MLQ: It is a queue scheduling algorithm in which the ready queue is partitioned into several smaller queues and processes are permanently assigned to these queues. The processes are divided on the basis of intrinsic characteristics such as memory size, priority, etc.
MLFQ: In this algorithm, the ready queue is partitioned into smaller queues on the basis of CPU burst characteristics. The processes are not permanently allocated to one queue and are allowed to move between queues.

MLQ: In this algorithm, queues are classified into two groups, the first containing background processes and the second containing foreground processes. 80% of the CPU time is given to the foreground queue using the Round Robin algorithm and 20% to the background processes using the First Come First Serve algorithm.
MLFQ: Here, queues are classified as higher priority queues and lower priority queues. If a process takes a longer time in execution, it is moved to a lower priority queue. Thus, this algorithm leaves I/O-bound and interactive processes in the higher priority queues.

MLQ: The priority is fixed in this algorithm. Only when all processes in one queue have executed completely are the processes in the next queue executed. Thus, starvation can occur.
MLFQ: The priority of a process is dynamic, as it is allowed to move between queues. A process taking a longer time in a lower priority queue can be shifted to a higher priority queue and vice versa. Thus, it prevents starvation.

MLQ: Since processes do not move between queues, it has low scheduling overhead but is inflexible.
MLFQ: Since processes are allowed to move between queues, it has high scheduling overhead but is flexible.

1. Example for Multilevel Queue Scheduling (MLQ) :


A multilevel queue with five queues is listed below in order of priority.
1. System Process Queue
2. Interactive Processes Queue
3. Interactive Editing Processes Queue
4. Batch Processes Queue
5. Student Processes Queue
Here, each queue has its own scheduling algorithm, and the process with the highest priority is chosen. It is then executed preemptively or non-preemptively. No process in a lower priority queue can execute until all higher priority queues are empty.
For example, if a batch process is running and an interactive process enters the ready state, the batch process is preempted and the interactive process is allowed to execute.
2. Example for Multilevel Feedback Queue Scheduling (MLFQ) :
Now, let us consider multilevel feedback queue with three queues.

1. A Round Robin queue with time quantum of 8 milliseconds, say Q1.


2. A Round Robin queue with time quantum of 16 milliseconds, say Q2.
3. A First Come First Serve queue, say Q3.
Now, when a process enters Q1 it is allowed to execute, and if it does not complete within 8 milliseconds it is shifted to Q2, where it receives 16 milliseconds. If it still does not complete within 16 milliseconds, it is moved to Q3. In this manner, scheduling is carried out in this scheme.
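A minimal sketch of this three-queue scheme follows (not from the notes). It assumes all jobs are present at time 0, uses the quanta given above (8 ms, 16 ms, then FCFS), and demotes any job that does not finish within its quantum; the burst times are invented for illustration.

# Minimal multilevel feedback queue sketch matching the three queues above:
# Q1 (RR, quantum 8 ms), Q2 (RR, quantum 16 ms), Q3 (FCFS). A job that does
# not finish inside its quantum is demoted to the next lower queue.
from collections import deque

def mlfq(bursts):                        # bursts: {name: total CPU burst in ms}
    queues = [deque(bursts.items()), deque(), deque()]
    quanta = [8, 16, None]               # None means run to completion (FCFS)
    t = 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, left = queues[level].popleft()
        slice_ = left if quanta[level] is None else min(left, quanta[level])
        t += slice_
        left -= slice_
        if left > 0:                     # unfinished: demote one level
            queues[min(level + 1, 2)].append((name, left))
        else:
            print(name, "finishes at t =", t, "ms in queue Q" + str(level + 1))

mlfq({"A": 5, "B": 30, "C": 60})         # illustrative burst times, not from the notes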

Difference between Long-Term and Short-Term Scheduler


Long-Term Scheduler is also known as the Job Scheduler. The long-term scheduler regulates which programs are admitted to the system for processing: the programs are set up in a queue and, as per the requirement, the best job is selected; the long-term scheduler takes these processes from the job pool. It regulates the Degree of Multi-programming (DOM). Short-Term Scheduler is also known as the CPU Scheduler. The short-term scheduler decides which program is suitable or important for processing next, and it has less influence over the DOM (Degree of Multi-programming).

Difference Between Long-Term and Short-Term Scheduler:

Long-Term Scheduler: It takes processes from the job pool.
Short-Term Scheduler: It takes processes from the ready queue.

Long-Term Scheduler: It is also known as the Job Scheduler.
Short-Term Scheduler: It is also known as the CPU Scheduler.

Long-Term Scheduler: The programs are set up in a queue and, as per the requirement, the best job is selected.
Short-Term Scheduler: No such queue exists.

Long-Term Scheduler: It has more control over the Degree of Multi-programming (DOM).
Short-Term Scheduler: It has less control over the DOM (Degree of Multi-programming).

Long-Term Scheduler: It regulates which programs are admitted to the system for processing.
Short-Term Scheduler: It decides which program is suitable or important for processing next.

Long-Term Scheduler: Its speed is less than that of the short-term scheduler.
Short-Term Scheduler: Its speed is very fast compared to the long-term scheduler.

Long-Term Scheduler: It changes the process state from New to Ready.
Short-Term Scheduler: It changes the process state from Ready to Running.

Long-Term Scheduler: Time-sharing operating systems have no long-term scheduler.
Short-Term Scheduler: It may be minimal in a time-sharing system.

Long-Term Scheduler: It selects a good process mix of I/O-bound and CPU-bound processes.
Short-Term Scheduler: It selects a new process for the CPU quite frequently.

Long-Term Scheduler: The long-term scheduler controls multi-programming.
Short-Term Scheduler: The short-term scheduler controls multitasking.

Difference between SJF and LJF CPU scheduling algorithms


Shortest Job First:
The shortest job first (SJF) algorithm is a CPU scheduling algorithm designed to reorder the jobs
so that the process having the smallest burst time is chosen for the next execution. It is used to
reduce the average waiting time for other processes waiting for execution. This may
be preemptive or non-preemptive. Its preemptive version is called Shortest Remaining Time First
(SRTF).
When a job comes in, it is inserted in the ready queue based on its burst time. SJF minimizes the
average wait time because it gives service to less burst time processes before it gives service to
processes with larger burst time.
The major advantage of using this algorithm is that it reduces the average turn-around time and average waiting time, thus increasing the effectiveness of the system.
While it minimizes average wait time, it may punish processes with higher burst time. If shorter
execution time processes are in the ready list, then processes with large burst times tend to be left
in the ready list while small processes receive service. It may happen in extreme cases that
always short execution time processes will be served and large execution time processes will
wait indefinitely. This starvation of longer execution time processes is the limitation of this
algorithm.
Longest Job First:
Longest Job First (LJF), on the other hand, is the exact opposite of SJF. It is designed to reorder the jobs so that the process having the largest burst time is chosen for the next execution. In its non-preemptive form, once a process starts its execution it cannot be interrupted, and any other process can be executed only after the assigned process has completed its processing and been terminated. When a job comes in, it is inserted in the ready queue based on its burst time. LJF may be preemptive or non-preemptive; its preemptive version is called Longest Remaining Time First (LRTF).
The major disadvantage of this algorithm is that it might lead to starvation of processes. Convoy
effect is another problem and leads to reduced throughput. Therefore this algorithm is seldom
used.

SJF: Short processes are executed first, followed by longer processes.
LJF: Longer processes are executed first, followed by shorter processes.

SJF: The throughput is increased because more processes can be executed in less amount of time.
LJF: The throughput is decreased because fewer processes can be executed in a certain amount of time.

SJF: It does not lead to the convoy effect.
LJF: It might lead to the convoy effect.

SJF: It has a smaller average turn-around time and average waiting time, thus increasing the effectiveness of the system.
LJF: It has a very large average turn-around time and average waiting time. This results in slow processing and decreases the effectiveness of the system.

SJF: Processes with longer bursts may starve.
LJF: Processes with shorter bursts may starve.

Consider the following example:


Process    Arrival Time    Burst Time

P1 0 10

P2 1 5

P3 2 8

P4 2 15

Let’s try and solve this problem using both SJF and LJF to do a comparative study.
1. Longest Job First:
The Gantt chart will look like:

Average waiting time (AWT),
= ((0-0) + (33-1) + (25-2) + (10-2)) / 4
= 63/4
= 15.75

Average turnaround time (ATAT),


= ((10-0) + (38-1) + (33-2) + (25-2))/4
= 101/4
= 25.25
2.(a). Shortest Job First (Non-Preemptive):
The Gantt chart will look like:

Average waiting time (AWT),


= ((0-0) + (10-1) + (15-2) + (23-2)) / 4
= 43 / 4
= 10.75

Average turnaround time (ATAT),


= ((10-0) + (15-1) + (23-2) + (38-2)) / 4
= 81 / 4
= 20.25
2.(b). Shortest Job First (Preemptive):
The Gantt chart will look like:

Average waiting time (AWT),


= ((23-0-10) + (6-1-5) + (14-2-8) + (38-2-15)) / 4
= (13 + 0 + 4 + 21) / 4
= 38 / 4
= 9.5

Average turnaround time (ATAT),


= ((23-0) + (6-1) + (14-2) + (38-2)) / 4
= 76 / 4
= 19
From the above example it is clear that SJF algorithm is more efficient than LJF algorithm.

Difference between CPU and GPU


Central Processing Unit (CPU):
The CPU is known as the brain of every computer system. The CPU comprises the arithmetic logic unit (ALU), used to quickly store information and perform calculations, and the control unit (CU), which performs instruction sequencing and branching. The CPU interacts with other computer components, such as memory and input and output devices, to execute instructions.

Graphics Processing Unit (GPU):

The GPU is used to render images, for example in computer games. A GPU is faster than a CPU and emphasises high throughput. It is generally incorporated with the rest of the electronic equipment and can share RAM with it, which suits most of the computing tasks it performs. It contains more ALU units than a CPU.

The basic difference between CPU and GPU is that CPU emphasis on low latency. Whereas,
GPU emphasis on high throughput.
Let’s see the difference between CPU and GPU:

1. CPU stands for Central Processing Unit.
   GPU stands for Graphics Processing Unit.

2. A CPU consumes or needs more memory than a GPU.
   A GPU consumes or requires less memory than a CPU.

3. The speed of a CPU is less than a GPU's speed.
   A GPU is faster than a CPU.

4. A CPU contains fewer, but more powerful, cores.
   A GPU contains many more, but weaker, cores.

5. A CPU is suitable for serial instruction processing.
   A GPU is not suitable for serial instruction processing.

6. A CPU is not suitable for parallel instruction processing.
   A GPU is suitable for parallel instruction processing.

7. A CPU emphasises low latency.
   A GPU emphasises high throughput.

Difference between Preemptive and Cooperative Multitasking


Multitasking is the methodology of executing multiple tasks or processes concurrently over a
period of time. Preemptive and cooperative multitasking are two types of multitasking.
In preemptive multitasking, the operating system can initiate a context switching from the
running process to another process. In other words, the operating system allows stopping the
execution of the currently running process and allocating the CPU to some other process. The OS
uses some criteria to decide for how long a process should execute before allowing another
process to use the operating system. The mechanism of taking control of the operating system
from one process and giving it to another process is called preempting or preemption.
In cooperative multitasking, the operating system never initiates context switching from the
running process to another process. A context switch occurs only when the processes voluntarily
yield control periodically or when idle or logically blocked to allow multiple applications to
execute simultaneously. Also, in this multitasking, all the processes cooperate for the scheduling
scheme to work.
Let’s see the difference between preemptive multitasking and cooperative multitasking.

1. Preemptive multitasking is a technique used by the OS to decide for how long a task should be executed before allowing another task to use the CPU.
   Cooperative multitasking is a type of computer multitasking in which the operating system never initiates a context switch from a running process to another process.

2. It interrupts applications and gives control to other processes outside the application's control.
   In cooperative multitasking, the process scheduler never interrupts a process unexpectedly.

3. The operating system can initiate a context switch from a running process to another process.
   The operating system does not initiate a context switch from a running process to another process.

4. If a malicious program initiates an infinite loop, it only hurts itself, without affecting other programs or threads.
   A malicious program can bring the entire system to a halt by busy waiting or running an infinite loop and not giving up control.

5. Preemptive multitasking forces applications to share the CPU whether they want to or not.
   In cooperative multitasking, all programs must cooperate for it to work. If one program does not cooperate, it can hog the CPU.

6. UNIX, Windows 95 and Windows NT operating systems are examples of preemptive multitasking.
   Macintosh OS versions 8.0-9.2.2 and Windows 3.x operating systems are examples of cooperative multitasking.

Deadline Monotonic CPU Scheduling


Deadline Monotonic Scheduling :
It is a fixed priority based algorithm in which priorities are assigned to each task based on their relative deadline: the task with the shortest deadline is assigned the highest priority. It is a preemptive scheduling algorithm, meaning that if a task of higher priority arrives, the running task is preempted and the higher priority task is assigned to the CPU.
The priority of a task is inversely proportional to its deadline, i.e., the task with the shortest deadline is assigned the highest priority. The deadline is the time limit within which the task has to be completed.
Task can be represented as shown in figure :

Example –
Suppose there are two tasks that need to be executed.
Task1 has release time 0, period 7 units, execution time 2 units, and deadline of 6 units ( T1
( 0, 7, 2, 6 ) ). Task2 has release time 0, period 5 units, execution time 2 units, and deadline of
4 units ( T2 ( 0, 5, 2, 4 ) ).

Step by step explanation –


 At T=0 both T1 and T2 are available, but deadline(T2) < deadline(T1). So T2 gets the CPU and executes till T=2. The next job of T2 arrives at T=5.
 At T=2 only T1 is available, so T1 executes till T=4. The next job of T1 arrives at T=7.
 From T=4 to T=5 the CPU remains idle as no task is available for execution.
 At T=5 only T2 is available, so T2 executes till T=7. The next job of T2 arrives at T=10.
 At T=7 only T1 is available, so T1 executes till T=9. The next job of T1 arrives at T=14.
 From T=9 to T=10 the CPU remains idle as no task is available for execution.
 At T=10 only T2 is available, so T2 executes till T=12. The next job of T2 arrives at T=15.
 From T=12 to T=14 the CPU remains idle as no task is available for execution.
 At T=14 only T1 is available, so T1 executes till T=15. One unit of work for T1 is still remaining.
 At T=15 both T1 and T2 are available, but deadline(T2) < deadline(T1). So T2 gets the CPU and executes till T=17. The next job of T2 arrives at T=20.
Complete execution process is shown in figure below –
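The figure is not reproduced here, but the timeline above can be recreated with a small simulation. The sketch below is not from the notes; it assigns fixed priorities by relative deadline (so T2 always beats T1) and steps through the first 20 time units one unit at a time.

# Unit-time deadline-monotonic simulation of T1(0, 7, 2, 6) and T2(0, 5, 2, 4).
# Priorities are fixed: the task with the shorter relative deadline (T2) always wins.
tasks = {"T1": {"period": 7, "exec": 2, "deadline": 6},
         "T2": {"period": 5, "exec": 2, "deadline": 4}}
order = sorted(tasks, key=lambda n: tasks[n]["deadline"])    # ["T2", "T1"]

remaining = {n: 0 for n in tasks}
for t in range(20):
    for n, p in tasks.items():
        if t % p["period"] == 0:           # a new job is released at each period boundary
            remaining[n] = p["exec"]
    runnable = [n for n in order if remaining[n] > 0]
    if runnable:
        remaining[runnable[0]] -= 1        # the highest fixed priority task runs
        print(t, runnable[0])
    else:
        print(t, "idle")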

Advantages :
 Optimal for Static Priority Scheduling.
 Performs well in case of availability of tasks having longer period but shorter deadline.
 Good performance in case of overload.
Disadvantages :
 Implementation is complex.
 It is a time taking process.

Rate-monotonic scheduling
Rate monotonic scheduling is a priority algorithm that belongs to the static priority scheduling category of real-time operating systems. It is preemptive in nature. The priority is decided according to the cycle time (period) of the processes that are involved: the process with the shortest period has the highest priority. Thus if a process with the highest priority starts execution, it will preempt the other running processes. The priority of a process is inversely proportional to the period it runs for.
A set of processes can be scheduled only if they satisfy the following equation:

U = C1/T1 + C2/T2 + ... + Cn/Tn <= n( 2^(1/n) - 1 )

where n is the number of processes in the process set, Ci is the computation time of process i, Ti is the time period of process i, and U is the processor utilization.
Example:
An example to understand the working of Rate monotonic scheduling algorithm.
Processes    Execution Time (C)    Time Period (T)

P1 3 20

P2 2 5

P3 2 10

n( 2^(1/n) - 1 ) = 3 ( 2^(1/3) - 1 ) = 0.7798

U = 3/20 + 2/5 + 2/10 = 0.75


The combined utilization of the three processes (0.75) is less than both 100% and the threshold (0.7798), which means the above set of processes is schedulable and thus satisfies the above equation of the algorithm.
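The same check can be written down directly. The sketch below is not from the notes; it evaluates the utilization and the Liu and Layland bound for the three processes above.

# Schedulability test for the example: U = sum(Ci / Ti) must not exceed n * (2**(1/n) - 1).
tasks = [("P1", 3, 20), ("P2", 2, 5), ("P3", 2, 10)]     # (name, C, T)

n = len(tasks)
utilisation = sum(c / t for _, c, t in tasks)            # 3/20 + 2/5 + 2/10 = 0.75
bound = n * (2 ** (1 / n) - 1)                           # 3 * (2**(1/3) - 1) = 0.7798
print("U =", round(utilisation, 3), "bound =", round(bound, 4),
      "schedulable:", utilisation <= bound)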
1. Scheduling time –
For calculating the Scheduling time of algorithm we have to take the LCM of the Time period
of all the processes. LCM ( 20, 5, 10 ) of the above example is 20. Thus we can schedule it by
20 time units.
2. Priority –
As discussed above, the priority will be the highest for the process which has the least running
time period. Thus P2 will have the highest priority, after that P3 and lastly P1.
P2 > P3 > P1
3. Representation and flow –

Process P2 will execute for 2 time units in every period of 5, Process P3 will execute for 2 time units in every period of 10, and Process P1 will execute for 3 time units within its period of 20. This has to be kept in mind for understanding the entire execution of the algorithm
below.

Process P2 will run first for 2 time units because it has the highest priority. After completing
its two units, P3 will get the chance and thus it will run for 2 time units.
As we know that process P2 will run 2 times in the interval of 5 time units and process P3 will
run 2 times in the interval of 10 time units, they have fulfilled the criteria and thus now
process P1 which has the least priority will get the chance and it will run for 1 time. And here
the interval of five time units have completed. Because of its priority P2 will preempt P1 and
thus will run 2 times. As P3 have completed its 2 time units for its interval of 10 time units, P1
will get chance and it will run for the remaining 2 times, completing its execution which was
thrice in 20 time units.
Now 9-10 interval remains idle as no process needs it. At 10 time units, process P2 will run
for 2 times completing its criteria for the third interval ( 10-15 ). Process P3 will now run for
two times completing its execution. Interval 14-15 will again remain idle for the same reason
mentioned above. At 15 time unit, process P2 will execute for two times completing its
execution.This is how the rate monotonic scheduling works.
Conditions :
The analysis of Rate monotonic scheduling assumes few properties that every process should
possess. They are :
1. The processes involved should not share resources with other processes.
2. Deadlines must be equal to the time periods. Deadlines are deterministic.
3. The highest priority process that needs to run will preempt all the other processes.
4. Priorities must be assigned to all the processes according to the protocol of Rate monotonic
scheduling.
Advantages :

1. It is easy to implement.
2. If any static priority assignment algorithm can meet the deadlines then rate monotonic
scheduling can also do the same. It is optimal.
3. It takes the calculated time periods of the processes into account, unlike time-sharing algorithms such as Round Robin, which neglect the scheduling needs of individual processes.
Disadvantages :
1. It is very difficult to support aperiodic and sporadic tasks under RMA.
2. RMA is not optimal when tasks period and deadline differ.

Multiple-Processor Scheduling in Operating System


In multiple-processor scheduling multiple CPU’s are available and hence Load
Sharing becomes possible. However multiple processor scheduling is more complex as
compared to single processor scheduling. In multiple processor scheduling there are cases when
the processors are identical i.e. HOMOGENEOUS, in terms of their functionality, we can use
any processor available to run any process in the queue.

Approaches to Multiple-Processor Scheduling –

One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing. This entire scenario is called Asymmetric Multiprocessing.
A second approach uses Symmetric Multiprocessing where each processor is self scheduling.
All processes may be in a common ready queue or each processor may have its own private
queue for ready processes. The scheduling proceeds further by having the scheduler for each
processor examine the ready queue and select a process to execute.

Processor Affinity –

Processor Affinity means that a process has an affinity for the processor on which it is currently running.
When a process runs on a specific processor there are certain effects on the cache memory. The
data most recently accessed by the process populate the cache for the processor and as a result
successive memory access by the process are often satisfied in the cache memory. Now if the
process migrates to another processor, the contents of the cache memory must be invalidated for
the first processor and the cache for the second processor must be repopulated. Because of the
high cost of invalidating and repopulating caches, most of the SMP(symmetric multiprocessing)
systems try to avoid migration of processes from one processor to another and try to keep a
process running on the same processor. This is known as PROCESSOR AFFINITY.
There are two types of processor affinity:
1. Soft Affinity – When an operating system has a policy of attempting to keep a process
running on the same processor but not guaranteeing it will do so, this situation is called soft
affinity.
2. Hard Affinity – Hard Affinity allows a process to specify a subset of processors on which it
may run. Some systems such as Linux implements soft affinity but also provide some system
calls like sched_setaffinity() that supports hard affinity.
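On Linux this system call is also exposed to user programs; for instance, Python's os module wraps it. The snippet below is a minimal illustration (not from the notes, Linux only) of setting a hard affinity mask for the current process.

# Hard affinity from user space (Linux only): pin the current process to CPUs 0 and 1.
# os.sched_setaffinity wraps the sched_setaffinity() system call mentioned above.
import os

print("allowed CPUs before:", os.sched_getaffinity(0))    # pid 0 means "this process"
os.sched_setaffinity(0, {0, 1})                            # restrict to CPUs 0 and 1
print("allowed CPUs after: ", os.sched_getaffinity(0))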

Load Balancing –

Load Balancing is the phenomenon of keeping the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue it is unnecessary, because once a processor becomes idle it immediately extracts a runnable process from the common queue. On SMP (symmetric multiprocessing) systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor. Otherwise one or more processors will sit idle while other processors have high workloads, along with lists of processes awaiting the CPU.
There are two general approaches to load balancing :
1. Push Migration – In push migration a task routinely checks the load on each processor and if
it finds an imbalance then it evenly distributes load on each processors by moving the
processes from overloaded to idle or less busy processors.
2. Pull Migration – Pull Migration occurs when an idle processor pulls a waiting task from a
busy processor for its execution.

Multicore Processors –

In multicore processors multiple processor cores are places on the same physical chip. Each
core has a register set to maintain its architectural state and thus appears to the operating system
as a separate physical processor. SMP systems that use multicore processors are faster and
consume less power than systems in which each processor has its own physical chip.
However multicore processors may complicate the scheduling problems. When processor
accesses memory then it spends a significant amount of time waiting for the data to become
available. This situation is called MEMORY STALL. It occurs for various reasons such as
cache miss, which is accessing the data that is not in the cache memory. In such cases the
processor can spend upto fifty percent of its time waiting for data to become available from the
memory. To solve this problem recent hardware designs have implemented multithreaded
processor cores in which two or more hardware threads are assigned to each core. Therefore if
one thread stalls while waiting for the memory, core can switch to another thread.
There are two ways to multithread a processor :
1. Coarse-Grained Multithreading – In coarse grained multithreading a thread executes on a
processor until a long latency event such as a memory stall occurs, because of the delay
caused by the long latency event, the processor must switch to another thread to begin
execution. The cost of switching between threads is high as the instruction pipeline must be
terminated before the other thread can begin execution on the processor core. Once this new
thread begins execution it begins filling the pipeline with its instructions.
2. Fine-Grained Multithreading – This multithreading switches between threads at a much
finer level mainly at the boundary of an instruction cycle. The architectural design of fine
grained systems include logic for thread switching and as a result the cost of switching
between threads is small.

Virtualization and Threading –

In this type of multiple-processor scheduling even a single CPU system acts like a multiple-
processor system. In a system with Virtualization, the virtualization presents one or more virtual
CPU to each of virtual machines running on the system and then schedules the use of physical
CPU among the virtual machines. Most virtualized environments have one host operating system
and many guest operating systems. The host operating system creates and manages the virtual
machines. Each virtual machine has a guest operating system installed and applications run
within that guest.Each guest operating system may be assigned for specific use cases,applications
or users including time sharing or even real-time operation. Any guest operating-system
scheduling algorithm that assumes a certain amount of progress in a given amount of time will be
negatively impacted by the virtualization. A time sharing operating system tries to allot 100
milliseconds to each time slice to give users a reasonable response time. A given 100 millisecond
time slice may take much more than 100 milliseconds of virtual CPU time. Depending on how
busy the system is, the time slice may take a second or more which results in a very poor
response time for users logged into that virtual machine. The net effect of such scheduling
layering is that individual virtualized operating systems receive only a portion of the available
CPU cycles, even though they believe they are receiving all cycles and that they are scheduling
all of those cycles. Commonly, the time-of-day clocks in virtual machines are incorrect because timers take longer to trigger than they would on dedicated CPUs.
Virtualizations can thus undo the good scheduling-algorithm efforts of the operating systems
within virtual machines.

Fair-share CPU scheduling


Fair-share scheduling is a scheduling algorithm that was first designed by Judy Kay and Piers
Lauder at Sydney University in the 1980s. It is a scheduling algorithm for computer operating
systems that dynamically distributes the time quanta “equally” to its users.
A time quantum is the processor time allowed for a process to run. In round-robin scheduling the time slices, or time quanta, are allocated equally to processes in a circular order, so the CPU is shared per process rather than per user; fair-share scheduling therefore needs a different distribution, one made with respect to users (or groups) rather than individual processes.
Specificity of Fair-share scheduling :
This algorithm equally distributes the processor time among its users. For instance, if there are 5 users (A, B, C, D, E), each simultaneously executing a process, the scheduler divides the CPU cycles such that all the users get the same share, 100%/5, that is 20%. Even if one of these users starts a second process, the algorithm ensures that this user is attributed only 10% to each of the two processes, keeping the user's total at 20%.

The scheduler divides time equally even when another layer of partitioning is added. For example, if there were 3 groups with a different number of users in each group, the algorithm would still divide the time equally among those groups, 100%/3 = 33.33%, and this 33.33% would then be shared equally among the users present in the respective group.

To sum up, fair-share scheduling is an efficient strategy that creates a consistent user experience.
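The arithmetic of the two examples above can be captured in a few lines. The sketch below is not from the notes; the group and user names are invented, and the split is first by group and then by user within each group.

# Fair-share division sketch: CPU time is split equally across groups first,
# then equally among the users inside each group (figures are illustrative).
groups = {"research": ["A", "B"], "teaching": ["C", "D", "E"], "admin": ["F"]}

per_group = 100 / len(groups)                  # each group gets 33.33 %
for group, users in groups.items():
    per_user = per_group / len(users)
    for user in users:
        print(user, "(" + group + "):", round(per_user, 2), "% of the CPU")

Adding or removing a user inside one group changes only the per-user share within that group; the other groups' totals stay the same.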

Earliest Deadline First (EDF) CPU scheduling algorithm
Earliest Deadline First (EDF) is an optimal dynamic priority scheduling algorithm used in real-
time systems.
It can be used for both static and dynamic real-time scheduling.
EDF uses priorities to the jobs for scheduling. It assigns priorities to the task according to the
absolute deadline. The task whose deadline is closest gets the highest priority. The priorities are
assigned and changed in a dynamic fashion. EDF is very efficient as compared to other
scheduling algorithms in real-time systems. It can make the CPU utilization to about 100% while
still guaranteeing the deadlines of all the tasks.
EDF includes the kernel overload. In EDF, if the CPU usage is less than 100%, then it means
that all the tasks have met the deadline. EDF finds an optimal feasible schedule. The feasible
schedule is one in which all the tasks in the system are executed within the deadline. If EDF is
not able to find a feasible schedule for all the tasks in the real-time system, then it means that no
other task scheduling algorithms in real-time systems can give a feasible schedule. All the tasks
which are ready for execution should announce their deadline to EDF when the task becomes
runnable.
The EDF scheduling algorithm does not require the tasks or processes to be periodic, nor does it require them to have a fixed CPU burst time. In EDF, any executing task can be preempted if any
other periodic instance with an earlier deadline is ready for execution and becomes active.
Preemption is allowed in the Earliest Deadline First scheduling algorithm.
Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50
Let the processing time of P1 be t1 = 25
Let the period of P2 be period2 = 75
Let the processing time of P2 be t2 = 30

Steps for solution:


1. The deadline of P1 is earlier, so the priority of P1 > P2.
2. Initially P1 runs and completes its execution of 25 time units.
3. After 25 time units, P2 starts to execute and runs until time 50, when P1 becomes ready again.
4. Now, comparing the deadlines of (P1, P2) = (100, 75), P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 starts to execute and runs until time 75, when P2 becomes ready again.
7. Now, again comparing the deadlines of (P1, P2) = (100, 150), P1 continues to execute.
8. Repeat the above steps.
9. Finally, at time 150, both P1 and P2 have the same deadline, so P2 will continue to execute for its processing time, after which P1 starts to execute.
Limitations of EDF scheduling algorithm:
 Transient Overload Problem
 Resource Sharing Problem
 Efficient Implementation Problem

Advantages and Disadvantages of various CPU scheduling algorithms


CPU Scheduling involves many different scheduling algorithms which have their Advantages
and Disadvantages.
1. First Come First Serve (FCFS):
Advantages:
1. It is simple and easy to understand.
Disadvantages:
1. Processes with less execution time suffer, i.e., the waiting time is often quite long.
2. It favors CPU-bound processes over I/O-bound processes.
3. Here, the first process gets the CPU first; other processes can get the CPU only after the current process has finished its execution. Now, suppose the first process has a large burst time and the other processes have smaller burst times; then the other processes will have to wait unnecessarily long, resulting in a higher average waiting time, i.e., the convoy effect.
4. This effect results in lower CPU and device utilization.
5. FCFS algorithm is particularly troublesome for time-sharing systems, where it is important
that each user get a share of the CPU at regular intervals.
2. Shortest Job First (SJF) [Preemptive and Non- Preemptive]:
Advantages:
1. Shortest jobs are favored.
2. It is probably optimal, in that it gives the minimum average waiting time for a given set of
processes.
Disadvantages:
1. SJF may cause starvation if shorter processes keep coming. This problem is solved by aging.
2. It cannot be implemented at the level of short-term CPU scheduling.
3. Round Robin (RR):
Advantages:
1. Every process gets an equal share of the CPU.
2. RR is cyclic in nature, so there is no starvation.
Disadvantages:
1. Setting the quantum too short increases the overhead and lowers the CPU efficiency, but
setting it too long may cause a poor response to short processes.
2. The average waiting time under the RR policy is often long.
3. If time quantum is very high then RR degrades to FCFS.

4. Priority Based (PB):
Advantages:
 This provides a good mechanism where the relative importance of each process may be
precisely defined.
Disadvantages:
1. If high-priority processes use up a lot of CPU time, lower-priority processes may starve and
be postponed indefinitely. The situation where a process never gets scheduled to run is
called starvation.
2. Another problem is deciding which process gets which priority level assigned to it.
5. Multilevel Queue Scheduling (MQS):
Advantages:
Application of separate scheduling for various kinds of processes is possible.
 System Process – FCFS
 Interactive Process – SJF
 Batch Process – RR
 Student Process – PB
Disadvantages:
 The lowest level process faces the starvation problem.
6. Multilevel Feedback Queue Scheduling (MFQS):
Advantages:
1. It is flexible, since processes can move between queues.
2. It allows aging, thus no starvation.
Disadvantages:
1. It has a higher scheduling overhead than a plain multilevel queue.
2. It also requires some means of selecting values for all the parameters to define the best scheduler, thus it is also the most complex.

CPU Scheduling Criteria


Different CPU scheduling algorithms have different properties and the choice of a particular
algorithm depends on the various factors. Many criteria have been suggested for comparing CPU
scheduling algorithms.
The criteria include the following:
1. CPU utilisation: The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent depending on the load upon the system.
2. Throughput:A measure of the work done by the CPU is the number of processes being
executed and completed per unit of time. This is called throughput. The throughput may vary
depending upon the length or duration of the processes.
3. Turnaround time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the time of submission of a process to the time of its completion is known as the turnaround time. Turn-around time is the sum of the times spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O. The formula to calculate Turn Around Time = Completion Time - Arrival Time.

4. Waiting time: A scheduling algorithm does not affect the time required to complete the
process once it starts execution. It only affects the waiting time of a process i.e. time spent by
a process waiting in the ready queue. The formula for calculating Waiting Time = Turnaround
Time – Burst Time.
5. Response time: In an interactive system, turn-around time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time taken from submission of a request until the first response is produced. This measure is called response time. The formula to calculate Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time.
6. Completion time: This is the time when the process completes its execution.
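As a quick illustration with hypothetical numbers: suppose a process arrives at time 0, is first allocated the CPU at time 2, has a total CPU burst of 4 ms, and completes at time 8. Then Turnaround Time = 8 − 0 = 8 ms, Waiting Time = 8 − 4 = 4 ms, and Response Time = 2 − 0 = 2 ms.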

Time Slicing in CPU scheduling


The kernel does not hand all of the machine's resources to a single process or service. Many
processes are active at once, and the kernel must manage them without noticeable delay. When a
program needs to run, a process is created for it, and that process needs resources such as RAM
and CPU time. The kernel schedules periods during which the CPU executes the instructions of
each process. But there is only one CPU (or a limited number of cores) and many processes.
How does the CPU appear to execute them all at once? It does so by executing the processes one
by one, each for a short time slice. A time slice is the short time frame assigned to a process for
CPU execution.
Time slice :
It is the time frame for which a process is allotted to run in a preemptive multitasking system.
The scheduler gives every runnable process one time slice in turn. The length of the time slice is
crucial for balancing CPU performance and responsiveness: if it is too short, the scheduler
spends a larger fraction of time on context switching; if it is too long, the system responds poorly
to interactive and short processes.

When a process is allotted the CPU, a clock timer is set corresponding to the time slice.
 If the process finishes its burst before the time slice expires, the CPU simply switches to the
next process, as in conventional FCFS scheduling.
 If the time slice expires first, the CPU preempts the process and moves it to the back of the
ready queue.
The ready queue is managed as a circular queue, so after all processes have run once, the
scheduler runs the first process again, then the second, and so forth.
Example –

Process Queue      Required burst time by process (ms)
P1                 1
P2                 4
P3                 5

We have three processes (P1, P2, P3) with burst times of 1 ms, 4 ms and 5 ms respectively. A
common rule of thumb is that about 80% of CPU bursts should be shorter than the time quantum.
Here we consider a time slice of 2 ms.
Here’s how CPU manages it by time slicing.
[Figure: time-slicing approach for process management]
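Using the burst times above and a 2 ms quantum, the round-robin schedule would proceed roughly as follows (an illustrative timeline, since the original figure is not reproduced here):

0-1 ms : P1 runs and finishes (its burst is shorter than the quantum)
1-3 ms : P2 runs for a full quantum (2 ms of its burst remain)
3-5 ms : P3 runs for a full quantum (3 ms remain)
5-7 ms : P2 runs again and finishes at t = 7 ms
7-9 ms : P3 runs for another quantum (1 ms remains)
9-10 ms: P3 runs and finishes at t = 10 ms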

Advantages :
 Fair allocation of CPU resources.
 It treats all processes with equal priority.
 It is easy to implement.
 Context switching is used to save the state of preempted processes, so no work is lost when a
process is preempted.
 It generally gives good performance in terms of average response time.
Disadvantages :
 If the time slice is too short, throughput suffers because the processor spends more of its time
on context switches.
 It spends time on context switching.
 Performance depends heavily on the time quantum.
 Priorities cannot be assigned to processes, so more important tasks get no preferential treatment.
 Finding an appropriate time quantum is quite difficult.

Convoy Effect in Operating Systems


The Convoy Effect is a phenomenon associated with the First Come First Serve (FCFS) algorithm,
in which the whole operating system slows down because of a few slow (CPU-bound) processes.

FCFS algorithm is non-preemptive in nature, that is, once CPU time has been allocated to a
process, other processes can get CPU time only after the current process has finished. This
property of FCFS scheduling leads to the situation called Convoy Effect.
Suppose there is one CPU intensive (large burst time) process in the ready queue, and several
other processes with relatively less burst times but are Input/Output (I/O) bound (Need I/O
operations frequently).
Steps are as following below:
 The I/O bound processes are first allocated CPU time. As they are less CPU intensive, they
quickly finish their CPU bursts and go to the I/O queues.
 Now, the CPU intensive process is allocated CPU time. As its burst time is high, it takes time
to complete.
 While the CPU intensive process is being executed, the I/O bound processes complete their
I/O operations and are moved back to ready queue.
 However, the I/O bound processes are made to wait as the CPU intensive process still hasn’t
finished. This leads to I/O devices being idle.
 When the CPU intensive process gets over, it is sent to the I/O queue so that it can access an
I/O device.
 Meanwhile, the I/O bound processes get their required CPU time and move back to I/O queue.
 However, they are made to wait because the CPU intensive process is still accessing an I/O
device. As a result, the CPU is sitting idle now.
Hence in Convoy Effect, one slow process slows down the performance of the entire set of
processes, and leads to wastage of CPU time and other devices.
To avoid Convoy Effect, preemptive scheduling algorithms like Round Robin Scheduling can be
used – as the smaller processes don’t have to wait much for CPU time – making their execution
faster and leading to less resources sitting idle.
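A small illustration with hypothetical burst times: suppose P1 (burst 20 ms), P2 (2 ms) and P3 (2 ms) all arrive at time 0. Under FCFS in the order P1, P2, P3 the waiting times are 0, 20 and 22 ms, an average of 14 ms; if the two short processes could run first, the waiting times would be 0, 2 and 4 ms, an average of only 2 ms. The long process at the head of the queue is what creates the convoy.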
Introduction of Process Synchronization
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other
processes.
 Cooperative Process: A process that can affect or be affected by other processes executing in
the system.
The process synchronization problem arises in the case of cooperative processes because
resources are shared among them.

Race Condition:

When more than one process executes the same code or accesses the same memory or shared
variable, there is a possibility that the final value of the shared variable is wrong: the processes
effectively race with each other, and whichever runs last determines the outcome. This situation
is known as a race condition. When several processes access and manipulate the same data
concurrently, the result depends on the particular order in which the accesses take place. A race
condition typically occurs inside a critical section, when the result of multiple threads executing
in the critical section differs according to the order in which the threads execute. Race conditions
in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper
thread synchronization using locks or atomic variables can also prevent race conditions.
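A minimal sketch of a race condition using POSIX threads (illustrative only; any similar shared update shows the same effect): two threads increment a shared counter without synchronization, so the final value is usually less than the expected 2000000 because counter++ is not an atomic operation.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                 /* shared variable */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* not atomic: load, add, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* often prints less than 2000000 */
    return 0;
}

Protecting the increment with a lock, or using an atomic variable, removes the race.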

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

A typical solution is structured as an entry section, the critical section itself, an exit section, and a remainder section. In the entry section, the process requests permission to enter the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter in the critical section next, and the
selection can not be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem. In
Peterson’s solution, we have two shared variables:
 boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical
section
 int turn: The process whose turn is to enter the critical section.
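A sketch of the corresponding entry and exit code for process i (where j denotes the other process, j = 1 − i); the busy-wait loop below is the one referred to in the disadvantages later:

// shared between the two processes
bool flag[2] = {false, false};    // flag[i]: process i wants to enter
int turn;                         // whose turn it is to enter

// code for process i (j is the other process)
do {
    flag[i] = true;               // i is interested
    turn = j;                     // give the other process priority
    while (flag[j] && turn == j)  // busy wait while the other process is
        ;                         // interested and it is its turn
    /* critical section */
    flag[i] = false;              // exit section: no longer interested
    /* remainder section */
} while (true);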

Peterson’s Solution preserves all three conditions:


 Mutual Exclusion is assured as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s solution:

 It involves busy waiting.(In the Peterson’s solution, the code statement- “while(flag[j] &&
turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles
that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution may not work correctly on modern CPU architectures without extra care, because compilers and processors can reorder memory operations; memory barriers or atomic operations are needed in practice.

Semaphores:

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled
by another thread. This is different from a mutex, because a mutex can be released (signaled)
only by the thread that acquired it, i.e. the one that called the wait/lock function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait()
and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as
the locks can provide mutual exclusion. All the processes can share the same mutex semaphore,
which is initialized to 1. A process then waits until the semaphore value becomes 1, sets it to 0,
and enters its critical section. When it completes its critical section, it resets the value to 1 so
that some other process can enter its critical section.
 Counting Semaphores: They can have any value and are not restricted over a certain domain.
They can be used to control access to a resource that has a limitation on the number of
simultaneous accesses. The semaphore can be initialized to the number of instances of the
resource. Whenever a process wants to use that resource, it checks if the number of remaining
instances is more than zero, i.e., the process has an instance available. Then, the process can
enter its critical section thereby decreasing the value of the counting semaphore by 1. After
the process is over with the use of the instance of the resource, it can leave the critical section
thereby adding 1 to the number of available instances of the resource.
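As a small POSIX-based sketch (the resource pool and the thread bodies are assumptions for illustration, not part of the text above), a counting semaphore initialized to the number of instances can guard access to, say, 3 identical resources:

#include <semaphore.h>
#include <pthread.h>

sem_t pool;                       /* counting semaphore */

void *use_resource(void *arg) {
    sem_wait(&pool);              /* decrement; blocks if no instance is free */
    /* ... use one instance of the resource ... */
    sem_post(&pool);              /* increment; release the instance */
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 3);        /* 3 instances available initially */
    /* ... create threads that call use_resource() ... */
    sem_destroy(&pool);
    return 0;
}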
Critical Section in Synchronization
Critical Section: When more than one processes access the same code segment that segment is
known as the critical section. The critical section contains shared variables or resources which
are needed to be synchronized to maintain the consistency of data variables. In simple terms, a
critical section is a group of instructions/statements or region of code that need to be executed
atomically (read this post for atomicity), such as accessing a resource (file, input or output port,
global data, etc.). In concurrent programming, if one thread tries to change the value of shared
data at the same time as another thread tries to read the value (i.e. data race across threads), the
result is unpredictable. The access to such shared variables (shared memory, shared files, shared
port, etc.) must be synchronized. A few programming languages have built-in support for
synchronization. It is especially important to understand race conditions when writing
kernel-mode code (a device driver, kernel thread, etc.), since the programmer can directly access
and modify kernel data structures.

It could be visualized using the pseudo-code below, where a shared lock variable guards the critical section (note that this naive check-then-set version is not itself race-free; it only illustrates the structure):
do {
    while (lock == 1);   // entry section: busy wait until the lock is free
    lock = 1;            // take the lock
    // critical section
    lock = 0;            // exit section: release the lock
    // remainder section
} while (true);
A simple solution to the critical section can be thought of as shown below,
acquireLock();
Process Critical Section
releaseLock();
A thread must acquire a lock prior to executing a critical section. The lock can be acquired by
only one thread. There are various ways to implement such locks in practice.
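For instance, a minimal spinlock built on the C11 atomic_flag type (a sketch; acquireLock/releaseLock are the illustrative names used in the pseudo-code above, not a standard API):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquireLock(void) {
    /* atomically set the flag; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;                          /* busy wait */
}

void releaseLock(void) {
    atomic_flag_clear(&lock);      /* mark the lock free again */
}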

Inter Process Communication (IPC)


A process can be of two types:
 Independent process.
 Co-operating process.

An independent process is not affected by the execution of other processes, while a co-operating
process can be affected by other executing processes. Although processes that run independently
might seem more efficient, there are many situations where co-operation increases computational
speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that
allows processes to communicate with each other and synchronize their actions. The
communication between these processes can be seen as a method of co-operation between them.
Processes can communicate with each other through both:

1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.
[Figure 1: communication via shared memory vs. via message passing]
An operating system can implement both methods of communication. First, we will discuss the
shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it. One way of communication
using shared memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another process.
Process1 generates information about certain computations or resources being used and keeps
it as a record in shared memory. When process2 needs to use the shared information, it will
check in the record stored in shared memory and take note of the information generated by
process1 and act accordingly. Processes can use shared memory for extracting information as a
record from another process as well as for delivering any specific information to other
processes.
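On POSIX systems, one way for two processes to obtain such a common region is shm_open() plus mmap(); a sketch (the object name and size are illustrative, and error checking is omitted):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

#define SHM_NAME "/demo_region"    /* illustrative name */
#define SHM_SIZE 4096

int main(void) {
    /* create (or open) a named shared memory object */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SHM_SIZE);                       /* set its size */

    /* map it into this process's address space */
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    region[0] = 'A';   /* any process mapping the same name sees this byte */

    munmap(region, SHM_SIZE);
    close(fd);
    return 0;
}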
Let’s discuss an example of communication between processes using the shared memory
method.

i) Shared Memory Method
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the
Consumer consumes that item. The two processes share a common space or memory location
known as a buffer where the item produced by the Producer is stored and from which the
Consumer consumes the item if needed. There are two versions of this problem: the first one is
known as the unbounded buffer problem in which the Producer can keep on producing items
and there is no limit on the size of the buffer, the second one is known as the bounded buffer
problem in which the Producer can produce up to a certain number of items before it starts
waiting for Consumer to consume it. We will discuss the bounded buffer problem. First, the
Producer and the Consumer will share some common memory, then the producer will start
producing items. If the total produced item is equal to the size of the buffer, the producer will
wait to get it consumed by the Consumer. Similarly, the consumer will first check for the
availability of the item. If no item is available, the Consumer will wait for the Producer to
produce it. If there are items available, Consumer will consume them. The pseudo-code to
demonstrate is provided below:
Shared Data between the two Processes

 C

#define buff_max 25
#define mod %

struct item {
    // different members of the produced
    // or consumed data
    // ---------
};

// An array is needed for holding the items.
// This is the shared place which will be
// accessed by both processes.
item shared_buff[buff_max];

// Two variables which keep track of the indexes of the
// items produced by the producer and consumed by the
// consumer. free_index points to the next free slot;
// full_index points to the first filled slot.
int free_index = 0;
int full_index = 0;

Producer Process Code

 C

item nextProduced;

while (1) {
    // check if there is no space for production;
    // if so, keep waiting.
    while (((free_index + 1) mod buff_max) == full_index)
        ;

    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) mod buff_max;
}

Consumer Process Code

 C

item nextConsumed;

while (1) {
    // check if there is an item available for consumption;
    // if not, keep waiting for one to be produced.
    while (free_index == full_index)
        ;

    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) mod buff_max;
}

In the above code, the producer waits whenever the slot at (free_index + 1) mod buff_max is not
free, because a full buffer means there are still items waiting to be consumed and there is no need
to produce more yet. Similarly, if free_index and full_index point to the same slot, the buffer is
empty and there are no items to consume.

ii) Messaging Passing Method


Now, We will start our discussion of the communication between processes via message
passing. In this method, processes communicate with each other without using any kind of
shared memory. If two processes p1 and p2 want to communicate with each other, they
proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an
OS designer but complicated for a programmer and if it is of variable size then it is easy for a
programmer but complicated for the OS designer. A standard message can have two
parts: header and body.
The header part is used for storing the message type, destination id, source id, message length,
and control information. The control information includes things such as what to do if buffer
space runs out, a sequence number, and a priority. Generally, messages are delivered in FIFO order.

Message Passing through Communication Link.
Direct and Indirect Communication link
Now, We will start our discussion about the methods of implementing communication links.
While implementing the link, there are some questions that need to be kept in mind like :

1. How are links established?


2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message that the link can accommodate fixed
or variable?
5. Is a link unidirectional or bi-directional?
A link has some capacity that determines the number of messages that can reside in it
temporarily for which every link has a queue associated with it which can be of zero capacity,
bounded capacity, or unbounded capacity. In zero capacity, the sender waits until the receiver
informs the sender that it has received the message. In non-zero capacity cases, a process does
not know whether a message has been received or not after the send operation. For this, the
sender must communicate with the receiver explicitly. Implementation of the link depends on
the situation, it can be either a direct communication link or an in-directed communication
link.
Direct communication links are used when processes name a specific process identifier for the
communication; a drawback is that it can be hard to identify the sender ahead of time (for
example, a print server that receives requests from many clients).
In-direct Communication is done via a shared mailbox (port), which consists of a queue of
messages. The sender keeps the message in mailbox and the receiver picks them up.

Message Passing through Exchanging the Messages.


Synchronous and Asynchronous Message Passing:
A process that is blocked is one that is waiting for some event, such as a resource becoming
available or the completion of an I/O operation. IPC is possible between the processes on same
computer as well as on the processes running on different computer i.e. in
networked/distributed system. In both cases, the process may or may not be blocked while
sending a message or attempting to receive a message so message passing may be blocking or
non-blocking. Blocking is considered synchronous: a blocking send means the sender is blocked
until the message is received by the receiver, and a blocking receive means the receiver blocks
until a message is available. Non-blocking is considered asynchronous: a non-blocking send lets
the sender send the message and continue, and a non-blocking receive returns either a valid
message or null. For a sender it is usually more natural to be non-blocking, since it may need to
send messages to several processes, although it may still expect an acknowledgment in case a
send fails. For a receiver it is usually more natural to block after issuing a receive, since the
received information is typically needed for further execution; on the other hand, if sends keep
failing, such a receiver would wait indefinitely. That is why the other combinations of message
passing are also considered. There are basically three preferred combinations:

 Blocking send and blocking receive

 Non-blocking send and Non-blocking receive
 Non-blocking send and Blocking receive (Mostly used)
In Direct message passing, The process which wants to communicate must explicitly name
the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means to receive the message from p2.
In this method of communication, the communication link is established automatically, and it
can be either unidirectional or bidirectional; exactly one link exists between a given pair of
sender and receiver, and that pair should not possess more than one link. Symmetry and
asymmetry between sending and receiving can also be
implemented i.e. either both processes will name each other for sending and receiving the
messages or only the sender will name the receiver for sending the message and there is no
need for the receiver for naming the sender for receiving the message. The problem with this
method of communication is that if the name of one process changes, this method will not
work.
In Indirect message passing, processes use mailboxes (also referred to as ports) for sending
and receiving messages. Each mailbox has a unique id and processes can communicate only if
they share a mailbox. Link established only if processes share a common mailbox and a single
link can be associated with many processes. Each pair of processes can share several
communication links and these links may be unidirectional or bi-directional. Suppose two
processes want to communicate through Indirect message passing, the required operations are:
create a mailbox, use this mailbox for sending and receiving messages, then destroy the
mailbox. The standard primitives used are: send(A, message) which means send the message
to mailbox A. The primitive for receiving a message works in the same way,
e.g. receive(A, message). There is a problem with this mailbox implementation. Suppose
there are more than two processes sharing the same mailbox and suppose the process p1 sends
a message to the mailbox, which process will be the receiver? This can be solved by either
enforcing that only two processes can share a single mailbox or enforcing that only one process
is allowed to execute the receive at a given time or select any process randomly and notify the
sender about the receiver. A mailbox can be made private to a single sender/receiver pair and
can also be shared between multiple sender/receiver pairs. Port is an implementation of such
mailbox that can have multiple senders and a single receiver. It is used in client/server
applications (in this case the server is the receiver). The port is owned by the receiving process
and created by OS on the request of the receiver process and can be destroyed either on request
of the same receiver process, or when the receiver terminates. Enforcing that only one
process is allowed to execute the receive can be done using the concept of mutual
exclusion. Mutex mailbox is created which is shared by n process. The sender is non-blocking
and sends the message. The first process which executes the receive will enter in the critical
section and all other processes will be blocking and will wait.
Now, let’s discuss the Producer-Consumer problem using the message passing concept. The
producer places items (inside messages) in the mailbox and the consumer can consume an item
when at least one message present in the mailbox. The code is given below:
Producer Code

 C

void Producer(void) {
    int item;
    Message m;

    while (1) {
        receive(Consumer, &m);   // wait for an empty message from the consumer
        item = produce();
        build_message(&m, item);
        send(Consumer, &m);      // send the produced item
    }
}

Consumer Code

 C

void Consumer(void) {
    int item;
    Message m;

    while (1) {
        receive(Producer, &m);   // wait for a message carrying an item
        item = extracted_item();
        send(Producer, &m);      // return an empty message to the producer
        consume_item(item);
    }
}

Examples of IPC systems

1. POSIX : uses the shared memory method.
2. Mach : uses message passing.
3. Windows XP : uses message passing via local procedure calls.
Communication in client/server Architecture:
There are various mechanism:

 Pipe
 Socket
 Remote Procedure Calls (RPCs)
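As a small illustration of one of these mechanisms, a sketch of parent/child communication through an anonymous pipe on a POSIX system (the message text is illustrative):

#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    } else {                          /* parent: reads from the pipe */
        char buf[64];
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}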

Semaphores in Process Synchronization


Semaphore was proposed by Dijkstra in 1965 which is a very significant technique to manage
concurrent processes by using a simple integer value, which is known as a semaphore. A
semaphore is simply an integer variable that is shared between threads. This variable is used to
solve the critical section problem and to achieve process synchronization in the multiprocessing
environment.
Semaphores are of two types:
1. Binary Semaphore –
This is also known as mutex lock. It can have only two values – 0 and 1. Its value is initialized
to 1. It is used to implement the solution of critical section problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that
has multiple instances.
Now let us see how it does so.

First, look at two operations that can be used to access and change the value of the semaphore
variable.

Some points regarding the P and V operations:


1. P operation is also called wait, sleep, or down operation, and V operation is also called signal,
wake-up, or up operation.
2. Both operations are atomic and semaphore(s) is always initialized to one. Here atomic means
that variable on which read, modify and update happens at the same time/moment with no pre-
emption i.e. in-between read, modify and update no other operation is performed that may
change the variable.
3. A critical section is surrounded by both operations to implement process synchronization: the
critical section of a process P lies between its P (wait) operation and its V (signal) operation.
Now, let us see how it implements mutual exclusion. Let there be two processes P1 and P2 and a
semaphore s initialized to 1. If P1 enters its critical section, the value of semaphore s becomes 0.
If P2 now wants to enter its critical section, it must wait until s > 0, which can only happen when
P1 finishes its critical section and calls the V operation on semaphore s. This is how mutual
exclusion is achieved with a binary semaphore.

Implementation: Binary semaphores
 C++

struct semaphore {

    // value can take only 0 or 1
    int value;

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down (P) operation.
    Queue<process> q;
};

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    } else {
        // add the calling process P to the waiting
        // queue and put it to sleep
        s.q.push(P);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    } else {
        // select a process from the waiting queue
        Process p = s.q.front();
        // remove it from waiting, as it is being
        // sent to its critical section
        s.q.pop();
        wakeup(p);
    }
}

The description above is for binary semaphore which can take only two values 0 and 1 and
ensure mutual exclusion. There is one other type of semaphore called counting semaphore which
can take values greater than one.
Now suppose there is a resource whose number of instances is 4. Now we initialize S = 4 and the
rest is the same as for binary semaphore. Whenever the process wants that resource it calls P or
waits for function and when it is done it calls V or signal function. If the value of S becomes zero
then a process has to wait until S becomes positive. For example, Suppose there are 4 processes
P1, P2, P3, P4, and they all call wait operation on S(initialized with 4). If another process P5
wants the resource then it should wait until one of the four processes calls the signal function and
the value of semaphore becomes positive.
Limitations :
1. One of the biggest limitations of semaphore is priority inversion.
2. Improper use can lead to a deadlock: for example, if a signal intended to wake up a process is
issued before that process has gone to sleep, the wake-up is lost and the process may then block indefinitely.
3. The operating system has to keep track of all calls to wait and to signal the semaphore.
Problem in this implementation of semaphore :
The main problem with this semaphore implementation is that it requires busy waiting: while one
process is in the critical section, any other process trying to enter keeps checking the semaphore
value (see the line while (s == 0); in the P operation) and wastes CPU cycles. This spinning while
waiting for the lock is also called a spinlock. To avoid it, another implementation is provided below.
Implementation: Counting semaphore
 CPP

struct Semaphore {

    int value;

    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down (P) operation.
    Queue<process> q;
};

P(Semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0) {
        // add the process to the waiting queue;
        // here p is the process that is currently executing
        s.q.push(p);
        block();
    } else {
        return;
    }
}

V(Semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0) {
        // remove a process p from the waiting queue
        // and wake it up
        Process p = s.q.pop();
        wakeup(p);
    } else {
        return;
    }
}

In this implementation whenever the process waits it is added to a waiting queue of processes
associated with that semaphore. This is done through system call block() on that process. When a
process is completed it calls the signal function and one process in the queue is resumed. It uses
wakeup() system call.
Mutex vs Semaphore
What is the difference between a mutex and a semaphore? When should you use a mutex and
when should you use a semaphore?
A concrete understanding of operating system concepts is required to design and develop robust
applications.
In operating system terminology, mutexes and semaphores are kernel resources that provide
synchronization services (also called synchronization primitives). Why do we need more than
one such primitive? Wouldn't a single one be sufficient? To answer these questions, we need a
few key concepts; see the earlier discussion of atomicity and the critical section. The ideas are
illustrated here with examples rather than the usual textbook description.
The producer-consumer problem:
Note that the content is a generalized explanation. Practical details vary with implementation.
Consider the standard producer-consumer problem. Assume, we have a buffer of 4096-byte
length. A producer thread collects the data and writes it to the buffer. A consumer thread
processes the collected data from the buffer. The objective is, both the threads should not run at
the same time.
Using Mutex:
A mutex provides mutual exclusion, either producer or consumer can have the key (mutex) and
proceed with their work. As long as the buffer is filled by the producer, the consumer needs to
wait, and vice versa.
At any point of time, only one thread can work with the entire buffer. The concept can be
generalized using semaphore.
Using Semaphore:
A semaphore is a generalized mutex. In lieu of a single buffer, we can split the 4 KB buffer into
four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers.
The consumer and producer can work on different buffers at the same time.
Misconception:
There is an ambiguity between binary semaphore and mutex. We might have come across that a
mutex is a binary semaphore. But it is not! The purpose of mutex and semaphore are different.
Maybe, due to similarity in their implementation a mutex would be referred to as a binary
semaphore.
Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource.
Only one task (can be a thread or process based on OS abstraction) can acquire the mutex. It
means there is ownership associated with a mutex, and only the owner can release the lock
(mutex).
A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). For
example, if you are listening to songs (assume it as one task) on your mobile phone and at the
same time, your friend calls you, an interrupt is triggered upon which an interrupt service routine
(ISR) signals the call processing task to wakeup.
General Questions:
1. Can a thread acquire more than one lock (Mutex)?
Yes, it is possible that a thread is in need of more than one resource, hence the locks. If any lock
is not available the thread will wait (block) on the lock.
2. Can a mutex be locked more than once?
A mutex is a lock. Only one state (locked/unlocked) is associated with it. However, a recursive
mutex can be locked more than once (POSIX compliant systems), in which a count is associated
with it, yet retains only one state (locked/unlocked). The programmer must unlock the mutex as
many number times as it was locked.
3. What happens if a non-recursive mutex is locked more than once?
Deadlock. If a thread that had already locked a mutex, tries to lock the mutex again, it will enter
into the waiting list of that mutex, which results in a deadlock. It is because no other thread can
unlock the mutex. An operating system implementer can exercise care in identifying the owner
of the mutex and return an error if it is already locked by the same thread, to prevent such deadlocks.
4. Are binary semaphore and mutex same?
No. We suggest treating them separately, as it is explained in signaling vs locking mechanisms.
But a binary semaphore may experience the same critical issues (e.g. priority inversion)
associated with a mutex. We will cover these in a later article.
A programmer can prefer mutex rather than creating a semaphore with count 1.

5. What is a mutex and critical section?
Some operating systems use the same word critical section in the API. Usually a mutex is a
costly operation due to protection protocols associated with it. At last, the objective of mutex is
atomic access. There are other ways to achieve atomic access like disabling interrupts which can
be much faster but ruins responsiveness. The alternate API makes use of disabling interrupts.
6. What are events?
Mutexes, semaphores, events, critical sections, etc. all serve the same broad purpose: they are
synchronization primitives. They differ in the cost of using them, so we should consult the OS
documentation for exact details.
7. Can we acquire mutex/semaphore in an Interrupt Service Routine?
An ISR will run asynchronously in the context of current running thread. It is not
recommended to query (blocking call) the availability of synchronization primitives in an ISR.
ISRs are meant to be short; a blocking call on a mutex/semaphore could stall the currently running thread.
However, an ISR can signal a semaphore or unlock a mutex.
8. What we mean by “thread blocking on mutex/semaphore” when they are not available?
Every synchronization primitive has a waiting list associated with it. When the resource is not
available, the requesting thread will be moved from the running list of processors to the waiting
list of the synchronization primitive. When the resource is available, the higher priority thread on
the waiting list gets the resource (more precisely, it depends on the scheduling policies).
9. Is it necessary that a thread must block always when resource is not available?
Not necessary. If the design is sure ‘what has to be done when resource is not available‘, the
thread can take up that work (a different code branch). To support application requirements the
OS provides non-blocking API.
For example POSIX pthread_mutex_trylock() API. When the mutex is not available the function
returns immediately whereas the API pthread_mutex_lock() blocks the thread till the resource is
available.
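A short sketch of the difference using POSIX threads (error handling omitted):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void blocking_path(void) {
    pthread_mutex_lock(&m);        /* blocks until the mutex is available */
    /* ... critical section ... */
    pthread_mutex_unlock(&m);
}

void non_blocking_path(void) {
    if (pthread_mutex_trylock(&m) == 0) {
        /* ... critical section ... */
        pthread_mutex_unlock(&m);
    } else {
        /* returned immediately (typically EBUSY): do other work instead */
        printf("mutex busy, doing something else\n");
    }
}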
Producer Consumer Problem using Semaphores | Set 1
Producer consumer problem is a classical synchronization problem. We can solve this problem
by using semaphores.
A semaphore S is an integer variable that can be accessed only through two standard
operations : wait() and signal().
The wait() operation reduces the value of semaphore by 1 and the signal() operation increases its
value by 1.
wait(S){
while(S<=0); // busy waiting
S--;
}

signal(S){
S++;
}
Semaphores are of two types:

1. Binary Semaphore – This is similar to mutex lock but not the same thing. It can have only
two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of critical
section problem with multiple processes.

2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.

Problem Statement – We have a buffer of fixed size. A producer can produce an item and can
place in the buffer. A consumer can pick items and can consume them. We need to ensure that
when a producer is placing an item in the buffer, then at the same time consumer should not
consume any item. In this problem, buffer is the critical section.
To solve this problem, we need two counting semaphores – Full and Empty. “Full” keeps track
of number of items in the buffer at any given time and “Empty” keeps track of number of
unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –
do{

//produce an item

wait(empty);
wait(mutex);

//place in buffer

signal(mutex);
signal(full);

}while(true)
When the producer produces an item, the value of “empty” is reduced by 1 because one slot will
now be filled. The value of mutex is also reduced, to prevent the consumer from accessing the
buffer while the producer is placing the item. After the producer has placed the item, the value of
“full” is increased by 1, and mutex is increased by 1 because the producer's task is complete and
the consumer may now access the buffer.
Solution for Consumer –
do{

wait(full);
wait(mutex);

// remove item from buffer

signal(mutex);
signal(empty);

// consumes item

}while(true)
As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the
value of mutex is also reduced so that the producer cannot access the buffer at that moment.
Once the consumer has removed the item, it increases the value of “empty” by 1 and also
increases mutex so that the producer can access the buffer again.
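A runnable POSIX sketch of this solution (buffer size, item values and iteration counts are illustrative; sem_t plays the role of the counting semaphores and a pthread mutex plays the role of mutex):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                            /* buffer size (illustrative) */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                     /* counts empty slots, initialised to N  */
sem_t full_slots;                      /* counts filled slots, initialised to 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);        /* wait(empty) */
        pthread_mutex_lock(&mutex);    /* wait(mutex) */
        buffer[in] = item;             /* place item in buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);  /* signal(mutex) */
        sem_post(&full_slots);         /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);         /* wait(full) */
        pthread_mutex_lock(&mutex);    /* wait(mutex) */
        int item = buffer[out];        /* remove item from buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);  /* signal(mutex) */
        sem_post(&empty_slots);        /* signal(empty) */
        printf("consumed %d\n", item); /* consume the item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}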

Dining Philosopher Problem Using Semaphores


The Dining Philosopher Problem – The Dining Philosopher Problem states that K philosophers
are seated around a circular table with one chopstick placed between each pair of adjacent
philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him.
Each chopstick may be picked up by either of its two adjacent philosophers, but not by both at once.

Semaphore Solution to Dining Philosopher –
Each philosopher is represented by the following pseudocode:

process P[i]
while true do
{ THINK;
PICKUP(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]);
EAT;
PUTDOWN(CHOPSTICK[i], CHOPSTICK[i+1 mod 5])
}
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there
are two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used such that
no two philosophers may access the pickup or putdown at the same time. The array is used to
control the behavior of each philosopher. But, semaphores can result in deadlock due to
programming errors.
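A simpler sketch that is often shown models each chopstick as a binary semaphore (pseudocode in the same style as above; note that this naive version can deadlock if every philosopher picks up the left chopstick at the same moment, which is exactly the kind of programming error mentioned):

#define K 5                            /* number of philosophers (illustrative) */

semaphore chopstick[K];                /* each initialised to 1 */

/* code for philosopher i */
while (true) {
    think();
    wait(chopstick[i]);                /* pick up left chopstick  */
    wait(chopstick[(i + 1) % K]);      /* pick up right chopstick */
    eat();
    signal(chopstick[i]);              /* put down left chopstick  */
    signal(chopstick[(i + 1) % K]);    /* put down right chopstick */
}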

Readers-Writers Problem | Set 1 (Introduction and Readers Preference Solution)


Consider a situation where we have a file shared between many people.

 If one of the people tries to edit the file, no other person should be reading or writing it at the
same time; otherwise the changes will not be visible to them.
 However if some person is reading the file, then others may read it at the same time.
Precisely in OS we call this situation as the readers-writers problem
Problem parameters:

 One set of data is shared among a number of processes


 Once a writer is ready, it performs its write. Only one writer may write at a time
 If a process is writing, no other process can read it
 If at least one reader is reading, no other process can write
 Readers may not write and only read
Solution when Reader has the Priority over Writer
There are four possible cases, summarised below.

Case     Process 1   Process 2   Allowed / Not Allowed
Case 1   Writing     Writing     Not Allowed
Case 2   Writing     Reading     Not Allowed
Case 3   Reading     Writing     Not Allowed
Case 4   Reading     Reading     Allowed

Here priority means that no reader should be kept waiting if the shared file is currently open for
reading. Three variables are used to implement the solution: mutex, wrt, and readcnt.

1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion


when readcnt is updated i.e. when any reader enters or exit from the critical section and
semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt tells the number of processes performing read in the critical section,
initially 0
Functions for semaphore :
– wait() : decrements the semaphore value.
– signal() : increments the semaphore value.
Writer process:

1. Writer requests the entry to critical section.


2. If allowed i.e. wait() gives a true value, it enters and performs the write. If not allowed, it
keeps on waiting.
3. It exits the critical section.

do {
// writer requests for critical section
wait(wrt);

// performs the write

// leaves the critical section


signal(wrt);

} while(true);
Reader process:

1. Reader requests the entry to critical section.


2. If allowed:

 it increments the count of number of readers inside the critical section. If this reader is the
first reader entering, it locks the wrt semaphore to restrict the entry of writers if any reader
is inside.
 It then, signals mutex as any other reader is allowed to enter while others are already
reading.
 After performing reading, it exits the critical section. When exiting, it checks if no more
reader is inside, it signals the semaphore “wrt” as now, writer can enter the critical section.
3. If not allowed, it keeps on waiting.

do {

// Reader wants to enter the critical section


wait(mutex);

// The number of readers has now increased by 1


readcnt++;

// there is at least one reader in the critical section


// this ensure no writer can enter if there is even one reader
// thus we give preference to readers here
if (readcnt==1)
wait(wrt);

// other readers can enter while this current reader is inside


// the critical section
signal(mutex);

// current reader performs reading here


wait(mutex); // a reader wants to leave

readcnt--;

// that is, no reader is left in the critical section,


if (readcnt == 0)
signal(wrt); // writers can enter

signal(mutex); // reader leaves

} while(true);
Thus, the semaphore ‘wrt‘ is queued on both readers and writers in a manner such that
preference is given to readers if writers are also there. Thus, no reader is waiting simply because
a writer has requested to enter the critical section.

Sleeping Barber problem in Process Synchronization
Problem : The analogy is based upon a hypothetical barber shop with one barber. The shop has
one barber, one barber chair, and n chairs in the waiting room for customers, if any, to sit on.
 If there is no customer, then the barber sleeps in his own chair.
 When a customer arrives, he has to wake up the barber.
 If there are many customers and the barber is cutting a customer’s hair, then the remaining
customers either wait if there are empty chairs in the waiting room or they leave if no chairs
are empty.

Solution : The solution to this problem uses three semaphores. The first, Customers, counts the
number of customers present in the waiting room (the customer in the barber chair is not
included because he is not waiting). The second, Barber (0 or 1), tells whether the barber is idle
or working. The third, a mutex, provides the mutual exclusion required while the shared state is
updated. The solution also keeps a count of the customers waiting in the waiting room; if that
number equals the number of chairs, a newly arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the procedure barber, causing him to
block on the semaphore customers because it is initially 0. Then the barber goes to sleep until the
first customer comes up.
When a customer arrives, he executes the customer procedure and acquires the mutex to enter
the critical region; if another customer enters thereafter, the second one will not be able to do
anything until the first has released the mutex. The customer then checks the chairs in the
waiting room: if the number of waiting customers is less than the number of chairs he sits down,
otherwise he releases the mutex and leaves.
If a chair is available, the customer sits in the waiting room, increments the waiting count, and
signals the Customers semaphore; this wakes up the barber if he is sleeping.
At this point, customer and barber are both awake and the barber is ready to give that person a
haircut. When the haircut is over, the customer exits the procedure and if there are no customers
in waiting room barber sleeps.

Algorithm for Sleeping Barber problem:

Semaphore Customers = 0;

Semaphore Barber = 0;

Mutex Seats = 1;

int FreeSeats = N;

Barber {
    while(true) {
        /* waits for a customer (sleeps if there is none). */
        down(Customers);

        /* mutex to protect the number of available seats. */
        down(Seats);

        /* a chair gets free. */
        FreeSeats++;

        /* bring a customer for a haircut. */
        up(Barber);

        /* release the mutex on the chairs. */
        up(Seats);

        /* barber is cutting hair. */
    }
}

Customer {
    while(true) {
        /* protects the seats so only one customer at a time
           tries to sit in a chair. */
        down(Seats);

        if (FreeSeats > 0) {
            /* sitting down. */
            FreeSeats--;

            /* notify the barber. */
            up(Customers);

            /* release the lock. */
            up(Seats);

            /* wait in the waiting room if the barber is busy. */
            down(Barber);

            /* customer is having a haircut. */
        } else {
            /* release the lock. */
            up(Seats);

            /* no free chair: the customer leaves. */
        }
    }
}

Introduction of Deadlock in Operating System


A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource

Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there
is only one track, none of the trains can move once they are in front of each other. A similar
situation occurs in operating systems when there are two or more processes that hold some
resources and wait for resources held by other(s). For example, in the below diagram, Process 1
is holding Resource 1 and waiting for resource 2 which is acquired by process 2, and process 2 is
waiting for resource 1.

Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
Hold and Wait: A process is holding at least one resource and waiting for resources.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is never to let the system enter a deadlock state.
Looking at each approach individually: prevention works by negating one of the above-mentioned
necessary conditions for deadlock. Avoidance is forward-looking: it assumes that all information
about the resources a process will need is known before the process executes. Banker's algorithm
(also due to Dijkstra) is used to avoid deadlock.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once
occurred.
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.

Deadlock Detection Algorithm in Operating System
If a system does not employ either a deadlock prevention or deadlock avoidance algorithm then
a deadlock situation may occur. In this case-
 Apply an algorithm to examine state of system to determine whether deadlock has occurred
or not.
 Apply an algorithm to recover from the deadlock. For more refer- Deadlock Recovery
Deadlock Avoidance Algorithm/ Bankers Algorithm:
The algorithm employs several time-varying data structures:

 Available –
A vector of length m indicates the number of available resources of each type.
 Allocation –
An n*m matrix defines the number of resources of each type currently allocated to a
process. Column represents resource and rows represent process.
 Request –
An n*m matrix indicates the current request of each process. If request[i][j] equals k then
process Pi is requesting k more instances of resource type Rj.

Now, Bankers algorithm includes a Safety Algorithm / Deadlock Detection Algorithm


The algorithm for finding out whether a system is in a safe state can be described as follows:
Steps of Algorithm:

1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available.
For i = 0, 1, ...., n-1, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both
a) Finish[i] == false
b) Requesti <= Work
If no such i exists go to step 4.
3. Work= Work+ Allocation i
Finish[i]= true
Go to Step 2.
4. If Finish[i]== false for some i, 0<=i<n, then the system is in a deadlocked state. Moreover,
if Finish[i]==false the process Pi is deadlocked.
For example,

1. In this, Work = [0, 0, 0] &
Finish = [false, false, false, false, false]

2. i=0 is selected as both Finish[0] = false and [0, 0, 0]<=[0, 0, 0].

3. Work =[0, 0, 0]+[0, 1, 0] =>[0, 1, 0] &


Finish = [true, false, false, false, false].

4. i=2 is selected as both Finish[2] = false and [0, 0, 0]<=[0, 1, 0].


5. Work =[0, 1, 0]+[3, 0, 3] =>[3, 1, 3] &
Finish = [true, false, true, false, false].

6. i=1 is selected as both Finish[1] = false and [2, 0, 2]<=[3, 1, 3].


7. Work =[3, 1, 3]+[2, 0, 0] =>[5, 1, 3] &
Finish = [true, true, true, false, false].

8. i=3 is selected as both Finish[3] = false and [1, 0, 0]<=[5, 1, 3].


9. Work =[5, 1, 3]+[2, 1, 1] =>[7, 2, 4] &
Finish = [true, true, true, true, false].

10. i=4 is selected as both Finish[4] = false and [0, 0, 2]<=[7, 2, 4].

11. Work =[7, 2, 4]+[0, 0, 2] =>[7, 2, 6] &


Finish = [true, true, true, true, true].

12. Since Finish is a vector of all true it means there is no deadlock in this example.
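A small C sketch of this detection algorithm, using the Allocation and Request values implied by the worked steps above (Available = {0, 0, 0}); it reports whether the system is deadlocked:

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* processes      */
#define M 3   /* resource types */

int main(void) {
    int Available[M]     = {0, 0, 0};
    int Allocation[N][M] = { {0,1,0}, {2,0,0}, {3,0,3}, {2,1,1}, {0,0,2} };
    int Request[N][M]    = { {0,0,0}, {2,0,2}, {0,0,0}, {1,0,0}, {0,0,2} };

    int Work[M];
    bool Finish[N] = {false};
    for (int j = 0; j < M; j++) Work[j] = Available[j];

    /* repeatedly look for a process whose current request can be satisfied */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (Request[i][j] > Work[j]) { can_run = false; break; }
            if (can_run) {
                /* assume Pi finishes and releases its allocation */
                for (int j = 0; j < M; j++) Work[j] += Allocation[i][j];
                Finish[i] = true;
                progress = true;
            }
        }
    }

    bool deadlock = false;
    for (int i = 0; i < N; i++)
        if (!Finish[i]) { deadlock = true; printf("P%d is deadlocked\n", i); }
    if (!deadlock) printf("No deadlock: every Finish[i] is true\n");
    return 0;
}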

Deadlock Detection And Recovery


The Deadlock Detection and Recovery technique to handle deadlock is discussed.
Deadlock Detection :
1. If resources have a single instance –
In this case, we can run an algorithm to check for a cycle in the Resource Allocation Graph. The
presence of a cycle in the graph is a sufficient condition for deadlock. For example, if resource 1
and resource 2 each have a single instance and the graph contains the cycle
R1 → P1 → R2 → P2 → R1, deadlock is confirmed.
2. If there are multiple instances of resources –
Detection of a cycle is a necessary but not a sufficient condition for deadlock; in this case the
system may or may not be in deadlock, depending on the situation.
Deadlock Recovery :
A traditional operating system such as Windows doesn’t deal with deadlock recovery as it is a
time and space-consuming process. Real-time operating systems use Deadlock recovery.
1. Killing processes –
Either kill all the processes involved in the deadlock, or kill them one by one: after killing each
process, check for deadlock again and keep repeating until the system recovers. Killing the
processes one by one breaks the circular wait condition.
2. Resource preemption –
Resources are preempted from the processes involved in the deadlock and allocated to other
processes, so that the system has a chance of recovering from the deadlock. In this case, however,
the preempted process may starve.

Deadlock Prevention And Avoidance


Deadlock Characteristics
As discussed in the previous post, deadlock has following characteristics.

1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait
Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion
It is generally not possible to violate mutual exclusion because some resources, such as tape
drives and printers, are inherently non-shareable.
Eliminate Hold and wait
1. Allocate all required resources to the process before it starts executing; this eliminates the
hold-and-wait condition but leads to low device utilisation. For example, if a process needs the
printer only at a later stage and the printer is allocated before execution begins, the printer stays
blocked until that process completes.

2. The process will make a new request for resources after releasing the current set of resources.
This solution may lead to starvation.

Eliminate No Preemption
Preempt resources from a process when those resources are required by other, higher-priority processes.
Eliminate Circular Wait
Each resource is assigned a numerical order, and a process may request resources only in
increasing order of that numbering.
For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3
(which are numbered lower than R5) will not be granted; only requests for resources numbered
higher than R5 will be granted.
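The same idea applies to locks inside a program. A sketch with two pthread mutexes that are always acquired in the same fixed order, so a circular wait between threads cannot form (the lock names are illustrative):

#include <pthread.h>

/* fixed global order: lockA (number 1) before lockB (number 2) */
pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void *task(void *arg) {
    /* every thread acquires the locks in the same order, so no thread
       can hold lockB while waiting for lockA */
    pthread_mutex_lock(&lockA);
    pthread_mutex_lock(&lockB);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}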
Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests every
resource request made by a process: it checks whether granting the request leaves the system in a
safe state. If it does, the request is allowed; if it would lead to an unsafe state, the request is not allowed.
Inputs to Banker’s Algorithm:

1. Max need of resources by each process.


2. Currently, allocated resources by each process.
3. Max free available resources in the system.
The request will only be granted under the below condition:
1. If the request made by the process is less than or equal to the maximum need of that process.
2. If the request made by the process is less than or equal to the freely available resources in the
system.

Example:
Total resources in the system:
A B C D
6 5 7 6

Available system resources:
A B C D
3 1 1 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 2 1 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0

Need = maximum resources - currently allocated resources.

Processes (need resources):
     A B C D
P1   2 1 0 1
P2   0 2 0 1
P3   0 1 4 0
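As a quick check on this example, the state is safe: with Available = [3, 1, 1, 2], P1's need [2, 1, 0, 1] can be satisfied, and when P1 finishes it releases its allocation, giving Work = [4, 3, 3, 3]; then P2's need [0, 2, 0, 1] can be satisfied, giving Work = [5, 3, 6, 6]; finally P3's need [0, 1, 4, 0] can be satisfied. So <P1, P2, P3> is a safe sequence.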
Note: Deadlock prevention is more strict than Deadlock Avoidance.

Banker’s Algorithm in Operating System


The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests for
safety by simulating the allocation for predetermined maximum possible amounts of all
resources, then makes an “s-state” check to test for possible activities, before deciding whether
allocation should be allowed to continue.
Why Banker’s algorithm is named so?

Banker’s algorithm is named so because it models how a bank might decide whether a loan can
be sanctioned. Suppose there are n account holders in a bank and the total sum of their money is
S. When a person applies for a loan, the bank first subtracts the loan amount from the total money
the bank has, and the loan is sanctioned only if the remaining amount is greater than S. This is
done so that even if all the account holders come to withdraw their money, the bank can easily
pay them.
In other words, the bank would never allocate its money in such a way that it could no longer
satisfy the needs of all its customers; it would always try to remain in a safe state.
Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Available :

 It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj available.
Max :

 It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :

 It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource type Rj
Need :

 It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
 Need [ i, j ] = k means process Pi currently needs ‘k’ instances of resource type Rj.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]

Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.
Banker’s algorithm consists of Safety algorithm and Resource request algorithm
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]

Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state
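
To make the steps above concrete, here is a minimal Python sketch of the Safety Algorithm
(illustrative only, not part of the original notes), run on the A B C D example given earlier
(Available = 3 1 1 2, with the Allocation and Need matrices shown above):

# Minimal sketch of the Safety Algorithm (illustrative only).
# Data taken from the A B C D example above.
available  = [3, 1, 1, 2]                      # free instances of A, B, C, D
allocation = {"P1": [1, 2, 2, 1],
              "P2": [1, 0, 3, 3],
              "P3": [1, 2, 1, 0]}
need       = {"P1": [2, 1, 0, 1],              # Need = Max - Allocation
              "P2": [0, 2, 0, 1],
              "P3": [0, 1, 4, 0]}

def is_safe(available, allocation, need):
    work   = available[:]                      # Work = Available
    finish = {p: False for p in allocation}    # Finish[i] = false for all i
    order  = []
    while True:
        # Find a process that is not finished and whose Need <= Work
        candidate = next((p for p in allocation
                          if not finish[p]
                          and all(n <= w for n, w in zip(need[p], work))), None)
        if candidate is None:
            break
        # Pretend the process runs to completion and releases its resources
        work = [w + a for w, a in zip(work, allocation[candidate])]
        finish[candidate] = True
        order.append(candidate)
    return all(finish.values()), order

safe, sequence = is_safe(available, allocation, need)
print(safe, sequence)   # True ['P1', 'P2', 'P3'] -> the system is in a safe state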

Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:

1) If Requesti <= Needi, go to step (2); otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2) If Requesti <= Available, go to step (3); otherwise, Pi must wait, since the resources are not
available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
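
The Resource-Request Algorithm can be sketched in the same style. The helper below is
illustrative only and reuses the is_safe() function from the previous sketch; it tentatively grants
the request and rolls it back if the resulting state is not safe:

# Minimal sketch of the Resource-Request Algorithm (illustrative only).
def request_resources(pid, request, available, allocation, need):
    # Step 1: the process must not exceed its declared maximum claim
    if any(r > n for r, n in zip(request, need[pid])):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: the resources must actually be free right now
    if any(r > a for r, a in zip(request, available)):
        return False                 # Pi must wait
    # Step 3: pretend to allocate, then test safety
    available[:]    = [a - r for a, r in zip(available, request)]
    allocation[pid] = [al + r for al, r in zip(allocation[pid], request)]
    need[pid]       = [n - r for n, r in zip(need[pid], request)]
    safe, _ = is_safe(available, allocation, need)
    if not safe:                     # roll the tentative allocation back
        available[:]    = [a + r for a, r in zip(available, request)]
        allocation[pid] = [al - r for al, r in zip(allocation[pid], request)]
        need[pid]       = [n + r for n, r in zip(need[pid], request)]
    return safe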

Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C.
Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at
time t0 following snapshot of the system has been taken:

Question1. What will be the content of the Need matrix?


Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of Need Matrix is:

Question2. Is the system in a safe state? If Yes, then what is the safe sequence?
Applying the Safety algorithm on the given system,

Question3. What will happen if process P1 requests one additional instance of resource type
A and two instances of resource type C?

We must determine whether this new system state is safe. To do so, we again execute Safety
algorithm on the above data structures.

Hence the new system state is safe, so we can immediately grant the request for process P1 .

Resource Allocation Graph (RAG) in Operating System


Banker’s algorithm uses tables (allocation, request, available) to describe the state of the system.
The same information can also be represented as a graph, and that graph is called a Resource
Allocation Graph (RAG).
A resource allocation graph shows the state of the system in terms of processes and resources:
how many resources are available, how many are allocated, and what each process is requesting,
all in a single diagram. One advantage of the diagram is that a deadlock can sometimes be seen
directly from the RAG, which may not be obvious from the tables. Tables are better when the
system contains many processes and resources, while the graph is better when there are only a
few.
We know that any graph contains vertices and edges. So RAG also contains vertices and edges.
In RAG vertices are two type –
1. Process vertex – Every process is represented as a process vertex, generally drawn as a circle.
2. Resource vertex – Every resource is represented as a resource vertex. It is of two types:

 Single-instance resource type – Represented as a box with a single dot inside; the number of
dots indicates how many instances of each resource type exist.
 Multi-instance resource type – Also represented as a box, but with several dots inside.

Now coming to the edges of the RAG. There are two types of edges in a RAG –
1. Assign edge – drawn when a resource has already been assigned to a process.
2. Request edge – drawn when a process is requesting a resource that it may need in the future to
complete its execution.

So, if a process is using a resource, an arrow is drawn from the resource node to the process
node. If a process is requesting a resource, an arrow is drawn from the process node to the
resource node.

Example 1 (Single instances RAG) –

If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only
one instance, then the processes will be in deadlock. For example, if process P1 holds resource
R1, process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for
R1, then process P1 and process P2 will be in deadlock.

Here’s another example, that shows Processes P1 and P2 acquiring resources R1 and R2 while
process P3 is waiting to acquire both resources. In this example, there is no deadlock because
there is no circular dependency.
So, in a single-instance resource type RAG, a cycle is a sufficient condition for deadlock.

Example 2 (Multi-instances RAG) –

From the graph above alone, it is not possible to say whether the RAG is in a safe or an unsafe
state. So, to determine the state of this RAG, let’s construct the allocation matrix and the request
matrix.

 The total number of processes are three; P1, P2 & P3 and the total number of resources are
two; R1 & R2.
Allocation matrix –
 For constructing the allocation matrix, just go to the resources and see to which process it is
allocated.
 R1 is allocated to P1, therefore write 1 in allocation matrix and similarly, R2 is allocated to P2
as well as P3 and for the remaining element just write 0.
Request matrix –
 In order to find out the request matrix, you have to go to the process and see the outgoing
edges.
 P1 is requesting resource R2, so write 1 in the matrix and similarly, P2 requesting R1 and for
the remaining element write 0.
So now available resource is = (0, 0).
Checking deadlock (safe or not) –

So, there is no deadlock in this RAG. Even though there is a cycle, there is still no deadlock;
therefore, in a multi-instance resource type RAG, a cycle is not a sufficient condition for deadlock.

The above example is the same as the previous one, except that process P3 is now also requesting
resource R1.
So the table becomes as shown below.

So the available resources are (0, 0), but the outstanding requests are (0, 1), (1, 0) and (1, 0); none
of them can be satisfied, and therefore the system is in deadlock.
Not every cycle in a multi-instance resource type graph is a deadlock, but for a deadlock to exist
there must be a cycle. So, in a RAG with multi-instance resource types, a cycle is a necessary
condition for deadlock, but not a sufficient one.
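
Because a multi-instance RAG is just a drawing of the Allocation and Request matrices, the check
above can also be run directly on those matrices. Below is a minimal Python sketch (illustrative
only) of the detection loop, using the data of this second example, where P3 also requests R1 and
Available = (0, 0):

# Deadlock detection on Allocation / Request matrices (illustrative sketch).
# Second multi-instance example: P3 now also requests R1, Available = (0, 0).
available  = [0, 0]                               # free instances of R1, R2
allocation = {"P1": [1, 0], "P2": [0, 1], "P3": [0, 1]}
request    = {"P1": [0, 1], "P2": [1, 0], "P3": [1, 0]}

work     = available[:]
finished = {p: False for p in allocation}
progress = True
while progress:
    progress = False
    for p in allocation:
        # A process can finish only if its outstanding request fits in Work
        if not finished[p] and all(r <= w for r, w in zip(request[p], work)):
            work = [w + a for w, a in zip(work, allocation[p])]
            finished[p] = True
            progress = True

deadlocked = [p for p, done in finished.items() if not done]
print(deadlocked)   # ['P1', 'P2', 'P3'] -> every process is stuck, i.e. deadlock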

Introduction to memory and memory units


Memory devices are digital systems that store data either temporarily or for the long term. Devices
from digital computers to hard disks have built-in memory that stores user or manufacturer data,
whether in the form of control programs or programs that boot the system. To store such large
amounts of data, memory devices must have enormous capacity, and the challenge is to build
devices that offer large capacity while remaining cost effective. Memory devices must be capable
of storing both permanent data and instantaneous data.
Memories are made up of registers. Each register in the memory is one storage location, also
called a memory location. Memory locations are identified using addresses. The total number of
bits a memory can store is its capacity.
A storage element is called a cell. Each register is made up of storage elements, each of which
stores one bit of data. Data in a memory are stored and retrieved by the processes
called writing and reading, respectively.

A word is a group of bits where a memory unit stores binary information. A word with a group
of 8 bits is called a byte.
A memory unit consists of data lines, address selection lines, and control lines that specify the
direction of transfer. The block diagram of a memory unit is shown below:

Data lines provide the information to be stored in memory. The control inputs specify the
direction of transfer. The k address lines select the word to be accessed: with k address lines,
2^k memory words can be accessed.

Following are some important memory units:

Bit (binary digit): a bit is a logical representation of an electric state; it can be 1 or 0.
Nibble: a group of 4 bits.
Byte: a group of 8 bits.
Word: a fixed number of bits; it differs from computer to computer but is the same for every
device within a given computer. Computers store information in the form of words.

Following are the conversions between units:

Kilobyte (KB): 1 KB = 1024 bytes
Megabyte (MB): 1 MB = 1024 KB
Gigabyte (GB): 1 GB = 1024 MB
Terabyte (TB): 1 TB = 1024 GB
Petabyte (PB): 1 PB = 1024 TB

Memory Hierarchy Design and its Characteristics


In the Computer System Design, Memory Hierarchy is an enhancement to organize the memory
such that it can minimize the access time. The Memory Hierarchy was developed based on a
program behavior known as locality of references. The figure below clearly demonstrates the
different levels of memory hierarchy:

This Memory Hierarchy Design is divided into 2 main types:


1. External Memory or Secondary Memory –
Comprising magnetic disks, optical disks and magnetic tape, i.e. peripheral storage devices that
are accessible to the processor via an I/O module.
2. Internal Memory or Primary Memory –
Comprising main memory, cache memory and CPU registers. This is directly accessible by
the processor.
We can infer the following characteristics of Memory Hierarchy Design from above figure:
1. Capacity:
It is the global volume of information the memory can store. As we move from top to bottom
in the Hierarchy, the capacity increases.
2. Access Time:
It is the time interval between the read/write request and the availability of the data. As we
move from top to bottom in the Hierarchy, the access time increases.
3. Performance:
When computer systems were designed without a memory hierarchy, the speed gap between the
CPU registers and main memory grew because of the large difference in access time, which
lowered system performance. The memory hierarchy design was introduced as an enhancement
to close this gap, and system performance increased as a result. One of the most significant ways
to increase system performance is to minimize how far down the memory hierarchy one has to
go to manipulate data.
4. Cost per bit:
As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal
Memory is costlier than External Memory.

Buddy System – Memory allocation technique


Static partition schemes suffer from the limitation of having a fixed number of active
processes, and the usage of space may also not be optimal. The buddy system is a memory
allocation and management algorithm that manages memory in power-of-two increments.
Assume the memory size is 2^U and a block of size S is required.
 If 2^(U-1) < S <= 2^U: allocate the whole block.
 Else: recursively divide the block into two equal halves and test the condition each time; when
it is satisfied, allocate that block and exit the loop.
The system also keeps a record of all unallocated blocks and can merge adjacent free blocks
(“buddies”) to form one larger block.
Advantages –
 Easy to implement a buddy system
 Allocates blocks of the correct size
 Easy to merge adjacent holes
 Fast to allocate and de-allocate memory
Disadvantages –
 It requires all allocation units to be powers of two
 It leads to internal fragmentation
Example –
Consider a system using the buddy system with a physical address space of 128 KB. Calculate
the size of the partition for an 18 KB process.
Solution –
The space is repeatedly divided by 2 as long as the half can still hold 18 KB:
128 KB → 64 KB → 32 KB.
So, the size of the partition for the 18 KB process = 32 KB.
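
The partition size worked out above can be found mechanically: keep halving the total space
while the half still fits the request, which amounts to rounding the request up to the next power
of two. A minimal Python sketch (illustrative only):

# Buddy-system partition size: smallest power-of-two block that fits (sketch).
def buddy_block_size(total_kb, request_kb):
    block = total_kb
    while block // 2 >= request_kb:     # halve while the half still fits
        block //= 2
    return block

print(buddy_block_size(128, 18))        # 32 (KB), as in the example above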
Fixed (or static) Partitioning in Operating System
In operating systems, Memory Management is the function responsible for allocating and
managing a computer’s main memory. Memory Management function keeps track of the status
of each memory location, either allocated or free to ensure effective and efficient use of Primary
Memory.
There are two Memory Management Techniques: Contiguous, and Non-Contiguous. In
Contiguous Technique, executing process must be loaded entirely in the main memory.
Contiguous Technique can be divided into:
1. Fixed (or static) partitioning

2. Variable (or dynamic) partitioning

Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the main memory.
In this partitioning, the number of partitions (non-overlapping) in RAM is fixed but the size of
each partition may or may not be the same. As it is contiguous allocation, no spanning
is allowed. Partitions are made before execution begins or during system configuration.

As illustrated in the figure above, the first process consumes only 1 MB of its 4 MB partition in
main memory.
Hence, Internal Fragmentation in first block is (4-1) = 3MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.
Suppose process P5 of size 7MB comes. But this process cannot be accommodated in spite of
available free space because of contiguous allocation (as spanning is not allowed). Hence, 7MB
becomes part of External Fragmentation.
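
A quick way to check the figures above is to sum the unused space in each occupied partition. A
tiny Python sketch (illustrative only, using the partition and process sizes from the example):

# Internal fragmentation in fixed partitioning (illustrative sketch).
partitions = [4, 8, 8, 16]        # partition sizes in MB (from the example)
processes  = [1, 7, 7, 14]        # process loaded in each partition, in MB

internal = sum(part - proc for part, proc in zip(partitions, processes))
print(internal)                   # 7 (MB), as computed above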
There are some advantages and disadvantages of fixed partitioning.
Advantages of Fixed Partitioning –
1. Easy to implement:
Algorithms needed to implement Fixed Partitioning are easy to implement. It simply requires
putting a process into a certain partition without focusing on the emergence of Internal and
External Fragmentation.

2. Little OS overhead:
Fixed partitioning requires little extra or indirect computational power.

Disadvantages of Fixed Partitioning –


1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This can cause internal fragmentation.

2. External Fragmentation:
The total unused space (as stated above) of the various partitions cannot be used to load a
process: even though free space is available, it is not in contiguous form (as spanning is not
allowed).

3. Limit on process size:
A process larger than the largest partition in main memory cannot be accommodated. The
partition size cannot be varied according to the size of the incoming process. Hence, a 32 MB
process in the above-stated example is invalid.

4. Limitation on Degree of Multiprogramming:
Partitions in main memory are made before execution or during system configuration, so main
memory is divided into a fixed number of partitions. If there are n1 partitions in RAM and n2
processes, then the condition n2 <= n1 must be fulfilled; a number of processes greater than the
number of partitions in RAM is invalid in fixed partitioning.

Variable (or dynamic) Partitioning in Operating System


In operating systems, Memory Management is the function responsible for allocating and
managing computer’s main memory. Memory Management function keeps track of the status of
each memory location, either allocated or free to ensure effective and efficient use of Primary
Memory.
There are two Memory Management Techniques: Contiguous, and Non-Contiguous. In
Contiguous Technique, executing process must be loaded entirely in main-memory. Contiguous
Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning
Variable Partitioning –
It is a part of the contiguous allocation technique and is used to alleviate the problems faced by
fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or
during system configuration. The features associated with variable partitioning are:
1. Initially RAM is empty and partitions are made during run-time according to the process’s
need, instead of during system configuration.
2. The size of the partition is equal to the size of the incoming process.
3. The partition size varies according to the need of the process, so internal fragmentation is
avoided and RAM is utilized efficiently.
4. The number of partitions in RAM is not fixed and depends on the number of incoming
processes and the size of main memory.

There are some advantages and disadvantages of variable partitioning over fixed partitioning as
given below.
Advantages of Variable Partitioning –
1. No Internal Fragmentation:
In variable Partitioning, space in main memory is allocated strictly according to the need of
process, hence there is no case of internal fragmentation. There will be no unused space left in
the partition.
2. No restriction on Degree of Multiprogramming:
More processes can be accommodated due to the absence of internal fragmentation; processes
can be loaded as long as free memory remains.
3. No limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition could not be loaded, and the
process could not be divided, as division is invalid in the contiguous allocation technique. In
variable partitioning, the process size is not restricted in this way, since the partition size is
decided according to the process size.
Disadvantages of Variable Partitioning –
1. Difficult Implementation:
Implementing variable Partitioning is difficult as compared to Fixed Partitioning as it involves
allocation of memory during run-time rather than during system configure.
2. External Fragmentation:
There will be external fragmentation in spite of the absence of internal fragmentation.
For example, suppose in the above example process P1 (2 MB) and process P3 (1 MB) complete
their execution. Two holes of 2 MB and 1 MB are left. Now suppose a process P5 of size 3 MB
comes; the empty space in memory cannot be allocated to it, because no spanning is allowed in
contiguous allocation. The rule says that a process must be contiguously present in main
memory to be executed. Hence it results in external fragmentation.

Now P5 of size 3 MB cannot be accommodated in spite of the required space being available,
because spanning is not allowed in contiguous allocation.

Non-Contiguous Allocation in Operating System


Paging and Segmentation are the two ways that allow a process’s physical address space to be
non-contiguous. It has the advantage of reducing memory wastage but it increases the overheads
due to address translation. It slows the execution of the memory because time is consumed in
address translation.
In non-contiguous allocation, the Operating system needs to maintain the table which is called
the Page Table for each process which contains the base address of each block that is acquired
by the process in memory space. In non-contiguous memory allocation, different parts of a
process are allocated to different places in Main Memory. Spanning is allowed which is not
possible in other techniques like Dynamic or Static Contiguous memory allocation. That’s why
paging is needed to ensure effective memory allocation. Paging is done to remove External
Fragmentation.
There are five types of Non-Contiguous Allocation of Memory in the Operating System:
1. Paging
2. Multilevel Paging
3. Inverted Paging
4. Segmentation
5. Segmented Paging
Working:
Here a process can be spanned across different spaces in main memory in a non-consecutive
manner. Suppose a process P of size 4 KB arrives and main memory has two empty slots of
2 KB each, so the total free space is 2 * 2 = 4 KB. In contiguous memory allocation, process P
cannot be accommodated, because spanning is not allowed.
In contiguous allocation, space in memory should be allocated to the whole process. If not, then
that space remains unallocated. But in Non-Contiguous allocation, the process can be divided
into different parts and hence filling the space in the main memory. In this example, process P
can be divided into two parts of equal size – 2KB. Hence one part of process P can be allocated
to the first 2KB space of main memory and the other part of the process can be allocated to the
second 2KB space of main memory. The below diagram will explain in a better way:

How a process is divided before being placed in main memory is important to understand. If the
division is done only after analysing the number and sizes of the empty spaces in main memory,
it becomes very time-consuming, because those numbers and sizes keep changing as the
processes already in memory execute.
To avoid this, the process is divided in secondary memory in advance, before it reaches main
memory for execution. Every process is divided into parts of equal size called Pages, and main
memory is divided into parts of equal size called Frames. It is important to understand that:

Size of a page in the process = Size of a frame in memory

although their numbers can be different. The diagram below illustrates this: consider an empty
main memory in which each frame is 2 KB, and two processes P1 and P2 of 2 KB each.

The resultant main memory:

Logical and Physical Address in Operating System


Logical Address is generated by the CPU while a program is running. Because it does not exist
physically, the logical address is also known as a Virtual Address. The CPU uses this address as
a reference to access the physical memory location. The term Logical Address Space refers to
the set of all logical addresses generated from a program’s perspective.
The hardware device called the Memory-Management Unit (MMU) maps a logical address to
its corresponding physical address.
Physical Address identifies a physical location of the required data in memory. The user never
deals with the physical address directly but can access it through its corresponding logical
address.
The user program generates logical addresses and thinks that the program is running in this
logical address space, but the program needs physical memory for its execution; therefore, the
logical addresses must be mapped to physical addresses by the MMU before they are used. The
term Physical Address Space refers to all physical addresses corresponding to the logical
addresses in a logical address space.

Mapping virtual-address to physical-addresses

Differences Between Logical and Physical Address in Operating System


The basic difference between Logical and physical address is that Logical address is generated
by CPU in perspective of a program whereas the physical address is a location that exists in the
memory unit.
1. Logical Address Space is the set of all logical addresses generated by CPU for a program
whereas the set of all physical address mapped to corresponding logical addresses is called
Physical Address Space.
2. The logical address does not exist physically in the memory whereas physical address is a
location in the memory that can be accessed physically.
3. With compile-time and load-time address binding, the logical and physical addresses are
identical, whereas they differ from each other with run-time address binding.
4. The logical address is generated by the CPU while the program is running whereas the
physical address is computed by the Memory Management Unit (MMU).

Comparison Chart:

Parameter       LOGICAL ADDRESS                                     PHYSICAL ADDRESS
Basic           Generated by the CPU                                Location in a memory unit
Address Space   The set of all logical addresses generated by       The set of all physical addresses mapped to the
                the CPU in reference to a program                   corresponding logical addresses
Visibility      The user can view the logical address of a          The user can never view the physical address of a
                program                                             program
Generation      Generated by the CPU                                Computed by the MMU
Access          The user can use the logical address to access      The user can access the physical address only
                the physical address                                indirectly, not directly
Editable        The logical address can be changed                  The physical address will not change
Also called     Virtual address                                     Real address
To conclude, paging allows the memory address space of a process to be non-contiguous.
Paging is more flexible because only pages of a process need to be moved, and it allows more
processes to reside in main memory than contiguous memory allocation does.
Paging in Operating System
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. The process of retrieving processes in the form of pages from secondary
storage into main memory is known as paging. The basic idea of paging is to divide each
process into pages and to divide main memory into frames. This scheme permits the physical
address space of a process to be non-contiguous.
Let us look some important terminologies:
 Logical Address or Virtual Address (represented in bits): An address generated by the CPU
 Logical Address Space or Virtual Address Space( represented in words or bytes): The set of
all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on memory unit
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) =
27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M =
2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) =
24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU)
which is a hardware device and this mapping is known as paging technique.
 The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into:
 Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number
 Page offset(d): Number of bits required to represent particular word in a page or page size of
Logical Address Space or word number of a page or page offset.
Physical Address is divided into
 Frame number(f): Number of bits required to represent the frame of Physical Address Space
or Frame number.
 Frame offset(d): Number of bits required to represent particular word in a frame or frame
size of Physical Address Space or word number of a frame or frame offset.
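
Putting the pieces together, translating a logical address amounts to splitting off the offset,
looking up the frame for the page number, and gluing the offset back on. The Python sketch
below is illustrative only; it uses the bit widths from the example above (10-bit offset for 1 K-word
pages) and a hypothetical page-to-frame mapping for a 4-page process:

# Logical-to-physical translation with a page table (illustrative sketch).
OFFSET_BITS = 10                                  # page size = 1 K words
page_table  = {0: 2, 1: 0, 2: 3, 3: 1}            # hypothetical page -> frame map

def translate(logical_address):
    page   = logical_address >> OFFSET_BITS                 # page number p
    offset = logical_address & ((1 << OFFSET_BITS) - 1)     # page offset d
    frame  = page_table[page]                               # page-table lookup
    return (frame << OFFSET_BITS) | offset                  # physical address

print(translate(0x0C05))   # 1029, i.e. page 3 / offset 5 -> frame 1 / offset 5 (0x405)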
The hardware implementation of the page table can be done using dedicated registers, but using
registers is satisfactory only if the page table is small. If the page table contains a large number
of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup
hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in the TLB consists of two parts: a tag and a value.
 When this memory is used, an item is compared with all tags simultaneously; if the item is
found, the corresponding value is returned.
Page Table Entries in Page Table
A page table consists of page table entries (PTEs), where each entry stores a frame number and
optional status bits (such as protection bits). Many of the status bits are used by the virtual
memory system; the most important field in a PTE is the frame number.
Page table entry has the following information –

1. Frame Number – It gives the frame number in which the page you are looking for is present.
The number of bits required depends on the number of frames. The frame-number field is also
known as the address translation bit.
2. Number of frames = size of physical memory / frame size.
3. Present/Absent bit – This bit says whether the particular page you are looking for is present or
absent. If it is not present, that is called a Page Fault. It is set to 0 if the corresponding page is
not in memory, and it is used by the operating system to handle page faults and support virtual
memory. This bit is also known as the valid/invalid bit.
4. Protection bit – This bit says what kind of protection you want on that page; these bits are used
for the protection of the page frame (read, write, etc.).
5. Referenced bit – This bit says whether the page has been referenced in the last clock cycle. It
is set to 1 by hardware when the page is accessed.
6. Caching enabled/disabled – Sometimes we need fresh data. Suppose the user is typing input
at the keyboard and the program must act on that input; the information arrives in main
memory, so main memory holds the latest data typed by the user. If that page were cached, the
cache could still show old information, so whenever freshness is required we do not want to use
caching or many levels of the memory hierarchy. The data closest to the CPU and the data
closest to the user could otherwise differ, but we want them to stay consistent, so that whatever
the user supplies is seen by the CPU as soon as possible. That is why caching may need to be
disabled: this bit enables or disables caching of the page.
7. Modified bit – This bit says whether the page has been modified, i.e. whether something has
been written to it. If a modified page is chosen for replacement, the modified contents must be
written back (saved) to the hard disk. The bit is set to 1 by hardware on a write access to the
page and is used to avoid writing back unchanged pages when they are swapped out. This
modified bit is also called the Dirty bit.

Inverted Page Table in Operating System


Most operating systems implement a separate page table for each process, i.e. for ‘n’ processes
running on a multiprocessing/timesharing operating system, there are ‘n’ page tables stored in
memory. When a process is very large and occupies a large amount of virtual memory, its page
table size also increases substantially.
Example: A process of size 2 GB with:

Page size = 512 bytes
Size of page table entry = 4 bytes, then
Number of pages in the process = 2 GB / 512 B = 2^22
Page table size = 2^22 * 2^2 = 2^24 bytes
Through this example, it can be concluded that for multiple processes running simultaneously in
an OS, a considerable part of memory is occupied by page tables only. Operating Systems also
incorporate multilevel paging schemes which further increase the space required for storing the
page tables and a large amount of memory is invested in storing them. The amount of memory
occupied by the page tables can turn out to be a huge overhead and is always unacceptable as
main memory is always a scarce resource. Various efforts are made to utilize the memory
efficiently and to maintain a good balance in the level of multiprogramming and efficient CPU
utilization.
Inverted Page Table – An alternative approach is to use an Inverted Page Table structure, which
contains one page-table entry for every frame of main memory. The number of entries in the
Inverted Page Table is therefore reduced to the number of frames in physical memory, and a
single page table is used to represent the paging information of all the processes. With the
inverted page table, the overhead of storing an individual page table for every process is
eliminated, and only a fixed portion of memory is required to store the paging information of all
the processes together. This technique is called inverted paging because the indexing is done
with respect to the frame number instead of the logical page number. Each entry in the page
table contains the following fields.
 Page number – It specifies the page number range of the logical address.
 Process id – An inverted page table contains the address space information of all the
processes in execution. Since two different processes can have a similar set of virtual
addresses, it becomes necessary in the Inverted Page Table to store a process Id of each
process to identify its address space uniquely. This is done by using the combination of PId
and Page Number. So this Process Id acts as an address space identifier and ensures that a
virtual page for a particular process is mapped correctly to the corresponding physical frame.
 Control bits – These bits are used to store extra paging-related information. These include the
valid bit, dirty bit, reference bits, protection, and locking information bits.
 Chained pointer – It may be possible sometimes that two or more processes share a part of
the main memory. In this case, two or more logical pages map to the same Page Table Entry
then a chaining pointer is used to map the details of these logical pages to the root page table.

Working – The operation of an inverted page table is shown below. The virtual address
generated by the CPU contains the process id, page number and offset fields, and each page
table entry holds the other information required by the paging mechanism. When a memory
reference takes place, the virtual address is handled by the Memory-Management Unit (MMU):
the inverted page table is searched and the corresponding frame number is obtained. If the match
is found at the i-th entry, that frame number is used to form the real (physical) address; if no
match is found, a Segmentation Fault is generated. Note: Number of entries in the inverted page
table = Number of frames in the Physical Address Space (PAS).
Examples – The Inverted Page table and its variations are implemented in various systems like
PowerPC, UltraSPARC, and the IA-64 architecture. An implementation of the Mach operating
system on the RT-PC also uses this technique.
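
The lookup described under “Working” can be sketched as a search keyed on (process id, page
number). The Python sketch below is illustrative only, with hypothetical table contents:

# Inverted page table lookup (illustrative sketch).
# One entry per physical frame; each entry records which (process, page)
# currently occupies that frame.
inverted_table = [            # index = frame number (hypothetical contents)
    ("P1", 0),                # frame 0 holds page 0 of process P1
    ("P2", 3),                # frame 1 holds page 3 of process P2
    ("P1", 2),                # frame 2 holds page 2 of process P1
    ("P3", 1),                # frame 3 holds page 1 of process P3
]

def lookup(pid, page):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame                  # match at the i-th entry -> frame number
    raise Exception("no mapping found")   # fault is raised instead

print(lookup("P1", 2))                    # 2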
Advantages and Disadvantages:

 Reduced memory space – Inverted Pagetables typically reduce the amount of memory
required to store the page tables to a size bound of physical memory. The maximum number
of entries could be the number of page frames in the physical memory.
 Longer lookup time – Inverted Page tables are sorted in order of frame number but the
memory look-up takes place with respect to the virtual address, so, it usually takes a longer
time to find the appropriate entry but often these page tables are implemented using hash data
structures for a faster lookup.
 Difficult shared memory implementation – As the Inverted Page Table stores a single entry
for each frame, it becomes difficult to implement the shared memory in the page tables.
Chaining techniques are used to map more than one virtual address to the entry specified in
the order of frame number.

Hashed Page Tables in Operating System


The following are the most common techniques for structuring the page table – Hierarchical
Paging, Hashed Page Tables, and Inverted Page Tables.
Let us explore more about Hashed Page Tables and its working in this article.
Hashed Page Tables :
In hashed page tables, the virtual page number in the virtual address is hashed into a hash table.
They are used to handle address spaces larger than 32 bits. Each entry in the hash table has a
linked list of elements that hash to the same location (to handle collisions, since different page
numbers can produce the same hash value). The value that is hashed is the virtual page number,
i.e. all the bits of the virtual address that are not part of the page offset.
For each element in the hash table, there are three fields –
1. Virtual Page Number (which is the hash value).
2. Value of the mapped page frame.
3. A pointer to the next element in the linked list.

Hashed Page Table :


The virtual page number is compared with field 1 in the first element of the linked list. If it
matches, the corresponding page frame (field 2) is used to form the desired physical address.
Otherwise, subsequent entries in the linked list are checked until the virtual page number
matches.
To make this algorithm suitable for 64-bit address spaces also, clustered page tables are used.
Clustered Page Tables are similar to hashed page tables except that each entry in the hash table
refers to many pages rather than one single page (as in a hashed page table). Hence, a single
entry of a clustered page table can store the mappings for multiple physical page frames.
Clustered page tables are specifically useful for sparse address spaces, where memory references
are scattered throughout the address space (non-contiguous).
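
A minimal Python sketch of the hashed page table lookup described above (illustrative only; the
hash function and the inserted mappings are hypothetical):

# Hashed page table lookup (illustrative sketch).
# Each bucket holds a chain of (virtual page number, frame number) pairs.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))       # hash on the VPN

def lookup(vpn):
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:   # walk the chain
        if entry_vpn == vpn:
            return frame
    return None                                           # page fault

insert(3, 7); insert(11, 2)          # 3 and 11 collide in the same bucket
print(lookup(11))                    # 2 -- found after walking the chain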

Segmentation in Operating System
A process is divided into segments: the chunks that a program is divided into, which are not
necessarily all of the same size, are called segments. Segmentation gives the user’s view of the
process, which paging does not. Here the user’s view is mapped onto physical memory.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at any one
point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into memory at run
time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table stores the information about all such segments and is called Segment
Table.
Segment Table – It maps the two-dimensional logical address into a one-dimensional physical
address. Each of its entries has:
 Base Address: It contains the starting physical address where the segments reside in memory.
 Limit: It specifies the length of the segment.

Translation of Two dimensional Logical Address to one dimensional Physical Address.

Address generated by the CPU is divided into:


 Segment number (s): Number of bits required to represent the segment.
 Segment offset (d): Number of bits required to represent the size of the segment.
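
To make the base/limit translation concrete, here is a minimal Python sketch (illustrative only;
the segment-table values are hypothetical, not taken from these notes):

# Segmented address translation (illustrative sketch).
# Each segment-table entry holds (base, limit); the numbers are hypothetical.
segment_table = {0: (1400, 1000),     # segment 0 starts at 1400, length 1000
                 1: (6300,  400),
                 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                      # offset must be within the segment
        raise Exception("trap: offset beyond segment limit")
    return base + offset                     # one-dimensional physical address

print(translate(2, 53))                      # 4353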
Advantages of Segmentation –
 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space is broken into
little pieces, causing External fragmentation.

Virtual Memory in Operating System


Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as
though it were part of the main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites,
and program-generated addresses are translated automatically to the corresponding machine
addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by
the amount of secondary memory available, not by the actual number of main storage
locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.

1. All memory references within a process are logical addresses that are dynamically translated
into physical addresses at run time. This means that a process can be swapped in and out of
the main memory such that it occupies different places in the main memory at different times
during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be continuously
located in the main memory during execution. The combination of dynamic run-time address
translation and use of page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging
or Demand Segmentation.

Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is known
as demand paging.
The process includes the following steps :

1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the OS
must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address space. The
page replacement algorithms are used for the decision-making of replacing the page in
physical address space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
Advantages :
 More processes may be maintained in the main memory: Because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to
more efficient utilization of the processor because it is more likely that at least one of the more
numerous processes will be in the ready state at any particular time.
 A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be executed
because of demand paging. The OS itself loads pages of a process in the main memory as
required.
 It allows greater multiprogramming levels by using less of the available (primary) memory for
each process.

Page Fault Service Time:
The time taken to service a page fault is called the page fault service time; it includes the time
taken to perform all six steps above.
Let main memory access time = m
    page fault service time = s
    page fault rate = p
Then, Effective memory access time (EMAT) = (p * s) + (1 - p) * m
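As a quick worked illustration (with hypothetical numbers, not taken from these notes): suppose
m = 100 ns, s = 8 ms = 8,000,000 ns and p = 0.0001. Then

Effective memory access time = (0.0001 * 8,000,000 ns) + (0.9999 * 100 ns)
                             = 800 ns + 99.99 ns
                             ≈ 900 ns

so even one page fault in every ten thousand accesses slows effective memory access down by
roughly a factor of nine.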
Swapping:
Swapping a process out means removing all of its pages from memory, or marking them so that
they will be removed by the normal page replacement process. Suspending a process ensures that
it is not runnable while it is swapped out. At some later time, the system swaps back the process
from the secondary storage to the main memory. When a process is busy swapping pages in and
out then this situation is called thrashing.

Thrashing :
At any given time, only a few pages of any process are in the main memory and therefore more
processes can be maintained in memory. Furthermore, time is saved because unused pages are
not swapped in and out of memory. However, the OS must be clever about how it manages this
scheme. In the steady-state practically, all of the main memory will be occupied with process
pages, so that the processor and OS have direct access to as many processes as possible. Thus
when the OS brings one page in, it must throw another out. If it throws out a page just before it is
used, then it will just have to get that page again almost immediately. Too much of this leads to a
condition called Thrashing. The system spends most of its time swapping pages rather than
executing instructions. So a good page replacement algorithm is required.
In the accompanying diagram, for an initial degree of multiprogramming up to a certain point
(lambda), CPU utilization is very high and the system resources are utilized 100%. But if the
degree of multiprogramming is increased further, CPU utilization falls drastically: the system
spends most of its time on page replacement, and the time taken to complete the execution of
processes increases. This situation in the system is called thrashing.
Causes of Thrashing :
1. High degree of multiprogramming : If the number of processes keeps on increasing in the
memory then the number of frames allocated to each process will be decreased. So, fewer
frames will be available for each process. Due to this, a page fault will occur more frequently
and more CPU time will be wasted in just swapping in and out of pages and the utilization
will keep on decreasing.
For example:
Let free frames = 400
Case 1: Number of process = 100
Then, each process will get 4 frames.
Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing, as the number of processes is increased, frames per process
are decreased. Hence CPU time will be consumed in just swapping pages.

2. Lack of Frames: If a process has too few frames, fewer of its pages can reside in memory, so
more frequent swapping in and out is required. This may lead to thrashing. Hence a sufficient
number of frames must be allocated to each process in order to prevent thrashing.
Recovery of Thrashing :
 Do not allow the system to go into thrashing by instructing the long-term scheduler not to
bring the processes into memory after the threshold.
 If the system is already thrashing then instruct the mid-term scheduler to suspend some of the
processes so that we can recover the system from thrashing.

Page Replacement Algorithms in Operating Systems


In an operating system that uses paging for memory management, a page replacement algorithm
is needed to decide which page needs to be replaced when a new page comes in.
Page Fault: A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space but not loaded in physical memory. Since actual physical
memory is much smaller than virtual memory, page faults happen. In case of a page fault,
Operating System might have to replace one of the existing pages with the newly needed page.
Different page replacement algorithms suggest different ways to decide which page to replace.
The target for all algorithms is to reduce the number of page faults.

Page Replacement Algorithms:

1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm,
the operating system keeps track of all pages in the memory in a queue, the oldest page is in the
front of the queue. When a page needs to be replaced page in the front of the queue is selected
for removal.
Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames.Find the number
of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not in memory,
so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it
replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes it is not in memory, so it
replaces 0 —> 1 Page Fault. Total = 6 page faults.
Belady’s anomaly proves that it is possible to have more page faults when increasing the
number of page frames while using the First in First Out (FIFO) page replacement algorithm.
For example, if we consider reference strings 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4, and 3 slots, we get 9
total page faults, but if we increase slots to 4, we get 10-page faults.
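
The FIFO behaviour, including Belady’s anomaly, is easy to reproduce in a few lines. A minimal
Python sketch (illustrative only):

# FIFO page replacement (illustrative sketch).
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:                 # page fault
            faults += 1
            if len(frames) == num_frames:      # frames full: evict the oldest
                frames.popleft()
            frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))             # 6
# Belady's anomaly: more frames can mean more faults under FIFO
ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(ref, 3), fifo_page_faults(ref, 4))     # 9 10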
2. Optimal Page replacement: In this algorithm, pages are replaced which would not be used
for the longest duration of time in the future.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frame.
Find number of page fault.

Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is not
used for the longest duration of time in the future —> 1 Page Fault. 0 is already there so —> 0
Page Fault. 4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages are already
available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot
know future requests. The use of Optimal Page replacement is to set up a benchmark so that
other replacement algorithms can be analyzed against it.
3. Least Recently Used: In this algorithm, page will be replaced which is least recently used.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is the least
recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages are already
available in memory.
4. Most Recently Used (MRU): In this algorithm, the page that has been used most recently is
replaced. Belady’s anomaly can occur in this algorithm.

Overlays in Memory Management


The main problem in fixed partitioning is that the size of a process is limited by the maximum
size of a partition, which means a process can never span more than one partition. To solve this
problem, an earlier technique called Overlays was used.
The idea behind overlays is that a running process never uses its complete program at the same
time; it uses only some part of it. So whatever part is currently required is loaded, and once that
part is done it is unloaded (pulled back) so that the next required part can be loaded and run.
Formally,
“The process of transferring a block of program code or other data into internal memory,
replacing what is already stored”.
Sometimes the program is even larger than the biggest partition; in that case, overlays should be
used. Overlaying is thus a technique for running a program that is bigger than the available
physical memory by keeping in memory only those instructions and data that are needed at any
given time. The program is divided into modules in such a way that not all modules need to be
in memory at the same time.
Advantage –
 Reduce memory requirement

 Reduce time requirement
Disadvantage –
 Overlay map must be specified by the programmer
 Programmer must know the memory requirement
 Overlapped modules must be completely disjoint
 Programming design of the overlay structure is complex and not possible in all cases
Example –
The best example of overlays is a two-pass assembler. Two passes means that at any time it is
doing only one thing, either the 1st pass or the 2nd pass, finishing the 1st pass before starting the
2nd. Assume the available main memory size is 150 KB and the total code size is 200 KB:
Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB
As the total code size is 200 KB and the main memory size is 150 KB, it is not possible to keep
both passes in memory together, so the overlays technique should be used. According to the
overlays concept, only one pass is in use at any time, and both passes always need the symbol
table and the common routine. If the overlay driver* is 10 KB, what is the minimum partition
size required? For pass 1 the total memory needed is (70 KB + 30 KB + 20 KB + 10 KB) =
130 KB, and for pass 2 it is (80 KB + 30 KB + 20 KB + 10 KB) = 140 KB. So with a partition of
at least 140 KB this code can run easily.
*Overlay driver: it is the user’s responsibility to take care of overlaying; the operating system
does not provide anything. The user must write the code that specifies which parts are required
in the 1st pass and, once the 1st pass is over, the code that unloads pass 1 and loads pass 2. That
code, which moves the various parts of the program in and out, is known as the overlay driver.
File Systems in Operating System
A file is a collection of related information that is recorded on secondary storage; alternatively, a
file is a collection of logically related entities. From the user’s perspective, a file is the smallest
allotment of logical secondary storage.
The name of the file is divided into two parts as shown below:
 name
 extension, separated by a period.

File attributes and operations:

Attributes        Types    Operations
Name              doc      Create
Type              exe      Open
Size              jpg      Read
Creation Date     xls      Write
Author            c        Append
Last Modified     java     Truncate
Protection        class    Delete
                           Close

File type        Usual extension          Function
Executable       exe, com, bin            Ready-to-run machine-language program
Object           obj, o                   Compiled, machine language, not linked
Source Code      c, java, pas, asm, a     Source code in various languages
Batch            bat, sh                  Commands to the command interpreter
Text             txt, doc                 Textual data, documents
Word Processor   wp, tex, rrf, doc        Various word-processor formats
Archive          arc, zip, tar            Related files grouped into one compressed file
Multimedia       mpeg, mov, rm            Audio/video information
Markup           xml, html, tex           Textual data and documents
Library          lib, a, so, dll          Libraries of routines for programmers
Print or View    gif, pdf, jpg            Format for printing or viewing an ASCII or binary file

FILE DIRECTORIES:
A file directory is a collection of files. The directory contains information about the files,
including attributes, location and ownership. Much of this information, especially that concerned
with storage, is managed by the operating system. The directory is itself a file, accessible by
various file management routines.
Information contained in a device directory includes:
 Name
 Type
 Address
 Current length
 Maximum length
 Date last accessed
 Date last updated
 Owner id
 Protection information
Operations performed on a directory are:
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system
Advantages of maintaining directories are:
 Efficiency: A file can be located more quickly.
 Naming: It becomes convenient for users, as two users can use the same name for different
files or different names for the same file.
 Grouping: Files can be grouped logically by properties, e.g. all Java programs, all games,
etc.
SINGLE-LEVEL DIRECTORY
In this, a single directory is maintained for all users.
 Naming problem: Users cannot have the same name for two files.
 Grouping problem: Users cannot group files according to their needs.
TWO-LEVEL DIRECTORY
In this, a separate directory is maintained for each user.
 Path name: Due to the two levels, every file has a path name that is used to locate it.
 Now, different users can have files with the same name.
 Searching is efficient in this method.
TREE-STRUCTURED DIRECTORY :
Directory is maintained in the form of a tree. Searching is efficient and also there is grouping
capability. We have absolute or relative path name for a file.

FILE ALLOCATION METHODS :

1. Contiguous Allocation –
A single continuous set of blocks is allocated to a file at the time of file creation. Thus, this is a
pre-allocation strategy, using variable size portions. The file allocation table needs just a single
entry for each file, showing the starting block and the length of the file. This method is best from
the point of view of the individual sequential file. Multiple blocks can be read in at a time to
improve I/O performance for sequential processing. It is also easy to retrieve a single block. For
example, if a file starts at block b, and the ith block of the file is wanted, its location on
secondary storage is simply b + i - 1.

Disadvantage –
 External fragmentation will occur, making it difficult to find contiguous blocks of space of
sufficient length. A compaction algorithm will be necessary to free up additional space on the disk.
 Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.
2. Linked Allocation(Non-contiguous allocation) –
Allocation is on an individual block basis. Each block contains a pointer to the next block in the
chain. Again the file table needs just a single entry for each file, showing the starting block and
the length of the file. Although pre-allocation is possible, it is more common simply to allocate
blocks as needed. Any free block can be added to the chain. The blocks need not be continuous.
Increase in file size is always possible if free disk block is available. There is no external
fragmentation because only one block at a time is needed but there can be internal fragmentation
but it exists only in the last disk block of file.
Disadvantage –
 Internal fragmentation exists in last disk block of file.
 There is an overhead of maintaining the pointer in every disk block.
 If the pointer of any disk block is lost, the file will be truncated.
 It supports only the sequential access of files.
3. Indexed Allocation –
It addresses many of the problems of contiguous and chained allocation. In this case, the file
allocation table contains a separate one-level index for each file: The index has one entry for
each block allocated to the file. Allocation may be on the basis of fixed-size blocks or variable-
sized blocks. Allocation by blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This allocation technique supports both sequential and
direct access to the file and thus is the most popular form of file allocation.

Disk Free Space Management :


Just as the space that is allocated to files must be managed ,so the space that is not currently
allocated to any file must be managed. To perform any of the file allocation techniques,it is
necessary to know what blocks on the disk are available. Thus we need a disk allocation table in
addition to a file allocation table.The following are the approaches used for free space
management.

1. Bit Tables : This method uses a vector containing one bit for each block on the disk. Each
entry for a 0 corresponds to a free block and each 1 corresponds to a block in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block; 0 implies that the particular
block is free and 1 implies that the block is already occupied.
that it is relatively easy to find one or a contiguous group of free blocks. Thus, a bit table
works well with any of the file allocation methods. Another advantage is that it is as small as
possible.
2. Free Block List : In this method, each block is assigned a number sequentially and the list of
the numbers of all free blocks is maintained in a reserved block of the disk.
Structures of Directory in Operating System
A directory is a container that is used to contain folders and files. It organizes files and folders
in a hierarchical manner.

There are several logical structures of a directory, these are given below.
 Single-level directory –
The single-level directory is the simplest directory structure. In it, all files are contained in the
same directory which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files
increases or when the system has more than one user. Since all the files are in the same
directory, they must have unique names. If two users name their data file test, the unique-name
rule is violated.
Advantages:
 Since it is a single directory, its implementation is very easy.
 If the directory is small (holding only a few files), searching becomes faster.
 Operations like file creation, searching, deletion and updating are very easy in such a
directory structure.
Disadvantages:
 There is a chance of name collisions because two files cannot have the same name.
 Searching becomes time-consuming if the directory is large.
 Files of the same type cannot be grouped together.
 Two-level directory –
As we have seen, a single-level directory often leads to confusion of file names among
different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. The system's master file
directory (MFD) is searched whenever a new user logs in. The MFD is indexed by
username or account number, and each entry points to the UFD for that user.

Advantages:
 We can give full path like /User-name/directory-name/.
 Different users can have the same directory as well as the file name.
 Searching of files becomes easier due to pathname and user-grouping.
Disadvantages:
 A user is not allowed to share files with other users.
 Still, it is not very scalable; two files of the same type cannot be grouped together for the same
user.

 Tree-structured directory –
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to
extend the directory structure to a tree of arbitrary height.
This generalization allows the user to create their own subdirectories and to organize their

files accordingly.
A tree structure is the most common directory structure. The tree has a root directory, and every
file in the system has a unique path.
Advantages:
 Very general, since full pathname can be given.
 Very scalable, the probability of name collision is less.
 Searching becomes very easy, we can use both absolute paths as well as relative.
Disadvantages:
 Not every file fits into the hierarchical model; files may need to be saved in multiple directories.
 We cannot share files.
 It is inefficient, because accessing a file may require traversing multiple directories.
 Acyclic graph directory –
An acyclic graph is a graph with no cycle and allows us to share subdirectories and files. The
same file or subdirectories may be in two different directories. It is a natural generalization of
the tree-structured directory.
It is used in situations where, for example, two programmers are working on a joint project and
need to access each other's files. The associated files are stored in a subdirectory, separating them
from other projects and the files of other programmers; since they are working on a joint project,
they want the shared subdirectory to appear in each of their own directories. The common
subdirectories should be shared, and that is where acyclic-graph directories are used.
Note that a shared file is not the same as a copy of the file: if any programmer makes a change in
the shared subdirectory, it is reflected in both directories.

Advantages:
 We can share files.
 Searching is easy because a file can be reached through multiple paths.
Disadvantages:
 Files are shared via links, so deleting a shared file can create problems:
 If the link is a soft link, then after deleting the file we are left with a dangling pointer.
 In the case of a hard link, to delete a file we have to delete all the references associated with
it.

 General graph directory structure –


In the general-graph directory structure, cycles are allowed within the directory structure, and a
directory can be reached from more than one parent directory.
The main problem with this kind of directory structure is calculating the total size or space
that has been taken up by the files and directories.

Advantages:
 It allows cycles.
 It is more flexible than other directories structure.
Disadvantages:
 It is more costly than others.
 It needs garbage collection.

File Allocation Methods


The allocation methods define how the files are stored in the disk blocks. There are three main
disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file
requires n blocks and is given a block b as the starting location, then the blocks assigned to the
file will be: b, b+1, b+2,……b+n-1. This means that given the starting block address and the
length of the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks. Therefore,
it occupies 19, 20, 21, 22, 23, 24 blocks.
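A minimal sketch of the arithmetic behind contiguous allocation: the directory entry stores only the starting block and the length, and everything else follows from those two numbers (the function names are illustrative):

def contiguous_blocks(start, length):
    # A contiguously allocated file occupies blocks start, start+1, ..., start+length-1.
    return list(range(start, start + length))

def kth_block(start, k):
    # Direct access: address of the k-th block (k = 0, 1, 2, ...) of the file.
    return start + k

print(contiguous_blocks(19, 6))  # file 'mail' -> [19, 20, 21, 22, 23, 24]
print(kth_block(19, 3))          # 4th block of 'mail' -> block 22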
Advantages:
 Both sequential and direct access are supported. For direct access, the address
of the kth block of a file which starts at block b can easily be obtained as (b + k).
 This is extremely fast since the number of seeks is minimal because of the contiguous allocation
of file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This makes it inefficient in
terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous memory at
a particular instance.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following image shows how the blocks are randomly distributed. The last block
(25) contains -1, indicating a null pointer; it does not point to any other block.

Advantages:
 This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
 It does not support random or direct access. We can not directly access the blocks of a file. A
block k of a file can be accessed by traversing k blocks sequentially (sequential access ) from
the starting block of the file via block pointers.
 Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains
the disk address of the ith file block. The directory entry contains the address of the index block
as shown in the image:
Advantages:
 This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation would keep
one entire block (the index block) for the pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space of only one pointer per block.
For files that are very large, single index block may not be able to hold all the pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for holding the
pointers. Every index block would then contain a pointer or the address to the next index
block.
2. Multilevel index: In this policy, a first-level index block points to second-level
index blocks, which in turn point to the disk blocks occupied by the file. This can be extended
to 3 or more levels depending on the maximum file size.
3. Combined Scheme: In this scheme, a special block called the Inode (information
Node) contains all the information about the file such as the name, size, authority, etc and the
remaining space of Inode is used to store the Disk Block addresses which contain the actual
file as shown in the image below. The first few of these pointers in Inode point to the direct
blocks i.e the pointers contain the addresses of the disk blocks that contain data of the file.
The next few pointers point to indirect blocks. Indirect blocks may be single indirect, double
indirect or triple indirect. Single Indirect block is the disk block that does not contain the file
data but the disk address of the blocks that contain the file data. Similarly, double indirect
blocks do not contain the file data but the disk address of the blocks that contain the address
of the blocks containing the file data.

Free space management in Operating System


The system keeps track of the free disk blocks for allocating space to files when they are
created. Also, to reuse the space released by deleting files, free space management
becomes crucial. The system maintains a free space list which keeps track of the disk blocks
that are not allocated to any file or directory. The free space list can be implemented mainly
as:
1. Bitmap or Bit vector –
A Bitmap or Bit Vector is series or collection of bits where each bit corresponds to a disk
block. The bit can take two values: 0 and 1: 0 indicates that the block is allocated and 1
indicates a free block.
The given instance of disk blocks on the disk in Figure 1 (where green blocks are
allocated) can be represented by a bitmap of 16 bits as: 0000111000000110.

Advantages –
 Simple to understand.
 Finding the first free block is efficient. It requires scanning the words (a group of 8 bits)
in a bitmap for a non-zero word. (A 0-valued word has all bits 0). The first free block is
then found by scanning for the first 1 bit in the non-zero word.
The block number can be calculated as:
(number of bits per word) × (number of 0-valued words) + (offset of the first 1 bit in the non-zero word).
For Figure-1, we scan the bitmap sequentially for the first non-zero word.
The first group of 8 bits (00001110) constitutes a non-zero word, since not all of its bits are 0. After
the non-zero word is found, we look for the first 1 bit. This is the 5th bit of the non-zero word,
so the offset = 5.
Therefore, the first free block number = 8*0 + 5 = 5.
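The word-by-word scan described above can be sketched as follows, assuming 1 = free, 0 = allocated, 8-bit words, and blocks numbered from 1 so that the result matches the worked example (the function name is illustrative):

def first_free_block(bitmap, bits_per_word=8):
    # Scan the bit vector one word at a time for the first non-zero word,
    # then take the offset of the first 1 bit inside that word.
    for start in range(0, len(bitmap), bits_per_word):
        word = bitmap[start:start + bits_per_word]
        if "1" in word:                      # first non-zero word found
            offset = word.index("1") + 1     # 1-based offset of the first 1 bit
            return start + offset            # = bits_per_word * (zero words skipped) + offset
    return None                              # no free block on the disk

print(first_free_block("0000111000000110"))  # -> 5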
2. Linked List –
In this approach, the free disk blocks are linked together i.e. a free block contains a pointer
to the next free block. The block number of the very first disk block is stored at a separate
location on disk and is also cached in memory.

In Figure-2, the free space list head points to Block 5 which points to Block 6, the next free
block and so on. The last free block would contain a null pointer indicating the end of free
list.
A drawback of this method is the I/O required for free space list traversal.
3. Grouping –
This approach stores the address of the free blocks in the first free block. The first free block
stores the address of some, say n free blocks. Out of these n blocks, the first n-1 blocks are
actually free and the last block contains the address of next free n blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be
found easily.
4. Counting –
This approach stores the address of the first free disk block and a number n of free
contiguous disk blocks that follow the first block.
Every entry in the list would contain:
1. Address of first free disk block
2. A number n
For example, in Figure-1, the first entry of the free space list would be: ([Address of Block
5], 2), because 2 contiguous free blocks follow block 5.

Disk Scheduling Algorithms


Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:

 Multiple I/O requests may arrive by different processes and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the waiting
queue and need to be scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be accessed
in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look
at some of the important terms:

 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the
data is to be read or written. A disk scheduling algorithm that gives a smaller average seek
time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to
rotate into a position where it can be accessed by the read/write head. A disk scheduling
algorithm that gives smaller rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed
of the disk and number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
 Disk Response Time: Response time is the time a request spends waiting to perform its I/O
operation. The average response time is the average over all requests, and the variance of
response time measures how individual requests are serviced with respect to the average
response time. A disk scheduling algorithm that gives a smaller variance of response time is
better.
Disk Scheduling Algorithms

1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue. Let us understand this with the help of an
example.

Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the
read/write head is 50.
So, the total seek time is:
= (82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16)
= 642
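The FCFS total above can be reproduced with a few lines of Python; this is only a sketch of the seek-time calculation, not a full simulator:

def fcfs_seek_time(requests, head):
    # Service the requests strictly in arrival order and add up the head movement.
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

print(fcfs_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # -> 642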

Advantages:

 Every request gets a fair chance


 No indefinite postponement
Disadvantages:

 Does not try to optimize seek time


 May not provide the best possible service

2. SSTF: In SSTF (Shortest Seek Time First), the requests having the shortest seek time are executed
first. So, the seek time of every request is calculated in advance in the queue, and the requests are
scheduled according to their calculated seek time. As a result, the request near the disk arm
will get executed first. SSTF is certainly an improvement over FCFS as it decreases the
average response time and increases the throughput of the system. Let us understand this with the
help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the
read/write head is 50.
So, the total seek time is:
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170)
= 208

Advantages:

 Average Response Time decreases


 Throughput increases
Disadvantages:

 Overhead to calculate seek time in advance


 Can cause Starvation for a request if it has higher seek time as compared to incoming requests
 High variance of response time as SSTF favours only some requests

3. SCAN: In the SCAN algorithm, the disk arm moves in a particular direction and services the
requests coming in its path; after reaching the end of the disk, it reverses its direction and
again services the requests arriving in its path. This algorithm works like an elevator and
hence is also known as the elevator algorithm. As a result, the requests in the mid-range are
serviced more, and those arriving just behind the disk arm have to wait.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The read/write arm is
at 50, and it is given that the disk arm should move towards the larger value.
Therefore, the seek time is calculated as:
= (199-50) + (199-16)
= 332

Advantages:

 High throughput
 Low variance of response time
 Low average response time
Disadvantages:

 Long waiting time for requests for locations just visited by disk arm

4. C-SCAN: In the SCAN algorithm, the disk arm re-scans the path it has just scanned after
reversing its direction. So, it may happen that too many requests are waiting at the other
end, or that there are zero or only a few requests pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its
direction, goes to the other end of the disk and starts servicing the requests from there. So, the
disk arm moves in a circular fashion; this algorithm is otherwise similar to SCAN and
hence is known as C-SCAN (Circular SCAN).

Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The read/write arm is at
50, and it is given that the disk arm should move towards the larger value.
The seek time is calculated as:
= (199-50) + (199-0) + (43-0)
= 391
Advantages:

 Provides more uniform wait time compared to SCAN

5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of
going to the end of the disk, goes only as far as the last request to be serviced in
front of the head and then reverses its direction from there. It thus prevents the extra
delay caused by unnecessary traversal to the end of the disk.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The read/write arm is
at 50, and it is given that the disk arm should move towards the larger value.
So, the seek time is calculated as:
= (190-50) + (190-16)
= 314

6. C-LOOK: As LOOK is similar to the SCAN algorithm, C-LOOK is similarly related to the
C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end, goes
only as far as the last request to be serviced in front of the head and from there goes to the other
end's last request. Thus, it also prevents the extra delay caused by unnecessary
traversal to the end of the disk.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The read/write arm is
at 50, and it is given that the disk arm should move towards the larger value.
So, the seek time is calculated as:
= (190-50) + (190-16) + (43-16)
= 341
7. RSS – It stands for Random Scheduling and, as the name suggests, it services requests in random
order. It is suited to situations where scheduling involves random attributes such as random
processing times, random due dates, random weights, and stochastic machine breakdowns, which
is why it is usually used for analysis and simulation.
8. LIFO – In the LIFO (Last In, First Out) algorithm, the newest jobs are serviced before the existing
ones, i.e. the request that entered last is serviced first, followed by the rest in the same order.
Advantages
 Maximizes locality and resource utilization.
Disadvantages
 It can seem unfair to other requests; if new requests keep coming in, it causes starvation of the
old, existing ones.
9. N-STEP SCAN – It is also known as the N-STEP LOOK algorithm. In this, a buffer is created for
N requests. All requests belonging to a buffer are serviced in one go. Once the buffer
is full, new requests are not added to it but are sent to another buffer. When these N
requests have been serviced, the next N requests are taken up, and in this way every request
gets guaranteed service.
Advantages
 It eliminates starvation of requests completely.
10. FSCAN – This algorithm uses two sub-queues. During the scan, all requests in the first queue
are serviced and the new incoming requests are added to the second queue. All new requests
are kept on halt until the existing requests in the first queue are serviced.
Advantages
 FSCAN along with N-Step-SCAN prevents “arm stickiness” (phenomena in I/O scheduling
where the scheduling algorithm continues to service requests at or near the current sector
and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on the number and type
of requests.
Note: The average rotational latency is generally taken as 1/2 × (rotational latency), i.e. half the time of one full rotation.

Program for SSTF disk scheduling algorithm


Given an array of disk track numbers and an initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if the Shortest Seek Time First
(SSTF) disk scheduling algorithm is used.
Shortest Seek Time First (SSTF) –
The basic idea is that the tracks closer to the current disk head position should be serviced first in
order to minimise the seek operations.
Advantages of Shortest Seek Time First (SSTF) –
1. Better performance than FCFS scheduling algorithm.
2. It provides better throughput.
3. This algorithm is used in Batch Processing system where throughput is more important.
4. It has less average response and waiting time.
Disadvantages of Shortest Seek Time First (SSTF) –
1. Starvation is possible for some requests, as it favours easy-to-reach requests and ignores the far-
away ones.
2. There is lack of predictability because of high variance of response time.
3. Switching direction slows things down.
Algorithm –
1. Let Request array represents an array storing indexes of tracks that have been requested.
‘head’ is the position of disk head.
2. Find the positive distance of all tracks in the request array from head.
3. Find a track from requested array which has not been accessed/serviced yet and has minimum
distance from head.
4. Increment the total seek count with this distance.
5. Currently serviced track position now becomes the new head position.
6. Repeat from step 2 until all tracks in the request array have been serviced.
Example –
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
The following chart shows the sequence in which requested tracks are serviced using SSTF.

Therefore, total seek count is calculated as:


= (50-41)+(41-34)+(34-11)+(60-11)+(79-60)+(92-79)+(114-92)+(176-114)
= 204
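A minimal Python sketch of the SSTF steps listed above (a greedy choice of the nearest pending track); it reproduces the total of 204 for this request sequence:

def sstf_seek_time(requests, head):
    # Repeatedly service the pending track closest to the current head position.
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek_time([176, 79, 34, 60, 92, 11, 41, 114], 50))  # -> 204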
SCAN (Elevator) Disk Scheduling Algorithms
Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if SCAN disk scheduling
algorithm is used.
SCAN (Elevator) algorithm
In SCAN disk scheduling algorithm, head starts from one end of the disk and moves towards the
other end, servicing requests in between one by one and reach the other end. Then the direction
of the head is reversed and the process continues as head continuously scan back and forth to
access the disk. So, this algorithm works as an elevator and hence also known as the elevator
algorithm. As a result, the requests at the midrange are serviced more and those arriving behind
the disk arm will have to wait.
Advantages of SCAN (Elevator) algorithm
1. This algorithm is simple and easy to understand.
2. The SCAN algorithm has no starvation.
3. This algorithm is better than FCFS Scheduling algorithm .
Disadvantages of SCAN (Elevator) algorithm
1. More complex algorithm to implement.
2. This algorithm is not fair because it causes long waiting times for the cylinders just visited by
the head.
3. It causes the head to move to the end of the disk; because of this, requests arriving just ahead of
the arm position get almost immediate service, while requests that arrive just behind the arm
position have to wait for the sweep to complete.
Algorithm-
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. Let direction represents whether the head is moving towards left or right.
3. In the direction in which head is moving service all tracks one by one.
4. Calculate the absolute distance of the track from the head.
5. Increment the total seek count with this distance.
6. Currently serviced track position now becomes the new head position.
7. Go to step 3 until we reach at one of the ends of the disk.
8. If we reach the end of the disk, reverse the direction and go to step 2; repeat until all tracks in the
request array have been serviced.
Example:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = left (We are moving from right to left)

Output:
Total number of seek operations = 226
Seek Sequence is
41
34
11
0
60
79
92
114
176
The following chart shows the sequence in which requested tracks are serviced using SCAN.

Therefore, the total seek count is calculated as:


= (50-41)+(41-34)+(34-11)
+(11-0)+(60-0)+(79-60)
+(92-79)+(114-92)+(176-114)
= 226
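A minimal sketch of the SCAN calculation, assuming tracks are numbered 0 to disk_size-1 (199 here); it reproduces the total of 226 for this example:

def scan_seek_time(requests, head, direction, disk_size=200):
    # Sweep in the initial direction, touch the edge of the disk, then reverse.
    lower = sorted((t for t in requests if t <= head), reverse=True)  # towards track 0
    upper = sorted(t for t in requests if t > head)                   # towards the last track
    first, second, edge = (lower, upper, 0) if direction == "left" else (upper, lower, disk_size - 1)
    total = 0
    for track in first:            # service requests on the way to the edge
        total += abs(track - head)
        head = track
    total += abs(edge - head)      # continue to the end of the disk
    head = edge
    for track in second:           # reverse and service the remaining requests
        total += abs(track - head)
        head = track
    return total

print(scan_seek_time([176, 79, 34, 60, 92, 11, 41, 114], 50, "left"))  # -> 226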
C-SCAN Disk Scheduling Algorithm
Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if a C-SCAN disk scheduling
algorithm is used.
C-SCAN (Circular Elevator) Disk Scheduling Algorithm
The circular SCAN (C-SCAN) scheduling algorithm is a modified version of the SCAN disk
scheduling algorithm that deals with the inefficiency of the SCAN algorithm by servicing the
requests more uniformly. Like SCAN (Elevator Algorithm) C-SCAN moves the head from one
end servicing all the requests to the other end. However, as soon as the head reaches the other
end, it immediately returns to the beginning of the disk without servicing any requests on the
return trip (see chart below) and starts servicing again once reaches the beginning. This is also
known as the “Circular Elevator Algorithm” as it essentially treats the cylinders as a circular list
that wraps around from the final cylinder to the first one.
Advantages of C-SCAN (Circular Elevator) Disk Scheduling Algorithm:
 Works well with moderate to heavy loads.
 It provides better response time and uniform waiting time.
Disadvantages of C-SCAN (Circular Elevator) Disk Scheduling Algorithm:
 May not be fair to service requests for tracks at the extreme end.
 It has more seek movements as compared to the SCAN Algorithm.
Algorithm:
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. The head services only in the right direction from 0 to the size of the disk.
3. While moving in the left direction do not service any of the tracks.
4. When we reach the beginning(left end) reverse the direction.
5. While moving in the right direction it services all tracks one by one.
6. While moving in the right direction calculate the absolute distance of the track from the head.
7. Increment the total seek count with this distance.
8. Currently serviced track position now becomes the new head position.
9. Go to step 6 until we reach the right end of the disk.
10. If we reach the right end of the disk, reverse the direction and go to step 3; repeat until all tracks
in the request array have been serviced.
Examples:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right(We are moving from left to right)

Output:
Initial position of head: 50
Total number of seek operations = 389
Seek Sequence is
60
79
92
114
176
199
0
11
34
41
The following chart shows the sequence in which requested tracks are serviced using C-SCAN.

Therefore, the total seek count is calculated as:

= (60-50)+(79-60)+(92-79)
+(114-92)+(176-114)+(199-176)+(199-0)
+(11-0)+(34-11)+(41-34)
= 389
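A minimal sketch of the C-SCAN calculation, assuming the head sweeps towards higher track numbers on a disk with tracks 0 to 199 and that the return seek back to track 0 is counted; it reproduces the total of 389:

def cscan_seek_time(requests, head, disk_size=200):
    # Sweep upward servicing requests, run to the last track, jump back to track 0,
    # then keep sweeping upward for the remaining requests.
    upper = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    total = 0
    for track in upper:
        total += abs(track - head)
        head = track
    total += (disk_size - 1) - head   # travel to the last track
    total += disk_size - 1            # return seek from the last track to track 0
    head = 0
    for track in lower:
        total += abs(track - head)
        head = track
    return total

print(cscan_seek_time([176, 79, 34, 60, 92, 11, 41, 114], 50))  # -> 389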
LOOK Disk Scheduling Algorithm
Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if LOOK disk scheduling
algorithm is used. Also, write a program to find the seek sequence using LOOK disk scheduling
algorithm.
LOOK Disk Scheduling Algorithm:
LOOK is an advanced version of the SCAN (elevator) disk scheduling algorithm which gives
slightly better seek time than the other algorithms in the hierarchy (FCFS -> SSTF -> SCAN -> C-
SCAN -> LOOK). The LOOK algorithm services requests similarly to the SCAN algorithm, but it
also “looks” ahead to check whether there are more tracks that need to be serviced in the same
direction. If there are no pending requests in the moving direction the head reverses the direction
and starts servicing requests in the opposite direction.
The main reason behind the better performance of LOOK algorithm in comparison to SCAN is
because in this algorithm the head is not allowed to move till the end of the disk.
Algorithm:
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. The initial direction in which head is moving is given and it services in the same direction.
3. The head services all the requests one by one in the direction head is moving.
4. The head continues to move in the same direction until all the request in this direction are
finished.
5. While moving in this direction calculate the absolute distance of the track from the head.
6. Increment the total seek count with this distance.
7. Currently serviced track position now becomes the new head position.
8. Go to step 5 until we reach the last request in this direction.
9. When no requests remain to be serviced in this direction, reverse the direction and go to step 3;
repeat until all tracks in the request array have been serviced.
Examples:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right (We are moving from left to right)

Output:
Initial position of head: 50
Total number of seek operations = 291
Seek Sequence is
60
79
92
114
176
41
34
11
The following chart shows the sequence in which requested tracks are serviced using LOOK.

Therefore, the total seek count is calculated as:
= (60-50) + (79-60) + (92-79) + (114-92) + (176-114)
+ (176-41) + (41-34) + (34-11)
= 291
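A minimal sketch of the LOOK calculation: the head reverses at the last pending request instead of at the disk edge. It reproduces the total of 291:

def look_seek_time(requests, head, direction):
    # Service requests in the initial direction, then reverse at the last pending request.
    lower = sorted((t for t in requests if t <= head), reverse=True)
    upper = sorted(t for t in requests if t > head)
    order = upper + lower if direction == "right" else lower + upper
    total = 0
    for track in order:            # no extra travel to the edge of the disk
        total += abs(track - head)
        head = track
    return total

print(look_seek_time([176, 79, 34, 60, 92, 11, 41, 114], 50, "right"))  # -> 291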

C-LOOK Disk Scheduling Algorithm


Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if C-LOOK disk scheduling
algorithm is used. Also, write a program to find the seek sequence using C-LOOK disk
scheduling algorithm.
C-LOOK (Circular LOOK) Disk Scheduling Algorithm:
C-LOOK is an enhanced version of both the SCAN and LOOK disk scheduling algorithms.
This algorithm also uses the idea of wrapping the tracks around as a circular cylinder, as the C-SCAN
algorithm does, but its seek time is better than that of C-SCAN. We know that C-SCAN is used to
avoid starvation and to service all the requests more uniformly; the same goes for C-LOOK.
In this algorithm, the head services requests only in one direction (either left or right) until all the
requests in that direction have been serviced; it then jumps back to the farthest request in the other
direction and services the remaining requests. This gives more uniform servicing and also
avoids wasting seek time on travelling to the end of the disk.
Algorithm-
1. Let Request array represents an array storing indexes of the tracks that have been requested in
ascending order of their time of arrival and head is the position of the disk head.
2. The initial direction in which the head is moving is given and it services in the same direction.
3. The head services all the requests one by one in the direction it is moving.
4. The head continues to move in the same direction until all the requests in this direction have
been serviced.
5. While moving in this direction, calculate the absolute distance of the tracks from the head.
6. Increment the total seek count with this distance.
7. Currently serviced track position now becomes the new head position.
8. Go to step 5 until we reach the last request in this direction.
9. If we reach the last request in the current direction then reverse the direction and move the
head in this direction until we reach the last request that is needed to be serviced in this
direction without servicing the intermediate requests.
10. Reverse the direction and go to step 3 until all the requests have not been serviced.
Examples:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right (Moving from left to right)
Output:
Initial position of head: 50
Total number of seek operations = 321
Seek Sequence is
60
79
92
114
176
11
34
41

The following chart shows the sequence in which requested tracks are serviced using C-LOOK.
Therefore, the total seek count = (60 – 50) + (79 – 60) + (92 – 79) + (114 – 92) + (176 – 114) +
(176 – 11) + (34 – 11) + (41 – 34) = 321
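A minimal sketch of the C-LOOK calculation, assuming the head sweeps towards higher track numbers and that the jump back to the lowest pending request is counted as a seek (which is how the total of 321 above is obtained):

def clook_seek_time(requests, head):
    # Sweep upward, then jump to the lowest pending request and keep sweeping upward.
    upper = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    total = 0
    for track in upper + lower:    # the wrap-around jump is just one long seek
        total += abs(track - head)
        head = track
    return total

print(clook_seek_time([176, 79, 34, 60, 92, 11, 41, 114], 50))  # -> 321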

N-Step-SCAN disk scheduling


The input/output requests arriving for the disk are scheduled by the operating system; this
scheduling of the disk is known as disk scheduling. Disk scheduling is important because multiple
requests come from processes for the disk, but only one request can be served by the disk at a time.
Seek time is one of the crucial parameters here: requests wait in queues, so seek time increases and
the system becomes slow. An algorithm used for disk scheduling is known as a disk scheduling
algorithm, and its purpose is to reduce the total seek time.
N-Step-SCAN Disk Scheduling :
N-Step-SCAN, also called N-Step-LOOK, is a disk scheduling algorithm. It
determines the motion of the disk arm while servicing read and write requests. It
divides the request queue into sub-queues of length N; by doing this it ensures that the service-
guarantee objective is achieved.
Requests that arrive after a sub-queue has been filled cannot be added to it, since it is already being
serviced by the elevator algorithm; they go into the next sub-queue instead. Therefore starvation is
completely eliminated and service within N requests is guaranteed (see the sketch after the
algorithm steps below).
Algorithm for N-Step-SCAN Disk Scheduling:
1. A buffer is created for N requests.
2. All the requests kept in this buffer are serviced in one particular sweep.
3. During this time new incoming requests cannot be added to this buffer; they are kept in a
separate buffer.
4. Here the I/O (input/output) scheduler comes in: when these top N requests have been serviced,
the I/O scheduler chooses the next N requests, and this process goes on and on.
By doing this, N-Step-SCAN allows better throughput and avoids starvation of requests.
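A minimal sketch of the batching idea behind N-Step-SCAN: the request stream is split into frozen sub-queues of length N and each sub-queue is serviced with one LOOK-style sweep before the next one is started. The request stream and N below are illustrative, and requests that arrive later simply land in later batches:

def n_step_look(requests, head, n):
    # Split the request stream into sub-queues of at most N requests and service
    # each frozen sub-queue with one sweep upward followed by one sweep downward.
    total = 0
    for i in range(0, len(requests), n):
        batch = requests[i:i + n]          # requests arriving later wait in the next batch
        upper = sorted(t for t in batch if t >= head)
        lower = sorted((t for t in batch if t < head), reverse=True)
        for track in upper + lower:
            total += abs(track - head)
            head = track
    return total

print(n_step_look([176, 79, 34, 60, 92, 11, 41, 114], 50, 4))  # total head movement, sub-queues of 4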
FScan disk scheduling algorithm
The Fixed-period SCAN (FSCAN) disk scheduling algorithm mainly focuses on handling the high
variance in shortest seek time first (SSTF). The SCAN algorithm was also proposed to handle this
situation, but SCAN causes long delays while handling requests that are at the extremes of the disk.
The FSCAN algorithm determines how the read/write head of the disk moves in order to handle the
issue of high variance in SSTF.
How it works?
FSCAN makes use of two queues: one queue stores old read/write requests and the other stores
new read/write requests. Only when the old requests have been handled are the new requests
processed. Variations of the FSCAN algorithm can also use N queues, which in turn makes the
response time faster.
How does it handle the issue of “high variance in SSTF”?
FSCAN addresses this issue by “freezing” the queue once a scan starts; requests that
arrive after the scan has started are processed in the next scan.
Performance analysis:
From theoretical analysis, SCAN results in a lower average response time than FSCAN and a
higher average response time than shortest seek time first (SSTF). The FSCAN
algorithm performs well, with high throughput and low average response times. FSCAN also
removes the problem of indefinite postponement.
Example: how requests are processed (figure not reproduced in these notes).